Feb 25 06:13:52 np0005629333 kernel: Linux version 5.14.0-686.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Thu Feb 19 10:49:27 UTC 2026
Feb 25 06:13:52 np0005629333 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Feb 25 06:13:52 np0005629333 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-686.el9.x86_64 root=UUID=37391a25-080d-4723-8b0c-cb88a559875b ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 25 06:13:52 np0005629333 kernel: BIOS-provided physical RAM map:
Feb 25 06:13:52 np0005629333 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 25 06:13:52 np0005629333 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 25 06:13:52 np0005629333 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 25 06:13:52 np0005629333 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Feb 25 06:13:52 np0005629333 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Feb 25 06:13:52 np0005629333 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 25 06:13:52 np0005629333 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 25 06:13:52 np0005629333 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
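
The three "usable" e820 ranges above account for the guest's RAM; the gap from 0xc0000000 up to 4 GiB is the PCI/firmware hole, so the remainder is remapped above 4 GiB. A quick Python check, with the ranges hand-copied from the log (end addresses are inclusive):

    # Usable ranges from the BIOS-e820 lines above (inclusive end addresses).
    usable = [
        (0x0000000000000000, 0x000000000009fbff),
        (0x0000000000100000, 0x00000000bffdafff),
        (0x0000000100000000, 0x000000023fffffff),
    ]
    total = sum(end - start + 1 for start, end in usable)
    print(total, total // 1024)  # 8589388800 bytes, 8388075 KiB

This lands within a few KiB of the 8388068K total that the "Memory:" line reports later in the boot.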
Feb 25 06:13:52 np0005629333 kernel: NX (Execute Disable) protection: active
Feb 25 06:13:52 np0005629333 kernel: APIC: Static calls initialized
Feb 25 06:13:52 np0005629333 kernel: SMBIOS 2.8 present.
Feb 25 06:13:52 np0005629333 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Feb 25 06:13:52 np0005629333 kernel: Hypervisor detected: KVM
Feb 25 06:13:52 np0005629333 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 25 06:13:52 np0005629333 kernel: kvm-clock: using sched offset of 8201244639 cycles
Feb 25 06:13:52 np0005629333 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 25 06:13:52 np0005629333 kernel: tsc: Detected 2800.000 MHz processor
Feb 25 06:13:52 np0005629333 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Feb 25 06:13:52 np0005629333 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 25 06:13:52 np0005629333 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Feb 25 06:13:52 np0005629333 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Feb 25 06:13:52 np0005629333 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Feb 25 06:13:52 np0005629333 kernel: Using GB pages for direct mapping
Feb 25 06:13:52 np0005629333 kernel: RAMDISK: [mem 0x1b6ca000-0x29b5cfff]
Feb 25 06:13:52 np0005629333 kernel: ACPI: Early table checksum verification disabled
Feb 25 06:13:52 np0005629333 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Feb 25 06:13:52 np0005629333 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 25 06:13:52 np0005629333 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 25 06:13:52 np0005629333 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 25 06:13:52 np0005629333 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Feb 25 06:13:52 np0005629333 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 25 06:13:52 np0005629333 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 25 06:13:52 np0005629333 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Feb 25 06:13:52 np0005629333 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Feb 25 06:13:52 np0005629333 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Feb 25 06:13:52 np0005629333 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Feb 25 06:13:52 np0005629333 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Feb 25 06:13:52 np0005629333 kernel: No NUMA configuration found
Feb 25 06:13:52 np0005629333 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Feb 25 06:13:52 np0005629333 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Feb 25 06:13:52 np0005629333 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
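
The reservation size follows from the crashkernel= parameter on the command line: 1G-2G:192M,2G-64G:256M,64G-:512M selects a size by total RAM, and this ~8 GiB guest falls in the 2G-64G band, hence the 256 MB reserved above. A hypothetical parser sketch in Python (parse_size and crashkernel_reservation are illustrative names, not kernel code; the kernel matches start <= RAM < end):

    def parse_size(s):
        """Parse '192M' or '2G' into bytes; '' means an open-ended range."""
        if not s:
            return float("inf")
        return int(s[:-1]) * {"M": 1 << 20, "G": 1 << 30}[s[-1]]

    def crashkernel_reservation(spec, ram_bytes):
        for entry in spec.split(","):
            rng, size = entry.split(":")
            lo, hi = (parse_size(p) for p in rng.split("-"))
            if lo <= ram_bytes < hi:
                return parse_size(size)
        return 0

    spec = "1G-2G:192M,2G-64G:256M,64G-:512M"
    print(crashkernel_reservation(spec, 8 << 30) >> 20)  # 256 (MiB)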
Feb 25 06:13:52 np0005629333 kernel: Zone ranges:
Feb 25 06:13:52 np0005629333 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb 25 06:13:52 np0005629333 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Feb 25 06:13:52 np0005629333 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Feb 25 06:13:52 np0005629333 kernel:  Device   empty
Feb 25 06:13:52 np0005629333 kernel: Movable zone start for each node
Feb 25 06:13:52 np0005629333 kernel: Early memory node ranges
Feb 25 06:13:52 np0005629333 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Feb 25 06:13:52 np0005629333 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Feb 25 06:13:52 np0005629333 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Feb 25 06:13:52 np0005629333 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Feb 25 06:13:52 np0005629333 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 25 06:13:52 np0005629333 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 25 06:13:52 np0005629333 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Feb 25 06:13:52 np0005629333 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 25 06:13:52 np0005629333 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 25 06:13:52 np0005629333 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 25 06:13:52 np0005629333 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 25 06:13:52 np0005629333 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 25 06:13:52 np0005629333 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 25 06:13:52 np0005629333 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 25 06:13:52 np0005629333 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 25 06:13:52 np0005629333 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 25 06:13:52 np0005629333 kernel: TSC deadline timer available
Feb 25 06:13:52 np0005629333 kernel: CPU topo: Max. logical packages:   8
Feb 25 06:13:52 np0005629333 kernel: CPU topo: Max. logical dies:       8
Feb 25 06:13:52 np0005629333 kernel: CPU topo: Max. dies per package:   1
Feb 25 06:13:52 np0005629333 kernel: CPU topo: Max. threads per core:   1
Feb 25 06:13:52 np0005629333 kernel: CPU topo: Num. cores per package:     1
Feb 25 06:13:52 np0005629333 kernel: CPU topo: Num. threads per package:   1
Feb 25 06:13:52 np0005629333 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Feb 25 06:13:52 np0005629333 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 25 06:13:52 np0005629333 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Feb 25 06:13:52 np0005629333 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Feb 25 06:13:52 np0005629333 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Feb 25 06:13:52 np0005629333 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Feb 25 06:13:52 np0005629333 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Feb 25 06:13:52 np0005629333 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Feb 25 06:13:52 np0005629333 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Feb 25 06:13:52 np0005629333 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Feb 25 06:13:52 np0005629333 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Feb 25 06:13:52 np0005629333 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Feb 25 06:13:52 np0005629333 kernel: Booting paravirtualized kernel on KVM
Feb 25 06:13:52 np0005629333 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 25 06:13:52 np0005629333 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Feb 25 06:13:52 np0005629333 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Feb 25 06:13:52 np0005629333 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 25 06:13:52 np0005629333 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-686.el9.x86_64 root=UUID=37391a25-080d-4723-8b0c-cb88a559875b ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 25 06:13:52 np0005629333 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-686.el9.x86_64", will be passed to user space.
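
Parameters the kernel does not recognize (here the GRUB-added BOOT_IMAGE=) are passed through to PID 1 rather than rejected. A toy split of the logged command line into key/value pairs, purely for illustration:

    cmdline = ("BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-686.el9.x86_64 "
               "root=UUID=37391a25-080d-4723-8b0c-cb88a559875b ro "
               "console=ttyS0,115200n8 no_timer_check net.ifnames=0 "
               "crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M")
    params = dict(p.split("=", 1) if "=" in p else (p, "") for p in cmdline.split())
    print(params["root"], params["console"])  # UUID=37391a25-... ttyS0,115200n8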
Feb 25 06:13:52 np0005629333 kernel: random: crng init done
Feb 25 06:13:52 np0005629333 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 25 06:13:52 np0005629333 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
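
The "(order: N, M bytes)" annotations on these hash tables are buddy-allocator orders: an order-N allocation is 2^N contiguous 4 KiB pages, so M = 4096 << N. Checking the two lines above:

    PAGE = 4096
    assert PAGE << 11 == 8388608   # Dentry cache: order 11
    assert PAGE << 10 == 4194304   # Inode cache: order 10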
Feb 25 06:13:52 np0005629333 kernel: Fallback order for Node 0: 0 
Feb 25 06:13:52 np0005629333 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Feb 25 06:13:52 np0005629333 kernel: Policy zone: Normal
Feb 25 06:13:52 np0005629333 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 25 06:13:52 np0005629333 kernel: software IO TLB: area num 8.
Feb 25 06:13:52 np0005629333 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Feb 25 06:13:52 np0005629333 kernel: ftrace: allocating 49605 entries in 194 pages
Feb 25 06:13:52 np0005629333 kernel: ftrace: allocated 194 pages with 3 groups
Feb 25 06:13:52 np0005629333 kernel: Dynamic Preempt: voluntary
Feb 25 06:13:52 np0005629333 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 25 06:13:52 np0005629333 kernel: rcu: 	RCU event tracing is enabled.
Feb 25 06:13:52 np0005629333 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Feb 25 06:13:52 np0005629333 kernel: 	Trampoline variant of Tasks RCU enabled.
Feb 25 06:13:52 np0005629333 kernel: 	Rude variant of Tasks RCU enabled.
Feb 25 06:13:52 np0005629333 kernel: 	Tracing variant of Tasks RCU enabled.
Feb 25 06:13:52 np0005629333 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 25 06:13:52 np0005629333 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Feb 25 06:13:52 np0005629333 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 25 06:13:52 np0005629333 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 25 06:13:52 np0005629333 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Feb 25 06:13:52 np0005629333 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Feb 25 06:13:52 np0005629333 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 25 06:13:52 np0005629333 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Feb 25 06:13:52 np0005629333 kernel: Console: colour VGA+ 80x25
Feb 25 06:13:52 np0005629333 kernel: printk: console [ttyS0] enabled
Feb 25 06:13:52 np0005629333 kernel: ACPI: Core revision 20230331
Feb 25 06:13:52 np0005629333 kernel: APIC: Switch to symmetric I/O mode setup
Feb 25 06:13:52 np0005629333 kernel: x2apic enabled
Feb 25 06:13:52 np0005629333 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 25 06:13:52 np0005629333 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 25 06:13:52 np0005629333 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Feb 25 06:13:52 np0005629333 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 25 06:13:52 np0005629333 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 25 06:13:52 np0005629333 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 25 06:13:52 np0005629333 kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Feb 25 06:13:52 np0005629333 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 25 06:13:52 np0005629333 kernel: Spectre V2 : Mitigation: Retpolines
Feb 25 06:13:52 np0005629333 kernel: RETBleed: Mitigation: untrained return thunk
Feb 25 06:13:52 np0005629333 kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Feb 25 06:13:52 np0005629333 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 25 06:13:52 np0005629333 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Feb 25 06:13:52 np0005629333 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 25 06:13:52 np0005629333 kernel: active return thunk: retbleed_return_thunk
Feb 25 06:13:52 np0005629333 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 25 06:13:52 np0005629333 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 25 06:13:52 np0005629333 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 25 06:13:52 np0005629333 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 25 06:13:52 np0005629333 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Feb 25 06:13:52 np0005629333 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
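
The 832-byte compacted context is consistent with the logged offsets: the legacy x87/SSE save area (512 bytes) plus the 64-byte XSAVE header put the AVX state at offset 576, and its 256-byte size ends the buffer at 832. In Python:

    legacy, header = 512, 64
    avx_off, avx_size = 576, 256        # xstate_offset[2] / xstate_sizes[2] above
    assert legacy + header == avx_off
    assert avx_off + avx_size == 832    # "context size is 832 bytes"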
Feb 25 06:13:52 np0005629333 kernel: Freeing SMP alternatives memory: 40K
Feb 25 06:13:52 np0005629333 kernel: pid_max: default: 32768 minimum: 301
Feb 25 06:13:52 np0005629333 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Feb 25 06:13:52 np0005629333 kernel: landlock: Up and running.
Feb 25 06:13:52 np0005629333 kernel: Yama: becoming mindful.
Feb 25 06:13:52 np0005629333 kernel: SELinux:  Initializing.
Feb 25 06:13:52 np0005629333 kernel: LSM support for eBPF active
Feb 25 06:13:52 np0005629333 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 25 06:13:52 np0005629333 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 25 06:13:52 np0005629333 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 25 06:13:52 np0005629333 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 25 06:13:52 np0005629333 kernel: ... version:                0
Feb 25 06:13:52 np0005629333 kernel: ... bit width:              48
Feb 25 06:13:52 np0005629333 kernel: ... generic registers:      6
Feb 25 06:13:52 np0005629333 kernel: ... value mask:             0000ffffffffffff
Feb 25 06:13:52 np0005629333 kernel: ... max period:             00007fffffffffff
Feb 25 06:13:52 np0005629333 kernel: ... fixed-purpose events:   0
Feb 25 06:13:52 np0005629333 kernel: ... event mask:             000000000000003f
Feb 25 06:13:52 np0005629333 kernel: signal: max sigframe size: 1776
Feb 25 06:13:52 np0005629333 kernel: rcu: Hierarchical SRCU implementation.
Feb 25 06:13:52 np0005629333 kernel: rcu: 	Max phase no-delay instances is 400.
Feb 25 06:13:52 np0005629333 kernel: smp: Bringing up secondary CPUs ...
Feb 25 06:13:52 np0005629333 kernel: smpboot: x86: Booting SMP configuration:
Feb 25 06:13:52 np0005629333 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Feb 25 06:13:52 np0005629333 kernel: smp: Brought up 1 node, 8 CPUs
Feb 25 06:13:52 np0005629333 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
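
The BogoMIPS totals are arithmetic on the preset value: under KVM the calibration loop is skipped, and BogoMIPS = lpj * HZ / 500000 (assuming the usual HZ=1000 for this kernel configuration) gives 5600.00 per CPU from the logged lpj=2800000, so 8 CPUs total 44800.00. As a check:

    lpj, HZ, cpus = 2_800_000, 1000, 8
    bogomips = lpj * HZ / 500_000
    print(bogomips, cpus * bogomips)  # 5600.0 44800.0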
Feb 25 06:13:52 np0005629333 kernel: node 0 deferred pages initialised in 9ms
Feb 25 06:13:52 np0005629333 kernel: Memory: 7617604K/8388068K available (16384K kernel code, 5797K rwdata, 13956K rodata, 4204K init, 7172K bss, 764464K reserved, 0K cma-reserved)
Feb 25 06:13:52 np0005629333 kernel: devtmpfs: initialized
Feb 25 06:13:52 np0005629333 kernel: x86/mm: Memory block size: 128MB
Feb 25 06:13:52 np0005629333 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 25 06:13:52 np0005629333 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Feb 25 06:13:52 np0005629333 kernel: pinctrl core: initialized pinctrl subsystem
Feb 25 06:13:52 np0005629333 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 25 06:13:52 np0005629333 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Feb 25 06:13:52 np0005629333 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 25 06:13:52 np0005629333 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 25 06:13:52 np0005629333 kernel: audit: initializing netlink subsys (disabled)
Feb 25 06:13:52 np0005629333 kernel: audit: type=2000 audit(1772018031.184:1): state=initialized audit_enabled=0 res=1
Feb 25 06:13:52 np0005629333 kernel: thermal_sys: Registered thermal governor 'fair_share'
Feb 25 06:13:52 np0005629333 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 25 06:13:52 np0005629333 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 25 06:13:52 np0005629333 kernel: cpuidle: using governor menu
Feb 25 06:13:52 np0005629333 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 25 06:13:52 np0005629333 kernel: PCI: Using configuration type 1 for base access
Feb 25 06:13:52 np0005629333 kernel: PCI: Using configuration type 1 for extended access
Feb 25 06:13:52 np0005629333 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 25 06:13:52 np0005629333 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 25 06:13:52 np0005629333 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 25 06:13:52 np0005629333 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 25 06:13:52 np0005629333 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 25 06:13:52 np0005629333 kernel: Demotion targets for Node 0: null
Feb 25 06:13:52 np0005629333 kernel: cryptd: max_cpu_qlen set to 1000
Feb 25 06:13:52 np0005629333 kernel: ACPI: Added _OSI(Module Device)
Feb 25 06:13:52 np0005629333 kernel: ACPI: Added _OSI(Processor Device)
Feb 25 06:13:52 np0005629333 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 25 06:13:52 np0005629333 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 25 06:13:52 np0005629333 kernel: ACPI: Interpreter enabled
Feb 25 06:13:52 np0005629333 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Feb 25 06:13:52 np0005629333 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 25 06:13:52 np0005629333 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 25 06:13:52 np0005629333 kernel: PCI: Using E820 reservations for host bridge windows
Feb 25 06:13:52 np0005629333 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 25 06:13:52 np0005629333 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 25 06:13:52 np0005629333 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Feb 25 06:13:52 np0005629333 kernel: acpiphp: Slot [3] registered
Feb 25 06:13:52 np0005629333 kernel: acpiphp: Slot [4] registered
Feb 25 06:13:52 np0005629333 kernel: acpiphp: Slot [5] registered
Feb 25 06:13:52 np0005629333 kernel: acpiphp: Slot [6] registered
Feb 25 06:13:52 np0005629333 kernel: acpiphp: Slot [7] registered
Feb 25 06:13:52 np0005629333 kernel: acpiphp: Slot [8] registered
Feb 25 06:13:52 np0005629333 kernel: acpiphp: Slot [9] registered
Feb 25 06:13:52 np0005629333 kernel: acpiphp: Slot [10] registered
Feb 25 06:13:52 np0005629333 kernel: acpiphp: Slot [11] registered
Feb 25 06:13:52 np0005629333 kernel: acpiphp: Slot [12] registered
Feb 25 06:13:52 np0005629333 kernel: acpiphp: Slot [13] registered
Feb 25 06:13:52 np0005629333 kernel: acpiphp: Slot [14] registered
Feb 25 06:13:52 np0005629333 kernel: acpiphp: Slot [15] registered
Feb 25 06:13:52 np0005629333 kernel: acpiphp: Slot [16] registered
Feb 25 06:13:52 np0005629333 kernel: acpiphp: Slot [17] registered
Feb 25 06:13:52 np0005629333 kernel: acpiphp: Slot [18] registered
Feb 25 06:13:52 np0005629333 kernel: acpiphp: Slot [19] registered
Feb 25 06:13:52 np0005629333 kernel: acpiphp: Slot [20] registered
Feb 25 06:13:52 np0005629333 kernel: acpiphp: Slot [21] registered
Feb 25 06:13:52 np0005629333 kernel: acpiphp: Slot [22] registered
Feb 25 06:13:52 np0005629333 kernel: acpiphp: Slot [23] registered
Feb 25 06:13:52 np0005629333 kernel: acpiphp: Slot [24] registered
Feb 25 06:13:52 np0005629333 kernel: acpiphp: Slot [25] registered
Feb 25 06:13:52 np0005629333 kernel: acpiphp: Slot [26] registered
Feb 25 06:13:52 np0005629333 kernel: acpiphp: Slot [27] registered
Feb 25 06:13:52 np0005629333 kernel: acpiphp: Slot [28] registered
Feb 25 06:13:52 np0005629333 kernel: acpiphp: Slot [29] registered
Feb 25 06:13:52 np0005629333 kernel: acpiphp: Slot [30] registered
Feb 25 06:13:52 np0005629333 kernel: acpiphp: Slot [31] registered
Feb 25 06:13:52 np0005629333 kernel: PCI host bridge to bus 0000:00
Feb 25 06:13:52 np0005629333 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Feb 25 06:13:52 np0005629333 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Feb 25 06:13:52 np0005629333 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 25 06:13:52 np0005629333 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 25 06:13:52 np0005629333 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Feb 25 06:13:52 np0005629333 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
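
Vendor 1af4 is Red Hat's virtio ID space, so the device IDs above identify the paravirtual device types behind this guest's hardware. A lookup table for the IDs seen in this log (the mapping follows the virtio spec; the dict itself is only an annotation aid):

    VIRTIO_VENDOR = 0x1af4  # Red Hat, Inc.
    virtio_ids = {
        0x1050: "virtio-gpu (modern)",      # 00:02.0, the boot VGA device
        0x1000: "virtio-net (legacy)",      # 00:03.0
        0x1001: "virtio-blk (legacy)",      # 00:04.0, surfaces later as /dev/vda
        0x1002: "virtio-balloon (legacy)",  # 00:05.0
        0x1005: "virtio-rng (legacy)",      # 00:06.0
    }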
Feb 25 06:13:52 np0005629333 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 25 06:13:52 np0005629333 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 25 06:13:52 np0005629333 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 25 06:13:52 np0005629333 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 25 06:13:52 np0005629333 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 25 06:13:52 np0005629333 kernel: iommu: Default domain type: Translated
Feb 25 06:13:52 np0005629333 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 25 06:13:52 np0005629333 kernel: SCSI subsystem initialized
Feb 25 06:13:52 np0005629333 kernel: ACPI: bus type USB registered
Feb 25 06:13:52 np0005629333 kernel: usbcore: registered new interface driver usbfs
Feb 25 06:13:52 np0005629333 kernel: usbcore: registered new interface driver hub
Feb 25 06:13:52 np0005629333 kernel: usbcore: registered new device driver usb
Feb 25 06:13:52 np0005629333 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 25 06:13:52 np0005629333 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 25 06:13:52 np0005629333 kernel: PTP clock support registered
Feb 25 06:13:52 np0005629333 kernel: EDAC MC: Ver: 3.0.0
Feb 25 06:13:52 np0005629333 kernel: NetLabel: Initializing
Feb 25 06:13:52 np0005629333 kernel: NetLabel:  domain hash size = 128
Feb 25 06:13:52 np0005629333 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Feb 25 06:13:52 np0005629333 kernel: NetLabel:  unlabeled traffic allowed by default
Feb 25 06:13:52 np0005629333 kernel: PCI: Using ACPI for IRQ routing
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 25 06:13:52 np0005629333 kernel: vgaarb: loaded
Feb 25 06:13:52 np0005629333 kernel: clocksource: Switched to clocksource kvm-clock
Feb 25 06:13:52 np0005629333 kernel: VFS: Disk quotas dquot_6.6.0
Feb 25 06:13:52 np0005629333 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 25 06:13:52 np0005629333 kernel: pnp: PnP ACPI init
Feb 25 06:13:52 np0005629333 kernel: pnp: PnP ACPI: found 5 devices
Feb 25 06:13:52 np0005629333 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 25 06:13:52 np0005629333 kernel: NET: Registered PF_INET protocol family
Feb 25 06:13:52 np0005629333 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 25 06:13:52 np0005629333 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 25 06:13:52 np0005629333 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 25 06:13:52 np0005629333 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 25 06:13:52 np0005629333 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 25 06:13:52 np0005629333 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 25 06:13:52 np0005629333 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Feb 25 06:13:52 np0005629333 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 25 06:13:52 np0005629333 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 25 06:13:52 np0005629333 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 25 06:13:52 np0005629333 kernel: NET: Registered PF_XDP protocol family
Feb 25 06:13:52 np0005629333 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Feb 25 06:13:52 np0005629333 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Feb 25 06:13:52 np0005629333 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 25 06:13:52 np0005629333 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Feb 25 06:13:52 np0005629333 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 25 06:13:52 np0005629333 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 25 06:13:52 np0005629333 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 27141 usecs
Feb 25 06:13:52 np0005629333 kernel: PCI: CLS 0 bytes, default 64
Feb 25 06:13:52 np0005629333 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 25 06:13:52 np0005629333 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
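
The bounce-buffer window checks out against earlier lines: 0xaf000000 - 0xab000000 = 0x04000000 bytes, the 64MB reported, placed directly below the crashkernel region that was reserved starting at 0xaf000000 above.

    print((0xaf000000 - 0xab000000) >> 20)  # 64 (MiB)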
Feb 25 06:13:52 np0005629333 kernel: Trying to unpack rootfs image as initramfs...
Feb 25 06:13:52 np0005629333 kernel: ACPI: bus type thunderbolt registered
Feb 25 06:13:52 np0005629333 kernel: Initialise system trusted keyrings
Feb 25 06:13:52 np0005629333 kernel: Key type blacklist registered
Feb 25 06:13:52 np0005629333 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Feb 25 06:13:52 np0005629333 kernel: zbud: loaded
Feb 25 06:13:52 np0005629333 kernel: integrity: Platform Keyring initialized
Feb 25 06:13:52 np0005629333 kernel: integrity: Machine keyring initialized
Feb 25 06:13:52 np0005629333 kernel: Freeing initrd memory: 234060K
Feb 25 06:13:52 np0005629333 kernel: NET: Registered PF_ALG protocol family
Feb 25 06:13:52 np0005629333 kernel: xor: automatically using best checksumming function   avx       
Feb 25 06:13:52 np0005629333 kernel: Key type asymmetric registered
Feb 25 06:13:52 np0005629333 kernel: Asymmetric key parser 'x509' registered
Feb 25 06:13:52 np0005629333 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Feb 25 06:13:52 np0005629333 kernel: io scheduler mq-deadline registered
Feb 25 06:13:52 np0005629333 kernel: io scheduler kyber registered
Feb 25 06:13:52 np0005629333 kernel: io scheduler bfq registered
Feb 25 06:13:52 np0005629333 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Feb 25 06:13:52 np0005629333 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Feb 25 06:13:52 np0005629333 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Feb 25 06:13:52 np0005629333 kernel: ACPI: button: Power Button [PWRF]
Feb 25 06:13:52 np0005629333 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 25 06:13:52 np0005629333 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 25 06:13:52 np0005629333 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 25 06:13:52 np0005629333 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 25 06:13:52 np0005629333 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 25 06:13:52 np0005629333 kernel: Non-volatile memory driver v1.3
Feb 25 06:13:52 np0005629333 kernel: rdac: device handler registered
Feb 25 06:13:52 np0005629333 kernel: hp_sw: device handler registered
Feb 25 06:13:52 np0005629333 kernel: emc: device handler registered
Feb 25 06:13:52 np0005629333 kernel: alua: device handler registered
Feb 25 06:13:52 np0005629333 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Feb 25 06:13:52 np0005629333 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Feb 25 06:13:52 np0005629333 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Feb 25 06:13:52 np0005629333 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Feb 25 06:13:52 np0005629333 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Feb 25 06:13:52 np0005629333 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Feb 25 06:13:52 np0005629333 kernel: usb usb1: Product: UHCI Host Controller
Feb 25 06:13:52 np0005629333 kernel: usb usb1: Manufacturer: Linux 5.14.0-686.el9.x86_64 uhci_hcd
Feb 25 06:13:52 np0005629333 kernel: usb usb1: SerialNumber: 0000:00:01.2
Feb 25 06:13:52 np0005629333 kernel: hub 1-0:1.0: USB hub found
Feb 25 06:13:52 np0005629333 kernel: hub 1-0:1.0: 2 ports detected
Feb 25 06:13:52 np0005629333 kernel: usbcore: registered new interface driver usbserial_generic
Feb 25 06:13:52 np0005629333 kernel: usbserial: USB Serial support registered for generic
Feb 25 06:13:52 np0005629333 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 25 06:13:52 np0005629333 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 25 06:13:52 np0005629333 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 25 06:13:52 np0005629333 kernel: mousedev: PS/2 mouse device common for all mice
Feb 25 06:13:52 np0005629333 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 25 06:13:52 np0005629333 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Feb 25 06:13:52 np0005629333 kernel: rtc_cmos 00:04: registered as rtc0
Feb 25 06:13:52 np0005629333 kernel: rtc_cmos 00:04: setting system clock to 2026-02-25T11:13:51 UTC (1772018031)
Feb 25 06:13:52 np0005629333 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
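
The RTC line pins down this journal's timezone: epoch 1772018031 is 2026-02-25T11:13:51 UTC, while the log prefixes read 06:13, so the host is stamping messages in a UTC-5 local zone. The conversion, checkable in Python:

    from datetime import datetime, timezone
    print(datetime.fromtimestamp(1772018031, tz=timezone.utc).isoformat())
    # 2026-02-25T11:13:51+00:00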
Feb 25 06:13:52 np0005629333 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 25 06:13:52 np0005629333 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 25 06:13:52 np0005629333 kernel: usbcore: registered new interface driver usbhid
Feb 25 06:13:52 np0005629333 kernel: usbhid: USB HID core driver
Feb 25 06:13:52 np0005629333 kernel: drop_monitor: Initializing network drop monitor service
Feb 25 06:13:52 np0005629333 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Feb 25 06:13:52 np0005629333 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Feb 25 06:13:52 np0005629333 kernel: Initializing XFRM netlink socket
Feb 25 06:13:52 np0005629333 kernel: NET: Registered PF_INET6 protocol family
Feb 25 06:13:52 np0005629333 kernel: Segment Routing with IPv6
Feb 25 06:13:52 np0005629333 kernel: NET: Registered PF_PACKET protocol family
Feb 25 06:13:52 np0005629333 kernel: mpls_gso: MPLS GSO support
Feb 25 06:13:52 np0005629333 kernel: IPI shorthand broadcast: enabled
Feb 25 06:13:52 np0005629333 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 25 06:13:52 np0005629333 kernel: AES CTR mode by8 optimization enabled
Feb 25 06:13:52 np0005629333 kernel: sched_clock: Marking stable (992002360, 139645520)->(1251863250, -120215370)
Feb 25 06:13:52 np0005629333 kernel: registered taskstats version 1
Feb 25 06:13:52 np0005629333 kernel: Loading compiled-in X.509 certificates
Feb 25 06:13:52 np0005629333 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: d9d4cefd3ca2c4957ef0b2e7c6e39a7e4ae16390'
Feb 25 06:13:52 np0005629333 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Feb 25 06:13:52 np0005629333 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Feb 25 06:13:52 np0005629333 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Feb 25 06:13:52 np0005629333 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Feb 25 06:13:52 np0005629333 kernel: Demotion targets for Node 0: null
Feb 25 06:13:52 np0005629333 kernel: page_owner is disabled
Feb 25 06:13:52 np0005629333 kernel: Key type .fscrypt registered
Feb 25 06:13:52 np0005629333 kernel: Key type fscrypt-provisioning registered
Feb 25 06:13:52 np0005629333 kernel: Key type big_key registered
Feb 25 06:13:52 np0005629333 kernel: Key type encrypted registered
Feb 25 06:13:52 np0005629333 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 25 06:13:52 np0005629333 kernel: Loading compiled-in module X.509 certificates
Feb 25 06:13:52 np0005629333 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: d9d4cefd3ca2c4957ef0b2e7c6e39a7e4ae16390'
Feb 25 06:13:52 np0005629333 kernel: ima: Allocated hash algorithm: sha256
Feb 25 06:13:52 np0005629333 kernel: ima: No architecture policies found
Feb 25 06:13:52 np0005629333 kernel: evm: Initialising EVM extended attributes:
Feb 25 06:13:52 np0005629333 kernel: evm: security.selinux
Feb 25 06:13:52 np0005629333 kernel: evm: security.SMACK64 (disabled)
Feb 25 06:13:52 np0005629333 kernel: evm: security.SMACK64EXEC (disabled)
Feb 25 06:13:52 np0005629333 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Feb 25 06:13:52 np0005629333 kernel: evm: security.SMACK64MMAP (disabled)
Feb 25 06:13:52 np0005629333 kernel: evm: security.apparmor (disabled)
Feb 25 06:13:52 np0005629333 kernel: evm: security.ima
Feb 25 06:13:52 np0005629333 kernel: evm: security.capability
Feb 25 06:13:52 np0005629333 kernel: evm: HMAC attrs: 0x1
Feb 25 06:13:52 np0005629333 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Feb 25 06:13:52 np0005629333 kernel: Running certificate verification RSA selftest
Feb 25 06:13:52 np0005629333 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Feb 25 06:13:52 np0005629333 kernel: Running certificate verification ECDSA selftest
Feb 25 06:13:52 np0005629333 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Feb 25 06:13:52 np0005629333 kernel: clk: Disabling unused clocks
Feb 25 06:13:52 np0005629333 kernel: Freeing unused decrypted memory: 2028K
Feb 25 06:13:52 np0005629333 kernel: Freeing unused kernel image (initmem) memory: 4204K
Feb 25 06:13:52 np0005629333 kernel: Write protecting the kernel read-only data: 30720k
Feb 25 06:13:52 np0005629333 kernel: Freeing unused kernel image (rodata/data gap) memory: 380K
Feb 25 06:13:52 np0005629333 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Feb 25 06:13:52 np0005629333 kernel: Run /init as init process
Feb 25 06:13:52 np0005629333 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 25 06:13:52 np0005629333 systemd: Detected virtualization kvm.
Feb 25 06:13:52 np0005629333 systemd: Detected architecture x86-64.
Feb 25 06:13:52 np0005629333 systemd: Running in initrd.
Feb 25 06:13:52 np0005629333 systemd: No hostname configured, using default hostname.
Feb 25 06:13:52 np0005629333 systemd: Hostname set to <localhost>.
Feb 25 06:13:52 np0005629333 systemd: Initializing machine ID from VM UUID.
Feb 25 06:13:52 np0005629333 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Feb 25 06:13:52 np0005629333 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Feb 25 06:13:52 np0005629333 kernel: usb 1-1: Product: QEMU USB Tablet
Feb 25 06:13:52 np0005629333 kernel: usb 1-1: Manufacturer: QEMU
Feb 25 06:13:52 np0005629333 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Feb 25 06:13:52 np0005629333 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Feb 25 06:13:52 np0005629333 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Feb 25 06:13:52 np0005629333 systemd: Queued start job for default target Initrd Default Target.
Feb 25 06:13:52 np0005629333 systemd: Started Dispatch Password Requests to Console Directory Watch.
Feb 25 06:13:52 np0005629333 systemd: Reached target Local Encrypted Volumes.
Feb 25 06:13:52 np0005629333 systemd: Reached target Initrd /usr File System.
Feb 25 06:13:52 np0005629333 systemd: Reached target Local File Systems.
Feb 25 06:13:52 np0005629333 systemd: Reached target Path Units.
Feb 25 06:13:52 np0005629333 systemd: Reached target Slice Units.
Feb 25 06:13:52 np0005629333 systemd: Reached target Swaps.
Feb 25 06:13:52 np0005629333 systemd: Reached target Timer Units.
Feb 25 06:13:52 np0005629333 systemd: Listening on D-Bus System Message Bus Socket.
Feb 25 06:13:52 np0005629333 systemd: Listening on Journal Socket (/dev/log).
Feb 25 06:13:52 np0005629333 systemd: Listening on Journal Socket.
Feb 25 06:13:52 np0005629333 systemd: Listening on udev Control Socket.
Feb 25 06:13:52 np0005629333 systemd: Listening on udev Kernel Socket.
Feb 25 06:13:52 np0005629333 systemd: Reached target Socket Units.
Feb 25 06:13:52 np0005629333 systemd: Starting Create List of Static Device Nodes...
Feb 25 06:13:52 np0005629333 systemd: Starting Journal Service...
Feb 25 06:13:52 np0005629333 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Feb 25 06:13:52 np0005629333 systemd: Starting Apply Kernel Variables...
Feb 25 06:13:52 np0005629333 systemd: Starting Create System Users...
Feb 25 06:13:52 np0005629333 systemd: Starting Setup Virtual Console...
Feb 25 06:13:52 np0005629333 systemd: Finished Create List of Static Device Nodes.
Feb 25 06:13:52 np0005629333 systemd: Finished Create System Users.
Feb 25 06:13:52 np0005629333 systemd-journald[309]: Journal started
Feb 25 06:13:52 np0005629333 systemd-journald[309]: Runtime Journal (/run/log/journal/aaf3b8852d7c41fda043b293ce60ba6a) is 8.0M, max 153.6M, 145.6M free.
Feb 25 06:13:52 np0005629333 systemd-sysusers[314]: Creating group 'users' with GID 100.
Feb 25 06:13:52 np0005629333 systemd-sysusers[314]: Creating group 'dbus' with GID 81.
Feb 25 06:13:52 np0005629333 systemd-sysusers[314]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Feb 25 06:13:52 np0005629333 systemd: Started Journal Service.
Feb 25 06:13:52 np0005629333 systemd[1]: Finished Apply Kernel Variables.
Feb 25 06:13:52 np0005629333 systemd[1]: Starting Create Static Device Nodes in /dev...
Feb 25 06:13:52 np0005629333 systemd[1]: Starting Create Volatile Files and Directories...
Feb 25 06:13:52 np0005629333 systemd[1]: Finished Create Static Device Nodes in /dev.
Feb 25 06:13:52 np0005629333 systemd[1]: Finished Create Volatile Files and Directories.
Feb 25 06:13:52 np0005629333 systemd[1]: Finished Setup Virtual Console.
Feb 25 06:13:52 np0005629333 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Feb 25 06:13:52 np0005629333 systemd[1]: Starting dracut cmdline hook...
Feb 25 06:13:52 np0005629333 dracut-cmdline[330]: dracut-9 dracut-057-110.git20260130.el9
Feb 25 06:13:52 np0005629333 dracut-cmdline[330]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-686.el9.x86_64 root=UUID=37391a25-080d-4723-8b0c-cb88a559875b ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Feb 25 06:13:52 np0005629333 systemd[1]: Finished dracut cmdline hook.
Feb 25 06:13:52 np0005629333 systemd[1]: Starting dracut pre-udev hook...
Feb 25 06:13:52 np0005629333 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 25 06:13:52 np0005629333 kernel: device-mapper: uevent: version 1.0.3
Feb 25 06:13:52 np0005629333 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Feb 25 06:13:52 np0005629333 kernel: RPC: Registered named UNIX socket transport module.
Feb 25 06:13:52 np0005629333 kernel: RPC: Registered udp transport module.
Feb 25 06:13:52 np0005629333 kernel: RPC: Registered tcp transport module.
Feb 25 06:13:52 np0005629333 kernel: RPC: Registered tcp-with-tls transport module.
Feb 25 06:13:52 np0005629333 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 25 06:13:52 np0005629333 rpc.statd[446]: Version 2.5.4 starting
Feb 25 06:13:52 np0005629333 rpc.statd[446]: Initializing NSM state
Feb 25 06:13:52 np0005629333 rpc.idmapd[451]: Setting log level to 0
Feb 25 06:13:52 np0005629333 systemd[1]: Finished dracut pre-udev hook.
Feb 25 06:13:52 np0005629333 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 25 06:13:52 np0005629333 systemd-udevd[464]: Using default interface naming scheme 'rhel-9.0'.
Feb 25 06:13:52 np0005629333 systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 25 06:13:52 np0005629333 systemd[1]: Starting dracut pre-trigger hook...
Feb 25 06:13:52 np0005629333 systemd[1]: Finished dracut pre-trigger hook.
Feb 25 06:13:52 np0005629333 systemd[1]: Starting Coldplug All udev Devices...
Feb 25 06:13:52 np0005629333 systemd[1]: Created slice Slice /system/modprobe.
Feb 25 06:13:53 np0005629333 systemd[1]: Starting Load Kernel Module configfs...
Feb 25 06:13:53 np0005629333 systemd[1]: Finished Coldplug All udev Devices.
Feb 25 06:13:53 np0005629333 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 25 06:13:53 np0005629333 systemd[1]: Finished Load Kernel Module configfs.
Feb 25 06:13:53 np0005629333 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 25 06:13:53 np0005629333 systemd[1]: Reached target Network.
Feb 25 06:13:53 np0005629333 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 25 06:13:53 np0005629333 systemd[1]: Starting dracut initqueue hook...
Feb 25 06:13:53 np0005629333 kernel: scsi host0: ata_piix
Feb 25 06:13:53 np0005629333 kernel: scsi host1: ata_piix
Feb 25 06:13:53 np0005629333 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Feb 25 06:13:53 np0005629333 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Feb 25 06:13:53 np0005629333 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Feb 25 06:13:53 np0005629333 systemd-udevd[506]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 06:13:53 np0005629333 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Feb 25 06:13:53 np0005629333 kernel: vda: vda1
Feb 25 06:13:53 np0005629333 kernel: ACPI: bus type drm_connector registered
Feb 25 06:13:53 np0005629333 systemd[1]: Mounting Kernel Configuration File System...
Feb 25 06:13:53 np0005629333 systemd[1]: Mounted Kernel Configuration File System.
Feb 25 06:13:53 np0005629333 systemd[1]: Reached target System Initialization.
Feb 25 06:13:53 np0005629333 systemd[1]: Reached target Basic System.
Feb 25 06:13:53 np0005629333 kernel: ata1: found unknown device (class 0)
Feb 25 06:13:53 np0005629333 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 25 06:13:53 np0005629333 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Feb 25 06:13:53 np0005629333 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Feb 25 06:13:53 np0005629333 systemd[1]: Found device /dev/disk/by-uuid/37391a25-080d-4723-8b0c-cb88a559875b.
Feb 25 06:13:53 np0005629333 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 25 06:13:53 np0005629333 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 25 06:13:53 np0005629333 systemd[1]: Reached target Initrd Root Device.
Feb 25 06:13:53 np0005629333 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Feb 25 06:13:53 np0005629333 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Feb 25 06:13:53 np0005629333 kernel: Console: switching to colour dummy device 80x25
Feb 25 06:13:53 np0005629333 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Feb 25 06:13:53 np0005629333 kernel: [drm] features: -context_init
Feb 25 06:13:53 np0005629333 kernel: [drm] number of scanouts: 1
Feb 25 06:13:53 np0005629333 kernel: [drm] number of cap sets: 0
Feb 25 06:13:53 np0005629333 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Feb 25 06:13:53 np0005629333 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Feb 25 06:13:53 np0005629333 kernel: Console: switching to colour frame buffer device 128x48
Feb 25 06:13:53 np0005629333 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Feb 25 06:13:53 np0005629333 systemd[1]: Finished dracut initqueue hook.
Feb 25 06:13:53 np0005629333 systemd[1]: Reached target Preparation for Remote File Systems.
Feb 25 06:13:53 np0005629333 systemd[1]: Reached target Remote Encrypted Volumes.
Feb 25 06:13:53 np0005629333 systemd[1]: Reached target Remote File Systems.
Feb 25 06:13:53 np0005629333 systemd[1]: Starting dracut pre-mount hook...
Feb 25 06:13:53 np0005629333 systemd[1]: Finished dracut pre-mount hook.
Feb 25 06:13:53 np0005629333 systemd[1]: Starting File System Check on /dev/disk/by-uuid/37391a25-080d-4723-8b0c-cb88a559875b...
Feb 25 06:13:53 np0005629333 systemd-fsck[571]: /usr/sbin/fsck.xfs: XFS file system.
Feb 25 06:13:53 np0005629333 systemd[1]: Finished File System Check on /dev/disk/by-uuid/37391a25-080d-4723-8b0c-cb88a559875b.
Feb 25 06:13:53 np0005629333 systemd[1]: Mounting /sysroot...
Feb 25 06:13:53 np0005629333 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Feb 25 06:13:53 np0005629333 kernel: XFS (vda1): Mounting V5 Filesystem 37391a25-080d-4723-8b0c-cb88a559875b
Feb 25 06:13:53 np0005629333 kernel: XFS (vda1): Ending clean mount
Feb 25 06:13:53 np0005629333 systemd[1]: Mounted /sysroot.
Feb 25 06:13:53 np0005629333 systemd[1]: Reached target Initrd Root File System.
Feb 25 06:13:54 np0005629333 systemd[1]: Starting Mountpoints Configured in the Real Root...
Feb 25 06:13:54 np0005629333 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 25 06:13:54 np0005629333 systemd[1]: Finished Mountpoints Configured in the Real Root.
Feb 25 06:13:54 np0005629333 systemd[1]: Reached target Initrd File Systems.
Feb 25 06:13:54 np0005629333 systemd[1]: Reached target Initrd Default Target.
Feb 25 06:13:54 np0005629333 systemd[1]: Starting dracut mount hook...
Feb 25 06:13:54 np0005629333 systemd[1]: Finished dracut mount hook.
Feb 25 06:13:54 np0005629333 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Feb 25 06:13:54 np0005629333 rpc.idmapd[451]: exiting on signal 15
Feb 25 06:13:54 np0005629333 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Feb 25 06:13:54 np0005629333 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Feb 25 06:13:54 np0005629333 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped target Network.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped target Remote Encrypted Volumes.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped target Timer Units.
Feb 25 06:13:54 np0005629333 systemd[1]: dbus.socket: Deactivated successfully.
Feb 25 06:13:54 np0005629333 systemd[1]: Closed D-Bus System Message Bus Socket.
Feb 25 06:13:54 np0005629333 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped target Initrd Default Target.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped target Basic System.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped target Initrd Root Device.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped target Initrd /usr File System.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped target Path Units.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped target Remote File Systems.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped target Preparation for Remote File Systems.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped target Slice Units.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped target Socket Units.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped target System Initialization.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped target Local File Systems.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped target Swaps.
Feb 25 06:13:54 np0005629333 systemd[1]: dracut-mount.service: Deactivated successfully.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped dracut mount hook.
Feb 25 06:13:54 np0005629333 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped dracut pre-mount hook.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped target Local Encrypted Volumes.
Feb 25 06:13:54 np0005629333 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Feb 25 06:13:54 np0005629333 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped dracut initqueue hook.
Feb 25 06:13:54 np0005629333 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped Apply Kernel Variables.
Feb 25 06:13:54 np0005629333 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped Create Volatile Files and Directories.
Feb 25 06:13:54 np0005629333 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped Coldplug All udev Devices.
Feb 25 06:13:54 np0005629333 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped dracut pre-trigger hook.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Feb 25 06:13:54 np0005629333 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped Setup Virtual Console.
Feb 25 06:13:54 np0005629333 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 25 06:13:54 np0005629333 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 25 06:13:54 np0005629333 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 25 06:13:54 np0005629333 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Feb 25 06:13:54 np0005629333 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Feb 25 06:13:54 np0005629333 systemd[1]: systemd-udevd.service: Consumed 1.032s CPU time.
Feb 25 06:13:54 np0005629333 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 25 06:13:54 np0005629333 systemd[1]: Closed udev Control Socket.
Feb 25 06:13:54 np0005629333 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 25 06:13:54 np0005629333 systemd[1]: Closed udev Kernel Socket.
Feb 25 06:13:54 np0005629333 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped dracut pre-udev hook.
Feb 25 06:13:54 np0005629333 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped dracut cmdline hook.
Feb 25 06:13:54 np0005629333 systemd[1]: Starting Cleanup udev Database...
Feb 25 06:13:54 np0005629333 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped Create Static Device Nodes in /dev.
Feb 25 06:13:54 np0005629333 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped Create List of Static Device Nodes.
Feb 25 06:13:54 np0005629333 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Feb 25 06:13:54 np0005629333 systemd[1]: Stopped Create System Users.
Feb 25 06:13:54 np0005629333 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 25 06:13:54 np0005629333 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Feb 25 06:13:54 np0005629333 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 25 06:13:54 np0005629333 systemd[1]: Finished Cleanup udev Database.
Feb 25 06:13:54 np0005629333 systemd[1]: Reached target Switch Root.
Feb 25 06:13:54 np0005629333 systemd[1]: Starting Switch Root...
Feb 25 06:13:54 np0005629333 systemd[1]: Switching root.
Feb 25 06:13:54 np0005629333 systemd-journald[309]: Journal stopped
Feb 25 06:13:55 np0005629333 systemd-journald: Received SIGTERM from PID 1 (systemd).
Feb 25 06:13:55 np0005629333 kernel: audit: type=1404 audit(1772018034.454:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Feb 25 06:13:55 np0005629333 kernel: SELinux:  policy capability network_peer_controls=1
Feb 25 06:13:55 np0005629333 kernel: SELinux:  policy capability open_perms=1
Feb 25 06:13:55 np0005629333 kernel: SELinux:  policy capability extended_socket_class=1
Feb 25 06:13:55 np0005629333 kernel: SELinux:  policy capability always_check_network=0
Feb 25 06:13:55 np0005629333 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 25 06:13:55 np0005629333 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 25 06:13:55 np0005629333 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 25 06:13:55 np0005629333 kernel: audit: type=1403 audit(1772018034.550:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 25 06:13:55 np0005629333 systemd: Successfully loaded SELinux policy in 98.626ms.
Feb 25 06:13:55 np0005629333 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 38.422ms.
Feb 25 06:13:55 np0005629333 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 25 06:13:55 np0005629333 systemd: Detected virtualization kvm.
Feb 25 06:13:55 np0005629333 systemd: Detected architecture x86-64.
Feb 25 06:13:55 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:13:55 np0005629333 systemd: initrd-switch-root.service: Deactivated successfully.
Feb 25 06:13:55 np0005629333 systemd: Stopped Switch Root.
Feb 25 06:13:55 np0005629333 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 25 06:13:55 np0005629333 systemd: Created slice Slice /system/getty.
Feb 25 06:13:55 np0005629333 systemd: Created slice Slice /system/serial-getty.
Feb 25 06:13:55 np0005629333 systemd: Created slice Slice /system/sshd-keygen.
Feb 25 06:13:55 np0005629333 systemd: Created slice User and Session Slice.
Feb 25 06:13:55 np0005629333 systemd: Started Dispatch Password Requests to Console Directory Watch.
Feb 25 06:13:55 np0005629333 systemd: Started Forward Password Requests to Wall Directory Watch.
Feb 25 06:13:55 np0005629333 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Feb 25 06:13:55 np0005629333 systemd: Reached target Local Encrypted Volumes.
Feb 25 06:13:55 np0005629333 systemd: Stopped target Switch Root.
Feb 25 06:13:55 np0005629333 systemd: Stopped target Initrd File Systems.
Feb 25 06:13:55 np0005629333 systemd: Stopped target Initrd Root File System.
Feb 25 06:13:55 np0005629333 systemd: Reached target Local Integrity Protected Volumes.
Feb 25 06:13:55 np0005629333 systemd: Reached target Path Units.
Feb 25 06:13:55 np0005629333 systemd: Reached target rpc_pipefs.target.
Feb 25 06:13:55 np0005629333 systemd: Reached target Slice Units.
Feb 25 06:13:55 np0005629333 systemd: Reached target Swaps.
Feb 25 06:13:55 np0005629333 systemd: Reached target Local Verity Protected Volumes.
Feb 25 06:13:55 np0005629333 systemd: Listening on RPCbind Server Activation Socket.
Feb 25 06:13:55 np0005629333 systemd: Reached target RPC Port Mapper.
Feb 25 06:13:55 np0005629333 systemd: Listening on Process Core Dump Socket.
Feb 25 06:13:55 np0005629333 systemd: Listening on initctl Compatibility Named Pipe.
Feb 25 06:13:55 np0005629333 systemd: Listening on udev Control Socket.
Feb 25 06:13:55 np0005629333 systemd: Listening on udev Kernel Socket.
Feb 25 06:13:55 np0005629333 systemd: Mounting Huge Pages File System...
Feb 25 06:13:55 np0005629333 systemd: Mounting POSIX Message Queue File System...
Feb 25 06:13:55 np0005629333 systemd: Mounting Kernel Debug File System...
Feb 25 06:13:55 np0005629333 systemd: Mounting Kernel Trace File System...
Feb 25 06:13:55 np0005629333 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb 25 06:13:55 np0005629333 systemd: Starting Create List of Static Device Nodes...
Feb 25 06:13:55 np0005629333 systemd: Starting Load Kernel Module configfs...
Feb 25 06:13:55 np0005629333 systemd: Starting Load Kernel Module drm...
Feb 25 06:13:55 np0005629333 systemd: Starting Load Kernel Module efi_pstore...
Feb 25 06:13:55 np0005629333 systemd: Starting Load Kernel Module fuse...
Feb 25 06:13:55 np0005629333 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Feb 25 06:13:55 np0005629333 systemd: systemd-fsck-root.service: Deactivated successfully.
Feb 25 06:13:55 np0005629333 systemd: Stopped File System Check on Root Device.
Feb 25 06:13:55 np0005629333 systemd: Stopped Journal Service.
Feb 25 06:13:55 np0005629333 systemd: Starting Journal Service...
Feb 25 06:13:55 np0005629333 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Feb 25 06:13:55 np0005629333 systemd: Starting Generate network units from Kernel command line...
Feb 25 06:13:55 np0005629333 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 25 06:13:55 np0005629333 systemd: Starting Remount Root and Kernel File Systems...
Feb 25 06:13:55 np0005629333 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 25 06:13:55 np0005629333 systemd: Starting Apply Kernel Variables...
Feb 25 06:13:55 np0005629333 systemd-journald[701]: Journal started
Feb 25 06:13:55 np0005629333 systemd-journald[701]: Runtime Journal (/run/log/journal/45af4031c1bdc072f1f045c25038675f) is 8.0M, max 153.6M, 145.6M free.
Feb 25 06:13:55 np0005629333 systemd[1]: Queued start job for default target Multi-User System.
Feb 25 06:13:55 np0005629333 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 25 06:13:55 np0005629333 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Feb 25 06:13:55 np0005629333 kernel: fuse: init (API version 7.37)
Feb 25 06:13:55 np0005629333 systemd: Starting Coldplug All udev Devices...
Feb 25 06:13:55 np0005629333 systemd: Started Journal Service.
Feb 25 06:13:55 np0005629333 systemd[1]: Mounted Huge Pages File System.
Feb 25 06:13:55 np0005629333 systemd[1]: Mounted POSIX Message Queue File System.
Feb 25 06:13:55 np0005629333 systemd[1]: Mounted Kernel Debug File System.
Feb 25 06:13:55 np0005629333 systemd[1]: Mounted Kernel Trace File System.
Feb 25 06:13:55 np0005629333 systemd[1]: Finished Create List of Static Device Nodes.
Feb 25 06:13:55 np0005629333 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 25 06:13:55 np0005629333 systemd[1]: Finished Load Kernel Module configfs.
Feb 25 06:13:55 np0005629333 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 25 06:13:55 np0005629333 systemd[1]: Finished Load Kernel Module drm.
Feb 25 06:13:55 np0005629333 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 25 06:13:55 np0005629333 systemd[1]: Finished Load Kernel Module efi_pstore.
Feb 25 06:13:55 np0005629333 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 25 06:13:55 np0005629333 systemd[1]: Finished Load Kernel Module fuse.
Feb 25 06:13:55 np0005629333 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Feb 25 06:13:55 np0005629333 systemd[1]: Finished Generate network units from Kernel command line.
Feb 25 06:13:55 np0005629333 systemd[1]: Finished Remount Root and Kernel File Systems.
Feb 25 06:13:55 np0005629333 systemd[1]: Finished Apply Kernel Variables.
Feb 25 06:13:55 np0005629333 systemd[1]: Mounting FUSE Control File System...
Feb 25 06:13:55 np0005629333 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb 25 06:13:55 np0005629333 systemd[1]: Starting Rebuild Hardware Database...
Feb 25 06:13:55 np0005629333 systemd[1]: Starting Flush Journal to Persistent Storage...
Feb 25 06:13:55 np0005629333 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 25 06:13:55 np0005629333 systemd[1]: Starting Load/Save OS Random Seed...
Feb 25 06:13:55 np0005629333 systemd[1]: Starting Create System Users...
Feb 25 06:13:55 np0005629333 systemd[1]: Mounted FUSE Control File System.
Feb 25 06:13:55 np0005629333 systemd-journald[701]: Runtime Journal (/run/log/journal/45af4031c1bdc072f1f045c25038675f) is 8.0M, max 153.6M, 145.6M free.
Feb 25 06:13:55 np0005629333 systemd-journald[701]: Received client request to flush runtime journal.
Feb 25 06:13:55 np0005629333 systemd[1]: Finished Flush Journal to Persistent Storage.
Feb 25 06:13:55 np0005629333 systemd[1]: Finished Load/Save OS Random Seed.
Feb 25 06:13:55 np0005629333 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb 25 06:13:55 np0005629333 systemd[1]: Finished Create System Users.
Feb 25 06:13:55 np0005629333 systemd[1]: Starting Create Static Device Nodes in /dev...
Feb 25 06:13:55 np0005629333 systemd[1]: Finished Coldplug All udev Devices.
Feb 25 06:13:55 np0005629333 systemd[1]: Finished Create Static Device Nodes in /dev.
Feb 25 06:13:55 np0005629333 systemd[1]: Reached target Preparation for Local File Systems.
Feb 25 06:13:55 np0005629333 systemd[1]: Reached target Local File Systems.
Feb 25 06:13:55 np0005629333 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Feb 25 06:13:55 np0005629333 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Feb 25 06:13:55 np0005629333 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 25 06:13:55 np0005629333 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Feb 25 06:13:55 np0005629333 systemd[1]: Starting Automatic Boot Loader Update...
Feb 25 06:13:55 np0005629333 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Feb 25 06:13:55 np0005629333 systemd[1]: Starting Create Volatile Files and Directories...
Feb 25 06:13:55 np0005629333 bootctl[719]: Couldn't find EFI system partition, skipping.
Feb 25 06:13:55 np0005629333 systemd[1]: Finished Automatic Boot Loader Update.
Feb 25 06:13:55 np0005629333 systemd[1]: Finished Create Volatile Files and Directories.
Feb 25 06:13:55 np0005629333 systemd[1]: Starting Security Auditing Service...
Feb 25 06:13:55 np0005629333 systemd[1]: Starting RPC Bind...
Feb 25 06:13:55 np0005629333 systemd[1]: Starting Rebuild Journal Catalog...
Feb 25 06:13:55 np0005629333 auditd[725]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Feb 25 06:13:55 np0005629333 auditd[725]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Feb 25 06:13:55 np0005629333 systemd[1]: Finished Rebuild Journal Catalog.
Feb 25 06:13:55 np0005629333 systemd[1]: Started RPC Bind.
Feb 25 06:13:55 np0005629333 augenrules[730]: /sbin/augenrules: No change
Feb 25 06:13:55 np0005629333 augenrules[745]: No rules
Feb 25 06:13:55 np0005629333 augenrules[745]: enabled 1
Feb 25 06:13:55 np0005629333 augenrules[745]: failure 1
Feb 25 06:13:55 np0005629333 augenrules[745]: pid 725
Feb 25 06:13:55 np0005629333 augenrules[745]: rate_limit 0
Feb 25 06:13:55 np0005629333 augenrules[745]: backlog_limit 8192
Feb 25 06:13:55 np0005629333 augenrules[745]: lost 0
Feb 25 06:13:55 np0005629333 augenrules[745]: backlog 2
Feb 25 06:13:55 np0005629333 augenrules[745]: backlog_wait_time 60000
Feb 25 06:13:55 np0005629333 augenrules[745]: backlog_wait_time_actual 0
Feb 25 06:13:55 np0005629333 augenrules[745]: enabled 1
Feb 25 06:13:55 np0005629333 augenrules[745]: failure 1
Feb 25 06:13:55 np0005629333 augenrules[745]: pid 725
Feb 25 06:13:55 np0005629333 augenrules[745]: rate_limit 0
Feb 25 06:13:55 np0005629333 augenrules[745]: backlog_limit 8192
Feb 25 06:13:55 np0005629333 augenrules[745]: lost 0
Feb 25 06:13:55 np0005629333 augenrules[745]: backlog 0
Feb 25 06:13:55 np0005629333 augenrules[745]: backlog_wait_time 60000
Feb 25 06:13:55 np0005629333 augenrules[745]: backlog_wait_time_actual 0
Feb 25 06:13:55 np0005629333 systemd[1]: Started Security Auditing Service.
Feb 25 06:13:55 np0005629333 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Feb 25 06:13:55 np0005629333 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Feb 25 06:13:55 np0005629333 systemd[1]: Finished Rebuild Hardware Database.
Feb 25 06:13:55 np0005629333 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 25 06:13:55 np0005629333 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Feb 25 06:13:55 np0005629333 systemd[1]: Starting Update is Completed...
Feb 25 06:13:55 np0005629333 systemd[1]: Finished Update is Completed.
Feb 25 06:13:55 np0005629333 systemd-udevd[753]: Using default interface naming scheme 'rhel-9.0'.
Feb 25 06:13:55 np0005629333 systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 25 06:13:55 np0005629333 systemd[1]: Reached target System Initialization.
Feb 25 06:13:55 np0005629333 systemd[1]: Started dnf makecache --timer.
Feb 25 06:13:55 np0005629333 systemd[1]: Started Daily rotation of log files.
Feb 25 06:13:55 np0005629333 systemd[1]: Started Daily Cleanup of Temporary Directories.
Feb 25 06:13:55 np0005629333 systemd[1]: Reached target Timer Units.
Feb 25 06:13:55 np0005629333 systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb 25 06:13:55 np0005629333 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Feb 25 06:13:55 np0005629333 systemd[1]: Reached target Socket Units.
Feb 25 06:13:55 np0005629333 systemd[1]: Starting D-Bus System Message Bus...
Feb 25 06:13:55 np0005629333 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 25 06:13:55 np0005629333 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Feb 25 06:13:55 np0005629333 systemd[1]: Starting Load Kernel Module configfs...
Feb 25 06:13:56 np0005629333 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 25 06:13:56 np0005629333 systemd[1]: Finished Load Kernel Module configfs.
Feb 25 06:13:56 np0005629333 systemd-udevd[759]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 06:13:56 np0005629333 systemd[1]: Started D-Bus System Message Bus.
Feb 25 06:13:56 np0005629333 systemd[1]: Reached target Basic System.
Feb 25 06:13:56 np0005629333 dbus-broker-lau[769]: Ready
Feb 25 06:13:56 np0005629333 systemd[1]: Starting NTP client/server...
Feb 25 06:13:56 np0005629333 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Feb 25 06:13:56 np0005629333 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Feb 25 06:13:56 np0005629333 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 25 06:13:56 np0005629333 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 25 06:13:56 np0005629333 systemd[1]: Starting Restore /run/initramfs on shutdown...
Feb 25 06:13:56 np0005629333 systemd[1]: Starting IPv4 firewall with iptables...
Feb 25 06:13:56 np0005629333 systemd[1]: Started irqbalance daemon.
Feb 25 06:13:56 np0005629333 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Feb 25 06:13:56 np0005629333 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 25 06:13:56 np0005629333 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 25 06:13:56 np0005629333 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 25 06:13:56 np0005629333 systemd[1]: Reached target sshd-keygen.target.
Feb 25 06:13:56 np0005629333 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Feb 25 06:13:56 np0005629333 systemd[1]: Reached target User and Group Name Lookups.
Feb 25 06:13:56 np0005629333 chronyd[810]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Feb 25 06:13:56 np0005629333 chronyd[810]: Loaded 0 symmetric keys
Feb 25 06:13:56 np0005629333 chronyd[810]: Using right/UTC timezone to obtain leap second data
Feb 25 06:13:56 np0005629333 systemd[1]: Starting User Login Management...
Feb 25 06:13:56 np0005629333 chronyd[810]: Loaded seccomp filter (level 2)
Feb 25 06:13:56 np0005629333 systemd[1]: Finished Restore /run/initramfs on shutdown.
Feb 25 06:13:56 np0005629333 systemd[1]: Started NTP client/server.
Feb 25 06:13:56 np0005629333 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Feb 25 06:13:56 np0005629333 systemd-logind[811]: New seat seat0.
Feb 25 06:13:56 np0005629333 systemd-logind[811]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 25 06:13:56 np0005629333 systemd-logind[811]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb 25 06:13:56 np0005629333 systemd[1]: Started User Login Management.
Feb 25 06:13:56 np0005629333 kernel: kvm_amd: TSC scaling supported
Feb 25 06:13:56 np0005629333 kernel: kvm_amd: Nested Virtualization enabled
Feb 25 06:13:56 np0005629333 kernel: kvm_amd: Nested Paging enabled
Feb 25 06:13:56 np0005629333 kernel: kvm_amd: LBR virtualization supported
Feb 25 06:13:56 np0005629333 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Feb 25 06:13:56 np0005629333 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Feb 25 06:13:56 np0005629333 iptables.init[801]: iptables: Applying firewall rules: [  OK  ]
Feb 25 06:13:56 np0005629333 systemd[1]: Finished IPv4 firewall with iptables.
Feb 25 06:13:56 np0005629333 cloud-init[857]: Cloud-init v. 24.4-8.el9 running 'init-local' at Wed, 25 Feb 2026 11:13:56 +0000. Up 6.11 seconds.
Feb 25 06:13:56 np0005629333 systemd[1]: run-cloud\x2dinit-tmp-tmp7g39y3cm.mount: Deactivated successfully.
Feb 25 06:13:56 np0005629333 systemd[1]: Starting Hostname Service...
Feb 25 06:13:57 np0005629333 systemd[1]: Started Hostname Service.
Feb 25 06:13:57 np0005629333 systemd-hostnamed[871]: Hostname set to <np0005629333.novalocal> (static)
Feb 25 06:13:57 np0005629333 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Feb 25 06:13:57 np0005629333 systemd[1]: Reached target Preparation for Network.
Feb 25 06:13:57 np0005629333 systemd[1]: Starting Network Manager...
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.1917] NetworkManager (version 1.54.3-2.el9) is starting... (boot:5e9335c6-0936-4191-819c-92681868f3d7)
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.1921] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2092] manager[0x55fcb2b1a000]: monitoring kernel firmware directory '/lib/firmware'.
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2146] hostname: hostname: using hostnamed
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2146] hostname: static hostname changed from (none) to "np0005629333.novalocal"
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2150] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2304] manager[0x55fcb2b1a000]: rfkill: Wi-Fi hardware radio set enabled
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2305] manager[0x55fcb2b1a000]: rfkill: WWAN hardware radio set enabled
Feb 25 06:13:57 np0005629333 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2391] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2392] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2392] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2392] manager: Networking is enabled by state file
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2395] settings: Loaded settings plugin: keyfile (internal)
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2422] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2452] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2466] dhcp: init: Using DHCP client 'internal'
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2470] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2486] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2499] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2511] device (lo): Activation: starting connection 'lo' (c854c08b-1693-43bc-843a-c34c47825cf4)
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2519] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2523] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2551] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2555] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2560] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2562] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2564] device (eth0): carrier: link connected
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2568] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2581] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2589] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2594] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2595] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2599] manager: NetworkManager state is now CONNECTING
Feb 25 06:13:57 np0005629333 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2604] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2615] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2619] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 25 06:13:57 np0005629333 systemd[1]: Started Network Manager.
Feb 25 06:13:57 np0005629333 systemd[1]: Reached target Network.
Feb 25 06:13:57 np0005629333 systemd[1]: Starting Network Manager Wait Online...
Feb 25 06:13:57 np0005629333 systemd[1]: Starting GSSAPI Proxy Daemon...
Feb 25 06:13:57 np0005629333 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2771] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2775] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.2782] device (lo): Activation: successful, device activated.
Feb 25 06:13:57 np0005629333 systemd[1]: Started GSSAPI Proxy Daemon.
Feb 25 06:13:57 np0005629333 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb 25 06:13:57 np0005629333 systemd[1]: Reached target NFS client services.
Feb 25 06:13:57 np0005629333 systemd[1]: Reached target Preparation for Remote File Systems.
Feb 25 06:13:57 np0005629333 systemd[1]: Reached target Remote File Systems.
Feb 25 06:13:57 np0005629333 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.6341] dhcp4 (eth0): state changed new lease, address=38.102.83.173
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.6352] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.6381] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.6417] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.6420] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.6427] manager: NetworkManager state is now CONNECTED_SITE
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.6434] device (eth0): Activation: successful, device activated.
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.6441] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 25 06:13:57 np0005629333 NetworkManager[875]: <info>  [1772018037.6448] manager: startup complete
Feb 25 06:13:57 np0005629333 systemd[1]: Finished Network Manager Wait Online.
Feb 25 06:13:57 np0005629333 systemd[1]: Starting Cloud-init: Network Stage...
Feb 25 06:13:57 np0005629333 cloud-init[937]: Cloud-init v. 24.4-8.el9 running 'init' at Wed, 25 Feb 2026 11:13:57 +0000. Up 7.39 seconds.
Feb 25 06:13:57 np0005629333 cloud-init[937]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Feb 25 06:13:57 np0005629333 cloud-init[937]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb 25 06:13:57 np0005629333 cloud-init[937]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Feb 25 06:13:57 np0005629333 cloud-init[937]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb 25 06:13:57 np0005629333 cloud-init[937]: ci-info: |  eth0  | True |        38.102.83.173         | 255.255.255.0 | global | fa:16:3e:1b:cb:fd |
Feb 25 06:13:57 np0005629333 cloud-init[937]: ci-info: |  eth0  | True | fe80::f816:3eff:fe1b:cbfd/64 |       .       |  link  | fa:16:3e:1b:cb:fd |
Feb 25 06:13:57 np0005629333 cloud-init[937]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Feb 25 06:13:57 np0005629333 cloud-init[937]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Feb 25 06:13:57 np0005629333 cloud-init[937]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb 25 06:13:57 np0005629333 cloud-init[937]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Feb 25 06:13:57 np0005629333 cloud-init[937]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 25 06:13:57 np0005629333 cloud-init[937]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Feb 25 06:13:57 np0005629333 cloud-init[937]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 25 06:13:57 np0005629333 cloud-init[937]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Feb 25 06:13:57 np0005629333 cloud-init[937]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Feb 25 06:13:57 np0005629333 cloud-init[937]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Feb 25 06:13:57 np0005629333 cloud-init[937]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 25 06:13:57 np0005629333 cloud-init[937]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Feb 25 06:13:57 np0005629333 cloud-init[937]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 25 06:13:57 np0005629333 cloud-init[937]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Feb 25 06:13:57 np0005629333 cloud-init[937]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 25 06:13:57 np0005629333 cloud-init[937]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Feb 25 06:13:57 np0005629333 cloud-init[937]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Feb 25 06:13:58 np0005629333 cloud-init[937]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 25 06:13:59 np0005629333 cloud-init[937]: Generating public/private rsa key pair.
Feb 25 06:13:59 np0005629333 cloud-init[937]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Feb 25 06:13:59 np0005629333 cloud-init[937]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Feb 25 06:13:59 np0005629333 cloud-init[937]: The key fingerprint is:
Feb 25 06:13:59 np0005629333 cloud-init[937]: SHA256:BXcIgtxZ57qXHrHfHQSft1LKQx/uqNLcikqUnk9iSVY root@np0005629333.novalocal
Feb 25 06:13:59 np0005629333 cloud-init[937]: The key's randomart image is:
Feb 25 06:13:59 np0005629333 cloud-init[937]: +---[RSA 3072]----+
Feb 25 06:13:59 np0005629333 cloud-init[937]: |   . o.o+.o..    |
Feb 25 06:13:59 np0005629333 cloud-init[937]: |    o o. =..     |
Feb 25 06:13:59 np0005629333 cloud-init[937]: |         Eo  .   |
Feb 25 06:13:59 np0005629333 cloud-init[937]: |        oo    o .|
Feb 25 06:13:59 np0005629333 cloud-init[937]: |       =S .  . *.|
Feb 25 06:13:59 np0005629333 cloud-init[937]: |      = o. +o * +|
Feb 25 06:13:59 np0005629333 cloud-init[937]: |       B..B .= = |
Feb 25 06:13:59 np0005629333 cloud-init[937]: |      o ++.= o=..|
Feb 25 06:13:59 np0005629333 cloud-init[937]: |       ..ooo+....|
Feb 25 06:13:59 np0005629333 cloud-init[937]: +----[SHA256]-----+
Feb 25 06:13:59 np0005629333 cloud-init[937]: Generating public/private ecdsa key pair.
Feb 25 06:13:59 np0005629333 cloud-init[937]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Feb 25 06:13:59 np0005629333 cloud-init[937]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Feb 25 06:13:59 np0005629333 cloud-init[937]: The key fingerprint is:
Feb 25 06:13:59 np0005629333 cloud-init[937]: SHA256:+PlZolo1BDQOutakfZo9ip67Gmnk3taJCfSAstCl6GA root@np0005629333.novalocal
Feb 25 06:13:59 np0005629333 cloud-init[937]: The key's randomart image is:
Feb 25 06:13:59 np0005629333 cloud-init[937]: +---[ECDSA 256]---+
Feb 25 06:13:59 np0005629333 cloud-init[937]: |      ..+        |
Feb 25 06:13:59 np0005629333 cloud-init[937]: |    .. o o       |
Feb 25 06:13:59 np0005629333 cloud-init[937]: | + o. . . .      |
Feb 25 06:13:59 np0005629333 cloud-init[937]: |*E=  * . .       |
Feb 25 06:13:59 np0005629333 cloud-init[937]: |*o.o+ + S o      |
Feb 25 06:13:59 np0005629333 cloud-init[937]: |.+.o.  * o .     |
Feb 25 06:13:59 np0005629333 cloud-init[937]: |  =. +o.* . .    |
Feb 25 06:13:59 np0005629333 cloud-init[937]: | o o+ooo + +     |
Feb 25 06:13:59 np0005629333 cloud-init[937]: |  o+Boo.. o      |
Feb 25 06:13:59 np0005629333 cloud-init[937]: +----[SHA256]-----+
Feb 25 06:13:59 np0005629333 cloud-init[937]: Generating public/private ed25519 key pair.
Feb 25 06:13:59 np0005629333 cloud-init[937]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Feb 25 06:13:59 np0005629333 cloud-init[937]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Feb 25 06:13:59 np0005629333 cloud-init[937]: The key fingerprint is:
Feb 25 06:13:59 np0005629333 cloud-init[937]: SHA256:vTs/4p5juHK2f6sEiiQl55kzQY4SnGzVtxcBIbJV9nE root@np0005629333.novalocal
Feb 25 06:13:59 np0005629333 cloud-init[937]: The key's randomart image is:
Feb 25 06:13:59 np0005629333 cloud-init[937]: +--[ED25519 256]--+
Feb 25 06:13:59 np0005629333 cloud-init[937]: |+ oooo.=oo.E     |
Feb 25 06:13:59 np0005629333 cloud-init[937]: | * ++.o...o      |
Feb 25 06:13:59 np0005629333 cloud-init[937]: |o o.= . ...      |
Feb 25 06:13:59 np0005629333 cloud-init[937]: | . = + . o       |
Feb 25 06:13:59 np0005629333 cloud-init[937]: |  . B   S .      |
Feb 25 06:13:59 np0005629333 cloud-init[937]: |   o + . . .     |
Feb 25 06:13:59 np0005629333 cloud-init[937]: |    . .  .o      |
Feb 25 06:13:59 np0005629333 cloud-init[937]: |      . +.=oo    |
Feb 25 06:13:59 np0005629333 cloud-init[937]: |       +oBBBoo   |
Feb 25 06:13:59 np0005629333 cloud-init[937]: +----[SHA256]-----+
Feb 25 06:13:59 np0005629333 systemd[1]: Finished Cloud-init: Network Stage.
Feb 25 06:13:59 np0005629333 systemd[1]: Reached target Cloud-config availability.
Feb 25 06:13:59 np0005629333 systemd[1]: Reached target Network is Online.
Feb 25 06:13:59 np0005629333 systemd[1]: Starting Cloud-init: Config Stage...
Feb 25 06:13:59 np0005629333 systemd[1]: Starting Crash recovery kernel arming...
Feb 25 06:13:59 np0005629333 systemd[1]: Starting Notify NFS peers of a restart...
Feb 25 06:13:59 np0005629333 systemd[1]: Starting System Logging Service...
Feb 25 06:13:59 np0005629333 sm-notify[1019]: Version 2.5.4 starting
Feb 25 06:13:59 np0005629333 systemd[1]: Starting OpenSSH server daemon...
Feb 25 06:13:59 np0005629333 systemd[1]: Starting Permit User Sessions...
Feb 25 06:13:59 np0005629333 systemd[1]: Started Notify NFS peers of a restart.
Feb 25 06:13:59 np0005629333 systemd[1]: Started OpenSSH server daemon.
Feb 25 06:13:59 np0005629333 systemd[1]: Finished Permit User Sessions.
Feb 25 06:13:59 np0005629333 rsyslogd[1020]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1020" x-info="https://www.rsyslog.com"] start
Feb 25 06:13:59 np0005629333 rsyslogd[1020]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Feb 25 06:13:59 np0005629333 systemd[1]: Started Command Scheduler.
Feb 25 06:13:59 np0005629333 systemd[1]: Started Getty on tty1.
Feb 25 06:13:59 np0005629333 systemd[1]: Started Serial Getty on ttyS0.
Feb 25 06:13:59 np0005629333 systemd[1]: Reached target Login Prompts.
Feb 25 06:13:59 np0005629333 systemd[1]: Started System Logging Service.
Feb 25 06:13:59 np0005629333 systemd[1]: Reached target Multi-User System.
Feb 25 06:13:59 np0005629333 systemd[1]: Starting Record Runlevel Change in UTMP...
Feb 25 06:13:59 np0005629333 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 25 06:13:59 np0005629333 systemd[1]: Finished Record Runlevel Change in UTMP.
Feb 25 06:13:59 np0005629333 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 06:13:59 np0005629333 cloud-init[1150]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Wed, 25 Feb 2026 11:13:59 +0000. Up 9.14 seconds.
Feb 25 06:13:59 np0005629333 kdumpctl[1031]: kdump: No kdump initial ramdisk found.
Feb 25 06:13:59 np0005629333 kdumpctl[1031]: kdump: Rebuilding /boot/initramfs-5.14.0-686.el9.x86_64kdump.img
Feb 25 06:13:59 np0005629333 systemd[1]: Finished Cloud-init: Config Stage.
Feb 25 06:13:59 np0005629333 systemd[1]: Starting Cloud-init: Final Stage...
Feb 25 06:14:00 np0005629333 cloud-init[1398]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Wed, 25 Feb 2026 11:14:00 +0000. Up 9.55 seconds.
Feb 25 06:14:00 np0005629333 cloud-init[1438]: #############################################################
Feb 25 06:14:00 np0005629333 cloud-init[1443]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Feb 25 06:14:00 np0005629333 cloud-init[1450]: 256 SHA256:+PlZolo1BDQOutakfZo9ip67Gmnk3taJCfSAstCl6GA root@np0005629333.novalocal (ECDSA)
Feb 25 06:14:00 np0005629333 cloud-init[1458]: 256 SHA256:vTs/4p5juHK2f6sEiiQl55kzQY4SnGzVtxcBIbJV9nE root@np0005629333.novalocal (ED25519)
Feb 25 06:14:00 np0005629333 cloud-init[1462]: 3072 SHA256:BXcIgtxZ57qXHrHfHQSft1LKQx/uqNLcikqUnk9iSVY root@np0005629333.novalocal (RSA)
Feb 25 06:14:00 np0005629333 cloud-init[1471]: -----END SSH HOST KEY FINGERPRINTS-----
Feb 25 06:14:00 np0005629333 cloud-init[1472]: #############################################################
Feb 25 06:14:00 np0005629333 cloud-init[1398]: Cloud-init v. 24.4-8.el9 finished at Wed, 25 Feb 2026 11:14:00 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.71 seconds
Feb 25 06:14:00 np0005629333 systemd[1]: Finished Cloud-init: Final Stage.
Feb 25 06:14:00 np0005629333 systemd[1]: Reached target Cloud-init target.
Feb 25 06:14:00 np0005629333 dracut[1544]: dracut-057-110.git20260130.el9
Feb 25 06:14:00 np0005629333 dracut[1546]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/37391a25-080d-4723-8b0c-cb88a559875b /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-686.el9.x86_64kdump.img 5.14.0-686.el9.x86_64
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: memstrack is not available
Feb 25 06:14:01 np0005629333 dracut[1546]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb 25 06:14:01 np0005629333 dracut[1546]: memstrack is not available
Feb 25 06:14:01 np0005629333 dracut[1546]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Feb 25 06:14:01 np0005629333 dracut[1546]: *** Including module: systemd ***
Feb 25 06:14:02 np0005629333 dracut[1546]: *** Including module: fips ***
Feb 25 06:14:02 np0005629333 dracut[1546]: *** Including module: systemd-initrd ***
Feb 25 06:14:02 np0005629333 dracut[1546]: *** Including module: i18n ***
Feb 25 06:14:02 np0005629333 dracut[1546]: *** Including module: drm ***
Feb 25 06:14:02 np0005629333 chronyd[810]: Selected source 138.197.164.54 (2.centos.pool.ntp.org)
Feb 25 06:14:02 np0005629333 chronyd[810]: System clock TAI offset set to 37 seconds
Feb 25 06:14:03 np0005629333 dracut[1546]: *** Including module: prefixdevname ***
Feb 25 06:14:03 np0005629333 dracut[1546]: *** Including module: kernel-modules ***
Feb 25 06:14:03 np0005629333 kernel: block vda: the capability attribute has been deprecated.
Feb 25 06:14:03 np0005629333 dracut[1546]: *** Including module: kernel-modules-extra ***
Feb 25 06:14:03 np0005629333 dracut[1546]: *** Including module: qemu ***
Feb 25 06:14:03 np0005629333 dracut[1546]: *** Including module: fstab-sys ***
Feb 25 06:14:03 np0005629333 dracut[1546]: *** Including module: rootfs-block ***
Feb 25 06:14:03 np0005629333 dracut[1546]: *** Including module: terminfo ***
Feb 25 06:14:03 np0005629333 dracut[1546]: *** Including module: udev-rules ***
Feb 25 06:14:04 np0005629333 dracut[1546]: Skipping udev rule: 91-permissions.rules
Feb 25 06:14:04 np0005629333 dracut[1546]: Skipping udev rule: 80-drivers-modprobe.rules
Feb 25 06:14:04 np0005629333 dracut[1546]: *** Including module: virtiofs ***
Feb 25 06:14:04 np0005629333 dracut[1546]: *** Including module: dracut-systemd ***
Feb 25 06:14:04 np0005629333 dracut[1546]: *** Including module: usrmount ***
Feb 25 06:14:04 np0005629333 dracut[1546]: *** Including module: base ***
Feb 25 06:14:04 np0005629333 dracut[1546]: *** Including module: fs-lib ***
Feb 25 06:14:04 np0005629333 dracut[1546]: *** Including module: kdumpbase ***
Feb 25 06:14:05 np0005629333 dracut[1546]: *** Including module: microcode_ctl-fw_dir_override ***
Feb 25 06:14:05 np0005629333 dracut[1546]:  microcode_ctl module: mangling fw_dir
Feb 25 06:14:05 np0005629333 dracut[1546]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Feb 25 06:14:05 np0005629333 dracut[1546]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Feb 25 06:14:05 np0005629333 dracut[1546]:    microcode_ctl: configuration "intel" is ignored
Feb 25 06:14:05 np0005629333 dracut[1546]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Feb 25 06:14:05 np0005629333 dracut[1546]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Feb 25 06:14:05 np0005629333 dracut[1546]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Feb 25 06:14:05 np0005629333 dracut[1546]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Feb 25 06:14:05 np0005629333 dracut[1546]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Feb 25 06:14:05 np0005629333 dracut[1546]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Feb 25 06:14:05 np0005629333 dracut[1546]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Feb 25 06:14:05 np0005629333 dracut[1546]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Feb 25 06:14:05 np0005629333 dracut[1546]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Feb 25 06:14:05 np0005629333 dracut[1546]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Feb 25 06:14:05 np0005629333 dracut[1546]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Feb 25 06:14:05 np0005629333 dracut[1546]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Feb 25 06:14:05 np0005629333 dracut[1546]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Feb 25 06:14:05 np0005629333 dracut[1546]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Feb 25 06:14:05 np0005629333 dracut[1546]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Feb 25 06:14:05 np0005629333 dracut[1546]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Feb 25 06:14:05 np0005629333 dracut[1546]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Feb 25 06:14:05 np0005629333 dracut[1546]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Feb 25 06:14:05 np0005629333 dracut[1546]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Feb 25 06:14:05 np0005629333 dracut[1546]: *** Including module: openssl ***
Feb 25 06:14:05 np0005629333 dracut[1546]: *** Including module: shutdown ***
Feb 25 06:14:05 np0005629333 dracut[1546]: *** Including module: squash ***
Feb 25 06:14:06 np0005629333 dracut[1546]: *** Including modules done ***
Feb 25 06:14:06 np0005629333 dracut[1546]: *** Installing kernel module dependencies ***
Feb 25 06:14:06 np0005629333 irqbalance[804]: Cannot change IRQ 35 affinity: Operation not permitted
Feb 25 06:14:06 np0005629333 irqbalance[804]: IRQ 35 affinity is now unmanaged
Feb 25 06:14:06 np0005629333 irqbalance[804]: Cannot change IRQ 33 affinity: Operation not permitted
Feb 25 06:14:06 np0005629333 irqbalance[804]: IRQ 33 affinity is now unmanaged
Feb 25 06:14:06 np0005629333 irqbalance[804]: Cannot change IRQ 31 affinity: Operation not permitted
Feb 25 06:14:06 np0005629333 irqbalance[804]: IRQ 31 affinity is now unmanaged
Feb 25 06:14:06 np0005629333 irqbalance[804]: Cannot change IRQ 28 affinity: Operation not permitted
Feb 25 06:14:06 np0005629333 irqbalance[804]: IRQ 28 affinity is now unmanaged
Feb 25 06:14:06 np0005629333 irqbalance[804]: Cannot change IRQ 34 affinity: Operation not permitted
Feb 25 06:14:06 np0005629333 irqbalance[804]: IRQ 34 affinity is now unmanaged
Feb 25 06:14:06 np0005629333 irqbalance[804]: Cannot change IRQ 32 affinity: Operation not permitted
Feb 25 06:14:06 np0005629333 irqbalance[804]: IRQ 32 affinity is now unmanaged
Feb 25 06:14:06 np0005629333 irqbalance[804]: Cannot change IRQ 30 affinity: Operation not permitted
Feb 25 06:14:06 np0005629333 irqbalance[804]: IRQ 30 affinity is now unmanaged
Feb 25 06:14:06 np0005629333 irqbalance[804]: Cannot change IRQ 29 affinity: Operation not permitted
Feb 25 06:14:06 np0005629333 irqbalance[804]: IRQ 29 affinity is now unmanaged
Feb 25 06:14:06 np0005629333 dracut[1546]: *** Installing kernel module dependencies done ***
Feb 25 06:14:06 np0005629333 dracut[1546]: *** Resolving executable dependencies ***
Feb 25 06:14:07 np0005629333 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 25 06:14:08 np0005629333 dracut[1546]: *** Resolving executable dependencies done ***
Feb 25 06:14:08 np0005629333 dracut[1546]: *** Generating early-microcode cpio image ***
Feb 25 06:14:08 np0005629333 dracut[1546]: *** Store current command line parameters ***
Feb 25 06:14:08 np0005629333 dracut[1546]: Stored kernel commandline:
Feb 25 06:14:08 np0005629333 dracut[1546]: No dracut internal kernel commandline stored in the initramfs
Feb 25 06:14:08 np0005629333 dracut[1546]: *** Install squash loader ***
Feb 25 06:14:09 np0005629333 dracut[1546]: *** Squashing the files inside the initramfs ***
Feb 25 06:14:10 np0005629333 dracut[1546]: *** Squashing the files inside the initramfs done ***
Feb 25 06:14:10 np0005629333 dracut[1546]: *** Creating image file '/boot/initramfs-5.14.0-686.el9.x86_64kdump.img' ***
Feb 25 06:14:10 np0005629333 dracut[1546]: *** Hardlinking files ***
Feb 25 06:14:10 np0005629333 dracut[1546]: *** Hardlinking files done ***
Feb 25 06:14:10 np0005629333 dracut[1546]: *** Creating initramfs image file '/boot/initramfs-5.14.0-686.el9.x86_64kdump.img' done ***
Feb 25 06:14:11 np0005629333 kdumpctl[1031]: kdump: kexec: loaded kdump kernel
Feb 25 06:14:11 np0005629333 kdumpctl[1031]: kdump: Starting kdump: [OK]
Feb 25 06:14:11 np0005629333 systemd[1]: Finished Crash recovery kernel arming.
Feb 25 06:14:11 np0005629333 systemd[1]: Startup finished in 1.284s (kernel) + 2.620s (initrd) + 16.978s (userspace) = 20.884s.
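
The dracut run above (pid 1546) rebuilt the kdump initramfs, and kdumpctl then loaded it via kexec just before systemd declared boot complete. A minimal sketch of how to inspect or repeat this by hand with the standard RHEL kdump tooling (exact output varies by host):

    cat /sys/kernel/kexec_crash_loaded   # 1 once the crash kernel is loaded
    kdumpctl status                      # kdumpctl's own view of the service
    kdumpctl rebuild                     # force-regenerate the kdump initramfs, as above
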
Feb 25 06:14:21 np0005629333 systemd[1]: Created slice User Slice of UID 1000.
Feb 25 06:14:21 np0005629333 systemd[1]: Starting User Runtime Directory /run/user/1000...
Feb 25 06:14:21 np0005629333 systemd-logind[811]: New session 1 of user zuul.
Feb 25 06:14:21 np0005629333 systemd[1]: Finished User Runtime Directory /run/user/1000.
Feb 25 06:14:21 np0005629333 systemd[1]: Starting User Manager for UID 1000...
Feb 25 06:14:21 np0005629333 systemd[4800]: Queued start job for default target Main User Target.
Feb 25 06:14:21 np0005629333 systemd[4800]: Created slice User Application Slice.
Feb 25 06:14:21 np0005629333 systemd[4800]: Started Mark boot as successful after the user session has run 2 minutes.
Feb 25 06:14:21 np0005629333 systemd[4800]: Started Daily Cleanup of User's Temporary Directories.
Feb 25 06:14:21 np0005629333 systemd[4800]: Reached target Paths.
Feb 25 06:14:21 np0005629333 systemd[4800]: Reached target Timers.
Feb 25 06:14:21 np0005629333 systemd[4800]: Starting D-Bus User Message Bus Socket...
Feb 25 06:14:21 np0005629333 systemd[4800]: Starting Create User's Volatile Files and Directories...
Feb 25 06:14:21 np0005629333 systemd[4800]: Listening on D-Bus User Message Bus Socket.
Feb 25 06:14:21 np0005629333 systemd[4800]: Reached target Sockets.
Feb 25 06:14:21 np0005629333 systemd[4800]: Finished Create User's Volatile Files and Directories.
Feb 25 06:14:21 np0005629333 systemd[4800]: Reached target Basic System.
Feb 25 06:14:21 np0005629333 systemd[4800]: Reached target Main User Target.
Feb 25 06:14:21 np0005629333 systemd[4800]: Startup finished in 134ms.
Feb 25 06:14:21 np0005629333 systemd[1]: Started User Manager for UID 1000.
Feb 25 06:14:21 np0005629333 systemd[1]: Started Session 1 of User zuul.
Feb 25 06:14:22 np0005629333 python3[4882]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:14:24 np0005629333 python3[4910]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:14:27 np0005629333 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 25 06:14:30 np0005629333 python3[4970]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:14:31 np0005629333 python3[5010]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Feb 25 06:14:33 np0005629333 python3[5036]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDr9Q9FFKCR01fvtVyjlaTGptqvjVkUJ1YcsyFqNu1SeNNVIdvzuhSyd5FkOfyEXyA1OArT4VSBG8Huru4VcSdU93OAjh4bp+9LF8KjXAujjwgJ4OndctKPN84ivvs/K7kuXkcOSpNhevc26o8eHyTBnVoJ/Dq1BXirNnrrQP1srZtIOdlDU3oSkzklsZV/wI7qztcbD4F9LiPDDjqGJG8MzE9/v/mS4TOZnsf6iSY7mUkePK1SUJblK5yVHgT2cEu1Yz+f1mAz+eJCEmVaHc/1hv4a2tuvtJLLR0j7DrCuHxL0JmkQ4jgqWNhzt83IMmIkFntxP1CPoZmgT1oNdgafDRPAmXG94bA9HHFHJv/04f19NZeUkg1xof3ASG42LwOTBi0u8bbwtj95dEsaVaeiqIBrrLZxSchGVXbhTwJ/Ub7lQYV/dwqU9xIQ+5rAkNLoEmtt8aFe3hO2rqScwvuja2WNf8xkVmy+BqCpEHd28qilmLAShMpqyvymH+ZTjrM= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 25 06:14:34 np0005629333 python3[5060]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:14:34 np0005629333 python3[5159]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 25 06:14:34 np0005629333 python3[5230]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772018074.3488038-207-244878220979123/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=248f6e5ea8884557b7573ca1ced5658b_id_rsa follow=False checksum=d2e10659a31c671038cf04c7ca8073811909073b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:14:35 np0005629333 python3[5353]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 25 06:14:35 np0005629333 python3[5424]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772018075.3485239-240-152448523268454/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=248f6e5ea8884557b7573ca1ced5658b_id_rsa.pub follow=False checksum=f40bed541363b4f0fcd33191b469f796bc62f44a backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
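
The tasks above install the Zuul build keypair. Ansible logs file modes in decimal, so mode=448 is 0o700 (the ~/.ssh directory), mode=384 is 0o600 (the private key), and mode=420 is 0o644 (the public key); likewise mode=493 elsewhere in this log is 0o755. A rough shell equivalent, with source_id_rsa and source_id_rsa.pub standing in for the executor-staged files (the key material itself is elided in the log):

    install -d -m 0700 /home/zuul/.ssh
    install -m 0600 source_id_rsa     /home/zuul/.ssh/id_rsa
    install -m 0644 source_id_rsa.pub /home/zuul/.ssh/id_rsa.pub
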
Feb 25 06:14:37 np0005629333 python3[5472]: ansible-ping Invoked with data=pong
Feb 25 06:14:38 np0005629333 python3[5496]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:14:40 np0005629333 python3[5554]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Feb 25 06:14:40 np0005629333 python3[5586]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:14:41 np0005629333 python3[5610]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:14:41 np0005629333 python3[5634]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:14:41 np0005629333 python3[5658]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:14:42 np0005629333 python3[5682]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:14:42 np0005629333 python3[5706]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:14:44 np0005629333 python3[5732]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:14:44 np0005629333 python3[5810]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 25 06:14:45 np0005629333 python3[5883]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1772018084.3090756-21-132670922393678/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:14:45 np0005629333 python3[5931]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 25 06:14:46 np0005629333 python3[5955]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 25 06:14:46 np0005629333 python3[5979]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 25 06:14:46 np0005629333 python3[6003]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 25 06:14:46 np0005629333 python3[6027]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 25 06:14:47 np0005629333 python3[6051]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 25 06:14:47 np0005629333 python3[6075]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 25 06:14:47 np0005629333 python3[6099]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 25 06:14:47 np0005629333 python3[6123]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 25 06:14:48 np0005629333 python3[6147]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 25 06:14:48 np0005629333 python3[6171]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 25 06:14:48 np0005629333 python3[6195]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 25 06:14:48 np0005629333 python3[6219]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICWBreHW95Wz2Toz5YwCGQwFcUG8oFYkienDh9tntmDc ralfieri@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 25 06:14:49 np0005629333 python3[6243]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 25 06:14:49 np0005629333 python3[6267]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 25 06:14:49 np0005629333 python3[6291]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 25 06:14:50 np0005629333 python3[6315]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 25 06:14:50 np0005629333 python3[6339]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 25 06:14:50 np0005629333 python3[6363]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 25 06:14:50 np0005629333 python3[6387]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 25 06:14:51 np0005629333 python3[6411]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 25 06:14:51 np0005629333 python3[6435]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 25 06:14:51 np0005629333 python3[6459]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 25 06:14:51 np0005629333 python3[6483]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 25 06:14:52 np0005629333 python3[6507]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 25 06:14:52 np0005629333 python3[6531]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
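
Each ansible-authorized_key invocation above appends one maintainer public key to /home/zuul/.ssh/authorized_keys, skipping keys that are already present. A minimal idempotent shell equivalent, with KEY as a placeholder rather than one of the logged keys:

    KEY='ssh-ed25519 AAAA... someone@example'   # placeholder, not a key from this log
    AK=/home/zuul/.ssh/authorized_keys
    grep -qxF "$KEY" "$AK" 2>/dev/null || echo "$KEY" >> "$AK"
    chmod 0600 "$AK"
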
Feb 25 06:14:55 np0005629333 python3[6557]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb 25 06:14:56 np0005629333 systemd[1]: Starting Time & Date Service...
Feb 25 06:14:56 np0005629333 systemd[1]: Started Time & Date Service.
Feb 25 06:14:56 np0005629333 systemd-timedated[6559]: Changed time zone to 'UTC' (UTC).
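
The community.general.timezone task talks to systemd-timedated over D-Bus, which is why the service starts on demand just before the change. The same operation from a shell:

    timedatectl set-timezone UTC
    timedatectl                       # prints the current zone for verification
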
Feb 25 06:14:56 np0005629333 python3[6588]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:14:56 np0005629333 python3[6664]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 25 06:14:57 np0005629333 python3[6735]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1772018096.7600608-153-272576862969354/source _original_basename=tmpx4vrzg6_ follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:14:57 np0005629333 python3[6835]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 25 06:14:58 np0005629333 python3[6906]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1772018097.683973-183-248221071014756/source _original_basename=tmpcdv_rd2p follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:14:59 np0005629333 python3[7008]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 25 06:14:59 np0005629333 python3[7081]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1772018098.870438-231-18307407097156/source _original_basename=tmp_c_ftjzd follow=False checksum=2e7e63ba56c9b487ea71081bee61c12a1e9cb2fe backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:15:00 np0005629333 python3[7129]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:15:00 np0005629333 python3[7155]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:15:00 np0005629333 python3[7235]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 25 06:15:01 np0005629333 python3[7308]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1772018100.6362324-273-182202874068930/source _original_basename=tmpj4xgb_t9 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:15:01 np0005629333 python3[7359]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-200c-d3c2-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
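
mode=288 on the sudoers drop-in is 0o440, the conventional permission for files under /etc/sudoers.d, and the follow-up visudo run parse-checks the sudoers configuration before the job relies on it. The same pattern by hand:

    install -m 0440 zuul-sudo-grep /etc/sudoers.d/zuul-sudo-grep
    visudo -c                                   # check /etc/sudoers plus all drop-ins
    visudo -cf /etc/sudoers.d/zuul-sudo-grep    # or check just the new fragment
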
Feb 25 06:15:02 np0005629333 python3[7387]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-200c-d3c2-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Feb 25 06:15:03 np0005629333 python3[7415]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:15:08 np0005629333 chronyd[810]: Selected source 167.160.187.12 (2.centos.pool.ntp.org)
Feb 25 06:15:20 np0005629333 python3[7441]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:15:26 np0005629333 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 25 06:15:52 np0005629333 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Feb 25 06:15:52 np0005629333 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Feb 25 06:15:52 np0005629333 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Feb 25 06:15:52 np0005629333 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Feb 25 06:15:52 np0005629333 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Feb 25 06:15:52 np0005629333 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Feb 25 06:15:52 np0005629333 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Feb 25 06:15:52 np0005629333 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Feb 25 06:15:52 np0005629333 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Feb 25 06:15:52 np0005629333 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Feb 25 06:15:53 np0005629333 NetworkManager[875]: <info>  [1772018153.0058] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb 25 06:15:53 np0005629333 systemd-udevd[7445]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 06:15:53 np0005629333 NetworkManager[875]: <info>  [1772018153.0225] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 25 06:15:53 np0005629333 NetworkManager[875]: <info>  [1772018153.0254] settings: (eth1): created default wired connection 'Wired connection 1'
Feb 25 06:15:53 np0005629333 NetworkManager[875]: <info>  [1772018153.0258] device (eth1): carrier: link connected
Feb 25 06:15:53 np0005629333 NetworkManager[875]: <info>  [1772018153.0260] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Feb 25 06:15:53 np0005629333 NetworkManager[875]: <info>  [1772018153.0264] policy: auto-activating connection 'Wired connection 1' (28ca6592-12e0-394a-9d0e-3b8dc5e091b0)
Feb 25 06:15:53 np0005629333 NetworkManager[875]: <info>  [1772018153.0268] device (eth1): Activation: starting connection 'Wired connection 1' (28ca6592-12e0-394a-9d0e-3b8dc5e091b0)
Feb 25 06:15:53 np0005629333 NetworkManager[875]: <info>  [1772018153.0269] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 25 06:15:53 np0005629333 NetworkManager[875]: <info>  [1772018153.0271] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 25 06:15:53 np0005629333 NetworkManager[875]: <info>  [1772018153.0274] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 25 06:15:53 np0005629333 NetworkManager[875]: <info>  [1772018153.0279] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 25 06:15:53 np0005629333 python3[7471]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-2f1c-8727-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
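
ip -j link emits the link table as JSON, which is easier for a playbook to parse than the fixed-column output of plain ip link. For example, assuming jq is available:

    ip -j link | jq -r '.[].ifname'   # interface names only, e.g. lo, eth0, eth1
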
Feb 25 06:16:03 np0005629333 python3[7551]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 25 06:16:04 np0005629333 python3[7624]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772018163.4820056-102-204124643915705/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=c2bea641aa50318eb491304d75f9e9c6b59b657f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:16:05 np0005629333 python3[7674]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 25 06:16:05 np0005629333 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Feb 25 06:16:05 np0005629333 systemd[1]: Stopped Network Manager Wait Online.
Feb 25 06:16:05 np0005629333 systemd[1]: Stopping Network Manager Wait Online...
Feb 25 06:16:05 np0005629333 systemd[1]: Stopping Network Manager...
Feb 25 06:16:05 np0005629333 NetworkManager[875]: <info>  [1772018165.0587] caught SIGTERM, shutting down normally.
Feb 25 06:16:05 np0005629333 NetworkManager[875]: <info>  [1772018165.0594] dhcp4 (eth0): canceled DHCP transaction
Feb 25 06:16:05 np0005629333 NetworkManager[875]: <info>  [1772018165.0594] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 25 06:16:05 np0005629333 NetworkManager[875]: <info>  [1772018165.0594] dhcp4 (eth0): state changed no lease
Feb 25 06:16:05 np0005629333 NetworkManager[875]: <info>  [1772018165.0596] manager: NetworkManager state is now CONNECTING
Feb 25 06:16:05 np0005629333 NetworkManager[875]: <info>  [1772018165.0658] dhcp4 (eth1): canceled DHCP transaction
Feb 25 06:16:05 np0005629333 NetworkManager[875]: <info>  [1772018165.0658] dhcp4 (eth1): state changed no lease
Feb 25 06:16:05 np0005629333 NetworkManager[875]: <info>  [1772018165.0707] exiting (success)
Feb 25 06:16:05 np0005629333 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 25 06:16:05 np0005629333 systemd[1]: NetworkManager.service: Deactivated successfully.
Feb 25 06:16:05 np0005629333 systemd[1]: Stopped Network Manager.
Feb 25 06:16:05 np0005629333 systemd[1]: Starting Network Manager...
Feb 25 06:16:05 np0005629333 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.1276] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:5e9335c6-0936-4191-819c-92681868f3d7)
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.1280] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.1347] manager[0x56351e8a6000]: monitoring kernel firmware directory '/lib/firmware'.
Feb 25 06:16:05 np0005629333 systemd[1]: Starting Hostname Service...
Feb 25 06:16:05 np0005629333 systemd[1]: Started Hostname Service.
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2100] hostname: hostname: using hostnamed
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2103] hostname: static hostname changed from (none) to "np0005629333.novalocal"
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2111] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2120] manager[0x56351e8a6000]: rfkill: Wi-Fi hardware radio set enabled
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2120] manager[0x56351e8a6000]: rfkill: WWAN hardware radio set enabled
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2164] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2165] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2166] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2167] manager: Networking is enabled by state file
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2171] settings: Loaded settings plugin: keyfile (internal)
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2176] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2221] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2234] dhcp: init: Using DHCP client 'internal'
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2239] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2249] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2258] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2271] device (lo): Activation: starting connection 'lo' (c854c08b-1693-43bc-843a-c34c47825cf4)
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2281] device (eth0): carrier: link connected
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2286] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2295] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2296] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2306] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2318] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2326] device (eth1): carrier: link connected
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2331] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2340] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (28ca6592-12e0-394a-9d0e-3b8dc5e091b0) (indicated)
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2341] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2350] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2362] device (eth1): Activation: starting connection 'Wired connection 1' (28ca6592-12e0-394a-9d0e-3b8dc5e091b0)
Feb 25 06:16:05 np0005629333 systemd[1]: Started Network Manager.
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2369] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2379] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2384] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2388] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2392] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2397] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2401] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2407] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2414] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2441] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2446] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 25 06:16:05 np0005629333 systemd[1]: Starting Network Manager Wait Online...
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2461] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2466] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2493] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2503] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2515] device (lo): Activation: successful, device activated.
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2531] dhcp4 (eth0): state changed new lease, address=38.102.83.173
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2544] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2620] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2635] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2637] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2640] manager: NetworkManager state is now CONNECTED_SITE
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2642] device (eth0): Activation: successful, device activated.
Feb 25 06:16:05 np0005629333 NetworkManager[7680]: <info>  [1772018165.2647] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 25 06:16:05 np0005629333 python3[7758]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-2f1c-8727-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:16:15 np0005629333 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 25 06:16:35 np0005629333 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 25 06:16:50 np0005629333 NetworkManager[7680]: <info>  [1772018210.5384] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 25 06:16:50 np0005629333 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 25 06:16:50 np0005629333 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 25 06:16:50 np0005629333 NetworkManager[7680]: <info>  [1772018210.5688] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 25 06:16:50 np0005629333 NetworkManager[7680]: <info>  [1772018210.5691] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 25 06:16:50 np0005629333 NetworkManager[7680]: <info>  [1772018210.5697] device (eth1): Activation: successful, device activated.
Feb 25 06:16:50 np0005629333 NetworkManager[7680]: <info>  [1772018210.5705] manager: startup complete
Feb 25 06:16:50 np0005629333 NetworkManager[7680]: <info>  [1772018210.5707] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Feb 25 06:16:50 np0005629333 NetworkManager[7680]: <warn>  [1772018210.5713] device (eth1): Activation: failed for connection 'Wired connection 1'
Feb 25 06:16:50 np0005629333 NetworkManager[7680]: <info>  [1772018210.5722] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Feb 25 06:16:50 np0005629333 systemd[1]: Finished Network Manager Wait Online.
Feb 25 06:16:50 np0005629333 NetworkManager[7680]: <info>  [1772018210.5827] dhcp4 (eth1): canceled DHCP transaction
Feb 25 06:16:50 np0005629333 NetworkManager[7680]: <info>  [1772018210.5827] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 25 06:16:50 np0005629333 NetworkManager[7680]: <info>  [1772018210.5828] dhcp4 (eth1): state changed no lease
Feb 25 06:16:50 np0005629333 NetworkManager[7680]: <info>  [1772018210.5843] policy: auto-activating connection 'ci-private-network' (6041f6e9-99fd-5935-a363-bb7dd4d3c5fc)
Feb 25 06:16:50 np0005629333 NetworkManager[7680]: <info>  [1772018210.5850] device (eth1): Activation: starting connection 'ci-private-network' (6041f6e9-99fd-5935-a363-bb7dd4d3c5fc)
Feb 25 06:16:50 np0005629333 NetworkManager[7680]: <info>  [1772018210.5851] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 25 06:16:50 np0005629333 NetworkManager[7680]: <info>  [1772018210.5854] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 25 06:16:50 np0005629333 NetworkManager[7680]: <info>  [1772018210.5863] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 25 06:16:50 np0005629333 NetworkManager[7680]: <info>  [1772018210.5875] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 25 06:16:50 np0005629333 NetworkManager[7680]: <info>  [1772018210.5921] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 25 06:16:50 np0005629333 NetworkManager[7680]: <info>  [1772018210.5923] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 25 06:16:50 np0005629333 NetworkManager[7680]: <info>  [1772018210.5930] device (eth1): Activation: successful, device activated.
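
The restart sequence above makes NetworkManager pick up the ci-private-network profile copied at 06:16:04: 'Wired connection 1' fails DHCP on eth1 (ip-config-unavailable) and the new profile auto-activates in its place. The keyfile's content is not logged (content=NOT_LOGGING_PARAMETER), but a profile of this kind generally follows the keyfile format sketched below; the addressing is illustrative only:

    # /etc/NetworkManager/system-connections/ci-private-network.nmconnection (0600 root:root)
    [connection]
    id=ci-private-network
    type=ethernet
    interface-name=eth1

    [ipv4]
    method=manual
    address1=192.0.2.10/24    # placeholder address, not from the log

For a change like this, nmcli connection reload would also have re-read the keyfiles without the full service restart seen above.
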
Feb 25 06:16:57 np0005629333 systemd[4800]: Starting Mark boot as successful...
Feb 25 06:16:57 np0005629333 systemd[4800]: Finished Mark boot as successful.
Feb 25 06:17:00 np0005629333 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 25 06:17:05 np0005629333 systemd-logind[811]: Session 1 logged out. Waiting for processes to exit.
Feb 25 06:17:05 np0005629333 systemd-logind[811]: New session 3 of user zuul.
Feb 25 06:17:05 np0005629333 systemd[1]: Started Session 3 of User zuul.
Feb 25 06:17:05 np0005629333 python3[7872]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 25 06:17:05 np0005629333 python3[7945]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772018225.20557-267-88669693949022/source _original_basename=tmpjczsqbmh follow=False checksum=590c4f6e76c5d996bdd69bb583ef9a18e1e4a6fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:17:08 np0005629333 systemd[1]: session-3.scope: Deactivated successfully.
Feb 25 06:17:08 np0005629333 systemd-logind[811]: Session 3 logged out. Waiting for processes to exit.
Feb 25 06:17:08 np0005629333 systemd-logind[811]: Removed session 3.
Feb 25 06:19:57 np0005629333 systemd[4800]: Created slice User Background Tasks Slice.
Feb 25 06:19:57 np0005629333 systemd[4800]: Starting Cleanup of User's Temporary Files and Directories...
Feb 25 06:19:57 np0005629333 systemd[4800]: Finished Cleanup of User's Temporary Files and Directories.
Feb 25 06:23:46 np0005629333 systemd-logind[811]: New session 4 of user zuul.
Feb 25 06:23:46 np0005629333 systemd[1]: Started Session 4 of User zuul.
Feb 25 06:23:46 np0005629333 python3[8009]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-b3d5-1b0f-00000000222d-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:23:47 np0005629333 python3[8038]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:23:47 np0005629333 python3[8064]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:23:47 np0005629333 python3[8090]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:23:47 np0005629333 python3[8116]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:23:48 np0005629333 python3[8142]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:23:48 np0005629333 python3[8220]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 25 06:23:49 np0005629333 python3[8293]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772018628.5872037-511-276331222749868/source _original_basename=tmpi51wjh94 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:23:50 np0005629333 python3[8343]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 25 06:23:50 np0005629333 systemd[1]: Reloading.
Feb 25 06:23:50 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
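
The content of the override.conf drop-in is not logged, but the tasks that follow wait for io.max files under the top-level slices, which requires the io controller to be active there; a drop-in switching on I/O accounting globally would have that effect. Purely an assumption about this job's configuration:

    # /etc/systemd/system.conf.d/override.conf -- assumed content, not from the log
    [Manager]
    DefaultIOAccounting=yes

The systemd_service task with daemon_reload=True then triggers the "Reloading." line above, the equivalent of systemctl daemon-reload.
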
Feb 25 06:23:52 np0005629333 python3[8406]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Feb 25 06:23:52 np0005629333 python3[8432]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:23:52 np0005629333 python3[8460]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:23:53 np0005629333 python3[8488]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:23:53 np0005629333 python3[8516]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:23:53 np0005629333 python3[8543]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-b3d5-1b0f-000000002234-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:23:54 np0005629333 python3[8573]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
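The wait_for/echo/cat sequence above programs the cgroup v2 io.max controller for each top-level slice. The line format is "MAJ:MIN riops=N wiops=N rbps=N wbps=N"; device 252:0 is presumably the virtio root disk (/dev/vda), capped at 18000 IOPS and 262144000 B/s (250 MiB/s) in each direction. The final stat probes for a kubepods.slice that is not expected to exist on this host. The four writes collapse into one looped task (values copied verbatim from the logged commands):

    - name: Cap IO on the top-level slices
      ansible.builtin.shell: |
        echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/{{ item }}/io.max
      loop:
        - init.scope
        - machine.slice
        - system.slice
        - user.slice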
Feb 25 06:23:56 np0005629333 systemd[1]: session-4.scope: Deactivated successfully.
Feb 25 06:23:56 np0005629333 systemd[1]: session-4.scope: Consumed 3.914s CPU time.
Feb 25 06:23:56 np0005629333 systemd-logind[811]: Session 4 logged out. Waiting for processes to exit.
Feb 25 06:23:56 np0005629333 systemd-logind[811]: Removed session 4.
Feb 25 06:23:58 np0005629333 systemd-logind[811]: New session 5 of user zuul.
Feb 25 06:23:58 np0005629333 systemd[1]: Started Session 5 of User zuul.
Feb 25 06:23:58 np0005629333 python3[8610]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
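The dnf invocation installs the container tooling; an equivalent task (package names copied from the log):

    - name: Install container tooling
      ansible.builtin.dnf:
        name:
          - podman
          - buildah
        state: present

The setsebool and SELinux policy-reload lines that follow appear to be side effects of SELinux policy package scriptlets run during this transaction, not separate tasks.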
Feb 25 06:24:07 np0005629333 setsebool[8649]: The virt_use_nfs policy boolean was changed to 1 by root
Feb 25 06:24:07 np0005629333 setsebool[8649]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Feb 25 06:24:18 np0005629333 kernel: SELinux:  Converting 385 SID table entries...
Feb 25 06:24:18 np0005629333 kernel: SELinux:  policy capability network_peer_controls=1
Feb 25 06:24:18 np0005629333 kernel: SELinux:  policy capability open_perms=1
Feb 25 06:24:18 np0005629333 kernel: SELinux:  policy capability extended_socket_class=1
Feb 25 06:24:18 np0005629333 kernel: SELinux:  policy capability always_check_network=0
Feb 25 06:24:18 np0005629333 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 25 06:24:18 np0005629333 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 25 06:24:18 np0005629333 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 25 06:24:30 np0005629333 kernel: SELinux:  Converting 388 SID table entries...
Feb 25 06:24:30 np0005629333 kernel: SELinux:  policy capability network_peer_controls=1
Feb 25 06:24:30 np0005629333 kernel: SELinux:  policy capability open_perms=1
Feb 25 06:24:30 np0005629333 kernel: SELinux:  policy capability extended_socket_class=1
Feb 25 06:24:30 np0005629333 kernel: SELinux:  policy capability always_check_network=0
Feb 25 06:24:30 np0005629333 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 25 06:24:30 np0005629333 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 25 06:24:30 np0005629333 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
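The two setsebool lines above record virt_use_nfs and virt_sandbox_use_all_caps being enabled, each followed by a kernel policy reload. To pin those booleans explicitly in a play rather than relying on scriptlets, a hypothetical equivalent task would be:

    - name: Persist the virt SELinux booleans
      ansible.posix.seboolean:
        name: "{{ item }}"
        state: true
        persistent: true
      loop:
        - virt_use_nfs
        - virt_sandbox_use_all_caps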
Feb 25 06:24:51 np0005629333 dbus-broker-launch[795]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Feb 25 06:24:51 np0005629333 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 25 06:24:51 np0005629333 systemd[1]: Starting man-db-cache-update.service...
Feb 25 06:24:51 np0005629333 systemd[1]: Reloading.
Feb 25 06:24:51 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:24:52 np0005629333 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 25 06:24:54 np0005629333 python3[11317]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-e4eb-049c-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:24:56 np0005629333 kernel: evm: overlay not supported
Feb 25 06:24:56 np0005629333 systemd[4800]: Starting D-Bus User Message Bus...
Feb 25 06:24:56 np0005629333 dbus-broker-launch[12312]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Feb 25 06:24:56 np0005629333 dbus-broker-launch[12312]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Feb 25 06:24:56 np0005629333 systemd[4800]: Started D-Bus User Message Bus.
Feb 25 06:24:56 np0005629333 dbus-broker-launch[12312]: Ready
Feb 25 06:24:56 np0005629333 systemd[4800]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Feb 25 06:24:56 np0005629333 systemd[4800]: Created slice Slice /user.
Feb 25 06:24:56 np0005629333 systemd[4800]: podman-11993.scope: unit configures an IP firewall, but not running as root.
Feb 25 06:24:56 np0005629333 systemd[4800]: (This warning is only shown for the first unit using IP firewalling.)
Feb 25 06:24:56 np0005629333 systemd[4800]: Started podman-11993.scope.
Feb 25 06:24:56 np0005629333 systemd[4800]: Started podman-pause-93c9a094.scope.
Feb 25 06:24:57 np0005629333 python3[12795]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.89:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.89:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:24:57 np0005629333 python3[12795]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
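Decoded (#012 is the rsyslog escape for a newline), the blockinfile task appends a marked TOML block flagging the CI registry as insecure (plain HTTP):

    - name: Allow pulls from the plain-HTTP CI registry
      ansible.builtin.blockinfile:
        path: /etc/containers/registries.conf
        block: |
          [[registry]]
          location = "38.102.83.89:5001"
          insecure = true

blockinfile wraps the block in "# BEGIN ANSIBLE MANAGED BLOCK" / "# END ANSIBLE MANAGED BLOCK" markers, matching the marker defaults in the logged parameters. The [WARNING] that follows concerns the auto-created /root/.ansible/tmp directory, not the file edit itself.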
Feb 25 06:24:57 np0005629333 systemd[1]: session-5.scope: Deactivated successfully.
Feb 25 06:24:57 np0005629333 systemd[1]: session-5.scope: Consumed 42.351s CPU time.
Feb 25 06:24:57 np0005629333 systemd-logind[811]: Session 5 logged out. Waiting for processes to exit.
Feb 25 06:24:57 np0005629333 systemd-logind[811]: Removed session 5.
Feb 25 06:25:34 np0005629333 systemd-logind[811]: New session 6 of user zuul.
Feb 25 06:25:34 np0005629333 systemd[1]: Started Session 6 of User zuul.
Feb 25 06:25:34 np0005629333 python3[28630]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCRy82wWeNJHI2UOSWx2qahS0rJAOIvS9svwQ/OxnqKEXzpr0TzkFOBgFZDRRdISDzBE+V85JARAm1BsWE7x4Oo= zuul@np0005629332.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 25 06:25:35 np0005629333 python3[28811]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCRy82wWeNJHI2UOSWx2qahS0rJAOIvS9svwQ/OxnqKEXzpr0TzkFOBgFZDRRdISDzBE+V85JARAm1BsWE7x4Oo= zuul@np0005629332.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 25 06:25:35 np0005629333 python3[29095]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005629333.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Feb 25 06:25:36 np0005629333 python3[29352]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCRy82wWeNJHI2UOSWx2qahS0rJAOIvS9svwQ/OxnqKEXzpr0TzkFOBgFZDRRdISDzBE+V85JARAm1BsWE7x4Oo= zuul@np0005629332.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
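The same ECDSA key from the Zuul executor (np0005629332) is authorized for three accounts: zuul, root, and the freshly created cloud-admin user. As a single looped task (key truncated here for brevity; the full key appears in the log lines above):

    - name: Authorize the Zuul executor key
      ansible.posix.authorized_key:
        user: "{{ item }}"
        state: present
        key: "ecdsa-sha2-nistp256 AAAAE2Vj... zuul@np0005629332.novalocal"
      loop:
        - zuul
        - root
        - cloud-admin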
Feb 25 06:25:36 np0005629333 python3[29568]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 25 06:25:37 np0005629333 python3[29797]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1772018736.6741004-135-214213155157624/source _original_basename=tmpkxn5g7kx follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:25:38 np0005629333 python3[30082]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Feb 25 06:25:38 np0005629333 systemd[1]: Starting Hostname Service...
Feb 25 06:25:38 np0005629333 systemd[1]: Started Hostname Service.
Feb 25 06:25:38 np0005629333 systemd-hostnamed[30181]: Changed pretty hostname to 'compute-0'
Feb 25 06:25:38 np0005629333 systemd-hostnamed[30181]: Hostname set to <compute-0> (static)
Feb 25 06:25:38 np0005629333 NetworkManager[7680]: <info>  [1772018738.3685] hostname: static hostname changed from "np0005629333.novalocal" to "compute-0"
Feb 25 06:25:38 np0005629333 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 25 06:25:38 np0005629333 systemd[1]: Started Network Manager Script Dispatcher Service.
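The rename is a single module call; systemd-hostnamed applies it as both the pretty and static hostname, and NetworkManager and its dispatcher pick up the change (lines above). Equivalent task:

    - name: Rename the node for the deployment
      ansible.builtin.hostname:
        name: compute-0
        use: systemd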
Feb 25 06:25:38 np0005629333 systemd[1]: session-6.scope: Deactivated successfully.
Feb 25 06:25:38 np0005629333 systemd[1]: session-6.scope: Consumed 2.087s CPU time.
Feb 25 06:25:38 np0005629333 systemd-logind[811]: Session 6 logged out. Waiting for processes to exit.
Feb 25 06:25:38 np0005629333 systemd-logind[811]: Removed session 6.
Feb 25 06:25:39 np0005629333 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 25 06:25:39 np0005629333 systemd[1]: Finished man-db-cache-update.service.
Feb 25 06:25:39 np0005629333 systemd[1]: man-db-cache-update.service: Consumed 44.968s CPU time.
Feb 25 06:25:39 np0005629333 systemd[1]: run-rd5e1a52e54784ae4bd9b0146a11eeb57.service: Deactivated successfully.
Feb 25 06:25:48 np0005629333 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 25 06:26:08 np0005629333 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 25 06:28:57 np0005629333 systemd[1]: Starting Cleanup of Temporary Directories...
Feb 25 06:28:57 np0005629333 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Feb 25 06:28:57 np0005629333 systemd[1]: Finished Cleanup of Temporary Directories.
Feb 25 06:28:57 np0005629333 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Feb 25 06:29:37 np0005629333 systemd-logind[811]: New session 7 of user zuul.
Feb 25 06:29:37 np0005629333 systemd[1]: Started Session 7 of User zuul.
Feb 25 06:29:38 np0005629333 python3[30640]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:29:39 np0005629333 python3[30756]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 25 06:29:40 np0005629333 python3[30829]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1772018979.6322377-34310-182641894551570/source mode=0755 _original_basename=delorean.repo follow=False checksum=c7624fe5e858d4139de1ac159778eb6fd097c2ca backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:29:40 np0005629333 python3[30855]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 25 06:29:40 np0005629333 python3[30928]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1772018979.6322377-34310-182641894551570/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:29:41 np0005629333 python3[30954]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 25 06:29:41 np0005629333 python3[31027]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1772018979.6322377-34310-182641894551570/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:29:41 np0005629333 python3[31053]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 25 06:29:42 np0005629333 python3[31126]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1772018979.6322377-34310-182641894551570/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:29:42 np0005629333 python3[31152]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 25 06:29:42 np0005629333 python3[31225]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1772018979.6322377-34310-182641894551570/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:29:42 np0005629333 python3[31251]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 25 06:29:43 np0005629333 python3[31324]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1772018979.6322377-34310-182641894551570/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:29:43 np0005629333 python3[31350]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 25 06:29:43 np0005629333 python3[31423]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1772018979.6322377-34310-182641894551570/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=06a0a916cb7cbc51b08d6616a672f1322305cccf backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
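Six stat/copy pairs deploy the DLRN and repo-setup .repo files plus an .md5 stamp into /etc/yum.repos.d/. Collapsed into one looped task (file list and the unusual 0755 mode copied from the log):

    - name: Deploy the CI yum repositories
      ansible.builtin.copy:
        src: "{{ item }}"
        dest: /etc/yum.repos.d/
        mode: "0755"
      loop:
        - delorean.repo
        - delorean-antelope-testing.repo
        - repo-setup-centos-highavailability.repo
        - repo-setup-centos-powertools.repo
        - repo-setup-centos-appstream.repo
        - repo-setup-centos-baseos.repo
        - delorean.repo.md5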
Feb 25 06:29:57 np0005629333 python3[31481]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:30:12 np0005629333 chronyd[810]: Selected source 138.197.164.54 (2.centos.pool.ntp.org)
Feb 25 06:34:56 np0005629333 systemd[1]: session-7.scope: Deactivated successfully.
Feb 25 06:34:56 np0005629333 systemd[1]: session-7.scope: Consumed 4.640s CPU time.
Feb 25 06:34:56 np0005629333 systemd-logind[811]: Session 7 logged out. Waiting for processes to exit.
Feb 25 06:34:56 np0005629333 systemd-logind[811]: Removed session 7.
Feb 25 06:35:57 np0005629333 systemd[1]: Starting dnf makecache...
Feb 25 06:35:57 np0005629333 dnf[31493]: Failed determining last makecache time.
Feb 25 06:35:58 np0005629333 dnf[31493]: delorean-openstack-barbican-42b4c41831408a8e323  73 kB/s |  13 kB     00:00
Feb 25 06:35:58 np0005629333 dnf[31493]: delorean-python-glean-642fffe0203a8ffcc2443db52 446 kB/s |  65 kB     00:00
Feb 25 06:35:58 np0005629333 dnf[31493]: delorean-openstack-cinder-e95a374f4f00ef02d562d 358 kB/s |  32 kB     00:00
Feb 25 06:35:58 np0005629333 dnf[31493]: delorean-python-stevedore-c4acc5639fd2329372142 760 kB/s | 131 kB     00:00
Feb 25 06:35:59 np0005629333 dnf[31493]: delorean-python-cloudkitty-tests-tempest-ef9563 266 kB/s |  32 kB     00:00
Feb 25 06:35:59 np0005629333 dnf[31493]: delorean-diskimage-builder-cbb4478c143869181ba9 1.9 MB/s | 349 kB     00:00
Feb 25 06:35:59 np0005629333 dnf[31493]: delorean-openstack-nova-5cfeecbf22fca58822607dd 636 kB/s |  42 kB     00:00
Feb 25 06:35:59 np0005629333 dnf[31493]: delorean-python-designate-tests-tempest-347fdbc 263 kB/s |  18 kB     00:00
Feb 25 06:35:59 np0005629333 dnf[31493]: delorean-openstack-glance-1fd12c29b339f30fe823e 121 kB/s |  18 kB     00:00
Feb 25 06:35:59 np0005629333 dnf[31493]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 576 kB/s |  29 kB     00:00
Feb 25 06:35:59 np0005629333 dnf[31493]: delorean-openstack-manila-8fa2b5793100022b4d0f6 290 kB/s |  25 kB     00:00
Feb 25 06:35:59 np0005629333 dnf[31493]: delorean-python-whitebox-neutron-tests-tempest- 2.4 MB/s | 153 kB     00:00
Feb 25 06:36:00 np0005629333 dnf[31493]: delorean-openstack-octavia-76dfc1e35cf7f4dd6102 174 kB/s |  26 kB     00:00
Feb 25 06:36:00 np0005629333 dnf[31493]: delorean-openstack-watcher-c014f81a8647287f6dcc 123 kB/s |  16 kB     00:00
Feb 25 06:36:00 np0005629333 dnf[31493]: delorean-python-tcib-b403f1051724db0286e1418f59 101 kB/s | 7.4 kB     00:00
Feb 25 06:36:00 np0005629333 dnf[31493]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 795 kB/s | 144 kB     00:00
Feb 25 06:36:00 np0005629333 dnf[31493]: delorean-openstack-swift-dc98a8463506ac520c469a 244 kB/s |  14 kB     00:00
Feb 25 06:36:00 np0005629333 dnf[31493]: delorean-python-tempestconf-8e33668cda707818ee1 675 kB/s |  53 kB     00:00
Feb 25 06:36:01 np0005629333 dnf[31493]: delorean-openstack-heat-ui-013accbfd179753bc3f0 578 kB/s |  96 kB     00:00
Feb 25 06:36:01 np0005629333 dnf[31493]: CentOS Stream 9 - BaseOS                         56 kB/s | 7.0 kB     00:00
Feb 25 06:36:01 np0005629333 dnf[31493]: CentOS Stream 9 - AppStream                      69 kB/s | 7.1 kB     00:00
Feb 25 06:36:01 np0005629333 dnf[31493]: CentOS Stream 9 - CRB                            29 kB/s | 6.9 kB     00:00
Feb 25 06:36:02 np0005629333 dnf[31493]: CentOS Stream 9 - Extras packages                32 kB/s | 7.6 kB     00:00
Feb 25 06:36:02 np0005629333 dnf[31493]: dlrn-antelope-testing                           6.8 MB/s | 1.1 MB     00:00
Feb 25 06:36:02 np0005629333 dnf[31493]: dlrn-antelope-build-deps                        7.2 MB/s | 461 kB     00:00
Feb 25 06:36:02 np0005629333 dnf[31493]: centos9-rabbitmq                                893 kB/s | 123 kB     00:00
Feb 25 06:36:03 np0005629333 dnf[31493]: centos9-storage                                 3.1 MB/s | 415 kB     00:00
Feb 25 06:36:03 np0005629333 dnf[31493]: centos9-opstools                                1.1 MB/s |  51 kB     00:00
Feb 25 06:36:03 np0005629333 dnf[31493]: NFV SIG OpenvSwitch                             3.4 MB/s | 465 kB     00:00
Feb 25 06:36:04 np0005629333 dnf[31493]: repo-setup-centos-appstream                      59 MB/s |  27 MB     00:00
Feb 25 06:36:10 np0005629333 dnf[31493]: repo-setup-centos-baseos                         30 MB/s | 8.9 MB     00:00
Feb 25 06:36:11 np0005629333 dnf[31493]: repo-setup-centos-highavailability               20 MB/s | 744 kB     00:00
Feb 25 06:36:12 np0005629333 dnf[31493]: repo-setup-centos-powertools                     18 MB/s | 8.0 MB     00:00
Feb 25 06:36:15 np0005629333 dnf[31493]: Extra Packages for Enterprise Linux 9 - x86_64   15 MB/s |  20 MB     00:01
Feb 25 06:36:28 np0005629333 dnf[31493]: Metadata cache created.
Feb 25 06:36:28 np0005629333 systemd[1]: dnf-makecache.service: Deactivated successfully.
Feb 25 06:36:28 np0005629333 systemd[1]: Finished dnf makecache.
Feb 25 06:36:28 np0005629333 systemd[1]: dnf-makecache.service: Consumed 24.620s CPU time.
Feb 25 06:39:54 np0005629333 systemd-logind[811]: New session 8 of user zuul.
Feb 25 06:39:54 np0005629333 systemd[1]: Started Session 8 of User zuul.
Feb 25 06:39:55 np0005629333 python3.9[31754]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:39:56 np0005629333 python3.9[31936]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
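Decoded from the #012-escaped payload, the shell step bootstraps the repo-setup tool from GitHub in a throwaway venv and points the node at the current-podified antelope repositories:

    - name: Bootstrap repo-setup and apply current-podified antelope
      ansible.builtin.shell: |
        set -euxo pipefail
        pushd /var/tmp
        curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
        pushd repo-setup-main
        python3 -m venv ./venv
        PBR_VERSION=0.0.0 ./venv/bin/pip install ./
        ./venv/bin/repo-setup current-podified -b antelope
        popd
        rm -rf repo-setup-main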
Feb 25 06:40:06 np0005629333 systemd[1]: session-8.scope: Deactivated successfully.
Feb 25 06:40:06 np0005629333 systemd[1]: session-8.scope: Consumed 7.738s CPU time.
Feb 25 06:40:06 np0005629333 systemd-logind[811]: Session 8 logged out. Waiting for processes to exit.
Feb 25 06:40:06 np0005629333 systemd-logind[811]: Removed session 8.
Feb 25 06:40:21 np0005629333 systemd-logind[811]: New session 9 of user zuul.
Feb 25 06:40:21 np0005629333 systemd[1]: Started Session 9 of User zuul.
Feb 25 06:40:21 np0005629333 python3.9[32146]: ansible-ansible.legacy.ping Invoked with data=pong
Feb 25 06:40:23 np0005629333 python3.9[32320]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:40:24 np0005629333 python3.9[32473]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:40:25 np0005629333 python3.9[32627]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:40:25 np0005629333 python3.9[32780]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:40:26 np0005629333 python3.9[32933]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:40:27 np0005629333 python3.9[33057]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1772019626.0039344-68-201268279190298/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:40:28 np0005629333 python3.9[33210]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:40:28 np0005629333 python3.9[33367]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:40:29 np0005629333 python3.9[33520]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:40:30 np0005629333 python3.9[33670]: ansible-ansible.builtin.service_facts Invoked
Feb 25 06:40:35 np0005629333 python3.9[33924]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:40:36 np0005629333 python3.9[34074]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:40:37 np0005629333 python3.9[34228]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:40:38 np0005629333 python3.9[34387]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 25 06:40:39 np0005629333 python3.9[34472]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 25 06:41:29 np0005629333 systemd[1]: Reloading.
Feb 25 06:41:29 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:41:30 np0005629333 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Feb 25 06:41:31 np0005629333 systemd[1]: Reloading.
Feb 25 06:41:31 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:41:31 np0005629333 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Feb 25 06:41:31 np0005629333 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Feb 25 06:41:31 np0005629333 systemd[1]: Reloading.
Feb 25 06:41:31 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:41:31 np0005629333 systemd[1]: Listening on LVM2 poll daemon socket.
Feb 25 06:41:31 np0005629333 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Feb 25 06:41:31 np0005629333 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Feb 25 06:41:31 np0005629333 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Feb 25 06:42:51 np0005629333 kernel: SELinux:  Converting 2728 SID table entries...
Feb 25 06:42:51 np0005629333 kernel: SELinux:  policy capability network_peer_controls=1
Feb 25 06:42:51 np0005629333 kernel: SELinux:  policy capability open_perms=1
Feb 25 06:42:51 np0005629333 kernel: SELinux:  policy capability extended_socket_class=1
Feb 25 06:42:51 np0005629333 kernel: SELinux:  policy capability always_check_network=0
Feb 25 06:42:51 np0005629333 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 25 06:42:51 np0005629333 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 25 06:42:51 np0005629333 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 25 06:42:52 np0005629333 dbus-broker-launch[795]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Feb 25 06:42:52 np0005629333 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 25 06:42:52 np0005629333 systemd[1]: Starting man-db-cache-update.service...
Feb 25 06:42:52 np0005629333 systemd[1]: Reloading.
Feb 25 06:42:52 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:42:52 np0005629333 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 25 06:42:54 np0005629333 python3.9[36026]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:42:55 np0005629333 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 25 06:42:55 np0005629333 systemd[1]: Finished man-db-cache-update.service.
Feb 25 06:42:55 np0005629333 systemd[1]: man-db-cache-update.service: Consumed 1.145s CPU time.
Feb 25 06:42:55 np0005629333 systemd[1]: run-r712b1a5560ca45fcaae4177ad19366ed.service: Deactivated successfully.
Feb 25 06:42:58 np0005629333 python3.9[36309]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Feb 25 06:42:58 np0005629333 python3.9[36462]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Feb 25 06:43:02 np0005629333 python3.9[36617]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:43:03 np0005629333 python3.9[36770]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Feb 25 06:43:04 np0005629333 python3.9[36923]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:43:09 np0005629333 python3.9[37077]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:43:10 np0005629333 python3.9[37203]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772019784.6441753-231-236337668006692/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=81e097fe8c9e59927bc656867231837d71d1bb93 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:43:11 np0005629333 python3.9[37356]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:43:11 np0005629333 python3.9[37509]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:43:12 np0005629333 python3.9[37663]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:43:14 np0005629333 python3.9[37816]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Feb 25 06:43:14 np0005629333 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 06:43:15 np0005629333 python3.9[37971]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 25 06:43:16 np0005629333 python3.9[38130]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 25 06:43:17 np0005629333 python3.9[38291]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Feb 25 06:43:17 np0005629333 python3.9[38445]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 25 06:43:18 np0005629333 python3.9[38604]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Feb 25 06:43:19 np0005629333 python3.9[38757]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 25 06:43:22 np0005629333 python3.9[38911]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:43:23 np0005629333 python3.9[39064]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:43:23 np0005629333 python3.9[39188]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772019802.5452993-350-215992992023649/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:43:24 np0005629333 python3.9[39341]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 25 06:43:24 np0005629333 systemd[1]: Starting Load Kernel Modules...
Feb 25 06:43:24 np0005629333 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 25 06:43:24 np0005629333 kernel: Bridge firewalling registered
Feb 25 06:43:24 np0005629333 systemd-modules-load[39345]: Inserted module 'br_netfilter'
Feb 25 06:43:24 np0005629333 systemd[1]: Finished Load Kernel Modules.
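The copy at 06:43:23 drops /etc/modules-load.d/99-edpm.conf and the restart of systemd-modules-load applies it; the journal confirms br_netfilter was inserted, though the file's full module list was not logged. A sketch under that assumption:

    - name: Ensure br_netfilter loads at boot
      ansible.builtin.copy:
        dest: /etc/modules-load.d/99-edpm.conf
        content: |
          br_netfilter
        mode: "0644"
        setype: etc_t

    - name: Load the modules now
      ansible.builtin.systemd:
        name: systemd-modules-load.service
        state: restarted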
Feb 25 06:43:25 np0005629333 python3.9[39502]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:43:25 np0005629333 python3.9[39626]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772019804.7757459-373-31299246175099/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:43:26 np0005629333 python3.9[39779]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 25 06:43:31 np0005629333 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Feb 25 06:43:31 np0005629333 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Feb 25 06:43:31 np0005629333 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 25 06:43:31 np0005629333 systemd[1]: Starting man-db-cache-update.service...
Feb 25 06:43:31 np0005629333 systemd[1]: Reloading.
Feb 25 06:43:32 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:43:32 np0005629333 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 25 06:43:34 np0005629333 python3.9[42112]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:43:34 np0005629333 python3.9[43366]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Feb 25 06:43:35 np0005629333 python3.9[43872]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:43:36 np0005629333 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 25 06:43:36 np0005629333 systemd[1]: Finished man-db-cache-update.service.
Feb 25 06:43:36 np0005629333 systemd[1]: man-db-cache-update.service: Consumed 3.625s CPU time.
Feb 25 06:43:36 np0005629333 systemd[1]: run-r76553db8311f452b8c2af5f9777b6c81.service: Deactivated successfully.
Feb 25 06:43:36 np0005629333 python3.9[44026]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:43:36 np0005629333 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 25 06:43:36 np0005629333 systemd[1]: Starting Authorization Manager...
Feb 25 06:43:36 np0005629333 systemd[1]: Started Dynamic System Tuning Daemon.
Feb 25 06:43:36 np0005629333 polkitd[44243]: Started polkitd version 0.117
Feb 25 06:43:37 np0005629333 systemd[1]: Started Authorization Manager.
Feb 25 06:43:37 np0005629333 python3.9[44414]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 06:43:38 np0005629333 systemd[1]: Stopping Dynamic System Tuning Daemon...
Feb 25 06:43:38 np0005629333 systemd[1]: tuned.service: Deactivated successfully.
Feb 25 06:43:38 np0005629333 systemd[1]: Stopped Dynamic System Tuning Daemon.
Feb 25 06:43:38 np0005629333 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 25 06:43:38 np0005629333 systemd[1]: Started Dynamic System Tuning Daemon.
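After installing tuned and tuned-profiles-cpu-partitioning, the play selects the throughput-performance profile with tuned-adm (which starts the daemon and polkit as dependencies), then enables and restarts the service, producing the stop/start pair above. Equivalent tasks:

    - name: Select the throughput-performance profile
      ansible.builtin.command: /usr/sbin/tuned-adm profile throughput-performance

    - name: Enable and restart tuned
      ansible.builtin.systemd:
        name: tuned
        enabled: true
        state: restarted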
Feb 25 06:43:38 np0005629333 python3.9[44576]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Feb 25 06:43:41 np0005629333 python3.9[44729]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 06:43:41 np0005629333 systemd[1]: Reloading.
Feb 25 06:43:41 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:43:42 np0005629333 python3.9[44926]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 06:43:42 np0005629333 systemd[1]: Reloading.
Feb 25 06:43:42 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:43:43 np0005629333 python3.9[45123]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:43:43 np0005629333 python3.9[45277]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:43:43 np0005629333 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
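Pulled together, the swap work is split across the run: allocate and permission /swap and record it in fstab (06:42:58 to 06:43:03), then mkswap/swapon it here. Note that ansible.posix.mount with state=present only writes the fstab entry and does not activate swap, hence the explicit swapon. As one sequence:

    - name: Allocate a 1 GiB swap file
      ansible.builtin.command:
        cmd: dd if=/dev/zero of=/swap count=1024 bs=1M
        creates: /swap

    - name: Lock down the swap file
      ansible.builtin.file:
        path: /swap
        owner: root
        group: root
        mode: "0600"

    - name: Record the swap entry in fstab
      ansible.posix.mount:
        src: /swap
        path: none
        fstype: swap
        opts: sw
        state: present

    - name: Format the swap file
      ansible.builtin.command: mkswap /swap

    - name: Activate the swap file
      ansible.builtin.command: swapon /swap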
Feb 25 06:43:44 np0005629333 python3.9[45431]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:43:46 np0005629333 python3.9[45594]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
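Writing 2 to /sys/kernel/mm/ksm/run stops KSM and un-merges all shared pages, complementing the disabled ksm/ksmtuned units above. As logged, though, the step runs with _uses_shell=False, so the ">" is passed to echo as a literal argument rather than performing a redirect; a working form needs the shell module:

    - name: Stop KSM and unmerge shared pages
      ansible.builtin.shell: echo 2 > /sys/kernel/mm/ksm/run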
Feb 25 06:43:47 np0005629333 python3.9[45748]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 25 06:43:47 np0005629333 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 25 06:43:47 np0005629333 systemd[1]: Stopped Apply Kernel Variables.
Feb 25 06:43:47 np0005629333 systemd[1]: Stopping Apply Kernel Variables...
Feb 25 06:43:47 np0005629333 systemd[1]: Starting Apply Kernel Variables...
Feb 25 06:43:47 np0005629333 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 25 06:43:47 np0005629333 systemd[1]: Finished Apply Kernel Variables.
Feb 25 06:43:47 np0005629333 systemd[1]: session-9.scope: Deactivated successfully.
Feb 25 06:43:47 np0005629333 systemd[1]: session-9.scope: Consumed 2min 6.453s CPU time.
Feb 25 06:43:47 np0005629333 systemd-logind[811]: Session 9 logged out. Waiting for processes to exit.
Feb 25 06:43:47 np0005629333 systemd-logind[811]: Removed session 9.
Feb 25 06:43:52 np0005629333 systemd-logind[811]: New session 10 of user zuul.
Feb 25 06:43:52 np0005629333 systemd[1]: Started Session 10 of User zuul.
Feb 25 06:43:53 np0005629333 python3.9[45931]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:43:54 np0005629333 python3.9[46088]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Feb 25 06:43:55 np0005629333 python3.9[46242]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 25 06:43:57 np0005629333 python3.9[46401]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 25 06:43:59 np0005629333 python3.9[46562]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 25 06:44:00 np0005629333 python3.9[46647]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 25 06:44:03 np0005629333 python3.9[46812]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 25 06:44:15 np0005629333 kernel: SELinux:  Converting 2740 SID table entries...
Feb 25 06:44:15 np0005629333 kernel: SELinux:  policy capability network_peer_controls=1
Feb 25 06:44:15 np0005629333 kernel: SELinux:  policy capability open_perms=1
Feb 25 06:44:15 np0005629333 kernel: SELinux:  policy capability extended_socket_class=1
Feb 25 06:44:15 np0005629333 kernel: SELinux:  policy capability always_check_network=0
Feb 25 06:44:15 np0005629333 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 25 06:44:15 np0005629333 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 25 06:44:15 np0005629333 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 25 06:44:15 np0005629333 dbus-broker-launch[795]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Feb 25 06:44:15 np0005629333 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Feb 25 06:44:18 np0005629333 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 25 06:44:18 np0005629333 systemd[1]: Starting man-db-cache-update.service...
Feb 25 06:44:18 np0005629333 systemd[1]: Reloading.
Feb 25 06:44:18 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:44:18 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:44:18 np0005629333 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 25 06:44:20 np0005629333 python3.9[47935]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 25 06:44:20 np0005629333 systemd[1]: Reloading.
Feb 25 06:44:20 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:44:20 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:44:20 np0005629333 systemd[1]: Starting Open vSwitch Database Unit...
Feb 25 06:44:20 np0005629333 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 25 06:44:20 np0005629333 systemd[1]: Finished man-db-cache-update.service.
Feb 25 06:44:20 np0005629333 systemd[1]: run-r84d07b51ae034168800905ac2b4dd956.service: Deactivated successfully.
Feb 25 06:44:20 np0005629333 chown[47984]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Feb 25 06:44:20 np0005629333 ovs-ctl[47989]: /etc/openvswitch/conf.db does not exist ... (warning).
Feb 25 06:44:20 np0005629333 ovs-ctl[47989]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Feb 25 06:44:20 np0005629333 ovs-ctl[47989]: Starting ovsdb-server [  OK  ]
Feb 25 06:44:20 np0005629333 ovs-vsctl[48039]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Feb 25 06:44:20 np0005629333 ovs-vsctl[48059]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"a594384c-d614-4492-9e0a-4d6ec095920c\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Feb 25 06:44:20 np0005629333 ovs-ctl[47989]: Configuring Open vSwitch system IDs [  OK  ]
Feb 25 06:44:20 np0005629333 ovs-vsctl[48065]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Feb 25 06:44:20 np0005629333 ovs-ctl[47989]: Enabling remote OVSDB managers [  OK  ]
Feb 25 06:44:20 np0005629333 systemd[1]: Started Open vSwitch Database Unit.
Feb 25 06:44:21 np0005629333 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Feb 25 06:44:21 np0005629333 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Feb 25 06:44:21 np0005629333 systemd[1]: Starting Open vSwitch Forwarding Unit...
Feb 25 06:44:21 np0005629333 kernel: openvswitch: Open vSwitch switching datapath
Feb 25 06:44:21 np0005629333 ovs-ctl[48109]: Inserting openvswitch module [  OK  ]
Feb 25 06:44:21 np0005629333 ovs-ctl[48078]: Starting ovs-vswitchd [  OK  ]
Feb 25 06:44:21 np0005629333 ovs-ctl[48078]: Enabling remote OVSDB managers [  OK  ]
Feb 25 06:44:21 np0005629333 systemd[1]: Started Open vSwitch Forwarding Unit.
Feb 25 06:44:21 np0005629333 ovs-vsctl[48129]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Feb 25 06:44:21 np0005629333 systemd[1]: Starting Open vSwitch...
Feb 25 06:44:21 np0005629333 systemd[1]: Finished Open vSwitch.
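The openvswitch package is downloaded, installed, and its service enabled and started; the first start initializes /etc/openvswitch/conf.db, configures the system IDs and hostname external-id, and inserts the kernel datapath module (lines above). The controlling tasks, reconstructed:

    - name: Install Open vSwitch
      ansible.builtin.dnf:
        name: openvswitch
        state: present

    - name: Enable and start Open vSwitch
      ansible.builtin.systemd:
        name: openvswitch.service
        enabled: true
        masked: false
        state: started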
Feb 25 06:44:22 np0005629333 python3.9[48281]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:44:23 np0005629333 python3.9[48434]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Feb 25 06:44:24 np0005629333 kernel: SELinux:  Converting 2754 SID table entries...
Feb 25 06:44:24 np0005629333 kernel: SELinux:  policy capability network_peer_controls=1
Feb 25 06:44:24 np0005629333 kernel: SELinux:  policy capability open_perms=1
Feb 25 06:44:24 np0005629333 kernel: SELinux:  policy capability extended_socket_class=1
Feb 25 06:44:24 np0005629333 kernel: SELinux:  policy capability always_check_network=0
Feb 25 06:44:24 np0005629333 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 25 06:44:24 np0005629333 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 25 06:44:24 np0005629333 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
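
The sefcontext task at 06:44:23, whose policy reload produces the kernel SELinux messages above, is roughly equivalent to a semanage/restorecon pair, assuming policycoreutils-python-utils is installed:

    # Persist the file-context mapping, then relabel anything already there.
    semanage fcontext -a -t container_file_t '/var/lib/edpm-config(/.*)?'
    restorecon -Rv /var/lib/edpm-config
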
Feb 25 06:44:25 np0005629333 python3.9[48589]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:44:26 np0005629333 dbus-broker-launch[795]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Feb 25 06:44:26 np0005629333 python3.9[48748]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 25 06:44:28 np0005629333 python3.9[48902]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
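
The dnf/rpm -V pair above is an install-then-verify pattern; the same steps from a shell, with the package list copied from the task:

    pkgs=(driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux
          python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc
          ksmtuned systemd-container crypto-policies-scripts grubby sos)
    dnf -y install "${pkgs[@]}"
    # rpm -V is silent (exit 0) when every file still matches the package
    # metadata; any output line flags size/digest/mode/ownership drift.
    rpm -V "${pkgs[@]}"
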
Feb 25 06:44:29 np0005629333 python3.9[49190]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Feb 25 06:44:30 np0005629333 python3.9[49340]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:44:31 np0005629333 python3.9[49495]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 25 06:44:33 np0005629333 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 25 06:44:33 np0005629333 systemd[1]: Starting man-db-cache-update.service...
Feb 25 06:44:33 np0005629333 systemd[1]: Reloading.
Feb 25 06:44:33 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:44:33 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:44:33 np0005629333 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 25 06:44:35 np0005629333 python3.9[49818]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
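
That module invocation is what Ansible logs for a systemd task; the same restart as an ad-hoc call (assuming ansible-core's CLI on the control node), or directly on the host:

    ansible localhost -m ansible.builtin.systemd \
        -a 'name=NetworkManager state=restarted'
    systemctl restart NetworkManager
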
Feb 25 06:44:35 np0005629333 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Feb 25 06:44:35 np0005629333 systemd[1]: Stopped Network Manager Wait Online.
Feb 25 06:44:35 np0005629333 systemd[1]: Stopping Network Manager Wait Online...
Feb 25 06:44:35 np0005629333 systemd[1]: Stopping Network Manager...
Feb 25 06:44:35 np0005629333 NetworkManager[7680]: <info>  [1772019875.4949] caught SIGTERM, shutting down normally.
Feb 25 06:44:35 np0005629333 NetworkManager[7680]: <info>  [1772019875.4968] dhcp4 (eth0): canceled DHCP transaction
Feb 25 06:44:35 np0005629333 NetworkManager[7680]: <info>  [1772019875.4969] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 25 06:44:35 np0005629333 NetworkManager[7680]: <info>  [1772019875.4969] dhcp4 (eth0): state changed no lease
Feb 25 06:44:35 np0005629333 NetworkManager[7680]: <info>  [1772019875.4972] manager: NetworkManager state is now CONNECTED_SITE
Feb 25 06:44:35 np0005629333 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 25 06:44:35 np0005629333 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 25 06:44:35 np0005629333 NetworkManager[7680]: <info>  [1772019875.5961] exiting (success)
Feb 25 06:44:35 np0005629333 systemd[1]: NetworkManager.service: Deactivated successfully.
Feb 25 06:44:35 np0005629333 systemd[1]: Stopped Network Manager.
Feb 25 06:44:35 np0005629333 systemd[1]: NetworkManager.service: Consumed 10.641s CPU time, 4.1M memory peak, read 0B from disk, written 17.0K to disk.
Feb 25 06:44:35 np0005629333 systemd[1]: Starting Network Manager...
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.6715] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:5e9335c6-0936-4191-819c-92681868f3d7)
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.6717] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.6813] manager[0x557f82528000]: monitoring kernel firmware directory '/lib/firmware'.
Feb 25 06:44:35 np0005629333 systemd[1]: Starting Hostname Service...
Feb 25 06:44:35 np0005629333 systemd[1]: Started Hostname Service.
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7683] hostname: hostname: using hostnamed
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7683] hostname: static hostname changed from (none) to "compute-0"
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7691] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7696] manager[0x557f82528000]: rfkill: Wi-Fi hardware radio set enabled
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7696] manager[0x557f82528000]: rfkill: WWAN hardware radio set enabled
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7728] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7741] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7742] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7743] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7743] manager: Networking is enabled by state file
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7746] settings: Loaded settings plugin: keyfile (internal)
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7752] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7796] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
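
NetworkManager emits this deprecation warning whenever the ifcfg-rh plugin loads a profile; the migration it suggests is a one-liner, and a single profile can also be migrated by name or UUID:

    # Rewrite ifcfg files under /etc/sysconfig/network-scripts as keyfiles
    # under /etc/NetworkManager/system-connections.
    nmcli connection migrate
    nmcli connection migrate "System eth0"   # single profile
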
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7812] dhcp: init: Using DHCP client 'internal'
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7816] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7824] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7831] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7844] device (lo): Activation: starting connection 'lo' (c854c08b-1693-43bc-843a-c34c47825cf4)
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7852] device (eth0): carrier: link connected
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7858] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7865] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7865] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7874] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7884] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7892] device (eth1): carrier: link connected
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7898] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7905] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (6041f6e9-99fd-5935-a363-bb7dd4d3c5fc) (indicated)
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7906] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7914] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7924] device (eth1): Activation: starting connection 'ci-private-network' (6041f6e9-99fd-5935-a363-bb7dd4d3c5fc)
Feb 25 06:44:35 np0005629333 systemd[1]: Started Network Manager.
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7939] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7953] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7956] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7957] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7960] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7962] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7964] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7966] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7968] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7974] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.7992] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.8028] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.8048] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.8063] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.8069] dhcp4 (eth0): state changed new lease, address=38.102.83.173
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.8075] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.8085] device (lo): Activation: successful, device activated.
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.8103] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb 25 06:44:35 np0005629333 systemd[1]: Starting Network Manager Wait Online...
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.8709] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.8742] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.8755] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.8762] manager: NetworkManager state is now CONNECTED_LOCAL
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.8766] device (eth1): Activation: successful, device activated.
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.8781] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.8821] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.8829] manager: NetworkManager state is now CONNECTED_SITE
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.8834] device (eth0): Activation: successful, device activated.
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.8843] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 25 06:44:35 np0005629333 NetworkManager[49836]: <info>  [1772019875.8850] manager: startup complete
Feb 25 06:44:35 np0005629333 systemd[1]: Finished Network Manager Wait Online.
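
NetworkManager-wait-online.service, which just finished above, is essentially a wrapper around nm-online; the same gate can be reproduced in a script (flags per nm-online(1)):

    nm-online -s -q --timeout=60   # exit 0 once NM reports startup complete
    nmcli general status
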
Feb 25 06:44:36 np0005629333 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 25 06:44:36 np0005629333 systemd[1]: Finished man-db-cache-update.service.
Feb 25 06:44:36 np0005629333 systemd[1]: run-rb5c58cc4486c4e1aab801241e1793021.service: Deactivated successfully.
Feb 25 06:44:36 np0005629333 python3.9[50047]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 25 06:44:45 np0005629333 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 25 06:45:05 np0005629333 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 25 06:45:05 np0005629333 systemd[1]: Starting man-db-cache-update.service...
Feb 25 06:45:05 np0005629333 systemd[1]: Reloading.
Feb 25 06:45:05 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:45:05 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:45:05 np0005629333 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 25 06:45:05 np0005629333 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 25 06:45:10 np0005629333 python3.9[50528]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:45:11 np0005629333 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 25 06:45:11 np0005629333 systemd[1]: Finished man-db-cache-update.service.
Feb 25 06:45:11 np0005629333 systemd[1]: run-rb48ed8429475482b8e6426dceb20ea8f.service: Deactivated successfully.
Feb 25 06:45:11 np0005629333 python3.9[50682]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:45:12 np0005629333 python3.9[50837]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:45:13 np0005629333 python3.9[50990]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:45:14 np0005629333 python3.9[51143]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:45:15 np0005629333 python3.9[51296]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
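
The five ini_file tasks above add no-auto-default=* to [main] and drop any dns=none / rc-manager=unmanaged overrides from NetworkManager.conf and the cloud-init drop-in. Since crudini was installed earlier in this run, the same edits look roughly like this (crudini --del removes the key unconditionally, which is close to, but not identical to, ini_file's value-matched state=absent):

    crudini --set /etc/NetworkManager/NetworkManager.conf main no-auto-default '*'
    for f in /etc/NetworkManager/NetworkManager.conf \
             /etc/NetworkManager/conf.d/99-cloud-init.conf; do
        crudini --del "$f" main dns
        crudini --del "$f" main rc-manager
    done
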
Feb 25 06:45:16 np0005629333 python3.9[51449]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:45:16 np0005629333 python3.9[51573]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1772019915.4706938-226-266383903593860/.source _original_basename=.txrmgvth follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:45:17 np0005629333 python3.9[51726]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:45:18 np0005629333 python3.9[51879]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Feb 25 06:45:18 np0005629333 python3.9[52032]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:45:21 np0005629333 python3.9[52462]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Feb 25 06:45:22 np0005629333 ansible-async_wrapper.py[52638]: Invoked with j784147759588 300 /home/zuul/.ansible/tmp/ansible-tmp-1772019921.2779582-292-125248706899883/AnsiballZ_edpm_os_net_config.py _
Feb 25 06:45:22 np0005629333 ansible-async_wrapper.py[52641]: Starting module and watcher
Feb 25 06:45:22 np0005629333 ansible-async_wrapper.py[52641]: Start watching 52642 (300)
Feb 25 06:45:22 np0005629333 ansible-async_wrapper.py[52642]: Start module (52642)
Feb 25 06:45:22 np0005629333 ansible-async_wrapper.py[52638]: Return async_wrapper task started.
Feb 25 06:45:22 np0005629333 python3.9[52643]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True remove_config=False safe_defaults=False use_nmstate=True purge_provider=
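
The async-wrapped edpm_os_net_config module ultimately drives os-net-config against /etc/os-net-config/config.yaml with the nmstate provider (use_nmstate=True above). An illustrative config for the topology the log builds below, br-ex enslaving eth1 plus the vlan interfaces; the VLAN list shown and any addressing are placeholders, not values recovered from the log:

    cat > /etc/os-net-config/config.yaml <<'EOF'
    network_config:
      - type: ovs_bridge
        name: br-ex
        use_dhcp: false
        members:
          - type: interface
            name: eth1
            primary: true
          - type: vlan
            vlan_id: 20
          - type: vlan
            vlan_id: 21
    EOF
    os-net-config -c /etc/os-net-config/config.yaml \
        --cleanup --detailed-exit-codes --debug
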
Feb 25 06:45:23 np0005629333 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Feb 25 06:45:23 np0005629333 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Feb 25 06:45:23 np0005629333 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Feb 25 06:45:23 np0005629333 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Feb 25 06:45:23 np0005629333 kernel: cfg80211: failed to load regulatory.db
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.3848] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52644 uid=0 result="success"
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.3865] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52644 uid=0 result="success"
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.4621] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.4623] audit: op="connection-add" uuid="b6dac48a-06b0-4b6c-8506-d1decca9e056" name="br-ex-br" pid=52644 uid=0 result="success"
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.4638] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.4639] audit: op="connection-add" uuid="356c92a0-466c-4e28-80e2-67350d976dda" name="br-ex-port" pid=52644 uid=0 result="success"
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.4652] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.4653] audit: op="connection-add" uuid="b28854a2-611c-40cd-a240-b91c931bd070" name="eth1-port" pid=52644 uid=0 result="success"
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.4666] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.4667] audit: op="connection-add" uuid="4bad1dd7-5d45-46fd-8683-226489376cd4" name="vlan20-port" pid=52644 uid=0 result="success"
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.4679] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.4681] audit: op="connection-add" uuid="943ac35b-5f0f-4927-a66c-fedf65768ff8" name="vlan21-port" pid=52644 uid=0 result="success"
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.4692] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.4693] audit: op="connection-add" uuid="0fe2a4f6-45d6-4ea4-ad17-6c4a2c48dd5b" name="vlan22-port" pid=52644 uid=0 result="success"
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.4705] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.4707] audit: op="connection-add" uuid="d6e95a32-4eb2-4564-bf1d-7067e467e44e" name="vlan23-port" pid=52644 uid=0 result="success"
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.4727] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.addr-gen-mode,ipv6.method,ipv6.dhcp-timeout,802-3-ethernet.mtu,connection.timestamp,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=52644 uid=0 result="success"
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.4746] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.4747] audit: op="connection-add" uuid="1c902d4d-0c33-4727-bdc6-39634d5dad4f" name="br-ex-if" pid=52644 uid=0 result="success"
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5108] audit: op="connection-update" uuid="6041f6e9-99fd-5935-a363-bb7dd4d3c5fc" name="ci-private-network" args="ipv6.addr-gen-mode,ipv6.method,ipv6.routing-rules,ipv6.addresses,ipv6.dns,ipv6.routes,ovs-external-ids.data,ovs-interface.type,connection.port-type,connection.slave-type,connection.controller,connection.timestamp,connection.master,ipv4.routing-rules,ipv4.method,ipv4.never-default,ipv4.addresses,ipv4.dns,ipv4.routes" pid=52644 uid=0 result="success"
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5129] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5130] audit: op="connection-add" uuid="14382c47-b20c-4ced-970f-44f64137467c" name="vlan20-if" pid=52644 uid=0 result="success"
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5148] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5150] audit: op="connection-add" uuid="aecc57e0-3a2a-4492-b97a-6cf695812ac4" name="vlan21-if" pid=52644 uid=0 result="success"
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5167] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5169] audit: op="connection-add" uuid="e0fc676f-3a03-4634-8d41-116f3cc325a1" name="vlan22-if" pid=52644 uid=0 result="success"
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5187] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5189] audit: op="connection-add" uuid="c0fb1314-a04f-4e22-8915-b8223ff0f8ef" name="vlan23-if" pid=52644 uid=0 result="success"
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5201] audit: op="connection-delete" uuid="28ca6592-12e0-394a-9d0e-3b8dc5e091b0" name="Wired connection 1" pid=52644 uid=0 result="success"
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5213] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <warn>  [1772019924.5215] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5224] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5228] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (b6dac48a-06b0-4b6c-8506-d1decca9e056)
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5228] audit: op="connection-activate" uuid="b6dac48a-06b0-4b6c-8506-d1decca9e056" name="br-ex-br" pid=52644 uid=0 result="success"
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5230] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <warn>  [1772019924.5231] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5238] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5241] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (356c92a0-466c-4e28-80e2-67350d976dda)
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5243] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <warn>  [1772019924.5244] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5249] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5252] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (b28854a2-611c-40cd-a240-b91c931bd070)
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5254] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <warn>  [1772019924.5255] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5261] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5265] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (4bad1dd7-5d45-46fd-8683-226489376cd4)
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5266] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <warn>  [1772019924.5268] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5273] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5276] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (943ac35b-5f0f-4927-a66c-fedf65768ff8)
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5278] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <warn>  [1772019924.5279] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5284] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5288] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (0fe2a4f6-45d6-4ea4-ad17-6c4a2c48dd5b)
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5290] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <warn>  [1772019924.5291] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5296] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5300] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (d6e95a32-4eb2-4564-bf1d-7067e467e44e)
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5301] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5303] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5305] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5312] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <warn>  [1772019924.5312] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5316] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5322] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (1c902d4d-0c33-4727-bdc6-39634d5dad4f)
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5322] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5326] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5328] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5329] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5331] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5343] device (eth1): disconnecting for new activation request.
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5344] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5347] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5349] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5351] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5353] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <warn>  [1772019924.5354] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5357] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5362] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (14382c47-b20c-4ced-970f-44f64137467c)
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5363] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5365] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5368] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5369] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5372] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <warn>  [1772019924.5373] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5377] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5382] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (aecc57e0-3a2a-4492-b97a-6cf695812ac4)
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5383] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5387] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5389] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5390] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5394] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <warn>  [1772019924.5395] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5398] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5402] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (e0fc676f-3a03-4634-8d41-116f3cc325a1)
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5403] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5406] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5408] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5409] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5412] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <warn>  [1772019924.5413] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5418] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5423] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (c0fb1314-a04f-4e22-8915-b8223ff0f8ef)
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5424] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5426] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5429] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5430] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5432] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5450] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=52644 uid=0 result="success"
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5452] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5457] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5460] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5470] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5473] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5476] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5479] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5481] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 kernel: ovs-system: entered promiscuous mode
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5488] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5495] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5500] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5503] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5511] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 kernel: Timeout policy base is empty
Feb 25 06:45:24 np0005629333 systemd-udevd[52650]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 06:45:24 np0005629333 systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5519] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5525] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5529] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5537] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5543] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5549] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5552] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5561] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5569] dhcp4 (eth0): canceled DHCP transaction
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5569] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5569] dhcp4 (eth0): state changed no lease
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5571] dhcp4 (eth0): activation: beginning transaction (no timeout)
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5583] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5589] audit: op="device-reapply" interface="eth1" ifindex=3 pid=52644 uid=0 result="fail" reason="Device is not activated"
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5593] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Feb 25 06:45:24 np0005629333 systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 25 06:45:24 np0005629333 kernel: br-ex: entered promiscuous mode
Feb 25 06:45:24 np0005629333 kernel: vlan21: entered promiscuous mode
Feb 25 06:45:24 np0005629333 systemd-udevd[52649]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5881] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5887] dhcp4 (eth0): state changed new lease, address=38.102.83.173
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5901] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Feb 25 06:45:24 np0005629333 systemd-udevd[52648]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5911] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Feb 25 06:45:24 np0005629333 kernel: vlan20: entered promiscuous mode
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5923] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.5934] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Feb 25 06:45:24 np0005629333 kernel: vlan22: entered promiscuous mode
Feb 25 06:45:24 np0005629333 kernel: vlan23: entered promiscuous mode
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6199] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6383] device (eth1): Activation: starting connection 'ci-private-network' (6041f6e9-99fd-5935-a363-bb7dd4d3c5fc)
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6388] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6390] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6391] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6393] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6394] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6396] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6398] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6405] device (eth1): state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6413] device (eth1): disconnecting for new activation request.
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6414] audit: op="connection-activate" uuid="6041f6e9-99fd-5935-a363-bb7dd4d3c5fc" name="ci-private-network" pid=52644 uid=0 result="success"
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6431] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6438] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6440] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6444] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6449] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6455] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6458] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6461] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6464] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6466] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6471] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6473] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6475] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6478] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6480] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6482] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6487] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6500] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52644 uid=0 result="success"
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6502] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6513] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6518] device (eth1): Activation: starting connection 'ci-private-network' (6041f6e9-99fd-5935-a363-bb7dd4d3c5fc)
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6522] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6535] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6539] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6545] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6550] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6555] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6562] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6570] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6571] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6572] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6574] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6576] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6579] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6583] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 25 06:45:24 np0005629333 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6587] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6590] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6594] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6597] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6601] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6603] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6607] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6609] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6612] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6614] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6860] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6866] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Feb 25 06:45:24 np0005629333 NetworkManager[49836]: <info>  [1772019924.6869] device (eth1): Activation: successful, device activated.
Feb 25 06:45:25 np0005629333 NetworkManager[49836]: <info>  [1772019925.7877] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52644 uid=0 result="success"
Feb 25 06:45:26 np0005629333 python3.9[53007]: ansible-ansible.legacy.async_status Invoked with jid=j784147759588.52638 mode=status _async_dir=/root/.ansible_async
Feb 25 06:45:26 np0005629333 NetworkManager[49836]: <info>  [1772019926.0315] checkpoint[0x557f824fd950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Feb 25 06:45:26 np0005629333 NetworkManager[49836]: <info>  [1772019926.0319] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52644 uid=0 result="success"
Feb 25 06:45:26 np0005629333 NetworkManager[49836]: <info>  [1772019926.5007] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52644 uid=0 result="success"
Feb 25 06:45:26 np0005629333 NetworkManager[49836]: <info>  [1772019926.5026] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52644 uid=0 result="success"
Feb 25 06:45:26 np0005629333 NetworkManager[49836]: <info>  [1772019926.8585] audit: op="networking-control" arg="global-dns-configuration" pid=52644 uid=0 result="success"
Feb 25 06:45:26 np0005629333 NetworkManager[49836]: <info>  [1772019926.8894] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Feb 25 06:45:26 np0005629333 NetworkManager[49836]: <info>  [1772019926.9093] audit: op="networking-control" arg="global-dns-configuration" pid=52644 uid=0 result="success"
Feb 25 06:45:26 np0005629333 NetworkManager[49836]: <info>  [1772019926.9143] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52644 uid=0 result="success"
Feb 25 06:45:27 np0005629333 NetworkManager[49836]: <info>  [1772019927.0780] checkpoint[0x557f824fda20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Feb 25 06:45:27 np0005629333 NetworkManager[49836]: <info>  [1772019927.0784] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52644 uid=0 result="success"
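The Checkpoint/1 and Checkpoint/2 audit entries above are NetworkManager's rollback safety net: the network job creates a checkpoint, keeps nudging its rollback timeout forward while it works (op="checkpoint-adjust-rollback-timeout"), and destroys it only once the new configuration is confirmed, so a change that cut off connectivity would auto-revert. The job drives this over D-Bus internally; a hand-rolled illustration of the same lifecycle (not what the job actually ran) would be:

- name: Create a NetworkManager checkpoint over all devices, 60 s rollback
  ansible.builtin.command: >-
    busctl call org.freedesktop.NetworkManager
    /org/freedesktop/NetworkManager
    org.freedesktop.NetworkManager
    CheckpointCreate aouu 0 60 0
  register: nm_checkpoint   # stdout carries the checkpoint object path

CheckpointAdjustRollbackTimeout and CheckpointDestroy on the same interface correspond to the other two audit operations seen in the log.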
Feb 25 06:45:27 np0005629333 ansible-async_wrapper.py[52642]: Module complete (52642)
Feb 25 06:45:27 np0005629333 ansible-async_wrapper.py[52641]: Done in kid B.
Feb 25 06:45:29 np0005629333 python3.9[53114]: ansible-ansible.legacy.async_status Invoked with jid=j784147759588.52638 mode=status _async_dir=/root/.ansible_async
Feb 25 06:45:30 np0005629333 python3.9[53215]: ansible-ansible.legacy.async_status Invoked with jid=j784147759588.52638 mode=cleanup _async_dir=/root/.ansible_async
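The async_status calls above are the polling half of Ansible's fire-and-forget pattern: the network change ran as a background job (jid j784147759588.52638, state kept under /root/.ansible_async), was polled with mode=status until finished, then its job file was removed with mode=cleanup. A minimal sketch of the pattern, with the command as a stand-in for the real os-net-config job:

- name: Start the long-running network change in the background
  ansible.builtin.command: os-net-config -c /etc/os-net-config/config.yaml   # stand-in for the real job
  async: 600    # allow up to 10 minutes
  poll: 0       # do not block; poll separately with async_status
  register: net_job

- name: Poll the job until it finishes
  ansible.builtin.async_status:
    jid: "{{ net_job.ansible_job_id }}"
  register: net_result
  until: net_result.finished
  retries: 60
  delay: 5

- name: Drop the job file from the async state directory
  ansible.builtin.async_status:
    jid: "{{ net_job.ansible_job_id }}"
    mode: cleanup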
Feb 25 06:45:30 np0005629333 python3.9[53368]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:45:31 np0005629333 python3.9[53492]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772019930.3046896-319-43885637630714/.source.returncode _original_basename=.nzxmdzpj follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:45:32 np0005629333 python3.9[53645]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:45:32 np0005629333 python3.9[53770]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772019931.6955318-335-43619757773920/.source.cfg _original_basename=.3i2jvlca follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
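99-edpm-disable-network-config.cfg exists so cloud-init stops regenerating network configuration on reboot now that os-net-config owns the interfaces. The file body is not logged (content=NOT_LOGGING_PARAMETER), but the conventional cloud-init knob is a two-line stanza; a sketch of the task under that assumption:

- name: Keep cloud-init from rewriting the network configuration
  ansible.builtin.copy:
    dest: /etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg
    mode: "0644"
    content: |
      network:
        config: disabled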
Feb 25 06:45:33 np0005629333 python3.9[53923]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 25 06:45:33 np0005629333 systemd[1]: Reloading Network Manager...
Feb 25 06:45:33 np0005629333 NetworkManager[49836]: <info>  [1772019933.6983] audit: op="reload" arg="0" pid=53927 uid=0 result="success"
Feb 25 06:45:33 np0005629333 NetworkManager[49836]: <info>  [1772019933.6993] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Feb 25 06:45:33 np0005629333 systemd[1]: Reloaded Network Manager.
Feb 25 06:45:34 np0005629333 systemd[1]: session-10.scope: Deactivated successfully.
Feb 25 06:45:34 np0005629333 systemd[1]: session-10.scope: Consumed 47.745s CPU time.
Feb 25 06:45:34 np0005629333 systemd-logind[811]: Session 10 logged out. Waiting for processes to exit.
Feb 25 06:45:34 np0005629333 systemd-logind[811]: Removed session 10.
Feb 25 06:45:38 np0005629333 systemd-logind[811]: New session 11 of user zuul.
Feb 25 06:45:38 np0005629333 systemd[1]: Started Session 11 of User zuul.
Feb 25 06:45:39 np0005629333 python3.9[54111]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:45:40 np0005629333 python3.9[54265]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 25 06:45:42 np0005629333 python3.9[54459]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:45:43 np0005629333 systemd[1]: session-11.scope: Deactivated successfully.
Feb 25 06:45:43 np0005629333 systemd[1]: session-11.scope: Consumed 2.234s CPU time.
Feb 25 06:45:43 np0005629333 systemd-logind[811]: Session 11 logged out. Waiting for processes to exit.
Feb 25 06:45:43 np0005629333 systemd-logind[811]: Removed session 11.
Feb 25 06:45:43 np0005629333 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 25 06:45:49 np0005629333 systemd-logind[811]: New session 12 of user zuul.
Feb 25 06:45:49 np0005629333 systemd[1]: Started Session 12 of User zuul.
Feb 25 06:45:49 np0005629333 python3.9[54641]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:45:51 np0005629333 python3.9[54795]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:45:52 np0005629333 python3.9[54952]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 25 06:45:52 np0005629333 python3.9[55038]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 25 06:45:54 np0005629333 python3.9[55192]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 25 06:45:56 np0005629333 python3.9[55389]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:45:56 np0005629333 python3.9[55542]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:45:56 np0005629333 systemd[1]: var-lib-containers-storage-overlay-compat399853002-merged.mount: Deactivated successfully.
Feb 25 06:45:57 np0005629333 podman[55543]: 2026-02-25 11:45:57.084244518 +0000 UTC m=+0.213381843 system refresh
Feb 25 06:45:57 np0005629333 python3.9[55707]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:45:57 np0005629333 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 25 06:45:58 np0005629333 python3.9[55831]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772019957.2693245-74-234630119071128/.source.json follow=False _original_basename=podman_network_config.j2 checksum=7e719b71dd5bc62c5bf052c09786d4f989963d5f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:45:59 np0005629333 python3.9[55984]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:45:59 np0005629333 python3.9[56108]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772019958.8934724-89-114265991212857/.source.conf follow=False _original_basename=registries.conf.j2 checksum=197bf6e1388aca01b529f5e8d08286f263a7fb81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:46:00 np0005629333 python3.9[56261]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:46:01 np0005629333 python3.9[56414]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:46:02 np0005629333 python3.9[56567]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:46:02 np0005629333 python3.9[56720]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
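The four ini_file calls above pin podman's defaults in /etc/containers/containers.conf: [containers] pids_limit = 4096, [engine] events_logger = "journald" and runtime = "crun", and [network] network_backend = "netavark". Reconstructed from its logged arguments, one of them looks like:

- name: Pin the container runtime to crun
  community.general.ini_file:
    path: /etc/containers/containers.conf
    create: true
    section: engine
    option: runtime
    value: '"crun"'    # containers.conf is TOML, so the quotes are part of the value
    owner: root
    group: root
    mode: "0644"
    setype: etc_t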
Feb 25 06:46:03 np0005629333 python3.9[56873]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 25 06:46:05 np0005629333 python3.9[57027]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:46:06 np0005629333 python3.9[57182]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:46:06 np0005629333 python3.9[57335]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:46:07 np0005629333 python3.9[57488]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:46:08 np0005629333 python3.9[57642]: ansible-service_facts Invoked
Feb 25 06:46:08 np0005629333 network[57659]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 25 06:46:08 np0005629333 network[57660]: 'network-scripts' will be removed from distribution in near future.
Feb 25 06:46:08 np0005629333 network[57661]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 25 06:46:14 np0005629333 python3.9[58117]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 25 06:46:17 np0005629333 python3.9[58271]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Feb 25 06:46:18 np0005629333 python3.9[58424]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:46:19 np0005629333 python3.9[58550]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772019978.214454-233-248391527883233/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:46:19 np0005629333 python3.9[58705]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:46:20 np0005629333 python3.9[58831]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772019979.522805-248-235526109730859/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:46:21 np0005629333 python3.9[58986]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
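PEERNTP=no in /etc/sysconfig/network stops the DHCP client hooks from appending DHCP-supplied NTP servers to chronyd's source list, so only the servers templated into /etc/chrony.conf are used. The task, reconstructed from the logged arguments:

- name: Ignore NTP servers offered over DHCP
  ansible.builtin.lineinfile:
    path: /etc/sysconfig/network
    regexp: '^PEERNTP='
    line: PEERNTP=no
    create: true
    backup: true
    mode: "0644"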
Feb 25 06:46:22 np0005629333 python3.9[59141]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 25 06:46:23 np0005629333 python3.9[59226]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 06:46:25 np0005629333 python3.9[59381]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 25 06:46:25 np0005629333 python3.9[59466]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 25 06:46:25 np0005629333 chronyd[810]: chronyd exiting
Feb 25 06:46:25 np0005629333 systemd[1]: Stopping NTP client/server...
Feb 25 06:46:25 np0005629333 systemd[1]: chronyd.service: Deactivated successfully.
Feb 25 06:46:25 np0005629333 systemd[1]: Stopped NTP client/server.
Feb 25 06:46:25 np0005629333 systemd[1]: Starting NTP client/server...
Feb 25 06:46:26 np0005629333 chronyd[59474]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Feb 25 06:46:26 np0005629333 chronyd[59474]: Frequency -28.437 +/- 0.354 ppm read from /var/lib/chrony/drift
Feb 25 06:46:26 np0005629333 chronyd[59474]: Loaded seccomp filter (level 2)
Feb 25 06:46:26 np0005629333 systemd[1]: Started NTP client/server.
Feb 25 06:46:26 np0005629333 systemd[1]: session-12.scope: Deactivated successfully.
Feb 25 06:46:26 np0005629333 systemd[1]: session-12.scope: Consumed 24.470s CPU time.
Feb 25 06:46:26 np0005629333 systemd-logind[811]: Session 12 logged out. Waiting for processes to exit.
Feb 25 06:46:26 np0005629333 systemd-logind[811]: Removed session 12.
Feb 25 06:46:31 np0005629333 systemd-logind[811]: New session 13 of user zuul.
Feb 25 06:46:31 np0005629333 systemd[1]: Started Session 13 of User zuul.
Feb 25 06:46:32 np0005629333 python3.9[59656]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:46:33 np0005629333 python3.9[59809]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:46:34 np0005629333 python3.9[59933]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772019992.7900581-29-138561723601529/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:46:34 np0005629333 systemd[1]: session-13.scope: Deactivated successfully.
Feb 25 06:46:34 np0005629333 systemd[1]: session-13.scope: Consumed 1.491s CPU time.
Feb 25 06:46:34 np0005629333 systemd-logind[811]: Session 13 logged out. Waiting for processes to exit.
Feb 25 06:46:34 np0005629333 systemd-logind[811]: Removed session 13.
Feb 25 06:46:39 np0005629333 systemd-logind[811]: New session 14 of user zuul.
Feb 25 06:46:39 np0005629333 systemd[1]: Started Session 14 of User zuul.
Feb 25 06:46:40 np0005629333 python3.9[60111]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:46:41 np0005629333 python3.9[60268]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:46:42 np0005629333 python3.9[60444]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:46:43 np0005629333 python3.9[60568]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1772020001.8584409-36-196270470969297/.source.json _original_basename=.79_b6vri follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:46:44 np0005629333 python3.9[60721]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:46:44 np0005629333 python3.9[60845]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772020003.7379913-59-14937426161684/.source _original_basename=._af3tw2m follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:46:45 np0005629333 python3.9[60998]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:46:46 np0005629333 python3.9[61151]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:46:46 np0005629333 python3.9[61275]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772020005.5930457-83-116670452728809/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:46:47 np0005629333 python3.9[61428]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:46:47 np0005629333 python3.9[61552]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772020006.8074431-83-274306287888546/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:46:48 np0005629333 python3.9[61705]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:46:49 np0005629333 python3.9[61858]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:46:49 np0005629333 python3.9[61982]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020008.6969635-120-245582189601811/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:46:50 np0005629333 python3.9[62135]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:46:50 np0005629333 python3.9[62259]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020009.8642328-135-100874575206612/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:46:51 np0005629333 python3.9[62412]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 06:46:51 np0005629333 systemd[1]: Reloading.
Feb 25 06:46:51 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:46:51 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:46:52 np0005629333 systemd[1]: Reloading.
Feb 25 06:46:52 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:46:52 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:46:52 np0005629333 systemd[1]: Starting EDPM Container Shutdown...
Feb 25 06:46:52 np0005629333 systemd[1]: Finished EDPM Container Shutdown.
Feb 25 06:46:52 np0005629333 python3.9[62654]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:46:53 np0005629333 python3.9[62778]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020012.4625926-158-101850414918570/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:46:54 np0005629333 python3.9[62931]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:46:54 np0005629333 python3.9[63055]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020013.6512043-173-275761275328253/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:46:55 np0005629333 python3.9[63208]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 06:46:55 np0005629333 systemd[1]: Reloading.
Feb 25 06:46:55 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:46:55 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:46:55 np0005629333 systemd[1]: Reloading.
Feb 25 06:46:55 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:46:55 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:46:56 np0005629333 systemd[1]: Starting Create netns directory...
Feb 25 06:46:56 np0005629333 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 25 06:46:56 np0005629333 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 25 06:46:56 np0005629333 systemd[1]: Finished Create netns directory.
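netns-placeholder is a oneshot whose job is to make sure the /run/netns mount hierarchy exists before any container needs it; the transient run-netns-placeholder.mount above (systemd's escaping of /run/netns/placeholder) shows it added and then removed a namespace literally named "placeholder". The unit body is not logged; one plausible shape, with the namespace name inferred from that mount unit and everything else a guess:

- name: Install the netns-placeholder oneshot (unit body is a guess)
  ansible.builtin.copy:
    dest: /etc/systemd/system/netns-placeholder.service
    mode: "0644"
    content: |
      [Unit]
      Description=Create netns directory

      [Service]
      Type=oneshot
      ExecStart=/usr/sbin/ip netns add placeholder
      ExecStart=/usr/sbin/ip netns delete placeholder

      [Install]
      WantedBy=multi-user.target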
Feb 25 06:46:56 np0005629333 python3.9[63447]: ansible-ansible.builtin.service_facts Invoked
Feb 25 06:46:56 np0005629333 network[63464]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 25 06:46:56 np0005629333 network[63465]: 'network-scripts' will be removed from distribution in near future.
Feb 25 06:46:56 np0005629333 network[63466]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 25 06:47:00 np0005629333 python3.9[63730]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 06:47:00 np0005629333 systemd[1]: Reloading.
Feb 25 06:47:00 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:47:00 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:47:00 np0005629333 systemd[1]: Stopping IPv4 firewall with iptables...
Feb 25 06:47:00 np0005629333 iptables.init[63778]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Feb 25 06:47:00 np0005629333 iptables.init[63778]: iptables: Flushing firewall rules: [  OK  ]
Feb 25 06:47:00 np0005629333 systemd[1]: iptables.service: Deactivated successfully.
Feb 25 06:47:00 np0005629333 systemd[1]: Stopped IPv4 firewall with iptables.
Feb 25 06:47:01 np0005629333 python3.9[63975]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 06:47:02 np0005629333 python3.9[64130]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 06:47:02 np0005629333 systemd[1]: Reloading.
Feb 25 06:47:02 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:47:02 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:47:02 np0005629333 systemd[1]: Starting Netfilter Tables...
Feb 25 06:47:02 np0005629333 systemd[1]: Finished Netfilter Tables.
Feb 25 06:47:03 np0005629333 python3.9[64330]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:47:04 np0005629333 python3.9[64484]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:47:05 np0005629333 python3.9[64610]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1772020023.9366636-242-266296197110112/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:47:05 np0005629333 python3.9[64764]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 25 06:47:05 np0005629333 systemd[1]: Reloading OpenSSH server daemon...
Feb 25 06:47:05 np0005629333 systemd[1]: Reloaded OpenSSH server daemon.
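The sshd_config deploy above uses the copy action's validate hook (validate=/usr/sbin/sshd -T -f %s): the candidate file is test-parsed by sshd before it replaces the live config, and only then is the daemon reloaded. The _original_basename shows it came from a rendered template, so as a task it would look roughly like:

- name: Deploy sshd_config, refusing files sshd cannot parse
  ansible.builtin.template:
    src: sshd_config_block.j2    # template name taken from the log above
    dest: /etc/ssh/sshd_config
    mode: "0600"
    validate: /usr/sbin/sshd -T -f %s

- name: Pick up the new config without dropping live connections
  ansible.builtin.systemd:
    name: sshd
    state: reloaded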
Feb 25 06:47:06 np0005629333 python3.9[64921]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:47:07 np0005629333 python3.9[65074]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:47:07 np0005629333 python3.9[65198]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020026.703553-273-182166019803440/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:47:08 np0005629333 python3.9[65351]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb 25 06:47:08 np0005629333 systemd[1]: Starting Time & Date Service...
Feb 25 06:47:08 np0005629333 systemd[1]: Started Time & Date Service.
Feb 25 06:47:09 np0005629333 python3.9[65508]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:47:10 np0005629333 python3.9[65661]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:47:10 np0005629333 python3.9[65785]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772020029.7155466-308-111332408485446/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:47:11 np0005629333 python3.9[65938]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:47:11 np0005629333 python3.9[66062]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772020030.9041102-323-76609154518442/.source.yaml _original_basename=.71suaq51 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:47:12 np0005629333 python3.9[66215]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:47:13 np0005629333 python3.9[66339]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020032.1836076-338-178878428024172/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:47:14 np0005629333 python3.9[66492]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:47:14 np0005629333 python3.9[66646]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:47:15 np0005629333 python3[66800]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 25 06:47:16 np0005629333 python3.9[66953]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:47:17 np0005629333 python3.9[67077]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020035.9619818-377-7445444078390/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:47:17 np0005629333 python3.9[67230]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:47:18 np0005629333 python3.9[67354]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020037.2130423-392-233912275696639/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:47:18 np0005629333 python3.9[67507]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:47:19 np0005629333 python3.9[67631]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020038.479348-407-200962747562297/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:47:20 np0005629333 python3.9[67784]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:47:20 np0005629333 python3.9[67908]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020039.7032332-422-53728148320299/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:47:21 np0005629333 python3.9[68061]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:47:22 np0005629333 python3.9[68185]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020041.0602849-437-17104024483120/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:47:22 np0005629333 python3.9[68338]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:47:23 np0005629333 python3.9[68491]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:47:24 np0005629333 python3.9[68651]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
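In the blockinfile call above, #012 is journald's escaped newline; decoded, the block drops four include lines into /etc/sysconfig/nftables.conf, validated with nft -c -f %s before the write, so the EDPM chains, rules, and jumps load with the nftables service at boot. Reconstructed:

- name: Load the EDPM rule files from the boot-time ruleset
  ansible.builtin.blockinfile:
    path: /etc/sysconfig/nftables.conf
    marker: "# {mark} ANSIBLE MANAGED BLOCK"
    validate: nft -c -f %s
    block: |
      include "/etc/nftables/iptables.nft"
      include "/etc/nftables/edpm-chains.nft"
      include "/etc/nftables/edpm-rules.nft"
      include "/etc/nftables/edpm-jumps.nft"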
Feb 25 06:47:25 np0005629333 python3.9[68805]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:47:25 np0005629333 python3.9[68958]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:47:26 np0005629333 python3.9[69111]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb 25 06:47:27 np0005629333 python3.9[69265]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
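state=mounted both mounts the filesystem immediately and persists it to /etc/fstab, so each of the two calls above yields a live hugetlbfs mount plus a line like "none /dev/hugepages1G hugetlbfs pagesize=1G 0 0". Reconstructed from the logged arguments:

- name: Mount the 1 GiB hugepage pool and persist it in fstab
  ansible.posix.mount:
    path: /dev/hugepages1G
    src: none
    fstype: hugetlbfs
    opts: pagesize=1G
    state: mounted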
Feb 25 06:47:27 np0005629333 systemd[1]: session-14.scope: Deactivated successfully.
Feb 25 06:47:27 np0005629333 systemd[1]: session-14.scope: Consumed 33.475s CPU time.
Feb 25 06:47:27 np0005629333 systemd-logind[811]: Session 14 logged out. Waiting for processes to exit.
Feb 25 06:47:27 np0005629333 systemd-logind[811]: Removed session 14.
Feb 25 06:47:34 np0005629333 systemd-logind[811]: New session 15 of user zuul.
Feb 25 06:47:34 np0005629333 systemd[1]: Started Session 15 of User zuul.
Feb 25 06:47:35 np0005629333 python3.9[69447]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Feb 25 06:47:35 np0005629333 python3.9[69600]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:47:36 np0005629333 python3.9[69753]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:47:37 np0005629333 python3.9[69906]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4pR0m4uDL6wpjeVsrE8exl2aQkXR/YRBYcrcquH8gsUfPg4Ic7ybncTg3FmomNbKuxFaXek0Tpc2LkgxfxoRVwUAoTsR8v75h5ax8+6Hr5kY922KQAzI2mFhkzXq6Iob2zMQtuYN7wdsX3onHLxyW+7tqo5S1CdPjNGt7lm5ibexJBMeBXylEOipYDisejedLmtOq/ufb/Bbq0QB59hKBth3Uod5kioIyV8qYySkchpFqkqoNBuQb3GodRHNQw597M28Tl4xbfSsBJ+WqwFzdceYcJ/Zbkx5fwOW5q9s48ovP26W7lABtWL6BTvCm2haeVJZBzwDX810hnDSLCH/XcHYBIbiqkujg85xKpFvJRgpDJtPQhx54nFXgyzYNfp+nlkFmhwekSMtjVLXTmTH24ZcokuyX9RlfQyS8r6T8HVnm1lWPc/cum+VNGCrQlboravmcQlrRcEJMmiUVuEzBRKHPSNP0gInw1Q+Bdc2E5zPaW2Lk46FmG/rDKwJ9Tf0=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAC0awy++nKN/asFVCLXvC5FnoQIdfAusWioUMLgrnzM#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFYNEIvBoC1IvH2XK5OeqRtYZ0DLH1Lh2Hwd9tGMJ+mVeJr3NcevvvZCuSJTYgl9EaFFWV9s06MQ2T5blzuQtD4=#012 create=True mode=0644 path=/tmp/ansible.lz38pfi5 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:47:38 np0005629333 python3.9[70059]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.lz38pfi5' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:47:38 np0005629333 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 25 06:47:39 np0005629333 python3.9[70216]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.lz38pfi5 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
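Session 15 assembles /etc/ssh/ssh_known_hosts out of band: a temp file is filled with the gathered host keys, then copied over the destination with cat rather than mv, presumably so the destination's inode and SELinux context are preserved. A sketch of the flow:

    tmp=$(mktemp /tmp/ansible.XXXXXXXX)     # the run above got /tmp/ansible.lz38pfi5
    # blockinfile then fills $tmp with the gathered rsa/ed25519/ecdsa host keys,
    # wrapped in "# BEGIN/END ANSIBLE MANAGED BLOCK" markers
    cat "$tmp" > /etc/ssh/ssh_known_hosts   # truncate-and-write keeps the destination inode
    rm -f "$tmp"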
Feb 25 06:47:39 np0005629333 systemd[1]: session-15.scope: Deactivated successfully.
Feb 25 06:47:39 np0005629333 systemd[1]: session-15.scope: Consumed 3.174s CPU time.
Feb 25 06:47:39 np0005629333 systemd-logind[811]: Session 15 logged out. Waiting for processes to exit.
Feb 25 06:47:39 np0005629333 systemd-logind[811]: Removed session 15.
Feb 25 06:47:45 np0005629333 systemd-logind[811]: New session 16 of user zuul.
Feb 25 06:47:45 np0005629333 systemd[1]: Started Session 16 of User zuul.
Feb 25 06:47:46 np0005629333 python3.9[70394]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:47:47 np0005629333 python3.9[70551]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 25 06:47:48 np0005629333 python3.9[70706]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 25 06:47:48 np0005629333 python3.9[70860]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:47:49 np0005629333 python3.9[71014]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:47:50 np0005629333 python3.9[71169]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:47:51 np0005629333 python3.9[71325]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
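Session 16 closes the loop on the marker file touched at 06:47:22: the chains file is always (re)applied, the flush/rules/jump fragments are reloaded only when the .changed marker exists, and the marker is then cleared. The conditional itself is inferred (only the stat and the commands around it are logged), but the logic amounts to:

    nft -f /etc/nftables/edpm-chains.nft    # ensure chains exist; idempotent
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi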
Feb 25 06:47:51 np0005629333 systemd[1]: session-16.scope: Deactivated successfully.
Feb 25 06:47:51 np0005629333 systemd[1]: session-16.scope: Consumed 4.099s CPU time.
Feb 25 06:47:51 np0005629333 systemd-logind[811]: Session 16 logged out. Waiting for processes to exit.
Feb 25 06:47:51 np0005629333 systemd-logind[811]: Removed session 16.
Feb 25 06:47:56 np0005629333 systemd-logind[811]: New session 17 of user zuul.
Feb 25 06:47:56 np0005629333 systemd[1]: Started Session 17 of User zuul.
Feb 25 06:47:57 np0005629333 python3.9[71503]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:47:58 np0005629333 python3.9[71660]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 25 06:47:59 np0005629333 python3.9[71745]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 25 06:48:01 np0005629333 python3.9[71896]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:48:02 np0005629333 python3.9[72047]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
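yum-utils is installed here for needs-restarting, whose -r mode reports via exit code whether a full reboot is required; the find on /var/lib/openstack/reboot_required/ appears to be a second, file-based signal for the same decision. As plain commands:

    dnf -y install yum-utils               # provides needs-restarting
    needs-restarting -r                    # exit 0: no reboot needed; exit 1: reboot required
    find /var/lib/openstack/reboot_required/ -type f   # any file here also flags a pending reboot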
Feb 25 06:48:03 np0005629333 python3.9[72197]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:48:03 np0005629333 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 06:48:03 np0005629333 python3.9[72348]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:48:04 np0005629333 systemd[1]: session-17.scope: Deactivated successfully.
Feb 25 06:48:04 np0005629333 systemd[1]: session-17.scope: Consumed 5.471s CPU time.
Feb 25 06:48:04 np0005629333 systemd-logind[811]: Session 17 logged out. Waiting for processes to exit.
Feb 25 06:48:04 np0005629333 systemd-logind[811]: Removed session 17.
Feb 25 06:48:11 np0005629333 systemd-logind[811]: New session 18 of user zuul.
Feb 25 06:48:11 np0005629333 systemd[1]: Started Session 18 of User zuul.
Feb 25 06:48:17 np0005629333 python3[73114]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 25 06:48:17 np0005629333 python3[73150]: ansible-ansible.legacy.dnf Invoked with name=['jq'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 25 06:48:19 np0005629333 python3[73177]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 25 06:48:21 np0005629333 python3[73234]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
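The three dnf tasks above stage the Ceph tooling in order; as plain commands:

    dnf -y install jq
    dnf -y install centos-release-ceph-tentacle   # release package enabling the CentOS Storage SIG repo for Ceph Tentacle
    dnf -y install cephadm                        # installed from the repo enabled above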
Feb 25 06:48:25 np0005629333 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 25 06:48:25 np0005629333 systemd[1]: Starting man-db-cache-update.service...
Feb 25 06:48:25 np0005629333 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 25 06:48:25 np0005629333 systemd[1]: Finished man-db-cache-update.service.
Feb 25 06:48:25 np0005629333 systemd[1]: run-r266c9c5aa339474ebac879321170296d.service: Deactivated successfully.
Feb 25 06:48:25 np0005629333 python3[73358]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 25 06:48:26 np0005629333 python3[73387]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:48:27 np0005629333 python3[73482]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 25 06:48:28 np0005629333 python3[73509]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 25 06:48:29 np0005629333 python3[73535]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:48:29 np0005629333 kernel: loop: module loaded
Feb 25 06:48:29 np0005629333 kernel: loop3: detected capacity change from 0 to 41943040
Feb 25 06:48:29 np0005629333 python3[73572]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:48:29 np0005629333 lvm[73575]: PV /dev/loop3 not used.
Feb 25 06:48:29 np0005629333 lvm[73577]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 06:48:29 np0005629333 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Feb 25 06:48:29 np0005629333 lvm[73586]:  1 logical volume(s) in volume group "ceph_vg0" now active
Feb 25 06:48:29 np0005629333 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Feb 25 06:48:30 np0005629333 python3[73664]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 25 06:48:30 np0005629333 python3[73737]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772020110.0548923-36995-218548003110480/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:48:31 np0005629333 python3[73787]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 06:48:31 np0005629333 systemd[1]: Reloading.
Feb 25 06:48:31 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:48:31 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:48:31 np0005629333 systemd[1]: Starting Ceph OSD losetup...
Feb 25 06:48:31 np0005629333 bash[73833]: /dev/loop3: [64513]:4329461 (/var/lib/ceph-osd-0.img)
Feb 25 06:48:31 np0005629333 systemd[1]: Finished Ceph OSD losetup.
Feb 25 06:48:31 np0005629333 lvm[73834]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 06:48:31 np0005629333 lvm[73834]: VG ceph_vg0 finished
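Each "OSD disk" here is a sparse file on a loop device: dd with count=0 seek=20G writes no data and merely sets the file length to 20 GiB, losetup attaches it, and LVM carves it into a single LV (the vgcreate immediately triggers the systemd autoactivation events seen above). The ceph-osd-losetup-0.service unit (its content is not logged) evidently re-runs losetup at boot, since loop attachments do not survive a reboot. The loop3 steps, joined from the #012-separated command strings above:

    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G   # sparse 20 GiB backing file
    losetup /dev/loop3 /var/lib/ceph-osd-0.img
    pvcreate /dev/loop3
    vgcreate ceph_vg0 /dev/loop3
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0                         # one LV spanning the whole VG
    lvs

The same sequence repeats below for /dev/loop4 (ceph_vg1) and /dev/loop5 (ceph_vg2).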
Feb 25 06:48:32 np0005629333 python3[73860]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 25 06:48:33 np0005629333 python3[73887]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 25 06:48:33 np0005629333 python3[73913]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:48:33 np0005629333 kernel: loop4: detected capacity change from 0 to 41943040
Feb 25 06:48:34 np0005629333 python3[73945]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:48:34 np0005629333 lvm[73948]: PV /dev/loop4 not used.
Feb 25 06:48:34 np0005629333 lvm[73958]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 06:48:34 np0005629333 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Feb 25 06:48:34 np0005629333 lvm[73960]:  1 logical volume(s) in volume group "ceph_vg1" now active
Feb 25 06:48:34 np0005629333 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Feb 25 06:48:34 np0005629333 python3[74039]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 25 06:48:35 np0005629333 python3[74112]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772020114.5834606-37022-251630316295515/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:48:35 np0005629333 chronyd[59474]: Selected source 23.133.168.244 (pool.ntp.org)
Feb 25 06:48:35 np0005629333 python3[74162]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 06:48:35 np0005629333 systemd[1]: Reloading.
Feb 25 06:48:35 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:48:35 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:48:35 np0005629333 systemd[1]: Starting Ceph OSD losetup...
Feb 25 06:48:35 np0005629333 bash[74210]: /dev/loop4: [64513]:4329463 (/var/lib/ceph-osd-1.img)
Feb 25 06:48:35 np0005629333 systemd[1]: Finished Ceph OSD losetup.
Feb 25 06:48:36 np0005629333 lvm[74211]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 06:48:36 np0005629333 lvm[74211]: VG ceph_vg1 finished
Feb 25 06:48:36 np0005629333 python3[74237]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 25 06:48:37 np0005629333 python3[74264]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 25 06:48:38 np0005629333 python3[74290]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:48:38 np0005629333 kernel: loop5: detected capacity change from 0 to 41943040
Feb 25 06:48:38 np0005629333 python3[74322]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:48:38 np0005629333 lvm[74325]: PV /dev/loop5 not used.
Feb 25 06:48:38 np0005629333 lvm[74334]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 06:48:38 np0005629333 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Feb 25 06:48:38 np0005629333 lvm[74336]:  1 logical volume(s) in volume group "ceph_vg2" now active
Feb 25 06:48:38 np0005629333 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Feb 25 06:48:39 np0005629333 python3[74414]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 25 06:48:39 np0005629333 python3[74487]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772020118.8361375-37049-152017450654165/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:48:39 np0005629333 python3[74537]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 06:48:39 np0005629333 systemd[1]: Reloading.
Feb 25 06:48:40 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:48:40 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:48:40 np0005629333 systemd[1]: Starting Ceph OSD losetup...
Feb 25 06:48:40 np0005629333 bash[74584]: /dev/loop5: [64513]:4997355 (/var/lib/ceph-osd-2.img)
Feb 25 06:48:40 np0005629333 systemd[1]: Finished Ceph OSD losetup.
Feb 25 06:48:40 np0005629333 lvm[74585]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 06:48:40 np0005629333 lvm[74585]: VG ceph_vg2 finished
Feb 25 06:48:42 np0005629333 python3[74609]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:48:44 np0005629333 python3[74702]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:48:45 np0005629333 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 25 06:48:45 np0005629333 python3[74741]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:48:46 np0005629333 python3[74767]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:48:46 np0005629333 python3[74845]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 25 06:48:47 np0005629333 python3[74918]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772020126.5393445-37184-109661912425960/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:48:47 np0005629333 python3[75020]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 25 06:48:48 np0005629333 python3[75093]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772020127.4936585-37202-164508813841938/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:48:48 np0005629333 python3[75143]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 25 06:48:48 np0005629333 python3[75171]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 25 06:48:49 np0005629333 python3[75199]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 25 06:48:49 np0005629333 python3[75225]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 25 06:48:49 np0005629333 python3[75251]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
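The bootstrap command above, with the #012 line break and the stray backslash escapes from the raw params normalized for readability (all flags verbatim from the log):

    /usr/sbin/cephadm bootstrap \
        --skip-firewalld \
        --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
        --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
        --ssh-user ceph-admin \
        --allow-fqdn-hostname \
        --output-keyring /etc/ceph/ceph.client.admin.keyring \
        --output-config /etc/ceph/ceph.conf \
        --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 \
        --config /home/ceph-admin/assimilate_ceph.conf \
        --single-host-defaults \
        --skip-monitoring-stack \
        --skip-dashboard \
        --mon-ip 192.168.122.100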
Feb 25 06:48:50 np0005629333 systemd[1]: Created slice User Slice of UID 42477.
Feb 25 06:48:50 np0005629333 systemd[1]: Starting User Runtime Directory /run/user/42477...
Feb 25 06:48:50 np0005629333 systemd-logind[811]: New session 19 of user ceph-admin.
Feb 25 06:48:50 np0005629333 systemd[1]: Finished User Runtime Directory /run/user/42477.
Feb 25 06:48:50 np0005629333 systemd[1]: Starting User Manager for UID 42477...
Feb 25 06:48:50 np0005629333 systemd[75259]: Queued start job for default target Main User Target.
Feb 25 06:48:50 np0005629333 systemd[75259]: Created slice User Application Slice.
Feb 25 06:48:50 np0005629333 systemd[75259]: Started Mark boot as successful after the user session has run 2 minutes.
Feb 25 06:48:50 np0005629333 systemd[75259]: Started Daily Cleanup of User's Temporary Directories.
Feb 25 06:48:50 np0005629333 systemd[75259]: Reached target Paths.
Feb 25 06:48:50 np0005629333 systemd[75259]: Reached target Timers.
Feb 25 06:48:50 np0005629333 systemd[75259]: Starting D-Bus User Message Bus Socket...
Feb 25 06:48:50 np0005629333 systemd[75259]: Starting Create User's Volatile Files and Directories...
Feb 25 06:48:50 np0005629333 systemd[75259]: Finished Create User's Volatile Files and Directories.
Feb 25 06:48:50 np0005629333 systemd[75259]: Listening on D-Bus User Message Bus Socket.
Feb 25 06:48:50 np0005629333 systemd[75259]: Reached target Sockets.
Feb 25 06:48:50 np0005629333 systemd[75259]: Reached target Basic System.
Feb 25 06:48:50 np0005629333 systemd[75259]: Reached target Main User Target.
Feb 25 06:48:50 np0005629333 systemd[75259]: Startup finished in 94ms.
Feb 25 06:48:50 np0005629333 systemd[1]: Started User Manager for UID 42477.
Feb 25 06:48:50 np0005629333 systemd[1]: Started Session 19 of User ceph-admin.
Feb 25 06:48:50 np0005629333 systemd[1]: session-19.scope: Deactivated successfully.
Feb 25 06:48:50 np0005629333 systemd-logind[811]: Session 19 logged out. Waiting for processes to exit.
Feb 25 06:48:50 np0005629333 systemd-logind[811]: Removed session 19.
Feb 25 06:48:50 np0005629333 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 25 06:48:50 np0005629333 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 25 06:48:52 np0005629333 systemd[1]: var-lib-containers-storage-overlay-compat1303362286-merged.mount: Deactivated successfully.
Feb 25 06:48:52 np0005629333 systemd[1]: var-lib-containers-storage-overlay-compat1303362286-lower\x2dmapped.mount: Deactivated successfully.
Feb 25 06:49:00 np0005629333 systemd[1]: Stopping User Manager for UID 42477...
Feb 25 06:49:00 np0005629333 systemd[75259]: Activating special unit Exit the Session...
Feb 25 06:49:00 np0005629333 systemd[75259]: Stopped target Main User Target.
Feb 25 06:49:00 np0005629333 systemd[75259]: Stopped target Basic System.
Feb 25 06:49:00 np0005629333 systemd[75259]: Stopped target Paths.
Feb 25 06:49:00 np0005629333 systemd[75259]: Stopped target Sockets.
Feb 25 06:49:00 np0005629333 systemd[75259]: Stopped target Timers.
Feb 25 06:49:00 np0005629333 systemd[75259]: Stopped Mark boot as successful after the user session has run 2 minutes.
Feb 25 06:49:00 np0005629333 systemd[75259]: Stopped Daily Cleanup of User's Temporary Directories.
Feb 25 06:49:00 np0005629333 systemd[75259]: Closed D-Bus User Message Bus Socket.
Feb 25 06:49:00 np0005629333 systemd[75259]: Stopped Create User's Volatile Files and Directories.
Feb 25 06:49:00 np0005629333 systemd[75259]: Removed slice User Application Slice.
Feb 25 06:49:00 np0005629333 systemd[75259]: Reached target Shutdown.
Feb 25 06:49:00 np0005629333 systemd[75259]: Finished Exit the Session.
Feb 25 06:49:00 np0005629333 systemd[75259]: Reached target Exit the Session.
Feb 25 06:49:00 np0005629333 systemd[1]: user@42477.service: Deactivated successfully.
Feb 25 06:49:00 np0005629333 systemd[1]: Stopped User Manager for UID 42477.
Feb 25 06:49:00 np0005629333 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Feb 25 06:49:00 np0005629333 systemd[1]: run-user-42477.mount: Deactivated successfully.
Feb 25 06:49:00 np0005629333 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Feb 25 06:49:00 np0005629333 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Feb 25 06:49:00 np0005629333 systemd[1]: Removed slice User Slice of UID 42477.
Feb 25 06:49:06 np0005629333 podman[75354]: 2026-02-25 11:49:06.720397919 +0000 UTC m=+16.040087234 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:06 np0005629333 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 25 06:49:06 np0005629333 podman[75416]: 2026-02-25 11:49:06.794011563 +0000 UTC m=+0.043697364 container create 16d89a9e501caf0fd0baf6fa4a08d04fda9a2c20874a4c1a284390da0e4c3a82 (image=quay.io/ceph/ceph:v20, name=keen_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Feb 25 06:49:06 np0005629333 systemd[1]: Created slice Virtual Machine and Container Slice.
Feb 25 06:49:06 np0005629333 systemd[1]: Started libpod-conmon-16d89a9e501caf0fd0baf6fa4a08d04fda9a2c20874a4c1a284390da0e4c3a82.scope.
Feb 25 06:49:06 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:06 np0005629333 podman[75416]: 2026-02-25 11:49:06.775484486 +0000 UTC m=+0.025170297 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:06 np0005629333 podman[75416]: 2026-02-25 11:49:06.906348227 +0000 UTC m=+0.156034058 container init 16d89a9e501caf0fd0baf6fa4a08d04fda9a2c20874a4c1a284390da0e4c3a82 (image=quay.io/ceph/ceph:v20, name=keen_jackson, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:49:06 np0005629333 podman[75416]: 2026-02-25 11:49:06.915075505 +0000 UTC m=+0.164761316 container start 16d89a9e501caf0fd0baf6fa4a08d04fda9a2c20874a4c1a284390da0e4c3a82 (image=quay.io/ceph/ceph:v20, name=keen_jackson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 06:49:06 np0005629333 podman[75416]: 2026-02-25 11:49:06.919246494 +0000 UTC m=+0.168932375 container attach 16d89a9e501caf0fd0baf6fa4a08d04fda9a2c20874a4c1a284390da0e4c3a82 (image=quay.io/ceph/ceph:v20, name=keen_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 06:49:07 np0005629333 keen_jackson[75433]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Feb 25 06:49:07 np0005629333 systemd[1]: libpod-16d89a9e501caf0fd0baf6fa4a08d04fda9a2c20874a4c1a284390da0e4c3a82.scope: Deactivated successfully.
Feb 25 06:49:07 np0005629333 podman[75416]: 2026-02-25 11:49:07.016760017 +0000 UTC m=+0.266445818 container died 16d89a9e501caf0fd0baf6fa4a08d04fda9a2c20874a4c1a284390da0e4c3a82 (image=quay.io/ceph/ceph:v20, name=keen_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:49:07 np0005629333 systemd[1]: var-lib-containers-storage-overlay-06e8af7bb91a92e6f2c5f2ab125d5ef442183ddd038c6ad859745d2a9b89f975-merged.mount: Deactivated successfully.
Feb 25 06:49:07 np0005629333 podman[75416]: 2026-02-25 11:49:07.057498155 +0000 UTC m=+0.307183946 container remove 16d89a9e501caf0fd0baf6fa4a08d04fda9a2c20874a4c1a284390da0e4c3a82 (image=quay.io/ceph/ceph:v20, name=keen_jackson, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 06:49:07 np0005629333 systemd[1]: libpod-conmon-16d89a9e501caf0fd0baf6fa4a08d04fda9a2c20874a4c1a284390da0e4c3a82.scope: Deactivated successfully.
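The keen_jackson container lives for well under a second and leaves only a version string on the console; this is cephadm probing the pulled image before bootstrapping. A rough equivalent (an assumption; cephadm's actual invocation sets an explicit entrypoint and extra mounts):

    podman run --rm quay.io/ceph/ceph:v20 ceph --version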
Feb 25 06:49:07 np0005629333 podman[75450]: 2026-02-25 11:49:07.127022782 +0000 UTC m=+0.048585052 container create 5100907163c1e0e17df50e3e5ce1336e925a1f3a45cb05ac2a99a97fca17411b (image=quay.io/ceph/ceph:v20, name=jovial_montalcini, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:49:07 np0005629333 systemd[1]: Started libpod-conmon-5100907163c1e0e17df50e3e5ce1336e925a1f3a45cb05ac2a99a97fca17411b.scope.
Feb 25 06:49:07 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:07 np0005629333 podman[75450]: 2026-02-25 11:49:07.191091864 +0000 UTC m=+0.112654154 container init 5100907163c1e0e17df50e3e5ce1336e925a1f3a45cb05ac2a99a97fca17411b (image=quay.io/ceph/ceph:v20, name=jovial_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:49:07 np0005629333 podman[75450]: 2026-02-25 11:49:07.197790285 +0000 UTC m=+0.119352565 container start 5100907163c1e0e17df50e3e5ce1336e925a1f3a45cb05ac2a99a97fca17411b (image=quay.io/ceph/ceph:v20, name=jovial_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:49:07 np0005629333 podman[75450]: 2026-02-25 11:49:07.105966154 +0000 UTC m=+0.027528444 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:07 np0005629333 jovial_montalcini[75466]: 167 167
Feb 25 06:49:07 np0005629333 systemd[1]: libpod-5100907163c1e0e17df50e3e5ce1336e925a1f3a45cb05ac2a99a97fca17411b.scope: Deactivated successfully.
Feb 25 06:49:07 np0005629333 podman[75450]: 2026-02-25 11:49:07.203016384 +0000 UTC m=+0.124578654 container attach 5100907163c1e0e17df50e3e5ce1336e925a1f3a45cb05ac2a99a97fca17411b (image=quay.io/ceph/ceph:v20, name=jovial_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 06:49:07 np0005629333 podman[75450]: 2026-02-25 11:49:07.203640541 +0000 UTC m=+0.125202811 container died 5100907163c1e0e17df50e3e5ce1336e925a1f3a45cb05ac2a99a97fca17411b (image=quay.io/ceph/ceph:v20, name=jovial_montalcini, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 06:49:07 np0005629333 podman[75450]: 2026-02-25 11:49:07.241379914 +0000 UTC m=+0.162942184 container remove 5100907163c1e0e17df50e3e5ce1336e925a1f3a45cb05ac2a99a97fca17411b (image=quay.io/ceph/ceph:v20, name=jovial_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:49:07 np0005629333 systemd[1]: libpod-conmon-5100907163c1e0e17df50e3e5ce1336e925a1f3a45cb05ac2a99a97fca17411b.scope: Deactivated successfully.
Feb 25 06:49:07 np0005629333 podman[75482]: 2026-02-25 11:49:07.317300133 +0000 UTC m=+0.054161361 container create 3af8798525c637323d0c982b3292ea2ffa5b77a035b20137a11f1267077399b5 (image=quay.io/ceph/ceph:v20, name=goofy_stonebraker, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 06:49:07 np0005629333 systemd[1]: Started libpod-conmon-3af8798525c637323d0c982b3292ea2ffa5b77a035b20137a11f1267077399b5.scope.
Feb 25 06:49:07 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:07 np0005629333 podman[75482]: 2026-02-25 11:49:07.378627497 +0000 UTC m=+0.115488755 container init 3af8798525c637323d0c982b3292ea2ffa5b77a035b20137a11f1267077399b5 (image=quay.io/ceph/ceph:v20, name=goofy_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:49:07 np0005629333 podman[75482]: 2026-02-25 11:49:07.385593755 +0000 UTC m=+0.122454943 container start 3af8798525c637323d0c982b3292ea2ffa5b77a035b20137a11f1267077399b5 (image=quay.io/ceph/ceph:v20, name=goofy_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 06:49:07 np0005629333 podman[75482]: 2026-02-25 11:49:07.294509675 +0000 UTC m=+0.031370933 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:07 np0005629333 podman[75482]: 2026-02-25 11:49:07.389482216 +0000 UTC m=+0.126343484 container attach 3af8798525c637323d0c982b3292ea2ffa5b77a035b20137a11f1267077399b5 (image=quay.io/ceph/ceph:v20, name=goofy_stonebraker, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 06:49:07 np0005629333 goofy_stonebraker[75499]: AQCz4Z5pAdjVGBAAF6Y//l8nY/gwnJdNkvK3CQ==
Feb 25 06:49:07 np0005629333 systemd[1]: libpod-3af8798525c637323d0c982b3292ea2ffa5b77a035b20137a11f1267077399b5.scope: Deactivated successfully.
Feb 25 06:49:07 np0005629333 podman[75482]: 2026-02-25 11:49:07.420818717 +0000 UTC m=+0.157679945 container died 3af8798525c637323d0c982b3292ea2ffa5b77a035b20137a11f1267077399b5 (image=quay.io/ceph/ceph:v20, name=goofy_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 06:49:07 np0005629333 podman[75482]: 2026-02-25 11:49:07.471133718 +0000 UTC m=+0.207994936 container remove 3af8798525c637323d0c982b3292ea2ffa5b77a035b20137a11f1267077399b5 (image=quay.io/ceph/ceph:v20, name=goofy_stonebraker, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:49:07 np0005629333 systemd[1]: libpod-conmon-3af8798525c637323d0c982b3292ea2ffa5b77a035b20137a11f1267077399b5.scope: Deactivated successfully.
Feb 25 06:49:07 np0005629333 podman[75517]: 2026-02-25 11:49:07.552851232 +0000 UTC m=+0.060718288 container create 44040ca94b03391e4d36d6b9271bd642b2c0277673934888c0f7864847535866 (image=quay.io/ceph/ceph:v20, name=quirky_hugle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 06:49:07 np0005629333 systemd[1]: Started libpod-conmon-44040ca94b03391e4d36d6b9271bd642b2c0277673934888c0f7864847535866.scope.
Feb 25 06:49:07 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:07 np0005629333 podman[75517]: 2026-02-25 11:49:07.524874256 +0000 UTC m=+0.032741372 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:07 np0005629333 podman[75517]: 2026-02-25 11:49:07.632646181 +0000 UTC m=+0.140513227 container init 44040ca94b03391e4d36d6b9271bd642b2c0277673934888c0f7864847535866 (image=quay.io/ceph/ceph:v20, name=quirky_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 06:49:07 np0005629333 podman[75517]: 2026-02-25 11:49:07.639498466 +0000 UTC m=+0.147365502 container start 44040ca94b03391e4d36d6b9271bd642b2c0277673934888c0f7864847535866 (image=quay.io/ceph/ceph:v20, name=quirky_hugle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:49:07 np0005629333 podman[75517]: 2026-02-25 11:49:07.643763857 +0000 UTC m=+0.151630883 container attach 44040ca94b03391e4d36d6b9271bd642b2c0277673934888c0f7864847535866 (image=quay.io/ceph/ceph:v20, name=quirky_hugle, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:49:07 np0005629333 quirky_hugle[75534]: AQCz4Z5pdX3gJxAAK02Xk8rshwcogmrPTCUuBg==
Feb 25 06:49:07 np0005629333 systemd[1]: libpod-44040ca94b03391e4d36d6b9271bd642b2c0277673934888c0f7864847535866.scope: Deactivated successfully.
Feb 25 06:49:07 np0005629333 podman[75517]: 2026-02-25 11:49:07.673200804 +0000 UTC m=+0.181067870 container died 44040ca94b03391e4d36d6b9271bd642b2c0277673934888c0f7864847535866 (image=quay.io/ceph/ceph:v20, name=quirky_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:49:07 np0005629333 podman[75517]: 2026-02-25 11:49:07.738667816 +0000 UTC m=+0.246534882 container remove 44040ca94b03391e4d36d6b9271bd642b2c0277673934888c0f7864847535866 (image=quay.io/ceph/ceph:v20, name=quirky_hugle, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:49:07 np0005629333 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 25 06:49:07 np0005629333 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 25 06:49:07 np0005629333 systemd[1]: libpod-conmon-44040ca94b03391e4d36d6b9271bd642b2c0277673934888c0f7864847535866.scope: Deactivated successfully.
Feb 25 06:49:07 np0005629333 podman[75553]: 2026-02-25 11:49:07.817079605 +0000 UTC m=+0.056891858 container create 05baabb7beac189fdb02e2094cee3028a92073af56595fed8484b3301bbad975 (image=quay.io/ceph/ceph:v20, name=sharp_pike, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:49:07 np0005629333 systemd[1]: Started libpod-conmon-05baabb7beac189fdb02e2094cee3028a92073af56595fed8484b3301bbad975.scope.
Feb 25 06:49:07 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:07 np0005629333 podman[75553]: 2026-02-25 11:49:07.886960713 +0000 UTC m=+0.126772996 container init 05baabb7beac189fdb02e2094cee3028a92073af56595fed8484b3301bbad975 (image=quay.io/ceph/ceph:v20, name=sharp_pike, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:49:07 np0005629333 podman[75553]: 2026-02-25 11:49:07.792226599 +0000 UTC m=+0.032038872 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:07 np0005629333 podman[75553]: 2026-02-25 11:49:07.893284272 +0000 UTC m=+0.133096495 container start 05baabb7beac189fdb02e2094cee3028a92073af56595fed8484b3301bbad975 (image=quay.io/ceph/ceph:v20, name=sharp_pike, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 06:49:07 np0005629333 podman[75553]: 2026-02-25 11:49:07.897257705 +0000 UTC m=+0.137069918 container attach 05baabb7beac189fdb02e2094cee3028a92073af56595fed8484b3301bbad975 (image=quay.io/ceph/ceph:v20, name=sharp_pike, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 06:49:07 np0005629333 sharp_pike[75569]: AQCz4Z5pvm0JNxAAojgDCmIezTx1JuWZu4twQg==
Feb 25 06:49:07 np0005629333 systemd[1]: libpod-05baabb7beac189fdb02e2094cee3028a92073af56595fed8484b3301bbad975.scope: Deactivated successfully.
Feb 25 06:49:07 np0005629333 podman[75553]: 2026-02-25 11:49:07.927049523 +0000 UTC m=+0.166861766 container died 05baabb7beac189fdb02e2094cee3028a92073af56595fed8484b3301bbad975 (image=quay.io/ceph/ceph:v20, name=sharp_pike, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:49:07 np0005629333 podman[75553]: 2026-02-25 11:49:07.969126349 +0000 UTC m=+0.208938552 container remove 05baabb7beac189fdb02e2094cee3028a92073af56595fed8484b3301bbad975 (image=quay.io/ceph/ceph:v20, name=sharp_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:49:07 np0005629333 systemd[1]: libpod-conmon-05baabb7beac189fdb02e2094cee3028a92073af56595fed8484b3301bbad975.scope: Deactivated successfully.
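The two short-lived containers above (quirky_hugle, sharp_pike) are cephadm's one-shot pattern: create, init, start, attach, die, and be removed inside ~200 ms, with the container's only stdout being a base64 cephx secret (the "AQ..." lines). A minimal Python sketch of that pattern, assuming podman on PATH; the real command cephadm ran is not recorded in this log, so ceph-authtool's --gen-print-key (which prints a freshly generated cephx key) stands in for it:

    # Hypothetical sketch: run a throwaway container, capture its stdout,
    # and let --rm clean it up (matching the create/start/died/remove burst above).
    import subprocess

    def run_oneshot(image: str, argv: list[str]) -> str:
        """Run a container once, capture its stdout, auto-remove it (--rm)."""
        res = subprocess.run(
            ["podman", "run", "--rm", image, *argv],
            check=True, capture_output=True, text=True,
        )
        return res.stdout.strip()

    key = run_oneshot("quay.io/ceph/ceph:v20", ["ceph-authtool", "--gen-print-key"])
    print(key)  # a base64 "AQ..." secret, like the quirky_hugle/sharp_pike output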
Feb 25 06:49:08 np0005629333 podman[75588]: 2026-02-25 11:49:08.044857833 +0000 UTC m=+0.050883038 container create c7ad775ff45ea308cd75af657d3a41954efcb205c086dcedb63d34ba21900130 (image=quay.io/ceph/ceph:v20, name=adoring_hopper, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 06:49:08 np0005629333 systemd[1]: Started libpod-conmon-c7ad775ff45ea308cd75af657d3a41954efcb205c086dcedb63d34ba21900130.scope.
Feb 25 06:49:08 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:08 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0283b5a921e7b003897a1f65685c7be055259b031fd2a875306fdefc258d455c/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:08 np0005629333 podman[75588]: 2026-02-25 11:49:08.026810359 +0000 UTC m=+0.032835564 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:08 np0005629333 podman[75588]: 2026-02-25 11:49:08.14848797 +0000 UTC m=+0.154513215 container init c7ad775ff45ea308cd75af657d3a41954efcb205c086dcedb63d34ba21900130 (image=quay.io/ceph/ceph:v20, name=adoring_hopper, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:49:08 np0005629333 podman[75588]: 2026-02-25 11:49:08.153406579 +0000 UTC m=+0.159431754 container start c7ad775ff45ea308cd75af657d3a41954efcb205c086dcedb63d34ba21900130 (image=quay.io/ceph/ceph:v20, name=adoring_hopper, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:49:08 np0005629333 adoring_hopper[75605]: /usr/bin/monmaptool: monmap file /tmp/monmap
Feb 25 06:49:08 np0005629333 adoring_hopper[75605]: setting min_mon_release = tentacle
Feb 25 06:49:08 np0005629333 adoring_hopper[75605]: /usr/bin/monmaptool: set fsid to 8ac33163-6221-5d58-9a39-8b6933fe7762
Feb 25 06:49:08 np0005629333 adoring_hopper[75605]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Feb 25 06:49:08 np0005629333 systemd[1]: libpod-c7ad775ff45ea308cd75af657d3a41954efcb205c086dcedb63d34ba21900130.scope: Deactivated successfully.
Feb 25 06:49:08 np0005629333 podman[75588]: 2026-02-25 11:49:08.198836711 +0000 UTC m=+0.204861976 container attach c7ad775ff45ea308cd75af657d3a41954efcb205c086dcedb63d34ba21900130 (image=quay.io/ceph/ceph:v20, name=adoring_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:49:08 np0005629333 podman[75588]: 2026-02-25 11:49:08.199375847 +0000 UTC m=+0.205401052 container died c7ad775ff45ea308cd75af657d3a41954efcb205c086dcedb63d34ba21900130 (image=quay.io/ceph/ceph:v20, name=adoring_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 06:49:08 np0005629333 podman[75588]: 2026-02-25 11:49:08.334235832 +0000 UTC m=+0.340261007 container remove c7ad775ff45ea308cd75af657d3a41954efcb205c086dcedb63d34ba21900130 (image=quay.io/ceph/ceph:v20, name=adoring_hopper, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 06:49:08 np0005629333 systemd[1]: libpod-conmon-c7ad775ff45ea308cd75af657d3a41954efcb205c086dcedb63d34ba21900130.scope: Deactivated successfully.
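adoring_hopper is the monmap-generation step: monmaptool writes epoch 0 with one monitor and the cluster fsid into /tmp/monmap, setting min_mon_release to the release of the binary (tentacle), apparently as part of --create. A sketch of an invocation that would produce output like the four monmaptool lines above, using documented flags; the exact cephadm argv and the host path bound to /tmp are assumptions:

    # Hypothetical reconstruction of the monmap step; fsid and addresses are
    # taken from this log, the bind mount and flag combination are assumed.
    import subprocess

    fsid = "8ac33163-6221-5d58-9a39-8b6933fe7762"                 # from the log
    addrs = "[v2:192.168.122.100:3300,v1:192.168.122.100:6789]"   # from the mon start line below
    subprocess.run(
        ["podman", "run", "--rm",
         "-v", "/var/lib/ceph/tmp:/tmp",                          # hypothetical bind mount
         "quay.io/ceph/ceph:v20",
         "monmaptool", "--create", "--clobber",
         "--fsid", fsid,
         "--addv", "compute-0", addrs,
         "/tmp/monmap"],
        check=True,
    )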
Feb 25 06:49:08 np0005629333 podman[75625]: 2026-02-25 11:49:08.414646668 +0000 UTC m=+0.056248370 container create cc1afbe2ff4574048043907cb97708f9943a63b157ea533ea77e95a62ee97e28 (image=quay.io/ceph/ceph:v20, name=zealous_cerf, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:49:08 np0005629333 systemd[1]: Started libpod-conmon-cc1afbe2ff4574048043907cb97708f9943a63b157ea533ea77e95a62ee97e28.scope.
Feb 25 06:49:08 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:08 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ac8ad43f647632f3ff28a072d9085680dd572ced7d93a67d22445302a37977e/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:08 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ac8ad43f647632f3ff28a072d9085680dd572ced7d93a67d22445302a37977e/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:08 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ac8ad43f647632f3ff28a072d9085680dd572ced7d93a67d22445302a37977e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:08 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ac8ad43f647632f3ff28a072d9085680dd572ced7d93a67d22445302a37977e/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:08 np0005629333 podman[75625]: 2026-02-25 11:49:08.394878136 +0000 UTC m=+0.036479908 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:08 np0005629333 podman[75625]: 2026-02-25 11:49:08.487726096 +0000 UTC m=+0.129327808 container init cc1afbe2ff4574048043907cb97708f9943a63b157ea533ea77e95a62ee97e28 (image=quay.io/ceph/ceph:v20, name=zealous_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:49:08 np0005629333 podman[75625]: 2026-02-25 11:49:08.496844236 +0000 UTC m=+0.138445938 container start cc1afbe2ff4574048043907cb97708f9943a63b157ea533ea77e95a62ee97e28 (image=quay.io/ceph/ceph:v20, name=zealous_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 06:49:08 np0005629333 podman[75625]: 2026-02-25 11:49:08.505020278 +0000 UTC m=+0.146621970 container attach cc1afbe2ff4574048043907cb97708f9943a63b157ea533ea77e95a62ee97e28 (image=quay.io/ceph/ceph:v20, name=zealous_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:49:08 np0005629333 systemd[1]: libpod-cc1afbe2ff4574048043907cb97708f9943a63b157ea533ea77e95a62ee97e28.scope: Deactivated successfully.
Feb 25 06:49:08 np0005629333 podman[75625]: 2026-02-25 11:49:08.602579081 +0000 UTC m=+0.244180783 container died cc1afbe2ff4574048043907cb97708f9943a63b157ea533ea77e95a62ee97e28 (image=quay.io/ceph/ceph:v20, name=zealous_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:49:08 np0005629333 podman[75625]: 2026-02-25 11:49:08.645663107 +0000 UTC m=+0.287264799 container remove cc1afbe2ff4574048043907cb97708f9943a63b157ea533ea77e95a62ee97e28 (image=quay.io/ceph/ceph:v20, name=zealous_cerf, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:49:08 np0005629333 systemd[1]: libpod-conmon-cc1afbe2ff4574048043907cb97708f9943a63b157ea533ea77e95a62ee97e28.scope: Deactivated successfully.
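zealous_cerf's bind mounts (/tmp/keyring, /tmp/monmap, /var/log/ceph, /var/lib/ceph/mon/ceph-compute-0, per the xfs remount lines; "supports timestamps until 2038" is merely the kernel noting the filesystem's timestamp range on each bind remount) match a ceph-mon --mkfs run seeding the monitor's data directory from the staged monmap and keyring. A sketch under that assumption; the --mkfs/-i/--monmap/--keyring flags are standard ceph-mon options, the host-side paths are illustrative only:

    # Hypothetical mkfs step: initialize the mon store from the staged files.
    import subprocess

    subprocess.run(
        ["podman", "run", "--rm",
         "-v", "/var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/mon.compute-0:/var/lib/ceph/mon/ceph-compute-0",
         "-v", "/tmp/monmap:/tmp/monmap",
         "-v", "/tmp/keyring:/tmp/keyring",
         "quay.io/ceph/ceph:v20",
         "ceph-mon", "--mkfs", "-i", "compute-0",
         "--monmap", "/tmp/monmap", "--keyring", "/tmp/keyring"],
        check=True,
    )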
Feb 25 06:49:08 np0005629333 systemd[1]: Reloading.
Feb 25 06:49:08 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:49:08 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:49:08 np0005629333 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 25 06:49:08 np0005629333 systemd[1]: var-lib-containers-storage-overlay-8c69bc2d6da0ffc5767ee760935f43ea15674e259f70d239b87a86263e7c9e71-merged.mount: Deactivated successfully.
Feb 25 06:49:08 np0005629333 systemd[1]: Reloading.
Feb 25 06:49:09 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:49:09 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:49:09 np0005629333 systemd[1]: Reached target All Ceph clusters and services.
Feb 25 06:49:09 np0005629333 systemd[1]: Reloading.
Feb 25 06:49:09 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:49:09 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:49:09 np0005629333 systemd[1]: Reached target Ceph cluster 8ac33163-6221-5d58-9a39-8b6933fe7762.
Feb 25 06:49:09 np0005629333 systemd[1]: Reloading.
Feb 25 06:49:09 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:49:09 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:49:09 np0005629333 systemd[1]: Reloading.
Feb 25 06:49:09 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:49:09 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
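The burst of "Reloading." entries is cephadm installing its unit files and targets one at a time, each followed by a daemon-reload; the rc.local and SysV "network" generator messages recur because systemd re-runs all generators on every reload, so they are repeated noise rather than new problems. The targets reached above are ceph.target ("All Ceph clusters and services") and the per-cluster ceph-<fsid>.target. A sketch of that enable-and-reload loop; the unit names follow from the target descriptions, the exact pairing of enable and reload is an assumption:

    # Hypothetical loop matching the repeated Reloading./Reached target lines.
    import subprocess

    fsid = "8ac33163-6221-5d58-9a39-8b6933fe7762"
    for unit in ["ceph.target", f"ceph-{fsid}.target"]:
        subprocess.run(["systemctl", "enable", unit], check=True)
        subprocess.run(["systemctl", "daemon-reload"], check=True)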
Feb 25 06:49:10 np0005629333 systemd[1]: Created slice Slice /system/ceph-8ac33163-6221-5d58-9a39-8b6933fe7762.
Feb 25 06:49:10 np0005629333 systemd[1]: Reached target System Time Set.
Feb 25 06:49:10 np0005629333 systemd[1]: Reached target System Time Synchronized.
Feb 25 06:49:10 np0005629333 systemd[1]: Starting Ceph mon.compute-0 for 8ac33163-6221-5d58-9a39-8b6933fe7762...
Feb 25 06:49:10 np0005629333 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 25 06:49:10 np0005629333 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 25 06:49:10 np0005629333 podman[75951]: 2026-02-25 11:49:10.274794814 +0000 UTC m=+0.041079539 container create 896e908168f181d3214c2f0b59281157f302eacdf6fa68d5a3dd1426a6ee5e10 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:49:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d55dd0318a49d67820f2d5fb6f9ebea36e8f71b7a5a92572332badd190c0335c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d55dd0318a49d67820f2d5fb6f9ebea36e8f71b7a5a92572332badd190c0335c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d55dd0318a49d67820f2d5fb6f9ebea36e8f71b7a5a92572332badd190c0335c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d55dd0318a49d67820f2d5fb6f9ebea36e8f71b7a5a92572332badd190c0335c/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:10 np0005629333 podman[75951]: 2026-02-25 11:49:10.253025725 +0000 UTC m=+0.019310500 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:10 np0005629333 podman[75951]: 2026-02-25 11:49:10.352977117 +0000 UTC m=+0.119261882 container init 896e908168f181d3214c2f0b59281157f302eacdf6fa68d5a3dd1426a6ee5e10 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 06:49:10 np0005629333 podman[75951]: 2026-02-25 11:49:10.366008618 +0000 UTC m=+0.132293333 container start 896e908168f181d3214c2f0b59281157f302eacdf6fa68d5a3dd1426a6ee5e10 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:49:10 np0005629333 bash[75951]: 896e908168f181d3214c2f0b59281157f302eacdf6fa68d5a3dd1426a6ee5e10
Feb 25 06:49:10 np0005629333 systemd[1]: Started Ceph mon.compute-0 for 8ac33163-6221-5d58-9a39-8b6933fe7762.
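The "bash[75951]: 896e9081..." line is the service's ExecStart at work: cephadm generates a ceph-<fsid>@.service whose ExecStart runs a bash unit.run wrapper, and that wrapper launches the mon container detached. podman run -d prints only the new container's ID on stdout, which is exactly what bash[75951] logs before systemd reports the unit started. A reduced sketch of that launch; the container name and image come from the create line above, the ceph-mon argv is an assumption:

    # Hypothetical reduction of the unit.run launch to a single detached run.
    import subprocess

    cid = subprocess.run(
        ["podman", "run", "-d",
         "--name", "ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0",
         "quay.io/ceph/ceph:v20",
         "ceph-mon", "-f", "-i", "compute-0"],   # -f: stay in foreground for conmon
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    print(cid)  # the 64-hex ID echoed by bash[75951] above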
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: set uid:gid to 167:167 (ceph:ceph)
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: pidfile_write: ignore empty --pid-file
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: load: jerasure load: lrc 
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: RocksDB version: 7.9.2
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: Git sha 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: DB SUMMARY
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: DB Session ID:  5MEO7CT52K4DDZA2H7HS
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: CURRENT file:  CURRENT
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: IDENTITY file:  IDENTITY
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                         Options.error_if_exists: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                       Options.create_if_missing: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                         Options.paranoid_checks: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                                     Options.env: 0x55715bdf0440
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                                      Options.fs: PosixFileSystem
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                                Options.info_log: 0x55715e2b33e0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                Options.max_file_opening_threads: 16
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                              Options.statistics: (nil)
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                               Options.use_fsync: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                       Options.max_log_file_size: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                         Options.allow_fallocate: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                        Options.use_direct_reads: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:          Options.create_missing_column_families: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                              Options.db_log_dir: 
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                                 Options.wal_dir: 
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                   Options.advise_random_on_open: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                    Options.write_buffer_manager: 0x55715e232140
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                            Options.rate_limiter: (nil)
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                  Options.unordered_write: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                               Options.row_cache: None
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                              Options.wal_filter: None
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:             Options.allow_ingest_behind: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:             Options.two_write_queues: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:             Options.manual_wal_flush: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:             Options.wal_compression: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:             Options.atomic_flush: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                 Options.log_readahead_size: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:             Options.allow_data_in_errors: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:             Options.db_host_id: __hostname__
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:             Options.max_background_jobs: 2
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:             Options.max_background_compactions: -1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:             Options.max_subcompactions: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:             Options.max_total_wal_size: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                          Options.max_open_files: -1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                          Options.bytes_per_sync: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:       Options.compaction_readahead_size: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                  Options.max_background_flushes: -1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: Compression algorithms supported:
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:     kZSTD supported: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:     kXpressCompression supported: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:     kBZip2Compression supported: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:     kZSTDNotFinalCompression supported: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:     kLZ4Compression supported: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:     kZlibCompression supported: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:     kLZ4HCCompression supported: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:     kSnappyCompression supported: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:           Options.merge_operator: 
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:        Options.compaction_filter: None
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55715e23e700)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55715e2238d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:        Options.write_buffer_size: 33554432
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:  Options.max_write_buffer_number: 2
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:          Options.compression: NoCompression
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:             Options.num_levels: 7
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
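Everything from "Options for column family [default]" down to here is RocksDB echoing its effective configuration. The non-default pieces (write_buffer_size=33554432, NoCompression, level_compaction_dynamic_level_bytes=1, a 512 MB BinnedLRUCache, the CompactOnDeletionCollector) are consistent with Ceph's mon_rocksdb_options defaults rather than stock RocksDB. One way to inspect the effective value once the cluster answers; the CLI calls are standard, but the mapping to this exact option name is inferred from the values, not shown in the log:

    # Hypothetical check of the mon's RocksDB option string via the config DB.
    import subprocess

    opts = subprocess.run(
        ["ceph", "config", "get", "mon", "mon_rocksdb_options"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    print(opts)  # expected to contain write_buffer_size=33554432,compression=kNoCompression,...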
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c540816e-83f6-4a96-94b0-524825c8594a
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020150432556, "job": 1, "event": "recovery_started", "wal_files": [4]}
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020150434778, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "5MEO7CT52K4DDZA2H7HS", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020150434858, "job": 1, "event": "recovery_finished"}
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55715e250e00
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: DB pointer 0x55715e39c000
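The recovery block above is fully machine-readable: recovery_started names WAL file [4], replaying 000004.log flushes its 5 entries into SST file 8 (1,944 bytes), recovery_finished closes the job, then a new MANIFEST (10) is cut and the old WAL is deleted. The EVENT_LOG_v1 payload is plain JSON after a fixed marker, so such lines are easy to mine from the journal; nothing beyond the format seen above is assumed:

    # Extract the JSON payload from a 'rocksdb: EVENT_LOG_v1 {...}' line.
    import json

    def parse_event(line: str):
        """Return the parsed payload of an EVENT_LOG_v1 line, or None."""
        marker = "EVENT_LOG_v1 "
        idx = line.find(marker)
        return json.loads(line[idx + len(marker):]) if idx != -1 else None

    ev = parse_event('rocksdb: EVENT_LOG_v1 {"time_micros": 1772020150434858, '
                     '"job": 1, "event": "recovery_finished"}')
    assert ev and ev["event"] == "recovery_finished"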
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55715e2238d0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 8ac33163-6221-5d58-9a39-8b6933fe7762
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@-1(???) e0 preinit fsid 8ac33163-6221-5d58-9a39-8b6933fe7762
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(probing) e0 win_standalone_election
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(probing) e1 win_standalone_election
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: paxos.0).electionLogic(2) init, last seen epoch 2
Feb 25 06:49:10 np0005629333 podman[75973]: 2026-02-25 11:49:10.471833607 +0000 UTC m=+0.062766276 container create 092637f8c112402e572aee7caddc443b730b5e6c678108a8dd4cd44454a13bac (image=quay.io/ceph/ceph:v20, name=optimistic_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: log_channel(cluster) log [DBG] : monmap epoch 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: log_channel(cluster) log [DBG] : fsid 8ac33163-6221-5d58-9a39-8b6933fe7762
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: log_channel(cluster) log [DBG] : last_changed 2026-02-25T11:49:08.188048+0000
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: log_channel(cluster) log [DBG] : created 2026-02-25T11:49:08.188048+0000
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: log_channel(cluster) log [DBG] : election_strategy: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2026-02-25T11:49:08.548355Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Feb 19 10:49:27 UTC 2026,kernel_version=5.14.0-686.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864276,os=Linux}
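The flattened metadata map above cannot be split reliably on commas, because values such as compression_algorithms=none, snappy, zlib, zstd, lz4 themselves contain commas. When this information is needed programmatically it is safer to ask the monitor for structured output; a sketch using the 'ceph mon metadata' command, assuming admin credentials and that a single daemon id yields one JSON object (true in the releases I have seen):

    import json
    import subprocess

    # Ask for the mon's metadata as JSON instead of parsing the log line.
    # 'compute-0' is the mon id that appears in this log.
    out = subprocess.run(
        ["ceph", "mon", "metadata", "compute-0", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    meta = json.loads(out)
    print(meta["ceph_version_short"], meta["kernel_version"])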
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader).mds e1 new map
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader).mds e1 print_map
    e1
    btime 2026-02-25T11:49:10:477435+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: -1

    No filesystems configured
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: log_channel(cluster) log [DBG] : fsmap 
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mkfs 8ac33163-6221-5d58-9a39-8b6933fe7762
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 25 06:49:10 np0005629333 systemd[1]: Started libpod-conmon-092637f8c112402e572aee7caddc443b730b5e6c678108a8dd4cd44454a13bac.scope.
Feb 25 06:49:10 np0005629333 podman[75973]: 2026-02-25 11:49:10.441219796 +0000 UTC m=+0.032152525 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:10 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f50b4c6a1b62142f76002c2ada45fa10c45160420136a2aac281de828e3a22c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f50b4c6a1b62142f76002c2ada45fa10c45160420136a2aac281de828e3a22c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f50b4c6a1b62142f76002c2ada45fa10c45160420136a2aac281de828e3a22c/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
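The "supports timestamps until 2038 (0x7fffffff)" warnings come from XFS filesystems created without the bigtime feature, whose inode timestamps are signed 32-bit seconds. The cutoff named in the message is simply 2**31 - 1 seconds after the Unix epoch, which is easy to verify:

    from datetime import datetime, timezone

    # 0x7fffffff = 2**31 - 1 seconds after the epoch: the classic Y2038 limit.
    limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
    print(limit.isoformat())  # 2038-01-19T03:14:07+00:00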
Feb 25 06:49:10 np0005629333 podman[75973]: 2026-02-25 11:49:10.570851923 +0000 UTC m=+0.161784632 container init 092637f8c112402e572aee7caddc443b730b5e6c678108a8dd4cd44454a13bac (image=quay.io/ceph/ceph:v20, name=optimistic_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 06:49:10 np0005629333 podman[75973]: 2026-02-25 11:49:10.579475748 +0000 UTC m=+0.170408417 container start 092637f8c112402e572aee7caddc443b730b5e6c678108a8dd4cd44454a13bac (image=quay.io/ceph/ceph:v20, name=optimistic_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 06:49:10 np0005629333 podman[75973]: 2026-02-25 11:49:10.584011617 +0000 UTC m=+0.174944326 container attach 092637f8c112402e572aee7caddc443b730b5e6c678108a8dd4cd44454a13bac (image=quay.io/ceph/ceph:v20, name=optimistic_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Feb 25 06:49:10 np0005629333 ceph-mon[75972]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/301127101' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb 25 06:49:10 np0005629333 optimistic_maxwell[76028]:  cluster:
Feb 25 06:49:10 np0005629333 optimistic_maxwell[76028]:    id:     8ac33163-6221-5d58-9a39-8b6933fe7762
Feb 25 06:49:10 np0005629333 optimistic_maxwell[76028]:    health: HEALTH_OK
Feb 25 06:49:10 np0005629333 optimistic_maxwell[76028]: 
Feb 25 06:49:10 np0005629333 optimistic_maxwell[76028]:  services:
Feb 25 06:49:10 np0005629333 optimistic_maxwell[76028]:    mon: 1 daemons, quorum compute-0 (age 0.331414s) [leader: compute-0]
Feb 25 06:49:10 np0005629333 optimistic_maxwell[76028]:    mgr: no daemons active
Feb 25 06:49:10 np0005629333 optimistic_maxwell[76028]:    osd: 0 osds: 0 up, 0 in
Feb 25 06:49:10 np0005629333 optimistic_maxwell[76028]: 
Feb 25 06:49:10 np0005629333 optimistic_maxwell[76028]:  data:
Feb 25 06:49:10 np0005629333 optimistic_maxwell[76028]:    pools:   0 pools, 0 pgs
Feb 25 06:49:10 np0005629333 optimistic_maxwell[76028]:    objects: 0 objects, 0 B
Feb 25 06:49:10 np0005629333 optimistic_maxwell[76028]:    usage:   0 B used, 0 B / 0 B avail
Feb 25 06:49:10 np0005629333 optimistic_maxwell[76028]:    pgs:     
Feb 25 06:49:10 np0005629333 optimistic_maxwell[76028]: 
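The status block above is the answer to the {"prefix": "status"} mon_command dispatched a moment earlier from the short-lived bootstrap container. For scripting against the cluster, the same report is available as JSON via the standard --format option; a minimal sketch, assuming an admin keyring is in place (field names can vary slightly across Ceph releases):

    import json
    import subprocess

    # 'ceph status' accepts --format json for machine-readable output.
    status = json.loads(subprocess.run(
        ["ceph", "status", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout)

    print(status["health"]["status"])   # HEALTH_OK at this point in the log
    print(status["pgmap"]["num_pgs"])   # 0 pgs, matching the output above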
Feb 25 06:49:10 np0005629333 systemd[1]: libpod-092637f8c112402e572aee7caddc443b730b5e6c678108a8dd4cd44454a13bac.scope: Deactivated successfully.
Feb 25 06:49:10 np0005629333 podman[75973]: 2026-02-25 11:49:10.819283167 +0000 UTC m=+0.410215816 container died 092637f8c112402e572aee7caddc443b730b5e6c678108a8dd4cd44454a13bac (image=quay.io/ceph/ceph:v20, name=optimistic_maxwell, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Feb 25 06:49:10 np0005629333 podman[75973]: 2026-02-25 11:49:10.852743359 +0000 UTC m=+0.443676008 container remove 092637f8c112402e572aee7caddc443b730b5e6c678108a8dd4cd44454a13bac (image=quay.io/ceph/ceph:v20, name=optimistic_maxwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 06:49:10 np0005629333 systemd[1]: libpod-conmon-092637f8c112402e572aee7caddc443b730b5e6c678108a8dd4cd44454a13bac.scope: Deactivated successfully.
Feb 25 06:49:10 np0005629333 podman[76065]: 2026-02-25 11:49:10.914424633 +0000 UTC m=+0.043209930 container create e8411f179fc545ca148e9ca6db8c274308bbd75e41deea89fb9018a7fd4be45c (image=quay.io/ceph/ceph:v20, name=serene_aryabhata, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:49:10 np0005629333 systemd[1]: Started libpod-conmon-e8411f179fc545ca148e9ca6db8c274308bbd75e41deea89fb9018a7fd4be45c.scope.
Feb 25 06:49:10 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d534769dbfdeaea74a8e1f98a7881232e1f15fd8f90991b5cc973e8c71371754/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d534769dbfdeaea74a8e1f98a7881232e1f15fd8f90991b5cc973e8c71371754/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d534769dbfdeaea74a8e1f98a7881232e1f15fd8f90991b5cc973e8c71371754/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d534769dbfdeaea74a8e1f98a7881232e1f15fd8f90991b5cc973e8c71371754/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:10 np0005629333 podman[76065]: 2026-02-25 11:49:10.893377854 +0000 UTC m=+0.022163211 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:10 np0005629333 podman[76065]: 2026-02-25 11:49:10.995165769 +0000 UTC m=+0.123951076 container init e8411f179fc545ca148e9ca6db8c274308bbd75e41deea89fb9018a7fd4be45c (image=quay.io/ceph/ceph:v20, name=serene_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 06:49:11 np0005629333 podman[76065]: 2026-02-25 11:49:11.001578941 +0000 UTC m=+0.130364248 container start e8411f179fc545ca148e9ca6db8c274308bbd75e41deea89fb9018a7fd4be45c (image=quay.io/ceph/ceph:v20, name=serene_aryabhata, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:49:11 np0005629333 podman[76065]: 2026-02-25 11:49:11.006170042 +0000 UTC m=+0.134955409 container attach e8411f179fc545ca148e9ca6db8c274308bbd75e41deea89fb9018a7fd4be45c (image=quay.io/ceph/ceph:v20, name=serene_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 06:49:11 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Feb 25 06:49:11 np0005629333 ceph-mon[75972]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3546483702' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb 25 06:49:11 np0005629333 ceph-mon[75972]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3546483702' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb 25 06:49:11 np0005629333 serene_aryabhata[76082]: 
Feb 25 06:49:11 np0005629333 serene_aryabhata[76082]: [global]
Feb 25 06:49:11 np0005629333 serene_aryabhata[76082]:     fsid = 8ac33163-6221-5d58-9a39-8b6933fe7762
Feb 25 06:49:11 np0005629333 serene_aryabhata[76082]:     mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Feb 25 06:49:11 np0005629333 serene_aryabhata[76082]:     osd_crush_chooseleaf_type = 0
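The [global] fragment above is what 'config assimilate-conf' printed back: the command ingests options from a conf file into the monitors' central config database and emits the minimal remainder (here just fsid, mon_host, and osd_crush_chooseleaf_type). A sketch of the equivalent invocation with the documented -i/-o file options; the paths are illustrative:

    import subprocess

    # Move options from a local ceph.conf into the mon config store and
    # write the minimal leftover conf next to it (paths are examples).
    subprocess.run(
        ["ceph", "config", "assimilate-conf",
         "-i", "/etc/ceph/ceph.conf", "-o", "/etc/ceph/ceph.conf.minimal"],
        check=True,
    )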
Feb 25 06:49:11 np0005629333 systemd[1]: libpod-e8411f179fc545ca148e9ca6db8c274308bbd75e41deea89fb9018a7fd4be45c.scope: Deactivated successfully.
Feb 25 06:49:11 np0005629333 podman[76109]: 2026-02-25 11:49:11.292598977 +0000 UTC m=+0.035060548 container died e8411f179fc545ca148e9ca6db8c274308bbd75e41deea89fb9018a7fd4be45c (image=quay.io/ceph/ceph:v20, name=serene_aryabhata, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 06:49:11 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d534769dbfdeaea74a8e1f98a7881232e1f15fd8f90991b5cc973e8c71371754-merged.mount: Deactivated successfully.
Feb 25 06:49:11 np0005629333 podman[76109]: 2026-02-25 11:49:11.340274223 +0000 UTC m=+0.082735794 container remove e8411f179fc545ca148e9ca6db8c274308bbd75e41deea89fb9018a7fd4be45c (image=quay.io/ceph/ceph:v20, name=serene_aryabhata, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:49:11 np0005629333 systemd[1]: libpod-conmon-e8411f179fc545ca148e9ca6db8c274308bbd75e41deea89fb9018a7fd4be45c.scope: Deactivated successfully.
Feb 25 06:49:11 np0005629333 podman[76124]: 2026-02-25 11:49:11.424071845 +0000 UTC m=+0.053978576 container create 7d1719c5cdcc5722408b0cdbbb319405f6fd9a0901a1ac1c59bdc1ecffff3183 (image=quay.io/ceph/ceph:v20, name=stoic_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 06:49:11 np0005629333 systemd[1]: Started libpod-conmon-7d1719c5cdcc5722408b0cdbbb319405f6fd9a0901a1ac1c59bdc1ecffff3183.scope.
Feb 25 06:49:11 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:11 np0005629333 podman[76124]: 2026-02-25 11:49:11.402016318 +0000 UTC m=+0.031923069 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0f578edac8c14bf9a4a365ecb76e410d81430167d9e3d5625ec2fefa677b5f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0f578edac8c14bf9a4a365ecb76e410d81430167d9e3d5625ec2fefa677b5f8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0f578edac8c14bf9a4a365ecb76e410d81430167d9e3d5625ec2fefa677b5f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0f578edac8c14bf9a4a365ecb76e410d81430167d9e3d5625ec2fefa677b5f8/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:11 np0005629333 ceph-mon[75972]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 25 06:49:11 np0005629333 ceph-mon[75972]: from='client.? 192.168.122.100:0/3546483702' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb 25 06:49:11 np0005629333 ceph-mon[75972]: from='client.? 192.168.122.100:0/3546483702' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb 25 06:49:11 np0005629333 podman[76124]: 2026-02-25 11:49:11.525931702 +0000 UTC m=+0.155838503 container init 7d1719c5cdcc5722408b0cdbbb319405f6fd9a0901a1ac1c59bdc1ecffff3183 (image=quay.io/ceph/ceph:v20, name=stoic_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 06:49:11 np0005629333 podman[76124]: 2026-02-25 11:49:11.540126196 +0000 UTC m=+0.170032947 container start 7d1719c5cdcc5722408b0cdbbb319405f6fd9a0901a1ac1c59bdc1ecffff3183 (image=quay.io/ceph/ceph:v20, name=stoic_khayyam, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 06:49:11 np0005629333 podman[76124]: 2026-02-25 11:49:11.543865032 +0000 UTC m=+0.173771773 container attach 7d1719c5cdcc5722408b0cdbbb319405f6fd9a0901a1ac1c59bdc1ecffff3183 (image=quay.io/ceph/ceph:v20, name=stoic_khayyam, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:49:11 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:49:11 np0005629333 ceph-mon[75972]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1267902246' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:49:11 np0005629333 systemd[1]: libpod-7d1719c5cdcc5722408b0cdbbb319405f6fd9a0901a1ac1c59bdc1ecffff3183.scope: Deactivated successfully.
Feb 25 06:49:11 np0005629333 conmon[76141]: conmon 7d1719c5cdcc5722408b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7d1719c5cdcc5722408b0cdbbb319405f6fd9a0901a1ac1c59bdc1ecffff3183.scope/container/memory.events
Feb 25 06:49:11 np0005629333 podman[76124]: 2026-02-25 11:49:11.771367101 +0000 UTC m=+0.401273812 container died 7d1719c5cdcc5722408b0cdbbb319405f6fd9a0901a1ac1c59bdc1ecffff3183 (image=quay.io/ceph/ceph:v20, name=stoic_khayyam, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:49:11 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a0f578edac8c14bf9a4a365ecb76e410d81430167d9e3d5625ec2fefa677b5f8-merged.mount: Deactivated successfully.
Feb 25 06:49:11 np0005629333 podman[76124]: 2026-02-25 11:49:11.822235718 +0000 UTC m=+0.452142459 container remove 7d1719c5cdcc5722408b0cdbbb319405f6fd9a0901a1ac1c59bdc1ecffff3183 (image=quay.io/ceph/ceph:v20, name=stoic_khayyam, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 06:49:11 np0005629333 systemd[1]: libpod-conmon-7d1719c5cdcc5722408b0cdbbb319405f6fd9a0901a1ac1c59bdc1ecffff3183.scope: Deactivated successfully.
Feb 25 06:49:11 np0005629333 systemd[1]: Stopping Ceph mon.compute-0 for 8ac33163-6221-5d58-9a39-8b6933fe7762...
Feb 25 06:49:12 np0005629333 ceph-mon[75972]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Feb 25 06:49:12 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Feb 25 06:49:12 np0005629333 ceph-mon[75972]: mon.compute-0@0(leader) e1 shutdown
Feb 25 06:49:12 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0[75968]: 2026-02-25T11:49:12.053+0000 7f8f28859640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Feb 25 06:49:12 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0[75968]: 2026-02-25T11:49:12.053+0000 7f8f28859640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Feb 25 06:49:12 np0005629333 ceph-mon[75972]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb 25 06:49:12 np0005629333 ceph-mon[75972]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb 25 06:49:12 np0005629333 podman[76211]: 2026-02-25 11:49:12.114159468 +0000 UTC m=+0.109921156 container died 896e908168f181d3214c2f0b59281157f302eacdf6fa68d5a3dd1426a6ee5e10 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:49:12 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d55dd0318a49d67820f2d5fb6f9ebea36e8f71b7a5a92572332badd190c0335c-merged.mount: Deactivated successfully.
Feb 25 06:49:12 np0005629333 podman[76211]: 2026-02-25 11:49:12.151045097 +0000 UTC m=+0.146806755 container remove 896e908168f181d3214c2f0b59281157f302eacdf6fa68d5a3dd1426a6ee5e10 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 06:49:12 np0005629333 bash[76211]: ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0
Feb 25 06:49:12 np0005629333 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 25 06:49:12 np0005629333 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 25 06:49:12 np0005629333 systemd[1]: ceph-8ac33163-6221-5d58-9a39-8b6933fe7762@mon.compute-0.service: Deactivated successfully.
Feb 25 06:49:12 np0005629333 systemd[1]: Stopped Ceph mon.compute-0 for 8ac33163-6221-5d58-9a39-8b6933fe7762.
Feb 25 06:49:12 np0005629333 systemd[1]: ceph-8ac33163-6221-5d58-9a39-8b6933fe7762@mon.compute-0.service: Consumed 1.010s CPU time.
Feb 25 06:49:12 np0005629333 systemd[1]: Starting Ceph mon.compute-0 for 8ac33163-6221-5d58-9a39-8b6933fe7762...
Feb 25 06:49:12 np0005629333 podman[76315]: 2026-02-25 11:49:12.544977259 +0000 UTC m=+0.046503923 container create ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:49:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df62cda646a785beee828b8546fcf96e6af3845afe2dbc6401c6676275435eb8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df62cda646a785beee828b8546fcf96e6af3845afe2dbc6401c6676275435eb8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df62cda646a785beee828b8546fcf96e6af3845afe2dbc6401c6676275435eb8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df62cda646a785beee828b8546fcf96e6af3845afe2dbc6401c6676275435eb8/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:12 np0005629333 podman[76315]: 2026-02-25 11:49:12.52215331 +0000 UTC m=+0.023680014 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:12 np0005629333 podman[76315]: 2026-02-25 11:49:12.623358158 +0000 UTC m=+0.124884842 container init ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 06:49:12 np0005629333 podman[76315]: 2026-02-25 11:49:12.636507902 +0000 UTC m=+0.138034566 container start ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:49:12 np0005629333 bash[76315]: ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8
Feb 25 06:49:12 np0005629333 systemd[1]: Started Ceph mon.compute-0 for 8ac33163-6221-5d58-9a39-8b6933fe7762.
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: set uid:gid to 167:167 (ceph:ceph)
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: pidfile_write: ignore empty --pid-file
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: load: jerasure load: lrc 
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: RocksDB version: 7.9.2
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: Git sha 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: DB SUMMARY
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: DB Session ID:  6RCFLCKJUH7T17FV9P10
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: CURRENT file:  CURRENT
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: IDENTITY file:  IDENTITY
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60235 ; 
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                         Options.error_if_exists: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                       Options.create_if_missing: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                         Options.paranoid_checks: 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                                     Options.env: 0x561a19ff0440
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                                      Options.fs: PosixFileSystem
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                                Options.info_log: 0x561a1af49e80
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                Options.max_file_opening_threads: 16
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                              Options.statistics: (nil)
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                               Options.use_fsync: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                       Options.max_log_file_size: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                         Options.allow_fallocate: 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                        Options.use_direct_reads: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:          Options.create_missing_column_families: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                              Options.db_log_dir: 
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                                 Options.wal_dir: 
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                   Options.advise_random_on_open: 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                    Options.write_buffer_manager: 0x561a1af94140
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                            Options.rate_limiter: (nil)
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                  Options.unordered_write: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                               Options.row_cache: None
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                              Options.wal_filter: None
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:             Options.allow_ingest_behind: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:             Options.two_write_queues: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:             Options.manual_wal_flush: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:             Options.wal_compression: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:             Options.atomic_flush: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                 Options.log_readahead_size: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:             Options.allow_data_in_errors: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:             Options.db_host_id: __hostname__
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:             Options.max_background_jobs: 2
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:             Options.max_background_compactions: -1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:             Options.max_subcompactions: 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:             Options.max_total_wal_size: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                          Options.max_open_files: -1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                          Options.bytes_per_sync: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:       Options.compaction_readahead_size: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                  Options.max_background_flushes: -1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: Compression algorithms supported:
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:     kZSTD supported: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:     kXpressCompression supported: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:     kBZip2Compression supported: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:     kZSTDNotFinalCompression supported: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:     kLZ4Compression supported: 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:     kZlibCompression supported: 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:     kLZ4HCCompression supported: 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:     kSnappyCompression supported: 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:           Options.merge_operator: 
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:        Options.compaction_filter: None
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561a1afa0a00)
      cache_index_and_filter_blocks: 1
      cache_index_and_filter_blocks_with_high_priority: 0
      pin_l0_filter_and_index_blocks_in_cache: 0
      pin_top_level_index_and_filter: 1
      index_type: 0
      data_block_index_type: 0
      index_shortening: 1
      data_block_hash_table_util_ratio: 0.750000
      checksum: 4
      no_block_cache: 0
      block_cache: 0x561a1af858d0
      block_cache_name: BinnedLRUCache
      block_cache_options:
        capacity : 536870912
        num_shard_bits : 4
        strict_capacity_limit : 0
        high_pri_pool_ratio: 0.000
      block_cache_compressed: (nil)
      persistent_cache: (nil)
      block_size: 4096
      block_size_deviation: 10
      block_restart_interval: 16
      index_block_restart_interval: 1
      metadata_block_size: 4096
      partition_filters: 0
      use_delta_encoding: 1
      filter_policy: bloomfilter
      whole_key_filtering: 1
      verify_compression: 0
      read_amp_bytes_per_bit: 0
      format_version: 5
      enable_index_compression: 1
      block_align: 0
      max_auto_readahead_size: 262144
      prepopulate_block_cache: 0
      initial_auto_readahead_size: 8192
      num_file_reads_for_auto_readahead: 2
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:        Options.write_buffer_size: 33554432
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:  Options.max_write_buffer_number: 2
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:          Options.compression: NoCompression
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:             Options.num_levels: 7
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
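The dump above is ceph-mon echoing the effective RocksDB column-family options for its store.db. A minimal sketch in Python (assuming nothing beyond the journal text itself; the names parse_rocksdb_options and sample are ours, not Ceph's) that harvests the Options.<name>: <value> pairs so the settings from two boots can be diffed:

    import re

    # Pull the "Options.<name>: <value>" pairs that ceph-mon echoes from
    # RocksDB (lines like the dump above) into a dict. Operates purely on
    # the log text; nothing here calls Ceph or RocksDB.
    OPT_RE = re.compile(r"rocksdb:\s+Options\.([A-Za-z0-9_.\[\]]+):\s+(.*)$")

    def parse_rocksdb_options(log_lines):
        opts = {}
        for line in log_lines:
            m = OPT_RE.search(line)
            if m:
                opts[m.group(1)] = m.group(2).strip()
        return opts

    sample = [
        "Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:        Options.write_buffer_size: 33554432",
        "Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb:          Options.compression: NoCompression",
    ]
    print(parse_rocksdb_options(sample))
    # {'write_buffer_size': '33554432', 'compression': 'NoCompression'}

Run over this excerpt it would report, among others, write_buffer_size=33554432 and compression=NoCompression, matching the lines above.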
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded, manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5, prev_log_number is 0, max_column_family is 0, min_log_number_to_keep is 5
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c540816e-83f6-4a96-94b0-524825c8594a
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020152701541, "job": 1, "event": "recovery_started", "wal_files": [9]}
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020152706406, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59956, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 143, "table_properties": {"data_size": 58434, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3403, "raw_average_key_size": 30, "raw_value_size": 55786, "raw_average_value_size": 507, "num_data_blocks": 9, "num_entries": 110, "num_filter_entries": 110, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020152, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020152706538, "job": 1, "event": "recovery_finished"}
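The recovery_started, table_file_creation, and recovery_finished records above are RocksDB EVENT_LOG_v1 entries: a fixed prefix followed by a single JSON object, so they can be recovered mechanically. A small sketch (Python; only the log text itself is assumed, and rocksdb_events is our name):

    import json
    import re

    # EVENT_LOG_v1 payloads are one JSON object running to end of line,
    # which holds for the records above, so a regex plus json.loads suffices.
    EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})\s*$")

    def rocksdb_events(lines):
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                yield json.loads(m.group(1))

    sample = [
        'Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 '
        '{"time_micros": 1772020152701541, "job": 1, "event": "recovery_started", "wal_files": [9]}',
    ]
    for ev in rocksdb_events(sample):
        print(ev["event"], ev.get("wal_files"))   # recovery_started [9]

Applied to the table_file_creation record above, the same generator would expose, for example, file_number 13, file_size 59956, and num_entries 110 from its table_properties.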
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x561a1afb2e00
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: DB pointer 0x561a1b0fc000
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0   60.45 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0   60.45 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 4.54 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 4.54 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561a1af858d0#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 8ac33163-6221-5d58-9a39-8b6933fe7762
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: mon.compute-0@-1(???) e1 preinit fsid 8ac33163-6221-5d58-9a39-8b6933fe7762
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: mon.compute-0@-1(???).mds e1 new map
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: mon.compute-0@-1(???).mds e1 print_map
e1
btime 2026-02-25T11:49:10:477435+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: -1

No filesystems configured
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(probing) e1 win_standalone_election
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : monmap epoch 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : fsid 8ac33163-6221-5d58-9a39-8b6933fe7762
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : last_changed 2026-02-25T11:49:08.188048+0000
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : created 2026-02-25T11:49:08.188048+0000
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : election_strategy: 1
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : fsmap 
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Feb 25 06:49:12 np0005629333 podman[76336]: 2026-02-25 11:49:12.733342606 +0000 UTC m=+0.059581755 container create cd5ac2230c364f978a18d619c63299375d47905024bc88113160d5e7d3cfa4ac (image=quay.io/ceph/ceph:v20, name=great_wu, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 06:49:12 np0005629333 systemd[1]: Started libpod-conmon-cd5ac2230c364f978a18d619c63299375d47905024bc88113160d5e7d3cfa4ac.scope.
Feb 25 06:49:12 np0005629333 ceph-mon[76335]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Feb 25 06:49:12 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a5c495b20d656bcf2865f37fcff1ee60f683a30f6b2e3dc4e113de14368158/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a5c495b20d656bcf2865f37fcff1ee60f683a30f6b2e3dc4e113de14368158/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a5c495b20d656bcf2865f37fcff1ee60f683a30f6b2e3dc4e113de14368158/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:12 np0005629333 podman[76336]: 2026-02-25 11:49:12.709044475 +0000 UTC m=+0.035283664 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:12 np0005629333 podman[76336]: 2026-02-25 11:49:12.822048658 +0000 UTC m=+0.148287867 container init cd5ac2230c364f978a18d619c63299375d47905024bc88113160d5e7d3cfa4ac (image=quay.io/ceph/ceph:v20, name=great_wu, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 06:49:12 np0005629333 podman[76336]: 2026-02-25 11:49:12.82773959 +0000 UTC m=+0.153978729 container start cd5ac2230c364f978a18d619c63299375d47905024bc88113160d5e7d3cfa4ac (image=quay.io/ceph/ceph:v20, name=great_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 06:49:12 np0005629333 podman[76336]: 2026-02-25 11:49:12.831385724 +0000 UTC m=+0.157624873 container attach cd5ac2230c364f978a18d619c63299375d47905024bc88113160d5e7d3cfa4ac (image=quay.io/ceph/ceph:v20, name=great_wu, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:49:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Feb 25 06:49:13 np0005629333 systemd[1]: libpod-cd5ac2230c364f978a18d619c63299375d47905024bc88113160d5e7d3cfa4ac.scope: Deactivated successfully.
Feb 25 06:49:13 np0005629333 conmon[76390]: conmon cd5ac2230c364f978a18 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cd5ac2230c364f978a18d619c63299375d47905024bc88113160d5e7d3cfa4ac.scope/container/memory.events
Feb 25 06:49:13 np0005629333 podman[76336]: 2026-02-25 11:49:13.041401106 +0000 UTC m=+0.367640245 container died cd5ac2230c364f978a18d619c63299375d47905024bc88113160d5e7d3cfa4ac (image=quay.io/ceph/ceph:v20, name=great_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:49:13 np0005629333 podman[76336]: 2026-02-25 11:49:13.094117415 +0000 UTC m=+0.420356564 container remove cd5ac2230c364f978a18d619c63299375d47905024bc88113160d5e7d3cfa4ac (image=quay.io/ceph/ceph:v20, name=great_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3)
Feb 25 06:49:13 np0005629333 systemd[1]: libpod-conmon-cd5ac2230c364f978a18d619c63299375d47905024bc88113160d5e7d3cfa4ac.scope: Deactivated successfully.
Feb 25 06:49:13 np0005629333 podman[76428]: 2026-02-25 11:49:13.155302875 +0000 UTC m=+0.040093081 container create 89b8686bc1ff31bbd3b3732c92ec44542b30cfe8c3266127f1a9861331b71bbe (image=quay.io/ceph/ceph:v20, name=friendly_goldstine, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 06:49:13 np0005629333 systemd[1]: Started libpod-conmon-89b8686bc1ff31bbd3b3732c92ec44542b30cfe8c3266127f1a9861331b71bbe.scope.
Feb 25 06:49:13 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3daf5bc9006cedbdd0876e9934756f1bfd02d0e21ec143b4590f60f44c96ce96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3daf5bc9006cedbdd0876e9934756f1bfd02d0e21ec143b4590f60f44c96ce96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3daf5bc9006cedbdd0876e9934756f1bfd02d0e21ec143b4590f60f44c96ce96/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:13 np0005629333 podman[76428]: 2026-02-25 11:49:13.136809709 +0000 UTC m=+0.021600005 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:13 np0005629333 podman[76428]: 2026-02-25 11:49:13.25534358 +0000 UTC m=+0.140133816 container init 89b8686bc1ff31bbd3b3732c92ec44542b30cfe8c3266127f1a9861331b71bbe (image=quay.io/ceph/ceph:v20, name=friendly_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:49:13 np0005629333 podman[76428]: 2026-02-25 11:49:13.258616813 +0000 UTC m=+0.143407059 container start 89b8686bc1ff31bbd3b3732c92ec44542b30cfe8c3266127f1a9861331b71bbe (image=quay.io/ceph/ceph:v20, name=friendly_goldstine, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:49:13 np0005629333 podman[76428]: 2026-02-25 11:49:13.262146673 +0000 UTC m=+0.146936919 container attach 89b8686bc1ff31bbd3b3732c92ec44542b30cfe8c3266127f1a9861331b71bbe (image=quay.io/ceph/ceph:v20, name=friendly_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 06:49:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Feb 25 06:49:13 np0005629333 systemd[1]: libpod-89b8686bc1ff31bbd3b3732c92ec44542b30cfe8c3266127f1a9861331b71bbe.scope: Deactivated successfully.
Feb 25 06:49:13 np0005629333 podman[76428]: 2026-02-25 11:49:13.989457896 +0000 UTC m=+0.874248102 container died 89b8686bc1ff31bbd3b3732c92ec44542b30cfe8c3266127f1a9861331b71bbe (image=quay.io/ceph/ceph:v20, name=friendly_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:49:14 np0005629333 systemd[1]: var-lib-containers-storage-overlay-3daf5bc9006cedbdd0876e9934756f1bfd02d0e21ec143b4590f60f44c96ce96-merged.mount: Deactivated successfully.
Feb 25 06:49:14 np0005629333 podman[76428]: 2026-02-25 11:49:14.031730128 +0000 UTC m=+0.916520334 container remove 89b8686bc1ff31bbd3b3732c92ec44542b30cfe8c3266127f1a9861331b71bbe (image=quay.io/ceph/ceph:v20, name=friendly_goldstine, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 06:49:14 np0005629333 systemd[1]: libpod-conmon-89b8686bc1ff31bbd3b3732c92ec44542b30cfe8c3266127f1a9861331b71bbe.scope: Deactivated successfully.
Feb 25 06:49:14 np0005629333 systemd[1]: Reloading.
Feb 25 06:49:14 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:49:14 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:49:14 np0005629333 systemd[1]: Reloading.
Feb 25 06:49:14 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:49:14 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:49:14 np0005629333 systemd[1]: Starting Ceph mgr.compute-0.jzfame for 8ac33163-6221-5d58-9a39-8b6933fe7762...
Feb 25 06:49:14 np0005629333 podman[76621]: 2026-02-25 11:49:14.748652114 +0000 UTC m=+0.042170240 container create f6f9db6e85baeadc7351ac54047d8f972801946493a8ad4294ffa080075424b6 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mgr-compute-0-jzfame, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 06:49:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/883ccbcb046ee8ecd65a0547de4ad5d25b96c9c6cf64d5bb5a204f08b9d2c9eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/883ccbcb046ee8ecd65a0547de4ad5d25b96c9c6cf64d5bb5a204f08b9d2c9eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/883ccbcb046ee8ecd65a0547de4ad5d25b96c9c6cf64d5bb5a204f08b9d2c9eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/883ccbcb046ee8ecd65a0547de4ad5d25b96c9c6cf64d5bb5a204f08b9d2c9eb/merged/var/lib/ceph/mgr/ceph-compute-0.jzfame supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:14 np0005629333 podman[76621]: 2026-02-25 11:49:14.819650253 +0000 UTC m=+0.113168399 container init f6f9db6e85baeadc7351ac54047d8f972801946493a8ad4294ffa080075424b6 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mgr-compute-0-jzfame, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:49:14 np0005629333 podman[76621]: 2026-02-25 11:49:14.827126896 +0000 UTC m=+0.120645022 container start f6f9db6e85baeadc7351ac54047d8f972801946493a8ad4294ffa080075424b6 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mgr-compute-0-jzfame, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:49:14 np0005629333 bash[76621]: f6f9db6e85baeadc7351ac54047d8f972801946493a8ad4294ffa080075424b6
Feb 25 06:49:14 np0005629333 podman[76621]: 2026-02-25 11:49:14.735154671 +0000 UTC m=+0.028672807 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:14 np0005629333 systemd[1]: Started Ceph mgr.compute-0.jzfame for 8ac33163-6221-5d58-9a39-8b6933fe7762.
Feb 25 06:49:14 np0005629333 ceph-mgr[76641]: set uid:gid to 167:167 (ceph:ceph)
Feb 25 06:49:14 np0005629333 ceph-mgr[76641]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Feb 25 06:49:14 np0005629333 ceph-mgr[76641]: pidfile_write: ignore empty --pid-file
Feb 25 06:49:14 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'alerts'
Feb 25 06:49:14 np0005629333 podman[76642]: 2026-02-25 11:49:14.926624475 +0000 UTC m=+0.060261594 container create 322e10e67cd8697c0e493c6a5d62bce372f2bac066381ac7d89d7215fb73a314 (image=quay.io/ceph/ceph:v20, name=recursing_kare, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 06:49:14 np0005629333 systemd[1]: Started libpod-conmon-322e10e67cd8697c0e493c6a5d62bce372f2bac066381ac7d89d7215fb73a314.scope.
Feb 25 06:49:14 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'balancer'
Feb 25 06:49:14 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:14 np0005629333 podman[76642]: 2026-02-25 11:49:14.900817551 +0000 UTC m=+0.034454720 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/528a643f0f49c7887a8943c26c3023ed1bcc9db1b91582ea68205797fa0946b4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/528a643f0f49c7887a8943c26c3023ed1bcc9db1b91582ea68205797fa0946b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/528a643f0f49c7887a8943c26c3023ed1bcc9db1b91582ea68205797fa0946b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:15 np0005629333 podman[76642]: 2026-02-25 11:49:15.021616077 +0000 UTC m=+0.155253216 container init 322e10e67cd8697c0e493c6a5d62bce372f2bac066381ac7d89d7215fb73a314 (image=quay.io/ceph/ceph:v20, name=recursing_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 06:49:15 np0005629333 podman[76642]: 2026-02-25 11:49:15.03053888 +0000 UTC m=+0.164175999 container start 322e10e67cd8697c0e493c6a5d62bce372f2bac066381ac7d89d7215fb73a314 (image=quay.io/ceph/ceph:v20, name=recursing_kare, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Feb 25 06:49:15 np0005629333 podman[76642]: 2026-02-25 11:49:15.035415439 +0000 UTC m=+0.169052628 container attach 322e10e67cd8697c0e493c6a5d62bce372f2bac066381ac7d89d7215fb73a314 (image=quay.io/ceph/ceph:v20, name=recursing_kare, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:49:15 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'cephadm'
Feb 25 06:49:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb 25 06:49:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2580540589' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb 25 06:49:15 np0005629333 recursing_kare[76678]: 
Feb 25 06:49:15 np0005629333 recursing_kare[76678]: {
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:    "fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:    "health": {
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "status": "HEALTH_OK",
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "checks": {},
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "mutes": []
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:    },
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:    "election_epoch": 5,
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:    "quorum": [
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        0
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:    ],
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:    "quorum_names": [
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "compute-0"
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:    ],
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:    "quorum_age": 2,
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:    "monmap": {
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "epoch": 1,
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "min_mon_release_name": "tentacle",
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "num_mons": 1
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:    },
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:    "osdmap": {
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "epoch": 1,
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "num_osds": 0,
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "num_up_osds": 0,
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "osd_up_since": 0,
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "num_in_osds": 0,
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "osd_in_since": 0,
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "num_remapped_pgs": 0
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:    },
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:    "pgmap": {
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "pgs_by_state": [],
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "num_pgs": 0,
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "num_pools": 0,
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "num_objects": 0,
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "data_bytes": 0,
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "bytes_used": 0,
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "bytes_avail": 0,
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "bytes_total": 0
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:    },
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:    "fsmap": {
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "epoch": 1,
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "btime": "2026-02-25T11:49:10:477435+0000",
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "by_rank": [],
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "up:standby": 0
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:    },
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:    "mgrmap": {
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "available": false,
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "num_standbys": 0,
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "modules": [
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:            "iostat",
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:            "nfs"
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        ],
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "services": {}
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:    },
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:    "servicemap": {
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "epoch": 1,
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "modified": "2026-02-25T11:49:10.480293+0000",
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:        "services": {}
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:    },
Feb 25 06:49:15 np0005629333 recursing_kare[76678]:    "progress_events": {}
Feb 25 06:49:15 np0005629333 recursing_kare[76678]: }
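The JSON above is what `ceph status --format json-pretty` returned inside the short-lived recursing_kare container: the lone mon is healthy and quorate, but mgrmap.available is still false because mgr.compute-0.jzfame is only now loading its modules. A minimal sketch of the kind of readiness check a bootstrap loop needs over those fields (Python; bootstrap_ready and the condensed status_json string are ours, standing in for the full block above):

    import json

    # Decide bootstrap readiness from three fields of `ceph status` JSON:
    # overall health, mon quorum completeness, and an active mgr.
    def bootstrap_ready(status: dict) -> bool:
        healthy = status["health"]["status"] == "HEALTH_OK"
        quorate = status["monmap"]["num_mons"] == len(status["quorum_names"])
        mgr_up = status["mgrmap"]["available"]
        return healthy and quorate and mgr_up

    status_json = ('{"health": {"status": "HEALTH_OK"}, "monmap": {"num_mons": 1}, '
                   '"quorum_names": ["compute-0"], "mgrmap": {"available": false}}')
    print(bootstrap_ready(json.loads(status_json)))
    # False at this point in the boot: healthy and quorate, but no mgr yet.

The second status call below (from heuristic_mahavira, two seconds later) shows the same picture with quorum_age advanced to 4, which is why the bootstrap keeps polling.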
Feb 25 06:49:15 np0005629333 systemd[1]: libpod-322e10e67cd8697c0e493c6a5d62bce372f2bac066381ac7d89d7215fb73a314.scope: Deactivated successfully.
Feb 25 06:49:15 np0005629333 podman[76642]: 2026-02-25 11:49:15.246159812 +0000 UTC m=+0.379796951 container died 322e10e67cd8697c0e493c6a5d62bce372f2bac066381ac7d89d7215fb73a314 (image=quay.io/ceph/ceph:v20, name=recursing_kare, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 06:49:15 np0005629333 systemd[1]: var-lib-containers-storage-overlay-528a643f0f49c7887a8943c26c3023ed1bcc9db1b91582ea68205797fa0946b4-merged.mount: Deactivated successfully.
Feb 25 06:49:15 np0005629333 podman[76642]: 2026-02-25 11:49:15.332819806 +0000 UTC m=+0.466456915 container remove 322e10e67cd8697c0e493c6a5d62bce372f2bac066381ac7d89d7215fb73a314 (image=quay.io/ceph/ceph:v20, name=recursing_kare, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 06:49:15 np0005629333 systemd[1]: libpod-conmon-322e10e67cd8697c0e493c6a5d62bce372f2bac066381ac7d89d7215fb73a314.scope: Deactivated successfully.
Feb 25 06:49:15 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'crash'
Feb 25 06:49:15 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'dashboard'
Feb 25 06:49:16 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'devicehealth'
Feb 25 06:49:16 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'diskprediction_local'
Feb 25 06:49:16 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mgr-compute-0-jzfame[76637]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb 25 06:49:16 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mgr-compute-0-jzfame[76637]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb 25 06:49:16 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mgr-compute-0-jzfame[76637]:  from numpy import show_config as show_numpy_config
Feb 25 06:49:16 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'influx'
Feb 25 06:49:16 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'insights'
Feb 25 06:49:16 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'iostat'
Feb 25 06:49:16 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'k8sevents'
Feb 25 06:49:17 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'localpool'
Feb 25 06:49:17 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'mds_autoscaler'
Feb 25 06:49:17 np0005629333 podman[76728]: 2026-02-25 11:49:17.422385025 +0000 UTC m=+0.066125931 container create e14d074c9bf6bc4c644644d337cbf9640ba54d95f5c2a587967c33875fe0d5b5 (image=quay.io/ceph/ceph:v20, name=heuristic_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 06:49:17 np0005629333 systemd[1]: Started libpod-conmon-e14d074c9bf6bc4c644644d337cbf9640ba54d95f5c2a587967c33875fe0d5b5.scope.
Feb 25 06:49:17 np0005629333 podman[76728]: 2026-02-25 11:49:17.398778674 +0000 UTC m=+0.042519640 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:17 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/150db695056cc391d07b4b4c84e92abe19626dae2316bc58e55d1dad26e9c03f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/150db695056cc391d07b4b4c84e92abe19626dae2316bc58e55d1dad26e9c03f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/150db695056cc391d07b4b4c84e92abe19626dae2316bc58e55d1dad26e9c03f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:17 np0005629333 podman[76728]: 2026-02-25 11:49:17.515427071 +0000 UTC m=+0.159167977 container init e14d074c9bf6bc4c644644d337cbf9640ba54d95f5c2a587967c33875fe0d5b5 (image=quay.io/ceph/ceph:v20, name=heuristic_mahavira, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Feb 25 06:49:17 np0005629333 podman[76728]: 2026-02-25 11:49:17.52136829 +0000 UTC m=+0.165109236 container start e14d074c9bf6bc4c644644d337cbf9640ba54d95f5c2a587967c33875fe0d5b5 (image=quay.io/ceph/ceph:v20, name=heuristic_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:49:17 np0005629333 podman[76728]: 2026-02-25 11:49:17.530893681 +0000 UTC m=+0.174634587 container attach e14d074c9bf6bc4c644644d337cbf9640ba54d95f5c2a587967c33875fe0d5b5 (image=quay.io/ceph/ceph:v20, name=heuristic_mahavira, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 06:49:17 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'mirroring'
Feb 25 06:49:17 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'nfs'
Feb 25 06:49:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb 25 06:49:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1504369560' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]: 
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]: {
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:    "fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:    "health": {
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "status": "HEALTH_OK",
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "checks": {},
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "mutes": []
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:    },
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:    "election_epoch": 5,
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:    "quorum": [
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        0
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:    ],
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:    "quorum_names": [
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "compute-0"
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:    ],
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:    "quorum_age": 4,
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:    "monmap": {
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "epoch": 1,
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "min_mon_release_name": "tentacle",
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "num_mons": 1
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:    },
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:    "osdmap": {
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "epoch": 1,
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "num_osds": 0,
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "num_up_osds": 0,
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "osd_up_since": 0,
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "num_in_osds": 0,
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "osd_in_since": 0,
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "num_remapped_pgs": 0
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:    },
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:    "pgmap": {
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "pgs_by_state": [],
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "num_pgs": 0,
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "num_pools": 0,
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "num_objects": 0,
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "data_bytes": 0,
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "bytes_used": 0,
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "bytes_avail": 0,
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "bytes_total": 0
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:    },
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:    "fsmap": {
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "epoch": 1,
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "btime": "2026-02-25T11:49:10.477435+0000",
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "by_rank": [],
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "up:standby": 0
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:    },
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:    "mgrmap": {
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "available": false,
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "num_standbys": 0,
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "modules": [
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:            "iostat",
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:            "nfs"
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        ],
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "services": {}
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:    },
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:    "servicemap": {
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "epoch": 1,
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "modified": "2026-02-25T11:49:10.480293+0000",
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:        "services": {}
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:    },
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]:    "progress_events": {}
Feb 25 06:49:17 np0005629333 heuristic_mahavira[76744]: }
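The container stdout above is an ordinary `ceph status --format json-pretty` document. A minimal sketch of checking the fields that matter at this bootstrap stage, run over a trimmed copy of the blob (capturing it from the journal or the container's stdout is assumed):

    import json

    # Trimmed copy of the status document logged above.
    raw = '''{"health": {"status": "HEALTH_OK"},
              "quorum_names": ["compute-0"],
              "monmap": {"num_mons": 1},
              "mgrmap": {"available": false}}'''
    status = json.loads(raw)
    assert status["health"]["status"] == "HEALTH_OK"
    print(status["quorum_names"])         # ['compute-0'] - single-mon quorum
    print(status["mgrmap"]["available"])  # False until the mgr activates below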
Feb 25 06:49:17 np0005629333 systemd[1]: libpod-e14d074c9bf6bc4c644644d337cbf9640ba54d95f5c2a587967c33875fe0d5b5.scope: Deactivated successfully.
Feb 25 06:49:17 np0005629333 podman[76728]: 2026-02-25 11:49:17.746993796 +0000 UTC m=+0.390734732 container died e14d074c9bf6bc4c644644d337cbf9640ba54d95f5c2a587967c33875fe0d5b5 (image=quay.io/ceph/ceph:v20, name=heuristic_mahavira, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:49:17 np0005629333 systemd[1]: var-lib-containers-storage-overlay-150db695056cc391d07b4b4c84e92abe19626dae2316bc58e55d1dad26e9c03f-merged.mount: Deactivated successfully.
Feb 25 06:49:17 np0005629333 podman[76728]: 2026-02-25 11:49:17.817633845 +0000 UTC m=+0.461374791 container remove e14d074c9bf6bc4c644644d337cbf9640ba54d95f5c2a587967c33875fe0d5b5 (image=quay.io/ceph/ceph:v20, name=heuristic_mahavira, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:49:17 np0005629333 systemd[1]: libpod-conmon-e14d074c9bf6bc4c644644d337cbf9640ba54d95f5c2a587967c33875fe0d5b5.scope: Deactivated successfully.
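Each of these status probes is a short-lived container: create, init, start, attach, died, remove, then the libpod-conmon scope is deactivated. A hedged sketch of reproducing one probe by hand; the exact flags cephadm used are not visible in this log, so the bind mount and entrypoint below are assumptions based on the overlay remount messages:

    import subprocess

    # Assumption: ceph.conf and the admin keyring live under /etc/ceph on
    # the host, as the xfs remount paths above suggest.
    cmd = [
        "podman", "run", "--rm",
        "-v", "/etc/ceph:/etc/ceph:z",
        "quay.io/ceph/ceph:v20",
        "ceph", "status", "--format", "json-pretty",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(result.stdout)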
Feb 25 06:49:17 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'orchestrator'
Feb 25 06:49:18 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'osd_perf_query'
Feb 25 06:49:18 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'osd_support'
Feb 25 06:49:18 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'pg_autoscaler'
Feb 25 06:49:18 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'progress'
Feb 25 06:49:18 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'prometheus'
Feb 25 06:49:18 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'rbd_support'
Feb 25 06:49:18 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'rgw'
Feb 25 06:49:18 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'rook'
Feb 25 06:49:19 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'selftest'
Feb 25 06:49:19 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'smb'
Feb 25 06:49:19 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'snap_schedule'
Feb 25 06:49:19 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'stats'
Feb 25 06:49:19 np0005629333 podman[76782]: 2026-02-25 11:49:19.893454843 +0000 UTC m=+0.054678806 container create c911bd8bc0292978ef94af4c9691b476d67c6290cfc15e4cdc91722dc0079234 (image=quay.io/ceph/ceph:v20, name=infallible_bouman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:49:19 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'status'
Feb 25 06:49:19 np0005629333 systemd[1]: Started libpod-conmon-c911bd8bc0292978ef94af4c9691b476d67c6290cfc15e4cdc91722dc0079234.scope.
Feb 25 06:49:19 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:19 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a5421538c464e230951e07abb72fa468f5e1977f1fe316437d60fb20c790c96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:19 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a5421538c464e230951e07abb72fa468f5e1977f1fe316437d60fb20c790c96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:19 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a5421538c464e230951e07abb72fa468f5e1977f1fe316437d60fb20c790c96/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:19 np0005629333 podman[76782]: 2026-02-25 11:49:19.867453833 +0000 UTC m=+0.028677826 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:19 np0005629333 podman[76782]: 2026-02-25 11:49:19.989677379 +0000 UTC m=+0.150901322 container init c911bd8bc0292978ef94af4c9691b476d67c6290cfc15e4cdc91722dc0079234 (image=quay.io/ceph/ceph:v20, name=infallible_bouman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 06:49:19 np0005629333 podman[76782]: 2026-02-25 11:49:19.993144637 +0000 UTC m=+0.154368570 container start c911bd8bc0292978ef94af4c9691b476d67c6290cfc15e4cdc91722dc0079234 (image=quay.io/ceph/ceph:v20, name=infallible_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 06:49:19 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'telegraf'
Feb 25 06:49:19 np0005629333 podman[76782]: 2026-02-25 11:49:19.996721379 +0000 UTC m=+0.157945322 container attach c911bd8bc0292978ef94af4c9691b476d67c6290cfc15e4cdc91722dc0079234 (image=quay.io/ceph/ceph:v20, name=infallible_bouman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'telemetry'
Feb 25 06:49:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb 25 06:49:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/815072726' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]: 
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]: {
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:    "fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:    "health": {
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "status": "HEALTH_OK",
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "checks": {},
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "mutes": []
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'test_orchestrator'
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:    },
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:    "election_epoch": 5,
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:    "quorum": [
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        0
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:    ],
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:    "quorum_names": [
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "compute-0"
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:    ],
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:    "quorum_age": 7,
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:    "monmap": {
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "epoch": 1,
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "min_mon_release_name": "tentacle",
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "num_mons": 1
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:    },
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:    "osdmap": {
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "epoch": 1,
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "num_osds": 0,
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "num_up_osds": 0,
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "osd_up_since": 0,
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "num_in_osds": 0,
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "osd_in_since": 0,
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "num_remapped_pgs": 0
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:    },
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:    "pgmap": {
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "pgs_by_state": [],
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "num_pgs": 0,
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "num_pools": 0,
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "num_objects": 0,
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "data_bytes": 0,
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "bytes_used": 0,
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "bytes_avail": 0,
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "bytes_total": 0
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:    },
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:    "fsmap": {
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "epoch": 1,
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "btime": "2026-02-25T11:49:10.477435+0000",
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "by_rank": [],
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "up:standby": 0
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:    },
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:    "mgrmap": {
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "available": false,
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "num_standbys": 0,
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "modules": [
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:            "iostat",
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:            "nfs"
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        ],
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "services": {}
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:    },
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:    "servicemap": {
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "epoch": 1,
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "modified": "2026-02-25T11:49:10.480293+0000",
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:        "services": {}
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:    },
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]:    "progress_events": {}
Feb 25 06:49:20 np0005629333 infallible_bouman[76799]: }
Feb 25 06:49:20 np0005629333 systemd[1]: libpod-c911bd8bc0292978ef94af4c9691b476d67c6290cfc15e4cdc91722dc0079234.scope: Deactivated successfully.
Feb 25 06:49:20 np0005629333 podman[76782]: 2026-02-25 11:49:20.201498592 +0000 UTC m=+0.362722545 container died c911bd8bc0292978ef94af4c9691b476d67c6290cfc15e4cdc91722dc0079234 (image=quay.io/ceph/ceph:v20, name=infallible_bouman, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:49:20 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1a5421538c464e230951e07abb72fa468f5e1977f1fe316437d60fb20c790c96-merged.mount: Deactivated successfully.
Feb 25 06:49:20 np0005629333 podman[76782]: 2026-02-25 11:49:20.242128588 +0000 UTC m=+0.403352521 container remove c911bd8bc0292978ef94af4c9691b476d67c6290cfc15e4cdc91722dc0079234 (image=quay.io/ceph/ceph:v20, name=infallible_bouman, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:49:20 np0005629333 systemd[1]: libpod-conmon-c911bd8bc0292978ef94af4c9691b476d67c6290cfc15e4cdc91722dc0079234.scope: Deactivated successfully.
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'volumes'
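That 'volumes' line closes the module import walk that started with 'mirroring'. Loaded is not the same as enabled; which modules are actually on can be read back from the cluster. A hedged sketch (assumes a working admin keyring on the host):

    import json, subprocess

    out = subprocess.run(["ceph", "mgr", "module", "ls", "--format", "json"],
                         capture_output=True, text=True, check=True)
    modules = json.loads(out.stdout)
    # Per the mgrmap in the status dumps above, expect ['iostat', 'nfs'].
    print(modules.get("enabled_modules"))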
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: ms_deliver_dispatch: unhandled message 0x55900424f860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Feb 25 06:49:20 np0005629333 ceph-mon[76335]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.jzfame
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: mgr handle_mgr_map Activating!
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: mgr handle_mgr_map I am now activating
Feb 25 06:49:20 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.jzfame(active, starting, since 0.0157038s)
Feb 25 06:49:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Feb 25 06:49:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1712586907' entity='mgr.compute-0.jzfame' cmd={"prefix": "mds metadata"} : dispatch
Feb 25 06:49:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).mds e1 all = 1
Feb 25 06:49:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb 25 06:49:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1712586907' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata"} : dispatch
Feb 25 06:49:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Feb 25 06:49:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1712586907' entity='mgr.compute-0.jzfame' cmd={"prefix": "mon metadata"} : dispatch
Feb 25 06:49:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb 25 06:49:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1712586907' entity='mgr.compute-0.jzfame' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Feb 25 06:49:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.jzfame", "id": "compute-0.jzfame"} v 0)
Feb 25 06:49:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1712586907' entity='mgr.compute-0.jzfame' cmd={"prefix": "mgr metadata", "who": "compute-0.jzfame", "id": "compute-0.jzfame"} : dispatch
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: mgr load Constructed class from module: balancer
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [balancer INFO root] Starting
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: mgr load Constructed class from module: crash
Feb 25 06:49:20 np0005629333 ceph-mon[76335]: log_channel(cluster) log [INF] : Manager daemon compute-0.jzfame is now available
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_11:49:20
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [balancer INFO root] No pools available
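On load the balancer immediately builds a plan (mode upmap, max misplaced 5%) and bails out because no pools exist yet. The same state can be inspected from the host; a hedged sketch, assuming the JSON output shape:

    import json, subprocess

    out = subprocess.run(["ceph", "balancer", "status", "--format", "json"],
                         capture_output=True, text=True, check=True)
    state = json.loads(out.stdout)
    print(state.get("mode"))    # 'upmap', matching the log line above
    print(state.get("active"))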
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: mgr load Constructed class from module: devicehealth
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [devicehealth INFO root] Starting
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: mgr load Constructed class from module: iostat
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: mgr load Constructed class from module: nfs
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: mgr load Constructed class from module: orchestrator
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: mgr load Constructed class from module: pg_autoscaler
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: mgr load Constructed class from module: progress
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [progress INFO root] Loading...
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [progress INFO root] No stored events to load
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [progress INFO root] Loaded [] historic events
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [progress INFO root] Loaded OSDMap, ready.
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] recovery thread starting
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] starting setup
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: mgr load Constructed class from module: rbd_support
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 25 06:49:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jzfame/mirror_snapshot_schedule"} v 0)
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: mgr load Constructed class from module: status
Feb 25 06:49:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1712586907' entity='mgr.compute-0.jzfame' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jzfame/mirror_snapshot_schedule"} : dispatch
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: mgr load Constructed class from module: telemetry
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Feb 25 06:49:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] PerfHandler: starting
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TaskHandler: starting
Feb 25 06:49:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jzfame/trash_purge_schedule"} v 0)
Feb 25 06:49:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1712586907' entity='mgr.compute-0.jzfame' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jzfame/trash_purge_schedule"} : dispatch
Feb 25 06:49:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1712586907' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 06:49:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] setup complete
Feb 25 06:49:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1712586907' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:20 np0005629333 ceph-mgr[76641]: mgr load Constructed class from module: volumes
Feb 25 06:49:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Feb 25 06:49:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1712586907' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:21 np0005629333 ceph-mon[76335]: Activating manager daemon compute-0.jzfame
Feb 25 06:49:21 np0005629333 ceph-mon[76335]: Manager daemon compute-0.jzfame is now available
Feb 25 06:49:21 np0005629333 ceph-mon[76335]: from='mgr.14102 192.168.122.100:0/1712586907' entity='mgr.compute-0.jzfame' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jzfame/mirror_snapshot_schedule"} : dispatch
Feb 25 06:49:21 np0005629333 ceph-mon[76335]: from='mgr.14102 192.168.122.100:0/1712586907' entity='mgr.compute-0.jzfame' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jzfame/trash_purge_schedule"} : dispatch
Feb 25 06:49:21 np0005629333 ceph-mon[76335]: from='mgr.14102 192.168.122.100:0/1712586907' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:21 np0005629333 ceph-mon[76335]: from='mgr.14102 192.168.122.100:0/1712586907' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:21 np0005629333 ceph-mon[76335]: from='mgr.14102 192.168.122.100:0/1712586907' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:21 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.jzfame(active, since 1.03226s)
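The handshake above runs from mgrmap e2 ('active, starting') to e3 ('active'); in the status documents this is the moment mgrmap.available flips from false to true. A hedged poll loop over `ceph mgr dump` (the 30 s budget is an arbitrary assumption):

    import json, subprocess, time

    deadline = time.time() + 30          # assumption: 30 s is plenty here
    while time.time() < deadline:
        out = subprocess.run(["ceph", "mgr", "dump"],
                             capture_output=True, text=True, check=True)
        if json.loads(out.stdout).get("available"):
            break                        # active mgr is up, e.g. compute-0.jzfame
        time.sleep(1)
    else:
        raise TimeoutError("mgr never became available")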
Feb 25 06:49:22 np0005629333 podman[76914]: 2026-02-25 11:49:22.321859679 +0000 UTC m=+0.055051647 container create 6a07bd3b7d3af7601fae13f78e6f6f7c73b8d73e8281d1712eb846338ce5393c (image=quay.io/ceph/ceph:v20, name=pedantic_chatterjee, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:49:22 np0005629333 systemd[1]: Started libpod-conmon-6a07bd3b7d3af7601fae13f78e6f6f7c73b8d73e8281d1712eb846338ce5393c.scope.
Feb 25 06:49:22 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55529fefe3016ed4fff9bfe6942cebda2344f84f737d7b8bb8a5ed8c969e0063/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55529fefe3016ed4fff9bfe6942cebda2344f84f737d7b8bb8a5ed8c969e0063/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55529fefe3016ed4fff9bfe6942cebda2344f84f737d7b8bb8a5ed8c969e0063/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:22 np0005629333 podman[76914]: 2026-02-25 11:49:22.300247904 +0000 UTC m=+0.033439922 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:22 np0005629333 podman[76914]: 2026-02-25 11:49:22.419598478 +0000 UTC m=+0.152790506 container init 6a07bd3b7d3af7601fae13f78e6f6f7c73b8d73e8281d1712eb846338ce5393c (image=quay.io/ceph/ceph:v20, name=pedantic_chatterjee, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 06:49:22 np0005629333 podman[76914]: 2026-02-25 11:49:22.427492372 +0000 UTC m=+0.160684340 container start 6a07bd3b7d3af7601fae13f78e6f6f7c73b8d73e8281d1712eb846338ce5393c (image=quay.io/ceph/ceph:v20, name=pedantic_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 25 06:49:22 np0005629333 podman[76914]: 2026-02-25 11:49:22.431533397 +0000 UTC m=+0.164725405 container attach 6a07bd3b7d3af7601fae13f78e6f6f7c73b8d73e8281d1712eb846338ce5393c (image=quay.io/ceph/ceph:v20, name=pedantic_chatterjee, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3)
Feb 25 06:49:22 np0005629333 ceph-mgr[76641]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 25 06:49:22 np0005629333 ceph-mgr[76641]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 25 06:49:22 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.jzfame(active, since 2s)
Feb 25 06:49:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Feb 25 06:49:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3010900929' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]: 
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]: {
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:    "fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:    "health": {
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "status": "HEALTH_OK",
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "checks": {},
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "mutes": []
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:    },
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:    "election_epoch": 5,
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:    "quorum": [
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        0
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:    ],
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:    "quorum_names": [
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "compute-0"
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:    ],
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:    "quorum_age": 10,
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:    "monmap": {
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "epoch": 1,
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "min_mon_release_name": "tentacle",
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "num_mons": 1
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:    },
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:    "osdmap": {
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "epoch": 1,
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "num_osds": 0,
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "num_up_osds": 0,
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "osd_up_since": 0,
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "num_in_osds": 0,
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "osd_in_since": 0,
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "num_remapped_pgs": 0
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:    },
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:    "pgmap": {
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "pgs_by_state": [],
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "num_pgs": 0,
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "num_pools": 0,
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "num_objects": 0,
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "data_bytes": 0,
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "bytes_used": 0,
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "bytes_avail": 0,
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "bytes_total": 0
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:    },
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:    "fsmap": {
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "epoch": 1,
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "btime": "2026-02-25T11:49:10.477435+0000",
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "by_rank": [],
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "up:standby": 0
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:    },
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:    "mgrmap": {
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "available": true,
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "num_standbys": 0,
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "modules": [
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:            "iostat",
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:            "nfs"
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        ],
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "services": {}
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:    },
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:    "servicemap": {
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "epoch": 1,
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "modified": "2026-02-25T11:49:10.480293+0000",
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:        "services": {}
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:    },
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]:    "progress_events": {}
Feb 25 06:49:22 np0005629333 pedantic_chatterjee[76930]: }
Feb 25 06:49:22 np0005629333 systemd[1]: libpod-6a07bd3b7d3af7601fae13f78e6f6f7c73b8d73e8281d1712eb846338ce5393c.scope: Deactivated successfully.
Feb 25 06:49:22 np0005629333 podman[76914]: 2026-02-25 11:49:22.942206798 +0000 UTC m=+0.675398786 container died 6a07bd3b7d3af7601fae13f78e6f6f7c73b8d73e8281d1712eb846338ce5393c (image=quay.io/ceph/ceph:v20, name=pedantic_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 06:49:22 np0005629333 systemd[1]: var-lib-containers-storage-overlay-55529fefe3016ed4fff9bfe6942cebda2344f84f737d7b8bb8a5ed8c969e0063-merged.mount: Deactivated successfully.
Feb 25 06:49:23 np0005629333 podman[76914]: 2026-02-25 11:49:23.092985395 +0000 UTC m=+0.826177353 container remove 6a07bd3b7d3af7601fae13f78e6f6f7c73b8d73e8281d1712eb846338ce5393c (image=quay.io/ceph/ceph:v20, name=pedantic_chatterjee, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Feb 25 06:49:23 np0005629333 systemd[1]: libpod-conmon-6a07bd3b7d3af7601fae13f78e6f6f7c73b8d73e8281d1712eb846338ce5393c.scope: Deactivated successfully.
Feb 25 06:49:23 np0005629333 podman[76968]: 2026-02-25 11:49:23.179427604 +0000 UTC m=+0.063845807 container create 9dd5fb2a3a544595d3581552495aab94f43239dce81e6cdb3ac102d592d0446a (image=quay.io/ceph/ceph:v20, name=gracious_saha, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 06:49:23 np0005629333 systemd[1]: Started libpod-conmon-9dd5fb2a3a544595d3581552495aab94f43239dce81e6cdb3ac102d592d0446a.scope.
Feb 25 06:49:23 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:23 np0005629333 podman[76968]: 2026-02-25 11:49:23.148248057 +0000 UTC m=+0.032666320 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286654f96aed5f38c26aa54537b6692d45d34fc4921bd5c0bde6670c8d6cc2f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286654f96aed5f38c26aa54537b6692d45d34fc4921bd5c0bde6670c8d6cc2f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286654f96aed5f38c26aa54537b6692d45d34fc4921bd5c0bde6670c8d6cc2f8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286654f96aed5f38c26aa54537b6692d45d34fc4921bd5c0bde6670c8d6cc2f8/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:23 np0005629333 podman[76968]: 2026-02-25 11:49:23.255422215 +0000 UTC m=+0.139840428 container init 9dd5fb2a3a544595d3581552495aab94f43239dce81e6cdb3ac102d592d0446a (image=quay.io/ceph/ceph:v20, name=gracious_saha, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 06:49:23 np0005629333 podman[76968]: 2026-02-25 11:49:23.264118322 +0000 UTC m=+0.148536485 container start 9dd5fb2a3a544595d3581552495aab94f43239dce81e6cdb3ac102d592d0446a (image=quay.io/ceph/ceph:v20, name=gracious_saha, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:49:23 np0005629333 podman[76968]: 2026-02-25 11:49:23.26792023 +0000 UTC m=+0.152338493 container attach 9dd5fb2a3a544595d3581552495aab94f43239dce81e6cdb3ac102d592d0446a (image=quay.io/ceph/ceph:v20, name=gracious_saha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:49:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Feb 25 06:49:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1484081374' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb 25 06:49:23 np0005629333 gracious_saha[76984]: 
Feb 25 06:49:23 np0005629333 gracious_saha[76984]: [global]
Feb 25 06:49:23 np0005629333 gracious_saha[76984]: 	fsid = 8ac33163-6221-5d58-9a39-8b6933fe7762
Feb 25 06:49:23 np0005629333 gracious_saha[76984]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Feb 25 06:49:23 np0005629333 gracious_saha[76984]: 	osd_crush_chooseleaf_type = 0
Feb 25 06:49:23 np0005629333 systemd[1]: libpod-9dd5fb2a3a544595d3581552495aab94f43239dce81e6cdb3ac102d592d0446a.scope: Deactivated successfully.
Feb 25 06:49:23 np0005629333 podman[77010]: 2026-02-25 11:49:23.694427249 +0000 UTC m=+0.022501941 container died 9dd5fb2a3a544595d3581552495aab94f43239dce81e6cdb3ac102d592d0446a (image=quay.io/ceph/ceph:v20, name=gracious_saha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 06:49:23 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/1484081374' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb 25 06:49:23 np0005629333 systemd[1]: var-lib-containers-storage-overlay-286654f96aed5f38c26aa54537b6692d45d34fc4921bd5c0bde6670c8d6cc2f8-merged.mount: Deactivated successfully.
Feb 25 06:49:23 np0005629333 podman[77010]: 2026-02-25 11:49:23.740331824 +0000 UTC m=+0.068406536 container remove 9dd5fb2a3a544595d3581552495aab94f43239dce81e6cdb3ac102d592d0446a (image=quay.io/ceph/ceph:v20, name=gracious_saha, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:49:23 np0005629333 systemd[1]: libpod-conmon-9dd5fb2a3a544595d3581552495aab94f43239dce81e6cdb3ac102d592d0446a.scope: Deactivated successfully.
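The gracious_saha output above is what `config assimilate-conf` hands back: a minimal residual [global] section (fsid, mon_host, osd_crush_chooseleaf_type) that stays in the conf file rather than the mon config store, as I read it. A hedged sketch of the same call; the temp-file path is illustrative:

    import subprocess, tempfile

    conf = ("[global]\n"
            "\tfsid = 8ac33163-6221-5d58-9a39-8b6933fe7762\n"
            "\tmon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]\n"
            "\tosd_crush_chooseleaf_type = 0\n")
    with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
        f.write(conf)
        path = f.name
    # -i names the conf to fold into the mon config database; the command
    # prints whatever could not (or should not) be assimilated.
    subprocess.run(["ceph", "config", "assimilate-conf", "-i", path], check=True)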
Feb 25 06:49:23 np0005629333 podman[77026]: 2026-02-25 11:49:23.81966536 +0000 UTC m=+0.054737737 container create ac910e107e32f3aa2df96d354f9ed05b5d52c7f82a8de39100959009df950ad0 (image=quay.io/ceph/ceph:v20, name=inspiring_hawking, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 06:49:23 np0005629333 systemd[1]: Started libpod-conmon-ac910e107e32f3aa2df96d354f9ed05b5d52c7f82a8de39100959009df950ad0.scope.
Feb 25 06:49:23 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5ae0813334b8bd8f87fb21ded785f5d4d6f187fcc35d99f3b236a4d34f1626d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5ae0813334b8bd8f87fb21ded785f5d4d6f187fcc35d99f3b236a4d34f1626d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5ae0813334b8bd8f87fb21ded785f5d4d6f187fcc35d99f3b236a4d34f1626d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:23 np0005629333 podman[77026]: 2026-02-25 11:49:23.879009328 +0000 UTC m=+0.114081695 container init ac910e107e32f3aa2df96d354f9ed05b5d52c7f82a8de39100959009df950ad0 (image=quay.io/ceph/ceph:v20, name=inspiring_hawking, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:49:23 np0005629333 podman[77026]: 2026-02-25 11:49:23.883587718 +0000 UTC m=+0.118660085 container start ac910e107e32f3aa2df96d354f9ed05b5d52c7f82a8de39100959009df950ad0 (image=quay.io/ceph/ceph:v20, name=inspiring_hawking, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 06:49:23 np0005629333 podman[77026]: 2026-02-25 11:49:23.795122042 +0000 UTC m=+0.030194459 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:23 np0005629333 podman[77026]: 2026-02-25 11:49:23.889369852 +0000 UTC m=+0.124442209 container attach ac910e107e32f3aa2df96d354f9ed05b5d52c7f82a8de39100959009df950ad0 (image=quay.io/ceph/ceph:v20, name=inspiring_hawking, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 06:49:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Feb 25 06:49:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1852775491' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Feb 25 06:49:24 np0005629333 ceph-mgr[76641]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 25 06:49:24 np0005629333 ceph-mgr[76641]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 25 06:49:24 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/1852775491' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Feb 25 06:49:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1852775491' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Feb 25 06:49:24 np0005629333 ceph-mgr[76641]: mgr handle_mgr_map respawning because set of enabled modules changed!
Feb 25 06:49:24 np0005629333 ceph-mgr[76641]: mgr respawn  e: '/usr/bin/ceph-mgr'
Feb 25 06:49:24 np0005629333 ceph-mgr[76641]: mgr respawn  0: '/usr/bin/ceph-mgr'
Feb 25 06:49:24 np0005629333 ceph-mgr[76641]: mgr respawn  1: '-n'
Feb 25 06:49:24 np0005629333 ceph-mgr[76641]: mgr respawn  2: 'mgr.compute-0.jzfame'
Feb 25 06:49:24 np0005629333 ceph-mgr[76641]: mgr respawn  3: '-f'
Feb 25 06:49:24 np0005629333 ceph-mgr[76641]: mgr respawn  4: '--setuser'
Feb 25 06:49:24 np0005629333 ceph-mgr[76641]: mgr respawn  5: 'ceph'
Feb 25 06:49:24 np0005629333 ceph-mgr[76641]: mgr respawn  6: '--setgroup'
Feb 25 06:49:24 np0005629333 ceph-mgr[76641]: mgr respawn  7: 'ceph'
Feb 25 06:49:24 np0005629333 ceph-mgr[76641]: mgr respawn  8: '--default-log-to-file=false'
Feb 25 06:49:24 np0005629333 ceph-mgr[76641]: mgr respawn  9: '--default-log-to-journald=true'
Feb 25 06:49:24 np0005629333 ceph-mgr[76641]: mgr respawn  10: '--default-log-to-stderr=false'
Feb 25 06:49:24 np0005629333 ceph-mgr[76641]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Feb 25 06:49:24 np0005629333 ceph-mgr[76641]: mgr respawn  exe_path /proc/self/exe
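
The "mgr respawn" lines above show the active mgr re-executing itself after the enabled-module set changed: it logs the saved exe ('e:') and argv entries 0-10, then re-execs through /proc/self/exe so the same running binary restarts even if the file on disk has since been replaced. A minimal Python sketch of that re-exec pattern follows; Ceph implements this in C++, so the function below is illustrative only.

    import os
    import sys

    def respawn():
        # On Linux, /proc/self/exe resolves to the binary that is actually
        # running, even if its original path was deleted or replaced, which
        # is why the log prints "exe_path /proc/self/exe".
        os.execv("/proc/self/exe", sys.argv)
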
Feb 25 06:49:24 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.jzfame(active, since 4s)
Feb 25 06:49:24 np0005629333 systemd[1]: libpod-ac910e107e32f3aa2df96d354f9ed05b5d52c7f82a8de39100959009df950ad0.scope: Deactivated successfully.
Feb 25 06:49:24 np0005629333 podman[77026]: 2026-02-25 11:49:24.756522521 +0000 UTC m=+0.991594948 container died ac910e107e32f3aa2df96d354f9ed05b5d52c7f82a8de39100959009df950ad0 (image=quay.io/ceph/ceph:v20, name=inspiring_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:49:24 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a5ae0813334b8bd8f87fb21ded785f5d4d6f187fcc35d99f3b236a4d34f1626d-merged.mount: Deactivated successfully.
Feb 25 06:49:24 np0005629333 podman[77026]: 2026-02-25 11:49:24.806903924 +0000 UTC m=+1.041976301 container remove ac910e107e32f3aa2df96d354f9ed05b5d52c7f82a8de39100959009df950ad0 (image=quay.io/ceph/ceph:v20, name=inspiring_hawking, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 06:49:24 np0005629333 systemd[1]: libpod-conmon-ac910e107e32f3aa2df96d354f9ed05b5d52c7f82a8de39100959009df950ad0.scope: Deactivated successfully.
Feb 25 06:49:24 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mgr-compute-0-jzfame[76637]: ignoring --setuser ceph since I am not root
Feb 25 06:49:24 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mgr-compute-0-jzfame[76637]: ignoring --setgroup ceph since I am not root
Feb 25 06:49:24 np0005629333 ceph-mgr[76641]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Feb 25 06:49:24 np0005629333 ceph-mgr[76641]: pidfile_write: ignore empty --pid-file
Feb 25 06:49:24 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'alerts'
Feb 25 06:49:24 np0005629333 podman[77082]: 2026-02-25 11:49:24.887360562 +0000 UTC m=+0.057985910 container create 3e7fff114d7d8f5da1049cf458b77333a12d7baf28c6142fedb66ea4a6f9a3d7 (image=quay.io/ceph/ceph:v20, name=modest_kepler, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:49:24 np0005629333 systemd[1]: Started libpod-conmon-3e7fff114d7d8f5da1049cf458b77333a12d7baf28c6142fedb66ea4a6f9a3d7.scope.
Feb 25 06:49:24 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'balancer'
Feb 25 06:49:24 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc4fb142d38d50c236407081008fdc3dd1cec48f392e6e57e76c928aedaff051/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc4fb142d38d50c236407081008fdc3dd1cec48f392e6e57e76c928aedaff051/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc4fb142d38d50c236407081008fdc3dd1cec48f392e6e57e76c928aedaff051/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
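
The kernel notices above mean the XFS filesystem backing these bind mounts was formatted without the bigtime feature, so its inode timestamps roll over in January 2038; the message repeats once per remounted path. Whether bigtime is enabled can be checked with xfs_info, as in this hedged sketch (the mount point is an example, not taken from this host's layout):

    import subprocess

    def has_bigtime(mountpoint="/var/lib/containers"):
        # xfs_info prints the filesystem geometry; recent mkfs.xfs releases
        # enable bigtime=1 by default, lifting the 2038 timestamp limit.
        out = subprocess.run(["xfs_info", mountpoint],
                             capture_output=True, text=True, check=True).stdout
        return "bigtime=1" in out
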
Feb 25 06:49:24 np0005629333 podman[77082]: 2026-02-25 11:49:24.862644789 +0000 UTC m=+0.033270197 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:24 np0005629333 podman[77082]: 2026-02-25 11:49:24.980921713 +0000 UTC m=+0.151547061 container init 3e7fff114d7d8f5da1049cf458b77333a12d7baf28c6142fedb66ea4a6f9a3d7 (image=quay.io/ceph/ceph:v20, name=modest_kepler, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:49:24 np0005629333 podman[77082]: 2026-02-25 11:49:24.987306874 +0000 UTC m=+0.157932212 container start 3e7fff114d7d8f5da1049cf458b77333a12d7baf28c6142fedb66ea4a6f9a3d7 (image=quay.io/ceph/ceph:v20, name=modest_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 06:49:24 np0005629333 podman[77082]: 2026-02-25 11:49:24.991645477 +0000 UTC m=+0.162270865 container attach 3e7fff114d7d8f5da1049cf458b77333a12d7baf28c6142fedb66ea4a6f9a3d7 (image=quay.io/ceph/ceph:v20, name=modest_kepler, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 06:49:25 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'cephadm'
Feb 25 06:49:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Feb 25 06:49:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2518813486' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb 25 06:49:25 np0005629333 modest_kepler[77119]: {
Feb 25 06:49:25 np0005629333 modest_kepler[77119]:    "epoch": 5,
Feb 25 06:49:25 np0005629333 modest_kepler[77119]:    "available": true,
Feb 25 06:49:25 np0005629333 modest_kepler[77119]:    "active_name": "compute-0.jzfame",
Feb 25 06:49:25 np0005629333 modest_kepler[77119]:    "num_standby": 0
Feb 25 06:49:25 np0005629333 modest_kepler[77119]: }
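
The JSON above is the output of `ceph mgr stat`, run by cephadm bootstrap inside the short-lived container modest_kepler (podman[77082]); each of the randomly named containers in this section wraps exactly one such CLI call against quay.io/ceph/ceph:v20. Below is a sketch of polling that command until a mgr reports available; the loop and timeout are assumptions, not cephadm's actual bootstrap logic.

    import json
    import subprocess
    import time

    def wait_for_active_mgr(timeout=60):
        deadline = time.time() + timeout
        while time.time() < deadline:
            out = subprocess.run(["ceph", "mgr", "stat"],
                                 capture_output=True, text=True,
                                 check=True).stdout
            stat = json.loads(out)  # e.g. {"available": true, "active_name": ...}
            if stat.get("available"):
                return stat["active_name"]
            time.sleep(2)
        raise TimeoutError("no active mgr within %ds" % timeout)
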
Feb 25 06:49:25 np0005629333 systemd[1]: libpod-3e7fff114d7d8f5da1049cf458b77333a12d7baf28c6142fedb66ea4a6f9a3d7.scope: Deactivated successfully.
Feb 25 06:49:25 np0005629333 podman[77082]: 2026-02-25 11:49:25.488417874 +0000 UTC m=+0.659043182 container died 3e7fff114d7d8f5da1049cf458b77333a12d7baf28c6142fedb66ea4a6f9a3d7 (image=quay.io/ceph/ceph:v20, name=modest_kepler, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:49:25 np0005629333 systemd[1]: var-lib-containers-storage-overlay-fc4fb142d38d50c236407081008fdc3dd1cec48f392e6e57e76c928aedaff051-merged.mount: Deactivated successfully.
Feb 25 06:49:25 np0005629333 podman[77082]: 2026-02-25 11:49:25.526482566 +0000 UTC m=+0.697107874 container remove 3e7fff114d7d8f5da1049cf458b77333a12d7baf28c6142fedb66ea4a6f9a3d7 (image=quay.io/ceph/ceph:v20, name=modest_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Feb 25 06:49:25 np0005629333 systemd[1]: libpod-conmon-3e7fff114d7d8f5da1049cf458b77333a12d7baf28c6142fedb66ea4a6f9a3d7.scope: Deactivated successfully.
Feb 25 06:49:25 np0005629333 podman[77169]: 2026-02-25 11:49:25.600741198 +0000 UTC m=+0.052769781 container create 47737c4adfc69ad4bb3ee4bcf0f3398510e7a19b7b3c053e4ee227dcbbc6914a (image=quay.io/ceph/ceph:v20, name=silly_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:49:25 np0005629333 systemd[1]: Started libpod-conmon-47737c4adfc69ad4bb3ee4bcf0f3398510e7a19b7b3c053e4ee227dcbbc6914a.scope.
Feb 25 06:49:25 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'crash'
Feb 25 06:49:25 np0005629333 podman[77169]: 2026-02-25 11:49:25.578946128 +0000 UTC m=+0.030974761 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:25 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c213ea50b730c2de788d44dd6339fefe4cee4cd2a7dc081ad98859f9b0b04729/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c213ea50b730c2de788d44dd6339fefe4cee4cd2a7dc081ad98859f9b0b04729/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c213ea50b730c2de788d44dd6339fefe4cee4cd2a7dc081ad98859f9b0b04729/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:25 np0005629333 podman[77169]: 2026-02-25 11:49:25.708036619 +0000 UTC m=+0.160065262 container init 47737c4adfc69ad4bb3ee4bcf0f3398510e7a19b7b3c053e4ee227dcbbc6914a (image=quay.io/ceph/ceph:v20, name=silly_bardeen, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:49:25 np0005629333 podman[77169]: 2026-02-25 11:49:25.714911295 +0000 UTC m=+0.166939868 container start 47737c4adfc69ad4bb3ee4bcf0f3398510e7a19b7b3c053e4ee227dcbbc6914a (image=quay.io/ceph/ceph:v20, name=silly_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 06:49:25 np0005629333 podman[77169]: 2026-02-25 11:49:25.719101224 +0000 UTC m=+0.171129797 container attach 47737c4adfc69ad4bb3ee4bcf0f3398510e7a19b7b3c053e4ee227dcbbc6914a (image=quay.io/ceph/ceph:v20, name=silly_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030)
Feb 25 06:49:25 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/1852775491' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Feb 25 06:49:25 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'dashboard'
Feb 25 06:49:26 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'devicehealth'
Feb 25 06:49:26 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'diskprediction_local'
Feb 25 06:49:26 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mgr-compute-0-jzfame[76637]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb 25 06:49:26 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mgr-compute-0-jzfame[76637]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb 25 06:49:26 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mgr-compute-0-jzfame[76637]:  from numpy import show_config as show_numpy_config
Feb 25 06:49:26 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'influx'
Feb 25 06:49:26 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'insights'
Feb 25 06:49:26 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'iostat'
Feb 25 06:49:26 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'k8sevents'
Feb 25 06:49:27 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'localpool'
Feb 25 06:49:27 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'mds_autoscaler'
Feb 25 06:49:27 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'mirroring'
Feb 25 06:49:27 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'nfs'
Feb 25 06:49:27 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'orchestrator'
Feb 25 06:49:27 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'osd_perf_query'
Feb 25 06:49:27 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'osd_support'
Feb 25 06:49:28 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'pg_autoscaler'
Feb 25 06:49:28 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'progress'
Feb 25 06:49:28 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'prometheus'
Feb 25 06:49:28 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'rbd_support'
Feb 25 06:49:28 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'rgw'
Feb 25 06:49:28 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'rook'
Feb 25 06:49:29 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'selftest'
Feb 25 06:49:29 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'smb'
Feb 25 06:49:29 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'snap_schedule'
Feb 25 06:49:29 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'stats'
Feb 25 06:49:29 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'status'
Feb 25 06:49:29 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'telegraf'
Feb 25 06:49:29 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'telemetry'
Feb 25 06:49:30 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'test_orchestrator'
Feb 25 06:49:30 np0005629333 ceph-mgr[76641]: mgr[py] Loading python module 'volumes'
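
The respawned mgr spends about six seconds (06:49:24 to 06:49:30) loading every bundled python module before the mon re-activates it; cephadm itself is just one entry in this list. Once loading finishes, the resulting module sets can be inspected with `ceph mgr module ls`, as in this minimal sketch:

    import json
    import subprocess

    out = subprocess.run(["ceph", "mgr", "module", "ls", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    mods = json.loads(out)
    # The JSON groups modules by state; "enabled_modules" should now
    # include "cephadm" after the enable command audited at 06:49:24.
    print(mods.get("enabled_modules"))
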
Feb 25 06:49:30 np0005629333 ceph-mon[76335]: log_channel(cluster) log [INF] : Active manager daemon compute-0.jzfame restarted
Feb 25 06:49:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Feb 25 06:49:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 25 06:49:30 np0005629333 ceph-mgr[76641]: ms_deliver_dispatch: unhandled message 0x55ea88356000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Feb 25 06:49:30 np0005629333 ceph-mon[76335]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.jzfame
Feb 25 06:49:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Feb 25 06:49:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Feb 25 06:49:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Feb 25 06:49:30 np0005629333 ceph-mgr[76641]: mgr handle_mgr_map Activating!
Feb 25 06:49:30 np0005629333 ceph-mgr[76641]: mgr handle_mgr_map I am now activating
Feb 25 06:49:30 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Feb 25 06:49:30 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.jzfame(active, starting, since 0.201375s)
Feb 25 06:49:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb 25 06:49:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Feb 25 06:49:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.jzfame", "id": "compute-0.jzfame"} v 0)
Feb 25 06:49:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "mgr metadata", "who": "compute-0.jzfame", "id": "compute-0.jzfame"} : dispatch
Feb 25 06:49:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Feb 25 06:49:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "mds metadata"} : dispatch
Feb 25 06:49:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).mds e1 all = 1
Feb 25 06:49:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb 25 06:49:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata"} : dispatch
Feb 25 06:49:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Feb 25 06:49:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "mon metadata"} : dispatch
Feb 25 06:49:30 np0005629333 ceph-mgr[76641]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 25 06:49:30 np0005629333 ceph-mgr[76641]: mgr load Constructed class from module: balancer
Feb 25 06:49:30 np0005629333 ceph-mgr[76641]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 25 06:49:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Starting
Feb 25 06:49:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_11:49:30
Feb 25 06:49:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 06:49:30 np0005629333 ceph-mon[76335]: log_channel(cluster) log [INF] : Manager daemon compute-0.jzfame is now available
Feb 25 06:49:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 06:49:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] No pools available
Feb 25 06:49:30 np0005629333 ceph-mon[76335]: Active manager daemon compute-0.jzfame restarted
Feb 25 06:49:30 np0005629333 ceph-mon[76335]: Activating manager daemon compute-0.jzfame
Feb 25 06:49:30 np0005629333 ceph-mon[76335]: Manager daemon compute-0.jzfame is now available
Feb 25 06:49:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Feb 25 06:49:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Feb 25 06:49:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Feb 25 06:49:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Feb 25 06:49:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Feb 25 06:49:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: mgr load Constructed class from module: cephadm
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: mgr load Constructed class from module: crash
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: mgr load Constructed class from module: devicehealth
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: mgr load Constructed class from module: iostat
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: [devicehealth INFO root] Starting
Feb 25 06:49:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 25 06:49:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: mgr load Constructed class from module: nfs
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: mgr load Constructed class from module: orchestrator
Feb 25 06:49:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 25 06:49:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: mgr load Constructed class from module: pg_autoscaler
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: mgr load Constructed class from module: progress
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: [progress INFO root] Loading...
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: [progress INFO root] No stored events to load
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: [progress INFO root] Loaded [] historic events
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: [progress INFO root] Loaded OSDMap, ready.
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] recovery thread starting
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] starting setup
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: mgr load Constructed class from module: rbd_support
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: mgr load Constructed class from module: status
Feb 25 06:49:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jzfame/mirror_snapshot_schedule"} v 0)
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 25 06:49:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jzfame/mirror_snapshot_schedule"} : dispatch
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: mgr load Constructed class from module: telemetry
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] PerfHandler: starting
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TaskHandler: starting
Feb 25 06:49:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jzfame/trash_purge_schedule"} v 0)
Feb 25 06:49:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jzfame/trash_purge_schedule"} : dispatch
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] setup complete
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: mgr load Constructed class from module: volumes
Feb 25 06:49:31 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.jzfame(active, since 1.21264s)
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Feb 25 06:49:31 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Feb 25 06:49:31 np0005629333 silly_bardeen[77185]: {
Feb 25 06:49:31 np0005629333 silly_bardeen[77185]:    "mgrmap_epoch": 7,
Feb 25 06:49:31 np0005629333 silly_bardeen[77185]:    "initialized": true
Feb 25 06:49:31 np0005629333 silly_bardeen[77185]: }
Feb 25 06:49:31 np0005629333 systemd[1]: libpod-47737c4adfc69ad4bb3ee4bcf0f3398510e7a19b7b3c053e4ee227dcbbc6914a.scope: Deactivated successfully.
Feb 25 06:49:31 np0005629333 podman[77169]: 2026-02-25 11:49:31.815628497 +0000 UTC m=+6.267657100 container died 47737c4adfc69ad4bb3ee4bcf0f3398510e7a19b7b3c053e4ee227dcbbc6914a (image=quay.io/ceph/ceph:v20, name=silly_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Feb 25 06:49:31 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c213ea50b730c2de788d44dd6339fefe4cee4cd2a7dc081ad98859f9b0b04729-merged.mount: Deactivated successfully.
Feb 25 06:49:31 np0005629333 podman[77169]: 2026-02-25 11:49:31.858652121 +0000 UTC m=+6.310680704 container remove 47737c4adfc69ad4bb3ee4bcf0f3398510e7a19b7b3c053e4ee227dcbbc6914a (image=quay.io/ceph/ceph:v20, name=silly_bardeen, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 06:49:31 np0005629333 systemd[1]: libpod-conmon-47737c4adfc69ad4bb3ee4bcf0f3398510e7a19b7b3c053e4ee227dcbbc6914a.scope: Deactivated successfully.
Feb 25 06:49:31 np0005629333 podman[77333]: 2026-02-25 11:49:31.937370749 +0000 UTC m=+0.055559111 container create f6fa3ca3d12a0e0516d0459f8ba31ab9c58e79420cecc3728eac97dc31a63d0d (image=quay.io/ceph/ceph:v20, name=festive_dijkstra, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 06:49:31 np0005629333 systemd[1]: Started libpod-conmon-f6fa3ca3d12a0e0516d0459f8ba31ab9c58e79420cecc3728eac97dc31a63d0d.scope.
Feb 25 06:49:31 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82767c468db4071c57eea2c2dec215eb08fc0834f7b53c791cc70681331ac754/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82767c468db4071c57eea2c2dec215eb08fc0834f7b53c791cc70681331ac754/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82767c468db4071c57eea2c2dec215eb08fc0834f7b53c791cc70681331ac754/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:32 np0005629333 podman[77333]: 2026-02-25 11:49:31.913904982 +0000 UTC m=+0.032093404 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:32 np0005629333 podman[77333]: 2026-02-25 11:49:32.026669138 +0000 UTC m=+0.144857530 container init f6fa3ca3d12a0e0516d0459f8ba31ab9c58e79420cecc3728eac97dc31a63d0d (image=quay.io/ceph/ceph:v20, name=festive_dijkstra, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 06:49:32 np0005629333 podman[77333]: 2026-02-25 11:49:32.035218361 +0000 UTC m=+0.153406693 container start f6fa3ca3d12a0e0516d0459f8ba31ab9c58e79420cecc3728eac97dc31a63d0d (image=quay.io/ceph/ceph:v20, name=festive_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:49:32 np0005629333 podman[77333]: 2026-02-25 11:49:32.05274607 +0000 UTC m=+0.170934432 container attach f6fa3ca3d12a0e0516d0459f8ba31ab9c58e79420cecc3728eac97dc31a63d0d (image=quay.io/ceph/ceph:v20, name=festive_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle)
Feb 25 06:49:32 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:32 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:32 np0005629333 ceph-mon[76335]: Found migration_current of "None". Setting to last migration.
Feb 25 06:49:32 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:32 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:32 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jzfame/mirror_snapshot_schedule"} : dispatch
Feb 25 06:49:32 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.jzfame/trash_purge_schedule"} : dispatch
Feb 25 06:49:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Feb 25 06:49:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3869725244' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Feb 25 06:49:32 np0005629333 ceph-mgr[76641]: [cephadm INFO cherrypy.error] [25/Feb/2026:11:49:32] ENGINE Bus STARTING
Feb 25 06:49:32 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : [25/Feb/2026:11:49:32] ENGINE Bus STARTING
Feb 25 06:49:32 np0005629333 ceph-mgr[76641]: [cephadm INFO cherrypy.error] [25/Feb/2026:11:49:32] ENGINE Serving on http://192.168.122.100:8765
Feb 25 06:49:32 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : [25/Feb/2026:11:49:32] ENGINE Serving on http://192.168.122.100:8765
Feb 25 06:49:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019900580 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:49:32 np0005629333 ceph-mgr[76641]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 25 06:49:32 np0005629333 ceph-mgr[76641]: [cephadm INFO cherrypy.error] [25/Feb/2026:11:49:32] ENGINE Serving on https://192.168.122.100:7150
Feb 25 06:49:32 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : [25/Feb/2026:11:49:32] ENGINE Serving on https://192.168.122.100:7150
Feb 25 06:49:32 np0005629333 ceph-mgr[76641]: [cephadm INFO cherrypy.error] [25/Feb/2026:11:49:32] ENGINE Bus STARTED
Feb 25 06:49:32 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : [25/Feb/2026:11:49:32] ENGINE Bus STARTED
Feb 25 06:49:32 np0005629333 ceph-mgr[76641]: [cephadm INFO cherrypy.error] [25/Feb/2026:11:49:32] ENGINE Client ('192.168.122.100', 39254) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 25 06:49:32 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : [25/Feb/2026:11:49:32] ENGINE Client ('192.168.122.100', 39254) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
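
The "Client ... lost" entry is benign here: cheroot logs it whenever a peer opens a TCP connection to the HTTPS endpoint and then closes it without completing the TLS handshake, which is exactly what a simple port-liveness probe does during bootstrap. A hypothetical reproduction, with host and port mirroring the log lines above:

    import socket

    # Connect to the cephadm HTTPS endpoint and close without speaking TLS;
    # server-side this produces the "(6, 'TLS/SSL connection has been
    # closed (EOF)')" entry seen above.
    with socket.create_connection(("192.168.122.100", 7150), timeout=5):
        pass
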
Feb 25 06:49:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 25 06:49:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 25 06:49:33 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/3869725244' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Feb 25 06:49:33 np0005629333 ceph-mon[76335]: [25/Feb/2026:11:49:32] ENGINE Bus STARTING
Feb 25 06:49:33 np0005629333 ceph-mon[76335]: [25/Feb/2026:11:49:32] ENGINE Serving on http://192.168.122.100:8765
Feb 25 06:49:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3869725244' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Feb 25 06:49:33 np0005629333 festive_dijkstra[77349]: module 'orchestrator' is already enabled (always-on)
Feb 25 06:49:33 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.jzfame(active, since 2s)
Feb 25 06:49:33 np0005629333 ceph-mgr[76641]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 25 06:49:33 np0005629333 systemd[1]: libpod-f6fa3ca3d12a0e0516d0459f8ba31ab9c58e79420cecc3728eac97dc31a63d0d.scope: Deactivated successfully.
Feb 25 06:49:33 np0005629333 podman[77399]: 2026-02-25 11:49:33.583497978 +0000 UTC m=+0.036438746 container died f6fa3ca3d12a0e0516d0459f8ba31ab9c58e79420cecc3728eac97dc31a63d0d (image=quay.io/ceph/ceph:v20, name=festive_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:49:33 np0005629333 systemd[1]: var-lib-containers-storage-overlay-82767c468db4071c57eea2c2dec215eb08fc0834f7b53c791cc70681331ac754-merged.mount: Deactivated successfully.
Feb 25 06:49:33 np0005629333 podman[77399]: 2026-02-25 11:49:33.624475344 +0000 UTC m=+0.077416042 container remove f6fa3ca3d12a0e0516d0459f8ba31ab9c58e79420cecc3728eac97dc31a63d0d (image=quay.io/ceph/ceph:v20, name=festive_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 06:49:33 np0005629333 systemd[1]: libpod-conmon-f6fa3ca3d12a0e0516d0459f8ba31ab9c58e79420cecc3728eac97dc31a63d0d.scope: Deactivated successfully.
Feb 25 06:49:33 np0005629333 podman[77415]: 2026-02-25 11:49:33.689442431 +0000 UTC m=+0.042778367 container create 830b0bd13ee67f687072e0f1c2ae791a4e0bb4d4f6dbe602415628312300c825 (image=quay.io/ceph/ceph:v20, name=nostalgic_volhard, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:49:33 np0005629333 systemd[1]: Started libpod-conmon-830b0bd13ee67f687072e0f1c2ae791a4e0bb4d4f6dbe602415628312300c825.scope.
Feb 25 06:49:33 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85cebda5f7c767636334733482e540b9866cef791a9146e2054797755fd894e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85cebda5f7c767636334733482e540b9866cef791a9146e2054797755fd894e9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85cebda5f7c767636334733482e540b9866cef791a9146e2054797755fd894e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:33 np0005629333 podman[77415]: 2026-02-25 11:49:33.666960662 +0000 UTC m=+0.020296608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:33 np0005629333 podman[77415]: 2026-02-25 11:49:33.779725929 +0000 UTC m=+0.133061875 container init 830b0bd13ee67f687072e0f1c2ae791a4e0bb4d4f6dbe602415628312300c825 (image=quay.io/ceph/ceph:v20, name=nostalgic_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 25 06:49:33 np0005629333 podman[77415]: 2026-02-25 11:49:33.786547163 +0000 UTC m=+0.139883109 container start 830b0bd13ee67f687072e0f1c2ae791a4e0bb4d4f6dbe602415628312300c825 (image=quay.io/ceph/ceph:v20, name=nostalgic_volhard, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:49:33 np0005629333 podman[77415]: 2026-02-25 11:49:33.790728531 +0000 UTC m=+0.144064497 container attach 830b0bd13ee67f687072e0f1c2ae791a4e0bb4d4f6dbe602415628312300c825 (image=quay.io/ceph/ceph:v20, name=nostalgic_volhard, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:49:34 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 06:49:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Feb 25 06:49:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 25 06:49:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 25 06:49:34 np0005629333 systemd[1]: libpod-830b0bd13ee67f687072e0f1c2ae791a4e0bb4d4f6dbe602415628312300c825.scope: Deactivated successfully.
Feb 25 06:49:34 np0005629333 podman[77415]: 2026-02-25 11:49:34.242243871 +0000 UTC m=+0.595579817 container died 830b0bd13ee67f687072e0f1c2ae791a4e0bb4d4f6dbe602415628312300c825 (image=quay.io/ceph/ceph:v20, name=nostalgic_volhard, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:49:34 np0005629333 systemd[1]: var-lib-containers-storage-overlay-85cebda5f7c767636334733482e540b9866cef791a9146e2054797755fd894e9-merged.mount: Deactivated successfully.
Feb 25 06:49:34 np0005629333 podman[77415]: 2026-02-25 11:49:34.274478898 +0000 UTC m=+0.627814844 container remove 830b0bd13ee67f687072e0f1c2ae791a4e0bb4d4f6dbe602415628312300c825 (image=quay.io/ceph/ceph:v20, name=nostalgic_volhard, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 06:49:34 np0005629333 systemd[1]: libpod-conmon-830b0bd13ee67f687072e0f1c2ae791a4e0bb4d4f6dbe602415628312300c825.scope: Deactivated successfully.
Feb 25 06:49:34 np0005629333 podman[77469]: 2026-02-25 11:49:34.350234822 +0000 UTC m=+0.056443486 container create 2b7e2761a69d4ddc09b553bc77b56b5e786f656f097eace30235ebc27a77a726 (image=quay.io/ceph/ceph:v20, name=dreamy_dhawan, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 06:49:34 np0005629333 systemd[1]: Started libpod-conmon-2b7e2761a69d4ddc09b553bc77b56b5e786f656f097eace30235ebc27a77a726.scope.
Feb 25 06:49:34 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:34 np0005629333 podman[77469]: 2026-02-25 11:49:34.324284754 +0000 UTC m=+0.030493438 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d53ceb7b3a14063e40d65d5050793c1682cdd485cfe484af6cde67e82b5ca67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d53ceb7b3a14063e40d65d5050793c1682cdd485cfe484af6cde67e82b5ca67/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d53ceb7b3a14063e40d65d5050793c1682cdd485cfe484af6cde67e82b5ca67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:34 np0005629333 podman[77469]: 2026-02-25 11:49:34.453753356 +0000 UTC m=+0.159961990 container init 2b7e2761a69d4ddc09b553bc77b56b5e786f656f097eace30235ebc27a77a726 (image=quay.io/ceph/ceph:v20, name=dreamy_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:49:34 np0005629333 podman[77469]: 2026-02-25 11:49:34.458101159 +0000 UTC m=+0.164309803 container start 2b7e2761a69d4ddc09b553bc77b56b5e786f656f097eace30235ebc27a77a726 (image=quay.io/ceph/ceph:v20, name=dreamy_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 06:49:34 np0005629333 podman[77469]: 2026-02-25 11:49:34.462080783 +0000 UTC m=+0.168289437 container attach 2b7e2761a69d4ddc09b553bc77b56b5e786f656f097eace30235ebc27a77a726 (image=quay.io/ceph/ceph:v20, name=dreamy_dhawan, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 06:49:34 np0005629333 ceph-mon[76335]: [25/Feb/2026:11:49:32] ENGINE Serving on https://192.168.122.100:7150
Feb 25 06:49:34 np0005629333 ceph-mon[76335]: [25/Feb/2026:11:49:32] ENGINE Bus STARTED
Feb 25 06:49:34 np0005629333 ceph-mon[76335]: [25/Feb/2026:11:49:32] ENGINE Client ('192.168.122.100', 39254) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
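Note: the CherryPy handshake EOF above usually means a client opened a TCP connection to the mgr's TLS endpoint and closed it without completing the handshake; a port-liveness probe during bootstrap is a plausible source here (an assumption, the log does not identify the client). The same entry can be reproduced with a bare TCP open and close against the port:

    # hypothetical probe: opens and immediately closes a plain TCP connection
    timeout 1 bash -c ': < /dev/tcp/192.168.122.100/7150'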
Feb 25 06:49:34 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/3869725244' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
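Note: the "finished" entry above records the orchestrator mgr module being switched on. The equivalent CLI call, plus a verification step (both standard ceph commands):

    ceph mgr module enable orchestrator
    ceph mgr module ls | head   # 'orchestrator' should now appear among the enabled modules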
Feb 25 06:49:34 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:34 np0005629333 ceph-mgr[76641]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 25 06:49:34 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 06:49:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Feb 25 06:49:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:34 np0005629333 ceph-mgr[76641]: [cephadm INFO root] Set ssh ssh_user
Feb 25 06:49:34 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Feb 25 06:49:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Feb 25 06:49:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:34 np0005629333 ceph-mgr[76641]: [cephadm INFO root] Set ssh ssh_config
Feb 25 06:49:34 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Feb 25 06:49:34 np0005629333 ceph-mgr[76641]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Feb 25 06:49:34 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Feb 25 06:49:34 np0005629333 dreamy_dhawan[77485]: ssh user set to ceph-admin. sudo will be used
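Note: the set-user dispatch (client.admin) and the resulting config-key write (mgr/cephadm/ssh_user) are two halves of a single CLI call; the equivalent command, assuming an admin keyring on this host:

    ceph cephadm set-user ceph-admin   # non-root user, hence "sudo will be used" above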
Feb 25 06:49:34 np0005629333 systemd[1]: libpod-2b7e2761a69d4ddc09b553bc77b56b5e786f656f097eace30235ebc27a77a726.scope: Deactivated successfully.
Feb 25 06:49:34 np0005629333 podman[77469]: 2026-02-25 11:49:34.853288957 +0000 UTC m=+0.559497591 container died 2b7e2761a69d4ddc09b553bc77b56b5e786f656f097eace30235ebc27a77a726 (image=quay.io/ceph/ceph:v20, name=dreamy_dhawan, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:49:34 np0005629333 systemd[1]: var-lib-containers-storage-overlay-9d53ceb7b3a14063e40d65d5050793c1682cdd485cfe484af6cde67e82b5ca67-merged.mount: Deactivated successfully.
Feb 25 06:49:34 np0005629333 podman[77469]: 2026-02-25 11:49:34.890210767 +0000 UTC m=+0.596419401 container remove 2b7e2761a69d4ddc09b553bc77b56b5e786f656f097eace30235ebc27a77a726 (image=quay.io/ceph/ceph:v20, name=dreamy_dhawan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 06:49:34 np0005629333 systemd[1]: libpod-conmon-2b7e2761a69d4ddc09b553bc77b56b5e786f656f097eace30235ebc27a77a726.scope: Deactivated successfully.
Feb 25 06:49:34 np0005629333 podman[77520]: 2026-02-25 11:49:34.962514813 +0000 UTC m=+0.054987654 container create 5eaca1da6c6ad69d182e27eab4427dec47789914f762b4ccc0b2be23546bba29 (image=quay.io/ceph/ceph:v20, name=epic_feistel, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 06:49:35 np0005629333 systemd[1]: Started libpod-conmon-5eaca1da6c6ad69d182e27eab4427dec47789914f762b4ccc0b2be23546bba29.scope.
Feb 25 06:49:35 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baab6389395f182d97dbe5710dc9fb77ab4e30439feca85a5e90f33209d234ca/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baab6389395f182d97dbe5710dc9fb77ab4e30439feca85a5e90f33209d234ca/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baab6389395f182d97dbe5710dc9fb77ab4e30439feca85a5e90f33209d234ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baab6389395f182d97dbe5710dc9fb77ab4e30439feca85a5e90f33209d234ca/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baab6389395f182d97dbe5710dc9fb77ab4e30439feca85a5e90f33209d234ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:35 np0005629333 podman[77520]: 2026-02-25 11:49:34.937181523 +0000 UTC m=+0.029654414 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:35 np0005629333 podman[77520]: 2026-02-25 11:49:35.06580448 +0000 UTC m=+0.158277291 container init 5eaca1da6c6ad69d182e27eab4427dec47789914f762b4ccc0b2be23546bba29 (image=quay.io/ceph/ceph:v20, name=epic_feistel, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:49:35 np0005629333 podman[77520]: 2026-02-25 11:49:35.071899224 +0000 UTC m=+0.164372065 container start 5eaca1da6c6ad69d182e27eab4427dec47789914f762b4ccc0b2be23546bba29 (image=quay.io/ceph/ceph:v20, name=epic_feistel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 06:49:35 np0005629333 podman[77520]: 2026-02-25 11:49:35.075547667 +0000 UTC m=+0.168020478 container attach 5eaca1da6c6ad69d182e27eab4427dec47789914f762b4ccc0b2be23546bba29 (image=quay.io/ceph/ceph:v20, name=epic_feistel, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 06:49:35 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 06:49:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Feb 25 06:49:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:35 np0005629333 ceph-mgr[76641]: [cephadm INFO root] Set ssh ssh_identity_key
Feb 25 06:49:35 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Feb 25 06:49:35 np0005629333 ceph-mgr[76641]: [cephadm INFO root] Set ssh private key
Feb 25 06:49:35 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Set ssh private key
Feb 25 06:49:35 np0005629333 systemd[1]: libpod-5eaca1da6c6ad69d182e27eab4427dec47789914f762b4ccc0b2be23546bba29.scope: Deactivated successfully.
Feb 25 06:49:35 np0005629333 podman[77520]: 2026-02-25 11:49:35.483713494 +0000 UTC m=+0.576186295 container died 5eaca1da6c6ad69d182e27eab4427dec47789914f762b4ccc0b2be23546bba29 (image=quay.io/ceph/ceph:v20, name=epic_feistel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:49:35 np0005629333 systemd[1]: var-lib-containers-storage-overlay-baab6389395f182d97dbe5710dc9fb77ab4e30439feca85a5e90f33209d234ca-merged.mount: Deactivated successfully.
Feb 25 06:49:35 np0005629333 podman[77520]: 2026-02-25 11:49:35.517558367 +0000 UTC m=+0.610031178 container remove 5eaca1da6c6ad69d182e27eab4427dec47789914f762b4ccc0b2be23546bba29 (image=quay.io/ceph/ceph:v20, name=epic_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 25 06:49:35 np0005629333 ceph-mgr[76641]: [devicehealth WARNING root] not enough osds to create mgr pool
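Note: "not enough osds to create mgr pool" is the devicehealth module deferring creation of its pool until OSDs exist; it is expected at this stage of bootstrap, as are the "waiting for OSDs" send_report lines. To confirm the cluster state it is reacting to:

    ceph osd stat   # expect "0 osds: 0 up, 0 in" at this point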
Feb 25 06:49:35 np0005629333 systemd[1]: libpod-conmon-5eaca1da6c6ad69d182e27eab4427dec47789914f762b4ccc0b2be23546bba29.scope: Deactivated successfully.
Feb 25 06:49:35 np0005629333 podman[77576]: 2026-02-25 11:49:35.59045354 +0000 UTC m=+0.055265593 container create e03b951c1839487377754c1e246fe8bce1e85e54172cd1ecdf60db3f47e392fb (image=quay.io/ceph/ceph:v20, name=friendly_dhawan, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 06:49:35 np0005629333 systemd[1]: Started libpod-conmon-e03b951c1839487377754c1e246fe8bce1e85e54172cd1ecdf60db3f47e392fb.scope.
Feb 25 06:49:35 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:35 np0005629333 podman[77576]: 2026-02-25 11:49:35.559575202 +0000 UTC m=+0.024387265 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f0912ab92b7864187687bbeecd9ece449ed7ce0c5a4070eb2bf30b2b975a31e/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f0912ab92b7864187687bbeecd9ece449ed7ce0c5a4070eb2bf30b2b975a31e/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f0912ab92b7864187687bbeecd9ece449ed7ce0c5a4070eb2bf30b2b975a31e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f0912ab92b7864187687bbeecd9ece449ed7ce0c5a4070eb2bf30b2b975a31e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f0912ab92b7864187687bbeecd9ece449ed7ce0c5a4070eb2bf30b2b975a31e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:35 np0005629333 podman[77576]: 2026-02-25 11:49:35.679759329 +0000 UTC m=+0.144571372 container init e03b951c1839487377754c1e246fe8bce1e85e54172cd1ecdf60db3f47e392fb (image=quay.io/ceph/ceph:v20, name=friendly_dhawan, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 06:49:35 np0005629333 podman[77576]: 2026-02-25 11:49:35.691895394 +0000 UTC m=+0.156707417 container start e03b951c1839487377754c1e246fe8bce1e85e54172cd1ecdf60db3f47e392fb (image=quay.io/ceph/ceph:v20, name=friendly_dhawan, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:49:35 np0005629333 podman[77576]: 2026-02-25 11:49:35.695144917 +0000 UTC m=+0.159956950 container attach e03b951c1839487377754c1e246fe8bce1e85e54172cd1ecdf60db3f47e392fb (image=quay.io/ceph/ceph:v20, name=friendly_dhawan, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 06:49:35 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:35 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:35 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:36 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 06:49:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Feb 25 06:49:36 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:36 np0005629333 ceph-mgr[76641]: [cephadm INFO root] Set ssh ssh_identity_pub
Feb 25 06:49:36 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
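Note: set-priv-key (at 11:49:35) and set-pub-key (here) store the two halves of the orchestrator's SSH identity under mgr/cephadm/ssh_identity_key and ssh_identity_pub; the /tmp/cephadm-ssh-key and /tmp/cephadm-ssh-key.pub mounts in the helper containers above are these files in transit. The matching CLI pair, with the key file paths as placeholders:

    ceph cephadm set-priv-key -i /path/to/cephadm-ssh-key       # placeholder path
    ceph cephadm set-pub-key  -i /path/to/cephadm-ssh-key.pub   # placeholder path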
Feb 25 06:49:36 np0005629333 systemd[1]: libpod-e03b951c1839487377754c1e246fe8bce1e85e54172cd1ecdf60db3f47e392fb.scope: Deactivated successfully.
Feb 25 06:49:36 np0005629333 podman[77576]: 2026-02-25 11:49:36.137029183 +0000 UTC m=+0.601841236 container died e03b951c1839487377754c1e246fe8bce1e85e54172cd1ecdf60db3f47e392fb (image=quay.io/ceph/ceph:v20, name=friendly_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Feb 25 06:49:36 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7f0912ab92b7864187687bbeecd9ece449ed7ce0c5a4070eb2bf30b2b975a31e-merged.mount: Deactivated successfully.
Feb 25 06:49:36 np0005629333 podman[77576]: 2026-02-25 11:49:36.169397393 +0000 UTC m=+0.634209406 container remove e03b951c1839487377754c1e246fe8bce1e85e54172cd1ecdf60db3f47e392fb (image=quay.io/ceph/ceph:v20, name=friendly_dhawan, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 06:49:36 np0005629333 systemd[1]: libpod-conmon-e03b951c1839487377754c1e246fe8bce1e85e54172cd1ecdf60db3f47e392fb.scope: Deactivated successfully.
Feb 25 06:49:36 np0005629333 podman[77629]: 2026-02-25 11:49:36.218632043 +0000 UTC m=+0.036305693 container create 7cdb18b1387404c6516a7d33eb28f552633d6aeb95ee74845fff332e19bcf872 (image=quay.io/ceph/ceph:v20, name=vigilant_rubin, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:49:36 np0005629333 systemd[1]: Started libpod-conmon-7cdb18b1387404c6516a7d33eb28f552633d6aeb95ee74845fff332e19bcf872.scope.
Feb 25 06:49:36 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab642408e96aefa8c8b6e7f1f227fc119df179897b2b8b3d861b9286c4d337e5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab642408e96aefa8c8b6e7f1f227fc119df179897b2b8b3d861b9286c4d337e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab642408e96aefa8c8b6e7f1f227fc119df179897b2b8b3d861b9286c4d337e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:36 np0005629333 podman[77629]: 2026-02-25 11:49:36.283356154 +0000 UTC m=+0.101029824 container init 7cdb18b1387404c6516a7d33eb28f552633d6aeb95ee74845fff332e19bcf872 (image=quay.io/ceph/ceph:v20, name=vigilant_rubin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 06:49:36 np0005629333 podman[77629]: 2026-02-25 11:49:36.287383918 +0000 UTC m=+0.105057578 container start 7cdb18b1387404c6516a7d33eb28f552633d6aeb95ee74845fff332e19bcf872 (image=quay.io/ceph/ceph:v20, name=vigilant_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 06:49:36 np0005629333 podman[77629]: 2026-02-25 11:49:36.291025352 +0000 UTC m=+0.108699012 container attach 7cdb18b1387404c6516a7d33eb28f552633d6aeb95ee74845fff332e19bcf872 (image=quay.io/ceph/ceph:v20, name=vigilant_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 06:49:36 np0005629333 podman[77629]: 2026-02-25 11:49:36.19953344 +0000 UTC m=+0.017207090 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:36 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 06:49:36 np0005629333 vigilant_rubin[77645]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC44L7ryPr18LpQWJTyT1K5gPR7NUrSNL6RlsM1STMZuOCLCTS6LHcPuJp4FyJ0QXUmpkWB9fa9SLxrr7vbMFDX/JEib1gCr/cdyHwO4i+uDS7JWqz6mbfiGa3pulo3RPNyHLk/MsluG/7rbp4b2/L+4OjFj44IUfr/uNqPuadiofalY+AJek1MVCF+4Gal3FiNz8plRyxqbVB8Myaubdtvts95bqAKds2WHSzqXbJAxAX1WRC+Or7MfBkw1KKm6RZmJLItEFLeKlOYxIBUW3HwgekNAmXs5yYTlAihmI7O5bvRIKLIcQccPM0dqvPWSIlnXjGHVQe7VSCz+xzsAUo4IjvIPiZoFuTea8WvVzev7lyD9QRwk5C+4Th4jnj9i3DPKMoKIv6Bn0HLw7hVslUINms5NZJcJBdbU6nAaw+t96+Z2/q4A4VZR+qqljuaHUsDYYNYUJFHFIdXfK6s4T73h2x2pyGfMZGogL1hx3M1UngR1VoUjPUwTVsoT7C2l0c= zuul@controller
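Note: the ssh-rsa line is the stored public key being echoed back by get-pub-key. A round-trip check that the stored key matches what was uploaded (standard commands; the output file name is arbitrary):

    ceph cephadm get-pub-key > /tmp/cephadm.pub
    ssh-keygen -lf /tmp/cephadm.pub   # print the key's fingerprint for comparison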
Feb 25 06:49:36 np0005629333 systemd[1]: libpod-7cdb18b1387404c6516a7d33eb28f552633d6aeb95ee74845fff332e19bcf872.scope: Deactivated successfully.
Feb 25 06:49:36 np0005629333 podman[77629]: 2026-02-25 11:49:36.727100272 +0000 UTC m=+0.544773932 container died 7cdb18b1387404c6516a7d33eb28f552633d6aeb95ee74845fff332e19bcf872 (image=quay.io/ceph/ceph:v20, name=vigilant_rubin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 06:49:36 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ab642408e96aefa8c8b6e7f1f227fc119df179897b2b8b3d861b9286c4d337e5-merged.mount: Deactivated successfully.
Feb 25 06:49:36 np0005629333 ceph-mgr[76641]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 25 06:49:36 np0005629333 podman[77629]: 2026-02-25 11:49:36.776856287 +0000 UTC m=+0.594529947 container remove 7cdb18b1387404c6516a7d33eb28f552633d6aeb95ee74845fff332e19bcf872 (image=quay.io/ceph/ceph:v20, name=vigilant_rubin, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 06:49:36 np0005629333 systemd[1]: libpod-conmon-7cdb18b1387404c6516a7d33eb28f552633d6aeb95ee74845fff332e19bcf872.scope: Deactivated successfully.
Feb 25 06:49:36 np0005629333 podman[77682]: 2026-02-25 11:49:36.835648849 +0000 UTC m=+0.041492261 container create e6dbc1878435ebea373a3ab8a220a25253e36aea109ecdf2ee3dda2c55f1eb73 (image=quay.io/ceph/ceph:v20, name=nifty_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 06:49:36 np0005629333 ceph-mon[76335]: Set ssh ssh_user
Feb 25 06:49:36 np0005629333 ceph-mon[76335]: Set ssh ssh_config
Feb 25 06:49:36 np0005629333 ceph-mon[76335]: ssh user set to ceph-admin. sudo will be used
Feb 25 06:49:36 np0005629333 ceph-mon[76335]: Set ssh ssh_identity_key
Feb 25 06:49:36 np0005629333 ceph-mon[76335]: Set ssh private key
Feb 25 06:49:36 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:36 np0005629333 ceph-mon[76335]: Set ssh ssh_identity_pub
Feb 25 06:49:36 np0005629333 systemd[1]: Started libpod-conmon-e6dbc1878435ebea373a3ab8a220a25253e36aea109ecdf2ee3dda2c55f1eb73.scope.
Feb 25 06:49:36 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:36 np0005629333 podman[77682]: 2026-02-25 11:49:36.813795898 +0000 UTC m=+0.019639320 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a97c5de448877b14790749f6ef761a39ad24136cd0ee4ca26d68ab4d9ff9292/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a97c5de448877b14790749f6ef761a39ad24136cd0ee4ca26d68ab4d9ff9292/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a97c5de448877b14790749f6ef761a39ad24136cd0ee4ca26d68ab4d9ff9292/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:36 np0005629333 podman[77682]: 2026-02-25 11:49:36.929402925 +0000 UTC m=+0.135246307 container init e6dbc1878435ebea373a3ab8a220a25253e36aea109ecdf2ee3dda2c55f1eb73 (image=quay.io/ceph/ceph:v20, name=nifty_nash, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:49:36 np0005629333 podman[77682]: 2026-02-25 11:49:36.937158655 +0000 UTC m=+0.143002078 container start e6dbc1878435ebea373a3ab8a220a25253e36aea109ecdf2ee3dda2c55f1eb73 (image=quay.io/ceph/ceph:v20, name=nifty_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 06:49:36 np0005629333 podman[77682]: 2026-02-25 11:49:36.941982603 +0000 UTC m=+0.147826005 container attach e6dbc1878435ebea373a3ab8a220a25253e36aea109ecdf2ee3dda2c55f1eb73 (image=quay.io/ceph/ceph:v20, name=nifty_nash, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Feb 25 06:49:37 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 06:49:37 np0005629333 ceph-mgr[76641]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 25 06:49:37 np0005629333 systemd-logind[811]: New session 21 of user ceph-admin.
Feb 25 06:49:37 np0005629333 systemd[1]: Created slice User Slice of UID 42477.
Feb 25 06:49:37 np0005629333 systemd[1]: Starting User Runtime Directory /run/user/42477...
Feb 25 06:49:37 np0005629333 systemd[1]: Finished User Runtime Directory /run/user/42477.
Feb 25 06:49:37 np0005629333 systemd[1]: Starting User Manager for UID 42477...
Feb 25 06:49:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052563 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:49:37 np0005629333 systemd[77728]: Queued start job for default target Main User Target.
Feb 25 06:49:37 np0005629333 systemd-logind[811]: New session 23 of user ceph-admin.
Feb 25 06:49:37 np0005629333 systemd[77728]: Created slice User Application Slice.
Feb 25 06:49:37 np0005629333 systemd[77728]: Started Mark boot as successful after the user session has run 2 minutes.
Feb 25 06:49:37 np0005629333 systemd[77728]: Started Daily Cleanup of User's Temporary Directories.
Feb 25 06:49:37 np0005629333 systemd[77728]: Reached target Paths.
Feb 25 06:49:37 np0005629333 systemd[77728]: Reached target Timers.
Feb 25 06:49:37 np0005629333 systemd[77728]: Starting D-Bus User Message Bus Socket...
Feb 25 06:49:37 np0005629333 systemd[77728]: Starting Create User's Volatile Files and Directories...
Feb 25 06:49:37 np0005629333 systemd[77728]: Finished Create User's Volatile Files and Directories.
Feb 25 06:49:37 np0005629333 systemd[77728]: Listening on D-Bus User Message Bus Socket.
Feb 25 06:49:37 np0005629333 systemd[77728]: Reached target Sockets.
Feb 25 06:49:37 np0005629333 systemd[77728]: Reached target Basic System.
Feb 25 06:49:37 np0005629333 systemd[77728]: Reached target Main User Target.
Feb 25 06:49:37 np0005629333 systemd[77728]: Startup finished in 147ms.
Feb 25 06:49:37 np0005629333 systemd[1]: Started User Manager for UID 42477.
Feb 25 06:49:37 np0005629333 systemd[1]: Started Session 21 of User ceph-admin.
Feb 25 06:49:37 np0005629333 systemd[1]: Started Session 23 of User ceph-admin.
Feb 25 06:49:38 np0005629333 systemd-logind[811]: New session 24 of user ceph-admin.
Feb 25 06:49:38 np0005629333 systemd[1]: Started Session 24 of User ceph-admin.
Feb 25 06:49:38 np0005629333 systemd-logind[811]: New session 25 of user ceph-admin.
Feb 25 06:49:38 np0005629333 systemd[1]: Started Session 25 of User ceph-admin.
Feb 25 06:49:38 np0005629333 ceph-mgr[76641]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Feb 25 06:49:38 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Feb 25 06:49:38 np0005629333 ceph-mgr[76641]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 25 06:49:38 np0005629333 ceph-mon[76335]: Deploying cephadm binary to compute-0
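Note: "Deploying cephadm binary to compute-0" means the mgr copies its bundled cephadm over the new SSH channel so later host operations do not depend on a package-installed copy. If the layout convention of recent releases applies here (an assumption), the copy lands under the cluster fsid:

    ls -l /var/lib/ceph/*/cephadm.*   # deployed binary, suffixed with a content checksum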
Feb 25 06:49:38 np0005629333 systemd-logind[811]: New session 26 of user ceph-admin.
Feb 25 06:49:38 np0005629333 systemd[1]: Started Session 26 of User ceph-admin.
Feb 25 06:49:39 np0005629333 systemd-logind[811]: New session 27 of user ceph-admin.
Feb 25 06:49:39 np0005629333 systemd[1]: Started Session 27 of User ceph-admin.
Feb 25 06:49:39 np0005629333 ceph-mgr[76641]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 25 06:49:39 np0005629333 systemd-logind[811]: New session 28 of user ceph-admin.
Feb 25 06:49:39 np0005629333 systemd[1]: Started Session 28 of User ceph-admin.
Feb 25 06:49:39 np0005629333 systemd-logind[811]: New session 29 of user ceph-admin.
Feb 25 06:49:39 np0005629333 systemd[1]: Started Session 29 of User ceph-admin.
Feb 25 06:49:40 np0005629333 systemd-logind[811]: New session 30 of user ceph-admin.
Feb 25 06:49:40 np0005629333 systemd[1]: Started Session 30 of User ceph-admin.
Feb 25 06:49:40 np0005629333 systemd-logind[811]: New session 31 of user ceph-admin.
Feb 25 06:49:40 np0005629333 systemd[1]: Started Session 31 of User ceph-admin.
Feb 25 06:49:40 np0005629333 ceph-mgr[76641]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 25 06:49:41 np0005629333 ceph-mgr[76641]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 25 06:49:41 np0005629333 systemd-logind[811]: New session 32 of user ceph-admin.
Feb 25 06:49:41 np0005629333 systemd[1]: Started Session 32 of User ceph-admin.
Feb 25 06:49:42 np0005629333 systemd-logind[811]: New session 33 of user ceph-admin.
Feb 25 06:49:42 np0005629333 systemd[1]: Started Session 33 of User ceph-admin.
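Note: sessions 21 through 33 are not interactive logins; each is one SSH connection opened by the cephadm mgr module as ceph-admin while it probes and prepares the newly added host, and logind records every one. To watch them on the host:

    loginctl list-sessions                            # bursts of short ceph-admin sessions
    loginctl show-session 21 -p Remote -p Service     # per-session detail, number from the log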
Feb 25 06:49:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054701 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:49:42 np0005629333 ceph-mgr[76641]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 25 06:49:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 25 06:49:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:42 np0005629333 ceph-mgr[76641]: [cephadm INFO root] Added host compute-0
Feb 25 06:49:42 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Added host compute-0
Feb 25 06:49:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 25 06:49:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 25 06:49:42 np0005629333 nifty_nash[77698]: Added host 'compute-0' with addr '192.168.122.100'
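Note: the dispatch at 11:49:37 ("orch host add") and this confirmation bracket the whole host-add operation, including the SSH session burst in between. The equivalent CLI call, exactly as logged:

    ceph orch host add compute-0 192.168.122.100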
Feb 25 06:49:42 np0005629333 systemd[1]: libpod-e6dbc1878435ebea373a3ab8a220a25253e36aea109ecdf2ee3dda2c55f1eb73.scope: Deactivated successfully.
Feb 25 06:49:42 np0005629333 podman[77682]: 2026-02-25 11:49:42.798925523 +0000 UTC m=+6.004768905 container died e6dbc1878435ebea373a3ab8a220a25253e36aea109ecdf2ee3dda2c55f1eb73 (image=quay.io/ceph/ceph:v20, name=nifty_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 06:49:42 np0005629333 systemd[1]: var-lib-containers-storage-overlay-2a97c5de448877b14790749f6ef761a39ad24136cd0ee4ca26d68ab4d9ff9292-merged.mount: Deactivated successfully.
Feb 25 06:49:42 np0005629333 podman[77682]: 2026-02-25 11:49:42.838434032 +0000 UTC m=+6.044277434 container remove e6dbc1878435ebea373a3ab8a220a25253e36aea109ecdf2ee3dda2c55f1eb73 (image=quay.io/ceph/ceph:v20, name=nifty_nash, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:49:42 np0005629333 systemd[1]: libpod-conmon-e6dbc1878435ebea373a3ab8a220a25253e36aea109ecdf2ee3dda2c55f1eb73.scope: Deactivated successfully.
Feb 25 06:49:42 np0005629333 podman[78121]: 2026-02-25 11:49:42.905949775 +0000 UTC m=+0.051586798 container create 69cabebebb1d1d48bd9cc6219b006040a4c1a35ce02ac9425604b60d1a42fab7 (image=quay.io/ceph/ceph:v20, name=sleepy_satoshi, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 06:49:42 np0005629333 systemd[1]: Started libpod-conmon-69cabebebb1d1d48bd9cc6219b006040a4c1a35ce02ac9425604b60d1a42fab7.scope.
Feb 25 06:49:42 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:42 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19a72796ca506d494b83079b88e3b776e83b4c8432a854c50203cc83ac389956/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:42 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19a72796ca506d494b83079b88e3b776e83b4c8432a854c50203cc83ac389956/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:42 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19a72796ca506d494b83079b88e3b776e83b4c8432a854c50203cc83ac389956/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:42 np0005629333 podman[78121]: 2026-02-25 11:49:42.882363323 +0000 UTC m=+0.028000346 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:42 np0005629333 podman[78121]: 2026-02-25 11:49:42.988413332 +0000 UTC m=+0.134050395 container init 69cabebebb1d1d48bd9cc6219b006040a4c1a35ce02ac9425604b60d1a42fab7 (image=quay.io/ceph/ceph:v20, name=sleepy_satoshi, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 06:49:42 np0005629333 podman[78121]: 2026-02-25 11:49:42.993788907 +0000 UTC m=+0.139425930 container start 69cabebebb1d1d48bd9cc6219b006040a4c1a35ce02ac9425604b60d1a42fab7 (image=quay.io/ceph/ceph:v20, name=sleepy_satoshi, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 06:49:43 np0005629333 podman[78121]: 2026-02-25 11:49:43.004744407 +0000 UTC m=+0.150381420 container attach 69cabebebb1d1d48bd9cc6219b006040a4c1a35ce02ac9425604b60d1a42fab7 (image=quay.io/ceph/ceph:v20, name=sleepy_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 06:49:43 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 06:49:43 np0005629333 ceph-mgr[76641]: [cephadm INFO root] Saving service mon spec with placement count:5
Feb 25 06:49:43 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Feb 25 06:49:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb 25 06:49:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:43 np0005629333 sleepy_satoshi[78158]: Scheduled mon update...
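Note: "Scheduled mon update..." acknowledges the orch apply above; the spec was saved with placement count:5, so the orchestrator will aim for five monitors as hosts become available. The equivalent CLI form:

    ceph orch apply mon 5   # or: ceph orch apply mon --placement=5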
Feb 25 06:49:43 np0005629333 systemd[1]: libpod-69cabebebb1d1d48bd9cc6219b006040a4c1a35ce02ac9425604b60d1a42fab7.scope: Deactivated successfully.
Feb 25 06:49:43 np0005629333 podman[78121]: 2026-02-25 11:49:43.468600727 +0000 UTC m=+0.614237760 container died 69cabebebb1d1d48bd9cc6219b006040a4c1a35ce02ac9425604b60d1a42fab7 (image=quay.io/ceph/ceph:v20, name=sleepy_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:49:43 np0005629333 ceph-mgr[76641]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 25 06:49:43 np0005629333 systemd[1]: var-lib-containers-storage-overlay-19a72796ca506d494b83079b88e3b776e83b4c8432a854c50203cc83ac389956-merged.mount: Deactivated successfully.
Feb 25 06:49:43 np0005629333 podman[78177]: 2026-02-25 11:49:43.707263977 +0000 UTC m=+0.579189437 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:43 np0005629333 podman[78121]: 2026-02-25 11:49:43.708675129 +0000 UTC m=+0.854312142 container remove 69cabebebb1d1d48bd9cc6219b006040a4c1a35ce02ac9425604b60d1a42fab7 (image=quay.io/ceph/ceph:v20, name=sleepy_satoshi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 06:49:43 np0005629333 systemd[1]: libpod-conmon-69cabebebb1d1d48bd9cc6219b006040a4c1a35ce02ac9425604b60d1a42fab7.scope: Deactivated successfully.
Feb 25 06:49:43 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:43 np0005629333 ceph-mon[76335]: Added host compute-0
Feb 25 06:49:43 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:43 np0005629333 podman[78223]: 2026-02-25 11:49:43.797514985 +0000 UTC m=+0.065506147 container create f7c8f7335e58b027c0797e6b8d4e532e35c7c5558fd7c303a7a8cc8938703073 (image=quay.io/ceph/ceph:v20, name=brave_blackwell, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 06:49:43 np0005629333 systemd[1]: Started libpod-conmon-f7c8f7335e58b027c0797e6b8d4e532e35c7c5558fd7c303a7a8cc8938703073.scope.
Feb 25 06:49:43 np0005629333 podman[78248]: 2026-02-25 11:49:43.848015804 +0000 UTC m=+0.068472227 container create de56dcd62386608f198e34dda8d37bd089972130986733934fa18f1db9a75faf (image=quay.io/ceph/ceph:v20, name=brave_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:49:43 np0005629333 podman[78223]: 2026-02-25 11:49:43.756468889 +0000 UTC m=+0.024460111 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:43 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0068e30abb8f5aaed86d3bf7479dd43df0fe18ca809bfe03efbfe4becd4273df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0068e30abb8f5aaed86d3bf7479dd43df0fe18ca809bfe03efbfe4becd4273df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0068e30abb8f5aaed86d3bf7479dd43df0fe18ca809bfe03efbfe4becd4273df/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:43 np0005629333 systemd[1]: Started libpod-conmon-de56dcd62386608f198e34dda8d37bd089972130986733934fa18f1db9a75faf.scope.
Feb 25 06:49:43 np0005629333 podman[78223]: 2026-02-25 11:49:43.886738587 +0000 UTC m=+0.154729779 container init f7c8f7335e58b027c0797e6b8d4e532e35c7c5558fd7c303a7a8cc8938703073 (image=quay.io/ceph/ceph:v20, name=brave_blackwell, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:49:43 np0005629333 podman[78223]: 2026-02-25 11:49:43.897489968 +0000 UTC m=+0.165481080 container start f7c8f7335e58b027c0797e6b8d4e532e35c7c5558fd7c303a7a8cc8938703073 (image=quay.io/ceph/ceph:v20, name=brave_blackwell, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:49:43 np0005629333 podman[78248]: 2026-02-25 11:49:43.80930322 +0000 UTC m=+0.029759673 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:43 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:43 np0005629333 podman[78223]: 2026-02-25 11:49:43.902257236 +0000 UTC m=+0.170248408 container attach f7c8f7335e58b027c0797e6b8d4e532e35c7c5558fd7c303a7a8cc8938703073 (image=quay.io/ceph/ceph:v20, name=brave_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 06:49:43 np0005629333 podman[78248]: 2026-02-25 11:49:43.916223177 +0000 UTC m=+0.136679680 container init de56dcd62386608f198e34dda8d37bd089972130986733934fa18f1db9a75faf (image=quay.io/ceph/ceph:v20, name=brave_jones, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 06:49:43 np0005629333 podman[78248]: 2026-02-25 11:49:43.921434725 +0000 UTC m=+0.141891178 container start de56dcd62386608f198e34dda8d37bd089972130986733934fa18f1db9a75faf (image=quay.io/ceph/ceph:v20, name=brave_jones, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:49:43 np0005629333 podman[78248]: 2026-02-25 11:49:43.927013509 +0000 UTC m=+0.147469962 container attach de56dcd62386608f198e34dda8d37bd089972130986733934fa18f1db9a75faf (image=quay.io/ceph/ceph:v20, name=brave_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Feb 25 06:49:44 np0005629333 brave_jones[78271]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Feb 25 06:49:44 np0005629333 systemd[1]: libpod-de56dcd62386608f198e34dda8d37bd089972130986733934fa18f1db9a75faf.scope: Deactivated successfully.
Feb 25 06:49:44 np0005629333 podman[78248]: 2026-02-25 11:49:44.012646175 +0000 UTC m=+0.233102608 container died de56dcd62386608f198e34dda8d37bd089972130986733934fa18f1db9a75faf (image=quay.io/ceph/ceph:v20, name=brave_jones, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:49:44 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ccc0d0e1bfad801efc613e89271c1934b8ca1dc52def4e30c152f8a3d42b7f1a-merged.mount: Deactivated successfully.
Feb 25 06:49:44 np0005629333 podman[78248]: 2026-02-25 11:49:44.05801434 +0000 UTC m=+0.278470753 container remove de56dcd62386608f198e34dda8d37bd089972130986733934fa18f1db9a75faf (image=quay.io/ceph/ceph:v20, name=brave_jones, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Feb 25 06:49:44 np0005629333 systemd[1]: libpod-conmon-de56dcd62386608f198e34dda8d37bd089972130986733934fa18f1db9a75faf.scope: Deactivated successfully.
Feb 25 06:49:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Feb 25 06:49:44 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:44 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 06:49:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 25 06:49:44 np0005629333 ceph-mgr[76641]: [cephadm INFO root] Saving service mgr spec with placement count:2
Feb 25 06:49:44 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Feb 25 06:49:44 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:44 np0005629333 brave_blackwell[78264]: Scheduled mgr update...
Feb 25 06:49:44 np0005629333 systemd[1]: libpod-f7c8f7335e58b027c0797e6b8d4e532e35c7c5558fd7c303a7a8cc8938703073.scope: Deactivated successfully.
Feb 25 06:49:44 np0005629333 podman[78223]: 2026-02-25 11:49:44.350608668 +0000 UTC m=+0.618599790 container died f7c8f7335e58b027c0797e6b8d4e532e35c7c5558fd7c303a7a8cc8938703073 (image=quay.io/ceph/ceph:v20, name=brave_blackwell, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:49:44 np0005629333 systemd[1]: var-lib-containers-storage-overlay-0068e30abb8f5aaed86d3bf7479dd43df0fe18ca809bfe03efbfe4becd4273df-merged.mount: Deactivated successfully.
Feb 25 06:49:44 np0005629333 podman[78223]: 2026-02-25 11:49:44.421930407 +0000 UTC m=+0.689921529 container remove f7c8f7335e58b027c0797e6b8d4e532e35c7c5558fd7c303a7a8cc8938703073 (image=quay.io/ceph/ceph:v20, name=brave_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:49:44 np0005629333 systemd[1]: libpod-conmon-f7c8f7335e58b027c0797e6b8d4e532e35c7c5558fd7c303a7a8cc8938703073.scope: Deactivated successfully.
Feb 25 06:49:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:49:44 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:44 np0005629333 podman[78394]: 2026-02-25 11:49:44.470175598 +0000 UTC m=+0.034761572 container create 14c0391920813b6c82a9de0eb037db03efbf124fdbb77e50cdb2f547f0edfbb8 (image=quay.io/ceph/ceph:v20, name=nervous_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:49:44 np0005629333 systemd[1]: Started libpod-conmon-14c0391920813b6c82a9de0eb037db03efbf124fdbb77e50cdb2f547f0edfbb8.scope.
Feb 25 06:49:44 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:44 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5cd1fb97c885164e05347481c98617e9e76207d802d3c5aa57962cc5187a1f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:44 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5cd1fb97c885164e05347481c98617e9e76207d802d3c5aa57962cc5187a1f1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:44 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5cd1fb97c885164e05347481c98617e9e76207d802d3c5aa57962cc5187a1f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:44 np0005629333 podman[78394]: 2026-02-25 11:49:44.455195182 +0000 UTC m=+0.019781176 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:44 np0005629333 podman[78394]: 2026-02-25 11:49:44.5769788 +0000 UTC m=+0.141564774 container init 14c0391920813b6c82a9de0eb037db03efbf124fdbb77e50cdb2f547f0edfbb8 (image=quay.io/ceph/ceph:v20, name=nervous_davinci, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 06:49:44 np0005629333 podman[78394]: 2026-02-25 11:49:44.580836068 +0000 UTC m=+0.145422042 container start 14c0391920813b6c82a9de0eb037db03efbf124fdbb77e50cdb2f547f0edfbb8 (image=quay.io/ceph/ceph:v20, name=nervous_davinci, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:49:44 np0005629333 podman[78394]: 2026-02-25 11:49:44.592905486 +0000 UTC m=+0.157491460 container attach 14c0391920813b6c82a9de0eb037db03efbf124fdbb77e50cdb2f547f0edfbb8 (image=quay.io/ceph/ceph:v20, name=nervous_davinci, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 06:49:44 np0005629333 ceph-mgr[76641]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 25 06:49:44 np0005629333 ceph-mon[76335]: Saving service mon spec with placement count:5
Feb 25 06:49:44 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:44 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:44 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:44 np0005629333 podman[78527]: 2026-02-25 11:49:44.947055218 +0000 UTC m=+0.062260915 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 06:49:44 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 06:49:44 np0005629333 ceph-mgr[76641]: [cephadm INFO root] Saving service crash spec with placement *
Feb 25 06:49:44 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Feb 25 06:49:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb 25 06:49:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:45 np0005629333 nervous_davinci[78439]: Scheduled crash update...
Feb 25 06:49:45 np0005629333 podman[78527]: 2026-02-25 11:49:45.060057001 +0000 UTC m=+0.175262658 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 06:49:45 np0005629333 systemd[1]: libpod-14c0391920813b6c82a9de0eb037db03efbf124fdbb77e50cdb2f547f0edfbb8.scope: Deactivated successfully.
Feb 25 06:49:45 np0005629333 podman[78394]: 2026-02-25 11:49:45.072033835 +0000 UTC m=+0.636619809 container died 14c0391920813b6c82a9de0eb037db03efbf124fdbb77e50cdb2f547f0edfbb8 (image=quay.io/ceph/ceph:v20, name=nervous_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:49:45 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c5cd1fb97c885164e05347481c98617e9e76207d802d3c5aa57962cc5187a1f1-merged.mount: Deactivated successfully.
Feb 25 06:49:45 np0005629333 podman[78394]: 2026-02-25 11:49:45.161351002 +0000 UTC m=+0.725936986 container remove 14c0391920813b6c82a9de0eb037db03efbf124fdbb77e50cdb2f547f0edfbb8 (image=quay.io/ceph/ceph:v20, name=nervous_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030)
Feb 25 06:49:45 np0005629333 systemd[1]: libpod-conmon-14c0391920813b6c82a9de0eb037db03efbf124fdbb77e50cdb2f547f0edfbb8.scope: Deactivated successfully.
Feb 25 06:49:45 np0005629333 podman[78587]: 2026-02-25 11:49:45.196410575 +0000 UTC m=+0.019063764 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:45 np0005629333 podman[78587]: 2026-02-25 11:49:45.299577908 +0000 UTC m=+0.122231037 container create e682d79c70489d542fbadb262853d7cb1341f694fc571f8ea3ab0c8b74c1ee9f (image=quay.io/ceph/ceph:v20, name=zealous_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 06:49:45 np0005629333 systemd[1]: Started libpod-conmon-e682d79c70489d542fbadb262853d7cb1341f694fc571f8ea3ab0c8b74c1ee9f.scope.
Feb 25 06:49:45 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:45 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2481b7f5833e810e176f9e89d1f2b018141b1a68e0310179977756f84e14436/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:45 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2481b7f5833e810e176f9e89d1f2b018141b1a68e0310179977756f84e14436/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:45 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2481b7f5833e810e176f9e89d1f2b018141b1a68e0310179977756f84e14436/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:45 np0005629333 podman[78587]: 2026-02-25 11:49:45.450918898 +0000 UTC m=+0.273572077 container init e682d79c70489d542fbadb262853d7cb1341f694fc571f8ea3ab0c8b74c1ee9f (image=quay.io/ceph/ceph:v20, name=zealous_davinci, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:49:45 np0005629333 podman[78587]: 2026-02-25 11:49:45.458989151 +0000 UTC m=+0.281642280 container start e682d79c70489d542fbadb262853d7cb1341f694fc571f8ea3ab0c8b74c1ee9f (image=quay.io/ceph/ceph:v20, name=zealous_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:49:45 np0005629333 podman[78587]: 2026-02-25 11:49:45.48226815 +0000 UTC m=+0.304921319 container attach e682d79c70489d542fbadb262853d7cb1341f694fc571f8ea3ab0c8b74c1ee9f (image=quay.io/ceph/ceph:v20, name=zealous_davinci, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:49:45 np0005629333 ceph-mgr[76641]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 25 06:49:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:49:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Feb 25 06:49:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1097530015' entity='client.admin' 
Feb 25 06:49:45 np0005629333 ceph-mon[76335]: Saving service mgr spec with placement count:2
Feb 25 06:49:45 np0005629333 ceph-mon[76335]: Saving service crash spec with placement *
Feb 25 06:49:45 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:45 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:45 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/1097530015' entity='client.admin' 
Feb 25 06:49:45 np0005629333 systemd[1]: libpod-e682d79c70489d542fbadb262853d7cb1341f694fc571f8ea3ab0c8b74c1ee9f.scope: Deactivated successfully.
Feb 25 06:49:45 np0005629333 podman[78587]: 2026-02-25 11:49:45.884655631 +0000 UTC m=+0.707308730 container died e682d79c70489d542fbadb262853d7cb1341f694fc571f8ea3ab0c8b74c1ee9f (image=quay.io/ceph/ceph:v20, name=zealous_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:49:45 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c2481b7f5833e810e176f9e89d1f2b018141b1a68e0310179977756f84e14436-merged.mount: Deactivated successfully.
Feb 25 06:49:45 np0005629333 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 78737 (sysctl)
Feb 25 06:49:45 np0005629333 podman[78587]: 2026-02-25 11:49:45.934179748 +0000 UTC m=+0.756832867 container remove e682d79c70489d542fbadb262853d7cb1341f694fc571f8ea3ab0c8b74c1ee9f (image=quay.io/ceph/ceph:v20, name=zealous_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:49:45 np0005629333 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Feb 25 06:49:45 np0005629333 systemd[1]: libpod-conmon-e682d79c70489d542fbadb262853d7cb1341f694fc571f8ea3ab0c8b74c1ee9f.scope: Deactivated successfully.
Feb 25 06:49:45 np0005629333 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Feb 25 06:49:46 np0005629333 podman[78740]: 2026-02-25 11:49:46.001839627 +0000 UTC m=+0.051752635 container create 0759798e541f3ca7fac28a73d36e815193cb21b7ab6b393dc75ffd404a15a298 (image=quay.io/ceph/ceph:v20, name=agitated_sinoussi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 06:49:46 np0005629333 systemd[1]: Started libpod-conmon-0759798e541f3ca7fac28a73d36e815193cb21b7ab6b393dc75ffd404a15a298.scope.
Feb 25 06:49:46 np0005629333 podman[78740]: 2026-02-25 11:49:45.981798891 +0000 UTC m=+0.031711909 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:46 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937dcb657aa64a632a35d2368cd120abc8435d916be40963751f12cf39693b40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937dcb657aa64a632a35d2368cd120abc8435d916be40963751f12cf39693b40/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937dcb657aa64a632a35d2368cd120abc8435d916be40963751f12cf39693b40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:46 np0005629333 podman[78740]: 2026-02-25 11:49:46.101148791 +0000 UTC m=+0.151061819 container init 0759798e541f3ca7fac28a73d36e815193cb21b7ab6b393dc75ffd404a15a298 (image=quay.io/ceph/ceph:v20, name=agitated_sinoussi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 06:49:46 np0005629333 podman[78740]: 2026-02-25 11:49:46.108025852 +0000 UTC m=+0.157938870 container start 0759798e541f3ca7fac28a73d36e815193cb21b7ab6b393dc75ffd404a15a298 (image=quay.io/ceph/ceph:v20, name=agitated_sinoussi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 06:49:46 np0005629333 podman[78740]: 2026-02-25 11:49:46.111333817 +0000 UTC m=+0.161246875 container attach 0759798e541f3ca7fac28a73d36e815193cb21b7ab6b393dc75ffd404a15a298 (image=quay.io/ceph/ceph:v20, name=agitated_sinoussi, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:49:46 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 06:49:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Feb 25 06:49:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:46 np0005629333 systemd[1]: libpod-0759798e541f3ca7fac28a73d36e815193cb21b7ab6b393dc75ffd404a15a298.scope: Deactivated successfully.
Feb 25 06:49:46 np0005629333 podman[78740]: 2026-02-25 11:49:46.528999087 +0000 UTC m=+0.578912125 container died 0759798e541f3ca7fac28a73d36e815193cb21b7ab6b393dc75ffd404a15a298 (image=quay.io/ceph/ceph:v20, name=agitated_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:49:46 np0005629333 systemd[1]: var-lib-containers-storage-overlay-937dcb657aa64a632a35d2368cd120abc8435d916be40963751f12cf39693b40-merged.mount: Deactivated successfully.
Feb 25 06:49:46 np0005629333 podman[78740]: 2026-02-25 11:49:46.565350157 +0000 UTC m=+0.615263165 container remove 0759798e541f3ca7fac28a73d36e815193cb21b7ab6b393dc75ffd404a15a298 (image=quay.io/ceph/ceph:v20, name=agitated_sinoussi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:49:46 np0005629333 systemd[1]: libpod-conmon-0759798e541f3ca7fac28a73d36e815193cb21b7ab6b393dc75ffd404a15a298.scope: Deactivated successfully.
Feb 25 06:49:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:49:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:46 np0005629333 podman[78881]: 2026-02-25 11:49:46.635313897 +0000 UTC m=+0.050295551 container create 24bd7d8be53bda2e00fbe4e6256b2744e3cf7936e8c217fd13f04388ef74bd71 (image=quay.io/ceph/ceph:v20, name=compassionate_wu, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 06:49:46 np0005629333 systemd[1]: Started libpod-conmon-24bd7d8be53bda2e00fbe4e6256b2744e3cf7936e8c217fd13f04388ef74bd71.scope.
Feb 25 06:49:46 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e0ef6f97740b33f8b7f21749c047ad099eb0e3ee6b1d85641329af2edad10af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e0ef6f97740b33f8b7f21749c047ad099eb0e3ee6b1d85641329af2edad10af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e0ef6f97740b33f8b7f21749c047ad099eb0e3ee6b1d85641329af2edad10af/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:46 np0005629333 podman[78881]: 2026-02-25 11:49:46.609814412 +0000 UTC m=+0.024796026 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:46 np0005629333 podman[78881]: 2026-02-25 11:49:46.717872439 +0000 UTC m=+0.132854063 container init 24bd7d8be53bda2e00fbe4e6256b2744e3cf7936e8c217fd13f04388ef74bd71 (image=quay.io/ceph/ceph:v20, name=compassionate_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:49:46 np0005629333 podman[78881]: 2026-02-25 11:49:46.726200443 +0000 UTC m=+0.141182077 container start 24bd7d8be53bda2e00fbe4e6256b2744e3cf7936e8c217fd13f04388ef74bd71 (image=quay.io/ceph/ceph:v20, name=compassionate_wu, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:49:46 np0005629333 podman[78881]: 2026-02-25 11:49:46.730574594 +0000 UTC m=+0.145556238 container attach 24bd7d8be53bda2e00fbe4e6256b2744e3cf7936e8c217fd13f04388ef74bd71 (image=quay.io/ceph/ceph:v20, name=compassionate_wu, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 06:49:46 np0005629333 ceph-mgr[76641]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 25 06:49:46 np0005629333 podman[78986]: 2026-02-25 11:49:46.998079726 +0000 UTC m=+0.070959995 container create d58331248383c3cc2cdd9f6befb254029c466f510a0881cef0597558727341e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 06:49:47 np0005629333 systemd[1]: Started libpod-conmon-d58331248383c3cc2cdd9f6befb254029c466f510a0881cef0597558727341e9.scope.
Feb 25 06:49:47 np0005629333 podman[78986]: 2026-02-25 11:49:46.971487063 +0000 UTC m=+0.044367352 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:49:47 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:47 np0005629333 podman[78986]: 2026-02-25 11:49:47.090969489 +0000 UTC m=+0.163849808 container init d58331248383c3cc2cdd9f6befb254029c466f510a0881cef0597558727341e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chatterjee, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:49:47 np0005629333 podman[78986]: 2026-02-25 11:49:47.094667731 +0000 UTC m=+0.167547980 container start d58331248383c3cc2cdd9f6befb254029c466f510a0881cef0597558727341e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chatterjee, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 06:49:47 np0005629333 adoring_chatterjee[79002]: 167 167
Feb 25 06:49:47 np0005629333 systemd[1]: libpod-d58331248383c3cc2cdd9f6befb254029c466f510a0881cef0597558727341e9.scope: Deactivated successfully.
Feb 25 06:49:47 np0005629333 conmon[79002]: conmon d58331248383c3cc2cdd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d58331248383c3cc2cdd9f6befb254029c466f510a0881cef0597558727341e9.scope/container/memory.events
Feb 25 06:49:47 np0005629333 podman[78986]: 2026-02-25 11:49:47.111468746 +0000 UTC m=+0.184349075 container attach d58331248383c3cc2cdd9f6befb254029c466f510a0881cef0597558727341e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chatterjee, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:49:47 np0005629333 podman[78986]: 2026-02-25 11:49:47.111925486 +0000 UTC m=+0.184805755 container died d58331248383c3cc2cdd9f6befb254029c466f510a0881cef0597558727341e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:49:47 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 06:49:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 25 06:49:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:47 np0005629333 ceph-mgr[76641]: [cephadm INFO root] Added label _admin to host compute-0
Feb 25 06:49:47 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Feb 25 06:49:47 np0005629333 compassionate_wu[78932]: Added label _admin to host compute-0
Feb 25 06:49:47 np0005629333 systemd[1]: var-lib-containers-storage-overlay-20f36e1846488d35a2682039919bfccd961009dbff4b79e205855de3da47e387-merged.mount: Deactivated successfully.
Feb 25 06:49:47 np0005629333 systemd[1]: libpod-24bd7d8be53bda2e00fbe4e6256b2744e3cf7936e8c217fd13f04388ef74bd71.scope: Deactivated successfully.
Feb 25 06:49:47 np0005629333 podman[78986]: 2026-02-25 11:49:47.163358786 +0000 UTC m=+0.236239045 container remove d58331248383c3cc2cdd9f6befb254029c466f510a0881cef0597558727341e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chatterjee, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 06:49:47 np0005629333 podman[78881]: 2026-02-25 11:49:47.165041219 +0000 UTC m=+0.580022873 container died 24bd7d8be53bda2e00fbe4e6256b2744e3cf7936e8c217fd13f04388ef74bd71 (image=quay.io/ceph/ceph:v20, name=compassionate_wu, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:49:47 np0005629333 systemd[1]: libpod-conmon-d58331248383c3cc2cdd9f6befb254029c466f510a0881cef0597558727341e9.scope: Deactivated successfully.
Feb 25 06:49:47 np0005629333 systemd[1]: var-lib-containers-storage-overlay-2e0ef6f97740b33f8b7f21749c047ad099eb0e3ee6b1d85641329af2edad10af-merged.mount: Deactivated successfully.
Feb 25 06:49:47 np0005629333 podman[78881]: 2026-02-25 11:49:47.196538767 +0000 UTC m=+0.611520381 container remove 24bd7d8be53bda2e00fbe4e6256b2744e3cf7936e8c217fd13f04388ef74bd71 (image=quay.io/ceph/ceph:v20, name=compassionate_wu, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 06:49:47 np0005629333 systemd[1]: libpod-conmon-24bd7d8be53bda2e00fbe4e6256b2744e3cf7936e8c217fd13f04388ef74bd71.scope: Deactivated successfully.
Feb 25 06:49:47 np0005629333 podman[79033]: 2026-02-25 11:49:47.250354541 +0000 UTC m=+0.036272108 container create 8eeb52136ff09a67cb6d3bf404d1d0e33b959c3eaee6e1100cd4115200dc0c8d (image=quay.io/ceph/ceph:v20, name=lucid_spence, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 06:49:47 np0005629333 systemd[1]: Started libpod-conmon-8eeb52136ff09a67cb6d3bf404d1d0e33b959c3eaee6e1100cd4115200dc0c8d.scope.
Feb 25 06:49:47 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:47 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b4fa6ea1ce2a99caf5e8337a748fb4a06f110ed399277b69f43123b9b97bc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:47 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b4fa6ea1ce2a99caf5e8337a748fb4a06f110ed399277b69f43123b9b97bc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:47 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b4fa6ea1ce2a99caf5e8337a748fb4a06f110ed399277b69f43123b9b97bc1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:47 np0005629333 podman[79033]: 2026-02-25 11:49:47.234157042 +0000 UTC m=+0.020074699 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:47 np0005629333 podman[79033]: 2026-02-25 11:49:47.345946172 +0000 UTC m=+0.131863809 container init 8eeb52136ff09a67cb6d3bf404d1d0e33b959c3eaee6e1100cd4115200dc0c8d (image=quay.io/ceph/ceph:v20, name=lucid_spence, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:49:47 np0005629333 podman[79033]: 2026-02-25 11:49:47.353088905 +0000 UTC m=+0.139006512 container start 8eeb52136ff09a67cb6d3bf404d1d0e33b959c3eaee6e1100cd4115200dc0c8d (image=quay.io/ceph/ceph:v20, name=lucid_spence, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:49:47 np0005629333 podman[79033]: 2026-02-25 11:49:47.357738738 +0000 UTC m=+0.143656345 container attach 8eeb52136ff09a67cb6d3bf404d1d0e33b959c3eaee6e1100cd4115200dc0c8d (image=quay.io/ceph/ceph:v20, name=lucid_spence, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:49:47 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:47 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:47 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:47 np0005629333 ceph-mgr[76641]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 25 06:49:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:49:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Feb 25 06:49:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3627605732' entity='client.admin' 
Feb 25 06:49:47 np0005629333 lucid_spence[79049]: set mgr/dashboard/cluster/status
Feb 25 06:49:47 np0005629333 systemd[1]: libpod-8eeb52136ff09a67cb6d3bf404d1d0e33b959c3eaee6e1100cd4115200dc0c8d.scope: Deactivated successfully.
Feb 25 06:49:47 np0005629333 podman[79033]: 2026-02-25 11:49:47.936069535 +0000 UTC m=+0.721987132 container died 8eeb52136ff09a67cb6d3bf404d1d0e33b959c3eaee6e1100cd4115200dc0c8d (image=quay.io/ceph/ceph:v20, name=lucid_spence, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 25 06:49:47 np0005629333 systemd[1]: var-lib-containers-storage-overlay-76b4fa6ea1ce2a99caf5e8337a748fb4a06f110ed399277b69f43123b9b97bc1-merged.mount: Deactivated successfully.
Feb 25 06:49:47 np0005629333 podman[79033]: 2026-02-25 11:49:47.978006049 +0000 UTC m=+0.763923656 container remove 8eeb52136ff09a67cb6d3bf404d1d0e33b959c3eaee6e1100cd4115200dc0c8d (image=quay.io/ceph/ceph:v20, name=lucid_spence, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 06:49:47 np0005629333 systemd[1]: libpod-conmon-8eeb52136ff09a67cb6d3bf404d1d0e33b959c3eaee6e1100cd4115200dc0c8d.scope: Deactivated successfully.
Feb 25 06:49:48 np0005629333 systemd[1]: Reloading.
Feb 25 06:49:48 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:49:48 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:49:48 np0005629333 podman[79144]: 2026-02-25 11:49:48.420809389 +0000 UTC m=+0.048339925 container create badca8f836eaa96f1ac86ebb556f1390a887a341e4cc5dc409247aabfed6eb3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ptolemy, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 06:49:48 np0005629333 systemd[1]: Started libpod-conmon-badca8f836eaa96f1ac86ebb556f1390a887a341e4cc5dc409247aabfed6eb3d.scope.
Feb 25 06:49:48 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb41ece5bff3eb120df9d0f12ee8b5766969b82154ee4068cc86f06e9874eb05/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb41ece5bff3eb120df9d0f12ee8b5766969b82154ee4068cc86f06e9874eb05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb41ece5bff3eb120df9d0f12ee8b5766969b82154ee4068cc86f06e9874eb05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb41ece5bff3eb120df9d0f12ee8b5766969b82154ee4068cc86f06e9874eb05/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:48 np0005629333 podman[79144]: 2026-02-25 11:49:48.401443332 +0000 UTC m=+0.028973878 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:49:48 np0005629333 ceph-mon[76335]: Added label _admin to host compute-0
Feb 25 06:49:48 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/3627605732' entity='client.admin' 
Feb 25 06:49:48 np0005629333 podman[79144]: 2026-02-25 11:49:48.523959041 +0000 UTC m=+0.151489637 container init badca8f836eaa96f1ac86ebb556f1390a887a341e4cc5dc409247aabfed6eb3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 06:49:48 np0005629333 podman[79144]: 2026-02-25 11:49:48.533879115 +0000 UTC m=+0.161409681 container start badca8f836eaa96f1ac86ebb556f1390a887a341e4cc5dc409247aabfed6eb3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ptolemy, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:49:48 np0005629333 podman[79144]: 2026-02-25 11:49:48.537894521 +0000 UTC m=+0.165425147 container attach badca8f836eaa96f1ac86ebb556f1390a887a341e4cc5dc409247aabfed6eb3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:49:48 np0005629333 ceph-mgr[76641]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Feb 25 06:49:48 np0005629333 python3[79192]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
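The ansible _raw_params above wraps a single ceph command in a throwaway container: podman runs the ceph entrypoint with --rm, which is why every such invocation in this log appears as a create/init/start/attach/died/remove sequence followed by the conmon scope deactivating. Stripped of the podman scaffolding, the call amounts to `ceph config set mgr mgr/cephadm/use_repo_digest false`, which tells cephadm to keep using the image tag as given rather than resolving it to a repo digest. A minimal sketch of issuing and verifying the same setting from Python, assuming the ceph CLI and an admin keyring are present on the host (illustrative, not the ansible module's own code):

    import subprocess

    # Direct equivalent of the containerized call logged above.
    subprocess.run(
        ["ceph", "config", "set", "mgr", "mgr/cephadm/use_repo_digest", "false"],
        check=True,
    )

    # Read the option back to confirm it took effect.
    value = subprocess.run(
        ["ceph", "config", "get", "mgr", "mgr/cephadm/use_repo_digest"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    assert value == "false", value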
Feb 25 06:49:48 np0005629333 podman[79197]: 2026-02-25 11:49:48.966039959 +0000 UTC m=+0.054362719 container create f74eb921e1c53deeda4e36447bba459ef08f7d9b5c61a220313b5aa402cf3819 (image=quay.io/ceph/ceph:v20, name=trusting_hermann, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:49:49 np0005629333 systemd[1]: Started libpod-conmon-f74eb921e1c53deeda4e36447bba459ef08f7d9b5c61a220313b5aa402cf3819.scope.
Feb 25 06:49:49 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7474f9bb9bc9b6d0da895247c5fe446339cbe978c3b32e429885326096ede3cc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7474f9bb9bc9b6d0da895247c5fe446339cbe978c3b32e429885326096ede3cc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:49 np0005629333 podman[79197]: 2026-02-25 11:49:48.944396292 +0000 UTC m=+0.032719062 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:49 np0005629333 podman[79197]: 2026-02-25 11:49:49.056417042 +0000 UTC m=+0.144739862 container init f74eb921e1c53deeda4e36447bba459ef08f7d9b5c61a220313b5aa402cf3819 (image=quay.io/ceph/ceph:v20, name=trusting_hermann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 06:49:49 np0005629333 podman[79197]: 2026-02-25 11:49:49.061376839 +0000 UTC m=+0.149699559 container start f74eb921e1c53deeda4e36447bba459ef08f7d9b5c61a220313b5aa402cf3819 (image=quay.io/ceph/ceph:v20, name=trusting_hermann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]: [
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:    {
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:        "available": false,
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:        "being_replaced": false,
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:        "ceph_device_lvm": false,
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:        "device_id": "QEMU_DVD-ROM_QM00001",
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:        "lsm_data": {},
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:        "lvs": [],
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:        "path": "/dev/sr0",
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:        "rejected_reasons": [
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:            "Has a FileSystem",
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:            "Insufficient space (<5GB)"
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:        ],
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:        "sys_api": {
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:            "actuators": null,
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:            "device_nodes": [
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:                "sr0"
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:            ],
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:            "devname": "sr0",
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:            "human_readable_size": "482.00 KB",
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:            "id_bus": "ata",
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:            "model": "QEMU DVD-ROM",
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:            "nr_requests": "2",
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:            "parent": "/dev/sr0",
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:            "partitions": {},
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:            "path": "/dev/sr0",
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:            "removable": "1",
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:            "rev": "2.5+",
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:            "ro": "0",
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:            "rotational": "1",
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:            "sas_address": "",
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:            "sas_device_handle": "",
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:            "scheduler_mode": "mq-deadline",
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:            "sectors": 0,
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:            "sectorsize": "2048",
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:            "size": 493568.0,
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:            "support_discard": "2048",
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:            "type": "disk",
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:            "vendor": "QEMU"
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:        }
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]:    }
Feb 25 06:49:49 np0005629333 magical_ptolemy[79160]: ]
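The JSON array just printed by magical_ptolemy is a cephadm device inventory: the only block device found, /dev/sr0, is the QEMU DVD drive, rejected as an OSD candidate because it carries a filesystem and is smaller than 5 GB. The nearby config-key set calls for mgr/cephadm/host.compute-0.devices.0 are consistent with cephadm caching exactly this report. A short Python sketch of filtering such a report before attempting any `ceph orch daemon add osd`, using a trimmed hypothetical literal with the field names from the log:

    import json

    # Trimmed copy of the inventory printed above (hypothetical literal).
    inventory_json = """
    [
      {"available": false,
       "path": "/dev/sr0",
       "rejected_reasons": ["Has a FileSystem", "Insufficient space (<5GB)"]}
    ]
    """

    devices = json.loads(inventory_json)
    usable = [d["path"] for d in devices if d["available"]]
    rejected = {d["path"]: d["rejected_reasons"]
                for d in devices if not d["available"]}

    print("usable:", usable)      # usable: []  (no OSD-capable disks on this host)
    print("rejected:", rejected)  # rejected: {'/dev/sr0': ['Has a FileSystem', ...]}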
Feb 25 06:49:49 np0005629333 systemd[1]: libpod-badca8f836eaa96f1ac86ebb556f1390a887a341e4cc5dc409247aabfed6eb3d.scope: Deactivated successfully.
Feb 25 06:49:49 np0005629333 podman[79197]: 2026-02-25 11:49:49.204866376 +0000 UTC m=+0.293189146 container attach f74eb921e1c53deeda4e36447bba459ef08f7d9b5c61a220313b5aa402cf3819 (image=quay.io/ceph/ceph:v20, name=trusting_hermann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:49:49 np0005629333 podman[79144]: 2026-02-25 11:49:49.218373807 +0000 UTC m=+0.845904333 container died badca8f836eaa96f1ac86ebb556f1390a887a341e4cc5dc409247aabfed6eb3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 06:49:49 np0005629333 systemd[1]: var-lib-containers-storage-overlay-fb41ece5bff3eb120df9d0f12ee8b5766969b82154ee4068cc86f06e9874eb05-merged.mount: Deactivated successfully.
Feb 25 06:49:49 np0005629333 podman[79144]: 2026-02-25 11:49:49.45253068 +0000 UTC m=+1.080061246 container remove badca8f836eaa96f1ac86ebb556f1390a887a341e4cc5dc409247aabfed6eb3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:49:49 np0005629333 systemd[1]: libpod-conmon-badca8f836eaa96f1ac86ebb556f1390a887a341e4cc5dc409247aabfed6eb3d.scope: Deactivated successfully.
Feb 25 06:49:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Feb 25 06:49:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:49:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2500648154' entity='client.admin' 
Feb 25 06:49:49 np0005629333 systemd[1]: libpod-f74eb921e1c53deeda4e36447bba459ef08f7d9b5c61a220313b5aa402cf3819.scope: Deactivated successfully.
Feb 25 06:49:49 np0005629333 podman[79197]: 2026-02-25 11:49:49.534853551 +0000 UTC m=+0.623176271 container died f74eb921e1c53deeda4e36447bba459ef08f7d9b5c61a220313b5aa402cf3819 (image=quay.io/ceph/ceph:v20, name=trusting_hermann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 06:49:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:49:49 np0005629333 ceph-mgr[76641]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 25 06:49:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:49:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:49:49 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7474f9bb9bc9b6d0da895247c5fe446339cbe978c3b32e429885326096ede3cc-merged.mount: Deactivated successfully.
Feb 25 06:49:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 25 06:49:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 06:49:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:49:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:49:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 06:49:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:49:49 np0005629333 ceph-mgr[76641]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Feb 25 06:49:49 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Feb 25 06:49:49 np0005629333 podman[79197]: 2026-02-25 11:49:49.792397996 +0000 UTC m=+0.880720746 container remove f74eb921e1c53deeda4e36447bba459ef08f7d9b5c61a220313b5aa402cf3819 (image=quay.io/ceph/ceph:v20, name=trusting_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 06:49:49 np0005629333 systemd[1]: libpod-conmon-f74eb921e1c53deeda4e36447bba459ef08f7d9b5c61a220313b5aa402cf3819.scope: Deactivated successfully.
Feb 25 06:49:50 np0005629333 ceph-mgr[76641]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/config/ceph.conf
Feb 25 06:49:50 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/config/ceph.conf
Feb 25 06:49:50 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/2500648154' entity='client.admin' 
Feb 25 06:49:50 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:50 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:50 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:50 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:50 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 06:49:50 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:49:50 np0005629333 ceph-mon[76335]: Updating compute-0:/etc/ceph/ceph.conf
Feb 25 06:49:50 np0005629333 ceph-mgr[76641]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb 25 06:49:50 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb 25 06:49:50 np0005629333 ansible-async_wrapper.py[80502]: Invoked with j90832755647 30 /home/zuul/.ansible/tmp/ansible-tmp-1772020190.1700873-37250-70438705106308/AnsiballZ_command.py _
Feb 25 06:49:50 np0005629333 ansible-async_wrapper.py[80557]: Starting module and watcher
Feb 25 06:49:50 np0005629333 ansible-async_wrapper.py[80557]: Start watching 80559 (30)
Feb 25 06:49:50 np0005629333 ansible-async_wrapper.py[80559]: Start module (80559)
Feb 25 06:49:50 np0005629333 ansible-async_wrapper.py[80502]: Return async_wrapper task started.
Feb 25 06:49:50 np0005629333 ceph-mgr[76641]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Feb 25 06:49:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 25 06:49:50 np0005629333 ceph-mon[76335]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
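The TOO_FEW_OSDS warning above is a direct comparison: zero OSDs have registered yet, which is below osd_pool_default_size (evidently 1 on this single-node cluster). Expressed as code, the monitor's condition is simply the following, with the values taken from the log line itself:

    # Values from the log line above (illustrative only).
    osd_count = 0
    osd_pool_default_size = 1

    if osd_count < osd_pool_default_size:
        print(f"Health check failed: OSD count {osd_count} "
              f"< osd_pool_default_size {osd_pool_default_size} (TOO_FEW_OSDS)")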
Feb 25 06:49:50 np0005629333 python3[80561]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:49:50 np0005629333 podman[80662]: 2026-02-25 11:49:50.910416242 +0000 UTC m=+0.048534154 container create ee5210cc55fb57298f3e9cae9e1dbdda502f3a451b82cccd31319cd02ade6701 (image=quay.io/ceph/ceph:v20, name=confident_bhaskara, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:49:50 np0005629333 systemd[1]: Started libpod-conmon-ee5210cc55fb57298f3e9cae9e1dbdda502f3a451b82cccd31319cd02ade6701.scope.
Feb 25 06:49:50 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8246e412626ab90eace063b126bf6f464b564b85ffe86afd904b51873d5e633/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8246e412626ab90eace063b126bf6f464b564b85ffe86afd904b51873d5e633/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:50 np0005629333 podman[80662]: 2026-02-25 11:49:50.894276646 +0000 UTC m=+0.032394588 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:51 np0005629333 podman[80662]: 2026-02-25 11:49:51.002193146 +0000 UTC m=+0.140311088 container init ee5210cc55fb57298f3e9cae9e1dbdda502f3a451b82cccd31319cd02ade6701 (image=quay.io/ceph/ceph:v20, name=confident_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 06:49:51 np0005629333 podman[80662]: 2026-02-25 11:49:51.008437189 +0000 UTC m=+0.146555131 container start ee5210cc55fb57298f3e9cae9e1dbdda502f3a451b82cccd31319cd02ade6701 (image=quay.io/ceph/ceph:v20, name=confident_bhaskara, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 06:49:51 np0005629333 podman[80662]: 2026-02-25 11:49:51.014090897 +0000 UTC m=+0.152208819 container attach ee5210cc55fb57298f3e9cae9e1dbdda502f3a451b82cccd31319cd02ade6701 (image=quay.io/ceph/ceph:v20, name=confident_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 06:49:51 np0005629333 ceph-mgr[76641]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/config/ceph.client.admin.keyring
Feb 25 06:49:51 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/config/ceph.client.admin.keyring
Feb 25 06:49:51 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 25 06:49:51 np0005629333 confident_bhaskara[80726]: 
Feb 25 06:49:51 np0005629333 confident_bhaskara[80726]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
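The two confident_bhaskara lines (one blank, one JSON) are the answer to the `orch status --format json` call from the ansible task above: the cephadm backend is loaded, not paused, and running 10 workers. A sketch of the gate a caller might apply to that output before moving on; the parsing code is hypothetical, the JSON is verbatim from the log:

    import json

    raw = '{"available": true, "backend": "cephadm", "paused": false, "workers": 10}'
    status = json.loads(raw)

    # Proceed only when the orchestrator is actually usable.
    assert status["backend"] == "cephadm"
    assert status["available"] and not status["paused"], status
    print(f"cephadm ready with {status['workers']} workers")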
Feb 25 06:49:51 np0005629333 systemd[1]: libpod-ee5210cc55fb57298f3e9cae9e1dbdda502f3a451b82cccd31319cd02ade6701.scope: Deactivated successfully.
Feb 25 06:49:51 np0005629333 podman[80662]: 2026-02-25 11:49:51.448215807 +0000 UTC m=+0.586333749 container died ee5210cc55fb57298f3e9cae9e1dbdda502f3a451b82cccd31319cd02ade6701 (image=quay.io/ceph/ceph:v20, name=confident_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 06:49:51 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d8246e412626ab90eace063b126bf6f464b564b85ffe86afd904b51873d5e633-merged.mount: Deactivated successfully.
Feb 25 06:49:51 np0005629333 podman[80662]: 2026-02-25 11:49:51.481835736 +0000 UTC m=+0.619953658 container remove ee5210cc55fb57298f3e9cae9e1dbdda502f3a451b82cccd31319cd02ade6701 (image=quay.io/ceph/ceph:v20, name=confident_bhaskara, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 06:49:51 np0005629333 systemd[1]: libpod-conmon-ee5210cc55fb57298f3e9cae9e1dbdda502f3a451b82cccd31319cd02ade6701.scope: Deactivated successfully.
Feb 25 06:49:51 np0005629333 ansible-async_wrapper.py[80559]: Module complete (80559)
Feb 25 06:49:51 np0005629333 ceph-mon[76335]: Updating compute-0:/var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/config/ceph.conf
Feb 25 06:49:51 np0005629333 ceph-mon[76335]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Feb 25 06:49:51 np0005629333 ceph-mon[76335]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Feb 25 06:49:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:49:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:49:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:51 np0005629333 ceph-mgr[76641]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 25 06:49:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 06:49:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:51 np0005629333 ceph-mgr[76641]: [progress INFO root] update: starting ev 8ea2337d-0fd8-4904-800a-a0082a1fa46e (Updating crash deployment (+1 -> 1))
Feb 25 06:49:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb 25 06:49:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 25 06:49:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Feb 25 06:49:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:49:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:49:51 np0005629333 ceph-mgr[76641]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Feb 25 06:49:51 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Feb 25 06:49:52 np0005629333 podman[81169]: 2026-02-25 11:49:52.066659518 +0000 UTC m=+0.039166984 container create 286910c05029cf6c4fba779254631e0fd33f0869195080d4f1cfd7a32b1a2dc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sammet, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 25 06:49:52 np0005629333 systemd[1]: Started libpod-conmon-286910c05029cf6c4fba779254631e0fd33f0869195080d4f1cfd7a32b1a2dc7.scope.
Feb 25 06:49:52 np0005629333 python3[81149]: ansible-ansible.legacy.async_status Invoked with jid=j90832755647.80502 mode=status _async_dir=/root/.ansible_async
Feb 25 06:49:52 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:52 np0005629333 podman[81169]: 2026-02-25 11:49:52.050557884 +0000 UTC m=+0.023065350 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:49:52 np0005629333 podman[81169]: 2026-02-25 11:49:52.153379412 +0000 UTC m=+0.125886978 container init 286910c05029cf6c4fba779254631e0fd33f0869195080d4f1cfd7a32b1a2dc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sammet, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 06:49:52 np0005629333 podman[81169]: 2026-02-25 11:49:52.159821343 +0000 UTC m=+0.132328849 container start 286910c05029cf6c4fba779254631e0fd33f0869195080d4f1cfd7a32b1a2dc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:49:52 np0005629333 podman[81169]: 2026-02-25 11:49:52.163302236 +0000 UTC m=+0.135809732 container attach 286910c05029cf6c4fba779254631e0fd33f0869195080d4f1cfd7a32b1a2dc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sammet, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:49:52 np0005629333 festive_sammet[81186]: 167 167
Feb 25 06:49:52 np0005629333 systemd[1]: libpod-286910c05029cf6c4fba779254631e0fd33f0869195080d4f1cfd7a32b1a2dc7.scope: Deactivated successfully.
Feb 25 06:49:52 np0005629333 podman[81169]: 2026-02-25 11:49:52.16522667 +0000 UTC m=+0.137734166 container died 286910c05029cf6c4fba779254631e0fd33f0869195080d4f1cfd7a32b1a2dc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:49:52 np0005629333 systemd[1]: var-lib-containers-storage-overlay-4383b090f6d7571ebef4e0b705f16ab5241884a36dfd741e5b8f4232cd968439-merged.mount: Deactivated successfully.
Feb 25 06:49:52 np0005629333 podman[81169]: 2026-02-25 11:49:52.214908613 +0000 UTC m=+0.187416109 container remove 286910c05029cf6c4fba779254631e0fd33f0869195080d4f1cfd7a32b1a2dc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_sammet, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:49:52 np0005629333 systemd[1]: libpod-conmon-286910c05029cf6c4fba779254631e0fd33f0869195080d4f1cfd7a32b1a2dc7.scope: Deactivated successfully.
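The bare "167 167" emitted by festive_sammet looks like a UID/GID probe: before deploying a daemon, cephadm typically runs a short-lived container that stats a ceph-owned path to learn which numeric user the daemon should run as, and 167:167 is the ceph user and group inside these images. A hedged reconstruction of such a probe; the stat entrypoint and the /var/lib/ceph path are assumptions, not taken from this log:

    import subprocess

    # Hypothetical probe: print the numeric owner of a ceph-owned path
    # inside the image ("167 167" in the log output above).
    out = subprocess.run(
        ["podman", "run", "--rm",
         "--entrypoint", "stat",
         "quay.io/ceph/ceph:v20",
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.split()

    uid, gid = map(int, out)
    print(uid, gid)  # expected: 167 167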
Feb 25 06:49:52 np0005629333 systemd[1]: Reloading.
Feb 25 06:49:52 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:49:52 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:49:52 np0005629333 ceph-mon[76335]: Updating compute-0:/var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/config/ceph.client.admin.keyring
Feb 25 06:49:52 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:52 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:52 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:52 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 25 06:49:52 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Feb 25 06:49:52 np0005629333 ceph-mon[76335]: Deploying daemon crash.compute-0 on compute-0
Feb 25 06:49:52 np0005629333 systemd[1]: Reloading.
Feb 25 06:49:52 np0005629333 python3[81296]: ansible-ansible.legacy.async_status Invoked with jid=j90832755647.80502 mode=cleanup _async_dir=/root/.ansible_async
Feb 25 06:49:52 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:49:52 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:49:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:49:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 25 06:49:52 np0005629333 systemd[1]: Starting Ceph crash.compute-0 for 8ac33163-6221-5d58-9a39-8b6933fe7762...
Feb 25 06:49:53 np0005629333 podman[81416]: 2026-02-25 11:49:53.072455355 +0000 UTC m=+0.058816714 container create d81aab8572381824a4135e1e6c0256722f2cdf4011efd2754a5fd36037615827 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-crash-compute-0, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 06:49:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2313309ac242e7e3cbc57ab6cc03e826bd9da0910d642cb773d21d1dba9baef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2313309ac242e7e3cbc57ab6cc03e826bd9da0910d642cb773d21d1dba9baef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2313309ac242e7e3cbc57ab6cc03e826bd9da0910d642cb773d21d1dba9baef/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2313309ac242e7e3cbc57ab6cc03e826bd9da0910d642cb773d21d1dba9baef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:53 np0005629333 python3[81411]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 25 06:49:53 np0005629333 podman[81416]: 2026-02-25 11:49:53.036060933 +0000 UTC m=+0.022422342 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:49:53 np0005629333 podman[81416]: 2026-02-25 11:49:53.165796008 +0000 UTC m=+0.152157367 container init d81aab8572381824a4135e1e6c0256722f2cdf4011efd2754a5fd36037615827 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-crash-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 06:49:53 np0005629333 podman[81416]: 2026-02-25 11:49:53.171968658 +0000 UTC m=+0.158329997 container start d81aab8572381824a4135e1e6c0256722f2cdf4011efd2754a5fd36037615827 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-crash-compute-0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 06:49:53 np0005629333 bash[81416]: d81aab8572381824a4135e1e6c0256722f2cdf4011efd2754a5fd36037615827
Feb 25 06:49:53 np0005629333 systemd[1]: Started Ceph crash.compute-0 for 8ac33163-6221-5d58-9a39-8b6933fe7762.
Feb 25 06:49:53 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-crash-compute-0[81432]: INFO:ceph-crash:pinging cluster to exercise our key
Feb 25 06:49:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:49:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:49:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb 25 06:49:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:53 np0005629333 ceph-mgr[76641]: [progress INFO root] complete: finished ev 8ea2337d-0fd8-4904-800a-a0082a1fa46e (Updating crash deployment (+1 -> 1))
Feb 25 06:49:53 np0005629333 ceph-mgr[76641]: [progress INFO root] Completed event 8ea2337d-0fd8-4904-800a-a0082a1fa46e (Updating crash deployment (+1 -> 1)) in 2 seconds
Feb 25 06:49:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Feb 25 06:49:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb 25 06:49:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:53 np0005629333 ceph-mgr[76641]: [progress INFO root] update: starting ev a373d7ca-6593-4acb-b8a7-9cbc8b635923 (Updating mgr deployment (+1 -> 2))
Feb 25 06:49:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.kaeadd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb 25 06:49:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.kaeadd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 25 06:49:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.kaeadd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Feb 25 06:49:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 25 06:49:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "mgr services"} : dispatch
Feb 25 06:49:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:49:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:49:53 np0005629333 ceph-mgr[76641]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.kaeadd on compute-0
Feb 25 06:49:53 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.kaeadd on compute-0
Feb 25 06:49:53 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-crash-compute-0[81432]: 2026-02-25T11:49:53.313+0000 7f3aa3efe640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Feb 25 06:49:53 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-crash-compute-0[81432]: 2026-02-25T11:49:53.313+0000 7f3aa3efe640 -1 AuthRegistry(0x7f3a9c052930) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Feb 25 06:49:53 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-crash-compute-0[81432]: 2026-02-25T11:49:53.314+0000 7f3aa3efe640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Feb 25 06:49:53 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-crash-compute-0[81432]: 2026-02-25T11:49:53.314+0000 7f3aa3efe640 -1 AuthRegistry(0x7f3aa3efcfe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Feb 25 06:49:53 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-crash-compute-0[81432]: 2026-02-25T11:49:53.314+0000 7f3aa1c73640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Feb 25 06:49:53 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-crash-compute-0[81432]: 2026-02-25T11:49:53.315+0000 7f3aa3efe640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Feb 25 06:49:53 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-crash-compute-0[81432]: [errno 13] RADOS permission denied (error connecting to the cluster)
Feb 25 06:49:53 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-crash-compute-0[81432]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
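The ceph-crash startup ping above fails with EACCES because none of the /etc/ceph keyring paths it probes exist yet, so the client falls back to cephx-disabled and the mon rejects it (allowed_methods [2] = cephx, while the client can only offer [1] = none); the agent then simply waits and keeps monitoring /var/lib/ceph/crash on its 600s cycle. A hedged sketch for checking the key cephadm should provision, where the keyring path is assumed from the usual cephadm per-fsid layout rather than taken from this log:

    # does the crash client key exist yet?
    ceph auth get client.crash.compute-0
    # retry the same kind of ping with the daemon's own key; the path is an
    # assumption based on the standard cephadm layout for this fsid
    ceph -n client.crash.compute-0 \
        -k /var/lib/ceph/8ac33163-6221-5d58-9a39-8b6933fe7762/crash.compute-0/keyring -s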
Feb 25 06:49:53 np0005629333 ceph-mgr[76641]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 25 06:49:53 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:53 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:53 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:53 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:53 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:53 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.kaeadd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 25 06:49:53 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.kaeadd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Feb 25 06:49:53 np0005629333 python3[81526]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:49:53 np0005629333 podman[81566]: 2026-02-25 11:49:53.779107996 +0000 UTC m=+0.064564975 container create aaaacde143aed9f9ba3ec0f963225cc331ad53b028bed0ae1448507d1670f388 (image=quay.io/ceph/ceph:v20, name=compassionate_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 06:49:53 np0005629333 podman[81569]: 2026-02-25 11:49:53.80662801 +0000 UTC m=+0.085623237 container create 87221c469cf06e9125af5c6e6690e86e4c15829f139f67cb71ad5b6c69aaa6bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_allen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 06:49:53 np0005629333 systemd[1]: Started libpod-conmon-aaaacde143aed9f9ba3ec0f963225cc331ad53b028bed0ae1448507d1670f388.scope.
Feb 25 06:49:53 np0005629333 systemd[1]: Started libpod-conmon-87221c469cf06e9125af5c6e6690e86e4c15829f139f67cb71ad5b6c69aaa6bd.scope.
Feb 25 06:49:53 np0005629333 podman[81566]: 2026-02-25 11:49:53.739134037 +0000 UTC m=+0.024591026 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:53 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:53 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d07c472d89d29da75a5a6729d46f3487237a37c9544260b42569bc57661cd90/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d07c472d89d29da75a5a6729d46f3487237a37c9544260b42569bc57661cd90/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d07c472d89d29da75a5a6729d46f3487237a37c9544260b42569bc57661cd90/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
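The three kernel lines above are informational, not errors: the bind-mounted paths sit on an XFS filesystem created without the bigtime feature, so its inode timestamps saturate at 2038-01-19 (0x7fffffff). One way to confirm on the host, assuming an xfsprogs recent enough to report the flag:

    # bigtime=1 means 64-bit timestamps; bigtime=0 matches the kernel warning above
    xfs_info / | grep -o 'bigtime=[01]'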
Feb 25 06:49:53 np0005629333 podman[81569]: 2026-02-25 11:49:53.756304158 +0000 UTC m=+0.035299415 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:49:53 np0005629333 podman[81569]: 2026-02-25 11:49:53.877856535 +0000 UTC m=+0.156851792 container init 87221c469cf06e9125af5c6e6690e86e4c15829f139f67cb71ad5b6c69aaa6bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_allen, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 06:49:53 np0005629333 podman[81566]: 2026-02-25 11:49:53.885619345 +0000 UTC m=+0.171076334 container init aaaacde143aed9f9ba3ec0f963225cc331ad53b028bed0ae1448507d1670f388 (image=quay.io/ceph/ceph:v20, name=compassionate_maxwell, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 06:49:53 np0005629333 podman[81569]: 2026-02-25 11:49:53.886684161 +0000 UTC m=+0.165679388 container start 87221c469cf06e9125af5c6e6690e86e4c15829f139f67cb71ad5b6c69aaa6bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_allen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:49:53 np0005629333 podman[81566]: 2026-02-25 11:49:53.891591416 +0000 UTC m=+0.177048385 container start aaaacde143aed9f9ba3ec0f963225cc331ad53b028bed0ae1448507d1670f388 (image=quay.io/ceph/ceph:v20, name=compassionate_maxwell, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 06:49:53 np0005629333 podman[81569]: 2026-02-25 11:49:53.893153974 +0000 UTC m=+0.172149201 container attach 87221c469cf06e9125af5c6e6690e86e4c15829f139f67cb71ad5b6c69aaa6bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_allen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:49:53 np0005629333 dreamy_allen[81602]: 167 167
Feb 25 06:49:53 np0005629333 systemd[1]: libpod-87221c469cf06e9125af5c6e6690e86e4c15829f139f67cb71ad5b6c69aaa6bd.scope: Deactivated successfully.
Feb 25 06:49:53 np0005629333 podman[81566]: 2026-02-25 11:49:53.901933798 +0000 UTC m=+0.187390797 container attach aaaacde143aed9f9ba3ec0f963225cc331ad53b028bed0ae1448507d1670f388 (image=quay.io/ceph/ceph:v20, name=compassionate_maxwell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:49:53 np0005629333 podman[81569]: 2026-02-25 11:49:53.903655784 +0000 UTC m=+0.182651011 container died 87221c469cf06e9125af5c6e6690e86e4c15829f139f67cb71ad5b6c69aaa6bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_allen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 06:49:53 np0005629333 systemd[1]: var-lib-containers-storage-overlay-15e68d1088d1e3672ca3d0dbd300dd5bedcf80be204d020ddf724a5110fbe87d-merged.mount: Deactivated successfully.
Feb 25 06:49:53 np0005629333 podman[81569]: 2026-02-25 11:49:53.940280946 +0000 UTC m=+0.219276173 container remove 87221c469cf06e9125af5c6e6690e86e4c15829f139f67cb71ad5b6c69aaa6bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:49:53 np0005629333 systemd[1]: libpod-conmon-87221c469cf06e9125af5c6e6690e86e4c15829f139f67cb71ad5b6c69aaa6bd.scope: Deactivated successfully.
Feb 25 06:49:53 np0005629333 systemd[1]: Reloading.
Feb 25 06:49:54 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:49:54 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:49:54 np0005629333 systemd[1]: Reloading.
Feb 25 06:49:54 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 25 06:49:54 np0005629333 compassionate_maxwell[81600]: 
Feb 25 06:49:54 np0005629333 compassionate_maxwell[81600]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
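The JSON above is the result of the podman invocation logged at 06:49:53: ansible shells into the ceph image and asks the orchestrator for its status, and cephadm reports itself available, unpaused, with 10 workers. A trimmed re-run of the same check, with the volumes and fsid copied from the logged command (dropping the assimilate_ceph.conf and ceph_spec.yaml mounts is an assumption; this one command does not read them):

    podman run --rm --net=host --volume /etc/ceph:/etc/ceph:z \
        --entrypoint ceph quay.io/ceph/ceph:v20 \
        --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        orch status --format json
    # expected shape: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}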
Feb 25 06:49:54 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:49:54 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:49:54 np0005629333 podman[81731]: 2026-02-25 11:49:54.359674301 +0000 UTC m=+0.020682275 container died aaaacde143aed9f9ba3ec0f963225cc331ad53b028bed0ae1448507d1670f388 (image=quay.io/ceph/ceph:v20, name=compassionate_maxwell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 06:49:54 np0005629333 systemd[1]: libpod-aaaacde143aed9f9ba3ec0f963225cc331ad53b028bed0ae1448507d1670f388.scope: Deactivated successfully.
Feb 25 06:49:54 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1d07c472d89d29da75a5a6729d46f3487237a37c9544260b42569bc57661cd90-merged.mount: Deactivated successfully.
Feb 25 06:49:54 np0005629333 systemd[1]: Starting Ceph mgr.compute-0.kaeadd for 8ac33163-6221-5d58-9a39-8b6933fe7762...
Feb 25 06:49:54 np0005629333 podman[81731]: 2026-02-25 11:49:54.478945139 +0000 UTC m=+0.139953103 container remove aaaacde143aed9f9ba3ec0f963225cc331ad53b028bed0ae1448507d1670f388 (image=quay.io/ceph/ceph:v20, name=compassionate_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 06:49:54 np0005629333 systemd[1]: libpod-conmon-aaaacde143aed9f9ba3ec0f963225cc331ad53b028bed0ae1448507d1670f388.scope: Deactivated successfully.
Feb 25 06:49:54 np0005629333 ceph-mon[76335]: Deploying daemon mgr.compute-0.kaeadd on compute-0
Feb 25 06:49:54 np0005629333 podman[81795]: 2026-02-25 11:49:54.685648411 +0000 UTC m=+0.043983766 container create d558a9e3603044e0b415e52316779d8864e8c646aa03849d49dbc6acb27d4f8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mgr-compute-0-kaeadd, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 06:49:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/052b04ef3a07ff89013b0e75b1f112e6788922c8ce8f8cf281f73aeefafafbd0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/052b04ef3a07ff89013b0e75b1f112e6788922c8ce8f8cf281f73aeefafafbd0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/052b04ef3a07ff89013b0e75b1f112e6788922c8ce8f8cf281f73aeefafafbd0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/052b04ef3a07ff89013b0e75b1f112e6788922c8ce8f8cf281f73aeefafafbd0/merged/var/lib/ceph/mgr/ceph-compute-0.kaeadd supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:54 np0005629333 podman[81795]: 2026-02-25 11:49:54.752007063 +0000 UTC m=+0.110342438 container init d558a9e3603044e0b415e52316779d8864e8c646aa03849d49dbc6acb27d4f8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mgr-compute-0-kaeadd, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 06:49:54 np0005629333 podman[81795]: 2026-02-25 11:49:54.757098806 +0000 UTC m=+0.115434161 container start d558a9e3603044e0b415e52316779d8864e8c646aa03849d49dbc6acb27d4f8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mgr-compute-0-kaeadd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 06:49:54 np0005629333 bash[81795]: d558a9e3603044e0b415e52316779d8864e8c646aa03849d49dbc6acb27d4f8b
Feb 25 06:49:54 np0005629333 podman[81795]: 2026-02-25 11:49:54.671870848 +0000 UTC m=+0.030206233 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:49:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 25 06:49:54 np0005629333 systemd[1]: Started Ceph mgr.compute-0.kaeadd for 8ac33163-6221-5d58-9a39-8b6933fe7762.
Feb 25 06:49:54 np0005629333 ceph-mgr[81815]: set uid:gid to 167:167 (ceph:ceph)
Feb 25 06:49:54 np0005629333 ceph-mgr[81815]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Feb 25 06:49:54 np0005629333 ceph-mgr[81815]: pidfile_write: ignore empty --pid-file
Feb 25 06:49:54 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'alerts'
Feb 25 06:49:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:49:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:49:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 25 06:49:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:54 np0005629333 ceph-mgr[76641]: [progress INFO root] complete: finished ev a373d7ca-6593-4acb-b8a7-9cbc8b635923 (Updating mgr deployment (+1 -> 2))
Feb 25 06:49:54 np0005629333 ceph-mgr[76641]: [progress INFO root] Completed event a373d7ca-6593-4acb-b8a7-9cbc8b635923 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Feb 25 06:49:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 25 06:49:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:54 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'balancer'
Feb 25 06:49:54 np0005629333 python3[81861]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:49:55 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'cephadm'
Feb 25 06:49:55 np0005629333 podman[81916]: 2026-02-25 11:49:55.033300158 +0000 UTC m=+0.038157790 container create e1276aadbdad2af9f89d944889b6c08e7078dcd589b30f99238bccfa7df671b7 (image=quay.io/ceph/ceph:v20, name=xenodochial_babbage, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 06:49:55 np0005629333 systemd[1]: Started libpod-conmon-e1276aadbdad2af9f89d944889b6c08e7078dcd589b30f99238bccfa7df671b7.scope.
Feb 25 06:49:55 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/171e23526dcd6d3ae365418cee15903469d383547fa8a2e5ccdc913821b928df/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:55 np0005629333 podman[81916]: 2026-02-25 11:49:55.018606375 +0000 UTC m=+0.023463947 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/171e23526dcd6d3ae365418cee15903469d383547fa8a2e5ccdc913821b928df/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/171e23526dcd6d3ae365418cee15903469d383547fa8a2e5ccdc913821b928df/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:55 np0005629333 podman[81916]: 2026-02-25 11:49:55.132640542 +0000 UTC m=+0.137498144 container init e1276aadbdad2af9f89d944889b6c08e7078dcd589b30f99238bccfa7df671b7 (image=quay.io/ceph/ceph:v20, name=xenodochial_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 06:49:55 np0005629333 podman[81916]: 2026-02-25 11:49:55.142088296 +0000 UTC m=+0.146945868 container start e1276aadbdad2af9f89d944889b6c08e7078dcd589b30f99238bccfa7df671b7 (image=quay.io/ceph/ceph:v20, name=xenodochial_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:49:55 np0005629333 podman[81916]: 2026-02-25 11:49:55.145311857 +0000 UTC m=+0.150169459 container attach e1276aadbdad2af9f89d944889b6c08e7078dcd589b30f99238bccfa7df671b7 (image=quay.io/ceph/ceph:v20, name=xenodochial_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:49:55 np0005629333 podman[82022]: 2026-02-25 11:49:55.451770162 +0000 UTC m=+0.073446974 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:49:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Feb 25 06:49:55 np0005629333 podman[82022]: 2026-02-25 11:49:55.540071564 +0000 UTC m=+0.161748336 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:49:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/323746341' entity='client.admin' 
Feb 25 06:49:55 np0005629333 ceph-mgr[76641]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 25 06:49:55 np0005629333 systemd[1]: libpod-e1276aadbdad2af9f89d944889b6c08e7078dcd589b30f99238bccfa7df671b7.scope: Deactivated successfully.
Feb 25 06:49:55 np0005629333 podman[81916]: 2026-02-25 11:49:55.566015159 +0000 UTC m=+0.570872741 container died e1276aadbdad2af9f89d944889b6c08e7078dcd589b30f99238bccfa7df671b7 (image=quay.io/ceph/ceph:v20, name=xenodochial_babbage, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030)
Feb 25 06:49:55 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:55 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:55 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:55 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:55 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/323746341' entity='client.admin' 
Feb 25 06:49:55 np0005629333 systemd[1]: var-lib-containers-storage-overlay-171e23526dcd6d3ae365418cee15903469d383547fa8a2e5ccdc913821b928df-merged.mount: Deactivated successfully.
Feb 25 06:49:55 np0005629333 podman[81916]: 2026-02-25 11:49:55.617580675 +0000 UTC m=+0.622438247 container remove e1276aadbdad2af9f89d944889b6c08e7078dcd589b30f99238bccfa7df671b7 (image=quay.io/ceph/ceph:v20, name=xenodochial_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Feb 25 06:49:55 np0005629333 systemd[1]: libpod-conmon-e1276aadbdad2af9f89d944889b6c08e7078dcd589b30f99238bccfa7df671b7.scope: Deactivated successfully.
Feb 25 06:49:55 np0005629333 ansible-async_wrapper.py[80557]: Done in kid B.
Feb 25 06:49:55 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'crash'
Feb 25 06:49:55 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'dashboard'
Feb 25 06:49:55 np0005629333 python3[82164]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
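This task and the log_to_file task at 06:49:54 together switch on file-based logging cluster-wide. In plain CLI form, matching the mon_command audit entries (the verification line is an assumption added for illustration, not something the log shows being run):

    ceph config set global log_to_file true
    ceph config set global mon_cluster_log_to_file true
    # check what the mons resolved the option to
    ceph config get mon log_to_file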
Feb 25 06:49:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:49:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:49:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:49:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:49:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 06:49:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:49:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 06:49:55 np0005629333 podman[82181]: 2026-02-25 11:49:55.975941061 +0000 UTC m=+0.036509118 container create 4bb76a3e29fca616c311bfc67c99a3c6d676f50cedd031898f86a4973ace9348 (image=quay.io/ceph/ceph:v20, name=awesome_benz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 06:49:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:56 np0005629333 systemd[1]: Started libpod-conmon-4bb76a3e29fca616c311bfc67c99a3c6d676f50cedd031898f86a4973ace9348.scope.
Feb 25 06:49:56 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a1b5b5da60910260ac1f1993280f255af66e1434bd4e1a68407e7a77020b00/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a1b5b5da60910260ac1f1993280f255af66e1434bd4e1a68407e7a77020b00/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a1b5b5da60910260ac1f1993280f255af66e1434bd4e1a68407e7a77020b00/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:56 np0005629333 podman[82181]: 2026-02-25 11:49:55.957253373 +0000 UTC m=+0.017821430 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:56 np0005629333 ceph-mgr[76641]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Feb 25 06:49:56 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:49:56 np0005629333 podman[82181]: 2026-02-25 11:49:56.064755196 +0000 UTC m=+0.125323263 container init 4bb76a3e29fca616c311bfc67c99a3c6d676f50cedd031898f86a4973ace9348 (image=quay.io/ceph/ceph:v20, name=awesome_benz, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:49:56 np0005629333 ceph-mgr[76641]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Feb 25 06:49:56 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Feb 25 06:49:56 np0005629333 podman[82181]: 2026-02-25 11:49:56.072249053 +0000 UTC m=+0.132817140 container start 4bb76a3e29fca616c311bfc67c99a3c6d676f50cedd031898f86a4973ace9348 (image=quay.io/ceph/ceph:v20, name=awesome_benz, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:49:56 np0005629333 podman[82181]: 2026-02-25 11:49:56.075872512 +0000 UTC m=+0.136440579 container attach 4bb76a3e29fca616c311bfc67c99a3c6d676f50cedd031898f86a4973ace9348 (image=quay.io/ceph/ceph:v20, name=awesome_benz, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 06:49:56 np0005629333 podman[82311]: 2026-02-25 11:49:56.448341795 +0000 UTC m=+0.038725755 container create e2db3e9cf9b39a68197e6c73792f845bd6fd058d7b031b57c92441817666a236 (image=quay.io/ceph/ceph:v20, name=optimistic_hermann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1433952218' entity='client.admin' 
Feb 25 06:49:56 np0005629333 systemd[1]: Started libpod-conmon-e2db3e9cf9b39a68197e6c73792f845bd6fd058d7b031b57c92441817666a236.scope.
Feb 25 06:49:56 np0005629333 podman[82181]: 2026-02-25 11:49:56.495581311 +0000 UTC m=+0.556149358 container died 4bb76a3e29fca616c311bfc67c99a3c6d676f50cedd031898f86a4973ace9348 (image=quay.io/ceph/ceph:v20, name=awesome_benz, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Feb 25 06:49:56 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:56 np0005629333 systemd[1]: libpod-4bb76a3e29fca616c311bfc67c99a3c6d676f50cedd031898f86a4973ace9348.scope: Deactivated successfully.
Feb 25 06:49:56 np0005629333 podman[82311]: 2026-02-25 11:49:56.511170983 +0000 UTC m=+0.101554943 container init e2db3e9cf9b39a68197e6c73792f845bd6fd058d7b031b57c92441817666a236 (image=quay.io/ceph/ceph:v20, name=optimistic_hermann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:49:56 np0005629333 systemd[1]: var-lib-containers-storage-overlay-90a1b5b5da60910260ac1f1993280f255af66e1434bd4e1a68407e7a77020b00-merged.mount: Deactivated successfully.
Feb 25 06:49:56 np0005629333 podman[82311]: 2026-02-25 11:49:56.521072296 +0000 UTC m=+0.111456256 container start e2db3e9cf9b39a68197e6c73792f845bd6fd058d7b031b57c92441817666a236 (image=quay.io/ceph/ceph:v20, name=optimistic_hermann, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:49:56 np0005629333 optimistic_hermann[82329]: 167 167
Feb 25 06:49:56 np0005629333 systemd[1]: libpod-e2db3e9cf9b39a68197e6c73792f845bd6fd058d7b031b57c92441817666a236.scope: Deactivated successfully.
Feb 25 06:49:56 np0005629333 conmon[82329]: conmon e2db3e9cf9b39a68197e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e2db3e9cf9b39a68197e6c73792f845bd6fd058d7b031b57c92441817666a236.scope/container/memory.events
Feb 25 06:49:56 np0005629333 podman[82311]: 2026-02-25 11:49:56.435248832 +0000 UTC m=+0.025632812 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:56 np0005629333 podman[82181]: 2026-02-25 11:49:56.531955412 +0000 UTC m=+0.592523459 container remove 4bb76a3e29fca616c311bfc67c99a3c6d676f50cedd031898f86a4973ace9348 (image=quay.io/ceph/ceph:v20, name=awesome_benz, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:49:56 np0005629333 ceph-mgr[76641]: [progress INFO root] Writing back 2 completed events
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 25 06:49:56 np0005629333 podman[82311]: 2026-02-25 11:49:56.549079631 +0000 UTC m=+0.139463611 container attach e2db3e9cf9b39a68197e6c73792f845bd6fd058d7b031b57c92441817666a236 (image=quay.io/ceph/ceph:v20, name=optimistic_hermann, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:49:56 np0005629333 podman[82311]: 2026-02-25 11:49:56.550005232 +0000 UTC m=+0.140389192 container died e2db3e9cf9b39a68197e6c73792f845bd6fd058d7b031b57c92441817666a236 (image=quay.io/ceph/ceph:v20, name=optimistic_hermann, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:56 np0005629333 systemd[1]: libpod-conmon-4bb76a3e29fca616c311bfc67c99a3c6d676f50cedd031898f86a4973ace9348.scope: Deactivated successfully.
Feb 25 06:49:56 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5fa928605361df98847f81c9dd66118baa28c15b882a90e84c6ccc7359ba7d6a-merged.mount: Deactivated successfully.
Feb 25 06:49:56 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'devicehealth'
Feb 25 06:49:56 np0005629333 podman[82311]: 2026-02-25 11:49:56.59407435 +0000 UTC m=+0.184458310 container remove e2db3e9cf9b39a68197e6c73792f845bd6fd058d7b031b57c92441817666a236 (image=quay.io/ceph/ceph:v20, name=optimistic_hermann, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:49:56 np0005629333 systemd[1]: libpod-conmon-e2db3e9cf9b39a68197e6c73792f845bd6fd058d7b031b57c92441817666a236.scope: Deactivated successfully.
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:49:56 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'diskprediction_local'
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:56 np0005629333 ceph-mgr[76641]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.jzfame (unknown last config time)...
Feb 25 06:49:56 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.jzfame (unknown last config time)...
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.jzfame", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.jzfame", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "mgr services"} : dispatch
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:49:56 np0005629333 ceph-mgr[76641]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.jzfame on compute-0
Feb 25 06:49:56 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.jzfame on compute-0
Feb 25 06:49:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 25 06:49:56 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mgr-compute-0-kaeadd[81811]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb 25 06:49:56 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mgr-compute-0-kaeadd[81811]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb 25 06:49:56 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mgr-compute-0-kaeadd[81811]:  from numpy import show_config as show_numpy_config
Feb 25 06:49:56 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'influx'
Feb 25 06:49:56 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'insights'
Feb 25 06:49:56 np0005629333 python3[82433]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
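In the ansible-ansible.legacy.command entry above, the `#012` near the end of _raw_params is the syslog/journald octal escape for a newline (\012), so the logged value is really a multi-line shell command flattened onto one line. A minimal sketch of decoding the escape and replaying the command with subprocess, using only values that appear in the log (image, fsid, mounted paths):

    import subprocess

    raw = ("podman run --rm --net=host --ipc=host "
           "--volume /etc/ceph:/etc/ceph:z "
           "--volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z "
           "--volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z "
           "--entrypoint ceph quay.io/ceph/ceph:v20 "
           "--fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 "
           "-c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring "
           "osd set-require-min-compat-client mimic#012")

    # '#012' is octal 012, i.e. '\n'; restore it, then split into argv.
    cmd = raw.replace("#012", "\n").split()
    subprocess.run(cmd, check=True)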
Feb 25 06:49:56 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'iostat'
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: Reconfiguring mon.compute-0 (unknown last config time)...
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: Reconfiguring daemon mon.compute-0 on compute-0
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/1433952218' entity='client.admin' 
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: Reconfiguring mgr.compute-0.jzfame (unknown last config time)...
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.jzfame", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 25 06:49:56 np0005629333 ceph-mon[76335]: Reconfiguring daemon mgr.compute-0.jzfame on compute-0
Feb 25 06:49:56 np0005629333 podman[82434]: 2026-02-25 11:49:56.99048204 +0000 UTC m=+0.055510030 container create 28d3dc8670a319fc5d946fdf16fb6e4dc750ff6a8cf2b05f3ee069f7925f2bf1 (image=quay.io/ceph/ceph:v20, name=mystifying_rosalind, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 06:49:57 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'k8sevents'
Feb 25 06:49:57 np0005629333 systemd[1]: Started libpod-conmon-28d3dc8670a319fc5d946fdf16fb6e4dc750ff6a8cf2b05f3ee069f7925f2bf1.scope.
Feb 25 06:49:57 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cad71aa6e1e7f9688f8cbce9a1f00409f8a67a1cf60f2d149d288c08e1bd9b94/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cad71aa6e1e7f9688f8cbce9a1f00409f8a67a1cf60f2d149d288c08e1bd9b94/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cad71aa6e1e7f9688f8cbce9a1f00409f8a67a1cf60f2d149d288c08e1bd9b94/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
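The three xfs messages above are informational: the overlay bind mounts carry 32-bit inode timestamps, and 0x7fffffff seconds after the Unix epoch is the familiar year-2038 limit. The arithmetic, for reference:

    from datetime import datetime, timezone

    # 0x7fffffff = 2**31 - 1 = 2147483647 seconds after 1970-01-01T00:00:00Z
    limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
    print(limit)  # 2038-01-19 03:14:07+00:00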
Feb 25 06:49:57 np0005629333 podman[82462]: 2026-02-25 11:49:57.070819064 +0000 UTC m=+0.055270499 container create bf64e21ea65accf8d9dd32ada407071fba762165efe0c16d143f3a916e1a3026 (image=quay.io/ceph/ceph:v20, name=beautiful_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 06:49:57 np0005629333 podman[82434]: 2026-02-25 11:49:56.966238399 +0000 UTC m=+0.031266399 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:57 np0005629333 podman[82434]: 2026-02-25 11:49:57.077397492 +0000 UTC m=+0.142425482 container init 28d3dc8670a319fc5d946fdf16fb6e4dc750ff6a8cf2b05f3ee069f7925f2bf1 (image=quay.io/ceph/ceph:v20, name=mystifying_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Feb 25 06:49:57 np0005629333 podman[82434]: 2026-02-25 11:49:57.092122466 +0000 UTC m=+0.157150426 container start 28d3dc8670a319fc5d946fdf16fb6e4dc750ff6a8cf2b05f3ee069f7925f2bf1 (image=quay.io/ceph/ceph:v20, name=mystifying_rosalind, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Feb 25 06:49:57 np0005629333 podman[82434]: 2026-02-25 11:49:57.101319248 +0000 UTC m=+0.166347208 container attach 28d3dc8670a319fc5d946fdf16fb6e4dc750ff6a8cf2b05f3ee069f7925f2bf1 (image=quay.io/ceph/ceph:v20, name=mystifying_rosalind, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:49:57 np0005629333 systemd[1]: Started libpod-conmon-bf64e21ea65accf8d9dd32ada407071fba762165efe0c16d143f3a916e1a3026.scope.
Feb 25 06:49:57 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:57 np0005629333 podman[82462]: 2026-02-25 11:49:57.039394929 +0000 UTC m=+0.023846424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:57 np0005629333 podman[82462]: 2026-02-25 11:49:57.144745428 +0000 UTC m=+0.129196853 container init bf64e21ea65accf8d9dd32ada407071fba762165efe0c16d143f3a916e1a3026 (image=quay.io/ceph/ceph:v20, name=beautiful_chaplygin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 06:49:57 np0005629333 podman[82462]: 2026-02-25 11:49:57.150377964 +0000 UTC m=+0.134829369 container start bf64e21ea65accf8d9dd32ada407071fba762165efe0c16d143f3a916e1a3026 (image=quay.io/ceph/ceph:v20, name=beautiful_chaplygin, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:49:57 np0005629333 podman[82462]: 2026-02-25 11:49:57.153859406 +0000 UTC m=+0.138310861 container attach bf64e21ea65accf8d9dd32ada407071fba762165efe0c16d143f3a916e1a3026 (image=quay.io/ceph/ceph:v20, name=beautiful_chaplygin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:49:57 np0005629333 beautiful_chaplygin[82484]: 167 167
Feb 25 06:49:57 np0005629333 systemd[1]: libpod-bf64e21ea65accf8d9dd32ada407071fba762165efe0c16d143f3a916e1a3026.scope: Deactivated successfully.
Feb 25 06:49:57 np0005629333 podman[82462]: 2026-02-25 11:49:57.155032368 +0000 UTC m=+0.139483813 container died bf64e21ea65accf8d9dd32ada407071fba762165efe0c16d143f3a916e1a3026 (image=quay.io/ceph/ceph:v20, name=beautiful_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:49:57 np0005629333 systemd[1]: var-lib-containers-storage-overlay-9d854471e1777a79e8e7bad901617828a4a85dcb7b1660289b3a11c1aa338cee-merged.mount: Deactivated successfully.
Feb 25 06:49:57 np0005629333 podman[82462]: 2026-02-25 11:49:57.192444424 +0000 UTC m=+0.176895869 container remove bf64e21ea65accf8d9dd32ada407071fba762165efe0c16d143f3a916e1a3026 (image=quay.io/ceph/ceph:v20, name=beautiful_chaplygin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 06:49:57 np0005629333 systemd[1]: libpod-conmon-bf64e21ea65accf8d9dd32ada407071fba762165efe0c16d143f3a916e1a3026.scope: Deactivated successfully.
Feb 25 06:49:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:49:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:49:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:57 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'localpool'
Feb 25 06:49:57 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'mds_autoscaler'
Feb 25 06:49:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Feb 25 06:49:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2549284006' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Feb 25 06:49:57 np0005629333 ceph-mgr[76641]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 25 06:49:57 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'mirroring'
Feb 25 06:49:57 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'nfs'
Feb 25 06:49:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:49:57 np0005629333 podman[82611]: 2026-02-25 11:49:57.804433674 +0000 UTC m=+0.066285300 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:49:57 np0005629333 podman[82611]: 2026-02-25 11:49:57.90812795 +0000 UTC m=+0.169979576 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 06:49:57 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'orchestrator'
Feb 25 06:49:58 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'osd_perf_query'
Feb 25 06:49:58 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'osd_support'
Feb 25 06:49:58 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'pg_autoscaler'
Feb 25 06:49:58 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:58 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:58 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/2549284006' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Feb 25 06:49:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Feb 25 06:49:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 25 06:49:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2549284006' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Feb 25 06:49:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Feb 25 06:49:58 np0005629333 mystifying_rosalind[82476]: set require_min_compat_client to mimic
Feb 25 06:49:58 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Feb 25 06:49:58 np0005629333 systemd[1]: libpod-28d3dc8670a319fc5d946fdf16fb6e4dc750ff6a8cf2b05f3ee069f7925f2bf1.scope: Deactivated successfully.
Feb 25 06:49:58 np0005629333 podman[82434]: 2026-02-25 11:49:58.296107482 +0000 UTC m=+1.361135502 container died 28d3dc8670a319fc5d946fdf16fb6e4dc750ff6a8cf2b05f3ee069f7925f2bf1 (image=quay.io/ceph/ceph:v20, name=mystifying_rosalind, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 06:49:58 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'progress'
Feb 25 06:49:58 np0005629333 systemd[1]: var-lib-containers-storage-overlay-cad71aa6e1e7f9688f8cbce9a1f00409f8a67a1cf60f2d149d288c08e1bd9b94-merged.mount: Deactivated successfully.
Feb 25 06:49:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:49:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:49:58 np0005629333 podman[82434]: 2026-02-25 11:49:58.341043937 +0000 UTC m=+1.406071907 container remove 28d3dc8670a319fc5d946fdf16fb6e4dc750ff6a8cf2b05f3ee069f7925f2bf1 (image=quay.io/ceph/ceph:v20, name=mystifying_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 06:49:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:58 np0005629333 systemd[1]: libpod-conmon-28d3dc8670a319fc5d946fdf16fb6e4dc750ff6a8cf2b05f3ee069f7925f2bf1.scope: Deactivated successfully.
Feb 25 06:49:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:49:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:49:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 06:49:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:49:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 06:49:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:58 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'prometheus'
Feb 25 06:49:58 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'rbd_support'
Feb 25 06:49:58 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'rgw'
Feb 25 06:49:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 25 06:49:59 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'rook'
Feb 25 06:49:59 np0005629333 python3[82787]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
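This run feeds /home/ceph_spec.yaml to `ceph orch apply --in-file`. The spec itself is not reproduced in the log, but the mgr's responses further down (Saving service mon/mgr/osd.default_drive_group spec with placement compute-0, then the three "Scheduled ... update" lines) constrain its shape. A hypothetical reconstruction in cephadm's multi-document service-spec YAML; the data_devices selector is an assumption, since the actual device filter never appears in the log:

    service_type: mon
    placement:
      hosts:
        - compute-0
    ---
    service_type: mgr
    placement:
      hosts:
        - compute-0
    ---
    service_type: osd
    service_id: default_drive_group
    placement:
      hosts:
        - compute-0
    spec:
      data_devices:
        all: true    # assumption: real filter not shown in the log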
Feb 25 06:49:59 np0005629333 podman[82788]: 2026-02-25 11:49:59.063228157 +0000 UTC m=+0.039349023 container create fbae8c56fa42c50c54650d1ea0ea70218bde276b027f020c87e9e02802bcec80 (image=quay.io/ceph/ceph:v20, name=admiring_johnson, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 06:49:59 np0005629333 systemd[1]: Started libpod-conmon-fbae8c56fa42c50c54650d1ea0ea70218bde276b027f020c87e9e02802bcec80.scope.
Feb 25 06:49:59 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:49:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd804f1f254526d0aa80099d95bada1b0e65bc7fe641fb251944e0a4cc03cbc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd804f1f254526d0aa80099d95bada1b0e65bc7fe641fb251944e0a4cc03cbc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd804f1f254526d0aa80099d95bada1b0e65bc7fe641fb251944e0a4cc03cbc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 25 06:49:59 np0005629333 podman[82788]: 2026-02-25 11:49:59.048742133 +0000 UTC m=+0.024863019 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:49:59 np0005629333 podman[82788]: 2026-02-25 11:49:59.160607626 +0000 UTC m=+0.136728512 container init fbae8c56fa42c50c54650d1ea0ea70218bde276b027f020c87e9e02802bcec80 (image=quay.io/ceph/ceph:v20, name=admiring_johnson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:49:59 np0005629333 podman[82788]: 2026-02-25 11:49:59.164788879 +0000 UTC m=+0.140909745 container start fbae8c56fa42c50c54650d1ea0ea70218bde276b027f020c87e9e02802bcec80 (image=quay.io/ceph/ceph:v20, name=admiring_johnson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:49:59 np0005629333 podman[82788]: 2026-02-25 11:49:59.169898923 +0000 UTC m=+0.146019809 container attach fbae8c56fa42c50c54650d1ea0ea70218bde276b027f020c87e9e02802bcec80 (image=quay.io/ceph/ceph:v20, name=admiring_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 06:49:59 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/2549284006' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Feb 25 06:49:59 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:59 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:59 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:49:59 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:49:59 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'selftest'
Feb 25 06:49:59 np0005629333 ceph-mgr[76641]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 25 06:49:59 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 06:49:59 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'smb'
Feb 25 06:49:59 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'snap_schedule'
Feb 25 06:49:59 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'stats'
Feb 25 06:49:59 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'status'
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 25 06:50:00 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'telegraf'
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:00 np0005629333 ceph-mgr[76641]: [cephadm INFO root] Added host compute-0
Feb 25 06:50:00 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Added host compute-0
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:50:00 np0005629333 ceph-mgr[76641]: [cephadm INFO root] Saving service mon spec with placement compute-0
Feb 25 06:50:00 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:00 np0005629333 ceph-mgr[76641]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Feb 25 06:50:00 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:00 np0005629333 ceph-mgr[76641]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Feb 25 06:50:00 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Feb 25 06:50:00 np0005629333 ceph-mgr[76641]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Feb 25 06:50:00 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:00 np0005629333 admiring_johnson[82804]: Added host 'compute-0' with addr '192.168.122.100'
Feb 25 06:50:00 np0005629333 admiring_johnson[82804]: Scheduled mon update...
Feb 25 06:50:00 np0005629333 admiring_johnson[82804]: Scheduled mgr update...
Feb 25 06:50:00 np0005629333 admiring_johnson[82804]: Scheduled osd.default_drive_group update...
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:00 np0005629333 ceph-mgr[76641]: [progress INFO root] update: starting ev c2d9c5b6-c35a-4ad5-952f-56966f3d1612 (Updating mgr deployment (-1 -> 1))
Feb 25 06:50:00 np0005629333 ceph-mgr[76641]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.kaeadd from compute-0 -- ports [8765]
Feb 25 06:50:00 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.kaeadd from compute-0 -- ports [8765]
Feb 25 06:50:00 np0005629333 systemd[1]: libpod-fbae8c56fa42c50c54650d1ea0ea70218bde276b027f020c87e9e02802bcec80.scope: Deactivated successfully.
Feb 25 06:50:00 np0005629333 podman[82788]: 2026-02-25 11:50:00.095670259 +0000 UTC m=+1.071791135 container died fbae8c56fa42c50c54650d1ea0ea70218bde276b027f020c87e9e02802bcec80 (image=quay.io/ceph/ceph:v20, name=admiring_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:50:00 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'telemetry'
Feb 25 06:50:00 np0005629333 systemd[1]: var-lib-containers-storage-overlay-4cd804f1f254526d0aa80099d95bada1b0e65bc7fe641fb251944e0a4cc03cbc-merged.mount: Deactivated successfully.
Feb 25 06:50:00 np0005629333 podman[82788]: 2026-02-25 11:50:00.131347939 +0000 UTC m=+1.107468815 container remove fbae8c56fa42c50c54650d1ea0ea70218bde276b027f020c87e9e02802bcec80 (image=quay.io/ceph/ceph:v20, name=admiring_johnson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 06:50:00 np0005629333 systemd[1]: libpod-conmon-fbae8c56fa42c50c54650d1ea0ea70218bde276b027f020c87e9e02802bcec80.scope: Deactivated successfully.
Feb 25 06:50:00 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'test_orchestrator'
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:00 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:00 np0005629333 ceph-mgr[81815]: mgr[py] Loading python module 'volumes'
Feb 25 06:50:00 np0005629333 systemd[1]: Stopping Ceph mgr.compute-0.kaeadd for 8ac33163-6221-5d58-9a39-8b6933fe7762...
Feb 25 06:50:00 np0005629333 python3[82999]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
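This check shells out to `ceph status --format json | jq .osdmap.num_up_osds` (hence _uses_shell=True, so the pipe works). The jq step can be replaced by parsing the JSON directly; a minimal sketch against the status document that appears a few lines below, with the spec-file volume mounts omitted since `ceph status` does not need them:

    import json
    import subprocess

    out = subprocess.run(
        ["podman", "run", "--rm", "--net=host", "--ipc=host",
         "--volume", "/etc/ceph:/etc/ceph:z",
         "--entrypoint", "ceph", "quay.io/ceph/ceph:v20",
         "--fsid", "8ac33163-6221-5d58-9a39-8b6933fe7762",
         "-c", "/etc/ceph/ceph.conf",
         "-k", "/etc/ceph/ceph.client.admin.keyring",
         "status", "--format", "json"],
        check=True, capture_output=True, text=True).stdout

    status = json.loads(out)
    print(status["osdmap"]["num_up_osds"])   # 0 at this point in the log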
Feb 25 06:50:00 np0005629333 podman[83029]: 2026-02-25 11:50:00.636024115 +0000 UTC m=+0.041911814 container create 6e5bec5eba92f3df0f27d6f41b0357eb8f328f3f37dec93f727340b172036ccd (image=quay.io/ceph/ceph:v20, name=eager_wozniak, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 06:50:00 np0005629333 systemd[1]: Started libpod-conmon-6e5bec5eba92f3df0f27d6f41b0357eb8f328f3f37dec93f727340b172036ccd.scope.
Feb 25 06:50:00 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e399d769c207eb27563e9ad779f0d5bc17e393ba48c2a1d0ca3e67c0e36a3994/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e399d769c207eb27563e9ad779f0d5bc17e393ba48c2a1d0ca3e67c0e36a3994/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e399d769c207eb27563e9ad779f0d5bc17e393ba48c2a1d0ca3e67c0e36a3994/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:00 np0005629333 podman[83029]: 2026-02-25 11:50:00.621241429 +0000 UTC m=+0.027129138 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:50:00 np0005629333 podman[83029]: 2026-02-25 11:50:00.717253109 +0000 UTC m=+0.123140808 container init 6e5bec5eba92f3df0f27d6f41b0357eb8f328f3f37dec93f727340b172036ccd (image=quay.io/ceph/ceph:v20, name=eager_wozniak, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:50:00 np0005629333 podman[83042]: 2026-02-25 11:50:00.717679227 +0000 UTC m=+0.084730647 container died d558a9e3603044e0b415e52316779d8864e8c646aa03849d49dbc6acb27d4f8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mgr-compute-0-kaeadd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 06:50:00 np0005629333 podman[83029]: 2026-02-25 11:50:00.723039562 +0000 UTC m=+0.128927251 container start 6e5bec5eba92f3df0f27d6f41b0357eb8f328f3f37dec93f727340b172036ccd (image=quay.io/ceph/ceph:v20, name=eager_wozniak, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 25 06:50:00 np0005629333 podman[83029]: 2026-02-25 11:50:00.729811938 +0000 UTC m=+0.135699627 container attach 6e5bec5eba92f3df0f27d6f41b0357eb8f328f3f37dec93f727340b172036ccd (image=quay.io/ceph/ceph:v20, name=eager_wozniak, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 06:50:00 np0005629333 systemd[1]: var-lib-containers-storage-overlay-052b04ef3a07ff89013b0e75b1f112e6788922c8ce8f8cf281f73aeefafafbd0-merged.mount: Deactivated successfully.
Feb 25 06:50:00 np0005629333 podman[83042]: 2026-02-25 11:50:00.757877096 +0000 UTC m=+0.124928556 container remove d558a9e3603044e0b415e52316779d8864e8c646aa03849d49dbc6acb27d4f8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mgr-compute-0-kaeadd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:50:00 np0005629333 bash[83042]: ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mgr-compute-0-kaeadd
Feb 25 06:50:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 25 06:50:00 np0005629333 systemd[1]: ceph-8ac33163-6221-5d58-9a39-8b6933fe7762@mgr.compute-0.kaeadd.service: Main process exited, code=exited, status=143/n/a
Feb 25 06:50:00 np0005629333 systemd[1]: ceph-8ac33163-6221-5d58-9a39-8b6933fe7762@mgr.compute-0.kaeadd.service: Failed with result 'exit-code'.
Feb 25 06:50:00 np0005629333 systemd[1]: Stopped Ceph mgr.compute-0.kaeadd for 8ac33163-6221-5d58-9a39-8b6933fe7762.
Feb 25 06:50:00 np0005629333 systemd[1]: ceph-8ac33163-6221-5d58-9a39-8b6933fe7762@mgr.compute-0.kaeadd.service: Consumed 6.707s CPU time, 449.2M memory peak, read 0B from disk, written 175.5K to disk.
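The status=143 in the unit result above is not an internal mgr failure: by the common 128+N convention the container runtime propagates a signal death as exit code 128 plus the signal number, and 143 - 128 = 15 = SIGTERM, i.e. the old mgr.compute-0.kaeadd container was terminated cleanly when cephadm removed the duplicate daemon. For reference:

    import signal

    status = 143
    assert status - 128 == signal.SIGTERM == 15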
Feb 25 06:50:00 np0005629333 systemd[1]: Reloading.
Feb 25 06:50:01 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:50:01 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:50:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb 25 06:50:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3441895077' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 25 06:50:01 np0005629333 eager_wozniak[83059]: 
Feb 25 06:50:01 np0005629333 eager_wozniak[83059]: {"fsid":"8ac33163-6221-5d58-9a39-8b6933fe7762","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":48,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-02-25T11:49:10:477435+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-02-25T11:49:10.480293+0000","services":{}},"progress_events":{}}
Feb 25 06:50:01 np0005629333 systemd[1]: libpod-6e5bec5eba92f3df0f27d6f41b0357eb8f328f3f37dec93f727340b172036ccd.scope: Deactivated successfully.
Feb 25 06:50:01 np0005629333 podman[83029]: 2026-02-25 11:50:01.237894023 +0000 UTC m=+0.643781712 container died 6e5bec5eba92f3df0f27d6f41b0357eb8f328f3f37dec93f727340b172036ccd (image=quay.io/ceph/ceph:v20, name=eager_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Feb 25 06:50:01 np0005629333 ceph-mgr[76641]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.kaeadd
Feb 25 06:50:01 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.kaeadd
Feb 25 06:50:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.kaeadd"} v 0)
Feb 25 06:50:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.kaeadd"} : dispatch
Feb 25 06:50:01 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e399d769c207eb27563e9ad779f0d5bc17e393ba48c2a1d0ca3e67c0e36a3994-merged.mount: Deactivated successfully.
Feb 25 06:50:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.kaeadd"}]': finished
Feb 25 06:50:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 25 06:50:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:01 np0005629333 ceph-mgr[76641]: [progress INFO root] complete: finished ev c2d9c5b6-c35a-4ad5-952f-56966f3d1612 (Updating mgr deployment (-1 -> 1))
Feb 25 06:50:01 np0005629333 ceph-mgr[76641]: [progress INFO root] Completed event c2d9c5b6-c35a-4ad5-952f-56966f3d1612 (Updating mgr deployment (-1 -> 1)) in 1 seconds
Feb 25 06:50:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Feb 25 06:50:01 np0005629333 podman[83029]: 2026-02-25 11:50:01.286702918 +0000 UTC m=+0.692590607 container remove 6e5bec5eba92f3df0f27d6f41b0357eb8f328f3f37dec93f727340b172036ccd (image=quay.io/ceph/ceph:v20, name=eager_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 06:50:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 06:50:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 06:50:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 06:50:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 06:50:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:50:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:50:01 np0005629333 systemd[1]: libpod-conmon-6e5bec5eba92f3df0f27d6f41b0357eb8f328f3f37dec93f727340b172036ccd.scope: Deactivated successfully.
Feb 25 06:50:01 np0005629333 ceph-mon[76335]: Added host compute-0
Feb 25 06:50:01 np0005629333 ceph-mon[76335]: Saving service mon spec with placement compute-0
Feb 25 06:50:01 np0005629333 ceph-mon[76335]: Saving service mgr spec with placement compute-0
Feb 25 06:50:01 np0005629333 ceph-mon[76335]: Marking host: compute-0 for OSDSpec preview refresh.
Feb 25 06:50:01 np0005629333 ceph-mon[76335]: Saving service osd.default_drive_group spec with placement compute-0
Feb 25 06:50:01 np0005629333 ceph-mon[76335]: Removing daemon mgr.compute-0.kaeadd from compute-0 -- ports [8765]
Feb 25 06:50:01 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.kaeadd"} : dispatch
Feb 25 06:50:01 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.kaeadd"}]': finished
Feb 25 06:50:01 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:01 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:01 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
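The burst above is cephadm garbage-collecting the previous mgr daemon (mgr.compute-0.kaeadd) after mgr.compute-0.jzfame took over: the orchestrator removes the daemon from the host, then drops its cephx key. A rough manual equivalent, assuming the cephadm orchestrator is active (daemon and entity names are taken from the log itself):

    # remove the stale mgr daemon from its host, then revoke its key
    ceph orch daemon rm mgr.compute-0.kaeadd --force
    ceph auth rm mgr.compute-0.kaeadd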
Feb 25 06:50:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:50:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:50:01 np0005629333 ceph-mgr[76641]: [progress INFO root] Writing back 3 completed events
Feb 25 06:50:01 np0005629333 ceph-mgr[76641]: [devicehealth WARNING root] not enough osds to create mgr pool
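The devicehealth warning repeats throughout this window: the mgr module wants to create its metrics pool but declines while too few OSDs exist, which is expected this early in bootstrap. A quick way to watch the count it is presumably waiting on (standard CLI, no assumptions beyond a reachable cluster):

    # OSD totals as the monitor sees them (compare with the osdmap lines in this log)
    ceph osd stat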
Feb 25 06:50:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 25 06:50:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:50:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:50:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:50:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:50:01 np0005629333 podman[83248]: 2026-02-25 11:50:01.658516872 +0000 UTC m=+0.046691293 container create 4c385ec22caeb8a955b985fb31667eb678076b5df079836067a948c27b3a55cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_galois, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:50:01 np0005629333 systemd[1]: Started libpod-conmon-4c385ec22caeb8a955b985fb31667eb678076b5df079836067a948c27b3a55cc.scope.
Feb 25 06:50:01 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:01 np0005629333 podman[83248]: 2026-02-25 11:50:01.635978626 +0000 UTC m=+0.024153077 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:01 np0005629333 podman[83248]: 2026-02-25 11:50:01.73711253 +0000 UTC m=+0.125286931 container init 4c385ec22caeb8a955b985fb31667eb678076b5df079836067a948c27b3a55cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_galois, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 06:50:01 np0005629333 podman[83248]: 2026-02-25 11:50:01.745603552 +0000 UTC m=+0.133777963 container start 4c385ec22caeb8a955b985fb31667eb678076b5df079836067a948c27b3a55cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_galois, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:50:01 np0005629333 podman[83248]: 2026-02-25 11:50:01.748790781 +0000 UTC m=+0.136965242 container attach 4c385ec22caeb8a955b985fb31667eb678076b5df079836067a948c27b3a55cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_galois, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 06:50:01 np0005629333 condescending_galois[83264]: 167 167
Feb 25 06:50:01 np0005629333 systemd[1]: libpod-4c385ec22caeb8a955b985fb31667eb678076b5df079836067a948c27b3a55cc.scope: Deactivated successfully.
Feb 25 06:50:01 np0005629333 podman[83248]: 2026-02-25 11:50:01.75264362 +0000 UTC m=+0.140818041 container died 4c385ec22caeb8a955b985fb31667eb678076b5df079836067a948c27b3a55cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_galois, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 06:50:01 np0005629333 systemd[1]: var-lib-containers-storage-overlay-fa78a762412df164eb79d23537da8087b1b362c15f7df910d8e2c41fb46d8439-merged.mount: Deactivated successfully.
Feb 25 06:50:01 np0005629333 podman[83248]: 2026-02-25 11:50:01.799462128 +0000 UTC m=+0.187636509 container remove 4c385ec22caeb8a955b985fb31667eb678076b5df079836067a948c27b3a55cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_galois, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:50:01 np0005629333 systemd[1]: libpod-conmon-4c385ec22caeb8a955b985fb31667eb678076b5df079836067a948c27b3a55cc.scope: Deactivated successfully.
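condescending_galois is one of cephadm's short-lived helper containers: create, init, start, attach, died and remove all land within roughly 100 ms, and its only output is "167 167", which is most likely the uid/gid probe cephadm runs to learn the ceph user inside the image (167:167 is the ceph user/group in these container builds). A hedged sketch of that probe, reusing the image digest from the log:

    # assumption: cephadm stats /var/lib/ceph inside the image to discover the ceph uid/gid
    podman run --rm --entrypoint stat \
      quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 \
      -c '%u %g' /var/lib/ceph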
Feb 25 06:50:01 np0005629333 podman[83287]: 2026-02-25 11:50:01.955673821 +0000 UTC m=+0.050955710 container create 6c4f23f785cc39abf123b76189b01b8e2db071ef4d4c29c0075a66d4fe4ae989 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 06:50:01 np0005629333 systemd[1]: Started libpod-conmon-6c4f23f785cc39abf123b76189b01b8e2db071ef4d4c29c0075a66d4fe4ae989.scope.
Feb 25 06:50:02 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:02 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f64c886b3cf814585a5bb1377c89769636d4d570e118b6da91ecddff296c874/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:02 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f64c886b3cf814585a5bb1377c89769636d4d570e118b6da91ecddff296c874/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:02 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f64c886b3cf814585a5bb1377c89769636d4d570e118b6da91ecddff296c874/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:02 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f64c886b3cf814585a5bb1377c89769636d4d570e118b6da91ecddff296c874/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:02 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f64c886b3cf814585a5bb1377c89769636d4d570e118b6da91ecddff296c874/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
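The xfs notices above are informational, not errors: the filesystem backing these bind mounts was created without the bigtime feature, so its inode timestamps cap at 2038. Whether a given xfs filesystem has bigtime enabled can be checked with standard xfsprogs tooling:

    # bigtime=1 means timestamps beyond 2038 are supported
    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'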
Feb 25 06:50:02 np0005629333 podman[83287]: 2026-02-25 11:50:02.02310235 +0000 UTC m=+0.118384239 container init 6c4f23f785cc39abf123b76189b01b8e2db071ef4d4c29c0075a66d4fe4ae989 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 06:50:02 np0005629333 podman[83287]: 2026-02-25 11:50:02.028100209 +0000 UTC m=+0.123382098 container start 6c4f23f785cc39abf123b76189b01b8e2db071ef4d4c29c0075a66d4fe4ae989 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_joliot, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:50:02 np0005629333 podman[83287]: 2026-02-25 11:50:02.031888225 +0000 UTC m=+0.127170104 container attach 6c4f23f785cc39abf123b76189b01b8e2db071ef4d4c29c0075a66d4fe4ae989 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_joliot, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:02 np0005629333 podman[83287]: 2026-02-25 11:50:01.938272269 +0000 UTC m=+0.033554138 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:02 np0005629333 ceph-mon[76335]: Removing key for mgr.compute-0.kaeadd
Feb 25 06:50:02 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:02 np0005629333 lucid_joliot[83303]: --> passed data devices: 0 physical, 3 LVM
Feb 25 06:50:02 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:50:02 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 25 06:50:02 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new d19afe3c-7923-4776-bcc2-88886150b441
Feb 25 06:50:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "d19afe3c-7923-4776-bcc2-88886150b441"} v 0)
Feb 25 06:50:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1899039818' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "d19afe3c-7923-4776-bcc2-88886150b441"} : dispatch
Feb 25 06:50:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Feb 25 06:50:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 25 06:50:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1899039818' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d19afe3c-7923-4776-bcc2-88886150b441"}]': finished
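Here ceph-volume registers the first OSD id: under the client.bootstrap-osd identity it pipes freshly generated key material to "osd new" on stdin, and the monitor allocates osd.0 bound to the given fsid (the osdmap bump to e4 just below reflects this). The same call, reconstructed from the log; the stdin file is a placeholder for the generated secret:

    # 'osd new' reads the new daemon's key material from stdin (-i -)
    ceph --cluster ceph --name client.bootstrap-osd \
      --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
      -i - osd new d19afe3c-7923-4776-bcc2-88886150b441 < new-osd-keyring.json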
Feb 25 06:50:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Feb 25 06:50:03 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Feb 25 06:50:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 25 06:50:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 25 06:50:03 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
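"osd metadata" returning ENOENT right after "osd new" is a timing artifact, not a failure: the monitor only holds metadata once the OSD daemon boots and reports in, and at this point osd.0 exists in the map but has never started (the osdmap still shows "0 up"). Once the daemon is running, the same query succeeds:

    # empty/ENOENT until the OSD boots; populated afterwards
    ceph osd metadata 0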
Feb 25 06:50:03 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Feb 25 06:50:03 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Feb 25 06:50:03 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 25 06:50:03 np0005629333 lvm[83395]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 06:50:03 np0005629333 lvm[83395]: VG ceph_vg0 finished
Feb 25 06:50:03 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb 25 06:50:03 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Feb 25 06:50:03 np0005629333 ceph-mgr[76641]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 25 06:50:03 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/1899039818' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "d19afe3c-7923-4776-bcc2-88886150b441"} : dispatch
Feb 25 06:50:03 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/1899039818' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d19afe3c-7923-4776-bcc2-88886150b441"}]': finished
Feb 25 06:50:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Feb 25 06:50:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/818589498' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Feb 25 06:50:03 np0005629333 lucid_joliot[83303]: stderr: got monmap epoch 1
Feb 25 06:50:03 np0005629333 lucid_joliot[83303]: --> Creating keyring file for osd.0
Feb 25 06:50:03 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Feb 25 06:50:03 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Feb 25 06:50:03 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid d19afe3c-7923-4776-bcc2-88886150b441 --setuser ceph --setgroup ceph
Feb 25 06:50:04 np0005629333 lucid_joliot[83303]: stderr: 2026-02-25T11:50:03.960+0000 7f7bfd4fc8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Feb 25 06:50:04 np0005629333 lucid_joliot[83303]: stderr: 2026-02-25T11:50:03.991+0000 7f7bfd4fc8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Feb 25 06:50:04 np0005629333 lucid_joliot[83303]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Feb 25 06:50:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 25 06:50:04 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 25 06:50:04 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Feb 25 06:50:04 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb 25 06:50:04 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Feb 25 06:50:04 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 25 06:50:04 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 25 06:50:04 np0005629333 lucid_joliot[83303]: --> ceph-volume lvm activate successful for osd ID: 0
Feb 25 06:50:04 np0005629333 lucid_joliot[83303]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
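That completes the full prepare + activate cycle for osd.0; the two bluestore stderr lines above ("No valid bdev label found", "_read_fsid unparsable uuid") are expected on a brand-new LV, since mkfs is inspecting a device it has not yet labeled. The whole sequence is what a single ceph-volume invocation performs; a manual equivalent for the same VG/LV, run inside a ceph container or on a host with ceph-volume installed:

    # create = prepare (mkfs, LV tags, auth) + activate (tmpfs dir, symlinks)
    ceph-volume lvm create --bluestore --data ceph_vg0/ceph_lv0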
Feb 25 06:50:04 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:04 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:04 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new a25b4fc6-1504-44d3-aca7-62c5ef316350
Feb 25 06:50:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "a25b4fc6-1504-44d3-aca7-62c5ef316350"} v 0)
Feb 25 06:50:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3218094782' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "a25b4fc6-1504-44d3-aca7-62c5ef316350"} : dispatch
Feb 25 06:50:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Feb 25 06:50:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 25 06:50:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3218094782' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a25b4fc6-1504-44d3-aca7-62c5ef316350"}]': finished
Feb 25 06:50:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Feb 25 06:50:05 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Feb 25 06:50:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 25 06:50:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 25 06:50:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 25 06:50:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 25 06:50:05 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 25 06:50:05 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 25 06:50:05 np0005629333 lvm[84332]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 06:50:05 np0005629333 lvm[84332]: VG ceph_vg1 finished
Feb 25 06:50:05 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Feb 25 06:50:05 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Feb 25 06:50:05 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb 25 06:50:05 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Feb 25 06:50:05 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Feb 25 06:50:05 np0005629333 ceph-mgr[76641]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 25 06:50:05 np0005629333 ceph-mon[76335]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Feb 25 06:50:05 np0005629333 ceph-mon[76335]: log_channel(cluster) log [INF] : Cluster is now healthy
Feb 25 06:50:05 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/3218094782' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "a25b4fc6-1504-44d3-aca7-62c5ef316350"} : dispatch
Feb 25 06:50:05 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/3218094782' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "a25b4fc6-1504-44d3-aca7-62c5ef316350"}]': finished
Feb 25 06:50:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Feb 25 06:50:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1231231614' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Feb 25 06:50:05 np0005629333 lucid_joliot[83303]: stderr: got monmap epoch 1
Feb 25 06:50:05 np0005629333 lucid_joliot[83303]: --> Creating keyring file for osd.1
Feb 25 06:50:05 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Feb 25 06:50:05 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Feb 25 06:50:06 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid a25b4fc6-1504-44d3-aca7-62c5ef316350 --setuser ceph --setgroup ceph
Feb 25 06:50:06 np0005629333 ceph-mon[76335]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Feb 25 06:50:06 np0005629333 ceph-mon[76335]: Cluster is now healthy
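With the first OSDs registered, the TOO_FEW_OSDS check clears because this deployment runs with osd_pool_default_size=1 (visible in the cleared-message text), so a single "in" OSD already satisfies the default pool size. Two standard queries to confirm that state:

    # the replication default the health check compares against
    ceph config get mon osd_pool_default_size
    # should now report HEALTH_OK
    ceph health detail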
Feb 25 06:50:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 25 06:50:06 np0005629333 lucid_joliot[83303]: stderr: 2026-02-25T11:50:06.068+0000 7f03277db8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Feb 25 06:50:06 np0005629333 lucid_joliot[83303]: stderr: 2026-02-25T11:50:06.092+0000 7f03277db8c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Feb 25 06:50:06 np0005629333 lucid_joliot[83303]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Feb 25 06:50:07 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 25 06:50:07 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Feb 25 06:50:07 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Feb 25 06:50:07 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Feb 25 06:50:07 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb 25 06:50:07 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 25 06:50:07 np0005629333 lucid_joliot[83303]: --> ceph-volume lvm activate successful for osd ID: 1
Feb 25 06:50:07 np0005629333 lucid_joliot[83303]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Feb 25 06:50:07 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:07 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:07 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f84d59d3-cae3-44c8-8bca-9fa4643cfc60
Feb 25 06:50:07 np0005629333 ceph-mgr[76641]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 25 06:50:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60"} v 0)
Feb 25 06:50:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2754562958' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60"} : dispatch
Feb 25 06:50:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Feb 25 06:50:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 25 06:50:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2754562958' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60"}]': finished
Feb 25 06:50:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Feb 25 06:50:07 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Feb 25 06:50:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 25 06:50:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 25 06:50:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 25 06:50:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 25 06:50:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 25 06:50:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 25 06:50:07 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 25 06:50:07 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 25 06:50:07 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 25 06:50:07 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/2754562958' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60"} : dispatch
Feb 25 06:50:07 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/2754562958' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60"}]': finished
Feb 25 06:50:07 np0005629333 lvm[85270]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 06:50:07 np0005629333 lvm[85270]: VG ceph_vg2 finished
Feb 25 06:50:07 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Feb 25 06:50:07 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Feb 25 06:50:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:50:07 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Feb 25 06:50:07 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Feb 25 06:50:07 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Feb 25 06:50:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Feb 25 06:50:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2139795014' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Feb 25 06:50:08 np0005629333 lucid_joliot[83303]: stderr: got monmap epoch 1
Feb 25 06:50:08 np0005629333 lucid_joliot[83303]: --> Creating keyring file for osd.2
Feb 25 06:50:08 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Feb 25 06:50:08 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Feb 25 06:50:08 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid f84d59d3-cae3-44c8-8bca-9fa4643cfc60 --setuser ceph --setgroup ceph
Feb 25 06:50:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 25 06:50:09 np0005629333 ceph-mgr[76641]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 25 06:50:09 np0005629333 lucid_joliot[83303]: stderr: 2026-02-25T11:50:08.310+0000 7f91d140b8c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Feb 25 06:50:09 np0005629333 lucid_joliot[83303]: stderr: 2026-02-25T11:50:08.347+0000 7f91d140b8c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Feb 25 06:50:09 np0005629333 lucid_joliot[83303]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Feb 25 06:50:09 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 25 06:50:09 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Feb 25 06:50:09 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Feb 25 06:50:09 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Feb 25 06:50:09 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Feb 25 06:50:09 np0005629333 lucid_joliot[83303]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 25 06:50:09 np0005629333 lucid_joliot[83303]: --> ceph-volume lvm activate successful for osd ID: 2
Feb 25 06:50:09 np0005629333 lucid_joliot[83303]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Feb 25 06:50:09 np0005629333 systemd[1]: libpod-6c4f23f785cc39abf123b76189b01b8e2db071ef4d4c29c0075a66d4fe4ae989.scope: Deactivated successfully.
Feb 25 06:50:09 np0005629333 podman[83287]: 2026-02-25 11:50:09.881192563 +0000 UTC m=+7.976474432 container died 6c4f23f785cc39abf123b76189b01b8e2db071ef4d4c29c0075a66d4fe4ae989 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_joliot, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:50:09 np0005629333 systemd[1]: libpod-6c4f23f785cc39abf123b76189b01b8e2db071ef4d4c29c0075a66d4fe4ae989.scope: Consumed 5.574s CPU time.
Feb 25 06:50:09 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5f64c886b3cf814585a5bb1377c89769636d4d570e118b6da91ecddff296c874-merged.mount: Deactivated successfully.
Feb 25 06:50:09 np0005629333 podman[83287]: 2026-02-25 11:50:09.935419405 +0000 UTC m=+8.030701294 container remove 6c4f23f785cc39abf123b76189b01b8e2db071ef4d4c29c0075a66d4fe4ae989 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Feb 25 06:50:09 np0005629333 systemd[1]: libpod-conmon-6c4f23f785cc39abf123b76189b01b8e2db071ef4d4c29c0075a66d4fe4ae989.scope: Deactivated successfully.
Feb 25 06:50:10 np0005629333 podman[86258]: 2026-02-25 11:50:10.38383096 +0000 UTC m=+0.064102325 container create 1c6c7f6a4ea253d82d058a7527724a989d68d1190db129f02f92fbc24da35688 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_heisenberg, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:50:10 np0005629333 systemd[1]: Started libpod-conmon-1c6c7f6a4ea253d82d058a7527724a989d68d1190db129f02f92fbc24da35688.scope.
Feb 25 06:50:10 np0005629333 podman[86258]: 2026-02-25 11:50:10.356442242 +0000 UTC m=+0.036713647 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:10 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:10 np0005629333 podman[86258]: 2026-02-25 11:50:10.481136007 +0000 UTC m=+0.161407372 container init 1c6c7f6a4ea253d82d058a7527724a989d68d1190db129f02f92fbc24da35688 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_heisenberg, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Feb 25 06:50:10 np0005629333 podman[86258]: 2026-02-25 11:50:10.490802579 +0000 UTC m=+0.171073944 container start 1c6c7f6a4ea253d82d058a7527724a989d68d1190db129f02f92fbc24da35688 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:50:10 np0005629333 zealous_heisenberg[86274]: 167 167
Feb 25 06:50:10 np0005629333 podman[86258]: 2026-02-25 11:50:10.497235111 +0000 UTC m=+0.177506476 container attach 1c6c7f6a4ea253d82d058a7527724a989d68d1190db129f02f92fbc24da35688 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_heisenberg, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:50:10 np0005629333 systemd[1]: libpod-1c6c7f6a4ea253d82d058a7527724a989d68d1190db129f02f92fbc24da35688.scope: Deactivated successfully.
Feb 25 06:50:10 np0005629333 podman[86258]: 2026-02-25 11:50:10.498100589 +0000 UTC m=+0.178371944 container died 1c6c7f6a4ea253d82d058a7527724a989d68d1190db129f02f92fbc24da35688 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_heisenberg, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 06:50:10 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1d45926702d8f86b7603c37aeca3ac32147980c6c9e1b6027b5559b5fe4601ec-merged.mount: Deactivated successfully.
Feb 25 06:50:10 np0005629333 podman[86258]: 2026-02-25 11:50:10.538166911 +0000 UTC m=+0.218438296 container remove 1c6c7f6a4ea253d82d058a7527724a989d68d1190db129f02f92fbc24da35688 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 06:50:10 np0005629333 systemd[1]: libpod-conmon-1c6c7f6a4ea253d82d058a7527724a989d68d1190db129f02f92fbc24da35688.scope: Deactivated successfully.
Feb 25 06:50:10 np0005629333 podman[86298]: 2026-02-25 11:50:10.707969429 +0000 UTC m=+0.062559117 container create e117c5d8d1f570df60cf88f10b1cc035e58504795064c5273dab27371c9335d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chebyshev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:50:10 np0005629333 systemd[1]: Started libpod-conmon-e117c5d8d1f570df60cf88f10b1cc035e58504795064c5273dab27371c9335d3.scope.
Feb 25 06:50:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 25 06:50:10 np0005629333 podman[86298]: 2026-02-25 11:50:10.678823004 +0000 UTC m=+0.033412552 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:10 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ad39879db53940e653986de803a29d079b96af4f86ca3cf5a8406cbf2e85b64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ad39879db53940e653986de803a29d079b96af4f86ca3cf5a8406cbf2e85b64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ad39879db53940e653986de803a29d079b96af4f86ca3cf5a8406cbf2e85b64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ad39879db53940e653986de803a29d079b96af4f86ca3cf5a8406cbf2e85b64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:10 np0005629333 podman[86298]: 2026-02-25 11:50:10.830718879 +0000 UTC m=+0.185308427 container init e117c5d8d1f570df60cf88f10b1cc035e58504795064c5273dab27371c9335d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chebyshev, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:50:10 np0005629333 podman[86298]: 2026-02-25 11:50:10.840991228 +0000 UTC m=+0.195580716 container start e117c5d8d1f570df60cf88f10b1cc035e58504795064c5273dab27371c9335d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chebyshev, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:50:10 np0005629333 podman[86298]: 2026-02-25 11:50:10.844124535 +0000 UTC m=+0.198714013 container attach e117c5d8d1f570df60cf88f10b1cc035e58504795064c5273dab27371c9335d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
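The JSON that intelligent_chebyshev prints below is a per-OSD inventory of the logical volumes just created, keyed by OSD id and carrying the ceph.* LV tags that ceph-volume stamped during prepare. Its shape matches the output of "ceph-volume lvm list --format json", which cephadm presumably ran here to refresh its device state; the same report can be produced by hand:

    # list ceph-managed LVs and their tags as JSON, keyed by OSD id
    ceph-volume lvm list --format json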
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]: {
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:    "0": [
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:        {
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "devices": [
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "/dev/loop3"
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            ],
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "lv_name": "ceph_lv0",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "lv_size": "21470642176",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "name": "ceph_lv0",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "tags": {
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.cluster_name": "ceph",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.crush_device_class": "",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.encrypted": "0",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.objectstore": "bluestore",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.osd_id": "0",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.type": "block",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.vdo": "0",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.with_tpm": "0"
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            },
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "type": "block",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "vg_name": "ceph_vg0"
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:        }
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:    ],
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:    "1": [
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:        {
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "devices": [
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "/dev/loop4"
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            ],
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "lv_name": "ceph_lv1",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "lv_size": "21470642176",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "name": "ceph_lv1",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "tags": {
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.cluster_name": "ceph",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.crush_device_class": "",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.encrypted": "0",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.objectstore": "bluestore",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.osd_id": "1",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.type": "block",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.vdo": "0",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.with_tpm": "0"
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            },
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "type": "block",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "vg_name": "ceph_vg1"
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:        }
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:    ],
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:    "2": [
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:        {
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "devices": [
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "/dev/loop5"
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            ],
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "lv_name": "ceph_lv2",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "lv_size": "21470642176",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "name": "ceph_lv2",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "tags": {
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.cluster_name": "ceph",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.crush_device_class": "",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.encrypted": "0",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.objectstore": "bluestore",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.osd_id": "2",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.type": "block",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.vdo": "0",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:                "ceph.with_tpm": "0"
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            },
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "type": "block",
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:            "vg_name": "ceph_vg2"
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:        }
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]:    ]
Feb 25 06:50:11 np0005629333 intelligent_chebyshev[86314]: }
Feb 25 06:50:11 np0005629333 systemd[1]: libpod-e117c5d8d1f570df60cf88f10b1cc035e58504795064c5273dab27371c9335d3.scope: Deactivated successfully.
Feb 25 06:50:11 np0005629333 podman[86298]: 2026-02-25 11:50:11.131744696 +0000 UTC m=+0.486334144 container died e117c5d8d1f570df60cf88f10b1cc035e58504795064c5273dab27371c9335d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chebyshev, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Feb 25 06:50:11 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1ad39879db53940e653986de803a29d079b96af4f86ca3cf5a8406cbf2e85b64-merged.mount: Deactivated successfully.
Feb 25 06:50:11 np0005629333 podman[86298]: 2026-02-25 11:50:11.177503698 +0000 UTC m=+0.532093146 container remove e117c5d8d1f570df60cf88f10b1cc035e58504795064c5273dab27371c9335d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chebyshev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:50:11 np0005629333 systemd[1]: libpod-conmon-e117c5d8d1f570df60cf88f10b1cc035e58504795064c5273dab27371c9335d3.scope: Deactivated successfully.
Feb 25 06:50:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Feb 25 06:50:11 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 25 06:50:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:50:11 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:50:11 np0005629333 ceph-mgr[76641]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Feb 25 06:50:11 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Feb 25 06:50:11 np0005629333 ceph-mgr[76641]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 25 06:50:11 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 25 06:50:11 np0005629333 podman[86425]: 2026-02-25 11:50:11.845526809 +0000 UTC m=+0.051746855 container create 5b823b151e4f2e4ac67ed0baddc92884021cdd06dbe28fb13551cd3bdbc3b2df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:50:11 np0005629333 systemd[1]: Started libpod-conmon-5b823b151e4f2e4ac67ed0baddc92884021cdd06dbe28fb13551cd3bdbc3b2df.scope.
Feb 25 06:50:11 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:11 np0005629333 podman[86425]: 2026-02-25 11:50:11.826655173 +0000 UTC m=+0.032875209 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:11 np0005629333 podman[86425]: 2026-02-25 11:50:11.926000909 +0000 UTC m=+0.132221025 container init 5b823b151e4f2e4ac67ed0baddc92884021cdd06dbe28fb13551cd3bdbc3b2df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 06:50:11 np0005629333 podman[86425]: 2026-02-25 11:50:11.931850635 +0000 UTC m=+0.138070671 container start 5b823b151e4f2e4ac67ed0baddc92884021cdd06dbe28fb13551cd3bdbc3b2df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_blackburn, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:50:11 np0005629333 podman[86425]: 2026-02-25 11:50:11.936022847 +0000 UTC m=+0.142242933 container attach 5b823b151e4f2e4ac67ed0baddc92884021cdd06dbe28fb13551cd3bdbc3b2df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 06:50:11 np0005629333 sweet_blackburn[86442]: 167 167
Feb 25 06:50:11 np0005629333 systemd[1]: libpod-5b823b151e4f2e4ac67ed0baddc92884021cdd06dbe28fb13551cd3bdbc3b2df.scope: Deactivated successfully.
Feb 25 06:50:11 np0005629333 podman[86425]: 2026-02-25 11:50:11.938762587 +0000 UTC m=+0.144982623 container died 5b823b151e4f2e4ac67ed0baddc92884021cdd06dbe28fb13551cd3bdbc3b2df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_blackburn, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:50:11 np0005629333 systemd[1]: var-lib-containers-storage-overlay-3c68a51ab6ef9525f373787e69f3cc7b98b440fed0e3e7c2217568c40d3102c1-merged.mount: Deactivated successfully.
Feb 25 06:50:11 np0005629333 podman[86425]: 2026-02-25 11:50:11.976734918 +0000 UTC m=+0.182954964 container remove 5b823b151e4f2e4ac67ed0baddc92884021cdd06dbe28fb13551cd3bdbc3b2df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_blackburn, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 06:50:11 np0005629333 systemd[1]: libpod-conmon-5b823b151e4f2e4ac67ed0baddc92884021cdd06dbe28fb13551cd3bdbc3b2df.scope: Deactivated successfully.
Feb 25 06:50:12 np0005629333 podman[86471]: 2026-02-25 11:50:12.192169672 +0000 UTC m=+0.044263337 container create 48f022c1d7900373718441788541b826f6347dd2517ab6fe241eac53be718c5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0-activate-test, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 06:50:12 np0005629333 systemd[1]: Started libpod-conmon-48f022c1d7900373718441788541b826f6347dd2517ab6fe241eac53be718c5a.scope.
Feb 25 06:50:12 np0005629333 podman[86471]: 2026-02-25 11:50:12.170191931 +0000 UTC m=+0.022285656 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:12 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825de7bebd61256d83fba470474937b7b7c4b422377134df34f3fc59daf61e65/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825de7bebd61256d83fba470474937b7b7c4b422377134df34f3fc59daf61e65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825de7bebd61256d83fba470474937b7b7c4b422377134df34f3fc59daf61e65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825de7bebd61256d83fba470474937b7b7c4b422377134df34f3fc59daf61e65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825de7bebd61256d83fba470474937b7b7c4b422377134df34f3fc59daf61e65/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:12 np0005629333 podman[86471]: 2026-02-25 11:50:12.296016015 +0000 UTC m=+0.148109740 container init 48f022c1d7900373718441788541b826f6347dd2517ab6fe241eac53be718c5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Feb 25 06:50:12 np0005629333 podman[86471]: 2026-02-25 11:50:12.310324181 +0000 UTC m=+0.162417876 container start 48f022c1d7900373718441788541b826f6347dd2517ab6fe241eac53be718c5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0-activate-test, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 06:50:12 np0005629333 podman[86471]: 2026-02-25 11:50:12.314579617 +0000 UTC m=+0.166673382 container attach 48f022c1d7900373718441788541b826f6347dd2517ab6fe241eac53be718c5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 06:50:12 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0-activate-test[86487]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Feb 25 06:50:12 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0-activate-test[86487]:                            [--no-systemd] [--no-tmpfs]
Feb 25 06:50:12 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0-activate-test[86487]: ceph-volume activate: error: unrecognized arguments: --bad-option
Feb 25 06:50:12 np0005629333 systemd[1]: libpod-48f022c1d7900373718441788541b826f6347dd2517ab6fe241eac53be718c5a.scope: Deactivated successfully.
Feb 25 06:50:12 np0005629333 podman[86471]: 2026-02-25 11:50:12.471453259 +0000 UTC m=+0.323546964 container died 48f022c1d7900373718441788541b826f6347dd2517ab6fe241eac53be718c5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0-activate-test, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:50:12 np0005629333 systemd[1]: var-lib-containers-storage-overlay-825de7bebd61256d83fba470474937b7b7c4b422377134df34f3fc59daf61e65-merged.mount: Deactivated successfully.
Feb 25 06:50:12 np0005629333 podman[86471]: 2026-02-25 11:50:12.514380707 +0000 UTC m=+0.366474382 container remove 48f022c1d7900373718441788541b826f6347dd2517ab6fe241eac53be718c5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 06:50:12 np0005629333 systemd[1]: libpod-conmon-48f022c1d7900373718441788541b826f6347dd2517ab6fe241eac53be718c5a.scope: Deactivated successfully.
Feb 25 06:50:12 np0005629333 systemd[1]: Reloading.
Feb 25 06:50:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:50:12 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:50:12 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:50:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 25 06:50:12 np0005629333 ceph-mon[76335]: Deploying daemon osd.0 on compute-0
Feb 25 06:50:12 np0005629333 systemd[1]: Reloading.
Feb 25 06:50:12 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:50:12 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:50:13 np0005629333 systemd[1]: Starting Ceph osd.0 for 8ac33163-6221-5d58-9a39-8b6933fe7762...
Feb 25 06:50:13 np0005629333 podman[86661]: 2026-02-25 11:50:13.402807478 +0000 UTC m=+0.057668504 container create cec7d17528e249e6e95eda047a3977f72b1e23b12c07aaeab659f912175c25d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0-activate, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 06:50:13 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/596a2bc42ab7341d19856c8faca4b5c962a046a8e2a72ffb01bdbe93ac3cc574/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/596a2bc42ab7341d19856c8faca4b5c962a046a8e2a72ffb01bdbe93ac3cc574/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/596a2bc42ab7341d19856c8faca4b5c962a046a8e2a72ffb01bdbe93ac3cc574/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/596a2bc42ab7341d19856c8faca4b5c962a046a8e2a72ffb01bdbe93ac3cc574/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/596a2bc42ab7341d19856c8faca4b5c962a046a8e2a72ffb01bdbe93ac3cc574/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:13 np0005629333 podman[86661]: 2026-02-25 11:50:13.379578992 +0000 UTC m=+0.034440058 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:13 np0005629333 podman[86661]: 2026-02-25 11:50:13.499171783 +0000 UTC m=+0.154032869 container init cec7d17528e249e6e95eda047a3977f72b1e23b12c07aaeab659f912175c25d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Feb 25 06:50:13 np0005629333 podman[86661]: 2026-02-25 11:50:13.510145533 +0000 UTC m=+0.165006559 container start cec7d17528e249e6e95eda047a3977f72b1e23b12c07aaeab659f912175c25d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:50:13 np0005629333 podman[86661]: 2026-02-25 11:50:13.514570217 +0000 UTC m=+0.169431243 container attach cec7d17528e249e6e95eda047a3977f72b1e23b12c07aaeab659f912175c25d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Feb 25 06:50:13 np0005629333 ceph-mgr[76641]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 25 06:50:13 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0-activate[86676]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:13 np0005629333 bash[86661]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:13 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0-activate[86676]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:13 np0005629333 bash[86661]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:14 np0005629333 lvm[86761]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 06:50:14 np0005629333 lvm[86761]: VG ceph_vg0 finished
Feb 25 06:50:14 np0005629333 lvm[86762]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 06:50:14 np0005629333 lvm[86762]: VG ceph_vg1 finished
Feb 25 06:50:14 np0005629333 lvm[86764]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 06:50:14 np0005629333 lvm[86764]: VG ceph_vg2 finished
Feb 25 06:50:14 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0-activate[86676]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb 25 06:50:14 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0-activate[86676]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:14 np0005629333 bash[86661]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb 25 06:50:14 np0005629333 bash[86661]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:14 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0-activate[86676]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:14 np0005629333 bash[86661]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:14 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0-activate[86676]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 25 06:50:14 np0005629333 bash[86661]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 25 06:50:14 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0-activate[86676]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Feb 25 06:50:14 np0005629333 bash[86661]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Feb 25 06:50:14 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0-activate[86676]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb 25 06:50:14 np0005629333 bash[86661]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Feb 25 06:50:14 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0-activate[86676]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Feb 25 06:50:14 np0005629333 bash[86661]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Feb 25 06:50:14 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0-activate[86676]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 25 06:50:14 np0005629333 bash[86661]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 25 06:50:14 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0-activate[86676]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 25 06:50:14 np0005629333 bash[86661]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Feb 25 06:50:14 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0-activate[86676]: --> ceph-volume lvm activate successful for osd ID: 0
Feb 25 06:50:14 np0005629333 bash[86661]: --> ceph-volume lvm activate successful for osd ID: 0
Feb 25 06:50:14 np0005629333 systemd[1]: libpod-cec7d17528e249e6e95eda047a3977f72b1e23b12c07aaeab659f912175c25d9.scope: Deactivated successfully.
Feb 25 06:50:14 np0005629333 podman[86661]: 2026-02-25 11:50:14.64235992 +0000 UTC m=+1.297220896 container died cec7d17528e249e6e95eda047a3977f72b1e23b12c07aaeab659f912175c25d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0-activate, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 06:50:14 np0005629333 systemd[1]: libpod-cec7d17528e249e6e95eda047a3977f72b1e23b12c07aaeab659f912175c25d9.scope: Consumed 1.265s CPU time.
Feb 25 06:50:14 np0005629333 systemd[1]: var-lib-containers-storage-overlay-596a2bc42ab7341d19856c8faca4b5c962a046a8e2a72ffb01bdbe93ac3cc574-merged.mount: Deactivated successfully.
Feb 25 06:50:14 np0005629333 podman[86661]: 2026-02-25 11:50:14.690941565 +0000 UTC m=+1.345802591 container remove cec7d17528e249e6e95eda047a3977f72b1e23b12c07aaeab659f912175c25d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 06:50:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 25 06:50:14 np0005629333 podman[86934]: 2026-02-25 11:50:14.892646038 +0000 UTC m=+0.037004090 container create 1e427dc9af907e6574253279e8449e4b67b36070f049b103a12d0cb67c90cdf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:50:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1e932ef9783b329a51606e164c8cc3b01dd57cd3401303dc453c1465725dce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1e932ef9783b329a51606e164c8cc3b01dd57cd3401303dc453c1465725dce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1e932ef9783b329a51606e164c8cc3b01dd57cd3401303dc453c1465725dce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1e932ef9783b329a51606e164c8cc3b01dd57cd3401303dc453c1465725dce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1e932ef9783b329a51606e164c8cc3b01dd57cd3401303dc453c1465725dce/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:14 np0005629333 podman[86934]: 2026-02-25 11:50:14.945160175 +0000 UTC m=+0.089518197 container init 1e427dc9af907e6574253279e8449e4b67b36070f049b103a12d0cb67c90cdf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:14 np0005629333 podman[86934]: 2026-02-25 11:50:14.950228297 +0000 UTC m=+0.094586309 container start 1e427dc9af907e6574253279e8449e4b67b36070f049b103a12d0cb67c90cdf7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 06:50:14 np0005629333 bash[86934]: 1e427dc9af907e6574253279e8449e4b67b36070f049b103a12d0cb67c90cdf7
Feb 25 06:50:14 np0005629333 podman[86934]: 2026-02-25 11:50:14.874730694 +0000 UTC m=+0.019088746 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:14 np0005629333 systemd[1]: Started Ceph osd.0 for 8ac33163-6221-5d58-9a39-8b6933fe7762.
Feb 25 06:50:14 np0005629333 ceph-osd[86953]: set uid:gid to 167:167 (ceph:ceph)
Feb 25 06:50:14 np0005629333 ceph-osd[86953]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Feb 25 06:50:14 np0005629333 ceph-osd[86953]: pidfile_write: ignore empty --pid-file
Feb 25 06:50:14 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 25 06:50:14 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 25 06:50:14 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:14 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:14 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) close
Feb 25 06:50:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:50:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:50:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Feb 25 06:50:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb 25 06:50:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:50:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:50:15 np0005629333 ceph-mgr[76641]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Feb 25 06:50:15 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) close
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) close
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) close
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) close
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2400 /var/lib/ceph/osd/ceph-0/block) close
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a2000 /var/lib/ceph/osd/ceph-0/block) close
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: load: jerasure load: lrc 
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a3c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a3c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a3c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a3c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a3c00 /var/lib/ceph/osd/ceph-0/block) close
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a3c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a3c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a3c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a3c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a3c00 /var/lib/ceph/osd/ceph-0/block) close
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a3c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a3c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a3c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a3c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a3c00 /var/lib/ceph/osd/ceph-0/block) close
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a3c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a3c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a3c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a3c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a3c00 /var/lib/ceph/osd/ceph-0/block) close
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a3c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a3c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a3c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a3c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a3c00 /var/lib/ceph/osd/ceph-0/block) close
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a3c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a3c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a3c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x5565169a3c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x556517639800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x556517639800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x556517639800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x556517639800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluefs mount
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluefs mount shared_bdev_used = 0
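Editor's note: bluefs _init_alloc reports the shared device capacity and its allocation unit in hex: 0x4ffc00000 bytes carved into 0x10000-byte (64 KiB) units. The all-zero "locked allocations" lines and shared_bdev_used = 0 say no extents were pinned at mount time. Decoding the two hex values:

    capacity = 0x4ffc00000         # 21470642176 bytes, from the log
    alloc_unit = 0x10000           # 64 KiB
    assert capacity % alloc_unit == 0
    print(capacity // alloc_unit)  # 327616 allocation units, no remainder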
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
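Editor's note: the db_paths size of 20397110067 bytes is the shared device capacity scaled back by 5% headroom. The 95% factor is inferred from the arithmetic below, not read out of Ceph source, though it matches BlueStore's habit of reserving slack on a shared device:

    capacity = 21470642176
    print(int(capacity * 0.95))    # 20397110067 -> exactly the logged value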
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: RocksDB version: 7.9.2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Git sha 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: DB SUMMARY
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: DB Session ID:  UFK7HPKX7P9TV1W2VBL4
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: CURRENT file:  CURRENT
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: IDENTITY file:  IDENTITY
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
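Editor's note: the DB SUMMARY describes a very small store: one SST (000030.sst), a ~5 KiB WAL (000031.log) in db.wal, and MANIFEST-000032 as the live manifest named by the CURRENT file. For this OSD those files live inside BlueFS, so they are not visible on the host filesystem; for a RocksDB directory on a plain filesystem the same layout could be inspected like this (sketch, stdlib only, hypothetical path):

    import os

    base = "/path/to/rocksdb"      # hypothetical; BlueStore's db dir is in BlueFS
    with open(os.path.join(base, "CURRENT")) as f:
        print("active manifest:", f.read().strip())   # e.g. MANIFEST-000032
    for name in sorted(os.listdir(base)):
        if name.endswith((".sst", ".log")):
            print(name, os.path.getsize(os.path.join(base, name)), "bytes")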
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                         Options.error_if_exists: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.create_if_missing: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                         Options.paranoid_checks: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                                     Options.env: 0x556516833ea0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                                Options.info_log: 0x5565178848a0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.max_file_opening_threads: 16
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                              Options.statistics: (nil)
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                               Options.use_fsync: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.max_log_file_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                         Options.allow_fallocate: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.use_direct_reads: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.create_missing_column_families: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                              Options.db_log_dir: 
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                                 Options.wal_dir: db.wal
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.advise_random_on_open: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.write_buffer_manager: 0x556516898b40
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                            Options.rate_limiter: (nil)
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.unordered_write: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                               Options.row_cache: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                              Options.wal_filter: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.allow_ingest_behind: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.two_write_queues: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.manual_wal_flush: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.wal_compression: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.atomic_flush: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.log_readahead_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.allow_data_in_errors: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.db_host_id: __hostname__
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.max_background_jobs: 4
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.max_background_compactions: -1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.max_subcompactions: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.max_open_files: -1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.bytes_per_sync: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.max_background_flushes: -1
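Editor's note: a few of the DBOptions above are enums or derived values rather than plain counts. wal_recovery_mode 2 is kPointInTimeRecovery in RocksDB's WALRecoveryMode enum, and max_total_wal_size 1073741824 equals the full memtable budget dumped later (64 write buffers of 16 MiB each). A small decode:

    WAL_RECOVERY_MODES = {
        0: "kTolerateCorruptedTailRecords",
        1: "kAbsoluteConsistency",
        2: "kPointInTimeRecovery",      # the mode in effect here
        3: "kSkipAnyCorruptedRecords",
    }
    print(WAL_RECOVERY_MODES[2])
    print(64 * 16777216)                # 1073741824 == max_total_wal_size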
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Compression algorithms supported:
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: 	kZSTD supported: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: 	kXpressCompression supported: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: 	kBZip2Compression supported: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: 	kLZ4Compression supported: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: 	kZlibCompression supported: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: 	kLZ4HCCompression supported: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: 	kSnappyCompression supported: 1
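Editor's note: the support list reflects what this RocksDB build was compiled with, and it is consistent with the column families below choosing Options.compression: LZ4. Restated as a dict:

    supported = {"ZSTD": 0, "Xpress": 0, "BZip2": 0, "ZSTDNotFinal": 0,
                 "LZ4": 1, "Zlib": 1, "LZ4HC": 1, "Snappy": 1}
    assert supported["LZ4"] == 1   # matches Options.compression: LZ4 below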
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
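Editor's note: the two lines above show the open path: read-only mode, then recovery of column-family state from db/MANIFEST-000032. A read-only open of the same shape, assuming the third-party python-rocksdb bindings and a DB on a plain filesystem (an OSD's own DB sits inside BlueFS and cannot be opened this way while the OSD holds it):

    import rocksdb  # third-party python-rocksdb bindings (assumption)

    opts = rocksdb.Options(create_if_missing=False)
    db = rocksdb.DB("/path/to/rocksdb", opts, read_only=True)  # hypothetical path
    print(db.get_property(b"rocksdb.estimate-num-keys"))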
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556517884c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5565168378d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
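Editor's note: the [default] column family ends here; the m-0, m-1, m-2 and p-0 families dumped next carry identical options, only the merge_operator differs (the int64_array/bitwise_xor operator is registered for [default] alone). A few numbers derived from the options above (pure arithmetic on logged values):

    write_buffer_size = 16777216            # 16 MiB
    print(write_buffer_size * 64)           # 1073741824: total memtable budget,
                                            # equal to Options.max_total_wal_size
    print(write_buffer_size * 6)            # 100663296: ~96 MiB flushed per merge
                                            # (min_write_buffer_number_to_merge=6)
    base, mult = 1073741824, 8.0            # max_bytes_for_level_base, multiplier
    for level in (1, 2, 3):
        print(f"L{level} target: {int(base * mult ** (level - 1))} bytes")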
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556517884c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5565168378d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556517884c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5565168378d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556517884c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5565168378d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556517884c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5565168378d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556517884c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5565168378d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556517884c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5565168378d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556517884c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556516837a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556517884c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556516837a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556517884c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556516837a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/column_family.cc:635]     (skipping printing options)
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/column_family.cc:635]     (skipping printing options)
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e594b66d-6e06-43fb-bad2-3e5f70fd50b6
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020215403053, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020215403960, "job": 1, "event": "recovery_finished"}
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: freelist init
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: freelist _read_cfg
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluefs umount
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x556517639800 /var/lib/ceph/osd/ceph-0/block) close
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x556517639800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x556517639800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x556517639800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bdev(0x556517639800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluefs mount
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluefs mount shared_bdev_used = 27262976
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: RocksDB version: 7.9.2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Git sha 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: DB SUMMARY
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: DB Session ID:  UFK7HPKX7P9TV1W2VBL5
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: CURRENT file:  CURRENT
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: IDENTITY file:  IDENTITY
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                         Options.error_if_exists: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.create_if_missing: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                         Options.paranoid_checks: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                                     Options.env: 0x556516833ce0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                                Options.info_log: 0x556517884a20
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.max_file_opening_threads: 16
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                              Options.statistics: (nil)
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                               Options.use_fsync: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.max_log_file_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                         Options.allow_fallocate: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.use_direct_reads: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.create_missing_column_families: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                              Options.db_log_dir: 
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                                 Options.wal_dir: db.wal
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.advise_random_on_open: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.write_buffer_manager: 0x556516898b40
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                            Options.rate_limiter: (nil)
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.unordered_write: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                               Options.row_cache: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                              Options.wal_filter: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.allow_ingest_behind: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.two_write_queues: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.manual_wal_flush: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.wal_compression: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.atomic_flush: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.log_readahead_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.allow_data_in_errors: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.db_host_id: __hostname__
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.max_background_jobs: 4
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.max_background_compactions: -1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.max_subcompactions: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.max_open_files: -1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.bytes_per_sync: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.max_background_flushes: -1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Compression algorithms supported:
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     kZSTD supported: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     kXpressCompression supported: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     kBZip2Compression supported: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     kZSTDNotFinalCompression supported: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     kLZ4Compression supported: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     kZlibCompression supported: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     kLZ4HCCompression supported: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     kSnappyCompression supported: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556517885a00)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556516836430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556517885a00)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556516836430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556517885a00)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556516836430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556517885a00)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556516836430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556517885a00)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556516836430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556517885a00)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556516836430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556517885a00)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556516836430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556517885960)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556516837350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
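
[Editor's note] The [O-0] dump above configures level-style compaction with max_bytes_for_level_base = 1 GiB, max_bytes_for_level_multiplier = 8, and level_compaction_dynamic_level_bytes = 0, so the per-level capacity targets are static and follow by simple arithmetic (the addtl[] multipliers are all 1 and drop out). A minimal Python sketch using only values printed in the log:

    # Per-level capacity targets implied by the options above (not Ceph code).
    base = 1073741824        # Options.max_bytes_for_level_base (1 GiB)
    multiplier = 8.0         # Options.max_bytes_for_level_multiplier
    num_levels = 7           # Options.num_levels
    for level in range(1, num_levels):
        print(f"L{level} target: {base * multiplier ** (level - 1) / 2**30:.0f} GiB")
    # -> L1: 1, L2: 8, L3: 64, L4: 512, L5: 4096, L6: 32768 GiB
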
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556517885960)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556516837350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
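
[Editor's note] In this log the [O-1] dump is identical to [O-0] apart from the family name, as is [O-2] below. A hedged Python sketch that verifies this mechanically rather than by eye, parsing the Options lines per column family and diffing them (the log path is hypothetical):

    # Parse "Options.<key>: <value>" lines per column family and diff them.
    import re
    from collections import defaultdict

    opts, cf = defaultdict(dict), None
    for line in open("ceph-osd.log"):                    # hypothetical export
        m = re.search(r"Options for column family \[([^\]]+)\]", line)
        if m:
            cf = m.group(1)
            continue
        m = re.search(r"Options\.([\w.\[\]]+): (.*)$", line)
        if m and cf:
            opts[cf][m.group(1)] = m.group(2)
    ref = opts["O-0"]
    for name, d in opts.items():
        diff = [k for k in ref if d.get(k) != ref[k]]
        print(name, "differs in:", diff or "nothing")
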
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556517885960)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556516837350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e594b66d-6e06-43fb-bad2-3e5f70fd50b6
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020215461431, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020215466047, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020215, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e594b66d-6e06-43fb-bad2-3e5f70fd50b6", "db_session_id": "UFK7HPKX7P9TV1W2VBL5", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb 25 06:50:15 np0005629333 podman[87279]: 2026-02-25 11:50:15.466178986 +0000 UTC m=+0.042041940 container create c5b087959e56a2cf62b25c34ac9cd248e90655de70440fe6f41bda5c0ed76476 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_perlman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020215472750, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020215, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e594b66d-6e06-43fb-bad2-3e5f70fd50b6", "db_session_id": "UFK7HPKX7P9TV1W2VBL5", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020215479590, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020215, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e594b66d-6e06-43fb-bad2-3e5f70fd50b6", "db_session_id": "UFK7HPKX7P9TV1W2VBL5", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020215481195, "job": 1, "event": "recovery_finished"}
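
[Editor's note] Each EVENT_LOG_v1 record above carries a JSON payload after the marker, so the WAL-recovery activity (job 1: three table_file_creation events followed by recovery_finished) is machine-readable with the standard json module. A minimal sketch, log path hypothetical:

    # Extract rocksdb EVENT_LOG_v1 payloads, e.g. the table files created
    # during WAL recovery above: default/35, p-0/36, O-2/37.
    import json

    for line in open("ceph-osd.log"):                    # hypothetical export
        _, sep, payload = line.partition("EVENT_LOG_v1 ")
        if not sep:
            continue
        ev = json.loads(payload)
        if ev.get("event") == "table_file_creation":
            print(ev["cf_name"], ev["file_number"], ev["file_size"])
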
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Feb 25 06:50:15 np0005629333 systemd[1]: Started libpod-conmon-c5b087959e56a2cf62b25c34ac9cd248e90655de70440fe6f41bda5c0ed76476.scope.
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x556517a69c00
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: DB pointer 0x556517a3e000
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
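
[Editor's note] The _open_db line above records the option string BlueStore handed to RocksDB; it uses the same comma-separated key=value syntax as the bluestore_rocksdb_options configuration option. A small sketch splitting it into a dict (string copied verbatim from the log; size suffixes such as 2MB are kept as strings):

    # Parse the rocksdb option string logged by bluestore _open_db above.
    opts = ("compression=kLZ4Compression,max_write_buffer_number=64,"
            "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
            "write_buffer_size=16777216,max_background_jobs=4,"
            "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
            "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
            "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")
    parsed = dict(kv.split("=", 1) for kv in opts.split(","))
    print(parsed["write_buffer_size"], parsed["compaction_readahead_size"])  # 16777216 2MB
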
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556516836430#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556516836430#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556516836430#2 capacity: 460.80 MB usag
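
[Editor's note] The stats dump above (and the earlier table_factory entries) are single journal records whose embedded newlines and tabs were escaped as #012 and #011, the octal codes for \n and \t; the record above is also truncated mid-word in the source. A sketch that re-expands such records when reading an exported log; the file name is hypothetical:

    # Re-expand syslog-escaped multi-line records (#012 = '\n', #011 = '\t').
    with open("ceph-osd.log") as f:                      # hypothetical export
        for line in f:
            print(line.rstrip("\n").replace("#012", "\n").replace("#011", "\t"))
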
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: _get_class not permitted to load lua
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: _get_class not permitted to load sdk
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: osd.0 0 load_pgs
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: osd.0 0 load_pgs opened 0 pgs
Feb 25 06:50:15 np0005629333 ceph-osd[86953]: osd.0 0 log_to_monitors true
Feb 25 06:50:15 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0[86949]: 2026-02-25T11:50:15.511+0000 7ff4d8c918c0 -1 osd.0 0 log_to_monitors true
Feb 25 06:50:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Feb 25 06:50:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2399616326,v1:192.168.122.100:6803/2399616326]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
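
[Editor's note] The audit record above embeds the dispatched command as JSON between cmd= and the trailing : dispatch, so it can be recovered programmatically. A sketch against that payload (record abridged to the fields shown in the log):

    # Pull the command JSON out of a ceph-mon audit record.
    import json, re

    rec = ("from='osd.0 [...]' entity='osd.0' cmd={\"prefix\": "
           "\"osd crush set-device-class\", \"class\": \"hdd\", \"ids\": [\"0\"]} : dispatch")
    m = re.search(r"cmd=(\{.*\}) : dispatch", rec)
    if m:
        cmd = json.loads(m.group(1))
        print(cmd["prefix"], cmd["class"], cmd["ids"])   # osd crush set-device-class hdd ['0']
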
Feb 25 06:50:15 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:15 np0005629333 podman[87279]: 2026-02-25 11:50:15.532849492 +0000 UTC m=+0.108712486 container init c5b087959e56a2cf62b25c34ac9cd248e90655de70440fe6f41bda5c0ed76476 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_perlman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 06:50:15 np0005629333 podman[87279]: 2026-02-25 11:50:15.537162571 +0000 UTC m=+0.113025555 container start c5b087959e56a2cf62b25c34ac9cd248e90655de70440fe6f41bda5c0ed76476 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_perlman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:50:15 np0005629333 podman[87279]: 2026-02-25 11:50:15.539875169 +0000 UTC m=+0.115738143 container attach c5b087959e56a2cf62b25c34ac9cd248e90655de70440fe6f41bda5c0ed76476 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 06:50:15 np0005629333 sad_perlman[87477]: 167 167
Feb 25 06:50:15 np0005629333 systemd[1]: libpod-c5b087959e56a2cf62b25c34ac9cd248e90655de70440fe6f41bda5c0ed76476.scope: Deactivated successfully.
Feb 25 06:50:15 np0005629333 podman[87279]: 2026-02-25 11:50:15.541290621 +0000 UTC m=+0.117153565 container died c5b087959e56a2cf62b25c34ac9cd248e90655de70440fe6f41bda5c0ed76476 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_perlman, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 06:50:15 np0005629333 podman[87279]: 2026-02-25 11:50:15.451100996 +0000 UTC m=+0.026963950 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:15 np0005629333 systemd[1]: var-lib-containers-storage-overlay-aeb2d13b36a3648a9c2309544a079833cc1fae7644bf8d57af95c5fc61d919a8-merged.mount: Deactivated successfully.
Feb 25 06:50:15 np0005629333 ceph-mgr[76641]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 25 06:50:15 np0005629333 podman[87279]: 2026-02-25 11:50:15.56914186 +0000 UTC m=+0.145004814 container remove c5b087959e56a2cf62b25c34ac9cd248e90655de70440fe6f41bda5c0ed76476 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 06:50:15 np0005629333 systemd[1]: libpod-conmon-c5b087959e56a2cf62b25c34ac9cd248e90655de70440fe6f41bda5c0ed76476.scope: Deactivated successfully.
Feb 25 06:50:15 np0005629333 podman[87540]: 2026-02-25 11:50:15.72756431 +0000 UTC m=+0.031136773 container create d9c7e3e87c77a8f95e4a5a8f676ab46eb978559734fe12bae3743c6735d9a919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1-activate-test, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 06:50:15 np0005629333 systemd[1]: Started libpod-conmon-d9c7e3e87c77a8f95e4a5a8f676ab46eb978559734fe12bae3743c6735d9a919.scope.
Feb 25 06:50:15 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a349adb1caac31ab8dc89d2e31a4482c0c98d1d037357687b2bf262f9c8ab6f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a349adb1caac31ab8dc89d2e31a4482c0c98d1d037357687b2bf262f9c8ab6f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a349adb1caac31ab8dc89d2e31a4482c0c98d1d037357687b2bf262f9c8ab6f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a349adb1caac31ab8dc89d2e31a4482c0c98d1d037357687b2bf262f9c8ab6f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a349adb1caac31ab8dc89d2e31a4482c0c98d1d037357687b2bf262f9c8ab6f1/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:15 np0005629333 podman[87540]: 2026-02-25 11:50:15.808812194 +0000 UTC m=+0.112384747 container init d9c7e3e87c77a8f95e4a5a8f676ab46eb978559734fe12bae3743c6735d9a919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:50:15 np0005629333 podman[87540]: 2026-02-25 11:50:15.713204941 +0000 UTC m=+0.016777434 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:15 np0005629333 podman[87540]: 2026-02-25 11:50:15.821145783 +0000 UTC m=+0.124718246 container start d9c7e3e87c77a8f95e4a5a8f676ab46eb978559734fe12bae3743c6735d9a919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1-activate-test, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:50:15 np0005629333 podman[87540]: 2026-02-25 11:50:15.823921215 +0000 UTC m=+0.127493718 container attach d9c7e3e87c77a8f95e4a5a8f676ab46eb978559734fe12bae3743c6735d9a919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:15 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1-activate-test[87557]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Feb 25 06:50:15 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1-activate-test[87557]:                            [--no-systemd] [--no-tmpfs]
Feb 25 06:50:15 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1-activate-test[87557]: ceph-volume activate: error: unrecognized arguments: --bad-option
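The "-activate-test" run above is a capability probe: the short-lived container invokes ceph-volume's plain `activate` subcommand with a deliberately invalid flag and is expected to die immediately (see the scope deactivation on the next lines), letting the caller classify the build from the usage error rather than from a real activation. A minimal sketch of that probe pattern, assuming an illustrative helper name (`supports_activate` is not cephadm's actual function):

    import subprocess

    def supports_activate(ceph_volume=("ceph-volume",)):
        """Probe whether this ceph-volume build has a plain 'activate' subcommand."""
        proc = subprocess.run(
            [*ceph_volume, "activate", "--bad-option"],
            capture_output=True, text=True,
        )
        # An argparse-backed 'activate' rejects the unknown flag with
        # "unrecognized arguments" (as logged above); an older build with
        # no such subcommand fails with a different error message.
        return "unrecognized arguments" in (proc.stderr + proc.stdout)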
Feb 25 06:50:15 np0005629333 systemd[1]: libpod-d9c7e3e87c77a8f95e4a5a8f676ab46eb978559734fe12bae3743c6735d9a919.scope: Deactivated successfully.
Feb 25 06:50:15 np0005629333 podman[87540]: 2026-02-25 11:50:15.968174455 +0000 UTC m=+0.271746938 container died d9c7e3e87c77a8f95e4a5a8f676ab46eb978559734fe12bae3743c6735d9a919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1-activate-test, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 06:50:15 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a349adb1caac31ab8dc89d2e31a4482c0c98d1d037357687b2bf262f9c8ab6f1-merged.mount: Deactivated successfully.
Feb 25 06:50:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Feb 25 06:50:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 25 06:50:16 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:16 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:16 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb 25 06:50:16 np0005629333 ceph-mon[76335]: Deploying daemon osd.1 on compute-0
Feb 25 06:50:16 np0005629333 ceph-mon[76335]: from='osd.0 [v2:192.168.122.100:6802/2399616326,v1:192.168.122.100:6803/2399616326]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Feb 25 06:50:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2399616326,v1:192.168.122.100:6803/2399616326]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Feb 25 06:50:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Feb 25 06:50:16 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Feb 25 06:50:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Feb 25 06:50:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2399616326,v1:192.168.122.100:6803/2399616326]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb 25 06:50:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
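The initial CRUSH weight of 0.0195 is the device capacity expressed in TiB. The 20 GiB block device (21470642176 bytes) reported by osd.1's bdev open lines later in this log reproduces the figure, assuming osd.0 sits on an identically sized LV:

    # Worked check: CRUSH weight = device capacity in TiB.
    size_bytes = 21470642176              # 20 GiB LV, per the bdev open lines below
    print(round(size_bytes / 2**40, 4))   # 0.0195, matching the logged weight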
Feb 25 06:50:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 25 06:50:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 25 06:50:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 25 06:50:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 25 06:50:16 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 25 06:50:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 25 06:50:16 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 25 06:50:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 25 06:50:16 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 25 06:50:16 np0005629333 podman[87540]: 2026-02-25 11:50:16.041739403 +0000 UTC m=+0.345311906 container remove d9c7e3e87c77a8f95e4a5a8f676ab46eb978559734fe12bae3743c6735d9a919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:50:16 np0005629333 systemd[1]: libpod-conmon-d9c7e3e87c77a8f95e4a5a8f676ab46eb978559734fe12bae3743c6735d9a919.scope: Deactivated successfully.
Feb 25 06:50:16 np0005629333 systemd[1]: Reloading.
Feb 25 06:50:16 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:50:16 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:50:16 np0005629333 systemd[1]: Reloading.
Feb 25 06:50:16 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Feb 25 06:50:16 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Feb 25 06:50:16 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:50:16 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:50:16 np0005629333 systemd[1]: Starting Ceph osd.1 for 8ac33163-6221-5d58-9a39-8b6933fe7762...
Feb 25 06:50:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 25 06:50:16 np0005629333 podman[87728]: 2026-02-25 11:50:16.990120617 +0000 UTC m=+0.053924130 container create e9996775c49c025178e8a9cebfd3afb0bab40c73ed5ad3fe9e3bc16a9aada6da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Feb 25 06:50:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 25 06:50:17 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc5e022bf2d61e4edc4f213ef014f9999d2e32a54bbd1ec5dc69fd5e1cd9260f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2399616326,v1:192.168.122.100:6803/2399616326]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb 25 06:50:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Feb 25 06:50:17 np0005629333 ceph-osd[86953]: osd.0 0 done with init, starting boot process
Feb 25 06:50:17 np0005629333 ceph-osd[86953]: osd.0 0 start_boot
Feb 25 06:50:17 np0005629333 ceph-osd[86953]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Feb 25 06:50:17 np0005629333 ceph-osd[86953]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Feb 25 06:50:17 np0005629333 ceph-osd[86953]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Feb 25 06:50:17 np0005629333 ceph-osd[86953]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Feb 25 06:50:17 np0005629333 ceph-osd[86953]: osd.0 0  bench count 12288000 bsize 4 KiB
Feb 25 06:50:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc5e022bf2d61e4edc4f213ef014f9999d2e32a54bbd1ec5dc69fd5e1cd9260f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc5e022bf2d61e4edc4f213ef014f9999d2e32a54bbd1ec5dc69fd5e1cd9260f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc5e022bf2d61e4edc4f213ef014f9999d2e32a54bbd1ec5dc69fd5e1cd9260f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:17 np0005629333 ceph-mon[76335]: from='osd.0 [v2:192.168.122.100:6802/2399616326,v1:192.168.122.100:6803/2399616326]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Feb 25 06:50:17 np0005629333 ceph-mon[76335]: from='osd.0 [v2:192.168.122.100:6802/2399616326,v1:192.168.122.100:6803/2399616326]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb 25 06:50:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc5e022bf2d61e4edc4f213ef014f9999d2e32a54bbd1ec5dc69fd5e1cd9260f/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:17 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Feb 25 06:50:17 np0005629333 podman[87728]: 2026-02-25 11:50:16.967666264 +0000 UTC m=+0.031469817 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 25 06:50:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 25 06:50:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 25 06:50:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 25 06:50:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 25 06:50:17 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 25 06:50:17 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 25 06:50:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 25 06:50:17 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 25 06:50:17 np0005629333 ceph-mgr[76641]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2399616326; not ready for session (expect reconnect)
Feb 25 06:50:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 25 06:50:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 25 06:50:17 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 25 06:50:17 np0005629333 podman[87728]: 2026-02-25 11:50:17.124548167 +0000 UTC m=+0.188351750 container init e9996775c49c025178e8a9cebfd3afb0bab40c73ed5ad3fe9e3bc16a9aada6da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:50:17 np0005629333 podman[87728]: 2026-02-25 11:50:17.132285615 +0000 UTC m=+0.196089148 container start e9996775c49c025178e8a9cebfd3afb0bab40c73ed5ad3fe9e3bc16a9aada6da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1-activate, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:17 np0005629333 podman[87728]: 2026-02-25 11:50:17.151298857 +0000 UTC m=+0.215102400 container attach e9996775c49c025178e8a9cebfd3afb0bab40c73ed5ad3fe9e3bc16a9aada6da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1-activate, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:50:17 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1-activate[87743]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:17 np0005629333 bash[87728]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:17 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1-activate[87743]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:17 np0005629333 bash[87728]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:17 np0005629333 ceph-mgr[76641]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 25 06:50:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:50:17 np0005629333 lvm[87828]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 06:50:17 np0005629333 lvm[87828]: VG ceph_vg0 finished
Feb 25 06:50:17 np0005629333 lvm[87829]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 06:50:17 np0005629333 lvm[87829]: VG ceph_vg1 finished
Feb 25 06:50:17 np0005629333 lvm[87831]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 06:50:17 np0005629333 lvm[87831]: VG ceph_vg2 finished
Feb 25 06:50:17 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1-activate[87743]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb 25 06:50:17 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1-activate[87743]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:17 np0005629333 bash[87728]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb 25 06:50:17 np0005629333 bash[87728]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:17 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1-activate[87743]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:17 np0005629333 bash[87728]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:18 np0005629333 ceph-mon[76335]: from='osd.0 [v2:192.168.122.100:6802/2399616326,v1:192.168.122.100:6803/2399616326]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb 25 06:50:18 np0005629333 ceph-mgr[76641]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2399616326; not ready for session (expect reconnect)
Feb 25 06:50:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 25 06:50:18 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 25 06:50:18 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 25 06:50:18 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1-activate[87743]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 25 06:50:18 np0005629333 bash[87728]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 25 06:50:18 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1-activate[87743]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Feb 25 06:50:18 np0005629333 bash[87728]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Feb 25 06:50:18 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1-activate[87743]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Feb 25 06:50:18 np0005629333 bash[87728]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Feb 25 06:50:18 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1-activate[87743]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Feb 25 06:50:18 np0005629333 bash[87728]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Feb 25 06:50:18 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1-activate[87743]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb 25 06:50:18 np0005629333 bash[87728]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb 25 06:50:18 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1-activate[87743]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 25 06:50:18 np0005629333 bash[87728]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 25 06:50:18 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1-activate[87743]: --> ceph-volume lvm activate successful for osd ID: 1
Feb 25 06:50:18 np0005629333 bash[87728]: --> ceph-volume lvm activate successful for osd ID: 1
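Note the activator fallback order visible above: the raw activator is tried first and reports "did not find any matching OSD to activate", after which the lvm activator succeeds. The logged lvm commands amount to: fix ownership, prime the OSD directory from the bluestore label on the LV, repoint the `block` symlink, fix ownership again. A minimal sketch replaying that sequence, with paths taken from the log (ceph-volume's real implementation does considerably more):

    import subprocess

    OSD_DIR = "/var/lib/ceph/osd/ceph-1"
    LV = "/dev/ceph_vg1/ceph_lv1"

    for cmd in (
        ["chown", "-R", "ceph:ceph", OSD_DIR],
        # Populate the OSD dir from the bluestore label on the LV.
        ["ceph-bluestore-tool", "--cluster=ceph", "prime-osd-dir",
         "--dev", LV, "--path", OSD_DIR, "--no-mon-config"],
        # Point the 'block' symlink at the LV and fix its ownership.
        ["ln", "-snf", LV, OSD_DIR + "/block"],
        ["chown", "-h", "ceph:ceph", OSD_DIR + "/block"],
        ["chown", "-R", "ceph:ceph", OSD_DIR],
    ):
        subprocess.run(cmd, check=True)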
Feb 25 06:50:18 np0005629333 systemd[1]: libpod-e9996775c49c025178e8a9cebfd3afb0bab40c73ed5ad3fe9e3bc16a9aada6da.scope: Deactivated successfully.
Feb 25 06:50:18 np0005629333 systemd[1]: libpod-e9996775c49c025178e8a9cebfd3afb0bab40c73ed5ad3fe9e3bc16a9aada6da.scope: Consumed 1.310s CPU time.
Feb 25 06:50:18 np0005629333 podman[87728]: 2026-02-25 11:50:18.185255755 +0000 UTC m=+1.249059258 container died e9996775c49c025178e8a9cebfd3afb0bab40c73ed5ad3fe9e3bc16a9aada6da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:50:18 np0005629333 systemd[1]: var-lib-containers-storage-overlay-fc5e022bf2d61e4edc4f213ef014f9999d2e32a54bbd1ec5dc69fd5e1cd9260f-merged.mount: Deactivated successfully.
Feb 25 06:50:18 np0005629333 podman[87728]: 2026-02-25 11:50:18.335380432 +0000 UTC m=+1.399183975 container remove e9996775c49c025178e8a9cebfd3afb0bab40c73ed5ad3fe9e3bc16a9aada6da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:50:18 np0005629333 podman[87992]: 2026-02-25 11:50:18.613794341 +0000 UTC m=+0.088622598 container create 297a02a6b684712fe51cf8bad0532a123d53ea030312397f4b4fa1abd1c65e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:50:18 np0005629333 podman[87992]: 2026-02-25 11:50:18.567355209 +0000 UTC m=+0.042183406 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc78f8efc04a5f3e497b23be12b5d21b8960acdf744045e488826e9b566b9bdd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc78f8efc04a5f3e497b23be12b5d21b8960acdf744045e488826e9b566b9bdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc78f8efc04a5f3e497b23be12b5d21b8960acdf744045e488826e9b566b9bdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc78f8efc04a5f3e497b23be12b5d21b8960acdf744045e488826e9b566b9bdd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc78f8efc04a5f3e497b23be12b5d21b8960acdf744045e488826e9b566b9bdd/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:18 np0005629333 podman[87992]: 2026-02-25 11:50:18.720520279 +0000 UTC m=+0.195348476 container init 297a02a6b684712fe51cf8bad0532a123d53ea030312397f4b4fa1abd1c65e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 06:50:18 np0005629333 podman[87992]: 2026-02-25 11:50:18.731271299 +0000 UTC m=+0.206099486 container start 297a02a6b684712fe51cf8bad0532a123d53ea030312397f4b4fa1abd1c65e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 06:50:18 np0005629333 bash[87992]: 297a02a6b684712fe51cf8bad0532a123d53ea030312397f4b4fa1abd1c65e71
Feb 25 06:50:18 np0005629333 systemd[1]: Started Ceph osd.1 for 8ac33163-6221-5d58-9a39-8b6933fe7762.
Feb 25 06:50:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: set uid:gid to 167:167 (ceph:ceph)
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: pidfile_write: ignore empty --pid-file
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) close
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) close
Feb 25 06:50:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) close
Feb 25 06:50:18 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) close
Feb 25 06:50:18 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Feb 25 06:50:18 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 25 06:50:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:50:18 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:50:18 np0005629333 ceph-mgr[76641]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Feb 25 06:50:18 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) close
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0400 /var/lib/ceph/osd/ceph-1/block) close
Feb 25 06:50:18 np0005629333 ceph-osd[88012]: bdev(0x55a364ba0000 /var/lib/ceph/osd/ceph-1/block) close
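The repeated open/ioctl/close cycles above are bluestore probing the device during startup; each open logs the same pair of messages because the F_SET_FILE_RW_HINT fcntl is rejected with EINVAL on this device (evidently non-fatal) and the LV reports a 512-byte st_blksize while bluestore keeps its configured 4 KiB block size. A sketch of that fallback decision, with an illustrative helper name:

    import os

    def pick_bdev_block_size(path, configured=4096):
        """Prefer the configured block size; only log when the backing
        device/file disagrees (mirrors the messages above)."""
        reported = os.stat(path).st_blksize
        if reported != configured:
            print(f"open backing device/file reports st_blksize {reported}, "
                  f"using bdev_block_size {configured} anyway")
        return configured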
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: load: jerasure load: lrc 
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a364ba1c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a364ba1c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a364ba1c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a364ba1c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a364ba1c00 /var/lib/ceph/osd/ceph-1/block) close
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a364ba1c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a364ba1c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a364ba1c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a364ba1c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a364ba1c00 /var/lib/ceph/osd/ceph-1/block) close
Feb 25 06:50:19 np0005629333 ceph-mgr[76641]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2399616326; not ready for session (expect reconnect)
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
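The two mClock figures above are linked: the per-IO cost is the per-shard bandwidth divided by the OSD's assumed IOPS capacity. Assuming the stock HDD profile value of 315 IOPS (osd_mclock_max_capacity_iops_hdd — an assumption, since the log does not print it), the numbers reconcile:

    bandwidth_per_shard = 157286400    # bytes/s from the log, i.e. 150 MiB/s
    assumed_iops = 315                 # default HDD capacity, not shown in the log
    print(bandwidth_per_shard / assumed_iops)   # ~499322 bytes/io; the logged
                                                # 499321.90 matches up to rounding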
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a364ba1c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a364ba1c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a364ba1c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a364ba1c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a364ba1c00 /var/lib/ceph/osd/ceph-1/block) close
Feb 25 06:50:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 25 06:50:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 25 06:50:19 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a364ba1c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a364ba1c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a364ba1c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a364ba1c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a364ba1c00 /var/lib/ceph/osd/ceph-1/block) close
Feb 25 06:50:19 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:19 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:19 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a364ba1c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a364ba1c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a364ba1c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a364ba1c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a364ba1c00 /var/lib/ceph/osd/ceph-1/block) close
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a364ba1c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a364ba1c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a364ba1c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a364ba1c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a365837800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a365837800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a365837800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a365837800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluefs mount
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluefs mount shared_bdev_used = 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
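With no separate DB device, bluestore points both RocksDB paths (db and db.slow) at the shared block device and advertises what appears to be 95% of its capacity for each, which reproduces the logged figure exactly:

    block_size_bytes = 21470642176          # the 20 GiB device opened above
    print(int(block_size_bytes * 0.95))     # 20397110067, matching db and db.slow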
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: RocksDB version: 7.9.2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Git sha 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: DB SUMMARY
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: DB Session ID:  K6Q2U5AGQN9K9HPMTY91
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: CURRENT file:  CURRENT
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: IDENTITY file:  IDENTITY
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                         Options.error_if_exists: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.create_if_missing: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                         Options.paranoid_checks: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                                     Options.env: 0x55a364a31ea0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                                Options.info_log: 0x55a365a828a0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.max_file_opening_threads: 16
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                              Options.statistics: (nil)
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                               Options.use_fsync: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.max_log_file_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                         Options.allow_fallocate: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.use_direct_reads: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.create_missing_column_families: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                              Options.db_log_dir: 
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                                 Options.wal_dir: db.wal
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.advise_random_on_open: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.write_buffer_manager: 0x55a364a96b40
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                            Options.rate_limiter: (nil)
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.unordered_write: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                               Options.row_cache: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                              Options.wal_filter: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.allow_ingest_behind: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.two_write_queues: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.manual_wal_flush: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.wal_compression: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.atomic_flush: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.log_readahead_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.allow_data_in_errors: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.db_host_id: __hostname__
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.max_background_jobs: 4
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.max_background_compactions: -1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.max_subcompactions: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.max_open_files: -1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.bytes_per_sync: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.max_background_flushes: -1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Compression algorithms supported:
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     kZSTD supported: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     kXpressCompression supported: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     kBZip2Compression supported: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     kZSTDNotFinalCompression supported: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     kLZ4Compression supported: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     kZlibCompression supported: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     kLZ4HCCompression supported: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     kSnappyCompression supported: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a365a82c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a364a358d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a365a82c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a364a358d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a365a82c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a364a358d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a365a82c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a364a358d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a365a82c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a364a358d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a365a82c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a364a358d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a365a82c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a364a358d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a365a82c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a364a35a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a365a82c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a364a35a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
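The leveled-compaction knobs repeated in every dump here (max_bytes_for_level_base = 1073741824, max_bytes_for_level_multiplier = 8, all addtl multipliers 1, num_levels = 7, level_compaction_dynamic_level_bytes = 0) fix the target size of each level. A minimal sketch of the implied targets, assuming the standard static leveled formula rather than anything Ceph-specific:

```python
# Per-level size targets implied by the options logged above (a sketch of the
# classic static leveled-compaction formula, not Ceph/RocksDB source).
base = 1073741824        # Options.max_bytes_for_level_base (1 GiB)
multiplier = 8           # Options.max_bytes_for_level_multiplier
addtl = [1] * 7          # Options.max_bytes_for_level_multiplier_addtl[0..6]
num_levels = 7           # Options.num_levels

target = base
for level in range(1, num_levels):
    print(f"L{level} target: {target / 2**30:g} GiB")   # 1, 8, 64, 512, 4096, 32768
    target *= multiplier * addtl[level - 1]
```

With target_file_size_base = 67108864 and a file-size multiplier of 1, each of those levels is cut into 64 MiB SST files.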
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a365a82c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a364a35a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
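Every column family dumped above shares the same memtable settings (write_buffer_size = 16777216, max_write_buffer_number = 64, min_write_buffer_number_to_merge = 6), so rough per-column-family bounds follow directly; a back-of-envelope sketch, worst case only, since actual usage is governed by the shared write_buffer_manager:

```python
# Back-of-envelope memtable/L0 sizing from the column-family options above
# (worst-case bounds, not a measurement).
MiB = 2**20
write_buffer_size = 16 * MiB   # Options.write_buffer_size
max_buffers = 64               # Options.max_write_buffer_number
min_merge = 6                  # Options.min_write_buffer_number_to_merge
l0_trigger = 8                 # Options.level0_file_num_compaction_trigger

print(f"memtable ceiling per CF: {max_buffers * write_buffer_size // MiB} MiB")              # 1024
print(f"data merged per flush:   {min_merge * write_buffer_size // MiB} MiB")                # 96
print(f"approx L0 at compaction: {l0_trigger * min_merge * write_buffer_size // MiB} MiB")   # 768
```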
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 77cd87a9-bbed-41b7-bde1-addf5908aec5
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020219169684, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020219171472, "job": 1, "event": "recovery_finished"}
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
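The option string in the _open_db line matches the format of Ceph's bluestore_rocksdb_options setting and splits mechanically into key=value pairs; a sketch, assuming no embedded commas, which holds for the string logged here:

```python
# Split the rocksdb option string logged above into a dict (assumes plain
# comma-separated key=value pairs, true for this particular string).
opts_str = (
    "compression=kLZ4Compression,max_write_buffer_number=64,"
    "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
    "write_buffer_size=16777216,max_background_jobs=4,"
    "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
    "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
    "max_total_wal_size=1073741824,writable_file_max_buffer_size=0"
)
opts = dict(kv.split("=", 1) for kv in opts_str.split(","))
print(opts["compression"])               # kLZ4Compression
print(opts["compaction_readahead_size"]) # 2MB (some values carry unit suffixes)
```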
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: freelist init
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: freelist _read_cfg
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
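The allocator line reports sizes as hex byte counts; decoding them confirms the 20 GiB figure and shows only three 4 KiB blocks allocated at this point:

```python
# Decode the hex sizes from the _init_alloc line above.
capacity = 0x4ffc00000   # bytes
free     = 0x4ffbfd000   # bytes
block    = 0x1000        # 4 KiB

print(capacity, capacity / 2**30)   # 21470642176 bytes, ~20.0 GiB
print((capacity - free) // block)   # 3 blocks (12 KiB) in use
```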
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluefs umount
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a365837800 /var/lib/ceph/osd/ceph-1/block) close
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a365837800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a365837800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a365837800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bdev(0x55a365837800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluefs mount
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluefs mount shared_bdev_used = 27262976
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
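The db_paths size in _prepare_db_environment equals the raw device size from the bdev open lines scaled by 95% and truncated; the 95% share is inferred from these two numbers, not read out of Ceph source:

```python
# Relate the db_paths size to the block device size (both logged above).
bdev_bytes = 21470642176   # "bdev ... open size 21470642176"
db_paths   = 20397110067   # "_prepare_db_environment set db_paths to db,20397110067 ..."

print(db_paths / bdev_bytes)   # ~0.95
print(int(bdev_bytes * 0.95))  # 20397110067
```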
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: RocksDB version: 7.9.2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Git sha 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: DB SUMMARY
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: DB Session ID:  K6Q2U5AGQN9K9HPMTY90
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: CURRENT file:  CURRENT
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: IDENTITY file:  IDENTITY
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                         Options.error_if_exists: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.create_if_missing: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                         Options.paranoid_checks: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                                     Options.env: 0x55a36587ddc0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                                Options.info_log: 0x55a365a83340
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.max_file_opening_threads: 16
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                              Options.statistics: (nil)
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                               Options.use_fsync: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.max_log_file_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                         Options.allow_fallocate: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.use_direct_reads: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.create_missing_column_families: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                              Options.db_log_dir: 
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                                 Options.wal_dir: db.wal
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.advise_random_on_open: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.write_buffer_manager: 0x55a364a97900
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                            Options.rate_limiter: (nil)
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.unordered_write: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                               Options.row_cache: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                              Options.wal_filter: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.allow_ingest_behind: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.two_write_queues: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.manual_wal_flush: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.wal_compression: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.atomic_flush: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.log_readahead_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.allow_data_in_errors: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.db_host_id: __hostname__
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.max_background_jobs: 4
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.max_background_compactions: -1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.max_subcompactions: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.max_open_files: -1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.bytes_per_sync: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.max_background_flushes: -1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Compression algorithms supported:
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         kZSTD supported: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         kXpressCompression supported: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         kBZip2Compression supported: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         kZSTDNotFinalCompression supported: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         kLZ4Compression supported: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         kZlibCompression supported: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         kLZ4HCCompression supported: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         kSnappyCompression supported: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: DMutex implementation: pthread_mutex_t
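The capability list above shows a build without ZSTD, BZip2, or Xpress, so only the LZ4, Zlib, and Snappy family of codecs is usable; the column families below are configured with kLZ4Compression, which is in the supported set. A trivial cross-check:

```python
# Cross-check the configured codec against the capability list logged above.
supported = {
    "kZSTD": 0, "kXpressCompression": 0, "kBZip2Compression": 0,
    "kZSTDNotFinalCompression": 0, "kLZ4Compression": 1,
    "kZlibCompression": 1, "kLZ4HCCompression": 1, "kSnappyCompression": 1,
}
configured = "kLZ4Compression"   # from the _open_db option string
assert supported[configured], "configured codec not compiled into this build"
```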
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a365acf680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a364a358d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
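One difference from the first open shows up in the table_factory dump above: the block_cache capacity is 483183820 rather than 536870912, i.e. 90% of the 512 MiB cache, truncated; the 90/10 split between the two opens is inferred from the numbers alone, not from Ceph source:

```python
# Compare the block_cache capacities from the two opens (both logged above).
first  = 536870912   # first open's table_factory dump: 512 MiB
second = 483183820   # this open's table_factory dump

print(first / 2**20)     # 512.0 (MiB)
print(int(first * 0.9))  # 483183820, matching the second capacity exactly
```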
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a365acf680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a364a358d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a365acf680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a364a358d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
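In the [m-1] block above (and repeated for every family), the memtable budget works out as follows: write_buffer_size 16777216 is one 16 MiB memtable; min_write_buffer_number_to_merge 6 means a flush waits to merge six immutable memtables, about 96 MiB per L0 file; max_write_buffer_number 64 caps unflushed data at 64 x 16 MiB = 1 GiB before writes stall. A minimal sketch of the same three knobs through RocksDB's C++ options (the function name is illustrative):

    #include "rocksdb/options.h"

    // Memtable sizing as logged for each column family above.
    rocksdb::ColumnFamilyOptions MakeMemtableOpts() {
      rocksdb::ColumnFamilyOptions o;
      o.write_buffer_size = 16 * 1024 * 1024;  // 16777216: one 16 MiB memtable
      o.min_write_buffer_number_to_merge = 6;  // merge 6 memtables (~96 MiB) per flush
      o.max_write_buffer_number = 64;          // cap: 64 x 16 MiB = 1 GiB, then stall
      return o;
    }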
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a365acf680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a364a358d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
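With level_compaction_dynamic_level_bytes off, the target size of level n (n >= 1) is max_bytes_for_level_base x max_bytes_for_level_multiplier^(n-1), and all addtl[] factors here are 1: 1 GiB at L1, 8 GiB at L2, 64 GiB at L3, up to 32768 GiB at L6 for the seven configured levels. A small self-contained program that reproduces the arithmetic from the values in these dumps:

    #include <cstdio>

    // Level-size targets implied by the dump: base 1073741824 (1 GiB),
    // multiplier 8, num_levels 7 (L0..L6; L0 is file-count driven).
    int main() {
      const double base = 1073741824.0;  // max_bytes_for_level_base
      const double mult = 8.0;           // max_bytes_for_level_multiplier
      double target = base;
      for (int level = 1; level <= 6; ++level) {
        std::printf("L%d target: %6.0f GiB\n", level, target / (1 << 30));
        target *= mult;
      }
      return 0;
    }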
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a365acf680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a364a358d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
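The compaction backpressure ladder in these dumps: L0-to-L1 compaction starts at 8 L0 files, foreground writes are throttled at 20 and stopped at 36; pending-compaction debt slows writes past the 64 GiB soft limit and stalls them at the 256 GiB hard limit. max_compaction_bytes 1677721600 is 25 x target_file_size_base (25 x 67108864), RocksDB's derived default. A sketch of the equivalent settings, assuming the stock C++ API:

    #include "rocksdb/options.h"

    // Write-stall ladder from the dumps above; function name is illustrative.
    rocksdb::ColumnFamilyOptions MakeBackpressureOpts() {
      rocksdb::ColumnFamilyOptions o;
      o.level0_file_num_compaction_trigger = 8;    // start L0->L1 compaction
      o.level0_slowdown_writes_trigger = 20;       // throttle foreground writes
      o.level0_stop_writes_trigger = 36;           // halt writes entirely
      o.soft_pending_compaction_bytes_limit = 64ULL << 30;   // 68719476736
      o.hard_pending_compaction_bytes_limit = 256ULL << 30;  // 274877906944
      o.target_file_size_base = 64ULL << 20;       // 67108864
      o.max_compaction_bytes = 25 * (64ULL << 20); // 1677721600 = 25 x base
      return o;
    }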
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a365acf680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a364a358d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
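The table_factory blocks for the m-* and p-* families share one 483183820-byte (~460 MiB) BinnedLRUCache split into 2^4 = 16 shards, with index and filter blocks kept in the cache and the top-level index pinned. BinnedLRUCache is Ceph's own cache implementation; a rough stand-in using stock RocksDB's sharded LRU cache (the bloom bits/key value is an assumption, the log only records "bloomfilter"):

    #include "rocksdb/cache.h"
    #include "rocksdb/filter_policy.h"
    #include "rocksdb/table.h"

    rocksdb::BlockBasedTableOptions MakeTableOpts() {
      rocksdb::BlockBasedTableOptions t;
      // BinnedLRUCache is Ceph-specific; NewLRUCache is the nearest stock
      // equivalent. 483183820 bytes, num_shard_bits 4 => 16 shards.
      t.block_cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);
      t.cache_index_and_filter_blocks = true;   // cache_index_and_filter_blocks: 1
      t.pin_top_level_index_and_filter = true;  // pin_top_level_index_and_filter: 1
      t.block_size = 4096;
      t.format_version = 5;
      // Assumed: 10 bits/key, a common bloom filter setting.
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
      return t;
    }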
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a365acf680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55a364a358d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
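Every family also registers a CompactOnDeletionCollector (sliding window 32768, deletion trigger 16384, deletion ratio 0): an SST file is flagged for compaction as soon as 16384 of any 32768 consecutive entries are tombstones, which keeps delete-heavy OSD workloads from accumulating dead keys. The matching registration through stock RocksDB (the factory helper is real RocksDB API; the wrapper function name is illustrative):

    #include "rocksdb/options.h"
    #include "rocksdb/utilities/table_properties_collectors.h"

    rocksdb::ColumnFamilyOptions MakeDeletionAwareOpts() {
      rocksdb::ColumnFamilyOptions o;
      // Flag an SST for compaction once 16384 of any 32768 consecutive
      // entries are deletes; ratio 0 disables the percentage criterion.
      o.table_properties_collector_factories.emplace_back(
          rocksdb::NewCompactOnDeletionCollectorFactory(
              /*sliding_window_size=*/32768,
              /*deletion_trigger=*/16384,
              /*deletion_ratio=*/0.0));
      return o;
    }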
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a365acf800)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a364a354b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a365acf800)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a364a354b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a365acf800)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a364a354b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
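With kCompactionStyleLevel and level_compaction_dynamic_level_bytes=0, the target size of L1 is max_bytes_for_level_base and each deeper level is max_bytes_for_level_multiplier times larger (the addtl[] factors above are all 1); L0 is governed by the file-count triggers rather than a byte target. A quick check of what the values logged above imply for the seven configured levels:

```python
# Values from the options dump above: max_bytes_for_level_base = 1073741824
# (1 GiB), max_bytes_for_level_multiplier = 8, num_levels = 7.
base, mult, num_levels = 1 << 30, 8, 7

for n in range(1, num_levels):
    # L1 = base; each deeper level is 8x larger (addtl[] factors are all 1)
    target = base * mult ** (n - 1)
    print(f"L{n}: {target / (1 << 30):.0f} GiB")
# -> L1: 1 GiB, L2: 8 GiB, L3: 64 GiB, L4: 512 GiB, L5: 4096 GiB, L6: 32768 GiB
```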
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 77cd87a9-bbed-41b7-bde1-addf5908aec5
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020219224516, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020219230390, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020219, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "77cd87a9-bbed-41b7-bde1-addf5908aec5", "db_session_id": "K6Q2U5AGQN9K9HPMTY90", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020219248169, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020219, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "77cd87a9-bbed-41b7-bde1-addf5908aec5", "db_session_id": "K6Q2U5AGQN9K9HPMTY90", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020219252612, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020219, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "77cd87a9-bbed-41b7-bde1-addf5908aec5", "db_session_id": "K6Q2U5AGQN9K9HPMTY90", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020219277997, "job": 1, "event": "recovery_finished"}
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a365c9dc00
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: DB pointer 0x55a365c3c000
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
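The option string echoed by _open_db is the comma-separated key=value form in which BlueStore hands its RocksDB settings over. For strings like this one a naive split is enough; note it would break on values that themselves contain commas or nested braces:

```python
def parse_rocksdb_opts(optstr):
    """Split a comma-separated key=value RocksDB option string into a dict."""
    return dict(kv.split("=", 1) for kv in optstr.split(","))

# The exact string logged by _open_db above:
opts = parse_rocksdb_opts(
    "compression=kLZ4Compression,max_write_buffer_number=64,"
    "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
    "write_buffer_size=16777216,max_background_jobs=4,"
    "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
    "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
    "max_total_wal_size=1073741824,writable_file_max_buffer_size=0"
)
assert opts["write_buffer_size"] == "16777216"
assert opts["compaction_readahead_size"] == "2MB"
```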
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.2 total, 0.2 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a364a358d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a364a358d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
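The stats record above arrives as a single journal line with embedded newlines escaped as #012 (octal 012, i.e. '\n'), which is why the tables read as one long run of text. Restoring the original layout is a one-line transformation:

```python
def unescape_journal(payload: str) -> str:
    """Expand the '#012' escapes (octal 012 = newline) used for embedded
    newlines in these records back into real line breaks."""
    return payload.replace("#012", "\n")

# print(unescape_journal(stats_record)) renders the DUMPING STATS tables
# with one row per line instead of a single escaped record.
```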
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: _get_class not permitted to load lua
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: _get_class not permitted to load sdk
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: osd.1 0 load_pgs
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: osd.1 0 load_pgs opened 0 pgs
Feb 25 06:50:19 np0005629333 ceph-osd[88012]: osd.1 0 log_to_monitors true
Feb 25 06:50:19 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1[88008]: 2026-02-25T11:50:19.395+0000 7f5f4c7c18c0 -1 osd.1 0 log_to_monitors true
Feb 25 06:50:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Feb 25 06:50:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/194442992,v1:192.168.122.100:6807/194442992]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Feb 25 06:50:19 np0005629333 podman[88553]: 2026-02-25 11:50:19.502217123 +0000 UTC m=+0.075659621 container create b2c2dc53127decc9deb5f70f81c0bccec074c833548b06ef9575d8bc91d4cbd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_fermat, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:50:19 np0005629333 systemd[1]: Started libpod-conmon-b2c2dc53127decc9deb5f70f81c0bccec074c833548b06ef9575d8bc91d4cbd9.scope.
Feb 25 06:50:19 np0005629333 podman[88553]: 2026-02-25 11:50:19.458204188 +0000 UTC m=+0.031646776 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:19 np0005629333 ceph-mgr[76641]: [devicehealth WARNING root] not enough osds to create mgr pool
Feb 25 06:50:19 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:19 np0005629333 podman[88553]: 2026-02-25 11:50:19.59906752 +0000 UTC m=+0.172510048 container init b2c2dc53127decc9deb5f70f81c0bccec074c833548b06ef9575d8bc91d4cbd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 06:50:19 np0005629333 podman[88553]: 2026-02-25 11:50:19.607145653 +0000 UTC m=+0.180588151 container start b2c2dc53127decc9deb5f70f81c0bccec074c833548b06ef9575d8bc91d4cbd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_fermat, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:19 np0005629333 sad_fermat[88569]: 167 167
Feb 25 06:50:19 np0005629333 systemd[1]: libpod-b2c2dc53127decc9deb5f70f81c0bccec074c833548b06ef9575d8bc91d4cbd9.scope: Deactivated successfully.
Feb 25 06:50:19 np0005629333 podman[88553]: 2026-02-25 11:50:19.621108194 +0000 UTC m=+0.194550722 container attach b2c2dc53127decc9deb5f70f81c0bccec074c833548b06ef9575d8bc91d4cbd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_fermat, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 06:50:19 np0005629333 podman[88553]: 2026-02-25 11:50:19.62331881 +0000 UTC m=+0.196761318 container died b2c2dc53127decc9deb5f70f81c0bccec074c833548b06ef9575d8bc91d4cbd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:50:19 np0005629333 systemd[1]: var-lib-containers-storage-overlay-22da7576bfc3659cc309b3ab308f95a99096adb29195e795114db6ac63897d1b-merged.mount: Deactivated successfully.
Feb 25 06:50:19 np0005629333 podman[88553]: 2026-02-25 11:50:19.783727707 +0000 UTC m=+0.357170245 container remove b2c2dc53127decc9deb5f70f81c0bccec074c833548b06ef9575d8bc91d4cbd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_fermat, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:50:19 np0005629333 systemd[1]: libpod-conmon-b2c2dc53127decc9deb5f70f81c0bccec074c833548b06ef9575d8bc91d4cbd9.scope: Deactivated successfully.
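The create, init, start, attach, died, remove sequence above is podman's normal lifecycle for a short-lived, auto-removed container; the "167 167" printed by sad_fermat is the ceph uid/gid baked into the image. A plausible reconstruction of the probe (an assumption: the log does not show the exact entrypoint cephadm used):

    podman run --rm quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 \
        stat -c '%u %g' /var/lib/ceph
    # expected output on this image: 167 167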
Feb 25 06:50:20 np0005629333 ceph-mgr[76641]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2399616326; not ready for session (expect reconnect)
Feb 25 06:50:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 25 06:50:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 25 06:50:20 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 25 06:50:20 np0005629333 podman[88601]: 2026-02-25 11:50:20.078900939 +0000 UTC m=+0.071747070 container create bb0afb52eb14330b87472871d443484c1eb3922c8da1e0e74440486e2154885c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:50:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Feb 25 06:50:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 25 06:50:20 np0005629333 podman[88601]: 2026-02-25 11:50:20.04166832 +0000 UTC m=+0.034514481 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:20 np0005629333 systemd[1]: Started libpod-conmon-bb0afb52eb14330b87472871d443484c1eb3922c8da1e0e74440486e2154885c.scope.
Feb 25 06:50:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/194442992,v1:192.168.122.100:6807/194442992]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Feb 25 06:50:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Feb 25 06:50:20 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Feb 25 06:50:20 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8bb1706e7793ada9ca0403968562b9ccfe18800328fe44a6bdc5b1f4b4faa71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8bb1706e7793ada9ca0403968562b9ccfe18800328fe44a6bdc5b1f4b4faa71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8bb1706e7793ada9ca0403968562b9ccfe18800328fe44a6bdc5b1f4b4faa71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8bb1706e7793ada9ca0403968562b9ccfe18800328fe44a6bdc5b1f4b4faa71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8bb1706e7793ada9ca0403968562b9ccfe18800328fe44a6bdc5b1f4b4faa71/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Feb 25 06:50:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/194442992,v1:192.168.122.100:6807/194442992]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb 25 06:50:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
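The initial_weight of 0.02 follows the usual CRUSH convention of weighting a device by its capacity in TiB; for the 20 GiB logical volumes these OSDs sit on (a convention, not stated in the log itself):

    python3 -c 'print(20/1024)'   # 0.01953125, logged as weight 0.0195 (rounded to 0.02)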
Feb 25 06:50:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 25 06:50:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 25 06:50:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 25 06:50:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 25 06:50:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 25 06:50:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 25 06:50:20 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 25 06:50:20 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 25 06:50:20 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
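The repeated "(2) No such file or directory" replies are expected at this stage: the map still reads "3 total, 0 up, 3 in", and `osd metadata` has nothing to return for an OSD that has not yet booted. The same query succeeds once the daemon registers, e.g.:

    ceph osd metadata 1    # ENOENT while osd.1 is down; returns a JSON blob after boot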
Feb 25 06:50:20 np0005629333 ceph-mon[76335]: Deploying daemon osd.2 on compute-0
Feb 25 06:50:20 np0005629333 ceph-mon[76335]: from='osd.1 [v2:192.168.122.100:6806/194442992,v1:192.168.122.100:6807/194442992]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Feb 25 06:50:20 np0005629333 podman[88601]: 2026-02-25 11:50:20.185872327 +0000 UTC m=+0.178718408 container init bb0afb52eb14330b87472871d443484c1eb3922c8da1e0e74440486e2154885c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2-activate-test, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 06:50:20 np0005629333 podman[88601]: 2026-02-25 11:50:20.192897474 +0000 UTC m=+0.185743555 container start bb0afb52eb14330b87472871d443484c1eb3922c8da1e0e74440486e2154885c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2-activate-test, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:50:20 np0005629333 podman[88601]: 2026-02-25 11:50:20.198910017 +0000 UTC m=+0.191756098 container attach bb0afb52eb14330b87472871d443484c1eb3922c8da1e0e74440486e2154885c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2-activate-test, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:50:20 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2-activate-test[88617]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Feb 25 06:50:20 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2-activate-test[88617]:                            [--no-systemd] [--no-tmpfs]
Feb 25 06:50:20 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2-activate-test[88617]: ceph-volume activate: error: unrecognized arguments: --bad-option
Feb 25 06:50:20 np0005629333 systemd[1]: libpod-bb0afb52eb14330b87472871d443484c1eb3922c8da1e0e74440486e2154885c.scope: Deactivated successfully.
Feb 25 06:50:20 np0005629333 podman[88601]: 2026-02-25 11:50:20.359569575 +0000 UTC m=+0.352415666 container died bb0afb52eb14330b87472871d443484c1eb3922c8da1e0e74440486e2154885c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:50:20 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Feb 25 06:50:20 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Feb 25 06:50:20 np0005629333 systemd[1]: var-lib-containers-storage-overlay-b8bb1706e7793ada9ca0403968562b9ccfe18800328fe44a6bdc5b1f4b4faa71-merged.mount: Deactivated successfully.
Feb 25 06:50:20 np0005629333 podman[88601]: 2026-02-25 11:50:20.505748979 +0000 UTC m=+0.498595100 container remove bb0afb52eb14330b87472871d443484c1eb3922c8da1e0e74440486e2154885c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:50:20 np0005629333 systemd[1]: libpod-conmon-bb0afb52eb14330b87472871d443484c1eb3922c8da1e0e74440486e2154885c.scope: Deactivated successfully.
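The activate-test container exited because ceph-volume rejected the unrecognized --bad-option flag; the usage text above lists what the subcommand actually accepts. A corrected invocation would look like this (the OSD fsid is a placeholder, not taken from the log):

    ceph-volume activate --osd-id 2 --osd-uuid <OSD_FSID> --no-systemd --no-tmpfs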
Feb 25 06:50:20 np0005629333 ceph-osd[86953]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 22.035 iops: 5641.015 elapsed_sec: 0.532
Feb 25 06:50:20 np0005629333 ceph-osd[86953]: log_channel(cluster) log [WRN] : OSD bench result of 5641.014693 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
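The bench figures are self-consistent: 5641.015 IOPS x 0.532 s is about 3001 writes of 4 KiB, roughly 11.7 MiB, matching 22.035 MiB/s x 0.532 s. Because 5641 falls outside the 50-500 IOPS plausibility window for an HDD-class device, the measurement is discarded and the 315 IOPS default is kept. The override the warning suggests, after an external benchmark such as fio (the value is a placeholder):

    ceph config set osd.0 osd_mclock_max_capacity_iops_hdd <measured_iops>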
Feb 25 06:50:20 np0005629333 ceph-osd[86953]: osd.0 0 waiting for initial osdmap
Feb 25 06:50:20 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0[86949]: 2026-02-25T11:50:20.662+0000 7ff4d5425640 -1 osd.0 0 waiting for initial osdmap
Feb 25 06:50:20 np0005629333 ceph-osd[86953]: osd.0 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Feb 25 06:50:20 np0005629333 ceph-osd[86953]: osd.0 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Feb 25 06:50:20 np0005629333 ceph-osd[86953]: osd.0 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Feb 25 06:50:20 np0005629333 ceph-osd[86953]: osd.0 9 check_osdmap_features require_osd_release unknown -> tentacle
Feb 25 06:50:20 np0005629333 ceph-osd[86953]: osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 25 06:50:20 np0005629333 ceph-osd[86953]: osd.0 9 set_numa_affinity not setting numa affinity
Feb 25 06:50:20 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-0[86949]: 2026-02-25T11:50:20.688+0000 7ff4cfa18640 -1 osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 25 06:50:20 np0005629333 ceph-osd[86953]: osd.0 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Feb 25 06:50:20 np0005629333 systemd[1]: Reloading.
Feb 25 06:50:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Feb 25 06:50:20 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:50:20 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
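Both generator notices repeat on every "systemd[1]: Reloading." and are informational only; the shim unit that systemd-sysv-generator produced can be inspected with:

    systemctl cat network.service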
Feb 25 06:50:21 np0005629333 systemd[1]: Reloading.
Feb 25 06:50:21 np0005629333 ceph-mgr[76641]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2399616326; not ready for session (expect reconnect)
Feb 25 06:50:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 25 06:50:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 25 06:50:21 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Feb 25 06:50:21 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:50:21 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:50:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Feb 25 06:50:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 25 06:50:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/194442992,v1:192.168.122.100:6807/194442992]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb 25 06:50:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Feb 25 06:50:21 np0005629333 ceph-osd[88012]: osd.1 0 done with init, starting boot process
Feb 25 06:50:21 np0005629333 ceph-osd[88012]: osd.1 0 start_boot
Feb 25 06:50:21 np0005629333 ceph-osd[88012]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Feb 25 06:50:21 np0005629333 ceph-osd[88012]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Feb 25 06:50:21 np0005629333 ceph-osd[88012]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Feb 25 06:50:21 np0005629333 ceph-osd[88012]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Feb 25 06:50:21 np0005629333 ceph-osd[88012]: osd.1 0  bench count 12288000 bsize 4 KiB
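This is the same built-in benchmark osd.0 ran above: a count of 12288000 bytes at a 4 KiB block size, i.e. 3000 write ops:

    python3 -c 'print(12288000 // 4096)'   # 3000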
Feb 25 06:50:21 np0005629333 ceph-mon[76335]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/2399616326,v1:192.168.122.100:6803/2399616326] boot
Feb 25 06:50:21 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Feb 25 06:50:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Feb 25 06:50:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 25 06:50:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 25 06:50:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 25 06:50:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 25 06:50:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 25 06:50:21 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 25 06:50:21 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 25 06:50:21 np0005629333 ceph-mgr[76641]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/194442992; not ready for session (expect reconnect)
Feb 25 06:50:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 25 06:50:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 25 06:50:21 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 25 06:50:21 np0005629333 ceph-osd[86953]: osd.0 10 state: booting -> active
Feb 25 06:50:21 np0005629333 ceph-mon[76335]: from='osd.1 [v2:192.168.122.100:6806/194442992,v1:192.168.122.100:6807/194442992]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Feb 25 06:50:21 np0005629333 ceph-mon[76335]: from='osd.1 [v2:192.168.122.100:6806/194442992,v1:192.168.122.100:6807/194442992]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb 25 06:50:21 np0005629333 ceph-mon[76335]: from='osd.1 [v2:192.168.122.100:6806/194442992,v1:192.168.122.100:6807/194442992]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb 25 06:50:21 np0005629333 ceph-mon[76335]: osd.0 [v2:192.168.122.100:6802/2399616326,v1:192.168.122.100:6803/2399616326] boot
Feb 25 06:50:21 np0005629333 systemd[1]: Starting Ceph osd.2 for 8ac33163-6221-5d58-9a39-8b6933fe7762...
Feb 25 06:50:21 np0005629333 ceph-mgr[76641]: [devicehealth INFO root] creating mgr pool
Feb 25 06:50:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Feb 25 06:50:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
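With one OSD finally up, the devicehealth module retries the .mgr pool creation it had skipped at 06:50:19 ("not enough osds to create mgr pool"). A roughly equivalent hand-run command follows (hedged: the JSON above is the mon-level form; CLI flag spellings can vary by release, and a pool name starting with "." needs the confirmation flag):

    ceph osd pool create .mgr 1 --yes-i-really-mean-it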
Feb 25 06:50:21 np0005629333 podman[88792]: 2026-02-25 11:50:21.586508485 +0000 UTC m=+0.074954810 container create ecb1b25f6686443a2ae2935b99e658ee7488b80fdc8b028d65b345d54b0ed34b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2-activate, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:21 np0005629333 podman[88792]: 2026-02-25 11:50:21.538729525 +0000 UTC m=+0.027175900 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:21 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bee9e4ca54738b2617a8d1a299f78409703c78402f426b504e375c95b73b5278/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bee9e4ca54738b2617a8d1a299f78409703c78402f426b504e375c95b73b5278/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bee9e4ca54738b2617a8d1a299f78409703c78402f426b504e375c95b73b5278/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bee9e4ca54738b2617a8d1a299f78409703c78402f426b504e375c95b73b5278/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bee9e4ca54738b2617a8d1a299f78409703c78402f426b504e375c95b73b5278/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:21 np0005629333 podman[88792]: 2026-02-25 11:50:21.674572697 +0000 UTC m=+0.163019012 container init ecb1b25f6686443a2ae2935b99e658ee7488b80fdc8b028d65b345d54b0ed34b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2-activate, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:50:21 np0005629333 podman[88792]: 2026-02-25 11:50:21.680057227 +0000 UTC m=+0.168503562 container start ecb1b25f6686443a2ae2935b99e658ee7488b80fdc8b028d65b345d54b0ed34b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:21 np0005629333 podman[88792]: 2026-02-25 11:50:21.710842534 +0000 UTC m=+0.199288859 container attach ecb1b25f6686443a2ae2935b99e658ee7488b80fdc8b028d65b345d54b0ed34b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 06:50:21 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2-activate[88810]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:21 np0005629333 bash[88792]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:21 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2-activate[88810]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:21 np0005629333 bash[88792]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:22 np0005629333 ceph-mgr[76641]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/194442992; not ready for session (expect reconnect)
Feb 25 06:50:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 25 06:50:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 25 06:50:22 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 25 06:50:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Feb 25 06:50:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Feb 25 06:50:22 np0005629333 ceph-mon[76335]: OSD bench result of 5641.014693 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb 25 06:50:22 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Feb 25 06:50:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Feb 25 06:50:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Feb 25 06:50:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Feb 25 06:50:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Feb 25 06:50:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Feb 25 06:50:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Feb 25 06:50:22 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Feb 25 06:50:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 25 06:50:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 25 06:50:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 25 06:50:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 25 06:50:22 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 25 06:50:22 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 25 06:50:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Feb 25 06:50:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Feb 25 06:50:22 np0005629333 ceph-osd[86953]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Feb 25 06:50:22 np0005629333 ceph-osd[86953]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Feb 25 06:50:22 np0005629333 ceph-osd[86953]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Feb 25 06:50:22 np0005629333 lvm[88895]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 06:50:22 np0005629333 lvm[88895]: VG ceph_vg0 finished
Feb 25 06:50:22 np0005629333 lvm[88896]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 06:50:22 np0005629333 lvm[88896]: VG ceph_vg1 finished
Feb 25 06:50:22 np0005629333 lvm[88898]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 06:50:22 np0005629333 lvm[88898]: VG ceph_vg2 finished
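udev-triggered LVM autoactivation declares each VG complete as soon as its single loop-device PV appears. The layout can be confirmed with standard LVM tooling:

    pvs /dev/loop3 /dev/loop4 /dev/loop5
    lvs ceph_vg0 ceph_vg1 ceph_vg2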
Feb 25 06:50:22 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2-activate[88810]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb 25 06:50:22 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2-activate[88810]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:22 np0005629333 bash[88792]: --> Failed to activate via raw: did not find any matching OSD to activate
Feb 25 06:50:22 np0005629333 bash[88792]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:22 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2-activate[88810]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:22 np0005629333 bash[88792]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 25 06:50:22 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2-activate[88810]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 25 06:50:22 np0005629333 bash[88792]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 25 06:50:22 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2-activate[88810]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Feb 25 06:50:22 np0005629333 bash[88792]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Feb 25 06:50:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:50:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v28: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Feb 25 06:50:22 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2-activate[88810]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Feb 25 06:50:22 np0005629333 bash[88792]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Feb 25 06:50:22 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2-activate[88810]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Feb 25 06:50:22 np0005629333 bash[88792]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Feb 25 06:50:22 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2-activate[88810]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Feb 25 06:50:22 np0005629333 bash[88792]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Feb 25 06:50:22 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2-activate[88810]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 25 06:50:22 np0005629333 bash[88792]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Feb 25 06:50:22 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2-activate[88810]: --> ceph-volume lvm activate successful for osd ID: 2
Feb 25 06:50:22 np0005629333 bash[88792]: --> ceph-volume lvm activate successful for osd ID: 2
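After prime-osd-dir, the block symlink, and the chown calls, osd.2's data directory is ready. Two quick checks (paths taken from the log):

    ceph-volume lvm list /dev/ceph_vg2/ceph_lv2    # shows the LV's osd id / osd fsid tags
    ls -l /var/lib/ceph/osd/ceph-2/block           # -> /dev/ceph_vg2/ceph_lv2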
Feb 25 06:50:22 np0005629333 systemd[1]: libpod-ecb1b25f6686443a2ae2935b99e658ee7488b80fdc8b028d65b345d54b0ed34b.scope: Deactivated successfully.
Feb 25 06:50:22 np0005629333 systemd[1]: libpod-ecb1b25f6686443a2ae2935b99e658ee7488b80fdc8b028d65b345d54b0ed34b.scope: Consumed 1.453s CPU time.
Feb 25 06:50:22 np0005629333 podman[89012]: 2026-02-25 11:50:22.91803232 +0000 UTC m=+0.039364003 container died ecb1b25f6686443a2ae2935b99e658ee7488b80fdc8b028d65b345d54b0ed34b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 06:50:22 np0005629333 systemd[1]: var-lib-containers-storage-overlay-bee9e4ca54738b2617a8d1a299f78409703c78402f426b504e375c95b73b5278-merged.mount: Deactivated successfully.
Feb 25 06:50:23 np0005629333 podman[89012]: 2026-02-25 11:50:23.034079716 +0000 UTC m=+0.155411339 container remove ecb1b25f6686443a2ae2935b99e658ee7488b80fdc8b028d65b345d54b0ed34b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2-activate, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 06:50:23 np0005629333 ceph-mgr[76641]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/194442992; not ready for session (expect reconnect)
Feb 25 06:50:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 25 06:50:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 25 06:50:23 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 25 06:50:23 np0005629333 podman[89068]: 2026-02-25 11:50:23.300891707 +0000 UTC m=+0.081678673 container create e32ac8b3055194d6d9e229cd887bf79c342441e942b6a6244c63a665e9b4e9fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 06:50:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Feb 25 06:50:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Feb 25 06:50:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Feb 25 06:50:23 np0005629333 podman[89068]: 2026-02-25 11:50:23.252443558 +0000 UTC m=+0.033230574 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:23 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Feb 25 06:50:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8c1f6f45526441d9bef3d5e0ee88a83d01adc23eef2e364e0ccd151285e2584/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8c1f6f45526441d9bef3d5e0ee88a83d01adc23eef2e364e0ccd151285e2584/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8c1f6f45526441d9bef3d5e0ee88a83d01adc23eef2e364e0ccd151285e2584/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8c1f6f45526441d9bef3d5e0ee88a83d01adc23eef2e364e0ccd151285e2584/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8c1f6f45526441d9bef3d5e0ee88a83d01adc23eef2e364e0ccd151285e2584/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 25 06:50:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 25 06:50:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 25 06:50:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 25 06:50:23 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 25 06:50:23 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 25 06:50:23 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Feb 25 06:50:23 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Feb 25 06:50:23 np0005629333 podman[89068]: 2026-02-25 11:50:23.579951054 +0000 UTC m=+0.360738050 container init e32ac8b3055194d6d9e229cd887bf79c342441e942b6a6244c63a665e9b4e9fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:50:23 np0005629333 podman[89068]: 2026-02-25 11:50:23.588298079 +0000 UTC m=+0.369085075 container start e32ac8b3055194d6d9e229cd887bf79c342441e942b6a6244c63a665e9b4e9fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:50:23 np0005629333 bash[89068]: e32ac8b3055194d6d9e229cd887bf79c342441e942b6a6244c63a665e9b4e9fd
Feb 25 06:50:23 np0005629333 systemd[1]: Started Ceph osd.2 for 8ac33163-6221-5d58-9a39-8b6933fe7762.
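cephadm names its units ceph-<fsid>@<daemon>.service, so the daemon started here should be addressable as follows (unit name inferred from that convention, not printed verbatim in the log):

    systemctl status ceph-8ac33163-6221-5d58-9a39-8b6933fe7762@osd.2.service
    journalctl -fu ceph-8ac33163-6221-5d58-9a39-8b6933fe7762@osd.2.service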
Feb 25 06:50:23 np0005629333 ceph-osd[89088]: set uid:gid to 167:167 (ceph:ceph)
Feb 25 06:50:23 np0005629333 ceph-osd[89088]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Feb 25 06:50:23 np0005629333 ceph-osd[89088]: pidfile_write: ignore empty --pid-file
Feb 25 06:50:23 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 25 06:50:23 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 25 06:50:23 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:23 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:23 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) close
Feb 25 06:50:23 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 25 06:50:23 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 25 06:50:23 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:23 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:23 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) close
Feb 25 06:50:23 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 25 06:50:23 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 25 06:50:23 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:23 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:23 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) close
Feb 25 06:50:23 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 25 06:50:23 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 25 06:50:23 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:23 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:23 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) close
Feb 25 06:50:23 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 25 06:50:23 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 25 06:50:23 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:23 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:23 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) close
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62400 /var/lib/ceph/osd/ceph-2/block) close
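The repeated open/probe/close cycles are BlueStore and BlueFS sizing the device. The F_SET_FILE_RW_HINT EINVAL is harmless (the LV simply does not support write hints), and BlueStore keeps its 4 KiB block size despite the 512-byte st_blksize the device reports. The device's own sector sizes can be read with:

    blockdev --getss --getpbsz /dev/ceph_vg2/ceph_lv2   # logical and physical sector size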
Feb 25 06:50:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f62000 /var/lib/ceph/osd/ceph-2/block) close
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: load: jerasure load: lrc 
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f63c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f63c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f63c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f63c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f63c00 /var/lib/ceph/osd/ceph-2/block) close
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f63c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f63c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f63c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f63c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f63c00 /var/lib/ceph/osd/ceph-2/block) close
Feb 25 06:50:24 np0005629333 ceph-mgr[76641]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/194442992; not ready for session (expect reconnect)
Feb 25 06:50:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 25 06:50:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 25 06:50:24 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
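
The audit lines above show the mgr asking the mon for osd.1's metadata and getting ENOENT back, since that OSD has not finished booting and registering. A sketch of the same query via the librados Python binding (the conf path is an assumption; the JSON matches the cmd in the dispatch line):

    import json
    import rados   # python3-rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # assumed default path
    cluster.connect()
    # Same command the mgr dispatched; while the OSD is still starting the mon
    # answers -2 (ENOENT), matching "failed to return metadata for osd.1".
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "osd metadata", "id": 1}), b"")
    print(ret, out.decode() or errs)
    cluster.shutdown()
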
Feb 25 06:50:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
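
The mClockScheduler line above relates its two figures directly: 157286400 bytes/second is 150 MiB/s, and dividing it by an IOPS capacity of 315 reproduces the 499321.90 bytes/io cost. The 315 is an assumption here (the documented default of osd_mclock_max_capacity_iops_hdd for rotational devices); the log itself does not print it:

    bw_per_shard = 157286400                  # bytes/s from the log
    assert bw_per_shard == 150 * 2**20        # i.e. 150 MiB/s
    iops_hdd = 315                            # assumed osd_mclock_max_capacity_iops_hdd
    print(round(bw_per_shard / iops_hdd, 2))  # 499321.9 -> osd_bandwidth_cost_per_io
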
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f63c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f63c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f63c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f63c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f63c00 /var/lib/ceph/osd/ceph-2/block) close
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f63c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f63c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f63c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f63c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f63c00 /var/lib/ceph/osd/ceph-2/block) close
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f63c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f63c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f63c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f63c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f63c00 /var/lib/ceph/osd/ceph-2/block) close
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f63c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f63c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f63c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d0f63c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d1bf9800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d1bf9800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d1bf9800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d1bf9800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluefs mount
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluefs mount shared_bdev_used = 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
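
Two more figures above check out by simple arithmetic: the bluefs allocation unit 0x10000 from _init_alloc is 64 KiB, and the db/db.slow sizes handed to RocksDB are 95% of the 21470642176-byte device. The 95% headroom factor is inferred from the numbers, not stated in the log:

    print(0x10000, 0x10000 // 1024)    # 65536 bytes -> 64 KiB allocation unit
    dev_size = 21470642176             # device size from the bdev open lines
    print(int(dev_size * 0.95))        # 20397110067 -> db and db.slow sizes above
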
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: RocksDB version: 7.9.2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Git sha 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: DB SUMMARY
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: DB Session ID:  7PFWG8CE5PJIJG4T7NUY
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: CURRENT file:  CURRENT
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: IDENTITY file:  IDENTITY
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                         Options.error_if_exists: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.create_if_missing: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                         Options.paranoid_checks: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                                     Options.env: 0x55f8d0df3ea0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                                Options.info_log: 0x55f8d1e448a0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.max_file_opening_threads: 16
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                              Options.statistics: (nil)
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                               Options.use_fsync: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.max_log_file_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                         Options.allow_fallocate: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.use_direct_reads: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.create_missing_column_families: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                              Options.db_log_dir: 
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                                 Options.wal_dir: db.wal
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.advise_random_on_open: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.write_buffer_manager: 0x55f8d0e58b40
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                            Options.rate_limiter: (nil)
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.unordered_write: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                               Options.row_cache: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                              Options.wal_filter: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.allow_ingest_behind: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.two_write_queues: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.manual_wal_flush: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.wal_compression: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.atomic_flush: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.log_readahead_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.allow_data_in_errors: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.db_host_id: __hostname__
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.max_background_jobs: 4
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.max_background_compactions: -1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.max_subcompactions: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.max_open_files: -1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.bytes_per_sync: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.max_background_flushes: -1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Compression algorithms supported:
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     kZSTD supported: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     kXpressCompression supported: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     kBZip2Compression supported: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     kZSTDNotFinalCompression supported: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     kLZ4Compression supported: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     kZlibCompression supported: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     kLZ4HCCompression supported: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     kSnappyCompression supported: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8d1e44c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f8d0df78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
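
The write-buffer settings above bound memtable memory per column family; the implied budget is straight multiplication of the logged values:

    write_buffer = 16777216                   # 16 MiB write_buffer_size
    print(write_buffer * 64 // 2**20, "MiB")  # 1024 MiB ceiling (max_write_buffer_number 64)
    print(write_buffer * 6 // 2**20, "MiB")   # 96 MiB merged per flush (min_..._to_merge 6)
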
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
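
With max_bytes_for_level_base at 1 GiB, a multiplier of 8, and the addtl factors above all 1, the per-level targets grow geometrically; the max_compaction_bytes of 1677721600 below is likewise 25 x the 64 MiB target_file_size_base (the 25x derivation is RocksDB's usual default, inferred here from the numbers):

    base, mult = 1073741824, 8
    for level in range(1, 5):                 # L1..L4 targets
        print(f"L{level}: {base * mult ** (level - 1) // 2**30} GiB")
    assert 25 * 67108864 == 1677721600        # 25 * target_file_size_base
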
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8d1e44c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f8d0df78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8d1e44c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f8d0df78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8d1e44c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f8d0df78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
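[editor's aid] The dump ends here and repeats below, once per column family. As a reading aid, the following is a minimal, hypothetical C++ sketch of how a few of the values logged above map onto RocksDB's public API. It is an illustration only: BlueStore assembles these options internally (typically from the bluestore_rocksdb_options setting), and the function name and the bloom bits-per-key are assumptions, not taken from this log.

    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    // Hypothetical sketch: express a handful of the logged values through
    // RocksDB's public API. Values are copied from the dump above.
    rocksdb::Options MakeOptionsLikeDump() {
      rocksdb::Options opts;
      opts.write_buffer_size = 16777216;              // 16 MiB memtables
      opts.max_write_buffer_number = 64;
      opts.min_write_buffer_number_to_merge = 6;
      opts.compression = rocksdb::kLZ4Compression;    // Options.compression: LZ4
      opts.num_levels = 7;
      opts.level0_file_num_compaction_trigger = 8;
      opts.level0_slowdown_writes_trigger = 20;
      opts.level0_stop_writes_trigger = 36;
      opts.target_file_size_base = 67108864;          // 64 MiB SSTs
      opts.max_bytes_for_level_base = 1073741824;     // 1 GiB at L1
      opts.max_bytes_for_level_multiplier = 8.0;
      opts.compaction_style = rocksdb::kCompactionStyleLevel;
      opts.compaction_pri = rocksdb::kMinOverlappingRatio;
      opts.ttl = 2592000;                             // 30 days

      rocksdb::BlockBasedTableOptions table;          // "table_factory: BlockBasedTable"
      table.block_size = 4096;
      table.cache_index_and_filter_blocks = true;
      table.pin_top_level_index_and_filter = true;
      table.format_version = 5;
      // The dump only says "filter_policy: bloomfilter"; 10 bits/key is a guess.
      table.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
      opts.table_factory.reset(rocksdb::NewBlockBasedTableFactory(table));
      return opts;
    }

Opening a database with options built this way would make RocksDB print a dump of the same shape as the lines above into its LOG.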
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8d1e44c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f8d0df78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8d1e44c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f8d0df78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8d1e44c60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f8d0df78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8d1e44c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f8d0df7a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
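[editor's aid] One more hedged sketch before the next block: the same dump repeats for [p-0] through [p-2], [O-0], and so on because RocksDB logs its ColumnFamilyOptions once per column family when the database is opened, and BlueStore shards its keyspace across such families. Assuming a plain RocksDB database laid out with the same family names (the path and function name are hypothetical), the open call that triggers one dump per family looks like:

    #include <rocksdb/db.h>
    #include <string>
    #include <vector>

    // Hypothetical sketch: open a database whose column families match the
    // names seen in the dumps above, reusing one ColumnFamilyOptions for all.
    rocksdb::Status OpenWithShardedCFs(const std::string& path, rocksdb::DB** db) {
      rocksdb::DBOptions db_opts;
      rocksdb::ColumnFamilyOptions cf_opts;  // same tuning applied to every shard
      std::vector<rocksdb::ColumnFamilyDescriptor> cfs = {
          {rocksdb::kDefaultColumnFamilyName, cf_opts},
          {"p-0", cf_opts}, {"p-1", cf_opts}, {"p-2", cf_opts},
          {"O-0", cf_opts}, {"O-1", cf_opts},
      };
      std::vector<rocksdb::ColumnFamilyHandle*> handles;
      return rocksdb::DB::Open(db_opts, path, cfs, &handles, db);
    }

Reusing one ColumnFamilyOptions object for every descriptor is why each per-family block below is byte-for-byte identical apart from the cache and factory pointers.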
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8d1e44c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f8d0df7a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8d1e44c80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f8d0df7a30
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/column_family.cc:635] (skipping printing options)
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/column_family.cc:635] (skipping printing options)
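The per-column-family dumps above repeat verbatim across shards, and RocksDB eventually stops and prints "(skipping printing options)" for the remaining families. When comparing dumps like these it is easier to fold the "Options." lines back into per-family dicts and diff them mechanically; a minimal sketch, assuming the journal excerpt has been saved to a file named osd.log (the file name and the parser itself are illustrative, not anything this log prescribes):

    import re
    from collections import defaultdict

    CF_RE = re.compile(r"Options for column family \[([^\]]+)\]")
    OPT_RE = re.compile(r"Options\.([\w.\[\]]+)\s*:\s*(.+)$")

    def parse_cf_options(path="osd.log"):
        # Fold "Options.<key>: <value>" lines under the most recently
        # announced column-family header.
        opts, cf = defaultdict(dict), None
        with open(path) as f:
            for line in f:
                m = CF_RE.search(line)
                if m:
                    cf = m.group(1)
                    continue
                m = OPT_RE.search(line)
                if m and cf:
                    opts[cf][m.group(1)] = m.group(2).strip()
        return opts

    opts = parse_cf_options()
    for key in opts.get("O-1", {}).keys() & opts.get("O-2", {}).keys():
        if opts["O-1"][key] != opts["O-2"][key]:
            print(key, opts["O-1"][key], "!=", opts["O-2"][key])  # silent here: the shards match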
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
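Twelve column families in all: default, three m-* shards, three p-* shards, three O-* shards, and the singleton L and P families, every one at log number 5. That layout is consistent with BlueStore's sharded-RocksDB scheme (the bluestore_rocksdb_cfs option splits the m, p and O key spaces into shards); treating that mapping as background knowledge rather than something this log states, a trivial consistency check over the recovery lines above:

    # Family -> log number, exactly as listed in the recovery lines above.
    families = {
        "default": 5, "m-0": 5, "m-1": 5, "m-2": 5,
        "p-0": 5, "p-1": 5, "p-2": 5,
        "O-0": 5, "O-1": 5, "O-2": 5, "L": 5, "P": 5,
    }
    assert len(families) == 12            # max_column_family is 11, IDs are 0-based
    assert set(families.values()) == {5}  # matches min_log_number_to_keep above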
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 75a8b4a4-fae8-41ae-a03e-61cce931c8d5
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020224308056, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020224309271, "job": 1, "event": "recovery_finished"}
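The two EVENT_LOG_v1 lines carry machine-readable JSON after a fixed marker, so WAL-replay timing can be pulled straight out of the journal; for the pair above, replaying WAL 000031 took 1772020224309271 - 1772020224308056 = 1215 microseconds. A minimal sketch under the same illustrative osd.log assumption as earlier:

    import json
    import re

    EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def events(path="osd.log"):
        # Yield each EVENT_LOG_v1 payload as a parsed dict.
        with open(path) as f:
            for line in f:
                m = EVENT_RE.search(line)
                if m:
                    yield json.loads(m.group(1))

    recovery = [ev for ev in events() if ev.get("event", "").startswith("recovery")]
    for ev in recovery:
        print(ev["time_micros"], ev["event"], ev.get("wal_files", ""))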
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: freelist init
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: freelist _read_cfg
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
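The hex figures in the allocator line decode back to the decimal sizes reported elsewhere in this startup; a quick check, plain arithmetic on the numbers above:

    capacity = 0x4ffc00000   # 21470642176 bytes -> the bdev "open size" a few lines below
    free     = 0x4ffbfd000
    block    = 0x1000        # 4 KiB block size / min_alloc_size

    print(capacity / 2**30)            # ~19.996, logged rounded as "20 GiB"
    print(capacity - free)             # 12288 bytes in use
    print((capacity - free) // block)  # 3 allocated blocks, consistent with the
                                       # near-zero fragmentation figure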
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluefs umount
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d1bf9800 /var/lib/ceph/osd/ceph-2/block) close
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d1bf9800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d1bf9800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d1bf9800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bdev(0x55f8d1bf9800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluefs mount
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluefs mount shared_bdev_used = 27262976
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
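The db_paths figure is not arbitrary: 20397110067 is exactly 95% (integer-truncated) of the 21470642176-byte device opened above, which suggests BlueStore budgets RocksDB 95% of the shared device for both the db and db.slow paths. The interpretation is an inference from the two numbers; the arithmetic itself is exact:

    capacity = 21470642176        # bdev open size above
    print(capacity * 95 // 100)   # 20397110067 -> the db_paths figure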
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: RocksDB version: 7.9.2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Git sha 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Compile date 2025-10-30 15:42:43
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: DB SUMMARY
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: DB Session ID:  7PFWG8CE5PJIJG4T7NUZ
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: CURRENT file:  CURRENT
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: IDENTITY file:  IDENTITY
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                         Options.error_if_exists: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.create_if_missing: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                         Options.paranoid_checks: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                                     Options.env: 0x55f8d2014a80
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                                Options.info_log: 0x55f8d1e44a20
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.max_file_opening_threads: 16
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                              Options.statistics: (nil)
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                               Options.use_fsync: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.max_log_file_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                         Options.allow_fallocate: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.use_direct_reads: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.create_missing_column_families: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                              Options.db_log_dir: 
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                                 Options.wal_dir: db.wal
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.advise_random_on_open: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.write_buffer_manager: 0x55f8d0e59900
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                            Options.rate_limiter: (nil)
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.unordered_write: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                               Options.row_cache: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                              Options.wal_filter: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.allow_ingest_behind: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.two_write_queues: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.manual_wal_flush: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.wal_compression: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.atomic_flush: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.log_readahead_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.allow_data_in_errors: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.db_host_id: __hostname__
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.max_background_jobs: 4
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.max_background_compactions: -1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.max_subcompactions: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.max_open_files: -1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.bytes_per_sync: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.max_background_flushes: -1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Compression algorithms supported:
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     kZSTD supported: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     kXpressCompression supported: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     kBZip2Compression supported: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     kZSTDNotFinalCompression supported: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     kLZ4Compression supported: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     kZlibCompression supported: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     kLZ4HCCompression supported: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     kSnappyCompression supported: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: DMutex implementation: pthread_mutex_t
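One thing worth confirming before the second recovery pass below: every column family is configured with Options.compression: LZ4, and the support matrix just printed shows kLZ4Compression compiled in (while ZSTD is not), so the configured codec is actually usable in this build. A trivial cross-check, with the matrix transcribed from the lines above:

    # 1 = compiled into this rocksdb build, per the lines above.
    supported = {
        "kZSTD": 0, "kXpressCompression": 0, "kBZip2Compression": 0,
        "kZSTDNotFinalCompression": 0, "kLZ4Compression": 1,
        "kZlibCompression": 1, "kLZ4HCCompression": 1, "kSnappyCompression": 1,
    }
    configured = "kLZ4Compression"    # Options.compression: LZ4 in every family dump
    assert supported[configured] == 1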
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8d1e44bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f8d0df78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8d1e44bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f8d0df78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
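The options block above is emitted once per BlueStore RocksDB column-family shard, and the shards that follow ([m-1], [m-2], [p-0], [p-1], [p-2]) repeat the same values. As a reading aid, here is a minimal sketch of the logged core values expressed against the stock rocksdb::ColumnFamilyOptions C++ API (illustrative only: Ceph derives these from its bluestore_rocksdb_options configuration string rather than calling this API directly):

    #include <rocksdb/options.h>

    // Sketch: the same values the OSD logs above, set through the stock
    // RocksDB C++ API. Illustrative, not how Ceph actually builds them.
    rocksdb::ColumnFamilyOptions MakeLoggedCfOptions() {
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16777216;               // 16 MiB memtables
      cf.max_write_buffer_number = 64;
      cf.min_write_buffer_number_to_merge = 6;
      cf.compression = rocksdb::kLZ4Compression;
      cf.bottommost_compression = rocksdb::kDisableCompressionOption;
      cf.num_levels = 7;
      cf.level0_file_num_compaction_trigger = 8;
      cf.level0_slowdown_writes_trigger = 20;
      cf.level0_stop_writes_trigger = 36;
      cf.target_file_size_base = 67108864;           // 64 MiB SSTs
      cf.max_bytes_for_level_base = 1073741824;      // 1 GiB at L1
      cf.max_bytes_for_level_multiplier = 8.0;
      cf.compaction_style = rocksdb::kCompactionStyleLevel;
      cf.compaction_pri = rocksdb::kMinOverlappingRatio;
      cf.ttl = 2592000;                              // 30 days, as logged
      return cf;
    }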
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8d1e44bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f8d0df78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
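The table_factory block above maps onto RocksDB's BlockBasedTableOptions. A sketch of the logged values, with two stated assumptions: BinnedLRUCache is Ceph's internal block-cache implementation, approximated here with the stock NewLRUCache, and the bloom filter's bits-per-key is not recorded in the log, so the 10 below is a guess:

    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    // Sketch of the logged table_factory settings (assumptions noted above).
    void ApplyLoggedTableFactory(rocksdb::ColumnFamilyOptions& cf) {
      rocksdb::BlockBasedTableOptions t;
      t.block_size = 4096;
      t.block_restart_interval = 16;
      t.metadata_block_size = 4096;
      t.format_version = 5;
      t.cache_index_and_filter_blocks = true;
      t.pin_top_level_index_and_filter = true;
      t.whole_key_filtering = true;
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));  // bits/key assumed
      t.block_cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);
      cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
    }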
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8d1e44bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f8d0df78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
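Under the level-compaction settings logged above (level_compaction_dynamic_level_bytes off), per-level size targets grow geometrically from max_bytes_for_level_base by max_bytes_for_level_multiplier, while the L0 triggers provide write backpressure first: compaction starts at 8 L0 files, writes slow at 20 and stop at 36. A small self-contained illustration of the implied level targets:

    #include <cstdint>
    #include <cstdio>

    // Prints the level size targets implied by the logged values:
    // L1 = 1 GiB, L2 = 8 GiB, L3 = 64 GiB, ... L6 = 32768 GiB.
    int main() {
      uint64_t target = 1073741824ULL;  // max_bytes_for_level_base
      const double mult = 8.0;          // max_bytes_for_level_multiplier
      for (int level = 1; level < 7; ++level) {  // num_levels: 7 -> L1..L6
        std::printf("L%d target: %.0f GiB\n", level, target / 1073741824.0);
        target = static_cast<uint64_t>(target * mult);
      }
      return 0;
    }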
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8d1e44bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f8d0df78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
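The compression_opts values repeated above are mostly sentinels: level 32767 matches RocksDB's CompressionOptions::kDefaultCompressionLevel ("use the codec's own default"), and window_bits is a zlib-oriented parameter that the LZ4 codec ignores. A sketch against the stock API (the sentinel is written as a literal here in case the named constant differs across versions):

    #include <rocksdb/options.h>

    // Sketch of the logged compression settings.
    void ApplyLoggedCompression(rocksdb::ColumnFamilyOptions& cf) {
      cf.compression = rocksdb::kLZ4Compression;
      cf.bottommost_compression = rocksdb::kDisableCompressionOption;
      cf.compression_opts.window_bits = -14;  // zlib parameter; unused by LZ4
      cf.compression_opts.level = 32767;      // "default level" sentinel
      cf.compression_opts.strategy = 0;
      cf.compression_opts.parallel_threads = 1;
      cf.compression_opts.enabled = false;    // as logged
    }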
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8d1e44bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f8d0df78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
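The table_properties_collectors line in each block corresponds to RocksDB's delete-triggered compaction helper: an SST file is marked for compaction as soon as any sliding window of 32768 consecutive entries contains at least 16384 tombstones (deletion ratio 0 disables the ratio-based trigger), which keeps tombstone-heavy OSD metadata from lingering until normal compaction reaches it. A sketch using the stock factory:

    #include <rocksdb/options.h>
    #include <rocksdb/utilities/table_properties_collectors.h>

    // Sketch of the logged CompactOnDeletionCollector registration.
    void ApplyLoggedCollector(rocksdb::ColumnFamilyOptions& cf) {
      cf.table_properties_collector_factories.emplace_back(
          rocksdb::NewCompactOnDeletionCollectorFactory(
              /*sliding_window_size=*/32768,
              /*deletion_trigger=*/16384,
              /*deletion_ratio=*/0.0));
    }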
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8d1e44bc0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f8d0df78d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
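[editor's note] The same Options dump repeats below for the object column families [O-0]..[O-2]; the only material difference visible in this capture is the BinnedLRUCache capacity (536870912 bytes there versus 483183820 above). A minimal sketch for pulling the "Options.<name>: <value>" pairs out of a saved journal excerpt so two dumps can be diffed mechanically; the file names and the assumption of one option per record are mine, not from the log:

    import re
    import sys

    # Matches the "rocksdb: ... Options.<name>: <value>" records as printed above;
    # bracketed names like max_bytes_for_level_multiplier_addtl[0] are allowed.
    OPT = re.compile(r'rocksdb:\s+Options\.([\w.\[\]]+):\s+(.*)$')

    def parse_options(lines):
        """Return {option_name: value} for one column-family dump."""
        opts = {}
        for line in lines:
            m = OPT.search(line)
            if m:
                opts[m.group(1)] = m.group(2).strip()
        return opts

    if __name__ == '__main__':
        # Usage (hypothetical files, one saved dump each): print differing keys.
        a = parse_options(open(sys.argv[1]))
        b = parse_options(open(sys.argv[2]))
        for key in sorted(set(a) | set(b)):
            if a.get(key) != b.get(key):
                print(f'{key}: {a.get(key)} -> {b.get(key)}')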
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8d1e450c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f8d0df7a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8d1e450c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f8d0df7a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:           Options.merge_operator: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.compaction_filter_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.sst_partitioner_factory: None
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8d1e450c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f8d0df7a30#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.write_buffer_size: 16777216
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.max_write_buffer_number: 64
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.compression: LZ4
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.num_levels: 7
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.level: 32767
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.compression_opts.strategy: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                  Options.compression_opts.enabled: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.arena_block_size: 1048576
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.disable_auto_compactions: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.inplace_update_support: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.bloom_locality: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                    Options.max_successive_merges: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.paranoid_file_checks: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.force_consistency_checks: 1
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.report_bg_io_stats: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                               Options.ttl: 2592000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                       Options.enable_blob_files: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                           Options.min_blob_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                          Options.blob_file_size: 268435456
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb:                Options.blob_file_starting_level: 0
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
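[editor's note] The recovery lines above enumerate the column families this BlueStore instance shards its keyspace into (default plus m-*, p-*, O-*, L, P — presumably the sharding configured via bluestore_rocksdb_cfs, though the log does not say so). A small sketch, keyed to the version_set.cc records exactly as printed above, for listing that inventory from a saved excerpt; the path is hypothetical:

    import re

    # Keyed to records like:
    #   rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
    CF = re.compile(r'Column family \[([^\]]+)\] \(ID (\d+)\)')

    def column_families(journal_path):
        """Yield (id, name) pairs found in a saved journal excerpt."""
        with open(journal_path) as fh:
            for line in fh:
                m = CF.search(line)
                if m:
                    yield int(m.group(2)), m.group(1)

    # For the excerpt above this prints 0 default, 1 m-0, ... 11 P.
    for cf_id, name in sorted(column_families('osd2-journal.txt')):  # path is hypothetical
        print(cf_id, name)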
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 75a8b4a4-fae8-41ae-a03e-61cce931c8d5
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020224358417, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020224417480, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020224, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75a8b4a4-fae8-41ae-a03e-61cce931c8d5", "db_session_id": "7PFWG8CE5PJIJG4T7NUZ", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020224593346, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020224, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75a8b4a4-fae8-41ae-a03e-61cce931c8d5", "db_session_id": "7PFWG8CE5PJIJG4T7NUZ", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Feb 25 06:50:24 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Feb 25 06:50:24 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:24 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020224759139, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020224, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75a8b4a4-fae8-41ae-a03e-61cce931c8d5", "db_session_id": "7PFWG8CE5PJIJG4T7NUZ", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020224770286, "job": 1, "event": "recovery_finished"}
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Feb 25 06:50:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v30: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Feb 25 06:50:24 np0005629333 podman[89568]: 2026-02-25 11:50:24.698338561 +0000 UTC m=+0.035741617 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:24 np0005629333 podman[89568]: 2026-02-25 11:50:24.812597721 +0000 UTC m=+0.150000767 container create b99561cd3df0dd4d35bc906558b089fbd66531921b82eaf1ab243101acb18b1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hodgkin, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:24 np0005629333 systemd[1]: Started libpod-conmon-b99561cd3df0dd4d35bc906558b089fbd66531921b82eaf1ab243101acb18b1d.scope.
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55f8d205e000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: DB pointer 0x55f8d1ffe000
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
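[editor's note] The _open_db record above echoes the single comma-separated option string BlueStore hands to RocksDB (in Ceph this is normally the bluestore_rocksdb_options setting), which is what expands into the per-column-family Options dumps earlier in this log. A minimal parse of that string; values are kept as strings because entries like compaction_readahead_size=2MB are not plain integers:

    # The option string copied verbatim from the _open_db record above.
    opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
                "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
                "write_buffer_size=16777216,max_background_jobs=4,"
                "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
                "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
                "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

    # Split into a dict; values stay strings since units like "2MB" appear.
    opts = dict(kv.split('=', 1) for kv in opts_str.split(','))
    print(opts['write_buffer_size'])  # 16777216, matching Options.write_buffer_size above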
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.6 total, 0.6 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.059       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.059       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.059       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.059       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.6 total, 0.6 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f8d0df78d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.6 total, 0.6 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f8d0df78d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.6 total, 0.6 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f8d0df78d0#2 capacity: 460.80 MB usag
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: _get_class not permitted to load lua
Feb 25 06:50:24 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: _get_class not permitted to load sdk
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: osd.2 0 load_pgs
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: osd.2 0 load_pgs opened 0 pgs
Feb 25 06:50:24 np0005629333 ceph-osd[89088]: osd.2 0 log_to_monitors true
Feb 25 06:50:24 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2[89084]: 2026-02-25T11:50:24.933+0000 7fbda24eb8c0 -1 osd.2 0 log_to_monitors true
Feb 25 06:50:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Feb 25 06:50:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/69184315,v1:192.168.122.100:6811/69184315]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Feb 25 06:50:24 np0005629333 podman[89568]: 2026-02-25 11:50:24.949936356 +0000 UTC m=+0.287339472 container init b99561cd3df0dd4d35bc906558b089fbd66531921b82eaf1ab243101acb18b1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hodgkin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:50:24 np0005629333 podman[89568]: 2026-02-25 11:50:24.959170758 +0000 UTC m=+0.296573794 container start b99561cd3df0dd4d35bc906558b089fbd66531921b82eaf1ab243101acb18b1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hodgkin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:50:24 np0005629333 compassionate_hodgkin[89584]: 167 167
Feb 25 06:50:24 np0005629333 systemd[1]: libpod-b99561cd3df0dd4d35bc906558b089fbd66531921b82eaf1ab243101acb18b1d.scope: Deactivated successfully.
Feb 25 06:50:24 np0005629333 podman[89568]: 2026-02-25 11:50:24.973130365 +0000 UTC m=+0.310533431 container attach b99561cd3df0dd4d35bc906558b089fbd66531921b82eaf1ab243101acb18b1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:50:24 np0005629333 podman[89568]: 2026-02-25 11:50:24.973932618 +0000 UTC m=+0.311335664 container died b99561cd3df0dd4d35bc906558b089fbd66531921b82eaf1ab243101acb18b1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hodgkin, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 06:50:25 np0005629333 systemd[1]: var-lib-containers-storage-overlay-928c07f06268f541837dca65dfc8ed270c8ae4b3106192d91143a50c03f14b0e-merged.mount: Deactivated successfully.
Feb 25 06:50:25 np0005629333 podman[89568]: 2026-02-25 11:50:25.075081465 +0000 UTC m=+0.412484511 container remove b99561cd3df0dd4d35bc906558b089fbd66531921b82eaf1ab243101acb18b1d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hodgkin, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 06:50:25 np0005629333 systemd[1]: libpod-conmon-b99561cd3df0dd4d35bc906558b089fbd66531921b82eaf1ab243101acb18b1d.scope: Deactivated successfully.
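[Editor's note: the create -> init -> start -> attach -> died -> remove sequence above is a normal short-lived cephadm helper container. A minimal sketch of following such lifecycle events live, assuming only the podman CLI on PATH and Python 3; the filter values are illustrative:]

    # Sketch: stream podman lifecycle events like the ones logged above.
    # Assumes the podman CLI is installed; event field names are read defensively.
    import json
    import subprocess

    proc = subprocess.Popen(
        ["podman", "events", "--format", "json",
         "--filter", "event=create",
         "--filter", "event=died",
         "--filter", "event=remove"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        # Each event carries the container ID, image, and status.
        print(ev.get("Status"), str(ev.get("ID", ""))[:12], ev.get("Image"))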
Feb 25 06:50:25 np0005629333 ceph-mgr[76641]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/194442992; not ready for session (expect reconnect)
Feb 25 06:50:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 25 06:50:25 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Feb 25 06:50:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 25 06:50:25 np0005629333 podman[89642]: 2026-02-25 11:50:25.212727499 +0000 UTC m=+0.043379615 container create 2340a1ea16f166a09df316e65bfa83038043478990fe72f0dcced87ec83a9019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 06:50:25 np0005629333 systemd[1]: Started libpod-conmon-2340a1ea16f166a09df316e65bfa83038043478990fe72f0dcced87ec83a9019.scope.
Feb 25 06:50:25 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27b747c9b961800867454eb51355b666d99b602ea14b768af15cb6f3213facf7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27b747c9b961800867454eb51355b666d99b602ea14b768af15cb6f3213facf7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27b747c9b961800867454eb51355b666d99b602ea14b768af15cb6f3213facf7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27b747c9b961800867454eb51355b666d99b602ea14b768af15cb6f3213facf7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:25 np0005629333 podman[89642]: 2026-02-25 11:50:25.195157459 +0000 UTC m=+0.025809625 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:25 np0005629333 podman[89642]: 2026-02-25 11:50:25.299383713 +0000 UTC m=+0.130035909 container init 2340a1ea16f166a09df316e65bfa83038043478990fe72f0dcced87ec83a9019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 06:50:25 np0005629333 podman[89642]: 2026-02-25 11:50:25.308385289 +0000 UTC m=+0.139037395 container start 2340a1ea16f166a09df316e65bfa83038043478990fe72f0dcced87ec83a9019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 06:50:25 np0005629333 podman[89642]: 2026-02-25 11:50:25.32529804 +0000 UTC m=+0.155950166 container attach 2340a1ea16f166a09df316e65bfa83038043478990fe72f0dcced87ec83a9019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_stonebraker, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:50:25 np0005629333 ceph-osd[88012]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 29.920 iops: 7659.409 elapsed_sec: 0.392
Feb 25 06:50:25 np0005629333 ceph-osd[88012]: log_channel(cluster) log [WRN] : OSD bench result of 7659.409194 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
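[Editor's note: the warning above tells the operator to benchmark the device externally (e.g. with fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd]. A minimal sketch of issuing that override as a mon "config set" command through python-rados, mirroring the mon_command JSON seen throughout this log; the IOPS value is a placeholder to be replaced with the measured figure:]

    # Sketch: apply the mclock IOPS override the warning recommends.
    # Assumes python3-rados and a readable /etc/ceph/ceph.conf.
    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    cmd = {
        "prefix": "config set",
        "who": "osd.1",
        "name": "osd_mclock_max_capacity_iops_hdd",
        "value": "315",  # placeholder: use the fio-measured IOPS capacity
    }
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
    print(ret, outs)
    cluster.shutdown()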
Feb 25 06:50:25 np0005629333 ceph-osd[88012]: osd.1 0 waiting for initial osdmap
Feb 25 06:50:25 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1[88008]: 2026-02-25T11:50:25.473+0000 7f5f48743640 -1 osd.1 0 waiting for initial osdmap
Feb 25 06:50:25 np0005629333 ceph-osd[88012]: osd.1 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Feb 25 06:50:25 np0005629333 ceph-osd[88012]: osd.1 12 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Feb 25 06:50:25 np0005629333 ceph-osd[88012]: osd.1 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Feb 25 06:50:25 np0005629333 ceph-osd[88012]: osd.1 12 check_osdmap_features require_osd_release unknown -> tentacle
Feb 25 06:50:25 np0005629333 ceph-osd[88012]: osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 25 06:50:25 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-1[88008]: 2026-02-25T11:50:25.493+0000 7f5f43548640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 25 06:50:25 np0005629333 ceph-osd[88012]: osd.1 12 set_numa_affinity not setting numa affinity
Feb 25 06:50:25 np0005629333 ceph-osd[88012]: osd.1 12 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Feb 25 06:50:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Feb 25 06:50:25 np0005629333 ceph-mon[76335]: from='osd.2 [v2:192.168.122.100:6810/69184315,v1:192.168.122.100:6811/69184315]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Feb 25 06:50:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/69184315,v1:192.168.122.100:6811/69184315]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Feb 25 06:50:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Feb 25 06:50:25 np0005629333 ceph-mon[76335]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/194442992,v1:192.168.122.100:6807/194442992] boot
Feb 25 06:50:25 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Feb 25 06:50:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Feb 25 06:50:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/69184315,v1:192.168.122.100:6811/69184315]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb 25 06:50:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e13 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Feb 25 06:50:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Feb 25 06:50:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 25 06:50:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 25 06:50:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 25 06:50:25 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 25 06:50:25 np0005629333 ceph-osd[88012]: osd.1 13 state: booting -> active
Feb 25 06:50:25 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:50:25 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Feb 25 06:50:25 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Feb 25 06:50:25 np0005629333 lvm[89735]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 06:50:25 np0005629333 lvm[89738]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 06:50:25 np0005629333 lvm[89738]: VG ceph_vg1 finished
Feb 25 06:50:25 np0005629333 lvm[89735]: VG ceph_vg0 finished
Feb 25 06:50:25 np0005629333 lvm[89740]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 06:50:25 np0005629333 lvm[89740]: VG ceph_vg2 finished
Feb 25 06:50:25 np0005629333 serene_stonebraker[89659]: {}
Feb 25 06:50:26 np0005629333 systemd[1]: libpod-2340a1ea16f166a09df316e65bfa83038043478990fe72f0dcced87ec83a9019.scope: Deactivated successfully.
Feb 25 06:50:26 np0005629333 systemd[1]: libpod-2340a1ea16f166a09df316e65bfa83038043478990fe72f0dcced87ec83a9019.scope: Consumed 1.070s CPU time.
Feb 25 06:50:26 np0005629333 podman[89642]: 2026-02-25 11:50:26.033251812 +0000 UTC m=+0.863903958 container died 2340a1ea16f166a09df316e65bfa83038043478990fe72f0dcced87ec83a9019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_stonebraker, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:50:26 np0005629333 systemd[1]: var-lib-containers-storage-overlay-27b747c9b961800867454eb51355b666d99b602ea14b768af15cb6f3213facf7-merged.mount: Deactivated successfully.
Feb 25 06:50:26 np0005629333 podman[89642]: 2026-02-25 11:50:26.076646056 +0000 UTC m=+0.907298182 container remove 2340a1ea16f166a09df316e65bfa83038043478990fe72f0dcced87ec83a9019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:50:26 np0005629333 systemd[1]: libpod-conmon-2340a1ea16f166a09df316e65bfa83038043478990fe72f0dcced87ec83a9019.scope: Deactivated successfully.
Feb 25 06:50:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:50:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:50:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Feb 25 06:50:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/69184315,v1:192.168.122.100:6811/69184315]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb 25 06:50:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Feb 25 06:50:26 np0005629333 ceph-osd[89088]: osd.2 0 done with init, starting boot process
Feb 25 06:50:26 np0005629333 ceph-osd[89088]: osd.2 0 start_boot
Feb 25 06:50:26 np0005629333 ceph-osd[89088]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Feb 25 06:50:26 np0005629333 ceph-osd[89088]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Feb 25 06:50:26 np0005629333 ceph-osd[89088]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Feb 25 06:50:26 np0005629333 ceph-osd[89088]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Feb 25 06:50:26 np0005629333 ceph-osd[89088]: osd.2 0  bench count 12288000 bsize 4 KiB
Feb 25 06:50:26 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Feb 25 06:50:26 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=13/14 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:50:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 25 06:50:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 25 06:50:26 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 25 06:50:26 np0005629333 ceph-mgr[76641]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/69184315; not ready for session (expect reconnect)
Feb 25 06:50:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 25 06:50:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 25 06:50:26 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 25 06:50:26 np0005629333 ceph-mon[76335]: OSD bench result of 7659.409194 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb 25 06:50:26 np0005629333 ceph-mon[76335]: from='osd.2 [v2:192.168.122.100:6810/69184315,v1:192.168.122.100:6811/69184315]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Feb 25 06:50:26 np0005629333 ceph-mon[76335]: osd.1 [v2:192.168.122.100:6806/194442992,v1:192.168.122.100:6807/194442992] boot
Feb 25 06:50:26 np0005629333 ceph-mon[76335]: from='osd.2 [v2:192.168.122.100:6810/69184315,v1:192.168.122.100:6811/69184315]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Feb 25 06:50:26 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:26 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:26 np0005629333 ceph-mgr[76641]: [devicehealth INFO root] creating main.db for devicehealth
Feb 25 06:50:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v33: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Feb 25 06:50:26 np0005629333 podman[89876]: 2026-02-25 11:50:26.809963589 +0000 UTC m=+0.090759832 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:50:26 np0005629333 podman[89876]: 2026-02-25 11:50:26.914000548 +0000 UTC m=+0.194796741 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 06:50:26 np0005629333 ceph-mgr[76641]: [devicehealth INFO root] Check health
Feb 25 06:50:26 np0005629333 ceph-mgr[76641]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Feb 25 06:50:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Feb 25 06:50:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Feb 25 06:50:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Feb 25 06:50:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Feb 25 06:50:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:50:27 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:50:27 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Feb 25 06:50:27 np0005629333 ceph-mgr[76641]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/69184315; not ready for session (expect reconnect)
Feb 25 06:50:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 25 06:50:27 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 25 06:50:27 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 25 06:50:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Feb 25 06:50:27 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Feb 25 06:50:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 25 06:50:27 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 25 06:50:27 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 25 06:50:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:50:27 np0005629333 ceph-mon[76335]: from='osd.2 [v2:192.168.122.100:6810/69184315,v1:192.168.122.100:6811/69184315]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Feb 25 06:50:27 np0005629333 ceph-mon[76335]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Feb 25 06:50:27 np0005629333 ceph-mon[76335]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Feb 25 06:50:27 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:27 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:27 np0005629333 podman[90100]: 2026-02-25 11:50:27.987477433 +0000 UTC m=+0.038821905 container create 239759786f55fa54140651126d30746250ae802a841f7ea5cb466e048076b04f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_babbage, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True)
Feb 25 06:50:28 np0005629333 systemd[1]: Started libpod-conmon-239759786f55fa54140651126d30746250ae802a841f7ea5cb466e048076b04f.scope.
Feb 25 06:50:28 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:28 np0005629333 podman[90100]: 2026-02-25 11:50:27.969572304 +0000 UTC m=+0.020916746 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:28 np0005629333 podman[90100]: 2026-02-25 11:50:28.07599802 +0000 UTC m=+0.127342532 container init 239759786f55fa54140651126d30746250ae802a841f7ea5cb466e048076b04f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_babbage, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:28 np0005629333 podman[90100]: 2026-02-25 11:50:28.085020637 +0000 UTC m=+0.136365069 container start 239759786f55fa54140651126d30746250ae802a841f7ea5cb466e048076b04f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_babbage, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 06:50:28 np0005629333 affectionate_babbage[90116]: 167 167
Feb 25 06:50:28 np0005629333 systemd[1]: libpod-239759786f55fa54140651126d30746250ae802a841f7ea5cb466e048076b04f.scope: Deactivated successfully.
Feb 25 06:50:28 np0005629333 podman[90100]: 2026-02-25 11:50:28.107970479 +0000 UTC m=+0.159314991 container attach 239759786f55fa54140651126d30746250ae802a841f7ea5cb466e048076b04f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 06:50:28 np0005629333 podman[90100]: 2026-02-25 11:50:28.109088541 +0000 UTC m=+0.160432983 container died 239759786f55fa54140651126d30746250ae802a841f7ea5cb466e048076b04f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 06:50:28 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1821bc5af4f6256331fcb29fac6213d93157677c5188cb260534fa958c18f047-merged.mount: Deactivated successfully.
Feb 25 06:50:28 np0005629333 podman[90100]: 2026-02-25 11:50:28.245912392 +0000 UTC m=+0.297256824 container remove 239759786f55fa54140651126d30746250ae802a841f7ea5cb466e048076b04f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Feb 25 06:50:28 np0005629333 systemd[1]: libpod-conmon-239759786f55fa54140651126d30746250ae802a841f7ea5cb466e048076b04f.scope: Deactivated successfully.
Feb 25 06:50:28 np0005629333 podman[90141]: 2026-02-25 11:50:28.381173498 +0000 UTC m=+0.052162364 container create 8363334aa700709f64d69616e8079b0e6beea33574c3f94771f0fb0b1d3daed0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 25 06:50:28 np0005629333 systemd[1]: Started libpod-conmon-8363334aa700709f64d69616e8079b0e6beea33574c3f94771f0fb0b1d3daed0.scope.
Feb 25 06:50:28 np0005629333 podman[90141]: 2026-02-25 11:50:28.356742664 +0000 UTC m=+0.027731530 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:28 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:28 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33df12b7262d184942c9b3f60e44811844777a0dfe9b9bfd26c044e9d463868/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:28 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33df12b7262d184942c9b3f60e44811844777a0dfe9b9bfd26c044e9d463868/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:28 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33df12b7262d184942c9b3f60e44811844777a0dfe9b9bfd26c044e9d463868/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:28 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33df12b7262d184942c9b3f60e44811844777a0dfe9b9bfd26c044e9d463868/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:28 np0005629333 podman[90141]: 2026-02-25 11:50:28.512832592 +0000 UTC m=+0.183821458 container init 8363334aa700709f64d69616e8079b0e6beea33574c3f94771f0fb0b1d3daed0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_colden, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:50:28 np0005629333 podman[90141]: 2026-02-25 11:50:28.51942425 +0000 UTC m=+0.190413086 container start 8363334aa700709f64d69616e8079b0e6beea33574c3f94771f0fb0b1d3daed0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_colden, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:50:28 np0005629333 podman[90141]: 2026-02-25 11:50:28.538550344 +0000 UTC m=+0.209539190 container attach 8363334aa700709f64d69616e8079b0e6beea33574c3f94771f0fb0b1d3daed0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_colden, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 06:50:28 np0005629333 ceph-mgr[76641]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/69184315; not ready for session (expect reconnect)
Feb 25 06:50:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 25 06:50:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 25 06:50:28 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 25 06:50:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v35: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Feb 25 06:50:28 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.jzfame(active, since 58s)
Feb 25 06:50:29 np0005629333 great_colden[90157]: [
Feb 25 06:50:29 np0005629333 great_colden[90157]:    {
Feb 25 06:50:29 np0005629333 great_colden[90157]:        "available": false,
Feb 25 06:50:29 np0005629333 great_colden[90157]:        "being_replaced": false,
Feb 25 06:50:29 np0005629333 great_colden[90157]:        "ceph_device_lvm": false,
Feb 25 06:50:29 np0005629333 great_colden[90157]:        "device_id": "QEMU_DVD-ROM_QM00001",
Feb 25 06:50:29 np0005629333 great_colden[90157]:        "lsm_data": {},
Feb 25 06:50:29 np0005629333 great_colden[90157]:        "lvs": [],
Feb 25 06:50:29 np0005629333 great_colden[90157]:        "path": "/dev/sr0",
Feb 25 06:50:29 np0005629333 great_colden[90157]:        "rejected_reasons": [
Feb 25 06:50:29 np0005629333 great_colden[90157]:            "Insufficient space (<5GB)",
Feb 25 06:50:29 np0005629333 great_colden[90157]:            "Has a FileSystem"
Feb 25 06:50:29 np0005629333 great_colden[90157]:        ],
Feb 25 06:50:29 np0005629333 great_colden[90157]:        "sys_api": {
Feb 25 06:50:29 np0005629333 great_colden[90157]:            "actuators": null,
Feb 25 06:50:29 np0005629333 great_colden[90157]:            "device_nodes": [
Feb 25 06:50:29 np0005629333 great_colden[90157]:                "sr0"
Feb 25 06:50:29 np0005629333 great_colden[90157]:            ],
Feb 25 06:50:29 np0005629333 great_colden[90157]:            "devname": "sr0",
Feb 25 06:50:29 np0005629333 great_colden[90157]:            "human_readable_size": "482.00 KB",
Feb 25 06:50:29 np0005629333 great_colden[90157]:            "id_bus": "ata",
Feb 25 06:50:29 np0005629333 great_colden[90157]:            "model": "QEMU DVD-ROM",
Feb 25 06:50:29 np0005629333 great_colden[90157]:            "nr_requests": "2",
Feb 25 06:50:29 np0005629333 great_colden[90157]:            "parent": "/dev/sr0",
Feb 25 06:50:29 np0005629333 great_colden[90157]:            "partitions": {},
Feb 25 06:50:29 np0005629333 great_colden[90157]:            "path": "/dev/sr0",
Feb 25 06:50:29 np0005629333 great_colden[90157]:            "removable": "1",
Feb 25 06:50:29 np0005629333 great_colden[90157]:            "rev": "2.5+",
Feb 25 06:50:29 np0005629333 great_colden[90157]:            "ro": "0",
Feb 25 06:50:29 np0005629333 great_colden[90157]:            "rotational": "1",
Feb 25 06:50:29 np0005629333 great_colden[90157]:            "sas_address": "",
Feb 25 06:50:29 np0005629333 great_colden[90157]:            "sas_device_handle": "",
Feb 25 06:50:29 np0005629333 great_colden[90157]:            "scheduler_mode": "mq-deadline",
Feb 25 06:50:29 np0005629333 great_colden[90157]:            "sectors": 0,
Feb 25 06:50:29 np0005629333 great_colden[90157]:            "sectorsize": "2048",
Feb 25 06:50:29 np0005629333 great_colden[90157]:            "size": 493568.0,
Feb 25 06:50:29 np0005629333 great_colden[90157]:            "support_discard": "2048",
Feb 25 06:50:29 np0005629333 great_colden[90157]:            "type": "disk",
Feb 25 06:50:29 np0005629333 great_colden[90157]:            "vendor": "QEMU"
Feb 25 06:50:29 np0005629333 great_colden[90157]:        }
Feb 25 06:50:29 np0005629333 great_colden[90157]:    }
Feb 25 06:50:29 np0005629333 great_colden[90157]: ]
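[Editor's note: the JSON above is the device inventory the helper container emitted; /dev/sr0 is rejected for insufficient space and an existing filesystem. A sketch of filtering such a report for usable OSD candidates, assuming the JSON has been saved to a hypothetical devices.json; the field names match the report shown:]

    # Sketch: pick usable OSD candidates out of a device report like the above.
    import json

    with open("devices.json") as f:
        devices = json.load(f)

    for dev in devices:
        if dev["available"] and not dev["rejected_reasons"]:
            size_gib = dev["sys_api"]["size"] / 1024**3
            print(f"candidate: {dev['path']} ({size_gib:.1f} GiB)")
        else:
            # e.g. /dev/sr0 -> Insufficient space (<5GB), Has a FileSystem
            print(f"skipped: {dev['path']} -> {', '.join(dev['rejected_reasons'])}")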
Feb 25 06:50:29 np0005629333 systemd[1]: libpod-8363334aa700709f64d69616e8079b0e6beea33574c3f94771f0fb0b1d3daed0.scope: Deactivated successfully.
Feb 25 06:50:29 np0005629333 podman[90141]: 2026-02-25 11:50:29.113836033 +0000 UTC m=+0.784824909 container died 8363334aa700709f64d69616e8079b0e6beea33574c3f94771f0fb0b1d3daed0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_colden, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:50:29 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a33df12b7262d184942c9b3f60e44811844777a0dfe9b9bfd26c044e9d463868-merged.mount: Deactivated successfully.
Feb 25 06:50:29 np0005629333 podman[90141]: 2026-02-25 11:50:29.220081854 +0000 UTC m=+0.891070730 container remove 8363334aa700709f64d69616e8079b0e6beea33574c3f94771f0fb0b1d3daed0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_colden, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:50:29 np0005629333 systemd[1]: libpod-conmon-8363334aa700709f64d69616e8079b0e6beea33574c3f94771f0fb0b1d3daed0.scope: Deactivated successfully.
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb 25 06:50:29 np0005629333 ceph-mgr[76641]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43680k
Feb 25 06:50:29 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43680k
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Feb 25 06:50:29 np0005629333 ceph-mgr[76641]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44728729: error parsing value: Value '44728729' is below minimum 939524096
Feb 25 06:50:29 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44728729: error parsing value: Value '44728729' is below minimum 939524096
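[Editor's note: the autotuner divides the host's memory budget across its local OSDs, and the monitor rejects any per-OSD result below a hard minimum of 939524096 bytes (896 MiB), as seen above. A sketch of that clamp; the ratio and host-memory figures are illustrative, only the minimum comes from this log:]

    # Sketch of the osd_memory_target autotune arithmetic implied above.
    OSD_MEMORY_TARGET_MIN = 939524096  # 896 MiB, per the mon's rejection message

    def per_osd_target(total_bytes: int, ratio: float, num_osds: int):
        """Return the per-OSD memory target, or None if it would be rejected."""
        target = int(total_bytes * ratio / num_osds)
        if target < OSD_MEMORY_TARGET_MIN:
            return None  # below the minimum; the mon refuses it, as logged above
        return target

    # With a small budget split three ways, the target lands below the minimum:
    print(per_osd_target(total_bytes=191_694_125, ratio=0.7, num_osds=3))  # None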
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:50:29 np0005629333 ceph-mgr[76641]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/69184315; not ready for session (expect reconnect)
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 25 06:50:29 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:29 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 06:50:29 np0005629333 podman[91018]: 2026-02-25 11:50:29.891013543 +0000 UTC m=+0.040877683 container create 0a40375707c2803517a1225825e2ca30745e3b948ad7ad834a241a63b5a36fa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_bouman, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 06:50:29 np0005629333 systemd[1]: Started libpod-conmon-0a40375707c2803517a1225825e2ca30745e3b948ad7ad834a241a63b5a36fa7.scope.
Feb 25 06:50:29 np0005629333 podman[91018]: 2026-02-25 11:50:29.870260163 +0000 UTC m=+0.020124323 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:29 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:29 np0005629333 ceph-osd[89088]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 33.790 iops: 8650.206 elapsed_sec: 0.347
Feb 25 06:50:29 np0005629333 ceph-osd[89088]: log_channel(cluster) log [WRN] : OSD bench result of 8650.205698 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb 25 06:50:29 np0005629333 ceph-osd[89088]: osd.2 0 waiting for initial osdmap
Feb 25 06:50:29 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2[89084]: 2026-02-25T11:50:29.969+0000 7fbd9e46d640 -1 osd.2 0 waiting for initial osdmap
Feb 25 06:50:29 np0005629333 ceph-osd[89088]: osd.2 15 crush map has features 288514051259236352, adjusting msgr requires for clients
Feb 25 06:50:29 np0005629333 ceph-osd[89088]: osd.2 15 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Feb 25 06:50:29 np0005629333 ceph-osd[89088]: osd.2 15 crush map has features 3314933000852226048, adjusting msgr requires for osds
Feb 25 06:50:29 np0005629333 ceph-osd[89088]: osd.2 15 check_osdmap_features require_osd_release unknown -> tentacle
Feb 25 06:50:29 np0005629333 podman[91018]: 2026-02-25 11:50:29.982778593 +0000 UTC m=+0.132642753 container init 0a40375707c2803517a1225825e2ca30745e3b948ad7ad834a241a63b5a36fa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 06:50:29 np0005629333 podman[91018]: 2026-02-25 11:50:29.987657892 +0000 UTC m=+0.137522032 container start 0a40375707c2803517a1225825e2ca30745e3b948ad7ad834a241a63b5a36fa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_bouman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:50:29 np0005629333 ceph-osd[89088]: osd.2 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 25 06:50:29 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-osd-2[89084]: 2026-02-25T11:50:29.988+0000 7fbd99272640 -1 osd.2 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 25 06:50:29 np0005629333 ceph-osd[89088]: osd.2 15 set_numa_affinity not setting numa affinity
Feb 25 06:50:29 np0005629333 jolly_bouman[91035]: 167 167
Feb 25 06:50:29 np0005629333 podman[91018]: 2026-02-25 11:50:29.990509673 +0000 UTC m=+0.140373833 container attach 0a40375707c2803517a1225825e2ca30745e3b948ad7ad834a241a63b5a36fa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_bouman, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:29 np0005629333 systemd[1]: libpod-0a40375707c2803517a1225825e2ca30745e3b948ad7ad834a241a63b5a36fa7.scope: Deactivated successfully.
Feb 25 06:50:29 np0005629333 ceph-osd[89088]: osd.2 15 _collect_metadata loop5: no unique device id for loop5: fallback method has no model nor serial; no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Feb 25 06:50:29 np0005629333 conmon[91035]: conmon 0a40375707c2803517a1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0a40375707c2803517a1225825e2ca30745e3b948ad7ad834a241a63b5a36fa7.scope/container/memory.events
Feb 25 06:50:29 np0005629333 podman[91018]: 2026-02-25 11:50:29.991883702 +0000 UTC m=+0.141747852 container died 0a40375707c2803517a1225825e2ca30745e3b948ad7ad834a241a63b5a36fa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 06:50:30 np0005629333 systemd[1]: var-lib-containers-storage-overlay-30dd819cf380c1a53b99069e3353157da2e1db9286c7a0f1fb294935dab471c5-merged.mount: Deactivated successfully.
Feb 25 06:50:30 np0005629333 podman[91018]: 2026-02-25 11:50:30.023343846 +0000 UTC m=+0.173207976 container remove 0a40375707c2803517a1225825e2ca30745e3b948ad7ad834a241a63b5a36fa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_bouman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 06:50:30 np0005629333 systemd[1]: libpod-conmon-0a40375707c2803517a1225825e2ca30745e3b948ad7ad834a241a63b5a36fa7.scope: Deactivated successfully.
Feb 25 06:50:30 np0005629333 podman[91059]: 2026-02-25 11:50:30.184651823 +0000 UTC m=+0.054589653 container create 376660baade32d5d805c769e56e87e8b8c6feeb5aee5d5f9026ff4a90f084707 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_goldwasser, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 06:50:30 np0005629333 systemd[1]: Started libpod-conmon-376660baade32d5d805c769e56e87e8b8c6feeb5aee5d5f9026ff4a90f084707.scope.
Feb 25 06:50:30 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8c1ff289e5fbac938cdd9e283ca948d9722eb1ceec132468d7148372d6c45d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8c1ff289e5fbac938cdd9e283ca948d9722eb1ceec132468d7148372d6c45d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8c1ff289e5fbac938cdd9e283ca948d9722eb1ceec132468d7148372d6c45d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8c1ff289e5fbac938cdd9e283ca948d9722eb1ceec132468d7148372d6c45d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8c1ff289e5fbac938cdd9e283ca948d9722eb1ceec132468d7148372d6c45d7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:30 np0005629333 podman[91059]: 2026-02-25 11:50:30.16378856 +0000 UTC m=+0.033726380 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:30 np0005629333 podman[91059]: 2026-02-25 11:50:30.264927966 +0000 UTC m=+0.134865846 container init 376660baade32d5d805c769e56e87e8b8c6feeb5aee5d5f9026ff4a90f084707 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:50:30 np0005629333 podman[91059]: 2026-02-25 11:50:30.270136804 +0000 UTC m=+0.140074594 container start 376660baade32d5d805c769e56e87e8b8c6feeb5aee5d5f9026ff4a90f084707 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:50:30 np0005629333 podman[91059]: 2026-02-25 11:50:30.274162879 +0000 UTC m=+0.144100719 container attach 376660baade32d5d805c769e56e87e8b8c6feeb5aee5d5f9026ff4a90f084707 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_goldwasser, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 06:50:30 np0005629333 youthful_goldwasser[91076]: --> passed data devices: 0 physical, 3 LVM
Feb 25 06:50:30 np0005629333 youthful_goldwasser[91076]: --> All data devices are unavailable
Feb 25 06:50:30 np0005629333 systemd[1]: libpod-376660baade32d5d805c769e56e87e8b8c6feeb5aee5d5f9026ff4a90f084707.scope: Deactivated successfully.
Feb 25 06:50:30 np0005629333 podman[91059]: 2026-02-25 11:50:30.680322789 +0000 UTC m=+0.550260619 container died 376660baade32d5d805c769e56e87e8b8c6feeb5aee5d5f9026ff4a90f084707 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_goldwasser, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 25 06:50:30 np0005629333 ceph-mgr[76641]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/69184315; not ready for session (expect reconnect)
Feb 25 06:50:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 25 06:50:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 25 06:50:30 np0005629333 ceph-mgr[76641]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Feb 25 06:50:30 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f8c1ff289e5fbac938cdd9e283ca948d9722eb1ceec132468d7148372d6c45d7-merged.mount: Deactivated successfully.
Feb 25 06:50:30 np0005629333 podman[91059]: 2026-02-25 11:50:30.734449518 +0000 UTC m=+0.604387338 container remove 376660baade32d5d805c769e56e87e8b8c6feeb5aee5d5f9026ff4a90f084707 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_goldwasser, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:30 np0005629333 systemd[1]: libpod-conmon-376660baade32d5d805c769e56e87e8b8c6feeb5aee5d5f9026ff4a90f084707.scope: Deactivated successfully.
Feb 25 06:50:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v36: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Feb 25 06:50:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_11:50:30
Feb 25 06:50:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 06:50:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Some PGs (1.000000) are inactive; try again later
Feb 25 06:50:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Feb 25 06:50:30 np0005629333 ceph-mon[76335]: Adjusting osd_memory_target on compute-0 to 43680k
Feb 25 06:50:30 np0005629333 ceph-mon[76335]: Unable to set osd_memory_target on compute-0 to 44728729: error parsing value: Value '44728729' is below minimum 939524096
Feb 25 06:50:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e16 e16: 3 total, 3 up, 3 in
Feb 25 06:50:30 np0005629333 ceph-mon[76335]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/69184315,v1:192.168.122.100:6811/69184315] boot
Feb 25 06:50:30 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 3 up, 3 in
Feb 25 06:50:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Feb 25 06:50:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 25 06:50:30 np0005629333 ceph-osd[89088]: osd.2 16 state: booting -> active
Feb 25 06:50:31 np0005629333 podman[91170]: 2026-02-25 11:50:31.180124891 +0000 UTC m=+0.041717118 container create 19125a8c156621e1b7eb20cbfeb6f9055a161d50e62cbb35c349bd8a01e990f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:50:31 np0005629333 systemd[1]: Started libpod-conmon-19125a8c156621e1b7eb20cbfeb6f9055a161d50e62cbb35c349bd8a01e990f8.scope.
Feb 25 06:50:31 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:31 np0005629333 podman[91170]: 2026-02-25 11:50:31.250304076 +0000 UTC m=+0.111896343 container init 19125a8c156621e1b7eb20cbfeb6f9055a161d50e62cbb35c349bd8a01e990f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:31 np0005629333 podman[91170]: 2026-02-25 11:50:31.258407837 +0000 UTC m=+0.120000064 container start 19125a8c156621e1b7eb20cbfeb6f9055a161d50e62cbb35c349bd8a01e990f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:50:31 np0005629333 podman[91170]: 2026-02-25 11:50:31.165258368 +0000 UTC m=+0.026850585 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:31 np0005629333 podman[91170]: 2026-02-25 11:50:31.261940337 +0000 UTC m=+0.123532574 container attach 19125a8c156621e1b7eb20cbfeb6f9055a161d50e62cbb35c349bd8a01e990f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 06:50:31 np0005629333 elastic_sanderson[91186]: 167 167
Feb 25 06:50:31 np0005629333 systemd[1]: libpod-19125a8c156621e1b7eb20cbfeb6f9055a161d50e62cbb35c349bd8a01e990f8.scope: Deactivated successfully.
Feb 25 06:50:31 np0005629333 podman[91170]: 2026-02-25 11:50:31.263248804 +0000 UTC m=+0.124841061 container died 19125a8c156621e1b7eb20cbfeb6f9055a161d50e62cbb35c349bd8a01e990f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:50:31 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6f66fcabd6f6fd9d2987062360716bf915759a474da30552c07222b9358f09c9-merged.mount: Deactivated successfully.
Feb 25 06:50:31 np0005629333 podman[91170]: 2026-02-25 11:50:31.308146551 +0000 UTC m=+0.169738788 container remove 19125a8c156621e1b7eb20cbfeb6f9055a161d50e62cbb35c349bd8a01e990f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:50:31 np0005629333 systemd[1]: libpod-conmon-19125a8c156621e1b7eb20cbfeb6f9055a161d50e62cbb35c349bd8a01e990f8.scope: Deactivated successfully.
Feb 25 06:50:31 np0005629333 podman[91237]: 2026-02-25 11:50:31.492128143 +0000 UTC m=+0.055197841 container create ade91a88eabb7b74e57a89c0b9cd98314836d50e6b73a4fd613354356f2f3d97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_mirzakhani, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:50:31 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 06:50:31 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Feb 25 06:50:31 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 1 (current 1)
Feb 25 06:50:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:50:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:50:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 06:50:31 np0005629333 systemd[1]: Started libpod-conmon-ade91a88eabb7b74e57a89c0b9cd98314836d50e6b73a4fd613354356f2f3d97.scope.
Feb 25 06:50:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 06:50:31 np0005629333 podman[91237]: 2026-02-25 11:50:31.462168821 +0000 UTC m=+0.025238579 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:50:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:50:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:50:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:50:31 np0005629333 python3[91234]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:50:31 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dbc51e58215904c40efe639494a4a41656158b8a063a1987bf908ad3a77c8f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dbc51e58215904c40efe639494a4a41656158b8a063a1987bf908ad3a77c8f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dbc51e58215904c40efe639494a4a41656158b8a063a1987bf908ad3a77c8f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dbc51e58215904c40efe639494a4a41656158b8a063a1987bf908ad3a77c8f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:31 np0005629333 podman[91237]: 2026-02-25 11:50:31.625326961 +0000 UTC m=+0.188396699 container init ade91a88eabb7b74e57a89c0b9cd98314836d50e6b73a4fd613354356f2f3d97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_mirzakhani, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:50:31 np0005629333 podman[91237]: 2026-02-25 11:50:31.633994847 +0000 UTC m=+0.197064545 container start ade91a88eabb7b74e57a89c0b9cd98314836d50e6b73a4fd613354356f2f3d97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 06:50:31 np0005629333 podman[91237]: 2026-02-25 11:50:31.638449864 +0000 UTC m=+0.201519612 container attach ade91a88eabb7b74e57a89c0b9cd98314836d50e6b73a4fd613354356f2f3d97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Feb 25 06:50:31 np0005629333 podman[91257]: 2026-02-25 11:50:31.643853477 +0000 UTC m=+0.054984434 container create d9b27c85e9866efb00540c8fc0e1edd060b1e335e2445e30965b650ec70542a8 (image=quay.io/ceph/ceph:v20, name=wonderful_cori, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 06:50:31 np0005629333 systemd[1]: Started libpod-conmon-d9b27c85e9866efb00540c8fc0e1edd060b1e335e2445e30965b650ec70542a8.scope.
Feb 25 06:50:31 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/253bfc09e6c2b329d7ee7fce24fadb22367d62d78b927fd0d21f0ed2b8184603/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/253bfc09e6c2b329d7ee7fce24fadb22367d62d78b927fd0d21f0ed2b8184603/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/253bfc09e6c2b329d7ee7fce24fadb22367d62d78b927fd0d21f0ed2b8184603/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:31 np0005629333 podman[91257]: 2026-02-25 11:50:31.617359824 +0000 UTC m=+0.028490841 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:50:31 np0005629333 podman[91257]: 2026-02-25 11:50:31.719409556 +0000 UTC m=+0.130540493 container init d9b27c85e9866efb00540c8fc0e1edd060b1e335e2445e30965b650ec70542a8 (image=quay.io/ceph/ceph:v20, name=wonderful_cori, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 06:50:31 np0005629333 podman[91257]: 2026-02-25 11:50:31.725683434 +0000 UTC m=+0.136814361 container start d9b27c85e9866efb00540c8fc0e1edd060b1e335e2445e30965b650ec70542a8 (image=quay.io/ceph/ceph:v20, name=wonderful_cori, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 06:50:31 np0005629333 podman[91257]: 2026-02-25 11:50:31.729179104 +0000 UTC m=+0.140310061 container attach d9b27c85e9866efb00540c8fc0e1edd060b1e335e2445e30965b650ec70542a8 (image=quay.io/ceph/ceph:v20, name=wonderful_cori, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 06:50:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Feb 25 06:50:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Feb 25 06:50:31 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Feb 25 06:50:31 np0005629333 ceph-mon[76335]: OSD bench result of 8650.205698 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb 25 06:50:31 np0005629333 ceph-mon[76335]: osd.2 [v2:192.168.122.100:6810/69184315,v1:192.168.122.100:6811/69184315] boot
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]: {
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:    "0": [
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:        {
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "devices": [
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "/dev/loop3"
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            ],
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "lv_name": "ceph_lv0",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "lv_size": "21470642176",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "name": "ceph_lv0",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "tags": {
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.cluster_name": "ceph",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.crush_device_class": "",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.encrypted": "0",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.objectstore": "bluestore",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.osd_id": "0",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.type": "block",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.vdo": "0",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.with_tpm": "0"
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            },
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "type": "block",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "vg_name": "ceph_vg0"
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:        }
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:    ],
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:    "1": [
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:        {
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "devices": [
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "/dev/loop4"
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            ],
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "lv_name": "ceph_lv1",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "lv_size": "21470642176",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "name": "ceph_lv1",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "tags": {
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.cluster_name": "ceph",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.crush_device_class": "",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.encrypted": "0",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.objectstore": "bluestore",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.osd_id": "1",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.type": "block",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.vdo": "0",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.with_tpm": "0"
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            },
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "type": "block",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "vg_name": "ceph_vg1"
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:        }
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:    ],
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:    "2": [
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:        {
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "devices": [
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "/dev/loop5"
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            ],
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "lv_name": "ceph_lv2",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "lv_size": "21470642176",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "name": "ceph_lv2",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "tags": {
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.cluster_name": "ceph",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.crush_device_class": "",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.encrypted": "0",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.objectstore": "bluestore",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.osd_id": "2",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.type": "block",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.vdo": "0",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:                "ceph.with_tpm": "0"
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            },
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "type": "block",
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:            "vg_name": "ceph_vg2"
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:        }
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]:    ]
Feb 25 06:50:31 np0005629333 nifty_mirzakhani[91253]: }
Feb 25 06:50:31 np0005629333 systemd[1]: libpod-ade91a88eabb7b74e57a89c0b9cd98314836d50e6b73a4fd613354356f2f3d97.scope: Deactivated successfully.
Feb 25 06:50:31 np0005629333 podman[91237]: 2026-02-25 11:50:31.924237681 +0000 UTC m=+0.487307349 container died ade91a88eabb7b74e57a89c0b9cd98314836d50e6b73a4fd613354356f2f3d97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_mirzakhani, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 06:50:31 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6dbc51e58215904c40efe639494a4a41656158b8a063a1987bf908ad3a77c8f5-merged.mount: Deactivated successfully.
Feb 25 06:50:31 np0005629333 podman[91237]: 2026-02-25 11:50:31.96536165 +0000 UTC m=+0.528431308 container remove ade91a88eabb7b74e57a89c0b9cd98314836d50e6b73a4fd613354356f2f3d97 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:50:31 np0005629333 systemd[1]: libpod-conmon-ade91a88eabb7b74e57a89c0b9cd98314836d50e6b73a4fd613354356f2f3d97.scope: Deactivated successfully.
Feb 25 06:50:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb 25 06:50:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2539962776' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 25 06:50:32 np0005629333 wonderful_cori[91276]: 
Feb 25 06:50:32 np0005629333 wonderful_cori[91276]: {"fsid":"8ac33163-6221-5d58-9a39-8b6933fe7762","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":79,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":17,"num_osds":3,"num_up_osds":3,"osd_up_since":1772020230,"num_in_osds":3,"osd_in_since":1772020207,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"creating+peering","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":474533888,"bytes_avail":42466750464,"bytes_total":42941284352,"inactive_pgs_ratio":1},"fsmap":{"epoch":1,"btime":"2026-02-25T11:49:10:477435+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-02-25T11:49:10.480293+0000","services":{}},"progress_events":{}}
Feb 25 06:50:32 np0005629333 systemd[1]: libpod-d9b27c85e9866efb00540c8fc0e1edd060b1e335e2445e30965b650ec70542a8.scope: Deactivated successfully.
Feb 25 06:50:32 np0005629333 podman[91257]: 2026-02-25 11:50:32.311914525 +0000 UTC m=+0.723045452 container died d9b27c85e9866efb00540c8fc0e1edd060b1e335e2445e30965b650ec70542a8 (image=quay.io/ceph/ceph:v20, name=wonderful_cori, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:32 np0005629333 systemd[1]: var-lib-containers-storage-overlay-253bfc09e6c2b329d7ee7fce24fadb22367d62d78b927fd0d21f0ed2b8184603-merged.mount: Deactivated successfully.
Feb 25 06:50:32 np0005629333 podman[91257]: 2026-02-25 11:50:32.347172518 +0000 UTC m=+0.758303435 container remove d9b27c85e9866efb00540c8fc0e1edd060b1e335e2445e30965b650ec70542a8 (image=quay.io/ceph/ceph:v20, name=wonderful_cori, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 06:50:32 np0005629333 systemd[1]: libpod-conmon-d9b27c85e9866efb00540c8fc0e1edd060b1e335e2445e30965b650ec70542a8.scope: Deactivated successfully.
Feb 25 06:50:32 np0005629333 podman[91391]: 2026-02-25 11:50:32.444824184 +0000 UTC m=+0.058918656 container create e54cc02f70cc37296ba454c13d0d19011163a797019ac2a67692d4c54a615d4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_austin, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Feb 25 06:50:32 np0005629333 systemd[1]: Started libpod-conmon-e54cc02f70cc37296ba454c13d0d19011163a797019ac2a67692d4c54a615d4e.scope.
Feb 25 06:50:32 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:32 np0005629333 podman[91391]: 2026-02-25 11:50:32.511669765 +0000 UTC m=+0.125764217 container init e54cc02f70cc37296ba454c13d0d19011163a797019ac2a67692d4c54a615d4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_austin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:50:32 np0005629333 podman[91391]: 2026-02-25 11:50:32.417853567 +0000 UTC m=+0.031948069 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:32 np0005629333 podman[91391]: 2026-02-25 11:50:32.519266071 +0000 UTC m=+0.133360493 container start e54cc02f70cc37296ba454c13d0d19011163a797019ac2a67692d4c54a615d4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_austin, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:50:32 np0005629333 podman[91391]: 2026-02-25 11:50:32.522291887 +0000 UTC m=+0.136386329 container attach e54cc02f70cc37296ba454c13d0d19011163a797019ac2a67692d4c54a615d4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 06:50:32 np0005629333 wizardly_austin[91408]: 167 167
Feb 25 06:50:32 np0005629333 systemd[1]: libpod-e54cc02f70cc37296ba454c13d0d19011163a797019ac2a67692d4c54a615d4e.scope: Deactivated successfully.
Feb 25 06:50:32 np0005629333 podman[91391]: 2026-02-25 11:50:32.524416108 +0000 UTC m=+0.138510570 container died e54cc02f70cc37296ba454c13d0d19011163a797019ac2a67692d4c54a615d4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:50:32 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5f2bd03e96c56a50048bc95a0714627c7177ce72f8ec092d6beabadf94a527a5-merged.mount: Deactivated successfully.
Feb 25 06:50:32 np0005629333 podman[91391]: 2026-02-25 11:50:32.567881334 +0000 UTC m=+0.181975756 container remove e54cc02f70cc37296ba454c13d0d19011163a797019ac2a67692d4c54a615d4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_austin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:50:32 np0005629333 systemd[1]: libpod-conmon-e54cc02f70cc37296ba454c13d0d19011163a797019ac2a67692d4c54a615d4e.scope: Deactivated successfully.
Feb 25 06:50:32 np0005629333 podman[91455]: 2026-02-25 11:50:32.703391687 +0000 UTC m=+0.049000044 container create 9be07fa4f3581129549de099e3dc9069c986ee6b76594ee29ab8a20a6938d900 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 06:50:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e17 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:50:32 np0005629333 systemd[1]: Started libpod-conmon-9be07fa4f3581129549de099e3dc9069c986ee6b76594ee29ab8a20a6938d900.scope.
Feb 25 06:50:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v39: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Feb 25 06:50:32 np0005629333 python3[91463]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
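[editor's note: for readability, the podman invocation flattened into _raw_params above, reflowed as the shell command ansible executed (same arguments, only line-wrapped):

  podman run --rm --net=host --ipc=host \
    --volume /etc/ceph:/etc/ceph:z \
    --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
    --entrypoint ceph quay.io/ceph/ceph:v20 \
    --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 \
    -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
    osd pool create vms replicated_rule --autoscale-mode on

Note how the monitor dispatches this at 06:50:33: the bare replicated_rule positional argument is recorded in the erasure_code_profile field of the mon_command (see the handle_command entry below). The same pattern repeats for the volumes and backups pools later in this section.]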
Feb 25 06:50:32 np0005629333 podman[91455]: 2026-02-25 11:50:32.681446453 +0000 UTC m=+0.027054820 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:32 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edcdac1eab2e5858f519c2758e821de51a939394f1ab46002ea87ac501199ba6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edcdac1eab2e5858f519c2758e821de51a939394f1ab46002ea87ac501199ba6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edcdac1eab2e5858f519c2758e821de51a939394f1ab46002ea87ac501199ba6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edcdac1eab2e5858f519c2758e821de51a939394f1ab46002ea87ac501199ba6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:32 np0005629333 podman[91455]: 2026-02-25 11:50:32.810962166 +0000 UTC m=+0.156570503 container init 9be07fa4f3581129549de099e3dc9069c986ee6b76594ee29ab8a20a6938d900 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:50:32 np0005629333 podman[91455]: 2026-02-25 11:50:32.818779208 +0000 UTC m=+0.164387535 container start 9be07fa4f3581129549de099e3dc9069c986ee6b76594ee29ab8a20a6938d900 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_jackson, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 06:50:32 np0005629333 podman[91455]: 2026-02-25 11:50:32.828972098 +0000 UTC m=+0.174580455 container attach 9be07fa4f3581129549de099e3dc9069c986ee6b76594ee29ab8a20a6938d900 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:50:32 np0005629333 podman[91476]: 2026-02-25 11:50:32.848332809 +0000 UTC m=+0.055595462 container create a70bd3372b304a93c50bf603deaeb256dbb697444744da71da4fe4acfce67214 (image=quay.io/ceph/ceph:v20, name=dreamy_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:50:32 np0005629333 systemd[1]: Started libpod-conmon-a70bd3372b304a93c50bf603deaeb256dbb697444744da71da4fe4acfce67214.scope.
Feb 25 06:50:32 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f807a08addc0e6d8a0f12e11fefd668534cb836baf5e23532947dd9bf1b2c307/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f807a08addc0e6d8a0f12e11fefd668534cb836baf5e23532947dd9bf1b2c307/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:32 np0005629333 podman[91476]: 2026-02-25 11:50:32.913558624 +0000 UTC m=+0.120821317 container init a70bd3372b304a93c50bf603deaeb256dbb697444744da71da4fe4acfce67214 (image=quay.io/ceph/ceph:v20, name=dreamy_mccarthy, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:50:32 np0005629333 podman[91476]: 2026-02-25 11:50:32.818521121 +0000 UTC m=+0.025783794 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:50:32 np0005629333 podman[91476]: 2026-02-25 11:50:32.917887777 +0000 UTC m=+0.125150430 container start a70bd3372b304a93c50bf603deaeb256dbb697444744da71da4fe4acfce67214 (image=quay.io/ceph/ceph:v20, name=dreamy_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:50:32 np0005629333 podman[91476]: 2026-02-25 11:50:32.925520984 +0000 UTC m=+0.132783637 container attach a70bd3372b304a93c50bf603deaeb256dbb697444744da71da4fe4acfce67214 (image=quay.io/ceph/ceph:v20, name=dreamy_mccarthy, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 25 06:50:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3725943411' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 25 06:50:33 np0005629333 lvm[91593]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 06:50:33 np0005629333 lvm[91594]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 06:50:33 np0005629333 lvm[91593]: VG ceph_vg0 finished
Feb 25 06:50:33 np0005629333 lvm[91594]: VG ceph_vg1 finished
Feb 25 06:50:33 np0005629333 lvm[91596]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 06:50:33 np0005629333 lvm[91596]: VG ceph_vg2 finished
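[editor's note: the lvm[...] lines above are event-driven autoactivation firing as the loop-device PVs come online and complete their volume groups. A quick way to confirm the three VGs assembled, assuming the same ceph_vg0..ceph_vg2 naming shown here:

  vgs ceph_vg0 ceph_vg1 ceph_vg2
  pvs /dev/loop3 /dev/loop4 /dev/loop5]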
Feb 25 06:50:33 np0005629333 sleepy_jackson[91473]: {}
Feb 25 06:50:33 np0005629333 systemd[1]: libpod-9be07fa4f3581129549de099e3dc9069c986ee6b76594ee29ab8a20a6938d900.scope: Deactivated successfully.
Feb 25 06:50:33 np0005629333 systemd[1]: libpod-9be07fa4f3581129549de099e3dc9069c986ee6b76594ee29ab8a20a6938d900.scope: Consumed 1.068s CPU time.
Feb 25 06:50:33 np0005629333 podman[91599]: 2026-02-25 11:50:33.691800804 +0000 UTC m=+0.037411184 container died 9be07fa4f3581129549de099e3dc9069c986ee6b76594ee29ab8a20a6938d900 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:50:33 np0005629333 systemd[1]: var-lib-containers-storage-overlay-edcdac1eab2e5858f519c2758e821de51a939394f1ab46002ea87ac501199ba6-merged.mount: Deactivated successfully.
Feb 25 06:50:33 np0005629333 podman[91599]: 2026-02-25 11:50:33.745858132 +0000 UTC m=+0.091468482 container remove 9be07fa4f3581129549de099e3dc9069c986ee6b76594ee29ab8a20a6938d900 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_jackson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:50:33 np0005629333 systemd[1]: libpod-conmon-9be07fa4f3581129549de099e3dc9069c986ee6b76594ee29ab8a20a6938d900.scope: Deactivated successfully.
Feb 25 06:50:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:50:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:50:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Feb 25 06:50:33 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/3725943411' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 25 06:50:33 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:33 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3725943411' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 25 06:50:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Feb 25 06:50:33 np0005629333 dreamy_mccarthy[91494]: pool 'vms' created
Feb 25 06:50:33 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Feb 25 06:50:33 np0005629333 systemd[1]: libpod-a70bd3372b304a93c50bf603deaeb256dbb697444744da71da4fe4acfce67214.scope: Deactivated successfully.
Feb 25 06:50:33 np0005629333 conmon[91494]: conmon a70bd3372b304a93c50b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a70bd3372b304a93c50bf603deaeb256dbb697444744da71da4fe4acfce67214.scope/container/memory.events
Feb 25 06:50:33 np0005629333 podman[91476]: 2026-02-25 11:50:33.892834141 +0000 UTC m=+1.100096814 container died a70bd3372b304a93c50bf603deaeb256dbb697444744da71da4fe4acfce67214 (image=quay.io/ceph/ceph:v20, name=dreamy_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 06:50:33 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f807a08addc0e6d8a0f12e11fefd668534cb836baf5e23532947dd9bf1b2c307-merged.mount: Deactivated successfully.
Feb 25 06:50:33 np0005629333 podman[91476]: 2026-02-25 11:50:33.947012892 +0000 UTC m=+1.154275535 container remove a70bd3372b304a93c50bf603deaeb256dbb697444744da71da4fe4acfce67214 (image=quay.io/ceph/ceph:v20, name=dreamy_mccarthy, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:50:33 np0005629333 systemd[1]: libpod-conmon-a70bd3372b304a93c50bf603deaeb256dbb697444744da71da4fe4acfce67214.scope: Deactivated successfully.
Feb 25 06:50:33 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:50:34 np0005629333 python3[91726]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:50:34 np0005629333 podman[91743]: 2026-02-25 11:50:34.320827192 +0000 UTC m=+0.034389579 container create 9028aa1a3083c2284e5bb034e63247889252c659be51f0f0c6479351bac3ca07 (image=quay.io/ceph/ceph:v20, name=clever_buck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:34 np0005629333 systemd[1]: Started libpod-conmon-9028aa1a3083c2284e5bb034e63247889252c659be51f0f0c6479351bac3ca07.scope.
Feb 25 06:50:34 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/909cf39424117848338c62e94b2c9cc4a06dbe5f44578cd7a6c0ec6557346121/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/909cf39424117848338c62e94b2c9cc4a06dbe5f44578cd7a6c0ec6557346121/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:34 np0005629333 podman[91743]: 2026-02-25 11:50:34.305110305 +0000 UTC m=+0.018672712 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:50:34 np0005629333 podman[91743]: 2026-02-25 11:50:34.412994352 +0000 UTC m=+0.126556779 container init 9028aa1a3083c2284e5bb034e63247889252c659be51f0f0c6479351bac3ca07 (image=quay.io/ceph/ceph:v20, name=clever_buck, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 06:50:34 np0005629333 podman[91743]: 2026-02-25 11:50:34.419491817 +0000 UTC m=+0.133054214 container start 9028aa1a3083c2284e5bb034e63247889252c659be51f0f0c6479351bac3ca07 (image=quay.io/ceph/ceph:v20, name=clever_buck, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 06:50:34 np0005629333 podman[91743]: 2026-02-25 11:50:34.424256302 +0000 UTC m=+0.137818689 container attach 9028aa1a3083c2284e5bb034e63247889252c659be51f0f0c6479351bac3ca07 (image=quay.io/ceph/ceph:v20, name=clever_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 06:50:34 np0005629333 podman[91788]: 2026-02-25 11:50:34.479205105 +0000 UTC m=+0.063723904 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 06:50:34 np0005629333 podman[91788]: 2026-02-25 11:50:34.582303026 +0000 UTC m=+0.166821795 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 06:50:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v41: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Feb 25 06:50:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 25 06:50:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1012593359' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 25 06:50:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Feb 25 06:50:34 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/3725943411' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 25 06:50:34 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/1012593359' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 25 06:50:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1012593359' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 25 06:50:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Feb 25 06:50:34 np0005629333 clever_buck[91772]: pool 'volumes' created
Feb 25 06:50:34 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Feb 25 06:50:34 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 19 pg[3.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:50:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [2] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:50:34 np0005629333 systemd[1]: libpod-9028aa1a3083c2284e5bb034e63247889252c659be51f0f0c6479351bac3ca07.scope: Deactivated successfully.
Feb 25 06:50:34 np0005629333 podman[91743]: 2026-02-25 11:50:34.928834871 +0000 UTC m=+0.642397348 container died 9028aa1a3083c2284e5bb034e63247889252c659be51f0f0c6479351bac3ca07 (image=quay.io/ceph/ceph:v20, name=clever_buck, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:50:34 np0005629333 systemd[1]: var-lib-containers-storage-overlay-909cf39424117848338c62e94b2c9cc4a06dbe5f44578cd7a6c0ec6557346121-merged.mount: Deactivated successfully.
Feb 25 06:50:34 np0005629333 podman[91743]: 2026-02-25 11:50:34.974672854 +0000 UTC m=+0.688235251 container remove 9028aa1a3083c2284e5bb034e63247889252c659be51f0f0c6479351bac3ca07 (image=quay.io/ceph/ceph:v20, name=clever_buck, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:34 np0005629333 systemd[1]: libpod-conmon-9028aa1a3083c2284e5bb034e63247889252c659be51f0f0c6479351bac3ca07.scope: Deactivated successfully.
Feb 25 06:50:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:50:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:50:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:50:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:50:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 06:50:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:50:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 06:50:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 06:50:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 06:50:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 06:50:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 06:50:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:50:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:50:35 np0005629333 python3[91998]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:50:35 np0005629333 podman[92023]: 2026-02-25 11:50:35.349710439 +0000 UTC m=+0.053821962 container create 3806b5bc0fd69bc23a3cad60b38e2711441116abcee1a68db6b660b22925d89e (image=quay.io/ceph/ceph:v20, name=charming_shaw, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 06:50:35 np0005629333 systemd[1]: Started libpod-conmon-3806b5bc0fd69bc23a3cad60b38e2711441116abcee1a68db6b660b22925d89e.scope.
Feb 25 06:50:35 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/752ced155618364c9feda1cc6f4f43ca7ec2207a74cbb6a79ca7b039a622bd63/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/752ced155618364c9feda1cc6f4f43ca7ec2207a74cbb6a79ca7b039a622bd63/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:35 np0005629333 podman[92023]: 2026-02-25 11:50:35.323562435 +0000 UTC m=+0.027674038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:50:35 np0005629333 podman[92023]: 2026-02-25 11:50:35.428944622 +0000 UTC m=+0.133056135 container init 3806b5bc0fd69bc23a3cad60b38e2711441116abcee1a68db6b660b22925d89e (image=quay.io/ceph/ceph:v20, name=charming_shaw, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:50:35 np0005629333 podman[92023]: 2026-02-25 11:50:35.433269455 +0000 UTC m=+0.137380968 container start 3806b5bc0fd69bc23a3cad60b38e2711441116abcee1a68db6b660b22925d89e (image=quay.io/ceph/ceph:v20, name=charming_shaw, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:50:35 np0005629333 podman[92023]: 2026-02-25 11:50:35.438606157 +0000 UTC m=+0.142717680 container attach 3806b5bc0fd69bc23a3cad60b38e2711441116abcee1a68db6b660b22925d89e (image=quay.io/ceph/ceph:v20, name=charming_shaw, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 06:50:35 np0005629333 podman[92098]: 2026-02-25 11:50:35.628821506 +0000 UTC m=+0.042175220 container create 4d0b7f71183942dc9a88bac85523298a6117a8933aeee68023b6437f187a8350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 06:50:35 np0005629333 systemd[1]: Started libpod-conmon-4d0b7f71183942dc9a88bac85523298a6117a8933aeee68023b6437f187a8350.scope.
Feb 25 06:50:35 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:35 np0005629333 podman[92098]: 2026-02-25 11:50:35.690639464 +0000 UTC m=+0.103993198 container init 4d0b7f71183942dc9a88bac85523298a6117a8933aeee68023b6437f187a8350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ramanujan, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 06:50:35 np0005629333 podman[92098]: 2026-02-25 11:50:35.69614027 +0000 UTC m=+0.109494024 container start 4d0b7f71183942dc9a88bac85523298a6117a8933aeee68023b6437f187a8350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ramanujan, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 06:50:35 np0005629333 boring_ramanujan[92115]: 167 167
Feb 25 06:50:35 np0005629333 systemd[1]: libpod-4d0b7f71183942dc9a88bac85523298a6117a8933aeee68023b6437f187a8350.scope: Deactivated successfully.
Feb 25 06:50:35 np0005629333 podman[92098]: 2026-02-25 11:50:35.703556491 +0000 UTC m=+0.116910225 container attach 4d0b7f71183942dc9a88bac85523298a6117a8933aeee68023b6437f187a8350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ramanujan, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 06:50:35 np0005629333 podman[92098]: 2026-02-25 11:50:35.704041455 +0000 UTC m=+0.117395169 container died 4d0b7f71183942dc9a88bac85523298a6117a8933aeee68023b6437f187a8350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ramanujan, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:50:35 np0005629333 podman[92098]: 2026-02-25 11:50:35.607761877 +0000 UTC m=+0.021115611 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:35 np0005629333 systemd[1]: var-lib-containers-storage-overlay-42d928ef167ee136bb5561d47f5d931784233edf7f94fe775f010f37942dfdcd-merged.mount: Deactivated successfully.
Feb 25 06:50:35 np0005629333 podman[92098]: 2026-02-25 11:50:35.744918917 +0000 UTC m=+0.158272621 container remove 4d0b7f71183942dc9a88bac85523298a6117a8933aeee68023b6437f187a8350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ramanujan, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 06:50:35 np0005629333 systemd[1]: libpod-conmon-4d0b7f71183942dc9a88bac85523298a6117a8933aeee68023b6437f187a8350.scope: Deactivated successfully.
Feb 25 06:50:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 25 06:50:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/774393956' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 25 06:50:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Feb 25 06:50:35 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/1012593359' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 25 06:50:35 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:35 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:35 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:50:35 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:35 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 06:50:35 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/774393956' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 25 06:50:35 np0005629333 podman[92142]: 2026-02-25 11:50:35.917126314 +0000 UTC m=+0.069728983 container create 0c68b31053616e71135b6f0a8883110ac8e4e3abf6529c136f26fddfcfb3996f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:50:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/774393956' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 25 06:50:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Feb 25 06:50:35 np0005629333 charming_shaw[92065]: pool 'backups' created
Feb 25 06:50:35 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Feb 25 06:50:35 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 20 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:50:35 np0005629333 systemd[1]: Started libpod-conmon-0c68b31053616e71135b6f0a8883110ac8e4e3abf6529c136f26fddfcfb3996f.scope.
Feb 25 06:50:35 np0005629333 systemd[1]: libpod-3806b5bc0fd69bc23a3cad60b38e2711441116abcee1a68db6b660b22925d89e.scope: Deactivated successfully.
Feb 25 06:50:35 np0005629333 podman[92023]: 2026-02-25 11:50:35.961247469 +0000 UTC m=+0.665359022 container died 3806b5bc0fd69bc23a3cad60b38e2711441116abcee1a68db6b660b22925d89e (image=quay.io/ceph/ceph:v20, name=charming_shaw, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:50:35 np0005629333 podman[92142]: 2026-02-25 11:50:35.886157944 +0000 UTC m=+0.038760673 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:35 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada19698d8e076da4f7016e89504f579ad8444fc593a63351d9fb2161c7b51b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada19698d8e076da4f7016e89504f579ad8444fc593a63351d9fb2161c7b51b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada19698d8e076da4f7016e89504f579ad8444fc593a63351d9fb2161c7b51b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada19698d8e076da4f7016e89504f579ad8444fc593a63351d9fb2161c7b51b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada19698d8e076da4f7016e89504f579ad8444fc593a63351d9fb2161c7b51b1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:35 np0005629333 systemd[1]: var-lib-containers-storage-overlay-752ced155618364c9feda1cc6f4f43ca7ec2207a74cbb6a79ca7b039a622bd63-merged.mount: Deactivated successfully.
Feb 25 06:50:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 20 pg[4.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:50:36 np0005629333 podman[92142]: 2026-02-25 11:50:36.027070121 +0000 UTC m=+0.179672810 container init 0c68b31053616e71135b6f0a8883110ac8e4e3abf6529c136f26fddfcfb3996f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_wu, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:50:36 np0005629333 podman[92142]: 2026-02-25 11:50:36.034881733 +0000 UTC m=+0.187484402 container start 0c68b31053616e71135b6f0a8883110ac8e4e3abf6529c136f26fddfcfb3996f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_wu, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 06:50:36 np0005629333 podman[92142]: 2026-02-25 11:50:36.039309339 +0000 UTC m=+0.191911988 container attach 0c68b31053616e71135b6f0a8883110ac8e4e3abf6529c136f26fddfcfb3996f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_wu, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:36 np0005629333 podman[92023]: 2026-02-25 11:50:36.049614492 +0000 UTC m=+0.753726005 container remove 3806b5bc0fd69bc23a3cad60b38e2711441116abcee1a68db6b660b22925d89e (image=quay.io/ceph/ceph:v20, name=charming_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 06:50:36 np0005629333 systemd[1]: libpod-conmon-3806b5bc0fd69bc23a3cad60b38e2711441116abcee1a68db6b660b22925d89e.scope: Deactivated successfully.
Feb 25 06:50:36 np0005629333 python3[92200]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
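
The pool-creation tasks logged in this section all have the same shape: ansible shells out to 'podman run --rm --entrypoint ceph', so the container is only a throwaway wrapper around the ceph CLI. A minimal sketch of the equivalent direct call, with the fsid, config, and keyring paths taken verbatim from the ansible line above; note that the mon audit records show the positional rule name 'replicated_rule' bound to the 'erasure_code_profile' field, which is how the CLI matched that positional argument:

    # same operation the wrapper containers perform, minus the podman wrapper
    ceph --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 \
         -c /etc/ceph/ceph.conf \
         -k /etc/ceph/ceph.client.admin.keyring \
         osd pool create images replicated_rule --autoscale-mode on
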
Feb 25 06:50:36 np0005629333 adoring_wu[92159]: --> passed data devices: 0 physical, 3 LVM
Feb 25 06:50:36 np0005629333 adoring_wu[92159]: --> All data devices are unavailable
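
The two adoring_wu lines are a ceph-volume batch report deciding there is nothing to provision: the three candidate devices are LVM volumes that already carry ceph OSD tags (see the lvm list dump further below), so batch reports them unavailable rather than re-preparing them. A hedged sketch for reproducing the availability check by hand; the podman flags here are assumptions for this environment, not copied from the log:

    # how ceph-volume classifies devices; LVs already tagged for ceph are
    # rejected as data devices, hence "All data devices are unavailable"
    podman run --rm --privileged --net=host \
        --volume /dev:/dev \
        --volume /etc/ceph:/etc/ceph:z \
        --entrypoint ceph-volume quay.io/ceph/ceph:v20 \
        inventory --format json
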
Feb 25 06:50:36 np0005629333 podman[92213]: 2026-02-25 11:50:36.41666295 +0000 UTC m=+0.059912185 container create 82ec07d81fc47fe23310d7d1165ebbbf32ffe4bf016fc656940110aa81874e52 (image=quay.io/ceph/ceph:v20, name=stupefied_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 06:50:36 np0005629333 systemd[1]: libpod-0c68b31053616e71135b6f0a8883110ac8e4e3abf6529c136f26fddfcfb3996f.scope: Deactivated successfully.
Feb 25 06:50:36 np0005629333 podman[92142]: 2026-02-25 11:50:36.432274154 +0000 UTC m=+0.584876813 container died 0c68b31053616e71135b6f0a8883110ac8e4e3abf6529c136f26fddfcfb3996f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_wu, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 25 06:50:36 np0005629333 systemd[1]: Started libpod-conmon-82ec07d81fc47fe23310d7d1165ebbbf32ffe4bf016fc656940110aa81874e52.scope.
Feb 25 06:50:36 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ada19698d8e076da4f7016e89504f579ad8444fc593a63351d9fb2161c7b51b1-merged.mount: Deactivated successfully.
Feb 25 06:50:36 np0005629333 podman[92213]: 2026-02-25 11:50:36.389422805 +0000 UTC m=+0.032672080 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:50:36 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000735c699912323f1bf24f232fc757ef360e90109cc25361b1e9d2d4c35f89d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000735c699912323f1bf24f232fc757ef360e90109cc25361b1e9d2d4c35f89d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:36 np0005629333 podman[92142]: 2026-02-25 11:50:36.497926951 +0000 UTC m=+0.650529600 container remove 0c68b31053616e71135b6f0a8883110ac8e4e3abf6529c136f26fddfcfb3996f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_wu, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 06:50:36 np0005629333 systemd[1]: libpod-conmon-0c68b31053616e71135b6f0a8883110ac8e4e3abf6529c136f26fddfcfb3996f.scope: Deactivated successfully.
Feb 25 06:50:36 np0005629333 podman[92213]: 2026-02-25 11:50:36.510124717 +0000 UTC m=+0.153374012 container init 82ec07d81fc47fe23310d7d1165ebbbf32ffe4bf016fc656940110aa81874e52 (image=quay.io/ceph/ceph:v20, name=stupefied_clarke, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 06:50:36 np0005629333 podman[92213]: 2026-02-25 11:50:36.517875558 +0000 UTC m=+0.161124793 container start 82ec07d81fc47fe23310d7d1165ebbbf32ffe4bf016fc656940110aa81874e52 (image=quay.io/ceph/ceph:v20, name=stupefied_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:50:36 np0005629333 podman[92213]: 2026-02-25 11:50:36.526424021 +0000 UTC m=+0.169673286 container attach 82ec07d81fc47fe23310d7d1165ebbbf32ffe4bf016fc656940110aa81874e52 (image=quay.io/ceph/ceph:v20, name=stupefied_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:50:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v44: 4 pgs: 3 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:50:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 25 06:50:36 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2728489441' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 25 06:50:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Feb 25 06:50:36 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2728489441' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 25 06:50:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Feb 25 06:50:36 np0005629333 stupefied_clarke[92243]: pool 'images' created
Feb 25 06:50:36 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/774393956' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 25 06:50:36 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/2728489441' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 25 06:50:36 np0005629333 podman[92329]: 2026-02-25 11:50:36.968619736 +0000 UTC m=+0.062270462 container create 4fb5e6d5b6cb7047819bafdc5bd4dc0e24cb5e14438d456880209633db86fdd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_burnell, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 06:50:36 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Feb 25 06:50:36 np0005629333 systemd[1]: libpod-82ec07d81fc47fe23310d7d1165ebbbf32ffe4bf016fc656940110aa81874e52.scope: Deactivated successfully.
Feb 25 06:50:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 21 pg[4.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:50:36 np0005629333 podman[92213]: 2026-02-25 11:50:36.983998333 +0000 UTC m=+0.627247558 container died 82ec07d81fc47fe23310d7d1165ebbbf32ffe4bf016fc656940110aa81874e52 (image=quay.io/ceph/ceph:v20, name=stupefied_clarke, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:50:37 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 21 pg[5.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [2] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:50:37 np0005629333 systemd[1]: Started libpod-conmon-4fb5e6d5b6cb7047819bafdc5bd4dc0e24cb5e14438d456880209633db86fdd5.scope.
Feb 25 06:50:37 np0005629333 systemd[1]: var-lib-containers-storage-overlay-000735c699912323f1bf24f232fc757ef360e90109cc25361b1e9d2d4c35f89d-merged.mount: Deactivated successfully.
Feb 25 06:50:37 np0005629333 podman[92329]: 2026-02-25 11:50:36.938990653 +0000 UTC m=+0.032641419 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:37 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:37 np0005629333 podman[92213]: 2026-02-25 11:50:37.053018636 +0000 UTC m=+0.696267891 container remove 82ec07d81fc47fe23310d7d1165ebbbf32ffe4bf016fc656940110aa81874e52 (image=quay.io/ceph/ceph:v20, name=stupefied_clarke, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:50:37 np0005629333 systemd[1]: libpod-conmon-82ec07d81fc47fe23310d7d1165ebbbf32ffe4bf016fc656940110aa81874e52.scope: Deactivated successfully.
Feb 25 06:50:37 np0005629333 podman[92329]: 2026-02-25 11:50:37.061316022 +0000 UTC m=+0.154966738 container init 4fb5e6d5b6cb7047819bafdc5bd4dc0e24cb5e14438d456880209633db86fdd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_burnell, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 06:50:37 np0005629333 podman[92329]: 2026-02-25 11:50:37.067726954 +0000 UTC m=+0.161377670 container start 4fb5e6d5b6cb7047819bafdc5bd4dc0e24cb5e14438d456880209633db86fdd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_burnell, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 06:50:37 np0005629333 keen_burnell[92359]: 167 167
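
The bare "167 167" printed by keen_burnell is consistent with a uid/gid probe: 167 is the fixed uid and gid reserved for the ceph account in Red Hat-family packages and in these container images. The command ansible actually ran is not visible in this log; a hypothetical way to confirm the account:

    # prints 167, the ceph uid baked into the image (assumed probe, not
    # necessarily the command that produced the log line above)
    podman run --rm --entrypoint id quay.io/ceph/ceph:v20 -u ceph
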
Feb 25 06:50:37 np0005629333 podman[92329]: 2026-02-25 11:50:37.07145556 +0000 UTC m=+0.165106276 container attach 4fb5e6d5b6cb7047819bafdc5bd4dc0e24cb5e14438d456880209633db86fdd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_burnell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:50:37 np0005629333 systemd[1]: libpod-4fb5e6d5b6cb7047819bafdc5bd4dc0e24cb5e14438d456880209633db86fdd5.scope: Deactivated successfully.
Feb 25 06:50:37 np0005629333 podman[92329]: 2026-02-25 11:50:37.072840099 +0000 UTC m=+0.166490825 container died 4fb5e6d5b6cb7047819bafdc5bd4dc0e24cb5e14438d456880209633db86fdd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:50:37 np0005629333 systemd[1]: var-lib-containers-storage-overlay-4ad35b44cd177728ef287c65a467dd909b9af6340a61f06136e0107a55493a30-merged.mount: Deactivated successfully.
Feb 25 06:50:37 np0005629333 podman[92329]: 2026-02-25 11:50:37.12245139 +0000 UTC m=+0.216102106 container remove 4fb5e6d5b6cb7047819bafdc5bd4dc0e24cb5e14438d456880209633db86fdd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_burnell, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:50:37 np0005629333 systemd[1]: libpod-conmon-4fb5e6d5b6cb7047819bafdc5bd4dc0e24cb5e14438d456880209633db86fdd5.scope: Deactivated successfully.
Feb 25 06:50:37 np0005629333 podman[92413]: 2026-02-25 11:50:37.275984946 +0000 UTC m=+0.050671632 container create c11113e7a067f01aa34e72d57c60c9977c5040ed1fa8d9442161d62b65b4a3c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hodgkin, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 06:50:37 np0005629333 python3[92407]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:50:37 np0005629333 systemd[1]: Started libpod-conmon-c11113e7a067f01aa34e72d57c60c9977c5040ed1fa8d9442161d62b65b4a3c8.scope.
Feb 25 06:50:37 np0005629333 podman[92413]: 2026-02-25 11:50:37.246166168 +0000 UTC m=+0.020852834 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:37 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:37 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51ad4b28ca2773c2b7093a1dc76c12d47277ea99b7865868332d06b14f90cf1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:37 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51ad4b28ca2773c2b7093a1dc76c12d47277ea99b7865868332d06b14f90cf1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:37 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51ad4b28ca2773c2b7093a1dc76c12d47277ea99b7865868332d06b14f90cf1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:37 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51ad4b28ca2773c2b7093a1dc76c12d47277ea99b7865868332d06b14f90cf1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:37 np0005629333 podman[92426]: 2026-02-25 11:50:37.361044445 +0000 UTC m=+0.057261359 container create bdd4053ea4fbca03b2d302278707989a5af024e434ec9e85f481b706a708b2da (image=quay.io/ceph/ceph:v20, name=youthful_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:50:37 np0005629333 podman[92413]: 2026-02-25 11:50:37.394840966 +0000 UTC m=+0.169527672 container init c11113e7a067f01aa34e72d57c60c9977c5040ed1fa8d9442161d62b65b4a3c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:37 np0005629333 systemd[1]: Started libpod-conmon-bdd4053ea4fbca03b2d302278707989a5af024e434ec9e85f481b706a708b2da.scope.
Feb 25 06:50:37 np0005629333 podman[92413]: 2026-02-25 11:50:37.403209064 +0000 UTC m=+0.177895750 container start c11113e7a067f01aa34e72d57c60c9977c5040ed1fa8d9442161d62b65b4a3c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hodgkin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 06:50:37 np0005629333 podman[92413]: 2026-02-25 11:50:37.408483344 +0000 UTC m=+0.183170040 container attach c11113e7a067f01aa34e72d57c60c9977c5040ed1fa8d9442161d62b65b4a3c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 06:50:37 np0005629333 podman[92426]: 2026-02-25 11:50:37.330678171 +0000 UTC m=+0.026895125 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:50:37 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:37 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adef722e8c2b1f127e98a28e97a6316f96bec56bd6f5fb4bd987beb3c5231f0f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:37 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adef722e8c2b1f127e98a28e97a6316f96bec56bd6f5fb4bd987beb3c5231f0f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:37 np0005629333 podman[92426]: 2026-02-25 11:50:37.451328002 +0000 UTC m=+0.147544916 container init bdd4053ea4fbca03b2d302278707989a5af024e434ec9e85f481b706a708b2da (image=quay.io/ceph/ceph:v20, name=youthful_banzai, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:37 np0005629333 podman[92426]: 2026-02-25 11:50:37.455772619 +0000 UTC m=+0.151989503 container start bdd4053ea4fbca03b2d302278707989a5af024e434ec9e85f481b706a708b2da (image=quay.io/ceph/ceph:v20, name=youthful_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:50:37 np0005629333 podman[92426]: 2026-02-25 11:50:37.462358196 +0000 UTC m=+0.158575090 container attach bdd4053ea4fbca03b2d302278707989a5af024e434ec9e85f481b706a708b2da (image=quay.io/ceph/ceph:v20, name=youthful_banzai, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]: {
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:    "0": [
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:        {
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "devices": [
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "/dev/loop3"
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            ],
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "lv_name": "ceph_lv0",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "lv_size": "21470642176",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "name": "ceph_lv0",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "tags": {
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.cluster_name": "ceph",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.crush_device_class": "",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.encrypted": "0",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.objectstore": "bluestore",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.osd_id": "0",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.type": "block",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.vdo": "0",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.with_tpm": "0"
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            },
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "type": "block",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "vg_name": "ceph_vg0"
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:        }
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:    ],
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:    "1": [
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:        {
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "devices": [
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "/dev/loop4"
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            ],
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "lv_name": "ceph_lv1",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "lv_size": "21470642176",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "name": "ceph_lv1",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "tags": {
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.cluster_name": "ceph",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.crush_device_class": "",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.encrypted": "0",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.objectstore": "bluestore",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.osd_id": "1",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.type": "block",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.vdo": "0",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.with_tpm": "0"
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            },
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "type": "block",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "vg_name": "ceph_vg1"
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:        }
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:    ],
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:    "2": [
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:        {
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "devices": [
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "/dev/loop5"
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            ],
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "lv_name": "ceph_lv2",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "lv_size": "21470642176",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "name": "ceph_lv2",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "tags": {
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.cluster_name": "ceph",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.crush_device_class": "",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.encrypted": "0",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.objectstore": "bluestore",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.osd_id": "2",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.type": "block",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.vdo": "0",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:                "ceph.with_tpm": "0"
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            },
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "type": "block",
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:            "vg_name": "ceph_vg2"
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:        }
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]:    ]
Feb 25 06:50:37 np0005629333 heuristic_hodgkin[92440]: }
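
The heuristic_hodgkin dump has the shape of 'ceph-volume lvm list --format json': a map from OSD id to the logical volume(s) backing it, with the ceph.* LV tags repeated as a structured "tags" object. When scripting against it, the OSD id, data path, and OSD fsid usually suffice; a short jq sketch over the same command, with field names taken from the JSON above:

    # summarize the lvm list JSON as: <osd_id> <lv_path> <osd_fsid>
    podman run --rm --privileged \
        --volume /dev:/dev \
        --volume /etc/ceph:/etc/ceph:z \
        --entrypoint ceph-volume quay.io/ceph/ceph:v20 \
        lvm list --format json |
      jq -r 'to_entries[] | "\(.key) \(.value[0].lv_path) \(.value[0].tags["ceph.osd_fsid"])"'
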
Feb 25 06:50:37 np0005629333 systemd[1]: libpod-c11113e7a067f01aa34e72d57c60c9977c5040ed1fa8d9442161d62b65b4a3c8.scope: Deactivated successfully.
Feb 25 06:50:37 np0005629333 conmon[92440]: conmon c11113e7a067f01aa34e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c11113e7a067f01aa34e72d57c60c9977c5040ed1fa8d9442161d62b65b4a3c8.scope/container/memory.events
Feb 25 06:50:37 np0005629333 podman[92413]: 2026-02-25 11:50:37.727410323 +0000 UTC m=+0.502097019 container died c11113e7a067f01aa34e72d57c60c9977c5040ed1fa8d9442161d62b65b4a3c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hodgkin, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:50:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e21 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:50:37 np0005629333 systemd[1]: var-lib-containers-storage-overlay-51ad4b28ca2773c2b7093a1dc76c12d47277ea99b7865868332d06b14f90cf1e-merged.mount: Deactivated successfully.
Feb 25 06:50:37 np0005629333 podman[92413]: 2026-02-25 11:50:37.785809664 +0000 UTC m=+0.560496330 container remove c11113e7a067f01aa34e72d57c60c9977c5040ed1fa8d9442161d62b65b4a3c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hodgkin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:50:37 np0005629333 systemd[1]: libpod-conmon-c11113e7a067f01aa34e72d57c60c9977c5040ed1fa8d9442161d62b65b4a3c8.scope: Deactivated successfully.
Feb 25 06:50:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 25 06:50:37 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2029736043' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 25 06:50:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Feb 25 06:50:37 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/2728489441' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 25 06:50:37 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/2029736043' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 25 06:50:37 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2029736043' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 25 06:50:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Feb 25 06:50:37 np0005629333 youthful_banzai[92447]: pool 'cephfs.cephfs.meta' created
Feb 25 06:50:37 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Feb 25 06:50:37 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 22 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [2] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
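
Each pool create in this section follows the same cadence in the log: mon dispatch, an osdmap epoch bump (e20 to e21 to e22 so far), then one OSD peers the new pool's initial PG and logs AllReplicasActivated. A hedged pair of read-only commands to confirm the same state from the CLI, with the pool name and PG id taken from the lines above:

    # PG states for the pool just created (matches the pg[5.0 ...] lines)
    ceph pg ls-by-pool cephfs.cephfs.meta
    # peering detail for a single PG
    ceph pg 5.0 query
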
Feb 25 06:50:38 np0005629333 systemd[1]: libpod-bdd4053ea4fbca03b2d302278707989a5af024e434ec9e85f481b706a708b2da.scope: Deactivated successfully.
Feb 25 06:50:38 np0005629333 podman[92426]: 2026-02-25 11:50:38.000297472 +0000 UTC m=+0.696514356 container died bdd4053ea4fbca03b2d302278707989a5af024e434ec9e85f481b706a708b2da (image=quay.io/ceph/ceph:v20, name=youthful_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 06:50:38 np0005629333 systemd[1]: var-lib-containers-storage-overlay-adef722e8c2b1f127e98a28e97a6316f96bec56bd6f5fb4bd987beb3c5231f0f-merged.mount: Deactivated successfully.
Feb 25 06:50:38 np0005629333 podman[92426]: 2026-02-25 11:50:38.080667358 +0000 UTC m=+0.776884262 container remove bdd4053ea4fbca03b2d302278707989a5af024e434ec9e85f481b706a708b2da (image=quay.io/ceph/ceph:v20, name=youthful_banzai, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:50:38 np0005629333 systemd[1]: libpod-conmon-bdd4053ea4fbca03b2d302278707989a5af024e434ec9e85f481b706a708b2da.scope: Deactivated successfully.
Feb 25 06:50:38 np0005629333 podman[92590]: 2026-02-25 11:50:38.276773024 +0000 UTC m=+0.055874040 container create 1b3c3b14bb648208ba988f4723ccc166241bf161a059f06d6e376e1edf04e913 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_khayyam, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 06:50:38 np0005629333 systemd[1]: Started libpod-conmon-1b3c3b14bb648208ba988f4723ccc166241bf161a059f06d6e376e1edf04e913.scope.
Feb 25 06:50:38 np0005629333 podman[92590]: 2026-02-25 11:50:38.250660672 +0000 UTC m=+0.029761678 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:38 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:38 np0005629333 podman[92590]: 2026-02-25 11:50:38.381048579 +0000 UTC m=+0.160149655 container init 1b3c3b14bb648208ba988f4723ccc166241bf161a059f06d6e376e1edf04e913 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_khayyam, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:50:38 np0005629333 podman[92590]: 2026-02-25 11:50:38.385403503 +0000 UTC m=+0.164504499 container start 1b3c3b14bb648208ba988f4723ccc166241bf161a059f06d6e376e1edf04e913 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_khayyam, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Feb 25 06:50:38 np0005629333 compassionate_khayyam[92609]: 167 167
Feb 25 06:50:38 np0005629333 systemd[1]: libpod-1b3c3b14bb648208ba988f4723ccc166241bf161a059f06d6e376e1edf04e913.scope: Deactivated successfully.
Feb 25 06:50:38 np0005629333 podman[92590]: 2026-02-25 11:50:38.390527269 +0000 UTC m=+0.169628345 container attach 1b3c3b14bb648208ba988f4723ccc166241bf161a059f06d6e376e1edf04e913 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_khayyam, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:50:38 np0005629333 podman[92590]: 2026-02-25 11:50:38.391004173 +0000 UTC m=+0.170105189 container died 1b3c3b14bb648208ba988f4723ccc166241bf161a059f06d6e376e1edf04e913 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 06:50:38 np0005629333 systemd[1]: var-lib-containers-storage-overlay-885e54719b7868867627d4f536d656cc386720667e6ef787b8957584f043590f-merged.mount: Deactivated successfully.
Feb 25 06:50:38 np0005629333 podman[92590]: 2026-02-25 11:50:38.438976727 +0000 UTC m=+0.218077743 container remove 1b3c3b14bb648208ba988f4723ccc166241bf161a059f06d6e376e1edf04e913 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_khayyam, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 06:50:38 np0005629333 python3[92604]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
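The ansible task above is a one-shot `ceph` CLI call wrapped in a disposable podman container, which is what produces the create/start/died/remove churn surrounding it. Stripped of the ansible plumbing, the step reduces to roughly the sketch below; the image tag, fsid, and mounts are copied from the logged command, and the verification line assumes the admin keyring is reachable wherever the ceph CLI runs:

    # create the CephFS data pool with the default replicated rule
    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --entrypoint ceph quay.io/ceph/ceph:v20 \
        --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        osd pool create cephfs.cephfs.data replicated_rule --autoscale-mode on
    # confirm the pool landed:
    ceph osd pool ls detail | grep cephfs.cephfs.data

Note in the mon audit entries below that the bare rule name ends up recorded in the parsed command's erasure_code_profile field.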
Feb 25 06:50:38 np0005629333 systemd[1]: libpod-conmon-1b3c3b14bb648208ba988f4723ccc166241bf161a059f06d6e376e1edf04e913.scope: Deactivated successfully.
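Each of these helper containers follows the same journal pattern: image pull, container create, init, start, attach, container died, container remove, after which systemd reports the matching libpod-*.scope and libpod-conmon-*.scope units deactivated. When debugging, the same lifecycle can be watched live rather than reconstructed from the journal; a minimal sketch, assuming podman's default event logging:

    # stream recent container lifecycle events
    podman events --since 5m --filter event=create --filter event=remove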
Feb 25 06:50:38 np0005629333 podman[92626]: 2026-02-25 11:50:38.514950737 +0000 UTC m=+0.055286923 container create 40ff062e86ed1a4a1345ec5624395a0e90db6d2d6f513e9b656c61f38af1a176 (image=quay.io/ceph/ceph:v20, name=condescending_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 06:50:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:50:38 np0005629333 systemd[1]: Started libpod-conmon-40ff062e86ed1a4a1345ec5624395a0e90db6d2d6f513e9b656c61f38af1a176.scope.
Feb 25 06:50:38 np0005629333 podman[92626]: 2026-02-25 11:50:38.483631257 +0000 UTC m=+0.023967413 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:50:38 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ffd1f920fdb7a7c06a170a399b0af51e310fd030c1245694eccef4f4894fd37/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ffd1f920fdb7a7c06a170a399b0af51e310fd030c1245694eccef4f4894fd37/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
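These xfs lines are informational, not errors: the bind-mounted paths live on an XFS filesystem created without the bigtime feature, so its inode timestamps max out at 2038-01-19 (0x7fffffff). Whether bigtime is enabled on a given XFS filesystem can be checked directly; a sketch, assuming a reasonably recent xfsprogs that prints the flag:

    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'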
Feb 25 06:50:38 np0005629333 podman[92626]: 2026-02-25 11:50:38.603349291 +0000 UTC m=+0.143685447 container init 40ff062e86ed1a4a1345ec5624395a0e90db6d2d6f513e9b656c61f38af1a176 (image=quay.io/ceph/ceph:v20, name=condescending_nash, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:50:38 np0005629333 podman[92626]: 2026-02-25 11:50:38.613738456 +0000 UTC m=+0.154074602 container start 40ff062e86ed1a4a1345ec5624395a0e90db6d2d6f513e9b656c61f38af1a176 (image=quay.io/ceph/ceph:v20, name=condescending_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:50:38 np0005629333 podman[92626]: 2026-02-25 11:50:38.618959635 +0000 UTC m=+0.159295791 container attach 40ff062e86ed1a4a1345ec5624395a0e90db6d2d6f513e9b656c61f38af1a176 (image=quay.io/ceph/ceph:v20, name=condescending_nash, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 06:50:38 np0005629333 podman[92651]: 2026-02-25 11:50:38.648254448 +0000 UTC m=+0.068870409 container create 6ce81c4fcb411af0d8575bcee43b4dee85aaff05211b9e8a451a7d8a2aa3df4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:50:38 np0005629333 systemd[1]: Started libpod-conmon-6ce81c4fcb411af0d8575bcee43b4dee85aaff05211b9e8a451a7d8a2aa3df4d.scope.
Feb 25 06:50:38 np0005629333 podman[92651]: 2026-02-25 11:50:38.605848472 +0000 UTC m=+0.026464503 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:38 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eabc78f8b306c0fcce0050c6a072b125105fb6a232cb8907c38a509bcfbf8d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eabc78f8b306c0fcce0050c6a072b125105fb6a232cb8907c38a509bcfbf8d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eabc78f8b306c0fcce0050c6a072b125105fb6a232cb8907c38a509bcfbf8d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eabc78f8b306c0fcce0050c6a072b125105fb6a232cb8907c38a509bcfbf8d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:38 np0005629333 podman[92651]: 2026-02-25 11:50:38.735580941 +0000 UTC m=+0.156196912 container init 6ce81c4fcb411af0d8575bcee43b4dee85aaff05211b9e8a451a7d8a2aa3df4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_kirch, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:38 np0005629333 podman[92651]: 2026-02-25 11:50:38.741415467 +0000 UTC m=+0.162031388 container start 6ce81c4fcb411af0d8575bcee43b4dee85aaff05211b9e8a451a7d8a2aa3df4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_kirch, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 06:50:38 np0005629333 podman[92651]: 2026-02-25 11:50:38.753870781 +0000 UTC m=+0.174486722 container attach 6ce81c4fcb411af0d8575bcee43b4dee85aaff05211b9e8a451a7d8a2aa3df4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_kirch, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:50:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v47: 6 pgs: 4 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:50:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Feb 25 06:50:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Feb 25 06:50:38 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Feb 25 06:50:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 23 pg[6.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
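The pair of osd.0 lines brackets a normal peering cycle for the freshly created pg 6.0: at epoch 22 the PG leaves the unknown state and transitions to Primary, and at epoch 23 AllReplicasActivated fires and it reports active. Current PG states can be pulled on demand; the pool id 6 below is read off the pg[6.0] prefix in those lines:

    ceph pg stat     # one-line summary of PG states cluster-wide
    ceph pg ls 6     # per-PG detail for pool 6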
Feb 25 06:50:38 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/2029736043' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 25 06:50:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Feb 25 06:50:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2730524695' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 25 06:50:39 np0005629333 lvm[92770]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 06:50:39 np0005629333 lvm[92770]: VG ceph_vg1 finished
Feb 25 06:50:39 np0005629333 lvm[92769]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 06:50:39 np0005629333 lvm[92769]: VG ceph_vg0 finished
Feb 25 06:50:39 np0005629333 lvm[92772]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 06:50:39 np0005629333 lvm[92772]: VG ceph_vg2 finished
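The lvm event-activation messages above record each loop-device PV coming online and completing its volume group: ceph_vg0 on /dev/loop3, ceph_vg1 on /dev/loop4, ceph_vg2 on /dev/loop5, presumably the backing stores for the three OSDs. The same state can be confirmed by hand with the names from the log:

    pvs /dev/loop3 /dev/loop4 /dev/loop5
    vgs ceph_vg0 ceph_vg1 ceph_vg2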
Feb 25 06:50:39 np0005629333 quirky_kirch[92669]: {}
Feb 25 06:50:39 np0005629333 systemd[1]: libpod-6ce81c4fcb411af0d8575bcee43b4dee85aaff05211b9e8a451a7d8a2aa3df4d.scope: Deactivated successfully.
Feb 25 06:50:39 np0005629333 systemd[1]: libpod-6ce81c4fcb411af0d8575bcee43b4dee85aaff05211b9e8a451a7d8a2aa3df4d.scope: Consumed 1.106s CPU time.
Feb 25 06:50:39 np0005629333 podman[92651]: 2026-02-25 11:50:39.536858157 +0000 UTC m=+0.957474078 container died 6ce81c4fcb411af0d8575bcee43b4dee85aaff05211b9e8a451a7d8a2aa3df4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:50:39 np0005629333 systemd[1]: var-lib-containers-storage-overlay-8eabc78f8b306c0fcce0050c6a072b125105fb6a232cb8907c38a509bcfbf8d2-merged.mount: Deactivated successfully.
Feb 25 06:50:39 np0005629333 podman[92651]: 2026-02-25 11:50:39.599458438 +0000 UTC m=+1.020074379 container remove 6ce81c4fcb411af0d8575bcee43b4dee85aaff05211b9e8a451a7d8a2aa3df4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:39 np0005629333 systemd[1]: libpod-conmon-6ce81c4fcb411af0d8575bcee43b4dee85aaff05211b9e8a451a7d8a2aa3df4d.scope: Deactivated successfully.
Feb 25 06:50:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:50:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:50:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Feb 25 06:50:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2730524695' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 25 06:50:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Feb 25 06:50:40 np0005629333 condescending_nash[92644]: pool 'cephfs.cephfs.data' created
Feb 25 06:50:40 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Feb 25 06:50:40 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/2730524695' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Feb 25 06:50:40 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:40 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:40 np0005629333 systemd[1]: libpod-40ff062e86ed1a4a1345ec5624395a0e90db6d2d6f513e9b656c61f38af1a176.scope: Deactivated successfully.
Feb 25 06:50:40 np0005629333 podman[92626]: 2026-02-25 11:50:40.031082721 +0000 UTC m=+1.571418897 container died 40ff062e86ed1a4a1345ec5624395a0e90db6d2d6f513e9b656c61f38af1a176 (image=quay.io/ceph/ceph:v20, name=condescending_nash, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:50:40 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7ffd1f920fdb7a7c06a170a399b0af51e310fd030c1245694eccef4f4894fd37-merged.mount: Deactivated successfully.
Feb 25 06:50:40 np0005629333 podman[92626]: 2026-02-25 11:50:40.08059365 +0000 UTC m=+1.620929816 container remove 40ff062e86ed1a4a1345ec5624395a0e90db6d2d6f513e9b656c61f38af1a176 (image=quay.io/ceph/ceph:v20, name=condescending_nash, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 06:50:40 np0005629333 systemd[1]: libpod-conmon-40ff062e86ed1a4a1345ec5624395a0e90db6d2d6f513e9b656c61f38af1a176.scope: Deactivated successfully.
Feb 25 06:50:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 24 pg[7.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:50:40 np0005629333 python3[92852]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
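This task starts a series of identical `osd pool application enable <pool> rbd` calls; the same wrapper repeats below for the volumes, backups, and images pools, each spawning its own short-lived container. Collapsed to essentials, the whole series is equivalent to the following sketch (run from anywhere the ceph CLI and admin keyring are available):

    for pool in vms volumes backups images; do
        ceph osd pool application enable "$pool" rbd
    done
    # inspect a pool's application tag:
    ceph osd pool application get vms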
Feb 25 06:50:40 np0005629333 podman[92853]: 2026-02-25 11:50:40.501845989 +0000 UTC m=+0.074974373 container create bb9115bd1248533cb2cb70fd86228b3354939a99867f9aac8f2c17e6fd188277 (image=quay.io/ceph/ceph:v20, name=dazzling_bose, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 06:50:40 np0005629333 podman[92853]: 2026-02-25 11:50:40.450339644 +0000 UTC m=+0.023468038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:50:40 np0005629333 systemd[1]: Started libpod-conmon-bb9115bd1248533cb2cb70fd86228b3354939a99867f9aac8f2c17e6fd188277.scope.
Feb 25 06:50:40 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:40 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c8c365059031a49c0739b38ccf98b09926b282dd17fcaf3d008d8658f48fc16/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:40 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c8c365059031a49c0739b38ccf98b09926b282dd17fcaf3d008d8658f48fc16/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:40 np0005629333 podman[92853]: 2026-02-25 11:50:40.637761844 +0000 UTC m=+0.210890208 container init bb9115bd1248533cb2cb70fd86228b3354939a99867f9aac8f2c17e6fd188277 (image=quay.io/ceph/ceph:v20, name=dazzling_bose, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:50:40 np0005629333 podman[92853]: 2026-02-25 11:50:40.64359213 +0000 UTC m=+0.216720494 container start bb9115bd1248533cb2cb70fd86228b3354939a99867f9aac8f2c17e6fd188277 (image=quay.io/ceph/ceph:v20, name=dazzling_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:50:40 np0005629333 podman[92853]: 2026-02-25 11:50:40.660185002 +0000 UTC m=+0.233313366 container attach bb9115bd1248533cb2cb70fd86228b3354939a99867f9aac8f2c17e6fd188277 (image=quay.io/ceph/ceph:v20, name=dazzling_bose, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 06:50:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v50: 7 pgs: 4 active+clean, 3 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:50:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Feb 25 06:50:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Feb 25 06:50:41 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/2730524695' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Feb 25 06:50:41 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 25 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:50:41 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Feb 25 06:50:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Feb 25 06:50:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1634652021' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Feb 25 06:50:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Feb 25 06:50:42 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/1634652021' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Feb 25 06:50:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1634652021' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Feb 25 06:50:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Feb 25 06:50:42 np0005629333 dazzling_bose[92868]: enabled application 'rbd' on pool 'vms'
Feb 25 06:50:42 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Feb 25 06:50:42 np0005629333 systemd[1]: libpod-bb9115bd1248533cb2cb70fd86228b3354939a99867f9aac8f2c17e6fd188277.scope: Deactivated successfully.
Feb 25 06:50:42 np0005629333 podman[92853]: 2026-02-25 11:50:42.171283611 +0000 UTC m=+1.744411965 container died bb9115bd1248533cb2cb70fd86228b3354939a99867f9aac8f2c17e6fd188277 (image=quay.io/ceph/ceph:v20, name=dazzling_bose, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 06:50:42 np0005629333 systemd[1]: var-lib-containers-storage-overlay-2c8c365059031a49c0739b38ccf98b09926b282dd17fcaf3d008d8658f48fc16-merged.mount: Deactivated successfully.
Feb 25 06:50:42 np0005629333 podman[92853]: 2026-02-25 11:50:42.367462889 +0000 UTC m=+1.940591273 container remove bb9115bd1248533cb2cb70fd86228b3354939a99867f9aac8f2c17e6fd188277 (image=quay.io/ceph/ceph:v20, name=dazzling_bose, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 06:50:42 np0005629333 systemd[1]: libpod-conmon-bb9115bd1248533cb2cb70fd86228b3354939a99867f9aac8f2c17e6fd188277.scope: Deactivated successfully.
Feb 25 06:50:42 np0005629333 python3[92930]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:50:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:50:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v53: 7 pgs: 1 creating+activating, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:50:42 np0005629333 podman[92931]: 2026-02-25 11:50:42.781102242 +0000 UTC m=+0.066347438 container create 93cca022181d7cd4bd23ecc9a269757eb728f7d89a46346c8cd7dd2cd47607c3 (image=quay.io/ceph/ceph:v20, name=funny_clarke, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 06:50:42 np0005629333 podman[92931]: 2026-02-25 11:50:42.734839296 +0000 UTC m=+0.020084512 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:50:42 np0005629333 systemd[1]: Started libpod-conmon-93cca022181d7cd4bd23ecc9a269757eb728f7d89a46346c8cd7dd2cd47607c3.scope.
Feb 25 06:50:42 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:42 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e31098240e732c8881002e433824ff530d69ddf0a266c62e8ec632576c1d946b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:42 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e31098240e732c8881002e433824ff530d69ddf0a266c62e8ec632576c1d946b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:42 np0005629333 podman[92931]: 2026-02-25 11:50:42.883524714 +0000 UTC m=+0.168769960 container init 93cca022181d7cd4bd23ecc9a269757eb728f7d89a46346c8cd7dd2cd47607c3 (image=quay.io/ceph/ceph:v20, name=funny_clarke, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 06:50:42 np0005629333 podman[92931]: 2026-02-25 11:50:42.889390791 +0000 UTC m=+0.174636027 container start 93cca022181d7cd4bd23ecc9a269757eb728f7d89a46346c8cd7dd2cd47607c3 (image=quay.io/ceph/ceph:v20, name=funny_clarke, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:50:42 np0005629333 podman[92931]: 2026-02-25 11:50:42.897066759 +0000 UTC m=+0.182311965 container attach 93cca022181d7cd4bd23ecc9a269757eb728f7d89a46346c8cd7dd2cd47607c3 (image=quay.io/ceph/ceph:v20, name=funny_clarke, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 06:50:43 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/1634652021' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Feb 25 06:50:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Feb 25 06:50:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3085565785' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Feb 25 06:50:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Feb 25 06:50:44 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/3085565785' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Feb 25 06:50:44 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3085565785' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Feb 25 06:50:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Feb 25 06:50:44 np0005629333 funny_clarke[92946]: enabled application 'rbd' on pool 'volumes'
Feb 25 06:50:44 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Feb 25 06:50:44 np0005629333 systemd[1]: libpod-93cca022181d7cd4bd23ecc9a269757eb728f7d89a46346c8cd7dd2cd47607c3.scope: Deactivated successfully.
Feb 25 06:50:44 np0005629333 podman[92931]: 2026-02-25 11:50:44.210550937 +0000 UTC m=+1.495796163 container died 93cca022181d7cd4bd23ecc9a269757eb728f7d89a46346c8cd7dd2cd47607c3 (image=quay.io/ceph/ceph:v20, name=funny_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:50:44 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e31098240e732c8881002e433824ff530d69ddf0a266c62e8ec632576c1d946b-merged.mount: Deactivated successfully.
Feb 25 06:50:44 np0005629333 podman[92931]: 2026-02-25 11:50:44.258462538 +0000 UTC m=+1.543707764 container remove 93cca022181d7cd4bd23ecc9a269757eb728f7d89a46346c8cd7dd2cd47607c3 (image=quay.io/ceph/ceph:v20, name=funny_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 06:50:44 np0005629333 systemd[1]: libpod-conmon-93cca022181d7cd4bd23ecc9a269757eb728f7d89a46346c8cd7dd2cd47607c3.scope: Deactivated successfully.
Feb 25 06:50:44 np0005629333 python3[93007]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:50:44 np0005629333 podman[93008]: 2026-02-25 11:50:44.647951909 +0000 UTC m=+0.052373755 container create eb649514e42699b11fa2b7a6b4fc4e3c88e853b8c74a977be87465312b94f039 (image=quay.io/ceph/ceph:v20, name=amazing_euclid, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 06:50:44 np0005629333 systemd[1]: Started libpod-conmon-eb649514e42699b11fa2b7a6b4fc4e3c88e853b8c74a977be87465312b94f039.scope.
Feb 25 06:50:44 np0005629333 podman[93008]: 2026-02-25 11:50:44.6235673 +0000 UTC m=+0.027989226 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:50:44 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:44 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e2367615aff2f0d8593100ef330fdbbbe4540aad5927b78f05522ed5d86da29/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:44 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e2367615aff2f0d8593100ef330fdbbbe4540aad5927b78f05522ed5d86da29/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:44 np0005629333 podman[93008]: 2026-02-25 11:50:44.746870623 +0000 UTC m=+0.151292469 container init eb649514e42699b11fa2b7a6b4fc4e3c88e853b8c74a977be87465312b94f039 (image=quay.io/ceph/ceph:v20, name=amazing_euclid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 06:50:44 np0005629333 podman[93008]: 2026-02-25 11:50:44.755237752 +0000 UTC m=+0.159659608 container start eb649514e42699b11fa2b7a6b4fc4e3c88e853b8c74a977be87465312b94f039 (image=quay.io/ceph/ceph:v20, name=amazing_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 06:50:44 np0005629333 podman[93008]: 2026-02-25 11:50:44.759398857 +0000 UTC m=+0.163820773 container attach eb649514e42699b11fa2b7a6b4fc4e3c88e853b8c74a977be87465312b94f039 (image=quay.io/ceph/ceph:v20, name=amazing_euclid, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 06:50:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v55: 7 pgs: 1 creating+activating, 6 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:50:45 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/3085565785' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Feb 25 06:50:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Feb 25 06:50:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2163009470' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Feb 25 06:50:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Feb 25 06:50:46 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/2163009470' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Feb 25 06:50:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2163009470' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Feb 25 06:50:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Feb 25 06:50:46 np0005629333 amazing_euclid[93024]: enabled application 'rbd' on pool 'backups'
Feb 25 06:50:46 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Feb 25 06:50:46 np0005629333 systemd[1]: libpod-eb649514e42699b11fa2b7a6b4fc4e3c88e853b8c74a977be87465312b94f039.scope: Deactivated successfully.
Feb 25 06:50:46 np0005629333 podman[93008]: 2026-02-25 11:50:46.236499295 +0000 UTC m=+1.640921161 container died eb649514e42699b11fa2b7a6b4fc4e3c88e853b8c74a977be87465312b94f039 (image=quay.io/ceph/ceph:v20, name=amazing_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 06:50:46 np0005629333 systemd[1]: var-lib-containers-storage-overlay-3e2367615aff2f0d8593100ef330fdbbbe4540aad5927b78f05522ed5d86da29-merged.mount: Deactivated successfully.
Feb 25 06:50:46 np0005629333 podman[93008]: 2026-02-25 11:50:46.276296913 +0000 UTC m=+1.680718769 container remove eb649514e42699b11fa2b7a6b4fc4e3c88e853b8c74a977be87465312b94f039 (image=quay.io/ceph/ceph:v20, name=amazing_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:50:46 np0005629333 systemd[1]: libpod-conmon-eb649514e42699b11fa2b7a6b4fc4e3c88e853b8c74a977be87465312b94f039.scope: Deactivated successfully.
Feb 25 06:50:46 np0005629333 python3[93086]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:50:46 np0005629333 podman[93087]: 2026-02-25 11:50:46.622628566 +0000 UTC m=+0.047009055 container create a21ec70c16a56b24a8133f81c9a29bc8059866d8a42477ec05f30e7bbfd948ac (image=quay.io/ceph/ceph:v20, name=relaxed_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 06:50:46 np0005629333 systemd[1]: Started libpod-conmon-a21ec70c16a56b24a8133f81c9a29bc8059866d8a42477ec05f30e7bbfd948ac.scope.
Feb 25 06:50:46 np0005629333 podman[93087]: 2026-02-25 11:50:46.597749573 +0000 UTC m=+0.022130122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:50:46 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db21583311bcec5c633e7b132320edcd6335a7215c8448a324f5964ced196c0a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db21583311bcec5c633e7b132320edcd6335a7215c8448a324f5964ced196c0a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:46 np0005629333 podman[93087]: 2026-02-25 11:50:46.718788757 +0000 UTC m=+0.143169276 container init a21ec70c16a56b24a8133f81c9a29bc8059866d8a42477ec05f30e7bbfd948ac (image=quay.io/ceph/ceph:v20, name=relaxed_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 06:50:46 np0005629333 podman[93087]: 2026-02-25 11:50:46.725975432 +0000 UTC m=+0.150355881 container start a21ec70c16a56b24a8133f81c9a29bc8059866d8a42477ec05f30e7bbfd948ac (image=quay.io/ceph/ceph:v20, name=relaxed_mcnulty, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 06:50:46 np0005629333 podman[93087]: 2026-02-25 11:50:46.72926375 +0000 UTC m=+0.153644279 container attach a21ec70c16a56b24a8133f81c9a29bc8059866d8a42477ec05f30e7bbfd948ac (image=quay.io/ceph/ceph:v20, name=relaxed_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:50:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v57: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
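By pgmap v57 all 7 placement groups report active+clean, meaning every PG created for the new pools has finished peering and holds a consistent copy set. Quick equivalents to the mgr's periodic pgmap line:

    ceph -s              # cluster summary; includes the same pgmap totals
    ceph health detail   # HEALTH_OK once nothing is creating/activating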
Feb 25 06:50:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Feb 25 06:50:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2657265071' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Feb 25 06:50:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Feb 25 06:50:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2657265071' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Feb 25 06:50:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Feb 25 06:50:47 np0005629333 relaxed_mcnulty[93101]: enabled application 'rbd' on pool 'images'
Feb 25 06:50:47 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/2163009470' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Feb 25 06:50:47 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/2657265071' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Feb 25 06:50:47 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Feb 25 06:50:47 np0005629333 systemd[1]: libpod-a21ec70c16a56b24a8133f81c9a29bc8059866d8a42477ec05f30e7bbfd948ac.scope: Deactivated successfully.
Feb 25 06:50:47 np0005629333 podman[93087]: 2026-02-25 11:50:47.250352671 +0000 UTC m=+0.674733150 container died a21ec70c16a56b24a8133f81c9a29bc8059866d8a42477ec05f30e7bbfd948ac (image=quay.io/ceph/ceph:v20, name=relaxed_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 06:50:47 np0005629333 systemd[1]: var-lib-containers-storage-overlay-db21583311bcec5c633e7b132320edcd6335a7215c8448a324f5964ced196c0a-merged.mount: Deactivated successfully.
Feb 25 06:50:47 np0005629333 podman[93087]: 2026-02-25 11:50:47.289364966 +0000 UTC m=+0.713745445 container remove a21ec70c16a56b24a8133f81c9a29bc8059866d8a42477ec05f30e7bbfd948ac (image=quay.io/ceph/ceph:v20, name=relaxed_mcnulty, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:50:47 np0005629333 systemd[1]: libpod-conmon-a21ec70c16a56b24a8133f81c9a29bc8059866d8a42477ec05f30e7bbfd948ac.scope: Deactivated successfully.
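Each pool-tagging task in this stretch follows the same lifecycle: Ansible launches a throwaway "podman run --rm" Ceph client container, the monitor dispatches the command and bumps the osdmap epoch (e28 to e29 above), the audit channel records "finished", and podman tears the container down. A quick confirmation sketch using the standard Ceph CLI (the expected output is an assumption from the command's documented behaviour, not something shown in this log):

    ceph osd pool application get images   # should report the 'rbd' tag
    ceph osd pool ls detail                # lists 'application rbd' per pool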
Feb 25 06:50:47 np0005629333 python3[93163]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
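The _raw_params above read more easily unwrapped. Reconstructed as a shell command (line breaks only; every flag, path, and argument is verbatim from the log line), the task runs:

    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --entrypoint ceph quay.io/ceph/ceph:v20 \
      --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      osd pool application enable cephfs.cephfs.meta cephfs

Everything after the image name is passed to the ceph entrypoint; the same shape repeats below for cephfs.cephfs.data.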
Feb 25 06:50:47 np0005629333 podman[93164]: 2026-02-25 11:50:47.618646879 +0000 UTC m=+0.051344624 container create 590b805c88eb350558e4ed8f6a3ce7abcf1ab333e28c7859f0634a92b712429e (image=quay.io/ceph/ceph:v20, name=nifty_hopper, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:47 np0005629333 systemd[1]: Started libpod-conmon-590b805c88eb350558e4ed8f6a3ce7abcf1ab333e28c7859f0634a92b712429e.scope.
Feb 25 06:50:47 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:47 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b568e64324145c6b3f5d91c00e2d3093c4a95a6831b4a6861ccc4cb2111be2cc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:47 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b568e64324145c6b3f5d91c00e2d3093c4a95a6831b4a6861ccc4cb2111be2cc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:47 np0005629333 podman[93164]: 2026-02-25 11:50:47.690665199 +0000 UTC m=+0.123362994 container init 590b805c88eb350558e4ed8f6a3ce7abcf1ab333e28c7859f0634a92b712429e (image=quay.io/ceph/ceph:v20, name=nifty_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:50:47 np0005629333 podman[93164]: 2026-02-25 11:50:47.598676542 +0000 UTC m=+0.031374377 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:50:47 np0005629333 podman[93164]: 2026-02-25 11:50:47.695848614 +0000 UTC m=+0.128546369 container start 590b805c88eb350558e4ed8f6a3ce7abcf1ab333e28c7859f0634a92b712429e (image=quay.io/ceph/ceph:v20, name=nifty_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 06:50:47 np0005629333 podman[93164]: 2026-02-25 11:50:47.699229255 +0000 UTC m=+0.131927040 container attach 590b805c88eb350558e4ed8f6a3ce7abcf1ab333e28c7859f0634a92b712429e (image=quay.io/ceph/ceph:v20, name=nifty_hopper, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 06:50:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:50:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Feb 25 06:50:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1226954273' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Feb 25 06:50:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Feb 25 06:50:48 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/2657265071' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Feb 25 06:50:48 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/1226954273' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Feb 25 06:50:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1226954273' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Feb 25 06:50:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Feb 25 06:50:48 np0005629333 nifty_hopper[93179]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Feb 25 06:50:48 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Feb 25 06:50:48 np0005629333 systemd[1]: libpod-590b805c88eb350558e4ed8f6a3ce7abcf1ab333e28c7859f0634a92b712429e.scope: Deactivated successfully.
Feb 25 06:50:48 np0005629333 podman[93204]: 2026-02-25 11:50:48.285893144 +0000 UTC m=+0.022946876 container died 590b805c88eb350558e4ed8f6a3ce7abcf1ab333e28c7859f0634a92b712429e (image=quay.io/ceph/ceph:v20, name=nifty_hopper, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 06:50:48 np0005629333 systemd[1]: var-lib-containers-storage-overlay-b568e64324145c6b3f5d91c00e2d3093c4a95a6831b4a6861ccc4cb2111be2cc-merged.mount: Deactivated successfully.
Feb 25 06:50:48 np0005629333 podman[93204]: 2026-02-25 11:50:48.333023482 +0000 UTC m=+0.070077154 container remove 590b805c88eb350558e4ed8f6a3ce7abcf1ab333e28c7859f0634a92b712429e (image=quay.io/ceph/ceph:v20, name=nifty_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:48 np0005629333 systemd[1]: libpod-conmon-590b805c88eb350558e4ed8f6a3ce7abcf1ab333e28c7859f0634a92b712429e.scope: Deactivated successfully.
Feb 25 06:50:48 np0005629333 python3[93244]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:50:48 np0005629333 podman[93245]: 2026-02-25 11:50:48.722842902 +0000 UTC m=+0.056657392 container create d8eb2e7438b70e8f10a641007b403311394a3e966e19fd1ac3f4fb4feab22495 (image=quay.io/ceph/ceph:v20, name=friendly_yonath, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:50:48 np0005629333 systemd[1]: Started libpod-conmon-d8eb2e7438b70e8f10a641007b403311394a3e966e19fd1ac3f4fb4feab22495.scope.
Feb 25 06:50:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v60: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:50:48 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb511b38d07c9d4188ba105eb2c285c6e15c48e4e5a79cfdfe4896426b4a95ab/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb511b38d07c9d4188ba105eb2c285c6e15c48e4e5a79cfdfe4896426b4a95ab/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:48 np0005629333 podman[93245]: 2026-02-25 11:50:48.697132845 +0000 UTC m=+0.030947405 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:50:48 np0005629333 podman[93245]: 2026-02-25 11:50:48.802084688 +0000 UTC m=+0.135899168 container init d8eb2e7438b70e8f10a641007b403311394a3e966e19fd1ac3f4fb4feab22495 (image=quay.io/ceph/ceph:v20, name=friendly_yonath, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:50:48 np0005629333 podman[93245]: 2026-02-25 11:50:48.808876371 +0000 UTC m=+0.142690831 container start d8eb2e7438b70e8f10a641007b403311394a3e966e19fd1ac3f4fb4feab22495 (image=quay.io/ceph/ceph:v20, name=friendly_yonath, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 06:50:48 np0005629333 podman[93245]: 2026-02-25 11:50:48.812128238 +0000 UTC m=+0.145942728 container attach d8eb2e7438b70e8f10a641007b403311394a3e966e19fd1ac3f4fb4feab22495 (image=quay.io/ceph/ceph:v20, name=friendly_yonath, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 06:50:49 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/1226954273' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Feb 25 06:50:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Feb 25 06:50:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1847572942' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Feb 25 06:50:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Feb 25 06:50:50 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/1847572942' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Feb 25 06:50:50 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1847572942' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Feb 25 06:50:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Feb 25 06:50:50 np0005629333 friendly_yonath[93260]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Feb 25 06:50:50 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Feb 25 06:50:50 np0005629333 systemd[1]: libpod-d8eb2e7438b70e8f10a641007b403311394a3e966e19fd1ac3f4fb4feab22495.scope: Deactivated successfully.
Feb 25 06:50:50 np0005629333 podman[93245]: 2026-02-25 11:50:50.298161503 +0000 UTC m=+1.631976013 container died d8eb2e7438b70e8f10a641007b403311394a3e966e19fd1ac3f4fb4feab22495 (image=quay.io/ceph/ceph:v20, name=friendly_yonath, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 06:50:50 np0005629333 systemd[1]: var-lib-containers-storage-overlay-eb511b38d07c9d4188ba105eb2c285c6e15c48e4e5a79cfdfe4896426b4a95ab-merged.mount: Deactivated successfully.
Feb 25 06:50:50 np0005629333 podman[93245]: 2026-02-25 11:50:50.34126064 +0000 UTC m=+1.675075111 container remove d8eb2e7438b70e8f10a641007b403311394a3e966e19fd1ac3f4fb4feab22495 (image=quay.io/ceph/ceph:v20, name=friendly_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 06:50:50 np0005629333 systemd[1]: libpod-conmon-d8eb2e7438b70e8f10a641007b403311394a3e966e19fd1ac3f4fb4feab22495.scope: Deactivated successfully.
Feb 25 06:50:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v62: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:50:51 np0005629333 python3[93372]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 25 06:50:51 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/1847572942' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Feb 25 06:50:51 np0005629333 python3[93443]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772020250.913384-37364-215390886275547/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:50:52 np0005629333 python3[93545]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 25 06:50:52 np0005629333 python3[93620]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772020251.8822236-37378-213018105440655/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=59b5652c2ab42f8abfeb4f2d97113117b8451846 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:50:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:50:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:50:52 np0005629333 python3[93670]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
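"config assimilate-conf -i <file>" pushes every option it can from an INI-style conf into the monitors' central config database and prints back a minimal conf of whatever must stay in the local file (the #012 in the raw params is journald's escape for the newline Ansible appended). Stripped of the podman wrapping, the task amounts to:

    ceph -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      config assimilate-conf -i /home/assimilate_ceph.conf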
Feb 25 06:50:53 np0005629333 podman[93671]: 2026-02-25 11:50:53.008423427 +0000 UTC m=+0.055351954 container create 18b48645a2aeee7d9ca3baa245dec71221b2629a1f0fc5d020fc54158a1ab8a2 (image=quay.io/ceph/ceph:v20, name=brave_panini, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 06:50:53 np0005629333 systemd[1]: Started libpod-conmon-18b48645a2aeee7d9ca3baa245dec71221b2629a1f0fc5d020fc54158a1ab8a2.scope.
Feb 25 06:50:53 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cb60abba52b64068a04103c180fb40528cd6b01093f3842299786ab47656c2d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cb60abba52b64068a04103c180fb40528cd6b01093f3842299786ab47656c2d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cb60abba52b64068a04103c180fb40528cd6b01093f3842299786ab47656c2d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:53 np0005629333 podman[93671]: 2026-02-25 11:50:52.986677677 +0000 UTC m=+0.033606224 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:50:53 np0005629333 podman[93671]: 2026-02-25 11:50:53.097814816 +0000 UTC m=+0.144743363 container init 18b48645a2aeee7d9ca3baa245dec71221b2629a1f0fc5d020fc54158a1ab8a2 (image=quay.io/ceph/ceph:v20, name=brave_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:50:53 np0005629333 podman[93671]: 2026-02-25 11:50:53.104278209 +0000 UTC m=+0.151206736 container start 18b48645a2aeee7d9ca3baa245dec71221b2629a1f0fc5d020fc54158a1ab8a2 (image=quay.io/ceph/ceph:v20, name=brave_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 06:50:53 np0005629333 podman[93671]: 2026-02-25 11:50:53.112191535 +0000 UTC m=+0.159120082 container attach 18b48645a2aeee7d9ca3baa245dec71221b2629a1f0fc5d020fc54158a1ab8a2 (image=quay.io/ceph/ceph:v20, name=brave_panini, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 06:50:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Feb 25 06:50:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1578790236' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb 25 06:50:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1578790236' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb 25 06:50:53 np0005629333 brave_panini[93687]: 
Feb 25 06:50:53 np0005629333 brave_panini[93687]: [global]
Feb 25 06:50:53 np0005629333 brave_panini[93687]: #011fsid = 8ac33163-6221-5d58-9a39-8b6933fe7762
Feb 25 06:50:53 np0005629333 brave_panini[93687]: #011mon_host = 192.168.122.100
Feb 25 06:50:53 np0005629333 brave_panini[93687]: #011rgw_keystone_api_version = 3
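The brave_panini lines above are that minimal leftover conf echoed to stdout: fsid and mon_host have to remain in the local file so clients can reach the monitors at all, and rgw_keystone_api_version was evidently handed back alongside them. #011 is journald's escape for a tab, so the payload decodes to:

    [global]
        fsid = 8ac33163-6221-5d58-9a39-8b6933fe7762
        mon_host = 192.168.122.100
        rgw_keystone_api_version = 3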
Feb 25 06:50:53 np0005629333 systemd[1]: libpod-18b48645a2aeee7d9ca3baa245dec71221b2629a1f0fc5d020fc54158a1ab8a2.scope: Deactivated successfully.
Feb 25 06:50:53 np0005629333 podman[93671]: 2026-02-25 11:50:53.551902316 +0000 UTC m=+0.598830833 container died 18b48645a2aeee7d9ca3baa245dec71221b2629a1f0fc5d020fc54158a1ab8a2 (image=quay.io/ceph/ceph:v20, name=brave_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 06:50:53 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7cb60abba52b64068a04103c180fb40528cd6b01093f3842299786ab47656c2d-merged.mount: Deactivated successfully.
Feb 25 06:50:53 np0005629333 podman[93671]: 2026-02-25 11:50:53.600611691 +0000 UTC m=+0.647540208 container remove 18b48645a2aeee7d9ca3baa245dec71221b2629a1f0fc5d020fc54158a1ab8a2 (image=quay.io/ceph/ceph:v20, name=brave_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 06:50:53 np0005629333 systemd[1]: libpod-conmon-18b48645a2aeee7d9ca3baa245dec71221b2629a1f0fc5d020fc54158a1ab8a2.scope: Deactivated successfully.
Feb 25 06:50:53 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/1578790236' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Feb 25 06:50:53 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/1578790236' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Feb 25 06:50:53 np0005629333 python3[93799]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
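"config-key set" stores an opaque key/value blob in the monitors' key/value store; unlike "config set", the value is never interpreted. The equivalent direct commands, with a get to verify (the payload-free audit lines that follow are consistent with the monitor scrubbing config-key arguments, which may hold secrets, from audit logging):

    ceph config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1
    ceph config-key get ssl_option    # returns the stored string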
Feb 25 06:50:54 np0005629333 podman[93823]: 2026-02-25 11:50:54.000649627 +0000 UTC m=+0.069790326 container create 797d72982d9bb7489a482b07a98c21df9b6fd4f6cb95cc50559b12bccde17c01 (image=quay.io/ceph/ceph:v20, name=gallant_bhabha, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:50:54 np0005629333 systemd[1]: Started libpod-conmon-797d72982d9bb7489a482b07a98c21df9b6fd4f6cb95cc50559b12bccde17c01.scope.
Feb 25 06:50:54 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98758bc3b4749a242bba73e287cb866c721a2ab9538281df29acd6aef2be999/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98758bc3b4749a242bba73e287cb866c721a2ab9538281df29acd6aef2be999/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98758bc3b4749a242bba73e287cb866c721a2ab9538281df29acd6aef2be999/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:54 np0005629333 podman[93823]: 2026-02-25 11:50:53.970604249 +0000 UTC m=+0.039744998 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:50:54 np0005629333 podman[93823]: 2026-02-25 11:50:54.080153841 +0000 UTC m=+0.149294550 container init 797d72982d9bb7489a482b07a98c21df9b6fd4f6cb95cc50559b12bccde17c01 (image=quay.io/ceph/ceph:v20, name=gallant_bhabha, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:54 np0005629333 podman[93823]: 2026-02-25 11:50:54.086177541 +0000 UTC m=+0.155318210 container start 797d72982d9bb7489a482b07a98c21df9b6fd4f6cb95cc50559b12bccde17c01 (image=quay.io/ceph/ceph:v20, name=gallant_bhabha, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:50:54 np0005629333 podman[93854]: 2026-02-25 11:50:54.088113788 +0000 UTC m=+0.070261179 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:50:54 np0005629333 podman[93823]: 2026-02-25 11:50:54.089868541 +0000 UTC m=+0.159009320 container attach 797d72982d9bb7489a482b07a98c21df9b6fd4f6cb95cc50559b12bccde17c01 (image=quay.io/ceph/ceph:v20, name=gallant_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 06:50:54 np0005629333 podman[93854]: 2026-02-25 11:50:54.224715728 +0000 UTC m=+0.206863149 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 06:50:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Feb 25 06:50:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1136968253' entity='client.admin' 
Feb 25 06:50:54 np0005629333 gallant_bhabha[93869]: set ssl_option
Feb 25 06:50:54 np0005629333 systemd[1]: libpod-797d72982d9bb7489a482b07a98c21df9b6fd4f6cb95cc50559b12bccde17c01.scope: Deactivated successfully.
Feb 25 06:50:54 np0005629333 podman[93823]: 2026-02-25 11:50:54.649827002 +0000 UTC m=+0.718967701 container died 797d72982d9bb7489a482b07a98c21df9b6fd4f6cb95cc50559b12bccde17c01 (image=quay.io/ceph/ceph:v20, name=gallant_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 06:50:54 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c98758bc3b4749a242bba73e287cb866c721a2ab9538281df29acd6aef2be999-merged.mount: Deactivated successfully.
Feb 25 06:50:54 np0005629333 podman[93823]: 2026-02-25 11:50:54.714931326 +0000 UTC m=+0.784072035 container remove 797d72982d9bb7489a482b07a98c21df9b6fd4f6cb95cc50559b12bccde17c01 (image=quay.io/ceph/ceph:v20, name=gallant_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 06:50:54 np0005629333 systemd[1]: libpod-conmon-797d72982d9bb7489a482b07a98c21df9b6fd4f6cb95cc50559b12bccde17c01.scope: Deactivated successfully.
Feb 25 06:50:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:50:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:50:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:50:55 np0005629333 python3[94115]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
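"orch apply -i" hands a service specification to the cephadm mgr module, which persists it (the mgr/cephadm/spec.rgw.rgw config-key write below) and schedules the deployment asynchronously; "Scheduled rgw.rgw update..." is the only synchronous acknowledgment. The spec file itself is never echoed into the journal; a hypothetical minimal sketch, inferring only the service name and the compute-0 placement from the mgr lines below, might look like:

    # hypothetical reconstruction of /tmp/ceph_rgw.yml; only the service
    # name (rgw.rgw) and placement (compute-0) are confirmed by the log
    cat > /tmp/ceph_rgw.yml <<'EOF'
    service_type: rgw
    service_id: rgw
    placement:
      hosts:
        - compute-0
    EOF
    ceph orch apply -i /tmp/ceph_rgw.yml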
Feb 25 06:50:55 np0005629333 podman[94117]: 2026-02-25 11:50:55.087392529 +0000 UTC m=+0.048614923 container create d3a2fb1a65d61f55655c7a39a8add5efa9dbfbcd833cf88528c8432751ec32a2 (image=quay.io/ceph/ceph:v20, name=beautiful_stonebraker, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:50:55 np0005629333 systemd[1]: Started libpod-conmon-d3a2fb1a65d61f55655c7a39a8add5efa9dbfbcd833cf88528c8432751ec32a2.scope.
Feb 25 06:50:55 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffce4daee5372dd357c0635cd54231dc8028221d80de87429c5b042136d34ba4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffce4daee5372dd357c0635cd54231dc8028221d80de87429c5b042136d34ba4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffce4daee5372dd357c0635cd54231dc8028221d80de87429c5b042136d34ba4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:55 np0005629333 podman[94117]: 2026-02-25 11:50:55.07133916 +0000 UTC m=+0.032561584 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:50:55 np0005629333 podman[94117]: 2026-02-25 11:50:55.188386425 +0000 UTC m=+0.149608919 container init d3a2fb1a65d61f55655c7a39a8add5efa9dbfbcd833cf88528c8432751ec32a2 (image=quay.io/ceph/ceph:v20, name=beautiful_stonebraker, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:50:55 np0005629333 podman[94117]: 2026-02-25 11:50:55.19392471 +0000 UTC m=+0.155147134 container start d3a2fb1a65d61f55655c7a39a8add5efa9dbfbcd833cf88528c8432751ec32a2 (image=quay.io/ceph/ceph:v20, name=beautiful_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 06:50:55 np0005629333 podman[94117]: 2026-02-25 11:50:55.197580959 +0000 UTC m=+0.158803453 container attach d3a2fb1a65d61f55655c7a39a8add5efa9dbfbcd833cf88528c8432751ec32a2 (image=quay.io/ceph/ceph:v20, name=beautiful_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 06:50:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:50:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:50:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 06:50:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:50:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 06:50:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 06:50:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 06:50:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 06:50:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 06:50:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:50:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:50:55 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 06:50:55 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/1136968253' entity='client.admin' 
Feb 25 06:50:55 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:55 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:55 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:50:55 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:55 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 06:50:55 np0005629333 ceph-mgr[76641]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Feb 25 06:50:55 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Feb 25 06:50:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb 25 06:50:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:55 np0005629333 beautiful_stonebraker[94145]: Scheduled rgw.rgw update...
Feb 25 06:50:55 np0005629333 systemd[1]: libpod-d3a2fb1a65d61f55655c7a39a8add5efa9dbfbcd833cf88528c8432751ec32a2.scope: Deactivated successfully.
Feb 25 06:50:55 np0005629333 podman[94117]: 2026-02-25 11:50:55.661362689 +0000 UTC m=+0.622585133 container died d3a2fb1a65d61f55655c7a39a8add5efa9dbfbcd833cf88528c8432751ec32a2 (image=quay.io/ceph/ceph:v20, name=beautiful_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 06:50:55 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ffce4daee5372dd357c0635cd54231dc8028221d80de87429c5b042136d34ba4-merged.mount: Deactivated successfully.
Feb 25 06:50:55 np0005629333 podman[94117]: 2026-02-25 11:50:55.714224948 +0000 UTC m=+0.675447382 container remove d3a2fb1a65d61f55655c7a39a8add5efa9dbfbcd833cf88528c8432751ec32a2 (image=quay.io/ceph/ceph:v20, name=beautiful_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:50:55 np0005629333 systemd[1]: libpod-conmon-d3a2fb1a65d61f55655c7a39a8add5efa9dbfbcd833cf88528c8432751ec32a2.scope: Deactivated successfully.
Feb 25 06:50:55 np0005629333 podman[94263]: 2026-02-25 11:50:55.838660623 +0000 UTC m=+0.055157058 container create e5fc2d79b8677740e208c8359d25742478987cf75c0303840ad08fa4a9e2844b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_northcutt, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 06:50:55 np0005629333 systemd[1]: Started libpod-conmon-e5fc2d79b8677740e208c8359d25742478987cf75c0303840ad08fa4a9e2844b.scope.
Feb 25 06:50:55 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:55 np0005629333 podman[94263]: 2026-02-25 11:50:55.814845272 +0000 UTC m=+0.031342047 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:55 np0005629333 podman[94263]: 2026-02-25 11:50:55.910733845 +0000 UTC m=+0.127230260 container init e5fc2d79b8677740e208c8359d25742478987cf75c0303840ad08fa4a9e2844b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:50:55 np0005629333 podman[94263]: 2026-02-25 11:50:55.919706063 +0000 UTC m=+0.136202478 container start e5fc2d79b8677740e208c8359d25742478987cf75c0303840ad08fa4a9e2844b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_northcutt, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:50:55 np0005629333 podman[94263]: 2026-02-25 11:50:55.923180066 +0000 UTC m=+0.139676511 container attach e5fc2d79b8677740e208c8359d25742478987cf75c0303840ad08fa4a9e2844b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:50:55 np0005629333 nice_northcutt[94280]: 167 167
Feb 25 06:50:55 np0005629333 systemd[1]: libpod-e5fc2d79b8677740e208c8359d25742478987cf75c0303840ad08fa4a9e2844b.scope: Deactivated successfully.
Feb 25 06:50:55 np0005629333 podman[94263]: 2026-02-25 11:50:55.925617839 +0000 UTC m=+0.142114304 container died e5fc2d79b8677740e208c8359d25742478987cf75c0303840ad08fa4a9e2844b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:55 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a7161efe2c64121cd67801287237bd7385cf95b2935d20bfc22b08af047ae202-merged.mount: Deactivated successfully.
Feb 25 06:50:55 np0005629333 podman[94263]: 2026-02-25 11:50:55.971990924 +0000 UTC m=+0.188487369 container remove e5fc2d79b8677740e208c8359d25742478987cf75c0303840ad08fa4a9e2844b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:50:55 np0005629333 systemd[1]: libpod-conmon-e5fc2d79b8677740e208c8359d25742478987cf75c0303840ad08fa4a9e2844b.scope: Deactivated successfully.
Feb 25 06:50:56 np0005629333 podman[94305]: 2026-02-25 11:50:56.135810256 +0000 UTC m=+0.047160129 container create aa76ff83004877ed962e8fb24f6e5a3895d8db5ab7fa008bad0856fb0ef4b307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:50:56 np0005629333 systemd[1]: Started libpod-conmon-aa76ff83004877ed962e8fb24f6e5a3895d8db5ab7fa008bad0856fb0ef4b307.scope.
Feb 25 06:50:56 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c09543e4bf034ab1575c08560973c819451456c158990076a7d751ab71b04e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c09543e4bf034ab1575c08560973c819451456c158990076a7d751ab71b04e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c09543e4bf034ab1575c08560973c819451456c158990076a7d751ab71b04e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c09543e4bf034ab1575c08560973c819451456c158990076a7d751ab71b04e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c09543e4bf034ab1575c08560973c819451456c158990076a7d751ab71b04e0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:56 np0005629333 podman[94305]: 2026-02-25 11:50:56.195002064 +0000 UTC m=+0.106351987 container init aa76ff83004877ed962e8fb24f6e5a3895d8db5ab7fa008bad0856fb0ef4b307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_meninsky, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:50:56 np0005629333 podman[94305]: 2026-02-25 11:50:56.200727524 +0000 UTC m=+0.112077427 container start aa76ff83004877ed962e8fb24f6e5a3895d8db5ab7fa008bad0856fb0ef4b307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_meninsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Feb 25 06:50:56 np0005629333 podman[94305]: 2026-02-25 11:50:56.20424634 +0000 UTC m=+0.115596243 container attach aa76ff83004877ed962e8fb24f6e5a3895d8db5ab7fa008bad0856fb0ef4b307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_meninsky, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 06:50:56 np0005629333 podman[94305]: 2026-02-25 11:50:56.122052015 +0000 UTC m=+0.033401908 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:56 np0005629333 python3[94404]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 25 06:50:56 np0005629333 ceph-mon[76335]: Saving service rgw.rgw spec with placement compute-0
Feb 25 06:50:56 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:56 np0005629333 peaceful_meninsky[94322]: --> passed data devices: 0 physical, 3 LVM
Feb 25 06:50:56 np0005629333 peaceful_meninsky[94322]: --> All data devices are unavailable
Feb 25 06:50:56 np0005629333 systemd[1]: libpod-aa76ff83004877ed962e8fb24f6e5a3895d8db5ab7fa008bad0856fb0ef4b307.scope: Deactivated successfully.
Feb 25 06:50:56 np0005629333 podman[94305]: 2026-02-25 11:50:56.726745873 +0000 UTC m=+0.638095746 container died aa76ff83004877ed962e8fb24f6e5a3895d8db5ab7fa008bad0856fb0ef4b307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_meninsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Feb 25 06:50:56 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5c09543e4bf034ab1575c08560973c819451456c158990076a7d751ab71b04e0-merged.mount: Deactivated successfully.
Feb 25 06:50:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:50:56 np0005629333 podman[94305]: 2026-02-25 11:50:56.790482426 +0000 UTC m=+0.701832299 container remove aa76ff83004877ed962e8fb24f6e5a3895d8db5ab7fa008bad0856fb0ef4b307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_meninsky, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:50:56 np0005629333 systemd[1]: libpod-conmon-aa76ff83004877ed962e8fb24f6e5a3895d8db5ab7fa008bad0856fb0ef4b307.scope: Deactivated successfully.
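The peaceful_meninsky run above is cephadm's OSD drive-group preview: ceph-volume found no free physical disks and three LVM devices that are already claimed, so it reports "All data devices are unavailable" and creates nothing new. A quick way to confirm that from the admin node is something like the following (a sketch; it assumes cephadm orchestration is active, as it appears to be here):

    # Which devices cephadm sees on each host, and why they are (un)available
    ceph orch device ls --wide

    # The LVs already tagged for existing OSDs (same data as the JSON dump below)
    ceph-volume lvm list --format json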
Feb 25 06:50:56 np0005629333 python3[94500]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772020256.3599546-37419-18675730715656/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:50:57 np0005629333 podman[94587]: 2026-02-25 11:50:57.178544894 +0000 UTC m=+0.047216341 container create 61cf070638e77464592d3e47b4ae4f76109319ed01d5f4f1a9dc8700ba6da9c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:57 np0005629333 systemd[1]: Started libpod-conmon-61cf070638e77464592d3e47b4ae4f76109319ed01d5f4f1a9dc8700ba6da9c6.scope.
Feb 25 06:50:57 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:57 np0005629333 podman[94587]: 2026-02-25 11:50:57.25038762 +0000 UTC m=+0.119059067 container init 61cf070638e77464592d3e47b4ae4f76109319ed01d5f4f1a9dc8700ba6da9c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_saha, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:50:57 np0005629333 podman[94587]: 2026-02-25 11:50:57.162815454 +0000 UTC m=+0.031486901 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:57 np0005629333 podman[94587]: 2026-02-25 11:50:57.25778472 +0000 UTC m=+0.126456157 container start 61cf070638e77464592d3e47b4ae4f76109319ed01d5f4f1a9dc8700ba6da9c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_saha, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:50:57 np0005629333 xenodochial_saha[94628]: 167 167
Feb 25 06:50:57 np0005629333 systemd[1]: libpod-61cf070638e77464592d3e47b4ae4f76109319ed01d5f4f1a9dc8700ba6da9c6.scope: Deactivated successfully.
Feb 25 06:50:57 np0005629333 podman[94587]: 2026-02-25 11:50:57.262526502 +0000 UTC m=+0.131197949 container attach 61cf070638e77464592d3e47b4ae4f76109319ed01d5f4f1a9dc8700ba6da9c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_saha, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 06:50:57 np0005629333 podman[94587]: 2026-02-25 11:50:57.262947925 +0000 UTC m=+0.131619362 container died 61cf070638e77464592d3e47b4ae4f76109319ed01d5f4f1a9dc8700ba6da9c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:50:57 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5630215d218c446eeae9ec84b8ab2cdb94b0720110cb3fc600b342b7bd94e27f-merged.mount: Deactivated successfully.
Feb 25 06:50:57 np0005629333 podman[94587]: 2026-02-25 11:50:57.299332991 +0000 UTC m=+0.168004428 container remove 61cf070638e77464592d3e47b4ae4f76109319ed01d5f4f1a9dc8700ba6da9c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_saha, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 06:50:57 np0005629333 systemd[1]: libpod-conmon-61cf070638e77464592d3e47b4ae4f76109319ed01d5f4f1a9dc8700ba6da9c6.scope: Deactivated successfully.
Feb 25 06:50:57 np0005629333 python3[94630]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '
    _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
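This ansible task is the pivotal one: it shells into the ceph client container and runs 'ceph fs volume create', which drives the pool-create / fs-new / mds-spec sequence logged over the next few entries. Stripped of the podman wrapper, the same operation looks roughly like this (a sketch; assumes a host with admin credentials in /etc/ceph):

    # One call: creates cephfs.cephfs.meta and cephfs.cephfs.data, runs "fs new",
    # and saves an mds.cephfs service spec with the given placement
    ceph fs volume create cephfs --placement=compute-0

    # The equivalent manual steps, as they show up in the mon audit log below
    ceph osd pool create cephfs.cephfs.meta
    ceph osd pool create cephfs.cephfs.data --bulk
    ceph fs new cephfs cephfs.cephfs.meta cephfs.cephfs.data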
Feb 25 06:50:57 np0005629333 podman[94654]: 2026-02-25 11:50:57.416193351 +0000 UTC m=+0.040182981 container create 86b6d2f083ae0258fa9aa617ee97d0f11571ffdedb9129061f6d0d6760854c14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_borg, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 06:50:57 np0005629333 podman[94653]: 2026-02-25 11:50:57.421103808 +0000 UTC m=+0.045070967 container create 2a37e448a1d14c8164ef7bf8dc827724e7b68c313cc8a1f673224cbe3e5d5cdc (image=quay.io/ceph/ceph:v20, name=trusting_ganguly, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:50:57 np0005629333 systemd[1]: Started libpod-conmon-2a37e448a1d14c8164ef7bf8dc827724e7b68c313cc8a1f673224cbe3e5d5cdc.scope.
Feb 25 06:50:57 np0005629333 systemd[1]: Started libpod-conmon-86b6d2f083ae0258fa9aa617ee97d0f11571ffdedb9129061f6d0d6760854c14.scope.
Feb 25 06:50:57 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa44e84734f5f7edf7ff008c49115c78638d5b29d5a52084d51a15675c044ad7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa44e84734f5f7edf7ff008c49115c78638d5b29d5a52084d51a15675c044ad7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa44e84734f5f7edf7ff008c49115c78638d5b29d5a52084d51a15675c044ad7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:57 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20c5f5a044818577e506de9a8ad5f48d35ef5d3c63e7a387c45e1d94591ab4ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20c5f5a044818577e506de9a8ad5f48d35ef5d3c63e7a387c45e1d94591ab4ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20c5f5a044818577e506de9a8ad5f48d35ef5d3c63e7a387c45e1d94591ab4ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20c5f5a044818577e506de9a8ad5f48d35ef5d3c63e7a387c45e1d94591ab4ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:57 np0005629333 podman[94653]: 2026-02-25 11:50:57.3964033 +0000 UTC m=+0.020370539 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:50:57 np0005629333 podman[94654]: 2026-02-25 11:50:57.397093871 +0000 UTC m=+0.021083511 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:57 np0005629333 podman[94653]: 2026-02-25 11:50:57.493226881 +0000 UTC m=+0.117194070 container init 2a37e448a1d14c8164ef7bf8dc827724e7b68c313cc8a1f673224cbe3e5d5cdc (image=quay.io/ceph/ceph:v20, name=trusting_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:50:57 np0005629333 podman[94653]: 2026-02-25 11:50:57.498768907 +0000 UTC m=+0.122736066 container start 2a37e448a1d14c8164ef7bf8dc827724e7b68c313cc8a1f673224cbe3e5d5cdc (image=quay.io/ceph/ceph:v20, name=trusting_ganguly, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:50:57 np0005629333 podman[94654]: 2026-02-25 11:50:57.503421006 +0000 UTC m=+0.127410686 container init 86b6d2f083ae0258fa9aa617ee97d0f11571ffdedb9129061f6d0d6760854c14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_borg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:57 np0005629333 podman[94654]: 2026-02-25 11:50:57.508485847 +0000 UTC m=+0.132475517 container start 86b6d2f083ae0258fa9aa617ee97d0f11571ffdedb9129061f6d0d6760854c14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_borg, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:50:57 np0005629333 podman[94653]: 2026-02-25 11:50:57.510494497 +0000 UTC m=+0.134461656 container attach 2a37e448a1d14c8164ef7bf8dc827724e7b68c313cc8a1f673224cbe3e5d5cdc (image=quay.io/ceph/ceph:v20, name=trusting_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:50:57 np0005629333 podman[94654]: 2026-02-25 11:50:57.514809626 +0000 UTC m=+0.138799276 container attach 86b6d2f083ae0258fa9aa617ee97d0f11571ffdedb9129061f6d0d6760854c14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:50:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]: {
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:    "0": [
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:        {
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "devices": [
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "/dev/loop3"
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            ],
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "lv_name": "ceph_lv0",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "lv_size": "21470642176",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "name": "ceph_lv0",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "tags": {
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.cluster_name": "ceph",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.crush_device_class": "",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.encrypted": "0",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.objectstore": "bluestore",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.osd_id": "0",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.type": "block",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.vdo": "0",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.with_tpm": "0"
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            },
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "type": "block",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "vg_name": "ceph_vg0"
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:        }
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:    ],
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:    "1": [
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:        {
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "devices": [
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "/dev/loop4"
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            ],
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "lv_name": "ceph_lv1",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "lv_size": "21470642176",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "name": "ceph_lv1",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "tags": {
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.cluster_name": "ceph",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.crush_device_class": "",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.encrypted": "0",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.objectstore": "bluestore",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.osd_id": "1",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.type": "block",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.vdo": "0",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.with_tpm": "0"
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            },
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "type": "block",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "vg_name": "ceph_vg1"
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:        }
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:    ],
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:    "2": [
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:        {
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "devices": [
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "/dev/loop5"
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            ],
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "lv_name": "ceph_lv2",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "lv_size": "21470642176",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "name": "ceph_lv2",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "tags": {
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.cluster_name": "ceph",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.crush_device_class": "",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.encrypted": "0",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.objectstore": "bluestore",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.osd_id": "2",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.type": "block",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.vdo": "0",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:                "ceph.with_tpm": "0"
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            },
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "type": "block",
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:            "vg_name": "ceph_vg2"
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:        }
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]:    ]
Feb 25 06:50:57 np0005629333 xenodochial_borg[94687]: }
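The JSON printed by xenodochial_borg is ceph-volume's LVM inventory: three bluestore OSDs (ids 0, 1, 2), each on its own VG/LV pair (ceph_vg0/ceph_lv0 through ceph_vg2/ceph_lv2) backed by a loop device, all tagged with cluster fsid 8ac33163-6221-5d58-9a39-8b6933fe7762. The exact container invocation is not captured in the log, but reproducing the output would look roughly like this (a sketch; the privilege and volume flags are assumptions):

    podman run --rm --privileged \
        --volume /dev:/dev --volume /run/lvm:/run/lvm \
        --entrypoint ceph-volume \
        quay.io/ceph/ceph:v20 lvm list --format json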
Feb 25 06:50:57 np0005629333 systemd[1]: libpod-86b6d2f083ae0258fa9aa617ee97d0f11571ffdedb9129061f6d0d6760854c14.scope: Deactivated successfully.
Feb 25 06:50:57 np0005629333 podman[94717]: 2026-02-25 11:50:57.809448934 +0000 UTC m=+0.020193054 container died 86b6d2f083ae0258fa9aa617ee97d0f11571ffdedb9129061f6d0d6760854c14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:57 np0005629333 systemd[1]: var-lib-containers-storage-overlay-20c5f5a044818577e506de9a8ad5f48d35ef5d3c63e7a387c45e1d94591ab4ae-merged.mount: Deactivated successfully.
Feb 25 06:50:57 np0005629333 podman[94717]: 2026-02-25 11:50:57.854436668 +0000 UTC m=+0.065180808 container remove 86b6d2f083ae0258fa9aa617ee97d0f11571ffdedb9129061f6d0d6760854c14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_borg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 06:50:57 np0005629333 systemd[1]: libpod-conmon-86b6d2f083ae0258fa9aa617ee97d0f11571ffdedb9129061f6d0d6760854c14.scope: Deactivated successfully.
Feb 25 06:50:57 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.14236 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 06:50:57 np0005629333 ceph-mgr[76641]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Feb 25 06:50:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Feb 25 06:50:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Feb 25 06:50:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Feb 25 06:50:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Feb 25 06:50:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Feb 25 06:50:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Feb 25 06:50:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Feb 25 06:50:57 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0[76331]: 2026-02-25T11:50:57.927+0000 7f7d6fd3e640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Feb 25 06:50:57 np0005629333 ceph-mon[76335]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Feb 25 06:50:57 np0005629333 ceph-mon[76335]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Feb 25 06:50:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Feb 25 06:50:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).mds e2 new map
Feb 25 06:50:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).mds e2 print_map
    e2
    btime 2026-02-25T11:50:57.927874+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name	cephfs
    epoch	2
    flags	12 joinable allow_snaps allow_multimds_snaps
    created	2026-02-25T11:50:57.927640+0000
    modified	2026-02-25T11:50:57.927640+0000
    tableserver	0
    root	0
    session_timeout	60
    session_autoclose	300
    max_file_size	1099511627776
    max_xattr_size	65536
    required_client_features	{}
    last_failure	0
    last_failure_osd_epoch	0
    compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds	1
    in	
    up	{}
    failed	
    damaged	
    stopped	
    data_pools	[7]
    metadata_pool	6
    inline_data	disabled
    balancer	
    bal_rank_mask	-1
    standby_count_wanted	0
    qdb_cluster	leader: 0 members:
Feb 25 06:50:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Feb 25 06:50:57 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Feb 25 06:50:57 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Feb 25 06:50:57 np0005629333 ceph-mgr[76641]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Feb 25 06:50:57 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Feb 25 06:50:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb 25 06:50:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:57 np0005629333 ceph-mgr[76641]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
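The MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX health events a few lines up are expected at this stage: 'fs new' registers the filesystem before any MDS daemon exists (hence "fsmap cephfs:0"), and the mds.cephfs spec saved immediately afterwards is what will actually deploy one on compute-0. Until that daemon joins, the transition can be watched with commands like these (a sketch):

    ceph health detail       # MDS_ALL_DOWN while no MDS is up for cephfs
    ceph fs status cephfs    # rank table stays empty until an MDS joins
    ceph orch ls mds         # shows the mds.cephfs service and its placement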
Feb 25 06:50:57 np0005629333 systemd[1]: libpod-2a37e448a1d14c8164ef7bf8dc827724e7b68c313cc8a1f673224cbe3e5d5cdc.scope: Deactivated successfully.
Feb 25 06:50:57 np0005629333 podman[94653]: 2026-02-25 11:50:57.971987618 +0000 UTC m=+0.595954797 container died 2a37e448a1d14c8164ef7bf8dc827724e7b68c313cc8a1f673224cbe3e5d5cdc (image=quay.io/ceph/ceph:v20, name=trusting_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 06:50:58 np0005629333 systemd[1]: var-lib-containers-storage-overlay-fa44e84734f5f7edf7ff008c49115c78638d5b29d5a52084d51a15675c044ad7-merged.mount: Deactivated successfully.
Feb 25 06:50:58 np0005629333 podman[94653]: 2026-02-25 11:50:58.012792567 +0000 UTC m=+0.636759706 container remove 2a37e448a1d14c8164ef7bf8dc827724e7b68c313cc8a1f673224cbe3e5d5cdc (image=quay.io/ceph/ceph:v20, name=trusting_ganguly, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:50:58 np0005629333 systemd[1]: libpod-conmon-2a37e448a1d14c8164ef7bf8dc827724e7b68c313cc8a1f673224cbe3e5d5cdc.scope: Deactivated successfully.
Feb 25 06:50:58 np0005629333 podman[94834]: 2026-02-25 11:50:58.348478331 +0000 UTC m=+0.058925841 container create c0100f3818ae1dd7423eee3d1aa81733755236d0b9a762f0f0f01431cc59612a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_jones, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 06:50:58 np0005629333 python3[94821]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
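This second task applies the service spec that the earlier ansible-ansible.legacy.copy task wrote to /tmp/ceph_mds.yml (templated from ceph_mds.yml.j2). The spec's contents are not captured in this log; for an MDS service pinned to compute-0, a typical cephadm spec would look like the following (hypothetical contents, not taken from the log):

    cat > /tmp/ceph_mds.yml <<'EOF'
    service_type: mds
    service_id: cephfs
    placement:
      hosts:
        - compute-0
    EOF

    # Then hand it to the orchestrator, as the task above does via --in-file
    ceph orch apply -i /tmp/ceph_mds.yml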
Feb 25 06:50:58 np0005629333 systemd[1]: Started libpod-conmon-c0100f3818ae1dd7423eee3d1aa81733755236d0b9a762f0f0f01431cc59612a.scope.
Feb 25 06:50:58 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:58 np0005629333 podman[94834]: 2026-02-25 11:50:58.328621428 +0000 UTC m=+0.039068988 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:58 np0005629333 podman[94834]: 2026-02-25 11:50:58.425675186 +0000 UTC m=+0.136122696 container init c0100f3818ae1dd7423eee3d1aa81733755236d0b9a762f0f0f01431cc59612a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_jones, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:58 np0005629333 podman[94846]: 2026-02-25 11:50:58.434757467 +0000 UTC m=+0.059941581 container create 76f11266555c3374eafae47cec11d1c72cbd49fe4cc1d6add2cb32780d821797 (image=quay.io/ceph/ceph:v20, name=elated_austin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:50:58 np0005629333 podman[94834]: 2026-02-25 11:50:58.437665404 +0000 UTC m=+0.148112884 container start c0100f3818ae1dd7423eee3d1aa81733755236d0b9a762f0f0f01431cc59612a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 06:50:58 np0005629333 podman[94834]: 2026-02-25 11:50:58.440478238 +0000 UTC m=+0.150925818 container attach c0100f3818ae1dd7423eee3d1aa81733755236d0b9a762f0f0f01431cc59612a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 25 06:50:58 np0005629333 wonderful_jones[94855]: 167 167
Feb 25 06:50:58 np0005629333 systemd[1]: libpod-c0100f3818ae1dd7423eee3d1aa81733755236d0b9a762f0f0f01431cc59612a.scope: Deactivated successfully.
Feb 25 06:50:58 np0005629333 podman[94834]: 2026-02-25 11:50:58.446622722 +0000 UTC m=+0.157070192 container died c0100f3818ae1dd7423eee3d1aa81733755236d0b9a762f0f0f01431cc59612a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_jones, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:50:58 np0005629333 systemd[1]: Started libpod-conmon-76f11266555c3374eafae47cec11d1c72cbd49fe4cc1d6add2cb32780d821797.scope.
Feb 25 06:50:58 np0005629333 systemd[1]: var-lib-containers-storage-overlay-cd34c564687fa46a753b39aafd1e0d502f7be27287705dc52560ffd5f78de748-merged.mount: Deactivated successfully.
Feb 25 06:50:58 np0005629333 podman[94834]: 2026-02-25 11:50:58.486305827 +0000 UTC m=+0.196753307 container remove c0100f3818ae1dd7423eee3d1aa81733755236d0b9a762f0f0f01431cc59612a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 06:50:58 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e20defc5f9e252186ed33776fafa4b21dfb4b31f175d521323d19e081f45cbcd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e20defc5f9e252186ed33776fafa4b21dfb4b31f175d521323d19e081f45cbcd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e20defc5f9e252186ed33776fafa4b21dfb4b31f175d521323d19e081f45cbcd/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:58 np0005629333 podman[94846]: 2026-02-25 11:50:58.412282026 +0000 UTC m=+0.037466210 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:50:58 np0005629333 systemd[1]: libpod-conmon-c0100f3818ae1dd7423eee3d1aa81733755236d0b9a762f0f0f01431cc59612a.scope: Deactivated successfully.
Feb 25 06:50:58 np0005629333 podman[94846]: 2026-02-25 11:50:58.511104817 +0000 UTC m=+0.136288991 container init 76f11266555c3374eafae47cec11d1c72cbd49fe4cc1d6add2cb32780d821797 (image=quay.io/ceph/ceph:v20, name=elated_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:50:58 np0005629333 podman[94846]: 2026-02-25 11:50:58.517383915 +0000 UTC m=+0.142568049 container start 76f11266555c3374eafae47cec11d1c72cbd49fe4cc1d6add2cb32780d821797 (image=quay.io/ceph/ceph:v20, name=elated_austin, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:50:58 np0005629333 podman[94846]: 2026-02-25 11:50:58.520714304 +0000 UTC m=+0.145898448 container attach 76f11266555c3374eafae47cec11d1c72cbd49fe4cc1d6add2cb32780d821797 (image=quay.io/ceph/ceph:v20, name=elated_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 06:50:58 np0005629333 podman[94892]: 2026-02-25 11:50:58.639807981 +0000 UTC m=+0.059739075 container create 16a98360814b4dbec40d7cd38a20d1e13ea207e81ff7f6cf0f543e5f991f1129 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:50:58 np0005629333 systemd[1]: Started libpod-conmon-16a98360814b4dbec40d7cd38a20d1e13ea207e81ff7f6cf0f543e5f991f1129.scope.
Feb 25 06:50:58 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Feb 25 06:50:58 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Feb 25 06:50:58 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Feb 25 06:50:58 np0005629333 ceph-mon[76335]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Feb 25 06:50:58 np0005629333 ceph-mon[76335]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Feb 25 06:50:58 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Feb 25 06:50:58 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:58 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:50:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb42c69be2fa2260a55cf2b565e7f009549f0ae3f5915c92545861330988d902/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb42c69be2fa2260a55cf2b565e7f009549f0ae3f5915c92545861330988d902/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb42c69be2fa2260a55cf2b565e7f009549f0ae3f5915c92545861330988d902/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb42c69be2fa2260a55cf2b565e7f009549f0ae3f5915c92545861330988d902/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:50:58 np0005629333 podman[94892]: 2026-02-25 11:50:58.611958129 +0000 UTC m=+0.031889273 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:50:58 np0005629333 podman[94892]: 2026-02-25 11:50:58.712927494 +0000 UTC m=+0.132858568 container init 16a98360814b4dbec40d7cd38a20d1e13ea207e81ff7f6cf0f543e5f991f1129 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_pascal, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 06:50:58 np0005629333 podman[94892]: 2026-02-25 11:50:58.721015746 +0000 UTC m=+0.140946800 container start 16a98360814b4dbec40d7cd38a20d1e13ea207e81ff7f6cf0f543e5f991f1129 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 06:50:58 np0005629333 podman[94892]: 2026-02-25 11:50:58.724600693 +0000 UTC m=+0.144531747 container attach 16a98360814b4dbec40d7cd38a20d1e13ea207e81ff7f6cf0f543e5f991f1129 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_pascal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 06:50:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:50:58 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 06:50:58 np0005629333 ceph-mgr[76641]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Feb 25 06:50:58 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Feb 25 06:50:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb 25 06:50:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:58 np0005629333 elated_austin[94879]: Scheduled mds.cephfs update...
Feb 25 06:50:58 np0005629333 systemd[1]: libpod-76f11266555c3374eafae47cec11d1c72cbd49fe4cc1d6add2cb32780d821797.scope: Deactivated successfully.
Feb 25 06:50:58 np0005629333 podman[94846]: 2026-02-25 11:50:58.979628208 +0000 UTC m=+0.604812332 container died 76f11266555c3374eafae47cec11d1c72cbd49fe4cc1d6add2cb32780d821797 (image=quay.io/ceph/ceph:v20, name=elated_austin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:50:59 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e20defc5f9e252186ed33776fafa4b21dfb4b31f175d521323d19e081f45cbcd-merged.mount: Deactivated successfully.
Feb 25 06:50:59 np0005629333 podman[94846]: 2026-02-25 11:50:59.031081985 +0000 UTC m=+0.656266099 container remove 76f11266555c3374eafae47cec11d1c72cbd49fe4cc1d6add2cb32780d821797 (image=quay.io/ceph/ceph:v20, name=elated_austin, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:50:59 np0005629333 systemd[1]: libpod-conmon-76f11266555c3374eafae47cec11d1c72cbd49fe4cc1d6add2cb32780d821797.scope: Deactivated successfully.
Feb 25 06:50:59 np0005629333 lvm[95016]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 06:50:59 np0005629333 lvm[95016]: VG ceph_vg0 finished
Feb 25 06:50:59 np0005629333 lvm[95019]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 06:50:59 np0005629333 lvm[95019]: VG ceph_vg1 finished
Feb 25 06:50:59 np0005629333 lvm[95021]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 06:50:59 np0005629333 lvm[95021]: VG ceph_vg2 finished
Feb 25 06:50:59 np0005629333 romantic_pascal[94926]: {}
Feb 25 06:50:59 np0005629333 systemd[1]: libpod-16a98360814b4dbec40d7cd38a20d1e13ea207e81ff7f6cf0f543e5f991f1129.scope: Deactivated successfully.
Feb 25 06:50:59 np0005629333 systemd[1]: libpod-16a98360814b4dbec40d7cd38a20d1e13ea207e81ff7f6cf0f543e5f991f1129.scope: Consumed 1.028s CPU time.
Feb 25 06:50:59 np0005629333 podman[94892]: 2026-02-25 11:50:59.474928398 +0000 UTC m=+0.894859462 container died 16a98360814b4dbec40d7cd38a20d1e13ea207e81ff7f6cf0f543e5f991f1129 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:50:59 np0005629333 systemd[1]: var-lib-containers-storage-overlay-bb42c69be2fa2260a55cf2b565e7f009549f0ae3f5915c92545861330988d902-merged.mount: Deactivated successfully.
Feb 25 06:50:59 np0005629333 podman[94892]: 2026-02-25 11:50:59.51919247 +0000 UTC m=+0.939123524 container remove 16a98360814b4dbec40d7cd38a20d1e13ea207e81ff7f6cf0f543e5f991f1129 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_pascal, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:50:59 np0005629333 systemd[1]: libpod-conmon-16a98360814b4dbec40d7cd38a20d1e13ea207e81ff7f6cf0f543e5f991f1129.scope: Deactivated successfully.
Feb 25 06:50:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:50:59 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:50:59 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:59 np0005629333 ceph-mon[76335]: Saving service mds.cephfs spec with placement compute-0
Feb 25 06:50:59 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:59 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:50:59 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:00 np0005629333 python3[95189]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 25 06:51:00 np0005629333 podman[95286]: 2026-02-25 11:51:00.308746118 +0000 UTC m=+0.078829435 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 06:51:00 np0005629333 podman[95286]: 2026-02-25 11:51:00.396015274 +0000 UTC m=+0.166098621 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:51:00 np0005629333 python3[95320]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772020259.74997-37467-180282974780620/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=89903beb1b6e06190ab4b6fe858fb9d1594a1a79 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:51:00 np0005629333 ceph-mon[76335]: Saving service mds.cephfs spec with placement compute-0
Feb 25 06:51:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:51:00 np0005629333 python3[95472]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:51:01 np0005629333 podman[95507]: 2026-02-25 11:51:01.012796392 +0000 UTC m=+0.036247093 container create e76a9dae2e585868bfae4abb218191e0fad04f6104ec4830e5b52da410858bbd (image=quay.io/ceph/ceph:v20, name=competent_hopper, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 06:51:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:51:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:51:01 np0005629333 systemd[1]: Started libpod-conmon-e76a9dae2e585868bfae4abb218191e0fad04f6104ec4830e5b52da410858bbd.scope.
Feb 25 06:51:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:51:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:51:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 06:51:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:51:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 06:51:01 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea773955af7609388e3662d78e2238a93cde38e97824a51aa109987844e0cdb5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea773955af7609388e3662d78e2238a93cde38e97824a51aa109987844e0cdb5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 06:51:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 06:51:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 06:51:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 06:51:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:51:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:51:01 np0005629333 podman[95507]: 2026-02-25 11:51:01.071364501 +0000 UTC m=+0.094815232 container init e76a9dae2e585868bfae4abb218191e0fad04f6104ec4830e5b52da410858bbd (image=quay.io/ceph/ceph:v20, name=competent_hopper, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Feb 25 06:51:01 np0005629333 podman[95507]: 2026-02-25 11:51:01.081426202 +0000 UTC m=+0.104876903 container start e76a9dae2e585868bfae4abb218191e0fad04f6104ec4830e5b52da410858bbd (image=quay.io/ceph/ceph:v20, name=competent_hopper, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 06:51:01 np0005629333 podman[95507]: 2026-02-25 11:51:01.084176974 +0000 UTC m=+0.107627765 container attach e76a9dae2e585868bfae4abb218191e0fad04f6104ec4830e5b52da410858bbd (image=quay.io/ceph/ceph:v20, name=competent_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:51:01 np0005629333 podman[95507]: 2026-02-25 11:51:00.994494645 +0000 UTC m=+0.017945366 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:51:01 np0005629333 podman[95609]: 2026-02-25 11:51:01.418127276 +0000 UTC m=+0.049936372 container create a8d62994b9b275331e0bf6049cea3c1c6fe9d63daa46e5bd37774e530762ad68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 06:51:01 np0005629333 systemd[1]: Started libpod-conmon-a8d62994b9b275331e0bf6049cea3c1c6fe9d63daa46e5bd37774e530762ad68.scope.
Feb 25 06:51:01 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:01 np0005629333 podman[95609]: 2026-02-25 11:51:01.491133816 +0000 UTC m=+0.122942912 container init a8d62994b9b275331e0bf6049cea3c1c6fe9d63daa46e5bd37774e530762ad68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_snyder, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 06:51:01 np0005629333 podman[95609]: 2026-02-25 11:51:01.399188901 +0000 UTC m=+0.030997977 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:51:01 np0005629333 podman[95609]: 2026-02-25 11:51:01.500214477 +0000 UTC m=+0.132023533 container start a8d62994b9b275331e0bf6049cea3c1c6fe9d63daa46e5bd37774e530762ad68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_snyder, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 06:51:01 np0005629333 podman[95609]: 2026-02-25 11:51:01.504206307 +0000 UTC m=+0.136015653 container attach a8d62994b9b275331e0bf6049cea3c1c6fe9d63daa46e5bd37774e530762ad68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_snyder, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 06:51:01 np0005629333 dreamy_snyder[95625]: 167 167
Feb 25 06:51:01 np0005629333 systemd[1]: libpod-a8d62994b9b275331e0bf6049cea3c1c6fe9d63daa46e5bd37774e530762ad68.scope: Deactivated successfully.
Feb 25 06:51:01 np0005629333 podman[95609]: 2026-02-25 11:51:01.506344051 +0000 UTC m=+0.138153147 container died a8d62994b9b275331e0bf6049cea3c1c6fe9d63daa46e5bd37774e530762ad68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_snyder, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:51:01 np0005629333 systemd[1]: var-lib-containers-storage-overlay-9569360178c1e844db4e3e9b5c90c7d6a8f1260e87db88d696cf1be8d022a99e-merged.mount: Deactivated successfully.
Feb 25 06:51:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:51:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:51:01 np0005629333 podman[95609]: 2026-02-25 11:51:01.55891442 +0000 UTC m=+0.190723506 container remove a8d62994b9b275331e0bf6049cea3c1c6fe9d63daa46e5bd37774e530762ad68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_snyder, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:51:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:51:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:51:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:51:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:51:01 np0005629333 systemd[1]: libpod-conmon-a8d62994b9b275331e0bf6049cea3c1c6fe9d63daa46e5bd37774e530762ad68.scope: Deactivated successfully.
Feb 25 06:51:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Feb 25 06:51:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3317869719' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Feb 25 06:51:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3317869719' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Feb 25 06:51:01 np0005629333 systemd[1]: libpod-e76a9dae2e585868bfae4abb218191e0fad04f6104ec4830e5b52da410858bbd.scope: Deactivated successfully.
Feb 25 06:51:01 np0005629333 podman[95507]: 2026-02-25 11:51:01.633506078 +0000 UTC m=+0.656956829 container died e76a9dae2e585868bfae4abb218191e0fad04f6104ec4830e5b52da410858bbd (image=quay.io/ceph/ceph:v20, name=competent_hopper, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 06:51:01 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ea773955af7609388e3662d78e2238a93cde38e97824a51aa109987844e0cdb5-merged.mount: Deactivated successfully.
Feb 25 06:51:01 np0005629333 podman[95507]: 2026-02-25 11:51:01.676059129 +0000 UTC m=+0.699509840 container remove e76a9dae2e585868bfae4abb218191e0fad04f6104ec4830e5b52da410858bbd (image=quay.io/ceph/ceph:v20, name=competent_hopper, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 06:51:01 np0005629333 systemd[1]: libpod-conmon-e76a9dae2e585868bfae4abb218191e0fad04f6104ec4830e5b52da410858bbd.scope: Deactivated successfully.
Feb 25 06:51:01 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:01 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:01 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:51:01 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:01 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 06:51:01 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/3317869719' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Feb 25 06:51:01 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/3317869719' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Feb 25 06:51:01 np0005629333 podman[95661]: 2026-02-25 11:51:01.747397469 +0000 UTC m=+0.050874140 container create 16c306a61db4b6b68bfe668d2b44efdf7c88c9b44a04110c3fa2b2df2c235c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_chatterjee, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 06:51:01 np0005629333 systemd[1]: Started libpod-conmon-16c306a61db4b6b68bfe668d2b44efdf7c88c9b44a04110c3fa2b2df2c235c8d.scope.
Feb 25 06:51:01 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a99007e504318734b03f0c7afd4d8e27637e858860997c9d5be162ee5061bcd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a99007e504318734b03f0c7afd4d8e27637e858860997c9d5be162ee5061bcd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a99007e504318734b03f0c7afd4d8e27637e858860997c9d5be162ee5061bcd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a99007e504318734b03f0c7afd4d8e27637e858860997c9d5be162ee5061bcd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a99007e504318734b03f0c7afd4d8e27637e858860997c9d5be162ee5061bcd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:01 np0005629333 podman[95661]: 2026-02-25 11:51:01.730845435 +0000 UTC m=+0.034322126 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:51:01 np0005629333 podman[95661]: 2026-02-25 11:51:01.835642864 +0000 UTC m=+0.139119585 container init 16c306a61db4b6b68bfe668d2b44efdf7c88c9b44a04110c3fa2b2df2c235c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:51:01 np0005629333 podman[95661]: 2026-02-25 11:51:01.844107547 +0000 UTC m=+0.147584258 container start 16c306a61db4b6b68bfe668d2b44efdf7c88c9b44a04110c3fa2b2df2c235c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 06:51:01 np0005629333 podman[95661]: 2026-02-25 11:51:01.848173188 +0000 UTC m=+0.151649939 container attach 16c306a61db4b6b68bfe668d2b44efdf7c88c9b44a04110c3fa2b2df2c235c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_chatterjee, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 06:51:02 np0005629333 upbeat_chatterjee[95678]: --> passed data devices: 0 physical, 3 LVM
Feb 25 06:51:02 np0005629333 upbeat_chatterjee[95678]: --> All data devices are unavailable
Feb 25 06:51:02 np0005629333 systemd[1]: libpod-16c306a61db4b6b68bfe668d2b44efdf7c88c9b44a04110c3fa2b2df2c235c8d.scope: Deactivated successfully.
Feb 25 06:51:02 np0005629333 podman[95661]: 2026-02-25 11:51:02.295150936 +0000 UTC m=+0.598627647 container died 16c306a61db4b6b68bfe668d2b44efdf7c88c9b44a04110c3fa2b2df2c235c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 06:51:02 np0005629333 systemd[1]: var-lib-containers-storage-overlay-2a99007e504318734b03f0c7afd4d8e27637e858860997c9d5be162ee5061bcd-merged.mount: Deactivated successfully.
Feb 25 06:51:02 np0005629333 podman[95661]: 2026-02-25 11:51:02.346788788 +0000 UTC m=+0.650265499 container remove 16c306a61db4b6b68bfe668d2b44efdf7c88c9b44a04110c3fa2b2df2c235c8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:51:02 np0005629333 systemd[1]: libpod-conmon-16c306a61db4b6b68bfe668d2b44efdf7c88c9b44a04110c3fa2b2df2c235c8d.scope: Deactivated successfully.
Feb 25 06:51:02 np0005629333 python3[95723]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:51:02 np0005629333 podman[95755]: 2026-02-25 11:51:02.46507938 +0000 UTC m=+0.039080258 container create 07fc8075bf0e33ae6d0cd1414738d11f72f4054583c31e65178d0a983756f02d (image=quay.io/ceph/ceph:v20, name=elegant_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:51:02 np0005629333 systemd[1]: Started libpod-conmon-07fc8075bf0e33ae6d0cd1414738d11f72f4054583c31e65178d0a983756f02d.scope.
Feb 25 06:51:02 np0005629333 podman[95755]: 2026-02-25 11:51:02.448260778 +0000 UTC m=+0.022261666 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:51:02 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:02 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae63734bf873174682b5eb6adf65ddefcddf7bfb4f7bcd47b351d96d9b861f47/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:02 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae63734bf873174682b5eb6adf65ddefcddf7bfb4f7bcd47b351d96d9b861f47/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:02 np0005629333 podman[95755]: 2026-02-25 11:51:02.572482928 +0000 UTC m=+0.146483836 container init 07fc8075bf0e33ae6d0cd1414738d11f72f4054583c31e65178d0a983756f02d (image=quay.io/ceph/ceph:v20, name=elegant_visvesvaraya, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:51:02 np0005629333 podman[95755]: 2026-02-25 11:51:02.581200318 +0000 UTC m=+0.155201226 container start 07fc8075bf0e33ae6d0cd1414738d11f72f4054583c31e65178d0a983756f02d (image=quay.io/ceph/ceph:v20, name=elegant_visvesvaraya, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 06:51:02 np0005629333 podman[95755]: 2026-02-25 11:51:02.585133435 +0000 UTC m=+0.159134323 container attach 07fc8075bf0e33ae6d0cd1414738d11f72f4054583c31e65178d0a983756f02d (image=quay.io/ceph/ceph:v20, name=elegant_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 06:51:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:51:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:51:02 np0005629333 podman[95837]: 2026-02-25 11:51:02.820996489 +0000 UTC m=+0.052504159 container create 891402431bc6f0663b65e15f89b37bf505c440120a99249d49cf23a75a47cc69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elion, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:51:02 np0005629333 systemd[1]: Started libpod-conmon-891402431bc6f0663b65e15f89b37bf505c440120a99249d49cf23a75a47cc69.scope.
Feb 25 06:51:02 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:02 np0005629333 podman[95837]: 2026-02-25 11:51:02.796440445 +0000 UTC m=+0.027948165 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:51:02 np0005629333 podman[95837]: 2026-02-25 11:51:02.898917976 +0000 UTC m=+0.130425616 container init 891402431bc6f0663b65e15f89b37bf505c440120a99249d49cf23a75a47cc69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 06:51:02 np0005629333 podman[95837]: 2026-02-25 11:51:02.904986797 +0000 UTC m=+0.136494427 container start 891402431bc6f0663b65e15f89b37bf505c440120a99249d49cf23a75a47cc69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elion, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:51:02 np0005629333 podman[95837]: 2026-02-25 11:51:02.907543283 +0000 UTC m=+0.139050913 container attach 891402431bc6f0663b65e15f89b37bf505c440120a99249d49cf23a75a47cc69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 06:51:02 np0005629333 vigilant_elion[95853]: 167 167
Feb 25 06:51:02 np0005629333 systemd[1]: libpod-891402431bc6f0663b65e15f89b37bf505c440120a99249d49cf23a75a47cc69.scope: Deactivated successfully.
Feb 25 06:51:02 np0005629333 podman[95837]: 2026-02-25 11:51:02.910719018 +0000 UTC m=+0.142226638 container died 891402431bc6f0663b65e15f89b37bf505c440120a99249d49cf23a75a47cc69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elion, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:51:02 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5b0e0e480421c2653b4dd173f50abc83764c387892dd985ec0d4eb94619b46af-merged.mount: Deactivated successfully.
Feb 25 06:51:02 np0005629333 podman[95837]: 2026-02-25 11:51:02.945209798 +0000 UTC m=+0.176717468 container remove 891402431bc6f0663b65e15f89b37bf505c440120a99249d49cf23a75a47cc69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_elion, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 06:51:02 np0005629333 systemd[1]: libpod-conmon-891402431bc6f0663b65e15f89b37bf505c440120a99249d49cf23a75a47cc69.scope: Deactivated successfully.
Feb 25 06:51:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb 25 06:51:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1256890035' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 25 06:51:03 np0005629333 elegant_visvesvaraya[95803]: 
Feb 25 06:51:03 np0005629333 elegant_visvesvaraya[95803]: {"fsid":"8ac33163-6221-5d58-9a39-8b6933fe7762","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":110,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":32,"num_osds":3,"num_up_osds":3,"osd_up_since":1772020230,"num_in_osds":3,"osd_in_since":1772020207,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83922944,"bytes_avail":64328003584,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2026-02-25T11:50:57:927874+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-02-25T11:50:32.774394+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Feb 25 06:51:03 np0005629333 podman[95878]: 2026-02-25 11:51:03.116316117 +0000 UTC m=+0.061721114 container create 5f6502161f9a1ead727c28eb7e2d9f8a30f6534cd47e1229dec02974e330b5d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_khayyam, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:51:03 np0005629333 systemd[1]: libpod-07fc8075bf0e33ae6d0cd1414738d11f72f4054583c31e65178d0a983756f02d.scope: Deactivated successfully.
Feb 25 06:51:03 np0005629333 podman[95755]: 2026-02-25 11:51:03.124258984 +0000 UTC m=+0.698259882 container died 07fc8075bf0e33ae6d0cd1414738d11f72f4054583c31e65178d0a983756f02d (image=quay.io/ceph/ceph:v20, name=elegant_visvesvaraya, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:51:03 np0005629333 systemd[1]: Started libpod-conmon-5f6502161f9a1ead727c28eb7e2d9f8a30f6534cd47e1229dec02974e330b5d3.scope.
Feb 25 06:51:03 np0005629333 podman[95755]: 2026-02-25 11:51:03.160322801 +0000 UTC m=+0.734323679 container remove 07fc8075bf0e33ae6d0cd1414738d11f72f4054583c31e65178d0a983756f02d (image=quay.io/ceph/ceph:v20, name=elegant_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 06:51:03 np0005629333 podman[95878]: 2026-02-25 11:51:03.085767544 +0000 UTC m=+0.031172621 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:51:03 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:03 np0005629333 systemd[1]: libpod-conmon-07fc8075bf0e33ae6d0cd1414738d11f72f4054583c31e65178d0a983756f02d.scope: Deactivated successfully.
Feb 25 06:51:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/654c5db91debf0a71ba78ac7306be5ad808c57491273d4a31adac1b0e6617194/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/654c5db91debf0a71ba78ac7306be5ad808c57491273d4a31adac1b0e6617194/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/654c5db91debf0a71ba78ac7306be5ad808c57491273d4a31adac1b0e6617194/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/654c5db91debf0a71ba78ac7306be5ad808c57491273d4a31adac1b0e6617194/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:03 np0005629333 podman[95878]: 2026-02-25 11:51:03.208657864 +0000 UTC m=+0.154062921 container init 5f6502161f9a1ead727c28eb7e2d9f8a30f6534cd47e1229dec02974e330b5d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True)
Feb 25 06:51:03 np0005629333 podman[95878]: 2026-02-25 11:51:03.214153708 +0000 UTC m=+0.159558695 container start 5f6502161f9a1ead727c28eb7e2d9f8a30f6534cd47e1229dec02974e330b5d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Feb 25 06:51:03 np0005629333 podman[95878]: 2026-02-25 11:51:03.217133657 +0000 UTC m=+0.162538674 container attach 5f6502161f9a1ead727c28eb7e2d9f8a30f6534cd47e1229dec02974e330b5d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_khayyam, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 06:51:03 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ae63734bf873174682b5eb6adf65ddefcddf7bfb4f7bcd47b351d96d9b861f47-merged.mount: Deactivated successfully.
Feb 25 06:51:03 np0005629333 python3[95939]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]: {
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:    "0": [
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:        {
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "devices": [
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "/dev/loop3"
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            ],
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "lv_name": "ceph_lv0",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "lv_size": "21470642176",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "name": "ceph_lv0",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "tags": {
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.cluster_name": "ceph",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.crush_device_class": "",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.encrypted": "0",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.objectstore": "bluestore",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.osd_id": "0",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.type": "block",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.vdo": "0",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.with_tpm": "0"
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            },
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "type": "block",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "vg_name": "ceph_vg0"
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:        }
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:    ],
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:    "1": [
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:        {
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "devices": [
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "/dev/loop4"
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            ],
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "lv_name": "ceph_lv1",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "lv_size": "21470642176",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "name": "ceph_lv1",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "tags": {
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.cluster_name": "ceph",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.crush_device_class": "",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.encrypted": "0",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.objectstore": "bluestore",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.osd_id": "1",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.type": "block",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.vdo": "0",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.with_tpm": "0"
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            },
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "type": "block",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "vg_name": "ceph_vg1"
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:        }
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:    ],
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:    "2": [
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:        {
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "devices": [
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "/dev/loop5"
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            ],
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "lv_name": "ceph_lv2",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "lv_size": "21470642176",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "name": "ceph_lv2",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "tags": {
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.cluster_name": "ceph",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.crush_device_class": "",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.encrypted": "0",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.objectstore": "bluestore",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.osd_id": "2",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.type": "block",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.vdo": "0",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:                "ceph.with_tpm": "0"
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            },
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "type": "block",
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:            "vg_name": "ceph_vg2"
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:        }
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]:    ]
Feb 25 06:51:03 np0005629333 competent_khayyam[95909]: }
Feb 25 06:51:03 np0005629333 systemd[1]: libpod-5f6502161f9a1ead727c28eb7e2d9f8a30f6534cd47e1229dec02974e330b5d3.scope: Deactivated successfully.
Feb 25 06:51:03 np0005629333 podman[95878]: 2026-02-25 11:51:03.537409361 +0000 UTC m=+0.482814378 container died 5f6502161f9a1ead727c28eb7e2d9f8a30f6534cd47e1229dec02974e330b5d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_khayyam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 06:51:03 np0005629333 systemd[1]: var-lib-containers-storage-overlay-654c5db91debf0a71ba78ac7306be5ad808c57491273d4a31adac1b0e6617194-merged.mount: Deactivated successfully.
Feb 25 06:51:03 np0005629333 podman[95944]: 2026-02-25 11:51:03.592310931 +0000 UTC m=+0.075660891 container create f74631bbd139699268b8d9a9e693029c1aa3a1552144fced721dc12023e46bfa (image=quay.io/ceph/ceph:v20, name=elastic_morse, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Feb 25 06:51:03 np0005629333 podman[95878]: 2026-02-25 11:51:03.60165104 +0000 UTC m=+0.547056077 container remove 5f6502161f9a1ead727c28eb7e2d9f8a30f6534cd47e1229dec02974e330b5d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_khayyam, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:51:03 np0005629333 systemd[1]: libpod-conmon-5f6502161f9a1ead727c28eb7e2d9f8a30f6534cd47e1229dec02974e330b5d3.scope: Deactivated successfully.
Feb 25 06:51:03 np0005629333 systemd[1]: Started libpod-conmon-f74631bbd139699268b8d9a9e693029c1aa3a1552144fced721dc12023e46bfa.scope.
Feb 25 06:51:03 np0005629333 podman[95944]: 2026-02-25 11:51:03.559776689 +0000 UTC m=+0.043126679 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:51:03 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3636a7353bddf55be988f0981d9dd50f6724698271527b48f7561d0e3257c41/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3636a7353bddf55be988f0981d9dd50f6724698271527b48f7561d0e3257c41/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:03 np0005629333 podman[95944]: 2026-02-25 11:51:03.682497434 +0000 UTC m=+0.165847394 container init f74631bbd139699268b8d9a9e693029c1aa3a1552144fced721dc12023e46bfa (image=quay.io/ceph/ceph:v20, name=elastic_morse, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 06:51:03 np0005629333 podman[95944]: 2026-02-25 11:51:03.688751651 +0000 UTC m=+0.172101611 container start f74631bbd139699268b8d9a9e693029c1aa3a1552144fced721dc12023e46bfa (image=quay.io/ceph/ceph:v20, name=elastic_morse, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 06:51:03 np0005629333 podman[95944]: 2026-02-25 11:51:03.692121941 +0000 UTC m=+0.175471901 container attach f74631bbd139699268b8d9a9e693029c1aa3a1552144fced721dc12023e46bfa (image=quay.io/ceph/ceph:v20, name=elastic_morse, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:51:04 np0005629333 podman[96055]: 2026-02-25 11:51:04.071849981 +0000 UTC m=+0.044500230 container create 067b4ea6b67238d6bf6c89b89984cc79c24730e9146a4577df791c909c7adf24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:51:04 np0005629333 systemd[1]: Started libpod-conmon-067b4ea6b67238d6bf6c89b89984cc79c24730e9146a4577df791c909c7adf24.scope.
Feb 25 06:51:04 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:04 np0005629333 podman[96055]: 2026-02-25 11:51:04.050272297 +0000 UTC m=+0.022922616 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:51:04 np0005629333 podman[96055]: 2026-02-25 11:51:04.15387256 +0000 UTC m=+0.126522829 container init 067b4ea6b67238d6bf6c89b89984cc79c24730e9146a4577df791c909c7adf24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lehmann, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:51:04 np0005629333 podman[96055]: 2026-02-25 11:51:04.16057006 +0000 UTC m=+0.133220299 container start 067b4ea6b67238d6bf6c89b89984cc79c24730e9146a4577df791c909c7adf24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lehmann, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 06:51:04 np0005629333 podman[96055]: 2026-02-25 11:51:04.163959821 +0000 UTC m=+0.136610080 container attach 067b4ea6b67238d6bf6c89b89984cc79c24730e9146a4577df791c909c7adf24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lehmann, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:51:04 np0005629333 amazing_lehmann[96072]: 167 167
Feb 25 06:51:04 np0005629333 systemd[1]: libpod-067b4ea6b67238d6bf6c89b89984cc79c24730e9146a4577df791c909c7adf24.scope: Deactivated successfully.
Feb 25 06:51:04 np0005629333 podman[96055]: 2026-02-25 11:51:04.165193368 +0000 UTC m=+0.137843627 container died 067b4ea6b67238d6bf6c89b89984cc79c24730e9146a4577df791c909c7adf24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lehmann, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:51:04 np0005629333 systemd[1]: var-lib-containers-storage-overlay-4357974d425cff2208e1f9c922fc4211a9041505816aeb825b2a806670246f48-merged.mount: Deactivated successfully.
Feb 25 06:51:04 np0005629333 podman[96055]: 2026-02-25 11:51:04.206481111 +0000 UTC m=+0.179131350 container remove 067b4ea6b67238d6bf6c89b89984cc79c24730e9146a4577df791c909c7adf24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 06:51:04 np0005629333 systemd[1]: libpod-conmon-067b4ea6b67238d6bf6c89b89984cc79c24730e9146a4577df791c909c7adf24.scope: Deactivated successfully.
Feb 25 06:51:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 06:51:04 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2100907211' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 06:51:04 np0005629333 elastic_morse[95970]: 
Feb 25 06:51:04 np0005629333 elastic_morse[95970]: {"epoch":1,"fsid":"8ac33163-6221-5d58-9a39-8b6933fe7762","modified":"2026-02-25T11:49:08.188048Z","created":"2026-02-25T11:49:08.188048Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Feb 25 06:51:04 np0005629333 elastic_morse[95970]: dumped monmap epoch 1
Feb 25 06:51:04 np0005629333 systemd[1]: libpod-f74631bbd139699268b8d9a9e693029c1aa3a1552144fced721dc12023e46bfa.scope: Deactivated successfully.
Feb 25 06:51:04 np0005629333 podman[95944]: 2026-02-25 11:51:04.260184595 +0000 UTC m=+0.743534545 container died f74631bbd139699268b8d9a9e693029c1aa3a1552144fced721dc12023e46bfa (image=quay.io/ceph/ceph:v20, name=elastic_morse, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:51:04 np0005629333 podman[95944]: 2026-02-25 11:51:04.295948483 +0000 UTC m=+0.779298423 container remove f74631bbd139699268b8d9a9e693029c1aa3a1552144fced721dc12023e46bfa (image=quay.io/ceph/ceph:v20, name=elastic_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:51:04 np0005629333 systemd[1]: libpod-conmon-f74631bbd139699268b8d9a9e693029c1aa3a1552144fced721dc12023e46bfa.scope: Deactivated successfully.
Feb 25 06:51:04 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c3636a7353bddf55be988f0981d9dd50f6724698271527b48f7561d0e3257c41-merged.mount: Deactivated successfully.
Feb 25 06:51:04 np0005629333 podman[96109]: 2026-02-25 11:51:04.374653413 +0000 UTC m=+0.042293944 container create 617635a330a0bf73d81a0a60a752c40a602ad37b7bcb56ce15bd7f32e6e494ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 06:51:04 np0005629333 systemd[1]: Started libpod-conmon-617635a330a0bf73d81a0a60a752c40a602ad37b7bcb56ce15bd7f32e6e494ed.scope.
Feb 25 06:51:04 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0113b2793d025b9284eb7f5a391d46ed9efdd5391e8e243a70ac90bc8d040a18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0113b2793d025b9284eb7f5a391d46ed9efdd5391e8e243a70ac90bc8d040a18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0113b2793d025b9284eb7f5a391d46ed9efdd5391e8e243a70ac90bc8d040a18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0113b2793d025b9284eb7f5a391d46ed9efdd5391e8e243a70ac90bc8d040a18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:04 np0005629333 podman[96109]: 2026-02-25 11:51:04.352671057 +0000 UTC m=+0.020311628 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:51:04 np0005629333 podman[96109]: 2026-02-25 11:51:04.451336113 +0000 UTC m=+0.118976654 container init 617635a330a0bf73d81a0a60a752c40a602ad37b7bcb56ce15bd7f32e6e494ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:51:04 np0005629333 podman[96109]: 2026-02-25 11:51:04.457027793 +0000 UTC m=+0.124668334 container start 617635a330a0bf73d81a0a60a752c40a602ad37b7bcb56ce15bd7f32e6e494ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_aryabhata, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 06:51:04 np0005629333 podman[96109]: 2026-02-25 11:51:04.460439015 +0000 UTC m=+0.128079566 container attach 617635a330a0bf73d81a0a60a752c40a602ad37b7bcb56ce15bd7f32e6e494ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_aryabhata, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 06:51:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:51:04 np0005629333 python3[96165]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:51:04 np0005629333 podman[96191]: 2026-02-25 11:51:04.896141296 +0000 UTC m=+0.035979655 container create 59961035221b73c83fab5f2a1be57f3912b20aaa90fc39645c78cc132f62d9bb (image=quay.io/ceph/ceph:v20, name=adoring_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 06:51:04 np0005629333 systemd[1]: Started libpod-conmon-59961035221b73c83fab5f2a1be57f3912b20aaa90fc39645c78cc132f62d9bb.scope.
Feb 25 06:51:04 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0fdc0ef02da7ee1372f89a31506949f9db7eafa67d2f4fa7841826e5c1e582c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0fdc0ef02da7ee1372f89a31506949f9db7eafa67d2f4fa7841826e5c1e582c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:04 np0005629333 podman[96191]: 2026-02-25 11:51:04.967585399 +0000 UTC m=+0.107423798 container init 59961035221b73c83fab5f2a1be57f3912b20aaa90fc39645c78cc132f62d9bb (image=quay.io/ceph/ceph:v20, name=adoring_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:51:04 np0005629333 podman[96191]: 2026-02-25 11:51:04.972497916 +0000 UTC m=+0.112336285 container start 59961035221b73c83fab5f2a1be57f3912b20aaa90fc39645c78cc132f62d9bb (image=quay.io/ceph/ceph:v20, name=adoring_sammet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 06:51:04 np0005629333 podman[96191]: 2026-02-25 11:51:04.975909498 +0000 UTC m=+0.115747867 container attach 59961035221b73c83fab5f2a1be57f3912b20aaa90fc39645c78cc132f62d9bb (image=quay.io/ceph/ceph:v20, name=adoring_sammet, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 06:51:04 np0005629333 podman[96191]: 2026-02-25 11:51:04.88188001 +0000 UTC m=+0.021718389 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:51:05 np0005629333 lvm[96246]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 06:51:05 np0005629333 lvm[96249]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 06:51:05 np0005629333 lvm[96249]: VG ceph_vg1 finished
Feb 25 06:51:05 np0005629333 lvm[96246]: VG ceph_vg0 finished
Feb 25 06:51:05 np0005629333 lvm[96270]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 06:51:05 np0005629333 lvm[96270]: VG ceph_vg2 finished
Feb 25 06:51:05 np0005629333 zealous_aryabhata[96125]: {}
Feb 25 06:51:05 np0005629333 systemd[1]: libpod-617635a330a0bf73d81a0a60a752c40a602ad37b7bcb56ce15bd7f32e6e494ed.scope: Deactivated successfully.
Feb 25 06:51:05 np0005629333 systemd[1]: libpod-617635a330a0bf73d81a0a60a752c40a602ad37b7bcb56ce15bd7f32e6e494ed.scope: Consumed 1.071s CPU time.
Feb 25 06:51:05 np0005629333 conmon[96125]: conmon 617635a330a0bf73d81a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-617635a330a0bf73d81a0a60a752c40a602ad37b7bcb56ce15bd7f32e6e494ed.scope/container/memory.events
Feb 25 06:51:05 np0005629333 podman[96109]: 2026-02-25 11:51:05.207212885 +0000 UTC m=+0.874853406 container died 617635a330a0bf73d81a0a60a752c40a602ad37b7bcb56ce15bd7f32e6e494ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_aryabhata, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:51:05 np0005629333 systemd[1]: var-lib-containers-storage-overlay-0113b2793d025b9284eb7f5a391d46ed9efdd5391e8e243a70ac90bc8d040a18-merged.mount: Deactivated successfully.
Feb 25 06:51:05 np0005629333 podman[96109]: 2026-02-25 11:51:05.253759655 +0000 UTC m=+0.921400176 container remove 617635a330a0bf73d81a0a60a752c40a602ad37b7bcb56ce15bd7f32e6e494ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 06:51:05 np0005629333 systemd[1]: libpod-conmon-617635a330a0bf73d81a0a60a752c40a602ad37b7bcb56ce15bd7f32e6e494ed.scope: Deactivated successfully.
Feb 25 06:51:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:51:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:51:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:05 np0005629333 ceph-mgr[76641]: [progress INFO root] update: starting ev 40aa67a7-bede-40a4-bf47-cdb881b0cf9b (Updating rgw.rgw deployment (+1 -> 1))
Feb 25 06:51:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.fpqgwn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Feb 25 06:51:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.fpqgwn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Feb 25 06:51:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.fpqgwn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb 25 06:51:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Feb 25 06:51:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:51:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:51:05 np0005629333 ceph-mgr[76641]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.fpqgwn on compute-0
Feb 25 06:51:05 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.fpqgwn on compute-0
Feb 25 06:51:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Feb 25 06:51:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3290433639' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Feb 25 06:51:05 np0005629333 adoring_sammet[96228]: [client.openstack]
Feb 25 06:51:05 np0005629333 adoring_sammet[96228]:         key = AQCb4Z5pAAAAABAABK9flK9N0VdugkhHl+E8Cg==
Feb 25 06:51:05 np0005629333 adoring_sammet[96228]:         caps mgr = "allow *"
Feb 25 06:51:05 np0005629333 adoring_sammet[96228]:         caps mon = "profile rbd"
Feb 25 06:51:05 np0005629333 adoring_sammet[96228]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
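
The keyring block echoed by the helper container is plain INI, so it parses directly with configparser. A sketch using the text printed above (read from a string here; in practice it would come from a file or the container's stdout):

    import configparser
    import textwrap

    # The same block the container printed, inlined for the sketch.
    keyring = textwrap.dedent('''\
        [client.openstack]
        key = AQCb4Z5pAAAAABAABK9flK9N0VdugkhHl+E8Cg==
        caps mgr = "allow *"
        caps mon = "profile rbd"
        ''')
    kr = configparser.ConfigParser()
    kr.read_string(keyring)
    print(kr['client.openstack']['key'])
    print(kr['client.openstack']['caps mon'])   # '"profile rbd"' (quotes kept)
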
Feb 25 06:51:05 np0005629333 systemd[1]: libpod-59961035221b73c83fab5f2a1be57f3912b20aaa90fc39645c78cc132f62d9bb.scope: Deactivated successfully.
Feb 25 06:51:05 np0005629333 podman[96191]: 2026-02-25 11:51:05.485233348 +0000 UTC m=+0.625071747 container died 59961035221b73c83fab5f2a1be57f3912b20aaa90fc39645c78cc132f62d9bb (image=quay.io/ceph/ceph:v20, name=adoring_sammet, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 06:51:05 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a0fdc0ef02da7ee1372f89a31506949f9db7eafa67d2f4fa7841826e5c1e582c-merged.mount: Deactivated successfully.
Feb 25 06:51:05 np0005629333 podman[96191]: 2026-02-25 11:51:05.529265082 +0000 UTC m=+0.669103471 container remove 59961035221b73c83fab5f2a1be57f3912b20aaa90fc39645c78cc132f62d9bb (image=quay.io/ceph/ceph:v20, name=adoring_sammet, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:51:05 np0005629333 systemd[1]: libpod-conmon-59961035221b73c83fab5f2a1be57f3912b20aaa90fc39645c78cc132f62d9bb.scope: Deactivated successfully.
Feb 25 06:51:05 np0005629333 podman[96392]: 2026-02-25 11:51:05.847906008 +0000 UTC m=+0.052687975 container create e217afcf714637a39c863560bedb97bfd5679423c83ab6c9df90195fc3ee4574 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_napier, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:51:05 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:05 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:05 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.fpqgwn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Feb 25 06:51:05 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.fpqgwn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Feb 25 06:51:05 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:05 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/3290433639' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Feb 25 06:51:05 np0005629333 systemd[1]: Started libpod-conmon-e217afcf714637a39c863560bedb97bfd5679423c83ab6c9df90195fc3ee4574.scope.
Feb 25 06:51:05 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:05 np0005629333 podman[96392]: 2026-02-25 11:51:05.825906171 +0000 UTC m=+0.030688188 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:51:05 np0005629333 podman[96392]: 2026-02-25 11:51:05.92635855 +0000 UTC m=+0.131140507 container init e217afcf714637a39c863560bedb97bfd5679423c83ab6c9df90195fc3ee4574 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:51:05 np0005629333 podman[96392]: 2026-02-25 11:51:05.934179564 +0000 UTC m=+0.138961521 container start e217afcf714637a39c863560bedb97bfd5679423c83ab6c9df90195fc3ee4574 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_napier, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 06:51:05 np0005629333 nostalgic_napier[96408]: 167 167
Feb 25 06:51:05 np0005629333 systemd[1]: libpod-e217afcf714637a39c863560bedb97bfd5679423c83ab6c9df90195fc3ee4574.scope: Deactivated successfully.
Feb 25 06:51:05 np0005629333 podman[96392]: 2026-02-25 11:51:05.941030279 +0000 UTC m=+0.145812246 container attach e217afcf714637a39c863560bedb97bfd5679423c83ab6c9df90195fc3ee4574 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_napier, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:51:05 np0005629333 podman[96392]: 2026-02-25 11:51:05.94207068 +0000 UTC m=+0.146852637 container died e217afcf714637a39c863560bedb97bfd5679423c83ab6c9df90195fc3ee4574 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_napier, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:51:05 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7956d6d076a2180452043c21711bc8e6e01086e15f0502f6b2d1e352785677e1-merged.mount: Deactivated successfully.
Feb 25 06:51:05 np0005629333 podman[96392]: 2026-02-25 11:51:05.995266478 +0000 UTC m=+0.200048455 container remove e217afcf714637a39c863560bedb97bfd5679423c83ab6c9df90195fc3ee4574 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_napier, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:51:06 np0005629333 systemd[1]: libpod-conmon-e217afcf714637a39c863560bedb97bfd5679423c83ab6c9df90195fc3ee4574.scope: Deactivated successfully.
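
Every podman event in this run pulls the ceph image by sha256 digest, and each pull resolves to the same local image id (524f3da27646...). A sketch that confirms the digest-to-id mapping on the host with the stock podman CLI:

    import subprocess

    # Resolve the digest-pinned reference to its local image id; this should
    # print the 524f3da27646... id seen in the 'image pull' events above.
    ref = ('quay.io/ceph/ceph@sha256:'
           '1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86')
    out = subprocess.run(['podman', 'image', 'inspect', '--format', '{{.Id}}', ref],
                         capture_output=True, text=True, check=True)
    print(out.stdout.strip())
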
Feb 25 06:51:06 np0005629333 systemd[1]: Reloading.
Feb 25 06:51:06 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:51:06 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:51:06 np0005629333 systemd[1]: Reloading.
Feb 25 06:51:06 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:51:06 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:51:06 np0005629333 systemd[1]: Starting Ceph rgw.rgw.compute-0.fpqgwn for 8ac33163-6221-5d58-9a39-8b6933fe7762...
Feb 25 06:51:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:51:06 np0005629333 podman[96713]: 2026-02-25 11:51:06.826282393 +0000 UTC m=+0.041400177 container create 5cc6a4afc59e3507b54533308c453107cb573fcb561f8cedfbfa1f27f88f4e40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-rgw-rgw-compute-0-fpqgwn, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:51:06 np0005629333 ceph-mon[76335]: Deploying daemon rgw.rgw.compute-0.fpqgwn on compute-0
Feb 25 06:51:06 np0005629333 ansible-async_wrapper.py[96707]: Invoked with j9678683023 30 /home/zuul/.ansible/tmp/ansible-tmp-1772020266.4318433-37539-156802207216116/AnsiballZ_command.py _
Feb 25 06:51:06 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28e87ea8cded85dbc0f174d5edf97b47a2177934240fc2fc839fd76aed75611c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:06 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28e87ea8cded85dbc0f174d5edf97b47a2177934240fc2fc839fd76aed75611c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:06 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28e87ea8cded85dbc0f174d5edf97b47a2177934240fc2fc839fd76aed75611c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:06 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28e87ea8cded85dbc0f174d5edf97b47a2177934240fc2fc839fd76aed75611c/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.fpqgwn supports timestamps until 2038 (0x7fffffff)
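
The four xfs notices are the kernel flagging that these bind mounts sit on an XFS filesystem without the bigtime feature, so inode timestamps saturate at 0x7fffffff seconds past the epoch. What that limit works out to:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit epoch value XFS can store
    # without the bigtime feature.
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc).isoformat())
    # -> 2038-01-19T03:14:07+00:00
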
Feb 25 06:51:06 np0005629333 ansible-async_wrapper.py[96734]: Starting module and watcher
Feb 25 06:51:06 np0005629333 ansible-async_wrapper.py[96734]: Start watching 96735 (30)
Feb 25 06:51:06 np0005629333 ansible-async_wrapper.py[96735]: Start module (96735)
Feb 25 06:51:06 np0005629333 ansible-async_wrapper.py[96707]: Return async_wrapper task started.
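
These ansible-async_wrapper lines are Ansible's fire-and-poll pattern: the wrapper forks a watcher (96734) and the module itself (96735), returns immediately, and the controller later polls async_status with jid j9678683023.96707 (the async_status calls appear further down). A rough sketch of the controller-side poll, assuming Ansible's convention of a JSON results file named after the jid under the remote user's ~/.ansible_async (a hypothetical re-implementation, not Ansible's actual code):

    import json, time
    from pathlib import Path

    # Hypothetical local equivalent of the async_status poll loop.
    results = Path.home() / '.ansible_async' / 'j9678683023.96707'
    for _ in range(30):                    # the task was started with a 30s budget
        if results.exists():
            data = json.loads(results.read_text())
            if data.get('finished'):
                print('module finished, rc =', data.get('rc'))
                break
        time.sleep(1)
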
Feb 25 06:51:06 np0005629333 podman[96713]: 2026-02-25 11:51:06.899396346 +0000 UTC m=+0.114514170 container init 5cc6a4afc59e3507b54533308c453107cb573fcb561f8cedfbfa1f27f88f4e40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-rgw-rgw-compute-0-fpqgwn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 06:51:06 np0005629333 podman[96713]: 2026-02-25 11:51:06.810848002 +0000 UTC m=+0.025965796 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:51:06 np0005629333 podman[96713]: 2026-02-25 11:51:06.910316652 +0000 UTC m=+0.125434436 container start 5cc6a4afc59e3507b54533308c453107cb573fcb561f8cedfbfa1f27f88f4e40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-rgw-rgw-compute-0-fpqgwn, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:51:06 np0005629333 bash[96713]: 5cc6a4afc59e3507b54533308c453107cb573fcb561f8cedfbfa1f27f88f4e40
Feb 25 06:51:06 np0005629333 systemd[1]: Started Ceph rgw.rgw.compute-0.fpqgwn for 8ac33163-6221-5d58-9a39-8b6933fe7762.
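
cephadm wraps each daemon in a templated systemd unit named ceph-<fsid>@<daemon-id>; the "Started Ceph rgw..." line above corresponds to one such unit. A sketch querying its state (the unit name is inferred from that convention, not copied from the log):

    import subprocess

    unit = ('ceph-8ac33163-6221-5d58-9a39-8b6933fe7762'
            '@rgw.rgw.compute-0.fpqgwn.service')
    state = subprocess.run(['systemctl', 'is-active', unit],
                           capture_output=True, text=True).stdout.strip()
    print(unit, '->', state)   # expected: active
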
Feb 25 06:51:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:51:06 np0005629333 radosgw[96738]: deferred set uid:gid to 167:167 (ceph:ceph)
Feb 25 06:51:06 np0005629333 radosgw[96738]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process radosgw, pid 2
Feb 25 06:51:06 np0005629333 radosgw[96738]: framework: beast
Feb 25 06:51:06 np0005629333 radosgw[96738]: framework conf key: endpoint, val: 192.168.122.100:8082
Feb 25 06:51:06 np0005629333 radosgw[96738]: init_numa not setting numa affinity
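
The radosgw startup banner shows the rgw_frontends value set via the mon a second earlier resolved to a beast frontend on 192.168.122.100:8082. A quick liveness probe of that endpoint (assuming it is reachable from where the probe runs; an anonymous request typically gets an XML listing or a 403, and either response shows the frontend is serving):

    import urllib.error
    import urllib.request

    try:
        with urllib.request.urlopen('http://192.168.122.100:8082/', timeout=5) as r:
            print(r.status, r.read(120))
    except urllib.error.HTTPError as e:
        print('HTTP', e.code)   # a 403 still proves beast is answering
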
Feb 25 06:51:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:51:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb 25 06:51:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:07 np0005629333 ceph-mgr[76641]: [progress INFO root] complete: finished ev 40aa67a7-bede-40a4-bf47-cdb881b0cf9b (Updating rgw.rgw deployment (+1 -> 1))
Feb 25 06:51:07 np0005629333 ceph-mgr[76641]: [progress INFO root] Completed event 40aa67a7-bede-40a4-bf47-cdb881b0cf9b (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Feb 25 06:51:07 np0005629333 ceph-mgr[76641]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Feb 25 06:51:07 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Feb 25 06:51:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb 25 06:51:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Feb 25 06:51:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:07 np0005629333 ceph-mgr[76641]: [progress INFO root] update: starting ev 0efc746a-b112-48e7-a7f5-b92dd618ec9a (Updating mds.cephfs deployment (+1 -> 1))
Feb 25 06:51:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.idxobw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Feb 25 06:51:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.idxobw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 25 06:51:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.idxobw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Feb 25 06:51:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:51:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:51:07 np0005629333 ceph-mgr[76641]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.idxobw on compute-0
Feb 25 06:51:07 np0005629333 ceph-mgr[76641]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.idxobw on compute-0
Feb 25 06:51:07 np0005629333 python3[96736]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:51:07 np0005629333 podman[96768]: 2026-02-25 11:51:07.100047298 +0000 UTC m=+0.036252753 container create 4e733bc29020149080d4f247261b19911507e7c4f88f1eb1a0efe88ccd3c9468 (image=quay.io/ceph/ceph:v20, name=ecstatic_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 06:51:07 np0005629333 systemd[1]: Started libpod-conmon-4e733bc29020149080d4f247261b19911507e7c4f88f1eb1a0efe88ccd3c9468.scope.
Feb 25 06:51:07 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c62ec6adf3a1e9e3bb0cf3efeea179293a1835538989df9e38a9106799de7b84/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c62ec6adf3a1e9e3bb0cf3efeea179293a1835538989df9e38a9106799de7b84/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:07 np0005629333 podman[96768]: 2026-02-25 11:51:07.084839124 +0000 UTC m=+0.021044499 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:51:07 np0005629333 podman[96768]: 2026-02-25 11:51:07.184064607 +0000 UTC m=+0.120270052 container init 4e733bc29020149080d4f247261b19911507e7c4f88f1eb1a0efe88ccd3c9468 (image=quay.io/ceph/ceph:v20, name=ecstatic_dirac, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:51:07 np0005629333 podman[96768]: 2026-02-25 11:51:07.189230391 +0000 UTC m=+0.125435736 container start 4e733bc29020149080d4f247261b19911507e7c4f88f1eb1a0efe88ccd3c9468 (image=quay.io/ceph/ceph:v20, name=ecstatic_dirac, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:51:07 np0005629333 podman[96768]: 2026-02-25 11:51:07.192407276 +0000 UTC m=+0.128612661 container attach 4e733bc29020149080d4f247261b19911507e7c4f88f1eb1a0efe88ccd3c9468 (image=quay.io/ceph/ceph:v20, name=ecstatic_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 06:51:07 np0005629333 podman[96897]: 2026-02-25 11:51:07.513143534 +0000 UTC m=+0.049359365 container create ebeec3f730018230f2b60ecfb750bd3da84f5c89edb4e7c303b19a27e02e7330 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_kare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 06:51:07 np0005629333 systemd[1]: Started libpod-conmon-ebeec3f730018230f2b60ecfb750bd3da84f5c89edb4e7c303b19a27e02e7330.scope.
Feb 25 06:51:07 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:07 np0005629333 podman[96897]: 2026-02-25 11:51:07.578938609 +0000 UTC m=+0.115154430 container init ebeec3f730018230f2b60ecfb750bd3da84f5c89edb4e7c303b19a27e02e7330 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_kare, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 06:51:07 np0005629333 podman[96897]: 2026-02-25 11:51:07.586141124 +0000 UTC m=+0.122356925 container start ebeec3f730018230f2b60ecfb750bd3da84f5c89edb4e7c303b19a27e02e7330 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_kare, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 06:51:07 np0005629333 inspiring_kare[96914]: 167 167
Feb 25 06:51:07 np0005629333 podman[96897]: 2026-02-25 11:51:07.589871155 +0000 UTC m=+0.126086966 container attach ebeec3f730018230f2b60ecfb750bd3da84f5c89edb4e7c303b19a27e02e7330 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_kare, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 06:51:07 np0005629333 podman[96897]: 2026-02-25 11:51:07.49291585 +0000 UTC m=+0.029131661 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:51:07 np0005629333 podman[96897]: 2026-02-25 11:51:07.590544136 +0000 UTC m=+0.126759967 container died ebeec3f730018230f2b60ecfb750bd3da84f5c89edb4e7c303b19a27e02e7330 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_kare, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:51:07 np0005629333 systemd[1]: libpod-ebeec3f730018230f2b60ecfb750bd3da84f5c89edb4e7c303b19a27e02e7330.scope: Deactivated successfully.
Feb 25 06:51:07 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.14251 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 25 06:51:07 np0005629333 ecstatic_dirac[96830]: 
Feb 25 06:51:07 np0005629333 ecstatic_dirac[96830]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb 25 06:51:07 np0005629333 systemd[1]: var-lib-containers-storage-overlay-9c634b25e04d92511a3b72753c45395d48ab23e608f4ce2e93a665b8b763be44-merged.mount: Deactivated successfully.
Feb 25 06:51:07 np0005629333 systemd[1]: libpod-4e733bc29020149080d4f247261b19911507e7c4f88f1eb1a0efe88ccd3c9468.scope: Deactivated successfully.
Feb 25 06:51:07 np0005629333 podman[96768]: 2026-02-25 11:51:07.631445577 +0000 UTC m=+0.567650962 container died 4e733bc29020149080d4f247261b19911507e7c4f88f1eb1a0efe88ccd3c9468 (image=quay.io/ceph/ceph:v20, name=ecstatic_dirac, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:51:07 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c62ec6adf3a1e9e3bb0cf3efeea179293a1835538989df9e38a9106799de7b84-merged.mount: Deactivated successfully.
Feb 25 06:51:07 np0005629333 podman[96768]: 2026-02-25 11:51:07.67307822 +0000 UTC m=+0.609283565 container remove 4e733bc29020149080d4f247261b19911507e7c4f88f1eb1a0efe88ccd3c9468 (image=quay.io/ceph/ceph:v20, name=ecstatic_dirac, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:51:07 np0005629333 systemd[1]: libpod-conmon-4e733bc29020149080d4f247261b19911507e7c4f88f1eb1a0efe88ccd3c9468.scope: Deactivated successfully.
Feb 25 06:51:07 np0005629333 podman[96897]: 2026-02-25 11:51:07.691789269 +0000 UTC m=+0.228005100 container remove ebeec3f730018230f2b60ecfb750bd3da84f5c89edb4e7c303b19a27e02e7330 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_kare, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 06:51:07 np0005629333 ansible-async_wrapper.py[96735]: Module complete (96735)
Feb 25 06:51:07 np0005629333 systemd[1]: libpod-conmon-ebeec3f730018230f2b60ecfb750bd3da84f5c89edb4e7c303b19a27e02e7330.scope: Deactivated successfully.
Feb 25 06:51:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:51:07 np0005629333 systemd[1]: Reloading.
Feb 25 06:51:07 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:51:07 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:51:07 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:07 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:07 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:07 np0005629333 ceph-mon[76335]: Saving service rgw.rgw spec with placement compute-0
Feb 25 06:51:07 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:07 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:07 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.idxobw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 25 06:51:07 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.idxobw", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Feb 25 06:51:07 np0005629333 ceph-mon[76335]: Deploying daemon mds.cephfs.compute-0.idxobw on compute-0
Feb 25 06:51:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Feb 25 06:51:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Feb 25 06:51:08 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Feb 25 06:51:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Feb 25 06:51:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/212144099' entity='client.rgw.rgw.compute-0.fpqgwn' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Feb 25 06:51:08 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 33 pg[8.0( empty local-lis/les=0/0 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [1] r=0 lpr=33 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:08 np0005629333 systemd[1]: Reloading.
Feb 25 06:51:08 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:51:08 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:51:08 np0005629333 systemd[1]: Starting Ceph mds.cephfs.compute-0.idxobw for 8ac33163-6221-5d58-9a39-8b6933fe7762...
Feb 25 06:51:08 np0005629333 python3[97086]: ansible-ansible.legacy.async_status Invoked with jid=j9678683023.96707 mode=status _async_dir=/root/.ansible_async
Feb 25 06:51:08 np0005629333 podman[97137]: 2026-02-25 11:51:08.522353841 +0000 UTC m=+0.060978312 container create fb4195c949c95df7836f6cb0aa354165f1a29f801a66972319b880df960c55bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mds-cephfs-compute-0-idxobw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:51:08 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a6fcf5c04a4ee7d0795601c237491ae268188a449d9128fe1394983e68374f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:08 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a6fcf5c04a4ee7d0795601c237491ae268188a449d9128fe1394983e68374f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:08 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a6fcf5c04a4ee7d0795601c237491ae268188a449d9128fe1394983e68374f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:08 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9a6fcf5c04a4ee7d0795601c237491ae268188a449d9128fe1394983e68374f/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.idxobw supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:08 np0005629333 podman[97137]: 2026-02-25 11:51:08.595428853 +0000 UTC m=+0.134053324 container init fb4195c949c95df7836f6cb0aa354165f1a29f801a66972319b880df960c55bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mds-cephfs-compute-0-idxobw, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:51:08 np0005629333 podman[97137]: 2026-02-25 11:51:08.495871881 +0000 UTC m=+0.034496392 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:51:08 np0005629333 podman[97137]: 2026-02-25 11:51:08.600577357 +0000 UTC m=+0.139201828 container start fb4195c949c95df7836f6cb0aa354165f1a29f801a66972319b880df960c55bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mds-cephfs-compute-0-idxobw, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3)
Feb 25 06:51:08 np0005629333 bash[97137]: fb4195c949c95df7836f6cb0aa354165f1a29f801a66972319b880df960c55bf
Feb 25 06:51:08 np0005629333 systemd[1]: Started Ceph mds.cephfs.compute-0.idxobw for 8ac33163-6221-5d58-9a39-8b6933fe7762.
Feb 25 06:51:08 np0005629333 ceph-mds[97202]: set uid:gid to 167:167 (ceph:ceph)
Feb 25 06:51:08 np0005629333 ceph-mds[97202]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Feb 25 06:51:08 np0005629333 ceph-mds[97202]: main not setting numa affinity
Feb 25 06:51:08 np0005629333 ceph-mds[97202]: pidfile_write: ignore empty --pid-file
Feb 25 06:51:08 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mds-cephfs-compute-0-idxobw[97198]: starting mds.cephfs.compute-0.idxobw at 
Feb 25 06:51:08 np0005629333 ceph-mds[97202]: mds.cephfs.compute-0.idxobw Updating MDS map to version 2 from mon.0
Feb 25 06:51:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:51:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:51:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb 25 06:51:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:08 np0005629333 ceph-mgr[76641]: [progress INFO root] complete: finished ev 0efc746a-b112-48e7-a7f5-b92dd618ec9a (Updating mds.cephfs deployment (+1 -> 1))
Feb 25 06:51:08 np0005629333 ceph-mgr[76641]: [progress INFO root] Completed event 0efc746a-b112-48e7-a7f5-b92dd618ec9a (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Feb 25 06:51:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Feb 25 06:51:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Feb 25 06:51:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:08 np0005629333 python3[97195]: ansible-ansible.legacy.async_status Invoked with jid=j9678683023.96707 mode=cleanup _async_dir=/root/.ansible_async
Feb 25 06:51:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v73: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:51:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Feb 25 06:51:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/212144099' entity='client.rgw.rgw.compute-0.fpqgwn' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Feb 25 06:51:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Feb 25 06:51:09 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Feb 25 06:51:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).mds e3 new map
Feb 25 06:51:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).mds e3 print_map
    e3
    btime 2026-02-25T11:51:09.010365+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name                   cephfs
    epoch                     2
    flags                     12 joinable allow_snaps allow_multimds_snaps
    created                   2026-02-25T11:50:57.927640+0000
    modified                  2026-02-25T11:50:57.927640+0000
    tableserver               0
    root                      0
    session_timeout           60
    session_autoclose         300
    max_file_size             1099511627776
    max_xattr_size            65536
    required_client_features  {}
    last_failure              0
    last_failure_osd_epoch    0
    compat                    compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds                   1
    in
    up                        {}
    failed
    damaged
    stopped
    data_pools                [7]
    metadata_pool             6
    inline_data               disabled
    balancer
    bal_rank_mask             -1
    standby_count_wanted      0
    qdb_cluster               leader: 0 members:

    Standby daemons:

    [mds.cephfs.compute-0.idxobw{-1:14253} state up:standby seq 1 addr [v2:192.168.122.100:6814/1908271133,v1:192.168.122.100:6815/1908271133] compat {c=[1],r=[1],i=[1fff]}]
Feb 25 06:51:09 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/212144099' entity='client.rgw.rgw.compute-0.fpqgwn' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Feb 25 06:51:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:09 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 34 pg[8.0( empty local-lis/les=33/34 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [1] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:09 np0005629333 ceph-mds[97202]: mds.cephfs.compute-0.idxobw Updating MDS map to version 3 from mon.0
Feb 25 06:51:09 np0005629333 ceph-mds[97202]: mds.cephfs.compute-0.idxobw Monitors have assigned me to become a standby
Feb 25 06:51:09 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1908271133,v1:192.168.122.100:6815/1908271133] up:boot
Feb 25 06:51:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/1908271133,v1:192.168.122.100:6815/1908271133] as mds.0
Feb 25 06:51:09 np0005629333 ceph-mon[76335]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.idxobw assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Feb 25 06:51:09 np0005629333 ceph-mon[76335]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Feb 25 06:51:09 np0005629333 ceph-mon[76335]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Feb 25 06:51:09 np0005629333 ceph-mon[76335]: log_channel(cluster) log [INF] : Cluster is now healthy
Feb 25 06:51:09 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Feb 25 06:51:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.idxobw"} v 0)
Feb 25 06:51:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.idxobw"} : dispatch
Feb 25 06:51:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).mds e3 all = 0
Feb 25 06:51:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).mds e4 new map
Feb 25 06:51:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).mds e4 print_map
    e4
    btime 2026-02-25T11:51:09.030876+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name                   cephfs
    epoch                     4
    flags                     12 joinable allow_snaps allow_multimds_snaps
    created                   2026-02-25T11:50:57.927640+0000
    modified                  2026-02-25T11:51:09.030859+0000
    tableserver               0
    root                      0
    session_timeout           60
    session_autoclose         300
    max_file_size             1099511627776
    max_xattr_size            65536
    required_client_features  {}
    last_failure              0
    last_failure_osd_epoch    0
    compat                    compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds                   1
    in                        0
    up                        {0=14253}
    failed
    damaged
    stopped
    data_pools                [7]
    metadata_pool             6
    inline_data               disabled
    balancer
    bal_rank_mask             -1
    standby_count_wanted      0
    qdb_cluster               leader: 0 members:
    [mds.cephfs.compute-0.idxobw{0:14253} state up:creating seq 1 addr [v2:192.168.122.100:6814/1908271133,v1:192.168.122.100:6815/1908271133] compat {c=[1],r=[1],i=[1fff]}]
Feb 25 06:51:09 np0005629333 ceph-mds[97202]: mds.cephfs.compute-0.idxobw Updating MDS map to version 4 from mon.0
Feb 25 06:51:09 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.idxobw=up:creating}
Feb 25 06:51:09 np0005629333 ceph-mds[97202]: mds.0.4 handle_mds_map I am now mds.0.4
Feb 25 06:51:09 np0005629333 ceph-mds[97202]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Feb 25 06:51:09 np0005629333 ceph-mds[97202]: mds.0.cache creating system inode with ino:0x1
Feb 25 06:51:09 np0005629333 ceph-mds[97202]: mds.0.cache creating system inode with ino:0x100
Feb 25 06:51:09 np0005629333 ceph-mds[97202]: mds.0.cache creating system inode with ino:0x600
Feb 25 06:51:09 np0005629333 ceph-mds[97202]: mds.0.cache creating system inode with ino:0x601
Feb 25 06:51:09 np0005629333 ceph-mds[97202]: mds.0.cache creating system inode with ino:0x602
Feb 25 06:51:09 np0005629333 ceph-mds[97202]: mds.0.cache creating system inode with ino:0x603
Feb 25 06:51:09 np0005629333 ceph-mds[97202]: mds.0.cache creating system inode with ino:0x604
Feb 25 06:51:09 np0005629333 ceph-mds[97202]: mds.0.cache creating system inode with ino:0x605
Feb 25 06:51:09 np0005629333 ceph-mds[97202]: mds.0.cache creating system inode with ino:0x606
Feb 25 06:51:09 np0005629333 ceph-mds[97202]: mds.0.cache creating system inode with ino:0x607
Feb 25 06:51:09 np0005629333 ceph-mds[97202]: mds.0.cache creating system inode with ino:0x608
Feb 25 06:51:09 np0005629333 ceph-mds[97202]: mds.0.cache creating system inode with ino:0x609
Feb 25 06:51:09 np0005629333 ceph-mds[97202]: mds.0.4 creating_done
Feb 25 06:51:09 np0005629333 ceph-mon[76335]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.idxobw is now active in filesystem cephfs as rank 0
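
Between fsmap e3 and e4 the new daemon moves up:standby to up:creating, and once creating_done lands the mon logs it active as rank 0. A sketch polling for that end state via the ceph CLI (the JSON field names follow the `fs status` output of recent Ceph releases and should be treated as an assumption):

    import json, subprocess

    out = subprocess.run(
        ['ceph', '-c', '/etc/ceph/ceph.conf',
         '-k', '/etc/ceph/ceph.client.admin.keyring',
         'fs', 'status', 'cephfs', '--format', 'json'],
        capture_output=True, text=True, check=True).stdout
    for mds in json.loads(out)['mdsmap']:
        print(mds.get('rank'), mds.get('name'), mds.get('state'))
    # expected once healthy: 0 cephfs.compute-0.idxobw active
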
Feb 25 06:51:09 np0005629333 podman[97921]: 2026-02-25 11:51:09.281394708 +0000 UTC m=+0.078057102 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:51:09 np0005629333 podman[97921]: 2026-02-25 11:51:09.371178559 +0000 UTC m=+0.167840873 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Feb 25 06:51:09 np0005629333 python3[97949]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:51:09 np0005629333 podman[97969]: 2026-02-25 11:51:09.433940683 +0000 UTC m=+0.038509581 container create 44119d9536485c3272f329230c106d1356ea34025119f236ece5c912f74c085f (image=quay.io/ceph/ceph:v20, name=dazzling_bardeen, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:51:09 np0005629333 systemd[1]: Started libpod-conmon-44119d9536485c3272f329230c106d1356ea34025119f236ece5c912f74c085f.scope.
Feb 25 06:51:09 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:09 np0005629333 podman[97969]: 2026-02-25 11:51:09.415015748 +0000 UTC m=+0.019584616 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:51:09 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839ea3f6cc07db0283c4942c9004a632a8ae966e849bdac12a7c65c18e54443c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:09 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/839ea3f6cc07db0283c4942c9004a632a8ae966e849bdac12a7c65c18e54443c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
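The two kernel lines above mean the XFS filesystem backing the container overlay was created without the bigtime feature, so its timestamps cap at 2038; the message repeats for every bind mount into a container. A quick check, assuming xfsprogs is installed and /var/lib/containers is the mount in question (path inferred from the overlay storage root above):

    import subprocess

    # xfs_info reports the feature as bigtime=0/1 in its meta-data section;
    # bigtime=0 is what triggers the "timestamps until 2038" kernel message.
    info = subprocess.run(
        ["xfs_info", "/var/lib/containers"],  # mount point assumed
        capture_output=True, text=True, check=True,
    ).stdout
    print("bigtime enabled" if "bigtime=1" in info
          else "timestamps capped at 2038")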
Feb 25 06:51:09 np0005629333 podman[97969]: 2026-02-25 11:51:09.568908174 +0000 UTC m=+0.173477042 container init 44119d9536485c3272f329230c106d1356ea34025119f236ece5c912f74c085f (image=quay.io/ceph/ceph:v20, name=dazzling_bardeen, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 06:51:09 np0005629333 podman[97969]: 2026-02-25 11:51:09.57783526 +0000 UTC m=+0.182404078 container start 44119d9536485c3272f329230c106d1356ea34025119f236ece5c912f74c085f (image=quay.io/ceph/ceph:v20, name=dazzling_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 06:51:09 np0005629333 podman[97969]: 2026-02-25 11:51:09.583291073 +0000 UTC m=+0.187859911 container attach 44119d9536485c3272f329230c106d1356ea34025119f236ece5c912f74c085f (image=quay.io/ceph/ceph:v20, name=dazzling_bardeen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Feb 25 06:51:10 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 25 06:51:10 np0005629333 dazzling_bardeen[98006]: 
Feb 25 06:51:10 np0005629333 dazzling_bardeen[98006]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Feb 25 06:51:10 np0005629333 podman[97969]: 2026-02-25 11:51:10.044065933 +0000 UTC m=+0.648634741 container died 44119d9536485c3272f329230c106d1356ea34025119f236ece5c912f74c085f (image=quay.io/ceph/ceph:v20, name=dazzling_bardeen, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 06:51:10 np0005629333 systemd[1]: libpod-44119d9536485c3272f329230c106d1356ea34025119f236ece5c912f74c085f.scope: Deactivated successfully.
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3213503567' entity='client.rgw.rgw.compute-0.fpqgwn' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/212144099' entity='client.rgw.rgw.compute-0.fpqgwn' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: daemon mds.cephfs.compute-0.idxobw assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: Cluster is now healthy
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: daemon mds.cephfs.compute-0.idxobw is now active in filesystem cephfs as rank 0
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).mds e5 new map
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).mds e5 print_map
e5
btime 2026-02-25T11:51:10.054218+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name  cephfs
epoch  5
flags  12 joinable allow_snaps allow_multimds_snaps
created  2026-02-25T11:50:57.927640+0000
modified  2026-02-25T11:51:10.054215+0000
tableserver  0
root  0
session_timeout  60
session_autoclose  300
max_file_size  1099511627776
max_xattr_size  65536
required_client_features  {}
last_failure  0
last_failure_osd_epoch  0
compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
max_mds  1
in  0
up  {0=14253}
failed
damaged
stopped
data_pools  [7]
metadata_pool  6
inline_data  disabled
balancer
bal_rank_mask  -1
standby_count_wanted  0
qdb_cluster  leader: 14253 members: 14253
[mds.cephfs.compute-0.idxobw{0:14253} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/1908271133,v1:192.168.122.100:6815/1908271133] compat {c=[1],r=[1],i=[1fff]}]
Feb 25 06:51:10 np0005629333 ceph-mds[97202]: mds.cephfs.compute-0.idxobw Updating MDS map to version 5 from mon.0
Feb 25 06:51:10 np0005629333 ceph-mds[97202]: mds.0.4 handle_mds_map I am now mds.0.4
Feb 25 06:51:10 np0005629333 ceph-mds[97202]: mds.0.4 handle_mds_map state change up:creating --> up:active
Feb 25 06:51:10 np0005629333 ceph-mds[97202]: mds.0.4 recovery_done -- successful recovery!
Feb 25 06:51:10 np0005629333 ceph-mds[97202]: mds.0.4 active_start
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1908271133,v1:192.168.122.100:6815/1908271133] up:active
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.idxobw=up:active}
Feb 25 06:51:10 np0005629333 systemd[1]: var-lib-containers-storage-overlay-839ea3f6cc07db0283c4942c9004a632a8ae966e849bdac12a7c65c18e54443c-merged.mount: Deactivated successfully.
Feb 25 06:51:10 np0005629333 podman[97969]: 2026-02-25 11:51:10.112324501 +0000 UTC m=+0.716893279 container remove 44119d9536485c3272f329230c106d1356ea34025119f236ece5c912f74c085f (image=quay.io/ceph/ceph:v20, name=dazzling_bardeen, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 06:51:10 np0005629333 systemd[1]: libpod-conmon-44119d9536485c3272f329230c106d1356ea34025119f236ece5c912f74c085f.scope: Deactivated successfully.
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:51:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:51:10 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 35 pg[9.0( empty local-lis/les=0/0 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v76: 9 pgs: 2 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
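Two PGs are still `unknown` here because pools 9 and 10 were just created and their primaries are only now peering (see the osd.1 state transition above). A sketch that waits for everything to settle, assuming `ceph status --format json` reports `pgmap.pgs_by_state` as in mainline Ceph:

    import json
    import subprocess
    import time

    def all_pgs_active_clean():
        """True once pgmap reports every PG as active+clean."""
        s = json.loads(subprocess.run(
            ["ceph", "status", "--format", "json"],
            capture_output=True, text=True, check=True).stdout)
        states = s["pgmap"].get("pgs_by_state", [])
        return bool(states) and all(
            x["state_name"] == "active+clean" for x in states)

    while not all_pgs_active_clean():
        time.sleep(2)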
Feb 25 06:51:10 np0005629333 podman[98265]: 2026-02-25 11:51:10.837327 +0000 UTC m=+0.059377374 container create 6163aa97bdfea642ff49be392c46377bba77f3a966e003e42eb64918b6f6f274 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 06:51:10 np0005629333 systemd[1]: Started libpod-conmon-6163aa97bdfea642ff49be392c46377bba77f3a966e003e42eb64918b6f6f274.scope.
Feb 25 06:51:10 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:10 np0005629333 podman[98265]: 2026-02-25 11:51:10.803371136 +0000 UTC m=+0.025421600 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:51:10 np0005629333 podman[98265]: 2026-02-25 11:51:10.912860006 +0000 UTC m=+0.134910420 container init 6163aa97bdfea642ff49be392c46377bba77f3a966e003e42eb64918b6f6f274 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_johnson, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 06:51:10 np0005629333 podman[98265]: 2026-02-25 11:51:10.91634194 +0000 UTC m=+0.138392324 container start 6163aa97bdfea642ff49be392c46377bba77f3a966e003e42eb64918b6f6f274 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_johnson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 06:51:10 np0005629333 python3[98274]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:51:10 np0005629333 distracted_johnson[98284]: 167 167
Feb 25 06:51:10 np0005629333 systemd[1]: libpod-6163aa97bdfea642ff49be392c46377bba77f3a966e003e42eb64918b6f6f274.scope: Deactivated successfully.
Feb 25 06:51:10 np0005629333 podman[98265]: 2026-02-25 11:51:10.926187814 +0000 UTC m=+0.148238268 container attach 6163aa97bdfea642ff49be392c46377bba77f3a966e003e42eb64918b6f6f274 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:51:10 np0005629333 podman[98265]: 2026-02-25 11:51:10.929508283 +0000 UTC m=+0.151558697 container died 6163aa97bdfea642ff49be392c46377bba77f3a966e003e42eb64918b6f6f274 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_johnson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 06:51:10 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e14ae4366ea08a409fa8e37263cdf0d71c8a3fc8dee5ed04c4b2649a7352a2cd-merged.mount: Deactivated successfully.
Feb 25 06:51:11 np0005629333 podman[98265]: 2026-02-25 11:51:11.001054809 +0000 UTC m=+0.223105213 container remove 6163aa97bdfea642ff49be392c46377bba77f3a966e003e42eb64918b6f6f274 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_johnson, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 06:51:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Feb 25 06:51:11 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3213503567' entity='client.rgw.rgw.compute-0.fpqgwn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Feb 25 06:51:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Feb 25 06:51:11 np0005629333 podman[98289]: 2026-02-25 11:51:10.950425777 +0000 UTC m=+0.017428791 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:51:11 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Feb 25 06:51:11 np0005629333 podman[98289]: 2026-02-25 11:51:11.052158585 +0000 UTC m=+0.119161619 container create 93ac5684973ee2337b5b419fbf696dbce77e410fff30202e20f38d5fdff89401 (image=quay.io/ceph/ceph:v20, name=nifty_euclid, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:51:11 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 36 pg[9.0( empty local-lis/les=35/36 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:11 np0005629333 systemd[1]: libpod-conmon-6163aa97bdfea642ff49be392c46377bba77f3a966e003e42eb64918b6f6f274.scope: Deactivated successfully.
Feb 25 06:51:11 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/3213503567' entity='client.rgw.rgw.compute-0.fpqgwn' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Feb 25 06:51:11 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:11 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:11 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:51:11 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:11 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 06:51:11 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/3213503567' entity='client.rgw.rgw.compute-0.fpqgwn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Feb 25 06:51:11 np0005629333 systemd[1]: Started libpod-conmon-93ac5684973ee2337b5b419fbf696dbce77e410fff30202e20f38d5fdff89401.scope.
Feb 25 06:51:11 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bab2a3d567475ce5f9dce76ed29cc7113220239311102161f73e5e4e668963d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bab2a3d567475ce5f9dce76ed29cc7113220239311102161f73e5e4e668963d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:11 np0005629333 podman[98289]: 2026-02-25 11:51:11.146266686 +0000 UTC m=+0.213269700 container init 93ac5684973ee2337b5b419fbf696dbce77e410fff30202e20f38d5fdff89401 (image=quay.io/ceph/ceph:v20, name=nifty_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:51:11 np0005629333 podman[98289]: 2026-02-25 11:51:11.15109396 +0000 UTC m=+0.218096994 container start 93ac5684973ee2337b5b419fbf696dbce77e410fff30202e20f38d5fdff89401 (image=quay.io/ceph/ceph:v20, name=nifty_euclid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 06:51:11 np0005629333 podman[98323]: 2026-02-25 11:51:11.162125469 +0000 UTC m=+0.056903890 container create 470b5f039e7631b3767fd7f16cffe6163f7c30ac20402dc6e8ec0601aaeda155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_cori, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 06:51:11 np0005629333 podman[98289]: 2026-02-25 11:51:11.192338891 +0000 UTC m=+0.259341915 container attach 93ac5684973ee2337b5b419fbf696dbce77e410fff30202e20f38d5fdff89401 (image=quay.io/ceph/ceph:v20, name=nifty_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 06:51:11 np0005629333 systemd[1]: Started libpod-conmon-470b5f039e7631b3767fd7f16cffe6163f7c30ac20402dc6e8ec0601aaeda155.scope.
Feb 25 06:51:11 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c70bd747f2ee7cdc02f8410f460437d71953db67e246e4559f2a9878ed38890/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c70bd747f2ee7cdc02f8410f460437d71953db67e246e4559f2a9878ed38890/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c70bd747f2ee7cdc02f8410f460437d71953db67e246e4559f2a9878ed38890/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c70bd747f2ee7cdc02f8410f460437d71953db67e246e4559f2a9878ed38890/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c70bd747f2ee7cdc02f8410f460437d71953db67e246e4559f2a9878ed38890/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:11 np0005629333 podman[98323]: 2026-02-25 11:51:11.128918358 +0000 UTC m=+0.023697099 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:51:11 np0005629333 podman[98323]: 2026-02-25 11:51:11.230027547 +0000 UTC m=+0.124805968 container init 470b5f039e7631b3767fd7f16cffe6163f7c30ac20402dc6e8ec0601aaeda155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_cori, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:51:11 np0005629333 podman[98323]: 2026-02-25 11:51:11.236047077 +0000 UTC m=+0.130825498 container start 470b5f039e7631b3767fd7f16cffe6163f7c30ac20402dc6e8ec0601aaeda155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_cori, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 06:51:11 np0005629333 podman[98323]: 2026-02-25 11:51:11.241248292 +0000 UTC m=+0.136026693 container attach 470b5f039e7631b3767fd7f16cffe6163f7c30ac20402dc6e8ec0601aaeda155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:51:11 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 25 06:51:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.fpqgwn", "name": "rgw_frontends"} v 0)
Feb 25 06:51:11 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.fpqgwn", "name": "rgw_frontends"} : dispatch
Feb 25 06:51:11 np0005629333 nifty_euclid[98325]: 
Feb 25 06:51:11 np0005629333 nifty_euclid[98325]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_exit_timeout_secs": 120, "rgw_frontend_port": 8082}}]
Feb 25 06:51:11 np0005629333 ceph-mgr[76641]: [progress INFO root] Writing back 5 completed events
Feb 25 06:51:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 25 06:51:11 np0005629333 podman[98289]: 2026-02-25 11:51:11.571729551 +0000 UTC m=+0.638732545 container died 93ac5684973ee2337b5b419fbf696dbce77e410fff30202e20f38d5fdff89401 (image=quay.io/ceph/ceph:v20, name=nifty_euclid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 06:51:11 np0005629333 systemd[1]: libpod-93ac5684973ee2337b5b419fbf696dbce77e410fff30202e20f38d5fdff89401.scope: Deactivated successfully.
Feb 25 06:51:11 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:11 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1bab2a3d567475ce5f9dce76ed29cc7113220239311102161f73e5e4e668963d-merged.mount: Deactivated successfully.
Feb 25 06:51:11 np0005629333 podman[98289]: 2026-02-25 11:51:11.633447524 +0000 UTC m=+0.700450518 container remove 93ac5684973ee2337b5b419fbf696dbce77e410fff30202e20f38d5fdff89401 (image=quay.io/ceph/ceph:v20, name=nifty_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 25 06:51:11 np0005629333 dreamy_cori[98344]: --> passed data devices: 0 physical, 3 LVM
Feb 25 06:51:11 np0005629333 dreamy_cori[98344]: --> All data devices are unavailable
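`dreamy_cori` is a ceph-volume pass over the three LVs named in the drive group spec above; "All data devices are unavailable" is expected on this re-run, since those LVs already back the running OSDs and there is nothing new to prepare. To see why cephadm rejects a given device, something like the following could work, assuming `ceph orch device ls --format json` carries per-device `available` and `rejected_reasons` fields as in mainline cephadm:

    import json
    import subprocess

    # Field names (available, rejected_reasons) assumed from mainline cephadm.
    hosts = json.loads(subprocess.run(
        ["ceph", "orch", "device", "ls", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)
    for host in hosts:
        for dev in host.get("devices", []):
            if not dev.get("available", False):
                print(host.get("addr"), dev.get("path"),
                      dev.get("rejected_reasons"))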
Feb 25 06:51:11 np0005629333 systemd[1]: libpod-conmon-93ac5684973ee2337b5b419fbf696dbce77e410fff30202e20f38d5fdff89401.scope: Deactivated successfully.
Feb 25 06:51:11 np0005629333 systemd[1]: libpod-470b5f039e7631b3767fd7f16cffe6163f7c30ac20402dc6e8ec0601aaeda155.scope: Deactivated successfully.
Feb 25 06:51:11 np0005629333 conmon[98344]: conmon 470b5f039e7631b3767f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-470b5f039e7631b3767fd7f16cffe6163f7c30ac20402dc6e8ec0601aaeda155.scope/container/memory.events
Feb 25 06:51:11 np0005629333 podman[98323]: 2026-02-25 11:51:11.677725296 +0000 UTC m=+0.572503707 container died 470b5f039e7631b3767fd7f16cffe6163f7c30ac20402dc6e8ec0601aaeda155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:51:11 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6c70bd747f2ee7cdc02f8410f460437d71953db67e246e4559f2a9878ed38890-merged.mount: Deactivated successfully.
Feb 25 06:51:11 np0005629333 podman[98323]: 2026-02-25 11:51:11.73077647 +0000 UTC m=+0.625554911 container remove 470b5f039e7631b3767fd7f16cffe6163f7c30ac20402dc6e8ec0601aaeda155 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 06:51:11 np0005629333 systemd[1]: libpod-conmon-470b5f039e7631b3767fd7f16cffe6163f7c30ac20402dc6e8ec0601aaeda155.scope: Deactivated successfully.
Feb 25 06:51:11 np0005629333 ansible-async_wrapper.py[96734]: Done in kid B.
Feb 25 06:51:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Feb 25 06:51:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Feb 25 06:51:12 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Feb 25 06:51:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Feb 25 06:51:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3213503567' entity='client.rgw.rgw.compute-0.fpqgwn' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Feb 25 06:51:12 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 37 pg[10.0( empty local-lis/les=0/0 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:12 np0005629333 podman[98471]: 2026-02-25 11:51:12.155165944 +0000 UTC m=+0.056219200 container create 893bb53fddd04741fb74eaa89d82f347de23c31f43c8f33c76707c72e3cf4a3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_murdock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 06:51:12 np0005629333 systemd[1]: Started libpod-conmon-893bb53fddd04741fb74eaa89d82f347de23c31f43c8f33c76707c72e3cf4a3a.scope.
Feb 25 06:51:12 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:12 np0005629333 podman[98471]: 2026-02-25 11:51:12.128671082 +0000 UTC m=+0.029724348 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:51:12 np0005629333 podman[98471]: 2026-02-25 11:51:12.243628945 +0000 UTC m=+0.144682261 container init 893bb53fddd04741fb74eaa89d82f347de23c31f43c8f33c76707c72e3cf4a3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_murdock, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 06:51:12 np0005629333 podman[98471]: 2026-02-25 11:51:12.24948116 +0000 UTC m=+0.150534406 container start 893bb53fddd04741fb74eaa89d82f347de23c31f43c8f33c76707c72e3cf4a3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 06:51:12 np0005629333 hungry_murdock[98487]: 167 167
Feb 25 06:51:12 np0005629333 systemd[1]: libpod-893bb53fddd04741fb74eaa89d82f347de23c31f43c8f33c76707c72e3cf4a3a.scope: Deactivated successfully.
Feb 25 06:51:12 np0005629333 conmon[98487]: conmon 893bb53fddd04741fb74 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-893bb53fddd04741fb74eaa89d82f347de23c31f43c8f33c76707c72e3cf4a3a.scope/container/memory.events
Feb 25 06:51:12 np0005629333 podman[98471]: 2026-02-25 11:51:12.256624823 +0000 UTC m=+0.157678089 container attach 893bb53fddd04741fb74eaa89d82f347de23c31f43c8f33c76707c72e3cf4a3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 06:51:12 np0005629333 podman[98471]: 2026-02-25 11:51:12.256963303 +0000 UTC m=+0.158016529 container died 893bb53fddd04741fb74eaa89d82f347de23c31f43c8f33c76707c72e3cf4a3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_murdock, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:51:12 np0005629333 systemd[1]: var-lib-containers-storage-overlay-44885c1bea6025680a743854843ad2c80922f8fabfda49cd1b4194dea36b52d9-merged.mount: Deactivated successfully.
Feb 25 06:51:12 np0005629333 podman[98471]: 2026-02-25 11:51:12.331678155 +0000 UTC m=+0.232731411 container remove 893bb53fddd04741fb74eaa89d82f347de23c31f43c8f33c76707c72e3cf4a3a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_murdock, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 06:51:12 np0005629333 systemd[1]: libpod-conmon-893bb53fddd04741fb74eaa89d82f347de23c31f43c8f33c76707c72e3cf4a3a.scope: Deactivated successfully.
Feb 25 06:51:12 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:12 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/3213503567' entity='client.rgw.rgw.compute-0.fpqgwn' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Feb 25 06:51:12 np0005629333 podman[98537]: 2026-02-25 11:51:12.507371981 +0000 UTC m=+0.051754526 container create c71907a972dd90996273db52814b71d30ffd2e94533e0ee145ce14852d42baad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kalam, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 06:51:12 np0005629333 python3[98531]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:51:12 np0005629333 systemd[1]: Started libpod-conmon-c71907a972dd90996273db52814b71d30ffd2e94533e0ee145ce14852d42baad.scope.
Feb 25 06:51:12 np0005629333 podman[98537]: 2026-02-25 11:51:12.482415336 +0000 UTC m=+0.026797921 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:51:12 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be3b140743bde80caa3e01221e73b7ccbf2c4462bd385449ab770ee352189e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be3b140743bde80caa3e01221e73b7ccbf2c4462bd385449ab770ee352189e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be3b140743bde80caa3e01221e73b7ccbf2c4462bd385449ab770ee352189e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be3b140743bde80caa3e01221e73b7ccbf2c4462bd385449ab770ee352189e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:12 np0005629333 podman[98552]: 2026-02-25 11:51:12.605252424 +0000 UTC m=+0.061379124 container create 4b699941a1facf14a9150c1a5cb8e07ead2ae4febdc1993572ad7b85398821f7 (image=quay.io/ceph/ceph:v20, name=pedantic_proskuriakova, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:51:12 np0005629333 podman[98537]: 2026-02-25 11:51:12.61349153 +0000 UTC m=+0.157874135 container init c71907a972dd90996273db52814b71d30ffd2e94533e0ee145ce14852d42baad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 06:51:12 np0005629333 podman[98537]: 2026-02-25 11:51:12.621317274 +0000 UTC m=+0.165699789 container start c71907a972dd90996273db52814b71d30ffd2e94533e0ee145ce14852d42baad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Feb 25 06:51:12 np0005629333 podman[98537]: 2026-02-25 11:51:12.625089716 +0000 UTC m=+0.169472321 container attach c71907a972dd90996273db52814b71d30ffd2e94533e0ee145ce14852d42baad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:51:12 np0005629333 systemd[1]: Started libpod-conmon-4b699941a1facf14a9150c1a5cb8e07ead2ae4febdc1993572ad7b85398821f7.scope.
Feb 25 06:51:12 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07334988876cfad37982fa7003467ad8e100c73b0ba092bf00b72d23851519c0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07334988876cfad37982fa7003467ad8e100c73b0ba092bf00b72d23851519c0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:12 np0005629333 podman[98552]: 2026-02-25 11:51:12.575919958 +0000 UTC m=+0.032046648 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:51:12 np0005629333 podman[98552]: 2026-02-25 11:51:12.691661435 +0000 UTC m=+0.147788145 container init 4b699941a1facf14a9150c1a5cb8e07ead2ae4febdc1993572ad7b85398821f7 (image=quay.io/ceph/ceph:v20, name=pedantic_proskuriakova, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Feb 25 06:51:12 np0005629333 podman[98552]: 2026-02-25 11:51:12.700485168 +0000 UTC m=+0.156611878 container start 4b699941a1facf14a9150c1a5cb8e07ead2ae4febdc1993572ad7b85398821f7 (image=quay.io/ceph/ceph:v20, name=pedantic_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 06:51:12 np0005629333 podman[98552]: 2026-02-25 11:51:12.709529058 +0000 UTC m=+0.165655768 container attach 4b699941a1facf14a9150c1a5cb8e07ead2ae4febdc1993572ad7b85398821f7 (image=quay.io/ceph/ceph:v20, name=pedantic_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:51:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:51:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v79: 10 pgs: 1 unknown, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]: {
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:    "0": [
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:        {
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "devices": [
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "/dev/loop3"
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            ],
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "lv_name": "ceph_lv0",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "lv_size": "21470642176",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "name": "ceph_lv0",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "tags": {
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.cluster_name": "ceph",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.crush_device_class": "",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.encrypted": "0",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.objectstore": "bluestore",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.osd_id": "0",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.type": "block",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.vdo": "0",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.with_tpm": "0"
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            },
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "type": "block",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "vg_name": "ceph_vg0"
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:        }
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:    ],
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:    "1": [
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:        {
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "devices": [
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "/dev/loop4"
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            ],
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "lv_name": "ceph_lv1",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "lv_size": "21470642176",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "name": "ceph_lv1",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "tags": {
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.cluster_name": "ceph",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.crush_device_class": "",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.encrypted": "0",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.objectstore": "bluestore",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.osd_id": "1",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.type": "block",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.vdo": "0",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.with_tpm": "0"
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            },
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "type": "block",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "vg_name": "ceph_vg1"
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:        }
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:    ],
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:    "2": [
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:        {
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "devices": [
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "/dev/loop5"
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            ],
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "lv_name": "ceph_lv2",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "lv_size": "21470642176",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "name": "ceph_lv2",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "tags": {
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.cluster_name": "ceph",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.crush_device_class": "",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.encrypted": "0",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.objectstore": "bluestore",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.osd_id": "2",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.type": "block",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.vdo": "0",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:                "ceph.with_tpm": "0"
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            },
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "type": "block",
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:            "vg_name": "ceph_vg2"
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:        }
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]:    ]
Feb 25 06:51:12 np0005629333 pedantic_kalam[98564]: }
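
The JSON block above is the ceph-volume lvm list --format json inventory that cephadm gathers for each host: top-level keys are OSD ids, values describe the backing LV, its devices, and its ceph.* tags. A minimal sketch of reproducing and summarising it, assuming jq is installed on the host; the real cephadm invocation passes more bind mounts and flags than shown here:

    # Minimal sketch, not the exact cephadm call (extra mounts/flags omitted;
    # jq is assumed to be available):
    sudo podman run --rm --privileged \
        -v /dev:/dev -v /run/udev:/run/udev \
        --entrypoint ceph-volume quay.io/ceph/ceph:v20 \
        lvm list --format json |
      jq -r 'to_entries[] | "osd.\(.key) -> \(.value[0].devices[0])"'

Against the listing above this would print osd.0 -> /dev/loop3, osd.1 -> /dev/loop4 and osd.2 -> /dev/loop5.
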
Feb 25 06:51:12 np0005629333 systemd[1]: libpod-c71907a972dd90996273db52814b71d30ffd2e94533e0ee145ce14852d42baad.scope: Deactivated successfully.
Feb 25 06:51:12 np0005629333 podman[98602]: 2026-02-25 11:51:12.977416418 +0000 UTC m=+0.027272355 container died c71907a972dd90996273db52814b71d30ffd2e94533e0ee145ce14852d42baad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kalam, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:51:13 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6be3b140743bde80caa3e01221e73b7ccbf2c4462bd385449ab770ee352189e3-merged.mount: Deactivated successfully.
Feb 25 06:51:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Feb 25 06:51:13 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3213503567' entity='client.rgw.rgw.compute-0.fpqgwn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Feb 25 06:51:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Feb 25 06:51:13 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Feb 25 06:51:13 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 25 06:51:13 np0005629333 pedantic_proskuriakova[98575]: 
Feb 25 06:51:13 np0005629333 pedantic_proskuriakova[98575]: [{"container_id": "d81aab857238", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.21%", "created": "2026-02-25T11:49:53.184927Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-02-25T11:49:53.279394Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-25T11:51:10.307264Z", "memory_usage": 7799308, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2026-02-25T11:49:53.043110Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-8ac33163-6221-5d58-9a39-8b6933fe7762@crash.compute-0", "version": "20.2.0"}, {"container_id": "fb4195c949c9", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "7.12%", "created": "2026-02-25T11:51:08.613576Z", "daemon_id": "cephfs.compute-0.idxobw", "daemon_name": "mds.cephfs.compute-0.idxobw", "daemon_type": "mds", "events": ["2026-02-25T11:51:08.684802Z daemon:mds.cephfs.compute-0.idxobw [INFO] \"Deployed mds.cephfs.compute-0.idxobw on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-25T11:51:10.307645Z", "memory_usage": 15980298, "pending_daemon_config": false, "ports": [], "service_name": "mds.cephfs", "started": "2026-02-25T11:51:08.502928Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-8ac33163-6221-5d58-9a39-8b6933fe7762@mds.cephfs.compute-0.idxobw", "version": "20.2.0"}, {"container_id": "f6f9db6e85ba", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "16.92%", "created": "2026-02-25T11:49:14.839629Z", "daemon_id": "compute-0.jzfame", "daemon_name": "mgr.compute-0.jzfame", "daemon_type": "mgr", "events": ["2026-02-25T11:49:57.261714Z daemon:mgr.compute-0.jzfame [INFO] \"Reconfigured mgr.compute-0.jzfame on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-25T11:51:10.307189Z", "memory_usage": 545993523, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-02-25T11:49:14.739383Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-8ac33163-6221-5d58-9a39-8b6933fe7762@mgr.compute-0.jzfame", "version": "20.2.0"}, {"container_id": "ca851dfb430e", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", 
"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "2.92%", "created": "2026-02-25T11:49:10.380502Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-02-25T11:49:56.652732Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-25T11:51:10.307076Z", "memory_request": 2147483648, "memory_usage": 40978350, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2026-02-25T11:49:12.529722Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-8ac33163-6221-5d58-9a39-8b6933fe7762@mon.compute-0", "version": "20.2.0"}, {"container_id": "1e427dc9af90", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.60%", "created": "2026-02-25T11:50:14.959879Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2026-02-25T11:50:15.008047Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-25T11:51:10.307337Z", "memory_request": 4294967296, "memory_usage": 58395197, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-02-25T11:50:14.880117Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-8ac33163-6221-5d58-9a39-8b6933fe7762@osd.0", "version": "20.2.0"}, {"container_id": "297a02a6b684", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.83%", "created": "2026-02-25T11:50:18.756045Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2026-02-25T11:50:18.888513Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-25T11:51:10.307409Z", "memory_request": 4294967296, "memory_usage": 62075699, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-02-25T11:50:18.574738Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-8ac33163-6221-5d58-9a39-8b6933fe7762@osd.1", "version": "20.2.0"}, {"container_id": "e32ac8b30551", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", 
"cpu_percentage": "1.96%", "created": "2026-02-25T11:50:23.825267Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2026-02-25T11:50:24.361595Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-02-25T11:51:10.307484Z", "memory_request": 4294967296, "memory_usage": 57482936, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-02-25T11:50:23.258556Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-8ac33163-6221-5d58-9a39-8b6933fe7762@osd.2", "version": "20.2.0"}, {"container_id": "5cc6a4afc59e", "container_image_digests": ["quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1", "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac68
Feb 25 06:51:13 np0005629333 systemd[1]: libpod-4b699941a1facf14a9150c1a5cb8e07ead2ae4febdc1993572ad7b85398821f7.scope: Deactivated successfully.
Feb 25 06:51:13 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 38 pg[10.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:13 np0005629333 podman[98602]: 2026-02-25 11:51:13.297585269 +0000 UTC m=+0.347441146 container remove c71907a972dd90996273db52814b71d30ffd2e94533e0ee145ce14852d42baad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 06:51:13 np0005629333 rsyslogd[1020]: message too long (8842) with configured size 8096, begin of message is: [{"container_id": "d81aab857238", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
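
The rsyslogd complaint explains why the orch ps JSON above ends mid-digest: the 8842-byte record exceeded rsyslog's configured 8096-byte limit, so only journald holds the complete message. One possible remedy, sketched under the assumption that a drop-in file is parsed before the inputs load; the 64k figure and the file name are arbitrary choices, not values taken from this host:

    # Raise rsyslog's message-size ceiling (placement matters: maxMessageSize
    # must be set before any input module starts reading).
    printf 'global(maxMessageSize="64k")\n' | sudo tee /etc/rsyslog.d/00-maxsize.conf
    sudo systemctl restart rsyslog
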
Feb 25 06:51:13 np0005629333 systemd[1]: libpod-conmon-c71907a972dd90996273db52814b71d30ffd2e94533e0ee145ce14852d42baad.scope: Deactivated successfully.
Feb 25 06:51:13 np0005629333 podman[98552]: 2026-02-25 11:51:13.337562803 +0000 UTC m=+0.793689513 container died 4b699941a1facf14a9150c1a5cb8e07ead2ae4febdc1993572ad7b85398821f7 (image=quay.io/ceph/ceph:v20, name=pedantic_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Feb 25 06:51:13 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/3213503567' entity='client.rgw.rgw.compute-0.fpqgwn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Feb 25 06:51:14 np0005629333 systemd[1]: var-lib-containers-storage-overlay-07334988876cfad37982fa7003467ad8e100c73b0ba092bf00b72d23851519c0-merged.mount: Deactivated successfully.
Feb 25 06:51:14 np0005629333 podman[98621]: 2026-02-25 11:51:14.030181565 +0000 UTC m=+0.860542708 container remove 4b699941a1facf14a9150c1a5cb8e07ead2ae4febdc1993572ad7b85398821f7 (image=quay.io/ceph/ceph:v20, name=pedantic_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:51:14 np0005629333 systemd[1]: libpod-conmon-4b699941a1facf14a9150c1a5cb8e07ead2ae4febdc1993572ad7b85398821f7.scope: Deactivated successfully.
Feb 25 06:51:14 np0005629333 ceph-mds[97202]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Feb 25 06:51:14 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mds-cephfs-compute-0-idxobw[97198]: 2026-02-25T11:51:14.041+0000 7f1b4a8db640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Feb 25 06:51:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Feb 25 06:51:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Feb 25 06:51:14 np0005629333 podman[98700]: 2026-02-25 11:51:14.109565775 +0000 UTC m=+0.042732237 container create 0a495a7bf171b7085306710ff718770ab9d4ae2d78244c130474397dee57a6c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_keller, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:51:14 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Feb 25 06:51:14 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 39 pg[11.0( empty local-lis/les=0/0 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Feb 25 06:51:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3213503567' entity='client.rgw.rgw.compute-0.fpqgwn' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Feb 25 06:51:14 np0005629333 systemd[1]: Started libpod-conmon-0a495a7bf171b7085306710ff718770ab9d4ae2d78244c130474397dee57a6c3.scope.
Feb 25 06:51:14 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:14 np0005629333 podman[98700]: 2026-02-25 11:51:14.085235119 +0000 UTC m=+0.018401561 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:51:14 np0005629333 podman[98700]: 2026-02-25 11:51:14.203589453 +0000 UTC m=+0.136755925 container init 0a495a7bf171b7085306710ff718770ab9d4ae2d78244c130474397dee57a6c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_keller, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:51:14 np0005629333 podman[98700]: 2026-02-25 11:51:14.213356445 +0000 UTC m=+0.146522867 container start 0a495a7bf171b7085306710ff718770ab9d4ae2d78244c130474397dee57a6c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_keller, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 06:51:14 np0005629333 fervent_keller[98716]: 167 167
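
The bare "167 167" printed by fervent_keller is a uid/gid probe: a throwaway container is run to learn which uid:gid owns the Ceph directories inside the image, which matches radosgw's later "set uid:gid to 167:167 (ceph:ceph)". A hedged reproduction; the stat invocation is an assumption about how the probe is implemented, not a command taken from this log:

    # Ask the image which uid/gid owns /var/lib/ceph (expect "167 167").
    sudo podman run --rm --entrypoint stat quay.io/ceph/ceph:v20 -c '%u %g' /var/lib/ceph
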
Feb 25 06:51:14 np0005629333 systemd[1]: libpod-0a495a7bf171b7085306710ff718770ab9d4ae2d78244c130474397dee57a6c3.scope: Deactivated successfully.
Feb 25 06:51:14 np0005629333 podman[98700]: 2026-02-25 11:51:14.221721555 +0000 UTC m=+0.154887977 container attach 0a495a7bf171b7085306710ff718770ab9d4ae2d78244c130474397dee57a6c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_keller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 06:51:14 np0005629333 podman[98700]: 2026-02-25 11:51:14.222118746 +0000 UTC m=+0.155285168 container died 0a495a7bf171b7085306710ff718770ab9d4ae2d78244c130474397dee57a6c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 06:51:14 np0005629333 systemd[1]: var-lib-containers-storage-overlay-57561088240a8a3e01a4c529932be79256579f9fdee3731609c989f11830109f-merged.mount: Deactivated successfully.
Feb 25 06:51:14 np0005629333 podman[98700]: 2026-02-25 11:51:14.287173619 +0000 UTC m=+0.220340081 container remove 0a495a7bf171b7085306710ff718770ab9d4ae2d78244c130474397dee57a6c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_keller, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:51:14 np0005629333 systemd[1]: libpod-conmon-0a495a7bf171b7085306710ff718770ab9d4ae2d78244c130474397dee57a6c3.scope: Deactivated successfully.
Feb 25 06:51:14 np0005629333 podman[98741]: 2026-02-25 11:51:14.442359403 +0000 UTC m=+0.080500745 container create 4c893ca348760bf00547d6144e5825f79e1865667d3b3b507e7e33865dd27520 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_volhard, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 06:51:14 np0005629333 podman[98741]: 2026-02-25 11:51:14.381839246 +0000 UTC m=+0.019980578 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:51:14 np0005629333 systemd[1]: Started libpod-conmon-4c893ca348760bf00547d6144e5825f79e1865667d3b3b507e7e33865dd27520.scope.
Feb 25 06:51:14 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a143afdeab2b584603fcf86a8d9e4bb339fa4515008caee2192d21cde8d2f84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a143afdeab2b584603fcf86a8d9e4bb339fa4515008caee2192d21cde8d2f84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a143afdeab2b584603fcf86a8d9e4bb339fa4515008caee2192d21cde8d2f84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a143afdeab2b584603fcf86a8d9e4bb339fa4515008caee2192d21cde8d2f84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:14 np0005629333 podman[98741]: 2026-02-25 11:51:14.675478944 +0000 UTC m=+0.313620296 container init 4c893ca348760bf00547d6144e5825f79e1865667d3b3b507e7e33865dd27520 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_volhard, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 06:51:14 np0005629333 podman[98741]: 2026-02-25 11:51:14.683857655 +0000 UTC m=+0.321999007 container start 4c893ca348760bf00547d6144e5825f79e1865667d3b3b507e7e33865dd27520 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 06:51:14 np0005629333 podman[98741]: 2026-02-25 11:51:14.750060672 +0000 UTC m=+0.388202024 container attach 4c893ca348760bf00547d6144e5825f79e1865667d3b3b507e7e33865dd27520 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_volhard, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True)
Feb 25 06:51:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v82: 11 pgs: 2 unknown, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Feb 25 06:51:14 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/3213503567' entity='client.rgw.rgw.compute-0.fpqgwn' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Feb 25 06:51:14 np0005629333 python3[98788]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:51:15 np0005629333 podman[98799]: 2026-02-25 11:51:15.010384356 +0000 UTC m=+0.031716979 container create 62ba16906f36830b52aefbbfbb8668cbed121188dfc301279881b0adf8e5ff11 (image=quay.io/ceph/ceph:v20, name=elastic_khayyam, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:51:15 np0005629333 systemd[1]: Started libpod-conmon-62ba16906f36830b52aefbbfbb8668cbed121188dfc301279881b0adf8e5ff11.scope.
Feb 25 06:51:15 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1af8d8e4472cac9160cb8777d5d46a7f5da31a0b4f57d8ff207dac0a7bd4863a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1af8d8e4472cac9160cb8777d5d46a7f5da31a0b4f57d8ff207dac0a7bd4863a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:15 np0005629333 podman[98799]: 2026-02-25 11:51:14.997604324 +0000 UTC m=+0.018936967 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:51:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Feb 25 06:51:15 np0005629333 podman[98799]: 2026-02-25 11:51:15.131785571 +0000 UTC m=+0.153118224 container init 62ba16906f36830b52aefbbfbb8668cbed121188dfc301279881b0adf8e5ff11 (image=quay.io/ceph/ceph:v20, name=elastic_khayyam, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 06:51:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3213503567' entity='client.rgw.rgw.compute-0.fpqgwn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Feb 25 06:51:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Feb 25 06:51:15 np0005629333 podman[98799]: 2026-02-25 11:51:15.13742964 +0000 UTC m=+0.158762263 container start 62ba16906f36830b52aefbbfbb8668cbed121188dfc301279881b0adf8e5ff11 (image=quay.io/ceph/ceph:v20, name=elastic_khayyam, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:51:15 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Feb 25 06:51:15 np0005629333 podman[98799]: 2026-02-25 11:51:15.227064336 +0000 UTC m=+0.248397019 container attach 62ba16906f36830b52aefbbfbb8668cbed121188dfc301279881b0adf8e5ff11 (image=quay.io/ceph/ceph:v20, name=elastic_khayyam, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 06:51:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Feb 25 06:51:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3213503567' entity='client.rgw.rgw.compute-0.fpqgwn' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Feb 25 06:51:15 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 40 pg[11.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:15 np0005629333 lvm[98904]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 06:51:15 np0005629333 lvm[98900]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 06:51:15 np0005629333 lvm[98904]: VG ceph_vg2 finished
Feb 25 06:51:15 np0005629333 lvm[98900]: VG ceph_vg0 finished
Feb 25 06:51:15 np0005629333 lvm[98902]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 06:51:15 np0005629333 lvm[98902]: VG ceph_vg1 finished
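
The lvm[...] lines are event-based activation: udev reported /dev/loop3-5, each PV completed its volume group, and the VGs were activated. The same VGs can be inspected directly with stock LVM tooling; this is display-only, and the tag column should match the ceph-volume JSON earlier in this log:

    # Show the three OSD-backing LVs and their ceph.* tags.
    sudo lvs -o vg_name,lv_name,lv_size,lv_tags ceph_vg0 ceph_vg1 ceph_vg2
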
Feb 25 06:51:15 np0005629333 nostalgic_volhard[98758]: {}
Feb 25 06:51:15 np0005629333 systemd[1]: libpod-4c893ca348760bf00547d6144e5825f79e1865667d3b3b507e7e33865dd27520.scope: Deactivated successfully.
Feb 25 06:51:15 np0005629333 systemd[1]: libpod-4c893ca348760bf00547d6144e5825f79e1865667d3b3b507e7e33865dd27520.scope: Consumed 1.078s CPU time.
Feb 25 06:51:15 np0005629333 podman[98741]: 2026-02-25 11:51:15.48804308 +0000 UTC m=+1.126184382 container died 4c893ca348760bf00547d6144e5825f79e1865667d3b3b507e7e33865dd27520 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_volhard, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:51:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb 25 06:51:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/922302485' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 25 06:51:15 np0005629333 elastic_khayyam[98823]: 
Feb 25 06:51:15 np0005629333 elastic_khayyam[98823]: {"fsid":"8ac33163-6221-5d58-9a39-8b6933fe7762","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":122,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":40,"num_osds":3,"num_up_osds":3,"osd_up_since":1772020230,"num_in_osds":3,"osd_in_since":1772020207,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":9},{"state_name":"unknown","count":2}],"num_pgs":11,"num_pools":11,"num_objects":30,"data_bytes":463390,"bytes_used":84045824,"bytes_avail":64327880704,"bytes_total":64411926528,"unknown_pgs_ratio":0.18181818723678589,"read_bytes_sec":1279,"write_bytes_sec":5374,"read_op_per_sec":0,"write_op_per_sec":13},"fsmap":{"epoch":5,"btime":"2026-02-25T11:51:10:054218+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.idxobw","status":"up:active","gid":14253}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-02-25T11:50:32.774394+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Feb 25 06:51:15 np0005629333 systemd[1]: libpod-62ba16906f36830b52aefbbfbb8668cbed121188dfc301279881b0adf8e5ff11.scope: Deactivated successfully.
Feb 25 06:51:15 np0005629333 podman[98799]: 2026-02-25 11:51:15.6199925 +0000 UTC m=+0.641325123 container died 62ba16906f36830b52aefbbfbb8668cbed121188dfc301279881b0adf8e5ff11 (image=quay.io/ceph/ceph:v20, name=elastic_khayyam, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 06:51:15 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1af8d8e4472cac9160cb8777d5d46a7f5da31a0b4f57d8ff207dac0a7bd4863a-merged.mount: Deactivated successfully.
Feb 25 06:51:15 np0005629333 podman[98799]: 2026-02-25 11:51:15.927539103 +0000 UTC m=+0.948871777 container remove 62ba16906f36830b52aefbbfbb8668cbed121188dfc301279881b0adf8e5ff11 (image=quay.io/ceph/ceph:v20, name=elastic_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:51:16 np0005629333 systemd[1]: var-lib-containers-storage-overlay-4a143afdeab2b584603fcf86a8d9e4bb339fa4515008caee2192d21cde8d2f84-merged.mount: Deactivated successfully.
Feb 25 06:51:16 np0005629333 podman[98741]: 2026-02-25 11:51:16.078231304 +0000 UTC m=+1.716372656 container remove 4c893ca348760bf00547d6144e5825f79e1865667d3b3b507e7e33865dd27520 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_volhard, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 06:51:16 np0005629333 systemd[1]: libpod-conmon-4c893ca348760bf00547d6144e5825f79e1865667d3b3b507e7e33865dd27520.scope: Deactivated successfully.
Feb 25 06:51:16 np0005629333 systemd[1]: libpod-conmon-62ba16906f36830b52aefbbfbb8668cbed121188dfc301279881b0adf8e5ff11.scope: Deactivated successfully.
Feb 25 06:51:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Feb 25 06:51:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:51:16 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/3213503567' entity='client.rgw.rgw.compute-0.fpqgwn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Feb 25 06:51:16 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/3213503567' entity='client.rgw.rgw.compute-0.fpqgwn' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Feb 25 06:51:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3213503567' entity='client.rgw.rgw.compute-0.fpqgwn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Feb 25 06:51:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Feb 25 06:51:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:16 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Feb 25 06:51:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:51:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:51:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:51:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v85: 11 pgs: 11 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
Feb 25 06:51:16 np0005629333 radosgw[96738]: v1 topic migration: starting v1 topic migration..
Feb 25 06:51:16 np0005629333 radosgw[96738]: v1 topic migration: finished v1 topic migration
Feb 25 06:51:16 np0005629333 radosgw[96738]: framework: beast
Feb 25 06:51:16 np0005629333 radosgw[96738]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Feb 25 06:51:16 np0005629333 radosgw[96738]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Feb 25 06:51:16 np0005629333 radosgw[96738]: starting handler: beast
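The beast frontend above takes its TLS material through config:// URIs, which radosgw resolves against the monitors' config/config-key store rather than the local filesystem. A hedged way to inspect both sides from an admin shell (the grep pattern is an assumption derived from the logged URI template, not a key confirmed on this host):

    # Show the frontend definition this rgw daemon was started with
    ceph config get client.rgw.rgw.compute-0.fpqgwn rgw_frontends
    # List any certificate material stored under rgw/cert
    ceph config-key ls | grep rgw/cert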
Feb 25 06:51:16 np0005629333 podman[99097]: 2026-02-25 11:51:16.991054633 +0000 UTC m=+0.079967079 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 06:51:17 np0005629333 radosgw[96738]: set uid:gid to 167:167 (ceph:ceph)
Feb 25 06:51:17 np0005629333 radosgw[96738]: mgrc service_daemon_register rgw.14256 metadata {arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.fpqgwn,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Feb 19 10:49:27 UTC 2026,kernel_version=5.14.0-686.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864276,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=6e32c1e7-fb83-4135-ab89-2ba33304ec83,zone_name=default,zonegroup_id=1d1c3da4-46ab-4c9f-88fd-a23bf46e8b66,zonegroup_name=default}
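The service_daemon_register line is the rgw daemon announcing itself and its metadata to the mgr, which is what makes it appear in the cluster service map. A quick check, assuming jq is installed and the usual service-dump JSON layout:

    # Registered rgw daemons and the metadata they reported
    ceph service dump -f json | jq '.services.rgw.daemons'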
Feb 25 06:51:17 np0005629333 python3[99099]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
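The ansible command module above runs the admin ceph CLI as a throwaway container rather than from a host package. The same invocation by hand, copied from the logged command (the assimilate_ceph.conf volume is omitted here since config dump does not read it):

    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --entrypoint ceph quay.io/ceph/ceph:v20 \
      --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      config dump -f json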
Feb 25 06:51:17 np0005629333 podman[99097]: 2026-02-25 11:51:17.090240355 +0000 UTC m=+0.179152751 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:51:17 np0005629333 podman[99136]: 2026-02-25 11:51:17.100662196 +0000 UTC m=+0.048457808 container create 1df30ed467ec3aafd1671a6db372a7ca37166d3ef1790ecf05248c1656ac4357 (image=quay.io/ceph/ceph:v20, name=dreamy_mirzakhani, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 06:51:17 np0005629333 systemd[1]: Started libpod-conmon-1df30ed467ec3aafd1671a6db372a7ca37166d3ef1790ecf05248c1656ac4357.scope.
Feb 25 06:51:17 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/854af1e138ce7aaa61129b1c48657a39efb54114cab5f8d7737852cf05121b12/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/854af1e138ce7aaa61129b1c48657a39efb54114cab5f8d7737852cf05121b12/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:17 np0005629333 podman[99136]: 2026-02-25 11:51:17.07801731 +0000 UTC m=+0.025812952 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:51:17 np0005629333 podman[99136]: 2026-02-25 11:51:17.1811582 +0000 UTC m=+0.128953822 container init 1df30ed467ec3aafd1671a6db372a7ca37166d3ef1790ecf05248c1656ac4357 (image=quay.io/ceph/ceph:v20, name=dreamy_mirzakhani, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:51:17 np0005629333 podman[99136]: 2026-02-25 11:51:17.186043895 +0000 UTC m=+0.133839497 container start 1df30ed467ec3aafd1671a6db372a7ca37166d3ef1790ecf05248c1656ac4357 (image=quay.io/ceph/ceph:v20, name=dreamy_mirzakhani, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:51:17 np0005629333 podman[99136]: 2026-02-25 11:51:17.193298302 +0000 UTC m=+0.141093934 container attach 1df30ed467ec3aafd1671a6db372a7ca37166d3ef1790ecf05248c1656ac4357 (image=quay.io/ceph/ceph:v20, name=dreamy_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:51:17 np0005629333 ceph-mon[76335]: from='client.? 192.168.122.100:0/3213503567' entity='client.rgw.rgw.compute-0.fpqgwn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Feb 25 06:51:17 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:17 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:17 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:17 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Feb 25 06:51:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3050023625' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 25 06:51:17 np0005629333 dreamy_mirzakhani[99176]: 
Feb 25 06:51:17 np0005629333 dreamy_mirzakhani[99176]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.fpqgwn","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Feb 25 06:51:17 np0005629333 podman[99136]: 2026-02-25 11:51:17.62458592 +0000 UTC m=+0.572381592 container died 1df30ed467ec3aafd1671a6db372a7ca37166d3ef1790ecf05248c1656ac4357 (image=quay.io/ceph/ceph:v20, name=dreamy_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 06:51:17 np0005629333 systemd[1]: libpod-1df30ed467ec3aafd1671a6db372a7ca37166d3ef1790ecf05248c1656ac4357.scope: Deactivated successfully.
Feb 25 06:51:17 np0005629333 systemd[1]: var-lib-containers-storage-overlay-854af1e138ce7aaa61129b1c48657a39efb54114cab5f8d7737852cf05121b12-merged.mount: Deactivated successfully.
Feb 25 06:51:17 np0005629333 podman[99136]: 2026-02-25 11:51:17.74246903 +0000 UTC m=+0.690264672 container remove 1df30ed467ec3aafd1671a6db372a7ca37166d3ef1790ecf05248c1656ac4357 (image=quay.io/ceph/ceph:v20, name=dreamy_mirzakhani, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:51:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:51:17 np0005629333 systemd[1]: libpod-conmon-1df30ed467ec3aafd1671a6db372a7ca37166d3ef1790ecf05248c1656ac4357.scope: Deactivated successfully.
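Each of these one-shot podman runs emits the same event sequence: image pull, container create, init, start, attach, died, remove, bracketed by transient libpod-*.scope and libpod-conmon-*.scope systemd units. The m=+N offsets in each message are monotonic time since the podman process started, which is why a pull can land in the journal after a create that carries a later wall-clock stamp. To watch the same sequence live (a sketch; run the two commands in separate shells):

    # Shell 1: stream container lifecycle events as they happen
    podman events --since 1m
    # Shell 2: any one-shot run will produce the full sequence
    podman run --rm quay.io/ceph/ceph:v20 ceph --version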
Feb 25 06:51:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:51:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:51:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:51:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
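config generate-minimal-conf is how cephadm produces the small ceph.conf it distributes to hosts (essentially the fsid plus mon addresses). The same command works by hand when provisioning a new client host, assuming admin credentials:

    # Print a minimal client config for this cluster
    ceph config generate-minimal-conf
    # Typically redirected into place on the new host
    ceph config generate-minimal-conf | sudo tee /etc/ceph/ceph.conf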
Feb 25 06:51:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 06:51:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:51:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 06:51:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 06:51:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 06:51:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 06:51:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 06:51:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:51:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:51:18 np0005629333 podman[99421]: 2026-02-25 11:51:18.277508378 +0000 UTC m=+0.048923462 container create 7220e243d8d551d1e3f8b2d419517ed3a0b7dd1b63e1b3662b9309fd74aa5288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:51:18 np0005629333 systemd[1]: Started libpod-conmon-7220e243d8d551d1e3f8b2d419517ed3a0b7dd1b63e1b3662b9309fd74aa5288.scope.
Feb 25 06:51:18 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:18 np0005629333 podman[99421]: 2026-02-25 11:51:18.258234772 +0000 UTC m=+0.029649876 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:51:18 np0005629333 podman[99421]: 2026-02-25 11:51:18.356102105 +0000 UTC m=+0.127517269 container init 7220e243d8d551d1e3f8b2d419517ed3a0b7dd1b63e1b3662b9309fd74aa5288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_mahavira, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:51:18 np0005629333 podman[99421]: 2026-02-25 11:51:18.363054182 +0000 UTC m=+0.134469306 container start 7220e243d8d551d1e3f8b2d419517ed3a0b7dd1b63e1b3662b9309fd74aa5288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_mahavira, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:51:18 np0005629333 nervous_mahavira[99438]: 167 167
Feb 25 06:51:18 np0005629333 systemd[1]: libpod-7220e243d8d551d1e3f8b2d419517ed3a0b7dd1b63e1b3662b9309fd74aa5288.scope: Deactivated successfully.
Feb 25 06:51:18 np0005629333 podman[99421]: 2026-02-25 11:51:18.369870396 +0000 UTC m=+0.141285580 container attach 7220e243d8d551d1e3f8b2d419517ed3a0b7dd1b63e1b3662b9309fd74aa5288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 06:51:18 np0005629333 podman[99421]: 2026-02-25 11:51:18.370263858 +0000 UTC m=+0.141678972 container died 7220e243d8d551d1e3f8b2d419517ed3a0b7dd1b63e1b3662b9309fd74aa5288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:51:18 np0005629333 systemd[1]: var-lib-containers-storage-overlay-572c9c0ab82e08d048f9f883d3597224e14b4566fe87a7a8c09fa38755ad4885-merged.mount: Deactivated successfully.
Feb 25 06:51:18 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:18 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:18 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:51:18 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:18 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 06:51:18 np0005629333 podman[99421]: 2026-02-25 11:51:18.431073344 +0000 UTC m=+0.202488458 container remove 7220e243d8d551d1e3f8b2d419517ed3a0b7dd1b63e1b3662b9309fd74aa5288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_mahavira, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:51:18 np0005629333 systemd[1]: libpod-conmon-7220e243d8d551d1e3f8b2d419517ed3a0b7dd1b63e1b3662b9309fd74aa5288.scope: Deactivated successfully.
Feb 25 06:51:18 np0005629333 python3[99482]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:51:18 np0005629333 podman[99488]: 2026-02-25 11:51:18.625110798 +0000 UTC m=+0.112003906 container create 3469cc426c129fb054e66aaebda2f6dc97b234cf9d3a6dfe9ad615cbc765a975 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_cerf, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:51:18 np0005629333 podman[99488]: 2026-02-25 11:51:18.533227434 +0000 UTC m=+0.020120542 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:51:18 np0005629333 podman[99502]: 2026-02-25 11:51:18.696128609 +0000 UTC m=+0.096146992 container create b790951ee385aaa869922f8cb52165ab62b4acf1a40bfdf0cc9cf529b2577a04 (image=quay.io/ceph/ceph:v20, name=great_hypatia, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:51:18 np0005629333 systemd[1]: Started libpod-conmon-3469cc426c129fb054e66aaebda2f6dc97b234cf9d3a6dfe9ad615cbc765a975.scope.
Feb 25 06:51:18 np0005629333 systemd[1]: Started libpod-conmon-b790951ee385aaa869922f8cb52165ab62b4acf1a40bfdf0cc9cf529b2577a04.scope.
Feb 25 06:51:18 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:18 np0005629333 podman[99502]: 2026-02-25 11:51:18.638836818 +0000 UTC m=+0.038855231 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:51:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a0e0b7762600f74bbe4054dab2ca334d36e6c71ffd00401bc914671649956c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a0e0b7762600f74bbe4054dab2ca334d36e6c71ffd00401bc914671649956c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a0e0b7762600f74bbe4054dab2ca334d36e6c71ffd00401bc914671649956c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:18 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a0e0b7762600f74bbe4054dab2ca334d36e6c71ffd00401bc914671649956c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a0e0b7762600f74bbe4054dab2ca334d36e6c71ffd00401bc914671649956c4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05cfef194e8cb5f04091462148904aec2cdaf166de11e5e1dbb763252b0d3b95/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05cfef194e8cb5f04091462148904aec2cdaf166de11e5e1dbb763252b0d3b95/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:18 np0005629333 podman[99488]: 2026-02-25 11:51:18.755601635 +0000 UTC m=+0.242494803 container init 3469cc426c129fb054e66aaebda2f6dc97b234cf9d3a6dfe9ad615cbc765a975 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_cerf, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 06:51:18 np0005629333 podman[99502]: 2026-02-25 11:51:18.763381037 +0000 UTC m=+0.163399450 container init b790951ee385aaa869922f8cb52165ab62b4acf1a40bfdf0cc9cf529b2577a04 (image=quay.io/ceph/ceph:v20, name=great_hypatia, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 06:51:18 np0005629333 podman[99488]: 2026-02-25 11:51:18.764408168 +0000 UTC m=+0.251301286 container start 3469cc426c129fb054e66aaebda2f6dc97b234cf9d3a6dfe9ad615cbc765a975 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_cerf, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 06:51:18 np0005629333 podman[99502]: 2026-02-25 11:51:18.770308844 +0000 UTC m=+0.170327257 container start b790951ee385aaa869922f8cb52165ab62b4acf1a40bfdf0cc9cf529b2577a04 (image=quay.io/ceph/ceph:v20, name=great_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 06:51:18 np0005629333 podman[99488]: 2026-02-25 11:51:18.772782608 +0000 UTC m=+0.259675716 container attach 3469cc426c129fb054e66aaebda2f6dc97b234cf9d3a6dfe9ad615cbc765a975 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_cerf, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:51:18 np0005629333 podman[99502]: 2026-02-25 11:51:18.783306242 +0000 UTC m=+0.183324635 container attach b790951ee385aaa869922f8cb52165ab62b4acf1a40bfdf0cc9cf529b2577a04 (image=quay.io/ceph/ceph:v20, name=great_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:51:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v86: 11 pgs: 11 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 179 B/s rd, 359 B/s wr, 1 op/s
Feb 25 06:51:19 np0005629333 reverent_cerf[99518]: --> passed data devices: 0 physical, 3 LVM
Feb 25 06:51:19 np0005629333 reverent_cerf[99518]: --> All data devices are unavailable
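reverent_cerf is a cephadm-driven ceph-volume pass over this host's disks: it found no raw physical candidates, saw three LVM devices, and judged all three unavailable, which is the expected result when the devices already back the three running OSDs. Two hedged ways to reproduce the same inventory on demand:

    # Orchestrator view of devices across managed hosts
    ceph orch device ls
    # Direct ceph-volume inventory from a cephadm shell on this host
    cephadm shell -- ceph-volume inventory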
Feb 25 06:51:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Feb 25 06:51:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/7910197' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Feb 25 06:51:19 np0005629333 great_hypatia[99522]: mimic
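"mimic" here is the container's stdout from the osd get-require-min-compat-client call dispatched just above: the OSDs currently accept clients as old as the Mimic release. Raising that floor unlocks newer osdmap features but locks out older clients, so the monitors refuse the change while such clients are connected (an override flag exists; check your release's CLI before relying on it). A sketch, assuming no clients older than the target release:

    ceph osd get-require-min-compat-client     # prints: mimic
    # Raise the floor to a newer release, e.g. reef
    ceph osd set-require-min-compat-client reef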
Feb 25 06:51:19 np0005629333 systemd[1]: libpod-3469cc426c129fb054e66aaebda2f6dc97b234cf9d3a6dfe9ad615cbc765a975.scope: Deactivated successfully.
Feb 25 06:51:19 np0005629333 podman[99488]: 2026-02-25 11:51:19.21288653 +0000 UTC m=+0.699779608 container died 3469cc426c129fb054e66aaebda2f6dc97b234cf9d3a6dfe9ad615cbc765a975 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_cerf, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 06:51:19 np0005629333 systemd[1]: libpod-b790951ee385aaa869922f8cb52165ab62b4acf1a40bfdf0cc9cf529b2577a04.scope: Deactivated successfully.
Feb 25 06:51:19 np0005629333 podman[99502]: 2026-02-25 11:51:19.236845676 +0000 UTC m=+0.636864059 container died b790951ee385aaa869922f8cb52165ab62b4acf1a40bfdf0cc9cf529b2577a04 (image=quay.io/ceph/ceph:v20, name=great_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 06:51:19 np0005629333 systemd[1]: var-lib-containers-storage-overlay-3a0e0b7762600f74bbe4054dab2ca334d36e6c71ffd00401bc914671649956c4-merged.mount: Deactivated successfully.
Feb 25 06:51:19 np0005629333 podman[99488]: 2026-02-25 11:51:19.289425176 +0000 UTC m=+0.776318264 container remove 3469cc426c129fb054e66aaebda2f6dc97b234cf9d3a6dfe9ad615cbc765a975 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 06:51:19 np0005629333 systemd[1]: libpod-conmon-3469cc426c129fb054e66aaebda2f6dc97b234cf9d3a6dfe9ad615cbc765a975.scope: Deactivated successfully.
Feb 25 06:51:19 np0005629333 systemd[1]: var-lib-containers-storage-overlay-05cfef194e8cb5f04091462148904aec2cdaf166de11e5e1dbb763252b0d3b95-merged.mount: Deactivated successfully.
Feb 25 06:51:19 np0005629333 podman[99502]: 2026-02-25 11:51:19.349974094 +0000 UTC m=+0.749992467 container remove b790951ee385aaa869922f8cb52165ab62b4acf1a40bfdf0cc9cf529b2577a04 (image=quay.io/ceph/ceph:v20, name=great_hypatia, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:51:19 np0005629333 systemd[1]: libpod-conmon-b790951ee385aaa869922f8cb52165ab62b4acf1a40bfdf0cc9cf529b2577a04.scope: Deactivated successfully.
Feb 25 06:51:19 np0005629333 podman[99651]: 2026-02-25 11:51:19.743787214 +0000 UTC m=+0.048252772 container create 4c635764910e634346b67dc692ae5fa15aa4916b29ea005cbeb8444b0c70bc5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:51:19 np0005629333 systemd[1]: Started libpod-conmon-4c635764910e634346b67dc692ae5fa15aa4916b29ea005cbeb8444b0c70bc5e.scope.
Feb 25 06:51:19 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:19 np0005629333 podman[99651]: 2026-02-25 11:51:19.722026594 +0000 UTC m=+0.026492212 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:51:19 np0005629333 podman[99651]: 2026-02-25 11:51:19.818475794 +0000 UTC m=+0.122941332 container init 4c635764910e634346b67dc692ae5fa15aa4916b29ea005cbeb8444b0c70bc5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:51:19 np0005629333 podman[99651]: 2026-02-25 11:51:19.824595657 +0000 UTC m=+0.129061185 container start 4c635764910e634346b67dc692ae5fa15aa4916b29ea005cbeb8444b0c70bc5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_thompson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:51:19 np0005629333 festive_thompson[99667]: 167 167
Feb 25 06:51:19 np0005629333 systemd[1]: libpod-4c635764910e634346b67dc692ae5fa15aa4916b29ea005cbeb8444b0c70bc5e.scope: Deactivated successfully.
Feb 25 06:51:19 np0005629333 podman[99651]: 2026-02-25 11:51:19.829242496 +0000 UTC m=+0.133708054 container attach 4c635764910e634346b67dc692ae5fa15aa4916b29ea005cbeb8444b0c70bc5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_thompson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 06:51:19 np0005629333 podman[99651]: 2026-02-25 11:51:19.830158803 +0000 UTC m=+0.134624331 container died 4c635764910e634346b67dc692ae5fa15aa4916b29ea005cbeb8444b0c70bc5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_thompson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:51:19 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7cc31c6de316df328d552cb4abf9d53c5c553ea654101fd3cdc568598abc579c-merged.mount: Deactivated successfully.
Feb 25 06:51:19 np0005629333 podman[99651]: 2026-02-25 11:51:19.883889678 +0000 UTC m=+0.188355206 container remove 4c635764910e634346b67dc692ae5fa15aa4916b29ea005cbeb8444b0c70bc5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_thompson, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 06:51:19 np0005629333 systemd[1]: libpod-conmon-4c635764910e634346b67dc692ae5fa15aa4916b29ea005cbeb8444b0c70bc5e.scope: Deactivated successfully.
Feb 25 06:51:20 np0005629333 podman[99691]: 2026-02-25 11:51:20.059514092 +0000 UTC m=+0.069058993 container create 82ed433b8599a71b49c57a882b9413ae3e9ea3aff229e50b0620d3f15a00b537 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_ramanujan, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:51:20 np0005629333 systemd[1]: Started libpod-conmon-82ed433b8599a71b49c57a882b9413ae3e9ea3aff229e50b0620d3f15a00b537.scope.
Feb 25 06:51:20 np0005629333 podman[99691]: 2026-02-25 11:51:20.024320781 +0000 UTC m=+0.033865792 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:51:20 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32d691a62f47cf29d148731297155024dcc33dfc8f7bbbf578628c48d8667ab4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32d691a62f47cf29d148731297155024dcc33dfc8f7bbbf578628c48d8667ab4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32d691a62f47cf29d148731297155024dcc33dfc8f7bbbf578628c48d8667ab4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32d691a62f47cf29d148731297155024dcc33dfc8f7bbbf578628c48d8667ab4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:20 np0005629333 podman[99691]: 2026-02-25 11:51:20.181132584 +0000 UTC m=+0.190677565 container init 82ed433b8599a71b49c57a882b9413ae3e9ea3aff229e50b0620d3f15a00b537 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_ramanujan, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:51:20 np0005629333 podman[99691]: 2026-02-25 11:51:20.187333789 +0000 UTC m=+0.196878710 container start 82ed433b8599a71b49c57a882b9413ae3e9ea3aff229e50b0620d3f15a00b537 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_ramanujan, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:51:20 np0005629333 podman[99691]: 2026-02-25 11:51:20.197528494 +0000 UTC m=+0.207073415 container attach 82ed433b8599a71b49c57a882b9413ae3e9ea3aff229e50b0620d3f15a00b537 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_ramanujan, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:51:20 np0005629333 python3[99731]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
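This last ansible step collects ceph versions, the per-daemon-type tally of running builds that adoption tooling uses to confirm the whole cluster is on a single release. A quick manual equivalent, assuming jq:

    # One line per distinct build, with a daemon count
    ceph versions -f json | jq '.overall'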
Feb 25 06:51:20 np0005629333 podman[99740]: 2026-02-25 11:51:20.270461042 +0000 UTC m=+0.047390007 container create a6f71f511298092656fe0f4ae1443e34d89bff6c60d35aec200c98a4d5db5fa8 (image=quay.io/ceph/ceph:v20, name=busy_edison, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 06:51:20 np0005629333 systemd[1]: Started libpod-conmon-a6f71f511298092656fe0f4ae1443e34d89bff6c60d35aec200c98a4d5db5fa8.scope.
Feb 25 06:51:20 np0005629333 podman[99740]: 2026-02-25 11:51:20.250088483 +0000 UTC m=+0.027017448 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:51:20 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d862414bf3367ef98aa9bbc8b731c9cbf891f16ec2b2a1b110fbd152f9e7752d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d862414bf3367ef98aa9bbc8b731c9cbf891f16ec2b2a1b110fbd152f9e7752d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:20 np0005629333 podman[99740]: 2026-02-25 11:51:20.376350744 +0000 UTC m=+0.153279769 container init a6f71f511298092656fe0f4ae1443e34d89bff6c60d35aec200c98a4d5db5fa8 (image=quay.io/ceph/ceph:v20, name=busy_edison, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 06:51:20 np0005629333 podman[99740]: 2026-02-25 11:51:20.383439755 +0000 UTC m=+0.160368720 container start a6f71f511298092656fe0f4ae1443e34d89bff6c60d35aec200c98a4d5db5fa8 (image=quay.io/ceph/ceph:v20, name=busy_edison, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 06:51:20 np0005629333 podman[99740]: 2026-02-25 11:51:20.391350792 +0000 UTC m=+0.168279757 container attach a6f71f511298092656fe0f4ae1443e34d89bff6c60d35aec200c98a4d5db5fa8 (image=quay.io/ceph/ceph:v20, name=busy_edison, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]: {
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:    "0": [
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:        {
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "devices": [
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "/dev/loop3"
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            ],
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "lv_name": "ceph_lv0",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "lv_size": "21470642176",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "name": "ceph_lv0",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "tags": {
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.cluster_name": "ceph",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.crush_device_class": "",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.encrypted": "0",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.objectstore": "bluestore",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.osd_id": "0",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.type": "block",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.vdo": "0",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.with_tpm": "0"
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            },
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "type": "block",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "vg_name": "ceph_vg0"
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:        }
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:    ],
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:    "1": [
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:        {
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "devices": [
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "/dev/loop4"
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            ],
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "lv_name": "ceph_lv1",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "lv_size": "21470642176",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "name": "ceph_lv1",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "tags": {
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.cluster_name": "ceph",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.crush_device_class": "",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.encrypted": "0",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.objectstore": "bluestore",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.osd_id": "1",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.type": "block",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.vdo": "0",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.with_tpm": "0"
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            },
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "type": "block",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "vg_name": "ceph_vg1"
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:        }
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:    ],
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:    "2": [
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:        {
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "devices": [
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "/dev/loop5"
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            ],
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "lv_name": "ceph_lv2",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "lv_size": "21470642176",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "name": "ceph_lv2",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "tags": {
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.cluster_name": "ceph",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.crush_device_class": "",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.encrypted": "0",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.objectstore": "bluestore",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.osd_id": "2",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.type": "block",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.vdo": "0",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:                "ceph.with_tpm": "0"
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            },
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "type": "block",
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:            "vg_name": "ceph_vg2"
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:        }
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]:    ]
Feb 25 06:51:20 np0005629333 happy_ramanujan[99735]: }
Feb 25 06:51:20 np0005629333 systemd[1]: libpod-82ed433b8599a71b49c57a882b9413ae3e9ea3aff229e50b0620d3f15a00b537.scope: Deactivated successfully.
Feb 25 06:51:20 np0005629333 podman[99691]: 2026-02-25 11:51:20.520338553 +0000 UTC m=+0.529883484 container died 82ed433b8599a71b49c57a882b9413ae3e9ea3aff229e50b0620d3f15a00b537 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:51:20 np0005629333 systemd[1]: var-lib-containers-storage-overlay-32d691a62f47cf29d148731297155024dcc33dfc8f7bbbf578628c48d8667ab4-merged.mount: Deactivated successfully.
Feb 25 06:51:20 np0005629333 podman[99691]: 2026-02-25 11:51:20.588795348 +0000 UTC m=+0.598340269 container remove 82ed433b8599a71b49c57a882b9413ae3e9ea3aff229e50b0620d3f15a00b537 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_ramanujan, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:51:20 np0005629333 systemd[1]: libpod-conmon-82ed433b8599a71b49c57a882b9413ae3e9ea3aff229e50b0620d3f15a00b537.scope: Deactivated successfully.
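[note] The JSON block printed by happy_ramanujan above has the shape of "ceph-volume lvm list --format json" output: a map from OSD id to the logical volume(s) backing it, with the ceph.* LV tags expanded. A small jq sketch (the filename lvm_list.json is hypothetical, standing in for the captured output) that reduces it to an osd-id / LV path / OSD fsid table:

    # summarize ceph-volume lvm list JSON: one line per OSD
    jq -r 'to_entries[]
           | "\(.key)  \(.value[0].lv_path)  \(.value[0].tags["ceph.osd_fsid"])"' \
        lvm_list.json

Against the output above this yields:

    0  /dev/ceph_vg0/ceph_lv0  d19afe3c-7923-4776-bcc2-88886150b441
    1  /dev/ceph_vg1/ceph_lv1  a25b4fc6-1504-44d3-aca7-62c5ef316350
    2  /dev/ceph_vg2/ceph_lv2  f84d59d3-cae3-44c8-8bca-9fa4643cfc60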
Feb 25 06:51:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v87: 11 pgs: 11 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 153 B/s rd, 306 B/s wr, 0 op/s
Feb 25 06:51:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Feb 25 06:51:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1596018089' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Feb 25 06:51:20 np0005629333 busy_edison[99756]: 
Feb 25 06:51:20 np0005629333 busy_edison[99756]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"rgw":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":7}}
Feb 25 06:51:20 np0005629333 systemd[1]: libpod-a6f71f511298092656fe0f4ae1443e34d89bff6c60d35aec200c98a4d5db5fa8.scope: Deactivated successfully.
Feb 25 06:51:20 np0005629333 podman[99740]: 2026-02-25 11:51:20.887766396 +0000 UTC m=+0.664695341 container died a6f71f511298092656fe0f4ae1443e34d89bff6c60d35aec200c98a4d5db5fa8 (image=quay.io/ceph/ceph:v20, name=busy_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:51:20 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d862414bf3367ef98aa9bbc8b731c9cbf891f16ec2b2a1b110fbd152f9e7752d-merged.mount: Deactivated successfully.
Feb 25 06:51:20 np0005629333 podman[99740]: 2026-02-25 11:51:20.941643043 +0000 UTC m=+0.718571988 container remove a6f71f511298092656fe0f4ae1443e34d89bff6c60d35aec200c98a4d5db5fa8 (image=quay.io/ceph/ceph:v20, name=busy_edison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 06:51:20 np0005629333 systemd[1]: libpod-conmon-a6f71f511298092656fe0f4ae1443e34d89bff6c60d35aec200c98a4d5db5fa8.scope: Deactivated successfully.
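[note] The single-line JSON from busy_edison is the "ceph versions -f json" result: every daemon reports ceph version 20.2.0 tentacle (1 mon, 1 mgr, 3 osd, 1 mds, 1 rgw, 7 overall), i.e. the cluster is version-uniform. A jq pass that breaks the same output down per daemon type:

    # count daemons per version, one line per daemon type
    ceph versions -f json \
      | jq -r 'to_entries[] | .key as $t
               | .value | to_entries[] | "\($t)  \(.value)  \(.key)"'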
Feb 25 06:51:21 np0005629333 podman[99871]: 2026-02-25 11:51:21.059905695 +0000 UTC m=+0.042180551 container create fac585bab808caa7d4cd5e26e64015b66f1be7456d85aad56b904bd1de1be6af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_khorana, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:51:21 np0005629333 systemd[1]: Started libpod-conmon-fac585bab808caa7d4cd5e26e64015b66f1be7456d85aad56b904bd1de1be6af.scope.
Feb 25 06:51:21 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:21 np0005629333 podman[99871]: 2026-02-25 11:51:21.038442114 +0000 UTC m=+0.020716950 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:51:21 np0005629333 podman[99871]: 2026-02-25 11:51:21.14345862 +0000 UTC m=+0.125733476 container init fac585bab808caa7d4cd5e26e64015b66f1be7456d85aad56b904bd1de1be6af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:51:21 np0005629333 podman[99871]: 2026-02-25 11:51:21.150597873 +0000 UTC m=+0.132872729 container start fac585bab808caa7d4cd5e26e64015b66f1be7456d85aad56b904bd1de1be6af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 06:51:21 np0005629333 trusting_khorana[99888]: 167 167
Feb 25 06:51:21 np0005629333 systemd[1]: libpod-fac585bab808caa7d4cd5e26e64015b66f1be7456d85aad56b904bd1de1be6af.scope: Deactivated successfully.
Feb 25 06:51:21 np0005629333 podman[99871]: 2026-02-25 11:51:21.159415747 +0000 UTC m=+0.141690603 container attach fac585bab808caa7d4cd5e26e64015b66f1be7456d85aad56b904bd1de1be6af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_khorana, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 06:51:21 np0005629333 podman[99871]: 2026-02-25 11:51:21.159778437 +0000 UTC m=+0.142053293 container died fac585bab808caa7d4cd5e26e64015b66f1be7456d85aad56b904bd1de1be6af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_khorana, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:51:21 np0005629333 systemd[1]: var-lib-containers-storage-overlay-2ce765fb1c975b54eeaac497d5f8d5bd3a56e5c7c042fe3c1df69eb0ef11dd06-merged.mount: Deactivated successfully.
Feb 25 06:51:21 np0005629333 podman[99871]: 2026-02-25 11:51:21.218514551 +0000 UTC m=+0.200789407 container remove fac585bab808caa7d4cd5e26e64015b66f1be7456d85aad56b904bd1de1be6af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:51:21 np0005629333 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 06:51:21 np0005629333 systemd[1]: libpod-conmon-fac585bab808caa7d4cd5e26e64015b66f1be7456d85aad56b904bd1de1be6af.scope: Deactivated successfully.
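[note] The "167 167" printed by trusting_khorana looks like a uid/gid probe: 167 is the ceph user and group id inside the Ceph container images, and cephadm determines the ids to chown data directories with by stat-ing a path inside the image. A sketch of that kind of probe, assuming /var/lib/ceph as the stat target (the actual path and entrypoint used here are not visible in the log):

    # hypothetical reconstruction of the uid/gid probe
    podman run --rm --entrypoint stat \
        quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 \
        -c '%u %g' /var/lib/ceph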
Feb 25 06:51:21 np0005629333 podman[99915]: 2026-02-25 11:51:21.402781384 +0000 UTC m=+0.062581100 container create 5cb11c7572bec00232cf230fc75fd502797e77b0e15c71bea016433417c09f7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_brattain, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 06:51:21 np0005629333 systemd[1]: Started libpod-conmon-5cb11c7572bec00232cf230fc75fd502797e77b0e15c71bea016433417c09f7f.scope.
Feb 25 06:51:21 np0005629333 podman[99915]: 2026-02-25 11:51:21.372231742 +0000 UTC m=+0.032031488 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:51:21 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:51:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f629b5c6dbfd7620d163e54c4c284fa063ce33b0025935eb6f94cc3173e35a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f629b5c6dbfd7620d163e54c4c284fa063ce33b0025935eb6f94cc3173e35a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f629b5c6dbfd7620d163e54c4c284fa063ce33b0025935eb6f94cc3173e35a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f629b5c6dbfd7620d163e54c4c284fa063ce33b0025935eb6f94cc3173e35a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:51:21 np0005629333 podman[99915]: 2026-02-25 11:51:21.72881624 +0000 UTC m=+0.388615956 container init 5cb11c7572bec00232cf230fc75fd502797e77b0e15c71bea016433417c09f7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 06:51:21 np0005629333 podman[99915]: 2026-02-25 11:51:21.7368388 +0000 UTC m=+0.396638486 container start 5cb11c7572bec00232cf230fc75fd502797e77b0e15c71bea016433417c09f7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:51:21 np0005629333 podman[99915]: 2026-02-25 11:51:21.76230751 +0000 UTC m=+0.422107216 container attach 5cb11c7572bec00232cf230fc75fd502797e77b0e15c71bea016433417c09f7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:51:22 np0005629333 lvm[100011]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 06:51:22 np0005629333 lvm[100011]: VG ceph_vg1 finished
Feb 25 06:51:22 np0005629333 lvm[100008]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 06:51:22 np0005629333 lvm[100008]: VG ceph_vg0 finished
Feb 25 06:51:22 np0005629333 lvm[100013]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 06:51:22 np0005629333 lvm[100013]: VG ceph_vg2 finished
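[note] The three lvm autoactivation events above confirm that each loop-backed PV came online and completed its volume group. A host-side spot check against the same devices and VGs named in the log:

    # PVs and their VGs, then the OSD LVs with their ceph.* tags
    pvs -o pv_name,vg_name,pv_size /dev/loop3 /dev/loop4 /dev/loop5
    lvs -o lv_name,vg_name,lv_size,lv_tags ceph_vg0 ceph_vg1 ceph_vg2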
Feb 25 06:51:22 np0005629333 youthful_brattain[99932]: {}
Feb 25 06:51:22 np0005629333 systemd[1]: libpod-5cb11c7572bec00232cf230fc75fd502797e77b0e15c71bea016433417c09f7f.scope: Deactivated successfully.
Feb 25 06:51:22 np0005629333 systemd[1]: libpod-5cb11c7572bec00232cf230fc75fd502797e77b0e15c71bea016433417c09f7f.scope: Consumed 1.179s CPU time.
Feb 25 06:51:22 np0005629333 podman[99915]: 2026-02-25 11:51:22.668848332 +0000 UTC m=+1.328648098 container died 5cb11c7572bec00232cf230fc75fd502797e77b0e15c71bea016433417c09f7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_brattain, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:51:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:51:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v88: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 8.2 KiB/s wr, 178 op/s
Feb 25 06:51:22 np0005629333 systemd[1]: var-lib-containers-storage-overlay-55f629b5c6dbfd7620d163e54c4c284fa063ce33b0025935eb6f94cc3173e35a-merged.mount: Deactivated successfully.
Feb 25 06:51:23 np0005629333 podman[99915]: 2026-02-25 11:51:23.056758665 +0000 UTC m=+1.716558391 container remove 5cb11c7572bec00232cf230fc75fd502797e77b0e15c71bea016433417c09f7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_brattain, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 06:51:23 np0005629333 systemd[1]: libpod-conmon-5cb11c7572bec00232cf230fc75fd502797e77b0e15c71bea016433417c09f7f.scope: Deactivated successfully.
Feb 25 06:51:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:51:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:51:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:24 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:24 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
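[note] The two config-key set commands are the cephadm mgr module caching its per-host inventory; the audit lines omit the values, which are large blobs. If needed they can be read back with config-key get (the jq pass assumes the stored values are JSON):

    ceph config-key get mgr/cephadm/host.compute-0.devices.0 | jq .
    ceph config-key get mgr/cephadm/host.compute-0 | jq .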
Feb 25 06:51:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v89: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 6.8 KiB/s wr, 147 op/s
Feb 25 06:51:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v90: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 6.1 KiB/s wr, 134 op/s
Feb 25 06:51:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:51:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v91: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 118 op/s
Feb 25 06:51:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_11:51:30
Feb 25 06:51:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 06:51:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 06:51:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['backups', 'images', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'default.rgw.log', 'vms']
Feb 25 06:51:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
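[note] The balancer pass above found nothing to move: with only 11 PGs over 3 OSDs it prepared 0 of a possible 10 upmap changes. The same information is available on demand:

    ceph balancer status          # mode, last optimization, pending plans
    ceph balancer eval            # numeric score of the current distribution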
Feb 25 06:51:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v92: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 5.3 KiB/s wr, 118 op/s
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.301693526268813e-07 of space, bias 4.0, pg target 0.0011162032231522576 quantized to 16 (current 1)
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 1)
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
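[note] The pg_autoscaler walk above computes, per pool, a used-space ratio times the pool's bias and quantizes the resulting PG target; in this run the near-empty data pools all land on a target of 32 while .mgr stays at 1. The table the module is reasoning from can be printed with:

    ceph osd pool autoscale-status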
Feb 25 06:51:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Feb 25 06:51:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 06:51:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 06:51:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Feb 25 06:51:32 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Feb 25 06:51:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Feb 25 06:51:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Feb 25 06:51:32 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Feb 25 06:51:32 np0005629333 ceph-mgr[76641]: [progress INFO root] update: starting ev a95fee6a-e605-4db5-9d7e-5baf7c58bb54 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Feb 25 06:51:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Feb 25 06:51:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Feb 25 06:51:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:51:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v94: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:51:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Feb 25 06:51:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 25 06:51:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Feb 25 06:51:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Feb 25 06:51:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Feb 25 06:51:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Feb 25 06:51:33 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Feb 25 06:51:33 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Feb 25 06:51:33 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 25 06:51:33 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Feb 25 06:51:33 np0005629333 ceph-mgr[76641]: [progress INFO root] update: starting ev 2ca62905-21c5-4b28-bec2-04460775461e (PG autoscaler increasing pool 3 PGs from 1 to 32)
Feb 25 06:51:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Feb 25 06:51:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
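[note] The pool resizes proceed in two steps per pool: the mgr first raises pg_num (the target) and then pg_num_actual, and each step shows up as a dispatch followed by a finished audit record plus a new osdmap epoch (e42, e43, e44 above). Progress of the splits can be followed with:

    ceph osd pool ls detail | grep -E 'vms|volumes|backups|images'
    ceph osd pool get vms pg_num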
Feb 25 06:51:33 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 43 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=43 pruub=13.251921654s) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active pruub 81.990036011s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:33 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 43 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=43 pruub=13.251921654s) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown pruub 81.990036011s@ mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Feb 25 06:51:34 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Feb 25 06:51:34 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Feb 25 06:51:34 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Feb 25 06:51:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Feb 25 06:51:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Feb 25 06:51:34 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Feb 25 06:51:34 np0005629333 ceph-mgr[76641]: [progress INFO root] update: starting ev 04f5e04b-2d3a-4f7d-9fe5-e65f51eb8e64 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Feb 25 06:51:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Feb 25 06:51:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.1( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.c( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.e( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.10( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.1e( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.12( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.14( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.1a( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=18/19 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.1f( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.1c( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.1b( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.9( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.5( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.6( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.a( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.4( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.1d( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.3( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.8( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.1( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.7( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.0( empty local-lis/les=43/44 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.2( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.b( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.d( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.c( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.10( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.f( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.11( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.e( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.13( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.1e( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.12( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.14( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.18( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.16( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.1a( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.17( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.15( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 44 pg[2.19( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=18/18 les/c/f=19/19/0 sis=43) [2] r=0 lpr=43 pi=[18,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v97: 42 pgs: 31 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:51:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Feb 25 06:51:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 25 06:51:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Feb 25 06:51:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 25 06:51:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Feb 25 06:51:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Feb 25 06:51:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Feb 25 06:51:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Feb 25 06:51:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Feb 25 06:51:35 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Feb 25 06:51:35 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Feb 25 06:51:35 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 25 06:51:35 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 25 06:51:35 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 45 pg[4.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=45 pruub=13.495801926s) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active pruub 93.480506897s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:35 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 45 pg[4.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=45 pruub=13.495801926s) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown pruub 93.480506897s@ mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:35 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Feb 25 06:51:35 np0005629333 ceph-mgr[76641]: [progress INFO root] update: starting ev 3340e18f-01f3-4f2e-821f-3ae172278a3f (PG autoscaler increasing pool 5 PGs from 1 to 32)
Feb 25 06:51:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Feb 25 06:51:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Feb 25 06:51:36 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Feb 25 06:51:36 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 45 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=45 pruub=11.802621841s) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active pruub 88.566757202s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 45 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=45 pruub=11.802621841s) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown pruub 88.566757202s@ mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Feb 25 06:51:36 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Feb 25 06:51:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.1f( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.1e( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.1d( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.1c( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.1b( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.19( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.18( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.7( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.6( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.5( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.1( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.8( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.b( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.a( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.4( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.1a( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.9( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.2( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.3( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.d( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.e( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.c( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.f( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.10( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.11( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.12( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.13( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.15( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.14( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.16( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.17( empty local-lis/les=19/20 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.1f( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.1f( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.1d( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.1c( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.8( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-mgr[76641]: [progress INFO root] update: starting ev 50ce2a9b-e043-4950-9d2c-3ef686e48d5c (PG autoscaler increasing pool 6 PGs from 1 to 16)
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.7( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.1e( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.6( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.1b( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.b( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.a( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.1d( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.1e( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.19( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.18( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.5( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.9( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.1a( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.4( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.19( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.1c( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.6( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.1b( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.5( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.1( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.7( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.8( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.b( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.0( empty local-lis/les=45/46 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.a( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.9( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.2( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.4( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.3( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.d( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.e( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.10( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.11( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.f( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.c( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.12( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.13( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.15( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.16( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.17( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.1a( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 46 pg[3.14( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=19/19 les/c/f=20/20/0 sis=45) [1] r=0 lpr=45 pi=[19,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.3( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.2( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.c( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.d( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.f( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.e( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.1( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.10( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.14( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.15( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.16( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.17( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.13( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.12( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.18( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.11( empty local-lis/les=20/21 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.1f( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.8( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.1d( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.1c( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.7( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.6( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.b( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.1e( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.5( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.9( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.4( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.1a( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.19( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.3( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.0( empty local-lis/les=45/46 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.d( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.2( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.1b( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.c( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.a( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.e( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.f( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.1( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.14( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.15( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.16( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.10( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.17( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.13( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.18( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.12( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 46 pg[4.11( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=20/20 les/c/f=21/21/0 sis=45) [0] r=0 lpr=45 pi=[20,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Feb 25 06:51:36 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Feb 25 06:51:36 np0005629333 ceph-mgr[76641]: [progress WARNING root] Starting Global Recovery Event,94 pgs not in active + clean state
Feb 25 06:51:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v100: 104 pgs: 1 peering, 62 unknown, 41 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:51:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Feb 25 06:51:36 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Feb 25 06:51:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Feb 25 06:51:36 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 25 06:51:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Feb 25 06:51:37 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Feb 25 06:51:37 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Feb 25 06:51:37 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Feb 25 06:51:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Feb 25 06:51:37 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Feb 25 06:51:37 np0005629333 ceph-mgr[76641]: [progress INFO root] update: starting ev d6384aa5-1271-496c-8578-0b839ff1ea04 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Feb 25 06:51:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Feb 25 06:51:37 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Feb 25 06:51:37 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 47 pg[6.0( v 35'39 (0'0,35'39] local-lis/les=22/23 n=22 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=47 pruub=13.483985901s) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 lcod 34'38 mlcod 34'38 active pruub 95.490653992s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:37 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 47 pg[6.0( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=22/23 n=1 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=47 pruub=13.483985901s) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 lcod 34'38 mlcod 0'0 unknown pruub 95.490653992s@ mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:37 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Feb 25 06:51:37 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Feb 25 06:51:37 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Feb 25 06:51:37 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 25 06:51:37 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Feb 25 06:51:37 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Feb 25 06:51:37 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Feb 25 06:51:37 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Feb 25 06:51:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
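
Throughout this stretch the mgr's pg_autoscaler (mgr.compute-0.jzfame) is raising pg_num on each pool from 1 to 32. Every change appears twice in the audit channel: once as ': dispatch' when the mon accepts the command, and once as ': finished' after the resulting osdmap epoch commits. A minimal sketch, assuming a ceph CLI with admin credentials on PATH, of issuing the same mon command by hand (pool name 'images' taken from the lines above):

import subprocess

def set_pg_num(pool: str, pg_num: int) -> None:
    # Sends the same {"prefix": "osd pool set", "var": "pg_num"} mon
    # command the autoscaler dispatched above; the mon logs a 'dispatch'
    # audit entry, commits a new osdmap epoch, then logs 'finished'.
    subprocess.run(
        ["ceph", "osd", "pool", "set", pool, "pg_num", str(pg_num)],
        check=True,
    )

set_pg_num("images", 32)
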
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Feb 25 06:51:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Feb 25 06:51:38 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Feb 25 06:51:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Feb 25 06:51:38 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Feb 25 06:51:38 np0005629333 ceph-mgr[76641]: [progress INFO root] update: starting ev c6b63597-0cb1-4f24-86a1-5d821c89e2d0 (PG autoscaler increasing pool 8 PGs from 1 to 32)
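
The '[progress INFO root] update: starting ev ...' lines are the mgr progress module opening one tracked event per pool as the autoscaler works (pool 7 above, then pools 8, 9, 10 below). A sketch, assuming the progress module is enabled and that a 'ceph progress json' command is available in this release, of reading those events back:

import json
import subprocess

out = subprocess.run(
    ["ceph", "progress", "json"],
    check=True, capture_output=True, text=True,
).stdout
# Event ids here would match the uuids in the log, e.g.
# c6b63597-0cb1-4f24-86a1-5d821c89e2d0 for pool 8.
for ev in json.loads(out).get("events", []):
    print(ev.get("id"), ev.get("message"), ev.get("progress"))
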
Feb 25 06:51:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Feb 25 06:51:38 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.a( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=22/23 n=1 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.5( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=22/23 n=2 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.9( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=22/23 n=1 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.4( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=22/23 n=2 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.8( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=22/23 n=1 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.7( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=22/23 n=1 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.6( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=22/23 n=2 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.b( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=22/23 n=1 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.1( v 35'39 (0'0,35'39] local-lis/les=22/23 n=2 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.3( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=22/23 n=2 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.2( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=22/23 n=2 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.e( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=22/23 n=1 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.c( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=22/23 n=1 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.f( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=22/23 n=1 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.d( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=22/23 n=1 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.a( v 35'39 (0'0,35'39] local-lis/les=47/48 n=1 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.8( v 35'39 (0'0,35'39] local-lis/les=47/48 n=1 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.5( v 35'39 (0'0,35'39] local-lis/les=47/48 n=2 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.9( v 35'39 (0'0,35'39] local-lis/les=47/48 n=1 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.4( v 35'39 (0'0,35'39] local-lis/les=47/48 n=2 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.7( v 35'39 (0'0,35'39] local-lis/les=47/48 n=1 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.6( v 35'39 (0'0,35'39] local-lis/les=47/48 n=2 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.0( v 35'39 (0'0,35'39] local-lis/les=47/48 n=1 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 lcod 34'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.3( v 35'39 (0'0,35'39] local-lis/les=47/48 n=2 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.b( v 35'39 (0'0,35'39] local-lis/les=47/48 n=1 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.2( v 35'39 (0'0,35'39] local-lis/les=47/48 n=2 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.1( v 35'39 (0'0,35'39] local-lis/les=47/48 n=2 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.c( v 35'39 (0'0,35'39] local-lis/les=47/48 n=1 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.e( v 35'39 (0'0,35'39] local-lis/les=47/48 n=1 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.d( v 35'39 (0'0,35'39] local-lis/les=47/48 n=1 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:38 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 48 pg[6.f( v 35'39 (0'0,35'39] local-lis/les=47/48 n=1 ec=47/22 lis/c=22/22 les/c/f=23/23/0 sis=47) [0] r=0 lpr=47 pi=[22,47)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
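
The burst above is pool 6 on osd.0 re-peering after the split: the parent PG 6.0 notes a new interval (start_peering_interval) at epoch 47, each PG restarts its state machine (state<Start>: transitioning to Primary), and activation completes once all replicas acknowledge (trivially fast here, since the acting set is a single OSD, [0]). A small parser fitted to exactly this line format; the regex matches the messages shown and may need adjusting on other Ceph versions:

import re

PEERING_RE = re.compile(
    r"osd\.(?P<osd>\d+) pg_epoch: (?P<epoch>\d+) pg\[(?P<pgid>[0-9a-f.]+)\("
    r".*?(?P<event>start_peering_interval|transitioning to Primary"
    r"|Activating complete)"
)

def parse(line: str) -> dict | None:
    m = PEERING_RE.search(line)
    return m.groupdict() if m else None

sample = ("osd.0 pg_epoch: 48 pg[6.a( v 35'39 lc 0'0 ... mbc={}] "
          "state<Start>: transitioning to Primary")
print(parse(sample))
# -> {'osd': '0', 'epoch': '48', 'pgid': '6.a',
#     'event': 'transitioning to Primary'}
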
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 47 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=47 pruub=11.436985970s) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active pruub 85.069404602s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=47 pruub=11.436985970s) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown pruub 85.069404602s@ mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.7( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.8( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.9( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.b( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.c( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.12( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.a( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.13( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.3( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.4( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.14( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.15( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.16( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.5( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.6( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.d( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.e( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.1( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.2( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.f( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.10( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.11( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.17( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.18( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.19( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.1a( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.1b( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.1c( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.1e( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.1f( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 48 pg[5.1d( empty local-lis/les=21/22 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:38 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Feb 25 06:51:38 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Feb 25 06:51:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v103: 150 pgs: 1 peering, 77 unknown, 72 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
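
The pgmap summary above catches the cluster mid-split: 1 PG still peering, 77 freshly created PGs still 'unknown' (not yet reported by their primaries), and 72 already active+clean, totalling 150. A quick check, hedged to this exact message format, that the per-state counts add up:

import re

line = ("pgmap v103: 150 pgs: 1 peering, 77 unknown, 72 active+clean; "
        "461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail")
total = int(re.search(r"(\d+) pgs:", line).group(1))
summary = line.split(":", 2)[2].split(";")[0]
states = {s: int(n) for n, s in re.findall(r"(\d+) ([a-z+]+)", summary)}
print(states)   # {'peering': 1, 'unknown': 77, 'active+clean': 72}
assert sum(states.values()) == total  # 1 + 77 + 72 == 150
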
Feb 25 06:51:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Feb 25 06:51:38 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 25 06:51:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Feb 25 06:51:38 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 25 06:51:39 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Feb 25 06:51:39 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Feb 25 06:51:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Feb 25 06:51:39 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Feb 25 06:51:39 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Feb 25 06:51:39 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 25 06:51:39 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 25 06:51:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Feb 25 06:51:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Feb 25 06:51:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Feb 25 06:51:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Feb 25 06:51:39 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
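
Each committed pool change produces a fresh osdmap: e47, e48, e49 above and e50 below, all still reporting '3 total, 3 up, 3 in' since only pg_num is changing, not OSD membership. To read the current epoch the way a client would (assuming the ceph CLI and an admin keyring):

import json
import subprocess

out = subprocess.run(
    ["ceph", "osd", "dump", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout
osdmap = json.loads(out)
print(osdmap["epoch"])      # on a live cluster, the current epoch
                            # (49 at this point in the log)
print(len(osdmap["osds"]))  # 3, matching '3 total, 3 up, 3 in'
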
Feb 25 06:51:39 np0005629333 ceph-mgr[76641]: [progress INFO root] update: starting ev a5a0298b-8c85-4085-b74d-ad30aa0465e6 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Feb 25 06:51:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Feb 25 06:51:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Feb 25 06:51:39 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 49 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=49 pruub=13.623297691s) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active pruub 93.771377563s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:39 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 49 pg[8.0( v 34'6 (0'0,34'6] local-lis/les=33/34 n=6 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=49 pruub=9.486465454s) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 34'5 mlcod 34'5 active pruub 89.634552002s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.1f( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.1e( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 49 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=49 pruub=13.623297691s) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown pruub 93.771377563s@ mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:39 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 49 pg[8.0( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=49 pruub=9.486465454s) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 34'5 mlcod 0'0 unknown pruub 89.634552002s@ mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.12( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.1d( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.11( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.13( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.15( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.14( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.17( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.8( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.10( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.a( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.9( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.c( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.b( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.16( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.7( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.f( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.0( empty local-lis/les=47/49 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.6( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.4( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.5( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.2( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.3( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.1( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.e( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.d( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.1c( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.1a( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.1b( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.19( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 49 pg[5.18( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=21/21 les/c/f=22/22/0 sis=47) [2] r=0 lpr=47 pi=[21,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:39 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Feb 25 06:51:39 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 3.19 scrub ok
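
Interleaved with the splits, the OSDs keep running their scheduled scrubs; each one logs a 'scrub starts' / 'scrub ok' pair on the cluster channel (PGs 2.9, 3.1e, 4.1f, and 3.19 so far). A tiny sketch that pairs those messages up and flags any scrub that started but never reported ok:

from collections import defaultdict

lines = [
    "2.9 scrub starts", "2.9 scrub ok",
    "3.1e scrub starts", "3.1e scrub ok",
    "4.1f scrub starts", "4.1f scrub ok",
    "3.19 scrub starts", "3.19 scrub ok",
]
events = defaultdict(list)
for entry in lines:
    pgid, _, outcome = entry.split(" ", 2)
    events[pgid].append(outcome)
unfinished = [pg for pg, outcomes in events.items() if outcomes[-1] != "ok"]
print("unfinished scrubs:", unfinished or "none")  # none
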
Feb 25 06:51:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Feb 25 06:51:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Feb 25 06:51:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Feb 25 06:51:40 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.1d( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.1c( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.12( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.1e( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-mgr[76641]: [progress INFO root] update: starting ev d12352a3-7549-45d1-877e-f9adc93b2be1 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.1f( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.10( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.18( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.17( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.19( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.1a( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.16( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.14( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.1b( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.b( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.4( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=1 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.5( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=1 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.a( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.6( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=1 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.9( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.7( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.8( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.d( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.9( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Feb 25 06:51:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Feb 25 06:51:40 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Feb 25 06:51:40 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Feb 25 06:51:40 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Feb 25 06:51:40 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.2( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=1 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.b( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.f( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.4( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.1( v 34'6 (0'0,34'6] local-lis/les=33/34 n=1 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.f( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.e( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.3( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=1 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.c( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.a( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.8( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.e( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.7( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.1( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.d( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.2( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.c( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.3( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.13( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.12( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.1d( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.1e( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.11( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.10( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.1f( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.17( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.18( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.19( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.16( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.15( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.1a( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.14( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=33/34 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.1b( empty local-lis/les=24/25 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
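
Pools 7 and 8 (each previously a single PG) are splitting here, which is why every new PG carries ec=49/<pool-create-epoch> while the parents 7.0 and 8.0 keep their original ec. For power-of-two pg_num values, a parent x.ps fans out to children x.(ps + k*old_pg_num) for every k with ps + k*old_pg_num < new_pg_num; a sketch of that rule (hedged, the general non-power-of-two case is more involved) reproducing the PG ids seen above:

def split_children(ps: int, old_n: int, new_n: int) -> list[int]:
    # Children of parent seed `ps` when pg_num grows old_n -> new_n
    # (power-of-two case).
    return list(range(ps + old_n, new_n, old_n))

children = split_children(0, 1, 32)
print([format(c, 'x') for c in children])
# ['1', '2', ..., '1f'] -- i.e. 8.0 fans out to 8.1 .. 8.1f, matching
# the 31 new PGs (plus the parent 8.0) peering in this burst.
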
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.1d( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.12( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.1e( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.1f( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.10( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.1c( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.18( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.17( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.19( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.16( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.1a( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.4( v 34'6 (0'0,34'6] local-lis/les=49/50 n=1 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.14( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.5( v 34'6 (0'0,34'6] local-lis/les=49/50 n=1 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.b( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.a( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.6( v 34'6 (0'0,34'6] local-lis/les=49/50 n=1 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.1b( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.9( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.7( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.8( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.9( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.d( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.b( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.2( v 34'6 (0'0,34'6] local-lis/les=49/50 n=1 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.0( empty local-lis/les=49/50 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.f( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.0( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 34'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.1( v 34'6 (0'0,34'6] local-lis/les=49/50 n=1 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.4( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.e( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.f( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.3( v 34'6 (0'0,34'6] local-lis/les=49/50 n=1 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.c( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.a( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.8( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.e( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.1( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.7( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.d( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.2( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.c( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.13( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.3( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.12( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.1e( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.1d( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.11( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.10( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.1f( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.17( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.18( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.19( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.15( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.1a( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.14( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[8.16( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 50 pg[7.1b( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=24/24 les/c/f=25/25/0 sis=49) [1] r=0 lpr=49 pi=[24,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v106: 212 pgs: 1 peering, 139 unknown, 72 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:51:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Feb 25 06:51:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 25 06:51:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Feb 25 06:51:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 25 06:51:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Feb 25 06:51:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Feb 25 06:51:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Feb 25 06:51:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Feb 25 06:51:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Feb 25 06:51:41 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Feb 25 06:51:41 np0005629333 ceph-mgr[76641]: [progress INFO root] update: starting ev 6d690084-2169-4d51-b1c2-65e14226daed (PG autoscaler increasing pool 11 PGs from 1 to 32)
Feb 25 06:51:41 np0005629333 ceph-mgr[76641]: [progress INFO root] complete: finished ev a95fee6a-e605-4db5-9d7e-5baf7c58bb54 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Feb 25 06:51:41 np0005629333 ceph-mgr[76641]: [progress INFO root] Completed event a95fee6a-e605-4db5-9d7e-5baf7c58bb54 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 10 seconds
Feb 25 06:51:41 np0005629333 ceph-mgr[76641]: [progress INFO root] complete: finished ev 2ca62905-21c5-4b28-bec2-04460775461e (PG autoscaler increasing pool 3 PGs from 1 to 32)
Feb 25 06:51:41 np0005629333 ceph-mgr[76641]: [progress INFO root] Completed event 2ca62905-21c5-4b28-bec2-04460775461e (PG autoscaler increasing pool 3 PGs from 1 to 32) in 8 seconds
Feb 25 06:51:41 np0005629333 ceph-mgr[76641]: [progress INFO root] complete: finished ev 04f5e04b-2d3a-4f7d-9fe5-e65f51eb8e64 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Feb 25 06:51:41 np0005629333 ceph-mgr[76641]: [progress INFO root] Completed event 04f5e04b-2d3a-4f7d-9fe5-e65f51eb8e64 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 7 seconds
Feb 25 06:51:41 np0005629333 ceph-mgr[76641]: [progress INFO root] complete: finished ev 3340e18f-01f3-4f2e-821f-3ae172278a3f (PG autoscaler increasing pool 5 PGs from 1 to 32)
Feb 25 06:51:41 np0005629333 ceph-mgr[76641]: [progress INFO root] Completed event 3340e18f-01f3-4f2e-821f-3ae172278a3f (PG autoscaler increasing pool 5 PGs from 1 to 32) in 6 seconds
Feb 25 06:51:41 np0005629333 ceph-mgr[76641]: [progress INFO root] complete: finished ev 50ce2a9b-e043-4950-9d2c-3ef686e48d5c (PG autoscaler increasing pool 6 PGs from 1 to 16)
Feb 25 06:51:41 np0005629333 ceph-mgr[76641]: [progress INFO root] Completed event 50ce2a9b-e043-4950-9d2c-3ef686e48d5c (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Feb 25 06:51:41 np0005629333 ceph-mgr[76641]: [progress INFO root] complete: finished ev d6384aa5-1271-496c-8578-0b839ff1ea04 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Feb 25 06:51:41 np0005629333 ceph-mgr[76641]: [progress INFO root] Completed event d6384aa5-1271-496c-8578-0b839ff1ea04 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Feb 25 06:51:41 np0005629333 ceph-mgr[76641]: [progress INFO root] complete: finished ev c6b63597-0cb1-4f24-86a1-5d821c89e2d0 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Feb 25 06:51:41 np0005629333 ceph-mgr[76641]: [progress INFO root] Completed event c6b63597-0cb1-4f24-86a1-5d821c89e2d0 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Feb 25 06:51:41 np0005629333 ceph-mgr[76641]: [progress INFO root] complete: finished ev a5a0298b-8c85-4085-b74d-ad30aa0465e6 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Feb 25 06:51:41 np0005629333 ceph-mgr[76641]: [progress INFO root] Completed event a5a0298b-8c85-4085-b74d-ad30aa0465e6 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Feb 25 06:51:41 np0005629333 ceph-mgr[76641]: [progress INFO root] complete: finished ev d12352a3-7549-45d1-877e-f9adc93b2be1 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Feb 25 06:51:41 np0005629333 ceph-mgr[76641]: [progress INFO root] Completed event d12352a3-7549-45d1-877e-f9adc93b2be1 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Feb 25 06:51:41 np0005629333 ceph-mgr[76641]: [progress INFO root] complete: finished ev 6d690084-2169-4d51-b1c2-65e14226daed (PG autoscaler increasing pool 11 PGs from 1 to 32)
Feb 25 06:51:41 np0005629333 ceph-mgr[76641]: [progress INFO root] Completed event 6d690084-2169-4d51-b1c2-65e14226daed (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Feb 25 06:51:41 np0005629333 ceph-mgr[76641]: [progress INFO root] Writing back 15 completed events
Feb 25 06:51:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 25 06:51:41 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Feb 25 06:51:41 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Feb 25 06:51:41 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 25 06:51:41 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 25 06:51:41 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Feb 25 06:51:41 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Feb 25 06:51:41 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Feb 25 06:51:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:42 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:42 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 51 pg[10.0( v 41'18 (0'0,41'18] local-lis/les=37/38 n=9 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=51 pruub=10.636496544s) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 41'17 mlcod 41'17 active pruub 88.375015259s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:42 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 51 pg[10.0( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=51 pruub=10.636496544s) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 41'17 mlcod 0'0 unknown pruub 88.375015259s@ mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:51:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v108: 274 pgs: 62 unknown, 212 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:51:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Feb 25 06:51:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 51 pg[9.0( v 41'483 (0'0,41'483] local-lis/les=35/36 n=210 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=51 pruub=8.121180534s) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 41'482 mlcod 41'482 active pruub 91.668220520s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 51 pg[9.0( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=51 pruub=8.121180534s) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 41'482 mlcod 0'0 unknown pruub 91.668220520s@ mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366ea6280 space 0x55a36674c240 0x0~9a clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366ddd280 space 0x55a365c92840 0x0~9a clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d98900 space 0x55a36670b440 0x0~98 clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d9a900 space 0x55a366ef7a40 0x0~9a clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366ddce80 space 0x55a366f02240 0x0~9a clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366dddc80 space 0x55a366f03140 0x0~9a clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d99900 space 0x55a3672b1a40 0x0~6e clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366da5980 space 0x55a36639b440 0x0~6e clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d98200 space 0x55a3660f6540 0x0~6e clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d9fc00 space 0x55a366166b40 0x0~6e clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d9e980 space 0x55a3672edd40 0x0~6e clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366e5e700 space 0x55a36643a240 0x0~6e clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366ddd480 space 0x55a3666e7140 0x0~9a clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d9f600 space 0x55a3663a9440 0x0~6e clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d9b400 space 0x55a366167d40 0x0~6e clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d98600 space 0x55a3675a9740 0x0~98 clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d9af00 space 0x55a366693440 0x0~9a clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d9a580 space 0x55a3667a9740 0x0~6e clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d9a480 space 0x55a3666e6240 0x0~6e clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d99c80 space 0x55a36643ab40 0x0~6e clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d9b200 space 0x55a3666e8840 0x0~6e clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d9a280 space 0x55a36614ab40 0x0~6e clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d99700 space 0x55a3672b1140 0x0~6e clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d9f400 space 0x55a3672ec540 0x0~6e clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d98100 space 0x55a366f0e840 0x0~98 clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d98880 space 0x55a3663a8240 0x0~6e clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d9a700 space 0x55a366c9c840 0x0~9a clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366dd1880 space 0x55a3666c2840 0x0~98 clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d9f200 space 0x55a3672ece40 0x0~6e clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366ddc600 space 0x55a366ef7140 0x0~9a clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366e64f80 space 0x55a366ef6840 0x0~9a clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d9a500 space 0x55a3667a8540 0x0~6e clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d9fa80 space 0x55a366c9da40 0x0~9a clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d98280 space 0x55a366f0f140 0x0~9a clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d98680 space 0x55a3663a8b40 0x0~6e clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d9e600 space 0x55a3666c3740 0x0~98 clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d98400 space 0x55a3660f6e40 0x0~6e clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d9e080 space 0x55a3666e6b40 0x0~98 clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d9f000 space 0x55a3666c3440 0x0~9a clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366dddf00 space 0x55a366725d40 0x0~9a clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d9fe00 space 0x55a3660f7740 0x0~6e clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d9ff00 space 0x55a366167440 0x0~6e clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366df9f80 space 0x55a366724240 0x0~9a clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d99500 space 0x55a3672b0840 0x0~6e clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366ddcd80 space 0x55a3666c2e40 0x0~9a clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366dd1800 space 0x55a3666e9d40 0x0~9a clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d9ad80 space 0x55a366693740 0x0~98 clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366e5f080 space 0x55a36614b740 0x0~6e clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d9f880 space 0x55a366c9d440 0x0~9a clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d9e200 space 0x55a3666e9a40 0x0~6e clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366ddd900 space 0x55a366f0e540 0x0~9a clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d9e780 space 0x55a3666e7d40 0x0~6e clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366dddd80 space 0x55a366725740 0x0~9a clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d99e00 space 0x55a366ef7740 0x0~9a clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d9ff80 space 0x55a3663a9a40 0x0~6e clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d9a780 space 0x55a3667a8e40 0x0~6e clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d99a80 space 0x55a36643b440 0x0~6e clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d98800 space 0x55a366f0cb40 0x0~98 clean)
Feb 25 06:51:42 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55a3672f6000) split_cache   moving buffer(0x55a366d9e380 space 0x55a3666e9140 0x0~6e clean)
Feb 25 06:51:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Feb 25 06:51:43 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Feb 25 06:51:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Feb 25 06:51:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Feb 25 06:51:43 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.15( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.17( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.14( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.13( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.12( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.11( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.10( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.1f( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.1d( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.1c( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.1b( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.1e( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.1a( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.19( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.7( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.6( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.5( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.4( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.3( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.18( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.8( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.f( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.9( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.16( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.11( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.12( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.d( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.a( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.c( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.d( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.c( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.e( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=37/38 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.f( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.2( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.13( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.9( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.b( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.b( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.14( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[11.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=52 pruub=11.572382927s) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active pruub 95.842056274s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.16( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.17( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.2( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.15( v 41'18 lc 0'0 (0'0,41'18] local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.1( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.a( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.e( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.12( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.8( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.3( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.6( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.4( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.7( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.1a( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.5( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.1b( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.18( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.19( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.1e( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.1f( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.1c( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.1d( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.10( v 41'483 lc 0'0 (0'0,41'483] local-lis/les=35/36 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.11( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.1d( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.1f( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.1c( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.1b( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.10( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.1e( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.6( v 41'18 (0'0,41'18] local-lis/les=51/52 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.19( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.7( v 41'18 (0'0,41'18] local-lis/les=51/52 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.4( v 41'18 (0'0,41'18] local-lis/les=51/52 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.5( v 41'18 (0'0,41'18] local-lis/les=51/52 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.18( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.3( v 41'18 (0'0,41'18] local-lis/les=51/52 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.f( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.8( v 41'18 (0'0,41'18] local-lis/les=51/52 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.9( v 41'18 (0'0,41'18] local-lis/les=51/52 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.0( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 41'17 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.c( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.a( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=51/52 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.d( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.e( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.2( v 41'18 (0'0,41'18] local-lis/les=51/52 n=1 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.13( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.b( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.14( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.17( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[11.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=52 pruub=11.572382927s) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown pruub 95.842056274s@ mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.15( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.16( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 52 pg[10.1a( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [2] r=0 lpr=51 pi=[37,51)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.14( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.12( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.0( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 41'482 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.2( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.e( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.a( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.4( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.1a( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.5( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.18( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.1e( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.1c( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:43 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 52 pg[9.10( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=35/35 les/c/f=36/36/0 sis=51) [1] r=0 lpr=51 pi=[35,51)/1 crt=41'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Feb 25 06:51:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Feb 25 06:51:44 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Feb 25 06:51:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Feb 25 06:51:44 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.16( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.17( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.15( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.14( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.13( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.12( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.11( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.10( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.f( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.e( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.d( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.b( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.9( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.2( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.3( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.c( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.a( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.8( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.1( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.4( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.5( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.6( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.18( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.7( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.19( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.1a( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.1c( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.1b( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.1d( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.1e( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.1f( empty local-lis/les=39/40 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.16( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.17( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.13( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.14( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.15( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.12( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.11( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.10( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.e( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.f( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.d( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.b( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.9( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.0( empty local-lis/les=52/53 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.2( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.c( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.3( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.a( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.8( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.1( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.5( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.4( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.18( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.6( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.1a( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.19( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.1c( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.7( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.1e( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.1f( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.1d( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 53 pg[11.1b( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=39/39 les/c/f=40/40/0 sis=52) [1] r=0 lpr=52 pi=[39,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v111: 305 pgs: 93 unknown, 212 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:51:46 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Feb 25 06:51:46 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Feb 25 06:51:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v112: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:51:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 25 06:51:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 25 06:51:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 25 06:51:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 25 06:51:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 25 06:51:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 25 06:51:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Feb 25 06:51:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Feb 25 06:51:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 25 06:51:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 25 06:51:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Feb 25 06:51:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Feb 25 06:51:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 25 06:51:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 25 06:51:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 25 06:51:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 25 06:51:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 25 06:51:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 25 06:51:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 25 06:51:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Feb 25 06:51:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Feb 25 06:51:47 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 25 06:51:47 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 25 06:51:47 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 25 06:51:47 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Feb 25 06:51:47 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 25 06:51:47 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Feb 25 06:51:47 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 25 06:51:47 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 25 06:51:47 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 25 06:51:47 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 25 06:51:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 25 06:51:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 25 06:51:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 25 06:51:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Feb 25 06:51:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 25 06:51:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Feb 25 06:51:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 25 06:51:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 25 06:51:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 25 06:51:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 25 06:51:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Feb 25 06:51:47 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.8( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.778066635s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.007873535s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.1c( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.778147697s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.007987976s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[6.5( v 35'39 (0'0,35'39] local-lis/les=47/48 n=2 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=14.787426949s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=35'39 lcod 0'0 active pruub 107.017280579s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.8( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.777998924s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.007873535s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.1c( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.778092384s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.007987976s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[6.9( v 35'39 (0'0,35'39] local-lis/les=47/48 n=1 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=14.787508965s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=35'39 lcod 0'0 active pruub 107.017402649s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.7( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.778080940s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.008003235s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[6.5( v 35'39 (0'0,35'39] local-lis/les=47/48 n=2 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=14.787370682s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 107.017280579s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[6.9( v 35'39 (0'0,35'39] local-lis/les=47/48 n=1 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=14.787437439s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 107.017402649s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.7( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.777992249s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.008003235s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.a( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.778432846s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.008712769s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.a( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.778408051s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.008712769s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[6.7( v 35'39 (0'0,35'39] local-lis/les=47/48 n=1 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=14.787214279s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=35'39 lcod 0'0 active pruub 107.017532349s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.5( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.777729034s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.008079529s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[6.7( v 35'39 (0'0,35'39] local-lis/les=47/48 n=1 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=14.787179947s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 107.017532349s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.5( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.777692795s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.008079529s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.1a( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.777602196s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.008178711s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[6.b( v 35'39 (0'0,35'39] local-lis/les=47/48 n=1 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=14.787024498s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=35'39 lcod 0'0 active pruub 107.017639160s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.1a( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.777571678s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.008178711s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[6.b( v 35'39 (0'0,35'39] local-lis/les=47/48 n=1 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=14.786995888s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 107.017639160s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.4( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.777410507s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.008178711s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.9( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.777400017s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.008171082s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.9( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.777377129s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.008171082s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.4( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.777383804s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.008178711s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.1b( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.777865410s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.008682251s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[6.1( v 35'39 (0'0,35'39] local-lis/les=47/48 n=2 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=14.787001610s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=35'39 lcod 0'0 active pruub 107.017837524s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.1b( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.777828217s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.008682251s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[6.1( v 35'39 (0'0,35'39] local-lis/les=47/48 n=2 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=14.786973000s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 107.017837524s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.1( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.777818680s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.008758545s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.1( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.777758598s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.008758545s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.2( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.777621269s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.008705139s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[6.3( v 35'39 (0'0,35'39] local-lis/les=47/48 n=2 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=14.786544800s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=35'39 lcod 0'0 active pruub 107.017631531s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.2( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.777589798s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.008705139s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[6.3( v 35'39 (0'0,35'39] local-lis/les=47/48 n=2 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=14.786509514s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 107.017631531s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[6.f( v 35'39 (0'0,35'39] local-lis/les=47/48 n=1 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=14.786904335s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=35'39 lcod 0'0 active pruub 107.018142700s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[6.f( v 35'39 (0'0,35'39] local-lis/les=47/48 n=1 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=14.786884308s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 107.018142700s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.d( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.777408600s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.008720398s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.d( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.777380943s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.008720398s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.e( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.777336121s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.008728027s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[6.d( v 35'39 (0'0,35'39] local-lis/les=47/48 n=1 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=14.786729813s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=35'39 lcod 0'0 active pruub 107.018127441s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.e( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.777309418s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.008728027s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[6.d( v 35'39 (0'0,35'39] local-lis/les=47/48 n=1 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54 pruub=14.786702156s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 107.018127441s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.f( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.777233124s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.008758545s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.f( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.777215004s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.008758545s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.10( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.777422905s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.008987427s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.10( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.777398109s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.008987427s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.12( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.777416229s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.009063721s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.12( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.777391434s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.009063721s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.13( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.777336121s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.009040833s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.11( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.778745651s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.010444641s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.13( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.777320862s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.009040833s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.11( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.778709412s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.010444641s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.14( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.776955605s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.008766174s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.14( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.776919365s) [1] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.008766174s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.18( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.776993752s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 105.009048462s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[4.18( empty local-lis/les=45/46 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.776881218s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 105.009048462s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[4.10( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[4.12( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[4.14( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[4.8( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[6.b( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[4.9( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[6.9( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[6.7( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[4.5( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.12( v 52'19 (0'0,52'19] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.920888901s) [1] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 41'18 active pruub 94.739578247s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.12( v 52'19 (0'0,52'19] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.920834541s) [1] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 94.739578247s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.11( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.922760963s) [1] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 active pruub 94.741668701s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.11( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.922746658s) [1] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.741668701s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.1d( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.805860519s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 98.624794006s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.1d( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.805819511s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 98.624794006s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.1e( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.801112175s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 98.620178223s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.1e( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.801095009s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 98.620178223s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.10( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.922386169s) [1] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 active pruub 94.741821289s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.10( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.922367096s) [1] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.741821289s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.19( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.737987518s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 93.557487488s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.18( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.737948418s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 93.557472229s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.19( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.737940788s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 93.557487488s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.18( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.737917900s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 93.557472229s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.17( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.737822533s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 93.557479858s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.17( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.737806320s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 93.557479858s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.1e( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.922105789s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 active pruub 94.741836548s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.1e( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.922069550s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.741836548s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.16( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.737666130s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 93.557464600s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.11( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.805116653s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 98.624954224s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.11( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.805098534s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 98.624954224s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.16( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.737638474s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 93.557464600s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.15( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.737501144s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 93.557487488s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.15( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.737488747s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 93.557487488s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.12( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.804537773s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 98.624633789s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.12( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.804525375s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 98.624633789s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.13( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.804532051s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 98.624961853s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.13( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.736954689s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 93.557380676s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.13( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.804512978s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 98.624961853s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.13( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.736920357s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 93.557380676s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.14( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.804489136s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 98.625015259s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.14( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.804462433s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 98.625015259s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.1a( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.922067642s) [1] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 active pruub 94.742691040s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.1a( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.922012329s) [1] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.742691040s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.19( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.921496391s) [1] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 active pruub 94.742286682s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.19( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.921471596s) [1] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.742286682s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.16( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.805479050s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 98.626335144s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.15( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.804174423s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 98.625030518s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.11( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.736500740s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 93.557373047s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.16( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.805462837s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 98.626335144s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.15( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.804140091s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 98.625030518s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.11( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.736469269s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 93.557373047s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.7( v 41'18 (0'0,41'18] local-lis/les=51/52 n=1 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.921263695s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 active pruub 94.742286682s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.7( v 41'18 (0'0,41'18] local-lis/les=51/52 n=1 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.921249390s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.742286682s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.f( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.736288071s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 93.557365417s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.6( v 41'18 (0'0,41'18] local-lis/les=51/52 n=1 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.921036720s) [1] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 active pruub 94.742202759s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.f( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.736253738s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 93.557365417s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[5.1e( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.6( v 41'18 (0'0,41'18] local-lis/les=51/52 n=1 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.921022415s) [1] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.742202759s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[6.5( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.d( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.735970497s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 93.557312012s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[4.7( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[6.1( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[6.f( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[4.d( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[6.d( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[4.f( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[4.4( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[4.2( empty local-lis/les=0/0 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[6.3( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.17( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.930223465s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 101.284652710s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.17( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.930196762s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 101.284652710s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.1f( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.757623672s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 101.112190247s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.1f( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.757604599s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 101.112190247s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.d( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.735954285s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 93.557312012s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[10.12( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [1] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.9( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.804498672s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 98.625907898s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.4( v 41'18 (0'0,41'18] local-lis/les=51/52 n=1 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.920919418s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 active pruub 94.742332458s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.c( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.804604530s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 98.626075745s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.9( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.804449081s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 98.625907898s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.b( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.735826492s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 93.557304382s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.c( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.804590225s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 98.626075745s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.b( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.735802650s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 93.557304382s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.8( v 41'18 (0'0,41'18] local-lis/les=51/52 n=1 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.920845032s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 active pruub 94.742408752s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.8( v 41'18 (0'0,41'18] local-lis/les=51/52 n=1 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.920818329s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.742408752s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[2.19( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.f( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.920590401s) [1] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 active pruub 94.742408752s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.f( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.920571327s) [1] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.742408752s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.7( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.735386848s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 93.557250977s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.7( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.735354424s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 93.557250977s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.8( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.735288620s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 93.557228088s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.4( v 41'18 (0'0,41'18] local-lis/les=51/52 n=1 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.920447350s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.742332458s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.8( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.735275269s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 93.557228088s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.7( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.804614067s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 98.626579285s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.7( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.804579735s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 98.626579285s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.9( v 52'19 (0'0,52'19] local-lis/les=51/52 n=1 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.920329094s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 41'18 active pruub 94.742454529s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.f( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.804444313s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 98.626594543s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.9( v 52'19 (0'0,52'19] local-lis/les=51/52 n=1 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.920301437s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 94.742454529s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.f( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.804414749s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 98.626594543s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.2( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.734981537s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 93.557296753s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.2( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.734968185s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 93.557296753s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.5( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.804452896s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 98.626876831s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.b( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.920177460s) [1] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 active pruub 94.742614746s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.5( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.804412842s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 98.626876831s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.1b( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.798912048s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 active pruub 97.156219482s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.b( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.920131683s) [1] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.742614746s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.1b( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.798895836s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 unknown NOTIFY pruub 97.156219482s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.3( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.734711647s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 93.557243347s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.4( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.804172516s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 98.626792908s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.4( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.804158211s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 98.626792908s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.4( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.734417915s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 93.557167053s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.3( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.804161072s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 98.626945496s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.3( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.734471321s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 93.557243347s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.3( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.804145813s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 98.626945496s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.d( v 52'19 (0'0,52'19] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.919637680s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 41'18 active pruub 94.742538452s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.5( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.734234810s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 93.557151794s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.5( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.734221458s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 93.557151794s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.d( v 52'19 (0'0,52'19] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.919594765s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 94.742538452s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.2( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.803649902s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 98.626876831s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.14( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.798537254s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 active pruub 97.156143188s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.2( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.803620338s) [0] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 98.626876831s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.919731140s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 active pruub 100.277359009s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.919685364s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 100.277359009s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.14( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.798442841s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 97.156143188s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.6( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.733773232s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 93.557159424s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.6( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.733749390s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 93.557159424s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.1( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.803434372s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 98.626945496s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=51/52 n=1 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.918979645s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 active pruub 94.742546082s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.1( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.803402901s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 98.626945496s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=51/52 n=1 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.918964386s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.742546082s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.9( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.733863831s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 93.557502747s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.9( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.733831406s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 93.557502747s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.e( v 52'19 (0'0,52'19] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.918784142s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 41'18 active pruub 94.742546082s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.a( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.733381271s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 93.557159424s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.a( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.733366013s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 93.557159424s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.2( v 41'18 (0'0,41'18] local-lis/les=51/52 n=1 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.918754578s) [1] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 active pruub 94.742568970s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.e( v 52'19 (0'0,52'19] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.918724060s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 94.742546082s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.2( v 41'18 (0'0,41'18] local-lis/les=51/52 n=1 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.918723106s) [1] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.742568970s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.13( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.918670654s) [1] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 active pruub 94.742591858s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.13( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.918651581s) [1] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.742591858s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.1b( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.733083725s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 93.557044983s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.1b( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.733058929s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 93.557044983s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.14( v 52'19 (0'0,52'19] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.918519020s) [1] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 41'18 active pruub 94.742614746s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.1c( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.732907295s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 93.557029724s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.1c( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.732890129s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 93.557029724s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.14( v 52'19 (0'0,52'19] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.918470383s) [1] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 94.742614746s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.15( v 52'19 (0'0,52'19] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.918467522s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 41'18 active pruub 94.742645264s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.15( v 52'19 (0'0,52'19] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.918416977s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 41'18 unknown NOTIFY pruub 94.742645264s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.1a( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.802643776s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 98.627037048s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.1a( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.802612305s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 98.627037048s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.1d( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.732441902s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 93.557174683s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.1e( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.757930756s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 101.115928650s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.1e( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.757910728s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 101.115928650s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.1a( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.798002243s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 active pruub 97.156112671s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.1a( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.797971725s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 unknown NOTIFY pruub 97.156112671s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.15( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.797894478s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 active pruub 97.156082153s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.15( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.797874451s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 97.156082153s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.1d( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.757581711s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 101.116088867s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[2.18( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[10.1e( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[2.13( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[5.14( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.1d( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.732405663s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 93.557174683s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.4( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.732289314s) [1] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 93.557167053s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.16( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.917527199s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 active pruub 94.742660522s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.16( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.917495728s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.742660522s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.19( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.801335335s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 98.627105713s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.19( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.801314354s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 98.627105713s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.1d( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.757528305s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 101.116088867s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.17( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.916824341s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 active pruub 94.742637634s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[10.17( v 41'18 (0'0,41'18] local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.916794777s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 unknown NOTIFY pruub 94.742637634s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.919015884s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 active pruub 100.277702332s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.918997765s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 100.277702332s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.1f( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.725463867s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 active pruub 93.551383972s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.14( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.925808907s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 101.284667969s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[2.1f( empty local-lis/les=43/44 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54 pruub=10.725437164s) [0] r=-1 lpr=54 pi=[43,54)/1 crt=0'0 unknown NOTIFY pruub 93.551383972s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.18( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.797039032s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 active pruub 97.155967712s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.14( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.925774574s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 101.284667969s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.18( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.797020912s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 unknown NOTIFY pruub 97.155967712s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.18( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.801176071s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 active pruub 98.627220154s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[5.18( empty local-lis/les=47/49 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54 pruub=15.801153183s) [1] r=-1 lpr=54 pi=[47,54)/1 crt=0'0 unknown NOTIFY pruub 98.627220154s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.1b( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.757002831s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 101.116218567s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.1b( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.756984711s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 101.116218567s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.1f( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.795939445s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 active pruub 97.155273438s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.1f( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.795912743s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 unknown NOTIFY pruub 97.155273438s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.10( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.795735359s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 active pruub 97.155265808s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[2.11( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.10( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.795712471s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 97.155265808s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.918609619s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 active pruub 100.278373718s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.918579102s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 100.278373718s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[2.16( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.12( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.924785614s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 101.284721375s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.12( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.924729347s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 101.284721375s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.11( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.795091629s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 active pruub 97.155258179s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.11( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.795073509s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 97.155258179s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.11( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.924509048s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 101.284759521s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.11( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.924485207s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 101.284759521s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.12( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.794856071s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 active pruub 97.155220032s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.12( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.794835091s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 97.155220032s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.10( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.924241066s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 101.284782410s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.10( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.924227715s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 101.284782410s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.917164803s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 active pruub 100.277832031s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[10.7( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.15( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.926276207s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 101.284667969s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.917128563s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 100.277832031s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.15( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.923893929s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 101.284667969s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.18( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.755220413s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 101.116111755s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.18( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.755207062s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 101.116111755s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.1c( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.794214249s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 active pruub 97.155204773s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.1c( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.794203758s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 unknown NOTIFY pruub 97.155204773s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.f( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.923725128s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 101.284851074s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.f( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.923713684s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 101.284851074s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[5.15( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.3( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.793745041s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 active pruub 97.155151367s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.3( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.793729782s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 unknown NOTIFY pruub 97.155151367s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.7( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.754829407s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 101.116310120s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.7( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.754799843s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 101.116310120s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.c( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.793466568s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 active pruub 97.155090332s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.c( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.793449402s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 97.155090332s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.e( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.922924042s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 101.284828186s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.e( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.922904015s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 101.284828186s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.2( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.793018341s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 active pruub 97.155090332s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[8.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.2( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.792937279s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 unknown NOTIFY pruub 97.155090332s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.6( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.753965378s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 101.116203308s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.6( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.753944397s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 101.116203308s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.d( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.792797089s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 active pruub 97.155082703s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.d( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.792759895s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 97.155082703s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.d( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.922329903s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 101.284881592s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.d( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.922300339s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 101.284881592s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.915571213s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 active pruub 100.278244019s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[2.f( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.915486336s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 100.278244019s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[2.b( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.e( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.790756226s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 active pruub 97.153831482s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.e( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.790722847s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 97.153831482s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.915358543s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 active pruub 100.278549194s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.915325165s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 100.278549194s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.b( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.921466827s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 101.284889221s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.b( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.921437263s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 101.284889221s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.3( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.753009796s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 101.116500854s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.3( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.752985954s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 101.116500854s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.915019989s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 active pruub 100.278755188s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.9( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.921129227s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 101.284919739s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.914991379s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 100.278755188s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.9( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.921095848s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 101.284919739s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.5( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.752346039s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 101.116279602s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.5( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.752323151s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 101.116279602s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.5( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.789830208s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 active pruub 97.153808594s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.5( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.789807320s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 unknown NOTIFY pruub 97.153808594s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.1( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.752264023s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 101.116333008s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.1( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.752232552s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 101.116333008s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.c( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.789627075s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 active pruub 97.153793335s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.c( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.789607048s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 unknown NOTIFY pruub 97.153793335s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.914524078s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 active pruub 100.278785706s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.8( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.752009392s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 101.116348267s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.914484024s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 100.278785706s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.8( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.751993179s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 101.116348267s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.2( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.920269012s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 101.284927368s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.e( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.789030075s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 active pruub 97.153755188s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.e( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.789010048s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 unknown NOTIFY pruub 97.153755188s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.2( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.920232773s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 101.284927368s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.a( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.751503944s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 101.116371155s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.3( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.919936180s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 101.284965515s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.3( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.919901848s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 101.284965515s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.a( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.751486778s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 101.116371155s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[8.11( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.f( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.788434982s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 active pruub 97.153778076s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.f( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.788392067s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 unknown NOTIFY pruub 97.153778076s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.913239479s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 active pruub 100.278816223s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.913214684s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 100.278816223s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.f( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.787794113s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 active pruub 97.153678894s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.f( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.787755013s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 97.153678894s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.8( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.919032097s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 101.284996033s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.b( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.787399292s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 active pruub 97.153663635s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[10.8( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[2.8( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[10.4( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[10.9( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[8.12( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[2.2( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.8( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.919012070s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 101.284996033s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.4( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.787269592s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 active pruub 97.153739929s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.6( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.787126541s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 active pruub 97.153648376s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.4( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.787225723s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 unknown NOTIFY pruub 97.153739929s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.6( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.787105560s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 unknown NOTIFY pruub 97.153648376s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[5.5( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.b( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.787364006s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 97.153663635s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.9( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.786628723s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 active pruub 97.153579712s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.9( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.786391258s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 97.153579712s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.1( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.917759895s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 101.285018921s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.9( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.748976707s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 101.116470337s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.1( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.917552948s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 101.285018921s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.2( v 34'6 (0'0,34'6] local-lis/les=49/50 n=1 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.786154747s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 active pruub 97.153671265s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.9( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.748931885s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 101.116470337s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.2( v 34'6 (0'0,34'6] local-lis/les=49/50 n=1 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.786122322s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 97.153671265s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.1( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.786032677s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 active pruub 97.153854370s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.4( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.917230606s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 101.285057068s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.4( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.917210579s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 101.285057068s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.8( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.785685539s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 active pruub 97.153564453s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.911067009s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 active pruub 100.278945923s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.1( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.785999298s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 unknown NOTIFY pruub 97.153854370s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.8( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.785650253s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 unknown NOTIFY pruub 97.153564453s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.911013603s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 100.278945923s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[5.4( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.c( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.748647690s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 101.116798401s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.c( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.748620987s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 101.116798401s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.6( v 34'6 (0'0,34'6] local-lis/les=49/50 n=1 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.785073280s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 active pruub 97.153511047s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.910413742s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 active pruub 100.278961182s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.6( v 34'6 (0'0,34'6] local-lis/les=49/50 n=1 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.784980774s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 97.153511047s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.a( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.784927368s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 active pruub 97.153495789s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.6( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.916465759s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 101.285102844s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.6( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.916419983s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 101.285102844s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[5.3( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.a( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.784874916s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 unknown NOTIFY pruub 97.153495789s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[10.d( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[8.d( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.f( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.747156143s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 101.116683960s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.f( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.747108459s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 101.116683960s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.9( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.784053802s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 active pruub 97.153541565s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.9( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.783629417s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 unknown NOTIFY pruub 97.153541565s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.909322739s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 100.278961182s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[5.2( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.18( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.914734840s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 101.285072327s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.18( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.914715767s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 101.285072327s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.4( v 34'6 (0'0,34'6] local-lis/les=49/50 n=1 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.782946587s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 active pruub 97.153434753s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.5( v 52'484 (0'0,52'484] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.908498764s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 41'483 active pruub 100.278991699s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.1b( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.783016205s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 active pruub 97.153541565s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.1b( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.783000946s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 97.153541565s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.4( v 34'6 (0'0,34'6] local-lis/les=49/50 n=1 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.782915115s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 97.153434753s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.5( v 52'484 (0'0,52'484] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.908432007s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 41'483 unknown NOTIFY pruub 100.278991699s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[5.7( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.15( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.782498360s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 active pruub 97.153411865s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.15( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.782477379s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 unknown NOTIFY pruub 97.153411865s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.1a( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.782466888s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 active pruub 97.153388977s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.1a( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.782421112s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 97.153388977s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.907977104s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 active pruub 100.279022217s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.907961845s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 100.279022217s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[10.1( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.1a( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.913803101s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 101.285110474s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.1a( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.913786888s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 101.285110474s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.1b( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.913966179s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 101.285438538s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.1b( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.913951874s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 101.285438538s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.12( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.745106697s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 101.116767883s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.11( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.745021820s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 101.116622925s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[10.e( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.11( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.744842529s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 101.116622925s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.18( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.781492233s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 active pruub 97.153335571s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.18( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.781472206s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 97.153335571s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.12( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.744901657s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 101.116767883s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.19( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.913050652s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 101.285125732s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.19( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.913012505s) [0] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 101.285125732s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[2.1c( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[10.17( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.915280342s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 active pruub 100.287887573s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.915233612s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 100.287887573s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.e( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.743682861s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 101.116607666s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.e( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.743636131s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 101.116607666s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[2.1f( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.1c( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.911897659s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 101.285148621s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.1c( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.911867142s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 101.285148621s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.1f( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.779693604s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 active pruub 97.153289795s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.1f( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.779675484s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 97.153289795s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.914183617s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 active pruub 100.287948608s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.11( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.779204369s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 active pruub 97.153114319s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.914076805s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 100.287948608s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[10.15( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.11( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.779187202s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 unknown NOTIFY pruub 97.153114319s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.1e( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.911046982s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 101.285171509s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.15( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.742553711s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 101.116737366s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.1e( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.910976410s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 101.285171509s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.16( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.742368698s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 101.116790771s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.15( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.742512703s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 101.116737366s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.1f( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.910728455s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 active pruub 101.285171509s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[11.1f( empty local-lis/les=52/53 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54 pruub=12.910708427s) [2] r=-1 lpr=54 pi=[52,54)/1 crt=0'0 unknown NOTIFY pruub 101.285171509s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.16( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.742324829s) [2] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 101.116790771s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.17( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.742900848s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 active pruub 101.117683411s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.13( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.778496742s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 active pruub 97.153266907s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.913118362s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 active pruub 100.287971497s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[10.16( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[7.13( empty local-lis/les=49/50 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.778459549s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 unknown NOTIFY pruub 97.153266907s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54 pruub=11.913083076s) [0] r=-1 lpr=54 pi=[51,54)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 100.287971497s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[3.17( empty local-lis/les=45/46 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54 pruub=12.742877960s) [0] r=-1 lpr=54 pi=[45,54)/1 crt=0'0 unknown NOTIFY pruub 101.117683411s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[2.1d( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.1c( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.778115273s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 active pruub 97.153312683s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.1c( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.778093338s) [2] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 97.153312683s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[10.11( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [1] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[11.17( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[3.1f( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[7.1b( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[5.1d( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.1d( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.776424408s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 active pruub 97.151458740s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[8.2( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[8.1d( v 34'6 (0'0,34'6] local-lis/les=49/50 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54 pruub=8.774798393s) [0] r=-1 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 unknown NOTIFY pruub 97.151458740s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[8.14( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[10.10( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [1] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[9.17( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[11.14( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[7.18( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[8.1b( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[3.1b( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[8.10( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[8.4( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[2.17( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[5.11( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[11.10( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[2.15( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[7.1f( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[5.12( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[7.3( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[5.13( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[10.1a( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [1] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[8.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[10.19( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [1] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[11.f( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[11.e( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[5.16( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[10.6( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [1] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[9.13( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[2.d( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[3.6( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[5.9( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[5.c( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 54 pg[8.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[10.f( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [1] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[8.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[2.7( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[5.f( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[10.b( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [1] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[2.3( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[3.3( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[2.5( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[2.6( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[9.b( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[5.1( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[3.a( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[7.f( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[2.9( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[2.a( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[9.1( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[10.2( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [1] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[10.13( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [1] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[3.1( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[8.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[2.1b( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[7.4( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[10.14( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [1] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[8.b( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[5.1a( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[2.4( empty local-lis/les=0/0 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[5.19( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[8.9( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 54 pg[5.18( empty local-lis/les=0/0 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[7.6( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[11.1( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[3.9( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[11.4( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[9.3( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[3.c( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[8.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[11.6( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[3.f( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[7.9( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[9.7( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[9.5( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[8.1a( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[8.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[8.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[11.19( empty local-lis/les=0/0 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[3.12( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[3.15( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[3.17( empty local-lis/les=0/0 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[7.13( empty local-lis/les=0/0 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[8.1d( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 54 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Feb 25 06:51:48 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 25 06:51:48 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 25 06:51:48 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 25 06:51:48 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Feb 25 06:51:48 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 25 06:51:48 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Feb 25 06:51:48 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 25 06:51:48 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 25 06:51:48 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 25 06:51:48 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 25 06:51:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Feb 25 06:51:48 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.5( v 52'484 (0'0,52'484] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.5( v 52'484 (0'0,52'484] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[5.19( empty local-lis/les=54/55 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[7.1c( empty local-lis/les=54/55 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[5.18( empty local-lis/les=54/55 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[5.1a( empty local-lis/les=54/55 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[10.14( v 52'19 lc 38'7 (0'0,52'19] local-lis/les=54/55 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [1] r=0 lpr=54 pi=[51,54)/1 crt=52'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[10.12( v 52'19 lc 41'17 (0'0,52'19] local-lis/les=54/55 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [1] r=0 lpr=54 pi=[51,54)/1 crt=52'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[2.1b( empty local-lis/les=54/55 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[10.13( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [1] r=0 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[10.10( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [1] r=0 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[5.1d( empty local-lis/les=54/55 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[10.11( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [1] r=0 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[2.6( empty local-lis/les=54/55 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[2.7( empty local-lis/les=54/55 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[10.f( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [1] r=0 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[2.4( empty local-lis/les=54/55 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[4.2( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[6.3( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=54/55 n=2 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=35'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[4.4( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.1b( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[2.9( empty local-lis/les=54/55 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[6.d( v 35'39 lc 34'13 (0'0,35'39] local-lis/les=54/55 n=1 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[4.f( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[4.d( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[5.c( empty local-lis/les=54/55 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[6.f( v 35'39 lc 34'1 (0'0,35'39] local-lis/les=54/55 n=1 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[2.a( empty local-lis/les=54/55 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[10.2( v 41'18 (0'0,41'18] local-lis/les=54/55 n=1 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [1] r=0 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[2.5( empty local-lis/les=54/55 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[6.1( v 35'39 (0'0,35'39] local-lis/les=54/55 n=2 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.3( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.3( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.1( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[5.1( empty local-lis/les=54/55 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.1( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.17( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.17( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.7( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.7( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.b( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.b( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.5( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.5( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.13( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.13( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[11.2( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[11.11( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[7.5( empty local-lis/les=54/55 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[8.12( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[3.18( empty local-lis/les=54/55 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[8.d( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[11.9( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[7.2( empty local-lis/les=54/55 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[7.1( empty local-lis/les=54/55 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[11.d( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[11.b( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[11.3( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[3.8( empty local-lis/les=54/55 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[7.c( empty local-lis/les=54/55 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[7.e( empty local-lis/les=54/55 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[8.11( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[11.12( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[3.5( empty local-lis/les=54/55 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[11.15( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[8.15( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[3.1d( empty local-lis/les=54/55 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[3.1e( empty local-lis/les=54/55 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[4.18( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[3.7( empty local-lis/les=54/55 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[7.1a( empty local-lis/les=54/55 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[4.1b( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[4.1a( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[11.8( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[8.2( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=54/55 n=1 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=34'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[4.1( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[7.8( empty local-lis/les=54/55 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[4.a( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[8.4( v 34'6 (0'0,34'6] local-lis/les=54/55 n=1 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[3.e( empty local-lis/les=54/55 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[3.11( empty local-lis/les=54/55 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[7.15( empty local-lis/les=54/55 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] r=-1 lpr=55 pi=[51,55)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[7.a( empty local-lis/les=54/55 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[11.18( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[11.1a( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[11.1b( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[7.11( empty local-lis/les=54/55 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[4.13( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[11.1f( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[11.1c( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[3.16( empty local-lis/les=54/55 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[8.1c( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[11.1e( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [2] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[4.11( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[4.1c( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[8.1b( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [2] r=0 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 55 pg[4.e( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [2] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[7.1b( empty local-lis/les=54/55 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[4.7( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[6.5( v 35'39 lc 34'11 (0'0,35'39] local-lis/les=54/55 n=2 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[2.3( empty local-lis/les=54/55 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[4.5( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[10.b( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [1] r=0 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[5.f( empty local-lis/les=54/55 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[2.d( empty local-lis/les=54/55 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[6.9( v 35'39 (0'0,35'39] local-lis/les=54/55 n=1 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[4.9( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[6.7( v 35'39 lc 34'21 (0'0,35'39] local-lis/les=54/55 n=1 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[5.9( empty local-lis/les=54/55 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[4.8( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[10.6( v 41'18 (0'0,41'18] local-lis/les=54/55 n=1 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [1] r=0 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[5.16( empty local-lis/les=54/55 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[10.19( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [1] r=0 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[4.14( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[10.1a( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [1] r=0 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[5.12( empty local-lis/les=54/55 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[2.15( empty local-lis/les=54/55 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[4.12( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[5.13( empty local-lis/les=54/55 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[5.11( empty local-lis/les=54/55 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[4.10( empty local-lis/les=54/55 n=0 ec=45/20 lis/c=45/45 les/c/f=46/46/0 sis=54) [1] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[2.17( empty local-lis/les=54/55 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [1] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[8.14( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[11.17( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[8.1a( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[11.19( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[10.1( v 41'18 (0'0,41'18] local-lis/les=54/55 n=1 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[8.18( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[10.16( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[3.12( empty local-lis/les=54/55 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[3.15( empty local-lis/les=54/55 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[8.1f( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[2.8( empty local-lis/les=54/55 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[3.9( empty local-lis/les=54/55 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[11.1( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[3.a( empty local-lis/les=54/55 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[7.f( empty local-lis/les=54/55 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[8.c( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[11.f( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[7.3( empty local-lis/les=54/55 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[10.e( v 52'19 lc 38'4 (0'0,52'19] local-lis/les=54/55 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=52'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[8.e( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[11.e( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[5.3( empty local-lis/les=54/55 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[5.2( empty local-lis/les=54/55 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[3.6( empty local-lis/les=54/55 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[2.1f( empty local-lis/les=54/55 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[10.17( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[3.3( empty local-lis/les=54/55 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[5.5( empty local-lis/les=54/55 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[11.6( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[2.1c( empty local-lis/les=54/55 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[2.2( empty local-lis/les=54/55 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[7.6( empty local-lis/les=54/55 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[5.4( empty local-lis/les=54/55 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[3.1f( empty local-lis/les=54/55 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[8.9( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[7.9( empty local-lis/les=54/55 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[7.18( empty local-lis/les=54/55 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[8.6( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=54/55 n=1 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=34'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[11.14( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[8.f( v 34'6 lc 0'0 (0'0,34'6] local-lis/les=54/55 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=34'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[2.1d( empty local-lis/les=54/55 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[10.15( v 52'19 lc 38'3 (0'0,52'19] local-lis/les=54/55 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=52'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[10.8( v 41'18 (0'0,41'18] local-lis/les=54/55 n=1 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[3.1( empty local-lis/les=54/55 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[11.4( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[3.c( empty local-lis/les=54/55 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[5.7( empty local-lis/les=54/55 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[7.4( empty local-lis/les=54/55 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[8.b( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[3.f( empty local-lis/les=54/55 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[7.1f( empty local-lis/les=54/55 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[10.9( v 52'19 lc 38'8 (0'0,52'19] local-lis/les=54/55 n=1 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=52'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[8.10( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[3.1b( empty local-lis/les=54/55 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[2.b( empty local-lis/les=54/55 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[11.10( empty local-lis/les=54/55 n=0 ec=52/39 lis/c=52/52 les/c/f=53/53/0 sis=54) [0] r=0 lpr=54 pi=[52,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[5.1e( empty local-lis/les=54/55 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[2.18( empty local-lis/les=54/55 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[10.4( v 41'18 (0'0,41'18] local-lis/les=54/55 n=1 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[2.19( empty local-lis/les=54/55 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[2.f( empty local-lis/les=54/55 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[2.11( empty local-lis/les=54/55 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[10.7( v 41'18 (0'0,41'18] local-lis/les=54/55 n=1 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[5.15( empty local-lis/les=54/55 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[2.13( empty local-lis/les=54/55 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[10.1e( v 41'18 (0'0,41'18] local-lis/les=54/55 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=41'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[5.14( empty local-lis/les=54/55 n=0 ec=47/21 lis/c=47/47 les/c/f=49/49/0 sis=54) [0] r=0 lpr=54 pi=[47,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[2.16( empty local-lis/les=54/55 n=0 ec=43/18 lis/c=43/43 les/c/f=44/44/0 sis=54) [0] r=0 lpr=54 pi=[43,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[8.1d( v 34'6 (0'0,34'6] local-lis/les=54/55 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=34'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[3.17( empty local-lis/les=54/55 n=0 ec=45/19 lis/c=45/45 les/c/f=46/46/0 sis=54) [0] r=0 lpr=54 pi=[45,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[7.13( empty local-lis/les=54/55 n=0 ec=49/24 lis/c=49/49 les/c/f=50/50/0 sis=54) [0] r=0 lpr=54 pi=[49,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 55 pg[6.b( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=54/55 n=1 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=54) [1] r=0 lpr=54 pi=[47,54)/1 crt=35'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 55 pg[10.d( v 52'19 lc 38'5 (0'0,52'19] local-lis/les=54/55 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=54) [0] r=0 lpr=54 pi=[51,54)/1 crt=52'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v115: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:51:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Feb 25 06:51:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Feb 25 06:51:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Feb 25 06:51:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Feb 25 06:51:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Feb 25 06:51:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Feb 25 06:51:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Feb 25 06:51:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Feb 25 06:51:49 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Feb 25 06:51:49 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Feb 25 06:51:49 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Feb 25 06:51:49 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 56 pg[6.a( v 35'39 (0'0,35'39] local-lis/les=47/48 n=1 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=56 pruub=12.586139679s) [1] r=-1 lpr=56 pi=[47,56)/1 crt=35'39 lcod 0'0 active pruub 107.015190125s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:49 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 56 pg[6.a( v 35'39 (0'0,35'39] local-lis/les=47/48 n=1 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=56 pruub=12.586095810s) [1] r=-1 lpr=56 pi=[47,56)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 107.015190125s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:49 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 56 pg[6.6( v 35'39 (0'0,35'39] local-lis/les=47/48 n=2 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=56 pruub=12.588056564s) [1] r=-1 lpr=56 pi=[47,56)/1 crt=35'39 lcod 0'0 active pruub 107.017524719s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:49 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 56 pg[6.6( v 35'39 (0'0,35'39] local-lis/les=47/48 n=2 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=56 pruub=12.587992668s) [1] r=-1 lpr=56 pi=[47,56)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 107.017524719s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:49 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 56 pg[6.e( v 35'39 (0'0,35'39] local-lis/les=47/48 n=1 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=56 pruub=12.588135719s) [1] r=-1 lpr=56 pi=[47,56)/1 crt=35'39 lcod 0'0 active pruub 107.017868042s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:49 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 56 pg[6.2( v 35'39 (0'0,35'39] local-lis/les=47/48 n=2 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=56 pruub=12.587944031s) [1] r=-1 lpr=56 pi=[47,56)/1 crt=35'39 lcod 0'0 active pruub 107.017707825s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:49 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 56 pg[6.e( v 35'39 (0'0,35'39] local-lis/les=47/48 n=1 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=56 pruub=12.588104248s) [1] r=-1 lpr=56 pi=[47,56)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 107.017868042s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:49 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 56 pg[6.2( v 35'39 (0'0,35'39] local-lis/les=47/48 n=2 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=56 pruub=12.587900162s) [1] r=-1 lpr=56 pi=[47,56)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 107.017707825s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:49 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 56 pg[6.a( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:49 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 56 pg[6.6( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:49 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 56 pg[6.e( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:49 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 56 pg[6.2( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:49 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 56 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:49 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 56 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:49 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 56 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:49 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 56 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:49 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 56 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:49 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 56 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:49 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 56 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:49 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 56 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:49 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 56 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:49 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 56 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:49 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 56 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:49 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 56 pg[9.5( v 52'484 (0'0,52'484] local-lis/les=55/56 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[51,55)/1 crt=52'484 lcod 41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:49 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 56 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:49 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 56 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:49 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 56 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:49 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 56 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=55) [0]/[1] async=[0] r=0 lpr=55 pi=[51,55)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:50 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Feb 25 06:51:50 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Feb 25 06:51:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Feb 25 06:51:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Feb 25 06:51:50 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Feb 25 06:51:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v118: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:51:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Feb 25 06:51:50 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Feb 25 06:51:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Feb 25 06:51:50 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Feb 25 06:51:50 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 57 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57 pruub=15.142944336s) [0] async=[0] r=-1 lpr=57 pi=[51,57)/1 crt=41'483 lcod 0'0 active pruub 106.551506042s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:50 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 57 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57 pruub=15.141734123s) [0] async=[0] r=-1 lpr=57 pi=[51,57)/1 crt=41'483 lcod 0'0 active pruub 106.550430298s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:50 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Feb 25 06:51:50 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Feb 25 06:51:50 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 57 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57 pruub=15.142156601s) [0] async=[0] r=-1 lpr=57 pi=[51,57)/1 crt=41'483 lcod 0'0 active pruub 106.551033020s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:50 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 57 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57 pruub=15.142131805s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 106.551033020s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:50 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 57 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57 pruub=15.141900063s) [0] async=[0] r=-1 lpr=57 pi=[51,57)/1 crt=41'483 lcod 0'0 active pruub 106.550994873s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:50 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 57 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57 pruub=15.141864777s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 106.550994873s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:50 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 57 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57 pruub=15.142096519s) [0] async=[0] r=-1 lpr=57 pi=[51,57)/1 crt=41'483 lcod 0'0 active pruub 106.551422119s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:50 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 57 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57 pruub=15.142710686s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 106.551506042s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:50 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 57 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57 pruub=15.142044067s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 106.551422119s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:50 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 57 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57 pruub=15.142009735s) [0] async=[0] r=-1 lpr=57 pi=[51,57)/1 crt=41'483 lcod 0'0 active pruub 106.551437378s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:50 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 57 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57 pruub=15.141643524s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 106.550430298s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:50 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 57 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57 pruub=15.141832352s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 106.551437378s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:50 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 57 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57 pruub=15.141119003s) [0] async=[0] r=-1 lpr=57 pi=[51,57)/1 crt=41'483 lcod 0'0 active pruub 106.550903320s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:50 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 57 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57 pruub=15.140943527s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 106.550903320s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:50 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 57 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57 pruub=15.140459061s) [0] async=[0] r=-1 lpr=57 pi=[51,57)/1 crt=41'483 lcod 0'0 active pruub 106.550506592s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:50 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 57 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57 pruub=15.140375137s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 106.550506592s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:50 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 57 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57 pruub=15.136788368s) [0] async=[0] r=-1 lpr=57 pi=[51,57)/1 crt=41'483 lcod 0'0 active pruub 106.547050476s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:50 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 57 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57 pruub=15.136754036s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 106.547050476s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:50 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 57 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57 pruub=15.140009880s) [0] async=[0] r=-1 lpr=57 pi=[51,57)/1 crt=41'483 lcod 0'0 active pruub 106.550422668s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:50 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 57 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57 pruub=15.140073776s) [0] async=[0] r=-1 lpr=57 pi=[51,57)/1 crt=41'483 lcod 0'0 active pruub 106.550567627s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:50 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 57 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57 pruub=15.139943123s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 106.550422668s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:50 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 57 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57 pruub=15.139805794s) [0] async=[0] r=-1 lpr=57 pi=[51,57)/1 crt=41'483 lcod 0'0 active pruub 106.550445557s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:50 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 57 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57 pruub=15.139750481s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 106.550445557s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:50 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 57 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57 pruub=15.140033722s) [0] r=-1 lpr=57 pi=[51,57)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 106.550567627s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:50 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 57 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:50 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 57 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:50 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 57 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:50 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 57 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:50 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 57 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:50 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 57 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:50 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 57 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:50 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 57 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:50 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 57 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:50 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 57 pg[6.2( v 35'39 (0'0,35'39] local-lis/les=56/57 n=2 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:50 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 57 pg[6.e( v 35'39 lc 34'19 (0'0,35'39] local-lis/les=56/57 n=1 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:50 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 57 pg[6.a( v 35'39 (0'0,35'39] local-lis/les=56/57 n=1 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:50 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 57 pg[6.6( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=56/57 n=2 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=56) [1] r=0 lpr=56 pi=[47,56)/1 crt=35'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:50 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 57 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:50 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 57 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:50 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 57 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:50 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 57 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:50 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 57 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:50 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 57 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:50 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 57 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:50 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 57 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:50 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 57 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:50 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 57 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:50 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 57 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:50 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 57 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:50 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 57 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:50 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 57 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:50 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 57 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:51 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Feb 25 06:51:51 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Feb 25 06:51:51 np0005629333 ceph-mgr[76641]: [progress INFO root] Completed event 736097e8-dc2b-404f-bed5-faedab22194d (Global Recovery Event) in 15 seconds
Feb 25 06:51:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Feb 25 06:51:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Feb 25 06:51:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Feb 25 06:51:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Feb 25 06:51:51 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Feb 25 06:51:51 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 58 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=58 pruub=14.136744499s) [0] async=[0] r=-1 lpr=58 pi=[51,58)/1 crt=41'483 lcod 0'0 active pruub 106.551528931s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:51 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 58 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=55/56 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=58 pruub=14.136502266s) [0] r=-1 lpr=58 pi=[51,58)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 106.551528931s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:51 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 58 pg[6.3( v 35'39 (0'0,35'39] local-lis/les=54/55 n=2 ec=47/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.950649261s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=35'39 active pruub 105.365913391s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:51 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 58 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=58 pruub=14.136114120s) [0] async=[0] r=-1 lpr=58 pi=[51,58)/1 crt=41'483 lcod 0'0 active pruub 106.551414490s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:51 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 58 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:51 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 58 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:51 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 58 pg[6.3( v 35'39 (0'0,35'39] local-lis/les=54/55 n=2 ec=47/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.950537682s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=35'39 unknown NOTIFY pruub 105.365913391s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:51 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 58 pg[9.5( v 56'486 (0'0,56'486] local-lis/les=0/0 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 pct=0'0 crt=52'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:51 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 58 pg[9.5( v 56'486 (0'0,56'486] local-lis/les=0/0 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=52'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:51 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 58 pg[6.f( v 35'39 (0'0,35'39] local-lis/les=54/55 n=1 ec=47/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.950531960s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=35'39 active pruub 105.366081238s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:51 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 58 pg[6.f( v 35'39 (0'0,35'39] local-lis/les=54/55 n=1 ec=47/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.950467110s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=35'39 unknown NOTIFY pruub 105.366081238s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:51 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 58 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=58 pruub=14.135824203s) [0] r=-1 lpr=58 pi=[51,58)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 106.551414490s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:51 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 58 pg[6.7( v 35'39 (0'0,35'39] local-lis/les=54/55 n=1 ec=47/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.955606461s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=35'39 active pruub 105.371536255s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:51 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 58 pg[6.7( v 35'39 (0'0,35'39] local-lis/les=54/55 n=1 ec=47/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.955503464s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=35'39 unknown NOTIFY pruub 105.371536255s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:51 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 58 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:51 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 58 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:51 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Feb 25 06:51:51 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Feb 25 06:51:51 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Feb 25 06:51:51 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Feb 25 06:51:51 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 58 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=58 pruub=14.134704590s) [0] async=[0] r=-1 lpr=58 pi=[51,58)/1 crt=41'483 lcod 0'0 active pruub 106.551773071s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:51 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 58 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=55/56 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=58 pruub=14.134649277s) [0] r=-1 lpr=58 pi=[51,58)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 106.551773071s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:51 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 58 pg[9.5( v 56'486 (0'0,56'486] local-lis/les=55/56 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=58 pruub=14.134096146s) [0] async=[0] r=-1 lpr=58 pi=[51,58)/1 crt=52'484 lcod 56'485 active pruub 106.551467896s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:51 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 58 pg[9.5( v 56'486 (0'0,56'486] local-lis/les=55/56 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=58 pruub=14.134045601s) [0] r=-1 lpr=58 pi=[51,58)/1 crt=52'484 lcod 56'485 unknown NOTIFY pruub 106.551467896s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:51 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 58 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:51 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 58 pg[6.b( v 35'39 (0'0,35'39] local-lis/les=54/55 n=1 ec=47/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.963215828s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=35'39 active pruub 105.380958557s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:51 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 58 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:51 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 58 pg[6.3( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:51 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 58 pg[6.b( v 35'39 (0'0,35'39] local-lis/les=54/55 n=1 ec=47/22 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=12.962740898s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=35'39 unknown NOTIFY pruub 105.380958557s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:51 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 58 pg[6.f( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:51 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 58 pg[6.7( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:51 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:51 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 58 pg[9.11( v 41'483 (0'0,41'483] local-lis/les=57/58 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:51 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 58 pg[9.b( v 41'483 (0'0,41'483] local-lis/les=57/58 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:51 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 58 pg[9.17( v 41'483 (0'0,41'483] local-lis/les=57/58 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:51 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 58 pg[9.3( v 41'483 (0'0,41'483] local-lis/les=57/58 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:51 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 58 pg[9.1d( v 41'483 (0'0,41'483] local-lis/les=57/58 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:51 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 58 pg[9.19( v 41'483 (0'0,41'483] local-lis/les=57/58 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:51 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 58 pg[9.9( v 41'483 (0'0,41'483] local-lis/les=57/58 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:51 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 58 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=57/58 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:51 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 58 pg[9.1( v 41'483 (0'0,41'483] local-lis/les=57/58 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:51 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 58 pg[9.d( v 41'483 (0'0,41'483] local-lis/les=57/58 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:51 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 58 pg[9.1b( v 41'483 (0'0,41'483] local-lis/les=57/58 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:51 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 58 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=57/58 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:52 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Feb 25 06:51:52 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Feb 25 06:51:52 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Feb 25 06:51:52 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Feb 25 06:51:52 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Feb 25 06:51:52 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Feb 25 06:51:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:51:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v120: 305 pgs: 8 peering, 297 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s, 2 keys/s, 30 objects/s recovering
Feb 25 06:51:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Feb 25 06:51:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Feb 25 06:51:52 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Feb 25 06:51:52 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 59 pg[9.13( v 41'483 (0'0,41'483] local-lis/les=58/59 n=6 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:52 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 59 pg[9.5( v 56'486 (0'0,56'486] local-lis/les=58/59 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=56'486 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:52 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 59 pg[6.7( v 35'39 lc 34'21 (0'0,35'39] local-lis/les=58/59 n=1 ec=47/22 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:52 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 59 pg[6.b( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=58/59 n=1 ec=47/22 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=35'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:52 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 59 pg[6.3( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=58/59 n=2 ec=47/22 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=35'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:52 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 59 pg[9.f( v 41'483 (0'0,41'483] local-lis/les=58/59 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:52 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 59 pg[6.f( v 35'39 lc 34'1 (0'0,35'39] local-lis/les=58/59 n=1 ec=47/22 lis/c=54/54 les/c/f=55/55/0 sis=58) [0] r=0 lpr=58 pi=[54,58)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:52 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 59 pg[9.7( v 41'483 (0'0,41'483] local-lis/les=58/59 n=7 ec=51/35 lis/c=55/51 les/c/f=56/52/0 sis=58) [0] r=0 lpr=58 pi=[51,58)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:51:52 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Feb 25 06:51:52 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Feb 25 06:51:53 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Feb 25 06:51:53 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Feb 25 06:51:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v122: 305 pgs: 8 peering, 297 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s, 1 keys/s, 24 objects/s recovering
Feb 25 06:51:56 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 4.b scrub starts
Feb 25 06:51:56 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 4.b scrub ok
Feb 25 06:51:56 np0005629333 ceph-mgr[76641]: [progress INFO root] Writing back 16 completed events
Feb 25 06:51:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 25 06:51:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v123: 305 pgs: 8 peering, 297 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 935 B/s, 1 keys/s, 20 objects/s recovering
Feb 25 06:51:56 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Feb 25 06:51:56 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Feb 25 06:51:56 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:51:57 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Feb 25 06:51:57 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Feb 25 06:51:57 np0005629333 systemd[77728]: Starting Mark boot as successful...
Feb 25 06:51:57 np0005629333 systemd[77728]: Finished Mark boot as successful.
Feb 25 06:51:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:51:57 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Feb 25 06:51:57 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Feb 25 06:51:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v124: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 783 B/s, 2 keys/s, 16 objects/s recovering
Feb 25 06:51:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Feb 25 06:51:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Feb 25 06:51:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Feb 25 06:51:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Feb 25 06:51:58 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Feb 25 06:51:58 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Feb 25 06:51:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Feb 25 06:51:59 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Feb 25 06:51:59 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Feb 25 06:51:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Feb 25 06:51:59 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Feb 25 06:51:59 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 60 pg[6.4( v 35'39 (0'0,35'39] local-lis/les=47/48 n=2 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=60 pruub=11.477794647s) [1] r=-1 lpr=60 pi=[47,60)/1 crt=35'39 lcod 0'0 active pruub 115.017623901s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:59 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 60 pg[6.4( v 35'39 (0'0,35'39] local-lis/les=47/48 n=2 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=60 pruub=11.477747917s) [1] r=-1 lpr=60 pi=[47,60)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 115.017623901s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:59 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 60 pg[6.c( v 35'39 (0'0,35'39] local-lis/les=47/48 n=1 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=60 pruub=11.477539062s) [1] r=-1 lpr=60 pi=[47,60)/1 crt=35'39 lcod 0'0 active pruub 115.018013000s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:51:59 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 60 pg[6.c( v 35'39 (0'0,35'39] local-lis/les=47/48 n=1 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=60 pruub=11.477499008s) [1] r=-1 lpr=60 pi=[47,60)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 115.018013000s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:51:59 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=60) [1] r=0 lpr=60 pi=[47,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:59 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=60) [1] r=0 lpr=60 pi=[47,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:51:59 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Feb 25 06:51:59 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Feb 25 06:52:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Feb 25 06:52:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Feb 25 06:52:00 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 61 pg[6.4( v 35'39 lc 34'15 (0'0,35'39] local-lis/les=60/61 n=2 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=60) [1] r=0 lpr=60 pi=[47,60)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:00 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Feb 25 06:52:00 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 61 pg[6.c( v 35'39 lc 34'17 (0'0,35'39] local-lis/les=60/61 n=1 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=60) [1] r=0 lpr=60 pi=[47,60)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:00 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Feb 25 06:52:00 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Feb 25 06:52:00 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Feb 25 06:52:00 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Feb 25 06:52:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v127: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 81 B/s, 1 keys/s, 0 objects/s recovering
Feb 25 06:52:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Feb 25 06:52:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Feb 25 06:52:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Feb 25 06:52:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Feb 25 06:52:00 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Feb 25 06:52:00 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Feb 25 06:52:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Feb 25 06:52:01 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Feb 25 06:52:01 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Feb 25 06:52:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Feb 25 06:52:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Feb 25 06:52:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Feb 25 06:52:01 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Feb 25 06:52:01 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 62 pg[6.5( v 35'39 (0'0,35'39] local-lis/les=54/55 n=2 ec=47/22 lis/c=54/54 les/c/f=55/55/0 sis=62 pruub=11.677883148s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=35'39 active pruub 113.371376038s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:01 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 62 pg[6.5( v 35'39 (0'0,35'39] local-lis/les=54/55 n=2 ec=47/22 lis/c=54/54 les/c/f=55/55/0 sis=62 pruub=11.677741051s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=35'39 unknown NOTIFY pruub 113.371376038s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:01 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 62 pg[6.d( v 35'39 (0'0,35'39] local-lis/les=54/55 n=1 ec=47/22 lis/c=54/54 les/c/f=55/55/0 sis=62 pruub=11.672867775s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=35'39 active pruub 113.366432190s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:01 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 62 pg[6.d( v 35'39 (0'0,35'39] local-lis/les=54/55 n=1 ec=47/22 lis/c=54/54 les/c/f=55/55/0 sis=62 pruub=11.672083855s) [0] r=-1 lpr=62 pi=[54,62)/1 crt=35'39 unknown NOTIFY pruub 113.366432190s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:01 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 62 pg[6.d( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=54/54 les/c/f=55/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:01 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 62 pg[6.5( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=54/54 les/c/f=55/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:52:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:52:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:52:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:52:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:52:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:52:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Feb 25 06:52:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Feb 25 06:52:02 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Feb 25 06:52:02 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Feb 25 06:52:02 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Feb 25 06:52:02 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 63 pg[6.5( v 35'39 lc 34'11 (0'0,35'39] local-lis/les=62/63 n=2 ec=47/22 lis/c=54/54 les/c/f=55/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:02 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 63 pg[6.d( v 35'39 lc 34'13 (0'0,35'39] local-lis/les=62/63 n=1 ec=47/22 lis/c=54/54 les/c/f=55/55/0 sis=62) [0] r=0 lpr=62 pi=[54,62)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:02 np0005629333 python3[100079]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:52:02 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Feb 25 06:52:02 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Feb 25 06:52:02 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Feb 25 06:52:02 np0005629333 podman[100080]: 2026-02-25 11:52:02.530467937 +0000 UTC m=+0.059645029 container create f288530e1699f5ba3433b960a7b194a820eb73fb2615fb7d6d0303893093e03b (image=quay.io/ceph/ceph:v20, name=zealous_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:52:02 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Feb 25 06:52:02 np0005629333 systemd[1]: Started libpod-conmon-f288530e1699f5ba3433b960a7b194a820eb73fb2615fb7d6d0303893093e03b.scope.
Feb 25 06:52:02 np0005629333 podman[100080]: 2026-02-25 11:52:02.505685061 +0000 UTC m=+0.034862213 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:52:02 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:52:02 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7908b83d8379ad1de271efcbcd8bfb0ce73bad8bb67284ff44948dd261631e0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:52:02 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7908b83d8379ad1de271efcbcd8bfb0ce73bad8bb67284ff44948dd261631e0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:52:02 np0005629333 podman[100080]: 2026-02-25 11:52:02.625580094 +0000 UTC m=+0.154757206 container init f288530e1699f5ba3433b960a7b194a820eb73fb2615fb7d6d0303893093e03b (image=quay.io/ceph/ceph:v20, name=zealous_meitner, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:52:02 np0005629333 podman[100080]: 2026-02-25 11:52:02.631500163 +0000 UTC m=+0.160677265 container start f288530e1699f5ba3433b960a7b194a820eb73fb2615fb7d6d0303893093e03b (image=quay.io/ceph/ceph:v20, name=zealous_meitner, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 06:52:02 np0005629333 podman[100080]: 2026-02-25 11:52:02.637581936 +0000 UTC m=+0.166759048 container attach f288530e1699f5ba3433b960a7b194a820eb73fb2615fb7d6d0303893093e03b (image=quay.io/ceph/ceph:v20, name=zealous_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 06:52:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:52:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v130: 305 pgs: 2 peering, 303 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 405 B/s, 1 objects/s recovering
Feb 25 06:52:02 np0005629333 zealous_meitner[100095]: could not fetch user info: no user info saved
Feb 25 06:52:02 np0005629333 systemd[1]: libpod-f288530e1699f5ba3433b960a7b194a820eb73fb2615fb7d6d0303893093e03b.scope: Deactivated successfully.
Feb 25 06:52:02 np0005629333 podman[100080]: 2026-02-25 11:52:02.937107742 +0000 UTC m=+0.466284854 container died f288530e1699f5ba3433b960a7b194a820eb73fb2615fb7d6d0303893093e03b (image=quay.io/ceph/ceph:v20, name=zealous_meitner, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:52:02 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e7908b83d8379ad1de271efcbcd8bfb0ce73bad8bb67284ff44948dd261631e0-merged.mount: Deactivated successfully.
Feb 25 06:52:02 np0005629333 podman[100080]: 2026-02-25 11:52:02.977901903 +0000 UTC m=+0.507079005 container remove f288530e1699f5ba3433b960a7b194a820eb73fb2615fb7d6d0303893093e03b (image=quay.io/ceph/ceph:v20, name=zealous_meitner, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 06:52:02 np0005629333 systemd[1]: libpod-conmon-f288530e1699f5ba3433b960a7b194a820eb73fb2615fb7d6d0303893093e03b.scope: Deactivated successfully.
Feb 25 06:52:03 np0005629333 python3[100217]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 8ac33163-6221-5d58-9a39-8b6933fe7762 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:52:03 np0005629333 podman[100218]: 2026-02-25 11:52:03.403991932 +0000 UTC m=+0.047349119 container create 49c86d5891f370ac8056c6ea577809fa47b57b0aabbe53a42a9151dc251d3eaa (image=quay.io/ceph/ceph:v20, name=stupefied_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 06:52:03 np0005629333 systemd[1]: Started libpod-conmon-49c86d5891f370ac8056c6ea577809fa47b57b0aabbe53a42a9151dc251d3eaa.scope.
Feb 25 06:52:03 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:52:03 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Feb 25 06:52:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a77cafad163ffe403333fe7f2c75d9c706f036a83a91c7a8b461c6ffb55c3743/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:52:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a77cafad163ffe403333fe7f2c75d9c706f036a83a91c7a8b461c6ffb55c3743/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:52:03 np0005629333 podman[100218]: 2026-02-25 11:52:03.3818091 +0000 UTC m=+0.025166267 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Feb 25 06:52:03 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Feb 25 06:52:03 np0005629333 podman[100218]: 2026-02-25 11:52:03.48965421 +0000 UTC m=+0.133011377 container init 49c86d5891f370ac8056c6ea577809fa47b57b0aabbe53a42a9151dc251d3eaa (image=quay.io/ceph/ceph:v20, name=stupefied_payne, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 06:52:03 np0005629333 podman[100218]: 2026-02-25 11:52:03.493415627 +0000 UTC m=+0.136772814 container start 49c86d5891f370ac8056c6ea577809fa47b57b0aabbe53a42a9151dc251d3eaa (image=quay.io/ceph/ceph:v20, name=stupefied_payne, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:52:03 np0005629333 podman[100218]: 2026-02-25 11:52:03.496656089 +0000 UTC m=+0.140013256 container attach 49c86d5891f370ac8056c6ea577809fa47b57b0aabbe53a42a9151dc251d3eaa (image=quay.io/ceph/ceph:v20, name=stupefied_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]: {
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:    "user_id": "openstack",
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:    "display_name": "openstack",
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:    "email": "",
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:    "suspended": 0,
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:    "max_buckets": 1000,
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:    "subusers": [],
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:    "keys": [
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:        {
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:            "user": "openstack",
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:            "access_key": "AVMQ3N6HZ2YEMQ2ZDYA4",
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:            "secret_key": "UH1zPOLkbtZ0SzFIdw2X1XDYfatpFJh0Gw3wbzAb",
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:            "active": true,
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:            "create_date": "2026-02-25T11:52:03.688855Z"
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:        }
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:    ],
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:    "swift_keys": [],
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:    "caps": [],
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:    "op_mask": "read, write, delete",
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:    "default_placement": "",
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:    "default_storage_class": "",
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:    "placement_tags": [],
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:    "bucket_quota": {
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:        "enabled": false,
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:        "check_on_raw": false,
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:        "max_size": -1,
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:        "max_size_kb": 0,
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:        "max_objects": -1
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:    },
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:    "user_quota": {
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:        "enabled": false,
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:        "check_on_raw": false,
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:        "max_size": -1,
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:        "max_size_kb": 0,
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:        "max_objects": -1
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:    },
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:    "temp_url_keys": [],
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:    "type": "rgw",
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:    "mfa_ids": [],
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:    "account_id": "",
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:    "path": "/",
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:    "create_date": "2026-02-25T11:52:03.688256Z",
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:    "tags": [],
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]:    "group_ids": []
Feb 25 06:52:03 np0005629333 stupefied_payne[100235]: }
Feb 25 06:52:03 np0005629333 systemd[1]: libpod-49c86d5891f370ac8056c6ea577809fa47b57b0aabbe53a42a9151dc251d3eaa.scope: Deactivated successfully.
Feb 25 06:52:03 np0005629333 podman[100218]: 2026-02-25 11:52:03.72255201 +0000 UTC m=+0.365909177 container died 49c86d5891f370ac8056c6ea577809fa47b57b0aabbe53a42a9151dc251d3eaa (image=quay.io/ceph/ceph:v20, name=stupefied_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 06:52:03 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a77cafad163ffe403333fe7f2c75d9c706f036a83a91c7a8b461c6ffb55c3743-merged.mount: Deactivated successfully.
Feb 25 06:52:03 np0005629333 podman[100218]: 2026-02-25 11:52:03.7661444 +0000 UTC m=+0.409501547 container remove 49c86d5891f370ac8056c6ea577809fa47b57b0aabbe53a42a9151dc251d3eaa (image=quay.io/ceph/ceph:v20, name=stupefied_payne, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:52:03 np0005629333 systemd[1]: libpod-conmon-49c86d5891f370ac8056c6ea577809fa47b57b0aabbe53a42a9151dc251d3eaa.scope: Deactivated successfully.
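[editor's note] The JSON block above is radosgw-admin user metadata for the RGW user "openstack", printed by the short-lived quay.io/ceph/ceph:v20 container that podman then removes. The command that produced it is not captured in this log; the shape matches radosgw-admin user info / user create output. A minimal Python sketch for pulling the S3 credentials out of such a dump; the file path is hypothetical and the field names are taken from the JSON above:

    import json

    # Hypothetical path: the JSON block logged above, saved to a file.
    with open("rgw_user_openstack.json") as f:
        user = json.load(f)

    # "keys" holds the S3 credential pairs, as in the log above.
    for key in user["keys"]:
        if key.get("active", True):
            print(f'user={key["user"]} access_key={key["access_key"]}')
    # The matching secret_key should be treated as a secret, not logged.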
Feb 25 06:52:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v131: 305 pgs: 2 peering, 303 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 281 B/s, 0 objects/s recovering
Feb 25 06:52:05 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Feb 25 06:52:05 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Feb 25 06:52:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v132: 305 pgs: 2 peering, 303 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 151 B/s wr, 25 op/s; 240 B/s, 0 objects/s recovering
Feb 25 06:52:06 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Feb 25 06:52:06 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 2.12 scrub ok
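[editor's note] The paired "scrub starts" / "scrub ok" lines here and below are routine background scrubbing by the OSDs. The same check can be requested by hand with the standard ceph pg scrub verb; a small sketch, assuming the ceph CLI is available on a cluster host and reusing PG 2.12 from the line above:

    import subprocess

    # Request a light scrub of one placement group; "ceph pg scrub"
    # (and "ceph pg deep-scrub") are standard Ceph CLI commands.
    pgid = "2.12"  # PG id taken from the scrub lines above
    subprocess.run(["ceph", "pg", "scrub", pgid], check=True)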
Feb 25 06:52:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:52:08 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Feb 25 06:52:08 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Feb 25 06:52:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v133: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 255 B/s wr, 35 op/s; 222 B/s, 1 objects/s recovering
Feb 25 06:52:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Feb 25 06:52:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Feb 25 06:52:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Feb 25 06:52:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Feb 25 06:52:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Feb 25 06:52:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Feb 25 06:52:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Feb 25 06:52:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Feb 25 06:52:08 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Feb 25 06:52:08 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
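[editor's note] In this burst the mgr (mgr.compute-0.jzfame) walks pgp_num_actual up one step at a time on the cephfs.cephfs.meta and default.rgw.log pools; the same pattern repeats below with values 8, 9, 10 and 11, and each step commits a new osdmap epoch (e64 onward). The dispatched mon_command corresponds to the ordinary "osd pool set" CLI. A sketch of one such step in Python, with the pool name and value taken from the log; in normal operation the pg autoscaler drives this stepping, so issuing it manually is rarely needed:

    import subprocess

    # One pgp_num_actual step, equivalent to the mon_command dispatched
    # above: {"prefix": "osd pool set", "pool": ..., "var": "pgp_num_actual", ...}
    def set_pgp_num_actual(pool: str, value: int) -> None:
        subprocess.run(
            ["ceph", "osd", "pool", "set", pool, "pgp_num_actual", str(value)],
            check=True,
        )

    # Values from the log: the mgr steps 7 -> 8 -> 9 -> 10 -> 11.
    set_pgp_num_actual("cephfs.cephfs.meta", 7)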
Feb 25 06:52:08 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 64 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=64 pruub=14.791683197s) [2] r=-1 lpr=64 pi=[51,64)/1 crt=41'483 lcod 0'0 active pruub 124.278381348s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:08 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 64 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=64 pruub=14.791617393s) [2] r=-1 lpr=64 pi=[51,64)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 124.278381348s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:08 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=64) [2] r=0 lpr=64 pi=[51,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:08 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 64 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=64 pruub=14.790995598s) [2] r=-1 lpr=64 pi=[51,64)/1 crt=63'488 lcod 63'488 active pruub 124.279243469s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:08 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 64 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=64 pruub=14.791216850s) [2] r=-1 lpr=64 pi=[51,64)/1 crt=41'483 lcod 0'0 active pruub 124.279579163s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:08 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 64 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=64 pruub=14.791182518s) [2] r=-1 lpr=64 pi=[51,64)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 124.279579163s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:08 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 64 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=64 pruub=14.790780067s) [2] r=-1 lpr=64 pi=[51,64)/1 crt=63'488 lcod 63'488 unknown NOTIFY pruub 124.279243469s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:08 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 64 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=64 pruub=14.799925804s) [2] r=-1 lpr=64 pi=[51,64)/1 crt=63'484 lcod 63'484 active pruub 124.288536072s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:08 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 64 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=64 pruub=14.799863815s) [2] r=-1 lpr=64 pi=[51,64)/1 crt=63'484 lcod 63'484 unknown NOTIFY pruub 124.288536072s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:08 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=64) [2] r=0 lpr=64 pi=[51,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:08 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=64) [2] r=0 lpr=64 pi=[51,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:08 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=64) [2] r=0 lpr=64 pi=[51,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:08 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Feb 25 06:52:08 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Feb 25 06:52:08 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Feb 25 06:52:09 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Feb 25 06:52:09 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Feb 25 06:52:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Feb 25 06:52:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Feb 25 06:52:09 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Feb 25 06:52:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Feb 25 06:52:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Feb 25 06:52:09 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[51,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:09 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[51,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:09 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[51,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:09 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[51,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:09 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[51,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:09 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[51,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:09 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[51,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:09 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[51,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:09 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 65 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=65) [2]/[1] r=0 lpr=65 pi=[51,65)/1 crt=63'484 lcod 63'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:09 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 65 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=65) [2]/[1] r=0 lpr=65 pi=[51,65)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:09 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 65 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=65) [2]/[1] r=0 lpr=65 pi=[51,65)/1 crt=63'484 lcod 63'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:09 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 65 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=65) [2]/[1] r=0 lpr=65 pi=[51,65)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:09 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 65 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=65) [2]/[1] r=0 lpr=65 pi=[51,65)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:09 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 65 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=65) [2]/[1] r=0 lpr=65 pi=[51,65)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:09 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 65 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=65) [2]/[1] r=0 lpr=65 pi=[51,65)/1 crt=63'488 lcod 63'488 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:09 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 65 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=65) [2]/[1] r=0 lpr=65 pi=[51,65)/1 crt=63'488 lcod 63'488 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
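[editor's note] The start_peering_interval and "transitioning to Stray/Primary" messages that follow each new epoch are the pool-9 PGs re-peering as their acting sets move between osd.1 and osd.2 during the split; this is expected churn, and the pgmap lines show the PGs returning to active+clean. To see which PGs are still settling at any point, the PG table can be dumped and filtered. A sketch; the "pg_stats"/"pgid"/"state" field names match recent ceph pg dump JSON output, but the schema varies by release, so verify on your version:

    import json
    import subprocess

    # Dump per-PG state and list anything not yet active+clean.
    out = subprocess.run(
        ["ceph", "pg", "dump", "pgs", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    data = json.loads(out)
    # Newer releases wrap the list in "pg_stats"; older ones return a list.
    stats = data["pg_stats"] if isinstance(data, dict) else data
    for pg in stats:
        if pg["state"] != "active+clean":
            print(pg["pgid"], pg["state"])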
Feb 25 06:52:09 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Feb 25 06:52:09 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Feb 25 06:52:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v136: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 255 B/s wr, 35 op/s; 19 B/s, 0 objects/s recovering
Feb 25 06:52:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Feb 25 06:52:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Feb 25 06:52:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Feb 25 06:52:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Feb 25 06:52:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Feb 25 06:52:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Feb 25 06:52:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Feb 25 06:52:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Feb 25 06:52:10 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Feb 25 06:52:10 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Feb 25 06:52:10 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Feb 25 06:52:10 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 66 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=65/66 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[51,65)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:10 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 66 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=65/66 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[51,65)/1 crt=63'485 lcod 63'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:10 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 66 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=65/66 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[51,65)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:10 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 66 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=65/66 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[51,65)/1 crt=63'489 lcod 63'488 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Feb 25 06:52:11 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Feb 25 06:52:11 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Feb 25 06:52:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Feb 25 06:52:11 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Feb 25 06:52:11 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 67 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=0/0 n=7 ec=51/35 lis/c=65/51 les/c/f=66/52/0 sis=67) [2] r=0 lpr=67 pi=[51,67)/1 pct=0'0 crt=63'489 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:11 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 67 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=65/51 les/c/f=66/52/0 sis=67) [2] r=0 lpr=67 pi=[51,67)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:11 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 67 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=0/0 n=7 ec=51/35 lis/c=65/51 les/c/f=66/52/0 sis=67) [2] r=0 lpr=67 pi=[51,67)/1 crt=63'489 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:11 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 67 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=65/51 les/c/f=66/52/0 sis=67) [2] r=0 lpr=67 pi=[51,67)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:11 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 67 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=65/51 les/c/f=66/52/0 sis=67) [2] r=0 lpr=67 pi=[51,67)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:11 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 67 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=65/51 les/c/f=66/52/0 sis=67) [2] r=0 lpr=67 pi=[51,67)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:11 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 67 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=51/35 lis/c=65/51 les/c/f=66/52/0 sis=67) [2] r=0 lpr=67 pi=[51,67)/1 pct=0'0 crt=63'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:11 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 67 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=51/35 lis/c=65/51 les/c/f=66/52/0 sis=67) [2] r=0 lpr=67 pi=[51,67)/1 crt=63'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:11 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 67 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=65/66 n=6 ec=51/35 lis/c=65/51 les/c/f=66/52/0 sis=67 pruub=14.984430313s) [2] async=[2] r=-1 lpr=67 pi=[51,67)/1 crt=63'485 lcod 63'484 active pruub 127.525672913s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:11 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 67 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=65/66 n=7 ec=51/35 lis/c=65/51 les/c/f=66/52/0 sis=67 pruub=14.984457970s) [2] async=[2] r=-1 lpr=67 pi=[51,67)/1 crt=41'483 lcod 0'0 active pruub 127.525733948s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:11 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 67 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=65/66 n=6 ec=51/35 lis/c=65/51 les/c/f=66/52/0 sis=67 pruub=14.978965759s) [2] async=[2] r=-1 lpr=67 pi=[51,67)/1 crt=41'483 lcod 0'0 active pruub 127.520294189s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:11 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 67 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=65/66 n=7 ec=51/35 lis/c=65/51 les/c/f=66/52/0 sis=67 pruub=14.984086990s) [2] async=[2] r=-1 lpr=67 pi=[51,67)/1 crt=63'489 lcod 63'488 active pruub 127.525756836s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:11 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 67 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=65/66 n=6 ec=51/35 lis/c=65/51 les/c/f=66/52/0 sis=67 pruub=14.983798981s) [2] r=-1 lpr=67 pi=[51,67)/1 crt=63'485 lcod 63'484 unknown NOTIFY pruub 127.525672913s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:11 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 67 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=65/66 n=7 ec=51/35 lis/c=65/51 les/c/f=66/52/0 sis=67 pruub=14.983848572s) [2] r=-1 lpr=67 pi=[51,67)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 127.525733948s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:11 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 67 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=65/66 n=6 ec=51/35 lis/c=65/51 les/c/f=66/52/0 sis=67 pruub=14.978299141s) [2] r=-1 lpr=67 pi=[51,67)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 127.520294189s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:11 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 67 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=65/66 n=7 ec=51/35 lis/c=65/51 les/c/f=66/52/0 sis=67 pruub=14.983860970s) [2] r=-1 lpr=67 pi=[51,67)/1 crt=63'489 lcod 63'488 unknown NOTIFY pruub 127.525756836s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:11 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Feb 25 06:52:11 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Feb 25 06:52:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:52:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v139: 305 pgs: 4 active+remapped, 301 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 207 B/s, 5 objects/s recovering
Feb 25 06:52:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Feb 25 06:52:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Feb 25 06:52:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Feb 25 06:52:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Feb 25 06:52:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Feb 25 06:52:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Feb 25 06:52:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Feb 25 06:52:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Feb 25 06:52:12 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Feb 25 06:52:12 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Feb 25 06:52:13 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 68 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=68 pruub=10.657967567s) [2] r=-1 lpr=68 pi=[51,68)/1 crt=63'486 lcod 63'486 active pruub 124.288459778s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:13 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 68 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=68 pruub=10.657921791s) [2] r=-1 lpr=68 pi=[51,68)/1 crt=63'486 lcod 63'486 unknown NOTIFY pruub 124.288459778s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:13 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 68 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=68 pruub=10.648600578s) [2] r=-1 lpr=68 pi=[51,68)/1 crt=41'483 lcod 0'0 active pruub 124.279602051s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:13 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 68 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=68 pruub=10.648465157s) [2] r=-1 lpr=68 pi=[51,68)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 124.279602051s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:13 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Feb 25 06:52:13 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 68 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=68) [2] r=0 lpr=68 pi=[51,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:13 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 68 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=68) [2] r=0 lpr=68 pi=[51,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:13 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 68 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=67/68 n=6 ec=51/35 lis/c=65/51 les/c/f=66/52/0 sis=67) [2] r=0 lpr=67 pi=[51,67)/1 crt=63'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:13 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 68 pg[9.e( v 63'489 (0'0,63'489] local-lis/les=67/68 n=7 ec=51/35 lis/c=65/51 les/c/f=66/52/0 sis=67) [2] r=0 lpr=67 pi=[51,67)/1 crt=63'489 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:13 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 68 pg[9.6( v 41'483 (0'0,41'483] local-lis/les=67/68 n=7 ec=51/35 lis/c=65/51 les/c/f=66/52/0 sis=67) [2] r=0 lpr=67 pi=[51,67)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:13 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 68 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=67/68 n=6 ec=51/35 lis/c=65/51 les/c/f=66/52/0 sis=67) [2] r=0 lpr=67 pi=[51,67)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:13 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 66 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=58/59 n=7 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=11.478696823s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=63'486 lcod 63'486 active pruub 129.336242676s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:13 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 66 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=57/58 n=6 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=10.453901291s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=63'484 lcod 63'484 active pruub 128.311584473s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:13 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 68 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=58/59 n=7 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=11.478470802s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=63'486 lcod 63'486 unknown NOTIFY pruub 129.336242676s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:13 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 68 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=57/58 n=6 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=10.453787804s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=63'484 lcod 63'484 unknown NOTIFY pruub 128.311584473s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:13 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 68 pg[6.8( v 35'39 (0'0,35'39] local-lis/les=47/48 n=1 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=68 pruub=13.160320282s) [2] r=-1 lpr=68 pi=[47,68)/1 crt=35'39 lcod 0'0 active pruub 131.017944336s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:13 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 68 pg[6.8( v 35'39 (0'0,35'39] local-lis/les=47/48 n=1 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=68 pruub=13.159966469s) [2] r=-1 lpr=68 pi=[47,68)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 131.017944336s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:13 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 66 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=58/59 n=7 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=11.477666855s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=63'484 lcod 63'484 active pruub 129.336166382s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:13 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 68 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=58/59 n=7 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=66 pruub=11.477419853s) [2] r=-1 lpr=66 pi=[58,66)/1 crt=63'484 lcod 63'484 unknown NOTIFY pruub 129.336166382s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:13 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 66 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=57/58 n=6 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=10.452865601s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=41'483 active pruub 128.311813354s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:13 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 68 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=57/58 n=6 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=10.452751160s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=41'483 unknown NOTIFY pruub 128.311813354s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:13 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 68 pg[9.7( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=66) [2] r=0 lpr=68 pi=[58,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:13 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 68 pg[9.17( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=68 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:13 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 68 pg[6.8( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:13 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 68 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=68 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:13 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 68 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=66) [2] r=0 lpr=68 pi=[58,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Feb 25 06:52:13 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Feb 25 06:52:13 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Feb 25 06:52:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Feb 25 06:52:13 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Feb 25 06:52:13 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 69 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=69) [2]/[1] r=0 lpr=69 pi=[51,69)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:13 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 69 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=69) [2]/[1] r=0 lpr=69 pi=[51,69)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:13 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 69 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=69) [2]/[1] r=0 lpr=69 pi=[51,69)/1 crt=63'486 lcod 63'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:13 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 69 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=69) [2]/[1] r=0 lpr=69 pi=[51,69)/1 crt=63'486 lcod 63'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:13 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 69 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[51,69)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:13 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 69 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[51,69)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:13 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 69 pg[9.17( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[57,69)/2 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:13 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 69 pg[9.17( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[57,69)/2 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:13 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 69 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[58,69)/2 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:13 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 69 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[58,69)/2 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:13 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 69 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[51,69)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:13 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 69 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=69) [2]/[1] r=-1 lpr=69 pi=[51,69)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:13 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 69 pg[9.7( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[58,69)/2 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:13 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 69 pg[9.7( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[58,69)/2 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:13 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 69 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[57,69)/2 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:13 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 69 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[57,69)/2 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:13 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 69 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=57/58 n=6 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=69) [2]/[0] r=0 lpr=69 pi=[57,69)/2 crt=41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:13 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 69 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=58/59 n=7 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=69) [2]/[0] r=0 lpr=69 pi=[58,69)/2 crt=63'486 lcod 63'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:13 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 69 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=57/58 n=6 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=69) [2]/[0] r=0 lpr=69 pi=[57,69)/2 crt=41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:13 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 69 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=58/59 n=7 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=69) [2]/[0] r=0 lpr=69 pi=[58,69)/2 crt=63'484 lcod 63'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:13 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 69 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=58/59 n=7 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=69) [2]/[0] r=0 lpr=69 pi=[58,69)/2 crt=63'486 lcod 63'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:13 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 69 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=58/59 n=7 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=69) [2]/[0] r=0 lpr=69 pi=[58,69)/2 crt=63'484 lcod 63'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:13 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 69 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=57/58 n=6 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=69) [2]/[0] r=0 lpr=69 pi=[57,69)/2 crt=63'484 lcod 63'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:13 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 69 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=57/58 n=6 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=69) [2]/[0] r=0 lpr=69 pi=[57,69)/2 crt=63'484 lcod 63'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:13 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 69 pg[6.8( v 35'39 (0'0,35'39] local-lis/les=68/69 n=1 ec=47/22 lis/c=47/47 les/c/f=48/48/0 sis=68) [2] r=0 lpr=68 pi=[47,68)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v142: 305 pgs: 4 active+remapped, 301 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 207 B/s, 5 objects/s recovering
Feb 25 06:52:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Feb 25 06:52:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Feb 25 06:52:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Feb 25 06:52:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Feb 25 06:52:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Feb 25 06:52:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Feb 25 06:52:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Feb 25 06:52:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Feb 25 06:52:15 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Feb 25 06:52:15 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Feb 25 06:52:15 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Feb 25 06:52:15 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 70 pg[6.9( v 35'39 (0'0,35'39] local-lis/les=54/55 n=1 ec=47/22 lis/c=54/54 les/c/f=55/55/0 sis=70 pruub=13.739082336s) [0] r=-1 lpr=70 pi=[54,70)/1 crt=35'39 lcod 0'0 active pruub 129.372039795s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:15 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 70 pg[6.9( v 35'39 (0'0,35'39] local-lis/les=54/55 n=1 ec=47/22 lis/c=54/54 les/c/f=55/55/0 sis=70 pruub=13.739030838s) [0] r=-1 lpr=70 pi=[54,70)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 129.372039795s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:15 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 70 pg[6.9( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=54/54 les/c/f=55/55/0 sis=70) [0] r=0 lpr=70 pi=[54,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:15 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Feb 25 06:52:15 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Feb 25 06:52:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Feb 25 06:52:16 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Feb 25 06:52:16 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Feb 25 06:52:16 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 70 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=69/70 n=6 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[57,69)/2 crt=41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:16 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 70 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=69/70 n=6 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[57,69)/2 crt=63'485 lcod 63'484 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:16 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 70 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=69/70 n=7 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[58,69)/2 crt=63'485 lcod 63'484 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:16 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 70 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=69/70 n=7 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[58,69)/2 crt=63'487 lcod 63'486 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:16 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 70 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=69/70 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=69) [2]/[1] async=[2] r=0 lpr=69 pi=[51,69)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:16 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 70 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=69/70 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=69) [2]/[1] async=[2] r=0 lpr=69 pi=[51,69)/1 crt=63'487 lcod 63'486 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Feb 25 06:52:16 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Feb 25 06:52:16 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 71 pg[6.9( v 35'39 (0'0,35'39] local-lis/les=70/71 n=1 ec=47/22 lis/c=54/54 les/c/f=55/55/0 sis=70) [0] r=0 lpr=70 pi=[54,70)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v145: 305 pgs: 1 active+recovering+remapped, 1 active+recovery_wait+remapped, 303 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 12/252 objects misplaced (4.762%); 0 B/s, 0 objects/s recovering
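The misplaced percentage in the pgmap line is simply 12/252 = 0.047619, i.e. 4.762%: twelve of the cluster's 252 objects sit on PGs whose acting set still differs from the up set, and the figure drains to zero once the active+remapped PGs finish recovery.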
Feb 25 06:52:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Feb 25 06:52:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Feb 25 06:52:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Feb 25 06:52:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Feb 25 06:52:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Feb 25 06:52:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Feb 25 06:52:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Feb 25 06:52:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Feb 25 06:52:17 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Feb 25 06:52:17 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Feb 25 06:52:17 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 72 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=69/51 les/c/f=70/52/0 sis=72) [2] r=0 lpr=72 pi=[51,72)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:17 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 72 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=69/51 les/c/f=70/52/0 sis=72) [2] r=0 lpr=72 pi=[51,72)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:17 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 72 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=0/0 n=7 ec=51/35 lis/c=69/58 les/c/f=70/59/0 sis=72) [2] r=0 lpr=72 pi=[58,72)/2 pct=0'0 crt=63'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:17 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 72 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=0/0 n=7 ec=51/35 lis/c=69/58 les/c/f=70/59/0 sis=72) [2] r=0 lpr=72 pi=[58,72)/2 crt=63'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:17 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 72 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=51/35 lis/c=69/57 les/c/f=70/58/0 sis=72) [2] r=0 lpr=72 pi=[57,72)/2 pct=0'0 crt=63'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:17 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 72 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=51/35 lis/c=69/57 les/c/f=70/58/0 sis=72) [2] r=0 lpr=72 pi=[57,72)/2 crt=63'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:17 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 72 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=0/0 n=6 ec=51/35 lis/c=69/51 les/c/f=70/52/0 sis=72) [2] r=0 lpr=72 pi=[51,72)/1 pct=0'0 crt=63'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:17 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 72 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=0/0 n=6 ec=51/35 lis/c=69/51 les/c/f=70/52/0 sis=72) [2] r=0 lpr=72 pi=[51,72)/1 crt=63'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:17 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 72 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=69/57 les/c/f=70/58/0 sis=72) [2] r=0 lpr=72 pi=[57,72)/2 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:17 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 72 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=69/57 les/c/f=70/58/0 sis=72) [2] r=0 lpr=72 pi=[57,72)/2 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:17 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 72 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=0/0 n=7 ec=51/35 lis/c=69/58 les/c/f=70/59/0 sis=72) [2] r=0 lpr=72 pi=[58,72)/2 pct=0'0 crt=63'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:17 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 72 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=0/0 n=7 ec=51/35 lis/c=69/58 les/c/f=70/59/0 sis=72) [2] r=0 lpr=72 pi=[58,72)/2 crt=63'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:17 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Feb 25 06:52:17 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 72 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=69/70 n=6 ec=51/35 lis/c=69/51 les/c/f=70/52/0 sis=72 pruub=14.956593513s) [2] async=[2] r=-1 lpr=72 pi=[51,72)/1 crt=63'487 lcod 63'486 active pruub 132.645523071s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:17 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 72 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=69/70 n=6 ec=51/35 lis/c=69/51 les/c/f=70/52/0 sis=72 pruub=14.956520081s) [2] r=-1 lpr=72 pi=[51,72)/1 crt=63'487 lcod 63'486 unknown NOTIFY pruub 132.645523071s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:17 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 72 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=69/70 n=7 ec=51/35 lis/c=69/51 les/c/f=70/52/0 sis=72 pruub=14.956008911s) [2] async=[2] r=-1 lpr=72 pi=[51,72)/1 crt=41'483 lcod 0'0 active pruub 132.645446777s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:17 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 72 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=69/70 n=7 ec=51/35 lis/c=69/51 les/c/f=70/52/0 sis=72 pruub=14.955935478s) [2] r=-1 lpr=72 pi=[51,72)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 132.645446777s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:17 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 72 pg[6.a( v 35'39 (0'0,35'39] local-lis/les=56/57 n=1 ec=47/22 lis/c=56/56 les/c/f=57/57/0 sis=72 pruub=13.734545708s) [0] r=-1 lpr=72 pi=[56,72)/1 crt=35'39 lcod 0'0 active pruub 131.424575806s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:17 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 72 pg[6.a( v 35'39 (0'0,35'39] local-lis/les=56/57 n=1 ec=47/22 lis/c=56/56 les/c/f=57/57/0 sis=72 pruub=13.734519005s) [0] r=-1 lpr=72 pi=[56,72)/1 crt=35'39 lcod 0'0 unknown NOTIFY pruub 131.424575806s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:17 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 72 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=69/70 n=6 ec=51/35 lis/c=69/57 les/c/f=70/58/0 sis=72 pruub=14.951440811s) [2] async=[2] r=-1 lpr=72 pi=[57,72)/2 crt=41'483 active pruub 136.527694702s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:17 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 72 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=69/70 n=6 ec=51/35 lis/c=69/57 les/c/f=70/58/0 sis=72 pruub=14.951379776s) [2] r=-1 lpr=72 pi=[57,72)/2 crt=41'483 unknown NOTIFY pruub 136.527694702s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:17 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 72 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=69/70 n=7 ec=51/35 lis/c=69/58 les/c/f=70/59/0 sis=72 pruub=14.953891754s) [2] async=[2] r=-1 lpr=72 pi=[58,72)/2 crt=63'485 lcod 63'484 active pruub 136.530426025s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:17 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 72 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=69/70 n=7 ec=51/35 lis/c=69/58 les/c/f=70/59/0 sis=72 pruub=14.953821182s) [2] r=-1 lpr=72 pi=[58,72)/2 crt=63'485 lcod 63'484 unknown NOTIFY pruub 136.530426025s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:17 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 72 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=69/70 n=6 ec=51/35 lis/c=69/57 les/c/f=70/58/0 sis=72 pruub=14.950997353s) [2] async=[2] r=-1 lpr=72 pi=[57,72)/2 crt=63'485 lcod 63'484 active pruub 136.527725220s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:17 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 72 pg[6.a( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=56/56 les/c/f=57/57/0 sis=72) [0] r=0 lpr=72 pi=[56,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:17 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 72 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=69/70 n=6 ec=51/35 lis/c=69/57 les/c/f=70/58/0 sis=72 pruub=14.950905800s) [2] r=-1 lpr=72 pi=[57,72)/2 crt=63'485 lcod 63'484 unknown NOTIFY pruub 136.527725220s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:17 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 72 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=69/70 n=7 ec=51/35 lis/c=69/58 les/c/f=70/59/0 sis=72 pruub=14.953100204s) [2] async=[2] r=-1 lpr=72 pi=[58,72)/2 crt=63'487 lcod 63'486 active pruub 136.530441284s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:17 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 72 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=69/70 n=7 ec=51/35 lis/c=69/58 les/c/f=70/59/0 sis=72 pruub=14.953025818s) [2] r=-1 lpr=72 pi=[58,72)/2 crt=63'487 lcod 63'486 unknown NOTIFY pruub 136.530441284s@ mbc={}] state<Start>: transitioning to Stray
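Reading the peering burst above: the epoch-72 osdmap re-mapped several pool-9 PGs, so each affected OSD runs PeeringState::start_peering_interval, logs the old -> new up/acting sets, and restarts the PG state machine; role 0 means the OSD heads the new acting set and transitions to Primary, while role -1 means it no longer serves the PG and transitions to Stray (pruub is the "prior readable until" upper bound used for read leases). One way to inspect the outcome for one of these PGs, sketched with the ceph CLI from Python (pgid taken from the log; the JSON field names are an assumption based on recent Ceph releases):

    # Sketch: query peering state for PG 9.18 after the interval change.
    import json
    import subprocess

    def pg_query(pgid):
        out = subprocess.check_output(["ceph", "pg", pgid, "query", "--format", "json"])
        return json.loads(out)

    info = pg_query("9.18")
    print(info["state"])                  # e.g. "active+clean" once peering settles
    stats = info["info"]["stats"]
    print("up:", stats["up"], "acting:", stats["acting"])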
Feb 25 06:52:17 np0005629333 systemd-logind[811]: New session 34 of user zuul.
Feb 25 06:52:17 np0005629333 systemd[1]: Started Session 34 of User zuul.
Feb 25 06:52:17 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Feb 25 06:52:17 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Feb 25 06:52:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:52:18 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Feb 25 06:52:18 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 5.8 scrub ok
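The paired "scrub starts" / "scrub ok" DBG lines throughout this window are routine shallow scrubs dispatched by each OSD's scrub scheduler; on PGs this small they complete within the same second. The same check can be requested by hand, as a sketch (assumes admin access; pgid from the log):

    # Sketch: request a shallow scrub of PG 5.8, as the scheduler just did.
    import subprocess

    subprocess.run(["ceph", "pg", "scrub", "5.8"], check=True)
    # Checksum-verifying variant:
    # subprocess.run(["ceph", "pg", "deep-scrub", "5.8"], check=True)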
Feb 25 06:52:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Feb 25 06:52:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Feb 25 06:52:18 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Feb 25 06:52:18 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 73 pg[6.a( v 35'39 (0'0,35'39] local-lis/les=72/73 n=1 ec=47/22 lis/c=56/56 les/c/f=57/57/0 sis=72) [0] r=0 lpr=72 pi=[56,72)/1 crt=35'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:18 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Feb 25 06:52:18 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Feb 25 06:52:18 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 73 pg[9.f( v 63'485 (0'0,63'485] local-lis/les=72/73 n=7 ec=51/35 lis/c=69/58 les/c/f=70/59/0 sis=72) [2] r=0 lpr=72 pi=[58,72)/2 crt=63'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:18 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 73 pg[9.17( v 63'485 (0'0,63'485] local-lis/les=72/73 n=6 ec=51/35 lis/c=69/57 les/c/f=70/58/0 sis=72) [2] r=0 lpr=72 pi=[57,72)/2 crt=63'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:18 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 73 pg[9.7( v 63'487 (0'0,63'487] local-lis/les=72/73 n=7 ec=51/35 lis/c=69/58 les/c/f=70/59/0 sis=72) [2] r=0 lpr=72 pi=[58,72)/2 crt=63'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:18 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 73 pg[9.8( v 41'483 (0'0,41'483] local-lis/les=72/73 n=7 ec=51/35 lis/c=69/51 les/c/f=70/52/0 sis=72) [2] r=0 lpr=72 pi=[51,72)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:18 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 73 pg[9.18( v 63'487 (0'0,63'487] local-lis/les=72/73 n=6 ec=51/35 lis/c=69/51 les/c/f=70/52/0 sis=72) [2] r=0 lpr=72 pi=[51,72)/1 crt=63'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:18 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 73 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=72/73 n=6 ec=51/35 lis/c=69/57 les/c/f=70/58/0 sis=72) [2] r=0 lpr=72 pi=[57,72)/2 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:18 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 4.c scrub starts
Feb 25 06:52:18 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 4.c scrub ok
Feb 25 06:52:18 np0005629333 python3.9[100490]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:52:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v148: 305 pgs: 1 peering, 1 active+recovering+remapped, 1 active+recovery_wait+remapped, 302 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 12/252 objects misplaced (4.762%); 0 B/s, 0 objects/s recovering
Feb 25 06:52:19 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 2.e scrub starts
Feb 25 06:52:19 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 2.e scrub ok
Feb 25 06:52:19 np0005629333 python3.9[100709]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
    pushd /var/tmp
    curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
    pushd repo-setup-main
    python3 -m venv ./venv
    PBR_VERSION=0.0.0 ./venv/bin/pip install ./
    ./venv/bin/repo-setup current-podified -b antelope
    popd
    rm -rf repo-setup-main
    _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:52:20 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Feb 25 06:52:20 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Feb 25 06:52:20 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Feb 25 06:52:20 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Feb 25 06:52:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v149: 305 pgs: 1 peering, 1 active+recovering+remapped, 1 active+recovery_wait+remapped, 302 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 12/252 objects misplaced (4.762%); 0 B/s, 0 objects/s recovering
Feb 25 06:52:20 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 5.a scrub starts
Feb 25 06:52:21 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 5.a scrub ok
Feb 25 06:52:22 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Feb 25 06:52:22 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Feb 25 06:52:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:52:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v150: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 250 B/s, 5 objects/s recovering
Feb 25 06:52:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Feb 25 06:52:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Feb 25 06:52:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Feb 25 06:52:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Feb 25 06:52:22 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 2.c scrub starts
Feb 25 06:52:22 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 2.c scrub ok
Feb 25 06:52:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Feb 25 06:52:23 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Feb 25 06:52:23 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Feb 25 06:52:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Feb 25 06:52:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Feb 25 06:52:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Feb 25 06:52:23 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Feb 25 06:52:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:52:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:52:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 06:52:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:52:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 06:52:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:52:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 06:52:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 06:52:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 06:52:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 06:52:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:52:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
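This burst of mon_commands is the cephadm mgr module's periodic reconcile: it regenerates a minimal ceph.conf, re-reads the client.admin and client.bootstrap-osd keyrings, persists its OSD-removal queue under the config-key mgr/cephadm/osd_remove_queue, and checks "osd tree" for destroyed OSDs awaiting replacement. Note that the audit entry for the config-key set above (and its echo a few lines below) ends right after the entity field: the monitor deliberately withholds config-key payloads from the audit channel because they can contain secrets, so the truncation is expected rather than log corruption. The read-only half of that reconcile, sketched in Python (assumes the client.admin keyring on this host):

    # Sketch: the same read-only queries cephadm issues above.
    import subprocess

    minimal_conf  = subprocess.check_output(["ceph", "config", "generate-minimal-conf"])
    admin_keyring = subprocess.check_output(["ceph", "auth", "get", "client.admin"])
    destroyed     = subprocess.check_output(
        ["ceph", "osd", "tree", "destroyed", "--format", "json"])
    print(minimal_conf.decode())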
Feb 25 06:52:24 np0005629333 podman[100874]: 2026-02-25 11:52:24.27699085 +0000 UTC m=+0.055788539 container create 78e3bea6ab4e258ec4184ec7c04e9aef065754bf9fd78d627ee8dfd6987cb707 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_diffie, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:52:24 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Feb 25 06:52:24 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Feb 25 06:52:24 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:52:24 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:52:24 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 06:52:24 np0005629333 systemd[1]: Started libpod-conmon-78e3bea6ab4e258ec4184ec7c04e9aef065754bf9fd78d627ee8dfd6987cb707.scope.
Feb 25 06:52:24 np0005629333 podman[100874]: 2026-02-25 11:52:24.245923996 +0000 UTC m=+0.024721695 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:52:24 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:52:24 np0005629333 podman[100874]: 2026-02-25 11:52:24.384945923 +0000 UTC m=+0.163743672 container init 78e3bea6ab4e258ec4184ec7c04e9aef065754bf9fd78d627ee8dfd6987cb707 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_diffie, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:52:24 np0005629333 podman[100874]: 2026-02-25 11:52:24.393289491 +0000 UTC m=+0.172087190 container start 78e3bea6ab4e258ec4184ec7c04e9aef065754bf9fd78d627ee8dfd6987cb707 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_diffie, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 06:52:24 np0005629333 podman[100874]: 2026-02-25 11:52:24.396970766 +0000 UTC m=+0.175768465 container attach 78e3bea6ab4e258ec4184ec7c04e9aef065754bf9fd78d627ee8dfd6987cb707 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_diffie, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 06:52:24 np0005629333 interesting_diffie[100890]: 167 167
Feb 25 06:52:24 np0005629333 systemd[1]: libpod-78e3bea6ab4e258ec4184ec7c04e9aef065754bf9fd78d627ee8dfd6987cb707.scope: Deactivated successfully.
Feb 25 06:52:24 np0005629333 podman[100874]: 2026-02-25 11:52:24.399784546 +0000 UTC m=+0.178582265 container died 78e3bea6ab4e258ec4184ec7c04e9aef065754bf9fd78d627ee8dfd6987cb707 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:52:24 np0005629333 systemd[1]: var-lib-containers-storage-overlay-eef38a07d0f0b271950db16b0925db9ad643c6d08af44c11f465cf0dc1befb90-merged.mount: Deactivated successfully.
Feb 25 06:52:24 np0005629333 podman[100874]: 2026-02-25 11:52:24.438147738 +0000 UTC m=+0.216945397 container remove 78e3bea6ab4e258ec4184ec7c04e9aef065754bf9fd78d627ee8dfd6987cb707 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 06:52:24 np0005629333 systemd[1]: libpod-conmon-78e3bea6ab4e258ec4184ec7c04e9aef065754bf9fd78d627ee8dfd6987cb707.scope: Deactivated successfully.
Feb 25 06:52:24 np0005629333 podman[100914]: 2026-02-25 11:52:24.614832257 +0000 UTC m=+0.055577633 container create 99cf0c0c856bc1fa4365b56a76521c02a61836433bfbccd7af3233c8b2d1b5b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_dewdney, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:52:24 np0005629333 systemd[1]: Started libpod-conmon-99cf0c0c856bc1fa4365b56a76521c02a61836433bfbccd7af3233c8b2d1b5b9.scope.
Feb 25 06:52:24 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:52:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c4342590c44aa90560767a6b5845ad788e6c49785c70ae7c822ba1805c660e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:52:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c4342590c44aa90560767a6b5845ad788e6c49785c70ae7c822ba1805c660e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:52:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c4342590c44aa90560767a6b5845ad788e6c49785c70ae7c822ba1805c660e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:52:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c4342590c44aa90560767a6b5845ad788e6c49785c70ae7c822ba1805c660e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:52:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c4342590c44aa90560767a6b5845ad788e6c49785c70ae7c822ba1805c660e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:52:24 np0005629333 podman[100914]: 2026-02-25 11:52:24.58896096 +0000 UTC m=+0.029706426 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:52:24 np0005629333 podman[100914]: 2026-02-25 11:52:24.705495898 +0000 UTC m=+0.146241334 container init 99cf0c0c856bc1fa4365b56a76521c02a61836433bfbccd7af3233c8b2d1b5b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_dewdney, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:52:24 np0005629333 podman[100914]: 2026-02-25 11:52:24.712380594 +0000 UTC m=+0.153125980 container start 99cf0c0c856bc1fa4365b56a76521c02a61836433bfbccd7af3233c8b2d1b5b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 06:52:24 np0005629333 podman[100914]: 2026-02-25 11:52:24.716655275 +0000 UTC m=+0.157400691 container attach 99cf0c0c856bc1fa4365b56a76521c02a61836433bfbccd7af3233c8b2d1b5b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:52:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v152: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 218 B/s, 4 objects/s recovering
Feb 25 06:52:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Feb 25 06:52:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Feb 25 06:52:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Feb 25 06:52:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Feb 25 06:52:25 np0005629333 eager_dewdney[100931]: --> passed data devices: 0 physical, 3 LVM
Feb 25 06:52:25 np0005629333 eager_dewdney[100931]: --> All data devices are unavailable
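The two lines above are the whole point of the short-lived eager_dewdney container: cephadm runs ceph-volume inside the ceph image to ask whether any local disks could become new OSDs. "passed data devices: 0 physical, 3 LVM" plus "All data devices are unavailable" means the only candidates are the three LVM volumes already backing osd.0-osd.2, so there is nothing to create and the container exits immediately (the "died" / "remove" events that follow). Roughly the same probe by hand, as a sketch (the device list is hypothetical; run inside a ceph container or on a host with ceph-volume installed):

    # Sketch: dry-run report of what ceph-volume would do with these devices.
    import subprocess

    result = subprocess.run(
        ["ceph-volume", "lvm", "batch", "--report", "--format", "json",
         "/dev/vdb", "/dev/vdc", "/dev/vdd"],   # hypothetical candidates
        capture_output=True, text=True,
    )
    print(result.stdout or result.stderr)       # already-consumed devices show as unavailable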
Feb 25 06:52:25 np0005629333 systemd[1]: libpod-99cf0c0c856bc1fa4365b56a76521c02a61836433bfbccd7af3233c8b2d1b5b9.scope: Deactivated successfully.
Feb 25 06:52:25 np0005629333 podman[100914]: 2026-02-25 11:52:25.154600041 +0000 UTC m=+0.595345447 container died 99cf0c0c856bc1fa4365b56a76521c02a61836433bfbccd7af3233c8b2d1b5b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_dewdney, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 06:52:25 np0005629333 systemd[1]: var-lib-containers-storage-overlay-22c4342590c44aa90560767a6b5845ad788e6c49785c70ae7c822ba1805c660e-merged.mount: Deactivated successfully.
Feb 25 06:52:25 np0005629333 podman[100914]: 2026-02-25 11:52:25.220450796 +0000 UTC m=+0.661196212 container remove 99cf0c0c856bc1fa4365b56a76521c02a61836433bfbccd7af3233c8b2d1b5b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 06:52:25 np0005629333 systemd[1]: libpod-conmon-99cf0c0c856bc1fa4365b56a76521c02a61836433bfbccd7af3233c8b2d1b5b9.scope: Deactivated successfully.
Feb 25 06:52:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Feb 25 06:52:25 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Feb 25 06:52:25 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Feb 25 06:52:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Feb 25 06:52:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Feb 25 06:52:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Feb 25 06:52:25 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Feb 25 06:52:25 np0005629333 podman[101033]: 2026-02-25 11:52:25.700949472 +0000 UTC m=+0.054341557 container create 3c2577478e9a05aed3760ae882f9121ef4508359a634563df81aa4769d18e9db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_faraday, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:52:25 np0005629333 systemd[1]: Started libpod-conmon-3c2577478e9a05aed3760ae882f9121ef4508359a634563df81aa4769d18e9db.scope.
Feb 25 06:52:25 np0005629333 podman[101033]: 2026-02-25 11:52:25.676679672 +0000 UTC m=+0.030071797 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:52:25 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:52:25 np0005629333 podman[101033]: 2026-02-25 11:52:25.789246896 +0000 UTC m=+0.142639061 container init 3c2577478e9a05aed3760ae882f9121ef4508359a634563df81aa4769d18e9db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_faraday, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:52:25 np0005629333 podman[101033]: 2026-02-25 11:52:25.797754948 +0000 UTC m=+0.151147013 container start 3c2577478e9a05aed3760ae882f9121ef4508359a634563df81aa4769d18e9db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 06:52:25 np0005629333 podman[101033]: 2026-02-25 11:52:25.800955419 +0000 UTC m=+0.154347534 container attach 3c2577478e9a05aed3760ae882f9121ef4508359a634563df81aa4769d18e9db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_faraday, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 06:52:25 np0005629333 strange_faraday[101050]: 167 167
Feb 25 06:52:25 np0005629333 systemd[1]: libpod-3c2577478e9a05aed3760ae882f9121ef4508359a634563df81aa4769d18e9db.scope: Deactivated successfully.
Feb 25 06:52:25 np0005629333 podman[101033]: 2026-02-25 11:52:25.803219493 +0000 UTC m=+0.156611568 container died 3c2577478e9a05aed3760ae882f9121ef4508359a634563df81aa4769d18e9db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_faraday, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:52:25 np0005629333 systemd[1]: var-lib-containers-storage-overlay-63121b2ee66e4e38a2e8f03a554a3c9587a28e2e3e2be9d518d36a3e8baef201-merged.mount: Deactivated successfully.
Feb 25 06:52:25 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 74 pg[6.b( v 35'39 (0'0,35'39] local-lis/les=58/59 n=1 ec=47/22 lis/c=58/58 les/c/f=59/59/0 sis=74 pruub=15.001687050s) [1] r=-1 lpr=74 pi=[58,74)/1 crt=35'39 active pruub 145.336486816s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:25 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 74 pg[6.b( v 35'39 (0'0,35'39] local-lis/les=58/59 n=1 ec=47/22 lis/c=58/58 les/c/f=59/59/0 sis=74 pruub=15.001637459s) [1] r=-1 lpr=74 pi=[58,74)/1 crt=35'39 unknown NOTIFY pruub 145.336486816s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:25 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 74 pg[6.b( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=58/58 les/c/f=59/59/0 sis=74) [1] r=0 lpr=74 pi=[58,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:25 np0005629333 podman[101033]: 2026-02-25 11:52:25.841411641 +0000 UTC m=+0.194803726 container remove 3c2577478e9a05aed3760ae882f9121ef4508359a634563df81aa4769d18e9db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_faraday, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:52:25 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 75 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=75 pruub=13.819780350s) [2] r=-1 lpr=75 pi=[51,75)/1 crt=41'483 lcod 0'0 active pruub 140.279342651s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:25 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 75 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=75 pruub=13.819732666s) [2] r=-1 lpr=75 pi=[51,75)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 140.279342651s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:25 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 75 pg[9.c( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:25 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 75 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=75 pruub=13.827685356s) [2] r=-1 lpr=75 pi=[51,75)/1 crt=63'486 lcod 63'486 active pruub 140.288787842s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:25 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 75 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=75 pruub=13.827627182s) [2] r=-1 lpr=75 pi=[51,75)/1 crt=63'486 lcod 63'486 unknown NOTIFY pruub 140.288787842s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:25 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 75 pg[9.1c( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=75) [2] r=0 lpr=75 pi=[51,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:25 np0005629333 systemd[1]: libpod-conmon-3c2577478e9a05aed3760ae882f9121ef4508359a634563df81aa4769d18e9db.scope: Deactivated successfully.
Feb 25 06:52:25 np0005629333 podman[101077]: 2026-02-25 11:52:25.998118581 +0000 UTC m=+0.043567071 container create 3dd4b6f7721e91e81c82a966774826013f51edb38f6e0839f5d97cdec6ba9c80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mccarthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 06:52:26 np0005629333 systemd[1]: Started libpod-conmon-3dd4b6f7721e91e81c82a966774826013f51edb38f6e0839f5d97cdec6ba9c80.scope.
Feb 25 06:52:26 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:52:26 np0005629333 podman[101077]: 2026-02-25 11:52:25.980448088 +0000 UTC m=+0.025896588 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:52:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e51e26011f16f0a7ff246c7213414f86ff9ea240136c3bf1460fa9a5c9732c2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:52:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e51e26011f16f0a7ff246c7213414f86ff9ea240136c3bf1460fa9a5c9732c2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:52:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e51e26011f16f0a7ff246c7213414f86ff9ea240136c3bf1460fa9a5c9732c2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:52:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e51e26011f16f0a7ff246c7213414f86ff9ea240136c3bf1460fa9a5c9732c2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:52:26 np0005629333 podman[101077]: 2026-02-25 11:52:26.091741706 +0000 UTC m=+0.137190276 container init 3dd4b6f7721e91e81c82a966774826013f51edb38f6e0839f5d97cdec6ba9c80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Feb 25 06:52:26 np0005629333 podman[101077]: 2026-02-25 11:52:26.099670002 +0000 UTC m=+0.145118492 container start 3dd4b6f7721e91e81c82a966774826013f51edb38f6e0839f5d97cdec6ba9c80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mccarthy, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 06:52:26 np0005629333 podman[101077]: 2026-02-25 11:52:26.104209241 +0000 UTC m=+0.149657781 container attach 3dd4b6f7721e91e81c82a966774826013f51edb38f6e0839f5d97cdec6ba9c80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:52:26 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Feb 25 06:52:26 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Feb 25 06:52:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Feb 25 06:52:26 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Feb 25 06:52:26 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Feb 25 06:52:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Feb 25 06:52:26 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Feb 25 06:52:26 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 76 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=0 lpr=76 pi=[51,76)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:26 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:26 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 76 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=51/52 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=0 lpr=76 pi=[51,76)/1 crt=41'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:26 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 76 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=0 lpr=76 pi=[51,76)/1 crt=63'486 lcod 63'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:26 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 76 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=51/52 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=0 lpr=76 pi=[51,76)/1 crt=63'486 lcod 63'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:26 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 76 pg[9.c( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:26 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:26 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 76 pg[9.1c( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] r=-1 lpr=76 pi=[51,76)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
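
The burst above is one peering interval change driven by the osdmap bump to e76: for pgs 9.c and 9.1c the up set stays [2], but the acting set flips [2] -> [1], so osd.1 takes over as temporary primary (role -1 -> 0, "transitioning to Primary") while osd.2 drops to role -1 and goes Stray. A minimal sketch for pulling these transitions out of the journal; the line shape and regex are assumptions taken from this excerpt, not a stable Ceph log format:

```python
import re

# Shape assumed from the ceph-osd lines above; Ceph does not guarantee it.
PEERING_RE = re.compile(
    r"(?P<osd>osd\.\d+) pg_epoch: (?P<epoch>\d+) pg\[(?P<pgid>[^(\s]+)\(.*"
    r"start_peering_interval up \[(?P<up_from>[\d,]*)\] -> \[(?P<up_to>[\d,]*)\], "
    r"acting \[(?P<acting_from>[\d,]*)\] -> \[(?P<acting_to>[\d,]*)\]"
)

def peering_changes(lines):
    """Yield (osd, pgid, epoch, up_change, acting_change) per matching line."""
    for line in lines:
        m = PEERING_RE.search(line)
        if m:
            yield (m["osd"], m["pgid"], int(m["epoch"]),
                   (m["up_from"], m["up_to"]),
                   (m["acting_from"], m["acting_to"]))

# Fed this excerpt (e.g. `journalctl -o cat | ...`), pg 9.c at epoch 76 shows
# up ('2', '2') and acting ('2', '1') on both osd.1 and osd.2.
```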
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]: {
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:    "0": [
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:        {
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "devices": [
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "/dev/loop3"
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            ],
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "lv_name": "ceph_lv0",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "lv_size": "21470642176",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "name": "ceph_lv0",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "tags": {
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.cluster_name": "ceph",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.crush_device_class": "",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.encrypted": "0",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.objectstore": "bluestore",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.osd_id": "0",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.type": "block",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.vdo": "0",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.with_tpm": "0"
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            },
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "type": "block",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "vg_name": "ceph_vg0"
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:        }
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:    ],
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:    "1": [
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:        {
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "devices": [
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "/dev/loop4"
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            ],
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "lv_name": "ceph_lv1",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "lv_size": "21470642176",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "name": "ceph_lv1",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "tags": {
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.cluster_name": "ceph",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.crush_device_class": "",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.encrypted": "0",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.objectstore": "bluestore",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.osd_id": "1",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.type": "block",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.vdo": "0",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.with_tpm": "0"
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            },
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "type": "block",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "vg_name": "ceph_vg1"
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:        }
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:    ],
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:    "2": [
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:        {
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "devices": [
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "/dev/loop5"
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            ],
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "lv_name": "ceph_lv2",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "lv_size": "21470642176",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "name": "ceph_lv2",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "tags": {
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.cluster_name": "ceph",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.crush_device_class": "",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.encrypted": "0",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.objectstore": "bluestore",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.osd_id": "2",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.type": "block",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.vdo": "0",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:                "ceph.with_tpm": "0"
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            },
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "type": "block",
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:            "vg_name": "ceph_vg2"
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:        }
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]:    ]
Feb 25 06:52:26 np0005629333 mystifying_mccarthy[101094]: }
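
The JSON block that mystifying_mccarthy just printed is a ceph-volume style LV report: a map of OSD id to a list of logical volumes, whose LVM tags carry the cluster fsid, OSD fsid and objectstore type that cephadm uses to re-associate OSDs with their devices. A minimal sketch for flattening it into an OSD-to-device map, assuming the JSON has already been extracted from the journal prefixes:

```python
import json

def osd_devices(report_text):
    """Flatten a ceph-volume style report (OSD id -> list of LVs, as logged
    above) into OSD id -> backing device details."""
    out = {}
    for osd_id, lvs in json.loads(report_text).items():
        for lv in lvs:                      # one LV per OSD in this excerpt
            tags = lv.get("tags", {})
            out[int(osd_id)] = {
                "devices": lv["devices"],
                "lv_path": lv["lv_path"],
                "osd_fsid": tags.get("ceph.osd_fsid"),
            }
    return out

# For the report above: osd 0 -> /dev/loop3 via /dev/ceph_vg0/ceph_lv0,
# osd 1 -> /dev/loop4 via /dev/ceph_vg1/ceph_lv1, osd 2 -> /dev/loop5.
```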
Feb 25 06:52:26 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 76 pg[6.b( v 35'39 lc 0'0 (0'0,35'39] local-lis/les=74/76 n=1 ec=47/22 lis/c=58/58 les/c/f=59/59/0 sis=74) [1] r=0 lpr=74 pi=[58,74)/1 crt=35'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:26 np0005629333 systemd[1]: libpod-3dd4b6f7721e91e81c82a966774826013f51edb38f6e0839f5d97cdec6ba9c80.scope: Deactivated successfully.
Feb 25 06:52:26 np0005629333 podman[101077]: 2026-02-25 11:52:26.409606224 +0000 UTC m=+0.455054774 container died 3dd4b6f7721e91e81c82a966774826013f51edb38f6e0839f5d97cdec6ba9c80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:52:26 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e51e26011f16f0a7ff246c7213414f86ff9ea240136c3bf1460fa9a5c9732c2d-merged.mount: Deactivated successfully.
Feb 25 06:52:26 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Feb 25 06:52:26 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Feb 25 06:52:26 np0005629333 podman[101077]: 2026-02-25 11:52:26.478868536 +0000 UTC m=+0.524317066 container remove 3dd4b6f7721e91e81c82a966774826013f51edb38f6e0839f5d97cdec6ba9c80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mccarthy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 06:52:26 np0005629333 systemd[1]: libpod-conmon-3dd4b6f7721e91e81c82a966774826013f51edb38f6e0839f5d97cdec6ba9c80.scope: Deactivated successfully.
Feb 25 06:52:26 np0005629333 systemd-logind[811]: Session 34 logged out. Waiting for processes to exit.
Feb 25 06:52:26 np0005629333 systemd[1]: session-34.scope: Deactivated successfully.
Feb 25 06:52:26 np0005629333 systemd[1]: session-34.scope: Consumed 7.603s CPU time.
Feb 25 06:52:26 np0005629333 systemd-logind[811]: Removed session 34.
Feb 25 06:52:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v155: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 281 B/s, 5 objects/s recovering
Feb 25 06:52:26 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 5.b scrub starts
Feb 25 06:52:26 np0005629333 podman[101203]: 2026-02-25 11:52:26.934805064 +0000 UTC m=+0.049956413 container create 5c4c455a4b4b1be39d9a1474d0beaf240a99053d186c6d0318759a33492b8468 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wozniak, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 06:52:26 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 5.b scrub ok
Feb 25 06:52:26 np0005629333 systemd[1]: Started libpod-conmon-5c4c455a4b4b1be39d9a1474d0beaf240a99053d186c6d0318759a33492b8468.scope.
Feb 25 06:52:27 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:52:27 np0005629333 podman[101203]: 2026-02-25 11:52:26.909182505 +0000 UTC m=+0.024333864 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:52:27 np0005629333 podman[101203]: 2026-02-25 11:52:27.020285467 +0000 UTC m=+0.135436926 container init 5c4c455a4b4b1be39d9a1474d0beaf240a99053d186c6d0318759a33492b8468 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:52:27 np0005629333 podman[101203]: 2026-02-25 11:52:27.02739354 +0000 UTC m=+0.142544919 container start 5c4c455a4b4b1be39d9a1474d0beaf240a99053d186c6d0318759a33492b8468 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:52:27 np0005629333 sad_wozniak[101219]: 167 167
Feb 25 06:52:27 np0005629333 podman[101203]: 2026-02-25 11:52:27.031744224 +0000 UTC m=+0.146895623 container attach 5c4c455a4b4b1be39d9a1474d0beaf240a99053d186c6d0318759a33492b8468 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 06:52:27 np0005629333 systemd[1]: libpod-5c4c455a4b4b1be39d9a1474d0beaf240a99053d186c6d0318759a33492b8468.scope: Deactivated successfully.
Feb 25 06:52:27 np0005629333 podman[101203]: 2026-02-25 11:52:27.032226277 +0000 UTC m=+0.147377646 container died 5c4c455a4b4b1be39d9a1474d0beaf240a99053d186c6d0318759a33492b8468 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wozniak, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 06:52:27 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1451d0f86eddb2c7f5f0b1b66f6ce8c5565f63039ad3d2ee0c65f04ec31fa92e-merged.mount: Deactivated successfully.
Feb 25 06:52:27 np0005629333 podman[101203]: 2026-02-25 11:52:27.075512 +0000 UTC m=+0.190663379 container remove 5c4c455a4b4b1be39d9a1474d0beaf240a99053d186c6d0318759a33492b8468 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wozniak, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:52:27 np0005629333 systemd[1]: libpod-conmon-5c4c455a4b4b1be39d9a1474d0beaf240a99053d186c6d0318759a33492b8468.scope: Deactivated successfully.
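
sad_wozniak shows the complete short-lived exec pattern cephadm uses for host probes: image pull, then container create, init, start, attach, died and remove inside roughly 150 ms, with only "167 167" (the ceph uid/gid probe output) on stdout. A sketch that reconstructs such timelines from the podman journal lines; the event line shape is an assumption taken from this excerpt:

```python
import re
from collections import defaultdict

# "... podman[PID]: <timestamps> container EVENT <64-hex CID> (image=..., name=NAME, ...)"
EVENT_RE = re.compile(
    r"podman\[\d+\]: .*? container (?P<event>\w+) (?P<cid>[0-9a-f]{64}) "
    r"\(image=[^,]+, name=(?P<name>[^,)]+)"
)

def container_timelines(lines):
    """Group podman container events by container name, in logged order."""
    timelines = defaultdict(list)
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            timelines[m["name"]].append(m["event"])
    return dict(timelines)

# For sad_wozniak in this excerpt:
# ['create', 'init', 'start', 'attach', 'died', 'remove']
```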
Feb 25 06:52:27 np0005629333 podman[101246]: 2026-02-25 11:52:27.236372148 +0000 UTC m=+0.055877601 container create 91b671b210db0ede11f9254e6384fe96e9afd481f72b5949a6a05a2abf8b7b72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_goldwasser, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:52:27 np0005629333 systemd[1]: Started libpod-conmon-91b671b210db0ede11f9254e6384fe96e9afd481f72b5949a6a05a2abf8b7b72.scope.
Feb 25 06:52:27 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:52:27 np0005629333 podman[101246]: 2026-02-25 11:52:27.211540992 +0000 UTC m=+0.031046495 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:52:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba2137de8d5b5388664c429c22755048276a99ca758951cdc8527d3ffee98f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:52:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba2137de8d5b5388664c429c22755048276a99ca758951cdc8527d3ffee98f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:52:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba2137de8d5b5388664c429c22755048276a99ca758951cdc8527d3ffee98f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:52:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba2137de8d5b5388664c429c22755048276a99ca758951cdc8527d3ffee98f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:52:27 np0005629333 podman[101246]: 2026-02-25 11:52:27.344820725 +0000 UTC m=+0.164326228 container init 91b671b210db0ede11f9254e6384fe96e9afd481f72b5949a6a05a2abf8b7b72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_goldwasser, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 06:52:27 np0005629333 podman[101246]: 2026-02-25 11:52:27.349734785 +0000 UTC m=+0.169240238 container start 91b671b210db0ede11f9254e6384fe96e9afd481f72b5949a6a05a2abf8b7b72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_goldwasser, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 06:52:27 np0005629333 podman[101246]: 2026-02-25 11:52:27.354738338 +0000 UTC m=+0.174243841 container attach 91b671b210db0ede11f9254e6384fe96e9afd481f72b5949a6a05a2abf8b7b72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:52:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Feb 25 06:52:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Feb 25 06:52:27 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Feb 25 06:52:27 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 77 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=76/77 n=7 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[51,76)/1 crt=41'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:27 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 77 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=76/77 n=6 ec=51/35 lis/c=51/51 les/c/f=52/52/0 sis=76) [2]/[1] async=[2] r=0 lpr=76 pi=[51,76)/1 crt=63'487 lcod 63'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:52:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Feb 25 06:52:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Feb 25 06:52:27 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Feb 25 06:52:27 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 78 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=76/77 n=7 ec=51/35 lis/c=76/51 les/c/f=77/52/0 sis=78 pruub=15.679971695s) [2] async=[2] r=-1 lpr=78 pi=[51,78)/1 crt=41'483 lcod 0'0 active pruub 144.051925659s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:27 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 78 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=76/77 n=7 ec=51/35 lis/c=76/51 les/c/f=77/52/0 sis=78 pruub=15.679907799s) [2] r=-1 lpr=78 pi=[51,78)/1 crt=41'483 lcod 0'0 unknown NOTIFY pruub 144.051925659s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:27 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 78 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=76/77 n=6 ec=51/35 lis/c=76/51 les/c/f=77/52/0 sis=78 pruub=15.679253578s) [2] async=[2] r=-1 lpr=78 pi=[51,78)/1 crt=63'487 lcod 63'486 active pruub 144.051940918s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:27 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 78 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=76/77 n=6 ec=51/35 lis/c=76/51 les/c/f=77/52/0 sis=78 pruub=15.679106712s) [2] r=-1 lpr=78 pi=[51,78)/1 crt=63'487 lcod 63'486 unknown NOTIFY pruub 144.051940918s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:27 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 78 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:27 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 78 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=0/0 n=6 ec=51/35 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 pct=0'0 crt=63'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:27 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 78 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=0/0 n=6 ec=51/35 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=63'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:27 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 78 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=0/0 n=7 ec=51/35 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:28 np0005629333 lvm[101342]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 06:52:28 np0005629333 lvm[101342]: VG ceph_vg1 finished
Feb 25 06:52:28 np0005629333 lvm[101341]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 06:52:28 np0005629333 lvm[101341]: VG ceph_vg0 finished
Feb 25 06:52:28 np0005629333 lvm[101344]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 06:52:28 np0005629333 lvm[101344]: VG ceph_vg2 finished
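
These three lvm[...] pairs are event-driven autoactivation: as udev brings each loop-device PV online, pvscan checks whether that PV's VG now has all of its members and, once "complete", activates and finishes the VG. Each ceph_vgN here is a single-PV group, so it completes as soon as its one device appears. A sketch that reproduces the PV-to-VG view with lvm's JSON reporting; `pvs --reportformat json` is standard lvm2, but the exact report shape assumed below is worth verifying on your build:

```python
import json
import subprocess
from collections import defaultdict

def vgs_with_pvs():
    """Group physical volumes under their volume group, mirroring the
    'VG ... is complete' membership checks logged above."""
    out = subprocess.run(
        ["pvs", "--reportformat", "json", "-o", "pv_name,vg_name"],
        capture_output=True, text=True, check=True,
    ).stdout
    groups = defaultdict(list)
    for pv in json.loads(out)["report"][0]["pv"]:
        groups[pv["vg_name"]].append(pv["pv_name"])
    return dict(groups)

# Expected on this host: {'ceph_vg0': ['/dev/loop3'],
#                         'ceph_vg1': ['/dev/loop4'],
#                         'ceph_vg2': ['/dev/loop5']}
```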
Feb 25 06:52:28 np0005629333 crazy_goldwasser[101263]: {}
Feb 25 06:52:28 np0005629333 systemd[1]: libpod-91b671b210db0ede11f9254e6384fe96e9afd481f72b5949a6a05a2abf8b7b72.scope: Deactivated successfully.
Feb 25 06:52:28 np0005629333 systemd[1]: libpod-91b671b210db0ede11f9254e6384fe96e9afd481f72b5949a6a05a2abf8b7b72.scope: Consumed 1.063s CPU time.
Feb 25 06:52:28 np0005629333 podman[101246]: 2026-02-25 11:52:28.150031536 +0000 UTC m=+0.969537039 container died 91b671b210db0ede11f9254e6384fe96e9afd481f72b5949a6a05a2abf8b7b72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_goldwasser, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Feb 25 06:52:28 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7ba2137de8d5b5388664c429c22755048276a99ca758951cdc8527d3ffee98f6-merged.mount: Deactivated successfully.
Feb 25 06:52:28 np0005629333 podman[101246]: 2026-02-25 11:52:28.211678031 +0000 UTC m=+1.031183454 container remove 91b671b210db0ede11f9254e6384fe96e9afd481f72b5949a6a05a2abf8b7b72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_goldwasser, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:52:28 np0005629333 systemd[1]: libpod-conmon-91b671b210db0ede11f9254e6384fe96e9afd481f72b5949a6a05a2abf8b7b72.scope: Deactivated successfully.
Feb 25 06:52:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:52:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:52:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:52:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:52:28 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Feb 25 06:52:28 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Feb 25 06:52:28 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:52:28 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:52:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Feb 25 06:52:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Feb 25 06:52:28 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Feb 25 06:52:28 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 79 pg[9.c( v 41'483 (0'0,41'483] local-lis/les=78/79 n=7 ec=51/35 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:28 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 79 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=78/79 n=6 ec=51/35 lis/c=76/51 les/c/f=77/52/0 sis=78) [2] r=0 lpr=78 pi=[51,78)/1 crt=63'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v159: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:52:30 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Feb 25 06:52:30 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Feb 25 06:52:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_11:52:30
Feb 25 06:52:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 06:52:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Some PGs (0.003279) are inactive; try again later
Feb 25 06:52:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v160: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
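
The balancer's "Some PGs (0.003279) are inactive" is just the peering PG from the pgmap line above: 1 inactive PG out of 305 is 1/305 ≈ 0.003279, so the balancer skips this round rather than move data while a PG is not yet active. The arithmetic matches the logged fraction:

```python
# pgmap v160: 305 pgs, 1 peering -> the fraction the balancer reports
inactive_fraction = 1 / 305
print(f"{inactive_fraction:.6f}")   # 0.003279
```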
Feb 25 06:52:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:52:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:52:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:52:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:52:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:52:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:52:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 06:52:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 06:52:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 06:52:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 06:52:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 06:52:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 06:52:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 06:52:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 06:52:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 06:52:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 06:52:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:52:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v161: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 2 objects/s recovering
Feb 25 06:52:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Feb 25 06:52:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Feb 25 06:52:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Feb 25 06:52:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Feb 25 06:52:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Feb 25 06:52:32 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Feb 25 06:52:32 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Feb 25 06:52:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Feb 25 06:52:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Feb 25 06:52:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Feb 25 06:52:32 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Feb 25 06:52:32 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 80 pg[6.d( v 35'39 (0'0,35'39] local-lis/les=62/63 n=1 ec=47/22 lis/c=62/62 les/c/f=63/63/0 sis=80 pruub=9.185418129s) [1] r=-1 lpr=80 pi=[62,80)/1 crt=35'39 active pruub 146.599624634s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:32 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 80 pg[6.d( v 35'39 (0'0,35'39] local-lis/les=62/63 n=1 ec=47/22 lis/c=62/62 les/c/f=63/63/0 sis=80 pruub=9.184962273s) [1] r=-1 lpr=80 pi=[62,80)/1 crt=35'39 unknown NOTIFY pruub 146.599624634s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:32 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 80 pg[6.d( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=62/62 les/c/f=63/63/0 sis=80) [1] r=0 lpr=80 pi=[62,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Feb 25 06:52:33 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Feb 25 06:52:33 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Feb 25 06:52:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Feb 25 06:52:33 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Feb 25 06:52:33 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 81 pg[6.d( v 35'39 lc 34'13 (0'0,35'39] local-lis/les=80/81 n=1 ec=47/22 lis/c=62/62 les/c/f=63/63/0 sis=80) [1] r=0 lpr=80 pi=[62,80)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v164: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 2 objects/s recovering
Feb 25 06:52:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Feb 25 06:52:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Feb 25 06:52:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Feb 25 06:52:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Feb 25 06:52:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Feb 25 06:52:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Feb 25 06:52:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Feb 25 06:52:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Feb 25 06:52:34 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Feb 25 06:52:34 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Feb 25 06:52:34 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Feb 25 06:52:35 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 8.a scrub starts
Feb 25 06:52:35 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 8.a scrub ok
Feb 25 06:52:35 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Feb 25 06:52:35 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
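
Taken together with the earlier epochs, the mgr is walking pgp_num_actual for cephfs.cephfs.meta and default.rgw.log upward one step per round (13, then 14, then 15, continuing to 16 and beyond below), committing a fresh osdmap epoch each time and pacing the ramp so the data movement it triggers stays under the balancer's misplaced ceiling (max misplaced 0.050000, logged at 06:52:30). A simplified model of that throttled ramp; the real logic lives in the mgr's pg_num handling, and this is only a sketch of its visible behaviour:

```python
def next_pgp_num(current, target, misplaced_ratio, max_misplaced=0.05):
    """One round of a throttled pgp_num ramp: advance a single step only
    while misplaced data is under the ceiling (simplified model of the
    one-step-per-epoch walk visible in this log)."""
    if current >= target or misplaced_ratio >= max_misplaced:
        return current
    return current + 1

# e.g. the walk seen here for default.rgw.log, 13 -> 18, one epoch per step:
pgp, steps = 13, []
while pgp < 18:
    pgp = next_pgp_num(pgp, target=18, misplaced_ratio=0.0)
    steps.append(pgp)            # [14, 15, 16, 17, 18]
```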
Feb 25 06:52:36 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Feb 25 06:52:36 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Feb 25 06:52:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v166: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 81 B/s, 2 objects/s recovering
Feb 25 06:52:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Feb 25 06:52:36 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Feb 25 06:52:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Feb 25 06:52:36 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Feb 25 06:52:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Feb 25 06:52:36 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Feb 25 06:52:36 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Feb 25 06:52:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Feb 25 06:52:36 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Feb 25 06:52:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 83 pg[6.f( v 35'39 (0'0,35'39] local-lis/les=58/59 n=1 ec=47/22 lis/c=58/58 les/c/f=59/59/0 sis=83 pruub=11.863888741s) [2] r=-1 lpr=83 pi=[58,83)/1 crt=35'39 active pruub 153.336807251s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:36 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Feb 25 06:52:36 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Feb 25 06:52:36 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 83 pg[6.f( v 35'39 (0'0,35'39] local-lis/les=58/59 n=1 ec=47/22 lis/c=58/58 les/c/f=59/59/0 sis=83 pruub=11.863797188s) [2] r=-1 lpr=83 pi=[58,83)/1 crt=35'39 unknown NOTIFY pruub 153.336807251s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:36 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 83 pg[6.f( empty local-lis/les=0/0 n=0 ec=47/22 lis/c=58/58 les/c/f=59/59/0 sis=83) [2] r=0 lpr=83 pi=[58,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:37 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Feb 25 06:52:37 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Feb 25 06:52:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:52:37 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Feb 25 06:52:37 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Feb 25 06:52:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Feb 25 06:52:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Feb 25 06:52:38 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Feb 25 06:52:38 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 84 pg[6.f( v 35'39 lc 34'1 (0'0,35'39] local-lis/les=83/84 n=1 ec=47/22 lis/c=58/58 les/c/f=59/59/0 sis=83) [2] r=0 lpr=83 pi=[58,83)/1 crt=35'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:38 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Feb 25 06:52:38 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Feb 25 06:52:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v169: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Feb 25 06:52:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Feb 25 06:52:38 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Feb 25 06:52:38 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Feb 25 06:52:38 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Feb 25 06:52:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Feb 25 06:52:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Feb 25 06:52:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Feb 25 06:52:39 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Feb 25 06:52:39 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Feb 25 06:52:40 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Feb 25 06:52:40 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Feb 25 06:52:40 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Feb 25 06:52:40 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Feb 25 06:52:40 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Feb 25 06:52:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v171: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 11 B/s, 0 objects/s recovering
Feb 25 06:52:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Feb 25 06:52:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Feb 25 06:52:40 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Feb 25 06:52:40 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Feb 25 06:52:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Feb 25 06:52:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Feb 25 06:52:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Feb 25 06:52:41 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Feb 25 06:52:41 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Feb 25 06:52:41 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Feb 25 06:52:41 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Feb 25 06:52:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 06:52:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:52:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 06:52:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:52:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:52:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:52:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:52:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:52:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:52:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:52:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:52:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:52:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.3245384522835677e-06 of space, bias 4.0, pg target 0.002789446142740281 quantized to 16 (current 16)
Feb 25 06:52:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:52:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:52:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:52:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 06:52:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:52:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 06:52:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:52:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:52:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:52:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 06:52:41 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Feb 25 06:52:41 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Feb 25 06:52:42 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Feb 25 06:52:42 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Feb 25 06:52:42 np0005629333 systemd-logind[811]: New session 35 of user zuul.
Feb 25 06:52:42 np0005629333 systemd[1]: Started Session 35 of User zuul.
Feb 25 06:52:42 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Feb 25 06:52:42 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Feb 25 06:52:42 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Feb 25 06:52:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:52:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v173: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 104 B/s, 0 objects/s recovering
Feb 25 06:52:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Feb 25 06:52:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Feb 25 06:52:43 np0005629333 python3.9[101536]: ansible-ansible.legacy.ping Invoked with data=pong
Feb 25 06:52:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Feb 25 06:52:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Feb 25 06:52:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Feb 25 06:52:43 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Feb 25 06:52:43 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Feb 25 06:52:43 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Feb 25 06:52:43 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Feb 25 06:52:44 np0005629333 python3.9[101710]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:52:44 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Feb 25 06:52:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v175: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 102 B/s, 0 objects/s recovering
Feb 25 06:52:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Feb 25 06:52:44 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Feb 25 06:52:44 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Feb 25 06:52:44 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Feb 25 06:52:45 np0005629333 python3.9[101867]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:52:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Feb 25 06:52:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Feb 25 06:52:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Feb 25 06:52:45 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Feb 25 06:52:45 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 88 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=58/59 n=6 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=88 pruub=10.999403954s) [2] r=-1 lpr=88 pi=[58,88)/1 crt=63'484 lcod 63'484 active pruub 161.331527710s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:45 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 88 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=58/59 n=6 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=88 pruub=10.999073982s) [2] r=-1 lpr=88 pi=[58,88)/1 crt=63'484 lcod 63'484 unknown NOTIFY pruub 161.331527710s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:45 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 88 pg[9.13( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=88) [2] r=0 lpr=88 pi=[58,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:46 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Feb 25 06:52:46 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Feb 25 06:52:46 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Feb 25 06:52:46 np0005629333 python3.9[102021]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:52:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Feb 25 06:52:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v177: 305 pgs: 1 unknown, 304 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 58 B/s, 0 objects/s recovering
Feb 25 06:52:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Feb 25 06:52:46 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Feb 25 06:52:46 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 89 pg[9.13( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=89) [2]/[0] r=-1 lpr=89 pi=[58,89)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:46 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 89 pg[9.13( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=89) [2]/[0] r=-1 lpr=89 pi=[58,89)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 89 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=58/59 n=6 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=89) [2]/[0] r=0 lpr=89 pi=[58,89)/1 crt=63'484 lcod 63'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:47 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 89 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=58/59 n=6 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=89) [2]/[0] r=0 lpr=89 pi=[58,89)/1 crt=63'484 lcod 63'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:47 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Feb 25 06:52:47 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Feb 25 06:52:47 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Feb 25 06:52:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:52:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Feb 25 06:52:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Feb 25 06:52:47 np0005629333 python3.9[102177]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:52:47 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Feb 25 06:52:48 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Feb 25 06:52:48 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Feb 25 06:52:48 np0005629333 python3.9[102330]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:52:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v180: 305 pgs: 1 unknown, 304 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:52:48 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 10.a scrub starts
Feb 25 06:52:48 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 10.a scrub ok
Feb 25 06:52:49 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 90 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=89/90 n=6 ec=51/35 lis/c=58/58 les/c/f=59/59/0 sis=89) [2]/[0] async=[2] r=0 lpr=89 pi=[58,89)/1 crt=63'485 lcod 63'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:49 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 3.b scrub starts
Feb 25 06:52:49 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 3.b scrub ok
Feb 25 06:52:49 np0005629333 python3.9[102480]: ansible-ansible.builtin.service_facts Invoked
Feb 25 06:52:49 np0005629333 network[102497]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 25 06:52:49 np0005629333 network[102498]: 'network-scripts' will be removed from distribution in near future.
Feb 25 06:52:49 np0005629333 network[102499]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 25 06:52:49 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 10.c scrub starts
Feb 25 06:52:49 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 10.c scrub ok
Feb 25 06:52:50 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Feb 25 06:52:50 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Feb 25 06:52:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Feb 25 06:52:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Feb 25 06:52:50 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Feb 25 06:52:50 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 91 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=89/90 n=6 ec=51/35 lis/c=89/58 les/c/f=90/59/0 sis=91 pruub=14.659872055s) [2] async=[2] r=-1 lpr=91 pi=[58,91)/1 crt=63'485 lcod 63'484 active pruub 169.607772827s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:50 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 91 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=89/90 n=6 ec=51/35 lis/c=89/58 les/c/f=90/59/0 sis=91 pruub=14.659772873s) [2] r=-1 lpr=91 pi=[58,91)/1 crt=63'485 lcod 63'484 unknown NOTIFY pruub 169.607772827s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:50 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 91 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=51/35 lis/c=89/58 les/c/f=90/59/0 sis=91) [2] r=0 lpr=91 pi=[58,91)/1 pct=0'0 crt=63'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:50 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 91 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=51/35 lis/c=89/58 les/c/f=90/59/0 sis=91) [2] r=0 lpr=91 pi=[58,91)/1 crt=63'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v182: 305 pgs: 1 unknown, 304 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:52:50 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 5.e scrub starts
Feb 25 06:52:50 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 5.e scrub ok
Feb 25 06:52:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Feb 25 06:52:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Feb 25 06:52:51 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Feb 25 06:52:51 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 92 pg[9.13( v 63'485 (0'0,63'485] local-lis/les=91/92 n=6 ec=51/35 lis/c=89/58 les/c/f=90/59/0 sis=91) [2] r=0 lpr=91 pi=[58,91)/1 crt=63'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:51 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 5.d scrub starts
Feb 25 06:52:51 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 5.d scrub ok
Feb 25 06:52:52 np0005629333 python3.9[102760]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:52:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:52:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v184: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 342 B/s wr, 8 op/s; 87 B/s, 2 objects/s recovering
Feb 25 06:52:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Feb 25 06:52:52 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Feb 25 06:52:53 np0005629333 python3.9[102910]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:52:53 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 11.c scrub starts
Feb 25 06:52:53 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 11.c scrub ok
Feb 25 06:52:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Feb 25 06:52:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Feb 25 06:52:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Feb 25 06:52:53 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Feb 25 06:52:53 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Feb 25 06:52:54 np0005629333 python3.9[103064]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:52:54 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Feb 25 06:52:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v186: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 341 B/s wr, 8 op/s; 87 B/s, 1 objects/s recovering
Feb 25 06:52:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Feb 25 06:52:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Feb 25 06:52:55 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Feb 25 06:52:55 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Feb 25 06:52:55 np0005629333 python3.9[103223]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 25 06:52:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Feb 25 06:52:55 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Feb 25 06:52:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Feb 25 06:52:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Feb 25 06:52:55 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Feb 25 06:52:56 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Feb 25 06:52:56 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Feb 25 06:52:56 np0005629333 python3.9[103308]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 25 06:52:56 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Feb 25 06:52:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v188: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 341 B/s wr, 8 op/s; 87 B/s, 1 objects/s recovering
Feb 25 06:52:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Feb 25 06:52:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Feb 25 06:52:57 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 94 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=57/58 n=6 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=94 pruub=14.748041153s) [1] r=-1 lpr=94 pi=[57,94)/1 crt=41'483 active pruub 176.313323975s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:57 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 94 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=94) [1] r=0 lpr=94 pi=[57,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:57 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 94 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=57/58 n=6 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=94 pruub=14.748006821s) [1] r=-1 lpr=94 pi=[57,94)/1 crt=41'483 unknown NOTIFY pruub 176.313323975s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:57 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Feb 25 06:52:57 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Feb 25 06:52:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Feb 25 06:52:57 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Feb 25 06:52:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Feb 25 06:52:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Feb 25 06:52:57 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Feb 25 06:52:57 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 95 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=57/58 n=6 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=95) [1]/[0] r=0 lpr=95 pi=[57,95)/1 crt=41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:57 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 95 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=57/58 n=6 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=95) [1]/[0] r=0 lpr=95 pi=[57,95)/1 crt=41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:57 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 95 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=95) [1]/[0] r=-1 lpr=95 pi=[57,95)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:57 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 95 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=95) [1]/[0] r=-1 lpr=95 pi=[57,95)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:52:57 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Feb 25 06:52:57 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Feb 25 06:52:58 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 95 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=67/68 n=6 ec=51/35 lis/c=67/67 les/c/f=68/68/0 sis=95 pruub=10.701500893s) [0] r=-1 lpr=95 pi=[67,95)/1 crt=41'483 active pruub 164.202865601s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:58 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 95 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=67/68 n=6 ec=51/35 lis/c=67/67 les/c/f=68/68/0 sis=95 pruub=10.701441765s) [0] r=-1 lpr=95 pi=[67,95)/1 crt=41'483 unknown NOTIFY pruub 164.202865601s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:58 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 95 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=67/67 les/c/f=68/68/0 sis=95) [0] r=0 lpr=95 pi=[67,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Feb 25 06:52:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Feb 25 06:52:58 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Feb 25 06:52:58 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 96 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=67/67 les/c/f=68/68/0 sis=96) [0]/[2] r=-1 lpr=96 pi=[67,96)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:58 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 96 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=67/67 les/c/f=68/68/0 sis=96) [0]/[2] r=-1 lpr=96 pi=[67,96)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:58 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Feb 25 06:52:58 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 96 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=67/68 n=6 ec=51/35 lis/c=67/67 les/c/f=68/68/0 sis=96) [0]/[2] r=0 lpr=96 pi=[67,96)/1 crt=41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:58 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 96 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=67/68 n=6 ec=51/35 lis/c=67/67 les/c/f=68/68/0 sis=96) [0]/[2] r=0 lpr=96 pi=[67,96)/1 crt=41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:52:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v191: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:52:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Feb 25 06:52:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Feb 25 06:52:59 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 96 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=95/96 n=6 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=95) [1]/[0] async=[1] r=0 lpr=95 pi=[57,95)/1 crt=41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Feb 25 06:52:59 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Feb 25 06:52:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Feb 25 06:52:59 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Feb 25 06:52:59 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Feb 25 06:52:59 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 97 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=95/96 n=6 ec=51/35 lis/c=95/57 les/c/f=96/58/0 sis=97 pruub=15.710318565s) [1] async=[1] r=-1 lpr=97 pi=[57,97)/1 crt=41'483 active pruub 179.969635010s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:59 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 97 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=95/96 n=6 ec=51/35 lis/c=95/57 les/c/f=96/58/0 sis=97 pruub=15.710215569s) [1] r=-1 lpr=97 pi=[57,97)/1 crt=41'483 unknown NOTIFY pruub 179.969635010s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:52:59 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 97 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=96/97 n=6 ec=51/35 lis/c=67/67 les/c/f=68/68/0 sis=96) [0]/[2] async=[0] r=0 lpr=96 pi=[67,96)/1 crt=41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:52:59 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 97 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=95/57 les/c/f=96/58/0 sis=97) [1] r=0 lpr=97 pi=[57,97)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:52:59 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 97 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=95/57 les/c/f=96/58/0 sis=97) [1] r=0 lpr=97 pi=[57,97)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:53:00 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Feb 25 06:53:00 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Feb 25 06:53:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Feb 25 06:53:00 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Feb 25 06:53:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Feb 25 06:53:00 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Feb 25 06:53:00 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 98 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=96/97 n=6 ec=51/35 lis/c=96/67 les/c/f=97/68/0 sis=98 pruub=14.943275452s) [0] async=[0] r=-1 lpr=98 pi=[67,98)/1 crt=41'483 active pruub 170.841751099s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:53:00 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 98 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=96/97 n=6 ec=51/35 lis/c=96/67 les/c/f=97/68/0 sis=98 pruub=14.943073273s) [0] r=-1 lpr=98 pi=[67,98)/1 crt=41'483 unknown NOTIFY pruub 170.841751099s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:53:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v194: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:53:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Feb 25 06:53:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Feb 25 06:53:00 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 98 pg[9.15( v 41'483 (0'0,41'483] local-lis/les=97/98 n=6 ec=51/35 lis/c=95/57 les/c/f=96/58/0 sis=97) [1] r=0 lpr=97 pi=[57,97)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:53:00 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 98 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=96/67 les/c/f=97/68/0 sis=98) [0] r=0 lpr=98 pi=[67,98)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:53:00 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 98 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=96/67 les/c/f=97/68/0 sis=98) [0] r=0 lpr=98 pi=[67,98)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:53:01 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Feb 25 06:53:01 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Feb 25 06:53:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:53:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:53:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:53:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:53:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:53:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:53:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Feb 25 06:53:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Feb 25 06:53:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Feb 25 06:53:01 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Feb 25 06:53:01 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Feb 25 06:53:01 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 99 pg[9.16( v 41'483 (0'0,41'483] local-lis/les=98/99 n=6 ec=51/35 lis/c=96/67 les/c/f=97/68/0 sis=98) [0] r=0 lpr=98 pi=[67,98)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:53:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:53:02 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Feb 25 06:53:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v196: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 53 B/s, 1 objects/s recovering
Feb 25 06:53:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Feb 25 06:53:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Feb 25 06:53:03 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Feb 25 06:53:03 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Feb 25 06:53:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Feb 25 06:53:03 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Feb 25 06:53:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Feb 25 06:53:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Feb 25 06:53:03 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Feb 25 06:53:03 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 100 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=57/58 n=6 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=100 pruub=15.983002663s) [2] r=-1 lpr=100 pi=[57,100)/1 crt=63'486 lcod 63'486 active pruub 184.313262939s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:53:03 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 100 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=57/58 n=6 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=100 pruub=15.982892990s) [2] r=-1 lpr=100 pi=[57,100)/1 crt=63'486 lcod 63'486 unknown NOTIFY pruub 184.313262939s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:53:03 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 100 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=100) [2] r=0 lpr=100 pi=[57,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:53:04 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Feb 25 06:53:04 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Feb 25 06:53:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v198: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 43 B/s, 1 objects/s recovering
Feb 25 06:53:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Feb 25 06:53:04 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Feb 25 06:53:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Feb 25 06:53:04 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Feb 25 06:53:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Feb 25 06:53:04 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Feb 25 06:53:04 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 101 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=57/58 n=6 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=101) [2]/[0] r=0 lpr=101 pi=[57,101)/1 crt=63'486 lcod 63'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:53:04 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 101 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=57/58 n=6 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=101) [2]/[0] r=0 lpr=101 pi=[57,101)/1 crt=63'486 lcod 63'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:53:04 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 101 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=101) [2]/[0] r=-1 lpr=101 pi=[57,101)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:53:04 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 101 pg[9.19( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=101) [2]/[0] r=-1 lpr=101 pi=[57,101)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:53:04 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Feb 25 06:53:04 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Feb 25 06:53:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Feb 25 06:53:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Feb 25 06:53:05 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Feb 25 06:53:05 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 102 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=101/102 n=6 ec=51/35 lis/c=57/57 les/c/f=58/58/0 sis=101) [2]/[0] async=[2] r=0 lpr=101 pi=[57,101)/1 crt=63'487 lcod 63'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:53:05 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Feb 25 06:53:06 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Feb 25 06:53:06 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Feb 25 06:53:06 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Feb 25 06:53:06 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Feb 25 06:53:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v201: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 0 objects/s recovering
Feb 25 06:53:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Feb 25 06:53:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Feb 25 06:53:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Feb 25 06:53:06 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Feb 25 06:53:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Feb 25 06:53:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Feb 25 06:53:06 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Feb 25 06:53:06 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 103 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=101/102 n=6 ec=51/35 lis/c=101/57 les/c/f=102/58/0 sis=103 pruub=14.973417282s) [2] async=[2] r=-1 lpr=103 pi=[57,103)/1 crt=63'487 lcod 63'486 active pruub 186.342636108s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:53:06 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 103 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=101/102 n=6 ec=51/35 lis/c=101/57 les/c/f=102/58/0 sis=103 pruub=14.973354340s) [2] r=-1 lpr=103 pi=[57,103)/1 crt=63'487 lcod 63'486 unknown NOTIFY pruub 186.342636108s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:53:06 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 103 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=0/0 n=6 ec=51/35 lis/c=101/57 les/c/f=102/58/0 sis=103) [2] r=0 lpr=103 pi=[57,103)/1 pct=0'0 crt=63'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:53:06 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 103 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=0/0 n=6 ec=51/35 lis/c=101/57 les/c/f=102/58/0 sis=103) [2] r=0 lpr=103 pi=[57,103)/1 crt=63'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:53:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:53:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Feb 25 06:53:07 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Feb 25 06:53:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Feb 25 06:53:07 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Feb 25 06:53:07 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 104 pg[9.19( v 63'487 (0'0,63'487] local-lis/les=103/104 n=6 ec=51/35 lis/c=101/57 les/c/f=102/58/0 sis=103) [2] r=0 lpr=103 pi=[57,103)/1 crt=63'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:53:08 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Feb 25 06:53:08 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Feb 25 06:53:08 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Feb 25 06:53:08 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Feb 25 06:53:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:53:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Feb 25 06:53:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Feb 25 06:53:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Feb 25 06:53:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Feb 25 06:53:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Feb 25 06:53:08 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Feb 25 06:53:08 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Feb 25 06:53:09 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Feb 25 06:53:09 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Feb 25 06:53:09 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 105 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=78/79 n=6 ec=51/35 lis/c=78/78 les/c/f=79/79/0 sis=105 pruub=15.241693497s) [0] r=-1 lpr=105 pi=[78,105)/1 crt=63'487 active pruub 179.848342896s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:53:09 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 105 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=78/79 n=6 ec=51/35 lis/c=78/78 les/c/f=79/79/0 sis=105 pruub=15.241346359s) [0] r=-1 lpr=105 pi=[78,105)/1 crt=63'487 unknown NOTIFY pruub 179.848342896s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:53:09 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 105 pg[9.1c( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=78/78 les/c/f=79/79/0 sis=105) [0] r=0 lpr=105 pi=[78,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:53:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Feb 25 06:53:10 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Feb 25 06:53:10 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Feb 25 06:53:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Feb 25 06:53:10 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Feb 25 06:53:10 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 106 pg[9.1c( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=78/78 les/c/f=79/79/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[78,106)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:53:10 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 106 pg[9.1c( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=78/78 les/c/f=79/79/0 sis=106) [0]/[2] r=-1 lpr=106 pi=[78,106)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:53:10 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Feb 25 06:53:10 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 106 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=78/79 n=6 ec=51/35 lis/c=78/78 les/c/f=79/79/0 sis=106) [0]/[2] r=0 lpr=106 pi=[78,106)/1 crt=63'487 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:53:10 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 106 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=78/79 n=6 ec=51/35 lis/c=78/78 les/c/f=79/79/0 sis=106) [0]/[2] r=0 lpr=106 pi=[78,106)/1 crt=63'487 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:53:10 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 11.a scrub starts
Feb 25 06:53:10 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 11.a scrub ok
Feb 25 06:53:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v207: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:53:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Feb 25 06:53:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Feb 25 06:53:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Feb 25 06:53:11 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Feb 25 06:53:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Feb 25 06:53:11 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Feb 25 06:53:11 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Feb 25 06:53:11 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 107 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=106/107 n=6 ec=51/35 lis/c=78/78 les/c/f=79/79/0 sis=106) [0]/[2] async=[0] r=0 lpr=106 pi=[78,106)/1 crt=63'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:53:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Feb 25 06:53:12 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Feb 25 06:53:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Feb 25 06:53:12 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Feb 25 06:53:12 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 108 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=0/0 n=6 ec=51/35 lis/c=106/78 les/c/f=107/79/0 sis=108) [0] r=0 lpr=108 pi=[78,108)/1 pct=0'0 crt=63'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:53:12 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 108 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=0/0 n=6 ec=51/35 lis/c=106/78 les/c/f=107/79/0 sis=108) [0] r=0 lpr=108 pi=[78,108)/1 crt=63'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:53:12 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 108 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=106/107 n=6 ec=51/35 lis/c=106/78 les/c/f=107/79/0 sis=108 pruub=14.991110802s) [0] async=[0] r=-1 lpr=108 pi=[78,108)/1 crt=63'487 active pruub 182.309310913s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:53:12 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 108 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=106/107 n=6 ec=51/35 lis/c=106/78 les/c/f=107/79/0 sis=108 pruub=14.990996361s) [0] r=-1 lpr=108 pi=[78,108)/1 crt=63'487 unknown NOTIFY pruub 182.309310913s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:53:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:53:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v210: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 208 B/s, 4 objects/s recovering
Feb 25 06:53:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Feb 25 06:53:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Feb 25 06:53:13 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Feb 25 06:53:13 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Feb 25 06:53:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Feb 25 06:53:13 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Feb 25 06:53:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Feb 25 06:53:13 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Feb 25 06:53:13 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 109 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=67/68 n=6 ec=51/35 lis/c=67/67 les/c/f=68/68/0 sis=109 pruub=11.872773170s) [0] r=-1 lpr=109 pi=[67,109)/1 crt=63'485 active pruub 180.203353882s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:53:13 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 109 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=67/68 n=6 ec=51/35 lis/c=67/67 les/c/f=68/68/0 sis=109 pruub=11.872069359s) [0] r=-1 lpr=109 pi=[67,109)/1 crt=63'485 unknown NOTIFY pruub 180.203353882s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:53:13 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 109 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=67/67 les/c/f=68/68/0 sis=109) [0] r=0 lpr=109 pi=[67,109)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:53:13 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Feb 25 06:53:13 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 109 pg[9.1c( v 63'487 (0'0,63'487] local-lis/les=108/109 n=6 ec=51/35 lis/c=106/78 les/c/f=107/79/0 sis=108) [0] r=0 lpr=108 pi=[78,108)/1 crt=63'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:53:13 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Feb 25 06:53:13 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Feb 25 06:53:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Feb 25 06:53:14 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Feb 25 06:53:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Feb 25 06:53:14 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Feb 25 06:53:14 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 110 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=67/67 les/c/f=68/68/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[67,110)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:53:14 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 110 pg[9.1e( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=67/67 les/c/f=68/68/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[67,110)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:53:14 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 110 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=67/68 n=6 ec=51/35 lis/c=67/67 les/c/f=68/68/0 sis=110) [0]/[2] r=0 lpr=110 pi=[67,110)/1 crt=63'485 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:53:14 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 110 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=67/68 n=6 ec=51/35 lis/c=67/67 les/c/f=68/68/0 sis=110) [0]/[2] r=0 lpr=110 pi=[67,110)/1 crt=63'485 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:53:14 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Feb 25 06:53:14 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Feb 25 06:53:14 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 7.d scrub starts
Feb 25 06:53:14 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 7.d scrub ok
Feb 25 06:53:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 208 B/s, 4 objects/s recovering
Feb 25 06:53:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Feb 25 06:53:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 25 06:53:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Feb 25 06:53:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 25 06:53:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Feb 25 06:53:15 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Feb 25 06:53:15 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 111 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=72/73 n=6 ec=51/35 lis/c=72/72 les/c/f=73/73/0 sis=111 pruub=14.797804832s) [1] r=-1 lpr=111 pi=[72,111)/1 crt=41'483 active pruub 185.162246704s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:53:15 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 111 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=72/73 n=6 ec=51/35 lis/c=72/72 les/c/f=73/73/0 sis=111 pruub=14.797755241s) [1] r=-1 lpr=111 pi=[72,111)/1 crt=41'483 unknown NOTIFY pruub 185.162246704s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:53:15 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 111 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=72/72 les/c/f=73/73/0 sis=111) [1] r=0 lpr=111 pi=[72,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:53:15 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Feb 25 06:53:15 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Feb 25 06:53:15 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 111 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=110/111 n=6 ec=51/35 lis/c=67/67 les/c/f=68/68/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[67,110)/1 crt=63'485 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:53:15 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Feb 25 06:53:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Feb 25 06:53:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Feb 25 06:53:16 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Feb 25 06:53:16 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 112 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=72/73 n=6 ec=51/35 lis/c=72/72 les/c/f=73/73/0 sis=112) [1]/[2] r=0 lpr=112 pi=[72,112)/1 crt=41'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:53:16 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 112 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=72/73 n=6 ec=51/35 lis/c=72/72 les/c/f=73/73/0 sis=112) [1]/[2] r=0 lpr=112 pi=[72,112)/1 crt=41'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Feb 25 06:53:16 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=72/72 les/c/f=73/73/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[72,112)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:53:16 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 112 pg[9.1f( empty local-lis/les=0/0 n=0 ec=51/35 lis/c=72/72 les/c/f=73/73/0 sis=112) [1]/[2] r=-1 lpr=112 pi=[72,112)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 25 06:53:16 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 112 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=110/111 n=6 ec=51/35 lis/c=110/67 les/c/f=111/68/0 sis=112 pruub=15.051042557s) [0] async=[0] r=-1 lpr=112 pi=[67,112)/1 crt=63'485 active pruub 186.419418335s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:53:16 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 112 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=110/111 n=6 ec=51/35 lis/c=110/67 les/c/f=111/68/0 sis=112 pruub=15.050867081s) [0] r=-1 lpr=112 pi=[67,112)/1 crt=63'485 unknown NOTIFY pruub 186.419418335s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:53:16 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Feb 25 06:53:16 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 112 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=51/35 lis/c=110/67 les/c/f=111/68/0 sis=112) [0] r=0 lpr=112 pi=[67,112)/1 pct=0'0 crt=63'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:53:16 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 112 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=0/0 n=6 ec=51/35 lis/c=110/67 les/c/f=111/68/0 sis=112) [0] r=0 lpr=112 pi=[67,112)/1 crt=63'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:53:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v216: 305 pgs: 1 unknown, 1 active+remapped, 303 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:53:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Feb 25 06:53:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Feb 25 06:53:17 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Feb 25 06:53:17 np0005629333 ceph-osd[86953]: osd.0 pg_epoch: 113 pg[9.1e( v 63'485 (0'0,63'485] local-lis/les=112/113 n=6 ec=51/35 lis/c=110/67 les/c/f=111/68/0 sis=112) [0] r=0 lpr=112 pi=[67,112)/1 crt=63'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:53:17 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 113 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=112/113 n=6 ec=51/35 lis/c=72/72 les/c/f=73/73/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[72,112)/1 crt=41'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:53:17 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 3.a scrub starts
Feb 25 06:53:17 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 3.a scrub ok
Feb 25 06:53:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:53:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Feb 25 06:53:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Feb 25 06:53:17 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 114 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=112/113 n=6 ec=51/35 lis/c=112/72 les/c/f=113/73/0 sis=114 pruub=15.528677940s) [1] async=[1] r=-1 lpr=114 pi=[72,114)/1 crt=41'483 active pruub 188.381164551s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:53:17 np0005629333 ceph-osd[89088]: osd.2 pg_epoch: 114 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=112/113 n=6 ec=51/35 lis/c=112/72 les/c/f=113/73/0 sis=114 pruub=15.528569221s) [1] r=-1 lpr=114 pi=[72,114)/1 crt=41'483 unknown NOTIFY pruub 188.381164551s@ mbc={}] state<Start>: transitioning to Stray
Feb 25 06:53:17 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Feb 25 06:53:17 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 114 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=112/72 les/c/f=113/73/0 sis=114) [1] r=0 lpr=114 pi=[72,114)/1 pct=0'0 crt=41'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Feb 25 06:53:17 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 114 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=0/0 n=6 ec=51/35 lis/c=112/72 les/c/f=113/73/0 sis=114) [1] r=0 lpr=114 pi=[72,114)/1 crt=41'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 25 06:53:18 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 7.f scrub starts
Feb 25 06:53:18 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 7.f scrub ok
Feb 25 06:53:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Feb 25 06:53:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Feb 25 06:53:18 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Feb 25 06:53:18 np0005629333 ceph-osd[88012]: osd.1 pg_epoch: 115 pg[9.1f( v 41'483 (0'0,41'483] local-lis/les=114/115 n=6 ec=51/35 lis/c=112/72 les/c/f=113/73/0 sis=114) [1] r=0 lpr=114 pi=[72,114)/1 crt=41'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 25 06:53:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v220: 305 pgs: 1 peering, 1 unknown, 303 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:53:19 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Feb 25 06:53:19 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Feb 25 06:53:19 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 8.c scrub starts
Feb 25 06:53:19 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 8.c scrub ok
Feb 25 06:53:20 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Feb 25 06:53:20 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Feb 25 06:53:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v221: 305 pgs: 1 peering, 1 unknown, 303 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 33 B/s, 1 objects/s recovering
Feb 25 06:53:21 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 11.f scrub starts
Feb 25 06:53:21 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 11.f scrub ok
Feb 25 06:53:22 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Feb 25 06:53:22 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Feb 25 06:53:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:53:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v222: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 341 B/s wr, 7 op/s; 80 B/s, 3 objects/s recovering
Feb 25 06:53:23 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 8.d scrub starts
Feb 25 06:53:23 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 8.d scrub ok
Feb 25 06:53:23 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 8.e scrub starts
Feb 25 06:53:23 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 8.e scrub ok
Feb 25 06:53:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v223: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 271 B/s wr, 5 op/s; 43 B/s, 1 objects/s recovering
Feb 25 06:53:25 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Feb 25 06:53:25 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Feb 25 06:53:26 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Feb 25 06:53:26 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Feb 25 06:53:26 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Feb 25 06:53:26 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Feb 25 06:53:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v224: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 225 B/s wr, 4 op/s; 36 B/s, 1 objects/s recovering
Feb 25 06:53:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:53:28 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Feb 25 06:53:28 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Feb 25 06:53:28 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Feb 25 06:53:28 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Feb 25 06:53:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v225: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 203 B/s wr, 4 op/s; 32 B/s, 1 objects/s recovering
Feb 25 06:53:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:53:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:53:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 06:53:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:53:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 06:53:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:53:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 06:53:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 06:53:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 06:53:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 06:53:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:53:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:53:29 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:53:29 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:53:29 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 06:53:29 np0005629333 podman[103600]: 2026-02-25 11:53:29.42333304 +0000 UTC m=+0.051757704 container create 0df23088efcc13cf4520c0d9c8491814ae89e1d6cc5809f370169e9188132594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 06:53:29 np0005629333 systemd[1]: Started libpod-conmon-0df23088efcc13cf4520c0d9c8491814ae89e1d6cc5809f370169e9188132594.scope.
Feb 25 06:53:29 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:53:29 np0005629333 podman[103600]: 2026-02-25 11:53:29.394262862 +0000 UTC m=+0.022687606 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:53:29 np0005629333 podman[103600]: 2026-02-25 11:53:29.502474003 +0000 UTC m=+0.130898737 container init 0df23088efcc13cf4520c0d9c8491814ae89e1d6cc5809f370169e9188132594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_joliot, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 06:53:29 np0005629333 podman[103600]: 2026-02-25 11:53:29.507937568 +0000 UTC m=+0.136362232 container start 0df23088efcc13cf4520c0d9c8491814ae89e1d6cc5809f370169e9188132594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_joliot, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:53:29 np0005629333 upbeat_joliot[103617]: 167 167
Feb 25 06:53:29 np0005629333 systemd[1]: libpod-0df23088efcc13cf4520c0d9c8491814ae89e1d6cc5809f370169e9188132594.scope: Deactivated successfully.
Feb 25 06:53:29 np0005629333 podman[103600]: 2026-02-25 11:53:29.51152836 +0000 UTC m=+0.139953054 container attach 0df23088efcc13cf4520c0d9c8491814ae89e1d6cc5809f370169e9188132594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_joliot, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 06:53:29 np0005629333 podman[103600]: 2026-02-25 11:53:29.513834256 +0000 UTC m=+0.142258950 container died 0df23088efcc13cf4520c0d9c8491814ae89e1d6cc5809f370169e9188132594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 06:53:29 np0005629333 systemd[1]: var-lib-containers-storage-overlay-3748a7c9688791d6b096e5832a35a08a0f853c6657108f6dd6ae57e0a03d2ad0-merged.mount: Deactivated successfully.
Feb 25 06:53:29 np0005629333 podman[103600]: 2026-02-25 11:53:29.55155922 +0000 UTC m=+0.179983884 container remove 0df23088efcc13cf4520c0d9c8491814ae89e1d6cc5809f370169e9188132594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_joliot, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 06:53:29 np0005629333 systemd[1]: libpod-conmon-0df23088efcc13cf4520c0d9c8491814ae89e1d6cc5809f370169e9188132594.scope: Deactivated successfully.
Feb 25 06:53:29 np0005629333 podman[103641]: 2026-02-25 11:53:29.67701148 +0000 UTC m=+0.040179424 container create 15d662d126dbb3e3dac8f7db3de5934927f4f8b9895a2f32f858a96f43969318 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 06:53:29 np0005629333 systemd[1]: Started libpod-conmon-15d662d126dbb3e3dac8f7db3de5934927f4f8b9895a2f32f858a96f43969318.scope.
Feb 25 06:53:29 np0005629333 podman[103641]: 2026-02-25 11:53:29.658472093 +0000 UTC m=+0.021640067 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:53:29 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:53:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c44405c3b6c35cdc876d000e6d979d2d4ab93e6973518266000b9812992f2fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:53:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c44405c3b6c35cdc876d000e6d979d2d4ab93e6973518266000b9812992f2fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:53:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c44405c3b6c35cdc876d000e6d979d2d4ab93e6973518266000b9812992f2fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:53:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c44405c3b6c35cdc876d000e6d979d2d4ab93e6973518266000b9812992f2fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:53:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c44405c3b6c35cdc876d000e6d979d2d4ab93e6973518266000b9812992f2fe/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:53:29 np0005629333 podman[103641]: 2026-02-25 11:53:29.780043663 +0000 UTC m=+0.143211637 container init 15d662d126dbb3e3dac8f7db3de5934927f4f8b9895a2f32f858a96f43969318 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shannon, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 06:53:29 np0005629333 podman[103641]: 2026-02-25 11:53:29.789824571 +0000 UTC m=+0.152992535 container start 15d662d126dbb3e3dac8f7db3de5934927f4f8b9895a2f32f858a96f43969318 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:53:29 np0005629333 podman[103641]: 2026-02-25 11:53:29.793183026 +0000 UTC m=+0.156350980 container attach 15d662d126dbb3e3dac8f7db3de5934927f4f8b9895a2f32f858a96f43969318 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:53:30 np0005629333 serene_shannon[103658]: --> passed data devices: 0 physical, 3 LVM
Feb 25 06:53:30 np0005629333 serene_shannon[103658]: --> All data devices are unavailable
Feb 25 06:53:30 np0005629333 podman[103641]: 2026-02-25 11:53:30.20928969 +0000 UTC m=+0.572457624 container died 15d662d126dbb3e3dac8f7db3de5934927f4f8b9895a2f32f858a96f43969318 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 06:53:30 np0005629333 systemd[1]: libpod-15d662d126dbb3e3dac8f7db3de5934927f4f8b9895a2f32f858a96f43969318.scope: Deactivated successfully.
Feb 25 06:53:30 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5c44405c3b6c35cdc876d000e6d979d2d4ab93e6973518266000b9812992f2fe-merged.mount: Deactivated successfully.
Feb 25 06:53:30 np0005629333 podman[103641]: 2026-02-25 11:53:30.302940635 +0000 UTC m=+0.666108599 container remove 15d662d126dbb3e3dac8f7db3de5934927f4f8b9895a2f32f858a96f43969318 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shannon, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:53:30 np0005629333 systemd[1]: libpod-conmon-15d662d126dbb3e3dac8f7db3de5934927f4f8b9895a2f32f858a96f43969318.scope: Deactivated successfully.
Feb 25 06:53:30 np0005629333 podman[103755]: 2026-02-25 11:53:30.755558618 +0000 UTC m=+0.060144762 container create c208c9e7dbd093e4820426798ed7f168094fdc527528e0323b662d02ea26be5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_boyd, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:53:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_11:53:30
Feb 25 06:53:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 06:53:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 06:53:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['vms', 'backups', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log']
Feb 25 06:53:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 06:53:30 np0005629333 systemd[1]: Started libpod-conmon-c208c9e7dbd093e4820426798ed7f168094fdc527528e0323b662d02ea26be5b.scope.
Feb 25 06:53:30 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:53:30 np0005629333 podman[103755]: 2026-02-25 11:53:30.730416843 +0000 UTC m=+0.035003027 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:53:30 np0005629333 podman[103755]: 2026-02-25 11:53:30.824586913 +0000 UTC m=+0.129173067 container init c208c9e7dbd093e4820426798ed7f168094fdc527528e0323b662d02ea26be5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_boyd, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:53:30 np0005629333 podman[103755]: 2026-02-25 11:53:30.832167069 +0000 UTC m=+0.136753213 container start c208c9e7dbd093e4820426798ed7f168094fdc527528e0323b662d02ea26be5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_boyd, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 06:53:30 np0005629333 podman[103755]: 2026-02-25 11:53:30.835593347 +0000 UTC m=+0.140179511 container attach c208c9e7dbd093e4820426798ed7f168094fdc527528e0323b662d02ea26be5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_boyd, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:53:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 170 B/s wr, 3 op/s; 27 B/s, 1 objects/s recovering
Feb 25 06:53:30 np0005629333 happy_boyd[103771]: 167 167
Feb 25 06:53:30 np0005629333 systemd[1]: libpod-c208c9e7dbd093e4820426798ed7f168094fdc527528e0323b662d02ea26be5b.scope: Deactivated successfully.
Feb 25 06:53:30 np0005629333 podman[103755]: 2026-02-25 11:53:30.839077096 +0000 UTC m=+0.143663260 container died c208c9e7dbd093e4820426798ed7f168094fdc527528e0323b662d02ea26be5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_boyd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 06:53:30 np0005629333 systemd[1]: var-lib-containers-storage-overlay-0ddba997ed61ae5721f81de253f9ff52371f7be539fdb4abbe2febd1ed552326-merged.mount: Deactivated successfully.
Feb 25 06:53:30 np0005629333 podman[103755]: 2026-02-25 11:53:30.887771702 +0000 UTC m=+0.192357846 container remove c208c9e7dbd093e4820426798ed7f168094fdc527528e0323b662d02ea26be5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_boyd, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:53:30 np0005629333 systemd[1]: libpod-conmon-c208c9e7dbd093e4820426798ed7f168094fdc527528e0323b662d02ea26be5b.scope: Deactivated successfully.
Feb 25 06:53:31 np0005629333 podman[103795]: 2026-02-25 11:53:31.007533751 +0000 UTC m=+0.039514666 container create b210c5c0f074d206392a3a37e5814020034f14285aefa9e31d741ab37760eda1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_hodgkin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:53:31 np0005629333 systemd[1]: Started libpod-conmon-b210c5c0f074d206392a3a37e5814020034f14285aefa9e31d741ab37760eda1.scope.
Feb 25 06:53:31 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:53:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eab2abc9cbf60df87628739129171f8ed00fb2745857c65c22d7ce905020e22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:53:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eab2abc9cbf60df87628739129171f8ed00fb2745857c65c22d7ce905020e22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:53:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eab2abc9cbf60df87628739129171f8ed00fb2745857c65c22d7ce905020e22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:53:31 np0005629333 podman[103795]: 2026-02-25 11:53:30.988208151 +0000 UTC m=+0.020189046 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:53:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eab2abc9cbf60df87628739129171f8ed00fb2745857c65c22d7ce905020e22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:53:31 np0005629333 podman[103795]: 2026-02-25 11:53:31.119358833 +0000 UTC m=+0.151339748 container init b210c5c0f074d206392a3a37e5814020034f14285aefa9e31d741ab37760eda1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:53:31 np0005629333 podman[103795]: 2026-02-25 11:53:31.123917503 +0000 UTC m=+0.155898408 container start b210c5c0f074d206392a3a37e5814020034f14285aefa9e31d741ab37760eda1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:53:31 np0005629333 podman[103795]: 2026-02-25 11:53:31.140201277 +0000 UTC m=+0.172182192 container attach b210c5c0f074d206392a3a37e5814020034f14285aefa9e31d741ab37760eda1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_hodgkin, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]: {
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:    "0": [
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:        {
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "devices": [
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "/dev/loop3"
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            ],
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "lv_name": "ceph_lv0",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "lv_size": "21470642176",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "name": "ceph_lv0",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "tags": {
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.cluster_name": "ceph",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.crush_device_class": "",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.encrypted": "0",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.objectstore": "bluestore",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.osd_id": "0",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.type": "block",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.vdo": "0",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.with_tpm": "0"
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            },
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "type": "block",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "vg_name": "ceph_vg0"
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:        }
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:    ],
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:    "1": [
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:        {
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "devices": [
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "/dev/loop4"
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            ],
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "lv_name": "ceph_lv1",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "lv_size": "21470642176",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "name": "ceph_lv1",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "tags": {
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.cluster_name": "ceph",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.crush_device_class": "",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.encrypted": "0",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.objectstore": "bluestore",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.osd_id": "1",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.type": "block",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.vdo": "0",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.with_tpm": "0"
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            },
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "type": "block",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "vg_name": "ceph_vg1"
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:        }
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:    ],
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:    "2": [
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:        {
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "devices": [
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "/dev/loop5"
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            ],
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "lv_name": "ceph_lv2",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "lv_size": "21470642176",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "name": "ceph_lv2",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "tags": {
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.cluster_name": "ceph",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.crush_device_class": "",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.encrypted": "0",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.objectstore": "bluestore",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.osd_id": "2",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.type": "block",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.vdo": "0",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:                "ceph.with_tpm": "0"
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            },
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "type": "block",
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:            "vg_name": "ceph_vg2"
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:        }
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]:    ]
Feb 25 06:53:31 np0005629333 gifted_hodgkin[103812]: }
Feb 25 06:53:31 np0005629333 systemd[1]: libpod-b210c5c0f074d206392a3a37e5814020034f14285aefa9e31d741ab37760eda1.scope: Deactivated successfully.
Feb 25 06:53:31 np0005629333 podman[103795]: 2026-02-25 11:53:31.424498148 +0000 UTC m=+0.456479023 container died b210c5c0f074d206392a3a37e5814020034f14285aefa9e31d741ab37760eda1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:53:31 np0005629333 systemd[1]: var-lib-containers-storage-overlay-4eab2abc9cbf60df87628739129171f8ed00fb2745857c65c22d7ce905020e22-merged.mount: Deactivated successfully.
Feb 25 06:53:31 np0005629333 podman[103795]: 2026-02-25 11:53:31.470856808 +0000 UTC m=+0.502837693 container remove b210c5c0f074d206392a3a37e5814020034f14285aefa9e31d741ab37760eda1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_hodgkin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:53:31 np0005629333 systemd[1]: libpod-conmon-b210c5c0f074d206392a3a37e5814020034f14285aefa9e31d741ab37760eda1.scope: Deactivated successfully.
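[annotation] The short-lived gifted_hodgkin container above follows the usual podman one-shot lifecycle (create, init, start, attach, died, remove) and exists only to emit the JSON captured between attach and died, which has the shape of ceph-volume lvm list --format json output: one key per OSD id, each listing the backing LV, its devices, and its ceph.* tags. A minimal sketch of consuming that blob, assuming it has been saved to a file named osd_list.json (the filename and script are illustrative, not part of this deployment):

import json

# Summarize the OSD inventory printed by the container above:
# OSD id -> LV path, backing device, and selected ceph.* tags.
with open("osd_list.json") as f:
    osds = json.load(f)

for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv.get("tags", {})
        print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
              f"(osd_fsid={tags.get('ceph.osd_fsid', '?')}, "
              f"objectstore={tags.get('ceph.objectstore', '?')})")

Against the output above this prints three lines, mapping osd.0/1/2 to ceph_lv0-2 on /dev/loop3-5.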
Feb 25 06:53:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:53:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:53:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:53:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:53:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:53:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:53:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 06:53:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 06:53:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 06:53:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 06:53:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 06:53:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 06:53:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 06:53:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 06:53:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 06:53:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 06:53:31 np0005629333 podman[103896]: 2026-02-25 11:53:31.881501746 +0000 UTC m=+0.043064737 container create b6fb1857b9adeec381f5c82fd99ab84cc7ac3cbd58107b7678c907864bb931f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_satoshi, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 06:53:31 np0005629333 systemd[1]: Started libpod-conmon-b6fb1857b9adeec381f5c82fd99ab84cc7ac3cbd58107b7678c907864bb931f8.scope.
Feb 25 06:53:31 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:53:31 np0005629333 podman[103896]: 2026-02-25 11:53:31.860137788 +0000 UTC m=+0.021700869 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:53:31 np0005629333 podman[103896]: 2026-02-25 11:53:31.95576896 +0000 UTC m=+0.117332001 container init b6fb1857b9adeec381f5c82fd99ab84cc7ac3cbd58107b7678c907864bb931f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:53:31 np0005629333 podman[103896]: 2026-02-25 11:53:31.961016249 +0000 UTC m=+0.122579250 container start b6fb1857b9adeec381f5c82fd99ab84cc7ac3cbd58107b7678c907864bb931f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:53:31 np0005629333 tender_satoshi[103912]: 167 167
Feb 25 06:53:31 np0005629333 podman[103896]: 2026-02-25 11:53:31.964320303 +0000 UTC m=+0.125883324 container attach b6fb1857b9adeec381f5c82fd99ab84cc7ac3cbd58107b7678c907864bb931f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_satoshi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 06:53:31 np0005629333 systemd[1]: libpod-b6fb1857b9adeec381f5c82fd99ab84cc7ac3cbd58107b7678c907864bb931f8.scope: Deactivated successfully.
Feb 25 06:53:31 np0005629333 podman[103896]: 2026-02-25 11:53:31.965135756 +0000 UTC m=+0.126698747 container died b6fb1857b9adeec381f5c82fd99ab84cc7ac3cbd58107b7678c907864bb931f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 06:53:31 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5e253c492f21ec4ea92c92a6b9c59b4a99569829aa0093a10e7a0a3bddddfa1f-merged.mount: Deactivated successfully.
Feb 25 06:53:31 np0005629333 podman[103896]: 2026-02-25 11:53:31.998841956 +0000 UTC m=+0.160404937 container remove b6fb1857b9adeec381f5c82fd99ab84cc7ac3cbd58107b7678c907864bb931f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:53:32 np0005629333 systemd[1]: libpod-conmon-b6fb1857b9adeec381f5c82fd99ab84cc7ac3cbd58107b7678c907864bb931f8.scope: Deactivated successfully.
Feb 25 06:53:32 np0005629333 podman[103934]: 2026-02-25 11:53:32.112050038 +0000 UTC m=+0.041258445 container create 1ec617948da695189a0e31d6fc694e4e5352a737d1ebcf9c5dd95f51bf81af3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_kalam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 06:53:32 np0005629333 systemd[1]: Started libpod-conmon-1ec617948da695189a0e31d6fc694e4e5352a737d1ebcf9c5dd95f51bf81af3d.scope.
Feb 25 06:53:32 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:53:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e5d5d5da01137dc00e0fae942381288104c5fad299e2f320c45b18b21764e25/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:53:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e5d5d5da01137dc00e0fae942381288104c5fad299e2f320c45b18b21764e25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:53:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e5d5d5da01137dc00e0fae942381288104c5fad299e2f320c45b18b21764e25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:53:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e5d5d5da01137dc00e0fae942381288104c5fad299e2f320c45b18b21764e25/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:53:32 np0005629333 podman[103934]: 2026-02-25 11:53:32.184247003 +0000 UTC m=+0.113455390 container init 1ec617948da695189a0e31d6fc694e4e5352a737d1ebcf9c5dd95f51bf81af3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_kalam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Feb 25 06:53:32 np0005629333 podman[103934]: 2026-02-25 11:53:32.096474895 +0000 UTC m=+0.025683302 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:53:32 np0005629333 podman[103934]: 2026-02-25 11:53:32.193567878 +0000 UTC m=+0.122776295 container start 1ec617948da695189a0e31d6fc694e4e5352a737d1ebcf9c5dd95f51bf81af3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_kalam, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 06:53:32 np0005629333 podman[103934]: 2026-02-25 11:53:32.20206564 +0000 UTC m=+0.131274057 container attach 1ec617948da695189a0e31d6fc694e4e5352a737d1ebcf9c5dd95f51bf81af3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_kalam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 06:53:32 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 11.e scrub starts
Feb 25 06:53:32 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 11.e scrub ok
Feb 25 06:53:32 np0005629333 lvm[104031]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 06:53:32 np0005629333 lvm[104031]: VG ceph_vg1 finished
Feb 25 06:53:32 np0005629333 lvm[104029]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 06:53:32 np0005629333 lvm[104029]: VG ceph_vg0 finished
Feb 25 06:53:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:53:32 np0005629333 lvm[104033]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 06:53:32 np0005629333 lvm[104033]: VG ceph_vg2 finished
Feb 25 06:53:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 85 B/s wr, 3 op/s; 27 B/s, 1 objects/s recovering
Feb 25 06:53:32 np0005629333 silly_kalam[103952]: {}
Feb 25 06:53:32 np0005629333 systemd[1]: libpod-1ec617948da695189a0e31d6fc694e4e5352a737d1ebcf9c5dd95f51bf81af3d.scope: Deactivated successfully.
Feb 25 06:53:32 np0005629333 systemd[1]: libpod-1ec617948da695189a0e31d6fc694e4e5352a737d1ebcf9c5dd95f51bf81af3d.scope: Consumed 1.029s CPU time.
Feb 25 06:53:32 np0005629333 podman[103934]: 2026-02-25 11:53:32.915870207 +0000 UTC m=+0.845078594 container died 1ec617948da695189a0e31d6fc694e4e5352a737d1ebcf9c5dd95f51bf81af3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 06:53:32 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7e5d5d5da01137dc00e0fae942381288104c5fad299e2f320c45b18b21764e25-merged.mount: Deactivated successfully.
Feb 25 06:53:32 np0005629333 podman[103934]: 2026-02-25 11:53:32.965636594 +0000 UTC m=+0.894845011 container remove 1ec617948da695189a0e31d6fc694e4e5352a737d1ebcf9c5dd95f51bf81af3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_kalam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:53:32 np0005629333 systemd[1]: libpod-conmon-1ec617948da695189a0e31d6fc694e4e5352a737d1ebcf9c5dd95f51bf81af3d.scope: Deactivated successfully.
Feb 25 06:53:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:53:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:53:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:53:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:53:33 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Feb 25 06:53:33 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Feb 25 06:53:33 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:53:33 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:53:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:53:35 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Feb 25 06:53:35 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Feb 25 06:53:36 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 3.d scrub starts
Feb 25 06:53:36 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 3.d scrub ok
Feb 25 06:53:36 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Feb 25 06:53:36 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Feb 25 06:53:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:53:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:53:38 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Feb 25 06:53:38 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Feb 25 06:53:38 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Feb 25 06:53:38 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Feb 25 06:53:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:53:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:53:41 np0005629333 python3.9[104224]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
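[annotation] The ansible task above shells out to rpm -V to verify a list of packages. rpm -V is silent for files that match the package database and prints one line per discrepancy: a nine-character flag string (S size, M mode, 5 digest, D device, L link, U user, G group, T mtime, P capabilities; "." means OK), an optional attribute marker, then the path. A minimal sketch of running and decoding the same check, with the package list shortened for illustration:

import subprocess

# Decode rpm -V flag positions (see note above).
FLAGS = {"S": "size", "M": "mode", "5": "digest", "D": "device",
         "L": "link", "U": "user", "G": "group", "T": "mtime",
         "P": "capabilities"}

# rpm exits non-zero when any file differs, so parse stdout rather
# than relying on the return code.
result = subprocess.run(["rpm", "-V", "lvm2", "jq"],
                        capture_output=True, text=True)
for line in result.stdout.splitlines():
    parts = line.split()
    if parts and len(parts[0]) == 9:
        changed = [FLAGS[c] for c in parts[0] if c in FLAGS]
        print(f"{parts[-1]}: {', '.join(changed)} differ")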
Feb 25 06:53:41 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.d scrub starts
Feb 25 06:53:41 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.d scrub ok
Feb 25 06:53:41 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Feb 25 06:53:41 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Feb 25 06:53:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 06:53:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:53:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 06:53:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:53:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:53:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:53:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:53:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:53:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:53:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:53:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:53:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:53:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6988014389607423e-06 of space, bias 4.0, pg target 0.0032385617267528906 quantized to 16 (current 16)
Feb 25 06:53:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:53:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:53:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:53:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 06:53:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:53:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 06:53:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:53:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:53:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:53:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
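[annotation] The pg_autoscaler lines above contain enough to check their own arithmetic: each pool's pg target is its usage ratio times its bias times a cluster-wide PG budget, which from the logged numbers works out to 300 here (consistent with the default mon_target_pg_per_osd=100 across 3 OSDs, though that constant is inferred from the ratios, not logged). The result is then quantized to a power of two, and per-pool floors explain why near-zero targets land on 1, 16, or 32. A sketch that essentially reproduces three of the logged targets:

# Reproduce the pg_autoscaler targets logged above.
# CLUSTER_PG_TARGET = 300 is inferred, not read from the log.
CLUSTER_PG_TARGET = 300

pools = {
    ".mgr":               (7.185749983720779e-06, 1.0),
    "cephfs.cephfs.meta": (2.6988014389607423e-06, 4.0),
    "default.rgw.log":    (4.1969867161554995e-06, 1.0),
}

for name, (ratio, bias) in pools.items():
    target = ratio * bias * CLUSTER_PG_TARGET
    print(f"{name}: pg target {target}")  # compare with the lines above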
Feb 25 06:53:42 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Feb 25 06:53:42 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Feb 25 06:53:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:53:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:53:43 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.b scrub starts
Feb 25 06:53:43 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.b scrub ok
Feb 25 06:53:43 np0005629333 python3.9[104512]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Feb 25 06:53:43 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 7.b scrub starts
Feb 25 06:53:43 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 7.b scrub ok
Feb 25 06:53:44 np0005629333 python3.9[104665]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Feb 25 06:53:44 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Feb 25 06:53:44 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Feb 25 06:53:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:53:45 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Feb 25 06:53:45 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Feb 25 06:53:45 np0005629333 python3.9[104818]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:53:46 np0005629333 python3.9[104971]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
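[annotation] Taken together, the three ansible tasks just above provision swap: dd creates a 1 GiB /swap file (1024 x 1 MiB blocks, skipped if it already exists via creates=/swap), the file task locks it down to root:root 0600, and ansible.posix.mount persists a "none swap sw" entry in /etc/fstab. A minimal sketch of the same steps (mkswap/swapon do not appear in this excerpt and are omitted here as well):

import os
import subprocess

SWAPFILE = "/swap"

# dd if=/dev/zero of=/swap count=1024 bs=1M, honouring creates=/swap
if not os.path.exists(SWAPFILE):
    subprocess.run(["dd", "if=/dev/zero", f"of={SWAPFILE}",
                    "count=1024", "bs=1M"], check=True)

os.chown(SWAPFILE, 0, 0)   # owner=root, group=root
os.chmod(SWAPFILE, 0o600)  # mode=0600

# Persist the fstab entry the mount task writes (state=present):
# src=/swap name=none fstype=swap opts=sw dump=0 passno=0
entry = f"{SWAPFILE} none swap sw 0 0\n"
with open("/etc/fstab", "r+") as fstab:
    if entry not in fstab.read():
        fstab.write(entry)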
Feb 25 06:53:46 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Feb 25 06:53:46 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Feb 25 06:53:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v234: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:53:47 np0005629333 python3.9[105126]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:53:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:53:48 np0005629333 python3.9[105279]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:53:48 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Feb 25 06:53:48 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Feb 25 06:53:48 np0005629333 python3.9[105358]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:53:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:53:49 np0005629333 python3.9[105511]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:53:50 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 7.c scrub starts
Feb 25 06:53:50 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 7.c scrub ok
Feb 25 06:53:50 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Feb 25 06:53:50 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Feb 25 06:53:50 np0005629333 python3.9[105666]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Feb 25 06:53:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:53:51 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 7.e scrub starts
Feb 25 06:53:51 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 7.e scrub ok
Feb 25 06:53:51 np0005629333 python3.9[105820]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Feb 25 06:53:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:53:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:53:53 np0005629333 python3.9[105974]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 25 06:53:53 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Feb 25 06:53:53 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Feb 25 06:53:53 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Feb 25 06:53:53 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Feb 25 06:53:53 np0005629333 python3.9[106127]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Feb 25 06:53:54 np0005629333 python3.9[106280]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 25 06:53:54 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Feb 25 06:53:54 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Feb 25 06:53:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:53:55 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Feb 25 06:53:55 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Feb 25 06:53:56 np0005629333 python3.9[106434]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:53:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:53:57 np0005629333 python3.9[106587]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:53:57 np0005629333 python3.9[106666]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:53:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:53:58 np0005629333 python3.9[106819]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:53:58 np0005629333 python3.9[106898]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:53:58 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Feb 25 06:53:58 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Feb 25 06:53:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:53:59 np0005629333 python3.9[107051]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 25 06:54:00 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Feb 25 06:54:00 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Feb 25 06:54:00 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Feb 25 06:54:00 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Feb 25 06:54:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:01 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Feb 25 06:54:01 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Feb 25 06:54:01 np0005629333 python3.9[107202]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:54:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:54:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:54:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:54:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:54:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:54:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:54:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:54:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:03 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Feb 25 06:54:03 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Feb 25 06:54:03 np0005629333 python3.9[107354]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
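[annotation] The slurp task above simply returns the contents of /etc/tuned/active_profile (base64-encoded on the wire); run locally, it amounts to the sketch below:

# Read the currently active tuned profile, as the slurp task above does.
with open("/etc/tuned/active_profile") as f:
    print("active tuned profile:", f.read().strip())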
Feb 25 06:54:04 np0005629333 python3.9[107504]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:54:04 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Feb 25 06:54:04 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Feb 25 06:54:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:05 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Feb 25 06:54:05 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Feb 25 06:54:06 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Feb 25 06:54:06 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Feb 25 06:54:06 np0005629333 python3.9[107657]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 06:54:06 np0005629333 systemd[1]: Stopping Dynamic System Tuning Daemon...
Feb 25 06:54:06 np0005629333 systemd[1]: tuned.service: Deactivated successfully.
Feb 25 06:54:06 np0005629333 systemd[1]: Stopped Dynamic System Tuning Daemon.
Feb 25 06:54:06 np0005629333 systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 25 06:54:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:06 np0005629333 systemd[1]: Started Dynamic System Tuning Daemon.
Feb 25 06:54:07 np0005629333 python3.9[107818]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Feb 25 06:54:07 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Feb 25 06:54:07 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Feb 25 06:54:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:54:08 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Feb 25 06:54:08 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Feb 25 06:54:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:09 np0005629333 python3.9[107971]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 06:54:09 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Feb 25 06:54:09 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Feb 25 06:54:10 np0005629333 python3.9[108126]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 06:54:10 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Feb 25 06:54:10 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Feb 25 06:54:10 np0005629333 systemd-logind[811]: Session 35 logged out. Waiting for processes to exit.
Feb 25 06:54:10 np0005629333 systemd[1]: session-35.scope: Deactivated successfully.
Feb 25 06:54:10 np0005629333 systemd[1]: session-35.scope: Consumed 1min 1.785s CPU time.
Feb 25 06:54:10 np0005629333 systemd-logind[811]: Removed session 35.
Feb 25 06:54:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:54:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:13 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Feb 25 06:54:13 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Feb 25 06:54:14 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Feb 25 06:54:14 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Feb 25 06:54:14 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Feb 25 06:54:14 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Feb 25 06:54:14 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Feb 25 06:54:14 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Feb 25 06:54:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:16 np0005629333 systemd-logind[811]: New session 36 of user zuul.
Feb 25 06:54:16 np0005629333 systemd[1]: Started Session 36 of User zuul.
Feb 25 06:54:16 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Feb 25 06:54:16 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Feb 25 06:54:16 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Feb 25 06:54:16 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Feb 25 06:54:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:17 np0005629333 python3.9[108306]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:54:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:54:18 np0005629333 python3.9[108463]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Feb 25 06:54:18 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Feb 25 06:54:18 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Feb 25 06:54:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:19 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Feb 25 06:54:19 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Feb 25 06:54:19 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Feb 25 06:54:19 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Feb 25 06:54:19 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Feb 25 06:54:19 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Feb 25 06:54:20 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Feb 25 06:54:20 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Feb 25 06:54:20 np0005629333 python3.9[108617]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 25 06:54:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:21 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Feb 25 06:54:21 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Feb 25 06:54:21 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Feb 25 06:54:21 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Feb 25 06:54:21 np0005629333 python3.9[108702]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 25 06:54:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:54:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:23 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Feb 25 06:54:23 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Feb 25 06:54:23 np0005629333 python3.9[108856]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
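The paired dnf invocations above split the openvswitch install into a download-only pass (download_only=True) followed by the actual transaction, so repository or network failures surface before anything changes on disk. Sketched from the logged parameters (task names assumed):

    - name: Pre-fetch openvswitch               # hypothetical name
      ansible.builtin.dnf:
        name: openvswitch
        download_only: true

    - name: Install openvswitch                 # hypothetical name
      ansible.builtin.dnf:
        name: openvswitch
        state: present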
Feb 25 06:54:23 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Feb 25 06:54:23 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Feb 25 06:54:24 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Feb 25 06:54:24 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Feb 25 06:54:24 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Feb 25 06:54:24 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Feb 25 06:54:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:25 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 3.c scrub starts
Feb 25 06:54:25 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 3.c scrub ok
Feb 25 06:54:26 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 8.b scrub starts
Feb 25 06:54:26 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 8.b scrub ok
Feb 25 06:54:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:27 np0005629333 python3.9[109010]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
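With the package in place, the systemd invocation above enables and starts the service in one call. As a task, per the logged parameters (name assumed):

    - name: Enable and start Open vSwitch       # hypothetical name
      ansible.builtin.systemd:
        name: openvswitch.service
        state: started
        enabled: true
        masked: false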
Feb 25 06:54:27 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Feb 25 06:54:27 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Feb 25 06:54:27 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 3.f scrub starts
Feb 25 06:54:27 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 3.f scrub ok
Feb 25 06:54:27 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Feb 25 06:54:27 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Feb 25 06:54:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:54:28 np0005629333 python3.9[109164]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:54:28 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Feb 25 06:54:28 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Feb 25 06:54:28 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Feb 25 06:54:28 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Feb 25 06:54:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:29 np0005629333 python3.9[109317]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
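The sefcontext invocation above records a persistent SELinux file-context rule so everything under /var/lib/edpm-config is labeled container_file_t and is therefore accessible to container processes; it is the module equivalent of semanage fcontext -a -t container_file_t '/var/lib/edpm-config(/.*)?'. As a task, per the logged parameters (name assumed):

    - name: Label edpm-config for container access   # hypothetical name
      community.general.sefcontext:
        target: "/var/lib/edpm-config(/.*)?"
        setype: container_file_t
        selevel: s0
        state: present
        reload: true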
Feb 25 06:54:29 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Feb 25 06:54:29 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Feb 25 06:54:29 np0005629333 python3.9[109467]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:54:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_11:54:30
Feb 25 06:54:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 06:54:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 06:54:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'backups', 'images', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'volumes', 'cephfs.cephfs.meta', '.rgw.root']
Feb 25 06:54:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
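The balancer pass above ran in upmap mode with a 5% max-misplaced ceiling across all eleven pools and prepared 0/10 upmap changes, i.e. the PG distribution needed no adjustment. A hedged one-off check of the same state (ceph balancer status is a real CLI subcommand; the task wrapper and name are assumptions):

    - name: Check balancer state                 # hypothetical name
      ansible.builtin.command:
        cmd: ceph balancer status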
Feb 25 06:54:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:30 np0005629333 python3.9[109626]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 25 06:54:31 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Feb 25 06:54:31 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Feb 25 06:54:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:54:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:54:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:54:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:54:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:54:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:54:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 06:54:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 06:54:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 06:54:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 06:54:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 06:54:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 06:54:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 06:54:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 06:54:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 06:54:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 06:54:31 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Feb 25 06:54:31 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Feb 25 06:54:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:54:32 np0005629333 python3.9[109780]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
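The rpm -V call above re-verifies every package the preceding dnf task installed against the rpm database (file sizes, modes, digests, and so on); rpm exits non-zero on any discrepancy, which fails the play. As a task wrapping the logged command (name assumed):

    - name: Verify installed packages            # hypothetical name
      ansible.builtin.command:
        cmd: >-
          rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux
          python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned
          systemd-container crypto-policies-scripts grubby sos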
Feb 25 06:54:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:33 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Feb 25 06:54:33 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Feb 25 06:54:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:54:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:54:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 06:54:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:54:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 06:54:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:54:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 06:54:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 06:54:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 06:54:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 06:54:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:54:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:54:33 np0005629333 podman[110182]: 2026-02-25 11:54:33.974161229 +0000 UTC m=+0.042944753 container create a741a61da67b8b1097d64384bcb81cef9f484f2f105a924014de2e7e150f121d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_rubin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:54:33 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:54:33 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:54:33 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 06:54:34 np0005629333 systemd[1]: Started libpod-conmon-a741a61da67b8b1097d64384bcb81cef9f484f2f105a924014de2e7e150f121d.scope.
Feb 25 06:54:34 np0005629333 podman[110182]: 2026-02-25 11:54:33.950657341 +0000 UTC m=+0.019440845 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:54:34 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:54:34 np0005629333 podman[110182]: 2026-02-25 11:54:34.075281021 +0000 UTC m=+0.144064515 container init a741a61da67b8b1097d64384bcb81cef9f484f2f105a924014de2e7e150f121d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 06:54:34 np0005629333 podman[110182]: 2026-02-25 11:54:34.083951714 +0000 UTC m=+0.152735228 container start a741a61da67b8b1097d64384bcb81cef9f484f2f105a924014de2e7e150f121d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_rubin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:54:34 np0005629333 podman[110182]: 2026-02-25 11:54:34.08917087 +0000 UTC m=+0.157954364 container attach a741a61da67b8b1097d64384bcb81cef9f484f2f105a924014de2e7e150f121d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:54:34 np0005629333 quizzical_rubin[110228]: 167 167
Feb 25 06:54:34 np0005629333 podman[110182]: 2026-02-25 11:54:34.091576747 +0000 UTC m=+0.160360271 container died a741a61da67b8b1097d64384bcb81cef9f484f2f105a924014de2e7e150f121d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:54:34 np0005629333 systemd[1]: libpod-a741a61da67b8b1097d64384bcb81cef9f484f2f105a924014de2e7e150f121d.scope: Deactivated successfully.
Feb 25 06:54:34 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ec0e911f6bec304c876f4fa99aa1ba2bba44f966629b794abca784cec1f75bab-merged.mount: Deactivated successfully.
Feb 25 06:54:34 np0005629333 podman[110182]: 2026-02-25 11:54:34.150039575 +0000 UTC m=+0.218823099 container remove a741a61da67b8b1097d64384bcb81cef9f484f2f105a924014de2e7e150f121d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_rubin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 06:54:34 np0005629333 systemd[1]: libpod-conmon-a741a61da67b8b1097d64384bcb81cef9f484f2f105a924014de2e7e150f121d.scope: Deactivated successfully.
Feb 25 06:54:34 np0005629333 python3.9[110225]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
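The file invocation above then creates /var/lib/edpm-config itself with zuul ownership and the same SELinux type, so the directory matches the context rule registered earlier whether or not a relabel has run. Per the logged parameters (name assumed):

    - name: Create the edpm-config directory     # hypothetical name
      ansible.builtin.file:
        path: /var/lib/edpm-config
        state: directory
        owner: zuul
        group: zuul
        mode: "0755"
        setype: container_file_t
        selevel: s0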
Feb 25 06:54:34 np0005629333 podman[110255]: 2026-02-25 11:54:34.292676818 +0000 UTC m=+0.048021356 container create 5813dbcdae9235dbdc9e3c65aba162ca527488f4cc693cca2836ce214a20e9c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hypatia, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:54:34 np0005629333 systemd[1]: Started libpod-conmon-5813dbcdae9235dbdc9e3c65aba162ca527488f4cc693cca2836ce214a20e9c2.scope.
Feb 25 06:54:34 np0005629333 podman[110255]: 2026-02-25 11:54:34.267315588 +0000 UTC m=+0.022660166 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:54:34 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:54:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66db0ce931636deab683eb6394e238150fea94547c3629528fa0c2784771a5be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:54:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66db0ce931636deab683eb6394e238150fea94547c3629528fa0c2784771a5be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:54:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66db0ce931636deab683eb6394e238150fea94547c3629528fa0c2784771a5be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:54:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66db0ce931636deab683eb6394e238150fea94547c3629528fa0c2784771a5be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:54:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66db0ce931636deab683eb6394e238150fea94547c3629528fa0c2784771a5be/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:54:34 np0005629333 podman[110255]: 2026-02-25 11:54:34.409715965 +0000 UTC m=+0.165060653 container init 5813dbcdae9235dbdc9e3c65aba162ca527488f4cc693cca2836ce214a20e9c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hypatia, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 06:54:34 np0005629333 podman[110255]: 2026-02-25 11:54:34.419240882 +0000 UTC m=+0.174585420 container start 5813dbcdae9235dbdc9e3c65aba162ca527488f4cc693cca2836ce214a20e9c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hypatia, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Feb 25 06:54:34 np0005629333 podman[110255]: 2026-02-25 11:54:34.429366246 +0000 UTC m=+0.184710784 container attach 5813dbcdae9235dbdc9e3c65aba162ca527488f4cc693cca2836ce214a20e9c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hypatia, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 06:54:34 np0005629333 eager_hypatia[110314]: --> passed data devices: 0 physical, 3 LVM
Feb 25 06:54:34 np0005629333 eager_hypatia[110314]: --> All data devices are unavailable
Feb 25 06:54:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:34 np0005629333 systemd[1]: libpod-5813dbcdae9235dbdc9e3c65aba162ca527488f4cc693cca2836ce214a20e9c2.scope: Deactivated successfully.
Feb 25 06:54:34 np0005629333 python3.9[110430]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:54:34 np0005629333 podman[110439]: 2026-02-25 11:54:34.926091356 +0000 UTC m=+0.034641081 container died 5813dbcdae9235dbdc9e3c65aba162ca527488f4cc693cca2836ce214a20e9c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hypatia, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Feb 25 06:54:34 np0005629333 systemd[1]: var-lib-containers-storage-overlay-66db0ce931636deab683eb6394e238150fea94547c3629528fa0c2784771a5be-merged.mount: Deactivated successfully.
Feb 25 06:54:34 np0005629333 podman[110439]: 2026-02-25 11:54:34.968939586 +0000 UTC m=+0.077489261 container remove 5813dbcdae9235dbdc9e3c65aba162ca527488f4cc693cca2836ce214a20e9c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:54:34 np0005629333 systemd[1]: libpod-conmon-5813dbcdae9235dbdc9e3c65aba162ca527488f4cc693cca2836ce214a20e9c2.scope: Deactivated successfully.
Feb 25 06:54:35 np0005629333 podman[110669]: 2026-02-25 11:54:35.425497851 +0000 UTC m=+0.065848445 container create 4e814bd71713f7c7d13e1d386b1218cf339cfc6828f628339d1e5361a8731fc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_satoshi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:54:35 np0005629333 systemd[1]: Started libpod-conmon-4e814bd71713f7c7d13e1d386b1218cf339cfc6828f628339d1e5361a8731fc5.scope.
Feb 25 06:54:35 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:54:35 np0005629333 podman[110669]: 2026-02-25 11:54:35.478106164 +0000 UTC m=+0.118456728 container init 4e814bd71713f7c7d13e1d386b1218cf339cfc6828f628339d1e5361a8731fc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_satoshi, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 06:54:35 np0005629333 podman[110669]: 2026-02-25 11:54:35.483629619 +0000 UTC m=+0.123980223 container start 4e814bd71713f7c7d13e1d386b1218cf339cfc6828f628339d1e5361a8731fc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 06:54:35 np0005629333 reverent_satoshi[110688]: 167 167
Feb 25 06:54:35 np0005629333 systemd[1]: libpod-4e814bd71713f7c7d13e1d386b1218cf339cfc6828f628339d1e5361a8731fc5.scope: Deactivated successfully.
Feb 25 06:54:35 np0005629333 podman[110669]: 2026-02-25 11:54:35.489681648 +0000 UTC m=+0.130032242 container attach 4e814bd71713f7c7d13e1d386b1218cf339cfc6828f628339d1e5361a8731fc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_satoshi, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 06:54:35 np0005629333 podman[110669]: 2026-02-25 11:54:35.490113931 +0000 UTC m=+0.130464525 container died 4e814bd71713f7c7d13e1d386b1218cf339cfc6828f628339d1e5361a8731fc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_satoshi, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 06:54:35 np0005629333 podman[110669]: 2026-02-25 11:54:35.400635075 +0000 UTC m=+0.040985639 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:54:35 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7cfb567441b198a92fdac96b8ee6789bb5343a9bc7c047a32bfaea11345cf011-merged.mount: Deactivated successfully.
Feb 25 06:54:35 np0005629333 podman[110669]: 2026-02-25 11:54:35.542368914 +0000 UTC m=+0.182719518 container remove 4e814bd71713f7c7d13e1d386b1218cf339cfc6828f628339d1e5361a8731fc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_satoshi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 06:54:35 np0005629333 systemd[1]: libpod-conmon-4e814bd71713f7c7d13e1d386b1218cf339cfc6828f628339d1e5361a8731fc5.scope: Deactivated successfully.
Feb 25 06:54:35 np0005629333 python3.9[110678]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 25 06:54:35 np0005629333 podman[110712]: 2026-02-25 11:54:35.735924154 +0000 UTC m=+0.073082657 container create 49ac681680b7147b5ade578b529c46817a45d25f9c79a20c0b5cea8430be1672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_saha, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:54:35 np0005629333 systemd[1]: Started libpod-conmon-49ac681680b7147b5ade578b529c46817a45d25f9c79a20c0b5cea8430be1672.scope.
Feb 25 06:54:35 np0005629333 podman[110712]: 2026-02-25 11:54:35.698200548 +0000 UTC m=+0.035359061 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:54:35 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:54:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bc5f7b10290ab83270b5124358a8ff99e7d666c49220a9e3bdfc2e92a6d60a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:54:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bc5f7b10290ab83270b5124358a8ff99e7d666c49220a9e3bdfc2e92a6d60a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:54:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bc5f7b10290ab83270b5124358a8ff99e7d666c49220a9e3bdfc2e92a6d60a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:54:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bc5f7b10290ab83270b5124358a8ff99e7d666c49220a9e3bdfc2e92a6d60a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:54:35 np0005629333 podman[110712]: 2026-02-25 11:54:35.86755706 +0000 UTC m=+0.204715583 container init 49ac681680b7147b5ade578b529c46817a45d25f9c79a20c0b5cea8430be1672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_saha, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:54:35 np0005629333 podman[110712]: 2026-02-25 11:54:35.876430029 +0000 UTC m=+0.213588542 container start 49ac681680b7147b5ade578b529c46817a45d25f9c79a20c0b5cea8430be1672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:54:35 np0005629333 podman[110712]: 2026-02-25 11:54:35.894576167 +0000 UTC m=+0.231734710 container attach 49ac681680b7147b5ade578b529c46817a45d25f9c79a20c0b5cea8430be1672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_saha, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 06:54:36 np0005629333 busy_saha[110729]: {
Feb 25 06:54:36 np0005629333 busy_saha[110729]:    "0": [
Feb 25 06:54:36 np0005629333 busy_saha[110729]:        {
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "devices": [
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "/dev/loop3"
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            ],
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "lv_name": "ceph_lv0",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "lv_size": "21470642176",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "name": "ceph_lv0",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "tags": {
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.cluster_name": "ceph",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.crush_device_class": "",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.encrypted": "0",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.objectstore": "bluestore",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.osd_id": "0",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.type": "block",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.vdo": "0",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.with_tpm": "0"
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            },
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "type": "block",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "vg_name": "ceph_vg0"
Feb 25 06:54:36 np0005629333 busy_saha[110729]:        }
Feb 25 06:54:36 np0005629333 busy_saha[110729]:    ],
Feb 25 06:54:36 np0005629333 busy_saha[110729]:    "1": [
Feb 25 06:54:36 np0005629333 busy_saha[110729]:        {
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "devices": [
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "/dev/loop4"
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            ],
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "lv_name": "ceph_lv1",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "lv_size": "21470642176",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "name": "ceph_lv1",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "tags": {
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.cluster_name": "ceph",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.crush_device_class": "",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.encrypted": "0",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.objectstore": "bluestore",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.osd_id": "1",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.type": "block",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.vdo": "0",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.with_tpm": "0"
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            },
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "type": "block",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "vg_name": "ceph_vg1"
Feb 25 06:54:36 np0005629333 busy_saha[110729]:        }
Feb 25 06:54:36 np0005629333 busy_saha[110729]:    ],
Feb 25 06:54:36 np0005629333 busy_saha[110729]:    "2": [
Feb 25 06:54:36 np0005629333 busy_saha[110729]:        {
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "devices": [
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "/dev/loop5"
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            ],
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "lv_name": "ceph_lv2",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "lv_size": "21470642176",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "name": "ceph_lv2",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "tags": {
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.cluster_name": "ceph",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.crush_device_class": "",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.encrypted": "0",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.objectstore": "bluestore",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.osd_id": "2",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.type": "block",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.vdo": "0",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:                "ceph.with_tpm": "0"
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            },
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "type": "block",
Feb 25 06:54:36 np0005629333 busy_saha[110729]:            "vg_name": "ceph_vg2"
Feb 25 06:54:36 np0005629333 busy_saha[110729]:        }
Feb 25 06:54:36 np0005629333 busy_saha[110729]:    ]
Feb 25 06:54:36 np0005629333 busy_saha[110729]: }
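The JSON above is ceph-volume lvm list output: three bluestore OSDs (osd ids 0-2), each backed by one logical volume (ceph_vg0/ceph_lv0 through ceph_vg2/ceph_lv2) on a loop device, all carrying the same ceph.cluster_fsid and the default_drive_group spec affinity. The short-lived busy_saha container was launched by the mgr's cephadm module, not by Ansible; a hedged sketch of reproducing the same query by hand, assuming the image digest from the log:

    # The bind mounts below are assumptions; ceph-volume needs device and udev access.
    - name: Inspect LVM-backed OSDs              # hypothetical; the log shows cephadm running this
      ansible.builtin.command:
        cmd: >-
          podman run --rm --privileged -v /dev:/dev -v /run/udev:/run/udev
          quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
          ceph-volume lvm list --format json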
Feb 25 06:54:36 np0005629333 systemd[1]: libpod-49ac681680b7147b5ade578b529c46817a45d25f9c79a20c0b5cea8430be1672.scope: Deactivated successfully.
Feb 25 06:54:36 np0005629333 podman[110712]: 2026-02-25 11:54:36.135159234 +0000 UTC m=+0.472317707 container died 49ac681680b7147b5ade578b529c46817a45d25f9c79a20c0b5cea8430be1672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:54:36 np0005629333 systemd[1]: var-lib-containers-storage-overlay-3bc5f7b10290ab83270b5124358a8ff99e7d666c49220a9e3bdfc2e92a6d60a3-merged.mount: Deactivated successfully.
Feb 25 06:54:36 np0005629333 podman[110712]: 2026-02-25 11:54:36.188193199 +0000 UTC m=+0.525351662 container remove 49ac681680b7147b5ade578b529c46817a45d25f9c79a20c0b5cea8430be1672 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 06:54:36 np0005629333 systemd[1]: libpod-conmon-49ac681680b7147b5ade578b529c46817a45d25f9c79a20c0b5cea8430be1672.scope: Deactivated successfully.
Feb 25 06:54:36 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Feb 25 06:54:36 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Feb 25 06:54:36 np0005629333 podman[110813]: 2026-02-25 11:54:36.592731628 +0000 UTC m=+0.045082654 container create fe806e223ec8d4dc7424451e6d9d748006f27ea2f821eff8a00a3f41ab6c0fe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_aryabhata, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 06:54:36 np0005629333 podman[110813]: 2026-02-25 11:54:36.565602518 +0000 UTC m=+0.017953564 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:54:36 np0005629333 systemd[1]: Started libpod-conmon-fe806e223ec8d4dc7424451e6d9d748006f27ea2f821eff8a00a3f41ab6c0fe5.scope.
Feb 25 06:54:36 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:54:36 np0005629333 podman[110813]: 2026-02-25 11:54:36.756840893 +0000 UTC m=+0.209192019 container init fe806e223ec8d4dc7424451e6d9d748006f27ea2f821eff8a00a3f41ab6c0fe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_aryabhata, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:54:36 np0005629333 podman[110813]: 2026-02-25 11:54:36.763080908 +0000 UTC m=+0.215431934 container start fe806e223ec8d4dc7424451e6d9d748006f27ea2f821eff8a00a3f41ab6c0fe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_aryabhata, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:54:36 np0005629333 keen_aryabhata[110829]: 167 167
Feb 25 06:54:36 np0005629333 systemd[1]: libpod-fe806e223ec8d4dc7424451e6d9d748006f27ea2f821eff8a00a3f41ab6c0fe5.scope: Deactivated successfully.
Feb 25 06:54:36 np0005629333 podman[110813]: 2026-02-25 11:54:36.778239403 +0000 UTC m=+0.230590539 container attach fe806e223ec8d4dc7424451e6d9d748006f27ea2f821eff8a00a3f41ab6c0fe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:54:36 np0005629333 podman[110813]: 2026-02-25 11:54:36.778907381 +0000 UTC m=+0.231258407 container died fe806e223ec8d4dc7424451e6d9d748006f27ea2f821eff8a00a3f41ab6c0fe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 06:54:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:36 np0005629333 systemd[1]: var-lib-containers-storage-overlay-9f30a633878ac381228110f64c38c7aa8c67c91b8fc328853a5e73cf08f66b37-merged.mount: Deactivated successfully.
Feb 25 06:54:36 np0005629333 podman[110813]: 2026-02-25 11:54:36.887446341 +0000 UTC m=+0.339797387 container remove fe806e223ec8d4dc7424451e6d9d748006f27ea2f821eff8a00a3f41ab6c0fe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_aryabhata, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:54:36 np0005629333 systemd[1]: libpod-conmon-fe806e223ec8d4dc7424451e6d9d748006f27ea2f821eff8a00a3f41ab6c0fe5.scope: Deactivated successfully.
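The six podman[110813] entries above trace one short-lived container end to end: podman emits create, init, start, attach, died, and remove events to the journal (the events_logger setting applied later in this log), while systemd starts and tears down the matching libpod-* and libpod-conmon-* scopes around it. A minimal sketch, assuming journal lines in exactly this format, that recovers the per-container event sequence:

import re

# Matches the podman event lines above, e.g.
# "... container died fe806e22... (image=..., name=keen_aryabhata, ...)"
EVENT_RE = re.compile(
    r"container (?P<event>create|init|start|attach|died|remove) "
    r"(?P<cid>[0-9a-f]{64}).*?name=(?P<name>[^,)]+)"
)

def lifecycles(lines):
    """Group podman container events by container ID, in log order."""
    runs = {}
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            runs.setdefault(m["cid"], []).append(m["event"])
    return runs

# For a run like the one above, each ID should map to the sequence
# ['create', 'init', 'start', 'attach', 'died', 'remove'].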
Feb 25 06:54:37 np0005629333 podman[110855]: 2026-02-25 11:54:37.068251624 +0000 UTC m=+0.054208939 container create 4f41a694a53a1c30cba4f25338f890ad479eacac706f8a760d9e62d6c7c1a3fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_brown, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 06:54:37 np0005629333 systemd[1]: Started libpod-conmon-4f41a694a53a1c30cba4f25338f890ad479eacac706f8a760d9e62d6c7c1a3fd.scope.
Feb 25 06:54:37 np0005629333 podman[110855]: 2026-02-25 11:54:37.038154761 +0000 UTC m=+0.024112156 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:54:37 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:54:37 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b6d766a74ca5337879ad9948fb1cc99ebc3a2eddbea0ef3192c0c69c8c4a434/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:54:37 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b6d766a74ca5337879ad9948fb1cc99ebc3a2eddbea0ef3192c0c69c8c4a434/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:54:37 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b6d766a74ca5337879ad9948fb1cc99ebc3a2eddbea0ef3192c0c69c8c4a434/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:54:37 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b6d766a74ca5337879ad9948fb1cc99ebc3a2eddbea0ef3192c0c69c8c4a434/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
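These four kernel warnings are the stock XFS notice for filesystems whose on-disk inode timestamps are signed 32-bit seconds: 0x7fffffff seconds past the Unix epoch is the last representable instant (filesystems formatted with the newer XFS bigtime feature extend well beyond it). A quick check of where that limit lands:

from datetime import datetime, timezone

# 0x7fffffff = 2147483647 seconds after 1970-01-01T00:00:00Z
limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
print(limit.isoformat())  # 2038-01-19T03:14:07+00:00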
Feb 25 06:54:37 np0005629333 podman[110855]: 2026-02-25 11:54:37.16736652 +0000 UTC m=+0.153323855 container init 4f41a694a53a1c30cba4f25338f890ad479eacac706f8a760d9e62d6c7c1a3fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 06:54:37 np0005629333 podman[110855]: 2026-02-25 11:54:37.176233058 +0000 UTC m=+0.162190393 container start 4f41a694a53a1c30cba4f25338f890ad479eacac706f8a760d9e62d6c7c1a3fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 06:54:37 np0005629333 podman[110855]: 2026-02-25 11:54:37.181098484 +0000 UTC m=+0.167055839 container attach 4f41a694a53a1c30cba4f25338f890ad479eacac706f8a760d9e62d6c7c1a3fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_brown, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Feb 25 06:54:37 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Feb 25 06:54:37 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Feb 25 06:54:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:54:37 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Feb 25 06:54:37 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Feb 25 06:54:37 np0005629333 lvm[111051]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 06:54:37 np0005629333 lvm[111051]: VG ceph_vg1 finished
Feb 25 06:54:37 np0005629333 lvm[111050]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 06:54:37 np0005629333 lvm[111050]: VG ceph_vg0 finished
Feb 25 06:54:37 np0005629333 systemd[77728]: Created slice User Background Tasks Slice.
Feb 25 06:54:37 np0005629333 lvm[111053]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 06:54:37 np0005629333 lvm[111053]: VG ceph_vg2 finished
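The lvm[...] pairs are characteristic of event-driven autoactivation: as each loop-backed PV shows up, pvscan records it, and once all PVs of a VG are present the VG is reported complete and activated; here three single-PV groups (ceph_vg0 through ceph_vg2 on /dev/loop3 through /dev/loop5) complete independently. A small sketch, assuming the lvm2 CLI is installed, to confirm the PV-to-VG mapping after the fact:

import subprocess

# Report each physical volume and its volume group; on this host the
# expected pairs are /dev/loop3 -> ceph_vg0, /dev/loop4 -> ceph_vg1,
# /dev/loop5 -> ceph_vg2.
out = subprocess.run(
    ["pvs", "--noheadings", "-o", "pv_name,vg_name"],
    capture_output=True, text=True, check=True,
).stdout
for line in out.splitlines():
    fields = line.split()
    if len(fields) == 2:
        pv, vg = fields
        print(f"{pv} -> {vg}")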
Feb 25 06:54:37 np0005629333 systemd[77728]: Starting Cleanup of User's Temporary Files and Directories...
Feb 25 06:54:37 np0005629333 systemd[77728]: Finished Cleanup of User's Temporary Files and Directories.
Feb 25 06:54:37 np0005629333 exciting_brown[110896]: {}
Feb 25 06:54:37 np0005629333 systemd[1]: libpod-4f41a694a53a1c30cba4f25338f890ad479eacac706f8a760d9e62d6c7c1a3fd.scope: Deactivated successfully.
Feb 25 06:54:37 np0005629333 systemd[1]: libpod-4f41a694a53a1c30cba4f25338f890ad479eacac706f8a760d9e62d6c7c1a3fd.scope: Consumed 1.109s CPU time.
Feb 25 06:54:37 np0005629333 podman[110855]: 2026-02-25 11:54:37.989321376 +0000 UTC m=+0.975278721 container died 4f41a694a53a1c30cba4f25338f890ad479eacac706f8a760d9e62d6c7c1a3fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_brown, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:54:38 np0005629333 systemd[1]: var-lib-containers-storage-overlay-9b6d766a74ca5337879ad9948fb1cc99ebc3a2eddbea0ef3192c0c69c8c4a434-merged.mount: Deactivated successfully.
Feb 25 06:54:38 np0005629333 podman[110855]: 2026-02-25 11:54:38.035044927 +0000 UTC m=+1.021002232 container remove 4f41a694a53a1c30cba4f25338f890ad479eacac706f8a760d9e62d6c7c1a3fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 06:54:38 np0005629333 systemd[1]: libpod-conmon-4f41a694a53a1c30cba4f25338f890ad479eacac706f8a760d9e62d6c7c1a3fd.scope: Deactivated successfully.
Feb 25 06:54:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:54:38 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:54:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:54:38 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
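The handle_command/audit pairs show cephadm persisting its per-host inventory as values in the monitor config-key store under mgr/cephadm/ (one key for the host record, one for its device list); the audit channel records each set without echoing the payload. A sketch reading one of those keys back, assuming a client with admin access and assuming the stored value is JSON, as cephadm inventory blobs normally are:

import json
import subprocess

# Hypothetical read-back of the key set above; requires e.g. client.admin.
key = "mgr/cephadm/host.compute-0.devices.0"
raw = subprocess.run(
    ["ceph", "config-key", "get", key],
    capture_output=True, text=True, check=True,
).stdout
devices = json.loads(raw)
print(f"{key}: {len(raw)} bytes of JSON")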
Feb 25 06:54:38 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Feb 25 06:54:38 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Feb 25 06:54:38 np0005629333 python3.9[111147]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 25 06:54:38 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 2.b scrub starts
Feb 25 06:54:38 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 2.b scrub ok
Feb 25 06:54:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:39 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:54:39 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:54:39 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Feb 25 06:54:39 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Feb 25 06:54:40 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Feb 25 06:54:40 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Feb 25 06:54:40 np0005629333 python3.9[111301]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:54:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:41 np0005629333 python3.9[111456]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
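Taken together, the last three ansible steps form a guard pattern: install os-net-config, stat the marker file /var/lib/edpm-config/os-net-config.returncode, then slurp its contents so a later task can decide whether network configuration must be (re)applied. The playbook's exact condition isn't visible in this log; a plausible sketch of the check:

from pathlib import Path

MARKER = Path("/var/lib/edpm-config/os-net-config.returncode")

def needs_net_config() -> bool:
    """Re-run os-net-config if the marker is missing or records a
    non-zero exit; the precise rule is an assumption, since the log
    only shows the stat and slurp."""
    if not MARKER.exists():
        return True
    return MARKER.read_text().strip() != "0"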
Feb 25 06:54:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 06:54:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:54:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 06:54:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:54:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:54:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:54:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:54:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:54:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:54:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:54:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:54:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:54:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6988014389607423e-06 of space, bias 4.0, pg target 0.0032385617267528906 quantized to 16 (current 16)
Feb 25 06:54:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:54:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:54:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:54:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 06:54:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:54:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 06:54:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:54:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:54:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:54:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
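The autoscaler lines are internally consistent with a simple rule: a pool's raw PG target is its share of cluster capacity times its bias times a cluster-wide PG budget, which works out to exactly 300 in every row above (for '.mgr', 7.185749983720779e-06 × 1.0 × 300 = 0.0021557249951162337; for 'cephfs.cephfs.meta', 2.6988014389607423e-06 × 4.0 × 300 = 0.0032385617267528906), consistent with the default mon_target_pg_per_osd=100 across this cluster's three OSDs. A sketch of that arithmetic and the power-of-two quantization:

# Reconstructing the pg_autoscaler numbers above. The budget of 300 PGs
# (mon_target_pg_per_osd=100 * 3 OSDs) is inferred from the log, not read
# from the cluster.
def pg_target(capacity_ratio: float, bias: float, pg_budget: int = 300) -> float:
    return capacity_ratio * bias * pg_budget

def next_pow2(x: float) -> int:
    """Round up to the nearest power of two, with a floor of 1."""
    n = 1
    while n < x:
        n *= 2
    return n

print(pg_target(7.185749983720779e-06, 1.0))             # 0.0021557249951162337
print(next_pow2(pg_target(7.185749983720779e-06, 1.0)))  # 1, as logged for '.mgr'

# Every row reports "quantized to (current)" because the module leaves
# pg_num alone unless the ideal value differs from the current one by more
# than the autoscale threshold (default 3.0), so pools with a target near
# 0.0 simply stay at their existing 16 or 32 PGs.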
Feb 25 06:54:42 np0005629333 systemd[1]: session-36.scope: Deactivated successfully.
Feb 25 06:54:42 np0005629333 systemd[1]: session-36.scope: Consumed 16.335s CPU time.
Feb 25 06:54:42 np0005629333 systemd-logind[811]: Session 36 logged out. Waiting for processes to exit.
Feb 25 06:54:42 np0005629333 systemd-logind[811]: Removed session 36.
Feb 25 06:54:42 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 4.a scrub starts
Feb 25 06:54:42 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 4.a scrub ok
Feb 25 06:54:42 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Feb 25 06:54:42 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Feb 25 06:54:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:54:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:43 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Feb 25 06:54:43 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Feb 25 06:54:43 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Feb 25 06:54:43 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Feb 25 06:54:44 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 3.e scrub starts
Feb 25 06:54:44 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 3.e scrub ok
Feb 25 06:54:44 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Feb 25 06:54:44 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Feb 25 06:54:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:46 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Feb 25 06:54:46 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Feb 25 06:54:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:46 np0005629333 systemd-logind[811]: New session 37 of user zuul.
Feb 25 06:54:46 np0005629333 systemd[1]: Started Session 37 of User zuul.
Feb 25 06:54:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:54:47 np0005629333 python3.9[111634]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:54:48 np0005629333 python3.9[111788]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 25 06:54:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:49 np0005629333 python3.9[111981]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:54:50 np0005629333 systemd[1]: session-37.scope: Deactivated successfully.
Feb 25 06:54:50 np0005629333 systemd[1]: session-37.scope: Consumed 2.034s CPU time.
Feb 25 06:54:50 np0005629333 systemd-logind[811]: Session 37 logged out. Waiting for processes to exit.
Feb 25 06:54:50 np0005629333 systemd-logind[811]: Removed session 37.
Feb 25 06:54:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:51 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Feb 25 06:54:51 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Feb 25 06:54:52 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Feb 25 06:54:52 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Feb 25 06:54:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:54:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:53 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Feb 25 06:54:53 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Feb 25 06:54:53 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Feb 25 06:54:53 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Feb 25 06:54:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:55 np0005629333 systemd-logind[811]: New session 38 of user zuul.
Feb 25 06:54:55 np0005629333 systemd[1]: Started Session 38 of User zuul.
Feb 25 06:54:56 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Feb 25 06:54:56 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Feb 25 06:54:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:57 np0005629333 python3.9[112160]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:54:57 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 7.a scrub starts
Feb 25 06:54:57 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 7.a scrub ok
Feb 25 06:54:57 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 10.f scrub starts
Feb 25 06:54:57 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 10.f scrub ok
Feb 25 06:54:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:54:57 np0005629333 python3.9[112314]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:54:58 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Feb 25 06:54:58 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Feb 25 06:54:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:54:58 np0005629333 python3.9[112471]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 25 06:54:59 np0005629333 python3.9[112556]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 25 06:55:00 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Feb 25 06:55:00 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Feb 25 06:55:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:55:01 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Feb 25 06:55:01 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Feb 25 06:55:01 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Feb 25 06:55:01 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Feb 25 06:55:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:55:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:55:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:55:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:55:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:55:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:55:01 np0005629333 python3.9[112710]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 25 06:55:02 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Feb 25 06:55:02 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Feb 25 06:55:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:55:02 np0005629333 python3.9[112906]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:55:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:55:03 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 2.f scrub starts
Feb 25 06:55:03 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 2.f scrub ok
Feb 25 06:55:03 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Feb 25 06:55:06 np0005629333 python3.9[113536]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:55:06 np0005629333 rsyslogd[1020]: imjournal: 23 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Feb 25 06:55:06 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Feb 25 06:55:06 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Feb 25 06:55:06 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Feb 25 06:55:06 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Feb 25 06:55:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:55:07 np0005629333 python3.9[113689]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:55:07 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 4.d scrub starts
Feb 25 06:55:07 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 4.d scrub ok
Feb 25 06:55:07 np0005629333 python3.9[113842]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:55:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:55:08 np0005629333 python3.9[113995]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:55:08 np0005629333 python3.9[114148]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
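These four ini_file tasks converge /etc/containers/containers.conf on the EDPM defaults: pids_limit=4096 under [containers], events_logger="journald" and runtime="crun" under [engine], and network_backend="netavark" under [network]; the journald events logger is what routes the podman container create/start/died entries seen throughout this log into the journal. A sketch producing the same file state, assuming write access (containers.conf is TOML, but for these bare key=value options an INI-style writer mirrors what ini_file does):

import configparser

conf = configparser.ConfigParser()
conf.read("/etc/containers/containers.conf")

settings = {
    "containers": {"pids_limit": "4096"},
    "engine": {"events_logger": '"journald"', "runtime": '"crun"'},
    "network": {"network_backend": '"netavark"'},
}
for section, options in settings.items():
    if not conf.has_section(section):
        conf.add_section(section)
    for key, value in options.items():
        conf.set(section, key, value)

with open("/etc/containers/containers.conf", "w") as f:
    conf.write(f)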
Feb 25 06:55:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:55:09 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 4.f scrub starts
Feb 25 06:55:09 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 4.f scrub ok
Feb 25 06:55:09 np0005629333 python3.9[114301]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 25 06:55:10 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Feb 25 06:55:10 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Feb 25 06:55:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:55:11 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Feb 25 06:55:11 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Feb 25 06:55:11 np0005629333 python3.9[114455]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:55:12 np0005629333 python3.9[114610]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:55:12 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 5.c scrub starts
Feb 25 06:55:12 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 5.c scrub ok
Feb 25 06:55:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:55:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:55:13 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Feb 25 06:55:13 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Feb 25 06:55:13 np0005629333 python3.9[114763]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:55:13 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Feb 25 06:55:13 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Feb 25 06:55:13 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Feb 25 06:55:13 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Feb 25 06:55:13 np0005629333 python3.9[114916]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
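systemctl is-system-running prints the overall manager state (running, degraded, starting, maintenance, ...) and exits non-zero for anything other than running, which lets the playbook gate later steps on whole-system health rather than on any single unit. The same probe from Python:

import subprocess

# Exit code 0 means the state is "running"; the printed word still
# distinguishes degraded/starting/etc. when the code is non-zero.
result = subprocess.run(
    ["systemctl", "is-system-running"],
    capture_output=True, text=True,
)
state = result.stdout.strip()
print(state, "(healthy)" if result.returncode == 0 else "(not fully up)")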
Feb 25 06:55:14 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 2.a scrub starts
Feb 25 06:55:14 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 2.a scrub ok
Feb 25 06:55:14 np0005629333 python3.9[115070]: ansible-service_facts Invoked
Feb 25 06:55:14 np0005629333 network[115087]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 25 06:55:14 np0005629333 network[115088]: 'network-scripts' will be removed from distribution in near future.
Feb 25 06:55:14 np0005629333 network[115089]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 25 06:55:14 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Feb 25 06:55:14 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Feb 25 06:55:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:55:16 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Feb 25 06:55:16 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Feb 25 06:55:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:55:17 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Feb 25 06:55:17 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Feb 25 06:55:17 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Feb 25 06:55:17 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Feb 25 06:55:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:55:17 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Feb 25 06:55:17 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Feb 25 06:55:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:55:19 np0005629333 python3.9[115544]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 25 06:55:20 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Feb 25 06:55:20 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Feb 25 06:55:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:55:21 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Feb 25 06:55:21 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Feb 25 06:55:21 np0005629333 python3.9[115698]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Feb 25 06:55:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:55:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:55:23 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Feb 25 06:55:23 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Feb 25 06:55:23 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Feb 25 06:55:23 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Feb 25 06:55:24 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Feb 25 06:55:24 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Feb 25 06:55:24 np0005629333 python3.9[115851]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:55:24 np0005629333 python3.9[115930]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:55:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:55:25 np0005629333 python3.9[116083]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:55:25 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Feb 25 06:55:25 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Feb 25 06:55:25 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Feb 25 06:55:25 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Feb 25 06:55:25 np0005629333 python3.9[116162]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:55:25 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Feb 25 06:55:25 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Feb 25 06:55:26 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Feb 25 06:55:26 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Feb 25 06:55:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:55:27 np0005629333 python3.9[116315]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
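The lineinfile task pins PEERNTP=no in /etc/sysconfig/network (taking a backup first), which keeps the legacy network scripts from feeding DHCP-supplied NTP servers to the time daemon now that chrony is configured explicitly. The module's contract here: replace the last line matching ^PEERNTP=, or append the line if nothing matches. A minimal equivalent as a sketch:

import re
from pathlib import Path

def pin_line(path: Path, regexp: str, line: str) -> None:
    """Idempotent lineinfile-style edit: replace the last matching
    line, or append if there is no match (create the file if absent)."""
    pattern = re.compile(regexp)
    lines = path.read_text().splitlines() if path.exists() else []
    matches = [i for i, text in enumerate(lines) if pattern.search(text)]
    if matches:
        lines[matches[-1]] = line
    else:
        lines.append(line)
    path.write_text("\n".join(lines) + "\n")

pin_line(Path("/etc/sysconfig/network"), r"^PEERNTP=", "PEERNTP=no")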
Feb 25 06:55:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:55:27 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Feb 25 06:55:27 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Feb 25 06:55:28 np0005629333 python3.9[116468]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 25 06:55:28 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 5.f scrub starts
Feb 25 06:55:28 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 5.f scrub ok
Feb 25 06:55:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:55:29 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Feb 25 06:55:29 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Feb 25 06:55:29 np0005629333 python3.9[116553]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 06:55:30 np0005629333 systemd-logind[811]: Session 38 logged out. Waiting for processes to exit.
Feb 25 06:55:30 np0005629333 systemd[1]: session-38.scope: Deactivated successfully.
Feb 25 06:55:30 np0005629333 systemd[1]: session-38.scope: Consumed 21.697s CPU time.
Feb 25 06:55:30 np0005629333 systemd-logind[811]: Removed session 38.
Feb 25 06:55:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_11:55:30
Feb 25 06:55:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 06:55:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 06:55:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['vms', 'volumes', '.mgr', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', '.rgw.root']
Feb 25 06:55:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 06:55:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:55:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:55:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:55:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:55:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:55:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:55:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:55:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 06:55:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 06:55:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 06:55:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 06:55:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 06:55:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 06:55:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 06:55:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 06:55:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 06:55:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 06:55:31 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 2.d scrub starts
Feb 25 06:55:31 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 2.d scrub ok
Feb 25 06:55:32 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Feb 25 06:55:32 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Feb 25 06:55:32 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Feb 25 06:55:32 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Feb 25 06:55:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:55:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:55:33 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Feb 25 06:55:33 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Feb 25 06:55:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:55:35 np0005629333 systemd-logind[811]: New session 39 of user zuul.
Feb 25 06:55:35 np0005629333 systemd[1]: Started Session 39 of User zuul.
Feb 25 06:55:35 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Feb 25 06:55:35 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Feb 25 06:55:36 np0005629333 python3.9[116736]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:55:36 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Feb 25 06:55:36 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Feb 25 06:55:36 np0005629333 python3.9[116891]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:55:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:55:37 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 4.e scrub starts
Feb 25 06:55:37 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 4.e scrub ok
Feb 25 06:55:37 np0005629333 python3.9[116970]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:55:37 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Feb 25 06:55:37 np0005629333 systemd[1]: session-39.scope: Deactivated successfully.
Feb 25 06:55:37 np0005629333 systemd[1]: session-39.scope: Consumed 1.579s CPU time.
Feb 25 06:55:37 np0005629333 systemd-logind[811]: Session 39 logged out. Waiting for processes to exit.
Feb 25 06:55:37 np0005629333 systemd-logind[811]: Removed session 39.
Feb 25 06:55:37 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Feb 25 06:55:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:55:37 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 10.e scrub starts
Feb 25 06:55:37 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 10.e scrub ok
Feb 25 06:55:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:55:38 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:55:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 06:55:38 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:55:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 06:55:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:55:38 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:55:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 06:55:38 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 06:55:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 06:55:38 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 06:55:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:55:38 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
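
This burst of mon_commands is the mgr's cephadm module refreshing per-host deployment state: a minimal conf, the client.admin and client.bootstrap-osd keyrings, the osd_remove_queue config-key, and the list of destroyed OSDs. The same queries can be issued by hand; a hedged sketch via the ceph CLI, using the prefixes from the audit lines:

    import json
    import subprocess

    def ceph(*args: str) -> str:
        # Thin wrapper over the ceph CLI; same prefixes as the audit lines.
        return subprocess.run(("ceph",) + args, check=True,
                              capture_output=True, text=True).stdout

    minimal_conf = ceph("config", "generate-minimal-conf")
    admin_keyring = ceph("auth", "get", "client.admin")
    destroyed = json.loads(ceph("osd", "tree", "destroyed", "--format", "json"))
    print(minimal_conf.strip())
    print("destroyed osds:", [n["id"] for n in destroyed.get("nodes", [])])
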
Feb 25 06:55:39 np0005629333 podman[117139]: 2026-02-25 11:55:39.245467575 +0000 UTC m=+0.023659647 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:55:39 np0005629333 podman[117139]: 2026-02-25 11:55:39.511466177 +0000 UTC m=+0.289658249 container create 54659a7869169f9fc6283220510038d5413a43225d15e564debbf9a64a364ae5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khorana, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 06:55:39 np0005629333 systemd[1]: Started libpod-conmon-54659a7869169f9fc6283220510038d5413a43225d15e564debbf9a64a364ae5.scope.
Feb 25 06:55:39 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:55:39 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:55:39 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 06:55:39 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:55:39 np0005629333 podman[117139]: 2026-02-25 11:55:39.725817395 +0000 UTC m=+0.504009457 container init 54659a7869169f9fc6283220510038d5413a43225d15e564debbf9a64a364ae5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:55:39 np0005629333 podman[117139]: 2026-02-25 11:55:39.734508777 +0000 UTC m=+0.512700839 container start 54659a7869169f9fc6283220510038d5413a43225d15e564debbf9a64a364ae5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khorana, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:55:39 np0005629333 focused_khorana[117155]: 167 167
Feb 25 06:55:39 np0005629333 systemd[1]: libpod-54659a7869169f9fc6283220510038d5413a43225d15e564debbf9a64a364ae5.scope: Deactivated successfully.
Feb 25 06:55:39 np0005629333 podman[117139]: 2026-02-25 11:55:39.769125807 +0000 UTC m=+0.547317859 container attach 54659a7869169f9fc6283220510038d5413a43225d15e564debbf9a64a364ae5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khorana, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 06:55:39 np0005629333 podman[117139]: 2026-02-25 11:55:39.7710367 +0000 UTC m=+0.549228792 container died 54659a7869169f9fc6283220510038d5413a43225d15e564debbf9a64a364ae5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 06:55:39 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Feb 25 06:55:39 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Feb 25 06:55:39 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c3af8926d4893d040cdc6635d9c2a1fd252af2a1fcfa01411b85f6e55f52d55a-merged.mount: Deactivated successfully.
Feb 25 06:55:40 np0005629333 podman[117139]: 2026-02-25 11:55:40.015049412 +0000 UTC m=+0.793241474 container remove 54659a7869169f9fc6283220510038d5413a43225d15e564debbf9a64a364ae5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 06:55:40 np0005629333 systemd[1]: libpod-conmon-54659a7869169f9fc6283220510038d5413a43225d15e564debbf9a64a364ae5.scope: Deactivated successfully.
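
Each of these sub-second podman lifecycles (create, init, start, attach, died, remove, all against the same image digest) is cephadm probing the host with a throwaway container; the only output, "167 167", matches the fixed ceph uid/gid shipped in the image. A hedged reproduction; the stat command is an assumed stand-in for the actual probe:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # --rm mirrors the immediate "container remove" seen above; stat of a
    # ceph-owned path is an assumption, but "167 167" is what the attached
    # output shows for the ceph uid/gid.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout
    print(out.strip())  # expected: "167 167"
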
Feb 25 06:55:40 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Feb 25 06:55:40 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Feb 25 06:55:40 np0005629333 podman[117179]: 2026-02-25 11:55:40.106117899 +0000 UTC m=+0.017593679 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:55:40 np0005629333 podman[117179]: 2026-02-25 11:55:40.250881776 +0000 UTC m=+0.162357566 container create 0f5da719f46b5ba1ede1276f81e691dfab543c798c174e5bc8001225a9f6b80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_fermat, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:55:40 np0005629333 systemd[1]: Started libpod-conmon-0f5da719f46b5ba1ede1276f81e691dfab543c798c174e5bc8001225a9f6b80b.scope.
Feb 25 06:55:40 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:55:40 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cefa44c7177af2f095ee10aaaa7e50e0ae0e661839f394b560bcce90a4c9243/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:55:40 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cefa44c7177af2f095ee10aaaa7e50e0ae0e661839f394b560bcce90a4c9243/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:55:40 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cefa44c7177af2f095ee10aaaa7e50e0ae0e661839f394b560bcce90a4c9243/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:55:40 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cefa44c7177af2f095ee10aaaa7e50e0ae0e661839f394b560bcce90a4c9243/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:55:40 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cefa44c7177af2f095ee10aaaa7e50e0ae0e661839f394b560bcce90a4c9243/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
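
The repeated xfs warnings fire once per bind mount entering the new container's mount namespace: this filesystem uses 32-bit inode timestamps, valid only up to the 0x7fffffff printed in each line. Decoding that limit:

    from datetime import datetime, timezone

    limit = 0x7FFFFFFF  # the cap printed in the xfs remount warnings
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00 -- the 32-bit time_t rollover
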
Feb 25 06:55:40 np0005629333 podman[117179]: 2026-02-25 11:55:40.610106605 +0000 UTC m=+0.521582385 container init 0f5da719f46b5ba1ede1276f81e691dfab543c798c174e5bc8001225a9f6b80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:55:40 np0005629333 podman[117179]: 2026-02-25 11:55:40.625294376 +0000 UTC m=+0.536770146 container start 0f5da719f46b5ba1ede1276f81e691dfab543c798c174e5bc8001225a9f6b80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_fermat, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:55:40 np0005629333 podman[117179]: 2026-02-25 11:55:40.64022266 +0000 UTC m=+0.551698440 container attach 0f5da719f46b5ba1ede1276f81e691dfab543c798c174e5bc8001225a9f6b80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_fermat, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:55:40 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Feb 25 06:55:40 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Feb 25 06:55:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:55:41 np0005629333 pedantic_fermat[117196]: --> passed data devices: 0 physical, 3 LVM
Feb 25 06:55:41 np0005629333 pedantic_fermat[117196]: --> All data devices are unavailable
Feb 25 06:55:41 np0005629333 systemd[1]: libpod-0f5da719f46b5ba1ede1276f81e691dfab543c798c174e5bc8001225a9f6b80b.scope: Deactivated successfully.
Feb 25 06:55:41 np0005629333 podman[117179]: 2026-02-25 11:55:41.097961243 +0000 UTC m=+1.009437003 container died 0f5da719f46b5ba1ede1276f81e691dfab543c798c174e5bc8001225a9f6b80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_fermat, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:55:41 np0005629333 systemd[1]: var-lib-containers-storage-overlay-2cefa44c7177af2f095ee10aaaa7e50e0ae0e661839f394b560bcce90a4c9243-merged.mount: Deactivated successfully.
Feb 25 06:55:41 np0005629333 podman[117179]: 2026-02-25 11:55:41.21750207 +0000 UTC m=+1.128977820 container remove 0f5da719f46b5ba1ede1276f81e691dfab543c798c174e5bc8001225a9f6b80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_fermat, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 06:55:41 np0005629333 systemd[1]: libpod-conmon-0f5da719f46b5ba1ede1276f81e691dfab543c798c174e5bc8001225a9f6b80b.scope: Deactivated successfully.
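
pedantic_fermat's two output lines have the shape of a ceph-volume batch dry run: three LVM data devices are seen but rejected ("All data devices are unavailable") because, per the lvm list dump later in the log, each LV already carries an OSD. A sketch of the presumed equivalent invocation; the exact flags cephadm used are not in the log:

    import subprocess

    # Dry-run report only; already-prepared LVs come back as unavailable
    # instead of being re-consumed. LV paths are the three from this host.
    subprocess.run(
        ["ceph-volume", "lvm", "batch", "--report",
         "/dev/ceph_vg0/ceph_lv0",
         "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2"],
        check=False)
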
Feb 25 06:55:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 06:55:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:55:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 06:55:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:55:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:55:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:55:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:55:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:55:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:55:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:55:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:55:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:55:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6988014389607423e-06 of space, bias 4.0, pg target 0.0032385617267528906 quantized to 16 (current 16)
Feb 25 06:55:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:55:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:55:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:55:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 06:55:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:55:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 06:55:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:55:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:55:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:55:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
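
Each pg_autoscaler pair above encodes one computation: pg target = capacity ratio x bias x (mon_target_pg_per_osd x number of OSDs), which with the default of 100 PGs per OSD and this cluster's 3 OSDs is ratio x bias x 300, before quantization. The logged values reproduce exactly:

    TARGET_PGS = 100 * 3  # mon_target_pg_per_osd default (100) x 3 OSDs

    for pool, ratio, bias, logged in [
        (".mgr",               7.185749983720779e-06,  1.0, 0.0021557249951162337),
        ("cephfs.cephfs.meta", 2.6988014389607423e-06, 4.0, 0.0032385617267528906),
        (".rgw.root",          3.8154424692322717e-07, 1.0, 0.00011446327407696816),
    ]:
        target = ratio * bias * TARGET_PGS
        assert abs(target - logged) < 1e-12, pool
        print(f"{pool}: pg target {target:.12g} matches the log")
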
Feb 25 06:55:41 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Feb 25 06:55:41 np0005629333 podman[117294]: 2026-02-25 11:55:41.684816318 +0000 UTC m=+0.060316745 container create cbabd84ed2692e764e8506a91106db364ba6bf7bd05397d9f30e40c110817aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wozniak, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:55:41 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Feb 25 06:55:41 np0005629333 systemd[1]: Started libpod-conmon-cbabd84ed2692e764e8506a91106db364ba6bf7bd05397d9f30e40c110817aa5.scope.
Feb 25 06:55:41 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:55:41 np0005629333 podman[117294]: 2026-02-25 11:55:41.659923768 +0000 UTC m=+0.035424265 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:55:41 np0005629333 podman[117294]: 2026-02-25 11:55:41.770044713 +0000 UTC m=+0.145545130 container init cbabd84ed2692e764e8506a91106db364ba6bf7bd05397d9f30e40c110817aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wozniak, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:55:41 np0005629333 podman[117294]: 2026-02-25 11:55:41.780867764 +0000 UTC m=+0.156368221 container start cbabd84ed2692e764e8506a91106db364ba6bf7bd05397d9f30e40c110817aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wozniak, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:55:41 np0005629333 podman[117294]: 2026-02-25 11:55:41.784819843 +0000 UTC m=+0.160320250 container attach cbabd84ed2692e764e8506a91106db364ba6bf7bd05397d9f30e40c110817aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wozniak, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 06:55:41 np0005629333 naughty_wozniak[117312]: 167 167
Feb 25 06:55:41 np0005629333 systemd[1]: libpod-cbabd84ed2692e764e8506a91106db364ba6bf7bd05397d9f30e40c110817aa5.scope: Deactivated successfully.
Feb 25 06:55:41 np0005629333 podman[117294]: 2026-02-25 11:55:41.787297022 +0000 UTC m=+0.162797479 container died cbabd84ed2692e764e8506a91106db364ba6bf7bd05397d9f30e40c110817aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 06:55:41 np0005629333 systemd[1]: var-lib-containers-storage-overlay-bcf5927e64ba1f53f6c4155b5a0a7408e76261fe94753eb5de729de55c19bc81-merged.mount: Deactivated successfully.
Feb 25 06:55:41 np0005629333 podman[117294]: 2026-02-25 11:55:41.836507038 +0000 UTC m=+0.212007455 container remove cbabd84ed2692e764e8506a91106db364ba6bf7bd05397d9f30e40c110817aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 06:55:41 np0005629333 systemd[1]: libpod-conmon-cbabd84ed2692e764e8506a91106db364ba6bf7bd05397d9f30e40c110817aa5.scope: Deactivated successfully.
Feb 25 06:55:41 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 8.f scrub starts
Feb 25 06:55:41 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 8.f scrub ok
Feb 25 06:55:42 np0005629333 podman[117335]: 2026-02-25 11:55:42.011925836 +0000 UTC m=+0.053399753 container create 3bba211ca3451430e3b59ae62693896778cb95c28793b3bcab9b7f96af194edc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_meitner, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:55:42 np0005629333 systemd[1]: Started libpod-conmon-3bba211ca3451430e3b59ae62693896778cb95c28793b3bcab9b7f96af194edc.scope.
Feb 25 06:55:42 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:55:42 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe577d7e6dbbdaafd5520d2c9c35d6f459a5adf4c16c91732973dcd24e9194b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:55:42 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe577d7e6dbbdaafd5520d2c9c35d6f459a5adf4c16c91732973dcd24e9194b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:55:42 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe577d7e6dbbdaafd5520d2c9c35d6f459a5adf4c16c91732973dcd24e9194b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:55:42 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe577d7e6dbbdaafd5520d2c9c35d6f459a5adf4c16c91732973dcd24e9194b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:55:42 np0005629333 podman[117335]: 2026-02-25 11:55:41.991634133 +0000 UTC m=+0.033108130 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:55:42 np0005629333 podman[117335]: 2026-02-25 11:55:42.094076885 +0000 UTC m=+0.135550832 container init 3bba211ca3451430e3b59ae62693896778cb95c28793b3bcab9b7f96af194edc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 06:55:42 np0005629333 podman[117335]: 2026-02-25 11:55:42.099357362 +0000 UTC m=+0.140831279 container start 3bba211ca3451430e3b59ae62693896778cb95c28793b3bcab9b7f96af194edc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:55:42 np0005629333 podman[117335]: 2026-02-25 11:55:42.102302024 +0000 UTC m=+0.143775961 container attach 3bba211ca3451430e3b59ae62693896778cb95c28793b3bcab9b7f96af194edc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_meitner, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]: {
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:    "0": [
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:        {
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "devices": [
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "/dev/loop3"
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            ],
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "lv_name": "ceph_lv0",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "lv_size": "21470642176",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "name": "ceph_lv0",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "tags": {
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.cluster_name": "ceph",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.crush_device_class": "",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.encrypted": "0",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.objectstore": "bluestore",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.osd_id": "0",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.type": "block",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.vdo": "0",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.with_tpm": "0"
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            },
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "type": "block",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "vg_name": "ceph_vg0"
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:        }
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:    ],
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:    "1": [
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:        {
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "devices": [
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "/dev/loop4"
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            ],
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "lv_name": "ceph_lv1",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "lv_size": "21470642176",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "name": "ceph_lv1",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "tags": {
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.cluster_name": "ceph",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.crush_device_class": "",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.encrypted": "0",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.objectstore": "bluestore",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.osd_id": "1",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.type": "block",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.vdo": "0",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.with_tpm": "0"
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            },
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "type": "block",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "vg_name": "ceph_vg1"
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:        }
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:    ],
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:    "2": [
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:        {
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "devices": [
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "/dev/loop5"
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            ],
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "lv_name": "ceph_lv2",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "lv_size": "21470642176",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "name": "ceph_lv2",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "tags": {
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.cluster_name": "ceph",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.crush_device_class": "",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.encrypted": "0",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.objectstore": "bluestore",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.osd_id": "2",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.type": "block",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.vdo": "0",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:                "ceph.with_tpm": "0"
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            },
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "type": "block",
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:            "vg_name": "ceph_vg2"
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:        }
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]:    ]
Feb 25 06:55:42 np0005629333 friendly_meitner[117352]: }
Feb 25 06:55:42 np0005629333 systemd[1]: libpod-3bba211ca3451430e3b59ae62693896778cb95c28793b3bcab9b7f96af194edc.scope: Deactivated successfully.
Feb 25 06:55:42 np0005629333 podman[117335]: 2026-02-25 11:55:42.43985732 +0000 UTC m=+0.481331247 container died 3bba211ca3451430e3b59ae62693896778cb95c28793b3bcab9b7f96af194edc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:55:42 np0005629333 systemd[1]: var-lib-containers-storage-overlay-fe577d7e6dbbdaafd5520d2c9c35d6f459a5adf4c16c91732973dcd24e9194b9-merged.mount: Deactivated successfully.
Feb 25 06:55:42 np0005629333 podman[117335]: 2026-02-25 11:55:42.485503547 +0000 UTC m=+0.526977484 container remove 3bba211ca3451430e3b59ae62693896778cb95c28793b3bcab9b7f96af194edc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_meitner, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 06:55:42 np0005629333 systemd[1]: libpod-conmon-3bba211ca3451430e3b59ae62693896778cb95c28793b3bcab9b7f96af194edc.scope: Deactivated successfully.
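
The JSON block emitted by friendly_meitner has the shape of ceph-volume lvm list --format json: a map of OSD id to LV records, with the authoritative metadata duplicated between lv_tags and tags. Assuming that command, a sketch reducing the dump to an osd-to-device table:

    import json
    import subprocess

    raw = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True).stdout

    for osd_id, lvs in sorted(json.loads(raw).items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"fsid={tags['ceph.osd_fsid']} "
                  f"objectstore={tags['ceph.objectstore']}")
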
Feb 25 06:55:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:55:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:55:42 np0005629333 podman[117436]: 2026-02-25 11:55:42.904933286 +0000 UTC m=+0.054537944 container create 55291973c2a83541f3f0de86d20e906cabadffe6cad4681a0a02ad5d64477721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_keldysh, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:55:42 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Feb 25 06:55:42 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Feb 25 06:55:42 np0005629333 systemd[1]: Started libpod-conmon-55291973c2a83541f3f0de86d20e906cabadffe6cad4681a0a02ad5d64477721.scope.
Feb 25 06:55:42 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:55:42 np0005629333 podman[117436]: 2026-02-25 11:55:42.882616707 +0000 UTC m=+0.032221425 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:55:42 np0005629333 podman[117436]: 2026-02-25 11:55:42.991681453 +0000 UTC m=+0.141286171 container init 55291973c2a83541f3f0de86d20e906cabadffe6cad4681a0a02ad5d64477721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_keldysh, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:55:43 np0005629333 podman[117436]: 2026-02-25 11:55:42.999942183 +0000 UTC m=+0.149546861 container start 55291973c2a83541f3f0de86d20e906cabadffe6cad4681a0a02ad5d64477721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_keldysh, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Feb 25 06:55:43 np0005629333 ecstatic_keldysh[117452]: 167 167
Feb 25 06:55:43 np0005629333 systemd[1]: libpod-55291973c2a83541f3f0de86d20e906cabadffe6cad4681a0a02ad5d64477721.scope: Deactivated successfully.
Feb 25 06:55:43 np0005629333 conmon[117452]: conmon 55291973c2a83541f3f0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-55291973c2a83541f3f0de86d20e906cabadffe6cad4681a0a02ad5d64477721.scope/container/memory.events
Feb 25 06:55:43 np0005629333 podman[117436]: 2026-02-25 11:55:43.006368311 +0000 UTC m=+0.155973009 container attach 55291973c2a83541f3f0de86d20e906cabadffe6cad4681a0a02ad5d64477721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 06:55:43 np0005629333 podman[117436]: 2026-02-25 11:55:43.007645076 +0000 UTC m=+0.157249744 container died 55291973c2a83541f3f0de86d20e906cabadffe6cad4681a0a02ad5d64477721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 06:55:43 np0005629333 systemd[1]: var-lib-containers-storage-overlay-db79269d1febe735811f4aa5187da4e1a9a1bf07a835f08e9cd18f3289014f2b-merged.mount: Deactivated successfully.
Feb 25 06:55:43 np0005629333 podman[117436]: 2026-02-25 11:55:43.049434526 +0000 UTC m=+0.199039164 container remove 55291973c2a83541f3f0de86d20e906cabadffe6cad4681a0a02ad5d64477721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_keldysh, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 06:55:43 np0005629333 systemd[1]: libpod-conmon-55291973c2a83541f3f0de86d20e906cabadffe6cad4681a0a02ad5d64477721.scope: Deactivated successfully.
Feb 25 06:55:43 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Feb 25 06:55:43 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Feb 25 06:55:43 np0005629333 podman[117475]: 2026-02-25 11:55:43.210169186 +0000 UTC m=+0.057641690 container create 0a1ec27b8c1dc23c2f3658f89674bc3ad1f9f976c256fb1893a31c3c71271bd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_dirac, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:55:43 np0005629333 systemd[1]: Started libpod-conmon-0a1ec27b8c1dc23c2f3658f89674bc3ad1f9f976c256fb1893a31c3c71271bd7.scope.
Feb 25 06:55:43 np0005629333 podman[117475]: 2026-02-25 11:55:43.18650661 +0000 UTC m=+0.033979174 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:55:43 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:55:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05f8768d598fbf760a1db8c1037d1c80c3709ec95fa772d5c6d205d2c74b9a22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:55:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05f8768d598fbf760a1db8c1037d1c80c3709ec95fa772d5c6d205d2c74b9a22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:55:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05f8768d598fbf760a1db8c1037d1c80c3709ec95fa772d5c6d205d2c74b9a22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:55:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05f8768d598fbf760a1db8c1037d1c80c3709ec95fa772d5c6d205d2c74b9a22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:55:43 np0005629333 podman[117475]: 2026-02-25 11:55:43.320796256 +0000 UTC m=+0.168268760 container init 0a1ec27b8c1dc23c2f3658f89674bc3ad1f9f976c256fb1893a31c3c71271bd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_dirac, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:55:43 np0005629333 podman[117475]: 2026-02-25 11:55:43.33139352 +0000 UTC m=+0.178866024 container start 0a1ec27b8c1dc23c2f3658f89674bc3ad1f9f976c256fb1893a31c3c71271bd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_dirac, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:55:43 np0005629333 podman[117475]: 2026-02-25 11:55:43.33499846 +0000 UTC m=+0.182471004 container attach 0a1ec27b8c1dc23c2f3658f89674bc3ad1f9f976c256fb1893a31c3c71271bd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 06:55:43 np0005629333 systemd-logind[811]: New session 40 of user zuul.
Feb 25 06:55:43 np0005629333 systemd[1]: Started Session 40 of User zuul.
Feb 25 06:55:43 np0005629333 lvm[117696]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 06:55:43 np0005629333 lvm[117697]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 06:55:43 np0005629333 lvm[117696]: VG ceph_vg1 finished
Feb 25 06:55:43 np0005629333 lvm[117697]: VG ceph_vg0 finished
Feb 25 06:55:43 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 10.d scrub starts
Feb 25 06:55:43 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 10.d scrub ok
Feb 25 06:55:43 np0005629333 lvm[117708]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 06:55:43 np0005629333 lvm[117708]: VG ceph_vg2 finished
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:55:44.004146) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020544004233, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7159, "num_deletes": 251, "total_data_size": 9621796, "memory_usage": 9778672, "flush_reason": "Manual Compaction"}
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Feb 25 06:55:44 np0005629333 jolly_dirac[117493]: {}
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020544056527, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7590414, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 7302, "table_properties": {"data_size": 7564215, "index_size": 17060, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8069, "raw_key_size": 74336, "raw_average_key_size": 23, "raw_value_size": 7502493, "raw_average_value_size": 2333, "num_data_blocks": 751, "num_entries": 3215, "num_filter_entries": 3215, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020152, "oldest_key_time": 1772020152, "file_creation_time": 1772020544, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 52449 microseconds, and 23336 cpu microseconds.
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:55:44.056599) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7590414 bytes OK
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:55:44.056650) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:55:44.058479) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:55:44.058497) EVENT_LOG_v1 {"time_micros": 1772020544058492, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:55:44.058548) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9590728, prev total WAL file size 9590728, number of live WAL files 2.
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:55:44.061132) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7412KB) 13(58KB) 8(1944B)]
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020544061236, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7652314, "oldest_snapshot_seqno": -1}
Feb 25 06:55:44 np0005629333 systemd[1]: libpod-0a1ec27b8c1dc23c2f3658f89674bc3ad1f9f976c256fb1893a31c3c71271bd7.scope: Deactivated successfully.
Feb 25 06:55:44 np0005629333 systemd[1]: libpod-0a1ec27b8c1dc23c2f3658f89674bc3ad1f9f976c256fb1893a31c3c71271bd7.scope: Consumed 1.069s CPU time.
Feb 25 06:55:44 np0005629333 podman[117475]: 2026-02-25 11:55:44.089524539 +0000 UTC m=+0.936997003 container died 0a1ec27b8c1dc23c2f3658f89674bc3ad1f9f976c256fb1893a31c3c71271bd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_dirac, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3041 keys, 7605220 bytes, temperature: kUnknown
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020544110638, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7605220, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7579412, "index_size": 17122, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7621, "raw_key_size": 72776, "raw_average_key_size": 23, "raw_value_size": 7519011, "raw_average_value_size": 2472, "num_data_blocks": 755, "num_entries": 3041, "num_filter_entries": 3041, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772020544, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:55:44.110912) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7605220 bytes
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:55:44.112201) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.6 rd, 153.6 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.3, 0.0 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3330, records dropped: 289 output_compression: NoCompression
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:55:44.112226) EVENT_LOG_v1 {"time_micros": 1772020544112213, "job": 4, "event": "compaction_finished", "compaction_time_micros": 49507, "compaction_time_cpu_micros": 15261, "output_level": 6, "num_output_files": 1, "total_output_size": 7605220, "num_input_records": 3330, "num_output_records": 3041, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020544113245, "job": 4, "event": "table_file_deletion", "file_number": 19}
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020544113345, "job": 4, "event": "table_file_deletion", "file_number": 13}
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020544113426, "job": 4, "event": "table_file_deletion", "file_number": 8}
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:55:44.060872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 06:55:44 np0005629333 systemd[1]: var-lib-containers-storage-overlay-05f8768d598fbf760a1db8c1037d1c80c3709ec95fa772d5c6d205d2c74b9a22-merged.mount: Deactivated successfully.
Feb 25 06:55:44 np0005629333 podman[117475]: 2026-02-25 11:55:44.164184531 +0000 UTC m=+1.011657025 container remove 0a1ec27b8c1dc23c2f3658f89674bc3ad1f9f976c256fb1893a31c3c71271bd7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_dirac, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:55:44 np0005629333 systemd[1]: libpod-conmon-0a1ec27b8c1dc23c2f3658f89674bc3ad1f9f976c256fb1893a31c3c71271bd7.scope: Deactivated successfully.
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:55:44 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:55:44 np0005629333 python3.9[117727]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:55:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:55:45 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:55:45 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:55:45 np0005629333 python3.9[117926]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:55:46 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 6.f scrub starts
Feb 25 06:55:46 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 6.f scrub ok
Feb 25 06:55:46 np0005629333 python3.9[118102]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:55:46 np0005629333 python3.9[118181]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.3v8hkm98 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:55:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:55:47 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Feb 25 06:55:47 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Feb 25 06:55:47 np0005629333 python3.9[118334]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:55:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:55:47 np0005629333 python3.9[118413]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.hcy4zsso recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:55:48 np0005629333 python3.9[118566]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:55:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:55:49 np0005629333 python3.9[118719]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:55:49 np0005629333 python3.9[118798]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:55:50 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 6.a scrub starts
Feb 25 06:55:50 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 6.a scrub ok
Feb 25 06:55:50 np0005629333 python3.9[118951]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:55:50 np0005629333 python3.9[119030]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:55:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:55:50 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Feb 25 06:55:50 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Feb 25 06:55:51 np0005629333 python3.9[119183]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:55:52 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Feb 25 06:55:52 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Feb 25 06:55:52 np0005629333 python3.9[119336]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:55:52 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Feb 25 06:55:52 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Feb 25 06:55:52 np0005629333 python3.9[119415]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:55:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:55:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:55:53 np0005629333 python3.9[119568]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:55:53 np0005629333 python3.9[119647]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:55:53 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Feb 25 06:55:53 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Feb 25 06:55:54 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Feb 25 06:55:54 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Feb 25 06:55:54 np0005629333 python3.9[119800]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 06:55:54 np0005629333 systemd[1]: Reloading.
Feb 25 06:55:54 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:55:54 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:55:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:55:55 np0005629333 python3.9[119998]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:55:55 np0005629333 python3.9[120077]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:55:56 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Feb 25 06:55:56 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Feb 25 06:55:56 np0005629333 python3.9[120230]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:55:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:55:57 np0005629333 python3.9[120309]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:55:57 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Feb 25 06:55:57 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Feb 25 06:55:57 np0005629333 python3.9[120462]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 06:55:57 np0005629333 systemd[1]: Reloading.
Feb 25 06:55:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:55:57 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:55:57 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:55:57 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Feb 25 06:55:57 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Feb 25 06:55:58 np0005629333 systemd[1]: Starting Create netns directory...
Feb 25 06:55:58 np0005629333 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 25 06:55:58 np0005629333 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 25 06:55:58 np0005629333 systemd[1]: Finished Create netns directory.
Feb 25 06:55:58 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Feb 25 06:55:58 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Feb 25 06:55:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:55:58 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Feb 25 06:55:58 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Feb 25 06:55:59 np0005629333 python3.9[120662]: ansible-ansible.builtin.service_facts Invoked
Feb 25 06:55:59 np0005629333 network[120679]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 25 06:55:59 np0005629333 network[120680]: 'network-scripts' will be removed from distribution in near future.
Feb 25 06:55:59 np0005629333 network[120681]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 25 06:55:59 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Feb 25 06:55:59 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Feb 25 06:56:00 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 9.e scrub starts
Feb 25 06:56:00 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 9.e scrub ok
Feb 25 06:56:00 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Feb 25 06:56:00 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Feb 25 06:56:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:01 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Feb 25 06:56:01 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Feb 25 06:56:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:56:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:56:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:56:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:56:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:56:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:56:02 np0005629333 python3.9[120945]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:56:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:56:02 np0005629333 python3.9[121024]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:56:02 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Feb 25 06:56:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:02 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Feb 25 06:56:03 np0005629333 python3.9[121177]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:56:04 np0005629333 python3.9[121330]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:56:04 np0005629333 python3.9[121409]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:56:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:05 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Feb 25 06:56:05 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Feb 25 06:56:05 np0005629333 python3.9[121562]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb 25 06:56:05 np0005629333 systemd[1]: Starting Time & Date Service...
Feb 25 06:56:05 np0005629333 systemd[1]: Started Time & Date Service.
Feb 25 06:56:06 np0005629333 python3.9[121719]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:56:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:06 np0005629333 python3.9[121872]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:56:07 np0005629333 python3.9[121951]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:56:07 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Feb 25 06:56:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:56:07 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Feb 25 06:56:07 np0005629333 python3.9[122104]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:56:08 np0005629333 python3.9[122183]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.6xsw90o4 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:56:08 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 9.b scrub starts
Feb 25 06:56:08 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 9.b scrub ok
Feb 25 06:56:08 np0005629333 python3.9[122336]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:56:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:09 np0005629333 python3.9[122415]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:56:10 np0005629333 python3.9[122568]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:56:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:11 np0005629333 python3[122722]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 25 06:56:11 np0005629333 python3.9[122875]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:56:12 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 9.f scrub starts
Feb 25 06:56:12 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 9.f scrub ok
Feb 25 06:56:12 np0005629333 python3.9[122954]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:56:12 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Feb 25 06:56:12 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Feb 25 06:56:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:56:13 np0005629333 python3.9[123107]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:56:14 np0005629333 python3.9[123233]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020572.5057108-308-125895569771058/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:56:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:15 np0005629333 python3.9[123386]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:56:15 np0005629333 python3.9[123465]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:56:15 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 9.c scrub starts
Feb 25 06:56:16 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 9.c scrub ok
Feb 25 06:56:16 np0005629333 python3.9[123618]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:56:16 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Feb 25 06:56:16 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Feb 25 06:56:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:17 np0005629333 python3.9[123697]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:56:17 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Feb 25 06:56:17 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Feb 25 06:56:17 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 9.d scrub starts
Feb 25 06:56:17 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 9.d scrub ok
Feb 25 06:56:17 np0005629333 python3.9[123850]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:56:17 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Feb 25 06:56:18 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Feb 25 06:56:18 np0005629333 python3.9[123929]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:56:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:56:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:19 np0005629333 python3.9[124082]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:56:19 np0005629333 python3.9[124238]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:56:19 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Feb 25 06:56:19 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Feb 25 06:56:20 np0005629333 python3.9[124391]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:56:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:20 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Feb 25 06:56:20 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Feb 25 06:56:21 np0005629333 python3.9[124544]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:56:21 np0005629333 python3.9[124697]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb 25 06:56:21 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Feb 25 06:56:21 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Feb 25 06:56:22 np0005629333 python3.9[124850]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb 25 06:56:22 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Feb 25 06:56:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:22 np0005629333 systemd-logind[811]: Session 40 logged out. Waiting for processes to exit.
Feb 25 06:56:22 np0005629333 systemd[1]: session-40.scope: Deactivated successfully.
Feb 25 06:56:22 np0005629333 systemd[1]: session-40.scope: Consumed 27.970s CPU time.
Feb 25 06:56:22 np0005629333 systemd-logind[811]: Removed session 40.
Feb 25 06:56:22 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Feb 25 06:56:22 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Feb 25 06:56:22 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Feb 25 06:56:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:56:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:24 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Feb 25 06:56:24 np0005629333 ceph-osd[89088]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Feb 25 06:56:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:27 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Feb 25 06:56:27 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Feb 25 06:56:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:56:28 np0005629333 systemd-logind[811]: New session 41 of user zuul.
Feb 25 06:56:28 np0005629333 systemd[1]: Started Session 41 of User zuul.
Feb 25 06:56:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:29 np0005629333 python3.9[125031]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Feb 25 06:56:29 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 10.b scrub starts
Feb 25 06:56:29 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 10.b scrub ok
Feb 25 06:56:30 np0005629333 python3.9[125184]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:56:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_11:56:30
Feb 25 06:56:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 06:56:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 06:56:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'backups', 'volumes', 'images']
Feb 25 06:56:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 06:56:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:31 np0005629333 python3.9[125339]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Feb 25 06:56:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:56:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:56:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:56:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:56:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:56:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:56:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 06:56:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 06:56:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 06:56:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 06:56:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 06:56:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 06:56:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 06:56:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 06:56:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 06:56:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 06:56:32 np0005629333 python3.9[125492]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.yrwljolq follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:56:32 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Feb 25 06:56:32 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Feb 25 06:56:32 np0005629333 python3.9[125618]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.yrwljolq mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772020591.6685865-44-272232823843987/.source.yrwljolq _original_basename=.6bingwe8 follow=False checksum=f1fbf7deb318b6cb96229d9ec829aab71a158ff4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:56:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:56:33 np0005629333 python3.9[125771]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:56:34 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Feb 25 06:56:34 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Feb 25 06:56:34 np0005629333 python3.9[125924]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4pR0m4uDL6wpjeVsrE8exl2aQkXR/YRBYcrcquH8gsUfPg4Ic7ybncTg3FmomNbKuxFaXek0Tpc2LkgxfxoRVwUAoTsR8v75h5ax8+6Hr5kY922KQAzI2mFhkzXq6Iob2zMQtuYN7wdsX3onHLxyW+7tqo5S1CdPjNGt7lm5ibexJBMeBXylEOipYDisejedLmtOq/ufb/Bbq0QB59hKBth3Uod5kioIyV8qYySkchpFqkqoNBuQb3GodRHNQw597M28Tl4xbfSsBJ+WqwFzdceYcJ/Zbkx5fwOW5q9s48ovP26W7lABtWL6BTvCm2haeVJZBzwDX810hnDSLCH/XcHYBIbiqkujg85xKpFvJRgpDJtPQhx54nFXgyzYNfp+nlkFmhwekSMtjVLXTmTH24ZcokuyX9RlfQyS8r6T8HVnm1lWPc/cum+VNGCrQlboravmcQlrRcEJMmiUVuEzBRKHPSNP0gInw1Q+Bdc2E5zPaW2Lk46FmG/rDKwJ9Tf0=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAC0awy++nKN/asFVCLXvC5FnoQIdfAusWioUMLgrnzM#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFYNEIvBoC1IvH2XK5OeqRtYZ0DLH1Lh2Hwd9tGMJ+mVeJr3NcevvvZCuSJTYgl9EaFFWV9s06MQ2T5blzuQtD4=#012 create=True mode=0644 path=/tmp/ansible.yrwljolq state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:56:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:35 np0005629333 python3.9[126077]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.yrwljolq' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:56:35 np0005629333 systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 25 06:56:36 np0005629333 python3.9[126234]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.yrwljolq state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
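The Ansible tasks above (stat, copy, blockinfile, the shell `cat` into /etc/ssh/ssh_known_hosts, then the temp file removed with state=absent) assemble a cluster-wide known_hosts file from each node's public host keys. A minimal sketch of the same pattern outside Ansible, with hypothetical node/key data (the real keys appear verbatim in the blockinfile call above); the atomic os.replace stands in for the cat-then-delete pair:

import os
import tempfile

# Hypothetical inventory of host-key lines; keys truncated on purpose.
NODES = {
    "compute-0.ctlplane.example.com,192.168.122.100,compute-0*": [
        "ssh-rsa AAAA...",
        "ssh-ed25519 AAAA...",
        "ecdsa-sha2-nistp256 AAAA...",
    ],
}

def install_known_hosts(dest="/etc/ssh/ssh_known_hosts"):
    fd, tmp = tempfile.mkstemp(prefix="ansible.")  # like /tmp/ansible.yrwljolq
    with os.fdopen(fd, "w") as f:
        for patterns, keys in NODES.items():
            for key in keys:
                f.write(f"{patterns} {key}\n")
    os.chmod(tmp, 0o644)   # mode=0644, matching the copy task
    os.replace(tmp, dest)  # replaces the cat + file state=absent steps

Staging through a temp file means a failed render never truncates the live known_hosts, which is presumably why the playbook routes everything through /tmp/ansible.yrwljolq.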
Feb 25 06:56:36 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Feb 25 06:56:36 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Feb 25 06:56:36 np0005629333 systemd-logind[811]: Session 41 logged out. Waiting for processes to exit.
Feb 25 06:56:36 np0005629333 systemd[1]: session-41.scope: Deactivated successfully.
Feb 25 06:56:36 np0005629333 systemd[1]: session-41.scope: Consumed 4.647s CPU time.
Feb 25 06:56:36 np0005629333 systemd-logind[811]: Removed session 41.
Feb 25 06:56:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:56:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:38 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Feb 25 06:56:39 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Feb 25 06:56:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:40 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Feb 25 06:56:40 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Feb 25 06:56:41 np0005629333 systemd-logind[811]: New session 42 of user zuul.
Feb 25 06:56:41 np0005629333 systemd[1]: Started Session 42 of User zuul.
Feb 25 06:56:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 06:56:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:56:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 06:56:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:56:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:56:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:56:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:56:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:56:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:56:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:56:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:56:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:56:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6988014389607423e-06 of space, bias 4.0, pg target 0.0032385617267528906 quantized to 16 (current 16)
Feb 25 06:56:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:56:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:56:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:56:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 06:56:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:56:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 06:56:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:56:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:56:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:56:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
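Each _maybe_adjust pass above logs, per pool, its share of raw capacity, a bias, and a resulting pg target. Every logged target is reproduced exactly by usage_ratio x bias x 300; reading 300 as 3 OSDs times the default mon_target_pg_per_osd of 100 is an assumption, but all the pool lines above fit it. A sketch of that arithmetic, with the power-of-two quantization hedged to the floor values seen in the log:

import math

def pg_target(usage_ratio, bias, osd_count=3, target_pg_per_osd=100):
    # usage_ratio and bias come straight from the log lines above.
    return usage_ratio * bias * osd_count * target_pg_per_osd

def quantize(target, floor):
    # Round to a power of two, never dropping below the pool's floor
    # (1 for .mgr, 16 for cephfs.cephfs.meta, 32 elsewhere in this log).
    if target <= floor:
        return floor
    return 2 ** round(math.log2(target))

# Pool '.mgr': using 7.185749983720779e-06 of space, bias 1.0
print(pg_target(7.185749983720779e-06, 1.0))               # 0.0021557249951162337, as logged
print(quantize(pg_target(7.185749983720779e-06, 1.0), 1))  # 1 ("quantized to 1")

# Pool 'cephfs.cephfs.meta': bias 4.0, floor 16
print(pg_target(2.6988014389607423e-06, 4.0))              # 0.0032385617267528906, as logged

With every pool quantized at its floor (1 + 16 + 9 x 32 = 305), the autoscaler changes nothing, which is why the pgmap lines stay at 305 PGs throughout this section.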
Feb 25 06:56:42 np0005629333 python3.9[126412]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:56:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:43 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Feb 25 06:56:43 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Feb 25 06:56:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:56:43 np0005629333 python3.9[126569]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 25 06:56:43 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Feb 25 06:56:44 np0005629333 ceph-osd[86953]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Feb 25 06:56:44 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Feb 25 06:56:44 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Feb 25 06:56:44 np0005629333 python3.9[126724]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 25 06:56:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:56:44 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:56:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 06:56:44 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:56:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 06:56:44 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:56:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 06:56:44 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 06:56:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 06:56:44 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 06:56:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:56:44 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
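The mon_command burst above is the cephadm mgr module refreshing its view (minimal conf, admin and bootstrap-osd keys, a destroyed-OSD check) before the device scans that follow. The same queries can be replayed by hand; a sketch assuming the ceph CLI and a client.admin keyring are available on the node:

import json
import subprocess

def ceph(*args):
    # Thin wrapper over the ceph CLI; raises on non-zero exit.
    return subprocess.run(("ceph",) + args, check=True,
                          capture_output=True, text=True).stdout

minimal_conf  = ceph("config", "generate-minimal-conf")
admin_key     = ceph("auth", "get", "client.admin")
bootstrap_key = ceph("auth", "get", "client.bootstrap-osd")
# The destroyed-OSD query behind the osd_remove_queue bookkeeping above:
destroyed = json.loads(ceph("osd", "tree", "destroyed", "--format", "json"))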
Feb 25 06:56:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:45 np0005629333 podman[126993]: 2026-02-25 11:56:45.260121932 +0000 UTC m=+0.052679585 container create 9a73ab2a01802317858c3f769180711de5c15588e7b8ee6bcc465e60a5937728 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 06:56:45 np0005629333 systemd[1]: Started libpod-conmon-9a73ab2a01802317858c3f769180711de5c15588e7b8ee6bcc465e60a5937728.scope.
Feb 25 06:56:45 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:56:45 np0005629333 podman[126993]: 2026-02-25 11:56:45.238036618 +0000 UTC m=+0.030594311 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:56:45 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Feb 25 06:56:45 np0005629333 podman[126993]: 2026-02-25 11:56:45.341438531 +0000 UTC m=+0.133996154 container init 9a73ab2a01802317858c3f769180711de5c15588e7b8ee6bcc465e60a5937728 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_easley, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:56:45 np0005629333 podman[126993]: 2026-02-25 11:56:45.350124483 +0000 UTC m=+0.142682106 container start 9a73ab2a01802317858c3f769180711de5c15588e7b8ee6bcc465e60a5937728 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_easley, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 06:56:45 np0005629333 interesting_easley[127040]: 167 167
Feb 25 06:56:45 np0005629333 systemd[1]: libpod-9a73ab2a01802317858c3f769180711de5c15588e7b8ee6bcc465e60a5937728.scope: Deactivated successfully.
Feb 25 06:56:45 np0005629333 conmon[127040]: conmon 9a73ab2a01802317858c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9a73ab2a01802317858c3f769180711de5c15588e7b8ee6bcc465e60a5937728.scope/container/memory.events
Feb 25 06:56:45 np0005629333 podman[126993]: 2026-02-25 11:56:45.354045311 +0000 UTC m=+0.146602955 container attach 9a73ab2a01802317858c3f769180711de5c15588e7b8ee6bcc465e60a5937728 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 06:56:45 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Feb 25 06:56:45 np0005629333 podman[126993]: 2026-02-25 11:56:45.354875155 +0000 UTC m=+0.147432768 container died 9a73ab2a01802317858c3f769180711de5c15588e7b8ee6bcc465e60a5937728 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_easley, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:56:45 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a5787e95d57338d40764ad9d009fb8a5dc15a04966ae4b3e907daa3cee473a83-merged.mount: Deactivated successfully.
Feb 25 06:56:45 np0005629333 podman[126993]: 2026-02-25 11:56:45.401034177 +0000 UTC m=+0.193591830 container remove 9a73ab2a01802317858c3f769180711de5c15588e7b8ee6bcc465e60a5937728 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_easley, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:56:45 np0005629333 systemd[1]: libpod-conmon-9a73ab2a01802317858c3f769180711de5c15588e7b8ee6bcc465e60a5937728.scope: Deactivated successfully.
Feb 25 06:56:45 np0005629333 python3.9[127036]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:56:45 np0005629333 podman[127067]: 2026-02-25 11:56:45.559430998 +0000 UTC m=+0.051844991 container create b6aaf0b55ec1bd2afbf07697b183c2a69c06d511d1f7ec513f6fa91230d9100d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:56:45 np0005629333 systemd[1]: Started libpod-conmon-b6aaf0b55ec1bd2afbf07697b183c2a69c06d511d1f7ec513f6fa91230d9100d.scope.
Feb 25 06:56:45 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:56:45 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e57589e5aa1196d8572195204ce5b3c1230dad50d85c3f9b40a873a034ba731b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:56:45 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e57589e5aa1196d8572195204ce5b3c1230dad50d85c3f9b40a873a034ba731b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:56:45 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e57589e5aa1196d8572195204ce5b3c1230dad50d85c3f9b40a873a034ba731b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:56:45 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e57589e5aa1196d8572195204ce5b3c1230dad50d85c3f9b40a873a034ba731b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:56:45 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e57589e5aa1196d8572195204ce5b3c1230dad50d85c3f9b40a873a034ba731b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:56:45 np0005629333 podman[127067]: 2026-02-25 11:56:45.542775476 +0000 UTC m=+0.035189409 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:56:45 np0005629333 podman[127067]: 2026-02-25 11:56:45.656818724 +0000 UTC m=+0.149232647 container init b6aaf0b55ec1bd2afbf07697b183c2a69c06d511d1f7ec513f6fa91230d9100d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_grothendieck, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:56:45 np0005629333 podman[127067]: 2026-02-25 11:56:45.663224922 +0000 UTC m=+0.155638815 container start b6aaf0b55ec1bd2afbf07697b183c2a69c06d511d1f7ec513f6fa91230d9100d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_grothendieck, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Feb 25 06:56:45 np0005629333 podman[127067]: 2026-02-25 11:56:45.666985117 +0000 UTC m=+0.159399010 container attach b6aaf0b55ec1bd2afbf07697b183c2a69c06d511d1f7ec513f6fa91230d9100d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_grothendieck, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:56:45 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:56:45 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:56:45 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 06:56:46 np0005629333 condescending_grothendieck[127107]: --> passed data devices: 0 physical, 3 LVM
Feb 25 06:56:46 np0005629333 condescending_grothendieck[127107]: --> All data devices are unavailable
Feb 25 06:56:46 np0005629333 systemd[1]: libpod-b6aaf0b55ec1bd2afbf07697b183c2a69c06d511d1f7ec513f6fa91230d9100d.scope: Deactivated successfully.
Feb 25 06:56:46 np0005629333 podman[127067]: 2026-02-25 11:56:46.107464696 +0000 UTC m=+0.599878599 container died b6aaf0b55ec1bd2afbf07697b183c2a69c06d511d1f7ec513f6fa91230d9100d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_grothendieck, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:56:46 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e57589e5aa1196d8572195204ce5b3c1230dad50d85c3f9b40a873a034ba731b-merged.mount: Deactivated successfully.
Feb 25 06:56:46 np0005629333 podman[127067]: 2026-02-25 11:56:46.153559136 +0000 UTC m=+0.645973029 container remove b6aaf0b55ec1bd2afbf07697b183c2a69c06d511d1f7ec513f6fa91230d9100d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle)
Feb 25 06:56:46 np0005629333 systemd[1]: libpod-conmon-b6aaf0b55ec1bd2afbf07697b183c2a69c06d511d1f7ec513f6fa91230d9100d.scope: Deactivated successfully.
Feb 25 06:56:46 np0005629333 python3.9[127256]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:56:46 np0005629333 podman[127372]: 2026-02-25 11:56:46.545167337 +0000 UTC m=+0.036553426 container create 2083c73d0d2f19b4e1fe90fa6f1991e52d8928644a2c10397896fa8c8ec61b8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 06:56:46 np0005629333 systemd[1]: Started libpod-conmon-2083c73d0d2f19b4e1fe90fa6f1991e52d8928644a2c10397896fa8c8ec61b8e.scope.
Feb 25 06:56:46 np0005629333 podman[127372]: 2026-02-25 11:56:46.531529149 +0000 UTC m=+0.022915158 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:56:46 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:56:46 np0005629333 podman[127372]: 2026-02-25 11:56:46.654597717 +0000 UTC m=+0.145983726 container init 2083c73d0d2f19b4e1fe90fa6f1991e52d8928644a2c10397896fa8c8ec61b8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermat, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:56:46 np0005629333 podman[127372]: 2026-02-25 11:56:46.66116881 +0000 UTC m=+0.152554799 container start 2083c73d0d2f19b4e1fe90fa6f1991e52d8928644a2c10397896fa8c8ec61b8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermat, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 06:56:46 np0005629333 podman[127372]: 2026-02-25 11:56:46.664663387 +0000 UTC m=+0.156049406 container attach 2083c73d0d2f19b4e1fe90fa6f1991e52d8928644a2c10397896fa8c8ec61b8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermat, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:56:46 np0005629333 systemd[1]: libpod-2083c73d0d2f19b4e1fe90fa6f1991e52d8928644a2c10397896fa8c8ec61b8e.scope: Deactivated successfully.
Feb 25 06:56:46 np0005629333 cool_fermat[127422]: 167 167
Feb 25 06:56:46 np0005629333 conmon[127422]: conmon 2083c73d0d2f19b4e1fe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2083c73d0d2f19b4e1fe90fa6f1991e52d8928644a2c10397896fa8c8ec61b8e.scope/container/memory.events
Feb 25 06:56:46 np0005629333 podman[127372]: 2026-02-25 11:56:46.669572343 +0000 UTC m=+0.160958352 container died 2083c73d0d2f19b4e1fe90fa6f1991e52d8928644a2c10397896fa8c8ec61b8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermat, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True)
Feb 25 06:56:46 np0005629333 systemd[1]: var-lib-containers-storage-overlay-82320b00e1f57e13c97b47cfca6fdecc96bb0e32bf3d98fa0900591ba1cf9335-merged.mount: Deactivated successfully.
Feb 25 06:56:46 np0005629333 podman[127372]: 2026-02-25 11:56:46.788470697 +0000 UTC m=+0.279856696 container remove 2083c73d0d2f19b4e1fe90fa6f1991e52d8928644a2c10397896fa8c8ec61b8e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Feb 25 06:56:46 np0005629333 systemd[1]: libpod-conmon-2083c73d0d2f19b4e1fe90fa6f1991e52d8928644a2c10397896fa8c8ec61b8e.scope: Deactivated successfully.
Feb 25 06:56:46 np0005629333 podman[127519]: 2026-02-25 11:56:46.917370158 +0000 UTC m=+0.043539191 container create e2c20c0d38dae55a7d2be95bd23dc8186c1b193e65fab85e2802ce2ac7a6bf4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_archimedes, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:56:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:46 np0005629333 systemd[1]: Started libpod-conmon-e2c20c0d38dae55a7d2be95bd23dc8186c1b193e65fab85e2802ce2ac7a6bf4c.scope.
Feb 25 06:56:46 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:56:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52e1c2cc09bba024413fb3e1ac2152e465e53e8dcc41eec3515f8c060d82bce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:56:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52e1c2cc09bba024413fb3e1ac2152e465e53e8dcc41eec3515f8c060d82bce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:56:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52e1c2cc09bba024413fb3e1ac2152e465e53e8dcc41eec3515f8c060d82bce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:56:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52e1c2cc09bba024413fb3e1ac2152e465e53e8dcc41eec3515f8c060d82bce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:56:46 np0005629333 podman[127519]: 2026-02-25 11:56:46.899241405 +0000 UTC m=+0.025410438 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:56:47 np0005629333 podman[127519]: 2026-02-25 11:56:47.003046759 +0000 UTC m=+0.129215752 container init e2c20c0d38dae55a7d2be95bd23dc8186c1b193e65fab85e2802ce2ac7a6bf4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 06:56:47 np0005629333 podman[127519]: 2026-02-25 11:56:47.009465557 +0000 UTC m=+0.135634560 container start e2c20c0d38dae55a7d2be95bd23dc8186c1b193e65fab85e2802ce2ac7a6bf4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:56:47 np0005629333 podman[127519]: 2026-02-25 11:56:47.021028828 +0000 UTC m=+0.147197821 container attach e2c20c0d38dae55a7d2be95bd23dc8186c1b193e65fab85e2802ce2ac7a6bf4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_archimedes, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 06:56:47 np0005629333 python3.9[127534]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]: {
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:    "0": [
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:        {
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "devices": [
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "/dev/loop3"
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            ],
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "lv_name": "ceph_lv0",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "lv_size": "21470642176",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "name": "ceph_lv0",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "tags": {
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.cluster_name": "ceph",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.crush_device_class": "",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.encrypted": "0",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.objectstore": "bluestore",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.osd_id": "0",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.type": "block",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.vdo": "0",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.with_tpm": "0"
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            },
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "type": "block",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "vg_name": "ceph_vg0"
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:        }
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:    ],
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:    "1": [
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:        {
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "devices": [
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "/dev/loop4"
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            ],
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "lv_name": "ceph_lv1",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "lv_size": "21470642176",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "name": "ceph_lv1",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "tags": {
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.cluster_name": "ceph",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.crush_device_class": "",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.encrypted": "0",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.objectstore": "bluestore",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.osd_id": "1",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.type": "block",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.vdo": "0",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.with_tpm": "0"
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            },
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "type": "block",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "vg_name": "ceph_vg1"
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:        }
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:    ],
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:    "2": [
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:        {
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "devices": [
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "/dev/loop5"
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            ],
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "lv_name": "ceph_lv2",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "lv_size": "21470642176",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "name": "ceph_lv2",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "tags": {
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.cluster_name": "ceph",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.crush_device_class": "",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.encrypted": "0",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.objectstore": "bluestore",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.osd_id": "2",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.type": "block",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.vdo": "0",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:                "ceph.with_tpm": "0"
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            },
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "type": "block",
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:            "vg_name": "ceph_vg2"
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:        }
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]:    ]
Feb 25 06:56:47 np0005629333 quirky_archimedes[127540]: }
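The JSON printed by quirky_archimedes is ceph-volume's LVM listing; the exact flags cephadm passed are not shown in this log, but `ceph-volume lvm list --format json` produces this shape. A short parser pulling out what the scan actually established: which device backs each OSD, its LV path, and its fsid.

import json

def summarize_osds(raw):
    out = {}
    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            tags = lv["tags"]
            out[int(osd_id)] = {
                "devices": lv["devices"],                 # e.g. ["/dev/loop3"]
                "lv_path": lv["lv_path"],                 # e.g. "/dev/ceph_vg0/ceph_lv0"
                "osd_fsid": tags["ceph.osd_fsid"],
                "objectstore": tags["ceph.objectstore"],  # "bluestore" for all three
            }
    return out

Applied to the output above, this yields OSDs 0-2 on /dev/loop3-5, one ~20 GiB bluestore LV apiece (lv_size 21470642176 bytes), which squares with the 60 GiB total reported by the pgmap lines.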
Feb 25 06:56:47 np0005629333 systemd[1]: libpod-e2c20c0d38dae55a7d2be95bd23dc8186c1b193e65fab85e2802ce2ac7a6bf4c.scope: Deactivated successfully.
Feb 25 06:56:47 np0005629333 podman[127519]: 2026-02-25 11:56:47.310351867 +0000 UTC m=+0.436520870 container died e2c20c0d38dae55a7d2be95bd23dc8186c1b193e65fab85e2802ce2ac7a6bf4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_archimedes, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 06:56:47 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c52e1c2cc09bba024413fb3e1ac2152e465e53e8dcc41eec3515f8c060d82bce-merged.mount: Deactivated successfully.
Feb 25 06:56:47 np0005629333 podman[127519]: 2026-02-25 11:56:47.346942134 +0000 UTC m=+0.473111127 container remove e2c20c0d38dae55a7d2be95bd23dc8186c1b193e65fab85e2802ce2ac7a6bf4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_archimedes, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 06:56:47 np0005629333 systemd[1]: libpod-conmon-e2c20c0d38dae55a7d2be95bd23dc8186c1b193e65fab85e2802ce2ac7a6bf4c.scope: Deactivated successfully.
Feb 25 06:56:47 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Feb 25 06:56:47 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Feb 25 06:56:47 np0005629333 systemd[1]: session-42.scope: Deactivated successfully.
Feb 25 06:56:47 np0005629333 systemd[1]: session-42.scope: Consumed 3.610s CPU time.
Feb 25 06:56:47 np0005629333 systemd-logind[811]: Session 42 logged out. Waiting for processes to exit.
Feb 25 06:56:47 np0005629333 systemd-logind[811]: Removed session 42.
Feb 25 06:56:47 np0005629333 podman[127647]: 2026-02-25 11:56:47.783948247 +0000 UTC m=+0.078470862 container create cc65a35ae984cf946ad2701ad7399e315f7d7f8fe10a7f64dcba2d77ba8add84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:56:47 np0005629333 podman[127647]: 2026-02-25 11:56:47.725271006 +0000 UTC m=+0.019793641 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:56:47 np0005629333 systemd[1]: Started libpod-conmon-cc65a35ae984cf946ad2701ad7399e315f7d7f8fe10a7f64dcba2d77ba8add84.scope.
Feb 25 06:56:47 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:56:48 np0005629333 podman[127647]: 2026-02-25 11:56:48.014355028 +0000 UTC m=+0.308877673 container init cc65a35ae984cf946ad2701ad7399e315f7d7f8fe10a7f64dcba2d77ba8add84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_rubin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 06:56:48 np0005629333 podman[127647]: 2026-02-25 11:56:48.018598786 +0000 UTC m=+0.313121401 container start cc65a35ae984cf946ad2701ad7399e315f7d7f8fe10a7f64dcba2d77ba8add84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 06:56:48 np0005629333 gracious_rubin[127664]: 167 167
Feb 25 06:56:48 np0005629333 systemd[1]: libpod-cc65a35ae984cf946ad2701ad7399e315f7d7f8fe10a7f64dcba2d77ba8add84.scope: Deactivated successfully.
Feb 25 06:56:48 np0005629333 podman[127647]: 2026-02-25 11:56:48.141926163 +0000 UTC m=+0.436448818 container attach cc65a35ae984cf946ad2701ad7399e315f7d7f8fe10a7f64dcba2d77ba8add84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_rubin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 06:56:48 np0005629333 podman[127647]: 2026-02-25 11:56:48.142353385 +0000 UTC m=+0.436876040 container died cc65a35ae984cf946ad2701ad7399e315f7d7f8fe10a7f64dcba2d77ba8add84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_rubin, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:56:48 np0005629333 systemd[1]: var-lib-containers-storage-overlay-10fd6925f1d96ac71341981a7496a281542494b4162963b9ab90eb0fe5a9b3f8-merged.mount: Deactivated successfully.
Feb 25 06:56:48 np0005629333 podman[127647]: 2026-02-25 11:56:48.434310977 +0000 UTC m=+0.728833592 container remove cc65a35ae984cf946ad2701ad7399e315f7d7f8fe10a7f64dcba2d77ba8add84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:56:48 np0005629333 systemd[1]: libpod-conmon-cc65a35ae984cf946ad2701ad7399e315f7d7f8fe10a7f64dcba2d77ba8add84.scope: Deactivated successfully.
Feb 25 06:56:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:56:48 np0005629333 podman[127687]: 2026-02-25 11:56:48.602262984 +0000 UTC m=+0.042280306 container create 3c95dae0fcbcd4f486d59b8a163950cc577db7c63b80f8d24446a06c63dac9f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_booth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Feb 25 06:56:48 np0005629333 systemd[1]: Started libpod-conmon-3c95dae0fcbcd4f486d59b8a163950cc577db7c63b80f8d24446a06c63dac9f2.scope.
Feb 25 06:56:48 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:56:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/047fcaec4c17d2242c65c9943b7e1f991eba8ba292162d413b9bcec2c6e98f4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:56:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/047fcaec4c17d2242c65c9943b7e1f991eba8ba292162d413b9bcec2c6e98f4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:56:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/047fcaec4c17d2242c65c9943b7e1f991eba8ba292162d413b9bcec2c6e98f4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:56:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/047fcaec4c17d2242c65c9943b7e1f991eba8ba292162d413b9bcec2c6e98f4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:56:48 np0005629333 podman[127687]: 2026-02-25 11:56:48.584402968 +0000 UTC m=+0.024420290 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:56:48 np0005629333 podman[127687]: 2026-02-25 11:56:48.685860397 +0000 UTC m=+0.125877719 container init 3c95dae0fcbcd4f486d59b8a163950cc577db7c63b80f8d24446a06c63dac9f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:56:48 np0005629333 podman[127687]: 2026-02-25 11:56:48.694031154 +0000 UTC m=+0.134048446 container start 3c95dae0fcbcd4f486d59b8a163950cc577db7c63b80f8d24446a06c63dac9f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_booth, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 06:56:48 np0005629333 podman[127687]: 2026-02-25 11:56:48.697881051 +0000 UTC m=+0.137898353 container attach 3c95dae0fcbcd4f486d59b8a163950cc577db7c63b80f8d24446a06c63dac9f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 06:56:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:49 np0005629333 lvm[127779]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 06:56:49 np0005629333 lvm[127779]: VG ceph_vg0 finished
Feb 25 06:56:49 np0005629333 lvm[127782]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 06:56:49 np0005629333 lvm[127782]: VG ceph_vg1 finished
Feb 25 06:56:49 np0005629333 lvm[127784]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 06:56:49 np0005629333 lvm[127784]: VG ceph_vg2 finished
Feb 25 06:56:49 np0005629333 focused_booth[127703]: {}
Feb 25 06:56:49 np0005629333 systemd[1]: libpod-3c95dae0fcbcd4f486d59b8a163950cc577db7c63b80f8d24446a06c63dac9f2.scope: Deactivated successfully.
Feb 25 06:56:49 np0005629333 podman[127687]: 2026-02-25 11:56:49.458359431 +0000 UTC m=+0.898376763 container died 3c95dae0fcbcd4f486d59b8a163950cc577db7c63b80f8d24446a06c63dac9f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 06:56:49 np0005629333 systemd[1]: libpod-3c95dae0fcbcd4f486d59b8a163950cc577db7c63b80f8d24446a06c63dac9f2.scope: Consumed 1.047s CPU time.
Feb 25 06:56:49 np0005629333 systemd[1]: var-lib-containers-storage-overlay-047fcaec4c17d2242c65c9943b7e1f991eba8ba292162d413b9bcec2c6e98f4e-merged.mount: Deactivated successfully.
Feb 25 06:56:49 np0005629333 podman[127687]: 2026-02-25 11:56:49.514221423 +0000 UTC m=+0.954238755 container remove 3c95dae0fcbcd4f486d59b8a163950cc577db7c63b80f8d24446a06c63dac9f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_booth, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 06:56:49 np0005629333 systemd[1]: libpod-conmon-3c95dae0fcbcd4f486d59b8a163950cc577db7c63b80f8d24446a06c63dac9f2.scope: Deactivated successfully.
Feb 25 06:56:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:56:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:56:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:56:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:56:50 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:56:50 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:56:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:52 np0005629333 systemd-logind[811]: New session 43 of user zuul.
Feb 25 06:56:52 np0005629333 systemd[1]: Started Session 43 of User zuul.
Feb 25 06:56:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:56:53 np0005629333 python3.9[127976]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:56:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:55 np0005629333 python3.9[128133]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 25 06:56:55 np0005629333 python3.9[128218]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 25 06:56:56 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 6.d scrub starts
Feb 25 06:56:56 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 6.d scrub ok
Feb 25 06:56:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:58 np0005629333 python3.9[128369]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:56:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:56:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:56:59 np0005629333 python3.9[128520]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 25 06:57:00 np0005629333 python3.9[128670]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:57:00 np0005629333 python3.9[128820]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:57:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:57:01 np0005629333 systemd[1]: session-43.scope: Deactivated successfully.
Feb 25 06:57:01 np0005629333 systemd[1]: session-43.scope: Consumed 5.522s CPU time.
Feb 25 06:57:01 np0005629333 systemd-logind[811]: Session 43 logged out. Waiting for processes to exit.
Feb 25 06:57:01 np0005629333 systemd-logind[811]: Removed session 43.
Feb 25 06:57:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:57:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:57:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:57:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:57:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:57:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:57:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:57:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:57:03 np0005629333 systemd[1]: session-18.scope: Deactivated successfully.
Feb 25 06:57:03 np0005629333 systemd[1]: session-18.scope: Consumed 1min 30.134s CPU time.
Feb 25 06:57:03 np0005629333 systemd-logind[811]: Session 18 logged out. Waiting for processes to exit.
Feb 25 06:57:03 np0005629333 systemd-logind[811]: Removed session 18.
Feb 25 06:57:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:57:05 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 6.e scrub starts
Feb 25 06:57:05 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 6.e scrub ok
Feb 25 06:57:05 np0005629333 systemd-logind[811]: New session 44 of user zuul.
Feb 25 06:57:05 np0005629333 systemd[1]: Started Session 44 of User zuul.
Feb 25 06:57:06 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Feb 25 06:57:06 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Feb 25 06:57:06 np0005629333 python3.9[128998]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:57:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:57:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:57:08 np0005629333 python3.9[129155]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:57:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:57:09 np0005629333 python3.9[129308]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:57:09 np0005629333 python3.9[129461]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:57:10 np0005629333 python3.9[129585]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020629.3016207-60-85589739639191/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=e55861f3b95495704334618df4ad221be75889d9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:57:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:57:11 np0005629333 python3.9[129738]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:57:12 np0005629333 python3.9[129862]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020630.800543-60-195247615416650/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=8209baa31489b003e71fbfb35f2775feb2e4f420 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:57:12 np0005629333 python3.9[130015]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:57:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:57:13 np0005629333 python3.9[130139]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020632.134511-60-76345289669014/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=1db32b01e9f50403a44c5153f4beeecaff270ea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:57:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:57:14 np0005629333 python3.9[130292]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:57:14 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 6.c scrub starts
Feb 25 06:57:14 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 6.c scrub ok
Feb 25 06:57:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:57:15 np0005629333 python3.9[130445]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:57:15 np0005629333 python3.9[130598]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:57:16 np0005629333 python3.9[130722]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020635.1879313-119-211685569370763/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=8c8bcee820db2c4d2aa80e5eadfab78b07a78f41 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:57:16 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 6.b scrub starts
Feb 25 06:57:16 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 6.b scrub ok
Feb 25 06:57:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:57:17 np0005629333 python3.9[130875]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:57:17 np0005629333 python3.9[130999]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020636.5993605-119-204189465977746/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=004ba8751543ff5f5349d64c2e53b51e855eb69c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:57:18 np0005629333 python3.9[131152]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:57:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:57:18 np0005629333 python3.9[131276]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020637.7010233-119-178762885522361/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=2a92de6f4143265ed906b16009731e7809884a61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:57:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:57:19 np0005629333 python3.9[131429]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:57:19 np0005629333 python3.9[131582]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:57:20 np0005629333 python3.9[131735]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:57:20 np0005629333 python3.9[131859]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020639.969218-178-112839649486674/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=235c576b9107a536c6115318d41b51895b681db4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:57:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:57:21 np0005629333 python3.9[132012]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:57:22 np0005629333 python3.9[132136]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020641.0918946-178-29961026139758/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=004ba8751543ff5f5349d64c2e53b51e855eb69c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:57:22 np0005629333 python3.9[132289]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:57:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:57:23 np0005629333 python3.9[132413]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020642.2662582-178-247715020700213/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=4f8cb804a50cb2867fb417a64983e7ca7eba1b18 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:57:23 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Feb 25 06:57:23 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Feb 25 06:57:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:57:24 np0005629333 python3.9[132566]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:57:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:57:25 np0005629333 python3.9[132719]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:57:25 np0005629333 python3.9[132843]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020644.5849915-246-31902506715518/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=81e097fe8c9e59927bc656867231837d71d1bb93 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:57:26 np0005629333 python3.9[132996]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:57:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:57:27 np0005629333 python3.9[133149]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:57:27 np0005629333 python3.9[133273]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020646.5762084-270-9774023049537/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=81e097fe8c9e59927bc656867231837d71d1bb93 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:57:28 np0005629333 python3.9[133426]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:57:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:57:28 np0005629333 python3.9[133579]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:57:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:57:29 np0005629333 python3.9[133703]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020648.3189797-294-173803707314077/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=81e097fe8c9e59927bc656867231837d71d1bb93 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:57:29 np0005629333 python3.9[133856]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:57:30 np0005629333 python3.9[134009]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:57:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_11:57:30
Feb 25 06:57:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 06:57:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 06:57:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['backups', 'volumes', 'images', 'default.rgw.meta', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta']
Feb 25 06:57:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 06:57:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:57:31 np0005629333 python3.9[134133]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020650.061152-318-104744105919518/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=81e097fe8c9e59927bc656867231837d71d1bb93 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:57:31 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Feb 25 06:57:31 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Feb 25 06:57:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:57:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:57:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:57:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:57:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:57:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:57:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 06:57:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 06:57:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 06:57:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 06:57:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 06:57:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 06:57:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 06:57:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 06:57:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 06:57:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 06:57:31 np0005629333 python3.9[134286]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:57:32 np0005629333 python3.9[134439]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:57:32 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Feb 25 06:57:32 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Feb 25 06:57:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:57:32 np0005629333 python3.9[134563]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020651.9943151-342-114284044528710/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=81e097fe8c9e59927bc656867231837d71d1bb93 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:57:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:57:33 np0005629333 python3.9[134716]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:57:34 np0005629333 python3.9[134869]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:57:34 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Feb 25 06:57:34 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Feb 25 06:57:34 np0005629333 python3.9[134993]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020653.9034731-366-106312422701287/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=81e097fe8c9e59927bc656867231837d71d1bb93 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:57:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:57:35 np0005629333 systemd[1]: session-44.scope: Deactivated successfully.
Feb 25 06:57:35 np0005629333 systemd[1]: session-44.scope: Consumed 21.509s CPU time.
Feb 25 06:57:35 np0005629333 systemd-logind[811]: Session 44 logged out. Waiting for processes to exit.
Feb 25 06:57:35 np0005629333 systemd-logind[811]: Removed session 44.
Feb 25 06:57:36 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Feb 25 06:57:36 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Feb 25 06:57:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:57:38 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Feb 25 06:57:38 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Feb 25 06:57:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:57:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:57:40 np0005629333 systemd-logind[811]: New session 45 of user zuul.
Feb 25 06:57:40 np0005629333 systemd[1]: Started Session 45 of User zuul.
Feb 25 06:57:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:57:41 np0005629333 python3.9[135174]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:57:41 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 9.a scrub starts
Feb 25 06:57:41 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 9.a scrub ok
Feb 25 06:57:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 06:57:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:57:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 06:57:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:57:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:57:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:57:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:57:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:57:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:57:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:57:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:57:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:57:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6988014389607423e-06 of space, bias 4.0, pg target 0.0032385617267528906 quantized to 16 (current 16)
Feb 25 06:57:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:57:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:57:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:57:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 06:57:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:57:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 06:57:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:57:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:57:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:57:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 06:57:41 np0005629333 python3.9[135327]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:57:42 np0005629333 python3.9[135451]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772020661.17977-29-22061509128290/.source.conf _original_basename=ceph.conf follow=False checksum=9175b7cfc014b73ecc93e7db390b118c88a425fc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:57:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:57:43 np0005629333 python3.9[135604]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:57:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:57:43 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Feb 25 06:57:43 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Feb 25 06:57:43 np0005629333 python3.9[135728]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1772020662.6942816-29-222687073893193/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=89903beb1b6e06190ab4b6fe858fb9d1594a1a79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:57:44 np0005629333 systemd[1]: session-45.scope: Deactivated successfully.
Feb 25 06:57:44 np0005629333 systemd[1]: session-45.scope: Consumed 2.536s CPU time.
Feb 25 06:57:44 np0005629333 systemd-logind[811]: Session 45 logged out. Waiting for processes to exit.
Feb 25 06:57:44 np0005629333 systemd-logind[811]: Removed session 45.
Feb 25 06:57:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:57:46 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Feb 25 06:57:46 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Feb 25 06:57:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:57:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:57:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:57:49 np0005629333 systemd-logind[811]: New session 46 of user zuul.
Feb 25 06:57:49 np0005629333 systemd[1]: Started Session 46 of User zuul.
Feb 25 06:57:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:57:50 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:57:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 06:57:50 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:57:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 06:57:50 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:57:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 06:57:50 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 06:57:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 06:57:50 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 06:57:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:57:50 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:57:50 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:57:50 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:57:50 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 06:57:50 np0005629333 python3.9[135975]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:57:50 np0005629333 podman[136053]: 2026-02-25 11:57:50.615161854 +0000 UTC m=+0.045195407 container create f256b343a7207462640468a1292b394fae64f21c115ba90281e885d983297b77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 06:57:50 np0005629333 systemd[1]: Started libpod-conmon-f256b343a7207462640468a1292b394fae64f21c115ba90281e885d983297b77.scope.
Feb 25 06:57:50 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:57:50 np0005629333 podman[136053]: 2026-02-25 11:57:50.595152698 +0000 UTC m=+0.025186231 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:57:50 np0005629333 podman[136053]: 2026-02-25 11:57:50.706557064 +0000 UTC m=+0.136590597 container init f256b343a7207462640468a1292b394fae64f21c115ba90281e885d983297b77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:57:50 np0005629333 podman[136053]: 2026-02-25 11:57:50.71650411 +0000 UTC m=+0.146537623 container start f256b343a7207462640468a1292b394fae64f21c115ba90281e885d983297b77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keldysh, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 06:57:50 np0005629333 tender_keldysh[136090]: 167 167
Feb 25 06:57:50 np0005629333 systemd[1]: libpod-f256b343a7207462640468a1292b394fae64f21c115ba90281e885d983297b77.scope: Deactivated successfully.
Feb 25 06:57:50 np0005629333 podman[136053]: 2026-02-25 11:57:50.721261222 +0000 UTC m=+0.151294755 container attach f256b343a7207462640468a1292b394fae64f21c115ba90281e885d983297b77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keldysh, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:57:50 np0005629333 podman[136053]: 2026-02-25 11:57:50.722044324 +0000 UTC m=+0.152077837 container died f256b343a7207462640468a1292b394fae64f21c115ba90281e885d983297b77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keldysh, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:57:50 np0005629333 systemd[1]: var-lib-containers-storage-overlay-aff379ab49a7a0a26a54de118d3a79f39deda269ce09ae3c32e798425ff1da37-merged.mount: Deactivated successfully.
Feb 25 06:57:50 np0005629333 podman[136053]: 2026-02-25 11:57:50.76544959 +0000 UTC m=+0.195483113 container remove f256b343a7207462640468a1292b394fae64f21c115ba90281e885d983297b77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keldysh, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:57:50 np0005629333 systemd[1]: libpod-conmon-f256b343a7207462640468a1292b394fae64f21c115ba90281e885d983297b77.scope: Deactivated successfully.
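[annotation] The create → init → start → attach → died → remove sequence above for container tender_keldysh, closed out by its libpod-conmon scope deactivating, is the journal signature of a short-lived `podman run --rm` invocation. The container's only stdout, "167 167", is consistent with cephadm probing the uid/gid of the ceph user inside the image (167 is the ceph uid/gid on these builds); that reading is an inference from the output shape, not something the log states. A minimal sketch that would reproduce the same event sequence, assuming podman and the referenced image are available locally:

    #!/usr/bin/env python3
    # Hypothetical reproduction: a throwaway `podman run --rm` emits the same
    # create/init/start/attach/died/remove journal events seen above.
    # Image digest copied from the log; the stat probe mirrors the kind of
    # uid/gid check cephadm plausibly ran here (an assumption, not confirmed).
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # expected to print two ids, e.g. "167 167"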
Feb 25 06:57:50 np0005629333 podman[136133]: 2026-02-25 11:57:50.901180271 +0000 UTC m=+0.046383040 container create cee88bffac85e69a4484073bc98744b5d5728096646319a39cd2b5f08df4ef20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 06:57:50 np0005629333 systemd[1]: Started libpod-conmon-cee88bffac85e69a4484073bc98744b5d5728096646319a39cd2b5f08df4ef20.scope.
Feb 25 06:57:50 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:57:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a075203b86d919405caa363aaa46e81185ad71eb32abd029dd7c547bd10867/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:57:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a075203b86d919405caa363aaa46e81185ad71eb32abd029dd7c547bd10867/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:57:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a075203b86d919405caa363aaa46e81185ad71eb32abd029dd7c547bd10867/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:57:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a075203b86d919405caa363aaa46e81185ad71eb32abd029dd7c547bd10867/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:57:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20a075203b86d919405caa363aaa46e81185ad71eb32abd029dd7c547bd10867/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:57:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
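[annotation] The recurring pgmap lines from ceph-mgr summarize placement-group state and utilization in a fixed shape: "pgmap v<N>: <N> pgs: <counts+states>; <data> data, <used> used, <avail> / <total> avail". A small parsing sketch, assuming only the format visible in these entries:

    import re

    # Regex derived from the pgmap line format logged above.
    PGMAP_RE = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
    )

    line = ("pgmap v356: 305 pgs: 305 active+clean; 462 KiB data, "
            "136 MiB used, 60 GiB / 60 GiB avail")
    m = PGMAP_RE.search(line)
    assert m is not None
    print(m.group("ver"), m.group("pgs"), m.group("states"))
    # -> 356 305 305 active+clean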
Feb 25 06:57:50 np0005629333 podman[136133]: 2026-02-25 11:57:50.880997721 +0000 UTC m=+0.026200320 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:57:50 np0005629333 podman[136133]: 2026-02-25 11:57:50.979018654 +0000 UTC m=+0.124221213 container init cee88bffac85e69a4484073bc98744b5d5728096646319a39cd2b5f08df4ef20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_gates, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:57:50 np0005629333 podman[136133]: 2026-02-25 11:57:50.98463126 +0000 UTC m=+0.129833799 container start cee88bffac85e69a4484073bc98744b5d5728096646319a39cd2b5f08df4ef20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_gates, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:57:50 np0005629333 podman[136133]: 2026-02-25 11:57:50.987838629 +0000 UTC m=+0.133041198 container attach cee88bffac85e69a4484073bc98744b5d5728096646319a39cd2b5f08df4ef20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_gates, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 06:57:51 np0005629333 loving_gates[136188]: --> passed data devices: 0 physical, 3 LVM
Feb 25 06:57:51 np0005629333 loving_gates[136188]: --> All data devices are unavailable
Feb 25 06:57:51 np0005629333 systemd[1]: libpod-cee88bffac85e69a4484073bc98744b5d5728096646319a39cd2b5f08df4ef20.scope: Deactivated successfully.
Feb 25 06:57:51 np0005629333 podman[136133]: 2026-02-25 11:57:51.45396774 +0000 UTC m=+0.599170319 container died cee88bffac85e69a4484073bc98744b5d5728096646319a39cd2b5f08df4ef20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_gates, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 06:57:51 np0005629333 systemd[1]: var-lib-containers-storage-overlay-20a075203b86d919405caa363aaa46e81185ad71eb32abd029dd7c547bd10867-merged.mount: Deactivated successfully.
Feb 25 06:57:51 np0005629333 podman[136133]: 2026-02-25 11:57:51.512437084 +0000 UTC m=+0.657639663 container remove cee88bffac85e69a4484073bc98744b5d5728096646319a39cd2b5f08df4ef20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_gates, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:57:51 np0005629333 python3.9[136276]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:57:51 np0005629333 systemd[1]: libpod-conmon-cee88bffac85e69a4484073bc98744b5d5728096646319a39cd2b5f08df4ef20.scope: Deactivated successfully.
Feb 25 06:57:51 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Feb 25 06:57:51 np0005629333 ceph-osd[88012]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Feb 25 06:57:51 np0005629333 podman[136497]: 2026-02-25 11:57:51.994704544 +0000 UTC m=+0.047585813 container create 83657495e5f1d8a93bd19e23a7668c6309e3ad39a72ccc59942e2ece8bbf1ba8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:57:52 np0005629333 systemd[1]: Started libpod-conmon-83657495e5f1d8a93bd19e23a7668c6309e3ad39a72ccc59942e2ece8bbf1ba8.scope.
Feb 25 06:57:52 np0005629333 podman[136497]: 2026-02-25 11:57:51.974983946 +0000 UTC m=+0.027865255 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:57:52 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:57:52 np0005629333 podman[136497]: 2026-02-25 11:57:52.097801649 +0000 UTC m=+0.150682998 container init 83657495e5f1d8a93bd19e23a7668c6309e3ad39a72ccc59942e2ece8bbf1ba8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_newton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 06:57:52 np0005629333 podman[136497]: 2026-02-25 11:57:52.10719202 +0000 UTC m=+0.160073319 container start 83657495e5f1d8a93bd19e23a7668c6309e3ad39a72ccc59942e2ece8bbf1ba8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_newton, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 06:57:52 np0005629333 podman[136497]: 2026-02-25 11:57:52.111221052 +0000 UTC m=+0.164102361 container attach 83657495e5f1d8a93bd19e23a7668c6309e3ad39a72ccc59942e2ece8bbf1ba8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_newton, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 06:57:52 np0005629333 systemd[1]: libpod-83657495e5f1d8a93bd19e23a7668c6309e3ad39a72ccc59942e2ece8bbf1ba8.scope: Deactivated successfully.
Feb 25 06:57:52 np0005629333 stoic_newton[136527]: 167 167
Feb 25 06:57:52 np0005629333 conmon[136527]: conmon 83657495e5f1d8a93bd1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-83657495e5f1d8a93bd19e23a7668c6309e3ad39a72ccc59942e2ece8bbf1ba8.scope/container/memory.events
Feb 25 06:57:52 np0005629333 podman[136497]: 2026-02-25 11:57:52.115187482 +0000 UTC m=+0.168068791 container died 83657495e5f1d8a93bd19e23a7668c6309e3ad39a72ccc59942e2ece8bbf1ba8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_newton, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:57:52 np0005629333 systemd[1]: var-lib-containers-storage-overlay-93b34ab30383ff63c90ea71cd419f3275413738950a7b24b777192face1f3108-merged.mount: Deactivated successfully.
Feb 25 06:57:52 np0005629333 python3.9[136523]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:57:52 np0005629333 podman[136497]: 2026-02-25 11:57:52.1633566 +0000 UTC m=+0.216237869 container remove 83657495e5f1d8a93bd19e23a7668c6309e3ad39a72ccc59942e2ece8bbf1ba8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_newton, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:57:52 np0005629333 systemd[1]: libpod-conmon-83657495e5f1d8a93bd19e23a7668c6309e3ad39a72ccc59942e2ece8bbf1ba8.scope: Deactivated successfully.
Feb 25 06:57:52 np0005629333 podman[136575]: 2026-02-25 11:57:52.308803192 +0000 UTC m=+0.051310147 container create 502e6628dbcfd387013ca38a88d5e91e5f26b82287c16760b04229fb65abc0ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_dijkstra, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 06:57:52 np0005629333 systemd[1]: Started libpod-conmon-502e6628dbcfd387013ca38a88d5e91e5f26b82287c16760b04229fb65abc0ae.scope.
Feb 25 06:57:52 np0005629333 podman[136575]: 2026-02-25 11:57:52.291948163 +0000 UTC m=+0.034455098 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:57:52 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:57:52 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b980a63e5d678f01d94519ff0df03501448a7f20c7d5c00305bd05368ad8aeac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:57:52 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b980a63e5d678f01d94519ff0df03501448a7f20c7d5c00305bd05368ad8aeac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:57:52 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b980a63e5d678f01d94519ff0df03501448a7f20c7d5c00305bd05368ad8aeac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:57:52 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b980a63e5d678f01d94519ff0df03501448a7f20c7d5c00305bd05368ad8aeac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:57:52 np0005629333 podman[136575]: 2026-02-25 11:57:52.4213737 +0000 UTC m=+0.163880695 container init 502e6628dbcfd387013ca38a88d5e91e5f26b82287c16760b04229fb65abc0ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_dijkstra, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 06:57:52 np0005629333 podman[136575]: 2026-02-25 11:57:52.431786519 +0000 UTC m=+0.174293454 container start 502e6628dbcfd387013ca38a88d5e91e5f26b82287c16760b04229fb65abc0ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_dijkstra, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 06:57:52 np0005629333 podman[136575]: 2026-02-25 11:57:52.435352348 +0000 UTC m=+0.177859303 container attach 502e6628dbcfd387013ca38a88d5e91e5f26b82287c16760b04229fb65abc0ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_dijkstra, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]: {
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:    "0": [
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:        {
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "devices": [
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "/dev/loop3"
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            ],
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "lv_name": "ceph_lv0",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "lv_size": "21470642176",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "name": "ceph_lv0",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "tags": {
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.cluster_name": "ceph",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.crush_device_class": "",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.encrypted": "0",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.objectstore": "bluestore",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.osd_id": "0",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.type": "block",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.vdo": "0",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.with_tpm": "0"
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            },
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "type": "block",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "vg_name": "ceph_vg0"
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:        }
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:    ],
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:    "1": [
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:        {
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "devices": [
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "/dev/loop4"
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            ],
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "lv_name": "ceph_lv1",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "lv_size": "21470642176",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "name": "ceph_lv1",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "tags": {
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.cluster_name": "ceph",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.crush_device_class": "",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.encrypted": "0",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.objectstore": "bluestore",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.osd_id": "1",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.type": "block",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.vdo": "0",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.with_tpm": "0"
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            },
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "type": "block",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "vg_name": "ceph_vg1"
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:        }
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:    ],
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:    "2": [
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:        {
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "devices": [
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "/dev/loop5"
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            ],
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "lv_name": "ceph_lv2",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "lv_size": "21470642176",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "name": "ceph_lv2",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "tags": {
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.cluster_name": "ceph",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.crush_device_class": "",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.encrypted": "0",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.objectstore": "bluestore",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.osd_id": "2",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.type": "block",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.vdo": "0",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:                "ceph.with_tpm": "0"
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            },
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "type": "block",
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:            "vg_name": "ceph_vg2"
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:        }
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]:    ]
Feb 25 06:57:52 np0005629333 compassionate_dijkstra[136614]: }
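[annotation] The JSON emitted by compassionate_dijkstra is shaped like `ceph-volume lvm list --format json` output: a map from OSD id to a list of LV records whose tags carry the cluster fsid, OSD fsid, and objectstore type (that this run used exactly that command is an inference from the shape). A short sketch reducing such a dump to an OSD → device summary, assuming the JSON has been captured to a file:

    #!/usr/bin/env python3
    # Summarize ceph-volume lvm list JSON (shape as logged above).
    # 'ceph_volume_list.json' is a hypothetical capture of that output.
    import json

    with open("ceph_volume_list.json") as fh:
        osds = json.load(fh)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(devices={','.join(lv['devices'])}, "
                  f"osd_fsid={tags['ceph.osd_fsid']}, "
                  f"objectstore={tags['ceph.objectstore']})")

Against the records above this prints three lines, mapping osd.0/1/2 to ceph_vg0/1/2 on /dev/loop3/4/5.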
Feb 25 06:57:52 np0005629333 systemd[1]: libpod-502e6628dbcfd387013ca38a88d5e91e5f26b82287c16760b04229fb65abc0ae.scope: Deactivated successfully.
Feb 25 06:57:52 np0005629333 podman[136575]: 2026-02-25 11:57:52.744106997 +0000 UTC m=+0.486613942 container died 502e6628dbcfd387013ca38a88d5e91e5f26b82287c16760b04229fb65abc0ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_dijkstra, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:57:52 np0005629333 systemd[1]: var-lib-containers-storage-overlay-b980a63e5d678f01d94519ff0df03501448a7f20c7d5c00305bd05368ad8aeac-merged.mount: Deactivated successfully.
Feb 25 06:57:52 np0005629333 podman[136575]: 2026-02-25 11:57:52.799727222 +0000 UTC m=+0.542234147 container remove 502e6628dbcfd387013ca38a88d5e91e5f26b82287c16760b04229fb65abc0ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_dijkstra, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 06:57:52 np0005629333 systemd[1]: libpod-conmon-502e6628dbcfd387013ca38a88d5e91e5f26b82287c16760b04229fb65abc0ae.scope: Deactivated successfully.
Feb 25 06:57:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:57:53 np0005629333 python3.9[136725]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:57:53 np0005629333 podman[136825]: 2026-02-25 11:57:53.279329728 +0000 UTC m=+0.051577144 container create cf8f98697e71d59b00d4d090c23c8fb92fdb3b68feb6adc479c24c4a8600aa15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_nightingale, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 06:57:53 np0005629333 systemd[1]: Started libpod-conmon-cf8f98697e71d59b00d4d090c23c8fb92fdb3b68feb6adc479c24c4a8600aa15.scope.
Feb 25 06:57:53 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:57:53 np0005629333 podman[136825]: 2026-02-25 11:57:53.253739057 +0000 UTC m=+0.025986523 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:57:53 np0005629333 podman[136825]: 2026-02-25 11:57:53.359814985 +0000 UTC m=+0.132062441 container init cf8f98697e71d59b00d4d090c23c8fb92fdb3b68feb6adc479c24c4a8600aa15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_nightingale, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:57:53 np0005629333 podman[136825]: 2026-02-25 11:57:53.367783816 +0000 UTC m=+0.140031232 container start cf8f98697e71d59b00d4d090c23c8fb92fdb3b68feb6adc479c24c4a8600aa15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:57:53 np0005629333 podman[136825]: 2026-02-25 11:57:53.371120669 +0000 UTC m=+0.143368085 container attach cf8f98697e71d59b00d4d090c23c8fb92fdb3b68feb6adc479c24c4a8600aa15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 06:57:53 np0005629333 focused_nightingale[136893]: 167 167
Feb 25 06:57:53 np0005629333 systemd[1]: libpod-cf8f98697e71d59b00d4d090c23c8fb92fdb3b68feb6adc479c24c4a8600aa15.scope: Deactivated successfully.
Feb 25 06:57:53 np0005629333 podman[136825]: 2026-02-25 11:57:53.37297152 +0000 UTC m=+0.145218936 container died cf8f98697e71d59b00d4d090c23c8fb92fdb3b68feb6adc479c24c4a8600aa15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_nightingale, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Feb 25 06:57:53 np0005629333 systemd[1]: var-lib-containers-storage-overlay-b36dca85dc49c6987bcee8648cc7e6e82de41f7cb4ca44e3888859d22ed1118f-merged.mount: Deactivated successfully.
Feb 25 06:57:53 np0005629333 podman[136825]: 2026-02-25 11:57:53.415111231 +0000 UTC m=+0.187358637 container remove cf8f98697e71d59b00d4d090c23c8fb92fdb3b68feb6adc479c24c4a8600aa15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_nightingale, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 06:57:53 np0005629333 systemd[1]: libpod-conmon-cf8f98697e71d59b00d4d090c23c8fb92fdb3b68feb6adc479c24c4a8600aa15.scope: Deactivated successfully.
Feb 25 06:57:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
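[annotation] The _set_new_cache_sizes lines report how the monitor's memory autotuner splits its cache target between incremental osdmaps, full osdmaps, and the RocksDB KV cache; in these entries the three allocations sum to within about 0.1% of the reported cache_size, a quick consistency check one can apply. The exact split policy is internal to ceph-mon and not stated in the log; the arithmetic below only uses the numbers printed above:

    # Values copied from the log line above.
    cache_size = 1020054731
    inc_alloc  = 348127232
    full_alloc = 348127232
    kv_alloc   = 322961408

    total = inc_alloc + full_alloc + kv_alloc        # 1019215872
    print(total, cache_size, f"{total / cache_size:.4%}")  # ~99.92% of cache_size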
Feb 25 06:57:53 np0005629333 podman[136918]: 2026-02-25 11:57:53.570967222 +0000 UTC m=+0.049756714 container create 998120320dc2909fe52b0b91b256315c57171ed72255f36d5f9a7a3df1c3e255 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_clarke, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:57:53 np0005629333 systemd[1]: Started libpod-conmon-998120320dc2909fe52b0b91b256315c57171ed72255f36d5f9a7a3df1c3e255.scope.
Feb 25 06:57:53 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:57:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/055dfc3413735fe9f3543022b84683ee97b322d96199009e7b4ef961e773adb2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:57:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/055dfc3413735fe9f3543022b84683ee97b322d96199009e7b4ef961e773adb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:57:53 np0005629333 podman[136918]: 2026-02-25 11:57:53.546149872 +0000 UTC m=+0.024939424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:57:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/055dfc3413735fe9f3543022b84683ee97b322d96199009e7b4ef961e773adb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:57:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/055dfc3413735fe9f3543022b84683ee97b322d96199009e7b4ef961e773adb2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:57:53 np0005629333 podman[136918]: 2026-02-25 11:57:53.664684546 +0000 UTC m=+0.143474048 container init 998120320dc2909fe52b0b91b256315c57171ed72255f36d5f9a7a3df1c3e255 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_clarke, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:57:53 np0005629333 podman[136918]: 2026-02-25 11:57:53.671255488 +0000 UTC m=+0.150044960 container start 998120320dc2909fe52b0b91b256315c57171ed72255f36d5f9a7a3df1c3e255 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_clarke, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Feb 25 06:57:53 np0005629333 podman[136918]: 2026-02-25 11:57:53.677497962 +0000 UTC m=+0.156287424 container attach 998120320dc2909fe52b0b91b256315c57171ed72255f36d5f9a7a3df1c3e255 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:57:53 np0005629333 python3.9[137014]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Feb 25 06:57:54 np0005629333 lvm[137087]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 06:57:54 np0005629333 lvm[137089]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 06:57:54 np0005629333 lvm[137089]: VG ceph_vg1 finished
Feb 25 06:57:54 np0005629333 lvm[137087]: VG ceph_vg0 finished
Feb 25 06:57:54 np0005629333 lvm[137091]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 06:57:54 np0005629333 lvm[137091]: VG ceph_vg2 finished
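[annotation] The lvm[...] messages are event-driven autoactivation: "PV ... online, VG ... is complete" means udev just saw the last PV of that VG appear, and "VG ... finished" closes that VG's activation pass. To inspect the resulting VGs together with the ceph tags seen in the earlier JSON, something like the following works, assuming an lvm2 build with JSON report support (lv_name, vg_name, and lv_tags are standard lvs report fields):

    #!/usr/bin/env python3
    # List ceph_* LVs with their tags via `lvs --reportformat json`.
    import json, subprocess

    out = subprocess.run(
        ["lvs", "--reportformat", "json", "-o", "lv_name,vg_name,lv_tags"],
        capture_output=True, text=True, check=True,
    )
    for lv in json.loads(out.stdout)["report"][0]["lv"]:
        if lv["vg_name"].startswith("ceph_"):
            print(lv["vg_name"], lv["lv_name"], lv["lv_tags"])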
Feb 25 06:57:54 np0005629333 vibrant_clarke[136980]: {}
Feb 25 06:57:54 np0005629333 systemd[1]: libpod-998120320dc2909fe52b0b91b256315c57171ed72255f36d5f9a7a3df1c3e255.scope: Deactivated successfully.
Feb 25 06:57:54 np0005629333 podman[136918]: 2026-02-25 11:57:54.484050252 +0000 UTC m=+0.962839744 container died 998120320dc2909fe52b0b91b256315c57171ed72255f36d5f9a7a3df1c3e255 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 06:57:54 np0005629333 systemd[1]: libpod-998120320dc2909fe52b0b91b256315c57171ed72255f36d5f9a7a3df1c3e255.scope: Consumed 1.095s CPU time.
Feb 25 06:57:54 np0005629333 systemd[1]: var-lib-containers-storage-overlay-055dfc3413735fe9f3543022b84683ee97b322d96199009e7b4ef961e773adb2-merged.mount: Deactivated successfully.
Feb 25 06:57:54 np0005629333 podman[136918]: 2026-02-25 11:57:54.542855286 +0000 UTC m=+1.021644778 container remove 998120320dc2909fe52b0b91b256315c57171ed72255f36d5f9a7a3df1c3e255 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_clarke, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 06:57:54 np0005629333 systemd[1]: libpod-conmon-998120320dc2909fe52b0b91b256315c57171ed72255f36d5f9a7a3df1c3e255.scope: Deactivated successfully.
Feb 25 06:57:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:57:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:57:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:57:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:57:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:57:55 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:57:55 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:57:56 np0005629333 dbus-broker-launch[795]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Feb 25 06:57:56 np0005629333 python3.9[137289]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 25 06:57:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:57:57 np0005629333 python3.9[137374]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 25 06:57:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:57:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:57:59 np0005629333 python3.9[137528]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 25 06:58:00 np0005629333 python3[137684]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Feb 25 06:58:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:01 np0005629333 python3.9[137837]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:58:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:58:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:58:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:58:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:58:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:58:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:58:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:58:03 np0005629333 python3.9[137990]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:58:04 np0005629333 python3.9[138069]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:58:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:05 np0005629333 python3.9[138222]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:58:05 np0005629333 python3.9[138301]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.q0i5ixys recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:58:06 np0005629333 python3.9[138454]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:58:06 np0005629333 python3.9[138533]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:58:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:07 np0005629333 python3.9[138686]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:58:08 np0005629333 python3[138840]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 25 06:58:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:58:08 np0005629333 python3.9[138993]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:58:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:09 np0005629333 python3.9[139119]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020688.3228264-152-207479068030082/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:58:10 np0005629333 python3.9[139272]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:58:10 np0005629333 python3.9[139398]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020689.744913-167-166204192279602/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:58:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:11 np0005629333 python3.9[139551]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:58:12 np0005629333 python3.9[139677]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020691.0381448-182-49247153928277/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:58:12 np0005629333 python3.9[139830]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:58:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:13 np0005629333 python3.9[139956]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020692.3100812-197-249360429062436/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:58:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:58:14 np0005629333 python3.9[140109]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:58:14 np0005629333 python3.9[140235]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020693.4804409-212-85352442369072/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:58:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:15 np0005629333 python3.9[140388]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:58:16 np0005629333 python3.9[140541]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:58:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:16 np0005629333 python3.9[140697]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:58:17 np0005629333 python3.9[140850]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:58:18 np0005629333 python3.9[141004]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:58:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:58:18 np0005629333 python3.9[141159]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:58:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:19 np0005629333 python3.9[141315]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:58:20 np0005629333 python3.9[141465]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:58:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:21 np0005629333 python3.9[141619]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:e0:eb:c4:a5" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:58:21 np0005629333 ovs-vsctl[141620]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:e0:eb:c4:a5 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Feb 25 06:58:22 np0005629333 python3.9[141773]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:58:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:23 np0005629333 python3.9[141929]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:58:23 np0005629333 ovs-vsctl[141930]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:58:23.552737) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020703552794, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1778, "num_deletes": 251, "total_data_size": 2483044, "memory_usage": 2515432, "flush_reason": "Manual Compaction"}
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020703562943, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1462316, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7303, "largest_seqno": 9080, "table_properties": {"data_size": 1456690, "index_size": 2509, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 17032, "raw_average_key_size": 20, "raw_value_size": 1443014, "raw_average_value_size": 1761, "num_data_blocks": 118, "num_entries": 819, "num_filter_entries": 819, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020544, "oldest_key_time": 1772020544, "file_creation_time": 1772020703, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 10269 microseconds, and 5264 cpu microseconds.
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:58:23.563006) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1462316 bytes OK
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:58:23.563029) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:58:23.564489) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:58:23.564506) EVENT_LOG_v1 {"time_micros": 1772020703564500, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:58:23.564527) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2475195, prev total WAL file size 2475195, number of live WAL files 2.
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:58:23.565149) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1428KB)], [20(7426KB)]
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020703565218, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 9067536, "oldest_snapshot_seqno": -1}
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3419 keys, 7041817 bytes, temperature: kUnknown
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020703620580, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 7041817, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7016128, "index_size": 16044, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8581, "raw_key_size": 81480, "raw_average_key_size": 23, "raw_value_size": 6951564, "raw_average_value_size": 2033, "num_data_blocks": 712, "num_entries": 3419, "num_filter_entries": 3419, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772020703, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:58:23.620837) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 7041817 bytes
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:58:23.622380) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.6 rd, 127.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 7.3 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(11.0) write-amplify(4.8) OK, records in: 3860, records dropped: 441 output_compression: NoCompression
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:58:23.622411) EVENT_LOG_v1 {"time_micros": 1772020703622396, "job": 6, "event": "compaction_finished", "compaction_time_micros": 55430, "compaction_time_cpu_micros": 22073, "output_level": 6, "num_output_files": 1, "total_output_size": 7041817, "num_input_records": 3860, "num_output_records": 3419, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020703622777, "job": 6, "event": "table_file_deletion", "file_number": 22}
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020703623862, "job": 6, "event": "table_file_deletion", "file_number": 20}
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:58:23.565045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:58:23.623994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:58:23.624005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:58:23.624010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:58:23.624014) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 06:58:23 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:58:23.624021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 06:58:23 np0005629333 python3.9[142080]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:58:24 np0005629333 python3.9[142235]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:58:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:25 np0005629333 python3.9[142388]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:58:25 np0005629333 python3.9[142467]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:58:26 np0005629333 python3.9[142620]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:58:26 np0005629333 python3.9[142699]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:58:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:27 np0005629333 python3.9[142853]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:58:27 np0005629333 python3.9[143006]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:58:28 np0005629333 python3.9[143085]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:58:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:58:28 np0005629333 python3.9[143238]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:58:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:29 np0005629333 python3.9[143317]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:58:30 np0005629333 python3.9[143470]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 06:58:30 np0005629333 systemd[1]: Reloading.
Feb 25 06:58:30 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:58:30 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:58:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_11:58:30
Feb 25 06:58:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 06:58:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 06:58:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.log', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'images', 'cephfs.cephfs.data', 'backups', 'vms']
Feb 25 06:58:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 06:58:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:31 np0005629333 python3.9[143668]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:58:31 np0005629333 python3.9[143747]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:58:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:58:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:58:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:58:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:58:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:58:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:58:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 06:58:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 06:58:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 06:58:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 06:58:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 06:58:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 06:58:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 06:58:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 06:58:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 06:58:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 06:58:32 np0005629333 python3.9[143900]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:58:32 np0005629333 python3.9[143979]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:58:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:33 np0005629333 python3.9[144132]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 06:58:33 np0005629333 systemd[1]: Reloading.
Feb 25 06:58:33 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:58:33 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:58:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:58:33 np0005629333 systemd[1]: Starting Create netns directory...
Feb 25 06:58:33 np0005629333 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 25 06:58:33 np0005629333 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 25 06:58:33 np0005629333 systemd[1]: Finished Create netns directory.
Feb 25 06:58:34 np0005629333 python3.9[144333]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:58:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:35 np0005629333 python3.9[144486]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:58:35 np0005629333 python3.9[144610]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772020714.5920877-464-88931758018481/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:58:36 np0005629333 python3.9[144763]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:58:36 np0005629333 python3.9[144916]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:58:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:37 np0005629333 python3.9[145069]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:58:38 np0005629333 python3.9[145193]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1772020717.105314-497-109022654266923/.source.json _original_basename=.dms7_qjx follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:58:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:58:38 np0005629333 python3.9[145343]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:58:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:40 np0005629333 python3.9[145767]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Feb 25 06:58:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 06:58:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:58:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 06:58:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:58:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:58:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:58:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:58:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:58:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:58:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:58:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:58:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:58:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6988014389607423e-06 of space, bias 4.0, pg target 0.0032385617267528906 quantized to 16 (current 16)
Feb 25 06:58:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:58:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:58:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:58:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 06:58:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:58:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 06:58:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:58:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:58:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:58:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 06:58:41 np0005629333 python3.9[145920]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 25 06:58:42 np0005629333 python3[146073]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Feb 25 06:58:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:58:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:47 np0005629333 podman[146086]: 2026-02-25 11:58:47.154719658 +0000 UTC m=+4.245616392 image pull ce6781f051bf092c13d84cb587c56ad7edaa58b70fcc0effc1dff15724d5232e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb 25 06:58:47 np0005629333 podman[146207]: 2026-02-25 11:58:47.293451853 +0000 UTC m=+0.045814107 container create 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:58:47 np0005629333 podman[146207]: 2026-02-25 11:58:47.271476327 +0000 UTC m=+0.023838631 image pull ce6781f051bf092c13d84cb587c56ad7edaa58b70fcc0effc1dff15724d5232e quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb 25 06:58:47 np0005629333 python3[146073]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb 25 06:58:48 np0005629333 python3.9[146398]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:58:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:58:48 np0005629333 python3.9[146553]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:58:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:49 np0005629333 python3.9[146630]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:58:49 np0005629333 python3.9[146782]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772020729.2763453-575-94897550032291/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:58:50 np0005629333 python3.9[146859]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 25 06:58:50 np0005629333 systemd[1]: Reloading.
Feb 25 06:58:50 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:58:50 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:58:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:51 np0005629333 python3.9[146977]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
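
The ansible-systemd task above (state=restarted, enabled=True) is roughly equivalent to running the following on the host; daemon_reload is False here only because the reload already ran two tasks earlier, right after the unit file was copied into place.

    $ systemctl enable edpm_ovn_controller.service
    $ systemctl restart edpm_ovn_controller.service
    $ systemctl is-active edpm_ovn_controller.service   # expect "active" once the container is up
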
Feb 25 06:58:51 np0005629333 systemd[1]: Reloading.
Feb 25 06:58:51 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:58:51 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:58:51 np0005629333 systemd[1]: Starting ovn_controller container...
Feb 25 06:58:51 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:58:51 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e691fe98996adc025e3377cb7202dc02d51b0c212a21c3edc7d37b65eecfc41/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Feb 25 06:58:51 np0005629333 systemd[1]: Started /usr/bin/podman healthcheck run 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c.
Feb 25 06:58:51 np0005629333 podman[147025]: 2026-02-25 11:58:51.641439397 +0000 UTC m=+0.128902442 container init 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 06:58:51 np0005629333 ovn_controller[147040]: + sudo -E kolla_set_configs
Feb 25 06:58:51 np0005629333 podman[147025]: 2026-02-25 11:58:51.677761618 +0000 UTC m=+0.165224673 container start 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 06:58:51 np0005629333 systemd[1]: Created slice User Slice of UID 0.
Feb 25 06:58:51 np0005629333 edpm-start-podman-container[147025]: ovn_controller
Feb 25 06:58:51 np0005629333 systemd[1]: Starting User Runtime Directory /run/user/0...
Feb 25 06:58:51 np0005629333 systemd[1]: Finished User Runtime Directory /run/user/0.
Feb 25 06:58:51 np0005629333 systemd[1]: Starting User Manager for UID 0...
Feb 25 06:58:51 np0005629333 edpm-start-podman-container[147024]: Creating additional drop-in dependency for "ovn_controller" (5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c)
Feb 25 06:58:51 np0005629333 podman[147047]: 2026-02-25 11:58:51.774759116 +0000 UTC m=+0.082364735 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 06:58:51 np0005629333 systemd[1]: 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c-702e9080d9f5db2f.service: Main process exited, code=exited, status=1/FAILURE
Feb 25 06:58:51 np0005629333 systemd[1]: 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c-702e9080d9f5db2f.service: Failed with result 'exit-code'.
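
The failing 5eda742f...-702e9080d9f5db2f.service is the transient unit systemd spawns for each podman healthcheck probe; the health_status=starting, health_failing_streak=1 entry just above shows the first probe fired before ovn-controller finished starting, which is harmless as long as later probes pass. A hedged way to re-run the probe and read the current state by hand (the inspect field is .State.Healthcheck on older podman releases):

    $ podman healthcheck run ovn_controller && echo healthy
    $ podman inspect ovn_controller --format '{{ .State.Health.Status }}'
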
Feb 25 06:58:51 np0005629333 systemd[1]: Reloading.
Feb 25 06:58:51 np0005629333 systemd[147067]: Queued start job for default target Main User Target.
Feb 25 06:58:51 np0005629333 systemd[147067]: Created slice User Application Slice.
Feb 25 06:58:51 np0005629333 systemd[147067]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Feb 25 06:58:51 np0005629333 systemd[147067]: Started Daily Cleanup of User's Temporary Directories.
Feb 25 06:58:51 np0005629333 systemd[147067]: Reached target Paths.
Feb 25 06:58:51 np0005629333 systemd[147067]: Reached target Timers.
Feb 25 06:58:51 np0005629333 systemd[147067]: Starting D-Bus User Message Bus Socket...
Feb 25 06:58:51 np0005629333 systemd[147067]: Starting Create User's Volatile Files and Directories...
Feb 25 06:58:51 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:58:51 np0005629333 systemd[147067]: Listening on D-Bus User Message Bus Socket.
Feb 25 06:58:51 np0005629333 systemd[147067]: Finished Create User's Volatile Files and Directories.
Feb 25 06:58:51 np0005629333 systemd[147067]: Reached target Sockets.
Feb 25 06:58:51 np0005629333 systemd[147067]: Reached target Basic System.
Feb 25 06:58:51 np0005629333 systemd[147067]: Reached target Main User Target.
Feb 25 06:58:51 np0005629333 systemd[147067]: Startup finished in 137ms.
Feb 25 06:58:51 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:58:52 np0005629333 systemd[1]: Started User Manager for UID 0.
Feb 25 06:58:52 np0005629333 systemd[1]: Started ovn_controller container.
Feb 25 06:58:52 np0005629333 systemd[1]: Started Session c1 of User root.
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: INFO:__main__:Validating config file
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: INFO:__main__:Writing out command to execute
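
kolla_set_configs reads the bind-mounted /var/lib/kolla/config_files/config.json, copies any listed files into place (COPY_ALWAYS), and writes the service command into /run_command, which the wrapper cats two lines below. Judging from that trace, the file's shape is roughly the following; this is an illustrative sketch, not the captured file, and any config_files or permissions entries are omitted.

    $ cat /var/lib/kolla/config_files/config.json
    {
        "command": "/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt"
    }
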
Feb 25 06:58:52 np0005629333 systemd[1]: session-c1.scope: Deactivated successfully.
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: ++ cat /run_command
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: + ARGS=
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: + sudo kolla_copy_cacerts
Feb 25 06:58:52 np0005629333 systemd[1]: Started Session c2 of User root.
Feb 25 06:58:52 np0005629333 systemd[1]: session-c2.scope: Deactivated successfully.
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: + [[ ! -n '' ]]
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: + . kolla_extend_start
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: + umask 0022
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
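
The -p/-c/-C flags on the exec line are the private key, client certificate, and CA certificate for the TLS session to the southbound database; they resolve to the tls.key, tls.crt, and ca.crt bind mounts from the podman create above. If the ssl connection below failed to come up, a first check would be the certificate itself, for example:

    $ openssl x509 -in /etc/pki/tls/certs/ovndb.crt -noout -subject -enddate
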
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Feb 25 06:58:52 np0005629333 NetworkManager[49836]: <info>  [1772020732.2356] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Feb 25 06:58:52 np0005629333 NetworkManager[49836]: <info>  [1772020732.2366] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 25 06:58:52 np0005629333 NetworkManager[49836]: <warn>  [1772020732.2369] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 25 06:58:52 np0005629333 NetworkManager[49836]: <info>  [1772020732.2382] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Feb 25 06:58:52 np0005629333 NetworkManager[49836]: <info>  [1772020732.2391] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Feb 25 06:58:52 np0005629333 NetworkManager[49836]: <info>  [1772020732.2396] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb 25 06:58:52 np0005629333 kernel: br-int: entered promiscuous mode
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
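
ovn-controller does not take the southbound endpoint on its command line; it reads it, together with the tunnel settings, from external_ids on the local Open_vSwitch record. The keys can be inspected directly: ovn-remote should return the ssl:ovsdbserver-sb.openstack.svc:6642 endpoint seen above, and the genev_sys_6081 interface created below implies ovn-encap-type is geneve.

    $ ovs-vsctl get Open_vSwitch . external_ids:ovn-remote
    $ ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-type
    $ ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-ip
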
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00014|main|INFO|OVS feature set changed, force recompute.
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00021|main|INFO|OVS OpenFlow connection reconnected, force recompute.
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00022|main|INFO|OVS feature set changed, force recompute.
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 25 06:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T11:58:52Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 25 06:58:52 np0005629333 NetworkManager[49836]: <info>  [1772020732.2700] manager: (ovn-e00d43-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Feb 25 06:58:52 np0005629333 systemd-udevd[147205]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 06:58:52 np0005629333 kernel: genev_sys_6081: entered promiscuous mode
Feb 25 06:58:52 np0005629333 systemd-udevd[147206]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 06:58:52 np0005629333 NetworkManager[49836]: <info>  [1772020732.2913] device (genev_sys_6081): carrier: link connected
Feb 25 06:58:52 np0005629333 NetworkManager[49836]: <info>  [1772020732.2918] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Feb 25 06:58:52 np0005629333 python3.9[147314]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 25 06:58:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:58:53 np0005629333 python3.9[147467]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:58:54 np0005629333 python3.9[147591]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772020733.2422297-620-173627705881295/.source.yaml _original_basename=._mcf7zvn follow=False checksum=c454e1f576c13c2e06f9be7d8e29d212812f314e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:58:54 np0005629333 python3.9[147767]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:58:54 np0005629333 ovs-vsctl[147795]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Feb 25 06:58:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:58:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:58:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 06:58:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:58:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 06:58:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:58:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 06:58:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 06:58:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 06:58:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 06:58:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 06:58:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 06:58:55 np0005629333 python3.9[148027]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:58:55 np0005629333 ovs-vsctl[148033]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
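
This ERR is expected rather than a failure: the task reads ovn-cms-options only to record its current value, and the key is simply absent from external_ids. With --if-exists, ovs-vsctl prints nothing instead of erroring, so a quieter variant of the same read would be:

    $ ovs-vsctl --if-exists get Open_vSwitch . external_ids:ovn-cms-options | sed 's/"//g'
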
Feb 25 06:58:55 np0005629333 podman[148063]: 2026-02-25 11:58:55.706048006 +0000 UTC m=+0.036013523 container create 6a46e99c2541036f3a7c4bd584443cef841df543a4bd83aa5cb7aec3b9d7b20f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_villani, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:58:55 np0005629333 systemd[1]: Started libpod-conmon-6a46e99c2541036f3a7c4bd584443cef841df543a4bd83aa5cb7aec3b9d7b20f.scope.
Feb 25 06:58:55 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:58:55 np0005629333 podman[148063]: 2026-02-25 11:58:55.687481769 +0000 UTC m=+0.017447276 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:58:55 np0005629333 podman[148063]: 2026-02-25 11:58:55.786513345 +0000 UTC m=+0.116478952 container init 6a46e99c2541036f3a7c4bd584443cef841df543a4bd83aa5cb7aec3b9d7b20f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 06:58:55 np0005629333 podman[148063]: 2026-02-25 11:58:55.79081426 +0000 UTC m=+0.120779767 container start 6a46e99c2541036f3a7c4bd584443cef841df543a4bd83aa5cb7aec3b9d7b20f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_villani, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:58:55 np0005629333 podman[148063]: 2026-02-25 11:58:55.79359415 +0000 UTC m=+0.123559747 container attach 6a46e99c2541036f3a7c4bd584443cef841df543a4bd83aa5cb7aec3b9d7b20f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_villani, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 06:58:55 np0005629333 quirky_villani[148087]: 167 167
Feb 25 06:58:55 np0005629333 systemd[1]: libpod-6a46e99c2541036f3a7c4bd584443cef841df543a4bd83aa5cb7aec3b9d7b20f.scope: Deactivated successfully.
Feb 25 06:58:55 np0005629333 podman[148063]: 2026-02-25 11:58:55.796787693 +0000 UTC m=+0.126753210 container died 6a46e99c2541036f3a7c4bd584443cef841df543a4bd83aa5cb7aec3b9d7b20f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_villani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 06:58:55 np0005629333 systemd[1]: var-lib-containers-storage-overlay-0f246efc9680ea9e780b321a5a3c86cdd93b3e2c2aef761ebf06f869c670a672-merged.mount: Deactivated successfully.
Feb 25 06:58:55 np0005629333 podman[148063]: 2026-02-25 11:58:55.831792416 +0000 UTC m=+0.161757923 container remove 6a46e99c2541036f3a7c4bd584443cef841df543a4bd83aa5cb7aec3b9d7b20f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 06:58:55 np0005629333 systemd[1]: libpod-conmon-6a46e99c2541036f3a7c4bd584443cef841df543a4bd83aa5cb7aec3b9d7b20f.scope: Deactivated successfully.
Feb 25 06:58:56 np0005629333 podman[148112]: 2026-02-25 11:58:56.000302733 +0000 UTC m=+0.058142364 container create 15210d5f001b034fa77a4678bc857ef3cfeac0bcea1b606a672f1bf01a67222f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_noether, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 06:58:56 np0005629333 systemd[1]: Started libpod-conmon-15210d5f001b034fa77a4678bc857ef3cfeac0bcea1b606a672f1bf01a67222f.scope.
Feb 25 06:58:56 np0005629333 podman[148112]: 2026-02-25 11:58:55.972605381 +0000 UTC m=+0.030445012 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:58:56 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:58:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83fce537f8ee8bfea07cc33c19b2d2cba525cc0cc855bca4e0c27f77f99c6db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:58:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83fce537f8ee8bfea07cc33c19b2d2cba525cc0cc855bca4e0c27f77f99c6db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:58:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83fce537f8ee8bfea07cc33c19b2d2cba525cc0cc855bca4e0c27f77f99c6db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:58:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83fce537f8ee8bfea07cc33c19b2d2cba525cc0cc855bca4e0c27f77f99c6db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:58:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83fce537f8ee8bfea07cc33c19b2d2cba525cc0cc855bca4e0c27f77f99c6db/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 06:58:56 np0005629333 podman[148112]: 2026-02-25 11:58:56.091989797 +0000 UTC m=+0.149829438 container init 15210d5f001b034fa77a4678bc857ef3cfeac0bcea1b606a672f1bf01a67222f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 06:58:56 np0005629333 podman[148112]: 2026-02-25 11:58:56.097696562 +0000 UTC m=+0.155536153 container start 15210d5f001b034fa77a4678bc857ef3cfeac0bcea1b606a672f1bf01a67222f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 06:58:56 np0005629333 podman[148112]: 2026-02-25 11:58:56.100842443 +0000 UTC m=+0.158682054 container attach 15210d5f001b034fa77a4678bc857ef3cfeac0bcea1b606a672f1bf01a67222f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_noether, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 06:58:56 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 06:58:56 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:58:56 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 06:58:56 np0005629333 python3.9[148261]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 06:58:56 np0005629333 ovs-vsctl[148274]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Feb 25 06:58:56 np0005629333 cool_noether[148162]: --> passed data devices: 0 physical, 3 LVM
Feb 25 06:58:56 np0005629333 cool_noether[148162]: --> All data devices are unavailable
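
The two cool_noether lines are cephadm running ceph-volume in a short-lived ceph container: it was handed three LVM data devices (the loop-backed logical volumes enumerated in the JSON further below) and found every one of them already consumed by an existing OSD, so there is nothing new to deploy. The wording matches a ceph-volume lvm batch report; a hypothetical re-run of the same check, with the device paths taken from that JSON, would be:

    $ sudo cephadm ceph-volume -- lvm batch --report \
        /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2
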
Feb 25 06:58:56 np0005629333 systemd[1]: libpod-15210d5f001b034fa77a4678bc857ef3cfeac0bcea1b606a672f1bf01a67222f.scope: Deactivated successfully.
Feb 25 06:58:56 np0005629333 podman[148112]: 2026-02-25 11:58:56.561394713 +0000 UTC m=+0.619234314 container died 15210d5f001b034fa77a4678bc857ef3cfeac0bcea1b606a672f1bf01a67222f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_noether, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 06:58:56 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c83fce537f8ee8bfea07cc33c19b2d2cba525cc0cc855bca4e0c27f77f99c6db-merged.mount: Deactivated successfully.
Feb 25 06:58:56 np0005629333 podman[148112]: 2026-02-25 11:58:56.595715036 +0000 UTC m=+0.653554637 container remove 15210d5f001b034fa77a4678bc857ef3cfeac0bcea1b606a672f1bf01a67222f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_noether, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3)
Feb 25 06:58:56 np0005629333 systemd[1]: libpod-conmon-15210d5f001b034fa77a4678bc857ef3cfeac0bcea1b606a672f1bf01a67222f.scope: Deactivated successfully.
Feb 25 06:58:56 np0005629333 systemd[1]: session-46.scope: Deactivated successfully.
Feb 25 06:58:56 np0005629333 systemd[1]: session-46.scope: Consumed 53.519s CPU time.
Feb 25 06:58:56 np0005629333 systemd-logind[811]: Session 46 logged out. Waiting for processes to exit.
Feb 25 06:58:56 np0005629333 systemd-logind[811]: Removed session 46.
Feb 25 06:58:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:57 np0005629333 podman[148378]: 2026-02-25 11:58:57.123157332 +0000 UTC m=+0.091698845 container create 260404eab19aef158aa227f8ce985abc87da44f116566612d88f08558bb17f00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:58:57 np0005629333 systemd[1]: Started libpod-conmon-260404eab19aef158aa227f8ce985abc87da44f116566612d88f08558bb17f00.scope.
Feb 25 06:58:57 np0005629333 podman[148378]: 2026-02-25 11:58:57.096525761 +0000 UTC m=+0.065067324 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:58:57 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:58:57 np0005629333 podman[148378]: 2026-02-25 11:58:57.210297114 +0000 UTC m=+0.178838827 container init 260404eab19aef158aa227f8ce985abc87da44f116566612d88f08558bb17f00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_almeida, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:58:57 np0005629333 podman[148378]: 2026-02-25 11:58:57.220137069 +0000 UTC m=+0.188678592 container start 260404eab19aef158aa227f8ce985abc87da44f116566612d88f08558bb17f00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:58:57 np0005629333 podman[148378]: 2026-02-25 11:58:57.225238987 +0000 UTC m=+0.193780510 container attach 260404eab19aef158aa227f8ce985abc87da44f116566612d88f08558bb17f00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_almeida, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:58:57 np0005629333 boring_almeida[148394]: 167 167
Feb 25 06:58:57 np0005629333 systemd[1]: libpod-260404eab19aef158aa227f8ce985abc87da44f116566612d88f08558bb17f00.scope: Deactivated successfully.
Feb 25 06:58:57 np0005629333 podman[148378]: 2026-02-25 11:58:57.227458401 +0000 UTC m=+0.195999924 container died 260404eab19aef158aa227f8ce985abc87da44f116566612d88f08558bb17f00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_almeida, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 06:58:57 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c64507946bc0b347908c895026bc68187ca3924b68a12342dfb550d034fca4b2-merged.mount: Deactivated successfully.
Feb 25 06:58:57 np0005629333 podman[148378]: 2026-02-25 11:58:57.273012929 +0000 UTC m=+0.241554442 container remove 260404eab19aef158aa227f8ce985abc87da44f116566612d88f08558bb17f00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_almeida, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 06:58:57 np0005629333 systemd[1]: libpod-conmon-260404eab19aef158aa227f8ce985abc87da44f116566612d88f08558bb17f00.scope: Deactivated successfully.
Feb 25 06:58:57 np0005629333 podman[148418]: 2026-02-25 11:58:57.420268831 +0000 UTC m=+0.052336096 container create 980b7ad73a49faa3d6147413cd585f8ca92ab4c585e298b2c50e708d4b7f3dae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:58:57 np0005629333 systemd[1]: Started libpod-conmon-980b7ad73a49faa3d6147413cd585f8ca92ab4c585e298b2c50e708d4b7f3dae.scope.
Feb 25 06:58:57 np0005629333 podman[148418]: 2026-02-25 11:58:57.393057424 +0000 UTC m=+0.025124789 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:58:57 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:58:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa6f748e0ca910c7e490f08c6118f584e89ada65aa305d05b2f419489e7967cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:58:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa6f748e0ca910c7e490f08c6118f584e89ada65aa305d05b2f419489e7967cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:58:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa6f748e0ca910c7e490f08c6118f584e89ada65aa305d05b2f419489e7967cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:58:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa6f748e0ca910c7e490f08c6118f584e89ada65aa305d05b2f419489e7967cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:58:57 np0005629333 podman[148418]: 2026-02-25 11:58:57.518905936 +0000 UTC m=+0.150973261 container init 980b7ad73a49faa3d6147413cd585f8ca92ab4c585e298b2c50e708d4b7f3dae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chatelet, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Feb 25 06:58:57 np0005629333 podman[148418]: 2026-02-25 11:58:57.524184599 +0000 UTC m=+0.156251874 container start 980b7ad73a49faa3d6147413cd585f8ca92ab4c585e298b2c50e708d4b7f3dae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 06:58:57 np0005629333 podman[148418]: 2026-02-25 11:58:57.528294138 +0000 UTC m=+0.160361473 container attach 980b7ad73a49faa3d6147413cd585f8ca92ab4c585e298b2c50e708d4b7f3dae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chatelet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
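
The JSON that optimistic_chatelet prints below, keyed by OSD id with one devices/lv_path/lv_tags record per logical volume, is the output format of ceph-volume lvm list; cephadm runs it to refresh its view of the OSDs already provisioned on this host. The equivalent manual invocation would be roughly:

    $ sudo cephadm ceph-volume -- lvm list --format json
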
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]: {
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:    "0": [
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:        {
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "devices": [
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "/dev/loop3"
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            ],
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "lv_name": "ceph_lv0",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "lv_size": "21470642176",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "name": "ceph_lv0",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "tags": {
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.cluster_name": "ceph",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.crush_device_class": "",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.encrypted": "0",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.objectstore": "bluestore",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.osd_id": "0",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.type": "block",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.vdo": "0",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.with_tpm": "0"
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            },
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "type": "block",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "vg_name": "ceph_vg0"
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:        }
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:    ],
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:    "1": [
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:        {
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "devices": [
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "/dev/loop4"
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            ],
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "lv_name": "ceph_lv1",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "lv_size": "21470642176",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "name": "ceph_lv1",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "tags": {
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.cluster_name": "ceph",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.crush_device_class": "",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.encrypted": "0",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.objectstore": "bluestore",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.osd_id": "1",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.type": "block",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.vdo": "0",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.with_tpm": "0"
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            },
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "type": "block",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "vg_name": "ceph_vg1"
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:        }
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:    ],
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:    "2": [
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:        {
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "devices": [
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "/dev/loop5"
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            ],
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "lv_name": "ceph_lv2",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "lv_size": "21470642176",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "name": "ceph_lv2",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "tags": {
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.cephx_lockbox_secret": "",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.cluster_name": "ceph",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.crush_device_class": "",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.encrypted": "0",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.objectstore": "bluestore",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.osd_id": "2",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.type": "block",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.vdo": "0",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:                "ceph.with_tpm": "0"
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            },
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "type": "block",
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:            "vg_name": "ceph_vg2"
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:        }
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]:    ]
Feb 25 06:58:57 np0005629333 optimistic_chatelet[148434]: }
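The JSON block printed by optimistic_chatelet above matches the output of `ceph-volume lvm list --format json`: one key per OSD id, each holding the backing LV, its physical devices, and the `ceph.*` LV tags cephadm uses to reassemble OSDs. A minimal sketch of walking that structure (the input path is hypothetical; substitute wherever the JSON was captured):

```python
import json

# Summarize the per-OSD inventory shown in the log above.
# "ceph-volume-lvm-list.json" is a placeholder path, not from the log.
with open("ceph-volume-lvm-list.json") as f:
    osds = json.load(f)

for osd_id, lvs in sorted(osds.items()):
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"on {','.join(lv['devices'])} "
              f"(osd_fsid={tags['ceph.osd_fsid']}, "
              f"objectstore={tags['ceph.objectstore']})")
```

For the inventory above this would report osd.0 on ceph_vg0 (/dev/loop3, per the pvscan lines further down), osd.1 on /dev/loop4, and osd.2 on /dev/loop5, all bluestore.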
Feb 25 06:58:57 np0005629333 systemd[1]: libpod-980b7ad73a49faa3d6147413cd585f8ca92ab4c585e298b2c50e708d4b7f3dae.scope: Deactivated successfully.
Feb 25 06:58:57 np0005629333 podman[148418]: 2026-02-25 11:58:57.849742432 +0000 UTC m=+0.481809707 container died 980b7ad73a49faa3d6147413cd585f8ca92ab4c585e298b2c50e708d4b7f3dae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chatelet, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 06:58:57 np0005629333 systemd[1]: var-lib-containers-storage-overlay-fa6f748e0ca910c7e490f08c6118f584e89ada65aa305d05b2f419489e7967cb-merged.mount: Deactivated successfully.
Feb 25 06:58:57 np0005629333 podman[148418]: 2026-02-25 11:58:57.909884042 +0000 UTC m=+0.541951317 container remove 980b7ad73a49faa3d6147413cd585f8ca92ab4c585e298b2c50e708d4b7f3dae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:58:57 np0005629333 systemd[1]: libpod-conmon-980b7ad73a49faa3d6147413cd585f8ca92ab4c585e298b2c50e708d4b7f3dae.scope: Deactivated successfully.
Feb 25 06:58:58 np0005629333 podman[148518]: 2026-02-25 11:58:58.392551552 +0000 UTC m=+0.046057854 container create 95498d95a2fcf5c686f8b7effbe59c96730fcac188da565d40ddeff18b89724c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 06:58:58 np0005629333 systemd[1]: Started libpod-conmon-95498d95a2fcf5c686f8b7effbe59c96730fcac188da565d40ddeff18b89724c.scope.
Feb 25 06:58:58 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:58:58 np0005629333 podman[148518]: 2026-02-25 11:58:58.372269655 +0000 UTC m=+0.025775947 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:58:58 np0005629333 podman[148518]: 2026-02-25 11:58:58.47470616 +0000 UTC m=+0.128212462 container init 95498d95a2fcf5c686f8b7effbe59c96730fcac188da565d40ddeff18b89724c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 06:58:58 np0005629333 podman[148518]: 2026-02-25 11:58:58.484159294 +0000 UTC m=+0.137665566 container start 95498d95a2fcf5c686f8b7effbe59c96730fcac188da565d40ddeff18b89724c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_engelbart, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 06:58:58 np0005629333 podman[148518]: 2026-02-25 11:58:58.48850995 +0000 UTC m=+0.142016222 container attach 95498d95a2fcf5c686f8b7effbe59c96730fcac188da565d40ddeff18b89724c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_engelbart, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 06:58:58 np0005629333 charming_engelbart[148534]: 167 167
Feb 25 06:58:58 np0005629333 systemd[1]: libpod-95498d95a2fcf5c686f8b7effbe59c96730fcac188da565d40ddeff18b89724c.scope: Deactivated successfully.
Feb 25 06:58:58 np0005629333 podman[148518]: 2026-02-25 11:58:58.48990129 +0000 UTC m=+0.143407602 container died 95498d95a2fcf5c686f8b7effbe59c96730fcac188da565d40ddeff18b89724c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 06:58:58 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c8ab01182fe45f529bbfc46ce47bdaed3ed8157f15064a5d1b26e441073a767c-merged.mount: Deactivated successfully.
Feb 25 06:58:58 np0005629333 podman[148518]: 2026-02-25 11:58:58.530752192 +0000 UTC m=+0.184258464 container remove 95498d95a2fcf5c686f8b7effbe59c96730fcac188da565d40ddeff18b89724c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_engelbart, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 06:58:58 np0005629333 systemd[1]: libpod-conmon-95498d95a2fcf5c686f8b7effbe59c96730fcac188da565d40ddeff18b89724c.scope: Deactivated successfully.
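The create → init → start → attach → died → remove sequence around charming_engelbart (and optimistic_chatelet before it) is cephadm's one-shot container pattern: a short-lived `podman run` whose stdout is the result, here `167 167`, consistent with probing the ceph uid/gid inside the image. A rough sketch of the same pattern, assuming ceph-volume as the probe command; the mount list is illustrative, cephadm passes considerably more:

```python
import json
import subprocess

# One-shot cephadm-style probe: run a command in the ceph image, capture
# stdout, and let --rm clean the container up. The image digest is taken
# from the log above; the command and mounts are assumptions.
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

result = subprocess.run(
    ["podman", "run", "--rm", "--privileged", "--net=host",
     "-v", "/dev:/dev", "-v", "/run/lvm:/run/lvm",
     "--entrypoint", "ceph-volume", IMAGE,
     "lvm", "list", "--format", "json"],
    check=True, capture_output=True, text=True)
print(json.dumps(json.loads(result.stdout), indent=4))
```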
Feb 25 06:58:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:58:58 np0005629333 podman[148558]: 2026-02-25 11:58:58.705440288 +0000 UTC m=+0.055168947 container create 29aaba2731d76896c3572a1ffca2fb7bcdf4fd09ede66225434a4995fd7f578b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_jones, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 06:58:58 np0005629333 systemd[1]: Started libpod-conmon-29aaba2731d76896c3572a1ffca2fb7bcdf4fd09ede66225434a4995fd7f578b.scope.
Feb 25 06:58:58 np0005629333 podman[148558]: 2026-02-25 11:58:58.678442327 +0000 UTC m=+0.028171036 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 06:58:58 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:58:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcc79b14694d7188fb3543b20d21b9a0d0e38a97ba62e98a0cf164bebe7aa5d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 06:58:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcc79b14694d7188fb3543b20d21b9a0d0e38a97ba62e98a0cf164bebe7aa5d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 06:58:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcc79b14694d7188fb3543b20d21b9a0d0e38a97ba62e98a0cf164bebe7aa5d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 06:58:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcc79b14694d7188fb3543b20d21b9a0d0e38a97ba62e98a0cf164bebe7aa5d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 06:58:58 np0005629333 podman[148558]: 2026-02-25 11:58:58.81296602 +0000 UTC m=+0.162694639 container init 29aaba2731d76896c3572a1ffca2fb7bcdf4fd09ede66225434a4995fd7f578b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_jones, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 06:58:58 np0005629333 podman[148558]: 2026-02-25 11:58:58.818434649 +0000 UTC m=+0.168163308 container start 29aaba2731d76896c3572a1ffca2fb7bcdf4fd09ede66225434a4995fd7f578b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_jones, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 06:58:58 np0005629333 podman[148558]: 2026-02-25 11:58:58.821426355 +0000 UTC m=+0.171155004 container attach 29aaba2731d76896c3572a1ffca2fb7bcdf4fd09ede66225434a4995fd7f578b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_jones, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 06:58:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:58:59 np0005629333 lvm[148654]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 06:58:59 np0005629333 lvm[148653]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 06:58:59 np0005629333 lvm[148654]: VG ceph_vg1 finished
Feb 25 06:58:59 np0005629333 lvm[148653]: VG ceph_vg0 finished
Feb 25 06:58:59 np0005629333 lvm[148656]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 06:58:59 np0005629333 lvm[148656]: VG ceph_vg2 finished
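The lvm[…] messages above are udev-driven event activation: as each loop PV comes online, pvscan marks the owning VG complete and autoactivates it. A quick way to double-check the result, sketched with lvm2's JSON reporting (the `vgs` options are real; the field selection is just one choice):

```python
import json
import subprocess

# List the VGs pvscan just declared complete, using lvm2's JSON report
# layout ({"report": [{"vg": [...]}]}; all field values are strings).
report = json.loads(subprocess.run(
    ["vgs", "--reportformat", "json",
     "-o", "vg_name,pv_count,lv_count,vg_size"],
    check=True, capture_output=True, text=True).stdout)

for vg in report["report"][0]["vg"]:
    print(f"{vg['vg_name']}: {vg['pv_count']} PV(s), "
          f"{vg['lv_count']} LV(s), {vg['vg_size']}")
```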
Feb 25 06:58:59 np0005629333 lucid_jones[148575]: {}
Feb 25 06:58:59 np0005629333 systemd[1]: libpod-29aaba2731d76896c3572a1ffca2fb7bcdf4fd09ede66225434a4995fd7f578b.scope: Deactivated successfully.
Feb 25 06:58:59 np0005629333 systemd[1]: libpod-29aaba2731d76896c3572a1ffca2fb7bcdf4fd09ede66225434a4995fd7f578b.scope: Consumed 1.102s CPU time.
Feb 25 06:58:59 np0005629333 podman[148558]: 2026-02-25 11:58:59.605316213 +0000 UTC m=+0.955044842 container died 29aaba2731d76896c3572a1ffca2fb7bcdf4fd09ede66225434a4995fd7f578b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_jones, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 06:58:59 np0005629333 systemd[1]: var-lib-containers-storage-overlay-bfcc79b14694d7188fb3543b20d21b9a0d0e38a97ba62e98a0cf164bebe7aa5d-merged.mount: Deactivated successfully.
Feb 25 06:58:59 np0005629333 podman[148558]: 2026-02-25 11:58:59.711621499 +0000 UTC m=+1.061350158 container remove 29aaba2731d76896c3572a1ffca2fb7bcdf4fd09ede66225434a4995fd7f578b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_jones, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 06:58:59 np0005629333 systemd[1]: libpod-conmon-29aaba2731d76896c3572a1ffca2fb7bcdf4fd09ede66225434a4995fd7f578b.scope: Deactivated successfully.
Feb 25 06:58:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 06:58:59 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:58:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 06:58:59 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:59:00 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:59:00 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 06:59:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:59:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:59:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:59:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:59:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:59:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:59:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:59:02 np0005629333 systemd[1]: Stopping User Manager for UID 0...
Feb 25 06:59:02 np0005629333 systemd[147067]: Activating special unit Exit the Session...
Feb 25 06:59:02 np0005629333 systemd[147067]: Stopped target Main User Target.
Feb 25 06:59:02 np0005629333 systemd[147067]: Stopped target Basic System.
Feb 25 06:59:02 np0005629333 systemd[147067]: Stopped target Paths.
Feb 25 06:59:02 np0005629333 systemd[147067]: Stopped target Sockets.
Feb 25 06:59:02 np0005629333 systemd[147067]: Stopped target Timers.
Feb 25 06:59:02 np0005629333 systemd[147067]: Stopped Daily Cleanup of User's Temporary Directories.
Feb 25 06:59:02 np0005629333 systemd[147067]: Closed D-Bus User Message Bus Socket.
Feb 25 06:59:02 np0005629333 systemd[147067]: Stopped Create User's Volatile Files and Directories.
Feb 25 06:59:02 np0005629333 systemd[147067]: Removed slice User Application Slice.
Feb 25 06:59:02 np0005629333 systemd[147067]: Reached target Shutdown.
Feb 25 06:59:02 np0005629333 systemd[147067]: Finished Exit the Session.
Feb 25 06:59:02 np0005629333 systemd[147067]: Reached target Exit the Session.
Feb 25 06:59:02 np0005629333 systemd[1]: user@0.service: Deactivated successfully.
Feb 25 06:59:02 np0005629333 systemd[1]: Stopped User Manager for UID 0.
Feb 25 06:59:02 np0005629333 systemd[1]: Stopping User Runtime Directory /run/user/0...
Feb 25 06:59:02 np0005629333 systemd-logind[811]: New session 48 of user zuul.
Feb 25 06:59:02 np0005629333 systemd[1]: Started Session 48 of User zuul.
Feb 25 06:59:02 np0005629333 systemd[1]: run-user-0.mount: Deactivated successfully.
Feb 25 06:59:02 np0005629333 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Feb 25 06:59:02 np0005629333 systemd[1]: Stopped User Runtime Directory /run/user/0.
Feb 25 06:59:02 np0005629333 systemd[1]: Removed slice User Slice of UID 0.
Feb 25 06:59:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:59:03 np0005629333 python3.9[148852]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:59:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:59:04 np0005629333 python3.9[149009]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:59:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:59:06 np0005629333 python3.9[149173]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:59:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:59:07 np0005629333 python3.9[149327]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:59:07 np0005629333 python3.9[149480]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:59:07.641620) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020747642071, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 600, "num_deletes": 251, "total_data_size": 709640, "memory_usage": 722024, "flush_reason": "Manual Compaction"}
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020747647943, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 703849, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9081, "largest_seqno": 9680, "table_properties": {"data_size": 700556, "index_size": 1201, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7157, "raw_average_key_size": 18, "raw_value_size": 694061, "raw_average_value_size": 1784, "num_data_blocks": 55, "num_entries": 389, "num_filter_entries": 389, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020703, "oldest_key_time": 1772020703, "file_creation_time": 1772020747, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 5988 microseconds, and 3029 cpu microseconds.
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:59:07.647992) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 703849 bytes OK
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:59:07.648011) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:59:07.649542) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:59:07.649566) EVENT_LOG_v1 {"time_micros": 1772020747649559, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:59:07.649589) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 706365, prev total WAL file size 706365, number of live WAL files 2.
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:59:07.650600) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(687KB)], [23(6876KB)]
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020747650808, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7745666, "oldest_snapshot_seqno": -1}
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3295 keys, 5885627 bytes, temperature: kUnknown
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020747695898, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 5885627, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 5862329, "index_size": 13973, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 79736, "raw_average_key_size": 24, "raw_value_size": 5801461, "raw_average_value_size": 1760, "num_data_blocks": 610, "num_entries": 3295, "num_filter_entries": 3295, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772020747, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:59:07.696191) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 5885627 bytes
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:59:07.697883) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.4 rd, 130.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 6.7 +0.0 blob) out(5.6 +0.0 blob), read-write-amplify(19.4) write-amplify(8.4) OK, records in: 3808, records dropped: 513 output_compression: NoCompression
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:59:07.697912) EVENT_LOG_v1 {"time_micros": 1772020747697898, "job": 8, "event": "compaction_finished", "compaction_time_micros": 45202, "compaction_time_cpu_micros": 18580, "output_level": 6, "num_output_files": 1, "total_output_size": 5885627, "num_input_records": 3808, "num_output_records": 3295, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020747698471, "job": 8, "event": "table_file_deletion", "file_number": 25}
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020747699792, "job": 8, "event": "table_file_deletion", "file_number": 23}
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:59:07.650459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:59:07.699910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:59:07.699916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:59:07.699918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:59:07.699920) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 06:59:07 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-11:59:07.699921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
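The amplification and throughput figures in the [JOB 8] compaction summary above follow directly from the byte counts logged for the same job (L0 input table #25 = 703,849 bytes, total input_data_size = 7,745,666, output table #26 = 5,885,627, compaction_time_micros = 45,202):

```python
# Re-derive the [JOB 8] compaction summary from the logged byte counts.
l0_in = 703_849        # table #25: the L0 flush output fed into the compaction
total_in = 7_745_666   # input_data_size: tables #25 + #23
out = 5_885_627        # table #26: compaction output
usecs = 45_202         # compaction_time_micros

print(f"write-amplify      = {out / l0_in:.1f}")               # 8.4
print(f"read-write-amplify = {(total_in + out) / l0_in:.1f}")  # 19.4
print(f"rd MB/s            = {total_in / usecs:.1f}")          # 171.4
print(f"wr MB/s            = {out / usecs:.1f}")               # 130.2
```

All four reproduce the logged values (write-amplify 8.4, read-write-amplify 19.4, 171.4 rd / 130.2 wr MB/sec).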
Feb 25 06:59:08 np0005629333 python3.9[149633]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:59:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:59:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:59:08 np0005629333 python3.9[149783]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 06:59:10 np0005629333 python3.9[149936]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Feb 25 06:59:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:59:11 np0005629333 python3.9[150087]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:59:12 np0005629333 python3.9[150208]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772020751.1021657-81-62569869827397/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:59:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 06:59:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.0 total, 600.0 interval
Cumulative writes: 2118 writes, 9565 keys, 2118 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
Cumulative WAL: 2118 writes, 2118 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2118 writes, 9565 keys, 2118 commit groups, 1.0 writes per commit group, ingest: 12.23 MB, 0.02 MB/s
Interval WAL: 2118 writes, 2118 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    127.9      0.07              0.03         4    0.018       0      0       0.0       0.0
  L6      1/0    5.61 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.1    155.4    130.4      0.15              0.06         3    0.050     10K   1243       0.0       0.0
 Sum      1/0    5.61 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1    104.5    129.6      0.22              0.09         7    0.032     10K   1243       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1    106.6    132.0      0.22              0.09         6    0.036     10K   1243       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    155.4    130.4      0.15              0.06         3    0.050     10K   1243       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    135.4      0.07              0.03         3    0.023       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.009, interval 0.009
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.03 GB write, 0.05 MB/s write, 0.02 GB read, 0.04 MB/s read, 0.2 seconds
Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.02 GB read, 0.04 MB/s read, 0.2 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561a1af858d0#2 capacity: 308.00 MB usage: 983.20 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(53,868.58 KB,0.275396%) FilterBlock(8,38.05 KB,0.0120634%) IndexBlock(8,76.58 KB,0.0242803%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Feb 25 06:59:12 np0005629333 python3.9[150358]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:59:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:59:13 np0005629333 python3.9[150479]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772020752.5345802-96-184553248744478/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:59:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:59:14 np0005629333 python3.9[150632]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 25 06:59:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:59:15 np0005629333 python3.9[150717]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 25 06:59:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:59:17 np0005629333 python3.9[150871]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 25 06:59:18 np0005629333 python3.9[151024]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:59:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:59:18 np0005629333 python3.9[151145]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772020757.8481574-133-62273468718228/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:59:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:59:19 np0005629333 python3.9[151295]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:59:19 np0005629333 python3.9[151416]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772020758.9801552-133-80535961775106/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:59:20 np0005629333 python3.9[151566]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:59:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:59:21 np0005629333 python3.9[151687]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772020760.544801-177-48312908836792/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:59:21 np0005629333 python3.9[151837]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:59:22 np0005629333 ovn_controller[147040]: 2026-02-25T11:59:22Z|00025|memory|INFO|16000 kB peak resident set size after 30.0 seconds
Feb 25 06:59:22 np0005629333 ovn_controller[147040]: 2026-02-25T11:59:22Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Feb 25 06:59:22 np0005629333 podman[151932]: 2026-02-25 11:59:22.214876439 +0000 UTC m=+0.102467680 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
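The health_status=healthy event above comes from the ovn_controller healthcheck declared in its config_data (test: /openstack/healthcheck, mounted from /var/lib/openstack/healthchecks/ovn_controller). The same check can be driven and read back by hand; a minimal sketch:

```python
import json
import subprocess

# Run the container's configured healthcheck (exit code 0 == healthy),
# then read the recorded health state back out of podman inspect.
subprocess.run(["podman", "healthcheck", "run", "ovn_controller"],
               check=True)

health = json.loads(subprocess.run(
    ["podman", "inspect", "--format", "{{json .State.Health}}",
     "ovn_controller"],
    check=True, capture_output=True, text=True).stdout)
print(health["Status"], "failing streak:", health["FailingStreak"])
```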
Feb 25 06:59:22 np0005629333 python3.9[151965]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772020761.4805028-177-94610728037713/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:59:22 np0005629333 python3.9[152134]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:59:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:59:23 np0005629333 python3.9[152289]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:59:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:59:24 np0005629333 python3.9[152442]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:59:24 np0005629333 python3.9[152521]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:59:24 np0005629333 python3.9[152674]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:59:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:59:25 np0005629333 python3.9[152753]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:59:26 np0005629333 python3.9[152906]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:59:26 np0005629333 python3.9[153059]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:59:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:59:27 np0005629333 python3.9[153138]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:59:27 np0005629333 python3.9[153291]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:59:28 np0005629333 python3.9[153370]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:59:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:59:28 np0005629333 python3.9[153523]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 06:59:28 np0005629333 systemd[1]: Reloading.
Feb 25 06:59:28 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:59:28 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:59:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:59:29 np0005629333 python3.9[153720]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:59:30 np0005629333 python3.9[153799]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:59:30 np0005629333 python3.9[153952]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:59:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_11:59:30
Feb 25 06:59:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 06:59:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 06:59:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', '.rgw.root', 'images', 'volumes', 'cephfs.cephfs.meta', 'vms']
Feb 25 06:59:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 06:59:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:59:31 np0005629333 python3.9[154031]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:59:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:59:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:59:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:59:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:59:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 06:59:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 06:59:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 06:59:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 06:59:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 06:59:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 06:59:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 06:59:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 06:59:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 06:59:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 06:59:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 06:59:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 06:59:32 np0005629333 python3.9[154184]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 06:59:32 np0005629333 systemd[1]: Reloading.
Feb 25 06:59:32 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:59:32 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:59:32 np0005629333 systemd[1]: Starting Create netns directory...
Feb 25 06:59:32 np0005629333 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 25 06:59:32 np0005629333 systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 25 06:59:32 np0005629333 systemd[1]: Finished Create netns directory.
Feb 25 06:59:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:59:33 np0005629333 python3.9[154385]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:59:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:59:33 np0005629333 python3.9[154538]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:59:34 np0005629333 python3.9[154662]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772020773.1732337-328-103213218460243/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:59:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:59:35 np0005629333 python3.9[154815]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:59:35 np0005629333 python3.9[154968]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 25 06:59:36 np0005629333 python3.9[155121]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:59:36 np0005629333 python3.9[155245]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1772020775.9611342-361-160439003999655/.source.json _original_basename=.9zb8wgbw follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:59:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:59:37 np0005629333 python3.9[155395]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:59:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:59:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:59:39 np0005629333 python3.9[155819]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Feb 25 06:59:40 np0005629333 python3.9[155972]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 25 06:59:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:59:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 06:59:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:59:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 06:59:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:59:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:59:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:59:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:59:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:59:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:59:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:59:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:59:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:59:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6988014389607423e-06 of space, bias 4.0, pg target 0.0032385617267528906 quantized to 16 (current 16)
Feb 25 06:59:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:59:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:59:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:59:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 06:59:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:59:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 06:59:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:59:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 06:59:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 06:59:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 06:59:41 np0005629333 python3[156125]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Feb 25 06:59:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:59:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:59:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:59:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:59:48 np0005629333 podman[156139]: 2026-02-25 11:59:48.416169147 +0000 UTC m=+6.598421256 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 06:59:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:59:48 np0005629333 podman[156262]: 2026-02-25 11:59:48.57964347 +0000 UTC m=+0.060808979 container create 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:59:48 np0005629333 podman[156262]: 2026-02-25 11:59:48.543573921 +0000 UTC m=+0.024739440 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 06:59:48 np0005629333 python3[156125]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 06:59:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:59:49 np0005629333 python3.9[156453]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:59:49 np0005629333 python3.9[156608]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:59:50 np0005629333 python3.9[156685]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 06:59:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:59:51 np0005629333 python3.9[156837]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772020790.6165414-439-60938047951586/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:59:51 np0005629333 python3.9[156914]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 25 06:59:51 np0005629333 systemd[1]: Reloading.
Feb 25 06:59:51 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:59:51 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:59:52 np0005629333 podman[157004]: 2026-02-25 11:59:52.478780027 +0000 UTC m=+0.127152278 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:59:52 np0005629333 python3.9[157050]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 06:59:52 np0005629333 systemd[1]: Reloading.
Feb 25 06:59:52 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:59:52 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:59:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:59:53 np0005629333 systemd[1]: Starting ovn_metadata_agent container...
Feb 25 06:59:53 np0005629333 systemd[1]: Started libcrun container.
Feb 25 06:59:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38b0dc219fae1199e133c1e81854a784920a2e8b9724e33008c5052e1d8c7e62/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Feb 25 06:59:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38b0dc219fae1199e133c1e81854a784920a2e8b9724e33008c5052e1d8c7e62/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 06:59:53 np0005629333 systemd[1]: Started /usr/bin/podman healthcheck run 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204.
Feb 25 06:59:53 np0005629333 podman[157108]: 2026-02-25 11:59:53.20798706 +0000 UTC m=+0.118907138 container init 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 06:59:53 np0005629333 ovn_metadata_agent[157124]: + sudo -E kolla_set_configs
Feb 25 06:59:53 np0005629333 podman[157108]: 2026-02-25 11:59:53.23446157 +0000 UTC m=+0.145381648 container start 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 06:59:53 np0005629333 edpm-start-podman-container[157108]: ovn_metadata_agent
Feb 25 06:59:53 np0005629333 ovn_metadata_agent[157124]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 25 06:59:53 np0005629333 ovn_metadata_agent[157124]: INFO:__main__:Validating config file
Feb 25 06:59:53 np0005629333 ovn_metadata_agent[157124]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 25 06:59:53 np0005629333 ovn_metadata_agent[157124]: INFO:__main__:Copying service configuration files
Feb 25 06:59:53 np0005629333 ovn_metadata_agent[157124]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Feb 25 06:59:53 np0005629333 ovn_metadata_agent[157124]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Feb 25 06:59:53 np0005629333 ovn_metadata_agent[157124]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Feb 25 06:59:53 np0005629333 ovn_metadata_agent[157124]: INFO:__main__:Writing out command to execute
Feb 25 06:59:53 np0005629333 ovn_metadata_agent[157124]: INFO:__main__:Setting permission for /var/lib/neutron
Feb 25 06:59:53 np0005629333 ovn_metadata_agent[157124]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Feb 25 06:59:53 np0005629333 ovn_metadata_agent[157124]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Feb 25 06:59:53 np0005629333 ovn_metadata_agent[157124]: INFO:__main__:Setting permission for /var/lib/neutron/external
Feb 25 06:59:53 np0005629333 ovn_metadata_agent[157124]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Feb 25 06:59:53 np0005629333 ovn_metadata_agent[157124]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Feb 25 06:59:53 np0005629333 ovn_metadata_agent[157124]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Feb 25 06:59:53 np0005629333 edpm-start-podman-container[157107]: Creating additional drop-in dependency for "ovn_metadata_agent" (2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204)
Feb 25 06:59:53 np0005629333 ovn_metadata_agent[157124]: ++ cat /run_command
Feb 25 06:59:53 np0005629333 ovn_metadata_agent[157124]: + CMD=neutron-ovn-metadata-agent
Feb 25 06:59:53 np0005629333 ovn_metadata_agent[157124]: + ARGS=
Feb 25 06:59:53 np0005629333 ovn_metadata_agent[157124]: + sudo kolla_copy_cacerts
Feb 25 06:59:53 np0005629333 systemd[1]: Reloading.
Feb 25 06:59:53 np0005629333 ovn_metadata_agent[157124]: + [[ ! -n '' ]]
Feb 25 06:59:53 np0005629333 ovn_metadata_agent[157124]: + . kolla_extend_start
Feb 25 06:59:53 np0005629333 ovn_metadata_agent[157124]: Running command: 'neutron-ovn-metadata-agent'
Feb 25 06:59:53 np0005629333 ovn_metadata_agent[157124]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Feb 25 06:59:53 np0005629333 ovn_metadata_agent[157124]: + umask 0022
Feb 25 06:59:53 np0005629333 ovn_metadata_agent[157124]: + exec neutron-ovn-metadata-agent
Feb 25 06:59:53 np0005629333 podman[157131]: 2026-02-25 11:59:53.338366211 +0000 UTC m=+0.092101499 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent)
Feb 25 06:59:53 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 06:59:53 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 06:59:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:59:53 np0005629333 systemd[1]: Started ovn_metadata_agent container.
Feb 25 06:59:54 np0005629333 python3.9[157368]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.945 157129 INFO neutron.common.config [-] Logging enabled!#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.946 157129 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.946 157129 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.946 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.946 157129 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.946 157129 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.947 157129 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.947 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.947 157129 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.947 157129 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.947 157129 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.947 157129 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.947 157129 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.947 157129 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.947 157129 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.948 157129 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.948 157129 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.948 157129 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.948 157129 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.948 157129 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.948 157129 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.948 157129 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.948 157129 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.948 157129 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.948 157129 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.949 157129 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.949 157129 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.949 157129 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.949 157129 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.949 157129 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.949 157129 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.949 157129 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.949 157129 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.949 157129 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.950 157129 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.950 157129 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.950 157129 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.950 157129 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.950 157129 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.950 157129 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.950 157129 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.950 157129 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.950 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.951 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.951 157129 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.951 157129 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.951 157129 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.951 157129 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.951 157129 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.951 157129 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.951 157129 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.951 157129 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.951 157129 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.952 157129 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.952 157129 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.952 157129 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.952 157129 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.952 157129 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.952 157129 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.952 157129 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.952 157129 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.952 157129 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.953 157129 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.953 157129 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.953 157129 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.953 157129 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.953 157129 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.953 157129 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.953 157129 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.954 157129 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.954 157129 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.954 157129 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.954 157129 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.954 157129 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.954 157129 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.954 157129 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.954 157129 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.954 157129 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.955 157129 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.955 157129 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.955 157129 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.955 157129 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.955 157129 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.955 157129 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.955 157129 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.955 157129 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.956 157129 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.956 157129 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.956 157129 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.956 157129 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.956 157129 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.956 157129 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.956 157129 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.956 157129 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.957 157129 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.957 157129 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.957 157129 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.957 157129 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.957 157129 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.957 157129 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.957 157129 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.957 157129 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.957 157129 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.957 157129 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.957 157129 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.958 157129 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.958 157129 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.958 157129 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.958 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.958 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.958 157129 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.958 157129 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.958 157129 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.958 157129 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.959 157129 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.959 157129 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.959 157129 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.959 157129 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.959 157129 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.959 157129 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.959 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.959 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.959 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.960 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.960 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.960 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.960 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.960 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.960 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.960 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.960 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.960 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.961 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.961 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.961 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.961 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.961 157129 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.961 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.961 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.961 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.961 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.961 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.962 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.962 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.962 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.962 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.962 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.962 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.962 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.962 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.962 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.962 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.963 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.963 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.963 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.963 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.963 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.963 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.963 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.963 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.964 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.964 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.964 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.964 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.964 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.964 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.964 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.964 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.964 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.964 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.965 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.965 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.965 157129 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.965 157129 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.965 157129 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.965 157129 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.965 157129 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.965 157129 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.965 157129 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.965 157129 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.966 157129 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.966 157129 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.966 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.966 157129 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.966 157129 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.966 157129 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.966 157129 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.966 157129 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.967 157129 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.967 157129 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.967 157129 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.967 157129 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.967 157129 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.967 157129 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.967 157129 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.967 157129 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.967 157129 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.968 157129 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.968 157129 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.968 157129 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.968 157129 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.968 157129 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.968 157129 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.968 157129 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.968 157129 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.968 157129 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.968 157129 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.969 157129 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.969 157129 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.969 157129 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.969 157129 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.969 157129 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.969 157129 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.969 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.969 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.969 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.969 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.970 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.970 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.970 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.970 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.970 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.970 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.970 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.970 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.970 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.970 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.971 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.971 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.971 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.971 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.971 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.971 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.971 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.971 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.971 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.971 157129 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.972 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.972 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.972 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.972 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.972 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.972 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.972 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.972 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.972 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.972 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.973 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.973 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.973 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.973 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.973 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.973 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.973 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.973 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.973 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.973 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.974 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.974 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.974 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.974 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.974 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.974 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.974 157129 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.974 157129 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.974 157129 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.974 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.975 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.975 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.975 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.975 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.975 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.975 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.975 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.975 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.975 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.976 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.976 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.976 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.976 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.976 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.976 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.976 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.976 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.976 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.976 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.977 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.977 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.977 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.977 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.977 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.977 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.977 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.977 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.977 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.977 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.978 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.978 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.978 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.978 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.978 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.978 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.978 157129 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.978 157129 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.987 157129 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.987 157129 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.987 157129 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.987 157129 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Feb 25 06:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:54.987 157129 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Feb 25 06:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:55.000 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name a594384c-d614-4492-9e0a-4d6ec095920c (UUID: a594384c-d614-4492-9e0a-4d6ec095920c) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Feb 25 06:59:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:55.028 157129 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Feb 25 06:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:55.029 157129 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Feb 25 06:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:55.029 157129 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 25 06:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:55.029 157129 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 25 06:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:55.033 157129 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Feb 25 06:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:55.038 157129 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Feb 25 06:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:55.045 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'a594384c-d614-4492-9e0a-4d6ec095920c'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], external_ids={}, name=a594384c-d614-4492-9e0a-4d6ec095920c, nb_cfg_timestamp=1772020740271, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 06:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:55.046 157129 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fe5544eea90>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Feb 25 06:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:55.047 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 06:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:55.048 157129 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 06:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:55.048 157129 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 06:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:55.048 157129 INFO oslo_service.service [-] Starting 1 workers
Feb 25 06:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:55.051 157129 DEBUG oslo_service.service [-] Started child 157394 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Feb 25 06:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:55.053 157129 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmptbd_i9a1/privsep.sock']
Feb 25 06:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:55.055 157394 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-242466'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Feb 25 06:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:55.082 157394 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Feb 25 06:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:55.082 157394 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Feb 25 06:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:55.082 157394 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 25 06:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:55.086 157394 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Feb 25 06:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:55.095 157394 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Feb 25 06:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:55.102 157394 INFO eventlet.wsgi.server [-] (157394) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Feb 25 06:59:55 np0005629333 python3.9[157526]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 06:59:55 np0005629333 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Feb 25 06:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:55.709 157129 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 25 06:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:55.710 157129 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmptbd_i9a1/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb 25 06:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:55.595 157528 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 25 06:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:55.598 157528 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 25 06:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:55.600 157528 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Feb 25 06:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:55.601 157528 INFO oslo.privsep.daemon [-] privsep daemon running as pid 157528
Feb 25 06:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:55.714 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[7a5bf674-79a4-4633-a227-503e5d4b1fd5]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 06:59:56 np0005629333 python3.9[157657]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772020795.087447-484-18000330950491/.source.yaml _original_basename=.0jxlz49j follow=False checksum=2f9e5ef996aaf9c6fcf64107d568f0dadb1ef26d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.203 157528 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.205 157528 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.205 157528 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 06:59:56 np0005629333 systemd[1]: session-48.scope: Deactivated successfully.
Feb 25 06:59:56 np0005629333 systemd[1]: session-48.scope: Consumed 51.134s CPU time.
Feb 25 06:59:56 np0005629333 systemd-logind[811]: Session 48 logged out. Waiting for processes to exit.
Feb 25 06:59:56 np0005629333 systemd-logind[811]: Removed session 48.
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.693 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[c26e9a18-149a-46a7-bc85-8cfa8abe71da]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.695 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, column=external_ids, values=({'neutron:ovn-metadata-id': 'd0e86935-dcce-5b24-8555-07314fa09201'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.702 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.706 157129 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.707 157129 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.707 157129 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.707 157129 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.707 157129 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.707 157129 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.707 157129 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.707 157129 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.708 157129 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.708 157129 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.708 157129 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.708 157129 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.708 157129 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.708 157129 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.708 157129 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.709 157129 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.709 157129 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.709 157129 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.709 157129 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.709 157129 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.709 157129 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.709 157129 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.709 157129 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.709 157129 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.710 157129 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.710 157129 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.710 157129 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.710 157129 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.710 157129 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.710 157129 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.710 157129 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.710 157129 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.710 157129 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.711 157129 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.711 157129 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.711 157129 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.711 157129 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.711 157129 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.711 157129 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.711 157129 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.712 157129 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.712 157129 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.712 157129 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.712 157129 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.712 157129 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.712 157129 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.712 157129 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.712 157129 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.713 157129 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.713 157129 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.713 157129 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.713 157129 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.713 157129 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.713 157129 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.713 157129 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.713 157129 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.713 157129 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.714 157129 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.714 157129 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.714 157129 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.714 157129 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.714 157129 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.714 157129 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.714 157129 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.714 157129 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.714 157129 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.715 157129 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.715 157129 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.715 157129 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.715 157129 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.715 157129 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.715 157129 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.715 157129 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.715 157129 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.716 157129 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.716 157129 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.716 157129 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.716 157129 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.716 157129 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.716 157129 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.716 157129 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.716 157129 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.717 157129 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.717 157129 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.717 157129 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.717 157129 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.717 157129 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.717 157129 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.717 157129 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.718 157129 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.718 157129 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.718 157129 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.718 157129 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.718 157129 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.718 157129 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.718 157129 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.718 157129 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.718 157129 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.719 157129 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.719 157129 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.719 157129 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.719 157129 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.719 157129 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.719 157129 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.719 157129 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.719 157129 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.720 157129 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.720 157129 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.720 157129 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.720 157129 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.720 157129 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.720 157129 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.720 157129 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.720 157129 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.721 157129 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.721 157129 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.721 157129 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.721 157129 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.721 157129 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.721 157129 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.722 157129 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.722 157129 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.722 157129 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.722 157129 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.722 157129 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.722 157129 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.722 157129 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.723 157129 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.723 157129 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.723 157129 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.723 157129 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.723 157129 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.723 157129 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.723 157129 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.723 157129 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.723 157129 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.724 157129 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.724 157129 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.724 157129 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.724 157129 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.724 157129 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.724 157129 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.724 157129 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.724 157129 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.724 157129 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.724 157129 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.725 157129 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.725 157129 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.725 157129 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.725 157129 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.725 157129 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.725 157129 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.725 157129 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.725 157129 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.725 157129 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.725 157129 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.726 157129 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.726 157129 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.726 157129 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.726 157129 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.726 157129 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.726 157129 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.726 157129 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.726 157129 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.726 157129 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.726 157129 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.726 157129 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.727 157129 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.727 157129 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.727 157129 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.727 157129 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.727 157129 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.727 157129 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.727 157129 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.727 157129 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.727 157129 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.728 157129 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.728 157129 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.728 157129 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.728 157129 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.728 157129 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.728 157129 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.728 157129 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.728 157129 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.728 157129 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.728 157129 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.729 157129 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.729 157129 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.729 157129 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.729 157129 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.729 157129 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.729 157129 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.729 157129 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.729 157129 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.729 157129 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.729 157129 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.730 157129 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.730 157129 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.730 157129 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.730 157129 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.730 157129 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.730 157129 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.730 157129 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.730 157129 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.730 157129 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.730 157129 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.731 157129 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.731 157129 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.731 157129 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.731 157129 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.731 157129 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.731 157129 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.731 157129 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.731 157129 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.731 157129 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.732 157129 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.732 157129 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.732 157129 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.732 157129 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.732 157129 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.732 157129 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.732 157129 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.732 157129 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.732 157129 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.733 157129 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.733 157129 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.733 157129 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.733 157129 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.733 157129 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.733 157129 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.733 157129 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.733 157129 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.734 157129 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.734 157129 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.734 157129 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.734 157129 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.734 157129 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.734 157129 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.734 157129 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.734 157129 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.734 157129 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.735 157129 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.735 157129 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.735 157129 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.735 157129 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.735 157129 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.735 157129 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.735 157129 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.735 157129 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.735 157129 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.735 157129 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.736 157129 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.736 157129 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.736 157129 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.736 157129 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.736 157129 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.736 157129 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.736 157129 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.736 157129 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.736 157129 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.736 157129 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.737 157129 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.737 157129 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.737 157129 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.737 157129 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.737 157129 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.737 157129 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.737 157129 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.737 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.737 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.738 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.738 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.738 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.738 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.738 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.738 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.738 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.738 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.739 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.739 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.739 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.739 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.739 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.739 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.739 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.740 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.740 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.740 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.740 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.740 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.740 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.740 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.740 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.740 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.741 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.741 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.741 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.741 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.741 157129 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.741 157129 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.741 157129 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.741 157129 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.741 157129 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 06:59:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 11:59:56.742 157129 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Feb 25 06:59:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 06:59:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 06:59:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:00:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:00:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:00:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:00:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:00:00 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:00:00 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:00:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:00:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:00:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:00:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:00:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:00:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:00:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:00:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:00:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:00:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:00:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:00:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:00:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:00:01 np0005629333 podman[157895]: 2026-02-25 12:00:01.156147393 +0000 UTC m=+0.046261576 container create d18ef92ff3d9fbb6acba0a2213fdf2f5142799b0629f2dac5748a9e745e31c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:00:01 np0005629333 systemd[1]: Started libpod-conmon-d18ef92ff3d9fbb6acba0a2213fdf2f5142799b0629f2dac5748a9e745e31c5c.scope.
Feb 25 07:00:01 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:00:01 np0005629333 podman[157895]: 2026-02-25 12:00:01.132744782 +0000 UTC m=+0.022859065 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:00:01 np0005629333 podman[157895]: 2026-02-25 12:00:01.230413502 +0000 UTC m=+0.120527725 container init d18ef92ff3d9fbb6acba0a2213fdf2f5142799b0629f2dac5748a9e745e31c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_franklin, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 07:00:01 np0005629333 podman[157895]: 2026-02-25 12:00:01.236576021 +0000 UTC m=+0.126690224 container start d18ef92ff3d9fbb6acba0a2213fdf2f5142799b0629f2dac5748a9e745e31c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:00:01 np0005629333 podman[157895]: 2026-02-25 12:00:01.241639879 +0000 UTC m=+0.131754082 container attach d18ef92ff3d9fbb6acba0a2213fdf2f5142799b0629f2dac5748a9e745e31c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_franklin, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:00:01 np0005629333 suspicious_franklin[157912]: 167 167
Feb 25 07:00:01 np0005629333 systemd[1]: libpod-d18ef92ff3d9fbb6acba0a2213fdf2f5142799b0629f2dac5748a9e745e31c5c.scope: Deactivated successfully.
Feb 25 07:00:01 np0005629333 podman[157895]: 2026-02-25 12:00:01.244878963 +0000 UTC m=+0.134993166 container died d18ef92ff3d9fbb6acba0a2213fdf2f5142799b0629f2dac5748a9e745e31c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 07:00:01 np0005629333 systemd[1]: var-lib-containers-storage-overlay-127b4f24bbd1d1c22e564a49b6b4b42e22bea3205430dc2244f5f616c72a6385-merged.mount: Deactivated successfully.
Feb 25 07:00:01 np0005629333 podman[157895]: 2026-02-25 12:00:01.306536826 +0000 UTC m=+0.196651049 container remove d18ef92ff3d9fbb6acba0a2213fdf2f5142799b0629f2dac5748a9e745e31c5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:00:01 np0005629333 systemd[1]: libpod-conmon-d18ef92ff3d9fbb6acba0a2213fdf2f5142799b0629f2dac5748a9e745e31c5c.scope: Deactivated successfully.
Feb 25 07:00:01 np0005629333 podman[157936]: 2026-02-25 12:00:01.494529042 +0000 UTC m=+0.089056570 container create f2a052aa4c788628c4ba43acb2505e9ad0fefb671e0e3d7841642c4ad1d2dad7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 07:00:01 np0005629333 podman[157936]: 2026-02-25 12:00:01.43530154 +0000 UTC m=+0.029829048 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:00:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:00:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:00:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:00:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:00:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:00:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:00:01 np0005629333 systemd[1]: Started libpod-conmon-f2a052aa4c788628c4ba43acb2505e9ad0fefb671e0e3d7841642c4ad1d2dad7.scope.
Feb 25 07:00:01 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:00:01 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:00:01 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:00:01 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:00:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a514d6017f71677b5724e5422ee807f5e0acefca9b5584fad467f555bc6c1bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:00:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a514d6017f71677b5724e5422ee807f5e0acefca9b5584fad467f555bc6c1bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:00:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a514d6017f71677b5724e5422ee807f5e0acefca9b5584fad467f555bc6c1bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:00:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a514d6017f71677b5724e5422ee807f5e0acefca9b5584fad467f555bc6c1bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:00:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a514d6017f71677b5724e5422ee807f5e0acefca9b5584fad467f555bc6c1bc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:00:01 np0005629333 podman[157936]: 2026-02-25 12:00:01.667966415 +0000 UTC m=+0.262493993 container init f2a052aa4c788628c4ba43acb2505e9ad0fefb671e0e3d7841642c4ad1d2dad7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bhaskara, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:00:01 np0005629333 podman[157936]: 2026-02-25 12:00:01.676236436 +0000 UTC m=+0.270763964 container start f2a052aa4c788628c4ba43acb2505e9ad0fefb671e0e3d7841642c4ad1d2dad7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:00:01 np0005629333 podman[157936]: 2026-02-25 12:00:01.679781379 +0000 UTC m=+0.274308927 container attach f2a052aa4c788628c4ba43acb2505e9ad0fefb671e0e3d7841642c4ad1d2dad7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bhaskara, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:00:02 np0005629333 ecstatic_bhaskara[157952]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:00:02 np0005629333 ecstatic_bhaskara[157952]: --> All data devices are unavailable
Feb 25 07:00:02 np0005629333 systemd[1]: libpod-f2a052aa4c788628c4ba43acb2505e9ad0fefb671e0e3d7841642c4ad1d2dad7.scope: Deactivated successfully.
Feb 25 07:00:02 np0005629333 podman[157936]: 2026-02-25 12:00:02.163997378 +0000 UTC m=+0.758524906 container died f2a052aa4c788628c4ba43acb2505e9ad0fefb671e0e3d7841642c4ad1d2dad7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:00:02 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1a514d6017f71677b5724e5422ee807f5e0acefca9b5584fad467f555bc6c1bc-merged.mount: Deactivated successfully.
Feb 25 07:00:02 np0005629333 podman[157936]: 2026-02-25 12:00:02.216623009 +0000 UTC m=+0.811150537 container remove f2a052aa4c788628c4ba43acb2505e9ad0fefb671e0e3d7841642c4ad1d2dad7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bhaskara, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:00:02 np0005629333 systemd[1]: libpod-conmon-f2a052aa4c788628c4ba43acb2505e9ad0fefb671e0e3d7841642c4ad1d2dad7.scope: Deactivated successfully.
Feb 25 07:00:02 np0005629333 podman[158049]: 2026-02-25 12:00:02.737635348 +0000 UTC m=+0.062456447 container create 508d47e2ca9d09f392bd7aa8630f5033349fee2e7ae7b116d3b2a4109c76ebbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_golick, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:00:02 np0005629333 systemd[1]: Started libpod-conmon-508d47e2ca9d09f392bd7aa8630f5033349fee2e7ae7b116d3b2a4109c76ebbc.scope.
Feb 25 07:00:02 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:00:02 np0005629333 podman[158049]: 2026-02-25 12:00:02.714225848 +0000 UTC m=+0.039046997 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:00:02 np0005629333 podman[158049]: 2026-02-25 12:00:02.812344841 +0000 UTC m=+0.137165980 container init 508d47e2ca9d09f392bd7aa8630f5033349fee2e7ae7b116d3b2a4109c76ebbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:00:02 np0005629333 podman[158049]: 2026-02-25 12:00:02.821748794 +0000 UTC m=+0.146569883 container start 508d47e2ca9d09f392bd7aa8630f5033349fee2e7ae7b116d3b2a4109c76ebbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_golick, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 07:00:02 np0005629333 podman[158049]: 2026-02-25 12:00:02.824572486 +0000 UTC m=+0.149393605 container attach 508d47e2ca9d09f392bd7aa8630f5033349fee2e7ae7b116d3b2a4109c76ebbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_golick, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 07:00:02 np0005629333 clever_golick[158065]: 167 167
Feb 25 07:00:02 np0005629333 systemd[1]: libpod-508d47e2ca9d09f392bd7aa8630f5033349fee2e7ae7b116d3b2a4109c76ebbc.scope: Deactivated successfully.
Feb 25 07:00:02 np0005629333 podman[158070]: 2026-02-25 12:00:02.871555672 +0000 UTC m=+0.027698276 container died 508d47e2ca9d09f392bd7aa8630f5033349fee2e7ae7b116d3b2a4109c76ebbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_golick, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:00:02 np0005629333 systemd[1]: var-lib-containers-storage-overlay-903aad7ca7bc08e586abe116dbf33ff0a12a1d9a8c7817e2d9d889764a0dc71c-merged.mount: Deactivated successfully.
Feb 25 07:00:02 np0005629333 podman[158070]: 2026-02-25 12:00:02.907507918 +0000 UTC m=+0.063650492 container remove 508d47e2ca9d09f392bd7aa8630f5033349fee2e7ae7b116d3b2a4109c76ebbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_golick, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 07:00:02 np0005629333 systemd[1]: libpod-conmon-508d47e2ca9d09f392bd7aa8630f5033349fee2e7ae7b116d3b2a4109c76ebbc.scope: Deactivated successfully.
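The create, init, start, attach, died, remove sequence above is cephadm running a short-lived helper container; its only output, "167 167" from clever_golick, is the uid/gid of the ceph user baked into the image. A rough standalone equivalent of that probe (the flags here are illustrative, not the literal cephadm invocation, which adds more mounts and options):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    # Ask the image which uid/gid owns /var/lib/ceph; prints "167 167".
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    uid, gid = map(int, out)
    print(uid, gid)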
Feb 25 07:00:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
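The recurring pgmap lines from ceph-mgr summarize placement-group state, here 305 PGs, all active+clean. The same summary can be pulled on demand; a sketch, assuming an admin keyring is reachable from this node:

    import json, subprocess

    # One-shot PG summary; the counts should match the pgmap line above,
    # e.g. 305 PGs active+clean.
    pg_stat = json.loads(subprocess.run(
        ["ceph", "pg", "stat", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout)
    print(pg_stat)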
Feb 25 07:00:03 np0005629333 podman[158092]: 2026-02-25 12:00:03.08606547 +0000 UTC m=+0.039806839 container create fa766487a6f096afc881d38d3bb056cc42e4333239fd67230c2a26f74928e825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_faraday, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:00:03 np0005629333 systemd[1]: Started libpod-conmon-fa766487a6f096afc881d38d3bb056cc42e4333239fd67230c2a26f74928e825.scope.
Feb 25 07:00:03 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:00:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee85bc6f786c92d6e35b3f0ef7477c3680677bc13d0592243fb811c79c818ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:00:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee85bc6f786c92d6e35b3f0ef7477c3680677bc13d0592243fb811c79c818ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:00:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee85bc6f786c92d6e35b3f0ef7477c3680677bc13d0592243fb811c79c818ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:00:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ee85bc6f786c92d6e35b3f0ef7477c3680677bc13d0592243fb811c79c818ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:00:03 np0005629333 podman[158092]: 2026-02-25 12:00:03.06922701 +0000 UTC m=+0.022968399 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:00:03 np0005629333 podman[158092]: 2026-02-25 12:00:03.169134425 +0000 UTC m=+0.122875804 container init fa766487a6f096afc881d38d3bb056cc42e4333239fd67230c2a26f74928e825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 25 07:00:03 np0005629333 podman[158092]: 2026-02-25 12:00:03.176625383 +0000 UTC m=+0.130366782 container start fa766487a6f096afc881d38d3bb056cc42e4333239fd67230c2a26f74928e825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_faraday, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 07:00:03 np0005629333 podman[158092]: 2026-02-25 12:00:03.17997144 +0000 UTC m=+0.133712849 container attach fa766487a6f096afc881d38d3bb056cc42e4333239fd67230c2a26f74928e825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_faraday, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:00:03 np0005629333 zen_faraday[158109]: {
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:    "0": [
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:        {
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "devices": [
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "/dev/loop3"
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            ],
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "lv_name": "ceph_lv0",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "lv_size": "21470642176",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "name": "ceph_lv0",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "tags": {
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.cluster_name": "ceph",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.crush_device_class": "",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.encrypted": "0",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.objectstore": "bluestore",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.osd_id": "0",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.type": "block",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.vdo": "0",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.with_tpm": "0"
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            },
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "type": "block",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "vg_name": "ceph_vg0"
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:        }
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:    ],
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:    "1": [
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:        {
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "devices": [
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "/dev/loop4"
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            ],
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "lv_name": "ceph_lv1",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "lv_size": "21470642176",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "name": "ceph_lv1",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "tags": {
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.cluster_name": "ceph",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.crush_device_class": "",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.encrypted": "0",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.objectstore": "bluestore",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.osd_id": "1",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.type": "block",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.vdo": "0",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.with_tpm": "0"
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            },
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "type": "block",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "vg_name": "ceph_vg1"
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:        }
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:    ],
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:    "2": [
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:        {
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "devices": [
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "/dev/loop5"
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            ],
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "lv_name": "ceph_lv2",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "lv_size": "21470642176",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "name": "ceph_lv2",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "tags": {
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.cluster_name": "ceph",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.crush_device_class": "",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.encrypted": "0",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.objectstore": "bluestore",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.osd_id": "2",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.type": "block",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.vdo": "0",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:                "ceph.with_tpm": "0"
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            },
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "type": "block",
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:            "vg_name": "ceph_vg2"
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:        }
Feb 25 07:00:03 np0005629333 zen_faraday[158109]:    ]
Feb 25 07:00:03 np0005629333 zen_faraday[158109]: }
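The JSON document printed by zen_faraday matches the output of `ceph-volume lvm list --format json`: a map of OSD id to its logical volumes, with the ceph.* LV tags repeated in parsed form under "tags". A sketch that reduces it to one line per OSD:

    import json, subprocess

    listing = json.loads(subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout)
    # Keys are OSD ids as strings ("0", "1", "2"); each maps to a list of LVs.
    for osd_id, lvs in sorted(listing.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"fsid={tags['ceph.osd_fsid']} devices={','.join(lv['devices'])}")

Against the listing above this prints, for example, osd.0 on /dev/ceph_vg0/ceph_lv0 backed by /dev/loop3.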
Feb 25 07:00:03 np0005629333 systemd[1]: libpod-fa766487a6f096afc881d38d3bb056cc42e4333239fd67230c2a26f74928e825.scope: Deactivated successfully.
Feb 25 07:00:03 np0005629333 podman[158092]: 2026-02-25 12:00:03.45542216 +0000 UTC m=+0.409163569 container died fa766487a6f096afc881d38d3bb056cc42e4333239fd67230c2a26f74928e825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_faraday, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 07:00:03 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6ee85bc6f786c92d6e35b3f0ef7477c3680677bc13d0592243fb811c79c818ac-merged.mount: Deactivated successfully.
Feb 25 07:00:03 np0005629333 podman[158092]: 2026-02-25 12:00:03.502112558 +0000 UTC m=+0.455853937 container remove fa766487a6f096afc881d38d3bb056cc42e4333239fd67230c2a26f74928e825 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_faraday, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 07:00:03 np0005629333 systemd[1]: libpod-conmon-fa766487a6f096afc881d38d3bb056cc42e4333239fd67230c2a26f74928e825.scope: Deactivated successfully.
Feb 25 07:00:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
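The recurring _set_new_cache_sizes lines are the monitor's cache autotuner resizing its RocksDB and OSDMap caches against the mon memory target. The knobs involved can be inspected with `ceph config get`; a sketch (option names are the current upstream ones, assumed to apply to this build):

    import subprocess

    for opt in ("mon_memory_target", "mon_osd_cache_size_min"):
        val = subprocess.run(
            ["ceph", "config", "get", "mon.compute-0", opt],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        print(opt, "=", val)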
Feb 25 07:00:03 np0005629333 podman[158194]: 2026-02-25 12:00:03.955876011 +0000 UTC m=+0.043634060 container create 7e1187f8e195ef710dfeab35b8276481a7ee34a3a2dcb60a5172d17f8e22b820 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_booth, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 07:00:03 np0005629333 systemd[1]: Started libpod-conmon-7e1187f8e195ef710dfeab35b8276481a7ee34a3a2dcb60a5172d17f8e22b820.scope.
Feb 25 07:00:04 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:00:04 np0005629333 podman[158194]: 2026-02-25 12:00:03.938523706 +0000 UTC m=+0.026281755 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:00:04 np0005629333 podman[158194]: 2026-02-25 12:00:04.040602975 +0000 UTC m=+0.128361094 container init 7e1187f8e195ef710dfeab35b8276481a7ee34a3a2dcb60a5172d17f8e22b820 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 07:00:04 np0005629333 podman[158194]: 2026-02-25 12:00:04.049839053 +0000 UTC m=+0.137597082 container start 7e1187f8e195ef710dfeab35b8276481a7ee34a3a2dcb60a5172d17f8e22b820 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_booth, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 07:00:04 np0005629333 friendly_booth[158210]: 167 167
Feb 25 07:00:04 np0005629333 systemd[1]: libpod-7e1187f8e195ef710dfeab35b8276481a7ee34a3a2dcb60a5172d17f8e22b820.scope: Deactivated successfully.
Feb 25 07:00:04 np0005629333 podman[158194]: 2026-02-25 12:00:04.053410107 +0000 UTC m=+0.141168156 container attach 7e1187f8e195ef710dfeab35b8276481a7ee34a3a2dcb60a5172d17f8e22b820 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_booth, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 07:00:04 np0005629333 podman[158194]: 2026-02-25 12:00:04.054311913 +0000 UTC m=+0.142069982 container died 7e1187f8e195ef710dfeab35b8276481a7ee34a3a2dcb60a5172d17f8e22b820 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 07:00:04 np0005629333 systemd[1]: var-lib-containers-storage-overlay-601716c4bc22e45583e4ec391f7370f1fc218844b4d6608f5971a1c50fb5b4d3-merged.mount: Deactivated successfully.
Feb 25 07:00:04 np0005629333 podman[158194]: 2026-02-25 12:00:04.092104342 +0000 UTC m=+0.179862401 container remove 7e1187f8e195ef710dfeab35b8276481a7ee34a3a2dcb60a5172d17f8e22b820 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:00:04 np0005629333 systemd[1]: libpod-conmon-7e1187f8e195ef710dfeab35b8276481a7ee34a3a2dcb60a5172d17f8e22b820.scope: Deactivated successfully.
Feb 25 07:00:04 np0005629333 podman[158236]: 2026-02-25 12:00:04.254293218 +0000 UTC m=+0.044564587 container create 9a9691f0a8e121dba5d550f7751b919a306798450e510ecf07a13f1f65fda609 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_khorana, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 07:00:04 np0005629333 systemd[1]: Started libpod-conmon-9a9691f0a8e121dba5d550f7751b919a306798450e510ecf07a13f1f65fda609.scope.
Feb 25 07:00:04 np0005629333 systemd-logind[811]: New session 49 of user zuul.
Feb 25 07:00:04 np0005629333 systemd[1]: Started Session 49 of User zuul.
Feb 25 07:00:04 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:00:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c1e7e2593d4c1b4d9efd22ceca143956184b4c0d03cd6a149d042bb3275db4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:00:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c1e7e2593d4c1b4d9efd22ceca143956184b4c0d03cd6a149d042bb3275db4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:00:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c1e7e2593d4c1b4d9efd22ceca143956184b4c0d03cd6a149d042bb3275db4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:00:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c1e7e2593d4c1b4d9efd22ceca143956184b4c0d03cd6a149d042bb3275db4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:00:04 np0005629333 podman[158236]: 2026-02-25 12:00:04.236654395 +0000 UTC m=+0.026925824 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:00:04 np0005629333 podman[158236]: 2026-02-25 12:00:04.352933787 +0000 UTC m=+0.143205236 container init 9a9691f0a8e121dba5d550f7751b919a306798450e510ecf07a13f1f65fda609 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle)
Feb 25 07:00:04 np0005629333 podman[158236]: 2026-02-25 12:00:04.364449971 +0000 UTC m=+0.154721380 container start 9a9691f0a8e121dba5d550f7751b919a306798450e510ecf07a13f1f65fda609 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 07:00:04 np0005629333 podman[158236]: 2026-02-25 12:00:04.368567331 +0000 UTC m=+0.158838740 container attach 9a9691f0a8e121dba5d550f7751b919a306798450e510ecf07a13f1f65fda609 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_khorana, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:00:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:00:05 np0005629333 lvm[158461]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:00:05 np0005629333 lvm[158461]: VG ceph_vg0 finished
Feb 25 07:00:05 np0005629333 lvm[158465]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:00:05 np0005629333 lvm[158465]: VG ceph_vg1 finished
Feb 25 07:00:05 np0005629333 lvm[158487]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:00:05 np0005629333 lvm[158487]: VG ceph_vg2 finished
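The three lvm[...] pairs above are udev-triggered pvscan autoactivation events: each loop device brings its single-PV volume group online ("complete"), after which the VG is activated. The resulting PV-to-VG mapping can be read back with LVM's JSON reporting; a sketch:

    import json, subprocess

    # lvm2's --reportformat json wraps results as {"report": [{"pv": [...]}]}.
    report = json.loads(subprocess.run(
        ["pvs", "--reportformat", "json", "-o", "pv_name,vg_name"],
        capture_output=True, text=True, check=True,
    ).stdout)
    for pv in report["report"][0]["pv"]:
        print(pv["pv_name"], "->", pv["vg_name"])  # e.g. /dev/loop3 -> ceph_vg0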
Feb 25 07:00:05 np0005629333 hardcore_khorana[158255]: {}
Feb 25 07:00:05 np0005629333 systemd[1]: libpod-9a9691f0a8e121dba5d550f7751b919a306798450e510ecf07a13f1f65fda609.scope: Deactivated successfully.
Feb 25 07:00:05 np0005629333 systemd[1]: libpod-9a9691f0a8e121dba5d550f7751b919a306798450e510ecf07a13f1f65fda609.scope: Consumed 1.113s CPU time.
Feb 25 07:00:05 np0005629333 podman[158236]: 2026-02-25 12:00:05.178531873 +0000 UTC m=+0.968803282 container died 9a9691f0a8e121dba5d550f7751b919a306798450e510ecf07a13f1f65fda609 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 07:00:05 np0005629333 systemd[1]: var-lib-containers-storage-overlay-06c1e7e2593d4c1b4d9efd22ceca143956184b4c0d03cd6a149d042bb3275db4-merged.mount: Deactivated successfully.
Feb 25 07:00:05 np0005629333 podman[158236]: 2026-02-25 12:00:05.230564306 +0000 UTC m=+1.020835715 container remove 9a9691f0a8e121dba5d550f7751b919a306798450e510ecf07a13f1f65fda609 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_khorana, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 07:00:05 np0005629333 systemd[1]: libpod-conmon-9a9691f0a8e121dba5d550f7751b919a306798450e510ecf07a13f1f65fda609.scope: Deactivated successfully.
Feb 25 07:00:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:00:05 np0005629333 python3.9[158485]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 07:00:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:00:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:00:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:00:06 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:00:06 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
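The mon_command lines show the cephadm mgr module persisting this host's freshly gathered device inventory into the cluster config-key store, under keys like mgr/cephadm/host.compute-0.devices.0. Reading such an entry back, assuming client.admin access (current cephadm stores a JSON blob there):

    import subprocess

    raw = subprocess.run(
        ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(raw[:400])  # first part of the cached device inventory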
Feb 25 07:00:06 np0005629333 python3.9[158683]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
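The ansible command task above probes for a container named exactly nova_virtlogd; podman's --filter name= is a regex match, hence the ^...$ anchors, and an empty result means "not present". The same check in Python:

    import subprocess

    names = subprocess.run(
        ["podman", "ps", "-a", "--filter", "name=^nova_virtlogd$",
         "--format", "{{.Names}}"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    print("present" if "nova_virtlogd" in names else "absent")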
Feb 25 07:00:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:00:07 np0005629333 python3.9[158850]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 25 07:00:07 np0005629333 systemd[1]: Reloading.
Feb 25 07:00:07 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 07:00:07 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 07:00:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:00:08 np0005629333 python3.9[159042]: ansible-ansible.builtin.service_facts Invoked
Feb 25 07:00:08 np0005629333 network[159059]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 25 07:00:08 np0005629333 network[159060]: 'network-scripts' will be removed from distribution in near future.
Feb 25 07:00:08 np0005629333 network[159061]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 25 07:00:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:00:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:00:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:00:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:00:13 np0005629333 python3.9[159325]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 07:00:14 np0005629333 python3.9[159479]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 07:00:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:00:15 np0005629333 python3.9[159633]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 07:00:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 07:00:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 5582 writes, 24K keys, 5582 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 5582 writes, 872 syncs, 6.40 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5582 writes, 24K keys, 5582 commit groups, 1.0 writes per commit group, ingest: 18.59 MB, 0.03 MB/s
Interval WAL: 5582 writes, 872 syncs, 6.40 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x556516836430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x556516836430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
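These periodic dumps come from RocksDB's stats_dump_period_sec setting (600 s, matching the "600.0 interval" shown). A quick way to pull the headline write/sync counters out of a captured dump; the regexes only target the "Cumulative writes/WAL" lines shown above:

    import re

    SAMPLE = ("Cumulative writes: 5582 writes, 24K keys, 5582 commit groups\n"
              "Cumulative WAL: 5582 writes, 872 syncs, 6.40 writes per sync\n")

    def headline(stats: str) -> dict:
        # Pull the write/sync counters out of a "** DB Stats **" dump.
        w = re.search(r"Cumulative writes: (\d+) writes, (\S+) keys", stats)
        wal = re.search(r"Cumulative WAL: (\d+) writes, (\d+) syncs", stats)
        wal_writes, syncs = int(wal.group(1)), int(wal.group(2))
        return {"writes": int(w.group(1)), "keys": w.group(2),
                "writes_per_sync": round(wal_writes / syncs, 2)}

    print(headline(SAMPLE))  # {'writes': 5582, 'keys': '24K', 'writes_per_sync': 6.4}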
Feb 25 07:00:16 np0005629333 python3.9[159787]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 07:00:16 np0005629333 python3.9[159941]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 07:00:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:00:17 np0005629333 python3.9[160095]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 07:00:18 np0005629333 python3.9[160249]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 07:00:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:00:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:00:19 np0005629333 python3.9[160403]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
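The ansible-ansible.builtin.systemd_service calls above stop and disable the TripleO libvirt units one by one (state=stopped, enabled=False), and the file task then removes the now-unused target unit. A roughly equivalent standalone sketch of the whole series; `systemctl disable --now` combines the stop and disable steps:

    import subprocess

    UNITS = [
        "tripleo_nova_libvirt.target",
        "tripleo_nova_virtlogd_wrapper.service",
        "tripleo_nova_virtnodedevd.service",
        "tripleo_nova_virtproxyd.service",
        "tripleo_nova_virtqemud.service",
        "tripleo_nova_virtsecretd.service",
        "tripleo_nova_virtstoraged.service",
    ]
    for unit in UNITS:
        # check=False: already-stopped or missing units are not fatal here.
        subprocess.run(["systemctl", "disable", "--now", unit], check=False)
    # The ansible file task then deletes the leftover unit file at
    # /usr/lib/systemd/system/tripleo_nova_libvirt.target.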
Feb 25 07:00:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 07:00:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.2 total, 600.0 interval#012Cumulative writes: 6895 writes, 28K keys, 6895 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6895 writes, 1301 syncs, 5.30 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6895 writes, 28K keys, 6895 commit groups, 1.0 writes per commit group, ingest: 19.67 MB, 0.03 MB/s#012Interval WAL: 6895 writes, 1301 syncs, 5.30 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a364a358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a364a358d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
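The OSD stats dumps in this capture (above, and again at 07:00:24 below) are single journal records in which rsyslog has escaped embedded control characters: each #012 is an escaped newline, #011 an escaped tab, and #033 an escaped ESC (as in the oslo.concurrency lines later in this log). A minimal way to re-expand them when reviewing a saved copy of this log offline, assuming GNU sed; the filename is illustrative:

    # #012 = newline, #011 = tab (rsyslog octal escapes); GNU sed expands \n and \t
    sed -e 's/#012/\n/g' -e 's/#011/\t/g' messages.log | less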
Feb 25 07:00:20 np0005629333 python3.9[160557]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:00:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:00:21 np0005629333 python3.9[160710]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:00:22 np0005629333 python3.9[160863]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:00:22 np0005629333 podman[160931]: 2026-02-25 12:00:22.75352366 +0000 UTC m=+0.093404877 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 07:00:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:00:23 np0005629333 python3.9[161043]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:00:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:00:23 np0005629333 python3.9[161196]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:00:23 np0005629333 podman[161197]: 2026-02-25 12:00:23.72539911 +0000 UTC m=+0.080904633 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223)
Feb 25 07:00:24 np0005629333 python3.9[161368]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:00:24 np0005629333 python3.9[161521]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:00:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 07:00:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.6 total, 600.0 interval#012Cumulative writes: 5441 writes, 23K keys, 5441 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5441 writes, 785 syncs, 6.93 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5441 writes, 23K keys, 5441 commit groups, 1.0 writes per commit group, ingest: 18.33 MB, 0.03 MB/s#012Interval WAL: 5441 writes, 785 syncs, 6.93 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.059       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.059       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.059       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.6 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f8d0df78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.6 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f8d0df78d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.6 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Feb 25 07:00:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:00:25 np0005629333 python3.9[161674]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:00:25 np0005629333 python3.9[161827]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:00:26 np0005629333 python3.9[161980]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:00:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:00:27 np0005629333 python3.9[162133]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:00:27 np0005629333 ceph-mgr[76641]: [devicehealth INFO root] Check health
Feb 25 07:00:27 np0005629333 python3.9[162286]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:00:28 np0005629333 python3.9[162439]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:00:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:00:28 np0005629333 python3.9[162592]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
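Decoded from the #012 escapes, the shell fragment that the task above runs is:

    if systemctl is-active certmonger.service; then
      systemctl disable --now certmonger.service
      test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi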
Feb 25 07:00:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:00:29 np0005629333 python3.9[162744]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 25 07:00:30 np0005629333 python3.9[162897]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 25 07:00:30 np0005629333 systemd[1]: Reloading.
Feb 25 07:00:30 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 07:00:30 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 07:00:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:00:30
Feb 25 07:00:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:00:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:00:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'images', 'default.rgw.meta', '.mgr', '.rgw.root']
Feb 25 07:00:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:00:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:00:31 np0005629333 python3.9[163091]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 07:00:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:00:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:00:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:00:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:00:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:00:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:00:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:00:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:00:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:00:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:00:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:00:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:00:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:00:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:00:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:00:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:00:32 np0005629333 python3.9[163245]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 07:00:32 np0005629333 python3.9[163399]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 07:00:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:00:33 np0005629333 python3.9[163553]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 07:00:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:00:33 np0005629333 python3.9[163707]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 07:00:34 np0005629333 python3.9[163861]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 07:00:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:00:35 np0005629333 python3.9[164015]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 07:00:36 np0005629333 python3.9[164169]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Feb 25 07:00:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:00:37 np0005629333 python3.9[164323]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 25 07:00:38 np0005629333 python3.9[164482]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 25 07:00:38 np0005629333 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 07:00:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:00:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:00:39 np0005629333 python3.9[164644]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 25 07:00:40 np0005629333 python3.9[164729]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
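The dnf invocation above (state=present, all other options at their defaults) corresponds roughly to the following interactive command; note that the first four package names are logged with stray trailing spaces, which come from the playbook itself and are trimmed here:

    dnf install -y libvirt libvirt-admin libvirt-client libvirt-daemon \
        qemu-kvm qemu-img libguestfs libseccomp swtpm swtpm-tools \
        edk2-ovmf ceph-common cyrus-sasl-scram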
Feb 25 07:00:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:00:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:00:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:00:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:00:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:00:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:00:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:00:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:00:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:00:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:00:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:00:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:00:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:00:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6988014389607423e-06 of space, bias 4.0, pg target 0.0032385617267528906 quantized to 16 (current 16)
Feb 25 07:00:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:00:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:00:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:00:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:00:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:00:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:00:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:00:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:00:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:00:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
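The pg_autoscaler figures above are internally consistent with pg target = usage fraction × bias × 300: for '.mgr', 7.185749983720779e-06 × 1.0 × 300 = 0.0021557..., and for 'cephfs.cephfs.meta', 2.6988014389607423e-06 × 4.0 × 300 = 0.0032386..., matching the logged targets before quantization. The factor 300 would fit 3 OSDs at the default mon_target_pg_per_osd of 100; that is an inference from the numbers, not something the log states. A quick check:

    # reproduce the logged pg targets from the logged usage fractions;
    # the factor 300 (3 OSDs x mon_target_pg_per_osd=100) is inferred
    awk 'BEGIN { print 7.185749983720779e-06 * 1.0 * 300;
                 print 2.6988014389607423e-06 * 4.0 * 300 }'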
Feb 25 07:00:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:00:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:00:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:00:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:00:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:00:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:00:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:00:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:00:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:00:53 np0005629333 podman[164808]: 2026-02-25 12:00:53.787154828 +0000 UTC m=+0.127998693 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller)
Feb 25 07:00:53 np0005629333 podman[164842]: 2026-02-25 12:00:53.878474143 +0000 UTC m=+0.065184602 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 07:00:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:00:54.980 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:00:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:00:54.981 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:00:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:00:54.981 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:00:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:00:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:00:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:00:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:01:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:01:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:01:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:01:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:01:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:01:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:01:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:01:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:01:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:01:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:01:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 25 07:01:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 07:01:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:01:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:01:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:01:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:01:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:01:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:01:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:01:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:01:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:01:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:01:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:01:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:01:06 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 07:01:06 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:01:06 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:01:06 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
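The mon commands dispatched by mgr.compute-0.jzfame in this block have direct CLI equivalents; they are shown here only as the interactive forms of what the cephadm mgr module issues programmatically:

    ceph config rm osd/host:compute-0 osd_memory_target   # drop the per-host override
    ceph config generate-minimal-conf                     # print a minimal ceph.conf
    ceph auth get client.admin                            # export the admin keyring
    ceph auth get client.bootstrap-osd                    # export the bootstrap-osd keyring
    ceph config-key set mgr/cephadm/osd_remove_queue ...  # value elided in the log
    ceph osd tree destroyed --format json                 # OSDs in the 'destroyed' state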
Feb 25 07:01:06 np0005629333 podman[165125]: 2026-02-25 12:01:06.408545268 +0000 UTC m=+0.048217207 container create ae866c7a1d007c3064e05448e0e4643c73e9ec9aaffc9516187ca566f50d99d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lederberg, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:01:06 np0005629333 systemd[1]: Started libpod-conmon-ae866c7a1d007c3064e05448e0e4643c73e9ec9aaffc9516187ca566f50d99d6.scope.
Feb 25 07:01:06 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:01:06 np0005629333 podman[165125]: 2026-02-25 12:01:06.481303288 +0000 UTC m=+0.120975247 container init ae866c7a1d007c3064e05448e0e4643c73e9ec9aaffc9516187ca566f50d99d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lederberg, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 07:01:06 np0005629333 podman[165125]: 2026-02-25 12:01:06.386401784 +0000 UTC m=+0.026073803 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:01:06 np0005629333 podman[165125]: 2026-02-25 12:01:06.487604216 +0000 UTC m=+0.127276165 container start ae866c7a1d007c3064e05448e0e4643c73e9ec9aaffc9516187ca566f50d99d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:01:06 np0005629333 podman[165125]: 2026-02-25 12:01:06.492222061 +0000 UTC m=+0.131894070 container attach ae866c7a1d007c3064e05448e0e4643c73e9ec9aaffc9516187ca566f50d99d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lederberg, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:01:06 np0005629333 adoring_lederberg[165142]: 167 167
Feb 25 07:01:06 np0005629333 systemd[1]: libpod-ae866c7a1d007c3064e05448e0e4643c73e9ec9aaffc9516187ca566f50d99d6.scope: Deactivated successfully.
Feb 25 07:01:06 np0005629333 podman[165125]: 2026-02-25 12:01:06.502115459 +0000 UTC m=+0.141787428 container died ae866c7a1d007c3064e05448e0e4643c73e9ec9aaffc9516187ca566f50d99d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lederberg, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:01:06 np0005629333 systemd[1]: var-lib-containers-storage-overlay-de2bca05552deb249cbbbd71ef7023d9e05a80758879320de50467f4724c5397-merged.mount: Deactivated successfully.
Feb 25 07:01:06 np0005629333 podman[165125]: 2026-02-25 12:01:06.550739225 +0000 UTC m=+0.190411164 container remove ae866c7a1d007c3064e05448e0e4643c73e9ec9aaffc9516187ca566f50d99d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 07:01:06 np0005629333 systemd[1]: libpod-conmon-ae866c7a1d007c3064e05448e0e4643c73e9ec9aaffc9516187ca566f50d99d6.scope: Deactivated successfully.
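The start → attach → died → remove sequence above, completed in roughly 60 ms, is the signature of cephadm probing the host with one-shot containers: conmon and the libpod scope are torn down as soon as the entrypoint exits. The `167 167` line is the container's stdout, most likely the uid/gid of the ceph user inside the image, which cephadm checks before writing files on the host. A minimal sketch for grouping these lifecycle events by container ID when reading such a journal (the helper is hypothetical, written against the line format shown above):

```python
import re
from collections import defaultdict

# Hypothetical helper: group the podman lifecycle events in this journal by
# container ID. The regex is written against the line format shown above.
EVENT_RE = re.compile(r"container (create|init|start|attach|died|remove) ([0-9a-f]{64})")

def lifecycle(lines):
    events = defaultdict(list)
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            event, cid = m.groups()
            events[cid[:12]].append(event)
    return dict(events)

# e.g. lifecycle(open("journal.txt")) ->
# {"ae866c7a1d00": ["start", "attach", "died", "remove"], ...}
```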
Feb 25 07:01:06 np0005629333 podman[165166]: 2026-02-25 12:01:06.709811614 +0000 UTC m=+0.058808122 container create e8cff68a7b687eb10367cba23190183fd93ba841fa8f4ba74058285ecac2b80a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 07:01:06 np0005629333 systemd[1]: Started libpod-conmon-e8cff68a7b687eb10367cba23190183fd93ba841fa8f4ba74058285ecac2b80a.scope.
Feb 25 07:01:06 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:01:06 np0005629333 podman[165166]: 2026-02-25 12:01:06.689135087 +0000 UTC m=+0.038131625 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:01:06 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57b363d3369bf6abb80987e85031ebf4c7e86ad513ff6287ee4b9b0a6b358ace/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:01:06 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57b363d3369bf6abb80987e85031ebf4c7e86ad513ff6287ee4b9b0a6b358ace/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:01:06 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57b363d3369bf6abb80987e85031ebf4c7e86ad513ff6287ee4b9b0a6b358ace/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:01:06 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57b363d3369bf6abb80987e85031ebf4c7e86ad513ff6287ee4b9b0a6b358ace/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:01:06 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57b363d3369bf6abb80987e85031ebf4c7e86ad513ff6287ee4b9b0a6b358ace/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:01:06 np0005629333 podman[165166]: 2026-02-25 12:01:06.815313254 +0000 UTC m=+0.164309762 container init e8cff68a7b687eb10367cba23190183fd93ba841fa8f4ba74058285ecac2b80a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:01:06 np0005629333 podman[165166]: 2026-02-25 12:01:06.822120714 +0000 UTC m=+0.171117242 container start e8cff68a7b687eb10367cba23190183fd93ba841fa8f4ba74058285ecac2b80a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 07:01:06 np0005629333 podman[165166]: 2026-02-25 12:01:06.829599351 +0000 UTC m=+0.178595869 container attach e8cff68a7b687eb10367cba23190183fd93ba841fa8f4ba74058285ecac2b80a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lovelace, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:01:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:01:07 np0005629333 amazing_lovelace[165182]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:01:07 np0005629333 amazing_lovelace[165182]: --> All data devices are unavailable
Feb 25 07:01:07 np0005629333 systemd[1]: libpod-e8cff68a7b687eb10367cba23190183fd93ba841fa8f4ba74058285ecac2b80a.scope: Deactivated successfully.
Feb 25 07:01:07 np0005629333 podman[165166]: 2026-02-25 12:01:07.35559988 +0000 UTC m=+0.704596398 container died e8cff68a7b687eb10367cba23190183fd93ba841fa8f4ba74058285ecac2b80a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lovelace, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 07:01:07 np0005629333 systemd[1]: var-lib-containers-storage-overlay-57b363d3369bf6abb80987e85031ebf4c7e86ad513ff6287ee4b9b0a6b358ace-merged.mount: Deactivated successfully.
Feb 25 07:01:07 np0005629333 podman[165166]: 2026-02-25 12:01:07.404456382 +0000 UTC m=+0.753452900 container remove e8cff68a7b687eb10367cba23190183fd93ba841fa8f4ba74058285ecac2b80a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lovelace, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:01:07 np0005629333 systemd[1]: libpod-conmon-e8cff68a7b687eb10367cba23190183fd93ba841fa8f4ba74058285ecac2b80a.scope: Deactivated successfully.
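The amazing_lovelace run is a ceph-volume device probe: `--> passed data devices: 0 physical, 3 LVM` and `--> All data devices are unavailable` mean the drive group matched three logical volumes, all of which already carry OSDs, so there is nothing new to deploy. A rough way to rerun such a probe by hand, assuming the image digest from the log lines above and standard ceph-volume flags (cephadm's real invocation adds more bind mounts, e.g. ceph.conf and keyrings):

```python
import json, subprocess

# A sketch, not cephadm's exact invocation: run ceph-volume inside the same
# image seen in this log to list the host inventory. The digest comes from the
# log lines above; cephadm's real call adds more mounts (ceph.conf, keyrings).
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

out = subprocess.check_output(
    ["podman", "run", "--rm", "--privileged", "-v", "/dev:/dev",
     IMAGE, "ceph-volume", "inventory", "--format", "json"])
for dev in json.loads(out):
    print(dev.get("path"), "available:", dev.get("available"))
```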
Feb 25 07:01:07 np0005629333 podman[165276]: 2026-02-25 12:01:07.828859629 +0000 UTC m=+0.046407812 container create a6a6b1f0cdb95400b3b23d17bc3cb35d4986df9a326e4cee5c2fce11905450e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_ptolemy, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 07:01:07 np0005629333 systemd[1]: Started libpod-conmon-a6a6b1f0cdb95400b3b23d17bc3cb35d4986df9a326e4cee5c2fce11905450e9.scope.
Feb 25 07:01:07 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:01:07 np0005629333 podman[165276]: 2026-02-25 12:01:07.803940816 +0000 UTC m=+0.021489039 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:01:07 np0005629333 podman[165276]: 2026-02-25 12:01:07.917569208 +0000 UTC m=+0.135117471 container init a6a6b1f0cdb95400b3b23d17bc3cb35d4986df9a326e4cee5c2fce11905450e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Feb 25 07:01:07 np0005629333 podman[165276]: 2026-02-25 12:01:07.923017045 +0000 UTC m=+0.140565258 container start a6a6b1f0cdb95400b3b23d17bc3cb35d4986df9a326e4cee5c2fce11905450e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_ptolemy, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:01:07 np0005629333 dreamy_ptolemy[165292]: 167 167
Feb 25 07:01:07 np0005629333 systemd[1]: libpod-a6a6b1f0cdb95400b3b23d17bc3cb35d4986df9a326e4cee5c2fce11905450e9.scope: Deactivated successfully.
Feb 25 07:01:07 np0005629333 podman[165276]: 2026-02-25 12:01:07.926137973 +0000 UTC m=+0.143686146 container attach a6a6b1f0cdb95400b3b23d17bc3cb35d4986df9a326e4cee5c2fce11905450e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 07:01:07 np0005629333 podman[165276]: 2026-02-25 12:01:07.927947428 +0000 UTC m=+0.145495641 container died a6a6b1f0cdb95400b3b23d17bc3cb35d4986df9a326e4cee5c2fce11905450e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:01:07 np0005629333 systemd[1]: var-lib-containers-storage-overlay-55488bc9fa7604213cb5a7a201394103febc6297e4a50d45d97bfc6a497326bb-merged.mount: Deactivated successfully.
Feb 25 07:01:07 np0005629333 podman[165276]: 2026-02-25 12:01:07.975041255 +0000 UTC m=+0.192589468 container remove a6a6b1f0cdb95400b3b23d17bc3cb35d4986df9a326e4cee5c2fce11905450e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_ptolemy, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:01:07 np0005629333 systemd[1]: libpod-conmon-a6a6b1f0cdb95400b3b23d17bc3cb35d4986df9a326e4cee5c2fce11905450e9.scope: Deactivated successfully.
Feb 25 07:01:08 np0005629333 podman[165316]: 2026-02-25 12:01:08.127284464 +0000 UTC m=+0.044077724 container create 942a19fa0a0ef3de9f3bfdc8fa935218e5c792a5ead9e8bccdcc030ebf78616f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 07:01:08 np0005629333 systemd[1]: Started libpod-conmon-942a19fa0a0ef3de9f3bfdc8fa935218e5c792a5ead9e8bccdcc030ebf78616f.scope.
Feb 25 07:01:08 np0005629333 podman[165316]: 2026-02-25 12:01:08.104381371 +0000 UTC m=+0.021174711 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:01:08 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:01:08 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36abb93fdd968f2d7b0f21b4591842cc3ccdf0e085d02e87564bd70ad1361214/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:01:08 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36abb93fdd968f2d7b0f21b4591842cc3ccdf0e085d02e87564bd70ad1361214/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:01:08 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36abb93fdd968f2d7b0f21b4591842cc3ccdf0e085d02e87564bd70ad1361214/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:01:08 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36abb93fdd968f2d7b0f21b4591842cc3ccdf0e085d02e87564bd70ad1361214/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:01:08 np0005629333 podman[165316]: 2026-02-25 12:01:08.22267012 +0000 UTC m=+0.139463460 container init 942a19fa0a0ef3de9f3bfdc8fa935218e5c792a5ead9e8bccdcc030ebf78616f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 07:01:08 np0005629333 podman[165316]: 2026-02-25 12:01:08.23145902 +0000 UTC m=+0.148252300 container start 942a19fa0a0ef3de9f3bfdc8fa935218e5c792a5ead9e8bccdcc030ebf78616f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_fermi, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:01:08 np0005629333 podman[165316]: 2026-02-25 12:01:08.235289555 +0000 UTC m=+0.152082895 container attach 942a19fa0a0ef3de9f3bfdc8fa935218e5c792a5ead9e8bccdcc030ebf78616f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]: {
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:    "0": [
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:        {
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "devices": [
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "/dev/loop3"
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            ],
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "lv_name": "ceph_lv0",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "lv_size": "21470642176",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "name": "ceph_lv0",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "tags": {
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.cluster_name": "ceph",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.crush_device_class": "",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.encrypted": "0",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.objectstore": "bluestore",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.osd_id": "0",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.type": "block",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.vdo": "0",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.with_tpm": "0"
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            },
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "type": "block",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "vg_name": "ceph_vg0"
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:        }
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:    ],
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:    "1": [
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:        {
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "devices": [
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "/dev/loop4"
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            ],
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "lv_name": "ceph_lv1",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "lv_size": "21470642176",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "name": "ceph_lv1",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "tags": {
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.cluster_name": "ceph",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.crush_device_class": "",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.encrypted": "0",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.objectstore": "bluestore",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.osd_id": "1",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.type": "block",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.vdo": "0",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.with_tpm": "0"
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            },
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "type": "block",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "vg_name": "ceph_vg1"
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:        }
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:    ],
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:    "2": [
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:        {
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "devices": [
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "/dev/loop5"
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            ],
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "lv_name": "ceph_lv2",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "lv_size": "21470642176",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "name": "ceph_lv2",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "tags": {
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.cluster_name": "ceph",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.crush_device_class": "",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.encrypted": "0",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.objectstore": "bluestore",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.osd_id": "2",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.type": "block",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.vdo": "0",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:                "ceph.with_tpm": "0"
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            },
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "type": "block",
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:            "vg_name": "ceph_vg2"
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:        }
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]:    ]
Feb 25 07:01:08 np0005629333 stupefied_fermi[165332]: }
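The JSON block emitted by stupefied_fermi matches the schema of `ceph-volume lvm list --format json`: three bluestore OSDs (ids 0, 1, 2) on `/dev/ceph_vg0/ceph_lv0` through `/dev/ceph_vg2/ceph_lv2`, each backed by a single loop device and tagged with the same cluster fsid. A small parser for exactly this payload:

```python
import json

# Parse the payload printed above (schema of `ceph-volume lvm list
# --format json`) into one row per OSD.
def summarize(payload: str):
    rows = []
    for osd_id, lvs in json.loads(payload).items():
        for lv in lvs:
            rows.append({
                "osd_id": osd_id,
                "lv_path": lv["lv_path"],            # /dev/ceph_vg0/ceph_lv0
                "devices": lv["devices"],            # e.g. ["/dev/loop3"]
                "osd_fsid": lv["tags"]["ceph.osd_fsid"],
                "objectstore": lv["tags"]["ceph.objectstore"],
            })
    return rows

# For the block above this yields OSDs 0, 1, 2 on /dev/loop3, 4 and 5.
```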
Feb 25 07:01:08 np0005629333 systemd[1]: libpod-942a19fa0a0ef3de9f3bfdc8fa935218e5c792a5ead9e8bccdcc030ebf78616f.scope: Deactivated successfully.
Feb 25 07:01:08 np0005629333 podman[165316]: 2026-02-25 12:01:08.49649708 +0000 UTC m=+0.413290340 container died 942a19fa0a0ef3de9f3bfdc8fa935218e5c792a5ead9e8bccdcc030ebf78616f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:01:08 np0005629333 systemd[1]: var-lib-containers-storage-overlay-36abb93fdd968f2d7b0f21b4591842cc3ccdf0e085d02e87564bd70ad1361214-merged.mount: Deactivated successfully.
Feb 25 07:01:08 np0005629333 podman[165316]: 2026-02-25 12:01:08.543958207 +0000 UTC m=+0.460751467 container remove 942a19fa0a0ef3de9f3bfdc8fa935218e5c792a5ead9e8bccdcc030ebf78616f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_fermi, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 07:01:08 np0005629333 systemd[1]: libpod-conmon-942a19fa0a0ef3de9f3bfdc8fa935218e5c792a5ead9e8bccdcc030ebf78616f.scope: Deactivated successfully.
Feb 25 07:01:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
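The recurring `_set_new_cache_sizes` lines appear to come from the monitor's cache tuner recomputing its RocksDB/allocation split; the values are identical on every tick here, i.e. a steady state. A log-reading aid only (not a Ceph API) that converts one such line to MiB:

```python
import re

LINE = ("mon.compute-0@0(leader).osd e115 _set_new_cache_sizes "
        "cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 "
        "kv_alloc: 322961408")

# Log-reading aid: pull the byte counts out and print them in MiB.
for name, value in re.findall(r"(\w+):\s*(\d+)", LINE):
    print(f"{name:>10}: {int(value) / 2**20:8.1f} MiB")
```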
Feb 25 07:01:09 np0005629333 podman[165415]: 2026-02-25 12:01:09.039643408 +0000 UTC m=+0.055597412 container create e6507407f59ffe1ea794755870b1bcdf8edabeb992680f76997e4461f9961750 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keller, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 07:01:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:01:09 np0005629333 systemd[1]: Started libpod-conmon-e6507407f59ffe1ea794755870b1bcdf8edabeb992680f76997e4461f9961750.scope.
Feb 25 07:01:09 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:01:09 np0005629333 podman[165415]: 2026-02-25 12:01:09.019089193 +0000 UTC m=+0.035043127 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:01:09 np0005629333 podman[165415]: 2026-02-25 12:01:09.123320361 +0000 UTC m=+0.139274205 container init e6507407f59ffe1ea794755870b1bcdf8edabeb992680f76997e4461f9961750 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 07:01:09 np0005629333 podman[165415]: 2026-02-25 12:01:09.131477955 +0000 UTC m=+0.147431789 container start e6507407f59ffe1ea794755870b1bcdf8edabeb992680f76997e4461f9961750 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keller, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:01:09 np0005629333 tender_keller[165431]: 167 167
Feb 25 07:01:09 np0005629333 systemd[1]: libpod-e6507407f59ffe1ea794755870b1bcdf8edabeb992680f76997e4461f9961750.scope: Deactivated successfully.
Feb 25 07:01:09 np0005629333 podman[165415]: 2026-02-25 12:01:09.151170128 +0000 UTC m=+0.167124012 container attach e6507407f59ffe1ea794755870b1bcdf8edabeb992680f76997e4461f9961750 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:01:09 np0005629333 podman[165415]: 2026-02-25 12:01:09.151742102 +0000 UTC m=+0.167695936 container died e6507407f59ffe1ea794755870b1bcdf8edabeb992680f76997e4461f9961750 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keller, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 07:01:09 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1504c1a4797ccbd5cb20213ed66aafad79ff4dba81d46ac7a283d6217a324cc9-merged.mount: Deactivated successfully.
Feb 25 07:01:09 np0005629333 podman[165415]: 2026-02-25 12:01:09.192302337 +0000 UTC m=+0.208256201 container remove e6507407f59ffe1ea794755870b1bcdf8edabeb992680f76997e4461f9961750 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 07:01:09 np0005629333 systemd[1]: libpod-conmon-e6507407f59ffe1ea794755870b1bcdf8edabeb992680f76997e4461f9961750.scope: Deactivated successfully.
Feb 25 07:01:09 np0005629333 podman[165458]: 2026-02-25 12:01:09.302033742 +0000 UTC m=+0.022925665 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:01:09 np0005629333 podman[165458]: 2026-02-25 12:01:09.495250655 +0000 UTC m=+0.216142528 container create eca742ff5665f99d5bfc0124901a3dcca0c6b121c86f3c97c40f53f892c06e43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_babbage, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 07:01:09 np0005629333 systemd[1]: Started libpod-conmon-eca742ff5665f99d5bfc0124901a3dcca0c6b121c86f3c97c40f53f892c06e43.scope.
Feb 25 07:01:09 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:01:09 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9451787ec272741189a455b090bf2afe90ed7a8e73786e3799f4ff3d9fbedbf7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:01:09 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9451787ec272741189a455b090bf2afe90ed7a8e73786e3799f4ff3d9fbedbf7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:01:09 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9451787ec272741189a455b090bf2afe90ed7a8e73786e3799f4ff3d9fbedbf7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:01:09 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9451787ec272741189a455b090bf2afe90ed7a8e73786e3799f4ff3d9fbedbf7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:01:09 np0005629333 podman[165458]: 2026-02-25 12:01:09.788066861 +0000 UTC m=+0.508958744 container init eca742ff5665f99d5bfc0124901a3dcca0c6b121c86f3c97c40f53f892c06e43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 07:01:09 np0005629333 podman[165458]: 2026-02-25 12:01:09.795673811 +0000 UTC m=+0.516565694 container start eca742ff5665f99d5bfc0124901a3dcca0c6b121c86f3c97c40f53f892c06e43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_babbage, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:01:09 np0005629333 podman[165458]: 2026-02-25 12:01:09.808147273 +0000 UTC m=+0.529039186 container attach eca742ff5665f99d5bfc0124901a3dcca0c6b121c86f3c97c40f53f892c06e43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 07:01:09 np0005629333 kernel: SELinux:  Converting 2778 SID table entries...
Feb 25 07:01:09 np0005629333 kernel: SELinux:  policy capability network_peer_controls=1
Feb 25 07:01:09 np0005629333 kernel: SELinux:  policy capability open_perms=1
Feb 25 07:01:09 np0005629333 kernel: SELinux:  policy capability extended_socket_class=1
Feb 25 07:01:09 np0005629333 kernel: SELinux:  policy capability always_check_network=0
Feb 25 07:01:09 np0005629333 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 25 07:01:09 np0005629333 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 25 07:01:09 np0005629333 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 25 07:01:10 np0005629333 dbus-broker-launch[795]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Feb 25 07:01:10 np0005629333 lvm[165561]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:01:10 np0005629333 lvm[165559]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:01:10 np0005629333 lvm[165561]: VG ceph_vg1 finished
Feb 25 07:01:10 np0005629333 lvm[165559]: VG ceph_vg0 finished
Feb 25 07:01:10 np0005629333 lvm[165563]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:01:10 np0005629333 lvm[165563]: VG ceph_vg2 finished
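The `lvm[...]` messages are LVM's event-based autoactivation: a udev-triggered pvscan marks each PV online, declares the VG complete once all of its PVs are present, and finishes activation (`VG ... finished`) for ceph_vg0 through ceph_vg2. A quick post-hoc check, assuming standard LVM2 JSON reporting:

```python
import json, subprocess

# Post-hoc check (sketch): list the VGs reported complete above.
# `vgs --reportformat json` is standard LVM2; keys follow its report fields.
report = json.loads(subprocess.check_output(
    ["vgs", "--reportformat", "json", "-o", "vg_name,pv_count,lv_count"]))
for vg in report["report"][0]["vg"]:
    print(vg["vg_name"], "PVs:", vg["pv_count"], "LVs:", vg["lv_count"])
```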
Feb 25 07:01:10 np0005629333 hungry_babbage[165479]: {}
Feb 25 07:01:10 np0005629333 systemd[1]: libpod-eca742ff5665f99d5bfc0124901a3dcca0c6b121c86f3c97c40f53f892c06e43.scope: Deactivated successfully.
Feb 25 07:01:10 np0005629333 systemd[1]: libpod-eca742ff5665f99d5bfc0124901a3dcca0c6b121c86f3c97c40f53f892c06e43.scope: Consumed 1.163s CPU time.
Feb 25 07:01:10 np0005629333 podman[165458]: 2026-02-25 12:01:10.622388112 +0000 UTC m=+1.343279985 container died eca742ff5665f99d5bfc0124901a3dcca0c6b121c86f3c97c40f53f892c06e43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_babbage, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 07:01:10 np0005629333 systemd[1]: var-lib-containers-storage-overlay-9451787ec272741189a455b090bf2afe90ed7a8e73786e3799f4ff3d9fbedbf7-merged.mount: Deactivated successfully.
Feb 25 07:01:10 np0005629333 podman[165458]: 2026-02-25 12:01:10.684981958 +0000 UTC m=+1.405873831 container remove eca742ff5665f99d5bfc0124901a3dcca0c6b121c86f3c97c40f53f892c06e43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_babbage, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 07:01:10 np0005629333 systemd[1]: libpod-conmon-eca742ff5665f99d5bfc0124901a3dcca0c6b121c86f3c97c40f53f892c06e43.scope: Deactivated successfully.
Feb 25 07:01:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:01:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:01:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:01:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:01:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:01:11 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:01:11 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:01:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:01:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:01:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:01:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:01:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:01:18 np0005629333 kernel: SELinux:  Converting 2778 SID table entries...
Feb 25 07:01:18 np0005629333 kernel: SELinux:  policy capability network_peer_controls=1
Feb 25 07:01:18 np0005629333 kernel: SELinux:  policy capability open_perms=1
Feb 25 07:01:18 np0005629333 kernel: SELinux:  policy capability extended_socket_class=1
Feb 25 07:01:18 np0005629333 kernel: SELinux:  policy capability always_check_network=0
Feb 25 07:01:18 np0005629333 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 25 07:01:18 np0005629333 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 25 07:01:18 np0005629333 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 25 07:01:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Feb 25 07:01:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Feb 25 07:01:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 07:01:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:01:24 np0005629333 dbus-broker-launch[795]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Feb 25 07:01:24 np0005629333 podman[165613]: 2026-02-25 12:01:24.726919474 +0000 UTC m=+0.058349131 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Feb 25 07:01:24 np0005629333 podman[165614]: 2026-02-25 12:01:24.815539621 +0000 UTC m=+0.141693196 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible)
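The two `health_status` events show podman running each container's configured healthcheck (`/openstack/healthcheck`, bind-mounted from `/var/lib/openstack/healthchecks/...`) for the EDPM services; both report `healthy` with a failing streak of 0. A sketch for reading the same result back, hedged because the inspect key layout (`State.Health` vs `State.Healthcheck`) varies across podman versions:

```python
import json, subprocess

# Sketch: read back the latest health result for the two containers above.
# Key layout differs across podman versions, hence the defensive .get() chain.
for name in ("ovn_metadata_agent", "ovn_controller"):
    data = json.loads(subprocess.check_output(["podman", "inspect", name]))[0]
    state = data.get("State", {})
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(name, health.get("Status"), "failing_streak:", health.get("FailingStreak"))
```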
Feb 25 07:01:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 07:01:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 07:01:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:01:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 07:01:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:01:30
Feb 25 07:01:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:01:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:01:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'images', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'default.rgw.meta', 'volumes', 'default.rgw.control']
Feb 25 07:01:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:01:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Feb 25 07:01:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:01:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:01:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:01:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:01:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:01:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:01:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:01:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:01:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:01:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:01:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:01:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:01:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:01:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:01:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:01:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:01:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Feb 25 07:01:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:01:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:01:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:01:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:01:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:01:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:01:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:01:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:01:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:01:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:01:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:01:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:01:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:01:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:01:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:01:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:01:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:01:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:01:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6988014389607423e-06 of space, bias 4.0, pg target 0.0032385617267528906 quantized to 16 (current 16)
Feb 25 07:01:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:01:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:01:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:01:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:01:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:01:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:01:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:01:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:01:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:01:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 07:01:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:01:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:01:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:01:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:01:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:01:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:01:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:01:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:01:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:01:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:01:54.982 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:01:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:01:54.983 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:01:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:01:54.983 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:01:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:01:55 np0005629333 podman[182552]: 2026-02-25 12:01:55.47744039 +0000 UTC m=+0.070870594 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:01:55 np0005629333 podman[182553]: 2026-02-25 12:01:55.525475092 +0000 UTC m=+0.118541538 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 25 07:01:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:01:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:01:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:01 np0005629333 kernel: SELinux:  Converting 2779 SID table entries...
Feb 25 07:02:01 np0005629333 kernel: SELinux:  policy capability network_peer_controls=1
Feb 25 07:02:01 np0005629333 kernel: SELinux:  policy capability open_perms=1
Feb 25 07:02:01 np0005629333 kernel: SELinux:  policy capability extended_socket_class=1
Feb 25 07:02:01 np0005629333 kernel: SELinux:  policy capability always_check_network=0
Feb 25 07:02:01 np0005629333 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 25 07:02:01 np0005629333 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 25 07:02:01 np0005629333 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 25 07:02:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:02:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:02:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:02:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:02:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:02:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:02:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:03 np0005629333 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Feb 25 07:02:03 np0005629333 dbus-broker-launch[795]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Feb 25 07:02:03 np0005629333 dbus-broker-launch[769]: Noticed file-system modification, trigger reload.
Feb 25 07:02:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:02:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:02:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:11 np0005629333 systemd[1]: Stopping OpenSSH server daemon...
Feb 25 07:02:11 np0005629333 systemd[1]: sshd.service: Deactivated successfully.
Feb 25 07:02:11 np0005629333 systemd[1]: Stopped OpenSSH server daemon.
Feb 25 07:02:11 np0005629333 systemd[1]: sshd.service: Consumed 3.044s CPU time, read 32.0K from disk, written 36.0K to disk.
Feb 25 07:02:11 np0005629333 systemd[1]: Stopped target sshd-keygen.target.
Feb 25 07:02:11 np0005629333 systemd[1]: Stopping sshd-keygen.target...
Feb 25 07:02:11 np0005629333 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 25 07:02:11 np0005629333 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 25 07:02:11 np0005629333 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 25 07:02:11 np0005629333 systemd[1]: Reached target sshd-keygen.target.
Feb 25 07:02:11 np0005629333 systemd[1]: Starting OpenSSH server daemon...
Feb 25 07:02:11 np0005629333 systemd[1]: Started OpenSSH server daemon.
Feb 25 07:02:11 np0005629333 podman[183549]: 2026-02-25 12:02:11.447728137 +0000 UTC m=+0.086959510 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 07:02:11 np0005629333 podman[183549]: 2026-02-25 12:02:11.524566438 +0000 UTC m=+0.163797721 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:02:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:02:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:02:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:02:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:02:12 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:02:12 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:02:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:02:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:02:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:02:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:02:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:02:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:02:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:02:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:02:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:02:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:02:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:02:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:02:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:13 np0005629333 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 25 07:02:13 np0005629333 systemd[1]: Starting man-db-cache-update.service...
Feb 25 07:02:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:02:14 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:02:14 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:02:14 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:02:14 np0005629333 systemd[1]: Reloading.
Feb 25 07:02:14 np0005629333 podman[184111]: 2026-02-25 12:02:14.202380291 +0000 UTC m=+0.040266226 container create 1ee56503c4db9a5b7c621d33bb953616d9ae2ee732549e779204f462186838d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 07:02:14 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 07:02:14 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 07:02:14 np0005629333 podman[184111]: 2026-02-25 12:02:14.18230531 +0000 UTC m=+0.020191295 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:02:14 np0005629333 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 25 07:02:14 np0005629333 systemd[1]: Started libpod-conmon-1ee56503c4db9a5b7c621d33bb953616d9ae2ee732549e779204f462186838d2.scope.
Feb 25 07:02:14 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:02:14 np0005629333 podman[184111]: 2026-02-25 12:02:14.519852432 +0000 UTC m=+0.357738377 container init 1ee56503c4db9a5b7c621d33bb953616d9ae2ee732549e779204f462186838d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_johnson, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:02:14 np0005629333 podman[184111]: 2026-02-25 12:02:14.526928894 +0000 UTC m=+0.364814829 container start 1ee56503c4db9a5b7c621d33bb953616d9ae2ee732549e779204f462186838d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Feb 25 07:02:14 np0005629333 podman[184111]: 2026-02-25 12:02:14.530733566 +0000 UTC m=+0.368619521 container attach 1ee56503c4db9a5b7c621d33bb953616d9ae2ee732549e779204f462186838d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_johnson, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 07:02:14 np0005629333 amazing_johnson[184389]: 167 167
Feb 25 07:02:14 np0005629333 systemd[1]: libpod-1ee56503c4db9a5b7c621d33bb953616d9ae2ee732549e779204f462186838d2.scope: Deactivated successfully.
Feb 25 07:02:14 np0005629333 podman[184111]: 2026-02-25 12:02:14.53417054 +0000 UTC m=+0.372056535 container died 1ee56503c4db9a5b7c621d33bb953616d9ae2ee732549e779204f462186838d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_johnson, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:02:14 np0005629333 systemd[1]: var-lib-containers-storage-overlay-294f5f345262d748532cc034d93c4ea1d7e0bf0fe0258f25d3a27e5dd4b0f9fb-merged.mount: Deactivated successfully.
Feb 25 07:02:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:15 np0005629333 podman[184111]: 2026-02-25 12:02:15.937129983 +0000 UTC m=+1.775015918 container remove 1ee56503c4db9a5b7c621d33bb953616d9ae2ee732549e779204f462186838d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_johnson, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 07:02:16 np0005629333 systemd[1]: libpod-conmon-1ee56503c4db9a5b7c621d33bb953616d9ae2ee732549e779204f462186838d2.scope: Deactivated successfully.
Feb 25 07:02:16 np0005629333 podman[186459]: 2026-02-25 12:02:16.368839948 +0000 UTC m=+0.043936045 container create 9a7724382335a32a25fd29a65a35c17fcd90b78a04632a772b2dc068b3e465dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_lamport, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:02:16 np0005629333 systemd[1]: Started libpod-conmon-9a7724382335a32a25fd29a65a35c17fcd90b78a04632a772b2dc068b3e465dd.scope.
Feb 25 07:02:16 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:02:16 np0005629333 podman[186459]: 2026-02-25 12:02:16.348739786 +0000 UTC m=+0.023835903 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:02:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a215dedc5b828c5b7594433168c8a78c4d46c61e79384ee42517ec567be0e78/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:02:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a215dedc5b828c5b7594433168c8a78c4d46c61e79384ee42517ec567be0e78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:02:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a215dedc5b828c5b7594433168c8a78c4d46c61e79384ee42517ec567be0e78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:02:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a215dedc5b828c5b7594433168c8a78c4d46c61e79384ee42517ec567be0e78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:02:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a215dedc5b828c5b7594433168c8a78c4d46c61e79384ee42517ec567be0e78/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:02:16 np0005629333 podman[186459]: 2026-02-25 12:02:16.471293519 +0000 UTC m=+0.146389626 container init 9a7724382335a32a25fd29a65a35c17fcd90b78a04632a772b2dc068b3e465dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:02:16 np0005629333 podman[186459]: 2026-02-25 12:02:16.476871199 +0000 UTC m=+0.151967306 container start 9a7724382335a32a25fd29a65a35c17fcd90b78a04632a772b2dc068b3e465dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 07:02:16 np0005629333 podman[186459]: 2026-02-25 12:02:16.480826444 +0000 UTC m=+0.155922551 container attach 9a7724382335a32a25fd29a65a35c17fcd90b78a04632a772b2dc068b3e465dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 07:02:16 np0005629333 sweet_lamport[186623]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:02:16 np0005629333 sweet_lamport[186623]: --> All data devices are unavailable
Feb 25 07:02:16 np0005629333 systemd[1]: libpod-9a7724382335a32a25fd29a65a35c17fcd90b78a04632a772b2dc068b3e465dd.scope: Deactivated successfully.
Feb 25 07:02:16 np0005629333 podman[187336]: 2026-02-25 12:02:16.930854053 +0000 UTC m=+0.024456866 container died 9a7724382335a32a25fd29a65a35c17fcd90b78a04632a772b2dc068b3e465dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_lamport, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 07:02:16 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1a215dedc5b828c5b7594433168c8a78c4d46c61e79384ee42517ec567be0e78-merged.mount: Deactivated successfully.
Feb 25 07:02:16 np0005629333 podman[187336]: 2026-02-25 12:02:16.975828079 +0000 UTC m=+0.069430862 container remove 9a7724382335a32a25fd29a65a35c17fcd90b78a04632a772b2dc068b3e465dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_lamport, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 07:02:16 np0005629333 systemd[1]: libpod-conmon-9a7724382335a32a25fd29a65a35c17fcd90b78a04632a772b2dc068b3e465dd.scope: Deactivated successfully.
Feb 25 07:02:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:17 np0005629333 podman[188083]: 2026-02-25 12:02:17.459465261 +0000 UTC m=+0.055050754 container create 1a647186f6dae8ce2e65e9747a804eef8c40eecc2adc6db767dd198dcbd00c95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_varahamihira, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 07:02:17 np0005629333 systemd[1]: Started libpod-conmon-1a647186f6dae8ce2e65e9747a804eef8c40eecc2adc6db767dd198dcbd00c95.scope.
Feb 25 07:02:17 np0005629333 podman[188083]: 2026-02-25 12:02:17.431918869 +0000 UTC m=+0.027504412 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:02:17 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:02:17 np0005629333 podman[188083]: 2026-02-25 12:02:17.544345844 +0000 UTC m=+0.139931397 container init 1a647186f6dae8ce2e65e9747a804eef8c40eecc2adc6db767dd198dcbd00c95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:02:17 np0005629333 podman[188083]: 2026-02-25 12:02:17.551794794 +0000 UTC m=+0.147380287 container start 1a647186f6dae8ce2e65e9747a804eef8c40eecc2adc6db767dd198dcbd00c95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_varahamihira, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 07:02:17 np0005629333 podman[188083]: 2026-02-25 12:02:17.556387353 +0000 UTC m=+0.151972896 container attach 1a647186f6dae8ce2e65e9747a804eef8c40eecc2adc6db767dd198dcbd00c95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_varahamihira, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:02:17 np0005629333 zen_varahamihira[188238]: 167 167
Feb 25 07:02:17 np0005629333 systemd[1]: libpod-1a647186f6dae8ce2e65e9747a804eef8c40eecc2adc6db767dd198dcbd00c95.scope: Deactivated successfully.
Feb 25 07:02:17 np0005629333 podman[188083]: 2026-02-25 12:02:17.558749044 +0000 UTC m=+0.154334537 container died 1a647186f6dae8ce2e65e9747a804eef8c40eecc2adc6db767dd198dcbd00c95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_varahamihira, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 07:02:17 np0005629333 systemd[1]: var-lib-containers-storage-overlay-211c546b568b7e64da72926d834a2caee97b8fce44eb9671a51d8702dd5fe0a7-merged.mount: Deactivated successfully.
Feb 25 07:02:17 np0005629333 podman[188083]: 2026-02-25 12:02:17.604681681 +0000 UTC m=+0.200267134 container remove 1a647186f6dae8ce2e65e9747a804eef8c40eecc2adc6db767dd198dcbd00c95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_varahamihira, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 07:02:17 np0005629333 systemd[1]: libpod-conmon-1a647186f6dae8ce2e65e9747a804eef8c40eecc2adc6db767dd198dcbd00c95.scope: Deactivated successfully.
Feb 25 07:02:17 np0005629333 podman[188513]: 2026-02-25 12:02:17.783587575 +0000 UTC m=+0.050572408 container create b03db0376d4c33190062811ec26ecf8c873fd929f0dfabe47f785693fb1957de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_banach, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:02:17 np0005629333 systemd[1]: Started libpod-conmon-b03db0376d4c33190062811ec26ecf8c873fd929f0dfabe47f785693fb1957de.scope.
Feb 25 07:02:17 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:02:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1213669a1e9b31475eeef221b50ee4de444aabf4f51cdd468c66ac2ae6ab6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:02:17 np0005629333 podman[188513]: 2026-02-25 12:02:17.761738795 +0000 UTC m=+0.028723668 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:02:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1213669a1e9b31475eeef221b50ee4de444aabf4f51cdd468c66ac2ae6ab6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:02:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1213669a1e9b31475eeef221b50ee4de444aabf4f51cdd468c66ac2ae6ab6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:02:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c1213669a1e9b31475eeef221b50ee4de444aabf4f51cdd468c66ac2ae6ab6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:02:17 np0005629333 podman[188513]: 2026-02-25 12:02:17.882258325 +0000 UTC m=+0.149243188 container init b03db0376d4c33190062811ec26ecf8c873fd929f0dfabe47f785693fb1957de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_banach, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 07:02:17 np0005629333 podman[188513]: 2026-02-25 12:02:17.89089424 +0000 UTC m=+0.157879063 container start b03db0376d4c33190062811ec26ecf8c873fd929f0dfabe47f785693fb1957de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 07:02:17 np0005629333 podman[188513]: 2026-02-25 12:02:17.89461238 +0000 UTC m=+0.161597203 container attach b03db0376d4c33190062811ec26ecf8c873fd929f0dfabe47f785693fb1957de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_banach, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 07:02:17 np0005629333 python3.9[188464]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 25 07:02:17 np0005629333 systemd[1]: Reloading.
Feb 25 07:02:18 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 07:02:18 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 07:02:18 np0005629333 sharp_banach[188606]: {
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:    "0": [
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:        {
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "devices": [
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "/dev/loop3"
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            ],
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "lv_name": "ceph_lv0",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "lv_size": "21470642176",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "name": "ceph_lv0",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "tags": {
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.cluster_name": "ceph",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.crush_device_class": "",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.encrypted": "0",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.objectstore": "bluestore",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.osd_id": "0",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.type": "block",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.vdo": "0",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.with_tpm": "0"
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            },
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "type": "block",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "vg_name": "ceph_vg0"
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:        }
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:    ],
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:    "1": [
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:        {
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "devices": [
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "/dev/loop4"
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            ],
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "lv_name": "ceph_lv1",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "lv_size": "21470642176",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "name": "ceph_lv1",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "tags": {
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.cluster_name": "ceph",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.crush_device_class": "",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.encrypted": "0",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.objectstore": "bluestore",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.osd_id": "1",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.type": "block",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.vdo": "0",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.with_tpm": "0"
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            },
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "type": "block",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "vg_name": "ceph_vg1"
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:        }
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:    ],
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:    "2": [
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:        {
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "devices": [
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "/dev/loop5"
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            ],
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "lv_name": "ceph_lv2",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "lv_size": "21470642176",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "name": "ceph_lv2",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "tags": {
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.cluster_name": "ceph",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.crush_device_class": "",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.encrypted": "0",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.objectstore": "bluestore",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.osd_id": "2",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.type": "block",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.vdo": "0",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:                "ceph.with_tpm": "0"
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            },
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "type": "block",
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:            "vg_name": "ceph_vg2"
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:        }
Feb 25 07:02:18 np0005629333 sharp_banach[188606]:    ]
Feb 25 07:02:18 np0005629333 sharp_banach[188606]: }
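The JSON block emitted by the sharp_banach container is, by its shape, `ceph-volume lvm list --format json` output: top-level keys are OSD ids, each mapping to the logical volumes backing that OSD. A minimal parsing sketch (assuming the JSON has been saved to a file; the filename lvm_list.json is hypothetical):

    import json

    # Parse `ceph-volume lvm list --format json` output: top-level keys are
    # OSD ids ("0", "1", "2" above), each a list of LVs for that OSD.
    with open("lvm_list.json") as f:  # hypothetical capture of the JSON above
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['type']} on {lv['lv_path']} "
                  f"(devices {', '.join(lv['devices'])}, "
                  f"osd_fsid {lv['tags']['ceph.osd_fsid']})")

For this host that yields osd.0 on /dev/ceph_vg0/ceph_lv0 (/dev/loop3), osd.1 on /dev/ceph_vg1/ceph_lv1 (/dev/loop4), and osd.2 on /dev/ceph_vg2/ceph_lv2 (/dev/loop5).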
Feb 25 07:02:18 np0005629333 podman[188513]: 2026-02-25 12:02:18.191501499 +0000 UTC m=+0.458486322 container died b03db0376d4c33190062811ec26ecf8c873fd929f0dfabe47f785693fb1957de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_banach, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 07:02:18 np0005629333 systemd[1]: libpod-b03db0376d4c33190062811ec26ecf8c873fd929f0dfabe47f785693fb1957de.scope: Deactivated successfully.
Feb 25 07:02:18 np0005629333 systemd[1]: var-lib-containers-storage-overlay-2c1213669a1e9b31475eeef221b50ee4de444aabf4f51cdd468c66ac2ae6ab6a-merged.mount: Deactivated successfully.
Feb 25 07:02:18 np0005629333 podman[188513]: 2026-02-25 12:02:18.29256577 +0000 UTC m=+0.559550633 container remove b03db0376d4c33190062811ec26ecf8c873fd929f0dfabe47f785693fb1957de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_banach, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 07:02:18 np0005629333 systemd[1]: libpod-conmon-b03db0376d4c33190062811ec26ecf8c873fd929f0dfabe47f785693fb1957de.scope: Deactivated successfully.
Feb 25 07:02:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:02:18 np0005629333 podman[189763]: 2026-02-25 12:02:18.68677665 +0000 UTC m=+0.036905514 container create 4925947d13d8cf2723a6475a1fa8db614b55602707a5e47370ed0ceb9e313768 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_rosalind, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 07:02:18 np0005629333 systemd[1]: Started libpod-conmon-4925947d13d8cf2723a6475a1fa8db614b55602707a5e47370ed0ceb9e313768.scope.
Feb 25 07:02:18 np0005629333 podman[189763]: 2026-02-25 12:02:18.669020099 +0000 UTC m=+0.019149003 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:02:18 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:02:18 np0005629333 podman[189763]: 2026-02-25 12:02:18.783495978 +0000 UTC m=+0.133624922 container init 4925947d13d8cf2723a6475a1fa8db614b55602707a5e47370ed0ceb9e313768 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_rosalind, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 07:02:18 np0005629333 podman[189763]: 2026-02-25 12:02:18.789199831 +0000 UTC m=+0.139328695 container start 4925947d13d8cf2723a6475a1fa8db614b55602707a5e47370ed0ceb9e313768 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_rosalind, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 07:02:18 np0005629333 unruffled_rosalind[189887]: 167 167
Feb 25 07:02:18 np0005629333 systemd[1]: libpod-4925947d13d8cf2723a6475a1fa8db614b55602707a5e47370ed0ceb9e313768.scope: Deactivated successfully.
Feb 25 07:02:18 np0005629333 podman[189763]: 2026-02-25 12:02:18.796261302 +0000 UTC m=+0.146390166 container attach 4925947d13d8cf2723a6475a1fa8db614b55602707a5e47370ed0ceb9e313768 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_rosalind, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:02:18 np0005629333 podman[189763]: 2026-02-25 12:02:18.796565379 +0000 UTC m=+0.146694243 container died 4925947d13d8cf2723a6475a1fa8db614b55602707a5e47370ed0ceb9e313768 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:02:18 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6080a41bb681cb54e85b930345b38d060b23aa8cae10c15774f6a068294f2e6a-merged.mount: Deactivated successfully.
Feb 25 07:02:18 np0005629333 podman[189763]: 2026-02-25 12:02:18.836926076 +0000 UTC m=+0.187054930 container remove 4925947d13d8cf2723a6475a1fa8db614b55602707a5e47370ed0ceb9e313768 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:02:18 np0005629333 systemd[1]: libpod-conmon-4925947d13d8cf2723a6475a1fa8db614b55602707a5e47370ed0ceb9e313768.scope: Deactivated successfully.
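The unruffled_rosalind events above show the cephadm probe pattern: container create, init, start, attach, then died and remove, all within well under a second; a one-shot command container torn down as soon as its command exits, the way `podman run --rm` behaves. A rough sketch of that pattern (the exact cephadm arguments and bind mounts are not shown in this log, so this is only an approximation):

    import subprocess

    # One-shot container in the style of the create/start/attach/died/remove
    # sequence above; --rm removes the container as soon as the command exits.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)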
Feb 25 07:02:18 np0005629333 python3.9[189781]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
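The ansible systemd tasks in this stretch stop, disable, and mask the unwanted libvirt units (libvirtd, libvirtd-tcp.socket, libvirtd-tls.socket, virtproxyd-tcp.socket) before the modular virt*d services are enabled further below; each invocation triggers a daemon reload, hence the repeated "systemd[1]: Reloading." lines. An invocation with enabled=False masked=True state=stopped corresponds roughly to the following (a sketch, not the module's actual implementation):

    import subprocess

    # Approximate systemctl equivalent of ansible.builtin.systemd with
    # state=stopped enabled=False masked=True on one unit.
    unit = "libvirtd-tcp.socket"
    for verb in ("stop", "disable", "mask"):
        subprocess.run(["systemctl", verb, unit], check=True)
    subprocess.run(["systemctl", "daemon-reload"], check=True)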
Feb 25 07:02:18 np0005629333 systemd[1]: Reloading.
Feb 25 07:02:18 np0005629333 podman[190180]: 2026-02-25 12:02:18.988016562 +0000 UTC m=+0.053281225 container create c73920871cf963389fa6c26c71a68200446d70d7cf24c34602aef5cf8da80283 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lumiere, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:02:19 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 07:02:19 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 07:02:19 np0005629333 podman[190180]: 2026-02-25 12:02:18.970372713 +0000 UTC m=+0.035637426 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:02:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:19 np0005629333 systemd[1]: Started libpod-conmon-c73920871cf963389fa6c26c71a68200446d70d7cf24c34602aef5cf8da80283.scope.
Feb 25 07:02:19 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:02:19 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1028ad7b4f2a452a7f4a562ac513ac03b26fab43b43b83345ed95043362c1337/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:02:19 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1028ad7b4f2a452a7f4a562ac513ac03b26fab43b43b83345ed95043362c1337/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:02:19 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1028ad7b4f2a452a7f4a562ac513ac03b26fab43b43b83345ed95043362c1337/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:02:19 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1028ad7b4f2a452a7f4a562ac513ac03b26fab43b43b83345ed95043362c1337/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:02:19 np0005629333 podman[190180]: 2026-02-25 12:02:19.310827708 +0000 UTC m=+0.376092391 container init c73920871cf963389fa6c26c71a68200446d70d7cf24c34602aef5cf8da80283 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 07:02:19 np0005629333 podman[190180]: 2026-02-25 12:02:19.320531537 +0000 UTC m=+0.385796230 container start c73920871cf963389fa6c26c71a68200446d70d7cf24c34602aef5cf8da80283 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 07:02:19 np0005629333 podman[190180]: 2026-02-25 12:02:19.32440256 +0000 UTC m=+0.389667233 container attach c73920871cf963389fa6c26c71a68200446d70d7cf24c34602aef5cf8da80283 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 07:02:19 np0005629333 lvm[191644]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:02:19 np0005629333 lvm[191644]: VG ceph_vg0 finished
Feb 25 07:02:19 np0005629333 python3.9[191223]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 25 07:02:19 np0005629333 lvm[191646]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:02:19 np0005629333 lvm[191646]: VG ceph_vg1 finished
Feb 25 07:02:19 np0005629333 lvm[191677]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:02:19 np0005629333 lvm[191677]: VG ceph_vg2 finished
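The lvm lines are pvscan-driven autoactivation events: as each loop-backed PV comes online, lvm checks whether its VG now has all PVs present and, once complete, activates it. The same state can be read back from lvm's JSON reporting (a sketch using real vgs options):

    import json
    import subprocess

    # List each VG with its PV count; ceph_vg0..ceph_vg2 above are
    # single-PV VGs, so one online PV makes each of them "complete".
    out = subprocess.run(
        ["vgs", "--reportformat", "json", "-o", "vg_name,pv_count"],
        capture_output=True, text=True, check=True,
    ).stdout
    for vg in json.loads(out)["report"][0]["vg"]:
        print(vg["vg_name"], "PVs:", vg["pv_count"])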
Feb 25 07:02:19 np0005629333 systemd[1]: Reloading.
Feb 25 07:02:20 np0005629333 great_lumiere[190612]: {}
Feb 25 07:02:20 np0005629333 podman[190180]: 2026-02-25 12:02:20.044612933 +0000 UTC m=+1.109877616 container died c73920871cf963389fa6c26c71a68200446d70d7cf24c34602aef5cf8da80283 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lumiere, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 07:02:20 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 07:02:20 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 07:02:20 np0005629333 systemd[1]: libpod-c73920871cf963389fa6c26c71a68200446d70d7cf24c34602aef5cf8da80283.scope: Deactivated successfully.
Feb 25 07:02:20 np0005629333 systemd[1]: libpod-c73920871cf963389fa6c26c71a68200446d70d7cf24c34602aef5cf8da80283.scope: Consumed 1.089s CPU time.
Feb 25 07:02:20 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1028ad7b4f2a452a7f4a562ac513ac03b26fab43b43b83345ed95043362c1337-merged.mount: Deactivated successfully.
Feb 25 07:02:20 np0005629333 podman[190180]: 2026-02-25 12:02:20.314452911 +0000 UTC m=+1.379717584 container remove c73920871cf963389fa6c26c71a68200446d70d7cf24c34602aef5cf8da80283 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 07:02:20 np0005629333 systemd[1]: libpod-conmon-c73920871cf963389fa6c26c71a68200446d70d7cf24c34602aef5cf8da80283.scope: Deactivated successfully.
Feb 25 07:02:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:02:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:02:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:02:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:02:20 np0005629333 python3.9[192896]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 25 07:02:20 np0005629333 systemd[1]: Reloading.
Feb 25 07:02:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:21 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 07:02:21 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 07:02:21 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:02:21 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:02:21 np0005629333 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 25 07:02:21 np0005629333 systemd[1]: Finished man-db-cache-update.service.
Feb 25 07:02:21 np0005629333 systemd[1]: man-db-cache-update.service: Consumed 8.935s CPU time.
Feb 25 07:02:21 np0005629333 systemd[1]: run-rfb5e077760b74f618cde10787a4d7049.service: Deactivated successfully.
Feb 25 07:02:22 np0005629333 python3.9[193959]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 25 07:02:22 np0005629333 systemd[1]: Reloading.
Feb 25 07:02:22 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 07:02:22 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 07:02:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:23 np0005629333 python3.9[194158]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 25 07:02:23 np0005629333 systemd[1]: Reloading.
Feb 25 07:02:23 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 07:02:23 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 07:02:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:02:24 np0005629333 python3.9[194356]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 25 07:02:24 np0005629333 systemd[1]: Reloading.
Feb 25 07:02:24 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 07:02:24 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 07:02:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:25 np0005629333 python3.9[194553]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 25 07:02:25 np0005629333 podman[194656]: 2026-02-25 12:02:25.720947102 +0000 UTC m=+0.066361491 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 25 07:02:25 np0005629333 podman[194657]: 2026-02-25 12:02:25.756910419 +0000 UTC m=+0.103035758 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
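The two health_status events are podman's periodic healthchecks for ovn_metadata_agent and ovn_controller; per the config_data in the log, each container mounts /var/lib/openstack/healthchecks/<name> at /openstack and runs /openstack/healthcheck as its test command. The same check can be triggered by hand with podman's healthcheck subcommand (container name taken from the log):

    import subprocess

    # "podman healthcheck run" executes the container's configured test
    # command; exit code 0 means the container reported healthy.
    rc = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_controller"]
    ).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")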
Feb 25 07:02:26 np0005629333 python3.9[194754]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 25 07:02:26 np0005629333 systemd[1]: Reloading.
Feb 25 07:02:26 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 07:02:26 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 07:02:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:27 np0005629333 python3.9[194952]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 25 07:02:27 np0005629333 systemd[1]: Reloading.
Feb 25 07:02:27 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 07:02:27 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 07:02:27 np0005629333 systemd[1]: Listening on libvirt proxy daemon socket.
Feb 25 07:02:27 np0005629333 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Feb 25 07:02:28 np0005629333 python3.9[195153]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 25 07:02:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:02:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:30 np0005629333 python3.9[195309]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 25 07:02:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:02:30
Feb 25 07:02:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:02:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:02:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'backups', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'vms', 'default.rgw.log']
Feb 25 07:02:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
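These five lines record one balancer pass: build plan auto_2026-02-25_12:02:30 in upmap mode with a 5% max-misplaced budget, walk the listed pools, and prepare 0 of an allowed 10 upmap changes, meaning the PG distribution already needs no adjustment. The module's state can be queried with `ceph balancer status` (a sketch; field names as the balancer module reports them):

    import json
    import subprocess

    # Ask the mgr balancer module for its mode ("upmap" here) and
    # whether it is active.
    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "-f", "json"],
        capture_output=True, text=True, check=True,
    ).stdout)
    print(status["mode"], status["active"])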
Feb 25 07:02:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:31 np0005629333 python3.9[195465]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 25 07:02:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:02:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:02:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:02:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:02:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:02:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:02:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:02:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:02:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:02:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:02:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:02:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:02:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:02:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:02:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:02:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:02:31 np0005629333 python3.9[195621]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 25 07:02:32 np0005629333 python3.9[195777]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 25 07:02:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:33 np0005629333 python3.9[195933]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 25 07:02:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:02:34 np0005629333 python3.9[196089]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 25 07:02:34 np0005629333 python3.9[196245]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 25 07:02:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:35 np0005629333 python3.9[196401]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 25 07:02:36 np0005629333 python3.9[196557]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 25 07:02:36 np0005629333 python3.9[196713]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 25 07:02:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:37 np0005629333 python3.9[196869]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 25 07:02:38 np0005629333 python3.9[197025]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 25 07:02:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:02:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:39 np0005629333 python3.9[197181]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 25 07:02:39 np0005629333 python3.9[197337]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 25 07:02:40 np0005629333 python3.9[197490]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 25 07:02:40 np0005629333 python3.9[197644]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 07:02:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:41 np0005629333 python3.9[197797]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
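These file tasks prepare the libvirt PKI directories with root ownership, mode 0755, and the container_file_t SELinux type so the containerized services can read them. Roughly, per directory (a sketch; ansible applies the SELinux type through its own selinux bindings rather than chcon):

    import os
    import shutil
    import subprocess

    # Ensure a directory exists as root:root 0755 with SELinux type
    # container_file_t, like the ansible.builtin.file tasks above.
    path = "/etc/pki/libvirt/private"
    os.makedirs(path, mode=0o755, exist_ok=True)
    shutil.chown(path, user="root", group="root")
    subprocess.run(["chcon", "-t", "container_file_t", path], check=True)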
Feb 25 07:02:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:02:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:02:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:02:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:02:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:02:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:02:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:02:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:02:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:02:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:02:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:02:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:02:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6988014389607423e-06 of space, bias 4.0, pg target 0.0032385617267528906 quantized to 16 (current 16)
Feb 25 07:02:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:02:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:02:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:02:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:02:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:02:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:02:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:02:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:02:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:02:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
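The pg_autoscaler arithmetic is visible in these lines: for each pool, pg target = capacity_ratio x bias x (target PGs per OSD x number of OSDs), then quantized to the values shown (1, 16, 32). With the default mon_target_pg_per_osd=100 and the 3 OSDs this cluster reports, the logged values reproduce exactly, e.g. for '.mgr': 7.185749983720779e-06 x 1.0 x 300 = 0.0021557249951162337. A check (the 100-per-OSD default is an assumption, not read from this log):

    # Reproduce the autoscaler's "pg target" numbers from the lines above:
    # pg_target = capacity_ratio * bias * (mon_target_pg_per_osd * num_osds)
    TARGET = 100 * 3  # assumed default mon_target_pg_per_osd=100, 3 OSDs

    for pool, ratio, bias in [
        (".mgr",               7.185749983720779e-06, 1.0),
        ("cephfs.cephfs.meta", 2.6988014389607423e-06, 4.0),
        ("default.rgw.meta",   1.2718141564107572e-07, 4.0),
    ]:
        print(pool, ratio * bias * TARGET)  # matches the logged "pg target"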
Feb 25 07:02:42 np0005629333 python3.9[197951]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 07:02:42 np0005629333 python3.9[198104]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 25 07:02:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:43 np0005629333 python3.9[198254]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 07:02:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:02:44 np0005629333 python3.9[198407]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:02:44 np0005629333 python3.9[198533]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1772020963.563482-557-149002965081080/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
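Each ansible-ansible.legacy.copy task records a SHA-1 of the deployed file; note that virtnodedevd.conf, virtqemud.conf, and virtsecretd.conf below all log the same checksum 7a604468adb2868f1ab6ebd0fd4622286e6373e2, i.e. identical contents. The checksum can be recomputed with stock hashlib:

    import hashlib

    # Recompute the per-file checksum the copy tasks log
    # (ansible's file checksum is SHA-1 by default).
    def file_sha1(path: str) -> str:
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    print(file_sha1("/etc/libvirt/virtlogd.conf"))  # d7a72ae9... per the log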
Feb 25 07:02:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:45 np0005629333 python3.9[198686]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:02:46 np0005629333 python3.9[198812]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1772020965.113098-557-134658824396371/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:02:46 np0005629333 python3.9[198965]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:02:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:47 np0005629333 python3.9[199091]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1772020966.3682494-557-178801516950337/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:02:48 np0005629333 python3.9[199244]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:02:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:02:48 np0005629333 python3.9[199370]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1772020967.5913582-557-81797533053672/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:02:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:49 np0005629333 python3.9[199523]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:02:49 np0005629333 python3.9[199649]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1772020968.7654257-557-281424362333989/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:02:50 np0005629333 python3.9[199802]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:02:51 np0005629333 python3.9[199928]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1772020969.9727437-557-272242081103530/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:02:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:51 np0005629333 python3.9[200081]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:02:52 np0005629333 python3.9[200205]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1772020971.1805959-557-116530243320030/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:02:52 np0005629333 python3.9[200358]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:02:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:53 np0005629333 python3.9[200484]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1772020972.3135235-557-232075011523392/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
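The stat/copy pairs above are Ansible's usual deploy-on-change flow: ansible.legacy.stat fetches the destination's SHA-1, and ansible.legacy.copy rewrites the file only when that digest differs from the rendered source. A minimal sketch of that logic, assuming nothing beyond what the log shows (deploy, src and dest are illustrative names, not the module's internals):

    import hashlib
    import shutil
    from pathlib import Path

    def deploy(src: str, dest: str) -> bool:
        """Copy src over dest only when their SHA-1 digests differ."""
        want = hashlib.sha1(Path(src).read_bytes()).hexdigest()
        dest_path = Path(dest)
        have = (hashlib.sha1(dest_path.read_bytes()).hexdigest()
                if dest_path.exists() else None)
        if have == want:
            return False         # unchanged -> task reports "ok"
        shutil.copy2(src, dest)  # differs -> task reports "changed"
        return True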
Feb 25 07:02:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:02:54 np0005629333 python3.9[200637]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
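The saslpasswd2 call above seeds the SASL database that libvirt consults for migration auth: -f names the passwd.db, -a libvirt the application, -u openstack the realm, and -p reads the password (logged as stdin=12345678) from standard input for the user migration. A sketch reproducing that one step under those assumptions:

    import subprocess

    # Pipe the logged password to saslpasswd2 exactly as the
    # ansible.legacy.command task did via its stdin parameter.
    subprocess.run(
        ["saslpasswd2", "-f", "/etc/libvirt/passwd.db", "-p",
         "-a", "libvirt", "-u", "openstack", "migration"],
        input=b"12345678",
        check=True,
    )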
Feb 25 07:02:54 np0005629333 python3.9[200791]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:02:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:02:54.983 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:02:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:02:54.984 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:02:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:02:54.985 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:02:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:55 np0005629333 python3.9[200944]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:02:55 np0005629333 python3.9[201097]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:02:56 np0005629333 podman[201103]: 2026-02-25 12:02:56.044736555 +0000 UTC m=+0.085076907 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 25 07:02:56 np0005629333 podman[201098]: 2026-02-25 12:02:56.044645283 +0000 UTC m=+0.088299027 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS)
Feb 25 07:02:56 np0005629333 python3.9[201291]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:02:57 np0005629333 python3.9[201444]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:02:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:57 np0005629333 python3.9[201597]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:02:58 np0005629333 python3.9[201750]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:02:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:02:58 np0005629333 python3.9[201903]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:02:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:02:59 np0005629333 python3.9[202056]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:00 np0005629333 python3.9[202209]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:00 np0005629333 python3.9[202362]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:03:01.273050) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020981273096, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2036, "num_deletes": 251, "total_data_size": 3553342, "memory_usage": 3615648, "flush_reason": "Manual Compaction"}
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020981292644, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3477125, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9681, "largest_seqno": 11716, "table_properties": {"data_size": 3467851, "index_size": 5896, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17719, "raw_average_key_size": 19, "raw_value_size": 3449532, "raw_average_value_size": 3782, "num_data_blocks": 267, "num_entries": 912, "num_filter_entries": 912, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020748, "oldest_key_time": 1772020748, "file_creation_time": 1772020981, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 19649 microseconds, and 4287 cpu microseconds.
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:03:01.292681) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3477125 bytes OK
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:03:01.292721) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:03:01.294211) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:03:01.294224) EVENT_LOG_v1 {"time_micros": 1772020981294220, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:03:01.294239) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3544863, prev total WAL file size 3544863, number of live WAL files 2.
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:03:01.294882) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3395KB)], [26(5747KB)]
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020981294934, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9362752, "oldest_snapshot_seqno": -1}
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3693 keys, 7864881 bytes, temperature: kUnknown
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020981337593, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 7864881, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7836876, "index_size": 17655, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 88530, "raw_average_key_size": 23, "raw_value_size": 7766971, "raw_average_value_size": 2103, "num_data_blocks": 765, "num_entries": 3693, "num_filter_entries": 3693, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772020981, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:03:01.337981) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 7864881 bytes
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:03:01.339373) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 218.7 rd, 183.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.6 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(5.0) write-amplify(2.3) OK, records in: 4207, records dropped: 514 output_compression: NoCompression
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:03:01.339405) EVENT_LOG_v1 {"time_micros": 1772020981339390, "job": 10, "event": "compaction_finished", "compaction_time_micros": 42803, "compaction_time_cpu_micros": 24544, "output_level": 6, "num_output_files": 1, "total_output_size": 7864881, "num_input_records": 4207, "num_output_records": 3693, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
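The amplification figures in the JOB 10 summary follow directly from its own byte counts: 3.3 MB read from L0, 5.6 MB read from L6, and 7.5 MB written, so write-amplify = 7.5 / 3.3 ≈ 2.3 and read-write-amplify = (3.3 + 5.6 + 7.5) / 3.3 ≈ 5.0, matching the logged values:

    # Re-derive rocksdb's JOB 10 amplification numbers from the logged sizes.
    l0_in, l6_in, out = 3.3, 5.6, 7.5   # MB, from the compaction summary
    write_amplify = out / l0_in                         # ~2.3
    read_write_amplify = (l0_in + l6_in + out) / l0_in  # ~5.0
    print(round(write_amplify, 1), round(read_write_amplify, 1))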
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020981340048, "job": 10, "event": "table_file_deletion", "file_number": 28}
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772020981340863, "job": 10, "event": "table_file_deletion", "file_number": 26}
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:03:01.294736) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:03:01.341032) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:03:01.341042) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:03:01.341048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:03:01.341057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:03:01 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:03:01.341061) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:03:01 np0005629333 python3.9[202515]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:03:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:03:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:03:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:03:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:03:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:03:02 np0005629333 python3.9[202668]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:02 np0005629333 python3.9[202821]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
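Each ansible.builtin.file task in this stretch creates one root-owned, mode-0755 systemd drop-in directory per libvirt socket unit. A compact sketch with the same effect (unit names taken from the log; the loop itself is illustrative and assumes it runs as root):

    from pathlib import Path

    SOCKETS = [
        "virtlogd", "virtlogd-admin",
        "virtnodedevd", "virtnodedevd-ro", "virtnodedevd-admin",
        "virtproxyd", "virtproxyd-ro", "virtproxyd-admin",
        "virtqemud", "virtqemud-ro", "virtqemud-admin",
        "virtsecretd", "virtsecretd-ro", "virtsecretd-admin",
    ]

    for name in SOCKETS:
        # mode is still subject to the process umask, as with mkdir(1)
        Path(f"/etc/systemd/system/{name}.socket.d").mkdir(
            mode=0o755, parents=True, exist_ok=True)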
Feb 25 07:03:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:03 np0005629333 python3.9[202974]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:03:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:03:03 np0005629333 python3.9[203098]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020982.8245654-778-201105111398118/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:04 np0005629333 python3.9[203251]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:03:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:05 np0005629333 python3.9[203375]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020984.0334275-778-213355330353194/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:05 np0005629333 python3.9[203528]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:03:06 np0005629333 python3.9[203652]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020985.2658448-778-91317434899120/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:06 np0005629333 python3.9[203805]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:03:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:07 np0005629333 python3.9[203929]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020986.4114127-778-79826673944217/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:08 np0005629333 python3.9[204082]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:03:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:03:08 np0005629333 python3.9[204206]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020987.615756-778-127352088254037/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:09 np0005629333 python3.9[204359]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:03:09 np0005629333 python3.9[204483]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020988.8551974-778-123364835356684/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:10 np0005629333 python3.9[204636]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:03:10 np0005629333 python3.9[204760]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020989.9473627-778-74245319269414/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:11 np0005629333 python3.9[204913]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:03:12 np0005629333 python3.9[205037]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020991.0918746-778-47815993553623/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:12 np0005629333 python3.9[205190]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:03:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:13 np0005629333 python3.9[205314]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020992.3318107-778-154688985213388/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:03:13 np0005629333 python3.9[205467]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:03:14 np0005629333 python3.9[205591]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020993.4415846-778-250533862256047/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:15 np0005629333 python3.9[205744]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:03:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:15 np0005629333 python3.9[205868]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020994.5109973-778-65451257469715/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:16 np0005629333 python3.9[206021]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:03:16 np0005629333 auditd[725]: Audit daemon rotating log files
Feb 25 07:03:16 np0005629333 python3.9[206145]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020995.669665-778-91312105121511/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:17 np0005629333 python3.9[206298]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:03:18 np0005629333 python3.9[206422]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020996.8574874-778-7748518765326/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:18 np0005629333 python3.9[206575]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:03:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:03:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:19 np0005629333 python3.9[206699]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772020998.152536-778-94706463415172/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
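Every override.conf deployed above was rendered from the same libvirt-socket.unit.j2 template, and each copy was logged with the identical SHA-1 0bad41f409b4ee7e780a2a59dc18f5c84ed99826. An illustrative re-check that the files on disk still match that digest:

    import hashlib
    from pathlib import Path

    EXPECTED = "0bad41f409b4ee7e780a2a59dc18f5c84ed99826"  # from the log

    for conf in Path("/etc/systemd/system").glob("virt*.socket.d/override.conf"):
        digest = hashlib.sha1(conf.read_bytes()).hexdigest()
        assert digest == EXPECTED, f"{conf} drifted from the templated content"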
Feb 25 07:03:19 np0005629333 python3.9[206849]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 07:03:20 np0005629333 python3.9[207055]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
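The ansible.posix.seboolean task above, with state=True and persistent=True, is the usual equivalent of persisting an SELinux boolean from the shell; a one-line sketch of that assumed equivalence:

    import subprocess

    # Same effect as the module call: enable os_enable_vtpm and persist it (-P).
    subprocess.run(["setsebool", "-P", "os_enable_vtpm", "on"], check=True)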
Feb 25 07:03:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:03:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:03:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:03:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:03:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:03:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:03:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:03:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:03:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:03:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:03:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:03:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:03:21 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:03:21 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:03:21 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:03:21 np0005629333 podman[207150]: 2026-02-25 12:03:21.54200324 +0000 UTC m=+0.050610168 container create ca69a54862c62005a8238182f35782a8707903110df2223bf6b7f033e44d053f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 07:03:21 np0005629333 dbus-broker-launch[795]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Feb 25 07:03:21 np0005629333 podman[207150]: 2026-02-25 12:03:21.513661812 +0000 UTC m=+0.022268790 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:03:21 np0005629333 systemd[1]: Started libpod-conmon-ca69a54862c62005a8238182f35782a8707903110df2223bf6b7f033e44d053f.scope.
Feb 25 07:03:21 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:03:21 np0005629333 podman[207150]: 2026-02-25 12:03:21.702729801 +0000 UTC m=+0.211336799 container init ca69a54862c62005a8238182f35782a8707903110df2223bf6b7f033e44d053f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:03:21 np0005629333 podman[207150]: 2026-02-25 12:03:21.710849776 +0000 UTC m=+0.219456674 container start ca69a54862c62005a8238182f35782a8707903110df2223bf6b7f033e44d053f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 07:03:21 np0005629333 podman[207150]: 2026-02-25 12:03:21.713990954 +0000 UTC m=+0.222597902 container attach ca69a54862c62005a8238182f35782a8707903110df2223bf6b7f033e44d053f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_poincare, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:03:21 np0005629333 interesting_poincare[207169]: 167 167
Feb 25 07:03:21 np0005629333 systemd[1]: libpod-ca69a54862c62005a8238182f35782a8707903110df2223bf6b7f033e44d053f.scope: Deactivated successfully.
Feb 25 07:03:21 np0005629333 podman[207150]: 2026-02-25 12:03:21.718415707 +0000 UTC m=+0.227022615 container died ca69a54862c62005a8238182f35782a8707903110df2223bf6b7f033e44d053f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:03:21 np0005629333 systemd[1]: var-lib-containers-storage-overlay-266ebb2881b6ca75f0736069c8904a1dfb917fed64406810080fa83827e82bbd-merged.mount: Deactivated successfully.
Feb 25 07:03:21 np0005629333 podman[207150]: 2026-02-25 12:03:21.764888879 +0000 UTC m=+0.273495777 container remove ca69a54862c62005a8238182f35782a8707903110df2223bf6b7f033e44d053f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_poincare, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:03:21 np0005629333 systemd[1]: libpod-conmon-ca69a54862c62005a8238182f35782a8707903110df2223bf6b7f033e44d053f.scope: Deactivated successfully.
Feb 25 07:03:21 np0005629333 podman[207190]: 2026-02-25 12:03:21.913590455 +0000 UTC m=+0.054729293 container create 6bcda4c76f4975014169884ea5f353258976bbb44b053ce49aed8fd1062ad812 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_blackwell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 07:03:21 np0005629333 systemd[1]: Started libpod-conmon-6bcda4c76f4975014169884ea5f353258976bbb44b053ce49aed8fd1062ad812.scope.
Feb 25 07:03:21 np0005629333 podman[207190]: 2026-02-25 12:03:21.87817358 +0000 UTC m=+0.019312448 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:03:21 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:03:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/049ca631a415eb9f19465cfe9f7bed93336c67acd832bc8873a3a94999aba2a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:03:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/049ca631a415eb9f19465cfe9f7bed93336c67acd832bc8873a3a94999aba2a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:03:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/049ca631a415eb9f19465cfe9f7bed93336c67acd832bc8873a3a94999aba2a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:03:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/049ca631a415eb9f19465cfe9f7bed93336c67acd832bc8873a3a94999aba2a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:03:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/049ca631a415eb9f19465cfe9f7bed93336c67acd832bc8873a3a94999aba2a8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:03:22 np0005629333 podman[207190]: 2026-02-25 12:03:22.012941938 +0000 UTC m=+0.154080806 container init 6bcda4c76f4975014169884ea5f353258976bbb44b053ce49aed8fd1062ad812 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:03:22 np0005629333 podman[207190]: 2026-02-25 12:03:22.022480423 +0000 UTC m=+0.163619291 container start 6bcda4c76f4975014169884ea5f353258976bbb44b053ce49aed8fd1062ad812 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 07:03:22 np0005629333 podman[207190]: 2026-02-25 12:03:22.026141165 +0000 UTC m=+0.167280033 container attach 6bcda4c76f4975014169884ea5f353258976bbb44b053ce49aed8fd1062ad812 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_blackwell, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:03:22 np0005629333 hungry_blackwell[207230]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:03:22 np0005629333 hungry_blackwell[207230]: --> All data devices are unavailable
Feb 25 07:03:22 np0005629333 systemd[1]: libpod-6bcda4c76f4975014169884ea5f353258976bbb44b053ce49aed8fd1062ad812.scope: Deactivated successfully.
Feb 25 07:03:22 np0005629333 podman[207190]: 2026-02-25 12:03:22.492461794 +0000 UTC m=+0.633600622 container died 6bcda4c76f4975014169884ea5f353258976bbb44b053ce49aed8fd1062ad812 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:03:22 np0005629333 systemd[1]: var-lib-containers-storage-overlay-049ca631a415eb9f19465cfe9f7bed93336c67acd832bc8873a3a94999aba2a8-merged.mount: Deactivated successfully.
Feb 25 07:03:22 np0005629333 podman[207190]: 2026-02-25 12:03:22.529240977 +0000 UTC m=+0.670379825 container remove 6bcda4c76f4975014169884ea5f353258976bbb44b053ce49aed8fd1062ad812 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 07:03:22 np0005629333 systemd[1]: libpod-conmon-6bcda4c76f4975014169884ea5f353258976bbb44b053ce49aed8fd1062ad812.scope: Deactivated successfully.
Feb 25 07:03:22 np0005629333 python3.9[207377]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:22 np0005629333 podman[207556]: 2026-02-25 12:03:22.951444399 +0000 UTC m=+0.046376000 container create 6122587d0f01a010f2b2a2db161be490428dd82c9c1ffeec9c2ebcb41c1da3a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_robinson, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 07:03:22 np0005629333 systemd[1]: Started libpod-conmon-6122587d0f01a010f2b2a2db161be490428dd82c9c1ffeec9c2ebcb41c1da3a5.scope.
Feb 25 07:03:23 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:03:23 np0005629333 podman[207556]: 2026-02-25 12:03:23.024573953 +0000 UTC m=+0.119505554 container init 6122587d0f01a010f2b2a2db161be490428dd82c9c1ffeec9c2ebcb41c1da3a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_robinson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:03:23 np0005629333 podman[207556]: 2026-02-25 12:03:22.930074425 +0000 UTC m=+0.025006096 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:03:23 np0005629333 podman[207556]: 2026-02-25 12:03:23.030057126 +0000 UTC m=+0.124988747 container start 6122587d0f01a010f2b2a2db161be490428dd82c9c1ffeec9c2ebcb41c1da3a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_robinson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 07:03:23 np0005629333 podman[207556]: 2026-02-25 12:03:23.033668286 +0000 UTC m=+0.128599917 container attach 6122587d0f01a010f2b2a2db161be490428dd82c9c1ffeec9c2ebcb41c1da3a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:03:23 np0005629333 loving_robinson[207624]: 167 167
Feb 25 07:03:23 np0005629333 systemd[1]: libpod-6122587d0f01a010f2b2a2db161be490428dd82c9c1ffeec9c2ebcb41c1da3a5.scope: Deactivated successfully.
Feb 25 07:03:23 np0005629333 conmon[207624]: conmon 6122587d0f01a010f2b2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6122587d0f01a010f2b2a2db161be490428dd82c9c1ffeec9c2ebcb41c1da3a5.scope/container/memory.events
Feb 25 07:03:23 np0005629333 podman[207556]: 2026-02-25 12:03:23.036247278 +0000 UTC m=+0.131178909 container died 6122587d0f01a010f2b2a2db161be490428dd82c9c1ffeec9c2ebcb41c1da3a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_robinson, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:03:23 np0005629333 systemd[1]: var-lib-containers-storage-overlay-65a12de7df47ca5030e7ea7fd3a3735a08e6a8cce369da5718c03e029972ad32-merged.mount: Deactivated successfully.
Feb 25 07:03:23 np0005629333 podman[207556]: 2026-02-25 12:03:23.077526566 +0000 UTC m=+0.172458197 container remove 6122587d0f01a010f2b2a2db161be490428dd82c9c1ffeec9c2ebcb41c1da3a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:03:23 np0005629333 systemd[1]: libpod-conmon-6122587d0f01a010f2b2a2db161be490428dd82c9c1ffeec9c2ebcb41c1da3a5.scope: Deactivated successfully.
Feb 25 07:03:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:23 np0005629333 python3.9[207629]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:23 np0005629333 podman[207649]: 2026-02-25 12:03:23.242230317 +0000 UTC m=+0.037261068 container create 6aa385027741499d78c8d4dd517afa3e7c1c1417ff21a4d151d110c45bdb4e3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 07:03:23 np0005629333 systemd[1]: Started libpod-conmon-6aa385027741499d78c8d4dd517afa3e7c1c1417ff21a4d151d110c45bdb4e3b.scope.
Feb 25 07:03:23 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:03:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a850b5eef5b19752a4fee60309e1fa89e3b5bd01fc0bc420acd18f5dafb462ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:03:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a850b5eef5b19752a4fee60309e1fa89e3b5bd01fc0bc420acd18f5dafb462ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:03:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a850b5eef5b19752a4fee60309e1fa89e3b5bd01fc0bc420acd18f5dafb462ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:03:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a850b5eef5b19752a4fee60309e1fa89e3b5bd01fc0bc420acd18f5dafb462ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:03:23 np0005629333 podman[207649]: 2026-02-25 12:03:23.316245885 +0000 UTC m=+0.111276656 container init 6aa385027741499d78c8d4dd517afa3e7c1c1417ff21a4d151d110c45bdb4e3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 07:03:23 np0005629333 podman[207649]: 2026-02-25 12:03:23.321393288 +0000 UTC m=+0.116424039 container start 6aa385027741499d78c8d4dd517afa3e7c1c1417ff21a4d151d110c45bdb4e3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:03:23 np0005629333 podman[207649]: 2026-02-25 12:03:23.22759463 +0000 UTC m=+0.022625381 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:03:23 np0005629333 podman[207649]: 2026-02-25 12:03:23.324451083 +0000 UTC m=+0.119481904 container attach 6aa385027741499d78c8d4dd517afa3e7c1c1417ff21a4d151d110c45bdb4e3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_gates, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]: {
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:    "0": [
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:        {
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "devices": [
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "/dev/loop3"
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            ],
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "lv_name": "ceph_lv0",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "lv_size": "21470642176",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "name": "ceph_lv0",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "tags": {
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.cluster_name": "ceph",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.crush_device_class": "",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.encrypted": "0",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.objectstore": "bluestore",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.osd_id": "0",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.type": "block",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.vdo": "0",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.with_tpm": "0"
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            },
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "type": "block",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "vg_name": "ceph_vg0"
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:        }
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:    ],
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:    "1": [
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:        {
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "devices": [
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "/dev/loop4"
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            ],
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "lv_name": "ceph_lv1",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "lv_size": "21470642176",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "name": "ceph_lv1",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "tags": {
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.cluster_name": "ceph",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.crush_device_class": "",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.encrypted": "0",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.objectstore": "bluestore",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.osd_id": "1",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.type": "block",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.vdo": "0",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.with_tpm": "0"
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            },
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "type": "block",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "vg_name": "ceph_vg1"
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:        }
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:    ],
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:    "2": [
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:        {
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "devices": [
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "/dev/loop5"
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            ],
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "lv_name": "ceph_lv2",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "lv_size": "21470642176",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "name": "ceph_lv2",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "tags": {
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.cluster_name": "ceph",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.crush_device_class": "",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.encrypted": "0",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.objectstore": "bluestore",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.osd_id": "2",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.type": "block",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.vdo": "0",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:                "ceph.with_tpm": "0"
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            },
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "type": "block",
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:            "vg_name": "ceph_vg2"
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:        }
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]:    ]
Feb 25 07:03:23 np0005629333 peaceful_gates[207689]: }
Feb 25 07:03:23 np0005629333 systemd[1]: libpod-6aa385027741499d78c8d4dd517afa3e7c1c1417ff21a4d151d110c45bdb4e3b.scope: Deactivated successfully.
Feb 25 07:03:23 np0005629333 podman[207649]: 2026-02-25 12:03:23.605207412 +0000 UTC m=+0.400238173 container died 6aa385027741499d78c8d4dd517afa3e7c1c1417ff21a4d151d110c45bdb4e3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_gates, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 07:03:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:03:23 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a850b5eef5b19752a4fee60309e1fa89e3b5bd01fc0bc420acd18f5dafb462ce-merged.mount: Deactivated successfully.
Feb 25 07:03:23 np0005629333 podman[207649]: 2026-02-25 12:03:23.648186607 +0000 UTC m=+0.443217358 container remove 6aa385027741499d78c8d4dd517afa3e7c1c1417ff21a4d151d110c45bdb4e3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_gates, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:03:23 np0005629333 systemd[1]: libpod-conmon-6aa385027741499d78c8d4dd517afa3e7c1c1417ff21a4d151d110c45bdb4e3b.scope: Deactivated successfully.
Feb 25 07:03:23 np0005629333 python3.9[207826]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:24 np0005629333 podman[208023]: 2026-02-25 12:03:24.050207697 +0000 UTC m=+0.049656482 container create 041784e733b673a302756d02b45a455e2ee6454b5d3c3c60975314c255cd1c34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heyrovsky, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 07:03:24 np0005629333 systemd[1]: Started libpod-conmon-041784e733b673a302756d02b45a455e2ee6454b5d3c3c60975314c255cd1c34.scope.
Feb 25 07:03:24 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:03:24 np0005629333 podman[208023]: 2026-02-25 12:03:24.114588577 +0000 UTC m=+0.114037362 container init 041784e733b673a302756d02b45a455e2ee6454b5d3c3c60975314c255cd1c34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heyrovsky, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 07:03:24 np0005629333 podman[208023]: 2026-02-25 12:03:24.120906733 +0000 UTC m=+0.120355518 container start 041784e733b673a302756d02b45a455e2ee6454b5d3c3c60975314c255cd1c34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 07:03:24 np0005629333 podman[208023]: 2026-02-25 12:03:24.126071537 +0000 UTC m=+0.125520332 container attach 041784e733b673a302756d02b45a455e2ee6454b5d3c3c60975314c255cd1c34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:03:24 np0005629333 hopeful_heyrovsky[208071]: 167 167
Feb 25 07:03:24 np0005629333 podman[208023]: 2026-02-25 12:03:24.028850553 +0000 UTC m=+0.028299378 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:03:24 np0005629333 systemd[1]: libpod-041784e733b673a302756d02b45a455e2ee6454b5d3c3c60975314c255cd1c34.scope: Deactivated successfully.
Feb 25 07:03:24 np0005629333 podman[208023]: 2026-02-25 12:03:24.127383213 +0000 UTC m=+0.126832048 container died 041784e733b673a302756d02b45a455e2ee6454b5d3c3c60975314c255cd1c34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heyrovsky, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 07:03:24 np0005629333 systemd[1]: var-lib-containers-storage-overlay-8c7f7da5adeb0fdde25e889e87d879f6064d8e0fa7c3388405a80867e8abf58a-merged.mount: Deactivated successfully.
Feb 25 07:03:24 np0005629333 podman[208023]: 2026-02-25 12:03:24.1768858 +0000 UTC m=+0.176334615 container remove 041784e733b673a302756d02b45a455e2ee6454b5d3c3c60975314c255cd1c34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 07:03:24 np0005629333 systemd[1]: libpod-conmon-041784e733b673a302756d02b45a455e2ee6454b5d3c3c60975314c255cd1c34.scope: Deactivated successfully.
Feb 25 07:03:24 np0005629333 python3.9[208068]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:24 np0005629333 podman[208095]: 2026-02-25 12:03:24.303955444 +0000 UTC m=+0.040996241 container create b5cf6b025e07037760414ef986deccc29a0178f8a0c66f5eaad77bdd7c8a9bb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_dhawan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 07:03:24 np0005629333 systemd[1]: Started libpod-conmon-b5cf6b025e07037760414ef986deccc29a0178f8a0c66f5eaad77bdd7c8a9bb6.scope.
Feb 25 07:03:24 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:03:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e89baf87da6aa503af0fa3ce1b106faa4f1be21d5be29faceda069342d1c3a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:03:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e89baf87da6aa503af0fa3ce1b106faa4f1be21d5be29faceda069342d1c3a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:03:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e89baf87da6aa503af0fa3ce1b106faa4f1be21d5be29faceda069342d1c3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:03:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e89baf87da6aa503af0fa3ce1b106faa4f1be21d5be29faceda069342d1c3a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:03:24 np0005629333 podman[208095]: 2026-02-25 12:03:24.281920481 +0000 UTC m=+0.018961298 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:03:24 np0005629333 podman[208095]: 2026-02-25 12:03:24.397382242 +0000 UTC m=+0.134423079 container init b5cf6b025e07037760414ef986deccc29a0178f8a0c66f5eaad77bdd7c8a9bb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 07:03:24 np0005629333 podman[208095]: 2026-02-25 12:03:24.410009884 +0000 UTC m=+0.147050721 container start b5cf6b025e07037760414ef986deccc29a0178f8a0c66f5eaad77bdd7c8a9bb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_dhawan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:03:24 np0005629333 podman[208095]: 2026-02-25 12:03:24.413643455 +0000 UTC m=+0.150684282 container attach b5cf6b025e07037760414ef986deccc29a0178f8a0c66f5eaad77bdd7c8a9bb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_dhawan, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 07:03:24 np0005629333 python3.9[208278]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:25 np0005629333 lvm[208367]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:03:25 np0005629333 lvm[208367]: VG ceph_vg1 finished
Feb 25 07:03:25 np0005629333 lvm[208366]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:03:25 np0005629333 lvm[208366]: VG ceph_vg0 finished
Feb 25 07:03:25 np0005629333 lvm[208370]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:03:25 np0005629333 lvm[208370]: VG ceph_vg2 finished
Feb 25 07:03:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:25 np0005629333 inspiring_dhawan[208116]: {}
Feb 25 07:03:25 np0005629333 systemd[1]: libpod-b5cf6b025e07037760414ef986deccc29a0178f8a0c66f5eaad77bdd7c8a9bb6.scope: Deactivated successfully.
Feb 25 07:03:25 np0005629333 systemd[1]: libpod-b5cf6b025e07037760414ef986deccc29a0178f8a0c66f5eaad77bdd7c8a9bb6.scope: Consumed 1.090s CPU time.
Feb 25 07:03:25 np0005629333 podman[208095]: 2026-02-25 12:03:25.170413302 +0000 UTC m=+0.907454139 container died b5cf6b025e07037760414ef986deccc29a0178f8a0c66f5eaad77bdd7c8a9bb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 07:03:25 np0005629333 systemd[1]: var-lib-containers-storage-overlay-49e89baf87da6aa503af0fa3ce1b106faa4f1be21d5be29faceda069342d1c3a-merged.mount: Deactivated successfully.
Feb 25 07:03:25 np0005629333 podman[208095]: 2026-02-25 12:03:25.219303831 +0000 UTC m=+0.956344628 container remove b5cf6b025e07037760414ef986deccc29a0178f8a0c66f5eaad77bdd7c8a9bb6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_dhawan, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 07:03:25 np0005629333 systemd[1]: libpod-conmon-b5cf6b025e07037760414ef986deccc29a0178f8a0c66f5eaad77bdd7c8a9bb6.scope: Deactivated successfully.
Feb 25 07:03:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:03:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:03:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:03:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:03:25 np0005629333 python3.9[208535]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:26 np0005629333 python3.9[208690]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:26 np0005629333 podman[208691]: 2026-02-25 12:03:26.189074012 +0000 UTC m=+0.097844422 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 25 07:03:26 np0005629333 podman[208692]: 2026-02-25 12:03:26.213571744 +0000 UTC m=+0.123468745 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 07:03:26 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:03:26 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:03:26 np0005629333 python3.9[208888]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:27 np0005629333 python3.9[209041]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:27 np0005629333 python3.9[209194]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:03:28 np0005629333 python3.9[209347]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 25 07:03:28 np0005629333 systemd[1]: Reloading.
Feb 25 07:03:28 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 07:03:28 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 07:03:29 np0005629333 systemd[1]: Starting libvirt logging daemon socket...
Feb 25 07:03:29 np0005629333 systemd[1]: Listening on libvirt logging daemon socket.
Feb 25 07:03:29 np0005629333 systemd[1]: Starting libvirt logging daemon admin socket...
Feb 25 07:03:29 np0005629333 systemd[1]: Listening on libvirt logging daemon admin socket.
Feb 25 07:03:29 np0005629333 systemd[1]: Starting libvirt logging daemon...
Feb 25 07:03:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:29 np0005629333 systemd[1]: Started libvirt logging daemon.
Feb 25 07:03:29 np0005629333 python3.9[209549]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 25 07:03:29 np0005629333 systemd[1]: Reloading.
Feb 25 07:03:29 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 07:03:29 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 07:03:30 np0005629333 systemd[1]: Starting libvirt nodedev daemon socket...
Feb 25 07:03:30 np0005629333 systemd[1]: Listening on libvirt nodedev daemon socket.
Feb 25 07:03:30 np0005629333 systemd[1]: Starting libvirt nodedev daemon admin socket...
Feb 25 07:03:30 np0005629333 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Feb 25 07:03:30 np0005629333 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Feb 25 07:03:30 np0005629333 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Feb 25 07:03:30 np0005629333 systemd[1]: Starting libvirt nodedev daemon...
Feb 25 07:03:30 np0005629333 systemd[1]: Started libvirt nodedev daemon.
Feb 25 07:03:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:03:30
Feb 25 07:03:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:03:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:03:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['backups', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', '.mgr', 'default.rgw.meta']
Feb 25 07:03:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:03:31 np0005629333 python3.9[209773]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 25 07:03:31 np0005629333 systemd[1]: Reloading.
Feb 25 07:03:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:31 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 07:03:31 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 07:03:31 np0005629333 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Feb 25 07:03:31 np0005629333 systemd[1]: Starting libvirt proxy daemon admin socket...
Feb 25 07:03:31 np0005629333 systemd[1]: Starting libvirt proxy daemon read-only socket...
Feb 25 07:03:31 np0005629333 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Feb 25 07:03:31 np0005629333 systemd[1]: Listening on libvirt proxy daemon admin socket.
Feb 25 07:03:31 np0005629333 systemd[1]: Starting libvirt proxy daemon...
Feb 25 07:03:31 np0005629333 systemd[1]: Started libvirt proxy daemon.
Feb 25 07:03:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:03:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:03:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:03:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:03:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:03:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:03:31 np0005629333 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Feb 25 07:03:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:03:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:03:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:03:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:03:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:03:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:03:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:03:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:03:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:03:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:03:31 np0005629333 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Feb 25 07:03:31 np0005629333 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Feb 25 07:03:32 np0005629333 python3.9[210000]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 25 07:03:32 np0005629333 systemd[1]: Reloading.
Feb 25 07:03:32 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 07:03:32 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 07:03:32 np0005629333 systemd[1]: Listening on libvirt locking daemon socket.
Feb 25 07:03:32 np0005629333 systemd[1]: Starting libvirt QEMU daemon socket...
Feb 25 07:03:32 np0005629333 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 25 07:03:32 np0005629333 systemd[1]: Starting Virtual Machine and Container Registration Service...
Feb 25 07:03:32 np0005629333 systemd[1]: Listening on libvirt QEMU daemon socket.
Feb 25 07:03:32 np0005629333 systemd[1]: Starting libvirt QEMU daemon admin socket...
Feb 25 07:03:32 np0005629333 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Feb 25 07:03:32 np0005629333 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Feb 25 07:03:32 np0005629333 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Feb 25 07:03:32 np0005629333 systemd[1]: Started Virtual Machine and Container Registration Service.
Feb 25 07:03:32 np0005629333 systemd[1]: Starting libvirt QEMU daemon...
Feb 25 07:03:32 np0005629333 systemd[1]: Started libvirt QEMU daemon.
Feb 25 07:03:32 np0005629333 setroubleshoot[209816]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 5958e280-b5e3-4149-95ee-c3e519b0c9f4
Feb 25 07:03:32 np0005629333 setroubleshoot[209816]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

    *****  Plugin dac_override (91.4 confidence) suggests   **********************

    If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
    Then turn on full auditing to get path information about the offending file and generate the error again.
    Do

    Turn on full auditing
    # auditctl -w /etc/shadow -p w
    Try to recreate AVC. Then execute
    # ausearch -m avc -ts recent
    If you see PATH record check ownership/permissions on file, and fix it,
    otherwise report as a bugzilla.

    *****  Plugin catchall (9.59 confidence) suggests   **************************

    If you believe that virtlogd should have the dac_read_search capability by default.
    Then you should report this as a bug.
    You can generate a local policy module to allow this access.
    Do
    allow this access for now by executing:
    # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    # semodule -X 300 -i my-virtlogd.pp
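The setroubleshoot alert above spells out the standard audit2allow workflow; collected in one place, the suggested commands are as follows (the module name my-virtlogd comes from the alert itself; only install the module after confirming the denial is benign):

    # Turn on full auditing so AVC records carry path information.
    auditctl -w /etc/shadow -p w
    # Inspect recent AVC denials for virtlogd.
    ausearch -m avc -ts recent
    # Build and install a local policy module from the raw denial records.
    ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    semodule -X 300 -i my-virtlogd.pp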
Feb 25 07:03:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:33 np0005629333 python3.9[210225]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 25 07:03:33 np0005629333 systemd[1]: Reloading.
Feb 25 07:03:33 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 07:03:33 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 07:03:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:03:33 np0005629333 systemd[1]: Starting libvirt secret daemon socket...
Feb 25 07:03:33 np0005629333 systemd[1]: Listening on libvirt secret daemon socket.
Feb 25 07:03:33 np0005629333 systemd[1]: Starting libvirt secret daemon admin socket...
Feb 25 07:03:33 np0005629333 systemd[1]: Starting libvirt secret daemon read-only socket...
Feb 25 07:03:33 np0005629333 systemd[1]: Listening on libvirt secret daemon read-only socket.
Feb 25 07:03:33 np0005629333 systemd[1]: Listening on libvirt secret daemon admin socket.
Feb 25 07:03:33 np0005629333 systemd[1]: Starting libvirt secret daemon...
Feb 25 07:03:33 np0005629333 systemd[1]: Started libvirt secret daemon.
Feb 25 07:03:34 np0005629333 python3.9[210445]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:34 np0005629333 python3.9[210598]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 25 07:03:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:35 np0005629333 python3.9[210751]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
    echo ceph
    awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
    _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
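The pipeline above echoes the cluster name and extracts the fsid from ceph.conf; a sketch of what it operates on, assuming a typical minimal ceph.conf (the fsid value here is a hypothetical placeholder):

    # Assumed /var/lib/openstack/config/ceph/ceph.conf layout:
    #   [global]
    #   fsid = 00000000-0000-0000-0000-000000000000
    # Print the text after '=' on the fsid line; xargs strips surrounding whitespace.
    awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs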
Feb 25 07:03:36 np0005629333 python3.9[210905]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 25 07:03:37 np0005629333 python3.9[211055]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:03:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:37 np0005629333 python3.9[211176]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1772021016.5729759-1136-153186266942308/.source.xml follow=False _original_basename=secret.xml.j2 checksum=70c23ea195a35102193554e976cf657b77f21a71 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:38 np0005629333 python3.9[211329]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 8ac33163-6221-5d58-9a39-8b6933fe7762
    virsh secret-define --file /tmp/secret.xml
    _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 07:03:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:03:38 np0005629333 python3.9[211491]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
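The secret.xml payload itself was not logged (content=NOT_LOGGING_PARAMETER), but the surrounding tasks show the full lifecycle: stage the file, (re)define the libvirt secret, then remove the file. A sketch with an assumed, typical ceph-usage secret body (only the UUID is taken from the log; the usage name is hypothetical):

    # Assumed shape of /tmp/secret.xml for a ceph client secret:
    #   <secret ephemeral='no' private='no'>
    #     <uuid>8ac33163-6221-5d58-9a39-8b6933fe7762</uuid>
    #     <usage type='ceph'><name>client.openstack secret</name></usage>
    #   </secret>
    virsh secret-undefine 8ac33163-6221-5d58-9a39-8b6933fe7762   # drop any stale definition
    virsh secret-define --file /tmp/secret.xml                   # register the new secret
    rm -f /tmp/secret.xml                                        # do not leave key material in /tmp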
Feb 25 07:03:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:40 np0005629333 python3.9[211957]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:41 np0005629333 python3.9[212110]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:03:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:03:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:03:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:03:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:03:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:03:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:03:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:03:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:03:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:03:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:03:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:03:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:03:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6988014389607423e-06 of space, bias 4.0, pg target 0.0032385617267528906 quantized to 16 (current 16)
Feb 25 07:03:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:03:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:03:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:03:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:03:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:03:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:03:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:03:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:03:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:03:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 07:03:42 np0005629333 python3.9[212234]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1772021021.174832-1191-198314354907548/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:42 np0005629333 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Feb 25 07:03:42 np0005629333 systemd[1]: setroubleshootd.service: Deactivated successfully.
Feb 25 07:03:42 np0005629333 python3.9[212387]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:43 np0005629333 python3.9[212540]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:03:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:03:44 np0005629333 python3.9[212619]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:44 np0005629333 python3.9[212772]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:03:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:45 np0005629333 python3.9[212851]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.rdtsol6x recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:45 np0005629333 python3.9[213004]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:03:46 np0005629333 python3.9[213083]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:47 np0005629333 python3.9[213236]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 07:03:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:47 np0005629333 python3[213390]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 25 07:03:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:03:48 np0005629333 python3.9[213543]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:03:49 np0005629333 python3.9[213622]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:49 np0005629333 python3.9[213775]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:03:50 np0005629333 python3.9[213901]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772021029.2302544-1280-135307639419736/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:50 np0005629333 python3.9[214054]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:03:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:51 np0005629333 python3.9[214133]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:51 np0005629333 python3.9[214286]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:03:52 np0005629333 python3.9[214365]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:53 np0005629333 python3.9[214518]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:03:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:03:53 np0005629333 python3.9[214644]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772021032.6209009-1319-12841526921371/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:54 np0005629333 python3.9[214797]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:03:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:03:54.984 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:03:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:03:54.986 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:03:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:03:54.986 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:03:55 np0005629333 python3.9[214950]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 07:03:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:55 np0005629333 python3.9[215106]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
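Given the markers and block content in the task above, the managed stanza written to /etc/sysconfig/nftables.conf should read as follows (validated with nft -c -f before being kept):

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK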
Feb 25 07:03:56 np0005629333 podman[215230]: 2026-02-25 12:03:56.4337503 +0000 UTC m=+0.093333250 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 07:03:56 np0005629333 podman[215231]: 2026-02-25 12:03:56.436180757 +0000 UTC m=+0.095957873 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260223)
Feb 25 07:03:56 np0005629333 python3.9[215288]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 07:03:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:57 np0005629333 python3.9[215460]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 07:03:57 np0005629333 python3.9[215615]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 07:03:58 np0005629333 python3.9[215771]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
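Taken together, the nft steps in this run form a check-then-apply pattern: validate the concatenated ruleset, load the chains, then flush and reload the rules only while the change marker exists. Condensed from the commands as logged:

    # 1. Dry-run the whole ruleset; -c checks syntax and references without touching the kernel.
    cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -
    # 2. Ensure the chains exist, then flush and reload rules and jump updates.
    nft -f /etc/nftables/edpm-chains.nft
    cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -
    # 3. Drop the marker so an unchanged ruleset is not reloaded on the next run.
    rm -f /etc/nftables/edpm-rules.nft.changed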
Feb 25 07:03:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:03:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:03:59 np0005629333 python3.9[215924]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:03:59 np0005629333 python3.9[216048]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772021038.7856672-1391-115083942981509/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:04:00 np0005629333 python3.9[216201]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:04:01 np0005629333 python3.9[216325]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772021040.0432143-1406-238965995727930/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:04:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:04:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:04:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:04:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:04:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:04:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:04:01 np0005629333 python3.9[216478]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:04:02 np0005629333 python3.9[216602]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772021041.3363237-1421-82150164032545/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:04:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:03 np0005629333 python3.9[216755]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 07:04:03 np0005629333 systemd[1]: Reloading.
Feb 25 07:04:03 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 07:04:03 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 07:04:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:04:03 np0005629333 systemd[1]: Reached target edpm_libvirt.target.
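The target file was copied by checksum only, so its content is not in this log; a minimal sketch of what a grouping target of this kind typically contains, purely as an assumption:

    # Hypothetical /etc/systemd/system/edpm_libvirt.target:
    #   [Unit]
    #   Description=EDPM libvirt target
    #   Wants=virtqemud.service virtlogd.service virtsecretd.service
    #   [Install]
    #   WantedBy=multi-user.target
    systemctl daemon-reload
    systemctl enable --now edpm_libvirt.target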
Feb 25 07:04:04 np0005629333 python3.9[216954]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 25 07:04:04 np0005629333 systemd[1]: Reloading.
Feb 25 07:04:04 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 07:04:04 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 07:04:04 np0005629333 systemd[1]: Reloading.
Feb 25 07:04:05 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 07:04:05 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 07:04:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:05 np0005629333 systemd[1]: session-49.scope: Deactivated successfully.
Feb 25 07:04:05 np0005629333 systemd[1]: session-49.scope: Consumed 3min 5.657s CPU time.
Feb 25 07:04:05 np0005629333 systemd-logind[811]: Session 49 logged out. Waiting for processes to exit.
Feb 25 07:04:05 np0005629333 systemd-logind[811]: Removed session 49.
Feb 25 07:04:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:04:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:10 np0005629333 systemd-logind[811]: New session 50 of user zuul.
Feb 25 07:04:10 np0005629333 systemd[1]: Started Session 50 of User zuul.
Feb 25 07:04:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:11 np0005629333 python3.9[217217]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 07:04:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:13 np0005629333 python3.9[217371]: ansible-ansible.builtin.service_facts Invoked
Feb 25 07:04:13 np0005629333 network[217388]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 25 07:04:13 np0005629333 network[217389]: 'network-scripts' will be removed from distribution in near future.
Feb 25 07:04:13 np0005629333 network[217390]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 25 07:04:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:04:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:16 np0005629333 python3.9[217666]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 25 07:04:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:17 np0005629333 python3.9[217751]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 25 07:04:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:04:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:23 np0005629333 python3.9[217905]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 07:04:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:04:24 np0005629333 python3.9[218058]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
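The restorecon call above is a report-only pass: -n prevents any relabeling, -v prints each mismatch, -r recurses. Applying the labels would drop the -n flag (an assumed follow-up; this log shows only the dry run):

    /usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi   # report SELinux label drift only
    /usr/sbin/restorecon -vr /etc/iscsi /var/lib/iscsi    # actually relabel (not run here)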
Feb 25 07:04:25 np0005629333 python3.9[218212]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 07:04:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:25 np0005629333 python3.9[218415]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
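iscsi-iname prints a freshly generated initiator IQN; the later stat of /etc/iscsi/initiatorname.iscsi suggests the usual pattern of persisting it there. A sketch (the exact IQN is random per invocation; the file write is an assumption based on the subsequent tasks):

    # Generate an IQN, e.g. iqn.1994-05.com.redhat:<random-suffix>.
    IQN=$(/usr/sbin/iscsi-iname)
    # Persist it where iscsid reads the initiator name.
    echo "InitiatorName=${IQN}" > /etc/iscsi/initiatorname.iscsi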
Feb 25 07:04:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:04:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:04:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:04:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:04:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:04:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:04:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:04:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:04:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:04:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:04:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:04:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:04:26 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:04:26 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:04:26 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
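The mgr-issued mon commands audited above map onto plain ceph CLI calls; for reference, the same operations from an admin shell (a sketch, not taken from this log):

    ceph config generate-minimal-conf          # minimal ceph.conf (fsid + mon_host)
    ceph auth get client.admin                 # fetch the admin keyring
    ceph auth get client.bootstrap-osd         # fetch the OSD-bootstrap keyring
    ceph osd tree destroyed --format json      # list OSDs in the 'destroyed' state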
Feb 25 07:04:26 np0005629333 podman[218665]: 2026-02-25 12:04:26.427875757 +0000 UTC m=+0.063123229 container create f805e4a8f52465f1516ed82be768462f5c958db40e2b379b909bd2ffec750212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_proskuriakova, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:04:26 np0005629333 systemd[1]: Started libpod-conmon-f805e4a8f52465f1516ed82be768462f5c958db40e2b379b909bd2ffec750212.scope.
Feb 25 07:04:26 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:04:26 np0005629333 podman[218665]: 2026-02-25 12:04:26.398265992 +0000 UTC m=+0.033513424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:04:26 np0005629333 podman[218665]: 2026-02-25 12:04:26.496914687 +0000 UTC m=+0.132162059 container init f805e4a8f52465f1516ed82be768462f5c958db40e2b379b909bd2ffec750212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_proskuriakova, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 07:04:26 np0005629333 podman[218665]: 2026-02-25 12:04:26.507072057 +0000 UTC m=+0.142319429 container start f805e4a8f52465f1516ed82be768462f5c958db40e2b379b909bd2ffec750212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_proskuriakova, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:04:26 np0005629333 python3.9[218650]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:04:26 np0005629333 podman[218665]: 2026-02-25 12:04:26.510954774 +0000 UTC m=+0.146202186 container attach f805e4a8f52465f1516ed82be768462f5c958db40e2b379b909bd2ffec750212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_proskuriakova, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:04:26 np0005629333 confident_proskuriakova[218684]: 167 167
Feb 25 07:04:26 np0005629333 systemd[1]: libpod-f805e4a8f52465f1516ed82be768462f5c958db40e2b379b909bd2ffec750212.scope: Deactivated successfully.
Feb 25 07:04:26 np0005629333 podman[218665]: 2026-02-25 12:04:26.514524402 +0000 UTC m=+0.149771744 container died f805e4a8f52465f1516ed82be768462f5c958db40e2b379b909bd2ffec750212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_proskuriakova, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:04:26 np0005629333 systemd[1]: var-lib-containers-storage-overlay-fbcfc5017babeef432d9831b94b9560eeae4209bcf4ff6a6dc4ac407930ed484-merged.mount: Deactivated successfully.
Feb 25 07:04:26 np0005629333 podman[218665]: 2026-02-25 12:04:26.55986894 +0000 UTC m=+0.195116302 container remove f805e4a8f52465f1516ed82be768462f5c958db40e2b379b909bd2ffec750212 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:04:26 np0005629333 systemd[1]: libpod-conmon-f805e4a8f52465f1516ed82be768462f5c958db40e2b379b909bd2ffec750212.scope: Deactivated successfully.
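The 07:04:26 lines show a complete throwaway-container lifecycle: attach, the container prints its output ("167 167"), then died, the overlay mount is torn down, and remove plus the conmon scope teardown follow within a few hundred milliseconds. This is cephadm probing the host with one-shot ceph containers. A hedged Python sketch of the same pattern, using podman's --rm flag (the probe command shown is illustrative; the log does not record the container's argv):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # One-shot container: podman creates, attaches to, and removes it on exit,
    # producing exactly the create/attach/died/remove sequence seen above.
    result = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "ceph", "--version"],
        capture_output=True, text=True, check=False,
    )
    print(result.returncode, result.stdout.strip())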
Feb 25 07:04:26 np0005629333 podman[218686]: 2026-02-25 12:04:26.597122676 +0000 UTC m=+0.110319388 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 25 07:04:26 np0005629333 podman[218683]: 2026-02-25 12:04:26.608580651 +0000 UTC m=+0.121971668 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
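Both health_status entries report healthy with a failing streak of 0. The mechanism is declared in each container's config_data: a 'healthcheck' whose 'test' is /openstack/healthcheck, which podman runs inside the container on a timer. A sketch of invoking the same check on demand (podman healthcheck run executes the configured test and exits 0 when healthy):

    import subprocess

    def is_healthy(container: str) -> bool:
        # Exit status 0 means the container's configured test command succeeded.
        return subprocess.run(
            ["podman", "healthcheck", "run", container]
        ).returncode == 0

    # Container names taken from the log entries above.
    for name in ("ovn_controller", "ovn_metadata_agent"):
        print(name, is_healthy(name))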
Feb 25 07:04:26 np0005629333 podman[218797]: 2026-02-25 12:04:26.699785232 +0000 UTC m=+0.038568143 container create b216f03f11ed5bc052d1465eb2d8267792897841aa76a7df69ff57cfa22010be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_grothendieck, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:04:26 np0005629333 systemd[1]: Started libpod-conmon-b216f03f11ed5bc052d1465eb2d8267792897841aa76a7df69ff57cfa22010be.scope.
Feb 25 07:04:26 np0005629333 podman[218797]: 2026-02-25 12:04:26.682429884 +0000 UTC m=+0.021212795 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:04:26 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:04:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5dce8017f3d49394c40ac6b2fefb1bdaaa1a38ffdc3ba8d3a2e3fdb8272c38b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:04:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5dce8017f3d49394c40ac6b2fefb1bdaaa1a38ffdc3ba8d3a2e3fdb8272c38b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:04:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5dce8017f3d49394c40ac6b2fefb1bdaaa1a38ffdc3ba8d3a2e3fdb8272c38b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:04:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5dce8017f3d49394c40ac6b2fefb1bdaaa1a38ffdc3ba8d3a2e3fdb8272c38b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:04:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5dce8017f3d49394c40ac6b2fefb1bdaaa1a38ffdc3ba8d3a2e3fdb8272c38b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
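The repeated xfs warnings mean the filesystems backing these container mounts were created without the XFS bigtime feature, so their inode timestamps saturate at 0x7fffffff seconds after the epoch, the classic 32-bit time_t limit. A one-line worked conversion confirming the kernel's "until 2038" wording:

    from datetime import datetime, timezone

    # 0x7fffffff = 2147483647 seconds after 1970-01-01T00:00:00Z
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00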
Feb 25 07:04:26 np0005629333 podman[218797]: 2026-02-25 12:04:26.805863971 +0000 UTC m=+0.144646912 container init b216f03f11ed5bc052d1465eb2d8267792897841aa76a7df69ff57cfa22010be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 07:04:26 np0005629333 podman[218797]: 2026-02-25 12:04:26.815095626 +0000 UTC m=+0.153878537 container start b216f03f11ed5bc052d1465eb2d8267792897841aa76a7df69ff57cfa22010be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_grothendieck, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:04:26 np0005629333 podman[218797]: 2026-02-25 12:04:26.819170548 +0000 UTC m=+0.157953459 container attach b216f03f11ed5bc052d1465eb2d8267792897841aa76a7df69ff57cfa22010be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_grothendieck, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:04:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:27 np0005629333 python3.9[218895]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772021065.9676461-90-176802165857329/.source.iscsi _original_basename=.ywt5kc2y follow=False checksum=55181439e3eb60d032fa275c8fa9fb71f3701279 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:04:27 np0005629333 eloquent_grothendieck[218814]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:04:27 np0005629333 eloquent_grothendieck[218814]: --> All data devices are unavailable
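The eloquent_grothendieck output is the tail of a ceph-volume device report: of the candidate data devices, 0 are raw physical disks and 3 are LVM logical volumes, and all are "unavailable" because, as the lvm listing at 07:04:28 shows, each LV already carries an OSD. A hedged sketch of checking availability the same way from inside a ceph container, using ceph-volume's JSON inventory (the field names below come from that inventory format):

    import json, subprocess

    out = subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout

    # Each entry reports whether ceph-volume would accept the device and why not.
    for dev in json.loads(out):
        print(dev["path"], dev["available"], dev.get("rejected_reasons", []))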
Feb 25 07:04:27 np0005629333 systemd[1]: libpod-b216f03f11ed5bc052d1465eb2d8267792897841aa76a7df69ff57cfa22010be.scope: Deactivated successfully.
Feb 25 07:04:27 np0005629333 podman[218797]: 2026-02-25 12:04:27.283247612 +0000 UTC m=+0.622030513 container died b216f03f11ed5bc052d1465eb2d8267792897841aa76a7df69ff57cfa22010be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 07:04:27 np0005629333 systemd[1]: var-lib-containers-storage-overlay-b5dce8017f3d49394c40ac6b2fefb1bdaaa1a38ffdc3ba8d3a2e3fdb8272c38b-merged.mount: Deactivated successfully.
Feb 25 07:04:27 np0005629333 podman[218797]: 2026-02-25 12:04:27.329730391 +0000 UTC m=+0.668513302 container remove b216f03f11ed5bc052d1465eb2d8267792897841aa76a7df69ff57cfa22010be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Feb 25 07:04:27 np0005629333 systemd[1]: libpod-conmon-b216f03f11ed5bc052d1465eb2d8267792897841aa76a7df69ff57cfa22010be.scope: Deactivated successfully.
Feb 25 07:04:27 np0005629333 podman[219087]: 2026-02-25 12:04:27.737958589 +0000 UTC m=+0.032339272 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:04:27 np0005629333 podman[219087]: 2026-02-25 12:04:27.898669752 +0000 UTC m=+0.193050455 container create ef7cc04943991d4e849c970f756404218af57549d81bdf5db2818544c7cccc92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_faraday, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 07:04:27 np0005629333 systemd[1]: Started libpod-conmon-ef7cc04943991d4e849c970f756404218af57549d81bdf5db2818544c7cccc92.scope.
Feb 25 07:04:27 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:04:27 np0005629333 podman[219087]: 2026-02-25 12:04:27.995039645 +0000 UTC m=+0.289420408 container init ef7cc04943991d4e849c970f756404218af57549d81bdf5db2818544c7cccc92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_faraday, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 07:04:28 np0005629333 podman[219087]: 2026-02-25 12:04:28.003438326 +0000 UTC m=+0.297819019 container start ef7cc04943991d4e849c970f756404218af57549d81bdf5db2818544c7cccc92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 07:04:28 np0005629333 podman[219087]: 2026-02-25 12:04:28.007417516 +0000 UTC m=+0.301798209 container attach ef7cc04943991d4e849c970f756404218af57549d81bdf5db2818544c7cccc92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_faraday, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:04:28 np0005629333 vigorous_faraday[219156]: 167 167
Feb 25 07:04:28 np0005629333 systemd[1]: libpod-ef7cc04943991d4e849c970f756404218af57549d81bdf5db2818544c7cccc92.scope: Deactivated successfully.
Feb 25 07:04:28 np0005629333 podman[219087]: 2026-02-25 12:04:28.009080621 +0000 UTC m=+0.303461334 container died ef7cc04943991d4e849c970f756404218af57549d81bdf5db2818544c7cccc92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Feb 25 07:04:28 np0005629333 systemd[1]: var-lib-containers-storage-overlay-3ca613b44aafd48e1107b054020dd06159a1e9da7ae6cdec56aeeddc251ca7ca-merged.mount: Deactivated successfully.
Feb 25 07:04:28 np0005629333 podman[219087]: 2026-02-25 12:04:28.051511189 +0000 UTC m=+0.345891892 container remove ef7cc04943991d4e849c970f756404218af57549d81bdf5db2818544c7cccc92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:04:28 np0005629333 systemd[1]: libpod-conmon-ef7cc04943991d4e849c970f756404218af57549d81bdf5db2818544c7cccc92.scope: Deactivated successfully.
Feb 25 07:04:28 np0005629333 python3.9[219153]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:04:28 np0005629333 podman[219204]: 2026-02-25 12:04:28.236543533 +0000 UTC m=+0.053165805 container create ea9a230fbc9fb9ebf4e9c8c64595e2089f418087350760a547673e00375a8e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 07:04:28 np0005629333 systemd[1]: Started libpod-conmon-ea9a230fbc9fb9ebf4e9c8c64595e2089f418087350760a547673e00375a8e59.scope.
Feb 25 07:04:28 np0005629333 podman[219204]: 2026-02-25 12:04:28.21029078 +0000 UTC m=+0.026913092 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:04:28 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:04:28 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d21a5787a237e98ca6e15f6b8901fbdfd01d68878650a1f417d3a989890511ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:04:28 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d21a5787a237e98ca6e15f6b8901fbdfd01d68878650a1f417d3a989890511ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:04:28 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d21a5787a237e98ca6e15f6b8901fbdfd01d68878650a1f417d3a989890511ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:04:28 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d21a5787a237e98ca6e15f6b8901fbdfd01d68878650a1f417d3a989890511ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:04:28 np0005629333 podman[219204]: 2026-02-25 12:04:28.366057907 +0000 UTC m=+0.182680209 container init ea9a230fbc9fb9ebf4e9c8c64595e2089f418087350760a547673e00375a8e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lamarr, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:04:28 np0005629333 podman[219204]: 2026-02-25 12:04:28.374644523 +0000 UTC m=+0.191266805 container start ea9a230fbc9fb9ebf4e9c8c64595e2089f418087350760a547673e00375a8e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 07:04:28 np0005629333 podman[219204]: 2026-02-25 12:04:28.378510429 +0000 UTC m=+0.195132751 container attach ea9a230fbc9fb9ebf4e9c8c64595e2089f418087350760a547673e00375a8e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lamarr, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 07:04:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]: {
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:    "0": [
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:        {
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "devices": [
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "/dev/loop3"
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            ],
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "lv_name": "ceph_lv0",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "lv_size": "21470642176",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "name": "ceph_lv0",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "tags": {
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.cluster_name": "ceph",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.crush_device_class": "",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.encrypted": "0",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.objectstore": "bluestore",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.osd_id": "0",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.type": "block",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.vdo": "0",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.with_tpm": "0"
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            },
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "type": "block",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "vg_name": "ceph_vg0"
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:        }
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:    ],
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:    "1": [
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:        {
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "devices": [
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "/dev/loop4"
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            ],
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "lv_name": "ceph_lv1",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "lv_size": "21470642176",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "name": "ceph_lv1",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "tags": {
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.cluster_name": "ceph",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.crush_device_class": "",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.encrypted": "0",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.objectstore": "bluestore",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.osd_id": "1",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.type": "block",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.vdo": "0",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.with_tpm": "0"
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            },
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "type": "block",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "vg_name": "ceph_vg1"
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:        }
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:    ],
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:    "2": [
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:        {
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "devices": [
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "/dev/loop5"
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            ],
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "lv_name": "ceph_lv2",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "lv_size": "21470642176",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "name": "ceph_lv2",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "tags": {
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.cluster_name": "ceph",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.crush_device_class": "",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.encrypted": "0",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.objectstore": "bluestore",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.osd_id": "2",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.type": "block",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.vdo": "0",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:                "ceph.with_tpm": "0"
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            },
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "type": "block",
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:            "vg_name": "ceph_vg2"
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:        }
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]:    ]
Feb 25 07:04:28 np0005629333 gallant_lamarr[219246]: }
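The gallant_lamarr block above is a ceph-volume LVM listing in JSON, keyed by OSD id, with one LV per OSD and the OSD's identity stored both in the flat lv_tags string and in the parsed tags map. A sketch of consuming it, assuming the blob has been captured to a file (the filename is hypothetical):

    import json

    raw = open("lvm_list.json").read()  # hypothetical capture of the block above
    listing = json.loads(raw)

    # Map each OSD id to its logical volume and OSD fsid.
    for osd_id, lvs in sorted(listing.items()):
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"])
    # 0 /dev/ceph_vg0/ceph_lv0 d19afe3c-7923-4776-bcc2-88886150b441
    # 1 /dev/ceph_vg1/ceph_lv1 a25b4fc6-1504-44d3-aca7-62c5ef316350
    # 2 /dev/ceph_vg2/ceph_lv2 f84d59d3-cae3-44c8-8bca-9fa4643cfc60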
Feb 25 07:04:28 np0005629333 systemd[1]: libpod-ea9a230fbc9fb9ebf4e9c8c64595e2089f418087350760a547673e00375a8e59.scope: Deactivated successfully.
Feb 25 07:04:28 np0005629333 podman[219204]: 2026-02-25 12:04:28.711743622 +0000 UTC m=+0.528366004 container died ea9a230fbc9fb9ebf4e9c8c64595e2089f418087350760a547673e00375a8e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lamarr, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:04:28 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d21a5787a237e98ca6e15f6b8901fbdfd01d68878650a1f417d3a989890511ff-merged.mount: Deactivated successfully.
Feb 25 07:04:28 np0005629333 podman[219204]: 2026-02-25 12:04:28.760190336 +0000 UTC m=+0.576812608 container remove ea9a230fbc9fb9ebf4e9c8c64595e2089f418087350760a547673e00375a8e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:04:28 np0005629333 systemd[1]: libpod-conmon-ea9a230fbc9fb9ebf4e9c8c64595e2089f418087350760a547673e00375a8e59.scope: Deactivated successfully.
Feb 25 07:04:28 np0005629333 python3.9[219364]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
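This lineinfile call enforces exactly one node.session.auth.chap_algs line in /etc/iscsi/iscsid.conf: a line matching the regexp is replaced in place, otherwise the new line is inserted after the commented #node.session.auth.chap.algs template. A minimal re-based sketch of that replace-or-insert semantics (not the module's implementation):

    import re

    LINE = "node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5"

    def ensure_chap_algs(text: str) -> str:
        # Replace an existing setting in place...
        if re.search(r"^node\.session\.auth\.chap_algs", text, re.M):
            return re.sub(r"^node\.session\.auth\.chap_algs.*$",
                          LINE, text, count=1, flags=re.M)
        # ...or insert one right after the commented template line.
        return re.sub(r"^(#node\.session\.auth\.chap\.algs.*)$",
                      r"\1\n" + LINE, text, count=1, flags=re.M)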
Feb 25 07:04:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:29 np0005629333 podman[219496]: 2026-02-25 12:04:29.243181981 +0000 UTC m=+0.033467043 container create b6170e13541a755f581f60c1076c64ca43487c76f38ef7c214e2d9f895e82388 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 07:04:29 np0005629333 systemd[1]: Started libpod-conmon-b6170e13541a755f581f60c1076c64ca43487c76f38ef7c214e2d9f895e82388.scope.
Feb 25 07:04:29 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:04:29 np0005629333 podman[219496]: 2026-02-25 12:04:29.295521171 +0000 UTC m=+0.085806213 container init b6170e13541a755f581f60c1076c64ca43487c76f38ef7c214e2d9f895e82388 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:04:29 np0005629333 podman[219496]: 2026-02-25 12:04:29.299780939 +0000 UTC m=+0.090065981 container start b6170e13541a755f581f60c1076c64ca43487c76f38ef7c214e2d9f895e82388 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:04:29 np0005629333 podman[219496]: 2026-02-25 12:04:29.302269767 +0000 UTC m=+0.092554809 container attach b6170e13541a755f581f60c1076c64ca43487c76f38ef7c214e2d9f895e82388 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wright, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 07:04:29 np0005629333 stoic_wright[219523]: 167 167
Feb 25 07:04:29 np0005629333 systemd[1]: libpod-b6170e13541a755f581f60c1076c64ca43487c76f38ef7c214e2d9f895e82388.scope: Deactivated successfully.
Feb 25 07:04:29 np0005629333 podman[219496]: 2026-02-25 12:04:29.30384363 +0000 UTC m=+0.094128702 container died b6170e13541a755f581f60c1076c64ca43487c76f38ef7c214e2d9f895e82388 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wright, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 07:04:29 np0005629333 systemd[1]: var-lib-containers-storage-overlay-77fbac16df605b0fc185acc259f87f700ce5e8aac92b59298193c060b742e3a8-merged.mount: Deactivated successfully.
Feb 25 07:04:29 np0005629333 podman[219496]: 2026-02-25 12:04:29.228490776 +0000 UTC m=+0.018775838 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:04:29 np0005629333 podman[219496]: 2026-02-25 12:04:29.336546751 +0000 UTC m=+0.126831813 container remove b6170e13541a755f581f60c1076c64ca43487c76f38ef7c214e2d9f895e82388 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wright, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Feb 25 07:04:29 np0005629333 systemd[1]: libpod-conmon-b6170e13541a755f581f60c1076c64ca43487c76f38ef7c214e2d9f895e82388.scope: Deactivated successfully.
Feb 25 07:04:29 np0005629333 podman[219549]: 2026-02-25 12:04:29.474251231 +0000 UTC m=+0.054505911 container create eb462687c1702c36005f6e96c4ff496b097ac310f628fe6172aca24510c861a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 07:04:29 np0005629333 systemd[1]: Started libpod-conmon-eb462687c1702c36005f6e96c4ff496b097ac310f628fe6172aca24510c861a9.scope.
Feb 25 07:04:29 np0005629333 podman[219549]: 2026-02-25 12:04:29.450627071 +0000 UTC m=+0.030881801 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:04:29 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:04:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5468a1054ae03fdfc052f3eb673973b723bb176c3fe73f45eec4204eaf73464d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:04:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5468a1054ae03fdfc052f3eb673973b723bb176c3fe73f45eec4204eaf73464d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:04:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5468a1054ae03fdfc052f3eb673973b723bb176c3fe73f45eec4204eaf73464d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:04:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5468a1054ae03fdfc052f3eb673973b723bb176c3fe73f45eec4204eaf73464d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:04:29 np0005629333 podman[219549]: 2026-02-25 12:04:29.586681386 +0000 UTC m=+0.166936086 container init eb462687c1702c36005f6e96c4ff496b097ac310f628fe6172aca24510c861a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kalam, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:04:29 np0005629333 podman[219549]: 2026-02-25 12:04:29.602358177 +0000 UTC m=+0.182612847 container start eb462687c1702c36005f6e96c4ff496b097ac310f628fe6172aca24510c861a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kalam, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 07:04:29 np0005629333 podman[219549]: 2026-02-25 12:04:29.606203193 +0000 UTC m=+0.186457873 container attach eb462687c1702c36005f6e96c4ff496b097ac310f628fe6172aca24510c861a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 07:04:30 np0005629333 python3.9[219646]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 07:04:30 np0005629333 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Feb 25 07:04:30 np0005629333 lvm[219759]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:04:30 np0005629333 lvm[219759]: VG ceph_vg0 finished
Feb 25 07:04:30 np0005629333 lvm[219772]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:04:30 np0005629333 lvm[219772]: VG ceph_vg1 finished
Feb 25 07:04:30 np0005629333 lvm[219782]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:04:30 np0005629333 lvm[219782]: VG ceph_vg2 finished
Feb 25 07:04:30 np0005629333 lvm[219792]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:04:30 np0005629333 lvm[219792]: VG ceph_vg1 finished
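The lvm[] messages are event-driven autoactivation: as each loop device's PV appears, pvscan checks whether its volume group is complete and activates it; the second ceph_vg1 report appears to be a repeated event for an already-complete VG and is harmless. A sketch of confirming the PV-to-VG layout afterwards with lvm's JSON reporting (root privileges assumed):

    import json, subprocess

    out = subprocess.run(
        ["pvs", "--reportformat", "json", "-o", "pv_name,vg_name"],
        capture_output=True, text=True, check=True,
    ).stdout

    # lvm wraps the report rows as report[0]["pv"].
    for pv in json.loads(out)["report"][0]["pv"]:
        print(pv["pv_name"], "->", pv["vg_name"])
    # expected here: /dev/loop3 -> ceph_vg0, /dev/loop4 -> ceph_vg1, /dev/loop5 -> ceph_vg2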
Feb 25 07:04:30 np0005629333 funny_kalam[219582]: {}
Feb 25 07:04:30 np0005629333 systemd[1]: libpod-eb462687c1702c36005f6e96c4ff496b097ac310f628fe6172aca24510c861a9.scope: Deactivated successfully.
Feb 25 07:04:30 np0005629333 systemd[1]: libpod-eb462687c1702c36005f6e96c4ff496b097ac310f628fe6172aca24510c861a9.scope: Consumed 1.105s CPU time.
Feb 25 07:04:30 np0005629333 podman[219549]: 2026-02-25 12:04:30.376585479 +0000 UTC m=+0.956840149 container died eb462687c1702c36005f6e96c4ff496b097ac310f628fe6172aca24510c861a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:04:30 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5468a1054ae03fdfc052f3eb673973b723bb176c3fe73f45eec4204eaf73464d-merged.mount: Deactivated successfully.
Feb 25 07:04:30 np0005629333 podman[219549]: 2026-02-25 12:04:30.426968136 +0000 UTC m=+1.007222816 container remove eb462687c1702c36005f6e96c4ff496b097ac310f628fe6172aca24510c861a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kalam, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 07:04:30 np0005629333 systemd[1]: libpod-conmon-eb462687c1702c36005f6e96c4ff496b097ac310f628fe6172aca24510c861a9.scope: Deactivated successfully.
Feb 25 07:04:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:04:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:04:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:04:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:04:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:04:30
Feb 25 07:04:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:04:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:04:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'images', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', '.rgw.root', 'backups', 'default.rgw.log']
Feb 25 07:04:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
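This balancer pass is a no-op: running in upmap mode with a 5% misplaced ceiling, it evaluated the eleven listed pools and prepared 0 of at most 10 upmap changes, meaning PG placement is already even. A sketch of querying the same state from the CLI (ceph balancer status and --format json exist in the ceph CLI; the exact output keys can vary by release):

    import json, subprocess

    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    status = json.loads(out)
    print(status.get("mode"), status.get("active"))  # e.g. "upmap" True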
Feb 25 07:04:30 np0005629333 python3.9[219918]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 07:04:30 np0005629333 systemd[1]: Reloading.
Feb 25 07:04:30 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Feb 25 07:04:30 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 07:04:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:31 np0005629333 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Feb 25 07:04:31 np0005629333 systemd[1]: Starting Open-iSCSI...
Feb 25 07:04:31 np0005629333 kernel: Loading iSCSI transport class v2.0-870.
Feb 25 07:04:31 np0005629333 systemd[1]: Started Open-iSCSI.
Feb 25 07:04:31 np0005629333 systemd[1]: Starting Logout of all iSCSI sessions on shutdown...
Feb 25 07:04:31 np0005629333 systemd[1]: Finished Logout of all iSCSI sessions on shutdown.
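The iscsid startup sequence above (07:04:30-07:04:31) was driven by the systemd_service invocation logged for python3.9[219918]. A minimal Ansible reconstruction of that task, using only the parameters visible in the log (the task name and its surrounding play are assumptions):

- name: Enable and start iscsid          # name assumed; module and params from the log
  ansible.builtin.systemd_service:
    name: iscsid
    enabled: true
    state: started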
Feb 25 07:04:31 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:04:31 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:04:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:04:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:04:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:04:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:04:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:04:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:04:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:04:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:04:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:04:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:04:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:04:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:04:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:04:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:04:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:04:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:04:32 np0005629333 python3.9[220124]: ansible-ansible.builtin.service_facts Invoked
Feb 25 07:04:32 np0005629333 network[220141]: You are using the 'network' service provided by 'network-scripts', which is now deprecated.
Feb 25 07:04:32 np0005629333 network[220142]: 'network-scripts' will be removed from the distribution in the near future.
Feb 25 07:04:32 np0005629333 network[220143]: It is advised to switch to 'NetworkManager' for network management instead.
Feb 25 07:04:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:04:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:36 np0005629333 python3.9[220417]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
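The dnf invocation above installs the multipath tooling that the following tasks configure. A sketch of the equivalent playbook task, assuming the defaults shown in the logged parameters (the log records the ansible.legacy.dnf alias; ansible.builtin.dnf is the same module):

- name: Install device-mapper-multipath   # name assumed; package and state from the log
  ansible.builtin.dnf:
    name: device-mapper-multipath
    state: present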
Feb 25 07:04:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:38 np0005629333 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 25 07:04:38 np0005629333 systemd[1]: Starting man-db-cache-update.service...
Feb 25 07:04:38 np0005629333 systemd[1]: Reloading.
Feb 25 07:04:38 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 07:04:38 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file to make it safer and more robust.
Feb 25 07:04:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:04:38 np0005629333 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 25 07:04:38 np0005629333 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 25 07:04:38 np0005629333 systemd[1]: Finished man-db-cache-update.service.
Feb 25 07:04:38 np0005629333 systemd[1]: run-rfdbde1b423044981a1b1625bf55a2c01.service: Deactivated successfully.
Feb 25 07:04:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:39 np0005629333 python3.9[220750]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb 25 07:04:40 np0005629333 python3.9[220903]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Feb 25 07:04:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:41 np0005629333 python3.9[221060]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:04:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:04:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:04:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:04:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:04:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:04:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:04:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:04:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:04:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:04:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:04:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:04:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:04:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6988014389607423e-06 of space, bias 4.0, pg target 0.0032385617267528906 quantized to 16 (current 16)
Feb 25 07:04:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:04:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:04:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:04:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:04:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:04:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:04:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:04:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:04:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:04:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 07:04:42 np0005629333 python3.9[221184]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772021081.0467532-178-99253334816980/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:04:42 np0005629333 python3.9[221337]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
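The invocations at 07:04:39-07:04:42 form one unit of work: create /etc/modules-load.d, load dm-multipath immediately, and persist it for the next boot both via systemd's modules-load.d and via /etc/modules. A sketch of those four tasks, reconstructed from the logged parameters (task names and the .conf file body are assumptions; the log records only the file's checksum):

- name: Ensure /etc/modules-load.d exists
  ansible.builtin.file:
    path: /etc/modules-load.d
    state: directory
    mode: "0755"

- name: Load dm-multipath now
  community.general.modprobe:
    name: dm-multipath
    state: present

- name: Persist dm-multipath for systemd-modules-load
  ansible.builtin.copy:
    dest: /etc/modules-load.d/dm-multipath.conf
    content: "dm-multipath\n"   # assumed content; only the checksum appears in the log
    mode: "0644"

- name: Persist dm-multipath in /etc/modules
  ansible.builtin.lineinfile:
    path: /etc/modules
    line: dm-multipath
    create: true
    mode: "0644"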
Feb 25 07:04:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:04:43 np0005629333 python3.9[221490]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 25 07:04:43 np0005629333 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 25 07:04:43 np0005629333 systemd[1]: Stopped Load Kernel Modules.
Feb 25 07:04:43 np0005629333 systemd[1]: Stopping Load Kernel Modules...
Feb 25 07:04:43 np0005629333 systemd[1]: Starting Load Kernel Modules...
Feb 25 07:04:43 np0005629333 systemd[1]: Finished Load Kernel Modules.
Feb 25 07:04:44 np0005629333 python3.9[221647]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 07:04:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:45 np0005629333 python3.9[221801]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 07:04:46 np0005629333 python3.9[221954]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:04:46 np0005629333 python3.9[222078]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772021085.8507857-229-15022428263582/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:04:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:47 np0005629333 python3.9[222231]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 07:04:48 np0005629333 python3.9[222385]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:04:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:04:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:49 np0005629333 python3.9[222538]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:04:49 np0005629333 python3.9[222691]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:04:50 np0005629333 python3.9[222844]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:04:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:51 np0005629333 python3.9[222997]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:04:52 np0005629333 python3.9[223150]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:04:52 np0005629333 python3.9[223303]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
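Taken together, the grep/lineinfile/replace edits above (07:04:47-07:04:52) converge on an /etc/multipath.conf whose relevant sections look roughly like the fragment below. This is a reconstruction from the regexes and lines in the logged invocations, not the deployed file, which contains more than is shown:

defaults {
        find_multipaths yes
        recheck_wwid yes
        skip_kpartx yes
        user_friendly_names no
}

blacklist {
}

The empty blacklist block is the intended end state: the replace at 07:04:49 strips a pre-existing catch-all devnode ".*" entry so that real device paths are no longer blacklisted.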
Feb 25 07:04:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:53 np0005629333 python3.9[223456]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 07:04:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:04:54 np0005629333 python3.9[223611]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 07:04:54 np0005629333 python3.9[223765]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 07:04:54 np0005629333 systemd[1]: Listening on multipathd control socket.
Feb 25 07:04:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:04:54.985 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:04:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:04:54.986 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:04:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:04:54.987 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:04:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:55 np0005629333 python3.9[223922]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 07:04:55 np0005629333 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Feb 25 07:04:55 np0005629333 udevadm[223927]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Feb 25 07:04:55 np0005629333 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Feb 25 07:04:55 np0005629333 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb 25 07:04:55 np0005629333 multipathd[223931]: --------start up--------
Feb 25 07:04:55 np0005629333 multipathd[223931]: read /etc/multipath.conf
Feb 25 07:04:55 np0005629333 multipathd[223931]: path checkers start up
Feb 25 07:04:55 np0005629333 systemd[1]: Started Device-Mapper Multipath Device Controller.
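multipathd is socket-activated, so the play starts the control socket first and then the service; the udev-settle deprecation warning at 07:04:55 comes from the stock multipathd.service unit ordering, not from these tasks. A sketch of the two logged systemd_service tasks (task names assumed):

- name: Enable and start the multipathd control socket
  ansible.builtin.systemd_service:
    name: multipathd.socket
    enabled: true
    state: started

- name: Enable and start multipathd
  ansible.builtin.systemd_service:
    name: multipathd
    enabled: true
    state: started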
Feb 25 07:04:56 np0005629333 podman[224042]: 2026-02-25 12:04:56.756108203 +0000 UTC m=+0.086253223 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:04:56 np0005629333 podman[224058]: 2026-02-25 12:04:56.813954514 +0000 UTC m=+0.144442504 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:04:56 np0005629333 python3.9[224133]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb 25 07:04:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:57 np0005629333 python3.9[224289]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Feb 25 07:04:57 np0005629333 kernel: Key type psk registered
Feb 25 07:04:58 np0005629333 python3.9[224453]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:04:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:04:59 np0005629333 python3.9[224577]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772021097.940836-359-43362096783880/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:04:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:04:59 np0005629333 python3.9[224730]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:05:00 np0005629333 python3.9[224883]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 25 07:05:00 np0005629333 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 25 07:05:00 np0005629333 systemd[1]: Stopped Load Kernel Modules.
Feb 25 07:05:00 np0005629333 systemd[1]: Stopping Load Kernel Modules...
Feb 25 07:05:00 np0005629333 systemd[1]: Starting Load Kernel Modules...
Feb 25 07:05:00 np0005629333 systemd[1]: Finished Load Kernel Modules.
Feb 25 07:05:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:05:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:05:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:05:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:05:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:05:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:05:01 np0005629333 python3.9[225040]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
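The NVMe-over-fabrics block (07:04:56-07:05:01) mirrors the dm-multipath block above: load the module, persist it in /etc/modules-load.d and /etc/modules, then install the userspace tooling. The two distinctive tasks, reconstructed from the logged parameters (task names assumed):

- name: Load nvme-fabrics now
  community.general.modprobe:
    name: nvme-fabrics
    state: present

- name: Install nvme-cli
  ansible.builtin.dnf:
    name: nvme-cli
    state: present

The kernel's "Key type psk registered" line at 07:04:57 is likely a dependency pulled in by the module load rather than a separate action.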
Feb 25 07:05:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:05:04 np0005629333 systemd[1]: Reloading.
Feb 25 07:05:04 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 07:05:04 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file to make it safer and more robust.
Feb 25 07:05:04 np0005629333 systemd[1]: Reloading.
Feb 25 07:05:04 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file to make it safer and more robust.
Feb 25 07:05:04 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 07:05:04 np0005629333 virtsecretd[210274]: libvirt version: 11.10.0, package: 4.el9 (builder@centos.org, 2026-01-29-15:25:17, )
Feb 25 07:05:04 np0005629333 virtsecretd[210274]: hostname: compute-0
Feb 25 07:05:04 np0005629333 virtsecretd[210274]: nl_recv returned with error: No buffer space available
Feb 25 07:05:04 np0005629333 systemd-logind[811]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 25 07:05:05 np0005629333 systemd-logind[811]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb 25 07:05:05 np0005629333 lvm[225168]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:05:05 np0005629333 lvm[225168]: VG ceph_vg0 finished
Feb 25 07:05:05 np0005629333 lvm[225169]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:05:05 np0005629333 lvm[225169]: VG ceph_vg1 finished
Feb 25 07:05:05 np0005629333 lvm[225170]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:05:05 np0005629333 lvm[225170]: VG ceph_vg2 finished
Feb 25 07:05:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:05 np0005629333 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 25 07:05:05 np0005629333 systemd[1]: Starting man-db-cache-update.service...
Feb 25 07:05:05 np0005629333 systemd[1]: Reloading.
Feb 25 07:05:05 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file to make it safer and more robust.
Feb 25 07:05:05 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 07:05:05 np0005629333 systemd[1]: Queuing reload/restart jobs for marked units…
Feb 25 07:05:06 np0005629333 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 25 07:05:06 np0005629333 systemd[1]: Finished man-db-cache-update.service.
Feb 25 07:05:06 np0005629333 systemd[1]: man-db-cache-update.service: Consumed 1.186s CPU time.
Feb 25 07:05:06 np0005629333 systemd[1]: run-rd3cf8fcc364c4c3696716381d72afac1.service: Deactivated successfully.
Feb 25 07:05:06 np0005629333 python3.9[226539]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 25 07:05:07 np0005629333 systemd[1]: Stopping Open-iSCSI...
Feb 25 07:05:07 np0005629333 iscsid[219968]: iscsid shutting down.
Feb 25 07:05:07 np0005629333 systemd[1]: iscsid.service: Deactivated successfully.
Feb 25 07:05:07 np0005629333 systemd[1]: Stopped Open-iSCSI.
Feb 25 07:05:07 np0005629333 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Feb 25 07:05:07 np0005629333 systemd[1]: Starting Open-iSCSI...
Feb 25 07:05:07 np0005629333 systemd[1]: Started Open-iSCSI.
Feb 25 07:05:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:07 np0005629333 python3.9[226696]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 25 07:05:07 np0005629333 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Feb 25 07:05:07 np0005629333 multipathd[223931]: exit (signal)
Feb 25 07:05:07 np0005629333 multipathd[223931]: --------shut down-------
Feb 25 07:05:07 np0005629333 systemd[1]: multipathd.service: Deactivated successfully.
Feb 25 07:05:07 np0005629333 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Feb 25 07:05:07 np0005629333 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb 25 07:05:07 np0005629333 multipathd[226702]: --------start up--------
Feb 25 07:05:07 np0005629333 multipathd[226702]: read /etc/multipath.conf
Feb 25 07:05:08 np0005629333 multipathd[226702]: path checkers start up
Feb 25 07:05:08 np0005629333 systemd[1]: Started Device-Mapper Multipath Device Controller.
Feb 25 07:05:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:05:08 np0005629333 python3.9[226860]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 25 07:05:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:09 np0005629333 python3.9[227017]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:05:10 np0005629333 python3.9[227170]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 25 07:05:10 np0005629333 systemd[1]: Reloading.
Feb 25 07:05:10 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 07:05:10 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file to make it safer and more robust.
Feb 25 07:05:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:11 np0005629333 python3.9[227362]: ansible-ansible.builtin.service_facts Invoked
Feb 25 07:05:11 np0005629333 network[227379]: You are using the 'network' service provided by 'network-scripts', which is now deprecated.
Feb 25 07:05:11 np0005629333 network[227380]: 'network-scripts' will be removed from the distribution in the near future.
Feb 25 07:05:11 np0005629333 network[227381]: It is advised to switch to 'NetworkManager' for network management instead.
Feb 25 07:05:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:05:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:15 np0005629333 python3.9[227656]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 07:05:16 np0005629333 python3.9[227810]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 07:05:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:17 np0005629333 python3.9[227964]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 07:05:18 np0005629333 python3.9[228118]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 07:05:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:05:18 np0005629333 python3.9[228272]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 07:05:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:19 np0005629333 python3.9[228426]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 07:05:20 np0005629333 python3.9[228580]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 07:05:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:21 np0005629333 python3.9[228734]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
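The eight stop/disable invocations above (07:05:15-07:05:21) retire the legacy tripleo_nova services as this node is migrated off the TripleO deployment. The log shows one systemd_service call per unit; whether the playbook used eight tasks or one loop is not visible, but a loop is the compact equivalent:

- name: Stop and disable legacy tripleo_nova services
  ansible.builtin.systemd_service:
    name: "{{ item }}"
    enabled: false
    state: stopped
  loop:
    - tripleo_nova_compute.service
    - tripleo_nova_migration_target.service
    - tripleo_nova_api_cron.service
    - tripleo_nova_api.service
    - tripleo_nova_conductor.service
    - tripleo_nova_metadata.service
    - tripleo_nova_scheduler.service
    - tripleo_nova_vnc_proxy.service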
Feb 25 07:05:22 np0005629333 python3.9[228888]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:05:22 np0005629333 python3.9[229041]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:05:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:23 np0005629333 python3.9[229194]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:05:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:05:24 np0005629333 python3.9[229347]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:05:24 np0005629333 python3.9[229500]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:05:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:25 np0005629333 python3.9[229653]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:05:26 np0005629333 python3.9[229806]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:05:26 np0005629333 python3.9[229959]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
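After stopping the units, the play deletes their unit files from /usr/lib/systemd/system; the same removal is then repeated for /etc/systemd/system at 07:05:27-07:05:31 below. A loop-form sketch of the removal (the log again shows one file call per unit):

- name: Remove legacy tripleo_nova unit files
  ansible.builtin.file:
    path: "/usr/lib/systemd/system/{{ item }}"
    state: absent
  loop:
    - tripleo_nova_compute.service
    - tripleo_nova_migration_target.service
    - tripleo_nova_api_cron.service
    - tripleo_nova_api.service
    - tripleo_nova_conductor.service
    - tripleo_nova_metadata.service
    - tripleo_nova_scheduler.service
    - tripleo_nova_vnc_proxy.service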
Feb 25 07:05:27 np0005629333 podman[229960]: 2026-02-25 12:05:27.009308313 +0000 UTC m=+0.092947368 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 25 07:05:27 np0005629333 podman[229966]: 2026-02-25 12:05:27.053067986 +0000 UTC m=+0.137186354 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 07:05:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:27 np0005629333 python3.9[230157]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:05:28 np0005629333 python3.9[230310]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:05:28 np0005629333 python3.9[230463]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:05:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:05:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:29 np0005629333 python3.9[230616]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:05:29 np0005629333 python3.9[230769]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:05:30 np0005629333 systemd[1]: virtnodedevd.service: Deactivated successfully.
Feb 25 07:05:30 np0005629333 python3.9[230922]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:05:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:05:30
Feb 25 07:05:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:05:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:05:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'vms', 'volumes', 'default.rgw.meta', 'images']
Feb 25 07:05:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:05:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:31 np0005629333 python3.9[231140]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:05:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:05:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:05:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:05:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:05:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:05:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:05:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:05:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:05:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:05:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:05:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:05:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:05:31 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:05:31 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:05:31 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
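[editor's note] The handle_command/audit pairs above are the cephadm mgr refreshing host state: config generate-minimal-conf, auth get for client.admin and client.bootstrap-osd, a config-key set for the OSD removal queue (the monitor leaves that command's payload out of the audit line), and an osd tree query for destroyed OSDs. The unprefixed repeats are the monitor echoing the same audit entries into its own log. A sketch of issuing one of these commands through the librados Python binding, assuming python3-rados and admin credentials:

    import json
    import rados

    # Sketch: send the same mon command the mgr dispatches above.
    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "config generate-minimal-conf"}), b"")
        print(out.decode())   # minimal ceph.conf (fsid + mon_host)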
Feb 25 07:05:31 np0005629333 systemd[1]: virtproxyd.service: Deactivated successfully.
Feb 25 07:05:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:05:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:05:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:05:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:05:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:05:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:05:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:05:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:05:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:05:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:05:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:05:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:05:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:05:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:05:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:05:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:05:31 np0005629333 podman[231365]: 2026-02-25 12:05:31.719857305 +0000 UTC m=+0.111723933 container create 318583a4c6c48d7623f27a09af5a839f4258624a394f1b30bb12f596d4d165c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:05:31 np0005629333 podman[231365]: 2026-02-25 12:05:31.629029567 +0000 UTC m=+0.020896235 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:05:31 np0005629333 systemd[1]: Started libpod-conmon-318583a4c6c48d7623f27a09af5a839f4258624a394f1b30bb12f596d4d165c4.scope.
Feb 25 07:05:31 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:05:31 np0005629333 podman[231365]: 2026-02-25 12:05:31.827043993 +0000 UTC m=+0.218910661 container init 318583a4c6c48d7623f27a09af5a839f4258624a394f1b30bb12f596d4d165c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 07:05:31 np0005629333 podman[231365]: 2026-02-25 12:05:31.833046178 +0000 UTC m=+0.224912776 container start 318583a4c6c48d7623f27a09af5a839f4258624a394f1b30bb12f596d4d165c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_thompson, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 07:05:31 np0005629333 strange_thompson[231390]: 167 167
Feb 25 07:05:31 np0005629333 systemd[1]: libpod-318583a4c6c48d7623f27a09af5a839f4258624a394f1b30bb12f596d4d165c4.scope: Deactivated successfully.
Feb 25 07:05:31 np0005629333 conmon[231390]: conmon 318583a4c6c48d7623f2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-318583a4c6c48d7623f27a09af5a839f4258624a394f1b30bb12f596d4d165c4.scope/container/memory.events
Feb 25 07:05:31 np0005629333 podman[231365]: 2026-02-25 12:05:31.893731027 +0000 UTC m=+0.285597715 container attach 318583a4c6c48d7623f27a09af5a839f4258624a394f1b30bb12f596d4d165c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_thompson, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:05:31 np0005629333 podman[231365]: 2026-02-25 12:05:31.894621252 +0000 UTC m=+0.286487880 container died 318583a4c6c48d7623f27a09af5a839f4258624a394f1b30bb12f596d4d165c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 07:05:31 np0005629333 python3.9[231387]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:05:31 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a7f8c67eca8a9cfb8d4352e7596c0452f0adfed824092fd8881d2e1ace3439ae-merged.mount: Deactivated successfully.
Feb 25 07:05:31 np0005629333 podman[231365]: 2026-02-25 12:05:31.959894727 +0000 UTC m=+0.351761355 container remove 318583a4c6c48d7623f27a09af5a839f4258624a394f1b30bb12f596d4d165c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_thompson, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 07:05:31 np0005629333 systemd[1]: libpod-conmon-318583a4c6c48d7623f27a09af5a839f4258624a394f1b30bb12f596d4d165c4.scope: Deactivated successfully.
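[editor's note] strange_thompson is one of several one-shot cephadm helper containers in this section (create, init, start, died, remove in under a second); its only output, "167 167", is the ceph uid/gid pair used in the ceph images. The conmon memory.events warning is benign, a side effect of how quickly the scope exits. A hypothetical reproduction of such a probe; the stat target is an assumption, since the log records only the output:

    import subprocess

    # Hypothetical probe: the log shows only "167 167"; stat on /var/lib/ceph
    # is a guess at what the helper actually ran.
    image = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    print(subprocess.check_output(
        ["podman", "run", "--rm", image, "stat", "-c", "%u %g",
         "/var/lib/ceph"]).decode().strip())   # -> 167 167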
Feb 25 07:05:32 np0005629333 podman[231438]: 2026-02-25 12:05:32.106442888 +0000 UTC m=+0.051629841 container create 2d0cdb943255af218357454d097875e67b9370aca341dcdff4eb2ca738435d02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_kirch, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 07:05:32 np0005629333 systemd[1]: Started libpod-conmon-2d0cdb943255af218357454d097875e67b9370aca341dcdff4eb2ca738435d02.scope.
Feb 25 07:05:32 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:05:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63de149369e3182de615dac91e2d6f6d43b8f81e188eb8ce9694e356a1a526b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:05:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63de149369e3182de615dac91e2d6f6d43b8f81e188eb8ce9694e356a1a526b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:05:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63de149369e3182de615dac91e2d6f6d43b8f81e188eb8ce9694e356a1a526b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:05:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63de149369e3182de615dac91e2d6f6d43b8f81e188eb8ce9694e356a1a526b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:05:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63de149369e3182de615dac91e2d6f6d43b8f81e188eb8ce9694e356a1a526b8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:05:32 np0005629333 podman[231438]: 2026-02-25 12:05:32.084747141 +0000 UTC m=+0.029934134 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:05:32 np0005629333 podman[231438]: 2026-02-25 12:05:32.239163328 +0000 UTC m=+0.184350341 container init 2d0cdb943255af218357454d097875e67b9370aca341dcdff4eb2ca738435d02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_kirch, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:05:32 np0005629333 podman[231438]: 2026-02-25 12:05:32.246280353 +0000 UTC m=+0.191467306 container start 2d0cdb943255af218357454d097875e67b9370aca341dcdff4eb2ca738435d02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:05:32 np0005629333 podman[231438]: 2026-02-25 12:05:32.24980145 +0000 UTC m=+0.194988493 container attach 2d0cdb943255af218357454d097875e67b9370aca341dcdff4eb2ca738435d02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_kirch, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle)
Feb 25 07:05:32 np0005629333 python3.9[231588]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
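[editor's note] In this command task the "#012" runs are journald's octal escape for embedded newlines (\012). Decoded, the shell snippet the task executes reads:

    if systemctl is-active certmonger.service; then
      systemctl disable --now certmonger.service
      test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi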
Feb 25 07:05:32 np0005629333 beautiful_kirch[231485]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:05:32 np0005629333 beautiful_kirch[231485]: --> All data devices are unavailable
Feb 25 07:05:32 np0005629333 systemd[1]: libpod-2d0cdb943255af218357454d097875e67b9370aca341dcdff4eb2ca738435d02.scope: Deactivated successfully.
Feb 25 07:05:32 np0005629333 podman[231438]: 2026-02-25 12:05:32.690215813 +0000 UTC m=+0.635402786 container died 2d0cdb943255af218357454d097875e67b9370aca341dcdff4eb2ca738435d02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 07:05:32 np0005629333 systemd[1]: var-lib-containers-storage-overlay-63de149369e3182de615dac91e2d6f6d43b8f81e188eb8ce9694e356a1a526b8-merged.mount: Deactivated successfully.
Feb 25 07:05:32 np0005629333 podman[231438]: 2026-02-25 12:05:32.771430257 +0000 UTC m=+0.716617240 container remove 2d0cdb943255af218357454d097875e67b9370aca341dcdff4eb2ca738435d02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_kirch, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:05:32 np0005629333 systemd[1]: libpod-conmon-2d0cdb943255af218357454d097875e67b9370aca341dcdff4eb2ca738435d02.scope: Deactivated successfully.
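[editor's note] beautiful_kirch is cephadm re-running ceph-volume against the default_drive_group spec: three LVM data devices were passed in, and all report "unavailable" because they already back OSDs 0-2 (see the lvm list output further down), so there is nothing to deploy. A sketch of asking for the same dry-run report by hand, assuming cephadm's ceph-volume wrapper:

    import subprocess

    # Sketch: dry-run batch report for the three LVs named in the lvm list
    # output below; already-consumed LVs come back as unavailable.
    print(subprocess.run(
        ["cephadm", "ceph-volume", "--", "lvm", "batch", "--report",
         "ceph_vg0/ceph_lv0", "ceph_vg1/ceph_lv1", "ceph_vg2/ceph_lv2"],
        capture_output=True, text=True).stdout)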
Feb 25 07:05:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:33 np0005629333 podman[231800]: 2026-02-25 12:05:33.237682249 +0000 UTC m=+0.049315857 container create 1e2577a262fd9a0512b86bbf8bd8ac1e4b98671429b1770b5d750a2343fb44db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:05:33 np0005629333 systemd[1]: Started libpod-conmon-1e2577a262fd9a0512b86bbf8bd8ac1e4b98671429b1770b5d750a2343fb44db.scope.
Feb 25 07:05:33 np0005629333 podman[231800]: 2026-02-25 12:05:33.2108141 +0000 UTC m=+0.022447498 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:05:33 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:05:33 np0005629333 podman[231800]: 2026-02-25 12:05:33.33008292 +0000 UTC m=+0.141716358 container init 1e2577a262fd9a0512b86bbf8bd8ac1e4b98671429b1770b5d750a2343fb44db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 07:05:33 np0005629333 podman[231800]: 2026-02-25 12:05:33.341307889 +0000 UTC m=+0.152941277 container start 1e2577a262fd9a0512b86bbf8bd8ac1e4b98671429b1770b5d750a2343fb44db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_golick, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 07:05:33 np0005629333 elastic_golick[231847]: 167 167
Feb 25 07:05:33 np0005629333 podman[231800]: 2026-02-25 12:05:33.345665429 +0000 UTC m=+0.157298827 container attach 1e2577a262fd9a0512b86bbf8bd8ac1e4b98671429b1770b5d750a2343fb44db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 07:05:33 np0005629333 systemd[1]: libpod-1e2577a262fd9a0512b86bbf8bd8ac1e4b98671429b1770b5d750a2343fb44db.scope: Deactivated successfully.
Feb 25 07:05:33 np0005629333 conmon[231847]: conmon 1e2577a262fd9a0512b8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1e2577a262fd9a0512b86bbf8bd8ac1e4b98671429b1770b5d750a2343fb44db.scope/container/memory.events
Feb 25 07:05:33 np0005629333 podman[231800]: 2026-02-25 12:05:33.347941511 +0000 UTC m=+0.159574909 container died 1e2577a262fd9a0512b86bbf8bd8ac1e4b98671429b1770b5d750a2343fb44db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Feb 25 07:05:33 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1886d1e2b06ed5a5f53d9506e66702bc001da8b7ad15355dc4b5651d52412973-merged.mount: Deactivated successfully.
Feb 25 07:05:33 np0005629333 podman[231800]: 2026-02-25 12:05:33.390131752 +0000 UTC m=+0.201765120 container remove 1e2577a262fd9a0512b86bbf8bd8ac1e4b98671429b1770b5d750a2343fb44db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_golick, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 07:05:33 np0005629333 systemd[1]: libpod-conmon-1e2577a262fd9a0512b86bbf8bd8ac1e4b98671429b1770b5d750a2343fb44db.scope: Deactivated successfully.
Feb 25 07:05:33 np0005629333 python3.9[231844]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 25 07:05:33 np0005629333 podman[231894]: 2026-02-25 12:05:33.553637049 +0000 UTC m=+0.046701326 container create cb9c4675d6f3ba69263743daa5e4945f1b1a1e5e78e9fa4a52843e3ae614cc35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_dijkstra, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 07:05:33 np0005629333 systemd[1]: Started libpod-conmon-cb9c4675d6f3ba69263743daa5e4945f1b1a1e5e78e9fa4a52843e3ae614cc35.scope.
Feb 25 07:05:33 np0005629333 podman[231894]: 2026-02-25 12:05:33.527905241 +0000 UTC m=+0.020969498 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:05:33 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:05:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47bae10b579fa380d7c0df7583c3ca9b2bc4eb11a7a56edbd1a428383f654bd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:05:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47bae10b579fa380d7c0df7583c3ca9b2bc4eb11a7a56edbd1a428383f654bd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:05:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47bae10b579fa380d7c0df7583c3ca9b2bc4eb11a7a56edbd1a428383f654bd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:05:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47bae10b579fa380d7c0df7583c3ca9b2bc4eb11a7a56edbd1a428383f654bd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:05:33 np0005629333 podman[231894]: 2026-02-25 12:05:33.650474542 +0000 UTC m=+0.143538819 container init cb9c4675d6f3ba69263743daa5e4945f1b1a1e5e78e9fa4a52843e3ae614cc35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:05:33 np0005629333 podman[231894]: 2026-02-25 12:05:33.658651257 +0000 UTC m=+0.151715504 container start cb9c4675d6f3ba69263743daa5e4945f1b1a1e5e78e9fa4a52843e3ae614cc35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 07:05:33 np0005629333 podman[231894]: 2026-02-25 12:05:33.664401655 +0000 UTC m=+0.157465972 container attach cb9c4675d6f3ba69263743daa5e4945f1b1a1e5e78e9fa4a52843e3ae614cc35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:05:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]: {
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:    "0": [
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:        {
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "devices": [
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "/dev/loop3"
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            ],
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "lv_name": "ceph_lv0",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "lv_size": "21470642176",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "name": "ceph_lv0",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "tags": {
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.cluster_name": "ceph",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.crush_device_class": "",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.encrypted": "0",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.objectstore": "bluestore",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.osd_id": "0",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.type": "block",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.vdo": "0",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.with_tpm": "0"
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            },
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "type": "block",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "vg_name": "ceph_vg0"
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:        }
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:    ],
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:    "1": [
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:        {
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "devices": [
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "/dev/loop4"
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            ],
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "lv_name": "ceph_lv1",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "lv_size": "21470642176",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "name": "ceph_lv1",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "tags": {
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.cluster_name": "ceph",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.crush_device_class": "",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.encrypted": "0",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.objectstore": "bluestore",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.osd_id": "1",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.type": "block",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.vdo": "0",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.with_tpm": "0"
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            },
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "type": "block",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "vg_name": "ceph_vg1"
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:        }
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:    ],
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:    "2": [
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:        {
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "devices": [
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "/dev/loop5"
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            ],
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "lv_name": "ceph_lv2",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "lv_size": "21470642176",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "name": "ceph_lv2",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "tags": {
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.cluster_name": "ceph",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.crush_device_class": "",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.encrypted": "0",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.objectstore": "bluestore",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.osd_id": "2",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.type": "block",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.vdo": "0",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:                "ceph.with_tpm": "0"
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            },
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "type": "block",
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:            "vg_name": "ceph_vg2"
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:        }
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]:    ]
Feb 25 07:05:33 np0005629333 amazing_dijkstra[231911]: }
Feb 25 07:05:33 np0005629333 systemd[1]: libpod-cb9c4675d6f3ba69263743daa5e4945f1b1a1e5e78e9fa4a52843e3ae614cc35.scope: Deactivated successfully.
Feb 25 07:05:33 np0005629333 podman[231894]: 2026-02-25 12:05:33.9492696 +0000 UTC m=+0.442333847 container died cb9c4675d6f3ba69263743daa5e4945f1b1a1e5e78e9fa4a52843e3ae614cc35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 07:05:33 np0005629333 systemd[1]: var-lib-containers-storage-overlay-47bae10b579fa380d7c0df7583c3ca9b2bc4eb11a7a56edbd1a428383f654bd4-merged.mount: Deactivated successfully.
Feb 25 07:05:34 np0005629333 podman[231894]: 2026-02-25 12:05:34.005950238 +0000 UTC m=+0.499014485 container remove cb9c4675d6f3ba69263743daa5e4945f1b1a1e5e78e9fa4a52843e3ae614cc35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_dijkstra, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:05:34 np0005629333 systemd[1]: libpod-conmon-cb9c4675d6f3ba69263743daa5e4945f1b1a1e5e78e9fa4a52843e3ae614cc35.scope: Deactivated successfully.
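[editor's note] amazing_dijkstra's JSON above is ceph-volume lvm list output keyed by OSD id, which cephadm uses to refresh its view of the host's OSDs. A sketch reducing it to a per-OSD device map, with field names taken exactly from the JSON above:

    import json, subprocess

    # Sketch: same listing the helper container produced, reduced to one line
    # per OSD.
    lvm = json.loads(subprocess.check_output(
        ["cephadm", "ceph-volume", "--", "lvm", "list", "--format", "json"]))
    for osd_id, lvs in sorted(lvm.items()):
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["devices"][0],
                  lv["tags"]["ceph.osd_fsid"])
    # 0 /dev/ceph_vg0/ceph_lv0 /dev/loop3 d19afe3c-7923-4776-bcc2-88886150b441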
Feb 25 07:05:34 np0005629333 python3.9[232048]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 25 07:05:34 np0005629333 systemd[1]: Reloading.
Feb 25 07:05:34 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 07:05:34 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
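[editor's note] The systemd_service task at 07:05:34 only requests daemon_reload=True, which produces the "Reloading." line; the two generator messages that follow are pre-existing host conditions surfaced on every reload (a non-executable rc.local and the SysV network initscript), not failures of the task. A sketch of the same reload plus a check of the rc.local condition:

    import os, stat, subprocess

    # Sketch: reload unit files as the task does, then show why the generator
    # skipped rc.local on this host.
    subprocess.run(["systemctl", "daemon-reload"], check=True)
    st = os.stat("/etc/rc.d/rc.local")
    print("rc.local executable:", bool(st.st_mode & stat.S_IXUSR))  # False here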
Feb 25 07:05:34 np0005629333 podman[232165]: 2026-02-25 12:05:34.505757754 +0000 UTC m=+0.074559181 container create 695f4c446f82c86bdf01bdce2c3f43010fbf98eb9f26561a5d809e16737f1992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lamport, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 07:05:34 np0005629333 podman[232165]: 2026-02-25 12:05:34.468952812 +0000 UTC m=+0.037754289 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:05:34 np0005629333 systemd[1]: Started libpod-conmon-695f4c446f82c86bdf01bdce2c3f43010fbf98eb9f26561a5d809e16737f1992.scope.
Feb 25 07:05:34 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:05:34 np0005629333 podman[232165]: 2026-02-25 12:05:34.630405483 +0000 UTC m=+0.199206960 container init 695f4c446f82c86bdf01bdce2c3f43010fbf98eb9f26561a5d809e16737f1992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lamport, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:05:34 np0005629333 podman[232165]: 2026-02-25 12:05:34.638992839 +0000 UTC m=+0.207794246 container start 695f4c446f82c86bdf01bdce2c3f43010fbf98eb9f26561a5d809e16737f1992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 07:05:34 np0005629333 podman[232165]: 2026-02-25 12:05:34.643243006 +0000 UTC m=+0.212044443 container attach 695f4c446f82c86bdf01bdce2c3f43010fbf98eb9f26561a5d809e16737f1992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 07:05:34 np0005629333 charming_lamport[232181]: 167 167
Feb 25 07:05:34 np0005629333 systemd[1]: libpod-695f4c446f82c86bdf01bdce2c3f43010fbf98eb9f26561a5d809e16737f1992.scope: Deactivated successfully.
Feb 25 07:05:34 np0005629333 podman[232165]: 2026-02-25 12:05:34.645986891 +0000 UTC m=+0.214788288 container died 695f4c446f82c86bdf01bdce2c3f43010fbf98eb9f26561a5d809e16737f1992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lamport, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:05:34 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1859e4ca7f6c7f461eca743d41633583b759c85f1440017ca7b8983d84a45ce8-merged.mount: Deactivated successfully.
Feb 25 07:05:34 np0005629333 podman[232165]: 2026-02-25 12:05:34.727953696 +0000 UTC m=+0.296755123 container remove 695f4c446f82c86bdf01bdce2c3f43010fbf98eb9f26561a5d809e16737f1992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:05:34 np0005629333 systemd[1]: libpod-conmon-695f4c446f82c86bdf01bdce2c3f43010fbf98eb9f26561a5d809e16737f1992.scope: Deactivated successfully.
Feb 25 07:05:34 np0005629333 podman[232292]: 2026-02-25 12:05:34.891225946 +0000 UTC m=+0.039399935 container create 42d0e99dceee097653ebc1e06f973873dc36e342989fc3815684381ab7c7e232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_kalam, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 07:05:34 np0005629333 systemd[1]: Started libpod-conmon-42d0e99dceee097653ebc1e06f973873dc36e342989fc3815684381ab7c7e232.scope.
Feb 25 07:05:34 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:05:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/071602e6c446183291399e252fd00cf8d331546807fc4df5f8bb4a8a375e8291/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:05:34 np0005629333 podman[232292]: 2026-02-25 12:05:34.871244316 +0000 UTC m=+0.019418335 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:05:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/071602e6c446183291399e252fd00cf8d331546807fc4df5f8bb4a8a375e8291/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:05:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/071602e6c446183291399e252fd00cf8d331546807fc4df5f8bb4a8a375e8291/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:05:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/071602e6c446183291399e252fd00cf8d331546807fc4df5f8bb4a8a375e8291/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:05:34 np0005629333 podman[232292]: 2026-02-25 12:05:34.989092698 +0000 UTC m=+0.137266747 container init 42d0e99dceee097653ebc1e06f973873dc36e342989fc3815684381ab7c7e232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 07:05:35 np0005629333 podman[232292]: 2026-02-25 12:05:35.003141824 +0000 UTC m=+0.151315813 container start 42d0e99dceee097653ebc1e06f973873dc36e342989fc3815684381ab7c7e232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_kalam, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 07:05:35 np0005629333 podman[232292]: 2026-02-25 12:05:35.010059074 +0000 UTC m=+0.158233093 container attach 42d0e99dceee097653ebc1e06f973873dc36e342989fc3815684381ab7c7e232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:05:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:35 np0005629333 python3.9[232381]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 07:05:35 np0005629333 lvm[232579]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:05:35 np0005629333 lvm[232580]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:05:35 np0005629333 lvm[232579]: VG ceph_vg0 finished
Feb 25 07:05:35 np0005629333 lvm[232580]: VG ceph_vg1 finished
Feb 25 07:05:35 np0005629333 lvm[232586]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:05:35 np0005629333 lvm[232586]: VG ceph_vg2 finished
Feb 25 07:05:35 np0005629333 quirky_kalam[232326]: {}
Feb 25 07:05:35 np0005629333 systemd[1]: libpod-42d0e99dceee097653ebc1e06f973873dc36e342989fc3815684381ab7c7e232.scope: Deactivated successfully.
Feb 25 07:05:35 np0005629333 systemd[1]: libpod-42d0e99dceee097653ebc1e06f973873dc36e342989fc3815684381ab7c7e232.scope: Consumed 1.005s CPU time.
Feb 25 07:05:35 np0005629333 podman[232292]: 2026-02-25 12:05:35.725748588 +0000 UTC m=+0.873922597 container died 42d0e99dceee097653ebc1e06f973873dc36e342989fc3815684381ab7c7e232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Feb 25 07:05:35 np0005629333 systemd[1]: var-lib-containers-storage-overlay-071602e6c446183291399e252fd00cf8d331546807fc4df5f8bb4a8a375e8291-merged.mount: Deactivated successfully.
Feb 25 07:05:35 np0005629333 podman[232292]: 2026-02-25 12:05:35.768917015 +0000 UTC m=+0.917091004 container remove 42d0e99dceee097653ebc1e06f973873dc36e342989fc3815684381ab7c7e232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_kalam, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:05:35 np0005629333 systemd[1]: libpod-conmon-42d0e99dceee097653ebc1e06f973873dc36e342989fc3815684381ab7c7e232.scope: Deactivated successfully.
Feb 25 07:05:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:05:35 np0005629333 python3.9[232612]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 07:05:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:05:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:05:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:05:36 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:05:36 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:05:36 np0005629333 python3.9[232804]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 07:05:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:37 np0005629333 python3.9[232958]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 07:05:37 np0005629333 python3.9[233112]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 07:05:38 np0005629333 python3.9[233266]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 07:05:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:05:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:39 np0005629333 python3.9[233420]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 25 07:05:39 np0005629333 python3.9[233574]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
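[editor's note] The ansible-ansible.legacy.command entries in this stretch clear systemd's failure state for the retired tripleo_nova_* units, one /usr/bin/systemctl reset-failed call per service. A minimal sketch of the same loop, using only the unit names that appear in the log:

    # Clear systemd's 'failed' bookkeeping for the legacy tripleo units,
    # mirroring the one-service-per-task pattern in the log above.
    import subprocess

    UNITS = [
        "tripleo_nova_compute.service",
        "tripleo_nova_migration_target.service",
        "tripleo_nova_api_cron.service",
        "tripleo_nova_api.service",
        "tripleo_nova_conductor.service",
        "tripleo_nova_metadata.service",
        "tripleo_nova_scheduler.service",
        "tripleo_nova_vnc_proxy.service",
    ]

    for unit in UNITS:
        # reset-failed only drops the failure record; it does not start or stop anything.
        subprocess.run(["/usr/bin/systemctl", "reset-failed", unit], check=True)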
Feb 25 07:05:40 np0005629333 systemd[1]: virtsecretd.service: Deactivated successfully.
Feb 25 07:05:40 np0005629333 systemd[1]: virtqemud.service: Deactivated successfully.
Feb 25 07:05:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:41 np0005629333 python3.9[233730]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 07:05:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:05:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:05:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:05:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:05:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:05:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:05:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:05:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:05:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:05:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:05:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:05:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:05:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6988014389607423e-06 of space, bias 4.0, pg target 0.0032385617267528906 quantized to 16 (current 16)
Feb 25 07:05:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:05:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:05:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:05:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:05:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:05:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:05:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:05:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:05:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:05:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
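[editor's note] Each pg_autoscaler line above applies the same arithmetic: the pool's share of raw capacity, times its bias, times a cluster-wide PG budget, then quantized against the pool's current pg_num. The printed values are consistent with a budget of 300 PGs, which would follow from 3 OSDs at the default mon_target_pg_per_osd of 100 (an assumption; the budget itself is not logged):

    # Reproduce the 'pg target' figures printed by pg_autoscaler.
    # PG_BUDGET = 300 is an assumption (3 OSDs x 100 target PGs per OSD);
    # the log only shows the usage ratio, the bias, and their product.
    PG_BUDGET = 300

    def pg_target(usage_ratio: float, bias: float) -> float:
        return usage_ratio * bias * PG_BUDGET

    print(pg_target(7.185749983720779e-06, 1.0))   # .mgr               -> ~0.0021557
    print(pg_target(2.6988014389607423e-06, 4.0))  # cephfs.cephfs.meta -> ~0.0032386
    print(pg_target(4.1969867161554995e-06, 1.0))  # default.rgw.log    -> ~0.0012591

Because every target is far below the pool's existing pg_num, quantization leaves the current values (1, 16, or 32) untouched, which is why no pool is resized.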
Feb 25 07:05:41 np0005629333 python3.9[233883]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 07:05:42 np0005629333 python3.9[234036]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 07:05:43 np0005629333 python3.9[234189]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 07:05:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:43 np0005629333 python3.9[234342]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 07:05:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:05:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:45 np0005629333 python3.9[234495]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 25 07:05:46 np0005629333 python3.9[234648]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 25 07:05:47 np0005629333 python3.9[234801]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 25 07:05:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:47 np0005629333 python3.9[234954]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 25 07:05:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:05:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:05:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:05:54.987 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:05:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:05:54.988 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:05:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:05:54.988 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:05:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:55 np0005629333 python3.9[235107]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Feb 25 07:05:56 np0005629333 python3.9[235261]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 25 07:05:57 np0005629333 python3.9[235420]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 25 07:05:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:57 np0005629333 podman[235422]: 2026-02-25 12:05:57.223122497 +0000 UTC m=+0.082045755 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 25 07:05:57 np0005629333 podman[235423]: 2026-02-25 12:05:57.280870694 +0000 UTC m=+0.139826273 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
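[editor's note] Both health_status events come from podman's periodic healthcheck runner: each container was created with a healthcheck test (/openstack/healthcheck, bind-mounted from /var/lib/openstack/healthchecks/<container>), and health_status=healthy with health_failing_streak=0 means the probe keeps passing. The same probe can be run on demand; a small sketch, assuming the container names from the log:

    # Run a container's configured healthcheck once, outside the timer.
    # 'podman healthcheck run' exits 0 when the test passes.
    import subprocess

    for name in ("ovn_metadata_agent", "ovn_controller"):
        result = subprocess.run(["podman", "healthcheck", "run", name])
        print(name, "healthy" if result.returncode == 0 else "unhealthy")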
Feb 25 07:05:58 np0005629333 systemd-logind[811]: New session 51 of user zuul.
Feb 25 07:05:58 np0005629333 systemd[1]: Started Session 51 of User zuul.
Feb 25 07:05:58 np0005629333 systemd[1]: session-51.scope: Deactivated successfully.
Feb 25 07:05:58 np0005629333 systemd-logind[811]: Session 51 logged out. Waiting for processes to exit.
Feb 25 07:05:58 np0005629333 systemd-logind[811]: Removed session 51.
Feb 25 07:05:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:05:59 np0005629333 python3.9[235650]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:05:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:05:59 np0005629333 python3.9[235726]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 25 07:05:59 np0005629333 python3.9[235876]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:06:00 np0005629333 python3.9[235997]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772021159.5780168-1019-21865698219281/.source _original_basename=ssh-config follow=False checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 25 07:06:01 np0005629333 python3.9[236147]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:06:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:06:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:06:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:06:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:06:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:06:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:06:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:06:01 np0005629333 python3.9[236268]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772021160.7244158-1019-211068301746409/.source.py _original_basename=nova_statedir_ownership.py follow=False checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 25 07:06:02 np0005629333 python3.9[236418]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:06:02 np0005629333 python3.9[236539]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772021161.9561272-1019-63317627360278/.source _original_basename=run-on-host follow=False checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 25 07:06:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:06:03 np0005629333 python3.9[236689]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:06:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:06:04 np0005629333 python3.9[236810]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772021162.9731765-1073-4750417329766/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 25 07:06:04 np0005629333 python3.9[236963]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:06:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:06:05 np0005629333 python3.9[237116]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:06:05 np0005629333 python3.9[237269]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 07:06:06 np0005629333 python3.9[237422]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:06:07 np0005629333 python3.9[237546]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1772021166.066824-1112-165981731013219/.source _original_basename=.frg3fiqg follow=False checksum=7b97d292a3e70868856d2f7f4959e51128fdae9e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Feb 25 07:06:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:06:07 np0005629333 python3.9[237698]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 07:06:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:06:09 np0005629333 python3.9[237853]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:06:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:06:09 np0005629333 python3.9[238006]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 25 07:06:10 np0005629333 python3.9[238156]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/nova_compute_init state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:06:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:06:12 np0005629333 python3.9[238580]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/nova_compute_init config_pattern=*.json debug=False
Feb 25 07:06:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:06:13 np0005629333 python3.9[238733]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 25 07:06:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:06:14 np0005629333 python3[238886]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/nova_compute_init config_id=nova_compute_init config_overrides={} config_patterns=*.json containers=['nova_compute_init'] log_base_path=/var/log/containers/stdouts debug=False
Feb 25 07:06:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:06:15 np0005629333 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 07:06:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:06:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:06:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:06:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 5 op/s
Feb 25 07:06:22 np0005629333 podman[238898]: 2026-02-25 12:06:22.701608624 +0000 UTC m=+7.880480450 image pull 7e637240710437807d86f704ec92f4417e40d6b1f76088848cab504c91655fe5 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 25 07:06:22 np0005629333 podman[238980]: 2026-02-25 12:06:22.883653632 +0000 UTC m=+0.066760833 container create 26859d41ffaa0d2125a7c3af776ae34c49d58a26cc44e17324a3abe6d9b892b5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, config_id=nova_compute_init, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '46d21a4105e6ed83e061146dd14c53f3ee36f480aa458280c08f0b5f3d8159dd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=nova_compute_init, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, managed_by=edpm_ansible)
Feb 25 07:06:22 np0005629333 podman[238980]: 2026-02-25 12:06:22.8493392 +0000 UTC m=+0.032446481 image pull 7e637240710437807d86f704ec92f4417e40d6b1f76088848cab504c91655fe5 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 25 07:06:22 np0005629333 python3[238886]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --env EDPM_CONFIG_HASH=46d21a4105e6ed83e061146dd14c53f3ee36f480aa458280c08f0b5f3d8159dd --label config_id=nova_compute_init --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '46d21a4105e6ed83e061146dd14c53f3ee36f480aa458280c08f0b5f3d8159dd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
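[editor's note] The PODMAN-CONTAINER-DEBUG line shows how edpm_container_manage flattens the config_data mapping into a podman create invocation: environment entries become --env flags, volumes become --volume flags, net becomes --network, and the command string is appended after the image. A rough sketch of that mapping for the options visible here (illustrative only, not the module's actual code):

    # Sketch of the config_data -> 'podman create' argv mapping seen in the
    # PODMAN-CONTAINER-DEBUG log lines. The real module handles more options
    # (labels, security_opt, restart policy, ...) than this illustration.
    def podman_create_argv(name: str, cfg: dict) -> list[str]:
        argv = ["podman", "create", "--name", name,
                "--conmon-pidfile", f"/run/{name}.pid"]
        for key, value in cfg.get("environment", {}).items():
            argv += ["--env", f"{key}={value}"]
        argv += ["--log-driver", "journald", "--log-level", "info"]
        if "net" in cfg:
            argv += ["--network", cfg["net"]]
        if "user" in cfg:
            argv += ["--user", cfg["user"]]
        for volume in cfg.get("volumes", []):
            argv += ["--volume", volume]
        argv.append(cfg["image"])
        argv += cfg.get("command", "").split()  # naive split; fine for a sketch
        return argv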
Feb 25 07:06:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 07:06:23 np0005629333 python3.9[239172]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 07:06:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:06:24 np0005629333 python3.9[239324]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 25 07:06:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 07:06:25 np0005629333 python3.9[239477]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:06:26 np0005629333 python3.9[239603]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772021184.9899955-1238-97722773755621/.source.yaml _original_basename=.2oawnvlc follow=False checksum=67ca0963d64e1be204014eff194b71fde4fe4b03 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:06:26 np0005629333 python3.9[239756]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:06:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 07:06:27 np0005629333 podman[239880]: 2026-02-25 12:06:27.343345949 +0000 UTC m=+0.073679309 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 25 07:06:27 np0005629333 podman[239918]: 2026-02-25 12:06:27.443736713 +0000 UTC m=+0.109378540 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 25 07:06:27 np0005629333 python3.9[239928]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 25 07:06:28 np0005629333 python3.9[240108]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:06:28 np0005629333 python3.9[240232]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/nova_compute.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1772021187.6910534-1271-235643249736609/.source.json _original_basename=.pxeu2839 follow=False checksum=0018389a48392615f4a8869cad43008a907328ff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:06:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:06:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 07:06:29 np0005629333 python3.9[240382]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/nova_compute state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:06:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:06:30
Feb 25 07:06:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:06:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:06:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'volumes', 'backups', 'images', '.mgr', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta']
Feb 25 07:06:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:06:31 np0005629333 python3.9[240806]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/nova_compute config_pattern=*.json debug=False
Feb 25 07:06:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 07:06:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:06:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:06:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:06:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:06:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:06:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:06:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:06:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:06:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:06:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:06:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:06:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:06:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:06:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:06:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:06:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:06:31 np0005629333 python3.9[240959]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 25 07:06:32 np0005629333 python3[241112]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/nova_compute config_id=nova_compute config_overrides={} config_patterns=*.json containers=['nova_compute'] log_base_path=/var/log/containers/stdouts debug=False
Feb 25 07:06:32 np0005629333 podman[241146]: 2026-02-25 12:06:32.931660114 +0000 UTC m=+0.046363795 container create b7d9270bce5d1e1ac663a61021bd399bea861e9d353f641979ffdd1339629739 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_id=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-e4339334ce3c7417c8c75d93639f9f436ec81d31ef3778c7686609059d57b52c-46d21a4105e6ed83e061146dd14c53f3ee36f480aa458280c08f0b5f3d8159dd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, container_name=nova_compute)
Feb 25 07:06:32 np0005629333 podman[241146]: 2026-02-25 12:06:32.90719641 +0000 UTC m=+0.021900101 image pull 7e637240710437807d86f704ec92f4417e40d6b1f76088848cab504c91655fe5 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 25 07:06:32 np0005629333 python3[241112]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-e4339334ce3c7417c8c75d93639f9f436ec81d31ef3778c7686609059d57b52c-46d21a4105e6ed83e061146dd14c53f3ee36f480aa458280c08f0b5f3d8159dd --label config_id=nova_compute --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-e4339334ce3c7417c8c75d93639f9f436ec81d31ef3778c7686609059d57b52c-46d21a4105e6ed83e061146dd14c53f3ee36f480aa458280c08f0b5f3d8159dd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Feb 25 07:06:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 10 op/s
Feb 25 07:06:33 np0005629333 python3.9[241338]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 07:06:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:06:34 np0005629333 python3.9[241493]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:06:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:06:35 np0005629333 python3.9[241570]: ansible-stat Invoked with path=/etc/systemd/system/edpm_nova_compute_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 07:06:36 np0005629333 python3.9[241722]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772021195.3952115-1349-212196057081416/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 25 07:06:36 np0005629333 python3.9[241856]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 25 07:06:36 np0005629333 systemd[1]: Reloading.
Feb 25 07:06:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:06:36 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:06:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:06:36 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:06:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:06:36 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:06:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:06:36 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:06:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:06:36 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:06:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:06:36 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
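The handle_command/audit pairs above are the cephadm mgr module driving the monitor over the mon_command interface with JSON payloads keyed by "prefix". The librados Python binding exposes the same interface, so the commands in the audit log can be reproduced with a sketch like this (assumes a reachable cluster and an admin keyring; paths are illustrative):

    # Sketch: issue the mon commands seen in the audit log via librados.
    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
    cluster.connect()
    for cmd in ({"prefix": "config generate-minimal-conf"},
                {"prefix": "auth get", "entity": "client.admin"}):
        # mon_command takes the JSON-encoded command and an input buffer,
        # returning (retcode, output bytes, status string)
        ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
        print(ret, outbuf.decode(), outs)
    cluster.shutdown()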
Feb 25 07:06:36 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 07:06:36 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 07:06:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:06:37 np0005629333 podman[242057]: 2026-02-25 12:06:37.237662956 +0000 UTC m=+0.060067572 container create 6e2065aca9ada455e93786036b24d44ea7416927c4751052bf2426bc2681a081 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:06:37 np0005629333 systemd[1]: Started libpod-conmon-6e2065aca9ada455e93786036b24d44ea7416927c4751052bf2426bc2681a081.scope.
Feb 25 07:06:37 np0005629333 podman[242057]: 2026-02-25 12:06:37.210920169 +0000 UTC m=+0.033324785 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:06:37 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:06:37 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:06:37 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:06:37 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:06:37 np0005629333 podman[242057]: 2026-02-25 12:06:37.339598814 +0000 UTC m=+0.162003360 container init 6e2065aca9ada455e93786036b24d44ea7416927c4751052bf2426bc2681a081 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:06:37 np0005629333 podman[242057]: 2026-02-25 12:06:37.347943571 +0000 UTC m=+0.170348117 container start 6e2065aca9ada455e93786036b24d44ea7416927c4751052bf2426bc2681a081 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_kepler, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:06:37 np0005629333 podman[242057]: 2026-02-25 12:06:37.351914003 +0000 UTC m=+0.174318559 container attach 6e2065aca9ada455e93786036b24d44ea7416927c4751052bf2426bc2681a081 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_kepler, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 07:06:37 np0005629333 compassionate_kepler[242082]: 167 167
Feb 25 07:06:37 np0005629333 systemd[1]: libpod-6e2065aca9ada455e93786036b24d44ea7416927c4751052bf2426bc2681a081.scope: Deactivated successfully.
Feb 25 07:06:37 np0005629333 podman[242057]: 2026-02-25 12:06:37.356634767 +0000 UTC m=+0.179039293 container died 6e2065aca9ada455e93786036b24d44ea7416927c4751052bf2426bc2681a081 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_kepler, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 07:06:37 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6192ae4011ab60389230c13602846dacfdeaa566d4cca99508f042ee0ad0fdd7-merged.mount: Deactivated successfully.
Feb 25 07:06:37 np0005629333 podman[242057]: 2026-02-25 12:06:37.409133324 +0000 UTC m=+0.231537840 container remove 6e2065aca9ada455e93786036b24d44ea7416927c4751052bf2426bc2681a081 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 07:06:37 np0005629333 systemd[1]: libpod-conmon-6e2065aca9ada455e93786036b24d44ea7416927c4751052bf2426bc2681a081.scope: Deactivated successfully.
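The short-lived compassionate_kepler container above prints "167 167", the uid/gid the ceph image runs its daemons as; cephadm launches throwaway containers like this to discover those ids before chowning host directories. Plausibly the output comes from stat on a ceph-owned path inside the image, roughly (the stat target path is an assumption):

    # Sketch: probe a ceph container image for the uid/gid it uses, roughly
    # what produces the "167 167" line above.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    out = subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"], text=True)
    uid, gid = out.split()
    print(uid, gid)   # expected: 167 167 for ceph images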
Feb 25 07:06:37 np0005629333 python3.9[242075]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 25 07:06:37 np0005629333 podman[242105]: 2026-02-25 12:06:37.562244872 +0000 UTC m=+0.056432480 container create 5e77fabbcbec65f9ed2742cd6e06290990c0225be37523f75c7ffd6ce17622e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:06:37 np0005629333 systemd[1]: Started libpod-conmon-5e77fabbcbec65f9ed2742cd6e06290990c0225be37523f75c7ffd6ce17622e1.scope.
Feb 25 07:06:37 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:06:37 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7357a325822f278938d21287201f3d0ebfe43141eabf2d45792e1c948f822679/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:06:37 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7357a325822f278938d21287201f3d0ebfe43141eabf2d45792e1c948f822679/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:06:37 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7357a325822f278938d21287201f3d0ebfe43141eabf2d45792e1c948f822679/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:06:37 np0005629333 podman[242105]: 2026-02-25 12:06:37.538201921 +0000 UTC m=+0.032389609 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:06:37 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7357a325822f278938d21287201f3d0ebfe43141eabf2d45792e1c948f822679/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:06:37 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7357a325822f278938d21287201f3d0ebfe43141eabf2d45792e1c948f822679/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:06:37 np0005629333 podman[242105]: 2026-02-25 12:06:37.650668278 +0000 UTC m=+0.144855916 container init 5e77fabbcbec65f9ed2742cd6e06290990c0225be37523f75c7ffd6ce17622e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_galileo, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 07:06:37 np0005629333 podman[242105]: 2026-02-25 12:06:37.666944829 +0000 UTC m=+0.161132467 container start 5e77fabbcbec65f9ed2742cd6e06290990c0225be37523f75c7ffd6ce17622e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_galileo, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:06:37 np0005629333 podman[242105]: 2026-02-25 12:06:37.671876219 +0000 UTC m=+0.166063857 container attach 5e77fabbcbec65f9ed2742cd6e06290990c0225be37523f75c7ffd6ce17622e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_galileo, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:06:38 np0005629333 vibrant_galileo[242123]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:06:38 np0005629333 vibrant_galileo[242123]: --> All data devices are unavailable
Feb 25 07:06:38 np0005629333 systemd[1]: libpod-5e77fabbcbec65f9ed2742cd6e06290990c0225be37523f75c7ffd6ce17622e1.scope: Deactivated successfully.
Feb 25 07:06:38 np0005629333 podman[242105]: 2026-02-25 12:06:38.163359334 +0000 UTC m=+0.657546962 container died 5e77fabbcbec65f9ed2742cd6e06290990c0225be37523f75c7ffd6ce17622e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_galileo, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:06:38 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7357a325822f278938d21287201f3d0ebfe43141eabf2d45792e1c948f822679-merged.mount: Deactivated successfully.
Feb 25 07:06:38 np0005629333 podman[242105]: 2026-02-25 12:06:38.21402828 +0000 UTC m=+0.708215918 container remove 5e77fabbcbec65f9ed2742cd6e06290990c0225be37523f75c7ffd6ce17622e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_galileo, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 07:06:38 np0005629333 systemd[1]: libpod-conmon-5e77fabbcbec65f9ed2742cd6e06290990c0225be37523f75c7ffd6ce17622e1.scope: Deactivated successfully.
Feb 25 07:06:38 np0005629333 systemd[1]: Reloading.
Feb 25 07:06:38 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 07:06:38 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 07:06:38 np0005629333 podman[242262]: 2026-02-25 12:06:38.768909741 +0000 UTC m=+0.049854943 container create a24f1f7b6bb3968cfca847ce159cff15f373d884094220ef32158c3943e92a52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_roentgen, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:06:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:06:38 np0005629333 podman[242262]: 2026-02-25 12:06:38.756031977 +0000 UTC m=+0.036977199 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:06:38 np0005629333 systemd[1]: Started libpod-conmon-a24f1f7b6bb3968cfca847ce159cff15f373d884094220ef32158c3943e92a52.scope.
Feb 25 07:06:38 np0005629333 systemd[1]: Starting nova_compute container...
Feb 25 07:06:38 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:06:38 np0005629333 podman[242262]: 2026-02-25 12:06:38.903724711 +0000 UTC m=+0.184669953 container init a24f1f7b6bb3968cfca847ce159cff15f373d884094220ef32158c3943e92a52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_roentgen, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 07:06:38 np0005629333 podman[242262]: 2026-02-25 12:06:38.912410037 +0000 UTC m=+0.193355279 container start a24f1f7b6bb3968cfca847ce159cff15f373d884094220ef32158c3943e92a52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:06:38 np0005629333 bold_roentgen[242281]: 167 167
Feb 25 07:06:38 np0005629333 systemd[1]: libpod-a24f1f7b6bb3968cfca847ce159cff15f373d884094220ef32158c3943e92a52.scope: Deactivated successfully.
Feb 25 07:06:38 np0005629333 podman[242262]: 2026-02-25 12:06:38.918089398 +0000 UTC m=+0.199034630 container attach a24f1f7b6bb3968cfca847ce159cff15f373d884094220ef32158c3943e92a52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_roentgen, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 07:06:38 np0005629333 podman[242262]: 2026-02-25 12:06:38.918840559 +0000 UTC m=+0.199785801 container died a24f1f7b6bb3968cfca847ce159cff15f373d884094220ef32158c3943e92a52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_roentgen, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 07:06:38 np0005629333 systemd[1]: var-lib-containers-storage-overlay-225fb32ad2252cde7aa117037fe4ff952be5773364444bb57020ee82e7192c9b-merged.mount: Deactivated successfully.
Feb 25 07:06:38 np0005629333 podman[242262]: 2026-02-25 12:06:38.971547733 +0000 UTC m=+0.252492975 container remove a24f1f7b6bb3968cfca847ce159cff15f373d884094220ef32158c3943e92a52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_roentgen, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:06:38 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:06:38 np0005629333 systemd[1]: libpod-conmon-a24f1f7b6bb3968cfca847ce159cff15f373d884094220ef32158c3943e92a52.scope: Deactivated successfully.
Feb 25 07:06:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a4238a0f12133cf84aa8736b88b0118a53310265a65a3619ac255cecafea6c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb 25 07:06:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a4238a0f12133cf84aa8736b88b0118a53310265a65a3619ac255cecafea6c/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb 25 07:06:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a4238a0f12133cf84aa8736b88b0118a53310265a65a3619ac255cecafea6c/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 25 07:06:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a4238a0f12133cf84aa8736b88b0118a53310265a65a3619ac255cecafea6c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 25 07:06:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a4238a0f12133cf84aa8736b88b0118a53310265a65a3619ac255cecafea6c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb 25 07:06:39 np0005629333 podman[242283]: 2026-02-25 12:06:39.031855592 +0000 UTC m=+0.138868726 container init b7d9270bce5d1e1ac663a61021bd399bea861e9d353f641979ffdd1339629739 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-e4339334ce3c7417c8c75d93639f9f436ec81d31ef3778c7686609059d57b52c-46d21a4105e6ed83e061146dd14c53f3ee36f480aa458280c08f0b5f3d8159dd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=nova_compute, container_name=nova_compute)
Feb 25 07:06:39 np0005629333 podman[242283]: 2026-02-25 12:06:39.041177716 +0000 UTC m=+0.148190790 container start b7d9270bce5d1e1ac663a61021bd399bea861e9d353f641979ffdd1339629739 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-e4339334ce3c7417c8c75d93639f9f436ec81d31ef3778c7686609059d57b52c-46d21a4105e6ed83e061146dd14c53f3ee36f480aa458280c08f0b5f3d8159dd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=nova_compute, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 07:06:39 np0005629333 podman[242283]: nova_compute
Feb 25 07:06:39 np0005629333 nova_compute[242312]: + sudo -E kolla_set_configs
Feb 25 07:06:39 np0005629333 systemd[1]: Started nova_compute container.
Feb 25 07:06:39 np0005629333 podman[242326]: 2026-02-25 12:06:39.151147111 +0000 UTC m=+0.049194754 container create 5663ec4b24662e6460ee42ae7428d9f376e021bbd84f8eca03c69c9e092c9136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Validating config file
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Copying service configuration files
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Copying /var/lib/kolla/config_files/src/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Copying /var/lib/kolla/config_files/src/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Copying /var/lib/kolla/config_files/src/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Copying /var/lib/kolla/config_files/src/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Deleting /etc/ceph
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Creating directory /etc/ceph
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Setting permission for /etc/ceph
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.conf to /etc/ceph/ceph.conf
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-config to /var/lib/nova/.ssh/config
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Copying /var/lib/kolla/config_files/src/run-on-host to /usr/sbin/iscsiadm
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Writing out command to execute
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 25 07:06:39 np0005629333 nova_compute[242312]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 25 07:06:39 np0005629333 nova_compute[242312]: ++ cat /run_command
Feb 25 07:06:39 np0005629333 nova_compute[242312]: + CMD=nova-compute
Feb 25 07:06:39 np0005629333 nova_compute[242312]: + ARGS=
Feb 25 07:06:39 np0005629333 nova_compute[242312]: + sudo kolla_copy_cacerts
Feb 25 07:06:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:06:39 np0005629333 nova_compute[242312]: + [[ ! -n '' ]]
Feb 25 07:06:39 np0005629333 nova_compute[242312]: + . kolla_extend_start
Feb 25 07:06:39 np0005629333 nova_compute[242312]: Running command: 'nova-compute'
Feb 25 07:06:39 np0005629333 nova_compute[242312]: + echo 'Running command: '\''nova-compute'\'''
Feb 25 07:06:39 np0005629333 nova_compute[242312]: + umask 0022
Feb 25 07:06:39 np0005629333 nova_compute[242312]: + exec nova-compute
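The INFO:__main__ lines above trace kolla_set_configs reading /var/lib/kolla/config_files/config.json and, under COPY_ALWAYS, replacing each destination with a fresh copy of its source and resetting ownership and permissions, after which kolla_start reads /run_command and execs it. A simplified sketch of that loop, assuming the standard kolla config.json shape with source/dest/owner/perm entries (it skips validation and the separate permissions-only list):

    # Sketch of the COPY_ALWAYS copy loop traced above; a simplification of
    # kolla_set_configs, not the actual kolla source.
    import json, os, shutil, subprocess

    with open("/var/lib/kolla/config_files/config.json") as f:
        cfg = json.load(f)

    for item in cfg.get("config_files", []):
        src, dest = item["source"], item["dest"]
        if os.path.isdir(src):
            shutil.rmtree(dest, ignore_errors=True)   # "Deleting /etc/ceph"
            shutil.copytree(src, dest)
        else:
            shutil.copy(src, dest)                    # "Copying ... to ..."
        if "owner" in item:                           # "Setting permission for ..."
            subprocess.run(["chown", "-R", item["owner"], dest], check=True)
        if "perm" in item:
            subprocess.run(["chmod", "-R", item["perm"], dest], check=True)

    # kolla_start then reads /run_command and execs it ("CMD=nova-compute").
    with open("/run_command") as f:
        cmd = f.read().strip()
    os.execvp(cmd, [cmd])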
Feb 25 07:06:39 np0005629333 systemd[1]: Started libpod-conmon-5663ec4b24662e6460ee42ae7428d9f376e021bbd84f8eca03c69c9e092c9136.scope.
Feb 25 07:06:39 np0005629333 podman[242326]: 2026-02-25 12:06:39.131806003 +0000 UTC m=+0.029853726 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:06:39 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:06:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b354313044098a7a690bd9507539423eb77bfebdf0c924319aeb13cc2bba63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:06:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b354313044098a7a690bd9507539423eb77bfebdf0c924319aeb13cc2bba63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:06:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b354313044098a7a690bd9507539423eb77bfebdf0c924319aeb13cc2bba63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:06:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b354313044098a7a690bd9507539423eb77bfebdf0c924319aeb13cc2bba63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:06:39 np0005629333 podman[242326]: 2026-02-25 12:06:39.274220889 +0000 UTC m=+0.172268572 container init 5663ec4b24662e6460ee42ae7428d9f376e021bbd84f8eca03c69c9e092c9136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 07:06:39 np0005629333 podman[242326]: 2026-02-25 12:06:39.280922688 +0000 UTC m=+0.178970351 container start 5663ec4b24662e6460ee42ae7428d9f376e021bbd84f8eca03c69c9e092c9136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_poincare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:06:39 np0005629333 podman[242326]: 2026-02-25 12:06:39.285369615 +0000 UTC m=+0.183417258 container attach 5663ec4b24662e6460ee42ae7428d9f376e021bbd84f8eca03c69c9e092c9136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]: {
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:    "0": [
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:        {
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "devices": [
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "/dev/loop3"
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            ],
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "lv_name": "ceph_lv0",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "lv_size": "21470642176",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "name": "ceph_lv0",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "tags": {
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.cluster_name": "ceph",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.crush_device_class": "",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.encrypted": "0",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.objectstore": "bluestore",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.osd_id": "0",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.type": "block",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.vdo": "0",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.with_tpm": "0"
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            },
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "type": "block",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "vg_name": "ceph_vg0"
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:        }
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:    ],
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:    "1": [
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:        {
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "devices": [
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "/dev/loop4"
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            ],
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "lv_name": "ceph_lv1",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "lv_size": "21470642176",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "name": "ceph_lv1",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "tags": {
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.cluster_name": "ceph",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.crush_device_class": "",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.encrypted": "0",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.objectstore": "bluestore",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.osd_id": "1",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.type": "block",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.vdo": "0",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.with_tpm": "0"
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            },
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "type": "block",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "vg_name": "ceph_vg1"
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:        }
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:    ],
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:    "2": [
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:        {
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "devices": [
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "/dev/loop5"
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            ],
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "lv_name": "ceph_lv2",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "lv_size": "21470642176",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "name": "ceph_lv2",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "tags": {
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.cluster_name": "ceph",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.crush_device_class": "",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.encrypted": "0",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.objectstore": "bluestore",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.osd_id": "2",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.type": "block",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.vdo": "0",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:                "ceph.with_tpm": "0"
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            },
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "type": "block",
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:            "vg_name": "ceph_vg2"
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:        }
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]:    ]
Feb 25 07:06:39 np0005629333 infallible_poincare[242372]: }
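The JSON block emitted by infallible_poincare is ceph-volume's lvm list report: keyed by OSD id, each entry carrying the LV path plus tags with the cluster fsid, OSD fsid, and objectstore type. It confirms the three LVM-backed bluestore OSDs (0, 1, 2 on /dev/loop3..5) that the earlier probe reported as already consumed. A small sketch reducing that report to an OSD-to-device map (raw_json stands for the JSON text above):

    # Sketch: map OSD id -> (lv_path, osd_fsid) from `ceph-volume lvm list`
    # JSON output like the block logged above.
    import json

    def osd_map(raw_json: str) -> dict:
        report = json.loads(raw_json)
        return {
            osd_id: (lv["lv_path"], lv["tags"]["ceph.osd_fsid"])
            for osd_id, lvs in report.items()
            for lv in lvs
            if lv.get("type") == "block"
        }

For the output above this yields osd 0 -> /dev/ceph_vg0/ceph_lv0, osd 1 -> /dev/ceph_vg1/ceph_lv1, and osd 2 -> /dev/ceph_vg2/ceph_lv2.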
Feb 25 07:06:39 np0005629333 systemd[1]: libpod-5663ec4b24662e6460ee42ae7428d9f376e021bbd84f8eca03c69c9e092c9136.scope: Deactivated successfully.
Feb 25 07:06:39 np0005629333 podman[242326]: 2026-02-25 12:06:39.584662865 +0000 UTC m=+0.482710518 container died 5663ec4b24662e6460ee42ae7428d9f376e021bbd84f8eca03c69c9e092c9136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_poincare, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 07:06:39 np0005629333 podman[242326]: 2026-02-25 12:06:39.62190661 +0000 UTC m=+0.519954253 container remove 5663ec4b24662e6460ee42ae7428d9f376e021bbd84f8eca03c69c9e092c9136 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_poincare, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:06:39 np0005629333 systemd[1]: libpod-conmon-5663ec4b24662e6460ee42ae7428d9f376e021bbd84f8eca03c69c9e092c9136.scope: Deactivated successfully.
Feb 25 07:06:39 np0005629333 python3.9[242516]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 25 07:06:39 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e8b354313044098a7a690bd9507539423eb77bfebdf0c924319aeb13cc2bba63-merged.mount: Deactivated successfully.
Feb 25 07:06:40 np0005629333 podman[242604]: 2026-02-25 12:06:40.073459634 +0000 UTC m=+0.052712505 container create 0ff9a142dd4d3057244da0ae844d4f5c3a3076a0b90a5fe2c2185c89d5832eb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 07:06:40 np0005629333 systemd[1]: Started libpod-conmon-0ff9a142dd4d3057244da0ae844d4f5c3a3076a0b90a5fe2c2185c89d5832eb5.scope.
Feb 25 07:06:40 np0005629333 podman[242604]: 2026-02-25 12:06:40.050288857 +0000 UTC m=+0.029541778 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:06:40 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:06:40 np0005629333 podman[242604]: 2026-02-25 12:06:40.174804545 +0000 UTC m=+0.154057476 container init 0ff9a142dd4d3057244da0ae844d4f5c3a3076a0b90a5fe2c2185c89d5832eb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:06:40 np0005629333 podman[242604]: 2026-02-25 12:06:40.18274128 +0000 UTC m=+0.161994151 container start 0ff9a142dd4d3057244da0ae844d4f5c3a3076a0b90a5fe2c2185c89d5832eb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lichterman, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:06:40 np0005629333 amazing_lichterman[242621]: 167 167
Feb 25 07:06:40 np0005629333 podman[242604]: 2026-02-25 12:06:40.186640251 +0000 UTC m=+0.165893132 container attach 0ff9a142dd4d3057244da0ae844d4f5c3a3076a0b90a5fe2c2185c89d5832eb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lichterman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:06:40 np0005629333 systemd[1]: libpod-0ff9a142dd4d3057244da0ae844d4f5c3a3076a0b90a5fe2c2185c89d5832eb5.scope: Deactivated successfully.
Feb 25 07:06:40 np0005629333 podman[242604]: 2026-02-25 12:06:40.188351899 +0000 UTC m=+0.167604780 container died 0ff9a142dd4d3057244da0ae844d4f5c3a3076a0b90a5fe2c2185c89d5832eb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lichterman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3)
Feb 25 07:06:40 np0005629333 systemd[1]: var-lib-containers-storage-overlay-20602069786d27aed2cba216d8ab00a13958a73e5a8c89141f1a7041f8d4040b-merged.mount: Deactivated successfully.
Feb 25 07:06:40 np0005629333 podman[242604]: 2026-02-25 12:06:40.234228459 +0000 UTC m=+0.213481310 container remove 0ff9a142dd4d3057244da0ae844d4f5c3a3076a0b90a5fe2c2185c89d5832eb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 07:06:40 np0005629333 systemd[1]: libpod-conmon-0ff9a142dd4d3057244da0ae844d4f5c3a3076a0b90a5fe2c2185c89d5832eb5.scope: Deactivated successfully.
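Each create, init, start, attach, died, remove burst above (with podman's random two-word names such as infallible_poincare and amazing_lichterman) is cephadm running a short-lived helper container and discarding it, which is also why the matching systemd scopes and overlay mounts deactivate immediately. The bare "167 167" printed by amazing_lichterman looks like cephadm's uid/gid probe: 167:167 is the ceph account in CentOS/RHEL-based Ceph images, and cephadm needs it to chown the host directories it bind-mounts. A hedged reconstruction of that probe; the stat invocation and path are assumptions, only the image digest comes from the log:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    def ceph_uid_gid() -> tuple[int, int]:
        # One-shot container in the same style as the log: --rm makes podman
        # remove it as soon as the command exits, matching the died/remove events.
        out = subprocess.run(
            ["podman", "run", "--rm", IMAGE,
             "stat", "-c", "%u %g", "/var/lib/ceph"],
            check=True, capture_output=True, text=True,
        ).stdout
        uid, gid = out.split()
        return int(uid), int(gid)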
Feb 25 07:06:40 np0005629333 podman[242720]: 2026-02-25 12:06:40.380316138 +0000 UTC m=+0.044449120 container create 77c0d11c9e475227fcf0f14ac5cbd279b7b60322b07ed5ceaabc04cbc27992af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_satoshi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:06:40 np0005629333 systemd[1]: Started libpod-conmon-77c0d11c9e475227fcf0f14ac5cbd279b7b60322b07ed5ceaabc04cbc27992af.scope.
Feb 25 07:06:40 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:06:40 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8f10e689479fa17defdc8dd992ea025721749ddad88c6f27123fcded05eb427/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:06:40 np0005629333 podman[242720]: 2026-02-25 12:06:40.364310245 +0000 UTC m=+0.028443187 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:06:40 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8f10e689479fa17defdc8dd992ea025721749ddad88c6f27123fcded05eb427/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:06:40 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8f10e689479fa17defdc8dd992ea025721749ddad88c6f27123fcded05eb427/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:06:40 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8f10e689479fa17defdc8dd992ea025721749ddad88c6f27123fcded05eb427/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:06:40 np0005629333 podman[242720]: 2026-02-25 12:06:40.495733758 +0000 UTC m=+0.159866750 container init 77c0d11c9e475227fcf0f14ac5cbd279b7b60322b07ed5ceaabc04cbc27992af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_satoshi, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:06:40 np0005629333 podman[242720]: 2026-02-25 12:06:40.501934224 +0000 UTC m=+0.166067166 container start 77c0d11c9e475227fcf0f14ac5cbd279b7b60322b07ed5ceaabc04cbc27992af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_satoshi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:06:40 np0005629333 podman[242720]: 2026-02-25 12:06:40.505029292 +0000 UTC m=+0.169162234 container attach 77c0d11c9e475227fcf0f14ac5cbd279b7b60322b07ed5ceaabc04cbc27992af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_satoshi, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:06:40 np0005629333 python3.9[242792]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 25 07:06:40 np0005629333 nova_compute[242312]: 2026-02-25 12:06:40.952 242318 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb 25 07:06:40 np0005629333 nova_compute[242312]: 2026-02-25 12:06:40.952 242318 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb 25 07:06:40 np0005629333 nova_compute[242312]: 2026-02-25 12:06:40.952 242318 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb 25 07:06:40 np0005629333 nova_compute[242312]: 2026-02-25 12:06:40.953 242318 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
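The three DEBUG lines and the INFO summary above are os_vif initializing: it discovers VIF plug-in classes (linux_bridge, noop, ovs) through Python entry points rather than hard-coding them. A sketch of the same discovery using stevedore, which the oslo ecosystem commonly uses for this; the namespace string matches the convention but os_vif's internals may differ:

    from stevedore import extension

    # Walk the 'os_vif' entry-point namespace without instantiating plugins.
    mgr = extension.ExtensionManager(namespace="os_vif", invoke_on_load=False)
    print("Loaded VIF plugins:", ", ".join(sorted(mgr.names())))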
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.068 242318 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.081 242318 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.082 242318 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
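The failed grep above is expected: the volume-attach stack underneath nova-compute (os-brick, by all appearances) checks whether the installed iscsiadm supports manual session scans by searching the binary for the literal option string, and grep's return code 1 simply means "string absent", so the command is not retried. The same probe as a sketch:

    import subprocess

    def iscsiadm_supports_manual_scan(path: str = "/sbin/iscsiadm") -> bool:
        # grep -F exits 0 when the literal string occurs in the file, 1 when not.
        probe = subprocess.run(["grep", "-F", "node.session.scan", path],
                               capture_output=True)
        return probe.returncode == 0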
Feb 25 07:06:41 np0005629333 python3.9[242977]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772021200.1809902-1394-24421583730416/.source.yaml _original_basename=.n08ce6xp follow=False checksum=5f7f322f172128f758e53aaf72648f95c69c6f23 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
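The slurp / stat / copy triple around deployed_services.yaml is Ansible's idempotent file deploy: stat returns the sha1 of the file already on disk, and copy compares that digest against the rendered source (the checksum= value in the copy record), leaving the file untouched when they match. The digest Ansible compares can be reproduced like this:

    import hashlib

    def ansible_style_sha1(path: str) -> str:
        # Stream the file so large payloads do not have to fit in memory.
        digest = hashlib.sha1()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()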
Feb 25 07:06:41 np0005629333 lvm[242999]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:06:41 np0005629333 lvm[242997]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:06:41 np0005629333 lvm[242997]: VG ceph_vg0 finished
Feb 25 07:06:41 np0005629333 lvm[242999]: VG ceph_vg1 finished
Feb 25 07:06:41 np0005629333 lvm[243000]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:06:41 np0005629333 lvm[243000]: VG ceph_vg2 finished
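The paired "PV ... online" / "VG ... finished" messages are LVM event-based autoactivation: a udev-triggered pvscan records each physical volume as it appears, and once every PV of a volume group has been seen, the VG is declared complete and activated. Here the three ceph VGs sit one-per-loop-device (loop3 through loop5), one per OSD. The resulting layout can be inspected with lvm's JSON reporting, for example:

    import json
    import subprocess

    def vg_report() -> list[dict]:
        # vgs --reportformat json nests the rows under report[0]["vg"].
        out = subprocess.run(
            ["vgs", "--reportformat", "json", "-o", "vg_name,pv_count,lv_count"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)["report"][0]["vg"]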
Feb 25 07:06:41 np0005629333 recursing_satoshi[242784]: {}
Feb 25 07:06:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:06:41 np0005629333 systemd[1]: libpod-77c0d11c9e475227fcf0f14ac5cbd279b7b60322b07ed5ceaabc04cbc27992af.scope: Deactivated successfully.
Feb 25 07:06:41 np0005629333 podman[242720]: 2026-02-25 12:06:41.221561483 +0000 UTC m=+0.885694415 container died 77c0d11c9e475227fcf0f14ac5cbd279b7b60322b07ed5ceaabc04cbc27992af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_satoshi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:06:41 np0005629333 systemd[1]: libpod-77c0d11c9e475227fcf0f14ac5cbd279b7b60322b07ed5ceaabc04cbc27992af.scope: Consumed 1.008s CPU time.
Feb 25 07:06:41 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a8f10e689479fa17defdc8dd992ea025721749ddad88c6f27123fcded05eb427-merged.mount: Deactivated successfully.
Feb 25 07:06:41 np0005629333 podman[242720]: 2026-02-25 12:06:41.263666726 +0000 UTC m=+0.927799688 container remove 77c0d11c9e475227fcf0f14ac5cbd279b7b60322b07ed5ceaabc04cbc27992af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:06:41 np0005629333 systemd[1]: libpod-conmon-77c0d11c9e475227fcf0f14ac5cbd279b7b60322b07ed5ceaabc04cbc27992af.scope: Deactivated successfully.
Feb 25 07:06:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:06:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:06:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:06:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
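The two mon_command/audit pairs show the cephadm mgr module persisting its view of host compute-0 (device inventory and host metadata) into the monitors' config-key store under mgr/cephadm/host.* keys. Reading such a key back is a plain config-key get; that the payload parses as JSON is an assumption based on how cephadm typically stores inventory, and the key name below is taken verbatim from the audit lines:

    import json
    import subprocess

    def cephadm_host_devices(host: str = "compute-0") -> dict:
        # Fetch what cephadm cached for this host in the config-key store.
        out = subprocess.run(
            ["ceph", "config-key", "get", f"mgr/cephadm/host.{host}.devices.0"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)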
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.681 242318 INFO nova.virt.driver [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Feb 25 07:06:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:06:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:06:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:06:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:06:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:06:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:06:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:06:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:06:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:06:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:06:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:06:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:06:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6988014389607423e-06 of space, bias 4.0, pg target 0.0032385617267528906 quantized to 16 (current 16)
Feb 25 07:06:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:06:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:06:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:06:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:06:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:06:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:06:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:06:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:06:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:06:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
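The pg_autoscaler figures above are self-consistent: for each pool, pg target = usage_ratio x bias x PG budget, where the budget here is 300 (mon_target_pg_per_osd, default 100, times the three OSDs), and the result is then quantized to a power of two; pools whose computed target sits far below their current pg_num keep the current value because the autoscaler applies floors and a change threshold before shrinking (details vary by release). Checking '.mgr' against the log: 7.185749983720779e-06 x 1.0 x 300 = 0.0021557249951162337, exactly the logged target, and 'cephfs.cephfs.meta' with bias 4.0 matches the same way. As a sketch:

    def autoscaler_pg_target(usage_ratio: float, bias: float,
                             osd_count: int = 3,
                             target_pg_per_osd: int = 100) -> float:
        # Reproduces the 'pg target' column of the pg_autoscaler lines above;
        # quantization to a power of two happens in a separate step.
        return usage_ratio * bias * osd_count * target_pg_per_osd

    # Pool '.mgr' from the log:
    # autoscaler_pg_target(7.185749983720779e-06, 1.0) == 0.0021557249951162337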
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.905 242318 INFO nova.compute.provider_config [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Feb 25 07:06:41 np0005629333 python3.9[243189]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.924 242318 DEBUG oslo_concurrency.lockutils [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.924 242318 DEBUG oslo_concurrency.lockutils [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.925 242318 DEBUG oslo_concurrency.lockutils [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.925 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.926 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.926 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.926 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.926 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.926 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.927 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.927 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.927 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.927 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.927 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.928 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.928 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.928 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.928 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.928 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.929 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.929 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.929 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.929 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.930 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.930 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.930 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.930 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.930 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.931 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.931 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.931 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.931 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.932 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.932 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.932 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.932 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.932 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.933 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.933 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.933 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.933 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.934 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.935 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.935 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.935 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.935 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.935 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.936 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.936 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.936 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.936 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.936 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.937 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.937 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.937 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.937 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.937 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.938 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.938 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.938 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.938 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.938 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.938 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.939 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.939 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.939 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.939 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.939 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.940 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.940 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.940 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.940 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.940 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.940 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.941 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.941 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.941 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.941 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.941 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.942 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.942 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.942 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.942 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.942 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.943 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.943 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.943 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.943 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.943 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.943 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.944 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.944 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.944 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.944 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.944 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.945 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.945 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.945 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.945 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.945 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.945 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.946 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.946 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.946 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.946 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.946 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.947 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.947 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.947 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.947 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.947 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.948 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.948 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.948 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.948 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.948 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.948 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.949 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.949 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.949 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.949 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.949 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.950 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.950 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.950 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.950 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.950 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.950 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.951 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.951 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.951 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.951 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.951 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.951 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.952 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.952 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.952 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.952 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.952 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.953 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.953 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.953 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.953 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.953 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.953 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.954 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.954 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.954 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.954 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.954 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.955 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.955 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.955 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.955 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.955 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.955 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.956 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.956 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.956 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.956 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.956 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.957 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.957 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.957 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.957 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.957 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.958 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.958 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.958 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.958 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.958 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.959 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.959 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.959 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.959 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.959 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.960 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.960 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.960 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.960 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.960 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.961 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.961 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.961 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.961 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.961 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.961 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.962 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.962 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.962 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.962 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.962 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.963 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.963 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.963 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.963 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.963 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.964 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.964 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.964 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.964 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.964 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.965 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.965 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.965 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.965 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.965 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.966 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.966 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.966 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.966 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.966 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.966 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.967 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.967 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.967 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.967 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.967 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.968 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.968 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.968 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.968 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.968 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.968 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.969 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.969 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cinder.os_region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.969 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.969 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.969 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.970 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.970 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.970 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.970 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.970 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.970 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.971 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.971 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.971 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.971 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.971 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.972 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.972 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.972 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.972 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.972 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.973 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.973 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.973 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.973 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.973 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.974 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.974 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.974 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.974 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.974 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.975 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.975 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.975 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.975 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.975 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.976 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.976 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.976 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.976 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.976 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.977 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.977 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.977 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.977 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.978 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.978 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.978 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.978 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.978 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.979 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.979 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.979 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.979 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.979 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.980 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.980 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.980 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.980 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.980 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.981 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.981 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.981 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.981 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.981 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.982 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.982 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.982 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.982 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.982 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.983 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.983 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.983 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.983 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.983 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.984 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.984 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.984 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.984 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.984 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.985 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.985 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.985 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.985 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.986 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.986 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.986 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.986 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.986 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.986 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.987 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.987 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.987 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.987 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.988 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.988 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.988 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.988 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.988 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.989 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.989 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.989 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.989 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.989 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.990 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.990 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.990 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.990 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.990 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.990 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.991 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.991 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.991 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.991 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
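Note: the glance.* values above map to the [glance] section of nova.conf. A reconstruction of the operator-visible settings, inferred from the logged values rather than read from the deployed file (num_retries = 3 may simply be the release default):

    [glance]
    # reconstructed from the config dump above, not from the actual file
    region_name = regionOne
    valid_interfaces = internal
    num_retries = 3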
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.991 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.992 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.992 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.992 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.992 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.992 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.993 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.993 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.993 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.993 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.993 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.993 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.994 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.994 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.994 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.994 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.994 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.994 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.994 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.995 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.995 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.995 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.995 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.995 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.995 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.996 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.996 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.996 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.996 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.996 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.996 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.996 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.997 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.997 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.997 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.997 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.997 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.997 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.997 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.997 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.998 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.998 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.998 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.998 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.998 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.998 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.998 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.999 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.999 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.999 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.999 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.999 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:41 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.999 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:41.999 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.000 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.000 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.000 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.000 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.000 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.000 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] barbican.barbican_region_name  = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.000 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.001 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.001 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.001 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.001 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.001 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.001 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.001 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.002 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.002 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.002 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.002 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
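Note: key_manager.backend = barbican (logged just above) selects Barbican as the key manager, and the barbican.* options then configure its client. A minimal nova.conf fragment reconstructed from the logged values (auth_endpoint and the retry settings in the dump look like defaults and are omitted):

    [key_manager]
    backend = barbican

    [barbican]
    # reconstructed from the config dump; endpoint discovery uses the
    # internal interface in regionOne
    barbican_endpoint_type = internal
    barbican_region_name = regionOne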
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.002 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.002 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.002 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.002 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.003 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.003 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.003 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.003 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.003 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.003 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.003 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.004 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.004 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.004 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.004 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.004 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.004 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.004 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.005 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.005 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.005 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.005 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.005 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.005 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.005 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.005 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.006 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.006 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.006 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.006 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.006 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.006 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.006 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.007 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.007 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.007 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.007 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.007 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.007 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.007 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.008 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.008 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.008 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.008 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.008 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.008 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.008 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.009 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.009 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.009 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.009 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.009 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.009 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.009 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.009 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.010 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.010 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.010 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.010 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.010 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.010 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.010 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.011 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.011 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.011 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.011 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.011 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.011 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.011 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.011 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.012 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.012 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.012 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.012 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.012 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.012 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.012 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.012 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.013 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.013 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.013 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.013 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.013 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.013 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.013 242318 WARNING oslo_config.cfg [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Feb 25 07:06:42 np0005629333 nova_compute[242312]: live_migration_uri is deprecated for removal in favor of two other options that
Feb 25 07:06:42 np0005629333 nova_compute[242312]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Feb 25 07:06:42 np0005629333 nova_compute[242312]: and ``live_migration_inbound_addr`` respectively.
Feb 25 07:06:42 np0005629333 nova_compute[242312]: ).  Its value may be silently ignored in the future.#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.014 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
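Note: the deprecation warning above names the replacement options for live_migration_uri. Given the logged template qemu+tls://%s/system, an equivalent non-deprecated configuration would look like the following sketch (assuming live_migration_scheme supplies the qemu+<scheme> part of the URI; the inbound address is deployment-specific and left as a placeholder):

    [libvirt]
    # replaces: live_migration_uri = qemu+tls://%s/system
    live_migration_scheme = tls
    # per-host migration address; placeholder, not taken from this log
    # live_migration_inbound_addr = <address on the migration network>

This is consistent with libvirt.live_migration_with_native_tls = True logged below, which keeps the migration stream on libvirt's native TLS transport.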
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.014 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.014 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.014 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.014 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.014 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.015 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.015 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.015 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.015 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.015 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.015 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.015 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.015 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.016 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.016 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.016 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.016 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.016 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.rbd_secret_uuid        = 8ac33163-6221-5d58-9a39-8b6933fe7762 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.016 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.016 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.017 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.017 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.017 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.017 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.017 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.017 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.017 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.017 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.018 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.018 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.018 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.018 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.018 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.018 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.018 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.019 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.019 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.019 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.019 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.019 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.019 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.019 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.019 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.020 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.020 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.020 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.020 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.020 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.020 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.020 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.021 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
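[editor's note] Every ``<group>.<option> = <value>`` line in this dump is emitted by oslo.config's ``log_opt_values()``, the call each line cites at ``cfg.py:2609``. A minimal sketch that reproduces the shape of the dump; the three option names and values are copied from the [libvirt] block above, while the logger setup is illustrative:

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    CONF.register_opts(
        [
            cfg.StrOpt("virt_type", default="kvm"),
            cfg.BoolOpt("swtpm_enabled", default=True),
            cfg.IntOpt("rx_queue_size", default=512),
        ],
        group="libvirt",
    )

    CONF([])  # parse an (empty) command line; config files would be passed here
    CONF.log_opt_values(LOG, logging.DEBUG)  # prints "libvirt.virt_type = kvm", ...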
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.021 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.021 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.021 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.021 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.021 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.021 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.021 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.022 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.022 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.022 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.022 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.022 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.022 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.022 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.023 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.023 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.023 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.023 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.023 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.023 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.023 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.023 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.024 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.024 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.024 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.024 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.024 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.024 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.024 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.025 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.025 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.025 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.025 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.025 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.025 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.025 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.026 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.026 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.026 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.026 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.026 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.026 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.026 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.026 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.027 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.027 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.027 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.027 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.027 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.027 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.027 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.027 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.028 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.028 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.028 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.028 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.028 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.028 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.028 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.029 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.029 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.029 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.029 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.029 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.029 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.029 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.029 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.030 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.030 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.030 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.030 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.030 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.030 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
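[editor's note] The [placement] block above is a standard keystoneauth password-auth section (auth_type, auth_url, project/user domains, valid_interfaces). A sketch, assuming keystoneauth1's loading helpers and a ``CONF`` that has already parsed a nova.conf with the dumped values; the registration calls are shown only to keep the sketch self-contained, since nova performs them itself at startup:

    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    CONF = cfg.CONF
    # Assumed: CONF has parsed /etc/nova/nova.conf with the [placement]
    # values shown in the dump above.
    ks_loading.register_auth_conf_options(CONF, "placement")
    ks_loading.register_session_conf_options(CONF, "placement")

    auth = ks_loading.load_auth_from_conf_options(CONF, "placement")
    session = ks_loading.load_session_from_conf_options(CONF, "placement", auth=auth)

    # Endpoint discovery honours the dumped service_type / valid_interfaces /
    # region_name values ("placement" / ['internal'] / "regionOne").
    resp = session.get(
        "/resource_providers",
        endpoint_filter={
            "service_type": "placement",
            "interface": "internal",
            "region_name": "regionOne",
        },
    )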
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.030 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.031 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.031 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.031 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.031 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.031 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.031 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.031 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.032 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.032 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.032 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.032 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.032 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
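[editor's note] A worked example of how the [quota] caps above interact: each resource is checked independently, so for a given flavor the smallest quotient is the binding constraint. The flavor is hypothetical; the three limits are the dumped values:

    flavor = {"vcpus": 4, "ram_mb": 8192}                  # hypothetical flavor
    quota = {"cores": 20, "instances": 10, "ram": 51200}   # values from the dump

    max_servers = min(
        quota["cores"] // flavor["vcpus"],   # 20 // 4       = 5  <- binding
        quota["instances"],                  #                 10
        quota["ram"] // flavor["ram_mb"],    # 51200 // 8192 = 6
    )
    assert max_servers == 5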
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.032 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.032 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.033 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.033 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.033 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.033 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.033 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.033 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.034 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.034 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.034 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.034 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.034 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.034 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.034 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.034 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.035 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.035 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.035 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.035 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.035 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.035 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.035 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.036 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.036 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.036 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.036 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.036 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.036 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.036 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.036 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.037 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.037 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.037 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.037 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.037 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.037 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
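[editor's note] The [filter_scheduler] options above parameterize a two-phase pipeline: ``enabled_filters`` prune candidate hosts, the ``*_weight_multiplier`` values rank the survivors, and ``host_subset_size`` bounds the final random pick. An illustrative sketch of that shape, not nova's actual scheduler code:

    import random

    def schedule(hosts, filters, weighers, host_subset_size=1):
        # Phase 1: drop any host that fails an enabled filter.
        survivors = [h for h in hosts if all(f(h) for f in filters)]
        # Phase 2: rank by the multiplier-weighted sum of weigher scores.
        survivors.sort(
            key=lambda h: sum(mult * fn(h) for mult, fn in weighers),
            reverse=True,
        )
        # Pick randomly among the top host_subset_size (1 here, as dumped).
        return random.choice(survivors[:host_subset_size])

    best = schedule(
        hosts=[{"free_ram": 2048}, {"free_ram": 8192}],
        filters=[lambda h: h["free_ram"] > 1024],     # stands in for ComputeFilter etc.
        weighers=[(1.0, lambda h: h["free_ram"])],    # ram_weight_multiplier = 1.0
    )
    assert best == {"free_ram": 8192}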
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.037 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.037 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.038 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.038 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.038 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.038 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.038 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.038 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.038 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.039 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.039 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.039 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.039 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.039 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.039 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.039 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.039 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.040 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.040 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.040 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.040 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.040 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.040 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.041 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.041 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.041 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.041 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.041 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.041 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.041 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.042 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.042 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
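
Every line in this run comes from oslo.config's startup option dump: oslo.service calls ConfigOpts.log_opt_values(), which walks each registered option group and emits one "group.option = value" line at DEBUG level. The trailing "log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609" is oslo.log's debug suffix (function name plus file:line), and the "#033[00m" at the end of each line is an ANSI colour-reset escape as rendered by journald. A minimal sketch of the mechanism, using two hypothetical stand-ins for the [spice] options above rather than nova's real option modules:

    import logging

    from oslo_config import cfg

    CONF = cfg.CONF
    LOG = logging.getLogger(__name__)

    # Hypothetical re-registration of two [spice] options seen above;
    # nova registers the full set itself at import time.
    CONF.register_opts(
        [cfg.BoolOpt('enabled', default=False),
         cfg.BoolOpt('agent_enabled', default=True)],
        group='spice')

    CONF([], project='demo')
    logging.basicConfig(level=logging.DEBUG)

    # Emits "spice.enabled = False"-style lines like the ones in this log.
    CONF.log_opt_values(LOG, logging.DEBUG)
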
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.042 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.042 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.042 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.042 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.042 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
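
upgrade_levels.compute = auto is the one non-default entry in this group: rather than pinning the compute RPC API to a fixed version during a rolling upgrade, nova derives the cap from the lowest nova-compute service version registered in the deployment. An explicit pin would look roughly like this (hypothetical version number):

    # Hypothetical: pin the compute RPC API instead of 'auto'.
    CONF.set_override('compute', '6.0', group='upgrade_levels')
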
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.042 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.043 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.043 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.043 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.043 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.043 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.043 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.043 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.043 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.044 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.044 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.044 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.044 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.044 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.044 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.044 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.044 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.045 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.045 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.046 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.046 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.046 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.047 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.047 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.047 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.048 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.048 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.048 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.049 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.049 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.049 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.049 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.049 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.050 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
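
Everything in the [vmware] group above sits at its default (host_ip and host_username are None, host_port is 443), which is what you would expect on a node that does not run the VMware driver; the options are registered regardless of the active virt driver, so they still appear in the dump. The masked vmware.host_password also shows a general oslo.config behaviour: any option declared with secret=True is printed as "****" by log_opt_values, along the lines of this sketch:

    # Sketch: secret options are masked as '****' in the option dump.
    CONF.register_opts(
        [cfg.StrOpt('host_password', secret=True,
                    help='Password for the VMware connection')],
        group='vmware')
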
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.050 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.050 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.050 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.050 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.050 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.051 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.051 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.051 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.051 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.051 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
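
The [vnc] group is the live console path on this node: the browser-facing URL is the public route in novncproxy_base_url, QEMU is told to listen on all addresses (server_listen = ::0), and the proxy connects back to the compute host at server_proxyclient_address = 192.168.122.100. Once registered, these values are read back through plain attribute access on the config object:

    # Assuming nova's [vnc] options are registered on CONF:
    base_url = CONF.vnc.novncproxy_base_url
    proxy_client = CONF.vnc.server_proxyclient_address  # '192.168.122.100'
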
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.051 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.052 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.052 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.052 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.052 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.052 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.052 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.052 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.052 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.053 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.053 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.053 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.053 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.053 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.053 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.053 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.054 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.054 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.054 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.054 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.054 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
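
Two [workarounds] entries above are switched on: skip_cpu_compare_on_dest = True disables the destination-side CPU comparison during live migration, and enable_qemu_monitor_announce_self = True (with qemu_monitor_announce_self_count = 3 and ..._interval = 1) asks the QEMU monitor to re-announce the guest's MAC after a live migration so switches relearn the port. The announce loop implied by those two values reduces to a sketch like this, where announce_self() is a hypothetical stand-in for the real monitor call:

    import time

    def announce(guest, count=3, interval=1):
        # count/interval mirror the two qemu_monitor_announce_self_*
        # options logged above.
        for _ in range(count):
            guest.announce_self()  # hypothetical monitor-command helper
            time.sleep(interval)
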
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.054 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.054 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.055 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.055 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.055 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.055 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.055 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.055 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.055 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.055 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.056 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.056 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.056 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.056 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.056 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.056 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.056 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.057 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.057 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.057 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.057 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.057 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.057 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.057 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.058 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
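
oslo_policy.enforce_new_defaults and enforce_scope are both True, so this deployment runs the newer secure-RBAC policy defaults with token-scope checking; operator overrides, if any, come from policy.yaml in the policy_dirs listed above. The enforcer that consumes this group is built from the same CONF object:

    from oslo_config import cfg
    from oslo_policy import policy

    # Reads [oslo_policy] policy_file, policy_dirs, enforce_scope, etc.
    enforcer = policy.Enforcer(cfg.CONF)
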
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.058 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.058 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.058 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.058 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.058 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.058 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.059 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.059 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.059 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.059 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.059 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.059 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.059 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.059 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.060 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.060 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.060 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.060 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.060 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.060 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.060 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.060 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.061 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.061 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.061 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.061 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.061 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.061 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.061 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.062 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.062 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.062 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.062 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.062 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.062 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.062 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.063 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.063 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.063 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
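
The messaging block shows RabbitMQ with quorum queues (rabbit_quorum_queue = True, amqp_durable_queues = True) and no TLS (ssl = False, empty certificate paths), while notifications are disabled outright (driver = ['noop']); the notifications transport_url is masked for the same secret=True reason as the passwords earlier. RPC is wired up from this CONF through oslo.messaging, roughly:

    from oslo_config import cfg
    import oslo_messaging as messaging

    # Honors the [oslo_messaging_rabbit] options above
    # (quorum queues, heartbeats, retry intervals, ...).
    transport = messaging.get_rpc_transport(cfg.CONF)
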
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.063 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.063 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.063 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.063 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.063 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.064 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.064 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.064 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.064 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.064 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.064 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.064 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.065 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.065 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.065 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.065 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.065 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.065 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.065 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.065 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.066 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.066 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.066 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.066 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.066 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.066 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.066 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.067 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.067 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.067 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.067 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.067 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.067 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.067 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.067 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.068 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.068 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.068 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
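
[oslo_limit] is the unified-limits client configuration: it authenticates to the Keystone endpoint in auth_url as user 'nova' with a system-scoped token (system_scope = all) in order to fetch registered limits. The auth plugin is loaded straight from this group via keystoneauth; a sketch, assuming the group's session and auth options are registered as they are in nova:

    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    # Builds a password auth plugin from the [oslo_limit] options above.
    auth = ks_loading.load_auth_from_conf_options(cfg.CONF, 'oslo_limit')
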
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.068 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.068 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.068 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.068 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.068 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.069 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.069 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.069 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.069 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.069 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.069 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.069 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.070 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.070 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.070 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
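
The two vif_plug_*_privileged groups configure oslo.privsep daemons for os-vif, and the capability lists are plain Linux capability numbers: [12] is CAP_NET_ADMIN, and [12, 1] is CAP_NET_ADMIN plus CAP_DAC_OVERRIDE. A privsep context is declared with the symbolic constants, along the lines of this sketch (the pypath value is a placeholder):

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    # Sketch matching vif_plug_ovs_privileged.capabilities = [12, 1].
    vif_plug = priv_context.PrivContext(
        'vif_plug_ovs',
        cfg_section='vif_plug_ovs_privileged',
        pypath=__name__ + '.vif_plug',
        capabilities=[caps.CAP_NET_ADMIN, caps.CAP_DAC_OVERRIDE])
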
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.070 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.070 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.070 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.070 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.071 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.071 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.071 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.071 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.071 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.071 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.071 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.072 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.072 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.072 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.072 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.072 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.072 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.072 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.073 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.073 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.073 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.073 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.073 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.073 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.073 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.073 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.074 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.074 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.074 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.074 242318 DEBUG oslo_service.service [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.075 242318 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260220085704.5cfeecb.el9)#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.087 242318 DEBUG nova.virt.libvirt.host [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.088 242318 DEBUG nova.virt.libvirt.host [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.088 242318 DEBUG nova.virt.libvirt.host [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.088 242318 DEBUG nova.virt.libvirt.host [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Feb 25 07:06:42 np0005629333 systemd[1]: Starting libvirt QEMU daemon...
Feb 25 07:06:42 np0005629333 systemd[1]: Started libvirt QEMU daemon.
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.171 242318 DEBUG nova.virt.libvirt.host [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f5a97523370> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.175 242318 DEBUG nova.virt.libvirt.host [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f5a97523370> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.176 242318 INFO nova.virt.libvirt.driver [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Connection event '1' reason 'None'
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.193 242318 WARNING nova.virt.libvirt.driver [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Feb 25 07:06:42 np0005629333 nova_compute[242312]: 2026-02-25 12:06:42.193 242318 DEBUG nova.virt.libvirt.volume.mount [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Feb 25 07:06:42 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:06:42 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:06:42 np0005629333 python3.9[243390]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 07:06:43 np0005629333 nova_compute[242312]: 2026-02-25 12:06:43.015 242318 INFO nova.virt.libvirt.host [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Libvirt host capabilities <capabilities>
Feb 25 07:06:43 np0005629333 nova_compute[242312]: 
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <host>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <uuid>aaf3b885-2d7c-41fd-a043-b293ce60ba6a</uuid>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <cpu>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <arch>x86_64</arch>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model>EPYC-Rome-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <vendor>AMD</vendor>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <microcode version='16777317'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <signature family='23' model='49' stepping='0'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <maxphysaddr mode='emulate' bits='40'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature name='x2apic'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature name='tsc-deadline'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature name='osxsave'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature name='hypervisor'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature name='tsc_adjust'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature name='spec-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature name='stibp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature name='arch-capabilities'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature name='ssbd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature name='cmp_legacy'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature name='topoext'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature name='virt-ssbd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature name='lbrv'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature name='tsc-scale'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature name='vmcb-clean'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature name='pause-filter'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature name='pfthreshold'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature name='svme-addr-chk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature name='rdctl-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature name='skip-l1dfl-vmentry'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature name='mds-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature name='pschange-mc-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <pages unit='KiB' size='4'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <pages unit='KiB' size='2048'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <pages unit='KiB' size='1048576'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </cpu>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <power_management>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <suspend_mem/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </power_management>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <iommu support='no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <migration_features>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <live/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <uri_transports>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <uri_transport>tcp</uri_transport>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <uri_transport>rdma</uri_transport>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </uri_transports>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </migration_features>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <topology>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <cells num='1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <cell id='0'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:          <memory unit='KiB'>7864276</memory>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:          <pages unit='KiB' size='4'>1966069</pages>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:          <pages unit='KiB' size='2048'>0</pages>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:          <pages unit='KiB' size='1048576'>0</pages>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:          <distances>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:            <sibling id='0' value='10'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:          </distances>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:          <cpus num='8'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:          </cpus>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        </cell>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </cells>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </topology>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <cache>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </cache>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <secmodel>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model>selinux</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <doi>0</doi>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </secmodel>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <secmodel>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model>dac</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <doi>0</doi>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <baselabel type='kvm'>+107:+107</baselabel>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <baselabel type='qemu'>+107:+107</baselabel>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </secmodel>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  </host>
Feb 25 07:06:43 np0005629333 nova_compute[242312]: 
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <guest>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <os_type>hvm</os_type>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <arch name='i686'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <wordsize>32</wordsize>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <domain type='qemu'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <domain type='kvm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </arch>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <features>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <pae/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <nonpae/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <acpi default='on' toggle='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <apic default='on' toggle='no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <cpuselection/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <deviceboot/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <disksnapshot default='on' toggle='no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <externalSnapshot/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </features>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  </guest>
Feb 25 07:06:43 np0005629333 nova_compute[242312]: 
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <guest>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <os_type>hvm</os_type>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <arch name='x86_64'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <wordsize>64</wordsize>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <domain type='qemu'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <domain type='kvm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </arch>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <features>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <acpi default='on' toggle='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <apic default='on' toggle='no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <cpuselection/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <deviceboot/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <disksnapshot default='on' toggle='no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <externalSnapshot/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </features>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  </guest>
Feb 25 07:06:43 np0005629333 nova_compute[242312]: 
Feb 25 07:06:43 np0005629333 nova_compute[242312]: </capabilities>
Feb 25 07:06:43 np0005629333 nova_compute[242312]: 2026-02-25 12:06:43.024 242318 DEBUG nova.virt.libvirt.host [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 25 07:06:43 np0005629333 nova_compute[242312]: 2026-02-25 12:06:43.051 242318 DEBUG nova.virt.libvirt.host [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Feb 25 07:06:43 np0005629333 nova_compute[242312]: <domainCapabilities>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <path>/usr/libexec/qemu-kvm</path>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <domain>kvm</domain>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <machine>pc-i440fx-rhel7.6.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <arch>i686</arch>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <vcpu max='240'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <iothreads supported='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <os supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <enum name='firmware'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <loader supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='type'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>rom</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>pflash</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='readonly'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>yes</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>no</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='secure'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>no</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </loader>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  </os>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <cpu>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <mode name='host-passthrough' supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='hostPassthroughMigratable'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>on</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>off</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </mode>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <mode name='maximum' supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='maximumMigratable'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>on</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>off</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </mode>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <mode name='host-model' supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model fallback='forbid'>EPYC-Rome</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <vendor>AMD</vendor>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='x2apic'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='tsc-deadline'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='hypervisor'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='tsc_adjust'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='spec-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='stibp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='ssbd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='cmp_legacy'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='overflow-recov'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='succor'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='ibrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='amd-ssbd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='virt-ssbd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='lbrv'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='tsc-scale'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='vmcb-clean'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='flushbyasid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='pause-filter'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='pfthreshold'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='svme-addr-chk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='lfence-always-serializing'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='disable' name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </mode>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <mode name='custom' supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell-noTSX'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cascadelake-Server'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cascadelake-Server-noTSX'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cascadelake-Server-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cascadelake-Server-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cascadelake-Server-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cascadelake-Server-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cascadelake-Server-v5'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='ClearwaterForest'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bhi-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cmpccxadd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ddpd-u'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='intel-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='lam'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchiti'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sha512'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sm3'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sm4'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='ClearwaterForest-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bhi-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cmpccxadd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ddpd-u'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='intel-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='lam'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchiti'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sha512'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sm3'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sm4'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cooperlake'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cooperlake-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cooperlake-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Denverton'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mpx'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Denverton-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mpx'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Denverton-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Denverton-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Dhyana-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Genoa'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amd-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='auto-ibrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='stibp-always-on'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Genoa-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amd-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='auto-ibrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='stibp-always-on'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Genoa-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amd-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='auto-ibrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fs-gs-base-ns'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='perfmon-v2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='stibp-always-on'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Milan'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Milan-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Milan-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amd-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='stibp-always-on'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Milan-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amd-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='stibp-always-on'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Rome'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Rome-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Rome-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Rome-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Turin'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amd-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='auto-ibrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vp2intersect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fs-gs-base-ns'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibpb-brtype'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='perfmon-v2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbpb'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='srso-user-kernel-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='stibp-always-on'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Turin-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amd-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='auto-ibrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vp2intersect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fs-gs-base-ns'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibpb-brtype'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='perfmon-v2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbpb'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='srso-user-kernel-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='stibp-always-on'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-v5'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='GraniteRapids'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchiti'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='GraniteRapids-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchiti'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='GraniteRapids-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10-128'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10-256'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10-512'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchiti'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='GraniteRapids-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10-128'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10-256'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10-512'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchiti'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell-noTSX'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell-noTSX-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-noTSX'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-v5'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-v6'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-v7'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='IvyBridge'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='IvyBridge-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='IvyBridge-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='IvyBridge-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='KnightsMill'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-4fmaps'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-4vnniw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512er'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512pf'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='KnightsMill-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-4fmaps'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-4vnniw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512er'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512pf'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Opteron_G4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fma4'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xop'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Opteron_G4-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fma4'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xop'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Opteron_G5'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fma4'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tbm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xop'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Opteron_G5-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fma4'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tbm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xop'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SapphireRapids'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SapphireRapids-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SapphireRapids-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SapphireRapids-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SapphireRapids-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SierraForest'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cmpccxadd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SierraForest-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cmpccxadd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SierraForest-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cmpccxadd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='intel-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='lam'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SierraForest-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cmpccxadd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='intel-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='lam'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Client'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Client-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Client-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Client-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Client-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Client-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server-v5'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Snowridge'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='core-capability'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mpx'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='split-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Snowridge-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='core-capability'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mpx'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='split-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Snowridge-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='core-capability'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='split-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Snowridge-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='core-capability'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='split-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Snowridge-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='athlon'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnow'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnowext'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='athlon-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnow'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnowext'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='core2duo'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='core2duo-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='coreduo'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='coreduo-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='n270'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='n270-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='phenom'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnow'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnowext'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='phenom-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnow'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnowext'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </mode>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  </cpu>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <memoryBacking supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <enum name='sourceType'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <value>file</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <value>anonymous</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <value>memfd</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  </memoryBacking>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <devices>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <disk supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='diskDevice'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>disk</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>cdrom</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>floppy</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>lun</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='bus'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>ide</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>fdc</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>scsi</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>usb</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>sata</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='model'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio-transitional</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio-non-transitional</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </disk>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <graphics supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='type'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>vnc</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>egl-headless</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>dbus</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </graphics>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <video supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='modelType'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>vga</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>cirrus</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>none</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>bochs</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>ramfb</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </video>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <hostdev supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='mode'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>subsystem</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='startupPolicy'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>default</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>mandatory</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>requisite</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>optional</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='subsysType'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>usb</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>pci</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>scsi</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='capsType'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='pciBackend'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </hostdev>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <rng supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='model'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio-transitional</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio-non-transitional</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='backendModel'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>random</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>egd</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>builtin</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </rng>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <filesystem supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='driverType'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>path</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>handle</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtiofs</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </filesystem>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <tpm supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='model'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>tpm-tis</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>tpm-crb</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='backendModel'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>emulator</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>external</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='backendVersion'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>2.0</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </tpm>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <redirdev supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='bus'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>usb</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </redirdev>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <channel supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='type'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>pty</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>unix</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </channel>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <crypto supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='model'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='type'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>qemu</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='backendModel'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>builtin</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </crypto>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <interface supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='backendType'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>default</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>passt</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </interface>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <panic supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='model'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>isa</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>hyperv</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </panic>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <console supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='type'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>null</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>vc</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>pty</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>dev</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>file</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>pipe</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>stdio</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>udp</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>tcp</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>unix</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>qemu-vdagent</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>dbus</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </console>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  </devices>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <features>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <gic supported='no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <vmcoreinfo supported='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <genid supported='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <backingStoreInput supported='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <backup supported='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <async-teardown supported='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <s390-pv supported='no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <ps2 supported='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <tdx supported='no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <sev supported='no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <sgx supported='no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <hyperv supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='features'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>relaxed</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>vapic</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>spinlocks</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>vpindex</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>runtime</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>synic</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>stimer</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>reset</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>vendor_id</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>frequencies</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>reenlightenment</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>tlbflush</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>ipi</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>avic</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>emsr_bitmap</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>xmm_input</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <defaults>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <spinlocks>4095</spinlocks>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <stimer_direct>on</stimer_direct>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <tlbflush_direct>on</tlbflush_direct>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <tlbflush_extended>on</tlbflush_extended>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </defaults>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </hyperv>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <launchSecurity supported='no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  </features>
Feb 25 07:06:43 np0005629333 nova_compute[242312]: </domainCapabilities>
Feb 25 07:06:43 np0005629333 nova_compute[242312]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 25 07:06:43 np0005629333 nova_compute[242312]: 2026-02-25 12:06:43.061 242318 DEBUG nova.virt.libvirt.host [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Feb 25 07:06:43 np0005629333 nova_compute[242312]: <domainCapabilities>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <path>/usr/libexec/qemu-kvm</path>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <domain>kvm</domain>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <machine>pc-q35-rhel9.8.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <arch>i686</arch>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <vcpu max='4096'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <iothreads supported='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <os supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <enum name='firmware'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <loader supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='type'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>rom</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>pflash</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='readonly'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>yes</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>no</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='secure'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>no</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </loader>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  </os>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <cpu>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <mode name='host-passthrough' supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='hostPassthroughMigratable'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>on</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>off</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </mode>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <mode name='maximum' supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='maximumMigratable'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>on</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>off</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </mode>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <mode name='host-model' supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model fallback='forbid'>EPYC-Rome</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <vendor>AMD</vendor>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='x2apic'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='tsc-deadline'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='hypervisor'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='tsc_adjust'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='spec-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='stibp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='ssbd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='cmp_legacy'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='overflow-recov'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='succor'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='ibrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='amd-ssbd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='virt-ssbd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='lbrv'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='tsc-scale'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='vmcb-clean'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='flushbyasid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='pause-filter'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='pfthreshold'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='svme-addr-chk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='lfence-always-serializing'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='disable' name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </mode>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <mode name='custom' supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell-noTSX'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cascadelake-Server'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cascadelake-Server-noTSX'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cascadelake-Server-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cascadelake-Server-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cascadelake-Server-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cascadelake-Server-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cascadelake-Server-v5'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='ClearwaterForest'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bhi-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cmpccxadd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ddpd-u'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='intel-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='lam'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchiti'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sha512'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sm3'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sm4'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='ClearwaterForest-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bhi-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cmpccxadd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ddpd-u'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='intel-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='lam'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchiti'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sha512'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sm3'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sm4'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cooperlake'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cooperlake-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cooperlake-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Denverton'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mpx'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Denverton-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mpx'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Denverton-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Denverton-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Dhyana-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Genoa'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amd-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='auto-ibrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='stibp-always-on'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Genoa-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amd-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='auto-ibrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='stibp-always-on'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Genoa-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amd-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='auto-ibrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fs-gs-base-ns'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='perfmon-v2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='stibp-always-on'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Milan'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Milan-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Milan-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amd-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='stibp-always-on'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Milan-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amd-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='stibp-always-on'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Rome'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Rome-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Rome-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Rome-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Turin'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amd-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='auto-ibrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vp2intersect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fs-gs-base-ns'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibpb-brtype'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='perfmon-v2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbpb'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='srso-user-kernel-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='stibp-always-on'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Turin-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amd-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='auto-ibrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vp2intersect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fs-gs-base-ns'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibpb-brtype'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='perfmon-v2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbpb'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='srso-user-kernel-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='stibp-always-on'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-v5'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='GraniteRapids'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchiti'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='GraniteRapids-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchiti'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='GraniteRapids-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10-128'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10-256'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10-512'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchiti'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='GraniteRapids-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10-128'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10-256'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10-512'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchiti'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell-noTSX'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell-noTSX-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-noTSX'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-v5'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-v6'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-v7'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='IvyBridge'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='IvyBridge-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='IvyBridge-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='IvyBridge-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='KnightsMill'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-4fmaps'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-4vnniw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512er'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512pf'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='KnightsMill-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-4fmaps'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-4vnniw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512er'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512pf'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Opteron_G4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fma4'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xop'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Opteron_G4-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fma4'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xop'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Opteron_G5'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fma4'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tbm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xop'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Opteron_G5-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fma4'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tbm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xop'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SapphireRapids'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SapphireRapids-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SapphireRapids-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SapphireRapids-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SapphireRapids-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SierraForest'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cmpccxadd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SierraForest-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cmpccxadd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SierraForest-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cmpccxadd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='intel-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='lam'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SierraForest-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cmpccxadd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='intel-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='lam'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Client'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Client-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Client-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Client-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Client-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Client-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server-v5'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Snowridge'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='core-capability'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mpx'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='split-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Snowridge-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='core-capability'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mpx'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='split-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Snowridge-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='core-capability'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='split-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Snowridge-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='core-capability'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='split-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Snowridge-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='athlon'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnow'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnowext'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='athlon-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnow'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnowext'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='core2duo'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='core2duo-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='coreduo'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='coreduo-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='n270'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='n270-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='phenom'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnow'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnowext'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='phenom-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnow'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnowext'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </mode>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  </cpu>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <memoryBacking supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <enum name='sourceType'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <value>file</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <value>anonymous</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <value>memfd</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  </memoryBacking>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <devices>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <disk supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='diskDevice'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>disk</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>cdrom</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>floppy</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>lun</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='bus'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>fdc</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>scsi</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>usb</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>sata</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='model'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio-transitional</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio-non-transitional</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </disk>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <graphics supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='type'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>vnc</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>egl-headless</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>dbus</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </graphics>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <video supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='modelType'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>vga</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>cirrus</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>none</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>bochs</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>ramfb</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </video>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <hostdev supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='mode'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>subsystem</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='startupPolicy'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>default</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>mandatory</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>requisite</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>optional</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='subsysType'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>usb</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>pci</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>scsi</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='capsType'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='pciBackend'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </hostdev>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <rng supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='model'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio-transitional</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio-non-transitional</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='backendModel'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>random</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>egd</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>builtin</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </rng>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <filesystem supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='driverType'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>path</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>handle</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtiofs</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </filesystem>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <tpm supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='model'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>tpm-tis</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>tpm-crb</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='backendModel'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>emulator</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>external</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='backendVersion'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>2.0</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </tpm>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <redirdev supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='bus'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>usb</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </redirdev>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <channel supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='type'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>pty</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>unix</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </channel>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <crypto supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='model'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='type'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>qemu</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='backendModel'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>builtin</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </crypto>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <interface supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='backendType'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>default</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>passt</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </interface>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <panic supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='model'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>isa</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>hyperv</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </panic>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <console supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='type'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>null</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>vc</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>pty</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>dev</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>file</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>pipe</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>stdio</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>udp</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>tcp</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>unix</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>qemu-vdagent</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>dbus</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </console>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  </devices>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <features>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <gic supported='no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <vmcoreinfo supported='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <genid supported='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <backingStoreInput supported='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <backup supported='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <async-teardown supported='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <s390-pv supported='no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <ps2 supported='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <tdx supported='no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <sev supported='no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <sgx supported='no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <hyperv supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='features'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>relaxed</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>vapic</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>spinlocks</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>vpindex</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>runtime</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>synic</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>stimer</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>reset</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>vendor_id</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>frequencies</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>reenlightenment</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>tlbflush</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>ipi</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>avic</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>emsr_bitmap</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>xmm_input</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <defaults>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <spinlocks>4095</spinlocks>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <stimer_direct>on</stimer_direct>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <tlbflush_direct>on</tlbflush_direct>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <tlbflush_extended>on</tlbflush_extended>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </defaults>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </hyperv>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <launchSecurity supported='no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  </features>
Feb 25 07:06:43 np0005629333 nova_compute[242312]: </domainCapabilities>
Feb 25 07:06:43 np0005629333 nova_compute[242312]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
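[A minimal sketch, assuming the libvirt Python bindings, of how a domainCapabilities dump like the one above can be fetched and its per-model <blockers> inspected. The emulator path, arch, machine type, and virt type mirror values visible in this log; the code itself is illustrative and is not nova's implementation in host.py.]

    import libvirt
    import xml.etree.ElementTree as ET

    # Connection URI is an assumption; qemu:///system is the usual host URI.
    conn = libvirt.open('qemu:///system')
    caps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',   # <path> as reported in the dump above
        'x86_64', 'pc', 'kvm')     # arch, machine alias, virttype as logged
    root = ET.fromstring(caps_xml)

    # For each custom-mode CPU model, report usability and, when usable='no',
    # the <blockers> features the host CPU is missing (e.g. avx512f, pcid).
    for model in root.findall("./cpu/mode[@name='custom']/model"):
        blockers = root.find(
            "./cpu/mode[@name='custom']/blockers[@model='{}']".format(model.text))
        feats = [] if blockers is None else [
            f.get('name') for f in blockers.findall('feature')]
        print(model.text, model.get('usable'), feats)
    conn.close()

[Running this against the host that produced this log would, per the dump above, list e.g. Westmere as usable='yes' with no blockers and Skylake-Server as usable='no' blocked on the AVX-512 family, erms, hle, invpcid, pcid, pku, and rtm.]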
Feb 25 07:06:43 np0005629333 nova_compute[242312]: 2026-02-25 12:06:43.133 242318 DEBUG nova.virt.libvirt.host [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
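[A hedged sketch of the per-machine-type loop implied by the debug line above: capabilities are queried once for each machine-type alias, and each alias resolves to a concrete canonical machine. The alias set {'pc', 'q35'} and the emulator path come from this log; the loop below is illustrative, not nova's code.]

    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open('qemu:///system')  # URI is an assumption
    for alias in ('pc', 'q35'):
        xml_desc = conn.getDomainCapabilities(
            '/usr/libexec/qemu-kvm', 'x86_64', alias, 'kvm')
        root = ET.fromstring(xml_desc)
        # <machine> carries the resolved canonical type; in the dump that
        # follows, 'pc' resolves to pc-i440fx-rhel7.6.0.
        print(alias, '->', root.findtext('machine'))
    conn.close()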
Feb 25 07:06:43 np0005629333 nova_compute[242312]: 2026-02-25 12:06:43.138 242318 DEBUG nova.virt.libvirt.host [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb 25 07:06:43 np0005629333 nova_compute[242312]: <domainCapabilities>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <path>/usr/libexec/qemu-kvm</path>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <domain>kvm</domain>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <machine>pc-i440fx-rhel7.6.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <arch>x86_64</arch>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <vcpu max='240'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <iothreads supported='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <os supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <enum name='firmware'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <loader supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='type'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>rom</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>pflash</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='readonly'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>yes</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>no</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='secure'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>no</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </loader>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  </os>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <cpu>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <mode name='host-passthrough' supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='hostPassthroughMigratable'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>on</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>off</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </mode>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <mode name='maximum' supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='maximumMigratable'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>on</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>off</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </mode>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <mode name='host-model' supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model fallback='forbid'>EPYC-Rome</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <vendor>AMD</vendor>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='x2apic'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='tsc-deadline'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='hypervisor'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='tsc_adjust'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='spec-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='stibp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='ssbd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='cmp_legacy'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='overflow-recov'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='succor'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='ibrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='amd-ssbd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='virt-ssbd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='lbrv'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='tsc-scale'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='vmcb-clean'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='flushbyasid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='pause-filter'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='pfthreshold'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='svme-addr-chk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='lfence-always-serializing'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='disable' name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </mode>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <mode name='custom' supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell-noTSX'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cascadelake-Server'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cascadelake-Server-noTSX'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cascadelake-Server-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cascadelake-Server-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cascadelake-Server-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cascadelake-Server-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cascadelake-Server-v5'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='ClearwaterForest'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bhi-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cmpccxadd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ddpd-u'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='intel-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='lam'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchiti'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sha512'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sm3'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sm4'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='ClearwaterForest-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bhi-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cmpccxadd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ddpd-u'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='intel-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='lam'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchiti'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sha512'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sm3'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sm4'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cooperlake'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cooperlake-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cooperlake-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Denverton'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mpx'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Denverton-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mpx'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Denverton-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Denverton-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Dhyana-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Genoa'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amd-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='auto-ibrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='stibp-always-on'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Genoa-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amd-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='auto-ibrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='stibp-always-on'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Genoa-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amd-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='auto-ibrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fs-gs-base-ns'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='perfmon-v2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='stibp-always-on'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Milan'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Milan-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Milan-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amd-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='stibp-always-on'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Milan-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amd-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='stibp-always-on'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Rome'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Rome-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Rome-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Rome-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Turin'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amd-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='auto-ibrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vp2intersect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fs-gs-base-ns'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibpb-brtype'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='perfmon-v2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbpb'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='srso-user-kernel-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='stibp-always-on'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Turin-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amd-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='auto-ibrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vp2intersect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fs-gs-base-ns'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibpb-brtype'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='perfmon-v2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbpb'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='srso-user-kernel-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='stibp-always-on'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-v5'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='GraniteRapids'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchiti'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='GraniteRapids-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchiti'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='GraniteRapids-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10-128'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10-256'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10-512'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchiti'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='GraniteRapids-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10-128'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10-256'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10-512'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchiti'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell-noTSX'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell-noTSX-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-noTSX'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-v5'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-v6'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-v7'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='IvyBridge'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='IvyBridge-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='IvyBridge-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='IvyBridge-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='KnightsMill'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-4fmaps'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-4vnniw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512er'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512pf'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='KnightsMill-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-4fmaps'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-4vnniw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512er'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512pf'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Opteron_G4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fma4'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xop'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Opteron_G4-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fma4'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xop'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Opteron_G5'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fma4'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tbm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xop'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Opteron_G5-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fma4'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tbm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xop'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SapphireRapids'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SapphireRapids-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SapphireRapids-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SapphireRapids-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SapphireRapids-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SierraForest'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cmpccxadd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SierraForest-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cmpccxadd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SierraForest-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cmpccxadd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='intel-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='lam'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SierraForest-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cmpccxadd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='intel-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='lam'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Client'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Client-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Client-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Client-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Client-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Client-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server-v5'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Snowridge'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='core-capability'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mpx'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='split-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Snowridge-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='core-capability'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mpx'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='split-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Snowridge-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='core-capability'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='split-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Snowridge-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='core-capability'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='split-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Snowridge-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='athlon'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnow'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnowext'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='athlon-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnow'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnowext'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='core2duo'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='core2duo-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='coreduo'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='coreduo-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='n270'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='n270-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='phenom'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnow'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnowext'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='phenom-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnow'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnowext'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </mode>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  </cpu>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <memoryBacking supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <enum name='sourceType'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <value>file</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <value>anonymous</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <value>memfd</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  </memoryBacking>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <devices>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <disk supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='diskDevice'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>disk</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>cdrom</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>floppy</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>lun</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='bus'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>ide</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>fdc</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>scsi</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>usb</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>sata</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='model'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio-transitional</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio-non-transitional</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </disk>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <graphics supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='type'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>vnc</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>egl-headless</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>dbus</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </graphics>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <video supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='modelType'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>vga</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>cirrus</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>none</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>bochs</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>ramfb</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </video>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <hostdev supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='mode'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>subsystem</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='startupPolicy'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>default</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>mandatory</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>requisite</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>optional</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='subsysType'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>usb</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>pci</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>scsi</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='capsType'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='pciBackend'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </hostdev>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <rng supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='model'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio-transitional</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio-non-transitional</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='backendModel'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>random</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>egd</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>builtin</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </rng>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <filesystem supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='driverType'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>path</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>handle</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtiofs</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </filesystem>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <tpm supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='model'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>tpm-tis</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>tpm-crb</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='backendModel'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>emulator</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>external</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='backendVersion'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>2.0</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </tpm>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <redirdev supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='bus'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>usb</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </redirdev>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <channel supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='type'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>pty</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>unix</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </channel>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <crypto supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='model'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='type'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>qemu</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='backendModel'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>builtin</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </crypto>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <interface supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='backendType'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>default</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>passt</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </interface>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <panic supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='model'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>isa</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>hyperv</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </panic>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <console supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='type'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>null</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>vc</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>pty</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>dev</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>file</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>pipe</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>stdio</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>udp</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>tcp</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>unix</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>qemu-vdagent</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>dbus</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </console>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  </devices>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <features>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <gic supported='no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <vmcoreinfo supported='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <genid supported='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <backingStoreInput supported='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <backup supported='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <async-teardown supported='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <s390-pv supported='no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <ps2 supported='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <tdx supported='no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <sev supported='no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <sgx supported='no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <hyperv supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='features'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>relaxed</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>vapic</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>spinlocks</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>vpindex</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>runtime</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>synic</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>stimer</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>reset</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>vendor_id</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>frequencies</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>reenlightenment</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>tlbflush</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>ipi</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>avic</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>emsr_bitmap</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>xmm_input</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <defaults>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <spinlocks>4095</spinlocks>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <stimer_direct>on</stimer_direct>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <tlbflush_direct>on</tlbflush_direct>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <tlbflush_extended>on</tlbflush_extended>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </defaults>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </hyperv>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <launchSecurity supported='no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  </features>
Feb 25 07:06:43 np0005629333 nova_compute[242312]: </domainCapabilities>
Feb 25 07:06:43 np0005629333 nova_compute[242312]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
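For reference, the `<domainCapabilities>` document logged above is what nova's _get_domain_capabilities obtains from libvirt. A minimal libvirt-python sketch of the equivalent query follows; it is not part of the captured log, and the connection URI and call arguments are assumptions taken from values visible in this log (emulator /usr/libexec/qemu-kvm, arch x86_64, machine q35, virt type kvm):

    # Sketch only: fetch the same domainCapabilities XML that nova logs above.
    # URI and arguments are assumptions based on this host's log, not nova code.
    import libvirt

    conn = libvirt.open('qemu:///system')  # assumed local system connection
    # virConnect.getDomainCapabilities(emulatorbin, arch, machine, virttype, flags)
    xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm', 'x86_64', 'q35', 'kvm', 0)
    print(xml)  # prints a <domainCapabilities> document like the one above
    conn.close()

The same document can be dumped from the shell with `virsh domcapabilities --arch x86_64 --machine q35 --virttype kvm`.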
Feb 25 07:06:43 np0005629333 nova_compute[242312]: 2026-02-25 12:06:43.204 242318 DEBUG nova.virt.libvirt.host [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Feb 25 07:06:43 np0005629333 nova_compute[242312]: <domainCapabilities>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <path>/usr/libexec/qemu-kvm</path>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <domain>kvm</domain>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <machine>pc-q35-rhel9.8.0</machine>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <arch>x86_64</arch>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <vcpu max='4096'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <iothreads supported='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <os supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <enum name='firmware'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <value>efi</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <loader supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='type'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>rom</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>pflash</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='readonly'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>yes</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>no</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='secure'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>yes</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>no</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </loader>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  </os>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <cpu>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <mode name='host-passthrough' supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='hostPassthroughMigratable'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>on</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>off</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </mode>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <mode name='maximum' supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='maximumMigratable'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>on</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>off</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </mode>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <mode name='host-model' supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model fallback='forbid'>EPYC-Rome</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <vendor>AMD</vendor>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='x2apic'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='tsc-deadline'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='hypervisor'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='tsc_adjust'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='spec-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='stibp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='ssbd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='cmp_legacy'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='overflow-recov'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='succor'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='ibrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='amd-ssbd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='virt-ssbd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='lbrv'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='tsc-scale'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='vmcb-clean'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='flushbyasid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='pause-filter'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='pfthreshold'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='svme-addr-chk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='require' name='lfence-always-serializing'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <feature policy='disable' name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </mode>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <mode name='custom' supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell-noTSX'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Broadwell-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cascadelake-Server'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cascadelake-Server-noTSX'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cascadelake-Server-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cascadelake-Server-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cascadelake-Server-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cascadelake-Server-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cascadelake-Server-v5'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='ClearwaterForest'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bhi-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cmpccxadd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ddpd-u'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='intel-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='lam'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchiti'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sha512'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sm3'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sm4'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='ClearwaterForest-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bhi-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cmpccxadd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ddpd-u'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='intel-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='lam'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchiti'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sha512'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sm3'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sm4'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cooperlake'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cooperlake-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Cooperlake-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Denverton'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mpx'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Denverton-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mpx'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Denverton-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Denverton-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Dhyana-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Genoa'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amd-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='auto-ibrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='stibp-always-on'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Genoa-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amd-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='auto-ibrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='stibp-always-on'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Genoa-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amd-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='auto-ibrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fs-gs-base-ns'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='perfmon-v2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='stibp-always-on'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Milan'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Milan-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Milan-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amd-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='stibp-always-on'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Milan-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amd-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='stibp-always-on'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Rome'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Rome-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Rome-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Rome-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Turin'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amd-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='auto-ibrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vp2intersect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fs-gs-base-ns'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibpb-brtype'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='perfmon-v2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbpb'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='srso-user-kernel-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='stibp-always-on'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-Turin-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amd-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='auto-ibrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vp2intersect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fs-gs-base-ns'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibpb-brtype'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='perfmon-v2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbpb'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='srso-user-kernel-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='stibp-always-on'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='EPYC-v5'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
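(Annotation: the listing above is libvirt domain-capabilities output. Each <model> entry carries usable='yes'/'no', and every non-usable model is followed by a <blockers> element naming the host CPU features that model would require. Below is a minimal sketch of how such a fragment can be summarized; the <cpu>/<mode name='custom'> wrapper is assumed from libvirt's standard domcapabilities schema since it scrolled past earlier in the log, and the sample string and function name are illustrative only.)

    import xml.etree.ElementTree as ET

    # Illustrative fragment matching the structure logged above; the wrapper
    # element names are assumptions taken from libvirt's domcapabilities schema.
    SAMPLE = """<cpu>
      <mode name='custom' supported='yes'>
        <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
        <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
        <blockers model='EPYC-Rome-v3'>
          <feature name='xsaves'/>
        </blockers>
      </mode>
    </cpu>"""

    def summarize(xml_text):
        root = ET.fromstring(xml_text)
        # Map model name -> list of blocking (missing) host feature names.
        blocked = {b.get('model'): [f.get('name') for f in b.findall('feature')]
                   for b in root.iter('blockers')}
        for model in root.iter('model'):
            if model.get('usable') == 'yes':
                print(f"{model.text}: usable")
            else:
                print(f"{model.text}: blocked by {', '.join(blocked.get(model.text, []))}")

    summarize(SAMPLE)

(Run against the full XML captured in this log, this would report, for example, EPYC-Rome-v4 and EPYC-Rome-v5 as usable while every EPYC-v* model is blocked on xsaves.)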
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='GraniteRapids'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchiti'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='GraniteRapids-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchiti'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='GraniteRapids-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10-128'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10-256'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10-512'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchiti'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='GraniteRapids-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10-128'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10-256'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx10-512'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='prefetchiti'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell-noTSX'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell-noTSX-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Haswell-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-noTSX'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-v5'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-v6'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Icelake-Server-v7'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='IvyBridge'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='IvyBridge-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='IvyBridge-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='IvyBridge-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='KnightsMill'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-4fmaps'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-4vnniw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512er'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512pf'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='KnightsMill-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-4fmaps'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-4vnniw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512er'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512pf'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Opteron_G4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fma4'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xop'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Opteron_G4-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fma4'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xop'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Opteron_G5'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fma4'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tbm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xop'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Opteron_G5-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fma4'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tbm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xop'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SapphireRapids'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SapphireRapids-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SapphireRapids-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SapphireRapids-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SapphireRapids-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='amx-tile'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-bf16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-fp16'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bitalg'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrc'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fzrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='la57'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='taa-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SierraForest'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cmpccxadd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SierraForest-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cmpccxadd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SierraForest-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cmpccxadd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='intel-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='lam'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='SierraForest-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ifma'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cmpccxadd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fbsdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='fsrs'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ibrs-all'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='intel-psfd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='lam'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mcdt-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pbrsb-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='psdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='serialize'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vaes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Client'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Client-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Client-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Client-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Client-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Client-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='hle'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='rtm'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Skylake-Server-v5'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512bw'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512cd'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512dq'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512f'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='avx512vl'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='invpcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pcid'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='pku'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Snowridge'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='core-capability'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mpx'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='split-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Snowridge-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='core-capability'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='mpx'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='split-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Snowridge-v2'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='core-capability'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='split-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Snowridge-v3'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='core-capability'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='split-lock-detect'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='Snowridge-v4'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='cldemote'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='erms'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='gfni'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdir64b'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='movdiri'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='xsaves'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='athlon'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnow'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnowext'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='athlon-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnow'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnowext'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='core2duo'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='core2duo-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='coreduo'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='coreduo-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='n270'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='n270-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='ss'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='phenom'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnow'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnowext'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <blockers model='phenom-v1'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnow'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <feature name='3dnowext'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </blockers>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </mode>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  </cpu>
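[Editor's note] The <cpu> section above enumerates every named CPU model with usable='yes'/'no', and for each unusable model a <blockers> list naming the host features it is missing. A minimal sketch of filtering such a report with Python's standard xml.etree.ElementTree (the function name and the caps_xml placeholder are illustrative, not from nova):

    import xml.etree.ElementTree as ET

    def cpu_model_report(caps_xml):
        """Map each named CPU model to the feature names blocking it
        (an empty list means the model is usable on this host)."""
        root = ET.fromstring(caps_xml)
        blockers = {
            b.get('model'): [f.get('name') for f in b.findall('feature')]
            for b in root.iter('blockers')
        }
        report = {}
        for m in root.iter('model'):
            report[m.text] = blockers.get(m.text, []) if m.get('usable') == 'no' else []
        return report

    # e.g. report['Skylake-Server'] would list avx512f, rtm, pku, ... on this host,
    # while report['Westmere'] would be [] (usable='yes' above).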
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <memoryBacking supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <enum name='sourceType'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <value>file</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <value>anonymous</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <value>memfd</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  </memoryBacking>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <devices>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <disk supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='diskDevice'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>disk</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>cdrom</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>floppy</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>lun</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='bus'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>fdc</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>scsi</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>usb</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>sata</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='model'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio-transitional</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio-non-transitional</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </disk>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <graphics supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='type'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>vnc</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>egl-headless</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>dbus</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </graphics>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <video supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='modelType'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>vga</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>cirrus</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>none</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>bochs</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>ramfb</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </video>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <hostdev supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='mode'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>subsystem</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='startupPolicy'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>default</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>mandatory</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>requisite</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>optional</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='subsysType'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>usb</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>pci</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>scsi</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='capsType'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='pciBackend'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </hostdev>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <rng supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='model'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio-transitional</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtio-non-transitional</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='backendModel'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>random</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>egd</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>builtin</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </rng>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <filesystem supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='driverType'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>path</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>handle</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>virtiofs</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </filesystem>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <tpm supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='model'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>tpm-tis</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>tpm-crb</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='backendModel'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>emulator</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>external</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='backendVersion'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>2.0</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </tpm>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <redirdev supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='bus'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>usb</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </redirdev>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <channel supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='type'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>pty</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>unix</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </channel>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <crypto supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='model'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='type'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>qemu</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='backendModel'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>builtin</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </crypto>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <interface supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='backendType'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>default</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>passt</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </interface>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <panic supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='model'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>isa</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>hyperv</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </panic>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <console supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='type'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>null</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>vc</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>pty</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>dev</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>file</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>pipe</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>stdio</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>udp</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>tcp</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>unix</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>qemu-vdagent</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>dbus</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </console>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  </devices>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  <features>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <gic supported='no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <vmcoreinfo supported='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <genid supported='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <backingStoreInput supported='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <backup supported='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <async-teardown supported='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <s390-pv supported='no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <ps2 supported='yes'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <tdx supported='no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <sev supported='no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <sgx supported='no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <hyperv supported='yes'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <enum name='features'>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>relaxed</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>vapic</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>spinlocks</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>vpindex</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>runtime</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>synic</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>stimer</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>reset</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>vendor_id</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>frequencies</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>reenlightenment</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>tlbflush</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>ipi</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>avic</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>emsr_bitmap</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <value>xmm_input</value>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </enum>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      <defaults>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <spinlocks>4095</spinlocks>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <stimer_direct>on</stimer_direct>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <tlbflush_direct>on</tlbflush_direct>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <tlbflush_extended>on</tlbflush_extended>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:      </defaults>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    </hyperv>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:    <launchSecurity supported='no'/>
Feb 25 07:06:43 np0005629333 nova_compute[242312]:  </features>
Feb 25 07:06:43 np0005629333 nova_compute[242312]: </domainCapabilities>
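[Editor's note] Nova obtains the domainCapabilities document just dumped through the libvirt API. The same XML can be pulled directly with the libvirt Python binding; the connection URI is the usual system one and the None arguments let libvirt pick its defaults (a sketch, not nova's exact call site):

    import libvirt  # provided by the libvirt-python package

    conn = libvirt.open('qemu:///system')
    # arguments: emulator binary, arch, machine type, virt type, flags
    caps_xml = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0)
    conn.close()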
Feb 25 07:06:43 np0005629333 nova_compute[242312]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Feb 25 07:06:43 np0005629333 nova_compute[242312]: 2026-02-25 12:06:43.262 242318 DEBUG nova.virt.libvirt.host [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Feb 25 07:06:43 np0005629333 nova_compute[242312]: 2026-02-25 12:06:43.263 242318 DEBUG nova.virt.libvirt.host [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Feb 25 07:06:43 np0005629333 nova_compute[242312]: 2026-02-25 12:06:43.265 242318 INFO nova.virt.libvirt.host [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Secure Boot support detected#033[00m
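[Editor's note] The secure-boot probe inspects the same domainCapabilities document. The <os>/<loader> block is earlier in this log rather than in the excerpt above, so the sketch below assumes the standard libvirt layout, where <enum name='secure'> under <loader> lists 'yes' when secure-boot-capable firmware is available:

    import xml.etree.ElementTree as ET

    def supports_secure_boot(caps_xml):
        """True if domainCapabilities advertises secure='yes' firmware.
        Assumes the standard <os><loader><enum name='secure'> layout."""
        root = ET.fromstring(caps_xml)
        enum = root.find("./os/loader/enum[@name='secure']")
        return enum is not None and any(v.text == 'yes' for v in enum.findall('value'))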
Feb 25 07:06:43 np0005629333 nova_compute[242312]: 2026-02-25 12:06:43.276 242318 INFO nova.virt.libvirt.driver [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
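[Editor's note] The message records nova's precedence rule: when post-copy is both permitted (live_migration_permit_post_copy in nova.conf's [libvirt] section) and supported by the hypervisor, auto-converge is left off. A sketch of that decision; the function itself is illustrative, only the option semantics come from the log:

    def migration_slowdown_strategy(permit_post_copy: bool,
                                    post_copy_available: bool,
                                    permit_auto_converge: bool) -> str:
        # Post-copy wins when usable; auto-converge is the fallback; else plain pre-copy.
        if permit_post_copy and post_copy_available:
            return 'post-copy'
        if permit_auto_converge:
            return 'auto-converge'
        return 'none'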
Feb 25 07:06:43 np0005629333 nova_compute[242312]: 2026-02-25 12:06:43.287 242318 DEBUG nova.virt.libvirt.driver [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Feb 25 07:06:43 np0005629333 nova_compute[242312]: 2026-02-25 12:06:43.368 242318 INFO nova.virt.node [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Determined node identity cb4dae98-2ac3-4218-9445-2320139e12ad from /var/lib/nova/compute_id#033[00m
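[Editor's note] The node identity is a bare UUID persisted at /var/lib/nova/compute_id. A minimal sketch of reading and validating it (the helper name is illustrative; the path and UUID are taken from the line above):

    import uuid
    from pathlib import Path

    def read_compute_id(path='/var/lib/nova/compute_id'):
        raw = Path(path).read_text().strip()
        return uuid.UUID(raw)  # raises ValueError if the file is corrupt

    # read_compute_id() -> UUID('cb4dae98-2ac3-4218-9445-2320139e12ad') on this host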
Feb 25 07:06:43 np0005629333 nova_compute[242312]: 2026-02-25 12:06:43.384 242318 WARNING nova.compute.manager [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Compute nodes ['cb4dae98-2ac3-4218-9445-2320139e12ad'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Feb 25 07:06:43 np0005629333 nova_compute[242312]: 2026-02-25 12:06:43.414 242318 INFO nova.compute.manager [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Feb 25 07:06:43 np0005629333 nova_compute[242312]: 2026-02-25 12:06:43.458 242318 WARNING nova.compute.manager [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Feb 25 07:06:43 np0005629333 nova_compute[242312]: 2026-02-25 12:06:43.458 242318 DEBUG oslo_concurrency.lockutils [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:06:43 np0005629333 nova_compute[242312]: 2026-02-25 12:06:43.459 242318 DEBUG oslo_concurrency.lockutils [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:06:43 np0005629333 nova_compute[242312]: 2026-02-25 12:06:43.459 242318 DEBUG oslo_concurrency.lockutils [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:06:43 np0005629333 nova_compute[242312]: 2026-02-25 12:06:43.459 242318 DEBUG nova.compute.resource_tracker [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:06:43 np0005629333 nova_compute[242312]: 2026-02-25 12:06:43.460 242318 DEBUG oslo_concurrency.processutils [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
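[Editor's note] The resource audit shells out to the exact command logged above to size the RBD-backed disk pool. Reproducing it by hand with the same flags (the 'stats' keys follow ceph df's JSON output; treat the key names as an assumption if your ceph release differs):

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'])
    df = json.loads(out)
    # Cluster-wide totals live under 'stats'; per-pool usage under 'pools'.
    print(df['stats']['total_bytes'], df['stats']['total_avail_bytes'])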
Feb 25 07:06:43 np0005629333 python3.9[243552]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:06:43.846283) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021203846356, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1957, "num_deletes": 251, "total_data_size": 3368265, "memory_usage": 3409200, "flush_reason": "Manual Compaction"}
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021203857564, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1899947, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11717, "largest_seqno": 13673, "table_properties": {"data_size": 1893660, "index_size": 3234, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15761, "raw_average_key_size": 20, "raw_value_size": 1879705, "raw_average_value_size": 2403, "num_data_blocks": 150, "num_entries": 782, "num_filter_entries": 782, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020982, "oldest_key_time": 1772020982, "file_creation_time": 1772021203, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 11304 microseconds, and 2935 cpu microseconds.
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:06:43.857599) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1899947 bytes OK
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:06:43.857615) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:06:43.859717) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:06:43.859731) EVENT_LOG_v1 {"time_micros": 1772021203859727, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:06:43.859748) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3360051, prev total WAL file size 3360051, number of live WAL files 2.
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:06:43.860301) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1855KB)], [29(7680KB)]
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021203860381, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9764828, "oldest_snapshot_seqno": -1}
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4066 keys, 7807789 bytes, temperature: kUnknown
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021203938575, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7807789, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7778950, "index_size": 17593, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 96355, "raw_average_key_size": 23, "raw_value_size": 7704034, "raw_average_value_size": 1894, "num_data_blocks": 766, "num_entries": 4066, "num_filter_entries": 4066, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772021203, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:06:43.938916) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7807789 bytes
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:06:43.940977) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 124.7 rd, 99.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 7.5 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(9.2) write-amplify(4.1) OK, records in: 4475, records dropped: 409 output_compression: NoCompression
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:06:43.941007) EVENT_LOG_v1 {"time_micros": 1772021203940991, "job": 12, "event": "compaction_finished", "compaction_time_micros": 78317, "compaction_time_cpu_micros": 25828, "output_level": 6, "num_output_files": 1, "total_output_size": 7807789, "num_input_records": 4475, "num_output_records": 4066, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021203941356, "job": 12, "event": "table_file_deletion", "file_number": 31}
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021203942498, "job": 12, "event": "table_file_deletion", "file_number": 29}
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:06:43.860162) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:06:43.942586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:06:43.942594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:06:43.942596) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:06:43.942599) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:06:43.942601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
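[Editor's note] The rocksdb EVENT_LOG_v1 lines above carry a JSON payload after a fixed marker, so flush and compaction metrics can be scraped straight out of the journal. A sketch; the marker handling (including the "(Original Log Time ...)" prefix variant seen above) is the only assumption:

    import json

    MARKER = 'EVENT_LOG_v1 '

    def parse_rocksdb_events(lines):
        """Yield the JSON payload of every rocksdb EVENT_LOG_v1 line."""
        for line in lines:
            idx = line.find(MARKER)
            if idx != -1:
                yield json.loads(line[idx + len(MARKER):])

    # Events with event == 'compaction_finished' expose compaction_time_micros,
    # num_input_records, lsm_state, ... as seen in JOB 12 above.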
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:06:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1921492880' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
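[Editor's note] On the monitor side the same request arrives as a mon_command; the audit line shows the exact payload from client.openstack. The rados Python binding can issue it directly (connection details mirror the log; the binding calls are standard python3-rados, but treat this as a sketch):

    import json
    import rados  # python3-rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    cmd = json.dumps({'prefix': 'df', 'format': 'json'})
    ret, outbuf, errs = cluster.mon_command(cmd, b'')
    df = json.loads(outbuf)
    cluster.shutdown()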
Feb 25 07:06:44 np0005629333 nova_compute[242312]: 2026-02-25 12:06:43.999 242318 DEBUG oslo_concurrency.processutils [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:06:44 np0005629333 systemd[1]: Starting libvirt nodedev daemon...
Feb 25 07:06:44 np0005629333 systemd[1]: Started libvirt nodedev daemon.
Feb 25 07:06:44 np0005629333 nova_compute[242312]: 2026-02-25 12:06:44.305 242318 WARNING nova.virt.libvirt.driver [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:06:44 np0005629333 nova_compute[242312]: 2026-02-25 12:06:44.307 242318 DEBUG nova.compute.resource_tracker [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5012MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
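[Editor's note] The hypervisor resource view embeds the PCI device list as JSON, which makes the journal line itself machine-readable. A sketch that tallies devices by vendor (vendor 1af4 is virtio, 8086 is Intel; the helper name is illustrative):

    import json
    from collections import Counter

    def pci_vendor_summary(pci_devices_json):
        devs = json.loads(pci_devices_json)
        return Counter(d['vendor_id'] for d in devs)

    # On this host: Counter({'1af4': 6, '8086': 5}) -- six virtio devices,
    # five Intel chipset functions, matching the eleven entries above.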
Feb 25 07:06:44 np0005629333 nova_compute[242312]: 2026-02-25 12:06:44.308 242318 DEBUG oslo_concurrency.lockutils [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:06:44 np0005629333 nova_compute[242312]: 2026-02-25 12:06:44.308 242318 DEBUG oslo_concurrency.lockutils [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:06:44 np0005629333 nova_compute[242312]: 2026-02-25 12:06:44.337 242318 WARNING nova.compute.resource_tracker [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] No compute node record for compute-0.ctlplane.example.com:cb4dae98-2ac3-4218-9445-2320139e12ad: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host cb4dae98-2ac3-4218-9445-2320139e12ad could not be found.#033[00m
Feb 25 07:06:44 np0005629333 nova_compute[242312]: 2026-02-25 12:06:44.355 242318 INFO nova.compute.resource_tracker [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: cb4dae98-2ac3-4218-9445-2320139e12ad#033[00m
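[Annotation] The ComputeHostNotFound warning followed immediately by "Compute node record created" is the expected first-start path, not a failure: the tracker looks the node up by host and UUID and creates the record on a miss. A sketch of that create-on-miss pattern (names are illustrative, not Nova's internal API):

    class ComputeHostNotFound(Exception):
        """No compute node record exists yet (illustrative stand-in)."""

    def get_or_create_compute_node(db, host, node_uuid):
        # On first boot the lookup raises, so a fresh record is created with
        # the UUID the tracker generated (cb4dae98-... in this log).
        try:
            return db.get_compute_node(host, node_uuid)
        except ComputeHostNotFound:
            return db.create_compute_node(host=host, uuid=node_uuid)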
Feb 25 07:06:44 np0005629333 nova_compute[242312]: 2026-02-25 12:06:44.409 242318 DEBUG nova.compute.resource_tracker [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 07:06:44 np0005629333 nova_compute[242312]: 2026-02-25 12:06:44.409 242318 DEBUG nova.compute.resource_tracker [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 07:06:44 np0005629333 python3.9[243750]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Feb 25 07:06:44 np0005629333 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 07:06:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:06:45 np0005629333 nova_compute[242312]: 2026-02-25 12:06:45.228 242318 INFO nova.scheduler.client.report [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] [req-574aae66-9f9e-4b08-b327-f36776734c41] Created resource provider record via placement API for resource provider with UUID cb4dae98-2ac3-4218-9445-2320139e12ad and name compute-0.ctlplane.example.com.#033[00m
Feb 25 07:06:45 np0005629333 python3.9[243927]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 25 07:06:45 np0005629333 nova_compute[242312]: 2026-02-25 12:06:45.608 242318 DEBUG oslo_concurrency.processutils [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:06:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:06:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3569430836' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:06:46 np0005629333 nova_compute[242312]: 2026-02-25 12:06:46.146 242318 DEBUG oslo_concurrency.processutils [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:06:46 np0005629333 nova_compute[242312]: 2026-02-25 12:06:46.154 242318 DEBUG nova.virt.libvirt.host [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Feb 25 07:06:46 np0005629333 nova_compute[242312]: 2026-02-25 12:06:46.155 242318 INFO nova.virt.libvirt.host [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] kernel doesn't support AMD SEV#033[00m
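[Annotation] The SEV probe above reduces to reading one sysfs file; on this guest it holds "N", so the driver logs that the kernel lacks AMD SEV support. A sketch of the check, assuming the same sysfs path as logged:

    def kernel_supports_amd_sev(path="/sys/module/kvm_amd/parameters/sev"):
        # "1"/"Y" means the kvm_amd module exposes SEV; "N" (as logged here)
        # or a missing file means no SEV support on this host.
        try:
            with open(path) as f:
                return f.read().strip() in ("1", "Y")
        except FileNotFoundError:
            return False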
Feb 25 07:06:46 np0005629333 nova_compute[242312]: 2026-02-25 12:06:46.157 242318 DEBUG nova.compute.provider_tree [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 25 07:06:46 np0005629333 nova_compute[242312]: 2026-02-25 12:06:46.157 242318 DEBUG nova.virt.libvirt.driver [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:06:46 np0005629333 nova_compute[242312]: 2026-02-25 12:06:46.209 242318 DEBUG nova.scheduler.client.report [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Updated inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Feb 25 07:06:46 np0005629333 nova_compute[242312]: 2026-02-25 12:06:46.209 242318 DEBUG nova.compute.provider_tree [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Updating resource provider cb4dae98-2ac3-4218-9445-2320139e12ad generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Feb 25 07:06:46 np0005629333 nova_compute[242312]: 2026-02-25 12:06:46.209 242318 DEBUG nova.compute.provider_tree [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 25 07:06:46 np0005629333 nova_compute[242312]: 2026-02-25 12:06:46.283 242318 DEBUG nova.compute.provider_tree [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Updating resource provider cb4dae98-2ac3-4218-9445-2320139e12ad generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
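[Annotation] From the inventory pushed above, Placement derives schedulable capacity as int((total - reserved) * allocation_ratio), and each successful write bumps the provider generation (0 -> 1 for update_inventory, 1 -> 2 for update_traits). Worked out for the values in this log:

    # capacity = int((total - reserved) * allocation_ratio)
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = int((inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
        print(rc, cap)  # MEMORY_MB 7167, VCPU 32, DISK_GB 53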
Feb 25 07:06:46 np0005629333 nova_compute[242312]: 2026-02-25 12:06:46.308 242318 DEBUG nova.compute.resource_tracker [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 07:06:46 np0005629333 nova_compute[242312]: 2026-02-25 12:06:46.308 242318 DEBUG oslo_concurrency.lockutils [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
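[Annotation] The "compute_resources" acquire/release pair brackets the whole resource update (held 2.000s here: acquired at 12:06:44.308, released at 12:06:46.308). With oslo.concurrency that serialization is typically expressed as a decorator; a sketch of the equivalent guard:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        # Everything between the "acquired" and "released" log lines runs
        # under this lock: hypervisor inventory, placement sync, DB update.
        ...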
Feb 25 07:06:46 np0005629333 nova_compute[242312]: 2026-02-25 12:06:46.308 242318 DEBUG nova.service [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Feb 25 07:06:46 np0005629333 nova_compute[242312]: 2026-02-25 12:06:46.374 242318 DEBUG nova.service [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Feb 25 07:06:46 np0005629333 nova_compute[242312]: 2026-02-25 12:06:46.374 242318 DEBUG nova.servicegroup.drivers.db [None req-d0febcec-50e7-488f-9c8a-70a0fafdc219 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Feb 25 07:06:46 np0005629333 systemd[1]: Stopping nova_compute container...
Feb 25 07:06:46 np0005629333 nova_compute[242312]: 2026-02-25 12:06:46.640 242318 WARNING amqp [-] Received method (60, 30) during closing channel 1. This method will be ignored#033[00m
Feb 25 07:06:46 np0005629333 nova_compute[242312]: 2026-02-25 12:06:46.642 242318 DEBUG oslo_concurrency.lockutils [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:06:46 np0005629333 nova_compute[242312]: 2026-02-25 12:06:46.642 242318 DEBUG oslo_concurrency.lockutils [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:06:46 np0005629333 nova_compute[242312]: 2026-02-25 12:06:46.642 242318 DEBUG oslo_concurrency.lockutils [None req-7604b28d-4eac-4ad2-9ccd-2e976bacf76d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:06:46 np0005629333 systemd[1]: libpod-b7d9270bce5d1e1ac663a61021bd399bea861e9d353f641979ffdd1339629739.scope: Deactivated successfully.
Feb 25 07:06:46 np0005629333 virtqemud[243235]: libvirt version: 11.10.0, package: 4.el9 (builder@centos.org, 2026-01-29-15:25:17, )
Feb 25 07:06:46 np0005629333 virtqemud[243235]: hostname: compute-0
Feb 25 07:06:46 np0005629333 virtqemud[243235]: End of file while reading data: Input/output error
Feb 25 07:06:46 np0005629333 systemd[1]: libpod-b7d9270bce5d1e1ac663a61021bd399bea861e9d353f641979ffdd1339629739.scope: Consumed 4.006s CPU time.
Feb 25 07:06:46 np0005629333 podman[243953]: 2026-02-25 12:06:46.992597466 +0000 UTC m=+0.524419020 container died b7d9270bce5d1e1ac663a61021bd399bea861e9d353f641979ffdd1339629739 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=nova_compute, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-e4339334ce3c7417c8c75d93639f9f436ec81d31ef3778c7686609059d57b52c-46d21a4105e6ed83e061146dd14c53f3ee36f480aa458280c08f0b5f3d8159dd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Feb 25 07:06:47 np0005629333 systemd[1]: var-lib-containers-storage-overlay-51a4238a0f12133cf84aa8736b88b0118a53310265a65a3619ac255cecafea6c-merged.mount: Deactivated successfully.
Feb 25 07:06:47 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b7d9270bce5d1e1ac663a61021bd399bea861e9d353f641979ffdd1339629739-userdata-shm.mount: Deactivated successfully.
Feb 25 07:06:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:06:48 np0005629333 podman[243953]: 2026-02-25 12:06:48.713274588 +0000 UTC m=+2.245096162 container cleanup b7d9270bce5d1e1ac663a61021bd399bea861e9d353f641979ffdd1339629739 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-e4339334ce3c7417c8c75d93639f9f436ec81d31ef3778c7686609059d57b52c-46d21a4105e6ed83e061146dd14c53f3ee36f480aa458280c08f0b5f3d8159dd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=nova_compute, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:06:48 np0005629333 podman[243953]: nova_compute
Feb 25 07:06:48 np0005629333 podman[243985]: nova_compute
Feb 25 07:06:48 np0005629333 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Feb 25 07:06:48 np0005629333 systemd[1]: Stopped nova_compute container.
Feb 25 07:06:48 np0005629333 systemd[1]: Starting nova_compute container...
Feb 25 07:06:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:06:48 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:06:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a4238a0f12133cf84aa8736b88b0118a53310265a65a3619ac255cecafea6c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb 25 07:06:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a4238a0f12133cf84aa8736b88b0118a53310265a65a3619ac255cecafea6c/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb 25 07:06:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a4238a0f12133cf84aa8736b88b0118a53310265a65a3619ac255cecafea6c/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 25 07:06:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a4238a0f12133cf84aa8736b88b0118a53310265a65a3619ac255cecafea6c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 25 07:06:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51a4238a0f12133cf84aa8736b88b0118a53310265a65a3619ac255cecafea6c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb 25 07:06:48 np0005629333 podman[243998]: 2026-02-25 12:06:48.934264349 +0000 UTC m=+0.117406307 container init b7d9270bce5d1e1ac663a61021bd399bea861e9d353f641979ffdd1339629739 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-e4339334ce3c7417c8c75d93639f9f436ec81d31ef3778c7686609059d57b52c-46d21a4105e6ed83e061146dd14c53f3ee36f480aa458280c08f0b5f3d8159dd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=nova_compute, container_name=nova_compute, org.label-schema.license=GPLv2)
Feb 25 07:06:48 np0005629333 podman[243998]: 2026-02-25 12:06:48.947216846 +0000 UTC m=+0.130358764 container start b7d9270bce5d1e1ac663a61021bd399bea861e9d353f641979ffdd1339629739 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, container_name=nova_compute, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-e4339334ce3c7417c8c75d93639f9f436ec81d31ef3778c7686609059d57b52c-46d21a4105e6ed83e061146dd14c53f3ee36f480aa458280c08f0b5f3d8159dd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 07:06:48 np0005629333 podman[243998]: nova_compute
Feb 25 07:06:48 np0005629333 systemd[1]: Started nova_compute container.
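[Annotation] The sequence above (container died -> cleanup -> init -> start) is systemd restarting podman's nova_compute container via the edpm_nova_compute unit that the earlier ansible task kicked. A quick way to confirm the restarted container's state and config id afterwards (illustrative; relies on podman's JSON inspect output):

    import json
    import subprocess

    def container_status(name="nova_compute"):
        out = subprocess.run(
            ["podman", "inspect", name],
            check=True, capture_output=True, text=True,
        ).stdout
        info = json.loads(out)[0]
        # State.Status should be "running"; the config_id label matches the
        # config_id=nova_compute seen in the podman log lines above.
        return info["State"]["Status"], info["Config"]["Labels"].get("config_id")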
Feb 25 07:06:48 np0005629333 nova_compute[244014]: + sudo -E kolla_set_configs
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Validating config file
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Copying service configuration files
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Copying /var/lib/kolla/config_files/src/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Copying /var/lib/kolla/config_files/src/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Copying /var/lib/kolla/config_files/src/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Copying /var/lib/kolla/config_files/src/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Deleting /etc/ceph
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Creating directory /etc/ceph
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Setting permission for /etc/ceph
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.conf to /etc/ceph/ceph.conf
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-config to /var/lib/nova/.ssh/config
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Copying /var/lib/kolla/config_files/src/run-on-host to /usr/sbin/iscsiadm
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Writing out command to execute
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 25 07:06:49 np0005629333 nova_compute[244014]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
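[Annotation] The kolla_set_configs run above implements the COPY_ALWAYS strategy: each entry in /var/lib/kolla/config_files/config.json names a source, a destination, and permissions, and any existing destination is deleted before the copy. A simplified sketch of that loop (the real tool also handles ownership, globs, optional sources, and whole directory trees such as /etc/ceph):

    import json
    import os
    import shutil

    def copy_always(config="/var/lib/kolla/config_files/config.json"):
        with open(config) as f:
            cfg = json.load(f)
        for entry in cfg.get("config_files", []):
            if os.path.exists(entry["dest"]):
                os.remove(entry["dest"])                    # "Deleting ..." lines
            shutil.copy(entry["source"], entry["dest"])     # "Copying ..." lines
            os.chmod(entry["dest"], int(entry["perm"], 8))  # "Setting permission"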
Feb 25 07:06:49 np0005629333 nova_compute[244014]: ++ cat /run_command
Feb 25 07:06:49 np0005629333 nova_compute[244014]: + CMD=nova-compute
Feb 25 07:06:49 np0005629333 nova_compute[244014]: + ARGS=
Feb 25 07:06:49 np0005629333 nova_compute[244014]: + sudo kolla_copy_cacerts
Feb 25 07:06:49 np0005629333 nova_compute[244014]: + [[ ! -n '' ]]
Feb 25 07:06:49 np0005629333 nova_compute[244014]: + . kolla_extend_start
Feb 25 07:06:49 np0005629333 nova_compute[244014]: Running command: 'nova-compute'
Feb 25 07:06:49 np0005629333 nova_compute[244014]: + echo 'Running command: '\''nova-compute'\'''
Feb 25 07:06:49 np0005629333 nova_compute[244014]: + umask 0022
Feb 25 07:06:49 np0005629333 nova_compute[244014]: + exec nova-compute
Feb 25 07:06:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:06:49 np0005629333 python3.9[244178]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Feb 25 07:06:49 np0005629333 systemd[1]: Started libpod-conmon-26859d41ffaa0d2125a7c3af776ae34c49d58a26cc44e17324a3abe6d9b892b5.scope.
Feb 25 07:06:49 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:06:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a079393b47af3087364f029f830d94db2615f7092642a171c4a2f100c36b158d/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Feb 25 07:06:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a079393b47af3087364f029f830d94db2615f7092642a171c4a2f100c36b158d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 25 07:06:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a079393b47af3087364f029f830d94db2615f7092642a171c4a2f100c36b158d/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Feb 25 07:06:49 np0005629333 podman[244204]: 2026-02-25 12:06:49.966114555 +0000 UTC m=+0.135954613 container init 26859d41ffaa0d2125a7c3af776ae34c49d58a26cc44e17324a3abe6d9b892b5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '46d21a4105e6ed83e061146dd14c53f3ee36f480aa458280c08f0b5f3d8159dd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=nova_compute_init, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:06:49 np0005629333 podman[244204]: 2026-02-25 12:06:49.974176084 +0000 UTC m=+0.144016112 container start 26859d41ffaa0d2125a7c3af776ae34c49d58a26cc44e17324a3abe6d9b892b5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '46d21a4105e6ed83e061146dd14c53f3ee36f480aa458280c08f0b5f3d8159dd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=nova_compute_init, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 07:06:50 np0005629333 nova_compute_init[244225]: INFO:nova_statedir:Applying nova statedir ownership
Feb 25 07:06:50 np0005629333 nova_compute_init[244225]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Feb 25 07:06:50 np0005629333 nova_compute_init[244225]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Feb 25 07:06:50 np0005629333 nova_compute_init[244225]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Feb 25 07:06:50 np0005629333 nova_compute_init[244225]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Feb 25 07:06:50 np0005629333 nova_compute_init[244225]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Feb 25 07:06:50 np0005629333 nova_compute_init[244225]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Feb 25 07:06:50 np0005629333 nova_compute_init[244225]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Feb 25 07:06:50 np0005629333 nova_compute_init[244225]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Feb 25 07:06:50 np0005629333 nova_compute_init[244225]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Feb 25 07:06:50 np0005629333 nova_compute_init[244225]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Feb 25 07:06:50 np0005629333 nova_compute_init[244225]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Feb 25 07:06:50 np0005629333 nova_compute_init[244225]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Feb 25 07:06:50 np0005629333 nova_compute_init[244225]: INFO:nova_statedir:Nova statedir ownership complete
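[Annotation] The nova_compute_init container walks /var/lib/nova, rewriting ownership from the legacy 1000:1000 to the kolla nova uid/gid 42436:42436 and resetting the SELinux context, while honoring NOVA_STATEDIR_OWNERSHIP_SKIP (/var/lib/nova/compute_id here). A condensed sketch of that walk (the real nova_statedir_ownership.py also performs the SELinux relabel and handles symlinks specially):

    import os

    TARGET_UID = TARGET_GID = 42436        # from "Target ownership" above
    SKIP = {"/var/lib/nova/compute_id"}    # NOVA_STATEDIR_OWNERSHIP_SKIP

    def fix_statedir_ownership(root="/var/lib/nova"):
        for dirpath, dirnames, filenames in os.walk(root):
            paths = [dirpath] + [os.path.join(dirpath, f) for f in filenames]
            for path in paths:
                if path in SKIP:
                    continue
                st = os.lstat(path)
                if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                    # Matches the "Changing ownership ... from 1000:1000
                    # to 42436:42436" log lines; already-correct paths are
                    # logged as "already 42436:42436" and left alone.
                    os.lchown(path, TARGET_UID, TARGET_GID)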
Feb 25 07:06:50 np0005629333 systemd[1]: libpod-26859d41ffaa0d2125a7c3af776ae34c49d58a26cc44e17324a3abe6d9b892b5.scope: Deactivated successfully.
Feb 25 07:06:50 np0005629333 podman[244226]: 2026-02-25 12:06:50.075528375 +0000 UTC m=+0.027415818 container died 26859d41ffaa0d2125a7c3af776ae34c49d58a26cc44e17324a3abe6d9b892b5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=nova_compute_init, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '46d21a4105e6ed83e061146dd14c53f3ee36f480aa458280c08f0b5f3d8159dd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Feb 25 07:06:50 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-26859d41ffaa0d2125a7c3af776ae34c49d58a26cc44e17324a3abe6d9b892b5-userdata-shm.mount: Deactivated successfully.
Feb 25 07:06:50 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a079393b47af3087364f029f830d94db2615f7092642a171c4a2f100c36b158d-merged.mount: Deactivated successfully.
Feb 25 07:06:50 np0005629333 podman[244226]: 2026-02-25 12:06:50.110062534 +0000 UTC m=+0.061949897 container cleanup 26859d41ffaa0d2125a7c3af776ae34c49d58a26cc44e17324a3abe6d9b892b5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '46d21a4105e6ed83e061146dd14c53f3ee36f480aa458280c08f0b5f3d8159dd'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, container_name=nova_compute_init, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, config_id=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:06:50 np0005629333 systemd[1]: libpod-conmon-26859d41ffaa0d2125a7c3af776ae34c49d58a26cc44e17324a3abe6d9b892b5.scope: Deactivated successfully.
Feb 25 07:06:50 np0005629333 nova_compute[244014]: 2026-02-25 12:06:50.730 244018 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb 25 07:06:50 np0005629333 nova_compute[244014]: 2026-02-25 12:06:50.730 244018 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb 25 07:06:50 np0005629333 nova_compute[244014]: 2026-02-25 12:06:50.730 244018 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Feb 25 07:06:50 np0005629333 nova_compute[244014]: 2026-02-25 12:06:50.730 244018 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Feb 25 07:06:50 np0005629333 nova_compute[244014]: 2026-02-25 12:06:50.844 244018 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:06:50 np0005629333 nova_compute[244014]: 2026-02-25 12:06:50.866 244018 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:06:50 np0005629333 nova_compute[244014]: 2026-02-25 12:06:50.867 244018 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
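[Annotation] The failed grep above is a feature probe, not an error: /sbin/iscsiadm (which on EL9 resolves under /usr/sbin, where kolla_set_configs installed the run-on-host wrapper earlier) is searched for the literal string node.session.scan to decide whether manual iSCSI scan mode is available, and exit status 1 simply means the pattern is absent, hence "Not Retrying". The probe in miniature:

    import subprocess

    def supports_manual_scan(iscsiadm="/sbin/iscsiadm"):
        # grep -F exits 0 if the literal string occurs, 1 if it does not;
        # only the return code matters for this probe.
        r = subprocess.run(
            ["grep", "-F", "node.session.scan", iscsiadm],
            capture_output=True,
        )
        return r.returncode == 0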
Feb 25 07:06:50 np0005629333 python3.9[244178]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Feb 25 07:06:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.273 244018 INFO nova.virt.driver [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.393 244018 INFO nova.compute.provider_config [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.405 244018 DEBUG oslo_concurrency.lockutils [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.405 244018 DEBUG oslo_concurrency.lockutils [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.405 244018 DEBUG oslo_concurrency.lockutils [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.406 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.406 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.406 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.406 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.406 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.406 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.407 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.407 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.407 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.407 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.407 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.407 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.407 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.407 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.408 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.408 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.408 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.408 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.408 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.408 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.408 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.408 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.409 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.409 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.409 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.409 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.409 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.409 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.410 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.410 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.410 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.410 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.410 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.411 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.411 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.411 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.411 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.411 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.412 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.412 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.412 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.412 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.413 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.413 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.413 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.413 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.413 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.414 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.414 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.414 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.414 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.414 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.415 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.415 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.415 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.415 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.415 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.416 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.416 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.416 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.416 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.416 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.416 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.417 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.417 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.417 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.417 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.417 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.418 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.418 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.418 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.418 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.418 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.419 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.419 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.419 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.419 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.419 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.419 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.420 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.420 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.420 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.420 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.420 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.421 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.421 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.421 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.421 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.421 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.422 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.422 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.422 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.422 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.422 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.423 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.423 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.423 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.423 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.423 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.423 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.424 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.424 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.424 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.424 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.424 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.425 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.425 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.425 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.425 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.425 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.425 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.425 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.425 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.426 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.426 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.426 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.426 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.426 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.426 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.426 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.426 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.427 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.427 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.427 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.427 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.427 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.427 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.427 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.427 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.428 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.428 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.428 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.428 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.428 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.428 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.428 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.429 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.429 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.429 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.429 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.429 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.429 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.429 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.429 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.430 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.430 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.430 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.430 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.430 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.430 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.430 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.431 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.431 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.431 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.431 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.431 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.431 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.431 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.431 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.432 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.432 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.432 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.432 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.432 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.432 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.432 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.433 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.433 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.433 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.433 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.433 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.433 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.433 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.433 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.434 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.434 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.434 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.434 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.434 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.434 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.434 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.435 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.435 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.435 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.435 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.435 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.435 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.435 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.435 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.436 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.436 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.436 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.436 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.436 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.436 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.436 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.437 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.437 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.437 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.437 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.437 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.437 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.437 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.437 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.438 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.438 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.438 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.438 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.438 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.438 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.438 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.439 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.439 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.439 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.439 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.439 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.439 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.439 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.439 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.440 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.440 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.440 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.440 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.os_region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.440 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.440 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.440 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.440 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.441 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.441 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.441 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.441 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.441 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.441 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.441 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.442 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.442 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.442 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.442 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.442 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.442 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.442 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.442 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.443 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.443 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.443 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.443 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.443 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.443 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.443 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.443 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.444 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.444 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.444 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.444 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.444 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.444 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.444 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.445 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.445 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.445 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.445 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.445 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.445 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.445 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.445 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.446 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.446 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.446 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.446 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.446 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.446 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.446 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.446 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.447 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.447 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.447 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.447 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.447 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.447 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.448 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.448 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
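Note that database.connection and database.slave_connection above print as **** rather than real URLs: oslo.config masks any option registered with secret=True when dumping values, so credentials in connection strings never reach the journal. A hedged illustration follows; the URL is a made-up placeholder, not this deployment's connection string.

```python
# secret=True is what turns a value into "****" in log_opt_values() output.
import logging

from oslo_config import cfg

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

CONF = cfg.ConfigOpts()
CONF.register_opts(
    [cfg.StrOpt('connection', secret=True,
                default='mysql+pymysql://nova:placeholder@db.example/nova')],
    group='database',
)

CONF(args=[], project='nova', default_config_files=[])
CONF.log_opt_values(LOG, logging.DEBUG)  # -> database.connection = ****
```

The same masking explains the **** shown later for key_manager.fixed_key and api_database.connection.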
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.448 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.448 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.448 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.448 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.448 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.449 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.449 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.449 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.449 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.449 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.449 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.449 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.450 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.450 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.450 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.450 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.450 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.450 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.450 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.450 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.451 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.451 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.451 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.451 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.451 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.451 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.451 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.451 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.452 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.452 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.452 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.452 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.452 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.452 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.452 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.453 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.453 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.453 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.453 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.453 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.453 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.453 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.453 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.454 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.454 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.454 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.454 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.454 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.454 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.454 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.454 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.455 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.455 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.455 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.455 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.455 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.455 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.455 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.455 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.456 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.456 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.456 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.456 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.456 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.456 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.456 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.457 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.457 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.457 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.457 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.457 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.457 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.457 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.457 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.458 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.458 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.458 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.458 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.458 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.458 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.459 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.459 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.459 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.459 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.459 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.459 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.459 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.459 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.460 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.460 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.460 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.460 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.460 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.460 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.460 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.461 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.461 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.461 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.461 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.461 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.461 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.461 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.461 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.462 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.462 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.462 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.462 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.462 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.462 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.462 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.462 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.463 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.463 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.463 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.barbican_region_name  = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.463 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.463 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.463 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.463 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.464 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.464 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.464 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.464 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.464 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.464 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.464 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.464 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.465 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.465 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.465 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.465 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.465 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.465 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.465 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.466 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.466 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.466 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.466 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.466 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.466 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.466 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.466 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.467 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.467 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.467 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.467 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.467 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.467 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.467 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.467 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.468 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.468 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.468 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.468 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.468 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.468 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.468 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.469 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.469 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.469 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.469 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.469 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.469 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.469 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.469 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.470 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.470 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.470 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.470 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.470 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.470 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.470 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.471 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.471 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.471 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.471 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.471 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.471 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.471 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.472 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.472 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.472 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.472 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.472 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.472 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.472 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.472 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.473 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.473 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.473 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.473 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.473 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.473 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.473 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.474 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.474 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.474 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.474 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.474 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.474 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.474 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.474 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.475 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.475 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.475 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.475 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.475 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.475 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.475 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.476 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.476 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.476 244018 WARNING oslo_config.cfg [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Feb 25 07:06:51 np0005629333 nova_compute[244014]: live_migration_uri is deprecated for removal in favor of two other options that
Feb 25 07:06:51 np0005629333 nova_compute[244014]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Feb 25 07:06:51 np0005629333 nova_compute[244014]: and ``live_migration_inbound_addr`` respectively.
Feb 25 07:06:51 np0005629333 nova_compute[244014]: ).  Its value may be silently ignored in the future.#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.476 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
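[The warning above names the two options that supersede live_migration_uri. A minimal nova.conf sketch of the replacement, assuming the deployment keeps the qemu+tls transport visible in the live_migration_uri value just logged; the inbound address is a deployment-specific placeholder, not a value from this log:

    [libvirt]
    # Replaces live_migration_uri = qemu+tls://%s/system (deprecated above).
    # Assumption: scheme "tls" corresponds to the qemu+tls transport; the
    # address below is a placeholder for the host's migration-network IP.
    live_migration_scheme = tls
    live_migration_inbound_addr = <migration-network-address>

With both options set, nova composes the target URI itself and the deprecated option can be dropped.]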
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.476 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.476 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.477 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.477 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.477 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.477 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.477 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.477 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.477 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.478 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.478 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.478 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.478 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.478 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.478 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.478 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.478 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.479 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.rbd_secret_uuid        = 8ac33163-6221-5d58-9a39-8b6933fe7762 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.479 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.479 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.479 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.479 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.479 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.479 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.480 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.480 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.480 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.480 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.480 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.480 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.481 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.481 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.481 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.481 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.481 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.481 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.481 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.481 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.482 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.482 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.482 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.482 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.482 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.482 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.482 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.483 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.483 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.483 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.483 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.483 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.483 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.483 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.483 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.484 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.484 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.484 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.484 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.484 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.484 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.484 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.485 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.485 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.485 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.485 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.485 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.485 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.485 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.486 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.486 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.486 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.486 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.486 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.486 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.486 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.486 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.487 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.487 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.487 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.487 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.487 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.487 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.487 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.487 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.488 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.488 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.488 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.488 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.488 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.488 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.489 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.489 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.489 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.489 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.489 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.489 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.489 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.489 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.490 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.490 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.490 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.490 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.490 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.490 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.490 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.490 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.491 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.491 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.491 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.491 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.491 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.491 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.491 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.492 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.492 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.492 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.492 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.492 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.492 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.492 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.493 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.493 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.493 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.493 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.493 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.493 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.493 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.494 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.494 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.494 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.494 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.494 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.494 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.494 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.495 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.495 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.495 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.495 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.495 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.495 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.496 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.496 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.496 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.496 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.496 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.496 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.496 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.497 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.497 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.497 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.497 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.497 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.497 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.497 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.498 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.498 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.498 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.498 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.498 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.498 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.498 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.499 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.499 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.499 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.499 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.499 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.499 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.499 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.499 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.500 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.500 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.500 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.500 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.500 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.500 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.500 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.500 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.501 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.501 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.501 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.501 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.501 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.501 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.502 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.502 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.502 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.502 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.502 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.502 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.502 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.502 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.503 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.503 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.503 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.503 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.503 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.503 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.503 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.504 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.504 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.504 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.504 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.504 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.504 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.504 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.505 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.505 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.505 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.505 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.505 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.505 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.505 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.505 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.506 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.506 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.506 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.506 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.506 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.506 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.506 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.506 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.507 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.507 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.507 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.507 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.507 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.507 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.507 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.507 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.508 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.508 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.508 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.508 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.508 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.508 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.508 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.509 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.509 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.509 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.509 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.509 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.509 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.509 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.509 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.510 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.510 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.510 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.510 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.510 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.510 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.511 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.511 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.511 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.511 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.511 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.511 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.511 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.511 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.512 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.512 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.512 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.512 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.512 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.512 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.512 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.513 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.513 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.513 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.513 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.513 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.513 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.513 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.513 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.514 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.514 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.514 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.514 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.514 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.514 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.514 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.515 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.515 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.515 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.515 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.515 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.515 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.515 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.515 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.516 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.516 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.516 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.516 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.516 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.516 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.516 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.517 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.517 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.517 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.517 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.517 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.517 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.517 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.518 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.518 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.518 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.518 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.518 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.518 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.518 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.518 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.519 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.519 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.519 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.519 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.519 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.519 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.519 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.520 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.520 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.520 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.520 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.520 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.520 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.520 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.521 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.521 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.521 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.521 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.521 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.521 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.521 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.521 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.522 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.522 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.522 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.522 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.522 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.522 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.523 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.523 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.523 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
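Note: the oslo_messaging_rabbit.* and oslo_messaging_notifications.* records above are the effective RPC/notification settings as oslo.config logged them at service start; options registered as secret (e.g. transport_url) are masked as ****. A minimal, self-contained sketch of that mechanism — the option subset, defaults, and the 'demo' project name are illustrative, not Nova's actual startup code:

    # Sketch: oslo.config registers typed options per group; log_opt_values()
    # emits DEBUG lines like those above and masks secret=True options as ****.
    import logging
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts(
        [
            cfg.IntOpt('heartbeat_timeout_threshold', default=60),
            cfg.BoolOpt('rabbit_quorum_queue', default=True),   # value as logged above
            cfg.StrOpt('transport_url', secret=True, default='rabbit://...'),
        ],
        group='oslo_messaging_rabbit',
    )
    CONF([], project='demo')  # empty argv: defaults plus any config files
    logging.basicConfig(level=logging.DEBUG)
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)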
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.523 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.523 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.523 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.523 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.523 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.524 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.524 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.524 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.524 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.524 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.524 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.524 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.525 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.525 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.525 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.525 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.525 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.525 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.525 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.525 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.526 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.526 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.526 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.526 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.526 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.526 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.526 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.526 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.527 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.527 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.527 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.527 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.527 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.527 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.527 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.527 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.528 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.528 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
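Note: the [oslo_limit] block above holds the Keystone credentials Nova uses for unified limits — password auth against the internal Keystone endpoint, user 'nova' in the Default domain, system scope 'all'. A hedged sketch of an equivalent session built directly with keystoneauth1; the placeholder password is an assumption (the real value is logged only as ****):

    # Illustrative only: a system-scoped Keystone session with the same
    # parameters the [oslo_limit] section above resolves to.
    from keystoneauth1.identity import v3
    from keystoneauth1.session import Session

    auth = v3.Password(
        auth_url='https://keystone-internal.openstack.svc:5000',
        username='nova',
        user_domain_name='Default',
        password='REPLACE_ME',   # masked as **** in the log
        system_scope='all',      # oslo_limit.system_scope = all
    )
    sess = Session(auth=auth)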
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.528 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.528 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.528 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.528 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.528 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.529 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.529 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.529 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.529 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.529 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.529 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.529 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.529 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.530 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.530 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.530 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.530 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.530 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.530 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.530 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.531 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.531 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.531 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.531 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.531 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.531 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.531 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.531 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.532 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
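Note: os_vif_ovs.ovsdb_connection above points the OVS VIF plugin at the local OVSDB server over TCP, using the native (Python) OVSDB interface with a 120 s vsctl timeout. A trivial reachability probe for that endpoint — purely a liveness check, not how os-vif actually speaks the OVSDB protocol:

    # Probe the endpoint from os_vif_ovs.ovsdb_connection (tcp:127.0.0.1:6640).
    import socket
    with socket.create_connection(('127.0.0.1', 6640), timeout=2) as s:
        print('OVSDB manager reachable at', s.getpeername())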
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.532 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.532 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.532 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.532 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.532 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.532 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.533 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.533 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.533 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.533 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.533 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.533 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.533 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.533 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.534 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.534 244018 DEBUG oslo_service.service [None req-d09539d6-95b0-4c2d-bf91-abe294f06ffc - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
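Note: the numeric privsep "capabilities" lists in the dump above are Linux capability IDs from linux/capability.h. Mapping the values that appear here:

    # Linux capability numbers used by the privsep contexts logged above.
    CAP_NAMES = {
        0: 'CAP_CHOWN',
        1: 'CAP_DAC_OVERRIDE',
        2: 'CAP_DAC_READ_SEARCH',
        3: 'CAP_FOWNER',
        12: 'CAP_NET_ADMIN',
        21: 'CAP_SYS_ADMIN',
    }
    contexts = {
        'vif_plug_linux_bridge_privileged': [12],
        'vif_plug_ovs_privileged': [12, 1],
        'privsep_osbrick': [21],
        'nova_sys_admin': [0, 1, 2, 3, 12, 21],
    }
    for name, caps in contexts.items():
        print(name, '->', ', '.join(CAP_NAMES[c] for c in caps))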
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.534 244018 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260220085704.5cfeecb.el9)#033[00m
Feb 25 07:06:51 np0005629333 systemd[1]: session-50.scope: Deactivated successfully.
Feb 25 07:06:51 np0005629333 systemd[1]: session-50.scope: Consumed 1min 58.675s CPU time.
Feb 25 07:06:51 np0005629333 systemd-logind[811]: Session 50 logged out. Waiting for processes to exit.
Feb 25 07:06:51 np0005629333 systemd-logind[811]: Removed session 50.
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.581 244018 INFO nova.virt.node [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Determined node identity cb4dae98-2ac3-4218-9445-2320139e12ad from /var/lib/nova/compute_id#033[00m
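Note: the line above records Nova's stable compute-node UUID, read back from /var/lib/nova/compute_id so the node keeps one identity across restarts. A sketch of that read-or-create pattern — illustrative, not Nova's exact implementation:

    # Stable node identity: reuse the UUID persisted in compute_id if
    # present, otherwise mint one and write it for next time.
    import pathlib
    import uuid

    def node_uuid(path=pathlib.Path('/var/lib/nova/compute_id')) -> str:
        if path.exists():
            return path.read_text().strip()
        ident = str(uuid.uuid4())
        path.write_text(ident + '\n')
        return ident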
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.582 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.582 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.582 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.583 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.593 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f675e3bb6d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.595 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f675e3bb6d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.595 244018 INFO nova.virt.libvirt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Connection event '1' reason 'None'#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.603 244018 INFO nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Libvirt host capabilities <capabilities>
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <host>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <uuid>aaf3b885-2d7c-41fd-a043-b293ce60ba6a</uuid>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <cpu>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <arch>x86_64</arch>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model>EPYC-Rome-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <vendor>AMD</vendor>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <microcode version='16777317'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <signature family='23' model='49' stepping='0'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <maxphysaddr mode='emulate' bits='40'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature name='x2apic'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature name='tsc-deadline'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature name='osxsave'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature name='hypervisor'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature name='tsc_adjust'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature name='spec-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature name='stibp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature name='arch-capabilities'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature name='ssbd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature name='cmp_legacy'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature name='topoext'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature name='virt-ssbd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature name='lbrv'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature name='tsc-scale'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature name='vmcb-clean'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature name='pause-filter'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature name='pfthreshold'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature name='svme-addr-chk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature name='rdctl-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature name='skip-l1dfl-vmentry'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature name='mds-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature name='pschange-mc-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <pages unit='KiB' size='4'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <pages unit='KiB' size='2048'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <pages unit='KiB' size='1048576'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </cpu>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <power_management>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <suspend_mem/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </power_management>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <iommu support='no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <migration_features>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <live/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <uri_transports>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <uri_transport>tcp</uri_transport>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <uri_transport>rdma</uri_transport>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </uri_transports>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </migration_features>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <topology>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <cells num='1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <cell id='0'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:          <memory unit='KiB'>7864276</memory>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:          <pages unit='KiB' size='4'>1966069</pages>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:          <pages unit='KiB' size='2048'>0</pages>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:          <pages unit='KiB' size='1048576'>0</pages>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:          <distances>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:            <sibling id='0' value='10'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:          </distances>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:          <cpus num='8'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:          </cpus>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        </cell>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </cells>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </topology>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <cache>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </cache>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <secmodel>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model>selinux</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <doi>0</doi>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </secmodel>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <secmodel>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model>dac</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <doi>0</doi>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <baselabel type='kvm'>+107:+107</baselabel>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <baselabel type='qemu'>+107:+107</baselabel>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </secmodel>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  </host>
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <guest>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <os_type>hvm</os_type>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <arch name='i686'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <wordsize>32</wordsize>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <domain type='qemu'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <domain type='kvm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </arch>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <features>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <pae/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <nonpae/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <acpi default='on' toggle='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <apic default='on' toggle='no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <cpuselection/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <deviceboot/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <disksnapshot default='on' toggle='no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <externalSnapshot/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </features>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  </guest>
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <guest>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <os_type>hvm</os_type>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <arch name='x86_64'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <wordsize>64</wordsize>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <domain type='qemu'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <domain type='kvm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </arch>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <features>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <acpi default='on' toggle='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <apic default='on' toggle='no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <cpuselection/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <deviceboot/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <disksnapshot default='on' toggle='no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <externalSnapshot/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </features>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  </guest>
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 
Feb 25 07:06:51 np0005629333 nova_compute[244014]: </capabilities>
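Note: the <capabilities> document above is what the driver receives from libvirt's getCapabilities() on the qemu:///system connection opened earlier: an 8-vCPU EPYC-Rome-v4 topology in a single NUMA cell, 4 KiB/2 MiB/1 GiB page sizes, and SELinux plus DAC security models. A short sketch of pulling the same fields with the libvirt Python binding (assumes python3-libvirt and local libvirt access):

    # Fetch and inspect the host capabilities XML logged above.
    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open('qemu:///system')
    caps = ET.fromstring(conn.getCapabilities())
    print(caps.findtext('./host/cpu/model'))          # EPYC-Rome-v4 on this host
    for cell in caps.findall('./host/topology/cells/cell'):
        mem_kib = cell.findtext('memory')
        ncpus = cell.find('cpus').get('num')
        print(f"NUMA cell {cell.get('id')}: {mem_kib} KiB, {ncpus} CPUs")
    conn.close()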
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.606 244018 DEBUG nova.virt.libvirt.volume.mount [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.610 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.617 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Feb 25 07:06:51 np0005629333 nova_compute[244014]: <domainCapabilities>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <path>/usr/libexec/qemu-kvm</path>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <domain>kvm</domain>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <machine>pc-q35-rhel9.8.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <arch>i686</arch>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <vcpu max='4096'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <iothreads supported='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <os supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <enum name='firmware'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <loader supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='type'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>rom</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>pflash</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='readonly'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>yes</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>no</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='secure'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>no</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </loader>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <cpu>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <mode name='host-passthrough' supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='hostPassthroughMigratable'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>on</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>off</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </mode>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <mode name='maximum' supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='maximumMigratable'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>on</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>off</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </mode>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <mode name='host-model' supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model fallback='forbid'>EPYC-Rome</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <vendor>AMD</vendor>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='x2apic'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='tsc-deadline'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='hypervisor'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='tsc_adjust'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='spec-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='stibp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='ssbd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='cmp_legacy'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='overflow-recov'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='succor'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='ibrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='amd-ssbd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='virt-ssbd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='lbrv'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='tsc-scale'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='vmcb-clean'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='flushbyasid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='pause-filter'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='pfthreshold'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='svme-addr-chk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='lfence-always-serializing'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='disable' name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </mode>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <mode name='custom' supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell-noTSX'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cascadelake-Server'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cascadelake-Server-noTSX'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cascadelake-Server-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cascadelake-Server-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cascadelake-Server-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cascadelake-Server-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cascadelake-Server-v5'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='ClearwaterForest'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bhi-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cmpccxadd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ddpd-u'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='intel-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='lam'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchiti'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sha512'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sm3'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sm4'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='ClearwaterForest-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bhi-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cmpccxadd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ddpd-u'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='intel-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='lam'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchiti'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sha512'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sm3'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sm4'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cooperlake'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cooperlake-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cooperlake-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Denverton'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mpx'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Denverton-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mpx'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Denverton-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Denverton-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Dhyana-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Genoa'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amd-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='auto-ibrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='stibp-always-on'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Genoa-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amd-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='auto-ibrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='stibp-always-on'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Genoa-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amd-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='auto-ibrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fs-gs-base-ns'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='perfmon-v2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='stibp-always-on'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Milan'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Milan-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Milan-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amd-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='stibp-always-on'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Milan-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amd-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='stibp-always-on'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Rome'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Rome-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Rome-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Rome-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Turin'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amd-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='auto-ibrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vp2intersect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fs-gs-base-ns'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibpb-brtype'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='perfmon-v2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbpb'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='srso-user-kernel-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='stibp-always-on'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Turin-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amd-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='auto-ibrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vp2intersect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fs-gs-base-ns'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibpb-brtype'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='perfmon-v2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbpb'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='srso-user-kernel-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='stibp-always-on'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-v5'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='GraniteRapids'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchiti'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='GraniteRapids-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchiti'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='GraniteRapids-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10-128'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10-256'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10-512'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchiti'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='GraniteRapids-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10-128'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10-256'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10-512'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchiti'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell-noTSX'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell-noTSX-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-noTSX'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-v5'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-v6'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-v7'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='IvyBridge'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='IvyBridge-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='IvyBridge-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='IvyBridge-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='KnightsMill'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-4fmaps'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-4vnniw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512er'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512pf'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='KnightsMill-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-4fmaps'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-4vnniw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512er'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512pf'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Opteron_G4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fma4'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xop'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Opteron_G4-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fma4'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xop'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Opteron_G5'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fma4'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tbm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xop'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Opteron_G5-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fma4'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tbm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xop'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SapphireRapids'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SapphireRapids-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SapphireRapids-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SapphireRapids-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SapphireRapids-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SierraForest'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cmpccxadd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SierraForest-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cmpccxadd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SierraForest-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cmpccxadd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='intel-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='lam'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SierraForest-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cmpccxadd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='intel-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='lam'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Client'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Client-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Client-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Client-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Client-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Client-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server-v5'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Snowridge'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='core-capability'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mpx'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='split-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Snowridge-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='core-capability'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mpx'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='split-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Snowridge-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='core-capability'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='split-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Snowridge-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='core-capability'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='split-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Snowridge-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='athlon'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnow'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnowext'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='athlon-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnow'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnowext'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='core2duo'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='core2duo-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='coreduo'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='coreduo-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='n270'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='n270-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='phenom'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnow'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnowext'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='phenom-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnow'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnowext'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </mode>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <memoryBacking supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <enum name='sourceType'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <value>file</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <value>anonymous</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <value>memfd</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  </memoryBacking>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <disk supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='diskDevice'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>disk</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>cdrom</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>floppy</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>lun</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='bus'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>fdc</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>scsi</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>usb</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>sata</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='model'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio-transitional</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio-non-transitional</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <graphics supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='type'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>vnc</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>egl-headless</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>dbus</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </graphics>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <video supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='modelType'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>vga</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>cirrus</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>none</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>bochs</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>ramfb</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <hostdev supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='mode'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>subsystem</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='startupPolicy'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>default</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>mandatory</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>requisite</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>optional</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='subsysType'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>usb</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>pci</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>scsi</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='capsType'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='pciBackend'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </hostdev>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <rng supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='model'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio-transitional</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio-non-transitional</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='backendModel'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>random</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>egd</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>builtin</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <filesystem supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='driverType'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>path</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>handle</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtiofs</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </filesystem>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <tpm supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='model'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>tpm-tis</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>tpm-crb</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='backendModel'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>emulator</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>external</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='backendVersion'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>2.0</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </tpm>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <redirdev supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='bus'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>usb</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </redirdev>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <channel supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='type'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>pty</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>unix</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </channel>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <crypto supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='model'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='type'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>qemu</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='backendModel'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>builtin</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </crypto>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <interface supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='backendType'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>default</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>passt</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <panic supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='model'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>isa</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>hyperv</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </panic>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <console supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='type'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>null</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>vc</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>pty</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>dev</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>file</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>pipe</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>stdio</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>udp</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>tcp</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>unix</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>qemu-vdagent</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>dbus</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </console>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <gic supported='no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <vmcoreinfo supported='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <genid supported='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <backingStoreInput supported='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <backup supported='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <async-teardown supported='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <s390-pv supported='no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <ps2 supported='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <tdx supported='no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <sev supported='no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <sgx supported='no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <hyperv supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='features'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>relaxed</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>vapic</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>spinlocks</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>vpindex</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>runtime</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>synic</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>stimer</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>reset</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>vendor_id</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>frequencies</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>reenlightenment</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>tlbflush</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>ipi</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>avic</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>emsr_bitmap</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>xmm_input</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <defaults>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <spinlocks>4095</spinlocks>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <stimer_direct>on</stimer_direct>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <tlbflush_direct>on</tlbflush_direct>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <tlbflush_extended>on</tlbflush_extended>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </defaults>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </hyperv>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <launchSecurity supported='no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:06:51 np0005629333 nova_compute[244014]: </domainCapabilities>
Feb 25 07:06:51 np0005629333 nova_compute[244014]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.625 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Feb 25 07:06:51 np0005629333 nova_compute[244014]: <domainCapabilities>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <path>/usr/libexec/qemu-kvm</path>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <domain>kvm</domain>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <machine>pc-i440fx-rhel7.6.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <arch>i686</arch>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <vcpu max='240'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <iothreads supported='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <os supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <enum name='firmware'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <loader supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='type'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>rom</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>pflash</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='readonly'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>yes</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>no</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='secure'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>no</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </loader>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <cpu>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <mode name='host-passthrough' supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='hostPassthroughMigratable'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>on</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>off</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </mode>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <mode name='maximum' supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='maximumMigratable'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>on</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>off</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </mode>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <mode name='host-model' supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model fallback='forbid'>EPYC-Rome</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <vendor>AMD</vendor>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='x2apic'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='tsc-deadline'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='hypervisor'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='tsc_adjust'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='spec-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='stibp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='ssbd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='cmp_legacy'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='overflow-recov'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='succor'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='ibrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='amd-ssbd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='virt-ssbd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='lbrv'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='tsc-scale'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='vmcb-clean'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='flushbyasid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='pause-filter'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='pfthreshold'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='svme-addr-chk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='lfence-always-serializing'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='disable' name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </mode>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <mode name='custom' supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell-noTSX'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cascadelake-Server'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cascadelake-Server-noTSX'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cascadelake-Server-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cascadelake-Server-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cascadelake-Server-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cascadelake-Server-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cascadelake-Server-v5'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='ClearwaterForest'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bhi-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cmpccxadd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ddpd-u'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='intel-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='lam'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchiti'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sha512'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sm3'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sm4'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='ClearwaterForest-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bhi-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cmpccxadd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ddpd-u'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='intel-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='lam'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchiti'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sha512'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sm3'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sm4'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cooperlake'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cooperlake-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cooperlake-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Denverton'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mpx'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Denverton-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mpx'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Denverton-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Denverton-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Dhyana-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Genoa'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amd-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='auto-ibrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='stibp-always-on'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Genoa-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amd-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='auto-ibrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='stibp-always-on'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Genoa-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amd-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='auto-ibrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fs-gs-base-ns'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='perfmon-v2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='stibp-always-on'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Milan'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Milan-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Milan-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amd-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='stibp-always-on'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Milan-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amd-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='stibp-always-on'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Rome'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Rome-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Rome-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Rome-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Turin'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amd-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='auto-ibrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vp2intersect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fs-gs-base-ns'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibpb-brtype'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='perfmon-v2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbpb'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='srso-user-kernel-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='stibp-always-on'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Turin-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amd-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='auto-ibrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vp2intersect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fs-gs-base-ns'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibpb-brtype'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='perfmon-v2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbpb'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='srso-user-kernel-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='stibp-always-on'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-v5'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='GraniteRapids'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchiti'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='GraniteRapids-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchiti'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='GraniteRapids-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10-128'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10-256'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10-512'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchiti'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='GraniteRapids-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10-128'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10-256'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10-512'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchiti'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell-noTSX'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell-noTSX-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-noTSX'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-v5'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-v6'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-v7'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='IvyBridge'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='IvyBridge-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='IvyBridge-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='IvyBridge-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='KnightsMill'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-4fmaps'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-4vnniw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512er'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512pf'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='KnightsMill-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-4fmaps'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-4vnniw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512er'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512pf'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Opteron_G4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fma4'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xop'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Opteron_G4-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fma4'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xop'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Opteron_G5'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fma4'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tbm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xop'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Opteron_G5-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fma4'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tbm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xop'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SapphireRapids'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SapphireRapids-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SapphireRapids-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SapphireRapids-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SapphireRapids-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SierraForest'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cmpccxadd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SierraForest-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cmpccxadd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SierraForest-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cmpccxadd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='intel-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='lam'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SierraForest-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cmpccxadd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='intel-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='lam'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Client'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Client-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Client-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Client-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Client-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Client-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server-v5'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Snowridge'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='core-capability'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mpx'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='split-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Snowridge-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='core-capability'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mpx'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='split-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Snowridge-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='core-capability'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='split-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Snowridge-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='core-capability'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='split-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Snowridge-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='athlon'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnow'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnowext'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='athlon-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnow'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnowext'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='core2duo'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='core2duo-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='coreduo'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='coreduo-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='n270'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='n270-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='phenom'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnow'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnowext'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='phenom-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnow'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnowext'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </mode>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <memoryBacking supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <enum name='sourceType'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <value>file</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <value>anonymous</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <value>memfd</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  </memoryBacking>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <disk supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='diskDevice'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>disk</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>cdrom</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>floppy</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>lun</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='bus'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>ide</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>fdc</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>scsi</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>usb</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>sata</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='model'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio-transitional</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio-non-transitional</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <graphics supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='type'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>vnc</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>egl-headless</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>dbus</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </graphics>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <video supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='modelType'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>vga</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>cirrus</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>none</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>bochs</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>ramfb</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <hostdev supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='mode'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>subsystem</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='startupPolicy'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>default</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>mandatory</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>requisite</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>optional</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='subsysType'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>usb</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>pci</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>scsi</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='capsType'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='pciBackend'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </hostdev>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <rng supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='model'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio-transitional</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio-non-transitional</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='backendModel'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>random</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>egd</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>builtin</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <filesystem supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='driverType'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>path</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>handle</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtiofs</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </filesystem>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <tpm supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='model'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>tpm-tis</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>tpm-crb</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='backendModel'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>emulator</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>external</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='backendVersion'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>2.0</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </tpm>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <redirdev supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='bus'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>usb</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </redirdev>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <channel supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='type'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>pty</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>unix</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </channel>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <crypto supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='model'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='type'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>qemu</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='backendModel'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>builtin</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </crypto>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <interface supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='backendType'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>default</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>passt</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <panic supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='model'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>isa</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>hyperv</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </panic>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <console supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='type'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>null</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>vc</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>pty</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>dev</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>file</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>pipe</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>stdio</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>udp</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>tcp</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>unix</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>qemu-vdagent</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>dbus</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </console>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <gic supported='no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <vmcoreinfo supported='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <genid supported='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <backingStoreInput supported='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <backup supported='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <async-teardown supported='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <s390-pv supported='no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <ps2 supported='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <tdx supported='no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <sev supported='no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <sgx supported='no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <hyperv supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='features'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>relaxed</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>vapic</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>spinlocks</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>vpindex</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>runtime</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>synic</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>stimer</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>reset</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>vendor_id</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>frequencies</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>reenlightenment</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>tlbflush</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>ipi</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>avic</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>emsr_bitmap</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>xmm_input</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <defaults>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <spinlocks>4095</spinlocks>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <stimer_direct>on</stimer_direct>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <tlbflush_direct>on</tlbflush_direct>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <tlbflush_extended>on</tlbflush_extended>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </defaults>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </hyperv>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <launchSecurity supported='no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:06:51 np0005629333 nova_compute[244014]: </domainCapabilities>
Feb 25 07:06:51 np0005629333 nova_compute[244014]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.663 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.668 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Feb 25 07:06:51 np0005629333 nova_compute[244014]: <domainCapabilities>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <path>/usr/libexec/qemu-kvm</path>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <domain>kvm</domain>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <machine>pc-q35-rhel9.8.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <arch>x86_64</arch>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <vcpu max='4096'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <iothreads supported='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <os supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <enum name='firmware'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <value>efi</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <loader supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='type'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>rom</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>pflash</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='readonly'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>yes</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>no</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='secure'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>yes</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>no</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </loader>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <cpu>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <mode name='host-passthrough' supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='hostPassthroughMigratable'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>on</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>off</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </mode>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <mode name='maximum' supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='maximumMigratable'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>on</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>off</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </mode>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <mode name='host-model' supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model fallback='forbid'>EPYC-Rome</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <vendor>AMD</vendor>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='x2apic'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='tsc-deadline'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='hypervisor'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='tsc_adjust'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='spec-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='stibp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='ssbd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='cmp_legacy'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='overflow-recov'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='succor'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='ibrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='amd-ssbd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='virt-ssbd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='lbrv'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='tsc-scale'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='vmcb-clean'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='flushbyasid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='pause-filter'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='pfthreshold'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='svme-addr-chk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='lfence-always-serializing'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='disable' name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </mode>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <mode name='custom' supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell-noTSX'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cascadelake-Server'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cascadelake-Server-noTSX'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cascadelake-Server-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cascadelake-Server-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cascadelake-Server-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cascadelake-Server-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cascadelake-Server-v5'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='ClearwaterForest'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bhi-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cmpccxadd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ddpd-u'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='intel-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='lam'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchiti'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sha512'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sm3'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sm4'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='ClearwaterForest-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bhi-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cmpccxadd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ddpd-u'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='intel-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='lam'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchiti'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sha512'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sm3'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sm4'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cooperlake'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cooperlake-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cooperlake-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Denverton'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mpx'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Denverton-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mpx'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Denverton-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Denverton-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Dhyana-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Genoa'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amd-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='auto-ibrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='stibp-always-on'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Genoa-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amd-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='auto-ibrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='stibp-always-on'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Genoa-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amd-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='auto-ibrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fs-gs-base-ns'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='perfmon-v2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='stibp-always-on'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Milan'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Milan-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Milan-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amd-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='stibp-always-on'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Milan-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amd-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='stibp-always-on'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Rome'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Rome-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Rome-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Rome-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Turin'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amd-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='auto-ibrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vp2intersect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fs-gs-base-ns'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibpb-brtype'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='perfmon-v2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbpb'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='srso-user-kernel-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='stibp-always-on'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Turin-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amd-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='auto-ibrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vp2intersect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fs-gs-base-ns'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibpb-brtype'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='perfmon-v2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbpb'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='srso-user-kernel-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='stibp-always-on'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-v5'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='GraniteRapids'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchiti'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='GraniteRapids-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchiti'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='GraniteRapids-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10-128'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10-256'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10-512'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchiti'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='GraniteRapids-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10-128'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10-256'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10-512'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchiti'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell-noTSX'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell-noTSX-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-noTSX'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-v5'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-v6'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-v7'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='IvyBridge'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='IvyBridge-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='IvyBridge-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='IvyBridge-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='KnightsMill'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-4fmaps'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-4vnniw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512er'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512pf'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='KnightsMill-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-4fmaps'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-4vnniw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512er'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512pf'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Opteron_G4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fma4'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xop'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Opteron_G4-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fma4'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xop'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Opteron_G5'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fma4'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tbm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xop'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Opteron_G5-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fma4'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tbm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xop'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SapphireRapids'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SapphireRapids-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SapphireRapids-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SapphireRapids-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SapphireRapids-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SierraForest'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cmpccxadd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SierraForest-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cmpccxadd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SierraForest-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cmpccxadd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='intel-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='lam'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SierraForest-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cmpccxadd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='intel-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='lam'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Client'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Client-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Client-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Client-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Client-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Client-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server-v5'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Snowridge'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='core-capability'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mpx'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='split-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Snowridge-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='core-capability'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mpx'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='split-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Snowridge-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='core-capability'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='split-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Snowridge-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='core-capability'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='split-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Snowridge-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='athlon'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnow'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnowext'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='athlon-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnow'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnowext'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='core2duo'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='core2duo-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='coreduo'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='coreduo-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='n270'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='n270-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='phenom'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnow'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnowext'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='phenom-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnow'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnowext'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </mode>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <memoryBacking supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <enum name='sourceType'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <value>file</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <value>anonymous</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <value>memfd</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  </memoryBacking>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <disk supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='diskDevice'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>disk</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>cdrom</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>floppy</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>lun</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='bus'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>fdc</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>scsi</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>usb</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>sata</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='model'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio-transitional</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio-non-transitional</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <graphics supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='type'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>vnc</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>egl-headless</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>dbus</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </graphics>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <video supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='modelType'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>vga</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>cirrus</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>none</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>bochs</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>ramfb</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <hostdev supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='mode'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>subsystem</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='startupPolicy'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>default</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>mandatory</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>requisite</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>optional</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='subsysType'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>usb</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>pci</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>scsi</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='capsType'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='pciBackend'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </hostdev>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <rng supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='model'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio-transitional</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio-non-transitional</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='backendModel'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>random</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>egd</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>builtin</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <filesystem supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='driverType'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>path</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>handle</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtiofs</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </filesystem>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <tpm supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='model'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>tpm-tis</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>tpm-crb</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='backendModel'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>emulator</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>external</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='backendVersion'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>2.0</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </tpm>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <redirdev supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='bus'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>usb</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </redirdev>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <channel supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='type'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>pty</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>unix</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </channel>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <crypto supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='model'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='type'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>qemu</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='backendModel'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>builtin</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </crypto>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <interface supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='backendType'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>default</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>passt</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <panic supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='model'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>isa</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>hyperv</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </panic>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <console supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='type'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>null</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>vc</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>pty</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>dev</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>file</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>pipe</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>stdio</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>udp</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>tcp</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>unix</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>qemu-vdagent</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>dbus</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </console>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <gic supported='no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <vmcoreinfo supported='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <genid supported='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <backingStoreInput supported='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <backup supported='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <async-teardown supported='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <s390-pv supported='no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <ps2 supported='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <tdx supported='no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <sev supported='no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <sgx supported='no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <hyperv supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='features'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>relaxed</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>vapic</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>spinlocks</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>vpindex</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>runtime</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>synic</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>stimer</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>reset</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>vendor_id</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>frequencies</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>reenlightenment</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>tlbflush</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>ipi</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>avic</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>emsr_bitmap</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>xmm_input</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <defaults>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <spinlocks>4095</spinlocks>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <stimer_direct>on</stimer_direct>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <tlbflush_direct>on</tlbflush_direct>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <tlbflush_extended>on</tlbflush_extended>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </defaults>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </hyperv>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <launchSecurity supported='no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:06:51 np0005629333 nova_compute[244014]: </domainCapabilities>
Feb 25 07:06:51 np0005629333 nova_compute[244014]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.726 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb 25 07:06:51 np0005629333 nova_compute[244014]: <domainCapabilities>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <path>/usr/libexec/qemu-kvm</path>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <domain>kvm</domain>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <machine>pc-i440fx-rhel7.6.0</machine>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <arch>x86_64</arch>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <vcpu max='240'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <iothreads supported='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <os supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <enum name='firmware'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <loader supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='type'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>rom</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>pflash</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='readonly'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>yes</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>no</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='secure'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>no</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </loader>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <cpu>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <mode name='host-passthrough' supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='hostPassthroughMigratable'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>on</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>off</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </mode>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <mode name='maximum' supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='maximumMigratable'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>on</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>off</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </mode>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <mode name='host-model' supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model fallback='forbid'>EPYC-Rome</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <vendor>AMD</vendor>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <maxphysaddr mode='passthrough' limit='40'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='x2apic'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='tsc-deadline'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='hypervisor'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='tsc_adjust'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='spec-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='stibp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='ssbd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='cmp_legacy'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='overflow-recov'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='succor'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='ibrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='amd-ssbd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='virt-ssbd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='lbrv'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='tsc-scale'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='vmcb-clean'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='flushbyasid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='pause-filter'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='pfthreshold'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='svme-addr-chk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='require' name='lfence-always-serializing'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <feature policy='disable' name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </mode>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <mode name='custom' supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell-noTSX'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell-noTSX-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Broadwell-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cascadelake-Server'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cascadelake-Server-noTSX'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cascadelake-Server-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cascadelake-Server-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cascadelake-Server-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cascadelake-Server-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cascadelake-Server-v5'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='ClearwaterForest'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bhi-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cmpccxadd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ddpd-u'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='intel-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='lam'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchiti'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sha512'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sm3'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sm4'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='ClearwaterForest-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bhi-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cmpccxadd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ddpd-u'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='intel-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='lam'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchiti'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sha512'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sm3'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sm4'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cooperlake'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cooperlake-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Cooperlake-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Denverton'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mpx'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Denverton-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mpx'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Denverton-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Denverton-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Dhyana-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Genoa'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amd-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='auto-ibrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='stibp-always-on'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Genoa-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amd-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='auto-ibrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='stibp-always-on'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Genoa-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amd-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='auto-ibrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fs-gs-base-ns'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='perfmon-v2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='stibp-always-on'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Milan'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Milan-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Milan-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amd-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='stibp-always-on'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Milan-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amd-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='stibp-always-on'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Rome'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Rome-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Rome-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Rome-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Turin'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amd-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='auto-ibrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vp2intersect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fs-gs-base-ns'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibpb-brtype'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='perfmon-v2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbpb'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='srso-user-kernel-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='stibp-always-on'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-Turin-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amd-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='auto-ibrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vp2intersect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fs-gs-base-ns'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibpb-brtype'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='no-nested-data-bp'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='null-sel-clr-base'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='perfmon-v2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbpb'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='srso-user-kernel-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='stibp-always-on'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='EPYC-v5'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='GraniteRapids'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchiti'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='GraniteRapids-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchiti'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='GraniteRapids-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10-128'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10-256'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10-512'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchiti'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='GraniteRapids-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10-128'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10-256'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx10-512'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='prefetchiti'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell-noTSX'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell-noTSX-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Haswell-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-noTSX'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-v5'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-v6'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Icelake-Server-v7'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='IvyBridge'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='IvyBridge-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='IvyBridge-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='IvyBridge-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='KnightsMill'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-4fmaps'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-4vnniw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512er'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512pf'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='KnightsMill-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-4fmaps'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-4vnniw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512er'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512pf'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Opteron_G4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fma4'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xop'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Opteron_G4-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fma4'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xop'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Opteron_G5'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fma4'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tbm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xop'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Opteron_G5-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fma4'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tbm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xop'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SapphireRapids'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SapphireRapids-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SapphireRapids-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SapphireRapids-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SapphireRapids-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='amx-tile'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-bf16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-fp16'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512-vpopcntdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bitalg'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vbmi2'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrc'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fzrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='la57'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='taa-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='tsx-ldtrk'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SierraForest'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cmpccxadd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SierraForest-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cmpccxadd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SierraForest-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cmpccxadd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='intel-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='lam'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='SierraForest-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ifma'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-ne-convert'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx-vnni-int8'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bhi-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='bus-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cmpccxadd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fbsdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='fsrs'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ibrs-all'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='intel-psfd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ipred-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='lam'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mcdt-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pbrsb-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='psdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rrsba-ctrl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='sbdr-ssdp-no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='serialize'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vaes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='vpclmulqdq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Client'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Client-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Client-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Client-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Client-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Client-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='hle'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='rtm'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Skylake-Server-v5'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512bw'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512cd'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512dq'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512f'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='avx512vl'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='invpcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pcid'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='pku'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Snowridge'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='core-capability'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mpx'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='split-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Snowridge-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='core-capability'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='mpx'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='split-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Snowridge-v2'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='core-capability'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='split-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Snowridge-v3'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='core-capability'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='split-lock-detect'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='Snowridge-v4'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='cldemote'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='erms'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='gfni'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdir64b'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='movdiri'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='xsaves'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='athlon'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnow'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnowext'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='athlon-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnow'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnowext'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='core2duo'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='core2duo-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='coreduo'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='coreduo-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='n270'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='n270-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='ss'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='phenom'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnow'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnowext'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <blockers model='phenom-v1'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnow'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <feature name='3dnowext'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </blockers>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </mode>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <memoryBacking supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <enum name='sourceType'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <value>file</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <value>anonymous</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <value>memfd</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  </memoryBacking>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <disk supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='diskDevice'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>disk</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>cdrom</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>floppy</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>lun</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='bus'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>ide</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>fdc</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>scsi</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>usb</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>sata</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='model'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio-transitional</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio-non-transitional</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <graphics supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='type'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>vnc</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>egl-headless</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>dbus</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </graphics>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <video supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='modelType'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>vga</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>cirrus</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>none</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>bochs</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>ramfb</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <hostdev supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='mode'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>subsystem</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='startupPolicy'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>default</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>mandatory</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>requisite</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>optional</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='subsysType'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>usb</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>pci</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>scsi</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='capsType'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='pciBackend'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </hostdev>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <rng supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='model'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio-transitional</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtio-non-transitional</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='backendModel'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>random</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>egd</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>builtin</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <filesystem supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='driverType'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>path</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>handle</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>virtiofs</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </filesystem>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <tpm supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='model'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>tpm-tis</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>tpm-crb</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='backendModel'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>emulator</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>external</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='backendVersion'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>2.0</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </tpm>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <redirdev supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='bus'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>usb</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </redirdev>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <channel supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='type'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>pty</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>unix</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </channel>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <crypto supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='model'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='type'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>qemu</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='backendModel'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>builtin</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </crypto>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <interface supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='backendType'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>default</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>passt</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <panic supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='model'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>isa</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>hyperv</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </panic>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <console supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='type'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>null</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>vc</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>pty</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>dev</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>file</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>pipe</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>stdio</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>udp</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>tcp</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>unix</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>qemu-vdagent</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>dbus</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </console>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <gic supported='no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <vmcoreinfo supported='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <genid supported='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <backingStoreInput supported='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <backup supported='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <async-teardown supported='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <s390-pv supported='no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <ps2 supported='yes'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <tdx supported='no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <sev supported='no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <sgx supported='no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <hyperv supported='yes'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <enum name='features'>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>relaxed</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>vapic</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>spinlocks</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>vpindex</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>runtime</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>synic</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>stimer</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>reset</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>vendor_id</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>frequencies</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>reenlightenment</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>tlbflush</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>ipi</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>avic</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>emsr_bitmap</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <value>xmm_input</value>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </enum>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      <defaults>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <spinlocks>4095</spinlocks>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <stimer_direct>on</stimer_direct>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <tlbflush_direct>on</tlbflush_direct>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <tlbflush_extended>on</tlbflush_extended>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:        <vendor_id>Linux KVM Hv</vendor_id>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:      </defaults>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    </hyperv>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:    <launchSecurity supported='no'/>
Feb 25 07:06:51 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:06:51 np0005629333 nova_compute[244014]: </domainCapabilities>
Feb 25 07:06:51 np0005629333 nova_compute[244014]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.780 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.780 244018 INFO nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Secure Boot support detected
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.782 244018 INFO nova.virt.libvirt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.790 244018 DEBUG nova.virt.libvirt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.806 244018 INFO nova.virt.node [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Determined node identity cb4dae98-2ac3-4218-9445-2320139e12ad from /var/lib/nova/compute_id
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.822 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Verified node cb4dae98-2ac3-4218-9445-2320139e12ad matches my host compute-0.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.842 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.943 244018 DEBUG oslo_concurrency.lockutils [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.943 244018 DEBUG oslo_concurrency.lockutils [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.943 244018 DEBUG oslo_concurrency.lockutils [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.944 244018 DEBUG nova.compute.resource_tracker [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 07:06:51 np0005629333 nova_compute[244014]: 2026-02-25 12:06:51.944 244018 DEBUG oslo_concurrency.processutils [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:06:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:06:52 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3411204322' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:06:52 np0005629333 nova_compute[244014]: 2026-02-25 12:06:52.488 244018 DEBUG oslo_concurrency.processutils [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:06:52 np0005629333 nova_compute[244014]: 2026-02-25 12:06:52.710 244018 WARNING nova.virt.libvirt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:06:52 np0005629333 nova_compute[244014]: 2026-02-25 12:06:52.712 244018 DEBUG nova.compute.resource_tracker [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5018MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 07:06:52 np0005629333 nova_compute[244014]: 2026-02-25 12:06:52.712 244018 DEBUG oslo_concurrency.lockutils [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:06:52 np0005629333 nova_compute[244014]: 2026-02-25 12:06:52.713 244018 DEBUG oslo_concurrency.lockutils [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:06:52 np0005629333 nova_compute[244014]: 2026-02-25 12:06:52.821 244018 DEBUG nova.compute.resource_tracker [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 07:06:52 np0005629333 nova_compute[244014]: 2026-02-25 12:06:52.822 244018 DEBUG nova.compute.resource_tracker [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 07:06:52 np0005629333 nova_compute[244014]: 2026-02-25 12:06:52.879 244018 DEBUG nova.scheduler.client.report [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 07:06:52 np0005629333 nova_compute[244014]: 2026-02-25 12:06:52.904 244018 DEBUG nova.scheduler.client.report [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 07:06:52 np0005629333 nova_compute[244014]: 2026-02-25 12:06:52.905 244018 DEBUG nova.compute.provider_tree [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 07:06:52 np0005629333 nova_compute[244014]: 2026-02-25 12:06:52.920 244018 DEBUG nova.scheduler.client.report [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 07:06:52 np0005629333 nova_compute[244014]: 2026-02-25 12:06:52.942 244018 DEBUG nova.scheduler.client.report [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 07:06:52 np0005629333 nova_compute[244014]: 2026-02-25 12:06:52.969 244018 DEBUG oslo_concurrency.processutils [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:06:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:06:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:06:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4275610153' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:06:53 np0005629333 nova_compute[244014]: 2026-02-25 12:06:53.528 244018 DEBUG oslo_concurrency.processutils [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:06:53 np0005629333 nova_compute[244014]: 2026-02-25 12:06:53.534 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Feb 25 07:06:53 np0005629333 nova_compute[244014]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Feb 25 07:06:53 np0005629333 nova_compute[244014]: 2026-02-25 12:06:53.534 244018 INFO nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] kernel doesn't support AMD SEV
Feb 25 07:06:53 np0005629333 nova_compute[244014]: 2026-02-25 12:06:53.535 244018 DEBUG nova.compute.provider_tree [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:06:53 np0005629333 nova_compute[244014]: 2026-02-25 12:06:53.536 244018 DEBUG nova.virt.libvirt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:06:53 np0005629333 nova_compute[244014]: 2026-02-25 12:06:53.557 244018 DEBUG nova.scheduler.client.report [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:06:53 np0005629333 nova_compute[244014]: 2026-02-25 12:06:53.583 244018 DEBUG nova.compute.resource_tracker [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 07:06:53 np0005629333 nova_compute[244014]: 2026-02-25 12:06:53.584 244018 DEBUG oslo_concurrency.lockutils [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:06:53 np0005629333 nova_compute[244014]: 2026-02-25 12:06:53.584 244018 DEBUG nova.service [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Feb 25 07:06:53 np0005629333 nova_compute[244014]: 2026-02-25 12:06:53.645 244018 DEBUG nova.service [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Feb 25 07:06:53 np0005629333 nova_compute[244014]: 2026-02-25 12:06:53.646 244018 DEBUG nova.servicegroup.drivers.db [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Feb 25 07:06:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:06:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:06:54.988 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:06:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:06:54.988 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:06:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:06:54.988 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:06:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:06:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:06:57 np0005629333 podman[244360]: 2026-02-25 12:06:57.761869793 +0000 UTC m=+0.102690170 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 25 07:06:57 np0005629333 podman[244361]: 2026-02-25 12:06:57.770031959 +0000 UTC m=+0.109592479 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 07:06:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:06:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:07:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:07:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:07:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:07:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:07:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:07:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:07:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:07:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:07:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:07:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:07:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:07:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:07:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:07:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:07:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:07:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:07:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:07:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:07:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2797691297' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:07:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:07:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2797691297' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 07:07:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:07:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3301395650' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:07:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:07:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3301395650' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 07:07:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:07:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3025346241' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:07:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:07:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3025346241' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 07:07:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:07:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:07:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:07:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:07:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:07:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:07:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:07:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:07:28 np0005629333 podman[244405]: 2026-02-25 12:07:28.754654256 +0000 UTC m=+0.097752987 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 25 07:07:28 np0005629333 podman[244406]: 2026-02-25 12:07:28.788878875 +0000 UTC m=+0.129310629 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 07:07:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:07:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:07:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:07:30
Feb 25 07:07:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:07:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:07:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['volumes', 'default.rgw.log', '.mgr', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'default.rgw.control']
Feb 25 07:07:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:07:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:07:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:07:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:07:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:07:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:07:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:07:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:07:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:07:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:07:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:07:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:07:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:07:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:07:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:07:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:07:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:07:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:07:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:07:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:07:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:07:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:07:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:07:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:07:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:07:41 np0005629333 nova_compute[244014]: 2026-02-25 12:07:41.647 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:07:41 np0005629333 nova_compute[244014]: 2026-02-25 12:07:41.736 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:07:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:07:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:07:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:07:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:07:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:07:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:07:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:07:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:07:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:07:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:07:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:07:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:07:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 2.6988014389607423e-06 of space, bias 4.0, pg target 0.0032385617267528906 quantized to 16 (current 16)
Feb 25 07:07:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:07:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:07:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:07:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:07:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:07:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:07:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:07:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:07:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:07:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 07:07:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:07:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:07:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:07:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:07:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:07:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:07:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:07:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:07:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:07:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:07:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:07:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:07:42 np0005629333 podman[244592]: 2026-02-25 12:07:42.602052234 +0000 UTC m=+0.033872011 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:07:42 np0005629333 podman[244592]: 2026-02-25 12:07:42.763326436 +0000 UTC m=+0.195146173 container create 112ae3e67b8271b7cf14e87ad0b9115507d7655403958981b02749ad0071e736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 07:07:42 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:07:42 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:07:42 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:07:42 np0005629333 systemd[1]: Started libpod-conmon-112ae3e67b8271b7cf14e87ad0b9115507d7655403958981b02749ad0071e736.scope.
Feb 25 07:07:42 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:07:43 np0005629333 podman[244592]: 2026-02-25 12:07:43.107033502 +0000 UTC m=+0.538853229 container init 112ae3e67b8271b7cf14e87ad0b9115507d7655403958981b02749ad0071e736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_dewdney, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 07:07:43 np0005629333 podman[244592]: 2026-02-25 12:07:43.116358842 +0000 UTC m=+0.548178579 container start 112ae3e67b8271b7cf14e87ad0b9115507d7655403958981b02749ad0071e736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:07:43 np0005629333 podman[244592]: 2026-02-25 12:07:43.120200493 +0000 UTC m=+0.552020230 container attach 112ae3e67b8271b7cf14e87ad0b9115507d7655403958981b02749ad0071e736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_dewdney, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 07:07:43 np0005629333 keen_dewdney[244608]: 167 167
Feb 25 07:07:43 np0005629333 systemd[1]: libpod-112ae3e67b8271b7cf14e87ad0b9115507d7655403958981b02749ad0071e736.scope: Deactivated successfully.
Feb 25 07:07:43 np0005629333 podman[244592]: 2026-02-25 12:07:43.123272002 +0000 UTC m=+0.555091739 container died 112ae3e67b8271b7cf14e87ad0b9115507d7655403958981b02749ad0071e736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:07:43 np0005629333 systemd[1]: var-lib-containers-storage-overlay-aa00eb16be33a65a10b778f36de1d4815062531df3030dde03ff4fd0290ed311-merged.mount: Deactivated successfully.
Feb 25 07:07:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:07:43 np0005629333 podman[244592]: 2026-02-25 12:07:43.373231348 +0000 UTC m=+0.805051085 container remove 112ae3e67b8271b7cf14e87ad0b9115507d7655403958981b02749ad0071e736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_dewdney, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:07:43 np0005629333 systemd[1]: libpod-conmon-112ae3e67b8271b7cf14e87ad0b9115507d7655403958981b02749ad0071e736.scope: Deactivated successfully.
Feb 25 07:07:43 np0005629333 podman[244633]: 2026-02-25 12:07:43.537821206 +0000 UTC m=+0.030763280 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:07:43 np0005629333 podman[244633]: 2026-02-25 12:07:43.638986651 +0000 UTC m=+0.131928725 container create 25b521d5051373841c06191acff0f2a158756c665b2ce3db77afb249c98cf352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_poitras, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 07:07:43 np0005629333 systemd[1]: Started libpod-conmon-25b521d5051373841c06191acff0f2a158756c665b2ce3db77afb249c98cf352.scope.
Feb 25 07:07:43 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:07:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e96d2f7e0edb7a51296dfdf209783cc3c09f3d4338f0c7620f89a65b6634b4b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:07:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e96d2f7e0edb7a51296dfdf209783cc3c09f3d4338f0c7620f89a65b6634b4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:07:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e96d2f7e0edb7a51296dfdf209783cc3c09f3d4338f0c7620f89a65b6634b4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:07:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e96d2f7e0edb7a51296dfdf209783cc3c09f3d4338f0c7620f89a65b6634b4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:07:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e96d2f7e0edb7a51296dfdf209783cc3c09f3d4338f0c7620f89a65b6634b4b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
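The xfs warnings above quote 0x7fffffff, the largest 32-bit signed time_t; filesystems formatted without the xfs bigtime feature cannot store inode timestamps past that point. A quick, purely illustrative check of where that limit lands:

    from datetime import datetime, timezone

    # 0x7fffffff is the 32-bit signed time_t ceiling quoted by the kernel.
    limit = 0x7FFFFFFF
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00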
Feb 25 07:07:43 np0005629333 podman[244633]: 2026-02-25 12:07:43.807825052 +0000 UTC m=+0.300767166 container init 25b521d5051373841c06191acff0f2a158756c665b2ce3db77afb249c98cf352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_poitras, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:07:43 np0005629333 podman[244633]: 2026-02-25 12:07:43.823489335 +0000 UTC m=+0.316431419 container start 25b521d5051373841c06191acff0f2a158756c665b2ce3db77afb249c98cf352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:07:43 np0005629333 podman[244633]: 2026-02-25 12:07:43.829923321 +0000 UTC m=+0.322865405 container attach 25b521d5051373841c06191acff0f2a158756c665b2ce3db77afb249c98cf352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_poitras, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 07:07:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 25 07:07:43 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Feb 25 07:07:43 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:43.907026) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 07:07:43 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Feb 25 07:07:43 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021263907066, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 987, "num_deletes": 508, "total_data_size": 889144, "memory_usage": 906888, "flush_reason": "Manual Compaction"}
Feb 25 07:07:43 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Feb 25 07:07:43 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021263929928, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 880138, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13674, "largest_seqno": 14660, "table_properties": {"data_size": 875773, "index_size": 1505, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 12234, "raw_average_key_size": 17, "raw_value_size": 865004, "raw_average_value_size": 1241, "num_data_blocks": 70, "num_entries": 697, "num_filter_entries": 697, "num_deletions": 508, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772021204, "oldest_key_time": 1772021204, "file_creation_time": 1772021263, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:07:43 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 22957 microseconds, and 3536 cpu microseconds.
Feb 25 07:07:43 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:07:44 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:43.929981) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 880138 bytes OK
Feb 25 07:07:44 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:43.930004) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Feb 25 07:07:44 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:44.011221) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Feb 25 07:07:44 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:44.011277) EVENT_LOG_v1 {"time_micros": 1772021264011263, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 07:07:44 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:44.011311) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 07:07:44 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 883335, prev total WAL file size 883335, number of live WAL files 2.
Feb 25 07:07:44 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:07:44 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:44.012454) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323534' seq:0, type:0; will stop at (end)
Feb 25 07:07:44 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 07:07:44 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(859KB)], [32(7624KB)]
Feb 25 07:07:44 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021264012486, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 8687927, "oldest_snapshot_seqno": -1}
Feb 25 07:07:44 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3737 keys, 6808210 bytes, temperature: kUnknown
Feb 25 07:07:44 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021264179280, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 6808210, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6782077, "index_size": 15720, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9349, "raw_key_size": 91712, "raw_average_key_size": 24, "raw_value_size": 6713224, "raw_average_value_size": 1796, "num_data_blocks": 665, "num_entries": 3737, "num_filter_entries": 3737, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772021264, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:07:44 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:07:44 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:44.179777) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 6808210 bytes
Feb 25 07:07:44 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:44.211595) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 52.0 rd, 40.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 7.4 +0.0 blob) out(6.5 +0.0 blob), read-write-amplify(17.6) write-amplify(7.7) OK, records in: 4763, records dropped: 1026 output_compression: NoCompression
Feb 25 07:07:44 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:44.211661) EVENT_LOG_v1 {"time_micros": 1772021264211634, "job": 14, "event": "compaction_finished", "compaction_time_micros": 167059, "compaction_time_cpu_micros": 10678, "output_level": 6, "num_output_files": 1, "total_output_size": 6808210, "num_input_records": 4763, "num_output_records": 3737, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 07:07:44 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:07:44 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021264212084, "job": 14, "event": "table_file_deletion", "file_number": 34}
Feb 25 07:07:44 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:07:44 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021264213372, "job": 14, "event": "table_file_deletion", "file_number": 32}
Feb 25 07:07:44 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:44.012062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:07:44 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:44.213457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:07:44 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:44.213468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:07:44 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:44.213472) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:07:44 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:44.213476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:07:44 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:07:44.213480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
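The rocksdb EVENT_LOG entries for jobs 13 and 14 above carry enough raw numbers to reproduce the summary figures the compaction prints: amplification is measured relative to the newly flushed L0 bytes, and throughput is input or output bytes over the compaction wall time (bytes per microsecond equals decimal MB/s). A worked check, using only values taken from the lines above:

    # From the JOB 13/14 EVENT_LOG entries above.
    l0_input = 880_138          # table #34, flushed from the memtable
    total_input = 8_687_927     # compaction "input_data_size" (L0 + L6)
    output = 6_808_210          # table #35, written back to L6
    wall_micros = 167_059       # "compaction_time_micros"

    write_amplify = output / l0_input                        # ~7.7
    read_write_amplify = (total_input + output) / l0_input   # ~17.6
    rd_mb_s = total_input / wall_micros                      # ~52.0
    wr_mb_s = output / wall_micros                           # ~40.8
    print(f"{write_amplify:.1f} {read_write_amplify:.1f} "
          f"{rd_mb_s:.1f} {wr_mb_s:.1f}")

These match the logged "MB/sec: 52.0 rd, 40.8 wr ... read-write-amplify(17.6) write-amplify(7.7)" summary for job 14.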
Feb 25 07:07:44 np0005629333 xenodochial_poitras[244650]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:07:44 np0005629333 xenodochial_poitras[244650]: --> All data devices are unavailable
Feb 25 07:07:44 np0005629333 systemd[1]: libpod-25b521d5051373841c06191acff0f2a158756c665b2ce3db77afb249c98cf352.scope: Deactivated successfully.
Feb 25 07:07:44 np0005629333 podman[244633]: 2026-02-25 12:07:44.287944682 +0000 UTC m=+0.780886766 container died 25b521d5051373841c06191acff0f2a158756c665b2ce3db77afb249c98cf352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_poitras, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 07:07:44 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1e96d2f7e0edb7a51296dfdf209783cc3c09f3d4338f0c7620f89a65b6634b4b-merged.mount: Deactivated successfully.
Feb 25 07:07:45 np0005629333 podman[244633]: 2026-02-25 12:07:45.200261827 +0000 UTC m=+1.693203901 container remove 25b521d5051373841c06191acff0f2a158756c665b2ce3db77afb249c98cf352 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 07:07:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:07:45 np0005629333 systemd[1]: libpod-conmon-25b521d5051373841c06191acff0f2a158756c665b2ce3db77afb249c98cf352.scope: Deactivated successfully.
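The keen_dewdney and xenodochial_poitras sequences above are the classic one-shot container pattern: image pull, create, init, start, attach, died, remove, with systemd tearing down the conmon scope and overlay mount afterwards. A minimal sketch of the same pattern follows; the ceph-volume arguments are hypothetical, chosen only to echo the "--> passed data devices" output above, and are not necessarily what cephadm ran here:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # `podman run --rm` produces exactly the create/init/start/attach/
    # died/remove journal events seen above, then removes the container.
    result = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "ceph-volume", "inventory", "--format", "json"],  # hypothetical args
        capture_output=True, text=True, check=False,
    )
    print(result.returncode)
    print(result.stdout)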
Feb 25 07:15:08 np0005629333 nova_compute[244014]: 2026-02-25 12:15:08.098 244018 DEBUG oslo_concurrency.lockutils [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Acquiring lock "16c8e25c-54e5-4f4a-a188-5e5621ce23b2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:15:08 np0005629333 nova_compute[244014]: 2026-02-25 12:15:08.098 244018 DEBUG oslo_concurrency.lockutils [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Lock "16c8e25c-54e5-4f4a-a188-5e5621ce23b2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:15:08 np0005629333 nova_compute[244014]: 2026-02-25 12:15:08.099 244018 DEBUG oslo_concurrency.lockutils [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Acquiring lock "16c8e25c-54e5-4f4a-a188-5e5621ce23b2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:15:08 np0005629333 nova_compute[244014]: 2026-02-25 12:15:08.099 244018 DEBUG oslo_concurrency.lockutils [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Lock "16c8e25c-54e5-4f4a-a188-5e5621ce23b2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:15:08 np0005629333 nova_compute[244014]: 2026-02-25 12:15:08.099 244018 DEBUG oslo_concurrency.lockutils [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Lock "16c8e25c-54e5-4f4a-a188-5e5621ce23b2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:15:08 np0005629333 nova_compute[244014]: 2026-02-25 12:15:08.100 244018 INFO nova.compute.manager [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Terminating instance#033[00m
Feb 25 07:15:08 np0005629333 nova_compute[244014]: 2026-02-25 12:15:08.101 244018 DEBUG oslo_concurrency.lockutils [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Acquiring lock "refresh_cache-16c8e25c-54e5-4f4a-a188-5e5621ce23b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:15:08 np0005629333 nova_compute[244014]: 2026-02-25 12:15:08.101 244018 DEBUG oslo_concurrency.lockutils [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Acquired lock "refresh_cache-16c8e25c-54e5-4f4a-a188-5e5621ce23b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:15:08 np0005629333 nova_compute[244014]: 2026-02-25 12:15:08.101 244018 DEBUG nova.network.neutron [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:15:08 np0005629333 rsyslogd[1020]: imjournal: 3448 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Feb 25 07:15:08 np0005629333 nova_compute[244014]: 2026-02-25 12:15:08.888 244018 DEBUG nova.network.neutron [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:15:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:15:09 np0005629333 nova_compute[244014]: 2026-02-25 12:15:09.170 244018 DEBUG nova.network.neutron [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:15:09 np0005629333 nova_compute[244014]: 2026-02-25 12:15:09.219 244018 DEBUG oslo_concurrency.lockutils [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Releasing lock "refresh_cache-16c8e25c-54e5-4f4a-a188-5e5621ce23b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:15:09 np0005629333 nova_compute[244014]: 2026-02-25 12:15:09.220 244018 DEBUG nova.compute.manager [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:15:09 np0005629333 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Feb 25 07:15:09 np0005629333 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 5.357s CPU time.
Feb 25 07:15:09 np0005629333 systemd-machined[210048]: Machine qemu-1-instance-00000001 terminated.
Feb 25 07:15:09 np0005629333 nova_compute[244014]: 2026-02-25 12:15:09.437 244018 INFO nova.virt.libvirt.driver [-] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Instance destroyed successfully.#033[00m
Feb 25 07:15:09 np0005629333 nova_compute[244014]: 2026-02-25 12:15:09.438 244018 DEBUG nova.objects.instance [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Lazy-loading 'resources' on Instance uuid 16c8e25c-54e5-4f4a-a188-5e5621ce23b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
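The Acquiring/acquired/released lines above come from oslo_concurrency's named locks, which report how long a caller waited to acquire a lock and how long it was held. A simplified sketch of that pattern, assuming plain in-process threading locks rather than the actual oslo_concurrency implementation:

    import threading
    import time
    from contextlib import contextmanager

    _locks: dict[str, threading.Lock] = {}

    @contextmanager
    def synchronized(name: str):
        """Log wait and hold times for a named lock, lockutils-style."""
        lock = _locks.setdefault(name, threading.Lock())
        t0 = time.monotonic()
        lock.acquire()
        acquired = time.monotonic()
        print(f'Lock "{name}" acquired :: waited {acquired - t0:.3f}s')
        try:
            yield
        finally:
            lock.release()
            print(f'Lock "{name}" "released" :: held '
                  f'{time.monotonic() - acquired:.3f}s')

    with synchronized("16c8e25c-54e5-4f4a-a188-5e5621ce23b2"):
        pass  # e.g. the body of do_terminate_instance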
Feb 25 07:15:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 122 op/s
Feb 25 07:15:10 np0005629333 nova_compute[244014]: 2026-02-25 12:15:10.384 244018 DEBUG oslo_concurrency.lockutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Acquiring lock "f3ac54ca-7761-47fb-a44f-5c64ff55e40c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:15:10 np0005629333 nova_compute[244014]: 2026-02-25 12:15:10.384 244018 DEBUG oslo_concurrency.lockutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "f3ac54ca-7761-47fb-a44f-5c64ff55e40c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:15:10 np0005629333 nova_compute[244014]: 2026-02-25 12:15:10.519 244018 DEBUG nova.compute.manager [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:15:10 np0005629333 podman[250577]: 2026-02-25 12:15:10.74663412 +0000 UTC m=+0.079366242 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb 25 07:15:10 np0005629333 podman[250579]: 2026-02-25 12:15:10.791944424 +0000 UTC m=+0.124800459 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible)
Feb 25 07:15:10 np0005629333 nova_compute[244014]: 2026-02-25 12:15:10.800 244018 DEBUG oslo_concurrency.lockutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:15:10 np0005629333 nova_compute[244014]: 2026-02-25 12:15:10.800 244018 DEBUG oslo_concurrency.lockutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:15:10 np0005629333 nova_compute[244014]: 2026-02-25 12:15:10.809 244018 DEBUG nova.virt.hardware [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:15:10 np0005629333 nova_compute[244014]: 2026-02-25 12:15:10.810 244018 INFO nova.compute.claims [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:15:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 182 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 122 op/s
Feb 25 07:15:11 np0005629333 nova_compute[244014]: 2026-02-25 12:15:11.563 244018 DEBUG oslo_concurrency.processutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:15:11 np0005629333 nova_compute[244014]: 2026-02-25 12:15:11.748 244018 INFO nova.virt.libvirt.driver [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Deleting instance files /var/lib/nova/instances/16c8e25c-54e5-4f4a-a188-5e5621ce23b2_del#033[00m
Feb 25 07:15:11 np0005629333 nova_compute[244014]: 2026-02-25 12:15:11.750 244018 INFO nova.virt.libvirt.driver [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Deletion of /var/lib/nova/instances/16c8e25c-54e5-4f4a-a188-5e5621ce23b2_del complete#033[00m
Feb 25 07:15:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:15:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/221663491' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:15:12 np0005629333 nova_compute[244014]: 2026-02-25 12:15:12.083 244018 DEBUG oslo_concurrency.processutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
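Nova's RBD storage driver gathers pool capacity by shelling out to the ceph CLI, which is what produces both the processutils lines above and the matching mon audit entries (the df command dispatched for client.openstack). A sketch of the same call; it assumes a reachable cluster, the client.openstack keyring, and the "stats" field names of current Ceph releases:

    import json
    import subprocess

    # The exact command logged above.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        text=True,
    )
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])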
Feb 25 07:15:12 np0005629333 nova_compute[244014]: 2026-02-25 12:15:12.090 244018 DEBUG nova.compute.provider_tree [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:15:12 np0005629333 nova_compute[244014]: 2026-02-25 12:15:12.119 244018 DEBUG nova.virt.libvirt.host [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Feb 25 07:15:12 np0005629333 nova_compute[244014]: 2026-02-25 12:15:12.119 244018 INFO nova.virt.libvirt.host [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] UEFI support detected#033[00m
Feb 25 07:15:12 np0005629333 nova_compute[244014]: 2026-02-25 12:15:12.121 244018 INFO nova.compute.manager [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Took 2.90 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:15:12 np0005629333 nova_compute[244014]: 2026-02-25 12:15:12.122 244018 DEBUG oslo.service.loopingcall [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:15:12 np0005629333 nova_compute[244014]: 2026-02-25 12:15:12.122 244018 DEBUG nova.compute.manager [-] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:15:12 np0005629333 nova_compute[244014]: 2026-02-25 12:15:12.123 244018 DEBUG nova.network.neutron [-] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:15:12 np0005629333 nova_compute[244014]: 2026-02-25 12:15:12.182 244018 DEBUG nova.scheduler.client.report [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
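The inventory payload above is what the resource tracker reports to placement; usable capacity per resource class is (total - reserved) * allocation_ratio. Plugging in the logged values, a small worked check:

    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2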
Feb 25 07:15:12 np0005629333 nova_compute[244014]: 2026-02-25 12:15:12.403 244018 DEBUG nova.network.neutron [-] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:15:12 np0005629333 nova_compute[244014]: 2026-02-25 12:15:12.483 244018 DEBUG nova.network.neutron [-] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:15:12 np0005629333 nova_compute[244014]: 2026-02-25 12:15:12.565 244018 DEBUG oslo_concurrency.lockutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:15:12 np0005629333 nova_compute[244014]: 2026-02-25 12:15:12.566 244018 DEBUG nova.compute.manager [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:15:13 np0005629333 nova_compute[244014]: 2026-02-25 12:15:13.272 244018 INFO nova.compute.manager [-] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Took 1.15 seconds to deallocate network for instance.#033[00m
Feb 25 07:15:13 np0005629333 nova_compute[244014]: 2026-02-25 12:15:13.284 244018 DEBUG nova.compute.manager [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:15:13 np0005629333 nova_compute[244014]: 2026-02-25 12:15:13.284 244018 DEBUG nova.network.neutron [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:15:13 np0005629333 nova_compute[244014]: 2026-02-25 12:15:13.391 244018 INFO nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:15:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 153 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 119 op/s
Feb 25 07:15:13 np0005629333 nova_compute[244014]: 2026-02-25 12:15:13.480 244018 DEBUG nova.compute.manager [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:15:13 np0005629333 nova_compute[244014]: 2026-02-25 12:15:13.511 244018 DEBUG oslo_concurrency.lockutils [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:15:13 np0005629333 nova_compute[244014]: 2026-02-25 12:15:13.512 244018 DEBUG oslo_concurrency.lockutils [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:15:13 np0005629333 nova_compute[244014]: 2026-02-25 12:15:13.598 244018 DEBUG oslo_concurrency.processutils [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:15:13 np0005629333 nova_compute[244014]: 2026-02-25 12:15:13.698 244018 DEBUG nova.compute.manager [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:15:13 np0005629333 nova_compute[244014]: 2026-02-25 12:15:13.700 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:15:13 np0005629333 nova_compute[244014]: 2026-02-25 12:15:13.701 244018 INFO nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Creating image(s)#033[00m
Feb 25 07:15:13 np0005629333 nova_compute[244014]: 2026-02-25 12:15:13.738 244018 DEBUG nova.storage.rbd_utils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] rbd image f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:15:13 np0005629333 nova_compute[244014]: 2026-02-25 12:15:13.765 244018 DEBUG nova.storage.rbd_utils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] rbd image f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:15:13 np0005629333 nova_compute[244014]: 2026-02-25 12:15:13.793 244018 DEBUG nova.storage.rbd_utils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] rbd image f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:15:13 np0005629333 nova_compute[244014]: 2026-02-25 12:15:13.796 244018 DEBUG oslo_concurrency.processutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:15:13 np0005629333 nova_compute[244014]: 2026-02-25 12:15:13.854 244018 DEBUG oslo_concurrency.processutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:15:13 np0005629333 nova_compute[244014]: 2026-02-25 12:15:13.856 244018 DEBUG oslo_concurrency.lockutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:15:13 np0005629333 nova_compute[244014]: 2026-02-25 12:15:13.857 244018 DEBUG oslo_concurrency.lockutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:15:13 np0005629333 nova_compute[244014]: 2026-02-25 12:15:13.858 244018 DEBUG oslo_concurrency.lockutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:15:13 np0005629333 nova_compute[244014]: 2026-02-25 12:15:13.886 244018 DEBUG nova.storage.rbd_utils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] rbd image f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:15:13 np0005629333 nova_compute[244014]: 2026-02-25 12:15:13.890 244018 DEBUG oslo_concurrency.processutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
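The sequence above is nova's Rbd image backend at work: confirm the target image does not yet exist, inspect the cached base image under _base (serialized by the hash-named lock), then import it into the vms pool. Condensed into the two commands actually logged, minus the prlimit wrapper:

    import subprocess

    BASE = ("/var/lib/nova/instances/_base/"
            "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6")

    # Inspect the cached base image (as logged above).
    subprocess.check_output(
        ["qemu-img", "info", BASE, "--force-share", "--output=json"],
        text=True)

    # Import it as the new instance's root disk; format 2 RBD images
    # support the layering used for clones and snapshots.
    subprocess.check_call(
        ["rbd", "import", "--pool", "vms", BASE,
         "f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])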
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1288963850' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.113 244018 DEBUG oslo_concurrency.processutils [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.121 244018 DEBUG nova.compute.provider_tree [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.145 244018 DEBUG nova.scheduler.client.report [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.152 244018 DEBUG nova.network.neutron [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.153 244018 DEBUG nova.compute.manager [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.175 244018 DEBUG oslo_concurrency.lockutils [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.187907) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021714187959, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 331, "num_deletes": 255, "total_data_size": 128911, "memory_usage": 135816, "flush_reason": "Manual Compaction"}
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.220 244018 INFO nova.scheduler.client.report [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Deleted allocations for instance 16c8e25c-54e5-4f4a-a188-5e5621ce23b2#033[00m
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021714228813, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 128145, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19080, "largest_seqno": 19410, "table_properties": {"data_size": 126073, "index_size": 236, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 4861, "raw_average_key_size": 16, "raw_value_size": 121985, "raw_average_value_size": 420, "num_data_blocks": 11, "num_entries": 290, "num_filter_entries": 290, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772021705, "oldest_key_time": 1772021705, "file_creation_time": 1772021714, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 40947 microseconds, and 941 cpu microseconds.
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.228860) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 128145 bytes OK
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.228879) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.256245) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.256281) EVENT_LOG_v1 {"time_micros": 1772021714256271, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.256310) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 126603, prev total WAL file size 126603, number of live WAL files 2.
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.256905) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323533' seq:72057594037927935, type:22 .. '6C6F676D00353034' seq:0, type:0; will stop at (end)
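The compaction bounds in the line above are hex-encoded RocksDB keys; decoding them (a one-line sketch using only the logged hex) shows the monitor is compacting its "logm" (cluster log) key range:

    # Decode the hex key bounds from the manual-compaction line above.
    print(bytes.fromhex('6C6F676D00323533'))  # b'logm\x00253'
    print(bytes.fromhex('6C6F676D00353034'))  # b'logm\x00504'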
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(125KB)], [44(6564KB)]
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021714256947, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 6850312, "oldest_snapshot_seqno": -1}
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4115 keys, 6735502 bytes, temperature: kUnknown
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021714332652, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 6735502, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6707849, "index_size": 16280, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10309, "raw_key_size": 102087, "raw_average_key_size": 24, "raw_value_size": 6633071, "raw_average_value_size": 1611, "num_data_blocks": 682, "num_entries": 4115, "num_filter_entries": 4115, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772021714, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.333 244018 DEBUG oslo_concurrency.lockutils [None req-636cfef5-cf8c-4d2a-a950-ce7af34499a9 ddd8db22e5c0487dbbc394a46dcf24db 0615b26acc544c6798f0df8252424d5f - - default default] Lock "16c8e25c-54e5-4f4a-a188-5e5621ce23b2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.332999) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 6735502 bytes
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.353425) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 90.3 rd, 88.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 6.4 +0.0 blob) out(6.4 +0.0 blob), read-write-amplify(106.0) write-amplify(52.6) OK, records in: 4632, records dropped: 517 output_compression: NoCompression
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.353476) EVENT_LOG_v1 {"time_micros": 1772021714353455, "job": 22, "event": "compaction_finished", "compaction_time_micros": 75836, "compaction_time_cpu_micros": 21880, "output_level": 6, "num_output_files": 1, "total_output_size": 6735502, "num_input_records": 4632, "num_output_records": 4115, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
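The throughput and amplification figures JOB 22 reports can be re-derived from the byte counts in the surrounding events; a minimal sketch, using only values copied from the log:

    # Values copied from the JOB 22 compaction events above.
    l0_input = 128145       # table #46, the Level-0 input file
    total_input = 6850312   # "input_data_size" (compaction_started)
    output = 6735502        # "total_output_size" (compaction_finished)
    elapsed_us = 75836      # "compaction_time_micros"

    print(round(output / l0_input, 1))                  # 52.6  -> write-amplify(52.6)
    print(round((total_input + output) / l0_input, 1))  # 106.0 -> read-write-amplify(106.0)
    print(round(total_input / elapsed_us, 1))           # 90.3 MB/s rd (bytes/us == MB/s)
    print(round(output / elapsed_us, 1))                # 88.8 MB/s wr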
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021714353720, "job": 22, "event": "table_file_deletion", "file_number": 46}
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021714354788, "job": 22, "event": "table_file_deletion", "file_number": 44}
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.256697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.354870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.354877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.354879) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.354881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:15:14 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:15:14.354882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.442 244018 DEBUG oslo_concurrency.processutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
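The shell-outs logged here go through oslo.concurrency's processutils helper, the module the DEBUG lines themselves cite. A minimal sketch of the same rbd import call, with every argument copied from the log line above (a sketch, not nova's exact call site):

    from oslo_concurrency import processutils

    # Same "rbd import" invocation as logged; execute() returns (stdout, stderr)
    # and raises ProcessExecutionError on a non-zero exit code.
    out, err = processutils.execute(
        'rbd', 'import', '--pool', 'vms',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        'f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')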
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.517 244018 DEBUG nova.storage.rbd_utils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] resizing rbd image f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.629 244018 DEBUG nova.objects.instance [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lazy-loading 'migration_context' on Instance uuid f3ac54ca-7761-47fb-a44f-5c64ff55e40c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.666 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.666 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Ensure instance console log exists: /var/lib/nova/instances/f3ac54ca-7761-47fb-a44f-5c64ff55e40c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.667 244018 DEBUG oslo_concurrency.lockutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.667 244018 DEBUG oslo_concurrency.lockutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.667 244018 DEBUG oslo_concurrency.lockutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.670 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.675 244018 WARNING nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.681 244018 DEBUG nova.virt.libvirt.host [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.681 244018 DEBUG nova.virt.libvirt.host [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.684 244018 DEBUG nova.virt.libvirt.host [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.685 244018 DEBUG nova.virt.libvirt.host [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
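The two probes above first look for a cgroup-v1 cpu controller, then fall back to cgroup v2, where a single root-level file lists the controllers the host exposes. A hedged sketch of the v2 check (the path is the standard kernel mount point, assumed rather than taken from this log):

    # cgroup v2: available controllers are listed in one root-level file.
    with open('/sys/fs/cgroup/cgroup.controllers') as f:
        print('cpu' in f.read().split())   # True on this host, per the log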
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.685 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.686 244018 DEBUG nova.virt.hardware [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.686 244018 DEBUG nova.virt.hardware [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.687 244018 DEBUG nova.virt.hardware [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.687 244018 DEBUG nova.virt.hardware [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.687 244018 DEBUG nova.virt.hardware [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.688 244018 DEBUG nova.virt.hardware [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.688 244018 DEBUG nova.virt.hardware [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.688 244018 DEBUG nova.virt.hardware [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.689 244018 DEBUG nova.virt.hardware [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.689 244018 DEBUG nova.virt.hardware [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.689 244018 DEBUG nova.virt.hardware [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
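The topology search logged above amounts to enumerating (sockets, cores, threads) triples whose product equals the flavor's vCPU count, within the logged 65536-per-axis limits; for 1 vCPU only 1:1:1 qualifies. A toy sketch of that constraint (illustrative only, not nova's actual implementation):

    import itertools

    def possible_topologies(vcpus, limit=65536):
        # Keep triples whose product equals the requested vCPU count.
        bound = min(vcpus, limit)
        return [(s, c, t)
                for s, c, t in itertools.product(range(1, bound + 1), repeat=3)
                if s * c * t == vcpus]

    print(possible_topologies(1))   # [(1, 1, 1)]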
Feb 25 07:15:14 np0005629333 nova_compute[244014]: 2026-02-25 12:15:14.693 244018 DEBUG oslo_concurrency.processutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:15:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:15:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/304538727' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:15:15 np0005629333 nova_compute[244014]: 2026-02-25 12:15:15.271 244018 DEBUG oslo_concurrency.processutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
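nova runs "ceph mon dump --format=json" here to learn the monitor addresses it later embeds in the guest XML (the <host name="192.168.122.100" port="6789"/> element below). A minimal sketch of consuming that output; the "mons"/"name"/"addr" field names are standard monmap JSON, assumed rather than taken from this log:

    import json
    import subprocess

    # Same command the log shows nova running, parsed for monitor addresses.
    raw = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    monmap = json.loads(raw)
    for mon in monmap['mons']:   # field names assumed (standard monmap JSON)
        print(mon['name'], mon['addr'])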
Feb 25 07:15:15 np0005629333 nova_compute[244014]: 2026-02-25 12:15:15.303 244018 DEBUG nova.storage.rbd_utils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] rbd image f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:15:15 np0005629333 nova_compute[244014]: 2026-02-25 12:15:15.309 244018 DEBUG oslo_concurrency.processutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:15:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 153 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 104 op/s
Feb 25 07:15:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:15:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/927726857' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:15:15 np0005629333 nova_compute[244014]: 2026-02-25 12:15:15.929 244018 DEBUG oslo_concurrency.processutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:15:15 np0005629333 nova_compute[244014]: 2026-02-25 12:15:15.932 244018 DEBUG nova.objects.instance [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lazy-loading 'pci_devices' on Instance uuid f3ac54ca-7761-47fb-a44f-5c64ff55e40c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:15:15 np0005629333 nova_compute[244014]: 2026-02-25 12:15:15.972 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:15:15 np0005629333 nova_compute[244014]:  <uuid>f3ac54ca-7761-47fb-a44f-5c64ff55e40c</uuid>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:  <name>instance-00000002</name>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-789124960</nova:name>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:15:14</nova:creationTime>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:15:15 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:        <nova:user uuid="0b1ac75e114a4f7493006c0ffae0d4cf">tempest-DeleteServersAdminTestJSON-848319223-project-member</nova:user>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:        <nova:project uuid="71ba161ac9034524bed0ed4918ac0d2d">tempest-DeleteServersAdminTestJSON-848319223</nova:project>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      <nova:ports/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      <entry name="serial">f3ac54ca-7761-47fb-a44f-5c64ff55e40c</entry>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      <entry name="uuid">f3ac54ca-7761-47fb-a44f-5c64ff55e40c</entry>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk">
Feb 25 07:15:15 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:15:15 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk.config">
Feb 25 07:15:15 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:15:15 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/f3ac54ca-7761-47fb-a44f-5c64ff55e40c/console.log" append="off"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:15:15 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:15:15 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:15:15 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:15:15 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
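What happens next with the XML dumped above is, in bare libvirt-python terms, define-then-create; nova itself goes through its Guest wrapper (createWithFlags and event waiting), so this is only an illustrative sketch, with `xml` standing for the <domain> document just logged:

    import libvirt

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)  # `xml` = the <domain> document above
        dom.create()               # boots instance-00000002 (see the systemd lines below)
    finally:
        conn.close()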
Feb 25 07:15:16 np0005629333 nova_compute[244014]: 2026-02-25 12:15:16.064 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:15:16 np0005629333 nova_compute[244014]: 2026-02-25 12:15:16.064 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:15:16 np0005629333 nova_compute[244014]: 2026-02-25 12:15:16.065 244018 INFO nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Using config drive
Feb 25 07:15:16 np0005629333 nova_compute[244014]: 2026-02-25 12:15:16.093 244018 DEBUG nova.storage.rbd_utils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] rbd image f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:15:16 np0005629333 nova_compute[244014]: 2026-02-25 12:15:16.449 244018 INFO nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Creating config drive at /var/lib/nova/instances/f3ac54ca-7761-47fb-a44f-5c64ff55e40c/disk.config
Feb 25 07:15:16 np0005629333 nova_compute[244014]: 2026-02-25 12:15:16.455 244018 DEBUG oslo_concurrency.processutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f3ac54ca-7761-47fb-a44f-5c64ff55e40c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3x8jvnwe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:15:16 np0005629333 nova_compute[244014]: 2026-02-25 12:15:16.579 244018 DEBUG oslo_concurrency.processutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f3ac54ca-7761-47fb-a44f-5c64ff55e40c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3x8jvnwe" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:15:16 np0005629333 nova_compute[244014]: 2026-02-25 12:15:16.650 244018 DEBUG nova.storage.rbd_utils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] rbd image f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:15:16 np0005629333 nova_compute[244014]: 2026-02-25 12:15:16.656 244018 DEBUG oslo_concurrency.processutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f3ac54ca-7761-47fb-a44f-5c64ff55e40c/disk.config f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:15:16 np0005629333 nova_compute[244014]: 2026-02-25 12:15:16.700 244018 DEBUG oslo_concurrency.processutils [None req-f14ff759-3d81-4d95-857d-4e87714b74b2 872527ec067d4bf69631833e15dd7506 5c15d69b0f154e82aa1af966bc9537e3 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:15:16 np0005629333 nova_compute[244014]: 2026-02-25 12:15:16.720 244018 DEBUG oslo_concurrency.processutils [None req-f14ff759-3d81-4d95-857d-4e87714b74b2 872527ec067d4bf69631833e15dd7506 5c15d69b0f154e82aa1af966bc9537e3 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:15:17 np0005629333 nova_compute[244014]: 2026-02-25 12:15:17.087 244018 DEBUG oslo_concurrency.processutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f3ac54ca-7761-47fb-a44f-5c64ff55e40c/disk.config f3ac54ca-7761-47fb-a44f-5c64ff55e40c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:15:17 np0005629333 nova_compute[244014]: 2026-02-25 12:15:17.088 244018 INFO nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Deleting local config drive /var/lib/nova/instances/f3ac54ca-7761-47fb-a44f-5c64ff55e40c/disk.config because it was imported into RBD.
Feb 25 07:15:17 np0005629333 systemd-machined[210048]: New machine qemu-2-instance-00000002.
Feb 25 07:15:17 np0005629333 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Feb 25 07:15:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Feb 25 07:15:18 np0005629333 nova_compute[244014]: 2026-02-25 12:15:18.175 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021718.1745355, f3ac54ca-7761-47fb-a44f-5c64ff55e40c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:15:18 np0005629333 nova_compute[244014]: 2026-02-25 12:15:18.175 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] VM Resumed (Lifecycle Event)
Feb 25 07:15:18 np0005629333 nova_compute[244014]: 2026-02-25 12:15:18.179 244018 DEBUG nova.compute.manager [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 07:15:18 np0005629333 nova_compute[244014]: 2026-02-25 12:15:18.180 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 07:15:18 np0005629333 nova_compute[244014]: 2026-02-25 12:15:18.184 244018 INFO nova.virt.libvirt.driver [-] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Instance spawned successfully.
Feb 25 07:15:18 np0005629333 nova_compute[244014]: 2026-02-25 12:15:18.184 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 07:15:18 np0005629333 nova_compute[244014]: 2026-02-25 12:15:18.198 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:15:18 np0005629333 nova_compute[244014]: 2026-02-25 12:15:18.205 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:15:18 np0005629333 nova_compute[244014]: 2026-02-25 12:15:18.210 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:15:18 np0005629333 nova_compute[244014]: 2026-02-25 12:15:18.211 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:15:18 np0005629333 nova_compute[244014]: 2026-02-25 12:15:18.212 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:15:18 np0005629333 nova_compute[244014]: 2026-02-25 12:15:18.212 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:15:18 np0005629333 nova_compute[244014]: 2026-02-25 12:15:18.213 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:15:18 np0005629333 nova_compute[244014]: 2026-02-25 12:15:18.214 244018 DEBUG nova.virt.libvirt.driver [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:15:18 np0005629333 nova_compute[244014]: 2026-02-25 12:15:18.224 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:15:18 np0005629333 nova_compute[244014]: 2026-02-25 12:15:18.225 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021718.1768045, f3ac54ca-7761-47fb-a44f-5c64ff55e40c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:15:18 np0005629333 nova_compute[244014]: 2026-02-25 12:15:18.225 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] VM Started (Lifecycle Event)
Feb 25 07:15:18 np0005629333 nova_compute[244014]: 2026-02-25 12:15:18.259 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:15:18 np0005629333 nova_compute[244014]: 2026-02-25 12:15:18.264 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:15:18 np0005629333 nova_compute[244014]: 2026-02-25 12:15:18.288 244018 INFO nova.compute.manager [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Took 4.59 seconds to spawn the instance on the hypervisor.
Feb 25 07:15:18 np0005629333 nova_compute[244014]: 2026-02-25 12:15:18.288 244018 DEBUG nova.compute.manager [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:15:18 np0005629333 nova_compute[244014]: 2026-02-25 12:15:18.344 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:15:18 np0005629333 nova_compute[244014]: 2026-02-25 12:15:18.375 244018 INFO nova.compute.manager [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Took 7.61 seconds to build instance.
Feb 25 07:15:18 np0005629333 nova_compute[244014]: 2026-02-25 12:15:18.402 244018 DEBUG oslo_concurrency.lockutils [None req-6aad5f31-c341-4aea-a351-fe52066381bf 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "f3ac54ca-7761-47fb-a44f-5c64ff55e40c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:15:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:15:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 25 07:15:19 np0005629333 nova_compute[244014]: 2026-02-25 12:15:19.910 244018 DEBUG oslo_concurrency.lockutils [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Acquiring lock "f3ac54ca-7761-47fb-a44f-5c64ff55e40c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:15:19 np0005629333 nova_compute[244014]: 2026-02-25 12:15:19.911 244018 DEBUG oslo_concurrency.lockutils [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Lock "f3ac54ca-7761-47fb-a44f-5c64ff55e40c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:15:19 np0005629333 nova_compute[244014]: 2026-02-25 12:15:19.911 244018 DEBUG oslo_concurrency.lockutils [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Acquiring lock "f3ac54ca-7761-47fb-a44f-5c64ff55e40c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:15:19 np0005629333 nova_compute[244014]: 2026-02-25 12:15:19.912 244018 DEBUG oslo_concurrency.lockutils [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Lock "f3ac54ca-7761-47fb-a44f-5c64ff55e40c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:15:19 np0005629333 nova_compute[244014]: 2026-02-25 12:15:19.912 244018 DEBUG oslo_concurrency.lockutils [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Lock "f3ac54ca-7761-47fb-a44f-5c64ff55e40c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:15:19 np0005629333 nova_compute[244014]: 2026-02-25 12:15:19.915 244018 INFO nova.compute.manager [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Terminating instance
Feb 25 07:15:19 np0005629333 nova_compute[244014]: 2026-02-25 12:15:19.917 244018 DEBUG oslo_concurrency.lockutils [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Acquiring lock "refresh_cache-f3ac54ca-7761-47fb-a44f-5c64ff55e40c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:15:19 np0005629333 nova_compute[244014]: 2026-02-25 12:15:19.917 244018 DEBUG oslo_concurrency.lockutils [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Acquired lock "refresh_cache-f3ac54ca-7761-47fb-a44f-5c64ff55e40c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:15:19 np0005629333 nova_compute[244014]: 2026-02-25 12:15:19.918 244018 DEBUG nova.network.neutron [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:15:20 np0005629333 nova_compute[244014]: 2026-02-25 12:15:20.438 244018 DEBUG nova.network.neutron [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:15:21 np0005629333 nova_compute[244014]: 2026-02-25 12:15:21.050 244018 DEBUG nova.network.neutron [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:15:21 np0005629333 nova_compute[244014]: 2026-02-25 12:15:21.170 244018 DEBUG oslo_concurrency.lockutils [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Releasing lock "refresh_cache-f3ac54ca-7761-47fb-a44f-5c64ff55e40c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:15:21 np0005629333 nova_compute[244014]: 2026-02-25 12:15:21.171 244018 DEBUG nova.compute.manager [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 07:15:21 np0005629333 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Feb 25 07:15:21 np0005629333 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 4.136s CPU time.
Feb 25 07:15:21 np0005629333 systemd-machined[210048]: Machine qemu-2-instance-00000002 terminated.
Feb 25 07:15:21 np0005629333 nova_compute[244014]: 2026-02-25 12:15:21.392 244018 INFO nova.virt.libvirt.driver [-] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Instance destroyed successfully.
Feb 25 07:15:21 np0005629333 nova_compute[244014]: 2026-02-25 12:15:21.392 244018 DEBUG nova.objects.instance [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Lazy-loading 'resources' on Instance uuid f3ac54ca-7761-47fb-a44f-5c64ff55e40c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:15:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 605 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Feb 25 07:15:21 np0005629333 nova_compute[244014]: 2026-02-25 12:15:21.834 244018 INFO nova.virt.libvirt.driver [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Deleting instance files /var/lib/nova/instances/f3ac54ca-7761-47fb-a44f-5c64ff55e40c_del
Feb 25 07:15:21 np0005629333 nova_compute[244014]: 2026-02-25 12:15:21.835 244018 INFO nova.virt.libvirt.driver [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Deletion of /var/lib/nova/instances/f3ac54ca-7761-47fb-a44f-5c64ff55e40c_del complete
Feb 25 07:15:21 np0005629333 nova_compute[244014]: 2026-02-25 12:15:21.923 244018 INFO nova.compute.manager [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Took 0.75 seconds to destroy the instance on the hypervisor.
Feb 25 07:15:21 np0005629333 nova_compute[244014]: 2026-02-25 12:15:21.924 244018 DEBUG oslo.service.loopingcall [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 07:15:21 np0005629333 nova_compute[244014]: 2026-02-25 12:15:21.924 244018 DEBUG nova.compute.manager [-] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 07:15:21 np0005629333 nova_compute[244014]: 2026-02-25 12:15:21.925 244018 DEBUG nova.network.neutron [-] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 07:15:22 np0005629333 nova_compute[244014]: 2026-02-25 12:15:22.143 244018 DEBUG nova.network.neutron [-] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:15:22 np0005629333 nova_compute[244014]: 2026-02-25 12:15:22.166 244018 DEBUG nova.network.neutron [-] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:15:22 np0005629333 nova_compute[244014]: 2026-02-25 12:15:22.179 244018 INFO nova.compute.manager [-] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Took 0.25 seconds to deallocate network for instance.
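[editor's note] For an instance that owns no pre-existing ports, the deallocate_for_instance() call above boils down to deleting every Neutron port bound to the instance. A rough equivalent with openstacksdk, assuming credentials come from the environment; Nova itself goes through its own Neutron client wrapper rather than the SDK:

    import openstack

    conn = openstack.connect(cloud="envvars")          # assumed env-based cloud config
    instance_uuid = "f3ac54ca-7761-47fb-a44f-5c64ff55e40c"
    for port in conn.network.ports(device_id=instance_uuid):
        conn.network.delete_port(port, ignore_missing=True)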
Feb 25 07:15:22 np0005629333 nova_compute[244014]: 2026-02-25 12:15:22.231 244018 DEBUG oslo_concurrency.lockutils [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:15:22 np0005629333 nova_compute[244014]: 2026-02-25 12:15:22.231 244018 DEBUG oslo_concurrency.lockutils [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:15:22 np0005629333 nova_compute[244014]: 2026-02-25 12:15:22.290 244018 DEBUG oslo_concurrency.processutils [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:15:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:15:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3893988309' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:15:22 np0005629333 nova_compute[244014]: 2026-02-25 12:15:22.847 244018 DEBUG oslo_concurrency.processutils [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
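[editor's note] The resource tracker shells out to `ceph df` exactly as logged above and reads the cluster totals from the JSON reply; that is where the DISK_GB inventory on the following lines comes from. A sketch of the same call, assuming the ceph CLI and the openstack keyring are available on the node:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]                   # cluster-wide totals
    print("free GiB:", stats["total_avail_bytes"] // 1024 ** 3,
          "of", stats["total_bytes"] // 1024 ** 3)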
Feb 25 07:15:22 np0005629333 nova_compute[244014]: 2026-02-25 12:15:22.855 244018 DEBUG nova.compute.provider_tree [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:15:22 np0005629333 nova_compute[244014]: 2026-02-25 12:15:22.878 244018 DEBUG nova.scheduler.client.report [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:15:22 np0005629333 nova_compute[244014]: 2026-02-25 12:15:22.905 244018 DEBUG oslo_concurrency.lockutils [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:15:22 np0005629333 nova_compute[244014]: 2026-02-25 12:15:22.947 244018 INFO nova.scheduler.client.report [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Deleted allocations for instance f3ac54ca-7761-47fb-a44f-5c64ff55e40c
Feb 25 07:15:23 np0005629333 nova_compute[244014]: 2026-02-25 12:15:23.013 244018 DEBUG oslo_concurrency.lockutils [None req-8277ef8d-fd9f-46c7-964f-3b85dc2c177f e12fec713b204531b7f63585f1f80b6f d23637a5678a4641984c270eaaeb2b84 - - default default] Lock "f3ac54ca-7761-47fb-a44f-5c64ff55e40c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
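[editor's note] "Deleted allocations for instance" above corresponds to a single DELETE against the Placement API with the instance as consumer. A hand-rolled sketch with requests; the endpoint, token, and microversion header are placeholders, since Nova resolves all of them through Keystone and its own report client:

    import requests

    placement = "http://placement.example.com"         # assumed endpoint
    headers = {"X-Auth-Token": "<token>",              # assumed credential
               "OpenStack-API-Version": "placement 1.28"}
    consumer = "f3ac54ca-7761-47fb-a44f-5c64ff55e40c"
    resp = requests.delete(f"{placement}/allocations/{consumer}", headers=headers)
    resp.raise_for_status()                            # 204 No Content on success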
Feb 25 07:15:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 171 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 138 op/s
Feb 25 07:15:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:15:24 np0005629333 nova_compute[244014]: 2026-02-25 12:15:24.436 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021709.43531, 16c8e25c-54e5-4f4a-a188-5e5621ce23b2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:15:24 np0005629333 nova_compute[244014]: 2026-02-25 12:15:24.437 244018 INFO nova.compute.manager [-] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] VM Stopped (Lifecycle Event)
Feb 25 07:15:24 np0005629333 nova_compute[244014]: 2026-02-25 12:15:24.454 244018 DEBUG nova.compute.manager [None req-b45c7e4c-8bdc-4d8b-ae3a-04a38ffae486 - - - - - -] [instance: 16c8e25c-54e5-4f4a-a188-5e5621ce23b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:15:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 171 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 114 op/s
Feb 25 07:15:25 np0005629333 nova_compute[244014]: 2026-02-25 12:15:25.469 244018 DEBUG oslo_concurrency.lockutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Acquiring lock "5e6e56f8-55af-4ba8-b447-8c1ab9ab7892" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:15:25 np0005629333 nova_compute[244014]: 2026-02-25 12:15:25.470 244018 DEBUG oslo_concurrency.lockutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "5e6e56f8-55af-4ba8-b447-8c1ab9ab7892" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:15:25 np0005629333 nova_compute[244014]: 2026-02-25 12:15:25.505 244018 DEBUG nova.compute.manager [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:15:25 np0005629333 nova_compute[244014]: 2026-02-25 12:15:25.584 244018 DEBUG oslo_concurrency.lockutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:15:25 np0005629333 nova_compute[244014]: 2026-02-25 12:15:25.585 244018 DEBUG oslo_concurrency.lockutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:15:25 np0005629333 nova_compute[244014]: 2026-02-25 12:15:25.594 244018 DEBUG nova.virt.hardware [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:15:25 np0005629333 nova_compute[244014]: 2026-02-25 12:15:25.595 244018 INFO nova.compute.claims [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Claim successful on node compute-0.ctlplane.example.com
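[editor's note] The "compute_resources" acquire/release pairs bracketing the claim are oslo.concurrency's named-lock pattern: every resource-tracker mutation in the process runs under the same lock, which is what produces the waited/held timings in these lines. A minimal sketch with the same library; the claim body is a stand-in, not Nova's actual accounting:

    from oslo_concurrency import lockutils

    synchronized = lockutils.synchronized_with_prefix("nova-")

    @synchronized("compute_resources")
    def instance_claim(uuid, vcpus, memory_mb):
        # test-and-reserve would happen here while the lock is held
        print(f"claimed {vcpus} vCPU / {memory_mb} MiB for {uuid}")

    instance_claim("5e6e56f8-55af-4ba8-b447-8c1ab9ab7892", 1, 128)  # m1.nano sizes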
Feb 25 07:15:25 np0005629333 nova_compute[244014]: 2026-02-25 12:15:25.711 244018 DEBUG oslo_concurrency.processutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:15:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:15:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2633691069' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:15:26 np0005629333 nova_compute[244014]: 2026-02-25 12:15:26.209 244018 DEBUG oslo_concurrency.processutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:15:26 np0005629333 nova_compute[244014]: 2026-02-25 12:15:26.216 244018 DEBUG nova.compute.provider_tree [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:15:26 np0005629333 nova_compute[244014]: 2026-02-25 12:15:26.241 244018 DEBUG nova.scheduler.client.report [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:15:26 np0005629333 nova_compute[244014]: 2026-02-25 12:15:26.279 244018 DEBUG oslo_concurrency.lockutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:15:26 np0005629333 nova_compute[244014]: 2026-02-25 12:15:26.280 244018 DEBUG nova.compute.manager [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:15:26 np0005629333 nova_compute[244014]: 2026-02-25 12:15:26.341 244018 DEBUG nova.compute.manager [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:15:26 np0005629333 nova_compute[244014]: 2026-02-25 12:15:26.342 244018 DEBUG nova.network.neutron [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:15:26 np0005629333 nova_compute[244014]: 2026-02-25 12:15:26.366 244018 INFO nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:15:26 np0005629333 nova_compute[244014]: 2026-02-25 12:15:26.387 244018 DEBUG nova.compute.manager [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:15:26 np0005629333 nova_compute[244014]: 2026-02-25 12:15:26.478 244018 DEBUG nova.compute.manager [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:15:26 np0005629333 nova_compute[244014]: 2026-02-25 12:15:26.480 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:15:26 np0005629333 nova_compute[244014]: 2026-02-25 12:15:26.481 244018 INFO nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Creating image(s)
Feb 25 07:15:26 np0005629333 nova_compute[244014]: 2026-02-25 12:15:26.509 244018 DEBUG nova.storage.rbd_utils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] rbd image 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:15:26 np0005629333 nova_compute[244014]: 2026-02-25 12:15:26.541 244018 DEBUG nova.storage.rbd_utils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] rbd image 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:15:26 np0005629333 nova_compute[244014]: 2026-02-25 12:15:26.572 244018 DEBUG nova.storage.rbd_utils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] rbd image 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:15:26 np0005629333 nova_compute[244014]: 2026-02-25 12:15:26.576 244018 DEBUG oslo_concurrency.processutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:15:26 np0005629333 nova_compute[244014]: 2026-02-25 12:15:26.647 244018 DEBUG oslo_concurrency.processutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
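[editor's note] The qemu-img probe above runs under oslo.concurrency's prlimit wrapper (1 GiB address space, 30 s CPU time) so a hostile or malformed image cannot wedge the compute service. Roughly the same invocation through processutils:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C", "qemu-img", "info",
        "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6",
        "--force-share", "--output=json",
        prlimit=processutils.ProcessLimits(address_space=1073741824, cpu_time=30))
    info = json.loads(out)
    print(info["format"], info["virtual-size"])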
Feb 25 07:15:26 np0005629333 nova_compute[244014]: 2026-02-25 12:15:26.649 244018 DEBUG oslo_concurrency.lockutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:15:26 np0005629333 nova_compute[244014]: 2026-02-25 12:15:26.649 244018 DEBUG oslo_concurrency.lockutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:15:26 np0005629333 nova_compute[244014]: 2026-02-25 12:15:26.649 244018 DEBUG oslo_concurrency.lockutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:15:26 np0005629333 nova_compute[244014]: 2026-02-25 12:15:26.671 244018 DEBUG nova.storage.rbd_utils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] rbd image 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:15:26 np0005629333 nova_compute[244014]: 2026-02-25 12:15:26.674 244018 DEBUG oslo_concurrency.processutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:15:26 np0005629333 nova_compute[244014]: 2026-02-25 12:15:26.931 244018 DEBUG nova.network.neutron [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 25 07:15:26 np0005629333 nova_compute[244014]: 2026-02-25 12:15:26.932 244018 DEBUG nova.compute.manager [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 07:15:26 np0005629333 nova_compute[244014]: 2026-02-25 12:15:26.977 244018 DEBUG oslo_concurrency.processutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:15:27 np0005629333 nova_compute[244014]: 2026-02-25 12:15:27.048 244018 DEBUG nova.storage.rbd_utils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] resizing rbd image 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
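[editor's note] The disk seeding above is two steps: import the cached base image into the vms pool, then grow the copy to the flavor's 1 GiB root disk. Nova performs the resize through the librbd binding; the CLI equivalent of both steps, under the same credentials as the log, is:

    import subprocess

    base = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
    disk = "5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_disk"
    auth = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    subprocess.check_call(["rbd", "import", "--pool", "vms", base, disk,
                           "--image-format=2", *auth])
    subprocess.check_call(["rbd", "resize", "--pool", "vms", disk,
                           "--size", "1024", *auth])   # MiB, i.e. the 1073741824 bytes logged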
Feb 25 07:15:27 np0005629333 nova_compute[244014]: 2026-02-25 12:15:27.149 244018 DEBUG nova.objects.instance [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lazy-loading 'migration_context' on Instance uuid 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:15:27 np0005629333 nova_compute[244014]: 2026-02-25 12:15:27.178 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:15:27 np0005629333 nova_compute[244014]: 2026-02-25 12:15:27.178 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Ensure instance console log exists: /var/lib/nova/instances/5e6e56f8-55af-4ba8-b447-8c1ab9ab7892/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:15:27 np0005629333 nova_compute[244014]: 2026-02-25 12:15:27.179 244018 DEBUG oslo_concurrency.lockutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:15:27 np0005629333 nova_compute[244014]: 2026-02-25 12:15:27.179 244018 DEBUG oslo_concurrency.lockutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:15:27 np0005629333 nova_compute[244014]: 2026-02-25 12:15:27.180 244018 DEBUG oslo_concurrency.lockutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:15:27 np0005629333 nova_compute[244014]: 2026-02-25 12:15:27.181 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 07:15:27 np0005629333 nova_compute[244014]: 2026-02-25 12:15:27.185 244018 WARNING nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:15:27 np0005629333 nova_compute[244014]: 2026-02-25 12:15:27.190 244018 DEBUG nova.virt.libvirt.host [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:15:27 np0005629333 nova_compute[244014]: 2026-02-25 12:15:27.190 244018 DEBUG nova.virt.libvirt.host [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:15:27 np0005629333 nova_compute[244014]: 2026-02-25 12:15:27.193 244018 DEBUG nova.virt.libvirt.host [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 07:15:27 np0005629333 nova_compute[244014]: 2026-02-25 12:15:27.194 244018 DEBUG nova.virt.libvirt.host [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 07:15:27 np0005629333 nova_compute[244014]: 2026-02-25 12:15:27.194 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:15:27 np0005629333 nova_compute[244014]: 2026-02-25 12:15:27.194 244018 DEBUG nova.virt.hardware [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 07:15:27 np0005629333 nova_compute[244014]: 2026-02-25 12:15:27.195 244018 DEBUG nova.virt.hardware [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 07:15:27 np0005629333 nova_compute[244014]: 2026-02-25 12:15:27.195 244018 DEBUG nova.virt.hardware [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 07:15:27 np0005629333 nova_compute[244014]: 2026-02-25 12:15:27.196 244018 DEBUG nova.virt.hardware [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 07:15:27 np0005629333 nova_compute[244014]: 2026-02-25 12:15:27.196 244018 DEBUG nova.virt.hardware [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 07:15:27 np0005629333 nova_compute[244014]: 2026-02-25 12:15:27.196 244018 DEBUG nova.virt.hardware [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 07:15:27 np0005629333 nova_compute[244014]: 2026-02-25 12:15:27.196 244018 DEBUG nova.virt.hardware [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 07:15:27 np0005629333 nova_compute[244014]: 2026-02-25 12:15:27.197 244018 DEBUG nova.virt.hardware [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 07:15:27 np0005629333 nova_compute[244014]: 2026-02-25 12:15:27.197 244018 DEBUG nova.virt.hardware [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 07:15:27 np0005629333 nova_compute[244014]: 2026-02-25 12:15:27.197 244018 DEBUG nova.virt.hardware [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 07:15:27 np0005629333 nova_compute[244014]: 2026-02-25 12:15:27.198 244018 DEBUG nova.virt.hardware [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
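[editor's note] The topology search above is a plain factorization: with no flavor or image constraints (limits 0:0:0, maxima 65536), any sockets*cores*threads product equal to the vCPU count qualifies, and 1 vCPU admits only 1:1:1, matching the single VirtCPUTopology in the log. A compact sketch of that search, not Nova's exact code:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)], as logged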
Feb 25 07:15:27 np0005629333 nova_compute[244014]: 2026-02-25 12:15:27.201 244018 DEBUG oslo_concurrency.processutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:15:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 170 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 140 op/s
Feb 25 07:15:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:15:27 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3710387522' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:15:27 np0005629333 nova_compute[244014]: 2026-02-25 12:15:27.766 244018 DEBUG oslo_concurrency.processutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:15:27 np0005629333 nova_compute[244014]: 2026-02-25 12:15:27.786 244018 DEBUG nova.storage.rbd_utils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] rbd image 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:15:27 np0005629333 nova_compute[244014]: 2026-02-25 12:15:27.790 244018 DEBUG oslo_concurrency.processutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:15:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:15:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2460758435' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:15:28 np0005629333 nova_compute[244014]: 2026-02-25 12:15:28.299 244018 DEBUG oslo_concurrency.processutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
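[editor's note] `ceph mon dump --format=json`, run twice above, is how the driver learns the monitor address that ends up in the disk XML's <host name="..." port="6789"/> element further down. Parsing the reply, assuming the usual mon dump JSON layout:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    for mon in json.loads(out)["mons"]:
        # public_addr typically looks like "192.168.122.100:6789/0"
        print(mon["name"], mon["public_addr"].split("/")[0])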
Feb 25 07:15:28 np0005629333 nova_compute[244014]: 2026-02-25 12:15:28.302 244018 DEBUG nova.objects.instance [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lazy-loading 'pci_devices' on Instance uuid 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:15:28 np0005629333 nova_compute[244014]: 2026-02-25 12:15:28.415 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:15:28 np0005629333 nova_compute[244014]:  <uuid>5e6e56f8-55af-4ba8-b447-8c1ab9ab7892</uuid>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:  <name>instance-00000003</name>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-830310681</nova:name>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:15:27</nova:creationTime>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:15:28 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:        <nova:user uuid="0b1ac75e114a4f7493006c0ffae0d4cf">tempest-DeleteServersAdminTestJSON-848319223-project-member</nova:user>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:        <nova:project uuid="71ba161ac9034524bed0ed4918ac0d2d">tempest-DeleteServersAdminTestJSON-848319223</nova:project>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      <nova:ports/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      <entry name="serial">5e6e56f8-55af-4ba8-b447-8c1ab9ab7892</entry>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      <entry name="uuid">5e6e56f8-55af-4ba8-b447-8c1ab9ab7892</entry>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_disk">
Feb 25 07:15:28 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:15:28 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_disk.config">
Feb 25 07:15:28 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:15:28 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/5e6e56f8-55af-4ba8-b447-8c1ab9ab7892/console.log" append="off"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:15:28 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:15:28 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:15:28 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:15:28 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
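[editor's note] Once _get_guest_xml has produced the domain document above, launching the guest is essentially one libvirt call; systemd-machined registering qemu-3-instance-00000003 a few lines below is the visible result. A sketch, assuming the XML has been saved to a local file (the filename is illustrative):

    import libvirt

    with open("instance-00000003.xml") as f:   # the XML dumped above
        xml = f.read()

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(xml)     # persist the domain definition
        dom.create()                  # boot it
    finally:
        conn.close()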
Feb 25 07:15:28 np0005629333 nova_compute[244014]: 2026-02-25 12:15:28.523 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:15:28 np0005629333 nova_compute[244014]: 2026-02-25 12:15:28.524 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:15:28 np0005629333 nova_compute[244014]: 2026-02-25 12:15:28.525 244018 INFO nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Using config drive
Feb 25 07:15:28 np0005629333 nova_compute[244014]: 2026-02-25 12:15:28.556 244018 DEBUG nova.storage.rbd_utils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] rbd image 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.011 244018 INFO nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Creating config drive at /var/lib/nova/instances/5e6e56f8-55af-4ba8-b447-8c1ab9ab7892/disk.config
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.015 244018 DEBUG oslo_concurrency.processutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5e6e56f8-55af-4ba8-b447-8c1ab9ab7892/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpnl65_st3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.135 244018 DEBUG oslo_concurrency.processutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5e6e56f8-55af-4ba8-b447-8c1ab9ab7892/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpnl65_st3" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
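[editor's note] The config drive above is an ordinary ISO 9660 image: a staging directory packed with Joliet and Rock Ridge extensions under the volume label "config-2" that cloud-init looks for. The same invocation from Python, with the throwaway tmpdir from the log replaced by a placeholder path:

    import subprocess

    subprocess.check_call([
        "/usr/bin/mkisofs",
        "-o", "/var/lib/nova/instances/5e6e56f8-55af-4ba8-b447-8c1ab9ab7892/disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
        "-quiet", "-J", "-r", "-V", "config-2",
        "/tmp/config_drive_staging",   # stand-in for the temporary dir in the log
    ])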
Feb 25 07:15:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.159 244018 DEBUG nova.storage.rbd_utils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] rbd image 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.163 244018 DEBUG oslo_concurrency.processutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5e6e56f8-55af-4ba8-b447-8c1ab9ab7892/disk.config 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.312 244018 DEBUG oslo_concurrency.processutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5e6e56f8-55af-4ba8-b447-8c1ab9ab7892/disk.config 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.314 244018 INFO nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Deleting local config drive /var/lib/nova/instances/5e6e56f8-55af-4ba8-b447-8c1ab9ab7892/disk.config because it was imported into RBD.
Feb 25 07:15:29 np0005629333 systemd-machined[210048]: New machine qemu-3-instance-00000003.
Feb 25 07:15:29 np0005629333 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Feb 25 07:15:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 170 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 661 KiB/s wr, 111 op/s
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.734 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021729.7340386, 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.735 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] VM Resumed (Lifecycle Event)
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.738 244018 DEBUG nova.compute.manager [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.738 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.743 244018 INFO nova.virt.libvirt.driver [-] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Instance spawned successfully.
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.743 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.764 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.770 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.771 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.771 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.772 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.772 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.773 244018 DEBUG nova.virt.libvirt.driver [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
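[annotation] The six `Found default for ...` entries record the driver writing the bus and model choices it actually made back onto the instance, so the guest keeps the same virtual hardware across later rebuilds and migrations even if distro defaults change. A rough sketch of the pattern, with the values taken from the log lines above; the helper name and static dict are illustrative only:

    # Defaults the driver settled on for this guest, per the log above.
    CHOSEN_DEFAULTS = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }

    def register_undefined_details(image_props: dict) -> dict:
        """Pin a default only for properties the image left unset (hypothetical)."""
        return {prop: value for prop, value in CHOSEN_DEFAULTS.items()
                if prop not in image_props}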
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.777 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.819 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.820 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021729.737302, 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.820 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] VM Started (Lifecycle Event)#033[00m
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.848 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.853 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.861 244018 INFO nova.compute.manager [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Took 3.38 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.861 244018 DEBUG nova.compute.manager [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.872 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.929 244018 INFO nova.compute.manager [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Took 4.36 seconds to build instance.#033[00m
Feb 25 07:15:29 np0005629333 nova_compute[244014]: 2026-02-25 12:15:29.947 244018 DEBUG oslo_concurrency.lockutils [None req-c82667b7-f5e8-4a2a-b862-c19759083319 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "5e6e56f8-55af-4ba8-b447-8c1ab9ab7892" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
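[annotation] The lock bracket around the whole build (acquired before "Starting instance", released here after 4.477s) is oslo.concurrency's named-lock pattern keyed on the instance UUID; the terminate request a few seconds below queues on the same name. A minimal sketch, assuming `lockutils.lock` as the context manager:

    from oslo_concurrency import lockutils

    def do_build_and_run_instance(instance_uuid: str) -> None:
        # Any other operation taking a lock with the same name (e.g. the
        # do_terminate_instance seen later in this log) blocks until this
        # body returns and the lock is released.
        with lockutils.lock(instance_uuid):
            ...  # claim resources, create disks, define and start the guest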
Feb 25 07:15:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:15:30
Feb 25 07:15:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:15:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:15:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', '.mgr', 'backups', 'default.rgw.control', '.rgw.root', 'vms', 'images', 'default.rgw.meta']
Feb 25 07:15:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
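[annotation] `prepared 0/10 upmap changes` means the balancer evaluated up to 10 candidate pg-upmap adjustments across the listed pools and found none worth applying, i.e. the cluster is already within the configured 5% max-misplaced budget. A quick way to confirm the same state from the CLI (the JSON field names here are an assumption about the balancer module's status output):

    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "-f", "json"],
        capture_output=True, text=True, check=True).stdout)
    print(status.get("mode"), status.get("active"))  # expect: upmap True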
Feb 25 07:15:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 194 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.6 MiB/s wr, 165 op/s
Feb 25 07:15:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:15:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:15:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:15:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:15:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:15:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:15:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:15:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:15:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:15:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:15:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:15:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:15:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:15:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:15:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:15:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:15:32 np0005629333 nova_compute[244014]: 2026-02-25 12:15:32.867 244018 DEBUG oslo_concurrency.lockutils [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Acquiring lock "5e6e56f8-55af-4ba8-b447-8c1ab9ab7892" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:15:32 np0005629333 nova_compute[244014]: 2026-02-25 12:15:32.868 244018 DEBUG oslo_concurrency.lockutils [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "5e6e56f8-55af-4ba8-b447-8c1ab9ab7892" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:15:32 np0005629333 nova_compute[244014]: 2026-02-25 12:15:32.868 244018 DEBUG oslo_concurrency.lockutils [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Acquiring lock "5e6e56f8-55af-4ba8-b447-8c1ab9ab7892-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:15:32 np0005629333 nova_compute[244014]: 2026-02-25 12:15:32.869 244018 DEBUG oslo_concurrency.lockutils [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "5e6e56f8-55af-4ba8-b447-8c1ab9ab7892-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:15:32 np0005629333 nova_compute[244014]: 2026-02-25 12:15:32.869 244018 DEBUG oslo_concurrency.lockutils [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "5e6e56f8-55af-4ba8-b447-8c1ab9ab7892-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:15:32 np0005629333 nova_compute[244014]: 2026-02-25 12:15:32.871 244018 INFO nova.compute.manager [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Terminating instance#033[00m
Feb 25 07:15:32 np0005629333 nova_compute[244014]: 2026-02-25 12:15:32.872 244018 DEBUG oslo_concurrency.lockutils [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Acquiring lock "refresh_cache-5e6e56f8-55af-4ba8-b447-8c1ab9ab7892" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:15:32 np0005629333 nova_compute[244014]: 2026-02-25 12:15:32.873 244018 DEBUG oslo_concurrency.lockutils [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Acquired lock "refresh_cache-5e6e56f8-55af-4ba8-b447-8c1ab9ab7892" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:15:32 np0005629333 nova_compute[244014]: 2026-02-25 12:15:32.873 244018 DEBUG nova.network.neutron [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:15:33 np0005629333 nova_compute[244014]: 2026-02-25 12:15:33.147 244018 DEBUG nova.network.neutron [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:15:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 176 op/s
Feb 25 07:15:33 np0005629333 nova_compute[244014]: 2026-02-25 12:15:33.743 244018 DEBUG nova.network.neutron [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:15:33 np0005629333 nova_compute[244014]: 2026-02-25 12:15:33.757 244018 DEBUG oslo_concurrency.lockutils [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Releasing lock "refresh_cache-5e6e56f8-55af-4ba8-b447-8c1ab9ab7892" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:15:33 np0005629333 nova_compute[244014]: 2026-02-25 12:15:33.758 244018 DEBUG nova.compute.manager [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:15:33 np0005629333 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Feb 25 07:15:33 np0005629333 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 4.511s CPU time.
Feb 25 07:15:33 np0005629333 systemd-machined[210048]: Machine qemu-3-instance-00000003 terminated.
Feb 25 07:15:33 np0005629333 nova_compute[244014]: 2026-02-25 12:15:33.974 244018 INFO nova.virt.libvirt.driver [-] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Instance destroyed successfully.#033[00m
Feb 25 07:15:33 np0005629333 nova_compute[244014]: 2026-02-25 12:15:33.975 244018 DEBUG nova.objects.instance [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lazy-loading 'resources' on Instance uuid 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:15:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:15:34 np0005629333 nova_compute[244014]: 2026-02-25 12:15:34.386 244018 INFO nova.virt.libvirt.driver [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Deleting instance files /var/lib/nova/instances/5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_del#033[00m
Feb 25 07:15:34 np0005629333 nova_compute[244014]: 2026-02-25 12:15:34.387 244018 INFO nova.virt.libvirt.driver [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Deletion of /var/lib/nova/instances/5e6e56f8-55af-4ba8-b447-8c1ab9ab7892_del complete#033[00m
Feb 25 07:15:34 np0005629333 nova_compute[244014]: 2026-02-25 12:15:34.480 244018 INFO nova.compute.manager [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:15:34 np0005629333 nova_compute[244014]: 2026-02-25 12:15:34.481 244018 DEBUG oslo.service.loopingcall [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:15:34 np0005629333 nova_compute[244014]: 2026-02-25 12:15:34.481 244018 DEBUG nova.compute.manager [-] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:15:34 np0005629333 nova_compute[244014]: 2026-02-25 12:15:34.481 244018 DEBUG nova.network.neutron [-] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:15:34 np0005629333 nova_compute[244014]: 2026-02-25 12:15:34.610 244018 DEBUG nova.network.neutron [-] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:15:34 np0005629333 nova_compute[244014]: 2026-02-25 12:15:34.628 244018 DEBUG nova.network.neutron [-] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:15:34 np0005629333 nova_compute[244014]: 2026-02-25 12:15:34.645 244018 INFO nova.compute.manager [-] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Took 0.16 seconds to deallocate network for instance.#033[00m
Feb 25 07:15:34 np0005629333 nova_compute[244014]: 2026-02-25 12:15:34.684 244018 DEBUG oslo_concurrency.lockutils [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:15:34 np0005629333 nova_compute[244014]: 2026-02-25 12:15:34.685 244018 DEBUG oslo_concurrency.lockutils [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:15:34 np0005629333 nova_compute[244014]: 2026-02-25 12:15:34.752 244018 DEBUG oslo_concurrency.processutils [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:15:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:15:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1555577469' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:15:35 np0005629333 nova_compute[244014]: 2026-02-25 12:15:35.270 244018 DEBUG oslo_concurrency.processutils [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
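[annotation] nova-compute sizes its DISK_GB inventory for RBD-backed storage by shelling out to `ceph df --format=json`; the half-second round trip here also shows up as the mon's audit-channel dispatch two lines up. A sketch of issuing and parsing the same call, assuming the current `ceph df` JSON schema (top-level "stats" and "pools" keys):

    import json
    import subprocess

    def ceph_df(client_id: str = "openstack",
                conf: str = "/etc/ceph/ceph.conf") -> tuple[dict, list]:
        out = subprocess.run(
            ["ceph", "df", "--format=json", "--id", client_id, "--conf", conf],
            capture_output=True, text=True, check=True).stdout
        report = json.loads(out)
        # "stats" carries cluster-wide totals (total_bytes, total_avail_bytes);
        # "pools" is a per-pool list, each entry with its own nested stats.
        return report["stats"], report["pools"]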
Feb 25 07:15:35 np0005629333 nova_compute[244014]: 2026-02-25 12:15:35.277 244018 DEBUG nova.compute.provider_tree [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:15:35 np0005629333 nova_compute[244014]: 2026-02-25 12:15:35.295 244018 DEBUG nova.scheduler.client.report [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:15:35 np0005629333 nova_compute[244014]: 2026-02-25 12:15:35.320 244018 DEBUG oslo_concurrency.lockutils [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:15:35 np0005629333 nova_compute[244014]: 2026-02-25 12:15:35.351 244018 INFO nova.scheduler.client.report [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Deleted allocations for instance 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892#033[00m
Feb 25 07:15:35 np0005629333 nova_compute[244014]: 2026-02-25 12:15:35.436 244018 DEBUG oslo_concurrency.lockutils [None req-fcde5954-b820-4cb0-8d49-e1d3c62afed2 0b1ac75e114a4f7493006c0ffae0d4cf 71ba161ac9034524bed0ed4918ac0d2d - - default default] Lock "5e6e56f8-55af-4ba8-b447-8c1ab9ab7892" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:15:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 113 op/s
Feb 25 07:15:36 np0005629333 nova_compute[244014]: 2026-02-25 12:15:36.390 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021721.3885195, f3ac54ca-7761-47fb-a44f-5c64ff55e40c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:15:36 np0005629333 nova_compute[244014]: 2026-02-25 12:15:36.390 244018 INFO nova.compute.manager [-] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:15:36 np0005629333 nova_compute[244014]: 2026-02-25 12:15:36.412 244018 DEBUG nova.compute.manager [None req-b24e27f8-9520-4bb3-ab7a-adcc1f0285fd - - - - - -] [instance: f3ac54ca-7761-47fb-a44f-5c64ff55e40c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:15:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 153 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Feb 25 07:15:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:15:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 153 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 114 op/s
Feb 25 07:15:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 153 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 114 op/s
Feb 25 07:15:41 np0005629333 podman[251468]: 2026-02-25 12:15:41.749423536 +0000 UTC m=+0.088800903 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent)
Feb 25 07:15:41 np0005629333 podman[251469]: 2026-02-25 12:15:41.758196817 +0000 UTC m=+0.097574154 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0)
Feb 25 07:15:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:15:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:15:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:15:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:15:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.829490380927735e-06 of space, bias 1.0, pg target 0.0005488471142783205 quantized to 32 (current 32)
Feb 25 07:15:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:15:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:15:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:15:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:15:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:15:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024895834148089277 of space, bias 1.0, pg target 0.7468750244426783 quantized to 32 (current 32)
Feb 25 07:15:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:15:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.6104160454649397e-06 of space, bias 4.0, pg target 0.0019324992545579278 quantized to 16 (current 16)
Feb 25 07:15:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:15:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:15:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:15:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:15:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:15:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:15:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:15:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:15:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:15:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
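[annotation] Each pool's `pg target` above is its share of raw space times its bias times the cluster's PG budget. The logged values are consistent with a budget of 300 PGs, i.e. the default mon_target_pg_per_osd=100 on a 3-OSD cluster (the OSD count is an inference; it is not printed in this excerpt). A worked check in Python:

    # usage-ratio and bias values copied from the pg_autoscaler lines above
    PG_BUDGET = 100 * 3  # mon_target_pg_per_osd * assumed 3 OSDs

    for pool, usage, bias in [
        (".mgr",               7.185749983720779e-06,  1.0),
        ("images",             0.0024895834148089277,  1.0),
        ("cephfs.cephfs.meta", 1.6104160454649397e-06, 4.0),
    ]:
        print(pool, usage * bias * PG_BUDGET)
    # ~0.0021557, ~0.74688 and ~0.0019325: the logged "pg target" values
    # (up to float rounding), before quantization to a power of two and
    # clamping at each pool's current pg_num.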
Feb 25 07:15:42 np0005629333 nova_compute[244014]: 2026-02-25 12:15:42.149 244018 DEBUG oslo_concurrency.lockutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "ee9e86a4-8a34-43a9-baf1-f9b2e7f85534" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:15:42 np0005629333 nova_compute[244014]: 2026-02-25 12:15:42.149 244018 DEBUG oslo_concurrency.lockutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "ee9e86a4-8a34-43a9-baf1-f9b2e7f85534" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:15:42 np0005629333 nova_compute[244014]: 2026-02-25 12:15:42.190 244018 DEBUG nova.compute.manager [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:15:42 np0005629333 nova_compute[244014]: 2026-02-25 12:15:42.269 244018 DEBUG oslo_concurrency.lockutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:15:42 np0005629333 nova_compute[244014]: 2026-02-25 12:15:42.270 244018 DEBUG oslo_concurrency.lockutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:15:42 np0005629333 nova_compute[244014]: 2026-02-25 12:15:42.278 244018 DEBUG nova.virt.hardware [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:15:42 np0005629333 nova_compute[244014]: 2026-02-25 12:15:42.278 244018 INFO nova.compute.claims [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:15:42 np0005629333 nova_compute[244014]: 2026-02-25 12:15:42.380 244018 DEBUG oslo_concurrency.processutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:15:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:15:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2643754188' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:15:42 np0005629333 nova_compute[244014]: 2026-02-25 12:15:42.939 244018 DEBUG oslo_concurrency.processutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:15:42 np0005629333 nova_compute[244014]: 2026-02-25 12:15:42.945 244018 DEBUG nova.compute.provider_tree [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:15:42 np0005629333 nova_compute[244014]: 2026-02-25 12:15:42.960 244018 DEBUG nova.scheduler.client.report [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:15:42 np0005629333 nova_compute[244014]: 2026-02-25 12:15:42.983 244018 DEBUG oslo_concurrency.lockutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:15:42 np0005629333 nova_compute[244014]: 2026-02-25 12:15:42.984 244018 DEBUG nova.compute.manager [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.034 244018 DEBUG nova.compute.manager [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.034 244018 DEBUG nova.network.neutron [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.061 244018 INFO nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.089 244018 DEBUG nova.compute.manager [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.194 244018 DEBUG nova.compute.manager [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.196 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.196 244018 INFO nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Creating image(s)#033[00m
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.228 244018 DEBUG nova.storage.rbd_utils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] rbd image ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.260 244018 DEBUG nova.storage.rbd_utils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] rbd image ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.288 244018 DEBUG nova.storage.rbd_utils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] rbd image ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.291 244018 DEBUG oslo_concurrency.processutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.370 244018 DEBUG oslo_concurrency.processutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
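[annotation] The image probe above runs `qemu-img info` under `oslo_concurrency.prlimit`, which applies setrlimit caps before exec'ing the real command, so a corrupt or hostile image can cost at most 1 GiB of address space (`--as=1073741824`) and 30 CPU seconds (`--cpu=30`). A sketch of issuing the same guarded call through oslo.concurrency's own API:

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1 * 1024 ** 3,  # --as
                                        cpu_time=30)                  # --cpu
    stdout, _stderr = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6",
        "--force-share", "--output=json",
        prlimit=limits)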
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.371 244018 DEBUG oslo_concurrency.lockutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.372 244018 DEBUG oslo_concurrency.lockutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.372 244018 DEBUG oslo_concurrency.lockutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.392 244018 DEBUG nova.storage.rbd_utils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] rbd image ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.396 244018 DEBUG oslo_concurrency.processutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:15:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 153 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 814 KiB/s rd, 188 KiB/s wr, 60 op/s
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.564 244018 DEBUG nova.network.neutron [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.565 244018 DEBUG nova.compute.manager [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.739 244018 DEBUG oslo_concurrency.processutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.820 244018 DEBUG nova.storage.rbd_utils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] resizing rbd image ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
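[annotation] The resize target is just the flavor's root disk in bytes: the m1.nano flavor logged below has root_gb=1, and 1 GiB = 1073741824 bytes, while the imported cirros base image is only ~21 MB (size=21430272 in the image meta), so the RBD image is grown to flavor size right after import. The arithmetic with oslo.utils' binary units:

    from oslo_utils import units

    root_gb = 1                 # m1.nano root disk, per the flavor in the log
    print(root_gb * units.Gi)   # 1073741824 -> the resize target logged above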
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.929 244018 DEBUG nova.objects.instance [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lazy-loading 'migration_context' on Instance uuid ee9e86a4-8a34-43a9-baf1-f9b2e7f85534 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.973 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.973 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Ensure instance console log exists: /var/lib/nova/instances/ee9e86a4-8a34-43a9-baf1-f9b2e7f85534/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.974 244018 DEBUG oslo_concurrency.lockutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.974 244018 DEBUG oslo_concurrency.lockutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.975 244018 DEBUG oslo_concurrency.lockutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.976 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.981 244018 WARNING nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.986 244018 DEBUG nova.virt.libvirt.host [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.986 244018 DEBUG nova.virt.libvirt.host [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.989 244018 DEBUG nova.virt.libvirt.host [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.990 244018 DEBUG nova.virt.libvirt.host [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.990 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.991 244018 DEBUG nova.virt.hardware [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.992 244018 DEBUG nova.virt.hardware [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.993 244018 DEBUG nova.virt.hardware [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.994 244018 DEBUG nova.virt.hardware [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.994 244018 DEBUG nova.virt.hardware [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.995 244018 DEBUG nova.virt.hardware [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.995 244018 DEBUG nova.virt.hardware [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.996 244018 DEBUG nova.virt.hardware [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.996 244018 DEBUG nova.virt.hardware [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.997 244018 DEBUG nova.virt.hardware [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 07:15:43 np0005629333 nova_compute[244014]: 2026-02-25 12:15:43.997 244018 DEBUG nova.virt.hardware [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 07:15:44 np0005629333 nova_compute[244014]: 2026-02-25 12:15:44.002 244018 DEBUG oslo_concurrency.processutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:15:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:15:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:15:44 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/61062265' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:15:44 np0005629333 nova_compute[244014]: 2026-02-25 12:15:44.533 244018 DEBUG oslo_concurrency.processutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:15:44 np0005629333 nova_compute[244014]: 2026-02-25 12:15:44.553 244018 DEBUG nova.storage.rbd_utils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] rbd image ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:15:44 np0005629333 nova_compute[244014]: 2026-02-25 12:15:44.556 244018 DEBUG oslo_concurrency.processutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:15:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:15:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3571584857' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:15:45 np0005629333 nova_compute[244014]: 2026-02-25 12:15:45.035 244018 DEBUG oslo_concurrency.processutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:15:45 np0005629333 nova_compute[244014]: 2026-02-25 12:15:45.038 244018 DEBUG nova.objects.instance [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lazy-loading 'pci_devices' on Instance uuid ee9e86a4-8a34-43a9-baf1-f9b2e7f85534 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:15:45 np0005629333 nova_compute[244014]: 2026-02-25 12:15:45.053 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:15:45 np0005629333 nova_compute[244014]:  <uuid>ee9e86a4-8a34-43a9-baf1-f9b2e7f85534</uuid>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:  <name>instance-00000004</name>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      <nova:name>tempest-LiveMigrationNegativeTest-server-1560092483</nova:name>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:15:43</nova:creationTime>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:15:45 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:        <nova:user uuid="0da29cefe0e94220a8d9cf895454b55c">tempest-LiveMigrationNegativeTest-1055227276-project-member</nova:user>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:        <nova:project uuid="6bddec791c4c46b7bef787d3d6634b12">tempest-LiveMigrationNegativeTest-1055227276</nova:project>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      <nova:ports/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      <entry name="serial">ee9e86a4-8a34-43a9-baf1-f9b2e7f85534</entry>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      <entry name="uuid">ee9e86a4-8a34-43a9-baf1-f9b2e7f85534</entry>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_disk">
Feb 25 07:15:45 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:15:45 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_disk.config">
Feb 25 07:15:45 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:15:45 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/ee9e86a4-8a34-43a9-baf1-f9b2e7f85534/console.log" append="off"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:15:45 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:15:45 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:15:45 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:15:45 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 07:15:45 np0005629333 nova_compute[244014]: 2026-02-25 12:15:45.105 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:15:45 np0005629333 nova_compute[244014]: 2026-02-25 12:15:45.105 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:15:45 np0005629333 nova_compute[244014]: 2026-02-25 12:15:45.106 244018 INFO nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Using config drive
Feb 25 07:15:45 np0005629333 nova_compute[244014]: 2026-02-25 12:15:45.134 244018 DEBUG nova.storage.rbd_utils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] rbd image ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:15:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 153 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Feb 25 07:15:45 np0005629333 nova_compute[244014]: 2026-02-25 12:15:45.515 244018 INFO nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Creating config drive at /var/lib/nova/instances/ee9e86a4-8a34-43a9-baf1-f9b2e7f85534/disk.config
Feb 25 07:15:45 np0005629333 nova_compute[244014]: 2026-02-25 12:15:45.521 244018 DEBUG oslo_concurrency.processutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ee9e86a4-8a34-43a9-baf1-f9b2e7f85534/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpvvv9m4v2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:15:45 np0005629333 nova_compute[244014]: 2026-02-25 12:15:45.641 244018 DEBUG oslo_concurrency.processutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ee9e86a4-8a34-43a9-baf1-f9b2e7f85534/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpvvv9m4v2" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:15:45 np0005629333 nova_compute[244014]: 2026-02-25 12:15:45.672 244018 DEBUG nova.storage.rbd_utils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] rbd image ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:15:45 np0005629333 nova_compute[244014]: 2026-02-25 12:15:45.675 244018 DEBUG oslo_concurrency.processutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ee9e86a4-8a34-43a9-baf1-f9b2e7f85534/disk.config ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:15:45 np0005629333 nova_compute[244014]: 2026-02-25 12:15:45.833 244018 DEBUG oslo_concurrency.processutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ee9e86a4-8a34-43a9-baf1-f9b2e7f85534/disk.config ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:15:45 np0005629333 nova_compute[244014]: 2026-02-25 12:15:45.834 244018 INFO nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Deleting local config drive /var/lib/nova/instances/ee9e86a4-8a34-43a9-baf1-f9b2e7f85534/disk.config because it was imported into RBD.
Feb 25 07:15:45 np0005629333 systemd-machined[210048]: New machine qemu-4-instance-00000004.
Feb 25 07:15:45 np0005629333 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Feb 25 07:15:46 np0005629333 nova_compute[244014]: 2026-02-25 12:15:46.522 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021746.522241, ee9e86a4-8a34-43a9-baf1-f9b2e7f85534 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:15:46 np0005629333 nova_compute[244014]: 2026-02-25 12:15:46.524 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] VM Resumed (Lifecycle Event)
Feb 25 07:15:46 np0005629333 nova_compute[244014]: 2026-02-25 12:15:46.528 244018 DEBUG nova.compute.manager [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 07:15:46 np0005629333 nova_compute[244014]: 2026-02-25 12:15:46.529 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 07:15:46 np0005629333 nova_compute[244014]: 2026-02-25 12:15:46.533 244018 INFO nova.virt.libvirt.driver [-] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Instance spawned successfully.
Feb 25 07:15:46 np0005629333 nova_compute[244014]: 2026-02-25 12:15:46.533 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 07:15:46 np0005629333 nova_compute[244014]: 2026-02-25 12:15:46.559 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:15:46 np0005629333 nova_compute[244014]: 2026-02-25 12:15:46.565 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:15:46 np0005629333 nova_compute[244014]: 2026-02-25 12:15:46.571 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:15:46 np0005629333 nova_compute[244014]: 2026-02-25 12:15:46.572 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:15:46 np0005629333 nova_compute[244014]: 2026-02-25 12:15:46.572 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:15:46 np0005629333 nova_compute[244014]: 2026-02-25 12:15:46.573 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:15:46 np0005629333 nova_compute[244014]: 2026-02-25 12:15:46.574 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:15:46 np0005629333 nova_compute[244014]: 2026-02-25 12:15:46.574 244018 DEBUG nova.virt.libvirt.driver [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:15:46 np0005629333 nova_compute[244014]: 2026-02-25 12:15:46.603 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:15:46 np0005629333 nova_compute[244014]: 2026-02-25 12:15:46.603 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021746.5278625, ee9e86a4-8a34-43a9-baf1-f9b2e7f85534 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:15:46 np0005629333 nova_compute[244014]: 2026-02-25 12:15:46.604 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] VM Started (Lifecycle Event)
Feb 25 07:15:46 np0005629333 nova_compute[244014]: 2026-02-25 12:15:46.637 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:15:46 np0005629333 nova_compute[244014]: 2026-02-25 12:15:46.641 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:15:46 np0005629333 nova_compute[244014]: 2026-02-25 12:15:46.648 244018 INFO nova.compute.manager [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Took 3.45 seconds to spawn the instance on the hypervisor.
Feb 25 07:15:46 np0005629333 nova_compute[244014]: 2026-02-25 12:15:46.648 244018 DEBUG nova.compute.manager [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:15:46 np0005629333 nova_compute[244014]: 2026-02-25 12:15:46.663 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:15:46 np0005629333 nova_compute[244014]: 2026-02-25 12:15:46.711 244018 INFO nova.compute.manager [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Took 4.48 seconds to build instance.
Feb 25 07:15:46 np0005629333 nova_compute[244014]: 2026-02-25 12:15:46.732 244018 DEBUG oslo_concurrency.lockutils [None req-7cb72641-f13d-4139-b872-f13807ce17c8 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "ee9e86a4-8a34-43a9-baf1-f9b2e7f85534" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:15:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:15:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:15:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:15:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:15:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:15:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:15:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:15:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:15:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:15:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:15:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:15:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:15:47 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:15:47 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:15:47 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:15:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:15:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2538534503' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:15:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:15:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2538534503' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 07:15:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 167 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Feb 25 07:15:47 np0005629333 podman[252027]: 2026-02-25 12:15:47.490989902 +0000 UTC m=+0.060103640 container create 8a5c21dc5b2607124b78ad194efe2a257f1b3efaa93486cf37a87f3de42d866a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_gould, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Feb 25 07:15:47 np0005629333 systemd[1]: Started libpod-conmon-8a5c21dc5b2607124b78ad194efe2a257f1b3efaa93486cf37a87f3de42d866a.scope.
Feb 25 07:15:47 np0005629333 podman[252027]: 2026-02-25 12:15:47.458208186 +0000 UTC m=+0.027321974 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:15:47 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:15:47 np0005629333 podman[252027]: 2026-02-25 12:15:47.599818521 +0000 UTC m=+0.168932279 container init 8a5c21dc5b2607124b78ad194efe2a257f1b3efaa93486cf37a87f3de42d866a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 07:15:47 np0005629333 podman[252027]: 2026-02-25 12:15:47.609512469 +0000 UTC m=+0.178626207 container start 8a5c21dc5b2607124b78ad194efe2a257f1b3efaa93486cf37a87f3de42d866a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_gould, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:15:47 np0005629333 systemd[1]: libpod-8a5c21dc5b2607124b78ad194efe2a257f1b3efaa93486cf37a87f3de42d866a.scope: Deactivated successfully.
Feb 25 07:15:47 np0005629333 vigilant_gould[252043]: 167 167
Feb 25 07:15:47 np0005629333 conmon[252043]: conmon 8a5c21dc5b2607124b78 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8a5c21dc5b2607124b78ad194efe2a257f1b3efaa93486cf37a87f3de42d866a.scope/container/memory.events
Feb 25 07:15:47 np0005629333 podman[252027]: 2026-02-25 12:15:47.617743074 +0000 UTC m=+0.186856802 container attach 8a5c21dc5b2607124b78ad194efe2a257f1b3efaa93486cf37a87f3de42d866a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_gould, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Feb 25 07:15:47 np0005629333 podman[252027]: 2026-02-25 12:15:47.618391073 +0000 UTC m=+0.187504801 container died 8a5c21dc5b2607124b78ad194efe2a257f1b3efaa93486cf37a87f3de42d866a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_gould, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Feb 25 07:15:47 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6f3d102529d860b6729deb8f406b21449ba7be641334dfbdc2801a08827bc973-merged.mount: Deactivated successfully.
Feb 25 07:15:47 np0005629333 podman[252027]: 2026-02-25 12:15:47.706337021 +0000 UTC m=+0.275450719 container remove 8a5c21dc5b2607124b78ad194efe2a257f1b3efaa93486cf37a87f3de42d866a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:15:47 np0005629333 systemd[1]: libpod-conmon-8a5c21dc5b2607124b78ad194efe2a257f1b3efaa93486cf37a87f3de42d866a.scope: Deactivated successfully.
Feb 25 07:15:47 np0005629333 podman[252065]: 2026-02-25 12:15:47.895328946 +0000 UTC m=+0.063989406 container create f5b2f61786eea52fee281c22e38fe881cfffea9fe3e593b30bb7a6c4c17b7ea4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_chatelet, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 07:15:47 np0005629333 systemd[1]: Started libpod-conmon-f5b2f61786eea52fee281c22e38fe881cfffea9fe3e593b30bb7a6c4c17b7ea4.scope.
Feb 25 07:15:47 np0005629333 podman[252065]: 2026-02-25 12:15:47.86758943 +0000 UTC m=+0.036249950 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:15:47 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:15:47 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a6b2edc8005091d8bf8af300d858e3dc7549656533831b5c312cb48b8a1f658/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:15:47 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a6b2edc8005091d8bf8af300d858e3dc7549656533831b5c312cb48b8a1f658/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:15:47 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a6b2edc8005091d8bf8af300d858e3dc7549656533831b5c312cb48b8a1f658/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:15:47 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a6b2edc8005091d8bf8af300d858e3dc7549656533831b5c312cb48b8a1f658/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:15:47 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a6b2edc8005091d8bf8af300d858e3dc7549656533831b5c312cb48b8a1f658/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:15:48 np0005629333 podman[252065]: 2026-02-25 12:15:48.010088991 +0000 UTC m=+0.178749441 container init f5b2f61786eea52fee281c22e38fe881cfffea9fe3e593b30bb7a6c4c17b7ea4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:15:48 np0005629333 podman[252065]: 2026-02-25 12:15:48.018923424 +0000 UTC m=+0.187583874 container start f5b2f61786eea52fee281c22e38fe881cfffea9fe3e593b30bb7a6c4c17b7ea4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:15:48 np0005629333 podman[252065]: 2026-02-25 12:15:48.026739537 +0000 UTC m=+0.195399997 container attach f5b2f61786eea52fee281c22e38fe881cfffea9fe3e593b30bb7a6c4c17b7ea4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:15:48 np0005629333 fervent_chatelet[252082]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:15:48 np0005629333 fervent_chatelet[252082]: --> All data devices are unavailable
Feb 25 07:15:48 np0005629333 systemd[1]: libpod-f5b2f61786eea52fee281c22e38fe881cfffea9fe3e593b30bb7a6c4c17b7ea4.scope: Deactivated successfully.
Feb 25 07:15:48 np0005629333 podman[252065]: 2026-02-25 12:15:48.514014277 +0000 UTC m=+0.682674717 container died f5b2f61786eea52fee281c22e38fe881cfffea9fe3e593b30bb7a6c4c17b7ea4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_chatelet, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:15:48 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7a6b2edc8005091d8bf8af300d858e3dc7549656533831b5c312cb48b8a1f658-merged.mount: Deactivated successfully.
Feb 25 07:15:48 np0005629333 podman[252065]: 2026-02-25 12:15:48.590996479 +0000 UTC m=+0.759656919 container remove f5b2f61786eea52fee281c22e38fe881cfffea9fe3e593b30bb7a6c4c17b7ea4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 07:15:48 np0005629333 systemd[1]: libpod-conmon-f5b2f61786eea52fee281c22e38fe881cfffea9fe3e593b30bb7a6c4c17b7ea4.scope: Deactivated successfully.
Feb 25 07:15:48 np0005629333 nova_compute[244014]: 2026-02-25 12:15:48.972 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021733.9709713, 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:15:48 np0005629333 nova_compute[244014]: 2026-02-25 12:15:48.973 244018 INFO nova.compute.manager [-] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] VM Stopped (Lifecycle Event)
Feb 25 07:15:48 np0005629333 nova_compute[244014]: 2026-02-25 12:15:48.990 244018 DEBUG nova.compute.manager [None req-bad64f4c-a272-42c2-bc47-3478abafbde5 - - - - - -] [instance: 5e6e56f8-55af-4ba8-b447-8c1ab9ab7892] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:15:49 np0005629333 podman[252175]: 2026-02-25 12:15:49.045425763 +0000 UTC m=+0.067553901 container create 59e518c67b8a21d78789c0e926124bfc843e247ac7dcd4063a43c501c4a9e24f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 07:15:49 np0005629333 systemd[1]: Started libpod-conmon-59e518c67b8a21d78789c0e926124bfc843e247ac7dcd4063a43c501c4a9e24f.scope.
Feb 25 07:15:49 np0005629333 podman[252175]: 2026-02-25 12:15:49.016723319 +0000 UTC m=+0.038851547 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:15:49 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:15:49 np0005629333 podman[252175]: 2026-02-25 12:15:49.131188756 +0000 UTC m=+0.153316914 container init 59e518c67b8a21d78789c0e926124bfc843e247ac7dcd4063a43c501c4a9e24f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:15:49 np0005629333 podman[252175]: 2026-02-25 12:15:49.138190064 +0000 UTC m=+0.160318222 container start 59e518c67b8a21d78789c0e926124bfc843e247ac7dcd4063a43c501c4a9e24f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wilbur, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 07:15:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:15:49 np0005629333 podman[252175]: 2026-02-25 12:15:49.144204193 +0000 UTC m=+0.166332341 container attach 59e518c67b8a21d78789c0e926124bfc843e247ac7dcd4063a43c501c4a9e24f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wilbur, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 07:15:49 np0005629333 systemd[1]: libpod-59e518c67b8a21d78789c0e926124bfc843e247ac7dcd4063a43c501c4a9e24f.scope: Deactivated successfully.
Feb 25 07:15:49 np0005629333 cranky_wilbur[252192]: 167 167
Feb 25 07:15:49 np0005629333 conmon[252192]: conmon 59e518c67b8a21d78789 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-59e518c67b8a21d78789c0e926124bfc843e247ac7dcd4063a43c501c4a9e24f.scope/container/memory.events
Feb 25 07:15:49 np0005629333 podman[252175]: 2026-02-25 12:15:49.145823461 +0000 UTC m=+0.167951609 container died 59e518c67b8a21d78789c0e926124bfc843e247ac7dcd4063a43c501c4a9e24f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wilbur, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:15:49 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6feb36f09c3bf0d95e99a2ce42610ed1804e992fcbedbb6772b36c2057391abb-merged.mount: Deactivated successfully.
Feb 25 07:15:49 np0005629333 podman[252175]: 2026-02-25 12:15:49.206311762 +0000 UTC m=+0.228439920 container remove 59e518c67b8a21d78789c0e926124bfc843e247ac7dcd4063a43c501c4a9e24f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_wilbur, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:15:49 np0005629333 systemd[1]: libpod-conmon-59e518c67b8a21d78789c0e926124bfc843e247ac7dcd4063a43c501c4a9e24f.scope: Deactivated successfully.
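The create, init, start, attach, died, remove sequence above, all within roughly 200 ms, is the signature of a short-lived cephadm helper container: podman runs the Ceph image once, captures its output, and removes it. Its only output, "167 167", matches the uid/gid of the ceph user (167) in CentOS-based Ceph images, which cephadm probes before writing daemon directories. A minimal sketch for watching this lifecycle from the host, assuming podman's documented "podman events --format json" stream and its Status/ID/Name fields:

    import json
    import subprocess

    # Follow podman's event stream as line-delimited JSON and print
    # container lifecycle transitions (create/init/start/attach/died/remove).
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json", "--filter", "type=container"],
        stdout=subprocess.PIPE,
        text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        # "Status", "ID" and "Name" are assumed from podman's JSON event format.
        print(ev.get("Status"), ev.get("ID", "")[:12], ev.get("Name"))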
Feb 25 07:15:49 np0005629333 podman[252215]: 2026-02-25 12:15:49.35041532 +0000 UTC m=+0.051219135 container create 5c98ce8414d97b1992fffa5a44e08d05c27deab946fbafe3a66441cbf3342a25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mccarthy, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 07:15:49 np0005629333 systemd[1]: Started libpod-conmon-5c98ce8414d97b1992fffa5a44e08d05c27deab946fbafe3a66441cbf3342a25.scope.
Feb 25 07:15:49 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:15:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e821dfb7168c64e2299e12d8e2067530696958ee443e98052c174d96fac3fc80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:15:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e821dfb7168c64e2299e12d8e2067530696958ee443e98052c174d96fac3fc80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:15:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e821dfb7168c64e2299e12d8e2067530696958ee443e98052c174d96fac3fc80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:15:49 np0005629333 podman[252215]: 2026-02-25 12:15:49.325424326 +0000 UTC m=+0.026228171 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:15:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e821dfb7168c64e2299e12d8e2067530696958ee443e98052c174d96fac3fc80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:15:49 np0005629333 podman[252215]: 2026-02-25 12:15:49.439850552 +0000 UTC m=+0.140654387 container init 5c98ce8414d97b1992fffa5a44e08d05c27deab946fbafe3a66441cbf3342a25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 07:15:49 np0005629333 podman[252215]: 2026-02-25 12:15:49.447973094 +0000 UTC m=+0.148776939 container start 5c98ce8414d97b1992fffa5a44e08d05c27deab946fbafe3a66441cbf3342a25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 07:15:49 np0005629333 podman[252215]: 2026-02-25 12:15:49.455351563 +0000 UTC m=+0.156155428 container attach 5c98ce8414d97b1992fffa5a44e08d05c27deab946fbafe3a66441cbf3342a25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:15:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 148 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]: {
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:    "0": [
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:        {
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "devices": [
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "/dev/loop3"
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            ],
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "lv_name": "ceph_lv0",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "lv_size": "21470642176",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "name": "ceph_lv0",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "tags": {
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.cluster_name": "ceph",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.crush_device_class": "",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.encrypted": "0",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.objectstore": "bluestore",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.osd_id": "0",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.type": "block",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.vdo": "0",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.with_tpm": "0"
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            },
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "type": "block",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "vg_name": "ceph_vg0"
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:        }
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:    ],
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:    "1": [
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:        {
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "devices": [
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "/dev/loop4"
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            ],
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "lv_name": "ceph_lv1",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "lv_size": "21470642176",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "name": "ceph_lv1",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "tags": {
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.cluster_name": "ceph",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.crush_device_class": "",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.encrypted": "0",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.objectstore": "bluestore",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.osd_id": "1",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.type": "block",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.vdo": "0",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.with_tpm": "0"
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            },
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "type": "block",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "vg_name": "ceph_vg1"
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:        }
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:    ],
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:    "2": [
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:        {
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "devices": [
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "/dev/loop5"
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            ],
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "lv_name": "ceph_lv2",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "lv_size": "21470642176",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "name": "ceph_lv2",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "tags": {
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.cluster_name": "ceph",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.crush_device_class": "",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.encrypted": "0",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.objectstore": "bluestore",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.osd_id": "2",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.type": "block",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.vdo": "0",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:                "ceph.with_tpm": "0"
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            },
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "type": "block",
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:            "vg_name": "ceph_vg2"
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:        }
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]:    ]
Feb 25 07:15:49 np0005629333 cranky_mccarthy[252232]: }
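The JSON printed by cranky_mccarthy has the shape of "ceph-volume lvm list --format json": a map from OSD id to the logical volumes backing that OSD, with the ceph.* LV tags given both raw (lv_tags) and parsed (tags). Each of the three OSDs here sits on a 21470642176-byte (about 20 GiB) LV in its own single-PV volume group on a loop device. A minimal sketch that reduces such a capture to one line per OSD (the input file name is hypothetical):

    import json

    # Summarize a saved "ceph-volume lvm list --format json" capture:
    # one line per OSD with backing device, LV path and OSD fsid.
    with open("ceph-volume-lvm-list.json") as f:  # hypothetical capture file
        inventory = json.load(f)

    for osd_id, lvs in sorted(inventory.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print("osd.%s devices=%s lv=%s osd_fsid=%s" % (
                osd_id, ",".join(lv["devices"]), lv["lv_path"],
                tags["ceph.osd_fsid"]))

For the listing above this would print /dev/loop3, /dev/loop4 and /dev/loop5 against osd.0, osd.1 and osd.2 respectively.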
Feb 25 07:15:49 np0005629333 systemd[1]: libpod-5c98ce8414d97b1992fffa5a44e08d05c27deab946fbafe3a66441cbf3342a25.scope: Deactivated successfully.
Feb 25 07:15:49 np0005629333 podman[252215]: 2026-02-25 12:15:49.738018476 +0000 UTC m=+0.438822301 container died 5c98ce8414d97b1992fffa5a44e08d05c27deab946fbafe3a66441cbf3342a25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:15:49 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e821dfb7168c64e2299e12d8e2067530696958ee443e98052c174d96fac3fc80-merged.mount: Deactivated successfully.
Feb 25 07:15:49 np0005629333 podman[252215]: 2026-02-25 12:15:49.819643065 +0000 UTC m=+0.520446910 container remove 5c98ce8414d97b1992fffa5a44e08d05c27deab946fbafe3a66441cbf3342a25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mccarthy, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Feb 25 07:15:49 np0005629333 systemd[1]: libpod-conmon-5c98ce8414d97b1992fffa5a44e08d05c27deab946fbafe3a66441cbf3342a25.scope: Deactivated successfully.
Feb 25 07:15:50 np0005629333 podman[252317]: 2026-02-25 12:15:50.269454022 +0000 UTC m=+0.056193143 container create 020b6e605622bc98f2a1f0430f8134a3906be32ec6bdb54f0ea46966d2a323d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 07:15:50 np0005629333 systemd[1]: Started libpod-conmon-020b6e605622bc98f2a1f0430f8134a3906be32ec6bdb54f0ea46966d2a323d7.scope.
Feb 25 07:15:50 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:15:50 np0005629333 podman[252317]: 2026-02-25 12:15:50.242869241 +0000 UTC m=+0.029608342 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:15:50 np0005629333 podman[252317]: 2026-02-25 12:15:50.346348851 +0000 UTC m=+0.133087962 container init 020b6e605622bc98f2a1f0430f8134a3906be32ec6bdb54f0ea46966d2a323d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 07:15:50 np0005629333 podman[252317]: 2026-02-25 12:15:50.353236726 +0000 UTC m=+0.139975807 container start 020b6e605622bc98f2a1f0430f8134a3906be32ec6bdb54f0ea46966d2a323d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_thompson, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 07:15:50 np0005629333 vibrant_thompson[252334]: 167 167
Feb 25 07:15:50 np0005629333 systemd[1]: libpod-020b6e605622bc98f2a1f0430f8134a3906be32ec6bdb54f0ea46966d2a323d7.scope: Deactivated successfully.
Feb 25 07:15:50 np0005629333 podman[252317]: 2026-02-25 12:15:50.360603175 +0000 UTC m=+0.147342266 container attach 020b6e605622bc98f2a1f0430f8134a3906be32ec6bdb54f0ea46966d2a323d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_thompson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:15:50 np0005629333 podman[252317]: 2026-02-25 12:15:50.361011487 +0000 UTC m=+0.147750578 container died 020b6e605622bc98f2a1f0430f8134a3906be32ec6bdb54f0ea46966d2a323d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_thompson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:15:50 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e9ad1237404f82995611ba3187a7be09b8907a51e589fe743f5f98157763ff85-merged.mount: Deactivated successfully.
Feb 25 07:15:50 np0005629333 podman[252317]: 2026-02-25 12:15:50.435293468 +0000 UTC m=+0.222032589 container remove 020b6e605622bc98f2a1f0430f8134a3906be32ec6bdb54f0ea46966d2a323d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_thompson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:15:50 np0005629333 systemd[1]: libpod-conmon-020b6e605622bc98f2a1f0430f8134a3906be32ec6bdb54f0ea46966d2a323d7.scope: Deactivated successfully.
Feb 25 07:15:50 np0005629333 podman[252360]: 2026-02-25 12:15:50.597539626 +0000 UTC m=+0.052018599 container create 7da2b14e512026cb681f7e23196636ed644713fd9e27724110508a9fcb71985a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 07:15:50 np0005629333 systemd[1]: Started libpod-conmon-7da2b14e512026cb681f7e23196636ed644713fd9e27724110508a9fcb71985a.scope.
Feb 25 07:15:50 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:15:50 np0005629333 podman[252360]: 2026-02-25 12:15:50.570758739 +0000 UTC m=+0.025237802 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:15:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68994932d3cfcec4908801fe44f9a36a9d2e68c7f89f777c73c3d8434cd57c72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:15:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68994932d3cfcec4908801fe44f9a36a9d2e68c7f89f777c73c3d8434cd57c72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:15:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68994932d3cfcec4908801fe44f9a36a9d2e68c7f89f777c73c3d8434cd57c72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:15:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68994932d3cfcec4908801fe44f9a36a9d2e68c7f89f777c73c3d8434cd57c72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:15:50 np0005629333 podman[252360]: 2026-02-25 12:15:50.690518764 +0000 UTC m=+0.144997767 container init 7da2b14e512026cb681f7e23196636ed644713fd9e27724110508a9fcb71985a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_engelbart, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:15:50 np0005629333 podman[252360]: 2026-02-25 12:15:50.702787179 +0000 UTC m=+0.157266152 container start 7da2b14e512026cb681f7e23196636ed644713fd9e27724110508a9fcb71985a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 07:15:50 np0005629333 podman[252360]: 2026-02-25 12:15:50.711724265 +0000 UTC m=+0.166203338 container attach 7da2b14e512026cb681f7e23196636ed644713fd9e27724110508a9fcb71985a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_engelbart, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:15:51 np0005629333 nova_compute[244014]: 2026-02-25 12:15:51.055 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "6de08989-13cc-415b-adc9-04b338e13d0f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:15:51 np0005629333 nova_compute[244014]: 2026-02-25 12:15:51.057 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:15:51 np0005629333 nova_compute[244014]: 2026-02-25 12:15:51.073 244018 DEBUG nova.compute.manager [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:15:51 np0005629333 nova_compute[244014]: 2026-02-25 12:15:51.148 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:15:51 np0005629333 nova_compute[244014]: 2026-02-25 12:15:51.149 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:15:51 np0005629333 nova_compute[244014]: 2026-02-25 12:15:51.155 244018 DEBUG nova.virt.hardware [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:15:51 np0005629333 nova_compute[244014]: 2026-02-25 12:15:51.155 244018 INFO nova.compute.claims [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:15:51 np0005629333 nova_compute[244014]: 2026-02-25 12:15:51.283 244018 DEBUG oslo_concurrency.processutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
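nova-compute's RBD image backend sizes its disk inventory by shelling out to "ceph df --format=json" as the openstack cephx user, which is exactly what the subprocess call above (and the mon audit entries further down) record. A sketch of the same query, with the stats keys assumed from the ceph df JSON schema:

    import json
    import subprocess

    # Query cluster-wide utilization the same way nova_compute does above.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        text=True,
    )
    stats = json.loads(out)["stats"]  # keys assumed from the ceph df schema
    gib = 1024 ** 3
    print("total %.1f GiB / avail %.1f GiB" % (
        stats["total_bytes"] / gib, stats["total_avail_bytes"] / gib))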
Feb 25 07:15:51 np0005629333 lvm[252457]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:15:51 np0005629333 lvm[252455]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:15:51 np0005629333 lvm[252455]: VG ceph_vg0 finished
Feb 25 07:15:51 np0005629333 lvm[252457]: VG ceph_vg1 finished
Feb 25 07:15:51 np0005629333 lvm[252459]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:15:51 np0005629333 lvm[252459]: VG ceph_vg2 finished
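The three lvm message pairs are udev-triggered pvscan autoactivation: each loop device is the sole PV of its volume group, so the VG is reported complete and activated as soon as its PV appears. The same state can be inspected afterwards with lvm's JSON reporting; a sketch (field selection chosen to match this log's ceph_vg*/ceph_lv* layout):

    import json
    import subprocess

    # List the ceph_* LVs via lvm's JSON reporting (lvs --reportformat json).
    out = subprocess.check_output(
        ["lvs", "--reportformat", "json",
         "-o", "vg_name,lv_name,lv_size,devices"],
        text=True,
    )
    for lv in json.loads(out)["report"][0]["lv"]:
        if lv["vg_name"].startswith("ceph_vg"):
            print(lv["vg_name"], lv["lv_name"], lv["lv_size"], lv["devices"])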
Feb 25 07:15:51 np0005629333 confident_engelbart[252377]: {}
Feb 25 07:15:51 np0005629333 systemd[1]: libpod-7da2b14e512026cb681f7e23196636ed644713fd9e27724110508a9fcb71985a.scope: Deactivated successfully.
Feb 25 07:15:51 np0005629333 podman[252360]: 2026-02-25 12:15:51.461375256 +0000 UTC m=+0.915854229 container died 7da2b14e512026cb681f7e23196636ed644713fd9e27724110508a9fcb71985a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 07:15:51 np0005629333 systemd[1]: libpod-7da2b14e512026cb681f7e23196636ed644713fd9e27724110508a9fcb71985a.scope: Consumed 1.114s CPU time.
Feb 25 07:15:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 800 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Feb 25 07:15:51 np0005629333 systemd[1]: var-lib-containers-storage-overlay-68994932d3cfcec4908801fe44f9a36a9d2e68c7f89f777c73c3d8434cd57c72-merged.mount: Deactivated successfully.
Feb 25 07:15:51 np0005629333 podman[252360]: 2026-02-25 12:15:51.531683328 +0000 UTC m=+0.986162301 container remove 7da2b14e512026cb681f7e23196636ed644713fd9e27724110508a9fcb71985a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:15:51 np0005629333 systemd[1]: libpod-conmon-7da2b14e512026cb681f7e23196636ed644713fd9e27724110508a9fcb71985a.scope: Deactivated successfully.
Feb 25 07:15:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:15:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:15:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:15:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:15:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:15:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/573914449' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:15:51 np0005629333 nova_compute[244014]: 2026-02-25 12:15:51.841 244018 DEBUG oslo_concurrency.processutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:15:51 np0005629333 nova_compute[244014]: 2026-02-25 12:15:51.852 244018 DEBUG nova.compute.provider_tree [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:15:51 np0005629333 nova_compute[244014]: 2026-02-25 12:15:51.880 244018 DEBUG nova.scheduler.client.report [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
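The inventory dict above is raw capacity; what Placement actually schedules against is (total - reserved) * allocation_ratio per resource class, so this node can accept far more than 8 vCPUs' worth of claims. Worked out from the logged values:

    # Usable capacity per resource class, as Placement evaluates it:
    #   usable = (total - reserved) * allocation_ratio
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)  # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2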
Feb 25 07:15:52 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:15:52 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:15:52 np0005629333 nova_compute[244014]: 2026-02-25 12:15:52.852 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:15:52 np0005629333 nova_compute[244014]: 2026-02-25 12:15:52.853 244018 DEBUG nova.compute.manager [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:15:52 np0005629333 nova_compute[244014]: 2026-02-25 12:15:52.859 244018 DEBUG oslo_concurrency.lockutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "6d979dde-168d-4976-99a4-bb4e3eb22ae0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:15:52 np0005629333 nova_compute[244014]: 2026-02-25 12:15:52.859 244018 DEBUG oslo_concurrency.lockutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "6d979dde-168d-4976-99a4-bb4e3eb22ae0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:15:52 np0005629333 nova_compute[244014]: 2026-02-25 12:15:52.886 244018 DEBUG nova.compute.manager [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:15:52 np0005629333 nova_compute[244014]: 2026-02-25 12:15:52.909 244018 DEBUG nova.compute.manager [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:15:52 np0005629333 nova_compute[244014]: 2026-02-25 12:15:52.909 244018 DEBUG nova.network.neutron [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:15:52 np0005629333 nova_compute[244014]: 2026-02-25 12:15:52.933 244018 INFO nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:15:52 np0005629333 nova_compute[244014]: 2026-02-25 12:15:52.955 244018 DEBUG oslo_concurrency.lockutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:15:52 np0005629333 nova_compute[244014]: 2026-02-25 12:15:52.956 244018 DEBUG oslo_concurrency.lockutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:15:52 np0005629333 nova_compute[244014]: 2026-02-25 12:15:52.960 244018 DEBUG nova.compute.manager [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:15:52 np0005629333 nova_compute[244014]: 2026-02-25 12:15:52.967 244018 DEBUG nova.virt.hardware [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:15:52 np0005629333 nova_compute[244014]: 2026-02-25 12:15:52.967 244018 INFO nova.compute.claims [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.125 244018 DEBUG nova.compute.manager [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.128 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.129 244018 INFO nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Creating image(s)
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.158 244018 DEBUG nova.storage.rbd_utils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 6de08989-13cc-415b-adc9-04b338e13d0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.183 244018 DEBUG nova.storage.rbd_utils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 6de08989-13cc-415b-adc9-04b338e13d0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.206 244018 DEBUG nova.storage.rbd_utils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 6de08989-13cc-415b-adc9-04b338e13d0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.210 244018 DEBUG oslo_concurrency.processutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.243 244018 WARNING oslo_policy.policy [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.244 244018 WARNING oslo_policy.policy [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.246 244018 DEBUG nova.policy [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee6c6e44a0624805afeb68a67c99f325', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0cd0968a9a1b4b9e984b0a10a6ac77a8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.278 244018 DEBUG oslo_concurrency.processutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.279 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.280 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.280 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
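The Acquiring/acquired/released triples logged by oslo_concurrency.lockutils come from named-lock guards around critical sections; here the base-image hash serves as the lock name so that at most one request fetches a given image into the cache, and the zero hold time shows the image was already cached. A minimal sketch of the same pattern (the function body is hypothetical):

    from oslo_concurrency import lockutils

    # Serialize all callers on an in-process lock named after the image hash,
    # producing exactly the "Acquiring"/"acquired"/"released" lines seen above.
    @lockutils.synchronized("a63dc6dbb387022d47a8ca49bddcc4af2508a4d6")
    def fetch_base_image():
        # ... fetch and verify the base image once per hash (hypothetical) ...
        pass

    fetch_base_image()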
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.299 244018 DEBUG nova.storage.rbd_utils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 6de08989-13cc-415b-adc9-04b338e13d0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.302 244018 DEBUG oslo_concurrency.processutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 6de08989-13cc-415b-adc9-04b338e13d0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.315 244018 DEBUG oslo_concurrency.processutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:15:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.549 244018 DEBUG oslo_concurrency.processutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 6de08989-13cc-415b-adc9-04b338e13d0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.612 244018 DEBUG nova.storage.rbd_utils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] resizing rbd image 6de08989-13cc-415b-adc9-04b338e13d0f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.713 244018 DEBUG nova.objects.instance [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lazy-loading 'migration_context' on Instance uuid 6de08989-13cc-415b-adc9-04b338e13d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.732 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.735 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Ensure instance console log exists: /var/lib/nova/instances/6de08989-13cc-415b-adc9-04b338e13d0f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.736 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.736 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.737 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:15:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:15:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1331721725' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.799 244018 DEBUG oslo_concurrency.processutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.806 244018 DEBUG nova.compute.provider_tree [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.825 244018 DEBUG nova.scheduler.client.report [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.846 244018 DEBUG oslo_concurrency.lockutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.847 244018 DEBUG nova.compute.manager [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.888 244018 DEBUG nova.compute.manager [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:15:53 np0005629333 nova_compute[244014]: 2026-02-25 12:15:53.889 244018 DEBUG nova.network.neutron [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:15:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:15:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:15:54.198 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:15:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:15:54.199 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 07:15:54 np0005629333 nova_compute[244014]: 2026-02-25 12:15:54.222 244018 INFO nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:15:54 np0005629333 nova_compute[244014]: 2026-02-25 12:15:54.836 244018 DEBUG nova.compute.manager [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:15:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:15:55.000 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:15:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:15:55.001 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:15:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:15:55.001 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:15:55 np0005629333 nova_compute[244014]: 2026-02-25 12:15:55.113 244018 DEBUG nova.compute.manager [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:15:55 np0005629333 nova_compute[244014]: 2026-02-25 12:15:55.115 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:15:55 np0005629333 nova_compute[244014]: 2026-02-25 12:15:55.116 244018 INFO nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Creating image(s)
Feb 25 07:15:55 np0005629333 nova_compute[244014]: 2026-02-25 12:15:55.166 244018 DEBUG nova.storage.rbd_utils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] rbd image 6d979dde-168d-4976-99a4-bb4e3eb22ae0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:15:55 np0005629333 nova_compute[244014]: 2026-02-25 12:15:55.199 244018 DEBUG nova.storage.rbd_utils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] rbd image 6d979dde-168d-4976-99a4-bb4e3eb22ae0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:15:55 np0005629333 nova_compute[244014]: 2026-02-25 12:15:55.232 244018 DEBUG nova.storage.rbd_utils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] rbd image 6d979dde-168d-4976-99a4-bb4e3eb22ae0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:15:55 np0005629333 nova_compute[244014]: 2026-02-25 12:15:55.236 244018 DEBUG oslo_concurrency.processutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:15:55 np0005629333 nova_compute[244014]: 2026-02-25 12:15:55.253 244018 DEBUG nova.network.neutron [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 25 07:15:55 np0005629333 nova_compute[244014]: 2026-02-25 12:15:55.256 244018 DEBUG nova.compute.manager [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 07:15:55 np0005629333 nova_compute[244014]: 2026-02-25 12:15:55.258 244018 DEBUG nova.network.neutron [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Successfully created port: 277c556d-c41e-4d6d-9f29-56e96f6a65e2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:15:55 np0005629333 nova_compute[244014]: 2026-02-25 12:15:55.294 244018 DEBUG oslo_concurrency.processutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:15:55 np0005629333 nova_compute[244014]: 2026-02-25 12:15:55.296 244018 DEBUG oslo_concurrency.lockutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:15:55 np0005629333 nova_compute[244014]: 2026-02-25 12:15:55.296 244018 DEBUG oslo_concurrency.lockutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:15:55 np0005629333 nova_compute[244014]: 2026-02-25 12:15:55.297 244018 DEBUG oslo_concurrency.lockutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:15:55 np0005629333 nova_compute[244014]: 2026-02-25 12:15:55.326 244018 DEBUG nova.storage.rbd_utils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] rbd image 6d979dde-168d-4976-99a4-bb4e3eb22ae0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:15:55 np0005629333 nova_compute[244014]: 2026-02-25 12:15:55.330 244018 DEBUG oslo_concurrency.processutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 6d979dde-168d-4976-99a4-bb4e3eb22ae0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:15:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 200 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 07:15:55 np0005629333 nova_compute[244014]: 2026-02-25 12:15:55.797 244018 DEBUG oslo_concurrency.processutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 6d979dde-168d-4976-99a4-bb4e3eb22ae0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:15:55 np0005629333 nova_compute[244014]: 2026-02-25 12:15:55.875 244018 DEBUG nova.storage.rbd_utils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] resizing rbd image 6d979dde-168d-4976-99a4-bb4e3eb22ae0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 07:15:55 np0005629333 nova_compute[244014]: 2026-02-25 12:15:55.980 244018 DEBUG nova.objects.instance [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lazy-loading 'migration_context' on Instance uuid 6d979dde-168d-4976-99a4-bb4e3eb22ae0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:15:56 np0005629333 nova_compute[244014]: 2026-02-25 12:15:56.616 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:15:56 np0005629333 nova_compute[244014]: 2026-02-25 12:15:56.616 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Ensure instance console log exists: /var/lib/nova/instances/6d979dde-168d-4976-99a4-bb4e3eb22ae0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:15:56 np0005629333 nova_compute[244014]: 2026-02-25 12:15:56.617 244018 DEBUG oslo_concurrency.lockutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:15:56 np0005629333 nova_compute[244014]: 2026-02-25 12:15:56.618 244018 DEBUG oslo_concurrency.lockutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:15:56 np0005629333 nova_compute[244014]: 2026-02-25 12:15:56.618 244018 DEBUG oslo_concurrency.lockutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:15:56 np0005629333 nova_compute[244014]: 2026-02-25 12:15:56.621 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 07:15:56 np0005629333 nova_compute[244014]: 2026-02-25 12:15:56.624 244018 DEBUG nova.network.neutron [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Successfully updated port: 277c556d-c41e-4d6d-9f29-56e96f6a65e2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:15:56 np0005629333 nova_compute[244014]: 2026-02-25 12:15:56.630 244018 WARNING nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:15:56 np0005629333 nova_compute[244014]: 2026-02-25 12:15:56.634 244018 DEBUG nova.virt.libvirt.host [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:15:56 np0005629333 nova_compute[244014]: 2026-02-25 12:15:56.635 244018 DEBUG nova.virt.libvirt.host [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:15:56 np0005629333 nova_compute[244014]: 2026-02-25 12:15:56.637 244018 DEBUG nova.virt.libvirt.host [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 07:15:56 np0005629333 nova_compute[244014]: 2026-02-25 12:15:56.638 244018 DEBUG nova.virt.libvirt.host [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 07:15:56 np0005629333 nova_compute[244014]: 2026-02-25 12:15:56.639 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:15:56 np0005629333 nova_compute[244014]: 2026-02-25 12:15:56.639 244018 DEBUG nova.virt.hardware [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 07:15:56 np0005629333 nova_compute[244014]: 2026-02-25 12:15:56.640 244018 DEBUG nova.virt.hardware [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 07:15:56 np0005629333 nova_compute[244014]: 2026-02-25 12:15:56.640 244018 DEBUG nova.virt.hardware [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 07:15:56 np0005629333 nova_compute[244014]: 2026-02-25 12:15:56.641 244018 DEBUG nova.virt.hardware [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 07:15:56 np0005629333 nova_compute[244014]: 2026-02-25 12:15:56.641 244018 DEBUG nova.virt.hardware [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 07:15:56 np0005629333 nova_compute[244014]: 2026-02-25 12:15:56.642 244018 DEBUG nova.virt.hardware [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 07:15:56 np0005629333 nova_compute[244014]: 2026-02-25 12:15:56.642 244018 DEBUG nova.virt.hardware [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 07:15:56 np0005629333 nova_compute[244014]: 2026-02-25 12:15:56.643 244018 DEBUG nova.virt.hardware [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 07:15:56 np0005629333 nova_compute[244014]: 2026-02-25 12:15:56.643 244018 DEBUG nova.virt.hardware [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 07:15:56 np0005629333 nova_compute[244014]: 2026-02-25 12:15:56.643 244018 DEBUG nova.virt.hardware [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 07:15:56 np0005629333 nova_compute[244014]: 2026-02-25 12:15:56.644 244018 DEBUG nova.virt.hardware [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 07:15:56 np0005629333 nova_compute[244014]: 2026-02-25 12:15:56.648 244018 DEBUG oslo_concurrency.processutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:15:56 np0005629333 nova_compute[244014]: 2026-02-25 12:15:56.724 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "refresh_cache-6de08989-13cc-415b-adc9-04b338e13d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:15:56 np0005629333 nova_compute[244014]: 2026-02-25 12:15:56.725 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquired lock "refresh_cache-6de08989-13cc-415b-adc9-04b338e13d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:15:56 np0005629333 nova_compute[244014]: 2026-02-25 12:15:56.725 244018 DEBUG nova.network.neutron [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:15:57 np0005629333 nova_compute[244014]: 2026-02-25 12:15:57.037 244018 DEBUG nova.network.neutron [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:15:57 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb 25 07:15:57 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb 25 07:15:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:15:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2074957029' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:15:57 np0005629333 nova_compute[244014]: 2026-02-25 12:15:57.245 244018 DEBUG oslo_concurrency.processutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:15:57 np0005629333 nova_compute[244014]: 2026-02-25 12:15:57.266 244018 DEBUG nova.storage.rbd_utils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] rbd image 6d979dde-168d-4976-99a4-bb4e3eb22ae0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:15:57 np0005629333 nova_compute[244014]: 2026-02-25 12:15:57.270 244018 DEBUG oslo_concurrency.processutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:15:57 np0005629333 nova_compute[244014]: 2026-02-25 12:15:57.332 244018 DEBUG nova.compute.manager [req-66b5083a-9726-4778-a521-26b7c3fcb4c8 req-bf35c6e1-8a5f-43a6-9a0d-62bdeb8fe632 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Received event network-changed-277c556d-c41e-4d6d-9f29-56e96f6a65e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:15:57 np0005629333 nova_compute[244014]: 2026-02-25 12:15:57.332 244018 DEBUG nova.compute.manager [req-66b5083a-9726-4778-a521-26b7c3fcb4c8 req-bf35c6e1-8a5f-43a6-9a0d-62bdeb8fe632 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Refreshing instance network info cache due to event network-changed-277c556d-c41e-4d6d-9f29-56e96f6a65e2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:15:57 np0005629333 nova_compute[244014]: 2026-02-25 12:15:57.333 244018 DEBUG oslo_concurrency.lockutils [req-66b5083a-9726-4778-a521-26b7c3fcb4c8 req-bf35c6e1-8a5f-43a6-9a0d-62bdeb8fe632 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-6de08989-13cc-415b-adc9-04b338e13d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:15:57 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb 25 07:15:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 299 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.9 MiB/s wr, 164 op/s
Feb 25 07:15:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:15:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/810532628' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:15:57 np0005629333 nova_compute[244014]: 2026-02-25 12:15:57.798 244018 DEBUG oslo_concurrency.processutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:15:57 np0005629333 nova_compute[244014]: 2026-02-25 12:15:57.802 244018 DEBUG nova.objects.instance [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6d979dde-168d-4976-99a4-bb4e3eb22ae0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:15:58 np0005629333 nova_compute[244014]: 2026-02-25 12:15:58.190 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:15:58 np0005629333 nova_compute[244014]: 2026-02-25 12:15:58.191 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 07:15:58 np0005629333 nova_compute[244014]: 2026-02-25 12:15:58.192 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 07:15:58 np0005629333 nova_compute[244014]: 2026-02-25 12:15:58.343 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:15:58 np0005629333 nova_compute[244014]:  <uuid>6d979dde-168d-4976-99a4-bb4e3eb22ae0</uuid>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:  <name>instance-00000006</name>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      <nova:name>tempest-LiveMigrationNegativeTest-server-340575447</nova:name>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:15:56</nova:creationTime>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:15:58 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:        <nova:user uuid="0da29cefe0e94220a8d9cf895454b55c">tempest-LiveMigrationNegativeTest-1055227276-project-member</nova:user>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:        <nova:project uuid="6bddec791c4c46b7bef787d3d6634b12">tempest-LiveMigrationNegativeTest-1055227276</nova:project>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      <nova:ports/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      <entry name="serial">6d979dde-168d-4976-99a4-bb4e3eb22ae0</entry>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      <entry name="uuid">6d979dde-168d-4976-99a4-bb4e3eb22ae0</entry>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/6d979dde-168d-4976-99a4-bb4e3eb22ae0_disk">
Feb 25 07:15:58 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:15:58 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/6d979dde-168d-4976-99a4-bb4e3eb22ae0_disk.config">
Feb 25 07:15:58 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:15:58 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/6d979dde-168d-4976-99a4-bb4e3eb22ae0/console.log" append="off"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:15:58 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:15:58 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:15:58 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:15:58 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 07:15:58 np0005629333 nova_compute[244014]: 2026-02-25 12:15:58.361 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 25 07:15:58 np0005629333 nova_compute[244014]: 2026-02-25 12:15:58.361 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 25 07:15:58 np0005629333 nova_compute[244014]: 2026-02-25 12:15:58.424 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:15:58 np0005629333 nova_compute[244014]: 2026-02-25 12:15:58.425 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:15:58 np0005629333 nova_compute[244014]: 2026-02-25 12:15:58.426 244018 INFO nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Using config drive
Feb 25 07:15:58 np0005629333 nova_compute[244014]: 2026-02-25 12:15:58.457 244018 DEBUG nova.storage.rbd_utils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] rbd image 6d979dde-168d-4976-99a4-bb4e3eb22ae0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:15:58 np0005629333 nova_compute[244014]: 2026-02-25 12:15:58.746 244018 INFO nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Creating config drive at /var/lib/nova/instances/6d979dde-168d-4976-99a4-bb4e3eb22ae0/disk.config
Feb 25 07:15:58 np0005629333 nova_compute[244014]: 2026-02-25 12:15:58.752 244018 DEBUG oslo_concurrency.processutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6d979dde-168d-4976-99a4-bb4e3eb22ae0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8y7iise3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:15:58 np0005629333 nova_compute[244014]: 2026-02-25 12:15:58.779 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-ee9e86a4-8a34-43a9-baf1-f9b2e7f85534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:15:58 np0005629333 nova_compute[244014]: 2026-02-25 12:15:58.780 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-ee9e86a4-8a34-43a9-baf1-f9b2e7f85534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:15:58 np0005629333 nova_compute[244014]: 2026-02-25 12:15:58.780 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 07:15:58 np0005629333 nova_compute[244014]: 2026-02-25 12:15:58.781 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid ee9e86a4-8a34-43a9-baf1-f9b2e7f85534 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:15:58 np0005629333 nova_compute[244014]: 2026-02-25 12:15:58.874 244018 DEBUG oslo_concurrency.processutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6d979dde-168d-4976-99a4-bb4e3eb22ae0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8y7iise3" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:15:58 np0005629333 nova_compute[244014]: 2026-02-25 12:15:58.900 244018 DEBUG nova.storage.rbd_utils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] rbd image 6d979dde-168d-4976-99a4-bb4e3eb22ae0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:15:58 np0005629333 nova_compute[244014]: 2026-02-25 12:15:58.906 244018 DEBUG oslo_concurrency.processutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6d979dde-168d-4976-99a4-bb4e3eb22ae0/disk.config 6d979dde-168d-4976-99a4-bb4e3eb22ae0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:15:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:15:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 299 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.1 MiB/s wr, 123 op/s
Feb 25 07:15:59 np0005629333 nova_compute[244014]: 2026-02-25 12:15:59.751 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:16:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:00.201 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
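The ovn_metadata_agent heartbeat above writes its acknowledged southbound sequence number into its own Chassis_Private row. A minimal sketch of an equivalent one-off update, assuming a local OVN southbound database and using ovn-sbctl purely as an illustration in place of the agent's ovsdbapp transaction (the key is quoted because it contains a colon):

    import subprocess

    # Hypothetical CLI equivalent of the DbSetCommand recorded above: bump the
    # neutron:ovn-metadata-sb-cfg marker on the agent's Chassis_Private row.
    # Record UUID and key/value are taken from the log line.
    record = "a594384c-d614-4492-9e0a-4d6ec095920c"
    subprocess.run(
        ["ovn-sbctl", "set", "Chassis_Private", record,
         'external_ids:"neutron:ovn-metadata-sb-cfg"="4"'],
        check=True,
    )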
Feb 25 07:16:00 np0005629333 nova_compute[244014]: 2026-02-25 12:16:00.508 244018 DEBUG oslo_concurrency.processutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6d979dde-168d-4976-99a4-bb4e3eb22ae0/disk.config 6d979dde-168d-4976-99a4-bb4e3eb22ae0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:16:00 np0005629333 nova_compute[244014]: 2026-02-25 12:16:00.509 244018 INFO nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Deleting local config drive /var/lib/nova/instances/6d979dde-168d-4976-99a4-bb4e3eb22ae0/disk.config because it was imported into RBD.#033[00m
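The commands traced above are the whole config-drive path for an RBD-backed instance: build the ISO9660 config drive with mkisofs, import it into the vms pool, then delete the local copy. A trimmed, standalone re-run of the same two commands (paths, pool, and the 'openstack' cephx user taken from the log; the -publisher and -quiet flags are dropped for brevity, and the /tmp staging directory holding the metadata tree is assumed to exist):

    import subprocess

    uuid = "6d979dde-168d-4976-99a4-bb4e3eb22ae0"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"
    # Step 1: pack the staged metadata tree into a 'config-2' labelled ISO.
    subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                    "-allow-multidot", "-l", "-J", "-r", "-V", "config-2",
                    "/tmp/tmp8y7iise3"], check=True)
    # Step 2: import the ISO into the vms pool as <uuid>_disk.config.
    subprocess.run(["rbd", "import", "--pool", "vms", iso,
                    f"{uuid}_disk.config", "--image-format=2",
                    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
                   check=True)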
Feb 25 07:16:00 np0005629333 systemd-machined[210048]: New machine qemu-5-instance-00000006.
Feb 25 07:16:00 np0005629333 systemd[1]: Started Virtual Machine qemu-5-instance-00000006.
Feb 25 07:16:00 np0005629333 nova_compute[244014]: 2026-02-25 12:16:00.902 244018 DEBUG nova.network.neutron [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Updating instance_info_cache with network_info: [{"id": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "address": "fa:16:3e:9f:6d:04", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap277c556d-c4", "ovs_interfaceid": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
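Payloads like the network_info list above are plain JSON once extracted from the log line. A small helper, offered here only as a hypothetical convenience for reading these dumps:

    def fixed_ips(network_info):
        """Yield (port_id, address, mtu) from a nova network_info list."""
        for vif in network_info:
            mtu = vif["network"]["meta"]["mtu"]
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    yield vif["id"], ip["address"], mtu

    # With the payload above this yields
    # ('277c556d-c41e-4d6d-9f29-56e96f6a65e2', '10.100.0.6', 1442)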
Feb 25 07:16:00 np0005629333 nova_compute[244014]: 2026-02-25 12:16:00.950 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021760.950319, 6d979dde-168d-4976-99a4-bb4e3eb22ae0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:16:00 np0005629333 nova_compute[244014]: 2026-02-25 12:16:00.951 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:16:00 np0005629333 nova_compute[244014]: 2026-02-25 12:16:00.956 244018 DEBUG nova.compute.manager [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:16:00 np0005629333 nova_compute[244014]: 2026-02-25 12:16:00.956 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:16:00 np0005629333 nova_compute[244014]: 2026-02-25 12:16:00.957 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Releasing lock "refresh_cache-6de08989-13cc-415b-adc9-04b338e13d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:16:00 np0005629333 nova_compute[244014]: 2026-02-25 12:16:00.958 244018 DEBUG nova.compute.manager [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Instance network_info: |[{"id": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "address": "fa:16:3e:9f:6d:04", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap277c556d-c4", "ovs_interfaceid": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:16:00 np0005629333 nova_compute[244014]: 2026-02-25 12:16:00.959 244018 DEBUG oslo_concurrency.lockutils [req-66b5083a-9726-4778-a521-26b7c3fcb4c8 req-bf35c6e1-8a5f-43a6-9a0d-62bdeb8fe632 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-6de08989-13cc-415b-adc9-04b338e13d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:16:00 np0005629333 nova_compute[244014]: 2026-02-25 12:16:00.960 244018 DEBUG nova.network.neutron [req-66b5083a-9726-4778-a521-26b7c3fcb4c8 req-bf35c6e1-8a5f-43a6-9a0d-62bdeb8fe632 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Refreshing network info cache for port 277c556d-c41e-4d6d-9f29-56e96f6a65e2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:16:00 np0005629333 nova_compute[244014]: 2026-02-25 12:16:00.965 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Start _get_guest_xml network_info=[{"id": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "address": "fa:16:3e:9f:6d:04", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap277c556d-c4", "ovs_interfaceid": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:16:00 np0005629333 nova_compute[244014]: 2026-02-25 12:16:00.971 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:16:00 np0005629333 nova_compute[244014]: 2026-02-25 12:16:00.975 244018 INFO nova.virt.libvirt.driver [-] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Instance spawned successfully.#033[00m
Feb 25 07:16:00 np0005629333 nova_compute[244014]: 2026-02-25 12:16:00.976 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:16:00 np0005629333 nova_compute[244014]: 2026-02-25 12:16:00.981 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:16:00 np0005629333 nova_compute[244014]: 2026-02-25 12:16:00.985 244018 WARNING nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:16:00 np0005629333 nova_compute[244014]: 2026-02-25 12:16:00.996 244018 DEBUG nova.virt.libvirt.host [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:16:00 np0005629333 nova_compute[244014]: 2026-02-25 12:16:00.997 244018 DEBUG nova.virt.libvirt.host [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.001 244018 DEBUG nova.virt.libvirt.host [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.002 244018 DEBUG nova.virt.libvirt.host [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
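The pair of probes above is nova deciding whether the host can enforce CPU quotas: the cgroup v1 'cpu' controller is absent, but cgroup v2 advertises one. A rough standalone analogue of the same two checks (nova's real versions live in nova.virt.libvirt.host):

    from pathlib import Path

    def has_cpu_controller():
        # v1/hybrid hosts expose a 'cpu' controller hierarchy as a directory.
        if Path("/sys/fs/cgroup/cpu").exists():
            return True
        # Pure v2 hosts list enabled controllers in a single file.
        v2 = Path("/sys/fs/cgroup/cgroup.controllers")
        if v2.exists():
            return "cpu" in v2.read_text().split()
        return False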
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.002 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.003 244018 DEBUG nova.virt.hardware [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:15:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='966515861',id=21,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_0-1872382025',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.004 244018 DEBUG nova.virt.hardware [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.004 244018 DEBUG nova.virt.hardware [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.005 244018 DEBUG nova.virt.hardware [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.005 244018 DEBUG nova.virt.hardware [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.005 244018 DEBUG nova.virt.hardware [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.006 244018 DEBUG nova.virt.hardware [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.006 244018 DEBUG nova.virt.hardware [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.007 244018 DEBUG nova.virt.hardware [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.007 244018 DEBUG nova.virt.hardware [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.008 244018 DEBUG nova.virt.hardware [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
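The trace above is nova.virt.hardware picking a guest CPU topology: with no flavor or image constraints, 1 vCPU admits only the 1:1:1 layout. A toy version of the enumeration step, assuming the same per-dimension maxima shown in the log (the real code also honours preferred values and NUMA placement):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Enumerate (sockets, cores, threads) factorisations of the vCPU count
        # that stay within each dimension's limit.
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        topos.append((s, c, t))
        return topos

    print(possible_topologies(1))   # [(1, 1, 1)], matching the log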
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.012 244018 DEBUG oslo_concurrency.processutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.035 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.036 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.036 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.037 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.037 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.038 244018 DEBUG nova.virt.libvirt.driver [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.051 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.051 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021760.9526653, 6d979dde-168d-4976-99a4-bb4e3eb22ae0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.052 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] VM Started (Lifecycle Event)#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.109 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.112 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.154 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.180 244018 INFO nova.compute.manager [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Took 6.07 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.180 244018 DEBUG nova.compute.manager [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.258 244018 INFO nova.compute.manager [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Took 8.33 seconds to build instance.#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.286 244018 DEBUG oslo_concurrency.lockutils [None req-89535210-ba65-4dca-8aee-c28b441f340e 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "6d979dde-168d-4976-99a4-bb4e3eb22ae0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.426s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
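The acquire/release bookkeeping throughout this section comes from oslo.concurrency's lockutils, which serialises work on each instance behind a named lock. A minimal sketch of the same pattern (lock names copied from the log; the function body is elided):

    from oslo_concurrency import lockutils

    # Decorator form, as used for the per-instance build lock released above.
    @lockutils.synchronized("6d979dde-168d-4976-99a4-bb4e3eb22ae0")
    def _locked_do_build_and_run_instance():
        ...

    # Context-manager form, as used for the refresh_cache locks.
    with lockutils.lock("refresh_cache-ee9e86a4-8a34-43a9-baf1-f9b2e7f85534"):
        pass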
Feb 25 07:16:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 309 MiB data, 403 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.7 MiB/s wr, 148 op/s
Feb 25 07:16:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:16:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3832437348' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.529 244018 DEBUG oslo_concurrency.processutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.561 244018 DEBUG nova.storage.rbd_utils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 6de08989-13cc-415b-adc9-04b338e13d0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.566 244018 DEBUG oslo_concurrency.processutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:16:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:16:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:16:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:16:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:16:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:16:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.731 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.757 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-ee9e86a4-8a34-43a9-baf1-f9b2e7f85534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.757 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.757 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.758 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.758 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.759 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.759 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.759 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.759 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.759 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.785 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.785 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.786 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.786 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:16:01 np0005629333 nova_compute[244014]: 2026-02-25 12:16:01.786 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
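Here the resource tracker shells out for cluster capacity, since the node's disk is RBD-backed. What that amounts to as a one-off, assuming the same cephx user and conf path as the log:

    import json
    import subprocess

    # Ask ceph for usage as JSON and read back the cluster-wide totals.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])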
Feb 25 07:16:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:16:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1786276841' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.148 244018 DEBUG oslo_concurrency.processutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.150 244018 DEBUG nova.virt.libvirt.vif [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:15:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-685039122',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-685039122',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(21),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-685039122',id=5,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=21,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJvvkxGBHug345PSqKu7AM4guWtlD1Z/ZHlY6xJOr1x7TK7JerzVRxChzrn82oXIZjPXupKAJ5z18TSxrDr8DDKEwo/biNUpuR+tKuw+djeyjF8yxNV8v2GUFxoZtp+/Q==',key_name='tempest-keypair-1665806790',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0cd0968a9a1b4b9e984b0a10a6ac77a8',ramdisk_id='',reservation_id='r-c0zagdsb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1630390846',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1630390846-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:15:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee6c6e44a0624805afeb68a67c99f325',uuid=6de08989-13cc-415b-adc9-04b338e13d0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "address": "fa:16:3e:9f:6d:04", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap277c556d-c4", "ovs_interfaceid": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.150 244018 DEBUG nova.network.os_vif_util [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Converting VIF {"id": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "address": "fa:16:3e:9f:6d:04", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap277c556d-c4", "ovs_interfaceid": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.151 244018 DEBUG nova.network.os_vif_util [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:6d:04,bridge_name='br-int',has_traffic_filtering=True,id=277c556d-c41e-4d6d-9f29-56e96f6a65e2,network=Network(d3fb36f1-0e88-43b4-a8a4-3844d55f1de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap277c556d-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.152 244018 DEBUG nova.objects.instance [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6de08989-13cc-415b-adc9-04b338e13d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.188 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:16:02 np0005629333 nova_compute[244014]:  <uuid>6de08989-13cc-415b-adc9-04b338e13d0f</uuid>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:  <name>instance-00000005</name>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-685039122</nova:name>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:16:00</nova:creationTime>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <nova:flavor name="tempest-flavor_with_ephemeral_0-1872382025">
Feb 25 07:16:02 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:        <nova:user uuid="ee6c6e44a0624805afeb68a67c99f325">tempest-ServersWithSpecificFlavorTestJSON-1630390846-project-member</nova:user>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:        <nova:project uuid="0cd0968a9a1b4b9e984b0a10a6ac77a8">tempest-ServersWithSpecificFlavorTestJSON-1630390846</nova:project>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:        <nova:port uuid="277c556d-c41e-4d6d-9f29-56e96f6a65e2">
Feb 25 07:16:02 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <entry name="serial">6de08989-13cc-415b-adc9-04b338e13d0f</entry>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <entry name="uuid">6de08989-13cc-415b-adc9-04b338e13d0f</entry>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/6de08989-13cc-415b-adc9-04b338e13d0f_disk">
Feb 25 07:16:02 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:16:02 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/6de08989-13cc-415b-adc9-04b338e13d0f_disk.config">
Feb 25 07:16:02 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:16:02 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:9f:6d:04"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <target dev="tap277c556d-c4"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/6de08989-13cc-415b-adc9-04b338e13d0f/console.log" append="off"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:16:02 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:16:02 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:16:02 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:16:02 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
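The domain XML dumped above is exactly what nova hands to libvirt. A quick sketch for pulling the interesting bits back out of such a dump, assuming it has been saved to guest.xml:

    import xml.etree.ElementTree as ET

    dom = ET.parse("guest.xml").getroot()
    # Both disks here are network-backed (RBD), so each <source> carries a
    # protocol and pool/image name.
    for disk in dom.findall("./devices/disk"):
        src = disk.find("source")
        tgt = disk.find("target")
        print(disk.get("device"), src.get("protocol"), src.get("name"),
              "->", tgt.get("dev"))
    tap = dom.find("./devices/interface/target").get("dev")
    print("vif:", tap)   # tap277c556d-c4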
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.189 244018 DEBUG nova.compute.manager [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Preparing to wait for external event network-vif-plugged-277c556d-c41e-4d6d-9f29-56e96f6a65e2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.189 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.189 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.189 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.190 244018 DEBUG nova.virt.libvirt.vif [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:15:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-685039122',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-685039122',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(21),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-685039122',id=5,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=21,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJvvkxGBHug345PSqKu7AM4guWtlD1Z/ZHlY6xJOr1x7TK7JerzVRxChzrn82oXIZjPXupKAJ5z18TSxrDr8DDKEwo/biNUpuR+tKuw+djeyjF8yxNV8v2GUFxoZtp+/Q==',key_name='tempest-keypair-1665806790',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0cd0968a9a1b4b9e984b0a10a6ac77a8',ramdisk_id='',reservation_id='r-c0zagdsb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1630390846',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1630390846-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:15:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee6c6e44a0624805afeb68a67c99f325',uuid=6de08989-13cc-415b-adc9-04b338e13d0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "address": "fa:16:3e:9f:6d:04", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap277c556d-c4", "ovs_interfaceid": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.190 244018 DEBUG nova.network.os_vif_util [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Converting VIF {"id": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "address": "fa:16:3e:9f:6d:04", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap277c556d-c4", "ovs_interfaceid": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.191 244018 DEBUG nova.network.os_vif_util [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:6d:04,bridge_name='br-int',has_traffic_filtering=True,id=277c556d-c41e-4d6d-9f29-56e96f6a65e2,network=Network(d3fb36f1-0e88-43b4-a8a4-3844d55f1de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap277c556d-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.191 244018 DEBUG os_vif [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:6d:04,bridge_name='br-int',has_traffic_filtering=True,id=277c556d-c41e-4d6d-9f29-56e96f6a65e2,network=Network(d3fb36f1-0e88-43b4-a8a4-3844d55f1de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap277c556d-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
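The two conversion records above plus this plug call are os-vif at work: nova translates its own VIF dict into a VIFOpenVSwitch object and hands it to the loaded ovs plugin. A minimal standalone sketch of the same call, with field values copied from the log but the field set trimmed and the wiring simplified (it assumes a host running ovsdb-server; this is not nova's actual code path):

    import os_vif
    from os_vif.objects.instance_info import InstanceInfo
    from os_vif.objects.network import Network
    from os_vif.objects.vif import VIFOpenVSwitch

    os_vif.initialize()  # loads the 'ovs' plugin via its entry point

    vif = VIFOpenVSwitch(
        id='277c556d-c41e-4d6d-9f29-56e96f6a65e2',
        address='fa:16:3e:9f:6d:04',
        bridge_name='br-int',
        vif_name='tap277c556d-c4',
        network=Network(id='d3fb36f1-0e88-43b4-a8a4-3844d55f1de8'),
        plugin='ovs')

    info = InstanceInfo(
        uuid='6de08989-13cc-415b-adc9-04b338e13d0f',
        name='tempest-ServersWithSpecificFlavorTestJSON-server-685039122')

    # Dispatches to the ovs plugin, which adds the tap port to br-int
    # via ovsdb (the transactions logged below).
    os_vif.plug(vif, info)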
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.232 244018 DEBUG ovsdbapp.backend.ovs_idl [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.233 244018 DEBUG ovsdbapp.backend.ovs_idl [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.233 244018 DEBUG ovsdbapp.backend.ovs_idl [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.234 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.235 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [POLLOUT] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.235 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.236 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:16:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/840390076' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.348 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.351 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
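The ceph df call above (and the matching ceph-mon audit lines) is nova's RBD image backend polling cluster capacity during the resource tracker's periodic update. Reproducing it outside nova is a subprocess call plus JSON parsing; the command line is copied from the record, and the top-level "stats" keys are standard ceph df --format=json output:

    # Run the same capacity query nova just ran and pull cluster totals.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout

    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_used_bytes'],
          stats['total_avail_bytes'])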
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.352 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.362 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.362 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.362 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.364 244018 INFO oslo.privsep.daemon [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpi0flyv1m/privsep.sock']#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.572 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.573 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.577 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.577 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.640 244018 DEBUG nova.objects.instance [None req-3f7ddc02-fe16-4b96-b871-08ac2002a35c 1175078d14ec4976bef47e03b7b1c6a4 1764ea8d72cf4177bc85b3771c21c585 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6d979dde-168d-4976-99a4-bb4e3eb22ae0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.667 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021762.6668618, 6d979dde-168d-4976-99a4-bb4e3eb22ae0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.667 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.719 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.722 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
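The numeric states in this record are nova.compute.power_state constants. A quick decoder, with values as defined in that module:

    # Relevant nova.compute.power_state values.
    NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0, 1, 3, 4
    # "DB power_state: 1, VM power_state: 3" therefore means the database
    # still says RUNNING while libvirt reports PAUSED, which is expected
    # while the suspend task is in flight; the sync is skipped just below.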
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.748 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.749 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4782MB free_disk=59.918520422652364GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.749 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.749 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.752 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.848 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance ee9e86a4-8a34-43a9-baf1-f9b2e7f85534 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.849 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 6de08989-13cc-415b-adc9-04b338e13d0f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.849 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 6d979dde-168d-4976-99a4-bb4e3eb22ae0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.849 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.849 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
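The final view can be cross-checked against the three placement allocations listed just above (each instance holding DISK_GB=1, MEMORY_MB=128, VCPU=1) and the 512 MB memory reserve that appears as the MEMORY_MB 'reserved' value in the inventory logged at 12:16:03.824:

    # Recompute the "Final resource view" usage figures.
    instances = 3
    used_ram = 512 + instances * 128   # 896 -> matches used_ram=896MB
    used_disk = instances * 1          # 3   -> matches used_disk=3GB
    used_vcpus = instances * 1         # 3   -> matches used_vcpus=3
    print(used_ram, used_disk, used_vcpus)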
Feb 25 07:16:02 np0005629333 nova_compute[244014]: 2026-02-25 12:16:02.956 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:16:03 np0005629333 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000006.scope: Deactivated successfully.
Feb 25 07:16:03 np0005629333 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000006.scope: Consumed 2.257s CPU time.
Feb 25 07:16:03 np0005629333 systemd-machined[210048]: Machine qemu-5-instance-00000006 terminated.
Feb 25 07:16:03 np0005629333 nova_compute[244014]: 2026-02-25 12:16:03.260 244018 INFO oslo.privsep.daemon [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Feb 25 07:16:03 np0005629333 nova_compute[244014]: 2026-02-25 12:16:03.150 253165 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Feb 25 07:16:03 np0005629333 nova_compute[244014]: 2026-02-25 12:16:03.155 253165 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Feb 25 07:16:03 np0005629333 nova_compute[244014]: 2026-02-25 12:16:03.160 253165 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Feb 25 07:16:03 np0005629333 nova_compute[244014]: 2026-02-25 12:16:03.160 253165 INFO oslo.privsep.daemon [-] privsep daemon running as pid 253165#033[00m
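The stretch from "Running privsep helper" at 12:16:02.364 to the banner above is oslo.privsep forking a root helper over a unix socket and bounding it to the listed capability set. A minimal sketch of the pattern; the context name and entrypoint here are hypothetical (vif_plug_ovs defines its own vif_plug context):

    # Hypothetical privsep context mirroring the capability banner above.
    from oslo_privsep import capabilities, priv_context

    ctx = priv_context.PrivContext(
        'demo',
        cfg_section='demo_privsep',
        pypath=__name__ + '.ctx',
        capabilities=[capabilities.CAP_DAC_OVERRIDE,
                      capabilities.CAP_NET_ADMIN])

    @ctx.entrypoint
    def set_link_up(devname):
        # Runs inside the forked daemon: uid/gid 0/0, holding only the
        # capabilities above, reached over the privsep socket.
        ...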
Feb 25 07:16:03 np0005629333 nova_compute[244014]: 2026-02-25 12:16:03.292 244018 DEBUG nova.compute.manager [None req-3f7ddc02-fe16-4b96-b871-08ac2002a35c 1175078d14ec4976bef47e03b7b1c6a4 1764ea8d72cf4177bc85b3771c21c585 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:16:03 np0005629333 nova_compute[244014]: 2026-02-25 12:16:03.345 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 325 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.7 MiB/s wr, 178 op/s
Feb 25 07:16:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:16:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4239469560' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:16:03 np0005629333 nova_compute[244014]: 2026-02-25 12:16:03.538 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:16:03 np0005629333 nova_compute[244014]: 2026-02-25 12:16:03.543 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:16:03 np0005629333 nova_compute[244014]: 2026-02-25 12:16:03.653 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:03 np0005629333 nova_compute[244014]: 2026-02-25 12:16:03.654 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap277c556d-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:16:03 np0005629333 nova_compute[244014]: 2026-02-25 12:16:03.654 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap277c556d-c4, col_values=(('external_ids', {'iface-id': '277c556d-c41e-4d6d-9f29-56e96f6a65e2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9f:6d:04', 'vm-uuid': '6de08989-13cc-415b-adc9-04b338e13d0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
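The two commands in this transaction map onto ovsdbapp's Open_vSwitch-schema API. A rough standalone equivalent, with values taken from the log and the connection plumbing simplified (the tcp:127.0.0.1:6640 endpoint comes from the reconnect lines at 12:16:02; treat the exact wiring as illustrative):

    # Rough ovsdbapp equivalent of the AddPortCommand + DbSetCommand txn.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    port = 'tap277c556d-c4'
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', port, may_exist=True))
        txn.add(api.db_set(
            'Interface', port,
            ('external_ids', {
                'iface-id': '277c556d-c41e-4d6d-9f29-56e96f6a65e2',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:9f:6d:04',
                'vm-uuid': '6de08989-13cc-415b-adc9-04b338e13d0f'})))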
Feb 25 07:16:03 np0005629333 nova_compute[244014]: 2026-02-25 12:16:03.657 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:03 np0005629333 NetworkManager[49836]: <info>  [1772021763.6581] manager: (tap277c556d-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Feb 25 07:16:03 np0005629333 nova_compute[244014]: 2026-02-25 12:16:03.660 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:16:03 np0005629333 nova_compute[244014]: 2026-02-25 12:16:03.663 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:03 np0005629333 nova_compute[244014]: 2026-02-25 12:16:03.664 244018 INFO os_vif [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:6d:04,bridge_name='br-int',has_traffic_filtering=True,id=277c556d-c41e-4d6d-9f29-56e96f6a65e2,network=Network(d3fb36f1-0e88-43b4-a8a4-3844d55f1de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap277c556d-c4')#033[00m
Feb 25 07:16:03 np0005629333 nova_compute[244014]: 2026-02-25 12:16:03.824 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
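Placement turns that inventory into schedulable capacity as (total - reserved) * allocation_ratio, so the record above yields:

    # (total, reserved, allocation_ratio) per resource class, from the
    # inventory record above.
    inventory = {'MEMORY_MB': (7679, 512, 1.0),
                 'VCPU': (8, 0, 4.0),
                 'DISK_GB': (59, 1, 0.9)}
    for rc, (total, reserved, ratio) in inventory.items():
        print(rc, (total - reserved) * ratio)
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB ~52.2 schedulable units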
Feb 25 07:16:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:16:04 np0005629333 nova_compute[244014]: 2026-02-25 12:16:04.415 244018 DEBUG nova.network.neutron [req-66b5083a-9726-4778-a521-26b7c3fcb4c8 req-bf35c6e1-8a5f-43a6-9a0d-62bdeb8fe632 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Updated VIF entry in instance network info cache for port 277c556d-c41e-4d6d-9f29-56e96f6a65e2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:16:04 np0005629333 nova_compute[244014]: 2026-02-25 12:16:04.416 244018 DEBUG nova.network.neutron [req-66b5083a-9726-4778-a521-26b7c3fcb4c8 req-bf35c6e1-8a5f-43a6-9a0d-62bdeb8fe632 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Updating instance_info_cache with network_info: [{"id": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "address": "fa:16:3e:9f:6d:04", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap277c556d-c4", "ovs_interfaceid": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:16:04 np0005629333 nova_compute[244014]: 2026-02-25 12:16:04.461 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 07:16:04 np0005629333 nova_compute[244014]: 2026-02-25 12:16:04.462 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:16:04 np0005629333 nova_compute[244014]: 2026-02-25 12:16:04.571 244018 DEBUG oslo_concurrency.lockutils [req-66b5083a-9726-4778-a521-26b7c3fcb4c8 req-bf35c6e1-8a5f-43a6-9a0d-62bdeb8fe632 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-6de08989-13cc-415b-adc9-04b338e13d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:16:04 np0005629333 nova_compute[244014]: 2026-02-25 12:16:04.584 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:16:04 np0005629333 nova_compute[244014]: 2026-02-25 12:16:04.585 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:16:04 np0005629333 nova_compute[244014]: 2026-02-25 12:16:04.585 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] No VIF found with MAC fa:16:3e:9f:6d:04, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:16:04 np0005629333 nova_compute[244014]: 2026-02-25 12:16:04.586 244018 INFO nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Using config drive#033[00m
Feb 25 07:16:04 np0005629333 nova_compute[244014]: 2026-02-25 12:16:04.617 244018 DEBUG nova.storage.rbd_utils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 6de08989-13cc-415b-adc9-04b338e13d0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:16:05 np0005629333 nova_compute[244014]: 2026-02-25 12:16:05.142 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:16:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 325 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 796 KiB/s rd, 5.7 MiB/s wr, 140 op/s
Feb 25 07:16:05 np0005629333 nova_compute[244014]: 2026-02-25 12:16:05.693 244018 INFO nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Creating config drive at /var/lib/nova/instances/6de08989-13cc-415b-adc9-04b338e13d0f/disk.config#033[00m
Feb 25 07:16:05 np0005629333 nova_compute[244014]: 2026-02-25 12:16:05.701 244018 DEBUG oslo_concurrency.processutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6de08989-13cc-415b-adc9-04b338e13d0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpn1u5ne6j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:16:05 np0005629333 nova_compute[244014]: 2026-02-25 12:16:05.828 244018 DEBUG oslo_concurrency.processutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6de08989-13cc-415b-adc9-04b338e13d0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpn1u5ne6j" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:16:05 np0005629333 nova_compute[244014]: 2026-02-25 12:16:05.880 244018 DEBUG nova.storage.rbd_utils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 6de08989-13cc-415b-adc9-04b338e13d0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:16:05 np0005629333 nova_compute[244014]: 2026-02-25 12:16:05.884 244018 DEBUG oslo_concurrency.processutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6de08989-13cc-415b-adc9-04b338e13d0f/disk.config 6de08989-13cc-415b-adc9-04b338e13d0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:16:06 np0005629333 nova_compute[244014]: 2026-02-25 12:16:06.428 244018 DEBUG oslo_concurrency.processutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6de08989-13cc-415b-adc9-04b338e13d0f/disk.config 6de08989-13cc-415b-adc9-04b338e13d0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:16:06 np0005629333 nova_compute[244014]: 2026-02-25 12:16:06.429 244018 INFO nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Deleting local config drive /var/lib/nova/instances/6de08989-13cc-415b-adc9-04b338e13d0f/disk.config because it was imported into RBD.#033[00m
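Records 12:16:05.693 through 12:16:06.429 are the complete config drive flow: build the ISO9660 image locally, import it into the Ceph vms pool, delete the local copy. The same three steps as plain subprocess calls, with paths, pool, and image name taken verbatim from the log (/tmp/tmpn1u5ne6j is the temporary directory holding the staged metadata tree):

    import os
    import subprocess

    inst = '6de08989-13cc-415b-adc9-04b338e13d0f'
    iso = f'/var/lib/nova/instances/{inst}/disk.config'

    # Build the config-2 ISO from the staged metadata tree.
    subprocess.run(
        ['/usr/bin/mkisofs', '-o', iso,
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
         '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpn1u5ne6j'],
        check=True)

    # Import it into RBD, then drop the local file, matching the
    # "Deleting local config drive ... imported into RBD" record.
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', iso, f'{inst}_disk.config',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)
    os.remove(iso)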
Feb 25 07:16:06 np0005629333 kernel: tun: Universal TUN/TAP device driver, 1.6
Feb 25 07:16:06 np0005629333 kernel: tap277c556d-c4: entered promiscuous mode
Feb 25 07:16:06 np0005629333 NetworkManager[49836]: <info>  [1772021766.5249] manager: (tap277c556d-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/22)
Feb 25 07:16:06 np0005629333 ovn_controller[147040]: 2026-02-25T12:16:06Z|00027|binding|INFO|Claiming lport 277c556d-c41e-4d6d-9f29-56e96f6a65e2 for this chassis.
Feb 25 07:16:06 np0005629333 ovn_controller[147040]: 2026-02-25T12:16:06Z|00028|binding|INFO|277c556d-c41e-4d6d-9f29-56e96f6a65e2: Claiming fa:16:3e:9f:6d:04 10.100.0.6
Feb 25 07:16:06 np0005629333 nova_compute[244014]: 2026-02-25 12:16:06.526 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:06 np0005629333 nova_compute[244014]: 2026-02-25 12:16:06.533 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:06.559 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:6d:04 10.100.0.6'], port_security=['fa:16:3e:9f:6d:04 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6de08989-13cc-415b-adc9-04b338e13d0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0cd0968a9a1b4b9e984b0a10a6ac77a8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f43142d8-27de-490e-9b86-d4e77c9c7e07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad5d1aaf-d2e8-49e5-aeb2-335f304b15d9, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=277c556d-c41e-4d6d-9f29-56e96f6a65e2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:16:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:06.562 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 277c556d-c41e-4d6d-9f29-56e96f6a65e2 in datapath d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 bound to our chassis#033[00m
Feb 25 07:16:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:06.566 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d3fb36f1-0e88-43b4-a8a4-3844d55f1de8#033[00m
Feb 25 07:16:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:06.567 157129 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp2ek8exws/privsep.sock']#033[00m
Feb 25 07:16:06 np0005629333 systemd-udevd[253249]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:16:06 np0005629333 systemd-machined[210048]: New machine qemu-6-instance-00000005.
Feb 25 07:16:06 np0005629333 NetworkManager[49836]: <info>  [1772021766.6087] device (tap277c556d-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:16:06 np0005629333 NetworkManager[49836]: <info>  [1772021766.6095] device (tap277c556d-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:16:06 np0005629333 nova_compute[244014]: 2026-02-25 12:16:06.607 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:06 np0005629333 ovn_controller[147040]: 2026-02-25T12:16:06Z|00029|binding|INFO|Setting lport 277c556d-c41e-4d6d-9f29-56e96f6a65e2 ovn-installed in OVS
Feb 25 07:16:06 np0005629333 ovn_controller[147040]: 2026-02-25T12:16:06Z|00030|binding|INFO|Setting lport 277c556d-c41e-4d6d-9f29-56e96f6a65e2 up in Southbound
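The claim/up sequence above is the OVN side of the plug: ovn-controller writes its chassis into the Southbound Port_Binding row, installs flows, and sets the port up, which is what later triggers neutron's network-vif-plugged event. One way to verify the binding from the chassis (illustrative; requires ovn-sbctl pointed at the Southbound DB):

    # Ask the SB DB which chassis holds the binding just claimed.
    import subprocess

    port = '277c556d-c41e-4d6d-9f29-56e96f6a65e2'
    out = subprocess.run(
        ['ovn-sbctl', '--bare', '--columns=chassis,up',
         'find', 'Port_Binding', f'logical_port={port}'],
        check=True, capture_output=True, text=True).stdout
    print(out)  # expect the compute-0 chassis UUID and "true"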
Feb 25 07:16:06 np0005629333 nova_compute[244014]: 2026-02-25 12:16:06.612 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:06 np0005629333 systemd[1]: Started Virtual Machine qemu-6-instance-00000005.
Feb 25 07:16:06 np0005629333 nova_compute[244014]: 2026-02-25 12:16:06.958 244018 DEBUG oslo_concurrency.lockutils [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "6d979dde-168d-4976-99a4-bb4e3eb22ae0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:16:06 np0005629333 nova_compute[244014]: 2026-02-25 12:16:06.959 244018 DEBUG oslo_concurrency.lockutils [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "6d979dde-168d-4976-99a4-bb4e3eb22ae0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:16:06 np0005629333 nova_compute[244014]: 2026-02-25 12:16:06.960 244018 DEBUG oslo_concurrency.lockutils [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "6d979dde-168d-4976-99a4-bb4e3eb22ae0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:16:06 np0005629333 nova_compute[244014]: 2026-02-25 12:16:06.960 244018 DEBUG oslo_concurrency.lockutils [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "6d979dde-168d-4976-99a4-bb4e3eb22ae0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:16:06 np0005629333 nova_compute[244014]: 2026-02-25 12:16:06.960 244018 DEBUG oslo_concurrency.lockutils [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "6d979dde-168d-4976-99a4-bb4e3eb22ae0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:16:06 np0005629333 nova_compute[244014]: 2026-02-25 12:16:06.963 244018 INFO nova.compute.manager [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Terminating instance#033[00m
Feb 25 07:16:06 np0005629333 nova_compute[244014]: 2026-02-25 12:16:06.964 244018 DEBUG oslo_concurrency.lockutils [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "refresh_cache-6d979dde-168d-4976-99a4-bb4e3eb22ae0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:16:06 np0005629333 nova_compute[244014]: 2026-02-25 12:16:06.964 244018 DEBUG oslo_concurrency.lockutils [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquired lock "refresh_cache-6d979dde-168d-4976-99a4-bb4e3eb22ae0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:16:06 np0005629333 nova_compute[244014]: 2026-02-25 12:16:06.965 244018 DEBUG nova.network.neutron [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:16:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:07.174 157129 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Feb 25 07:16:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:07.175 157129 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp2ek8exws/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Feb 25 07:16:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:07.076 253268 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Feb 25 07:16:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:07.081 253268 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Feb 25 07:16:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:07.084 253268 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Feb 25 07:16:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:07.085 253268 INFO oslo.privsep.daemon [-] privsep daemon running as pid 253268#033[00m
Feb 25 07:16:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:07.180 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[09de8cf9-5db8-4229-b592-bff9ceb82099]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.320 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021767.319154, 6de08989-13cc-415b-adc9-04b338e13d0f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.320 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] VM Started (Lifecycle Event)#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.343 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.348 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021767.3194554, 6de08989-13cc-415b-adc9-04b338e13d0f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.348 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.369 244018 DEBUG nova.network.neutron [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.374 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.378 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.404 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.458 244018 DEBUG nova.compute.manager [req-19f865af-d0df-42f6-bd8f-574e429f3ef8 req-f4df1cd1-7ac5-42dd-b7bd-49c00a960351 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Received event network-vif-plugged-277c556d-c41e-4d6d-9f29-56e96f6a65e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.459 244018 DEBUG oslo_concurrency.lockutils [req-19f865af-d0df-42f6-bd8f-574e429f3ef8 req-f4df1cd1-7ac5-42dd-b7bd-49c00a960351 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.459 244018 DEBUG oslo_concurrency.lockutils [req-19f865af-d0df-42f6-bd8f-574e429f3ef8 req-f4df1cd1-7ac5-42dd-b7bd-49c00a960351 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.460 244018 DEBUG oslo_concurrency.lockutils [req-19f865af-d0df-42f6-bd8f-574e429f3ef8 req-f4df1cd1-7ac5-42dd-b7bd-49c00a960351 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.460 244018 DEBUG nova.compute.manager [req-19f865af-d0df-42f6-bd8f-574e429f3ef8 req-f4df1cd1-7ac5-42dd-b7bd-49c00a960351 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Processing event network-vif-plugged-277c556d-c41e-4d6d-9f29-56e96f6a65e2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.461 244018 DEBUG nova.compute.manager [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
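This closes the vif-plug handshake: nova registered a waiter for the event at 12:16:02 (prepare_for_instance_event), OVN bound the port, neutron saw it go up and posted the event to nova's os-server-external-events API, and the waiter fired immediately. The event body has this shape, with server_uuid and tag copied from the records above:

    # Payload shape for POST /os-server-external-events as received here.
    payload = {
        'events': [{
            'server_uuid': '6de08989-13cc-415b-adc9-04b338e13d0f',
            'name': 'network-vif-plugged',
            'tag': '277c556d-c41e-4d6d-9f29-56e96f6a65e2',
            'status': 'completed',
        }]
    }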
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.466 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021767.4660094, 6de08989-13cc-415b-adc9-04b338e13d0f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.466 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.469 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.473 244018 INFO nova.virt.libvirt.driver [-] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Instance spawned successfully.#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.473 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.497 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.505 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:16:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 326 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 197 op/s
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.510 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.510 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.511 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.512 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.513 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.513 244018 DEBUG nova.virt.libvirt.driver [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.560 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.601 244018 INFO nova.compute.manager [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Took 14.47 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.601 244018 DEBUG nova.compute.manager [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.675 244018 INFO nova.compute.manager [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Took 16.56 seconds to build instance.#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.686 244018 DEBUG nova.network.neutron [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.700 244018 DEBUG oslo_concurrency.lockutils [None req-fb1558bc-acfa-4db4-9e2c-bcd3dd614981 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.707 244018 DEBUG oslo_concurrency.lockutils [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Releasing lock "refresh_cache-6d979dde-168d-4976-99a4-bb4e3eb22ae0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.708 244018 DEBUG nova.compute.manager [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.716 244018 INFO nova.virt.libvirt.driver [-] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Instance destroyed successfully.#033[00m
Feb 25 07:16:07 np0005629333 nova_compute[244014]: 2026-02-25 12:16:07.717 244018 DEBUG nova.objects.instance [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lazy-loading 'resources' on Instance uuid 6d979dde-168d-4976-99a4-bb4e3eb22ae0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:16:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:07.726 253268 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:16:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:07.727 253268 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:16:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:07.727 253268 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:16:08 np0005629333 nova_compute[244014]: 2026-02-25 12:16:08.184 244018 INFO nova.virt.libvirt.driver [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Deleting instance files /var/lib/nova/instances/6d979dde-168d-4976-99a4-bb4e3eb22ae0_del#033[00m
Feb 25 07:16:08 np0005629333 nova_compute[244014]: 2026-02-25 12:16:08.185 244018 INFO nova.virt.libvirt.driver [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Deletion of /var/lib/nova/instances/6d979dde-168d-4976-99a4-bb4e3eb22ae0_del complete#033[00m
Feb 25 07:16:08 np0005629333 nova_compute[244014]: 2026-02-25 12:16:08.255 244018 INFO nova.compute.manager [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Took 0.55 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:16:08 np0005629333 nova_compute[244014]: 2026-02-25 12:16:08.256 244018 DEBUG oslo.service.loopingcall [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:16:08 np0005629333 nova_compute[244014]: 2026-02-25 12:16:08.257 244018 DEBUG nova.compute.manager [-] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:16:08 np0005629333 nova_compute[244014]: 2026-02-25 12:16:08.257 244018 DEBUG nova.network.neutron [-] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:16:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.329 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c8e5cd66-6be3-418e-ba1b-36ce97e0927a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.330 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd3fb36f1-01 in ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:16:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.334 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd3fb36f1-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:16:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.334 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[39053281-eb6b-43e3-9134-efce1e9fbae5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.340 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[979d9fea-bab6-4ab9-acc6-61f20ea855c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:08 np0005629333 nova_compute[244014]: 2026-02-25 12:16:08.347 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.365 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed7b61b-502a-4dc6-87a0-32b4bb4139b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.383 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9d9eb080-9739-4ad2-8bbc-ea11f2b59b8b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.385 157129 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp1h0_cpqu/privsep.sock']#033[00m
Feb 25 07:16:08 np0005629333 nova_compute[244014]: 2026-02-25 12:16:08.658 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:08 np0005629333 nova_compute[244014]: 2026-02-25 12:16:08.936 244018 DEBUG nova.network.neutron [-] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:16:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.961 157129 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Feb 25 07:16:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.963 157129 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp1h0_cpqu/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Feb 25 07:16:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.870 253343 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Feb 25 07:16:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.875 253343 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Feb 25 07:16:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.879 253343 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Feb 25 07:16:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.879 253343 INFO oslo.privsep.daemon [-] privsep daemon running as pid 253343#033[00m
Feb 25 07:16:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:08.967 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[6a0337b7-7500-4f2d-9113-80724f9c89d0]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:16:09 np0005629333 nova_compute[244014]: 2026-02-25 12:16:09.202 244018 DEBUG nova.network.neutron [-] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:16:09 np0005629333 nova_compute[244014]: 2026-02-25 12:16:09.227 244018 INFO nova.compute.manager [-] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Took 0.97 seconds to deallocate network for instance.#033[00m
Feb 25 07:16:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:09.400 253343 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:16:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:09.400 253343 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:16:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:09.400 253343 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:16:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 326 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.6 MiB/s wr, 134 op/s
Feb 25 07:16:09 np0005629333 nova_compute[244014]: 2026-02-25 12:16:09.818 244018 DEBUG oslo_concurrency.lockutils [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:16:09 np0005629333 nova_compute[244014]: 2026-02-25 12:16:09.819 244018 DEBUG oslo_concurrency.lockutils [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:16:09 np0005629333 nova_compute[244014]: 2026-02-25 12:16:09.826 244018 DEBUG oslo_concurrency.lockutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Acquiring lock "661b348d-5b73-45ed-8357-3aefed90d054" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:16:09 np0005629333 nova_compute[244014]: 2026-02-25 12:16:09.827 244018 DEBUG oslo_concurrency.lockutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lock "661b348d-5b73-45ed-8357-3aefed90d054" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:16:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:09.876 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9c4cd20b-dd5c-4f38-a1b9-b325f9b44b40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:09 np0005629333 NetworkManager[49836]: <info>  [1772021769.8960] manager: (tapd3fb36f1-00): new Veth device (/org/freedesktop/NetworkManager/Devices/23)
Feb 25 07:16:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:09.896 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6689e1ed-1851-4104-9e91-a8647e09c6a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:09 np0005629333 systemd-udevd[253355]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:16:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:09.922 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[90b54755-c9a1-45b9-8254-72c86af1f8a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:09.926 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[eca92c68-2365-41a6-83ee-8df38e93056a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:09 np0005629333 NetworkManager[49836]: <info>  [1772021769.9485] device (tapd3fb36f1-00): carrier: link connected
Feb 25 07:16:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:09.951 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d750fc87-7cc8-407d-bb04-5625da359b72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:09.965 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[011b4c04-2b02-4b41-9b22-e608da8c14a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd3fb36f1-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:f0:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 373935, 'reachable_time': 27729, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253373, 'error': None, 'target': 'ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:09.981 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e9ec494f-050d-4f80-93ee-353b9c226265]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb7:f004'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 373935, 'tstamp': 373935}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253375, 'error': None, 'target': 'ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:09.994 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[658085f5-8a7a-47a8-88d4-615db235503c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd3fb36f1-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:f0:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 373935, 'reachable_time': 27729, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253376, 'error': None, 'target': 'ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
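The two RTM_NEWLINK payloads above are pyroute2 netlink messages fetched by the privsep daemon inside the ovnmeta namespace and serialized back to the unprivileged agent. A minimal sketch of reading the same attributes with pyroute2 (the interface and namespace names are taken from the log; this illustrates the API, it is not neutron's actual code path):

    # Sketch: inspect the veth inside the ovnmeta namespace, as the agent does.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8') as ns:
        idx = ns.link_lookup(ifname='tapd3fb36f1-01')[0]
        (msg,) = ns.get_links(idx)
        print(msg.get_attr('IFLA_ADDRESS'))    # fa:16:3e:b7:f0:04 in the log
        print(msg.get_attr('IFLA_OPERSTATE'))  # 'UP'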
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:10.018 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2960a571-bd55-46c1-b336-2c2b874dd024]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:10.053 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ee6eacb5-ca8b-4c83-acaf-4f9ae1e13d05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:10.055 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3fb36f1-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:10.055 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:10.056 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3fb36f1-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:16:10 np0005629333 NetworkManager[49836]: <info>  [1772021770.0588] manager: (tapd3fb36f1-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/24)
Feb 25 07:16:10 np0005629333 nova_compute[244014]: 2026-02-25 12:16:10.058 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:10 np0005629333 kernel: tapd3fb36f1-00: entered promiscuous mode
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:10.065 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd3fb36f1-00, col_values=(('external_ids', {'iface-id': '642d12d0-ef5c-4bc5-ba96-4b85e033986b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
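The DelPortCommand/AddPortCommand/DbSetCommand entries above are ovsdbapp OVSDB operations: the tap device is removed from br-ex if present, plugged into br-int, and tagged with the Neutron port ID so ovn-controller can bind it. A rough sketch of the equivalent calls with the ovsdbapp API (the agent runs each as its own txn n=1; they are combined into one transaction here for brevity, and the socket path is an assumption):

    # Sketch of the three OVSDB commands logged above.
    import ovs.db.idl
    from ovsdbapp.backend.ovs_idl import connection, idlutils
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/var/run/openvswitch/db.sock'  # assumed local socket
    helper = idlutils.get_schema_helper(OVSDB, 'Open_vSwitch')
    helper.register_all()
    api = impl_idl.OvsdbIdl(
        connection.Connection(ovs.db.idl.Idl(OVSDB, helper), timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapd3fb36f1-00', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapd3fb36f1-00', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapd3fb36f1-00',
            ('external_ids', {'iface-id': '642d12d0-ef5c-4bc5-ba96-4b85e033986b'})))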
Feb 25 07:16:10 np0005629333 nova_compute[244014]: 2026-02-25 12:16:10.064 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:16:10Z|00031|binding|INFO|Releasing lport 642d12d0-ef5c-4bc5-ba96-4b85e033986b from this chassis (sb_readonly=0)
Feb 25 07:16:10 np0005629333 nova_compute[244014]: 2026-02-25 12:16:10.066 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:10 np0005629333 nova_compute[244014]: 2026-02-25 12:16:10.081 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:10 np0005629333 nova_compute[244014]: 2026-02-25 12:16:10.089 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:10.090 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d3fb36f1-0e88-43b4-a8a4-3844d55f1de8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d3fb36f1-0e88-43b4-a8a4-3844d55f1de8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:10.091 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2dd523c3-da9f-4cb1-8135-d708b066e2c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:10.092 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/d3fb36f1-0e88-43b4-a8a4-3844d55f1de8.pid.haproxy
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID d3fb36f1-0e88-43b4-a8a4-3844d55f1de8
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:16:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:10.093 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'env', 'PROCESS_TAG=haproxy-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d3fb36f1-0e88-43b4-a8a4-3844d55f1de8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
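The haproxy_cfg dump above is rendered by create_config_file and the proxy is then launched inside the ovnmeta namespace through rootwrap. A simplified sketch of templating an equivalent config (the real generator lives in neutron/agent/ovn/metadata/driver.py; this stand-in only fills in values visible in the log):

    # Sketch: build a metadata-proxy haproxy config like the one logged above.
    NETWORK_ID = 'd3fb36f1-0e88-43b4-a8a4-3844d55f1de8'
    cfg = f"""global
        log         /dev/log local0 debug
        log-tag     haproxy-metadata-proxy-{NETWORK_ID}
        user        root
        group       root
        maxconn     1024
        pidfile     /var/lib/neutron/external/pids/{NETWORK_ID}.pid.haproxy
        daemon

    listen listener
        bind 169.254.169.254:80
        server metadata /var/lib/neutron/metadata_proxy
        http-request add-header X-OVN-Network-ID {NETWORK_ID}
    """
    with open(f'/var/lib/neutron/ovn-metadata-proxy/{NETWORK_ID}.conf', 'w') as f:
        f.write(cfg)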
Feb 25 07:16:10 np0005629333 podman[253409]: 2026-02-25 12:16:10.428620273 +0000 UTC m=+0.072970193 container create 4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 07:16:10 np0005629333 nova_compute[244014]: 2026-02-25 12:16:10.436 244018 DEBUG nova.compute.manager [req-b5243f3c-4750-4449-813e-6ebd0214b71b req-119d2660-86d0-48d6-a4be-4a185785dd09 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Received event network-vif-plugged-277c556d-c41e-4d6d-9f29-56e96f6a65e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:16:10 np0005629333 nova_compute[244014]: 2026-02-25 12:16:10.437 244018 DEBUG oslo_concurrency.lockutils [req-b5243f3c-4750-4449-813e-6ebd0214b71b req-119d2660-86d0-48d6-a4be-4a185785dd09 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:16:10 np0005629333 nova_compute[244014]: 2026-02-25 12:16:10.438 244018 DEBUG oslo_concurrency.lockutils [req-b5243f3c-4750-4449-813e-6ebd0214b71b req-119d2660-86d0-48d6-a4be-4a185785dd09 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:16:10 np0005629333 nova_compute[244014]: 2026-02-25 12:16:10.438 244018 DEBUG oslo_concurrency.lockutils [req-b5243f3c-4750-4449-813e-6ebd0214b71b req-119d2660-86d0-48d6-a4be-4a185785dd09 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:16:10 np0005629333 nova_compute[244014]: 2026-02-25 12:16:10.439 244018 DEBUG nova.compute.manager [req-b5243f3c-4750-4449-813e-6ebd0214b71b req-119d2660-86d0-48d6-a4be-4a185785dd09 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] No waiting events found dispatching network-vif-plugged-277c556d-c41e-4d6d-9f29-56e96f6a65e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:16:10 np0005629333 nova_compute[244014]: 2026-02-25 12:16:10.439 244018 WARNING nova.compute.manager [req-b5243f3c-4750-4449-813e-6ebd0214b71b req-119d2660-86d0-48d6-a4be-4a185785dd09 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Received unexpected event network-vif-plugged-277c556d-c41e-4d6d-9f29-56e96f6a65e2 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:16:10 np0005629333 nova_compute[244014]: 2026-02-25 12:16:10.454 244018 DEBUG nova.compute.manager [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:16:10 np0005629333 podman[253409]: 2026-02-25 12:16:10.378350257 +0000 UTC m=+0.022700217 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:16:10 np0005629333 systemd[1]: Started libpod-conmon-4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c.scope.
Feb 25 07:16:10 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:16:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df85cc05f2dfb2a93e56b0f9c6c65dbdcac823808956d17e7a377f617bac8210/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:16:10 np0005629333 podman[253409]: 2026-02-25 12:16:10.534749582 +0000 UTC m=+0.179099542 container init 4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:16:10 np0005629333 podman[253409]: 2026-02-25 12:16:10.546149521 +0000 UTC m=+0.190499431 container start 4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223)
Feb 25 07:16:10 np0005629333 nova_compute[244014]: 2026-02-25 12:16:10.579 244018 DEBUG oslo_concurrency.processutils [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:16:10 np0005629333 neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8[253424]: [NOTICE]   (253428) : New worker (253431) forked
Feb 25 07:16:10 np0005629333 neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8[253424]: [NOTICE]   (253428) : Loading success.
Feb 25 07:16:10 np0005629333 nova_compute[244014]: 2026-02-25 12:16:10.645 244018 DEBUG oslo_concurrency.lockutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:16:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:16:11 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3002540957' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:16:11 np0005629333 nova_compute[244014]: 2026-02-25 12:16:11.087 244018 DEBUG oslo_concurrency.processutils [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
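Each "Running cmd (subprocess)" / "CMD ... returned: 0" pair in this log is oslo.concurrency's processutils wrapper timing a shell-out. A sketch of the ceph df call above (same command line as the log; the default behavior raises ProcessExecutionError on a non-zero exit):

    # Sketch: the subprocess call behind the Running cmd / CMD returned pair.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    pools = json.loads(out)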
Feb 25 07:16:11 np0005629333 nova_compute[244014]: 2026-02-25 12:16:11.093 244018 DEBUG nova.compute.provider_tree [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:16:11 np0005629333 nova_compute[244014]: 2026-02-25 12:16:11.146 244018 DEBUG nova.scheduler.client.report [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
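Worked numbers for the inventory above: placement treats usable capacity as (total - reserved) * allocation_ratio, so this provider can hold allocations totaling 32 VCPU, 7167 MB of RAM, and about 52.2 GB of disk:

    # Effective schedulable capacity implied by the logged inventory.
    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, round(capacity, 2))  # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2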
Feb 25 07:16:11 np0005629333 NetworkManager[49836]: <info>  [1772021771.1506] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/25)
Feb 25 07:16:11 np0005629333 NetworkManager[49836]: <info>  [1772021771.1513] device (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 25 07:16:11 np0005629333 NetworkManager[49836]: <warn>  [1772021771.1515] device (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 25 07:16:11 np0005629333 NetworkManager[49836]: <info>  [1772021771.1524] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/26)
Feb 25 07:16:11 np0005629333 NetworkManager[49836]: <info>  [1772021771.1529] device (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Feb 25 07:16:11 np0005629333 NetworkManager[49836]: <warn>  [1772021771.1530] device (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Feb 25 07:16:11 np0005629333 NetworkManager[49836]: <info>  [1772021771.1539] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Feb 25 07:16:11 np0005629333 NetworkManager[49836]: <info>  [1772021771.1548] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Feb 25 07:16:11 np0005629333 NetworkManager[49836]: <info>  [1772021771.1553] device (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb 25 07:16:11 np0005629333 NetworkManager[49836]: <info>  [1772021771.1556] device (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Feb 25 07:16:11 np0005629333 nova_compute[244014]: 2026-02-25 12:16:11.173 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:11 np0005629333 nova_compute[244014]: 2026-02-25 12:16:11.201 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:11 np0005629333 ovn_controller[147040]: 2026-02-25T12:16:11Z|00032|binding|INFO|Releasing lport 642d12d0-ef5c-4bc5-ba96-4b85e033986b from this chassis (sb_readonly=0)
Feb 25 07:16:11 np0005629333 nova_compute[244014]: 2026-02-25 12:16:11.216 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:11 np0005629333 nova_compute[244014]: 2026-02-25 12:16:11.347 244018 DEBUG oslo_concurrency.lockutils [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.528s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:16:11 np0005629333 nova_compute[244014]: 2026-02-25 12:16:11.350 244018 DEBUG oslo_concurrency.lockutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:16:11 np0005629333 nova_compute[244014]: 2026-02-25 12:16:11.357 244018 DEBUG nova.virt.hardware [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:16:11 np0005629333 nova_compute[244014]: 2026-02-25 12:16:11.358 244018 INFO nova.compute.claims [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:16:11 np0005629333 nova_compute[244014]: 2026-02-25 12:16:11.404 244018 INFO nova.scheduler.client.report [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Deleted allocations for instance 6d979dde-168d-4976-99a4-bb4e3eb22ae0#033[00m
Feb 25 07:16:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 292 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.6 MiB/s wr, 178 op/s
Feb 25 07:16:11 np0005629333 nova_compute[244014]: 2026-02-25 12:16:11.567 244018 DEBUG oslo_concurrency.processutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:16:11 np0005629333 nova_compute[244014]: 2026-02-25 12:16:11.587 244018 DEBUG oslo_concurrency.lockutils [None req-45d8452d-5ccc-4ed2-a344-9147ada2fbcf 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "6d979dde-168d-4976-99a4-bb4e3eb22ae0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:16:11 np0005629333 nova_compute[244014]: 2026-02-25 12:16:11.807 244018 DEBUG nova.compute.manager [req-78f27a64-e0d9-4c8e-b4d4-1f479eee1c63 req-f408a13d-d10b-420a-8065-5ef8c12cea0d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Received event network-changed-277c556d-c41e-4d6d-9f29-56e96f6a65e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:16:11 np0005629333 nova_compute[244014]: 2026-02-25 12:16:11.807 244018 DEBUG nova.compute.manager [req-78f27a64-e0d9-4c8e-b4d4-1f479eee1c63 req-f408a13d-d10b-420a-8065-5ef8c12cea0d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Refreshing instance network info cache due to event network-changed-277c556d-c41e-4d6d-9f29-56e96f6a65e2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:16:11 np0005629333 nova_compute[244014]: 2026-02-25 12:16:11.808 244018 DEBUG oslo_concurrency.lockutils [req-78f27a64-e0d9-4c8e-b4d4-1f479eee1c63 req-f408a13d-d10b-420a-8065-5ef8c12cea0d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-6de08989-13cc-415b-adc9-04b338e13d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:16:11 np0005629333 nova_compute[244014]: 2026-02-25 12:16:11.808 244018 DEBUG oslo_concurrency.lockutils [req-78f27a64-e0d9-4c8e-b4d4-1f479eee1c63 req-f408a13d-d10b-420a-8065-5ef8c12cea0d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-6de08989-13cc-415b-adc9-04b338e13d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:16:11 np0005629333 nova_compute[244014]: 2026-02-25 12:16:11.808 244018 DEBUG nova.network.neutron [req-78f27a64-e0d9-4c8e-b4d4-1f479eee1c63 req-f408a13d-d10b-420a-8065-5ef8c12cea0d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Refreshing network info cache for port 277c556d-c41e-4d6d-9f29-56e96f6a65e2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:16:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:16:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3664686447' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.095 244018 DEBUG oslo_concurrency.processutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.101 244018 DEBUG nova.compute.provider_tree [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.123 244018 DEBUG nova.scheduler.client.report [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.186 244018 DEBUG oslo_concurrency.lockutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.187 244018 DEBUG nova.compute.manager [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.352 244018 DEBUG nova.compute.manager [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.408 244018 INFO nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.443 244018 DEBUG nova.compute.manager [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.601 244018 DEBUG nova.compute.manager [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.603 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.605 244018 INFO nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Creating image(s)
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.635 244018 DEBUG nova.storage.rbd_utils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] rbd image 661b348d-5b73-45ed-8357-3aefed90d054_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.669 244018 DEBUG nova.storage.rbd_utils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] rbd image 661b348d-5b73-45ed-8357-3aefed90d054_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.707 244018 DEBUG nova.storage.rbd_utils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] rbd image 661b348d-5b73-45ed-8357-3aefed90d054_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.713 244018 DEBUG oslo_concurrency.processutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.815 244018 DEBUG oslo_concurrency.processutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.816 244018 DEBUG oslo_concurrency.lockutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.817 244018 DEBUG oslo_concurrency.lockutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.818 244018 DEBUG oslo_concurrency.lockutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.851 244018 DEBUG nova.storage.rbd_utils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] rbd image 661b348d-5b73-45ed-8357-3aefed90d054_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.857 244018 DEBUG oslo_concurrency.processutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 661b348d-5b73-45ed-8357-3aefed90d054_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:16:12 np0005629333 podman[253500]: 2026-02-25 12:16:12.892925295 +0000 UTC m=+0.066355536 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223)
Feb 25 07:16:12 np0005629333 podman[253542]: 2026-02-25 12:16:12.932568345 +0000 UTC m=+0.106315606 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.977 244018 DEBUG oslo_concurrency.lockutils [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "ee9e86a4-8a34-43a9-baf1-f9b2e7f85534" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.977 244018 DEBUG oslo_concurrency.lockutils [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "ee9e86a4-8a34-43a9-baf1-f9b2e7f85534" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.978 244018 DEBUG oslo_concurrency.lockutils [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "ee9e86a4-8a34-43a9-baf1-f9b2e7f85534-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.978 244018 DEBUG oslo_concurrency.lockutils [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "ee9e86a4-8a34-43a9-baf1-f9b2e7f85534-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.978 244018 DEBUG oslo_concurrency.lockutils [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "ee9e86a4-8a34-43a9-baf1-f9b2e7f85534-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.980 244018 INFO nova.compute.manager [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Terminating instance
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.981 244018 DEBUG oslo_concurrency.lockutils [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "refresh_cache-ee9e86a4-8a34-43a9-baf1-f9b2e7f85534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.981 244018 DEBUG oslo_concurrency.lockutils [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquired lock "refresh_cache-ee9e86a4-8a34-43a9-baf1-f9b2e7f85534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:16:12 np0005629333 nova_compute[244014]: 2026-02-25 12:16:12.981 244018 DEBUG nova.network.neutron [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.155 244018 DEBUG oslo_concurrency.processutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 661b348d-5b73-45ed-8357-3aefed90d054_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.192 244018 DEBUG nova.network.neutron [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.244 244018 DEBUG nova.storage.rbd_utils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] resizing rbd image 661b348d-5b73-45ed-8357-3aefed90d054_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.351 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.364 244018 DEBUG nova.objects.instance [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lazy-loading 'migration_context' on Instance uuid 661b348d-5b73-45ed-8357-3aefed90d054 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.441 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.442 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Ensure instance console log exists: /var/lib/nova/instances/661b348d-5b73-45ed-8357-3aefed90d054/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.443 244018 DEBUG oslo_concurrency.lockutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.443 244018 DEBUG oslo_concurrency.lockutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.443 244018 DEBUG oslo_concurrency.lockutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.446 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.450 244018 WARNING nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.458 244018 DEBUG nova.virt.libvirt.host [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.459 244018 DEBUG nova.virt.libvirt.host [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.476 244018 DEBUG nova.virt.libvirt.host [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.477 244018 DEBUG nova.virt.libvirt.host [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.477 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.477 244018 DEBUG nova.virt.hardware [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.478 244018 DEBUG nova.virt.hardware [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.478 244018 DEBUG nova.virt.hardware [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.479 244018 DEBUG nova.virt.hardware [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.479 244018 DEBUG nova.virt.hardware [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.479 244018 DEBUG nova.virt.hardware [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.479 244018 DEBUG nova.virt.hardware [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.480 244018 DEBUG nova.virt.hardware [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.480 244018 DEBUG nova.virt.hardware [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.480 244018 DEBUG nova.virt.hardware [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.481 244018 DEBUG nova.virt.hardware [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.485 244018 DEBUG oslo_concurrency.processutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.506 244018 DEBUG nova.network.neutron [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:16:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 279 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.0 MiB/s wr, 203 op/s
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.579 244018 DEBUG oslo_concurrency.lockutils [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Releasing lock "refresh_cache-ee9e86a4-8a34-43a9-baf1-f9b2e7f85534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.580 244018 DEBUG nova.compute.manager [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 07:16:13 np0005629333 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Feb 25 07:16:13 np0005629333 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 12.653s CPU time.
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.660 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:16:13 np0005629333 systemd-machined[210048]: Machine qemu-4-instance-00000004 terminated.
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.743 244018 DEBUG nova.network.neutron [req-78f27a64-e0d9-4c8e-b4d4-1f479eee1c63 req-f408a13d-d10b-420a-8065-5ef8c12cea0d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Updated VIF entry in instance network info cache for port 277c556d-c41e-4d6d-9f29-56e96f6a65e2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.744 244018 DEBUG nova.network.neutron [req-78f27a64-e0d9-4c8e-b4d4-1f479eee1c63 req-f408a13d-d10b-420a-8065-5ef8c12cea0d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Updating instance_info_cache with network_info: [{"id": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "address": "fa:16:3e:9f:6d:04", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap277c556d-c4", "ovs_interfaceid": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.769 244018 DEBUG oslo_concurrency.lockutils [req-78f27a64-e0d9-4c8e-b4d4-1f479eee1c63 req-f408a13d-d10b-420a-8065-5ef8c12cea0d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-6de08989-13cc-415b-adc9-04b338e13d0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.798 244018 INFO nova.virt.libvirt.driver [-] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Instance destroyed successfully.
Feb 25 07:16:13 np0005629333 nova_compute[244014]: 2026-02-25 12:16:13.799 244018 DEBUG nova.objects.instance [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lazy-loading 'resources' on Instance uuid ee9e86a4-8a34-43a9-baf1-f9b2e7f85534 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:16:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:16:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1367881996' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:16:14 np0005629333 nova_compute[244014]: 2026-02-25 12:16:14.101 244018 DEBUG oslo_concurrency.processutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:16:14 np0005629333 nova_compute[244014]: 2026-02-25 12:16:14.134 244018 DEBUG nova.storage.rbd_utils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] rbd image 661b348d-5b73-45ed-8357-3aefed90d054_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:16:14 np0005629333 nova_compute[244014]: 2026-02-25 12:16:14.139 244018 DEBUG oslo_concurrency.processutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:16:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:16:14 np0005629333 nova_compute[244014]: 2026-02-25 12:16:14.310 244018 INFO nova.virt.libvirt.driver [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Deleting instance files /var/lib/nova/instances/ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_del
Feb 25 07:16:14 np0005629333 nova_compute[244014]: 2026-02-25 12:16:14.311 244018 INFO nova.virt.libvirt.driver [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Deletion of /var/lib/nova/instances/ee9e86a4-8a34-43a9-baf1-f9b2e7f85534_del complete
Feb 25 07:16:14 np0005629333 nova_compute[244014]: 2026-02-25 12:16:14.455 244018 INFO nova.compute.manager [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Took 0.88 seconds to destroy the instance on the hypervisor.
Feb 25 07:16:14 np0005629333 nova_compute[244014]: 2026-02-25 12:16:14.456 244018 DEBUG oslo.service.loopingcall [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 07:16:14 np0005629333 nova_compute[244014]: 2026-02-25 12:16:14.456 244018 DEBUG nova.compute.manager [-] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 07:16:14 np0005629333 nova_compute[244014]: 2026-02-25 12:16:14.457 244018 DEBUG nova.network.neutron [-] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 07:16:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:16:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2913879078' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:16:14 np0005629333 nova_compute[244014]: 2026-02-25 12:16:14.667 244018 DEBUG oslo_concurrency.processutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:16:14 np0005629333 nova_compute[244014]: 2026-02-25 12:16:14.668 244018 DEBUG nova.objects.instance [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 661b348d-5b73-45ed-8357-3aefed90d054 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:16:14 np0005629333 nova_compute[244014]: 2026-02-25 12:16:14.861 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:16:14 np0005629333 nova_compute[244014]:  <uuid>661b348d-5b73-45ed-8357-3aefed90d054</uuid>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:  <name>instance-00000007</name>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerDiagnosticsV248Test-server-1233417614</nova:name>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:16:13</nova:creationTime>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:16:14 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:        <nova:user uuid="6fff08b2934f41e8be5a7d014dc41013">tempest-ServerDiagnosticsV248Test-870949120-project-member</nova:user>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:        <nova:project uuid="c661c7cccd024c7080e4a22a47c088b7">tempest-ServerDiagnosticsV248Test-870949120</nova:project>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      <nova:ports/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      <entry name="serial">661b348d-5b73-45ed-8357-3aefed90d054</entry>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      <entry name="uuid">661b348d-5b73-45ed-8357-3aefed90d054</entry>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/661b348d-5b73-45ed-8357-3aefed90d054_disk">
Feb 25 07:16:14 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:16:14 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/661b348d-5b73-45ed-8357-3aefed90d054_disk.config">
Feb 25 07:16:14 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:16:14 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/661b348d-5b73-45ed-8357-3aefed90d054/console.log" append="off"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:16:14 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:16:14 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:16:14 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:16:14 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 07:16:15 np0005629333 nova_compute[244014]: 2026-02-25 12:16:15.070 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:16:15 np0005629333 nova_compute[244014]: 2026-02-25 12:16:15.071 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:16:15 np0005629333 nova_compute[244014]: 2026-02-25 12:16:15.072 244018 INFO nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Using config drive
Feb 25 07:16:15 np0005629333 nova_compute[244014]: 2026-02-25 12:16:15.101 244018 DEBUG nova.storage.rbd_utils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] rbd image 661b348d-5b73-45ed-8357-3aefed90d054_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:16:15 np0005629333 nova_compute[244014]: 2026-02-25 12:16:15.215 244018 DEBUG nova.network.neutron [-] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:16:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 279 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 27 KiB/s wr, 152 op/s
Feb 25 07:16:15 np0005629333 nova_compute[244014]: 2026-02-25 12:16:15.554 244018 DEBUG nova.network.neutron [-] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:16:15 np0005629333 nova_compute[244014]: 2026-02-25 12:16:15.618 244018 INFO nova.compute.manager [-] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Took 1.16 seconds to deallocate network for instance.
Feb 25 07:16:15 np0005629333 nova_compute[244014]: 2026-02-25 12:16:15.695 244018 INFO nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Creating config drive at /var/lib/nova/instances/661b348d-5b73-45ed-8357-3aefed90d054/disk.config
Feb 25 07:16:15 np0005629333 nova_compute[244014]: 2026-02-25 12:16:15.702 244018 DEBUG oslo_concurrency.processutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/661b348d-5b73-45ed-8357-3aefed90d054/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp28h6jrws execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:16:15 np0005629333 nova_compute[244014]: 2026-02-25 12:16:15.764 244018 DEBUG oslo_concurrency.lockutils [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:16:15 np0005629333 nova_compute[244014]: 2026-02-25 12:16:15.765 244018 DEBUG oslo_concurrency.lockutils [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:16:15 np0005629333 nova_compute[244014]: 2026-02-25 12:16:15.827 244018 DEBUG oslo_concurrency.processutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/661b348d-5b73-45ed-8357-3aefed90d054/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp28h6jrws" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:16:15 np0005629333 nova_compute[244014]: 2026-02-25 12:16:15.866 244018 DEBUG nova.storage.rbd_utils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] rbd image 661b348d-5b73-45ed-8357-3aefed90d054_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:16:15 np0005629333 nova_compute[244014]: 2026-02-25 12:16:15.870 244018 DEBUG oslo_concurrency.processutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/661b348d-5b73-45ed-8357-3aefed90d054/disk.config 661b348d-5b73-45ed-8357-3aefed90d054_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:16:15 np0005629333 nova_compute[244014]: 2026-02-25 12:16:15.914 244018 DEBUG oslo_concurrency.processutils [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:16:16 np0005629333 nova_compute[244014]: 2026-02-25 12:16:16.029 244018 DEBUG oslo_concurrency.processutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/661b348d-5b73-45ed-8357-3aefed90d054/disk.config 661b348d-5b73-45ed-8357-3aefed90d054_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:16:16 np0005629333 nova_compute[244014]: 2026-02-25 12:16:16.030 244018 INFO nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Deleting local config drive /var/lib/nova/instances/661b348d-5b73-45ed-8357-3aefed90d054/disk.config because it was imported into RBD.
Feb 25 07:16:16 np0005629333 systemd-machined[210048]: New machine qemu-7-instance-00000007.
Feb 25 07:16:16 np0005629333 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Feb 25 07:16:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:16:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3135789209' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:16:16 np0005629333 nova_compute[244014]: 2026-02-25 12:16:16.514 244018 DEBUG oslo_concurrency.processutils [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:16:16 np0005629333 nova_compute[244014]: 2026-02-25 12:16:16.519 244018 DEBUG nova.compute.provider_tree [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:16:16 np0005629333 nova_compute[244014]: 2026-02-25 12:16:16.566 244018 DEBUG nova.scheduler.client.report [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:16:16 np0005629333 nova_compute[244014]: 2026-02-25 12:16:16.573 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021776.5727217, 661b348d-5b73-45ed-8357-3aefed90d054 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:16:16 np0005629333 nova_compute[244014]: 2026-02-25 12:16:16.573 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] VM Resumed (Lifecycle Event)
Feb 25 07:16:16 np0005629333 nova_compute[244014]: 2026-02-25 12:16:16.575 244018 DEBUG nova.compute.manager [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 07:16:16 np0005629333 nova_compute[244014]: 2026-02-25 12:16:16.575 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 07:16:16 np0005629333 nova_compute[244014]: 2026-02-25 12:16:16.578 244018 INFO nova.virt.libvirt.driver [-] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Instance spawned successfully.
Feb 25 07:16:16 np0005629333 nova_compute[244014]: 2026-02-25 12:16:16.579 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 07:16:16 np0005629333 nova_compute[244014]: 2026-02-25 12:16:16.641 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:16:16 np0005629333 nova_compute[244014]: 2026-02-25 12:16:16.646 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:16:16 np0005629333 nova_compute[244014]: 2026-02-25 12:16:16.650 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:16 np0005629333 nova_compute[244014]: 2026-02-25 12:16:16.651 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:16 np0005629333 nova_compute[244014]: 2026-02-25 12:16:16.651 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:16 np0005629333 nova_compute[244014]: 2026-02-25 12:16:16.652 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:16 np0005629333 nova_compute[244014]: 2026-02-25 12:16:16.652 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:16 np0005629333 nova_compute[244014]: 2026-02-25 12:16:16.653 244018 DEBUG nova.virt.libvirt.driver [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:16 np0005629333 nova_compute[244014]: 2026-02-25 12:16:16.685 244018 DEBUG oslo_concurrency.lockutils [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:16:16 np0005629333 nova_compute[244014]: 2026-02-25 12:16:16.752 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:16:16 np0005629333 nova_compute[244014]: 2026-02-25 12:16:16.753 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021776.5746243, 661b348d-5b73-45ed-8357-3aefed90d054 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:16:16 np0005629333 nova_compute[244014]: 2026-02-25 12:16:16.753 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] VM Started (Lifecycle Event)#033[00m
Feb 25 07:16:16 np0005629333 nova_compute[244014]: 2026-02-25 12:16:16.795 244018 INFO nova.scheduler.client.report [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Deleted allocations for instance ee9e86a4-8a34-43a9-baf1-f9b2e7f85534#033[00m
Feb 25 07:16:16 np0005629333 nova_compute[244014]: 2026-02-25 12:16:16.875 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:16:16 np0005629333 nova_compute[244014]: 2026-02-25 12:16:16.884 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:16:16 np0005629333 nova_compute[244014]: 2026-02-25 12:16:16.945 244018 INFO nova.compute.manager [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Took 4.34 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:16:16 np0005629333 nova_compute[244014]: 2026-02-25 12:16:16.946 244018 DEBUG nova.compute.manager [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:16:17 np0005629333 nova_compute[244014]: 2026-02-25 12:16:17.325 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:16:17 np0005629333 nova_compute[244014]: 2026-02-25 12:16:17.385 244018 DEBUG oslo_concurrency.lockutils [None req-a0bd51af-f624-41d0-bf2e-78806ef0c3f2 0da29cefe0e94220a8d9cf895454b55c 6bddec791c4c46b7bef787d3d6634b12 - - default default] Lock "ee9e86a4-8a34-43a9-baf1-f9b2e7f85534" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.408s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:16:17 np0005629333 nova_compute[244014]: 2026-02-25 12:16:17.408 244018 INFO nova.compute.manager [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Took 6.79 seconds to build instance.#033[00m
Feb 25 07:16:17 np0005629333 nova_compute[244014]: 2026-02-25 12:16:17.431 244018 DEBUG oslo_concurrency.lockutils [None req-ba8cb7f5-a2ee-4817-9770-d2e7fddc693e 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lock "661b348d-5b73-45ed-8357-3aefed90d054" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:16:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 246 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 222 op/s
Feb 25 07:16:18 np0005629333 nova_compute[244014]: 2026-02-25 12:16:18.294 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021763.2925382, 6d979dde-168d-4976-99a4-bb4e3eb22ae0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:16:18 np0005629333 nova_compute[244014]: 2026-02-25 12:16:18.294 244018 INFO nova.compute.manager [-] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:16:18 np0005629333 nova_compute[244014]: 2026-02-25 12:16:18.351 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:18 np0005629333 ovn_controller[147040]: 2026-02-25T12:16:18Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9f:6d:04 10.100.0.6
Feb 25 07:16:18 np0005629333 ovn_controller[147040]: 2026-02-25T12:16:18Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9f:6d:04 10.100.0.6
Feb 25 07:16:18 np0005629333 nova_compute[244014]: 2026-02-25 12:16:18.392 244018 DEBUG nova.compute.manager [None req-4709e7fe-bf25-46e9-a89e-9e3f30c8e97a - - - - - -] [instance: 6d979dde-168d-4976-99a4-bb4e3eb22ae0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:16:18 np0005629333 nova_compute[244014]: 2026-02-25 12:16:18.662 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:18 np0005629333 nova_compute[244014]: 2026-02-25 12:16:18.683 244018 DEBUG nova.compute.manager [None req-508a393b-b326-48b1-be60-4cd4705114a9 431f432a640849889a0f6a0f429aadbc c7f58877702640de8bdf3e202a60769c - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:16:18 np0005629333 nova_compute[244014]: 2026-02-25 12:16:18.686 244018 INFO nova.compute.manager [None req-508a393b-b326-48b1-be60-4cd4705114a9 431f432a640849889a0f6a0f429aadbc c7f58877702640de8bdf3e202a60769c - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Retrieving diagnostics#033[00m
Feb 25 07:16:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:16:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 246 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 165 op/s
Feb 25 07:16:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 262 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.8 MiB/s wr, 191 op/s
Feb 25 07:16:23 np0005629333 nova_compute[244014]: 2026-02-25 12:16:23.353 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 279 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.9 MiB/s wr, 245 op/s
Feb 25 07:16:23 np0005629333 nova_compute[244014]: 2026-02-25 12:16:23.663 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:16:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 279 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 194 op/s
Feb 25 07:16:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 290 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.3 MiB/s wr, 215 op/s
Feb 25 07:16:28 np0005629333 nova_compute[244014]: 2026-02-25 12:16:28.355 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:28 np0005629333 nova_compute[244014]: 2026-02-25 12:16:28.666 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:28 np0005629333 nova_compute[244014]: 2026-02-25 12:16:28.797 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021773.7963176, ee9e86a4-8a34-43a9-baf1-f9b2e7f85534 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:16:28 np0005629333 nova_compute[244014]: 2026-02-25 12:16:28.797 244018 INFO nova.compute.manager [-] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.006 244018 DEBUG nova.compute.manager [None req-bf278eb8-5fb7-4543-a77d-3cccc287a7a9 - - - - - -] [instance: ee9e86a4-8a34-43a9-baf1-f9b2e7f85534] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:16:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.188 244018 DEBUG nova.compute.manager [None req-f4b28b12-c0f7-4224-a724-6b057459f09f 431f432a640849889a0f6a0f429aadbc c7f58877702640de8bdf3e202a60769c - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.192 244018 INFO nova.compute.manager [None req-f4b28b12-c0f7-4224-a724-6b057459f09f 431f432a640849889a0f6a0f429aadbc c7f58877702640de8bdf3e202a60769c - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Retrieving diagnostics#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.261 244018 DEBUG oslo_concurrency.lockutils [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "6de08989-13cc-415b-adc9-04b338e13d0f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.261 244018 DEBUG oslo_concurrency.lockutils [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.262 244018 DEBUG oslo_concurrency.lockutils [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.263 244018 DEBUG oslo_concurrency.lockutils [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.263 244018 DEBUG oslo_concurrency.lockutils [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.266 244018 INFO nova.compute.manager [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Terminating instance#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.268 244018 DEBUG nova.compute.manager [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:16:29 np0005629333 kernel: tap277c556d-c4 (unregistering): left promiscuous mode
Feb 25 07:16:29 np0005629333 NetworkManager[49836]: <info>  [1772021789.3208] device (tap277c556d-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.322 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:29 np0005629333 ovn_controller[147040]: 2026-02-25T12:16:29Z|00033|binding|INFO|Releasing lport 277c556d-c41e-4d6d-9f29-56e96f6a65e2 from this chassis (sb_readonly=0)
Feb 25 07:16:29 np0005629333 ovn_controller[147040]: 2026-02-25T12:16:29Z|00034|binding|INFO|Setting lport 277c556d-c41e-4d6d-9f29-56e96f6a65e2 down in Southbound
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.328 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:29 np0005629333 ovn_controller[147040]: 2026-02-25T12:16:29Z|00035|binding|INFO|Removing iface tap277c556d-c4 ovn-installed in OVS
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.331 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.342 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:29 np0005629333 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000005.scope: Deactivated successfully.
Feb 25 07:16:29 np0005629333 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000005.scope: Consumed 12.369s CPU time.
Feb 25 07:16:29 np0005629333 systemd-machined[210048]: Machine qemu-6-instance-00000005 terminated.
Feb 25 07:16:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:29.439 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:6d:04 10.100.0.6'], port_security=['fa:16:3e:9f:6d:04 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6de08989-13cc-415b-adc9-04b338e13d0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0cd0968a9a1b4b9e984b0a10a6ac77a8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f43142d8-27de-490e-9b86-d4e77c9c7e07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad5d1aaf-d2e8-49e5-aeb2-335f304b15d9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=277c556d-c41e-4d6d-9f29-56e96f6a65e2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:16:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:29.441 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 277c556d-c41e-4d6d-9f29-56e96f6a65e2 in datapath d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 unbound from our chassis#033[00m
Feb 25 07:16:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:29.442 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d3fb36f1-0e88-43b4-a8a4-3844d55f1de8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:16:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:29.442 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e52ddb22-8381-4075-8898-965c8f9f3732]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:29.443 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 namespace which is not needed anymore#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.507 244018 INFO nova.virt.libvirt.driver [-] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Instance destroyed successfully.#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.508 244018 DEBUG nova.objects.instance [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lazy-loading 'resources' on Instance uuid 6de08989-13cc-415b-adc9-04b338e13d0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:16:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 290 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.5 MiB/s wr, 145 op/s
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.530 244018 DEBUG nova.virt.libvirt.vif [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:15:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-685039122',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-685039122',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(21),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-685039122',id=5,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=21,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJvvkxGBHug345PSqKu7AM4guWtlD1Z/ZHlY6xJOr1x7TK7JerzVRxChzrn82oXIZjPXupKAJ5z18TSxrDr8DDKEwo/biNUpuR+tKuw+djeyjF8yxNV8v2GUFxoZtp+/Q==',key_name='tempest-keypair-1665806790',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:16:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0cd0968a9a1b4b9e984b0a10a6ac77a8',ramdisk_id='',reservation_id='r-c0zagdsb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1630390846',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1630390846-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:16:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee6c6e44a0624805afeb68a67c99f325',uuid=6de08989-13cc-415b-adc9-04b338e13d0f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "address": "fa:16:3e:9f:6d:04", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap277c556d-c4", "ovs_interfaceid": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.531 244018 DEBUG nova.network.os_vif_util [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Converting VIF {"id": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "address": "fa:16:3e:9f:6d:04", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap277c556d-c4", "ovs_interfaceid": "277c556d-c41e-4d6d-9f29-56e96f6a65e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.532 244018 DEBUG nova.network.os_vif_util [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9f:6d:04,bridge_name='br-int',has_traffic_filtering=True,id=277c556d-c41e-4d6d-9f29-56e96f6a65e2,network=Network(d3fb36f1-0e88-43b4-a8a4-3844d55f1de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap277c556d-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.533 244018 DEBUG os_vif [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9f:6d:04,bridge_name='br-int',has_traffic_filtering=True,id=277c556d-c41e-4d6d-9f29-56e96f6a65e2,network=Network(d3fb36f1-0e88-43b4-a8a4-3844d55f1de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap277c556d-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.536 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.536 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap277c556d-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.538 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.541 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.544 244018 INFO os_vif [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9f:6d:04,bridge_name='br-int',has_traffic_filtering=True,id=277c556d-c41e-4d6d-9f29-56e96f6a65e2,network=Network(d3fb36f1-0e88-43b4-a8a4-3844d55f1de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap277c556d-c4')#033[00m
Feb 25 07:16:29 np0005629333 neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8[253424]: [NOTICE]   (253428) : haproxy version is 2.8.14-c23fe91
Feb 25 07:16:29 np0005629333 neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8[253424]: [NOTICE]   (253428) : path to executable is /usr/sbin/haproxy
Feb 25 07:16:29 np0005629333 neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8[253424]: [WARNING]  (253428) : Exiting Master process...
Feb 25 07:16:29 np0005629333 systemd[1]: libpod-4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c.scope: Deactivated successfully.
Feb 25 07:16:29 np0005629333 neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8[253424]: [ALERT]    (253428) : Current worker (253431) exited with code 143 (Terminated)
Feb 25 07:16:29 np0005629333 neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8[253424]: [WARNING]  (253428) : All workers exited. Exiting... (0)
Feb 25 07:16:29 np0005629333 conmon[253424]: conmon 4ac83e63c68ca85fc408 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c.scope/container/memory.events
Feb 25 07:16:29 np0005629333 podman[253948]: 2026-02-25 12:16:29.640661136 +0000 UTC m=+0.077892349 container died 4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 25 07:16:29 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c-userdata-shm.mount: Deactivated successfully.
Feb 25 07:16:29 np0005629333 systemd[1]: var-lib-containers-storage-overlay-df85cc05f2dfb2a93e56b0f9c6c65dbdcac823808956d17e7a377f617bac8210-merged.mount: Deactivated successfully.
Feb 25 07:16:29 np0005629333 podman[253948]: 2026-02-25 12:16:29.706948479 +0000 UTC m=+0.144179672 container cleanup 4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0)
Feb 25 07:16:29 np0005629333 systemd[1]: libpod-conmon-4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c.scope: Deactivated successfully.
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.741 244018 DEBUG oslo_concurrency.lockutils [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Acquiring lock "661b348d-5b73-45ed-8357-3aefed90d054" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.743 244018 DEBUG oslo_concurrency.lockutils [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lock "661b348d-5b73-45ed-8357-3aefed90d054" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.744 244018 DEBUG oslo_concurrency.lockutils [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Acquiring lock "661b348d-5b73-45ed-8357-3aefed90d054-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.744 244018 DEBUG oslo_concurrency.lockutils [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lock "661b348d-5b73-45ed-8357-3aefed90d054-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.745 244018 DEBUG oslo_concurrency.lockutils [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lock "661b348d-5b73-45ed-8357-3aefed90d054-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.746 244018 INFO nova.compute.manager [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Terminating instance#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.747 244018 DEBUG oslo_concurrency.lockutils [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Acquiring lock "refresh_cache-661b348d-5b73-45ed-8357-3aefed90d054" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.747 244018 DEBUG oslo_concurrency.lockutils [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Acquired lock "refresh_cache-661b348d-5b73-45ed-8357-3aefed90d054" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.747 244018 DEBUG nova.network.neutron [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:16:29 np0005629333 podman[253993]: 2026-02-25 12:16:29.808302386 +0000 UTC m=+0.082265300 container remove 4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 07:16:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:29.815 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d0e876bc-ae39-4830-b8a3-e15333c7a202]: (4, ('Wed Feb 25 12:16:29 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 (4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c)\n4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c\nWed Feb 25 12:16:29 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 (4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c)\n4ac83e63c68ca85fc4089a51e3ab40f8dd053d8ad236fe59c6e74010af956e3c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:29.818 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6186b2b4-c147-47d0-9357-6d21073fe584]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:29.820 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3fb36f1-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.823 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:29 np0005629333 kernel: tapd3fb36f1-00: left promiscuous mode
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.832 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:29.834 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e50d6905-05d8-4a7b-8669-13b43fc371ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:29.846 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[56db67ab-6a38-42aa-a7a4-3e2c69e5a38e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:29.848 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1d6c1ef0-72e9-4dff-820b-d8b64607d5e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:29.861 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a4aa23a8-9d97-4a69-9ab3-1e6853af6ea9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 373928, 'reachable_time': 29959, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254012, 'error': None, 'target': 'ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:29 np0005629333 systemd[1]: run-netns-ovnmeta\x2dd3fb36f1\x2d0e88\x2d43b4\x2da8a4\x2d3844d55f1de8.mount: Deactivated successfully.
Feb 25 07:16:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:29.871 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:16:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:29.872 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[2ddc24c6-bcf6-44ca-b3ab-37772efc2167]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.972 244018 INFO nova.virt.libvirt.driver [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Deleting instance files /var/lib/nova/instances/6de08989-13cc-415b-adc9-04b338e13d0f_del#033[00m
Feb 25 07:16:29 np0005629333 nova_compute[244014]: 2026-02-25 12:16:29.973 244018 INFO nova.virt.libvirt.driver [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Deletion of /var/lib/nova/instances/6de08989-13cc-415b-adc9-04b338e13d0f_del complete#033[00m
Feb 25 07:16:30 np0005629333 nova_compute[244014]: 2026-02-25 12:16:30.365 244018 INFO nova.compute.manager [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Took 1.10 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:16:30 np0005629333 nova_compute[244014]: 2026-02-25 12:16:30.366 244018 DEBUG oslo.service.loopingcall [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:16:30 np0005629333 nova_compute[244014]: 2026-02-25 12:16:30.367 244018 DEBUG nova.compute.manager [-] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:16:30 np0005629333 nova_compute[244014]: 2026-02-25 12:16:30.367 244018 DEBUG nova.network.neutron [-] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:16:30 np0005629333 nova_compute[244014]: 2026-02-25 12:16:30.738 244018 DEBUG nova.network.neutron [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:16:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:16:30
Feb 25 07:16:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:16:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:16:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', '.rgw.root', 'images', 'backups', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'vms']
Feb 25 07:16:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:16:31 np0005629333 nova_compute[244014]: 2026-02-25 12:16:31.012 244018 DEBUG nova.compute.manager [req-226a5a3f-6627-4a2a-ba2b-a215c3e41c71 req-8c7bd997-d48a-4add-98dd-93755c404aa3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Received event network-vif-unplugged-277c556d-c41e-4d6d-9f29-56e96f6a65e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:16:31 np0005629333 nova_compute[244014]: 2026-02-25 12:16:31.013 244018 DEBUG oslo_concurrency.lockutils [req-226a5a3f-6627-4a2a-ba2b-a215c3e41c71 req-8c7bd997-d48a-4add-98dd-93755c404aa3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:16:31 np0005629333 nova_compute[244014]: 2026-02-25 12:16:31.013 244018 DEBUG oslo_concurrency.lockutils [req-226a5a3f-6627-4a2a-ba2b-a215c3e41c71 req-8c7bd997-d48a-4add-98dd-93755c404aa3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:16:31 np0005629333 nova_compute[244014]: 2026-02-25 12:16:31.013 244018 DEBUG oslo_concurrency.lockutils [req-226a5a3f-6627-4a2a-ba2b-a215c3e41c71 req-8c7bd997-d48a-4add-98dd-93755c404aa3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:16:31 np0005629333 nova_compute[244014]: 2026-02-25 12:16:31.013 244018 DEBUG nova.compute.manager [req-226a5a3f-6627-4a2a-ba2b-a215c3e41c71 req-8c7bd997-d48a-4add-98dd-93755c404aa3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] No waiting events found dispatching network-vif-unplugged-277c556d-c41e-4d6d-9f29-56e96f6a65e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:16:31 np0005629333 nova_compute[244014]: 2026-02-25 12:16:31.013 244018 DEBUG nova.compute.manager [req-226a5a3f-6627-4a2a-ba2b-a215c3e41c71 req-8c7bd997-d48a-4add-98dd-93755c404aa3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Received event network-vif-unplugged-277c556d-c41e-4d6d-9f29-56e96f6a65e2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:16:31 np0005629333 nova_compute[244014]: 2026-02-25 12:16:31.202 244018 DEBUG nova.network.neutron [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:16:31 np0005629333 nova_compute[244014]: 2026-02-25 12:16:31.231 244018 DEBUG oslo_concurrency.lockutils [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Releasing lock "refresh_cache-661b348d-5b73-45ed-8357-3aefed90d054" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:16:31 np0005629333 nova_compute[244014]: 2026-02-25 12:16:31.232 244018 DEBUG nova.compute.manager [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 07:16:31 np0005629333 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Feb 25 07:16:31 np0005629333 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 11.601s CPU time.
Feb 25 07:16:31 np0005629333 systemd-machined[210048]: Machine qemu-7-instance-00000007 terminated.
Feb 25 07:16:31 np0005629333 nova_compute[244014]: 2026-02-25 12:16:31.454 244018 INFO nova.virt.libvirt.driver [-] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Instance destroyed successfully.
Feb 25 07:16:31 np0005629333 nova_compute[244014]: 2026-02-25 12:16:31.455 244018 DEBUG nova.objects.instance [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lazy-loading 'resources' on Instance uuid 661b348d-5b73-45ed-8357-3aefed90d054 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:16:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 265 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 157 op/s
Feb 25 07:16:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:16:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:16:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:16:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:16:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:16:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:16:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:16:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:16:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:16:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:16:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:16:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:16:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:16:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:16:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:16:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
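The doubled per-pool lines above are two rbd_support handlers (TrashPurgeScheduleHandler and MirrorSnapshotScheduleHandler) walking the same pool list and interleaving their output; each reload starts from an empty start_after cursor. The rough shape of the loop (hypothetical names, not the module's actual code):

    POOLS = ["vms", "volumes", "backups", "images"]

    def load_schedules(pools, start_after=""):
        # Each handler reloads its schedule objects pool by pool;
        # start_after is a pagination cursor, empty on a full reload.
        for pool in pools:
            print("load_schedules: %s, start_after=%s" % (pool, start_after))

    for handler in ("TrashPurgeScheduleHandler", "MirrorSnapshotScheduleHandler"):
        print("%s: load_schedules" % handler)
        load_schedules(POOLS)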
Feb 25 07:16:31 np0005629333 nova_compute[244014]: 2026-02-25 12:16:31.895 244018 INFO nova.virt.libvirt.driver [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Deleting instance files /var/lib/nova/instances/661b348d-5b73-45ed-8357-3aefed90d054_del
Feb 25 07:16:31 np0005629333 nova_compute[244014]: 2026-02-25 12:16:31.896 244018 INFO nova.virt.libvirt.driver [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Deletion of /var/lib/nova/instances/661b348d-5b73-45ed-8357-3aefed90d054_del complete
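The path being deleted already carries the _del suffix because the libvirt driver renames the instance directory first and only then removes the tree, so a crash mid-delete leaves an obviously dead directory rather than a half-deleted live one. A sketch of that two-step, not the driver's exact code:

    import os
    import shutil

    def delete_instance_files(inst_base):
        target = inst_base + "_del"
        if os.path.exists(inst_base):
            os.rename(inst_base, target)      # mark-for-delete rename
        if os.path.exists(target):
            shutil.rmtree(target, ignore_errors=True)  # then remove the tree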
Feb 25 07:16:32 np0005629333 nova_compute[244014]: 2026-02-25 12:16:32.002 244018 INFO nova.compute.manager [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Took 0.77 seconds to destroy the instance on the hypervisor.
Feb 25 07:16:32 np0005629333 nova_compute[244014]: 2026-02-25 12:16:32.004 244018 DEBUG oslo.service.loopingcall [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 07:16:32 np0005629333 nova_compute[244014]: 2026-02-25 12:16:32.005 244018 DEBUG nova.compute.manager [-] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 07:16:32 np0005629333 nova_compute[244014]: 2026-02-25 12:16:32.005 244018 DEBUG nova.network.neutron [-] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 07:16:32 np0005629333 nova_compute[244014]: 2026-02-25 12:16:32.278 244018 DEBUG nova.network.neutron [-] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:16:32 np0005629333 nova_compute[244014]: 2026-02-25 12:16:32.349 244018 DEBUG nova.network.neutron [-] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:16:32 np0005629333 nova_compute[244014]: 2026-02-25 12:16:32.390 244018 DEBUG nova.network.neutron [-] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:16:32 np0005629333 nova_compute[244014]: 2026-02-25 12:16:32.444 244018 INFO nova.compute.manager [-] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Took 0.44 seconds to deallocate network for instance.
Feb 25 07:16:32 np0005629333 nova_compute[244014]: 2026-02-25 12:16:32.450 244018 INFO nova.compute.manager [-] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Took 2.08 seconds to deallocate network for instance.
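The oslo.service loopingcall line above is a retry wrapper: _deallocate_network_with_retries is run repeatedly until it succeeds, so a transient Neutron failure does not leak ports. The general shape of such a wrapper, with illustrative attempt/interval values rather than Nova's configured ones:

    import time

    def call_with_retries(func, attempts=3, interval=2.0):
        # Retry a deallocation-style call a bounded number of times,
        # sleeping between attempts; re-raise on the final failure.
        for attempt in range(1, attempts + 1):
            try:
                return func()
            except Exception:
                if attempt == attempts:
                    raise
                time.sleep(interval)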
Feb 25 07:16:32 np0005629333 nova_compute[244014]: 2026-02-25 12:16:32.596 244018 DEBUG oslo_concurrency.lockutils [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:16:32 np0005629333 nova_compute[244014]: 2026-02-25 12:16:32.596 244018 DEBUG oslo_concurrency.lockutils [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:16:32 np0005629333 nova_compute[244014]: 2026-02-25 12:16:32.629 244018 DEBUG oslo_concurrency.lockutils [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:16:32 np0005629333 nova_compute[244014]: 2026-02-25 12:16:32.692 244018 DEBUG oslo_concurrency.processutils [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:16:33 np0005629333 nova_compute[244014]: 2026-02-25 12:16:33.169 244018 DEBUG nova.compute.manager [req-ac335dea-2ce6-468b-b5d4-d1d404e93a38 req-6d4ab28c-6c88-4e22-b781-9c66840fceb6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Received event network-vif-plugged-277c556d-c41e-4d6d-9f29-56e96f6a65e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:16:33 np0005629333 nova_compute[244014]: 2026-02-25 12:16:33.170 244018 DEBUG oslo_concurrency.lockutils [req-ac335dea-2ce6-468b-b5d4-d1d404e93a38 req-6d4ab28c-6c88-4e22-b781-9c66840fceb6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:16:33 np0005629333 nova_compute[244014]: 2026-02-25 12:16:33.170 244018 DEBUG oslo_concurrency.lockutils [req-ac335dea-2ce6-468b-b5d4-d1d404e93a38 req-6d4ab28c-6c88-4e22-b781-9c66840fceb6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:16:33 np0005629333 nova_compute[244014]: 2026-02-25 12:16:33.171 244018 DEBUG oslo_concurrency.lockutils [req-ac335dea-2ce6-468b-b5d4-d1d404e93a38 req-6d4ab28c-6c88-4e22-b781-9c66840fceb6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:16:33 np0005629333 nova_compute[244014]: 2026-02-25 12:16:33.171 244018 DEBUG nova.compute.manager [req-ac335dea-2ce6-468b-b5d4-d1d404e93a38 req-6d4ab28c-6c88-4e22-b781-9c66840fceb6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] No waiting events found dispatching network-vif-plugged-277c556d-c41e-4d6d-9f29-56e96f6a65e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:16:33 np0005629333 nova_compute[244014]: 2026-02-25 12:16:33.171 244018 WARNING nova.compute.manager [req-ac335dea-2ce6-468b-b5d4-d1d404e93a38 req-6d4ab28c-6c88-4e22-b781-9c66840fceb6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Received unexpected event network-vif-plugged-277c556d-c41e-4d6d-9f29-56e96f6a65e2 for instance with vm_state deleted and task_state None.
Feb 25 07:16:33 np0005629333 nova_compute[244014]: 2026-02-25 12:16:33.171 244018 DEBUG nova.compute.manager [req-ac335dea-2ce6-468b-b5d4-d1d404e93a38 req-6d4ab28c-6c88-4e22-b781-9c66840fceb6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Received event network-vif-deleted-277c556d-c41e-4d6d-9f29-56e96f6a65e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:16:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:16:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3557133748' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:16:33 np0005629333 nova_compute[244014]: 2026-02-25 12:16:33.225 244018 DEBUG oslo_concurrency.processutils [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
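The pool-usage probe bracketed by the ceph-mon audit lines above is an ordinary subprocess call; the command is exactly what the log shows. A stdlib equivalent (plain subprocess in place of oslo's processutils.execute):

    import json
    import subprocess

    def ceph_df(user="openstack", conf="/etc/ceph/ceph.conf"):
        # Same command nova runs above; returned 0 in ~0.5s here.
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf])
        return json.loads(out)  # per-pool stats feeding DISK_GB reporting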
Feb 25 07:16:33 np0005629333 nova_compute[244014]: 2026-02-25 12:16:33.231 244018 DEBUG nova.compute.provider_tree [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:16:33 np0005629333 nova_compute[244014]: 2026-02-25 12:16:33.285 244018 DEBUG nova.scheduler.client.report [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
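Each inventory record above translates into a placement ceiling via (total - reserved) * allocation_ratio, so this unchanged inventory pins the host at 7167 MB of schedulable RAM, 32 VCPUs, and about 52.2 GB of disk:

    def capacity(inv):
        # Placement's effective-capacity rule for one resource class.
        return (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]

    print(capacity({"total": 7679, "reserved": 512, "allocation_ratio": 1.0}))  # 7167.0 MB
    print(capacity({"total": 8, "reserved": 0, "allocation_ratio": 4.0}))       # 32.0 VCPU
    print(capacity({"total": 59, "reserved": 1, "allocation_ratio": 0.9}))      # ~52.2 GB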
Feb 25 07:16:33 np0005629333 nova_compute[244014]: 2026-02-25 12:16:33.356 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:16:33 np0005629333 nova_compute[244014]: 2026-02-25 12:16:33.367 244018 DEBUG oslo_concurrency.lockutils [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:16:33 np0005629333 nova_compute[244014]: 2026-02-25 12:16:33.372 244018 DEBUG oslo_concurrency.lockutils [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:16:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 177 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.3 MiB/s wr, 207 op/s
Feb 25 07:16:33 np0005629333 nova_compute[244014]: 2026-02-25 12:16:33.990 244018 INFO nova.scheduler.client.report [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Deleted allocations for instance 661b348d-5b73-45ed-8357-3aefed90d054
Feb 25 07:16:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:16:34 np0005629333 nova_compute[244014]: 2026-02-25 12:16:34.189 244018 DEBUG oslo_concurrency.processutils [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:16:34 np0005629333 nova_compute[244014]: 2026-02-25 12:16:34.227 244018 DEBUG oslo_concurrency.lockutils [None req-c0754267-18bf-4979-8ffe-2503be624fda 6fff08b2934f41e8be5a7d014dc41013 c661c7cccd024c7080e4a22a47c088b7 - - default default] Lock "661b348d-5b73-45ed-8357-3aefed90d054" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.484s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:16:34 np0005629333 nova_compute[244014]: 2026-02-25 12:16:34.540 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:16:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:16:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2974601813' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:16:34 np0005629333 nova_compute[244014]: 2026-02-25 12:16:34.774 244018 DEBUG oslo_concurrency.processutils [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:16:34 np0005629333 nova_compute[244014]: 2026-02-25 12:16:34.780 244018 DEBUG nova.compute.provider_tree [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:16:34 np0005629333 nova_compute[244014]: 2026-02-25 12:16:34.802 244018 DEBUG nova.scheduler.client.report [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:16:35 np0005629333 nova_compute[244014]: 2026-02-25 12:16:35.044 244018 DEBUG oslo_concurrency.lockutils [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:16:35 np0005629333 nova_compute[244014]: 2026-02-25 12:16:35.148 244018 INFO nova.scheduler.client.report [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Deleted allocations for instance 6de08989-13cc-415b-adc9-04b338e13d0f
Feb 25 07:16:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 177 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 394 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Feb 25 07:16:36 np0005629333 nova_compute[244014]: 2026-02-25 12:16:36.168 244018 DEBUG oslo_concurrency.lockutils [None req-25421325-f80c-4e7d-8098-2733135c087a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "6de08989-13cc-415b-adc9-04b338e13d0f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:16:36 np0005629333 nova_compute[244014]: 2026-02-25 12:16:36.219 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "20c8b8a1-1561-49f5-9fce-af2840195a57" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:16:36 np0005629333 nova_compute[244014]: 2026-02-25 12:16:36.220 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:16:36 np0005629333 nova_compute[244014]: 2026-02-25 12:16:36.262 244018 DEBUG nova.compute.manager [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:16:36 np0005629333 nova_compute[244014]: 2026-02-25 12:16:36.400 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:16:36 np0005629333 nova_compute[244014]: 2026-02-25 12:16:36.400 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:16:36 np0005629333 nova_compute[244014]: 2026-02-25 12:16:36.410 244018 DEBUG nova.virt.hardware [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:16:36 np0005629333 nova_compute[244014]: 2026-02-25 12:16:36.411 244018 INFO nova.compute.claims [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:16:36 np0005629333 nova_compute[244014]: 2026-02-25 12:16:36.558 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:16:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:16:37 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3226950202' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:16:37 np0005629333 nova_compute[244014]: 2026-02-25 12:16:37.136 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:16:37 np0005629333 nova_compute[244014]: 2026-02-25 12:16:37.142 244018 DEBUG nova.compute.provider_tree [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:16:37 np0005629333 nova_compute[244014]: 2026-02-25 12:16:37.194 244018 DEBUG nova.scheduler.client.report [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:16:37 np0005629333 nova_compute[244014]: 2026-02-25 12:16:37.230 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:16:37 np0005629333 nova_compute[244014]: 2026-02-25 12:16:37.232 244018 DEBUG nova.compute.manager [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:16:37 np0005629333 nova_compute[244014]: 2026-02-25 12:16:37.293 244018 DEBUG nova.compute.manager [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:16:37 np0005629333 nova_compute[244014]: 2026-02-25 12:16:37.294 244018 DEBUG nova.network.neutron [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:16:37 np0005629333 nova_compute[244014]: 2026-02-25 12:16:37.316 244018 INFO nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:16:37 np0005629333 nova_compute[244014]: 2026-02-25 12:16:37.367 244018 DEBUG nova.compute.manager [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:16:37 np0005629333 nova_compute[244014]: 2026-02-25 12:16:37.457 244018 DEBUG nova.compute.manager [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:16:37 np0005629333 nova_compute[244014]: 2026-02-25 12:16:37.459 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:16:37 np0005629333 nova_compute[244014]: 2026-02-25 12:16:37.460 244018 INFO nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Creating image(s)
Feb 25 07:16:37 np0005629333 nova_compute[244014]: 2026-02-25 12:16:37.494 244018 DEBUG nova.storage.rbd_utils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 20c8b8a1-1561-49f5-9fce-af2840195a57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:16:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 153 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 402 KiB/s rd, 2.1 MiB/s wr, 120 op/s
Feb 25 07:16:37 np0005629333 nova_compute[244014]: 2026-02-25 12:16:37.530 244018 DEBUG nova.storage.rbd_utils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 20c8b8a1-1561-49f5-9fce-af2840195a57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:16:37 np0005629333 nova_compute[244014]: 2026-02-25 12:16:37.564 244018 DEBUG nova.storage.rbd_utils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 20c8b8a1-1561-49f5-9fce-af2840195a57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:16:37 np0005629333 nova_compute[244014]: 2026-02-25 12:16:37.568 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:16:37 np0005629333 nova_compute[244014]: 2026-02-25 12:16:37.644 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:16:37 np0005629333 nova_compute[244014]: 2026-02-25 12:16:37.645 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:16:37 np0005629333 nova_compute[244014]: 2026-02-25 12:16:37.646 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:16:37 np0005629333 nova_compute[244014]: 2026-02-25 12:16:37.647 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:16:37 np0005629333 nova_compute[244014]: 2026-02-25 12:16:37.675 244018 DEBUG nova.storage.rbd_utils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 20c8b8a1-1561-49f5-9fce-af2840195a57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:16:37 np0005629333 nova_compute[244014]: 2026-02-25 12:16:37.679 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 20c8b8a1-1561-49f5-9fce-af2840195a57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:16:37 np0005629333 nova_compute[244014]: 2026-02-25 12:16:37.924 244018 DEBUG nova.policy [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ee6c6e44a0624805afeb68a67c99f325', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0cd0968a9a1b4b9e984b0a10a6ac77a8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:16:37 np0005629333 nova_compute[244014]: 2026-02-25 12:16:37.979 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 20c8b8a1-1561-49f5-9fce-af2840195a57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
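This is the Rbd image backend's cache-and-import sequence: probe for the RBD image, validate the cached base file with qemu-img info under a per-image lock, then rbd import it into the vms pool (the resize to the flavor's 1 GiB follows at 12:16:38). The same two commands as plain subprocess calls, flags exactly as logged (the real run also wraps qemu-img in oslo's prlimit for resource limits):

    import subprocess

    base = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
    image = "20c8b8a1-1561-49f5-9fce-af2840195a57_disk"

    # Validate the cached base image before using it.
    subprocess.check_output(["qemu-img", "info", base,
                             "--force-share", "--output=json"])
    # Import it as a format-2 RBD image in the vms pool.
    subprocess.check_call(["rbd", "import", "--pool", "vms", base, image,
                           "--image-format=2", "--id", "openstack",
                           "--conf", "/etc/ceph/ceph.conf"])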
Feb 25 07:16:38 np0005629333 nova_compute[244014]: 2026-02-25 12:16:38.058 244018 DEBUG nova.storage.rbd_utils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] resizing rbd image 20c8b8a1-1561-49f5-9fce-af2840195a57_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 07:16:38 np0005629333 nova_compute[244014]: 2026-02-25 12:16:38.177 244018 DEBUG nova.objects.instance [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lazy-loading 'migration_context' on Instance uuid 20c8b8a1-1561-49f5-9fce-af2840195a57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:16:38 np0005629333 nova_compute[244014]: 2026-02-25 12:16:38.358 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:16:38 np0005629333 nova_compute[244014]: 2026-02-25 12:16:38.407 244018 DEBUG nova.storage.rbd_utils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 20c8b8a1-1561-49f5-9fce-af2840195a57_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:16:38 np0005629333 nova_compute[244014]: 2026-02-25 12:16:38.458 244018 DEBUG nova.storage.rbd_utils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 20c8b8a1-1561-49f5-9fce-af2840195a57_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:16:38 np0005629333 nova_compute[244014]: 2026-02-25 12:16:38.463 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:16:38 np0005629333 nova_compute[244014]: 2026-02-25 12:16:38.464 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:16:38 np0005629333 nova_compute[244014]: 2026-02-25 12:16:38.465 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:16:38 np0005629333 nova_compute[244014]: 2026-02-25 12:16:38.485 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:16:38 np0005629333 nova_compute[244014]: 2026-02-25 12:16:38.486 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:16:38 np0005629333 nova_compute[244014]: 2026-02-25 12:16:38.540 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:16:38 np0005629333 nova_compute[244014]: 2026-02-25 12:16:38.541 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:16:38 np0005629333 nova_compute[244014]: 2026-02-25 12:16:38.572 244018 DEBUG nova.storage.rbd_utils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 20c8b8a1-1561-49f5-9fce-af2840195a57_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:16:38 np0005629333 nova_compute[244014]: 2026-02-25 12:16:38.577 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 20c8b8a1-1561-49f5-9fce-af2840195a57_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:16:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:16:39 np0005629333 nova_compute[244014]: 2026-02-25 12:16:39.388 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 20c8b8a1-1561-49f5-9fce-af2840195a57_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.811s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
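The ephemeral disk comes from the same image cache: under the ephemeral_1_0706d66 lock, create a raw 1G file, put a vfat filesystem labelled ephemeral0 on it, then rbd-import the result as <uuid>_disk.eph0. The three commands from the log as subprocess calls:

    import subprocess

    eph = "/var/lib/nova/instances/_base/ephemeral_1_0706d66"
    subprocess.check_call(["qemu-img", "create", "-f", "raw", eph, "1G"])
    subprocess.check_call(["mkfs", "-t", "vfat", "-n", "ephemeral0", eph])
    subprocess.check_call(["rbd", "import", "--pool", "vms", eph,
                           "20c8b8a1-1561-49f5-9fce-af2840195a57_disk.eph0",
                           "--image-format=2", "--id", "openstack",
                           "--conf", "/etc/ceph/ceph.conf"])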
Feb 25 07:16:39 np0005629333 nova_compute[244014]: 2026-02-25 12:16:39.505 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:16:39 np0005629333 nova_compute[244014]: 2026-02-25 12:16:39.506 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Ensure instance console log exists: /var/lib/nova/instances/20c8b8a1-1561-49f5-9fce-af2840195a57/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:16:39 np0005629333 nova_compute[244014]: 2026-02-25 12:16:39.507 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:16:39 np0005629333 nova_compute[244014]: 2026-02-25 12:16:39.507 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:16:39 np0005629333 nova_compute[244014]: 2026-02-25 12:16:39.508 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:16:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 153 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 278 KiB/s rd, 768 KiB/s wr, 99 op/s
Feb 25 07:16:39 np0005629333 nova_compute[244014]: 2026-02-25 12:16:39.543 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:16:39 np0005629333 nova_compute[244014]: 2026-02-25 12:16:39.733 244018 DEBUG nova.network.neutron [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Successfully created port: 5fae567b-e26e-4045-8efe-6d5fc03a1658 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:16:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 159 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 285 KiB/s rd, 1.1 MiB/s wr, 111 op/s
Feb 25 07:16:41 np0005629333 nova_compute[244014]: 2026-02-25 12:16:41.853 244018 DEBUG nova.network.neutron [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Successfully updated port: 5fae567b-e26e-4045-8efe-6d5fc03a1658 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:16:41 np0005629333 nova_compute[244014]: 2026-02-25 12:16:41.874 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "refresh_cache-20c8b8a1-1561-49f5-9fce-af2840195a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:16:41 np0005629333 nova_compute[244014]: 2026-02-25 12:16:41.874 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquired lock "refresh_cache-20c8b8a1-1561-49f5-9fce-af2840195a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:16:41 np0005629333 nova_compute[244014]: 2026-02-25 12:16:41.875 244018 DEBUG nova.network.neutron [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
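"refresh_cache-<uuid>" is another named-lock pattern: hold the per-instance cache lock while network_info is rebuilt from Neutron and written back, so the event handler at 12:16:42 below has to queue on the same lock instead of racing the rebuild. A minimal stand-in with hypothetical names:

    import threading

    _cache_lock = threading.Lock()   # stands in for "refresh_cache-<uuid>"
    _nw_cache = {}                   # {instance_uuid: network_info}

    def refresh_instance_cache(instance_uuid, fetch_from_neutron):
        with _cache_lock:            # Acquiring / Acquired / Releasing, as logged
            _nw_cache[instance_uuid] = fetch_from_neutron(instance_uuid)
            return _nw_cache[instance_uuid]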
Feb 25 07:16:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:16:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:16:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:16:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:16:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.158101789232669e-05 of space, bias 1.0, pg target 0.018474305367698007 quantized to 32 (current 32)
Feb 25 07:16:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:16:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:16:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:16:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:16:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:16:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024895927298540185 of space, bias 1.0, pg target 0.7468778189562055 quantized to 32 (current 32)
Feb 25 07:16:41 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:16:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3017775514531495e-06 of space, bias 4.0, pg target 0.0015621330617437794 quantized to 16 (current 16)
Feb 25 07:16:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:16:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:16:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:16:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:16:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:16:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:16:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:16:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:16:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:16:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
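Every pg target logged above satisfies pg_target = capacity_ratio * bias * 300; the factor 300 is inferred from the numbers themselves and would be consistent with, e.g., 3 OSDs at mon_target_pg_per_osd=100 for this root. The result is then quantized (and left alone when the change from the current pg_num is small). Checking two of the logged values:

    import math

    def pg_target(capacity_ratio, bias, root_pg_budget=300):
        # root_pg_budget=300 is inferred from the logged output,
        # not read from this cluster's configuration.
        return capacity_ratio * bias * root_pg_budget

    assert math.isclose(pg_target(7.185749983720779e-06, 1.0),
                        0.0021557249951162337)    # pool '.mgr'
    assert math.isclose(pg_target(1.3017775514531495e-06, 4.0),
                        0.0015621330617437794)    # pool 'cephfs.cephfs.meta'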
Feb 25 07:16:42 np0005629333 nova_compute[244014]: 2026-02-25 12:16:42.022 244018 DEBUG nova.compute.manager [req-0ca55554-9363-48b7-bcb7-32cdb139f253 req-0b900148-1d57-4351-91d8-0145910fe31b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Received event network-changed-5fae567b-e26e-4045-8efe-6d5fc03a1658 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:16:42 np0005629333 nova_compute[244014]: 2026-02-25 12:16:42.023 244018 DEBUG nova.compute.manager [req-0ca55554-9363-48b7-bcb7-32cdb139f253 req-0b900148-1d57-4351-91d8-0145910fe31b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Refreshing instance network info cache due to event network-changed-5fae567b-e26e-4045-8efe-6d5fc03a1658. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:16:42 np0005629333 nova_compute[244014]: 2026-02-25 12:16:42.023 244018 DEBUG oslo_concurrency.lockutils [req-0ca55554-9363-48b7-bcb7-32cdb139f253 req-0b900148-1d57-4351-91d8-0145910fe31b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-20c8b8a1-1561-49f5-9fce-af2840195a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:16:42 np0005629333 nova_compute[244014]: 2026-02-25 12:16:42.058 244018 DEBUG nova.network.neutron [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:16:42 np0005629333 nova_compute[244014]: 2026-02-25 12:16:42.639 244018 DEBUG oslo_concurrency.lockutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Acquiring lock "29947b36-0ef7-49c6-974b-34891af8b99a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:16:42 np0005629333 nova_compute[244014]: 2026-02-25 12:16:42.640 244018 DEBUG oslo_concurrency.lockutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lock "29947b36-0ef7-49c6-974b-34891af8b99a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:16:42 np0005629333 nova_compute[244014]: 2026-02-25 12:16:42.658 244018 DEBUG nova.compute.manager [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:16:42 np0005629333 nova_compute[244014]: 2026-02-25 12:16:42.787 244018 DEBUG oslo_concurrency.lockutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:16:42 np0005629333 nova_compute[244014]: 2026-02-25 12:16:42.788 244018 DEBUG oslo_concurrency.lockutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:16:42 np0005629333 nova_compute[244014]: 2026-02-25 12:16:42.794 244018 DEBUG nova.virt.hardware [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:16:42 np0005629333 nova_compute[244014]: 2026-02-25 12:16:42.795 244018 INFO nova.compute.claims [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:16:42 np0005629333 nova_compute[244014]: 2026-02-25 12:16:42.946 244018 DEBUG oslo_concurrency.processutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:16:43 np0005629333 nova_compute[244014]: 2026-02-25 12:16:43.360 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:16:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:16:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2990213171' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:16:43 np0005629333 nova_compute[244014]: 2026-02-25 12:16:43.497 244018 DEBUG oslo_concurrency.processutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
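-- The "Running cmd (subprocess)" / "CMD ... returned: 0 in 0.551s" pair is oslo.concurrency's processutils.execute(), which nova's RBD image backend uses here to ask Ceph for pool capacity. A hedged sketch of the same call, with the argument vector copied from the log:

    from oslo_concurrency import processutils

    # execute() logs the command at processutils.py:384 and the return
    # code plus elapsed time at :422, exactly as seen above.
    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
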
Feb 25 07:16:43 np0005629333 nova_compute[244014]: 2026-02-25 12:16:43.502 244018 DEBUG nova.compute.provider_tree [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:16:43 np0005629333 nova_compute[244014]: 2026-02-25 12:16:43.522 244018 DEBUG nova.scheduler.client.report [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
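-- The inventory dict above is what the resource tracker reports to placement. The effective schedulable capacity per resource class is roughly (total - reserved) * allocation_ratio; min_unit/max_unit/step_size further constrain individual allocations. Working that through the logged numbers (a sketch, ignoring the per-allocation constraints):

    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)   # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2
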
Feb 25 07:16:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 202 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 268 KiB/s rd, 2.3 MiB/s wr, 126 op/s
Feb 25 07:16:43 np0005629333 nova_compute[244014]: 2026-02-25 12:16:43.555 244018 DEBUG oslo_concurrency.lockutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:16:43 np0005629333 nova_compute[244014]: 2026-02-25 12:16:43.555 244018 DEBUG nova.compute.manager [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:16:43 np0005629333 nova_compute[244014]: 2026-02-25 12:16:43.692 244018 DEBUG nova.compute.manager [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:16:43 np0005629333 nova_compute[244014]: 2026-02-25 12:16:43.692 244018 DEBUG nova.network.neutron [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:16:43 np0005629333 podman[254420]: 2026-02-25 12:16:43.741468852 +0000 UTC m=+0.087005820 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:16:43 np0005629333 nova_compute[244014]: 2026-02-25 12:16:43.744 244018 INFO nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:16:43 np0005629333 podman[254421]: 2026-02-25 12:16:43.766519278 +0000 UTC m=+0.105846971 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:16:43 np0005629333 nova_compute[244014]: 2026-02-25 12:16:43.782 244018 DEBUG nova.compute.manager [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:16:43 np0005629333 nova_compute[244014]: 2026-02-25 12:16:43.960 244018 DEBUG nova.network.neutron [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Updating instance_info_cache with network_info: [{"id": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "address": "fa:16:3e:3b:e1:5a", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fae567b-e2", "ovs_interfaceid": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
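-- The "mtu": 1442 in the cached network_info is the usual ML2/OVN tenant-network figure: assuming a 1500-byte physical MTU and Geneve encapsulation with common OVN defaults (this breakdown is inferred, not stated in the log), the overhead is 14 (inner Ethernet) + 20 (outer IPv4) + 8 (UDP) + 8 (Geneve) + 8 (OVN option) = 58 bytes, so 1500 - 58 = 1442.
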
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.072 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Releasing lock "refresh_cache-20c8b8a1-1561-49f5-9fce-af2840195a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.072 244018 DEBUG nova.compute.manager [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Instance network_info: |[{"id": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "address": "fa:16:3e:3b:e1:5a", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fae567b-e2", "ovs_interfaceid": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.073 244018 DEBUG oslo_concurrency.lockutils [req-0ca55554-9363-48b7-bcb7-32cdb139f253 req-0b900148-1d57-4351-91d8-0145910fe31b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-20c8b8a1-1561-49f5-9fce-af2840195a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.073 244018 DEBUG nova.network.neutron [req-0ca55554-9363-48b7-bcb7-32cdb139f253 req-0b900148-1d57-4351-91d8-0145910fe31b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Refreshing network info cache for port 5fae567b-e26e-4045-8efe-6d5fc03a1658 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.078 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Start _get_guest_xml network_info=[{"id": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "address": "fa:16:3e:3b:e1:5a", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fae567b-e2", "ovs_interfaceid": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [{'disk_bus': 'virtio', 'size': 1, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'device_type': 'disk', 'device_name': '/dev/vdb', 'encryption_secret_uuid': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.085 244018 WARNING nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.091 244018 DEBUG nova.virt.libvirt.host [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.092 244018 DEBUG nova.virt.libvirt.host [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.100 244018 DEBUG nova.virt.libvirt.host [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.100 244018 DEBUG nova.virt.libvirt.host [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
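-- The four host.py entries above probe for a CPU controller first on cgroups v1 (missing) and then on the v2 unified hierarchy (found). A minimal sketch of a v2 probe, assuming the standard unified-hierarchy mount point; the real nova helper works through libvirt's view of the host:

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
        # cgroup.controllers lists the controllers available in the
        # unified hierarchy, e.g. "cpuset cpu io memory hugetlb pids".
        controllers = Path(root, 'cgroup.controllers').read_text().split()
        return 'cpu' in controllers
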
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.101 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.101 244018 DEBUG nova.virt.hardware [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:15:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={hw_rng:allowed='True'},flavorid='236815221',id=20,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_1-960178861',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.102 244018 DEBUG nova.virt.hardware [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.102 244018 DEBUG nova.virt.hardware [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.103 244018 DEBUG nova.virt.hardware [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.103 244018 DEBUG nova.virt.hardware [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.103 244018 DEBUG nova.virt.hardware [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.104 244018 DEBUG nova.virt.hardware [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.104 244018 DEBUG nova.virt.hardware [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.105 244018 DEBUG nova.virt.hardware [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.105 244018 DEBUG nova.virt.hardware [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.105 244018 DEBUG nova.virt.hardware [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
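-- The "Build topologies ... Got 1 possible topologies" steps enumerate every sockets*cores*threads factorization of the flavor's vCPU count that fits the limits (65536 each here, i.e. effectively unbounded). An illustrative re-implementation of that enumeration, not nova's actual code:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Every (sockets, cores, threads) triple whose product is vcpus.
        return [(s, c, t)
                for s in range(1, min(vcpus, max_sockets) + 1)
                for c in range(1, min(vcpus, max_cores) + 1)
                for t in range(1, min(vcpus, max_threads) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))   # [(1, 1, 1)], matching the log
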
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.110 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.144 244018 DEBUG nova.compute.manager [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.146 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.147 244018 INFO nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Creating image(s)#033[00m
Feb 25 07:16:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.172 244018 DEBUG nova.storage.rbd_utils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] rbd image 29947b36-0ef7-49c6-974b-34891af8b99a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.203 244018 DEBUG nova.storage.rbd_utils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] rbd image 29947b36-0ef7-49c6-974b-34891af8b99a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.235 244018 DEBUG nova.storage.rbd_utils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] rbd image 29947b36-0ef7-49c6-974b-34891af8b99a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.240 244018 DEBUG oslo_concurrency.processutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.306 244018 DEBUG oslo_concurrency.processutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
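-- The qemu-img probe above runs under oslo.concurrency's prlimit shim so a malformed or malicious image cannot make qemu-img consume unbounded memory or CPU: --as=1073741824 caps the address space at 1 GiB and --cpu=30 caps CPU seconds. processutils exposes this directly; a hedged sketch of the equivalent call:

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1024 ** 3,   # --as=1073741824
        cpu_time=30)               # --cpu=30

    # With prlimit= set, execute() wraps the command in
    # `python -m oslo_concurrency.prlimit` to apply the rlimits before
    # exec'ing qemu-img, which is exactly the command line logged above.
    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json', prlimit=limits)
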
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.307 244018 DEBUG oslo_concurrency.lockutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.308 244018 DEBUG oslo_concurrency.lockutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.309 244018 DEBUG oslo_concurrency.lockutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.340 244018 DEBUG nova.storage.rbd_utils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] rbd image 29947b36-0ef7-49c6-974b-34891af8b99a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.344 244018 DEBUG oslo_concurrency.processutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 29947b36-0ef7-49c6-974b-34891af8b99a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.504 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021789.5033104, 6de08989-13cc-415b-adc9-04b338e13d0f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.506 244018 INFO nova.compute.manager [-] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.547 244018 DEBUG nova.compute.manager [None req-f31a8b76-cea1-4a4a-b80e-c31dbc081bb3 - - - - - -] [instance: 6de08989-13cc-415b-adc9-04b338e13d0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.548 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:16:44 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1385261770' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.643 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.644 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.665 244018 DEBUG oslo_concurrency.processutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 29947b36-0ef7-49c6-974b-34891af8b99a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.321s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.740 244018 DEBUG nova.storage.rbd_utils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] resizing rbd image 29947b36-0ef7-49c6-974b-34891af8b99a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
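-- Here the cached base image is pushed into the vms pool with the rbd CLI (the import logged above) and then grown to the flavor's 1 GiB root disk: root_gb=1, i.e. 1 * 1024**3 = 1073741824 bytes, the exact size in the resize entry. nova's rbd_utils does the resize through the python rbd bindings; roughly, as a sketch with the pool and image names copied from the log:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, '29947b36-0ef7-49c6-974b-34891af8b99a_disk') as image:
                image.resize(1 * 1024 ** 3)   # 1073741824, matching the log
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
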
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.778 244018 DEBUG nova.network.neutron [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.778 244018 DEBUG nova.compute.manager [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.852 244018 DEBUG nova.objects.instance [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lazy-loading 'migration_context' on Instance uuid 29947b36-0ef7-49c6-974b-34891af8b99a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.876 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.877 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Ensure instance console log exists: /var/lib/nova/instances/29947b36-0ef7-49c6-974b-34891af8b99a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.878 244018 DEBUG oslo_concurrency.lockutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.879 244018 DEBUG oslo_concurrency.lockutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.879 244018 DEBUG oslo_concurrency.lockutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.882 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.887 244018 WARNING nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.892 244018 DEBUG nova.virt.libvirt.host [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.892 244018 DEBUG nova.virt.libvirt.host [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.897 244018 DEBUG nova.virt.libvirt.host [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.898 244018 DEBUG nova.virt.libvirt.host [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.898 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.899 244018 DEBUG nova.virt.hardware [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.900 244018 DEBUG nova.virt.hardware [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.900 244018 DEBUG nova.virt.hardware [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.901 244018 DEBUG nova.virt.hardware [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.901 244018 DEBUG nova.virt.hardware [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.902 244018 DEBUG nova.virt.hardware [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.902 244018 DEBUG nova.virt.hardware [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.903 244018 DEBUG nova.virt.hardware [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.904 244018 DEBUG nova.virt.hardware [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.904 244018 DEBUG nova.virt.hardware [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.905 244018 DEBUG nova.virt.hardware [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:16:44 np0005629333 nova_compute[244014]: 2026-02-25 12:16:44.909 244018 DEBUG oslo_concurrency.processutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:16:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:16:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/475270521' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.210 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.239 244018 DEBUG nova.storage.rbd_utils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 20c8b8a1-1561-49f5-9fce-af2840195a57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.243 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:16:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:16:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1248107500' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.485 244018 DEBUG oslo_concurrency.processutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.515 244018 DEBUG nova.storage.rbd_utils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] rbd image 29947b36-0ef7-49c6-974b-34891af8b99a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.521 244018 DEBUG oslo_concurrency.processutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:16:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 202 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Feb 25 07:16:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:16:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3402641402' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.751 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.754 244018 DEBUG nova.virt.libvirt.vif [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:16:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1235856482',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1235856482',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(20),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1235856482',id=8,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=20,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJvvkxGBHug345PSqKu7AM4guWtlD1Z/ZHlY6xJOr1x7TK7JerzVRxChzrn82oXIZjPXupKAJ5z18TSxrDr8DDKEwo/biNUpuR+tKuw+djeyjF8yxNV8v2GUFxoZtp+/Q==',key_name='tempest-keypair-1665806790',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0cd0968a9a1b4b9e984b0a10a6ac77a8',ramdisk_id='',reservation_id='r-6yvik42u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1630390846',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1630390846-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:16:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee6c6e44a0624805afeb68a67c99f325',uuid=20c8b8a1-1561-49f5-9fce-af2840195a57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "address": "fa:16:3e:3b:e1:5a", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fae567b-e2", "ovs_interfaceid": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.755 244018 DEBUG nova.network.os_vif_util [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Converting VIF {"id": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "address": "fa:16:3e:3b:e1:5a", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fae567b-e2", "ovs_interfaceid": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.757 244018 DEBUG nova.network.os_vif_util [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:e1:5a,bridge_name='br-int',has_traffic_filtering=True,id=5fae567b-e26e-4045-8efe-6d5fc03a1658,network=Network(d3fb36f1-0e88-43b4-a8a4-3844d55f1de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fae567b-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
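-- The two os_vif_util entries show nova converting its network-model dict into an os-vif VIFOpenVSwitch object, which the 'ovs' plugin later plugs into br-int. A hedged sketch of building the same object by hand, with field values copied from the "Converted object" line (a real plug also needs the network object and instance info):

    import os_vif
    from os_vif.objects import vif as vif_obj

    os_vif.initialize()   # load the os-vif plugins, 'ovs' among them
    vif = vif_obj.VIFOpenVSwitch(
        id='5fae567b-e26e-4045-8efe-6d5fc03a1658',
        address='fa:16:3e:3b:e1:5a',
        bridge_name='br-int',
        vif_name='tap5fae567b-e2',
        has_traffic_filtering=True,
        preserve_on_delete=False)
    # os_vif.plug(vif, instance_info) would then wire tap5fae567b-e2
    # into br-int with the port attributes OVN binds against.
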
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.758 244018 DEBUG nova.objects.instance [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 20c8b8a1-1561-49f5-9fce-af2840195a57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.780 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:16:45 np0005629333 nova_compute[244014]:  <uuid>20c8b8a1-1561-49f5-9fce-af2840195a57</uuid>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:  <name>instance-00000008</name>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-1235856482</nova:name>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:16:44</nova:creationTime>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <nova:flavor name="tempest-flavor_with_ephemeral_1-960178861">
Feb 25 07:16:45 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:        <nova:ephemeral>1</nova:ephemeral>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:        <nova:user uuid="ee6c6e44a0624805afeb68a67c99f325">tempest-ServersWithSpecificFlavorTestJSON-1630390846-project-member</nova:user>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:        <nova:project uuid="0cd0968a9a1b4b9e984b0a10a6ac77a8">tempest-ServersWithSpecificFlavorTestJSON-1630390846</nova:project>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:        <nova:port uuid="5fae567b-e26e-4045-8efe-6d5fc03a1658">
Feb 25 07:16:45 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <entry name="serial">20c8b8a1-1561-49f5-9fce-af2840195a57</entry>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <entry name="uuid">20c8b8a1-1561-49f5-9fce-af2840195a57</entry>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/20c8b8a1-1561-49f5-9fce-af2840195a57_disk">
Feb 25 07:16:45 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:16:45 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/20c8b8a1-1561-49f5-9fce-af2840195a57_disk.eph0">
Feb 25 07:16:45 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:16:45 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <target dev="vdb" bus="virtio"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/20c8b8a1-1561-49f5-9fce-af2840195a57_disk.config">
Feb 25 07:16:45 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:16:45 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:3b:e1:5a"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <target dev="tap5fae567b-e2"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/20c8b8a1-1561-49f5-9fce-af2840195a57/console.log" append="off"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:16:45 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:16:45 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:16:45 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:16:45 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
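
The XML block nova just logged is the complete libvirt domain definition it will hand to libvirt to start the guest. A quick way to sanity-check such a dump is to parse it back out of the log; the sketch below pulls each disk's RBD source and guest target from the <devices> section using only the standard library (domain_xml is assumed to hold the text from <domain ...> through </domain>):

    import xml.etree.ElementTree as ET

    def summarize_disks(domain_xml: str) -> None:
        root = ET.fromstring(domain_xml)
        for disk in root.findall('./devices/disk'):
            src = disk.find('source')
            tgt = disk.find('target')
            print(disk.get('device'),      # disk / cdrom
                  src.get('protocol'),     # rbd
                  src.get('name'),         # vms/<uuid>_disk[...]
                  '->', tgt.get('dev'), tgt.get('bus'))

For instance-00000008 this yields three lines: the root disk on vda, the ephemeral disk on vdb, and the config-drive CD-ROM on sda.
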
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.790 244018 DEBUG nova.compute.manager [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Preparing to wait for external event network-vif-plugged-5fae567b-e26e-4045-8efe-6d5fc03a1658 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.790 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.791 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.792 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
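
The Acquiring/acquired/released trio above is the standard DEBUG trace from oslo.concurrency's lock helpers, here serializing event registration per instance. A minimal sketch of the same pattern in decorator form (the lock name is copied from the log; the function body is illustrative, not nova's actual helper):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('20c8b8a1-1561-49f5-9fce-af2840195a57-events')
    def _create_or_get_event():
        # critical section; with DEBUG logging enabled, lockutils emits
        # the 'acquired ... waited Ns' and '"released" ... held Ns' lines
        # seen above on entry and exit
        pass

    _create_or_get_event()
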
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.793 244018 DEBUG nova.virt.libvirt.vif [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:16:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1235856482',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1235856482',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(20),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1235856482',id=8,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=20,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJvvkxGBHug345PSqKu7AM4guWtlD1Z/ZHlY6xJOr1x7TK7JerzVRxChzrn82oXIZjPXupKAJ5z18TSxrDr8DDKEwo/biNUpuR+tKuw+djeyjF8yxNV8v2GUFxoZtp+/Q==',key_name='tempest-keypair-1665806790',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0cd0968a9a1b4b9e984b0a10a6ac77a8',ramdisk_id='',reservation_id='r-6yvik42u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1630390846',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1630390846-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:16:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee6c6e44a0624805afeb68a67c99f325',uuid=20c8b8a1-1561-49f5-9fce-af2840195a57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "address": "fa:16:3e:3b:e1:5a", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fae567b-e2", "ovs_interfaceid": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.794 244018 DEBUG nova.network.os_vif_util [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Converting VIF {"id": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "address": "fa:16:3e:3b:e1:5a", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fae567b-e2", "ovs_interfaceid": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.795 244018 DEBUG nova.network.os_vif_util [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:e1:5a,bridge_name='br-int',has_traffic_filtering=True,id=5fae567b-e26e-4045-8efe-6d5fc03a1658,network=Network(d3fb36f1-0e88-43b4-a8a4-3844d55f1de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fae567b-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.796 244018 DEBUG os_vif [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:e1:5a,bridge_name='br-int',has_traffic_filtering=True,id=5fae567b-e26e-4045-8efe-6d5fc03a1658,network=Network(d3fb36f1-0e88-43b4-a8a4-3844d55f1de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fae567b-e2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.797 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.798 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.798 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.803 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.803 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5fae567b-e2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.804 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5fae567b-e2, col_values=(('external_ids', {'iface-id': '5fae567b-e26e-4045-8efe-6d5fc03a1658', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3b:e1:5a', 'vm-uuid': '20c8b8a1-1561-49f5-9fce-af2840195a57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
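
The transactions above are ovsdbapp issuing AddBridgeCommand, then AddPortCommand plus DbSetCommand, against the local OVSDB (the first transaction "caused no change" because br-int already exists, which is why may_exist=True matters). A sketch of the same three steps as plain ovs-vsctl calls, not nova's actual IDL code path, with all values copied from the log:

    import subprocess

    # idempotent bridge creation, matching AddBridgeCommand(may_exist=True)
    subprocess.run(['ovs-vsctl', '--may-exist', 'add-br', 'br-int',
                    '--', 'set', 'Bridge', 'br-int',
                    'datapath_type=system'], check=True)

    # add the tap port and stamp the Interface row with the external_ids
    # that let OVN match the port binding to the Neutron port and VM
    subprocess.run(['ovs-vsctl', '--may-exist', 'add-port', 'br-int',
                    'tap5fae567b-e2',
                    '--', 'set', 'Interface', 'tap5fae567b-e2',
                    'external_ids:iface-id=5fae567b-e26e-4045-8efe-6d5fc03a1658',
                    'external_ids:iface-status=active',
                    'external_ids:attached-mac=fa:16:3e:3b:e1:5a',
                    'external_ids:vm-uuid=20c8b8a1-1561-49f5-9fce-af2840195a57'],
                   check=True)

The iface-id external_id is what ovn-controller matches against the logical port name when it claims the port a moment later.
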
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.806 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:45 np0005629333 NetworkManager[49836]: <info>  [1772021805.8080] manager: (tap5fae567b-e2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.812 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.813 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.815 244018 INFO os_vif [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:e1:5a,bridge_name='br-int',has_traffic_filtering=True,id=5fae567b-e26e-4045-8efe-6d5fc03a1658,network=Network(d3fb36f1-0e88-43b4-a8a4-3844d55f1de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fae567b-e2')#033[00m
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.887 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.888 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.888 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.889 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] No VIF found with MAC fa:16:3e:3b:e1:5a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.890 244018 INFO nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Using config drive#033[00m
Feb 25 07:16:45 np0005629333 nova_compute[244014]: 2026-02-25 12:16:45.920 244018 DEBUG nova.storage.rbd_utils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 20c8b8a1-1561-49f5-9fce-af2840195a57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:16:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:16:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/649575112' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.026 244018 DEBUG oslo_concurrency.processutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.028 244018 DEBUG nova.objects.instance [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lazy-loading 'pci_devices' on Instance uuid 29947b36-0ef7-49c6-974b-34891af8b99a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.045 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:16:46 np0005629333 nova_compute[244014]:  <uuid>29947b36-0ef7-49c6-974b-34891af8b99a</uuid>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:  <name>instance-00000009</name>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerDiagnosticsTest-server-840556860</nova:name>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:16:44</nova:creationTime>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:16:46 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:        <nova:user uuid="3b5b46d7dac249e288d46ca1056e55e2">tempest-ServerDiagnosticsTest-727848253-project-member</nova:user>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:        <nova:project uuid="cbbbbb19c373469196c30db9a3ed4171">tempest-ServerDiagnosticsTest-727848253</nova:project>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      <nova:ports/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      <entry name="serial">29947b36-0ef7-49c6-974b-34891af8b99a</entry>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      <entry name="uuid">29947b36-0ef7-49c6-974b-34891af8b99a</entry>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/29947b36-0ef7-49c6-974b-34891af8b99a_disk">
Feb 25 07:16:46 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:16:46 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/29947b36-0ef7-49c6-974b-34891af8b99a_disk.config">
Feb 25 07:16:46 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:16:46 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/29947b36-0ef7-49c6-974b-34891af8b99a/console.log" append="off"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:16:46 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:16:46 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:16:46 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:16:46 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.120 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.121 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.122 244018 INFO nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Using config drive#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.151 244018 DEBUG nova.storage.rbd_utils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] rbd image 29947b36-0ef7-49c6-974b-34891af8b99a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.235 244018 DEBUG oslo_concurrency.lockutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Acquiring lock "db0fc9fa-1fc0-4334-96f9-2205fa53e308" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.236 244018 DEBUG oslo_concurrency.lockutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "db0fc9fa-1fc0-4334-96f9-2205fa53e308" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.263 244018 DEBUG nova.compute.manager [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.276 244018 INFO nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Creating config drive at /var/lib/nova/instances/20c8b8a1-1561-49f5-9fce-af2840195a57/disk.config#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.284 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/20c8b8a1-1561-49f5-9fce-af2840195a57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqoh9bd35 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
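
mkisofs is how nova materializes the config drive: a staged metadata directory is packed into an ISO 9660 image with the volume label config-2, which is the label cloud-init probes for inside the guest. A stand-alone sketch of the same invocation, with the flags copied verbatim from the logged command (metadata_dir stands in for nova's temporary directory, /tmp/tmpqoh9bd35 above):

    import subprocess

    def build_config_drive(iso_path: str, metadata_dir: str) -> None:
        subprocess.run(
            ['/usr/bin/mkisofs', '-o', iso_path,
             '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
             '-publisher',
             'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
             '-quiet', '-J', '-r',
             '-V', 'config-2',   # the label cloud-init searches for
             metadata_dir],
            check=True)
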
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.304 244018 DEBUG nova.network.neutron [req-0ca55554-9363-48b7-bcb7-32cdb139f253 req-0b900148-1d57-4351-91d8-0145910fe31b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Updated VIF entry in instance network info cache for port 5fae567b-e26e-4045-8efe-6d5fc03a1658. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.305 244018 DEBUG nova.network.neutron [req-0ca55554-9363-48b7-bcb7-32cdb139f253 req-0b900148-1d57-4351-91d8-0145910fe31b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Updating instance_info_cache with network_info: [{"id": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "address": "fa:16:3e:3b:e1:5a", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fae567b-e2", "ovs_interfaceid": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.345 244018 DEBUG oslo_concurrency.lockutils [req-0ca55554-9363-48b7-bcb7-32cdb139f253 req-0b900148-1d57-4351-91d8-0145910fe31b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-20c8b8a1-1561-49f5-9fce-af2840195a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.359 244018 INFO nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Creating config drive at /var/lib/nova/instances/29947b36-0ef7-49c6-974b-34891af8b99a/disk.config#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.364 244018 DEBUG oslo_concurrency.processutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/29947b36-0ef7-49c6-974b-34891af8b99a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpurogm5al execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.403 244018 DEBUG oslo_concurrency.lockutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.404 244018 DEBUG oslo_concurrency.lockutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.412 244018 DEBUG nova.virt.hardware [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.412 244018 INFO nova.compute.claims [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.417 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/20c8b8a1-1561-49f5-9fce-af2840195a57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqoh9bd35" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.451 244018 DEBUG nova.storage.rbd_utils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] rbd image 20c8b8a1-1561-49f5-9fce-af2840195a57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.455 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/20c8b8a1-1561-49f5-9fce-af2840195a57/disk.config 20c8b8a1-1561-49f5-9fce-af2840195a57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.477 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021791.4522336, 661b348d-5b73-45ed-8357-3aefed90d054 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.477 244018 INFO nova.compute.manager [-] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.491 244018 DEBUG oslo_concurrency.processutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/29947b36-0ef7-49c6-974b-34891af8b99a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpurogm5al" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.521 244018 DEBUG nova.storage.rbd_utils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] rbd image 29947b36-0ef7-49c6-974b-34891af8b99a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.528 244018 DEBUG oslo_concurrency.processutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/29947b36-0ef7-49c6-974b-34891af8b99a/disk.config 29947b36-0ef7-49c6-974b-34891af8b99a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.543 244018 DEBUG nova.compute.manager [None req-16d6bc58-3c7c-4f17-a575-83c41d122efa - - - - - -] [instance: 661b348d-5b73-45ed-8357-3aefed90d054] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.611 244018 DEBUG oslo_concurrency.processutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
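
Before claiming resources for the next build, nova asks Ceph for pool usage. The sketch below runs the same ceph df command from the log and reads the per-pool statistics; the JSON field names ('pools', 'stats', 'bytes_used') are an assumption based on recent Ceph releases, not confirmed by this log:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    for pool in json.loads(out)['pools']:
        print(pool['name'], pool['stats']['bytes_used'])
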
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.771 244018 DEBUG oslo_concurrency.processutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/20c8b8a1-1561-49f5-9fce-af2840195a57/disk.config 20c8b8a1-1561-49f5-9fce-af2840195a57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.773 244018 INFO nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Deleting local config drive /var/lib/nova/instances/20c8b8a1-1561-49f5-9fce-af2840195a57/disk.config because it was imported into RBD.#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.808 244018 DEBUG oslo_concurrency.processutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/29947b36-0ef7-49c6-974b-34891af8b99a/disk.config 29947b36-0ef7-49c6-974b-34891af8b99a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.809 244018 INFO nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Deleting local config drive /var/lib/nova/instances/29947b36-0ef7-49c6-974b-34891af8b99a/disk.config because it was imported into RBD.#033[00m
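
With RBD-backed storage the locally built config drive is only a staging artifact: it is imported into the vms pool and the local copy is then unlinked, exactly as the two pairs of records above describe. A sketch of that sequence (command line copied from the log; the image name follows the <instance-uuid>_disk.config convention shown):

    import os
    import subprocess

    def import_config_drive(local_path: str, image_name: str) -> None:
        subprocess.run(
            ['rbd', 'import', '--pool', 'vms', local_path, image_name,
             '--image-format=2', '--id', 'openstack',
             '--conf', '/etc/ceph/ceph.conf'],
            check=True)
        # "Deleting local config drive ... because it was imported into RBD"
        os.unlink(local_path)
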
Feb 25 07:16:46 np0005629333 kernel: tap5fae567b-e2: entered promiscuous mode
Feb 25 07:16:46 np0005629333 NetworkManager[49836]: <info>  [1772021806.8355] manager: (tap5fae567b-e2): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.838 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:16:46Z|00036|binding|INFO|Claiming lport 5fae567b-e26e-4045-8efe-6d5fc03a1658 for this chassis.
Feb 25 07:16:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:16:46Z|00037|binding|INFO|5fae567b-e26e-4045-8efe-6d5fc03a1658: Claiming fa:16:3e:3b:e1:5a 10.100.0.8
Feb 25 07:16:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:16:46Z|00038|binding|INFO|Setting lport 5fae567b-e26e-4045-8efe-6d5fc03a1658 ovn-installed in OVS
Feb 25 07:16:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:16:46Z|00039|binding|INFO|Setting lport 5fae567b-e26e-4045-8efe-6d5fc03a1658 up in Southbound
Feb 25 07:16:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:46.852 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:e1:5a 10.100.0.8'], port_security=['fa:16:3e:3b:e1:5a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '20c8b8a1-1561-49f5-9fce-af2840195a57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0cd0968a9a1b4b9e984b0a10a6ac77a8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f43142d8-27de-490e-9b86-d4e77c9c7e07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad5d1aaf-d2e8-49e5-aeb2-335f304b15d9, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=5fae567b-e26e-4045-8efe-6d5fc03a1658) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:16:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:46.853 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 5fae567b-e26e-4045-8efe-6d5fc03a1658 in datapath d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 bound to our chassis#033[00m
Feb 25 07:16:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:46.854 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d3fb36f1-0e88-43b4-a8a4-3844d55f1de8#033[00m
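The repr logged at 12:16:46.852 is ovsdbapp's row-event matcher firing: the metadata agent registers an event on the southbound Port_Binding table and, when the update shows the port bound to its own chassis, provisions metadata for the datapath (the two INFO lines that follow). A minimal sketch of such a handler against the ovsdbapp interface shown in the log (the run() body here is hypothetical):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Matches the event repr in the log: (('update',), 'Port_Binding', None)."""

        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # The real agent compares row.chassis with its own chassis and,
            # on a match, kicks off metadata provisioning for row.datapath.
            print('Port_Binding %s updated' % row.logical_port)
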
Feb 25 07:16:46 np0005629333 nova_compute[244014]: 2026-02-25 12:16:46.859 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:46.865 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7aae872d-5f9b-42a1-abd1-cd5dc6367cba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:46.866 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd3fb36f1-01 in ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:16:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:46.868 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd3fb36f1-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:16:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:46.868 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[611a17e4-7c6c-4309-8f64-12ecc000e4b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:46.869 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1321a4ad-a8a9-46b3-958e-014b1ed25793]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:46 np0005629333 systemd-udevd[254939]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:16:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:46.890 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[e1f944bc-ded2-4666-ba2c-c92fe56b6eed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:46 np0005629333 NetworkManager[49836]: <info>  [1772021806.9052] device (tap5fae567b-e2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:16:46 np0005629333 NetworkManager[49836]: <info>  [1772021806.9061] device (tap5fae567b-e2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:16:46 np0005629333 systemd-machined[210048]: New machine qemu-8-instance-00000008.
Feb 25 07:16:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:46.915 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ea88c88f-ad3e-402d-9dc0-42d73bec8226]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:46 np0005629333 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Feb 25 07:16:46 np0005629333 systemd-machined[210048]: New machine qemu-9-instance-00000009.
Feb 25 07:16:46 np0005629333 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Feb 25 07:16:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:46.962 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[aa5a2b51-e907-45ab-93e7-c614c712256e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:46 np0005629333 systemd-udevd[254944]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:16:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:46.969 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[46c98b4e-dab3-44e5-8090-8a1959bf9f9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:46 np0005629333 NetworkManager[49836]: <info>  [1772021806.9709] manager: (tapd3fb36f1-00): new Veth device (/org/freedesktop/NetworkManager/Devices/31)
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.005 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3913f3ed-c51f-4d5e-82ae-309cc60a4f20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.009 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b7412fdb-2f01-403f-848b-f3b5a9cbd6b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:47 np0005629333 NetworkManager[49836]: <info>  [1772021807.0367] device (tapd3fb36f1-00): carrier: link connected
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.042 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f9995fcb-7ec6-4091-bc2e-20a8bb10c381]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.059 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[34e34fd8-aece-47ce-acad-ced72b0e4f28]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd3fb36f1-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:f0:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 377644, 'reachable_time': 41940, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254978, 'error': None, 'target': 'ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.076 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[97ddf54d-9d9c-4265-8007-56b193eb74f0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb7:f004'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 377644, 'tstamp': 377644}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254979, 'error': None, 'target': 'ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.095 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0b1b506f-c56f-4bf1-b684-f151fab8f87c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd3fb36f1-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:f0:04'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 377644, 'reachable_time': 41940, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254995, 'error': None, 'target': 'ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
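The two walls of netlink attributes above are RTM_NEWLINK (and, between them, an RTM_NEWADDR) replies that the privsep helper fetched from inside the ovnmeta- namespace; note the 'target' field in each message header. The same data can be read directly with pyroute2, which is what sits underneath neutron's ip_lib; a minimal sketch, assuming pyroute2 is available and the namespace still exists:

    from pyroute2 import NetNS

    # Inspect links inside the metadata namespace named in the log above.
    with NetNS('ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8') as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_ADDRESS'),
                  link['state'])
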
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.128 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4381bfac-010e-4516-bad5-eb353de2746b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.192 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d8573b12-48cf-4e1e-94e1-c241192a1c84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.194 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3fb36f1-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.195 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.196 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3fb36f1-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:16:47 np0005629333 kernel: tapd3fb36f1-00: entered promiscuous mode
Feb 25 07:16:47 np0005629333 NetworkManager[49836]: <info>  [1772021807.1996] manager: (tapd3fb36f1-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.203 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.204 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd3fb36f1-00, col_values=(('external_ids', {'iface-id': '642d12d0-ef5c-4bc5-ba96-4b85e033986b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
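The three ovsdbapp transactions at 12:16:47.194-.204 remove the agent-side veth end from br-ex if present (a no-op here, hence "Transaction caused no change"), plug it into br-int, and stamp the interface with the iface-id that OVN uses to match it to a logical port. In ovs-vsctl terms the sequence is roughly the following (a hypothetical reconstruction; the agent speaks ovsdb directly rather than forking ovs-vsctl):

    import subprocess

    # CLI equivalents of the DelPortCommand / AddPortCommand / DbSetCommand
    # transactions logged above.
    for cmd in (
        ['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', 'tapd3fb36f1-00'],
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tapd3fb36f1-00'],
        ['ovs-vsctl', 'set', 'Interface', 'tapd3fb36f1-00',
         'external_ids:iface-id=642d12d0-ef5c-4bc5-ba96-4b85e033986b'],
    ):
        subprocess.run(cmd, check=True)
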
Feb 25 07:16:47 np0005629333 ovn_controller[147040]: 2026-02-25T12:16:47Z|00040|binding|INFO|Releasing lport 642d12d0-ef5c-4bc5-ba96-4b85e033986b from this chassis (sb_readonly=0)
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.206 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d3fb36f1-0e88-43b4-a8a4-3844d55f1de8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d3fb36f1-0e88-43b4-a8a4-3844d55f1de8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.207 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0bf0d727-fdb8-45a1-9519-ae00af8d2173]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.207 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/d3fb36f1-0e88-43b4-a8a4-3844d55f1de8.pid.haproxy
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID d3fb36f1-0e88-43b4-a8a4-3844d55f1de8
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:16:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:47.208 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'env', 'PROCESS_TAG=haproxy-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d3fb36f1-0e88-43b4-a8a4-3844d55f1de8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
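The generated configuration plus the rootwrap command above make up the whole OVN metadata proxy: one haproxy per network namespace, listening on 169.254.169.254:80, adding an X-OVN-Network-ID header, and relaying requests to the agent over the UNIX socket at /var/lib/neutron/metadata_proxy. From a guest on network d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 this serves the standard metadata endpoint; a quick smoke test that would run inside such a guest (assumes the requests library is installed there):

    import requests

    # 169.254.169.254 is answered by the haproxy whose config is logged above.
    resp = requests.get(
        'http://169.254.169.254/openstack/latest/meta_data.json', timeout=5)
    resp.raise_for_status()
    print(resp.json()['uuid'])  # the guest's own instance UUID
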
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.212 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:16:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1231002901' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.252 244018 DEBUG oslo_concurrency.processutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.259 244018 DEBUG nova.compute.provider_tree [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.264 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021807.2599277, 29947b36-0ef7-49c6-974b-34891af8b99a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.264 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.269 244018 DEBUG nova.compute.manager [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.269 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.275 244018 INFO nova.virt.libvirt.driver [-] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Instance spawned successfully.#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.275 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.304 244018 DEBUG nova.scheduler.client.report [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
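For reference, Placement turns this inventory into usable capacity as (total - reserved) * allocation_ratio per resource class, so the figures logged above amount to 7167 MB of RAM, 32 vCPUs and 52.2 GB of disk. A quick check (values copied from the log line):

    # capacity = (total - reserved) * allocation_ratio, per resource class
    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, round(cap, 1))
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2
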
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.314 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.319 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.319 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.320 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.321 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.322 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.323 244018 DEBUG nova.virt.libvirt.driver [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.328 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.392 244018 DEBUG oslo_concurrency.lockutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.988s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.393 244018 DEBUG nova.compute.manager [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.414 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.414 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021807.262309, 29947b36-0ef7-49c6-974b-34891af8b99a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.415 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] VM Started (Lifecycle Event)#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.458 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.463 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:16:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:16:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1386756722' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:16:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:16:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1386756722' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.475 244018 INFO nova.compute.manager [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Took 3.33 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.476 244018 DEBUG nova.compute.manager [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.488 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.489 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021807.4041114, 20c8b8a1-1561-49f5-9fce-af2840195a57 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.490 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] VM Started (Lifecycle Event)#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.496 244018 DEBUG nova.compute.manager [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Feb 25 07:16:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 248 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 3.6 MiB/s wr, 87 op/s
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.537 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.543 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021807.4041996, 20c8b8a1-1561-49f5-9fce-af2840195a57 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.543 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.547 244018 INFO nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.571 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.585 244018 DEBUG nova.compute.manager [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.591 244018 INFO nova.compute.manager [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Took 4.83 seconds to build instance.#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.593 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.630 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.638 244018 DEBUG oslo_concurrency.lockutils [None req-5a4db378-e4ce-420b-a212-1d97c87e4459 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lock "29947b36-0ef7-49c6-974b-34891af8b99a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.999s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:16:47 np0005629333 podman[255115]: 2026-02-25 12:16:47.670912798 +0000 UTC m=+0.102480161 container create 2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_managed=true)
Feb 25 07:16:47 np0005629333 podman[255115]: 2026-02-25 12:16:47.607545842 +0000 UTC m=+0.039113245 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.699 244018 DEBUG nova.compute.manager [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.702 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.703 244018 INFO nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Creating image(s)#033[00m
Feb 25 07:16:47 np0005629333 systemd[1]: Started libpod-conmon-2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e.scope.
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.738 244018 DEBUG nova.storage.rbd_utils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:16:47 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:16:47 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/707d0ec3801d09f720d201cc99cc3906ecd5ce5937d1d449d2a23665084e2304/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:16:47 np0005629333 podman[255115]: 2026-02-25 12:16:47.781028945 +0000 UTC m=+0.212596388 container init 2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0)
Feb 25 07:16:47 np0005629333 podman[255115]: 2026-02-25 12:16:47.785163828 +0000 UTC m=+0.216731231 container start 2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.790 244018 DEBUG nova.storage.rbd_utils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.820 244018 DEBUG nova.storage.rbd_utils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.823 244018 DEBUG oslo_concurrency.processutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:16:47 np0005629333 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 07:16:47 np0005629333 neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8[255138]: [NOTICE]   (255169) : New worker (255192) forked
Feb 25 07:16:47 np0005629333 neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8[255138]: [NOTICE]   (255169) : Loading success.
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.887 244018 DEBUG nova.compute.manager [req-2986612b-3f92-4793-b4e6-a11ccb0be262 req-5c86dd08-a726-4dd4-aa0f-95acb2f19a05 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Received event network-vif-plugged-5fae567b-e26e-4045-8efe-6d5fc03a1658 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.888 244018 DEBUG oslo_concurrency.lockutils [req-2986612b-3f92-4793-b4e6-a11ccb0be262 req-5c86dd08-a726-4dd4-aa0f-95acb2f19a05 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.888 244018 DEBUG oslo_concurrency.lockutils [req-2986612b-3f92-4793-b4e6-a11ccb0be262 req-5c86dd08-a726-4dd4-aa0f-95acb2f19a05 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.888 244018 DEBUG oslo_concurrency.lockutils [req-2986612b-3f92-4793-b4e6-a11ccb0be262 req-5c86dd08-a726-4dd4-aa0f-95acb2f19a05 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.889 244018 DEBUG nova.compute.manager [req-2986612b-3f92-4793-b4e6-a11ccb0be262 req-5c86dd08-a726-4dd4-aa0f-95acb2f19a05 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Processing event network-vif-plugged-5fae567b-e26e-4045-8efe-6d5fc03a1658 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.889 244018 DEBUG nova.compute.manager [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.890 244018 DEBUG oslo_concurrency.processutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
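The qemu-img probe above runs under oslo.concurrency's prlimit wrapper: --as=1073741824 caps the child's address space at 1 GiB and --cpu=30 caps its CPU seconds, so a malformed base image cannot wedge the compute service. The same guard expressed against the public processutils API looks roughly like this (limits and command taken from the log line):

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1024 ** 3,  # --as=1073741824
                                        cpu_time=30)              # --cpu=30
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=limits)
    print(out)
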
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.890 244018 DEBUG oslo_concurrency.lockutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.891 244018 DEBUG oslo_concurrency.lockutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.892 244018 DEBUG oslo_concurrency.lockutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.917 244018 DEBUG nova.storage.rbd_utils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.921 244018 DEBUG oslo_concurrency.processutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.934 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021807.893856, 20c8b8a1-1561-49f5-9fce-af2840195a57 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.935 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.941 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.944 244018 INFO nova.virt.libvirt.driver [-] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Instance spawned successfully.#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.944 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.968 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.972 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.973 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.973 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.974 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.974 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.974 244018 DEBUG nova.virt.libvirt.driver [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:47 np0005629333 nova_compute[244014]: 2026-02-25 12:16:47.979 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.019 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
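The sync message above compares two encodings of the same fact: the database row still holds power_state 0 while libvirt already reports 1. A small decoder using the constants from nova.compute.power_state makes the pair readable; because task_state is still "spawning", the manager skips the sync rather than overwriting the building instance:

    # Values from nova.compute.power_state (the gaps are historical).
    POWER_STATES = {
        0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
        4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED',
    }
    print(POWER_STATES[0], '->', POWER_STATES[1])  # NOSTATE -> RUNNING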
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.093 244018 INFO nova.compute.manager [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Took 10.64 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.093 244018 DEBUG nova.compute.manager [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.175 244018 INFO nova.compute.manager [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Took 11.80 seconds to build instance.#033[00m
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.198 244018 DEBUG oslo_concurrency.processutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.226 244018 DEBUG oslo_concurrency.lockutils [None req-2c5d5ec7-5f6e-4293-86cc-0147a7eb1e8a ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.266 244018 DEBUG nova.storage.rbd_utils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] resizing rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
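The two lines above show the base image being imported into the vms pool and the instance disk resized to 1073741824 bytes, i.e. the 1 GiB root disk of the m1.nano flavor dumped further down (root_gb=1). A minimal sketch of the same resize through the python-rbd bindings, assuming python3-rados/python3-rbd are installed and the client.openstack keyring referenced by ceph.conf is readable:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            # resize() takes bytes, matching the rbd_utils log line above.
            with rbd.Image(ioctx, 'db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk') as image:
                image.resize(1 * 1024 ** 3)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()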
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.359 244018 DEBUG nova.objects.instance [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lazy-loading 'migration_context' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.361 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.409 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.409 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Ensure instance console log exists: /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.410 244018 DEBUG oslo_concurrency.lockutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.410 244018 DEBUG oslo_concurrency.lockutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.410 244018 DEBUG oslo_concurrency.lockutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.412 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.415 244018 WARNING nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.420 244018 DEBUG nova.virt.libvirt.host [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.421 244018 DEBUG nova.virt.libvirt.host [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.423 244018 DEBUG nova.virt.libvirt.host [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.423 244018 DEBUG nova.virt.libvirt.host [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
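The two probes above land on cgroups v2: no v1 cpu controller, but the unified hierarchy has one. An illustrative version of the v2 check (a sketch of the idea, not Nova's exact code in host.py): on a v2 host the root exposes cgroup.controllers, and the token 'cpu' in that file is what "CPU controller found on host" reports:

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root: str = '/sys/fs/cgroup') -> bool:
        # cgroup.controllers lists the controllers available at the v2 root,
        # e.g. 'cpuset cpu io memory hugetlb pids rdma misc'.
        controllers = Path(root, 'cgroup.controllers')
        return controllers.is_file() and 'cpu' in controllers.read_text().split()

    print(has_cgroupsv2_cpu_controller())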
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.424 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.424 244018 DEBUG nova.virt.hardware [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.425 244018 DEBUG nova.virt.hardware [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.425 244018 DEBUG nova.virt.hardware [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.425 244018 DEBUG nova.virt.hardware [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.425 244018 DEBUG nova.virt.hardware [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.426 244018 DEBUG nova.virt.hardware [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.426 244018 DEBUG nova.virt.hardware [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.426 244018 DEBUG nova.virt.hardware [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.426 244018 DEBUG nova.virt.hardware [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.427 244018 DEBUG nova.virt.hardware [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.427 244018 DEBUG nova.virt.hardware [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
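The topology walk above starts from a 1-vCPU flavor with no flavor or image constraints, so the 65536 ceilings never bite and the only split is 1:1:1. An illustrative enumeration of the same idea (a sketch, not Nova's routine in hardware.py): every sockets/cores/threads factorisation of the vCPU count under the limits:

    def possible_topologies(vcpus: int, limit: int = 65536):
        topologies = []
        for sockets in range(1, min(vcpus, limit) + 1):
            if vcpus % sockets:
                continue
            per_socket = vcpus // sockets
            for cores in range(1, min(per_socket, limit) + 1):
                if per_socket % cores:
                    continue
                threads = per_socket // cores
                if threads <= limit:
                    topologies.append((sockets, cores, threads))
        return topologies

    assert possible_topologies(1) == [(1, 1, 1)]  # matches "Got 1 possible topologies"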
Feb 25 07:16:48 np0005629333 nova_compute[244014]: 2026-02-25 12:16:48.430 244018 DEBUG oslo_concurrency.processutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:16:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:16:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2995159402' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:16:49 np0005629333 nova_compute[244014]: 2026-02-25 12:16:49.020 244018 DEBUG oslo_concurrency.processutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
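The mon dump above is how the driver learns the cluster's monitor endpoints. A sketch of consuming the same JSON, assuming the field names current Ceph releases emit (mons / public_addrs / addrvec): pick each monitor's v1 address, which is exactly what surfaces as <host name="192.168.122.100" port="6789"/> in the disk XML further down:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    for mon in json.loads(out)['mons']:
        for ep in mon['public_addrs']['addrvec']:
            if ep['type'] == 'v1':  # legacy messenger, port 6789
                print(mon['name'], ep['addr'])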
Feb 25 07:16:49 np0005629333 nova_compute[244014]: 2026-02-25 12:16:49.056 244018 DEBUG nova.storage.rbd_utils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:16:49 np0005629333 nova_compute[244014]: 2026-02-25 12:16:49.064 244018 DEBUG oslo_concurrency.processutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:16:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:16:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 248 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.6 MiB/s wr, 77 op/s
Feb 25 07:16:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:16:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/987021487' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:16:49 np0005629333 nova_compute[244014]: 2026-02-25 12:16:49.585 244018 DEBUG oslo_concurrency.processutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:16:49 np0005629333 nova_compute[244014]: 2026-02-25 12:16:49.588 244018 DEBUG nova.objects.instance [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lazy-loading 'pci_devices' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.209 244018 DEBUG nova.compute.manager [None req-91302496-e037-46c5-baad-b498794749b5 e7c0a976ae734458914db7c57527b6df f051d58fecda492cbff0432942a1a355 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.213 244018 INFO nova.compute.manager [None req-91302496-e037-46c5-baad-b498794749b5 e7c0a976ae734458914db7c57527b6df f051d58fecda492cbff0432942a1a355 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Retrieving diagnostics#033[00m
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.224 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:16:50 np0005629333 nova_compute[244014]:  <uuid>db0fc9fa-1fc0-4334-96f9-2205fa53e308</uuid>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:  <name>instance-0000000a</name>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServersAdmin275Test-server-1821961636</nova:name>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:16:48</nova:creationTime>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:16:50 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:        <nova:user uuid="40cba1ddb2af4adea0f03477ccf87402">tempest-ServersAdmin275Test-694282747-project-member</nova:user>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:        <nova:project uuid="03ef50f1c8db4f9d9f7512943746f9e7">tempest-ServersAdmin275Test-694282747</nova:project>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      <nova:ports/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      <entry name="serial">db0fc9fa-1fc0-4334-96f9-2205fa53e308</entry>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      <entry name="uuid">db0fc9fa-1fc0-4334-96f9-2205fa53e308</entry>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk">
Feb 25 07:16:50 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:16:50 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config">
Feb 25 07:16:50 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:16:50 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/console.log" append="off"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:16:50 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:16:50 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:16:50 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:16:50 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
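The domain XML above is the end product of _get_guest_xml: q35 machine type, host-model CPU, both disks on rbd with the ceph secret, a serial console logging to console.log, and a USB tablet matching hw_pointer_model. A minimal sketch of feeding such XML to libvirt by hand, assuming it was saved to domain.xml (hypothetical path) and python3-libvirt is present; Nova drives the same API through its Guest wrapper rather than these two calls:

    import libvirt  # python3-libvirt

    with open('domain.xml') as f:
        xml = f.read()
    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)  # persist the definition
        dom.create()               # boot it (instance-0000000a above)
    finally:
        conn.close()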
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.302 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.303 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.303 244018 INFO nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Using config drive#033[00m
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.329 244018 DEBUG nova.storage.rbd_utils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.423 244018 DEBUG nova.compute.manager [req-e72aa3e0-2e85-42c1-8a69-5eeaedd14cbe req-2282baa8-ec70-4582-b929-2cd951fe0d50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Received event network-vif-plugged-5fae567b-e26e-4045-8efe-6d5fc03a1658 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.424 244018 DEBUG oslo_concurrency.lockutils [req-e72aa3e0-2e85-42c1-8a69-5eeaedd14cbe req-2282baa8-ec70-4582-b929-2cd951fe0d50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.424 244018 DEBUG oslo_concurrency.lockutils [req-e72aa3e0-2e85-42c1-8a69-5eeaedd14cbe req-2282baa8-ec70-4582-b929-2cd951fe0d50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.425 244018 DEBUG oslo_concurrency.lockutils [req-e72aa3e0-2e85-42c1-8a69-5eeaedd14cbe req-2282baa8-ec70-4582-b929-2cd951fe0d50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.425 244018 DEBUG nova.compute.manager [req-e72aa3e0-2e85-42c1-8a69-5eeaedd14cbe req-2282baa8-ec70-4582-b929-2cd951fe0d50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] No waiting events found dispatching network-vif-plugged-5fae567b-e26e-4045-8efe-6d5fc03a1658 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.425 244018 WARNING nova.compute.manager [req-e72aa3e0-2e85-42c1-8a69-5eeaedd14cbe req-2282baa8-ec70-4582-b929-2cd951fe0d50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Received unexpected event network-vif-plugged-5fae567b-e26e-4045-8efe-6d5fc03a1658 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.595 244018 INFO nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Creating config drive at /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config#033[00m
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.601 244018 DEBUG oslo_concurrency.processutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpddk8psre execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.662 244018 DEBUG oslo_concurrency.lockutils [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Acquiring lock "29947b36-0ef7-49c6-974b-34891af8b99a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.664 244018 DEBUG oslo_concurrency.lockutils [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lock "29947b36-0ef7-49c6-974b-34891af8b99a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.664 244018 DEBUG oslo_concurrency.lockutils [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Acquiring lock "29947b36-0ef7-49c6-974b-34891af8b99a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.665 244018 DEBUG oslo_concurrency.lockutils [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lock "29947b36-0ef7-49c6-974b-34891af8b99a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.666 244018 DEBUG oslo_concurrency.lockutils [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lock "29947b36-0ef7-49c6-974b-34891af8b99a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.673 244018 INFO nova.compute.manager [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Terminating instance#033[00m
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.675 244018 DEBUG oslo_concurrency.lockutils [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Acquiring lock "refresh_cache-29947b36-0ef7-49c6-974b-34891af8b99a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.676 244018 DEBUG oslo_concurrency.lockutils [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Acquired lock "refresh_cache-29947b36-0ef7-49c6-974b-34891af8b99a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.676 244018 DEBUG nova.network.neutron [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.737 244018 DEBUG oslo_concurrency.processutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpddk8psre" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
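One quoting detail worth noting in the mkisofs line above: processutils logs the argv joined with spaces, so "-publisher OpenStack Compute 27.5.2-..." reads like six arguments but is really one. The same invocation with the boundaries explicit, with cd_staging standing in for the ephemeral /tmp/tmpddk8psre tree (the usual config-2 layout, openstack/latest/meta_data.json and friends):

    import subprocess

    subprocess.run(
        ['/usr/bin/mkisofs', '-o', 'disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
         '-quiet', '-J', '-r',
         '-V', 'config-2',  # the volume label cloud-init probes for
         'cd_staging'],
        check=True)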
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.770 244018 DEBUG nova.storage.rbd_utils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.778 244018 DEBUG oslo_concurrency.processutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.807 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.931 244018 DEBUG oslo_concurrency.processutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:16:50 np0005629333 nova_compute[244014]: 2026-02-25 12:16:50.932 244018 INFO nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Deleting local config drive /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config because it was imported into RBD.#033[00m
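After the import the local ISO is deleted, so the RBD object is now the only copy of the config drive. A quick way to confirm it landed, using the standard rbd CLI with the same auth as the import above:

    import subprocess

    # Prints size, format 2 and features for the fresh config-drive image.
    subprocess.run(
        ['rbd', 'info', 'vms/db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True)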
Feb 25 07:16:50 np0005629333 systemd-machined[210048]: New machine qemu-10-instance-0000000a.
Feb 25 07:16:51 np0005629333 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
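systemd-machined now tracks the new qemu process as a machine with its own scope unit. If needed, the registration can be inspected from the host with standard machinectl, which shows the scope, leader PID and uptime:

    import subprocess

    subprocess.run(['machinectl', 'status', 'qemu-10-instance-0000000a'], check=True)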
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.246 244018 DEBUG nova.network.neutron [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.513 244018 DEBUG nova.network.neutron [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:16:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 257 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 914 KiB/s rd, 4.1 MiB/s wr, 123 op/s
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.532 244018 DEBUG oslo_concurrency.lockutils [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Releasing lock "refresh_cache-29947b36-0ef7-49c6-974b-34891af8b99a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.533 244018 DEBUG nova.compute.manager [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:16:51 np0005629333 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Feb 25 07:16:51 np0005629333 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 4.551s CPU time.
Feb 25 07:16:51 np0005629333 systemd-machined[210048]: Machine qemu-9-instance-00000009 terminated.
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.651 244018 DEBUG nova.compute.manager [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.653 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.653 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021811.6524277, db0fc9fa-1fc0-4334-96f9-2205fa53e308 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.654 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.658 244018 INFO nova.virt.libvirt.driver [-] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance spawned successfully.#033[00m
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.659 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.694 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.704 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.711 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.712 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.712 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.713 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.714 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.714 244018 DEBUG nova.virt.libvirt.driver [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.745 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.745 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021811.6526158, db0fc9fa-1fc0-4334-96f9-2205fa53e308 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.745 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] VM Started (Lifecycle Event)#033[00m
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.760 244018 INFO nova.virt.libvirt.driver [-] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Instance destroyed successfully.#033[00m
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.761 244018 DEBUG nova.objects.instance [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lazy-loading 'resources' on Instance uuid 29947b36-0ef7-49c6-974b-34891af8b99a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.777 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.795 244018 INFO nova.compute.manager [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Took 4.10 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.795 244018 DEBUG nova.compute.manager [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.799 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.847 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.869 244018 INFO nova.compute.manager [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Took 5.52 seconds to build instance.#033[00m
Feb 25 07:16:51 np0005629333 nova_compute[244014]: 2026-02-25 12:16:51.885 244018 DEBUG oslo_concurrency.lockutils [None req-2591064a-db03-40c9-8214-80d4271cef9b 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "db0fc9fa-1fc0-4334-96f9-2205fa53e308" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:16:52 np0005629333 nova_compute[244014]: 2026-02-25 12:16:52.222 244018 INFO nova.virt.libvirt.driver [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Deleting instance files /var/lib/nova/instances/29947b36-0ef7-49c6-974b-34891af8b99a_del#033[00m
Feb 25 07:16:52 np0005629333 nova_compute[244014]: 2026-02-25 12:16:52.225 244018 INFO nova.virt.libvirt.driver [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Deletion of /var/lib/nova/instances/29947b36-0ef7-49c6-974b-34891af8b99a_del complete#033[00m
Feb 25 07:16:52 np0005629333 nova_compute[244014]: 2026-02-25 12:16:52.286 244018 INFO nova.compute.manager [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:16:52 np0005629333 nova_compute[244014]: 2026-02-25 12:16:52.286 244018 DEBUG oslo.service.loopingcall [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:16:52 np0005629333 nova_compute[244014]: 2026-02-25 12:16:52.287 244018 DEBUG nova.compute.manager [-] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:16:52 np0005629333 nova_compute[244014]: 2026-02-25 12:16:52.287 244018 DEBUG nova.network.neutron [-] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:16:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:16:52 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:16:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:16:52 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:16:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:16:52 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:16:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:16:52 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:16:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:16:52 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:16:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:16:52 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:16:52 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:16:52 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:16:52 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
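The mon_command traffic above is plain JSON: each mgr/cephadm call ("config generate-minimal-conf", "auth get", "osd tree" filtered to destroyed OSDs) is a {"prefix": ...} document dispatched to the monitor. A minimal sketch of issuing the same commands through the python3-rados bindings, assuming a readable /etc/ceph/ceph.conf and an admin-capable keyring; none of this code is taken from the log itself:

    import json
    import rados

    # Illustrative only: send the same JSON-framed commands the mgr dispatches
    # above through librados' mon_command interface.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumes admin keyring
    cluster.connect()
    try:
        for cmd in ({"prefix": "config generate-minimal-conf"},
                    {"prefix": "osd tree", "states": ["destroyed"], "format": "json"}):
            ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b'')
            print(ret, outbuf.decode() or outs)
    finally:
        cluster.shutdown()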
Feb 25 07:16:52 np0005629333 nova_compute[244014]: 2026-02-25 12:16:52.835 244018 DEBUG nova.network.neutron [-] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:16:52 np0005629333 nova_compute[244014]: 2026-02-25 12:16:52.866 244018 DEBUG nova.network.neutron [-] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:16:52 np0005629333 nova_compute[244014]: 2026-02-25 12:16:52.876 244018 INFO nova.compute.manager [-] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Took 0.59 seconds to deallocate network for instance.
Feb 25 07:16:52 np0005629333 podman[255656]: 2026-02-25 12:16:52.895756145 +0000 UTC m=+0.053939625 container create 1543e3c4676c9fe3c93b3054ae70218e1ec6537692bce6dbcec5f54eb3f0c9b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_lewin, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:16:52 np0005629333 systemd[1]: Started libpod-conmon-1543e3c4676c9fe3c93b3054ae70218e1ec6537692bce6dbcec5f54eb3f0c9b1.scope.
Feb 25 07:16:52 np0005629333 nova_compute[244014]: 2026-02-25 12:16:52.936 244018 DEBUG oslo_concurrency.lockutils [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:16:52 np0005629333 nova_compute[244014]: 2026-02-25 12:16:52.936 244018 DEBUG oslo_concurrency.lockutils [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:16:52 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:16:52 np0005629333 podman[255656]: 2026-02-25 12:16:52.867434312 +0000 UTC m=+0.025617792 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:16:52 np0005629333 podman[255656]: 2026-02-25 12:16:52.983643731 +0000 UTC m=+0.141827241 container init 1543e3c4676c9fe3c93b3054ae70218e1ec6537692bce6dbcec5f54eb3f0c9b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 07:16:52 np0005629333 podman[255656]: 2026-02-25 12:16:52.988562537 +0000 UTC m=+0.146746047 container start 1543e3c4676c9fe3c93b3054ae70218e1ec6537692bce6dbcec5f54eb3f0c9b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 07:16:52 np0005629333 exciting_lewin[255672]: 167 167
Feb 25 07:16:52 np0005629333 systemd[1]: libpod-1543e3c4676c9fe3c93b3054ae70218e1ec6537692bce6dbcec5f54eb3f0c9b1.scope: Deactivated successfully.
Feb 25 07:16:53 np0005629333 podman[255656]: 2026-02-25 12:16:52.99504324 +0000 UTC m=+0.153226740 container attach 1543e3c4676c9fe3c93b3054ae70218e1ec6537692bce6dbcec5f54eb3f0c9b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_lewin, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:16:53 np0005629333 podman[255656]: 2026-02-25 12:16:53.00141479 +0000 UTC m=+0.159598300 container died 1543e3c4676c9fe3c93b3054ae70218e1ec6537692bce6dbcec5f54eb3f0c9b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_lewin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:16:53 np0005629333 systemd[1]: var-lib-containers-storage-overlay-86ade9d28cdec68a1d5cd504f19ebadf88996996ee8aa60a5d0c703247b4a331-merged.mount: Deactivated successfully.
Feb 25 07:16:53 np0005629333 nova_compute[244014]: 2026-02-25 12:16:53.047 244018 DEBUG oslo_concurrency.processutils [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:16:53 np0005629333 podman[255656]: 2026-02-25 12:16:53.068509507 +0000 UTC m=+0.226692977 container remove 1543e3c4676c9fe3c93b3054ae70218e1ec6537692bce6dbcec5f54eb3f0c9b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 07:16:53 np0005629333 systemd[1]: libpod-conmon-1543e3c4676c9fe3c93b3054ae70218e1ec6537692bce6dbcec5f54eb3f0c9b1.scope: Deactivated successfully.
Feb 25 07:16:53 np0005629333 podman[255717]: 2026-02-25 12:16:53.256390618 +0000 UTC m=+0.046859895 container create 7c61ddf31b0a769831c9082ddfe7aab3d036c8c321c7409995d77c6f8769081f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:16:53 np0005629333 systemd[1]: Started libpod-conmon-7c61ddf31b0a769831c9082ddfe7aab3d036c8c321c7409995d77c6f8769081f.scope.
Feb 25 07:16:53 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:16:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72598b2ea353be19f217fb334442bd54625c9421af8b805fe60abfd98b91cb3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:16:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72598b2ea353be19f217fb334442bd54625c9421af8b805fe60abfd98b91cb3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:16:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72598b2ea353be19f217fb334442bd54625c9421af8b805fe60abfd98b91cb3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:16:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72598b2ea353be19f217fb334442bd54625c9421af8b805fe60abfd98b91cb3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:16:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72598b2ea353be19f217fb334442bd54625c9421af8b805fe60abfd98b91cb3f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:16:53 np0005629333 podman[255717]: 2026-02-25 12:16:53.239777884 +0000 UTC m=+0.030247141 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:16:53 np0005629333 podman[255717]: 2026-02-25 12:16:53.349043076 +0000 UTC m=+0.139512383 container init 7c61ddf31b0a769831c9082ddfe7aab3d036c8c321c7409995d77c6f8769081f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_shannon, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Feb 25 07:16:53 np0005629333 podman[255717]: 2026-02-25 12:16:53.367816774 +0000 UTC m=+0.158286041 container start 7c61ddf31b0a769831c9082ddfe7aab3d036c8c321c7409995d77c6f8769081f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_shannon, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 25 07:16:53 np0005629333 nova_compute[244014]: 2026-02-25 12:16:53.368 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:16:53 np0005629333 podman[255717]: 2026-02-25 12:16:53.373406001 +0000 UTC m=+0.163875318 container attach 7c61ddf31b0a769831c9082ddfe7aab3d036c8c321c7409995d77c6f8769081f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_shannon, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:16:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 286 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.1 MiB/s wr, 259 op/s
Feb 25 07:16:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:16:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/314277043' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:16:53 np0005629333 nova_compute[244014]: 2026-02-25 12:16:53.620 244018 DEBUG oslo_concurrency.processutils [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
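The 0.573 s "ceph df" call above is how nova's RBD image backend samples pool capacity: a plain subprocess whose JSON output feeds the resource tracker. A stand-alone sketch of the same probe, assuming the ceph CLI and the client.openstack keyring are reachable; the top-level "stats" field names match recent Ceph releases but are an assumption here, not something the log confirms:

    import json
    import subprocess

    # Re-run the storage probe logged above and summarise cluster capacity.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]               # assumed field layout
    total = stats["total_bytes"]
    avail = stats["total_avail_bytes"]
    print(f"{avail / total:.1%} free of {total / 2**30:.0f} GiB")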
Feb 25 07:16:53 np0005629333 nova_compute[244014]: 2026-02-25 12:16:53.628 244018 DEBUG nova.compute.provider_tree [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:16:53 np0005629333 nova_compute[244014]: 2026-02-25 12:16:53.654 244018 DEBUG nova.scheduler.client.report [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:16:53 np0005629333 nova_compute[244014]: 2026-02-25 12:16:53.698 244018 DEBUG oslo_concurrency.lockutils [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
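The inventory dict logged above is what Placement uses to cap scheduling on this node: for each resource class the usable capacity is (total - reserved) * allocation_ratio. Worked out for the logged values, as a sketch rather than nova's own code:

    # Capacity Placement derives from the inventory logged above:
    #   usable = (total - reserved) * allocation_ratio
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)   # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2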
Feb 25 07:16:53 np0005629333 ecstatic_shannon[255734]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:16:53 np0005629333 ecstatic_shannon[255734]: --> All data devices are unavailable
Feb 25 07:16:53 np0005629333 nova_compute[244014]: 2026-02-25 12:16:53.841 244018 INFO nova.scheduler.client.report [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Deleted allocations for instance 29947b36-0ef7-49c6-974b-34891af8b99a
Feb 25 07:16:53 np0005629333 systemd[1]: libpod-7c61ddf31b0a769831c9082ddfe7aab3d036c8c321c7409995d77c6f8769081f.scope: Deactivated successfully.
Feb 25 07:16:53 np0005629333 nova_compute[244014]: 2026-02-25 12:16:53.882 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:16:53 np0005629333 nova_compute[244014]: 2026-02-25 12:16:53.884 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 07:16:53 np0005629333 nova_compute[244014]: 2026-02-25 12:16:53.884 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 07:16:53 np0005629333 podman[255756]: 2026-02-25 12:16:53.905817826 +0000 UTC m=+0.031611772 container died 7c61ddf31b0a769831c9082ddfe7aab3d036c8c321c7409995d77c6f8769081f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_shannon, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:16:53 np0005629333 systemd[1]: var-lib-containers-storage-overlay-72598b2ea353be19f217fb334442bd54625c9421af8b805fe60abfd98b91cb3f-merged.mount: Deactivated successfully.
Feb 25 07:16:53 np0005629333 podman[255756]: 2026-02-25 12:16:53.94795478 +0000 UTC m=+0.073748726 container remove 7c61ddf31b0a769831c9082ddfe7aab3d036c8c321c7409995d77c6f8769081f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_shannon, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Feb 25 07:16:53 np0005629333 systemd[1]: libpod-conmon-7c61ddf31b0a769831c9082ddfe7aab3d036c8c321c7409995d77c6f8769081f.scope: Deactivated successfully.
Feb 25 07:16:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:16:54 np0005629333 podman[255832]: 2026-02-25 12:16:54.374025971 +0000 UTC m=+0.050780673 container create 0123244da3e0f46a7d68e68fb6223667e5d3ec3485465b9fbbeba9557e6eafcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:16:54 np0005629333 systemd[1]: Started libpod-conmon-0123244da3e0f46a7d68e68fb6223667e5d3ec3485465b9fbbeba9557e6eafcf.scope.
Feb 25 07:16:54 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:16:54 np0005629333 podman[255832]: 2026-02-25 12:16:54.35147739 +0000 UTC m=+0.028232112 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:16:54 np0005629333 podman[255832]: 2026-02-25 12:16:54.461211776 +0000 UTC m=+0.137966548 container init 0123244da3e0f46a7d68e68fb6223667e5d3ec3485465b9fbbeba9557e6eafcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_meninsky, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:16:54 np0005629333 podman[255832]: 2026-02-25 12:16:54.46843189 +0000 UTC m=+0.145186632 container start 0123244da3e0f46a7d68e68fb6223667e5d3ec3485465b9fbbeba9557e6eafcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_meninsky, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 07:16:54 np0005629333 admiring_meninsky[255847]: 167 167
Feb 25 07:16:54 np0005629333 systemd[1]: libpod-0123244da3e0f46a7d68e68fb6223667e5d3ec3485465b9fbbeba9557e6eafcf.scope: Deactivated successfully.
Feb 25 07:16:54 np0005629333 podman[255832]: 2026-02-25 12:16:54.475186862 +0000 UTC m=+0.151941634 container attach 0123244da3e0f46a7d68e68fb6223667e5d3ec3485465b9fbbeba9557e6eafcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:16:54 np0005629333 podman[255832]: 2026-02-25 12:16:54.475548832 +0000 UTC m=+0.152303564 container died 0123244da3e0f46a7d68e68fb6223667e5d3ec3485465b9fbbeba9557e6eafcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 07:16:54 np0005629333 systemd[1]: var-lib-containers-storage-overlay-9405cb8c14c2c2416e661bb3c11eaf3fa6332d0282eb8cbb578211a4ca4b48d0-merged.mount: Deactivated successfully.
Feb 25 07:16:54 np0005629333 podman[255832]: 2026-02-25 12:16:54.580391103 +0000 UTC m=+0.257145805 container remove 0123244da3e0f46a7d68e68fb6223667e5d3ec3485465b9fbbeba9557e6eafcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Feb 25 07:16:54 np0005629333 systemd[1]: libpod-conmon-0123244da3e0f46a7d68e68fb6223667e5d3ec3485465b9fbbeba9557e6eafcf.scope: Deactivated successfully.
Feb 25 07:16:54 np0005629333 nova_compute[244014]: 2026-02-25 12:16:54.670 244018 DEBUG oslo_concurrency.lockutils [None req-f24e39fd-a010-4c76-badb-bf6884837486 3b5b46d7dac249e288d46ca1056e55e2 cbbbbb19c373469196c30db9a3ed4171 - - default default] Lock "29947b36-0ef7-49c6-974b-34891af8b99a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:16:54 np0005629333 podman[255871]: 2026-02-25 12:16:54.739398825 +0000 UTC m=+0.044898157 container create 5332afc43e69498459e5f762b696f1e21454ca1b1629ede7caf67a4bc4469533 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True)
Feb 25 07:16:54 np0005629333 systemd[1]: Started libpod-conmon-5332afc43e69498459e5f762b696f1e21454ca1b1629ede7caf67a4bc4469533.scope.
Feb 25 07:16:54 np0005629333 podman[255871]: 2026-02-25 12:16:54.715296238 +0000 UTC m=+0.020795570 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:16:54 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:16:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d32a60bbdd09c690d75aaf2935c3327c72f8a8f9f645b5d1db1b8ef06702f86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:16:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d32a60bbdd09c690d75aaf2935c3327c72f8a8f9f645b5d1db1b8ef06702f86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:16:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d32a60bbdd09c690d75aaf2935c3327c72f8a8f9f645b5d1db1b8ef06702f86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:16:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d32a60bbdd09c690d75aaf2935c3327c72f8a8f9f645b5d1db1b8ef06702f86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:16:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:55.001 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:16:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:55.002 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:16:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:55.003 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:16:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:55.027 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:16:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:16:55.028 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 07:16:55 np0005629333 nova_compute[244014]: 2026-02-25 12:16:55.028 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-20c8b8a1-1561-49f5-9fce-af2840195a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:16:55 np0005629333 nova_compute[244014]: 2026-02-25 12:16:55.029 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-20c8b8a1-1561-49f5-9fce-af2840195a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:16:55 np0005629333 nova_compute[244014]: 2026-02-25 12:16:55.030 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 07:16:55 np0005629333 nova_compute[244014]: 2026-02-25 12:16:55.030 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid 20c8b8a1-1561-49f5-9fce-af2840195a57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:16:55 np0005629333 nova_compute[244014]: 2026-02-25 12:16:55.033 244018 DEBUG nova.compute.manager [req-e158ef55-2958-4868-8626-304af300205a req-c14a229f-bea0-4f33-861e-df93c2897cda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Received event network-changed-5fae567b-e26e-4045-8efe-6d5fc03a1658 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:16:55 np0005629333 podman[255871]: 2026-02-25 12:16:55.034332973 +0000 UTC m=+0.339832325 container init 5332afc43e69498459e5f762b696f1e21454ca1b1629ede7caf67a4bc4469533 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_wiles, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 07:16:55 np0005629333 nova_compute[244014]: 2026-02-25 12:16:55.034 244018 DEBUG nova.compute.manager [req-e158ef55-2958-4868-8626-304af300205a req-c14a229f-bea0-4f33-861e-df93c2897cda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Refreshing instance network info cache due to event network-changed-5fae567b-e26e-4045-8efe-6d5fc03a1658. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:16:55 np0005629333 nova_compute[244014]: 2026-02-25 12:16:55.035 244018 DEBUG oslo_concurrency.lockutils [req-e158ef55-2958-4868-8626-304af300205a req-c14a229f-bea0-4f33-861e-df93c2897cda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-20c8b8a1-1561-49f5-9fce-af2840195a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:16:55 np0005629333 nova_compute[244014]: 2026-02-25 12:16:55.036 244018 INFO nova.compute.manager [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Rebuilding instance
Feb 25 07:16:55 np0005629333 nova_compute[244014]: 2026-02-25 12:16:55.042 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:16:55 np0005629333 podman[255871]: 2026-02-25 12:16:55.045176745 +0000 UTC m=+0.350676077 container start 5332afc43e69498459e5f762b696f1e21454ca1b1629ede7caf67a4bc4469533 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_wiles, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 07:16:55 np0005629333 podman[255871]: 2026-02-25 12:16:55.053854904 +0000 UTC m=+0.359354286 container attach 5332afc43e69498459e5f762b696f1e21454ca1b1629ede7caf67a4bc4469533 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_wiles, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]: {
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:    "0": [
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:        {
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "devices": [
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "/dev/loop3"
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            ],
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "lv_name": "ceph_lv0",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "lv_size": "21470642176",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "name": "ceph_lv0",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "tags": {
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.cluster_name": "ceph",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.crush_device_class": "",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.encrypted": "0",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.objectstore": "bluestore",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.osd_id": "0",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.type": "block",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.vdo": "0",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.with_tpm": "0"
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            },
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "type": "block",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "vg_name": "ceph_vg0"
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:        }
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:    ],
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:    "1": [
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:        {
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "devices": [
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "/dev/loop4"
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            ],
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "lv_name": "ceph_lv1",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "lv_size": "21470642176",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "name": "ceph_lv1",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "tags": {
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.cluster_name": "ceph",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.crush_device_class": "",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.encrypted": "0",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.objectstore": "bluestore",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.osd_id": "1",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.type": "block",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.vdo": "0",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.with_tpm": "0"
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            },
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "type": "block",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "vg_name": "ceph_vg1"
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:        }
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:    ],
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:    "2": [
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:        {
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "devices": [
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "/dev/loop5"
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            ],
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "lv_name": "ceph_lv2",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "lv_size": "21470642176",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "name": "ceph_lv2",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "tags": {
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.cluster_name": "ceph",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.crush_device_class": "",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.encrypted": "0",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.objectstore": "bluestore",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.osd_id": "2",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.type": "block",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.vdo": "0",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:                "ceph.with_tpm": "0"
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            },
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "type": "block",
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:            "vg_name": "ceph_vg2"
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:        }
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]:    ]
Feb 25 07:16:55 np0005629333 wizardly_wiles[255887]: }
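The JSON block emitted by wizardly_wiles is a ceph-volume lvm list report keyed by OSD id ("0", "1", "2"), each mapping to the logical volumes backing that OSD. A minimal sketch reducing it to an OSD-to-device table; the file name is illustrative, the field names are exactly those in the output above:

    import json

    # Map each OSD id from the ceph-volume report above to its LV and device.
    with open("lvm_list.json") as f:   # illustrative: the JSON captured above
        report = json.load(f)

    for osd_id in sorted(report, key=int):
        for lv in report[osd_id]:
            if lv["type"] == "block":
                print(osd_id, lv["lv_path"], ",".join(lv["devices"]))
    # -> 0 /dev/ceph_vg0/ceph_lv0 /dev/loop3, and so on for OSDs 1 and 2.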
Feb 25 07:16:55 np0005629333 systemd[1]: libpod-5332afc43e69498459e5f762b696f1e21454ca1b1629ede7caf67a4bc4469533.scope: Deactivated successfully.
Feb 25 07:16:55 np0005629333 podman[255871]: 2026-02-25 12:16:55.350499922 +0000 UTC m=+0.655999264 container died 5332afc43e69498459e5f762b696f1e21454ca1b1629ede7caf67a4bc4469533 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:16:55 np0005629333 nova_compute[244014]: 2026-02-25 12:16:55.486 244018 DEBUG nova.objects.instance [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lazy-loading 'trusted_certs' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:16:55 np0005629333 nova_compute[244014]: 2026-02-25 12:16:55.512 244018 DEBUG nova.compute.manager [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:16:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 286 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 232 op/s
Feb 25 07:16:55 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6d32a60bbdd09c690d75aaf2935c3327c72f8a8f9f645b5d1db1b8ef06702f86-merged.mount: Deactivated successfully.
Feb 25 07:16:55 np0005629333 podman[255871]: 2026-02-25 12:16:55.589869866 +0000 UTC m=+0.895369188 container remove 5332afc43e69498459e5f762b696f1e21454ca1b1629ede7caf67a4bc4469533 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_wiles, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:16:55 np0005629333 nova_compute[244014]: 2026-02-25 12:16:55.604 244018 DEBUG nova.objects.instance [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lazy-loading 'pci_requests' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:16:55 np0005629333 systemd[1]: libpod-conmon-5332afc43e69498459e5f762b696f1e21454ca1b1629ede7caf67a4bc4469533.scope: Deactivated successfully.
Feb 25 07:16:55 np0005629333 nova_compute[244014]: 2026-02-25 12:16:55.629 244018 DEBUG nova.objects.instance [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lazy-loading 'pci_devices' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:16:55 np0005629333 nova_compute[244014]: 2026-02-25 12:16:55.649 244018 DEBUG nova.objects.instance [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lazy-loading 'resources' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:16:55 np0005629333 nova_compute[244014]: 2026-02-25 12:16:55.665 244018 DEBUG nova.objects.instance [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lazy-loading 'migration_context' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:16:55 np0005629333 nova_compute[244014]: 2026-02-25 12:16:55.722 244018 DEBUG nova.objects.instance [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 25 07:16:55 np0005629333 nova_compute[244014]: 2026-02-25 12:16:55.726 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 07:16:55 np0005629333 nova_compute[244014]: 2026-02-25 12:16:55.809 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:16:56 np0005629333 podman[255967]: 2026-02-25 12:16:56.009163265 +0000 UTC m=+0.040604469 container create dde648f4d9c00a18e95741828b823da12344f2deb0e94a836dca24f7caf68437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chaum, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:16:56 np0005629333 systemd[1]: Started libpod-conmon-dde648f4d9c00a18e95741828b823da12344f2deb0e94a836dca24f7caf68437.scope.
Feb 25 07:16:56 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:16:56 np0005629333 podman[255967]: 2026-02-25 12:16:55.986283574 +0000 UTC m=+0.017724778 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:16:56 np0005629333 podman[255967]: 2026-02-25 12:16:56.091183286 +0000 UTC m=+0.122624550 container init dde648f4d9c00a18e95741828b823da12344f2deb0e94a836dca24f7caf68437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:16:56 np0005629333 podman[255967]: 2026-02-25 12:16:56.096521185 +0000 UTC m=+0.127962399 container start dde648f4d9c00a18e95741828b823da12344f2deb0e94a836dca24f7caf68437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chaum, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 07:16:56 np0005629333 relaxed_chaum[255984]: 167 167
Feb 25 07:16:56 np0005629333 systemd[1]: libpod-dde648f4d9c00a18e95741828b823da12344f2deb0e94a836dca24f7caf68437.scope: Deactivated successfully.
Feb 25 07:16:56 np0005629333 podman[255967]: 2026-02-25 12:16:56.102636877 +0000 UTC m=+0.134078141 container attach dde648f4d9c00a18e95741828b823da12344f2deb0e94a836dca24f7caf68437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chaum, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:16:56 np0005629333 podman[255967]: 2026-02-25 12:16:56.10339632 +0000 UTC m=+0.134837554 container died dde648f4d9c00a18e95741828b823da12344f2deb0e94a836dca24f7caf68437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:16:56 np0005629333 systemd[1]: var-lib-containers-storage-overlay-593d5eea1d4d5bbcd3258de0c51d54ba61e130d5675ec80af952a4cb724a1fb2-merged.mount: Deactivated successfully.
Feb 25 07:16:56 np0005629333 podman[255967]: 2026-02-25 12:16:56.172723893 +0000 UTC m=+0.204165097 container remove dde648f4d9c00a18e95741828b823da12344f2deb0e94a836dca24f7caf68437 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chaum, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 07:16:56 np0005629333 systemd[1]: libpod-conmon-dde648f4d9c00a18e95741828b823da12344f2deb0e94a836dca24f7caf68437.scope: Deactivated successfully.
Feb 25 07:16:56 np0005629333 podman[256007]: 2026-02-25 12:16:56.363432519 +0000 UTC m=+0.057626686 container create 836b9156475e8fb6feee1269f3a37e2365d066cb206af6173e85fb14f34ededa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ritchie, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:16:56 np0005629333 systemd[1]: Started libpod-conmon-836b9156475e8fb6feee1269f3a37e2365d066cb206af6173e85fb14f34ededa.scope.
Feb 25 07:16:56 np0005629333 podman[256007]: 2026-02-25 12:16:56.327477479 +0000 UTC m=+0.021671646 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:16:56 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:16:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7480b4215226c9aea03bff390d56ec8884825caa00d144d84aa77d6372f35518/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:16:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7480b4215226c9aea03bff390d56ec8884825caa00d144d84aa77d6372f35518/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:16:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7480b4215226c9aea03bff390d56ec8884825caa00d144d84aa77d6372f35518/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:16:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7480b4215226c9aea03bff390d56ec8884825caa00d144d84aa77d6372f35518/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:16:56 np0005629333 podman[256007]: 2026-02-25 12:16:56.495299672 +0000 UTC m=+0.189493879 container init 836b9156475e8fb6feee1269f3a37e2365d066cb206af6173e85fb14f34ededa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ritchie, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 07:16:56 np0005629333 podman[256007]: 2026-02-25 12:16:56.5049674 +0000 UTC m=+0.199161557 container start 836b9156475e8fb6feee1269f3a37e2365d066cb206af6173e85fb14f34ededa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ritchie, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 07:16:56 np0005629333 podman[256007]: 2026-02-25 12:16:56.511858255 +0000 UTC m=+0.206052422 container attach 836b9156475e8fb6feee1269f3a37e2365d066cb206af6173e85fb14f34ededa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ritchie, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 07:16:56 np0005629333 nova_compute[244014]: 2026-02-25 12:16:56.770 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Updating instance_info_cache with network_info: [{"id": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "address": "fa:16:3e:3b:e1:5a", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fae567b-e2", "ovs_interfaceid": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:16:56 np0005629333 nova_compute[244014]: 2026-02-25 12:16:56.797 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-20c8b8a1-1561-49f5-9fce-af2840195a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:16:56 np0005629333 nova_compute[244014]: 2026-02-25 12:16:56.797 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 07:16:56 np0005629333 nova_compute[244014]: 2026-02-25 12:16:56.798 244018 DEBUG oslo_concurrency.lockutils [req-e158ef55-2958-4868-8626-304af300205a req-c14a229f-bea0-4f33-861e-df93c2897cda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-20c8b8a1-1561-49f5-9fce-af2840195a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:16:56 np0005629333 nova_compute[244014]: 2026-02-25 12:16:56.798 244018 DEBUG nova.network.neutron [req-e158ef55-2958-4868-8626-304af300205a req-c14a229f-bea0-4f33-861e-df93c2897cda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Refreshing network info cache for port 5fae567b-e26e-4045-8efe-6d5fc03a1658 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:16:56 np0005629333 nova_compute[244014]: 2026-02-25 12:16:56.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:16:56 np0005629333 nova_compute[244014]: 2026-02-25 12:16:56.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 07:16:56 np0005629333 nova_compute[244014]: 2026-02-25 12:16:56.907 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 07:16:57 np0005629333 lvm[256104]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:16:57 np0005629333 lvm[256104]: VG ceph_vg0 finished
Feb 25 07:16:57 np0005629333 lvm[256106]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:16:57 np0005629333 lvm[256106]: VG ceph_vg1 finished
Feb 25 07:16:57 np0005629333 lvm[256107]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:16:57 np0005629333 lvm[256107]: VG ceph_vg2 finished
Feb 25 07:16:57 np0005629333 admiring_ritchie[256024]: {}
Feb 25 07:16:57 np0005629333 systemd[1]: libpod-836b9156475e8fb6feee1269f3a37e2365d066cb206af6173e85fb14f34ededa.scope: Deactivated successfully.
Feb 25 07:16:57 np0005629333 systemd[1]: libpod-836b9156475e8fb6feee1269f3a37e2365d066cb206af6173e85fb14f34ededa.scope: Consumed 1.056s CPU time.
Feb 25 07:16:57 np0005629333 podman[256110]: 2026-02-25 12:16:57.308984359 +0000 UTC m=+0.036821137 container died 836b9156475e8fb6feee1269f3a37e2365d066cb206af6173e85fb14f34ededa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ritchie, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 07:16:57 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7480b4215226c9aea03bff390d56ec8884825caa00d144d84aa77d6372f35518-merged.mount: Deactivated successfully.
Feb 25 07:16:57 np0005629333 podman[256110]: 2026-02-25 12:16:57.406582163 +0000 UTC m=+0.134418941 container remove 836b9156475e8fb6feee1269f3a37e2365d066cb206af6173e85fb14f34ededa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_ritchie, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 07:16:57 np0005629333 systemd[1]: libpod-conmon-836b9156475e8fb6feee1269f3a37e2365d066cb206af6173e85fb14f34ededa.scope: Deactivated successfully.
Feb 25 07:16:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:16:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:16:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:16:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:16:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 248 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 304 op/s
Feb 25 07:16:57 np0005629333 nova_compute[244014]: 2026-02-25 12:16:57.907 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:16:58 np0005629333 nova_compute[244014]: 2026-02-25 12:16:58.366 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:16:58 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:16:58 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:16:58 np0005629333 nova_compute[244014]: 2026-02-25 12:16:58.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:16:58 np0005629333 nova_compute[244014]: 2026-02-25 12:16:58.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:16:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:16:59 np0005629333 nova_compute[244014]: 2026-02-25 12:16:59.252 244018 DEBUG nova.network.neutron [req-e158ef55-2958-4868-8626-304af300205a req-c14a229f-bea0-4f33-861e-df93c2897cda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Updated VIF entry in instance network info cache for port 5fae567b-e26e-4045-8efe-6d5fc03a1658. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 07:16:59 np0005629333 nova_compute[244014]: 2026-02-25 12:16:59.253 244018 DEBUG nova.network.neutron [req-e158ef55-2958-4868-8626-304af300205a req-c14a229f-bea0-4f33-861e-df93c2897cda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Updating instance_info_cache with network_info: [{"id": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "address": "fa:16:3e:3b:e1:5a", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fae567b-e2", "ovs_interfaceid": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:16:59 np0005629333 nova_compute[244014]: 2026-02-25 12:16:59.322 244018 DEBUG oslo_concurrency.lockutils [req-e158ef55-2958-4868-8626-304af300205a req-c14a229f-bea0-4f33-861e-df93c2897cda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-20c8b8a1-1561-49f5-9fce-af2840195a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:16:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 248 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 266 op/s
Feb 25 07:16:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:16:59Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3b:e1:5a 10.100.0.8
Feb 25 07:16:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:16:59Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3b:e1:5a 10.100.0.8
Feb 25 07:16:59 np0005629333 nova_compute[244014]: 2026-02-25 12:16:59.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:16:59 np0005629333 nova_compute[244014]: 2026-02-25 12:16:59.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:16:59 np0005629333 nova_compute[244014]: 2026-02-25 12:16:59.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:16:59 np0005629333 nova_compute[244014]: 2026-02-25 12:16:59.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:17:00 np0005629333 nova_compute[244014]: 2026-02-25 12:17:00.811 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:17:00 np0005629333 nova_compute[244014]: 2026-02-25 12:17:00.985 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:17:00 np0005629333 nova_compute[244014]: 2026-02-25 12:17:00.985 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 07:17:00 np0005629333 nova_compute[244014]: 2026-02-25 12:17:00.986 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:17:01 np0005629333 nova_compute[244014]: 2026-02-25 12:17:01.031 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:17:01 np0005629333 nova_compute[244014]: 2026-02-25 12:17:01.032 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:17:01 np0005629333 nova_compute[244014]: 2026-02-25 12:17:01.032 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:17:01 np0005629333 nova_compute[244014]: 2026-02-25 12:17:01.033 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 07:17:01 np0005629333 nova_compute[244014]: 2026-02-25 12:17:01.033 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:17:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 262 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 2.5 MiB/s wr, 292 op/s
Feb 25 07:17:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:17:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:17:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:17:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:17:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:17:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:17:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:17:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3217483231' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:17:01 np0005629333 nova_compute[244014]: 2026-02-25 12:17:01.640 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:17:01 np0005629333 nova_compute[244014]: 2026-02-25 12:17:01.737 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:17:01 np0005629333 nova_compute[244014]: 2026-02-25 12:17:01.738 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:17:01 np0005629333 nova_compute[244014]: 2026-02-25 12:17:01.741 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:17:01 np0005629333 nova_compute[244014]: 2026-02-25 12:17:01.741 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:17:01 np0005629333 nova_compute[244014]: 2026-02-25 12:17:01.741 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:17:01 np0005629333 nova_compute[244014]: 2026-02-25 12:17:01.919 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:17:01 np0005629333 nova_compute[244014]: 2026-02-25 12:17:01.920 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4401MB free_disk=59.946292605251074GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 07:17:01 np0005629333 nova_compute[244014]: 2026-02-25 12:17:01.920 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:17:01 np0005629333 nova_compute[244014]: 2026-02-25 12:17:01.921 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:17:02 np0005629333 nova_compute[244014]: 2026-02-25 12:17:02.167 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 20c8b8a1-1561-49f5-9fce-af2840195a57 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 07:17:02 np0005629333 nova_compute[244014]: 2026-02-25 12:17:02.167 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance db0fc9fa-1fc0-4334-96f9-2205fa53e308 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 07:17:02 np0005629333 nova_compute[244014]: 2026-02-25 12:17:02.167 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 07:17:02 np0005629333 nova_compute[244014]: 2026-02-25 12:17:02.168 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 07:17:02 np0005629333 nova_compute[244014]: 2026-02-25 12:17:02.336 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:17:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:17:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3427421751' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:17:02 np0005629333 nova_compute[244014]: 2026-02-25 12:17:02.899 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:17:02 np0005629333 nova_compute[244014]: 2026-02-25 12:17:02.905 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:17:02 np0005629333 nova_compute[244014]: 2026-02-25 12:17:02.942 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:17:03 np0005629333 nova_compute[244014]: 2026-02-25 12:17:03.159 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 07:17:03 np0005629333 nova_compute[244014]: 2026-02-25 12:17:03.160 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.239s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:17:03 np0005629333 nova_compute[244014]: 2026-02-25 12:17:03.161 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:17:03 np0005629333 nova_compute[244014]: 2026-02-25 12:17:03.161 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 07:17:03 np0005629333 nova_compute[244014]: 2026-02-25 12:17:03.369 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:17:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 282 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 3.8 MiB/s wr, 295 op/s
Feb 25 07:17:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:17:04 np0005629333 nova_compute[244014]: 2026-02-25 12:17:04.347 244018 DEBUG oslo_concurrency.lockutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "77af7d73-f695-47b4-8ec1-98a3672ff8d8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:17:04 np0005629333 nova_compute[244014]: 2026-02-25 12:17:04.348 244018 DEBUG oslo_concurrency.lockutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "77af7d73-f695-47b4-8ec1-98a3672ff8d8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:17:04 np0005629333 nova_compute[244014]: 2026-02-25 12:17:04.421 244018 DEBUG nova.compute.manager [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:17:04 np0005629333 nova_compute[244014]: 2026-02-25 12:17:04.498 244018 DEBUG oslo_concurrency.lockutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:17:04 np0005629333 nova_compute[244014]: 2026-02-25 12:17:04.499 244018 DEBUG oslo_concurrency.lockutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:17:04 np0005629333 nova_compute[244014]: 2026-02-25 12:17:04.507 244018 DEBUG nova.virt.hardware [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:17:04 np0005629333 nova_compute[244014]: 2026-02-25 12:17:04.508 244018 INFO nova.compute.claims [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:17:04 np0005629333 nova_compute[244014]: 2026-02-25 12:17:04.649 244018 DEBUG oslo_concurrency.processutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:17:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:05.031 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:17:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:17:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1836423578' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:17:05 np0005629333 nova_compute[244014]: 2026-02-25 12:17:05.204 244018 DEBUG oslo_concurrency.processutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:17:05 np0005629333 nova_compute[244014]: 2026-02-25 12:17:05.212 244018 DEBUG nova.compute.provider_tree [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:17:05 np0005629333 nova_compute[244014]: 2026-02-25 12:17:05.255 244018 DEBUG nova.scheduler.client.report [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:17:05 np0005629333 nova_compute[244014]: 2026-02-25 12:17:05.297 244018 DEBUG oslo_concurrency.lockutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:17:05 np0005629333 nova_compute[244014]: 2026-02-25 12:17:05.299 244018 DEBUG nova.compute.manager [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:17:05 np0005629333 nova_compute[244014]: 2026-02-25 12:17:05.421 244018 DEBUG nova.compute.manager [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:17:05 np0005629333 nova_compute[244014]: 2026-02-25 12:17:05.421 244018 DEBUG nova.network.neutron [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:17:05 np0005629333 nova_compute[244014]: 2026-02-25 12:17:05.487 244018 INFO nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:17:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 282 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 147 op/s
Feb 25 07:17:05 np0005629333 nova_compute[244014]: 2026-02-25 12:17:05.537 244018 DEBUG nova.compute.manager [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:17:05 np0005629333 nova_compute[244014]: 2026-02-25 12:17:05.659 244018 DEBUG nova.compute.manager [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:17:05 np0005629333 nova_compute[244014]: 2026-02-25 12:17:05.661 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:17:05 np0005629333 nova_compute[244014]: 2026-02-25 12:17:05.662 244018 INFO nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Creating image(s)#033[00m
Feb 25 07:17:05 np0005629333 nova_compute[244014]: 2026-02-25 12:17:05.737 244018 DEBUG nova.storage.rbd_utils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] rbd image 77af7d73-f695-47b4-8ec1-98a3672ff8d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:17:05 np0005629333 nova_compute[244014]: 2026-02-25 12:17:05.762 244018 DEBUG nova.storage.rbd_utils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] rbd image 77af7d73-f695-47b4-8ec1-98a3672ff8d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:17:05 np0005629333 nova_compute[244014]: 2026-02-25 12:17:05.785 244018 DEBUG nova.storage.rbd_utils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] rbd image 77af7d73-f695-47b4-8ec1-98a3672ff8d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:17:05 np0005629333 nova_compute[244014]: 2026-02-25 12:17:05.788 244018 DEBUG oslo_concurrency.processutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:17:05 np0005629333 nova_compute[244014]: 2026-02-25 12:17:05.809 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Feb 25 07:17:05 np0005629333 nova_compute[244014]: 2026-02-25 12:17:05.814 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:17:05 np0005629333 nova_compute[244014]: 2026-02-25 12:17:05.818 244018 DEBUG nova.network.neutron [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Feb 25 07:17:05 np0005629333 nova_compute[244014]: 2026-02-25 12:17:05.818 244018 DEBUG nova.compute.manager [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:17:05 np0005629333 nova_compute[244014]: 2026-02-25 12:17:05.869 244018 DEBUG oslo_concurrency.processutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:17:05 np0005629333 nova_compute[244014]: 2026-02-25 12:17:05.870 244018 DEBUG oslo_concurrency.lockutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:17:05 np0005629333 nova_compute[244014]: 2026-02-25 12:17:05.870 244018 DEBUG oslo_concurrency.lockutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:17:05 np0005629333 nova_compute[244014]: 2026-02-25 12:17:05.870 244018 DEBUG oslo_concurrency.lockutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:17:05 np0005629333 nova_compute[244014]: 2026-02-25 12:17:05.896 244018 DEBUG nova.storage.rbd_utils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] rbd image 77af7d73-f695-47b4-8ec1-98a3672ff8d8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:17:05 np0005629333 nova_compute[244014]: 2026-02-25 12:17:05.900 244018 DEBUG oslo_concurrency.processutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 77af7d73-f695-47b4-8ec1-98a3672ff8d8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.186 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.598 244018 DEBUG oslo_concurrency.processutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 77af7d73-f695-47b4-8ec1-98a3672ff8d8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.699s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.660 244018 DEBUG nova.storage.rbd_utils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] resizing rbd image 77af7d73-f695-47b4-8ec1-98a3672ff8d8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
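With the base image cached, the root disk is created by importing the raw base into the vms pool and growing it to the flavor's root size (1073741824 bytes = 1 GiB, matching root_gb=1 on m1.nano). A sketch of the same two steps; the import command is copied from the log, and the resize assumes the python-rbd bindings that nova.storage.rbd_utils uses:

    import subprocess
    import rados
    import rbd

    base = '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'
    name = '77af7d73-f695-47b4-8ec1-98a3672ff8d8_disk'

    # Import the cached raw base image (command copied from the log).
    subprocess.run(['rbd', 'import', '--pool', 'vms', base, name,
                    '--image-format=2', '--id', 'openstack',
                    '--conf', '/etc/ceph/ceph.conf'], check=True)

    # Grow the disk to the flavor's root size.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, name) as image:
                image.resize(1 * 1024 ** 3)  # 1073741824 bytes, as logged
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()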
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.752 244018 DEBUG nova.objects.instance [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lazy-loading 'migration_context' on Instance uuid 77af7d73-f695-47b4-8ec1-98a3672ff8d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.794 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.795 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Ensure instance console log exists: /var/lib/nova/instances/77af7d73-f695-47b4-8ec1-98a3672ff8d8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.795 244018 DEBUG oslo_concurrency.lockutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.796 244018 DEBUG oslo_concurrency.lockutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.796 244018 DEBUG oslo_concurrency.lockutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.797 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.803 244018 WARNING nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.808 244018 DEBUG nova.virt.libvirt.host [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.809 244018 DEBUG nova.virt.libvirt.host [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.811 244018 DEBUG nova.virt.libvirt.host [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.812 244018 DEBUG nova.virt.libvirt.host [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.812 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.813 244018 DEBUG nova.virt.hardware [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.813 244018 DEBUG nova.virt.hardware [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.813 244018 DEBUG nova.virt.hardware [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.814 244018 DEBUG nova.virt.hardware [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.814 244018 DEBUG nova.virt.hardware [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.814 244018 DEBUG nova.virt.hardware [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.815 244018 DEBUG nova.virt.hardware [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.815 244018 DEBUG nova.virt.hardware [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.815 244018 DEBUG nova.virt.hardware [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.815 244018 DEBUG nova.virt.hardware [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.816 244018 DEBUG nova.virt.hardware [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
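The topology entries above show the selection inputs: no flavor or image preference (0:0:0) and effectively unbounded limits (65536 per dimension), so one vCPU can only be laid out as sockets=1, cores=1, threads=1. An illustrative brute-force enumeration of valid layouts, not nova's exact algorithm:

    # Illustrative only: enumerate the sockets x cores x threads layouts
    # that multiply out to the vCPU count within the per-dimension limits.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            for cores in range(1, min(vcpus, max_cores) + 1):
                for threads in range(1, min(vcpus, max_threads) + 1):
                    if sockets * cores * threads == vcpus:
                        yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], the single choice logged
    print(list(possible_topologies(4)))  # (1, 1, 4), (1, 2, 2), (1, 4, 1), ...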
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.819 244018 DEBUG oslo_concurrency.processutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.848 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021811.7544844, 29947b36-0ef7-49c6-974b-34891af8b99a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.849 244018 INFO nova.compute.manager [-] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] VM Stopped (Lifecycle Event)
Feb 25 07:17:06 np0005629333 nova_compute[244014]: 2026-02-25 12:17:06.868 244018 DEBUG nova.compute.manager [None req-b72a3521-8341-436f-ba66-7b4e8449bcc8 - - - - - -] [instance: 29947b36-0ef7-49c6-974b-34891af8b99a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:17:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:17:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/640942291' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:17:07 np0005629333 nova_compute[244014]: 2026-02-25 12:17:07.326 244018 DEBUG oslo_concurrency.processutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:17:07 np0005629333 nova_compute[244014]: 2026-02-25 12:17:07.346 244018 DEBUG nova.storage.rbd_utils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] rbd image 77af7d73-f695-47b4-8ec1-98a3672ff8d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:17:07 np0005629333 nova_compute[244014]: 2026-02-25 12:17:07.349 244018 DEBUG oslo_concurrency.processutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
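The ceph mon dump calls are how the RBD backend discovers monitor addresses; they become the <host> elements in the guest disk XML rendered below (a single mon at 192.168.122.100:6789 on this deployment). A sketch of the same lookup; the "mons"/"addr" JSON field names match current Ceph output but should be treated as an assumption rather than a stable contract:

    import json
    import subprocess

    # Command copied from the log.
    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    monmap = json.loads(out)
    # "addr" looks like "192.168.122.100:6789/0"; drop the /nonce suffix.
    hosts = [mon['addr'].split('/')[0] for mon in monmap['mons']]
    print(hosts)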
Feb 25 07:17:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 353 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.6 MiB/s wr, 219 op/s
Feb 25 07:17:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:17:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/357481479' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:17:07 np0005629333 nova_compute[244014]: 2026-02-25 12:17:07.947 244018 DEBUG oslo_concurrency.processutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:17:07 np0005629333 nova_compute[244014]: 2026-02-25 12:17:07.948 244018 DEBUG nova.objects.instance [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 77af7d73-f695-47b4-8ec1-98a3672ff8d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:17:07 np0005629333 nova_compute[244014]: 2026-02-25 12:17:07.964 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:17:07 np0005629333 nova_compute[244014]:  <uuid>77af7d73-f695-47b4-8ec1-98a3672ff8d8</uuid>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:  <name>instance-0000000b</name>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServersAdminNegativeTestJSON-server-344510811</nova:name>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:17:06</nova:creationTime>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:17:07 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:        <nova:user uuid="f653b2ec5be5483092804fcd55beab49">tempest-ServersAdminNegativeTestJSON-49539203-project-member</nova:user>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:        <nova:project uuid="2805044e788543068625873119e58bd0">tempest-ServersAdminNegativeTestJSON-49539203</nova:project>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      <nova:ports/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      <entry name="serial">77af7d73-f695-47b4-8ec1-98a3672ff8d8</entry>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      <entry name="uuid">77af7d73-f695-47b4-8ec1-98a3672ff8d8</entry>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/77af7d73-f695-47b4-8ec1-98a3672ff8d8_disk">
Feb 25 07:17:07 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:17:07 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/77af7d73-f695-47b4-8ec1-98a3672ff8d8_disk.config">
Feb 25 07:17:07 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:17:07 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/77af7d73-f695-47b4-8ec1-98a3672ff8d8/console.log" append="off"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:17:07 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:17:07 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:17:07 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:17:07 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
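Once rendered, the XML above is handed to libvirt to define and boot the guest; the "Started Virtual Machine qemu-11-instance-0000000b" systemd entry further down is the visible result. A minimal libvirt-python sketch of that hand-off, assuming the domain XML has been saved to a local file:

    import libvirt

    # The <domain> document printed above, saved to a file for this sketch.
    with open('instance-0000000b.xml') as f:
        xml = f.read()

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)   # persists the domain definition
        dom.create()                # boots the guest; systemd-machined then
                                    # registers machine qemu-11-instance-0000000b
        print(dom.name(), dom.ID())
    finally:
        conn.close()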
Feb 25 07:17:08 np0005629333 nova_compute[244014]: 2026-02-25 12:17:08.016 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:17:08 np0005629333 nova_compute[244014]: 2026-02-25 12:17:08.016 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:17:08 np0005629333 nova_compute[244014]: 2026-02-25 12:17:08.016 244018 INFO nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Using config drive
Feb 25 07:17:08 np0005629333 nova_compute[244014]: 2026-02-25 12:17:08.038 244018 DEBUG nova.storage.rbd_utils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] rbd image 77af7d73-f695-47b4-8ec1-98a3672ff8d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:17:08 np0005629333 nova_compute[244014]: 2026-02-25 12:17:08.372 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:17:08 np0005629333 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Feb 25 07:17:08 np0005629333 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 12.363s CPU time.
Feb 25 07:17:08 np0005629333 systemd-machined[210048]: Machine qemu-10-instance-0000000a terminated.
Feb 25 07:17:08 np0005629333 nova_compute[244014]: 2026-02-25 12:17:08.841 244018 INFO nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Creating config drive at /var/lib/nova/instances/77af7d73-f695-47b4-8ec1-98a3672ff8d8/disk.config
Feb 25 07:17:08 np0005629333 nova_compute[244014]: 2026-02-25 12:17:08.848 244018 DEBUG oslo_concurrency.processutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/77af7d73-f695-47b4-8ec1-98a3672ff8d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpe8pyo8xz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:17:08 np0005629333 nova_compute[244014]: 2026-02-25 12:17:08.886 244018 INFO nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance shutdown successfully after 13 seconds.
Feb 25 07:17:08 np0005629333 nova_compute[244014]: 2026-02-25 12:17:08.894 244018 INFO nova.virt.libvirt.driver [-] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance destroyed successfully.
Feb 25 07:17:08 np0005629333 nova_compute[244014]: 2026-02-25 12:17:08.903 244018 INFO nova.virt.libvirt.driver [-] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance destroyed successfully.
Feb 25 07:17:08 np0005629333 nova_compute[244014]: 2026-02-25 12:17:08.978 244018 DEBUG oslo_concurrency.processutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/77af7d73-f695-47b4-8ec1-98a3672ff8d8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpe8pyo8xz" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
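The config drive is a small ISO 9660 filesystem labeled config-2, built from a staging directory of metadata files with exactly the mkisofs flags logged above. A sketch; the openstack/latest/meta_data.json layout and the trimmed publisher string are assumptions kept minimal for brevity:

    import json
    import pathlib
    import subprocess
    import tempfile

    with tempfile.TemporaryDirectory() as staging:
        # Minimal config-drive payload; the real tree also carries network
        # data, user data, and so on.
        latest = pathlib.Path(staging, 'openstack', 'latest')
        latest.mkdir(parents=True)
        (latest / 'meta_data.json').write_text(json.dumps(
            {'uuid': '77af7d73-f695-47b4-8ec1-98a3672ff8d8'}))

        # Flags copied from the logged command; "-V config-2" is the
        # volume label guests probe for.
        subprocess.run(
            ['/usr/bin/mkisofs', '-o', 'disk.config',
             '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
             '-publisher', 'OpenStack Compute', '-quiet', '-J', '-r',
             '-V', 'config-2', staging],
            check=True)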
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.008 244018 DEBUG nova.storage.rbd_utils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] rbd image 77af7d73-f695-47b4-8ec1-98a3672ff8d8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.012 244018 DEBUG oslo_concurrency.processutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/77af7d73-f695-47b4-8ec1-98a3672ff8d8/disk.config 77af7d73-f695-47b4-8ec1-98a3672ff8d8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.156 244018 DEBUG oslo_concurrency.processutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/77af7d73-f695-47b4-8ec1-98a3672ff8d8/disk.config 77af7d73-f695-47b4-8ec1-98a3672ff8d8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.159 244018 INFO nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Deleting local config drive /var/lib/nova/instances/77af7d73-f695-47b4-8ec1-98a3672ff8d8/disk.config because it was imported into RBD.
Feb 25 07:17:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:17:09 np0005629333 systemd-machined[210048]: New machine qemu-11-instance-0000000b.
Feb 25 07:17:09 np0005629333 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.340 244018 INFO nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Deleting instance files /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308_del
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.341 244018 INFO nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Deletion of /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308_del complete
Feb 25 07:17:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 353 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 591 KiB/s rd, 5.6 MiB/s wr, 147 op/s
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.555 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.556 244018 INFO nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Creating image(s)
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.592 244018 DEBUG nova.storage.rbd_utils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.623 244018 DEBUG nova.storage.rbd_utils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.650 244018 DEBUG nova.storage.rbd_utils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.653 244018 DEBUG oslo_concurrency.lockutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Acquiring lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.654 244018 DEBUG oslo_concurrency.lockutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.657 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021829.5782044, 77af7d73-f695-47b4-8ec1-98a3672ff8d8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.658 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] VM Resumed (Lifecycle Event)
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.662 244018 DEBUG nova.compute.manager [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.662 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.667 244018 INFO nova.virt.libvirt.driver [-] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Instance spawned successfully.
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.667 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.679 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.684 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
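The "Synchronizing instance power state" entry compares the power state recorded in the database (0, NOSTATE) with what the hypervisor reports (1, RUNNING); because task_state is still spawning, the handler skips the sync, as logged a few lines below. A simplified sketch of the libvirt-to-nova state mapping behind the _get_power_state checks; nova's real table also covers the BLOCKED, SHUTDOWN, and SHUTOFF variants:

    import libvirt

    # Simplified mapping; the numeric codes follow nova.compute.power_state
    # (0=NOSTATE, 1=RUNNING, 3=PAUSED, 4=SHUTDOWN, 6=CRASHED, 7=SUSPENDED).
    LIBVIRT_TO_POWER_STATE = {
        libvirt.VIR_DOMAIN_RUNNING: 1,
        libvirt.VIR_DOMAIN_PAUSED: 3,
        libvirt.VIR_DOMAIN_SHUTOFF: 4,
        libvirt.VIR_DOMAIN_CRASHED: 6,
        libvirt.VIR_DOMAIN_PMSUSPENDED: 7,
    }

    def get_power_state(dom):
        """Return a nova-style power_state for a libvirt domain."""
        state, _reason = dom.state()
        return LIBVIRT_TO_POWER_STATE.get(state, 0)  # 0 = NOSTATE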
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.689 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.690 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.690 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.691 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.691 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.692 244018 DEBUG nova.virt.libvirt.driver [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.716 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.716 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021829.5783021, 77af7d73-f695-47b4-8ec1-98a3672ff8d8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.717 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] VM Started (Lifecycle Event)
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.784 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.787 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.792 244018 INFO nova.compute.manager [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Took 4.13 seconds to spawn the instance on the hypervisor.
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.792 244018 DEBUG nova.compute.manager [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.812 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.857 244018 INFO nova.compute.manager [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Took 5.38 seconds to build instance.
Feb 25 07:17:09 np0005629333 nova_compute[244014]: 2026-02-25 12:17:09.874 244018 DEBUG oslo_concurrency.lockutils [None req-369b0a68-e57c-4dba-9681-55b31b1ec12f f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "77af7d73-f695-47b4-8ec1-98a3672ff8d8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:17:10 np0005629333 nova_compute[244014]: 2026-02-25 12:17:10.027 244018 DEBUG nova.virt.libvirt.imagebackend [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Image locations are: [{'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/f0ef5a9a-23b8-4883-8e47-feb7403a11d8/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/f0ef5a9a-23b8-4883-8e47-feb7403a11d8/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
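The "Image locations" entry is the clone decision point: a location of the form rbd://<fsid>/<pool>/<image>/<snapshot> in the local cluster (fsid 8ac33163-...) can be copy-on-write cloned instead of downloaded, but only for raw images; this image is qcow2, so the lines that follow take the download-and-convert path instead. A simplified parser for that URL form:

    # Simplified parser for Glance RBD locations of the form
    # rbd://<fsid>/<pool>/<image>/<snapshot>; URL-escaping is ignored.
    def parse_rbd_url(url):
        prefix = 'rbd://'
        if not url.startswith(prefix):
            raise ValueError('not an RBD location: %s' % url)
        fsid, pool, image, snap = url[len(prefix):].split('/')
        return fsid, pool, image, snap

    print(parse_rbd_url(
        'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/'
        'f0ef5a9a-23b8-4883-8e47-feb7403a11d8/snap'))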
Feb 25 07:17:11 np0005629333 nova_compute[244014]: 2026-02-25 12:17:11.061 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:17:11 np0005629333 nova_compute[244014]: 2026-02-25 12:17:11.258 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:17:11 np0005629333 nova_compute[244014]: 2026-02-25 12:17:11.338 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538.part --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:17:11 np0005629333 nova_compute[244014]: 2026-02-25 12:17:11.339 244018 DEBUG nova.virt.images [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] f0ef5a9a-23b8-4883-8e47-feb7403a11d8 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Feb 25 07:17:11 np0005629333 nova_compute[244014]: 2026-02-25 12:17:11.343 244018 DEBUG nova.privsep.utils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Feb 25 07:17:11 np0005629333 nova_compute[244014]: 2026-02-25 12:17:11.343 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538.part /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:17:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 342 MiB data, 430 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 6.1 MiB/s wr, 194 op/s
Feb 25 07:17:11 np0005629333 nova_compute[244014]: 2026-02-25 12:17:11.563 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538.part /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538.converted" returned: 0 in 0.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:17:11 np0005629333 nova_compute[244014]: 2026-02-25 12:17:11.568 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:17:11 np0005629333 nova_compute[244014]: 2026-02-25 12:17:11.635 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538.converted --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
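These entries complete the fetch_to_raw sequence: the downloaded .part file is inspected, converted to raw with host caching disabled (-t none, which is safe here because the log just above confirmed /var/lib/nova/instances supports direct I/O), and the result is inspected again before being moved into the cache. A sketch of the same inspect-convert-verify loop, with paths shortened for clarity:

    import json
    import subprocess

    def img_info(path):
        out = subprocess.run(
            ['qemu-img', 'info', path, '--force-share', '--output=json'],
            check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    src = 'd54266c9ce37b98d8a911b5ac30e52735f3ff538.part'
    dst = 'd54266c9ce37b98d8a911b5ac30e52735f3ff538.converted'

    # "-t none" bypasses the host page cache during the conversion.
    if img_info(src)['format'] == 'qcow2':
        subprocess.run(['qemu-img', 'convert', '-t', 'none', '-O', 'raw',
                        '-f', 'qcow2', src, dst], check=True)
        assert img_info(dst)['format'] == 'raw'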
Feb 25 07:17:11 np0005629333 nova_compute[244014]: 2026-02-25 12:17:11.637 244018 DEBUG oslo_concurrency.lockutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.983s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:17:11 np0005629333 nova_compute[244014]: 2026-02-25 12:17:11.669 244018 DEBUG nova.storage.rbd_utils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:17:11 np0005629333 nova_compute[244014]: 2026-02-25 12:17:11.673 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:17:11 np0005629333 nova_compute[244014]: 2026-02-25 12:17:11.945 244018 DEBUG oslo_concurrency.lockutils [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "20c8b8a1-1561-49f5-9fce-af2840195a57" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:17:11 np0005629333 nova_compute[244014]: 2026-02-25 12:17:11.945 244018 DEBUG oslo_concurrency.lockutils [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:17:11 np0005629333 nova_compute[244014]: 2026-02-25 12:17:11.946 244018 DEBUG oslo_concurrency.lockutils [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:17:11 np0005629333 nova_compute[244014]: 2026-02-25 12:17:11.946 244018 DEBUG oslo_concurrency.lockutils [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:17:11 np0005629333 nova_compute[244014]: 2026-02-25 12:17:11.947 244018 DEBUG oslo_concurrency.lockutils [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:17:11 np0005629333 nova_compute[244014]: 2026-02-25 12:17:11.949 244018 INFO nova.compute.manager [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Terminating instance
Feb 25 07:17:11 np0005629333 nova_compute[244014]: 2026-02-25 12:17:11.955 244018 DEBUG nova.compute.manager [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 07:17:11 np0005629333 nova_compute[244014]: 2026-02-25 12:17:11.956 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.045 244018 DEBUG nova.storage.rbd_utils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] resizing rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.128 244018 DEBUG oslo_concurrency.lockutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "9f78f954-3472-4631-a0ae-b945b9c26f5a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.128 244018 DEBUG oslo_concurrency.lockutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "9f78f954-3472-4631-a0ae-b945b9c26f5a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.144 244018 DEBUG nova.compute.manager [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:17:12 np0005629333 kernel: tap5fae567b-e2 (unregistering): left promiscuous mode
Feb 25 07:17:12 np0005629333 NetworkManager[49836]: <info>  [1772021832.2249] device (tap5fae567b-e2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.224 244018 DEBUG oslo_concurrency.lockutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.224 244018 DEBUG oslo_concurrency.lockutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.230 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:17:12 np0005629333 ovn_controller[147040]: 2026-02-25T12:17:12Z|00041|binding|INFO|Releasing lport 5fae567b-e26e-4045-8efe-6d5fc03a1658 from this chassis (sb_readonly=0)
Feb 25 07:17:12 np0005629333 ovn_controller[147040]: 2026-02-25T12:17:12Z|00042|binding|INFO|Setting lport 5fae567b-e26e-4045-8efe-6d5fc03a1658 down in Southbound
Feb 25 07:17:12 np0005629333 ovn_controller[147040]: 2026-02-25T12:17:12Z|00043|binding|INFO|Removing iface tap5fae567b-e2 ovn-installed in OVS
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.232 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.242 244018 DEBUG nova.virt.hardware [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.243 244018 INFO nova.compute.claims [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:17:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:12.242 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:e1:5a 10.100.0.8'], port_security=['fa:16:3e:3b:e1:5a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '20c8b8a1-1561-49f5-9fce-af2840195a57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0cd0968a9a1b4b9e984b0a10a6ac77a8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f43142d8-27de-490e-9b86-d4e77c9c7e07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad5d1aaf-d2e8-49e5-aeb2-335f304b15d9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=5fae567b-e26e-4045-8efe-6d5fc03a1658) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:17:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:12.245 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 5fae567b-e26e-4045-8efe-6d5fc03a1658 in datapath d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 unbound from our chassis#033[00m
Feb 25 07:17:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:12.248 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d3fb36f1-0e88-43b4-a8a4-3844d55f1de8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:17:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:12.250 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[459194ea-d2c4-4c26-a111-f49e7a0b23d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:17:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:12.252 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 namespace which is not needed anymore#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.252 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:17:12 np0005629333 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Feb 25 07:17:12 np0005629333 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 12.629s CPU time.
Feb 25 07:17:12 np0005629333 systemd-machined[210048]: Machine qemu-8-instance-00000008 terminated.
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.340 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.341 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Ensure instance console log exists: /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.341 244018 DEBUG oslo_concurrency.lockutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.341 244018 DEBUG oslo_concurrency.lockutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.341 244018 DEBUG oslo_concurrency.lockutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.342 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.345 244018 WARNING nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.353 244018 DEBUG nova.virt.libvirt.host [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.353 244018 DEBUG nova.virt.libvirt.host [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.355 244018 DEBUG nova.virt.libvirt.host [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.356 244018 DEBUG nova.virt.libvirt.host [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.356 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.356 244018 DEBUG nova.virt.hardware [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.356 244018 DEBUG nova.virt.hardware [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.357 244018 DEBUG nova.virt.hardware [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.357 244018 DEBUG nova.virt.hardware [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.357 244018 DEBUG nova.virt.hardware [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.357 244018 DEBUG nova.virt.hardware [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.357 244018 DEBUG nova.virt.hardware [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.357 244018 DEBUG nova.virt.hardware [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.357 244018 DEBUG nova.virt.hardware [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.358 244018 DEBUG nova.virt.hardware [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.358 244018 DEBUG nova.virt.hardware [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.358 244018 DEBUG nova.objects.instance [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lazy-loading 'vcpu_model' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.372 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:17:12 np0005629333 neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8[255138]: [NOTICE]   (255169) : haproxy version is 2.8.14-c23fe91
Feb 25 07:17:12 np0005629333 neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8[255138]: [NOTICE]   (255169) : path to executable is /usr/sbin/haproxy
Feb 25 07:17:12 np0005629333 neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8[255138]: [WARNING]  (255169) : Exiting Master process...
Feb 25 07:17:12 np0005629333 neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8[255138]: [ALERT]    (255169) : Current worker (255192) exited with code 143 (Terminated)
Feb 25 07:17:12 np0005629333 neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8[255138]: [WARNING]  (255169) : All workers exited. Exiting... (0)
Feb 25 07:17:12 np0005629333 systemd[1]: libpod-2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e.scope: Deactivated successfully.
Feb 25 07:17:12 np0005629333 conmon[255138]: conmon 2f771423fea679a9ed4a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e.scope/container/memory.events
Feb 25 07:17:12 np0005629333 podman[256782]: 2026-02-25 12:17:12.394188623 +0000 UTC m=+0.049094082 container died 2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.424 244018 INFO nova.virt.libvirt.driver [-] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Instance destroyed successfully.#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.428 244018 DEBUG nova.objects.instance [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lazy-loading 'resources' on Instance uuid 20c8b8a1-1561-49f5-9fce-af2840195a57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:17:12 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e-userdata-shm.mount: Deactivated successfully.
Feb 25 07:17:12 np0005629333 systemd[1]: var-lib-containers-storage-overlay-707d0ec3801d09f720d201cc99cc3906ecd5ce5937d1d449d2a23665084e2304-merged.mount: Deactivated successfully.
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.440 244018 DEBUG nova.virt.libvirt.vif [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:16:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1235856482',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1235856482',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(20),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1235856482',id=8,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=20,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHJvvkxGBHug345PSqKu7AM4guWtlD1Z/ZHlY6xJOr1x7TK7JerzVRxChzrn82oXIZjPXupKAJ5z18TSxrDr8DDKEwo/biNUpuR+tKuw+djeyjF8yxNV8v2GUFxoZtp+/Q==',key_name='tempest-keypair-1665806790',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:16:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0cd0968a9a1b4b9e984b0a10a6ac77a8',ramdisk_id='',reservation_id='r-6yvik42u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1630390846',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1630390846-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:16:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ee6c6e44a0624805afeb68a67c99f325',uuid=20c8b8a1-1561-49f5-9fce-af2840195a57,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "address": "fa:16:3e:3b:e1:5a", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fae567b-e2", "ovs_interfaceid": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.441 244018 DEBUG nova.network.os_vif_util [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Converting VIF {"id": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "address": "fa:16:3e:3b:e1:5a", "network": {"id": "d3fb36f1-0e88-43b4-a8a4-3844d55f1de8", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-750162630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0cd0968a9a1b4b9e984b0a10a6ac77a8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fae567b-e2", "ovs_interfaceid": "5fae567b-e26e-4045-8efe-6d5fc03a1658", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.441 244018 DEBUG nova.network.os_vif_util [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3b:e1:5a,bridge_name='br-int',has_traffic_filtering=True,id=5fae567b-e26e-4045-8efe-6d5fc03a1658,network=Network(d3fb36f1-0e88-43b4-a8a4-3844d55f1de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fae567b-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.442 244018 DEBUG os_vif [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3b:e1:5a,bridge_name='br-int',has_traffic_filtering=True,id=5fae567b-e26e-4045-8efe-6d5fc03a1658,network=Network(d3fb36f1-0e88-43b4-a8a4-3844d55f1de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fae567b-e2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.445 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.445 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5fae567b-e2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.447 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.448 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.450 244018 INFO os_vif [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3b:e1:5a,bridge_name='br-int',has_traffic_filtering=True,id=5fae567b-e26e-4045-8efe-6d5fc03a1658,network=Network(d3fb36f1-0e88-43b4-a8a4-3844d55f1de8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fae567b-e2')#033[00m
Feb 25 07:17:12 np0005629333 podman[256782]: 2026-02-25 12:17:12.457817437 +0000 UTC m=+0.112722906 container cleanup 2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:17:12 np0005629333 systemd[1]: libpod-conmon-2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e.scope: Deactivated successfully.
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.474 244018 DEBUG oslo_concurrency.processutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:17:12 np0005629333 podman[256836]: 2026-02-25 12:17:12.543389153 +0000 UTC m=+0.064963084 container remove 2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 07:17:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:12.548 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[073c6b70-f17c-4be0-9513-1bd4500d384a]: (4, ('Wed Feb 25 12:17:12 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 (2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e)\n2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e\nWed Feb 25 12:17:12 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 (2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e)\n2f771423fea679a9ed4aa046feaf5d3d997bb34cfaabf1d0087aee9822c5dc5e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:17:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:12.550 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cba0ba49-ddeb-43a9-a7a6-6e6e324b652c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:17:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:12.551 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3fb36f1-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:17:12 np0005629333 kernel: tapd3fb36f1-00: left promiscuous mode
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.554 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:17:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:12.561 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a4266573-3765-4eb3-a454-60da139b0b75]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.566 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:17:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:12.574 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[237c970b-2855-43b4-9485-b07f46aa8c80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:17:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:12.575 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bea060ff-ba97-4ffc-817b-78a8910d5a27]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:17:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:12.592 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f1bb7eed-f3fa-4a5a-a9e0-a298f9c56b75]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 377636, 'reachable_time': 43116, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256876, 'error': None, 'target': 'ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:17:12 np0005629333 systemd[1]: run-netns-ovnmeta\x2dd3fb36f1\x2d0e88\x2d43b4\x2da8a4\x2d3844d55f1de8.mount: Deactivated successfully.
Feb 25 07:17:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:12.597 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d3fb36f1-0e88-43b4-a8a4-3844d55f1de8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:17:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:12.597 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[aa34f02b-66ab-45c2-a81c-65a2246da3b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.861 244018 INFO nova.virt.libvirt.driver [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Deleting instance files /var/lib/nova/instances/20c8b8a1-1561-49f5-9fce-af2840195a57_del#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.861 244018 INFO nova.virt.libvirt.driver [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Deletion of /var/lib/nova/instances/20c8b8a1-1561-49f5-9fce-af2840195a57_del complete#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.867 244018 DEBUG nova.compute.manager [req-4f002f8d-91aa-4b58-91b6-c7aa1d53a7cf req-719a5e3d-cad7-4185-9d75-cfe9e2703b2f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Received event network-vif-unplugged-5fae567b-e26e-4045-8efe-6d5fc03a1658 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.867 244018 DEBUG oslo_concurrency.lockutils [req-4f002f8d-91aa-4b58-91b6-c7aa1d53a7cf req-719a5e3d-cad7-4185-9d75-cfe9e2703b2f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.867 244018 DEBUG oslo_concurrency.lockutils [req-4f002f8d-91aa-4b58-91b6-c7aa1d53a7cf req-719a5e3d-cad7-4185-9d75-cfe9e2703b2f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.867 244018 DEBUG oslo_concurrency.lockutils [req-4f002f8d-91aa-4b58-91b6-c7aa1d53a7cf req-719a5e3d-cad7-4185-9d75-cfe9e2703b2f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.867 244018 DEBUG nova.compute.manager [req-4f002f8d-91aa-4b58-91b6-c7aa1d53a7cf req-719a5e3d-cad7-4185-9d75-cfe9e2703b2f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] No waiting events found dispatching network-vif-unplugged-5fae567b-e26e-4045-8efe-6d5fc03a1658 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.868 244018 DEBUG nova.compute.manager [req-4f002f8d-91aa-4b58-91b6-c7aa1d53a7cf req-719a5e3d-cad7-4185-9d75-cfe9e2703b2f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Received event network-vif-unplugged-5fae567b-e26e-4045-8efe-6d5fc03a1658 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:17:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:17:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4272380381' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.960 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.977 244018 DEBUG nova.storage.rbd_utils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.980 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.998 244018 INFO nova.compute.manager [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Took 1.04 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:17:12 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.999 244018 DEBUG oslo.service.loopingcall [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:12.999 244018 DEBUG nova.compute.manager [-] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.000 244018 DEBUG nova.network.neutron [-] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:17:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:17:13 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/115378256' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.111 244018 DEBUG oslo_concurrency.processutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.115 244018 DEBUG nova.compute.provider_tree [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.140 244018 DEBUG nova.scheduler.client.report [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.182 244018 DEBUG oslo_concurrency.lockutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.957s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.182 244018 DEBUG nova.compute.manager [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.296 244018 DEBUG nova.compute.manager [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.296 244018 DEBUG nova.network.neutron [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.353 244018 INFO nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.374 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.399 244018 DEBUG nova.compute.manager [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:17:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:17:13 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1062890002' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.499 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.502 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:17:13 np0005629333 nova_compute[244014]:  <uuid>db0fc9fa-1fc0-4334-96f9-2205fa53e308</uuid>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:  <name>instance-0000000a</name>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServersAdmin275Test-server-1821961636</nova:name>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:17:12</nova:creationTime>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:17:13 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:        <nova:user uuid="40cba1ddb2af4adea0f03477ccf87402">tempest-ServersAdmin275Test-694282747-project-member</nova:user>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:        <nova:project uuid="03ef50f1c8db4f9d9f7512943746f9e7">tempest-ServersAdmin275Test-694282747</nova:project>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="f0ef5a9a-23b8-4883-8e47-feb7403a11d8"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      <nova:ports/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      <entry name="serial">db0fc9fa-1fc0-4334-96f9-2205fa53e308</entry>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      <entry name="uuid">db0fc9fa-1fc0-4334-96f9-2205fa53e308</entry>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk">
Feb 25 07:17:13 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:17:13 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config">
Feb 25 07:17:13 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:17:13 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/console.log" append="off"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:17:13 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:17:13 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:17:13 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:17:13 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
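
[annotation] The dump that ends above is the complete libvirt domain definition nova generated for instance-0000000a: an RBD-backed virtio root disk, a SATA CD-ROM for the config drive, and the m1.nano flavor recorded under nova's metadata namespace. A minimal sketch for inspecting such a dump offline, assuming the XML block has been saved to a hypothetical domain.xml; the namespace URI is copied from the dump itself:

import xml.etree.ElementTree as ET

# Namespace URI taken from the <nova:instance> element in the dump above.
NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

root = ET.parse("domain.xml").getroot()  # hypothetical saved copy of the dump
print("uuid:  ", root.findtext("uuid"))
print("flavor:", root.find(".//nova:flavor", NS).get("name"))
for disk in root.iter("disk"):
    src, tgt = disk.find("source"), disk.find("target")
    print(f'{disk.get("device")}: {src.get("protocol")}:{src.get("name")} '
          f'-> {tgt.get("dev")} ({tgt.get("bus")})')

Run against the dump above, this prints the rbd-backed disk mapped to vda on virtio and the config-drive cdrom mapped to sda on sata.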
Feb 25 07:17:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 300 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 6.0 MiB/s wr, 253 op/s
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.581 244018 DEBUG nova.compute.manager [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.583 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.584 244018 INFO nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Creating image(s)#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.611 244018 DEBUG nova.storage.rbd_utils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] rbd image 9f78f954-3472-4631-a0ae-b945b9c26f5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.654 244018 DEBUG nova.storage.rbd_utils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] rbd image 9f78f954-3472-4631-a0ae-b945b9c26f5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.684 244018 DEBUG nova.storage.rbd_utils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] rbd image 9f78f954-3472-4631-a0ae-b945b9c26f5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.688 244018 DEBUG oslo_concurrency.processutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.704 244018 DEBUG nova.network.neutron [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.705 244018 DEBUG nova.compute.manager [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.710 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.710 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.711 244018 INFO nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Using config drive#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.732 244018 DEBUG nova.storage.rbd_utils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.743 244018 DEBUG oslo_concurrency.processutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.743 244018 DEBUG oslo_concurrency.lockutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.744 244018 DEBUG oslo_concurrency.lockutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.744 244018 DEBUG oslo_concurrency.lockutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
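
[annotation] The three lockutils lines above show nova serializing base-image fetches on the image's cache key (the SHA-1 a63dc6db...). A sketch of the same pattern using oslo.concurrency's public decorator, not nova's actual inner fetch_func_sync; the function body here is hypothetical:

from oslo_concurrency import lockutils

# Serialize on the same cache key the log shows; a concurrent caller blocks
# until the first fetch finishes, then finds the cached file already present.
@lockutils.synchronized("a63dc6dbb387022d47a8ca49bddcc4af2508a4d6")
def fetch_base_image():
    ...  # hypothetical body: download the image into _base if it is missing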
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.764 244018 DEBUG nova.storage.rbd_utils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] rbd image 9f78f954-3472-4631-a0ae-b945b9c26f5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.768 244018 DEBUG oslo_concurrency.processutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 9f78f954-3472-4631-a0ae-b945b9c26f5a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.812 244018 DEBUG nova.objects.instance [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lazy-loading 'ec2_ids' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.878 244018 DEBUG nova.objects.instance [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lazy-loading 'keypairs' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.942 244018 DEBUG nova.network.neutron [-] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:17:13 np0005629333 nova_compute[244014]: 2026-02-25 12:17:13.973 244018 INFO nova.compute.manager [-] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Took 0.97 seconds to deallocate network for instance.#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.018 244018 DEBUG oslo_concurrency.processutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 9f78f954-3472-4631-a0ae-b945b9c26f5a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.250s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.044 244018 DEBUG oslo_concurrency.lockutils [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.044 244018 DEBUG oslo_concurrency.lockutils [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.085 244018 INFO nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Creating config drive at /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.091 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpznk6fh7w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.125 244018 DEBUG nova.storage.rbd_utils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] resizing rbd image 9f78f954-3472-4631-a0ae-b945b9c26f5a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
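
[annotation] Taken together, the lines above trace the root-disk build for 9f78f954-3472-4631-a0ae-b945b9c26f5a: the cached base image is probed with qemu-img, imported into the vms pool as <uuid>_disk, then resized to the flavor's 1 GiB root disk. A stand-alone sketch reproducing the same CLI calls the log records; it assumes a reachable Ceph cluster and the openstack keyring:

import json
import subprocess

base = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
image = "9f78f954-3472-4631-a0ae-b945b9c26f5a_disk"
ceph = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

# Probe the cached base image, as the prlimit-wrapped call above does.
info = json.loads(subprocess.check_output(
    ["qemu-img", "info", base, "--force-share", "--output=json"]))
print("base virtual size:", info["virtual-size"])

# Import into the vms pool, then grow to the flavor's 1 GiB root disk
# (1073741824 bytes; rbd's --size is in MiB by default, hence 1024).
subprocess.check_call(["rbd", "import", "--pool", "vms", base, image,
                       "--image-format=2", *ceph])
subprocess.check_call(["rbd", "resize", "--pool", "vms", image,
                       "--size", "1024", *ceph])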
Feb 25 07:17:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.217 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpznk6fh7w" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.254 244018 DEBUG nova.storage.rbd_utils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.258 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.331 244018 DEBUG nova.objects.instance [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lazy-loading 'migration_context' on Instance uuid 9f78f954-3472-4631-a0ae-b945b9c26f5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.336 244018 DEBUG oslo_concurrency.processutils [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.369 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.370 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Ensure instance console log exists: /var/lib/nova/instances/9f78f954-3472-4631-a0ae-b945b9c26f5a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.371 244018 DEBUG oslo_concurrency.lockutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.371 244018 DEBUG oslo_concurrency.lockutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.372 244018 DEBUG oslo_concurrency.lockutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.375 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.380 244018 WARNING nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.386 244018 DEBUG nova.virt.libvirt.host [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.387 244018 DEBUG nova.virt.libvirt.host [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.390 244018 DEBUG nova.virt.libvirt.host [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.390 244018 DEBUG nova.virt.libvirt.host [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
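
[annotation] The two probes above are how the driver decides where CPU accounting lives on this host: the cgroups-v1 cpu controller is absent, and the unified (v2) hierarchy provides it. A stand-alone check of the same fact, assuming the standard /sys/fs/cgroup mount:

from pathlib import Path

# On a unified-hierarchy (v2) host the enabled controllers are listed in
# cgroup.controllers at the cgroup root; "cpu" there corresponds to the
# "CPU controller found on host" line above.
controllers = Path("/sys/fs/cgroup/cgroup.controllers")
if controllers.exists():
    print("cgroups v2, cpu controller:", "cpu" in controllers.read_text().split())
else:
    print("no unified hierarchy; check the v1 cpu,cpuacct mount instead")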
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.391 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.392 244018 DEBUG nova.virt.hardware [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.393 244018 DEBUG nova.virt.hardware [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.393 244018 DEBUG nova.virt.hardware [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.393 244018 DEBUG nova.virt.hardware [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.394 244018 DEBUG nova.virt.hardware [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.394 244018 DEBUG nova.virt.hardware [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.395 244018 DEBUG nova.virt.hardware [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.395 244018 DEBUG nova.virt.hardware [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.396 244018 DEBUG nova.virt.hardware [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.396 244018 DEBUG nova.virt.hardware [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.396 244018 DEBUG nova.virt.hardware [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
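
[annotation] The hardware.py lines above walk the topology search for the 1-vCPU m1.nano flavor: neither flavor nor image sets constraints (all 0:0:0), so the limits fall back to 65536 per dimension and the only triple whose product is one vCPU is sockets=1, cores=1, threads=1. A simplified reconstruction of that enumeration (not nova's actual code):

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    """Yield (sockets, cores, threads) triples whose product equals vcpus."""
    for s in range(1, min(vcpus, max_sockets) + 1):
        for c in range(1, min(vcpus, max_cores) + 1):
            for t in range(1, min(vcpus, max_threads) + 1):
                if s * c * t == vcpus:
                    yield (s, c, t)

print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log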
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.403 244018 DEBUG oslo_concurrency.processutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.570 244018 DEBUG oslo_concurrency.processutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.571 244018 INFO nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Deleting local config drive /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config because it was imported into RBD.#033[00m
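
[annotation] The config-drive path for db0fc9fa-1fc0-4334-96f9-2205fa53e308 is fully visible here: metadata is packed into an ISO9660 volume labelled config-2 with mkisofs, the ISO is imported into the vms pool as <uuid>_disk.config, and the local copy is then deleted. A minimal sketch of the packing step under the same flags the log shows; the metadata content and output path are placeholders, though openstack/latest/meta_data.json is the standard config-drive layout:

import pathlib
import subprocess
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    latest = pathlib.Path(tmp, "openstack", "latest")
    latest.mkdir(parents=True)
    latest.joinpath("meta_data.json").write_text(
        '{"uuid": "db0fc9fa-1fc0-4334-96f9-2205fa53e308"}')  # placeholder metadata
    # Flags copied from the mkisofs invocation logged above; publisher string
    # shortened here.
    subprocess.check_call(
        ["/usr/bin/mkisofs", "-o", "disk.config", "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-publisher", "sketch", "-quiet", "-J", "-r",
         "-V", "config-2", tmp])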
Feb 25 07:17:14 np0005629333 systemd-machined[210048]: New machine qemu-12-instance-0000000a.
Feb 25 07:17:14 np0005629333 systemd[1]: Started Virtual Machine qemu-12-instance-0000000a.
Feb 25 07:17:14 np0005629333 podman[257213]: 2026-02-25 12:17:14.713531419 +0000 UTC m=+0.072170259 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 25 07:17:14 np0005629333 podman[257212]: 2026-02-25 12:17:14.720832777 +0000 UTC m=+0.080797406 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 25 07:17:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:17:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3105879104' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.894 244018 DEBUG oslo_concurrency.processutils [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.899 244018 DEBUG nova.compute.provider_tree [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:17:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:17:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1278180881' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.934 244018 DEBUG oslo_concurrency.processutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.953 244018 DEBUG nova.storage.rbd_utils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] rbd image 9f78f954-3472-4631-a0ae-b945b9c26f5a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.956 244018 DEBUG oslo_concurrency.processutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:17:14 np0005629333 nova_compute[244014]: 2026-02-25 12:17:14.973 244018 DEBUG nova.scheduler.client.report [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
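
[annotation] The inventory dict in the report line above is what bounds scheduling onto this host: usable capacity per resource class is (total - reserved) * allocation_ratio. Worked through with the logged numbers:

# Inventory copied from the scheduler report line above.
inventory = {
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {usable:g} schedulable")
# MEMORY_MB: 7167, VCPU: 32, DISK_GB: 52.2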
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.009 244018 DEBUG oslo_concurrency.lockutils [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.964s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.015 244018 DEBUG nova.compute.manager [req-19aa8c6a-2910-4285-a04d-8dd7f6cd8419 req-5034d8d0-43e8-4d45-96f7-5ade31627c5d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Received event network-vif-plugged-5fae567b-e26e-4045-8efe-6d5fc03a1658 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.016 244018 DEBUG oslo_concurrency.lockutils [req-19aa8c6a-2910-4285-a04d-8dd7f6cd8419 req-5034d8d0-43e8-4d45-96f7-5ade31627c5d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.016 244018 DEBUG oslo_concurrency.lockutils [req-19aa8c6a-2910-4285-a04d-8dd7f6cd8419 req-5034d8d0-43e8-4d45-96f7-5ade31627c5d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.016 244018 DEBUG oslo_concurrency.lockutils [req-19aa8c6a-2910-4285-a04d-8dd7f6cd8419 req-5034d8d0-43e8-4d45-96f7-5ade31627c5d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.016 244018 DEBUG nova.compute.manager [req-19aa8c6a-2910-4285-a04d-8dd7f6cd8419 req-5034d8d0-43e8-4d45-96f7-5ade31627c5d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] No waiting events found dispatching network-vif-plugged-5fae567b-e26e-4045-8efe-6d5fc03a1658 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.016 244018 WARNING nova.compute.manager [req-19aa8c6a-2910-4285-a04d-8dd7f6cd8419 req-5034d8d0-43e8-4d45-96f7-5ade31627c5d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Received unexpected event network-vif-plugged-5fae567b-e26e-4045-8efe-6d5fc03a1658 for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.017 244018 DEBUG nova.compute.manager [req-19aa8c6a-2910-4285-a04d-8dd7f6cd8419 req-5034d8d0-43e8-4d45-96f7-5ade31627c5d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Received event network-vif-deleted-5fae567b-e26e-4045-8efe-6d5fc03a1658 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.045 244018 INFO nova.scheduler.client.report [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Deleted allocations for instance 20c8b8a1-1561-49f5-9fce-af2840195a57#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.293 244018 DEBUG oslo_concurrency.lockutils [None req-a73a2d53-1ad4-44e5-a1b9-a92edf0b1762 ee6c6e44a0624805afeb68a67c99f325 0cd0968a9a1b4b9e984b0a10a6ac77a8 - - default default] Lock "20c8b8a1-1561-49f5-9fce-af2840195a57" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:17:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:17:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3561197649' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.508 244018 DEBUG oslo_concurrency.processutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.509 244018 DEBUG nova.objects.instance [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9f78f954-3472-4631-a0ae-b945b9c26f5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.513 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for db0fc9fa-1fc0-4334-96f9-2205fa53e308 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.513 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021835.5128403, db0fc9fa-1fc0-4334-96f9-2205fa53e308 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.513 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.515 244018 DEBUG nova.compute.manager [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.516 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.519 244018 INFO nova.virt.libvirt.driver [-] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance spawned successfully.#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.519 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.526 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:17:15 np0005629333 nova_compute[244014]:  <uuid>9f78f954-3472-4631-a0ae-b945b9c26f5a</uuid>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:  <name>instance-0000000c</name>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServersAdminNegativeTestJSON-server-1226014116</nova:name>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:17:14</nova:creationTime>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:17:15 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:        <nova:user uuid="f653b2ec5be5483092804fcd55beab49">tempest-ServersAdminNegativeTestJSON-49539203-project-member</nova:user>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:        <nova:project uuid="2805044e788543068625873119e58bd0">tempest-ServersAdminNegativeTestJSON-49539203</nova:project>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      <nova:ports/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      <entry name="serial">9f78f954-3472-4631-a0ae-b945b9c26f5a</entry>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      <entry name="uuid">9f78f954-3472-4631-a0ae-b945b9c26f5a</entry>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/9f78f954-3472-4631-a0ae-b945b9c26f5a_disk">
Feb 25 07:17:15 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:17:15 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/9f78f954-3472-4631-a0ae-b945b9c26f5a_disk.config">
Feb 25 07:17:15 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:17:15 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/9f78f954-3472-4631-a0ae-b945b9c26f5a/console.log" append="off"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:17:15 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:17:15 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:17:15 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:17:15 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
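
The block above is the complete libvirt domain XML that _get_guest_xml produced for instance-0000000c. For post-mortem work it is often easier to copy such a dump out of the journal and inspect it programmatically; the sketch below does that with the standard library, assuming the XML has been saved to a file named instance-0000000c.xml. Note that libvirt's <memory> element is in KiB, so the 131072 above is exactly the 128 MiB of the m1.nano flavor shown in the metadata.

    # Sketch: inspect a libvirt domain XML like the one dumped above.
    # Assumes the XML was copied out of the log into "instance-0000000c.xml".
    import xml.etree.ElementTree as ET

    root = ET.parse("instance-0000000c.xml").getroot()

    # <memory> is in KiB: 131072 KiB == 128 MiB, matching the m1.nano flavor.
    mem_mib = int(root.findtext("memory")) // 1024
    print(f"name={root.findtext('name')} uuid={root.findtext('uuid')} mem={mem_mib} MiB")

    # List every disk: device kind, target dev/bus, and (for rbd disks) the Ceph source.
    for disk in root.findall("./devices/disk"):
        target = disk.find("target")
        source = disk.find("source")
        line = f"{disk.get('device')}: {target.get('dev')} ({target.get('bus')})"
        if source is not None and source.get("protocol") == "rbd":
            host = source.find("host")
            line += f" rbd:{source.get('name')} via {host.get('name')}:{host.get('port')}"
        print(line)
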
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.535 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:17:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 300 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.2 MiB/s wr, 204 op/s
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.540 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.541 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.541 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.541 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.542 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.542 244018 DEBUG nova.virt.libvirt.driver [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
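
The six "Found default for ..." lines record the hypervisor-chosen defaults for image properties that the image itself never set, so the same virtual hardware (sata cdrom, virtio disk/video/vif, usb input, usbtablet pointer) survives later rebuilds and migrations. A minimal sketch of that idea, using the values from the log; the dict layout is illustrative, not Nova's internal representation:

    # Sketch of the idea behind _register_undefined_instance_details: for each
    # listed image property the image left unset, record the value the driver
    # actually chose so later rebuilds keep the same hardware. Values are the
    # ones logged above; the data layout is illustrative, not Nova's.
    chosen_defaults = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }

    def register_defaults(image_props: dict, chosen: dict) -> dict:
        """Return only the defaults for properties the image did not set."""
        return {k: v for k, v in chosen.items() if k not in image_props}

    # An image that pins hw_disk_bus keeps its own value; the rest are filled in.
    print(register_defaults({"hw_disk_bus": "scsi"}, chosen_defaults))
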
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.545 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.569 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.570 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021835.5151365, db0fc9fa-1fc0-4334-96f9-2205fa53e308 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.570 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] VM Started (Lifecycle Event)#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.595 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.598 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.599 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.599 244018 INFO nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Using config drive#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.618 244018 DEBUG nova.storage.rbd_utils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] rbd image 9f78f954-3472-4631-a0ae-b945b9c26f5a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.625 244018 DEBUG nova.compute.manager [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.626 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.650 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
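
Both sync_power_state skips above follow the same guard: while the instance carries a pending task (rebuild_spawning here), lifecycle events are expected to churn, so the manager refuses to "correct" the database power state from them. A stripped-down sketch of that guard; illustrative, not nova.compute.manager's actual code:

    # Sketch of the guard behind "During sync_power_state the instance has a
    # pending task (...). Skip." -- simplified, not nova.compute.manager itself.
    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            # A task like 'rebuild_spawning' is mid-flight; the VM state is
            # expected to churn, so do not rewrite the database from it.
            print(f"pending task ({task_state}): skip")
            return db_power_state
        if db_power_state != vm_power_state:
            print(f"updating DB power state {db_power_state} -> {vm_power_state}")
            return vm_power_state
        return db_power_state

    sync_power_state(1, 1, "rebuild_spawning")  # the case logged above: skip
    sync_power_state(0, 1, None)                # no task: DB would be updated
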
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.705 244018 DEBUG oslo_concurrency.lockutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.705 244018 DEBUG oslo_concurrency.lockutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.705 244018 DEBUG nova.objects.instance [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.810 244018 DEBUG oslo_concurrency.lockutils [None req-d9015e5c-4034-4148-95b4-bb74d6fa7c35 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
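
The Acquiring/acquired/released triplet around finish_evacuation is oslo.concurrency's synchronized decorator logging the wait time (0.000s) and hold time (0.105s) for the named "compute_resources" lock. A stdlib approximation of that logging pattern, with a plain threading.Lock standing in for oslo's named-lock machinery:

    # Stdlib approximation of the lock logging seen above (Acquiring ->
    # acquired :: waited Ns -> "released" :: held Ns). Not oslo.concurrency's
    # implementation; just the shape of the pattern.
    import functools
    import threading
    import time

    _locks = {}

    def synchronized(name):
        lock = _locks.setdefault(name, threading.Lock())
        def deco(fn):
            @functools.wraps(fn)
            def inner(*args, **kwargs):
                print(f'Acquiring lock "{name}" by "{fn.__qualname__}"')
                t0 = time.monotonic()
                lock.acquire()
                print(f'Lock "{name}" acquired :: waited {time.monotonic() - t0:.3f}s')
                t1 = time.monotonic()
                try:
                    return fn(*args, **kwargs)
                finally:
                    lock.release()
                    print(f'Lock "{name}" "released" :: held {time.monotonic() - t1:.3f}s')
            return inner
        return deco

    @synchronized("compute_resources")
    def finish_evacuation():
        time.sleep(0.1)  # stand-in for resource-tracker work

    finish_evacuation()
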
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.819 244018 INFO nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Creating config drive at /var/lib/nova/instances/9f78f954-3472-4631-a0ae-b945b9c26f5a/disk.config#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.824 244018 DEBUG oslo_concurrency.processutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9f78f954-3472-4631-a0ae-b945b9c26f5a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmprrv7_hrf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.951 244018 DEBUG oslo_concurrency.processutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9f78f954-3472-4631-a0ae-b945b9c26f5a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmprrv7_hrf" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:17:15 np0005629333 nova_compute[244014]: 2026-02-25 12:17:15.996 244018 DEBUG nova.storage.rbd_utils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] rbd image 9f78f954-3472-4631-a0ae-b945b9c26f5a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:17:16 np0005629333 nova_compute[244014]: 2026-02-25 12:17:16.000 244018 DEBUG oslo_concurrency.processutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9f78f954-3472-4631-a0ae-b945b9c26f5a/disk.config 9f78f954-3472-4631-a0ae-b945b9c26f5a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:17:16 np0005629333 nova_compute[244014]: 2026-02-25 12:17:16.216 244018 DEBUG oslo_concurrency.processutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9f78f954-3472-4631-a0ae-b945b9c26f5a/disk.config 9f78f954-3472-4631-a0ae-b945b9c26f5a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.216s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:17:16 np0005629333 nova_compute[244014]: 2026-02-25 12:17:16.217 244018 INFO nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Deleting local config drive /var/lib/nova/instances/9f78f954-3472-4631-a0ae-b945b9c26f5a/disk.config because it was imported into RBD.#033[00m
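
Lines 12:17:15.819 through 12:17:16.217 show the config-drive path on an RBD-backed deployment: build an ISO locally with mkisofs, import it into the vms pool as <uuid>_disk.config, then delete the local copy. The sketch below replays those three steps with subprocess, with flags, paths, and the uuid copied from the log lines; it is illustrative only and should never be pointed at a production cluster:

    # Sketch of the config-drive flow logged above: mkisofs -> rbd import ->
    # delete the local ISO. Arguments are taken from the logged commands.
    import os
    import subprocess

    uuid = "9f78f954-3472-4631-a0ae-b945b9c26f5a"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"
    tmpdir = "/tmp/tmprrv7_hrf"  # staged metadata files, as in the log

    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-publisher",
         "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
         "-quiet", "-J", "-r", "-V", "config-2", tmpdir],
        check=True)

    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)

    # "Deleting local config drive ... because it was imported into RBD."
    os.unlink(iso)
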
Feb 25 07:17:16 np0005629333 systemd-machined[210048]: New machine qemu-13-instance-0000000c.
Feb 25 07:17:16 np0005629333 systemd[1]: Started Virtual Machine qemu-13-instance-0000000c.
Feb 25 07:17:17 np0005629333 nova_compute[244014]: 2026-02-25 12:17:17.096 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021837.0951371, 9f78f954-3472-4631-a0ae-b945b9c26f5a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:17:17 np0005629333 nova_compute[244014]: 2026-02-25 12:17:17.097 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] VM Resumed (Lifecycle Event)#033[00m
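
The "Emitting event" / "VM Resumed" pair shows the virt driver pushing a LifecycleEvent that the compute manager then handles. A tiny callback-based sketch of that emitter shape; the dataclass layout is an assumption for illustration, not nova.virt.driver's actual classes:

    # Minimal emitter/handler sketch matching the "Emitting event
    # <LifecycleEvent: ts, uuid => Resumed>" lines; shapes are illustrative.
    import time
    from dataclasses import dataclass, field

    @dataclass
    class LifecycleEvent:
        uuid: str
        transition: str  # e.g. "Resumed", "Started", "Paused"
        timestamp: float = field(default_factory=time.time)

    class Emitter:
        def __init__(self, handler):
            self._handler = handler

        def emit(self, event: LifecycleEvent):
            print(f"Emitting event <LifecycleEvent: {event.timestamp}, "
                  f"{event.uuid} => {event.transition}>")
            self._handler(event)

    def on_event(ev):
        print(f"[instance: {ev.uuid}] VM {ev.transition} (Lifecycle Event)")

    Emitter(on_event).emit(
        LifecycleEvent("9f78f954-3472-4631-a0ae-b945b9c26f5a", "Resumed"))
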
Feb 25 07:17:17 np0005629333 nova_compute[244014]: 2026-02-25 12:17:17.103 244018 DEBUG nova.compute.manager [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:17:17 np0005629333 nova_compute[244014]: 2026-02-25 12:17:17.104 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:17:17 np0005629333 nova_compute[244014]: 2026-02-25 12:17:17.108 244018 INFO nova.virt.libvirt.driver [-] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Instance spawned successfully.#033[00m
Feb 25 07:17:17 np0005629333 nova_compute[244014]: 2026-02-25 12:17:17.109 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:17:17 np0005629333 nova_compute[244014]: 2026-02-25 12:17:17.128 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:17:17 np0005629333 nova_compute[244014]: 2026-02-25 12:17:17.132 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:17:17 np0005629333 nova_compute[244014]: 2026-02-25 12:17:17.142 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:17:17 np0005629333 nova_compute[244014]: 2026-02-25 12:17:17.143 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:17:17 np0005629333 nova_compute[244014]: 2026-02-25 12:17:17.144 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:17:17 np0005629333 nova_compute[244014]: 2026-02-25 12:17:17.144 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:17:17 np0005629333 nova_compute[244014]: 2026-02-25 12:17:17.145 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:17:17 np0005629333 nova_compute[244014]: 2026-02-25 12:17:17.146 244018 DEBUG nova.virt.libvirt.driver [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:17:17 np0005629333 nova_compute[244014]: 2026-02-25 12:17:17.153 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:17:17 np0005629333 nova_compute[244014]: 2026-02-25 12:17:17.153 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021837.0961735, 9f78f954-3472-4631-a0ae-b945b9c26f5a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:17:17 np0005629333 nova_compute[244014]: 2026-02-25 12:17:17.154 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] VM Started (Lifecycle Event)#033[00m
Feb 25 07:17:17 np0005629333 nova_compute[244014]: 2026-02-25 12:17:17.201 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:17:17 np0005629333 nova_compute[244014]: 2026-02-25 12:17:17.205 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:17:17 np0005629333 nova_compute[244014]: 2026-02-25 12:17:17.214 244018 INFO nova.compute.manager [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Took 3.63 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:17:17 np0005629333 nova_compute[244014]: 2026-02-25 12:17:17.215 244018 DEBUG nova.compute.manager [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:17:17 np0005629333 nova_compute[244014]: 2026-02-25 12:17:17.225 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:17:17 np0005629333 nova_compute[244014]: 2026-02-25 12:17:17.275 244018 INFO nova.compute.manager [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Took 5.09 seconds to build instance.#033[00m
Feb 25 07:17:17 np0005629333 nova_compute[244014]: 2026-02-25 12:17:17.299 244018 DEBUG oslo_concurrency.lockutils [None req-ae97ae10-d982-40d9-b01f-402ec90caeac f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "9f78f954-3472-4631-a0ae-b945b9c26f5a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:17:17 np0005629333 nova_compute[244014]: 2026-02-25 12:17:17.449 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:17:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 293 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 5.7 MiB/s rd, 7.1 MiB/s wr, 359 op/s
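
The recurring ceph-mgr pgmap lines carry cluster throughput and are regular enough to scrape when correlating I/O spikes with instance activity. A regex sketch fitted to the exact format in this log; other Ceph releases, or pgmap lines without the throughput tail, may need a looser pattern:

    # Scrape throughput from the ceph-mgr pgmap lines above. The regex is
    # fitted to this log's exact format and is an assumption elsewhere.
    import re

    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
        r"(?P<rd>[\d.]+) MiB/s rd, (?P<wr>[\d.]+) MiB/s wr, (?P<ops>\d+) op/s")

    line = ("pgmap v954: 305 pgs: 305 active+clean; 293 MiB data, 408 MiB used, "
            "60 GiB / 60 GiB avail; 5.7 MiB/s rd, 7.1 MiB/s wr, 359 op/s")
    m = PGMAP.search(line)
    print(m.group("ver"), m.group("rd"), m.group("wr"), m.group("ops"))
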
Feb 25 07:17:18 np0005629333 nova_compute[244014]: 2026-02-25 12:17:18.307 244018 INFO nova.compute.manager [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Rebuilding instance#033[00m
Feb 25 07:17:18 np0005629333 nova_compute[244014]: 2026-02-25 12:17:18.375 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:17:18 np0005629333 nova_compute[244014]: 2026-02-25 12:17:18.767 244018 DEBUG nova.objects.instance [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Lazy-loading 'trusted_certs' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:17:18 np0005629333 nova_compute[244014]: 2026-02-25 12:17:18.793 244018 DEBUG nova.compute.manager [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:17:18 np0005629333 nova_compute[244014]: 2026-02-25 12:17:18.857 244018 DEBUG nova.objects.instance [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Lazy-loading 'pci_requests' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:17:18 np0005629333 nova_compute[244014]: 2026-02-25 12:17:18.869 244018 DEBUG nova.objects.instance [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Lazy-loading 'pci_devices' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:17:18 np0005629333 nova_compute[244014]: 2026-02-25 12:17:18.887 244018 DEBUG nova.objects.instance [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Lazy-loading 'resources' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:17:18 np0005629333 nova_compute[244014]: 2026-02-25 12:17:18.913 244018 DEBUG nova.objects.instance [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Lazy-loading 'migration_context' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:17:18 np0005629333 nova_compute[244014]: 2026-02-25 12:17:18.932 244018 DEBUG nova.objects.instance [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 25 07:17:18 np0005629333 nova_compute[244014]: 2026-02-25 12:17:18.937 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 25 07:17:19 np0005629333 nova_compute[244014]: 2026-02-25 12:17:19.087 244018 DEBUG nova.objects.instance [None req-0d544355-0e80-4e87-a307-6c0c91a9cd3e 6a24496b0a9947e48d0da17ecf258476 40504c45ce5547dc88d4082880bfc79f - - default default] Lazy-loading 'pci_devices' on Instance uuid 9f78f954-3472-4631-a0ae-b945b9c26f5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:17:19 np0005629333 nova_compute[244014]: 2026-02-25 12:17:19.107 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021839.1075826, 9f78f954-3472-4631-a0ae-b945b9c26f5a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:17:19 np0005629333 nova_compute[244014]: 2026-02-25 12:17:19.108 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:17:19 np0005629333 nova_compute[244014]: 2026-02-25 12:17:19.128 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:17:19 np0005629333 nova_compute[244014]: 2026-02-25 12:17:19.134 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:17:19 np0005629333 nova_compute[244014]: 2026-02-25 12:17:19.152 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Feb 25 07:17:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:17:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 293 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 4.0 MiB/s wr, 287 op/s
Feb 25 07:17:19 np0005629333 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Feb 25 07:17:19 np0005629333 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Consumed 2.952s CPU time.
Feb 25 07:17:19 np0005629333 systemd-machined[210048]: Machine qemu-13-instance-0000000c terminated.
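
The scope name machine-qemu\x2d13\x2dinstance\x2d0000000c.scope is systemd's unit-name escaping: '-' inside a name component becomes \x2d (see systemd.unit(5)). A small helper to undo the backslash-hex escapes when cross-referencing these scope names with libvirt domain names; it covers only the \xNN form seen here, not systemd's full escaping rules:

    # Undo the \xNN escaping visible in machine-qemu\x2d13\x2dinstance\x2d...
    # Handles only backslash-hex escapes; systemd's full scheme has more rules.
    import re

    def unescape_unit(name: str) -> str:
        return re.sub(r"\\x([0-9a-fA-F]{2})",
                      lambda m: chr(int(m.group(1), 16)), name)

    print(unescape_unit(r"machine-qemu\x2d13\x2dinstance\x2d0000000c.scope"))
    # -> machine-qemu-13-instance-0000000c.scope
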
Feb 25 07:17:19 np0005629333 nova_compute[244014]: 2026-02-25 12:17:19.730 244018 DEBUG nova.compute.manager [None req-0d544355-0e80-4e87-a307-6c0c91a9cd3e 6a24496b0a9947e48d0da17ecf258476 40504c45ce5547dc88d4082880bfc79f - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:17:21 np0005629333 nova_compute[244014]: 2026-02-25 12:17:21.267 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:17:21 np0005629333 nova_compute[244014]: 2026-02-25 12:17:21.348 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:17:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 300 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.2 MiB/s wr, 354 op/s
Feb 25 07:17:21 np0005629333 nova_compute[244014]: 2026-02-25 12:17:21.876 244018 DEBUG oslo_concurrency.lockutils [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "9f78f954-3472-4631-a0ae-b945b9c26f5a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:17:21 np0005629333 nova_compute[244014]: 2026-02-25 12:17:21.877 244018 DEBUG oslo_concurrency.lockutils [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "9f78f954-3472-4631-a0ae-b945b9c26f5a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:17:21 np0005629333 nova_compute[244014]: 2026-02-25 12:17:21.877 244018 DEBUG oslo_concurrency.lockutils [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "9f78f954-3472-4631-a0ae-b945b9c26f5a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:17:21 np0005629333 nova_compute[244014]: 2026-02-25 12:17:21.878 244018 DEBUG oslo_concurrency.lockutils [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "9f78f954-3472-4631-a0ae-b945b9c26f5a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:17:21 np0005629333 nova_compute[244014]: 2026-02-25 12:17:21.878 244018 DEBUG oslo_concurrency.lockutils [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "9f78f954-3472-4631-a0ae-b945b9c26f5a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:17:21 np0005629333 nova_compute[244014]: 2026-02-25 12:17:21.881 244018 INFO nova.compute.manager [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Terminating instance#033[00m
Feb 25 07:17:21 np0005629333 nova_compute[244014]: 2026-02-25 12:17:21.882 244018 DEBUG oslo_concurrency.lockutils [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "refresh_cache-9f78f954-3472-4631-a0ae-b945b9c26f5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:17:21 np0005629333 nova_compute[244014]: 2026-02-25 12:17:21.883 244018 DEBUG oslo_concurrency.lockutils [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquired lock "refresh_cache-9f78f954-3472-4631-a0ae-b945b9c26f5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:17:21 np0005629333 nova_compute[244014]: 2026-02-25 12:17:21.883 244018 DEBUG nova.network.neutron [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:17:22 np0005629333 nova_compute[244014]: 2026-02-25 12:17:22.075 244018 DEBUG nova.network.neutron [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:17:22 np0005629333 nova_compute[244014]: 2026-02-25 12:17:22.452 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:17:22 np0005629333 nova_compute[244014]: 2026-02-25 12:17:22.745 244018 DEBUG nova.network.neutron [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:17:22 np0005629333 nova_compute[244014]: 2026-02-25 12:17:22.768 244018 DEBUG oslo_concurrency.lockutils [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Releasing lock "refresh_cache-9f78f954-3472-4631-a0ae-b945b9c26f5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:17:22 np0005629333 nova_compute[244014]: 2026-02-25 12:17:22.769 244018 DEBUG nova.compute.manager [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:17:22 np0005629333 nova_compute[244014]: 2026-02-25 12:17:22.777 244018 INFO nova.virt.libvirt.driver [-] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Instance destroyed successfully.#033[00m
Feb 25 07:17:22 np0005629333 nova_compute[244014]: 2026-02-25 12:17:22.778 244018 DEBUG nova.objects.instance [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lazy-loading 'resources' on Instance uuid 9f78f954-3472-4631-a0ae-b945b9c26f5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:17:23 np0005629333 nova_compute[244014]: 2026-02-25 12:17:23.184 244018 INFO nova.virt.libvirt.driver [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Deleting instance files /var/lib/nova/instances/9f78f954-3472-4631-a0ae-b945b9c26f5a_del#033[00m
Feb 25 07:17:23 np0005629333 nova_compute[244014]: 2026-02-25 12:17:23.185 244018 INFO nova.virt.libvirt.driver [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Deletion of /var/lib/nova/instances/9f78f954-3472-4631-a0ae-b945b9c26f5a_del complete#033[00m
Feb 25 07:17:23 np0005629333 nova_compute[244014]: 2026-02-25 12:17:23.262 244018 INFO nova.compute.manager [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Took 0.49 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:17:23 np0005629333 nova_compute[244014]: 2026-02-25 12:17:23.263 244018 DEBUG oslo.service.loopingcall [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:17:23 np0005629333 nova_compute[244014]: 2026-02-25 12:17:23.264 244018 DEBUG nova.compute.manager [-] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:17:23 np0005629333 nova_compute[244014]: 2026-02-25 12:17:23.264 244018 DEBUG nova.network.neutron [-] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:17:23 np0005629333 nova_compute[244014]: 2026-02-25 12:17:23.376 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:17:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 325 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 7.0 MiB/s rd, 5.7 MiB/s wr, 378 op/s
Feb 25 07:17:23 np0005629333 nova_compute[244014]: 2026-02-25 12:17:23.579 244018 DEBUG nova.network.neutron [-] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:17:23 np0005629333 nova_compute[244014]: 2026-02-25 12:17:23.612 244018 DEBUG nova.network.neutron [-] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:17:23 np0005629333 nova_compute[244014]: 2026-02-25 12:17:23.626 244018 INFO nova.compute.manager [-] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Took 0.36 seconds to deallocate network for instance.#033[00m
Feb 25 07:17:23 np0005629333 nova_compute[244014]: 2026-02-25 12:17:23.663 244018 DEBUG oslo_concurrency.lockutils [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:17:23 np0005629333 nova_compute[244014]: 2026-02-25 12:17:23.663 244018 DEBUG oslo_concurrency.lockutils [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:17:23 np0005629333 nova_compute[244014]: 2026-02-25 12:17:23.742 244018 DEBUG oslo_concurrency.processutils [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:17:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:17:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:17:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/303118052' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:17:24 np0005629333 nova_compute[244014]: 2026-02-25 12:17:24.296 244018 DEBUG oslo_concurrency.processutils [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
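
The 0.554s "ceph df --format=json" call above (audited by ceph-mon at 07:17:24) is how the resource tracker sizes its DISK_GB inventory on RBD. The sketch below runs the same command and reads the cluster totals from the JSON; the top-level field names are assumptions based on common Ceph releases and should be verified against the actual output:

    # Run the same command the resource tracker runs above and pull the
    # cluster totals out of the JSON. The key names ("stats", "total_bytes",
    # "total_avail_bytes") are assumptions; verify against your cluster.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout

    stats = json.loads(out).get("stats", {})
    total_gib = stats.get("total_bytes", 0) / 2**30
    avail_gib = stats.get("total_avail_bytes", 0) / 2**30
    print(f"{avail_gib:.0f} GiB free of {total_gib:.0f} GiB")
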
Feb 25 07:17:24 np0005629333 nova_compute[244014]: 2026-02-25 12:17:24.304 244018 DEBUG nova.compute.provider_tree [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:17:24 np0005629333 nova_compute[244014]: 2026-02-25 12:17:24.320 244018 DEBUG nova.scheduler.client.report [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
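
The inventory dict above translates into schedulable capacity via placement's formula, effective = (total - reserved) * allocation_ratio. A worked check against the logged values:

    # Worked example: effective capacity from the inventory logged above,
    # using placement's formula (total - reserved) * allocation_ratio.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        effective = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {effective:g} schedulable")
    # MEMORY_MB: 7167; VCPU: (8 - 0) * 4.0 = 32; DISK_GB: (59 - 1) * 0.9 = 52.2
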
Feb 25 07:17:24 np0005629333 nova_compute[244014]: 2026-02-25 12:17:24.345 244018 DEBUG oslo_concurrency.lockutils [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:17:24 np0005629333 nova_compute[244014]: 2026-02-25 12:17:24.366 244018 INFO nova.scheduler.client.report [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Deleted allocations for instance 9f78f954-3472-4631-a0ae-b945b9c26f5a#033[00m
Feb 25 07:17:24 np0005629333 nova_compute[244014]: 2026-02-25 12:17:24.426 244018 DEBUG oslo_concurrency.lockutils [None req-ab8dd7e1-bc0f-4107-a0bb-7880e6d8cec4 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "9f78f954-3472-4631-a0ae-b945b9c26f5a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
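
Lines 12:17:21.876 through 12:17:24.426 trace one complete terminate_instance for 9f78f954: serialize on the instance lock, clear queued events, refresh the network cache, destroy the guest and its files, deallocate ports, release resource-tracker usage, delete the placement allocations, then release the lock (held 2.550s). A compressed order-of-operations sketch; function names mirror the log, bodies are stubs, and none of this is Nova's actual code:

    # Order-of-operations sketch of the terminate path traced above.
    import threading
    from contextlib import contextmanager

    _lock = threading.Lock()

    @contextmanager
    def instance_lock(uuid):
        with _lock:  # 'Acquiring lock "<uuid>" by ... do_terminate_instance'
            yield

    def do_terminate_instance(uuid):
        with instance_lock(uuid):
            clear_events_for_instance(uuid)  # drop queued external events
            refresh_network_cache(uuid)      # rebuild instance_info_cache
            destroy_on_hypervisor(uuid)      # libvirt destroy + delete files
            deallocate_network(uuid)         # unbind/delete neutron ports
            update_resource_usage(uuid)      # resource-tracker bookkeeping
            delete_allocations(uuid)         # remove placement allocations

    def clear_events_for_instance(uuid): pass
    def refresh_network_cache(uuid): pass
    def destroy_on_hypervisor(uuid): pass
    def deallocate_network(uuid): pass
    def update_resource_usage(uuid): pass
    def delete_allocations(uuid): pass

    do_terminate_instance("9f78f954-3472-4631-a0ae-b945b9c26f5a")
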
Feb 25 07:17:25 np0005629333 nova_compute[244014]: 2026-02-25 12:17:25.152 244018 DEBUG oslo_concurrency.lockutils [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "77af7d73-f695-47b4-8ec1-98a3672ff8d8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:17:25 np0005629333 nova_compute[244014]: 2026-02-25 12:17:25.152 244018 DEBUG oslo_concurrency.lockutils [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "77af7d73-f695-47b4-8ec1-98a3672ff8d8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:17:25 np0005629333 nova_compute[244014]: 2026-02-25 12:17:25.153 244018 DEBUG oslo_concurrency.lockutils [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "77af7d73-f695-47b4-8ec1-98a3672ff8d8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:17:25 np0005629333 nova_compute[244014]: 2026-02-25 12:17:25.153 244018 DEBUG oslo_concurrency.lockutils [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "77af7d73-f695-47b4-8ec1-98a3672ff8d8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:17:25 np0005629333 nova_compute[244014]: 2026-02-25 12:17:25.153 244018 DEBUG oslo_concurrency.lockutils [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "77af7d73-f695-47b4-8ec1-98a3672ff8d8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:17:25 np0005629333 nova_compute[244014]: 2026-02-25 12:17:25.155 244018 INFO nova.compute.manager [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Terminating instance
Feb 25 07:17:25 np0005629333 nova_compute[244014]: 2026-02-25 12:17:25.157 244018 DEBUG oslo_concurrency.lockutils [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "refresh_cache-77af7d73-f695-47b4-8ec1-98a3672ff8d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:17:25 np0005629333 nova_compute[244014]: 2026-02-25 12:17:25.157 244018 DEBUG oslo_concurrency.lockutils [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquired lock "refresh_cache-77af7d73-f695-47b4-8ec1-98a3672ff8d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:17:25 np0005629333 nova_compute[244014]: 2026-02-25 12:17:25.158 244018 DEBUG nova.network.neutron [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:17:25 np0005629333 nova_compute[244014]: 2026-02-25 12:17:25.456 244018 DEBUG nova.network.neutron [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:17:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 325 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.0 MiB/s wr, 292 op/s
Feb 25 07:17:25 np0005629333 nova_compute[244014]: 2026-02-25 12:17:25.914 244018 DEBUG nova.network.neutron [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:17:25 np0005629333 nova_compute[244014]: 2026-02-25 12:17:25.934 244018 DEBUG oslo_concurrency.lockutils [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Releasing lock "refresh_cache-77af7d73-f695-47b4-8ec1-98a3672ff8d8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:17:25 np0005629333 nova_compute[244014]: 2026-02-25 12:17:25.935 244018 DEBUG nova.compute.manager [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 07:17:26 np0005629333 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Feb 25 07:17:26 np0005629333 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 11.130s CPU time.
Feb 25 07:17:26 np0005629333 systemd-machined[210048]: Machine qemu-11-instance-0000000b terminated.
Feb 25 07:17:26 np0005629333 nova_compute[244014]: 2026-02-25 12:17:26.156 244018 INFO nova.virt.libvirt.driver [-] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Instance destroyed successfully.
Feb 25 07:17:26 np0005629333 nova_compute[244014]: 2026-02-25 12:17:26.156 244018 DEBUG nova.objects.instance [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lazy-loading 'resources' on Instance uuid 77af7d73-f695-47b4-8ec1-98a3672ff8d8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:17:26 np0005629333 nova_compute[244014]: 2026-02-25 12:17:26.575 244018 INFO nova.virt.libvirt.driver [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Deleting instance files /var/lib/nova/instances/77af7d73-f695-47b4-8ec1-98a3672ff8d8_del
Feb 25 07:17:26 np0005629333 nova_compute[244014]: 2026-02-25 12:17:26.576 244018 INFO nova.virt.libvirt.driver [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Deletion of /var/lib/nova/instances/77af7d73-f695-47b4-8ec1-98a3672ff8d8_del complete
Feb 25 07:17:26 np0005629333 nova_compute[244014]: 2026-02-25 12:17:26.636 244018 INFO nova.compute.manager [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Took 0.70 seconds to destroy the instance on the hypervisor.
Feb 25 07:17:26 np0005629333 nova_compute[244014]: 2026-02-25 12:17:26.637 244018 DEBUG oslo.service.loopingcall [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 07:17:26 np0005629333 nova_compute[244014]: 2026-02-25 12:17:26.638 244018 DEBUG nova.compute.manager [-] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 07:17:26 np0005629333 nova_compute[244014]: 2026-02-25 12:17:26.638 244018 DEBUG nova.network.neutron [-] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 07:17:27 np0005629333 nova_compute[244014]: 2026-02-25 12:17:27.012 244018 DEBUG nova.network.neutron [-] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:17:27 np0005629333 nova_compute[244014]: 2026-02-25 12:17:27.035 244018 DEBUG nova.network.neutron [-] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:17:27 np0005629333 nova_compute[244014]: 2026-02-25 12:17:27.051 244018 INFO nova.compute.manager [-] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Took 0.41 seconds to deallocate network for instance.
Feb 25 07:17:27 np0005629333 nova_compute[244014]: 2026-02-25 12:17:27.108 244018 DEBUG oslo_concurrency.lockutils [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:17:27 np0005629333 nova_compute[244014]: 2026-02-25 12:17:27.109 244018 DEBUG oslo_concurrency.lockutils [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:17:27 np0005629333 nova_compute[244014]: 2026-02-25 12:17:27.171 244018 DEBUG oslo_concurrency.processutils [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:17:27 np0005629333 nova_compute[244014]: 2026-02-25 12:17:27.423 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021832.4227164, 20c8b8a1-1561-49f5-9fce-af2840195a57 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:17:27 np0005629333 nova_compute[244014]: 2026-02-25 12:17:27.424 244018 INFO nova.compute.manager [-] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] VM Stopped (Lifecycle Event)
Feb 25 07:17:27 np0005629333 nova_compute[244014]: 2026-02-25 12:17:27.447 244018 DEBUG nova.compute.manager [None req-9a9878a5-dc30-44f9-9de6-0cf65d38ef9e - - - - - -] [instance: 20c8b8a1-1561-49f5-9fce-af2840195a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:17:27 np0005629333 nova_compute[244014]: 2026-02-25 12:17:27.455 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:17:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 239 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 7.1 MiB/s wr, 374 op/s
Feb 25 07:17:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:17:27 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1540527969' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:17:27 np0005629333 nova_compute[244014]: 2026-02-25 12:17:27.793 244018 DEBUG oslo_concurrency.processutils [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
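[editor's note] The 0.622 s "ceph df" call above is how the RBD image backend samples pool capacity for the resource tracker, and the ceph-mon audit lines show the same command arriving as client.openstack. A hedged sketch of issuing the same query and reading per-pool usage (JSON field names can vary slightly across Ceph releases):

    # Sketch: run the same "ceph df --format=json" query the driver logs
    # above and pull usage for the "vms" pool out of the JSON reply.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    report = json.loads(out)
    for pool in report["pools"]:
        if pool["name"] == "vms":
            print("vms pool bytes used:", pool["stats"]["bytes_used"])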
Feb 25 07:17:27 np0005629333 nova_compute[244014]: 2026-02-25 12:17:27.799 244018 DEBUG nova.compute.provider_tree [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:17:27 np0005629333 nova_compute[244014]: 2026-02-25 12:17:27.820 244018 DEBUG nova.scheduler.client.report [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:17:27 np0005629333 nova_compute[244014]: 2026-02-25 12:17:27.852 244018 DEBUG oslo_concurrency.lockutils [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:17:27 np0005629333 nova_compute[244014]: 2026-02-25 12:17:27.876 244018 INFO nova.scheduler.client.report [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Deleted allocations for instance 77af7d73-f695-47b4-8ec1-98a3672ff8d8
Feb 25 07:17:27 np0005629333 nova_compute[244014]: 2026-02-25 12:17:27.957 244018 DEBUG oslo_concurrency.lockutils [None req-b589a499-2a4d-4866-a607-d512c1325315 f653b2ec5be5483092804fcd55beab49 2805044e788543068625873119e58bd0 - - default default] Lock "77af7d73-f695-47b4-8ec1-98a3672ff8d8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
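[editor's note] The inventory record at 12:17:27.820 fixes the capacity placement schedules against; for each resource class the usable amount works out as (total - reserved) * allocation_ratio. Worked through with the numbers from this log:

    # Usable capacity implied by the inventory logged above:
    #   usable = (total - reserved) * allocation_ratio
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)   # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB ~52.2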
Feb 25 07:17:28 np0005629333 nova_compute[244014]: 2026-02-25 12:17:28.378 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:17:28 np0005629333 nova_compute[244014]: 2026-02-25 12:17:28.982 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
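[editor's note] The "resending shutdown" line is the clean-shutdown loop at work: an ACPI shutdown is requested, the domain is polled, and the request is re-issued until the guest powers off or a timeout expires (the "shutdown successfully after 13 seconds" line further down is the loop exiting). A rough, illustrative shape of that loop against the libvirt Python bindings; the timings here are placeholders, not nova's defaults:

    # Illustrative poll-and-retry clean shutdown; a sketch, not nova's
    # _clean_shutdown. dom is a libvirt.virDomain for the guest.
    import time
    import libvirt

    def clean_shutdown(dom, timeout=60, retry_interval=10):
        dom.shutdown()                        # ask the guest to power off
        for _ in range(timeout // retry_interval):
            time.sleep(retry_interval)
            state, _reason = dom.state()
            if state == libvirt.VIR_DOMAIN_SHUTOFF:
                return True                   # "Instance shutdown successfully"
            dom.shutdown()                    # "resending shutdown", as logged
        return False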
Feb 25 07:17:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:17:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 239 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.2 MiB/s wr, 219 op/s
Feb 25 07:17:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:17:30
Feb 25 07:17:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:17:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:17:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'images', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'backups', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log']
Feb 25 07:17:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:17:31 np0005629333 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Feb 25 07:17:31 np0005629333 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000a.scope: Consumed 12.655s CPU time.
Feb 25 07:17:31 np0005629333 systemd-machined[210048]: Machine qemu-12-instance-0000000a terminated.
Feb 25 07:17:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 227 MiB data, 414 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.3 MiB/s wr, 242 op/s
Feb 25 07:17:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:17:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:17:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:17:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:17:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:17:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:17:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:17:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:17:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:17:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:17:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:17:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:17:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:17:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:17:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:17:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:17:32 np0005629333 nova_compute[244014]: 2026-02-25 12:17:31.999 244018 INFO nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance shutdown successfully after 13 seconds.
Feb 25 07:17:32 np0005629333 nova_compute[244014]: 2026-02-25 12:17:32.007 244018 INFO nova.virt.libvirt.driver [-] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance destroyed successfully.
Feb 25 07:17:32 np0005629333 nova_compute[244014]: 2026-02-25 12:17:32.014 244018 INFO nova.virt.libvirt.driver [-] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance destroyed successfully.
Feb 25 07:17:32 np0005629333 nova_compute[244014]: 2026-02-25 12:17:32.459 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:17:32 np0005629333 nova_compute[244014]: 2026-02-25 12:17:32.605 244018 INFO nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Deleting instance files /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308_del
Feb 25 07:17:32 np0005629333 nova_compute[244014]: 2026-02-25 12:17:32.606 244018 INFO nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Deletion of /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308_del complete
Feb 25 07:17:32 np0005629333 nova_compute[244014]: 2026-02-25 12:17:32.749 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:17:32 np0005629333 nova_compute[244014]: 2026-02-25 12:17:32.750 244018 INFO nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Creating image(s)
Feb 25 07:17:32 np0005629333 nova_compute[244014]: 2026-02-25 12:17:32.781 244018 DEBUG nova.storage.rbd_utils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:17:32 np0005629333 nova_compute[244014]: 2026-02-25 12:17:32.819 244018 DEBUG nova.storage.rbd_utils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:17:32 np0005629333 nova_compute[244014]: 2026-02-25 12:17:32.849 244018 DEBUG nova.storage.rbd_utils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:17:32 np0005629333 nova_compute[244014]: 2026-02-25 12:17:32.853 244018 DEBUG oslo_concurrency.processutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:17:32 np0005629333 nova_compute[244014]: 2026-02-25 12:17:32.927 244018 DEBUG oslo_concurrency.processutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
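[editor's note] The qemu-img probe above runs under oslo_concurrency.prlimit so the child process is capped at 1 GiB of address space and 30 s of CPU time. Roughly the same bounded call can be made through processutils directly; a sketch, assuming oslo.concurrency's ProcessLimits field names:

    # Sketch: bounded "qemu-img info" probe via oslo.concurrency, matching
    # the --as/--cpu limits in the command logged above.
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1073741824,   # --as=1073741824
        cpu_time=30)                # --cpu=30
    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C", "qemu-img", "info",
        "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6",
        "--force-share", "--output=json",
        prlimit=limits)
    print(out)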
Feb 25 07:17:32 np0005629333 nova_compute[244014]: 2026-02-25 12:17:32.929 244018 DEBUG oslo_concurrency.lockutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:17:32 np0005629333 nova_compute[244014]: 2026-02-25 12:17:32.930 244018 DEBUG oslo_concurrency.lockutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:17:32 np0005629333 nova_compute[244014]: 2026-02-25 12:17:32.930 244018 DEBUG oslo_concurrency.lockutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:17:32 np0005629333 nova_compute[244014]: 2026-02-25 12:17:32.961 244018 DEBUG nova.storage.rbd_utils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:17:32 np0005629333 nova_compute[244014]: 2026-02-25 12:17:32.966 244018 DEBUG oslo_concurrency.processutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:17:33 np0005629333 nova_compute[244014]: 2026-02-25 12:17:33.380 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:17:33 np0005629333 nova_compute[244014]: 2026-02-25 12:17:33.410 244018 DEBUG oslo_concurrency.processutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:17:33 np0005629333 nova_compute[244014]: 2026-02-25 12:17:33.486 244018 DEBUG nova.storage.rbd_utils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] resizing rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
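[editor's note] Between 12:17:32.966 and 12:17:33.486 the RBD backend imports the cached base image into the vms pool and then grows it to the flavor's 1 GiB root disk. The same two steps at the CLI level (image names and paths taken from this log; error handling omitted):

    # Sketch of the import-then-resize flow logged above, via the rbd CLI.
    import subprocess

    BASE = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
    IMAGE = "db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk"
    CEPH = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    subprocess.check_call(["rbd", "import", "--pool", "vms",
                           BASE, IMAGE, "--image-format=2", *CEPH])
    # Grow to the flavor root disk: 1G == the 1073741824 bytes in the log.
    subprocess.check_call(["rbd", "resize", "--pool", "vms",
                           IMAGE, "--size", "1G", *CEPH])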
Feb 25 07:17:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 213 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 4.1 MiB/s wr, 187 op/s
Feb 25 07:17:33 np0005629333 nova_compute[244014]: 2026-02-25 12:17:33.630 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:17:33 np0005629333 nova_compute[244014]: 2026-02-25 12:17:33.631 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Ensure instance console log exists: /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:17:33 np0005629333 nova_compute[244014]: 2026-02-25 12:17:33.632 244018 DEBUG oslo_concurrency.lockutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:17:33 np0005629333 nova_compute[244014]: 2026-02-25 12:17:33.633 244018 DEBUG oslo_concurrency.lockutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:17:33 np0005629333 nova_compute[244014]: 2026-02-25 12:17:33.633 244018 DEBUG oslo_concurrency.lockutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:17:33 np0005629333 nova_compute[244014]: 2026-02-25 12:17:33.636 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 07:17:33 np0005629333 nova_compute[244014]: 2026-02-25 12:17:33.640 244018 WARNING nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Feb 25 07:17:33 np0005629333 nova_compute[244014]: 2026-02-25 12:17:33.646 244018 DEBUG nova.virt.libvirt.host [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:17:33 np0005629333 nova_compute[244014]: 2026-02-25 12:17:33.646 244018 DEBUG nova.virt.libvirt.host [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:17:33 np0005629333 nova_compute[244014]: 2026-02-25 12:17:33.650 244018 DEBUG nova.virt.libvirt.host [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 07:17:33 np0005629333 nova_compute[244014]: 2026-02-25 12:17:33.651 244018 DEBUG nova.virt.libvirt.host [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 07:17:33 np0005629333 nova_compute[244014]: 2026-02-25 12:17:33.651 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:17:33 np0005629333 nova_compute[244014]: 2026-02-25 12:17:33.652 244018 DEBUG nova.virt.hardware [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 07:17:33 np0005629333 nova_compute[244014]: 2026-02-25 12:17:33.653 244018 DEBUG nova.virt.hardware [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 07:17:33 np0005629333 nova_compute[244014]: 2026-02-25 12:17:33.653 244018 DEBUG nova.virt.hardware [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 07:17:33 np0005629333 nova_compute[244014]: 2026-02-25 12:17:33.653 244018 DEBUG nova.virt.hardware [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 07:17:33 np0005629333 nova_compute[244014]: 2026-02-25 12:17:33.654 244018 DEBUG nova.virt.hardware [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 07:17:33 np0005629333 nova_compute[244014]: 2026-02-25 12:17:33.654 244018 DEBUG nova.virt.hardware [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 07:17:33 np0005629333 nova_compute[244014]: 2026-02-25 12:17:33.654 244018 DEBUG nova.virt.hardware [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 07:17:33 np0005629333 nova_compute[244014]: 2026-02-25 12:17:33.655 244018 DEBUG nova.virt.hardware [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 07:17:33 np0005629333 nova_compute[244014]: 2026-02-25 12:17:33.655 244018 DEBUG nova.virt.hardware [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 07:17:33 np0005629333 nova_compute[244014]: 2026-02-25 12:17:33.656 244018 DEBUG nova.virt.hardware [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 07:17:33 np0005629333 nova_compute[244014]: 2026-02-25 12:17:33.656 244018 DEBUG nova.virt.hardware [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
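[editor's note] For one vCPU with no flavor or image constraints, the topology walk above can only produce sockets=1, cores=1, threads=1. A toy enumeration reproducing the idea (a simplification for illustration, not nova.virt.hardware's code):

    # Toy "possible topologies": every (sockets, cores, threads)
    # factorization of the vCPU count within the 65536 default limits.
    def possible_topologies(vcpus, max_s=65536, max_c=65536, max_t=65536):
        for s in range(1, min(vcpus, max_s) + 1):
            for c in range(1, min(vcpus, max_c) + 1):
                for t in range(1, min(vcpus, max_t) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)], matching the log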
Feb 25 07:17:33 np0005629333 nova_compute[244014]: 2026-02-25 12:17:33.657 244018 DEBUG nova.objects.instance [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Lazy-loading 'vcpu_model' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:17:33 np0005629333 nova_compute[244014]: 2026-02-25 12:17:33.687 244018 DEBUG oslo_concurrency.processutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:17:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:17:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:17:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/715058366' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:17:34 np0005629333 nova_compute[244014]: 2026-02-25 12:17:34.295 244018 DEBUG oslo_concurrency.processutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:17:34 np0005629333 nova_compute[244014]: 2026-02-25 12:17:34.331 244018 DEBUG nova.storage.rbd_utils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:17:34 np0005629333 nova_compute[244014]: 2026-02-25 12:17:34.337 244018 DEBUG oslo_concurrency.processutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:17:34 np0005629333 nova_compute[244014]: 2026-02-25 12:17:34.733 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021839.7315087, 9f78f954-3472-4631-a0ae-b945b9c26f5a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:17:34 np0005629333 nova_compute[244014]: 2026-02-25 12:17:34.734 244018 INFO nova.compute.manager [-] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] VM Stopped (Lifecycle Event)
Feb 25 07:17:34 np0005629333 nova_compute[244014]: 2026-02-25 12:17:34.773 244018 DEBUG nova.compute.manager [None req-e7173a5e-71e9-412e-b5e2-21467220b878 - - - - - -] [instance: 9f78f954-3472-4631-a0ae-b945b9c26f5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:17:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:17:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1423179979' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:17:34 np0005629333 nova_compute[244014]: 2026-02-25 12:17:34.916 244018 DEBUG oslo_concurrency.processutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:17:34 np0005629333 nova_compute[244014]: 2026-02-25 12:17:34.920 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:17:34 np0005629333 nova_compute[244014]:  <uuid>db0fc9fa-1fc0-4334-96f9-2205fa53e308</uuid>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:  <name>instance-0000000a</name>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServersAdmin275Test-server-1821961636</nova:name>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:17:33</nova:creationTime>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:17:34 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:        <nova:user uuid="40cba1ddb2af4adea0f03477ccf87402">tempest-ServersAdmin275Test-694282747-project-member</nova:user>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:        <nova:project uuid="03ef50f1c8db4f9d9f7512943746f9e7">tempest-ServersAdmin275Test-694282747</nova:project>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      <nova:ports/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      <entry name="serial">db0fc9fa-1fc0-4334-96f9-2205fa53e308</entry>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      <entry name="uuid">db0fc9fa-1fc0-4334-96f9-2205fa53e308</entry>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk">
Feb 25 07:17:34 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:17:34 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config">
Feb 25 07:17:34 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:17:34 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/console.log" append="off"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:17:34 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:17:34 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:17:34 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:17:34 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
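[editor's note] Both disks in the generated XML above point at the Ceph monitor on 192.168.122.100:6789 (the address the "ceph mon dump" calls returned) and authenticate with the client.openstack cephx secret. A minimal sketch of emitting one such rbd-backed <disk> element with the standard library (an illustrative fragment, not nova's config generator):

    # Sketch: build an rbd-backed <disk> element like those in the XML above.
    import xml.etree.ElementTree as ET

    disk = ET.Element("disk", type="network", device="disk")
    ET.SubElement(disk, "driver", type="raw", cache="none")
    src = ET.SubElement(disk, "source", protocol="rbd",
                        name="vms/db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk")
    ET.SubElement(src, "host", name="192.168.122.100", port="6789")
    auth = ET.SubElement(disk, "auth", username="openstack")
    ET.SubElement(auth, "secret", type="ceph",
                  uuid="8ac33163-6221-5d58-9a39-8b6933fe7762")
    ET.SubElement(disk, "target", dev="vda", bus="virtio")
    print(ET.tostring(disk, encoding="unicode"))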
Feb 25 07:17:34 np0005629333 nova_compute[244014]: 2026-02-25 12:17:34.973 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:17:34 np0005629333 nova_compute[244014]: 2026-02-25 12:17:34.974 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:17:34 np0005629333 nova_compute[244014]: 2026-02-25 12:17:34.977 244018 INFO nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Using config drive
Feb 25 07:17:35 np0005629333 nova_compute[244014]: 2026-02-25 12:17:35.001 244018 DEBUG nova.storage.rbd_utils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:17:35 np0005629333 nova_compute[244014]: 2026-02-25 12:17:35.025 244018 DEBUG nova.objects.instance [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Lazy-loading 'ec2_ids' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:17:35 np0005629333 nova_compute[244014]: 2026-02-25 12:17:35.063 244018 DEBUG nova.objects.instance [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Lazy-loading 'keypairs' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:17:35 np0005629333 nova_compute[244014]: 2026-02-25 12:17:35.470 244018 INFO nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Creating config drive at /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config
Feb 25 07:17:35 np0005629333 nova_compute[244014]: 2026-02-25 12:17:35.477 244018 DEBUG oslo_concurrency.processutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpq954wmnq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:17:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 213 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 235 KiB/s rd, 2.2 MiB/s wr, 117 op/s
Feb 25 07:17:35 np0005629333 nova_compute[244014]: 2026-02-25 12:17:35.605 244018 DEBUG oslo_concurrency.processutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpq954wmnq" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:17:35 np0005629333 nova_compute[244014]: 2026-02-25 12:17:35.639 244018 DEBUG nova.storage.rbd_utils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] rbd image db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:17:35 np0005629333 nova_compute[244014]: 2026-02-25 12:17:35.643 244018 DEBUG oslo_concurrency.processutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:17:35 np0005629333 nova_compute[244014]: 2026-02-25 12:17:35.813 244018 DEBUG oslo_concurrency.processutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config db0fc9fa-1fc0-4334-96f9-2205fa53e308_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:17:35 np0005629333 nova_compute[244014]: 2026-02-25 12:17:35.815 244018 INFO nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Deleting local config drive /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308/disk.config because it was imported into RBD.
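The entries above capture the whole config-drive round trip on an RBD-backed host: mkisofs builds a config-2 ISO from a temp directory, rbd import copies it into the vms pool under the name Nova expects, and the local file is then deleted. A minimal stand-alone sketch of the same sequence, using plain subprocess rather than Nova's oslo_concurrency wrappers (pool, client id, flags, and paths are taken from the log; build_and_import_config_drive is a hypothetical name and the publisher string is shortened):

    import os
    import subprocess

    def build_and_import_config_drive(instance_uuid, metadata_dir,
                                      pool="vms", client="openstack",
                                      conf="/etc/ceph/ceph.conf"):
        iso = f"/var/lib/nova/instances/{instance_uuid}/disk.config"
        os.makedirs(os.path.dirname(iso), exist_ok=True)
        # Build the ISO with the same flags as the logged mkisofs command.
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
             "-allow-multidot", "-l", "-publisher", "OpenStack Compute",
             "-quiet", "-J", "-r", "-V", "config-2", metadata_dir],
            check=True)
        # Import into RBD, then drop the local copy, matching the
        # "Deleting local config drive ... imported into RBD" line.
        subprocess.run(
            ["rbd", "import", "--pool", pool, iso,
             f"{instance_uuid}_disk.config", "--image-format=2",
             "--id", client, "--conf", conf],
            check=True)
        os.remove(iso)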
Feb 25 07:17:35 np0005629333 systemd-machined[210048]: New machine qemu-14-instance-0000000a.
Feb 25 07:17:35 np0005629333 systemd[1]: Started Virtual Machine qemu-14-instance-0000000a.
Feb 25 07:17:36 np0005629333 nova_compute[244014]: 2026-02-25 12:17:36.327 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for db0fc9fa-1fc0-4334-96f9-2205fa53e308 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 07:17:36 np0005629333 nova_compute[244014]: 2026-02-25 12:17:36.328 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021856.327064, db0fc9fa-1fc0-4334-96f9-2205fa53e308 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:17:36 np0005629333 nova_compute[244014]: 2026-02-25 12:17:36.329 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] VM Resumed (Lifecycle Event)
Feb 25 07:17:36 np0005629333 nova_compute[244014]: 2026-02-25 12:17:36.334 244018 DEBUG nova.compute.manager [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 07:17:36 np0005629333 nova_compute[244014]: 2026-02-25 12:17:36.334 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 07:17:36 np0005629333 nova_compute[244014]: 2026-02-25 12:17:36.339 244018 INFO nova.virt.libvirt.driver [-] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance spawned successfully.
Feb 25 07:17:36 np0005629333 nova_compute[244014]: 2026-02-25 12:17:36.340 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 07:17:36 np0005629333 nova_compute[244014]: 2026-02-25 12:17:36.358 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:17:36 np0005629333 nova_compute[244014]: 2026-02-25 12:17:36.362 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:17:36 np0005629333 nova_compute[244014]: 2026-02-25 12:17:36.463 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Feb 25 07:17:36 np0005629333 nova_compute[244014]: 2026-02-25 12:17:36.463 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021856.3272145, db0fc9fa-1fc0-4334-96f9-2205fa53e308 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:17:36 np0005629333 nova_compute[244014]: 2026-02-25 12:17:36.464 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] VM Started (Lifecycle Event)
Feb 25 07:17:36 np0005629333 nova_compute[244014]: 2026-02-25 12:17:36.469 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:17:36 np0005629333 nova_compute[244014]: 2026-02-25 12:17:36.470 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:17:36 np0005629333 nova_compute[244014]: 2026-02-25 12:17:36.470 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:17:36 np0005629333 nova_compute[244014]: 2026-02-25 12:17:36.471 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:17:36 np0005629333 nova_compute[244014]: 2026-02-25 12:17:36.472 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:17:36 np0005629333 nova_compute[244014]: 2026-02-25 12:17:36.472 244018 DEBUG nova.virt.libvirt.driver [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:17:36 np0005629333 nova_compute[244014]: 2026-02-25 12:17:36.551 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:17:36 np0005629333 nova_compute[244014]: 2026-02-25 12:17:36.555 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:17:36 np0005629333 nova_compute[244014]: 2026-02-25 12:17:36.755 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
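The paired "Resumed"/"Started" events above both end the same way: sync_power_state sees task_state rebuild_spawning and declines to touch the instance. A rough sketch of that guard, with a plain dict standing in for the instance record (Nova's real handler covers many more cases; this only illustrates the skip-on-pending-task rule visible in the log):

    def sync_power_state(instance, vm_power_state):
        # As in the log: never reconcile power state while a task is in
        # flight; the in-progress operation owns the instance.
        if instance["task_state"] is not None:
            print(f"pending task ({instance['task_state']}), skipping sync")
            return
        if instance["power_state"] != vm_power_state:
            instance["power_state"] = vm_power_state  # persisted to the DB in Nova

    sync_power_state({"vm_state": "active",
                      "task_state": "rebuild_spawning",
                      "power_state": 1},
                     vm_power_state=1)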
Feb 25 07:17:36 np0005629333 nova_compute[244014]: 2026-02-25 12:17:36.764 244018 DEBUG nova.compute.manager [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:17:36 np0005629333 nova_compute[244014]: 2026-02-25 12:17:36.843 244018 DEBUG oslo_concurrency.lockutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:17:36 np0005629333 nova_compute[244014]: 2026-02-25 12:17:36.844 244018 DEBUG oslo_concurrency.lockutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:17:36 np0005629333 nova_compute[244014]: 2026-02-25 12:17:36.844 244018 DEBUG nova.objects.instance [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 25 07:17:36 np0005629333 nova_compute[244014]: 2026-02-25 12:17:36.903 244018 DEBUG oslo_concurrency.lockutils [None req-058c0f50-bc41-49e2-878a-cd2730578fc2 101ac4ab722c48ed8aba04bb6649e603 3c3688e740c44672be09082deca8f251 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:17:37 np0005629333 nova_compute[244014]: 2026-02-25 12:17:37.463 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:17:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 771 KiB/s rd, 3.9 MiB/s wr, 195 op/s
Feb 25 07:17:38 np0005629333 nova_compute[244014]: 2026-02-25 12:17:38.114 244018 DEBUG oslo_concurrency.lockutils [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Acquiring lock "db0fc9fa-1fc0-4334-96f9-2205fa53e308" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:17:38 np0005629333 nova_compute[244014]: 2026-02-25 12:17:38.115 244018 DEBUG oslo_concurrency.lockutils [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "db0fc9fa-1fc0-4334-96f9-2205fa53e308" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:17:38 np0005629333 nova_compute[244014]: 2026-02-25 12:17:38.115 244018 DEBUG oslo_concurrency.lockutils [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Acquiring lock "db0fc9fa-1fc0-4334-96f9-2205fa53e308-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:17:38 np0005629333 nova_compute[244014]: 2026-02-25 12:17:38.116 244018 DEBUG oslo_concurrency.lockutils [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "db0fc9fa-1fc0-4334-96f9-2205fa53e308-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:17:38 np0005629333 nova_compute[244014]: 2026-02-25 12:17:38.116 244018 DEBUG oslo_concurrency.lockutils [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "db0fc9fa-1fc0-4334-96f9-2205fa53e308-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
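The acquire/release pairs logged here come from oslo.concurrency's lockutils; the "-events" suffix is a separate per-instance lock protecting the instance-event dict, distinct from the lock on the instance UUID itself. The same pattern in miniature (the lock name is copied from the log; the function body is a placeholder):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("db0fc9fa-1fc0-4334-96f9-2205fa53e308-events")
    def _clear_events():
        # Runs with the per-instance events lock held; the surrounding
        # DEBUG lines report how long the lock was waited on and held.
        pass

    _clear_events()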
Feb 25 07:17:38 np0005629333 nova_compute[244014]: 2026-02-25 12:17:38.118 244018 INFO nova.compute.manager [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Terminating instance
Feb 25 07:17:38 np0005629333 nova_compute[244014]: 2026-02-25 12:17:38.120 244018 DEBUG oslo_concurrency.lockutils [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Acquiring lock "refresh_cache-db0fc9fa-1fc0-4334-96f9-2205fa53e308" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:17:38 np0005629333 nova_compute[244014]: 2026-02-25 12:17:38.120 244018 DEBUG oslo_concurrency.lockutils [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Acquired lock "refresh_cache-db0fc9fa-1fc0-4334-96f9-2205fa53e308" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:17:38 np0005629333 nova_compute[244014]: 2026-02-25 12:17:38.121 244018 DEBUG nova.network.neutron [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:17:38 np0005629333 nova_compute[244014]: 2026-02-25 12:17:38.292 244018 DEBUG nova.network.neutron [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:17:38 np0005629333 nova_compute[244014]: 2026-02-25 12:17:38.381 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:17:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:17:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 620 KiB/s rd, 1.9 MiB/s wr, 113 op/s
Feb 25 07:17:39 np0005629333 nova_compute[244014]: 2026-02-25 12:17:39.846 244018 DEBUG nova.network.neutron [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:17:39 np0005629333 nova_compute[244014]: 2026-02-25 12:17:39.869 244018 DEBUG oslo_concurrency.lockutils [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Releasing lock "refresh_cache-db0fc9fa-1fc0-4334-96f9-2205fa53e308" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:17:39 np0005629333 nova_compute[244014]: 2026-02-25 12:17:39.869 244018 DEBUG nova.compute.manager [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 07:17:39 np0005629333 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Feb 25 07:17:39 np0005629333 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000a.scope: Consumed 4.118s CPU time.
Feb 25 07:17:39 np0005629333 systemd-machined[210048]: Machine qemu-14-instance-0000000a terminated.
Feb 25 07:17:40 np0005629333 nova_compute[244014]: 2026-02-25 12:17:40.090 244018 INFO nova.virt.libvirt.driver [-] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance destroyed successfully.
Feb 25 07:17:40 np0005629333 nova_compute[244014]: 2026-02-25 12:17:40.091 244018 DEBUG nova.objects.instance [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lazy-loading 'resources' on Instance uuid db0fc9fa-1fc0-4334-96f9-2205fa53e308 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:17:40 np0005629333 nova_compute[244014]: 2026-02-25 12:17:40.494 244018 INFO nova.virt.libvirt.driver [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Deleting instance files /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308_del
Feb 25 07:17:40 np0005629333 nova_compute[244014]: 2026-02-25 12:17:40.495 244018 INFO nova.virt.libvirt.driver [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Deletion of /var/lib/nova/instances/db0fc9fa-1fc0-4334-96f9-2205fa53e308_del complete
Feb 25 07:17:40 np0005629333 nova_compute[244014]: 2026-02-25 12:17:40.548 244018 INFO nova.compute.manager [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Took 0.68 seconds to destroy the instance on the hypervisor.
Feb 25 07:17:40 np0005629333 nova_compute[244014]: 2026-02-25 12:17:40.549 244018 DEBUG oslo.service.loopingcall [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 07:17:40 np0005629333 nova_compute[244014]: 2026-02-25 12:17:40.550 244018 DEBUG nova.compute.manager [-] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 07:17:40 np0005629333 nova_compute[244014]: 2026-02-25 12:17:40.550 244018 DEBUG nova.network.neutron [-] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 07:17:40 np0005629333 nova_compute[244014]: 2026-02-25 12:17:40.729 244018 DEBUG nova.network.neutron [-] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:17:40 np0005629333 nova_compute[244014]: 2026-02-25 12:17:40.783 244018 DEBUG nova.network.neutron [-] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:17:40 np0005629333 nova_compute[244014]: 2026-02-25 12:17:40.801 244018 INFO nova.compute.manager [-] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Took 0.25 seconds to deallocate network for instance.
Feb 25 07:17:40 np0005629333 nova_compute[244014]: 2026-02-25 12:17:40.851 244018 DEBUG oslo_concurrency.lockutils [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:17:40 np0005629333 nova_compute[244014]: 2026-02-25 12:17:40.851 244018 DEBUG oslo_concurrency.lockutils [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:17:40 np0005629333 nova_compute[244014]: 2026-02-25 12:17:40.912 244018 DEBUG oslo_concurrency.processutils [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:17:41 np0005629333 nova_compute[244014]: 2026-02-25 12:17:41.155 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021846.1541154, 77af7d73-f695-47b4-8ec1-98a3672ff8d8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:17:41 np0005629333 nova_compute[244014]: 2026-02-25 12:17:41.156 244018 INFO nova.compute.manager [-] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] VM Stopped (Lifecycle Event)
Feb 25 07:17:41 np0005629333 nova_compute[244014]: 2026-02-25 12:17:41.182 244018 DEBUG nova.compute.manager [None req-56a48e00-c165-4210-be24-14fec9ae075c - - - - - -] [instance: 77af7d73-f695-47b4-8ec1-98a3672ff8d8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:17:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:17:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3692699550' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:17:41 np0005629333 nova_compute[244014]: 2026-02-25 12:17:41.431 244018 DEBUG oslo_concurrency.processutils [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:17:41 np0005629333 nova_compute[244014]: 2026-02-25 12:17:41.436 244018 DEBUG nova.compute.provider_tree [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:17:41 np0005629333 nova_compute[244014]: 2026-02-25 12:17:41.456 244018 DEBUG nova.scheduler.client.report [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:17:41 np0005629333 nova_compute[244014]: 2026-02-25 12:17:41.483 244018 DEBUG oslo_concurrency.lockutils [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:17:41 np0005629333 nova_compute[244014]: 2026-02-25 12:17:41.505 244018 INFO nova.scheduler.client.report [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Deleted allocations for instance db0fc9fa-1fc0-4334-96f9-2205fa53e308
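The inventory dict logged above is what the resource tracker reports to Placement, with DISK_GB derived from the preceding ceph df call against the vms pool's cluster. Schedulable capacity per resource class follows the usual Placement arithmetic, (total - reserved) * allocation_ratio, which for the logged numbers works out as below (a quick check, not Nova code):

    inventory = {
        # copied from the "Inventory has not changed" line above
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2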
Feb 25 07:17:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 178 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.9 MiB/s wr, 156 op/s
Feb 25 07:17:41 np0005629333 nova_compute[244014]: 2026-02-25 12:17:41.565 244018 DEBUG oslo_concurrency.lockutils [None req-41e1371a-987c-4e91-82e3-8ff8820ba0de 40cba1ddb2af4adea0f03477ccf87402 03ef50f1c8db4f9d9f7512943746f9e7 - - default default] Lock "db0fc9fa-1fc0-4334-96f9-2205fa53e308" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.450s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:17:41 np0005629333 nova_compute[244014]: 2026-02-25 12:17:41.649 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:17:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:17:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:17:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:17:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:17:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00015022157419548375 of space, bias 1.0, pg target 0.04506647225864512 quantized to 32 (current 32)
Feb 25 07:17:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:17:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:17:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:17:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:17:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:17:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00248968974604864 of space, bias 1.0, pg target 0.746906923814592 quantized to 32 (current 32)
Feb 25 07:17:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:17:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.436791589578832e-07 of space, bias 4.0, pg target 0.0010124149907494598 quantized to 16 (current 16)
Feb 25 07:17:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:17:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:17:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:17:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:17:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:17:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:17:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:17:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:17:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:17:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
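Each pg_autoscaler line above fits one formula: pg target = usage ratio * bias * a cluster-wide PG budget, quantized to a power of two (and left at "current 32" when the change would be too small to act on). The budget implied by the logged ratios is exactly 300, consistent with the default mon_target_pg_per_osd of 100 on three OSDs, though that split is an inference from the 60 GiB cluster. Two of the logged targets reproduce from the logged ratios:

    pools = {
        # name: (usage_ratio, bias), copied from the log lines above
        "images":             (0.00248968974604864,   1.0),
        "cephfs.cephfs.meta": (8.436791589578832e-07, 4.0),
    }
    PG_BUDGET = 300  # inferred; e.g. 100 target PGs per OSD * 3 OSDs
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * PG_BUDGET)
    # images             ~0.746906923814592    (logged pg target)
    # cephfs.cephfs.meta ~0.0010124149907494598 (logged pg target)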
Feb 25 07:17:42 np0005629333 nova_compute[244014]: 2026-02-25 12:17:42.468 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:17:43 np0005629333 nova_compute[244014]: 2026-02-25 12:17:43.383 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:17:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 153 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 163 op/s
Feb 25 07:17:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:17:44 np0005629333 nova_compute[244014]: 2026-02-25 12:17:44.802 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Acquiring lock "4c768bc2-a711-4179-b12f-604509e47856" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:17:44 np0005629333 nova_compute[244014]: 2026-02-25 12:17:44.803 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:17:44 np0005629333 nova_compute[244014]: 2026-02-25 12:17:44.826 244018 DEBUG nova.compute.manager [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:17:44 np0005629333 nova_compute[244014]: 2026-02-25 12:17:44.886 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:17:44 np0005629333 nova_compute[244014]: 2026-02-25 12:17:44.887 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:17:44 np0005629333 nova_compute[244014]: 2026-02-25 12:17:44.895 244018 DEBUG nova.virt.hardware [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:17:44 np0005629333 nova_compute[244014]: 2026-02-25 12:17:44.895 244018 INFO nova.compute.claims [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:17:45 np0005629333 nova_compute[244014]: 2026-02-25 12:17:45.003 244018 DEBUG oslo_concurrency.processutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:17:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:17:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/555895963' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:17:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 153 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 151 op/s
Feb 25 07:17:45 np0005629333 nova_compute[244014]: 2026-02-25 12:17:45.560 244018 DEBUG oslo_concurrency.processutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:17:45 np0005629333 nova_compute[244014]: 2026-02-25 12:17:45.566 244018 DEBUG nova.compute.provider_tree [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:17:45 np0005629333 podman[257993]: 2026-02-25 12:17:45.73174287 +0000 UTC m=+0.071848228 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Feb 25 07:17:45 np0005629333 nova_compute[244014]: 2026-02-25 12:17:45.764 244018 DEBUG nova.scheduler.client.report [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:17:45 np0005629333 podman[257994]: 2026-02-25 12:17:45.773566504 +0000 UTC m=+0.113978780 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 07:17:45 np0005629333 nova_compute[244014]: 2026-02-25 12:17:45.842 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:17:45 np0005629333 nova_compute[244014]: 2026-02-25 12:17:45.843 244018 DEBUG nova.compute.manager [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:17:45 np0005629333 nova_compute[244014]: 2026-02-25 12:17:45.903 244018 DEBUG nova.compute.manager [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:17:45 np0005629333 nova_compute[244014]: 2026-02-25 12:17:45.904 244018 DEBUG nova.network.neutron [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:17:45 np0005629333 nova_compute[244014]: 2026-02-25 12:17:45.928 244018 INFO nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:17:45 np0005629333 nova_compute[244014]: 2026-02-25 12:17:45.947 244018 DEBUG nova.compute.manager [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:17:46 np0005629333 nova_compute[244014]: 2026-02-25 12:17:46.048 244018 DEBUG nova.compute.manager [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:17:46 np0005629333 nova_compute[244014]: 2026-02-25 12:17:46.050 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:17:46 np0005629333 nova_compute[244014]: 2026-02-25 12:17:46.051 244018 INFO nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Creating image(s)
Feb 25 07:17:46 np0005629333 nova_compute[244014]: 2026-02-25 12:17:46.082 244018 DEBUG nova.storage.rbd_utils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] rbd image 4c768bc2-a711-4179-b12f-604509e47856_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:17:46 np0005629333 nova_compute[244014]: 2026-02-25 12:17:46.115 244018 DEBUG nova.storage.rbd_utils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] rbd image 4c768bc2-a711-4179-b12f-604509e47856_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:17:46 np0005629333 nova_compute[244014]: 2026-02-25 12:17:46.142 244018 DEBUG nova.storage.rbd_utils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] rbd image 4c768bc2-a711-4179-b12f-604509e47856_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:17:46 np0005629333 nova_compute[244014]: 2026-02-25 12:17:46.146 244018 DEBUG oslo_concurrency.processutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:17:46 np0005629333 nova_compute[244014]: 2026-02-25 12:17:46.208 244018 DEBUG oslo_concurrency.processutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:17:46 np0005629333 nova_compute[244014]: 2026-02-25 12:17:46.209 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:17:46 np0005629333 nova_compute[244014]: 2026-02-25 12:17:46.210 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:17:46 np0005629333 nova_compute[244014]: 2026-02-25 12:17:46.211 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:17:46 np0005629333 nova_compute[244014]: 2026-02-25 12:17:46.239 244018 DEBUG nova.storage.rbd_utils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] rbd image 4c768bc2-a711-4179-b12f-604509e47856_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:17:46 np0005629333 nova_compute[244014]: 2026-02-25 12:17:46.242 244018 DEBUG oslo_concurrency.processutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 4c768bc2-a711-4179-b12f-604509e47856_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:17:46 np0005629333 nova_compute[244014]: 2026-02-25 12:17:46.432 244018 DEBUG nova.policy [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '38c6d2e6875a408687bd85066c826987', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c7e518f2e6c84550b07b36ea68922707', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:17:46 np0005629333 nova_compute[244014]: 2026-02-25 12:17:46.516 244018 DEBUG oslo_concurrency.processutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 4c768bc2-a711-4179-b12f-604509e47856_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:17:46 np0005629333 nova_compute[244014]: 2026-02-25 12:17:46.592 244018 DEBUG nova.storage.rbd_utils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] resizing rbd image 4c768bc2-a711-4179-b12f-604509e47856_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 07:17:46 np0005629333 nova_compute[244014]: 2026-02-25 12:17:46.713 244018 DEBUG nova.objects.instance [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lazy-loading 'migration_context' on Instance uuid 4c768bc2-a711-4179-b12f-604509e47856 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:17:46 np0005629333 nova_compute[244014]: 2026-02-25 12:17:46.729 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:17:46 np0005629333 nova_compute[244014]: 2026-02-25 12:17:46.730 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Ensure instance console log exists: /var/lib/nova/instances/4c768bc2-a711-4179-b12f-604509e47856/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:17:46 np0005629333 nova_compute[244014]: 2026-02-25 12:17:46.730 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:17:46 np0005629333 nova_compute[244014]: 2026-02-25 12:17:46.731 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:17:46 np0005629333 nova_compute[244014]: 2026-02-25 12:17:46.731 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:17:47 np0005629333 nova_compute[244014]: 2026-02-25 12:17:47.317 244018 DEBUG nova.network.neutron [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Successfully created port: e4e15a18-1007-4bd1-80a7-b8d2cb64f15c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:17:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:17:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3631058678' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:17:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:17:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3631058678' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 07:17:47 np0005629333 nova_compute[244014]: 2026-02-25 12:17:47.472 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:17:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 178 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 155 op/s
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.073284) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021868073311, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1603, "num_deletes": 251, "total_data_size": 2397811, "memory_usage": 2437872, "flush_reason": "Manual Compaction"}
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021868097979, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2361794, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19411, "largest_seqno": 21013, "table_properties": {"data_size": 2354643, "index_size": 4094, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15851, "raw_average_key_size": 20, "raw_value_size": 2339922, "raw_average_value_size": 2969, "num_data_blocks": 185, "num_entries": 788, "num_filter_entries": 788, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772021714, "oldest_key_time": 1772021714, "file_creation_time": 1772021868, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 24766 microseconds, and 3583 cpu microseconds.
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:17:48 np0005629333 nova_compute[244014]: 2026-02-25 12:17:48.190 244018 DEBUG nova.network.neutron [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Successfully updated port: e4e15a18-1007-4bd1-80a7-b8d2cb64f15c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.098041) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2361794 bytes OK
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.098066) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.195757) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.195807) EVENT_LOG_v1 {"time_micros": 1772021868195797, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.195838) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2390779, prev total WAL file size 2390779, number of live WAL files 2.
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.196539) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2306KB)], [47(6577KB)]
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021868196580, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9097296, "oldest_snapshot_seqno": -1}
Feb 25 07:17:48 np0005629333 nova_compute[244014]: 2026-02-25 12:17:48.235 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Acquiring lock "refresh_cache-4c768bc2-a711-4179-b12f-604509e47856" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:17:48 np0005629333 nova_compute[244014]: 2026-02-25 12:17:48.235 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Acquired lock "refresh_cache-4c768bc2-a711-4179-b12f-604509e47856" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:17:48 np0005629333 nova_compute[244014]: 2026-02-25 12:17:48.235 244018 DEBUG nova.network.neutron [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4389 keys, 7345579 bytes, temperature: kUnknown
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021868264074, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7345579, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7315727, "index_size": 17774, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11013, "raw_key_size": 108511, "raw_average_key_size": 24, "raw_value_size": 7235915, "raw_average_value_size": 1648, "num_data_blocks": 743, "num_entries": 4389, "num_filter_entries": 4389, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772021868, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.264345) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7345579 bytes
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.268129) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 134.6 rd, 108.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 6.4 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(7.0) write-amplify(3.1) OK, records in: 4903, records dropped: 514 output_compression: NoCompression
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.268169) EVENT_LOG_v1 {"time_micros": 1772021868268150, "job": 24, "event": "compaction_finished", "compaction_time_micros": 67575, "compaction_time_cpu_micros": 14495, "output_level": 6, "num_output_files": 1, "total_output_size": 7345579, "num_input_records": 4903, "num_output_records": 4389, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021868268858, "job": 24, "event": "table_file_deletion", "file_number": 49}
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772021868270114, "job": 24, "event": "table_file_deletion", "file_number": 47}
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.196473) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.270195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.270201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.270204) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.270207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:17:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:17:48.270210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
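
Jobs 23 and 24 above are one manual flush-plus-compaction cycle of the monitor's RocksDB store: the memtable is flushed to L0 table #49, compacted together with L6 table #47 into table #50, and both inputs are deleted. Each EVENT_LOG_v1 payload is a single JSON object after a literal marker, so metrics can be pulled straight out of a log stream; a small parsing sketch:

    # Sketch: yield the rocksdb EVENT_LOG_v1 JSON payloads embedded in
    # log lines like those above ("EVENT_LOG_v1 " is a literal marker).
    import json

    def iter_rocksdb_events(lines):
        marker = 'EVENT_LOG_v1 '
        for line in lines:
            i = line.find(marker)
            if i != -1:
                yield json.loads(line[i + len(marker):])

For job 24, for instance, the started/finished events carry input_data_size 9097296 and total_output_size 7345579, consistent with the amplification figures in the compaction summary above.
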
Feb 25 07:17:48 np0005629333 nova_compute[244014]: 2026-02-25 12:17:48.385 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:17:48 np0005629333 nova_compute[244014]: 2026-02-25 12:17:48.466 244018 DEBUG nova.compute.manager [req-d3eb959a-f511-49f6-9581-95d23f34fbf6 req-59dafb10-3ce6-459f-ba54-cb039d9c6f6d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Received event network-changed-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:17:48 np0005629333 nova_compute[244014]: 2026-02-25 12:17:48.467 244018 DEBUG nova.compute.manager [req-d3eb959a-f511-49f6-9581-95d23f34fbf6 req-59dafb10-3ce6-459f-ba54-cb039d9c6f6d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Refreshing instance network info cache due to event network-changed-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:17:48 np0005629333 nova_compute[244014]: 2026-02-25 12:17:48.467 244018 DEBUG oslo_concurrency.lockutils [req-d3eb959a-f511-49f6-9581-95d23f34fbf6 req-59dafb10-3ce6-459f-ba54-cb039d9c6f6d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-4c768bc2-a711-4179-b12f-604509e47856" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:17:48 np0005629333 nova_compute[244014]: 2026-02-25 12:17:48.595 244018 DEBUG nova.network.neutron [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:17:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:17:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 178 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 862 KiB/s wr, 76 op/s
Feb 25 07:17:50 np0005629333 nova_compute[244014]: 2026-02-25 12:17:50.077 244018 DEBUG nova.network.neutron [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Updating instance_info_cache with network_info: [{"id": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "address": "fa:16:3e:26:a8:6b", "network": {"id": "1b6027b5-2ee6-49fb-b9e5-c79e5230d07e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-325792779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7e518f2e6c84550b07b36ea68922707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e15a18-10", "ovs_interfaceid": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:17:50 np0005629333 nova_compute[244014]: 2026-02-25 12:17:50.107 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Releasing lock "refresh_cache-4c768bc2-a711-4179-b12f-604509e47856" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:17:50 np0005629333 nova_compute[244014]: 2026-02-25 12:17:50.108 244018 DEBUG nova.compute.manager [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Instance network_info: |[{"id": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "address": "fa:16:3e:26:a8:6b", "network": {"id": "1b6027b5-2ee6-49fb-b9e5-c79e5230d07e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-325792779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7e518f2e6c84550b07b36ea68922707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e15a18-10", "ovs_interfaceid": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:17:50 np0005629333 nova_compute[244014]: 2026-02-25 12:17:50.108 244018 DEBUG oslo_concurrency.lockutils [req-d3eb959a-f511-49f6-9581-95d23f34fbf6 req-59dafb10-3ce6-459f-ba54-cb039d9c6f6d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-4c768bc2-a711-4179-b12f-604509e47856" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:17:50 np0005629333 nova_compute[244014]: 2026-02-25 12:17:50.109 244018 DEBUG nova.network.neutron [req-d3eb959a-f511-49f6-9581-95d23f34fbf6 req-59dafb10-3ce6-459f-ba54-cb039d9c6f6d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Refreshing network info cache for port e4e15a18-1007-4bd1-80a7-b8d2cb64f15c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:17:50 np0005629333 nova_compute[244014]: 2026-02-25 12:17:50.115 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Start _get_guest_xml network_info=[{"id": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "address": "fa:16:3e:26:a8:6b", "network": {"id": "1b6027b5-2ee6-49fb-b9e5-c79e5230d07e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-325792779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7e518f2e6c84550b07b36ea68922707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e15a18-10", "ovs_interfaceid": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:17:50 np0005629333 nova_compute[244014]: 2026-02-25 12:17:50.120 244018 WARNING nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:17:50 np0005629333 nova_compute[244014]: 2026-02-25 12:17:50.129 244018 DEBUG nova.virt.libvirt.host [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:17:50 np0005629333 nova_compute[244014]: 2026-02-25 12:17:50.130 244018 DEBUG nova.virt.libvirt.host [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:17:50 np0005629333 nova_compute[244014]: 2026-02-25 12:17:50.134 244018 DEBUG nova.virt.libvirt.host [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:17:50 np0005629333 nova_compute[244014]: 2026-02-25 12:17:50.134 244018 DEBUG nova.virt.libvirt.host [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
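
The two probes above show the driver checking for a usable cpu controller first under cgroups v1 (absent on this host) and then under the cgroups v2 unified hierarchy (found). A minimal stand-alone version of the v2 check, assuming the conventional mount point; this is an illustration, not Nova's exact code:

    # Sketch: report whether the cgroup-v2 unified hierarchy exposes the
    # "cpu" controller, which is what the host probe above is looking for.
    def has_cgroupsv2_cpu_controller(
            path='/sys/fs/cgroup/cgroup.controllers'):
        try:
            with open(path) as f:
                return 'cpu' in f.read().split()
        except OSError:
            return False
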
Feb 25 07:17:50 np0005629333 nova_compute[244014]: 2026-02-25 12:17:50.135 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:17:50 np0005629333 nova_compute[244014]: 2026-02-25 12:17:50.135 244018 DEBUG nova.virt.hardware [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:17:50 np0005629333 nova_compute[244014]: 2026-02-25 12:17:50.136 244018 DEBUG nova.virt.hardware [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:17:50 np0005629333 nova_compute[244014]: 2026-02-25 12:17:50.137 244018 DEBUG nova.virt.hardware [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:17:50 np0005629333 nova_compute[244014]: 2026-02-25 12:17:50.137 244018 DEBUG nova.virt.hardware [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:17:50 np0005629333 nova_compute[244014]: 2026-02-25 12:17:50.138 244018 DEBUG nova.virt.hardware [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:17:50 np0005629333 nova_compute[244014]: 2026-02-25 12:17:50.138 244018 DEBUG nova.virt.hardware [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:17:50 np0005629333 nova_compute[244014]: 2026-02-25 12:17:50.139 244018 DEBUG nova.virt.hardware [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:17:50 np0005629333 nova_compute[244014]: 2026-02-25 12:17:50.139 244018 DEBUG nova.virt.hardware [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:17:50 np0005629333 nova_compute[244014]: 2026-02-25 12:17:50.140 244018 DEBUG nova.virt.hardware [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:17:50 np0005629333 nova_compute[244014]: 2026-02-25 12:17:50.140 244018 DEBUG nova.virt.hardware [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:17:50 np0005629333 nova_compute[244014]: 2026-02-25 12:17:50.141 244018 DEBUG nova.virt.hardware [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
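
With every flavor and image constraint at 0:0:0 (unset), the topology search for one vCPU can only produce sockets=1, cores=1, threads=1 — exactly the single candidate logged above. A simplified illustration of the enumeration (Nova's real logic additionally applies limits and preference ordering):

    # Sketch: enumerate (sockets, cores, threads) factorizations of a
    # vCPU count; for vcpus=1 this yields only (1, 1, 1), as in the log.
    def possible_topologies(vcpus):
        return [(s, c, t)
                for s in range(1, vcpus + 1)
                for c in range(1, vcpus + 1)
                for t in range(1, vcpus + 1)
                if s * c * t == vcpus]

    assert possible_topologies(1) == [(1, 1, 1)]
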
Feb 25 07:17:50 np0005629333 nova_compute[244014]: 2026-02-25 12:17:50.145 244018 DEBUG oslo_concurrency.processutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:17:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:17:50 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/834894133' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:17:50 np0005629333 nova_compute[244014]: 2026-02-25 12:17:50.700 244018 DEBUG oslo_concurrency.processutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
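
The pair of processutils lines above bracket a blocking "ceph mon dump" subprocess (exit 0 in 0.554s) that the RBD image backend uses to discover monitor addresses. The call reduces to roughly the following, with the command copied verbatim from the log:

    # Sketch of the logged subprocess call; processutils.execute returns
    # (stdout, stderr) and raises ProcessExecutionError on a non-zero exit.
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
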
Feb 25 07:17:50 np0005629333 nova_compute[244014]: 2026-02-25 12:17:50.730 244018 DEBUG nova.storage.rbd_utils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] rbd image 4c768bc2-a711-4179-b12f-604509e47856_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:17:50 np0005629333 nova_compute[244014]: 2026-02-25 12:17:50.735 244018 DEBUG oslo_concurrency.processutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:17:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:17:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1022918749' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:17:51 np0005629333 nova_compute[244014]: 2026-02-25 12:17:51.339 244018 DEBUG oslo_concurrency.processutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:17:51 np0005629333 nova_compute[244014]: 2026-02-25 12:17:51.343 244018 DEBUG nova.virt.libvirt.vif [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:17:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-942373444',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-942373444',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-942373444',id=13,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c7e518f2e6c84550b07b36ea68922707',ramdisk_id='',reservation_id='r-m5sgaoda',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-701873226',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-701873226-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:17:45Z,user_data=None,user_id='38c6d2e6875a408687bd85066c826987',uuid=4c768bc2-a711-4179-b12f-604509e47856,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "address": "fa:16:3e:26:a8:6b", "network": {"id": "1b6027b5-2ee6-49fb-b9e5-c79e5230d07e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-325792779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7e518f2e6c84550b07b36ea68922707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e15a18-10", "ovs_interfaceid": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:17:51 np0005629333 nova_compute[244014]: 2026-02-25 12:17:51.344 244018 DEBUG nova.network.os_vif_util [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Converting VIF {"id": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "address": "fa:16:3e:26:a8:6b", "network": {"id": "1b6027b5-2ee6-49fb-b9e5-c79e5230d07e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-325792779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7e518f2e6c84550b07b36ea68922707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e15a18-10", "ovs_interfaceid": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:17:51 np0005629333 nova_compute[244014]: 2026-02-25 12:17:51.346 244018 DEBUG nova.network.os_vif_util [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:a8:6b,bridge_name='br-int',has_traffic_filtering=True,id=e4e15a18-1007-4bd1-80a7-b8d2cb64f15c,network=Network(1b6027b5-2ee6-49fb-b9e5-c79e5230d07e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4e15a18-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:17:51 np0005629333 nova_compute[244014]: 2026-02-25 12:17:51.348 244018 DEBUG nova.objects.instance [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4c768bc2-a711-4179-b12f-604509e47856 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:17:51 np0005629333 nova_compute[244014]: 2026-02-25 12:17:51.435 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:17:51 np0005629333 nova_compute[244014]:  <uuid>4c768bc2-a711-4179-b12f-604509e47856</uuid>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:  <name>instance-0000000d</name>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <nova:name>tempest-FloatingIPsAssociationNegativeTestJSON-server-942373444</nova:name>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:17:50</nova:creationTime>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:17:51 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:        <nova:user uuid="38c6d2e6875a408687bd85066c826987">tempest-FloatingIPsAssociationNegativeTestJSON-701873226-project-member</nova:user>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:        <nova:project uuid="c7e518f2e6c84550b07b36ea68922707">tempest-FloatingIPsAssociationNegativeTestJSON-701873226</nova:project>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:        <nova:port uuid="e4e15a18-1007-4bd1-80a7-b8d2cb64f15c">
Feb 25 07:17:51 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <entry name="serial">4c768bc2-a711-4179-b12f-604509e47856</entry>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <entry name="uuid">4c768bc2-a711-4179-b12f-604509e47856</entry>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/4c768bc2-a711-4179-b12f-604509e47856_disk">
Feb 25 07:17:51 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:17:51 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/4c768bc2-a711-4179-b12f-604509e47856_disk.config">
Feb 25 07:17:51 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:17:51 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:26:a8:6b"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <target dev="tape4e15a18-10"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/4c768bc2-a711-4179-b12f-604509e47856/console.log" append="off"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:17:51 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:17:51 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:17:51 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:17:51 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
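
The XML between "Start _get_guest_xml" and the line above is the complete domain definition handed to libvirt: an RBD-backed virtio root disk plus a SATA config-drive CD-ROM, a virtio NIC on tap device tape4e15a18-10, a pty serial console logged to console.log, VNC graphics, and a virtio RNG fed from /dev/urandom. Outside of Nova, launching a guest from such XML with libvirt-python looks roughly like this (the file name is hypothetical; Nova's spawn path adds event handling and cleanup):

    # Minimal libvirt-python sketch: define a persistent domain from XML
    # like the dump above, then start it.
    import libvirt

    conn = libvirt.open('qemu:///system')
    with open('domain.xml') as f:   # hypothetical file holding the XML
        dom = conn.defineXML(f.read())
    dom.create()                    # boot the guest
    conn.close()
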
Feb 25 07:17:51 np0005629333 nova_compute[244014]: 2026-02-25 12:17:51.437 244018 DEBUG nova.compute.manager [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Preparing to wait for external event network-vif-plugged-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:17:51 np0005629333 nova_compute[244014]: 2026-02-25 12:17:51.438 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Acquiring lock "4c768bc2-a711-4179-b12f-604509e47856-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:17:51 np0005629333 nova_compute[244014]: 2026-02-25 12:17:51.439 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:17:51 np0005629333 nova_compute[244014]: 2026-02-25 12:17:51.439 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:17:51 np0005629333 nova_compute[244014]: 2026-02-25 12:17:51.441 244018 DEBUG nova.virt.libvirt.vif [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:17:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-942373444',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-942373444',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-942373444',id=13,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c7e518f2e6c84550b07b36ea68922707',ramdisk_id='',reservation_id='r-m5sgaoda',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-701873226',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-701873226-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:17:45Z,user_data=None,user_id='38c6d2e6875a408687bd85066c826987',uuid=4c768bc2-a711-4179-b12f-604509e47856,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "address": "fa:16:3e:26:a8:6b", "network": {"id": "1b6027b5-2ee6-49fb-b9e5-c79e5230d07e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-325792779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7e518f2e6c84550b07b36ea68922707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e15a18-10", "ovs_interfaceid": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:17:51 np0005629333 nova_compute[244014]: 2026-02-25 12:17:51.442 244018 DEBUG nova.network.os_vif_util [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Converting VIF {"id": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "address": "fa:16:3e:26:a8:6b", "network": {"id": "1b6027b5-2ee6-49fb-b9e5-c79e5230d07e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-325792779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7e518f2e6c84550b07b36ea68922707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e15a18-10", "ovs_interfaceid": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:17:51 np0005629333 nova_compute[244014]: 2026-02-25 12:17:51.443 244018 DEBUG nova.network.os_vif_util [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:a8:6b,bridge_name='br-int',has_traffic_filtering=True,id=e4e15a18-1007-4bd1-80a7-b8d2cb64f15c,network=Network(1b6027b5-2ee6-49fb-b9e5-c79e5230d07e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4e15a18-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:17:51 np0005629333 nova_compute[244014]: 2026-02-25 12:17:51.444 244018 DEBUG os_vif [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:a8:6b,bridge_name='br-int',has_traffic_filtering=True,id=e4e15a18-1007-4bd1-80a7-b8d2cb64f15c,network=Network(1b6027b5-2ee6-49fb-b9e5-c79e5230d07e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4e15a18-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:17:51 np0005629333 nova_compute[244014]: 2026-02-25 12:17:51.445 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:17:51 np0005629333 nova_compute[244014]: 2026-02-25 12:17:51.446 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:17:51 np0005629333 nova_compute[244014]: 2026-02-25 12:17:51.447 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:17:51 np0005629333 nova_compute[244014]: 2026-02-25 12:17:51.451 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:17:51 np0005629333 nova_compute[244014]: 2026-02-25 12:17:51.452 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4e15a18-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:17:51 np0005629333 nova_compute[244014]: 2026-02-25 12:17:51.453 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape4e15a18-10, col_values=(('external_ids', {'iface-id': 'e4e15a18-1007-4bd1-80a7-b8d2cb64f15c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:a8:6b', 'vm-uuid': '4c768bc2-a711-4179-b12f-604509e47856'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
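The plug call queues three ovsdbapp commands against the local OVSDB: an idempotent AddBridgeCommand (a no-op here, hence "Transaction caused no change"), then AddPortCommand plus a DbSetCommand that stamps the Interface row with the Neutron port UUID so ovn-controller can claim it. A sketch of issuing the same transaction with ovsdbapp's Open_vSwitch schema API (the socket path is an assumption for a typical host):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # One transaction, same three commands as the log records above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', datapath_type='system'))
        txn.add(api.add_port('br-int', 'tape4e15a18-10'))
        txn.add(api.db_set(
            'Interface', 'tape4e15a18-10',
            ('external_ids',
             {'iface-id': 'e4e15a18-1007-4bd1-80a7-b8d2cb64f15c',
              'attached-mac': 'fa:16:3e:26:a8:6b'})))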
Feb 25 07:17:51 np0005629333 NetworkManager[49836]: <info>  [1772021871.4568] manager: (tape4e15a18-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Feb 25 07:17:51 np0005629333 nova_compute[244014]: 2026-02-25 12:17:51.457 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:17:51 np0005629333 nova_compute[244014]: 2026-02-25 12:17:51.463 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:17:51 np0005629333 nova_compute[244014]: 2026-02-25 12:17:51.464 244018 INFO os_vif [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:a8:6b,bridge_name='br-int',has_traffic_filtering=True,id=e4e15a18-1007-4bd1-80a7-b8d2cb64f15c,network=Network(1b6027b5-2ee6-49fb-b9e5-c79e5230d07e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4e15a18-10')#033[00m
Feb 25 07:17:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Feb 25 07:17:51 np0005629333 nova_compute[244014]: 2026-02-25 12:17:51.671 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:17:51 np0005629333 nova_compute[244014]: 2026-02-25 12:17:51.671 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:17:51 np0005629333 nova_compute[244014]: 2026-02-25 12:17:51.672 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] No VIF found with MAC fa:16:3e:26:a8:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:17:51 np0005629333 nova_compute[244014]: 2026-02-25 12:17:51.673 244018 INFO nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Using config drive#033[00m
Feb 25 07:17:51 np0005629333 nova_compute[244014]: 2026-02-25 12:17:51.703 244018 DEBUG nova.storage.rbd_utils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] rbd image 4c768bc2-a711-4179-b12f-604509e47856_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:17:52 np0005629333 nova_compute[244014]: 2026-02-25 12:17:52.503 244018 INFO nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Creating config drive at /var/lib/nova/instances/4c768bc2-a711-4179-b12f-604509e47856/disk.config#033[00m
Feb 25 07:17:52 np0005629333 nova_compute[244014]: 2026-02-25 12:17:52.509 244018 DEBUG oslo_concurrency.processutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4c768bc2-a711-4179-b12f-604509e47856/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp9srjv0y0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:17:52 np0005629333 nova_compute[244014]: 2026-02-25 12:17:52.637 244018 DEBUG oslo_concurrency.processutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4c768bc2-a711-4179-b12f-604509e47856/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp9srjv0y0" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:17:52 np0005629333 nova_compute[244014]: 2026-02-25 12:17:52.674 244018 DEBUG nova.storage.rbd_utils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] rbd image 4c768bc2-a711-4179-b12f-604509e47856_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:17:52 np0005629333 nova_compute[244014]: 2026-02-25 12:17:52.678 244018 DEBUG oslo_concurrency.processutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4c768bc2-a711-4179-b12f-604509e47856/disk.config 4c768bc2-a711-4179-b12f-604509e47856_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:17:52 np0005629333 nova_compute[244014]: 2026-02-25 12:17:52.829 244018 DEBUG oslo_concurrency.processutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4c768bc2-a711-4179-b12f-604509e47856/disk.config 4c768bc2-a711-4179-b12f-604509e47856_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:17:52 np0005629333 nova_compute[244014]: 2026-02-25 12:17:52.830 244018 INFO nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Deleting local config drive /var/lib/nova/instances/4c768bc2-a711-4179-b12f-604509e47856/disk.config because it was imported into RBD.#033[00m
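The config-drive sequence above is: build an ISO9660 image with mkisofs from a temp staging directory, rbd-import it into the Ceph 'vms' pool as <uuid>_disk.config, then delete the local copy. A sketch of the same two commands via oslo.concurrency's processutils, which is what the driver invokes above (paths and the staging directory are taken from the log):

    from oslo_concurrency import processutils

    uuid = '4c768bc2-a711-4179-b12f-604509e47856'
    iso = '/var/lib/nova/instances/%s/disk.config' % uuid

    # Build the config drive (volume label 'config-2' is what
    # cloud-init looks for), then import it into RBD.
    processutils.execute(
        '/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
        '-allow-multidot', '-l', '-quiet', '-J', '-r',
        '-V', 'config-2', '/tmp/tmp9srjv0y0')
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', iso,
        uuid + '_disk.config', '--image-format=2',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')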
Feb 25 07:17:52 np0005629333 kernel: tape4e15a18-10: entered promiscuous mode
Feb 25 07:17:52 np0005629333 NetworkManager[49836]: <info>  [1772021872.8824] manager: (tape4e15a18-10): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Feb 25 07:17:52 np0005629333 ovn_controller[147040]: 2026-02-25T12:17:52Z|00044|binding|INFO|Claiming lport e4e15a18-1007-4bd1-80a7-b8d2cb64f15c for this chassis.
Feb 25 07:17:52 np0005629333 ovn_controller[147040]: 2026-02-25T12:17:52Z|00045|binding|INFO|e4e15a18-1007-4bd1-80a7-b8d2cb64f15c: Claiming fa:16:3e:26:a8:6b 10.100.0.3
Feb 25 07:17:52 np0005629333 nova_compute[244014]: 2026-02-25 12:17:52.883 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:17:52 np0005629333 nova_compute[244014]: 2026-02-25 12:17:52.886 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:17:52 np0005629333 nova_compute[244014]: 2026-02-25 12:17:52.890 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:17:52 np0005629333 systemd-udevd[258340]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:17:52 np0005629333 systemd-machined[210048]: New machine qemu-15-instance-0000000d.
Feb 25 07:17:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:52.919 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:a8:6b 10.100.0.3'], port_security=['fa:16:3e:26:a8:6b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '4c768bc2-a711-4179-b12f-604509e47856', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c7e518f2e6c84550b07b36ea68922707', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd3f2670-feae-47a4-bde0-e33bcae0ae19', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e09bb1cc-ff05-4116-89c1-7f1944c54b5c, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=e4e15a18-1007-4bd1-80a7-b8d2cb64f15c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:17:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:52.920 157129 INFO neutron.agent.ovn.metadata.agent [-] Port e4e15a18-1007-4bd1-80a7-b8d2cb64f15c in datapath 1b6027b5-2ee6-49fb-b9e5-c79e5230d07e bound to our chassis#033[00m
Feb 25 07:17:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:52.921 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1b6027b5-2ee6-49fb-b9e5-c79e5230d07e#033[00m
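The PortBindingUpdatedEvent match above is the metadata agent's trigger: when a Port_Binding row's chassis column flips to this chassis, it provisions a metadata namespace for the datapath. A sketch of the ovsdbapp event shape involved, with the provision_datapath hook standing in as a hypothetical placeholder for the agent's real handler:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """React when a Port_Binding row lands on this chassis."""

        def __init__(self, agent):
            # Watch UPDATE events on Port_Binding; finer-grained
            # filtering happens in run().
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.agent = agent

        def run(self, event, row, old):
            # 'old' only carries the columns that changed, so a
            # 'chassis' attribute there means the binding just moved.
            if hasattr(old, 'chassis') and row.chassis:
                self.agent.provision_datapath(row)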
Feb 25 07:17:52 np0005629333 systemd[1]: Started Virtual Machine qemu-15-instance-0000000d.
Feb 25 07:17:52 np0005629333 NetworkManager[49836]: <info>  [1772021872.9295] device (tape4e15a18-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:17:52 np0005629333 NetworkManager[49836]: <info>  [1772021872.9303] device (tape4e15a18-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:17:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:52.931 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[78ac9fe2-8e87-4b6e-837f-aa7f724e566b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:17:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:52.932 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1b6027b5-21 in ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
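Provisioning starts with a veth pair: tap1b6027b5-21 goes into the ovnmeta- namespace (where haproxy will later bind 169.254.169.254) while its peer tap1b6027b5-20 stays in the root namespace to be plugged into br-int below. A sketch with neutron's ip_lib wrapper (needs root/privsep; method names per neutron.agent.linux.ip_lib):

    from neutron.agent.linux import ip_lib

    ns = 'ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e'
    ip = ip_lib.IPWrapper()
    ip.ensure_namespace(ns)  # create the netns if it is missing

    # Peer -21 is created directly inside the namespace; -20 stays
    # in the root namespace for the OVS hookup.
    root_dev, ns_dev = ip.add_veth(
        'tap1b6027b5-20', 'tap1b6027b5-21', namespace2=ns)
    root_dev.link.set_up()
    ns_dev.link.set_up()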
Feb 25 07:17:52 np0005629333 nova_compute[244014]: 2026-02-25 12:17:52.934 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:17:52 np0005629333 ovn_controller[147040]: 2026-02-25T12:17:52Z|00046|binding|INFO|Setting lport e4e15a18-1007-4bd1-80a7-b8d2cb64f15c ovn-installed in OVS
Feb 25 07:17:52 np0005629333 ovn_controller[147040]: 2026-02-25T12:17:52Z|00047|binding|INFO|Setting lport e4e15a18-1007-4bd1-80a7-b8d2cb64f15c up in Southbound
Feb 25 07:17:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:52.933 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1b6027b5-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:17:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:52.933 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[813c4ac4-5ff1-4f99-880f-0f4dc7fa8188]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:17:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:52.934 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3808052f-84ec-4253-9fbc-277f53a0bbef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:17:52 np0005629333 nova_compute[244014]: 2026-02-25 12:17:52.938 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:17:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:52.947 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[a5d05bdb-86a8-495f-a87c-af5f3b6f8271]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:17:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:52.959 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3ee63867-402b-4ab7-a2c7-f817437df4cc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:17:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:52.982 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[823883ac-35a6-49d1-be73-a71ca93195d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:17:52 np0005629333 systemd-udevd[258343]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:17:52 np0005629333 NetworkManager[49836]: <info>  [1772021872.9893] manager: (tap1b6027b5-20): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Feb 25 07:17:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:52.988 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3086ef21-2660-4ebb-956f-e9bc5e773c18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.018 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8c0f46c9-8f47-405a-a9b2-1ded8b9df28e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.023 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b4a904-ce8e-4fe2-a516-24425f3708ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:17:53 np0005629333 NetworkManager[49836]: <info>  [1772021873.0437] device (tap1b6027b5-20): carrier: link connected
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.046 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[35284fca-6392-4221-b284-ecbe98f5a965]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.063 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e5c48dc0-ec20-46e0-9154-4053d9fad3a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1b6027b5-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:ad:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384245, 'reachable_time': 29268, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258373, 'error': None, 'target': 'ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.080 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5c3bb7e9-7efa-4b4b-922f-3ac49369345a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe54:ad5e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 384245, 'tstamp': 384245}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258374, 'error': None, 'target': 'ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.108 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f2084cb3-6508-42cc-930f-dbd2738bbc32]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1b6027b5-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:ad:5e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384245, 'reachable_time': 29268, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258375, 'error': None, 'target': 'ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.138 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[acadf091-db52-49ff-a6ff-bb3f380c9b2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.185 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[82059ad9-59cf-4d5d-bf58-7e5b1d2a0101]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.186 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b6027b5-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.187 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.187 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1b6027b5-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:17:53 np0005629333 kernel: tap1b6027b5-20: entered promiscuous mode
Feb 25 07:17:53 np0005629333 NetworkManager[49836]: <info>  [1772021873.1897] manager: (tap1b6027b5-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.192 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1b6027b5-20, col_values=(('external_ids', {'iface-id': '7dd8ba6d-d4c9-4e9e-b4b6-0321252d1200'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:17:53 np0005629333 ovn_controller[147040]: 2026-02-25T12:17:53Z|00048|binding|INFO|Releasing lport 7dd8ba6d-d4c9-4e9e-b4b6-0321252d1200 from this chassis (sb_readonly=0)
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.202 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1b6027b5-2ee6-49fb-b9e5-c79e5230d07e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1b6027b5-2ee6-49fb-b9e5-c79e5230d07e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:17:53 np0005629333 nova_compute[244014]: 2026-02-25 12:17:53.204 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:17:53 np0005629333 nova_compute[244014]: 2026-02-25 12:17:53.208 244018 DEBUG nova.network.neutron [req-d3eb959a-f511-49f6-9581-95d23f34fbf6 req-59dafb10-3ce6-459f-ba54-cb039d9c6f6d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Updated VIF entry in instance network info cache for port e4e15a18-1007-4bd1-80a7-b8d2cb64f15c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:17:53 np0005629333 nova_compute[244014]: 2026-02-25 12:17:53.209 244018 DEBUG nova.network.neutron [req-d3eb959a-f511-49f6-9581-95d23f34fbf6 req-59dafb10-3ce6-459f-ba54-cb039d9c6f6d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Updating instance_info_cache with network_info: [{"id": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "address": "fa:16:3e:26:a8:6b", "network": {"id": "1b6027b5-2ee6-49fb-b9e5-c79e5230d07e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-325792779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7e518f2e6c84550b07b36ea68922707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e15a18-10", "ovs_interfaceid": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.211 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cd7407bd-4840-452f-a053-4b171e70a67e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.211 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/1b6027b5-2ee6-49fb-b9e5-c79e5230d07e.pid.haproxy
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 1b6027b5-2ee6-49fb-b9e5-c79e5230d07e
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:17:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:53.212 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e', 'env', 'PROCESS_TAG=haproxy-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1b6027b5-2ee6-49fb-b9e5-c79e5230d07e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
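The rendered config above is written to /var/lib/neutron/ovn-metadata-proxy/<network>.conf and haproxy is launched inside the ovnmeta- namespace through rootwrap; haproxy daemonizes itself ('daemon' in the global section) and writes the pidfile that ProcessMonitor polls later. Stripped of rootwrap, the launch reduces to something like this sketch (run as root):

    import subprocess

    net = '1b6027b5-2ee6-49fb-b9e5-c79e5230d07e'
    cfg = '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % net

    # Same command shape as the log record above, minus rootwrap
    # and the PROCESS_TAG bookkeeping.
    subprocess.check_call(
        ['ip', 'netns', 'exec', 'ovnmeta-' + net,
         'haproxy', '-f', cfg])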
Feb 25 07:17:53 np0005629333 nova_compute[244014]: 2026-02-25 12:17:53.294 244018 DEBUG oslo_concurrency.lockutils [req-d3eb959a-f511-49f6-9581-95d23f34fbf6 req-59dafb10-3ce6-459f-ba54-cb039d9c6f6d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-4c768bc2-a711-4179-b12f-604509e47856" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:17:53 np0005629333 nova_compute[244014]: 2026-02-25 12:17:53.388 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:17:53 np0005629333 podman[258442]: 2026-02-25 12:17:53.553238755 +0000 UTC m=+0.066939990 container create 4e25652aef3bf93b1f0724b7a4fcd75bb4cb93bbe595e197703d248748da5148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 07:17:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Feb 25 07:17:53 np0005629333 nova_compute[244014]: 2026-02-25 12:17:53.572 244018 DEBUG nova.compute.manager [req-9048d1c7-b44d-48ae-bfcb-20a4917b109d req-60d71dd7-801a-425f-856a-4d9376ef3423 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Received event network-vif-plugged-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:17:53 np0005629333 nova_compute[244014]: 2026-02-25 12:17:53.574 244018 DEBUG oslo_concurrency.lockutils [req-9048d1c7-b44d-48ae-bfcb-20a4917b109d req-60d71dd7-801a-425f-856a-4d9376ef3423 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4c768bc2-a711-4179-b12f-604509e47856-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:17:53 np0005629333 nova_compute[244014]: 2026-02-25 12:17:53.575 244018 DEBUG oslo_concurrency.lockutils [req-9048d1c7-b44d-48ae-bfcb-20a4917b109d req-60d71dd7-801a-425f-856a-4d9376ef3423 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:17:53 np0005629333 nova_compute[244014]: 2026-02-25 12:17:53.576 244018 DEBUG oslo_concurrency.lockutils [req-9048d1c7-b44d-48ae-bfcb-20a4917b109d req-60d71dd7-801a-425f-856a-4d9376ef3423 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:17:53 np0005629333 nova_compute[244014]: 2026-02-25 12:17:53.577 244018 DEBUG nova.compute.manager [req-9048d1c7-b44d-48ae-bfcb-20a4917b109d req-60d71dd7-801a-425f-856a-4d9376ef3423 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Processing event network-vif-plugged-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:17:53 np0005629333 nova_compute[244014]: 2026-02-25 12:17:53.578 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021873.577011, 4c768bc2-a711-4179-b12f-604509e47856 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:17:53 np0005629333 nova_compute[244014]: 2026-02-25 12:17:53.579 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4c768bc2-a711-4179-b12f-604509e47856] VM Started (Lifecycle Event)#033[00m
Feb 25 07:17:53 np0005629333 nova_compute[244014]: 2026-02-25 12:17:53.581 244018 DEBUG nova.compute.manager [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:17:53 np0005629333 nova_compute[244014]: 2026-02-25 12:17:53.587 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:17:53 np0005629333 nova_compute[244014]: 2026-02-25 12:17:53.592 244018 INFO nova.virt.libvirt.driver [-] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Instance spawned successfully.#033[00m
Feb 25 07:17:53 np0005629333 nova_compute[244014]: 2026-02-25 12:17:53.592 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:17:53 np0005629333 systemd[1]: Started libpod-conmon-4e25652aef3bf93b1f0724b7a4fcd75bb4cb93bbe595e197703d248748da5148.scope.
Feb 25 07:17:53 np0005629333 podman[258442]: 2026-02-25 12:17:53.512144791 +0000 UTC m=+0.025846056 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:17:53 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:17:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8c0265e3a393cacb87e5ee458bb9d5c78aa9ff00fd616b394131707d01c2e19/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:17:53 np0005629333 podman[258442]: 2026-02-25 12:17:53.658232642 +0000 UTC m=+0.171933907 container init 4e25652aef3bf93b1f0724b7a4fcd75bb4cb93bbe595e197703d248748da5148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 07:17:53 np0005629333 podman[258442]: 2026-02-25 12:17:53.665852785 +0000 UTC m=+0.179554020 container start 4e25652aef3bf93b1f0724b7a4fcd75bb4cb93bbe595e197703d248748da5148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 25 07:17:53 np0005629333 neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e[258463]: [NOTICE]   (258467) : New worker (258469) forked
Feb 25 07:17:53 np0005629333 neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e[258463]: [NOTICE]   (258467) : Loading success.
Feb 25 07:17:53 np0005629333 nova_compute[244014]: 2026-02-25 12:17:53.699 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:17:53 np0005629333 nova_compute[244014]: 2026-02-25 12:17:53.711 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
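The numeric pair in that sync message ("DB power_state: 0, VM power_state: 1") uses nova's power-state constants: the database still holds the pre-boot value while libvirt already reports the guest running. For reference, the values as defined in nova.compute.power_state:

    # nova.compute.power_state
    NOSTATE = 0x00    # DB value before the first successful sync
    RUNNING = 0x01    # what the hypervisor reports once the guest starts
    PAUSED = 0x03
    SHUTDOWN = 0x04
    CRASHED = 0x06
    SUSPENDED = 0x07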
Feb 25 07:17:53 np0005629333 nova_compute[244014]: 2026-02-25 12:17:53.717 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:17:53 np0005629333 nova_compute[244014]: 2026-02-25 12:17:53.717 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:17:53 np0005629333 nova_compute[244014]: 2026-02-25 12:17:53.718 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:17:53 np0005629333 nova_compute[244014]: 2026-02-25 12:17:53.719 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:17:53 np0005629333 nova_compute[244014]: 2026-02-25 12:17:53.720 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:17:53 np0005629333 nova_compute[244014]: 2026-02-25 12:17:53.720 244018 DEBUG nova.virt.libvirt.driver [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:17:53 np0005629333 nova_compute[244014]: 2026-02-25 12:17:53.837 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4c768bc2-a711-4179-b12f-604509e47856] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:17:53 np0005629333 nova_compute[244014]: 2026-02-25 12:17:53.837 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021873.5771196, 4c768bc2-a711-4179-b12f-604509e47856 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:17:53 np0005629333 nova_compute[244014]: 2026-02-25 12:17:53.838 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4c768bc2-a711-4179-b12f-604509e47856] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:17:53 np0005629333 nova_compute[244014]: 2026-02-25 12:17:53.894 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:17:53 np0005629333 nova_compute[244014]: 2026-02-25 12:17:53.894 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 07:17:54 np0005629333 nova_compute[244014]: 2026-02-25 12:17:54.015 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:17:54 np0005629333 nova_compute[244014]: 2026-02-25 12:17:54.021 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021873.5857973, 4c768bc2-a711-4179-b12f-604509e47856 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:17:54 np0005629333 nova_compute[244014]: 2026-02-25 12:17:54.021 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4c768bc2-a711-4179-b12f-604509e47856] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:17:54 np0005629333 nova_compute[244014]: 2026-02-25 12:17:54.062 244018 INFO nova.compute.manager [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Took 8.01 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:17:54 np0005629333 nova_compute[244014]: 2026-02-25 12:17:54.063 244018 DEBUG nova.compute.manager [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:17:54 np0005629333 nova_compute[244014]: 2026-02-25 12:17:54.149 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:17:54 np0005629333 nova_compute[244014]: 2026-02-25 12:17:54.154 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:17:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:17:54 np0005629333 nova_compute[244014]: 2026-02-25 12:17:54.250 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 07:17:54 np0005629333 nova_compute[244014]: 2026-02-25 12:17:54.254 244018 INFO nova.compute.manager [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Took 9.38 seconds to build instance.#033[00m
Feb 25 07:17:54 np0005629333 nova_compute[244014]: 2026-02-25 12:17:54.438 244018 DEBUG oslo_concurrency.lockutils [None req-55aa3e8e-e9af-4040-98e5-9169941cb084 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:17:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:55.002 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:17:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:55.003 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:17:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:55.003 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
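The acquire/wait/release triple above is the standard oslo.concurrency pattern: ProcessMonitor serializes its periodic liveness check of spawned children (here, the haproxy just launched) behind a process-local lock. In code it is just the decorator; the stub body below is a placeholder:

    from oslo_concurrency import lockutils

    # Serialize the periodic child-process health check, matching the
    # lock name seen in the log records above.
    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        pass  # re-spawn or log dead children here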
Feb 25 07:17:55 np0005629333 nova_compute[244014]: 2026-02-25 12:17:55.088 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021860.0872273, db0fc9fa-1fc0-4334-96f9-2205fa53e308 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:17:55 np0005629333 nova_compute[244014]: 2026-02-25 12:17:55.088 244018 INFO nova.compute.manager [-] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:17:55 np0005629333 nova_compute[244014]: 2026-02-25 12:17:55.117 244018 DEBUG nova.compute.manager [None req-3adc958f-69f7-4b06-a731-86ed952e4f11 - - - - - -] [instance: db0fc9fa-1fc0-4334-96f9-2205fa53e308] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:17:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:17:55 np0005629333 nova_compute[244014]: 2026-02-25 12:17:55.668 244018 DEBUG nova.compute.manager [req-165d0960-8c4b-4563-a0a4-c16284657b80 req-5f049777-4047-4891-978a-bef2c0d4d530 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Received event network-vif-plugged-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:17:55 np0005629333 nova_compute[244014]: 2026-02-25 12:17:55.669 244018 DEBUG oslo_concurrency.lockutils [req-165d0960-8c4b-4563-a0a4-c16284657b80 req-5f049777-4047-4891-978a-bef2c0d4d530 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4c768bc2-a711-4179-b12f-604509e47856-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:17:55 np0005629333 nova_compute[244014]: 2026-02-25 12:17:55.670 244018 DEBUG oslo_concurrency.lockutils [req-165d0960-8c4b-4563-a0a4-c16284657b80 req-5f049777-4047-4891-978a-bef2c0d4d530 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:17:55 np0005629333 nova_compute[244014]: 2026-02-25 12:17:55.670 244018 DEBUG oslo_concurrency.lockutils [req-165d0960-8c4b-4563-a0a4-c16284657b80 req-5f049777-4047-4891-978a-bef2c0d4d530 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:17:55 np0005629333 nova_compute[244014]: 2026-02-25 12:17:55.671 244018 DEBUG nova.compute.manager [req-165d0960-8c4b-4563-a0a4-c16284657b80 req-5f049777-4047-4891-978a-bef2c0d4d530 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] No waiting events found dispatching network-vif-plugged-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:17:55 np0005629333 nova_compute[244014]: 2026-02-25 12:17:55.671 244018 WARNING nova.compute.manager [req-165d0960-8c4b-4563-a0a4-c16284657b80 req-5f049777-4047-4891-978a-bef2c0d4d530 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Received unexpected event network-vif-plugged-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c for instance with vm_state active and task_state None.
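The pop_instance_event sequence ending in the WARNING is Nova's external-event bookkeeping: Neutron reported network-vif-plugged, but the instance had already finished building, so no waiter was registered and the event is dropped. A toy model of that bookkeeping (not Nova's actual classes, just the shape of the logic):

import threading

class InstanceEvents:
    """Toy model of the per-instance event table (illustrative only)."""
    def __init__(self):
        self._lock = threading.Lock()   # the "<uuid>-events" lock in the log
        self._events = {}               # {instance_uuid: {event_name: Event}}

    def prepare_for_event(self, instance_uuid, name):
        with self._lock:
            waiter = threading.Event()
            self._events.setdefault(instance_uuid, {})[name] = waiter
        return waiter

    def pop_instance_event(self, instance_uuid, name):
        with self._lock:
            return self._events.get(instance_uuid, {}).pop(name, None)

events = InstanceEvents()
waiter = events.pop_instance_event(
    '4c768bc2-a711-4179-b12f-604509e47856',
    'network-vif-plugged-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c')
if waiter is None:
    print('unexpected event: nothing is waiting for it')  # -> the WARNING above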
Feb 25 07:17:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:56.152 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:17:56 np0005629333 nova_compute[244014]: 2026-02-25 12:17:56.152 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:17:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:56.153 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
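The Matched UPDATE line is ovsdbapp dispatching a RowEvent against the SB_Global table; the nb_cfg bump from 5 to 6 is what triggers the delayed chassis update. A rough sketch of such an event hook, assuming the ovsdbapp RowEvent interface from the module path in the log:

from ovsdbapp.backend.ovs_idl import event as row_event

class SbGlobalUpdateEvent(row_event.RowEvent):
    def __init__(self):
        table = 'SB_Global'
        events = (self.ROW_UPDATE,)
        super().__init__(events, table, None)
        self.event_name = 'SbGlobalUpdateEvent'

    def run(self, event, row, old):
        # On an nb_cfg bump, the agent schedules a delayed write of
        # {'neutron:ovn-metadata-sb-cfg': str(row.nb_cfg)} to Chassis_Private,
        # which is the transaction visible below at 12:17:59.156.
        print('SB_Global nb_cfg ->', row.nb_cfg)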
Feb 25 07:17:56 np0005629333 nova_compute[244014]: 2026-02-25 12:17:56.456 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:17:56 np0005629333 nova_compute[244014]: 2026-02-25 12:17:56.814 244018 DEBUG oslo_concurrency.lockutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Acquiring lock "f53717b5-3196-44b5-bc3a-aa8f53ce397d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:17:56 np0005629333 nova_compute[244014]: 2026-02-25 12:17:56.815 244018 DEBUG oslo_concurrency.lockutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lock "f53717b5-3196-44b5-bc3a-aa8f53ce397d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:17:56 np0005629333 nova_compute[244014]: 2026-02-25 12:17:56.920 244018 DEBUG nova.compute.manager [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:17:57 np0005629333 nova_compute[244014]: 2026-02-25 12:17:57.049 244018 DEBUG oslo_concurrency.lockutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:17:57 np0005629333 nova_compute[244014]: 2026-02-25 12:17:57.050 244018 DEBUG oslo_concurrency.lockutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:17:57 np0005629333 nova_compute[244014]: 2026-02-25 12:17:57.059 244018 DEBUG nova.virt.hardware [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:17:57 np0005629333 nova_compute[244014]: 2026-02-25 12:17:57.059 244018 INFO nova.compute.claims [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:17:57 np0005629333 nova_compute[244014]: 2026-02-25 12:17:57.234 244018 DEBUG oslo_concurrency.processutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:17:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 07:17:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:17:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2367595341' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:17:57 np0005629333 nova_compute[244014]: 2026-02-25 12:17:57.795 244018 DEBUG oslo_concurrency.processutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
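The ceph df call above is how the RBD image backend sizes its pool for the resource tracker. A sketch of the same probe, pulling per-pool stats out of the JSON (field names follow current Ceph output and may differ by release):

import json
import subprocess

out = subprocess.check_output(
    ['ceph', 'df', '--format=json', '--id', 'openstack',
     '--conf', '/etc/ceph/ceph.conf'])
stats = json.loads(out)
vms = next(p for p in stats['pools'] if p['name'] == 'vms')
print('vms pool bytes used:', vms['stats']['bytes_used'])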
Feb 25 07:17:57 np0005629333 nova_compute[244014]: 2026-02-25 12:17:57.803 244018 DEBUG nova.compute.provider_tree [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:17:57 np0005629333 nova_compute[244014]: 2026-02-25 12:17:57.822 244018 DEBUG nova.scheduler.client.report [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:17:57 np0005629333 nova_compute[244014]: 2026-02-25 12:17:57.868 244018 DEBUG oslo_concurrency.lockutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
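The inventory payload logged at 12:17:57.822 is what Placement turns into schedulable capacity: capacity = (total - reserved) * allocation_ratio per resource class. Worked out for the values above:

inventory = {
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, round(capacity, 1))  # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2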
Feb 25 07:17:57 np0005629333 nova_compute[244014]: 2026-02-25 12:17:57.870 244018 DEBUG nova.compute.manager [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:17:57 np0005629333 nova_compute[244014]: 2026-02-25 12:17:57.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:17:57 np0005629333 nova_compute[244014]: 2026-02-25 12:17:57.923 244018 DEBUG nova.compute.manager [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:17:57 np0005629333 nova_compute[244014]: 2026-02-25 12:17:57.923 244018 DEBUG nova.network.neutron [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:17:57 np0005629333 nova_compute[244014]: 2026-02-25 12:17:57.951 244018 INFO nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:17:57 np0005629333 nova_compute[244014]: 2026-02-25 12:17:57.977 244018 DEBUG nova.compute.manager [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:18:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.125 244018 DEBUG nova.compute.manager [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.127 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.127 244018 INFO nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Creating image(s)
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.145 244018 DEBUG nova.storage.rbd_utils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] rbd image f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.162 244018 DEBUG nova.storage.rbd_utils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] rbd image f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.181 244018 DEBUG nova.storage.rbd_utils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] rbd image f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.186 244018 DEBUG oslo_concurrency.processutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.229 244018 DEBUG oslo_concurrency.processutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
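The qemu-img info probe is deliberately wrapped in oslo_concurrency.prlimit so a malformed image cannot blow up memory or CPU; the --as/--cpu values in the logged command correspond to the ProcessLimits below. A sketch of issuing the same guarded call:

from oslo_concurrency import processutils

# address_space/cpu_time map onto the logged "--as=1073741824 --cpu=30" flags.
limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
out, _err = processutils.execute(
    'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
    '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
    '--force-share', '--output=json', prlimit=limits)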
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.230 244018 DEBUG oslo_concurrency.lockutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.231 244018 DEBUG oslo_concurrency.lockutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.231 244018 DEBUG oslo_concurrency.lockutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.269 244018 DEBUG nova.storage.rbd_utils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] rbd image f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.273 244018 DEBUG oslo_concurrency.processutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:17:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:17:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:17:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:17:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.316 244018 DEBUG nova.network.neutron [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.316 244018 DEBUG nova.compute.manager [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 07:17:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:17:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:17:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:17:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:17:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:17:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:17:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:17:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
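Each handle_command/dispatch pair above is the monitor unpacking one mon_command from a client or the mgr; the {"prefix": ..., "format": "json"} payloads map one-to-one onto the librados mon_command API. A sketch of issuing one of the logged commands directly (assuming the client.openstack keyring is readable; output key names follow current Ceph JSON):

import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
cluster.connect()
try:
    # Same payload the audit log shows for "df": dispatched by the mon leader.
    ret, out, errs = cluster.mon_command(
        json.dumps({'prefix': 'df', 'format': 'json'}), b'')
    print(ret, json.loads(out)['stats']['total_bytes'])
finally:
    cluster.shutdown()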
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.390 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.545 244018 DEBUG oslo_concurrency.processutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.626 244018 DEBUG nova.storage.rbd_utils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] resizing rbd image f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
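Import plus resize is the whole provisioning path here: the cached base file is imported into the vms pool, then grown to the flavor's 1 GiB root disk. A sketch of the equivalent steps, using the rbd CLI for the import (as the log does) and the python rbd binding for the resize; error handling omitted:

import subprocess
import rados
import rbd

# Import the cached base file into the "vms" pool (same command as the log).
subprocess.check_call(
    ['rbd', 'import', '--pool', 'vms',
     '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
     'f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk', '--image-format=2',
     '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])

# Grow the image to the flavor's root disk size.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('vms')
    try:
        with rbd.Image(ioctx, 'f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk') as image:
            image.resize(1 * 1024 ** 3)  # 1073741824 bytes, matching the log
    finally:
        ioctx.close()
finally:
    cluster.shutdown()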
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.746 244018 DEBUG nova.objects.instance [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lazy-loading 'migration_context' on Instance uuid f53717b5-3196-44b5-bc3a-aa8f53ce397d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.763 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.764 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Ensure instance console log exists: /var/lib/nova/instances/f53717b5-3196-44b5-bc3a-aa8f53ce397d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.764 244018 DEBUG oslo_concurrency.lockutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.765 244018 DEBUG oslo_concurrency.lockutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.765 244018 DEBUG oslo_concurrency.lockutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.768 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.777 244018 WARNING nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.785 244018 DEBUG nova.virt.libvirt.host [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.785 244018 DEBUG nova.virt.libvirt.host [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.789 244018 DEBUG nova.virt.libvirt.host [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.789 244018 DEBUG nova.virt.libvirt.host [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
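The two probes above distinguish cgroups v1 from v2: v1 would expose the cpu controller as its own mounted hierarchy, while v2 advertises controllers in a single cgroup.controllers file, which is what matched on this host. A simplified sketch of both checks (Nova's real v1 check walks the mount table):

import os

def has_cgroupsv1_cpu_controller():
    # Simplified: a v1 "cpu" controller would appear as its own hierarchy.
    return os.path.isdir('/sys/fs/cgroup/cpu')

def has_cgroupsv2_cpu_controller():
    # v2: one unified hierarchy; available controllers are listed in a file.
    try:
        with open('/sys/fs/cgroup/cgroup.controllers') as f:
            return 'cpu' in f.read().split()
    except FileNotFoundError:
        return False

print(has_cgroupsv1_cpu_controller(), has_cgroupsv2_cpu_controller())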
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.790 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.790 244018 DEBUG nova.virt.hardware [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.791 244018 DEBUG nova.virt.hardware [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.791 244018 DEBUG nova.virt.hardware [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.791 244018 DEBUG nova.virt.hardware [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.792 244018 DEBUG nova.virt.hardware [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.792 244018 DEBUG nova.virt.hardware [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.792 244018 DEBUG nova.virt.hardware [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.793 244018 DEBUG nova.virt.hardware [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.793 244018 DEBUG nova.virt.hardware [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.793 244018 DEBUG nova.virt.hardware [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.794 244018 DEBUG nova.virt.hardware [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
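The topology search traced above enumerates (sockets, cores, threads) factorizations of the vCPU count within the 65536 limits; for a single vCPU that collapses to 1:1:1, which is exactly the <topology> element in the guest XML below. A toy version of the enumeration:

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    # Yield every (sockets, cores, threads) triple whose product is the vCPU count.
    for s in range(1, min(vcpus, max_sockets) + 1):
        for c in range(1, min(vcpus, max_cores) + 1):
            for t in range(1, min(vcpus, max_threads) + 1):
                if s * c * t == vcpus:
                    yield (s, c, t)

print(list(possible_topologies(1)))  # [(1, 1, 1)]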
Feb 25 07:17:58 np0005629333 nova_compute[244014]: 2026-02-25 12:17:58.798 244018 DEBUG oslo_concurrency.processutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:17:58 np0005629333 podman[258805]: 2026-02-25 12:17:58.808904001 +0000 UTC m=+0.066621181 container create 3293bf7d3d8546ed8e2883a90c686c4e9526a4ece0e21602039a4cf248c4b10f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_babbage, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 07:17:58 np0005629333 systemd[1]: Started libpod-conmon-3293bf7d3d8546ed8e2883a90c686c4e9526a4ece0e21602039a4cf248c4b10f.scope.
Feb 25 07:17:58 np0005629333 podman[258805]: 2026-02-25 12:17:58.777626063 +0000 UTC m=+0.035343253 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:17:58 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:17:58 np0005629333 podman[258805]: 2026-02-25 12:17:58.903106505 +0000 UTC m=+0.160823675 container init 3293bf7d3d8546ed8e2883a90c686c4e9526a4ece0e21602039a4cf248c4b10f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_babbage, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:17:58 np0005629333 podman[258805]: 2026-02-25 12:17:58.911595563 +0000 UTC m=+0.169312753 container start 3293bf7d3d8546ed8e2883a90c686c4e9526a4ece0e21602039a4cf248c4b10f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:17:58 np0005629333 sleepy_babbage[258825]: 167 167
Feb 25 07:17:58 np0005629333 systemd[1]: libpod-3293bf7d3d8546ed8e2883a90c686c4e9526a4ece0e21602039a4cf248c4b10f.scope: Deactivated successfully.
Feb 25 07:17:58 np0005629333 podman[258805]: 2026-02-25 12:17:58.919348681 +0000 UTC m=+0.177065871 container attach 3293bf7d3d8546ed8e2883a90c686c4e9526a4ece0e21602039a4cf248c4b10f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_babbage, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 07:17:58 np0005629333 podman[258805]: 2026-02-25 12:17:58.919711321 +0000 UTC m=+0.177428481 container died 3293bf7d3d8546ed8e2883a90c686c4e9526a4ece0e21602039a4cf248c4b10f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:17:58 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1150796ec1108e939246a3387ab9a4e098f60b631d9a3209ec2c88f6be0fdd64-merged.mount: Deactivated successfully.
Feb 25 07:17:58 np0005629333 podman[258805]: 2026-02-25 12:17:58.989210212 +0000 UTC m=+0.246927402 container remove 3293bf7d3d8546ed8e2883a90c686c4e9526a4ece0e21602039a4cf248c4b10f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 07:17:59 np0005629333 systemd[1]: libpod-conmon-3293bf7d3d8546ed8e2883a90c686c4e9526a4ece0e21602039a4cf248c4b10f.scope: Deactivated successfully.
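The create/init/start/attach/died/remove burst above is the lifecycle of a short-lived cephadm helper container, removed as soon as it prints its result (the "167 167" uid/gid pair). A rough equivalent invocation; the payload command is hypothetical, since the log only records the container's output:

import subprocess

subprocess.run(
    ['podman', 'run', '--rm',
     'quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86',
     'stat', '-c', '%u %g', '/var/lib/ceph'],  # hypothetical payload
    check=True)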
Feb 25 07:17:59 np0005629333 podman[258866]: 2026-02-25 12:17:59.145460707 +0000 UTC m=+0.053004348 container create 28fccd64b344ca4372c186efb158c4dd0d313a686a68f3c4ec1f870963681993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 07:17:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:17:59.156 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:17:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:17:59 np0005629333 systemd[1]: Started libpod-conmon-28fccd64b344ca4372c186efb158c4dd0d313a686a68f3c4ec1f870963681993.scope.
Feb 25 07:17:59 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:17:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b05b41e790b4f9c39b9674b05b6118941869517ace1abcd6c9d3e647d2efdb4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:17:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b05b41e790b4f9c39b9674b05b6118941869517ace1abcd6c9d3e647d2efdb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:17:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b05b41e790b4f9c39b9674b05b6118941869517ace1abcd6c9d3e647d2efdb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:17:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b05b41e790b4f9c39b9674b05b6118941869517ace1abcd6c9d3e647d2efdb4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:17:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b05b41e790b4f9c39b9674b05b6118941869517ace1abcd6c9d3e647d2efdb4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:17:59 np0005629333 podman[258866]: 2026-02-25 12:17:59.125544518 +0000 UTC m=+0.033088169 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:17:59 np0005629333 podman[258866]: 2026-02-25 12:17:59.235497295 +0000 UTC m=+0.143040956 container init 28fccd64b344ca4372c186efb158c4dd0d313a686a68f3c4ec1f870963681993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Feb 25 07:17:59 np0005629333 podman[258866]: 2026-02-25 12:17:59.24210553 +0000 UTC m=+0.149649171 container start 28fccd64b344ca4372c186efb158c4dd0d313a686a68f3c4ec1f870963681993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:17:59 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:17:59 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:17:59 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:17:59 np0005629333 podman[258866]: 2026-02-25 12:17:59.255589799 +0000 UTC m=+0.163133440 container attach 28fccd64b344ca4372c186efb158c4dd0d313a686a68f3c4ec1f870963681993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 07:17:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:17:59 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3561680764' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:17:59 np0005629333 nova_compute[244014]: 2026-02-25 12:17:59.441 244018 DEBUG oslo_concurrency.processutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.644s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
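ceph mon dump is run while building the guest XML because the monitor addresses become the <host> elements of the RBD disk sources further down. A sketch of extracting host/port pairs from the dump (key names follow current Ceph JSON output; IPv6 handling omitted):

import json
import subprocess

dump = json.loads(subprocess.check_output(
    ['ceph', 'mon', 'dump', '--format=json', '--id', 'openstack',
     '--conf', '/etc/ceph/ceph.conf']))
for mon in dump['mons']:
    addr = mon['public_addr'].split('/')[0]  # e.g. "192.168.122.100:6789/0" -> drop the nonce
    host, port = addr.rsplit(':', 1)
    print(host, port)  # feeds <host name="192.168.122.100" port="6789"/>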
Feb 25 07:17:59 np0005629333 nova_compute[244014]: 2026-02-25 12:17:59.458 244018 DEBUG nova.storage.rbd_utils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] rbd image f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:17:59 np0005629333 nova_compute[244014]: 2026-02-25 12:17:59.462 244018 DEBUG oslo_concurrency.processutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:17:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 966 KiB/s wr, 96 op/s
Feb 25 07:17:59 np0005629333 gifted_chatterjee[258881]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:17:59 np0005629333 gifted_chatterjee[258881]: --> All data devices are unavailable
Feb 25 07:17:59 np0005629333 systemd[1]: libpod-28fccd64b344ca4372c186efb158c4dd0d313a686a68f3c4ec1f870963681993.scope: Deactivated successfully.
Feb 25 07:17:59 np0005629333 podman[258866]: 2026-02-25 12:17:59.695527477 +0000 UTC m=+0.603071188 container died 28fccd64b344ca4372c186efb158c4dd0d313a686a68f3c4ec1f870963681993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chatterjee, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:17:59 np0005629333 systemd[1]: var-lib-containers-storage-overlay-0b05b41e790b4f9c39b9674b05b6118941869517ace1abcd6c9d3e647d2efdb4-merged.mount: Deactivated successfully.
Feb 25 07:17:59 np0005629333 podman[258866]: 2026-02-25 12:17:59.769937936 +0000 UTC m=+0.677481607 container remove 28fccd64b344ca4372c186efb158c4dd0d313a686a68f3c4ec1f870963681993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chatterjee, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 07:17:59 np0005629333 systemd[1]: libpod-conmon-28fccd64b344ca4372c186efb158c4dd0d313a686a68f3c4ec1f870963681993.scope: Deactivated successfully.
Feb 25 07:17:59 np0005629333 nova_compute[244014]: 2026-02-25 12:17:59.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:17:59 np0005629333 nova_compute[244014]: 2026-02-25 12:17:59.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:18:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:18:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/868051482' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:18:00 np0005629333 nova_compute[244014]: 2026-02-25 12:18:00.021 244018 DEBUG oslo_concurrency.processutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:18:00 np0005629333 nova_compute[244014]: 2026-02-25 12:18:00.023 244018 DEBUG nova.objects.instance [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lazy-loading 'pci_devices' on Instance uuid f53717b5-3196-44b5-bc3a-aa8f53ce397d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:18:00 np0005629333 nova_compute[244014]: 2026-02-25 12:18:00.038 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:18:00 np0005629333 nova_compute[244014]:  <uuid>f53717b5-3196-44b5-bc3a-aa8f53ce397d</uuid>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:  <name>instance-0000000e</name>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerDiagnosticsNegativeTest-server-954327769</nova:name>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:17:58</nova:creationTime>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:18:00 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:        <nova:user uuid="d8d80b1211534b27a4143b5919f28d50">tempest-ServerDiagnosticsNegativeTest-2137851213-project-member</nova:user>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:        <nova:project uuid="2a1df86b8c4e48d7aaa799f3804282bc">tempest-ServerDiagnosticsNegativeTest-2137851213</nova:project>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      <nova:ports/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      <entry name="serial">f53717b5-3196-44b5-bc3a-aa8f53ce397d</entry>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      <entry name="uuid">f53717b5-3196-44b5-bc3a-aa8f53ce397d</entry>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk">
Feb 25 07:18:00 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:18:00 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk.config">
Feb 25 07:18:00 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:18:00 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/f53717b5-3196-44b5-bc3a-aa8f53ce397d/console.log" append="off"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:18:00 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:18:00 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:18:00 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:18:00 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
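
[Editor's note] The dump that ends above is the complete libvirt domain definition Nova generated for instance-0000000e: q35 machine type, host-model CPU, two RBD-backed disks (vda on virtio, the config-drive CD-ROM on sata), VNC graphics, and 24 pcie-root-port controllers for hotplug headroom. A short sketch, assuming the XML is saved to domain.xml, that extracts the disks and flavor with the standard library; the nova: namespace URI is the one declared in the dump.

    import xml.etree.ElementTree as ET

    NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}
    root = ET.parse("domain.xml").getroot()

    # Disk targets and their RBD backing images.
    for disk in root.findall("./devices/disk"):
        target = disk.find("target")
        source = disk.find("source")
        print(target.get("dev"), target.get("bus"),
              source.get("protocol"), source.get("name"))

    # Flavor recorded in the Nova metadata block.
    flavor = root.find("./metadata/nova:instance/nova:flavor", NS)
    print("flavor:", flavor.get("name"),
          "memory:", flavor.findtext("nova:memory", namespaces=NS), "MiB")
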
Feb 25 07:18:00 np0005629333 nova_compute[244014]: 2026-02-25 12:18:00.110 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:18:00 np0005629333 nova_compute[244014]: 2026-02-25 12:18:00.111 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:18:00 np0005629333 nova_compute[244014]: 2026-02-25 12:18:00.111 244018 INFO nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Using config drive#033[00m
Feb 25 07:18:00 np0005629333 nova_compute[244014]: 2026-02-25 12:18:00.140 244018 DEBUG nova.storage.rbd_utils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] rbd image f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
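
[Editor's note] rbd_utils is probing whether the config-drive image already exists in the vms pool before building it. The same check through the librbd Python bindings, sketched under the assumption that python3-rbd/python3-rados and a readable /etc/ceph/ceph.conf are available on the host; pool, image, and client names are copied from the log.

    import rados
    import rbd

    def rbd_image_exists(pool, name, conf="/etc/ceph/ceph.conf",
                         user="openstack"):
        cluster = rados.Rados(conffile=conf, rados_id=user)
        cluster.connect()
        try:
            ioctx = cluster.open_ioctx(pool)
            try:
                rbd.Image(ioctx, name).close()
                return True
            except rbd.ImageNotFound:
                return False
            finally:
                ioctx.close()
        finally:
            cluster.shutdown()

    print(rbd_image_exists(
        "vms", "f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk.config"))
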
Feb 25 07:18:00 np0005629333 podman[259037]: 2026-02-25 12:18:00.227022605 +0000 UTC m=+0.061581629 container create 12978ed38e42107ae420285be7273918a7782f9ed396724f03408ab79f0c7183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_herschel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 07:18:00 np0005629333 systemd[1]: Started libpod-conmon-12978ed38e42107ae420285be7273918a7782f9ed396724f03408ab79f0c7183.scope.
Feb 25 07:18:00 np0005629333 podman[259037]: 2026-02-25 12:18:00.200263954 +0000 UTC m=+0.034823028 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:18:00 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:18:00 np0005629333 podman[259037]: 2026-02-25 12:18:00.329581824 +0000 UTC m=+0.164140838 container init 12978ed38e42107ae420285be7273918a7782f9ed396724f03408ab79f0c7183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 07:18:00 np0005629333 podman[259037]: 2026-02-25 12:18:00.336990092 +0000 UTC m=+0.171549106 container start 12978ed38e42107ae420285be7273918a7782f9ed396724f03408ab79f0c7183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_herschel, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:18:00 np0005629333 exciting_herschel[259052]: 167 167
Feb 25 07:18:00 np0005629333 systemd[1]: libpod-12978ed38e42107ae420285be7273918a7782f9ed396724f03408ab79f0c7183.scope: Deactivated successfully.
Feb 25 07:18:00 np0005629333 podman[259037]: 2026-02-25 12:18:00.345065078 +0000 UTC m=+0.179624082 container attach 12978ed38e42107ae420285be7273918a7782f9ed396724f03408ab79f0c7183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 07:18:00 np0005629333 podman[259037]: 2026-02-25 12:18:00.345850871 +0000 UTC m=+0.180409895 container died 12978ed38e42107ae420285be7273918a7782f9ed396724f03408ab79f0c7183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_herschel, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 07:18:00 np0005629333 systemd[1]: var-lib-containers-storage-overlay-807689e24177a5b89e496204e5e74597949f6a8f76e55e0cf3fa3cff4eefccb2-merged.mount: Deactivated successfully.
Feb 25 07:18:00 np0005629333 podman[259037]: 2026-02-25 12:18:00.400917246 +0000 UTC m=+0.235476240 container remove 12978ed38e42107ae420285be7273918a7782f9ed396724f03408ab79f0c7183 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_herschel, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:18:00 np0005629333 nova_compute[244014]: 2026-02-25 12:18:00.415 244018 INFO nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Creating config drive at /var/lib/nova/instances/f53717b5-3196-44b5-bc3a-aa8f53ce397d/disk.config#033[00m
Feb 25 07:18:00 np0005629333 nova_compute[244014]: 2026-02-25 12:18:00.420 244018 DEBUG oslo_concurrency.processutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f53717b5-3196-44b5-bc3a-aa8f53ce397d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpc6vxtgnr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:18:00 np0005629333 systemd[1]: libpod-conmon-12978ed38e42107ae420285be7273918a7782f9ed396724f03408ab79f0c7183.scope: Deactivated successfully.
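
[Editor's note] The podman lines interleaved with the nova messages are one short-lived Ceph container: create, init, start, attach, died, remove within roughly 200 ms, exactly the footprint of a one-shot "podman run --rm". The single line it printed ("167 167") looks like cephadm's uid/gid probe of the ceph user inside the image. A sketch of an equivalent invocation via subprocess; the stat command is a guess at the probe, not taken from the log.

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # One-shot container: journald will log the same
    # create -> init -> start -> attach -> died -> remove sequence.
    result = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "stat", "-c", "%u %g", "/var/lib/ceph"],  # hypothetical probe
        capture_output=True, text=True, check=True)
    print(result.stdout.strip())  # expected "167 167", the ceph uid/gid
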
Feb 25 07:18:00 np0005629333 ovn_controller[147040]: 2026-02-25T12:18:00Z|00049|binding|INFO|Releasing lport 7dd8ba6d-d4c9-4e9e-b4b6-0321252d1200 from this chassis (sb_readonly=0)
Feb 25 07:18:00 np0005629333 NetworkManager[49836]: <info>  [1772021880.4491] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Feb 25 07:18:00 np0005629333 nova_compute[244014]: 2026-02-25 12:18:00.449 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:00 np0005629333 NetworkManager[49836]: <info>  [1772021880.4501] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Feb 25 07:18:00 np0005629333 nova_compute[244014]: 2026-02-25 12:18:00.461 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:00 np0005629333 ovn_controller[147040]: 2026-02-25T12:18:00Z|00050|binding|INFO|Releasing lport 7dd8ba6d-d4c9-4e9e-b4b6-0321252d1200 from this chassis (sb_readonly=0)
Feb 25 07:18:00 np0005629333 nova_compute[244014]: 2026-02-25 12:18:00.466 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:00 np0005629333 nova_compute[244014]: 2026-02-25 12:18:00.541 244018 DEBUG oslo_concurrency.processutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f53717b5-3196-44b5-bc3a-aa8f53ce397d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpc6vxtgnr" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
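
[Editor's note] A config drive is just an ISO 9660 filesystem labelled config-2, built from a temporary staging directory and later attached as the sata CD-ROM declared in the domain XML. The sketch below reruns the exact mkisofs argv from the log through subprocess; the two paths are placeholders for the per-instance files Nova used.

    import subprocess

    def build_config_drive(output_iso, staging_dir,
                           publisher="OpenStack Compute"):
        cmd = [
            "/usr/bin/mkisofs", "-o", output_iso,
            "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
            "-publisher", publisher,
            "-quiet", "-J", "-r", "-V", "config-2",
            staging_dir,
        ]
        # Like oslo.concurrency's processutils.execute(): fail loudly
        # on a non-zero exit status.
        subprocess.run(cmd, check=True)

    build_config_drive("/path/to/disk.config", "/path/to/staging")
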
Feb 25 07:18:00 np0005629333 podman[259079]: 2026-02-25 12:18:00.566065752 +0000 UTC m=+0.061137887 container create 08c916f9fb5b5822419436728166d8eff234dc507bbf16f064e2d9ea0ecb3bb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:18:00 np0005629333 nova_compute[244014]: 2026-02-25 12:18:00.574 244018 DEBUG nova.storage.rbd_utils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] rbd image f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:18:00 np0005629333 nova_compute[244014]: 2026-02-25 12:18:00.580 244018 DEBUG oslo_concurrency.processutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f53717b5-3196-44b5-bc3a-aa8f53ce397d/disk.config f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:18:00 np0005629333 systemd[1]: Started libpod-conmon-08c916f9fb5b5822419436728166d8eff234dc507bbf16f064e2d9ea0ecb3bb5.scope.
Feb 25 07:18:00 np0005629333 podman[259079]: 2026-02-25 12:18:00.533722114 +0000 UTC m=+0.028794149 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:18:00 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:18:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/596ba95705856135a09a99b11f13f01ed6775b1e98d1ae63da340e1ffdc05d69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:18:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/596ba95705856135a09a99b11f13f01ed6775b1e98d1ae63da340e1ffdc05d69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:18:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/596ba95705856135a09a99b11f13f01ed6775b1e98d1ae63da340e1ffdc05d69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:18:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/596ba95705856135a09a99b11f13f01ed6775b1e98d1ae63da340e1ffdc05d69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:18:00 np0005629333 podman[259079]: 2026-02-25 12:18:00.656184961 +0000 UTC m=+0.151257026 container init 08c916f9fb5b5822419436728166d8eff234dc507bbf16f064e2d9ea0ecb3bb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:18:00 np0005629333 podman[259079]: 2026-02-25 12:18:00.662885169 +0000 UTC m=+0.157957184 container start 08c916f9fb5b5822419436728166d8eff234dc507bbf16f064e2d9ea0ecb3bb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:18:00 np0005629333 podman[259079]: 2026-02-25 12:18:00.670808232 +0000 UTC m=+0.165880287 container attach 08c916f9fb5b5822419436728166d8eff234dc507bbf16f064e2d9ea0ecb3bb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 07:18:00 np0005629333 nova_compute[244014]: 2026-02-25 12:18:00.728 244018 DEBUG oslo_concurrency.processutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f53717b5-3196-44b5-bc3a-aa8f53ce397d/disk.config f53717b5-3196-44b5-bc3a-aa8f53ce397d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:18:00 np0005629333 nova_compute[244014]: 2026-02-25 12:18:00.729 244018 INFO nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Deleting local config drive /var/lib/nova/instances/f53717b5-3196-44b5-bc3a-aa8f53ce397d/disk.config because it was imported into RBD.#033[00m
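
[Editor's note] With RBD as the image backend the ISO does not stay on local disk: it is imported into the vms pool and the local copy deleted, as the two messages above record. The same two steps, argv copied from the log, paths parameterized.

    import os
    import subprocess

    def import_config_drive(local_path, image_name, pool="vms",
                            user="openstack", conf="/etc/ceph/ceph.conf"):
        subprocess.run(
            ["rbd", "import", "--pool", pool, local_path, image_name,
             "--image-format=2", "--id", user, "--conf", conf],
            check=True)
        # "Deleting local config drive ... because it was imported
        # into RBD."
        os.unlink(local_path)
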
Feb 25 07:18:00 np0005629333 systemd-machined[210048]: New machine qemu-16-instance-0000000e.
Feb 25 07:18:00 np0005629333 systemd[1]: Started Virtual Machine qemu-16-instance-0000000e.
Feb 25 07:18:00 np0005629333 nova_compute[244014]: 2026-02-25 12:18:00.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:18:00 np0005629333 nova_compute[244014]: 2026-02-25 12:18:00.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:18:00 np0005629333 nova_compute[244014]: 2026-02-25 12:18:00.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
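
[Editor's note] _reclaim_queued_deletes returns immediately because reclaim_instance_interval is at its disabled default. The guard is a plain oslo.config check; a sketch with the option name taken from the log (the task body itself is omitted as hypothetical).

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.IntOpt("reclaim_instance_interval", default=0,
                   help="Seconds a soft-deleted instance lingers before "
                        "reclaim; <= 0 disables the periodic task."),
    ])

    def reclaim_queued_deletes():
        if CONF.reclaim_instance_interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # ... find SOFT_DELETED instances older than the interval
        # and really delete them (omitted).

    reclaim_queued_deletes()
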
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]: {
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:    "0": [
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:        {
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "devices": [
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "/dev/loop3"
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            ],
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "lv_name": "ceph_lv0",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "lv_size": "21470642176",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "name": "ceph_lv0",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "tags": {
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.cluster_name": "ceph",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.crush_device_class": "",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.encrypted": "0",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.objectstore": "bluestore",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.osd_id": "0",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.type": "block",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.vdo": "0",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.with_tpm": "0"
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            },
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "type": "block",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "vg_name": "ceph_vg0"
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:        }
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:    ],
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:    "1": [
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:        {
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "devices": [
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "/dev/loop4"
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            ],
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "lv_name": "ceph_lv1",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "lv_size": "21470642176",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "name": "ceph_lv1",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "tags": {
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.cluster_name": "ceph",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.crush_device_class": "",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.encrypted": "0",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.objectstore": "bluestore",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.osd_id": "1",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.type": "block",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.vdo": "0",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.with_tpm": "0"
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            },
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "type": "block",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "vg_name": "ceph_vg1"
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:        }
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:    ],
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:    "2": [
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:        {
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "devices": [
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "/dev/loop5"
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            ],
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "lv_name": "ceph_lv2",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "lv_size": "21470642176",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "name": "ceph_lv2",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "tags": {
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.cluster_name": "ceph",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.crush_device_class": "",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.encrypted": "0",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.objectstore": "bluestore",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.osd_id": "2",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.type": "block",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.vdo": "0",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:                "ceph.with_tpm": "0"
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            },
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "type": "block",
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:            "vg_name": "ceph_vg2"
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:        }
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]:    ]
Feb 25 07:18:00 np0005629333 sleepy_clarke[259114]: }
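
[Editor's note] The JSON the sleepy_clarke container just printed has the shape of "ceph-volume lvm list --format json": a map of OSD id to the LVM devices backing it, with the authoritative ceph.* metadata duplicated in lv_tags and in the parsed tags object. All three OSDs here sit on roughly 20 GiB logical volumes over loop devices, and their ceph.cluster_fsid matches the libvirt secret uuid used for the nova disks earlier in the log. A sketch, assuming the blob is saved to osds.json, that reduces it to one line per OSD.

    import json

    with open("osds.json") as f:
        osds = json.load(f)

    for osd_id, devices in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for dev in devices:
            tags = dev["tags"]
            print(f"osd.{osd_id}: {dev['lv_path']} "
                  f"on {','.join(dev['devices'])} "
                  f"fsid={tags['ceph.osd_fsid']} "
                  f"size={int(dev['lv_size']) // 2**30} GiB")
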
Feb 25 07:18:01 np0005629333 systemd[1]: libpod-08c916f9fb5b5822419436728166d8eff234dc507bbf16f064e2d9ea0ecb3bb5.scope: Deactivated successfully.
Feb 25 07:18:01 np0005629333 podman[259079]: 2026-02-25 12:18:01.007009847 +0000 UTC m=+0.502081892 container died 08c916f9fb5b5822419436728166d8eff234dc507bbf16f064e2d9ea0ecb3bb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.030 244018 DEBUG nova.compute.manager [req-b5f05ae6-c339-4f08-9bb5-05e331046bdd req-35e1fe99-c2f0-4108-9c85-60638ccbf2c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Received event network-changed-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.031 244018 DEBUG nova.compute.manager [req-b5f05ae6-c339-4f08-9bb5-05e331046bdd req-35e1fe99-c2f0-4108-9c85-60638ccbf2c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Refreshing instance network info cache due to event network-changed-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.032 244018 DEBUG oslo_concurrency.lockutils [req-b5f05ae6-c339-4f08-9bb5-05e331046bdd req-35e1fe99-c2f0-4108-9c85-60638ccbf2c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-4c768bc2-a711-4179-b12f-604509e47856" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.032 244018 DEBUG oslo_concurrency.lockutils [req-b5f05ae6-c339-4f08-9bb5-05e331046bdd req-35e1fe99-c2f0-4108-9c85-60638ccbf2c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-4c768bc2-a711-4179-b12f-604509e47856" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.033 244018 DEBUG nova.network.neutron [req-b5f05ae6-c339-4f08-9bb5-05e331046bdd req-35e1fe99-c2f0-4108-9c85-60638ccbf2c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Refreshing network info cache for port e4e15a18-1007-4bd1-80a7-b8d2cb64f15c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:18:01 np0005629333 systemd[1]: var-lib-containers-storage-overlay-596ba95705856135a09a99b11f13f01ed6775b1e98d1ae63da340e1ffdc05d69-merged.mount: Deactivated successfully.
Feb 25 07:18:01 np0005629333 podman[259079]: 2026-02-25 12:18:01.073942076 +0000 UTC m=+0.569014101 container remove 08c916f9fb5b5822419436728166d8eff234dc507bbf16f064e2d9ea0ecb3bb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 07:18:01 np0005629333 systemd[1]: libpod-conmon-08c916f9fb5b5822419436728166d8eff234dc507bbf16f064e2d9ea0ecb3bb5.scope: Deactivated successfully.
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.256 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021881.2562892, f53717b5-3196-44b5-bc3a-aa8f53ce397d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.257 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.259 244018 DEBUG nova.compute.manager [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.259 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.262 244018 INFO nova.virt.libvirt.driver [-] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Instance spawned successfully.#033[00m
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.263 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.285 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.292 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.295 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.295 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.295 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.296 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.296 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.297 244018 DEBUG nova.virt.libvirt.driver [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
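
[Editor's note] The six "Found default for ..." lines record the bus and model choices the driver just made (cdrom on sata, disk/video/vif on virtio, input on usb, usbtablet pointer) against the instance, so later rebuilds keep the same device layout even if image or global defaults change. The recorded values as data, with the image-property-first lookup sketched around them; the helper name is hypothetical.

    # Defaults registered for this instance, per the log above.
    REGISTERED_DEFAULTS = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }

    def effective_property(image_properties, prop):
        # An explicit image property wins; otherwise fall back to the
        # value captured at first boot.
        return image_properties.get(prop, REGISTERED_DEFAULTS[prop])

    print(effective_property({}, "hw_disk_bus"))                       # virtio
    print(effective_property({"hw_disk_bus": "scsi"}, "hw_disk_bus"))  # scsi
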
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.337 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.337 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021881.257664, f53717b5-3196-44b5-bc3a-aa8f53ce397d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.337 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] VM Started (Lifecycle Event)#033[00m
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.365 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.368 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
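
[Editor's note] Both lifecycle events ("Resumed", then "Started") trigger the same reconciliation: the database still says power_state 0 while libvirt reports 1, but because the instance has a pending task (spawning) the sync is skipped rather than fighting an in-flight build, as the surrounding "Skip" lines confirm. The rule in miniature, using the conventional Nova power-state values; only 0 and 1 appear verbatim in this log.

    # Conventional nova.compute.power_state values.
    NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0, 1, 3, 4

    def sync_power_state(db_state, vm_state, task_state):
        if task_state is not None:
            # Pending task: leave the DB alone and let the task finish.
            return db_state
        return vm_state

    print(sync_power_state(NOSTATE, RUNNING, "spawning"))  # 0: skipped
    print(sync_power_state(NOSTATE, RUNNING, None))        # 1: synced
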
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.376 244018 INFO nova.compute.manager [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Took 3.25 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.376 244018 DEBUG nova.compute.manager [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.399 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.446 244018 INFO nova.compute.manager [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Took 4.46 seconds to build instance.#033[00m
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.458 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.464 244018 DEBUG oslo_concurrency.lockutils [None req-53151d2f-013e-484c-9e7d-e56f9a6584e7 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lock "f53717b5-3196-44b5-bc3a-aa8f53ce397d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
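
[Editor's note] The whole build ran under a per-instance lock named by the UUID, taken at the start of _locked_do_build_and_run_instance and released here after 4.649 s; that serialization is oslo.concurrency's lockutils. A minimal sketch of both forms of the primitive.

    from oslo_concurrency import lockutils

    uuid = "f53717b5-3196-44b5-bc3a-aa8f53ce397d"

    # Context-manager form: the body runs under the named lock, so a
    # concurrent build/delete for the same instance must wait.
    with lockutils.lock(uuid):
        pass  # build-and-run work goes here

    # Decorator form, equivalent for a whole function.
    @lockutils.synchronized(uuid)
    def locked_do_build_and_run_instance():
        pass
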
Feb 25 07:18:01 np0005629333 podman[259271]: 2026-02-25 12:18:01.507467844 +0000 UTC m=+0.039995893 container create 80cfb2e2da3d146499c15cb555c93697bda0dddf1cd8fef9535f0094c7c50d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:18:01 np0005629333 systemd[1]: Started libpod-conmon-80cfb2e2da3d146499c15cb555c93697bda0dddf1cd8fef9535f0094c7c50d84.scope.
Feb 25 07:18:01 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:18:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 215 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 110 op/s
Feb 25 07:18:01 np0005629333 podman[259271]: 2026-02-25 12:18:01.572215082 +0000 UTC m=+0.104743161 container init 80cfb2e2da3d146499c15cb555c93697bda0dddf1cd8fef9535f0094c7c50d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Feb 25 07:18:01 np0005629333 podman[259271]: 2026-02-25 12:18:01.576709698 +0000 UTC m=+0.109237757 container start 80cfb2e2da3d146499c15cb555c93697bda0dddf1cd8fef9535f0094c7c50d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_proskuriakova, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 07:18:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:18:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:18:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:18:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:18:01 np0005629333 silly_proskuriakova[259287]: 167 167
Feb 25 07:18:01 np0005629333 systemd[1]: libpod-80cfb2e2da3d146499c15cb555c93697bda0dddf1cd8fef9535f0094c7c50d84.scope: Deactivated successfully.
Feb 25 07:18:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:18:01 np0005629333 podman[259271]: 2026-02-25 12:18:01.484601693 +0000 UTC m=+0.017129732 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:18:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:18:01 np0005629333 podman[259271]: 2026-02-25 12:18:01.586514303 +0000 UTC m=+0.119042362 container attach 80cfb2e2da3d146499c15cb555c93697bda0dddf1cd8fef9535f0094c7c50d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_proskuriakova, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 07:18:01 np0005629333 podman[259271]: 2026-02-25 12:18:01.586913134 +0000 UTC m=+0.119441163 container died 80cfb2e2da3d146499c15cb555c93697bda0dddf1cd8fef9535f0094c7c50d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 07:18:01 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d70e07e02d6ceaaea1cb9292422951d70b3a900db669e6e41bb16c0e5beac004-merged.mount: Deactivated successfully.
Feb 25 07:18:01 np0005629333 podman[259271]: 2026-02-25 12:18:01.648562745 +0000 UTC m=+0.181090764 container remove 80cfb2e2da3d146499c15cb555c93697bda0dddf1cd8fef9535f0094c7c50d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_proskuriakova, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:18:01 np0005629333 systemd[1]: libpod-conmon-80cfb2e2da3d146499c15cb555c93697bda0dddf1cd8fef9535f0094c7c50d84.scope: Deactivated successfully.
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:18:01 np0005629333 podman[259312]: 2026-02-25 12:18:01.878375845 +0000 UTC m=+0.070227952 container create 153eefe38931463b752ed452d84d3b476c574ddfb75d730da08f6d730b0f712d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_engelbart, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:18:01 np0005629333 systemd[1]: Started libpod-conmon-153eefe38931463b752ed452d84d3b476c574ddfb75d730da08f6d730b0f712d.scope.
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.918 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.918 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.919 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.919 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 07:18:01 np0005629333 nova_compute[244014]: 2026-02-25 12:18:01.919 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:18:01 np0005629333 podman[259312]: 2026-02-25 12:18:01.84183769 +0000 UTC m=+0.033689817 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:18:01 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:18:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e07bc62d606ef0912c833457b21d1d548806fb4f768908e7f2a52952c6a62e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:18:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e07bc62d606ef0912c833457b21d1d548806fb4f768908e7f2a52952c6a62e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:18:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e07bc62d606ef0912c833457b21d1d548806fb4f768908e7f2a52952c6a62e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:18:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e07bc62d606ef0912c833457b21d1d548806fb4f768908e7f2a52952c6a62e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:18:01 np0005629333 podman[259312]: 2026-02-25 12:18:01.977257231 +0000 UTC m=+0.169109328 container init 153eefe38931463b752ed452d84d3b476c574ddfb75d730da08f6d730b0f712d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_engelbart, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 07:18:01 np0005629333 podman[259312]: 2026-02-25 12:18:01.983511706 +0000 UTC m=+0.175363803 container start 153eefe38931463b752ed452d84d3b476c574ddfb75d730da08f6d730b0f712d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:18:01 np0005629333 podman[259312]: 2026-02-25 12:18:01.991303965 +0000 UTC m=+0.183156052 container attach 153eefe38931463b752ed452d84d3b476c574ddfb75d730da08f6d730b0f712d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:18:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:18:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2701414873' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:18:02 np0005629333 nova_compute[244014]: 2026-02-25 12:18:02.502 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:18:02 np0005629333 nova_compute[244014]: 2026-02-25 12:18:02.563 244018 DEBUG oslo_concurrency.lockutils [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Acquiring lock "f53717b5-3196-44b5-bc3a-aa8f53ce397d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:18:02 np0005629333 nova_compute[244014]: 2026-02-25 12:18:02.565 244018 DEBUG oslo_concurrency.lockutils [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lock "f53717b5-3196-44b5-bc3a-aa8f53ce397d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:18:02 np0005629333 nova_compute[244014]: 2026-02-25 12:18:02.565 244018 DEBUG oslo_concurrency.lockutils [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Acquiring lock "f53717b5-3196-44b5-bc3a-aa8f53ce397d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:18:02 np0005629333 nova_compute[244014]: 2026-02-25 12:18:02.565 244018 DEBUG oslo_concurrency.lockutils [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lock "f53717b5-3196-44b5-bc3a-aa8f53ce397d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:18:02 np0005629333 nova_compute[244014]: 2026-02-25 12:18:02.565 244018 DEBUG oslo_concurrency.lockutils [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lock "f53717b5-3196-44b5-bc3a-aa8f53ce397d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:18:02 np0005629333 nova_compute[244014]: 2026-02-25 12:18:02.566 244018 INFO nova.compute.manager [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Terminating instance
Feb 25 07:18:02 np0005629333 nova_compute[244014]: 2026-02-25 12:18:02.567 244018 DEBUG oslo_concurrency.lockutils [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Acquiring lock "refresh_cache-f53717b5-3196-44b5-bc3a-aa8f53ce397d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:18:02 np0005629333 nova_compute[244014]: 2026-02-25 12:18:02.567 244018 DEBUG oslo_concurrency.lockutils [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Acquired lock "refresh_cache-f53717b5-3196-44b5-bc3a-aa8f53ce397d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:18:02 np0005629333 nova_compute[244014]: 2026-02-25 12:18:02.567 244018 DEBUG nova.network.neutron [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:18:02 np0005629333 nova_compute[244014]: 2026-02-25 12:18:02.588 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:18:02 np0005629333 nova_compute[244014]: 2026-02-25 12:18:02.588 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:18:02 np0005629333 nova_compute[244014]: 2026-02-25 12:18:02.590 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:18:02 np0005629333 nova_compute[244014]: 2026-02-25 12:18:02.591 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:18:02 np0005629333 lvm[259426]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:18:02 np0005629333 lvm[259426]: VG ceph_vg0 finished
Feb 25 07:18:02 np0005629333 lvm[259429]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:18:02 np0005629333 lvm[259429]: VG ceph_vg1 finished
Feb 25 07:18:02 np0005629333 lvm[259430]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:18:02 np0005629333 lvm[259430]: VG ceph_vg2 finished
Feb 25 07:18:02 np0005629333 nova_compute[244014]: 2026-02-25 12:18:02.692 244018 DEBUG nova.network.neutron [req-b5f05ae6-c339-4f08-9bb5-05e331046bdd req-35e1fe99-c2f0-4108-9c85-60638ccbf2c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Updated VIF entry in instance network info cache for port e4e15a18-1007-4bd1-80a7-b8d2cb64f15c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 07:18:02 np0005629333 nova_compute[244014]: 2026-02-25 12:18:02.693 244018 DEBUG nova.network.neutron [req-b5f05ae6-c339-4f08-9bb5-05e331046bdd req-35e1fe99-c2f0-4108-9c85-60638ccbf2c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Updating instance_info_cache with network_info: [{"id": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "address": "fa:16:3e:26:a8:6b", "network": {"id": "1b6027b5-2ee6-49fb-b9e5-c79e5230d07e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-325792779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7e518f2e6c84550b07b36ea68922707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e15a18-10", "ovs_interfaceid": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:18:02 np0005629333 determined_engelbart[259328]: {}
Feb 25 07:18:02 np0005629333 nova_compute[244014]: 2026-02-25 12:18:02.750 244018 DEBUG oslo_concurrency.lockutils [req-b5f05ae6-c339-4f08-9bb5-05e331046bdd req-35e1fe99-c2f0-4108-9c85-60638ccbf2c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-4c768bc2-a711-4179-b12f-604509e47856" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:18:02 np0005629333 systemd[1]: libpod-153eefe38931463b752ed452d84d3b476c574ddfb75d730da08f6d730b0f712d.scope: Deactivated successfully.
Feb 25 07:18:02 np0005629333 podman[259312]: 2026-02-25 12:18:02.755256848 +0000 UTC m=+0.947108965 container died 153eefe38931463b752ed452d84d3b476c574ddfb75d730da08f6d730b0f712d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 07:18:02 np0005629333 systemd[1]: libpod-153eefe38931463b752ed452d84d3b476c574ddfb75d730da08f6d730b0f712d.scope: Consumed 1.000s CPU time.
Feb 25 07:18:02 np0005629333 systemd[1]: var-lib-containers-storage-overlay-0e07bc62d606ef0912c833457b21d1d548806fb4f768908e7f2a52952c6a62e7-merged.mount: Deactivated successfully.
Feb 25 07:18:02 np0005629333 podman[259312]: 2026-02-25 12:18:02.814954754 +0000 UTC m=+1.006806821 container remove 153eefe38931463b752ed452d84d3b476c574ddfb75d730da08f6d730b0f712d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Feb 25 07:18:02 np0005629333 systemd[1]: libpod-conmon-153eefe38931463b752ed452d84d3b476c574ddfb75d730da08f6d730b0f712d.scope: Deactivated successfully.
Feb 25 07:18:02 np0005629333 nova_compute[244014]: 2026-02-25 12:18:02.841 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:18:02 np0005629333 nova_compute[244014]: 2026-02-25 12:18:02.843 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4320MB free_disk=59.959298719652GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 07:18:02 np0005629333 nova_compute[244014]: 2026-02-25 12:18:02.843 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:18:02 np0005629333 nova_compute[244014]: 2026-02-25 12:18:02.844 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:18:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:18:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:18:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:18:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:18:02 np0005629333 nova_compute[244014]: 2026-02-25 12:18:02.947 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 4c768bc2-a711-4179-b12f-604509e47856 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 07:18:02 np0005629333 nova_compute[244014]: 2026-02-25 12:18:02.948 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance f53717b5-3196-44b5-bc3a-aa8f53ce397d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 07:18:02 np0005629333 nova_compute[244014]: 2026-02-25 12:18:02.948 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 07:18:02 np0005629333 nova_compute[244014]: 2026-02-25 12:18:02.949 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 07:18:02 np0005629333 nova_compute[244014]: 2026-02-25 12:18:02.959 244018 DEBUG nova.network.neutron [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:18:03 np0005629333 nova_compute[244014]: 2026-02-25 12:18:03.021 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:18:03 np0005629333 nova_compute[244014]: 2026-02-25 12:18:03.290 244018 DEBUG nova.network.neutron [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:18:03 np0005629333 nova_compute[244014]: 2026-02-25 12:18:03.327 244018 DEBUG oslo_concurrency.lockutils [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Releasing lock "refresh_cache-f53717b5-3196-44b5-bc3a-aa8f53ce397d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:18:03 np0005629333 nova_compute[244014]: 2026-02-25 12:18:03.328 244018 DEBUG nova.compute.manager [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 07:18:03 np0005629333 nova_compute[244014]: 2026-02-25 12:18:03.391 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:18:03 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:18:03 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:18:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:18:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3948967032' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:18:03 np0005629333 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Feb 25 07:18:03 np0005629333 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000e.scope: Consumed 2.410s CPU time.
Feb 25 07:18:03 np0005629333 systemd-machined[210048]: Machine qemu-16-instance-0000000e terminated.
Feb 25 07:18:03 np0005629333 nova_compute[244014]: 2026-02-25 12:18:03.547 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:18:03 np0005629333 nova_compute[244014]: 2026-02-25 12:18:03.555 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:18:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Feb 25 07:18:03 np0005629333 nova_compute[244014]: 2026-02-25 12:18:03.582 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:18:03 np0005629333 nova_compute[244014]: 2026-02-25 12:18:03.627 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 07:18:03 np0005629333 nova_compute[244014]: 2026-02-25 12:18:03.628 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:18:03 np0005629333 nova_compute[244014]: 2026-02-25 12:18:03.752 244018 INFO nova.virt.libvirt.driver [-] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Instance destroyed successfully.
Feb 25 07:18:03 np0005629333 nova_compute[244014]: 2026-02-25 12:18:03.752 244018 DEBUG nova.objects.instance [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lazy-loading 'resources' on Instance uuid f53717b5-3196-44b5-bc3a-aa8f53ce397d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:18:04 np0005629333 ovn_controller[147040]: 2026-02-25T12:18:04Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:26:a8:6b 10.100.0.3
Feb 25 07:18:04 np0005629333 ovn_controller[147040]: 2026-02-25T12:18:04Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:26:a8:6b 10.100.0.3
Feb 25 07:18:04 np0005629333 nova_compute[244014]: 2026-02-25 12:18:04.142 244018 INFO nova.virt.libvirt.driver [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Deleting instance files /var/lib/nova/instances/f53717b5-3196-44b5-bc3a-aa8f53ce397d_del
Feb 25 07:18:04 np0005629333 nova_compute[244014]: 2026-02-25 12:18:04.143 244018 INFO nova.virt.libvirt.driver [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Deletion of /var/lib/nova/instances/f53717b5-3196-44b5-bc3a-aa8f53ce397d_del complete
Feb 25 07:18:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:18:04 np0005629333 nova_compute[244014]: 2026-02-25 12:18:04.186 244018 INFO nova.compute.manager [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Took 0.86 seconds to destroy the instance on the hypervisor.
Feb 25 07:18:04 np0005629333 nova_compute[244014]: 2026-02-25 12:18:04.187 244018 DEBUG oslo.service.loopingcall [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 07:18:04 np0005629333 nova_compute[244014]: 2026-02-25 12:18:04.187 244018 DEBUG nova.compute.manager [-] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 07:18:04 np0005629333 nova_compute[244014]: 2026-02-25 12:18:04.187 244018 DEBUG nova.network.neutron [-] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 07:18:04 np0005629333 nova_compute[244014]: 2026-02-25 12:18:04.286 244018 DEBUG nova.network.neutron [-] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:18:04 np0005629333 nova_compute[244014]: 2026-02-25 12:18:04.299 244018 DEBUG nova.network.neutron [-] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:18:04 np0005629333 nova_compute[244014]: 2026-02-25 12:18:04.322 244018 INFO nova.compute.manager [-] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Took 0.13 seconds to deallocate network for instance.
Feb 25 07:18:04 np0005629333 nova_compute[244014]: 2026-02-25 12:18:04.367 244018 DEBUG oslo_concurrency.lockutils [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:18:04 np0005629333 nova_compute[244014]: 2026-02-25 12:18:04.367 244018 DEBUG oslo_concurrency.lockutils [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:18:04 np0005629333 nova_compute[244014]: 2026-02-25 12:18:04.432 244018 DEBUG oslo_concurrency.processutils [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:18:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:18:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/957699284' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:18:05 np0005629333 nova_compute[244014]: 2026-02-25 12:18:05.168 244018 DEBUG oslo_concurrency.processutils [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.736s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:18:05 np0005629333 nova_compute[244014]: 2026-02-25 12:18:05.175 244018 DEBUG nova.compute.provider_tree [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:18:05 np0005629333 nova_compute[244014]: 2026-02-25 12:18:05.230 244018 DEBUG nova.scheduler.client.report [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:18:05 np0005629333 nova_compute[244014]: 2026-02-25 12:18:05.374 244018 DEBUG oslo_concurrency.lockutils [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:18:05 np0005629333 nova_compute[244014]: 2026-02-25 12:18:05.403 244018 INFO nova.scheduler.client.report [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Deleted allocations for instance f53717b5-3196-44b5-bc3a-aa8f53ce397d
Feb 25 07:18:05 np0005629333 nova_compute[244014]: 2026-02-25 12:18:05.484 244018 DEBUG oslo_concurrency.lockutils [None req-7d8e151a-014c-42e7-9db7-e8c02aecfac1 d8d80b1211534b27a4143b5919f28d50 2a1df86b8c4e48d7aaa799f3804282bc - - default default] Lock "f53717b5-3196-44b5-bc3a-aa8f53ce397d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:18:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Feb 25 07:18:06 np0005629333 nova_compute[244014]: 2026-02-25 12:18:06.462 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:18:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 233 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 266 op/s
Feb 25 07:18:08 np0005629333 nova_compute[244014]: 2026-02-25 12:18:08.394 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:18:08 np0005629333 nova_compute[244014]: 2026-02-25 12:18:08.843 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:18:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:18:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 233 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 192 op/s
Feb 25 07:18:10 np0005629333 nova_compute[244014]: 2026-02-25 12:18:10.704 244018 DEBUG oslo_concurrency.lockutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Acquiring lock "0b5fe226-aafe-4d7f-8a2e-28aea8ed8795" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:18:10 np0005629333 nova_compute[244014]: 2026-02-25 12:18:10.704 244018 DEBUG oslo_concurrency.lockutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lock "0b5fe226-aafe-4d7f-8a2e-28aea8ed8795" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:18:10 np0005629333 nova_compute[244014]: 2026-02-25 12:18:10.727 244018 DEBUG nova.compute.manager [req-3c4e5f6f-2a5e-41e2-9d35-9996577ea915 req-28d85161-6094-4a46-802e-04bc7fea6136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Received event network-changed-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:18:10 np0005629333 nova_compute[244014]: 2026-02-25 12:18:10.727 244018 DEBUG nova.compute.manager [req-3c4e5f6f-2a5e-41e2-9d35-9996577ea915 req-28d85161-6094-4a46-802e-04bc7fea6136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Refreshing instance network info cache due to event network-changed-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:18:10 np0005629333 nova_compute[244014]: 2026-02-25 12:18:10.727 244018 DEBUG oslo_concurrency.lockutils [req-3c4e5f6f-2a5e-41e2-9d35-9996577ea915 req-28d85161-6094-4a46-802e-04bc7fea6136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-4c768bc2-a711-4179-b12f-604509e47856" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:18:10 np0005629333 nova_compute[244014]: 2026-02-25 12:18:10.728 244018 DEBUG oslo_concurrency.lockutils [req-3c4e5f6f-2a5e-41e2-9d35-9996577ea915 req-28d85161-6094-4a46-802e-04bc7fea6136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-4c768bc2-a711-4179-b12f-604509e47856" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:18:10 np0005629333 nova_compute[244014]: 2026-02-25 12:18:10.728 244018 DEBUG nova.network.neutron [req-3c4e5f6f-2a5e-41e2-9d35-9996577ea915 req-28d85161-6094-4a46-802e-04bc7fea6136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Refreshing network info cache for port e4e15a18-1007-4bd1-80a7-b8d2cb64f15c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:18:10 np0005629333 nova_compute[244014]: 2026-02-25 12:18:10.946 244018 DEBUG nova.compute.manager [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:18:11 np0005629333 nova_compute[244014]: 2026-02-25 12:18:11.033 244018 DEBUG oslo_concurrency.lockutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:18:11 np0005629333 nova_compute[244014]: 2026-02-25 12:18:11.034 244018 DEBUG oslo_concurrency.lockutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:18:11 np0005629333 nova_compute[244014]: 2026-02-25 12:18:11.042 244018 DEBUG nova.virt.hardware [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:18:11 np0005629333 nova_compute[244014]: 2026-02-25 12:18:11.043 244018 INFO nova.compute.claims [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:18:11 np0005629333 nova_compute[244014]: 2026-02-25 12:18:11.250 244018 DEBUG oslo_concurrency.processutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:18:11 np0005629333 nova_compute[244014]: 2026-02-25 12:18:11.465 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:18:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 233 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 192 op/s
Feb 25 07:18:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:18:11 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3867391249' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:18:11 np0005629333 nova_compute[244014]: 2026-02-25 12:18:11.794 244018 DEBUG oslo_concurrency.processutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:18:11 np0005629333 nova_compute[244014]: 2026-02-25 12:18:11.800 244018 DEBUG nova.compute.provider_tree [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:18:11 np0005629333 nova_compute[244014]: 2026-02-25 12:18:11.826 244018 DEBUG nova.scheduler.client.report [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:18:12 np0005629333 nova_compute[244014]: 2026-02-25 12:18:12.091 244018 DEBUG oslo_concurrency.lockutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:18:12 np0005629333 nova_compute[244014]: 2026-02-25 12:18:12.093 244018 DEBUG nova.compute.manager [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:18:12 np0005629333 nova_compute[244014]: 2026-02-25 12:18:12.150 244018 DEBUG nova.compute.manager [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:18:12 np0005629333 nova_compute[244014]: 2026-02-25 12:18:12.151 244018 DEBUG nova.network.neutron [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:18:12 np0005629333 nova_compute[244014]: 2026-02-25 12:18:12.172 244018 INFO nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:18:12 np0005629333 nova_compute[244014]: 2026-02-25 12:18:12.191 244018 DEBUG nova.compute.manager [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:18:12 np0005629333 nova_compute[244014]: 2026-02-25 12:18:12.307 244018 DEBUG nova.compute.manager [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:18:12 np0005629333 nova_compute[244014]: 2026-02-25 12:18:12.309 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:18:12 np0005629333 nova_compute[244014]: 2026-02-25 12:18:12.310 244018 INFO nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Creating image(s)
Feb 25 07:18:12 np0005629333 nova_compute[244014]: 2026-02-25 12:18:12.340 244018 DEBUG nova.storage.rbd_utils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] rbd image 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:18:12 np0005629333 nova_compute[244014]: 2026-02-25 12:18:12.370 244018 DEBUG nova.storage.rbd_utils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] rbd image 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:18:12 np0005629333 nova_compute[244014]: 2026-02-25 12:18:12.400 244018 DEBUG nova.storage.rbd_utils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] rbd image 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:18:12 np0005629333 nova_compute[244014]: 2026-02-25 12:18:12.405 244018 DEBUG oslo_concurrency.processutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:18:12 np0005629333 nova_compute[244014]: 2026-02-25 12:18:12.477 244018 DEBUG oslo_concurrency.processutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:18:12 np0005629333 nova_compute[244014]: 2026-02-25 12:18:12.478 244018 DEBUG oslo_concurrency.lockutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:18:12 np0005629333 nova_compute[244014]: 2026-02-25 12:18:12.480 244018 DEBUG oslo_concurrency.lockutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:18:12 np0005629333 nova_compute[244014]: 2026-02-25 12:18:12.480 244018 DEBUG oslo_concurrency.lockutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:18:12 np0005629333 nova_compute[244014]: 2026-02-25 12:18:12.510 244018 DEBUG nova.storage.rbd_utils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] rbd image 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:18:12 np0005629333 nova_compute[244014]: 2026-02-25 12:18:12.516 244018 DEBUG oslo_concurrency.processutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:18:12 np0005629333 nova_compute[244014]: 2026-02-25 12:18:12.628 244018 DEBUG nova.network.neutron [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 25 07:18:12 np0005629333 nova_compute[244014]: 2026-02-25 12:18:12.629 244018 DEBUG nova.compute.manager [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 07:18:12 np0005629333 nova_compute[244014]: 2026-02-25 12:18:12.821 244018 DEBUG oslo_concurrency.processutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:18:12 np0005629333 nova_compute[244014]: 2026-02-25 12:18:12.902 244018 DEBUG nova.storage.rbd_utils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] resizing rbd image 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.003 244018 DEBUG nova.objects.instance [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lazy-loading 'migration_context' on Instance uuid 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.023 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.024 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Ensure instance console log exists: /var/lib/nova/instances/0b5fe226-aafe-4d7f-8a2e-28aea8ed8795/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.024 244018 DEBUG oslo_concurrency.lockutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.025 244018 DEBUG oslo_concurrency.lockutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.025 244018 DEBUG oslo_concurrency.lockutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.027 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.031 244018 WARNING nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.035 244018 DEBUG nova.virt.libvirt.host [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.035 244018 DEBUG nova.virt.libvirt.host [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.038 244018 DEBUG nova.virt.libvirt.host [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.039 244018 DEBUG nova.virt.libvirt.host [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.039 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.040 244018 DEBUG nova.virt.hardware [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.041 244018 DEBUG nova.virt.hardware [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.041 244018 DEBUG nova.virt.hardware [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.042 244018 DEBUG nova.virt.hardware [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.042 244018 DEBUG nova.virt.hardware [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.042 244018 DEBUG nova.virt.hardware [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.043 244018 DEBUG nova.virt.hardware [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.043 244018 DEBUG nova.virt.hardware [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.044 244018 DEBUG nova.virt.hardware [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.044 244018 DEBUG nova.virt.hardware [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.045 244018 DEBUG nova.virt.hardware [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.049 244018 DEBUG oslo_concurrency.processutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.398 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 233 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 179 op/s
Feb 25 07:18:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:18:13 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4162791070' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.582 244018 DEBUG oslo_concurrency.processutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.609 244018 DEBUG nova.storage.rbd_utils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] rbd image 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.613 244018 DEBUG oslo_concurrency.processutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.776 244018 DEBUG nova.network.neutron [req-3c4e5f6f-2a5e-41e2-9d35-9996577ea915 req-28d85161-6094-4a46-802e-04bc7fea6136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Updated VIF entry in instance network info cache for port e4e15a18-1007-4bd1-80a7-b8d2cb64f15c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.778 244018 DEBUG nova.network.neutron [req-3c4e5f6f-2a5e-41e2-9d35-9996577ea915 req-28d85161-6094-4a46-802e-04bc7fea6136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Updating instance_info_cache with network_info: [{"id": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "address": "fa:16:3e:26:a8:6b", "network": {"id": "1b6027b5-2ee6-49fb-b9e5-c79e5230d07e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-325792779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7e518f2e6c84550b07b36ea68922707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e15a18-10", "ovs_interfaceid": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.799 244018 DEBUG oslo_concurrency.lockutils [req-3c4e5f6f-2a5e-41e2-9d35-9996577ea915 req-28d85161-6094-4a46-802e-04bc7fea6136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-4c768bc2-a711-4179-b12f-604509e47856" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:18:13 np0005629333 nova_compute[244014]: 2026-02-25 12:18:13.843 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:18:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:18:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:18:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3952549009' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:18:14 np0005629333 nova_compute[244014]: 2026-02-25 12:18:14.236 244018 DEBUG oslo_concurrency.processutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:18:14 np0005629333 nova_compute[244014]: 2026-02-25 12:18:14.238 244018 DEBUG nova.objects.instance [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lazy-loading 'pci_devices' on Instance uuid 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:18:14 np0005629333 nova_compute[244014]: 2026-02-25 12:18:14.266 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:18:14 np0005629333 nova_compute[244014]:  <uuid>0b5fe226-aafe-4d7f-8a2e-28aea8ed8795</uuid>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:  <name>instance-0000000f</name>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerExternalEventsTest-server-415826439</nova:name>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:18:13</nova:creationTime>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:18:14 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:        <nova:user uuid="0ef1a239db3f426694c02473a6b39653">tempest-ServerExternalEventsTest-569547795-project-member</nova:user>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:        <nova:project uuid="e7d55de27cf3451a8ed2d95d03721e1b">tempest-ServerExternalEventsTest-569547795</nova:project>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      <nova:ports/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      <entry name="serial">0b5fe226-aafe-4d7f-8a2e-28aea8ed8795</entry>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      <entry name="uuid">0b5fe226-aafe-4d7f-8a2e-28aea8ed8795</entry>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_disk">
Feb 25 07:18:14 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:18:14 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_disk.config">
Feb 25 07:18:14 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:18:14 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/0b5fe226-aafe-4d7f-8a2e-28aea8ed8795/console.log" append="off"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:18:14 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:18:14 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:18:14 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:18:14 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 07:18:14 np0005629333 nova_compute[244014]: 2026-02-25 12:18:14.349 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:18:14 np0005629333 nova_compute[244014]: 2026-02-25 12:18:14.350 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:18:14 np0005629333 nova_compute[244014]: 2026-02-25 12:18:14.350 244018 INFO nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Using config drive
Feb 25 07:18:14 np0005629333 nova_compute[244014]: 2026-02-25 12:18:14.379 244018 DEBUG nova.storage.rbd_utils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] rbd image 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:18:14 np0005629333 nova_compute[244014]: 2026-02-25 12:18:14.774 244018 INFO nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Creating config drive at /var/lib/nova/instances/0b5fe226-aafe-4d7f-8a2e-28aea8ed8795/disk.config
Feb 25 07:18:14 np0005629333 nova_compute[244014]: 2026-02-25 12:18:14.780 244018 DEBUG oslo_concurrency.processutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0b5fe226-aafe-4d7f-8a2e-28aea8ed8795/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3666zc2g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:18:14 np0005629333 nova_compute[244014]: 2026-02-25 12:18:14.909 244018 DEBUG oslo_concurrency.processutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0b5fe226-aafe-4d7f-8a2e-28aea8ed8795/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3666zc2g" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:18:14 np0005629333 nova_compute[244014]: 2026-02-25 12:18:14.931 244018 DEBUG nova.storage.rbd_utils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] rbd image 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:18:14 np0005629333 nova_compute[244014]: 2026-02-25 12:18:14.934 244018 DEBUG oslo_concurrency.processutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0b5fe226-aafe-4d7f-8a2e-28aea8ed8795/disk.config 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:18:15 np0005629333 nova_compute[244014]: 2026-02-25 12:18:15.060 244018 DEBUG oslo_concurrency.processutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0b5fe226-aafe-4d7f-8a2e-28aea8ed8795/disk.config 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:18:15 np0005629333 nova_compute[244014]: 2026-02-25 12:18:15.061 244018 INFO nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Deleting local config drive /var/lib/nova/instances/0b5fe226-aafe-4d7f-8a2e-28aea8ed8795/disk.config because it was imported into RBD.
Feb 25 07:18:15 np0005629333 systemd-machined[210048]: New machine qemu-17-instance-0000000f.
Feb 25 07:18:15 np0005629333 systemd[1]: Started Virtual Machine qemu-17-instance-0000000f.
Feb 25 07:18:15 np0005629333 nova_compute[244014]: 2026-02-25 12:18:15.454 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021895.4540918, 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:18:15 np0005629333 nova_compute[244014]: 2026-02-25 12:18:15.455 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] VM Resumed (Lifecycle Event)
Feb 25 07:18:15 np0005629333 nova_compute[244014]: 2026-02-25 12:18:15.460 244018 DEBUG nova.compute.manager [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 07:18:15 np0005629333 nova_compute[244014]: 2026-02-25 12:18:15.461 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 07:18:15 np0005629333 nova_compute[244014]: 2026-02-25 12:18:15.466 244018 INFO nova.virt.libvirt.driver [-] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Instance spawned successfully.
Feb 25 07:18:15 np0005629333 nova_compute[244014]: 2026-02-25 12:18:15.466 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 07:18:15 np0005629333 nova_compute[244014]: 2026-02-25 12:18:15.486 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:18:15 np0005629333 nova_compute[244014]: 2026-02-25 12:18:15.494 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:18:15 np0005629333 nova_compute[244014]: 2026-02-25 12:18:15.498 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:18:15 np0005629333 nova_compute[244014]: 2026-02-25 12:18:15.499 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:18:15 np0005629333 nova_compute[244014]: 2026-02-25 12:18:15.500 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:18:15 np0005629333 nova_compute[244014]: 2026-02-25 12:18:15.500 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:18:15 np0005629333 nova_compute[244014]: 2026-02-25 12:18:15.501 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:18:15 np0005629333 nova_compute[244014]: 2026-02-25 12:18:15.502 244018 DEBUG nova.virt.libvirt.driver [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:18:15 np0005629333 nova_compute[244014]: 2026-02-25 12:18:15.530 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:18:15 np0005629333 nova_compute[244014]: 2026-02-25 12:18:15.531 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021895.4601972, 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:18:15 np0005629333 nova_compute[244014]: 2026-02-25 12:18:15.531 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] VM Started (Lifecycle Event)
Feb 25 07:18:15 np0005629333 nova_compute[244014]: 2026-02-25 12:18:15.552 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:18:15 np0005629333 nova_compute[244014]: 2026-02-25 12:18:15.557 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:18:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 233 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Feb 25 07:18:15 np0005629333 nova_compute[244014]: 2026-02-25 12:18:15.567 244018 INFO nova.compute.manager [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Took 3.26 seconds to spawn the instance on the hypervisor.
Feb 25 07:18:15 np0005629333 nova_compute[244014]: 2026-02-25 12:18:15.567 244018 DEBUG nova.compute.manager [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:18:15 np0005629333 nova_compute[244014]: 2026-02-25 12:18:15.577 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:18:15 np0005629333 nova_compute[244014]: 2026-02-25 12:18:15.640 244018 INFO nova.compute.manager [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Took 4.63 seconds to build instance.
Feb 25 07:18:15 np0005629333 nova_compute[244014]: 2026-02-25 12:18:15.657 244018 DEBUG oslo_concurrency.lockutils [None req-b7081c57-638e-4d77-a76d-ef942360e20b 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lock "0b5fe226-aafe-4d7f-8a2e-28aea8ed8795" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:18:16 np0005629333 nova_compute[244014]: 2026-02-25 12:18:16.108 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:18:16 np0005629333 nova_compute[244014]: 2026-02-25 12:18:16.468 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:18:16 np0005629333 nova_compute[244014]: 2026-02-25 12:18:16.562 244018 DEBUG oslo_concurrency.lockutils [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Acquiring lock "4c768bc2-a711-4179-b12f-604509e47856" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:18:16 np0005629333 nova_compute[244014]: 2026-02-25 12:18:16.563 244018 DEBUG oslo_concurrency.lockutils [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:18:16 np0005629333 nova_compute[244014]: 2026-02-25 12:18:16.564 244018 DEBUG oslo_concurrency.lockutils [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Acquiring lock "4c768bc2-a711-4179-b12f-604509e47856-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:18:16 np0005629333 nova_compute[244014]: 2026-02-25 12:18:16.565 244018 DEBUG oslo_concurrency.lockutils [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:18:16 np0005629333 nova_compute[244014]: 2026-02-25 12:18:16.565 244018 DEBUG oslo_concurrency.lockutils [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:18:16 np0005629333 nova_compute[244014]: 2026-02-25 12:18:16.567 244018 INFO nova.compute.manager [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Terminating instance
Feb 25 07:18:16 np0005629333 nova_compute[244014]: 2026-02-25 12:18:16.569 244018 DEBUG nova.compute.manager [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 07:18:16 np0005629333 kernel: tape4e15a18-10 (unregistering): left promiscuous mode
Feb 25 07:18:16 np0005629333 NetworkManager[49836]: <info>  [1772021896.6224] device (tape4e15a18-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:18:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:18:16Z|00051|binding|INFO|Releasing lport e4e15a18-1007-4bd1-80a7-b8d2cb64f15c from this chassis (sb_readonly=0)
Feb 25 07:18:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:18:16Z|00052|binding|INFO|Setting lport e4e15a18-1007-4bd1-80a7-b8d2cb64f15c down in Southbound
Feb 25 07:18:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:18:16Z|00053|binding|INFO|Removing iface tape4e15a18-10 ovn-installed in OVS
Feb 25 07:18:16 np0005629333 nova_compute[244014]: 2026-02-25 12:18:16.630 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:18:16 np0005629333 nova_compute[244014]: 2026-02-25 12:18:16.638 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:18:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:16.646 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:a8:6b 10.100.0.3'], port_security=['fa:16:3e:26:a8:6b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '4c768bc2-a711-4179-b12f-604509e47856', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c7e518f2e6c84550b07b36ea68922707', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd3f2670-feae-47a4-bde0-e33bcae0ae19', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e09bb1cc-ff05-4116-89c1-7f1944c54b5c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=e4e15a18-1007-4bd1-80a7-b8d2cb64f15c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:18:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:16.649 157129 INFO neutron.agent.ovn.metadata.agent [-] Port e4e15a18-1007-4bd1-80a7-b8d2cb64f15c in datapath 1b6027b5-2ee6-49fb-b9e5-c79e5230d07e unbound from our chassis
Feb 25 07:18:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:16.652 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1b6027b5-2ee6-49fb-b9e5-c79e5230d07e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 07:18:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:16.653 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[de15619b-ab64-4748-ae96-c15f3c867771]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:18:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:16.655 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e namespace which is not needed anymore
Feb 25 07:18:16 np0005629333 nova_compute[244014]: 2026-02-25 12:18:16.660 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:18:16 np0005629333 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Feb 25 07:18:16 np0005629333 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000d.scope: Consumed 11.741s CPU time.
Feb 25 07:18:16 np0005629333 systemd-machined[210048]: Machine qemu-15-instance-0000000d terminated.
Feb 25 07:18:16 np0005629333 podman[259904]: 2026-02-25 12:18:16.721485564 +0000 UTC m=+0.077100505 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 07:18:16 np0005629333 podman[259910]: 2026-02-25 12:18:16.741946438 +0000 UTC m=+0.097354634 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 25 07:18:16 np0005629333 NetworkManager[49836]: <info>  [1772021896.7855] manager: (tape4e15a18-10): new Tun device (/org/freedesktop/NetworkManager/Devices/39)
Feb 25 07:18:16 np0005629333 nova_compute[244014]: 2026-02-25 12:18:16.786 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:16 np0005629333 nova_compute[244014]: 2026-02-25 12:18:16.792 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:16 np0005629333 nova_compute[244014]: 2026-02-25 12:18:16.802 244018 INFO nova.virt.libvirt.driver [-] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Instance destroyed successfully.#033[00m
Feb 25 07:18:16 np0005629333 nova_compute[244014]: 2026-02-25 12:18:16.802 244018 DEBUG nova.objects.instance [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lazy-loading 'resources' on Instance uuid 4c768bc2-a711-4179-b12f-604509e47856 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:18:16 np0005629333 neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e[258463]: [NOTICE]   (258467) : haproxy version is 2.8.14-c23fe91
Feb 25 07:18:16 np0005629333 neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e[258463]: [NOTICE]   (258467) : path to executable is /usr/sbin/haproxy
Feb 25 07:18:16 np0005629333 neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e[258463]: [WARNING]  (258467) : Exiting Master process...
Feb 25 07:18:16 np0005629333 neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e[258463]: [ALERT]    (258467) : Current worker (258469) exited with code 143 (Terminated)
Feb 25 07:18:16 np0005629333 neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e[258463]: [WARNING]  (258467) : All workers exited. Exiting... (0)
Feb 25 07:18:16 np0005629333 systemd[1]: libpod-4e25652aef3bf93b1f0724b7a4fcd75bb4cb93bbe595e197703d248748da5148.scope: Deactivated successfully.
Feb 25 07:18:16 np0005629333 podman[259967]: 2026-02-25 12:18:16.813827665 +0000 UTC m=+0.054008416 container died 4e25652aef3bf93b1f0724b7a4fcd75bb4cb93bbe595e197703d248748da5148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 07:18:16 np0005629333 nova_compute[244014]: 2026-02-25 12:18:16.816 244018 DEBUG nova.virt.libvirt.vif [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:17:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-942373444',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-942373444',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-942373444',id=13,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:17:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c7e518f2e6c84550b07b36ea68922707',ramdisk_id='',reservation_id='r-m5sgaoda',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-701873226',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-701873226-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:17:54Z,user_data=None,user_id='38c6d2e6875a408687bd85066c826987',uuid=4c768bc2-a711-4179-b12f-604509e47856,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "address": "fa:16:3e:26:a8:6b", "network": {"id": "1b6027b5-2ee6-49fb-b9e5-c79e5230d07e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-325792779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7e518f2e6c84550b07b36ea68922707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e15a18-10", "ovs_interfaceid": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:18:16 np0005629333 nova_compute[244014]: 2026-02-25 12:18:16.816 244018 DEBUG nova.network.os_vif_util [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Converting VIF {"id": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "address": "fa:16:3e:26:a8:6b", "network": {"id": "1b6027b5-2ee6-49fb-b9e5-c79e5230d07e", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-325792779-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7e518f2e6c84550b07b36ea68922707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4e15a18-10", "ovs_interfaceid": "e4e15a18-1007-4bd1-80a7-b8d2cb64f15c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:18:16 np0005629333 nova_compute[244014]: 2026-02-25 12:18:16.817 244018 DEBUG nova.network.os_vif_util [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:a8:6b,bridge_name='br-int',has_traffic_filtering=True,id=e4e15a18-1007-4bd1-80a7-b8d2cb64f15c,network=Network(1b6027b5-2ee6-49fb-b9e5-c79e5230d07e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4e15a18-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:18:16 np0005629333 nova_compute[244014]: 2026-02-25 12:18:16.817 244018 DEBUG os_vif [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:a8:6b,bridge_name='br-int',has_traffic_filtering=True,id=e4e15a18-1007-4bd1-80a7-b8d2cb64f15c,network=Network(1b6027b5-2ee6-49fb-b9e5-c79e5230d07e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4e15a18-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:18:16 np0005629333 nova_compute[244014]: 2026-02-25 12:18:16.819 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:16 np0005629333 nova_compute[244014]: 2026-02-25 12:18:16.819 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4e15a18-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:18:16 np0005629333 nova_compute[244014]: 2026-02-25 12:18:16.820 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:16 np0005629333 nova_compute[244014]: 2026-02-25 12:18:16.822 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:16 np0005629333 nova_compute[244014]: 2026-02-25 12:18:16.824 244018 INFO os_vif [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:a8:6b,bridge_name='br-int',has_traffic_filtering=True,id=e4e15a18-1007-4bd1-80a7-b8d2cb64f15c,network=Network(1b6027b5-2ee6-49fb-b9e5-c79e5230d07e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4e15a18-10')#033[00m
Feb 25 07:18:16 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d8c0265e3a393cacb87e5ee458bb9d5c78aa9ff00fd616b394131707d01c2e19-merged.mount: Deactivated successfully.
Feb 25 07:18:16 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4e25652aef3bf93b1f0724b7a4fcd75bb4cb93bbe595e197703d248748da5148-userdata-shm.mount: Deactivated successfully.
Feb 25 07:18:16 np0005629333 podman[259967]: 2026-02-25 12:18:16.848807167 +0000 UTC m=+0.088987918 container cleanup 4e25652aef3bf93b1f0724b7a4fcd75bb4cb93bbe595e197703d248748da5148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 25 07:18:16 np0005629333 systemd[1]: libpod-conmon-4e25652aef3bf93b1f0724b7a4fcd75bb4cb93bbe595e197703d248748da5148.scope: Deactivated successfully.
Feb 25 07:18:16 np0005629333 podman[260019]: 2026-02-25 12:18:16.914266465 +0000 UTC m=+0.045939811 container remove 4e25652aef3bf93b1f0724b7a4fcd75bb4cb93bbe595e197703d248748da5148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 07:18:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:16.918 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ad7360b6-60cf-4977-aa06-51c0328ee530]: (4, ('Wed Feb 25 12:18:16 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e (4e25652aef3bf93b1f0724b7a4fcd75bb4cb93bbe595e197703d248748da5148)\n4e25652aef3bf93b1f0724b7a4fcd75bb4cb93bbe595e197703d248748da5148\nWed Feb 25 12:18:16 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e (4e25652aef3bf93b1f0724b7a4fcd75bb4cb93bbe595e197703d248748da5148)\n4e25652aef3bf93b1f0724b7a4fcd75bb4cb93bbe595e197703d248748da5148\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:16.920 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2a67206f-c909-4dbf-9fda-23ae21bb9614]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:16.921 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b6027b5-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:18:16 np0005629333 kernel: tap1b6027b5-20: left promiscuous mode
Feb 25 07:18:16 np0005629333 nova_compute[244014]: 2026-02-25 12:18:16.922 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:16 np0005629333 nova_compute[244014]: 2026-02-25 12:18:16.930 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:16.933 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ae3d3c91-672f-44f4-8f13-229d42ea548e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:16.947 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cd567ae9-18c0-4543-ad10-e6e0d2c00f0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:16.948 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1ecd893a-ab8f-45a5-8dce-f52c34215d0e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:16.962 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6fbac4b4-3516-4203-8b32-6bc295c47506]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384238, 'reachable_time': 21166, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260036, 'error': None, 'target': 'ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:16.964 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1b6027b5-2ee6-49fb-b9e5-c79e5230d07e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:18:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:16.964 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[b32ad794-a420-4c33-80fb-397d6b88c3b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:16 np0005629333 systemd[1]: run-netns-ovnmeta\x2d1b6027b5\x2d2ee6\x2d49fb\x2db9e5\x2dc79e5230d07e.mount: Deactivated successfully.
Feb 25 07:18:17 np0005629333 nova_compute[244014]: 2026-02-25 12:18:17.097 244018 INFO nova.virt.libvirt.driver [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Deleting instance files /var/lib/nova/instances/4c768bc2-a711-4179-b12f-604509e47856_del#033[00m
Feb 25 07:18:17 np0005629333 nova_compute[244014]: 2026-02-25 12:18:17.106 244018 INFO nova.virt.libvirt.driver [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Deletion of /var/lib/nova/instances/4c768bc2-a711-4179-b12f-604509e47856_del complete#033[00m
Feb 25 07:18:17 np0005629333 nova_compute[244014]: 2026-02-25 12:18:17.153 244018 INFO nova.compute.manager [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Took 0.58 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:18:17 np0005629333 nova_compute[244014]: 2026-02-25 12:18:17.154 244018 DEBUG oslo.service.loopingcall [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:18:17 np0005629333 nova_compute[244014]: 2026-02-25 12:18:17.156 244018 DEBUG nova.compute.manager [-] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:18:17 np0005629333 nova_compute[244014]: 2026-02-25 12:18:17.156 244018 DEBUG nova.network.neutron [-] [instance: 4c768bc2-a711-4179-b12f-604509e47856] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:18:17 np0005629333 nova_compute[244014]: 2026-02-25 12:18:17.161 244018 DEBUG nova.compute.manager [None req-6add7ff5-a093-4d42-ac60-797c74c2ea07 9d18ccdf42b646dcac98db18d105d87a 8ab08bec1ff94fc1bfc78f8059ae282b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:18:17 np0005629333 nova_compute[244014]: 2026-02-25 12:18:17.161 244018 DEBUG nova.compute.manager [None req-6add7ff5-a093-4d42-ac60-797c74c2ea07 9d18ccdf42b646dcac98db18d105d87a 8ab08bec1ff94fc1bfc78f8059ae282b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:18:17 np0005629333 nova_compute[244014]: 2026-02-25 12:18:17.162 244018 DEBUG oslo_concurrency.lockutils [None req-6add7ff5-a093-4d42-ac60-797c74c2ea07 9d18ccdf42b646dcac98db18d105d87a 8ab08bec1ff94fc1bfc78f8059ae282b - - default default] Acquiring lock "refresh_cache-0b5fe226-aafe-4d7f-8a2e-28aea8ed8795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:18:17 np0005629333 nova_compute[244014]: 2026-02-25 12:18:17.162 244018 DEBUG oslo_concurrency.lockutils [None req-6add7ff5-a093-4d42-ac60-797c74c2ea07 9d18ccdf42b646dcac98db18d105d87a 8ab08bec1ff94fc1bfc78f8059ae282b - - default default] Acquired lock "refresh_cache-0b5fe226-aafe-4d7f-8a2e-28aea8ed8795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:18:17 np0005629333 nova_compute[244014]: 2026-02-25 12:18:17.162 244018 DEBUG nova.network.neutron [None req-6add7ff5-a093-4d42-ac60-797c74c2ea07 9d18ccdf42b646dcac98db18d105d87a 8ab08bec1ff94fc1bfc78f8059ae282b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:18:17 np0005629333 nova_compute[244014]: 2026-02-25 12:18:17.309 244018 DEBUG oslo_concurrency.lockutils [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Acquiring lock "0b5fe226-aafe-4d7f-8a2e-28aea8ed8795" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:18:17 np0005629333 nova_compute[244014]: 2026-02-25 12:18:17.309 244018 DEBUG oslo_concurrency.lockutils [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lock "0b5fe226-aafe-4d7f-8a2e-28aea8ed8795" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:18:17 np0005629333 nova_compute[244014]: 2026-02-25 12:18:17.310 244018 DEBUG oslo_concurrency.lockutils [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Acquiring lock "0b5fe226-aafe-4d7f-8a2e-28aea8ed8795-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:18:17 np0005629333 nova_compute[244014]: 2026-02-25 12:18:17.310 244018 DEBUG oslo_concurrency.lockutils [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lock "0b5fe226-aafe-4d7f-8a2e-28aea8ed8795-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:18:17 np0005629333 nova_compute[244014]: 2026-02-25 12:18:17.310 244018 DEBUG oslo_concurrency.lockutils [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lock "0b5fe226-aafe-4d7f-8a2e-28aea8ed8795-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:18:17 np0005629333 nova_compute[244014]: 2026-02-25 12:18:17.311 244018 INFO nova.compute.manager [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Terminating instance#033[00m
Feb 25 07:18:17 np0005629333 nova_compute[244014]: 2026-02-25 12:18:17.312 244018 DEBUG oslo_concurrency.lockutils [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Acquiring lock "refresh_cache-0b5fe226-aafe-4d7f-8a2e-28aea8ed8795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:18:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 248 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.9 MiB/s wr, 238 op/s
Feb 25 07:18:17 np0005629333 nova_compute[244014]: 2026-02-25 12:18:17.872 244018 DEBUG nova.network.neutron [None req-6add7ff5-a093-4d42-ac60-797c74c2ea07 9d18ccdf42b646dcac98db18d105d87a 8ab08bec1ff94fc1bfc78f8059ae282b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:18:18 np0005629333 nova_compute[244014]: 2026-02-25 12:18:18.399 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:18 np0005629333 nova_compute[244014]: 2026-02-25 12:18:18.750 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021883.7498527, f53717b5-3196-44b5-bc3a-aa8f53ce397d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:18:18 np0005629333 nova_compute[244014]: 2026-02-25 12:18:18.752 244018 INFO nova.compute.manager [-] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:18:18 np0005629333 nova_compute[244014]: 2026-02-25 12:18:18.806 244018 DEBUG nova.compute.manager [None req-515c8926-dfe1-4bfc-85df-6b5acff7fb35 - - - - - -] [instance: f53717b5-3196-44b5-bc3a-aa8f53ce397d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:18:18 np0005629333 nova_compute[244014]: 2026-02-25 12:18:18.872 244018 DEBUG nova.network.neutron [None req-6add7ff5-a093-4d42-ac60-797c74c2ea07 9d18ccdf42b646dcac98db18d105d87a 8ab08bec1ff94fc1bfc78f8059ae282b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:18:18 np0005629333 nova_compute[244014]: 2026-02-25 12:18:18.936 244018 DEBUG oslo_concurrency.lockutils [None req-6add7ff5-a093-4d42-ac60-797c74c2ea07 9d18ccdf42b646dcac98db18d105d87a 8ab08bec1ff94fc1bfc78f8059ae282b - - default default] Releasing lock "refresh_cache-0b5fe226-aafe-4d7f-8a2e-28aea8ed8795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:18:18 np0005629333 nova_compute[244014]: 2026-02-25 12:18:18.937 244018 DEBUG oslo_concurrency.lockutils [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Acquired lock "refresh_cache-0b5fe226-aafe-4d7f-8a2e-28aea8ed8795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:18:18 np0005629333 nova_compute[244014]: 2026-02-25 12:18:18.938 244018 DEBUG nova.network.neutron [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:18:19 np0005629333 nova_compute[244014]: 2026-02-25 12:18:19.159 244018 DEBUG nova.network.neutron [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:18:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:18:19 np0005629333 nova_compute[244014]: 2026-02-25 12:18:19.541 244018 DEBUG nova.compute.manager [req-f26e72b7-4b1a-4f3f-81a6-1c0a4af56775 req-c2c9f9cb-b875-457e-800a-83e95d6d2949 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Received event network-vif-unplugged-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:18:19 np0005629333 nova_compute[244014]: 2026-02-25 12:18:19.542 244018 DEBUG oslo_concurrency.lockutils [req-f26e72b7-4b1a-4f3f-81a6-1c0a4af56775 req-c2c9f9cb-b875-457e-800a-83e95d6d2949 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4c768bc2-a711-4179-b12f-604509e47856-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:18:19 np0005629333 nova_compute[244014]: 2026-02-25 12:18:19.543 244018 DEBUG oslo_concurrency.lockutils [req-f26e72b7-4b1a-4f3f-81a6-1c0a4af56775 req-c2c9f9cb-b875-457e-800a-83e95d6d2949 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:18:19 np0005629333 nova_compute[244014]: 2026-02-25 12:18:19.543 244018 DEBUG oslo_concurrency.lockutils [req-f26e72b7-4b1a-4f3f-81a6-1c0a4af56775 req-c2c9f9cb-b875-457e-800a-83e95d6d2949 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:18:19 np0005629333 nova_compute[244014]: 2026-02-25 12:18:19.544 244018 DEBUG nova.compute.manager [req-f26e72b7-4b1a-4f3f-81a6-1c0a4af56775 req-c2c9f9cb-b875-457e-800a-83e95d6d2949 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] No waiting events found dispatching network-vif-unplugged-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:18:19 np0005629333 nova_compute[244014]: 2026-02-25 12:18:19.544 244018 DEBUG nova.compute.manager [req-f26e72b7-4b1a-4f3f-81a6-1c0a4af56775 req-c2c9f9cb-b875-457e-800a-83e95d6d2949 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Received event network-vif-unplugged-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:18:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 248 MiB data, 410 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Feb 25 07:18:19 np0005629333 nova_compute[244014]: 2026-02-25 12:18:19.886 244018 DEBUG nova.network.neutron [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:18:19 np0005629333 nova_compute[244014]: 2026-02-25 12:18:19.906 244018 DEBUG oslo_concurrency.lockutils [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Releasing lock "refresh_cache-0b5fe226-aafe-4d7f-8a2e-28aea8ed8795" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:18:19 np0005629333 nova_compute[244014]: 2026-02-25 12:18:19.907 244018 DEBUG nova.compute.manager [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:18:19 np0005629333 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Feb 25 07:18:19 np0005629333 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000000f.scope: Consumed 4.950s CPU time.
Feb 25 07:18:19 np0005629333 systemd-machined[210048]: Machine qemu-17-instance-0000000f terminated.
Feb 25 07:18:19 np0005629333 nova_compute[244014]: 2026-02-25 12:18:19.981 244018 DEBUG nova.network.neutron [-] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:18:20 np0005629333 nova_compute[244014]: 2026-02-25 12:18:20.003 244018 INFO nova.compute.manager [-] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Took 2.85 seconds to deallocate network for instance.#033[00m
Feb 25 07:18:20 np0005629333 nova_compute[244014]: 2026-02-25 12:18:20.074 244018 DEBUG oslo_concurrency.lockutils [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:18:20 np0005629333 nova_compute[244014]: 2026-02-25 12:18:20.075 244018 DEBUG oslo_concurrency.lockutils [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:18:20 np0005629333 nova_compute[244014]: 2026-02-25 12:18:20.128 244018 INFO nova.virt.libvirt.driver [-] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Instance destroyed successfully.#033[00m
Feb 25 07:18:20 np0005629333 nova_compute[244014]: 2026-02-25 12:18:20.132 244018 DEBUG nova.objects.instance [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lazy-loading 'resources' on Instance uuid 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:18:20 np0005629333 nova_compute[244014]: 2026-02-25 12:18:20.149 244018 DEBUG oslo_concurrency.processutils [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:18:20 np0005629333 nova_compute[244014]: 2026-02-25 12:18:20.462 244018 INFO nova.virt.libvirt.driver [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Deleting instance files /var/lib/nova/instances/0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_del#033[00m
Feb 25 07:18:20 np0005629333 nova_compute[244014]: 2026-02-25 12:18:20.463 244018 INFO nova.virt.libvirt.driver [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Deletion of /var/lib/nova/instances/0b5fe226-aafe-4d7f-8a2e-28aea8ed8795_del complete#033[00m
Feb 25 07:18:20 np0005629333 nova_compute[244014]: 2026-02-25 12:18:20.532 244018 INFO nova.compute.manager [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Took 0.62 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:18:20 np0005629333 nova_compute[244014]: 2026-02-25 12:18:20.532 244018 DEBUG oslo.service.loopingcall [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:18:20 np0005629333 nova_compute[244014]: 2026-02-25 12:18:20.533 244018 DEBUG nova.compute.manager [-] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:18:20 np0005629333 nova_compute[244014]: 2026-02-25 12:18:20.534 244018 DEBUG nova.network.neutron [-] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:18:20 np0005629333 nova_compute[244014]: 2026-02-25 12:18:20.732 244018 DEBUG nova.network.neutron [-] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:18:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:18:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1609934762' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:18:20 np0005629333 nova_compute[244014]: 2026-02-25 12:18:20.755 244018 DEBUG nova.network.neutron [-] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:18:20 np0005629333 nova_compute[244014]: 2026-02-25 12:18:20.757 244018 DEBUG oslo_concurrency.processutils [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:18:20 np0005629333 nova_compute[244014]: 2026-02-25 12:18:20.764 244018 DEBUG nova.compute.provider_tree [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:18:20 np0005629333 nova_compute[244014]: 2026-02-25 12:18:20.772 244018 INFO nova.compute.manager [-] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Took 0.24 seconds to deallocate network for instance.#033[00m
Feb 25 07:18:20 np0005629333 nova_compute[244014]: 2026-02-25 12:18:20.782 244018 DEBUG nova.scheduler.client.report [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:18:20 np0005629333 nova_compute[244014]: 2026-02-25 12:18:20.839 244018 DEBUG oslo_concurrency.lockutils [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:18:20 np0005629333 nova_compute[244014]: 2026-02-25 12:18:20.844 244018 DEBUG oslo_concurrency.lockutils [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:18:20 np0005629333 nova_compute[244014]: 2026-02-25 12:18:20.844 244018 DEBUG oslo_concurrency.lockutils [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:18:20 np0005629333 nova_compute[244014]: 2026-02-25 12:18:20.912 244018 DEBUG oslo_concurrency.processutils [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:18:20 np0005629333 nova_compute[244014]: 2026-02-25 12:18:20.978 244018 INFO nova.scheduler.client.report [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Deleted allocations for instance 4c768bc2-a711-4179-b12f-604509e47856#033[00m
Feb 25 07:18:21 np0005629333 nova_compute[244014]: 2026-02-25 12:18:21.113 244018 DEBUG oslo_concurrency.lockutils [None req-1b3df93f-f8ce-4eac-a590-53b59a67d0bd 38c6d2e6875a408687bd85066c826987 c7e518f2e6c84550b07b36ea68922707 - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:18:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:18:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2031068564' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:18:21 np0005629333 nova_compute[244014]: 2026-02-25 12:18:21.443 244018 DEBUG oslo_concurrency.processutils [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:18:21 np0005629333 nova_compute[244014]: 2026-02-25 12:18:21.450 244018 DEBUG nova.compute.provider_tree [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:18:21 np0005629333 nova_compute[244014]: 2026-02-25 12:18:21.547 244018 DEBUG nova.scheduler.client.report [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:18:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 202 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 137 op/s
Feb 25 07:18:21 np0005629333 nova_compute[244014]: 2026-02-25 12:18:21.614 244018 DEBUG oslo_concurrency.lockutils [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:18:21 np0005629333 nova_compute[244014]: 2026-02-25 12:18:21.629 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:21 np0005629333 nova_compute[244014]: 2026-02-25 12:18:21.648 244018 INFO nova.scheduler.client.report [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Deleted allocations for instance 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795#033[00m
Feb 25 07:18:21 np0005629333 nova_compute[244014]: 2026-02-25 12:18:21.724 244018 DEBUG nova.compute.manager [req-0888dff0-aef4-45ad-ac91-5dc3f208ae7a req-f8aa7c17-a961-4b41-bdef-8173d922212a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Received event network-vif-deleted-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:18:21 np0005629333 nova_compute[244014]: 2026-02-25 12:18:21.725 244018 DEBUG nova.compute.manager [req-0888dff0-aef4-45ad-ac91-5dc3f208ae7a req-f8aa7c17-a961-4b41-bdef-8173d922212a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Received event network-vif-plugged-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:18:21 np0005629333 nova_compute[244014]: 2026-02-25 12:18:21.725 244018 DEBUG oslo_concurrency.lockutils [req-0888dff0-aef4-45ad-ac91-5dc3f208ae7a req-f8aa7c17-a961-4b41-bdef-8173d922212a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4c768bc2-a711-4179-b12f-604509e47856-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:18:21 np0005629333 nova_compute[244014]: 2026-02-25 12:18:21.726 244018 DEBUG oslo_concurrency.lockutils [req-0888dff0-aef4-45ad-ac91-5dc3f208ae7a req-f8aa7c17-a961-4b41-bdef-8173d922212a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:18:21 np0005629333 nova_compute[244014]: 2026-02-25 12:18:21.727 244018 DEBUG oslo_concurrency.lockutils [req-0888dff0-aef4-45ad-ac91-5dc3f208ae7a req-f8aa7c17-a961-4b41-bdef-8173d922212a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4c768bc2-a711-4179-b12f-604509e47856-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:18:21 np0005629333 nova_compute[244014]: 2026-02-25 12:18:21.727 244018 DEBUG nova.compute.manager [req-0888dff0-aef4-45ad-ac91-5dc3f208ae7a req-f8aa7c17-a961-4b41-bdef-8173d922212a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] No waiting events found dispatching network-vif-plugged-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:18:21 np0005629333 nova_compute[244014]: 2026-02-25 12:18:21.728 244018 WARNING nova.compute.manager [req-0888dff0-aef4-45ad-ac91-5dc3f208ae7a req-f8aa7c17-a961-4b41-bdef-8173d922212a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Received unexpected event network-vif-plugged-e4e15a18-1007-4bd1-80a7-b8d2cb64f15c for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:18:21 np0005629333 nova_compute[244014]: 2026-02-25 12:18:21.759 244018 DEBUG oslo_concurrency.lockutils [None req-9f6f9345-84ee-4f5a-a78d-f2b25e461097 0ef1a239db3f426694c02473a6b39653 e7d55de27cf3451a8ed2d95d03721e1b - - default default] Lock "0b5fe226-aafe-4d7f-8a2e-28aea8ed8795" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.449s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:18:21 np0005629333 nova_compute[244014]: 2026-02-25 12:18:21.820 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:23 np0005629333 nova_compute[244014]: 2026-02-25 12:18:23.401 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 153 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 155 op/s
Feb 25 07:18:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:18:24 np0005629333 nova_compute[244014]: 2026-02-25 12:18:24.579 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:24 np0005629333 nova_compute[244014]: 2026-02-25 12:18:24.674 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 153 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 154 op/s
Feb 25 07:18:26 np0005629333 nova_compute[244014]: 2026-02-25 12:18:26.824 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 153 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 154 op/s
Feb 25 07:18:28 np0005629333 nova_compute[244014]: 2026-02-25 12:18:28.403 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:18:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 153 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 164 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Feb 25 07:18:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:18:30
Feb 25 07:18:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:18:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:18:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'vms', 'images', '.rgw.root', 'default.rgw.log', 'backups', '.mgr', 'cephfs.cephfs.meta']
Feb 25 07:18:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
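[editor's note] The five balancer records above are one scheduled optimization pass: upmap mode, max misplaced ratio 0.05, eleven pools considered, and 0 of a maximum 10 upmap changes prepared, i.e. the PGs are already evenly placed. The same state can be read back from the CLI; a hedged sketch follows, where the --id/--conf values mirror the client shown elsewhere in this log and the JSON keys are assumptions about the standard `ceph balancer status` output:

    # Query the balancer state the mgr just logged. "ceph balancer status"
    # is a standard mgr command; key names are read defensively.
    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "balancer", "status", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    status = json.loads(out)
    print(status.get("mode"), status.get("active"))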
Feb 25 07:18:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 153 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 164 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Feb 25 07:18:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:18:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:18:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:18:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:18:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:18:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:18:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:18:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:18:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:18:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:18:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:18:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:18:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:18:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:18:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:18:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:18:31 np0005629333 nova_compute[244014]: 2026-02-25 12:18:31.799 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021896.7975128, 4c768bc2-a711-4179-b12f-604509e47856 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:18:31 np0005629333 nova_compute[244014]: 2026-02-25 12:18:31.799 244018 INFO nova.compute.manager [-] [instance: 4c768bc2-a711-4179-b12f-604509e47856] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:18:31 np0005629333 nova_compute[244014]: 2026-02-25 12:18:31.826 244018 DEBUG nova.compute.manager [None req-b408afb7-f80f-4f47-a1a8-7691ab3a541e - - - - - -] [instance: 4c768bc2-a711-4179-b12f-604509e47856] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:18:31 np0005629333 nova_compute[244014]: 2026-02-25 12:18:31.828 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:33 np0005629333 nova_compute[244014]: 2026-02-25 12:18:33.044 244018 DEBUG oslo_concurrency.lockutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Acquiring lock "d974e887-fd2f-479e-951e-fad497dd7b0f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:18:33 np0005629333 nova_compute[244014]: 2026-02-25 12:18:33.045 244018 DEBUG oslo_concurrency.lockutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lock "d974e887-fd2f-479e-951e-fad497dd7b0f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:18:33 np0005629333 nova_compute[244014]: 2026-02-25 12:18:33.067 244018 DEBUG nova.compute.manager [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:18:33 np0005629333 nova_compute[244014]: 2026-02-25 12:18:33.168 244018 DEBUG oslo_concurrency.lockutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:18:33 np0005629333 nova_compute[244014]: 2026-02-25 12:18:33.169 244018 DEBUG oslo_concurrency.lockutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:18:33 np0005629333 nova_compute[244014]: 2026-02-25 12:18:33.179 244018 DEBUG nova.virt.hardware [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:18:33 np0005629333 nova_compute[244014]: 2026-02-25 12:18:33.179 244018 INFO nova.compute.claims [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:18:33 np0005629333 nova_compute[244014]: 2026-02-25 12:18:33.327 244018 DEBUG oslo_concurrency.processutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:18:33 np0005629333 nova_compute[244014]: 2026-02-25 12:18:33.405 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 153 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 682 B/s wr, 17 op/s
Feb 25 07:18:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:18:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3502609560' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:18:33 np0005629333 nova_compute[244014]: 2026-02-25 12:18:33.943 244018 DEBUG oslo_concurrency.processutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
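[editor's note] The 0.616 s `ceph df` call above is the resource tracker refreshing Ceph-backed disk capacity before granting the claim; the matching mon_command dispatch is visible in the ceph-mon audit records. A minimal reproduction of the same probe, assuming the standard `ceph df --format=json` schema with a top-level "stats" object:

    # Same capacity probe nova just ran, parsed for available bytes.
    import json, subprocess

    raw = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(raw)["stats"]
    print("avail GiB:", stats["total_avail_bytes"] / 2**30)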
Feb 25 07:18:33 np0005629333 nova_compute[244014]: 2026-02-25 12:18:33.949 244018 DEBUG nova.compute.provider_tree [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:18:33 np0005629333 nova_compute[244014]: 2026-02-25 12:18:33.980 244018 DEBUG nova.scheduler.client.report [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
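[editor's note] Worked example: placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio, so the record above advertises 7167 MB of RAM, 32 VCPUs, and 52.2 GB of disk:

    # Capacity arithmetic on the exact inventory values logged above.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {usable:g} schedulable units")
    # MEMORY_MB: 7167, VCPU: 32, DISK_GB: 52.2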
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.008 244018 DEBUG oslo_concurrency.lockutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.010 244018 DEBUG nova.compute.manager [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.062 244018 DEBUG nova.compute.manager [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.062 244018 DEBUG nova.network.neutron [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.081 244018 INFO nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.106 244018 DEBUG nova.compute.manager [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:18:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.187 244018 DEBUG nova.compute.manager [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.189 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.189 244018 INFO nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Creating image(s)#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.212 244018 DEBUG nova.storage.rbd_utils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] rbd image d974e887-fd2f-479e-951e-fad497dd7b0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.244 244018 DEBUG nova.storage.rbd_utils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] rbd image d974e887-fd2f-479e-951e-fad497dd7b0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.270 244018 DEBUG nova.storage.rbd_utils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] rbd image d974e887-fd2f-479e-951e-fad497dd7b0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.273 244018 DEBUG oslo_concurrency.processutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.333 244018 DEBUG oslo_concurrency.processutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.334 244018 DEBUG oslo_concurrency.lockutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.334 244018 DEBUG oslo_concurrency.lockutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.335 244018 DEBUG oslo_concurrency.lockutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.357 244018 DEBUG nova.storage.rbd_utils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] rbd image d974e887-fd2f-479e-951e-fad497dd7b0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.360 244018 DEBUG oslo_concurrency.processutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d974e887-fd2f-479e-951e-fad497dd7b0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.550 244018 DEBUG nova.network.neutron [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.551 244018 DEBUG nova.compute.manager [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.660 244018 DEBUG oslo_concurrency.processutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d974e887-fd2f-479e-951e-fad497dd7b0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.742 244018 DEBUG nova.storage.rbd_utils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] resizing rbd image d974e887-fd2f-479e-951e-fad497dd7b0f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
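[editor's note] The import at 12:18:34.660 uploads the cached base image (21430272 bytes, per the image_meta later in this log) into the vms pool, and the resize above grows it to 1073741824 bytes, i.e. exactly the 1 GiB root disk of the m1.nano flavor (root_gb=1; 1 * 1024^3 bytes). A sketch of the same resize through the rbd Python bindings that nova's rbd_utils wraps; pool, image name, and client id are taken from the log, and the call pattern is illustrative:

    # Resize the imported disk image to 1 GiB via librados/librbd bindings.
    import rados, rbd

    GiB = 1024 ** 3                      # 1073741824 bytes, matching root_gb=1
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            with rbd.Image(ioctx, "d974e887-fd2f-479e-951e-fad497dd7b0f_disk") as image:
                image.resize(1 * GiB)    # same target size the log records
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()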
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.854 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.855 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.864 244018 DEBUG nova.objects.instance [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lazy-loading 'migration_context' on Instance uuid d974e887-fd2f-479e-951e-fad497dd7b0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.878 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.878 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Ensure instance console log exists: /var/lib/nova/instances/d974e887-fd2f-479e-951e-fad497dd7b0f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.879 244018 DEBUG oslo_concurrency.lockutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.880 244018 DEBUG oslo_concurrency.lockutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.880 244018 DEBUG oslo_concurrency.lockutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.883 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.888 244018 DEBUG nova.compute.manager [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.893 244018 WARNING nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.901 244018 DEBUG nova.virt.libvirt.host [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.902 244018 DEBUG nova.virt.libvirt.host [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.906 244018 DEBUG nova.virt.libvirt.host [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.907 244018 DEBUG nova.virt.libvirt.host [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.908 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.908 244018 DEBUG nova.virt.hardware [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.909 244018 DEBUG nova.virt.hardware [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.909 244018 DEBUG nova.virt.hardware [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.910 244018 DEBUG nova.virt.hardware [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.910 244018 DEBUG nova.virt.hardware [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.911 244018 DEBUG nova.virt.hardware [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.911 244018 DEBUG nova.virt.hardware [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.912 244018 DEBUG nova.virt.hardware [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.912 244018 DEBUG nova.virt.hardware [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.912 244018 DEBUG nova.virt.hardware [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.913 244018 DEBUG nova.virt.hardware [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
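[editor's note] The topology walk at 12:18:34.908-34.913 is fully determined: neither the flavor nor the image carries CPU-topology hints (all limits and preferences log as 0:0:0, so limits default to 65536), and the only factorization of one vCPU is sockets=1, cores=1, threads=1. An illustrative enumeration, not nova's exact algorithm, of why a single vCPU admits exactly one topology:

    # Enumerate (sockets, cores, threads) triples whose product is vcpus.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log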
Feb 25 07:18:34 np0005629333 nova_compute[244014]: 2026-02-25 12:18:34.917 244018 DEBUG oslo_concurrency.processutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:18:35 np0005629333 nova_compute[244014]: 2026-02-25 12:18:35.005 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:18:35 np0005629333 nova_compute[244014]: 2026-02-25 12:18:35.006 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:18:35 np0005629333 nova_compute[244014]: 2026-02-25 12:18:35.015 244018 DEBUG nova.virt.hardware [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:18:35 np0005629333 nova_compute[244014]: 2026-02-25 12:18:35.016 244018 INFO nova.compute.claims [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:18:35 np0005629333 nova_compute[244014]: 2026-02-25 12:18:35.126 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021900.1253996, 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:18:35 np0005629333 nova_compute[244014]: 2026-02-25 12:18:35.127 244018 INFO nova.compute.manager [-] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:18:35 np0005629333 nova_compute[244014]: 2026-02-25 12:18:35.145 244018 DEBUG oslo_concurrency.processutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:18:35 np0005629333 nova_compute[244014]: 2026-02-25 12:18:35.169 244018 DEBUG nova.compute.manager [None req-8e766892-b5a3-49bf-ae34-b29f068632e9 - - - - - -] [instance: 0b5fe226-aafe-4d7f-8a2e-28aea8ed8795] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:18:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:18:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2310083825' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:18:35 np0005629333 nova_compute[244014]: 2026-02-25 12:18:35.460 244018 DEBUG oslo_concurrency.processutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:18:35 np0005629333 nova_compute[244014]: 2026-02-25 12:18:35.491 244018 DEBUG nova.storage.rbd_utils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] rbd image d974e887-fd2f-479e-951e-fad497dd7b0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:18:35 np0005629333 nova_compute[244014]: 2026-02-25 12:18:35.496 244018 DEBUG oslo_concurrency.processutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:18:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 153 MiB data, 343 MiB used, 60 GiB / 60 GiB avail
Feb 25 07:18:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:18:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/774732830' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:18:35 np0005629333 nova_compute[244014]: 2026-02-25 12:18:35.716 244018 DEBUG oslo_concurrency.processutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:18:35 np0005629333 nova_compute[244014]: 2026-02-25 12:18:35.723 244018 DEBUG nova.compute.provider_tree [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:18:35 np0005629333 nova_compute[244014]: 2026-02-25 12:18:35.752 244018 DEBUG nova.scheduler.client.report [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:18:35 np0005629333 nova_compute[244014]: 2026-02-25 12:18:35.780 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:18:35 np0005629333 nova_compute[244014]: 2026-02-25 12:18:35.781 244018 DEBUG nova.compute.manager [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:18:35 np0005629333 nova_compute[244014]: 2026-02-25 12:18:35.838 244018 DEBUG nova.compute.manager [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:18:35 np0005629333 nova_compute[244014]: 2026-02-25 12:18:35.839 244018 DEBUG nova.network.neutron [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:18:35 np0005629333 nova_compute[244014]: 2026-02-25 12:18:35.860 244018 INFO nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:18:35 np0005629333 nova_compute[244014]: 2026-02-25 12:18:35.882 244018 DEBUG nova.compute.manager [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:18:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:18:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3998105227' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:18:35 np0005629333 nova_compute[244014]: 2026-02-25 12:18:35.961 244018 DEBUG oslo_concurrency.processutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:18:35 np0005629333 nova_compute[244014]: 2026-02-25 12:18:35.964 244018 DEBUG nova.objects.instance [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lazy-loading 'pci_devices' on Instance uuid d974e887-fd2f-479e-951e-fad497dd7b0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:18:35 np0005629333 nova_compute[244014]: 2026-02-25 12:18:35.985 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:18:35 np0005629333 nova_compute[244014]:  <uuid>d974e887-fd2f-479e-951e-fad497dd7b0f</uuid>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:  <name>instance-00000010</name>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      <nova:name>tempest-TenantUsagesTestJSON-server-522638906</nova:name>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:18:34</nova:creationTime>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:18:35 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:        <nova:user uuid="1768606fbf83414784f34904db6329b8">tempest-TenantUsagesTestJSON-721733181-project-member</nova:user>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:        <nova:project uuid="d1b2f7969c344b348a5d9bd271c293f9">tempest-TenantUsagesTestJSON-721733181</nova:project>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      <nova:ports/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      <entry name="serial">d974e887-fd2f-479e-951e-fad497dd7b0f</entry>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      <entry name="uuid">d974e887-fd2f-479e-951e-fad497dd7b0f</entry>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/d974e887-fd2f-479e-951e-fad497dd7b0f_disk">
Feb 25 07:18:35 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:18:35 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/d974e887-fd2f-479e-951e-fad497dd7b0f_disk.config">
Feb 25 07:18:35 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:18:35 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/d974e887-fd2f-479e-951e-fad497dd7b0f/console.log" append="off"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:18:35 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:18:35 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:18:35 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:18:35 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
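[editor's note] The domain XML dumped above is the complete guest definition handed to libvirt: q35 machine type, host-model CPU, a PTY serial console logged to console.log, and both disks served over RBD from 192.168.122.100:6789 using the client.openstack cephx secret. A small sketch for pulling the RBD sources back out of that XML, assuming the <domain> element is saved verbatim to a hypothetical guest.xml:

    # Extract RBD-backed disk sources from the logged libvirt domain XML.
    import xml.etree.ElementTree as ET

    root = ET.parse("guest.xml").getroot()
    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        if src is not None and src.get("protocol") == "rbd":
            print(disk.get("device"), "->", src.get("name"))
    # disk -> vms/d974e887-fd2f-479e-951e-fad497dd7b0f_disk
    # cdrom -> vms/d974e887-fd2f-479e-951e-fad497dd7b0f_disk.config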
Feb 25 07:18:35 np0005629333 nova_compute[244014]: 2026-02-25 12:18:35.992 244018 DEBUG nova.compute.manager [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:18:35 np0005629333 nova_compute[244014]: 2026-02-25 12:18:35.994 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:18:35 np0005629333 nova_compute[244014]: 2026-02-25 12:18:35.994 244018 INFO nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Creating image(s)#033[00m
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.020 244018 DEBUG nova.storage.rbd_utils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.044 244018 DEBUG nova.storage.rbd_utils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.064 244018 DEBUG nova.storage.rbd_utils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.067 244018 DEBUG oslo_concurrency.processutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.081 244018 DEBUG nova.policy [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6395ac4bfa5d4910aed9116395bbbdeb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.112 244018 DEBUG oslo_concurrency.processutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
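
[Note] Nova probes the cached base image by shelling out to qemu-img info under oslo.concurrency's prlimit wrapper, so a malformed image cannot make the prober consume unbounded resources: --as=1073741824 caps address space at 1 GiB and --cpu=30 caps CPU time at 30 seconds. A sketch of the same probe through processutils, assuming oslo.concurrency is installed; limits and environment mirror the logged command:

    import json

    from oslo_concurrency import processutils

    IMAGE = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"

    # --as=1073741824 and --cpu=30 from the logged command, expressed as rlimits.
    limits = processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30)

    out, _err = processutils.execute(
        "qemu-img", "info", IMAGE, "--force-share", "--output=json",
        prlimit=limits,
        env_variables={"LC_ALL": "C", "LANG": "C"},  # stable, parseable output
    )
    info = json.loads(out)
    print(info["format"], info["virtual-size"])  # e.g. qcow2 and the size in bytes
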
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.113 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.113 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.113 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
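
[Note] The acquire/release pair around fetch_func_sync is oslo.concurrency's lockutils serializing access to one image-cache entry: every request for the same image contends on the same hash-named lock, so only one greenthread fetches while the rest wait, and a cache hit (as here) holds the lock for ~0.000s. A minimal sketch of the pattern; the predicate and slow path are hypothetical names:

    from oslo_concurrency import lockutils

    CACHE_KEY = "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"  # hashed image name, per the log

    def base_image_cached():
        return True        # hypothetical cache-hit predicate

    def fetch_from_glance():
        pass               # hypothetical slow path

    def ensure_base_image():
        # One caller per key runs the body at a time; concurrent requests for
        # the same image block on acquire instead of downloading twice.
        with lockutils.lock(CACHE_KEY):
            if base_image_cached():
                return     # lock held ~0.000s, as logged above
            fetch_from_glance()

    ensure_base_image()
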
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.131 244018 DEBUG nova.storage.rbd_utils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.133 244018 DEBUG oslo_concurrency.processutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 52f927ad-a417-489f-9f92-87bc3433649d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.150 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.151 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.151 244018 INFO nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Using config drive#033[00m
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.170 244018 DEBUG nova.storage.rbd_utils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] rbd image d974e887-fd2f-479e-951e-fad497dd7b0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.368 244018 DEBUG oslo_concurrency.processutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 52f927ad-a417-489f-9f92-87bc3433649d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
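
[Note] With Ceph-backed ephemeral storage, "creating" the root disk amounts to an rbd import of the cached base file into the vms pool under the name <instance-uuid>_disk. A sketch replaying the logged command through subprocess; pool, client id, and paths are taken verbatim from the log line above:

    import subprocess

    BASE = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
    DEST = "52f927ad-a417-489f-9f92-87bc3433649d_disk"

    subprocess.run(
        ["rbd", "import",
         "--pool", "vms",
         BASE, DEST,
         "--image-format=2",          # format 2 supports snapshots and cloning
         "--id", "openstack",         # cephx client, as in the log
         "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )
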
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.406 244018 INFO nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Creating config drive at /var/lib/nova/instances/d974e887-fd2f-479e-951e-fad497dd7b0f/disk.config#033[00m
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.412 244018 DEBUG oslo_concurrency.processutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d974e887-fd2f-479e-951e-fad497dd7b0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpsimli3_d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.470 244018 DEBUG nova.storage.rbd_utils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] resizing rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.559 244018 DEBUG oslo_concurrency.processutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d974e887-fd2f-479e-951e-fad497dd7b0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpsimli3_d" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
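
[Note] The config drive is a throwaway ISO9660 image built with mkisofs over a temp directory of staged metadata. One subtlety when reading the journal line: the -publisher value ("OpenStack Compute 27.5.2-...") is a single argv element even though it prints unquoted. A sketch of the same invocation with the arguments from the log:

    import subprocess

    subprocess.run(
        ["/usr/bin/mkisofs",
         "-o", "/var/lib/nova/instances/d974e887-fd2f-479e-951e-fad497dd7b0f/disk.config",
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         # One argument despite the spaces; the journal renders it unquoted.
         "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
         "-quiet", "-J", "-r",
         "-V", "config-2",            # the volume label cloud-init looks for
         "/tmp/tmpsimli3_d"],         # staged metadata directory from the log
        check=True,
    )
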
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.583 244018 DEBUG nova.storage.rbd_utils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] rbd image d974e887-fd2f-479e-951e-fad497dd7b0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.588 244018 DEBUG oslo_concurrency.processutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d974e887-fd2f-479e-951e-fad497dd7b0f/disk.config d974e887-fd2f-479e-951e-fad497dd7b0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.611 244018 DEBUG nova.objects.instance [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'migration_context' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.631 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.632 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Ensure instance console log exists: /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.632 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.633 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.633 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.724 244018 DEBUG oslo_concurrency.processutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d974e887-fd2f-479e-951e-fad497dd7b0f/disk.config d974e887-fd2f-479e-951e-fad497dd7b0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.724 244018 INFO nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Deleting local config drive /var/lib/nova/instances/d974e887-fd2f-479e-951e-fad497dd7b0f/disk.config because it was imported into RBD.#033[00m
Feb 25 07:18:36 np0005629333 systemd-machined[210048]: New machine qemu-18-instance-00000010.
Feb 25 07:18:36 np0005629333 systemd[1]: Started Virtual Machine qemu-18-instance-00000010.
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.821 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "7a6ab503-d433-40a7-9395-3d5660e852c9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.822 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.831 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:36 np0005629333 nova_compute[244014]: 2026-02-25 12:18:36.929 244018 DEBUG nova.compute.manager [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.138 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.139 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.148 244018 DEBUG nova.virt.hardware [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.149 244018 INFO nova.compute.claims [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.327 244018 DEBUG oslo_concurrency.processutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.376 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021917.3754072, d974e887-fd2f-479e-951e-fad497dd7b0f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.377 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.381 244018 DEBUG nova.compute.manager [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.382 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.386 244018 INFO nova.virt.libvirt.driver [-] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Instance spawned successfully.#033[00m
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.386 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.420 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.438 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
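
[Note] The "current DB power_state: 0, VM power_state: 1" comparison uses nova's integer power-state codes: 0 is NOSTATE (the DB row has not yet been updated for the new guest) and 1 is RUNNING (what libvirt just reported). The mapping, restated from nova.compute.power_state:

    # Values from nova.compute.power_state (integers stored in the DB).
    NOSTATE = 0x00     # 0: unset/unknown - the DB value logged above
    RUNNING = 0x01     # 1: what the hypervisor reports once the guest starts
    PAUSED = 0x03
    SHUTDOWN = 0x04    # the guest OS is powered off
    CRASHED = 0x06
    SUSPENDED = 0x07

    STATE_MAP = {NOSTATE: "pending", RUNNING: "running", PAUSED: "paused",
                 SHUTDOWN: "shutdown", CRASHED: "crashed", SUSPENDED: "suspended"}
    print(STATE_MAP[0], "->", STATE_MAP[1])  # pending -> running
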
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.443 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.444 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.445 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.446 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.446 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.447 244018 DEBUG nova.virt.libvirt.driver [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
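
[Note] The six "Found default for ..." lines record which bus/model defaults the driver chose because the image set no explicit hw_* properties; registering them pins the instance's virtual hardware so it stays stable across rebuilds even if driver defaults later change. An illustrative sketch of the recorded mapping and, roughly, how nova persists image properties in system_metadata under an "image_" prefix:

    # The defaults recorded above, keyed by image property.
    registered_defaults = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }

    # Illustrative rendering of the persisted keys (image_ prefix).
    system_metadata = {f"image_{k}": v for k, v in registered_defaults.items()}
    print(system_metadata["image_hw_disk_bus"])  # virtio
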
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.481 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.482 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021917.376808, d974e887-fd2f-479e-951e-fad497dd7b0f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.482 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] VM Started (Lifecycle Event)#033[00m
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.536 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.538 244018 DEBUG nova.network.neutron [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Successfully created port: 2e503dd2-735e-4bfc-87c7-dffab319d935 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.547 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.555 244018 INFO nova.compute.manager [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Took 3.37 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.556 244018 DEBUG nova.compute.manager [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.570 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:18:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 246 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 3.5 MiB/s wr, 58 op/s
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.673 244018 INFO nova.compute.manager [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Took 4.55 seconds to build instance.#033[00m
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.715 244018 DEBUG oslo_concurrency.lockutils [None req-236f9690-8c59-4605-9fcd-1857dfe77090 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lock "d974e887-fd2f-479e-951e-fad497dd7b0f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:18:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:18:37 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/320252035' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.905 244018 DEBUG oslo_concurrency.processutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
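
[Note] "ceph df --format=json" is how the RBD image backend measures pool capacity for the resource tracker; the same request shows up as the mon's audit-log dispatch a few lines up, and the round trip cost 0.579s here. A sketch running the same query and pulling the cluster-wide figures, assuming the same cephx client:

    import json
    import subprocess

    result = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    )
    stats = json.loads(result.stdout)["stats"]
    # These byte counts are what the driver turns into DISK_GB inventory.
    print(stats["total_bytes"], stats["total_avail_bytes"])
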
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.910 244018 DEBUG nova.compute.provider_tree [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.925 244018 DEBUG nova.scheduler.client.report [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
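
[Note] The inventory record above carries everything needed to see how much capacity placement will admit: usable capacity is roughly (total - reserved) × allocation_ratio per resource class. Worked out for the logged numbers:

    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)
    # MEMORY_MB 7167.0 -- 512 MB held back for the host
    # VCPU 32.0        -- 8 cores oversubscribed 4:1
    # DISK_GB 52.2     -- the 0.9 ratio under-commits the shared Ceph pool
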
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.947 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:18:37 np0005629333 nova_compute[244014]: 2026-02-25 12:18:37.948 244018 DEBUG nova.compute.manager [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:18:38 np0005629333 nova_compute[244014]: 2026-02-25 12:18:38.037 244018 DEBUG nova.compute.manager [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:18:38 np0005629333 nova_compute[244014]: 2026-02-25 12:18:38.038 244018 DEBUG nova.network.neutron [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:18:38 np0005629333 nova_compute[244014]: 2026-02-25 12:18:38.072 244018 INFO nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:18:38 np0005629333 nova_compute[244014]: 2026-02-25 12:18:38.104 244018 DEBUG nova.compute.manager [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:18:38 np0005629333 nova_compute[244014]: 2026-02-25 12:18:38.216 244018 DEBUG nova.compute.manager [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:18:38 np0005629333 nova_compute[244014]: 2026-02-25 12:18:38.218 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:18:38 np0005629333 nova_compute[244014]: 2026-02-25 12:18:38.219 244018 INFO nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Creating image(s)#033[00m
Feb 25 07:18:38 np0005629333 nova_compute[244014]: 2026-02-25 12:18:38.257 244018 DEBUG nova.storage.rbd_utils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 7a6ab503-d433-40a7-9395-3d5660e852c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:18:38 np0005629333 nova_compute[244014]: 2026-02-25 12:18:38.294 244018 DEBUG nova.storage.rbd_utils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 7a6ab503-d433-40a7-9395-3d5660e852c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:18:38 np0005629333 nova_compute[244014]: 2026-02-25 12:18:38.324 244018 DEBUG nova.storage.rbd_utils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 7a6ab503-d433-40a7-9395-3d5660e852c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:18:38 np0005629333 nova_compute[244014]: 2026-02-25 12:18:38.328 244018 DEBUG oslo_concurrency.processutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:18:38 np0005629333 nova_compute[244014]: 2026-02-25 12:18:38.346 244018 DEBUG nova.policy [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6395ac4bfa5d4910aed9116395bbbdeb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:18:38 np0005629333 nova_compute[244014]: 2026-02-25 12:18:38.388 244018 DEBUG oslo_concurrency.processutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:18:38 np0005629333 nova_compute[244014]: 2026-02-25 12:18:38.389 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:18:38 np0005629333 nova_compute[244014]: 2026-02-25 12:18:38.389 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:18:38 np0005629333 nova_compute[244014]: 2026-02-25 12:18:38.390 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:18:38 np0005629333 nova_compute[244014]: 2026-02-25 12:18:38.406 244018 DEBUG nova.storage.rbd_utils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 7a6ab503-d433-40a7-9395-3d5660e852c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:18:38 np0005629333 nova_compute[244014]: 2026-02-25 12:18:38.411 244018 DEBUG oslo_concurrency.processutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 7a6ab503-d433-40a7-9395-3d5660e852c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:18:38 np0005629333 nova_compute[244014]: 2026-02-25 12:18:38.423 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:38 np0005629333 nova_compute[244014]: 2026-02-25 12:18:38.622 244018 DEBUG oslo_concurrency.processutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 7a6ab503-d433-40a7-9395-3d5660e852c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.211s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:18:38 np0005629333 nova_compute[244014]: 2026-02-25 12:18:38.686 244018 DEBUG nova.storage.rbd_utils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] resizing rbd image 7a6ab503-d433-40a7-9395-3d5660e852c9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
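
[Note] After import, the fresh RBD image is grown to the flavor's root-disk size; 1073741824 bytes is exactly 1 GiB. A sketch of the equivalent resize through the python-rbd bindings, assuming they are installed and using the cluster config and client from the log:

    import rados
    import rbd

    FLAVOR_ROOT_GB = 1
    size_bytes = FLAVOR_ROOT_GB * 1024 ** 3      # 1073741824, as logged

    with rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack") as cluster:
        with cluster.open_ioctx("vms") as ioctx:
            with rbd.Image(ioctx, "7a6ab503-d433-40a7-9395-3d5660e852c9_disk") as image:
                image.resize(size_bytes)         # grow-only here: the base image is smaller
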
Feb 25 07:18:38 np0005629333 nova_compute[244014]: 2026-02-25 12:18:38.762 244018 DEBUG nova.objects.instance [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'migration_context' on Instance uuid 7a6ab503-d433-40a7-9395-3d5660e852c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:18:38 np0005629333 nova_compute[244014]: 2026-02-25 12:18:38.802 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:18:38 np0005629333 nova_compute[244014]: 2026-02-25 12:18:38.802 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Ensure instance console log exists: /var/lib/nova/instances/7a6ab503-d433-40a7-9395-3d5660e852c9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:18:38 np0005629333 nova_compute[244014]: 2026-02-25 12:18:38.803 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:18:38 np0005629333 nova_compute[244014]: 2026-02-25 12:18:38.803 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:18:38 np0005629333 nova_compute[244014]: 2026-02-25 12:18:38.803 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:18:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:18:39 np0005629333 nova_compute[244014]: 2026-02-25 12:18:39.290 244018 DEBUG oslo_concurrency.lockutils [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Acquiring lock "d974e887-fd2f-479e-951e-fad497dd7b0f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:18:39 np0005629333 nova_compute[244014]: 2026-02-25 12:18:39.290 244018 DEBUG oslo_concurrency.lockutils [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lock "d974e887-fd2f-479e-951e-fad497dd7b0f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:18:39 np0005629333 nova_compute[244014]: 2026-02-25 12:18:39.290 244018 DEBUG oslo_concurrency.lockutils [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Acquiring lock "d974e887-fd2f-479e-951e-fad497dd7b0f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:18:39 np0005629333 nova_compute[244014]: 2026-02-25 12:18:39.291 244018 DEBUG oslo_concurrency.lockutils [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lock "d974e887-fd2f-479e-951e-fad497dd7b0f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:18:39 np0005629333 nova_compute[244014]: 2026-02-25 12:18:39.291 244018 DEBUG oslo_concurrency.lockutils [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lock "d974e887-fd2f-479e-951e-fad497dd7b0f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:18:39 np0005629333 nova_compute[244014]: 2026-02-25 12:18:39.292 244018 INFO nova.compute.manager [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Terminating instance#033[00m
Feb 25 07:18:39 np0005629333 nova_compute[244014]: 2026-02-25 12:18:39.292 244018 DEBUG oslo_concurrency.lockutils [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Acquiring lock "refresh_cache-d974e887-fd2f-479e-951e-fad497dd7b0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:18:39 np0005629333 nova_compute[244014]: 2026-02-25 12:18:39.293 244018 DEBUG oslo_concurrency.lockutils [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Acquired lock "refresh_cache-d974e887-fd2f-479e-951e-fad497dd7b0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:18:39 np0005629333 nova_compute[244014]: 2026-02-25 12:18:39.293 244018 DEBUG nova.network.neutron [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:18:39 np0005629333 nova_compute[244014]: 2026-02-25 12:18:39.299 244018 DEBUG nova.network.neutron [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Successfully updated port: 2e503dd2-735e-4bfc-87c7-dffab319d935 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:18:39 np0005629333 nova_compute[244014]: 2026-02-25 12:18:39.321 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "refresh_cache-52f927ad-a417-489f-9f92-87bc3433649d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:18:39 np0005629333 nova_compute[244014]: 2026-02-25 12:18:39.321 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquired lock "refresh_cache-52f927ad-a417-489f-9f92-87bc3433649d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:18:39 np0005629333 nova_compute[244014]: 2026-02-25 12:18:39.322 244018 DEBUG nova.network.neutron [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:18:39 np0005629333 nova_compute[244014]: 2026-02-25 12:18:39.425 244018 DEBUG nova.network.neutron [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Successfully created port: 179cbc6a-d79f-4f46-88e8-f362ea0e4a26 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:18:39 np0005629333 nova_compute[244014]: 2026-02-25 12:18:39.511 244018 DEBUG nova.network.neutron [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:18:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 246 MiB data, 375 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 3.5 MiB/s wr, 58 op/s
Feb 25 07:18:39 np0005629333 nova_compute[244014]: 2026-02-25 12:18:39.611 244018 DEBUG nova.network.neutron [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:18:39 np0005629333 nova_compute[244014]: 2026-02-25 12:18:39.773 244018 DEBUG nova.compute.manager [req-3b303e83-cb8f-4fbe-a8ba-12dd198e50e3 req-28dde81a-be40-42d2-8ae1-8e7738587ab0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-changed-2e503dd2-735e-4bfc-87c7-dffab319d935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:18:39 np0005629333 nova_compute[244014]: 2026-02-25 12:18:39.774 244018 DEBUG nova.compute.manager [req-3b303e83-cb8f-4fbe-a8ba-12dd198e50e3 req-28dde81a-be40-42d2-8ae1-8e7738587ab0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Refreshing instance network info cache due to event network-changed-2e503dd2-735e-4bfc-87c7-dffab319d935. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:18:39 np0005629333 nova_compute[244014]: 2026-02-25 12:18:39.774 244018 DEBUG oslo_concurrency.lockutils [req-3b303e83-cb8f-4fbe-a8ba-12dd198e50e3 req-28dde81a-be40-42d2-8ae1-8e7738587ab0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-52f927ad-a417-489f-9f92-87bc3433649d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:18:39 np0005629333 nova_compute[244014]: 2026-02-25 12:18:39.894 244018 DEBUG nova.network.neutron [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:18:39 np0005629333 nova_compute[244014]: 2026-02-25 12:18:39.912 244018 DEBUG oslo_concurrency.lockutils [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Releasing lock "refresh_cache-d974e887-fd2f-479e-951e-fad497dd7b0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:18:39 np0005629333 nova_compute[244014]: 2026-02-25 12:18:39.912 244018 DEBUG nova.compute.manager [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:18:39 np0005629333 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000010.scope: Deactivated successfully.
Feb 25 07:18:39 np0005629333 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000010.scope: Consumed 3.180s CPU time.
Feb 25 07:18:39 np0005629333 systemd-machined[210048]: Machine qemu-18-instance-00000010 terminated.
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.138 244018 INFO nova.virt.libvirt.driver [-] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Instance destroyed successfully.#033[00m
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.139 244018 DEBUG nova.objects.instance [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lazy-loading 'resources' on Instance uuid d974e887-fd2f-479e-951e-fad497dd7b0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.504 244018 INFO nova.virt.libvirt.driver [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Deleting instance files /var/lib/nova/instances/d974e887-fd2f-479e-951e-fad497dd7b0f_del#033[00m
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.506 244018 INFO nova.virt.libvirt.driver [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Deletion of /var/lib/nova/instances/d974e887-fd2f-479e-951e-fad497dd7b0f_del complete#033[00m
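
[Note] The "_del" suffix in the two lines above reflects the driver's delete pattern: rename the instance directory to <uuid>_del first, then remove it, so an interrupted cleanup leaves an obviously-orphaned name rather than a half-deleted live path. A sketch of that pattern with the logged paths:

    import os
    import shutil

    inst_dir = "/var/lib/nova/instances/d974e887-fd2f-479e-951e-fad497dd7b0f"
    doomed = inst_dir + "_del"

    os.rename(inst_dir, doomed)  # atomic on one filesystem; the "delete" is instant
    shutil.rmtree(doomed)        # "Deletion of ..._del complete" once this returns
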
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.572 244018 INFO nova.compute.manager [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Took 0.66 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.573 244018 DEBUG oslo.service.loopingcall [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.573 244018 DEBUG nova.compute.manager [-] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.573 244018 DEBUG nova.network.neutron [-] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.779 244018 DEBUG nova.network.neutron [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Updating instance_info_cache with network_info: [{"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
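
[Note] The network_info blob cached above is the canonical record of the port nova just bound: an OVN-backed OVS port on br-int with fixed IP 10.100.0.5/28 and MTU 1442 (a 1500-byte underlay minus 58 bytes of Geneve tunnel overhead). A sketch pulling out the fields a troubleshooter usually wants; the literal is trimmed from the logged blob:

    import json

    # Trimmed from the network_info blob logged above.
    nw_info = json.loads('''[{"id": "2e503dd2-735e-4bfc-87c7-dffab319d935",
      "address": "fa:16:3e:87:71:62", "devname": "tap2e503dd2-73",
      "network": {"bridge": "br-int",
        "meta": {"mtu": 1442},
        "subnets": [{"cidr": "10.100.0.0/28",
          "ips": [{"address": "10.100.0.5"}]}]}}]''')

    vif = nw_info[0]
    fixed = [ip["address"]
             for subnet in vif["network"]["subnets"]
             for ip in subnet["ips"]]
    print(vif["devname"], vif["address"], fixed, vif["network"]["meta"]["mtu"])
    # tap2e503dd2-73 fa:16:3e:87:71:62 ['10.100.0.5'] 1442
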
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.805 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Releasing lock "refresh_cache-52f927ad-a417-489f-9f92-87bc3433649d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.806 244018 DEBUG nova.compute.manager [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance network_info: |[{"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
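The network_info blob logged above is ordinary JSON once the surrounding |...| markers are dropped. A small sketch of extracting the fixed IPs and MTU from such a structure (keys copied from the entry above, payload trimmed for brevity):

    import json

    network_info = json.loads('''[{
      "id": "2e503dd2-735e-4bfc-87c7-dffab319d935",
      "address": "fa:16:3e:87:71:62",
      "network": {
        "subnets": [{"cidr": "10.100.0.0/28",
                     "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4}]}],
        "meta": {"mtu": 1442}
      }
    }]''')

    for vif in network_info:
        fixed = [ip["address"]
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"] if ip["type"] == "fixed"]
        print(vif["id"], vif["address"], fixed, vif["network"]["meta"]["mtu"])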
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.808 244018 DEBUG oslo_concurrency.lockutils [req-3b303e83-cb8f-4fbe-a8ba-12dd198e50e3 req-28dde81a-be40-42d2-8ae1-8e7738587ab0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-52f927ad-a417-489f-9f92-87bc3433649d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.808 244018 DEBUG nova.network.neutron [req-3b303e83-cb8f-4fbe-a8ba-12dd198e50e3 req-28dde81a-be40-42d2-8ae1-8e7738587ab0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Refreshing network info cache for port 2e503dd2-735e-4bfc-87c7-dffab319d935 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.814 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Start _get_guest_xml network_info=[{"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.821 244018 WARNING nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.826 244018 DEBUG nova.virt.libvirt.host [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.827 244018 DEBUG nova.virt.libvirt.host [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.836 244018 DEBUG nova.virt.libvirt.host [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.837 244018 DEBUG nova.virt.libvirt.host [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
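The two probes above try cgroup v1 first and then fall back to v2, where the host advertises its controllers in /sys/fs/cgroup/cgroup.controllers. The v2 check reduces to roughly the following sketch against the standard unified-hierarchy mount (not Nova's exact code):

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        """True if the unified cgroup hierarchy exposes the cpu controller."""
        try:
            return "cpu" in Path(root, "cgroup.controllers").read_text().split()
        except FileNotFoundError:
            return False  # no unified (v2) mount at this path

    print(has_cgroupsv2_cpu_controller())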
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.838 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.838 244018 DEBUG nova.virt.hardware [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.839 244018 DEBUG nova.virt.hardware [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.840 244018 DEBUG nova.virt.hardware [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.841 244018 DEBUG nova.virt.hardware [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.841 244018 DEBUG nova.virt.hardware [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.842 244018 DEBUG nova.virt.hardware [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.842 244018 DEBUG nova.virt.hardware [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.843 244018 DEBUG nova.virt.hardware [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.843 244018 DEBUG nova.virt.hardware [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.844 244018 DEBUG nova.virt.hardware [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.845 244018 DEBUG nova.virt.hardware [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
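The topology lines above amount to enumerating every (sockets, cores, threads) triple whose product equals the vCPU count, within the 65536-per-axis limits, then sorting by preference; with one vCPU the only candidate is 1:1:1. An illustrative version (not Nova's implementation):

    from itertools import product

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """All (sockets, cores, threads) with sockets * cores * threads == vcpus."""
        return [(s, c, t)
                for s, c, t in product(range(1, min(vcpus, max_sockets) + 1),
                                       range(1, min(vcpus, max_cores) + 1),
                                       range(1, min(vcpus, max_threads) + 1))
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)] -- matches the log
    print(possible_topologies(4))  # six candidates: (1, 1, 4), (1, 2, 2), (2, 2, 1), ...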
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.849 244018 DEBUG oslo_concurrency.processutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.873 244018 DEBUG nova.network.neutron [-] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.893 244018 DEBUG nova.network.neutron [-] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.917 244018 INFO nova.compute.manager [-] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Took 0.34 seconds to deallocate network for instance.
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.969 244018 DEBUG oslo_concurrency.lockutils [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:18:40 np0005629333 nova_compute[244014]: 2026-02-25 12:18:40.970 244018 DEBUG oslo_concurrency.lockutils [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
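The Acquiring/acquired/released triples around "compute_resources" are oslo.concurrency's named locks. The same pattern in isolation (real lockutils API; the lock name is just the one from the log):

    from oslo_concurrency import lockutils

    # Context-manager form: produces the same acquire/release DEBUG lines.
    with lockutils.lock("compute_resources"):
        pass  # mutate tracked resource usage here

    # Decorator form, the shape used by ResourceTracker.update_usage().
    @lockutils.synchronized("compute_resources")
    def update_usage():
        pass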
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.072 244018 DEBUG oslo_concurrency.processutils [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.173 244018 DEBUG nova.network.neutron [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Successfully updated port: 179cbc6a-d79f-4f46-88e8-f362ea0e4a26 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.192 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "refresh_cache-7a6ab503-d433-40a7-9395-3d5660e852c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.193 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquired lock "refresh_cache-7a6ab503-d433-40a7-9395-3d5660e852c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.194 244018 DEBUG nova.network.neutron [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.309 244018 DEBUG nova.compute.manager [req-7e744290-1c2e-47ea-92d5-4ff475341fa8 req-7e4a1d17-be5a-4f93-bb4e-e8af32d8078d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Received event network-changed-179cbc6a-d79f-4f46-88e8-f362ea0e4a26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.310 244018 DEBUG nova.compute.manager [req-7e744290-1c2e-47ea-92d5-4ff475341fa8 req-7e4a1d17-be5a-4f93-bb4e-e8af32d8078d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Refreshing instance network info cache due to event network-changed-179cbc6a-d79f-4f46-88e8-f362ea0e4a26. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.310 244018 DEBUG oslo_concurrency.lockutils [req-7e744290-1c2e-47ea-92d5-4ff475341fa8 req-7e4a1d17-be5a-4f93-bb4e-e8af32d8078d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-7a6ab503-d433-40a7-9395-3d5660e852c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:18:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:18:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1444724717' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.400 244018 DEBUG oslo_concurrency.processutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
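The ceph calls above are plain subprocesses returning JSON; --id selects the client.openstack identity that shows up in the ceph-mon audit lines. An equivalent standalone probe (same CLI and paths as the log; the "mons" keys are the usual mon-dump fields):

    import json
    import subprocess

    cmd = ["ceph", "mon", "dump", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    for mon in json.loads(out).get("mons", []):
        print(mon.get("name"), mon.get("addr"))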
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.430 244018 DEBUG nova.storage.rbd_utils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.435 244018 DEBUG oslo_concurrency.processutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:18:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 240 MiB data, 373 MiB used, 60 GiB / 60 GiB avail; 695 KiB/s rd, 3.6 MiB/s wr, 109 op/s
Feb 25 07:18:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:18:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3134556862' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.597 244018 DEBUG oslo_concurrency.processutils [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.604 244018 DEBUG nova.compute.provider_tree [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.621 244018 DEBUG nova.scheduler.client.report [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
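The inventory dict above is what the resource tracker pushes to Placement; schedulable capacity per resource class follows the usual Placement rule, capacity = (total - reserved) * allocation_ratio. Worked against the logged numbers:

    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: schedulable capacity = {capacity:g}")
    # MEMORY_MB: 7167, VCPU: 32, DISK_GB: 52.2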
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.651 244018 DEBUG oslo_concurrency.lockutils [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.687 244018 INFO nova.scheduler.client.report [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Deleted allocations for instance d974e887-fd2f-479e-951e-fad497dd7b0f
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.736 244018 DEBUG nova.network.neutron [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.822 244018 DEBUG oslo_concurrency.lockutils [None req-7a6e0e13-fc82-4115-a6c5-e4c28a231900 1768606fbf83414784f34904db6329b8 d1b2f7969c344b348a5d9bd271c293f9 - - default default] Lock "d974e887-fd2f-479e-951e-fad497dd7b0f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.833 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:18:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:18:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3612929653' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.947 244018 DEBUG oslo_concurrency.processutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.949 244018 DEBUG nova.virt.libvirt.vif [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:18:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1736622759',display_name='tempest-ServersAdminTestJSON-server-1736622759',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1736622759',id=17,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-ae0qgj1z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:18:35Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=52f927ad-a417-489f-9f92-87bc3433649d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.950 244018 DEBUG nova.network.os_vif_util [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.951 244018 DEBUG nova.network.os_vif_util [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.953 244018 DEBUG nova.objects.instance [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.982 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:18:41 np0005629333 nova_compute[244014]:  <uuid>52f927ad-a417-489f-9f92-87bc3433649d</uuid>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:  <name>instance-00000011</name>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServersAdminTestJSON-server-1736622759</nova:name>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:18:40</nova:creationTime>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:18:41 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:        <nova:user uuid="6395ac4bfa5d4910aed9116395bbbdeb">tempest-ServersAdminTestJSON-147238686-project-member</nova:user>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:        <nova:project uuid="b35cd816238a43d8825ab11e83d2b8bf">tempest-ServersAdminTestJSON-147238686</nova:project>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:        <nova:port uuid="2e503dd2-735e-4bfc-87c7-dffab319d935">
Feb 25 07:18:41 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <entry name="serial">52f927ad-a417-489f-9f92-87bc3433649d</entry>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <entry name="uuid">52f927ad-a417-489f-9f92-87bc3433649d</entry>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/52f927ad-a417-489f-9f92-87bc3433649d_disk">
Feb 25 07:18:41 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:18:41 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/52f927ad-a417-489f-9f92-87bc3433649d_disk.config">
Feb 25 07:18:41 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:18:41 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:87:71:62"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <target dev="tap2e503dd2-73"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/console.log" append="off"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:18:41 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:18:41 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:18:41 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:18:41 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
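The generated domain XML above is easy to post-process with the standard library; for instance, listing the RBD disks and the tap interface (element names straight from the document above, trimmed to the relevant nodes):

    import xml.etree.ElementTree as ET

    xml = """<domain type="kvm">
      <name>instance-00000011</name>
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/52f927ad-a417-489f-9f92-87bc3433649d_disk"/>
          <target dev="vda" bus="virtio"/>
        </disk>
        <interface type="ethernet">
          <mac address="fa:16:3e:87:71:62"/>
          <target dev="tap2e503dd2-73"/>
        </interface>
      </devices>
    </domain>"""

    dom = ET.fromstring(xml)
    print(dom.findtext("name"))  # instance-00000011
    for disk in dom.iter("disk"):
        source = disk.find("source")
        print(disk.get("device"), source.get("protocol"), source.get("name"))
    for iface in dom.iter("interface"):
        print(iface.find("mac").get("address"), iface.find("target").get("dev"))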
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.983 244018 DEBUG nova.compute.manager [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Preparing to wait for external event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.984 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.984 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.984 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.985 244018 DEBUG nova.virt.libvirt.vif [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:18:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1736622759',display_name='tempest-ServersAdminTestJSON-server-1736622759',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1736622759',id=17,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-ae0qgj1z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:18:35Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=52f927ad-a417-489f-9f92-87bc3433649d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.986 244018 DEBUG nova.network.os_vif_util [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.987 244018 DEBUG nova.network.os_vif_util [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.987 244018 DEBUG os_vif [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.988 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.989 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.989 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.994 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.994 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e503dd2-73, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.995 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2e503dd2-73, col_values=(('external_ids', {'iface-id': '2e503dd2-735e-4bfc-87c7-dffab319d935', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:71:62', 'vm-uuid': '52f927ad-a417-489f-9f92-87bc3433649d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:18:41 np0005629333 nova_compute[244014]: 2026-02-25 12:18:41.997 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:18:41 np0005629333 NetworkManager[49836]: <info>  [1772021921.9987] manager: (tap2e503dd2-73): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Feb 25 07:18:42 np0005629333 nova_compute[244014]: 2026-02-25 12:18:42.000 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 07:18:42 np0005629333 nova_compute[244014]: 2026-02-25 12:18:42.005 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:18:42 np0005629333 nova_compute[244014]: 2026-02-25 12:18:42.008 244018 INFO os_vif [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73')
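The AddBridgeCommand/AddPortCommand/DbSetCommand transaction above is the IDL equivalent of a single ovs-vsctl invocation; the external_ids keys (iface-id, attached-mac, ...) are what the OVN binding layer matches on. A standalone sketch using the CLI instead of ovsdbapp (values copied from the log):

    import subprocess

    port = "tap2e503dd2-73"
    external_ids = {
        "iface-id": "2e503dd2-735e-4bfc-87c7-dffab319d935",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:87:71:62",
        "vm-uuid": "52f927ad-a417-489f-9f92-87bc3433649d",
    }

    cmd = ["ovs-vsctl",
           "--", "--may-exist", "add-br", "br-int",
           "--", "--may-exist", "add-port", "br-int", port,
           "--", "set", "Interface", port]
    cmd += [f"external_ids:{key}={value}" for key, value in external_ids.items()]
    subprocess.run(cmd, check=True)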
Feb 25 07:18:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:18:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:18:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:18:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:18:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006154421725418758 of space, bias 1.0, pg target 0.18463265176256272 quantized to 32 (current 32)
Feb 25 07:18:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:18:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:18:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:18:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:18:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:18:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002489711543254153 of space, bias 1.0, pg target 0.7469134629762458 quantized to 32 (current 32)
Feb 25 07:18:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:18:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.271449539215372e-07 of space, bias 4.0, pg target 0.0009925739447058447 quantized to 16 (current 16)
Feb 25 07:18:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:18:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:18:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:18:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:18:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:18:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:18:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:18:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:18:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:18:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
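
The pg_autoscaler pass above is plain arithmetic: each pool's ideal PG count is its share of cluster capacity times its bias times a cluster-wide PG budget. A budget of 300, which would correspond to mon_target_pg_per_osd (default 100) times three OSDs, reproduces every "pg target" value logged. The result is rounded to a power of two, floored at a per-pool minimum (the 1, 16, and 32 floors visible above), and only applied when it differs from the current pg_num by more than a factor of three. A minimal sketch of that arithmetic; the budget decomposition and the pg_num_min floors are assumptions read off the logged numbers, not taken from the module source:

    def nearest_power_of_two(n: float) -> int:
        """Round up to the next power of two, minimum 1."""
        p = 1
        while p < n:
            p *= 2
        return p

    def pg_target(usage_ratio: float, bias: float, current_pg: int,
                  pg_num_min: int = 32,          # assumed default floor
                  num_osds: int = 3,             # inferred from the x300 factor
                  target_pg_per_osd: int = 100) -> int:
        ideal = usage_ratio * bias * num_osds * target_pg_per_osd
        quantized = max(pg_num_min, nearest_power_of_two(ideal))
        # The autoscaler is deliberately sticky: it keeps the current
        # pg_num unless the ideal value is off by more than a factor of 3.
        if current_pg and max(quantized, current_pg) < 3 * min(quantized, current_pg):
            return current_pg
        return quantized

    # Reproduces the 'vms' line: 0.000615... of space, bias 1.0,
    # pg target ~0.185, quantized/kept at 32.
    print(pg_target(0.0006154421725418758, 1.0, current_pg=32))  # -> 32
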
Feb 25 07:18:42 np0005629333 nova_compute[244014]: 2026-02-25 12:18:42.123 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:18:42 np0005629333 nova_compute[244014]: 2026-02-25 12:18:42.124 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:18:42 np0005629333 nova_compute[244014]: 2026-02-25 12:18:42.124 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No VIF found with MAC fa:16:3e:87:71:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:18:42 np0005629333 nova_compute[244014]: 2026-02-25 12:18:42.125 244018 INFO nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Using config drive#033[00m
Feb 25 07:18:42 np0005629333 nova_compute[244014]: 2026-02-25 12:18:42.157 244018 DEBUG nova.storage.rbd_utils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:18:43 np0005629333 nova_compute[244014]: 2026-02-25 12:18:43.410 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 181 op/s
Feb 25 07:18:43 np0005629333 nova_compute[244014]: 2026-02-25 12:18:43.967 244018 INFO nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Creating config drive at /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config#033[00m
Feb 25 07:18:43 np0005629333 nova_compute[244014]: 2026-02-25 12:18:43.974 244018 DEBUG oslo_concurrency.processutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6do9jecn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.099 244018 DEBUG oslo_concurrency.processutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6do9jecn" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
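
The two processutils lines show the config-drive build: Nova stages the metadata tree under a temp directory (/tmp/tmp6do9jecn above) and packs it with mkisofs using relaxed ISO9660 naming, Joliet (-J) and Rock Ridge (-r) extensions, and the config-2 volume label that cloud-init's config-drive datasource looks for. A standalone sketch of the same invocation, assuming mkisofs (or its genisoimage alias) is on PATH; the paths and staged content are illustrative, and the -publisher/-quiet flags are omitted:

    import subprocess
    import tempfile
    from pathlib import Path

    # Stage a minimal config-drive tree (illustrative content only).
    staging = Path(tempfile.mkdtemp())
    latest = staging / "openstack" / "latest"
    latest.mkdir(parents=True)
    (latest / "meta_data.json").write_text('{"uuid": "example"}')

    # Same flag set as the logged command.
    subprocess.run(
        ["mkisofs", "-o", "/tmp/disk.config",
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-J", "-r", "-V", "config-2", str(staging)],
        check=True,
    )
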
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.136 244018 DEBUG nova.storage.rbd_utils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.141 244018 DEBUG oslo_concurrency.processutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config 52f927ad-a417-489f-9f92-87bc3433649d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:18:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.186 244018 DEBUG nova.network.neutron [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Updating instance_info_cache with network_info: [{"id": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "address": "fa:16:3e:d2:1d:06", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap179cbc6a-d7", "ovs_interfaceid": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.261 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Releasing lock "refresh_cache-7a6ab503-d433-40a7-9395-3d5660e852c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.261 244018 DEBUG nova.compute.manager [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Instance network_info: |[{"id": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "address": "fa:16:3e:d2:1d:06", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap179cbc6a-d7", "ovs_interfaceid": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.262 244018 DEBUG oslo_concurrency.lockutils [req-7e744290-1c2e-47ea-92d5-4ff475341fa8 req-7e4a1d17-be5a-4f93-bb4e-e8af32d8078d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-7a6ab503-d433-40a7-9395-3d5660e852c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.262 244018 DEBUG nova.network.neutron [req-7e744290-1c2e-47ea-92d5-4ff475341fa8 req-7e4a1d17-be5a-4f93-bb4e-e8af32d8078d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Refreshing network info cache for port 179cbc6a-d79f-4f46-88e8-f362ea0e4a26 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.265 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Start _get_guest_xml network_info=[{"id": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "address": "fa:16:3e:d2:1d:06", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap179cbc6a-d7", "ovs_interfaceid": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.272 244018 WARNING nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.278 244018 DEBUG nova.virt.libvirt.host [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.279 244018 DEBUG nova.virt.libvirt.host [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.291 244018 DEBUG nova.virt.libvirt.host [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.292 244018 DEBUG nova.virt.libvirt.host [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.292 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.293 244018 DEBUG nova.virt.hardware [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.294 244018 DEBUG nova.virt.hardware [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.294 244018 DEBUG nova.virt.hardware [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.294 244018 DEBUG nova.virt.hardware [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.295 244018 DEBUG nova.virt.hardware [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.295 244018 DEBUG nova.virt.hardware [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.295 244018 DEBUG nova.virt.hardware [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.296 244018 DEBUG nova.virt.hardware [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.296 244018 DEBUG nova.virt.hardware [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.296 244018 DEBUG nova.virt.hardware [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.297 244018 DEBUG nova.virt.hardware [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
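
The hardware.py sequence above is the topology search spelled out: flavor and image set no limits or preferences (0:0:0, i.e. unset), the defaults cap each dimension at 65536, and for a single vCPU the only sockets:cores:threads split whose product is 1 is 1:1:1. A toy version of that enumeration; the real _get_possible_cpu_topologies also honours flavor/image maxima per dimension and sorts candidates against the preferred topology:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield (sockets, cores, threads) triples whose product is vcpus."""
        for s in range(1, min(max_sockets, vcpus) + 1):
            for c in range(1, min(max_cores, vcpus) + 1):
                for t in range(1, min(max_threads, vcpus) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # -> [(1, 1, 1)], as in the log
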
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.302 244018 DEBUG oslo_concurrency.processutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.323 244018 DEBUG oslo_concurrency.processutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config 52f927ad-a417-489f-9f92-87bc3433649d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.326 244018 INFO nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Deleting local config drive /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config because it was imported into RBD.#033[00m
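
Taken together, the rbd_utils and processutils lines above are the config-drive handoff to Ceph: probe whether <uuid>_disk.config already exists in the vms pool, import the locally built ISO with rbd import, then delete the local copy. A condensed sketch of that sequence; Nova performs the existence probe through the rbd Python bindings, while this sketch uses the CLI for both steps:

    import os
    import subprocess

    image = "52f927ad-a417-489f-9f92-87bc3433649d_disk.config"
    local = "/var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config"
    base = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    # "rbd image ... does not exist" in the log corresponds to this probe.
    probe = subprocess.run(["rbd", "info", "--pool", "vms", image] + base,
                           capture_output=True)
    if probe.returncode != 0:
        subprocess.run(["rbd", "import", "--pool", "vms", local, image,
                        "--image-format=2"] + base, check=True)
        os.unlink(local)  # "Deleting local config drive ... imported into RBD"
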
Feb 25 07:18:44 np0005629333 kernel: tap2e503dd2-73: entered promiscuous mode
Feb 25 07:18:44 np0005629333 NetworkManager[49836]: <info>  [1772021924.3784] manager: (tap2e503dd2-73): new Tun device (/org/freedesktop/NetworkManager/Devices/41)
Feb 25 07:18:44 np0005629333 ovn_controller[147040]: 2026-02-25T12:18:44Z|00054|binding|INFO|Claiming lport 2e503dd2-735e-4bfc-87c7-dffab319d935 for this chassis.
Feb 25 07:18:44 np0005629333 ovn_controller[147040]: 2026-02-25T12:18:44Z|00055|binding|INFO|2e503dd2-735e-4bfc-87c7-dffab319d935: Claiming fa:16:3e:87:71:62 10.100.0.5
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.391 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.395 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.408 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:71:62 10.100.0.5'], port_security=['fa:16:3e:87:71:62 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '52f927ad-a417-489f-9f92-87bc3433649d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fd7733ad-d262-4781-bcfa-77cfa8b67164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556b4b98-e95d-460c-a904-adc77baf4b88, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=2e503dd2-735e-4bfc-87c7-dffab319d935) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
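
That "Matched UPDATE" line is ovsdbapp's event machinery at work: the metadata agent registers a RowEvent subclass for update events on Port_Binding, and it fires when a row's chassis column goes from empty to set, exactly the old=Port_Binding(chassis=[]) transition shown. A minimal sketch of such an event class against ovsdbapp's RowEvent base, as imported from the path in the log; wiring it into a live southbound IDL connection is omitted, and the constructor shape is taken from the event repr above:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBoundEvent(row_event.RowEvent):
        """Fire when a Port_Binding row is claimed by a chassis."""

        def __init__(self):
            # Same shape as the event printed above:
            # events=('update',), table='Port_Binding', conditions=None.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # `old` carries only the columns that changed; chassis
            # going [] -> [<row>] means the port was just bound.
            if row.chassis and not getattr(old, 'chassis', None):
                print(f"lport {row.logical_port} bound to a chassis")
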
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.413 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 2e503dd2-735e-4bfc-87c7-dffab319d935 in datapath 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 bound to our chassis#033[00m
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.417 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6#033[00m
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.431 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[57896b35-523c-4cf4-8e58-043a548b143e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.433 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1f4cbf9a-41 in ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.435 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1f4cbf9a-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.435 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7d3cfb3f-8ab9-4d40-bc16-6e84fba2bcd4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.436 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b1bec435-41ca-424a-8a19-efa8e3686f45]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:44 np0005629333 systemd-machined[210048]: New machine qemu-19-instance-00000011.
Feb 25 07:18:44 np0005629333 systemd-udevd[261029]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.449 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:44 np0005629333 systemd[1]: Started Virtual Machine qemu-19-instance-00000011.
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.450 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[0ad07655-ee12-4183-b994-3e5bc332b5db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:44 np0005629333 ovn_controller[147040]: 2026-02-25T12:18:44Z|00056|binding|INFO|Setting lport 2e503dd2-735e-4bfc-87c7-dffab319d935 ovn-installed in OVS
Feb 25 07:18:44 np0005629333 ovn_controller[147040]: 2026-02-25T12:18:44Z|00057|binding|INFO|Setting lport 2e503dd2-735e-4bfc-87c7-dffab319d935 up in Southbound
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.458 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:44 np0005629333 NetworkManager[49836]: <info>  [1772021924.4640] device (tap2e503dd2-73): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:18:44 np0005629333 NetworkManager[49836]: <info>  [1772021924.4650] device (tap2e503dd2-73): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.469 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fc649c27-199a-4655-a5c5-2941b6cb6559]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.495 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c7186ba1-0906-4823-a994-a20b4722bd8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:44 np0005629333 systemd-udevd[261049]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.502 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[05c55829-7390-4cfa-aebb-40d64cc944bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:44 np0005629333 NetworkManager[49836]: <info>  [1772021924.5043] manager: (tap1f4cbf9a-40): new Veth device (/org/freedesktop/NetworkManager/Devices/42)
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.533 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[96527515-7a5c-42da-86be-b8a2a0a7ba71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.537 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed8e519-909e-4d91-8268-01f749d4b3db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:44 np0005629333 NetworkManager[49836]: <info>  [1772021924.5589] device (tap1f4cbf9a-40): carrier: link connected
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.564 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c00a627f-a443-449d-b24a-9418cd422e68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.582 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[75588d2d-7c8d-44b0-953e-277904d4beb6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f4cbf9a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:f8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389396, 'reachable_time': 28421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261080, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.596 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dce1d045-9240-4703-8b27-68d4fcdca2ce]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb3:f8a7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389396, 'tstamp': 389396}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261081, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.614 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[66f369e5-ecb2-4465-93b3-8517f44789d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f4cbf9a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:f8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389396, 'reachable_time': 28421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261082, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.643 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[57e22e1e-ba50-4079-8e2d-b5e3b876459e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.696 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9d02bf53-1eae-4a83-9f55-3ee6b9673bb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.697 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f4cbf9a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.698 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.699 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f4cbf9a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.701 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:44 np0005629333 kernel: tap1f4cbf9a-40: entered promiscuous mode
Feb 25 07:18:44 np0005629333 NetworkManager[49836]: <info>  [1772021924.7020] manager: (tap1f4cbf9a-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.703 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.709 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f4cbf9a-40, col_values=(('external_ids', {'iface-id': '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.710 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.711 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:44 np0005629333 ovn_controller[147040]: 2026-02-25T12:18:44Z|00058|binding|INFO|Releasing lport 2cfd1e6b-d28d-43c0-bbbd-c6ad77855812 from this chassis (sb_readonly=0)
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.715 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.716 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7d055957-b673-41aa-9356-f8d125b7deea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.717 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6.pid.haproxy
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:18:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:44.718 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'env', 'PROCESS_TAG=haproxy-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
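
The dumped haproxy_cfg plus the rootwrap command after it describe the whole metadata-proxy launch: write a per-network config whose frontend binds 169.254.169.254:80 and whose backend is the UNIX socket /var/lib/neutron/metadata_proxy (in haproxy, a server address beginning with '/' is a UNIX socket), then start haproxy inside the ovnmeta-<network> namespace. A compressed sketch of those two steps, with the config trimmed to its essentials and the env/PROCESS_TAG wrapper omitted; requires root:

    import subprocess
    import textwrap

    net = "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6"  # network id from the log
    path = f"/var/lib/neutron/ovn-metadata-proxy/{net}.conf"

    cfg = textwrap.dedent(f"""\
        global
            pidfile /var/lib/neutron/external/pids/{net}.pid.haproxy
            daemon
        defaults
            mode http
            timeout connect 30s
            timeout client  32s
            timeout server  32s
        listen listener
            bind 169.254.169.254:80
            server metadata /var/lib/neutron/metadata_proxy
            http-request add-header X-OVN-Network-ID {net}
        """)

    with open(path, "w") as f:
        f.write(cfg)

    # Start haproxy inside the per-network namespace, as the
    # rootwrap command above does.
    subprocess.run(["ip", "netns", "exec", f"ovnmeta-{net}",
                    "haproxy", "-f", path], check=True)
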
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.721 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.811 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021924.8108473, 52f927ad-a417-489f-9f92-87bc3433649d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.812 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] VM Started (Lifecycle Event)#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.851 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.855 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021924.8116775, 52f927ad-a417-489f-9f92-87bc3433649d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.855 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.886 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.891 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.896 244018 DEBUG nova.network.neutron [req-3b303e83-cb8f-4fbe-a8ba-12dd198e50e3 req-28dde81a-be40-42d2-8ae1-8e7738587ab0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Updated VIF entry in instance network info cache for port 2e503dd2-735e-4bfc-87c7-dffab319d935. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.897 244018 DEBUG nova.network.neutron [req-3b303e83-cb8f-4fbe-a8ba-12dd198e50e3 req-28dde81a-be40-42d2-8ae1-8e7738587ab0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Updating instance_info_cache with network_info: [{"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.921 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
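
The lifecycle handling above compares power states using Nova's constants from nova/compute/power_state.py (NOSTATE=0, RUNNING=1, PAUSED=3): the database still says 0 while libvirt reports 3 because QEMU starts the domain paused, and the sync is skipped outright while task_state is 'spawning'. A heavily simplified sketch of that guard:

    # Nova's power-state constants (nova/compute/power_state.py).
    NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7

    def sync_power_state(db_power_state, vm_power_state, task_state):
        """Simplified mirror of the decision logged above."""
        if task_state is not None:
            # "During sync_power_state the instance has a pending task
            # (spawning). Skip." : in-flight operations are left alone.
            return db_power_state
        return vm_power_state

    # DB says NOSTATE (0), libvirt reports PAUSED (3), task 'spawning':
    print(sync_power_state(NOSTATE, PAUSED, "spawning"))  # -> 0 (skipped)
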
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.924 244018 DEBUG oslo_concurrency.lockutils [req-3b303e83-cb8f-4fbe-a8ba-12dd198e50e3 req-28dde81a-be40-42d2-8ae1-8e7738587ab0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-52f927ad-a417-489f-9f92-87bc3433649d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:18:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:18:44 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2461457061' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.951 244018 DEBUG oslo_concurrency.processutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.649s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
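
Both "ceph mon dump --format=json" invocations serve the same purpose: nova.storage.rbd_utils extracts the monitor hosts and ports to embed in the RBD disk definitions. A sketch of running and parsing that command with the same --id/--conf flags; the 'mons'/'addr' key shape ("host:port/nonce") matches typical mon dump output, but treat the exact keys as an assumption:

    import json
    import subprocess

    def get_mon_addrs(client="openstack", conf="/etc/ceph/ceph.conf"):
        """Return (hosts, ports) from `ceph mon dump --format=json`."""
        out = subprocess.run(
            ["ceph", "mon", "dump", "--format=json",
             "--id", client, "--conf", conf],
            check=True, capture_output=True, text=True).stdout
        # The human-readable "dumped monmap epoch N" status line goes to
        # stderr on recent releases, so stdout is the JSON monmap.
        monmap = json.loads(out)
        hosts, ports = [], []
        for mon in monmap["mons"]:
            # addr looks like "192.168.122.100:6789/0"; drop the /nonce.
            addr = mon["addr"].split("/")[0]
            host, _, port = addr.rpartition(":")
            hosts.append(host)
            ports.append(port)
        return hosts, ports
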
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.971 244018 DEBUG nova.storage.rbd_utils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 7a6ab503-d433-40a7-9395-3d5660e852c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:18:44 np0005629333 nova_compute[244014]: 2026-02-25 12:18:44.975 244018 DEBUG oslo_concurrency.processutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:18:45 np0005629333 podman[261177]: 2026-02-25 12:18:45.099081404 +0000 UTC m=+0.057636979 container create 5cace0ed50a1824992af06633e096a65ec6f66dc35d5b95126e92895214bbe2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0)
Feb 25 07:18:45 np0005629333 systemd[1]: Started libpod-conmon-5cace0ed50a1824992af06633e096a65ec6f66dc35d5b95126e92895214bbe2a.scope.
Feb 25 07:18:45 np0005629333 podman[261177]: 2026-02-25 12:18:45.068083904 +0000 UTC m=+0.026639539 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:18:45 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:18:45 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc838cfd7fef58997060e892c18c0d6fc490da055ecee8b3fae6fbd9365cbb97/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:18:45 np0005629333 podman[261177]: 2026-02-25 12:18:45.191020064 +0000 UTC m=+0.149575639 container init 5cace0ed50a1824992af06633e096a65ec6f66dc35d5b95126e92895214bbe2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:18:45 np0005629333 podman[261177]: 2026-02-25 12:18:45.196093797 +0000 UTC m=+0.154649352 container start 5cace0ed50a1824992af06633e096a65ec6f66dc35d5b95126e92895214bbe2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0)
Feb 25 07:18:45 np0005629333 neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6[261212]: [NOTICE]   (261216) : New worker (261218) forked
Feb 25 07:18:45 np0005629333 neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6[261212]: [NOTICE]   (261216) : Loading success.
Feb 25 07:18:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:18:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/635832215' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:18:45 np0005629333 nova_compute[244014]: 2026-02-25 12:18:45.576 244018 DEBUG oslo_concurrency.processutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
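
[annotation] The processutils pair above is nova's RBD backend shelling out to "ceph mon dump" to learn the monitor addresses that later appear in the disk XML's <host> elements. A minimal stand-alone sketch of the same call, reusing the --id/--conf values from the log (everything else is illustrative):

    import json
    import subprocess

    # Same command the log shows oslo_concurrency.processutils running.
    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    monmap = json.loads(out)
    # Each mon entry carries the address nova writes into <host name=... port=.../>.
    for mon in monmap.get("mons", []):
        print(mon["name"], mon.get("public_addr"))
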
Feb 25 07:18:45 np0005629333 nova_compute[244014]: 2026-02-25 12:18:45.578 244018 DEBUG nova.virt.libvirt.vif [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:18:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-671436254',display_name='tempest-ServersAdminTestJSON-server-671436254',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-671436254',id=18,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-drd5ht2j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:18:38Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=7a6ab503-d433-40a7-9395-3d5660e852c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "address": "fa:16:3e:d2:1d:06", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap179cbc6a-d7", "ovs_interfaceid": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:18:45 np0005629333 nova_compute[244014]: 2026-02-25 12:18:45.579 244018 DEBUG nova.network.os_vif_util [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "address": "fa:16:3e:d2:1d:06", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap179cbc6a-d7", "ovs_interfaceid": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:18:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 181 op/s
Feb 25 07:18:45 np0005629333 nova_compute[244014]: 2026-02-25 12:18:45.580 244018 DEBUG nova.network.os_vif_util [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:1d:06,bridge_name='br-int',has_traffic_filtering=True,id=179cbc6a-d79f-4f46-88e8-f362ea0e4a26,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap179cbc6a-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
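
[annotation] The Converting/Converted pair above is nova.network.os_vif_util mapping nova's VIF dict onto os-vif's versioned objects. A rough sketch of what the converted object amounts to, with field names mirroring the VIFOpenVSwitch repr in the log (treat the constructor usage as illustrative, not nova's actual helper):

    from os_vif.objects import network as osv_network
    from os_vif.objects import vif as osv_vif

    net = osv_network.Network(id="1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6",
                              bridge="br-int",
                              label="tempest-ServersAdminTestJSON-1185395346-network")
    vif = osv_vif.VIFOpenVSwitch(
        id="179cbc6a-d79f-4f46-88e8-f362ea0e4a26",
        address="fa:16:3e:d2:1d:06",
        network=net,
        bridge_name="br-int",
        vif_name="tap179cbc6a-d7",
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False)
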
Feb 25 07:18:45 np0005629333 nova_compute[244014]: 2026-02-25 12:18:45.582 244018 DEBUG nova.objects.instance [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 7a6ab503-d433-40a7-9395-3d5660e852c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:18:45 np0005629333 nova_compute[244014]: 2026-02-25 12:18:45.609 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:18:45 np0005629333 nova_compute[244014]:  <uuid>7a6ab503-d433-40a7-9395-3d5660e852c9</uuid>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:  <name>instance-00000012</name>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServersAdminTestJSON-server-671436254</nova:name>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:18:44</nova:creationTime>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:18:45 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:        <nova:user uuid="6395ac4bfa5d4910aed9116395bbbdeb">tempest-ServersAdminTestJSON-147238686-project-member</nova:user>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:        <nova:project uuid="b35cd816238a43d8825ab11e83d2b8bf">tempest-ServersAdminTestJSON-147238686</nova:project>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:        <nova:port uuid="179cbc6a-d79f-4f46-88e8-f362ea0e4a26">
Feb 25 07:18:45 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <entry name="serial">7a6ab503-d433-40a7-9395-3d5660e852c9</entry>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <entry name="uuid">7a6ab503-d433-40a7-9395-3d5660e852c9</entry>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/7a6ab503-d433-40a7-9395-3d5660e852c9_disk">
Feb 25 07:18:45 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:18:45 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/7a6ab503-d433-40a7-9395-3d5660e852c9_disk.config">
Feb 25 07:18:45 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:18:45 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:d2:1d:06"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <target dev="tap179cbc6a-d7"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/7a6ab503-d433-40a7-9395-3d5660e852c9/console.log" append="off"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:18:45 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:18:45 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:18:45 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:18:45 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:18:45 np0005629333 nova_compute[244014]: 2026-02-25 12:18:45.611 244018 DEBUG nova.compute.manager [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Preparing to wait for external event network-vif-plugged-179cbc6a-d79f-4f46-88e8-f362ea0e4a26 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:18:45 np0005629333 nova_compute[244014]: 2026-02-25 12:18:45.612 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:18:45 np0005629333 nova_compute[244014]: 2026-02-25 12:18:45.612 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:18:45 np0005629333 nova_compute[244014]: 2026-02-25 12:18:45.612 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
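
[annotation] The Acquiring/acquired/released trio above is the stock oslo.concurrency idiom: a named in-process lock serializing event registration for this instance. A sketch of the same pattern (the function name is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("7a6ab503-d433-40a7-9395-3d5660e852c9-events")
    def create_or_get_event():
        # Runs with the named lock held; entry and exit produce exactly the
        # "acquired ... waited" / "released ... held" bookkeeping logged above.
        pass
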
Feb 25 07:18:45 np0005629333 nova_compute[244014]: 2026-02-25 12:18:45.613 244018 DEBUG nova.virt.libvirt.vif [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:18:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-671436254',display_name='tempest-ServersAdminTestJSON-server-671436254',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-671436254',id=18,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-drd5ht2j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:18:38Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=7a6ab503-d433-40a7-9395-3d5660e852c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "address": "fa:16:3e:d2:1d:06", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap179cbc6a-d7", "ovs_interfaceid": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:18:45 np0005629333 nova_compute[244014]: 2026-02-25 12:18:45.614 244018 DEBUG nova.network.os_vif_util [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "address": "fa:16:3e:d2:1d:06", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap179cbc6a-d7", "ovs_interfaceid": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:18:45 np0005629333 nova_compute[244014]: 2026-02-25 12:18:45.615 244018 DEBUG nova.network.os_vif_util [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:1d:06,bridge_name='br-int',has_traffic_filtering=True,id=179cbc6a-d79f-4f46-88e8-f362ea0e4a26,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap179cbc6a-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:18:45 np0005629333 nova_compute[244014]: 2026-02-25 12:18:45.615 244018 DEBUG os_vif [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:1d:06,bridge_name='br-int',has_traffic_filtering=True,id=179cbc6a-d79f-4f46-88e8-f362ea0e4a26,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap179cbc6a-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:18:45 np0005629333 nova_compute[244014]: 2026-02-25 12:18:45.616 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:45 np0005629333 nova_compute[244014]: 2026-02-25 12:18:45.617 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:18:45 np0005629333 nova_compute[244014]: 2026-02-25 12:18:45.617 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:18:45 np0005629333 nova_compute[244014]: 2026-02-25 12:18:45.621 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:45 np0005629333 nova_compute[244014]: 2026-02-25 12:18:45.622 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap179cbc6a-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:18:45 np0005629333 nova_compute[244014]: 2026-02-25 12:18:45.623 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap179cbc6a-d7, col_values=(('external_ids', {'iface-id': '179cbc6a-d79f-4f46-88e8-f362ea0e4a26', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:1d:06', 'vm-uuid': '7a6ab503-d433-40a7-9395-3d5660e852c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
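
[annotation] The two ovsdbapp commands above (AddPortCommand with may_exist=True, then DbSetCommand on the Interface row) are equivalent to a pair of ovs-vsctl calls; a hedged shell-out version using the values from the log:

    import subprocess

    port = "tap179cbc6a-d7"
    subprocess.check_call(["ovs-vsctl", "--may-exist", "add-port", "br-int", port])
    subprocess.check_call([
        "ovs-vsctl", "set", "Interface", port,
        "external_ids:iface-id=179cbc6a-d79f-4f46-88e8-f362ea0e4a26",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:d2:1d:06",
        "external_ids:vm-uuid=7a6ab503-d433-40a7-9395-3d5660e852c9"])

The iface-id value is what lets ovn-controller match this OVS interface to the OVN logical port a moment later.
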
Feb 25 07:18:45 np0005629333 nova_compute[244014]: 2026-02-25 12:18:45.625 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:45 np0005629333 NetworkManager[49836]: <info>  [1772021925.6270] manager: (tap179cbc6a-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Feb 25 07:18:45 np0005629333 nova_compute[244014]: 2026-02-25 12:18:45.628 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:18:45 np0005629333 nova_compute[244014]: 2026-02-25 12:18:45.630 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:45 np0005629333 nova_compute[244014]: 2026-02-25 12:18:45.632 244018 INFO os_vif [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:1d:06,bridge_name='br-int',has_traffic_filtering=True,id=179cbc6a-d79f-4f46-88e8-f362ea0e4a26,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap179cbc6a-d7')#033[00m
Feb 25 07:18:45 np0005629333 nova_compute[244014]: 2026-02-25 12:18:45.711 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:18:45 np0005629333 nova_compute[244014]: 2026-02-25 12:18:45.712 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:18:45 np0005629333 nova_compute[244014]: 2026-02-25 12:18:45.713 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No VIF found with MAC fa:16:3e:d2:1d:06, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:18:45 np0005629333 nova_compute[244014]: 2026-02-25 12:18:45.713 244018 INFO nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Using config drive#033[00m
Feb 25 07:18:45 np0005629333 nova_compute[244014]: 2026-02-25 12:18:45.745 244018 DEBUG nova.storage.rbd_utils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 7a6ab503-d433-40a7-9395-3d5660e852c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:18:46 np0005629333 nova_compute[244014]: 2026-02-25 12:18:46.236 244018 INFO nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Creating config drive at /var/lib/nova/instances/7a6ab503-d433-40a7-9395-3d5660e852c9/disk.config#033[00m
Feb 25 07:18:46 np0005629333 nova_compute[244014]: 2026-02-25 12:18:46.242 244018 DEBUG oslo_concurrency.processutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7a6ab503-d433-40a7-9395-3d5660e852c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp0snxfpqu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:18:46 np0005629333 nova_compute[244014]: 2026-02-25 12:18:46.369 244018 DEBUG oslo_concurrency.processutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7a6ab503-d433-40a7-9395-3d5660e852c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp0snxfpqu" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
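
[annotation] The mkisofs run above packs the staged metadata tree into the config-drive ISO. The same invocation as a stand-alone call (arguments copied from the log; the -publisher string is a single argument that processutils prints unquoted):

    import subprocess

    subprocess.check_call([
        "/usr/bin/mkisofs",
        "-o", "/var/lib/nova/instances/7a6ab503-d433-40a7-9395-3d5660e852c9/disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
        "-quiet", "-J", "-r", "-V", "config-2",
        "/tmp/tmp0snxfpqu"])  # temp dir holding the metadata files, per the log
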
Feb 25 07:18:46 np0005629333 nova_compute[244014]: 2026-02-25 12:18:46.400 244018 DEBUG nova.storage.rbd_utils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 7a6ab503-d433-40a7-9395-3d5660e852c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:18:46 np0005629333 nova_compute[244014]: 2026-02-25 12:18:46.405 244018 DEBUG oslo_concurrency.processutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7a6ab503-d433-40a7-9395-3d5660e852c9/disk.config 7a6ab503-d433-40a7-9395-3d5660e852c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:18:46 np0005629333 nova_compute[244014]: 2026-02-25 12:18:46.473 244018 DEBUG nova.network.neutron [req-7e744290-1c2e-47ea-92d5-4ff475341fa8 req-7e4a1d17-be5a-4f93-bb4e-e8af32d8078d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Updated VIF entry in instance network info cache for port 179cbc6a-d79f-4f46-88e8-f362ea0e4a26. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:18:46 np0005629333 nova_compute[244014]: 2026-02-25 12:18:46.474 244018 DEBUG nova.network.neutron [req-7e744290-1c2e-47ea-92d5-4ff475341fa8 req-7e4a1d17-be5a-4f93-bb4e-e8af32d8078d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Updating instance_info_cache with network_info: [{"id": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "address": "fa:16:3e:d2:1d:06", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap179cbc6a-d7", "ovs_interfaceid": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:18:46 np0005629333 nova_compute[244014]: 2026-02-25 12:18:46.492 244018 DEBUG oslo_concurrency.lockutils [req-7e744290-1c2e-47ea-92d5-4ff475341fa8 req-7e4a1d17-be5a-4f93-bb4e-e8af32d8078d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-7a6ab503-d433-40a7-9395-3d5660e852c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:18:46 np0005629333 nova_compute[244014]: 2026-02-25 12:18:46.543 244018 DEBUG oslo_concurrency.processutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7a6ab503-d433-40a7-9395-3d5660e852c9/disk.config 7a6ab503-d433-40a7-9395-3d5660e852c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:18:46 np0005629333 nova_compute[244014]: 2026-02-25 12:18:46.543 244018 INFO nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Deleting local config drive /var/lib/nova/instances/7a6ab503-d433-40a7-9395-3d5660e852c9/disk.config because it was imported into RBD.#033[00m
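
[annotation] With the ISO built, nova pushes it into the vms pool and removes the local copy, as the two lines above record. Reproduced as plain calls (values from the log):

    import os
    import subprocess

    local = "/var/lib/nova/instances/7a6ab503-d433-40a7-9395-3d5660e852c9/disk.config"
    subprocess.check_call([
        "rbd", "import", "--pool", "vms", local,
        "7a6ab503-d433-40a7-9395-3d5660e852c9_disk.config",
        "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    os.unlink(local)  # matches "Deleting local config drive ... imported into RBD"
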
Feb 25 07:18:46 np0005629333 kernel: tap179cbc6a-d7: entered promiscuous mode
Feb 25 07:18:46 np0005629333 NetworkManager[49836]: <info>  [1772021926.6028] manager: (tap179cbc6a-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/45)
Feb 25 07:18:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:18:46Z|00059|binding|INFO|Claiming lport 179cbc6a-d79f-4f46-88e8-f362ea0e4a26 for this chassis.
Feb 25 07:18:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:18:46Z|00060|binding|INFO|179cbc6a-d79f-4f46-88e8-f362ea0e4a26: Claiming fa:16:3e:d2:1d:06 10.100.0.7
Feb 25 07:18:46 np0005629333 nova_compute[244014]: 2026-02-25 12:18:46.606 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:46 np0005629333 systemd-udevd[261067]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:18:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:46.614 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:1d:06 10.100.0.7'], port_security=['fa:16:3e:d2:1d:06 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7a6ab503-d433-40a7-9395-3d5660e852c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fd7733ad-d262-4781-bcfa-77cfa8b67164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556b4b98-e95d-460c-a904-adc77baf4b88, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=179cbc6a-d79f-4f46-88e8-f362ea0e4a26) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:18:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:46.617 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 179cbc6a-d79f-4f46-88e8-f362ea0e4a26 in datapath 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 bound to our chassis#033[00m
Feb 25 07:18:46 np0005629333 nova_compute[244014]: 2026-02-25 12:18:46.619 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:46.620 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6#033[00m
Feb 25 07:18:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:18:46Z|00061|binding|INFO|Setting lport 179cbc6a-d79f-4f46-88e8-f362ea0e4a26 ovn-installed in OVS
Feb 25 07:18:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:18:46Z|00062|binding|INFO|Setting lport 179cbc6a-d79f-4f46-88e8-f362ea0e4a26 up in Southbound
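
[annotation] Once ovn-controller claims the lport and marks it up in the Southbound DB, the binding can be confirmed from any node with SB access; a hedged check (the column names match the Port_Binding row dumped by the metadata agent above):

    import subprocess

    out = subprocess.check_output([
        "ovn-sbctl", "--columns=logical_port,chassis,up",
        "find", "Port_Binding",
        "logical_port=179cbc6a-d79f-4f46-88e8-f362ea0e4a26"])
    print(out.decode())  # expects a non-empty chassis and up=true once claimed
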
Feb 25 07:18:46 np0005629333 nova_compute[244014]: 2026-02-25 12:18:46.622 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:46 np0005629333 NetworkManager[49836]: <info>  [1772021926.6271] device (tap179cbc6a-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:18:46 np0005629333 NetworkManager[49836]: <info>  [1772021926.6303] device (tap179cbc6a-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:18:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:46.638 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d73d5263-9873-4b78-bf66-dbc24ab5c4f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:46 np0005629333 systemd-machined[210048]: New machine qemu-20-instance-00000012.
Feb 25 07:18:46 np0005629333 systemd[1]: Started Virtual Machine qemu-20-instance-00000012.
Feb 25 07:18:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:46.669 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cc58e7c3-f952-4f69-855d-00be62da81d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:46.674 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[163af9b0-fe04-4950-bfe5-0e6c04a40d63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:46.706 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0065c47f-fd6a-4af9-8539-8100059ad7f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:46.728 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[403838d2-5f19-4743-898f-d7fe2d8f862c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f4cbf9a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:f8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389396, 'reachable_time': 28421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261312, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:46.745 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[91fcb9e9-2565-493a-9446-49d8d8e31642]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389407, 'tstamp': 389407}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261314, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389409, 'tstamp': 389409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261314, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:46.748 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f4cbf9a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:18:46 np0005629333 nova_compute[244014]: 2026-02-25 12:18:46.750 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:46.754 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f4cbf9a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:18:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:46.754 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:18:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:46.755 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f4cbf9a-40, col_values=(('external_ids', {'iface-id': '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:18:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:46.756 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
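
[annotation] The provisioning block above leaves the namespace's tap1f4cbf9a-41 device holding both the subnet address 10.100.0.2/28 and the metadata address 169.254.169.254/32 (the two RTM_NEWADDR replies). That is checkable from the host (illustrative):

    import subprocess

    ns = "ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6"  # namespace named after the datapath
    # Both addresses from the RTM_NEWADDR events above should show on tap1f4cbf9a-41.
    subprocess.check_call(["ip", "netns", "exec", ns, "ip", "-4", "addr", "show"])
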
Feb 25 07:18:47 np0005629333 nova_compute[244014]: 2026-02-25 12:18:47.291 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021927.28961, 7a6ab503-d433-40a7-9395-3d5660e852c9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:18:47 np0005629333 nova_compute[244014]: 2026-02-25 12:18:47.292 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] VM Started (Lifecycle Event)#033[00m
Feb 25 07:18:47 np0005629333 nova_compute[244014]: 2026-02-25 12:18:47.345 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:18:47 np0005629333 nova_compute[244014]: 2026-02-25 12:18:47.349 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021927.290977, 7a6ab503-d433-40a7-9395-3d5660e852c9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:18:47 np0005629333 nova_compute[244014]: 2026-02-25 12:18:47.350 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:18:47 np0005629333 nova_compute[244014]: 2026-02-25 12:18:47.367 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:18:47 np0005629333 nova_compute[244014]: 2026-02-25 12:18:47.371 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:18:47 np0005629333 nova_compute[244014]: 2026-02-25 12:18:47.415 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
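
[annotation] A "Paused" lifecycle event in the middle of spawn is normal here: nova typically launches the libvirt guest paused while it waits for the network-vif-plugged event registered at 12:18:45.611, resuming it afterwards, so the power-state sync above correctly skips the still-spawning instance. Reading the integers in the Synchronizing line, assuming stock nova.compute.power_state constants:

    # Hedged reading of "current DB power_state: 0, VM power_state: 3" above.
    from nova.compute import power_state

    assert power_state.NOSTATE == 0  # DB side: nothing recorded yet
    assert power_state.PAUSED == 3   # hypervisor side: freshly created, paused guest
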
Feb 25 07:18:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:18:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3224183954' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:18:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:18:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3224183954' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 07:18:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 191 op/s
Feb 25 07:18:47 np0005629333 podman[261357]: 2026-02-25 12:18:47.749715392 +0000 UTC m=+0.082335512 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 07:18:47 np0005629333 podman[261358]: 2026-02-25 12:18:47.814669685 +0000 UTC m=+0.147493961 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 07:18:48 np0005629333 nova_compute[244014]: 2026-02-25 12:18:48.412 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:18:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:18:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 133 op/s
Feb 25 07:18:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Feb 25 07:18:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Feb 25 07:18:49 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Feb 25 07:18:50 np0005629333 nova_compute[244014]: 2026-02-25 12:18:50.414 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Acquiring lock "606b209d-b916-4ac7-87c7-887c274f747f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:18:50 np0005629333 nova_compute[244014]: 2026-02-25 12:18:50.415 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:18:50 np0005629333 nova_compute[244014]: 2026-02-25 12:18:50.447 244018 DEBUG nova.compute.manager [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:18:50 np0005629333 nova_compute[244014]: 2026-02-25 12:18:50.574 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:18:50 np0005629333 nova_compute[244014]: 2026-02-25 12:18:50.575 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:18:50 np0005629333 nova_compute[244014]: 2026-02-25 12:18:50.583 244018 DEBUG nova.virt.hardware [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:18:50 np0005629333 nova_compute[244014]: 2026-02-25 12:18:50.584 244018 INFO nova.compute.claims [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:18:50 np0005629333 nova_compute[244014]: 2026-02-25 12:18:50.625 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:18:50 np0005629333 nova_compute[244014]: 2026-02-25 12:18:50.741 244018 DEBUG oslo_concurrency.processutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:18:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:18:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1079271304' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.264 244018 DEBUG nova.compute.manager [req-79663541-7cf5-4bc6-b839-d06da54cdf5c req-51ae7b5e-9a13-4238-bc1d-20a7905a3b3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Received event network-vif-plugged-179cbc6a-d79f-4f46-88e8-f362ea0e4a26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.265 244018 DEBUG oslo_concurrency.lockutils [req-79663541-7cf5-4bc6-b839-d06da54cdf5c req-51ae7b5e-9a13-4238-bc1d-20a7905a3b3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.266 244018 DEBUG oslo_concurrency.lockutils [req-79663541-7cf5-4bc6-b839-d06da54cdf5c req-51ae7b5e-9a13-4238-bc1d-20a7905a3b3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.267 244018 DEBUG oslo_concurrency.lockutils [req-79663541-7cf5-4bc6-b839-d06da54cdf5c req-51ae7b5e-9a13-4238-bc1d-20a7905a3b3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.267 244018 DEBUG nova.compute.manager [req-79663541-7cf5-4bc6-b839-d06da54cdf5c req-51ae7b5e-9a13-4238-bc1d-20a7905a3b3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Processing event network-vif-plugged-179cbc6a-d79f-4f46-88e8-f362ea0e4a26 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.268 244018 DEBUG oslo_concurrency.processutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.269 244018 DEBUG nova.compute.manager [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.274 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021931.274385, 7a6ab503-d433-40a7-9395-3d5660e852c9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.275 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] VM Resumed (Lifecycle Event)
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.296 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.302 244018 DEBUG nova.compute.provider_tree [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.309 244018 INFO nova.virt.libvirt.driver [-] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Instance spawned successfully.
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.309 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.321 244018 DEBUG nova.scheduler.client.report [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.329 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.337 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.342 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.343 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.343 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.344 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.345 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.346 244018 DEBUG nova.virt.libvirt.driver [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.394 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.395 244018 DEBUG nova.compute.manager [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.398 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.467 244018 INFO nova.compute.manager [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Took 13.25 seconds to spawn the instance on the hypervisor.
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.468 244018 DEBUG nova.compute.manager [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.483 244018 DEBUG nova.compute.manager [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.484 244018 DEBUG nova.network.neutron [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.524 244018 INFO nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:18:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 246 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 118 op/s
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.590 244018 DEBUG nova.compute.manager [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.595 244018 INFO nova.compute.manager [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Took 14.48 seconds to build instance.
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.645 244018 DEBUG oslo_concurrency.lockutils [None req-75fafa78-fe2e-412a-a580-738bb3254e96 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.737 244018 DEBUG nova.compute.manager [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.739 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.740 244018 INFO nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Creating image(s)
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.766 244018 DEBUG nova.storage.rbd_utils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] rbd image 606b209d-b916-4ac7-87c7-887c274f747f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.794 244018 DEBUG nova.storage.rbd_utils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] rbd image 606b209d-b916-4ac7-87c7-887c274f747f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.826 244018 DEBUG nova.storage.rbd_utils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] rbd image 606b209d-b916-4ac7-87c7-887c274f747f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.832 244018 DEBUG oslo_concurrency.processutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.866 244018 DEBUG nova.policy [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6cf653b76cca4209ae56b9938dc0be3b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9bc1369afb1243ce828d930dea442277', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.872 244018 DEBUG nova.compute.manager [req-b1470198-30ca-489a-80e8-0ea5244f56ed req-0f26d934-178b-4051-a890-f638801ef4f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.872 244018 DEBUG oslo_concurrency.lockutils [req-b1470198-30ca-489a-80e8-0ea5244f56ed req-0f26d934-178b-4051-a890-f638801ef4f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.873 244018 DEBUG oslo_concurrency.lockutils [req-b1470198-30ca-489a-80e8-0ea5244f56ed req-0f26d934-178b-4051-a890-f638801ef4f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.873 244018 DEBUG oslo_concurrency.lockutils [req-b1470198-30ca-489a-80e8-0ea5244f56ed req-0f26d934-178b-4051-a890-f638801ef4f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.873 244018 DEBUG nova.compute.manager [req-b1470198-30ca-489a-80e8-0ea5244f56ed req-0f26d934-178b-4051-a890-f638801ef4f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Processing event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.875 244018 DEBUG nova.compute.manager [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 07:18:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.895 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021931.8804073, 52f927ad-a417-489f-9f92-87bc3433649d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.895 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] VM Resumed (Lifecycle Event)
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.899 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 07:18:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.904 244018 INFO nova.virt.libvirt.driver [-] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance spawned successfully.
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.905 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 07:18:51 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.918 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.927 244018 DEBUG oslo_concurrency.processutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.927 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.928 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.929 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.963 244018 DEBUG nova.storage.rbd_utils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] rbd image 606b209d-b916-4ac7-87c7-887c274f747f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.969 244018 DEBUG oslo_concurrency.processutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 606b209d-b916-4ac7-87c7-887c274f747f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:18:51 np0005629333 nova_compute[244014]: 2026-02-25 12:18:51.990 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:18:52 np0005629333 nova_compute[244014]: 2026-02-25 12:18:52.002 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:18:52 np0005629333 nova_compute[244014]: 2026-02-25 12:18:52.002 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:18:52 np0005629333 nova_compute[244014]: 2026-02-25 12:18:52.004 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:18:52 np0005629333 nova_compute[244014]: 2026-02-25 12:18:52.004 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:18:52 np0005629333 nova_compute[244014]: 2026-02-25 12:18:52.006 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:18:52 np0005629333 nova_compute[244014]: 2026-02-25 12:18:52.007 244018 DEBUG nova.virt.libvirt.driver [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:18:52 np0005629333 nova_compute[244014]: 2026-02-25 12:18:52.039 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:18:52 np0005629333 nova_compute[244014]: 2026-02-25 12:18:52.160 244018 INFO nova.compute.manager [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Took 16.17 seconds to spawn the instance on the hypervisor.
Feb 25 07:18:52 np0005629333 nova_compute[244014]: 2026-02-25 12:18:52.160 244018 DEBUG nova.compute.manager [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:18:52 np0005629333 nova_compute[244014]: 2026-02-25 12:18:52.257 244018 INFO nova.compute.manager [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Took 17.32 seconds to build instance.
Feb 25 07:18:52 np0005629333 nova_compute[244014]: 2026-02-25 12:18:52.262 244018 DEBUG oslo_concurrency.processutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 606b209d-b916-4ac7-87c7-887c274f747f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:18:52 np0005629333 nova_compute[244014]: 2026-02-25 12:18:52.303 244018 DEBUG oslo_concurrency.lockutils [None req-cbcfb3bc-6be0-4bc4-b223-3c35295ce201 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.447s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:18:52 np0005629333 nova_compute[244014]: 2026-02-25 12:18:52.353 244018 DEBUG nova.storage.rbd_utils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] resizing rbd image 606b209d-b916-4ac7-87c7-887c274f747f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 07:18:52 np0005629333 nova_compute[244014]: 2026-02-25 12:18:52.473 244018 DEBUG nova.objects.instance [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lazy-loading 'migration_context' on Instance uuid 606b209d-b916-4ac7-87c7-887c274f747f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:18:52 np0005629333 nova_compute[244014]: 2026-02-25 12:18:52.523 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:18:52 np0005629333 nova_compute[244014]: 2026-02-25 12:18:52.524 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Ensure instance console log exists: /var/lib/nova/instances/606b209d-b916-4ac7-87c7-887c274f747f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:18:52 np0005629333 nova_compute[244014]: 2026-02-25 12:18:52.525 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:18:52 np0005629333 nova_compute[244014]: 2026-02-25 12:18:52.525 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:18:52 np0005629333 nova_compute[244014]: 2026-02-25 12:18:52.526 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:18:52 np0005629333 nova_compute[244014]: 2026-02-25 12:18:52.794 244018 DEBUG nova.network.neutron [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Successfully created port: 259aef1d-40c8-413d-93ff-e8e3a0eede8a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:18:53 np0005629333 nova_compute[244014]: 2026-02-25 12:18:53.415 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:18:53 np0005629333 nova_compute[244014]: 2026-02-25 12:18:53.459 244018 DEBUG nova.compute.manager [req-981c8ac5-240f-4f76-adeb-3aee4b640eb2 req-533956dd-baf4-4ca7-b5f1-614f67bec3e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Received event network-vif-plugged-179cbc6a-d79f-4f46-88e8-f362ea0e4a26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:18:53 np0005629333 nova_compute[244014]: 2026-02-25 12:18:53.459 244018 DEBUG oslo_concurrency.lockutils [req-981c8ac5-240f-4f76-adeb-3aee4b640eb2 req-533956dd-baf4-4ca7-b5f1-614f67bec3e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:18:53 np0005629333 nova_compute[244014]: 2026-02-25 12:18:53.460 244018 DEBUG oslo_concurrency.lockutils [req-981c8ac5-240f-4f76-adeb-3aee4b640eb2 req-533956dd-baf4-4ca7-b5f1-614f67bec3e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:18:53 np0005629333 nova_compute[244014]: 2026-02-25 12:18:53.461 244018 DEBUG oslo_concurrency.lockutils [req-981c8ac5-240f-4f76-adeb-3aee4b640eb2 req-533956dd-baf4-4ca7-b5f1-614f67bec3e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:18:53 np0005629333 nova_compute[244014]: 2026-02-25 12:18:53.462 244018 DEBUG nova.compute.manager [req-981c8ac5-240f-4f76-adeb-3aee4b640eb2 req-533956dd-baf4-4ca7-b5f1-614f67bec3e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] No waiting events found dispatching network-vif-plugged-179cbc6a-d79f-4f46-88e8-f362ea0e4a26 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:18:53 np0005629333 nova_compute[244014]: 2026-02-25 12:18:53.462 244018 WARNING nova.compute.manager [req-981c8ac5-240f-4f76-adeb-3aee4b640eb2 req-533956dd-baf4-4ca7-b5f1-614f67bec3e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Received unexpected event network-vif-plugged-179cbc6a-d79f-4f46-88e8-f362ea0e4a26 for instance with vm_state active and task_state None.
Feb 25 07:18:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 703 KiB/s rd, 873 KiB/s wr, 82 op/s
Feb 25 07:18:54 np0005629333 nova_compute[244014]: 2026-02-25 12:18:54.131 244018 DEBUG nova.network.neutron [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Successfully updated port: 259aef1d-40c8-413d-93ff-e8e3a0eede8a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:18:54 np0005629333 nova_compute[244014]: 2026-02-25 12:18:54.176 244018 DEBUG nova.compute.manager [req-2099468d-8267-4523-83f9-3269ecabef01 req-720ebff7-330c-4b12-a60e-e34dc8491a63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:18:54 np0005629333 nova_compute[244014]: 2026-02-25 12:18:54.176 244018 DEBUG oslo_concurrency.lockutils [req-2099468d-8267-4523-83f9-3269ecabef01 req-720ebff7-330c-4b12-a60e-e34dc8491a63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:18:54 np0005629333 nova_compute[244014]: 2026-02-25 12:18:54.177 244018 DEBUG oslo_concurrency.lockutils [req-2099468d-8267-4523-83f9-3269ecabef01 req-720ebff7-330c-4b12-a60e-e34dc8491a63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:18:54 np0005629333 nova_compute[244014]: 2026-02-25 12:18:54.178 244018 DEBUG oslo_concurrency.lockutils [req-2099468d-8267-4523-83f9-3269ecabef01 req-720ebff7-330c-4b12-a60e-e34dc8491a63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:18:54 np0005629333 nova_compute[244014]: 2026-02-25 12:18:54.178 244018 DEBUG nova.compute.manager [req-2099468d-8267-4523-83f9-3269ecabef01 req-720ebff7-330c-4b12-a60e-e34dc8491a63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] No waiting events found dispatching network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:18:54 np0005629333 nova_compute[244014]: 2026-02-25 12:18:54.179 244018 WARNING nova.compute.manager [req-2099468d-8267-4523-83f9-3269ecabef01 req-720ebff7-330c-4b12-a60e-e34dc8491a63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received unexpected event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 for instance with vm_state active and task_state None.
Feb 25 07:18:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:18:54 np0005629333 nova_compute[244014]: 2026-02-25 12:18:54.181 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Acquiring lock "refresh_cache-606b209d-b916-4ac7-87c7-887c274f747f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:18:54 np0005629333 nova_compute[244014]: 2026-02-25 12:18:54.181 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Acquired lock "refresh_cache-606b209d-b916-4ac7-87c7-887c274f747f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:18:54 np0005629333 nova_compute[244014]: 2026-02-25 12:18:54.181 244018 DEBUG nova.network.neutron [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:18:54 np0005629333 nova_compute[244014]: 2026-02-25 12:18:54.630 244018 DEBUG nova.network.neutron [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:18:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:55.003 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:18:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:55.003 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:18:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:55.004 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:18:55 np0005629333 nova_compute[244014]: 2026-02-25 12:18:55.134 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021920.1328886, d974e887-fd2f-479e-951e-fad497dd7b0f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:18:55 np0005629333 nova_compute[244014]: 2026-02-25 12:18:55.135 244018 INFO nova.compute.manager [-] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] VM Stopped (Lifecycle Event)
Feb 25 07:18:55 np0005629333 nova_compute[244014]: 2026-02-25 12:18:55.309 244018 DEBUG nova.compute.manager [None req-b06d835a-134f-43ba-b901-482d785ebecc - - - - - -] [instance: d974e887-fd2f-479e-951e-fad497dd7b0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:18:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 691 KiB/s rd, 854 KiB/s wr, 66 op/s
Feb 25 07:18:55 np0005629333 nova_compute[244014]: 2026-02-25 12:18:55.627 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:56 np0005629333 nova_compute[244014]: 2026-02-25 12:18:56.201 244018 DEBUG nova.compute.manager [req-5b5cf4a2-0e92-4ce8-8870-7638bd1c12c7 req-76a45f54-0d05-4e0a-904e-0e46a1c52e8d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Received event network-changed-259aef1d-40c8-413d-93ff-e8e3a0eede8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:18:56 np0005629333 nova_compute[244014]: 2026-02-25 12:18:56.202 244018 DEBUG nova.compute.manager [req-5b5cf4a2-0e92-4ce8-8870-7638bd1c12c7 req-76a45f54-0d05-4e0a-904e-0e46a1c52e8d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Refreshing instance network info cache due to event network-changed-259aef1d-40c8-413d-93ff-e8e3a0eede8a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:18:56 np0005629333 nova_compute[244014]: 2026-02-25 12:18:56.203 244018 DEBUG oslo_concurrency.lockutils [req-5b5cf4a2-0e92-4ce8-8870-7638bd1c12c7 req-76a45f54-0d05-4e0a-904e-0e46a1c52e8d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-606b209d-b916-4ac7-87c7-887c274f747f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:18:56 np0005629333 nova_compute[244014]: 2026-02-25 12:18:56.530 244018 DEBUG nova.network.neutron [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Updating instance_info_cache with network_info: [{"id": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "address": "fa:16:3e:4c:ea:d4", "network": {"id": "66127cb2-231b-48c1-8e16-29c2386af2ee", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-446934594-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc1369afb1243ce828d930dea442277", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap259aef1d-40", "ovs_interfaceid": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:18:56 np0005629333 nova_compute[244014]: 2026-02-25 12:18:56.553 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Releasing lock "refresh_cache-606b209d-b916-4ac7-87c7-887c274f747f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:18:56 np0005629333 nova_compute[244014]: 2026-02-25 12:18:56.554 244018 DEBUG nova.compute.manager [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Instance network_info: |[{"id": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "address": "fa:16:3e:4c:ea:d4", "network": {"id": "66127cb2-231b-48c1-8e16-29c2386af2ee", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-446934594-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc1369afb1243ce828d930dea442277", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap259aef1d-40", "ovs_interfaceid": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:18:56 np0005629333 nova_compute[244014]: 2026-02-25 12:18:56.555 244018 DEBUG oslo_concurrency.lockutils [req-5b5cf4a2-0e92-4ce8-8870-7638bd1c12c7 req-76a45f54-0d05-4e0a-904e-0e46a1c52e8d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-606b209d-b916-4ac7-87c7-887c274f747f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:18:56 np0005629333 nova_compute[244014]: 2026-02-25 12:18:56.555 244018 DEBUG nova.network.neutron [req-5b5cf4a2-0e92-4ce8-8870-7638bd1c12c7 req-76a45f54-0d05-4e0a-904e-0e46a1c52e8d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Refreshing network info cache for port 259aef1d-40c8-413d-93ff-e8e3a0eede8a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
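
The Acquiring/Acquired/Releasing lines above come from oslo.concurrency's named-lock helper: every refresh of one instance's network info cache serializes on a "refresh_cache-<uuid>" lock. A minimal sketch of that pattern, with the lock name copied from the log and the refresh function a hypothetical stand-in:

    from oslo_concurrency import lockutils

    def refresh_network_info_cache():
        """Hypothetical stand-in for the cache update done under the lock."""

    # Serializes cache refreshes for a single instance, matching the
    # "refresh_cache-<uuid>" lock names logged above.
    with lockutils.lock('refresh_cache-606b209d-b916-4ac7-87c7-887c274f747f'):
        refresh_network_info_cache()
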
Feb 25 07:18:56 np0005629333 nova_compute[244014]: 2026-02-25 12:18:56.560 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Start _get_guest_xml network_info=[{"id": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "address": "fa:16:3e:4c:ea:d4", "network": {"id": "66127cb2-231b-48c1-8e16-29c2386af2ee", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-446934594-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc1369afb1243ce828d930dea442277", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap259aef1d-40", "ovs_interfaceid": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:18:56 np0005629333 nova_compute[244014]: 2026-02-25 12:18:56.565 244018 WARNING nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:18:56 np0005629333 nova_compute[244014]: 2026-02-25 12:18:56.570 244018 DEBUG nova.virt.libvirt.host [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:18:56 np0005629333 nova_compute[244014]: 2026-02-25 12:18:56.571 244018 DEBUG nova.virt.libvirt.host [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:18:56 np0005629333 nova_compute[244014]: 2026-02-25 12:18:56.579 244018 DEBUG nova.virt.libvirt.host [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:18:56 np0005629333 nova_compute[244014]: 2026-02-25 12:18:56.580 244018 DEBUG nova.virt.libvirt.host [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:18:56 np0005629333 nova_compute[244014]: 2026-02-25 12:18:56.581 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:18:56 np0005629333 nova_compute[244014]: 2026-02-25 12:18:56.581 244018 DEBUG nova.virt.hardware [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:18:56 np0005629333 nova_compute[244014]: 2026-02-25 12:18:56.582 244018 DEBUG nova.virt.hardware [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:18:56 np0005629333 nova_compute[244014]: 2026-02-25 12:18:56.583 244018 DEBUG nova.virt.hardware [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:18:56 np0005629333 nova_compute[244014]: 2026-02-25 12:18:56.583 244018 DEBUG nova.virt.hardware [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:18:56 np0005629333 nova_compute[244014]: 2026-02-25 12:18:56.584 244018 DEBUG nova.virt.hardware [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:18:56 np0005629333 nova_compute[244014]: 2026-02-25 12:18:56.584 244018 DEBUG nova.virt.hardware [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:18:56 np0005629333 nova_compute[244014]: 2026-02-25 12:18:56.585 244018 DEBUG nova.virt.hardware [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:18:56 np0005629333 nova_compute[244014]: 2026-02-25 12:18:56.585 244018 DEBUG nova.virt.hardware [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:18:56 np0005629333 nova_compute[244014]: 2026-02-25 12:18:56.586 244018 DEBUG nova.virt.hardware [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:18:56 np0005629333 nova_compute[244014]: 2026-02-25 12:18:56.586 244018 DEBUG nova.virt.hardware [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:18:56 np0005629333 nova_compute[244014]: 2026-02-25 12:18:56.587 244018 DEBUG nova.virt.hardware [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
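
With no flavor or image constraints (limits and preferences all 0:0:0), the only topology whose sockets x cores x threads product equals one vCPU is 1:1:1, which is exactly what the lines above report. A simplified sketch of that enumeration, not nova.virt.hardware's exact code (which additionally honors preferences and sort order):

    import itertools

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield (sockets, cores, threads) triples whose product is vcpus,
        # within the per-dimension limits reported in the log.
        dims = (min(vcpus, max_sockets), min(vcpus, max_cores),
                min(vcpus, max_threads))
        for s, c, t in itertools.product(*(range(1, d + 1) for d in dims)):
            if s * c * t == vcpus:
                yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log
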
Feb 25 07:18:56 np0005629333 nova_compute[244014]: 2026-02-25 12:18:56.591 244018 DEBUG oslo_concurrency.processutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:18:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:18:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1887839894' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.236 244018 DEBUG oslo_concurrency.processutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.645s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
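
The paired "Running cmd" / "CMD ... returned" lines are oslo.concurrency's subprocess wrapper logging the command and its exit status. The same call can be reproduced directly; arguments are copied from the log:

    from oslo_concurrency import processutils

    # Runs the monitor dump the same way nova's rbd driver does;
    # execute() raises ProcessExecutionError on a nonzero exit code.
    out, err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
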
Feb 25 07:18:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:57.258 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:18:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:57.259 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.273 244018 DEBUG nova.storage.rbd_utils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] rbd image 606b209d-b916-4ac7-87c7-887c274f747f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.281 244018 DEBUG oslo_concurrency.processutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.305 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 293 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.7 MiB/s wr, 295 op/s
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.629 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.630 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.630 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.652 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
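
_heal_instance_info_cache is driven by oslo.service's periodic-task machinery, which is what the "Running periodic task" line reflects. A minimal sketch of how such a task is declared and dispatched; the body here is a placeholder, not nova's:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task
        def _heal_instance_info_cache(self, context):
            # Placeholder body; nova rebuilds the list of instances to
            # heal and skips any that are still building, as logged above.
            print('rebuilding the list of instances to heal')

    Manager(cfg.CONF).run_periodic_tasks(context=None)
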
Feb 25 07:18:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:18:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/329172932' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.857 244018 DEBUG oslo_concurrency.processutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.859 244018 DEBUG nova.virt.libvirt.vif [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:18:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1771878194',display_name='tempest-ImagesNegativeTestJSON-server-1771878194',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1771878194',id=19,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9bc1369afb1243ce828d930dea442277',ramdisk_id='',reservation_id='r-kxueboo3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-926895290',owner_user_name='tempest-ImagesNegativeTestJSON-926895290-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:18:51Z,user_data=None,user_id='6cf653b76cca4209ae56b9938dc0be3b',uuid=606b209d-b916-4ac7-87c7-887c274f747f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "address": "fa:16:3e:4c:ea:d4", "network": {"id": "66127cb2-231b-48c1-8e16-29c2386af2ee", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-446934594-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc1369afb1243ce828d930dea442277", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap259aef1d-40", "ovs_interfaceid": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.860 244018 DEBUG nova.network.os_vif_util [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Converting VIF {"id": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "address": "fa:16:3e:4c:ea:d4", "network": {"id": "66127cb2-231b-48c1-8e16-29c2386af2ee", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-446934594-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc1369afb1243ce828d930dea442277", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap259aef1d-40", "ovs_interfaceid": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.861 244018 DEBUG nova.network.os_vif_util [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=259aef1d-40c8-413d-93ff-e8e3a0eede8a,network=Network(66127cb2-231b-48c1-8e16-29c2386af2ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap259aef1d-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.863 244018 DEBUG nova.objects.instance [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lazy-loading 'pci_devices' on Instance uuid 606b209d-b916-4ac7-87c7-887c274f747f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.888 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:18:57 np0005629333 nova_compute[244014]:  <uuid>606b209d-b916-4ac7-87c7-887c274f747f</uuid>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:  <name>instance-00000013</name>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <nova:name>tempest-ImagesNegativeTestJSON-server-1771878194</nova:name>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:18:56</nova:creationTime>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:18:57 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:        <nova:user uuid="6cf653b76cca4209ae56b9938dc0be3b">tempest-ImagesNegativeTestJSON-926895290-project-member</nova:user>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:        <nova:project uuid="9bc1369afb1243ce828d930dea442277">tempest-ImagesNegativeTestJSON-926895290</nova:project>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:        <nova:port uuid="259aef1d-40c8-413d-93ff-e8e3a0eede8a">
Feb 25 07:18:57 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <entry name="serial">606b209d-b916-4ac7-87c7-887c274f747f</entry>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <entry name="uuid">606b209d-b916-4ac7-87c7-887c274f747f</entry>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/606b209d-b916-4ac7-87c7-887c274f747f_disk">
Feb 25 07:18:57 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:18:57 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/606b209d-b916-4ac7-87c7-887c274f747f_disk.config">
Feb 25 07:18:57 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:18:57 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:4c:ea:d4"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <target dev="tap259aef1d-40"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/606b209d-b916-4ac7-87c7-887c274f747f/console.log" append="off"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:18:57 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:18:57 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:18:57 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:18:57 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
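
Once _get_guest_xml returns, the driver hands this <domain> document to libvirt. A minimal sketch of the equivalent libvirt-python calls; nova wraps these in its own Host/Guest helpers rather than calling them directly, and 'domain.xml' is an assumed file holding the XML logged above:

    import libvirt

    # xml holds the <domain> document from the log; defineXML persists the
    # definition, create() boots the guest.
    xml = open('domain.xml').read()

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)
        dom.create()
    finally:
        conn.close()
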
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.897 244018 DEBUG nova.compute.manager [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Preparing to wait for external event network-vif-plugged-259aef1d-40c8-413d-93ff-e8e3a0eede8a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.897 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Acquiring lock "606b209d-b916-4ac7-87c7-887c274f747f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.898 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.898 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.899 244018 DEBUG nova.virt.libvirt.vif [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:18:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1771878194',display_name='tempest-ImagesNegativeTestJSON-server-1771878194',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1771878194',id=19,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9bc1369afb1243ce828d930dea442277',ramdisk_id='',reservation_id='r-kxueboo3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-926895290',owner_user_name='tempest-ImagesNegativeTestJSON-926895290-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:18:51Z,user_data=None,user_id='6cf653b76cca4209ae56b9938dc0be3b',uuid=606b209d-b916-4ac7-87c7-887c274f747f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "address": "fa:16:3e:4c:ea:d4", "network": {"id": "66127cb2-231b-48c1-8e16-29c2386af2ee", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-446934594-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc1369afb1243ce828d930dea442277", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap259aef1d-40", "ovs_interfaceid": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.900 244018 DEBUG nova.network.os_vif_util [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Converting VIF {"id": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "address": "fa:16:3e:4c:ea:d4", "network": {"id": "66127cb2-231b-48c1-8e16-29c2386af2ee", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-446934594-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc1369afb1243ce828d930dea442277", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap259aef1d-40", "ovs_interfaceid": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.901 244018 DEBUG nova.network.os_vif_util [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=259aef1d-40c8-413d-93ff-e8e3a0eede8a,network=Network(66127cb2-231b-48c1-8e16-29c2386af2ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap259aef1d-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.902 244018 DEBUG os_vif [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=259aef1d-40c8-413d-93ff-e8e3a0eede8a,network=Network(66127cb2-231b-48c1-8e16-29c2386af2ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap259aef1d-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.903 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.904 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.904 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.906 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-52f927ad-a417-489f-9f92-87bc3433649d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.906 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-52f927ad-a417-489f-9f92-87bc3433649d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.907 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.907 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.912 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.913 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap259aef1d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.914 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap259aef1d-40, col_values=(('external_ids', {'iface-id': '259aef1d-40c8-413d-93ff-e8e3a0eede8a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4c:ea:d4', 'vm-uuid': '606b209d-b916-4ac7-87c7-887c274f747f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
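
The AddBridgeCommand/AddPortCommand/DbSetCommand entries map one-to-one onto ovsdbapp's public Open_vSwitch API, which os-vif uses to plug the port. A sketch issuing the same transaction; the ovsdb-server socket path is an assumption for a typical host, everything else is taken from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # One transaction, three commands, mirroring the logged txn. The
    # iface-id in external_ids is what lets ovn-controller match this OVS
    # interface to the Neutron port and claim it (see the binding lines
    # further down).
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap259aef1d-40', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap259aef1d-40',
            ('external_ids', {
                'iface-id': '259aef1d-40c8-413d-93ff-e8e3a0eede8a',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:4c:ea:d4',
                'vm-uuid': '606b209d-b916-4ac7-87c7-887c274f747f'})))
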
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.916 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:57 np0005629333 NetworkManager[49836]: <info>  [1772021937.9174] manager: (tap259aef1d-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.919 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.921 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.922 244018 INFO os_vif [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=259aef1d-40c8-413d-93ff-e8e3a0eede8a,network=Network(66127cb2-231b-48c1-8e16-29c2386af2ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap259aef1d-40')#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.991 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.992 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.993 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] No VIF found with MAC fa:16:3e:4c:ea:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:18:57 np0005629333 nova_compute[244014]: 2026-02-25 12:18:57.994 244018 INFO nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Using config drive#033[00m
Feb 25 07:18:58 np0005629333 nova_compute[244014]: 2026-02-25 12:18:58.025 244018 DEBUG nova.storage.rbd_utils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] rbd image 606b209d-b916-4ac7-87c7-887c274f747f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:18:58 np0005629333 nova_compute[244014]: 2026-02-25 12:18:58.408 244018 DEBUG nova.network.neutron [req-5b5cf4a2-0e92-4ce8-8870-7638bd1c12c7 req-76a45f54-0d05-4e0a-904e-0e46a1c52e8d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Updated VIF entry in instance network info cache for port 259aef1d-40c8-413d-93ff-e8e3a0eede8a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:18:58 np0005629333 nova_compute[244014]: 2026-02-25 12:18:58.410 244018 DEBUG nova.network.neutron [req-5b5cf4a2-0e92-4ce8-8870-7638bd1c12c7 req-76a45f54-0d05-4e0a-904e-0e46a1c52e8d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Updating instance_info_cache with network_info: [{"id": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "address": "fa:16:3e:4c:ea:d4", "network": {"id": "66127cb2-231b-48c1-8e16-29c2386af2ee", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-446934594-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc1369afb1243ce828d930dea442277", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap259aef1d-40", "ovs_interfaceid": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:18:58 np0005629333 nova_compute[244014]: 2026-02-25 12:18:58.416 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:58 np0005629333 nova_compute[244014]: 2026-02-25 12:18:58.516 244018 DEBUG oslo_concurrency.lockutils [req-5b5cf4a2-0e92-4ce8-8870-7638bd1c12c7 req-76a45f54-0d05-4e0a-904e-0e46a1c52e8d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-606b209d-b916-4ac7-87c7-887c274f747f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:18:58 np0005629333 nova_compute[244014]: 2026-02-25 12:18:58.713 244018 INFO nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Creating config drive at /var/lib/nova/instances/606b209d-b916-4ac7-87c7-887c274f747f/disk.config#033[00m
Feb 25 07:18:58 np0005629333 nova_compute[244014]: 2026-02-25 12:18:58.721 244018 DEBUG oslo_concurrency.processutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/606b209d-b916-4ac7-87c7-887c274f747f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_lih5yk0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:18:58 np0005629333 nova_compute[244014]: 2026-02-25 12:18:58.854 244018 DEBUG oslo_concurrency.processutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/606b209d-b916-4ac7-87c7-887c274f747f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_lih5yk0" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
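
The config-drive ISO is built with a plain mkisofs subprocess. Reassembled as a processutils call: journald prints the -publisher string unquoted, but it is a single argument; the temp directory name is the one from the log:

    from oslo_concurrency import processutils

    iso = ('/var/lib/nova/instances/'
           '606b209d-b916-4ac7-87c7-887c274f747f/disk.config')
    # Builds a Joliet/Rock Ridge ISO labeled "config-2", the volume label
    # cloud-init looks for when locating a config drive.
    processutils.execute(
        '/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
        '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmp_lih5yk0')
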
Feb 25 07:18:58 np0005629333 nova_compute[244014]: 2026-02-25 12:18:58.901 244018 DEBUG nova.storage.rbd_utils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] rbd image 606b209d-b916-4ac7-87c7-887c274f747f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:18:58 np0005629333 nova_compute[244014]: 2026-02-25 12:18:58.919 244018 DEBUG oslo_concurrency.processutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/606b209d-b916-4ac7-87c7-887c274f747f/disk.config 606b209d-b916-4ac7-87c7-887c274f747f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:18:58 np0005629333 nova_compute[244014]: 2026-02-25 12:18:58.967 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:18:58 np0005629333 nova_compute[244014]: 2026-02-25 12:18:58.969 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:18:58 np0005629333 nova_compute[244014]: 2026-02-25 12:18:58.995 244018 DEBUG nova.compute.manager [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.063 244018 DEBUG oslo_concurrency.processutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/606b209d-b916-4ac7-87c7-887c274f747f/disk.config 606b209d-b916-4ac7-87c7-887c274f747f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.064 244018 INFO nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Deleting local config drive /var/lib/nova/instances/606b209d-b916-4ac7-87c7-887c274f747f/disk.config because it was imported into RBD.#033[00m
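[Note: the records above trace the config-drive flow for instance 606b209d: nova builds an ISO9660 config drive with mkisofs, finds no existing RBD object, imports the ISO into the "vms" pool as <uuid>_disk.config, then removes the local copy. A minimal sketch of the same sequence using only the command lines logged above; this is illustrative, not nova's actual imagebackend/rbd_utils code (nova uses librbd internally), and the /tmp source directory is the transient path from the log:]

    import os
    import subprocess

    instance = "606b209d-b916-4ac7-87c7-887c274f747f"  # instance UUID from the log
    iso = f"/var/lib/nova/instances/{instance}/disk.config"

    # 1. Build the config drive; flags and the publisher string (one argument,
    #    spaces included) are taken verbatim from the logged command.
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-publisher",
         "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
         "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmp_lih5yk0"],
        check=True)

    # 2. Import the ISO into the Ceph 'vms' pool under <uuid>_disk.config.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{instance}_disk.config",
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)

    # 3. Once imported into RBD, the local ISO is deleted (the INFO line above).
    os.remove(iso)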
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.088 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.089 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.100 244018 DEBUG nova.virt.hardware [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.101 244018 INFO nova.compute.claims [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:18:59 np0005629333 kernel: tap259aef1d-40: entered promiscuous mode
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.113 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:18:59Z|00063|binding|INFO|Claiming lport 259aef1d-40c8-413d-93ff-e8e3a0eede8a for this chassis.
Feb 25 07:18:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:18:59Z|00064|binding|INFO|259aef1d-40c8-413d-93ff-e8e3a0eede8a: Claiming fa:16:3e:4c:ea:d4 10.100.0.12
Feb 25 07:18:59 np0005629333 NetworkManager[49836]: <info>  [1772021939.1143] manager: (tap259aef1d-40): new Tun device (/org/freedesktop/NetworkManager/Devices/47)
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.115 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.131 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:ea:d4 10.100.0.12'], port_security=['fa:16:3e:4c:ea:d4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '606b209d-b916-4ac7-87c7-887c274f747f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-66127cb2-231b-48c1-8e16-29c2386af2ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9bc1369afb1243ce828d930dea442277', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f267df70-0e5b-45b2-8b12-ebc2ca8b02c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=252bdedd-c508-4af3-ad86-a8b65ae67147, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=259aef1d-40c8-413d-93ff-e8e3a0eede8a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.132 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 259aef1d-40c8-413d-93ff-e8e3a0eede8a in datapath 66127cb2-231b-48c1-8e16-29c2386af2ee bound to our chassis#033[00m
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.133 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 66127cb2-231b-48c1-8e16-29c2386af2ee#033[00m
Feb 25 07:18:59 np0005629333 systemd-udevd[261720]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.149 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3b7bbff4-84f8-4303-bde8-5834b4c81a35]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.150 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap66127cb2-21 in ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.151 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.152 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap66127cb2-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:18:59 np0005629333 NetworkManager[49836]: <info>  [1772021939.1564] device (tap259aef1d-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.152 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1205603e-1e85-4462-aa7b-70bf83622a83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:59 np0005629333 NetworkManager[49836]: <info>  [1772021939.1572] device (tap259aef1d-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.155 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1e32186e-1ad0-4515-ac74-af04efbba25e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:18:59Z|00065|binding|INFO|Setting lport 259aef1d-40c8-413d-93ff-e8e3a0eede8a ovn-installed in OVS
Feb 25 07:18:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:18:59Z|00066|binding|INFO|Setting lport 259aef1d-40c8-413d-93ff-e8e3a0eede8a up in Southbound
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.160 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:59 np0005629333 systemd-machined[210048]: New machine qemu-21-instance-00000013.
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.169 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[37cd81ab-9b98-4905-8e4a-0d38af714817]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:59 np0005629333 systemd[1]: Started Virtual Machine qemu-21-instance-00000013.
Feb 25 07:18:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:18:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.182 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[180faba3-fb2e-4624-b764-64b3a2769874]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Feb 25 07:18:59 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.210 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9e48f93e-c0db-42fa-b2cd-655b0be0b7e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.215 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5a733557-ab14-4782-8dae-becac0bfd449]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:59 np0005629333 systemd-udevd[261726]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:18:59 np0005629333 NetworkManager[49836]: <info>  [1772021939.2169] manager: (tap66127cb2-20): new Veth device (/org/freedesktop/NetworkManager/Devices/48)
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.250 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f87441a3-b9fd-4860-8a6e-153068a2fb09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.255 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e2167697-b1cc-4e66-9fb8-05e0fa87976f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:59 np0005629333 NetworkManager[49836]: <info>  [1772021939.2713] device (tap66127cb2-20): carrier: link connected
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.275 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[6ef823b3-950a-463a-ad64-c32797599edc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.287 244018 DEBUG oslo_concurrency.processutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.290 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d0f416-1e60-4894-9d33-5e1aeb369eec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap66127cb2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:2a:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 390867, 'reachable_time': 35171, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261756, 'error': None, 'target': 'ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.308 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0b80ede2-6a9e-44b3-af92-0018fb5fc1fa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe93:2a5b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 390867, 'tstamp': 390867}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261758, 'error': None, 'target': 'ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.326 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[00298518-a645-4d06-a19f-938a00398c55]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap66127cb2-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:2a:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 390867, 'reachable_time': 35171, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261759, 'error': None, 'target': 'ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.353 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[21cdba1a-15b6-4d34-aa2c-8310ea5a6464]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.407 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8c093b1d-f73f-4aab-8a39-a6bb6b4e42fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.408 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66127cb2-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.409 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.411 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap66127cb2-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:18:59 np0005629333 NetworkManager[49836]: <info>  [1772021939.4145] manager: (tap66127cb2-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Feb 25 07:18:59 np0005629333 kernel: tap66127cb2-20: entered promiscuous mode
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.414 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.418 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap66127cb2-20, col_values=(('external_ids', {'iface-id': 'ca0c018e-5c35-4116-8dd5-552f68754c1a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.419 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:18:59Z|00067|binding|INFO|Releasing lport ca0c018e-5c35-4116-8dd5-552f68754c1a from this chassis (sb_readonly=0)
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.421 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/66127cb2-231b-48c1-8e16-29c2386af2ee.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/66127cb2-231b-48c1-8e16-29c2386af2ee.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.422 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[23162b11-ac62-4120-8d33-aaf03af8d133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.423 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-66127cb2-231b-48c1-8e16-29c2386af2ee
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/66127cb2-231b-48c1-8e16-29c2386af2ee.pid.haproxy
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 66127cb2-231b-48c1-8e16-29c2386af2ee
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
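[Note: the haproxy configuration dumped above binds 169.254.169.254:80 inside the ovnmeta-66127cb2 namespace and forwards every request to the UNIX-socket backend at /var/lib/neutron/metadata_proxy, stamping it with X-OVN-Network-ID so the metadata agent can resolve which network (and hence which instance) is asking. From a guest on this network the standard OpenStack metadata endpoint is therefore reachable with a plain HTTP request; a minimal sketch, to be run inside the instance (the URL is the standard endpoint, not something specific to this log):]

    import json
    import urllib.request

    # 169.254.169.254 is answered by the haproxy proxy configured above.
    with urllib.request.urlopen(
            "http://169.254.169.254/openstack/latest/meta_data.json",
            timeout=10) as resp:
        meta = json.load(resp)

    print(meta["uuid"])  # the requesting instance's own UUID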
Feb 25 07:18:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:18:59.424 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee', 'env', 'PROCESS_TAG=haproxy-66127cb2-231b-48c1-8e16-29c2386af2ee', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/66127cb2-231b-48c1-8e16-29c2386af2ee.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.425 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:18:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 293 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.7 MiB/s wr, 270 op/s
Feb 25 07:18:59 np0005629333 podman[261828]: 2026-02-25 12:18:59.752739506 +0000 UTC m=+0.042685259 container create 56154355d1978a6d4e25487d674d80400aa37aa4ed85e9782ce917f758c3861c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 07:18:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:18:59 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2744705052' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:18:59 np0005629333 systemd[1]: Started libpod-conmon-56154355d1978a6d4e25487d674d80400aa37aa4ed85e9782ce917f758c3861c.scope.
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.816 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021939.8165283, 606b209d-b916-4ac7-87c7-887c274f747f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.817 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] VM Started (Lifecycle Event)#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.820 244018 DEBUG oslo_concurrency.processutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.824 244018 DEBUG nova.compute.provider_tree [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:18:59 np0005629333 podman[261828]: 2026-02-25 12:18:59.729947496 +0000 UTC m=+0.019893269 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:18:59 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:18:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95d034a831b5d2ac46587817457b2ea4e03f7429753844bd67b29486a82f6de6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.841 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.843 244018 DEBUG nova.scheduler.client.report [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
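[Note: the inventory payload above is what the resource tracker reports to placement; the schedulable capacity of each resource class is (total - reserved) * allocation_ratio. A quick check against the logged numbers:]

    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)  # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2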
Feb 25 07:18:59 np0005629333 podman[261828]: 2026-02-25 12:18:59.848430622 +0000 UTC m=+0.138376395 container init 56154355d1978a6d4e25487d674d80400aa37aa4ed85e9782ce917f758c3861c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_managed=true)
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.848 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021939.8166451, 606b209d-b916-4ac7-87c7-887c274f747f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.849 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:18:59 np0005629333 podman[261828]: 2026-02-25 12:18:59.862877677 +0000 UTC m=+0.152823420 container start 56154355d1978a6d4e25487d674d80400aa37aa4ed85e9782ce917f758c3861c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.870 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.872 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.873 244018 DEBUG nova.compute.manager [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.876 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.888 244018 DEBUG nova.compute.manager [req-375feb3b-4344-4773-9783-4715456c4b8d req-4df39ef9-3c76-48bb-869c-9d08906e8244 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Received event network-vif-plugged-259aef1d-40c8-413d-93ff-e8e3a0eede8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.889 244018 DEBUG oslo_concurrency.lockutils [req-375feb3b-4344-4773-9783-4715456c4b8d req-4df39ef9-3c76-48bb-869c-9d08906e8244 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "606b209d-b916-4ac7-87c7-887c274f747f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.889 244018 DEBUG oslo_concurrency.lockutils [req-375feb3b-4344-4773-9783-4715456c4b8d req-4df39ef9-3c76-48bb-869c-9d08906e8244 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.889 244018 DEBUG oslo_concurrency.lockutils [req-375feb3b-4344-4773-9783-4715456c4b8d req-4df39ef9-3c76-48bb-869c-9d08906e8244 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.890 244018 DEBUG nova.compute.manager [req-375feb3b-4344-4773-9783-4715456c4b8d req-4df39ef9-3c76-48bb-869c-9d08906e8244 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Processing event network-vif-plugged-259aef1d-40c8-413d-93ff-e8e3a0eede8a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.890 244018 DEBUG nova.compute.manager [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:18:59 np0005629333 neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee[261867]: [NOTICE]   (261871) : New worker (261873) forked
Feb 25 07:18:59 np0005629333 neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee[261867]: [NOTICE]   (261871) : Loading success.
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.897 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.899 244018 INFO nova.virt.libvirt.driver [-] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Instance spawned successfully.#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.900 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.904 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.904 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021939.8935006, 606b209d-b916-4ac7-87c7-887c274f747f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.904 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.930 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.931 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.931 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.932 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.932 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.934 244018 DEBUG nova.virt.libvirt.driver [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.938 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.941 244018 DEBUG nova.compute.manager [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.941 244018 DEBUG nova.network.neutron [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.946 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.964 244018 INFO nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.967 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:18:59 np0005629333 nova_compute[244014]: 2026-02-25 12:18:59.996 244018 DEBUG nova.compute.manager [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:19:00 np0005629333 nova_compute[244014]: 2026-02-25 12:19:00.006 244018 INFO nova.compute.manager [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Took 8.27 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:19:00 np0005629333 nova_compute[244014]: 2026-02-25 12:19:00.007 244018 DEBUG nova.compute.manager [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:19:00 np0005629333 nova_compute[244014]: 2026-02-25 12:19:00.078 244018 INFO nova.compute.manager [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Took 9.53 seconds to build instance.#033[00m
Feb 25 07:19:00 np0005629333 nova_compute[244014]: 2026-02-25 12:19:00.108 244018 DEBUG oslo_concurrency.lockutils [None req-945c4665-c831-439d-9f00-7a15ae4b0737 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:00 np0005629333 nova_compute[244014]: 2026-02-25 12:19:00.116 244018 DEBUG nova.compute.manager [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:19:00 np0005629333 nova_compute[244014]: 2026-02-25 12:19:00.118 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:19:00 np0005629333 nova_compute[244014]: 2026-02-25 12:19:00.118 244018 INFO nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Creating image(s)#033[00m
Feb 25 07:19:00 np0005629333 nova_compute[244014]: 2026-02-25 12:19:00.141 244018 DEBUG nova.storage.rbd_utils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image d9b67bce-8a7c-4f49-9cab-3e20377ca630_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:19:00 np0005629333 nova_compute[244014]: 2026-02-25 12:19:00.171 244018 DEBUG nova.storage.rbd_utils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image d9b67bce-8a7c-4f49-9cab-3e20377ca630_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:19:00 np0005629333 nova_compute[244014]: 2026-02-25 12:19:00.217 244018 DEBUG nova.storage.rbd_utils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image d9b67bce-8a7c-4f49-9cab-3e20377ca630_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:19:00 np0005629333 nova_compute[244014]: 2026-02-25 12:19:00.221 244018 DEBUG oslo_concurrency.processutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:00 np0005629333 nova_compute[244014]: 2026-02-25 12:19:00.293 244018 DEBUG oslo_concurrency.processutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
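[Note: the qemu-img invocation above is wrapped in oslo_concurrency.prlimit, which caps the child's address space at 1073741824 bytes (--as, 1 GiB) and CPU time at 30 s (--cpu) so a malformed or hostile image cannot wedge the compute service. The same guard can be sketched with the standard-library resource module; this is a minimal illustration, not oslo's implementation, and run_capped is a hypothetical helper name:]

    import resource
    import subprocess

    def run_capped(cmd, as_bytes=1 << 30, cpu_seconds=30):
        """Run cmd with RLIMIT_AS / RLIMIT_CPU caps applied in the child."""
        def limit():
            resource.setrlimit(resource.RLIMIT_AS, (as_bytes, as_bytes))
            resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        return subprocess.run(cmd, preexec_fn=limit,
                              capture_output=True, check=True)

    out = run_capped(
        ["qemu-img", "info", "--force-share", "--output=json",
         "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"])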
Feb 25 07:19:00 np0005629333 nova_compute[244014]: 2026-02-25 12:19:00.295 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:00 np0005629333 nova_compute[244014]: 2026-02-25 12:19:00.295 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:00 np0005629333 nova_compute[244014]: 2026-02-25 12:19:00.296 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:00 np0005629333 nova_compute[244014]: 2026-02-25 12:19:00.319 244018 DEBUG nova.storage.rbd_utils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image d9b67bce-8a7c-4f49-9cab-3e20377ca630_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:19:00 np0005629333 nova_compute[244014]: 2026-02-25 12:19:00.323 244018 DEBUG oslo_concurrency.processutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d9b67bce-8a7c-4f49-9cab-3e20377ca630_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:00 np0005629333 nova_compute[244014]: 2026-02-25 12:19:00.399 244018 DEBUG nova.policy [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6395ac4bfa5d4910aed9116395bbbdeb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:19:00 np0005629333 nova_compute[244014]: 2026-02-25 12:19:00.595 244018 DEBUG oslo_concurrency.processutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d9b67bce-8a7c-4f49-9cab-3e20377ca630_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:19:00 np0005629333 nova_compute[244014]: 2026-02-25 12:19:00.702 244018 DEBUG nova.storage.rbd_utils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] resizing rbd image d9b67bce-8a7c-4f49-9cab-3e20377ca630_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
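[editor's note] The import/resize sequence above is plain subprocess work: rbd import returned 0 in 0.272s, then the image was grown to the flavor's 1 GiB root disk (Nova's rbd_utils does the resize through the librbd binding; the CLI call below is the equivalent). A sketch with oslo.concurrency's processutils, assuming the rbd client and the cited config/keyring are present:

    from oslo_concurrency import processutils

    base = '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'
    # Import the cached base image into the vms pool as the instance disk.
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', base,
        'd9b67bce-8a7c-4f49-9cab-3e20377ca630_disk',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
    # Grow it to 1073741824 bytes; rbd --size defaults to MiB, so 1024.
    processutils.execute(
        'rbd', 'resize', '--size', '1024',
        'vms/d9b67bce-8a7c-4f49-9cab-3e20377ca630_disk',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')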
Feb 25 07:19:00 np0005629333 nova_compute[244014]: 2026-02-25 12:19:00.808 244018 DEBUG nova.objects.instance [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'migration_context' on Instance uuid d9b67bce-8a7c-4f49-9cab-3e20377ca630 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:19:00 np0005629333 nova_compute[244014]: 2026-02-25 12:19:00.832 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:19:00 np0005629333 nova_compute[244014]: 2026-02-25 12:19:00.833 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Ensure instance console log exists: /var/lib/nova/instances/d9b67bce-8a7c-4f49-9cab-3e20377ca630/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:19:00 np0005629333 nova_compute[244014]: 2026-02-25 12:19:00.833 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:00 np0005629333 nova_compute[244014]: 2026-02-25 12:19:00.834 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:00 np0005629333 nova_compute[244014]: 2026-02-25 12:19:00.834 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:01 np0005629333 nova_compute[244014]: 2026-02-25 12:19:01.563 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Updating instance_info_cache with network_info: [{"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
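[editor's note] The instance_info_cache payload above is a JSON list of VIFs. A short sketch of pulling the fixed IPs out of such a payload; raw_network_info stands in for the JSON string from the log:

    import json

    def fixed_ips(raw_network_info):
        # Each VIF carries its network, whose subnets list the fixed IPs.
        for vif in json.loads(raw_network_info):
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    yield vif['id'], ip['address']

    # e.g. ('2e503dd2-735e-4bfc-87c7-dffab319d935', '10.100.0.5')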
Feb 25 07:19:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:19:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:19:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:19:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:19:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:19:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:19:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 314 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 3.2 MiB/s wr, 238 op/s
Feb 25 07:19:01 np0005629333 nova_compute[244014]: 2026-02-25 12:19:01.589 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-52f927ad-a417-489f-9f92-87bc3433649d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:19:01 np0005629333 nova_compute[244014]: 2026-02-25 12:19:01.590 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 25 07:19:01 np0005629333 nova_compute[244014]: 2026-02-25 12:19:01.591 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:19:01 np0005629333 nova_compute[244014]: 2026-02-25 12:19:01.592 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:19:01 np0005629333 nova_compute[244014]: 2026-02-25 12:19:01.593 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:19:01 np0005629333 nova_compute[244014]: 2026-02-25 12:19:01.776 244018 DEBUG oslo_concurrency.lockutils [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Acquiring lock "606b209d-b916-4ac7-87c7-887c274f747f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:01 np0005629333 nova_compute[244014]: 2026-02-25 12:19:01.777 244018 DEBUG oslo_concurrency.lockutils [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:01 np0005629333 nova_compute[244014]: 2026-02-25 12:19:01.778 244018 DEBUG oslo_concurrency.lockutils [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Acquiring lock "606b209d-b916-4ac7-87c7-887c274f747f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:01 np0005629333 nova_compute[244014]: 2026-02-25 12:19:01.778 244018 DEBUG oslo_concurrency.lockutils [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:01 np0005629333 nova_compute[244014]: 2026-02-25 12:19:01.779 244018 DEBUG oslo_concurrency.lockutils [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:01 np0005629333 nova_compute[244014]: 2026-02-25 12:19:01.784 244018 INFO nova.compute.manager [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Terminating instance#033[00m
Feb 25 07:19:01 np0005629333 nova_compute[244014]: 2026-02-25 12:19:01.789 244018 DEBUG nova.compute.manager [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:19:01 np0005629333 nova_compute[244014]: 2026-02-25 12:19:01.793 244018 DEBUG nova.network.neutron [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Successfully created port: 24e2d6b3-f0d8-4603-8c1e-da65e164050d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:19:01 np0005629333 kernel: tap259aef1d-40 (unregistering): left promiscuous mode
Feb 25 07:19:01 np0005629333 NetworkManager[49836]: <info>  [1772021941.8655] device (tap259aef1d-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:19:01 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:01Z|00068|binding|INFO|Releasing lport 259aef1d-40c8-413d-93ff-e8e3a0eede8a from this chassis (sb_readonly=0)
Feb 25 07:19:01 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:01Z|00069|binding|INFO|Setting lport 259aef1d-40c8-413d-93ff-e8e3a0eede8a down in Southbound
Feb 25 07:19:01 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:01Z|00070|binding|INFO|Removing iface tap259aef1d-40 ovn-installed in OVS
Feb 25 07:19:01 np0005629333 nova_compute[244014]: 2026-02-25 12:19:01.872 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:01 np0005629333 nova_compute[244014]: 2026-02-25 12:19:01.873 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:01 np0005629333 nova_compute[244014]: 2026-02-25 12:19:01.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:19:01 np0005629333 nova_compute[244014]: 2026-02-25 12:19:01.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:19:01 np0005629333 nova_compute[244014]: 2026-02-25 12:19:01.879 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:01.880 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:ea:d4 10.100.0.12'], port_security=['fa:16:3e:4c:ea:d4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '606b209d-b916-4ac7-87c7-887c274f747f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-66127cb2-231b-48c1-8e16-29c2386af2ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9bc1369afb1243ce828d930dea442277', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f267df70-0e5b-45b2-8b12-ebc2ca8b02c5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=252bdedd-c508-4af3-ad86-a8b65ae67147, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=259aef1d-40c8-413d-93ff-e8e3a0eede8a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:19:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:01.882 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 259aef1d-40c8-413d-93ff-e8e3a0eede8a in datapath 66127cb2-231b-48c1-8e16-29c2386af2ee unbound from our chassis#033[00m
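[editor's note] The metadata agent reacted because the Port_Binding row update matched one of its registered ovsdbapp events. A stripped-down sketch of such an event class; the agent's real PortBindingUpdatedEvent adds row conditions and a priority:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Watch updates to Port_Binding rows; no row conditions here.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Called for each matching update, e.g. chassis set/cleared.
            print('port %s updated' % row.logical_port)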
Feb 25 07:19:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:01.885 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 66127cb2-231b-48c1-8e16-29c2386af2ee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:19:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:01.887 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab4227f-161d-4acd-9b20-997f262d8b9a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:01.890 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee namespace which is not needed anymore#033[00m
Feb 25 07:19:01 np0005629333 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000013.scope: Deactivated successfully.
Feb 25 07:19:01 np0005629333 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000013.scope: Consumed 2.586s CPU time.
Feb 25 07:19:01 np0005629333 systemd-machined[210048]: Machine qemu-21-instance-00000013 terminated.
Feb 25 07:19:02 np0005629333 nova_compute[244014]: 2026-02-25 12:19:02.025 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:02 np0005629333 nova_compute[244014]: 2026-02-25 12:19:02.034 244018 DEBUG nova.compute.manager [req-11b8e5c6-e59c-4b3a-9af6-c100386af960 req-27788e3d-2bea-44ac-8762-e93351b1b51b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Received event network-vif-plugged-259aef1d-40c8-413d-93ff-e8e3a0eede8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:19:02 np0005629333 nova_compute[244014]: 2026-02-25 12:19:02.035 244018 DEBUG oslo_concurrency.lockutils [req-11b8e5c6-e59c-4b3a-9af6-c100386af960 req-27788e3d-2bea-44ac-8762-e93351b1b51b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "606b209d-b916-4ac7-87c7-887c274f747f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:02 np0005629333 nova_compute[244014]: 2026-02-25 12:19:02.035 244018 DEBUG oslo_concurrency.lockutils [req-11b8e5c6-e59c-4b3a-9af6-c100386af960 req-27788e3d-2bea-44ac-8762-e93351b1b51b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:02 np0005629333 nova_compute[244014]: 2026-02-25 12:19:02.036 244018 DEBUG oslo_concurrency.lockutils [req-11b8e5c6-e59c-4b3a-9af6-c100386af960 req-27788e3d-2bea-44ac-8762-e93351b1b51b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:02 np0005629333 nova_compute[244014]: 2026-02-25 12:19:02.036 244018 DEBUG nova.compute.manager [req-11b8e5c6-e59c-4b3a-9af6-c100386af960 req-27788e3d-2bea-44ac-8762-e93351b1b51b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] No waiting events found dispatching network-vif-plugged-259aef1d-40c8-413d-93ff-e8e3a0eede8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:19:02 np0005629333 nova_compute[244014]: 2026-02-25 12:19:02.037 244018 WARNING nova.compute.manager [req-11b8e5c6-e59c-4b3a-9af6-c100386af960 req-27788e3d-2bea-44ac-8762-e93351b1b51b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Received unexpected event network-vif-plugged-259aef1d-40c8-413d-93ff-e8e3a0eede8a for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:19:02 np0005629333 nova_compute[244014]: 2026-02-25 12:19:02.038 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:02 np0005629333 nova_compute[244014]: 2026-02-25 12:19:02.046 244018 INFO nova.virt.libvirt.driver [-] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Instance destroyed successfully.#033[00m
Feb 25 07:19:02 np0005629333 nova_compute[244014]: 2026-02-25 12:19:02.047 244018 DEBUG nova.objects.instance [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lazy-loading 'resources' on Instance uuid 606b209d-b916-4ac7-87c7-887c274f747f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:19:02 np0005629333 neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee[261867]: [NOTICE]   (261871) : haproxy version is 2.8.14-c23fe91
Feb 25 07:19:02 np0005629333 neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee[261867]: [NOTICE]   (261871) : path to executable is /usr/sbin/haproxy
Feb 25 07:19:02 np0005629333 neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee[261867]: [WARNING]  (261871) : Exiting Master process...
Feb 25 07:19:02 np0005629333 neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee[261867]: [WARNING]  (261871) : Exiting Master process...
Feb 25 07:19:02 np0005629333 neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee[261867]: [ALERT]    (261871) : Current worker (261873) exited with code 143 (Terminated)
Feb 25 07:19:02 np0005629333 neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee[261867]: [WARNING]  (261871) : All workers exited. Exiting... (0)
Feb 25 07:19:02 np0005629333 systemd[1]: libpod-56154355d1978a6d4e25487d674d80400aa37aa4ed85e9782ce917f758c3861c.scope: Deactivated successfully.
Feb 25 07:19:02 np0005629333 nova_compute[244014]: 2026-02-25 12:19:02.070 244018 DEBUG nova.virt.libvirt.vif [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:18:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-1771878194',display_name='tempest-ImagesNegativeTestJSON-server-1771878194',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-1771878194',id=19,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9bc1369afb1243ce828d930dea442277',ramdisk_id='',reservation_id='r-kxueboo3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesNegativeTestJSON-926895290',owner_user_name='tempest-ImagesNegativeTestJSON-926895290-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:19:00Z,user_data=None,user_id='6cf653b76cca4209ae56b9938dc0be3b',uuid=606b209d-b916-4ac7-87c7-887c274f747f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "address": "fa:16:3e:4c:ea:d4", "network": {"id": "66127cb2-231b-48c1-8e16-29c2386af2ee", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-446934594-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc1369afb1243ce828d930dea442277", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap259aef1d-40", "ovs_interfaceid": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:19:02 np0005629333 nova_compute[244014]: 2026-02-25 12:19:02.071 244018 DEBUG nova.network.os_vif_util [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Converting VIF {"id": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "address": "fa:16:3e:4c:ea:d4", "network": {"id": "66127cb2-231b-48c1-8e16-29c2386af2ee", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-446934594-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9bc1369afb1243ce828d930dea442277", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap259aef1d-40", "ovs_interfaceid": "259aef1d-40c8-413d-93ff-e8e3a0eede8a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:19:02 np0005629333 nova_compute[244014]: 2026-02-25 12:19:02.072 244018 DEBUG nova.network.os_vif_util [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=259aef1d-40c8-413d-93ff-e8e3a0eede8a,network=Network(66127cb2-231b-48c1-8e16-29c2386af2ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap259aef1d-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:19:02 np0005629333 nova_compute[244014]: 2026-02-25 12:19:02.072 244018 DEBUG os_vif [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=259aef1d-40c8-413d-93ff-e8e3a0eede8a,network=Network(66127cb2-231b-48c1-8e16-29c2386af2ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap259aef1d-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:19:02 np0005629333 podman[262070]: 2026-02-25 12:19:02.075321326 +0000 UTC m=+0.057909006 container died 56154355d1978a6d4e25487d674d80400aa37aa4ed85e9782ce917f758c3861c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 07:19:02 np0005629333 nova_compute[244014]: 2026-02-25 12:19:02.077 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:02 np0005629333 nova_compute[244014]: 2026-02-25 12:19:02.077 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap259aef1d-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
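[editor's note] The DelPortCommand above is os-vif removing the tap from br-int through ovsdbapp. A sketch of the same call against a local ovsdb-server; the socket path is an assumption:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    # Same semantics as the logged command: drop the port if it exists.
    ovs.del_port('tap259aef1d-40', bridge='br-int',
                 if_exists=True).execute(check_error=True)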
Feb 25 07:19:02 np0005629333 nova_compute[244014]: 2026-02-25 12:19:02.082 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:02 np0005629333 nova_compute[244014]: 2026-02-25 12:19:02.084 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:19:02 np0005629333 nova_compute[244014]: 2026-02-25 12:19:02.086 244018 INFO os_vif [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:ea:d4,bridge_name='br-int',has_traffic_filtering=True,id=259aef1d-40c8-413d-93ff-e8e3a0eede8a,network=Network(66127cb2-231b-48c1-8e16-29c2386af2ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap259aef1d-40')#033[00m
Feb 25 07:19:02 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-56154355d1978a6d4e25487d674d80400aa37aa4ed85e9782ce917f758c3861c-userdata-shm.mount: Deactivated successfully.
Feb 25 07:19:02 np0005629333 systemd[1]: var-lib-containers-storage-overlay-95d034a831b5d2ac46587817457b2ea4e03f7429753844bd67b29486a82f6de6-merged.mount: Deactivated successfully.
Feb 25 07:19:02 np0005629333 podman[262070]: 2026-02-25 12:19:02.149778756 +0000 UTC m=+0.132366436 container cleanup 56154355d1978a6d4e25487d674d80400aa37aa4ed85e9782ce917f758c3861c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 07:19:02 np0005629333 systemd[1]: libpod-conmon-56154355d1978a6d4e25487d674d80400aa37aa4ed85e9782ce917f758c3861c.scope: Deactivated successfully.
Feb 25 07:19:02 np0005629333 podman[262126]: 2026-02-25 12:19:02.245440711 +0000 UTC m=+0.072520606 container remove 56154355d1978a6d4e25487d674d80400aa37aa4ed85e9782ce917f758c3861c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 25 07:19:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:02.252 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cdaa444e-31c3-43d7-8f8a-7b1f0fd53ec5]: (4, ('Wed Feb 25 12:19:02 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee (56154355d1978a6d4e25487d674d80400aa37aa4ed85e9782ce917f758c3861c)\n56154355d1978a6d4e25487d674d80400aa37aa4ed85e9782ce917f758c3861c\nWed Feb 25 12:19:02 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee (56154355d1978a6d4e25487d674d80400aa37aa4ed85e9782ce917f758c3861c)\n56154355d1978a6d4e25487d674d80400aa37aa4ed85e9782ce917f758c3861c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:02.254 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4590dd38-1660-413a-806d-c4ac293f5a8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:02.255 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66127cb2-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:02 np0005629333 kernel: tap66127cb2-20: left promiscuous mode
Feb 25 07:19:02 np0005629333 nova_compute[244014]: 2026-02-25 12:19:02.257 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:02 np0005629333 nova_compute[244014]: 2026-02-25 12:19:02.265 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:02.267 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9a73e3c5-c9a9-43b9-a3ce-d3133cb89911]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:02.280 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[49f1cc35-1608-4184-b5b4-552332887d41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:02.281 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2296cb07-7603-4031-8721-90ca39969a99]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:02.302 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4f9d9121-5a3b-4f1b-ba91-513d0f187d04]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 390861, 'reachable_time': 22852, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262138, 'error': None, 'target': 'ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:02 np0005629333 systemd[1]: run-netns-ovnmeta\x2d66127cb2\x2d231b\x2d48c1\x2d8e16\x2d29c2386af2ee.mount: Deactivated successfully.
Feb 25 07:19:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:02.307 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:19:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:02.307 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[c3efe016-ae99-4a39-ae30-0635a31f53f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
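[editor's note] The privsep replies above are pyroute2 netlink dumps taken inside the ovnmeta namespace just before it is deleted. A sketch of the same two privileged steps with pyroute2 (requires root; namespace name taken from the log):

    from pyroute2 import NetNS, netns

    ns_name = 'ovnmeta-66127cb2-231b-48c1-8e16-29c2386af2ee'
    with NetNS(ns_name) as ns:
        # Dump links inside the namespace; only 'lo' remains in the dump above.
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'), link['state'])
    netns.remove(ns_name)  # what remove_netns ultimately does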
Feb 25 07:19:02 np0005629333 nova_compute[244014]: 2026-02-25 12:19:02.462 244018 INFO nova.virt.libvirt.driver [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Deleting instance files /var/lib/nova/instances/606b209d-b916-4ac7-87c7-887c274f747f_del#033[00m
Feb 25 07:19:02 np0005629333 nova_compute[244014]: 2026-02-25 12:19:02.464 244018 INFO nova.virt.libvirt.driver [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Deletion of /var/lib/nova/instances/606b209d-b916-4ac7-87c7-887c274f747f_del complete#033[00m
Feb 25 07:19:02 np0005629333 nova_compute[244014]: 2026-02-25 12:19:02.544 244018 INFO nova.compute.manager [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:19:02 np0005629333 nova_compute[244014]: 2026-02-25 12:19:02.545 244018 DEBUG oslo.service.loopingcall [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:19:02 np0005629333 nova_compute[244014]: 2026-02-25 12:19:02.545 244018 DEBUG nova.compute.manager [-] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:19:02 np0005629333 nova_compute[244014]: 2026-02-25 12:19:02.545 244018 DEBUG nova.network.neutron [-] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
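[editor's note] The "Waiting for function" line above is oslo.service's looping-call machinery: the network deallocation runs inside a looping call and the manager blocks on its result event. A minimal sketch of that wait pattern; FixedIntervalLoopingCall is used here for brevity, while Nova wraps the deallocate in a retrying variant:

    from oslo_service import loopingcall

    def _deallocate_once():
        # Do one attempt; raising LoopingCallDone ends the loop with a value.
        raise loopingcall.LoopingCallDone(retvalue=True)

    timer = loopingcall.FixedIntervalLoopingCall(_deallocate_once)
    done = timer.start(interval=1).wait()  # blocks, like "Waiting for function"
    print(done)  # True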
Feb 25 07:19:02 np0005629333 nova_compute[244014]: 2026-02-25 12:19:02.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:19:02 np0005629333 nova_compute[244014]: 2026-02-25 12:19:02.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 07:19:03 np0005629333 nova_compute[244014]: 2026-02-25 12:19:03.418 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 357 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 6.6 MiB/s rd, 5.9 MiB/s wr, 349 op/s
Feb 25 07:19:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:19:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:19:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:19:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:19:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:19:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:19:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:19:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:19:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:19:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:19:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:19:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
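[editor's note] Each handle_command line above is a mon_command JSON dispatched to the monitor by the mgr. The same requests can be issued from Python with librados; a sketch, with the conffile path assumed and an admin keyring required:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    # Same JSON shape as the logged commands, e.g. the destroyed-OSD query.
    ret, outbuf, outs = cluster.mon_command(json.dumps(
        {"prefix": "osd tree", "states": ["destroyed"],
         "format": "json"}), b'')
    print(ret, outbuf.decode() or outs)
    cluster.shutdown()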
Feb 25 07:19:03 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:03Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:87:71:62 10.100.0.5
Feb 25 07:19:03 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:03Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:87:71:62 10.100.0.5
Feb 25 07:19:03 np0005629333 nova_compute[244014]: 2026-02-25 12:19:03.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:19:03 np0005629333 nova_compute[244014]: 2026-02-25 12:19:03.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:19:03 np0005629333 nova_compute[244014]: 2026-02-25 12:19:03.923 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:03 np0005629333 nova_compute[244014]: 2026-02-25 12:19:03.924 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:03 np0005629333 nova_compute[244014]: 2026-02-25 12:19:03.924 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:03 np0005629333 nova_compute[244014]: 2026-02-25 12:19:03.924 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:19:03 np0005629333 nova_compute[244014]: 2026-02-25 12:19:03.925 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
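[editor's note] The resource tracker shells out to `ceph df --format=json` to audit Ceph-backed disk capacity. A sketch of running and parsing that call; the flags are copied from the log line above, and the keys read here are the standard `ceph df` JSON fields:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    gib = 1024 ** 3
    print('avail %.1f GiB of %.1f GiB' % (
        stats['total_avail_bytes'] / gib, stats['total_bytes'] / gib))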
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.125 244018 DEBUG nova.compute.manager [req-9d2fd49c-66c4-4bca-acba-9903bebc1b75 req-c4616d60-1875-4fb3-917c-bb53c5aa9a78 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Received event network-vif-unplugged-259aef1d-40c8-413d-93ff-e8e3a0eede8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.126 244018 DEBUG oslo_concurrency.lockutils [req-9d2fd49c-66c4-4bca-acba-9903bebc1b75 req-c4616d60-1875-4fb3-917c-bb53c5aa9a78 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "606b209d-b916-4ac7-87c7-887c274f747f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.126 244018 DEBUG oslo_concurrency.lockutils [req-9d2fd49c-66c4-4bca-acba-9903bebc1b75 req-c4616d60-1875-4fb3-917c-bb53c5aa9a78 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.127 244018 DEBUG oslo_concurrency.lockutils [req-9d2fd49c-66c4-4bca-acba-9903bebc1b75 req-c4616d60-1875-4fb3-917c-bb53c5aa9a78 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.127 244018 DEBUG nova.compute.manager [req-9d2fd49c-66c4-4bca-acba-9903bebc1b75 req-c4616d60-1875-4fb3-917c-bb53c5aa9a78 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] No waiting events found dispatching network-vif-unplugged-259aef1d-40c8-413d-93ff-e8e3a0eede8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.128 244018 DEBUG nova.compute.manager [req-9d2fd49c-66c4-4bca-acba-9903bebc1b75 req-c4616d60-1875-4fb3-917c-bb53c5aa9a78 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Received event network-vif-unplugged-259aef1d-40c8-413d-93ff-e8e3a0eede8a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.128 244018 DEBUG nova.compute.manager [req-9d2fd49c-66c4-4bca-acba-9903bebc1b75 req-c4616d60-1875-4fb3-917c-bb53c5aa9a78 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Received event network-vif-plugged-259aef1d-40c8-413d-93ff-e8e3a0eede8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.128 244018 DEBUG oslo_concurrency.lockutils [req-9d2fd49c-66c4-4bca-acba-9903bebc1b75 req-c4616d60-1875-4fb3-917c-bb53c5aa9a78 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "606b209d-b916-4ac7-87c7-887c274f747f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.129 244018 DEBUG oslo_concurrency.lockutils [req-9d2fd49c-66c4-4bca-acba-9903bebc1b75 req-c4616d60-1875-4fb3-917c-bb53c5aa9a78 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.129 244018 DEBUG oslo_concurrency.lockutils [req-9d2fd49c-66c4-4bca-acba-9903bebc1b75 req-c4616d60-1875-4fb3-917c-bb53c5aa9a78 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.129 244018 DEBUG nova.compute.manager [req-9d2fd49c-66c4-4bca-acba-9903bebc1b75 req-c4616d60-1875-4fb3-917c-bb53c5aa9a78 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] No waiting events found dispatching network-vif-plugged-259aef1d-40c8-413d-93ff-e8e3a0eede8a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.130 244018 WARNING nova.compute.manager [req-9d2fd49c-66c4-4bca-acba-9903bebc1b75 req-c4616d60-1875-4fb3-917c-bb53c5aa9a78 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Received unexpected event network-vif-plugged-259aef1d-40c8-413d-93ff-e8e3a0eede8a for instance with vm_state active and task_state deleting.#033[00m
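[editor's note] The vif-unplugged/vif-plugged events above arrive after deletion has already cleared the instance's event list, so every pop finds no waiter and the late vif-plugged is logged as unexpected. A toy sketch of that pop-a-waiter pattern; plain threading stands in for Nova's eventlet-based InstanceEvents:

    import threading

    waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def pop_instance_event(uuid, name):
        # Return and remove the waiter if something is blocked on this
        # event; None means "No waiting events found", as logged above.
        return waiters.pop((uuid, name), None)

    evt = pop_instance_event(
        '606b209d-b916-4ac7-87c7-887c274f747f',
        'network-vif-plugged-259aef1d-40c8-413d-93ff-e8e3a0eede8a')
    print(evt)  # None -> the event is dispatched as unexpected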
Feb 25 07:19:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:19:04 np0005629333 podman[262308]: 2026-02-25 12:19:04.199455887 +0000 UTC m=+0.046298260 container create fc44281bda6b3eeb041a99b1a690265a1bee1fc66231588454f99ca240b3354f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.203 244018 DEBUG nova.network.neutron [-] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.225 244018 INFO nova.compute.manager [-] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Took 1.68 seconds to deallocate network for instance.#033[00m
Feb 25 07:19:04 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:19:04 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:19:04 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:19:04 np0005629333 systemd[1]: Started libpod-conmon-fc44281bda6b3eeb041a99b1a690265a1bee1fc66231588454f99ca240b3354f.scope.
Feb 25 07:19:04 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:04Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d2:1d:06 10.100.0.7
Feb 25 07:19:04 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:04Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d2:1d:06 10.100.0.7
Feb 25 07:19:04 np0005629333 podman[262308]: 2026-02-25 12:19:04.177756048 +0000 UTC m=+0.024598441 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.283 244018 DEBUG oslo_concurrency.lockutils [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.283 244018 DEBUG oslo_concurrency.lockutils [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
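These "Acquiring lock ... by ..." / "Lock ... acquired" pairs are emitted by oslo.concurrency's lockutils (the `inner` wrapper cited at lockutils.py:404/409 is the decorator's closure). A minimal sketch of the same pattern using the public oslo.concurrency API; only the lock name is taken from the log, the function body is a placeholder:

    # Minimal sketch of the serialization behind the log lines above. The
    # "Acquiring"/"acquired"/"released" DEBUG messages come from this wrapper.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_usage():
        # critical section -- in nova this mutates the resource tracker's
        # in-memory accounting; here it is just a placeholder
        pass

    # equivalent context-manager form
    with lockutils.lock("compute_resources"):
        pass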
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.288 244018 DEBUG nova.compute.manager [req-7c157564-7632-4dfc-8726-2e371f7eaada req-eaeeb418-af7e-4d81-a30e-ee5e42fe438f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Received event network-vif-deleted-259aef1d-40c8-413d-93ff-e8e3a0eede8a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:19:04 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:19:04 np0005629333 podman[262308]: 2026-02-25 12:19:04.324381724 +0000 UTC m=+0.171224107 container init fc44281bda6b3eeb041a99b1a690265a1bee1fc66231588454f99ca240b3354f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_yonath, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 07:19:04 np0005629333 podman[262308]: 2026-02-25 12:19:04.33458473 +0000 UTC m=+0.181427113 container start fc44281bda6b3eeb041a99b1a690265a1bee1fc66231588454f99ca240b3354f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:19:04 np0005629333 objective_yonath[262324]: 167 167
Feb 25 07:19:04 np0005629333 systemd[1]: libpod-fc44281bda6b3eeb041a99b1a690265a1bee1fc66231588454f99ca240b3354f.scope: Deactivated successfully.
Feb 25 07:19:04 np0005629333 podman[262308]: 2026-02-25 12:19:04.346672079 +0000 UTC m=+0.193514502 container attach fc44281bda6b3eeb041a99b1a690265a1bee1fc66231588454f99ca240b3354f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:19:04 np0005629333 podman[262308]: 2026-02-25 12:19:04.347305537 +0000 UTC m=+0.194147920 container died fc44281bda6b3eeb041a99b1a690265a1bee1fc66231588454f99ca240b3354f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.384 244018 DEBUG oslo_concurrency.processutils [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:19:04 np0005629333 systemd[1]: var-lib-containers-storage-overlay-8d8775d111f512136a542e65d31a4ba28f25d2bd05449f827e65888111911a62-merged.mount: Deactivated successfully.
Feb 25 07:19:04 np0005629333 podman[262308]: 2026-02-25 12:19:04.444387582 +0000 UTC m=+0.291229965 container remove fc44281bda6b3eeb041a99b1a690265a1bee1fc66231588454f99ca240b3354f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_yonath, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 07:19:04 np0005629333 systemd[1]: libpod-conmon-fc44281bda6b3eeb041a99b1a690265a1bee1fc66231588454f99ca240b3354f.scope: Deactivated successfully.
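The fc44281b... container above runs the full short-lived podman lifecycle (create, init, start, attach, died, remove) in well under a second, the pattern cephadm uses to shell out one-off probes inside the ceph image; the bare "167 167" line is that container's stdout. A hedged reproduction of the same event sequence (image and command are illustrative assumptions, not taken from this log):

    # One-shot container: podman logs create/init/start/attach, the process
    # exits, and --rm triggers the died/remove events seen in the journal.
    import subprocess

    result = subprocess.run(
        ["podman", "run", "--rm", "quay.io/ceph/ceph:latest", "id", "-u"],
        check=True, capture_output=True, text=True,
    )
    print(result.stdout.strip())  # the container's stdout, like "167 167" above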
Feb 25 07:19:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:19:04 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3181884690' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.497 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
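nova's RBD storage accounting shells out to `ceph df --format=json` (the subprocess lines here, mirrored by the mon's audit entries just above). A sketch of the same call, assuming the ceph CLI, the client.openstack keyring and /etc/ceph/ceph.conf from the log are reachable:

    # Run the exact command the resource tracker runs above and read the
    # cluster-wide totals out of the JSON it returns.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])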
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.578 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.580 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.584 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.585 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:19:04 np0005629333 podman[262370]: 2026-02-25 12:19:04.658337107 +0000 UTC m=+0.075811979 container create 17cac282f7da3591424d11c4c91cd16a19ed5f79bdc1db1d4e7e820465bacc6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:19:04 np0005629333 podman[262370]: 2026-02-25 12:19:04.610787023 +0000 UTC m=+0.028261945 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:19:04 np0005629333 systemd[1]: Started libpod-conmon-17cac282f7da3591424d11c4c91cd16a19ed5f79bdc1db1d4e7e820465bacc6b.scope.
Feb 25 07:19:04 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:19:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b0a53679f98cecbba1d48164f51070200903776c9828b55ef18b828acc3a81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:19:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b0a53679f98cecbba1d48164f51070200903776c9828b55ef18b828acc3a81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:19:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b0a53679f98cecbba1d48164f51070200903776c9828b55ef18b828acc3a81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:19:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b0a53679f98cecbba1d48164f51070200903776c9828b55ef18b828acc3a81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:19:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b0a53679f98cecbba1d48164f51070200903776c9828b55ef18b828acc3a81/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
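The "(0x7fffffff)" the kernel appends to these xfs remount messages is the Y2038 limit: a signed 32-bit timestamp tops out at 2^31 - 1 seconds after the Unix epoch. A one-liner confirms the date it decodes to:

    # 0x7fffffff seconds after the epoch is the last second a signed 32-bit
    # time_t (and a non-bigtime XFS inode) can represent.
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00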
Feb 25 07:19:04 np0005629333 podman[262370]: 2026-02-25 12:19:04.759056164 +0000 UTC m=+0.176531046 container init 17cac282f7da3591424d11c4c91cd16a19ed5f79bdc1db1d4e7e820465bacc6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ganguly, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 07:19:04 np0005629333 podman[262370]: 2026-02-25 12:19:04.766265616 +0000 UTC m=+0.183740448 container start 17cac282f7da3591424d11c4c91cd16a19ed5f79bdc1db1d4e7e820465bacc6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ganguly, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:19:04 np0005629333 podman[262370]: 2026-02-25 12:19:04.779362164 +0000 UTC m=+0.196836996 container attach 17cac282f7da3591424d11c4c91cd16a19ed5f79bdc1db1d4e7e820465bacc6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ganguly, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.820 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.822 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4189MB free_disk=59.88206484075636GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.822 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.855 244018 DEBUG nova.network.neutron [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Successfully updated port: 24e2d6b3-f0d8-4603-8c1e-da65e164050d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.869 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "refresh_cache-d9b67bce-8a7c-4f49-9cab-3e20377ca630" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.870 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquired lock "refresh_cache-d9b67bce-8a7c-4f49-9cab-3e20377ca630" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:19:04 np0005629333 nova_compute[244014]: 2026-02-25 12:19:04.870 244018 DEBUG nova.network.neutron [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:19:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:19:04 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/240962882' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:19:05 np0005629333 nova_compute[244014]: 2026-02-25 12:19:05.007 244018 DEBUG oslo_concurrency.processutils [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:19:05 np0005629333 nova_compute[244014]: 2026-02-25 12:19:05.011 244018 DEBUG nova.compute.provider_tree [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:19:05 np0005629333 nova_compute[244014]: 2026-02-25 12:19:05.027 244018 DEBUG nova.scheduler.client.report [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
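The allocation ratios in that inventory are what placement multiplies spare capacity by: schedulable capacity per resource class is (total - reserved) * allocation_ratio. Applying that to the logged inventory as a quick check:

    # Effective schedulable capacity implied by the inventory above.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2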
Feb 25 07:19:05 np0005629333 nova_compute[244014]: 2026-02-25 12:19:05.048 244018 DEBUG oslo_concurrency.lockutils [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:19:05 np0005629333 nova_compute[244014]: 2026-02-25 12:19:05.049 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.227s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:19:05 np0005629333 nova_compute[244014]: 2026-02-25 12:19:05.076 244018 INFO nova.scheduler.client.report [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Deleted allocations for instance 606b209d-b916-4ac7-87c7-887c274f747f
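"Deleted allocations for instance ..." is the scheduler report client clearing the instance's consumer record in the placement API, a single REST call. A hedged raw equivalent (nova actually goes through its keystone session; the endpoint and token below are placeholders, not from this log):

    # Placement models allocation cleanup as DELETE /allocations/{consumer}.
    import requests

    PLACEMENT = "http://placement.example.com"  # placeholder endpoint
    TOKEN = "..."                               # keystone token, obtained elsewhere
    uuid = "606b209d-b916-4ac7-87c7-887c274f747f"  # instance from the log

    resp = requests.delete(
        f"{PLACEMENT}/allocations/{uuid}",
        headers={"X-Auth-Token": TOKEN},
    )
    resp.raise_for_status()  # 204 No Content on success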
Feb 25 07:19:05 np0005629333 nova_compute[244014]: 2026-02-25 12:19:05.122 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 52f927ad-a417-489f-9f92-87bc3433649d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 07:19:05 np0005629333 nova_compute[244014]: 2026-02-25 12:19:05.122 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 7a6ab503-d433-40a7-9395-3d5660e852c9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 07:19:05 np0005629333 nova_compute[244014]: 2026-02-25 12:19:05.122 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance d9b67bce-8a7c-4f49-9cab-3e20377ca630 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 07:19:05 np0005629333 nova_compute[244014]: 2026-02-25 12:19:05.122 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 07:19:05 np0005629333 nova_compute[244014]: 2026-02-25 12:19:05.122 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 07:19:05 np0005629333 nova_compute[244014]: 2026-02-25 12:19:05.147 244018 DEBUG oslo_concurrency.lockutils [None req-61d2c01a-f50f-42ac-b93a-5bafb0f99ca7 6cf653b76cca4209ae56b9938dc0be3b 9bc1369afb1243ce828d930dea442277 - - default default] Lock "606b209d-b916-4ac7-87c7-887c274f747f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:19:05 np0005629333 nova_compute[244014]: 2026-02-25 12:19:05.161 244018 DEBUG nova.network.neutron [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:19:05 np0005629333 gifted_ganguly[262386]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:19:05 np0005629333 gifted_ganguly[262386]: --> All data devices are unavailable
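This gifted_ganguly output looks like a ceph-volume device probe: all three candidate data devices are LVM-backed and already consumed, so nothing is deployable. The JSON another one-shot container starts printing at the end of this section (keyed by OSD id, with lv_path/lv_tags/devices) matches the shape of `ceph-volume lvm list --format json`; a hedged sketch of consuming that output:

    # List LVM-backed OSDs the way the container output at the end of this
    # section does. Assumes ceph-volume is available (cephadm runs it inside
    # the ceph container, which works the same way).
    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["devices"])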
Feb 25 07:19:05 np0005629333 nova_compute[244014]: 2026-02-25 12:19:05.204 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:19:05 np0005629333 systemd[1]: libpod-17cac282f7da3591424d11c4c91cd16a19ed5f79bdc1db1d4e7e820465bacc6b.scope: Deactivated successfully.
Feb 25 07:19:05 np0005629333 podman[262370]: 2026-02-25 12:19:05.236050853 +0000 UTC m=+0.653525715 container died 17cac282f7da3591424d11c4c91cd16a19ed5f79bdc1db1d4e7e820465bacc6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 07:19:05 np0005629333 systemd[1]: var-lib-containers-storage-overlay-b9b0a53679f98cecbba1d48164f51070200903776c9828b55ef18b828acc3a81-merged.mount: Deactivated successfully.
Feb 25 07:19:05 np0005629333 podman[262370]: 2026-02-25 12:19:05.356616956 +0000 UTC m=+0.774091788 container remove 17cac282f7da3591424d11c4c91cd16a19ed5f79bdc1db1d4e7e820465bacc6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ganguly, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:19:05 np0005629333 systemd[1]: libpod-conmon-17cac282f7da3591424d11c4c91cd16a19ed5f79bdc1db1d4e7e820465bacc6b.scope: Deactivated successfully.
Feb 25 07:19:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 357 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 6.6 MiB/s rd, 5.9 MiB/s wr, 349 op/s
Feb 25 07:19:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:19:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/337113847' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:19:05 np0005629333 nova_compute[244014]: 2026-02-25 12:19:05.770 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:19:05 np0005629333 nova_compute[244014]: 2026-02-25 12:19:05.778 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:19:05 np0005629333 nova_compute[244014]: 2026-02-25 12:19:05.794 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:19:05 np0005629333 nova_compute[244014]: 2026-02-25 12:19:05.820 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 07:19:05 np0005629333 nova_compute[244014]: 2026-02-25 12:19:05.821 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:19:05 np0005629333 podman[262506]: 2026-02-25 12:19:05.861877888 +0000 UTC m=+0.072593739 container create fb53daf14f1118e5afa56537488ba75fa3bfa034fa413d58139684c11926ccb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:19:05 np0005629333 podman[262506]: 2026-02-25 12:19:05.814130907 +0000 UTC m=+0.024846818 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:19:05 np0005629333 systemd[1]: Started libpod-conmon-fb53daf14f1118e5afa56537488ba75fa3bfa034fa413d58139684c11926ccb4.scope.
Feb 25 07:19:05 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:19:05 np0005629333 podman[262506]: 2026-02-25 12:19:05.972216375 +0000 UTC m=+0.182932276 container init fb53daf14f1118e5afa56537488ba75fa3bfa034fa413d58139684c11926ccb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_colden, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:19:05 np0005629333 podman[262506]: 2026-02-25 12:19:05.980326592 +0000 UTC m=+0.191042453 container start fb53daf14f1118e5afa56537488ba75fa3bfa034fa413d58139684c11926ccb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_colden, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:19:05 np0005629333 peaceful_colden[262523]: 167 167
Feb 25 07:19:05 np0005629333 systemd[1]: libpod-fb53daf14f1118e5afa56537488ba75fa3bfa034fa413d58139684c11926ccb4.scope: Deactivated successfully.
Feb 25 07:19:06 np0005629333 podman[262506]: 2026-02-25 12:19:06.002808303 +0000 UTC m=+0.213524214 container attach fb53daf14f1118e5afa56537488ba75fa3bfa034fa413d58139684c11926ccb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_colden, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 07:19:06 np0005629333 podman[262506]: 2026-02-25 12:19:06.004256224 +0000 UTC m=+0.214972085 container died fb53daf14f1118e5afa56537488ba75fa3bfa034fa413d58139684c11926ccb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_colden, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Feb 25 07:19:06 np0005629333 systemd[1]: var-lib-containers-storage-overlay-cb6dcf0141afc45ac3079eb8817b4c30ff8221dd55fa98456b51f4fca9336fb0-merged.mount: Deactivated successfully.
Feb 25 07:19:06 np0005629333 podman[262506]: 2026-02-25 12:19:06.133420239 +0000 UTC m=+0.344136090 container remove fb53daf14f1118e5afa56537488ba75fa3bfa034fa413d58139684c11926ccb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_colden, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:19:06 np0005629333 systemd[1]: libpod-conmon-fb53daf14f1118e5afa56537488ba75fa3bfa034fa413d58139684c11926ccb4.scope: Deactivated successfully.
Feb 25 07:19:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:06.261 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
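That transaction is the OVN metadata agent acking a southbound config sequence number: one DbSetCommand writing neutron:ovn-metadata-sb-cfg into external_ids on its Chassis_Private row. Roughly the same call through ovsdbapp's public API; `api` is assumed to be an already-connected southbound IDL wrapper (building one needs a live ovsdb endpoint, so it is not shown):

    # Sketch of the logged transaction: set one external_ids key on a
    # Chassis_Private row, matching DbSetCommand(..., if_exists=True) above.
    def ack_sb_cfg(api, chassis_uuid: str, seqnum: int) -> None:
        api.db_set(
            "Chassis_Private",
            chassis_uuid,
            ("external_ids", {"neutron:ovn-metadata-sb-cfg": str(seqnum)}),
        ).execute(check_error=True)

    # ack_sb_cfg(api, "a594384c-d614-4492-9e0a-4d6ec095920c", 7)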
Feb 25 07:19:06 np0005629333 podman[262548]: 2026-02-25 12:19:06.357930821 +0000 UTC m=+0.079359628 container create 029d0ff8b845c6c3205bc35f8f3c567ce9559199171936fd16237e4eb68214cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 07:19:06 np0005629333 podman[262548]: 2026-02-25 12:19:06.315253173 +0000 UTC m=+0.036682020 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:19:06 np0005629333 nova_compute[244014]: 2026-02-25 12:19:06.411 244018 DEBUG nova.compute.manager [req-489947ed-117d-482a-af0c-02b00a7213ec req-68ba3c8f-bebd-4d35-913b-c3dfd71e0155 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Received event network-changed-24e2d6b3-f0d8-4603-8c1e-da65e164050d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:19:06 np0005629333 nova_compute[244014]: 2026-02-25 12:19:06.411 244018 DEBUG nova.compute.manager [req-489947ed-117d-482a-af0c-02b00a7213ec req-68ba3c8f-bebd-4d35-913b-c3dfd71e0155 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Refreshing instance network info cache due to event network-changed-24e2d6b3-f0d8-4603-8c1e-da65e164050d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:19:06 np0005629333 nova_compute[244014]: 2026-02-25 12:19:06.412 244018 DEBUG oslo_concurrency.lockutils [req-489947ed-117d-482a-af0c-02b00a7213ec req-68ba3c8f-bebd-4d35-913b-c3dfd71e0155 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d9b67bce-8a7c-4f49-9cab-3e20377ca630" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:19:06 np0005629333 systemd[1]: Started libpod-conmon-029d0ff8b845c6c3205bc35f8f3c567ce9559199171936fd16237e4eb68214cc.scope.
Feb 25 07:19:06 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:19:06 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d726619dda4afd6c4abd9534a387ecbf18821ba5f6efe413e48b05fc113205de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:19:06 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d726619dda4afd6c4abd9534a387ecbf18821ba5f6efe413e48b05fc113205de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:19:06 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d726619dda4afd6c4abd9534a387ecbf18821ba5f6efe413e48b05fc113205de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:19:06 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d726619dda4afd6c4abd9534a387ecbf18821ba5f6efe413e48b05fc113205de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:19:06 np0005629333 podman[262548]: 2026-02-25 12:19:06.502503519 +0000 UTC m=+0.223932336 container init 029d0ff8b845c6c3205bc35f8f3c567ce9559199171936fd16237e4eb68214cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_proskuriakova, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 07:19:06 np0005629333 podman[262548]: 2026-02-25 12:19:06.511847451 +0000 UTC m=+0.233276248 container start 029d0ff8b845c6c3205bc35f8f3c567ce9559199171936fd16237e4eb68214cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_proskuriakova, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:19:06 np0005629333 podman[262548]: 2026-02-25 12:19:06.526579065 +0000 UTC m=+0.248007912 container attach 029d0ff8b845c6c3205bc35f8f3c567ce9559199171936fd16237e4eb68214cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 07:19:06 np0005629333 nova_compute[244014]: 2026-02-25 12:19:06.568 244018 DEBUG nova.network.neutron [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Updating instance_info_cache with network_info: [{"id": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "address": "fa:16:3e:4b:b0:82", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e2d6b3-f0", "ovs_interfaceid": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:19:06 np0005629333 nova_compute[244014]: 2026-02-25 12:19:06.584 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Releasing lock "refresh_cache-d9b67bce-8a7c-4f49-9cab-3e20377ca630" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:19:06 np0005629333 nova_compute[244014]: 2026-02-25 12:19:06.585 244018 DEBUG nova.compute.manager [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Instance network_info: |[{"id": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "address": "fa:16:3e:4b:b0:82", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e2d6b3-f0", "ovs_interfaceid": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:19:06 np0005629333 nova_compute[244014]: 2026-02-25 12:19:06.585 244018 DEBUG oslo_concurrency.lockutils [req-489947ed-117d-482a-af0c-02b00a7213ec req-68ba3c8f-bebd-4d35-913b-c3dfd71e0155 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d9b67bce-8a7c-4f49-9cab-3e20377ca630" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:19:06 np0005629333 nova_compute[244014]: 2026-02-25 12:19:06.585 244018 DEBUG nova.network.neutron [req-489947ed-117d-482a-af0c-02b00a7213ec req-68ba3c8f-bebd-4d35-913b-c3dfd71e0155 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Refreshing network info cache for port 24e2d6b3-f0d8-4603-8c1e-da65e164050d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:19:06 np0005629333 nova_compute[244014]: 2026-02-25 12:19:06.589 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Start _get_guest_xml network_info=[{"id": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "address": "fa:16:3e:4b:b0:82", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e2d6b3-f0", "ovs_interfaceid": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:19:06 np0005629333 nova_compute[244014]: 2026-02-25 12:19:06.593 244018 WARNING nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:19:06 np0005629333 nova_compute[244014]: 2026-02-25 12:19:06.598 244018 DEBUG nova.virt.libvirt.host [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:19:06 np0005629333 nova_compute[244014]: 2026-02-25 12:19:06.600 244018 DEBUG nova.virt.libvirt.host [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:19:06 np0005629333 nova_compute[244014]: 2026-02-25 12:19:06.606 244018 DEBUG nova.virt.libvirt.host [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:19:06 np0005629333 nova_compute[244014]: 2026-02-25 12:19:06.606 244018 DEBUG nova.virt.libvirt.host [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:19:06 np0005629333 nova_compute[244014]: 2026-02-25 12:19:06.607 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:19:06 np0005629333 nova_compute[244014]: 2026-02-25 12:19:06.607 244018 DEBUG nova.virt.hardware [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:19:06 np0005629333 nova_compute[244014]: 2026-02-25 12:19:06.607 244018 DEBUG nova.virt.hardware [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:19:06 np0005629333 nova_compute[244014]: 2026-02-25 12:19:06.608 244018 DEBUG nova.virt.hardware [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:19:06 np0005629333 nova_compute[244014]: 2026-02-25 12:19:06.608 244018 DEBUG nova.virt.hardware [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:19:06 np0005629333 nova_compute[244014]: 2026-02-25 12:19:06.608 244018 DEBUG nova.virt.hardware [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:19:06 np0005629333 nova_compute[244014]: 2026-02-25 12:19:06.608 244018 DEBUG nova.virt.hardware [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:19:06 np0005629333 nova_compute[244014]: 2026-02-25 12:19:06.608 244018 DEBUG nova.virt.hardware [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:19:06 np0005629333 nova_compute[244014]: 2026-02-25 12:19:06.609 244018 DEBUG nova.virt.hardware [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:19:06 np0005629333 nova_compute[244014]: 2026-02-25 12:19:06.609 244018 DEBUG nova.virt.hardware [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:19:06 np0005629333 nova_compute[244014]: 2026-02-25 12:19:06.609 244018 DEBUG nova.virt.hardware [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:19:06 np0005629333 nova_compute[244014]: 2026-02-25 12:19:06.609 244018 DEBUG nova.virt.hardware [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
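
The nova.virt.hardware lines above resolve the guest CPU topology: the m1.nano flavor and the cirros image set no limits or preferences (0:0:0 throughout), so nova enumerates the factorizations of the single vCPU against the 65536 default maximums and settles on sockets=1, cores=1, threads=1. A minimal sketch of that search, assuming the simplified rule of enumerating every (sockets, cores, threads) product equal to the vCPU count; the real logic in _get_desirable_cpu_topologies also weighs NUMA constraints and preferred orderings:

    # Simplified, hypothetical sketch of nova's topology enumeration;
    # not the actual nova.virt.hardware code.
    from itertools import product

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield (sockets, cores, threads) triples whose product is vcpus."""
        for s, c, t in product(range(1, vcpus + 1), repeat=3):
            if (s * c * t == vcpus and s <= max_sockets
                    and c <= max_cores and t <= max_threads):
                yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log
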
Feb 25 07:19:06 np0005629333 nova_compute[244014]: 2026-02-25 12:19:06.612 244018 DEBUG oslo_concurrency.processutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]: {
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:    "0": [
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:        {
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "devices": [
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "/dev/loop3"
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            ],
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "lv_name": "ceph_lv0",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "lv_size": "21470642176",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "name": "ceph_lv0",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "tags": {
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.cluster_name": "ceph",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.crush_device_class": "",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.encrypted": "0",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.objectstore": "bluestore",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.osd_id": "0",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.type": "block",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.vdo": "0",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.with_tpm": "0"
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            },
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "type": "block",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "vg_name": "ceph_vg0"
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:        }
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:    ],
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:    "1": [
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:        {
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "devices": [
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "/dev/loop4"
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            ],
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "lv_name": "ceph_lv1",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "lv_size": "21470642176",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "name": "ceph_lv1",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "tags": {
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.cluster_name": "ceph",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.crush_device_class": "",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.encrypted": "0",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.objectstore": "bluestore",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.osd_id": "1",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.type": "block",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.vdo": "0",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.with_tpm": "0"
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            },
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "type": "block",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "vg_name": "ceph_vg1"
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:        }
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:    ],
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:    "2": [
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:        {
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "devices": [
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "/dev/loop5"
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            ],
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "lv_name": "ceph_lv2",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "lv_size": "21470642176",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "name": "ceph_lv2",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "tags": {
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.cluster_name": "ceph",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.crush_device_class": "",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.encrypted": "0",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.objectstore": "bluestore",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.osd_id": "2",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.type": "block",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.vdo": "0",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:                "ceph.with_tpm": "0"
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            },
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "type": "block",
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:            "vg_name": "ceph_vg2"
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:        }
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]:    ]
Feb 25 07:19:06 np0005629333 intelligent_proskuriakova[262564]: }
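
The pretty-printed JSON above, emitted by the short-lived ceph container, maps each OSD id ("0", "1", "2") to its backing logical volume, loop device, and ceph.* LV tags; the shape matches ceph-volume lvm list --format json output, although the log itself does not show the command line. A small sketch that reduces such a document (captured in a string raw) to a per-OSD summary:

    import json

    def summarize_osds(raw):
        """Print one line per OSD from ceph-volume-style JSON (assumed shape)."""
        for osd_id, lvs in sorted(json.loads(raw).items()):
            for lv in lvs:
                tags = lv["tags"]
                print(f"osd.{osd_id}: lv={lv['lv_path']} "
                      f"devices={','.join(lv['devices'])} "
                      f"osd_fsid={tags['ceph.osd_fsid']} "
                      f"objectstore={tags['ceph.objectstore']}")

    # With the JSON above: osd.0 -> /dev/ceph_vg0/ceph_lv0 on /dev/loop3, etc.
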
Feb 25 07:19:06 np0005629333 systemd[1]: libpod-029d0ff8b845c6c3205bc35f8f3c567ce9559199171936fd16237e4eb68214cc.scope: Deactivated successfully.
Feb 25 07:19:06 np0005629333 podman[262548]: 2026-02-25 12:19:06.869473739 +0000 UTC m=+0.590902526 container died 029d0ff8b845c6c3205bc35f8f3c567ce9559199171936fd16237e4eb68214cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_proskuriakova, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 07:19:06 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d726619dda4afd6c4abd9534a387ecbf18821ba5f6efe413e48b05fc113205de-merged.mount: Deactivated successfully.
Feb 25 07:19:06 np0005629333 podman[262548]: 2026-02-25 12:19:06.961006918 +0000 UTC m=+0.682435695 container remove 029d0ff8b845c6c3205bc35f8f3c567ce9559199171936fd16237e4eb68214cc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:19:06 np0005629333 systemd[1]: libpod-conmon-029d0ff8b845c6c3205bc35f8f3c567ce9559199171936fd16237e4eb68214cc.scope: Deactivated successfully.
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.081 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:19:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3182560998' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.153 244018 DEBUG oslo_concurrency.processutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
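
Before building the RBD disk sections of the domain XML, nova shells out (through oslo_concurrency.processutils) to the ceph CLI to learn the monitor addresses, as the two log lines bracketing the 0.541 s call show. A standalone equivalent, reusing the client id and conf path from the command line above; the monmap key names in the usage comment are assumptions based on standard ceph JSON output:

    import json
    import subprocess

    def mon_dump(conf="/etc/ceph/ceph.conf", client_id="openstack"):
        """Run the same 'ceph mon dump' call the log shows and parse it."""
        out = subprocess.run(
            ["ceph", "mon", "dump", "--format=json",
             "--id", client_id, "--conf", conf],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    # monmap = mon_dump()
    # addrs = [m["public_addr"] for m in monmap["mons"]]  # assumed keys
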
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.183 244018 DEBUG nova.storage.rbd_utils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image d9b67bce-8a7c-4f49-9cab-3e20377ca630_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.187 244018 DEBUG oslo_concurrency.processutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:07 np0005629333 podman[262706]: 2026-02-25 12:19:07.416203215 +0000 UTC m=+0.058989497 container create 7faa86780e41c37491255f47a0d091c98b9897ef0a3973a14991010903203a5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:19:07 np0005629333 systemd[1]: Started libpod-conmon-7faa86780e41c37491255f47a0d091c98b9897ef0a3973a14991010903203a5f.scope.
Feb 25 07:19:07 np0005629333 podman[262706]: 2026-02-25 12:19:07.392060347 +0000 UTC m=+0.034846609 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:19:07 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:19:07 np0005629333 podman[262706]: 2026-02-25 12:19:07.513724952 +0000 UTC m=+0.156511214 container init 7faa86780e41c37491255f47a0d091c98b9897ef0a3973a14991010903203a5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_einstein, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 07:19:07 np0005629333 podman[262706]: 2026-02-25 12:19:07.521591013 +0000 UTC m=+0.164377295 container start 7faa86780e41c37491255f47a0d091c98b9897ef0a3973a14991010903203a5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:19:07 np0005629333 charming_einstein[262722]: 167 167
Feb 25 07:19:07 np0005629333 systemd[1]: libpod-7faa86780e41c37491255f47a0d091c98b9897ef0a3973a14991010903203a5f.scope: Deactivated successfully.
Feb 25 07:19:07 np0005629333 podman[262706]: 2026-02-25 12:19:07.527254642 +0000 UTC m=+0.170041074 container attach 7faa86780e41c37491255f47a0d091c98b9897ef0a3973a14991010903203a5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_einstein, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:19:07 np0005629333 podman[262706]: 2026-02-25 12:19:07.528191298 +0000 UTC m=+0.170977580 container died 7faa86780e41c37491255f47a0d091c98b9897ef0a3973a14991010903203a5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_einstein, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:19:07 np0005629333 systemd[1]: var-lib-containers-storage-overlay-78b13219497a183ed859effe85a7038d56c81742c962618778f3ea843fa7e766-merged.mount: Deactivated successfully.
Feb 25 07:19:07 np0005629333 podman[262706]: 2026-02-25 12:19:07.58417791 +0000 UTC m=+0.226964192 container remove 7faa86780e41c37491255f47a0d091c98b9897ef0a3973a14991010903203a5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
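
The charming_einstein lifecycle above (create, init, start, attach, died, remove) completes in roughly 170 ms: a one-shot container whose only output was "167 167", which looks like a uid/gid probe of the ceph user inside the image. The same run-and-discard pattern from Python; the stat command is an assumption about what the container actually executed:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # One-shot, auto-removed container, mirroring the create/start/died/remove
    # sequence in the log; '--rm' is what makes podman delete it on exit.
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "stat", "-c", "%u %g", "/var/lib/ceph"],  # hypothetical probe
        check=True, capture_output=True, text=True,
    ).stdout
    print(out.strip())  # "167 167" expected if this is the uid/gid probe
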
Feb 25 07:19:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 358 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 7.2 MiB/s wr, 300 op/s
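
The ceph-mgr pgmap line is the periodic cluster digest: 305 placement groups all active+clean, 465 MiB used of 60 GiB. The same figures can be pulled on demand with the JSON form of ceph status, reusing the openstack client seen in the nova calls above; the pgmap key names are assumptions based on standard ceph status JSON:

    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "status", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout)

    pg = status["pgmap"]  # assumed layout
    print(pg["num_pgs"], pg["bytes_used"], pg["bytes_total"])
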
Feb 25 07:19:07 np0005629333 systemd[1]: libpod-conmon-7faa86780e41c37491255f47a0d091c98b9897ef0a3973a14991010903203a5f.scope: Deactivated successfully.
Feb 25 07:19:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:19:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/84937775' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.750 244018 DEBUG oslo_concurrency.processutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.752 244018 DEBUG nova.virt.libvirt.vif [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:18:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-654045690',display_name='tempest-ServersAdminTestJSON-server-654045690',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-654045690',id=20,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-ki1fzu1r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:00Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=d9b67bce-8a7c-4f49-9cab-3e20377ca630,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "address": "fa:16:3e:4b:b0:82", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e2d6b3-f0", "ovs_interfaceid": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.753 244018 DEBUG nova.network.os_vif_util [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "address": "fa:16:3e:4b:b0:82", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e2d6b3-f0", "ovs_interfaceid": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.754 244018 DEBUG nova.network.os_vif_util [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:b0:82,bridge_name='br-int',has_traffic_filtering=True,id=24e2d6b3-f0d8-4603-8c1e-da65e164050d,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24e2d6b3-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.756 244018 DEBUG nova.objects.instance [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'pci_devices' on Instance uuid d9b67bce-8a7c-4f49-9cab-3e20377ca630 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:19:07 np0005629333 podman[262746]: 2026-02-25 12:19:07.777071274 +0000 UTC m=+0.061696343 container create 6f101496d59920df4caf5c5e4d1b950f10bb5edb1f34e3a6e4d159ae37007477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_heisenberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.794 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:19:07 np0005629333 nova_compute[244014]:  <uuid>d9b67bce-8a7c-4f49-9cab-3e20377ca630</uuid>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:  <name>instance-00000014</name>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServersAdminTestJSON-server-654045690</nova:name>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:19:06</nova:creationTime>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:19:07 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:        <nova:user uuid="6395ac4bfa5d4910aed9116395bbbdeb">tempest-ServersAdminTestJSON-147238686-project-member</nova:user>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:        <nova:project uuid="b35cd816238a43d8825ab11e83d2b8bf">tempest-ServersAdminTestJSON-147238686</nova:project>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:        <nova:port uuid="24e2d6b3-f0d8-4603-8c1e-da65e164050d">
Feb 25 07:19:07 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <entry name="serial">d9b67bce-8a7c-4f49-9cab-3e20377ca630</entry>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <entry name="uuid">d9b67bce-8a7c-4f49-9cab-3e20377ca630</entry>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/d9b67bce-8a7c-4f49-9cab-3e20377ca630_disk">
Feb 25 07:19:07 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:19:07 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/d9b67bce-8a7c-4f49-9cab-3e20377ca630_disk.config">
Feb 25 07:19:07 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:19:07 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:4b:b0:82"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <target dev="tap24e2d6b3-f0"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/d9b67bce-8a7c-4f49-9cab-3e20377ca630/console.log" append="off"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:19:07 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:19:07 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:19:07 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:19:07 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
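
With the domain XML rendered (ending at the driver.py:7555 reference above), nova hands it to libvirt to define and boot the guest. A minimal sketch with the libvirt Python binding, assuming the XML is held in a string xml and a local qemu:///system connection; nova's own guest-creation path additionally wires up event callbacks and error cleanup:

    import libvirt

    def define_and_start(xml):
        """Persist a domain from XML and boot it, as the driver does here."""
        conn = libvirt.open("qemu:///system")
        try:
            dom = conn.defineXML(xml)  # define the persistent config
            dom.create()               # start the guest
            return dom.UUIDString()
        finally:
            conn.close()
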
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.795 244018 DEBUG nova.compute.manager [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Preparing to wait for external event network-vif-plugged-24e2d6b3-f0d8-4603-8c1e-da65e164050d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.795 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.795 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.796 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
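
The acquire/release pair above is oslo.concurrency's in-process lock guarding nova's per-instance event registry while it registers a waiter for the network-vif-plugged event. The same primitive used directly, in a sketch whose lock name copies the "<uuid>-events" pattern from the log:

    from oslo_concurrency import lockutils

    instance_uuid = "d9b67bce-8a7c-4f49-9cab-3e20377ca630"  # from the log

    # Serialize access to shared per-instance event state, like the
    # "<uuid>-events" lock in nova.compute.manager.
    with lockutils.lock(f"{instance_uuid}-events"):
        pass  # mutate the event registry here
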
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.796 244018 DEBUG nova.virt.libvirt.vif [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:18:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-654045690',display_name='tempest-ServersAdminTestJSON-server-654045690',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-654045690',id=20,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-ki1fzu1r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:00Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=d9b67bce-8a7c-4f49-9cab-3e20377ca630,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "address": "fa:16:3e:4b:b0:82", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e2d6b3-f0", "ovs_interfaceid": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.797 244018 DEBUG nova.network.os_vif_util [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "address": "fa:16:3e:4b:b0:82", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e2d6b3-f0", "ovs_interfaceid": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.797 244018 DEBUG nova.network.os_vif_util [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:b0:82,bridge_name='br-int',has_traffic_filtering=True,id=24e2d6b3-f0d8-4603-8c1e-da65e164050d,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24e2d6b3-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.798 244018 DEBUG os_vif [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:b0:82,bridge_name='br-int',has_traffic_filtering=True,id=24e2d6b3-f0d8-4603-8c1e-da65e164050d,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24e2d6b3-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.798 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.798 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.799 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.802 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.802 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap24e2d6b3-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.803 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap24e2d6b3-f0, col_values=(('external_ids', {'iface-id': '24e2d6b3-f0d8-4603-8c1e-da65e164050d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4b:b0:82', 'vm-uuid': 'd9b67bce-8a7c-4f49-9cab-3e20377ca630'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.805 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:07 np0005629333 NetworkManager[49836]: <info>  [1772021947.8061] manager: (tap24e2d6b3-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.806 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.813 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.814 244018 INFO os_vif [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:b0:82,bridge_name='br-int',has_traffic_filtering=True,id=24e2d6b3-f0d8-4603-8c1e-da65e164050d,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24e2d6b3-f0')#033[00m
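
The ovsdbapp transactions above are os-vif plugging the tap device: AddBridgeCommand is a no-op ("Transaction caused no change") because br-int already exists, then AddPortCommand plus DbSetCommand attach tap24e2d6b3-f0 and stamp its Interface row with the external_ids OVN uses to bind the Neutron port. The equivalent as a single ovs-vsctl invocation driven from Python, with the values copied from the log:

    import subprocess

    port = "tap24e2d6b3-f0"
    iface_id = "24e2d6b3-f0d8-4603-8c1e-da65e164050d"
    mac = "fa:16:3e:4b:b0:82"
    vm_uuid = "d9b67bce-8a7c-4f49-9cab-3e20377ca630"

    # Idempotently add the port and set the external_ids in one transaction,
    # mirroring the AddPortCommand + DbSetCommand pair in the log.
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", port, "--",
         "set", "Interface", port,
         f"external_ids:iface-id={iface_id}",
         "external_ids:iface-status=active",
         f"external_ids:attached-mac={mac}",
         f"external_ids:vm-uuid={vm_uuid}"],
        check=True,
    )
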
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.815 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:19:07 np0005629333 systemd[1]: Started libpod-conmon-6f101496d59920df4caf5c5e4d1b950f10bb5edb1f34e3a6e4d159ae37007477.scope.
Feb 25 07:19:07 np0005629333 podman[262746]: 2026-02-25 12:19:07.749913742 +0000 UTC m=+0.034538821 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:19:07 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.861 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.861 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.862 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No VIF found with MAC fa:16:3e:4b:b0:82, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.862 244018 INFO nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Using config drive#033[00m
Feb 25 07:19:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43f83beca2732a6cb288f5a1a2a8008ef4384d7da61fedf0852e1af660d0c8d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:19:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43f83beca2732a6cb288f5a1a2a8008ef4384d7da61fedf0852e1af660d0c8d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:19:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43f83beca2732a6cb288f5a1a2a8008ef4384d7da61fedf0852e1af660d0c8d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:19:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43f83beca2732a6cb288f5a1a2a8008ef4384d7da61fedf0852e1af660d0c8d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
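The XFS warnings above mean these filesystems (without the bigtime feature) store 32-bit signed inode timestamps, which overflow at 0x7fffffff seconds past the epoch. A quick check of what that limit is in calendar terms:

    from datetime import datetime, timezone

    # 0x7fffffff = 2147483647 seconds after the Unix epoch
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00, the classic y2038 limit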
Feb 25 07:19:07 np0005629333 nova_compute[244014]: 2026-02-25 12:19:07.892 244018 DEBUG nova.storage.rbd_utils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image d9b67bce-8a7c-4f49-9cab-3e20377ca630_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:19:07 np0005629333 podman[262746]: 2026-02-25 12:19:07.897321869 +0000 UTC m=+0.181946978 container init 6f101496d59920df4caf5c5e4d1b950f10bb5edb1f34e3a6e4d159ae37007477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_heisenberg, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:19:07 np0005629333 podman[262746]: 2026-02-25 12:19:07.909632795 +0000 UTC m=+0.194257864 container start 6f101496d59920df4caf5c5e4d1b950f10bb5edb1f34e3a6e4d159ae37007477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:19:07 np0005629333 podman[262746]: 2026-02-25 12:19:07.914023068 +0000 UTC m=+0.198648137 container attach 6f101496d59920df4caf5c5e4d1b950f10bb5edb1f34e3a6e4d159ae37007477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_heisenberg, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
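The pull/init/start/attach sequence above (continued below by "died" and "remove") is podman's normal lifecycle for a short-lived `podman run` container, here a ceph image launched by cephadm. The same event stream can be watched live; a sketch, with the auto-generated container name from the log:

    import subprocess

    # Stream podman lifecycle events for the helper container logged above
    # ("focused_heisenberg" is the name podman generated). Runs until
    # interrupted.
    subprocess.run(['podman', 'events',
                    '--filter', 'container=focused_heisenberg'])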
Feb 25 07:19:08 np0005629333 nova_compute[244014]: 2026-02-25 12:19:08.419 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:08 np0005629333 lvm[262864]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:19:08 np0005629333 lvm[262863]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:19:08 np0005629333 lvm[262863]: VG ceph_vg0 finished
Feb 25 07:19:08 np0005629333 lvm[262864]: VG ceph_vg1 finished
Feb 25 07:19:08 np0005629333 lvm[262866]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:19:08 np0005629333 lvm[262866]: VG ceph_vg2 finished
Feb 25 07:19:08 np0005629333 nova_compute[244014]: 2026-02-25 12:19:08.571 244018 INFO nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Creating config drive at /var/lib/nova/instances/d9b67bce-8a7c-4f49-9cab-3e20377ca630/disk.config#033[00m
Feb 25 07:19:08 np0005629333 nova_compute[244014]: 2026-02-25 12:19:08.578 244018 DEBUG oslo_concurrency.processutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d9b67bce-8a7c-4f49-9cab-3e20377ca630/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmply5zrg8v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:08 np0005629333 focused_heisenberg[262767]: {}
Feb 25 07:19:08 np0005629333 nova_compute[244014]: 2026-02-25 12:19:08.626 244018 DEBUG nova.network.neutron [req-489947ed-117d-482a-af0c-02b00a7213ec req-68ba3c8f-bebd-4d35-913b-c3dfd71e0155 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Updated VIF entry in instance network info cache for port 24e2d6b3-f0d8-4603-8c1e-da65e164050d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:19:08 np0005629333 nova_compute[244014]: 2026-02-25 12:19:08.628 244018 DEBUG nova.network.neutron [req-489947ed-117d-482a-af0c-02b00a7213ec req-68ba3c8f-bebd-4d35-913b-c3dfd71e0155 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Updating instance_info_cache with network_info: [{"id": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "address": "fa:16:3e:4b:b0:82", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e2d6b3-f0", "ovs_interfaceid": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
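The network info cache entry above is plain JSON, and when debugging it is usually the addressing that matters. A small sketch that pulls out port ID, MAC and fixed IPs, assuming the logged list has been saved to a file named network_info.json (the file name is illustrative):

    import json

    with open('network_info.json') as f:
        vifs = json.load(f)

    for vif in vifs:
        # Walk network -> subnets -> ips, the same nesting as the log entry.
        ips = [ip['address']
               for subnet in vif['network']['subnets']
               for ip in subnet['ips']]
        print(vif['id'], vif['address'], ips, 'active=%s' % vif['active'])

For the entry above this prints port 24e2d6b3-... with MAC fa:16:3e:4b:b0:82, fixed IP 10.100.0.9, and active=False (the port is not yet bound when the cache is written).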
Feb 25 07:19:08 np0005629333 nova_compute[244014]: 2026-02-25 12:19:08.644 244018 DEBUG oslo_concurrency.lockutils [req-489947ed-117d-482a-af0c-02b00a7213ec req-68ba3c8f-bebd-4d35-913b-c3dfd71e0155 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d9b67bce-8a7c-4f49-9cab-3e20377ca630" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:19:08 np0005629333 systemd[1]: libpod-6f101496d59920df4caf5c5e4d1b950f10bb5edb1f34e3a6e4d159ae37007477.scope: Deactivated successfully.
Feb 25 07:19:08 np0005629333 systemd[1]: libpod-6f101496d59920df4caf5c5e4d1b950f10bb5edb1f34e3a6e4d159ae37007477.scope: Consumed 1.101s CPU time.
Feb 25 07:19:08 np0005629333 podman[262746]: 2026-02-25 12:19:08.649477881 +0000 UTC m=+0.934102950 container died 6f101496d59920df4caf5c5e4d1b950f10bb5edb1f34e3a6e4d159ae37007477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_heisenberg, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:19:08 np0005629333 systemd[1]: var-lib-containers-storage-overlay-43f83beca2732a6cb288f5a1a2a8008ef4384d7da61fedf0852e1af660d0c8d7-merged.mount: Deactivated successfully.
Feb 25 07:19:08 np0005629333 nova_compute[244014]: 2026-02-25 12:19:08.704 244018 DEBUG oslo_concurrency.processutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d9b67bce-8a7c-4f49-9cab-3e20377ca630/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmply5zrg8v" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
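Nova builds the config drive by shelling out to mkisofs over a temporary directory holding the metadata tree, as the paired "Running cmd"/"returned: 0" lines show. A reduced sketch of the same invocation through oslo.concurrency; the output path and source directory here are placeholders, and the flags are the ones logged above:

    from oslo_concurrency import processutils

    # Pack a config-drive ISO the way nova does; /tmp/cd-contents stands in
    # for the temporary metadata tree (a tmpdir like /tmp/tmply5zrg8v above).
    out, err = processutils.execute(
        '/usr/bin/mkisofs', '-o', '/tmp/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute', '-quiet', '-J', '-r',
        '-V', 'config-2', '/tmp/cd-contents')

The volume label config-2 is what cloud-init looks for when it probes for a config drive inside the guest.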
Feb 25 07:19:08 np0005629333 podman[262746]: 2026-02-25 12:19:08.709345471 +0000 UTC m=+0.993970510 container remove 6f101496d59920df4caf5c5e4d1b950f10bb5edb1f34e3a6e4d159ae37007477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 07:19:08 np0005629333 systemd[1]: libpod-conmon-6f101496d59920df4caf5c5e4d1b950f10bb5edb1f34e3a6e4d159ae37007477.scope: Deactivated successfully.
Feb 25 07:19:08 np0005629333 nova_compute[244014]: 2026-02-25 12:19:08.731 244018 DEBUG nova.storage.rbd_utils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image d9b67bce-8a7c-4f49-9cab-3e20377ca630_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:19:08 np0005629333 nova_compute[244014]: 2026-02-25 12:19:08.735 244018 DEBUG oslo_concurrency.processutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d9b67bce-8a7c-4f49-9cab-3e20377ca630/disk.config d9b67bce-8a7c-4f49-9cab-3e20377ca630_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:19:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:19:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:19:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:19:08 np0005629333 nova_compute[244014]: 2026-02-25 12:19:08.892 244018 DEBUG oslo_concurrency.processutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d9b67bce-8a7c-4f49-9cab-3e20377ca630/disk.config d9b67bce-8a7c-4f49-9cab-3e20377ca630_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:19:08 np0005629333 nova_compute[244014]: 2026-02-25 12:19:08.893 244018 INFO nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Deleting local config drive /var/lib/nova/instances/d9b67bce-8a7c-4f49-9cab-3e20377ca630/disk.config because it was imported into RBD.#033[00m
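With the RBD image backend, the freshly built ISO is imported into the vms pool and the local copy deleted, per the two lines above. A sketch of the same import step via the rbd CLI; pool, image name and path are taken from the log, while the client id and conf path are deployment-specific:

    import subprocess

    subprocess.run(
        ['rbd', 'import', '--pool', 'vms',
         '/var/lib/nova/instances/d9b67bce-8a7c-4f49-9cab-3e20377ca630/disk.config',
         'd9b67bce-8a7c-4f49-9cab-3e20377ca630_disk.config',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)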
Feb 25 07:19:08 np0005629333 kernel: tap24e2d6b3-f0: entered promiscuous mode
Feb 25 07:19:08 np0005629333 NetworkManager[49836]: <info>  [1772021948.9491] manager: (tap24e2d6b3-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/51)
Feb 25 07:19:08 np0005629333 systemd-udevd[262862]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:19:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:08Z|00071|binding|INFO|Claiming lport 24e2d6b3-f0d8-4603-8c1e-da65e164050d for this chassis.
Feb 25 07:19:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:08Z|00072|binding|INFO|24e2d6b3-f0d8-4603-8c1e-da65e164050d: Claiming fa:16:3e:4b:b0:82 10.100.0.9
Feb 25 07:19:08 np0005629333 nova_compute[244014]: 2026-02-25 12:19:08.949 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:08.958 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:b0:82 10.100.0.9'], port_security=['fa:16:3e:4b:b0:82 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd9b67bce-8a7c-4f49-9cab-3e20377ca630', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fd7733ad-d262-4781-bcfa-77cfa8b67164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556b4b98-e95d-460c-a904-adc77baf4b88, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=24e2d6b3-f0d8-4603-8c1e-da65e164050d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:19:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:08.961 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 24e2d6b3-f0d8-4603-8c1e-da65e164050d in datapath 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 bound to our chassis#033[00m
Feb 25 07:19:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:08.963 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6#033[00m
Feb 25 07:19:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:08Z|00073|binding|INFO|Setting lport 24e2d6b3-f0d8-4603-8c1e-da65e164050d ovn-installed in OVS
Feb 25 07:19:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:08Z|00074|binding|INFO|Setting lport 24e2d6b3-f0d8-4603-8c1e-da65e164050d up in Southbound
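Here ovn-controller claims the logical port for this chassis, marks it ovn-installed in the local OVS database, and flips it up in the Southbound DB. To verify the binding from the chassis side one can read the Port_Binding row back; a sketch, assuming ovn-sbctl is installed and can reach the SB database:

    import subprocess

    # Show the Southbound Port_Binding row for the lport claimed above;
    # the chassis and up columns should now be populated.
    subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=24e2d6b3-f0d8-4603-8c1e-da65e164050d'],
        check=True)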
Feb 25 07:19:08 np0005629333 NetworkManager[49836]: <info>  [1772021948.9680] device (tap24e2d6b3-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:19:08 np0005629333 NetworkManager[49836]: <info>  [1772021948.9693] device (tap24e2d6b3-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:19:08 np0005629333 nova_compute[244014]: 2026-02-25 12:19:08.972 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:08.984 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b3313897-0338-48de-b7e4-89b45c95a0a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:08 np0005629333 systemd-machined[210048]: New machine qemu-22-instance-00000014.
Feb 25 07:19:09 np0005629333 systemd[1]: Started Virtual Machine qemu-22-instance-00000014.
Feb 25 07:19:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:09.011 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[be8f7056-403d-48a8-8ca6-dc11dfe30e88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:09.016 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f21be837-7fea-43ab-b2cb-dc3048171929]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:09.045 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5097fc1c-f7c3-4236-9fda-75bc561eb75e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:09.066 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c966b610-36ad-4dc1-a242-958188bacab5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f4cbf9a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:f8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389396, 'reachable_time': 28421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262968, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:09.084 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[02ee3282-32e6-46b3-ac96-c7bde7185abe]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389407, 'tstamp': 389407}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262969, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389409, 'tstamp': 389409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262969, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
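These privsep replies are netlink dumps taken inside the metadata namespace ovnmeta-1f4cbf9a-...: first an RTM_NEWLINK for the veth leg tap1f4cbf9a-41, then RTM_NEWADDR records for 10.100.0.2/28 and the 169.254.169.254/32 metadata address. A sketch of the same inspection with pyroute2 (namespace and interface names from the log; needs root, since it enters the netns):

    from pyroute2 import NetNS

    ns_name = 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6'
    with NetNS(ns_name) as ns:
        # Find the interface index, then dump its addresses, mirroring
        # the RTM_NEWLINK / RTM_NEWADDR replies above.
        idx = ns.link_lookup(ifname='tap1f4cbf9a-41')[0]
        for addr in ns.get_addr(index=idx):
            attrs = dict(addr['attrs'])
            print('%s/%d' % (attrs.get('IFA_ADDRESS'), addr['prefixlen']))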
Feb 25 07:19:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:09.086 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f4cbf9a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:09 np0005629333 nova_compute[244014]: 2026-02-25 12:19:09.088 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:09 np0005629333 nova_compute[244014]: 2026-02-25 12:19:09.089 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:09.090 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f4cbf9a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:09.090 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:19:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:09.091 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f4cbf9a-40, col_values=(('external_ids', {'iface-id': '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:09.091 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:19:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:19:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:19:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:19:09 np0005629333 nova_compute[244014]: 2026-02-25 12:19:09.484 244018 DEBUG nova.compute.manager [req-ca9e9782-1422-4100-8ea7-ff5b57fbfc0e req-9551c66d-164b-46c1-91c6-a3413a496a2f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Received event network-vif-plugged-24e2d6b3-f0d8-4603-8c1e-da65e164050d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:19:09 np0005629333 nova_compute[244014]: 2026-02-25 12:19:09.485 244018 DEBUG oslo_concurrency.lockutils [req-ca9e9782-1422-4100-8ea7-ff5b57fbfc0e req-9551c66d-164b-46c1-91c6-a3413a496a2f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:09 np0005629333 nova_compute[244014]: 2026-02-25 12:19:09.485 244018 DEBUG oslo_concurrency.lockutils [req-ca9e9782-1422-4100-8ea7-ff5b57fbfc0e req-9551c66d-164b-46c1-91c6-a3413a496a2f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:09 np0005629333 nova_compute[244014]: 2026-02-25 12:19:09.486 244018 DEBUG oslo_concurrency.lockutils [req-ca9e9782-1422-4100-8ea7-ff5b57fbfc0e req-9551c66d-164b-46c1-91c6-a3413a496a2f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:09 np0005629333 nova_compute[244014]: 2026-02-25 12:19:09.486 244018 DEBUG nova.compute.manager [req-ca9e9782-1422-4100-8ea7-ff5b57fbfc0e req-9551c66d-164b-46c1-91c6-a3413a496a2f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Processing event network-vif-plugged-24e2d6b3-f0d8-4603-8c1e-da65e164050d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:19:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 358 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 7.0 MiB/s wr, 289 op/s
Feb 25 07:19:09 np0005629333 nova_compute[244014]: 2026-02-25 12:19:09.845 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021949.8452568, d9b67bce-8a7c-4f49-9cab-3e20377ca630 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:19:09 np0005629333 nova_compute[244014]: 2026-02-25 12:19:09.846 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] VM Started (Lifecycle Event)#033[00m
Feb 25 07:19:09 np0005629333 nova_compute[244014]: 2026-02-25 12:19:09.849 244018 DEBUG nova.compute.manager [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:19:09 np0005629333 nova_compute[244014]: 2026-02-25 12:19:09.852 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:19:09 np0005629333 nova_compute[244014]: 2026-02-25 12:19:09.856 244018 INFO nova.virt.libvirt.driver [-] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Instance spawned successfully.#033[00m
Feb 25 07:19:09 np0005629333 nova_compute[244014]: 2026-02-25 12:19:09.857 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:19:09 np0005629333 nova_compute[244014]: 2026-02-25 12:19:09.914 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:19:09 np0005629333 nova_compute[244014]: 2026-02-25 12:19:09.917 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
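In these sync messages the power states are small integers: the database still says 0 (no state recorded) while libvirt already reports 1 (running). A sketch of the mapping, following the constants in nova.compute.power_state (the values shown are the ones I believe nova uses; treat them as an assumption, not a citation):

    # Human-readable names for the integers in the sync messages above.
    POWER_STATE = {
        0: 'NOSTATE',    # DB power_state before the spawn completes
        1: 'RUNNING',    # what the hypervisor reports once the guest starts
        3: 'PAUSED',
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }
    print(POWER_STATE[0], '->', POWER_STATE[1])  # NOSTATE -> RUNNING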
Feb 25 07:19:09 np0005629333 nova_compute[244014]: 2026-02-25 12:19:09.977 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:09 np0005629333 nova_compute[244014]: 2026-02-25 12:19:09.978 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:09 np0005629333 nova_compute[244014]: 2026-02-25 12:19:09.979 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:09 np0005629333 nova_compute[244014]: 2026-02-25 12:19:09.980 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:09 np0005629333 nova_compute[244014]: 2026-02-25 12:19:09.980 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:09 np0005629333 nova_compute[244014]: 2026-02-25 12:19:09.981 244018 DEBUG nova.virt.libvirt.driver [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:10 np0005629333 nova_compute[244014]: 2026-02-25 12:19:10.024 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:19:10 np0005629333 nova_compute[244014]: 2026-02-25 12:19:10.025 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021949.8453603, d9b67bce-8a7c-4f49-9cab-3e20377ca630 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:19:10 np0005629333 nova_compute[244014]: 2026-02-25 12:19:10.025 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:19:10 np0005629333 nova_compute[244014]: 2026-02-25 12:19:10.052 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:19:10 np0005629333 nova_compute[244014]: 2026-02-25 12:19:10.057 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021949.8511143, d9b67bce-8a7c-4f49-9cab-3e20377ca630 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:19:10 np0005629333 nova_compute[244014]: 2026-02-25 12:19:10.057 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:19:10 np0005629333 nova_compute[244014]: 2026-02-25 12:19:10.089 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:19:10 np0005629333 nova_compute[244014]: 2026-02-25 12:19:10.093 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:19:10 np0005629333 nova_compute[244014]: 2026-02-25 12:19:10.104 244018 INFO nova.compute.manager [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Took 9.99 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:19:10 np0005629333 nova_compute[244014]: 2026-02-25 12:19:10.104 244018 DEBUG nova.compute.manager [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:19:10 np0005629333 nova_compute[244014]: 2026-02-25 12:19:10.114 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:19:10 np0005629333 nova_compute[244014]: 2026-02-25 12:19:10.176 244018 INFO nova.compute.manager [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Took 11.13 seconds to build instance.#033[00m
Feb 25 07:19:10 np0005629333 nova_compute[244014]: 2026-02-25 12:19:10.252 244018 DEBUG oslo_concurrency.lockutils [None req-950fe1ec-8e39-4b6b-881f-e046fd8f011e 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.284s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:11 np0005629333 nova_compute[244014]: 2026-02-25 12:19:11.561 244018 DEBUG nova.compute.manager [req-5f1046a4-bb5b-45c7-a89d-a2df8e3b8b3b req-08e2b917-f515-4605-9970-1769a831f8ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Received event network-vif-plugged-24e2d6b3-f0d8-4603-8c1e-da65e164050d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:19:11 np0005629333 nova_compute[244014]: 2026-02-25 12:19:11.561 244018 DEBUG oslo_concurrency.lockutils [req-5f1046a4-bb5b-45c7-a89d-a2df8e3b8b3b req-08e2b917-f515-4605-9970-1769a831f8ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:11 np0005629333 nova_compute[244014]: 2026-02-25 12:19:11.561 244018 DEBUG oslo_concurrency.lockutils [req-5f1046a4-bb5b-45c7-a89d-a2df8e3b8b3b req-08e2b917-f515-4605-9970-1769a831f8ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:11 np0005629333 nova_compute[244014]: 2026-02-25 12:19:11.561 244018 DEBUG oslo_concurrency.lockutils [req-5f1046a4-bb5b-45c7-a89d-a2df8e3b8b3b req-08e2b917-f515-4605-9970-1769a831f8ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:11 np0005629333 nova_compute[244014]: 2026-02-25 12:19:11.562 244018 DEBUG nova.compute.manager [req-5f1046a4-bb5b-45c7-a89d-a2df8e3b8b3b req-08e2b917-f515-4605-9970-1769a831f8ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] No waiting events found dispatching network-vif-plugged-24e2d6b3-f0d8-4603-8c1e-da65e164050d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:19:11 np0005629333 nova_compute[244014]: 2026-02-25 12:19:11.562 244018 WARNING nova.compute.manager [req-5f1046a4-bb5b-45c7-a89d-a2df8e3b8b3b req-08e2b917-f515-4605-9970-1769a831f8ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Received unexpected event network-vif-plugged-24e2d6b3-f0d8-4603-8c1e-da65e164050d for instance with vm_state active and task_state None.#033[00m
Feb 25 07:19:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 358 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 6.0 MiB/s wr, 285 op/s
Feb 25 07:19:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 07:19:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.0 total, 600.0 interval
Cumulative writes: 4856 writes, 21K keys, 4856 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
Cumulative WAL: 4856 writes, 4856 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1436 writes, 6465 keys, 1436 commit groups, 1.0 writes per commit group, ingest: 9.06 MB, 0.02 MB/s
Interval WAL: 1436 writes, 1436 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     77.5      0.31              0.07        12    0.026       0      0       0.0       0.0
  L6      1/0    7.01 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    111.5     91.4      0.84              0.22        11    0.077     48K   5757       0.0       0.0
 Sum      1/0    7.01 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2     81.6     87.6      1.15              0.28        23    0.050     48K   5757       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.1     75.2     76.1      0.59              0.12        10    0.059     23K   2565       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    111.5     91.4      0.84              0.22        11    0.077     48K   5757       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     78.5      0.31              0.07        11    0.028       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1800.0 total, 600.0 interval
Flush(GB): cumulative 0.023, interval 0.008
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.09 GB read, 0.05 MB/s read, 1.2 seconds
Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.6 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561a1af858d0#2 capacity: 304.00 MB usage: 9.31 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000133 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(561,8.91 MB,2.9319%) FilterBlock(24,141.55 KB,0.0454702%) IndexBlock(24,260.44 KB,0.0836623%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
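The W-Amp column in the dump above can be sanity-checked from the same record: write amplification is roughly total compaction write volume divided by the volume flushed from memtables. Recomputing with the cumulative figures transcribed from the dump (the match to the reported Sum value of 4.2 is approximate, since the printed numbers are rounded):

    # W-Amp ~= bytes written by compaction / bytes flushed to L0.
    flush_gb = 0.023          # "Flush(GB): cumulative 0.023"
    compact_write_gb = 0.10   # "Cumulative compaction: 0.10 GB write"
    print(round(compact_write_gb / flush_gb, 1))  # ~4.3, vs. 4.2 reported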
Feb 25 07:19:12 np0005629333 nova_compute[244014]: 2026-02-25 12:19:12.804 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:13 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:13Z|00075|binding|INFO|Releasing lport 2cfd1e6b-d28d-43c0-bbbd-c6ad77855812 from this chassis (sb_readonly=0)
Feb 25 07:19:13 np0005629333 nova_compute[244014]: 2026-02-25 12:19:13.126 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:13 np0005629333 nova_compute[244014]: 2026-02-25 12:19:13.421 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 358 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.2 MiB/s wr, 310 op/s
Feb 25 07:19:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:19:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 358 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 186 op/s
Feb 25 07:19:16 np0005629333 nova_compute[244014]: 2026-02-25 12:19:16.065 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:16 np0005629333 nova_compute[244014]: 2026-02-25 12:19:16.066 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:16 np0005629333 nova_compute[244014]: 2026-02-25 12:19:16.165 244018 DEBUG nova.compute.manager [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:19:16 np0005629333 nova_compute[244014]: 2026-02-25 12:19:16.271 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:16 np0005629333 nova_compute[244014]: 2026-02-25 12:19:16.272 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:16 np0005629333 nova_compute[244014]: 2026-02-25 12:19:16.281 244018 DEBUG nova.virt.hardware [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:19:16 np0005629333 nova_compute[244014]: 2026-02-25 12:19:16.281 244018 INFO nova.compute.claims [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:19:16 np0005629333 nova_compute[244014]: 2026-02-25 12:19:16.485 244018 DEBUG oslo_concurrency.processutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:19:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1742440866' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.035 244018 DEBUG oslo_concurrency.processutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
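Nova's RBD image backend sizes its disk inventory from `ceph df --format=json`, the 0.550 s call bracketed by the two lines above. A sketch of reading the same figures directly; the key layout follows the standard ceph df JSON output, and the client id and conf path are the ones from the log:

    import json
    import subprocess

    raw = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(raw)['stats']
    print('total bytes:', stats['total_bytes'],
          'avail bytes:', stats['total_avail_bytes'])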
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.040 244018 DEBUG nova.compute.provider_tree [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.042 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021942.0402136, 606b209d-b916-4ac7-87c7-887c274f747f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.042 244018 INFO nova.compute.manager [-] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.065 244018 DEBUG nova.scheduler.client.report [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
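Placement turns each inventory entry above into schedulable capacity as (total - reserved) * allocation_ratio. A worked check against the values in the log:

    # capacity = (total - reserved) * allocation_ratio, using the
    # inventory data logged above for provider cb4dae98-....
    inv = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2

So this host can overcommit CPU 4x (32 schedulable VCPUs on 8 cores) but undercommits disk (52.2 GB of the 59 GB pool), consistent with the Ceph-backed DISK_GB ratio of 0.9.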
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.069 244018 DEBUG nova.compute.manager [None req-8f5fb875-9083-416d-b952-ba207c00be5a - - - - - -] [instance: 606b209d-b916-4ac7-87c7-887c274f747f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.093 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
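The Acquiring/acquired/released triples around "compute_resources" are emitted by oslo.concurrency's lock helpers. A minimal sketch of that pattern; do_claim is an illustrative stand-in for ResourceTracker.instance_claim, not nova's actual method:

    from oslo_concurrency import lockutils

    # Serializes resource-tracker bookkeeping; each call logs the same
    # acquired/released lines seen above, including the hold time.
    @lockutils.synchronized('compute_resources')
    def do_claim():
        pass  # claim accounting would run under the lock here

    do_claim()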
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.094 244018 DEBUG nova.compute.manager [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.165 244018 DEBUG nova.compute.manager [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.166 244018 DEBUG nova.network.neutron [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.195 244018 INFO nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.213 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.213 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.230 244018 DEBUG nova.compute.manager [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.236 244018 DEBUG nova.compute.manager [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.337 244018 DEBUG nova.policy [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6395ac4bfa5d4910aed9116395bbbdeb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.341 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.341 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.350 244018 DEBUG nova.virt.hardware [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.351 244018 INFO nova.compute.claims [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.356 244018 DEBUG nova.compute.manager [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.358 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.359 244018 INFO nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Creating image(s)
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.397 244018 DEBUG nova.storage.rbd_utils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.429 244018 DEBUG nova.storage.rbd_utils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.463 244018 DEBUG nova.storage.rbd_utils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.468 244018 DEBUG oslo_concurrency.processutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.545 244018 DEBUG oslo_concurrency.processutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
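The qemu-img probe above runs under oslo_concurrency.prlimit so a malformed or hostile image cannot exhaust the compute host (1 GiB of address space, 30 s of CPU). A sketch of the same guard via processutils' prlimit= keyword; the base-image path is the one from the log:

    from oslo_concurrency import processutils

    # Same probe as the logged command: qemu-img info, forced-share, JSON
    # output, capped at 1 GiB address space and 30 s CPU time.
    out, _ = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30))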
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.546 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.547 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.547 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.568 244018 DEBUG nova.storage.rbd_utils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.571 244018 DEBUG oslo_concurrency.processutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:19:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 358 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.4 MiB/s wr, 186 op/s
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.658 244018 DEBUG oslo_concurrency.processutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.807 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.888 244018 DEBUG oslo_concurrency.processutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:19:17 np0005629333 nova_compute[244014]: 2026-02-25 12:19:17.965 244018 DEBUG nova.storage.rbd_utils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] resizing rbd image 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
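Lines 12:19:17.571 through 12:19:17.965 are the two-step RBD provisioning path: import the cached base image into the vms pool as the instance disk, then grow it to the flavor's 1 GiB root. A CLI-level sketch of those two steps (nova performs the resize through librbd rather than the rbd binary; names and paths are the ones the log shows):

    from oslo_concurrency import processutils

    disk = '8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_disk'
    base = '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'
    # Step 1: copy the flat base image into Ceph as a format-2 RBD image.
    processutils.execute('rbd', 'import', '--pool', 'vms', base, disk,
                         '--image-format=2', '--id', 'openstack',
                         '--conf', '/etc/ceph/ceph.conf')
    # Step 2: grow it to the root-disk size (1024 MB = 1073741824 bytes).
    processutils.execute('rbd', 'resize', '--pool', 'vms', '--size', '1024',
                         disk, '--id', 'openstack',
                         '--conf', '/etc/ceph/ceph.conf')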
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.069 244018 DEBUG nova.objects.instance [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'migration_context' on Instance uuid 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.098 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.099 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Ensure instance console log exists: /var/lib/nova/instances/8993d2e7-8b5d-42eb-9e24-f96dcc4da39b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.099 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.100 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.101 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:19:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:19:18 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3532858643' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.224 244018 DEBUG oslo_concurrency.processutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.229 244018 DEBUG nova.compute.provider_tree [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.251 244018 DEBUG nova.scheduler.client.report [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.279 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.280 244018 DEBUG nova.compute.manager [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.362 244018 DEBUG nova.compute.manager [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.363 244018 DEBUG nova.network.neutron [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.423 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.462 244018 INFO nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.510 244018 DEBUG nova.compute.manager [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.692 244018 DEBUG nova.compute.manager [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.694 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.694 244018 INFO nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Creating image(s)
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.748 244018 DEBUG nova.storage.rbd_utils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] rbd image de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:19:18 np0005629333 podman[263224]: 2026-02-25 12:19:18.753982435 +0000 UTC m=+0.093681521 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.785 244018 DEBUG nova.storage.rbd_utils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] rbd image de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:19:18 np0005629333 podman[263225]: 2026-02-25 12:19:18.789863972 +0000 UTC m=+0.129605279 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.license=GPLv2)
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.810 244018 DEBUG nova.storage.rbd_utils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] rbd image de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.814 244018 DEBUG oslo_concurrency.processutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.834 244018 DEBUG nova.network.neutron [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Successfully created port: 55015950-cf1b-4183-802f-22f661123534 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.879 244018 DEBUG oslo_concurrency.processutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.880 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.880 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.881 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.909 244018 DEBUG nova.storage.rbd_utils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] rbd image de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.913 244018 DEBUG oslo_concurrency.processutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:19:18 np0005629333 nova_compute[244014]: 2026-02-25 12:19:18.935 244018 DEBUG nova.policy [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9aa84b2700234a5e9dcba1fc0bbc4cea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67d0ed57ac554e4390e928b3c8f9b5f6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:19:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:19:19 np0005629333 nova_compute[244014]: 2026-02-25 12:19:19.189 244018 DEBUG oslo_concurrency.processutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:19:19 np0005629333 nova_compute[244014]: 2026-02-25 12:19:19.254 244018 DEBUG nova.storage.rbd_utils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] resizing rbd image de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 07:19:19 np0005629333 nova_compute[244014]: 2026-02-25 12:19:19.343 244018 DEBUG nova.objects.instance [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lazy-loading 'migration_context' on Instance uuid de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:19:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 358 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 41 KiB/s wr, 74 op/s
Feb 25 07:19:19 np0005629333 nova_compute[244014]: 2026-02-25 12:19:19.599 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:19:19 np0005629333 nova_compute[244014]: 2026-02-25 12:19:19.600 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Ensure instance console log exists: /var/lib/nova/instances/de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:19:19 np0005629333 nova_compute[244014]: 2026-02-25 12:19:19.600 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:19:19 np0005629333 nova_compute[244014]: 2026-02-25 12:19:19.601 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:19:19 np0005629333 nova_compute[244014]: 2026-02-25 12:19:19.601 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:19:19 np0005629333 nova_compute[244014]: 2026-02-25 12:19:19.786 244018 DEBUG nova.network.neutron [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Successfully updated port: 55015950-cf1b-4183-802f-22f661123534 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:19:19 np0005629333 nova_compute[244014]: 2026-02-25 12:19:19.821 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "refresh_cache-8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:19:19 np0005629333 nova_compute[244014]: 2026-02-25 12:19:19.821 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquired lock "refresh_cache-8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:19:19 np0005629333 nova_compute[244014]: 2026-02-25 12:19:19.821 244018 DEBUG nova.network.neutron [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:19:19 np0005629333 nova_compute[244014]: 2026-02-25 12:19:19.954 244018 DEBUG nova.network.neutron [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:19:20 np0005629333 nova_compute[244014]: 2026-02-25 12:19:20.025 244018 DEBUG nova.compute.manager [req-623e3f00-f476-42dd-9e68-689a9dcb893b req-92c27aec-e283-47f4-8cdf-4eab17e936b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Received event network-changed-55015950-cf1b-4183-802f-22f661123534 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:19:20 np0005629333 nova_compute[244014]: 2026-02-25 12:19:20.026 244018 DEBUG nova.compute.manager [req-623e3f00-f476-42dd-9e68-689a9dcb893b req-92c27aec-e283-47f4-8cdf-4eab17e936b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Refreshing instance network info cache due to event network-changed-55015950-cf1b-4183-802f-22f661123534. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:19:20 np0005629333 nova_compute[244014]: 2026-02-25 12:19:20.026 244018 DEBUG oslo_concurrency.lockutils [req-623e3f00-f476-42dd-9e68-689a9dcb893b req-92c27aec-e283-47f4-8cdf-4eab17e936b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:19:20 np0005629333 nova_compute[244014]: 2026-02-25 12:19:20.172 244018 DEBUG nova.network.neutron [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Successfully created port: b2336583-1aaa-4789-8d4f-a3a14997891d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:19:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:21Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4b:b0:82 10.100.0.9
Feb 25 07:19:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:21Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4b:b0:82 10.100.0.9
Feb 25 07:19:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 404 MiB data, 485 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.4 MiB/s wr, 108 op/s
Feb 25 07:19:21 np0005629333 nova_compute[244014]: 2026-02-25 12:19:21.780 244018 DEBUG nova.network.neutron [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Updating instance_info_cache with network_info: [{"id": "55015950-cf1b-4183-802f-22f661123534", "address": "fa:16:3e:c6:da:42", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55015950-cf", "ovs_interfaceid": "55015950-cf1b-4183-802f-22f661123534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
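The cache update above carries the instance's full network_info blob. Pulling the useful fields out of such a structure is mechanical; the dict below is the logged one, trimmed to the keys the loop touches:

    # Trimmed copy of the logged network_info entry.
    network_info = [{
        "id": "55015950-cf1b-4183-802f-22f661123534",
        "address": "fa:16:3e:c6:da:42",
        "devname": "tap55015950-cf",
        "network": {"subnets": [{"cidr": "10.100.0.0/28",
                                 "ips": [{"address": "10.100.0.12"}]}]},
    }]
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["devname"], vif["address"], ip["address"])
    # tap55015950-cf fa:16:3e:c6:da:42 10.100.0.12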
Feb 25 07:19:21 np0005629333 nova_compute[244014]: 2026-02-25 12:19:21.803 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Releasing lock "refresh_cache-8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:19:21 np0005629333 nova_compute[244014]: 2026-02-25 12:19:21.804 244018 DEBUG nova.compute.manager [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Instance network_info: |[{"id": "55015950-cf1b-4183-802f-22f661123534", "address": "fa:16:3e:c6:da:42", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55015950-cf", "ovs_interfaceid": "55015950-cf1b-4183-802f-22f661123534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 07:19:21 np0005629333 nova_compute[244014]: 2026-02-25 12:19:21.805 244018 DEBUG oslo_concurrency.lockutils [req-623e3f00-f476-42dd-9e68-689a9dcb893b req-92c27aec-e283-47f4-8cdf-4eab17e936b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:19:21 np0005629333 nova_compute[244014]: 2026-02-25 12:19:21.806 244018 DEBUG nova.network.neutron [req-623e3f00-f476-42dd-9e68-689a9dcb893b req-92c27aec-e283-47f4-8cdf-4eab17e936b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Refreshing network info cache for port 55015950-cf1b-4183-802f-22f661123534 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:19:21 np0005629333 nova_compute[244014]: 2026-02-25 12:19:21.811 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Start _get_guest_xml network_info=[{"id": "55015950-cf1b-4183-802f-22f661123534", "address": "fa:16:3e:c6:da:42", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55015950-cf", "ovs_interfaceid": "55015950-cf1b-4183-802f-22f661123534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 07:19:21 np0005629333 nova_compute[244014]: 2026-02-25 12:19:21.817 244018 WARNING nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:19:21 np0005629333 nova_compute[244014]: 2026-02-25 12:19:21.824 244018 DEBUG nova.virt.libvirt.host [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:19:21 np0005629333 nova_compute[244014]: 2026-02-25 12:19:21.825 244018 DEBUG nova.virt.libvirt.host [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:19:21 np0005629333 nova_compute[244014]: 2026-02-25 12:19:21.828 244018 DEBUG nova.virt.libvirt.host [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 07:19:21 np0005629333 nova_compute[244014]: 2026-02-25 12:19:21.828 244018 DEBUG nova.virt.libvirt.host [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 07:19:21 np0005629333 nova_compute[244014]: 2026-02-25 12:19:21.829 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:19:21 np0005629333 nova_compute[244014]: 2026-02-25 12:19:21.830 244018 DEBUG nova.virt.hardware [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 07:19:21 np0005629333 nova_compute[244014]: 2026-02-25 12:19:21.830 244018 DEBUG nova.virt.hardware [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 07:19:21 np0005629333 nova_compute[244014]: 2026-02-25 12:19:21.831 244018 DEBUG nova.virt.hardware [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 07:19:21 np0005629333 nova_compute[244014]: 2026-02-25 12:19:21.831 244018 DEBUG nova.virt.hardware [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 07:19:21 np0005629333 nova_compute[244014]: 2026-02-25 12:19:21.832 244018 DEBUG nova.virt.hardware [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 07:19:21 np0005629333 nova_compute[244014]: 2026-02-25 12:19:21.832 244018 DEBUG nova.virt.hardware [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 07:19:21 np0005629333 nova_compute[244014]: 2026-02-25 12:19:21.832 244018 DEBUG nova.virt.hardware [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 07:19:21 np0005629333 nova_compute[244014]: 2026-02-25 12:19:21.833 244018 DEBUG nova.virt.hardware [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 07:19:21 np0005629333 nova_compute[244014]: 2026-02-25 12:19:21.833 244018 DEBUG nova.virt.hardware [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 07:19:21 np0005629333 nova_compute[244014]: 2026-02-25 12:19:21.834 244018 DEBUG nova.virt.hardware [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 07:19:21 np0005629333 nova_compute[244014]: 2026-02-25 12:19:21.834 244018 DEBUG nova.virt.hardware [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
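The hardware.py walk above (limits 65536:65536:65536, preferences 0:0:0, one vCPU) can only yield the 1:1:1 topology. A toy enumeration of the candidate sockets:cores:threads factorizations, not nova's implementation:

    # Every sockets*cores*threads factorization of the vCPU count; nova's
    # real code additionally applies flavor/image limits (unset in this log).
    def possible_topologies(vcpus):
        for sockets in range(1, vcpus + 1):
            for cores in range(1, vcpus + 1):
                for threads in range(1, vcpus + 1):
                    if sockets * cores * threads == vcpus:
                        yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log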
Feb 25 07:19:21 np0005629333 nova_compute[244014]: 2026-02-25 12:19:21.839 244018 DEBUG oslo_concurrency.processutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.305 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Acquiring lock "d44c3dbc-e4bc-4235-bd88-b39616473248" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.306 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
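
The acquire/release pairs logged by oslo_concurrency.lockutils serialize all build attempts for one instance UUID; the same module guards the resource tracker's "compute_resources" lock further down. A sketch of both lock forms, assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    # Decorator form: one build at a time per instance UUID, mirroring
    # _locked_do_build_and_run_instance in the log above.
    @lockutils.synchronized('d44c3dbc-e4bc-4235-bd88-b39616473248')
    def locked_do_build_and_run_instance():
        # Context-manager form: the resource tracker's claim lock.
        with lockutils.lock('compute_resources'):
            pass  # claim memory/vcpu/disk, then build
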
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.314 244018 DEBUG nova.network.neutron [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Successfully updated port: b2336583-1aaa-4789-8d4f-a3a14997891d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:19:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:19:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1381563903' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.401 244018 DEBUG oslo_concurrency.processutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
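
nova's RBD utilities shell out through oslo.concurrency's processutils (the "mon dump" above took 0.562s end to end), and the mon's audit channel records the dispatched command on the Ceph side. Roughly the same call, assuming the client.openstack keyring referenced by /etc/ceph/ceph.conf:

    import json
    from oslo_concurrency import processutils

    # Same argv as the log line; execute() returns (stdout, stderr) and
    # raises ProcessExecutionError on a non-zero exit code.
    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    mon_map = json.loads(out)
    print([m['name'] for m in mon_map.get('mons', [])])
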
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.432 244018 DEBUG nova.storage.rbd_utils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.437 244018 DEBUG oslo_concurrency.processutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.457 244018 DEBUG nova.compute.manager [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.463 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.464 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquired lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.464 244018 DEBUG nova.network.neutron [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.467 244018 DEBUG nova.compute.manager [req-913bb8da-fa63-4f63-bae3-483f3134a1e7 req-a667e206-32bb-4ae0-82c3-04f1eaae39c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Received event network-changed-b2336583-1aaa-4789-8d4f-a3a14997891d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.468 244018 DEBUG nova.compute.manager [req-913bb8da-fa63-4f63-bae3-483f3134a1e7 req-a667e206-32bb-4ae0-82c3-04f1eaae39c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Refreshing instance network info cache due to event network-changed-b2336583-1aaa-4789-8d4f-a3a14997891d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.468 244018 DEBUG oslo_concurrency.lockutils [req-913bb8da-fa63-4f63-bae3-483f3134a1e7 req-a667e206-32bb-4ae0-82c3-04f1eaae39c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.637 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.638 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.646 244018 DEBUG nova.virt.hardware [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.646 244018 INFO nova.compute.claims [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.767 244018 DEBUG nova.network.neutron [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.809 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.930 244018 DEBUG oslo_concurrency.processutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:19:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/589651980' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.960 244018 DEBUG oslo_concurrency.processutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.963 244018 DEBUG nova.virt.libvirt.vif [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:19:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-649798416',display_name='tempest-ServersAdminTestJSON-server-649798416',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-649798416',id=21,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-fszvg8lb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:17Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=8993d2e7-8b5d-42eb-9e24-f96dcc4da39b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "55015950-cf1b-4183-802f-22f661123534", "address": "fa:16:3e:c6:da:42", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55015950-cf", "ovs_interfaceid": "55015950-cf1b-4183-802f-22f661123534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.964 244018 DEBUG nova.network.os_vif_util [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "55015950-cf1b-4183-802f-22f661123534", "address": "fa:16:3e:c6:da:42", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55015950-cf", "ovs_interfaceid": "55015950-cf1b-4183-802f-22f661123534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.964 244018 DEBUG nova.network.os_vif_util [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:da:42,bridge_name='br-int',has_traffic_filtering=True,id=55015950-cf1b-4183-802f-22f661123534,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55015950-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
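
The "Converting VIF"/"Converted object" pair shows nova_to_osvif_vif turning Neutron's port dict into a typed os-vif object before plugging. A sketch that rebuilds the converted object by hand, with field values copied from the log line above (the network field is omitted for brevity):

    import os_vif
    from os_vif.objects import vif as vif_obj

    os_vif.initialize()  # load the plugin entry points ('ovs', ...) once

    vif = vif_obj.VIFOpenVSwitch(
        id='55015950-cf1b-4183-802f-22f661123534',
        address='fa:16:3e:c6:da:42',
        bridge_name='br-int',
        has_traffic_filtering=True,
        plugin='ovs',
        vif_name='tap55015950-cf',
        preserve_on_delete=False)
    # os_vif.plug(vif, instance_info) then hands it to the 'ovs' plugin,
    # which produces the ovsdb transactions seen later in the log.
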
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.966 244018 DEBUG nova.objects.instance [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.984 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:19:22 np0005629333 nova_compute[244014]:  <uuid>8993d2e7-8b5d-42eb-9e24-f96dcc4da39b</uuid>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:  <name>instance-00000015</name>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServersAdminTestJSON-server-649798416</nova:name>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:19:21</nova:creationTime>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:19:22 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:        <nova:user uuid="6395ac4bfa5d4910aed9116395bbbdeb">tempest-ServersAdminTestJSON-147238686-project-member</nova:user>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:        <nova:project uuid="b35cd816238a43d8825ab11e83d2b8bf">tempest-ServersAdminTestJSON-147238686</nova:project>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:        <nova:port uuid="55015950-cf1b-4183-802f-22f661123534">
Feb 25 07:19:22 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <entry name="serial">8993d2e7-8b5d-42eb-9e24-f96dcc4da39b</entry>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <entry name="uuid">8993d2e7-8b5d-42eb-9e24-f96dcc4da39b</entry>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_disk">
Feb 25 07:19:22 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:19:22 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_disk.config">
Feb 25 07:19:22 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:19:22 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:c6:da:42"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <target dev="tap55015950-cf"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/8993d2e7-8b5d-42eb-9e24-f96dcc4da39b/console.log" append="off"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:19:22 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:19:22 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:19:22 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:19:22 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
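
With the domain XML rendered, the libvirt driver defines and boots the guest. A minimal sketch with the libvirt-python binding (the XML path is hypothetical; nova keeps the document in memory and wraps these calls in nova.virt.libvirt.guest):

    import libvirt

    xml = open('/tmp/instance-00000015.xml').read()  # hypothetical path

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)  # persistent definition
        dom.create()               # boot the defined guest
    finally:
        conn.close()
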
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.987 244018 DEBUG nova.compute.manager [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Preparing to wait for external event network-vif-plugged-55015950-cf1b-4183-802f-22f661123534 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.988 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.988 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.989 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
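
Before starting the domain, the manager registers a waiter for network-vif-plugged under the "<uuid>-events" lock, so spawn can block until Neutron confirms the port is wired. A minimal sketch of that pattern with plain threading primitives (hypothetical helpers; nova uses eventlet events keyed "<name>-<tag>"):

    import threading

    _events = {}          # (instance_uuid, 'name-tag') -> threading.Event
    _events_lock = threading.Lock()   # plays the role of "<uuid>-events"

    def prepare_for_instance_event(instance_uuid, name, tag):
        key = (instance_uuid, f'{name}-{tag}')
        with _events_lock:
            return _events.setdefault(key, threading.Event())

    def external_instance_event(instance_uuid, name, tag):
        # Called when Neutron delivers e.g. network-vif-plugged-<port-id>.
        key = (instance_uuid, f'{name}-{tag}')
        with _events_lock:
            ev = _events.pop(key, None)
        if ev is not None:
            ev.set()      # wakes the spawn thread blocked in ev.wait()
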
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.991 244018 DEBUG nova.virt.libvirt.vif [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:19:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-649798416',display_name='tempest-ServersAdminTestJSON-server-649798416',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-649798416',id=21,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-fszvg8lb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:17Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=8993d2e7-8b5d-42eb-9e24-f96dcc4da39b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "55015950-cf1b-4183-802f-22f661123534", "address": "fa:16:3e:c6:da:42", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55015950-cf", "ovs_interfaceid": "55015950-cf1b-4183-802f-22f661123534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.991 244018 DEBUG nova.network.os_vif_util [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "55015950-cf1b-4183-802f-22f661123534", "address": "fa:16:3e:c6:da:42", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55015950-cf", "ovs_interfaceid": "55015950-cf1b-4183-802f-22f661123534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.993 244018 DEBUG nova.network.os_vif_util [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:da:42,bridge_name='br-int',has_traffic_filtering=True,id=55015950-cf1b-4183-802f-22f661123534,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55015950-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.994 244018 DEBUG os_vif [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:da:42,bridge_name='br-int',has_traffic_filtering=True,id=55015950-cf1b-4183-802f-22f661123534,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55015950-cf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.995 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.996 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:22 np0005629333 nova_compute[244014]: 2026-02-25 12:19:22.997 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.002 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.002 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55015950-cf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.003 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap55015950-cf, col_values=(('external_ids', {'iface-id': '55015950-cf1b-4183-802f-22f661123534', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c6:da:42', 'vm-uuid': '8993d2e7-8b5d-42eb-9e24-f96dcc4da39b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.006 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:23 np0005629333 NetworkManager[49836]: <info>  [1772021963.0070] manager: (tap55015950-cf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.011 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.013 244018 INFO os_vif [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:da:42,bridge_name='br-int',has_traffic_filtering=True,id=55015950-cf1b-4183-802f-22f661123534,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55015950-cf')#033[00m
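
The plug itself is three idempotent ovsdb operations, visible in the txn lines above: ensure br-int exists (may_exist=True made it a no-op here), add the tap port, and stamp the Interface row's external_ids so OVN can bind it. A sketch with ovsdbapp, assuming the default ovsdb socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap55015950-cf', may_exist=True))
        txn.add(api.db_set('Interface', 'tap55015950-cf',
                           ('external_ids', {
                               'iface-id': '55015950-cf1b-4183-802f-22f661123534',
                               'attached-mac': 'fa:16:3e:c6:da:42'})))
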
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.064 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.065 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.065 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No VIF found with MAC fa:16:3e:c6:da:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.065 244018 INFO nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Using config drive#033[00m
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.081 244018 DEBUG nova.storage.rbd_utils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.425 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:19:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2282474270' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.478 244018 DEBUG oslo_concurrency.processutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.490 244018 DEBUG nova.compute.provider_tree [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.520 244018 DEBUG nova.scheduler.client.report [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
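
The inventory comparison above is placement's view of the node: usable capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the values in the log:

    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB ~52.2
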
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.561 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.562 244018 DEBUG nova.compute.manager [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:19:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 480 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 5.6 MiB/s wr, 147 op/s
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.673 244018 DEBUG nova.compute.manager [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.674 244018 DEBUG nova.network.neutron [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.735 244018 INFO nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.771 244018 DEBUG nova.compute.manager [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.822 244018 DEBUG nova.network.neutron [req-623e3f00-f476-42dd-9e68-689a9dcb893b req-92c27aec-e283-47f4-8cdf-4eab17e936b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Updated VIF entry in instance network info cache for port 55015950-cf1b-4183-802f-22f661123534. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.822 244018 DEBUG nova.network.neutron [req-623e3f00-f476-42dd-9e68-689a9dcb893b req-92c27aec-e283-47f4-8cdf-4eab17e936b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Updating instance_info_cache with network_info: [{"id": "55015950-cf1b-4183-802f-22f661123534", "address": "fa:16:3e:c6:da:42", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55015950-cf", "ovs_interfaceid": "55015950-cf1b-4183-802f-22f661123534", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.848 244018 INFO nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Creating config drive at /var/lib/nova/instances/8993d2e7-8b5d-42eb-9e24-f96dcc4da39b/disk.config#033[00m
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.854 244018 DEBUG oslo_concurrency.processutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8993d2e7-8b5d-42eb-9e24-f96dcc4da39b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6sxladem execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.876 244018 DEBUG oslo_concurrency.lockutils [req-623e3f00-f476-42dd-9e68-689a9dcb893b req-92c27aec-e283-47f4-8cdf-4eab17e936b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.924 244018 DEBUG nova.compute.manager [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.927 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.927 244018 INFO nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Creating image(s)#033[00m
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.962 244018 DEBUG nova.storage.rbd_utils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] rbd image d44c3dbc-e4bc-4235-bd88-b39616473248_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:19:23 np0005629333 nova_compute[244014]: 2026-02-25 12:19:23.994 244018 DEBUG nova.storage.rbd_utils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] rbd image d44c3dbc-e4bc-4235-bd88-b39616473248_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.021 244018 DEBUG nova.storage.rbd_utils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] rbd image d44c3dbc-e4bc-4235-bd88-b39616473248_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.026 244018 DEBUG oslo_concurrency.processutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.054 244018 DEBUG nova.policy [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8349db6a5fcb4d8596a69d83481207b3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '61fb5315043b44e588f1a84d85f1547b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.057 244018 DEBUG oslo_concurrency.processutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8993d2e7-8b5d-42eb-9e24-f96dcc4da39b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6sxladem" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
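
Config-drive generation is an ISO 9660 build: nova writes the metadata tree to a temp dir, then runs mkisofs with Joliet/Rock Ridge and the config-2 volume label that cloud-init probes for. A sketch of the same invocation (tree contents and output path are illustrative):

    import pathlib, subprocess, tempfile

    # Minimal stand-in for the metadata tree nova writes before mkisofs.
    tmp = pathlib.Path(tempfile.mkdtemp())
    (tmp / 'openstack' / 'latest').mkdir(parents=True)
    (tmp / 'openstack' / 'latest' / 'meta_data.json').write_text('{}')

    subprocess.run(
        ['mkisofs', '-o', '/tmp/disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-J', '-r', '-V', 'config-2', str(tmp)],
        check=True)
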
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.092 244018 DEBUG nova.storage.rbd_utils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.098 244018 DEBUG oslo_concurrency.processutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8993d2e7-8b5d-42eb-9e24-f96dcc4da39b/disk.config 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.125 244018 DEBUG nova.network.neutron [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Updating instance_info_cache with network_info: [{"id": "b2336583-1aaa-4789-8d4f-a3a14997891d", "address": "fa:16:3e:23:d7:29", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2336583-1a", "ovs_interfaceid": "b2336583-1aaa-4789-8d4f-a3a14997891d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.130 244018 DEBUG oslo_concurrency.processutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
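
The base-image probe above runs qemu-img info under oslo's prlimit wrapper, which re-execs the child with a 1 GiB address-space cap and a 30-second CPU cap so a crafted image cannot wedge the host. Roughly the same guarded call:

    from oslo_concurrency import processutils

    out, _ = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(
            address_space=1024 ** 3,  # --as=1073741824 in the log
            cpu_time=30))             # --cpu=30
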
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.131 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.132 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.132 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.164 244018 DEBUG nova.storage.rbd_utils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] rbd image d44c3dbc-e4bc-4235-bd88-b39616473248_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.171 244018 DEBUG oslo_concurrency.processutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d44c3dbc-e4bc-4235-bd88-b39616473248_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.196 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Releasing lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.197 244018 DEBUG nova.compute.manager [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Instance network_info: |[{"id": "b2336583-1aaa-4789-8d4f-a3a14997891d", "address": "fa:16:3e:23:d7:29", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2336583-1a", "ovs_interfaceid": "b2336583-1aaa-4789-8d4f-a3a14997891d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.199 244018 DEBUG oslo_concurrency.lockutils [req-913bb8da-fa63-4f63-bae3-483f3134a1e7 req-a667e206-32bb-4ae0-82c3-04f1eaae39c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.200 244018 DEBUG nova.network.neutron [req-913bb8da-fa63-4f63-bae3-483f3134a1e7 req-a667e206-32bb-4ae0-82c3-04f1eaae39c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Refreshing network info cache for port b2336583-1aaa-4789-8d4f-a3a14997891d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.208 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Start _get_guest_xml network_info=[{"id": "b2336583-1aaa-4789-8d4f-a3a14997891d", "address": "fa:16:3e:23:d7:29", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2336583-1a", "ovs_interfaceid": "b2336583-1aaa-4789-8d4f-a3a14997891d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.217 244018 WARNING nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.239 244018 DEBUG nova.virt.libvirt.host [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.241 244018 DEBUG nova.virt.libvirt.host [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.246 244018 DEBUG nova.virt.libvirt.host [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.246 244018 DEBUG nova.virt.libvirt.host [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
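
The two probes above first look for a cgroup v1 cpu controller (absent on this host) and then find one on the unified v2 hierarchy. On a v2 host the check reduces to reading the controller list; a minimal sketch assuming the standard mount point:

    def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
        # On cgroup v2, cgroup.controllers lists the controllers available
        # for delegation, e.g. "cpuset cpu io memory".
        try:
            with open(f'{root}/cgroup.controllers') as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # no unified hierarchy mounted here
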
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.247 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.248 244018 DEBUG nova.virt.hardware [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.249 244018 DEBUG nova.virt.hardware [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.249 244018 DEBUG nova.virt.hardware [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.249 244018 DEBUG nova.virt.hardware [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.250 244018 DEBUG nova.virt.hardware [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.250 244018 DEBUG nova.virt.hardware [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.251 244018 DEBUG nova.virt.hardware [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.251 244018 DEBUG nova.virt.hardware [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.252 244018 DEBUG nova.virt.hardware [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.252 244018 DEBUG nova.virt.hardware [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.252 244018 DEBUG nova.virt.hardware [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
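
The topology trace above shows the selection inputs: no flavor or image preference (0:0:0), default limits of 65536 per dimension, and 1 vCPU, which admits exactly one layout. A simplified sketch of the enumeration step, bounded by the vCPU count so the search stays finite:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield every (sockets, cores, threads) whose product equals the
        # vCPU count and respects the per-dimension limits.
        for s in range(1, min(max_sockets, vcpus) + 1):
            for c in range(1, min(max_cores, vcpus) + 1):
                for t in range(1, min(max_threads, vcpus) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    # For the 1-vCPU m1.nano above: [(1, 1, 1)]
    print(list(possible_topologies(1)))
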
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.262 244018 DEBUG oslo_concurrency.processutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
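
`ceph mon dump --format=json` is run here to discover the monitor addresses; they become the <host> entries of the RBD disk element in the guest XML emitted further down. A simplified sketch of the parse (real code also handles v2 addrvecs):

    import json
    from oslo_concurrency import processutils

    def get_mon_addrs(ceph_id='openstack', conf='/etc/ceph/ceph.conf'):
        out, _err = processutils.execute(
            'ceph', 'mon', 'dump', '--format=json',
            '--id', ceph_id, '--conf', conf)
        hosts, ports = [], []
        for mon in json.loads(out)['mons']:
            # e.g. "192.168.122.100:6789/0" -> ("192.168.122.100", "6789")
            addr = mon['addr'].rsplit('/', 1)[0]
            host, _, port = addr.rpartition(':')
            hosts.append(host)
            ports.append(port)
        return hosts, ports
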
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.285 244018 DEBUG oslo_concurrency.processutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8993d2e7-8b5d-42eb-9e24-f96dcc4da39b/disk.config 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.286 244018 INFO nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Deleting local config drive /var/lib/nova/instances/8993d2e7-8b5d-42eb-9e24-f96dcc4da39b/disk.config because it was imported into RBD.#033[00m
Feb 25 07:19:24 np0005629333 NetworkManager[49836]: <info>  [1772021964.3462] manager: (tap55015950-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Feb 25 07:19:24 np0005629333 kernel: tap55015950-cf: entered promiscuous mode
Feb 25 07:19:24 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:24Z|00076|binding|INFO|Claiming lport 55015950-cf1b-4183-802f-22f661123534 for this chassis.
Feb 25 07:19:24 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:24Z|00077|binding|INFO|55015950-cf1b-4183-802f-22f661123534: Claiming fa:16:3e:c6:da:42 10.100.0.12
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.355 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:24 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:24Z|00078|binding|INFO|Setting lport 55015950-cf1b-4183-802f-22f661123534 ovn-installed in OVS
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.370 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:24 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:24Z|00079|binding|INFO|Setting lport 55015950-cf1b-4183-802f-22f661123534 up in Southbound
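
The claim sequence above is driven entirely by OVSDB state: the tap is added to br-int with external_ids:iface-id set to the Neutron port UUID, ovn-controller matches that iface-id against its Port_Binding rows, claims the lport for this chassis, then marks it ovn-installed and up. A sketch of the plugging side, assuming standard ovs-vsctl tooling:

    import subprocess

    def plug_tap(tap, port_uuid, mac, instance_uuid):
        # The external_ids below are what ovn-controller keys on when it
        # decides a logical port is bound to this chassis.
        subprocess.run(
            ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', tap,
             '--', 'set', 'Interface', tap,
             'external_ids:iface-id=%s' % port_uuid,
             'external_ids:attached-mac=%s' % mac,
             'external_ids:vm-uuid=%s' % instance_uuid],
            check=True)
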
Feb 25 07:19:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:24.377 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:da:42 10.100.0.12'], port_security=['fa:16:3e:c6:da:42 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '8993d2e7-8b5d-42eb-9e24-f96dcc4da39b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fd7733ad-d262-4781-bcfa-77cfa8b67164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556b4b98-e95d-460c-a904-adc77baf4b88, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=55015950-cf1b-4183-802f-22f661123534) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:19:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:24.378 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 55015950-cf1b-4183-802f-22f661123534 in datapath 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 bound to our chassis#033[00m
Feb 25 07:19:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:24.380 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6#033[00m
Feb 25 07:19:24 np0005629333 systemd-machined[210048]: New machine qemu-23-instance-00000015.
Feb 25 07:19:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:24.391 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1dff0414-11ea-4d75-a684-61976e55572c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:24 np0005629333 systemd[1]: Started Virtual Machine qemu-23-instance-00000015.
Feb 25 07:19:24 np0005629333 systemd-udevd[263701]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:19:24 np0005629333 NetworkManager[49836]: <info>  [1772021964.4127] device (tap55015950-cf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:19:24 np0005629333 NetworkManager[49836]: <info>  [1772021964.4135] device (tap55015950-cf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:19:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:24.411 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b0a973e4-130b-4dad-9c2c-894980a5354f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:24.415 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[20911424-ea40-44c6-ac44-2a1eb0d3beb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:24.435 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c79c0daf-0851-4789-be35-9f82c843c03e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:24.448 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4699a15b-0688-4a32-8fa4-3090f7669104]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f4cbf9a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:f8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389396, 'reachable_time': 28421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263720, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:24.465 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[97837102-051b-4743-abb4-56e7555ccb43]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389407, 'tstamp': 389407}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263722, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389409, 'tstamp': 389409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263722, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
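
The two privsep replies above are netlink dumps taken inside the ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 namespace: an RTM_NEWLINK for tap1f4cbf9a-41 and RTM_NEWADDR entries for 10.100.0.2/28 plus the metadata address 169.254.169.254/32. A sketch of the same in-namespace query via pyroute2 (names taken from the log):

    from pyroute2 import NetNS

    def metadata_addrs(netns_name, ifname):
        # Enter the ovnmeta-<network> namespace and list the addresses
        # plumbed on the metadata tap.
        with NetNS(netns_name) as ns:
            idx = ns.link_lookup(ifname=ifname)[0]
            return [a.get_attr('IFA_ADDRESS')
                    for a in ns.get_addr(index=idx)]

    # metadata_addrs('ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6',
    #                'tap1f4cbf9a-41') -> ['10.100.0.2', '169.254.169.254']
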
Feb 25 07:19:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:24.466 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f4cbf9a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.468 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.469 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:24.470 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f4cbf9a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:24.471 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:19:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:24.471 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f4cbf9a-40, col_values=(('external_ids', {'iface-id': '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:24.472 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
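
The three transactions above remove the metadata tap from br-ex if present, ensure it exists on br-int, and (re)assert its iface-id; both "Transaction caused no change" results mean the database already held the desired state, so the commits were no-ops. A sketch of the equivalent ovsdbapp sequence, with connection setup abbreviated and the command names mirroring DelPortCommand/AddPortCommand/DbSetCommand:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    def rewire_metadata_tap(idl, tap, iface_id):
        api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
        with api.transaction(check_error=True) as txn:
            txn.add(api.del_port(tap, bridge='br-ex', if_exists=True))
            txn.add(api.add_port('br-int', tap, may_exist=True))
            txn.add(api.db_set('Interface', tap,
                               ('external_ids', {'iface-id': iface_id})))
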
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.489 244018 DEBUG oslo_concurrency.processutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d44c3dbc-e4bc-4235-bd88-b39616473248_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.547 244018 DEBUG nova.storage.rbd_utils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] resizing rbd image d44c3dbc-e4bc-4235-bd88-b39616473248_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
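
After the import returns, the image is grown from the small cirros base to the flavor's 1 GiB root disk. nova's rbd_utils does this through the librbd Python binding; a minimal sketch with illustrative handles:

    import rados
    import rbd

    def resize_rbd_image(pool, name, size_bytes,
                         conf='/etc/ceph/ceph.conf', ceph_id='openstack'):
        with rados.Rados(conffile=conf, rados_id=ceph_id) as cluster:
            with cluster.open_ioctx(pool) as ioctx:
                with rbd.Image(ioctx, name) as image:
                    image.resize(size_bytes)  # 1073741824 in the log

    # resize_rbd_image('vms',
    #                  'd44c3dbc-e4bc-4235-bd88-b39616473248_disk', 1 << 30)
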
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.624 244018 DEBUG nova.objects.instance [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lazy-loading 'migration_context' on Instance uuid d44c3dbc-e4bc-4235-bd88-b39616473248 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.638 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.638 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Ensure instance console log exists: /var/lib/nova/instances/d44c3dbc-e4bc-4235-bd88-b39616473248/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.639 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.639 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.640 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:19:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/760156500' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.838 244018 DEBUG oslo_concurrency.processutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.872 244018 DEBUG nova.storage.rbd_utils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] rbd image de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.879 244018 DEBUG oslo_concurrency.processutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.900 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021964.8879974, 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.900 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] VM Started (Lifecycle Event)#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.909 244018 DEBUG nova.compute.manager [req-eac59a1c-f12c-4350-b97b-69c5e7475e13 req-74b89faf-1b29-4c40-802c-74909fcb1e9a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Received event network-vif-plugged-55015950-cf1b-4183-802f-22f661123534 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.910 244018 DEBUG oslo_concurrency.lockutils [req-eac59a1c-f12c-4350-b97b-69c5e7475e13 req-74b89faf-1b29-4c40-802c-74909fcb1e9a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.910 244018 DEBUG oslo_concurrency.lockutils [req-eac59a1c-f12c-4350-b97b-69c5e7475e13 req-74b89faf-1b29-4c40-802c-74909fcb1e9a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.911 244018 DEBUG oslo_concurrency.lockutils [req-eac59a1c-f12c-4350-b97b-69c5e7475e13 req-74b89faf-1b29-4c40-802c-74909fcb1e9a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.911 244018 DEBUG nova.compute.manager [req-eac59a1c-f12c-4350-b97b-69c5e7475e13 req-74b89faf-1b29-4c40-802c-74909fcb1e9a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Processing event network-vif-plugged-55015950-cf1b-4183-802f-22f661123534 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.912 244018 DEBUG nova.compute.manager [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.919 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.923 244018 INFO nova.virt.libvirt.driver [-] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Instance spawned successfully.#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.924 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.930 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.934 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.952 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.952 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.953 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.954 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.954 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.955 244018 DEBUG nova.virt.libvirt.driver [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.966 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.966 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021964.8884034, 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.966 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.991 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.996 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021964.9176836, 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:19:24 np0005629333 nova_compute[244014]: 2026-02-25 12:19:24.996 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] VM Resumed (Lifecycle Event)#033[00m
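
The Started/Paused/Resumed burst during spawn reflects libvirt's domain lifecycle events (the guest is briefly paused and resumed as part of creation), which nova re-emits as its own LifecycleEvents. A sketch of where those lines originate, assuming the standard libvirt-python event API:

    import libvirt

    LIFECYCLE = {
        libvirt.VIR_DOMAIN_EVENT_STARTED: 'Started',
        libvirt.VIR_DOMAIN_EVENT_SUSPENDED: 'Paused',
        libvirt.VIR_DOMAIN_EVENT_RESUMED: 'Resumed',
        libvirt.VIR_DOMAIN_EVENT_STOPPED: 'Stopped',
    }

    def lifecycle_cb(conn, dom, event, detail, _opaque):
        name = LIFECYCLE.get(event)
        if name:
            print('[instance: %s] VM %s (Lifecycle Event)'
                  % (dom.UUIDString(), name))

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.openReadOnly('qemu:///system')
    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, lifecycle_cb, None)
    # A real consumer then loops on libvirt.virEventRunDefaultImpl().
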
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.020 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.027 244018 INFO nova.compute.manager [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Took 7.67 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.027 244018 DEBUG nova.compute.manager [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.030 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.062 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.106 244018 INFO nova.compute.manager [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Took 8.89 seconds to build instance.#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.127 244018 DEBUG oslo_concurrency.lockutils [None req-31640ae3-45b6-4b20-8db0-6b82e21e3279 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.306 244018 DEBUG nova.network.neutron [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Successfully created port: 47bb858f-172e-40b5-8ac0-02c531c303c3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:19:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:19:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4208460432' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.437 244018 DEBUG oslo_concurrency.processutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.439 244018 DEBUG nova.virt.libvirt.vif [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:19:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1190496141',display_name='tempest-FloatingIPsAssociationTestJSON-server-1190496141',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1190496141',id=22,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67d0ed57ac554e4390e928b3c8f9b5f6',ramdisk_id='',reservation_id='r-pnxl3h9s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1904923370',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1904923370-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:18Z,user_data=None,user_id='9aa84b2700234a5e9dcba1fc0bbc4cea',uuid=de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2336583-1aaa-4789-8d4f-a3a14997891d", "address": "fa:16:3e:23:d7:29", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2336583-1a", "ovs_interfaceid": "b2336583-1aaa-4789-8d4f-a3a14997891d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.439 244018 DEBUG nova.network.os_vif_util [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Converting VIF {"id": "b2336583-1aaa-4789-8d4f-a3a14997891d", "address": "fa:16:3e:23:d7:29", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2336583-1a", "ovs_interfaceid": "b2336583-1aaa-4789-8d4f-a3a14997891d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.440 244018 DEBUG nova.network.os_vif_util [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:d7:29,bridge_name='br-int',has_traffic_filtering=True,id=b2336583-1aaa-4789-8d4f-a3a14997891d,network=Network(41c706f5-6f0b-47a8-91a4-16f87e2a0571),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2336583-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
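
The conversion above turns nova's network_info dict into an os-vif VIFOpenVSwitch object that the 'ovs' plugin knows how to plug. A sketch of the construction using only fields visible in the log line (the nested Network object is omitted for brevity, so this is illustrative rather than pluggable as-is):

    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()
    v = vif.VIFOpenVSwitch(
        id='b2336583-1aaa-4789-8d4f-a3a14997891d',
        address='fa:16:3e:23:d7:29',
        bridge_name='br-int',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        vif_name='tapb2336583-1a')
    inst = instance_info.InstanceInfo(
        uuid='de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75',
        name='instance-00000016')
    # nova then hands both objects to the plugin: os_vif.plug(v, inst)
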
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.442 244018 DEBUG nova.objects.instance [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.470 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:19:25 np0005629333 nova_compute[244014]:  <uuid>de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75</uuid>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:  <name>instance-00000016</name>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1190496141</nova:name>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:19:24</nova:creationTime>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:19:25 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:        <nova:user uuid="9aa84b2700234a5e9dcba1fc0bbc4cea">tempest-FloatingIPsAssociationTestJSON-1904923370-project-member</nova:user>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:        <nova:project uuid="67d0ed57ac554e4390e928b3c8f9b5f6">tempest-FloatingIPsAssociationTestJSON-1904923370</nova:project>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:        <nova:port uuid="b2336583-1aaa-4789-8d4f-a3a14997891d">
Feb 25 07:19:25 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <entry name="serial">de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75</entry>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <entry name="uuid">de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75</entry>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_disk">
Feb 25 07:19:25 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:19:25 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_disk.config">
Feb 25 07:19:25 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:19:25 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:23:d7:29"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <target dev="tapb2336583-1a"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75/console.log" append="off"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:19:25 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:19:25 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:19:25 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:19:25 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.472 244018 DEBUG nova.compute.manager [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Preparing to wait for external event network-vif-plugged-b2336583-1aaa-4789-8d4f-a3a14997891d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.472 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.473 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.473 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.474 244018 DEBUG nova.virt.libvirt.vif [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:19:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1190496141',display_name='tempest-FloatingIPsAssociationTestJSON-server-1190496141',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1190496141',id=22,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67d0ed57ac554e4390e928b3c8f9b5f6',ramdisk_id='',reservation_id='r-pnxl3h9s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1904923370',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1904923370-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:18Z,user_data=None,user_id='9aa84b2700234a5e9dcba1fc0bbc4cea',uuid=de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2336583-1aaa-4789-8d4f-a3a14997891d", "address": "fa:16:3e:23:d7:29", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2336583-1a", "ovs_interfaceid": "b2336583-1aaa-4789-8d4f-a3a14997891d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.475 244018 DEBUG nova.network.os_vif_util [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Converting VIF {"id": "b2336583-1aaa-4789-8d4f-a3a14997891d", "address": "fa:16:3e:23:d7:29", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2336583-1a", "ovs_interfaceid": "b2336583-1aaa-4789-8d4f-a3a14997891d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.476 244018 DEBUG nova.network.os_vif_util [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:d7:29,bridge_name='br-int',has_traffic_filtering=True,id=b2336583-1aaa-4789-8d4f-a3a14997891d,network=Network(41c706f5-6f0b-47a8-91a4-16f87e2a0571),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2336583-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.476 244018 DEBUG os_vif [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:d7:29,bridge_name='br-int',has_traffic_filtering=True,id=b2336583-1aaa-4789-8d4f-a3a14997891d,network=Network(41c706f5-6f0b-47a8-91a4-16f87e2a0571),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2336583-1a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.477 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.478 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.479 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.482 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.482 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2336583-1a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.483 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb2336583-1a, col_values=(('external_ids', {'iface-id': 'b2336583-1aaa-4789-8d4f-a3a14997891d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:23:d7:29', 'vm-uuid': 'de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:25 np0005629333 NetworkManager[49836]: <info>  [1772021965.4864] manager: (tapb2336583-1a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.490 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.493 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.495 244018 INFO os_vif [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:d7:29,bridge_name='br-int',has_traffic_filtering=True,id=b2336583-1aaa-4789-8d4f-a3a14997891d,network=Network(41c706f5-6f0b-47a8-91a4-16f87e2a0571),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2336583-1a')#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.562 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.564 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.564 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] No VIF found with MAC fa:16:3e:23:d7:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.566 244018 INFO nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Using config drive#033[00m
Feb 25 07:19:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 480 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 349 KiB/s rd, 5.6 MiB/s wr, 106 op/s
Feb 25 07:19:25 np0005629333 nova_compute[244014]: 2026-02-25 12:19:25.599 244018 DEBUG nova.storage.rbd_utils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] rbd image de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:19:26 np0005629333 nova_compute[244014]: 2026-02-25 12:19:26.308 244018 INFO nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Creating config drive at /var/lib/nova/instances/de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75/disk.config#033[00m
Feb 25 07:19:26 np0005629333 nova_compute[244014]: 2026-02-25 12:19:26.317 244018 DEBUG oslo_concurrency.processutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpa796econ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:26 np0005629333 nova_compute[244014]: 2026-02-25 12:19:26.451 244018 DEBUG oslo_concurrency.processutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpa796econ" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:19:26 np0005629333 nova_compute[244014]: 2026-02-25 12:19:26.490 244018 DEBUG nova.storage.rbd_utils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] rbd image de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:19:26 np0005629333 nova_compute[244014]: 2026-02-25 12:19:26.495 244018 DEBUG oslo_concurrency.processutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75/disk.config de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:26 np0005629333 nova_compute[244014]: 2026-02-25 12:19:26.651 244018 DEBUG oslo_concurrency.processutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75/disk.config de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:19:26 np0005629333 nova_compute[244014]: 2026-02-25 12:19:26.652 244018 INFO nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Deleting local config drive /var/lib/nova/instances/de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75/disk.config because it was imported into RBD.#033[00m
Feb 25 07:19:26 np0005629333 kernel: tapb2336583-1a: entered promiscuous mode
Feb 25 07:19:26 np0005629333 NetworkManager[49836]: <info>  [1772021966.7137] manager: (tapb2336583-1a): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Feb 25 07:19:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:26Z|00080|binding|INFO|Claiming lport b2336583-1aaa-4789-8d4f-a3a14997891d for this chassis.
Feb 25 07:19:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:26Z|00081|binding|INFO|b2336583-1aaa-4789-8d4f-a3a14997891d: Claiming fa:16:3e:23:d7:29 10.100.0.3
Feb 25 07:19:26 np0005629333 NetworkManager[49836]: <info>  [1772021966.7252] device (tapb2336583-1a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:19:26 np0005629333 NetworkManager[49836]: <info>  [1772021966.7274] device (tapb2336583-1a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:19:26 np0005629333 nova_compute[244014]: 2026-02-25 12:19:26.727 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.736 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:d7:29 10.100.0.3'], port_security=['fa:16:3e:23:d7:29 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67d0ed57ac554e4390e928b3c8f9b5f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a50ab9ba-7ffb-499d-9822-c20dbf4e32aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db1eeaa4-6673-482f-8f62-eb89284fbfdd, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=b2336583-1aaa-4789-8d4f-a3a14997891d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:19:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.738 157129 INFO neutron.agent.ovn.metadata.agent [-] Port b2336583-1aaa-4789-8d4f-a3a14997891d in datapath 41c706f5-6f0b-47a8-91a4-16f87e2a0571 bound to our chassis#033[00m
Feb 25 07:19:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.739 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41c706f5-6f0b-47a8-91a4-16f87e2a0571#033[00m
Feb 25 07:19:26 np0005629333 systemd-machined[210048]: New machine qemu-24-instance-00000016.
Feb 25 07:19:26 np0005629333 systemd[1]: Started Virtual Machine qemu-24-instance-00000016.
Feb 25 07:19:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.758 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bef6c386-93e0-4dde-bba2-31774ef071ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.758 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap41c706f5-61 in ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:19:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.760 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap41c706f5-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:19:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.761 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[935e1c45-c9ef-4618-908c-f6c297977ff1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.761 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[25e0f48c-1ceb-412a-b7fb-a8f70f7c90d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.778 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[bb071dba-db6b-4aeb-9409-933fdc2bda50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:26 np0005629333 nova_compute[244014]: 2026-02-25 12:19:26.779 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:26Z|00082|binding|INFO|Setting lport b2336583-1aaa-4789-8d4f-a3a14997891d ovn-installed in OVS
Feb 25 07:19:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:26Z|00083|binding|INFO|Setting lport b2336583-1aaa-4789-8d4f-a3a14997891d up in Southbound
Feb 25 07:19:26 np0005629333 nova_compute[244014]: 2026-02-25 12:19:26.785 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.790 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[707c8857-24d5-472f-8b25-7af923becec4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.836 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[dfffdd31-1217-4683-9b4e-048c9cde19c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.841 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9bdf23e5-797f-4a17-a888-fcb2a5163018]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:26 np0005629333 NetworkManager[49836]: <info>  [1772021966.8433] manager: (tap41c706f5-60): new Veth device (/org/freedesktop/NetworkManager/Devices/56)
Feb 25 07:19:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.882 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3f471aeb-beeb-40f4-a581-5bbf13d5b874]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.885 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e345c27d-8eac-439a-8615-92ac85d00377]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:26 np0005629333 NetworkManager[49836]: <info>  [1772021966.9085] device (tap41c706f5-60): carrier: link connected
Feb 25 07:19:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.912 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5f8ba215-168b-4e58-b8da-770bc9123440]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.929 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[72d98bbf-20b6-4acf-868c-aa512a76f556]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41c706f5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:94:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 393631, 'reachable_time': 16472, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263984, 'error': None, 'target': 'ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.943 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8bd71f81-206d-451a-a1b1-4c17379eae76]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe41:94dc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 393631, 'tstamp': 393631}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263985, 'error': None, 'target': 'ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.961 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8b345e0f-1ffa-4520-9628-557af7910e83]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41c706f5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:94:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 393631, 'reachable_time': 16472, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 263986, 'error': None, 'target': 'ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:26.986 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ea9d58d9-5287-4fee-8124-0b305d9b3e8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:27.043 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b78f27b3-8a90-4579-9119-fcdf3107e958]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:27.045 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41c706f5-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:27.045 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:27.045 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41c706f5-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:27 np0005629333 NetworkManager[49836]: <info>  [1772021967.0485] manager: (tap41c706f5-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Feb 25 07:19:27 np0005629333 kernel: tap41c706f5-60: entered promiscuous mode
Feb 25 07:19:27 np0005629333 nova_compute[244014]: 2026-02-25 12:19:27.047 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:27 np0005629333 nova_compute[244014]: 2026-02-25 12:19:27.051 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:27.052 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41c706f5-60, col_values=(('external_ids', {'iface-id': 'f59f37b4-05c7-4f51-99f1-f2c4bac42231'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:27Z|00084|binding|INFO|Releasing lport f59f37b4-05c7-4f51-99f1-f2c4bac42231 from this chassis (sb_readonly=0)
Feb 25 07:19:27 np0005629333 nova_compute[244014]: 2026-02-25 12:19:27.070 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:27.071 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/41c706f5-6f0b-47a8-91a4-16f87e2a0571.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/41c706f5-6f0b-47a8-91a4-16f87e2a0571.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:19:27 np0005629333 nova_compute[244014]: 2026-02-25 12:19:27.072 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:27.072 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e2bb72-37ba-440a-a991-dc16ea4553ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:27.074 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-41c706f5-6f0b-47a8-91a4-16f87e2a0571
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/41c706f5-6f0b-47a8-91a4-16f87e2a0571.pid.haproxy
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 41c706f5-6f0b-47a8-91a4-16f87e2a0571
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:19:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:27.077 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'env', 'PROCESS_TAG=haproxy-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/41c706f5-6f0b-47a8-91a4-16f87e2a0571.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 07:19:27 np0005629333 nova_compute[244014]: 2026-02-25 12:19:27.084 244018 DEBUG nova.network.neutron [req-913bb8da-fa63-4f63-bae3-483f3134a1e7 req-a667e206-32bb-4ae0-82c3-04f1eaae39c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Updated VIF entry in instance network info cache for port b2336583-1aaa-4789-8d4f-a3a14997891d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:19:27 np0005629333 nova_compute[244014]: 2026-02-25 12:19:27.085 244018 DEBUG nova.network.neutron [req-913bb8da-fa63-4f63-bae3-483f3134a1e7 req-a667e206-32bb-4ae0-82c3-04f1eaae39c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Updating instance_info_cache with network_info: [{"id": "b2336583-1aaa-4789-8d4f-a3a14997891d", "address": "fa:16:3e:23:d7:29", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2336583-1a", "ovs_interfaceid": "b2336583-1aaa-4789-8d4f-a3a14997891d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:19:27 np0005629333 nova_compute[244014]: 2026-02-25 12:19:27.158 244018 DEBUG oslo_concurrency.lockutils [req-913bb8da-fa63-4f63-bae3-483f3134a1e7 req-a667e206-32bb-4ae0-82c3-04f1eaae39c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:19:27 np0005629333 nova_compute[244014]: 2026-02-25 12:19:27.483 244018 DEBUG nova.compute.manager [req-670d421f-8a4c-48d6-8b0d-3904d9fb6952 req-a0b1fdde-c50b-4110-a4da-e50e97f45685 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Received event network-vif-plugged-55015950-cf1b-4183-802f-22f661123534 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:19:27 np0005629333 nova_compute[244014]: 2026-02-25 12:19:27.484 244018 DEBUG oslo_concurrency.lockutils [req-670d421f-8a4c-48d6-8b0d-3904d9fb6952 req-a0b1fdde-c50b-4110-a4da-e50e97f45685 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:27 np0005629333 nova_compute[244014]: 2026-02-25 12:19:27.485 244018 DEBUG oslo_concurrency.lockutils [req-670d421f-8a4c-48d6-8b0d-3904d9fb6952 req-a0b1fdde-c50b-4110-a4da-e50e97f45685 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:27 np0005629333 nova_compute[244014]: 2026-02-25 12:19:27.486 244018 DEBUG oslo_concurrency.lockutils [req-670d421f-8a4c-48d6-8b0d-3904d9fb6952 req-a0b1fdde-c50b-4110-a4da-e50e97f45685 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:27 np0005629333 nova_compute[244014]: 2026-02-25 12:19:27.489 244018 DEBUG nova.compute.manager [req-670d421f-8a4c-48d6-8b0d-3904d9fb6952 req-a0b1fdde-c50b-4110-a4da-e50e97f45685 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] No waiting events found dispatching network-vif-plugged-55015950-cf1b-4183-802f-22f661123534 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:19:27 np0005629333 nova_compute[244014]: 2026-02-25 12:19:27.489 244018 WARNING nova.compute.manager [req-670d421f-8a4c-48d6-8b0d-3904d9fb6952 req-a0b1fdde-c50b-4110-a4da-e50e97f45685 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Received unexpected event network-vif-plugged-55015950-cf1b-4183-802f-22f661123534 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:19:27 np0005629333 nova_compute[244014]: 2026-02-25 12:19:27.490 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021967.4883902, de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:19:27 np0005629333 nova_compute[244014]: 2026-02-25 12:19:27.490 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] VM Started (Lifecycle Event)
Feb 25 07:19:27 np0005629333 podman[264059]: 2026-02-25 12:19:27.585513459 +0000 UTC m=+0.089882004 container create 478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 07:19:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 530 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 7.5 MiB/s wr, 222 op/s
Feb 25 07:19:27 np0005629333 nova_compute[244014]: 2026-02-25 12:19:27.613 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:19:27 np0005629333 nova_compute[244014]: 2026-02-25 12:19:27.618 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021967.4885652, de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:19:27 np0005629333 nova_compute[244014]: 2026-02-25 12:19:27.618 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] VM Paused (Lifecycle Event)
Feb 25 07:19:27 np0005629333 systemd[1]: Started libpod-conmon-478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35.scope.
Feb 25 07:19:27 np0005629333 podman[264059]: 2026-02-25 12:19:27.547358488 +0000 UTC m=+0.051727083 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:19:27 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:19:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0335c2e7dd04e96857217313fc187a38e2ae19855220b0923b3e59e3330f4337/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:19:27 np0005629333 podman[264059]: 2026-02-25 12:19:27.674815366 +0000 UTC m=+0.179183971 container init 478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 07:19:27 np0005629333 podman[264059]: 2026-02-25 12:19:27.678813558 +0000 UTC m=+0.183182103 container start 478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0)
Feb 25 07:19:27 np0005629333 neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571[264074]: [NOTICE]   (264078) : New worker (264080) forked
Feb 25 07:19:27 np0005629333 neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571[264074]: [NOTICE]   (264078) : Loading success.
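
The podman entries above trace the ovnmeta haproxy container through image pull, create, init, and start, after which haproxy forks its worker and reports success. One way to confirm from Python that such a container reached the running state (a sketch; podman inspect emits a JSON array whose first element carries a State.Status field):

    import json
    import subprocess

    def container_state(name):
        # State.Status becomes 'running' once the start step logged
        # above has completed.
        out = subprocess.run(['podman', 'inspect', name], check=True,
                             capture_output=True, text=True).stdout
        return json.loads(out)[0]['State']['Status']

    # e.g. container_state('neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571')
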
Feb 25 07:19:27 np0005629333 nova_compute[244014]: 2026-02-25 12:19:27.721 244018 DEBUG oslo_concurrency.lockutils [None req-87cb69ac-131f-4cef-9ed5-ba2b7debf683 f66bf3bdfeff4d5aa0ae1d0dd4e24375 7eebf71b1276409e90d4ab02df36bf14 - - default default] Acquiring lock "refresh_cache-8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:19:27 np0005629333 nova_compute[244014]: 2026-02-25 12:19:27.721 244018 DEBUG oslo_concurrency.lockutils [None req-87cb69ac-131f-4cef-9ed5-ba2b7debf683 f66bf3bdfeff4d5aa0ae1d0dd4e24375 7eebf71b1276409e90d4ab02df36bf14 - - default default] Acquired lock "refresh_cache-8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:19:27 np0005629333 nova_compute[244014]: 2026-02-25 12:19:27.721 244018 DEBUG nova.network.neutron [None req-87cb69ac-131f-4cef-9ed5-ba2b7debf683 f66bf3bdfeff4d5aa0ae1d0dd4e24375 7eebf71b1276409e90d4ab02df36bf14 - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:19:27 np0005629333 nova_compute[244014]: 2026-02-25 12:19:27.724 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:19:27 np0005629333 nova_compute[244014]: 2026-02-25 12:19:27.727 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:19:27 np0005629333 nova_compute[244014]: 2026-02-25 12:19:27.791 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] During sync_power_state the instance has a pending task (spawning). Skip.
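
The "Paused" lifecycle event arrives while the instance is still building/spawning, so the power-state sync above is skipped. A sketch of that skip rule (nova's power-state codes, matching the DB and VM values logged: 0 NOSTATE, 1 RUNNING, 3 PAUSED; the instance object here is a stand-in for nova's):

    import logging

    LOG = logging.getLogger(__name__)

    def sync_power_state(instance, vm_power_state):
        # While a task such as 'spawning' is in flight the hypervisor
        # state is transient, so reconciling it with the database is
        # deferred -- the "pending task ... Skip." line above.
        if instance.task_state is not None:
            LOG.info('During sync_power_state the instance has a pending '
                     'task (%s). Skip.', instance.task_state)
            return
        if instance.power_state != vm_power_state:
            instance.power_state = vm_power_state
            instance.save()
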
Feb 25 07:19:28 np0005629333 nova_compute[244014]: 2026-02-25 12:19:28.088 244018 DEBUG nova.network.neutron [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Successfully updated port: 47bb858f-172e-40b5-8ac0-02c531c303c3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:19:28 np0005629333 nova_compute[244014]: 2026-02-25 12:19:28.116 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Acquiring lock "refresh_cache-d44c3dbc-e4bc-4235-bd88-b39616473248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:19:28 np0005629333 nova_compute[244014]: 2026-02-25 12:19:28.116 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Acquired lock "refresh_cache-d44c3dbc-e4bc-4235-bd88-b39616473248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:19:28 np0005629333 nova_compute[244014]: 2026-02-25 12:19:28.116 244018 DEBUG nova.network.neutron [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:19:28 np0005629333 nova_compute[244014]: 2026-02-25 12:19:28.296 244018 DEBUG nova.network.neutron [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:19:28 np0005629333 nova_compute[244014]: 2026-02-25 12:19:28.431 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:19:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:19:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 530 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 7.5 MiB/s wr, 221 op/s
Feb 25 07:19:29 np0005629333 nova_compute[244014]: 2026-02-25 12:19:29.722 244018 DEBUG nova.compute.manager [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Received event network-vif-plugged-b2336583-1aaa-4789-8d4f-a3a14997891d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:19:29 np0005629333 nova_compute[244014]: 2026-02-25 12:19:29.723 244018 DEBUG oslo_concurrency.lockutils [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:19:29 np0005629333 nova_compute[244014]: 2026-02-25 12:19:29.724 244018 DEBUG oslo_concurrency.lockutils [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:19:29 np0005629333 nova_compute[244014]: 2026-02-25 12:19:29.724 244018 DEBUG oslo_concurrency.lockutils [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:19:29 np0005629333 nova_compute[244014]: 2026-02-25 12:19:29.724 244018 DEBUG nova.compute.manager [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Processing event network-vif-plugged-b2336583-1aaa-4789-8d4f-a3a14997891d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 07:19:29 np0005629333 nova_compute[244014]: 2026-02-25 12:19:29.725 244018 DEBUG nova.compute.manager [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Received event network-vif-plugged-b2336583-1aaa-4789-8d4f-a3a14997891d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:19:29 np0005629333 nova_compute[244014]: 2026-02-25 12:19:29.725 244018 DEBUG oslo_concurrency.lockutils [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:19:29 np0005629333 nova_compute[244014]: 2026-02-25 12:19:29.726 244018 DEBUG oslo_concurrency.lockutils [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:19:29 np0005629333 nova_compute[244014]: 2026-02-25 12:19:29.726 244018 DEBUG oslo_concurrency.lockutils [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:19:29 np0005629333 nova_compute[244014]: 2026-02-25 12:19:29.727 244018 DEBUG nova.compute.manager [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] No waiting events found dispatching network-vif-plugged-b2336583-1aaa-4789-8d4f-a3a14997891d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:19:29 np0005629333 nova_compute[244014]: 2026-02-25 12:19:29.727 244018 WARNING nova.compute.manager [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Received unexpected event network-vif-plugged-b2336583-1aaa-4789-8d4f-a3a14997891d for instance with vm_state building and task_state spawning.
Feb 25 07:19:29 np0005629333 nova_compute[244014]: 2026-02-25 12:19:29.728 244018 DEBUG nova.compute.manager [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Received event network-changed-47bb858f-172e-40b5-8ac0-02c531c303c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:19:29 np0005629333 nova_compute[244014]: 2026-02-25 12:19:29.728 244018 DEBUG nova.compute.manager [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Refreshing instance network info cache due to event network-changed-47bb858f-172e-40b5-8ac0-02c531c303c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:19:29 np0005629333 nova_compute[244014]: 2026-02-25 12:19:29.729 244018 DEBUG oslo_concurrency.lockutils [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d44c3dbc-e4bc-4235-bd88-b39616473248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:19:29 np0005629333 nova_compute[244014]: 2026-02-25 12:19:29.730 244018 DEBUG nova.compute.manager [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 07:19:29 np0005629333 nova_compute[244014]: 2026-02-25 12:19:29.735 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021969.7343109, de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:19:29 np0005629333 nova_compute[244014]: 2026-02-25 12:19:29.736 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] VM Resumed (Lifecycle Event)
Feb 25 07:19:29 np0005629333 nova_compute[244014]: 2026-02-25 12:19:29.740 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 07:19:29 np0005629333 nova_compute[244014]: 2026-02-25 12:19:29.745 244018 INFO nova.virt.libvirt.driver [-] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Instance spawned successfully.
Feb 25 07:19:29 np0005629333 nova_compute[244014]: 2026-02-25 12:19:29.746 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 07:19:29 np0005629333 nova_compute[244014]: 2026-02-25 12:19:29.967 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:19:29 np0005629333 nova_compute[244014]: 2026-02-25 12:19:29.982 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:19:29 np0005629333 nova_compute[244014]: 2026-02-25 12:19:29.986 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:19:29 np0005629333 nova_compute[244014]: 2026-02-25 12:19:29.986 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:19:29 np0005629333 nova_compute[244014]: 2026-02-25 12:19:29.987 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:19:29 np0005629333 nova_compute[244014]: 2026-02-25 12:19:29.987 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:19:29 np0005629333 nova_compute[244014]: 2026-02-25 12:19:29.987 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:19:29 np0005629333 nova_compute[244014]: 2026-02-25 12:19:29.988 244018 DEBUG nova.virt.libvirt.driver [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
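
The six "Found default for ..." lines above record the virtual-hardware defaults being pinned for the new guest. A sketch of recording them under the image_* system-metadata prefix (register_undefined_details is an illustrative helper, not nova's API; the prefix matches the image_hw_* keys visible in the instance dump further below):

    # Defaults resolved in the lines above, pinned so the guest keeps the
    # same virtual hardware across reboots and migrations.
    RESOLVED_DEFAULTS = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }

    def register_undefined_details(system_metadata):
        # Record a default only where the image did not already set
        # the property explicitly.
        for prop, value in RESOLVED_DEFAULTS.items():
            system_metadata.setdefault('image_%s' % prop, value)
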
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.013 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.022 244018 DEBUG nova.network.neutron [None req-87cb69ac-131f-4cef-9ed5-ba2b7debf683 f66bf3bdfeff4d5aa0ae1d0dd4e24375 7eebf71b1276409e90d4ab02df36bf14 - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Updating instance_info_cache with network_info: [{"id": "55015950-cf1b-4183-802f-22f661123534", "address": "fa:16:3e:c6:da:42", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55015950-cf", "ovs_interfaceid": "55015950-cf1b-4183-802f-22f661123534", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.030 244018 DEBUG nova.network.neutron [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Updating instance_info_cache with network_info: [{"id": "47bb858f-172e-40b5-8ac0-02c531c303c3", "address": "fa:16:3e:d6:24:39", "network": {"id": "55e574c2-43dc-4bbd-bf4c-2028df9ad3c1", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-191563683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61fb5315043b44e588f1a84d85f1547b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bb858f-17", "ovs_interfaceid": "47bb858f-172e-40b5-8ac0-02c531c303c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.138 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Releasing lock "refresh_cache-d44c3dbc-e4bc-4235-bd88-b39616473248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.138 244018 DEBUG nova.compute.manager [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Instance network_info: |[{"id": "47bb858f-172e-40b5-8ac0-02c531c303c3", "address": "fa:16:3e:d6:24:39", "network": {"id": "55e574c2-43dc-4bbd-bf4c-2028df9ad3c1", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-191563683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61fb5315043b44e588f1a84d85f1547b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bb858f-17", "ovs_interfaceid": "47bb858f-172e-40b5-8ac0-02c531c303c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.139 244018 DEBUG oslo_concurrency.lockutils [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d44c3dbc-e4bc-4235-bd88-b39616473248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.139 244018 DEBUG nova.network.neutron [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Refreshing network info cache for port 47bb858f-172e-40b5-8ac0-02c531c303c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.142 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Start _get_guest_xml network_info=[{"id": "47bb858f-172e-40b5-8ac0-02c531c303c3", "address": "fa:16:3e:d6:24:39", "network": {"id": "55e574c2-43dc-4bbd-bf4c-2028df9ad3c1", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-191563683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61fb5315043b44e588f1a84d85f1547b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bb858f-17", "ovs_interfaceid": "47bb858f-172e-40b5-8ac0-02c531c303c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
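
The Start _get_guest_xml entry above carries the complete disk layout nova computed for this guest. The same disk_info restated as a literal for readability (values copied from the log line; device_for is an illustrative accessor, not a nova API):

    # disk_info exactly as logged above.
    disk_info = {
        'disk_bus': 'virtio',
        'cdrom_bus': 'sata',
        'mapping': {
            'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk',
                     'boot_index': '1'},
            'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk',
                     'boot_index': '1'},
            'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'},
        },
    }

    def device_for(role):
        # e.g. device_for('root') -> '/dev/vda'
        return '/dev/' + disk_info['mapping'][role]['dev']
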
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.147 244018 DEBUG oslo_concurrency.lockutils [None req-87cb69ac-131f-4cef-9ed5-ba2b7debf683 f66bf3bdfeff4d5aa0ae1d0dd4e24375 7eebf71b1276409e90d4ab02df36bf14 - - default default] Releasing lock "refresh_cache-8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.147 244018 DEBUG nova.compute.manager [None req-87cb69ac-131f-4cef-9ed5-ba2b7debf683 f66bf3bdfeff4d5aa0ae1d0dd4e24375 7eebf71b1276409e90d4ab02df36bf14 - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.147 244018 DEBUG nova.compute.manager [None req-87cb69ac-131f-4cef-9ed5-ba2b7debf683 f66bf3bdfeff4d5aa0ae1d0dd4e24375 7eebf71b1276409e90d4ab02df36bf14 - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] network_info to inject: |[{"id": "55015950-cf1b-4183-802f-22f661123534", "address": "fa:16:3e:c6:da:42", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55015950-cf", "ovs_interfaceid": "55015950-cf1b-4183-802f-22f661123534", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.149 244018 INFO nova.compute.manager [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Took 11.46 seconds to spawn the instance on the hypervisor.
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.150 244018 DEBUG nova.compute.manager [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.158 244018 WARNING nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.162 244018 DEBUG nova.virt.libvirt.host [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.163 244018 DEBUG nova.virt.libvirt.host [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.165 244018 DEBUG nova.virt.libvirt.host [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.166 244018 DEBUG nova.virt.libvirt.host [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
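
The cgroups probe above fails on v1 and succeeds on v2, which is expected on an el9 host. On a cgroup-v2 system the enabled controllers can be read from a single file; a sketch of the v2 half of the check, assuming the standard /sys/fs/cgroup mount point:

    def has_cgroupsv2_cpu_controller():
        # 'cpu' appearing in cgroup.controllers is what flips the probe
        # above to "CPU controller found on host."
        try:
            with open('/sys/fs/cgroup/cgroup.controllers') as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # not a v2 host; the v1 probe is used instead
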
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.166 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.166 244018 DEBUG nova.virt.hardware [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.167 244018 DEBUG nova.virt.hardware [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.167 244018 DEBUG nova.virt.hardware [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.167 244018 DEBUG nova.virt.hardware [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.167 244018 DEBUG nova.virt.hardware [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.167 244018 DEBUG nova.virt.hardware [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.167 244018 DEBUG nova.virt.hardware [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.168 244018 DEBUG nova.virt.hardware [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.168 244018 DEBUG nova.virt.hardware [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.168 244018 DEBUG nova.virt.hardware [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.168 244018 DEBUG nova.virt.hardware [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
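
The hardware.py lines above walk a topology search: with no flavor or image constraints, every sockets*cores*threads factorization of the vCPU count within the 65536 limits is a candidate, and 1 vCPU admits only 1:1:1. A simplified sketch of that enumeration (nova's real version additionally orders the candidates by preference):

    def possible_cpu_topologies(vcpus, max_sockets=65536, max_cores=65536,
                                max_threads=65536):
        # Enumerate every (sockets, cores, threads) triple whose product
        # equals the vCPU count and stays within the limits.
        found = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        found.append((s, c, t))
        return found

    # For the 1-vCPU m1.nano flavor above this yields just [(1, 1, 1)].
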
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.171 244018 DEBUG oslo_concurrency.processutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.252 244018 INFO nova.compute.manager [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Took 12.94 seconds to build instance.
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.297 244018 DEBUG oslo_concurrency.lockutils [None req-0b6a0820-db3f-4e70-bb43-b227abd7023d 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
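
The three durations reported for instance de9168ca above nest as expected: the hypervisor spawn sits inside the overall build, which sits inside the build lock's hold time. A quick check of the arithmetic (values copied from the log):

    spawn, build, lock_held = 11.46, 12.94, 13.084
    assert spawn < build < lock_held
    pre_spawn = build - spawn        # ~1.48 s of scheduling/network setup
    post_build = lock_held - build   # ~0.14 s of bookkeeping before release
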
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.486 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:19:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:19:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1051982892' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:19:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:19:30
Feb 25 07:19:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:19:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:19:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'backups', 'images', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', '.mgr', 'vms']
Feb 25 07:19:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.893 244018 DEBUG oslo_concurrency.processutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.722s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
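
The monitor dump above is an ordinary CLI round trip; nova drives it through oslo_concurrency.processutils rather than subprocess directly. An equivalent standalone sketch, pulling the monitor names out of the standard monmap JSON:

    import json
    import subprocess

    def ceph_mon_names():
        # The exact command issued twice in the log above.
        out = subprocess.run(
            ['ceph', 'mon', 'dump', '--format=json',
             '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
            check=True, capture_output=True, text=True).stdout
        return [m['name'] for m in json.loads(out).get('mons', [])]
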
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.933 244018 DEBUG nova.storage.rbd_utils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] rbd image d44c3dbc-e4bc-4235-bd88-b39616473248_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:19:30 np0005629333 nova_compute[244014]: 2026-02-25 12:19:30.939 244018 DEBUG oslo_concurrency.processutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.456 244018 DEBUG nova.network.neutron [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Updated VIF entry in instance network info cache for port 47bb858f-172e-40b5-8ac0-02c531c303c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.457 244018 DEBUG nova.network.neutron [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Updating instance_info_cache with network_info: [{"id": "47bb858f-172e-40b5-8ac0-02c531c303c3", "address": "fa:16:3e:d6:24:39", "network": {"id": "55e574c2-43dc-4bbd-bf4c-2028df9ad3c1", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-191563683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61fb5315043b44e588f1a84d85f1547b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bb858f-17", "ovs_interfaceid": "47bb858f-172e-40b5-8ac0-02c531c303c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:19:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:19:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2308674807' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.534 244018 DEBUG oslo_concurrency.lockutils [req-d880c9e0-2c63-4971-a32a-e13880bdce74 req-23617cae-23e6-41b8-aaf2-406899fd3f50 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d44c3dbc-e4bc-4235-bd88-b39616473248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.539 244018 DEBUG oslo_concurrency.processutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.540 244018 DEBUG nova.virt.libvirt.vif [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:19:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1820272093',display_name='tempest-ImagesOneServerTestJSON-server-1820272093',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1820272093',id=23,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='61fb5315043b44e588f1a84d85f1547b',ramdisk_id='',reservation_id='r-brwcjik7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-921017522',owner_user_name='tempest-ImagesOneServerTestJSON-921017522-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:23Z,user_data=None,user_id='8349db6a5fcb4d8596a69d83481207b3',uuid=d44c3dbc-e4bc-4235-bd88-b39616473248,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "47bb858f-172e-40b5-8ac0-02c531c303c3", "address": "fa:16:3e:d6:24:39", "network": {"id": "55e574c2-43dc-4bbd-bf4c-2028df9ad3c1", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-191563683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61fb5315043b44e588f1a84d85f1547b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bb858f-17", "ovs_interfaceid": "47bb858f-172e-40b5-8ac0-02c531c303c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.541 244018 DEBUG nova.network.os_vif_util [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Converting VIF {"id": "47bb858f-172e-40b5-8ac0-02c531c303c3", "address": "fa:16:3e:d6:24:39", "network": {"id": "55e574c2-43dc-4bbd-bf4c-2028df9ad3c1", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-191563683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61fb5315043b44e588f1a84d85f1547b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bb858f-17", "ovs_interfaceid": "47bb858f-172e-40b5-8ac0-02c531c303c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.542 244018 DEBUG nova.network.os_vif_util [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:24:39,bridge_name='br-int',has_traffic_filtering=True,id=47bb858f-172e-40b5-8ac0-02c531c303c3,network=Network(55e574c2-43dc-4bbd-bf4c-2028df9ad3c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47bb858f-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
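
The Converting VIF / Converted object pair above shows the neutron port dict being turned into the os-vif VIFOpenVSwitch object the log itself names. A sketch of that mapping for the fields the log prints, with the tap name derived the way the devname values suggest ("tap" plus the first 11 characters of the port id); field set abridged, and direct instantiation of the registered os-vif object class is assumed:

    from os_vif.objects import vif as osv_vif

    def nova_to_osvif_ovs(port):
        # port is the neutron-style dict logged in "Converting VIF" above.
        return osv_vif.VIFOpenVSwitch(
            id=port['id'],
            address=port['address'],
            bridge_name=port['details']['bridge_name'],
            has_traffic_filtering=port['details']['port_filter'],
            preserve_on_delete=port['preserve_on_delete'],
            active=port['active'],
            plugin='ovs',
            vif_name='tap' + port['id'][:11],
        )
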
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.543 244018 DEBUG nova.objects.instance [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lazy-loading 'pci_devices' on Instance uuid d44c3dbc-e4bc-4235-bd88-b39616473248 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.574 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:19:31 np0005629333 nova_compute[244014]:  <uuid>d44c3dbc-e4bc-4235-bd88-b39616473248</uuid>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:  <name>instance-00000017</name>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <nova:name>tempest-ImagesOneServerTestJSON-server-1820272093</nova:name>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:19:30</nova:creationTime>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:19:31 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:        <nova:user uuid="8349db6a5fcb4d8596a69d83481207b3">tempest-ImagesOneServerTestJSON-921017522-project-member</nova:user>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:        <nova:project uuid="61fb5315043b44e588f1a84d85f1547b">tempest-ImagesOneServerTestJSON-921017522</nova:project>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:        <nova:port uuid="47bb858f-172e-40b5-8ac0-02c531c303c3">
Feb 25 07:19:31 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <entry name="serial">d44c3dbc-e4bc-4235-bd88-b39616473248</entry>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <entry name="uuid">d44c3dbc-e4bc-4235-bd88-b39616473248</entry>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/d44c3dbc-e4bc-4235-bd88-b39616473248_disk">
Feb 25 07:19:31 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:19:31 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/d44c3dbc-e4bc-4235-bd88-b39616473248_disk.config">
Feb 25 07:19:31 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:19:31 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:d6:24:39"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <target dev="tap47bb858f-17"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/d44c3dbc-e4bc-4235-bd88-b39616473248/console.log" append="off"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:19:31 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:19:31 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:19:31 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:19:31 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
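
The dump above is the complete guest definition nova hands to libvirt: note that <memory> is in KiB (131072 KiB = the m1.nano flavor's 128 MiB), both the root disk and the config drive are RBD-backed, and the single VIF is the tap device plugged just below. Pulling the interesting bits back out of such a dump needs only the standard library; a self-contained sketch over a trimmed copy of the XML:

    import xml.etree.ElementTree as ET

    # Trimmed from the _get_guest_xml dump above.
    xml_text = """
    <domain type="kvm">
      <uuid>d44c3dbc-e4bc-4235-bd88-b39616473248</uuid>
      <memory>131072</memory>
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/d44c3dbc-e4bc-4235-bd88-b39616473248_disk"/>
          <target dev="vda" bus="virtio"/>
        </disk>
        <interface type="ethernet">
          <mac address="fa:16:3e:d6:24:39"/>
          <target dev="tap47bb858f-17"/>
        </interface>
      </devices>
    </domain>"""

    dom = ET.fromstring(xml_text)
    print("uuid:  ", dom.findtext("uuid"))
    print("memory:", dom.findtext("memory"), "KiB")
    for disk in dom.findall("./devices/disk"):
        src, tgt = disk.find("source"), disk.find("target")
        print("disk:", tgt.get("dev"), "<-", src.get("protocol"), src.get("name"))
    for iface in dom.findall("./devices/interface"):
        print("vif: ", iface.find("target").get("dev"),
              iface.find("mac").get("address"))
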
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.575 244018 DEBUG nova.compute.manager [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Preparing to wait for external event network-vif-plugged-47bb858f-172e-40b5-8ac0-02c531c303c3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.575 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Acquiring lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.575 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:19:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:19:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:19:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:19:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:19:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.575 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
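
The Acquiring / acquired / "released" triplet around _create_or_get_event (the ceph-mgr chatter interleaves here) is oslo.concurrency's named-lock idiom: every change to the per-instance event table is serialized behind one "<uuid>-events" lock, and the waited/held durations in the log come from lockutils' own debug logging. The same pattern in isolation, assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    instance_uuid = "d44c3dbc-e4bc-4235-bd88-b39616473248"

    # lockutils.lock() returns a context manager around a named lock,
    # the same "<uuid>-events" key seen in the log lines above.
    with lockutils.lock(f"{instance_uuid}-events"):
        pass  # register the network-vif-plugged event we intend to wait for
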
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.576 244018 DEBUG nova.virt.libvirt.vif [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:19:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1820272093',display_name='tempest-ImagesOneServerTestJSON-server-1820272093',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1820272093',id=23,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='61fb5315043b44e588f1a84d85f1547b',ramdisk_id='',reservation_id='r-brwcjik7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-921017522',owner_user_name='tempest-ImagesOneServerTestJSON-921017522-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:23Z,user_data=None,user_id='8349db6a5fcb4d8596a69d83481207b3',uuid=d44c3dbc-e4bc-4235-bd88-b39616473248,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "47bb858f-172e-40b5-8ac0-02c531c303c3", "address": "fa:16:3e:d6:24:39", "network": {"id": "55e574c2-43dc-4bbd-bf4c-2028df9ad3c1", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-191563683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61fb5315043b44e588f1a84d85f1547b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bb858f-17", "ovs_interfaceid": "47bb858f-172e-40b5-8ac0-02c531c303c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.577 244018 DEBUG nova.network.os_vif_util [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Converting VIF {"id": "47bb858f-172e-40b5-8ac0-02c531c303c3", "address": "fa:16:3e:d6:24:39", "network": {"id": "55e574c2-43dc-4bbd-bf4c-2028df9ad3c1", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-191563683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61fb5315043b44e588f1a84d85f1547b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bb858f-17", "ovs_interfaceid": "47bb858f-172e-40b5-8ac0-02c531c303c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.578 244018 DEBUG nova.network.os_vif_util [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:24:39,bridge_name='br-int',has_traffic_filtering=True,id=47bb858f-172e-40b5-8ac0-02c531c303c3,network=Network(55e574c2-43dc-4bbd-bf4c-2028df9ad3c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47bb858f-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.578 244018 DEBUG os_vif [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:24:39,bridge_name='br-int',has_traffic_filtering=True,id=47bb858f-172e-40b5-8ac0-02c531c303c3,network=Network(55e574c2-43dc-4bbd-bf4c-2028df9ad3c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47bb858f-17') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.578 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.579 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.579 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.583 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.583 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap47bb858f-17, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.584 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap47bb858f-17, col_values=(('external_ids', {'iface-id': '47bb858f-172e-40b5-8ac0-02c531c303c3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:24:39', 'vm-uuid': 'd44c3dbc-e4bc-4235-bd88-b39616473248'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
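
The two ovsdbapp transactions above do the actual plugging: AddBridgeCommand is a no-op here ("Transaction caused no change") because br-int already exists, then AddPortCommand plus DbSetCommand attach the tap and stamp its Interface row with the iface-id / attached-mac / vm-uuid external_ids that ovn-controller matches on. os-vif talks to ovsdb natively, but the equivalent from the shell would be roughly this (standard ovs-vsctl syntax, sketched via subprocess):

    import subprocess

    def vsctl(*args):
        # One ovs-vsctl invocation, failing loudly on error.
        subprocess.run(("ovs-vsctl",) + args, check=True)

    PORT = "tap47bb858f-17"
    vsctl("--may-exist", "add-br", "br-int",
          "--", "set", "Bridge", "br-int", "datapath_type=system")
    vsctl("--may-exist", "add-port", "br-int", PORT,
          "--", "set", "Interface", PORT,
          "external_ids:iface-id=47bb858f-172e-40b5-8ac0-02c531c303c3",
          "external_ids:iface-status=active",
          "external_ids:attached-mac=fa:16:3e:d6:24:39",
          "external_ids:vm-uuid=d44c3dbc-e4bc-4235-bd88-b39616473248")
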
Feb 25 07:19:31 np0005629333 NetworkManager[49836]: <info>  [1772021971.5861] manager: (tap47bb858f-17): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.586 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.597 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.597 244018 INFO os_vif [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:24:39,bridge_name='br-int',has_traffic_filtering=True,id=47bb858f-172e-40b5-8ac0-02c531c303c3,network=Network(55e574c2-43dc-4bbd-bf4c-2028df9ad3c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47bb858f-17')#033[00m
Feb 25 07:19:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 530 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 7.5 MiB/s wr, 261 op/s
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.683 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.684 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.684 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] No VIF found with MAC fa:16:3e:d6:24:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.685 244018 INFO nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Using config drive#033[00m
Feb 25 07:19:31 np0005629333 nova_compute[244014]: 2026-02-25 12:19:31.711 244018 DEBUG nova.storage.rbd_utils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] rbd image d44c3dbc-e4bc-4235-bd88-b39616473248_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:19:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:19:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:19:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:19:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:19:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:19:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:19:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:19:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:19:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:19:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:19:32 np0005629333 nova_compute[244014]: 2026-02-25 12:19:32.351 244018 INFO nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Creating config drive at /var/lib/nova/instances/d44c3dbc-e4bc-4235-bd88-b39616473248/disk.config#033[00m
Feb 25 07:19:32 np0005629333 nova_compute[244014]: 2026-02-25 12:19:32.355 244018 DEBUG oslo_concurrency.processutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d44c3dbc-e4bc-4235-bd88-b39616473248/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpod7expt3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:32 np0005629333 nova_compute[244014]: 2026-02-25 12:19:32.483 244018 DEBUG oslo_concurrency.processutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d44c3dbc-e4bc-4235-bd88-b39616473248/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpod7expt3" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:19:32 np0005629333 nova_compute[244014]: 2026-02-25 12:19:32.529 244018 DEBUG nova.storage.rbd_utils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] rbd image d44c3dbc-e4bc-4235-bd88-b39616473248_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:19:32 np0005629333 nova_compute[244014]: 2026-02-25 12:19:32.534 244018 DEBUG oslo_concurrency.processutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d44c3dbc-e4bc-4235-bd88-b39616473248/disk.config d44c3dbc-e4bc-4235-bd88-b39616473248_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:32 np0005629333 nova_compute[244014]: 2026-02-25 12:19:32.687 244018 DEBUG oslo_concurrency.processutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d44c3dbc-e4bc-4235-bd88-b39616473248/disk.config d44c3dbc-e4bc-4235-bd88-b39616473248_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:19:32 np0005629333 nova_compute[244014]: 2026-02-25 12:19:32.688 244018 INFO nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Deleting local config drive /var/lib/nova/instances/d44c3dbc-e4bc-4235-bd88-b39616473248/disk.config because it was imported into RBD.#033[00m
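
The config-drive sequence just completed is self-contained in the log: nova checks that vms/<uuid>_disk.config does not exist yet, builds an ISO9660 image locally with mkisofs (volume label config-2, which is what cloud-init probes for), imports it into Ceph next to the root disk, and deletes the local copy. Replayed from the logged commands (the /tmp path is nova's throwaway staging directory; the cosmetic -publisher/-quiet flags are omitted):

    import os
    import subprocess

    INST = "d44c3dbc-e4bc-4235-bd88-b39616473248"
    ISO = f"/var/lib/nova/instances/{INST}/disk.config"

    # 1. Build the ISO9660 config drive from the staged metadata directory.
    subprocess.run(["mkisofs", "-o", ISO, "-ldots", "-allow-lowercase",
                    "-allow-multidot", "-l", "-J", "-r", "-V", "config-2",
                    "/tmp/tmpod7expt3"], check=True)

    # 2. Import it into the vms pool as a format-2 RBD image.
    subprocess.run(["rbd", "import", "--pool", "vms", ISO,
                    f"{INST}_disk.config", "--image-format=2",
                    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
                   check=True)

    # 3. The RBD copy is now authoritative; drop the local file.
    os.unlink(ISO)
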
Feb 25 07:19:32 np0005629333 kernel: tap47bb858f-17: entered promiscuous mode
Feb 25 07:19:32 np0005629333 NetworkManager[49836]: <info>  [1772021972.7520] manager: (tap47bb858f-17): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Feb 25 07:19:32 np0005629333 nova_compute[244014]: 2026-02-25 12:19:32.754 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:32Z|00085|binding|INFO|Claiming lport 47bb858f-172e-40b5-8ac0-02c531c303c3 for this chassis.
Feb 25 07:19:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:32Z|00086|binding|INFO|47bb858f-172e-40b5-8ac0-02c531c303c3: Claiming fa:16:3e:d6:24:39 10.100.0.9
Feb 25 07:19:32 np0005629333 nova_compute[244014]: 2026-02-25 12:19:32.759 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:32 np0005629333 systemd-udevd[264223]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:19:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:32.778 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:24:39 10.100.0.9'], port_security=['fa:16:3e:d6:24:39 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd44c3dbc-e4bc-4235-bd88-b39616473248', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61fb5315043b44e588f1a84d85f1547b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '65ea446a-ea2b-4eb1-8854-770694a45ea7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf0a1784-e657-4d3c-8170-e3cf50faab00, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=47bb858f-172e-40b5-8ac0-02c531c303c3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:19:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:32.780 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 47bb858f-172e-40b5-8ac0-02c531c303c3 in datapath 55e574c2-43dc-4bbd-bf4c-2028df9ad3c1 bound to our chassis#033[00m
Feb 25 07:19:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:32.786 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55e574c2-43dc-4bbd-bf4c-2028df9ad3c1#033[00m
Feb 25 07:19:32 np0005629333 NetworkManager[49836]: <info>  [1772021972.7939] device (tap47bb858f-17): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:19:32 np0005629333 NetworkManager[49836]: <info>  [1772021972.7949] device (tap47bb858f-17): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:19:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:32.803 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e77629c2-000c-4f22-9107-843c09a61dfe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:32.805 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap55e574c2-41 in ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
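
"Provisioning metadata" for the network means a dedicated ovnmeta-<network> namespace joined to br-int by a veth pair: tap55e574c2-41 lives inside the namespace (it will carry the 169.254.169.254 metadata address for the haproxy below), while its peer tap55e574c2-40 stays outside and is added to br-int a few lines further down. The agent does this through pyroute2 under privsep, but in plain iproute2 terms the sketch is:

    import subprocess

    NS = "ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1"

    def sh(*cmd):
        subprocess.run(cmd, check=True)

    sh("ip", "netns", "add", NS)
    # veth pair: the -40 end stays in the root namespace for br-int,
    # the -41 end moves into the metadata namespace.
    sh("ip", "link", "add", "tap55e574c2-40",
       "type", "veth", "peer", "name", "tap55e574c2-41")
    sh("ip", "link", "set", "tap55e574c2-41", "netns", NS)
    sh("ip", "netns", "exec", NS, "ip", "link", "set", "tap55e574c2-41", "up")
    sh("ip", "link", "set", "tap55e574c2-40", "up")
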
Feb 25 07:19:32 np0005629333 systemd-machined[210048]: New machine qemu-25-instance-00000017.
Feb 25 07:19:32 np0005629333 systemd[1]: Started Virtual Machine qemu-25-instance-00000017.
Feb 25 07:19:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:32.809 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap55e574c2-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:19:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:32.809 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dab282d1-dbed-4f12-a400-ab00dd2e4505]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:32.811 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[543c2c23-016c-4af5-a6d2-e32beb0270ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:32Z|00087|binding|INFO|Setting lport 47bb858f-172e-40b5-8ac0-02c531c303c3 ovn-installed in OVS
Feb 25 07:19:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:32Z|00088|binding|INFO|Setting lport 47bb858f-172e-40b5-8ac0-02c531c303c3 up in Southbound
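
ovn-controller has now claimed the lport, marked it ovn-installed in OVS, and set it up in the Southbound DB; that flip is what leads Neutron to send the network-vif-plugged event nova receives about a second later in this log. To check a binding by hand from the chassis, the generic ovn-sbctl database commands work; a sketch, assuming ovn-sbctl is installed and can reach the SB DB:

    import subprocess

    # Ask the Southbound DB which chassis holds the logical port and
    # whether it is up; expect up=true and this host's chassis record.
    out = subprocess.run(
        ["ovn-sbctl", "--columns=logical_port,chassis,up",
         "find", "Port_Binding",
         "logical_port=47bb858f-172e-40b5-8ac0-02c531c303c3"],
        check=True, capture_output=True, text=True)
    print(out.stdout)
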
Feb 25 07:19:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:32.826 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[8f843e82-c563-45f2-a967-5a8d565f526c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:32 np0005629333 nova_compute[244014]: 2026-02-25 12:19:32.830 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:32.915 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dc3bb933-cded-4c7f-aa28-475216c44d5b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:32.939 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3fb9f72c-a29a-439f-8c41-cc9b064789d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:32 np0005629333 NetworkManager[49836]: <info>  [1772021972.9505] manager: (tap55e574c2-40): new Veth device (/org/freedesktop/NetworkManager/Devices/60)
Feb 25 07:19:32 np0005629333 systemd-udevd[264228]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:19:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:32.949 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[afa8f6f0-1a2e-4a88-aa45-0c2339c5f2d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:32.975 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ad53c7b1-2dbe-49c0-bacc-d1f4a55a1811]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:32.979 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c4113a-108e-47c4-b45e-16d1f9c29eb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:32 np0005629333 NetworkManager[49836]: <info>  [1772021972.9979] device (tap55e574c2-40): carrier: link connected
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:33.000 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[bd31b21a-3012-442d-babf-e2fc83239d1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:33.014 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3aea074d-b5a8-4a50-b45c-9a059f4480b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55e574c2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:96:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 394240, 'reachable_time': 17034, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264258, 'error': None, 'target': 'ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:33.029 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[68046c81-9d9a-44d8-ad67-3db6e97e1657]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4f:966e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 394240, 'tstamp': 394240}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264259, 'error': None, 'target': 'ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:33.042 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b4b75ba1-1bc6-4cbc-acaf-157f3bca3466]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55e574c2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:96:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 394240, 'reachable_time': 17034, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264260, 'error': None, 'target': 'ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:33.069 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6affe936-09a7-4b9a-8bfc-b3342344c967]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:33.116 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[44989e6a-8d07-490f-981d-d5384fc0ae48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:33.117 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55e574c2-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:33.117 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:33.118 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55e574c2-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:33 np0005629333 NetworkManager[49836]: <info>  [1772021973.1204] manager: (tap55e574c2-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.120 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:33 np0005629333 kernel: tap55e574c2-40: entered promiscuous mode
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.122 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:33.123 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55e574c2-40, col_values=(('external_ids', {'iface-id': 'eaac86b8-7606-47e6-83eb-e27fec0ae5ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.124 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.128 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:33.128 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/55e574c2-43dc-4bbd-bf4c-2028df9ad3c1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/55e574c2-43dc-4bbd-bf4c-2028df9ad3c1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:33.129 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a4171ce6-c041-4308-a833-6b48ce7fe06b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:33.129 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/55e574c2-43dc-4bbd-bf4c-2028df9ad3c1.pid.haproxy
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 55e574c2-43dc-4bbd-bf4c-2028df9ad3c1
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
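
The rendered config is worth a close read: haproxy binds 169.254.169.254:80 inside the ovnmeta namespace, forwards every request to the agent's Unix socket (in haproxy, a server address starting with "/" is a Unix socket path, so "server metadata /var/lib/neutron/metadata_proxy" points at the socket), and stamps each request with X-OVN-Network-ID so the agent can resolve which instance is asking. A minimal sketch of what create_config_file does with this text, trimmed to the global/listen sections (plain string template, not Neutron's real one):

    import textwrap

    CFG = textwrap.dedent("""\
        global
            log /dev/log local0 debug
            log-tag haproxy-metadata-proxy-{net}
            user root
            group root
            maxconn 1024
            pidfile /var/lib/neutron/external/pids/{net}.pid.haproxy
            daemon

        listen listener
            bind 169.254.169.254:80
            server metadata /var/lib/neutron/metadata_proxy
            http-request add-header X-OVN-Network-ID {net}
        """)

    def write_cfg(net_id: str) -> str:
        # Same location the haproxy -f argument points at below.
        path = f"/var/lib/neutron/ovn-metadata-proxy/{net_id}.conf"
        with open(path, "w") as fh:
            fh.write(CFG.format(net=net_id))
        return path
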
Feb 25 07:19:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:33.130 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1', 'env', 'PROCESS_TAG=haproxy-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/55e574c2-43dc-4bbd-bf4c-2028df9ad3c1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 07:19:33 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:33Z|00089|binding|INFO|Releasing lport eaac86b8-7606-47e6-83eb-e27fec0ae5ff from this chassis (sb_readonly=0)
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.140 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.431 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.447 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021973.4467921, d44c3dbc-e4bc-4235-bd88-b39616473248 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.448 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] VM Started (Lifecycle Event)#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.488 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.495 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021973.450587, d44c3dbc-e4bc-4235-bd88-b39616473248 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.496 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.520 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.523 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:19:33 np0005629333 podman[264331]: 2026-02-25 12:19:33.529488626 +0000 UTC m=+0.078690229 container create 9b5519e76029d934b40d1e893028f870427fc34e304464576b213df1c1a95cc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.546 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
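The records above show Nova's lifecycle/power-state reconciliation: libvirt reports VM power_state 3 (PAUSED) while the database still holds 0 (NOSTATE), but because task_state is 'spawning' the sync is skipped rather than fought over. A minimal sketch of that skip decision, using the real nova.compute.power_state constant values; the helper name is hypothetical, not Nova's actual code:

    # Hypothetical condensation of the skip logic logged by
    # handle_lifecycle_event/sync_power_state above; the constants
    # match nova.compute.power_state.
    NOSTATE, RUNNING, PAUSED = 0, 1, 3

    def should_sync(db_power_state, vm_power_state, task_state):
        if task_state is not None:          # e.g. 'spawning': a task owns the
            return False                    # instance, so leave the DB alone
        return db_power_state != vm_power_state

    # The 12:19:33.546 record: DB 0 vs VM 3 with task 'spawning' -> skip
    assert should_sync(0, 3, 'spawning') is False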
Feb 25 07:19:33 np0005629333 podman[264331]: 2026-02-25 12:19:33.479068461 +0000 UTC m=+0.028270104 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:19:33 np0005629333 systemd[1]: Started libpod-conmon-9b5519e76029d934b40d1e893028f870427fc34e304464576b213df1c1a95cc2.scope.
Feb 25 07:19:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 530 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 5.1 MiB/s wr, 261 op/s
Feb 25 07:19:33 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:19:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85a7581b6a9b7832e1212a34f08cb65dbe5142b80457bc50869d2ba4120d093b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:19:33 np0005629333 podman[264331]: 2026-02-25 12:19:33.638245679 +0000 UTC m=+0.187447282 container init 9b5519e76029d934b40d1e893028f870427fc34e304464576b213df1c1a95cc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:19:33 np0005629333 podman[264331]: 2026-02-25 12:19:33.643105765 +0000 UTC m=+0.192307368 container start 9b5519e76029d934b40d1e893028f870427fc34e304464576b213df1c1a95cc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.674 244018 DEBUG nova.compute.manager [req-41080055-46cf-4689-8f41-afec874a415e req-7e1883a5-3f02-4051-946f-55ab5b56ca63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Received event network-vif-plugged-47bb858f-172e-40b5-8ac0-02c531c303c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.674 244018 DEBUG oslo_concurrency.lockutils [req-41080055-46cf-4689-8f41-afec874a415e req-7e1883a5-3f02-4051-946f-55ab5b56ca63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.675 244018 DEBUG oslo_concurrency.lockutils [req-41080055-46cf-4689-8f41-afec874a415e req-7e1883a5-3f02-4051-946f-55ab5b56ca63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.675 244018 DEBUG oslo_concurrency.lockutils [req-41080055-46cf-4689-8f41-afec874a415e req-7e1883a5-3f02-4051-946f-55ab5b56ca63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
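The acquire/release pair above is oslo.concurrency's standard named-lock pattern: a semaphore scoped to this instance's event queue, held only long enough to pop one event. A sketch of the same pattern; lockutils.synchronized is the real oslo.concurrency decorator, the function body is a stand-in:

    from oslo_concurrency import lockutils

    # Same lock name as in the DEBUG records above; oslo times both the
    # wait (0.000s) and the hold (0.000s) around the decorated call.
    @lockutils.synchronized('d44c3dbc-e4bc-4235-bd88-b39616473248-events')
    def _pop_event():
        # Runs with the per-instance events lock held.
        return 'network-vif-plugged-47bb858f-172e-40b5-8ac0-02c531c303c3'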
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.675 244018 DEBUG nova.compute.manager [req-41080055-46cf-4689-8f41-afec874a415e req-7e1883a5-3f02-4051-946f-55ab5b56ca63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Processing event network-vif-plugged-47bb858f-172e-40b5-8ac0-02c531c303c3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:19:33 np0005629333 neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1[264346]: [NOTICE]   (264350) : New worker (264352) forked
Feb 25 07:19:33 np0005629333 neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1[264346]: [NOTICE]   (264350) : Loading success.
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.676 244018 DEBUG nova.compute.manager [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.680 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021973.6794744, d44c3dbc-e4bc-4235-bd88-b39616473248 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.681 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.692 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.700 244018 INFO nova.virt.libvirt.driver [-] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Instance spawned successfully.#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.701 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.723 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.730 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.735 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.736 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.736 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.737 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.738 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.738 244018 DEBUG nova.virt.libvirt.driver [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
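The six "Found default for ..." records above are the libvirt driver persisting hardware-bus defaults for properties the image left unset, so later rebuilds and migrations keep the same virtual hardware. A sketch under the assumption of a plain dict holding the logged defaults; the helper name is hypothetical:

    # Defaults exactly as logged by _register_undefined_instance_details.
    LIBVIRT_DEFAULTS = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }

    def register_undefined_details(image_props):
        """Fill in any property the image did not set."""
        return {key: image_props.get(key, default)
                for key, default in LIBVIRT_DEFAULTS.items()}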
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.777 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.823 244018 INFO nova.compute.manager [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Took 9.90 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.824 244018 DEBUG nova.compute.manager [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.915 244018 INFO nova.compute.manager [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Took 11.30 seconds to build instance.#033[00m
Feb 25 07:19:33 np0005629333 nova_compute[244014]: 2026-02-25 12:19:33.939 244018 DEBUG oslo_concurrency.lockutils [None req-a5e9e863-79bb-4b53-b9f6-0509d63e4047 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:34 np0005629333 nova_compute[244014]: 2026-02-25 12:19:34.118 244018 INFO nova.compute.manager [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Rebuilding instance#033[00m
Feb 25 07:19:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:19:34 np0005629333 nova_compute[244014]: 2026-02-25 12:19:34.415 244018 DEBUG nova.objects.instance [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'trusted_certs' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:19:34 np0005629333 nova_compute[244014]: 2026-02-25 12:19:34.437 244018 DEBUG nova.compute.manager [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:19:34 np0005629333 nova_compute[244014]: 2026-02-25 12:19:34.486 244018 DEBUG nova.objects.instance [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'pci_requests' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:19:34 np0005629333 nova_compute[244014]: 2026-02-25 12:19:34.502 244018 DEBUG nova.objects.instance [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:19:34 np0005629333 nova_compute[244014]: 2026-02-25 12:19:34.523 244018 DEBUG nova.objects.instance [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'resources' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:19:34 np0005629333 nova_compute[244014]: 2026-02-25 12:19:34.567 244018 DEBUG nova.objects.instance [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'migration_context' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:19:34 np0005629333 nova_compute[244014]: 2026-02-25 12:19:34.588 244018 DEBUG nova.objects.instance [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 25 07:19:34 np0005629333 nova_compute[244014]: 2026-02-25 12:19:34.593 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 25 07:19:35 np0005629333 nova_compute[244014]: 2026-02-25 12:19:35.118 244018 DEBUG nova.compute.manager [None req-7c2f5faa-5f01-4417-99a0-262608624acf 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:19:35 np0005629333 nova_compute[244014]: 2026-02-25 12:19:35.159 244018 INFO nova.compute.manager [None req-7c2f5faa-5f01-4417-99a0-262608624acf 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] instance snapshotting#033[00m
Feb 25 07:19:35 np0005629333 nova_compute[244014]: 2026-02-25 12:19:35.463 244018 INFO nova.virt.libvirt.driver [None req-7c2f5faa-5f01-4417-99a0-262608624acf 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Beginning live snapshot process#033[00m
Feb 25 07:19:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 530 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 188 op/s
Feb 25 07:19:35 np0005629333 nova_compute[244014]: 2026-02-25 12:19:35.639 244018 DEBUG nova.virt.libvirt.imagebackend [None req-7c2f5faa-5f01-4417-99a0-262608624acf 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Feb 25 07:19:35 np0005629333 nova_compute[244014]: 2026-02-25 12:19:35.913 244018 DEBUG nova.storage.rbd_utils [None req-7c2f5faa-5f01-4417-99a0-262608624acf 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] creating snapshot(a278e77012d7446c92a8db2a544fcdfe) on rbd image(d44c3dbc-e4bc-4235-bd88-b39616473248_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 25 07:19:35 np0005629333 nova_compute[244014]: 2026-02-25 12:19:35.967 244018 DEBUG nova.compute.manager [req-3dbe77ad-a331-4721-98d5-ffbc3ecee43e req-fc1a83d7-1fe7-4a51-ac7f-b4e0407781fb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Received event network-vif-plugged-47bb858f-172e-40b5-8ac0-02c531c303c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:19:35 np0005629333 nova_compute[244014]: 2026-02-25 12:19:35.968 244018 DEBUG oslo_concurrency.lockutils [req-3dbe77ad-a331-4721-98d5-ffbc3ecee43e req-fc1a83d7-1fe7-4a51-ac7f-b4e0407781fb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:35 np0005629333 nova_compute[244014]: 2026-02-25 12:19:35.968 244018 DEBUG oslo_concurrency.lockutils [req-3dbe77ad-a331-4721-98d5-ffbc3ecee43e req-fc1a83d7-1fe7-4a51-ac7f-b4e0407781fb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:35 np0005629333 nova_compute[244014]: 2026-02-25 12:19:35.968 244018 DEBUG oslo_concurrency.lockutils [req-3dbe77ad-a331-4721-98d5-ffbc3ecee43e req-fc1a83d7-1fe7-4a51-ac7f-b4e0407781fb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:35 np0005629333 nova_compute[244014]: 2026-02-25 12:19:35.968 244018 DEBUG nova.compute.manager [req-3dbe77ad-a331-4721-98d5-ffbc3ecee43e req-fc1a83d7-1fe7-4a51-ac7f-b4e0407781fb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] No waiting events found dispatching network-vif-plugged-47bb858f-172e-40b5-8ac0-02c531c303c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:19:35 np0005629333 nova_compute[244014]: 2026-02-25 12:19:35.969 244018 WARNING nova.compute.manager [req-3dbe77ad-a331-4721-98d5-ffbc3ecee43e req-fc1a83d7-1fe7-4a51-ac7f-b4e0407781fb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Received unexpected event network-vif-plugged-47bb858f-172e-40b5-8ac0-02c531c303c3 for instance with vm_state active and task_state image_uploading.#033[00m
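This WARNING is benign: Neutron re-sent network-vif-plugged after the instance had already gone active (vm_state active, task_state image_uploading), so no waiter was registered and the event is dropped. A hypothetical condensation of the dispatch behind "No waiting events found":

    import logging
    LOG = logging.getLogger(__name__)

    def pop_instance_event(waiters, event_name):
        # waiters: {event_name: concurrent.futures.Future} registered by
        # wait_for_instance_event(); pop under the per-instance lock above.
        future = waiters.pop(event_name, None)
        if future is None:
            LOG.warning('Received unexpected event %s', event_name)
            return None
        future.set_result(event_name)   # wakes the waiting spawn thread
        return future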
Feb 25 07:19:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Feb 25 07:19:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Feb 25 07:19:36 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Feb 25 07:19:36 np0005629333 nova_compute[244014]: 2026-02-25 12:19:36.586 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:36 np0005629333 nova_compute[244014]: 2026-02-25 12:19:36.639 244018 DEBUG nova.storage.rbd_utils [None req-7c2f5faa-5f01-4417-99a0-262608624acf 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] cloning vms/d44c3dbc-e4bc-4235-bd88-b39616473248_disk@a278e77012d7446c92a8db2a544fcdfe to images/7dba2e0d-092c-46fd-a8a8-9e81515a6945 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 25 07:19:36 np0005629333 nova_compute[244014]: 2026-02-25 12:19:36.821 244018 DEBUG nova.storage.rbd_utils [None req-7c2f5faa-5f01-4417-99a0-262608624acf 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] flattening images/7dba2e0d-092c-46fd-a8a8-9e81515a6945 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
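The records from 12:19:35.913 onward trace Nova's RBD-backed live snapshot: snap the ephemeral disk in the vms pool, clone it into the images pool, then flatten the clone so it no longer depends on the parent (the source snap is removed at 12:19:37.286 below). A sketch of the same sequence with the standard librbd Python binding; pool, image, and snap names are taken from the log, the ceph.conf path is an assumption:

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
        with cluster.open_ioctx('vms') as vms, cluster.open_ioctx('images') as images:
            disk = 'd44c3dbc-e4bc-4235-bd88-b39616473248_disk'
            snap = 'a278e77012d7446c92a8db2a544fcdfe'
            dest = '7dba2e0d-092c-46fd-a8a8-9e81515a6945'
            with rbd.Image(vms, disk) as img:
                img.create_snap(snap)        # "creating snapshot(...)"
                img.protect_snap(snap)       # clones need a protected parent
            rbd.RBD().clone(vms, disk, snap, images, dest)  # "cloning vms/...@..."
            with rbd.Image(images, dest) as clone:
                clone.flatten()              # "flattening images/..."
            with rbd.Image(vms, disk) as img:
                img.unprotect_snap(snap)
                img.remove_snap(snap)        # "removing snapshot(...)" at 12:19:37.286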
Feb 25 07:19:36 np0005629333 kernel: tap2e503dd2-73 (unregistering): left promiscuous mode
Feb 25 07:19:36 np0005629333 NetworkManager[49836]: <info>  [1772021976.9623] device (tap2e503dd2-73): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:19:36 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:36Z|00090|binding|INFO|Releasing lport 2e503dd2-735e-4bfc-87c7-dffab319d935 from this chassis (sb_readonly=0)
Feb 25 07:19:36 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:36Z|00091|binding|INFO|Setting lport 2e503dd2-735e-4bfc-87c7-dffab319d935 down in Southbound
Feb 25 07:19:36 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:36Z|00092|binding|INFO|Removing iface tap2e503dd2-73 ovn-installed in OVS
Feb 25 07:19:36 np0005629333 nova_compute[244014]: 2026-02-25 12:19:36.974 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:36 np0005629333 nova_compute[244014]: 2026-02-25 12:19:36.976 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:36.982 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:71:62 10.100.0.5'], port_security=['fa:16:3e:87:71:62 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '52f927ad-a417-489f-9f92-87bc3433649d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fd7733ad-d262-4781-bcfa-77cfa8b67164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556b4b98-e95d-460c-a904-adc77baf4b88, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=2e503dd2-735e-4bfc-87c7-dffab319d935) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:19:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:36.983 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 2e503dd2-735e-4bfc-87c7-dffab319d935 in datapath 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 unbound from our chassis#033[00m
Feb 25 07:19:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:36.984 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6#033[00m
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.000 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.004 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[96aee681-cd98-4334-9f06-93852c396d5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:37 np0005629333 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000011.scope: Deactivated successfully.
Feb 25 07:19:37 np0005629333 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000011.scope: Consumed 13.023s CPU time.
Feb 25 07:19:37 np0005629333 systemd-machined[210048]: Machine qemu-19-instance-00000011 terminated.
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.025 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b1ca8dcf-3ebb-4aa6-a126-1eb15e55da30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.028 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b144f285-786a-4dfc-b661-8e67990d51f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.049 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[564da317-bd44-4b1d-9e97-6a0fdc70f6fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.061 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e59be8da-c8a7-47e6-8f2b-0816ac33e458]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f4cbf9a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:f8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389396, 'reachable_time': 28421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264478, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.072 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6ad0bef3-9a53-44e5-8805-ee39e3d121e2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389407, 'tstamp': 389407}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264479, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389409, 'tstamp': 389409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264479, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
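The two privsep replies above are raw netlink dumps (RTM_NEWLINK, then RTM_NEWADDR) for tap1f4cbf9a-41 inside the ovnmeta namespace, fetched by the agent's privileged daemon. Roughly the same query with pyroute2, run directly rather than over oslo.privsep (root required; a sketch, not the agent's actual code path):

    from pyroute2 import NetNS

    with NetNS('ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6') as ns:
        idx = ns.link_lookup(ifname='tap1f4cbf9a-41')[0]
        link = ns.get_links(idx)[0]      # RTM_NEWLINK: MAC, MTU, state, stats
        addrs = ns.get_addr(index=idx)   # RTM_NEWADDR: 10.100.0.2/28, 169.254.169.254/32
        print(link.get_attr('IFLA_OPERSTATE'),
              [a.get_attr('IFA_ADDRESS') for a in addrs])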
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.074 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f4cbf9a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.076 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.079 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.080 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f4cbf9a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.080 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.080 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f4cbf9a-40, col_values=(('external_ids', {'iface-id': '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.080 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
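The transaction triplet above (DelPort from br-ex, AddPort to br-int, DbSet of external_ids:iface-id) is the metadata agent making sure its veth leg is plugged into the integration bridge; both commands report "no change" because the port is already where it should be. A sketch of the same idempotent transaction through ovsdbapp's public API; port names and the iface-id are from the log, the OVSDB socket path is an assumption:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.del_port('tap1f4cbf9a-40', bridge='br-ex', if_exists=True))
        txn.add(ovs.add_port('br-int', 'tap1f4cbf9a-40', may_exist=True))
        txn.add(ovs.db_set('Interface', 'tap1f4cbf9a-40',
                           ('external_ids',
                            {'iface-id': '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'})))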
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.161 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.162 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:37 np0005629333 kernel: tap2e503dd2-73: entered promiscuous mode
Feb 25 07:19:37 np0005629333 systemd-udevd[264468]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:19:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:37Z|00093|binding|INFO|Claiming lport 2e503dd2-735e-4bfc-87c7-dffab319d935 for this chassis.
Feb 25 07:19:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:37Z|00094|binding|INFO|2e503dd2-735e-4bfc-87c7-dffab319d935: Claiming fa:16:3e:87:71:62 10.100.0.5
Feb 25 07:19:37 np0005629333 NetworkManager[49836]: <info>  [1772021977.1850] manager: (tap2e503dd2-73): new Tun device (/org/freedesktop/NetworkManager/Devices/62)
Feb 25 07:19:37 np0005629333 kernel: tap2e503dd2-73 (unregistering): left promiscuous mode
Feb 25 07:19:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:37Z|00095|binding|INFO|Setting lport 2e503dd2-735e-4bfc-87c7-dffab319d935 ovn-installed in OVS
Feb 25 07:19:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:37Z|00096|if_status|INFO|Not setting lport 2e503dd2-735e-4bfc-87c7-dffab319d935 down as sb is readonly
Feb 25 07:19:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:37Z|00097|binding|INFO|Releasing lport 2e503dd2-735e-4bfc-87c7-dffab319d935 from this chassis (sb_readonly=0)
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.214 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:71:62 10.100.0.5'], port_security=['fa:16:3e:87:71:62 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '52f927ad-a417-489f-9f92-87bc3433649d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fd7733ad-d262-4781-bcfa-77cfa8b67164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556b4b98-e95d-460c-a904-adc77baf4b88, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=2e503dd2-735e-4bfc-87c7-dffab319d935) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.215 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 2e503dd2-735e-4bfc-87c7-dffab319d935 in datapath 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 bound to our chassis#033[00m
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.217 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6#033[00m
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.229 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[49703212-0157-487f-8042-e3039fa446a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:37Z|00098|binding|INFO|Releasing lport eaac86b8-7606-47e6-83eb-e27fec0ae5ff from this chassis (sb_readonly=0)
Feb 25 07:19:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:37Z|00099|binding|INFO|Releasing lport f59f37b4-05c7-4f51-99f1-f2c4bac42231 from this chassis (sb_readonly=0)
Feb 25 07:19:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:37Z|00100|binding|INFO|Releasing lport 2cfd1e6b-d28d-43c0-bbbd-c6ad77855812 from this chassis (sb_readonly=0)
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.244 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:71:62 10.100.0.5'], port_security=['fa:16:3e:87:71:62 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '52f927ad-a417-489f-9f92-87bc3433649d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fd7733ad-d262-4781-bcfa-77cfa8b67164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556b4b98-e95d-460c-a904-adc77baf4b88, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=2e503dd2-735e-4bfc-87c7-dffab319d935) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:19:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:37Z|00101|binding|INFO|Removing iface tap2e503dd2-73 ovn-installed in OVS
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.257 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.261 244018 DEBUG nova.compute.manager [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.266 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f7080701-d546-4609-bf50-a7b382628dcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.270 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1b923b14-cff1-4df0-bb77-8689dffdaeeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.286 244018 DEBUG nova.storage.rbd_utils [None req-7c2f5faa-5f01-4417-99a0-262608624acf 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] removing snapshot(a278e77012d7446c92a8db2a544fcdfe) on rbd image(d44c3dbc-e4bc-4235-bd88-b39616473248_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.291 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[395032de-2c7b-4339-917c-269b52875caa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.304 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2685cb4f-05bc-40f0-8d7c-c1efb30e1c1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f4cbf9a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:f8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389396, 'reachable_time': 28421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264508, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.321 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0f0dd05a-892b-4c6a-9d14-93999fad038a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389407, 'tstamp': 389407}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264509, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389409, 'tstamp': 389409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264509, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.322 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f4cbf9a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.323 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.326 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.326 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f4cbf9a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.326 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.327 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f4cbf9a-40, col_values=(('external_ids', {'iface-id': '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.327 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.328 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 2e503dd2-735e-4bfc-87c7-dffab319d935 in datapath 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 unbound from our chassis
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.329 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.341 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[934bdf32-2314-4cf3-95f0-2bfe7e31ef77]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.359 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.360 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.366 244018 DEBUG nova.virt.hardware [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.367 244018 INFO nova.compute.claims [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.383 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b086929a-ea3f-486a-a0b6-94d8afda0727]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.386 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[73cc53f7-14d7-4932-b2c8-b75c670f21a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.412 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[476e6f62-199d-4a8b-a31a-b85f01361a66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.427 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[85dec762-de0f-495e-b5ed-aae9e39c248e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f4cbf9a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:f8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389396, 'reachable_time': 28421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264516, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.445 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb13667-afcb-4ee2-bb26-b41e08d34ea9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389407, 'tstamp': 389407}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264517, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389409, 'tstamp': 389409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264517, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.447 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f4cbf9a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.448 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.452 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.452 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f4cbf9a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.453 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.453 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f4cbf9a-40, col_values=(('external_ids', {'iface-id': '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:19:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:37.453 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.558 244018 DEBUG oslo_concurrency.processutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:19:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 554 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 2.2 MiB/s wr, 229 op/s
Feb 25 07:19:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Feb 25 07:19:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Feb 25 07:19:37 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.665 244018 DEBUG nova.storage.rbd_utils [None req-7c2f5faa-5f01-4417-99a0-262608624acf 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] creating snapshot(snap) on rbd image(7dba2e0d-092c-46fd-a8a8-9e81515a6945) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.702 244018 INFO nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance shutdown successfully after 3 seconds.
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.713 244018 INFO nova.virt.libvirt.driver [-] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance destroyed successfully.
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.721 244018 INFO nova.virt.libvirt.driver [-] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance destroyed successfully.
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.722 244018 DEBUG nova.virt.libvirt.vif [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:18:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1736622759',display_name='tempest-ServersAdminTestJSON-server-1736622759',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1736622759',id=17,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:18:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-ae0qgj1z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:33Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=52f927ad-a417-489f-9f92-87bc3433649d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.723 244018 DEBUG nova.network.os_vif_util [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.724 244018 DEBUG nova.network.os_vif_util [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.725 244018 DEBUG os_vif [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.728 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.728 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e503dd2-73, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.730 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.731 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:19:37 np0005629333 nova_compute[244014]: 2026-02-25 12:19:37.733 244018 INFO os_vif [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73')
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.059 244018 INFO nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Deleting instance files /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d_del
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.059 244018 INFO nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Deletion of /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d_del complete
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.068 244018 DEBUG nova.compute.manager [req-7854f711-6cb9-4d7d-af0a-b6585b57f76c req-f2ebb1c0-2911-46a1-91f6-026b2ef2f5f5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-vif-unplugged-2e503dd2-735e-4bfc-87c7-dffab319d935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.068 244018 DEBUG oslo_concurrency.lockutils [req-7854f711-6cb9-4d7d-af0a-b6585b57f76c req-f2ebb1c0-2911-46a1-91f6-026b2ef2f5f5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.069 244018 DEBUG oslo_concurrency.lockutils [req-7854f711-6cb9-4d7d-af0a-b6585b57f76c req-f2ebb1c0-2911-46a1-91f6-026b2ef2f5f5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.070 244018 DEBUG oslo_concurrency.lockutils [req-7854f711-6cb9-4d7d-af0a-b6585b57f76c req-f2ebb1c0-2911-46a1-91f6-026b2ef2f5f5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.071 244018 DEBUG nova.compute.manager [req-7854f711-6cb9-4d7d-af0a-b6585b57f76c req-f2ebb1c0-2911-46a1-91f6-026b2ef2f5f5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] No waiting events found dispatching network-vif-unplugged-2e503dd2-735e-4bfc-87c7-dffab319d935 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.071 244018 WARNING nova.compute.manager [req-7854f711-6cb9-4d7d-af0a-b6585b57f76c req-f2ebb1c0-2911-46a1-91f6-026b2ef2f5f5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received unexpected event network-vif-unplugged-2e503dd2-735e-4bfc-87c7-dffab319d935 for instance with vm_state error and task_state rebuilding.
Feb 25 07:19:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:19:38 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2945168416' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.124 244018 DEBUG oslo_concurrency.processutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.132 244018 DEBUG nova.compute.provider_tree [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.172 244018 DEBUG nova.scheduler.client.report [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.226 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.227 244018 DEBUG nova.compute.manager [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.234 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.234 244018 INFO nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Creating image(s)
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.257 244018 DEBUG nova.storage.rbd_utils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.290 244018 DEBUG nova.storage.rbd_utils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.320 244018 DEBUG nova.storage.rbd_utils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.323 244018 DEBUG oslo_concurrency.processutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.354 244018 DEBUG nova.compute.manager [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.355 244018 DEBUG nova.network.neutron [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.382 244018 INFO nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.406 244018 DEBUG oslo_concurrency.processutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.407 244018 DEBUG oslo_concurrency.lockutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.408 244018 DEBUG oslo_concurrency.lockutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.408 244018 DEBUG oslo_concurrency.lockutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.430 244018 DEBUG nova.storage.rbd_utils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.437 244018 DEBUG oslo_concurrency.processutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 52f927ad-a417-489f-9f92-87bc3433649d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.452 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.458 244018 DEBUG nova.compute.manager [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.576 244018 DEBUG nova.compute.manager [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.577 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.578 244018 INFO nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Creating image(s)
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.618 244018 DEBUG nova.storage.rbd_utils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] rbd image 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:19:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.656 244018 DEBUG nova.storage.rbd_utils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] rbd image 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:19:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Feb 25 07:19:38 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.719 244018 DEBUG nova.storage.rbd_utils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] rbd image 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.723 244018 DEBUG oslo_concurrency.processutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.749 244018 DEBUG nova.policy [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9aa84b2700234a5e9dcba1fc0bbc4cea', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67d0ed57ac554e4390e928b3c8f9b5f6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.753 244018 DEBUG oslo_concurrency.processutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 52f927ad-a417-489f-9f92-87bc3433649d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.814 244018 DEBUG oslo_concurrency.processutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.815 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.816 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.816 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.847 244018 DEBUG nova.storage.rbd_utils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] rbd image 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.851 244018 DEBUG oslo_concurrency.processutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.880 244018 DEBUG nova.storage.rbd_utils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] resizing rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.997 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.998 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Ensure instance console log exists: /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:19:38 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.999 244018 DEBUG oslo_concurrency.lockutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:38.999 244018 DEBUG oslo_concurrency.lockutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.000 244018 DEBUG oslo_concurrency.lockutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.003 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Start _get_guest_xml network_info=[{"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.009 244018 WARNING nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.018 244018 DEBUG nova.virt.libvirt.host [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.019 244018 DEBUG nova.virt.libvirt.host [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.023 244018 DEBUG nova.virt.libvirt.host [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.024 244018 DEBUG nova.virt.libvirt.host [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.024 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.024 244018 DEBUG nova.virt.hardware [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.025 244018 DEBUG nova.virt.hardware [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.025 244018 DEBUG nova.virt.hardware [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.025 244018 DEBUG nova.virt.hardware [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.026 244018 DEBUG nova.virt.hardware [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.026 244018 DEBUG nova.virt.hardware [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.026 244018 DEBUG nova.virt.hardware [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.026 244018 DEBUG nova.virt.hardware [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.027 244018 DEBUG nova.virt.hardware [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.027 244018 DEBUG nova.virt.hardware [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.027 244018 DEBUG nova.virt.hardware [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.027 244018 DEBUG nova.objects.instance [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'vcpu_model' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.056 244018 DEBUG oslo_concurrency.processutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.182 244018 DEBUG oslo_concurrency.processutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:19:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:19:39 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:39Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c6:da:42 10.100.0.12
Feb 25 07:19:39 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:39Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c6:da:42 10.100.0.12
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.283 244018 DEBUG nova.storage.rbd_utils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] resizing rbd image 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.432 244018 DEBUG nova.objects.instance [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lazy-loading 'migration_context' on Instance uuid 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.454 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.454 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Ensure instance console log exists: /var/lib/nova/instances/267cd6f8-d842-45a8-b4b1-4c6f3dee8d69/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.455 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.455 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.455 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:19:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 554 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 3.7 MiB/s wr, 236 op/s
Feb 25 07:19:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:19:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/334233231' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.645 244018 DEBUG oslo_concurrency.processutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.670 244018 DEBUG nova.storage.rbd_utils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.673 244018 DEBUG oslo_concurrency.processutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:19:39 np0005629333 nova_compute[244014]: 2026-02-25 12:19:39.766 244018 DEBUG nova.network.neutron [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Successfully created port: ea269819-4b09-472f-b5f6-ad74852b3850 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.165 244018 DEBUG nova.compute.manager [req-2514a940-1362-47d3-802c-c24437982024 req-7b94479f-67e1-42ed-8380-765257e1d73e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.165 244018 DEBUG oslo_concurrency.lockutils [req-2514a940-1362-47d3-802c-c24437982024 req-7b94479f-67e1-42ed-8380-765257e1d73e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.166 244018 DEBUG oslo_concurrency.lockutils [req-2514a940-1362-47d3-802c-c24437982024 req-7b94479f-67e1-42ed-8380-765257e1d73e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.166 244018 DEBUG oslo_concurrency.lockutils [req-2514a940-1362-47d3-802c-c24437982024 req-7b94479f-67e1-42ed-8380-765257e1d73e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.166 244018 DEBUG nova.compute.manager [req-2514a940-1362-47d3-802c-c24437982024 req-7b94479f-67e1-42ed-8380-765257e1d73e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] No waiting events found dispatching network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.167 244018 WARNING nova.compute.manager [req-2514a940-1362-47d3-802c-c24437982024 req-7b94479f-67e1-42ed-8380-765257e1d73e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received unexpected event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 for instance with vm_state error and task_state rebuild_spawning.
Feb 25 07:19:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:19:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4086827651' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.208 244018 DEBUG oslo_concurrency.processutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.210 244018 DEBUG nova.virt.libvirt.vif [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:18:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1736622759',display_name='tempest-ServersAdminTestJSON-server-1736622759',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1736622759',id=17,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:18:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-ae0qgj1z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:38Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=52f927ad-a417-489f-9f92-87bc3433649d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.210 244018 DEBUG nova.network.os_vif_util [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.211 244018 DEBUG nova.network.os_vif_util [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.213 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:19:40 np0005629333 nova_compute[244014]:  <uuid>52f927ad-a417-489f-9f92-87bc3433649d</uuid>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:  <name>instance-00000011</name>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServersAdminTestJSON-server-1736622759</nova:name>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:19:39</nova:creationTime>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:19:40 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:        <nova:user uuid="6395ac4bfa5d4910aed9116395bbbdeb">tempest-ServersAdminTestJSON-147238686-project-member</nova:user>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:        <nova:project uuid="b35cd816238a43d8825ab11e83d2b8bf">tempest-ServersAdminTestJSON-147238686</nova:project>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="f0ef5a9a-23b8-4883-8e47-feb7403a11d8"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:        <nova:port uuid="2e503dd2-735e-4bfc-87c7-dffab319d935">
Feb 25 07:19:40 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <entry name="serial">52f927ad-a417-489f-9f92-87bc3433649d</entry>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <entry name="uuid">52f927ad-a417-489f-9f92-87bc3433649d</entry>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/52f927ad-a417-489f-9f92-87bc3433649d_disk">
Feb 25 07:19:40 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:19:40 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/52f927ad-a417-489f-9f92-87bc3433649d_disk.config">
Feb 25 07:19:40 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:19:40 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:87:71:62"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <target dev="tap2e503dd2-73"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/console.log" append="off"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:19:40 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:19:40 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:19:40 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:19:40 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.214 244018 DEBUG nova.virt.libvirt.vif [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:18:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1736622759',display_name='tempest-ServersAdminTestJSON-server-1736622759',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1736622759',id=17,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:18:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-ae0qgj1z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:38Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=52f927ad-a417-489f-9f92-87bc3433649d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.215 244018 DEBUG nova.network.os_vif_util [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.215 244018 DEBUG nova.network.os_vif_util [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.216 244018 DEBUG os_vif [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.216 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.217 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.217 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.220 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.220 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e503dd2-73, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.221 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2e503dd2-73, col_values=(('external_ids', {'iface-id': '2e503dd2-735e-4bfc-87c7-dffab319d935', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:71:62', 'vm-uuid': '52f927ad-a417-489f-9f92-87bc3433649d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.222 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:19:40 np0005629333 NetworkManager[49836]: <info>  [1772021980.2239] manager: (tap2e503dd2-73): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.226 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.228 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.228 244018 INFO os_vif [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73')
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.328 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.328 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.329 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No VIF found with MAC fa:16:3e:87:71:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.329 244018 INFO nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Using config drive
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.355 244018 DEBUG nova.storage.rbd_utils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.363 244018 INFO nova.virt.libvirt.driver [None req-7c2f5faa-5f01-4417-99a0-262608624acf 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Snapshot image upload complete
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.364 244018 INFO nova.compute.manager [None req-7c2f5faa-5f01-4417-99a0-262608624acf 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Took 5.20 seconds to snapshot the instance on the hypervisor.
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.407 244018 DEBUG nova.objects.instance [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'ec2_ids' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.475 244018 DEBUG nova.objects.instance [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'keypairs' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.580 244018 DEBUG nova.network.neutron [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Successfully updated port: ea269819-4b09-472f-b5f6-ad74852b3850 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.617 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "refresh_cache-267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.618 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquired lock "refresh_cache-267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.618 244018 DEBUG nova.network.neutron [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.850 244018 DEBUG nova.compute.manager [req-af60b009-d890-487e-9731-3966fc8d253b req-54b613e3-2e3a-4b2d-8930-1ca11f8299b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Received event network-changed-ea269819-4b09-472f-b5f6-ad74852b3850 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.851 244018 DEBUG nova.compute.manager [req-af60b009-d890-487e-9731-3966fc8d253b req-54b613e3-2e3a-4b2d-8930-1ca11f8299b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Refreshing instance network info cache due to event network-changed-ea269819-4b09-472f-b5f6-ad74852b3850. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.851 244018 DEBUG oslo_concurrency.lockutils [req-af60b009-d890-487e-9731-3966fc8d253b req-54b613e3-2e3a-4b2d-8930-1ca11f8299b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.867 244018 DEBUG nova.network.neutron [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.887 244018 INFO nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Creating config drive at /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config
Feb 25 07:19:40 np0005629333 nova_compute[244014]: 2026-02-25 12:19:40.892 244018 DEBUG oslo_concurrency.processutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqrx5g7sb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:19:41 np0005629333 nova_compute[244014]: 2026-02-25 12:19:41.025 244018 DEBUG oslo_concurrency.processutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqrx5g7sb" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:19:41 np0005629333 nova_compute[244014]: 2026-02-25 12:19:41.063 244018 DEBUG nova.storage.rbd_utils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:19:41 np0005629333 nova_compute[244014]: 2026-02-25 12:19:41.068 244018 DEBUG oslo_concurrency.processutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config 52f927ad-a417-489f-9f92-87bc3433649d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:19:41 np0005629333 nova_compute[244014]: 2026-02-25 12:19:41.276 244018 DEBUG oslo_concurrency.processutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config 52f927ad-a417-489f-9f92-87bc3433649d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:19:41 np0005629333 nova_compute[244014]: 2026-02-25 12:19:41.277 244018 INFO nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Deleting local config drive /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config because it was imported into RBD.
Feb 25 07:19:41 np0005629333 NetworkManager[49836]: <info>  [1772021981.3078] manager: (tap2e503dd2-73): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Feb 25 07:19:41 np0005629333 kernel: tap2e503dd2-73: entered promiscuous mode
Feb 25 07:19:41 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:41Z|00102|binding|INFO|Claiming lport 2e503dd2-735e-4bfc-87c7-dffab319d935 for this chassis.
Feb 25 07:19:41 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:41Z|00103|binding|INFO|2e503dd2-735e-4bfc-87c7-dffab319d935: Claiming fa:16:3e:87:71:62 10.100.0.5
Feb 25 07:19:41 np0005629333 nova_compute[244014]: 2026-02-25 12:19:41.314 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:19:41 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:41Z|00104|binding|INFO|Setting lport 2e503dd2-735e-4bfc-87c7-dffab319d935 ovn-installed in OVS
Feb 25 07:19:41 np0005629333 systemd-udevd[265045]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:19:41 np0005629333 nova_compute[244014]: 2026-02-25 12:19:41.333 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:19:41 np0005629333 nova_compute[244014]: 2026-02-25 12:19:41.335 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:19:41 np0005629333 systemd-machined[210048]: New machine qemu-26-instance-00000011.
Feb 25 07:19:41 np0005629333 NetworkManager[49836]: <info>  [1772021981.3447] device (tap2e503dd2-73): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:19:41 np0005629333 NetworkManager[49836]: <info>  [1772021981.3451] device (tap2e503dd2-73): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:19:41 np0005629333 systemd[1]: Started Virtual Machine qemu-26-instance-00000011.
Feb 25 07:19:41 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:41Z|00105|binding|INFO|Setting lport 2e503dd2-735e-4bfc-87c7-dffab319d935 up in Southbound
Feb 25 07:19:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:41.359 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:71:62 10.100.0.5'], port_security=['fa:16:3e:87:71:62 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '52f927ad-a417-489f-9f92-87bc3433649d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'fd7733ad-d262-4781-bcfa-77cfa8b67164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556b4b98-e95d-460c-a904-adc77baf4b88, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=2e503dd2-735e-4bfc-87c7-dffab319d935) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:19:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:41.360 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 2e503dd2-735e-4bfc-87c7-dffab319d935 in datapath 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 bound to our chassis
Feb 25 07:19:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:41.362 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6
Feb 25 07:19:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:41.372 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3b71daa6-b3b2-4574-b43e-3278b027b426]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:19:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:41.406 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[09343034-1f91-46b7-a460-0abe4b24b908]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:19:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:41.421 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a5e373a2-6a45-494f-8025-ff3a44166459]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:19:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:41.447 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5d8f1fd6-262d-4413-b4da-abe6c1175dd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:19:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:41.467 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[06f0b3ce-7803-46c7-bc64-194d381395c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f4cbf9a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:f8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 17, 'rx_bytes': 784, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 17, 'rx_bytes': 784, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389396, 'reachable_time': 28421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265060, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:19:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:41.482 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f5e8930f-e5eb-4e71-bbc4-e69d7a61cff3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389407, 'tstamp': 389407}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265061, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389409, 'tstamp': 389409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265061, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:41.484 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f4cbf9a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:41 np0005629333 nova_compute[244014]: 2026-02-25 12:19:41.486 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:41 np0005629333 nova_compute[244014]: 2026-02-25 12:19:41.487 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:41.488 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f4cbf9a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:41.488 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:19:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:41.488 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f4cbf9a-40, col_values=(('external_ids', {'iface-id': '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:41.489 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
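[editor's note] The three transactions above (DelPortCommand with if_exists, AddPortCommand with may_exist, DbSetCommand on external_ids) are the idempotent "wire the metadata tap into br-int" sequence; the "Transaction caused no change" lines show the port was already in the desired state. A sketch of issuing the same commands through ovsdbapp's public API (the socket path is an assumption):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # if_exists/may_exist make the whole sequence safe to replay
        txn.add(api.del_port('tap1f4cbf9a-40', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap1f4cbf9a-40', may_exist=True))
        txn.add(api.db_set('Interface', 'tap1f4cbf9a-40',
                           ('external_ids',
                            {'iface-id': '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'})))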
Feb 25 07:19:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 625 MiB data, 577 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 11 MiB/s wr, 361 op/s
Feb 25 07:19:41 np0005629333 nova_compute[244014]: 2026-02-25 12:19:41.993 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for 52f927ad-a417-489f-9f92-87bc3433649d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 25 07:19:41 np0005629333 nova_compute[244014]: 2026-02-25 12:19:41.994 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021981.993051, 52f927ad-a417-489f-9f92-87bc3433649d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:19:41 np0005629333 nova_compute[244014]: 2026-02-25 12:19:41.994 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:19:41 np0005629333 nova_compute[244014]: 2026-02-25 12:19:41.999 244018 DEBUG nova.compute.manager [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:41.999 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.004 244018 INFO nova.virt.libvirt.driver [-] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance spawned successfully.#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.004 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.018 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.021 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.044 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.045 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.045 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.045 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.056 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.057 244018 DEBUG nova.virt.libvirt.driver [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:19:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:19:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:19:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003999499221437105 of space, bias 1.0, pg target 1.1998497664311314 quantized to 32 (current 32)
Feb 25 07:19:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:19:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:19:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:19:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:19:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:19:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0027272717409505894 of space, bias 1.0, pg target 0.8154542505442263 quantized to 32 (current 32)
Feb 25 07:19:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:19:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.369006830150746e-07 of space, bias 4.0, pg target 0.0007617332168860292 quantized to 16 (current 16)
Feb 25 07:19:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:19:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:19:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:19:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Feb 25 07:19:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:19:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Feb 25 07:19:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:19:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:19:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:19:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
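[editor's note] Every pool line above follows the same arithmetic: (fraction of raw space used) x (target PG count for the root, here exactly 300: 0.003999499... x 300 = 1.1998497...) x bias gives the fractional "pg target", which is then quantized to a power of two but never below the pool's existing floor — hence "quantized to 32 (current 32)" even for tiny targets, while .mgr with a floor of 1 quantizes to 1. A hedged reconstruction (the floor handling is an assumption; the real logic lives in ceph-mgr's pg_autoscaler module):

    import math

    def quantize_pg_target(pg_target, pg_floor=1):
        # round the fractional target up to the next power of two,
        # but never drop below the assumed per-pool floor
        raw = max(pg_target, 1.0)
        return max(pg_floor, 2 ** math.ceil(math.log2(raw)))

    # 'vms' above: target 1.1998... with floor 32 -> stays at 32
    assert quantize_pg_target(1.1998497664311314, pg_floor=32) == 32
    # '.mgr' above: target 0.00215... with floor 1 -> 1
    assert quantize_pg_target(0.0021557249951162337, pg_floor=1) == 1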
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.070 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.070 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021981.9972565, 52f927ad-a417-489f-9f92-87bc3433649d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.070 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] VM Started (Lifecycle Event)#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.117 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.122 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.159 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.164 244018 DEBUG nova.compute.manager [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.241 244018 DEBUG oslo_concurrency.lockutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.242 244018 DEBUG oslo_concurrency.lockutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.242 244018 DEBUG nova.objects.instance [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.330 244018 DEBUG oslo_concurrency.lockutils [None req-6cae7f77-6b82-4f83-9a00-5f6a6905aa56 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
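[editor's note] The acquiring/acquired/released triplet above is oslo.concurrency's standard instrumentation around a named in-process lock; finish_evacuation held "compute_resources" for 0.088s. The usual call-site shape (a sketch, not nova's actual code):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def finish_evacuation():
        # everything in here runs with the named lock held; the
        # acquire/release DEBUG lines above are emitted automatically
        ...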
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.342 244018 DEBUG nova.network.neutron [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Updating instance_info_cache with network_info: [{"id": "ea269819-4b09-472f-b5f6-ad74852b3850", "address": "fa:16:3e:14:80:6d", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea269819-4b", "ovs_interfaceid": "ea269819-4b09-472f-b5f6-ad74852b3850", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.400 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Releasing lock "refresh_cache-267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.400 244018 DEBUG nova.compute.manager [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Instance network_info: |[{"id": "ea269819-4b09-472f-b5f6-ad74852b3850", "address": "fa:16:3e:14:80:6d", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea269819-4b", "ovs_interfaceid": "ea269819-4b09-472f-b5f6-ad74852b3850", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.401 244018 DEBUG oslo_concurrency.lockutils [req-af60b009-d890-487e-9731-3966fc8d253b req-54b613e3-2e3a-4b2d-8930-1ca11f8299b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.401 244018 DEBUG nova.network.neutron [req-af60b009-d890-487e-9731-3966fc8d253b req-54b613e3-2e3a-4b2d-8930-1ca11f8299b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Refreshing network info cache for port ea269819-4b09-472f-b5f6-ad74852b3850 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.410 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Start _get_guest_xml network_info=[{"id": "ea269819-4b09-472f-b5f6-ad74852b3850", "address": "fa:16:3e:14:80:6d", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea269819-4b", "ovs_interfaceid": "ea269819-4b09-472f-b5f6-ad74852b3850", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.419 244018 DEBUG nova.compute.manager [req-8de3bc75-7727-4b11-9497-6b78340e237d req-3ed54a56-e5a0-4226-a315-0c0b43ad590e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.420 244018 DEBUG oslo_concurrency.lockutils [req-8de3bc75-7727-4b11-9497-6b78340e237d req-3ed54a56-e5a0-4226-a315-0c0b43ad590e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.420 244018 DEBUG oslo_concurrency.lockutils [req-8de3bc75-7727-4b11-9497-6b78340e237d req-3ed54a56-e5a0-4226-a315-0c0b43ad590e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.420 244018 DEBUG oslo_concurrency.lockutils [req-8de3bc75-7727-4b11-9497-6b78340e237d req-3ed54a56-e5a0-4226-a315-0c0b43ad590e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.421 244018 DEBUG nova.compute.manager [req-8de3bc75-7727-4b11-9497-6b78340e237d req-3ed54a56-e5a0-4226-a315-0c0b43ad590e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] No waiting events found dispatching network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.421 244018 WARNING nova.compute.manager [req-8de3bc75-7727-4b11-9497-6b78340e237d req-3ed54a56-e5a0-4226-a315-0c0b43ad590e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received unexpected event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.425 244018 WARNING nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.430 244018 DEBUG nova.virt.libvirt.host [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.431 244018 DEBUG nova.virt.libvirt.host [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.434 244018 DEBUG nova.virt.libvirt.host [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.434 244018 DEBUG nova.virt.libvirt.host [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.437 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.437 244018 DEBUG nova.virt.hardware [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.437 244018 DEBUG nova.virt.hardware [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.438 244018 DEBUG nova.virt.hardware [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.438 244018 DEBUG nova.virt.hardware [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.438 244018 DEBUG nova.virt.hardware [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.438 244018 DEBUG nova.virt.hardware [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.439 244018 DEBUG nova.virt.hardware [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.439 244018 DEBUG nova.virt.hardware [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.439 244018 DEBUG nova.virt.hardware [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.439 244018 DEBUG nova.virt.hardware [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.439 244018 DEBUG nova.virt.hardware [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
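[editor's note] With no flavor or image constraints (all limits 0:0:0, maxima 65536), the topology search just enumerates sockets x cores x threads factorizations of the vCPU count; for 1 vCPU the only solution is 1:1:1, matching "Got 1 possible topologies". A simplified sketch of that enumeration (nova.virt.hardware does the same with extra filtering and preference sorting):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # yield every (sockets, cores, threads) triple whose product
        # equals the requested vCPU count, within the given maxima
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]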
Feb 25 07:19:42 np0005629333 nova_compute[244014]: 2026-02-25 12:19:42.443 244018 DEBUG oslo_concurrency.processutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:19:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1899825016' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.112 244018 DEBUG oslo_concurrency.processutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.669s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.156 244018 DEBUG nova.storage.rbd_utils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] rbd image 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
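[editor's note] The "rbd image ... does not exist" DEBUG above is nova probing for a pre-existing config-drive image in the vms pool before creating it. A minimal equivalent with the python-rados/python-rbd bindings (pool and client name taken from the log; error handling trimmed):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        rbd.Image(ioctx, '267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk.config').close()
    except rbd.ImageNotFound:
        print('image does not exist')  # matches the log line above
    finally:
        ioctx.close()
        cluster.shutdown()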
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.163 244018 DEBUG oslo_concurrency.processutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:43 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:43Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:23:d7:29 10.100.0.3
Feb 25 07:19:43 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:43Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:23:d7:29 10.100.0.3
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.434 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 648 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 16 MiB/s wr, 534 op/s
Feb 25 07:19:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:19:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2718956847' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.658 244018 DEBUG oslo_concurrency.processutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
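[editor's note] Both monmap fetches above go through oslo.concurrency's processutils, which logs the command at dispatch and again with the exit code and wall time (0.669s, then 0.495s); the JSON on stdout supplies the monitor address used in the domain XML below. The call reduces to:

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    # 'out' now holds the monmap JSON; a non-zero exit raises
    # processutils.ProcessExecutionError instead of returning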
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.660 244018 DEBUG nova.virt.libvirt.vif [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:19:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1490579841',display_name='tempest-FloatingIPsAssociationTestJSON-server-1490579841',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1490579841',id=24,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67d0ed57ac554e4390e928b3c8f9b5f6',ramdisk_id='',reservation_id='r-nihkdknn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1904923370',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1904923370-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:38Z,user_data=None,user_id='9aa84b2700234a5e9dcba1fc0bbc4cea',uuid=267cd6f8-d842-45a8-b4b1-4c6f3dee8d69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea269819-4b09-472f-b5f6-ad74852b3850", "address": "fa:16:3e:14:80:6d", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea269819-4b", "ovs_interfaceid": "ea269819-4b09-472f-b5f6-ad74852b3850", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.661 244018 DEBUG nova.network.os_vif_util [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Converting VIF {"id": "ea269819-4b09-472f-b5f6-ad74852b3850", "address": "fa:16:3e:14:80:6d", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea269819-4b", "ovs_interfaceid": "ea269819-4b09-472f-b5f6-ad74852b3850", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.662 244018 DEBUG nova.network.os_vif_util [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:80:6d,bridge_name='br-int',has_traffic_filtering=True,id=ea269819-4b09-472f-b5f6-ad74852b3850,network=Network(41c706f5-6f0b-47a8-91a4-16f87e2a0571),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea269819-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
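[editor's note] The conversion above turns nova's network_info dict into a typed os-vif object; the repr in the log lists the fields the ovs plugin actually consumes. Hand-building the same object (a sketch; the field set is taken from the repr, the Network kwargs are assumptions):

    from os_vif.objects import network, vif

    net = network.Network(id='41c706f5-6f0b-47a8-91a4-16f87e2a0571',
                          bridge='br-int', mtu=1442)
    ovs_vif = vif.VIFOpenVSwitch(
        id='ea269819-4b09-472f-b5f6-ad74852b3850',
        address='fa:16:3e:14:80:6d',
        bridge_name='br-int',
        has_traffic_filtering=True,
        network=net,
        vif_name='tapea269819-4b')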
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.663 244018 DEBUG nova.objects.instance [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.707 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:19:43 np0005629333 nova_compute[244014]:  <uuid>267cd6f8-d842-45a8-b4b1-4c6f3dee8d69</uuid>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:  <name>instance-00000018</name>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1490579841</nova:name>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:19:42</nova:creationTime>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:19:43 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:        <nova:user uuid="9aa84b2700234a5e9dcba1fc0bbc4cea">tempest-FloatingIPsAssociationTestJSON-1904923370-project-member</nova:user>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:        <nova:project uuid="67d0ed57ac554e4390e928b3c8f9b5f6">tempest-FloatingIPsAssociationTestJSON-1904923370</nova:project>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:        <nova:port uuid="ea269819-4b09-472f-b5f6-ad74852b3850">
Feb 25 07:19:43 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <entry name="serial">267cd6f8-d842-45a8-b4b1-4c6f3dee8d69</entry>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <entry name="uuid">267cd6f8-d842-45a8-b4b1-4c6f3dee8d69</entry>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk">
Feb 25 07:19:43 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:19:43 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk.config">
Feb 25 07:19:43 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:19:43 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:14:80:6d"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <target dev="tapea269819-4b"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/267cd6f8-d842-45a8-b4b1-4c6f3dee8d69/console.log" append="off"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:19:43 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:19:43 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:19:43 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:19:43 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
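[editor's note] The guest definition dumped above is plain libvirt domain XML: an rbd-backed virtio root disk and a sata config-drive cdrom (both authenticating as client.openstack against the 192.168.122.100:6789 monitor), one virtio VIF targeting tapea269819-4b, and a q35 PCIe topology. A quick way to pull the interesting bits back out, assuming the XML was saved to domain.xml:

    import xml.etree.ElementTree as ET

    dom = ET.parse('domain.xml').getroot()
    src = dom.find("./devices/disk[@device='disk']/source")
    # rbd vms/267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk
    print(src.get('protocol'), src.get('name'))
    print(dom.find('./devices/interface/target').get('dev'))  # tapea269819-4b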
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.707 244018 DEBUG nova.compute.manager [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Preparing to wait for external event network-vif-plugged-ea269819-4b09-472f-b5f6-ad74852b3850 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.708 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.708 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.709 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
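[editor's note] The prepare_for_instance_event lock dance above, together with the earlier pop_instance_event / "No waiting events found" lines for the other instance, is a register-before-act pattern: nova records the expected network-vif-plugged event before plugging the VIF, so Neutron's notification cannot race past it, and an event arriving with no registered waiter is logged as unexpected. A stripped-down sketch of the mechanism, not nova's actual implementation:

    import threading

    events, events_lock = {}, threading.Lock()

    def prepare(tag):
        # register the waiter BEFORE triggering the action
        with events_lock:
            return events.setdefault(tag, threading.Event())

    def deliver(tag):
        with events_lock:
            ev = events.pop(tag, None)
        if ev:
            ev.set()  # wake the waiter
        else:
            print('No waiting events found for', tag)

    waiter = prepare('network-vif-plugged-ea269819-4b09-472f-b5f6-ad74852b3850')
    # ... plug the VIF and define/launch the domain ...
    waiter.wait(timeout=300)  # analogous to nova's vif_plugging_timeout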
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.710 244018 DEBUG nova.virt.libvirt.vif [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:19:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1490579841',display_name='tempest-FloatingIPsAssociationTestJSON-server-1490579841',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1490579841',id=24,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67d0ed57ac554e4390e928b3c8f9b5f6',ramdisk_id='',reservation_id='r-nihkdknn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1904923370',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1904923370-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:38Z,user_data=None,user_id='9aa84b2700234a5e9dcba1fc0bbc4cea',uuid=267cd6f8-d842-45a8-b4b1-4c6f3dee8d69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea269819-4b09-472f-b5f6-ad74852b3850", "address": "fa:16:3e:14:80:6d", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea269819-4b", "ovs_interfaceid": "ea269819-4b09-472f-b5f6-ad74852b3850", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.710 244018 DEBUG nova.network.os_vif_util [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Converting VIF {"id": "ea269819-4b09-472f-b5f6-ad74852b3850", "address": "fa:16:3e:14:80:6d", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea269819-4b", "ovs_interfaceid": "ea269819-4b09-472f-b5f6-ad74852b3850", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.711 244018 DEBUG nova.network.os_vif_util [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:80:6d,bridge_name='br-int',has_traffic_filtering=True,id=ea269819-4b09-472f-b5f6-ad74852b3850,network=Network(41c706f5-6f0b-47a8-91a4-16f87e2a0571),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea269819-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.711 244018 DEBUG os_vif [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:80:6d,bridge_name='br-int',has_traffic_filtering=True,id=ea269819-4b09-472f-b5f6-ad74852b3850,network=Network(41c706f5-6f0b-47a8-91a4-16f87e2a0571),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea269819-4b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
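The lines above show the hand-off from Nova's legacy VIF dict to an os-vif object and then to os_vif.plug(). A condensed sketch of that call path, with field values taken from the converted object logged above (an approximation, not a copy of nova/network/os_vif_util.py):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin entry point, among others

    net = network.Network(
        id='41c706f5-6f0b-47a8-91a4-16f87e2a0571', bridge='br-int')
    my_vif = vif.VIFOpenVSwitch(
        id='ea269819-4b09-472f-b5f6-ad74852b3850',
        address='fa:16:3e:14:80:6d',
        bridge_name='br-int',
        vif_name='tapea269819-4b',
        has_traffic_filtering=True,
        network=net,
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='ea269819-4b09-472f-b5f6-ad74852b3850'))
    info = instance_info.InstanceInfo(
        uuid='267cd6f8-d842-45a8-b4b1-4c6f3dee8d69',
        name='instance-00000018')
    os_vif.plug(my_vif, info)  # the "Plugging vif ..." line above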
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.712 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.713 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.713 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.722 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.723 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea269819-4b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.725 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapea269819-4b, col_values=(('external_ids', {'iface-id': 'ea269819-4b09-472f-b5f6-ad74852b3850', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:14:80:6d', 'vm-uuid': '267cd6f8-d842-45a8-b4b1-4c6f3dee8d69'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:43 np0005629333 NetworkManager[49836]: <info>  [1772021983.7304] manager: (tapea269819-4b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.731 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.736 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.736 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.738 244018 INFO os_vif [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:80:6d,bridge_name='br-int',has_traffic_filtering=True,id=ea269819-4b09-472f-b5f6-ad74852b3850,network=Network(41c706f5-6f0b-47a8-91a4-16f87e2a0571),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea269819-4b')#033[00m
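AddBridgeCommand, AddPortCommand and DbSetCommand are ovsdbapp commands batched into OVSDB transactions; the os-vif ovs plugin drives them through the local ovsdb-server. Roughly equivalent standalone code, with values copied from the transactions logged above (socket path and timeout are assumptions):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # One transaction carrying the two commands logged as idx=0 and idx=1.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapea269819-4b', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapea269819-4b',
            ('external_ids', {
                'iface-id': 'ea269819-4b09-472f-b5f6-ad74852b3850',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:14:80:6d',
                'vm-uuid': '267cd6f8-d842-45a8-b4b1-4c6f3dee8d69'})))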
Feb 25 07:19:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Feb 25 07:19:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Feb 25 07:19:43 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.810 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.810 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.810 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] No VIF found with MAC fa:16:3e:14:80:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.811 244018 INFO nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Using config drive#033[00m
Feb 25 07:19:43 np0005629333 nova_compute[244014]: 2026-02-25 12:19:43.832 244018 DEBUG nova.storage.rbd_utils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] rbd image 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:19:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:19:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Feb 25 07:19:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Feb 25 07:19:44 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Feb 25 07:19:44 np0005629333 nova_compute[244014]: 2026-02-25 12:19:44.821 244018 INFO nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Creating config drive at /var/lib/nova/instances/267cd6f8-d842-45a8-b4b1-4c6f3dee8d69/disk.config#033[00m
Feb 25 07:19:44 np0005629333 nova_compute[244014]: 2026-02-25 12:19:44.828 244018 DEBUG oslo_concurrency.processutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/267cd6f8-d842-45a8-b4b1-4c6f3dee8d69/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpyp6pn1v4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:44 np0005629333 nova_compute[244014]: 2026-02-25 12:19:44.960 244018 DEBUG oslo_concurrency.processutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/267cd6f8-d842-45a8-b4b1-4c6f3dee8d69/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpyp6pn1v4" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
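Config drive generation shells out to mkisofs through oslo.concurrency's processutils, which is what prints the Running cmd / returned: 0 pair above. The same invocation reduced to its essentials (paths copied from the log; the /tmp directory is the staging area Nova populates with the metadata files):

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/267cd6f8-d842-45a8-b4b1-4c6f3dee8d69/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmpyp6pn1v4')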
Feb 25 07:19:45 np0005629333 nova_compute[244014]: 2026-02-25 12:19:45.000 244018 DEBUG nova.storage.rbd_utils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] rbd image 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:19:45 np0005629333 nova_compute[244014]: 2026-02-25 12:19:45.005 244018 DEBUG oslo_concurrency.processutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/267cd6f8-d842-45a8-b4b1-4c6f3dee8d69/disk.config 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:45 np0005629333 nova_compute[244014]: 2026-02-25 12:19:45.099 244018 DEBUG nova.network.neutron [req-af60b009-d890-487e-9731-3966fc8d253b req-54b613e3-2e3a-4b2d-8930-1ca11f8299b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Updated VIF entry in instance network info cache for port ea269819-4b09-472f-b5f6-ad74852b3850. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:19:45 np0005629333 nova_compute[244014]: 2026-02-25 12:19:45.100 244018 DEBUG nova.network.neutron [req-af60b009-d890-487e-9731-3966fc8d253b req-54b613e3-2e3a-4b2d-8930-1ca11f8299b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Updating instance_info_cache with network_info: [{"id": "ea269819-4b09-472f-b5f6-ad74852b3850", "address": "fa:16:3e:14:80:6d", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea269819-4b", "ovs_interfaceid": "ea269819-4b09-472f-b5f6-ad74852b3850", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:19:45 np0005629333 nova_compute[244014]: 2026-02-25 12:19:45.115 244018 DEBUG oslo_concurrency.lockutils [req-af60b009-d890-487e-9731-3966fc8d253b req-54b613e3-2e3a-4b2d-8930-1ca11f8299b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:19:45 np0005629333 nova_compute[244014]: 2026-02-25 12:19:45.189 244018 DEBUG oslo_concurrency.processutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/267cd6f8-d842-45a8-b4b1-4c6f3dee8d69/disk.config 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:19:45 np0005629333 nova_compute[244014]: 2026-02-25 12:19:45.189 244018 INFO nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Deleting local config drive /var/lib/nova/instances/267cd6f8-d842-45a8-b4b1-4c6f3dee8d69/disk.config because it was imported into RBD.#033[00m
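With images backed by Ceph, the freshly built ISO is then imported into the vms pool and the local copy deleted, as the two lines above record. A sketch of the same step (nova.storage.rbd_utils likewise shells out for imports):

    import os
    from oslo_concurrency import processutils

    path = ('/var/lib/nova/instances/'
            '267cd6f8-d842-45a8-b4b1-4c6f3dee8d69/disk.config')
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', path,
        '267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
    os.unlink(path)  # "Deleting local config drive ... imported into RBD"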
Feb 25 07:19:45 np0005629333 kernel: tapea269819-4b: entered promiscuous mode
Feb 25 07:19:45 np0005629333 NetworkManager[49836]: <info>  [1772021985.2214] manager: (tapea269819-4b): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Feb 25 07:19:45 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:45Z|00106|binding|INFO|Claiming lport ea269819-4b09-472f-b5f6-ad74852b3850 for this chassis.
Feb 25 07:19:45 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:45Z|00107|binding|INFO|ea269819-4b09-472f-b5f6-ad74852b3850: Claiming fa:16:3e:14:80:6d 10.100.0.12
Feb 25 07:19:45 np0005629333 nova_compute[244014]: 2026-02-25 12:19:45.227 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:45.234 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:80:6d 10.100.0.12'], port_security=['fa:16:3e:14:80:6d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '267cd6f8-d842-45a8-b4b1-4c6f3dee8d69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67d0ed57ac554e4390e928b3c8f9b5f6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a50ab9ba-7ffb-499d-9822-c20dbf4e32aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db1eeaa4-6673-482f-8f62-eb89284fbfdd, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ea269819-4b09-472f-b5f6-ad74852b3850) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:19:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:45.235 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ea269819-4b09-472f-b5f6-ad74852b3850 in datapath 41c706f5-6f0b-47a8-91a4-16f87e2a0571 bound to our chassis#033[00m
Feb 25 07:19:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:45.237 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41c706f5-6f0b-47a8-91a4-16f87e2a0571#033[00m
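The "Matched UPDATE: PortBindingUpdatedEvent" line is ovsdbapp's event machinery firing on a southbound Port_Binding row change; the metadata agent registers a row event that triggers when a port lands on the local chassis. A trimmed-down sketch of that pattern (hook names per recent ovsdbapp; neutron's real event also checks datapath type and chassis identity):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Mirrors events=('update',), table='Port_Binding' above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Fire only when the row gains a chassis, i.e. the port was
            # just bound (old=Port_Binding(chassis=[]) in the log).
            return bool(row.chassis) and not getattr(old, 'chassis', None)

        def run(self, event, row, old):
            print('Port %s bound to our chassis' % row.logical_port)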
Feb 25 07:19:45 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:45Z|00108|binding|INFO|Setting lport ea269819-4b09-472f-b5f6-ad74852b3850 ovn-installed in OVS
Feb 25 07:19:45 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:45Z|00109|binding|INFO|Setting lport ea269819-4b09-472f-b5f6-ad74852b3850 up in Southbound
Feb 25 07:19:45 np0005629333 nova_compute[244014]: 2026-02-25 12:19:45.241 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:45 np0005629333 nova_compute[244014]: 2026-02-25 12:19:45.249 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:45 np0005629333 systemd-udevd[265241]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:19:45 np0005629333 systemd-machined[210048]: New machine qemu-27-instance-00000018.
Feb 25 07:19:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:45.261 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8489a9cc-6cc4-4211-8713-8f551f5981f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:45 np0005629333 systemd[1]: Started Virtual Machine qemu-27-instance-00000018.
Feb 25 07:19:45 np0005629333 NetworkManager[49836]: <info>  [1772021985.2679] device (tapea269819-4b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:19:45 np0005629333 NetworkManager[49836]: <info>  [1772021985.2687] device (tapea269819-4b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:19:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:45.286 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0bf2c644-813b-4128-a6b3-f4ed399faf00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:45.289 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[28823ba8-0421-405f-a2cd-c8adf59b4abb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:45.312 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[33a4f15c-9755-4e8a-80d6-7e9c1f9d7cb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:45.328 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6d492804-48d7-4c9e-9454-858f57e0b83a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41c706f5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:94:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 393631, 'reachable_time': 16472, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265253, 'error': None, 'target': 'ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:45.344 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cea039a1-08af-41e9-8e5f-eb90bd728540]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap41c706f5-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 393641, 'tstamp': 393641}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265255, 'error': None, 'target': 'ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap41c706f5-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 393644, 'tstamp': 393644}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265255, 'error': None, 'target': 'ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:45.345 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41c706f5-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:45 np0005629333 nova_compute[244014]: 2026-02-25 12:19:45.347 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:45 np0005629333 nova_compute[244014]: 2026-02-25 12:19:45.348 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:45.348 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41c706f5-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:45.349 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:19:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:45.349 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41c706f5-60, col_values=(('external_ids', {'iface-id': 'f59f37b4-05c7-4f51-99f1-f2c4bac42231'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:45.349 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
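Provisioning metadata for the network means ensuring an ovnmeta-<network> namespace exists, wiring its veth peer to br-int with the right iface-id (the DelPort/AddPort/DbSet no-ops above show it was already wired), and assigning the namespace side both a subnet address and the metadata IP. The RTM_NEWLINK/RTM_NEWADDR privsep replies earlier are these netlink operations; roughly, via pyroute2 (a sketch; the agent does this behind oslo.privsep, and the adds are no-ops when the addresses already exist):

    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571')
    idx = ns.link_lookup(ifname='tap41c706f5-61')[0]
    # The two RTM_NEWADDR replies above correspond to these adds.
    ns.addr('add', index=idx, address='10.100.0.2', prefixlen=28)
    ns.addr('add', index=idx, address='169.254.169.254', prefixlen=32)
    ns.close()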
Feb 25 07:19:45 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb 25 07:19:45 np0005629333 nova_compute[244014]: 2026-02-25 12:19:45.566 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021985.5656543, 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:19:45 np0005629333 nova_compute[244014]: 2026-02-25 12:19:45.567 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] VM Started (Lifecycle Event)#033[00m
Feb 25 07:19:45 np0005629333 nova_compute[244014]: 2026-02-25 12:19:45.585 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:19:45 np0005629333 nova_compute[244014]: 2026-02-25 12:19:45.588 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021985.5667517, 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:19:45 np0005629333 nova_compute[244014]: 2026-02-25 12:19:45.588 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:19:45 np0005629333 nova_compute[244014]: 2026-02-25 12:19:45.606 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:19:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 648 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 13 MiB/s wr, 390 op/s
Feb 25 07:19:45 np0005629333 nova_compute[244014]: 2026-02-25 12:19:45.609 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:19:45 np0005629333 nova_compute[244014]: 2026-02-25 12:19:45.631 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
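The Paused lifecycle event races with the still-running spawn: the DB has power_state 0 (NOSTATE, per nova.compute.power_state) while libvirt reports 3 (PAUSED), and the handler refuses to reconcile while a task is pending. The guard condenses to something like this (an illustration of the behaviour logged above, not Nova's actual code):

    # nova.compute.power_state: 0=NOSTATE, 1=RUNNING, 3=PAUSED.
    def sync_power_state(instance, vm_power_state):
        if instance.task_state is not None:
            # "During sync_power_state the instance has a pending task
            # (spawning). Skip."
            return
        if instance.power_state != vm_power_state:
            instance.power_state = vm_power_state
            instance.save()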
Feb 25 07:19:45 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:45Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d6:24:39 10.100.0.9
Feb 25 07:19:45 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:45Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:24:39 10.100.0.9
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.132 244018 DEBUG nova.compute.manager [req-ba564dca-b7db-4432-ab01-6468881aa4b6 req-4e817bf4-3926-46b6-a28b-71b0be96e462 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.132 244018 DEBUG oslo_concurrency.lockutils [req-ba564dca-b7db-4432-ab01-6468881aa4b6 req-4e817bf4-3926-46b6-a28b-71b0be96e462 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.133 244018 DEBUG oslo_concurrency.lockutils [req-ba564dca-b7db-4432-ab01-6468881aa4b6 req-4e817bf4-3926-46b6-a28b-71b0be96e462 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.133 244018 DEBUG oslo_concurrency.lockutils [req-ba564dca-b7db-4432-ab01-6468881aa4b6 req-4e817bf4-3926-46b6-a28b-71b0be96e462 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.133 244018 DEBUG nova.compute.manager [req-ba564dca-b7db-4432-ab01-6468881aa4b6 req-4e817bf4-3926-46b6-a28b-71b0be96e462 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] No waiting events found dispatching network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.134 244018 WARNING nova.compute.manager [req-ba564dca-b7db-4432-ab01-6468881aa4b6 req-4e817bf4-3926-46b6-a28b-71b0be96e462 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received unexpected event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.207 244018 DEBUG nova.compute.manager [req-cab85cee-5bbf-44dc-b692-9e986ff4758c req-35e6f3a3-f1dd-483b-bf23-b2d4ddab39bd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Received event network-vif-plugged-ea269819-4b09-472f-b5f6-ad74852b3850 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.208 244018 DEBUG oslo_concurrency.lockutils [req-cab85cee-5bbf-44dc-b692-9e986ff4758c req-35e6f3a3-f1dd-483b-bf23-b2d4ddab39bd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.208 244018 DEBUG oslo_concurrency.lockutils [req-cab85cee-5bbf-44dc-b692-9e986ff4758c req-35e6f3a3-f1dd-483b-bf23-b2d4ddab39bd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.208 244018 DEBUG oslo_concurrency.lockutils [req-cab85cee-5bbf-44dc-b692-9e986ff4758c req-35e6f3a3-f1dd-483b-bf23-b2d4ddab39bd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.208 244018 DEBUG nova.compute.manager [req-cab85cee-5bbf-44dc-b692-9e986ff4758c req-35e6f3a3-f1dd-483b-bf23-b2d4ddab39bd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Processing event network-vif-plugged-ea269819-4b09-472f-b5f6-ad74852b3850 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.209 244018 DEBUG nova.compute.manager [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
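These lines show both halves of Nova's external-event handshake: the spawn thread registered interest before plugging the VIF (the _create_or_get_event lock activity earlier in this sequence), and the Neutron-driven network-vif-plugged event pops that registration and wakes the waiter, here after 0 seconds. The rendezvous in miniature, with a plain threading.Event standing in for Nova's eventlet-based one:

    import threading

    _events = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare(uuid, name):                      # spawn thread, pre-plug
        return _events.setdefault((uuid, name), threading.Event())

    def pop_and_signal(uuid, name):               # external event handler
        ev = _events.pop((uuid, name), None)
        if ev is not None:
            ev.set()
        # else: "No waiting events found dispatching ..." as logged above
        # for the already-active instance 52f927ad.

    waiter = prepare('267cd6f8-d842-45a8-b4b1-4c6f3dee8d69',
                     'network-vif-plugged')
    # ... os-vif plugs the port, OVN binds it, Neutron sends the event ...
    pop_and_signal('267cd6f8-d842-45a8-b4b1-4c6f3dee8d69',
                   'network-vif-plugged')
    waiter.wait(timeout=300)                      # returns immediately here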
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.213 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021986.212929, 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.214 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.216 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.220 244018 INFO nova.virt.libvirt.driver [-] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Instance spawned successfully.#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.220 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.244 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.255 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.259 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.259 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.260 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.261 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.261 244018 DEBUG nova.virt.libvirt.driver [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.269 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.297 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.336 244018 INFO nova.compute.manager [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Took 7.76 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.337 244018 DEBUG nova.compute.manager [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.451 244018 INFO nova.compute.manager [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Took 9.11 seconds to build instance.#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.479 244018 DEBUG oslo_concurrency.lockutils [None req-aab31d50-1450-4fcb-b013-d6a1c6ff07dd 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.318s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.886 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "89488b9f-7c53-4e00-ad62-837e33a76dae" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.886 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:46 np0005629333 nova_compute[244014]: 2026-02-25 12:19:46.931 244018 DEBUG nova.compute.manager [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:19:47 np0005629333 nova_compute[244014]: 2026-02-25 12:19:47.075 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:47 np0005629333 nova_compute[244014]: 2026-02-25 12:19:47.076 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:47 np0005629333 nova_compute[244014]: 2026-02-25 12:19:47.084 244018 DEBUG nova.virt.hardware [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:19:47 np0005629333 nova_compute[244014]: 2026-02-25 12:19:47.085 244018 INFO nova.compute.claims [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:19:47 np0005629333 nova_compute[244014]: 2026-02-25 12:19:47.171 244018 DEBUG nova.compute.manager [None req-ede10c97-c158-46ec-afe0-c7d7d219cac6 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:19:47 np0005629333 nova_compute[244014]: 2026-02-25 12:19:47.239 244018 INFO nova.compute.manager [None req-ede10c97-c158-46ec-afe0-c7d7d219cac6 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] instance snapshotting#033[00m
Feb 25 07:19:47 np0005629333 nova_compute[244014]: 2026-02-25 12:19:47.460 244018 DEBUG oslo_concurrency.processutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:19:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4126469574' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:19:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:19:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4126469574' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
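The ceph df subprocess Nova started at 12:19:47.460 arrives at the monitor as the handle_command/dispatch pairs above (client.openstack from 192.168.122.10). The same query can be made in-process through the rados binding rather than the CLI; a sketch using the credentials from the log:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    # Equivalent of: ceph df --format=json --id openstack
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({'prefix': 'df', 'format': 'json'}), b'')
    stats = json.loads(outbuf)
    cluster.shutdown()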
Feb 25 07:19:47 np0005629333 nova_compute[244014]: 2026-02-25 12:19:47.527 244018 INFO nova.virt.libvirt.driver [None req-ede10c97-c158-46ec-afe0-c7d7d219cac6 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Beginning live snapshot process#033[00m
Feb 25 07:19:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 643 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 15 MiB/s wr, 618 op/s
Feb 25 07:19:47 np0005629333 nova_compute[244014]: 2026-02-25 12:19:47.736 244018 INFO nova.compute.manager [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Rebuilding instance#033[00m
Feb 25 07:19:47 np0005629333 nova_compute[244014]: 2026-02-25 12:19:47.754 244018 DEBUG nova.virt.libvirt.imagebackend [None req-ede10c97-c158-46ec-afe0-c7d7d219cac6 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Feb 25 07:19:47 np0005629333 nova_compute[244014]: 2026-02-25 12:19:47.943 244018 DEBUG nova.objects.instance [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'trusted_certs' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:19:47 np0005629333 nova_compute[244014]: 2026-02-25 12:19:47.966 244018 DEBUG nova.compute.manager [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:19:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:19:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2467274907' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.032 244018 DEBUG nova.storage.rbd_utils [None req-ede10c97-c158-46ec-afe0-c7d7d219cac6 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] creating snapshot(b994f1c75d9e4bb6a2b7a460df5a4a50) on rbd image(d44c3dbc-e4bc-4235-bd88-b39616473248_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
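The live snapshot begins by snapshotting the instance's RBD disk in place; rbd_utils' create_snap is a thin wrapper over the rbd binding, roughly (image, pool and snapshot names copied from the line above):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')  # this deployment's images pool
    with rbd.Image(ioctx, 'd44c3dbc-e4bc-4235-bd88-b39616473248_disk') as image:
        image.create_snap('b994f1c75d9e4bb6a2b7a460df5a4a50')
    ioctx.close()
    cluster.shutdown()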
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.100 244018 DEBUG oslo_concurrency.processutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.103 244018 DEBUG nova.objects.instance [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'pci_requests' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.117 244018 DEBUG nova.compute.provider_tree [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.136 244018 DEBUG nova.objects.instance [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.139 244018 DEBUG nova.scheduler.client.report [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
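That inventory dict fixes the schedulable capacity of this node: placement treats each resource class as (total - reserved) * allocation_ratio, which is what the instance_claim above was checked against. Worked out for the values logged:

    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # MEMORY_MB 7167.0
    # VCPU 32.0
    # DISK_GB 52.2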
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.173 244018 DEBUG nova.objects.instance [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'resources' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.214 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.215 244018 DEBUG nova.compute.manager [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.220 244018 DEBUG nova.objects.instance [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'migration_context' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.257 244018 DEBUG nova.objects.instance [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.262 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.300 244018 DEBUG nova.compute.manager [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.300 244018 DEBUG nova.network.neutron [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.328 244018 INFO nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.346 244018 DEBUG nova.compute.manager [req-6b0bf0ee-6f4f-4d43-a916-1a47e0556226 req-4037f053-d8f0-4742-a537-ff668d21a7e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Received event network-vif-plugged-ea269819-4b09-472f-b5f6-ad74852b3850 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.347 244018 DEBUG oslo_concurrency.lockutils [req-6b0bf0ee-6f4f-4d43-a916-1a47e0556226 req-4037f053-d8f0-4742-a537-ff668d21a7e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.347 244018 DEBUG oslo_concurrency.lockutils [req-6b0bf0ee-6f4f-4d43-a916-1a47e0556226 req-4037f053-d8f0-4742-a537-ff668d21a7e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.347 244018 DEBUG oslo_concurrency.lockutils [req-6b0bf0ee-6f4f-4d43-a916-1a47e0556226 req-4037f053-d8f0-4742-a537-ff668d21a7e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.348 244018 DEBUG nova.compute.manager [req-6b0bf0ee-6f4f-4d43-a916-1a47e0556226 req-4037f053-d8f0-4742-a537-ff668d21a7e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] No waiting events found dispatching network-vif-plugged-ea269819-4b09-472f-b5f6-ad74852b3850 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.348 244018 WARNING nova.compute.manager [req-6b0bf0ee-6f4f-4d43-a916-1a47e0556226 req-4037f053-d8f0-4742-a537-ff668d21a7e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Received unexpected event network-vif-plugged-ea269819-4b09-472f-b5f6-ad74852b3850 for instance with vm_state active and task_state None.#033[00m
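Aside: the lock/pop/warn sequence above is nova's external-event dispatch: the per-instance events lock is taken, the table of waiting events is checked, and because nothing is waiting on this network-vif-plugged event for an already-active instance, the event is logged as unexpected and dropped (harmless here). A toy sketch of that pop-or-warn pattern, assuming a plain dict of threading.Event objects rather than nova's real InstanceEvents class:

    import threading

    class ToyInstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = {}  # {instance_uuid: {event_name: threading.Event}}

        def pop_instance_event(self, instance_uuid, event_name):
            with self._lock:                  # "Acquiring lock ...-events"
                waiter = self._waiters.get(instance_uuid, {}).pop(event_name, None)
            if waiter is None:                # "No waiting events found ..."
                print(f'WARNING: unexpected event {event_name}')
                return
            waiter.set()                      # wake whoever was awaiting the VIF plug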
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.350 244018 DEBUG nova.compute.manager [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.437 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.481 244018 DEBUG nova.compute.manager [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.482 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.482 244018 INFO nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Creating image(s)#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.502 244018 DEBUG nova.storage.rbd_utils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 89488b9f-7c53-4e00-ad62-837e33a76dae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.525 244018 DEBUG nova.storage.rbd_utils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 89488b9f-7c53-4e00-ad62-837e33a76dae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.543 244018 DEBUG nova.storage.rbd_utils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 89488b9f-7c53-4e00-ad62-837e33a76dae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.546 244018 DEBUG oslo_concurrency.processutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.565 244018 DEBUG nova.policy [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea407839a07d46608b6348caf676d12d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6a771ad0ce454d809d66825f69248fa7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.608 244018 DEBUG oslo_concurrency.processutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
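Aside: the paired processutils lines bracket a qemu-img probe of the cached base image; nova wraps it in oslo_concurrency's prlimit helper (the --as/--cpu flags above cap it at 1 GiB address space and 30 s of CPU) so a malformed image cannot wedge the compute agent. A bare-bones version of the same probe without the resource fence, with the path copied from the log (illustrative):

    import json
    import subprocess

    path = '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'
    # --force-share allows probing an image that may be in use elsewhere.
    out = subprocess.check_output(
        ['qemu-img', 'info', path, '--force-share', '--output=json'],
        env={'LC_ALL': 'C', 'LANG': 'C'})
    info = json.loads(out)
    print(info['format'], info['virtual-size'])  # e.g. the format and size in bytes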
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.609 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.610 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.610 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.629 244018 DEBUG nova.storage.rbd_utils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 89488b9f-7c53-4e00-ad62-837e33a76dae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.633 244018 DEBUG oslo_concurrency.processutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 89488b9f-7c53-4e00-ad62-837e33a76dae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.730 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Feb 25 07:19:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Feb 25 07:19:48 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.844 244018 DEBUG oslo_concurrency.processutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 89488b9f-7c53-4e00-ad62-837e33a76dae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.932 244018 DEBUG nova.storage.rbd_utils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] resizing rbd image 89488b9f-7c53-4e00-ad62-837e33a76dae_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
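Aside: the two steps above put the root disk in Ceph: the cached base image is imported into the vms pool, then grown to the flavor's 1 GiB root disk. Nova performs the resize through the librbd Python binding; an rbd-CLI equivalent of both steps, with arguments taken from the log, is sketched here:

    import subprocess

    BASE = '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'
    DISK = '89488b9f-7c53-4e00-ad62-837e33a76dae_disk'

    # Import the cached base image into the 'vms' pool (command as logged).
    subprocess.check_call(['rbd', 'import', '--pool', 'vms', BASE, DISK,
                           '--image-format=2', '--id', 'openstack',
                           '--conf', '/etc/ceph/ceph.conf'])

    # Grow it to 1073741824 bytes; 'rbd resize --size' takes MiB by default.
    subprocess.check_call(['rbd', 'resize', '--pool', 'vms', DISK,
                           '--size', str(1073741824 // (1024 * 1024)),
                           '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])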
Feb 25 07:19:48 np0005629333 nova_compute[244014]: 2026-02-25 12:19:48.984 244018 DEBUG nova.storage.rbd_utils [None req-ede10c97-c158-46ec-afe0-c7d7d219cac6 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] cloning vms/d44c3dbc-e4bc-4235-bd88-b39616473248_disk@b994f1c75d9e4bb6a2b7a460df5a4a50 to images/1824ee54-4df4-4dd9-a540-99e517825b0b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 25 07:19:49 np0005629333 nova_compute[244014]: 2026-02-25 12:19:49.101 244018 DEBUG nova.objects.instance [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'migration_context' on Instance uuid 89488b9f-7c53-4e00-ad62-837e33a76dae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:19:49 np0005629333 nova_compute[244014]: 2026-02-25 12:19:49.116 244018 DEBUG nova.storage.rbd_utils [None req-ede10c97-c158-46ec-afe0-c7d7d219cac6 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] flattening images/1824ee54-4df4-4dd9-a540-99e517825b0b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb 25 07:19:49 np0005629333 nova_compute[244014]: 2026-02-25 12:19:49.168 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:19:49 np0005629333 nova_compute[244014]: 2026-02-25 12:19:49.169 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Ensure instance console log exists: /var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:19:49 np0005629333 nova_compute[244014]: 2026-02-25 12:19:49.169 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:49 np0005629333 nova_compute[244014]: 2026-02-25 12:19:49.170 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:49 np0005629333 nova_compute[244014]: 2026-02-25 12:19:49.170 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:19:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Feb 25 07:19:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Feb 25 07:19:49 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Feb 25 07:19:49 np0005629333 nova_compute[244014]: 2026-02-25 12:19:49.423 244018 DEBUG nova.storage.rbd_utils [None req-ede10c97-c158-46ec-afe0-c7d7d219cac6 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] removing snapshot(b994f1c75d9e4bb6a2b7a460df5a4a50) on rbd image(d44c3dbc-e4bc-4235-bd88-b39616473248_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 25 07:19:49 np0005629333 nova_compute[244014]: 2026-02-25 12:19:49.487 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:49 np0005629333 NetworkManager[49836]: <info>  [1772021989.4885] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Feb 25 07:19:49 np0005629333 NetworkManager[49836]: <info>  [1772021989.4897] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Feb 25 07:19:49 np0005629333 nova_compute[244014]: 2026-02-25 12:19:49.527 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:49 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:49Z|00110|binding|INFO|Releasing lport eaac86b8-7606-47e6-83eb-e27fec0ae5ff from this chassis (sb_readonly=0)
Feb 25 07:19:49 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:49Z|00111|binding|INFO|Releasing lport f59f37b4-05c7-4f51-99f1-f2c4bac42231 from this chassis (sb_readonly=0)
Feb 25 07:19:49 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:49Z|00112|binding|INFO|Releasing lport 2cfd1e6b-d28d-43c0-bbbd-c6ad77855812 from this chassis (sb_readonly=0)
Feb 25 07:19:49 np0005629333 nova_compute[244014]: 2026-02-25 12:19:49.547 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 643 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 4.6 MiB/s wr, 384 op/s
Feb 25 07:19:49 np0005629333 nova_compute[244014]: 2026-02-25 12:19:49.721 244018 DEBUG nova.network.neutron [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Successfully created port: 5e8b3807-0ee8-4f97-aa2d-3db7d1283888 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:19:49 np0005629333 podman[265610]: 2026-02-25 12:19:49.777491923 +0000 UTC m=+0.108258372 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0)
Feb 25 07:19:49 np0005629333 podman[265611]: 2026-02-25 12:19:49.803108091 +0000 UTC m=+0.134536369 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 07:19:50 np0005629333 nova_compute[244014]: 2026-02-25 12:19:50.154 244018 DEBUG nova.compute.manager [req-a3648ff9-6014-4c20-bf1f-511fc7703375 req-78655313-401e-4888-b46b-f6f08baf4de8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Received event network-changed-b2336583-1aaa-4789-8d4f-a3a14997891d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:19:50 np0005629333 nova_compute[244014]: 2026-02-25 12:19:50.154 244018 DEBUG nova.compute.manager [req-a3648ff9-6014-4c20-bf1f-511fc7703375 req-78655313-401e-4888-b46b-f6f08baf4de8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Refreshing instance network info cache due to event network-changed-b2336583-1aaa-4789-8d4f-a3a14997891d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:19:50 np0005629333 nova_compute[244014]: 2026-02-25 12:19:50.154 244018 DEBUG oslo_concurrency.lockutils [req-a3648ff9-6014-4c20-bf1f-511fc7703375 req-78655313-401e-4888-b46b-f6f08baf4de8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:19:50 np0005629333 nova_compute[244014]: 2026-02-25 12:19:50.154 244018 DEBUG oslo_concurrency.lockutils [req-a3648ff9-6014-4c20-bf1f-511fc7703375 req-78655313-401e-4888-b46b-f6f08baf4de8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:19:50 np0005629333 nova_compute[244014]: 2026-02-25 12:19:50.155 244018 DEBUG nova.network.neutron [req-a3648ff9-6014-4c20-bf1f-511fc7703375 req-78655313-401e-4888-b46b-f6f08baf4de8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Refreshing network info cache for port b2336583-1aaa-4789-8d4f-a3a14997891d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:19:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Feb 25 07:19:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Feb 25 07:19:50 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Feb 25 07:19:50 np0005629333 nova_compute[244014]: 2026-02-25 12:19:50.234 244018 DEBUG nova.storage.rbd_utils [None req-ede10c97-c158-46ec-afe0-c7d7d219cac6 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] creating snapshot(snap) on rbd image(1824ee54-4df4-4dd9-a540-99e517825b0b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
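Aside: the rbd_utils lines for req-ede10c97 (clone, flatten, remove_snap, create_snap) trace the standard RBD snapshot-upload pattern: clone the instance disk's snapshot into the Glance pool, flatten the clone so it no longer depends on its parent, drop the temporary source snapshot, then snapshot the finished image under Glance's image@snap convention. The same flow via the rbd CLI, with names from the log (a sketch; nova actually drives librbd, and the source snapshot must be protected before cloning, which is omitted here):

    import subprocess

    def rbd(*args):
        subprocess.check_call(['rbd', '--id', 'openstack',
                               '--conf', '/etc/ceph/ceph.conf', *args])

    SRC = 'vms/d44c3dbc-e4bc-4235-bd88-b39616473248_disk'
    TMP = 'b994f1c75d9e4bb6a2b7a460df5a4a50'
    DST = 'images/1824ee54-4df4-4dd9-a540-99e517825b0b'

    rbd('clone', f'{SRC}@{TMP}', DST)     # copy-on-write clone into the images pool
    rbd('flatten', DST)                   # detach the clone from its parent
    rbd('snap', 'rm', f'{SRC}@{TMP}')     # drop the temporary source snapshot
    rbd('snap', 'create', f'{DST}@snap')  # Glance's canonical image@snap snapshot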
Feb 25 07:19:51 np0005629333 nova_compute[244014]: 2026-02-25 12:19:51.017 244018 DEBUG nova.network.neutron [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Successfully updated port: 5e8b3807-0ee8-4f97-aa2d-3db7d1283888 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:19:51 np0005629333 nova_compute[244014]: 2026-02-25 12:19:51.036 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:19:51 np0005629333 nova_compute[244014]: 2026-02-25 12:19:51.036 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquired lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:19:51 np0005629333 nova_compute[244014]: 2026-02-25 12:19:51.036 244018 DEBUG nova.network.neutron [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:19:51 np0005629333 nova_compute[244014]: 2026-02-25 12:19:51.141 244018 DEBUG nova.compute.manager [req-e9ac0c81-8449-48d6-9ce1-a59bc3397a67 req-8b2921f7-17d4-4afa-a080-542614daffd9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received event network-changed-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:19:51 np0005629333 nova_compute[244014]: 2026-02-25 12:19:51.141 244018 DEBUG nova.compute.manager [req-e9ac0c81-8449-48d6-9ce1-a59bc3397a67 req-8b2921f7-17d4-4afa-a080-542614daffd9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Refreshing instance network info cache due to event network-changed-5e8b3807-0ee8-4f97-aa2d-3db7d1283888. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:19:51 np0005629333 nova_compute[244014]: 2026-02-25 12:19:51.142 244018 DEBUG oslo_concurrency.lockutils [req-e9ac0c81-8449-48d6-9ce1-a59bc3397a67 req-8b2921f7-17d4-4afa-a080-542614daffd9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:19:51 np0005629333 nova_compute[244014]: 2026-02-25 12:19:51.215 244018 DEBUG nova.network.neutron [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:19:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Feb 25 07:19:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Feb 25 07:19:51 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Feb 25 07:19:51 np0005629333 nova_compute[244014]: 2026-02-25 12:19:51.437 244018 DEBUG nova.network.neutron [req-a3648ff9-6014-4c20-bf1f-511fc7703375 req-78655313-401e-4888-b46b-f6f08baf4de8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Updated VIF entry in instance network info cache for port b2336583-1aaa-4789-8d4f-a3a14997891d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:19:51 np0005629333 nova_compute[244014]: 2026-02-25 12:19:51.437 244018 DEBUG nova.network.neutron [req-a3648ff9-6014-4c20-bf1f-511fc7703375 req-78655313-401e-4888-b46b-f6f08baf4de8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Updating instance_info_cache with network_info: [{"id": "b2336583-1aaa-4789-8d4f-a3a14997891d", "address": "fa:16:3e:23:d7:29", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2336583-1a", "ovs_interfaceid": "b2336583-1aaa-4789-8d4f-a3a14997891d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:19:51 np0005629333 nova_compute[244014]: 2026-02-25 12:19:51.457 244018 DEBUG oslo_concurrency.lockutils [req-a3648ff9-6014-4c20-bf1f-511fc7703375 req-78655313-401e-4888-b46b-f6f08baf4de8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:19:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 705 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 6.9 MiB/s wr, 125 op/s
Feb 25 07:19:52 np0005629333 nova_compute[244014]: 2026-02-25 12:19:52.283 244018 DEBUG nova.network.neutron [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Updating instance_info_cache with network_info: [{"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:19:52 np0005629333 nova_compute[244014]: 2026-02-25 12:19:52.323 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Releasing lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:19:52 np0005629333 nova_compute[244014]: 2026-02-25 12:19:52.324 244018 DEBUG nova.compute.manager [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Instance network_info: |[{"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:19:52 np0005629333 nova_compute[244014]: 2026-02-25 12:19:52.325 244018 DEBUG oslo_concurrency.lockutils [req-e9ac0c81-8449-48d6-9ce1-a59bc3397a67 req-8b2921f7-17d4-4afa-a080-542614daffd9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:19:52 np0005629333 nova_compute[244014]: 2026-02-25 12:19:52.325 244018 DEBUG nova.network.neutron [req-e9ac0c81-8449-48d6-9ce1-a59bc3397a67 req-8b2921f7-17d4-4afa-a080-542614daffd9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Refreshing network info cache for port 5e8b3807-0ee8-4f97-aa2d-3db7d1283888 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:19:52 np0005629333 nova_compute[244014]: 2026-02-25 12:19:52.328 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Start _get_guest_xml network_info=[{"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:19:52 np0005629333 nova_compute[244014]: 2026-02-25 12:19:52.333 244018 WARNING nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:19:52 np0005629333 nova_compute[244014]: 2026-02-25 12:19:52.343 244018 DEBUG nova.virt.libvirt.host [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:19:52 np0005629333 nova_compute[244014]: 2026-02-25 12:19:52.343 244018 DEBUG nova.virt.libvirt.host [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:19:52 np0005629333 nova_compute[244014]: 2026-02-25 12:19:52.347 244018 DEBUG nova.virt.libvirt.host [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:19:52 np0005629333 nova_compute[244014]: 2026-02-25 12:19:52.347 244018 DEBUG nova.virt.libvirt.host [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:19:52 np0005629333 nova_compute[244014]: 2026-02-25 12:19:52.348 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:19:52 np0005629333 nova_compute[244014]: 2026-02-25 12:19:52.348 244018 DEBUG nova.virt.hardware [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:19:52 np0005629333 nova_compute[244014]: 2026-02-25 12:19:52.348 244018 DEBUG nova.virt.hardware [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:19:52 np0005629333 nova_compute[244014]: 2026-02-25 12:19:52.349 244018 DEBUG nova.virt.hardware [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:19:52 np0005629333 nova_compute[244014]: 2026-02-25 12:19:52.349 244018 DEBUG nova.virt.hardware [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:19:52 np0005629333 nova_compute[244014]: 2026-02-25 12:19:52.349 244018 DEBUG nova.virt.hardware [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:19:52 np0005629333 nova_compute[244014]: 2026-02-25 12:19:52.349 244018 DEBUG nova.virt.hardware [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:19:52 np0005629333 nova_compute[244014]: 2026-02-25 12:19:52.350 244018 DEBUG nova.virt.hardware [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:19:52 np0005629333 nova_compute[244014]: 2026-02-25 12:19:52.350 244018 DEBUG nova.virt.hardware [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:19:52 np0005629333 nova_compute[244014]: 2026-02-25 12:19:52.350 244018 DEBUG nova.virt.hardware [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:19:52 np0005629333 nova_compute[244014]: 2026-02-25 12:19:52.350 244018 DEBUG nova.virt.hardware [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:19:52 np0005629333 nova_compute[244014]: 2026-02-25 12:19:52.351 244018 DEBUG nova.virt.hardware [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
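Aside: the nova.virt.hardware lines walk the CPU-topology selection for this 1-vCPU flavor: no flavor or image constraints (all "0:0:0"), limits of 65536 sockets/cores/threads, so the only factorisation of 1 vCPU is 1:1:1. A toy enumerator of valid sockets:cores:threads splits under those assumptions (nova's real version additionally orders candidates by preference):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """Yield (sockets, cores, threads) with sockets * cores * threads == vcpus."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- matches "Got 1 possible topologies"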
Feb 25 07:19:52 np0005629333 nova_compute[244014]: 2026-02-25 12:19:52.354 244018 DEBUG oslo_concurrency.processutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:52 np0005629333 nova_compute[244014]: 2026-02-25 12:19:52.640 244018 INFO nova.virt.libvirt.driver [None req-ede10c97-c158-46ec-afe0-c7d7d219cac6 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Snapshot image upload complete#033[00m
Feb 25 07:19:52 np0005629333 nova_compute[244014]: 2026-02-25 12:19:52.641 244018 INFO nova.compute.manager [None req-ede10c97-c158-46ec-afe0-c7d7d219cac6 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Took 5.40 seconds to snapshot the instance on the hypervisor.#033[00m
Feb 25 07:19:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:19:52 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2103251486' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:19:52 np0005629333 nova_compute[244014]: 2026-02-25 12:19:52.898 244018 DEBUG oslo_concurrency.processutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:19:52 np0005629333 nova_compute[244014]: 2026-02-25 12:19:52.928 244018 DEBUG nova.storage.rbd_utils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 89488b9f-7c53-4e00-ad62-837e33a76dae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:19:52 np0005629333 nova_compute[244014]: 2026-02-25 12:19:52.935 244018 DEBUG oslo_concurrency.processutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:53 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:53Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:87:71:62 10.100.0.5
Feb 25 07:19:53 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:53Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:87:71:62 10.100.0.5
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.439 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:19:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1987129474' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.469 244018 DEBUG oslo_concurrency.processutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.472 244018 DEBUG nova.virt.libvirt.vif [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:19:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1678048505',display_name='tempest-AttachInterfacesTestJSON-server-1678048505',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1678048505',id=25,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI59Lu36NJMBU0mthshk0lPs05K0s2PWyT35IsJwWazcPBPSYm/Ew3YeQPHN2oaFRIA9yk4c93F1Q1taymdhCN6cLrKiQ4srfxdpMeTwtszRhxTTKjPH7Q3QOLbiw7+JtQ==',key_name='tempest-keypair-1913992596',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-wuykovpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=89488b9f-7c53-4e00-ad62-837e33a76dae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.472 244018 DEBUG nova.network.os_vif_util [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.473 244018 DEBUG nova.network.os_vif_util [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=5e8b3807-0ee8-4f97-aa2d-3db7d1283888,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e8b3807-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
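The records above show nova.network.os_vif_util translating the legacy VIF dict into an os-vif versioned object (VIFOpenVSwitch). A minimal sketch of building the same object directly with the os-vif library follows; the field values are copied from the "Converted object" line, but the exact constructor usage here is an illustrative assumption rather than nova's code path:

    import os_vif
    from os_vif.objects import network, vif

    os_vif.initialize()  # load the os-vif plugins and register the versioned objects

    # Values taken from the "Converting VIF" / "Converted object" records above.
    net = network.Network(id='08121372-a435-401a-b405-778e10d8c2e2',
                          bridge='br-int', mtu=1442)
    profile = vif.VIFPortProfileOpenVSwitch(
        interface_id='5e8b3807-0ee8-4f97-aa2d-3db7d1283888')
    ovs_vif = vif.VIFOpenVSwitch(
        id='5e8b3807-0ee8-4f97-aa2d-3db7d1283888',
        address='fa:16:3e:0c:10:e8',
        bridge_name='br-int',
        has_traffic_filtering=True,
        network=net,
        port_profile=profile,
        preserve_on_delete=False,
        vif_name='tap5e8b3807-0e')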
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.475 244018 DEBUG nova.objects.instance [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 89488b9f-7c53-4e00-ad62-837e33a76dae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.509 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:19:53 np0005629333 nova_compute[244014]:  <uuid>89488b9f-7c53-4e00-ad62-837e33a76dae</uuid>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:  <name>instance-00000019</name>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <nova:name>tempest-AttachInterfacesTestJSON-server-1678048505</nova:name>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:19:52</nova:creationTime>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:19:53 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:        <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:        <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:        <nova:port uuid="5e8b3807-0ee8-4f97-aa2d-3db7d1283888">
Feb 25 07:19:53 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <entry name="serial">89488b9f-7c53-4e00-ad62-837e33a76dae</entry>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <entry name="uuid">89488b9f-7c53-4e00-ad62-837e33a76dae</entry>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/89488b9f-7c53-4e00-ad62-837e33a76dae_disk">
Feb 25 07:19:53 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:19:53 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/89488b9f-7c53-4e00-ad62-837e33a76dae_disk.config">
Feb 25 07:19:53 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:19:53 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:0c:10:e8"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <target dev="tap5e8b3807-0e"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae/console.log" append="off"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:19:53 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:19:53 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:19:53 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:19:53 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.509 244018 DEBUG nova.compute.manager [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Preparing to wait for external event network-vif-plugged-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.510 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.510 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.510 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.511 244018 DEBUG nova.virt.libvirt.vif [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:19:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1678048505',display_name='tempest-AttachInterfacesTestJSON-server-1678048505',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1678048505',id=25,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI59Lu36NJMBU0mthshk0lPs05K0s2PWyT35IsJwWazcPBPSYm/Ew3YeQPHN2oaFRIA9yk4c93F1Q1taymdhCN6cLrKiQ4srfxdpMeTwtszRhxTTKjPH7Q3QOLbiw7+JtQ==',key_name='tempest-keypair-1913992596',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-wuykovpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=89488b9f-7c53-4e00-ad62-837e33a76dae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.511 244018 DEBUG nova.network.os_vif_util [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.512 244018 DEBUG nova.network.os_vif_util [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=5e8b3807-0ee8-4f97-aa2d-3db7d1283888,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e8b3807-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.513 244018 DEBUG os_vif [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=5e8b3807-0ee8-4f97-aa2d-3db7d1283888,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e8b3807-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.513 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.514 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.514 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.516 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.516 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e8b3807-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.517 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5e8b3807-0e, col_values=(('external_ids', {'iface-id': '5e8b3807-0ee8-4f97-aa2d-3db7d1283888', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0c:10:e8', 'vm-uuid': '89488b9f-7c53-4e00-ad62-837e33a76dae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.518 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:53 np0005629333 NetworkManager[49836]: <info>  [1772021993.5191] manager: (tap5e8b3807-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.520 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.524 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.525 244018 INFO os_vif [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=5e8b3807-0ee8-4f97-aa2d-3db7d1283888,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e8b3807-0e')#033[00m
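The ovsdbapp transactions above (AddBridgeCommand, AddPortCommand, DbSetCommand) are what "plugging" the VIF amounts to on the OVS side. A hedged sketch of issuing the equivalent transaction with ovsdbapp directly; the socket path and timeout are assumptions, and the column values are copied from the DbSetCommand record:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # Mirrors the three commands logged above; both adds are idempotent.
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap5e8b3807-0e', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap5e8b3807-0e',
            ('external_ids', {
                'iface-id': '5e8b3807-0ee8-4f97-aa2d-3db7d1283888',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:0c:10:e8',
                'vm-uuid': '89488b9f-7c53-4e00-ad62-837e33a76dae'})))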
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.583 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.583 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.583 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:0c:10:e8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.584 244018 INFO nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Using config drive#033[00m
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.599 244018 DEBUG nova.storage.rbd_utils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 89488b9f-7c53-4e00-ad62-837e33a76dae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:19:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 769 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 14 MiB/s rd, 15 MiB/s wr, 431 op/s
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.790 244018 DEBUG nova.compute.manager [req-0248b669-2406-44d6-b321-2c2eb092d364 req-75bb1082-3755-42f8-8256-6045bd6049f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Received event network-changed-b2336583-1aaa-4789-8d4f-a3a14997891d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.790 244018 DEBUG nova.compute.manager [req-0248b669-2406-44d6-b321-2c2eb092d364 req-75bb1082-3755-42f8-8256-6045bd6049f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Refreshing instance network info cache due to event network-changed-b2336583-1aaa-4789-8d4f-a3a14997891d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.790 244018 DEBUG oslo_concurrency.lockutils [req-0248b669-2406-44d6-b321-2c2eb092d364 req-75bb1082-3755-42f8-8256-6045bd6049f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.790 244018 DEBUG oslo_concurrency.lockutils [req-0248b669-2406-44d6-b321-2c2eb092d364 req-75bb1082-3755-42f8-8256-6045bd6049f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:19:53 np0005629333 nova_compute[244014]: 2026-02-25 12:19:53.790 244018 DEBUG nova.network.neutron [req-0248b669-2406-44d6-b321-2c2eb092d364 req-75bb1082-3755-42f8-8256-6045bd6049f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Refreshing network info cache for port b2336583-1aaa-4789-8d4f-a3a14997891d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:19:54 np0005629333 nova_compute[244014]: 2026-02-25 12:19:54.037 244018 DEBUG nova.network.neutron [req-e9ac0c81-8449-48d6-9ce1-a59bc3397a67 req-8b2921f7-17d4-4afa-a080-542614daffd9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Updated VIF entry in instance network info cache for port 5e8b3807-0ee8-4f97-aa2d-3db7d1283888. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:19:54 np0005629333 nova_compute[244014]: 2026-02-25 12:19:54.038 244018 DEBUG nova.network.neutron [req-e9ac0c81-8449-48d6-9ce1-a59bc3397a67 req-8b2921f7-17d4-4afa-a080-542614daffd9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Updating instance_info_cache with network_info: [{"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:19:54 np0005629333 nova_compute[244014]: 2026-02-25 12:19:54.058 244018 DEBUG oslo_concurrency.lockutils [req-e9ac0c81-8449-48d6-9ce1-a59bc3397a67 req-8b2921f7-17d4-4afa-a080-542614daffd9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:19:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:19:54 np0005629333 nova_compute[244014]: 2026-02-25 12:19:54.211 244018 INFO nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Creating config drive at /var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae/disk.config#033[00m
Feb 25 07:19:54 np0005629333 nova_compute[244014]: 2026-02-25 12:19:54.220 244018 DEBUG oslo_concurrency.processutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmprtkosx4q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:54 np0005629333 nova_compute[244014]: 2026-02-25 12:19:54.367 244018 DEBUG oslo_concurrency.processutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmprtkosx4q" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:19:54 np0005629333 nova_compute[244014]: 2026-02-25 12:19:54.405 244018 DEBUG nova.storage.rbd_utils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 89488b9f-7c53-4e00-ad62-837e33a76dae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:19:54 np0005629333 nova_compute[244014]: 2026-02-25 12:19:54.411 244018 DEBUG oslo_concurrency.processutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae/disk.config 89488b9f-7c53-4e00-ad62-837e33a76dae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:19:54 np0005629333 nova_compute[244014]: 2026-02-25 12:19:54.530 244018 DEBUG oslo_concurrency.processutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae/disk.config 89488b9f-7c53-4e00-ad62-837e33a76dae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:19:54 np0005629333 nova_compute[244014]: 2026-02-25 12:19:54.531 244018 INFO nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Deleting local config drive /var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae/disk.config because it was imported into RBD.#033[00m
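The config-drive sequence above boils down to two shell commands, both quoted verbatim in the processutils records: build an ISO9660 image with mkisofs, then import it into the Ceph "vms" pool so the local copy can be deleted. A minimal reproduction sketch; the metadata directory /tmp/tmprtkosx4q is the temporary path from the log:

    import subprocess

    iso = '/var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae/disk.config'

    # Build the config drive (flags copied from the log record).
    subprocess.run(
        ['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
         '-allow-multidot', '-l', '-publisher',
         'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
         '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmprtkosx4q'],
        check=True)

    # Import it into RBD as the cephx user "openstack"; nova then removes
    # the local file, as the INFO record above notes.
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', iso,
         '89488b9f-7c53-4e00-ad62-837e33a76dae_disk.config',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)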
Feb 25 07:19:54 np0005629333 kernel: tap5e8b3807-0e: entered promiscuous mode
Feb 25 07:19:54 np0005629333 NetworkManager[49836]: <info>  [1772021994.5991] manager: (tap5e8b3807-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Feb 25 07:19:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:54Z|00113|binding|INFO|Claiming lport 5e8b3807-0ee8-4f97-aa2d-3db7d1283888 for this chassis.
Feb 25 07:19:54 np0005629333 nova_compute[244014]: 2026-02-25 12:19:54.599 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:54Z|00114|binding|INFO|5e8b3807-0ee8-4f97-aa2d-3db7d1283888: Claiming fa:16:3e:0c:10:e8 10.100.0.13
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.614 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:10:e8 10.100.0.13'], port_security=['fa:16:3e:0c:10:e8 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '89488b9f-7c53-4e00-ad62-837e33a76dae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7e3ab090-12ee-4eae-8ad1-0ddee1251e75', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=5e8b3807-0ee8-4f97-aa2d-3db7d1283888) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.615 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 5e8b3807-0ee8-4f97-aa2d-3db7d1283888 in datapath 08121372-a435-401a-b405-778e10d8c2e2 bound to our chassis#033[00m
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.617 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08121372-a435-401a-b405-778e10d8c2e2#033[00m
Feb 25 07:19:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:54Z|00115|binding|INFO|Setting lport 5e8b3807-0ee8-4f97-aa2d-3db7d1283888 ovn-installed in OVS
Feb 25 07:19:54 np0005629333 nova_compute[244014]: 2026-02-25 12:19:54.617 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:54Z|00116|binding|INFO|Setting lport 5e8b3807-0ee8-4f97-aa2d-3db7d1283888 up in Southbound
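At this point ovn-controller has claimed the logical port for the chassis, marked it ovn-installed in OVS, and flipped it up in the Southbound database; nova is still waiting for the resulting network-vif-plugged event. A hedged verification sketch that queries the same Port_Binding row from the chassis (ovn-sbctl and these column names are standard OVN tooling; running it here is an assumption):

    import subprocess

    out = subprocess.check_output(
        ['ovn-sbctl', '--columns=logical_port,chassis,up',
         'find', 'Port_Binding',
         'logical_port=5e8b3807-0ee8-4f97-aa2d-3db7d1283888'],
        text=True)
    print(out)  # expect "up: [true]" once the claim above is committed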
Feb 25 07:19:54 np0005629333 nova_compute[244014]: 2026-02-25 12:19:54.622 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.626 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[12a15592-029b-4e17-9cff-53508bfab6ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.626 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap08121372-a1 in ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.628 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap08121372-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.628 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4083d532-b4bc-45b2-86e1-c2a8ef790e1e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.629 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[23e8ce88-fb17-4bc7-890b-38c3de3c1262]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.639 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[4d28eb36-c4e1-46ea-ad70-edf3b3e0166b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:54 np0005629333 systemd-machined[210048]: New machine qemu-28-instance-00000019.
Feb 25 07:19:54 np0005629333 systemd-udevd[265810]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.654 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[67c64832-7a17-48f5-a106-c35625417de6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:54 np0005629333 systemd[1]: Started Virtual Machine qemu-28-instance-00000019.
Feb 25 07:19:54 np0005629333 NetworkManager[49836]: <info>  [1772021994.6652] device (tap5e8b3807-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:19:54 np0005629333 NetworkManager[49836]: <info>  [1772021994.6669] device (tap5e8b3807-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.682 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b06a2fc7-da47-4258-81db-a66516c062cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.689 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c9a8c353-285c-4721-9afe-44965dea711a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:54 np0005629333 NetworkManager[49836]: <info>  [1772021994.6915] manager: (tap08121372-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/71)
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.732 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c9f81e58-9972-4536-9e49-5007df66e699]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.735 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1c57d330-4667-48e0-b055-c2447257194a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:54 np0005629333 NetworkManager[49836]: <info>  [1772021994.7599] device (tap08121372-a0): carrier: link connected
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.765 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7b781f67-703a-4e5a-84e8-e24ee7d7a6c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.777 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2b360605-9a74-435c-aabe-78b1187fcf06]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396416, 'reachable_time': 29153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265840, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.793 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[095723a8-e300-4e99-b3b8-3572b6e22ade]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:732b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396416, 'tstamp': 396416}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265841, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.813 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[17fb05f4-74e6-40c7-aff5-3a37a7ed4ce4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396416, 'reachable_time': 29153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265842, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.839 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bcae03ca-f8f7-4934-8f5d-97ddcde916a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.889 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6e1e16-93d9-43c5-a365-7bc1d2b5de14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.890 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.891 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.891 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08121372-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:54 np0005629333 kernel: tap08121372-a0: entered promiscuous mode
Feb 25 07:19:54 np0005629333 NetworkManager[49836]: <info>  [1772021994.8947] manager: (tap08121372-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Feb 25 07:19:54 np0005629333 nova_compute[244014]: 2026-02-25 12:19:54.894 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:54 np0005629333 nova_compute[244014]: 2026-02-25 12:19:54.900 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.900 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08121372-a0, col_values=(('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
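Editor note: the three ovsdbapp transactions above (DelPortCommand on br-ex, AddPortCommand on br-int, DbSetCommand on the Interface row) re-plug the metadata tap into the integration bridge and point its external_ids:iface-id at the Neutron port so ovn-controller can bind it. A standalone sketch of the same sequence, assuming a local ovsdb-server at unix:/run/openvswitch/db.sock; the agent drives these through its own long-lived IDL connection rather than a one-off script like this:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Batch the same three commands into one transaction.
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap08121372-a0', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap08121372-a0', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap08121372-a0',
            ('external_ids',
             {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'})))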
Feb 25 07:19:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:54Z|00117|binding|INFO|Releasing lport ef44c128-3fa4-4475-b63c-4818a50ede40 from this chassis (sb_readonly=0)
Feb 25 07:19:54 np0005629333 nova_compute[244014]: 2026-02-25 12:19:54.904 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.904 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/08121372-a435-401a-b405-778e10d8c2e2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/08121372-a435-401a-b405-778e10d8c2e2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.906 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0eee8154-35d8-46d4-84c3-51d50abb2d48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
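Editor note: the "Unable to access ... .pid.haproxy" debug line above is not an error: before (re)spawning the proxy, the agent probes for an existing pidfile, and ENOENT simply means no haproxy is running for this network yet. A simplified sketch of that probe (the real helper is neutron.agent.linux.utils.get_value_from_file; names here are illustrative):

    def get_value_from_file(path, converter=None):
        # Return the (converted) file contents, or None when the pidfile
        # does not exist yet -- the normal case before the first spawn.
        try:
            with open(path) as f:
                data = f.read().strip()
            return converter(data) if converter else data
        except FileNotFoundError:
            return None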
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.909 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-08121372-a435-401a-b405-778e10d8c2e2
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/08121372-a435-401a-b405-778e10d8c2e2.pid.haproxy
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 08121372-a435-401a-b405-778e10d8c2e2
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:19:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:54.911 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'env', 'PROCESS_TAG=haproxy-08121372-a435-401a-b405-778e10d8c2e2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/08121372-a435-401a-b405-778e10d8c2e2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
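Editor note: the rendered haproxy config above binds 169.254.169.254:80 inside the ovnmeta- namespace, forwards requests to the metadata socket at /var/lib/neutron/metadata_proxy, and tags each request with an X-OVN-Network-ID header so the metadata service can resolve the instance. The agent then launches haproxy through rootwrap, as logged in the command line above. A minimal equivalent of that spawn, assuming direct root privileges instead of rootwrap:

    import subprocess

    net_id = '08121372-a435-401a-b405-778e10d8c2e2'
    ns = 'ovnmeta-' + net_id
    cfg = '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % net_id
    # haproxy daemonizes itself ('daemon' in the config above), so run()
    # returns once the master forks; config errors show up as non-zero exit.
    subprocess.run(['ip', 'netns', 'exec', ns, 'haproxy', '-f', cfg],
                   check=True)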
Feb 25 07:19:54 np0005629333 nova_compute[244014]: 2026-02-25 12:19:54.914 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:55.004 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:55.005 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:55.006 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:55 np0005629333 nova_compute[244014]: 2026-02-25 12:19:55.157 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021995.1561058, 89488b9f-7c53-4e00-ad62-837e33a76dae => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:19:55 np0005629333 nova_compute[244014]: 2026-02-25 12:19:55.157 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] VM Started (Lifecycle Event)#033[00m
Feb 25 07:19:55 np0005629333 nova_compute[244014]: 2026-02-25 12:19:55.181 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:19:55 np0005629333 nova_compute[244014]: 2026-02-25 12:19:55.185 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021995.1562688, 89488b9f-7c53-4e00-ad62-837e33a76dae => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:19:55 np0005629333 nova_compute[244014]: 2026-02-25 12:19:55.186 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:19:55 np0005629333 nova_compute[244014]: 2026-02-25 12:19:55.193 244018 DEBUG nova.network.neutron [req-0248b669-2406-44d6-b321-2c2eb092d364 req-75bb1082-3755-42f8-8256-6045bd6049f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Updated VIF entry in instance network info cache for port b2336583-1aaa-4789-8d4f-a3a14997891d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:19:55 np0005629333 nova_compute[244014]: 2026-02-25 12:19:55.193 244018 DEBUG nova.network.neutron [req-0248b669-2406-44d6-b321-2c2eb092d364 req-75bb1082-3755-42f8-8256-6045bd6049f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Updating instance_info_cache with network_info: [{"id": "b2336583-1aaa-4789-8d4f-a3a14997891d", "address": "fa:16:3e:23:d7:29", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2336583-1a", "ovs_interfaceid": "b2336583-1aaa-4789-8d4f-a3a14997891d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:19:55 np0005629333 nova_compute[244014]: 2026-02-25 12:19:55.224 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:19:55 np0005629333 nova_compute[244014]: 2026-02-25 12:19:55.230 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:19:55 np0005629333 nova_compute[244014]: 2026-02-25 12:19:55.236 244018 DEBUG oslo_concurrency.lockutils [req-0248b669-2406-44d6-b321-2c2eb092d364 req-75bb1082-3755-42f8-8256-6045bd6049f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:19:55 np0005629333 nova_compute[244014]: 2026-02-25 12:19:55.256 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
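Editor note: the "current DB power_state: 0, VM power_state: 3" comparison above uses nova's numeric power-state constants: the database still records NOSTATE while the instance is building, and libvirt reports PAUSED because the guest is created paused and resumed once plugging completes (the Resumed event arrives later in this log). Assumed mapping, matching nova.compute.power_state:

    # nova.compute.power_state constants referenced in the sync messages.
    NOSTATE   = 0x00  # DB value while vm_state is 'building'
    RUNNING   = 0x01
    PAUSED    = 0x03  # what libvirt reports for the freshly created guest
    SHUTDOWN  = 0x04
    CRASHED   = 0x06
    SUSPENDED = 0x07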
Feb 25 07:19:55 np0005629333 podman[265914]: 2026-02-25 12:19:55.451834259 +0000 UTC m=+0.078324209 container create 75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Feb 25 07:19:55 np0005629333 podman[265914]: 2026-02-25 12:19:55.412661444 +0000 UTC m=+0.039151474 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:19:55 np0005629333 systemd[1]: Started libpod-conmon-75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d.scope.
Feb 25 07:19:55 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:19:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f7f7a1a509256948c44cf81664959a0d95a0720770dc3872c473bf267ae05b3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:19:55 np0005629333 podman[265914]: 2026-02-25 12:19:55.564243677 +0000 UTC m=+0.190733657 container init 75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223)
Feb 25 07:19:55 np0005629333 podman[265914]: 2026-02-25 12:19:55.572721908 +0000 UTC m=+0.199211868 container start 75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:19:55 np0005629333 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[265930]: [NOTICE]   (265934) : New worker (265936) forked
Feb 25 07:19:55 np0005629333 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[265930]: [NOTICE]   (265934) : Loading success.
Feb 25 07:19:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 769 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 11 MiB/s wr, 322 op/s
Feb 25 07:19:55 np0005629333 nova_compute[244014]: 2026-02-25 12:19:55.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:19:55 np0005629333 nova_compute[244014]: 2026-02-25 12:19:55.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 07:19:56 np0005629333 nova_compute[244014]: 2026-02-25 12:19:56.046 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-7a6ab503-d433-40a7-9395-3d5660e852c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:19:56 np0005629333 nova_compute[244014]: 2026-02-25 12:19:56.046 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-7a6ab503-d433-40a7-9395-3d5660e852c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:19:56 np0005629333 nova_compute[244014]: 2026-02-25 12:19:56.046 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 25 07:19:56 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb 25 07:19:57 np0005629333 nova_compute[244014]: 2026-02-25 12:19:57.213 244018 DEBUG nova.compute.manager [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Received event network-changed-ea269819-4b09-472f-b5f6-ad74852b3850 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:19:57 np0005629333 nova_compute[244014]: 2026-02-25 12:19:57.213 244018 DEBUG nova.compute.manager [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Refreshing instance network info cache due to event network-changed-ea269819-4b09-472f-b5f6-ad74852b3850. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:19:57 np0005629333 nova_compute[244014]: 2026-02-25 12:19:57.214 244018 DEBUG oslo_concurrency.lockutils [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:19:57 np0005629333 nova_compute[244014]: 2026-02-25 12:19:57.214 244018 DEBUG oslo_concurrency.lockutils [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:19:57 np0005629333 nova_compute[244014]: 2026-02-25 12:19:57.214 244018 DEBUG nova.network.neutron [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Refreshing network info cache for port ea269819-4b09-472f-b5f6-ad74852b3850 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:19:57 np0005629333 nova_compute[244014]: 2026-02-25 12:19:57.240 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Updating instance_info_cache with network_info: [{"id": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "address": "fa:16:3e:d2:1d:06", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap179cbc6a-d7", "ovs_interfaceid": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:19:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Feb 25 07:19:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Feb 25 07:19:57 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Feb 25 07:19:57 np0005629333 nova_compute[244014]: 2026-02-25 12:19:57.347 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-7a6ab503-d433-40a7-9395-3d5660e852c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:19:57 np0005629333 nova_compute[244014]: 2026-02-25 12:19:57.347 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 25 07:19:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 809 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 9.6 MiB/s rd, 14 MiB/s wr, 429 op/s
Feb 25 07:19:58 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:58Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:14:80:6d 10.100.0.12
Feb 25 07:19:58 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:58Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:14:80:6d 10.100.0.12
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.316 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.443 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.518 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.898 244018 DEBUG oslo_concurrency.lockutils [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Acquiring lock "d44c3dbc-e4bc-4235-bd88-b39616473248" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.899 244018 DEBUG oslo_concurrency.lockutils [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.900 244018 DEBUG oslo_concurrency.lockutils [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Acquiring lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.900 244018 DEBUG oslo_concurrency.lockutils [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.901 244018 DEBUG oslo_concurrency.lockutils [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.903 244018 INFO nova.compute.manager [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Terminating instance#033[00m
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.905 244018 DEBUG nova.compute.manager [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.930 244018 DEBUG nova.network.neutron [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Updated VIF entry in instance network info cache for port ea269819-4b09-472f-b5f6-ad74852b3850. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.931 244018 DEBUG nova.network.neutron [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Updating instance_info_cache with network_info: [{"id": "ea269819-4b09-472f-b5f6-ad74852b3850", "address": "fa:16:3e:14:80:6d", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea269819-4b", "ovs_interfaceid": "ea269819-4b09-472f-b5f6-ad74852b3850", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
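Editor note: the instance_info_cache payload above is plain JSON, and this entry is the first in the log to carry a floating IP (192.168.122.198 on fixed 10.100.0.12). A sketch of extracting both address types from such a payload, assuming it has been saved to a file and contains a single VIF:

    import json

    with open('network_info.json') as f:   # the JSON list logged above
        network_info = json.load(f)

    vif = network_info[0]
    subnets = vif['network']['subnets']
    fixed = [ip['address'] for s in subnets for ip in s['ips']]
    floating = [fip['address'] for s in subnets
                for ip in s['ips'] for fip in ip['floating_ips']]
    # For this entry: fixed == ['10.100.0.12'],
    #                 floating == ['192.168.122.198']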
Feb 25 07:19:58 np0005629333 kernel: tap47bb858f-17 (unregistering): left promiscuous mode
Feb 25 07:19:58 np0005629333 NetworkManager[49836]: <info>  [1772021998.9557] device (tap47bb858f-17): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:19:58 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:58Z|00118|binding|INFO|Releasing lport 47bb858f-172e-40b5-8ac0-02c531c303c3 from this chassis (sb_readonly=0)
Feb 25 07:19:58 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:58Z|00119|binding|INFO|Setting lport 47bb858f-172e-40b5-8ac0-02c531c303c3 down in Southbound
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.963 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:58 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:58Z|00120|binding|INFO|Removing iface tap47bb858f-17 ovn-installed in OVS
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.978 244018 DEBUG oslo_concurrency.lockutils [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.979 244018 DEBUG nova.compute.manager [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received event network-vif-plugged-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.979 244018 DEBUG oslo_concurrency.lockutils [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.980 244018 DEBUG oslo_concurrency.lockutils [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.980 244018 DEBUG oslo_concurrency.lockutils [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.980 244018 DEBUG nova.compute.manager [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Processing event network-vif-plugged-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.981 244018 DEBUG nova.compute.manager [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received event network-vif-plugged-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:19:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:58.980 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:24:39 10.100.0.9'], port_security=['fa:16:3e:d6:24:39 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd44c3dbc-e4bc-4235-bd88-b39616473248', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61fb5315043b44e588f1a84d85f1547b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '65ea446a-ea2b-4eb1-8854-770694a45ea7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf0a1784-e657-4d3c-8170-e3cf50faab00, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=47bb858f-172e-40b5-8ac0-02c531c303c3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
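Editor note: the "Matched UPDATE: PortBindingUpdatedEvent" lines here and below are ovsdbapp's IDL event machinery reacting to Southbound Port_Binding changes (the old= fragment carries only the columns that changed, e.g. up and chassis). A simplified sketch of such an event class; the handler name is illustrative, and the agent's real class adds chassis filtering on top of this:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fire on Port_Binding row updates, as matched in the log above."""

        def __init__(self, handler):
            self.handler = handler
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' holds only the changed columns, e.g. old.chassis.
            self.handler(row, old)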
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.981 244018 DEBUG oslo_concurrency.lockutils [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.981 244018 DEBUG oslo_concurrency.lockutils [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.982 244018 DEBUG oslo_concurrency.lockutils [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.982 244018 DEBUG nova.compute.manager [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] No waiting events found dispatching network-vif-plugged-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.982 244018 WARNING nova.compute.manager [req-aff17248-bab9-40d0-927b-fa9b57d6a6b9 req-1baaf1a6-32ec-42c5-b2cb-b37827ac2cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received unexpected event network-vif-plugged-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 for instance with vm_state building and task_state spawning.#033[00m
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.983 244018 DEBUG nova.compute.manager [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.984 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.988 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772021998.988034, 89488b9f-7c53-4e00-ad62-837e33a76dae => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.989 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:19:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:58.987 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 47bb858f-172e-40b5-8ac0-02c531c303c3 in datapath 55e574c2-43dc-4bbd-bf4c-2028df9ad3c1 unbound from our chassis#033[00m
Feb 25 07:19:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:58.993 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55e574c2-43dc-4bbd-bf4c-2028df9ad3c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.993 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:19:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:58.996 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c59a03c8-3685-42f9-bce5-976156af17e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:58.997 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1 namespace which is not needed anymore#033[00m
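Editor note: once the last VIF on datapath 55e574c2 is unbound, the agent tears down the per-network namespace, which also reaps the haproxy running inside it (its exit is logged shortly below). An equivalent of that teardown, assuming pyroute2 and root; the agent performs this through privsep rather than directly:

    from pyroute2 import netns

    ns_name = 'ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1'
    # Remove the namespace only if it is still present.
    if ns_name in netns.listnetns():
        netns.remove(ns_name)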
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.997 244018 INFO nova.virt.libvirt.driver [-] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Instance spawned successfully.#033[00m
Feb 25 07:19:58 np0005629333 nova_compute[244014]: 2026-02-25 12:19:58.998 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:19:59 np0005629333 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000017.scope: Deactivated successfully.
Feb 25 07:19:59 np0005629333 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000017.scope: Consumed 12.542s CPU time.
Feb 25 07:19:59 np0005629333 systemd-machined[210048]: Machine qemu-25-instance-00000017 terminated.
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.038 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.043 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.061 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.061 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.062 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.062 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.063 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.063 244018 DEBUG nova.virt.libvirt.driver [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
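Editor note: the six "Found default for ..." lines above show the libvirt driver pinning the bus/model choices it made for this guest, so later rebuilds keep identical virtual hardware. Collected as data, entirely from the log above (the variable name is illustrative):

    # Defaults nova registered for instance 89488b9f-...; the keys match the
    # corresponding hw_* image properties.
    registered_defaults = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }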
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.091 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:19:59 np0005629333 kernel: tap47bb858f-17: entered promiscuous mode
Feb 25 07:19:59 np0005629333 NetworkManager[49836]: <info>  [1772021999.1272] manager: (tap47bb858f-17): new Tun device (/org/freedesktop/NetworkManager/Devices/73)
Feb 25 07:19:59 np0005629333 systemd-udevd[265948]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:19:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:59Z|00121|binding|INFO|Claiming lport 47bb858f-172e-40b5-8ac0-02c531c303c3 for this chassis.
Feb 25 07:19:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:59Z|00122|binding|INFO|47bb858f-172e-40b5-8ac0-02c531c303c3: Claiming fa:16:3e:d6:24:39 10.100.0.9
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.129 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:59 np0005629333 kernel: tap47bb858f-17 (unregistering): left promiscuous mode
Feb 25 07:19:59 np0005629333 neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1[264346]: [NOTICE]   (264350) : haproxy version is 2.8.14-c23fe91
Feb 25 07:19:59 np0005629333 neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1[264346]: [NOTICE]   (264350) : path to executable is /usr/sbin/haproxy
Feb 25 07:19:59 np0005629333 neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1[264346]: [WARNING]  (264350) : Exiting Master process...
Feb 25 07:19:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:59Z|00123|binding|INFO|Setting lport 47bb858f-172e-40b5-8ac0-02c531c303c3 ovn-installed in OVS
Feb 25 07:19:59 np0005629333 neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1[264346]: [ALERT]    (264350) : Current worker (264352) exited with code 143 (Terminated)
Feb 25 07:19:59 np0005629333 neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1[264346]: [WARNING]  (264350) : All workers exited. Exiting... (0)
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.141 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:59 np0005629333 systemd[1]: libpod-9b5519e76029d934b40d1e893028f870427fc34e304464576b213df1c1a95cc2.scope: Deactivated successfully.
Feb 25 07:19:59 np0005629333 podman[265970]: 2026-02-25 12:19:59.151295747 +0000 UTC m=+0.063830188 container died 9b5519e76029d934b40d1e893028f870427fc34e304464576b213df1c1a95cc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.151 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:59Z|00124|if_status|INFO|Dropped 3 log messages in last 22 seconds (most recently, 22 seconds ago) due to excessive rate
Feb 25 07:19:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:59Z|00125|if_status|INFO|Not setting lport 47bb858f-172e-40b5-8ac0-02c531c303c3 down as sb is readonly
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.153 244018 INFO nova.virt.libvirt.driver [-] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Instance destroyed successfully.#033[00m
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.153 244018 DEBUG nova.objects.instance [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lazy-loading 'resources' on Instance uuid d44c3dbc-e4bc-4235-bd88-b39616473248 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:19:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:19:59Z|00126|binding|INFO|Releasing lport 47bb858f-172e-40b5-8ac0-02c531c303c3 from this chassis (sb_readonly=0)
Feb 25 07:19:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.173 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:24:39 10.100.0.9'], port_security=['fa:16:3e:d6:24:39 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd44c3dbc-e4bc-4235-bd88-b39616473248', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61fb5315043b44e588f1a84d85f1547b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '65ea446a-ea2b-4eb1-8854-770694a45ea7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf0a1784-e657-4d3c-8170-e3cf50faab00, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=47bb858f-172e-40b5-8ac0-02c531c303c3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.178 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.183 244018 INFO nova.compute.manager [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Took 10.70 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.183 244018 DEBUG nova.compute.manager [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.185 244018 DEBUG nova.virt.libvirt.vif [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:19:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1820272093',display_name='tempest-ImagesOneServerTestJSON-server-1820272093',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1820272093',id=23,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='61fb5315043b44e588f1a84d85f1547b',ramdisk_id='',reservation_id='r-brwcjik7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerTestJSON-921017522',owner_user_name='tempest-ImagesOneServerTestJSON-921017522-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:19:52Z,user_data=None,user_id='8349db6a5fcb4d8596a69d83481207b3',uuid=d44c3dbc-e4bc-4235-bd88-b39616473248,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "47bb858f-172e-40b5-8ac0-02c531c303c3", "address": "fa:16:3e:d6:24:39", "network": {"id": "55e574c2-43dc-4bbd-bf4c-2028df9ad3c1", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-191563683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61fb5315043b44e588f1a84d85f1547b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bb858f-17", "ovs_interfaceid": "47bb858f-172e-40b5-8ac0-02c531c303c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.185 244018 DEBUG nova.network.os_vif_util [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Converting VIF {"id": "47bb858f-172e-40b5-8ac0-02c531c303c3", "address": "fa:16:3e:d6:24:39", "network": {"id": "55e574c2-43dc-4bbd-bf4c-2028df9ad3c1", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-191563683-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "61fb5315043b44e588f1a84d85f1547b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47bb858f-17", "ovs_interfaceid": "47bb858f-172e-40b5-8ac0-02c531c303c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.186 244018 DEBUG nova.network.os_vif_util [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:24:39,bridge_name='br-int',has_traffic_filtering=True,id=47bb858f-172e-40b5-8ac0-02c531c303c3,network=Network(55e574c2-43dc-4bbd-bf4c-2028df9ad3c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47bb858f-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.187 244018 DEBUG os_vif [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:24:39,bridge_name='br-int',has_traffic_filtering=True,id=47bb858f-172e-40b5-8ac0-02c531c303c3,network=Network(55e574c2-43dc-4bbd-bf4c-2028df9ad3c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47bb858f-17') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
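[annotation] The three entries above show Nova converting its VIF dict into an os-vif object and handing it to os_vif.unplug(). A minimal standalone sketch of that call, with field values copied from the logged VIFOpenVSwitch repr; the instance name reuses display_name, and a working os-vif 'ovs' plugin plus reachable OVSDB are environment assumptions, not shown in the log:

    # Sketch only: os_vif public API, values copied from the log entries above.
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' os-vif plugin via stevedore

    net = network.Network(id='55e574c2-43dc-4bbd-bf4c-2028df9ad3c1',
                          bridge='br-int')
    ovs_vif = vif.VIFOpenVSwitch(
        id='47bb858f-172e-40b5-8ac0-02c531c303c3',
        address='fa:16:3e:d6:24:39',
        network=net,
        plugin='ovs',
        bridge_name='br-int',
        vif_name='tap47bb858f-17',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False)
    inst = instance_info.InstanceInfo(
        uuid='d44c3dbc-e4bc-4235-bd88-b39616473248',
        name='tempest-ImagesOneServerTestJSON-server-1820272093')

    os_vif.unplug(ovs_vif, inst)  # dispatches to the 'ovs' plugin, which
                                  # issues the DelPortCommand logged below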
Feb 25 07:19:59 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9b5519e76029d934b40d1e893028f870427fc34e304464576b213df1c1a95cc2-userdata-shm.mount: Deactivated successfully.
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.191 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.190 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:24:39 10.100.0.9'], port_security=['fa:16:3e:d6:24:39 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd44c3dbc-e4bc-4235-bd88-b39616473248', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61fb5315043b44e588f1a84d85f1547b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '65ea446a-ea2b-4eb1-8854-770694a45ea7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf0a1784-e657-4d3c-8170-e3cf50faab00, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=47bb858f-172e-40b5-8ac0-02c531c303c3) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
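[annotation] The PortBindingUpdatedEvent matched above follows ovsdbapp's RowEvent pattern. A sketch of the shape of such a matcher; the handler body is omitted and neutron's real class adds its own matching logic on top of this:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None, as logged
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.event_name = 'PortBindingUpdatedEvent'

        def run(self, event, row, old):
            # 'row' is the new Port_Binding row; 'old' carries only the
            # changed columns (here: chassis), matching old=Port_Binding(...)
            ...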
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.191 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap47bb858f-17, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
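[annotation] The single-command transaction above can be reproduced with ovsdbapp's Open_vSwitch schema API. A standalone sketch; the unix socket path and timeout are assumptions:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Equivalent of: DelPortCommand(port=tap47bb858f-17, bridge=br-int,
    #                               if_exists=True)
    api.del_port('tap47bb858f-17', bridge='br-int',
                 if_exists=True).execute(check_error=True)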
Feb 25 07:19:59 np0005629333 systemd[1]: var-lib-containers-storage-overlay-85a7581b6a9b7832e1212a34f08cb65dbe5142b80457bc50869d2ba4120d093b-merged.mount: Deactivated successfully.
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.194 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.197 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:19:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:19:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.200 244018 INFO os_vif [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:24:39,bridge_name='br-int',has_traffic_filtering=True,id=47bb858f-172e-40b5-8ac0-02c531c303c3,network=Network(55e574c2-43dc-4bbd-bf4c-2028df9ad3c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47bb858f-17')#033[00m
Feb 25 07:19:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Feb 25 07:19:59 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Feb 25 07:19:59 np0005629333 podman[265970]: 2026-02-25 12:19:59.212992122 +0000 UTC m=+0.125526563 container cleanup 9b5519e76029d934b40d1e893028f870427fc34e304464576b213df1c1a95cc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 07:19:59 np0005629333 systemd[1]: libpod-conmon-9b5519e76029d934b40d1e893028f870427fc34e304464576b213df1c1a95cc2.scope: Deactivated successfully.
Feb 25 07:19:59 np0005629333 podman[266016]: 2026-02-25 12:19:59.282135179 +0000 UTC m=+0.046952117 container remove 9b5519e76029d934b40d1e893028f870427fc34e304464576b213df1c1a95cc2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:19:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.288 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[19c4f15a-71c6-462a-b17a-4f90f2c225da]: (4, ('Wed Feb 25 12:19:59 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1 (9b5519e76029d934b40d1e893028f870427fc34e304464576b213df1c1a95cc2)\n9b5519e76029d934b40d1e893028f870427fc34e304464576b213df1c1a95cc2\nWed Feb 25 12:19:59 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1 (9b5519e76029d934b40d1e893028f870427fc34e304464576b213df1c1a95cc2)\n9b5519e76029d934b40d1e893028f870427fc34e304464576b213df1c1a95cc2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.289 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8b7cd5bf-2a35-4523-afc5-ffa3507a0121]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
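[annotation] The privsep "reply[...]" lines above are return messages from oslo.privsep's privileged daemon; the leading 4 is Message.RET in oslo_privsep.comm. Declaring such a privileged entrypoint looks like this sketch; the context name, capability set, and function are illustrative, not neutron's actual definitions:

    from oslo_privsep import capabilities, priv_context

    default = priv_context.PrivContext(
        'example',
        cfg_section='privsep',
        pypath=__name__ + '.default',
        capabilities=[capabilities.CAP_SYS_ADMIN,
                      capabilities.CAP_NET_ADMIN])

    @default.entrypoint
    def stop_and_delete_container(name):
        # runs inside the privsep daemon; its return value is what appears
        # as the second element of the (4, <result>) tuples in the log
        ...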
Feb 25 07:19:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.290 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55e574c2-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:19:59 np0005629333 kernel: tap55e574c2-40: left promiscuous mode
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.292 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.299 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:19:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.302 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3ce9efce-8f55-4889-829b-2665d6ccee6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.313 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[79e7f594-54aa-4033-8fbc-049806ab5b97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.317 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6dcbf209-a621-4723-80af-bba6ece82a66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.334 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[69dba085-84f7-4b44-84a1-85a6678bb069]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 394234, 'reachable_time': 26287, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266034, 'error': None, 'target': 'ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:59 np0005629333 systemd[1]: run-netns-ovnmeta\x2d55e574c2\x2d43dc\x2d4bbd\x2dbf4c\x2d2028df9ad3c1.mount: Deactivated successfully.
Feb 25 07:19:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.340 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:19:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.340 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[2a648b8e-3097-4a30-8cad-334e1bd82f91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
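[annotation] The remove_netns call above deletes the ovnmeta namespace; pyroute2's netns helpers, which neutron's privileged ip_lib wraps, do the same thing standalone. A sketch with the namespace name copied from the log (root privileges assumed):

    from pyroute2 import netns

    NS = 'ovnmeta-55e574c2-43dc-4bbd-bf4c-2028df9ad3c1'
    if NS in netns.listnetns():
        netns.remove(NS)  # unlinks the netns bind mount; systemd then logs
                          # the run-netns-*.mount deactivation seen above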
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.341 244018 INFO nova.compute.manager [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Took 12.31 seconds to build instance.#033[00m
Feb 25 07:19:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.342 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 47bb858f-172e-40b5-8ac0-02c531c303c3 in datapath 55e574c2-43dc-4bbd-bf4c-2028df9ad3c1 unbound from our chassis#033[00m
Feb 25 07:19:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.344 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55e574c2-43dc-4bbd-bf4c-2028df9ad3c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:19:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.345 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fd8e232b-19f4-4dd9-8726-8af8bca252ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.346 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 47bb858f-172e-40b5-8ac0-02c531c303c3 in datapath 55e574c2-43dc-4bbd-bf4c-2028df9ad3c1 unbound from our chassis#033[00m
Feb 25 07:19:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.348 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55e574c2-43dc-4bbd-bf4c-2028df9ad3c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:19:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:19:59.349 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[19f3065a-93c6-4e3f-a89d-e2848d06410f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.370 244018 DEBUG oslo_concurrency.lockutils [None req-97334374-fe95-4f22-baf2-1329a4303c8a ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.484s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.491 244018 INFO nova.virt.libvirt.driver [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Deleting instance files /var/lib/nova/instances/d44c3dbc-e4bc-4235-bd88-b39616473248_del#033[00m
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.492 244018 INFO nova.virt.libvirt.driver [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Deletion of /var/lib/nova/instances/d44c3dbc-e4bc-4235-bd88-b39616473248_del complete#033[00m
Feb 25 07:19:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 809 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 9.3 MiB/s wr, 334 op/s
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.618 244018 INFO nova.compute.manager [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.619 244018 DEBUG oslo.service.loopingcall [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.619 244018 DEBUG nova.compute.manager [-] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.620 244018 DEBUG nova.network.neutron [-] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
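[annotation] The "Waiting for function ... to return" line above comes from oslo.service's looping-call machinery driving _deallocate_network_with_retries. A generic sketch of the wait-for-function pattern; the worker body is illustrative (Nova's real worker retries the Neutron deallocation, and uses a backoff variant):

    from oslo_service import loopingcall

    def _deallocate_once():
        # on success, stop the loop and hand back a result
        raise loopingcall.LoopingCallDone(retvalue=True)

    timer = loopingcall.FixedIntervalLoopingCall(_deallocate_once)
    result = timer.start(interval=1).wait()  # blocks until LoopingCallDone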
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.779 244018 DEBUG nova.compute.manager [req-cd1893d0-ddbd-4062-bb04-e3ef2b0b9f0c req-12aae0f1-489e-4d58-9ac5-d86f1b6004f2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Received event network-vif-unplugged-47bb858f-172e-40b5-8ac0-02c531c303c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.779 244018 DEBUG oslo_concurrency.lockutils [req-cd1893d0-ddbd-4062-bb04-e3ef2b0b9f0c req-12aae0f1-489e-4d58-9ac5-d86f1b6004f2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.779 244018 DEBUG oslo_concurrency.lockutils [req-cd1893d0-ddbd-4062-bb04-e3ef2b0b9f0c req-12aae0f1-489e-4d58-9ac5-d86f1b6004f2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.780 244018 DEBUG oslo_concurrency.lockutils [req-cd1893d0-ddbd-4062-bb04-e3ef2b0b9f0c req-12aae0f1-489e-4d58-9ac5-d86f1b6004f2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.780 244018 DEBUG nova.compute.manager [req-cd1893d0-ddbd-4062-bb04-e3ef2b0b9f0c req-12aae0f1-489e-4d58-9ac5-d86f1b6004f2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] No waiting events found dispatching network-vif-unplugged-47bb858f-172e-40b5-8ac0-02c531c303c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.780 244018 DEBUG nova.compute.manager [req-cd1893d0-ddbd-4062-bb04-e3ef2b0b9f0c req-12aae0f1-489e-4d58-9ac5-d86f1b6004f2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Received event network-vif-unplugged-47bb858f-172e-40b5-8ac0-02c531c303c3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
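[annotation] The Acquiring/acquired/released triple above is the logging that oslo.concurrency's synchronized decorator emits around a critical section. A minimal equivalent, with the lock name copied from the log and a stub body:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('d44c3dbc-e4bc-4235-bd88-b39616473248-events')
    def _pop_event():
        ...  # the logged waited/held durations bracket this call

    _pop_event()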
Feb 25 07:19:59 np0005629333 nova_compute[244014]: 2026-02-25 12:19:59.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
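[annotation] Entries like "Running periodic task ComputeManager._poll_rebooting_instances" come from oslo.service's periodic-task framework. A minimal sketch of the pattern; the class, spacing, and body are illustrative:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=10)
        def _poll_rebooting_instances(self, context):
            ...  # run_periodic_tasks logs the "Running periodic task" line

    mgr = Manager()
    mgr.run_periodic_tasks(context=None)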
Feb 25 07:20:00 np0005629333 kernel: tap2e503dd2-73 (unregistering): left promiscuous mode
Feb 25 07:20:00 np0005629333 NetworkManager[49836]: <info>  [1772022000.5829] device (tap2e503dd2-73): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:20:00 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:00Z|00127|binding|INFO|Releasing lport 2e503dd2-735e-4bfc-87c7-dffab319d935 from this chassis (sb_readonly=0)
Feb 25 07:20:00 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:00Z|00128|binding|INFO|Setting lport 2e503dd2-735e-4bfc-87c7-dffab319d935 down in Southbound
Feb 25 07:20:00 np0005629333 nova_compute[244014]: 2026-02-25 12:20:00.589 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:00 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:00Z|00129|binding|INFO|Removing iface tap2e503dd2-73 ovn-installed in OVS
Feb 25 07:20:00 np0005629333 nova_compute[244014]: 2026-02-25 12:20:00.592 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:00 np0005629333 nova_compute[244014]: 2026-02-25 12:20:00.598 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.603 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:71:62 10.100.0.5'], port_security=['fa:16:3e:87:71:62 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '52f927ad-a417-489f-9f92-87bc3433649d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'fd7733ad-d262-4781-bcfa-77cfa8b67164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556b4b98-e95d-460c-a904-adc77baf4b88, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=2e503dd2-735e-4bfc-87c7-dffab319d935) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:20:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.605 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 2e503dd2-735e-4bfc-87c7-dffab319d935 in datapath 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 unbound from our chassis#033[00m
Feb 25 07:20:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.608 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6#033[00m
Feb 25 07:20:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.621 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5ce6ed4c-33d0-42d2-80d3-a2ec3054e24e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:00 np0005629333 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000011.scope: Deactivated successfully.
Feb 25 07:20:00 np0005629333 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000011.scope: Consumed 11.816s CPU time.
Feb 25 07:20:00 np0005629333 systemd-machined[210048]: Machine qemu-26-instance-00000011 terminated.
Feb 25 07:20:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.650 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[167e92d8-e1d5-4ccd-9348-3f752fed1293]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.653 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[bc6a0e9d-7b41-4eb3-85a0-25a9b86b8a9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.676 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[15ff20eb-0bcd-44dd-a151-069af6de4199]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.699 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b6186c89-ce1c-4db4-801f-3bcc315864b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f4cbf9a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:f8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 19, 'rx_bytes': 868, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 19, 'rx_bytes': 868, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389396, 'reachable_time': 28421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266045, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.709 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1aeb9b02-d5aa-4481-ad1e-290ef1397da9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389407, 'tstamp': 389407}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266046, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389409, 'tstamp': 389409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266046, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
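[annotation] The RTM_NEWADDR payload above shows tap1f4cbf9a-41 holding 10.100.0.2/28 plus the 169.254.169.254 metadata address; it is a netlink dump taken inside the ovnmeta namespace. A standalone pyroute2 equivalent, with the namespace name from the log (the namespace must exist and the caller needs privileges):

    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6')
    try:
        for msg in ns.get_addr():
            print(msg.get_attr('IFA_LABEL'),
                  msg.get_attr('IFA_ADDRESS'),
                  msg['prefixlen'])
    finally:
        ns.close()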
Feb 25 07:20:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.710 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f4cbf9a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:00 np0005629333 nova_compute[244014]: 2026-02-25 12:20:00.712 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:00 np0005629333 nova_compute[244014]: 2026-02-25 12:20:00.717 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.718 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f4cbf9a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.718 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:20:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.719 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f4cbf9a-40, col_values=(('external_ids', {'iface-id': '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.719 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
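[annotation] The port re-plumbing above runs as three one-command transactions (DelPortCommand, AddPortCommand, DbSetCommand), the latter two logged as "Transaction caused no change" because the port was already in place. With the same ovsdbapp API they could equally be grouped into a single transaction; connection setup below repeats the earlier assumed socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap1f4cbf9a-40', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap1f4cbf9a-40', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap1f4cbf9a-40',
            ('external_ids',
             {'iface-id': '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'})))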
Feb 25 07:20:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.878 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:20:00 np0005629333 nova_compute[244014]: 2026-02-25 12:20:00.878 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:00.879 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 25 07:20:00 np0005629333 nova_compute[244014]: 2026-02-25 12:20:00.902 244018 DEBUG nova.network.neutron [-] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:20:00 np0005629333 nova_compute[244014]: 2026-02-25 12:20:00.935 244018 INFO nova.compute.manager [-] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Took 1.32 seconds to deallocate network for instance.#033[00m
Feb 25 07:20:01 np0005629333 nova_compute[244014]: 2026-02-25 12:20:01.010 244018 DEBUG oslo_concurrency.lockutils [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:01 np0005629333 nova_compute[244014]: 2026-02-25 12:20:01.010 244018 DEBUG oslo_concurrency.lockutils [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:01 np0005629333 nova_compute[244014]: 2026-02-25 12:20:01.227 244018 DEBUG oslo_concurrency.processutils [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:20:01 np0005629333 nova_compute[244014]: 2026-02-25 12:20:01.331 244018 INFO nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance shutdown successfully after 13 seconds.#033[00m
Feb 25 07:20:01 np0005629333 nova_compute[244014]: 2026-02-25 12:20:01.338 244018 INFO nova.virt.libvirt.driver [-] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance destroyed successfully.#033[00m
Feb 25 07:20:01 np0005629333 nova_compute[244014]: 2026-02-25 12:20:01.346 244018 INFO nova.virt.libvirt.driver [-] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance destroyed successfully.#033[00m
Feb 25 07:20:01 np0005629333 nova_compute[244014]: 2026-02-25 12:20:01.348 244018 DEBUG nova.virt.libvirt.vif [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:18:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1736622759',display_name='tempest-ServersAdminTestJSON-server-1736622759',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1736622759',id=17,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-ae0qgj1z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:19:47Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=52f927ad-a417-489f-9f92-87bc3433649d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:20:01 np0005629333 nova_compute[244014]: 2026-02-25 12:20:01.349 244018 DEBUG nova.network.os_vif_util [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:20:01 np0005629333 nova_compute[244014]: 2026-02-25 12:20:01.350 244018 DEBUG nova.network.os_vif_util [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:20:01 np0005629333 nova_compute[244014]: 2026-02-25 12:20:01.351 244018 DEBUG os_vif [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:20:01 np0005629333 nova_compute[244014]: 2026-02-25 12:20:01.353 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:01 np0005629333 nova_compute[244014]: 2026-02-25 12:20:01.354 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e503dd2-73, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:01 np0005629333 nova_compute[244014]: 2026-02-25 12:20:01.356 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:01 np0005629333 nova_compute[244014]: 2026-02-25 12:20:01.359 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:20:01 np0005629333 nova_compute[244014]: 2026-02-25 12:20:01.362 244018 INFO os_vif [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73')#033[00m
Feb 25 07:20:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:20:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:20:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:20:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:20:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:20:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:20:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 763 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 6.0 MiB/s wr, 230 op/s
Feb 25 07:20:01 np0005629333 nova_compute[244014]: 2026-02-25 12:20:01.655 244018 INFO nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Deleting instance files /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d_del#033[00m
Feb 25 07:20:01 np0005629333 nova_compute[244014]: 2026-02-25 12:20:01.655 244018 INFO nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Deletion of /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d_del complete#033[00m
Feb 25 07:20:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:20:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2828269986' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:20:01 np0005629333 nova_compute[244014]: 2026-02-25 12:20:01.815 244018 DEBUG oslo_concurrency.processutils [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
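[annotation] The "Running cmd (subprocess)" / "returned: 0 in 0.588s" pair brackets an oslo.concurrency processutils.execute call; the ceph-mon audit lines above are the same request arriving on the monitor. A sketch; the JSON keys used for parsing follow the usual `ceph df --format=json` layout and are an assumption here:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
    # 'stats'/'total_avail_bytes' assumed from the standard ceph df output
    avail_bytes = json.loads(out)['stats']['total_avail_bytes']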
Feb 25 07:20:01 np0005629333 nova_compute[244014]: 2026-02-25 12:20:01.822 244018 DEBUG nova.compute.provider_tree [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:20:01 np0005629333 nova_compute[244014]: 2026-02-25 12:20:01.850 244018 DEBUG nova.scheduler.client.report [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:20:01 np0005629333 nova_compute[244014]: 2026-02-25 12:20:01.859 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:20:01 np0005629333 nova_compute[244014]: 2026-02-25 12:20:01.860 244018 INFO nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Creating image(s)#033[00m
Feb 25 07:20:01 np0005629333 nova_compute[244014]: 2026-02-25 12:20:01.890 244018 DEBUG nova.storage.rbd_utils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:20:01 np0005629333 nova_compute[244014]: 2026-02-25 12:20:01.922 244018 DEBUG nova.storage.rbd_utils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:20:01 np0005629333 nova_compute[244014]: 2026-02-25 12:20:01.952 244018 DEBUG nova.storage.rbd_utils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
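[annotation] The repeated "rbd image ... does not exist" probes above map onto the rbd Python bindings roughly as follows; the conffile and client id are taken from the logged CLI flags, and the pool name from the `rbd import --pool vms` command further down:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        rbd.Image(ioctx, '52f927ad-a417-489f-9f92-87bc3433649d_disk').close()
    except rbd.ImageNotFound:
        pass  # logged as "rbd image ... does not exist"; nova then imports it
    finally:
        ioctx.close()
        cluster.shutdown()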
Feb 25 07:20:01 np0005629333 nova_compute[244014]: 2026-02-25 12:20:01.957 244018 DEBUG oslo_concurrency.processutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:20:01 np0005629333 nova_compute[244014]: 2026-02-25 12:20:01.976 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:20:01 np0005629333 nova_compute[244014]: 2026-02-25 12:20:01.977 244018 DEBUG oslo_concurrency.lockutils [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:01 np0005629333 nova_compute[244014]: 2026-02-25 12:20:01.982 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.011 244018 DEBUG nova.compute.manager [req-f7a8c85b-636d-4dcd-b9eb-9f913a175553 req-dc082251-b21d-46bd-83ee-cbad412d9fa7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Received event network-vif-deleted-47bb858f-172e-40b5-8ac0-02c531c303c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.013 244018 INFO nova.scheduler.client.report [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Deleted allocations for instance d44c3dbc-e4bc-4235-bd88-b39616473248#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.022 244018 DEBUG oslo_concurrency.processutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
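[annotation] The qemu-img invocation above runs under oslo.concurrency's prlimit wrapper; the logged --as/--cpu values map onto ProcessLimits like this sketch (path copied from the log):

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824,  # --as
                                        cpu_time=30)               # --cpu
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/'
        'a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=limits)  # spawns: python3 -m oslo_concurrency.prlimit ...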
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.024 244018 DEBUG oslo_concurrency.lockutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.025 244018 DEBUG oslo_concurrency.lockutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.025 244018 DEBUG oslo_concurrency.lockutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
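
The acquire/release pair around fetch_func_sync is oslo.concurrency's named-lock idiom: the image cache is keyed by the base image's hash, so concurrent spawns of the same image serialize on one lock while different images proceed in parallel; the 0.000s hold time here means the cached file already existed. The shape of that idiom, with the lock name taken from the log:

    from oslo_concurrency import lockutils

    def fetch_func_sync():
        # Callers contending on the same lock name wait here; the body only
        # has to download/convert the base image if it is missing.
        with lockutils.lock("a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"):
            ...
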
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.051 244018 DEBUG nova.storage.rbd_utils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.055 244018 DEBUG oslo_concurrency.processutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 52f927ad-a417-489f-9f92-87bc3433649d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.109 244018 DEBUG nova.compute.manager [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Received event network-vif-plugged-47bb858f-172e-40b5-8ac0-02c531c303c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.110 244018 DEBUG oslo_concurrency.lockutils [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.110 244018 DEBUG oslo_concurrency.lockutils [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.110 244018 DEBUG oslo_concurrency.lockutils [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.111 244018 DEBUG nova.compute.manager [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] No waiting events found dispatching network-vif-plugged-47bb858f-172e-40b5-8ac0-02c531c303c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.111 244018 WARNING nova.compute.manager [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Received unexpected event network-vif-plugged-47bb858f-172e-40b5-8ac0-02c531c303c3 for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.111 244018 DEBUG nova.compute.manager [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-vif-unplugged-2e503dd2-735e-4bfc-87c7-dffab319d935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.111 244018 DEBUG oslo_concurrency.lockutils [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.112 244018 DEBUG oslo_concurrency.lockutils [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.112 244018 DEBUG oslo_concurrency.lockutils [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.112 244018 DEBUG nova.compute.manager [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] No waiting events found dispatching network-vif-unplugged-2e503dd2-735e-4bfc-87c7-dffab319d935 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.113 244018 WARNING nova.compute.manager [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received unexpected event network-vif-unplugged-2e503dd2-735e-4bfc-87c7-dffab319d935 for instance with vm_state active and task_state rebuild_spawning.#033[00m
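
The two "Received unexpected event" warnings are nova's external-event plumbing at work: neutron posts network-vif-* notifications to the compute API, and ComputeManager pops a matching waiter under the per-instance "-events" lock; when nothing is waiting (d44c3dbc is already deleted, 52f927ad is mid-rebuild and not blocked on this VIF), the event is logged and dropped. Reduced to its essentials, the pop looks roughly like this (a sketch; names hypothetical):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._events = {}           # instance uuid -> {event name: waiter}
            self._lock = threading.Lock()

        def pop_instance_event(self, instance_uuid, event_name):
            with self._lock:            # the "<uuid>-events" lock in the log
                waiters = self._events.get(instance_uuid, {})
                # None here is what produces "No waiting events found dispatching ...".
                return waiters.pop(event_name, None)
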
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.113 244018 DEBUG nova.compute.manager [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Received event network-changed-ea269819-4b09-472f-b5f6-ad74852b3850 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.113 244018 DEBUG nova.compute.manager [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Refreshing instance network info cache due to event network-changed-ea269819-4b09-472f-b5f6-ad74852b3850. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.113 244018 DEBUG oslo_concurrency.lockutils [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.114 244018 DEBUG oslo_concurrency.lockutils [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.114 244018 DEBUG nova.network.neutron [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Refreshing network info cache for port ea269819-4b09-472f-b5f6-ad74852b3850 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.143 244018 DEBUG oslo_concurrency.lockutils [None req-5564c373-4508-4946-89c8-b509accf434b 8349db6a5fcb4d8596a69d83481207b3 61fb5315043b44e588f1a84d85f1547b - - default default] Lock "d44c3dbc-e4bc-4235-bd88-b39616473248" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.243s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.258 244018 DEBUG oslo_concurrency.processutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 52f927ad-a417-489f-9f92-87bc3433649d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.330 244018 DEBUG nova.storage.rbd_utils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] resizing rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.410 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
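
With the base image cached, the spawn path materializes the root disk in Ceph: rbd import copies the flat file into the vms pool as <uuid>_disk, then nova grows it in place to the flavor's root_gb (1 GiB). The resize step through the python-rbd bindings might look like this (pool, image name and size copied from the log):

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            with rbd.Image(ioctx, "52f927ad-a417-489f-9f92-87bc3433649d_disk") as image:
                image.resize(1073741824)   # bytes; root_gb=1 in m1.nano
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
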
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.411 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Ensure instance console log exists: /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.411 244018 DEBUG oslo_concurrency.lockutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.411 244018 DEBUG oslo_concurrency.lockutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.412 244018 DEBUG oslo_concurrency.lockutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.413 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Start _get_guest_xml network_info=[{"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.417 244018 WARNING nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.426 244018 DEBUG nova.virt.libvirt.host [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.426 244018 DEBUG nova.virt.libvirt.host [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.430 244018 DEBUG nova.virt.libvirt.host [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.431 244018 DEBUG nova.virt.libvirt.host [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
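
Before rendering guest XML, the libvirt driver checks whether the host can enforce CPU shares and quotas at all: it probes for a cgroups v1 cpu controller first, then v2, and on this EL9 host only the unified (v2) hierarchy has one. On v2 the enabled controllers are all listed in a single file, so an equivalent check is short:

    from pathlib import Path

    def host_has_cpu_controller():
        # cgroups v2: enabled controllers are space-separated in this file,
        # e.g. "cpuset cpu io memory pids".
        v2 = Path("/sys/fs/cgroup/cgroup.controllers")
        if v2.exists():
            return "cpu" in v2.read_text().split()
        # cgroups v1 fallback: the cpu controller gets its own mount point.
        return Path("/sys/fs/cgroup/cpu").exists()
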
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.431 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.431 244018 DEBUG nova.virt.hardware [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.432 244018 DEBUG nova.virt.hardware [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.432 244018 DEBUG nova.virt.hardware [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.432 244018 DEBUG nova.virt.hardware [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.432 244018 DEBUG nova.virt.hardware [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.432 244018 DEBUG nova.virt.hardware [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.432 244018 DEBUG nova.virt.hardware [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.432 244018 DEBUG nova.virt.hardware [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.433 244018 DEBUG nova.virt.hardware [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.433 244018 DEBUG nova.virt.hardware [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.433 244018 DEBUG nova.virt.hardware [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
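
The hardware.py trail above is nova's CPU-topology negotiation: flavor and image set no limits or preferences (0:0:0, capped at 65536 per dimension), so for a 1-vCPU guest the only factorization is sockets=1, cores=1, threads=1, which is exactly what lands in the <topology> element further down. The enumeration amounts to walking divisors of the vCPU count; a rough sketch (nova's real code additionally orders candidates by preference):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Yield every (sockets, cores, threads) whose product is vcpus and
        # which fits under the per-dimension caps.
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            rest = vcpus // sockets
            for cores in range(1, min(rest, max_cores) + 1):
                if rest % cores:
                    continue
                threads = rest // cores
                if threads <= max_threads:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))   # [(1, 1, 1)], matching the log
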
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.433 244018 DEBUG nova.objects.instance [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'vcpu_model' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.470 244018 DEBUG oslo_concurrency.processutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:20:02 np0005629333 nova_compute[244014]: 2026-02-25 12:20:02.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:20:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:20:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/446686615' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.011 244018 DEBUG oslo_concurrency.processutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
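
The half-second ceph mon dump round trip is how the RBD image backend learns which monitor addresses to embed as <host> elements in the disk XML; the ceph-mon audit lines above show the same command being dispatched as client.openstack. Parsing its JSON output is straightforward (field names as emitted by current Ceph releases; treat them as an assumption):

    import json
    import subprocess

    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    monmap = json.loads(out)
    for mon in monmap["mons"]:
        # "addr" looks like "192.168.122.100:6789/0" in this deployment.
        print(mon["name"], mon["addr"])
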
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.041 244018 DEBUG nova.storage.rbd_utils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.046 244018 DEBUG oslo_concurrency.processutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.445 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.558 244018 DEBUG oslo_concurrency.lockutils [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.559 244018 DEBUG oslo_concurrency.lockutils [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.559 244018 DEBUG oslo_concurrency.lockutils [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.559 244018 DEBUG oslo_concurrency.lockutils [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.559 244018 DEBUG oslo_concurrency.lockutils [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.561 244018 INFO nova.compute.manager [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Terminating instance#033[00m
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.562 244018 DEBUG nova.compute.manager [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:20:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:20:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1343313881' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:20:03 np0005629333 kernel: tapea269819-4b (unregistering): left promiscuous mode
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.603 244018 DEBUG oslo_concurrency.processutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.604 244018 DEBUG nova.virt.libvirt.vif [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:18:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1736622759',display_name='tempest-ServersAdminTestJSON-server-1736622759',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1736622759',id=17,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-ae0qgj1z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:20:01Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=52f927ad-a417-489f-9f92-87bc3433649d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.605 244018 DEBUG nova.network.os_vif_util [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:20:03 np0005629333 NetworkManager[49836]: <info>  [1772022003.6051] device (tapea269819-4b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.607 244018 DEBUG nova.network.os_vif_util [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
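
Nova keeps two representations of the same VIF: the dict-shaped network-info blob logged above and the typed os-vif object it hands to the plugin layer, which is what nova_to_osvif_vif converts between. Building the converted object directly might look like this (field names read off the "Converted object" line; the constructor usage is an assumption, not nova's code path):

    from os_vif.objects import network, vif

    osvif = vif.VIFOpenVSwitch(
        id="2e503dd2-735e-4bfc-87c7-dffab319d935",
        address="fa:16:3e:87:71:62",
        bridge_name="br-int",
        has_traffic_filtering=True,
        plugin="ovs",
        preserve_on_delete=False,
        vif_name="tap2e503dd2-73",
        network=network.Network(id="1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6"),
    )
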
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.609 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:20:03 np0005629333 nova_compute[244014]:  <uuid>52f927ad-a417-489f-9f92-87bc3433649d</uuid>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:  <name>instance-00000011</name>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServersAdminTestJSON-server-1736622759</nova:name>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:20:02</nova:creationTime>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:20:03 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:        <nova:user uuid="6395ac4bfa5d4910aed9116395bbbdeb">tempest-ServersAdminTestJSON-147238686-project-member</nova:user>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:        <nova:project uuid="b35cd816238a43d8825ab11e83d2b8bf">tempest-ServersAdminTestJSON-147238686</nova:project>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:        <nova:port uuid="2e503dd2-735e-4bfc-87c7-dffab319d935">
Feb 25 07:20:03 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <entry name="serial">52f927ad-a417-489f-9f92-87bc3433649d</entry>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <entry name="uuid">52f927ad-a417-489f-9f92-87bc3433649d</entry>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/52f927ad-a417-489f-9f92-87bc3433649d_disk">
Feb 25 07:20:03 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:20:03 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/52f927ad-a417-489f-9f92-87bc3433649d_disk.config">
Feb 25 07:20:03 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:20:03 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:87:71:62"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <target dev="tap2e503dd2-73"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/console.log" append="off"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:20:03 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:20:03 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:20:03 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:20:03 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
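
At this point the domain is fully described: a q35, host-model guest whose vda root disk and config-drive cdrom are both network disks speaking rbd to the monitor at 192.168.122.100:6789 under the ceph secret, plus an ethernet tap that os-vif attaches to br-int in the lines that follow. Feeding such XML to libvirt takes two calls with libvirt-python; nova does this through its own Guest wrapper rather than literally like this, and "domain.xml" is only an example file name:

    import libvirt

    with open("domain.xml") as f:
        xml = f.read()

    conn = libvirt.open("qemu:///system")
    dom = conn.defineXML(xml)   # persist the definition
    dom.create()                # boot the guest
    conn.close()
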
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.610 244018 DEBUG nova.virt.libvirt.vif [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:18:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1736622759',display_name='tempest-ServersAdminTestJSON-server-1736622759',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1736622759',id=17,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-ae0qgj1z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:20:01Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=52f927ad-a417-489f-9f92-87bc3433649d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.611 244018 DEBUG nova.network.os_vif_util [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.612 244018 DEBUG nova.network.os_vif_util [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.612 244018 DEBUG os_vif [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:20:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 649 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 7.2 MiB/s wr, 428 op/s
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.615 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.616 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.616 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:20:03 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:03Z|00130|binding|INFO|Releasing lport ea269819-4b09-472f-b5f6-ad74852b3850 from this chassis (sb_readonly=0)
Feb 25 07:20:03 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:03Z|00131|binding|INFO|Setting lport ea269819-4b09-472f-b5f6-ad74852b3850 down in Southbound
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.618 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:03 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:03Z|00132|binding|INFO|Removing iface tapea269819-4b ovn-installed in OVS
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.620 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.621 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e503dd2-73, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.622 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2e503dd2-73, col_values=(('external_ids', {'iface-id': '2e503dd2-735e-4bfc-87c7-dffab319d935', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:71:62', 'vm-uuid': '52f927ad-a417-489f-9f92-87bc3433649d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
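
Those two ovsdbapp commands are the whole OVN handshake from nova's side: add the tap port to br-int and stamp its Interface row with external_ids, most importantly iface-id set to the neutron port UUID. ovn-controller watches for that key, binds the logical port, and neutron's network-vif-plugged event eventually comes back the way the earlier ones did. The same transaction expressed through the ovs-vsctl CLI, kept in Python for consistency with the other sketches:

    import subprocess

    subprocess.check_call([
        "ovs-vsctl",
        "--may-exist", "add-port", "br-int", "tap2e503dd2-73",
        "--", "set", "Interface", "tap2e503dd2-73",
        "external_ids:iface-id=2e503dd2-735e-4bfc-87c7-dffab319d935",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:87:71:62",
        "external_ids:vm-uuid=52f927ad-a417-489f-9f92-87bc3433649d",
    ])
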
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.623 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:03 np0005629333 NetworkManager[49836]: <info>  [1772022003.6241] manager: (tap2e503dd2-73): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.625 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 07:20:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:03.627 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:80:6d 10.100.0.12'], port_security=['fa:16:3e:14:80:6d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '267cd6f8-d842-45a8-b4b1-4c6f3dee8d69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67d0ed57ac554e4390e928b3c8f9b5f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a50ab9ba-7ffb-499d-9822-c20dbf4e32aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db1eeaa4-6673-482f-8f62-eb89284fbfdd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ea269819-4b09-472f-b5f6-ad74852b3850) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:20:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:03.628 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ea269819-4b09-472f-b5f6-ad74852b3850 in datapath 41c706f5-6f0b-47a8-91a4-16f87e2a0571 unbound from our chassis
Feb 25 07:20:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:03.630 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41c706f5-6f0b-47a8-91a4-16f87e2a0571
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.639 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.645 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:20:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:03.645 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[491d82dc-ce99-4a29-98f3-200d32d1e66d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.646 244018 INFO os_vif [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73')
Feb 25 07:20:03 np0005629333 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000018.scope: Deactivated successfully.
Feb 25 07:20:03 np0005629333 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000018.scope: Consumed 12.125s CPU time.
Feb 25 07:20:03 np0005629333 systemd-machined[210048]: Machine qemu-27-instance-00000018 terminated.
Feb 25 07:20:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:03.671 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8c506256-88c4-4833-adbf-31b7160c9e23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:20:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:03.675 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9794965c-81ee-4178-9e51-70009560d6c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.705 244018 DEBUG nova.network.neutron [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Updated VIF entry in instance network info cache for port ea269819-4b09-472f-b5f6-ad74852b3850. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.706 244018 DEBUG nova.network.neutron [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Updating instance_info_cache with network_info: [{"id": "ea269819-4b09-472f-b5f6-ad74852b3850", "address": "fa:16:3e:14:80:6d", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea269819-4b", "ovs_interfaceid": "ea269819-4b09-472f-b5f6-ad74852b3850", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:20:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:03.707 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[677531a9-3647-4e41-ad11-b147abbdbd90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.715 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.715 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.716 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] No VIF found with MAC fa:16:3e:87:71:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.716 244018 INFO nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Using config drive
Feb 25 07:20:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:03.743 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f729d9-d422-4ba8-b7f2-cd87f52feed8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41c706f5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:94:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 393631, 'reachable_time': 16472, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266347, 'error': None, 'target': 'ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.751 244018 DEBUG nova.storage.rbd_utils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:20:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:03.755 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[03cc95f3-f78d-4009-9912-4444d3f71dd0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap41c706f5-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 393641, 'tstamp': 393641}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266360, 'error': None, 'target': 'ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap41c706f5-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 393644, 'tstamp': 393644}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266360, 'error': None, 'target': 'ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:20:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:03.757 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41c706f5-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.762 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.764 244018 DEBUG oslo_concurrency.lockutils [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:20:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:03.765 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41c706f5-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.765 244018 DEBUG nova.compute.manager [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:20:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:03.765 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 07:20:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:03.765 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41c706f5-60, col_values=(('external_ids', {'iface-id': 'f59f37b4-05c7-4f51-99f1-f2c4bac42231'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.765 244018 DEBUG oslo_concurrency.lockutils [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:20:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:03.765 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.765 244018 DEBUG oslo_concurrency.lockutils [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.766 244018 DEBUG oslo_concurrency.lockutils [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.766 244018 DEBUG nova.compute.manager [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] No waiting events found dispatching network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.766 244018 WARNING nova.compute.manager [req-e69cb060-5486-44bd-813c-b2d71af9bd18 req-2a6a691f-e147-488d-851c-c30545110f0c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received unexpected event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 for instance with vm_state active and task_state rebuild_spawning.
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.767 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.777 244018 DEBUG nova.objects.instance [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'ec2_ids' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.798 244018 INFO nova.virt.libvirt.driver [-] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Instance destroyed successfully.
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.799 244018 DEBUG nova.objects.instance [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lazy-loading 'resources' on Instance uuid 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.806 244018 DEBUG nova.objects.instance [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'keypairs' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.810 244018 DEBUG nova.virt.libvirt.vif [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:19:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1490579841',display_name='tempest-FloatingIPsAssociationTestJSON-server-1490579841',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1490579841',id=24,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67d0ed57ac554e4390e928b3c8f9b5f6',ramdisk_id='',reservation_id='r-nihkdknn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1904923370',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1904923370-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:19:46Z,user_data=None,user_id='9aa84b2700234a5e9dcba1fc0bbc4cea',uuid=267cd6f8-d842-45a8-b4b1-4c6f3dee8d69,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ea269819-4b09-472f-b5f6-ad74852b3850", "address": "fa:16:3e:14:80:6d", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea269819-4b", "ovs_interfaceid": "ea269819-4b09-472f-b5f6-ad74852b3850", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.810 244018 DEBUG nova.network.os_vif_util [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Converting VIF {"id": "ea269819-4b09-472f-b5f6-ad74852b3850", "address": "fa:16:3e:14:80:6d", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea269819-4b", "ovs_interfaceid": "ea269819-4b09-472f-b5f6-ad74852b3850", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.811 244018 DEBUG nova.network.os_vif_util [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:14:80:6d,bridge_name='br-int',has_traffic_filtering=True,id=ea269819-4b09-472f-b5f6-ad74852b3850,network=Network(41c706f5-6f0b-47a8-91a4-16f87e2a0571),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea269819-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.811 244018 DEBUG os_vif [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:80:6d,bridge_name='br-int',has_traffic_filtering=True,id=ea269819-4b09-472f-b5f6-ad74852b3850,network=Network(41c706f5-6f0b-47a8-91a4-16f87e2a0571),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea269819-4b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.812 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.813 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea269819-4b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.814 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.815 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.817 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.819 244018 INFO os_vif [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:14:80:6d,bridge_name='br-int',has_traffic_filtering=True,id=ea269819-4b09-472f-b5f6-ad74852b3850,network=Network(41c706f5-6f0b-47a8-91a4-16f87e2a0571),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea269819-4b')
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.908 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.908 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.908 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.908 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 07:20:03 np0005629333 nova_compute[244014]: 2026-02-25 12:20:03.909 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.094 244018 INFO nova.virt.libvirt.driver [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Deleting instance files /var/lib/nova/instances/267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_del
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.096 244018 INFO nova.virt.libvirt.driver [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Deletion of /var/lib/nova/instances/267cd6f8-d842-45a8-b4b1-4c6f3dee8d69_del complete
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.107 244018 DEBUG nova.compute.manager [req-a1e0314d-c2ab-4852-bbb1-3fb90f6084ae req-ced7e334-ee1a-456c-bdc7-55c9b509c2c3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Received event network-vif-unplugged-ea269819-4b09-472f-b5f6-ad74852b3850 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.108 244018 DEBUG oslo_concurrency.lockutils [req-a1e0314d-c2ab-4852-bbb1-3fb90f6084ae req-ced7e334-ee1a-456c-bdc7-55c9b509c2c3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.108 244018 DEBUG oslo_concurrency.lockutils [req-a1e0314d-c2ab-4852-bbb1-3fb90f6084ae req-ced7e334-ee1a-456c-bdc7-55c9b509c2c3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.109 244018 DEBUG oslo_concurrency.lockutils [req-a1e0314d-c2ab-4852-bbb1-3fb90f6084ae req-ced7e334-ee1a-456c-bdc7-55c9b509c2c3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.109 244018 DEBUG nova.compute.manager [req-a1e0314d-c2ab-4852-bbb1-3fb90f6084ae req-ced7e334-ee1a-456c-bdc7-55c9b509c2c3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] No waiting events found dispatching network-vif-unplugged-ea269819-4b09-472f-b5f6-ad74852b3850 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.110 244018 DEBUG nova.compute.manager [req-a1e0314d-c2ab-4852-bbb1-3fb90f6084ae req-ced7e334-ee1a-456c-bdc7-55c9b509c2c3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Received event network-vif-unplugged-ea269819-4b09-472f-b5f6-ad74852b3850 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.150 244018 INFO nova.compute.manager [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Took 0.59 seconds to destroy the instance on the hypervisor.
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.151 244018 DEBUG oslo.service.loopingcall [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.152 244018 DEBUG nova.compute.manager [-] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.152 244018 DEBUG nova.network.neutron [-] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.188 244018 INFO nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Creating config drive at /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.195 244018 DEBUG oslo_concurrency.processutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpjtgfe8qj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:20:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.234 244018 DEBUG nova.compute.manager [req-5011b234-83ad-40f1-af01-e8d9746d4167 req-730ff0cb-ae87-45d0-9f28-b8b8b4b939ad 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received event network-changed-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.235 244018 DEBUG nova.compute.manager [req-5011b234-83ad-40f1-af01-e8d9746d4167 req-730ff0cb-ae87-45d0-9f28-b8b8b4b939ad 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Refreshing instance network info cache due to event network-changed-5e8b3807-0ee8-4f97-aa2d-3db7d1283888. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.236 244018 DEBUG oslo_concurrency.lockutils [req-5011b234-83ad-40f1-af01-e8d9746d4167 req-730ff0cb-ae87-45d0-9f28-b8b8b4b939ad 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.236 244018 DEBUG oslo_concurrency.lockutils [req-5011b234-83ad-40f1-af01-e8d9746d4167 req-730ff0cb-ae87-45d0-9f28-b8b8b4b939ad 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.237 244018 DEBUG nova.network.neutron [req-5011b234-83ad-40f1-af01-e8d9746d4167 req-730ff0cb-ae87-45d0-9f28-b8b8b4b939ad 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Refreshing network info cache for port 5e8b3807-0ee8-4f97-aa2d-3db7d1283888 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.327 244018 DEBUG oslo_concurrency.processutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpjtgfe8qj" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.369 244018 DEBUG nova.storage.rbd_utils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] rbd image 52f927ad-a417-489f-9f92-87bc3433649d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.373 244018 DEBUG oslo_concurrency.processutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config 52f927ad-a417-489f-9f92-87bc3433649d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:20:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:20:04 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/792541830' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.435 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.528 244018 DEBUG oslo_concurrency.processutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config 52f927ad-a417-489f-9f92-87bc3433649d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.530 244018 INFO nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Deleting local config drive /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d/disk.config because it was imported into RBD.
Feb 25 07:20:04 np0005629333 kernel: tap2e503dd2-73: entered promiscuous mode
Feb 25 07:20:04 np0005629333 NetworkManager[49836]: <info>  [1772022004.5892] manager: (tap2e503dd2-73): new Tun device (/org/freedesktop/NetworkManager/Devices/75)
Feb 25 07:20:04 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:04Z|00133|binding|INFO|Claiming lport 2e503dd2-735e-4bfc-87c7-dffab319d935 for this chassis.
Feb 25 07:20:04 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:04Z|00134|binding|INFO|2e503dd2-735e-4bfc-87c7-dffab319d935: Claiming fa:16:3e:87:71:62 10.100.0.5
Feb 25 07:20:04 np0005629333 systemd-udevd[266336]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.590 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:20:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:04.606 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:71:62 10.100.0.5'], port_security=['fa:16:3e:87:71:62 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '52f927ad-a417-489f-9f92-87bc3433649d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'fd7733ad-d262-4781-bcfa-77cfa8b67164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556b4b98-e95d-460c-a904-adc77baf4b88, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=2e503dd2-735e-4bfc-87c7-dffab319d935) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:20:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:04.608 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 2e503dd2-735e-4bfc-87c7-dffab319d935 in datapath 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 bound to our chassis
Feb 25 07:20:04 np0005629333 NetworkManager[49836]: <info>  [1772022004.6103] device (tap2e503dd2-73): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:20:04 np0005629333 NetworkManager[49836]: <info>  [1772022004.6108] device (tap2e503dd2-73): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:20:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:04.611 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6
Feb 25 07:20:04 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:04Z|00135|binding|INFO|Setting lport 2e503dd2-735e-4bfc-87c7-dffab319d935 ovn-installed in OVS
Feb 25 07:20:04 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:04Z|00136|binding|INFO|Setting lport 2e503dd2-735e-4bfc-87c7-dffab319d935 up in Southbound
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.620 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.623 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.624 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.625 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:20:04 np0005629333 systemd-machined[210048]: New machine qemu-29-instance-00000011.
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.632 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.632 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:20:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:04.634 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dcece5b0-93dd-40a3-9bf8-9c59d6e41ee2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:20:04 np0005629333 systemd[1]: Started Virtual Machine qemu-29-instance-00000011.
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.645 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.645 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.654 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.654 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:20:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:04.670 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2dcdd8e2-6d85-4760-9b05-098891cb5abb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:20:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:04.677 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f3375240-3b17-4a9c-8f77-f7432d495cbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.699 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.701 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.712 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.712 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:20:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:04.715 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3abf94e9-be8c-4df0-8661-55145f769179]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:20:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:04.749 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[869986c7-a27f-432d-b14a-bb7f2417a7f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f4cbf9a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:f8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 21, 'rx_bytes': 868, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 21, 'rx_bytes': 868, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389396, 'reachable_time': 28421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266484, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:20:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:04.770 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d38a1f36-5089-4eb8-a475-4fde47253ff1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389407, 'tstamp': 389407}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266485, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389409, 'tstamp': 389409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266485, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:04.773 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f4cbf9a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:04 np0005629333 nova_compute[244014]: 2026-02-25 12:20:04.775 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:04.779 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f4cbf9a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:04.780 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:20:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:04.781 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f4cbf9a-40, col_values=(('external_ids', {'iface-id': '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:04.782 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.075 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.076 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3467MB free_disk=59.70900002960116GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.077 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.077 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.163 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 52f927ad-a417-489f-9f92-87bc3433649d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.164 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 7a6ab503-d433-40a7-9395-3d5660e852c9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.164 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance d9b67bce-8a7c-4f49-9cab-3e20377ca630 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.164 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.165 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.165 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.165 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 89488b9f-7c53-4e00-ad62-837e33a76dae actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.166 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 7 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.166 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1408MB phys_disk=59GB used_disk=7GB total_vcpus=8 used_vcpus=7 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.185 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.209 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.209 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.223 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for 52f927ad-a417-489f-9f92-87bc3433649d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.224 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022005.2228303, 52f927ad-a417-489f-9f92-87bc3433649d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.225 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.229 244018 DEBUG nova.compute.manager [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.230 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.234 244018 INFO nova.virt.libvirt.driver [-] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance spawned successfully.#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.235 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.253 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.269 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.278 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.283 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.283 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.284 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.285 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.285 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.286 244018 DEBUG nova.virt.libvirt.driver [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.300 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.308 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.309 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022005.229014, 52f927ad-a417-489f-9f92-87bc3433649d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.309 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] VM Started (Lifecycle Event)#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.333 244018 DEBUG nova.network.neutron [-] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.345 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.348 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.364 244018 INFO nova.compute.manager [-] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Took 1.21 seconds to deallocate network for instance.#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.387 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.408 244018 DEBUG nova.compute.manager [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.462 244018 DEBUG oslo_concurrency.lockutils [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.480 244018 DEBUG oslo_concurrency.lockutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:05 np0005629333 nova_compute[244014]: 2026-02-25 12:20:05.516 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:20:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 649 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.1 MiB/s wr, 277 op/s
Feb 25 07:20:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:20:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1518581231' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.086 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.093 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.138 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.234 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.234 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.236 244018 DEBUG oslo_concurrency.lockutils [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.255 244018 DEBUG nova.compute.manager [req-2f60d195-51dd-4224-9694-c452fb07182d req-a2ccc2c5-1337-4749-bf6b-94dafddd0190 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Received event network-vif-plugged-ea269819-4b09-472f-b5f6-ad74852b3850 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.255 244018 DEBUG oslo_concurrency.lockutils [req-2f60d195-51dd-4224-9694-c452fb07182d req-a2ccc2c5-1337-4749-bf6b-94dafddd0190 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.256 244018 DEBUG oslo_concurrency.lockutils [req-2f60d195-51dd-4224-9694-c452fb07182d req-a2ccc2c5-1337-4749-bf6b-94dafddd0190 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.257 244018 DEBUG oslo_concurrency.lockutils [req-2f60d195-51dd-4224-9694-c452fb07182d req-a2ccc2c5-1337-4749-bf6b-94dafddd0190 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.257 244018 DEBUG nova.compute.manager [req-2f60d195-51dd-4224-9694-c452fb07182d req-a2ccc2c5-1337-4749-bf6b-94dafddd0190 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] No waiting events found dispatching network-vif-plugged-ea269819-4b09-472f-b5f6-ad74852b3850 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.257 244018 WARNING nova.compute.manager [req-2f60d195-51dd-4224-9694-c452fb07182d req-a2ccc2c5-1337-4749-bf6b-94dafddd0190 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Received unexpected event network-vif-plugged-ea269819-4b09-472f-b5f6-ad74852b3850 for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.307 244018 DEBUG nova.network.neutron [req-5011b234-83ad-40f1-af01-e8d9746d4167 req-730ff0cb-ae87-45d0-9f28-b8b8b4b939ad 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Updated VIF entry in instance network info cache for port 5e8b3807-0ee8-4f97-aa2d-3db7d1283888. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.308 244018 DEBUG nova.network.neutron [req-5011b234-83ad-40f1-af01-e8d9746d4167 req-730ff0cb-ae87-45d0-9f28-b8b8b4b939ad 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Updating instance_info_cache with network_info: [{"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.385 244018 DEBUG oslo_concurrency.processutils [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.451 244018 DEBUG nova.compute.manager [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.453 244018 DEBUG oslo_concurrency.lockutils [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.454 244018 DEBUG oslo_concurrency.lockutils [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.455 244018 DEBUG oslo_concurrency.lockutils [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.455 244018 DEBUG nova.compute.manager [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] No waiting events found dispatching network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.456 244018 WARNING nova.compute.manager [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received unexpected event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.456 244018 DEBUG nova.compute.manager [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Received event network-changed-b2336583-1aaa-4789-8d4f-a3a14997891d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.457 244018 DEBUG nova.compute.manager [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Refreshing instance network info cache due to event network-changed-b2336583-1aaa-4789-8d4f-a3a14997891d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.457 244018 DEBUG oslo_concurrency.lockutils [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.457 244018 DEBUG oslo_concurrency.lockutils [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.458 244018 DEBUG nova.network.neutron [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Refreshing network info cache for port b2336583-1aaa-4789-8d4f-a3a14997891d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.532 244018 DEBUG oslo_concurrency.lockutils [req-5011b234-83ad-40f1-af01-e8d9746d4167 req-730ff0cb-ae87-45d0-9f28-b8b8b4b939ad 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:20:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:20:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2618421651' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.896 244018 DEBUG oslo_concurrency.processutils [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.901 244018 DEBUG nova.compute.provider_tree [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.920 244018 DEBUG nova.scheduler.client.report [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.968 244018 DEBUG oslo_concurrency.lockutils [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.972 244018 DEBUG oslo_concurrency.lockutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 1.491s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:06 np0005629333 nova_compute[244014]: 2026-02-25 12:20:06.972 244018 DEBUG nova.objects.instance [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 25 07:20:07 np0005629333 nova_compute[244014]: 2026-02-25 12:20:07.024 244018 INFO nova.scheduler.client.report [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Deleted allocations for instance 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69#033[00m
Feb 25 07:20:07 np0005629333 nova_compute[244014]: 2026-02-25 12:20:07.075 244018 DEBUG oslo_concurrency.lockutils [None req-0dfca49c-0199-4da2-a56d-958a7d5e5b53 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:07 np0005629333 nova_compute[244014]: 2026-02-25 12:20:07.122 244018 DEBUG oslo_concurrency.lockutils [None req-e4c10727-a54f-40d8-a39c-998c0e176fe0 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "267cd6f8-d842-45a8-b4b1-4c6f3dee8d69" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:07 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:07Z|00137|binding|INFO|Releasing lport ef44c128-3fa4-4475-b63c-4818a50ede40 from this chassis (sb_readonly=0)
Feb 25 07:20:07 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:07Z|00138|binding|INFO|Releasing lport f59f37b4-05c7-4f51-99f1-f2c4bac42231 from this chassis (sb_readonly=0)
Feb 25 07:20:07 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:07Z|00139|binding|INFO|Releasing lport 2cfd1e6b-d28d-43c0-bbbd-c6ad77855812 from this chassis (sb_readonly=0)
Feb 25 07:20:07 np0005629333 nova_compute[244014]: 2026-02-25 12:20:07.235 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:07 np0005629333 nova_compute[244014]: 2026-02-25 12:20:07.237 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:20:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 563 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 3.9 MiB/s wr, 370 op/s
Feb 25 07:20:07 np0005629333 nova_compute[244014]: 2026-02-25 12:20:07.947 244018 DEBUG nova.network.neutron [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Updated VIF entry in instance network info cache for port b2336583-1aaa-4789-8d4f-a3a14997891d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:20:07 np0005629333 nova_compute[244014]: 2026-02-25 12:20:07.948 244018 DEBUG nova.network.neutron [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Updating instance_info_cache with network_info: [{"id": "b2336583-1aaa-4789-8d4f-a3a14997891d", "address": "fa:16:3e:23:d7:29", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2336583-1a", "ovs_interfaceid": "b2336583-1aaa-4789-8d4f-a3a14997891d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:20:07 np0005629333 nova_compute[244014]: 2026-02-25 12:20:07.981 244018 DEBUG oslo_concurrency.lockutils [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:20:07 np0005629333 nova_compute[244014]: 2026-02-25 12:20:07.982 244018 DEBUG nova.compute.manager [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Received event network-vif-deleted-ea269819-4b09-472f-b5f6-ad74852b3850 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:07 np0005629333 nova_compute[244014]: 2026-02-25 12:20:07.983 244018 DEBUG nova.compute.manager [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:07 np0005629333 nova_compute[244014]: 2026-02-25 12:20:07.984 244018 DEBUG oslo_concurrency.lockutils [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:07 np0005629333 nova_compute[244014]: 2026-02-25 12:20:07.984 244018 DEBUG oslo_concurrency.lockutils [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:07 np0005629333 nova_compute[244014]: 2026-02-25 12:20:07.985 244018 DEBUG oslo_concurrency.lockutils [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:07 np0005629333 nova_compute[244014]: 2026-02-25 12:20:07.985 244018 DEBUG nova.compute.manager [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] No waiting events found dispatching network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:20:07 np0005629333 nova_compute[244014]: 2026-02-25 12:20:07.986 244018 WARNING nova.compute.manager [req-61e407ff-c9ee-4673-b4b3-cc259e2e2a61 req-2af00afe-cd1c-4056-9afe-51c3cb33c5ac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received unexpected event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:20:08 np0005629333 nova_compute[244014]: 2026-02-25 12:20:08.447 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:08 np0005629333 nova_compute[244014]: 2026-02-25 12:20:08.815 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:20:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Feb 25 07:20:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Feb 25 07:20:09 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Feb 25 07:20:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:20:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:20:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:20:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:20:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:09Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0c:10:e8 10.100.0.13
Feb 25 07:20:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:09Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0c:10:e8 10.100.0.13
Feb 25 07:20:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 563 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 3.9 MiB/s wr, 370 op/s
Feb 25 07:20:10 np0005629333 nova_compute[244014]: 2026-02-25 12:20:10.309 244018 DEBUG oslo_concurrency.lockutils [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:10 np0005629333 nova_compute[244014]: 2026-02-25 12:20:10.310 244018 DEBUG oslo_concurrency.lockutils [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:10 np0005629333 nova_compute[244014]: 2026-02-25 12:20:10.310 244018 DEBUG oslo_concurrency.lockutils [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:10 np0005629333 nova_compute[244014]: 2026-02-25 12:20:10.310 244018 DEBUG oslo_concurrency.lockutils [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:10 np0005629333 nova_compute[244014]: 2026-02-25 12:20:10.310 244018 DEBUG oslo_concurrency.lockutils [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:10 np0005629333 nova_compute[244014]: 2026-02-25 12:20:10.311 244018 INFO nova.compute.manager [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Terminating instance#033[00m
Feb 25 07:20:10 np0005629333 nova_compute[244014]: 2026-02-25 12:20:10.312 244018 DEBUG nova.compute.manager [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
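The lockutils lines above show how nova-compute serializes a delete: one long-lived lock named after the instance UUID wraps do_terminate_instance, and a short-lived "<uuid>-events" lock guards clearing pending external events before teardown starts. A minimal sketch of that pattern with oslo.concurrency follows; the lock names are copied from the log, but the function bodies are placeholders, not Nova's code.

```python
# Sketch of the oslo.concurrency locking pattern logged above.
# Lock names copied from the log; bodies are illustrative placeholders.
from oslo_concurrency import lockutils

INSTANCE_UUID = '8993d2e7-8b5d-42eb-9e24-f96dcc4da39b'

@lockutils.synchronized(INSTANCE_UUID + '-events')
def clear_events_for_instance():
    # Nested, short-lived lock: acquired and released within ~0ms above.
    pass

@lockutils.synchronized(INSTANCE_UUID)
def do_terminate_instance():
    # Held for the whole teardown; the log later shows it held for 2.728s.
    clear_events_for_instance()
    # ... destroy domain, unplug VIFs, deallocate network ...

if __name__ == '__main__':
    do_terminate_instance()
```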
Feb 25 07:20:10 np0005629333 kernel: tap55015950-cf (unregistering): left promiscuous mode
Feb 25 07:20:10 np0005629333 NetworkManager[49836]: <info>  [1772022010.3546] device (tap55015950-cf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:20:10 np0005629333 nova_compute[244014]: 2026-02-25 12:20:10.362 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:10Z|00140|binding|INFO|Releasing lport 55015950-cf1b-4183-802f-22f661123534 from this chassis (sb_readonly=0)
Feb 25 07:20:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:10Z|00141|binding|INFO|Setting lport 55015950-cf1b-4183-802f-22f661123534 down in Southbound
Feb 25 07:20:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:10Z|00142|binding|INFO|Removing iface tap55015950-cf ovn-installed in OVS
Feb 25 07:20:10 np0005629333 nova_compute[244014]: 2026-02-25 12:20:10.366 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.368 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:da:42 10.100.0.12'], port_security=['fa:16:3e:c6:da:42 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '8993d2e7-8b5d-42eb-9e24-f96dcc4da39b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fd7733ad-d262-4781-bcfa-77cfa8b67164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556b4b98-e95d-460c-a904-adc77baf4b88, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=55015950-cf1b-4183-802f-22f661123534) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:20:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.370 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 55015950-cf1b-4183-802f-22f661123534 in datapath 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 unbound from our chassis#033[00m
Feb 25 07:20:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.371 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6#033[00m
Feb 25 07:20:10 np0005629333 nova_compute[244014]: 2026-02-25 12:20:10.376 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:10 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:20:10 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:20:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.386 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f7380ce0-751e-42cd-88e2-fa372e3d359c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:10 np0005629333 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000015.scope: Deactivated successfully.
Feb 25 07:20:10 np0005629333 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000015.scope: Consumed 14.578s CPU time.
Feb 25 07:20:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.412 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5da462da-f7de-4d4d-8ebc-c434db4f5ff7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:10 np0005629333 systemd-machined[210048]: Machine qemu-23-instance-00000015 terminated.
Feb 25 07:20:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.415 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[bd33ee58-df8e-411f-a677-fdedc033ec37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.435 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e3cc1b73-267b-424c-9e7b-1ee82483bd84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.446 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[20e4b836-07b4-48d9-905c-e239da204ac7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f4cbf9a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:f8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 23, 'rx_bytes': 868, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 23, 'rx_bytes': 868, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389396, 'reachable_time': 28421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266801, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.456 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5803a83a-6038-4bd7-8dc4-d1a9ebf63497]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389407, 'tstamp': 389407}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266802, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389409, 'tstamp': 389409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266802, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.457 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f4cbf9a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:10 np0005629333 nova_compute[244014]: 2026-02-25 12:20:10.459 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:10 np0005629333 nova_compute[244014]: 2026-02-25 12:20:10.463 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.463 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f4cbf9a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.463 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:20:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.463 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f4cbf9a-40, col_values=(('external_ids', {'iface-id': '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.464 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
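The three ovsdbapp transactions above (DelPortCommand with if_exists on br-ex, AddPortCommand with may_exist on br-int, and DbSetCommand on the Interface row) correspond one-to-one to standard ovs-vsctl operations. A hedged sketch of the same sequence driven through the CLI instead of the OVSDB IDL, with the port name and iface-id copied from the log:

```python
# Sketch: the OVS changes the metadata agent committed via ovsdbapp,
# expressed as equivalent ovs-vsctl calls. Values copied from the log.
import subprocess

PORT = 'tap1f4cbf9a-40'
IFACE_ID = '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'

def vsctl(*args):
    subprocess.run(['ovs-vsctl', *args], check=True)

# DelPortCommand(port=..., bridge=br-ex, if_exists=True)
vsctl('--if-exists', 'del-port', 'br-ex', PORT)
# AddPortCommand(bridge=br-int, port=..., may_exist=True)
vsctl('--may-exist', 'add-port', 'br-int', PORT)
# DbSetCommand(table=Interface, col_values=(('external_ids', {...}),))
vsctl('set', 'Interface', PORT, f'external_ids:iface-id={IFACE_ID}')
```

The "Transaction caused no change" lines indicate the IDL found the desired state already in place, so ovsdbapp committed nothing; the may_exist/if_exists flags make the whole sequence idempotent.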
Feb 25 07:20:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:20:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:20:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:20:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:20:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:20:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:20:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:20:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:20:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:20:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:20:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:20:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:20:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:20:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:20:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:20:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
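The audited mon commands above are cephadm (via mgr.compute-0.jzfame) refreshing its view of the cluster. The two read-only ones can be reproduced from any host holding an admin keyring; a sketch using the ceph CLI, where the subprocess wrapper is purely illustrative:

```python
# Sketch: re-issuing the two read-only mon commands audited above.
import subprocess

def ceph(*args):
    return subprocess.run(['ceph', *args], check=True,
                          capture_output=True, text=True).stdout

# {"prefix": "config generate-minimal-conf"}
minimal_conf = ceph('config', 'generate-minimal-conf')
# {"prefix": "auth get", "entity": "client.bootstrap-osd"}
bootstrap_key = ceph('auth', 'get', 'client.bootstrap-osd')
print(minimal_conf)
```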
Feb 25 07:20:10 np0005629333 nova_compute[244014]: 2026-02-25 12:20:10.545 244018 INFO nova.virt.libvirt.driver [-] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Instance destroyed successfully.#033[00m
Feb 25 07:20:10 np0005629333 nova_compute[244014]: 2026-02-25 12:20:10.547 244018 DEBUG nova.objects.instance [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'resources' on Instance uuid 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:20:10 np0005629333 nova_compute[244014]: 2026-02-25 12:20:10.725 244018 DEBUG nova.virt.libvirt.vif [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:19:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-649798416',display_name='tempest-ServersAdminTestJSON-server-649798416',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-649798416',id=21,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-fszvg8lb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:19:25Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=8993d2e7-8b5d-42eb-9e24-f96dcc4da39b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "55015950-cf1b-4183-802f-22f661123534", "address": "fa:16:3e:c6:da:42", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55015950-cf", "ovs_interfaceid": "55015950-cf1b-4183-802f-22f661123534", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:20:10 np0005629333 nova_compute[244014]: 2026-02-25 12:20:10.726 244018 DEBUG nova.network.os_vif_util [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "55015950-cf1b-4183-802f-22f661123534", "address": "fa:16:3e:c6:da:42", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55015950-cf", "ovs_interfaceid": "55015950-cf1b-4183-802f-22f661123534", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:20:10 np0005629333 nova_compute[244014]: 2026-02-25 12:20:10.727 244018 DEBUG nova.network.os_vif_util [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c6:da:42,bridge_name='br-int',has_traffic_filtering=True,id=55015950-cf1b-4183-802f-22f661123534,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55015950-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:20:10 np0005629333 nova_compute[244014]: 2026-02-25 12:20:10.728 244018 DEBUG os_vif [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:da:42,bridge_name='br-int',has_traffic_filtering=True,id=55015950-cf1b-4183-802f-22f661123534,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55015950-cf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:20:10 np0005629333 nova_compute[244014]: 2026-02-25 12:20:10.730 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:10 np0005629333 nova_compute[244014]: 2026-02-25 12:20:10.731 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55015950-cf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:10 np0005629333 nova_compute[244014]: 2026-02-25 12:20:10.738 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:10 np0005629333 nova_compute[244014]: 2026-02-25 12:20:10.741 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:20:10 np0005629333 nova_compute[244014]: 2026-02-25 12:20:10.743 244018 INFO os_vif [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c6:da:42,bridge_name='br-int',has_traffic_filtering=True,id=55015950-cf1b-4183-802f-22f661123534,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55015950-cf')#033[00m
Feb 25 07:20:10 np0005629333 podman[266893]: 2026-02-25 12:20:10.869139504 +0000 UTC m=+0.049212331 container create e81a562e303c57a8feb8303e63987e8412102bf793e720893970082c61ee0936 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_heyrovsky, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Feb 25 07:20:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:10.881 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:10 np0005629333 systemd[1]: Started libpod-conmon-e81a562e303c57a8feb8303e63987e8412102bf793e720893970082c61ee0936.scope.
Feb 25 07:20:10 np0005629333 podman[266893]: 2026-02-25 12:20:10.847583811 +0000 UTC m=+0.027656708 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:20:10 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:20:10 np0005629333 podman[266893]: 2026-02-25 12:20:10.972191286 +0000 UTC m=+0.152264173 container init e81a562e303c57a8feb8303e63987e8412102bf793e720893970082c61ee0936 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_heyrovsky, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 07:20:10 np0005629333 podman[266893]: 2026-02-25 12:20:10.980336728 +0000 UTC m=+0.160409585 container start e81a562e303c57a8feb8303e63987e8412102bf793e720893970082c61ee0936 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_heyrovsky, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 25 07:20:10 np0005629333 podman[266893]: 2026-02-25 12:20:10.983878788 +0000 UTC m=+0.163951685 container attach e81a562e303c57a8feb8303e63987e8412102bf793e720893970082c61ee0936 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_heyrovsky, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 07:20:10 np0005629333 dazzling_heyrovsky[266910]: 167 167
Feb 25 07:20:10 np0005629333 systemd[1]: libpod-e81a562e303c57a8feb8303e63987e8412102bf793e720893970082c61ee0936.scope: Deactivated successfully.
Feb 25 07:20:10 np0005629333 conmon[266910]: conmon e81a562e303c57a8feb8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e81a562e303c57a8feb8303e63987e8412102bf793e720893970082c61ee0936.scope/container/memory.events
Feb 25 07:20:10 np0005629333 podman[266893]: 2026-02-25 12:20:10.988318845 +0000 UTC m=+0.168391692 container died e81a562e303c57a8feb8303e63987e8412102bf793e720893970082c61ee0936 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Feb 25 07:20:11 np0005629333 systemd[1]: var-lib-containers-storage-overlay-afe0ceb13a9c74b88e37e7e6865491b4bb4857ee76c17e5d003e0cd3232fdfa2-merged.mount: Deactivated successfully.
Feb 25 07:20:11 np0005629333 podman[266893]: 2026-02-25 12:20:11.037193035 +0000 UTC m=+0.217265882 container remove e81a562e303c57a8feb8303e63987e8412102bf793e720893970082c61ee0936 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:20:11 np0005629333 systemd[1]: libpod-conmon-e81a562e303c57a8feb8303e63987e8412102bf793e720893970082c61ee0936.scope: Deactivated successfully.
Feb 25 07:20:11 np0005629333 nova_compute[244014]: 2026-02-25 12:20:11.083 244018 INFO nova.virt.libvirt.driver [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Deleting instance files /var/lib/nova/instances/8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_del#033[00m
Feb 25 07:20:11 np0005629333 nova_compute[244014]: 2026-02-25 12:20:11.084 244018 INFO nova.virt.libvirt.driver [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Deletion of /var/lib/nova/instances/8993d2e7-8b5d-42eb-9e24-f96dcc4da39b_del complete#033[00m
Feb 25 07:20:11 np0005629333 nova_compute[244014]: 2026-02-25 12:20:11.189 244018 INFO nova.compute.manager [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Took 0.88 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:20:11 np0005629333 nova_compute[244014]: 2026-02-25 12:20:11.190 244018 DEBUG oslo.service.loopingcall [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:20:11 np0005629333 nova_compute[244014]: 2026-02-25 12:20:11.190 244018 DEBUG nova.compute.manager [-] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:20:11 np0005629333 nova_compute[244014]: 2026-02-25 12:20:11.191 244018 DEBUG nova.network.neutron [-] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:20:11 np0005629333 podman[266933]: 2026-02-25 12:20:11.262828435 +0000 UTC m=+0.072438252 container create dc1a2b43ef60a3211c4a691664d7e6a5119e34296be58bbe5640e6ef23e90226 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_payne, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 07:20:11 np0005629333 podman[266933]: 2026-02-25 12:20:11.221397866 +0000 UTC m=+0.031007703 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:20:11 np0005629333 systemd[1]: Started libpod-conmon-dc1a2b43ef60a3211c4a691664d7e6a5119e34296be58bbe5640e6ef23e90226.scope.
Feb 25 07:20:11 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:20:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9781374f7bfa05477903f28c062f89a213902bc788d26d09d1694bc6ada42e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:20:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9781374f7bfa05477903f28c062f89a213902bc788d26d09d1694bc6ada42e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:20:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9781374f7bfa05477903f28c062f89a213902bc788d26d09d1694bc6ada42e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:20:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9781374f7bfa05477903f28c062f89a213902bc788d26d09d1694bc6ada42e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:20:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce9781374f7bfa05477903f28c062f89a213902bc788d26d09d1694bc6ada42e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:20:11 np0005629333 podman[266933]: 2026-02-25 12:20:11.387673287 +0000 UTC m=+0.197283164 container init dc1a2b43ef60a3211c4a691664d7e6a5119e34296be58bbe5640e6ef23e90226 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:20:11 np0005629333 podman[266933]: 2026-02-25 12:20:11.39620344 +0000 UTC m=+0.205813247 container start dc1a2b43ef60a3211c4a691664d7e6a5119e34296be58bbe5640e6ef23e90226 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_payne, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 07:20:11 np0005629333 podman[266933]: 2026-02-25 12:20:11.400237565 +0000 UTC m=+0.209847362 container attach dc1a2b43ef60a3211c4a691664d7e6a5119e34296be58bbe5640e6ef23e90226 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:20:11 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:20:11 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:20:11 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:20:11 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:20:11 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:20:11 np0005629333 nova_compute[244014]: 2026-02-25 12:20:11.567 244018 DEBUG nova.compute.manager [req-cb503a03-65af-4576-8dae-f3257ed3fe03 req-47d4c79f-7ee2-4c2f-bfc9-ba41b426952a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Received event network-vif-unplugged-55015950-cf1b-4183-802f-22f661123534 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:11 np0005629333 nova_compute[244014]: 2026-02-25 12:20:11.569 244018 DEBUG oslo_concurrency.lockutils [req-cb503a03-65af-4576-8dae-f3257ed3fe03 req-47d4c79f-7ee2-4c2f-bfc9-ba41b426952a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:11 np0005629333 nova_compute[244014]: 2026-02-25 12:20:11.569 244018 DEBUG oslo_concurrency.lockutils [req-cb503a03-65af-4576-8dae-f3257ed3fe03 req-47d4c79f-7ee2-4c2f-bfc9-ba41b426952a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:11 np0005629333 nova_compute[244014]: 2026-02-25 12:20:11.569 244018 DEBUG oslo_concurrency.lockutils [req-cb503a03-65af-4576-8dae-f3257ed3fe03 req-47d4c79f-7ee2-4c2f-bfc9-ba41b426952a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:11 np0005629333 nova_compute[244014]: 2026-02-25 12:20:11.569 244018 DEBUG nova.compute.manager [req-cb503a03-65af-4576-8dae-f3257ed3fe03 req-47d4c79f-7ee2-4c2f-bfc9-ba41b426952a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] No waiting events found dispatching network-vif-unplugged-55015950-cf1b-4183-802f-22f661123534 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:20:11 np0005629333 nova_compute[244014]: 2026-02-25 12:20:11.570 244018 DEBUG nova.compute.manager [req-cb503a03-65af-4576-8dae-f3257ed3fe03 req-47d4c79f-7ee2-4c2f-bfc9-ba41b426952a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Received event network-vif-unplugged-55015950-cf1b-4183-802f-22f661123534 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:20:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 555 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 364 op/s
Feb 25 07:20:11 np0005629333 practical_payne[266949]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:20:11 np0005629333 practical_payne[266949]: --> All data devices are unavailable
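The practical_payne output above is ceph-volume reporting that all three LVM data devices it was handed are already consumed, so there is nothing to deploy. Availability can be inspected directly with ceph-volume's inventory subcommand; a sketch, assuming the JSON output carries per-device 'path' and 'available' keys as in current ceph-volume releases:

```python
# Sketch: listing device availability with ceph-volume (run on the host).
# Assumption: inventory JSON is a list of dicts with 'path'/'available'.
import json
import subprocess

out = subprocess.run(['ceph-volume', 'inventory', '--format', 'json'],
                     check=True, capture_output=True, text=True).stdout
for dev in json.loads(out):
    state = 'available' if dev['available'] else 'unavailable'
    print(dev['path'], state)
```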
Feb 25 07:20:11 np0005629333 systemd[1]: libpod-dc1a2b43ef60a3211c4a691664d7e6a5119e34296be58bbe5640e6ef23e90226.scope: Deactivated successfully.
Feb 25 07:20:11 np0005629333 podman[266933]: 2026-02-25 12:20:11.903245517 +0000 UTC m=+0.712855334 container died dc1a2b43ef60a3211c4a691664d7e6a5119e34296be58bbe5640e6ef23e90226 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_payne, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 07:20:12 np0005629333 nova_compute[244014]: 2026-02-25 12:20:12.025 244018 DEBUG nova.network.neutron [-] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:20:12 np0005629333 nova_compute[244014]: 2026-02-25 12:20:12.040 244018 INFO nova.compute.manager [-] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Took 0.85 seconds to deallocate network for instance.#033[00m
Feb 25 07:20:12 np0005629333 nova_compute[244014]: 2026-02-25 12:20:12.094 244018 DEBUG oslo_concurrency.lockutils [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:12 np0005629333 nova_compute[244014]: 2026-02-25 12:20:12.095 244018 DEBUG oslo_concurrency.lockutils [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:12 np0005629333 nova_compute[244014]: 2026-02-25 12:20:12.125 244018 DEBUG nova.compute.manager [req-32bb59e6-db67-4b08-8483-b0c8bacc911b req-03142469-1835-4b31-811f-83d71e6a600a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Received event network-vif-deleted-55015950-cf1b-4183-802f-22f661123534 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:12 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ce9781374f7bfa05477903f28c062f89a213902bc788d26d09d1694bc6ada42e-merged.mount: Deactivated successfully.
Feb 25 07:20:12 np0005629333 podman[266933]: 2026-02-25 12:20:12.2544745 +0000 UTC m=+1.064084317 container remove dc1a2b43ef60a3211c4a691664d7e6a5119e34296be58bbe5640e6ef23e90226 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:20:12 np0005629333 systemd[1]: libpod-conmon-dc1a2b43ef60a3211c4a691664d7e6a5119e34296be58bbe5640e6ef23e90226.scope: Deactivated successfully.
Feb 25 07:20:12 np0005629333 nova_compute[244014]: 2026-02-25 12:20:12.281 244018 DEBUG oslo_concurrency.processutils [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:20:12 np0005629333 podman[267063]: 2026-02-25 12:20:12.759285573 +0000 UTC m=+0.062974123 container create 933c276668caefb25389a014009a3f81ad7536c9e74f131f7f9b4e57b9093045 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 07:20:12 np0005629333 podman[267063]: 2026-02-25 12:20:12.717206756 +0000 UTC m=+0.020895296 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:20:12 np0005629333 systemd[1]: Started libpod-conmon-933c276668caefb25389a014009a3f81ad7536c9e74f131f7f9b4e57b9093045.scope.
Feb 25 07:20:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:20:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4129356601' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:20:12 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:20:12 np0005629333 podman[267063]: 2026-02-25 12:20:12.857130087 +0000 UTC m=+0.160818647 container init 933c276668caefb25389a014009a3f81ad7536c9e74f131f7f9b4e57b9093045 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goldstine, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 07:20:12 np0005629333 nova_compute[244014]: 2026-02-25 12:20:12.855 244018 DEBUG oslo_concurrency.processutils [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
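The `ceph df --format=json` call above is how this deployment samples RBD pool capacity after the delete (it returned in 0.574s and produced the audited "df" dispatch on the mon). A sketch that runs the same command line and reads the cluster totals from the 'stats' object of the JSON output:

```python
# Sketch: parsing the same `ceph df` invocation nova-compute ran above.
import json
import subprocess

cmd = ['ceph', 'df', '--format=json', '--id', 'openstack',
       '--conf', '/etc/ceph/ceph.conf']
out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
stats = json.loads(out)['stats']
print('total bytes:', stats['total_bytes'])
print('avail bytes:', stats['total_avail_bytes'])
```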
Feb 25 07:20:12 np0005629333 podman[267063]: 2026-02-25 12:20:12.863279322 +0000 UTC m=+0.166967852 container start 933c276668caefb25389a014009a3f81ad7536c9e74f131f7f9b4e57b9093045 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goldstine, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Feb 25 07:20:12 np0005629333 podman[267063]: 2026-02-25 12:20:12.866141203 +0000 UTC m=+0.169829723 container attach 933c276668caefb25389a014009a3f81ad7536c9e74f131f7f9b4e57b9093045 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:20:12 np0005629333 systemd[1]: libpod-933c276668caefb25389a014009a3f81ad7536c9e74f131f7f9b4e57b9093045.scope: Deactivated successfully.
Feb 25 07:20:12 np0005629333 sad_goldstine[267080]: 167 167
Feb 25 07:20:12 np0005629333 conmon[267080]: conmon 933c276668caefb25389 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-933c276668caefb25389a014009a3f81ad7536c9e74f131f7f9b4e57b9093045.scope/container/memory.events
Feb 25 07:20:12 np0005629333 podman[267063]: 2026-02-25 12:20:12.87234075 +0000 UTC m=+0.176029270 container died 933c276668caefb25389a014009a3f81ad7536c9e74f131f7f9b4e57b9093045 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:20:12 np0005629333 nova_compute[244014]: 2026-02-25 12:20:12.870 244018 DEBUG nova.compute.provider_tree [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:20:12 np0005629333 systemd[1]: var-lib-containers-storage-overlay-fd329842d05144bc3c81cc132f5fb82920dfff7494b56222cfbcfd265fbd6c53-merged.mount: Deactivated successfully.
Feb 25 07:20:12 np0005629333 podman[267063]: 2026-02-25 12:20:12.905661568 +0000 UTC m=+0.209350088 container remove 933c276668caefb25389a014009a3f81ad7536c9e74f131f7f9b4e57b9093045 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 07:20:12 np0005629333 nova_compute[244014]: 2026-02-25 12:20:12.905 244018 DEBUG nova.scheduler.client.report [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
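[Note on the inventory line above: this payload is what Placement uses to size the host. A minimal sketch of the capacity arithmetic, assuming the standard Placement formula capacity = (total - reserved) * allocation_ratio; the dict literal is copied from the log, the script itself is ours.]

    # Sketch only: derive schedulable capacity from the logged inventory dict,
    # assuming Placement's capacity = (total - reserved) * allocation_ratio.
    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
        print(f"{rc}: {capacity}")
    # -> VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52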
Feb 25 07:20:12 np0005629333 systemd[1]: libpod-conmon-933c276668caefb25389a014009a3f81ad7536c9e74f131f7f9b4e57b9093045.scope: Deactivated successfully.
Feb 25 07:20:12 np0005629333 nova_compute[244014]: 2026-02-25 12:20:12.947 244018 DEBUG oslo_concurrency.lockutils [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:12 np0005629333 nova_compute[244014]: 2026-02-25 12:20:12.975 244018 INFO nova.scheduler.client.report [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Deleted allocations for instance 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b#033[00m
Feb 25 07:20:13 np0005629333 nova_compute[244014]: 2026-02-25 12:20:13.038 244018 DEBUG oslo_concurrency.lockutils [None req-f74dae74-7651-4f46-a291-0c08756c5892 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:13 np0005629333 podman[267106]: 2026-02-25 12:20:13.080227875 +0000 UTC m=+0.050463107 container create 43baeabce3535e5fec13ca2e75780e41bb6b0bf9d9895efccdad00c486d5c8c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:20:13 np0005629333 systemd[1]: Started libpod-conmon-43baeabce3535e5fec13ca2e75780e41bb6b0bf9d9895efccdad00c486d5c8c3.scope.
Feb 25 07:20:13 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:20:13 np0005629333 podman[267106]: 2026-02-25 12:20:13.062405667 +0000 UTC m=+0.032640909 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:20:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07b880d734a62ee220340aaaf44cd8158a34d8392d8d7f963362f96b3eac70fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:20:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07b880d734a62ee220340aaaf44cd8158a34d8392d8d7f963362f96b3eac70fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:20:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07b880d734a62ee220340aaaf44cd8158a34d8392d8d7f963362f96b3eac70fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:20:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07b880d734a62ee220340aaaf44cd8158a34d8392d8d7f963362f96b3eac70fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:20:13 np0005629333 podman[267106]: 2026-02-25 12:20:13.179974693 +0000 UTC m=+0.150209955 container init 43baeabce3535e5fec13ca2e75780e41bb6b0bf9d9895efccdad00c486d5c8c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 07:20:13 np0005629333 podman[267106]: 2026-02-25 12:20:13.194419303 +0000 UTC m=+0.164654555 container start 43baeabce3535e5fec13ca2e75780e41bb6b0bf9d9895efccdad00c486d5c8c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:20:13 np0005629333 podman[267106]: 2026-02-25 12:20:13.198049197 +0000 UTC m=+0.168284509 container attach 43baeabce3535e5fec13ca2e75780e41bb6b0bf9d9895efccdad00c486d5c8c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_thompson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 07:20:13 np0005629333 nova_compute[244014]: 2026-02-25 12:20:13.450 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]: {
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:    "0": [
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:        {
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "devices": [
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "/dev/loop3"
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            ],
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "lv_name": "ceph_lv0",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "lv_size": "21470642176",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "name": "ceph_lv0",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "tags": {
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.cluster_name": "ceph",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.crush_device_class": "",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.encrypted": "0",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.objectstore": "bluestore",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.osd_id": "0",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.type": "block",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.vdo": "0",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.with_tpm": "0"
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            },
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "type": "block",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "vg_name": "ceph_vg0"
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:        }
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:    ],
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:    "1": [
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:        {
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "devices": [
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "/dev/loop4"
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            ],
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "lv_name": "ceph_lv1",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "lv_size": "21470642176",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "name": "ceph_lv1",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "tags": {
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.cluster_name": "ceph",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.crush_device_class": "",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.encrypted": "0",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.objectstore": "bluestore",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.osd_id": "1",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.type": "block",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.vdo": "0",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.with_tpm": "0"
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            },
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "type": "block",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "vg_name": "ceph_vg1"
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:        }
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:    ],
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:    "2": [
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:        {
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "devices": [
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "/dev/loop5"
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            ],
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "lv_name": "ceph_lv2",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "lv_size": "21470642176",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "name": "ceph_lv2",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "tags": {
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.cluster_name": "ceph",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.crush_device_class": "",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.encrypted": "0",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.objectstore": "bluestore",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.osd_id": "2",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.type": "block",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.vdo": "0",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:                "ceph.with_tpm": "0"
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            },
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "type": "block",
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:            "vg_name": "ceph_vg2"
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:        }
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]:    ]
Feb 25 07:20:13 np0005629333 beautiful_thompson[267123]: }
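[Note on the JSON block above: it has the shape of `ceph-volume lvm list --format json` output — top-level keys are OSD ids, each mapping to a list of LV records. A hedged parsing sketch follows; it reads the JSON on stdin, and the variable names and stdin plumbing are ours, not from the log.]

    import json
    import sys

    # Sketch: summarize a ceph-volume "lvm list"-style JSON document like the
    # one logged above. Keys are OSD ids; each value is a list of LV records.
    payload = json.load(sys.stdin)
    for osd_id in sorted(payload, key=int):
        for lv in payload[osd_id]:
            size_gib = int(lv['lv_size']) / 1024**3
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"({size_gib:.1f} GiB on {','.join(lv['devices'])}, "
                  f"osd_fsid {lv['tags']['ceph.osd_fsid']})")
    # For the payload above, the first line printed would be:
    # osd.0: /dev/ceph_vg0/ceph_lv0 (20.0 GiB on /dev/loop3, osd_fsid d19afe3c-7923-4776-bcc2-88886150b441)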
Feb 25 07:20:13 np0005629333 systemd[1]: libpod-43baeabce3535e5fec13ca2e75780e41bb6b0bf9d9895efccdad00c486d5c8c3.scope: Deactivated successfully.
Feb 25 07:20:13 np0005629333 podman[267132]: 2026-02-25 12:20:13.573627962 +0000 UTC m=+0.026689551 container died 43baeabce3535e5fec13ca2e75780e41bb6b0bf9d9895efccdad00c486d5c8c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_thompson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:20:13 np0005629333 systemd[1]: var-lib-containers-storage-overlay-07b880d734a62ee220340aaaf44cd8158a34d8392d8d7f963362f96b3eac70fb-merged.mount: Deactivated successfully.
Feb 25 07:20:13 np0005629333 podman[267132]: 2026-02-25 12:20:13.61150119 +0000 UTC m=+0.064562739 container remove 43baeabce3535e5fec13ca2e75780e41bb6b0bf9d9895efccdad00c486d5c8c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_thompson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:20:13 np0005629333 systemd[1]: libpod-conmon-43baeabce3535e5fec13ca2e75780e41bb6b0bf9d9895efccdad00c486d5c8c3.scope: Deactivated successfully.
Feb 25 07:20:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 517 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 257 op/s
Feb 25 07:20:13 np0005629333 nova_compute[244014]: 2026-02-25 12:20:13.787 244018 DEBUG nova.compute.manager [req-a0998892-6103-4ade-b72a-bd34c2736981 req-11e832b3-05af-4995-b161-360628ea94af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Received event network-changed-b2336583-1aaa-4789-8d4f-a3a14997891d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:13 np0005629333 nova_compute[244014]: 2026-02-25 12:20:13.787 244018 DEBUG nova.compute.manager [req-a0998892-6103-4ade-b72a-bd34c2736981 req-11e832b3-05af-4995-b161-360628ea94af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Refreshing instance network info cache due to event network-changed-b2336583-1aaa-4789-8d4f-a3a14997891d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:20:13 np0005629333 nova_compute[244014]: 2026-02-25 12:20:13.787 244018 DEBUG oslo_concurrency.lockutils [req-a0998892-6103-4ade-b72a-bd34c2736981 req-11e832b3-05af-4995-b161-360628ea94af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:20:13 np0005629333 nova_compute[244014]: 2026-02-25 12:20:13.788 244018 DEBUG oslo_concurrency.lockutils [req-a0998892-6103-4ade-b72a-bd34c2736981 req-11e832b3-05af-4995-b161-360628ea94af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:20:13 np0005629333 nova_compute[244014]: 2026-02-25 12:20:13.788 244018 DEBUG nova.network.neutron [req-a0998892-6103-4ade-b72a-bd34c2736981 req-11e832b3-05af-4995-b161-360628ea94af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Refreshing network info cache for port b2336583-1aaa-4789-8d4f-a3a14997891d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:20:14 np0005629333 podman[267212]: 2026-02-25 12:20:14.111898257 +0000 UTC m=+0.062918791 container create 23d656d7799bf30ff10d48d4bb2c6722780a1e31c7567f2ebea660dea3e5e805 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.150 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772021999.149009, d44c3dbc-e4bc-4235-bd88-b39616473248 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.150 244018 INFO nova.compute.manager [-] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:20:14 np0005629333 systemd[1]: Started libpod-conmon-23d656d7799bf30ff10d48d4bb2c6722780a1e31c7567f2ebea660dea3e5e805.scope.
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.175 244018 DEBUG nova.compute.manager [None req-d3b9eff6-8cdf-426a-82f8-24eba9b515a6 - - - - - -] [instance: d44c3dbc-e4bc-4235-bd88-b39616473248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:20:14 np0005629333 podman[267212]: 2026-02-25 12:20:14.083805258 +0000 UTC m=+0.034825852 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.181 244018 DEBUG oslo_concurrency.lockutils [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.181 244018 DEBUG oslo_concurrency.lockutils [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.182 244018 DEBUG oslo_concurrency.lockutils [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.182 244018 DEBUG oslo_concurrency.lockutils [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.182 244018 DEBUG oslo_concurrency.lockutils [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.184 244018 INFO nova.compute.manager [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Terminating instance#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.185 244018 DEBUG nova.compute.manager [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:20:14 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:20:14 np0005629333 podman[267212]: 2026-02-25 12:20:14.207364033 +0000 UTC m=+0.158384567 container init 23d656d7799bf30ff10d48d4bb2c6722780a1e31c7567f2ebea660dea3e5e805 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 07:20:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:20:14 np0005629333 podman[267212]: 2026-02-25 12:20:14.217143301 +0000 UTC m=+0.168163835 container start 23d656d7799bf30ff10d48d4bb2c6722780a1e31c7567f2ebea660dea3e5e805 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:20:14 np0005629333 podman[267212]: 2026-02-25 12:20:14.223472931 +0000 UTC m=+0.174493515 container attach 23d656d7799bf30ff10d48d4bb2c6722780a1e31c7567f2ebea660dea3e5e805 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 07:20:14 np0005629333 awesome_nobel[267228]: 167 167
Feb 25 07:20:14 np0005629333 systemd[1]: libpod-23d656d7799bf30ff10d48d4bb2c6722780a1e31c7567f2ebea660dea3e5e805.scope: Deactivated successfully.
Feb 25 07:20:14 np0005629333 podman[267212]: 2026-02-25 12:20:14.230236934 +0000 UTC m=+0.181257458 container died 23d656d7799bf30ff10d48d4bb2c6722780a1e31c7567f2ebea660dea3e5e805 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:20:14 np0005629333 kernel: tap24e2d6b3-f0 (unregistering): left promiscuous mode
Feb 25 07:20:14 np0005629333 NetworkManager[49836]: <info>  [1772022014.2441] device (tap24e2d6b3-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:20:14 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:14Z|00143|binding|INFO|Releasing lport 24e2d6b3-f0d8-4603-8c1e-da65e164050d from this chassis (sb_readonly=0)
Feb 25 07:20:14 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:14Z|00144|binding|INFO|Setting lport 24e2d6b3-f0d8-4603-8c1e-da65e164050d down in Southbound
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.254 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:14 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:14Z|00145|binding|INFO|Removing iface tap24e2d6b3-f0 ovn-installed in OVS
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.257 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:14 np0005629333 systemd[1]: var-lib-containers-storage-overlay-2fb43b2d1d0e376b0c0fee06b12df7609bc4102e0bd66d2d005b5397d9e9e0d4-merged.mount: Deactivated successfully.
Feb 25 07:20:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:14.264 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:b0:82 10.100.0.9'], port_security=['fa:16:3e:4b:b0:82 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'd9b67bce-8a7c-4f49-9cab-3e20377ca630', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fd7733ad-d262-4781-bcfa-77cfa8b67164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556b4b98-e95d-460c-a904-adc77baf4b88, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=24e2d6b3-f0d8-4603-8c1e-da65e164050d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:20:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:14.266 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 24e2d6b3-f0d8-4603-8c1e-da65e164050d in datapath 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 unbound from our chassis#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.272 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:14.275 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6#033[00m
Feb 25 07:20:14 np0005629333 podman[267212]: 2026-02-25 12:20:14.280077492 +0000 UTC m=+0.231098016 container remove 23d656d7799bf30ff10d48d4bb2c6722780a1e31c7567f2ebea660dea3e5e805 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_nobel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 07:20:14 np0005629333 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000014.scope: Deactivated successfully.
Feb 25 07:20:14 np0005629333 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000014.scope: Consumed 14.846s CPU time.
Feb 25 07:20:14 np0005629333 systemd-machined[210048]: Machine qemu-22-instance-00000014 terminated.
Feb 25 07:20:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:14.292 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9a48b2c1-828a-4e45-9ca8-1611040fdf76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:14 np0005629333 systemd[1]: libpod-conmon-23d656d7799bf30ff10d48d4bb2c6722780a1e31c7567f2ebea660dea3e5e805.scope: Deactivated successfully.
Feb 25 07:20:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:14.327 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7216aa64-9153-41c4-9df0-0c544975a03b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:14.331 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0a3fe495-5b43-4eff-b4df-70f0439c5fd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:14.357 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[27029dbc-5c12-4627-b4de-5788df8d425a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:14.395 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9d386dc5-1ead-4aab-84e8-199c18137a2f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f4cbf9a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:f8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 25, 'rx_bytes': 868, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 25, 'rx_bytes': 868, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389396, 'reachable_time': 28421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267259, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:14.413 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[40e5200c-6121-4a0d-abd8-df2ef8f24206]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389407, 'tstamp': 389407}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267265, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389409, 'tstamp': 389409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267265, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:14.417 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f4cbf9a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.423 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.424 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:14.425 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f4cbf9a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:14.426 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:20:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:14.426 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f4cbf9a-40, col_values=(('external_ids', {'iface-id': '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:14.427 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.428 244018 INFO nova.virt.libvirt.driver [-] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Instance destroyed successfully.#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.428 244018 DEBUG nova.objects.instance [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'resources' on Instance uuid d9b67bce-8a7c-4f49-9cab-3e20377ca630 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.440 244018 DEBUG nova.virt.libvirt.vif [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:18:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-654045690',display_name='tempest-ServersAdminTestJSON-server-654045690',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-654045690',id=20,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-ki1fzu1r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:19:10Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=d9b67bce-8a7c-4f49-9cab-3e20377ca630,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "address": "fa:16:3e:4b:b0:82", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e2d6b3-f0", "ovs_interfaceid": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.440 244018 DEBUG nova.network.os_vif_util [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "address": "fa:16:3e:4b:b0:82", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24e2d6b3-f0", "ovs_interfaceid": "24e2d6b3-f0d8-4603-8c1e-da65e164050d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.441 244018 DEBUG nova.network.os_vif_util [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:b0:82,bridge_name='br-int',has_traffic_filtering=True,id=24e2d6b3-f0d8-4603-8c1e-da65e164050d,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24e2d6b3-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.441 244018 DEBUG os_vif [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:b0:82,bridge_name='br-int',has_traffic_filtering=True,id=24e2d6b3-f0d8-4603-8c1e-da65e164050d,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24e2d6b3-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.444 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.444 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24e2d6b3-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.446 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.447 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.449 244018 INFO os_vif [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:b0:82,bridge_name='br-int',has_traffic_filtering=True,id=24e2d6b3-f0d8-4603-8c1e-da65e164050d,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24e2d6b3-f0')#033[00m
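[Note on the unplug above: it is driven through ovsdbapp — the transaction at 12:20:14.444 runs DelPortCommand(port=tap24e2d6b3-f0, bridge=br-int, if_exists=True), the same effect as `ovs-vsctl --if-exists del-port br-int tap24e2d6b3-f0`. A minimal standalone sketch of issuing that command with ovsdbapp follows; the socket path and timeout are assumptions, not taken from this log.]

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Sketch only: replay the logged DelPortCommand against the local
    # ovsdb-server. Connection string and timeout are illustrative assumptions.
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    api.del_port('tap24e2d6b3-f0', bridge='br-int', if_exists=True).execute(
        check_error=True)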
Feb 25 07:20:14 np0005629333 podman[267272]: 2026-02-25 12:20:14.484686314 +0000 UTC m=+0.058881147 container create 4ad1d04503613efe76f468ccb3a7da786d5667a5b6f982a5e6718530ee402389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:20:14 np0005629333 systemd[1]: Started libpod-conmon-4ad1d04503613efe76f468ccb3a7da786d5667a5b6f982a5e6718530ee402389.scope.
Feb 25 07:20:14 np0005629333 podman[267272]: 2026-02-25 12:20:14.466773834 +0000 UTC m=+0.040968747 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:20:14 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:20:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c2f50386a3145a2b2602a91f5baeccb11ab765f7e5973d65f5b81ad02f99ebd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:20:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c2f50386a3145a2b2602a91f5baeccb11ab765f7e5973d65f5b81ad02f99ebd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:20:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c2f50386a3145a2b2602a91f5baeccb11ab765f7e5973d65f5b81ad02f99ebd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:20:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c2f50386a3145a2b2602a91f5baeccb11ab765f7e5973d65f5b81ad02f99ebd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:20:14 np0005629333 podman[267272]: 2026-02-25 12:20:14.58719487 +0000 UTC m=+0.161389773 container init 4ad1d04503613efe76f468ccb3a7da786d5667a5b6f982a5e6718530ee402389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 07:20:14 np0005629333 podman[267272]: 2026-02-25 12:20:14.600654403 +0000 UTC m=+0.174849276 container start 4ad1d04503613efe76f468ccb3a7da786d5667a5b6f982a5e6718530ee402389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Feb 25 07:20:14 np0005629333 podman[267272]: 2026-02-25 12:20:14.608208468 +0000 UTC m=+0.182403401 container attach 4ad1d04503613efe76f468ccb3a7da786d5667a5b6f982a5e6718530ee402389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
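
The podman create -> init -> start -> attach sequence above is the footprint of a short-lived helper container (here a Ceph image run by cephadm, hence the random name hungry_chaum). A sketch that produces the same event sequence, assuming podman is installed; the digest is copied from the log and any command would do:

    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86')

    # --rm tears the container down on exit, which also yields the
    # 'died' and 'remove' events seen about a second later in the log.
    subprocess.run(['podman', 'run', '--rm', IMAGE, 'true'], check=True)
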
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.751 244018 INFO nova.virt.libvirt.driver [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Deleting instance files /var/lib/nova/instances/d9b67bce-8a7c-4f49-9cab-3e20377ca630_del#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.752 244018 INFO nova.virt.libvirt.driver [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Deletion of /var/lib/nova/instances/d9b67bce-8a7c-4f49-9cab-3e20377ca630_del complete#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.766 244018 DEBUG nova.network.neutron [req-a0998892-6103-4ade-b72a-bd34c2736981 req-11e832b3-05af-4995-b161-360628ea94af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Updated VIF entry in instance network info cache for port b2336583-1aaa-4789-8d4f-a3a14997891d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.767 244018 DEBUG nova.network.neutron [req-a0998892-6103-4ade-b72a-bd34c2736981 req-11e832b3-05af-4995-b161-360628ea94af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Updating instance_info_cache with network_info: [{"id": "b2336583-1aaa-4789-8d4f-a3a14997891d", "address": "fa:16:3e:23:d7:29", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2336583-1a", "ovs_interfaceid": "b2336583-1aaa-4789-8d4f-a3a14997891d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.805 244018 DEBUG oslo_concurrency.lockutils [req-a0998892-6103-4ade-b72a-bd34c2736981 req-11e832b3-05af-4995-b161-360628ea94af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.806 244018 DEBUG nova.compute.manager [req-a0998892-6103-4ade-b72a-bd34c2736981 req-11e832b3-05af-4995-b161-360628ea94af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Received event network-vif-plugged-55015950-cf1b-4183-802f-22f661123534 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.806 244018 DEBUG oslo_concurrency.lockutils [req-a0998892-6103-4ade-b72a-bd34c2736981 req-11e832b3-05af-4995-b161-360628ea94af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.807 244018 DEBUG oslo_concurrency.lockutils [req-a0998892-6103-4ade-b72a-bd34c2736981 req-11e832b3-05af-4995-b161-360628ea94af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.807 244018 DEBUG oslo_concurrency.lockutils [req-a0998892-6103-4ade-b72a-bd34c2736981 req-11e832b3-05af-4995-b161-360628ea94af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8993d2e7-8b5d-42eb-9e24-f96dcc4da39b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.807 244018 DEBUG nova.compute.manager [req-a0998892-6103-4ade-b72a-bd34c2736981 req-11e832b3-05af-4995-b161-360628ea94af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] No waiting events found dispatching network-vif-plugged-55015950-cf1b-4183-802f-22f661123534 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.808 244018 WARNING nova.compute.manager [req-a0998892-6103-4ade-b72a-bd34c2736981 req-11e832b3-05af-4995-b161-360628ea94af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Received unexpected event network-vif-plugged-55015950-cf1b-4183-802f-22f661123534 for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.834 244018 INFO nova.compute.manager [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Took 0.65 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.834 244018 DEBUG oslo.service.loopingcall [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.835 244018 DEBUG nova.compute.manager [-] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:20:14 np0005629333 nova_compute[244014]: 2026-02-25 12:20:14.835 244018 DEBUG nova.network.neutron [-] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:20:15 np0005629333 lvm[267389]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:20:15 np0005629333 lvm[267389]: VG ceph_vg0 finished
Feb 25 07:20:15 np0005629333 lvm[267391]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:20:15 np0005629333 lvm[267391]: VG ceph_vg1 finished
Feb 25 07:20:15 np0005629333 lvm[267392]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:20:15 np0005629333 lvm[267392]: VG ceph_vg2 finished
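
The lvm[] lines above are LVM event-based autoactivation: as each loop-device PV appears, udev runs a pvscan that records the PV as online and activates the VG once all of its PVs are present. Roughly the equivalent by hand (a sketch, assuming root and the lvm2 tools):

    import subprocess

    # one pvscan per arriving PV; each VG activates when it becomes complete
    for dev in ('/dev/loop3', '/dev/loop4', '/dev/loop5'):
        subprocess.run(['pvscan', '--cache', '--activate', 'ay', dev],
                       check=True)
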
Feb 25 07:20:15 np0005629333 hungry_chaum[267313]: {}
Feb 25 07:20:15 np0005629333 systemd[1]: libpod-4ad1d04503613efe76f468ccb3a7da786d5667a5b6f982a5e6718530ee402389.scope: Deactivated successfully.
Feb 25 07:20:15 np0005629333 podman[267272]: 2026-02-25 12:20:15.405466102 +0000 UTC m=+0.979660975 container died 4ad1d04503613efe76f468ccb3a7da786d5667a5b6f982a5e6718530ee402389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_chaum, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:20:15 np0005629333 systemd[1]: libpod-4ad1d04503613efe76f468ccb3a7da786d5667a5b6f982a5e6718530ee402389.scope: Consumed 1.102s CPU time.
Feb 25 07:20:15 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7c2f50386a3145a2b2602a91f5baeccb11ab765f7e5973d65f5b81ad02f99ebd-merged.mount: Deactivated successfully.
Feb 25 07:20:15 np0005629333 podman[267272]: 2026-02-25 12:20:15.452203732 +0000 UTC m=+1.026398595 container remove 4ad1d04503613efe76f468ccb3a7da786d5667a5b6f982a5e6718530ee402389 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_chaum, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 07:20:15 np0005629333 systemd[1]: libpod-conmon-4ad1d04503613efe76f468ccb3a7da786d5667a5b6f982a5e6718530ee402389.scope: Deactivated successfully.
Feb 25 07:20:15 np0005629333 nova_compute[244014]: 2026-02-25 12:20:15.500 244018 DEBUG nova.network.neutron [-] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:20:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 07:20:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.1 total, 600.0 interval
Cumulative writes: 12K writes, 49K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.03 MB/s
Cumulative WAL: 12K writes, 3441 syncs, 3.51 writes per sync, written: 0.04 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 6253 writes, 24K keys, 6253 commit groups, 1.0 writes per commit group, ingest: 26.67 MB, 0.04 MB/s
Interval WAL: 6253 writes, 2445 syncs, 2.56 writes per sync, written: 0.03 GB, 0.04 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
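
The derived figures in that dump are plain ratios of the raw counters, which makes them easy to sanity-check. Using the interval values from the log:

    interval_writes = 6253        # WAL writes in the 600 s interval
    interval_syncs = 2445         # WAL syncs in the same interval
    interval_ingest_mb = 26.67    # data ingested in the interval
    interval_secs = 600.0

    print(round(interval_writes / interval_syncs, 2))    # 2.56 writes per sync
    print(round(interval_ingest_mb / interval_secs, 2))  # 0.04 MB/s
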
Feb 25 07:20:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:20:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:20:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:20:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:20:15 np0005629333 nova_compute[244014]: 2026-02-25 12:20:15.552 244018 INFO nova.compute.manager [-] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Took 0.72 seconds to deallocate network for instance.#033[00m
Feb 25 07:20:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 517 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 257 op/s
Feb 25 07:20:15 np0005629333 nova_compute[244014]: 2026-02-25 12:20:15.657 244018 DEBUG nova.compute.manager [req-3b3cafdb-2733-4910-8469-7e6152f9dac6 req-eff15c67-ab79-404b-b050-9346568e9fdb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Received event network-vif-deleted-24e2d6b3-f0d8-4603-8c1e-da65e164050d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:15 np0005629333 nova_compute[244014]: 2026-02-25 12:20:15.721 244018 DEBUG oslo_concurrency.lockutils [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:15 np0005629333 nova_compute[244014]: 2026-02-25 12:20:15.722 244018 DEBUG oslo_concurrency.lockutils [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:16Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:87:71:62 10.100.0.5
Feb 25 07:20:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:16Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:87:71:62 10.100.0.5
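
These DHCPOFFER/DHCPACK lines come from ovn-controller itself (its pinctrl thread), not from a dnsmasq process: OVN answers DHCP locally from DHCP_Options rows in the northbound database. A sketch of inspecting those rows, assuming ovn-nbctl and access to the NB DB (normally on a controller node, not this compute host):

    import subprocess

    # lists the DHCP_Options rows (CIDR plus options such as server_id,
    # lease_time, router) that pinctrl serves its answers from
    out = subprocess.run(['ovn-nbctl', 'dhcp-options-list'],
                         capture_output=True, text=True, check=True)
    print(out.stdout)
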
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.265 244018 DEBUG nova.compute.manager [req-272cf3ed-5084-4204-a5de-cba8bb2d1349 req-a096eb04-29bd-4cea-9bbc-dff823eaa31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Received event network-vif-unplugged-24e2d6b3-f0d8-4603-8c1e-da65e164050d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.265 244018 DEBUG oslo_concurrency.lockutils [req-272cf3ed-5084-4204-a5de-cba8bb2d1349 req-a096eb04-29bd-4cea-9bbc-dff823eaa31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.266 244018 DEBUG oslo_concurrency.lockutils [req-272cf3ed-5084-4204-a5de-cba8bb2d1349 req-a096eb04-29bd-4cea-9bbc-dff823eaa31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.266 244018 DEBUG oslo_concurrency.lockutils [req-272cf3ed-5084-4204-a5de-cba8bb2d1349 req-a096eb04-29bd-4cea-9bbc-dff823eaa31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.267 244018 DEBUG nova.compute.manager [req-272cf3ed-5084-4204-a5de-cba8bb2d1349 req-a096eb04-29bd-4cea-9bbc-dff823eaa31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] No waiting events found dispatching network-vif-unplugged-24e2d6b3-f0d8-4603-8c1e-da65e164050d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.267 244018 WARNING nova.compute.manager [req-272cf3ed-5084-4204-a5de-cba8bb2d1349 req-a096eb04-29bd-4cea-9bbc-dff823eaa31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Received unexpected event network-vif-unplugged-24e2d6b3-f0d8-4603-8c1e-da65e164050d for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.267 244018 DEBUG nova.compute.manager [req-272cf3ed-5084-4204-a5de-cba8bb2d1349 req-a096eb04-29bd-4cea-9bbc-dff823eaa31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Received event network-vif-plugged-24e2d6b3-f0d8-4603-8c1e-da65e164050d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.268 244018 DEBUG oslo_concurrency.lockutils [req-272cf3ed-5084-4204-a5de-cba8bb2d1349 req-a096eb04-29bd-4cea-9bbc-dff823eaa31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.268 244018 DEBUG oslo_concurrency.lockutils [req-272cf3ed-5084-4204-a5de-cba8bb2d1349 req-a096eb04-29bd-4cea-9bbc-dff823eaa31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.269 244018 DEBUG oslo_concurrency.lockutils [req-272cf3ed-5084-4204-a5de-cba8bb2d1349 req-a096eb04-29bd-4cea-9bbc-dff823eaa31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.269 244018 DEBUG nova.compute.manager [req-272cf3ed-5084-4204-a5de-cba8bb2d1349 req-a096eb04-29bd-4cea-9bbc-dff823eaa31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] No waiting events found dispatching network-vif-plugged-24e2d6b3-f0d8-4603-8c1e-da65e164050d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.269 244018 WARNING nova.compute.manager [req-272cf3ed-5084-4204-a5de-cba8bb2d1349 req-a096eb04-29bd-4cea-9bbc-dff823eaa31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Received unexpected event network-vif-plugged-24e2d6b3-f0d8-4603-8c1e-da65e164050d for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.280 244018 DEBUG oslo_concurrency.lockutils [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.281 244018 DEBUG oslo_concurrency.lockutils [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.281 244018 DEBUG oslo_concurrency.lockutils [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.282 244018 DEBUG oslo_concurrency.lockutils [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.282 244018 DEBUG oslo_concurrency.lockutils [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
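
The Acquiring/acquired/released triplets around instance events are oslo.concurrency's named locks; nova takes a short per-instance "-events" lock while it mutates the pending-event table. The underlying pattern, as a sketch:

    from oslo_concurrency import lockutils

    # same lock name as in the log; the body runs with the named
    # in-process lock held, and oslo logs the acquire/release pair
    with lockutils.lock('de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events'):
        pass  # pop or clear pending instance events here
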
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.283 244018 INFO nova.compute.manager [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Terminating instance#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.285 244018 DEBUG nova.compute.manager [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:20:16 np0005629333 kernel: tapb2336583-1a (unregistering): left promiscuous mode
Feb 25 07:20:16 np0005629333 NetworkManager[49836]: <info>  [1772022016.3267] device (tapb2336583-1a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:20:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:16Z|00146|binding|INFO|Releasing lport b2336583-1aaa-4789-8d4f-a3a14997891d from this chassis (sb_readonly=0)
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.329 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:16Z|00147|binding|INFO|Setting lport b2336583-1aaa-4789-8d4f-a3a14997891d down in Southbound
Feb 25 07:20:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:16Z|00148|binding|INFO|Removing iface tapb2336583-1a ovn-installed in OVS
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.332 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.340 244018 DEBUG oslo_concurrency.processutils [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:20:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:16.360 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:d7:29 10.100.0.3'], port_security=['fa:16:3e:23:d7:29 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67d0ed57ac554e4390e928b3c8f9b5f6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a50ab9ba-7ffb-499d-9822-c20dbf4e32aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db1eeaa4-6673-482f-8f62-eb89284fbfdd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=b2336583-1aaa-4789-8d4f-a3a14997891d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:20:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:16.361 157129 INFO neutron.agent.ovn.metadata.agent [-] Port b2336583-1aaa-4789-8d4f-a3a14997891d in datapath 41c706f5-6f0b-47a8-91a4-16f87e2a0571 unbound from our chassis#033[00m
Feb 25 07:20:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:16.363 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41c706f5-6f0b-47a8-91a4-16f87e2a0571, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:20:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:16.364 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c0dc6c2e-c010-4798-8572-c6561d11c5c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:16.365 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571 namespace which is not needed anymore#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.365 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:16 np0005629333 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000016.scope: Deactivated successfully.
Feb 25 07:20:16 np0005629333 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000016.scope: Consumed 14.130s CPU time.
Feb 25 07:20:16 np0005629333 systemd-machined[210048]: Machine qemu-24-instance-00000016 terminated.
Feb 25 07:20:16 np0005629333 neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571[264074]: [NOTICE]   (264078) : haproxy version is 2.8.14-c23fe91
Feb 25 07:20:16 np0005629333 neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571[264074]: [NOTICE]   (264078) : path to executable is /usr/sbin/haproxy
Feb 25 07:20:16 np0005629333 neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571[264074]: [WARNING]  (264078) : Exiting Master process...
Feb 25 07:20:16 np0005629333 neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571[264074]: [WARNING]  (264078) : Exiting Master process...
Feb 25 07:20:16 np0005629333 neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571[264074]: [ALERT]    (264078) : Current worker (264080) exited with code 143 (Terminated)
Feb 25 07:20:16 np0005629333 neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571[264074]: [WARNING]  (264078) : All workers exited. Exiting... (0)
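
Exit code 143 above is the ordinary 128+signal convention rather than an haproxy error: the worker was killed with SIGTERM during the metadata-proxy teardown. As a one-line check:

    import signal

    assert 128 + signal.SIGTERM == 143  # "Terminated"
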
Feb 25 07:20:16 np0005629333 systemd[1]: libpod-478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35.scope: Deactivated successfully.
Feb 25 07:20:16 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:20:16 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:20:16 np0005629333 conmon[264074]: conmon 478b53e8a0b2cb418603 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35.scope/container/memory.events
Feb 25 07:20:16 np0005629333 podman[267456]: 2026-02-25 12:20:16.52444851 +0000 UTC m=+0.059529745 container died 478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.528 244018 INFO nova.virt.libvirt.driver [-] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Instance destroyed successfully.#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.529 244018 DEBUG nova.objects.instance [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lazy-loading 'resources' on Instance uuid de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:20:16 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35-userdata-shm.mount: Deactivated successfully.
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.557 244018 DEBUG nova.virt.libvirt.vif [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:19:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1190496141',display_name='tempest-FloatingIPsAssociationTestJSON-server-1190496141',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1190496141',id=22,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67d0ed57ac554e4390e928b3c8f9b5f6',ramdisk_id='',reservation_id='r-pnxl3h9s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1904923370',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1904923370-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:19:30Z,user_data=None,user_id='9aa84b2700234a5e9dcba1fc0bbc4cea',uuid=de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b2336583-1aaa-4789-8d4f-a3a14997891d", "address": "fa:16:3e:23:d7:29", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2336583-1a", "ovs_interfaceid": "b2336583-1aaa-4789-8d4f-a3a14997891d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:20:16 np0005629333 systemd[1]: var-lib-containers-storage-overlay-0335c2e7dd04e96857217313fc187a38e2ae19855220b0923b3e59e3330f4337-merged.mount: Deactivated successfully.
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.558 244018 DEBUG nova.network.os_vif_util [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Converting VIF {"id": "b2336583-1aaa-4789-8d4f-a3a14997891d", "address": "fa:16:3e:23:d7:29", "network": {"id": "41c706f5-6f0b-47a8-91a4-16f87e2a0571", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-718292796-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67d0ed57ac554e4390e928b3c8f9b5f6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2336583-1a", "ovs_interfaceid": "b2336583-1aaa-4789-8d4f-a3a14997891d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.559 244018 DEBUG nova.network.os_vif_util [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:23:d7:29,bridge_name='br-int',has_traffic_filtering=True,id=b2336583-1aaa-4789-8d4f-a3a14997891d,network=Network(41c706f5-6f0b-47a8-91a4-16f87e2a0571),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2336583-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.560 244018 DEBUG os_vif [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:d7:29,bridge_name='br-int',has_traffic_filtering=True,id=b2336583-1aaa-4789-8d4f-a3a14997891d,network=Network(41c706f5-6f0b-47a8-91a4-16f87e2a0571),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2336583-1a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.562 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.563 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2336583-1a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
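
The DelPortCommand transactions (here and at 12:20:14.444 above) are ovsdbapp talking to the local ovsdb-server. A standalone sketch of the same delete, assuming the usual Open_vSwitch socket path; if_exists=True makes it a no-op when OVN has already removed the port:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # same command the log shows: drop the tap port from br-int
    api.del_port('tapb2336583-1a', bridge='br-int',
                 if_exists=True).execute(check_error=True)
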
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.566 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:16 np0005629333 podman[267456]: 2026-02-25 12:20:16.571273532 +0000 UTC m=+0.106354747 container cleanup 478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.571 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.574 244018 INFO os_vif [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:d7:29,bridge_name='br-int',has_traffic_filtering=True,id=b2336583-1aaa-4789-8d4f-a3a14997891d,network=Network(41c706f5-6f0b-47a8-91a4-16f87e2a0571),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2336583-1a')#033[00m
Feb 25 07:20:16 np0005629333 systemd[1]: libpod-conmon-478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35.scope: Deactivated successfully.
Feb 25 07:20:16 np0005629333 podman[267515]: 2026-02-25 12:20:16.634934093 +0000 UTC m=+0.042614803 container remove 478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 25 07:20:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:16.644 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[560f46f3-1fef-4895-aa6b-91412b545521]: (4, ('Wed Feb 25 12:20:16 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571 (478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35)\n478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35\nWed Feb 25 12:20:16 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571 (478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35)\n478b53e8a0b2cb418603a83c95725f591550d40056799abe8c7b0aa3c2524a35\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:16.646 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6414961a-5f8d-44e4-8890-d3a81a432c2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:16.647 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41c706f5-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.649 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:16 np0005629333 kernel: tap41c706f5-60: left promiscuous mode
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.660 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:16.669 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e3092a-307e-4979-b3dc-9263cf6bd554]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:16.686 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5f02e103-b620-4bfa-b6af-e73a51a70243]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:16.687 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[86262362-83f9-4c4c-b0ec-79614b7d1ff9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:16.702 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[62d6cb0a-625b-4b76-a50a-978fe7ce6c7c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 393624, 'reachable_time': 35159, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267548, 'error': None, 'target': 'ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:16 np0005629333 systemd[1]: run-netns-ovnmeta\x2d41c706f5\x2d6f0b\x2d47a8\x2d91a4\x2d16f87e2a0571.mount: Deactivated successfully.
Feb 25 07:20:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:16.704 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:20:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:16.704 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[897a466a-2281-48a5-93e7-89eba66cb404]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
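
With no VIFs left on the network, the metadata agent tears down its per-network namespace; the removal itself runs in the privsep daemon on top of pyroute2. A sketch of that final step, assuming sufficient privileges:

    from pyroute2 import netns

    # same namespace name as in the log: 'ovnmeta-' + the network UUID
    netns.remove('ovnmeta-41c706f5-6f0b-47a8-91a4-16f87e2a0571')
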
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.850 244018 INFO nova.virt.libvirt.driver [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Deleting instance files /var/lib/nova/instances/de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_del#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.852 244018 INFO nova.virt.libvirt.driver [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Deletion of /var/lib/nova/instances/de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75_del complete#033[00m
Feb 25 07:20:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:20:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1211447400' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.877 244018 DEBUG oslo_concurrency.processutils [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
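
The "Running cmd (subprocess)" / "CMD ... returned: 0 in 0.537s" pair is oslo.concurrency's processutils, which wraps subprocess and logs both ends of the call; nova uses it here to size the Ceph-backed disk pool. A sketch of the same call:

    from oslo_concurrency import processutils

    # returns (stdout, stderr); raises ProcessExecutionError on non-zero exit
    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
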
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.884 244018 DEBUG nova.compute.provider_tree [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.908 244018 DEBUG nova.scheduler.client.report [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.938 244018 INFO nova.compute.manager [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Took 0.65 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.939 244018 DEBUG oslo.service.loopingcall [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.940 244018 DEBUG nova.compute.manager [-] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.940 244018 DEBUG nova.network.neutron [-] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:20:16 np0005629333 nova_compute[244014]: 2026-02-25 12:20:16.956 244018 DEBUG oslo_concurrency.lockutils [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:17 np0005629333 nova_compute[244014]: 2026-02-25 12:20:17.328 244018 INFO nova.scheduler.client.report [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Deleted allocations for instance d9b67bce-8a7c-4f49-9cab-3e20377ca630#033[00m
Feb 25 07:20:17 np0005629333 nova_compute[244014]: 2026-02-25 12:20:17.408 244018 DEBUG oslo_concurrency.lockutils [None req-8d809cea-a8df-4fea-a929-0d1e0ec18bd8 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "d9b67bce-8a7c-4f49-9cab-3e20377ca630" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:17 np0005629333 nova_compute[244014]: 2026-02-25 12:20:17.614 244018 DEBUG oslo_concurrency.lockutils [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "7a6ab503-d433-40a7-9395-3d5660e852c9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:17 np0005629333 nova_compute[244014]: 2026-02-25 12:20:17.615 244018 DEBUG oslo_concurrency.lockutils [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:17 np0005629333 nova_compute[244014]: 2026-02-25 12:20:17.616 244018 DEBUG oslo_concurrency.lockutils [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:17 np0005629333 nova_compute[244014]: 2026-02-25 12:20:17.616 244018 DEBUG oslo_concurrency.lockutils [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:17 np0005629333 nova_compute[244014]: 2026-02-25 12:20:17.617 244018 DEBUG oslo_concurrency.lockutils [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:17 np0005629333 nova_compute[244014]: 2026-02-25 12:20:17.618 244018 INFO nova.compute.manager [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Terminating instance#033[00m
Feb 25 07:20:17 np0005629333 nova_compute[244014]: 2026-02-25 12:20:17.620 244018 DEBUG nova.compute.manager [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:20:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 416 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 985 KiB/s rd, 5.1 MiB/s wr, 240 op/s
Feb 25 07:20:17 np0005629333 kernel: tap179cbc6a-d7 (unregistering): left promiscuous mode
Feb 25 07:20:17 np0005629333 NetworkManager[49836]: <info>  [1772022017.6769] device (tap179cbc6a-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:20:17 np0005629333 nova_compute[244014]: 2026-02-25 12:20:17.685 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:17 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:17Z|00149|binding|INFO|Releasing lport 179cbc6a-d79f-4f46-88e8-f362ea0e4a26 from this chassis (sb_readonly=0)
Feb 25 07:20:17 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:17Z|00150|binding|INFO|Setting lport 179cbc6a-d79f-4f46-88e8-f362ea0e4a26 down in Southbound
Feb 25 07:20:17 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:17Z|00151|binding|INFO|Removing iface tap179cbc6a-d7 ovn-installed in OVS
Feb 25 07:20:17 np0005629333 nova_compute[244014]: 2026-02-25 12:20:17.690 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:17 np0005629333 nova_compute[244014]: 2026-02-25 12:20:17.704 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:17.706 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:1d:06 10.100.0.7'], port_security=['fa:16:3e:d2:1d:06 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '7a6ab503-d433-40a7-9395-3d5660e852c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fd7733ad-d262-4781-bcfa-77cfa8b67164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556b4b98-e95d-460c-a904-adc77baf4b88, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=179cbc6a-d79f-4f46-88e8-f362ea0e4a26) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:20:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:17.708 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 179cbc6a-d79f-4f46-88e8-f362ea0e4a26 in datapath 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 unbound from our chassis#033[00m
Feb 25 07:20:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:17.710 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6#033[00m
Feb 25 07:20:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:17.724 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[35b2136d-36f0-49ec-bd70-8234fe71d84e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:17 np0005629333 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000012.scope: Deactivated successfully.
Feb 25 07:20:17 np0005629333 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000012.scope: Consumed 15.060s CPU time.
Feb 25 07:20:17 np0005629333 systemd-machined[210048]: Machine qemu-20-instance-00000012 terminated.
Feb 25 07:20:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:17.752 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[67befab2-3dcf-4b1c-b737-74bbd0585276]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:17.755 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2ca27315-7e57-41db-8b7a-1da3fe43bfb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:17.776 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2938dd83-025f-4961-beac-03e50c690ef9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:17.792 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e5d4578e-0bf2-4fc0-a43a-4386083094f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f4cbf9a-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:f8:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 27, 'rx_bytes': 868, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 27, 'rx_bytes': 868, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389396, 'reachable_time': 28421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267564, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:17.805 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[77dfc9c0-b333-49c4-828a-1f2d2ce51017]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389407, 'tstamp': 389407}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267565, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1f4cbf9a-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389409, 'tstamp': 389409}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267565, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:17.807 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f4cbf9a-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:17 np0005629333 nova_compute[244014]: 2026-02-25 12:20:17.809 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:17 np0005629333 nova_compute[244014]: 2026-02-25 12:20:17.814 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:17.815 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f4cbf9a-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:17.815 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:20:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:17.816 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f4cbf9a-40, col_values=(('external_ids', {'iface-id': '2cfd1e6b-d28d-43c0-bbbd-c6ad77855812'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:17.817 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:20:17 np0005629333 nova_compute[244014]: 2026-02-25 12:20:17.866 244018 INFO nova.virt.libvirt.driver [-] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Instance destroyed successfully.#033[00m
Feb 25 07:20:17 np0005629333 nova_compute[244014]: 2026-02-25 12:20:17.867 244018 DEBUG nova.objects.instance [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'resources' on Instance uuid 7a6ab503-d433-40a7-9395-3d5660e852c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:20:17 np0005629333 nova_compute[244014]: 2026-02-25 12:20:17.888 244018 DEBUG nova.virt.libvirt.vif [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:18:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-671436254',display_name='tempest-ServersAdminTestJSON-server-671436254',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-671436254',id=18,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:18:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-drd5ht2j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:18:51Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=7a6ab503-d433-40a7-9395-3d5660e852c9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "address": "fa:16:3e:d2:1d:06", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap179cbc6a-d7", "ovs_interfaceid": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:20:17 np0005629333 nova_compute[244014]: 2026-02-25 12:20:17.889 244018 DEBUG nova.network.os_vif_util [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "address": "fa:16:3e:d2:1d:06", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap179cbc6a-d7", "ovs_interfaceid": "179cbc6a-d79f-4f46-88e8-f362ea0e4a26", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:20:17 np0005629333 nova_compute[244014]: 2026-02-25 12:20:17.890 244018 DEBUG nova.network.os_vif_util [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d2:1d:06,bridge_name='br-int',has_traffic_filtering=True,id=179cbc6a-d79f-4f46-88e8-f362ea0e4a26,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap179cbc6a-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:20:17 np0005629333 nova_compute[244014]: 2026-02-25 12:20:17.890 244018 DEBUG os_vif [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:1d:06,bridge_name='br-int',has_traffic_filtering=True,id=179cbc6a-d79f-4f46-88e8-f362ea0e4a26,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap179cbc6a-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:20:17 np0005629333 nova_compute[244014]: 2026-02-25 12:20:17.892 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:17 np0005629333 nova_compute[244014]: 2026-02-25 12:20:17.893 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap179cbc6a-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:17 np0005629333 nova_compute[244014]: 2026-02-25 12:20:17.895 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:17 np0005629333 nova_compute[244014]: 2026-02-25 12:20:17.898 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:20:17 np0005629333 nova_compute[244014]: 2026-02-25 12:20:17.900 244018 INFO os_vif [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:1d:06,bridge_name='br-int',has_traffic_filtering=True,id=179cbc6a-d79f-4f46-88e8-f362ea0e4a26,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap179cbc6a-d7')#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.047 244018 DEBUG nova.network.neutron [-] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.110 244018 INFO nova.compute.manager [-] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Took 1.17 seconds to deallocate network for instance.#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.191 244018 DEBUG oslo_concurrency.lockutils [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.191 244018 DEBUG oslo_concurrency.lockutils [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.213 244018 INFO nova.virt.libvirt.driver [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Deleting instance files /var/lib/nova/instances/7a6ab503-d433-40a7-9395-3d5660e852c9_del#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.213 244018 INFO nova.virt.libvirt.driver [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Deletion of /var/lib/nova/instances/7a6ab503-d433-40a7-9395-3d5660e852c9_del complete#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.225 244018 DEBUG nova.compute.manager [req-ab4c2ba9-29bb-44ca-a9bf-4cf9bc6ac21b req-9a668c4b-efbe-4ef6-8ba9-888db177a32d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Received event network-vif-deleted-b2336583-1aaa-4789-8d4f-a3a14997891d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.300 244018 INFO nova.compute.manager [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.301 244018 DEBUG oslo.service.loopingcall [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.301 244018 DEBUG nova.compute.manager [-] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.302 244018 DEBUG nova.network.neutron [-] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.315 244018 DEBUG oslo_concurrency.processutils [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.359 244018 DEBUG nova.compute.manager [req-6d340116-237d-41ca-95ef-7115f5854e79 req-3b21aa2c-53da-44a0-9acd-22b542dcb6d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Received event network-vif-unplugged-b2336583-1aaa-4789-8d4f-a3a14997891d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.360 244018 DEBUG oslo_concurrency.lockutils [req-6d340116-237d-41ca-95ef-7115f5854e79 req-3b21aa2c-53da-44a0-9acd-22b542dcb6d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.360 244018 DEBUG oslo_concurrency.lockutils [req-6d340116-237d-41ca-95ef-7115f5854e79 req-3b21aa2c-53da-44a0-9acd-22b542dcb6d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.360 244018 DEBUG oslo_concurrency.lockutils [req-6d340116-237d-41ca-95ef-7115f5854e79 req-3b21aa2c-53da-44a0-9acd-22b542dcb6d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.361 244018 DEBUG nova.compute.manager [req-6d340116-237d-41ca-95ef-7115f5854e79 req-3b21aa2c-53da-44a0-9acd-22b542dcb6d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] No waiting events found dispatching network-vif-unplugged-b2336583-1aaa-4789-8d4f-a3a14997891d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.361 244018 WARNING nova.compute.manager [req-6d340116-237d-41ca-95ef-7115f5854e79 req-3b21aa2c-53da-44a0-9acd-22b542dcb6d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Received unexpected event network-vif-unplugged-b2336583-1aaa-4789-8d4f-a3a14997891d for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.362 244018 DEBUG nova.compute.manager [req-6d340116-237d-41ca-95ef-7115f5854e79 req-3b21aa2c-53da-44a0-9acd-22b542dcb6d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Received event network-vif-plugged-b2336583-1aaa-4789-8d4f-a3a14997891d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.362 244018 DEBUG oslo_concurrency.lockutils [req-6d340116-237d-41ca-95ef-7115f5854e79 req-3b21aa2c-53da-44a0-9acd-22b542dcb6d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.362 244018 DEBUG oslo_concurrency.lockutils [req-6d340116-237d-41ca-95ef-7115f5854e79 req-3b21aa2c-53da-44a0-9acd-22b542dcb6d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.363 244018 DEBUG oslo_concurrency.lockutils [req-6d340116-237d-41ca-95ef-7115f5854e79 req-3b21aa2c-53da-44a0-9acd-22b542dcb6d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.363 244018 DEBUG nova.compute.manager [req-6d340116-237d-41ca-95ef-7115f5854e79 req-3b21aa2c-53da-44a0-9acd-22b542dcb6d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] No waiting events found dispatching network-vif-plugged-b2336583-1aaa-4789-8d4f-a3a14997891d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.363 244018 WARNING nova.compute.manager [req-6d340116-237d-41ca-95ef-7115f5854e79 req-3b21aa2c-53da-44a0-9acd-22b542dcb6d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Received unexpected event network-vif-plugged-b2336583-1aaa-4789-8d4f-a3a14997891d for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.452 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.797 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022003.795866, 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.797 244018 INFO nova.compute.manager [-] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.851 244018 DEBUG nova.compute.manager [None req-467312a5-9207-42af-983b-b54605920be0 - - - - - -] [instance: 267cd6f8-d842-45a8-b4b1-4c6f3dee8d69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:20:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:20:18 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3811217120' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.911 244018 DEBUG oslo_concurrency.processutils [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.918 244018 DEBUG nova.compute.provider_tree [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:20:18 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.945 244018 DEBUG nova.scheduler.client.report [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:20:19 np0005629333 nova_compute[244014]: 2026-02-25 12:20:18.999 244018 DEBUG oslo_concurrency.lockutils [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:19 np0005629333 nova_compute[244014]: 2026-02-25 12:20:19.038 244018 INFO nova.scheduler.client.report [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Deleted allocations for instance de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75#033[00m
Feb 25 07:20:19 np0005629333 nova_compute[244014]: 2026-02-25 12:20:19.154 244018 DEBUG oslo_concurrency.lockutils [None req-48ad688a-896e-4000-a1f2-a52991fa9618 9aa84b2700234a5e9dcba1fc0bbc4cea 67d0ed57ac554e4390e928b3c8f9b5f6 - - default default] Lock "de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:20:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 07:20:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.2 total, 600.0 interval#012Cumulative writes: 13K writes, 54K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s#012Cumulative WAL: 13K writes, 4026 syncs, 3.41 writes per sync, written: 0.05 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6594 writes, 25K keys, 6594 commit groups, 1.0 writes per commit group, ingest: 27.32 MB, 0.05 MB/s#012Interval WAL: 6594 writes, 2613 syncs, 2.52 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 07:20:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 416 MiB data, 617 MiB used, 59 GiB / 60 GiB avail; 947 KiB/s rd, 4.9 MiB/s wr, 231 op/s
Feb 25 07:20:20 np0005629333 nova_compute[244014]: 2026-02-25 12:20:20.298 244018 DEBUG nova.network.neutron [-] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:20:20 np0005629333 nova_compute[244014]: 2026-02-25 12:20:20.332 244018 DEBUG nova.compute.manager [req-4103aa58-8523-4715-b90c-79be88d5efa1 req-3cf4c5bc-c04b-4be8-a753-f6de7225d566 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Received event network-vif-unplugged-179cbc6a-d79f-4f46-88e8-f362ea0e4a26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:20 np0005629333 nova_compute[244014]: 2026-02-25 12:20:20.333 244018 DEBUG oslo_concurrency.lockutils [req-4103aa58-8523-4715-b90c-79be88d5efa1 req-3cf4c5bc-c04b-4be8-a753-f6de7225d566 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:20 np0005629333 nova_compute[244014]: 2026-02-25 12:20:20.334 244018 DEBUG oslo_concurrency.lockutils [req-4103aa58-8523-4715-b90c-79be88d5efa1 req-3cf4c5bc-c04b-4be8-a753-f6de7225d566 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:20 np0005629333 nova_compute[244014]: 2026-02-25 12:20:20.334 244018 DEBUG oslo_concurrency.lockutils [req-4103aa58-8523-4715-b90c-79be88d5efa1 req-3cf4c5bc-c04b-4be8-a753-f6de7225d566 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:20 np0005629333 nova_compute[244014]: 2026-02-25 12:20:20.334 244018 DEBUG nova.compute.manager [req-4103aa58-8523-4715-b90c-79be88d5efa1 req-3cf4c5bc-c04b-4be8-a753-f6de7225d566 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] No waiting events found dispatching network-vif-unplugged-179cbc6a-d79f-4f46-88e8-f362ea0e4a26 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:20:20 np0005629333 nova_compute[244014]: 2026-02-25 12:20:20.335 244018 DEBUG nova.compute.manager [req-4103aa58-8523-4715-b90c-79be88d5efa1 req-3cf4c5bc-c04b-4be8-a753-f6de7225d566 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Received event network-vif-unplugged-179cbc6a-d79f-4f46-88e8-f362ea0e4a26 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:20:20 np0005629333 nova_compute[244014]: 2026-02-25 12:20:20.335 244018 DEBUG nova.compute.manager [req-4103aa58-8523-4715-b90c-79be88d5efa1 req-3cf4c5bc-c04b-4be8-a753-f6de7225d566 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Received event network-vif-plugged-179cbc6a-d79f-4f46-88e8-f362ea0e4a26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:20 np0005629333 nova_compute[244014]: 2026-02-25 12:20:20.335 244018 DEBUG oslo_concurrency.lockutils [req-4103aa58-8523-4715-b90c-79be88d5efa1 req-3cf4c5bc-c04b-4be8-a753-f6de7225d566 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:20 np0005629333 nova_compute[244014]: 2026-02-25 12:20:20.336 244018 DEBUG oslo_concurrency.lockutils [req-4103aa58-8523-4715-b90c-79be88d5efa1 req-3cf4c5bc-c04b-4be8-a753-f6de7225d566 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:20 np0005629333 nova_compute[244014]: 2026-02-25 12:20:20.336 244018 DEBUG oslo_concurrency.lockutils [req-4103aa58-8523-4715-b90c-79be88d5efa1 req-3cf4c5bc-c04b-4be8-a753-f6de7225d566 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:20 np0005629333 nova_compute[244014]: 2026-02-25 12:20:20.336 244018 DEBUG nova.compute.manager [req-4103aa58-8523-4715-b90c-79be88d5efa1 req-3cf4c5bc-c04b-4be8-a753-f6de7225d566 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] No waiting events found dispatching network-vif-plugged-179cbc6a-d79f-4f46-88e8-f362ea0e4a26 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:20:20 np0005629333 nova_compute[244014]: 2026-02-25 12:20:20.337 244018 WARNING nova.compute.manager [req-4103aa58-8523-4715-b90c-79be88d5efa1 req-3cf4c5bc-c04b-4be8-a753-f6de7225d566 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Received unexpected event network-vif-plugged-179cbc6a-d79f-4f46-88e8-f362ea0e4a26 for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:20:20 np0005629333 nova_compute[244014]: 2026-02-25 12:20:20.345 244018 INFO nova.compute.manager [-] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Took 2.04 seconds to deallocate network for instance.#033[00m
Feb 25 07:20:20 np0005629333 nova_compute[244014]: 2026-02-25 12:20:20.406 244018 DEBUG oslo_concurrency.lockutils [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:20 np0005629333 nova_compute[244014]: 2026-02-25 12:20:20.407 244018 DEBUG oslo_concurrency.lockutils [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:20 np0005629333 nova_compute[244014]: 2026-02-25 12:20:20.493 244018 DEBUG oslo_concurrency.lockutils [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "interface-89488b9f-7c53-4e00-ad62-837e33a76dae-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:20 np0005629333 nova_compute[244014]: 2026-02-25 12:20:20.494 244018 DEBUG oslo_concurrency.lockutils [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-89488b9f-7c53-4e00-ad62-837e33a76dae-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:20 np0005629333 nova_compute[244014]: 2026-02-25 12:20:20.495 244018 DEBUG nova.objects.instance [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'flavor' on Instance uuid 89488b9f-7c53-4e00-ad62-837e33a76dae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:20:20 np0005629333 nova_compute[244014]: 2026-02-25 12:20:20.502 244018 DEBUG oslo_concurrency.processutils [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:20:20 np0005629333 podman[267624]: 2026-02-25 12:20:20.732765264 +0000 UTC m=+0.076500938 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 07:20:20 np0005629333 podman[267638]: 2026-02-25 12:20:20.772029411 +0000 UTC m=+0.114785727 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 25 07:20:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:20:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/617138349' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.035 244018 DEBUG oslo_concurrency.processutils [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
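[Annotation] The `ceph df` probe logged above is how nova-compute samples RBD capacity for its resource tracker, shelling out via oslo.concurrency. A minimal sketch of the same call, assuming a reachable cluster, `/etc/ceph/ceph.conf`, and the `client.openstack` keyring on the host; the JSON field names in the last line assume the current `ceph df --format=json` layout:

```python
# Sketch: reproduce the "ceph df" capacity probe nova-compute logs above.
# Standalone use is hypothetical; nova wraps this inside its RBD driver.
import json

from oslo_concurrency import processutils

stdout, _stderr = processutils.execute(
    'ceph', 'df', '--format=json',
    '--id', 'openstack',
    '--conf', '/etc/ceph/ceph.conf',
)
df = json.loads(stdout)
# Cluster-wide totals, matching the ceph-mgr pgmap lines in this log.
print(df['stats']['total_bytes'], df['stats']['total_avail_bytes'])
```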
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.043 244018 DEBUG nova.compute.provider_tree [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.067 244018 DEBUG nova.objects.instance [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'pci_requests' on Instance uuid 89488b9f-7c53-4e00-ad62-837e33a76dae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.073 244018 DEBUG nova.scheduler.client.report [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
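[Annotation] The inventory dict above fixes what placement will schedule onto this node: usable capacity per resource class is (total - reserved) * allocation_ratio. A worked check of the logged numbers (plain Python, not nova code):

```python
# Worked check of the provider inventory logged above.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, usable)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
```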
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.084 244018 DEBUG nova.network.neutron [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.105 244018 DEBUG oslo_concurrency.lockutils [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.143 244018 INFO nova.scheduler.client.report [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Deleted allocations for instance 7a6ab503-d433-40a7-9395-3d5660e852c9#033[00m
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.226 244018 DEBUG oslo_concurrency.lockutils [None req-80aae49c-e84f-48a6-a67f-0917a65c91f2 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "7a6ab503-d433-40a7-9395-3d5660e852c9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.388 244018 DEBUG nova.policy [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea407839a07d46608b6348caf676d12d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6a771ad0ce454d809d66825f69248fa7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:20:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 367 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 866 KiB/s rd, 4.3 MiB/s wr, 235 op/s
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.718 244018 DEBUG oslo_concurrency.lockutils [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.718 244018 DEBUG oslo_concurrency.lockutils [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.719 244018 DEBUG oslo_concurrency.lockutils [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.719 244018 DEBUG oslo_concurrency.lockutils [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.719 244018 DEBUG oslo_concurrency.lockutils [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
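[Annotation] The Acquiring/acquired/released triples above (lockutils.py:404/409/423) are the DEBUG trace emitted by oslo.concurrency's synchronized wrapper: one lock per instance UUID serializing the terminate path, plus a nested "-events" lock guarding the external-event queue. A sketch of the pattern, with lock names taken from the log and hypothetical function bodies:

```python
# Sketch of the oslo.concurrency locking pattern traced above.
# With oslo_concurrency.lockutils logging at DEBUG, each call logs
# "Acquiring lock ...", "Lock ... acquired ... waited Ns", and
# 'Lock ... "released" ... held Ns', as in the surrounding lines.
from oslo_concurrency import lockutils

INSTANCE_UUID = '52f927ad-a417-489f-9f92-87bc3433649d'  # from the log

@lockutils.synchronized(INSTANCE_UUID + '-events')
def _clear_events():
    # nova clears pending external events for the instance here
    return {}

@lockutils.synchronized(INSTANCE_UUID)
def do_terminate_instance():
    _clear_events()
```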
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.721 244018 INFO nova.compute.manager [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Terminating instance#033[00m
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.723 244018 DEBUG nova.compute.manager [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:20:21 np0005629333 kernel: tap2e503dd2-73 (unregistering): left promiscuous mode
Feb 25 07:20:21 np0005629333 NetworkManager[49836]: <info>  [1772022021.7903] device (tap2e503dd2-73): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.801 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:21Z|00152|binding|INFO|Releasing lport 2e503dd2-735e-4bfc-87c7-dffab319d935 from this chassis (sb_readonly=0)
Feb 25 07:20:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:21Z|00153|binding|INFO|Setting lport 2e503dd2-735e-4bfc-87c7-dffab319d935 down in Southbound
Feb 25 07:20:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:21Z|00154|binding|INFO|Removing iface tap2e503dd2-73 ovn-installed in OVS
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.803 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:21.810 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:71:62 10.100.0.5'], port_security=['fa:16:3e:87:71:62 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '52f927ad-a417-489f-9f92-87bc3433649d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b35cd816238a43d8825ab11e83d2b8bf', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'fd7733ad-d262-4781-bcfa-77cfa8b67164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=556b4b98-e95d-460c-a904-adc77baf4b88, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=2e503dd2-735e-4bfc-87c7-dffab319d935) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:20:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:21.812 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 2e503dd2-735e-4bfc-87c7-dffab319d935 in datapath 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 unbound from our chassis#033[00m
Feb 25 07:20:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:21.815 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
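[Annotation] The "Matched UPDATE: PortBindingUpdatedEvent" line above shows the metadata agent reacting to the Southbound Port_Binding change through an ovsdbapp row event. A compact sketch of that subscription pattern; the class body and handler are illustrative, not neutron's actual code, and assume an ovsdbapp version that exposes the match_fn hook:

```python
# Sketch: matching Port_Binding updates with an ovsdbapp row event, as the
# metadata agent does when a port is bound/unbound on this chassis.
from ovsdbapp.backend.ovs_idl import event as row_event


class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self):
        # Fire on updates to the Southbound Port_Binding table.
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def match_fn(self, event, row, old):
        # Only care when the binding's chassis column changed.
        return hasattr(old, 'chassis')

    def run(self, event, row, old):
        # Hypothetical handler; neutron's real one (un)provisions the
        # ovnmeta- namespace for the port's datapath.
        print('lport %s changed chassis' % row.logical_port)
```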
Feb 25 07:20:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:21.817 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[27fe5341-d672-4b08-9da6-1e7454b24711]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:21.817 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 namespace which is not needed anymore#033[00m
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.819 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:21 np0005629333 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000011.scope: Deactivated successfully.
Feb 25 07:20:21 np0005629333 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000011.scope: Consumed 11.708s CPU time.
Feb 25 07:20:21 np0005629333 systemd-machined[210048]: Machine qemu-29-instance-00000011 terminated.
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.961 244018 INFO nova.virt.libvirt.driver [-] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Instance destroyed successfully.#033[00m
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.962 244018 DEBUG nova.objects.instance [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lazy-loading 'resources' on Instance uuid 52f927ad-a417-489f-9f92-87bc3433649d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:20:21 np0005629333 neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6[261212]: [NOTICE]   (261216) : haproxy version is 2.8.14-c23fe91
Feb 25 07:20:21 np0005629333 neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6[261212]: [NOTICE]   (261216) : path to executable is /usr/sbin/haproxy
Feb 25 07:20:21 np0005629333 neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6[261212]: [WARNING]  (261216) : Exiting Master process...
Feb 25 07:20:21 np0005629333 neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6[261212]: [ALERT]    (261216) : Current worker (261218) exited with code 143 (Terminated)
Feb 25 07:20:21 np0005629333 neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6[261212]: [WARNING]  (261216) : All workers exited. Exiting... (0)
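[Annotation] The ALERT above is expected, not a crash: exit code 143 is 128 + 15, i.e. the haproxy worker was killed with SIGTERM by the agent's kill script during namespace teardown. A one-line check, plain Python with no OpenStack dependencies:

```python
# 143 = 128 + SIGTERM(15): conventional exit status of a SIGTERM'd process.
import signal

assert 143 - 128 == signal.SIGTERM  # SIGTERM == 15 on Linux
```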
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.980 244018 DEBUG nova.virt.libvirt.vif [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:18:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1736622759',display_name='tempest-ServersAdminTestJSON-server-1736622759',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1736622759',id=17,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b35cd816238a43d8825ab11e83d2b8bf',ramdisk_id='',reservation_id='r-ae0qgj1z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-147238686',owner_user_name='tempest-ServersAdminTestJSON-147238686-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:07Z,user_data=None,user_id='6395ac4bfa5d4910aed9116395bbbdeb',uuid=52f927ad-a417-489f-9f92-87bc3433649d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.981 244018 DEBUG nova.network.os_vif_util [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converting VIF {"id": "2e503dd2-735e-4bfc-87c7-dffab319d935", "address": "fa:16:3e:87:71:62", "network": {"id": "1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1185395346-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b35cd816238a43d8825ab11e83d2b8bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e503dd2-73", "ovs_interfaceid": "2e503dd2-735e-4bfc-87c7-dffab319d935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.981 244018 DEBUG nova.network.os_vif_util [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.982 244018 DEBUG os_vif [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.983 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.983 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e503dd2-73, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
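[Annotation] The DelPortCommand transaction above is os-vif detaching the tap from br-int through ovsdbapp's Open_vSwitch schema API. A sketch of roughly the same call; the connection boilerplate (socket path, timeout) is assumed, and port/bridge names are taken from the log:

```python
# Sketch of the ovsdbapp call behind "DelPortCommand(... port=tap2e503dd2-73,
# bridge=br-int, if_exists=True)". Assumes a local OVS db socket.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

conn = connection.Connection(
    idl=connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                        'Open_vSwitch'),
    timeout=10)
ovs = impl_idl.OvsdbIdl(conn)
ovs.del_port('tap2e503dd2-73', bridge='br-int', if_exists=True).execute(
    check_error=True)
```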
Feb 25 07:20:21 np0005629333 systemd[1]: libpod-5cace0ed50a1824992af06633e096a65ec6f66dc35d5b95126e92895214bbe2a.scope: Deactivated successfully.
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.985 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.988 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:21 np0005629333 podman[267711]: 2026-02-25 12:20:21.989280574 +0000 UTC m=+0.063822687 container died 5cace0ed50a1824992af06633e096a65ec6f66dc35d5b95126e92895214bbe2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 25 07:20:21 np0005629333 nova_compute[244014]: 2026-02-25 12:20:21.992 244018 INFO os_vif [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:71:62,bridge_name='br-int',has_traffic_filtering=True,id=2e503dd2-735e-4bfc-87c7-dffab319d935,network=Network(1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e503dd2-73')#033[00m
Feb 25 07:20:22 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5cace0ed50a1824992af06633e096a65ec6f66dc35d5b95126e92895214bbe2a-userdata-shm.mount: Deactivated successfully.
Feb 25 07:20:22 np0005629333 systemd[1]: var-lib-containers-storage-overlay-fc838cfd7fef58997060e892c18c0d6fc490da055ecee8b3fae6fbd9365cbb97-merged.mount: Deactivated successfully.
Feb 25 07:20:22 np0005629333 podman[267711]: 2026-02-25 12:20:22.035579342 +0000 UTC m=+0.110121455 container cleanup 5cace0ed50a1824992af06633e096a65ec6f66dc35d5b95126e92895214bbe2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 25 07:20:22 np0005629333 systemd[1]: libpod-conmon-5cace0ed50a1824992af06633e096a65ec6f66dc35d5b95126e92895214bbe2a.scope: Deactivated successfully.
Feb 25 07:20:22 np0005629333 podman[267766]: 2026-02-25 12:20:22.123295467 +0000 UTC m=+0.059343619 container remove 5cace0ed50a1824992af06633e096a65ec6f66dc35d5b95126e92895214bbe2a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:20:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:22.131 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0b1e4ed7-de7f-4680-99dd-972c183a5641]: (4, ('Wed Feb 25 12:20:21 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 (5cace0ed50a1824992af06633e096a65ec6f66dc35d5b95126e92895214bbe2a)\n5cace0ed50a1824992af06633e096a65ec6f66dc35d5b95126e92895214bbe2a\nWed Feb 25 12:20:22 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 (5cace0ed50a1824992af06633e096a65ec6f66dc35d5b95126e92895214bbe2a)\n5cace0ed50a1824992af06633e096a65ec6f66dc35d5b95126e92895214bbe2a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:22.134 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d2c12a10-6436-43d0-8f25-2641922f13cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:22.138 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f4cbf9a-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:22 np0005629333 nova_compute[244014]: 2026-02-25 12:20:22.140 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:22 np0005629333 kernel: tap1f4cbf9a-40: left promiscuous mode
Feb 25 07:20:22 np0005629333 nova_compute[244014]: 2026-02-25 12:20:22.152 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:22.156 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[42b3925e-e8f2-4e4a-a433-baf80750334c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:22.172 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cdc96f1a-d25e-4cb0-a48e-dffaec71e9d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:22.173 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c7fd8771-6fdd-4f58-963d-90ce339ca2fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:22.191 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[06724f9b-4b02-4d19-8f21-5fae47c491f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389389, 'reachable_time': 39675, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267782, 'error': None, 'target': 'ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:22.194 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:20:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:22.195 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[531a3d20-d6e4-43a2-af60-f2d3b6f49cd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:22 np0005629333 systemd[1]: run-netns-ovnmeta\x2d1f4cbf9a\x2d48ed\x2d4d53\x2db670\x2dcf25c9e6c5f6.mount: Deactivated successfully.
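[Annotation] The namespace teardown above runs inside neutron's privsep daemon; the privileged remove_netns at ip_lib.py:607 amounts to a pyroute2 call that unlinks the bind mount under /run/netns, which is what triggers the systemd mount deactivation on the previous line. A sketch, assuming pyroute2 and root privileges:

```python
# Sketch: what "Namespace ovnmeta-... deleted. remove_netns" amounts to.
# Equivalent to `ip netns delete <name>`; requires root.
from pyroute2 import netns

netns.remove('ovnmeta-1f4cbf9a-48ed-4d53-b670-cf25c9e6c5f6')
```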
Feb 25 07:20:22 np0005629333 nova_compute[244014]: 2026-02-25 12:20:22.273 244018 DEBUG nova.network.neutron [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Successfully created port: 07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:20:22 np0005629333 nova_compute[244014]: 2026-02-25 12:20:22.305 244018 INFO nova.virt.libvirt.driver [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Deleting instance files /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d_del#033[00m
Feb 25 07:20:22 np0005629333 nova_compute[244014]: 2026-02-25 12:20:22.306 244018 INFO nova.virt.libvirt.driver [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Deletion of /var/lib/nova/instances/52f927ad-a417-489f-9f92-87bc3433649d_del complete#033[00m
Feb 25 07:20:22 np0005629333 nova_compute[244014]: 2026-02-25 12:20:22.768 244018 DEBUG nova.compute.manager [req-1185a6bf-ea9e-434a-b9a4-41b34151ccf1 req-6853bea9-7eb4-4ae8-a474-182f4523cb25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-vif-unplugged-2e503dd2-735e-4bfc-87c7-dffab319d935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:22 np0005629333 nova_compute[244014]: 2026-02-25 12:20:22.769 244018 DEBUG oslo_concurrency.lockutils [req-1185a6bf-ea9e-434a-b9a4-41b34151ccf1 req-6853bea9-7eb4-4ae8-a474-182f4523cb25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:22 np0005629333 nova_compute[244014]: 2026-02-25 12:20:22.770 244018 DEBUG oslo_concurrency.lockutils [req-1185a6bf-ea9e-434a-b9a4-41b34151ccf1 req-6853bea9-7eb4-4ae8-a474-182f4523cb25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:22 np0005629333 nova_compute[244014]: 2026-02-25 12:20:22.771 244018 DEBUG oslo_concurrency.lockutils [req-1185a6bf-ea9e-434a-b9a4-41b34151ccf1 req-6853bea9-7eb4-4ae8-a474-182f4523cb25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:22 np0005629333 nova_compute[244014]: 2026-02-25 12:20:22.771 244018 DEBUG nova.compute.manager [req-1185a6bf-ea9e-434a-b9a4-41b34151ccf1 req-6853bea9-7eb4-4ae8-a474-182f4523cb25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] No waiting events found dispatching network-vif-unplugged-2e503dd2-735e-4bfc-87c7-dffab319d935 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:20:22 np0005629333 nova_compute[244014]: 2026-02-25 12:20:22.771 244018 DEBUG nova.compute.manager [req-1185a6bf-ea9e-434a-b9a4-41b34151ccf1 req-6853bea9-7eb4-4ae8-a474-182f4523cb25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-vif-unplugged-2e503dd2-735e-4bfc-87c7-dffab319d935 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:20:22 np0005629333 nova_compute[244014]: 2026-02-25 12:20:22.822 244018 INFO nova.compute.manager [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Took 1.10 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:20:22 np0005629333 nova_compute[244014]: 2026-02-25 12:20:22.823 244018 DEBUG oslo.service.loopingcall [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:20:22 np0005629333 nova_compute[244014]: 2026-02-25 12:20:22.823 244018 DEBUG nova.compute.manager [-] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:20:22 np0005629333 nova_compute[244014]: 2026-02-25 12:20:22.823 244018 DEBUG nova.network.neutron [-] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:20:23 np0005629333 nova_compute[244014]: 2026-02-25 12:20:23.337 244018 DEBUG nova.compute.manager [req-6c5c725d-9a7a-4b4b-9f93-77e8bbe7b58e req-bd2a7472-e697-4260-9601-ee5a097d8bd7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Received event network-vif-deleted-179cbc6a-d79f-4f46-88e8-f362ea0e4a26 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:23 np0005629333 nova_compute[244014]: 2026-02-25 12:20:23.454 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:23 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:23Z|00155|binding|INFO|Releasing lport ef44c128-3fa4-4475-b63c-4818a50ede40 from this chassis (sb_readonly=0)
Feb 25 07:20:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 291 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 509 KiB/s rd, 2.9 MiB/s wr, 199 op/s
Feb 25 07:20:23 np0005629333 nova_compute[244014]: 2026-02-25 12:20:23.666 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:23 np0005629333 nova_compute[244014]: 2026-02-25 12:20:23.833 244018 DEBUG nova.network.neutron [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Successfully updated port: 07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:20:23 np0005629333 nova_compute[244014]: 2026-02-25 12:20:23.893 244018 DEBUG oslo_concurrency.lockutils [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:20:23 np0005629333 nova_compute[244014]: 2026-02-25 12:20:23.893 244018 DEBUG oslo_concurrency.lockutils [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquired lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:20:23 np0005629333 nova_compute[244014]: 2026-02-25 12:20:23.894 244018 DEBUG nova.network.neutron [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:20:24 np0005629333 nova_compute[244014]: 2026-02-25 12:20:24.139 244018 WARNING nova.network.neutron [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] 08121372-a435-401a-b405-778e10d8c2e2 already exists in list: networks containing: ['08121372-a435-401a-b405-778e10d8c2e2']. ignoring it#033[00m
Feb 25 07:20:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:20:24 np0005629333 nova_compute[244014]: 2026-02-25 12:20:24.510 244018 DEBUG nova.compute.manager [req-32008cf5-682e-439d-b5f9-ac39f2fa6af2 req-6c2816f4-43d0-4545-b067-f7a5cc956b91 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:24 np0005629333 nova_compute[244014]: 2026-02-25 12:20:24.510 244018 DEBUG oslo_concurrency.lockutils [req-32008cf5-682e-439d-b5f9-ac39f2fa6af2 req-6c2816f4-43d0-4545-b067-f7a5cc956b91 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "52f927ad-a417-489f-9f92-87bc3433649d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:24 np0005629333 nova_compute[244014]: 2026-02-25 12:20:24.511 244018 DEBUG oslo_concurrency.lockutils [req-32008cf5-682e-439d-b5f9-ac39f2fa6af2 req-6c2816f4-43d0-4545-b067-f7a5cc956b91 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:24 np0005629333 nova_compute[244014]: 2026-02-25 12:20:24.512 244018 DEBUG oslo_concurrency.lockutils [req-32008cf5-682e-439d-b5f9-ac39f2fa6af2 req-6c2816f4-43d0-4545-b067-f7a5cc956b91 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:24 np0005629333 nova_compute[244014]: 2026-02-25 12:20:24.512 244018 DEBUG nova.compute.manager [req-32008cf5-682e-439d-b5f9-ac39f2fa6af2 req-6c2816f4-43d0-4545-b067-f7a5cc956b91 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] No waiting events found dispatching network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:20:24 np0005629333 nova_compute[244014]: 2026-02-25 12:20:24.513 244018 WARNING nova.compute.manager [req-32008cf5-682e-439d-b5f9-ac39f2fa6af2 req-6c2816f4-43d0-4545-b067-f7a5cc956b91 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received unexpected event network-vif-plugged-2e503dd2-735e-4bfc-87c7-dffab319d935 for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:20:24 np0005629333 nova_compute[244014]: 2026-02-25 12:20:24.602 244018 DEBUG nova.compute.manager [req-41d94ee1-7045-44c7-9e81-08dee8c11ba4 req-8561f94b-057f-41b5-9e64-00b50e0f1ac2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received event network-changed-07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:24 np0005629333 nova_compute[244014]: 2026-02-25 12:20:24.602 244018 DEBUG nova.compute.manager [req-41d94ee1-7045-44c7-9e81-08dee8c11ba4 req-8561f94b-057f-41b5-9e64-00b50e0f1ac2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Refreshing instance network info cache due to event network-changed-07c8ce0b-0c89-4abc-87c6-99c1024f7dd8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:20:24 np0005629333 nova_compute[244014]: 2026-02-25 12:20:24.603 244018 DEBUG oslo_concurrency.lockutils [req-41d94ee1-7045-44c7-9e81-08dee8c11ba4 req-8561f94b-057f-41b5-9e64-00b50e0f1ac2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:20:24 np0005629333 nova_compute[244014]: 2026-02-25 12:20:24.610 244018 DEBUG nova.network.neutron [-] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:20:24 np0005629333 nova_compute[244014]: 2026-02-25 12:20:24.655 244018 INFO nova.compute.manager [-] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Took 1.83 seconds to deallocate network for instance.#033[00m
Feb 25 07:20:24 np0005629333 nova_compute[244014]: 2026-02-25 12:20:24.762 244018 DEBUG oslo_concurrency.lockutils [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:24 np0005629333 nova_compute[244014]: 2026-02-25 12:20:24.762 244018 DEBUG oslo_concurrency.lockutils [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:24 np0005629333 nova_compute[244014]: 2026-02-25 12:20:24.828 244018 DEBUG oslo_concurrency.processutils [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:20:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 07:20:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.6 total, 600.0 interval#012Cumulative writes: 10K writes, 44K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 2984 syncs, 3.63 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5159 writes, 20K keys, 5159 commit groups, 1.0 writes per commit group, ingest: 21.71 MB, 0.04 MB/s#012Interval WAL: 5159 writes, 2085 syncs, 2.47 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 07:20:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:20:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/120177367' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:20:25 np0005629333 nova_compute[244014]: 2026-02-25 12:20:25.403 244018 DEBUG oslo_concurrency.processutils [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:20:25 np0005629333 nova_compute[244014]: 2026-02-25 12:20:25.410 244018 DEBUG nova.compute.provider_tree [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:20:25 np0005629333 nova_compute[244014]: 2026-02-25 12:20:25.429 244018 DEBUG nova.scheduler.client.report [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:20:25 np0005629333 nova_compute[244014]: 2026-02-25 12:20:25.479 244018 DEBUG oslo_concurrency.lockutils [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:25 np0005629333 nova_compute[244014]: 2026-02-25 12:20:25.527 244018 INFO nova.scheduler.client.report [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Deleted allocations for instance 52f927ad-a417-489f-9f92-87bc3433649d#033[00m
Feb 25 07:20:25 np0005629333 nova_compute[244014]: 2026-02-25 12:20:25.544 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022010.5433438, 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:20:25 np0005629333 nova_compute[244014]: 2026-02-25 12:20:25.545 244018 INFO nova.compute.manager [-] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:20:25 np0005629333 nova_compute[244014]: 2026-02-25 12:20:25.582 244018 DEBUG nova.compute.manager [None req-35a8a6f4-8bfd-4195-9dfa-3519229e0d54 - - - - - -] [instance: 8993d2e7-8b5d-42eb-9e24-f96dcc4da39b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:20:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 291 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 2.1 MiB/s wr, 156 op/s
Feb 25 07:20:25 np0005629333 nova_compute[244014]: 2026-02-25 12:20:25.645 244018 DEBUG oslo_concurrency.lockutils [None req-f903368e-decd-46d4-846b-c4fa1ab3e0cd 6395ac4bfa5d4910aed9116395bbbdeb b35cd816238a43d8825ab11e83d2b8bf - - default default] Lock "52f927ad-a417-489f-9f92-87bc3433649d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.223 244018 DEBUG nova.network.neutron [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Updating instance_info_cache with network_info: [{"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.268 244018 DEBUG oslo_concurrency.lockutils [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Releasing lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.269 244018 DEBUG oslo_concurrency.lockutils [req-41d94ee1-7045-44c7-9e81-08dee8c11ba4 req-8561f94b-057f-41b5-9e64-00b50e0f1ac2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.269 244018 DEBUG nova.network.neutron [req-41d94ee1-7045-44c7-9e81-08dee8c11ba4 req-8561f94b-057f-41b5-9e64-00b50e0f1ac2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Refreshing network info cache for port 07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.275 244018 DEBUG nova.virt.libvirt.vif [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:19:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1678048505',display_name='tempest-AttachInterfacesTestJSON-server-1678048505',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1678048505',id=25,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI59Lu36NJMBU0mthshk0lPs05K0s2PWyT35IsJwWazcPBPSYm/Ew3YeQPHN2oaFRIA9yk4c93F1Q1taymdhCN6cLrKiQ4srfxdpMeTwtszRhxTTKjPH7Q3QOLbiw7+JtQ==',key_name='tempest-keypair-1913992596',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-wuykovpt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:19:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=89488b9f-7c53-4e00-ad62-837e33a76dae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.275 244018 DEBUG nova.network.os_vif_util [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.276 244018 DEBUG nova.network.os_vif_util [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.277 244018 DEBUG os_vif [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.277 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.278 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.279 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.282 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.282 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07c8ce0b-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.283 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap07c8ce0b-0c, col_values=(('external_ids', {'iface-id': '07c8ce0b-0c89-4abc-87c6-99c1024f7dd8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:5e:dc', 'vm-uuid': '89488b9f-7c53-4e00-ad62-837e33a76dae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.285 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:26 np0005629333 NetworkManager[49836]: <info>  [1772022026.2866] manager: (tap07c8ce0b-0c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.288 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.292 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.293 244018 INFO os_vif [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c')#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.294 244018 DEBUG nova.virt.libvirt.vif [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:19:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1678048505',display_name='tempest-AttachInterfacesTestJSON-server-1678048505',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1678048505',id=25,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI59Lu36NJMBU0mthshk0lPs05K0s2PWyT35IsJwWazcPBPSYm/Ew3YeQPHN2oaFRIA9yk4c93F1Q1taymdhCN6cLrKiQ4srfxdpMeTwtszRhxTTKjPH7Q3QOLbiw7+JtQ==',key_name='tempest-keypair-1913992596',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-wuykovpt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:19:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=89488b9f-7c53-4e00-ad62-837e33a76dae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.294 244018 DEBUG nova.network.os_vif_util [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.295 244018 DEBUG nova.network.os_vif_util [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.299 244018 DEBUG nova.virt.libvirt.guest [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] attach device xml: <interface type="ethernet">
Feb 25 07:20:26 np0005629333 nova_compute[244014]:  <mac address="fa:16:3e:3d:5e:dc"/>
Feb 25 07:20:26 np0005629333 nova_compute[244014]:  <model type="virtio"/>
Feb 25 07:20:26 np0005629333 nova_compute[244014]:  <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:20:26 np0005629333 nova_compute[244014]:  <mtu size="1442"/>
Feb 25 07:20:26 np0005629333 nova_compute[244014]:  <target dev="tap07c8ce0b-0c"/>
Feb 25 07:20:26 np0005629333 nova_compute[244014]: </interface>
Feb 25 07:20:26 np0005629333 nova_compute[244014]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb 25 07:20:26 np0005629333 kernel: tap07c8ce0b-0c: entered promiscuous mode
Feb 25 07:20:26 np0005629333 NetworkManager[49836]: <info>  [1772022026.3165] manager: (tap07c8ce0b-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/77)
Feb 25 07:20:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:26Z|00156|binding|INFO|Claiming lport 07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 for this chassis.
Feb 25 07:20:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:26Z|00157|binding|INFO|07c8ce0b-0c89-4abc-87c6-99c1024f7dd8: Claiming fa:16:3e:3d:5e:dc 10.100.0.14
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.320 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:26Z|00158|binding|INFO|Setting lport 07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 ovn-installed in OVS
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.335 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:26Z|00159|binding|INFO|Setting lport 07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 up in Southbound
Feb 25 07:20:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:26.338 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:5e:dc 10.100.0.14'], port_security=['fa:16:3e:3d:5e:dc 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '89488b9f-7c53-4e00-ad62-837e33a76dae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '078dca40-137f-4eb6-953b-2ae25d0b4ca3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.339 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:26.342 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 in datapath 08121372-a435-401a-b405-778e10d8c2e2 bound to our chassis#033[00m
Feb 25 07:20:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:26.345 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08121372-a435-401a-b405-778e10d8c2e2#033[00m
Feb 25 07:20:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:26.361 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a69a26eb-aa68-4648-904f-2a8e16e280d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:26 np0005629333 systemd-udevd[267812]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:20:26 np0005629333 NetworkManager[49836]: <info>  [1772022026.3811] device (tap07c8ce0b-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:20:26 np0005629333 NetworkManager[49836]: <info>  [1772022026.3819] device (tap07c8ce0b-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:20:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:26.394 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b5432c46-89b2-40a4-8673-9df6a2501657]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:26.399 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0f820f82-7a3c-47e4-adb4-d48b36b2c60a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.411 244018 DEBUG nova.virt.libvirt.driver [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.412 244018 DEBUG nova.virt.libvirt.driver [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.412 244018 DEBUG nova.virt.libvirt.driver [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:0c:10:e8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.413 244018 DEBUG nova.virt.libvirt.driver [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:3d:5e:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:20:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:26.422 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f9b32315-7d94-4e3c-812e-eaafbbe59c9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:26.442 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ee79c77e-e31f-450e-8e54-fffdbd3daa64]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396416, 'reachable_time': 29153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267819, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.450 244018 DEBUG nova.virt.libvirt.guest [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:20:26 np0005629333 nova_compute[244014]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:20:26 np0005629333 nova_compute[244014]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1678048505</nova:name>
Feb 25 07:20:26 np0005629333 nova_compute[244014]:  <nova:creationTime>2026-02-25 12:20:26</nova:creationTime>
Feb 25 07:20:26 np0005629333 nova_compute[244014]:  <nova:flavor name="m1.nano">
Feb 25 07:20:26 np0005629333 nova_compute[244014]:    <nova:memory>128</nova:memory>
Feb 25 07:20:26 np0005629333 nova_compute[244014]:    <nova:disk>1</nova:disk>
Feb 25 07:20:26 np0005629333 nova_compute[244014]:    <nova:swap>0</nova:swap>
Feb 25 07:20:26 np0005629333 nova_compute[244014]:    <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:20:26 np0005629333 nova_compute[244014]:    <nova:vcpus>1</nova:vcpus>
Feb 25 07:20:26 np0005629333 nova_compute[244014]:  </nova:flavor>
Feb 25 07:20:26 np0005629333 nova_compute[244014]:  <nova:owner>
Feb 25 07:20:26 np0005629333 nova_compute[244014]:    <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 07:20:26 np0005629333 nova_compute[244014]:    <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 07:20:26 np0005629333 nova_compute[244014]:  </nova:owner>
Feb 25 07:20:26 np0005629333 nova_compute[244014]:  <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:20:26 np0005629333 nova_compute[244014]:  <nova:ports>
Feb 25 07:20:26 np0005629333 nova_compute[244014]:    <nova:port uuid="5e8b3807-0ee8-4f97-aa2d-3db7d1283888">
Feb 25 07:20:26 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 07:20:26 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:20:26 np0005629333 nova_compute[244014]:    <nova:port uuid="07c8ce0b-0c89-4abc-87c6-99c1024f7dd8">
Feb 25 07:20:26 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 07:20:26 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:20:26 np0005629333 nova_compute[244014]:  </nova:ports>
Feb 25 07:20:26 np0005629333 nova_compute[244014]: </nova:instance>
Feb 25 07:20:26 np0005629333 nova_compute[244014]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Feb 25 07:20:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:26.459 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[97987a71-4038-4902-bbce-c7f55a7aa59f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396426, 'tstamp': 396426}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267820, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396429, 'tstamp': 396429}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267820, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:26.461 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:26.465 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08121372-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:26.466 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:20:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:26.466 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08121372-a0, col_values=(('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.466 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:26.467 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.487 244018 DEBUG oslo_concurrency.lockutils [None req-2046c8ee-baa9-4316-9ee7-b98e075f1cc9 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-89488b9f-7c53-4e00-ad62-837e33a76dae-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.584 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.711 244018 DEBUG nova.compute.manager [req-69ee54a4-01ae-41b4-9728-83e6173f563c req-e5f2761e-6874-4473-a1e0-b8f59b9d86a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Received event network-vif-deleted-2e503dd2-735e-4bfc-87c7-dffab319d935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.785 244018 DEBUG nova.compute.manager [req-9fa7854f-5193-4197-bb29-0be252070de1 req-9cc220d9-8708-4414-98fc-c2982a88b129 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received event network-vif-plugged-07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.785 244018 DEBUG oslo_concurrency.lockutils [req-9fa7854f-5193-4197-bb29-0be252070de1 req-9cc220d9-8708-4414-98fc-c2982a88b129 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.786 244018 DEBUG oslo_concurrency.lockutils [req-9fa7854f-5193-4197-bb29-0be252070de1 req-9cc220d9-8708-4414-98fc-c2982a88b129 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.787 244018 DEBUG oslo_concurrency.lockutils [req-9fa7854f-5193-4197-bb29-0be252070de1 req-9cc220d9-8708-4414-98fc-c2982a88b129 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.787 244018 DEBUG nova.compute.manager [req-9fa7854f-5193-4197-bb29-0be252070de1 req-9cc220d9-8708-4414-98fc-c2982a88b129 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] No waiting events found dispatching network-vif-plugged-07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:20:26 np0005629333 nova_compute[244014]: 2026-02-25 12:20:26.787 244018 WARNING nova.compute.manager [req-9fa7854f-5193-4197-bb29-0be252070de1 req-9cc220d9-8708-4414-98fc-c2982a88b129 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received unexpected event network-vif-plugged-07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:20:27 np0005629333 ceph-mgr[76641]: [devicehealth INFO root] Check health
Feb 25 07:20:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 233 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 412 KiB/s rd, 2.1 MiB/s wr, 176 op/s
Feb 25 07:20:28 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:28Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3d:5e:dc 10.100.0.14
Feb 25 07:20:28 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:28Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:5e:dc 10.100.0.14
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.457 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.492 244018 DEBUG nova.network.neutron [req-41d94ee1-7045-44c7-9e81-08dee8c11ba4 req-8561f94b-057f-41b5-9e64-00b50e0f1ac2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Updated VIF entry in instance network info cache for port 07c8ce0b-0c89-4abc-87c6-99c1024f7dd8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.493 244018 DEBUG nova.network.neutron [req-41d94ee1-7045-44c7-9e81-08dee8c11ba4 req-8561f94b-057f-41b5-9e64-00b50e0f1ac2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Updating instance_info_cache with network_info: [{"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.521 244018 DEBUG oslo_concurrency.lockutils [req-41d94ee1-7045-44c7-9e81-08dee8c11ba4 req-8561f94b-057f-41b5-9e64-00b50e0f1ac2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.690 244018 DEBUG oslo_concurrency.lockutils [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "interface-89488b9f-7c53-4e00-ad62-837e33a76dae-07c8ce0b-0c89-4abc-87c6-99c1024f7dd8" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.691 244018 DEBUG oslo_concurrency.lockutils [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-89488b9f-7c53-4e00-ad62-837e33a76dae-07c8ce0b-0c89-4abc-87c6-99c1024f7dd8" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.707 244018 DEBUG nova.objects.instance [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'flavor' on Instance uuid 89488b9f-7c53-4e00-ad62-837e33a76dae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.735 244018 DEBUG nova.virt.libvirt.vif [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:19:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1678048505',display_name='tempest-AttachInterfacesTestJSON-server-1678048505',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1678048505',id=25,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI59Lu36NJMBU0mthshk0lPs05K0s2PWyT35IsJwWazcPBPSYm/Ew3YeQPHN2oaFRIA9yk4c93F1Q1taymdhCN6cLrKiQ4srfxdpMeTwtszRhxTTKjPH7Q3QOLbiw7+JtQ==',key_name='tempest-keypair-1913992596',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-wuykovpt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:19:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=89488b9f-7c53-4e00-ad62-837e33a76dae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.735 244018 DEBUG nova.network.os_vif_util [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.736 244018 DEBUG nova.network.os_vif_util [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.740 244018 DEBUG nova.virt.libvirt.guest [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3d:5e:dc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap07c8ce0b-0c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.743 244018 DEBUG nova.virt.libvirt.guest [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3d:5e:dc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap07c8ce0b-0c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.746 244018 DEBUG nova.virt.libvirt.driver [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Attempting to detach device tap07c8ce0b-0c from instance 89488b9f-7c53-4e00-ad62-837e33a76dae from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.746 244018 DEBUG nova.virt.libvirt.guest [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] detach device xml: <interface type="ethernet">
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <mac address="fa:16:3e:3d:5e:dc"/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <model type="virtio"/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <mtu size="1442"/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <target dev="tap07c8ce0b-0c"/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]: </interface>
Feb 25 07:20:28 np0005629333 nova_compute[244014]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.754 244018 DEBUG nova.virt.libvirt.guest [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3d:5e:dc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap07c8ce0b-0c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.758 244018 DEBUG nova.virt.libvirt.guest [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:3d:5e:dc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap07c8ce0b-0c"/></interface> not found in domain: <domain type='kvm' id='28'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <name>instance-00000019</name>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <uuid>89488b9f-7c53-4e00-ad62-837e33a76dae</uuid>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1678048505</nova:name>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <nova:creationTime>2026-02-25 12:20:26</nova:creationTime>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <nova:flavor name="m1.nano">
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <nova:memory>128</nova:memory>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <nova:disk>1</nova:disk>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <nova:swap>0</nova:swap>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <nova:vcpus>1</nova:vcpus>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  </nova:flavor>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <nova:owner>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  </nova:owner>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <nova:ports>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <nova:port uuid="5e8b3807-0ee8-4f97-aa2d-3db7d1283888">
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <nova:port uuid="07c8ce0b-0c89-4abc-87c6-99c1024f7dd8">
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  </nova:ports>
Feb 25 07:20:28 np0005629333 nova_compute[244014]: </nova:instance>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <memory unit='KiB'>131072</memory>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <vcpu placement='static'>1</vcpu>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <resource>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <partition>/machine</partition>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  </resource>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <sysinfo type='smbios'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <entry name='manufacturer'>RDO</entry>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <entry name='product'>OpenStack Compute</entry>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <entry name='serial'>89488b9f-7c53-4e00-ad62-837e33a76dae</entry>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <entry name='uuid'>89488b9f-7c53-4e00-ad62-837e33a76dae</entry>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <entry name='family'>Virtual Machine</entry>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <boot dev='hd'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <smbios mode='sysinfo'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <vmcoreinfo state='on'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <cpu mode='custom' match='exact' check='full'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <model fallback='forbid'>EPYC-Rome</model>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <vendor>AMD</vendor>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='x2apic'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='tsc-deadline'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='hypervisor'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='tsc_adjust'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='spec-ctrl'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='stibp'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='ssbd'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='cmp_legacy'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='overflow-recov'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='succor'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='ibrs'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='amd-ssbd'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='virt-ssbd'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='disable' name='lbrv'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='disable' name='tsc-scale'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='disable' name='vmcb-clean'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='disable' name='flushbyasid'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='disable' name='pause-filter'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='disable' name='pfthreshold'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='disable' name='svme-addr-chk'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='lfence-always-serializing'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='disable' name='xsaves'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='disable' name='svm'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='topoext'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='disable' name='npt'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='disable' name='nrip-save'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <clock offset='utc'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <timer name='pit' tickpolicy='delay'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <timer name='rtc' tickpolicy='catchup'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <timer name='hpet' present='no'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <on_poweroff>destroy</on_poweroff>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <on_reboot>restart</on_reboot>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <on_crash>destroy</on_crash>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <disk type='network' device='disk'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <driver name='qemu' type='raw' cache='none'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <auth username='openstack'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:        <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <source protocol='rbd' name='vms/89488b9f-7c53-4e00-ad62-837e33a76dae_disk' index='2'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:        <host name='192.168.122.100' port='6789'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target dev='vda' bus='virtio'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='virtio-disk0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <disk type='network' device='cdrom'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <driver name='qemu' type='raw' cache='none'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <auth username='openstack'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:        <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <source protocol='rbd' name='vms/89488b9f-7c53-4e00-ad62-837e33a76dae_disk.config' index='1'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:        <host name='192.168.122.100' port='6789'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target dev='sda' bus='sata'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <readonly/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='sata0-0-0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='0' model='pcie-root'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pcie.0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='1' port='0x10'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.1'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='2' port='0x11'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.2'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='3' port='0x12'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.3'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='4' port='0x13'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.4'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='5' port='0x14'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.5'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='6' port='0x15'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.6'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='7' port='0x16'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.7'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='8' port='0x17'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.8'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='9' port='0x18'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.9'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='10' port='0x19'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.10'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='11' port='0x1a'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.11'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='12' port='0x1b'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.12'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='13' port='0x1c'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.13'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='14' port='0x1d'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.14'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='15' port='0x1e'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.15'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='16' port='0x1f'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.16'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='17' port='0x20'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.17'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='18' port='0x21'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.18'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='19' port='0x22'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.19'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='20' port='0x23'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.20'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='21' port='0x24'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.21'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='22' port='0x25'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.22'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='23' port='0x26'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.23'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='24' port='0x27'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.24'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='25' port='0x28'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.25'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-pci-bridge'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.26'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='usb'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='sata' index='0'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='ide'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <interface type='ethernet'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <mac address='fa:16:3e:0c:10:e8'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target dev='tap5e8b3807-0e'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model type='virtio'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <driver name='vhost' rx_queue_size='512'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <mtu size='1442'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='net0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <interface type='ethernet'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <mac address='fa:16:3e:3d:5e:dc'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target dev='tap07c8ce0b-0c'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model type='virtio'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <driver name='vhost' rx_queue_size='512'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <mtu size='1442'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='net1'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <serial type='pty'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <source path='/dev/pts/7'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <log file='/var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae/console.log' append='off'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target type='isa-serial' port='0'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:        <model name='isa-serial'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      </target>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='serial0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <console type='pty' tty='/dev/pts/7'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <source path='/dev/pts/7'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <log file='/var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae/console.log' append='off'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target type='serial' port='0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='serial0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </console>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <input type='tablet' bus='usb'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='input0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='usb' bus='0' port='1'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <input type='mouse' bus='ps2'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='input1'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <input type='keyboard' bus='ps2'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='input2'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <graphics type='vnc' port='5907' autoport='yes' listen='::0'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <listen type='address' address='::0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </graphics>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <audio id='1' type='none'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model type='virtio' heads='1' primary='yes'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='video0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <watchdog model='itco' action='reset'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='watchdog0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </watchdog>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <memballoon model='virtio'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <stats period='10'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='balloon0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <rng model='virtio'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <backend model='random'>/dev/urandom</backend>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='rng0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <label>system_u:system_r:svirt_t:s0:c642,c973</label>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c642,c973</imagelabel>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  </seclabel>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <label>+107:+107</label>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <imagelabel>+107:+107</imagelabel>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  </seclabel>
Feb 25 07:20:28 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:20:28 np0005629333 nova_compute[244014]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
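
The get_interface_by_cfg lookup above can be reproduced offline against the dumped domain XML. A minimal sketch, assuming only the standard library's ElementTree and matching on MAC address plus tap device name; Nova's actual helper compares the full normalized device config, so this is illustrative, not its implementation:

    # Locate an <interface> in a libvirt domain XML dump by MAC and tap name,
    # roughly what the get_interface_by_cfg lookup logged above is doing.
    # Matching on (mac, target dev) only is a simplification.
    import xml.etree.ElementTree as ET

    def find_interface(domain_xml, mac, tap_dev):
        root = ET.fromstring(domain_xml)
        for iface in root.findall('./devices/interface'):
            mac_el = iface.find('mac')
            tgt_el = iface.find('target')
            if (mac_el is not None and mac_el.get('address') == mac
                    and tgt_el is not None and tgt_el.get('dev') == tap_dev):
                return iface
        return None  # already detached, as in the dump above

    # find_interface(xml_text, 'fa:16:3e:3d:5e:dc', 'tap07c8ce0b-0c') -> None
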
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.759 244018 INFO nova.virt.libvirt.driver [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully detached device tap07c8ce0b-0c from instance 89488b9f-7c53-4e00-ad62-837e33a76dae from the persistent domain config.
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.760 244018 DEBUG nova.virt.libvirt.driver [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] (1/8): Attempting to detach device tap07c8ce0b-0c with device alias net1 from instance 89488b9f-7c53-4e00-ad62-837e33a76dae from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.761 244018 DEBUG nova.virt.libvirt.guest [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] detach device xml: <interface type="ethernet">
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <mac address="fa:16:3e:3d:5e:dc"/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <model type="virtio"/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <mtu size="1442"/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <target dev="tap07c8ce0b-0c"/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]: </interface>
Feb 25 07:20:28 np0005629333 nova_compute[244014]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
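
The persistent-config detach reported above and the live-domain attempt that follows both reduce to the same libvirt call. A sketch, assuming the python libvirt bindings and a local qemu:///system connection; Nova layers the (1/8) retry loop and the event wait on top:

    # Detach the interface from the persistent config, then from the running
    # domain, mirroring the two phases logged above. The URI and the lookup
    # by UUID are assumptions for the sketch.
    import libvirt

    IFACE_XML = """<interface type="ethernet">
      <mac address="fa:16:3e:3d:5e:dc"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tap07c8ce0b-0c"/>
    </interface>"""

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('89488b9f-7c53-4e00-ad62-837e33a76dae')
    dom.detachDeviceFlags(IFACE_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)  # persistent
    dom.detachDeviceFlags(IFACE_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)    # running
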
Feb 25 07:20:28 np0005629333 kernel: tap07c8ce0b-0c (unregistering): left promiscuous mode
Feb 25 07:20:28 np0005629333 NetworkManager[49836]: <info>  [1772022028.8269] device (tap07c8ce0b-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:20:28 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:28Z|00160|binding|INFO|Releasing lport 07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 from this chassis (sb_readonly=0)
Feb 25 07:20:28 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:28Z|00161|binding|INFO|Setting lport 07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 down in Southbound
Feb 25 07:20:28 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:28Z|00162|binding|INFO|Removing iface tap07c8ce0b-0c ovn-installed in OVS
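The three ovn_controller lines above show the chassis releasing the logical port. One way to confirm the release from the southbound database, sketched under the assumption that ovn-sbctl is reachable from the node and that the Port_Binding record can be addressed by its logical_port name:

    # After the release, the Port_Binding row for the lport should show an
    # empty chassis column.
    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', 'list', 'Port_Binding',
         '07c8ce0b-0c89-4abc-87c6-99c1024f7dd8'],
        capture_output=True, text=True, check=True).stdout
    print(out)  # expect: chassis : []
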
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.832 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.835 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.837 244018 DEBUG nova.virt.libvirt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Received event <DeviceRemovedEvent: 1772022028.83713, 89488b9f-7c53-4e00-ad62-837e33a76dae => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.838 244018 DEBUG nova.virt.libvirt.driver [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Start waiting for the detach event from libvirt for device tap07c8ce0b-0c with device alias net1 for instance 89488b9f-7c53-4e00-ad62-837e33a76dae _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
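
The wait logged above is driven by libvirt's device-removed event, the DeviceRemovedEvent dispatched a few lines earlier. A sketch of the underlying event subscription, assuming the python libvirt bindings; the thread and timeout wiring here is illustrative, since Nova runs its own event dispatcher:

    # Subscribe to VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED and wait for the alias
    # ('net1' in the log above) to be reported removed.
    import threading
    import libvirt

    libvirt.virEventRegisterDefaultImpl()  # must precede the connection
    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('89488b9f-7c53-4e00-ad62-837e33a76dae')
    removed = threading.Event()

    def on_device_removed(conn, dom, dev, opaque):
        if dev == 'net1':  # device alias, as logged
            removed.set()

    conn.domainEventRegisterAny(
        dom, libvirt.VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED, on_device_removed, None)

    def pump():  # drive the default event loop so the callback can fire
        while not removed.is_set():
            libvirt.virEventRunDefaultImpl()

    threading.Thread(target=pump, daemon=True).start()
    removed.wait(timeout=10)  # the driver retries the detach on timeout
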
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.839 244018 DEBUG nova.virt.libvirt.guest [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3d:5e:dc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap07c8ce0b-0c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.843 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.845 244018 DEBUG nova.virt.libvirt.guest [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:3d:5e:dc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap07c8ce0b-0c"/></interface> not found in domain: <domain type='kvm' id='28'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <name>instance-00000019</name>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <uuid>89488b9f-7c53-4e00-ad62-837e33a76dae</uuid>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1678048505</nova:name>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <nova:creationTime>2026-02-25 12:20:26</nova:creationTime>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <nova:flavor name="m1.nano">
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <nova:memory>128</nova:memory>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <nova:disk>1</nova:disk>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <nova:swap>0</nova:swap>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <nova:vcpus>1</nova:vcpus>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  </nova:flavor>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <nova:owner>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  </nova:owner>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <nova:ports>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <nova:port uuid="5e8b3807-0ee8-4f97-aa2d-3db7d1283888">
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <nova:port uuid="07c8ce0b-0c89-4abc-87c6-99c1024f7dd8">
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  </nova:ports>
Feb 25 07:20:28 np0005629333 nova_compute[244014]: </nova:instance>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <memory unit='KiB'>131072</memory>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <vcpu placement='static'>1</vcpu>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <resource>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <partition>/machine</partition>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  </resource>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <sysinfo type='smbios'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <entry name='manufacturer'>RDO</entry>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <entry name='product'>OpenStack Compute</entry>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <entry name='serial'>89488b9f-7c53-4e00-ad62-837e33a76dae</entry>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <entry name='uuid'>89488b9f-7c53-4e00-ad62-837e33a76dae</entry>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <entry name='family'>Virtual Machine</entry>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <boot dev='hd'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <smbios mode='sysinfo'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <vmcoreinfo state='on'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <cpu mode='custom' match='exact' check='full'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <model fallback='forbid'>EPYC-Rome</model>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <vendor>AMD</vendor>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='x2apic'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='tsc-deadline'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='hypervisor'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='tsc_adjust'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='spec-ctrl'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='stibp'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='ssbd'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='cmp_legacy'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='overflow-recov'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='succor'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='ibrs'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='amd-ssbd'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='virt-ssbd'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='disable' name='lbrv'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='disable' name='tsc-scale'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='disable' name='vmcb-clean'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='disable' name='flushbyasid'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='disable' name='pause-filter'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='disable' name='pfthreshold'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='disable' name='svme-addr-chk'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='lfence-always-serializing'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='disable' name='xsaves'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='disable' name='svm'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='require' name='topoext'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='disable' name='npt'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <feature policy='disable' name='nrip-save'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <clock offset='utc'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <timer name='pit' tickpolicy='delay'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <timer name='rtc' tickpolicy='catchup'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <timer name='hpet' present='no'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <on_poweroff>destroy</on_poweroff>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <on_reboot>restart</on_reboot>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <on_crash>destroy</on_crash>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <disk type='network' device='disk'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <driver name='qemu' type='raw' cache='none'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <auth username='openstack'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:        <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <source protocol='rbd' name='vms/89488b9f-7c53-4e00-ad62-837e33a76dae_disk' index='2'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:        <host name='192.168.122.100' port='6789'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target dev='vda' bus='virtio'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='virtio-disk0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <disk type='network' device='cdrom'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <driver name='qemu' type='raw' cache='none'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <auth username='openstack'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:        <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <source protocol='rbd' name='vms/89488b9f-7c53-4e00-ad62-837e33a76dae_disk.config' index='1'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:        <host name='192.168.122.100' port='6789'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target dev='sda' bus='sata'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <readonly/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='sata0-0-0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='0' model='pcie-root'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pcie.0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='1' port='0x10'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.1'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='2' port='0x11'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.2'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='3' port='0x12'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.3'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='4' port='0x13'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.4'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='5' port='0x14'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.5'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='6' port='0x15'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.6'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='7' port='0x16'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.7'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='8' port='0x17'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.8'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='9' port='0x18'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.9'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='10' port='0x19'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.10'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='11' port='0x1a'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.11'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='12' port='0x1b'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.12'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='13' port='0x1c'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.13'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='14' port='0x1d'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.14'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='15' port='0x1e'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.15'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='16' port='0x1f'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.16'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='17' port='0x20'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.17'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='18' port='0x21'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.18'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='19' port='0x22'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.19'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='20' port='0x23'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.20'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='21' port='0x24'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.21'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='22' port='0x25'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.22'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='23' port='0x26'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.23'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='24' port='0x27'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.24'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target chassis='25' port='0x28'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.25'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model name='pcie-pci-bridge'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='pci.26'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='usb'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <controller type='sata' index='0'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='ide'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <interface type='ethernet'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <mac address='fa:16:3e:0c:10:e8'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target dev='tap5e8b3807-0e'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model type='virtio'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <driver name='vhost' rx_queue_size='512'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <mtu size='1442'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='net0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <serial type='pty'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <source path='/dev/pts/7'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <log file='/var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae/console.log' append='off'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target type='isa-serial' port='0'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:        <model name='isa-serial'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      </target>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='serial0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <console type='pty' tty='/dev/pts/7'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <source path='/dev/pts/7'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <log file='/var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae/console.log' append='off'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <target type='serial' port='0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='serial0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </console>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <input type='tablet' bus='usb'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='input0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='usb' bus='0' port='1'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <input type='mouse' bus='ps2'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='input1'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <input type='keyboard' bus='ps2'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='input2'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <graphics type='vnc' port='5907' autoport='yes' listen='::0'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <listen type='address' address='::0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </graphics>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <audio id='1' type='none'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <model type='virtio' heads='1' primary='yes'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='video0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <watchdog model='itco' action='reset'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='watchdog0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </watchdog>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <memballoon model='virtio'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <stats period='10'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='balloon0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <rng model='virtio'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <backend model='random'>/dev/urandom</backend>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <alias name='rng0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <label>system_u:system_r:svirt_t:s0:c642,c973</label>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c642,c973</imagelabel>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  </seclabel>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <label>+107:+107</label>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <imagelabel>+107:+107</imagelabel>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  </seclabel>
Feb 25 07:20:28 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:20:28 np0005629333 nova_compute[244014]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
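[annotation] The XML dump ending above is the live q35 domain definition that get_interface_by_cfg fetched before the detach. The visible run of pcie-root-port controllers is packed eight functions per slot onto bus 0x00 (slot 0x03 functions 0x1-0x7, then slot 0x04 with multifunction='on', then slot 0x05), the pcie-to-pci-bridge pci.26 provides the legacy bus carrying the piix3-uhci USB controller (its bus 0x1a is decimal 26), and the remaining virtio NIC tap5e8b3807-0e sits on bus 0x02. A minimal sketch of recovering that controller/address layout with Python's standard xml.etree.ElementTree (assumes the dump above has been saved to domain.xml; this is illustrative, not nova code):

    import xml.etree.ElementTree as ET

    # Parse the libvirt domain XML dumped in the log (saved to a file first).
    root = ET.parse('domain.xml').getroot()

    for ctrl in root.iter('controller'):
        addr = ctrl.find('address')
        if addr is None:
            continue  # the pcie-root controller itself carries no <address>
        print('%-5s index=%-3s model=%-18s bus=%s slot=%s fn=%s' % (
            ctrl.get('type'), ctrl.get('index'), ctrl.get('model'),
            addr.get('bus'), addr.get('slot'), addr.get('function')))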
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.845 244018 INFO nova.virt.libvirt.driver [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully detached device tap07c8ce0b-0c from instance 89488b9f-7c53-4e00-ad62-837e33a76dae from the live domain config.#033[00m
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.845 244018 DEBUG nova.virt.libvirt.vif [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:19:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1678048505',display_name='tempest-AttachInterfacesTestJSON-server-1678048505',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1678048505',id=25,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI59Lu36NJMBU0mthshk0lPs05K0s2PWyT35IsJwWazcPBPSYm/Ew3YeQPHN2oaFRIA9yk4c93F1Q1taymdhCN6cLrKiQ4srfxdpMeTwtszRhxTTKjPH7Q3QOLbiw7+JtQ==',key_name='tempest-keypair-1913992596',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-wuykovpt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:19:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=89488b9f-7c53-4e00-ad62-837e33a76dae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.846 244018 DEBUG nova.network.os_vif_util [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.846 244018 DEBUG nova.network.os_vif_util [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.847 244018 DEBUG os_vif [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.848 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.849 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07c8ce0b-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.850 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.853 244018 INFO os_vif [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c')#033[00m
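[annotation] The unplug path above is os-vif driving ovsdbapp: a single transaction with DelPortCommand(port=tap07c8ce0b-0c, bridge=br-int, if_exists=True) removes the tap from br-int, and the [POLLIN] wakeups are the IDL noticing the resulting OVSDB updates. Issued by hand, the same call looks roughly like this (a sketch; the unix-socket endpoint and timeout are assumptions, not values from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect the OVSDB IDL to the local switch database (endpoint assumed).
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Same operation as the logged DelPortCommand; if_exists makes it idempotent.
    api.del_port('tap07c8ce0b-0c', bridge='br-int',
                 if_exists=True).execute(check_error=True)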
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.853 244018 DEBUG nova.virt.libvirt.guest [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1678048505</nova:name>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <nova:creationTime>2026-02-25 12:20:28</nova:creationTime>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <nova:flavor name="m1.nano">
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <nova:memory>128</nova:memory>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <nova:disk>1</nova:disk>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <nova:swap>0</nova:swap>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <nova:vcpus>1</nova:vcpus>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  </nova:flavor>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <nova:owner>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  </nova:owner>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  <nova:ports>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    <nova:port uuid="5e8b3807-0ee8-4f97-aa2d-3db7d1283888">
Feb 25 07:20:28 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:20:28 np0005629333 nova_compute[244014]:  </nova:ports>
Feb 25 07:20:28 np0005629333 nova_compute[244014]: </nova:instance>
Feb 25 07:20:28 np0005629333 nova_compute[244014]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
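[annotation] set_metadata rewrites the <nova:instance> element in the live domain so that <nova:ports> now lists only the surviving port 5e8b3807-0ee8-4f97-aa2d-3db7d1283888 (10.100.0.13); the detached port is gone. With the libvirt Python bindings the underlying call is virDomain.setMetadata; a sketch assuming a local qemu:///system connection, with the XML payload shortened to a stand-in:

    import libvirt

    NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.1'

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('89488b9f-7c53-4e00-ad62-837e33a76dae')

    # Shortened stand-in for the <nova:instance> document shown in the log.
    meta = ('<instance xmlns="%s">'
            '<name>tempest-AttachInterfacesTestJSON-server-1678048505</name>'
            '</instance>' % NOVA_NS)

    # Replace the namespaced element in both the live and persistent config.
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, meta, 'nova', NOVA_NS,
                    libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)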
Feb 25 07:20:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:28.897 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:5e:dc 10.100.0.14'], port_security=['fa:16:3e:3d:5e:dc 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '89488b9f-7c53-4e00-ad62-837e33a76dae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '078dca40-137f-4eb6-953b-2ae25d0b4ca3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:20:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:28.900 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 in datapath 08121372-a435-401a-b405-778e10d8c2e2 unbound from our chassis#033[00m
Feb 25 07:20:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:28.903 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08121372-a435-401a-b405-778e10d8c2e2#033[00m
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.922 244018 DEBUG nova.compute.manager [req-b25cdfbc-b71e-400c-8e40-4c88fab7bef1 req-a322f99b-e49f-4f4b-8241-a09b3c37a62f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received event network-vif-plugged-07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.922 244018 DEBUG oslo_concurrency.lockutils [req-b25cdfbc-b71e-400c-8e40-4c88fab7bef1 req-a322f99b-e49f-4f4b-8241-a09b3c37a62f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.923 244018 DEBUG oslo_concurrency.lockutils [req-b25cdfbc-b71e-400c-8e40-4c88fab7bef1 req-a322f99b-e49f-4f4b-8241-a09b3c37a62f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.923 244018 DEBUG oslo_concurrency.lockutils [req-b25cdfbc-b71e-400c-8e40-4c88fab7bef1 req-a322f99b-e49f-4f4b-8241-a09b3c37a62f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.923 244018 DEBUG nova.compute.manager [req-b25cdfbc-b71e-400c-8e40-4c88fab7bef1 req-a322f99b-e49f-4f4b-8241-a09b3c37a62f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] No waiting events found dispatching network-vif-plugged-07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:20:28 np0005629333 nova_compute[244014]: 2026-02-25 12:20:28.923 244018 WARNING nova.compute.manager [req-b25cdfbc-b71e-400c-8e40-4c88fab7bef1 req-a322f99b-e49f-4f4b-8241-a09b3c37a62f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received unexpected event network-vif-plugged-07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 for instance with vm_state active and task_state None.#033[00m
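[annotation] The WARNING above is benign: a late network-vif-plugged arrives for the port that was just detached, but nothing on the compute side had registered a waiter for it (the instance is active with no task_state), so pop_instance_event finds nothing and the manager only logs the event. The registry is essentially a keyed map of threading events; a simplified, hypothetical sketch of the pattern (not nova's actual class):

    import threading

    class ToyInstanceEvents:
        """Keyed waiters: prepare before the operation, pop on Neutron callback."""

        def __init__(self):
            self._waiters = {}              # (instance_uuid, event_name) -> Event
            self._lock = threading.Lock()

        def prepare_for_instance_event(self, uuid, name):
            with self._lock:
                ev = self._waiters.setdefault((uuid, name), threading.Event())
            return ev                       # caller blocks on ev.wait(timeout=...)

        def pop_instance_event(self, uuid, name):
            with self._lock:
                ev = self._waiters.pop((uuid, name), None)
            if ev is None:
                return None                 # no waiter -> "unexpected event" warning
            ev.set()                        # wake the blocked caller
            return ev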
Feb 25 07:20:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:28.922 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d253936d-533e-42da-8be9-e65836d160f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:28.951 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7a6e7778-e245-4bc7-90b0-a515b6ea1749]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:28.958 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[16b3dcdd-2d1a-48fc-b507-63c588333cf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:28.989 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[34b33671-daa5-486b-8546-ac0ab8e47720]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:29.008 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d2a35d-4871-48cd-90ed-3060e5df0a78]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396416, 'reachable_time': 29153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267830, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:29.022 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d03650be-035a-410f-9208-df3835a9a086]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396426, 'tstamp': 396426}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267831, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396429, 'tstamp': 396429}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267831, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
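[annotation] The two privsep replies above are pyroute2 dumps taken inside the ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 namespace: the RTM_NEWLINK answer shows the veth tap08121372-a1 up with MTU 1500, and the RTM_NEWADDR pair confirms it still holds 10.100.0.2/28 plus the metadata address 169.254.169.254/32, so the agent can reuse the namespace rather than rebuild it. The same check done directly (a sketch assuming pyroute2 is available and the namespace still exists):

    from pyroute2 import NetNS

    NS = 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2'

    ns = NetNS(NS)
    try:
        # Find the tap device inside the namespace, then list its addresses.
        idx = ns.link_lookup(ifname='tap08121372-a1')[0]
        for msg in ns.get_addr(index=idx):
            attrs = dict(msg['attrs'])
            print('%s/%s' % (attrs['IFA_ADDRESS'], msg['prefixlen']))
    finally:
        ns.close()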
Feb 25 07:20:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:29.025 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:29 np0005629333 nova_compute[244014]: 2026-02-25 12:20:29.027 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:29.029 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08121372-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:29.030 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:20:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:29.031 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08121372-a0, col_values=(('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:29.032 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
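[annotation] The re-provision pass is deliberately idempotent: the agent first removes any copy of tap08121372-a0 from br-ex, then ensures the port exists on br-int and that its Interface row maps to OVN port ef44c128-3fa4-4475-b63c-4818a50ede40 via external_ids:iface-id. The latter two commands commit as 'Transaction caused no change' because br-int is already wired up. Batched in one ovsdbapp transaction the pair would look roughly like this (sketch reusing the assumed api object from the del_port example further up):

    # Both commands are no-ops when state already matches, mirroring the
    # "Transaction caused no change" results in the log.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap08121372-a0', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap08121372-a0',
            ('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'})))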
Feb 25 07:20:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:20:29 np0005629333 nova_compute[244014]: 2026-02-25 12:20:29.425 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022014.4231637, d9b67bce-8a7c-4f49-9cab-3e20377ca630 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:20:29 np0005629333 nova_compute[244014]: 2026-02-25 12:20:29.426 244018 INFO nova.compute.manager [-] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:20:29 np0005629333 nova_compute[244014]: 2026-02-25 12:20:29.451 244018 DEBUG nova.compute.manager [None req-7afd26ff-cf89-44cb-8a7f-cd84ff60f5b7 - - - - - -] [instance: d9b67bce-8a7c-4f49-9cab-3e20377ca630] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:20:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 233 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 75 KiB/s wr, 74 op/s
Feb 25 07:20:30 np0005629333 nova_compute[244014]: 2026-02-25 12:20:30.179 244018 DEBUG oslo_concurrency.lockutils [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:20:30 np0005629333 nova_compute[244014]: 2026-02-25 12:20:30.180 244018 DEBUG oslo_concurrency.lockutils [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquired lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:20:30 np0005629333 nova_compute[244014]: 2026-02-25 12:20:30.180 244018 DEBUG nova.network.neutron [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:20:30 np0005629333 nova_compute[244014]: 2026-02-25 12:20:30.409 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:30 np0005629333 nova_compute[244014]: 2026-02-25 12:20:30.785 244018 DEBUG oslo_concurrency.lockutils [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "89488b9f-7c53-4e00-ad62-837e33a76dae" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:30 np0005629333 nova_compute[244014]: 2026-02-25 12:20:30.786 244018 DEBUG oslo_concurrency.lockutils [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:30 np0005629333 nova_compute[244014]: 2026-02-25 12:20:30.786 244018 DEBUG oslo_concurrency.lockutils [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:30 np0005629333 nova_compute[244014]: 2026-02-25 12:20:30.786 244018 DEBUG oslo_concurrency.lockutils [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:30 np0005629333 nova_compute[244014]: 2026-02-25 12:20:30.787 244018 DEBUG oslo_concurrency.lockutils [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:30 np0005629333 nova_compute[244014]: 2026-02-25 12:20:30.788 244018 INFO nova.compute.manager [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Terminating instance#033[00m
Feb 25 07:20:30 np0005629333 nova_compute[244014]: 2026-02-25 12:20:30.790 244018 DEBUG nova.compute.manager [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:20:30 np0005629333 kernel: tap5e8b3807-0e (unregistering): left promiscuous mode
Feb 25 07:20:30 np0005629333 nova_compute[244014]: 2026-02-25 12:20:30.829 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:30 np0005629333 NetworkManager[49836]: <info>  [1772022030.8308] device (tap5e8b3807-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:20:30 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:30Z|00163|binding|INFO|Releasing lport 5e8b3807-0ee8-4f97-aa2d-3db7d1283888 from this chassis (sb_readonly=0)
Feb 25 07:20:30 np0005629333 nova_compute[244014]: 2026-02-25 12:20:30.834 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:30 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:30Z|00164|binding|INFO|Setting lport 5e8b3807-0ee8-4f97-aa2d-3db7d1283888 down in Southbound
Feb 25 07:20:30 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:30Z|00165|binding|INFO|Removing iface tap5e8b3807-0e ovn-installed in OVS
Feb 25 07:20:30 np0005629333 nova_compute[244014]: 2026-02-25 12:20:30.837 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:30 np0005629333 nova_compute[244014]: 2026-02-25 12:20:30.848 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:30.870 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:10:e8 10.100.0.13'], port_security=['fa:16:3e:0c:10:e8 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '89488b9f-7c53-4e00-ad62-837e33a76dae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7e3ab090-12ee-4eae-8ad1-0ddee1251e75', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=5e8b3807-0ee8-4f97-aa2d-3db7d1283888) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:20:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:30.872 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 5e8b3807-0ee8-4f97-aa2d-3db7d1283888 in datapath 08121372-a435-401a-b405-778e10d8c2e2 unbound from our chassis#033[00m
Feb 25 07:20:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:30.874 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08121372-a435-401a-b405-778e10d8c2e2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:20:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:30.875 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c2ef8804-0262-4771-aee3-1e1de0eba51a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:30.876 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 namespace which is not needed anymore#033[00m
Feb 25 07:20:30 np0005629333 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000019.scope: Deactivated successfully.
Feb 25 07:20:30 np0005629333 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000019.scope: Consumed 12.228s CPU time.
Feb 25 07:20:30 np0005629333 systemd-machined[210048]: Machine qemu-28-instance-00000019 terminated.
Feb 25 07:20:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:20:30
Feb 25 07:20:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:20:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:20:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'default.rgw.meta', '.rgw.root', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'vms']
Feb 25 07:20:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.027 244018 DEBUG nova.compute.manager [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received event network-vif-deleted-07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.028 244018 INFO nova.compute.manager [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Neutron deleted interface 07c8ce0b-0c89-4abc-87c6-99c1024f7dd8; detaching it from the instance and deleting it from the info cache#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.028 244018 DEBUG nova.network.neutron [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Updating instance_info_cache with network_info: [{"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
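[annotation] With the deleted interface pruned, the cache holds a single VIF: port 5e8b3807-0ee8-4f97-aa2d-3db7d1283888 with fixed IP 10.100.0.13 and floating IP 192.168.122.213 on the MTU-1442 network 08121372-a435-401a-b405-778e10d8c2e2. The network_info payload is plain JSON, so extracting the address mapping takes a few lines (sketch over a copy trimmed to the fields used; the real payload carries much more):

    import json

    cached = json.loads('''[{"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888",
        "network": {"subnets": [{"ips": [{"address": "10.100.0.13",
            "floating_ips": [{"address": "192.168.122.213"}]}]}]}}]''')

    for vif in cached:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floats = [f['address'] for f in ip['floating_ips']]
                print(vif['id'], ip['address'], '->', ', '.join(floats) or '-')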
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.035 244018 INFO nova.virt.libvirt.driver [-] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Instance destroyed successfully.#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.036 244018 DEBUG nova.objects.instance [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'resources' on Instance uuid 89488b9f-7c53-4e00-ad62-837e33a76dae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:20:31 np0005629333 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[265930]: [NOTICE]   (265934) : haproxy version is 2.8.14-c23fe91
Feb 25 07:20:31 np0005629333 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[265930]: [NOTICE]   (265934) : path to executable is /usr/sbin/haproxy
Feb 25 07:20:31 np0005629333 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[265930]: [WARNING]  (265934) : Exiting Master process...
Feb 25 07:20:31 np0005629333 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[265930]: [WARNING]  (265934) : Exiting Master process...
Feb 25 07:20:31 np0005629333 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[265930]: [ALERT]    (265934) : Current worker (265936) exited with code 143 (Terminated)
Feb 25 07:20:31 np0005629333 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[265930]: [WARNING]  (265934) : All workers exited. Exiting... (0)
Feb 25 07:20:31 np0005629333 systemd[1]: libpod-75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d.scope: Deactivated successfully.
Feb 25 07:20:31 np0005629333 conmon[265930]: conmon 75b9e4254fe9035cdfd3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d.scope/container/memory.events
Feb 25 07:20:31 np0005629333 podman[267856]: 2026-02-25 12:20:31.060756623 +0000 UTC m=+0.070586579 container died 75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.061 244018 DEBUG nova.virt.libvirt.vif [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:19:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1678048505',display_name='tempest-AttachInterfacesTestJSON-server-1678048505',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1678048505',id=25,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI59Lu36NJMBU0mthshk0lPs05K0s2PWyT35IsJwWazcPBPSYm/Ew3YeQPHN2oaFRIA9yk4c93F1Q1taymdhCN6cLrKiQ4srfxdpMeTwtszRhxTTKjPH7Q3QOLbiw7+JtQ==',key_name='tempest-keypair-1913992596',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-wuykovpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:19:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=89488b9f-7c53-4e00-ad62-837e33a76dae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.062 244018 DEBUG nova.network.os_vif_util [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.063 244018 DEBUG nova.network.os_vif_util [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0c:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=5e8b3807-0ee8-4f97-aa2d-3db7d1283888,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e8b3807-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.063 244018 DEBUG os_vif [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=5e8b3807-0ee8-4f97-aa2d-3db7d1283888,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e8b3807-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
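The "Unplugging vif ..." line above is emitted by the os-vif library's public entry point (os_vif/__init__.py:109), to which nova hands the converted VIFOpenVSwitch object. A minimal standalone sketch of that API follows; it is not nova's code, the field values are copied from the log record above, and it assumes os-vif and its vif_plug_ovs plugin are installed:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin through stevedore entry points

    vif_obj = vif.VIFOpenVSwitch(
        id='5e8b3807-0ee8-4f97-aa2d-3db7d1283888',
        address='fa:16:3e:0c:10:e8',
        bridge_name='br-int',
        vif_name='tap5e8b3807-0e',
        plugin='ovs',
        network=network.Network(id='08121372-a435-401a-b405-778e10d8c2e2'),
    )
    inst = instance_info.InstanceInfo(
        uuid='89488b9f-7c53-4e00-ad62-837e33a76dae',
        name='instance-00000019',
    )
    # Logs "Unplugging vif ...", dispatches to the ovs plugin (which removes
    # the tap port from br-int via OVSDB), then logs "Successfully unplugged".
    os_vif.unplug(vif_obj, inst)

The OVSDB side of that unplug is the DelPortCommand transaction in the lines that follow.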
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.065 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.065 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e8b3807-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.074 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.077 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.079 244018 DEBUG nova.objects.instance [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lazy-loading 'system_metadata' on Instance uuid 89488b9f-7c53-4e00-ad62-837e33a76dae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.083 244018 INFO os_vif [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=5e8b3807-0ee8-4f97-aa2d-3db7d1283888,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e8b3807-0e')#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.084 244018 DEBUG nova.virt.libvirt.vif [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:19:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1678048505',display_name='tempest-AttachInterfacesTestJSON-server-1678048505',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1678048505',id=25,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI59Lu36NJMBU0mthshk0lPs05K0s2PWyT35IsJwWazcPBPSYm/Ew3YeQPHN2oaFRIA9yk4c93F1Q1taymdhCN6cLrKiQ4srfxdpMeTwtszRhxTTKjPH7Q3QOLbiw7+JtQ==',key_name='tempest-keypair-1913992596',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-wuykovpt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:19:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=89488b9f-7c53-4e00-ad62-837e33a76dae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.084 244018 DEBUG nova.network.os_vif_util [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.085 244018 DEBUG nova.network.os_vif_util [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.086 244018 DEBUG os_vif [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.087 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.088 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07c8ce0b-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.088 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
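The DelPortCommand entries above are ovsdbapp transactions against the local Open vSwitch database, equivalent to "ovs-vsctl --if-exists del-port br-int <tap>". Because the command is built with if_exists=True it is idempotent: tap5e8b3807-0e was actually removed, while tap07c8ce0b-0c had no port row left, so its transaction commits as "Transaction caused no change". A minimal sketch of issuing the same command through ovsdbapp (the database socket path here is an assumption):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local ovsdb-server over its unix socket (path assumed).
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Same operation as the logged DelPortCommand; safe to repeat because
    # if_exists=True turns a missing port into a no-op commit.
    api.del_port('tap07c8ce0b-0c', bridge='br-int',
                 if_exists=True).execute(check_error=True)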
Feb 25 07:20:31 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d-userdata-shm.mount: Deactivated successfully.
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.091 244018 INFO os_vif [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c')#033[00m
Feb 25 07:20:31 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5f7f7a1a509256948c44cf81664959a0d95a0720770dc3872c473bf267ae05b3-merged.mount: Deactivated successfully.
Feb 25 07:20:31 np0005629333 podman[267856]: 2026-02-25 12:20:31.097350545 +0000 UTC m=+0.107180511 container cleanup 75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 25 07:20:31 np0005629333 systemd[1]: libpod-conmon-75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d.scope: Deactivated successfully.
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.136 244018 DEBUG nova.objects.instance [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lazy-loading 'flavor' on Instance uuid 89488b9f-7c53-4e00-ad62-837e33a76dae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:20:31 np0005629333 podman[267910]: 2026-02-25 12:20:31.173174272 +0000 UTC m=+0.054380788 container remove 75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:20:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:31.179 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fd9dded9-2f26-42d5-aab3-fbf5bd2098a4]: (4, ('Wed Feb 25 12:20:30 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 (75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d)\n75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d\nWed Feb 25 12:20:31 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 (75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d)\n75b9e4254fe9035cdfd339972a656ed273d5d518d1af188189571c05a71d713d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:31.181 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[31e5f275-f25e-4234-b19e-b7be8275618a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
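The "privsep: reply[...]" lines are the unprivileged ovn_metadata_agent receiving results from its privileged privsep daemon: the leading 4 is oslo.privsep's Message.RET code, and the payload is whatever the privileged entrypoint returned, here the (stdout, stderr, returncode) of the script that stopped and deleted the neutron-haproxy-ovnmeta container. A rough sketch of the pattern; the entrypoint name and command handling are illustrative, not neutron's actual code:

    import subprocess

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    # Privileged context: the forked daemon keeps only the listed capabilities.
    default = priv_context.PrivContext(
        __name__,
        cfg_section='privsep',
        pypath=__name__ + '.default',
        capabilities=[caps.CAP_SYS_ADMIN],
    )

    @default.entrypoint
    def run_privileged(cmd):
        # Runs inside the privsep daemon; the return value travels back over
        # the channel as (Message.RET, result) -- the "(4, ...)" tuples above.
        proc = subprocess.run(cmd, capture_output=True, text=True)
        return proc.stdout, proc.stderr, proc.returncode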
Feb 25 07:20:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:31.182 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.184 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:31 np0005629333 kernel: tap08121372-a0: left promiscuous mode
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.188 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:31.192 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7e3ef68c-ddff-49d0-9d75-96c39ec4a40f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.199 244018 DEBUG nova.virt.libvirt.vif [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:19:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1678048505',display_name='tempest-AttachInterfacesTestJSON-server-1678048505',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1678048505',id=25,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI59Lu36NJMBU0mthshk0lPs05K0s2PWyT35IsJwWazcPBPSYm/Ew3YeQPHN2oaFRIA9yk4c93F1Q1taymdhCN6cLrKiQ4srfxdpMeTwtszRhxTTKjPH7Q3QOLbiw7+JtQ==',key_name='tempest-keypair-1913992596',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-wuykovpt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=89488b9f-7c53-4e00-ad62-837e33a76dae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.199 244018 DEBUG nova.network.os_vif_util [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converting VIF {"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.200 244018 DEBUG nova.network.os_vif_util [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.202 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:31.207 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[87de8be2-6596-42aa-ab75-2fb1c6021a43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:31.209 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6c3281c7-062e-43c5-b74c-3933593897fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.215 244018 DEBUG nova.virt.libvirt.guest [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:3d:5e:dc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap07c8ce0b-0c"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.221 244018 DEBUG nova.virt.libvirt.guest [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:3d:5e:dc"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap07c8ce0b-0c"/></interface> not found in domain: <domain type='kvm'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  <name>instance-00000019</name>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  <uuid>89488b9f-7c53-4e00-ad62-837e33a76dae</uuid>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <nova:name>tempest-AttachInterfacesTestJSON-server-1678048505</nova:name>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:19:52</nova:creationTime>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:20:31 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:        <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:        <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:        <nova:port uuid="5e8b3807-0ee8-4f97-aa2d-3db7d1283888">
Feb 25 07:20:31 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  <memory unit='KiB'>131072</memory>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  <vcpu placement='static'>1</vcpu>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  <sysinfo type='smbios'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <entry name='manufacturer'>RDO</entry>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <entry name='product'>OpenStack Compute</entry>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <entry name='serial'>89488b9f-7c53-4e00-ad62-837e33a76dae</entry>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <entry name='uuid'>89488b9f-7c53-4e00-ad62-837e33a76dae</entry>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <entry name='family'>Virtual Machine</entry>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <boot dev='hd'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <smbios mode='sysinfo'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <vmcoreinfo state='on'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  <cpu mode='host-model' check='partial'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  <clock offset='utc'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <timer name='pit' tickpolicy='delay'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <timer name='rtc' tickpolicy='catchup'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <timer name='hpet' present='no'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  <on_poweroff>destroy</on_poweroff>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  <on_reboot>restart</on_reboot>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  <on_crash>destroy</on_crash>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <disk type='network' device='disk'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <driver name='qemu' type='raw' cache='none'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <auth username='openstack'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:        <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <source protocol='rbd' name='vms/89488b9f-7c53-4e00-ad62-837e33a76dae_disk'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:        <host name='192.168.122.100' port='6789'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target dev='vda' bus='virtio'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <disk type='network' device='cdrom'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <driver name='qemu' type='raw' cache='none'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <auth username='openstack'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:        <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <source protocol='rbd' name='vms/89488b9f-7c53-4e00-ad62-837e33a76dae_disk.config'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:        <host name='192.168.122.100' port='6789'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target dev='sda' bus='sata'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <readonly/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <controller type='pci' index='0' model='pcie-root'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target chassis='1' port='0x10'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target chassis='2' port='0x11'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target chassis='3' port='0x12'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target chassis='4' port='0x13'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target chassis='5' port='0x14'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target chassis='6' port='0x15'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target chassis='7' port='0x16'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target chassis='8' port='0x17'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target chassis='9' port='0x18'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target chassis='10' port='0x19'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target chassis='11' port='0x1a'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target chassis='12' port='0x1b'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target chassis='13' port='0x1c'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target chassis='14' port='0x1d'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target chassis='15' port='0x1e'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target chassis='16' port='0x1f'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target chassis='17' port='0x20'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target chassis='18' port='0x21'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target chassis='19' port='0x22'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target chassis='20' port='0x23'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target chassis='21' port='0x24'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target chassis='22' port='0x25'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target chassis='23' port='0x26'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target chassis='24' port='0x27'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target chassis='25' port='0x28'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <model name='pcie-pci-bridge'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <controller type='sata' index='0'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <interface type='ethernet'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <mac address='fa:16:3e:0c:10:e8'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target dev='tap5e8b3807-0e'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <model type='virtio'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <driver name='vhost' rx_queue_size='512'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <mtu size='1442'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <serial type='pty'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <log file='/var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae/console.log' append='off'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target type='isa-serial' port='0'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:        <model name='isa-serial'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      </target>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <console type='pty'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <log file='/var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae/console.log' append='off'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <target type='serial' port='0'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </console>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <input type='tablet' bus='usb'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='usb' bus='0' port='1'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <input type='mouse' bus='ps2'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <input type='keyboard' bus='ps2'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <graphics type='vnc' port='-1' autoport='yes' listen='::0'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <listen type='address' address='::0'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </graphics>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <audio id='1' type='none'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <model type='virtio' heads='1' primary='yes'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <watchdog model='itco' action='reset'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <memballoon model='virtio'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <stats period='10'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <rng model='virtio'>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <backend model='random'>/dev/urandom</backend>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:20:31 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:20:31 np0005629333 nova_compute[244014]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.223 244018 WARNING nova.virt.libvirt.driver [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Detaching interface fa:16:3e:3d:5e:dc failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap07c8ce0b-0c' not found.#033[00m
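The WARNING above is the usual benign race on instance delete: get_interface_by_cfg compared the <interface> config nova built for the detach against the live domain XML dumped above, found no element whose MAC address and target dev both matched (only tap5e8b3807-0e is still defined), and nova turned that into DeviceNotFound. A simplified re-implementation of the lookup, using only the stdlib rather than nova's actual guest.py code:

    import xml.etree.ElementTree as ET

    def find_interface(domain_xml, mac, dev):
        """Return the matching <interface> element of the domain, or None."""
        root = ET.fromstring(domain_xml)
        for iface in root.findall('./devices/interface'):
            mac_el = iface.find('mac')
            tgt_el = iface.find('target')
            if (mac_el is not None and mac_el.get('address') == mac
                    and tgt_el is not None and tgt_el.get('dev') == dev):
                return iface
        return None

    # Against the domain dumped above, only tap5e8b3807-0e remains, so
    # find_interface(xml, 'fa:16:3e:3d:5e:dc', 'tap07c8ce0b-0c') is None,
    # which nova reports as the DeviceNotFound warning.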
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.224 244018 DEBUG nova.virt.libvirt.vif [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:19:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1678048505',display_name='tempest-AttachInterfacesTestJSON-server-1678048505',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1678048505',id=25,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI59Lu36NJMBU0mthshk0lPs05K0s2PWyT35IsJwWazcPBPSYm/Ew3YeQPHN2oaFRIA9yk4c93F1Q1taymdhCN6cLrKiQ4srfxdpMeTwtszRhxTTKjPH7Q3QOLbiw7+JtQ==',key_name='tempest-keypair-1913992596',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:19:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-wuykovpt',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=89488b9f-7c53-4e00-ad62-837e33a76dae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.225 244018 DEBUG nova.network.os_vif_util [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converting VIF {"id": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "address": "fa:16:3e:3d:5e:dc", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07c8ce0b-0c", "ovs_interfaceid": "07c8ce0b-0c89-4abc-87c6-99c1024f7dd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.226 244018 DEBUG nova.network.os_vif_util [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.226 244018 DEBUG os_vif [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.228 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.229 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07c8ce0b-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.229 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
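
The two ovsdbapp transaction lines above are the entire OVS-level half of the VIF unplug: one transaction containing DelPortCommand(port=tap07c8ce0b-0c, bridge=br-int, if_exists=True). Because if_exists=True makes the delete idempotent, a port that is already gone produces "Transaction caused no change" rather than an error. A minimal sketch of the same call through ovsdbapp's public API (the socket path and timeout are assumptions, not values from this log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect the OVSDB IDL to the local ovsdb-server (path assumed).
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Idempotent delete: a no-op when the port is already absent,
    # which is exactly the "no change" outcome logged above.
    api.del_port('tap07c8ce0b-0c', bridge='br-int',
                 if_exists=True).execute(check_error=True)
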
Feb 25 07:20:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:31.229 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c4010c4c-e114-41ab-87c2-fce8e05d7a86]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396408, 'reachable_time': 41147, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267929, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
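
The privsep reply above is a raw pyroute2 netlink message: the metadata agent's privileged helper dumped the links inside the ovnmeta-08121372... namespace (note 'target' in the message header) and got back one RTM_NEWLINK dict for 'lo' with its IFLA_* attributes and counters. Roughly the same dump can be taken directly with pyroute2; a sketch, assuming the namespace still exists at that point:

    from pyroute2 import NetNS

    # Open a netlink socket inside the named namespace and list its links.
    with NetNS('ovnmeta-08121372-a435-401a-b405-778e10d8c2e2') as ns:
        for link in ns.get_links():
            # Each message mirrors the dict in the log: 'attrs', 'header', 'state'.
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_MTU'),
                  link['state'])
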
Feb 25 07:20:31 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.235 244018 DEBUG nova.compute.manager [req-b2125f03-0daa-4081-852f-e91864225773 req-8dc9ad88-b4d9-4b60-b133-86b66a9f523f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received event network-vif-unplugged-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.235 244018 DEBUG oslo_concurrency.lockutils [req-b2125f03-0daa-4081-852f-e91864225773 req-8dc9ad88-b4d9-4b60-b133-86b66a9f523f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.235 244018 DEBUG oslo_concurrency.lockutils [req-b2125f03-0daa-4081-852f-e91864225773 req-8dc9ad88-b4d9-4b60-b133-86b66a9f523f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.236 244018 DEBUG oslo_concurrency.lockutils [req-b2125f03-0daa-4081-852f-e91864225773 req-8dc9ad88-b4d9-4b60-b133-86b66a9f523f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.236 244018 DEBUG nova.compute.manager [req-b2125f03-0daa-4081-852f-e91864225773 req-8dc9ad88-b4d9-4b60-b133-86b66a9f523f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] No waiting events found dispatching network-vif-unplugged-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
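
The acquire/release bracket around _pop_event above is oslo.concurrency's lock decorator; the "inner ... lockutils.py" suffix on each line is the decorator's wrapper function reporting the wait and hold times. The same pattern, as a runnable sketch (the lock name is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('89488b9f-7c53-4e00-ad62-837e33a76dae-events')
    def pop_event():
        # Critical section: one thread per lock name at a time. Entering and
        # leaving produces the DEBUG "acquired"/"released" lines seen above.
        return None

    pop_event()
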
Feb 25 07:20:31 np0005629333 systemd[1]: run-netns-ovnmeta\x2d08121372\x2da435\x2d401a\x2db405\x2d778e10d8c2e2.mount: Deactivated successfully.
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.236 244018 DEBUG nova.compute.manager [req-b2125f03-0daa-4081-852f-e91864225773 req-8dc9ad88-b4d9-4b60-b133-86b66a9f523f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received event network-vif-unplugged-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:20:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:31.234 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:20:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:31.234 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[9610019e-d487-41da-afe3-98430e1cb2bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.237 244018 INFO os_vif [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5e:dc,bridge_name='br-int',has_traffic_filtering=True,id=07c8ce0b-0c89-4abc-87c6-99c1024f7dd8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07c8ce0b-0c')#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.239 244018 DEBUG nova.virt.libvirt.guest [req-4316e803-b850-491f-a3a1-2003cc5af073 req-fef8d4b8-f707-4cf7-a68f-7601dd1e97d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1678048505</nova:name>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  <nova:creationTime>2026-02-25 12:20:31</nova:creationTime>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  <nova:flavor name="m1.nano">
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <nova:memory>128</nova:memory>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <nova:disk>1</nova:disk>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <nova:swap>0</nova:swap>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <nova:vcpus>1</nova:vcpus>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  </nova:flavor>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  <nova:owner>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  </nova:owner>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  <nova:ports>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    <nova:port uuid="5e8b3807-0ee8-4f97-aa2d-3db7d1283888">
Feb 25 07:20:31 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:20:31 np0005629333 nova_compute[244014]:  </nova:ports>
Feb 25 07:20:31 np0005629333 nova_compute[244014]: </nova:instance>
Feb 25 07:20:31 np0005629333 nova_compute[244014]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
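
The XML dumped above is nova rewriting its per-instance metadata element after the interface detach: the <nova:ports> list now carries only the surviving port 5e8b3807-0ee8-4f97-aa2d-3db7d1283888 with its fixed IP 10.100.0.13. That document is stored in the libvirt domain; a minimal sketch of the underlying call via libvirt-python (the connection URI and the 'instance' key are assumptions here):

    import libvirt

    NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.1'
    xml = '<instance xmlns="%s"><name>demo</name></instance>' % NOVA_NS  # toy payload

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('89488b9f-7c53-4e00-ad62-837e33a76dae')
    # Store the element under the nova namespace, both live and in the
    # persisted domain definition.
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, xml, 'instance', NOVA_NS,
                    libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)
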
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.389 244018 INFO nova.virt.libvirt.driver [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Deleting instance files /var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae_del#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.390 244018 INFO nova.virt.libvirt.driver [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Deletion of /var/lib/nova/instances/89488b9f-7c53-4e00-ad62-837e33a76dae_del complete#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.520 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022016.5193965, de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.520 244018 INFO nova.compute.manager [-] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:20:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:20:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:20:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:20:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:20:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:20:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:20:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 233 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 75 KiB/s wr, 74 op/s
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.694 244018 INFO nova.network.neutron [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Port 07c8ce0b-0c89-4abc-87c6-99c1024f7dd8 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.695 244018 DEBUG nova.network.neutron [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Updating instance_info_cache with network_info: [{"id": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "address": "fa:16:3e:0c:10:e8", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e8b3807-0e", "ovs_interfaceid": "5e8b3807-0ee8-4f97-aa2d-3db7d1283888", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.724 244018 DEBUG nova.compute.manager [None req-a48cfd5b-3b23-4fe2-8503-9875a44996dc - - - - - -] [instance: de9168ca-7bb3-4ef5-b9cb-e23dbcfc6f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.724 244018 DEBUG oslo_concurrency.lockutils [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Releasing lock "refresh_cache-89488b9f-7c53-4e00-ad62-837e33a76dae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.731 244018 INFO nova.compute.manager [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Took 0.94 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.731 244018 DEBUG oslo.service.loopingcall [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.732 244018 DEBUG nova.compute.manager [-] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.733 244018 DEBUG nova.network.neutron [-] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:20:31 np0005629333 nova_compute[244014]: 2026-02-25 12:20:31.748 244018 DEBUG oslo_concurrency.lockutils [None req-70c3bb1d-9d4b-4ffd-a14d-189e9514c916 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-89488b9f-7c53-4e00-ad62-837e33a76dae-07c8ce0b-0c89-4abc-87c6-99c1024f7dd8" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:20:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:20:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:20:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:20:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:20:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:20:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:20:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:20:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:20:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:20:32 np0005629333 nova_compute[244014]: 2026-02-25 12:20:32.385 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:32 np0005629333 nova_compute[244014]: 2026-02-25 12:20:32.497 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:32 np0005629333 nova_compute[244014]: 2026-02-25 12:20:32.864 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022017.863511, 7a6ab503-d433-40a7-9395-3d5660e852c9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:20:32 np0005629333 nova_compute[244014]: 2026-02-25 12:20:32.865 244018 INFO nova.compute.manager [-] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:20:32 np0005629333 nova_compute[244014]: 2026-02-25 12:20:32.937 244018 DEBUG nova.compute.manager [None req-67b1a63c-4490-43e9-8c5f-95d1139d3913 - - - - - -] [instance: 7a6ab503-d433-40a7-9395-3d5660e852c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:20:33 np0005629333 nova_compute[244014]: 2026-02-25 12:20:33.319 244018 DEBUG nova.network.neutron [-] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:20:33 np0005629333 nova_compute[244014]: 2026-02-25 12:20:33.436 244018 INFO nova.compute.manager [-] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Took 1.70 seconds to deallocate network for instance.#033[00m
Feb 25 07:20:33 np0005629333 nova_compute[244014]: 2026-02-25 12:20:33.450 244018 DEBUG nova.compute.manager [req-5bbfe431-364b-4a8b-abe4-3006afcd236d req-209bd495-7184-45ff-8aa7-9ec3715af9c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received event network-vif-deleted-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:33 np0005629333 nova_compute[244014]: 2026-02-25 12:20:33.450 244018 INFO nova.compute.manager [req-5bbfe431-364b-4a8b-abe4-3006afcd236d req-209bd495-7184-45ff-8aa7-9ec3715af9c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Neutron deleted interface 5e8b3807-0ee8-4f97-aa2d-3db7d1283888; detaching it from the instance and deleting it from the info cache#033[00m
Feb 25 07:20:33 np0005629333 nova_compute[244014]: 2026-02-25 12:20:33.451 244018 DEBUG nova.network.neutron [req-5bbfe431-364b-4a8b-abe4-3006afcd236d req-209bd495-7184-45ff-8aa7-9ec3715af9c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:20:33 np0005629333 nova_compute[244014]: 2026-02-25 12:20:33.459 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:33 np0005629333 nova_compute[244014]: 2026-02-25 12:20:33.534 244018 DEBUG nova.compute.manager [req-5bbfe431-364b-4a8b-abe4-3006afcd236d req-209bd495-7184-45ff-8aa7-9ec3715af9c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Detach interface failed, port_id=5e8b3807-0ee8-4f97-aa2d-3db7d1283888, reason: Instance 89488b9f-7c53-4e00-ad62-837e33a76dae could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 25 07:20:33 np0005629333 nova_compute[244014]: 2026-02-25 12:20:33.610 244018 DEBUG oslo_concurrency.lockutils [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:33 np0005629333 nova_compute[244014]: 2026-02-25 12:20:33.610 244018 DEBUG oslo_concurrency.lockutils [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 195 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.2 KiB/s wr, 61 op/s
Feb 25 07:20:33 np0005629333 nova_compute[244014]: 2026-02-25 12:20:33.665 244018 DEBUG oslo_concurrency.processutils [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:20:33 np0005629333 nova_compute[244014]: 2026-02-25 12:20:33.783 244018 DEBUG nova.compute.manager [req-17877713-4e19-43ee-b8ac-4168515ae73e req-418c7df1-f2ec-473d-8d2d-f0d78bbfb990 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received event network-vif-plugged-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:33 np0005629333 nova_compute[244014]: 2026-02-25 12:20:33.784 244018 DEBUG oslo_concurrency.lockutils [req-17877713-4e19-43ee-b8ac-4168515ae73e req-418c7df1-f2ec-473d-8d2d-f0d78bbfb990 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:33 np0005629333 nova_compute[244014]: 2026-02-25 12:20:33.784 244018 DEBUG oslo_concurrency.lockutils [req-17877713-4e19-43ee-b8ac-4168515ae73e req-418c7df1-f2ec-473d-8d2d-f0d78bbfb990 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:33 np0005629333 nova_compute[244014]: 2026-02-25 12:20:33.785 244018 DEBUG oslo_concurrency.lockutils [req-17877713-4e19-43ee-b8ac-4168515ae73e req-418c7df1-f2ec-473d-8d2d-f0d78bbfb990 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:33 np0005629333 nova_compute[244014]: 2026-02-25 12:20:33.785 244018 DEBUG nova.compute.manager [req-17877713-4e19-43ee-b8ac-4168515ae73e req-418c7df1-f2ec-473d-8d2d-f0d78bbfb990 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] No waiting events found dispatching network-vif-plugged-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:20:33 np0005629333 nova_compute[244014]: 2026-02-25 12:20:33.786 244018 WARNING nova.compute.manager [req-17877713-4e19-43ee-b8ac-4168515ae73e req-418c7df1-f2ec-473d-8d2d-f0d78bbfb990 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Received unexpected event network-vif-plugged-5e8b3807-0ee8-4f97-aa2d-3db7d1283888 for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:20:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:20:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:20:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/368640659' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:20:34 np0005629333 nova_compute[244014]: 2026-02-25 12:20:34.233 244018 DEBUG oslo_concurrency.processutils [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
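
With the RBD image backend, the resource tracker's disk numbers come from the cluster rather than the local filesystem, hence the ceph df subprocess above (audited on the mon as client.openstack a moment earlier). A sketch of the same probe; the JSON field names follow recent Ceph releases and should be treated as an assumption:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    df = json.loads(out)

    # Cluster-wide totals feed the DISK_GB inventory; per-pool stats are
    # available in the same document.
    print(df['stats']['total_bytes'], df['stats']['total_avail_bytes'])
    for pool in df['pools']:
        print(pool['name'], pool['stats']['bytes_used'])
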
Feb 25 07:20:34 np0005629333 nova_compute[244014]: 2026-02-25 12:20:34.238 244018 DEBUG nova.compute.provider_tree [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:20:34 np0005629333 nova_compute[244014]: 2026-02-25 12:20:34.314 244018 DEBUG nova.scheduler.client.report [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:20:34 np0005629333 nova_compute[244014]: 2026-02-25 12:20:34.382 244018 DEBUG oslo_concurrency.lockutils [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:34 np0005629333 nova_compute[244014]: 2026-02-25 12:20:34.442 244018 INFO nova.scheduler.client.report [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Deleted allocations for instance 89488b9f-7c53-4e00-ad62-837e33a76dae#033[00m
Feb 25 07:20:34 np0005629333 nova_compute[244014]: 2026-02-25 12:20:34.635 244018 DEBUG oslo_concurrency.lockutils [None req-edb3bf1b-0ba8-484a-87e3-7a3dd942ed79 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "89488b9f-7c53-4e00-ad62-837e33a76dae" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 195 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.7 KiB/s wr, 42 op/s
Feb 25 07:20:36 np0005629333 nova_compute[244014]: 2026-02-25 12:20:36.068 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:36 np0005629333 nova_compute[244014]: 2026-02-25 12:20:36.958 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022021.9568572, 52f927ad-a417-489f-9f92-87bc3433649d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:20:36 np0005629333 nova_compute[244014]: 2026-02-25 12:20:36.959 244018 INFO nova.compute.manager [-] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:20:36 np0005629333 nova_compute[244014]: 2026-02-25 12:20:36.992 244018 DEBUG nova.compute.manager [None req-9a64dcda-494a-4638-893b-958295ee1183 - - - - - -] [instance: 52f927ad-a417-489f-9f92-87bc3433649d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:20:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 153 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 KiB/s wr, 48 op/s
Feb 25 07:20:38 np0005629333 nova_compute[244014]: 2026-02-25 12:20:38.462 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:20:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 153 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Feb 25 07:20:41 np0005629333 nova_compute[244014]: 2026-02-25 12:20:41.071 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 153 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Feb 25 07:20:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:20:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:20:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:20:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:20:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 3.9747607904369495e-06 of space, bias 1.0, pg target 0.0011924282371310849 quantized to 32 (current 32)
Feb 25 07:20:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:20:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:20:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:20:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:20:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:20:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024899687316491128 of space, bias 1.0, pg target 0.7469906194947339 quantized to 32 (current 32)
Feb 25 07:20:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:20:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0716338374073357e-06 of space, bias 4.0, pg target 0.0012859606048888027 quantized to 16 (current 16)
Feb 25 07:20:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:20:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:20:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:20:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:20:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:20:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:20:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:20:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:20:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:20:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
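
Each pg_autoscaler pair above is one pool evaluation: the 'pg target' is the pool's fraction of raw capacity, times its bias, times a cluster-wide PG budget, then quantized to a power of two and floored at the pool's current pg_num. The logged values are consistent with a budget of 300 PGs (for example mon_target_pg_per_osd=100 across 3 OSDs; the OSD count is an assumption, not stated in this excerpt):

    # 'images' pool, from the log: fraction 0.0024899687..., bias 1.0
    print(0.0024899687316491128 * 1.0 * 300)   # ~0.74699062, the logged target
    # 'cephfs.cephfs.meta': fraction 1.0716338...e-06, bias 4.0
    print(1.0716338374073357e-06 * 4.0 * 300)  # ~0.00128596, the logged target

Every computed target is far below the pool's existing pg_num, so each pool stays at its current value and no PG resizing is triggered.
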
Feb 25 07:20:42 np0005629333 nova_compute[244014]: 2026-02-25 12:20:42.813 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Acquiring lock "437f3047-f865-44f7-b16e-cddab230e873" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:42 np0005629333 nova_compute[244014]: 2026-02-25 12:20:42.814 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:42 np0005629333 nova_compute[244014]: 2026-02-25 12:20:42.857 244018 DEBUG nova.compute.manager [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:20:42 np0005629333 nova_compute[244014]: 2026-02-25 12:20:42.950 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:42 np0005629333 nova_compute[244014]: 2026-02-25 12:20:42.950 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:42 np0005629333 nova_compute[244014]: 2026-02-25 12:20:42.957 244018 DEBUG nova.virt.hardware [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:20:42 np0005629333 nova_compute[244014]: 2026-02-25 12:20:42.957 244018 INFO nova.compute.claims [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:20:43 np0005629333 nova_compute[244014]: 2026-02-25 12:20:43.184 244018 DEBUG oslo_concurrency.processutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:20:43 np0005629333 nova_compute[244014]: 2026-02-25 12:20:43.463 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 153 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Feb 25 07:20:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:20:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3670045618' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:20:43 np0005629333 nova_compute[244014]: 2026-02-25 12:20:43.718 244018 DEBUG oslo_concurrency.processutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:20:43 np0005629333 nova_compute[244014]: 2026-02-25 12:20:43.724 244018 DEBUG nova.compute.provider_tree [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:20:44 np0005629333 nova_compute[244014]: 2026-02-25 12:20:44.109 244018 DEBUG nova.scheduler.client.report [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:20:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:20:44 np0005629333 nova_compute[244014]: 2026-02-25 12:20:44.757 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:44 np0005629333 nova_compute[244014]: 2026-02-25 12:20:44.758 244018 DEBUG nova.compute.manager [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:20:44 np0005629333 nova_compute[244014]: 2026-02-25 12:20:44.826 244018 DEBUG nova.compute.manager [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:20:44 np0005629333 nova_compute[244014]: 2026-02-25 12:20:44.826 244018 DEBUG nova.network.neutron [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:20:44 np0005629333 nova_compute[244014]: 2026-02-25 12:20:44.856 244018 INFO nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:20:44 np0005629333 nova_compute[244014]: 2026-02-25 12:20:44.883 244018 DEBUG nova.compute.manager [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:20:45 np0005629333 nova_compute[244014]: 2026-02-25 12:20:44.999 244018 DEBUG nova.compute.manager [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:20:45 np0005629333 nova_compute[244014]: 2026-02-25 12:20:45.001 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:20:45 np0005629333 nova_compute[244014]: 2026-02-25 12:20:45.002 244018 INFO nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Creating image(s)#033[00m
Feb 25 07:20:45 np0005629333 nova_compute[244014]: 2026-02-25 12:20:45.035 244018 DEBUG nova.storage.rbd_utils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] rbd image 437f3047-f865-44f7-b16e-cddab230e873_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:20:45 np0005629333 nova_compute[244014]: 2026-02-25 12:20:45.071 244018 DEBUG nova.storage.rbd_utils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] rbd image 437f3047-f865-44f7-b16e-cddab230e873_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:20:45 np0005629333 nova_compute[244014]: 2026-02-25 12:20:45.107 244018 DEBUG nova.storage.rbd_utils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] rbd image 437f3047-f865-44f7-b16e-cddab230e873_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:20:45 np0005629333 nova_compute[244014]: 2026-02-25 12:20:45.112 244018 DEBUG oslo_concurrency.processutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:20:45 np0005629333 nova_compute[244014]: 2026-02-25 12:20:45.143 244018 DEBUG nova.policy [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10c5d76df8d046e8858030b12b7affa1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '28df138f3ff744a09c7e79a179881f21', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:20:45 np0005629333 nova_compute[244014]: 2026-02-25 12:20:45.189 244018 DEBUG oslo_concurrency.processutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
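
The qemu-img probe above runs inside oslo.concurrency's resource-limit wrapper: the logged argv starts with /usr/bin/python3 -m oslo_concurrency.prlimit, which applies the address-space (--as=1073741824) and CPU (--cpu=30) rlimits before exec'ing qemu-img, so a malformed image cannot pin the compute host. The producing side looks roughly like this sketch:

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1024 * 1024 * 1024,  # --as=1073741824
        cpu_time=30)                       # --cpu=30

    # processutils re-executes the command under `python -m
    # oslo_concurrency.prlimit`, yielding the same argv shape as the log.
    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=limits)
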
Feb 25 07:20:45 np0005629333 nova_compute[244014]: 2026-02-25 12:20:45.190 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:45 np0005629333 nova_compute[244014]: 2026-02-25 12:20:45.191 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:45 np0005629333 nova_compute[244014]: 2026-02-25 12:20:45.192 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:45 np0005629333 nova_compute[244014]: 2026-02-25 12:20:45.226 244018 DEBUG nova.storage.rbd_utils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] rbd image 437f3047-f865-44f7-b16e-cddab230e873_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:20:45 np0005629333 nova_compute[244014]: 2026-02-25 12:20:45.231 244018 DEBUG oslo_concurrency.processutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 437f3047-f865-44f7-b16e-cddab230e873_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:20:45 np0005629333 nova_compute[244014]: 2026-02-25 12:20:45.550 244018 DEBUG oslo_concurrency.processutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 437f3047-f865-44f7-b16e-cddab230e873_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:20:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 153 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 852 B/s wr, 5 op/s
Feb 25 07:20:45 np0005629333 nova_compute[244014]: 2026-02-25 12:20:45.634 244018 DEBUG nova.storage.rbd_utils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] resizing rbd image 437f3047-f865-44f7-b16e-cddab230e873_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
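
Spawn with the RBD backend happens in the two steps visible above: the cached base image is imported into the vms pool with the rbd CLI, then the new disk is grown to the flavor's 1 GiB root size. The resize goes through the python rbd bindings; roughly, under the same --id/--conf credentials:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        with rbd.Image(ioctx, '437f3047-f865-44f7-b16e-cddab230e873_disk') as image:
            image.resize(1073741824)  # bytes, the same figure as the log line above
        ioctx.close()
    finally:
        cluster.shutdown()
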
Feb 25 07:20:45 np0005629333 nova_compute[244014]: 2026-02-25 12:20:45.722 244018 DEBUG nova.objects.instance [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lazy-loading 'migration_context' on Instance uuid 437f3047-f865-44f7-b16e-cddab230e873 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:20:45 np0005629333 nova_compute[244014]: 2026-02-25 12:20:45.826 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:20:45 np0005629333 nova_compute[244014]: 2026-02-25 12:20:45.826 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Ensure instance console log exists: /var/lib/nova/instances/437f3047-f865-44f7-b16e-cddab230e873/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:20:45 np0005629333 nova_compute[244014]: 2026-02-25 12:20:45.827 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:45 np0005629333 nova_compute[244014]: 2026-02-25 12:20:45.827 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:45 np0005629333 nova_compute[244014]: 2026-02-25 12:20:45.827 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:46 np0005629333 nova_compute[244014]: 2026-02-25 12:20:46.023 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022031.0221703, 89488b9f-7c53-4e00-ad62-837e33a76dae => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:20:46 np0005629333 nova_compute[244014]: 2026-02-25 12:20:46.024 244018 INFO nova.compute.manager [-] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:20:46 np0005629333 nova_compute[244014]: 2026-02-25 12:20:46.062 244018 DEBUG nova.compute.manager [None req-0c6131d7-981f-4b98-856d-1ce5d5939626 - - - - - -] [instance: 89488b9f-7c53-4e00-ad62-837e33a76dae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:20:46 np0005629333 nova_compute[244014]: 2026-02-25 12:20:46.074 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:46 np0005629333 nova_compute[244014]: 2026-02-25 12:20:46.304 244018 DEBUG nova.network.neutron [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Successfully created port: 6c0083e1-a97b-4de8-aa7b-14217c45376f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:20:46 np0005629333 nova_compute[244014]: 2026-02-25 12:20:46.845 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:46 np0005629333 nova_compute[244014]: 2026-02-25 12:20:46.846 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:46 np0005629333 nova_compute[244014]: 2026-02-25 12:20:46.872 244018 DEBUG nova.compute.manager [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:20:46 np0005629333 nova_compute[244014]: 2026-02-25 12:20:46.966 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:46 np0005629333 nova_compute[244014]: 2026-02-25 12:20:46.967 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:46 np0005629333 nova_compute[244014]: 2026-02-25 12:20:46.978 244018 DEBUG nova.virt.hardware [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:20:46 np0005629333 nova_compute[244014]: 2026-02-25 12:20:46.978 244018 INFO nova.compute.claims [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:20:47 np0005629333 nova_compute[244014]: 2026-02-25 12:20:47.081 244018 DEBUG nova.network.neutron [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Successfully updated port: 6c0083e1-a97b-4de8-aa7b-14217c45376f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:20:47 np0005629333 nova_compute[244014]: 2026-02-25 12:20:47.131 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Acquiring lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:20:47 np0005629333 nova_compute[244014]: 2026-02-25 12:20:47.132 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Acquired lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:20:47 np0005629333 nova_compute[244014]: 2026-02-25 12:20:47.132 244018 DEBUG nova.network.neutron [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:20:47 np0005629333 nova_compute[244014]: 2026-02-25 12:20:47.244 244018 DEBUG oslo_concurrency.processutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:20:47 np0005629333 nova_compute[244014]: 2026-02-25 12:20:47.312 244018 DEBUG nova.compute.manager [req-f2a2c98c-1d63-4e52-a50b-25f6962acd05 req-c9c75db7-512e-4c4d-b32b-afa7e872330d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Received event network-changed-6c0083e1-a97b-4de8-aa7b-14217c45376f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:47 np0005629333 nova_compute[244014]: 2026-02-25 12:20:47.313 244018 DEBUG nova.compute.manager [req-f2a2c98c-1d63-4e52-a50b-25f6962acd05 req-c9c75db7-512e-4c4d-b32b-afa7e872330d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Refreshing instance network info cache due to event network-changed-6c0083e1-a97b-4de8-aa7b-14217c45376f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:20:47 np0005629333 nova_compute[244014]: 2026-02-25 12:20:47.313 244018 DEBUG oslo_concurrency.lockutils [req-f2a2c98c-1d63-4e52-a50b-25f6962acd05 req-c9c75db7-512e-4c4d-b32b-afa7e872330d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:20:47 np0005629333 nova_compute[244014]: 2026-02-25 12:20:47.357 244018 DEBUG nova.network.neutron [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:20:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:20:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4166039607' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:20:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:20:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4166039607' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 07:20:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 200 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Feb 25 07:20:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:20:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2080038316' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:20:47 np0005629333 nova_compute[244014]: 2026-02-25 12:20:47.770 244018 DEBUG oslo_concurrency.processutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:20:47 np0005629333 nova_compute[244014]: 2026-02-25 12:20:47.777 244018 DEBUG nova.compute.provider_tree [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:20:47 np0005629333 nova_compute[244014]: 2026-02-25 12:20:47.817 244018 DEBUG nova.scheduler.client.report [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:20:47 np0005629333 nova_compute[244014]: 2026-02-25 12:20:47.852 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:47 np0005629333 nova_compute[244014]: 2026-02-25 12:20:47.853 244018 DEBUG nova.compute.manager [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:20:47 np0005629333 nova_compute[244014]: 2026-02-25 12:20:47.929 244018 DEBUG nova.compute.manager [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:20:47 np0005629333 nova_compute[244014]: 2026-02-25 12:20:47.929 244018 DEBUG nova.network.neutron [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:20:47 np0005629333 nova_compute[244014]: 2026-02-25 12:20:47.984 244018 INFO nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.005 244018 DEBUG nova.compute.manager [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.135 244018 DEBUG nova.compute.manager [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.137 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.138 244018 INFO nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Creating image(s)#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.172 244018 DEBUG nova.storage.rbd_utils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 51d1d661-89db-4958-a2f4-c299ee997cde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.208 244018 DEBUG nova.storage.rbd_utils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 51d1d661-89db-4958-a2f4-c299ee997cde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.245 244018 DEBUG nova.storage.rbd_utils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 51d1d661-89db-4958-a2f4-c299ee997cde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.251 244018 DEBUG oslo_concurrency.processutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.326 244018 DEBUG oslo_concurrency.processutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.327 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.328 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.329 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.362 244018 DEBUG nova.storage.rbd_utils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 51d1d661-89db-4958-a2f4-c299ee997cde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.367 244018 DEBUG oslo_concurrency.processutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 51d1d661-89db-4958-a2f4-c299ee997cde_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.391 244018 DEBUG nova.policy [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea407839a07d46608b6348caf676d12d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6a771ad0ce454d809d66825f69248fa7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.465 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.601 244018 DEBUG oslo_concurrency.processutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 51d1d661-89db-4958-a2f4-c299ee997cde_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.234s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.698 244018 DEBUG nova.storage.rbd_utils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] resizing rbd image 51d1d661-89db-4958-a2f4-c299ee997cde_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.812 244018 DEBUG nova.objects.instance [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'migration_context' on Instance uuid 51d1d661-89db-4958-a2f4-c299ee997cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.833 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.833 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Ensure instance console log exists: /var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.834 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.835 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.835 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.880 244018 DEBUG nova.network.neutron [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Updating instance_info_cache with network_info: [{"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.932 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Releasing lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.933 244018 DEBUG nova.compute.manager [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Instance network_info: |[{"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.934 244018 DEBUG oslo_concurrency.lockutils [req-f2a2c98c-1d63-4e52-a50b-25f6962acd05 req-c9c75db7-512e-4c4d-b32b-afa7e872330d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.935 244018 DEBUG nova.network.neutron [req-f2a2c98c-1d63-4e52-a50b-25f6962acd05 req-c9c75db7-512e-4c4d-b32b-afa7e872330d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Refreshing network info cache for port 6c0083e1-a97b-4de8-aa7b-14217c45376f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.940 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Start _get_guest_xml network_info=[{"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.947 244018 WARNING nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.953 244018 DEBUG nova.virt.libvirt.host [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.953 244018 DEBUG nova.virt.libvirt.host [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.958 244018 DEBUG nova.virt.libvirt.host [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.958 244018 DEBUG nova.virt.libvirt.host [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.959 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.959 244018 DEBUG nova.virt.hardware [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.960 244018 DEBUG nova.virt.hardware [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.960 244018 DEBUG nova.virt.hardware [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.961 244018 DEBUG nova.virt.hardware [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.961 244018 DEBUG nova.virt.hardware [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.962 244018 DEBUG nova.virt.hardware [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.962 244018 DEBUG nova.virt.hardware [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.962 244018 DEBUG nova.virt.hardware [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.963 244018 DEBUG nova.virt.hardware [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.963 244018 DEBUG nova.virt.hardware [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.964 244018 DEBUG nova.virt.hardware [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:20:48 np0005629333 nova_compute[244014]: 2026-02-25 12:20:48.968 244018 DEBUG oslo_concurrency.processutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:20:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:20:49 np0005629333 nova_compute[244014]: 2026-02-25 12:20:49.317 244018 DEBUG nova.network.neutron [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Successfully created port: 433e6f28-313e-4fe8-b8da-eacc8a0332c8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:20:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:20:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/985391114' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:20:49 np0005629333 nova_compute[244014]: 2026-02-25 12:20:49.609 244018 DEBUG oslo_concurrency.processutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:20:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 200 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:20:49 np0005629333 nova_compute[244014]: 2026-02-25 12:20:49.641 244018 DEBUG nova.storage.rbd_utils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] rbd image 437f3047-f865-44f7-b16e-cddab230e873_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:20:49 np0005629333 nova_compute[244014]: 2026-02-25 12:20:49.646 244018 DEBUG oslo_concurrency.processutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:20:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:20:50 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/671786509' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.187 244018 DEBUG oslo_concurrency.processutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.189 244018 DEBUG nova.virt.libvirt.vif [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:20:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1111859970',display_name='tempest-AttachInterfacesUnderV243Test-server-1111859970',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1111859970',id=26,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXFN7zC5myw9D/AepfFmxAXWRK8TzW2Ph47f455gtFzF5f9ZUF8tii8j35Vr/zY8abYSaTZMXBunxHe79g4nB5q7Pdu+4WcSGnU0sarcpQHiHET7AGRM6ljsAzRNmAesA==',key_name='tempest-keypair-1280464142',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28df138f3ff744a09c7e79a179881f21',ramdisk_id='',reservation_id='r-t6tk786a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1754830019',owner_user_name='tempest-AttachInterfacesUnderV243Test-1754830019-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:20:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='10c5d76df8d046e8858030b12b7affa1',uuid=437f3047-f865-44f7-b16e-cddab230e873,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.190 244018 DEBUG nova.network.os_vif_util [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Converting VIF {"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.192 244018 DEBUG nova.network.os_vif_util [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:e1:9a,bridge_name='br-int',has_traffic_filtering=True,id=6c0083e1-a97b-4de8-aa7b-14217c45376f,network=Network(08d2df9f-b68a-40d4-a888-4f15fc0b5403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c0083e1-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.193 244018 DEBUG nova.objects.instance [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lazy-loading 'pci_devices' on Instance uuid 437f3047-f865-44f7-b16e-cddab230e873 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.215 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:20:50 np0005629333 nova_compute[244014]:  <uuid>437f3047-f865-44f7-b16e-cddab230e873</uuid>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:  <name>instance-0000001a</name>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <nova:name>tempest-AttachInterfacesUnderV243Test-server-1111859970</nova:name>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:20:48</nova:creationTime>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:20:50 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:        <nova:user uuid="10c5d76df8d046e8858030b12b7affa1">tempest-AttachInterfacesUnderV243Test-1754830019-project-member</nova:user>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:        <nova:project uuid="28df138f3ff744a09c7e79a179881f21">tempest-AttachInterfacesUnderV243Test-1754830019</nova:project>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:        <nova:port uuid="6c0083e1-a97b-4de8-aa7b-14217c45376f">
Feb 25 07:20:50 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <entry name="serial">437f3047-f865-44f7-b16e-cddab230e873</entry>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <entry name="uuid">437f3047-f865-44f7-b16e-cddab230e873</entry>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/437f3047-f865-44f7-b16e-cddab230e873_disk">
Feb 25 07:20:50 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:20:50 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/437f3047-f865-44f7-b16e-cddab230e873_disk.config">
Feb 25 07:20:50 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:20:50 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:b5:e1:9a"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <target dev="tap6c0083e1-a9"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/437f3047-f865-44f7-b16e-cddab230e873/console.log" append="off"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:20:50 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:20:50 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:20:50 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:20:50 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.216 244018 DEBUG nova.compute.manager [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Preparing to wait for external event network-vif-plugged-6c0083e1-a97b-4de8-aa7b-14217c45376f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.216 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Acquiring lock "437f3047-f865-44f7-b16e-cddab230e873-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.217 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.217 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.217 244018 DEBUG nova.virt.libvirt.vif [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:20:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1111859970',display_name='tempest-AttachInterfacesUnderV243Test-server-1111859970',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1111859970',id=26,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXFN7zC5myw9D/AepfFmxAXWRK8TzW2Ph47f455gtFzF5f9ZUF8tii8j35Vr/zY8abYSaTZMXBunxHe79g4nB5q7Pdu+4WcSGnU0sarcpQHiHET7AGRM6ljsAzRNmAesA==',key_name='tempest-keypair-1280464142',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28df138f3ff744a09c7e79a179881f21',ramdisk_id='',reservation_id='r-t6tk786a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1754830019',owner_user_name='tempest-AttachInterfacesUnderV243Test-1754830019-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:20:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='10c5d76df8d046e8858030b12b7affa1',uuid=437f3047-f865-44f7-b16e-cddab230e873,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.218 244018 DEBUG nova.network.os_vif_util [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Converting VIF {"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.218 244018 DEBUG nova.network.os_vif_util [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:e1:9a,bridge_name='br-int',has_traffic_filtering=True,id=6c0083e1-a97b-4de8-aa7b-14217c45376f,network=Network(08d2df9f-b68a-40d4-a888-4f15fc0b5403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c0083e1-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.218 244018 DEBUG os_vif [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:e1:9a,bridge_name='br-int',has_traffic_filtering=True,id=6c0083e1-a97b-4de8-aa7b-14217c45376f,network=Network(08d2df9f-b68a-40d4-a888-4f15fc0b5403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c0083e1-a9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.219 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.219 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.219 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.222 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.222 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c0083e1-a9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.222 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6c0083e1-a9, col_values=(('external_ids', {'iface-id': '6c0083e1-a97b-4de8-aa7b-14217c45376f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b5:e1:9a', 'vm-uuid': '437f3047-f865-44f7-b16e-cddab230e873'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.223 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:50 np0005629333 NetworkManager[49836]: <info>  [1772022050.2253] manager: (tap6c0083e1-a9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.226 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.231 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.232 244018 INFO os_vif [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:e1:9a,bridge_name='br-int',has_traffic_filtering=True,id=6c0083e1-a97b-4de8-aa7b-14217c45376f,network=Network(08d2df9f-b68a-40d4-a888-4f15fc0b5403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c0083e1-a9')#033[00m
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.315 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.316 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.316 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] No VIF found with MAC fa:16:3e:b5:e1:9a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.317 244018 INFO nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Using config drive#033[00m
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.346 244018 DEBUG nova.storage.rbd_utils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] rbd image 437f3047-f865-44f7-b16e-cddab230e873_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:20:50 np0005629333 nova_compute[244014]: 2026-02-25 12:20:50.994 244018 DEBUG nova.network.neutron [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Successfully updated port: 433e6f28-313e-4fe8-b8da-eacc8a0332c8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:20:51 np0005629333 nova_compute[244014]: 2026-02-25 12:20:51.013 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:20:51 np0005629333 nova_compute[244014]: 2026-02-25 12:20:51.014 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquired lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:20:51 np0005629333 nova_compute[244014]: 2026-02-25 12:20:51.014 244018 DEBUG nova.network.neutron [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:20:51 np0005629333 nova_compute[244014]: 2026-02-25 12:20:51.220 244018 INFO nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Creating config drive at /var/lib/nova/instances/437f3047-f865-44f7-b16e-cddab230e873/disk.config#033[00m
Feb 25 07:20:51 np0005629333 nova_compute[244014]: 2026-02-25 12:20:51.228 244018 DEBUG oslo_concurrency.processutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/437f3047-f865-44f7-b16e-cddab230e873/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpu1gbz944 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:20:51 np0005629333 nova_compute[244014]: 2026-02-25 12:20:51.359 244018 DEBUG oslo_concurrency.processutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/437f3047-f865-44f7-b16e-cddab230e873/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpu1gbz944" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:20:51 np0005629333 nova_compute[244014]: 2026-02-25 12:20:51.396 244018 DEBUG nova.storage.rbd_utils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] rbd image 437f3047-f865-44f7-b16e-cddab230e873_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:20:51 np0005629333 nova_compute[244014]: 2026-02-25 12:20:51.400 244018 DEBUG oslo_concurrency.processutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/437f3047-f865-44f7-b16e-cddab230e873/disk.config 437f3047-f865-44f7-b16e-cddab230e873_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:20:51 np0005629333 nova_compute[244014]: 2026-02-25 12:20:51.446 244018 DEBUG nova.compute.manager [req-005ef240-db21-4b8d-a633-dc7a6b8aea52 req-3cc37e52-05c9-40c3-9107-b0f13bb070b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-changed-433e6f28-313e-4fe8-b8da-eacc8a0332c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:51 np0005629333 nova_compute[244014]: 2026-02-25 12:20:51.447 244018 DEBUG nova.compute.manager [req-005ef240-db21-4b8d-a633-dc7a6b8aea52 req-3cc37e52-05c9-40c3-9107-b0f13bb070b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Refreshing instance network info cache due to event network-changed-433e6f28-313e-4fe8-b8da-eacc8a0332c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:20:51 np0005629333 nova_compute[244014]: 2026-02-25 12:20:51.447 244018 DEBUG oslo_concurrency.lockutils [req-005ef240-db21-4b8d-a633-dc7a6b8aea52 req-3cc37e52-05c9-40c3-9107-b0f13bb070b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:20:51 np0005629333 nova_compute[244014]: 2026-02-25 12:20:51.472 244018 DEBUG nova.network.neutron [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:20:51 np0005629333 nova_compute[244014]: 2026-02-25 12:20:51.561 244018 DEBUG oslo_concurrency.processutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/437f3047-f865-44f7-b16e-cddab230e873/disk.config 437f3047-f865-44f7-b16e-cddab230e873_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:20:51 np0005629333 nova_compute[244014]: 2026-02-25 12:20:51.562 244018 INFO nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Deleting local config drive /var/lib/nova/instances/437f3047-f865-44f7-b16e-cddab230e873/disk.config because it was imported into RBD.#033[00m
Feb 25 07:20:51 np0005629333 nova_compute[244014]: 2026-02-25 12:20:51.588 244018 DEBUG nova.network.neutron [req-f2a2c98c-1d63-4e52-a50b-25f6962acd05 req-c9c75db7-512e-4c4d-b32b-afa7e872330d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Updated VIF entry in instance network info cache for port 6c0083e1-a97b-4de8-aa7b-14217c45376f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:20:51 np0005629333 nova_compute[244014]: 2026-02-25 12:20:51.588 244018 DEBUG nova.network.neutron [req-f2a2c98c-1d63-4e52-a50b-25f6962acd05 req-c9c75db7-512e-4c4d-b32b-afa7e872330d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Updating instance_info_cache with network_info: [{"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:20:51 np0005629333 nova_compute[244014]: 2026-02-25 12:20:51.603 244018 DEBUG oslo_concurrency.lockutils [req-f2a2c98c-1d63-4e52-a50b-25f6962acd05 req-c9c75db7-512e-4c4d-b32b-afa7e872330d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:20:51 np0005629333 kernel: tap6c0083e1-a9: entered promiscuous mode
Feb 25 07:20:51 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:51Z|00166|binding|INFO|Claiming lport 6c0083e1-a97b-4de8-aa7b-14217c45376f for this chassis.
Feb 25 07:20:51 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:51Z|00167|binding|INFO|6c0083e1-a97b-4de8-aa7b-14217c45376f: Claiming fa:16:3e:b5:e1:9a 10.100.0.6
Feb 25 07:20:51 np0005629333 NetworkManager[49836]: <info>  [1772022051.6384] manager: (tap6c0083e1-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/79)
Feb 25 07:20:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 212 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.1 MiB/s wr, 27 op/s
Feb 25 07:20:51 np0005629333 nova_compute[244014]: 2026-02-25 12:20:51.638 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:51 np0005629333 nova_compute[244014]: 2026-02-25 12:20:51.642 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.657 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:e1:9a 10.100.0.6'], port_security=['fa:16:3e:b5:e1:9a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '437f3047-f865-44f7-b16e-cddab230e873', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08d2df9f-b68a-40d4-a888-4f15fc0b5403', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28df138f3ff744a09c7e79a179881f21', 'neutron:revision_number': '2', 'neutron:security_group_ids': '142094f4-7b01-4079-a840-811d10ccd4e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fff7b783-3fdd-403e-b061-0aa11a082612, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6c0083e1-a97b-4de8-aa7b-14217c45376f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.659 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6c0083e1-a97b-4de8-aa7b-14217c45376f in datapath 08d2df9f-b68a-40d4-a888-4f15fc0b5403 bound to our chassis#033[00m
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.661 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08d2df9f-b68a-40d4-a888-4f15fc0b5403#033[00m
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.677 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1fa45755-9a80-42e4-9a69-dc1a59df5196]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.678 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap08d2df9f-b1 in ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.681 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap08d2df9f-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.681 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[57bacf76-813c-436e-9372-907e9885236a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.682 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[75d2137d-c872-4bcd-b56e-b632d948874c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:51 np0005629333 systemd-machined[210048]: New machine qemu-30-instance-0000001a.
Feb 25 07:20:51 np0005629333 nova_compute[244014]: 2026-02-25 12:20:51.691 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:51 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:51Z|00168|binding|INFO|Setting lport 6c0083e1-a97b-4de8-aa7b-14217c45376f ovn-installed in OVS
Feb 25 07:20:51 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:51Z|00169|binding|INFO|Setting lport 6c0083e1-a97b-4de8-aa7b-14217c45376f up in Southbound
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.699 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[24809956-4faa-4a3a-ab5a-da128b221c68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:51 np0005629333 systemd[1]: Started Virtual Machine qemu-30-instance-0000001a.
Feb 25 07:20:51 np0005629333 nova_compute[244014]: 2026-02-25 12:20:51.698 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:51 np0005629333 systemd-udevd[268505]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:20:51 np0005629333 NetworkManager[49836]: <info>  [1772022051.7256] device (tap6c0083e1-a9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:20:51 np0005629333 NetworkManager[49836]: <info>  [1772022051.7263] device (tap6c0083e1-a9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:20:51 np0005629333 podman[268455]: 2026-02-25 12:20:51.73231433 +0000 UTC m=+0.123363721 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.726 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[735f8149-4195-48e2-835d-89dbfaf63e7d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.752 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[dd190554-fba8-41f9-bfd7-f363be241874]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:51 np0005629333 systemd-udevd[268510]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:20:51 np0005629333 NetworkManager[49836]: <info>  [1772022051.7590] manager: (tap08d2df9f-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/80)
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.758 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7a8b9f36-9256-4b1b-9385-79c0ca44d696]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:51 np0005629333 podman[268458]: 2026-02-25 12:20:51.760325737 +0000 UTC m=+0.148685412 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.build-date=20260223)
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.782 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f7faa196-e04d-409e-b0df-31125fe86816]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.785 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[bd84d73c-e583-44ba-9536-ab64fb790239]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:51 np0005629333 NetworkManager[49836]: <info>  [1772022051.8036] device (tap08d2df9f-b0): carrier: link connected
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.806 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[de723bb2-3343-4f85-b8f8-5e464d1d8a7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.822 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b0382705-e3be-4f16-b3c1-bb6ea4313399]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08d2df9f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:1a:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402121, 'reachable_time': 23852, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268544, 'error': None, 'target': 'ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.831 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[96224060-29ed-4d2d-b9e7-2608867bd563]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef4:1ade'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402121, 'tstamp': 402121}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268545, 'error': None, 'target': 'ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.845 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0d84cfe1-046a-4bda-8274-97cd45676079]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08d2df9f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:1a:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402121, 'reachable_time': 23852, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268546, 'error': None, 'target': 'ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.873 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8a283753-e238-4608-bf45-74164985f868]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.929 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8dab47a9-93b6-4e91-934d-24a85d0b0d34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.931 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08d2df9f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.932 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.933 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08d2df9f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:51 np0005629333 kernel: tap08d2df9f-b0: entered promiscuous mode
Feb 25 07:20:51 np0005629333 NetworkManager[49836]: <info>  [1772022051.9399] manager: (tap08d2df9f-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.940 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08d2df9f-b0, col_values=(('external_ids', {'iface-id': '6d2f2d5b-e692-4371-bdfb-0788a13e6e8e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:51 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:51Z|00170|binding|INFO|Releasing lport 6d2f2d5b-e692-4371-bdfb-0788a13e6e8e from this chassis (sb_readonly=0)
Feb 25 07:20:51 np0005629333 nova_compute[244014]: 2026-02-25 12:20:51.935 244018 DEBUG nova.compute.manager [req-519d2744-ed23-427e-9a69-50770808c1c2 req-b4821d06-4c15-44a5-a61b-e8cd12cb10fa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Received event network-vif-plugged-6c0083e1-a97b-4de8-aa7b-14217c45376f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:51 np0005629333 nova_compute[244014]: 2026-02-25 12:20:51.942 244018 DEBUG oslo_concurrency.lockutils [req-519d2744-ed23-427e-9a69-50770808c1c2 req-b4821d06-4c15-44a5-a61b-e8cd12cb10fa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "437f3047-f865-44f7-b16e-cddab230e873-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:51 np0005629333 nova_compute[244014]: 2026-02-25 12:20:51.943 244018 DEBUG oslo_concurrency.lockutils [req-519d2744-ed23-427e-9a69-50770808c1c2 req-b4821d06-4c15-44a5-a61b-e8cd12cb10fa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:51 np0005629333 nova_compute[244014]: 2026-02-25 12:20:51.943 244018 DEBUG oslo_concurrency.lockutils [req-519d2744-ed23-427e-9a69-50770808c1c2 req-b4821d06-4c15-44a5-a61b-e8cd12cb10fa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:51 np0005629333 nova_compute[244014]: 2026-02-25 12:20:51.943 244018 DEBUG nova.compute.manager [req-519d2744-ed23-427e-9a69-50770808c1c2 req-b4821d06-4c15-44a5-a61b-e8cd12cb10fa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Processing event network-vif-plugged-6c0083e1-a97b-4de8-aa7b-14217c45376f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:20:51 np0005629333 nova_compute[244014]: 2026-02-25 12:20:51.944 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.956 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/08d2df9f-b68a-40d4-a888-4f15fc0b5403.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/08d2df9f-b68a-40d4-a888-4f15fc0b5403.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.958 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[499d9f46-e191-44b2-9fbf-dd332f9c8122]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.959 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-08d2df9f-b68a-40d4-a888-4f15fc0b5403
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/08d2df9f-b68a-40d4-a888-4f15fc0b5403.pid.haproxy
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 08d2df9f-b68a-40d4-a888-4f15fc0b5403
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:20:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:51.960 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403', 'env', 'PROCESS_TAG=haproxy-08d2df9f-b68a-40d4-a888-4f15fc0b5403', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/08d2df9f-b68a-40d4-a888-4f15fc0b5403.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
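The lines above show the metadata-proxy bring-up for network 08d2df9f-b68a-40d4-a888-4f15fc0b5403: check the pidfile (a missing file simply means the proxy is not running yet, hence the Errno 2 DEBUG line), render the haproxy config, then exec haproxy inside the ovnmeta- namespace via rootwrap. A minimal sketch of the pidfile check and command construction, with a hypothetical helper name; the paths and UUID are taken from the log lines, and in the log the argv is additionally wrapped in sudo, neutron-rootwrap, and an env PROCESS_TAG=... prefix:

def get_pid_from_file(path):
    # Mirrors the tolerant read logged by neutron.agent.linux.utils:
    # a missing pidfile on first spawn is expected, not an error.
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return None

network_id = "08d2df9f-b68a-40d4-a888-4f15fc0b5403"
pidfile = f"/var/lib/neutron/external/pids/{network_id}.pid.haproxy"

if get_pid_from_file(pidfile) is None:
    cmd = ["ip", "netns", "exec", f"ovnmeta-{network_id}",
           "haproxy", "-f",
           f"/var/lib/neutron/ovn-metadata-proxy/{network_id}.conf"]
    print(" ".join(cmd))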
Feb 25 07:20:52 np0005629333 podman[268578]: 2026-02-25 12:20:52.364538968 +0000 UTC m=+0.071420173 container create 61c9207e39c8e0bfbde648c37934d17a08b357dc2b9565a55f5bd1882fd7d3b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223)
Feb 25 07:20:52 np0005629333 systemd[1]: Started libpod-conmon-61c9207e39c8e0bfbde648c37934d17a08b357dc2b9565a55f5bd1882fd7d3b7.scope.
Feb 25 07:20:52 np0005629333 podman[268578]: 2026-02-25 12:20:52.327665279 +0000 UTC m=+0.034546534 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:20:52 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:20:52 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95e3b4792fbae073d4d0de8c0eb2f97b209d111264ecc91aaecd534959142ed9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:20:52 np0005629333 nova_compute[244014]: 2026-02-25 12:20:52.514 244018 DEBUG nova.compute.manager [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:20:52 np0005629333 podman[268578]: 2026-02-25 12:20:52.515605496 +0000 UTC m=+0.222486751 container init 61c9207e39c8e0bfbde648c37934d17a08b357dc2b9565a55f5bd1882fd7d3b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 07:20:52 np0005629333 nova_compute[244014]: 2026-02-25 12:20:52.515 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022052.5135899, 437f3047-f865-44f7-b16e-cddab230e873 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:20:52 np0005629333 nova_compute[244014]: 2026-02-25 12:20:52.516 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 437f3047-f865-44f7-b16e-cddab230e873] VM Started (Lifecycle Event)#033[00m
Feb 25 07:20:52 np0005629333 nova_compute[244014]: 2026-02-25 12:20:52.520 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:20:52 np0005629333 podman[268578]: 2026-02-25 12:20:52.524941782 +0000 UTC m=+0.231822977 container start 61c9207e39c8e0bfbde648c37934d17a08b357dc2b9565a55f5bd1882fd7d3b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:20:52 np0005629333 nova_compute[244014]: 2026-02-25 12:20:52.525 244018 INFO nova.virt.libvirt.driver [-] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Instance spawned successfully.#033[00m
Feb 25 07:20:52 np0005629333 nova_compute[244014]: 2026-02-25 12:20:52.525 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:20:52 np0005629333 nova_compute[244014]: 2026-02-25 12:20:52.548 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:20:52 np0005629333 nova_compute[244014]: 2026-02-25 12:20:52.554 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:20:52 np0005629333 nova_compute[244014]: 2026-02-25 12:20:52.555 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:20:52 np0005629333 nova_compute[244014]: 2026-02-25 12:20:52.556 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:20:52 np0005629333 neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403[268630]: [NOTICE]   (268639) : New worker (268641) forked
Feb 25 07:20:52 np0005629333 neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403[268630]: [NOTICE]   (268639) : Loading success.
Feb 25 07:20:52 np0005629333 nova_compute[244014]: 2026-02-25 12:20:52.556 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:20:52 np0005629333 nova_compute[244014]: 2026-02-25 12:20:52.557 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:20:52 np0005629333 nova_compute[244014]: 2026-02-25 12:20:52.558 244018 DEBUG nova.virt.libvirt.driver [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:20:52 np0005629333 nova_compute[244014]: 2026-02-25 12:20:52.565 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:20:52 np0005629333 nova_compute[244014]: 2026-02-25 12:20:52.605 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 437f3047-f865-44f7-b16e-cddab230e873] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:20:52 np0005629333 nova_compute[244014]: 2026-02-25 12:20:52.606 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022052.5150912, 437f3047-f865-44f7-b16e-cddab230e873 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:20:52 np0005629333 nova_compute[244014]: 2026-02-25 12:20:52.606 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 437f3047-f865-44f7-b16e-cddab230e873] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:20:52 np0005629333 nova_compute[244014]: 2026-02-25 12:20:52.640 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:20:52 np0005629333 nova_compute[244014]: 2026-02-25 12:20:52.644 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022052.51974, 437f3047-f865-44f7-b16e-cddab230e873 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:20:52 np0005629333 nova_compute[244014]: 2026-02-25 12:20:52.644 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 437f3047-f865-44f7-b16e-cddab230e873] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:20:52 np0005629333 nova_compute[244014]: 2026-02-25 12:20:52.659 244018 INFO nova.compute.manager [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Took 7.66 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:20:52 np0005629333 nova_compute[244014]: 2026-02-25 12:20:52.659 244018 DEBUG nova.compute.manager [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:20:52 np0005629333 nova_compute[244014]: 2026-02-25 12:20:52.669 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:20:52 np0005629333 nova_compute[244014]: 2026-02-25 12:20:52.673 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:20:52 np0005629333 nova_compute[244014]: 2026-02-25 12:20:52.738 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 437f3047-f865-44f7-b16e-cddab230e873] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:20:52 np0005629333 nova_compute[244014]: 2026-02-25 12:20:52.812 244018 INFO nova.compute.manager [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Took 9.89 seconds to build instance.#033[00m
Feb 25 07:20:52 np0005629333 nova_compute[244014]: 2026-02-25 12:20:52.855 244018 DEBUG oslo_concurrency.lockutils [None req-8bff3dd4-58f7-49ea-a1e3-341e5f595af7 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
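Each of the Started/Paused/Resumed lifecycle events above triggers a power-state sync, and each sync ends with "During sync_power_state the instance has a pending task (spawning). Skip.": while a task is in flight, lifecycle-driven sync defers to the operation that owns the instance rather than racing it. A hedged sketch of that guard, simplified from the behavior in the log (DB power_state 0 is NOSTATE, VM power_state 1 is RUNNING):

def should_sync_power_state(task_state, db_power_state, vm_power_state):
    if task_state is not None:
        return False          # "has a pending task (spawning). Skip."
    return db_power_state != vm_power_state

assert should_sync_power_state("spawning", 0, 1) is False
assert should_sync_power_state(None, 0, 1) is True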
Feb 25 07:20:53 np0005629333 nova_compute[244014]: 2026-02-25 12:20:53.466 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:53 np0005629333 nova_compute[244014]: 2026-02-25 12:20:53.490 244018 DEBUG nova.network.neutron [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updating instance_info_cache with network_info: [{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:20:53 np0005629333 nova_compute[244014]: 2026-02-25 12:20:53.520 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Releasing lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:20:53 np0005629333 nova_compute[244014]: 2026-02-25 12:20:53.521 244018 DEBUG nova.compute.manager [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Instance network_info: |[{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
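The network_info blobs printed above are plain JSON, so the interesting fields (port id, fixed IPs, MTU) can be pulled out mechanically. A short sketch over a trimmed copy of the structure shown in the log:

import json

nw_info = json.loads("""[{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8",
  "network": {"meta": {"mtu": 1442},
              "subnets": [{"cidr": "10.100.0.0/28",
                           "ips": [{"address": "10.100.0.11", "type": "fixed"}]}]}}]""")
vif = nw_info[0]
ips = [ip["address"] for sn in vif["network"]["subnets"] for ip in sn["ips"]]
print(vif["id"], ips, vif["network"]["meta"]["mtu"])
# 433e6f28-313e-4fe8-b8da-eacc8a0332c8 ['10.100.0.11'] 1442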
Feb 25 07:20:53 np0005629333 nova_compute[244014]: 2026-02-25 12:20:53.522 244018 DEBUG oslo_concurrency.lockutils [req-005ef240-db21-4b8d-a633-dc7a6b8aea52 req-3cc37e52-05c9-40c3-9107-b0f13bb070b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:20:53 np0005629333 nova_compute[244014]: 2026-02-25 12:20:53.523 244018 DEBUG nova.network.neutron [req-005ef240-db21-4b8d-a633-dc7a6b8aea52 req-3cc37e52-05c9-40c3-9107-b0f13bb070b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Refreshing network info cache for port 433e6f28-313e-4fe8-b8da-eacc8a0332c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:20:53 np0005629333 nova_compute[244014]: 2026-02-25 12:20:53.528 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Start _get_guest_xml network_info=[{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:20:53 np0005629333 nova_compute[244014]: 2026-02-25 12:20:53.533 244018 WARNING nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:20:53 np0005629333 nova_compute[244014]: 2026-02-25 12:20:53.539 244018 DEBUG nova.virt.libvirt.host [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:20:53 np0005629333 nova_compute[244014]: 2026-02-25 12:20:53.540 244018 DEBUG nova.virt.libvirt.host [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:20:53 np0005629333 nova_compute[244014]: 2026-02-25 12:20:53.546 244018 DEBUG nova.virt.libvirt.host [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:20:53 np0005629333 nova_compute[244014]: 2026-02-25 12:20:53.547 244018 DEBUG nova.virt.libvirt.host [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:20:53 np0005629333 nova_compute[244014]: 2026-02-25 12:20:53.547 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:20:53 np0005629333 nova_compute[244014]: 2026-02-25 12:20:53.548 244018 DEBUG nova.virt.hardware [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:20:53 np0005629333 nova_compute[244014]: 2026-02-25 12:20:53.549 244018 DEBUG nova.virt.hardware [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:20:53 np0005629333 nova_compute[244014]: 2026-02-25 12:20:53.549 244018 DEBUG nova.virt.hardware [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:20:53 np0005629333 nova_compute[244014]: 2026-02-25 12:20:53.550 244018 DEBUG nova.virt.hardware [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:20:53 np0005629333 nova_compute[244014]: 2026-02-25 12:20:53.550 244018 DEBUG nova.virt.hardware [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:20:53 np0005629333 nova_compute[244014]: 2026-02-25 12:20:53.551 244018 DEBUG nova.virt.hardware [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:20:53 np0005629333 nova_compute[244014]: 2026-02-25 12:20:53.551 244018 DEBUG nova.virt.hardware [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:20:53 np0005629333 nova_compute[244014]: 2026-02-25 12:20:53.552 244018 DEBUG nova.virt.hardware [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:20:53 np0005629333 nova_compute[244014]: 2026-02-25 12:20:53.552 244018 DEBUG nova.virt.hardware [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:20:53 np0005629333 nova_compute[244014]: 2026-02-25 12:20:53.553 244018 DEBUG nova.virt.hardware [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:20:53 np0005629333 nova_compute[244014]: 2026-02-25 12:20:53.553 244018 DEBUG nova.virt.hardware [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
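The hardware.py lines above enumerate (sockets, cores, threads) combinations for the flavor's vCPU count, cap them by the 65536 limits, and sort by preference; with 1 vCPU the only candidate is 1:1:1. An illustrative divisor-based enumeration (not nova's exact algorithm) that reproduces the result in the log:

def possible_topologies(vcpus, max_each=65536):
    for sockets in range(1, min(vcpus, max_each) + 1):
        if vcpus % sockets:
            continue
        for cores in range(1, vcpus // sockets + 1):
            if (vcpus // sockets) % cores:
                continue
            threads = vcpus // (sockets * cores)
            if threads <= max_each:
                yield (sockets, cores, threads)

print(list(possible_topologies(1)))   # [(1, 1, 1)], as in the log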
Feb 25 07:20:53 np0005629333 nova_compute[244014]: 2026-02-25 12:20:53.559 244018 DEBUG oslo_concurrency.processutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:20:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 246 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 3.5 MiB/s wr, 59 op/s
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.039 244018 DEBUG nova.compute.manager [req-80cbe99f-69e6-496a-bd44-d4fb52eec6b1 req-a19140a8-6148-492b-93ac-9fec17c0e5c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Received event network-vif-plugged-6c0083e1-a97b-4de8-aa7b-14217c45376f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.040 244018 DEBUG oslo_concurrency.lockutils [req-80cbe99f-69e6-496a-bd44-d4fb52eec6b1 req-a19140a8-6148-492b-93ac-9fec17c0e5c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "437f3047-f865-44f7-b16e-cddab230e873-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.041 244018 DEBUG oslo_concurrency.lockutils [req-80cbe99f-69e6-496a-bd44-d4fb52eec6b1 req-a19140a8-6148-492b-93ac-9fec17c0e5c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.041 244018 DEBUG oslo_concurrency.lockutils [req-80cbe99f-69e6-496a-bd44-d4fb52eec6b1 req-a19140a8-6148-492b-93ac-9fec17c0e5c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.042 244018 DEBUG nova.compute.manager [req-80cbe99f-69e6-496a-bd44-d4fb52eec6b1 req-a19140a8-6148-492b-93ac-9fec17c0e5c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] No waiting events found dispatching network-vif-plugged-6c0083e1-a97b-4de8-aa7b-14217c45376f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.043 244018 WARNING nova.compute.manager [req-80cbe99f-69e6-496a-bd44-d4fb52eec6b1 req-a19140a8-6148-492b-93ac-9fec17c0e5c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Received unexpected event network-vif-plugged-6c0083e1-a97b-4de8-aa7b-14217c45376f for instance with vm_state active and task_state None.#033[00m
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4078310787' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.122 244018 DEBUG oslo_concurrency.processutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.155 244018 DEBUG nova.storage.rbd_utils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 51d1d661-89db-4958-a2f4-c299ee997cde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.161 244018 DEBUG oslo_concurrency.processutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4040702362' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.713 244018 DEBUG oslo_concurrency.processutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
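Both "ceph mon dump" invocations above are ordinary subprocesses whose JSON output tells nova which monitor addresses to place in the guest's RBD <host> elements (192.168.122.100:6789 in the XML below). A minimal reproduction of the call, assuming a reachable cluster and the client.openstack keyring referenced by /etc/ceph/ceph.conf:

import json, subprocess

out = subprocess.run(
    ["ceph", "mon", "dump", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True,
).stdout
mons = json.loads(out)["mons"]
print([m["name"] for m in mons])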
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.716 244018 DEBUG nova.virt.libvirt.vif [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:20:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.717 244018 DEBUG nova.network.os_vif_util [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.718 244018 DEBUG nova.network.os_vif_util [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:d5:f4,bridge_name='br-int',has_traffic_filtering=True,id=433e6f28-313e-4fe8-b8da-eacc8a0332c8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap433e6f28-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.720 244018 DEBUG nova.objects.instance [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 51d1d661-89db-4958-a2f4-c299ee997cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.918680) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022054918739, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2156, "num_deletes": 255, "total_data_size": 3235397, "memory_usage": 3285552, "flush_reason": "Manual Compaction"}
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022054932077, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3154273, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21014, "largest_seqno": 23169, "table_properties": {"data_size": 3144753, "index_size": 5886, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20752, "raw_average_key_size": 20, "raw_value_size": 3125183, "raw_average_value_size": 3097, "num_data_blocks": 263, "num_entries": 1009, "num_filter_entries": 1009, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772021869, "oldest_key_time": 1772021869, "file_creation_time": 1772022054, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 13491 microseconds, and 7069 cpu microseconds.
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.932191) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3154273 bytes OK
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.932245) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.934193) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.934215) EVENT_LOG_v1 {"time_micros": 1772022054934209, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.934239) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3226254, prev total WAL file size 3226254, number of live WAL files 2.
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.935259) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3080KB)], [50(7173KB)]
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022054935297, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10499852, "oldest_snapshot_seqno": -1}
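The rocksdb EVENT_LOG_v1 records above are single-line JSON after a fixed marker, so flush and compaction activity can be extracted from the journal with a few lines of parsing. A sketch, with the marker taken from the lines above:

import json

def parse_event_log(line):
    marker = "EVENT_LOG_v1 "
    i = line.find(marker)
    if i < 0:
        return None
    return json.loads(line[i + len(marker):])

ev = parse_event_log(
    'rocksdb: EVENT_LOG_v1 {"time_micros": 1772022054918739, '
    '"job": 25, "event": "flush_started", "num_memtables": 1}'
)
print(ev["event"], ev["job"])   # flush_started 25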
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.962 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:20:54 np0005629333 nova_compute[244014]:  <uuid>51d1d661-89db-4958-a2f4-c299ee997cde</uuid>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:  <name>instance-0000001b</name>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <nova:name>tempest-AttachInterfacesTestJSON-server-1386050966</nova:name>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:20:53</nova:creationTime>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:20:54 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:        <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:        <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:        <nova:port uuid="433e6f28-313e-4fe8-b8da-eacc8a0332c8">
Feb 25 07:20:54 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <entry name="serial">51d1d661-89db-4958-a2f4-c299ee997cde</entry>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <entry name="uuid">51d1d661-89db-4958-a2f4-c299ee997cde</entry>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/51d1d661-89db-4958-a2f4-c299ee997cde_disk">
Feb 25 07:20:54 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:20:54 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/51d1d661-89db-4958-a2f4-c299ee997cde_disk.config">
Feb 25 07:20:54 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:20:54 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:4a:d5:f4"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <target dev="tap433e6f28-31"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/console.log" append="off"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:20:54 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:20:54 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:20:54 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:20:54 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
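The domain XML dumped above is what the libvirt driver hands to libvirtd for instance 51d1d661-89db-4958-a2f4-c299ee997cde: a q35 machine with two RBD-backed devices (the root disk on virtio, the config drive as a SATA cdrom) and an OVS-backed virtio NIC. A minimal sketch, stdlib only, of pulling the disk layout back out of such a dump; the filename is hypothetical:

    # Sketch: list device type, target dev/bus and RBD image name from a
    # libvirt domain XML dump like the one above, saved to "domain.xml".
    import xml.etree.ElementTree as ET

    tree = ET.parse("domain.xml")
    for disk in tree.findall("./devices/disk"):
        target = disk.find("target")
        source = disk.find("source")
        name = source.get("name") if source is not None else "-"
        print(disk.get("device"), target.get("dev"), target.get("bus"), name)

    # For the dump above this prints the vda/virtio root disk and the
    # sda/sata config-drive cdrom with their vms/... RBD image names.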
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.972 244018 DEBUG nova.compute.manager [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Preparing to wait for external event network-vif-plugged-433e6f28-313e-4fe8-b8da-eacc8a0332c8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.973 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.974 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.974 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
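The three lockutils lines above are oslo.concurrency's standard acquire/run/release pattern, which Nova uses here to serialize access to its per-instance event map; the wait and held durations are logged at DEBUG exactly as shown. A minimal sketch of the same pattern, with illustrative names rather than Nova's actual structures:

    # Sketch: serialize access to a shared per-instance event map with
    # oslo.concurrency, as in the "<uuid>-events" lock lines above.
    from oslo_concurrency import lockutils

    def create_or_get_event(events, instance_uuid, event_name):
        with lockutils.lock('%s-events' % instance_uuid):
            # critical section: mutate the shared map under the lock
            return events.setdefault(instance_uuid, {}).setdefault(
                event_name, object())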
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.975 244018 DEBUG nova.virt.libvirt.vif [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:20:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.976 244018 DEBUG nova.network.os_vif_util [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.977 244018 DEBUG nova.network.os_vif_util [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:d5:f4,bridge_name='br-int',has_traffic_filtering=True,id=433e6f28-313e-4fe8-b8da-eacc8a0332c8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap433e6f28-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.978 244018 DEBUG os_vif [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:d5:f4,bridge_name='br-int',has_traffic_filtering=True,id=433e6f28-313e-4fe8-b8da-eacc8a0332c8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap433e6f28-31') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.979 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.980 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4875 keys, 8732516 bytes, temperature: kUnknown
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022054981241, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 8732516, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8698300, "index_size": 20922, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12229, "raw_key_size": 119493, "raw_average_key_size": 24, "raw_value_size": 8608925, "raw_average_value_size": 1765, "num_data_blocks": 876, "num_entries": 4875, "num_filter_entries": 4875, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772022054, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.981 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.981519) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 8732516 bytes
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.983095) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 228.0 rd, 189.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 7.0 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 5398, records dropped: 523 output_compression: NoCompression
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.983127) EVENT_LOG_v1 {"time_micros": 1772022054983113, "job": 26, "event": "compaction_finished", "compaction_time_micros": 46059, "compaction_time_cpu_micros": 23633, "output_level": 6, "num_output_files": 1, "total_output_size": 8732516, "num_input_records": 5398, "num_output_records": 4875, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
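The amplification figures in the compaction summary follow directly from the logged byte counts: with roughly 3.0 MB of L0 input, 7.0 MB of L6 input and 8.3 MB written out, write amplification is 8.3/3.0 ≈ 2.8 and read-write amplification is (3.0+7.0+8.3)/3.0 ≈ 6.1, matching write-amplify(2.8) and read-write-amplify(6.1) above. A two-line check:

    # Verify the amplification numbers from the compaction line above.
    l0_in, l6_in, out = 3.0, 7.0, 8.3               # MB: "in(3.0, 7.0) out(8.3)"
    print(round(out / l0_in, 1))                    # write-amplify -> 2.8
    print(round((l0_in + l6_in + out) / l0_in, 1))  # read-write-amplify -> 6.1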
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022054983809, "job": 26, "event": "table_file_deletion", "file_number": 52}
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022054984793, "job": 26, "event": "table_file_deletion", "file_number": 50}
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.935177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.984910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.984917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.984920) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.984923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:20:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:20:54.984926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.986 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.987 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap433e6f28-31, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.988 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap433e6f28-31, col_values=(('external_ids', {'iface-id': '433e6f28-313e-4fe8-b8da-eacc8a0332c8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:d5:f4', 'vm-uuid': '51d1d661-89db-4958-a2f4-c299ee997cde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
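The two transactions above (AddPortCommand, then DbSetCommand on the Interface row) are the whole OVS side of the os-vif plug, just as the earlier AddBridgeCommand(may_exist=True) was an idempotent no-op for the already-present br-int. The same state can be reproduced or inspected from the shell with ovs-vsctl; a sketch driving it from Python, with all values taken from the log lines (requires root on the hypervisor):

    # Sketch: shell equivalent of the two ovsdbapp commands logged above.
    # --may-exist mirrors AddPortCommand(may_exist=True).
    import subprocess

    port = "tap433e6f28-31"
    subprocess.run(["ovs-vsctl", "--may-exist", "add-port", "br-int", port],
                   check=True)
    subprocess.run(
        ["ovs-vsctl", "set", "Interface", port,
         "external_ids:iface-id=433e6f28-313e-4fe8-b8da-eacc8a0332c8",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:4a:d5:f4",
         "external_ids:vm-uuid=51d1d661-89db-4958-a2f4-c299ee997cde"],
        check=True)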
Feb 25 07:20:54 np0005629333 NetworkManager[49836]: <info>  [1772022054.9920] manager: (tap433e6f28-31): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.995 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.997 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:54 np0005629333 nova_compute[244014]: 2026-02-25 12:20:54.998 244018 INFO os_vif [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:d5:f4,bridge_name='br-int',has_traffic_filtering=True,id=433e6f28-313e-4fe8-b8da-eacc8a0332c8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap433e6f28-31')#033[00m
Feb 25 07:20:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:55.004 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:55.006 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:55.007 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:55 np0005629333 nova_compute[244014]: 2026-02-25 12:20:55.117 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:20:55 np0005629333 nova_compute[244014]: 2026-02-25 12:20:55.117 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:20:55 np0005629333 nova_compute[244014]: 2026-02-25 12:20:55.118 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:4a:d5:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:20:55 np0005629333 nova_compute[244014]: 2026-02-25 12:20:55.118 244018 INFO nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Using config drive#033[00m
Feb 25 07:20:55 np0005629333 nova_compute[244014]: 2026-02-25 12:20:55.141 244018 DEBUG nova.storage.rbd_utils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 51d1d661-89db-4958-a2f4-c299ee997cde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:20:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 246 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 3.5 MiB/s wr, 59 op/s
Feb 25 07:20:55 np0005629333 nova_compute[244014]: 2026-02-25 12:20:55.685 244018 INFO nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Creating config drive at /var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/disk.config#033[00m
Feb 25 07:20:55 np0005629333 nova_compute[244014]: 2026-02-25 12:20:55.693 244018 DEBUG oslo_concurrency.processutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp4wyfcbet execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:20:55 np0005629333 nova_compute[244014]: 2026-02-25 12:20:55.714 244018 DEBUG nova.network.neutron [req-005ef240-db21-4b8d-a633-dc7a6b8aea52 req-3cc37e52-05c9-40c3-9107-b0f13bb070b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updated VIF entry in instance network info cache for port 433e6f28-313e-4fe8-b8da-eacc8a0332c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:20:55 np0005629333 nova_compute[244014]: 2026-02-25 12:20:55.715 244018 DEBUG nova.network.neutron [req-005ef240-db21-4b8d-a633-dc7a6b8aea52 req-3cc37e52-05c9-40c3-9107-b0f13bb070b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updating instance_info_cache with network_info: [{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
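The cache update above carries the instance's full network_info as JSON; the addressing lives under network.subnets[].ips[]. A small sketch of walking that structure, assuming the logged list has been parsed into network_info:

    # Sketch: collect the fixed IPs from a network_info blob shaped like
    # the cache update above (a list of VIF dicts).
    def fixed_ips(network_info):
        return [ip["address"]
                for vif in network_info
                for subnet in vif["network"]["subnets"]
                for ip in subnet["ips"]
                if ip["type"] == "fixed"]

    # For the single VIF above this yields ["10.100.0.11"].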
Feb 25 07:20:55 np0005629333 nova_compute[244014]: 2026-02-25 12:20:55.769 244018 DEBUG oslo_concurrency.lockutils [req-005ef240-db21-4b8d-a633-dc7a6b8aea52 req-3cc37e52-05c9-40c3-9107-b0f13bb070b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:20:55 np0005629333 nova_compute[244014]: 2026-02-25 12:20:55.804 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:55 np0005629333 NetworkManager[49836]: <info>  [1772022055.8089] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Feb 25 07:20:55 np0005629333 NetworkManager[49836]: <info>  [1772022055.8098] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Feb 25 07:20:55 np0005629333 nova_compute[244014]: 2026-02-25 12:20:55.817 244018 DEBUG oslo_concurrency.processutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp4wyfcbet" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
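The config drive is built by shelling out through oslo.concurrency, exactly the mkisofs command echoed above (exit 0 in 0.125s). A sketch of the same call style; the paths and publisher string are the ones from the log, and processutils.execute raises ProcessExecutionError on a non-zero exit:

    # Sketch: the mkisofs invocation logged above, via the same
    # oslo.concurrency helper Nova uses. /tmp/tmp4wyfcbet was a
    # temporary staging directory holding the metadata tree.
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        "/usr/bin/mkisofs", "-o",
        "/var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
        "-quiet", "-J", "-r", "-V", "config-2",
        "/tmp/tmp4wyfcbet")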
Feb 25 07:20:55 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:55Z|00171|binding|INFO|Releasing lport 6d2f2d5b-e692-4371-bdfb-0788a13e6e8e from this chassis (sb_readonly=0)
Feb 25 07:20:55 np0005629333 nova_compute[244014]: 2026-02-25 12:20:55.853 244018 DEBUG nova.storage.rbd_utils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 51d1d661-89db-4958-a2f4-c299ee997cde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:20:55 np0005629333 nova_compute[244014]: 2026-02-25 12:20:55.857 244018 DEBUG oslo_concurrency.processutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/disk.config 51d1d661-89db-4958-a2f4-c299ee997cde_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:20:55 np0005629333 nova_compute[244014]: 2026-02-25 12:20:55.879 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:55 np0005629333 nova_compute[244014]: 2026-02-25 12:20:55.882 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:20:55 np0005629333 nova_compute[244014]: 2026-02-25 12:20:55.883 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 07:20:55 np0005629333 nova_compute[244014]: 2026-02-25 12:20:55.883 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 07:20:55 np0005629333 nova_compute[244014]: 2026-02-25 12:20:55.916 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb 25 07:20:55 np0005629333 nova_compute[244014]: 2026-02-25 12:20:55.999 244018 DEBUG oslo_concurrency.processutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/disk.config 51d1d661-89db-4958-a2f4-c299ee997cde_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
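With the ISO imported into the vms pool (the rbd import above returned 0), the local copy becomes redundant and is deleted on the next line. The result can be checked by hand with the same rbd CLI credentials; a sketch:

    # Sketch: verify the imported config-drive image exists in the pool,
    # using the same --id/--conf as the "rbd import" command above.
    import subprocess

    res = subprocess.run(
        ["rbd", "info", "--pool", "vms",
         "51d1d661-89db-4958-a2f4-c299ee997cde_disk.config",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True)
    print(res.stdout or res.stderr)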
Feb 25 07:20:56 np0005629333 nova_compute[244014]: 2026-02-25 12:20:56.000 244018 INFO nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Deleting local config drive /var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/disk.config because it was imported into RBD.#033[00m
Feb 25 07:20:56 np0005629333 NetworkManager[49836]: <info>  [1772022056.0512] manager: (tap433e6f28-31): new Tun device (/org/freedesktop/NetworkManager/Devices/85)
Feb 25 07:20:56 np0005629333 kernel: tap433e6f28-31: entered promiscuous mode
Feb 25 07:20:56 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:56Z|00172|binding|INFO|Claiming lport 433e6f28-313e-4fe8-b8da-eacc8a0332c8 for this chassis.
Feb 25 07:20:56 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:56Z|00173|binding|INFO|433e6f28-313e-4fe8-b8da-eacc8a0332c8: Claiming fa:16:3e:4a:d5:f4 10.100.0.11
Feb 25 07:20:56 np0005629333 nova_compute[244014]: 2026-02-25 12:20:56.057 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:56 np0005629333 systemd-udevd[268783]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:20:56 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:56Z|00174|binding|INFO|Setting lport 433e6f28-313e-4fe8-b8da-eacc8a0332c8 ovn-installed in OVS
Feb 25 07:20:56 np0005629333 nova_compute[244014]: 2026-02-25 12:20:56.084 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:56 np0005629333 nova_compute[244014]: 2026-02-25 12:20:56.087 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:56 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:56Z|00175|binding|INFO|Setting lport 433e6f28-313e-4fe8-b8da-eacc8a0332c8 up in Southbound
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.088 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:d5:f4 10.100.0.11'], port_security=['fa:16:3e:4a:d5:f4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '51d1d661-89db-4958-a2f4-c299ee997cde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '43bfaf0d-cb4f-4024-9f62-b8a095234270', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=433e6f28-313e-4fe8-b8da-eacc8a0332c8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.090 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 433e6f28-313e-4fe8-b8da-eacc8a0332c8 in datapath 08121372-a435-401a-b405-778e10d8c2e2 bound to our chassis#033[00m
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.092 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08121372-a435-401a-b405-778e10d8c2e2#033[00m
Feb 25 07:20:56 np0005629333 NetworkManager[49836]: <info>  [1772022056.0938] device (tap433e6f28-31): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:20:56 np0005629333 NetworkManager[49836]: <info>  [1772022056.0946] device (tap433e6f28-31): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:20:56 np0005629333 systemd-machined[210048]: New machine qemu-31-instance-0000001b.
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.102 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b96aec56-1905-44ff-a90e-74a8c8d7466c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.103 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap08121372-a1 in ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
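"Creating VETH tap08121372-a1 in ovnmeta-... namespace": the metadata agent wires a veth pair with one end kept in the root namespace (tap08121372-a0, plugged into br-int below) and the peer moved into the per-network namespace where the metadata proxy will listen. A hedged sketch of that wiring with pyroute2, the library behind neutron's privileged ip_lib helpers; interface and namespace names are the ones from the log, root is required, and the namespace is assumed not to exist yet:

    # Sketch: create a veth pair and move one end into the metadata
    # namespace, as the agent does above.
    from pyroute2 import IPRoute, netns

    ns = "ovnmeta-08121372-a435-401a-b405-778e10d8c2e2"
    netns.create(ns)                      # assumes the namespace is new
    ipr = IPRoute()
    ipr.link("add", ifname="tap08121372-a0", kind="veth",
             peer="tap08121372-a1")
    idx = ipr.link_lookup(ifname="tap08121372-a1")[0]
    ipr.link("set", index=idx, net_ns_fd=ns)  # move peer into the ns
    ipr.close()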
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.105 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap08121372-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.106 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4f2ee969-200e-4f15-b8e1-80801ea83e24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.107 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[971120bd-215d-47ac-b247-7a51b9c95c9c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:56 np0005629333 systemd[1]: Started Virtual Machine qemu-31-instance-0000001b.
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.116 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[12188dc0-dce0-4554-a571-33309d2f93d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.127 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e74d8a2d-f9b7-4f5e-a29f-a019648d2515]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.168 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[92eaa505-9711-4a7b-ad52-ef2a565b717f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.180 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[01389b53-8875-4e77-b639-cc0303e7cf2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:56 np0005629333 NetworkManager[49836]: <info>  [1772022056.1814] manager: (tap08121372-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/86)
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.213 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8da5875e-7aa3-4788-a9b3-e00e84afc802]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.221 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b49d3a72-8b5d-4e72-b356-041c1e315973]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:56 np0005629333 nova_compute[244014]: 2026-02-25 12:20:56.226 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:20:56 np0005629333 nova_compute[244014]: 2026-02-25 12:20:56.226 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:20:56 np0005629333 nova_compute[244014]: 2026-02-25 12:20:56.226 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 25 07:20:56 np0005629333 nova_compute[244014]: 2026-02-25 12:20:56.227 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid 437f3047-f865-44f7-b16e-cddab230e873 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:20:56 np0005629333 nova_compute[244014]: 2026-02-25 12:20:56.237 244018 DEBUG nova.compute.manager [req-04a657dd-fc8d-4174-84db-51e1c9338e46 req-dcdab6ac-4c5c-42fd-a889-84fc231adfe4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Received event network-changed-6c0083e1-a97b-4de8-aa7b-14217c45376f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:56 np0005629333 nova_compute[244014]: 2026-02-25 12:20:56.237 244018 DEBUG nova.compute.manager [req-04a657dd-fc8d-4174-84db-51e1c9338e46 req-dcdab6ac-4c5c-42fd-a889-84fc231adfe4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Refreshing instance network info cache due to event network-changed-6c0083e1-a97b-4de8-aa7b-14217c45376f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:20:56 np0005629333 nova_compute[244014]: 2026-02-25 12:20:56.237 244018 DEBUG oslo_concurrency.lockutils [req-04a657dd-fc8d-4174-84db-51e1c9338e46 req-dcdab6ac-4c5c-42fd-a889-84fc231adfe4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:20:56 np0005629333 NetworkManager[49836]: <info>  [1772022056.2465] device (tap08121372-a0): carrier: link connected
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.250 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[44faa8b2-7a07-4be2-806b-bb1c416f2fae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.265 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[88cbc67e-372f-4b0d-b1f6-632ee30a7021]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402565, 'reachable_time': 42201, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268819, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.277 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[721bff18-d7c3-4e48-90af-83867e77a935]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:732b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402565, 'tstamp': 402565}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268820, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.293 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[37f66b32-e2cc-4991-8146-17225d59b6b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402565, 'reachable_time': 42201, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268821, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
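The two large privsep replies above are raw pyroute2 netlink messages (RTM_NEWLINK for tap08121372-a1); the useful fields sit in the attrs list of [NAME, value] pairs. A tiny helper for folding one of those parsed messages into a dict:

    # Sketch: flatten the 'attrs' list of a netlink message like the
    # RTM_NEWLINK replies above. `msg` is one parsed reply dict; the
    # UNKNOWN entries are skipped so they cannot collide in the dict.
    def attrs_dict(msg):
        return {name: value for name, value in msg["attrs"]
                if name != "UNKNOWN"}

    # e.g. attrs_dict(msg)["IFLA_IFNAME"]  -> 'tap08121372-a1'
    #      attrs_dict(msg)["IFLA_ADDRESS"] -> 'fa:16:3e:2b:73:2b'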
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.318 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[66eb1f82-6a0e-4ce2-865b-92cf34e019d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.366 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2104f7-8ba8-4837-b72e-77075c3c0a79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.368 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.369 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.369 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08121372-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:56 np0005629333 nova_compute[244014]: 2026-02-25 12:20:56.371 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:56 np0005629333 NetworkManager[49836]: <info>  [1772022056.3722] manager: (tap08121372-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/87)
Feb 25 07:20:56 np0005629333 kernel: tap08121372-a0: entered promiscuous mode
Feb 25 07:20:56 np0005629333 nova_compute[244014]: 2026-02-25 12:20:56.373 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.375 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08121372-a0, col_values=(('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:20:56 np0005629333 nova_compute[244014]: 2026-02-25 12:20:56.376 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:56 np0005629333 ovn_controller[147040]: 2026-02-25T12:20:56Z|00176|binding|INFO|Releasing lport ef44c128-3fa4-4475-b63c-4818a50ede40 from this chassis (sb_readonly=0)
Feb 25 07:20:56 np0005629333 nova_compute[244014]: 2026-02-25 12:20:56.386 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.388 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/08121372-a435-401a-b405-778e10d8c2e2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/08121372-a435-401a-b405-778e10d8c2e2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
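
The "Unable to access ... pid.haproxy" line is the normal first-run path: before (re)spawning the proxy, the agent probes the previous instance's pidfile and treats ENOENT as "nothing to kill". A sketch of that tolerant-read pattern (a hypothetical helper mirroring what neutron's get_value_from_file does, not its exact code):

    def get_value_from_file(path, converter=int):
        """Return the converted file contents, or None if absent or garbled."""
        try:
            with open(path) as f:
                return converter(f.read().strip())
        except FileNotFoundError:
            return None   # no pidfile yet: no proxy is running
        except ValueError:
            return None   # unparsable contents are treated the same way

    pid = get_value_from_file('/var/lib/neutron/external/pids/'
                              '08121372-a435-401a-b405-778e10d8c2e2.pid.haproxy')
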
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.389 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2fe60fe6-bbdb-4a8c-942b-8fec892d8e9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.390 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-08121372-a435-401a-b405-778e10d8c2e2
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/08121372-a435-401a-b405-778e10d8c2e2.pid.haproxy
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 08121372-a435-401a-b405-778e10d8c2e2
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:20:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:20:56.391 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'env', 'PROCESS_TAG=haproxy-08121372-a435-401a-b405-778e10d8c2e2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/08121372-a435-401a-b405-778e10d8c2e2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
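
Having found no running proxy, the agent writes the rendered config and launches haproxy inside the ovnmeta- namespace through sudo and neutron-rootwrap; in this podified deployment the haproxy command is evidently serviced by the neutron-haproxy-ovnmeta container that podman creates just below. Stripped of the rootwrap hop, the invocation reduces to roughly this (requires root; the PROCESS_TAG variable is what later lets the agent find and signal this haproxy):

    import subprocess

    NETNS = 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2'
    CFG = ('/var/lib/neutron/ovn-metadata-proxy/'
           '08121372-a435-401a-b405-778e10d8c2e2.conf')

    # Equivalent of the logged command, minus the sudo/rootwrap indirection.
    subprocess.run(
        ['ip', 'netns', 'exec', NETNS,
         'env', 'PROCESS_TAG=haproxy-08121372-a435-401a-b405-778e10d8c2e2',
         'haproxy', '-f', CFG],
        check=True)
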
Feb 25 07:20:56 np0005629333 podman[268853]: 2026-02-25 12:20:56.804982417 +0000 UTC m=+0.068951023 container create b50073db9dbcd731d0b723c13f97680fccceaabc6c416564b26619e87a58647d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 07:20:56 np0005629333 systemd[1]: Started libpod-conmon-b50073db9dbcd731d0b723c13f97680fccceaabc6c416564b26619e87a58647d.scope.
Feb 25 07:20:56 np0005629333 podman[268853]: 2026-02-25 12:20:56.771875465 +0000 UTC m=+0.035844121 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:20:56 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:20:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b250d49de0d9de34b918b625012f9e944f53e47c7cb2c90afd9a5ea1fff5ce10/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:20:56 np0005629333 podman[268853]: 2026-02-25 12:20:56.901975067 +0000 UTC m=+0.165943693 container init b50073db9dbcd731d0b723c13f97680fccceaabc6c416564b26619e87a58647d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:20:56 np0005629333 podman[268853]: 2026-02-25 12:20:56.912247799 +0000 UTC m=+0.176216415 container start b50073db9dbcd731d0b723c13f97680fccceaabc6c416564b26619e87a58647d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 25 07:20:56 np0005629333 nova_compute[244014]: 2026-02-25 12:20:56.918 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:56 np0005629333 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[268869]: [NOTICE]   (268891) : New worker (268909) forked
Feb 25 07:20:56 np0005629333 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[268869]: [NOTICE]   (268891) : Loading success.
Feb 25 07:20:57 np0005629333 nova_compute[244014]: 2026-02-25 12:20:57.050 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022057.0500329, 51d1d661-89db-4958-a2f4-c299ee997cde => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:20:57 np0005629333 nova_compute[244014]: 2026-02-25 12:20:57.051 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] VM Started (Lifecycle Event)#033[00m
Feb 25 07:20:57 np0005629333 nova_compute[244014]: 2026-02-25 12:20:57.099 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:20:57 np0005629333 nova_compute[244014]: 2026-02-25 12:20:57.106 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022057.0512195, 51d1d661-89db-4958-a2f4-c299ee997cde => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:20:57 np0005629333 nova_compute[244014]: 2026-02-25 12:20:57.106 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:20:57 np0005629333 nova_compute[244014]: 2026-02-25 12:20:57.154 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:20:57 np0005629333 nova_compute[244014]: 2026-02-25 12:20:57.157 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:20:57 np0005629333 nova_compute[244014]: 2026-02-25 12:20:57.202 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
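
The Started/Paused churn here is expected during spawn: libvirt emits lifecycle events while the guest is being assembled, and nova's power-state sync defers whenever a task is in flight, because DB power_state 0 (NOSTATE) versus VM power_state 3 (PAUSED) is only a real conflict once task_state clears. A paraphrase of that guard (illustrative, not nova's exact code):

    NOSTATE, RUNNING, PAUSED = 0, 1, 3   # the power-state codes visible in the log

    def sync_power_state(instance, vm_power_state):
        if instance.task_state is not None:
            # e.g. task_state == 'spawning': the mismatch is transient, skip
            return
        if instance.power_state != vm_power_state:
            instance.power_state = vm_power_state
            instance.save()
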
Feb 25 07:20:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 246 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 135 op/s
Feb 25 07:20:57 np0005629333 nova_compute[244014]: 2026-02-25 12:20:57.912 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Updating instance_info_cache with network_info: [{"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:20:57 np0005629333 nova_compute[244014]: 2026-02-25 12:20:57.940 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:20:57 np0005629333 nova_compute[244014]: 2026-02-25 12:20:57.941 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 25 07:20:57 np0005629333 nova_compute[244014]: 2026-02-25 12:20:57.941 244018 DEBUG oslo_concurrency.lockutils [req-04a657dd-fc8d-4174-84db-51e1c9338e46 req-dcdab6ac-4c5c-42fd-a889-84fc231adfe4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:20:57 np0005629333 nova_compute[244014]: 2026-02-25 12:20:57.942 244018 DEBUG nova.network.neutron [req-04a657dd-fc8d-4174-84db-51e1c9338e46 req-dcdab6ac-4c5c-42fd-a889-84fc231adfe4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Refreshing network info cache for port 6c0083e1-a97b-4de8-aa7b-14217c45376f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.430 244018 DEBUG nova.compute.manager [req-e2d09a6a-46e4-44c2-bff7-e7e166627a91 req-db23157c-fb15-4904-94b7-e5ce0cd7689c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-plugged-433e6f28-313e-4fe8-b8da-eacc8a0332c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.430 244018 DEBUG oslo_concurrency.lockutils [req-e2d09a6a-46e4-44c2-bff7-e7e166627a91 req-db23157c-fb15-4904-94b7-e5ce0cd7689c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.430 244018 DEBUG oslo_concurrency.lockutils [req-e2d09a6a-46e4-44c2-bff7-e7e166627a91 req-db23157c-fb15-4904-94b7-e5ce0cd7689c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.431 244018 DEBUG oslo_concurrency.lockutils [req-e2d09a6a-46e4-44c2-bff7-e7e166627a91 req-db23157c-fb15-4904-94b7-e5ce0cd7689c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.431 244018 DEBUG nova.compute.manager [req-e2d09a6a-46e4-44c2-bff7-e7e166627a91 req-db23157c-fb15-4904-94b7-e5ce0cd7689c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Processing event network-vif-plugged-433e6f28-313e-4fe8-b8da-eacc8a0332c8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.431 244018 DEBUG nova.compute.manager [req-e2d09a6a-46e4-44c2-bff7-e7e166627a91 req-db23157c-fb15-4904-94b7-e5ce0cd7689c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-plugged-433e6f28-313e-4fe8-b8da-eacc8a0332c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.431 244018 DEBUG oslo_concurrency.lockutils [req-e2d09a6a-46e4-44c2-bff7-e7e166627a91 req-db23157c-fb15-4904-94b7-e5ce0cd7689c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.431 244018 DEBUG oslo_concurrency.lockutils [req-e2d09a6a-46e4-44c2-bff7-e7e166627a91 req-db23157c-fb15-4904-94b7-e5ce0cd7689c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.431 244018 DEBUG oslo_concurrency.lockutils [req-e2d09a6a-46e4-44c2-bff7-e7e166627a91 req-db23157c-fb15-4904-94b7-e5ce0cd7689c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.432 244018 DEBUG nova.compute.manager [req-e2d09a6a-46e4-44c2-bff7-e7e166627a91 req-db23157c-fb15-4904-94b7-e5ce0cd7689c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] No waiting events found dispatching network-vif-plugged-433e6f28-313e-4fe8-b8da-eacc8a0332c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.432 244018 WARNING nova.compute.manager [req-e2d09a6a-46e4-44c2-bff7-e7e166627a91 req-db23157c-fb15-4904-94b7-e5ce0cd7689c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received unexpected event network-vif-plugged-433e6f28-313e-4fe8-b8da-eacc8a0332c8 for instance with vm_state building and task_state spawning.#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.432 244018 DEBUG nova.compute.manager [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
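
Note the duplicate delivery: the first network-vif-plugged satisfies the waiter inside build_and_run_instance ("wait completed in 1 seconds"), while the second finds no waiter left and is logged as an unexpected event, which is harmless. The mechanism behind pop_instance_event is a registry of per-instance events guarded by the "<uuid>-events" lock seen above; a simplified, hypothetical sketch of that shape:

    import threading
    from collections import defaultdict

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()        # stands in for "<uuid>-events"
            self._waiters = defaultdict(dict)    # uuid -> {event_name: Event}

        def prepare(self, uuid, name):
            # Called by the builder before plugging the VIF; it later wait()s.
            with self._lock:
                return self._waiters[uuid].setdefault(name, threading.Event())

        def pop(self, uuid, name):
            # Called by the external-event handler; a second delivery of the
            # same event finds nothing ("No waiting events found dispatching").
            with self._lock:
                ev = self._waiters[uuid].pop(name, None)
            if ev is None:
                return False
            ev.set()
            return True
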
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.435 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022058.4353757, 51d1d661-89db-4958-a2f4-c299ee997cde => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.435 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.458 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.461 244018 INFO nova.virt.libvirt.driver [-] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Instance spawned successfully.#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.461 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.469 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.496 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.498 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.523 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.524 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.525 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.525 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.526 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.527 244018 DEBUG nova.virt.libvirt.driver [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.586 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.647 244018 INFO nova.compute.manager [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Took 10.51 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.648 244018 DEBUG nova.compute.manager [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.737 244018 INFO nova.compute.manager [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Took 11.79 seconds to build instance.#033[00m
Feb 25 07:20:58 np0005629333 nova_compute[244014]: 2026-02-25 12:20:58.765 244018 DEBUG oslo_concurrency.lockutils [None req-73658d24-bc77-42d6-ad5d-bfff16a06327 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
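
The entire build ran under a per-instance lock, released here after 11.919s; the "<locals>._locked_do_build_and_run_instance" in the lock name shows nova wrapping an inner function in a lock keyed on the instance UUID. The same pattern with oslo.concurrency (lock name taken from the log, body illustrative) produces exactly these Acquiring/acquired/released DEBUG lines:

    from oslo_concurrency import lockutils

    def build_and_run_instance(instance_uuid):
        @lockutils.synchronized(instance_uuid)
        def _locked_do_build_and_run_instance():
            pass   # placeholder: spawn the guest, plug VIFs, etc.
        _locked_do_build_and_run_instance()
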
Feb 25 07:20:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:20:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 246 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Feb 25 07:20:59 np0005629333 nova_compute[244014]: 2026-02-25 12:20:59.687 244018 DEBUG nova.network.neutron [req-04a657dd-fc8d-4174-84db-51e1c9338e46 req-dcdab6ac-4c5c-42fd-a889-84fc231adfe4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Updated VIF entry in instance network info cache for port 6c0083e1-a97b-4de8-aa7b-14217c45376f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:20:59 np0005629333 nova_compute[244014]: 2026-02-25 12:20:59.688 244018 DEBUG nova.network.neutron [req-04a657dd-fc8d-4174-84db-51e1c9338e46 req-dcdab6ac-4c5c-42fd-a889-84fc231adfe4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Updating instance_info_cache with network_info: [{"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:20:59 np0005629333 nova_compute[244014]: 2026-02-25 12:20:59.705 244018 DEBUG oslo_concurrency.lockutils [req-04a657dd-fc8d-4174-84db-51e1c9338e46 req-dcdab6ac-4c5c-42fd-a889-84fc231adfe4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:20:59 np0005629333 nova_compute[244014]: 2026-02-25 12:20:59.991 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:01 np0005629333 nova_compute[244014]: 2026-02-25 12:21:01.524 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:21:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:21:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:21:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:21:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:21:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:21:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 246 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Feb 25 07:21:01 np0005629333 nova_compute[244014]: 2026-02-25 12:21:01.646 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:01.647 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:21:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:01.648 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 25 07:21:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:01.649 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:21:01 np0005629333 nova_compute[244014]: 2026-02-25 12:21:01.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:21:01 np0005629333 nova_compute[244014]: 2026-02-25 12:21:01.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:21:02 np0005629333 nova_compute[244014]: 2026-02-25 12:21:02.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:21:02 np0005629333 nova_compute[244014]: 2026-02-25 12:21:02.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:21:03 np0005629333 nova_compute[244014]: 2026-02-25 12:21:03.332 244018 DEBUG nova.compute.manager [req-25f18c54-1408-4d33-ace3-de5bef741f32 req-c7dfb499-c5ab-49a3-8ae5-aec9bde8864b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-changed-433e6f28-313e-4fe8-b8da-eacc8a0332c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:21:03 np0005629333 nova_compute[244014]: 2026-02-25 12:21:03.333 244018 DEBUG nova.compute.manager [req-25f18c54-1408-4d33-ace3-de5bef741f32 req-c7dfb499-c5ab-49a3-8ae5-aec9bde8864b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Refreshing instance network info cache due to event network-changed-433e6f28-313e-4fe8-b8da-eacc8a0332c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:21:03 np0005629333 nova_compute[244014]: 2026-02-25 12:21:03.333 244018 DEBUG oslo_concurrency.lockutils [req-25f18c54-1408-4d33-ace3-de5bef741f32 req-c7dfb499-c5ab-49a3-8ae5-aec9bde8864b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:21:03 np0005629333 nova_compute[244014]: 2026-02-25 12:21:03.334 244018 DEBUG oslo_concurrency.lockutils [req-25f18c54-1408-4d33-ace3-de5bef741f32 req-c7dfb499-c5ab-49a3-8ae5-aec9bde8864b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:21:03 np0005629333 nova_compute[244014]: 2026-02-25 12:21:03.334 244018 DEBUG nova.network.neutron [req-25f18c54-1408-4d33-ace3-de5bef741f32 req-c7dfb499-c5ab-49a3-8ae5-aec9bde8864b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Refreshing network info cache for port 433e6f28-313e-4fe8-b8da-eacc8a0332c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:21:03 np0005629333 nova_compute[244014]: 2026-02-25 12:21:03.471 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 246 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.5 MiB/s wr, 173 op/s
Feb 25 07:21:03 np0005629333 nova_compute[244014]: 2026-02-25 12:21:03.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:21:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:21:04 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:04Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b5:e1:9a 10.100.0.6
Feb 25 07:21:04 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:04Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b5:e1:9a 10.100.0.6
Feb 25 07:21:04 np0005629333 nova_compute[244014]: 2026-02-25 12:21:04.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:21:04 np0005629333 nova_compute[244014]: 2026-02-25 12:21:04.906 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:21:04 np0005629333 nova_compute[244014]: 2026-02-25 12:21:04.906 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:21:04 np0005629333 nova_compute[244014]: 2026-02-25 12:21:04.907 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:21:04 np0005629333 nova_compute[244014]: 2026-02-25 12:21:04.907 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:21:04 np0005629333 nova_compute[244014]: 2026-02-25 12:21:04.908 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:21:04 np0005629333 nova_compute[244014]: 2026-02-25 12:21:04.995 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:05 np0005629333 nova_compute[244014]: 2026-02-25 12:21:05.368 244018 DEBUG nova.network.neutron [req-25f18c54-1408-4d33-ace3-de5bef741f32 req-c7dfb499-c5ab-49a3-8ae5-aec9bde8864b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updated VIF entry in instance network info cache for port 433e6f28-313e-4fe8-b8da-eacc8a0332c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:21:05 np0005629333 nova_compute[244014]: 2026-02-25 12:21:05.369 244018 DEBUG nova.network.neutron [req-25f18c54-1408-4d33-ace3-de5bef741f32 req-c7dfb499-c5ab-49a3-8ae5-aec9bde8864b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updating instance_info_cache with network_info: [{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:21:05 np0005629333 nova_compute[244014]: 2026-02-25 12:21:05.464 244018 DEBUG oslo_concurrency.lockutils [req-25f18c54-1408-4d33-ace3-de5bef741f32 req-c7dfb499-c5ab-49a3-8ae5-aec9bde8864b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:21:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:21:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3680699916' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:21:05 np0005629333 nova_compute[244014]: 2026-02-25 12:21:05.493 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
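
update_available_resource shells out to ceph df (the ceph-mon audit lines show the command arriving as client.openstack) so the resource tracker can size the RBD-backed disk capacity. ceph's JSON output carries a top-level stats block; a sketch of the same probe, with field names per ceph df's documented JSON format:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']

    total_gb = stats['total_bytes'] / 1024**3
    avail_gb = stats['total_avail_bytes'] / 1024**3
    # Consistent with the pgmap lines' "60 GiB / 60 GiB avail" cluster view.
    print(f'{avail_gb:.0f} GiB free of {total_gb:.0f} GiB')
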
Feb 25 07:21:05 np0005629333 nova_compute[244014]: 2026-02-25 12:21:05.615 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:21:05 np0005629333 nova_compute[244014]: 2026-02-25 12:21:05.616 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:21:05 np0005629333 nova_compute[244014]: 2026-02-25 12:21:05.622 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:21:05 np0005629333 nova_compute[244014]: 2026-02-25 12:21:05.622 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:21:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 246 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 28 KiB/s wr, 142 op/s
Feb 25 07:21:05 np0005629333 nova_compute[244014]: 2026-02-25 12:21:05.829 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:21:05 np0005629333 nova_compute[244014]: 2026-02-25 12:21:05.833 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4045MB free_disk=59.94620304182172GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 07:21:05 np0005629333 nova_compute[244014]: 2026-02-25 12:21:05.833 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:21:05 np0005629333 nova_compute[244014]: 2026-02-25 12:21:05.834 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:21:05 np0005629333 nova_compute[244014]: 2026-02-25 12:21:05.945 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 437f3047-f865-44f7-b16e-cddab230e873 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:21:05 np0005629333 nova_compute[244014]: 2026-02-25 12:21:05.945 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 51d1d661-89db-4958-a2f4-c299ee997cde actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:21:05 np0005629333 nova_compute[244014]: 2026-02-25 12:21:05.946 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 07:21:05 np0005629333 nova_compute[244014]: 2026-02-25 12:21:05.946 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 07:21:06 np0005629333 nova_compute[244014]: 2026-02-25 12:21:06.011 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:21:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:21:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3313543570' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:21:06 np0005629333 nova_compute[244014]: 2026-02-25 12:21:06.579 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:21:06 np0005629333 nova_compute[244014]: 2026-02-25 12:21:06.585 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:21:06 np0005629333 nova_compute[244014]: 2026-02-25 12:21:06.605 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
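
Since the inventory matches what placement already has, no update is sent. Effective capacity per resource class follows placement's (total - reserved) * allocation_ratio rule, which is why this 8-vCPU node can advertise 32 schedulable VCPUs; checking the numbers from the log:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
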
Feb 25 07:21:06 np0005629333 nova_compute[244014]: 2026-02-25 12:21:06.644 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 07:21:06 np0005629333 nova_compute[244014]: 2026-02-25 12:21:06.644 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:21:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 208 op/s
Feb 25 07:21:07 np0005629333 nova_compute[244014]: 2026-02-25 12:21:07.646 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:21:07 np0005629333 nova_compute[244014]: 2026-02-25 12:21:07.646 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:21:07 np0005629333 nova_compute[244014]: 2026-02-25 12:21:07.647 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 07:21:08 np0005629333 nova_compute[244014]: 2026-02-25 12:21:08.473 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:21:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:09Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4a:d5:f4 10.100.0.11
Feb 25 07:21:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:09Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4a:d5:f4 10.100.0.11
Feb 25 07:21:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Feb 25 07:21:09 np0005629333 nova_compute[244014]: 2026-02-25 12:21:09.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:21:09 np0005629333 nova_compute[244014]: 2026-02-25 12:21:09.999 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 290 MiB data, 522 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.6 MiB/s wr, 149 op/s
Feb 25 07:21:13 np0005629333 nova_compute[244014]: 2026-02-25 12:21:13.183 244018 DEBUG nova.objects.instance [None req-4cc205ad-1eb5-4d8e-b0d3-36c4e7018156 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lazy-loading 'flavor' on Instance uuid 437f3047-f865-44f7-b16e-cddab230e873 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:21:13 np0005629333 nova_compute[244014]: 2026-02-25 12:21:13.261 244018 DEBUG oslo_concurrency.lockutils [None req-4cc205ad-1eb5-4d8e-b0d3-36c4e7018156 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Acquiring lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:21:13 np0005629333 nova_compute[244014]: 2026-02-25 12:21:13.261 244018 DEBUG oslo_concurrency.lockutils [None req-4cc205ad-1eb5-4d8e-b0d3-36c4e7018156 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Acquired lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:21:13 np0005629333 nova_compute[244014]: 2026-02-25 12:21:13.476 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:21:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 312 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.3 MiB/s wr, 179 op/s
Feb 25 07:21:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Feb 25 07:21:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Feb 25 07:21:14 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Feb 25 07:21:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:21:14 np0005629333 nova_compute[244014]: 2026-02-25 12:21:14.672 244018 DEBUG nova.network.neutron [None req-4cc205ad-1eb5-4d8e-b0d3-36c4e7018156 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:21:15 np0005629333 nova_compute[244014]: 2026-02-25 12:21:15.002 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:21:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 312 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 786 KiB/s rd, 5.1 MiB/s wr, 156 op/s
Feb 25 07:21:15 np0005629333 nova_compute[244014]: 2026-02-25 12:21:15.781 244018 DEBUG nova.compute.manager [req-4ac1119a-f842-412f-aa4d-65747f408f69 req-f25a74e8-af37-4ab8-ac3a-46c4c708b8b8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Received event network-changed-6c0083e1-a97b-4de8-aa7b-14217c45376f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:21:15 np0005629333 nova_compute[244014]: 2026-02-25 12:21:15.784 244018 DEBUG nova.compute.manager [req-4ac1119a-f842-412f-aa4d-65747f408f69 req-f25a74e8-af37-4ab8-ac3a-46c4c708b8b8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Refreshing instance network info cache due to event network-changed-6c0083e1-a97b-4de8-aa7b-14217c45376f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:21:15 np0005629333 nova_compute[244014]: 2026-02-25 12:21:15.784 244018 DEBUG oslo_concurrency.lockutils [req-4ac1119a-f842-412f-aa4d-65747f408f69 req-f25a74e8-af37-4ab8-ac3a-46c4c708b8b8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:21:16 np0005629333 podman[269115]: 2026-02-25 12:21:16.996854117 +0000 UTC m=+0.054586424 container create 952790843f5b3d0b335997f399cf1f777f66706ff7aa6afeeabe47e1351e2798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hawking, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:21:17 np0005629333 nova_compute[244014]: 2026-02-25 12:21:17.014 244018 DEBUG nova.network.neutron [None req-4cc205ad-1eb5-4d8e-b0d3-36c4e7018156 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Updating instance_info_cache with network_info: [{"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:21:17 np0005629333 nova_compute[244014]: 2026-02-25 12:21:17.038 244018 DEBUG oslo_concurrency.lockutils [None req-4cc205ad-1eb5-4d8e-b0d3-36c4e7018156 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Releasing lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:21:17 np0005629333 systemd[1]: Started libpod-conmon-952790843f5b3d0b335997f399cf1f777f66706ff7aa6afeeabe47e1351e2798.scope.
Feb 25 07:21:17 np0005629333 nova_compute[244014]: 2026-02-25 12:21:17.038 244018 DEBUG nova.compute.manager [None req-4cc205ad-1eb5-4d8e-b0d3-36c4e7018156 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Feb 25 07:21:17 np0005629333 nova_compute[244014]: 2026-02-25 12:21:17.040 244018 DEBUG nova.compute.manager [None req-4cc205ad-1eb5-4d8e-b0d3-36c4e7018156 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] network_info to inject: |[{"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Feb 25 07:21:17 np0005629333 nova_compute[244014]: 2026-02-25 12:21:17.044 244018 DEBUG oslo_concurrency.lockutils [req-4ac1119a-f842-412f-aa4d-65747f408f69 req-f25a74e8-af37-4ab8-ac3a-46c4c708b8b8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:21:17 np0005629333 nova_compute[244014]: 2026-02-25 12:21:17.045 244018 DEBUG nova.network.neutron [req-4ac1119a-f842-412f-aa4d-65747f408f69 req-f25a74e8-af37-4ab8-ac3a-46c4c708b8b8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Refreshing network info cache for port 6c0083e1-a97b-4de8-aa7b-14217c45376f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:21:17 np0005629333 podman[269115]: 2026-02-25 12:21:16.969067136 +0000 UTC m=+0.026799573 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:21:17 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:21:17 np0005629333 podman[269115]: 2026-02-25 12:21:17.101765082 +0000 UTC m=+0.159497419 container init 952790843f5b3d0b335997f399cf1f777f66706ff7aa6afeeabe47e1351e2798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hawking, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:21:17 np0005629333 podman[269115]: 2026-02-25 12:21:17.110200372 +0000 UTC m=+0.167932709 container start 952790843f5b3d0b335997f399cf1f777f66706ff7aa6afeeabe47e1351e2798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:21:17 np0005629333 podman[269115]: 2026-02-25 12:21:17.113839285 +0000 UTC m=+0.171571612 container attach 952790843f5b3d0b335997f399cf1f777f66706ff7aa6afeeabe47e1351e2798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hawking, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:21:17 np0005629333 youthful_hawking[269132]: 167 167
Feb 25 07:21:17 np0005629333 systemd[1]: libpod-952790843f5b3d0b335997f399cf1f777f66706ff7aa6afeeabe47e1351e2798.scope: Deactivated successfully.
Feb 25 07:21:17 np0005629333 podman[269115]: 2026-02-25 12:21:17.117921522 +0000 UTC m=+0.175653859 container died 952790843f5b3d0b335997f399cf1f777f66706ff7aa6afeeabe47e1351e2798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:21:17 np0005629333 systemd[1]: var-lib-containers-storage-overlay-2082e52ec00ff7d8d3b2e3b59b9741ec10d53b319e6975068bde68f73ea4190e-merged.mount: Deactivated successfully.
Feb 25 07:21:17 np0005629333 podman[269115]: 2026-02-25 12:21:17.160227655 +0000 UTC m=+0.217959942 container remove 952790843f5b3d0b335997f399cf1f777f66706ff7aa6afeeabe47e1351e2798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_hawking, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 07:21:17 np0005629333 systemd[1]: libpod-conmon-952790843f5b3d0b335997f399cf1f777f66706ff7aa6afeeabe47e1351e2798.scope: Deactivated successfully.
Feb 25 07:21:17 np0005629333 podman[269155]: 2026-02-25 12:21:17.355873692 +0000 UTC m=+0.069501019 container create 1d7bd366230a298e47b7610d2c282c71524bcd631dbc49a8fa58de79238bd068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mirzakhani, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 07:21:17 np0005629333 systemd[1]: Started libpod-conmon-1d7bd366230a298e47b7610d2c282c71524bcd631dbc49a8fa58de79238bd068.scope.
Feb 25 07:21:17 np0005629333 podman[269155]: 2026-02-25 12:21:17.322314807 +0000 UTC m=+0.035942194 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:21:17 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:21:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/887fcae71b6a15a9fd490653bcaef35bf625336f2e7c8be7a2678b0ce1110c40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:21:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/887fcae71b6a15a9fd490653bcaef35bf625336f2e7c8be7a2678b0ce1110c40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:21:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/887fcae71b6a15a9fd490653bcaef35bf625336f2e7c8be7a2678b0ce1110c40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:21:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/887fcae71b6a15a9fd490653bcaef35bf625336f2e7c8be7a2678b0ce1110c40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:21:17 np0005629333 podman[269155]: 2026-02-25 12:21:17.474760764 +0000 UTC m=+0.188388111 container init 1d7bd366230a298e47b7610d2c282c71524bcd631dbc49a8fa58de79238bd068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mirzakhani, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 07:21:17 np0005629333 podman[269155]: 2026-02-25 12:21:17.483322238 +0000 UTC m=+0.196949575 container start 1d7bd366230a298e47b7610d2c282c71524bcd631dbc49a8fa58de79238bd068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:21:17 np0005629333 podman[269155]: 2026-02-25 12:21:17.488973259 +0000 UTC m=+0.202600586 container attach 1d7bd366230a298e47b7610d2c282c71524bcd631dbc49a8fa58de79238bd068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:21:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 312 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 404 KiB/s rd, 2.6 MiB/s wr, 92 op/s
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]: [
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:    {
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:        "available": false,
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:        "being_replaced": false,
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:        "ceph_device_lvm": false,
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:        "device_id": "QEMU_DVD-ROM_QM00001",
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:        "lsm_data": {},
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:        "lvs": [],
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:        "path": "/dev/sr0",
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:        "rejected_reasons": [
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:            "Has a FileSystem",
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:            "Insufficient space (<5GB)"
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:        ],
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:        "sys_api": {
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:            "actuators": null,
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:            "device_nodes": [
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:                "sr0"
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:            ],
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:            "devname": "sr0",
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:            "human_readable_size": "482.00 KB",
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:            "id_bus": "ata",
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:            "model": "QEMU DVD-ROM",
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:            "nr_requests": "2",
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:            "parent": "/dev/sr0",
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:            "partitions": {},
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:            "path": "/dev/sr0",
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:            "removable": "1",
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:            "rev": "2.5+",
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:            "ro": "0",
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:            "rotational": "1",
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:            "sas_address": "",
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:            "sas_device_handle": "",
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:            "scheduler_mode": "mq-deadline",
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:            "sectors": 0,
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:            "sectorsize": "2048",
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:            "size": 493568.0,
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:            "support_discard": "2048",
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:            "type": "disk",
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:            "vendor": "QEMU"
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:        }
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]:    }
Feb 25 07:21:18 np0005629333 intelligent_mirzakhani[269171]: ]
Feb 25 07:21:18 np0005629333 systemd[1]: libpod-1d7bd366230a298e47b7610d2c282c71524bcd631dbc49a8fa58de79238bd068.scope: Deactivated successfully.
Feb 25 07:21:18 np0005629333 podman[269155]: 2026-02-25 12:21:18.121818324 +0000 UTC m=+0.835445631 container died 1d7bd366230a298e47b7610d2c282c71524bcd631dbc49a8fa58de79238bd068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Feb 25 07:21:18 np0005629333 systemd[1]: var-lib-containers-storage-overlay-887fcae71b6a15a9fd490653bcaef35bf625336f2e7c8be7a2678b0ce1110c40-merged.mount: Deactivated successfully.
Feb 25 07:21:18 np0005629333 podman[269155]: 2026-02-25 12:21:18.168148602 +0000 UTC m=+0.881775909 container remove 1d7bd366230a298e47b7610d2c282c71524bcd631dbc49a8fa58de79238bd068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:21:18 np0005629333 systemd[1]: libpod-conmon-1d7bd366230a298e47b7610d2c282c71524bcd631dbc49a8fa58de79238bd068.scope: Deactivated successfully.
Feb 25 07:21:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:21:18 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:21:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:21:18 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:21:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 25 07:21:18 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 07:21:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:21:18 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:21:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:21:18 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:21:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:21:18 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:21:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:21:18 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:21:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:21:18 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:21:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:21:18 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:21:18 np0005629333 nova_compute[244014]: 2026-02-25 12:21:18.477 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:21:18 np0005629333 podman[270162]: 2026-02-25 12:21:18.780048422 +0000 UTC m=+0.102626951 container create e92abd034796528d6b853a6ab606cae7dd189af5c73350a7a7e0e51f5ff08740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 07:21:18 np0005629333 podman[270162]: 2026-02-25 12:21:18.713017765 +0000 UTC m=+0.035596364 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:21:18 np0005629333 systemd[1]: Started libpod-conmon-e92abd034796528d6b853a6ab606cae7dd189af5c73350a7a7e0e51f5ff08740.scope.
Feb 25 07:21:18 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:21:18 np0005629333 podman[270162]: 2026-02-25 12:21:18.897058431 +0000 UTC m=+0.219637030 container init e92abd034796528d6b853a6ab606cae7dd189af5c73350a7a7e0e51f5ff08740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_yonath, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:21:18 np0005629333 podman[270162]: 2026-02-25 12:21:18.907375475 +0000 UTC m=+0.229954024 container start e92abd034796528d6b853a6ab606cae7dd189af5c73350a7a7e0e51f5ff08740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:21:18 np0005629333 podman[270162]: 2026-02-25 12:21:18.911531423 +0000 UTC m=+0.234109972 container attach e92abd034796528d6b853a6ab606cae7dd189af5c73350a7a7e0e51f5ff08740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 07:21:18 np0005629333 cool_yonath[270179]: 167 167
Feb 25 07:21:18 np0005629333 systemd[1]: libpod-e92abd034796528d6b853a6ab606cae7dd189af5c73350a7a7e0e51f5ff08740.scope: Deactivated successfully.
Feb 25 07:21:18 np0005629333 podman[270162]: 2026-02-25 12:21:18.913439807 +0000 UTC m=+0.236018356 container died e92abd034796528d6b853a6ab606cae7dd189af5c73350a7a7e0e51f5ff08740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_yonath, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:21:18 np0005629333 systemd[1]: var-lib-containers-storage-overlay-cf2970ed82240e8d6ccf82802bcff666441e1de7726864821bbab8412d852ef6-merged.mount: Deactivated successfully.
Feb 25 07:21:18 np0005629333 podman[270162]: 2026-02-25 12:21:18.961596178 +0000 UTC m=+0.284174737 container remove e92abd034796528d6b853a6ab606cae7dd189af5c73350a7a7e0e51f5ff08740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:21:18 np0005629333 systemd[1]: libpod-conmon-e92abd034796528d6b853a6ab606cae7dd189af5c73350a7a7e0e51f5ff08740.scope: Deactivated successfully.
Feb 25 07:21:19 np0005629333 podman[270203]: 2026-02-25 12:21:19.171731516 +0000 UTC m=+0.059905785 container create 6d6ce450c6a61c73352136d840e631b769581f934068cb2b8a9e298c51e6cde3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_taussig, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:21:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:21:19 np0005629333 systemd[1]: Started libpod-conmon-6d6ce450c6a61c73352136d840e631b769581f934068cb2b8a9e298c51e6cde3.scope.
Feb 25 07:21:19 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:21:19 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:21:19 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 07:21:19 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:21:19 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:21:19 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:21:19 np0005629333 podman[270203]: 2026-02-25 12:21:19.147446045 +0000 UTC m=+0.035620364 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:21:19 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:21:19 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a704835cfa8f9e98907533f46672093169d1d30b7a12b78cd1f2b04e7fdf6857/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:21:19 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a704835cfa8f9e98907533f46672093169d1d30b7a12b78cd1f2b04e7fdf6857/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:21:19 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a704835cfa8f9e98907533f46672093169d1d30b7a12b78cd1f2b04e7fdf6857/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:21:19 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a704835cfa8f9e98907533f46672093169d1d30b7a12b78cd1f2b04e7fdf6857/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:21:19 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a704835cfa8f9e98907533f46672093169d1d30b7a12b78cd1f2b04e7fdf6857/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:21:19 np0005629333 podman[270203]: 2026-02-25 12:21:19.274362666 +0000 UTC m=+0.162536985 container init 6d6ce450c6a61c73352136d840e631b769581f934068cb2b8a9e298c51e6cde3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_taussig, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:21:19 np0005629333 podman[270203]: 2026-02-25 12:21:19.289217999 +0000 UTC m=+0.177392278 container start 6d6ce450c6a61c73352136d840e631b769581f934068cb2b8a9e298c51e6cde3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_taussig, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:21:19 np0005629333 podman[270203]: 2026-02-25 12:21:19.294383236 +0000 UTC m=+0.182557575 container attach 6d6ce450c6a61c73352136d840e631b769581f934068cb2b8a9e298c51e6cde3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_taussig, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 07:21:19 np0005629333 nova_compute[244014]: 2026-02-25 12:21:19.513 244018 DEBUG nova.network.neutron [req-4ac1119a-f842-412f-aa4d-65747f408f69 req-f25a74e8-af37-4ab8-ac3a-46c4c708b8b8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Updated VIF entry in instance network info cache for port 6c0083e1-a97b-4de8-aa7b-14217c45376f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 07:21:19 np0005629333 nova_compute[244014]: 2026-02-25 12:21:19.516 244018 DEBUG nova.network.neutron [req-4ac1119a-f842-412f-aa4d-65747f408f69 req-f25a74e8-af37-4ab8-ac3a-46c4c708b8b8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Updating instance_info_cache with network_info: [{"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:21:19 np0005629333 nova_compute[244014]: 2026-02-25 12:21:19.550 244018 DEBUG oslo_concurrency.lockutils [req-4ac1119a-f842-412f-aa4d-65747f408f69 req-f25a74e8-af37-4ab8-ac3a-46c4c708b8b8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:21:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 312 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 403 KiB/s rd, 2.6 MiB/s wr, 92 op/s
Feb 25 07:21:19 np0005629333 charming_taussig[270220]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:21:19 np0005629333 charming_taussig[270220]: --> All data devices are unavailable
Feb 25 07:21:19 np0005629333 systemd[1]: libpod-6d6ce450c6a61c73352136d840e631b769581f934068cb2b8a9e298c51e6cde3.scope: Deactivated successfully.
Feb 25 07:21:19 np0005629333 podman[270240]: 2026-02-25 12:21:19.86144074 +0000 UTC m=+0.026945157 container died 6d6ce450c6a61c73352136d840e631b769581f934068cb2b8a9e298c51e6cde3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 07:21:19 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a704835cfa8f9e98907533f46672093169d1d30b7a12b78cd1f2b04e7fdf6857-merged.mount: Deactivated successfully.
Feb 25 07:21:19 np0005629333 podman[270240]: 2026-02-25 12:21:19.914874771 +0000 UTC m=+0.080379208 container remove 6d6ce450c6a61c73352136d840e631b769581f934068cb2b8a9e298c51e6cde3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_taussig, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Feb 25 07:21:19 np0005629333 systemd[1]: libpod-conmon-6d6ce450c6a61c73352136d840e631b769581f934068cb2b8a9e298c51e6cde3.scope: Deactivated successfully.
Feb 25 07:21:20 np0005629333 nova_compute[244014]: 2026-02-25 12:21:20.006 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:21:20 np0005629333 podman[270317]: 2026-02-25 12:21:20.367999493 +0000 UTC m=+0.041604795 container create c700dcdde6be290bc869b103ad8ed3ec64388dbf7fe902c226434d1c8d91800d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mcclintock, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:21:20 np0005629333 systemd[1]: Started libpod-conmon-c700dcdde6be290bc869b103ad8ed3ec64388dbf7fe902c226434d1c8d91800d.scope.
Feb 25 07:21:20 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:21:20 np0005629333 podman[270317]: 2026-02-25 12:21:20.350655109 +0000 UTC m=+0.024260391 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:21:20 np0005629333 podman[270317]: 2026-02-25 12:21:20.466033312 +0000 UTC m=+0.139638624 container init c700dcdde6be290bc869b103ad8ed3ec64388dbf7fe902c226434d1c8d91800d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mcclintock, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:21:20 np0005629333 podman[270317]: 2026-02-25 12:21:20.473787823 +0000 UTC m=+0.147393125 container start c700dcdde6be290bc869b103ad8ed3ec64388dbf7fe902c226434d1c8d91800d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 07:21:20 np0005629333 cool_mcclintock[270333]: 167 167
Feb 25 07:21:20 np0005629333 systemd[1]: libpod-c700dcdde6be290bc869b103ad8ed3ec64388dbf7fe902c226434d1c8d91800d.scope: Deactivated successfully.
Feb 25 07:21:20 np0005629333 podman[270317]: 2026-02-25 12:21:20.4845912 +0000 UTC m=+0.158196562 container attach c700dcdde6be290bc869b103ad8ed3ec64388dbf7fe902c226434d1c8d91800d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mcclintock, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 07:21:20 np0005629333 podman[270317]: 2026-02-25 12:21:20.485606019 +0000 UTC m=+0.159211321 container died c700dcdde6be290bc869b103ad8ed3ec64388dbf7fe902c226434d1c8d91800d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mcclintock, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:21:20 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5cf599176a1f13dedd81ee7480976519539bba4df1d99f13a2c7706028f8a2cd-merged.mount: Deactivated successfully.
Feb 25 07:21:20 np0005629333 podman[270317]: 2026-02-25 12:21:20.571566105 +0000 UTC m=+0.245171397 container remove c700dcdde6be290bc869b103ad8ed3ec64388dbf7fe902c226434d1c8d91800d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 07:21:20 np0005629333 systemd[1]: libpod-conmon-c700dcdde6be290bc869b103ad8ed3ec64388dbf7fe902c226434d1c8d91800d.scope: Deactivated successfully.
Feb 25 07:21:20 np0005629333 nova_compute[244014]: 2026-02-25 12:21:20.602 244018 DEBUG nova.objects.instance [None req-56d6117c-3768-4b91-a8e3-ea7e21e84709 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lazy-loading 'flavor' on Instance uuid 437f3047-f865-44f7-b16e-cddab230e873 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:21:20 np0005629333 nova_compute[244014]: 2026-02-25 12:21:20.630 244018 DEBUG oslo_concurrency.lockutils [None req-56d6117c-3768-4b91-a8e3-ea7e21e84709 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Acquiring lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:21:20 np0005629333 nova_compute[244014]: 2026-02-25 12:21:20.630 244018 DEBUG oslo_concurrency.lockutils [None req-56d6117c-3768-4b91-a8e3-ea7e21e84709 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Acquired lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:21:20 np0005629333 podman[270358]: 2026-02-25 12:21:20.752870643 +0000 UTC m=+0.053549164 container create 47a90c44379628c0ee6aad5836586b1d37ce63fd183742f701df39543b20a192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 07:21:20 np0005629333 systemd[1]: Started libpod-conmon-47a90c44379628c0ee6aad5836586b1d37ce63fd183742f701df39543b20a192.scope.
Feb 25 07:21:20 np0005629333 podman[270358]: 2026-02-25 12:21:20.723733094 +0000 UTC m=+0.024411675 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:21:20 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:21:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2706b7f0d90901817487c75296b5e97a601b3a179da50baadb9bd36dec774a11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:21:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2706b7f0d90901817487c75296b5e97a601b3a179da50baadb9bd36dec774a11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:21:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2706b7f0d90901817487c75296b5e97a601b3a179da50baadb9bd36dec774a11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:21:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2706b7f0d90901817487c75296b5e97a601b3a179da50baadb9bd36dec774a11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:21:20 np0005629333 podman[270358]: 2026-02-25 12:21:20.880998899 +0000 UTC m=+0.181677420 container init 47a90c44379628c0ee6aad5836586b1d37ce63fd183742f701df39543b20a192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 07:21:20 np0005629333 podman[270358]: 2026-02-25 12:21:20.890740516 +0000 UTC m=+0.191419027 container start 47a90c44379628c0ee6aad5836586b1d37ce63fd183742f701df39543b20a192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 07:21:20 np0005629333 podman[270358]: 2026-02-25 12:21:20.936084836 +0000 UTC m=+0.236763367 container attach 47a90c44379628c0ee6aad5836586b1d37ce63fd183742f701df39543b20a192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feynman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]: {
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:    "0": [
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:        {
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "devices": [
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "/dev/loop3"
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            ],
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "lv_name": "ceph_lv0",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "lv_size": "21470642176",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "name": "ceph_lv0",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "tags": {
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.cluster_name": "ceph",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.crush_device_class": "",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.encrypted": "0",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.objectstore": "bluestore",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.osd_id": "0",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.type": "block",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.vdo": "0",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.with_tpm": "0"
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            },
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "type": "block",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "vg_name": "ceph_vg0"
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:        }
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:    ],
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:    "1": [
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:        {
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "devices": [
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "/dev/loop4"
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            ],
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "lv_name": "ceph_lv1",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "lv_size": "21470642176",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "name": "ceph_lv1",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "tags": {
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.cluster_name": "ceph",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.crush_device_class": "",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.encrypted": "0",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.objectstore": "bluestore",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.osd_id": "1",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.type": "block",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.vdo": "0",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.with_tpm": "0"
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            },
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "type": "block",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "vg_name": "ceph_vg1"
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:        }
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:    ],
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:    "2": [
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:        {
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "devices": [
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "/dev/loop5"
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            ],
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "lv_name": "ceph_lv2",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "lv_size": "21470642176",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "name": "ceph_lv2",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "tags": {
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.cluster_name": "ceph",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.crush_device_class": "",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.encrypted": "0",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.objectstore": "bluestore",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.osd_id": "2",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.type": "block",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.vdo": "0",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:                "ceph.with_tpm": "0"
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            },
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "type": "block",
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:            "vg_name": "ceph_vg2"
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:        }
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]:    ]
Feb 25 07:21:21 np0005629333 sweet_feynman[270374]: }
Feb 25 07:21:21 np0005629333 systemd[1]: libpod-47a90c44379628c0ee6aad5836586b1d37ce63fd183742f701df39543b20a192.scope: Deactivated successfully.
Feb 25 07:21:21 np0005629333 podman[270358]: 2026-02-25 12:21:21.244869062 +0000 UTC m=+0.545547583 container died 47a90c44379628c0ee6aad5836586b1d37ce63fd183742f701df39543b20a192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 07:21:21 np0005629333 systemd[1]: var-lib-containers-storage-overlay-2706b7f0d90901817487c75296b5e97a601b3a179da50baadb9bd36dec774a11-merged.mount: Deactivated successfully.
Feb 25 07:21:21 np0005629333 podman[270358]: 2026-02-25 12:21:21.355111097 +0000 UTC m=+0.655789608 container remove 47a90c44379628c0ee6aad5836586b1d37ce63fd183742f701df39543b20a192 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:21:21 np0005629333 systemd[1]: libpod-conmon-47a90c44379628c0ee6aad5836586b1d37ce63fd183742f701df39543b20a192.scope: Deactivated successfully.
Feb 25 07:21:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 312 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 101 op/s
Feb 25 07:21:21 np0005629333 podman[270459]: 2026-02-25 12:21:21.89891925 +0000 UTC m=+0.055959323 container create 06f1ae898d8b0e52f69756d86c460d7ef94c07a752a313c02adf77ab808c1ab4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 07:21:21 np0005629333 systemd[1]: Started libpod-conmon-06f1ae898d8b0e52f69756d86c460d7ef94c07a752a313c02adf77ab808c1ab4.scope.
Feb 25 07:21:21 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:21:21 np0005629333 podman[270459]: 2026-02-25 12:21:21.875313268 +0000 UTC m=+0.032353321 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:21:21 np0005629333 podman[270459]: 2026-02-25 12:21:21.983729743 +0000 UTC m=+0.140769786 container init 06f1ae898d8b0e52f69756d86c460d7ef94c07a752a313c02adf77ab808c1ab4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_greider, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 07:21:21 np0005629333 podman[270459]: 2026-02-25 12:21:21.990891467 +0000 UTC m=+0.147931500 container start 06f1ae898d8b0e52f69756d86c460d7ef94c07a752a313c02adf77ab808c1ab4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_greider, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:21:21 np0005629333 podman[270459]: 2026-02-25 12:21:21.994185861 +0000 UTC m=+0.151225894 container attach 06f1ae898d8b0e52f69756d86c460d7ef94c07a752a313c02adf77ab808c1ab4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:21:21 np0005629333 busy_greider[270477]: 167 167
Feb 25 07:21:21 np0005629333 systemd[1]: libpod-06f1ae898d8b0e52f69756d86c460d7ef94c07a752a313c02adf77ab808c1ab4.scope: Deactivated successfully.
Feb 25 07:21:21 np0005629333 podman[270459]: 2026-02-25 12:21:21.995762865 +0000 UTC m=+0.152802918 container died 06f1ae898d8b0e52f69756d86c460d7ef94c07a752a313c02adf77ab808c1ab4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_greider, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:21:22 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ff07ea1af8a0570ffdc424bb66ec63f83a59b888fa552fe7522b0790e1d618ad-merged.mount: Deactivated successfully.
Feb 25 07:21:22 np0005629333 podman[270459]: 2026-02-25 12:21:22.036458913 +0000 UTC m=+0.193498946 container remove 06f1ae898d8b0e52f69756d86c460d7ef94c07a752a313c02adf77ab808c1ab4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_greider, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 07:21:22 np0005629333 podman[270473]: 2026-02-25 12:21:22.041549338 +0000 UTC m=+0.091819743 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223)
Feb 25 07:21:22 np0005629333 systemd[1]: libpod-conmon-06f1ae898d8b0e52f69756d86c460d7ef94c07a752a313c02adf77ab808c1ab4.scope: Deactivated successfully.
Feb 25 07:21:22 np0005629333 podman[270476]: 2026-02-25 12:21:22.069799682 +0000 UTC m=+0.120054657 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:21:22 np0005629333 podman[270543]: 2026-02-25 12:21:22.198549385 +0000 UTC m=+0.050824017 container create 30659966bd59b0833059d7fdc927d7ef433906b6db09919df86860db4237ba80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_poitras, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 07:21:22 np0005629333 systemd[1]: Started libpod-conmon-30659966bd59b0833059d7fdc927d7ef433906b6db09919df86860db4237ba80.scope.
Feb 25 07:21:22 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:21:22 np0005629333 podman[270543]: 2026-02-25 12:21:22.174493351 +0000 UTC m=+0.026768063 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:21:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc00c6d31d524c897c7daf794ea352f1e078c5150d746747ba92a791b18c930/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:21:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc00c6d31d524c897c7daf794ea352f1e078c5150d746747ba92a791b18c930/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:21:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc00c6d31d524c897c7daf794ea352f1e078c5150d746747ba92a791b18c930/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:21:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7cc00c6d31d524c897c7daf794ea352f1e078c5150d746747ba92a791b18c930/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:21:22 np0005629333 podman[270543]: 2026-02-25 12:21:22.29045793 +0000 UTC m=+0.142732602 container init 30659966bd59b0833059d7fdc927d7ef433906b6db09919df86860db4237ba80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 07:21:22 np0005629333 podman[270543]: 2026-02-25 12:21:22.304122029 +0000 UTC m=+0.156396651 container start 30659966bd59b0833059d7fdc927d7ef433906b6db09919df86860db4237ba80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_poitras, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 07:21:22 np0005629333 podman[270543]: 2026-02-25 12:21:22.307739682 +0000 UTC m=+0.160014334 container attach 30659966bd59b0833059d7fdc927d7ef433906b6db09919df86860db4237ba80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 07:21:22 np0005629333 nova_compute[244014]: 2026-02-25 12:21:22.433 244018 DEBUG nova.network.neutron [None req-56d6117c-3768-4b91-a8e3-ea7e21e84709 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:21:22 np0005629333 nova_compute[244014]: 2026-02-25 12:21:22.585 244018 DEBUG nova.compute.manager [req-7cb9db27-5d66-41a4-9635-1c09fa104f20 req-f5152018-50c7-4818-b3eb-5df30af034b5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Received event network-changed-6c0083e1-a97b-4de8-aa7b-14217c45376f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:21:22 np0005629333 nova_compute[244014]: 2026-02-25 12:21:22.585 244018 DEBUG nova.compute.manager [req-7cb9db27-5d66-41a4-9635-1c09fa104f20 req-f5152018-50c7-4818-b3eb-5df30af034b5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Refreshing instance network info cache due to event network-changed-6c0083e1-a97b-4de8-aa7b-14217c45376f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:21:22 np0005629333 nova_compute[244014]: 2026-02-25 12:21:22.586 244018 DEBUG oslo_concurrency.lockutils [req-7cb9db27-5d66-41a4-9635-1c09fa104f20 req-f5152018-50c7-4818-b3eb-5df30af034b5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:21:22 np0005629333 lvm[270640]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:21:22 np0005629333 lvm[270638]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:21:22 np0005629333 lvm[270640]: VG ceph_vg1 finished
Feb 25 07:21:22 np0005629333 lvm[270638]: VG ceph_vg0 finished
Feb 25 07:21:22 np0005629333 lvm[270642]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:21:22 np0005629333 lvm[270642]: VG ceph_vg2 finished
Feb 25 07:21:22 np0005629333 lvm[270643]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:21:22 np0005629333 lvm[270643]: VG ceph_vg0 finished
Feb 25 07:21:22 np0005629333 nova_compute[244014]: 2026-02-25 12:21:22.948 244018 DEBUG oslo_concurrency.lockutils [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "interface-51d1d661-89db-4958-a2f4-c299ee997cde-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:21:22 np0005629333 nova_compute[244014]: 2026-02-25 12:21:22.948 244018 DEBUG oslo_concurrency.lockutils [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-51d1d661-89db-4958-a2f4-c299ee997cde-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:21:22 np0005629333 nova_compute[244014]: 2026-02-25 12:21:22.948 244018 DEBUG nova.objects.instance [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'flavor' on Instance uuid 51d1d661-89db-4958-a2f4-c299ee997cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:21:23 np0005629333 nova_compute[244014]: 2026-02-25 12:21:23.025 244018 DEBUG nova.objects.instance [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'pci_requests' on Instance uuid 51d1d661-89db-4958-a2f4-c299ee997cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:21:23 np0005629333 nova_compute[244014]: 2026-02-25 12:21:23.068 244018 DEBUG nova.network.neutron [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:21:23 np0005629333 happy_poitras[270561]: {}
Feb 25 07:21:23 np0005629333 systemd[1]: libpod-30659966bd59b0833059d7fdc927d7ef433906b6db09919df86860db4237ba80.scope: Deactivated successfully.
Feb 25 07:21:23 np0005629333 podman[270543]: 2026-02-25 12:21:23.094370583 +0000 UTC m=+0.946645205 container died 30659966bd59b0833059d7fdc927d7ef433906b6db09919df86860db4237ba80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 07:21:23 np0005629333 systemd[1]: libpod-30659966bd59b0833059d7fdc927d7ef433906b6db09919df86860db4237ba80.scope: Consumed 1.046s CPU time.
Feb 25 07:21:23 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7cc00c6d31d524c897c7daf794ea352f1e078c5150d746747ba92a791b18c930-merged.mount: Deactivated successfully.
Feb 25 07:21:23 np0005629333 podman[270543]: 2026-02-25 12:21:23.211427154 +0000 UTC m=+1.063701806 container remove 30659966bd59b0833059d7fdc927d7ef433906b6db09919df86860db4237ba80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_poitras, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Feb 25 07:21:23 np0005629333 systemd[1]: libpod-conmon-30659966bd59b0833059d7fdc927d7ef433906b6db09919df86860db4237ba80.scope: Deactivated successfully.
Feb 25 07:21:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:21:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:21:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:21:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:21:23 np0005629333 nova_compute[244014]: 2026-02-25 12:21:23.479 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:21:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 312 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 19 KiB/s wr, 106 op/s
Feb 25 07:21:23 np0005629333 nova_compute[244014]: 2026-02-25 12:21:23.978 244018 DEBUG nova.policy [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea407839a07d46608b6348caf676d12d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6a771ad0ce454d809d66825f69248fa7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:21:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:21:24 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:21:24 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:21:24 np0005629333 nova_compute[244014]: 2026-02-25 12:21:24.747 244018 DEBUG nova.network.neutron [None req-56d6117c-3768-4b91-a8e3-ea7e21e84709 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Updating instance_info_cache with network_info: [{"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:21:24 np0005629333 nova_compute[244014]: 2026-02-25 12:21:24.771 244018 DEBUG oslo_concurrency.lockutils [None req-56d6117c-3768-4b91-a8e3-ea7e21e84709 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Releasing lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:21:24 np0005629333 nova_compute[244014]: 2026-02-25 12:21:24.772 244018 DEBUG nova.compute.manager [None req-56d6117c-3768-4b91-a8e3-ea7e21e84709 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Feb 25 07:21:24 np0005629333 nova_compute[244014]: 2026-02-25 12:21:24.772 244018 DEBUG nova.compute.manager [None req-56d6117c-3768-4b91-a8e3-ea7e21e84709 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] network_info to inject: |[{"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Feb 25 07:21:24 np0005629333 nova_compute[244014]: 2026-02-25 12:21:24.776 244018 DEBUG oslo_concurrency.lockutils [req-7cb9db27-5d66-41a4-9635-1c09fa104f20 req-f5152018-50c7-4818-b3eb-5df30af034b5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:21:24 np0005629333 nova_compute[244014]: 2026-02-25 12:21:24.776 244018 DEBUG nova.network.neutron [req-7cb9db27-5d66-41a4-9635-1c09fa104f20 req-f5152018-50c7-4818-b3eb-5df30af034b5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Refreshing network info cache for port 6c0083e1-a97b-4de8-aa7b-14217c45376f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:21:25 np0005629333 nova_compute[244014]: 2026-02-25 12:21:25.008 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:21:25 np0005629333 nova_compute[244014]: 2026-02-25 12:21:25.265 244018 DEBUG nova.network.neutron [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Successfully created port: 31f40ed6-505b-4061-b861-ea2720b0ff62 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:21:25 np0005629333 nova_compute[244014]: 2026-02-25 12:21:25.558 244018 DEBUG oslo_concurrency.lockutils [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Acquiring lock "437f3047-f865-44f7-b16e-cddab230e873" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:21:25 np0005629333 nova_compute[244014]: 2026-02-25 12:21:25.559 244018 DEBUG oslo_concurrency.lockutils [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:21:25 np0005629333 nova_compute[244014]: 2026-02-25 12:21:25.559 244018 DEBUG oslo_concurrency.lockutils [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Acquiring lock "437f3047-f865-44f7-b16e-cddab230e873-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:21:25 np0005629333 nova_compute[244014]: 2026-02-25 12:21:25.559 244018 DEBUG oslo_concurrency.lockutils [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:21:25 np0005629333 nova_compute[244014]: 2026-02-25 12:21:25.559 244018 DEBUG oslo_concurrency.lockutils [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:21:25 np0005629333 nova_compute[244014]: 2026-02-25 12:21:25.560 244018 INFO nova.compute.manager [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Terminating instance
Feb 25 07:21:25 np0005629333 nova_compute[244014]: 2026-02-25 12:21:25.561 244018 DEBUG nova.compute.manager [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 07:21:25 np0005629333 kernel: tap6c0083e1-a9 (unregistering): left promiscuous mode
Feb 25 07:21:25 np0005629333 NetworkManager[49836]: <info>  [1772022085.6115] device (tap6c0083e1-a9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:21:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:25Z|00177|binding|INFO|Releasing lport 6c0083e1-a97b-4de8-aa7b-14217c45376f from this chassis (sb_readonly=0)
Feb 25 07:21:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:25Z|00178|binding|INFO|Setting lport 6c0083e1-a97b-4de8-aa7b-14217c45376f down in Southbound
Feb 25 07:21:25 np0005629333 nova_compute[244014]: 2026-02-25 12:21:25.620 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:21:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:25Z|00179|binding|INFO|Removing iface tap6c0083e1-a9 ovn-installed in OVS
Feb 25 07:21:25 np0005629333 nova_compute[244014]: 2026-02-25 12:21:25.623 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:21:25 np0005629333 nova_compute[244014]: 2026-02-25 12:21:25.633 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:21:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:25.634 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:e1:9a 10.100.0.6'], port_security=['fa:16:3e:b5:e1:9a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '437f3047-f865-44f7-b16e-cddab230e873', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08d2df9f-b68a-40d4-a888-4f15fc0b5403', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28df138f3ff744a09c7e79a179881f21', 'neutron:revision_number': '6', 'neutron:security_group_ids': '142094f4-7b01-4079-a840-811d10ccd4e6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fff7b783-3fdd-403e-b061-0aa11a082612, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6c0083e1-a97b-4de8-aa7b-14217c45376f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:21:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:25.636 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6c0083e1-a97b-4de8-aa7b-14217c45376f in datapath 08d2df9f-b68a-40d4-a888-4f15fc0b5403 unbound from our chassis
Feb 25 07:21:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:25.638 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08d2df9f-b68a-40d4-a888-4f15fc0b5403, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 07:21:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:25.640 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5532ce03-0cd7-4d05-96c6-3fa93f783f39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:21:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:25.640 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403 namespace which is not needed anymore
Feb 25 07:21:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 312 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 17 KiB/s wr, 92 op/s
Feb 25 07:21:25 np0005629333 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Feb 25 07:21:25 np0005629333 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001a.scope: Consumed 13.923s CPU time.
Feb 25 07:21:25 np0005629333 systemd-machined[210048]: Machine qemu-30-instance-0000001a terminated.
Feb 25 07:21:25 np0005629333 neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403[268630]: [NOTICE]   (268639) : haproxy version is 2.8.14-c23fe91
Feb 25 07:21:25 np0005629333 neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403[268630]: [NOTICE]   (268639) : path to executable is /usr/sbin/haproxy
Feb 25 07:21:25 np0005629333 neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403[268630]: [WARNING]  (268639) : Exiting Master process...
Feb 25 07:21:25 np0005629333 neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403[268630]: [WARNING]  (268639) : Exiting Master process...
Feb 25 07:21:25 np0005629333 neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403[268630]: [ALERT]    (268639) : Current worker (268641) exited with code 143 (Terminated)
Feb 25 07:21:25 np0005629333 neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403[268630]: [WARNING]  (268639) : All workers exited. Exiting... (0)
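
The code 143 in the ALERT line is the usual 128+signal convention: the worker was killed by SIGTERM (15) as part of the container stop, not by a crash. Decoding it in Python:

    import signal

    status = 143  # worker exit code from the haproxy ALERT above
    if status > 128:
        print(signal.Signals(status - 128).name)  # -> SIGTERM
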
Feb 25 07:21:25 np0005629333 systemd[1]: libpod-61c9207e39c8e0bfbde648c37934d17a08b357dc2b9565a55f5bd1882fd7d3b7.scope: Deactivated successfully.
Feb 25 07:21:25 np0005629333 nova_compute[244014]: 2026-02-25 12:21:25.803 244018 INFO nova.virt.libvirt.driver [-] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Instance destroyed successfully.#033[00m
Feb 25 07:21:25 np0005629333 nova_compute[244014]: 2026-02-25 12:21:25.804 244018 DEBUG nova.objects.instance [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lazy-loading 'resources' on Instance uuid 437f3047-f865-44f7-b16e-cddab230e873 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:21:25 np0005629333 podman[270706]: 2026-02-25 12:21:25.810266316 +0000 UTC m=+0.063071396 container died 61c9207e39c8e0bfbde648c37934d17a08b357dc2b9565a55f5bd1882fd7d3b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 07:21:25 np0005629333 nova_compute[244014]: 2026-02-25 12:21:25.828 244018 DEBUG nova.virt.libvirt.vif [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1111859970',display_name='tempest-AttachInterfacesUnderV243Test-server-1111859970',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1111859970',id=26,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLXFN7zC5myw9D/AepfFmxAXWRK8TzW2Ph47f455gtFzF5f9ZUF8tii8j35Vr/zY8abYSaTZMXBunxHe79g4nB5q7Pdu+4WcSGnU0sarcpQHiHET7AGRM6ljsAzRNmAesA==',key_name='tempest-keypair-1280464142',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='28df138f3ff744a09c7e79a179881f21',ramdisk_id='',reservation_id='r-t6tk786a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-1754830019',owner_user_name='tempest-AttachInterfacesUnderV243Test-1754830019-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:21:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='10c5d76df8d046e8858030b12b7affa1',uuid=437f3047-f865-44f7-b16e-cddab230e873,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:21:25 np0005629333 nova_compute[244014]: 2026-02-25 12:21:25.829 244018 DEBUG nova.network.os_vif_util [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Converting VIF {"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:21:25 np0005629333 nova_compute[244014]: 2026-02-25 12:21:25.830 244018 DEBUG nova.network.os_vif_util [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b5:e1:9a,bridge_name='br-int',has_traffic_filtering=True,id=6c0083e1-a97b-4de8-aa7b-14217c45376f,network=Network(08d2df9f-b68a-40d4-a888-4f15fc0b5403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c0083e1-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:21:25 np0005629333 nova_compute[244014]: 2026-02-25 12:21:25.831 244018 DEBUG os_vif [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:e1:9a,bridge_name='br-int',has_traffic_filtering=True,id=6c0083e1-a97b-4de8-aa7b-14217c45376f,network=Network(08d2df9f-b68a-40d4-a888-4f15fc0b5403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c0083e1-a9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:21:25 np0005629333 nova_compute[244014]: 2026-02-25 12:21:25.834 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:25 np0005629333 nova_compute[244014]: 2026-02-25 12:21:25.835 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c0083e1-a9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:21:25 np0005629333 nova_compute[244014]: 2026-02-25 12:21:25.837 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:25 np0005629333 nova_compute[244014]: 2026-02-25 12:21:25.840 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:21:25 np0005629333 nova_compute[244014]: 2026-02-25 12:21:25.844 244018 INFO os_vif [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:e1:9a,bridge_name='br-int',has_traffic_filtering=True,id=6c0083e1-a97b-4de8-aa7b-14217c45376f,network=Network(08d2df9f-b68a-40d4-a888-4f15fc0b5403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c0083e1-a9')#033[00m
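
The whole unplug reduces to the single ovsdbapp transaction shown above (DelPortCommand against br-int). Roughly the same operation scripted directly, assuming ovsdbapp and the default local OVS database socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Local switch; adjust the socket path for your deployment.
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Equivalent of the DelPortCommand in the transaction above.
    api.del_port('tap6c0083e1-a9', bridge='br-int',
                 if_exists=True).execute(check_error=True)
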
Feb 25 07:21:25 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-61c9207e39c8e0bfbde648c37934d17a08b357dc2b9565a55f5bd1882fd7d3b7-userdata-shm.mount: Deactivated successfully.
Feb 25 07:21:25 np0005629333 systemd[1]: var-lib-containers-storage-overlay-95e3b4792fbae073d4d0de8c0eb2f97b209d111264ecc91aaecd534959142ed9-merged.mount: Deactivated successfully.
Feb 25 07:21:25 np0005629333 podman[270706]: 2026-02-25 12:21:25.863518241 +0000 UTC m=+0.116323321 container cleanup 61c9207e39c8e0bfbde648c37934d17a08b357dc2b9565a55f5bd1882fd7d3b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 07:21:25 np0005629333 systemd[1]: libpod-conmon-61c9207e39c8e0bfbde648c37934d17a08b357dc2b9565a55f5bd1882fd7d3b7.scope: Deactivated successfully.
Feb 25 07:21:25 np0005629333 podman[270763]: 2026-02-25 12:21:25.940572563 +0000 UTC m=+0.052612458 container remove 61c9207e39c8e0bfbde648c37934d17a08b357dc2b9565a55f5bd1882fd7d3b7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:21:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:25.947 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ee192504-604b-4a0c-a252-ac46c87ef2ee]: (4, ('Wed Feb 25 12:21:25 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403 (61c9207e39c8e0bfbde648c37934d17a08b357dc2b9565a55f5bd1882fd7d3b7)\n61c9207e39c8e0bfbde648c37934d17a08b357dc2b9565a55f5bd1882fd7d3b7\nWed Feb 25 12:21:25 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403 (61c9207e39c8e0bfbde648c37934d17a08b357dc2b9565a55f5bd1882fd7d3b7)\n61c9207e39c8e0bfbde648c37934d17a08b357dc2b9565a55f5bd1882fd7d3b7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:25.950 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4fd66c4e-3c69-4421-af0a-1ecf0ca1b9f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:25.951 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08d2df9f-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:21:25 np0005629333 nova_compute[244014]: 2026-02-25 12:21:25.954 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:25 np0005629333 kernel: tap08d2df9f-b0: left promiscuous mode
Feb 25 07:21:25 np0005629333 nova_compute[244014]: 2026-02-25 12:21:25.967 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:25.972 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e7d06010-83b4-4d5e-bd6b-992be2acaa39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:25 np0005629333 nova_compute[244014]: 2026-02-25 12:21:25.983 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:25.990 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5368e769-e188-4b89-aa6e-b1e07474e6cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:25.991 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[842ef611-dd6c-460e-a0d5-44bdae08d2c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:26.007 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[781c26ca-f2a3-435f-9923-fa3e8f85d998]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402115, 'reachable_time': 27684, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270779, 'error': None, 'target': 'ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:26 np0005629333 systemd[1]: run-netns-ovnmeta\x2d08d2df9f\x2db68a\x2d40d4\x2da888\x2d4f15fc0b5403.mount: Deactivated successfully.
Feb 25 07:21:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:26.013 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:21:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:26.013 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[138ed9aa-6e90-41ee-94e8-14ece411663e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
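
remove_netns in neutron's privileged ip_lib is a thin wrapper over pyroute2, executed inside the privsep daemon because unlinking a namespace needs CAP_SYS_ADMIN. A minimal equivalent, assuming pyroute2 is available:

    from pyroute2 import netns

    NS = 'ovnmeta-08d2df9f-b68a-40d4-a888-4f15fc0b5403'  # from the log above
    # Requires root/CAP_SYS_ADMIN, which is why neutron routes this
    # call through oslo.privsep rather than doing it in-process.
    if NS in netns.listnetns():
        netns.remove(NS)
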
Feb 25 07:21:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:26Z|00180|binding|INFO|Releasing lport ef44c128-3fa4-4475-b63c-4818a50ede40 from this chassis (sb_readonly=0)
Feb 25 07:21:26 np0005629333 nova_compute[244014]: 2026-02-25 12:21:26.137 244018 INFO nova.virt.libvirt.driver [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Deleting instance files /var/lib/nova/instances/437f3047-f865-44f7-b16e-cddab230e873_del#033[00m
Feb 25 07:21:26 np0005629333 nova_compute[244014]: 2026-02-25 12:21:26.138 244018 INFO nova.virt.libvirt.driver [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Deletion of /var/lib/nova/instances/437f3047-f865-44f7-b16e-cddab230e873_del complete#033[00m
Feb 25 07:21:26 np0005629333 nova_compute[244014]: 2026-02-25 12:21:26.172 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:26 np0005629333 nova_compute[244014]: 2026-02-25 12:21:26.277 244018 INFO nova.compute.manager [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:21:26 np0005629333 nova_compute[244014]: 2026-02-25 12:21:26.278 244018 DEBUG oslo.service.loopingcall [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:21:26 np0005629333 nova_compute[244014]: 2026-02-25 12:21:26.278 244018 DEBUG nova.compute.manager [-] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:21:26 np0005629333 nova_compute[244014]: 2026-02-25 12:21:26.278 244018 DEBUG nova.network.neutron [-] [instance: 437f3047-f865-44f7-b16e-cddab230e873] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:21:26 np0005629333 nova_compute[244014]: 2026-02-25 12:21:26.536 244018 DEBUG nova.network.neutron [req-7cb9db27-5d66-41a4-9635-1c09fa104f20 req-f5152018-50c7-4818-b3eb-5df30af034b5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Updated VIF entry in instance network info cache for port 6c0083e1-a97b-4de8-aa7b-14217c45376f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:21:26 np0005629333 nova_compute[244014]: 2026-02-25 12:21:26.537 244018 DEBUG nova.network.neutron [req-7cb9db27-5d66-41a4-9635-1c09fa104f20 req-f5152018-50c7-4818-b3eb-5df30af034b5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Updating instance_info_cache with network_info: [{"id": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "address": "fa:16:3e:b5:e1:9a", "network": {"id": "08d2df9f-b68a-40d4-a888-4f15fc0b5403", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-223256236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28df138f3ff744a09c7e79a179881f21", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c0083e1-a9", "ovs_interfaceid": "6c0083e1-a97b-4de8-aa7b-14217c45376f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:21:26 np0005629333 nova_compute[244014]: 2026-02-25 12:21:26.563 244018 DEBUG oslo_concurrency.lockutils [req-7cb9db27-5d66-41a4-9635-1c09fa104f20 req-f5152018-50c7-4818-b3eb-5df30af034b5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-437f3047-f865-44f7-b16e-cddab230e873" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:21:26 np0005629333 nova_compute[244014]: 2026-02-25 12:21:26.765 244018 DEBUG nova.network.neutron [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Successfully updated port: 31f40ed6-505b-4061-b861-ea2720b0ff62 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:21:26 np0005629333 nova_compute[244014]: 2026-02-25 12:21:26.786 244018 DEBUG oslo_concurrency.lockutils [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:21:26 np0005629333 nova_compute[244014]: 2026-02-25 12:21:26.786 244018 DEBUG oslo_concurrency.lockutils [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquired lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:21:26 np0005629333 nova_compute[244014]: 2026-02-25 12:21:26.787 244018 DEBUG nova.network.neutron [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:21:26 np0005629333 nova_compute[244014]: 2026-02-25 12:21:26.943 244018 WARNING nova.network.neutron [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] 08121372-a435-401a-b405-778e10d8c2e2 already exists in list: networks containing: ['08121372-a435-401a-b405-778e10d8c2e2']. ignoring it#033[00m
Feb 25 07:21:27 np0005629333 nova_compute[244014]: 2026-02-25 12:21:27.405 244018 DEBUG nova.compute.manager [req-7085bb1a-c0de-4889-9666-928d5883a506 req-2e8757e8-9ec4-437b-881a-f2ed8db14b06 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Received event network-vif-unplugged-6c0083e1-a97b-4de8-aa7b-14217c45376f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:21:27 np0005629333 nova_compute[244014]: 2026-02-25 12:21:27.406 244018 DEBUG oslo_concurrency.lockutils [req-7085bb1a-c0de-4889-9666-928d5883a506 req-2e8757e8-9ec4-437b-881a-f2ed8db14b06 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "437f3047-f865-44f7-b16e-cddab230e873-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:21:27 np0005629333 nova_compute[244014]: 2026-02-25 12:21:27.407 244018 DEBUG oslo_concurrency.lockutils [req-7085bb1a-c0de-4889-9666-928d5883a506 req-2e8757e8-9ec4-437b-881a-f2ed8db14b06 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:21:27 np0005629333 nova_compute[244014]: 2026-02-25 12:21:27.407 244018 DEBUG oslo_concurrency.lockutils [req-7085bb1a-c0de-4889-9666-928d5883a506 req-2e8757e8-9ec4-437b-881a-f2ed8db14b06 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:21:27 np0005629333 nova_compute[244014]: 2026-02-25 12:21:27.408 244018 DEBUG nova.compute.manager [req-7085bb1a-c0de-4889-9666-928d5883a506 req-2e8757e8-9ec4-437b-881a-f2ed8db14b06 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] No waiting events found dispatching network-vif-unplugged-6c0083e1-a97b-4de8-aa7b-14217c45376f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:21:27 np0005629333 nova_compute[244014]: 2026-02-25 12:21:27.408 244018 DEBUG nova.compute.manager [req-7085bb1a-c0de-4889-9666-928d5883a506 req-2e8757e8-9ec4-437b-881a-f2ed8db14b06 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Received event network-vif-unplugged-6c0083e1-a97b-4de8-aa7b-14217c45376f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:21:27 np0005629333 nova_compute[244014]: 2026-02-25 12:21:27.477 244018 DEBUG nova.network.neutron [-] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:21:27 np0005629333 nova_compute[244014]: 2026-02-25 12:21:27.502 244018 INFO nova.compute.manager [-] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Took 1.22 seconds to deallocate network for instance.#033[00m
Feb 25 07:21:27 np0005629333 nova_compute[244014]: 2026-02-25 12:21:27.564 244018 DEBUG oslo_concurrency.lockutils [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:21:27 np0005629333 nova_compute[244014]: 2026-02-25 12:21:27.565 244018 DEBUG oslo_concurrency.lockutils [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:21:27 np0005629333 nova_compute[244014]: 2026-02-25 12:21:27.649 244018 DEBUG oslo_concurrency.processutils [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:21:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 233 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 20 KiB/s wr, 117 op/s
Feb 25 07:21:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:21:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1603885954' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:21:28 np0005629333 nova_compute[244014]: 2026-02-25 12:21:28.256 244018 DEBUG oslo_concurrency.processutils [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
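
nova's RBD backend sizes its storage pool by shelling out to the same ceph df call (the 0.607s command above) and reading the cluster totals from the JSON. A standalone version of that probe, assuming the client.openstack keyring is readable:

    import json
    import subprocess

    cmd = ['ceph', 'df', '--format=json', '--id', 'openstack',
           '--conf', '/etc/ceph/ceph.conf']
    stats = json.loads(subprocess.run(cmd, capture_output=True, text=True,
                                      check=True).stdout)['stats']
    # For this cluster the pgmap lines above show 60 GiB / 60 GiB avail.
    print('%.0f GiB free of %.0f GiB' % (stats['total_avail_bytes'] / 2**30,
                                         stats['total_bytes'] / 2**30))
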
Feb 25 07:21:28 np0005629333 nova_compute[244014]: 2026-02-25 12:21:28.265 244018 DEBUG nova.compute.provider_tree [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:21:28 np0005629333 nova_compute[244014]: 2026-02-25 12:21:28.296 244018 DEBUG nova.scheduler.client.report [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
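
Placement turns that inventory into schedulable capacity as (total - reserved) * allocation_ratio per resource class; worked out for the figures this host just reported:

    # Capacity formula used by placement: (total - reserved) * allocation_ratio
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, round(cap, 1))
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
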
Feb 25 07:21:28 np0005629333 nova_compute[244014]: 2026-02-25 12:21:28.360 244018 DEBUG oslo_concurrency.lockutils [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:21:28 np0005629333 nova_compute[244014]: 2026-02-25 12:21:28.481 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:28 np0005629333 nova_compute[244014]: 2026-02-25 12:21:28.555 244018 INFO nova.scheduler.client.report [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Deleted allocations for instance 437f3047-f865-44f7-b16e-cddab230e873#033[00m
Feb 25 07:21:28 np0005629333 nova_compute[244014]: 2026-02-25 12:21:28.643 244018 DEBUG oslo_concurrency.lockutils [None req-57ddb482-4237-4ace-8ab5-4afbdfc3bd98 10c5d76df8d046e8858030b12b7affa1 28df138f3ff744a09c7e79a179881f21 - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
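
The compute_resources and per-instance lock lines throughout this trace come from oslo.concurrency's lockutils decorators, which log the waited/held durations at DEBUG. A sketch of the pattern (whether nova passes fair=True for this particular lock is release-dependent):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources', fair=True)
    def update_usage():
        pass  # critical section; held 0.795s in the log above

    update_usage()
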
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.226476) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022089226547, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 570, "num_deletes": 256, "total_data_size": 623513, "memory_usage": 635352, "flush_reason": "Manual Compaction"}
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022089232774, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 607103, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23170, "largest_seqno": 23739, "table_properties": {"data_size": 603944, "index_size": 1067, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 6997, "raw_average_key_size": 18, "raw_value_size": 597569, "raw_average_value_size": 1540, "num_data_blocks": 48, "num_entries": 388, "num_filter_entries": 388, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772022055, "oldest_key_time": 1772022055, "file_creation_time": 1772022089, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 6349 microseconds, and 2933 cpu microseconds.
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.232831) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 607103 bytes OK
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.232852) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.234887) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.234910) EVENT_LOG_v1 {"time_micros": 1772022089234903, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.234934) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 620311, prev total WAL file size 620311, number of live WAL files 2.
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.235544) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373535' seq:0, type:0; will stop at (end)
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(592KB)], [53(8527KB)]
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022089235586, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 9339619, "oldest_snapshot_seqno": -1}
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4736 keys, 9240141 bytes, temperature: kUnknown
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022089279881, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 9240141, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9205626, "index_size": 21605, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11845, "raw_key_size": 117774, "raw_average_key_size": 24, "raw_value_size": 9117440, "raw_average_value_size": 1925, "num_data_blocks": 903, "num_entries": 4736, "num_filter_entries": 4736, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772022089, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.280230) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 9240141 bytes
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.281484) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 210.2 rd, 208.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 8.3 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(30.6) write-amplify(15.2) OK, records in: 5263, records dropped: 527 output_compression: NoCompression
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.281523) EVENT_LOG_v1 {"time_micros": 1772022089281506, "job": 28, "event": "compaction_finished", "compaction_time_micros": 44426, "compaction_time_cpu_micros": 27256, "output_level": 6, "num_output_files": 1, "total_output_size": 9240141, "num_input_records": 5263, "num_output_records": 4736, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
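
Each EVENT_LOG_v1 record is one line of JSON after a fixed prefix, so flush and compaction history can be mined straight from the journal. A sketch that parses a record and reproduces the write-amplify(15.2) figure from the summary above (L6 bytes written per L0 byte flushed: table #55 was 607103 bytes, the L6 output 9240141):

    import json
    import re

    line = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1772022089281506, '
            '"job": 28, "event": "compaction_finished", '
            '"total_output_size": 9240141}')
    event = json.loads(re.search(r'EVENT_LOG_v1 (\{.*\})$', line).group(1))
    print(event['event'])               # compaction_finished
    print(round(9240141 / 607103, 1))   # 15.2, matching write-amplify(15.2)
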
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022089281931, "job": 28, "event": "table_file_deletion", "file_number": 55}
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022089283172, "job": 28, "event": "table_file_deletion", "file_number": 53}
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.235374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.283240) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.283245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.283246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.283248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:21:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:21:29.283249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:21:29 np0005629333 nova_compute[244014]: 2026-02-25 12:21:29.609 244018 DEBUG nova.compute.manager [req-9a2fe790-4cd3-48a5-8e22-6fa2b8392052 req-ed1db38b-bf62-4dc9-b61d-89c5ce4f5c17 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-changed-31f40ed6-505b-4061-b861-ea2720b0ff62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:21:29 np0005629333 nova_compute[244014]: 2026-02-25 12:21:29.610 244018 DEBUG nova.compute.manager [req-9a2fe790-4cd3-48a5-8e22-6fa2b8392052 req-ed1db38b-bf62-4dc9-b61d-89c5ce4f5c17 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Refreshing instance network info cache due to event network-changed-31f40ed6-505b-4061-b861-ea2720b0ff62. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:21:29 np0005629333 nova_compute[244014]: 2026-02-25 12:21:29.610 244018 DEBUG oslo_concurrency.lockutils [req-9a2fe790-4cd3-48a5-8e22-6fa2b8392052 req-ed1db38b-bf62-4dc9-b61d-89c5ce4f5c17 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:21:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 233 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 4.2 KiB/s wr, 103 op/s
Feb 25 07:21:29 np0005629333 nova_compute[244014]: 2026-02-25 12:21:29.812 244018 DEBUG nova.compute.manager [req-d96aea56-dc34-408a-a1e8-813289b7a18e req-cbad5e11-762b-41cc-bb9a-4d69d446eaae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Received event network-vif-plugged-6c0083e1-a97b-4de8-aa7b-14217c45376f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:21:29 np0005629333 nova_compute[244014]: 2026-02-25 12:21:29.813 244018 DEBUG oslo_concurrency.lockutils [req-d96aea56-dc34-408a-a1e8-813289b7a18e req-cbad5e11-762b-41cc-bb9a-4d69d446eaae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "437f3047-f865-44f7-b16e-cddab230e873-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:21:29 np0005629333 nova_compute[244014]: 2026-02-25 12:21:29.814 244018 DEBUG oslo_concurrency.lockutils [req-d96aea56-dc34-408a-a1e8-813289b7a18e req-cbad5e11-762b-41cc-bb9a-4d69d446eaae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:21:29 np0005629333 nova_compute[244014]: 2026-02-25 12:21:29.814 244018 DEBUG oslo_concurrency.lockutils [req-d96aea56-dc34-408a-a1e8-813289b7a18e req-cbad5e11-762b-41cc-bb9a-4d69d446eaae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "437f3047-f865-44f7-b16e-cddab230e873-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:21:29 np0005629333 nova_compute[244014]: 2026-02-25 12:21:29.815 244018 DEBUG nova.compute.manager [req-d96aea56-dc34-408a-a1e8-813289b7a18e req-cbad5e11-762b-41cc-bb9a-4d69d446eaae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] No waiting events found dispatching network-vif-plugged-6c0083e1-a97b-4de8-aa7b-14217c45376f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:21:29 np0005629333 nova_compute[244014]: 2026-02-25 12:21:29.815 244018 WARNING nova.compute.manager [req-d96aea56-dc34-408a-a1e8-813289b7a18e req-cbad5e11-762b-41cc-bb9a-4d69d446eaae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Received unexpected event network-vif-plugged-6c0083e1-a97b-4de8-aa7b-14217c45376f for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.288 244018 DEBUG nova.network.neutron [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updating instance_info_cache with network_info: [{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.343 244018 DEBUG oslo_concurrency.lockutils [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Releasing lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.345 244018 DEBUG oslo_concurrency.lockutils [req-9a2fe790-4cd3-48a5-8e22-6fa2b8392052 req-ed1db38b-bf62-4dc9-b61d-89c5ce4f5c17 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.346 244018 DEBUG nova.network.neutron [req-9a2fe790-4cd3-48a5-8e22-6fa2b8392052 req-ed1db38b-bf62-4dc9-b61d-89c5ce4f5c17 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Refreshing network info cache for port 31f40ed6-505b-4061-b861-ea2720b0ff62 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.350 244018 DEBUG nova.virt.libvirt.vif [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.351 244018 DEBUG nova.network.os_vif_util [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.352 244018 DEBUG nova.network.os_vif_util [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.353 244018 DEBUG os_vif [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
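The entries above are one unit of work: nova's VIF dict is converted to an os-vif VIFOpenVSwitch object (nova_to_osvif_vif) and then handed to os_vif.plug(). A minimal standalone sketch of that last call, with field values copied from the log; the InstanceInfo contents and the explicit plugin='ovs' are illustrative assumptions:

    # Sketch only: replays the os_vif.plug() call seen above with values from the log.
    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()  # load os-vif plugins, including 'ovs'

    inst = instance_info.InstanceInfo(
        uuid='51d1d661-89db-4958-a2f4-c299ee997cde',
        name='tempest-AttachInterfacesTestJSON-server-1386050966')

    v = vif.VIFOpenVSwitch(
        id='31f40ed6-505b-4061-b861-ea2720b0ff62',
        address='fa:16:3e:c1:c1:8f',
        plugin='ovs',
        vif_name='tap31f40ed6-50',
        bridge_name='br-int')

    os_vif.plug(v, inst)  # on success the service logs "Successfully plugged vif ..."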
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.354 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.354 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.355 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.358 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.358 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31f40ed6-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.359 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap31f40ed6-50, col_values=(('external_ids', {'iface-id': '31f40ed6-505b-4061-b861-ea2720b0ff62', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:c1:8f', 'vm-uuid': '51d1d661-89db-4958-a2f4-c299ee997cde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
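The AddBridgeCommand/AddPortCommand/DbSetCommand entries above are plain OVSDB transactions against the local ovsdb-server; the external_ids written here (iface-id, attached-mac, vm-uuid) are what lets ovn-controller claim the port a few lines later. A rough equivalent through ovsdbapp's public API, following its documented bootstrap (the endpoint and timeout are assumptions):

    # Sketch of the transaction logged above, not nova's actual code path.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap31f40ed6-50', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap31f40ed6-50',
            ('external_ids', {'iface-id': '31f40ed6-505b-4061-b861-ea2720b0ff62',
                              'iface-status': 'active',
                              'attached-mac': 'fa:16:3e:c1:c1:8f',
                              'vm-uuid': '51d1d661-89db-4958-a2f4-c299ee997cde'})))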
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.361 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:30 np0005629333 NetworkManager[49836]: <info>  [1772022090.3625] manager: (tap31f40ed6-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.369 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.371 244018 INFO os_vif [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50')#033[00m
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.373 244018 DEBUG nova.virt.libvirt.vif [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.373 244018 DEBUG nova.network.os_vif_util [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.374 244018 DEBUG nova.network.os_vif_util [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.379 244018 DEBUG nova.virt.libvirt.guest [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] attach device xml: <interface type="ethernet">
Feb 25 07:21:30 np0005629333 nova_compute[244014]:  <mac address="fa:16:3e:c1:c1:8f"/>
Feb 25 07:21:30 np0005629333 nova_compute[244014]:  <model type="virtio"/>
Feb 25 07:21:30 np0005629333 nova_compute[244014]:  <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:21:30 np0005629333 nova_compute[244014]:  <mtu size="1442"/>
Feb 25 07:21:30 np0005629333 nova_compute[244014]:  <target dev="tap31f40ed6-50"/>
Feb 25 07:21:30 np0005629333 nova_compute[244014]: </interface>
Feb 25 07:21:30 np0005629333 nova_compute[244014]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
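The XML block above is what nova hands to libvirt for the hot-plug. Replaying the same attach with the libvirt-python bindings would look roughly like this sketch (connection URI and flags are assumptions; nova applies the device to the live domain):

    # Sketch: hot-plug the logged <interface> definition into the running guest.
    import libvirt

    IFACE_XML = """<interface type="ethernet">
      <mac address="fa:16:3e:c1:c1:8f"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tap31f40ed6-50"/>
    </interface>"""

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('51d1d661-89db-4958-a2f4-c299ee997cde')
    dom.attachDeviceFlags(IFACE_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)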
Feb 25 07:21:30 np0005629333 kernel: tap31f40ed6-50: entered promiscuous mode
Feb 25 07:21:30 np0005629333 NetworkManager[49836]: <info>  [1772022090.3952] manager: (tap31f40ed6-50): new Tun device (/org/freedesktop/NetworkManager/Devices/89)
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.396 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:30 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:30Z|00181|binding|INFO|Claiming lport 31f40ed6-505b-4061-b861-ea2720b0ff62 for this chassis.
Feb 25 07:21:30 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:30Z|00182|binding|INFO|31f40ed6-505b-4061-b861-ea2720b0ff62: Claiming fa:16:3e:c1:c1:8f 10.100.0.10
Feb 25 07:21:30 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:30Z|00183|binding|INFO|Setting lport 31f40ed6-505b-4061-b861-ea2720b0ff62 ovn-installed in OVS
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.409 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:30 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:30Z|00184|binding|INFO|Setting lport 31f40ed6-505b-4061-b861-ea2720b0ff62 up in Southbound
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.413 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:30.415 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:c1:8f 10.100.0.10'], port_security=['fa:16:3e:c1:c1:8f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '51d1d661-89db-4958-a2f4-c299ee997cde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '078dca40-137f-4eb6-953b-2ae25d0b4ca3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=31f40ed6-505b-4061-b861-ea2720b0ff62) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:21:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:30.418 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 31f40ed6-505b-4061-b861-ea2720b0ff62 in datapath 08121372-a435-401a-b405-778e10d8c2e2 bound to our chassis#033[00m
Feb 25 07:21:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:30.420 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08121372-a435-401a-b405-778e10d8c2e2#033[00m
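The PortBindingUpdatedEvent match above is ovsdbapp's row-event machinery at work: the agent subscribes to Port_Binding updates and provisions metadata when a port lands on its chassis. An illustrative event class of the same shape (the match_fn body here is an assumption, not the agent's actual implementation):

    # Sketch of an ovsdbapp row event like the one matched above.
    from ovsdbapp.backend.ovs_idl import event

    class PortBoundToChassisEvent(event.RowEvent):
        def __init__(self, chassis_name):
            self.chassis_name = chassis_name
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event_, row, old=None):
            # Fire only when the port has just been bound to our chassis.
            chassis = getattr(row, 'chassis', [])
            return bool(chassis) and chassis[0].name == self.chassis_name

        def run(self, event_, row, old):
            print('port %s bound to our chassis' % row.logical_port)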
Feb 25 07:21:30 np0005629333 systemd-udevd[270814]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:21:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:30.435 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c7257c36-a14c-4f61-a25a-2060cbe1d022]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:30 np0005629333 NetworkManager[49836]: <info>  [1772022090.4489] device (tap31f40ed6-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:21:30 np0005629333 NetworkManager[49836]: <info>  [1772022090.4501] device (tap31f40ed6-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:21:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:30.468 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[225b32c8-f340-47d4-a727-7783091a70b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:30.472 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0219bdb2-a405-414a-910e-47abb8218e49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.493 244018 DEBUG nova.virt.libvirt.driver [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.494 244018 DEBUG nova.virt.libvirt.driver [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.495 244018 DEBUG nova.virt.libvirt.driver [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:4a:d5:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.495 244018 DEBUG nova.virt.libvirt.driver [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:c1:c1:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:21:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:30.503 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ce53b0dc-fd70-4881-a960-307461fbd4e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:30.522 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a8891728-3c5c-434e-ab42-20251bf47817]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402565, 'reachable_time': 42201, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270821, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.526 244018 DEBUG nova.virt.libvirt.guest [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:21:30 np0005629333 nova_compute[244014]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:21:30 np0005629333 nova_compute[244014]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1386050966</nova:name>
Feb 25 07:21:30 np0005629333 nova_compute[244014]:  <nova:creationTime>2026-02-25 12:21:30</nova:creationTime>
Feb 25 07:21:30 np0005629333 nova_compute[244014]:  <nova:flavor name="m1.nano">
Feb 25 07:21:30 np0005629333 nova_compute[244014]:    <nova:memory>128</nova:memory>
Feb 25 07:21:30 np0005629333 nova_compute[244014]:    <nova:disk>1</nova:disk>
Feb 25 07:21:30 np0005629333 nova_compute[244014]:    <nova:swap>0</nova:swap>
Feb 25 07:21:30 np0005629333 nova_compute[244014]:    <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:21:30 np0005629333 nova_compute[244014]:    <nova:vcpus>1</nova:vcpus>
Feb 25 07:21:30 np0005629333 nova_compute[244014]:  </nova:flavor>
Feb 25 07:21:30 np0005629333 nova_compute[244014]:  <nova:owner>
Feb 25 07:21:30 np0005629333 nova_compute[244014]:    <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 07:21:30 np0005629333 nova_compute[244014]:    <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 07:21:30 np0005629333 nova_compute[244014]:  </nova:owner>
Feb 25 07:21:30 np0005629333 nova_compute[244014]:  <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:21:30 np0005629333 nova_compute[244014]:  <nova:ports>
Feb 25 07:21:30 np0005629333 nova_compute[244014]:    <nova:port uuid="433e6f28-313e-4fe8-b8da-eacc8a0332c8">
Feb 25 07:21:30 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 07:21:30 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:21:30 np0005629333 nova_compute[244014]:    <nova:port uuid="31f40ed6-505b-4061-b861-ea2720b0ff62">
Feb 25 07:21:30 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 25 07:21:30 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:21:30 np0005629333 nova_compute[244014]:  </nova:ports>
Feb 25 07:21:30 np0005629333 nova_compute[244014]: </nova:instance>
Feb 25 07:21:30 np0005629333 nova_compute[244014]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
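The <nova:instance> document above lives in the http://openstack.org/xmlns/libvirt/nova/1.1 namespace and can be read back with the standard library alone. A self-contained sketch against an abridged copy of the logged XML:

    # Parse the nova libvirt-domain metadata; METADATA_XML is trimmed from the log entry above.
    import xml.etree.ElementTree as ET

    METADATA_XML = """<nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
      <nova:flavor name="m1.nano"><nova:memory>128</nova:memory></nova:flavor>
      <nova:ports>
        <nova:port uuid="31f40ed6-505b-4061-b861-ea2720b0ff62">
          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
        </nova:port>
      </nova:ports>
    </nova:instance>"""

    NS = {'nova': 'http://openstack.org/xmlns/libvirt/nova/1.1'}
    root = ET.fromstring(METADATA_XML)
    flavor = root.find('nova:flavor', NS)
    print('flavor', flavor.get('name'), 'memory_mb', flavor.findtext('nova:memory', namespaces=NS))
    for port in root.findall('nova:ports/nova:port', NS):
        ip = port.find('nova:ip', NS)
        print('port', port.get('uuid'), '->', ip.get('address'))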
Feb 25 07:21:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:30.541 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[14dff7e7-ad29-4159-92c6-9468d7332c84]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402574, 'tstamp': 402574}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270822, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402577, 'tstamp': 402577}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270822, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
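That privsep reply is pyroute2 netlink output gathered inside the ovnmeta- namespace: the metadata tap holds both the subnet address 10.100.0.2/28 and the well-known 169.254.169.254/32. A direct query of the same state might look like this sketch (namespace and interface label copied from the log; the label= filter is an assumption):

    # Sketch: list the addresses on the metadata tap inside its network namespace.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-08121372-a435-401a-b405-778e10d8c2e2') as ns:
        for addr in ns.get_addr(label='tap08121372-a1'):
            print(addr.get_attr('IFA_ADDRESS'), '/%d' % addr['prefixlen'])
    # expected output: 10.100.0.2 /28 and 169.254.169.254 /32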
Feb 25 07:21:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:30.543 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.545 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:30.547 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08121372-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.548 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:30.548 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:21:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:30.549 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08121372-a0, col_values=(('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:21:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:30.549 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:21:30 np0005629333 nova_compute[244014]: 2026-02-25 12:21:30.565 244018 DEBUG oslo_concurrency.lockutils [None req-ed13c1b1-b112-40e7-a1eb-c637bba40eb7 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-51d1d661-89db-4958-a2f4-c299ee997cde-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
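The "interface-<uuid>-None" lock released here after 7.617s is an oslo.concurrency named lock serializing interface attach/detach per instance; in decorator form that pattern is roughly (names illustrative):

    # Sketch of the oslo.concurrency lock pattern seen in these entries.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('interface-51d1d661-89db-4958-a2f4-c299ee997cde-None')
    def do_attach_interface():
        ...  # allocate the port, plug the VIF, attach the libvirt device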
Feb 25 07:21:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:21:30
Feb 25 07:21:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:21:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:21:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'images', '.mgr', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', '.rgw.root', 'default.rgw.log']
Feb 25 07:21:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:21:31 np0005629333 nova_compute[244014]: 2026-02-25 12:21:31.560 244018 DEBUG oslo_concurrency.lockutils [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "interface-51d1d661-89db-4958-a2f4-c299ee997cde-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:21:31 np0005629333 nova_compute[244014]: 2026-02-25 12:21:31.561 244018 DEBUG oslo_concurrency.lockutils [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-51d1d661-89db-4958-a2f4-c299ee997cde-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:21:31 np0005629333 nova_compute[244014]: 2026-02-25 12:21:31.562 244018 DEBUG nova.objects.instance [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'flavor' on Instance uuid 51d1d661-89db-4958-a2f4-c299ee997cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:21:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:21:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:21:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:21:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:21:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:21:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:21:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 233 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 5.2 KiB/s wr, 103 op/s
Feb 25 07:21:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:21:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:21:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:21:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:21:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:21:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:21:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:21:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:21:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:21:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:21:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:32Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c1:c1:8f 10.100.0.10
Feb 25 07:21:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:32Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c1:c1:8f 10.100.0.10
Feb 25 07:21:32 np0005629333 nova_compute[244014]: 2026-02-25 12:21:32.069 244018 DEBUG nova.network.neutron [req-9a2fe790-4cd3-48a5-8e22-6fa2b8392052 req-ed1db38b-bf62-4dc9-b61d-89c5ce4f5c17 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updated VIF entry in instance network info cache for port 31f40ed6-505b-4061-b861-ea2720b0ff62. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:21:32 np0005629333 nova_compute[244014]: 2026-02-25 12:21:32.069 244018 DEBUG nova.network.neutron [req-9a2fe790-4cd3-48a5-8e22-6fa2b8392052 req-ed1db38b-bf62-4dc9-b61d-89c5ce4f5c17 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updating instance_info_cache with network_info: [{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
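The instance_info_cache payload above is a JSON list with one element per VIF. A small helper to condense such a blob to one line per port (the embedded JSON is a trimmed copy of the logged data, not the full cache):

    # Summarize a network_info blob: port id, MAC, fixed IPs, active flag.
    import json

    network_info = json.loads("""
    [{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4",
      "network": {"subnets": [{"ips": [{"address": "10.100.0.11"}]}]}, "active": true},
     {"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f",
      "network": {"subnets": [{"ips": [{"address": "10.100.0.10"}]}]}, "active": false}]
    """)
    for vif in network_info:
        ips = [ip['address'] for subnet in vif['network']['subnets'] for ip in subnet['ips']]
        print(vif['id'][:8], vif['address'], ips, 'active:', vif['active'])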
Feb 25 07:21:32 np0005629333 nova_compute[244014]: 2026-02-25 12:21:32.077 244018 DEBUG nova.compute.manager [req-4ea4a013-4490-4a28-b630-aec654387959 req-27ed7ad0-569d-4c0c-8ca6-b0608f92f895 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Received event network-vif-deleted-6c0083e1-a97b-4de8-aa7b-14217c45376f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:21:32 np0005629333 nova_compute[244014]: 2026-02-25 12:21:32.078 244018 DEBUG nova.compute.manager [req-4ea4a013-4490-4a28-b630-aec654387959 req-27ed7ad0-569d-4c0c-8ca6-b0608f92f895 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-plugged-31f40ed6-505b-4061-b861-ea2720b0ff62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:21:32 np0005629333 nova_compute[244014]: 2026-02-25 12:21:32.079 244018 DEBUG oslo_concurrency.lockutils [req-4ea4a013-4490-4a28-b630-aec654387959 req-27ed7ad0-569d-4c0c-8ca6-b0608f92f895 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:21:32 np0005629333 nova_compute[244014]: 2026-02-25 12:21:32.079 244018 DEBUG oslo_concurrency.lockutils [req-4ea4a013-4490-4a28-b630-aec654387959 req-27ed7ad0-569d-4c0c-8ca6-b0608f92f895 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:21:32 np0005629333 nova_compute[244014]: 2026-02-25 12:21:32.080 244018 DEBUG oslo_concurrency.lockutils [req-4ea4a013-4490-4a28-b630-aec654387959 req-27ed7ad0-569d-4c0c-8ca6-b0608f92f895 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:21:32 np0005629333 nova_compute[244014]: 2026-02-25 12:21:32.080 244018 DEBUG nova.compute.manager [req-4ea4a013-4490-4a28-b630-aec654387959 req-27ed7ad0-569d-4c0c-8ca6-b0608f92f895 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] No waiting events found dispatching network-vif-plugged-31f40ed6-505b-4061-b861-ea2720b0ff62 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:21:32 np0005629333 nova_compute[244014]: 2026-02-25 12:21:32.081 244018 WARNING nova.compute.manager [req-4ea4a013-4490-4a28-b630-aec654387959 req-27ed7ad0-569d-4c0c-8ca6-b0608f92f895 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received unexpected event network-vif-plugged-31f40ed6-505b-4061-b861-ea2720b0ff62 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:21:32 np0005629333 nova_compute[244014]: 2026-02-25 12:21:32.081 244018 DEBUG nova.compute.manager [req-4ea4a013-4490-4a28-b630-aec654387959 req-27ed7ad0-569d-4c0c-8ca6-b0608f92f895 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-plugged-31f40ed6-505b-4061-b861-ea2720b0ff62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:21:32 np0005629333 nova_compute[244014]: 2026-02-25 12:21:32.082 244018 DEBUG oslo_concurrency.lockutils [req-4ea4a013-4490-4a28-b630-aec654387959 req-27ed7ad0-569d-4c0c-8ca6-b0608f92f895 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:21:32 np0005629333 nova_compute[244014]: 2026-02-25 12:21:32.082 244018 DEBUG oslo_concurrency.lockutils [req-4ea4a013-4490-4a28-b630-aec654387959 req-27ed7ad0-569d-4c0c-8ca6-b0608f92f895 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:21:32 np0005629333 nova_compute[244014]: 2026-02-25 12:21:32.082 244018 DEBUG oslo_concurrency.lockutils [req-4ea4a013-4490-4a28-b630-aec654387959 req-27ed7ad0-569d-4c0c-8ca6-b0608f92f895 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:21:32 np0005629333 nova_compute[244014]: 2026-02-25 12:21:32.083 244018 DEBUG nova.compute.manager [req-4ea4a013-4490-4a28-b630-aec654387959 req-27ed7ad0-569d-4c0c-8ca6-b0608f92f895 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] No waiting events found dispatching network-vif-plugged-31f40ed6-505b-4061-b861-ea2720b0ff62 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:21:32 np0005629333 nova_compute[244014]: 2026-02-25 12:21:32.083 244018 WARNING nova.compute.manager [req-4ea4a013-4490-4a28-b630-aec654387959 req-27ed7ad0-569d-4c0c-8ca6-b0608f92f895 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received unexpected event network-vif-plugged-31f40ed6-505b-4061-b861-ea2720b0ff62 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:21:32 np0005629333 nova_compute[244014]: 2026-02-25 12:21:32.122 244018 DEBUG nova.objects.instance [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'pci_requests' on Instance uuid 51d1d661-89db-4958-a2f4-c299ee997cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:21:32 np0005629333 nova_compute[244014]: 2026-02-25 12:21:32.125 244018 DEBUG oslo_concurrency.lockutils [req-9a2fe790-4cd3-48a5-8e22-6fa2b8392052 req-ed1db38b-bf62-4dc9-b61d-89c5ce4f5c17 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:21:32 np0005629333 nova_compute[244014]: 2026-02-25 12:21:32.154 244018 DEBUG nova.network.neutron [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:21:32 np0005629333 nova_compute[244014]: 2026-02-25 12:21:32.387 244018 DEBUG nova.policy [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea407839a07d46608b6348caf676d12d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6a771ad0ce454d809d66825f69248fa7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
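The failed network:attach_external_network check above is expected for a plain member token and is non-fatal; nova simply treats external networks as off-limits for this user. A standalone oslo.policy sketch of the same kind of decision (the 'role:admin' check string is an assumption; nova's shipped default differs):

    # Sketch: enforce a policy rule against member-role credentials.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(
        policy.RuleDefault('network:attach_external_network', 'role:admin'))

    creds = {'roles': ['reader', 'member'],
             'user_id': 'ea407839a07d46608b6348caf676d12d',
             'project_id': '6a771ad0ce454d809d66825f69248fa7'}
    print(enforcer.enforce('network:attach_external_network', {}, creds))  # False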
Feb 25 07:21:32 np0005629333 nova_compute[244014]: 2026-02-25 12:21:32.922 244018 DEBUG nova.network.neutron [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Successfully created port: f1abe770-5205-4bae-888a-f2489c2af7a7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
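Port f1abe770 is the next interface attach in the same tempest test; the client-side request that sets off this whole server-side sequence is roughly the following (openstacksdk; the cloud name is an assumption):

    # Sketch: attach a new interface on the test network to the server.
    import openstack

    conn = openstack.connect(cloud='mycloud')
    server = conn.compute.find_server('tempest-AttachInterfacesTestJSON-server-1386050966')
    iface = conn.compute.create_server_interface(
        server, net_id='08121372-a435-401a-b405-778e10d8c2e2')
    print(iface.port_id, iface.fixed_ips)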
Feb 25 07:21:33 np0005629333 nova_compute[244014]: 2026-02-25 12:21:33.484 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 233 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 5.2 KiB/s wr, 79 op/s
Feb 25 07:21:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:21:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Feb 25 07:21:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Feb 25 07:21:34 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Feb 25 07:21:34 np0005629333 nova_compute[244014]: 2026-02-25 12:21:34.835 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:35 np0005629333 nova_compute[244014]: 2026-02-25 12:21:35.048 244018 DEBUG nova.network.neutron [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Successfully updated port: f1abe770-5205-4bae-888a-f2489c2af7a7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:21:35 np0005629333 nova_compute[244014]: 2026-02-25 12:21:35.065 244018 DEBUG oslo_concurrency.lockutils [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:21:35 np0005629333 nova_compute[244014]: 2026-02-25 12:21:35.066 244018 DEBUG oslo_concurrency.lockutils [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquired lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:21:35 np0005629333 nova_compute[244014]: 2026-02-25 12:21:35.066 244018 DEBUG nova.network.neutron [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:21:35 np0005629333 nova_compute[244014]: 2026-02-25 12:21:35.186 244018 DEBUG nova.compute.manager [req-f0e127d8-30f9-49b0-b37c-dd645308dec3 req-bb27b9a8-b8eb-4a69-8474-bec68e583425 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-changed-f1abe770-5205-4bae-888a-f2489c2af7a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:21:35 np0005629333 nova_compute[244014]: 2026-02-25 12:21:35.186 244018 DEBUG nova.compute.manager [req-f0e127d8-30f9-49b0-b37c-dd645308dec3 req-bb27b9a8-b8eb-4a69-8474-bec68e583425 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Refreshing instance network info cache due to event network-changed-f1abe770-5205-4bae-888a-f2489c2af7a7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:21:35 np0005629333 nova_compute[244014]: 2026-02-25 12:21:35.186 244018 DEBUG oslo_concurrency.lockutils [req-f0e127d8-30f9-49b0-b37c-dd645308dec3 req-bb27b9a8-b8eb-4a69-8474-bec68e583425 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:21:35 np0005629333 nova_compute[244014]: 2026-02-25 12:21:35.361 244018 WARNING nova.network.neutron [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] 08121372-a435-401a-b405-778e10d8c2e2 already exists in list: networks containing: ['08121372-a435-401a-b405-778e10d8c2e2']. ignoring it#033[00m
Feb 25 07:21:35 np0005629333 nova_compute[244014]: 2026-02-25 12:21:35.362 244018 WARNING nova.network.neutron [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] 08121372-a435-401a-b405-778e10d8c2e2 already exists in list: networks containing: ['08121372-a435-401a-b405-778e10d8c2e2']. ignoring it#033[00m
Feb 25 07:21:35 np0005629333 nova_compute[244014]: 2026-02-25 12:21:35.366 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 233 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 6.2 KiB/s wr, 33 op/s
Feb 25 07:21:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:37Z|00185|binding|INFO|Releasing lport ef44c128-3fa4-4475-b63c-4818a50ede40 from this chassis (sb_readonly=0)
Feb 25 07:21:37 np0005629333 nova_compute[244014]: 2026-02-25 12:21:37.210 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 233 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 2.6 KiB/s wr, 25 op/s
Feb 25 07:21:38 np0005629333 nova_compute[244014]: 2026-02-25 12:21:38.415 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:38 np0005629333 nova_compute[244014]: 2026-02-25 12:21:38.486 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:21:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Feb 25 07:21:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Feb 25 07:21:39 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Feb 25 07:21:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 233 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.662 244018 DEBUG nova.network.neutron [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updating instance_info_cache with network_info: [{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f1abe770-5205-4bae-888a-f2489c2af7a7", "address": "fa:16:3e:85:c5:fe", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1abe770-52", "ovs_interfaceid": "f1abe770-5205-4bae-888a-f2489c2af7a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.686 244018 DEBUG oslo_concurrency.lockutils [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Releasing lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.688 244018 DEBUG oslo_concurrency.lockutils [req-f0e127d8-30f9-49b0-b37c-dd645308dec3 req-bb27b9a8-b8eb-4a69-8474-bec68e583425 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.689 244018 DEBUG nova.network.neutron [req-f0e127d8-30f9-49b0-b37c-dd645308dec3 req-bb27b9a8-b8eb-4a69-8474-bec68e583425 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Refreshing network info cache for port f1abe770-5205-4bae-888a-f2489c2af7a7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
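The Acquired/Releasing pairs above come from oslo.concurrency: Nova serializes all work on one instance's network cache under a named lock, so the attach path and the port-update event handler cannot interleave their writes. A sketch of the pattern, assuming oslo.concurrency's lock() context manager; refresh_network_info() is a hypothetical stand-in:

    from oslo_concurrency import lockutils

    def refresh_cache(instance_uuid):
        # Same lock-name scheme as the log: "refresh_cache-<instance uuid>".
        with lockutils.lock('refresh_cache-%s' % instance_uuid):
            refresh_network_info(instance_uuid)  # hypothetical helper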
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.694 244018 DEBUG nova.virt.libvirt.vif [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f1abe770-5205-4bae-888a-f2489c2af7a7", "address": "fa:16:3e:85:c5:fe", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", 
"datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1abe770-52", "ovs_interfaceid": "f1abe770-5205-4bae-888a-f2489c2af7a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.695 244018 DEBUG nova.network.os_vif_util [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "f1abe770-5205-4bae-888a-f2489c2af7a7", "address": "fa:16:3e:85:c5:fe", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1abe770-52", "ovs_interfaceid": "f1abe770-5205-4bae-888a-f2489c2af7a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.696 244018 DEBUG nova.network.os_vif_util [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:c5:fe,bridge_name='br-int',has_traffic_filtering=True,id=f1abe770-5205-4bae-888a-f2489c2af7a7,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1abe770-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.697 244018 DEBUG os_vif [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:c5:fe,bridge_name='br-int',has_traffic_filtering=True,id=f1abe770-5205-4bae-888a-f2489c2af7a7,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1abe770-52') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
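Before plugging, Nova converts its own VIF dict into an os-vif object (the VIFOpenVSwitch repr in the log) and hands it to the os_vif library. A rough sketch of that hand-off, assuming os-vif's public object API and an installed ovs plugin; only fields visible in the log are filled in:

    import os_vif
    from os_vif import objects

    os_vif.initialize()  # loads the ovs plugin used below
    vif = objects.vif.VIFOpenVSwitch(
        id='f1abe770-5205-4bae-888a-f2489c2af7a7',
        address='fa:16:3e:85:c5:fe',
        vif_name='tapf1abe770-52',
        bridge_name='br-int',
        network=objects.network.Network(id='08121372-a435-401a-b405-778e10d8c2e2'))
    instance = objects.instance_info.InstanceInfo(
        uuid='51d1d661-89db-4958-a2f4-c299ee997cde',
        name='tempest-AttachInterfacesTestJSON-server-1386050966')
    os_vif.plug(vif, instance)  # produces the ovsdbapp transactions logged below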
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.698 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.698 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.699 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.703 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.704 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf1abe770-52, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.705 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf1abe770-52, col_values=(('external_ids', {'iface-id': 'f1abe770-5205-4bae-888a-f2489c2af7a7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:85:c5:fe', 'vm-uuid': '51d1d661-89db-4958-a2f4-c299ee997cde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
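The ovsdbapp commands above (AddBridgeCommand, AddPortCommand, DbSetCommand) are what the ovs plugin translates the plug into: idempotently ensure br-int exists, add the tap port, and stamp the Interface row's external_ids with the Neutron port UUID so ovn-controller can find it. A sketch against ovsdbapp's Open_vSwitch schema API; the log runs these as two transactions, folded into one here, and the database socket path is an assumption:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    conn = connection.Connection(
        idl=connection.OvsdbIdl.from_server(
            'unix:/run/openvswitch/db.sock', 'Open_vSwitch'),
        timeout=10)
    api = impl_idl.OvsdbIdl(conn)
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapf1abe770-52', may_exist=True))
        txn.add(api.db_set('Interface', 'tapf1abe770-52',
                           ('external_ids', {
                               'iface-id': 'f1abe770-5205-4bae-888a-f2489c2af7a7',
                               'attached-mac': 'fa:16:3e:85:c5:fe',
                               'vm-uuid': '51d1d661-89db-4958-a2f4-c299ee997cde'})))

The "Transaction caused no change" replies in the log are the may_exist/db_set idempotence at work: br-int was already present, so the first transaction was a no-op.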
Feb 25 07:21:39 np0005629333 NetworkManager[49836]: <info>  [1772022099.7091] manager: (tapf1abe770-52): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.715 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.719 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.720 244018 INFO os_vif [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:c5:fe,bridge_name='br-int',has_traffic_filtering=True,id=f1abe770-5205-4bae-888a-f2489c2af7a7,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1abe770-52')#033[00m
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.722 244018 DEBUG nova.virt.libvirt.vif [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f1abe770-5205-4bae-888a-f2489c2af7a7", "address": "fa:16:3e:85:c5:fe", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", 
"datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1abe770-52", "ovs_interfaceid": "f1abe770-5205-4bae-888a-f2489c2af7a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.722 244018 DEBUG nova.network.os_vif_util [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "f1abe770-5205-4bae-888a-f2489c2af7a7", "address": "fa:16:3e:85:c5:fe", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1abe770-52", "ovs_interfaceid": "f1abe770-5205-4bae-888a-f2489c2af7a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.723 244018 DEBUG nova.network.os_vif_util [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:c5:fe,bridge_name='br-int',has_traffic_filtering=True,id=f1abe770-5205-4bae-888a-f2489c2af7a7,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1abe770-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.727 244018 DEBUG nova.virt.libvirt.guest [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] attach device xml: <interface type="ethernet">
Feb 25 07:21:39 np0005629333 nova_compute[244014]:  <mac address="fa:16:3e:85:c5:fe"/>
Feb 25 07:21:39 np0005629333 nova_compute[244014]:  <model type="virtio"/>
Feb 25 07:21:39 np0005629333 nova_compute[244014]:  <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:21:39 np0005629333 nova_compute[244014]:  <mtu size="1442"/>
Feb 25 07:21:39 np0005629333 nova_compute[244014]:  <target dev="tapf1abe770-52"/>
Feb 25 07:21:39 np0005629333 nova_compute[244014]: </interface>
Feb 25 07:21:39 np0005629333 nova_compute[244014]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
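The interface XML above is then handed to libvirt. A minimal sketch of the equivalent call through the libvirt-python bindings, affecting both the live domain and its persistent definition, which is broadly what nova's guest.attach_device wraps:

    import libvirt

    INTERFACE_XML = """
    <interface type="ethernet">
      <mac address="fa:16:3e:85:c5:fe"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tapf1abe770-52"/>
    </interface>"""

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('51d1d661-89db-4958-a2f4-c299ee997cde')
    dom.attachDeviceFlags(INTERFACE_XML,
                          libvirt.VIR_DOMAIN_AFFECT_LIVE |
                          libvirt.VIR_DOMAIN_AFFECT_CONFIG)

Note the type="ethernet" with an explicit target dev: the OVS port was pre-created by the plug step, so QEMU only has to open the existing tap device (the kernel's "entered promiscuous mode" line below is that open).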
Feb 25 07:21:39 np0005629333 NetworkManager[49836]: <info>  [1772022099.7409] manager: (tapf1abe770-52): new Tun device (/org/freedesktop/NetworkManager/Devices/91)
Feb 25 07:21:39 np0005629333 kernel: tapf1abe770-52: entered promiscuous mode
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.743 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:39 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:39Z|00186|binding|INFO|Claiming lport f1abe770-5205-4bae-888a-f2489c2af7a7 for this chassis.
Feb 25 07:21:39 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:39Z|00187|binding|INFO|f1abe770-5205-4bae-888a-f2489c2af7a7: Claiming fa:16:3e:85:c5:fe 10.100.0.7
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.746 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:39 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:39Z|00188|binding|INFO|Setting lport f1abe770-5205-4bae-888a-f2489c2af7a7 ovn-installed in OVS
Feb 25 07:21:39 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:39Z|00189|binding|INFO|Setting lport f1abe770-5205-4bae-888a-f2489c2af7a7 up in Southbound
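ovn-controller claims the logical port the moment the local Interface row's external_ids:iface-id matches a Southbound Port_Binding destined for this chassis; the "ovn-installed" and "up in Southbound" lines are its acknowledgement. A quick check of that linkage, continuing from the ovsdbapp sketch above (same api handle; db_get is part of the same schema API):

    # Continuation of the earlier ovsdbapp sketch: `api` is the same handle.
    ext_ids = api.db_get('Interface', 'tapf1abe770-52',
                         'external_ids').execute(check_error=True)
    assert ext_ids['iface-id'] == 'f1abe770-5205-4bae-888a-f2489c2af7a7'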
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.753 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:39.754 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:c5:fe 10.100.0.7'], port_security=['fa:16:3e:85:c5:fe 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '51d1d661-89db-4958-a2f4-c299ee997cde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '078dca40-137f-4eb6-953b-2ae25d0b4ca3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=f1abe770-5205-4bae-888a-f2489c2af7a7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:21:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:39.757 157129 INFO neutron.agent.ovn.metadata.agent [-] Port f1abe770-5205-4bae-888a-f2489c2af7a7 in datapath 08121372-a435-401a-b405-778e10d8c2e2 bound to our chassis#033[00m
Feb 25 07:21:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:39.761 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08121372-a435-401a-b405-778e10d8c2e2#033[00m
Feb 25 07:21:39 np0005629333 systemd-udevd[270830]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:21:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:39.777 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2c145408-da42-41a0-ac82-b8016ab9951a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:39 np0005629333 NetworkManager[49836]: <info>  [1772022099.7855] device (tapf1abe770-52): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:21:39 np0005629333 NetworkManager[49836]: <info>  [1772022099.7864] device (tapf1abe770-52): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.801 244018 DEBUG nova.virt.libvirt.driver [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.802 244018 DEBUG nova.virt.libvirt.driver [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.802 244018 DEBUG nova.virt.libvirt.driver [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:4a:d5:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.802 244018 DEBUG nova.virt.libvirt.driver [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:c1:c1:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.802 244018 DEBUG nova.virt.libvirt.driver [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:85:c5:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:21:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:39.808 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[219cd775-faca-40df-a395-fecebf0f0ea1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:39.812 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a38c4c29-8ec5-4a4f-b4e0-d5489d729453]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.828 244018 DEBUG nova.virt.libvirt.guest [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:21:39 np0005629333 nova_compute[244014]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:21:39 np0005629333 nova_compute[244014]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1386050966</nova:name>
Feb 25 07:21:39 np0005629333 nova_compute[244014]:  <nova:creationTime>2026-02-25 12:21:39</nova:creationTime>
Feb 25 07:21:39 np0005629333 nova_compute[244014]:  <nova:flavor name="m1.nano">
Feb 25 07:21:39 np0005629333 nova_compute[244014]:    <nova:memory>128</nova:memory>
Feb 25 07:21:39 np0005629333 nova_compute[244014]:    <nova:disk>1</nova:disk>
Feb 25 07:21:39 np0005629333 nova_compute[244014]:    <nova:swap>0</nova:swap>
Feb 25 07:21:39 np0005629333 nova_compute[244014]:    <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:21:39 np0005629333 nova_compute[244014]:    <nova:vcpus>1</nova:vcpus>
Feb 25 07:21:39 np0005629333 nova_compute[244014]:  </nova:flavor>
Feb 25 07:21:39 np0005629333 nova_compute[244014]:  <nova:owner>
Feb 25 07:21:39 np0005629333 nova_compute[244014]:    <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 07:21:39 np0005629333 nova_compute[244014]:    <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 07:21:39 np0005629333 nova_compute[244014]:  </nova:owner>
Feb 25 07:21:39 np0005629333 nova_compute[244014]:  <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:21:39 np0005629333 nova_compute[244014]:  <nova:ports>
Feb 25 07:21:39 np0005629333 nova_compute[244014]:    <nova:port uuid="433e6f28-313e-4fe8-b8da-eacc8a0332c8">
Feb 25 07:21:39 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 07:21:39 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:21:39 np0005629333 nova_compute[244014]:    <nova:port uuid="31f40ed6-505b-4061-b861-ea2720b0ff62">
Feb 25 07:21:39 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 25 07:21:39 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:21:39 np0005629333 nova_compute[244014]:    <nova:port uuid="f1abe770-5205-4bae-888a-f2489c2af7a7">
Feb 25 07:21:39 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 07:21:39 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:21:39 np0005629333 nova_compute[244014]:  </nova:ports>
Feb 25 07:21:39 np0005629333 nova_compute[244014]: </nova:instance>
Feb 25 07:21:39 np0005629333 nova_compute[244014]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
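The nova:instance document above is attached to the domain as a custom metadata element so tooling can recover flavor, owner, and port details from the domain XML alone. A sketch of the underlying libvirt call; the key argument and the abbreviated document are illustrative stand-ins:

    import libvirt

    NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.1'
    # Stand-in for the full <nova:instance> document logged above.
    metadata_xml = '<nova:instance xmlns:nova="%s"/>' % NOVA_NS

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('51d1d661-89db-4958-a2f4-c299ee997cde')
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, metadata_xml,
                    'instance', NOVA_NS,
                    libvirt.VIR_DOMAIN_AFFECT_LIVE |
                    libvirt.VIR_DOMAIN_AFFECT_CONFIG)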
Feb 25 07:21:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:39.840 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[34ba225e-b1d6-4484-bc7c-8cdf30f1e84f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.858 244018 DEBUG oslo_concurrency.lockutils [None req-5a9a23ad-f1cc-4512-b2d4-afc5c1d73490 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-51d1d661-89db-4958-a2f4-c299ee997cde-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 8.296s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:21:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:39.858 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e69bdb8c-bcfc-4865-af9a-ce7047883c91]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402565, 'reachable_time': 31993, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270837, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:39.875 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed6a0b9-df86-4a59-93a4-a168774bc831]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402574, 'tstamp': 402574}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270838, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402577, 'tstamp': 402577}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270838, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
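The two privsep replies above are netlink dumps taken inside the ovnmeta-08121372... namespace: the agent's veth leg tap08121372-a1 is up and carries both the subnet address 10.100.0.2/28 and the link-local metadata address 169.254.169.254/32 that the guest's metadata requests will hit. A sketch reproducing that view with pyroute2, which is what the privsep daemon drives underneath:

    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-08121372-a435-401a-b405-778e10d8c2e2')
    try:
        idx = ns.link_lookup(ifname='tap08121372-a1')[0]
        for addr in ns.get_addr(index=idx):
            # Expect 10.100.0.2 and 169.254.169.254, as in the replies above.
            print(dict(addr['attrs'])['IFA_ADDRESS'])
    finally:
        ns.close()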
Feb 25 07:21:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:39.878 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:21:39 np0005629333 nova_compute[244014]: 2026-02-25 12:21:39.880 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:39.881 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08121372-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:21:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:39.882 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:21:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:39.883 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08121372-a0, col_values=(('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:21:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:39.883 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:21:40 np0005629333 nova_compute[244014]: 2026-02-25 12:21:40.795 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022085.7939126, 437f3047-f865-44f7-b16e-cddab230e873 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:21:40 np0005629333 nova_compute[244014]: 2026-02-25 12:21:40.795 244018 INFO nova.compute.manager [-] [instance: 437f3047-f865-44f7-b16e-cddab230e873] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:21:40 np0005629333 nova_compute[244014]: 2026-02-25 12:21:40.982 244018 DEBUG nova.compute.manager [None req-caa137db-6bb9-43eb-ba56-41f3208c4847 - - - - - -] [instance: 437f3047-f865-44f7-b16e-cddab230e873] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:21:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 233 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 5.7 KiB/s wr, 31 op/s
Feb 25 07:21:41 np0005629333 nova_compute[244014]: 2026-02-25 12:21:41.854 244018 DEBUG nova.network.neutron [req-f0e127d8-30f9-49b0-b37c-dd645308dec3 req-bb27b9a8-b8eb-4a69-8474-bec68e583425 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updated VIF entry in instance network info cache for port f1abe770-5205-4bae-888a-f2489c2af7a7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:21:41 np0005629333 nova_compute[244014]: 2026-02-25 12:21:41.855 244018 DEBUG nova.network.neutron [req-f0e127d8-30f9-49b0-b37c-dd645308dec3 req-bb27b9a8-b8eb-4a69-8474-bec68e583425 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updating instance_info_cache with network_info: [{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f1abe770-5205-4bae-888a-f2489c2af7a7", "address": "fa:16:3e:85:c5:fe", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1abe770-52", "ovs_interfaceid": "f1abe770-5205-4bae-888a-f2489c2af7a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:21:41 np0005629333 nova_compute[244014]: 2026-02-25 12:21:41.875 244018 DEBUG oslo_concurrency.lockutils [req-f0e127d8-30f9-49b0-b37c-dd645308dec3 req-bb27b9a8-b8eb-4a69-8474-bec68e583425 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:21:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:21:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:21:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:21:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:21:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007643093857563683 of space, bias 1.0, pg target 0.2292928157269105 quantized to 32 (current 32)
Feb 25 07:21:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:21:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:21:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:21:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:21:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:21:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024899999836253926 of space, bias 1.0, pg target 0.7469999950876178 quantized to 32 (current 32)
Feb 25 07:21:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:21:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.048749876634027e-06 of space, bias 4.0, pg target 0.0012584998519608323 quantized to 16 (current 16)
Feb 25 07:21:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:21:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:21:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:21:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:21:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:21:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:21:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:21:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:21:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:21:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
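The pg_autoscaler lines all follow one formula: pg target = capacity_ratio x bias x overall PG budget, quantized to a power of two and clamped at the pool's minimum. The logged numbers are consistent with a budget of 300 PGs (e.g. 3 OSDs x mon_target_pg_per_osd=100): for 'images', 0.00249 * 1.0 * 300 ~= 0.747, which the clamp pulls up to 32; '.mgr' has a floor of 1, and 'cephfs.cephfs.meta' a bias of 4.0 with a floor of 16. A worked sketch of that arithmetic; the budget and clamping rule here are inferred from the logged values, not lifted from the mgr module's source:

    import math

    def pg_target(capacity_ratio, bias, pg_budget=300, pg_min=32):
        raw = capacity_ratio * bias * pg_budget
        # Quantize to the nearest power of two at or above raw, then clamp.
        quantized = 2 ** math.ceil(math.log2(raw)) if raw > 0 else 0
        return max(quantized, pg_min)

    print(pg_target(0.0024899999836253926, 1.0))             # images      -> 32
    print(pg_target(7.185749983720779e-06, 1.0, pg_min=1))   # .mgr        -> 1
    print(pg_target(1.048749876634027e-06, 4.0, pg_min=16))  # cephfs meta -> 16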
Feb 25 07:21:42 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:42Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:85:c5:fe 10.100.0.7
Feb 25 07:21:42 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:42Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:85:c5:fe 10.100.0.7
Feb 25 07:21:42 np0005629333 nova_compute[244014]: 2026-02-25 12:21:42.413 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:21:42 np0005629333 nova_compute[244014]: 2026-02-25 12:21:42.414 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:21:42 np0005629333 nova_compute[244014]: 2026-02-25 12:21:42.445 244018 DEBUG nova.compute.manager [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:21:42 np0005629333 nova_compute[244014]: 2026-02-25 12:21:42.532 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:21:42 np0005629333 nova_compute[244014]: 2026-02-25 12:21:42.534 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:21:42 np0005629333 nova_compute[244014]: 2026-02-25 12:21:42.546 244018 DEBUG nova.virt.hardware [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:21:42 np0005629333 nova_compute[244014]: 2026-02-25 12:21:42.548 244018 INFO nova.compute.claims [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:21:42 np0005629333 nova_compute[244014]: 2026-02-25 12:21:42.704 244018 DEBUG oslo_concurrency.processutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:21:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:21:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2645564130' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:21:43 np0005629333 nova_compute[244014]: 2026-02-25 12:21:43.279 244018 DEBUG oslo_concurrency.processutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
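The disk accounting for the resource claim shells out to ceph df as the openstack client, via oslo.concurrency; the mon's audit-log dispatch line above is the server side of the same call. A sketch of the call and of pulling the cluster totals from the JSON it returns (field names as in current Ceph releases):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])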
Feb 25 07:21:43 np0005629333 nova_compute[244014]: 2026-02-25 12:21:43.286 244018 DEBUG nova.compute.provider_tree [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:21:43 np0005629333 nova_compute[244014]: 2026-02-25 12:21:43.308 244018 DEBUG nova.scheduler.client.report [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:21:43 np0005629333 nova_compute[244014]: 2026-02-25 12:21:43.345 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:21:43 np0005629333 nova_compute[244014]: 2026-02-25 12:21:43.346 244018 DEBUG nova.compute.manager [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:21:43 np0005629333 nova_compute[244014]: 2026-02-25 12:21:43.405 244018 DEBUG nova.compute.manager [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:21:43 np0005629333 nova_compute[244014]: 2026-02-25 12:21:43.406 244018 DEBUG nova.network.neutron [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:21:43 np0005629333 nova_compute[244014]: 2026-02-25 12:21:43.437 244018 INFO nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:21:43 np0005629333 nova_compute[244014]: 2026-02-25 12:21:43.479 244018 DEBUG nova.compute.manager [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:21:43 np0005629333 nova_compute[244014]: 2026-02-25 12:21:43.491 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 233 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.4 KiB/s wr, 27 op/s
Feb 25 07:21:43 np0005629333 nova_compute[244014]: 2026-02-25 12:21:43.800 244018 DEBUG nova.policy [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1cfc3e643014e1c984d71182534fd24', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '851e1817495944c1ad7ac421ab226d13', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:21:43 np0005629333 nova_compute[244014]: 2026-02-25 12:21:43.825 244018 DEBUG nova.compute.manager [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:21:43 np0005629333 nova_compute[244014]: 2026-02-25 12:21:43.830 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:21:43 np0005629333 nova_compute[244014]: 2026-02-25 12:21:43.830 244018 INFO nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Creating image(s)#033[00m
Feb 25 07:21:43 np0005629333 nova_compute[244014]: 2026-02-25 12:21:43.863 244018 DEBUG nova.storage.rbd_utils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:21:43 np0005629333 nova_compute[244014]: 2026-02-25 12:21:43.896 244018 DEBUG nova.storage.rbd_utils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:21:43 np0005629333 nova_compute[244014]: 2026-02-25 12:21:43.930 244018 DEBUG nova.storage.rbd_utils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:21:43 np0005629333 nova_compute[244014]: 2026-02-25 12:21:43.935 244018 DEBUG oslo_concurrency.processutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:21:44 np0005629333 nova_compute[244014]: 2026-02-25 12:21:44.014 244018 DEBUG oslo_concurrency.processutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
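The probe above runs qemu-img under oslo's prlimit wrapper (1 GiB address-space cap, 30 s CPU cap) so a corrupt or hostile base image cannot wedge the compute agent. The same probe can be replayed outside nova; the command line is copied verbatim from the log, and 'format'/'virtual-size' are standard qemu-img JSON keys:

    import json
    import subprocess

    # Verbatim command from the CMD line above.
    cmd = ['/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
           '--as=1073741824', '--cpu=30', '--',
           'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
           '/var/lib/nova/instances/_base/'
           'a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
           '--force-share', '--output=json']
    info = json.loads(subprocess.run(cmd, capture_output=True, text=True,
                                     check=True).stdout)
    print(info['format'], info['virtual-size'])   # image format and byte size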
Feb 25 07:21:44 np0005629333 nova_compute[244014]: 2026-02-25 12:21:44.015 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:21:44 np0005629333 nova_compute[244014]: 2026-02-25 12:21:44.016 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:21:44 np0005629333 nova_compute[244014]: 2026-02-25 12:21:44.017 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
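The acquire/release pair above is the image-cache critical section: the lock is named after the cache file of the base image, and since that file already exists the fetch function returns at once (held 0.001s). The underlying oslo.concurrency idiom looks like this; the function body is a placeholder sketch, not nova's imagebackend code:

    from oslo_concurrency import lockutils

    # One lock per base image, named after the cache file seen in the log,
    # so concurrent boots of the same image fetch it only once.
    @lockutils.synchronized('a63dc6dbb387022d47a8ca49bddcc4af2508a4d6')
    def fetch_func_sync():
        # Download/convert the base image only if it is still missing;
        # later waiters find it present and exit fast, which is why the
        # log above shows the lock held for just 0.001s.
        pass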
Feb 25 07:21:44 np0005629333 nova_compute[244014]: 2026-02-25 12:21:44.047 244018 DEBUG nova.storage.rbd_utils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:21:44 np0005629333 nova_compute[244014]: 2026-02-25 12:21:44.050 244018 DEBUG oslo_concurrency.processutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:21:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:21:44 np0005629333 nova_compute[244014]: 2026-02-25 12:21:44.265 244018 DEBUG oslo_concurrency.processutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:21:44 np0005629333 nova_compute[244014]: 2026-02-25 12:21:44.343 244018 DEBUG nova.storage.rbd_utils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] resizing rbd image e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 25 07:21:44 np0005629333 nova_compute[244014]: 2026-02-25 12:21:44.419 244018 DEBUG nova.objects.instance [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lazy-loading 'migration_context' on Instance uuid e9eb76fe-9616-40a4-aa53-0054cc5c3a57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:21:44 np0005629333 nova_compute[244014]: 2026-02-25 12:21:44.439 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
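With the rbd imagebackend, "Created local disks" is slightly misleading: the root disk now lives in the ceph vms pool, imported from the qcow2 cache by the `rbd import` command shown earlier and resized to the flavor's 1 GiB (1073741824 bytes). The resize step through the python rbd bindings would look roughly like this; pool, image name and client id are copied from the log, the rest is a sketch rather than nova's rbd_utils:

    import rados
    import rbd

    # Same ceph identity the logged commands use (--id openstack).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        with rbd.Image(ioctx, 'e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk') as im:
            im.resize(1073741824)          # grow to the 1 GiB flavor root
            print(im.size())               # -> 1073741824
    finally:
        ioctx.close()
        cluster.shutdown()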
Feb 25 07:21:44 np0005629333 nova_compute[244014]: 2026-02-25 12:21:44.439 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Ensure instance console log exists: /var/lib/nova/instances/e9eb76fe-9616-40a4-aa53-0054cc5c3a57/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:21:44 np0005629333 nova_compute[244014]: 2026-02-25 12:21:44.440 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:21:44 np0005629333 nova_compute[244014]: 2026-02-25 12:21:44.440 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:21:44 np0005629333 nova_compute[244014]: 2026-02-25 12:21:44.441 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:21:44 np0005629333 nova_compute[244014]: 2026-02-25 12:21:44.708 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:44 np0005629333 nova_compute[244014]: 2026-02-25 12:21:44.909 244018 DEBUG nova.compute.manager [req-71c0623b-c5b8-472f-89c0-9a6e4a10c174 req-c713d2dd-edb2-4302-a275-a00853c29f71 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-plugged-f1abe770-5205-4bae-888a-f2489c2af7a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:21:44 np0005629333 nova_compute[244014]: 2026-02-25 12:21:44.910 244018 DEBUG oslo_concurrency.lockutils [req-71c0623b-c5b8-472f-89c0-9a6e4a10c174 req-c713d2dd-edb2-4302-a275-a00853c29f71 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:21:44 np0005629333 nova_compute[244014]: 2026-02-25 12:21:44.910 244018 DEBUG oslo_concurrency.lockutils [req-71c0623b-c5b8-472f-89c0-9a6e4a10c174 req-c713d2dd-edb2-4302-a275-a00853c29f71 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:21:44 np0005629333 nova_compute[244014]: 2026-02-25 12:21:44.910 244018 DEBUG oslo_concurrency.lockutils [req-71c0623b-c5b8-472f-89c0-9a6e4a10c174 req-c713d2dd-edb2-4302-a275-a00853c29f71 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:21:44 np0005629333 nova_compute[244014]: 2026-02-25 12:21:44.910 244018 DEBUG nova.compute.manager [req-71c0623b-c5b8-472f-89c0-9a6e4a10c174 req-c713d2dd-edb2-4302-a275-a00853c29f71 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] No waiting events found dispatching network-vif-plugged-f1abe770-5205-4bae-888a-f2489c2af7a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:21:44 np0005629333 nova_compute[244014]: 2026-02-25 12:21:44.911 244018 WARNING nova.compute.manager [req-71c0623b-c5b8-472f-89c0-9a6e4a10c174 req-c713d2dd-edb2-4302-a275-a00853c29f71 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received unexpected event network-vif-plugged-f1abe770-5205-4bae-888a-f2489c2af7a7 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:21:45 np0005629333 nova_compute[244014]: 2026-02-25 12:21:45.314 244018 DEBUG nova.network.neutron [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Successfully created port: 6e87f383-dbcb-4dad-b195-bc813617ad12 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:21:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 233 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 5.0 KiB/s wr, 25 op/s
Feb 25 07:21:46 np0005629333 nova_compute[244014]: 2026-02-25 12:21:46.773 244018 DEBUG nova.network.neutron [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Successfully updated port: 6e87f383-dbcb-4dad-b195-bc813617ad12 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:21:46 np0005629333 nova_compute[244014]: 2026-02-25 12:21:46.787 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "refresh_cache-e9eb76fe-9616-40a4-aa53-0054cc5c3a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:21:46 np0005629333 nova_compute[244014]: 2026-02-25 12:21:46.787 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquired lock "refresh_cache-e9eb76fe-9616-40a4-aa53-0054cc5c3a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:21:46 np0005629333 nova_compute[244014]: 2026-02-25 12:21:46.788 244018 DEBUG nova.network.neutron [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:21:46 np0005629333 nova_compute[244014]: 2026-02-25 12:21:46.823 244018 DEBUG oslo_concurrency.lockutils [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "interface-51d1d661-89db-4958-a2f4-c299ee997cde-be69a588-f795-413e-b981-a20088eea5ed" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:21:46 np0005629333 nova_compute[244014]: 2026-02-25 12:21:46.823 244018 DEBUG oslo_concurrency.lockutils [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-51d1d661-89db-4958-a2f4-c299ee997cde-be69a588-f795-413e-b981-a20088eea5ed" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:21:46 np0005629333 nova_compute[244014]: 2026-02-25 12:21:46.824 244018 DEBUG nova.objects.instance [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'flavor' on Instance uuid 51d1d661-89db-4958-a2f4-c299ee997cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:21:46 np0005629333 nova_compute[244014]: 2026-02-25 12:21:46.939 244018 DEBUG nova.network.neutron [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.099 244018 DEBUG nova.compute.manager [req-2c57c03a-f408-41e7-9d28-33c9e6837849 req-7244daac-ad40-4224-b45f-b17c95ded274 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-plugged-f1abe770-5205-4bae-888a-f2489c2af7a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.099 244018 DEBUG oslo_concurrency.lockutils [req-2c57c03a-f408-41e7-9d28-33c9e6837849 req-7244daac-ad40-4224-b45f-b17c95ded274 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.100 244018 DEBUG oslo_concurrency.lockutils [req-2c57c03a-f408-41e7-9d28-33c9e6837849 req-7244daac-ad40-4224-b45f-b17c95ded274 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.100 244018 DEBUG oslo_concurrency.lockutils [req-2c57c03a-f408-41e7-9d28-33c9e6837849 req-7244daac-ad40-4224-b45f-b17c95ded274 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.100 244018 DEBUG nova.compute.manager [req-2c57c03a-f408-41e7-9d28-33c9e6837849 req-7244daac-ad40-4224-b45f-b17c95ded274 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] No waiting events found dispatching network-vif-plugged-f1abe770-5205-4bae-888a-f2489c2af7a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.100 244018 WARNING nova.compute.manager [req-2c57c03a-f408-41e7-9d28-33c9e6837849 req-7244daac-ad40-4224-b45f-b17c95ded274 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received unexpected event network-vif-plugged-f1abe770-5205-4bae-888a-f2489c2af7a7 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.101 244018 DEBUG nova.compute.manager [req-2c57c03a-f408-41e7-9d28-33c9e6837849 req-7244daac-ad40-4224-b45f-b17c95ded274 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Received event network-changed-6e87f383-dbcb-4dad-b195-bc813617ad12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.101 244018 DEBUG nova.compute.manager [req-2c57c03a-f408-41e7-9d28-33c9e6837849 req-7244daac-ad40-4224-b45f-b17c95ded274 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Refreshing instance network info cache due to event network-changed-6e87f383-dbcb-4dad-b195-bc813617ad12. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.101 244018 DEBUG oslo_concurrency.lockutils [req-2c57c03a-f408-41e7-9d28-33c9e6837849 req-7244daac-ad40-4224-b45f-b17c95ded274 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-e9eb76fe-9616-40a4-aa53-0054cc5c3a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.365 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:21:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/451925680' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:21:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:21:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/451925680' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
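These two audit entries are the client.openstack identity polling capacity: cluster-wide usage via df, then the quota on the volumes pool. The same mon commands can be issued directly over librados; the command JSON is copied from the audit log, the connection details are assumptions:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    try:
        # Same mon commands the audit log records, sent as JSON strings.
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes",
                     "format": "json"}):
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
            if ret == 0:
                print(cmd["prefix"], json.loads(out))
            else:
                print(cmd["prefix"], 'failed:', errs)
    finally:
        cluster.shutdown()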
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.620 244018 DEBUG nova.objects.instance [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'pci_requests' on Instance uuid 51d1d661-89db-4958-a2f4-c299ee997cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.642 244018 DEBUG nova.network.neutron [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:21:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.866 244018 DEBUG nova.network.neutron [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Updating instance_info_cache with network_info: [{"id": "6e87f383-dbcb-4dad-b195-bc813617ad12", "address": "fa:16:3e:9e:80:09", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e87f383-db", "ovs_interfaceid": "6e87f383-dbcb-4dad-b195-bc813617ad12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.908 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Releasing lock "refresh_cache-e9eb76fe-9616-40a4-aa53-0054cc5c3a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.909 244018 DEBUG nova.compute.manager [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Instance network_info: |[{"id": "6e87f383-dbcb-4dad-b195-bc813617ad12", "address": "fa:16:3e:9e:80:09", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e87f383-db", "ovs_interfaceid": "6e87f383-dbcb-4dad-b195-bc813617ad12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
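The network_info list above is the cached port description that feeds both os-vif and the guest-XML builder below; the fields consumed downstream are the port id, MAC, tap device name, MTU and fixed IP. A small sketch pulling those out, assuming the JSON list has been saved verbatim to a file named network_info.json (hypothetical):

    import json

    with open('network_info.json') as f:      # the list from the log line
        vifs = json.load(f)

    for vif in vifs:
        subnet = vif['network']['subnets'][0]
        print(vif['id'],                       # 6e87f383-dbcb-4dad-...
              vif['address'],                  # fa:16:3e:9e:80:09
              vif['devname'],                  # tap6e87f383-db
              vif['network']['meta']['mtu'],   # 1442
              subnet['ips'][0]['address'])     # 10.100.0.13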
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.909 244018 DEBUG oslo_concurrency.lockutils [req-2c57c03a-f408-41e7-9d28-33c9e6837849 req-7244daac-ad40-4224-b45f-b17c95ded274 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-e9eb76fe-9616-40a4-aa53-0054cc5c3a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.910 244018 DEBUG nova.network.neutron [req-2c57c03a-f408-41e7-9d28-33c9e6837849 req-7244daac-ad40-4224-b45f-b17c95ded274 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Refreshing network info cache for port 6e87f383-dbcb-4dad-b195-bc813617ad12 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.915 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Start _get_guest_xml network_info=[{"id": "6e87f383-dbcb-4dad-b195-bc813617ad12", "address": "fa:16:3e:9e:80:09", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e87f383-db", "ovs_interfaceid": "6e87f383-dbcb-4dad-b195-bc813617ad12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.924 244018 WARNING nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.930 244018 DEBUG nova.virt.libvirt.host [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.931 244018 DEBUG nova.virt.libvirt.host [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.937 244018 DEBUG nova.virt.libvirt.host [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.938 244018 DEBUG nova.virt.libvirt.host [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.938 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.938 244018 DEBUG nova.virt.hardware [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.939 244018 DEBUG nova.virt.hardware [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.939 244018 DEBUG nova.virt.hardware [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.940 244018 DEBUG nova.virt.hardware [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.940 244018 DEBUG nova.virt.hardware [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.940 244018 DEBUG nova.virt.hardware [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.940 244018 DEBUG nova.virt.hardware [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.941 244018 DEBUG nova.virt.hardware [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.941 244018 DEBUG nova.virt.hardware [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.941 244018 DEBUG nova.virt.hardware [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.941 244018 DEBUG nova.virt.hardware [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
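The hardware.py sequence above is a constraint funnel: flavor and image supply no topology hints (all 0:0:0), the limits default to 65536 sockets/cores/threads, and the only layout whose sockets x cores x threads product equals the single vCPU is 1:1:1. A simplified re-derivation of that enumeration (nova's real version lives in nova/virt/hardware.py):

    # Simplified illustration of the topology search logged above.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)] -- matches the log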
Feb 25 07:21:47 np0005629333 nova_compute[244014]: 2026-02-25 12:21:47.945 244018 DEBUG oslo_concurrency.processutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:21:48 np0005629333 nova_compute[244014]: 2026-02-25 12:21:48.087 244018 DEBUG nova.policy [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea407839a07d46608b6348caf676d12d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6a771ad0ce454d809d66825f69248fa7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:21:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:21:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/45216140' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:21:48 np0005629333 nova_compute[244014]: 2026-02-25 12:21:48.491 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:48 np0005629333 nova_compute[244014]: 2026-02-25 12:21:48.508 244018 DEBUG oslo_concurrency.processutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:21:48 np0005629333 nova_compute[244014]: 2026-02-25 12:21:48.540 244018 DEBUG nova.storage.rbd_utils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:21:48 np0005629333 nova_compute[244014]: 2026-02-25 12:21:48.546 244018 DEBUG oslo_concurrency.processutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:21:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:21:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/974428450' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.071 244018 DEBUG oslo_concurrency.processutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
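ceph mon dump runs twice here, apparently once per rbd-backed device (the root disk and the config-drive image checked at 12:21:48.540), because the libvirt driver rebuilds the monitor address list for every network disk; the result becomes the <host name="192.168.122.100" port="6789"/> elements in the XML below. A sketch parsing that output; the command is verbatim from the log, while the "mons"/"addr" keys are the usual mon-dump schema and can vary slightly between ceph releases:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout

    for mon in json.loads(out)['mons']:
        # addr is formatted like "192.168.122.100:6789/0"
        print(mon['name'], mon['addr'])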
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.074 244018 DEBUG nova.virt.libvirt.vif [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:21:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1303628417',display_name='tempest-ImagesTestJSON-server-1303628417',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1303628417',id=28,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='851e1817495944c1ad7ac421ab226d13',ramdisk_id='',reservation_id='r-18jw7ang',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1353368211',owner_user_name='tempest-ImagesTestJSON-1353368211-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:21:43Z,user_data=None,user_id='f1cfc3e643014e1c984d71182534fd24',uuid=e9eb76fe-9616-40a4-aa53-0054cc5c3a57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e87f383-dbcb-4dad-b195-bc813617ad12", "address": "fa:16:3e:9e:80:09", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e87f383-db", "ovs_interfaceid": "6e87f383-dbcb-4dad-b195-bc813617ad12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.075 244018 DEBUG nova.network.os_vif_util [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converting VIF {"id": "6e87f383-dbcb-4dad-b195-bc813617ad12", "address": "fa:16:3e:9e:80:09", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e87f383-db", "ovs_interfaceid": "6e87f383-dbcb-4dad-b195-bc813617ad12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.076 244018 DEBUG nova.network.os_vif_util [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:80:09,bridge_name='br-int',has_traffic_filtering=True,id=6e87f383-dbcb-4dad-b195-bc813617ad12,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e87f383-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.078 244018 DEBUG nova.objects.instance [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lazy-loading 'pci_devices' on Instance uuid e9eb76fe-9616-40a4-aa53-0054cc5c3a57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.094 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:21:49 np0005629333 nova_compute[244014]:  <uuid>e9eb76fe-9616-40a4-aa53-0054cc5c3a57</uuid>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:  <name>instance-0000001c</name>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <nova:name>tempest-ImagesTestJSON-server-1303628417</nova:name>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:21:47</nova:creationTime>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:21:49 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:        <nova:user uuid="f1cfc3e643014e1c984d71182534fd24">tempest-ImagesTestJSON-1353368211-project-member</nova:user>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:        <nova:project uuid="851e1817495944c1ad7ac421ab226d13">tempest-ImagesTestJSON-1353368211</nova:project>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:        <nova:port uuid="6e87f383-dbcb-4dad-b195-bc813617ad12">
Feb 25 07:21:49 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <entry name="serial">e9eb76fe-9616-40a4-aa53-0054cc5c3a57</entry>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <entry name="uuid">e9eb76fe-9616-40a4-aa53-0054cc5c3a57</entry>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk">
Feb 25 07:21:49 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:21:49 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk.config">
Feb 25 07:21:49 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:21:49 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:9e:80:09"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <target dev="tap6e87f383-db"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/e9eb76fe-9616-40a4-aa53-0054cc5c3a57/console.log" append="off"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:21:49 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:21:49 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:21:49 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:21:49 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
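[annotation] The XML dump above is the guest definition that nova's libvirt driver renders in _get_guest_xml and then hands to libvirtd. A minimal sketch of that handoff using the libvirt-python bindings; the connection URI, file name, and start flag are illustrative assumptions, not nova's exact code path:

```python
import libvirt

# Connect to the local system libvirtd; nova manages this connection
# in nova.virt.libvirt.host rather than opening it ad hoc.
conn = libvirt.open("qemu:///system")

with open("domain.xml") as f:   # XML shaped like the dump logged above
    xml = f.read()

dom = conn.defineXML(xml)       # persist the domain definition
# Start paused; nova resumes the guest only after Neutron confirms the
# VIF plug (the "Paused" lifecycle event later in this log reflects that).
dom.createWithFlags(libvirt.VIR_DOMAIN_START_PAUSED)
print(dom.name(), dom.UUIDString())
conn.close()
```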
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.096 244018 DEBUG nova.compute.manager [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Preparing to wait for external event network-vif-plugged-6e87f383-dbcb-4dad-b195-bc813617ad12 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.096 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.097 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.097 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
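[annotation] The acquire/release pair above is oslo.concurrency guarding nova's per-instance event bookkeeping while it registers interest in network-vif-plugged-6e87f383-dbcb-4dad-b195-bc813617ad12. A hedged sketch of the same primitive; the lock name mirrors the log, the body is a placeholder:

```python
from oslo_concurrency import lockutils

# Serialize access to the pending-event table for one instance, as
# nova.compute.manager.InstanceEvents does with "<uuid>-events" locks.
with lockutils.lock("e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events"):
    # critical section: create or fetch the event this thread will wait on
    pass
```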
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.099 244018 DEBUG nova.virt.libvirt.vif [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:21:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1303628417',display_name='tempest-ImagesTestJSON-server-1303628417',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1303628417',id=28,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='851e1817495944c1ad7ac421ab226d13',ramdisk_id='',reservation_id='r-18jw7ang',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1353368211',owner_user_name='tempest-ImagesTestJSON-1353368211-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:21:43Z,user_data=None,user_id='f1cfc3e643014e1c984d71182534fd24',uuid=e9eb76fe-9616-40a4-aa53-0054cc5c3a57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e87f383-dbcb-4dad-b195-bc813617ad12", "address": "fa:16:3e:9e:80:09", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e87f383-db", "ovs_interfaceid": "6e87f383-dbcb-4dad-b195-bc813617ad12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.099 244018 DEBUG nova.network.os_vif_util [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converting VIF {"id": "6e87f383-dbcb-4dad-b195-bc813617ad12", "address": "fa:16:3e:9e:80:09", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e87f383-db", "ovs_interfaceid": "6e87f383-dbcb-4dad-b195-bc813617ad12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.100 244018 DEBUG nova.network.os_vif_util [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:80:09,bridge_name='br-int',has_traffic_filtering=True,id=6e87f383-dbcb-4dad-b195-bc813617ad12,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e87f383-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.101 244018 DEBUG os_vif [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:80:09,bridge_name='br-int',has_traffic_filtering=True,id=6e87f383-dbcb-4dad-b195-bc813617ad12,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e87f383-db') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
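[annotation] The plug() call above receives the VIFOpenVSwitch object that os_vif_util converted from nova's network info. A sketch of driving os-vif directly with the same values; whether this minimal field set satisfies the ovs plugin's requirements is an assumption:

```python
import os_vif
from os_vif.objects import instance_info, network, vif

os_vif.initialize()  # load the registered plugins (ovs, linux_bridge, ...)

net = network.Network(id="6a1663dd-25aa-4b0e-a1c0-42434d371a21",
                      bridge="br-int")
my_vif = vif.VIFOpenVSwitch(
    id="6e87f383-dbcb-4dad-b195-bc813617ad12",
    address="fa:16:3e:9e:80:09",
    bridge_name="br-int",
    vif_name="tap6e87f383-db",
    has_traffic_filtering=True,
    network=net,
    port_profile=vif.VIFPortProfileOpenVSwitch(
        interface_id="6e87f383-dbcb-4dad-b195-bc813617ad12"),
)
inst = instance_info.InstanceInfo(
    uuid="e9eb76fe-9616-40a4-aa53-0054cc5c3a57",
    name="tempest-ImagesTestJSON-server-1303628417")

# Creates the OVS port and sets its external_ids, ending in the
# "Successfully plugged vif" INFO line seen below.
os_vif.plug(my_vif, inst)
```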
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.102 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.102 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.103 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.107 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.107 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e87f383-db, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.108 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6e87f383-db, col_values=(('external_ids', {'iface-id': '6e87f383-dbcb-4dad-b195-bc813617ad12', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9e:80:09', 'vm-uuid': 'e9eb76fe-9616-40a4-aa53-0054cc5c3a57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
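[annotation] AddBridgeCommand, AddPortCommand and DbSetCommand above are ovsdbapp commands, each committed against the local ovsdb-server. A sketch of issuing the same operations through ovsdbapp's Open_vSwitch schema API, batched into one transaction; the socket path is an assumed default:

```python
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

OVSDB = "unix:/run/openvswitch/db.sock"  # assumption: local ovsdb-server socket
idl = connection.OvsdbIdl.from_server(OVSDB, "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

with api.transaction(check_error=True) as txn:
    txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
    txn.add(api.add_port("br-int", "tap6e87f383-db", may_exist=True))
    txn.add(api.db_set(
        "Interface", "tap6e87f383-db",
        ("external_ids", {
            "iface-id": "6e87f383-dbcb-4dad-b195-bc813617ad12",
            "iface-status": "active",
            "attached-mac": "fa:16:3e:9e:80:09",
        })))
```

The "Transaction caused no change" DEBUG line above is the idempotent case: br-int already exists, so may_exist=True turns AddBridgeCommand into a no-op.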
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.110 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:49 np0005629333 NetworkManager[49836]: <info>  [1772022109.1110] manager: (tap6e87f383-db): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.113 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.114 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.115 244018 INFO os_vif [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:80:09,bridge_name='br-int',has_traffic_filtering=True,id=6e87f383-dbcb-4dad-b195-bc813617ad12,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e87f383-db')#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.160 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.161 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.161 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No VIF found with MAC fa:16:3e:9e:80:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.162 244018 INFO nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Using config drive#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.182 244018 DEBUG nova.storage.rbd_utils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:21:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:21:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 2.0 MiB/s wr, 32 op/s
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.732 244018 INFO nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Creating config drive at /var/lib/nova/instances/e9eb76fe-9616-40a4-aa53-0054cc5c3a57/disk.config#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.739 244018 DEBUG oslo_concurrency.processutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e9eb76fe-9616-40a4-aa53-0054cc5c3a57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5m3_1uc7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.849 244018 DEBUG nova.network.neutron [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Successfully updated port: be69a588-f795-413e-b981-a20088eea5ed _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.862 244018 DEBUG oslo_concurrency.processutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e9eb76fe-9616-40a4-aa53-0054cc5c3a57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5m3_1uc7" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
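[annotation] The config drive is produced by shelling out to mkisofs through oslo.concurrency, as the "Running cmd"/"CMD ... returned: 0" pair shows. A hedged equivalent of that invocation, with most of the logged flags and placeholder paths (the -publisher string is omitted here):

```python
from oslo_concurrency import processutils

# Build an ISO9660 config drive labelled "config-2", as nova does;
# /tmp/metadata stands in for the staged metadata tree nova assembles
# in a temporary directory.
out, err = processutils.execute(
    "/usr/bin/mkisofs", "-o", "disk.config",
    "-ldots", "-allow-lowercase", "-allow-multidot",
    "-l", "-quiet", "-J", "-r", "-V", "config-2",
    "/tmp/metadata",
)
```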
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.894 244018 DEBUG nova.storage.rbd_utils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.899 244018 DEBUG oslo_concurrency.processutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e9eb76fe-9616-40a4-aa53-0054cc5c3a57/disk.config e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.923 244018 DEBUG oslo_concurrency.lockutils [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.924 244018 DEBUG oslo_concurrency.lockutils [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquired lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.925 244018 DEBUG nova.network.neutron [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.996 244018 DEBUG nova.compute.manager [req-beb95973-8d23-4b6e-bfbd-722b4b88dfa1 req-1695f558-c1af-46c3-a899-2d7058947a4a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-changed-be69a588-f795-413e-b981-a20088eea5ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.996 244018 DEBUG nova.compute.manager [req-beb95973-8d23-4b6e-bfbd-722b4b88dfa1 req-1695f558-c1af-46c3-a899-2d7058947a4a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Refreshing instance network info cache due to event network-changed-be69a588-f795-413e-b981-a20088eea5ed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:21:49 np0005629333 nova_compute[244014]: 2026-02-25 12:21:49.997 244018 DEBUG oslo_concurrency.lockutils [req-beb95973-8d23-4b6e-bfbd-722b4b88dfa1 req-1695f558-c1af-46c3-a899-2d7058947a4a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:21:50 np0005629333 nova_compute[244014]: 2026-02-25 12:21:50.057 244018 DEBUG oslo_concurrency.processutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e9eb76fe-9616-40a4-aa53-0054cc5c3a57/disk.config e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:21:50 np0005629333 nova_compute[244014]: 2026-02-25 12:21:50.058 244018 INFO nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Deleting local config drive /var/lib/nova/instances/e9eb76fe-9616-40a4-aa53-0054cc5c3a57/disk.config because it was imported into RBD.#033[00m
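[annotation] rbd_utils first probes whether <uuid>_disk.config already exists in the vms pool, then imports the local ISO with the rbd CLI shown above and deletes the file. A sketch of the existence probe with the python rados/rbd bindings; client id, pool and conf path come from the log, the control flow is simplified:

```python
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("vms")
    try:
        # Opening the image is the existence check; rbd_utils logs
        # "rbd image ... does not exist" when this raises.
        img = rbd.Image(ioctx, "e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk.config")
        img.close()
        print("image exists")
    except rbd.ImageNotFound:
        print("image does not exist")
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```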
Feb 25 07:21:50 np0005629333 nova_compute[244014]: 2026-02-25 12:21:50.074 244018 DEBUG nova.network.neutron [req-2c57c03a-f408-41e7-9d28-33c9e6837849 req-7244daac-ad40-4224-b45f-b17c95ded274 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Updated VIF entry in instance network info cache for port 6e87f383-dbcb-4dad-b195-bc813617ad12. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:21:50 np0005629333 nova_compute[244014]: 2026-02-25 12:21:50.075 244018 DEBUG nova.network.neutron [req-2c57c03a-f408-41e7-9d28-33c9e6837849 req-7244daac-ad40-4224-b45f-b17c95ded274 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Updating instance_info_cache with network_info: [{"id": "6e87f383-dbcb-4dad-b195-bc813617ad12", "address": "fa:16:3e:9e:80:09", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e87f383-db", "ovs_interfaceid": "6e87f383-dbcb-4dad-b195-bc813617ad12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:21:50 np0005629333 nova_compute[244014]: 2026-02-25 12:21:50.092 244018 DEBUG oslo_concurrency.lockutils [req-2c57c03a-f408-41e7-9d28-33c9e6837849 req-7244daac-ad40-4224-b45f-b17c95ded274 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-e9eb76fe-9616-40a4-aa53-0054cc5c3a57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:21:50 np0005629333 NetworkManager[49836]: <info>  [1772022110.1140] manager: (tap6e87f383-db): new Tun device (/org/freedesktop/NetworkManager/Devices/93)
Feb 25 07:21:50 np0005629333 kernel: tap6e87f383-db: entered promiscuous mode
Feb 25 07:21:50 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:50Z|00190|binding|INFO|Claiming lport 6e87f383-dbcb-4dad-b195-bc813617ad12 for this chassis.
Feb 25 07:21:50 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:50Z|00191|binding|INFO|6e87f383-dbcb-4dad-b195-bc813617ad12: Claiming fa:16:3e:9e:80:09 10.100.0.13
Feb 25 07:21:50 np0005629333 nova_compute[244014]: 2026-02-25 12:21:50.118 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.124 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:80:09 10.100.0.13'], port_security=['fa:16:3e:9e:80:09 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e9eb76fe-9616-40a4-aa53-0054cc5c3a57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '851e1817495944c1ad7ac421ab226d13', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a8a99d02-a675-470f-9701-584a41fcc90e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9e85d49-f0c0-457f-8aeb-85579a3d5aa5, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6e87f383-dbcb-4dad-b195-bc813617ad12) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.126 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6e87f383-dbcb-4dad-b195-bc813617ad12 in datapath 6a1663dd-25aa-4b0e-a1c0-42434d371a21 bound to our chassis#033[00m
Feb 25 07:21:50 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:50Z|00192|binding|INFO|Setting lport 6e87f383-dbcb-4dad-b195-bc813617ad12 ovn-installed in OVS
Feb 25 07:21:50 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:50Z|00193|binding|INFO|Setting lport 6e87f383-dbcb-4dad-b195-bc813617ad12 up in Southbound
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.128 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a1663dd-25aa-4b0e-a1c0-42434d371a21#033[00m
Feb 25 07:21:50 np0005629333 nova_compute[244014]: 2026-02-25 12:21:50.128 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:50 np0005629333 nova_compute[244014]: 2026-02-25 12:21:50.131 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.141 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ad43f7c7-e8cd-401b-8708-39c3aba29d7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.142 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6a1663dd-21 in ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.144 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6a1663dd-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.145 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[562c0ac2-9433-4eab-bbae-2a62b9ed5c83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.146 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[20e700a9-60ea-483c-ab6e-9f80d85c1f52]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
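[annotation] "Creating VETH tap6a1663dd-21 in ovnmeta-... namespace" above is neutron's privsep-backed ip_lib building the metadata namespace plumbing. A rough pyroute2 equivalent; neutron's real helpers add privilege separation and tolerate an existing namespace, so treat this as a sketch:

```python
from pyroute2 import IPRoute, netns

NS = "ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21"
netns.create(NS)  # create the target namespace (raises if it already exists)

with IPRoute() as ipr:
    # One end (tap6a1663dd-20) stays in the root namespace to be plugged
    # into br-int; the peer (tap6a1663dd-21) lands directly inside NS.
    ipr.link("add", ifname="tap6a1663dd-20", kind="veth",
             peer={"ifname": "tap6a1663dd-21", "net_ns_fd": NS})
```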
Feb 25 07:21:50 np0005629333 systemd-machined[210048]: New machine qemu-32-instance-0000001c.
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.160 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[ed627be5-6706-4779-9505-6a0e775ac247]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:50 np0005629333 systemd[1]: Started Virtual Machine qemu-32-instance-0000001c.
Feb 25 07:21:50 np0005629333 systemd-udevd[271166]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.177 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7161c572-1837-4c82-9c0b-f90240a82d6c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:50 np0005629333 NetworkManager[49836]: <info>  [1772022110.1875] device (tap6e87f383-db): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:21:50 np0005629333 NetworkManager[49836]: <info>  [1772022110.1886] device (tap6e87f383-db): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.212 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8547d8e4-1d23-4faa-bc7d-a1f315c4629d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.217 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6dc074b2-921c-46ac-a660-71310491d4e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:50 np0005629333 systemd-udevd[271169]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:21:50 np0005629333 NetworkManager[49836]: <info>  [1772022110.2194] manager: (tap6a1663dd-20): new Veth device (/org/freedesktop/NetworkManager/Devices/94)
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.259 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[fc2c1bd9-d995-468d-86ed-33ea633aca14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.264 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e0d5a885-ebf1-492e-9369-ba679f213ad2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:50 np0005629333 NetworkManager[49836]: <info>  [1772022110.2947] device (tap6a1663dd-20): carrier: link connected
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.303 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[edb97d22-de19-4dde-9bbc-70a0b1fdd73e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.322 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a3e4adc5-023e-49cf-8c7c-34aac90e99a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a1663dd-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:0c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407970, 'reachable_time': 35160, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271196, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.337 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b6714865-91e0-40ba-8dd6-fe4392ba016e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe47:cb3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407970, 'tstamp': 407970}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271197, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.352 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2e6d07ce-c102-4f55-85e3-1f82b246af73]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a1663dd-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:0c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407970, 'reachable_time': 35160, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271198, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.382 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[427ff152-2ecd-4382-bc03-22b9cc7ec6f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.442 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f7075e7e-cb67-46b1-b4e6-8cb7b40fd1b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.443 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a1663dd-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.443 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.444 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a1663dd-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:21:50 np0005629333 nova_compute[244014]: 2026-02-25 12:21:50.446 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:50 np0005629333 NetworkManager[49836]: <info>  [1772022110.4467] manager: (tap6a1663dd-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Feb 25 07:21:50 np0005629333 kernel: tap6a1663dd-20: entered promiscuous mode
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.449 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a1663dd-20, col_values=(('external_ids', {'iface-id': '7f515737-b36a-4caa-affc-8ad110539172'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:21:50 np0005629333 nova_compute[244014]: 2026-02-25 12:21:50.450 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:50 np0005629333 nova_compute[244014]: 2026-02-25 12:21:50.454 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.454 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6a1663dd-25aa-4b0e-a1c0-42434d371a21.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6a1663dd-25aa-4b0e-a1c0-42434d371a21.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:21:50 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:50Z|00194|binding|INFO|Releasing lport 7f515737-b36a-4caa-affc-8ad110539172 from this chassis (sb_readonly=0)
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.455 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fc1ed559-d081-4679-b48f-c8eefabf28af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.456 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-6a1663dd-25aa-4b0e-a1c0-42434d371a21
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/6a1663dd-25aa-4b0e-a1c0-42434d371a21.pid.haproxy
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 6a1663dd-25aa-4b0e-a1c0-42434d371a21
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
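[annotation] The generated haproxy config above binds 169.254.169.254:80 inside the namespace, tags each request with X-OVN-Network-ID, and forwards it to the agent's unix socket at /var/lib/neutron/metadata_proxy. From inside the guest, the well-known endpoint then answers requests like this sketch (run in the instance, not on the compute host):

```python
import requests

# The haproxy instance above proxies this link-local address to the
# OVN metadata agent, which fetches the instance's metadata from nova.
resp = requests.get("http://169.254.169.254/openstack/latest/meta_data.json",
                    timeout=5)
resp.raise_for_status()
print(resp.json()["uuid"])  # expected: the instance UUID seen in this log
```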
Feb 25 07:21:50 np0005629333 nova_compute[244014]: 2026-02-25 12:21:50.457 244018 WARNING nova.network.neutron [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] 08121372-a435-401a-b405-778e10d8c2e2 already exists in list: networks containing: ['08121372-a435-401a-b405-778e10d8c2e2']. ignoring it#033[00m
Feb 25 07:21:50 np0005629333 nova_compute[244014]: 2026-02-25 12:21:50.457 244018 WARNING nova.network.neutron [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] 08121372-a435-401a-b405-778e10d8c2e2 already exists in list: networks containing: ['08121372-a435-401a-b405-778e10d8c2e2']. ignoring it#033[00m
Feb 25 07:21:50 np0005629333 nova_compute[244014]: 2026-02-25 12:21:50.458 244018 WARNING nova.network.neutron [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] 08121372-a435-401a-b405-778e10d8c2e2 already exists in list: networks containing: ['08121372-a435-401a-b405-778e10d8c2e2']. ignoring it#033[00m
Feb 25 07:21:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:50.457 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'env', 'PROCESS_TAG=haproxy-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6a1663dd-25aa-4b0e-a1c0-42434d371a21.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 07:21:50 np0005629333 nova_compute[244014]: 2026-02-25 12:21:50.464 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:50 np0005629333 nova_compute[244014]: 2026-02-25 12:21:50.800 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022110.799059, e9eb76fe-9616-40a4-aa53-0054cc5c3a57 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:21:50 np0005629333 nova_compute[244014]: 2026-02-25 12:21:50.802 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] VM Started (Lifecycle Event)#033[00m
Feb 25 07:21:50 np0005629333 nova_compute[244014]: 2026-02-25 12:21:50.828 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:21:50 np0005629333 nova_compute[244014]: 2026-02-25 12:21:50.833 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022110.799418, e9eb76fe-9616-40a4-aa53-0054cc5c3a57 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:21:50 np0005629333 nova_compute[244014]: 2026-02-25 12:21:50.834 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:21:50 np0005629333 nova_compute[244014]: 2026-02-25 12:21:50.856 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:21:50 np0005629333 nova_compute[244014]: 2026-02-25 12:21:50.866 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:21:50 np0005629333 podman[271272]: 2026-02-25 12:21:50.886112282 +0000 UTC m=+0.098010179 container create 32e2e67f265024bd8ca460335f43bf713d9bf8daeebf886f60f6318d469a72a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true)
Feb 25 07:21:50 np0005629333 nova_compute[244014]: 2026-02-25 12:21:50.890 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
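[annotation] The Started/Paused pair above originates from libvirt's lifecycle events, which nova's event thread re-emits as LifecycleEvents; VM power_state 3 is "paused", consistent with the domain being started paused until the VIF plug completes. A minimal sketch of subscribing to those events with libvirt-python; the callback body is illustrative:

```python
import libvirt

def on_lifecycle(conn, dom, event, detail, opaque):
    # event is e.g. VIR_DOMAIN_EVENT_STARTED or VIR_DOMAIN_EVENT_SUSPENDED;
    # nova translates these into its own Started/Paused lifecycle events.
    print(dom.UUIDString(), event, detail)

libvirt.virEventRegisterDefaultImpl()   # must precede opening the connection
conn = libvirt.open("qemu:///system")
conn.domainEventRegisterAny(None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                            on_lifecycle, None)
while True:                             # drive the default event loop
    libvirt.virEventRunDefaultImpl()
```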
Feb 25 07:21:50 np0005629333 podman[271272]: 2026-02-25 12:21:50.816735528 +0000 UTC m=+0.028633245 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:21:50 np0005629333 systemd[1]: Started libpod-conmon-32e2e67f265024bd8ca460335f43bf713d9bf8daeebf886f60f6318d469a72a1.scope.
Feb 25 07:21:50 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:21:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dda58046a993fafc00305b047e988f5ee511b02fbba1a417a37a87ca1579e71e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:21:51 np0005629333 podman[271272]: 2026-02-25 12:21:51.003654807 +0000 UTC m=+0.215552574 container init 32e2e67f265024bd8ca460335f43bf713d9bf8daeebf886f60f6318d469a72a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 25 07:21:51 np0005629333 podman[271272]: 2026-02-25 12:21:51.01149735 +0000 UTC m=+0.223395077 container start 32e2e67f265024bd8ca460335f43bf713d9bf8daeebf886f60f6318d469a72a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 07:21:51 np0005629333 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[271287]: [NOTICE]   (271291) : New worker (271293) forked
Feb 25 07:21:51 np0005629333 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[271287]: [NOTICE]   (271291) : Loading success.
Feb 25 07:21:51 np0005629333 nova_compute[244014]: 2026-02-25 12:21:51.067 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Feb 25 07:21:52 np0005629333 podman[271302]: 2026-02-25 12:21:52.736131888 +0000 UTC m=+0.073874893 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:21:52 np0005629333 podman[271303]: 2026-02-25 12:21:52.798961466 +0000 UTC m=+0.133039876 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223)
Feb 25 07:21:53 np0005629333 nova_compute[244014]: 2026-02-25 12:21:53.494 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 25 07:21:54 np0005629333 nova_compute[244014]: 2026-02-25 12:21:54.109 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:21:54 np0005629333 nova_compute[244014]: 2026-02-25 12:21:54.970 244018 DEBUG nova.compute.manager [req-1497a9fa-a9b7-4556-96e4-b90b63c88169 req-88bc28c6-a5cf-4de4-9dba-361182ed8be0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Received event network-vif-plugged-6e87f383-dbcb-4dad-b195-bc813617ad12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:21:54 np0005629333 nova_compute[244014]: 2026-02-25 12:21:54.971 244018 DEBUG oslo_concurrency.lockutils [req-1497a9fa-a9b7-4556-96e4-b90b63c88169 req-88bc28c6-a5cf-4de4-9dba-361182ed8be0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:21:54 np0005629333 nova_compute[244014]: 2026-02-25 12:21:54.972 244018 DEBUG oslo_concurrency.lockutils [req-1497a9fa-a9b7-4556-96e4-b90b63c88169 req-88bc28c6-a5cf-4de4-9dba-361182ed8be0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:21:54 np0005629333 nova_compute[244014]: 2026-02-25 12:21:54.972 244018 DEBUG oslo_concurrency.lockutils [req-1497a9fa-a9b7-4556-96e4-b90b63c88169 req-88bc28c6-a5cf-4de4-9dba-361182ed8be0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:21:54 np0005629333 nova_compute[244014]: 2026-02-25 12:21:54.973 244018 DEBUG nova.compute.manager [req-1497a9fa-a9b7-4556-96e4-b90b63c88169 req-88bc28c6-a5cf-4de4-9dba-361182ed8be0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Processing event network-vif-plugged-6e87f383-dbcb-4dad-b195-bc813617ad12 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:21:54 np0005629333 nova_compute[244014]: 2026-02-25 12:21:54.973 244018 DEBUG nova.compute.manager [req-1497a9fa-a9b7-4556-96e4-b90b63c88169 req-88bc28c6-a5cf-4de4-9dba-361182ed8be0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Received event network-vif-plugged-6e87f383-dbcb-4dad-b195-bc813617ad12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:21:54 np0005629333 nova_compute[244014]: 2026-02-25 12:21:54.974 244018 DEBUG oslo_concurrency.lockutils [req-1497a9fa-a9b7-4556-96e4-b90b63c88169 req-88bc28c6-a5cf-4de4-9dba-361182ed8be0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:21:54 np0005629333 nova_compute[244014]: 2026-02-25 12:21:54.974 244018 DEBUG oslo_concurrency.lockutils [req-1497a9fa-a9b7-4556-96e4-b90b63c88169 req-88bc28c6-a5cf-4de4-9dba-361182ed8be0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:21:54 np0005629333 nova_compute[244014]: 2026-02-25 12:21:54.974 244018 DEBUG oslo_concurrency.lockutils [req-1497a9fa-a9b7-4556-96e4-b90b63c88169 req-88bc28c6-a5cf-4de4-9dba-361182ed8be0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:21:54 np0005629333 nova_compute[244014]: 2026-02-25 12:21:54.975 244018 DEBUG nova.compute.manager [req-1497a9fa-a9b7-4556-96e4-b90b63c88169 req-88bc28c6-a5cf-4de4-9dba-361182ed8be0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] No waiting events found dispatching network-vif-plugged-6e87f383-dbcb-4dad-b195-bc813617ad12 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:21:54 np0005629333 nova_compute[244014]: 2026-02-25 12:21:54.975 244018 WARNING nova.compute.manager [req-1497a9fa-a9b7-4556-96e4-b90b63c88169 req-88bc28c6-a5cf-4de4-9dba-361182ed8be0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Received unexpected event network-vif-plugged-6e87f383-dbcb-4dad-b195-bc813617ad12 for instance with vm_state building and task_state spawning.#033[00m
Feb 25 07:21:54 np0005629333 nova_compute[244014]: 2026-02-25 12:21:54.976 244018 DEBUG nova.compute.manager [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:21:54 np0005629333 nova_compute[244014]: 2026-02-25 12:21:54.981 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022114.981375, e9eb76fe-9616-40a4-aa53-0054cc5c3a57 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:21:54 np0005629333 nova_compute[244014]: 2026-02-25 12:21:54.982 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:21:54 np0005629333 nova_compute[244014]: 2026-02-25 12:21:54.986 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:21:54 np0005629333 nova_compute[244014]: 2026-02-25 12:21:54.991 244018 INFO nova.virt.libvirt.driver [-] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Instance spawned successfully.#033[00m
Feb 25 07:21:54 np0005629333 nova_compute[244014]: 2026-02-25 12:21:54.991 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:21:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:55.005 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:21:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:55.006 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:21:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:55.007 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:21:55 np0005629333 nova_compute[244014]: 2026-02-25 12:21:55.023 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:21:55 np0005629333 nova_compute[244014]: 2026-02-25 12:21:55.033 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:21:55 np0005629333 nova_compute[244014]: 2026-02-25 12:21:55.038 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:21:55 np0005629333 nova_compute[244014]: 2026-02-25 12:21:55.039 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:21:55 np0005629333 nova_compute[244014]: 2026-02-25 12:21:55.040 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:21:55 np0005629333 nova_compute[244014]: 2026-02-25 12:21:55.041 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:21:55 np0005629333 nova_compute[244014]: 2026-02-25 12:21:55.042 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:21:55 np0005629333 nova_compute[244014]: 2026-02-25 12:21:55.042 244018 DEBUG nova.virt.libvirt.driver [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:21:55 np0005629333 nova_compute[244014]: 2026-02-25 12:21:55.059 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:21:55 np0005629333 nova_compute[244014]: 2026-02-25 12:21:55.107 244018 INFO nova.compute.manager [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Took 11.28 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:21:55 np0005629333 nova_compute[244014]: 2026-02-25 12:21:55.108 244018 DEBUG nova.compute.manager [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:21:55 np0005629333 nova_compute[244014]: 2026-02-25 12:21:55.174 244018 INFO nova.compute.manager [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Took 12.68 seconds to build instance.#033[00m
Feb 25 07:21:55 np0005629333 nova_compute[244014]: 2026-02-25 12:21:55.191 244018 DEBUG oslo_concurrency.lockutils [None req-cc48f854-56a7-4404-a18b-d5f8829e2b77 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:21:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 25 07:21:55 np0005629333 nova_compute[244014]: 2026-02-25 12:21:55.902 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Acquiring lock "5a7dc142-2b11-4214-87b7-636f27ccacbf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:21:55 np0005629333 nova_compute[244014]: 2026-02-25 12:21:55.903 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:21:55 np0005629333 nova_compute[244014]: 2026-02-25 12:21:55.922 244018 DEBUG nova.compute.manager [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.001 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.002 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.011 244018 DEBUG nova.virt.hardware [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.012 244018 INFO nova.compute.claims [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.340 244018 DEBUG oslo_concurrency.processutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.575 244018 DEBUG nova.network.neutron [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updating instance_info_cache with network_info: [{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f1abe770-5205-4bae-888a-f2489c2af7a7", "address": "fa:16:3e:85:c5:fe", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1abe770-52", "ovs_interfaceid": "f1abe770-5205-4bae-888a-f2489c2af7a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be69a588-f795-413e-b981-a20088eea5ed", "address": "fa:16:3e:76:32:d9", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe69a588-f7", "ovs_interfaceid": "be69a588-f795-413e-b981-a20088eea5ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.603 244018 DEBUG oslo_concurrency.lockutils [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Releasing lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.604 244018 DEBUG oslo_concurrency.lockutils [req-beb95973-8d23-4b6e-bfbd-722b4b88dfa1 req-1695f558-c1af-46c3-a899-2d7058947a4a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.605 244018 DEBUG nova.network.neutron [req-beb95973-8d23-4b6e-bfbd-722b4b88dfa1 req-1695f558-c1af-46c3-a899-2d7058947a4a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Refreshing network info cache for port be69a588-f795-413e-b981-a20088eea5ed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.609 244018 DEBUG nova.virt.libvirt.vif [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be69a588-f795-413e-b981-a20088eea5ed", "address": "fa:16:3e:76:32:d9", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe69a588-f7", "ovs_interfaceid": "be69a588-f795-413e-b981-a20088eea5ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.609 244018 DEBUG nova.network.os_vif_util [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "be69a588-f795-413e-b981-a20088eea5ed", "address": "fa:16:3e:76:32:d9", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe69a588-f7", "ovs_interfaceid": "be69a588-f795-413e-b981-a20088eea5ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.611 244018 DEBUG nova.network.os_vif_util [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:32:d9,bridge_name='br-int',has_traffic_filtering=True,id=be69a588-f795-413e-b981-a20088eea5ed,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe69a588-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.611 244018 DEBUG os_vif [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:32:d9,bridge_name='br-int',has_traffic_filtering=True,id=be69a588-f795-413e-b981-a20088eea5ed,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe69a588-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.612 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.613 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.614 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.617 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.618 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe69a588-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.618 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbe69a588-f7, col_values=(('external_ids', {'iface-id': 'be69a588-f795-413e-b981-a20088eea5ed', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:32:d9', 'vm-uuid': '51d1d661-89db-4958-a2f4-c299ee997cde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:21:56 np0005629333 NetworkManager[49836]: <info>  [1772022116.6219] manager: (tapbe69a588-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.623 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.630 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.632 244018 INFO os_vif [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:32:d9,bridge_name='br-int',has_traffic_filtering=True,id=be69a588-f795-413e-b981-a20088eea5ed,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe69a588-f7')#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.633 244018 DEBUG nova.virt.libvirt.vif [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be69a588-f795-413e-b981-a20088eea5ed", "address": "fa:16:3e:76:32:d9", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe69a588-f7", "ovs_interfaceid": "be69a588-f795-413e-b981-a20088eea5ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.633 244018 DEBUG nova.network.os_vif_util [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "be69a588-f795-413e-b981-a20088eea5ed", "address": "fa:16:3e:76:32:d9", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe69a588-f7", "ovs_interfaceid": "be69a588-f795-413e-b981-a20088eea5ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.634 244018 DEBUG nova.network.os_vif_util [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:32:d9,bridge_name='br-int',has_traffic_filtering=True,id=be69a588-f795-413e-b981-a20088eea5ed,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe69a588-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.637 244018 DEBUG nova.virt.libvirt.guest [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] attach device xml: <interface type="ethernet">
Feb 25 07:21:56 np0005629333 nova_compute[244014]:  <mac address="fa:16:3e:76:32:d9"/>
Feb 25 07:21:56 np0005629333 nova_compute[244014]:  <model type="virtio"/>
Feb 25 07:21:56 np0005629333 nova_compute[244014]:  <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:21:56 np0005629333 nova_compute[244014]:  <mtu size="1442"/>
Feb 25 07:21:56 np0005629333 nova_compute[244014]:  <target dev="tapbe69a588-f7"/>
Feb 25 07:21:56 np0005629333 nova_compute[244014]: </interface>
Feb 25 07:21:56 np0005629333 nova_compute[244014]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb 25 07:21:56 np0005629333 kernel: tapbe69a588-f7: entered promiscuous mode
Feb 25 07:21:56 np0005629333 NetworkManager[49836]: <info>  [1772022116.6482] manager: (tapbe69a588-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/97)
Feb 25 07:21:56 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:56Z|00195|binding|INFO|Claiming lport be69a588-f795-413e-b981-a20088eea5ed for this chassis.
Feb 25 07:21:56 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:56Z|00196|binding|INFO|be69a588-f795-413e-b981-a20088eea5ed: Claiming fa:16:3e:76:32:d9 10.100.0.3
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.650 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:56.659 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:32:d9 10.100.0.3'], port_security=['fa:16:3e:76:32:d9 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-616423065', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '51d1d661-89db-4958-a2f4-c299ee997cde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-616423065', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '078dca40-137f-4eb6-953b-2ae25d0b4ca3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=be69a588-f795-413e-b981-a20088eea5ed) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:21:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:56.660 157129 INFO neutron.agent.ovn.metadata.agent [-] Port be69a588-f795-413e-b981-a20088eea5ed in datapath 08121372-a435-401a-b405-778e10d8c2e2 bound to our chassis#033[00m
Feb 25 07:21:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:56.662 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08121372-a435-401a-b405-778e10d8c2e2#033[00m
Feb 25 07:21:56 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:56Z|00197|binding|INFO|Setting lport be69a588-f795-413e-b981-a20088eea5ed up in Southbound
Feb 25 07:21:56 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:56Z|00198|binding|INFO|Setting lport be69a588-f795-413e-b981-a20088eea5ed ovn-installed in OVS
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.666 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.669 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:56 np0005629333 systemd-udevd[271374]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:21:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:56.677 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f7a5d9bf-40aa-4b2c-82c5-a67612b1b4f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:56 np0005629333 NetworkManager[49836]: <info>  [1772022116.6867] device (tapbe69a588-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:21:56 np0005629333 NetworkManager[49836]: <info>  [1772022116.6884] device (tapbe69a588-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:21:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:56.726 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0701b5fc-a41d-4902-8e8c-95b78e29e879]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:56.729 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[038eb543-dd3c-4a69-9d79-5dd6a311f5ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:56.752 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4d222830-c8b8-40ba-b0e5-1d474f8f2971]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:56.794 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5cee3435-0595-48fb-bfbd-c4b8268b57bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402565, 'reachable_time': 31993, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271381, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.817 244018 DEBUG nova.virt.libvirt.driver [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:21:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:56.816 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[79432e3c-ddcb-4bd0-b46d-ceae47b5c6e5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402574, 'tstamp': 402574}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271382, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402577, 'tstamp': 402577}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271382, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
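
The two privsep replies above are pyroute2 netlink messages (RTM_NEWLINK, then RTM_NEWADDR) gathered inside the ovnmeta- network namespace: tap08121372-a1 is a veth leg (IFLA_INFO_KIND 'veth') carrying the subnet address 10.100.0.2/28 plus the metadata address 169.254.169.254/32. A minimal sketch of the same query done directly with pyroute2 — the agent routes it through the privsep daemon rather than in-process; namespace and device names are taken from the reply's 'target' and IFLA_IFNAME fields:

    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-08121372-a435-401a-b405-778e10d8c2e2')
    try:
        idx = ns.link_lookup(ifname='tap08121372-a1')[0]
        link = ns.get_links(idx)[0]               # RTM_NEWLINK, as logged above
        print(link.get_attr('IFLA_ADDRESS'))      # fa:16:3e:2b:73:2b
        for addr in ns.get_addr(index=idx):       # RTM_NEWADDR messages
            print(addr.get_attr('IFA_ADDRESS'))   # 10.100.0.2, 169.254.169.254
    finally:
        ns.close()
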
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.817 244018 DEBUG nova.virt.libvirt.driver [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.818 244018 DEBUG nova.virt.libvirt.driver [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:4a:d5:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.818 244018 DEBUG nova.virt.libvirt.driver [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:c1:c1:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.819 244018 DEBUG nova.virt.libvirt.driver [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:85:c5:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:21:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:56.819 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.819 244018 DEBUG nova.virt.libvirt.driver [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:76:32:d9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:21:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:56.824 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08121372-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:21:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:56.824 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:21:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:56.825 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08121372-a0, col_values=(('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:21:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:56.826 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
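
The three ovsdbapp transactions above re-home the metadata tap: drop tap08121372-a0 from br-ex if it is there, add it to br-int, and bind it to its OVN logical port by setting external_ids:iface-id; the latter two report "Transaction caused no change" because the port was already wired that way. A sketch of the same sequence against ovsdbapp's Open_vSwitch schema API, batched into one transaction for brevity where the agent runs three single-command ones (txn n=1); the socket path is an assumption, the agent uses its configured ovsdb_connection:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap08121372-a0', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap08121372-a0', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap08121372-a0',
            ('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'})))
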
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.827 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:21:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3765039026' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.906 244018 DEBUG nova.virt.libvirt.guest [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:21:56 np0005629333 nova_compute[244014]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:21:56 np0005629333 nova_compute[244014]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1386050966</nova:name>
Feb 25 07:21:56 np0005629333 nova_compute[244014]:  <nova:creationTime>2026-02-25 12:21:56</nova:creationTime>
Feb 25 07:21:56 np0005629333 nova_compute[244014]:  <nova:flavor name="m1.nano">
Feb 25 07:21:56 np0005629333 nova_compute[244014]:    <nova:memory>128</nova:memory>
Feb 25 07:21:56 np0005629333 nova_compute[244014]:    <nova:disk>1</nova:disk>
Feb 25 07:21:56 np0005629333 nova_compute[244014]:    <nova:swap>0</nova:swap>
Feb 25 07:21:56 np0005629333 nova_compute[244014]:    <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:21:56 np0005629333 nova_compute[244014]:    <nova:vcpus>1</nova:vcpus>
Feb 25 07:21:56 np0005629333 nova_compute[244014]:  </nova:flavor>
Feb 25 07:21:56 np0005629333 nova_compute[244014]:  <nova:owner>
Feb 25 07:21:56 np0005629333 nova_compute[244014]:    <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 07:21:56 np0005629333 nova_compute[244014]:    <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 07:21:56 np0005629333 nova_compute[244014]:  </nova:owner>
Feb 25 07:21:56 np0005629333 nova_compute[244014]:  <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:21:56 np0005629333 nova_compute[244014]:  <nova:ports>
Feb 25 07:21:56 np0005629333 nova_compute[244014]:    <nova:port uuid="433e6f28-313e-4fe8-b8da-eacc8a0332c8">
Feb 25 07:21:56 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 07:21:56 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:21:56 np0005629333 nova_compute[244014]:    <nova:port uuid="31f40ed6-505b-4061-b861-ea2720b0ff62">
Feb 25 07:21:56 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 25 07:21:56 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:21:56 np0005629333 nova_compute[244014]:    <nova:port uuid="f1abe770-5205-4bae-888a-f2489c2af7a7">
Feb 25 07:21:56 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 07:21:56 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:21:56 np0005629333 nova_compute[244014]:    <nova:port uuid="be69a588-f795-413e-b981-a20088eea5ed">
Feb 25 07:21:56 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 07:21:56 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:21:56 np0005629333 nova_compute[244014]:  </nova:ports>
Feb 25 07:21:56 np0005629333 nova_compute[244014]: </nova:instance>
Feb 25 07:21:56 np0005629333 nova_compute[244014]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
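
The <nova:instance> document above is attached to the libvirt domain as namespaced metadata so the flavor, owner and port list travel with the guest definition; the port list now includes the freshly attached be69a588 port at 10.100.0.3. Under the hood, guest.set_metadata() is a thin wrapper over virDomain.setMetadata(); a minimal sketch, with a one-element XML literal standing in for the full document logged above:

    import libvirt

    nova_xml = '<instance xmlns="http://openstack.org/xmlns/libvirt/nova/1.1"/>'

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('51d1d661-89db-4958-a2f4-c299ee997cde')
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, nova_xml,
                    'instance', 'http://openstack.org/xmlns/libvirt/nova/1.1',
                    libvirt.VIR_DOMAIN_AFFECT_LIVE |
                    libvirt.VIR_DOMAIN_AFFECT_CONFIG)
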
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.914 244018 DEBUG oslo_concurrency.processutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:21:56 np0005629333 nova_compute[244014]: 2026-02-25 12:21:56.921 244018 DEBUG nova.compute.provider_tree [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.077 244018 DEBUG oslo_concurrency.lockutils [None req-e5a5e7f9-034c-4839-b966-fe0776f9cb08 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-51d1d661-89db-4958-a2f4-c299ee997cde-be69a588-f795-413e-b981-a20088eea5ed" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 10.254s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.096 244018 DEBUG nova.scheduler.client.report [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
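
The inventory record holds raw totals; what Placement actually lets the scheduler consume per resource class is int((total - reserved) * allocation_ratio). Worked out for the values above:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, int((inv['total'] - inv['reserved']) * inv['allocation_ratio']))
    # VCPU 32, MEMORY_MB 7167, DISK_GB 52
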
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.113 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.118 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.119 244018 DEBUG nova.compute.manager [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.167 244018 DEBUG nova.compute.manager [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.168 244018 DEBUG nova.network.neutron [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.191 244018 INFO nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.209 244018 DEBUG nova.compute.manager [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.302 244018 DEBUG nova.compute.manager [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.304 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.305 244018 INFO nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Creating image(s)#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.337 244018 DEBUG nova.storage.rbd_utils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] rbd image 5a7dc142-2b11-4214-87b7-636f27ccacbf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.370 244018 DEBUG nova.storage.rbd_utils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] rbd image 5a7dc142-2b11-4214-87b7-636f27ccacbf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.404 244018 DEBUG nova.storage.rbd_utils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] rbd image 5a7dc142-2b11-4214-87b7-636f27ccacbf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.409 244018 DEBUG oslo_concurrency.processutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.474 244018 DEBUG oslo_concurrency.processutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
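
Note the guard rails on the qemu-img probe: oslo wraps it in its prlimit helper so a malformed base image can neither exhaust memory (--as=1073741824, a 1 GiB address-space cap) nor spin forever (--cpu=30 seconds of CPU time). An execute() call that produces this exact command line looks roughly like:

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30))
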
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.475 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.475 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.476 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
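
The lock here is keyed on the base image's cache filename (a hash of the Glance image ID), so only one request per host fetches a given image into _base; it is held for 0.000s because the cached copy already exists. The equivalent oslo_concurrency pattern, as a sketch — the lock_path value is an assumption, nova reads it from configuration:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
                            external=True,
                            lock_path='/var/lib/nova/instances/locks')
    def fetch_func_sync():
        ...  # fetch and verify the base image exactly once per host

    fetch_func_sync()
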
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.499 244018 DEBUG nova.storage.rbd_utils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] rbd image 5a7dc142-2b11-4214-87b7-636f27ccacbf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.502 244018 DEBUG oslo_concurrency.processutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 5a7dc142-2b11-4214-87b7-636f27ccacbf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:21:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.731 244018 DEBUG oslo_concurrency.processutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 5a7dc142-2b11-4214-87b7-636f27ccacbf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.229s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.809 244018 DEBUG nova.policy [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7d27f8a357c2443a9140598fd9ec73e1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '04734aae68d34fac8fb592fc015632fd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.818 244018 DEBUG nova.storage.rbd_utils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] resizing rbd image 5a7dc142-2b11-4214-87b7-636f27ccacbf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
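
Sequence so far for the new instance's root disk: no existing RBD image is found, the Glance image is already staged in the local _base cache, it is pushed into the Ceph 'vms' pool with the rbd CLI, and it is now grown to the flavor's 1 GiB root disk (1073741824 bytes). The resize maps onto python-rbd roughly like this (a sketch, not nova's exact code):

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx,
                           '5a7dc142-2b11-4214-87b7-636f27ccacbf_disk') as image:
                image.resize(1 * 1024 ** 3)  # 1073741824 bytes
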
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.858 244018 DEBUG nova.compute.manager [req-53b24ab4-30b5-4478-99ef-5a0a125c0013 req-4793a5ce-5526-429e-b93f-5a58f7da56e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-plugged-be69a588-f795-413e-b981-a20088eea5ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.859 244018 DEBUG oslo_concurrency.lockutils [req-53b24ab4-30b5-4478-99ef-5a0a125c0013 req-4793a5ce-5526-429e-b93f-5a58f7da56e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.860 244018 DEBUG oslo_concurrency.lockutils [req-53b24ab4-30b5-4478-99ef-5a0a125c0013 req-4793a5ce-5526-429e-b93f-5a58f7da56e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.860 244018 DEBUG oslo_concurrency.lockutils [req-53b24ab4-30b5-4478-99ef-5a0a125c0013 req-4793a5ce-5526-429e-b93f-5a58f7da56e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.861 244018 DEBUG nova.compute.manager [req-53b24ab4-30b5-4478-99ef-5a0a125c0013 req-4793a5ce-5526-429e-b93f-5a58f7da56e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] No waiting events found dispatching network-vif-plugged-be69a588-f795-413e-b981-a20088eea5ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.861 244018 WARNING nova.compute.manager [req-53b24ab4-30b5-4478-99ef-5a0a125c0013 req-4793a5ce-5526-429e-b93f-5a58f7da56e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received unexpected event network-vif-plugged-be69a588-f795-413e-b981-a20088eea5ed for instance with vm_state active and task_state None.#033[00m
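
This WARNING is expected noise in this flow: Neutron delivered network-vif-plugged for port be69a588 only after do_attach_interface had already finished (its lock was released at 12:21:57.077 with the instance active and task_state None), so no waiter was registered for the event and nova simply drops it.
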
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.917 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.918 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.918 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.928 244018 DEBUG nova.objects.instance [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lazy-loading 'migration_context' on Instance uuid 5a7dc142-2b11-4214-87b7-636f27ccacbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.950 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.958 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.959 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Ensure instance console log exists: /var/lib/nova/instances/5a7dc142-2b11-4214-87b7-636f27ccacbf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.960 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.961 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:21:57 np0005629333 nova_compute[244014]: 2026-02-25 12:21:57.962 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:21:58 np0005629333 nova_compute[244014]: 2026-02-25 12:21:58.020 244018 INFO nova.compute.manager [None req-c7298093-c441-4d8d-9897-a2fd745f8370 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Pausing#033[00m
Feb 25 07:21:58 np0005629333 nova_compute[244014]: 2026-02-25 12:21:58.021 244018 DEBUG nova.objects.instance [None req-c7298093-c441-4d8d-9897-a2fd745f8370 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lazy-loading 'flavor' on Instance uuid e9eb76fe-9616-40a4-aa53-0054cc5c3a57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:21:58 np0005629333 nova_compute[244014]: 2026-02-25 12:21:58.056 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022118.0560896, e9eb76fe-9616-40a4-aa53-0054cc5c3a57 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:21:58 np0005629333 nova_compute[244014]: 2026-02-25 12:21:58.057 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:21:58 np0005629333 nova_compute[244014]: 2026-02-25 12:21:58.060 244018 DEBUG nova.compute.manager [None req-c7298093-c441-4d8d-9897-a2fd745f8370 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:21:58 np0005629333 nova_compute[244014]: 2026-02-25 12:21:58.089 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:21:58 np0005629333 nova_compute[244014]: 2026-02-25 12:21:58.094 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:21:58 np0005629333 nova_compute[244014]: 2026-02-25 12:21:58.133 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
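
The Paused transition is reported twice: once on the API-driven path (req-c7298093, "Pausing") and once asynchronously from libvirt's event loop (the LifecycleEvent above; nova's power_state 3 means PAUSED). The power-state sync is skipped because the 'pausing' task is still in flight, which avoids racing the API path. Registering for those lifecycle events with the libvirt Python bindings looks roughly like:

    import libvirt

    def lifecycle_cb(conn, dom, event, detail, _opaque):
        # nova maps (event, detail) pairs to events such as "Paused"
        print(dom.UUIDString(), event, detail)

    libvirt.virEventRegisterDefaultImpl()   # must run before open() for events
    conn = libvirt.open('qemu:///system')
    conn.domainEventRegisterAny(None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                                lifecycle_cb, None)
    while True:
        libvirt.virEventRunDefaultImpl()    # dispatch pending events
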
Feb 25 07:21:58 np0005629333 nova_compute[244014]: 2026-02-25 12:21:58.477 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:21:58 np0005629333 nova_compute[244014]: 2026-02-25 12:21:58.497 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:58 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:58Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:76:32:d9 10.100.0.3
Feb 25 07:21:58 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:58Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:76:32:d9 10.100.0.3
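
The DHCPOFFER/DHCPACK pair is answered natively by ovn-controller's pinctrl thread from the logical flow table; no dnsmasq process is involved with OVN. MAC fa:16:3e:76:32:d9 and address 10.100.0.3 belong to port be69a588-f795-413e-b981-a20088eea5ed, the interface attached above, so the guest has just leased its new NIC's fixed IP.
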
Feb 25 07:21:58 np0005629333 nova_compute[244014]: 2026-02-25 12:21:58.896 244018 DEBUG nova.network.neutron [req-beb95973-8d23-4b6e-bfbd-722b4b88dfa1 req-1695f558-c1af-46c3-a899-2d7058947a4a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updated VIF entry in instance network info cache for port be69a588-f795-413e-b981-a20088eea5ed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:21:58 np0005629333 nova_compute[244014]: 2026-02-25 12:21:58.898 244018 DEBUG nova.network.neutron [req-beb95973-8d23-4b6e-bfbd-722b4b88dfa1 req-1695f558-c1af-46c3-a899-2d7058947a4a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updating instance_info_cache with network_info: [{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f1abe770-5205-4bae-888a-f2489c2af7a7", "address": "fa:16:3e:85:c5:fe", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1abe770-52", "ovs_interfaceid": "f1abe770-5205-4bae-888a-f2489c2af7a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be69a588-f795-413e-b981-a20088eea5ed", "address": "fa:16:3e:76:32:d9", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe69a588-f7", "ovs_interfaceid": "be69a588-f795-413e-b981-a20088eea5ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:21:58 np0005629333 nova_compute[244014]: 2026-02-25 12:21:58.946 244018 DEBUG oslo_concurrency.lockutils [req-beb95973-8d23-4b6e-bfbd-722b4b88dfa1 req-1695f558-c1af-46c3-a899-2d7058947a4a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:21:58 np0005629333 nova_compute[244014]: 2026-02-25 12:21:58.947 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:21:58 np0005629333 nova_compute[244014]: 2026-02-25 12:21:58.948 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 25 07:21:58 np0005629333 nova_compute[244014]: 2026-02-25 12:21:58.948 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid 51d1d661-89db-4958-a2f4-c299ee997cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:21:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.345 244018 DEBUG oslo_concurrency.lockutils [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "interface-51d1d661-89db-4958-a2f4-c299ee997cde-31f40ed6-505b-4061-b861-ea2720b0ff62" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.346 244018 DEBUG oslo_concurrency.lockutils [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-51d1d661-89db-4958-a2f4-c299ee997cde-31f40ed6-505b-4061-b861-ea2720b0ff62" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.366 244018 DEBUG nova.objects.instance [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'flavor' on Instance uuid 51d1d661-89db-4958-a2f4-c299ee997cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.398 244018 DEBUG nova.virt.libvirt.vif [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": 
"system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.399 244018 DEBUG nova.network.os_vif_util [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.401 244018 DEBUG nova.network.os_vif_util [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
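
For the detach, nova first converts its own VIF dict into an os-vif object (the VIFOpenVSwitch repr above) and hands that to the 'ovs' plugin. A sketch of driving os-vif directly with the same values — field names mirror the logged repr; exactly which fields a plugin's unplug needs is plugin-specific:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()   # load plugins ('ovs' among them), register objects

    inst = instance_info.InstanceInfo(
        uuid='51d1d661-89db-4958-a2f4-c299ee997cde',
        name='instance-0000001b')
    ovs_vif = vif.VIFOpenVSwitch(
        id='31f40ed6-505b-4061-b861-ea2720b0ff62',
        address='fa:16:3e:c1:c1:8f',
        vif_name='tap31f40ed6-50',
        bridge_name='br-int',
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='31f40ed6-505b-4061-b861-ea2720b0ff62'),
        network=network.Network(id='08121372-a435-401a-b405-778e10d8c2e2'))
    os_vif.unplug(ovs_vif, inst)
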
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.404 244018 DEBUG nova.network.neutron [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Successfully created port: 0fef626d-412c-4101-95eb-eadb3354247f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.413 244018 DEBUG nova.virt.libvirt.guest [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c1:c1:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap31f40ed6-50"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.417 244018 DEBUG nova.virt.libvirt.guest [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c1:c1:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap31f40ed6-50"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.421 244018 DEBUG nova.virt.libvirt.driver [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Attempting to detach device tap31f40ed6-50 from instance 51d1d661-89db-4958-a2f4-c299ee997cde from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.421 244018 DEBUG nova.virt.libvirt.guest [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] detach device xml: <interface type="ethernet">
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <mac address="fa:16:3e:c1:c1:8f"/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <model type="virtio"/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <mtu size="1442"/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <target dev="tap31f40ed6-50"/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]: </interface>
Feb 25 07:21:59 np0005629333 nova_compute[244014]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
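
detach_device serializes the interface back to XML and asks libvirt to drop it, here from the persistent definition first (per the "Attempting to detach ... from the persistent domain config" line above) and from the live domain afterwards. The underlying call is virDomain.detachDeviceFlags(), where interface matching keys on the MAC address, so a pared-down element is enough; both scopes can also be requested in a single call:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-0000001b')
    iface_xml = '''<interface type="ethernet">
      <mac address="fa:16:3e:c1:c1:8f"/>
      <target dev="tap31f40ed6-50"/>
    </interface>'''
    dom.detachDeviceFlags(iface_xml,
                          libvirt.VIR_DOMAIN_AFFECT_LIVE |
                          libvirt.VIR_DOMAIN_AFFECT_CONFIG)
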
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.430 244018 DEBUG nova.virt.libvirt.guest [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c1:c1:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap31f40ed6-50"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.435 244018 DEBUG nova.virt.libvirt.guest [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:c1:c1:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap31f40ed6-50"/></interface> not found in domain: <domain type='kvm' id='31'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <name>instance-0000001b</name>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <uuid>51d1d661-89db-4958-a2f4-c299ee997cde</uuid>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1386050966</nova:name>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <nova:creationTime>2026-02-25 12:21:56</nova:creationTime>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <nova:flavor name="m1.nano">
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:memory>128</nova:memory>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:disk>1</nova:disk>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:swap>0</nova:swap>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:vcpus>1</nova:vcpus>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  </nova:flavor>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <nova:owner>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  </nova:owner>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <nova:ports>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:port uuid="433e6f28-313e-4fe8-b8da-eacc8a0332c8">
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:port uuid="31f40ed6-505b-4061-b861-ea2720b0ff62">
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:port uuid="f1abe770-5205-4bae-888a-f2489c2af7a7">
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:port uuid="be69a588-f795-413e-b981-a20088eea5ed">
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  </nova:ports>
Feb 25 07:21:59 np0005629333 nova_compute[244014]: </nova:instance>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <memory unit='KiB'>131072</memory>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <vcpu placement='static'>1</vcpu>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <resource>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <partition>/machine</partition>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  </resource>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <sysinfo type='smbios'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <entry name='manufacturer'>RDO</entry>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <entry name='product'>OpenStack Compute</entry>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <entry name='serial'>51d1d661-89db-4958-a2f4-c299ee997cde</entry>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <entry name='uuid'>51d1d661-89db-4958-a2f4-c299ee997cde</entry>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <entry name='family'>Virtual Machine</entry>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <boot dev='hd'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <smbios mode='sysinfo'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <vmcoreinfo state='on'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <cpu mode='custom' match='exact' check='full'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <model fallback='forbid'>EPYC-Rome</model>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <vendor>AMD</vendor>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='x2apic'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='tsc-deadline'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='hypervisor'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='tsc_adjust'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='spec-ctrl'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='stibp'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='ssbd'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='cmp_legacy'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='overflow-recov'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='succor'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='ibrs'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='amd-ssbd'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='virt-ssbd'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='disable' name='lbrv'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='disable' name='tsc-scale'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='disable' name='vmcb-clean'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='disable' name='flushbyasid'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='disable' name='pause-filter'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='disable' name='pfthreshold'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='disable' name='svme-addr-chk'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='lfence-always-serializing'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='disable' name='xsaves'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='disable' name='svm'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='topoext'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='disable' name='npt'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='disable' name='nrip-save'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <clock offset='utc'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <timer name='pit' tickpolicy='delay'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <timer name='rtc' tickpolicy='catchup'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <timer name='hpet' present='no'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <on_poweroff>destroy</on_poweroff>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <on_reboot>restart</on_reboot>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <on_crash>destroy</on_crash>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <disk type='network' device='disk'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <driver name='qemu' type='raw' cache='none'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <auth username='openstack'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:        <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <source protocol='rbd' name='vms/51d1d661-89db-4958-a2f4-c299ee997cde_disk' index='2'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:        <host name='192.168.122.100' port='6789'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target dev='vda' bus='virtio'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='virtio-disk0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <disk type='network' device='cdrom'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <driver name='qemu' type='raw' cache='none'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <auth username='openstack'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:        <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <source protocol='rbd' name='vms/51d1d661-89db-4958-a2f4-c299ee997cde_disk.config' index='1'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:        <host name='192.168.122.100' port='6789'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target dev='sda' bus='sata'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <readonly/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='sata0-0-0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='0' model='pcie-root'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pcie.0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='1' port='0x10'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.1'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='2' port='0x11'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.2'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='3' port='0x12'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.3'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='4' port='0x13'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.4'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='5' port='0x14'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.5'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='6' port='0x15'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.6'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='7' port='0x16'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.7'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='8' port='0x17'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.8'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='9' port='0x18'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.9'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='10' port='0x19'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.10'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='11' port='0x1a'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.11'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='12' port='0x1b'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.12'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='13' port='0x1c'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.13'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='14' port='0x1d'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.14'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='15' port='0x1e'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.15'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='16' port='0x1f'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.16'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='17' port='0x20'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.17'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='18' port='0x21'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.18'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='19' port='0x22'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.19'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='20' port='0x23'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.20'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='21' port='0x24'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.21'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='22' port='0x25'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.22'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='23' port='0x26'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.23'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='24' port='0x27'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.24'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='25' port='0x28'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.25'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-pci-bridge'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.26'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='usb'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='sata' index='0'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='ide'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <interface type='ethernet'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <mac address='fa:16:3e:4a:d5:f4'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target dev='tap433e6f28-31'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model type='virtio'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <driver name='vhost' rx_queue_size='512'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <mtu size='1442'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='net0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <interface type='ethernet'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <mac address='fa:16:3e:c1:c1:8f'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target dev='tap31f40ed6-50'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model type='virtio'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <driver name='vhost' rx_queue_size='512'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <mtu size='1442'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='net1'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <interface type='ethernet'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <mac address='fa:16:3e:85:c5:fe'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target dev='tapf1abe770-52'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model type='virtio'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <driver name='vhost' rx_queue_size='512'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <mtu size='1442'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='net2'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <interface type='ethernet'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <mac address='fa:16:3e:76:32:d9'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target dev='tapbe69a588-f7'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model type='virtio'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <driver name='vhost' rx_queue_size='512'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <mtu size='1442'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='net3'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <serial type='pty'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <source path='/dev/pts/1'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <log file='/var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/console.log' append='off'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target type='isa-serial' port='0'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:        <model name='isa-serial'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      </target>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='serial0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <console type='pty' tty='/dev/pts/1'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <source path='/dev/pts/1'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <log file='/var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/console.log' append='off'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target type='serial' port='0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='serial0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </console>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <input type='tablet' bus='usb'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='input0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='usb' bus='0' port='1'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <input type='mouse' bus='ps2'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='input1'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <input type='keyboard' bus='ps2'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='input2'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <listen type='address' address='::0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </graphics>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <audio id='1' type='none'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model type='virtio' heads='1' primary='yes'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='video0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <watchdog model='itco' action='reset'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='watchdog0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </watchdog>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <memballoon model='virtio'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <stats period='10'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='balloon0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <rng model='virtio'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <backend model='random'>/dev/urandom</backend>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='rng0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <label>system_u:system_r:svirt_t:s0:c806,c930</label>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c806,c930</imagelabel>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  </seclabel>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <label>+107:+107</label>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <imagelabel>+107:+107</imagelabel>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  </seclabel>
Feb 25 07:21:59 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:21:59 np0005629333 nova_compute[244014]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
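The dump above is the domain XML that nova's get_interface_by_cfg walks when it has to match a detach request to a concrete <interface> element. A minimal sketch of that matching step, assuming only the Python standard library; find_interface_by_mac is a hypothetical helper (nova's real code compares the whole interface config object, not just the MAC):

    # Sketch only: locate an <interface> in a libvirt domain XML dump by MAC,
    # roughly what get_interface_by_cfg does against the dump shown above.
    import xml.etree.ElementTree as ET

    def find_interface_by_mac(domain_xml: str, mac: str):
        root = ET.fromstring(domain_xml)
        for iface in root.findall("./devices/interface"):
            mac_el = iface.find("mac")
            if mac_el is not None and mac_el.get("address") == mac:
                return iface
        return None

    # find_interface_by_mac(dump, "fa:16:3e:c1:c1:8f") returns the net1
    # element while the interface is still attached, and None afterwards.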
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.436 244018 INFO nova.virt.libvirt.driver [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully detached device tap31f40ed6-50 from instance 51d1d661-89db-4958-a2f4-c299ee997cde from the persistent domain config.
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.437 244018 DEBUG nova.virt.libvirt.driver [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] (1/8): Attempting to detach device tap31f40ed6-50 with device alias net1 from instance 51d1d661-89db-4958-a2f4-c299ee997cde from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.438 244018 DEBUG nova.virt.libvirt.guest [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] detach device xml: <interface type="ethernet">
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <mac address="fa:16:3e:c1:c1:8f"/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <model type="virtio"/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <mtu size="1442"/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <target dev="tap31f40ed6-50"/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]: </interface>
Feb 25 07:21:59 np0005629333 nova_compute[244014]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
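The records above show the two phases of a detach: the device is removed from the persistent domain definition first, then the driver starts retrying against the live config (the "(1/8)" marker counts those attempts, up to 8). A hedged sketch of the underlying libvirt-python calls, reusing the instance UUID and interface XML from this log; nova wraps these calls in its own retry and event plumbing:

    # Sketch of the core libvirt API usage behind the log records above.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("51d1d661-89db-4958-a2f4-c299ee997cde")

    iface_xml = """<interface type="ethernet">
      <mac address="fa:16:3e:c1:c1:8f"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tap31f40ed6-50"/>
    </interface>"""

    # Phase 1: drop the device from the persistent definition.
    dom.detachDeviceFlags(iface_xml, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    # Phase 2: request the hot-unplug from the running guest; this only
    # *initiates* the detach, completion arrives as a device-removed event.
    dom.detachDeviceFlags(iface_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)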
Feb 25 07:21:59 np0005629333 kernel: tap31f40ed6-50 (unregistering): left promiscuous mode
Feb 25 07:21:59 np0005629333 NetworkManager[49836]: <info>  [1772022119.5656] device (tap31f40ed6-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:21:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:59Z|00199|binding|INFO|Releasing lport 31f40ed6-505b-4061-b861-ea2720b0ff62 from this chassis (sb_readonly=0)
Feb 25 07:21:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:59Z|00200|binding|INFO|Setting lport 31f40ed6-505b-4061-b861-ea2720b0ff62 down in Southbound
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.575 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:21:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:21:59Z|00201|binding|INFO|Removing iface tap31f40ed6-50 ovn-installed in OVS
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.577 244018 DEBUG nova.virt.libvirt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Received event <DeviceRemovedEvent: 1772022119.575966, 51d1d661-89db-4958-a2f4-c299ee997cde => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.578 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.579 244018 DEBUG nova.virt.libvirt.driver [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Start waiting for the detach event from libvirt for device tap31f40ed6-50 with device alias net1 for instance 51d1d661-89db-4958-a2f4-c299ee997cde _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
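Detaching from the live config is asynchronous: the call only requests the hot-unplug, and completion is signalled by the DeviceRemovedEvent for alias net1 that the driver logs above as "Received event ... dispatched". A minimal sketch of waiting for that event with libvirt's default event loop (nova uses its own eventlet-based dispatcher rather than this exact pattern):

    # Sketch: block until libvirt reports the device as removed.
    import threading
    import libvirt

    libvirt.virEventRegisterDefaultImpl()  # must precede opening the connection
    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("51d1d661-89db-4958-a2f4-c299ee997cde")

    removed = threading.Event()

    def on_device_removed(conn, dom, dev_alias, opaque):
        # dev_alias is the device alias, e.g. "net1" in the records above.
        if dev_alias == "net1":
            removed.set()

    conn.domainEventRegisterAny(
        dom, libvirt.VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED,
        on_device_removed, None)

    while not removed.is_set():       # pump the loop until the event fires
        libvirt.virEventRunDefaultImpl()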
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.579 244018 DEBUG nova.virt.libvirt.guest [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c1:c1:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap31f40ed6-50"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.585 244018 DEBUG nova.virt.libvirt.guest [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:c1:c1:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap31f40ed6-50"/></interface> not found in domain: <domain type='kvm' id='31'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <name>instance-0000001b</name>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <uuid>51d1d661-89db-4958-a2f4-c299ee997cde</uuid>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1386050966</nova:name>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <nova:creationTime>2026-02-25 12:21:56</nova:creationTime>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <nova:flavor name="m1.nano">
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:memory>128</nova:memory>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:disk>1</nova:disk>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:swap>0</nova:swap>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:vcpus>1</nova:vcpus>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  </nova:flavor>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <nova:owner>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  </nova:owner>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <nova:ports>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:port uuid="433e6f28-313e-4fe8-b8da-eacc8a0332c8">
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:port uuid="31f40ed6-505b-4061-b861-ea2720b0ff62">
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:port uuid="f1abe770-5205-4bae-888a-f2489c2af7a7">
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:port uuid="be69a588-f795-413e-b981-a20088eea5ed">
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  </nova:ports>
Feb 25 07:21:59 np0005629333 nova_compute[244014]: </nova:instance>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <memory unit='KiB'>131072</memory>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <vcpu placement='static'>1</vcpu>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <resource>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <partition>/machine</partition>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  </resource>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <sysinfo type='smbios'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <entry name='manufacturer'>RDO</entry>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <entry name='product'>OpenStack Compute</entry>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <entry name='serial'>51d1d661-89db-4958-a2f4-c299ee997cde</entry>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <entry name='uuid'>51d1d661-89db-4958-a2f4-c299ee997cde</entry>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <entry name='family'>Virtual Machine</entry>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <boot dev='hd'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <smbios mode='sysinfo'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <vmcoreinfo state='on'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <cpu mode='custom' match='exact' check='full'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <model fallback='forbid'>EPYC-Rome</model>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <vendor>AMD</vendor>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='x2apic'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='tsc-deadline'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='hypervisor'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='tsc_adjust'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='spec-ctrl'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='stibp'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='ssbd'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='cmp_legacy'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='overflow-recov'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='succor'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='ibrs'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='amd-ssbd'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='virt-ssbd'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='disable' name='lbrv'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='disable' name='tsc-scale'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='disable' name='vmcb-clean'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='disable' name='flushbyasid'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='disable' name='pause-filter'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='disable' name='pfthreshold'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='disable' name='svme-addr-chk'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='lfence-always-serializing'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='disable' name='xsaves'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='disable' name='svm'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='require' name='topoext'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='disable' name='npt'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <feature policy='disable' name='nrip-save'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <clock offset='utc'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <timer name='pit' tickpolicy='delay'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <timer name='rtc' tickpolicy='catchup'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <timer name='hpet' present='no'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <on_poweroff>destroy</on_poweroff>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <on_reboot>restart</on_reboot>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <on_crash>destroy</on_crash>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <disk type='network' device='disk'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <driver name='qemu' type='raw' cache='none'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <auth username='openstack'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:        <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <source protocol='rbd' name='vms/51d1d661-89db-4958-a2f4-c299ee997cde_disk' index='2'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:        <host name='192.168.122.100' port='6789'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target dev='vda' bus='virtio'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='virtio-disk0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <disk type='network' device='cdrom'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <driver name='qemu' type='raw' cache='none'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <auth username='openstack'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:        <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <source protocol='rbd' name='vms/51d1d661-89db-4958-a2f4-c299ee997cde_disk.config' index='1'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:        <host name='192.168.122.100' port='6789'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target dev='sda' bus='sata'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <readonly/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='sata0-0-0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='0' model='pcie-root'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pcie.0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='1' port='0x10'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.1'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='2' port='0x11'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.2'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='3' port='0x12'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.3'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='4' port='0x13'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.4'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='5' port='0x14'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.5'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='6' port='0x15'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.6'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='7' port='0x16'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.7'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='8' port='0x17'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.8'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='9' port='0x18'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.9'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='10' port='0x19'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.10'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='11' port='0x1a'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.11'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='12' port='0x1b'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.12'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='13' port='0x1c'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.13'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='14' port='0x1d'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.14'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='15' port='0x1e'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.15'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='16' port='0x1f'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.16'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='17' port='0x20'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.17'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='18' port='0x21'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.18'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='19' port='0x22'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.19'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='20' port='0x23'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.20'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='21' port='0x24'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.21'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='22' port='0x25'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.22'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='23' port='0x26'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.23'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='24' port='0x27'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.24'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target chassis='25' port='0x28'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.25'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model name='pcie-pci-bridge'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='pci.26'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='usb'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <controller type='sata' index='0'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='ide'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <interface type='ethernet'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <mac address='fa:16:3e:4a:d5:f4'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target dev='tap433e6f28-31'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model type='virtio'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <driver name='vhost' rx_queue_size='512'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <mtu size='1442'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='net0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <interface type='ethernet'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <mac address='fa:16:3e:85:c5:fe'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target dev='tapf1abe770-52'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model type='virtio'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <driver name='vhost' rx_queue_size='512'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <mtu size='1442'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='net2'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <interface type='ethernet'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <mac address='fa:16:3e:76:32:d9'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target dev='tapbe69a588-f7'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model type='virtio'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <driver name='vhost' rx_queue_size='512'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <mtu size='1442'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='net3'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <serial type='pty'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <source path='/dev/pts/1'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <log file='/var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/console.log' append='off'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target type='isa-serial' port='0'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:        <model name='isa-serial'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      </target>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='serial0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <console type='pty' tty='/dev/pts/1'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <source path='/dev/pts/1'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <log file='/var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/console.log' append='off'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <target type='serial' port='0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='serial0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </console>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <input type='tablet' bus='usb'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='input0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='usb' bus='0' port='1'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <input type='mouse' bus='ps2'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='input1'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <input type='keyboard' bus='ps2'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='input2'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <listen type='address' address='::0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </graphics>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <audio id='1' type='none'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <model type='virtio' heads='1' primary='yes'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='video0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <watchdog model='itco' action='reset'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='watchdog0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </watchdog>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <memballoon model='virtio'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <stats period='10'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='balloon0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <rng model='virtio'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <backend model='random'>/dev/urandom</backend>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <alias name='rng0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <label>system_u:system_r:svirt_t:s0:c806,c930</label>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c806,c930</imagelabel>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  </seclabel>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <label>+107:+107</label>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <imagelabel>+107:+107</imagelabel>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  </seclabel>
Feb 25 07:21:59 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:21:59 np0005629333 nova_compute[244014]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.585 244018 INFO nova.virt.libvirt.driver [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully detached device tap31f40ed6-50 from instance 51d1d661-89db-4958-a2f4-c299ee997cde from the live domain config.#033[00m
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.586 244018 DEBUG nova.virt.libvirt.vif [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.587 244018 DEBUG nova.network.os_vif_util [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.588 244018 DEBUG nova.network.os_vif_util [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:21:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:59.587 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:c1:8f 10.100.0.10'], port_security=['fa:16:3e:c1:c1:8f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '51d1d661-89db-4958-a2f4-c299ee997cde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '078dca40-137f-4eb6-953b-2ae25d0b4ca3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=31f40ed6-505b-4061-b861-ea2720b0ff62) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.588 244018 DEBUG os_vif [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:21:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:59.589 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 31f40ed6-505b-4061-b861-ea2720b0ff62 in datapath 08121372-a435-401a-b405-778e10d8c2e2 unbound from our chassis#033[00m
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.590 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.591 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31f40ed6-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:21:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:59.592 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08121372-a435-401a-b405-778e10d8c2e2#033[00m
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.593 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.595 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.597 244018 INFO os_vif [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50')#033[00m
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.598 244018 DEBUG nova.virt.libvirt.guest [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1386050966</nova:name>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <nova:creationTime>2026-02-25 12:21:59</nova:creationTime>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <nova:flavor name="m1.nano">
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:memory>128</nova:memory>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:disk>1</nova:disk>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:swap>0</nova:swap>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:vcpus>1</nova:vcpus>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  </nova:flavor>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <nova:owner>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  </nova:owner>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  <nova:ports>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:port uuid="433e6f28-313e-4fe8-b8da-eacc8a0332c8">
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:port uuid="f1abe770-5205-4bae-888a-f2489c2af7a7">
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    <nova:port uuid="be69a588-f795-413e-b981-a20088eea5ed">
Feb 25 07:21:59 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:21:59 np0005629333 nova_compute[244014]:  </nova:ports>
Feb 25 07:21:59 np0005629333 nova_compute[244014]: </nova:instance>
Feb 25 07:21:59 np0005629333 nova_compute[244014]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Feb 25 07:21:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:59.608 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9320917a-df56-405f-94e7-04aec17303da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:59.642 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[28cd2d84-9a73-4e37-aea6-b734b760aa73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:59.647 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e943b312-2458-494f-a7bd-318495fb129a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 279 MiB data, 517 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 73 op/s
Feb 25 07:21:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:59.684 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[25f3a6ce-b3ac-4d06-9673-4f6ead3741b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:59.702 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b6e25326-a4b7-4061-b566-9d46b66b962c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402565, 'reachable_time': 31993, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271560, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:59.723 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a2015285-3394-43f0-82f9-ecad9c8bdecc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402574, 'tstamp': 402574}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271561, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402577, 'tstamp': 402577}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271561, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:21:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:59.725 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.727 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:59 np0005629333 nova_compute[244014]: 2026-02-25 12:21:59.729 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:21:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:59.729 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08121372-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:21:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:59.729 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:21:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:59.730 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08121372-a0, col_values=(('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:21:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:21:59.730 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:22:00 np0005629333 nova_compute[244014]: 2026-02-25 12:22:00.288 244018 DEBUG nova.compute.manager [req-720f6cba-7e14-4c86-805c-022dff208b63 req-d48078a1-791e-4331-839b-5a874d602ef9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-plugged-be69a588-f795-413e-b981-a20088eea5ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:22:00 np0005629333 nova_compute[244014]: 2026-02-25 12:22:00.288 244018 DEBUG oslo_concurrency.lockutils [req-720f6cba-7e14-4c86-805c-022dff208b63 req-d48078a1-791e-4331-839b-5a874d602ef9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:00 np0005629333 nova_compute[244014]: 2026-02-25 12:22:00.289 244018 DEBUG oslo_concurrency.lockutils [req-720f6cba-7e14-4c86-805c-022dff208b63 req-d48078a1-791e-4331-839b-5a874d602ef9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:00 np0005629333 nova_compute[244014]: 2026-02-25 12:22:00.289 244018 DEBUG oslo_concurrency.lockutils [req-720f6cba-7e14-4c86-805c-022dff208b63 req-d48078a1-791e-4331-839b-5a874d602ef9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:00 np0005629333 nova_compute[244014]: 2026-02-25 12:22:00.289 244018 DEBUG nova.compute.manager [req-720f6cba-7e14-4c86-805c-022dff208b63 req-d48078a1-791e-4331-839b-5a874d602ef9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] No waiting events found dispatching network-vif-plugged-be69a588-f795-413e-b981-a20088eea5ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:22:00 np0005629333 nova_compute[244014]: 2026-02-25 12:22:00.290 244018 WARNING nova.compute.manager [req-720f6cba-7e14-4c86-805c-022dff208b63 req-d48078a1-791e-4331-839b-5a874d602ef9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received unexpected event network-vif-plugged-be69a588-f795-413e-b981-a20088eea5ed for instance with vm_state active and task_state None.#033[00m
Feb 25 07:22:00 np0005629333 nova_compute[244014]: 2026-02-25 12:22:00.363 244018 DEBUG nova.compute.manager [None req-2312e454-6a95-42a1-acb3-d3fef949d88f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:22:00 np0005629333 nova_compute[244014]: 2026-02-25 12:22:00.420 244018 INFO nova.compute.manager [None req-2312e454-6a95-42a1-acb3-d3fef949d88f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] instance snapshotting#033[00m
Feb 25 07:22:00 np0005629333 nova_compute[244014]: 2026-02-25 12:22:00.420 244018 WARNING nova.compute.manager [None req-2312e454-6a95-42a1-acb3-d3fef949d88f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] trying to snapshot a non-running instance: (state: 3 expected: 1)#033[00m
Feb 25 07:22:01 np0005629333 nova_compute[244014]: 2026-02-25 12:22:01.432 244018 DEBUG oslo_concurrency.lockutils [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:22:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:22:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:22:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:22:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:22:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:22:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:22:01 np0005629333 nova_compute[244014]: 2026-02-25 12:22:01.587 244018 DEBUG nova.network.neutron [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Successfully updated port: 0fef626d-412c-4101-95eb-eadb3354247f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:22:01 np0005629333 nova_compute[244014]: 2026-02-25 12:22:01.603 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Acquiring lock "refresh_cache-5a7dc142-2b11-4214-87b7-636f27ccacbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:22:01 np0005629333 nova_compute[244014]: 2026-02-25 12:22:01.604 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Acquired lock "refresh_cache-5a7dc142-2b11-4214-87b7-636f27ccacbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:22:01 np0005629333 nova_compute[244014]: 2026-02-25 12:22:01.604 244018 DEBUG nova.network.neutron [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:22:01 np0005629333 nova_compute[244014]: 2026-02-25 12:22:01.611 244018 INFO nova.virt.libvirt.driver [None req-2312e454-6a95-42a1-acb3-d3fef949d88f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Beginning live snapshot process#033[00m
Feb 25 07:22:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 292 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 499 KiB/s wr, 75 op/s
Feb 25 07:22:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.793 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:22:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.796 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 25 07:22:01 np0005629333 nova_compute[244014]: 2026-02-25 12:22:01.803 244018 DEBUG nova.compute.manager [req-73db7ff8-0a92-47ad-bc14-c668b231a8ae req-a583c495-440f-4e7a-83a4-c43a1e70cf46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received event network-changed-0fef626d-412c-4101-95eb-eadb3354247f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:22:01 np0005629333 nova_compute[244014]: 2026-02-25 12:22:01.803 244018 DEBUG nova.compute.manager [req-73db7ff8-0a92-47ad-bc14-c668b231a8ae req-a583c495-440f-4e7a-83a4-c43a1e70cf46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Refreshing instance network info cache due to event network-changed-0fef626d-412c-4101-95eb-eadb3354247f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:22:01 np0005629333 nova_compute[244014]: 2026-02-25 12:22:01.804 244018 DEBUG oslo_concurrency.lockutils [req-73db7ff8-0a92-47ad-bc14-c668b231a8ae req-a583c495-440f-4e7a-83a4-c43a1e70cf46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-5a7dc142-2b11-4214-87b7-636f27ccacbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:22:01 np0005629333 nova_compute[244014]: 2026-02-25 12:22:01.804 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:01 np0005629333 nova_compute[244014]: 2026-02-25 12:22:01.813 244018 DEBUG nova.virt.libvirt.imagebackend [None req-2312e454-6a95-42a1-acb3-d3fef949d88f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Feb 25 07:22:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.819 157129 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 261c7612-7fb7-43f1-9ba4-19c54ae198d6 with type ""#033[00m
Feb 25 07:22:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.821 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:32:d9 10.100.0.3'], port_security=['fa:16:3e:76:32:d9 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-616423065', 'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '51d1d661-89db-4958-a2f4-c299ee997cde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-616423065', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '078dca40-137f-4eb6-953b-2ae25d0b4ca3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=be69a588-f795-413e-b981-a20088eea5ed) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:22:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.823 157129 INFO neutron.agent.ovn.metadata.agent [-] Port be69a588-f795-413e-b981-a20088eea5ed in datapath 08121372-a435-401a-b405-778e10d8c2e2 unbound from our chassis#033[00m
Feb 25 07:22:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.826 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08121372-a435-401a-b405-778e10d8c2e2#033[00m
Feb 25 07:22:01 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:01Z|00202|binding|INFO|Removing iface tapbe69a588-f7 ovn-installed in OVS
Feb 25 07:22:01 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:01Z|00203|binding|INFO|Removing lport be69a588-f795-413e-b981-a20088eea5ed ovn-installed in OVS
Feb 25 07:22:01 np0005629333 nova_compute[244014]: 2026-02-25 12:22:01.836 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:01 np0005629333 nova_compute[244014]: 2026-02-25 12:22:01.840 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.841 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8f9bae-0c15-468d-81ea-60bf91e7b50c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.872 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[335e0f55-a0d7-49f7-8715-163d138956a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.876 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[59f1ba9a-5150-4003-b441-16ea9770fb16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:01 np0005629333 nova_compute[244014]: 2026-02-25 12:22:01.880 244018 DEBUG nova.network.neutron [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:22:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.913 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[dac11507-59c4-45a8-bbf9-4fa9a5aa94ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.934 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0dfc7ab9-46b4-46a8-b243-b0b74675aa94]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402565, 'reachable_time': 31993, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271600, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.953 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ce4902a1-0e2b-4ee5-9ca1-c9f61cd2fc0a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402574, 'tstamp': 402574}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271601, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402577, 'tstamp': 402577}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271601, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.955 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:01 np0005629333 nova_compute[244014]: 2026-02-25 12:22:01.957 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:01 np0005629333 nova_compute[244014]: 2026-02-25 12:22:01.958 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.959 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08121372-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.960 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:22:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.961 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08121372-a0, col_values=(('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:01.962 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
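
The transaction triplet above (DelPortCommand on br-ex, AddPortCommand on br-int, DbSetCommand on the Interface row) is the metadata agent re-asserting where its tap interface belongs; both follow-up commands report "Transaction caused no change" because the port is already plugged and tagged. A rough equivalent through ovsdbapp's public API; the agent actually runs one command per transaction, they are grouped here for brevity, and the OVSDB socket path is an assumption:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.del_port('tap08121372-a0', bridge='br-ex', if_exists=True))
        txn.add(ovs.add_port('br-int', 'tap08121372-a0', may_exist=True))
        txn.add(ovs.db_set('Interface', 'tap08121372-a0',
                           ('external_ids',
                            {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'})))
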
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.080 244018 DEBUG nova.storage.rbd_utils [None req-2312e454-6a95-42a1-acb3-d3fef949d88f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] creating snapshot(56170dc03de240289a8d156f9d514494) on rbd image(e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.341 244018 DEBUG oslo_concurrency.lockutils [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.342 244018 DEBUG oslo_concurrency.lockutils [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.343 244018 DEBUG oslo_concurrency.lockutils [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.343 244018 DEBUG oslo_concurrency.lockutils [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.344 244018 DEBUG oslo_concurrency.lockutils [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
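
The four lockutils lines above are oslo.concurrency's named locks: one per-instance lock held for the whole terminate path, plus a nested "-events" lock held for about a millisecond while pending external events are dropped. Schematically (lock names taken from the log, bodies elided):

    from oslo_concurrency import lockutils

    INSTANCE = '51d1d661-89db-4958-a2f4-c299ee997cde'

    @lockutils.synchronized(INSTANCE)
    def do_terminate_instance():
        # nested event lock, matching the "-events" acquire/release pair above
        with lockutils.lock(INSTANCE + '-events'):
            pass  # InstanceEvents.clear_events_for_instance body runs here
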
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.346 244018 INFO nova.compute.manager [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Terminating instance#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.347 244018 DEBUG nova.compute.manager [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:22:02 np0005629333 kernel: tap433e6f28-31 (unregistering): left promiscuous mode
Feb 25 07:22:02 np0005629333 NetworkManager[49836]: <info>  [1772022122.4003] device (tap433e6f28-31): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:22:02 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:02Z|00204|binding|INFO|Releasing lport 433e6f28-313e-4fe8-b8da-eacc8a0332c8 from this chassis (sb_readonly=0)
Feb 25 07:22:02 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:02Z|00205|binding|INFO|Setting lport 433e6f28-313e-4fe8-b8da-eacc8a0332c8 down in Southbound
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.407 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:02 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:02Z|00206|binding|INFO|Removing iface tap433e6f28-31 ovn-installed in OVS
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.413 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.415 244018 DEBUG nova.compute.manager [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-deleted-31f40ed6-505b-4061-b861-ea2720b0ff62 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.416 244018 INFO nova.compute.manager [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Neutron deleted interface 31f40ed6-505b-4061-b861-ea2720b0ff62; detaching it from the instance and deleting it from the info cache#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.417 244018 DEBUG nova.network.neutron [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updating instance_info_cache with network_info: [{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f1abe770-5205-4bae-888a-f2489c2af7a7", "address": "fa:16:3e:85:c5:fe", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1abe770-52", "ovs_interfaceid": "f1abe770-5205-4bae-888a-f2489c2af7a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be69a588-f795-413e-b981-a20088eea5ed", "address": "fa:16:3e:76:32:d9", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe69a588-f7", "ovs_interfaceid": "be69a588-f795-413e-b981-a20088eea5ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:22:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.419 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:d5:f4 10.100.0.11'], port_security=['fa:16:3e:4a:d5:f4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '51d1d661-89db-4958-a2f4-c299ee997cde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '43bfaf0d-cb4f-4024-9f62-b8a095234270', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.227'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=433e6f28-313e-4fe8-b8da-eacc8a0332c8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
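
The "Matched UPDATE" line is ovsdbapp's event machinery: the new Port_Binding row is compared against the old one (old=Port_Binding(up=[True], chassis=[...])), and because the chassis column changed, the registered handler fires and the agent logs the port as unbound a few lines later. A simplified stand-in for such a handler, assuming ovsdbapp's RowEvent interface; the real PortBindingUpdatedEvent lives in neutron.agent.ovn.metadata.agent:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, agent):
            self.agent = agent
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # fire only when the binding's chassis or up state changed, as in
            # the old=Port_Binding(up=[True], chassis=[...]) diff logged above
            return hasattr(old, 'chassis') or hasattr(old, 'up')

        def run(self, event, row, old):
            # hypothetical hook: reconcile metadata for the port's datapath
            self.agent.provision_datapath(row.datapath)
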
Feb 25 07:22:02 np0005629333 kernel: tapf1abe770-52 (unregistering): left promiscuous mode
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.422 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.423 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 433e6f28-313e-4fe8-b8da-eacc8a0332c8 in datapath 08121372-a435-401a-b405-778e10d8c2e2 unbound from our chassis#033[00m
Feb 25 07:22:02 np0005629333 NetworkManager[49836]: <info>  [1772022122.4251] device (tapf1abe770-52): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:22:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.426 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08121372-a435-401a-b405-778e10d8c2e2#033[00m
Feb 25 07:22:02 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:02Z|00207|binding|INFO|Releasing lport f1abe770-5205-4bae-888a-f2489c2af7a7 from this chassis (sb_readonly=0)
Feb 25 07:22:02 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:02Z|00208|binding|INFO|Setting lport f1abe770-5205-4bae-888a-f2489c2af7a7 down in Southbound
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.436 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:02 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:02Z|00209|binding|INFO|Removing iface tapf1abe770-52 ovn-installed in OVS
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.439 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.442 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4dc79da2-859c-4ca9-9b32-7f9a1f0135da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:02 np0005629333 kernel: tapbe69a588-f7 (unregistering): left promiscuous mode
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.450 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.454 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:c5:fe 10.100.0.7'], port_security=['fa:16:3e:85:c5:fe 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '51d1d661-89db-4958-a2f4-c299ee997cde', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '078dca40-137f-4eb6-953b-2ae25d0b4ca3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=f1abe770-5205-4bae-888a-f2489c2af7a7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:22:02 np0005629333 NetworkManager[49836]: <info>  [1772022122.4573] device (tapbe69a588-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.479 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.482 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[019779ce-699b-469f-826c-4f44e2ac9bee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.487 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e13e294c-a825-4876-ab0a-6f2bcde34df3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:02 np0005629333 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Feb 25 07:22:02 np0005629333 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001b.scope: Consumed 14.923s CPU time.
Feb 25 07:22:02 np0005629333 systemd-machined[210048]: Machine qemu-31-instance-0000001b terminated.
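
The scope deactivation and "Machine qemu-31-instance-0000001b terminated" are the systemd side of libvirt destroying the domain. In libvirt-python terms, the hard power-off nova just performed looks roughly like this (a sketch, not nova's exact code path):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-0000001b')  # machine name from the log
    dom.destroy()  # hard stop; systemd-machined then reaps the machine scope
    conn.close()
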
Feb 25 07:22:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.522 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8495d245-26c9-481b-9909-5765e377ee97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.544 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[44b10f7e-b69e-4c4e-b8c8-e946d0fb2db8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402565, 'reachable_time': 31993, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 
1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271642, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.555 244018 DEBUG nova.objects.instance [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lazy-loading 'system_metadata' on Instance uuid 51d1d661-89db-4958-a2f4-c299ee997cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:22:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.560 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b2624b7c-7332-43be-871e-a87f9cae44c2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402574, 'tstamp': 402574}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271643, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap08121372-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402577, 'tstamp': 402577}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271643, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.563 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.565 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.576 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.577 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08121372-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.577 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:22:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.578 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08121372-a0, col_values=(('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.579 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:22:02 np0005629333 NetworkManager[49836]: <info>  [1772022122.5807] manager: (tapf1abe770-52): new Tun device (/org/freedesktop/NetworkManager/Devices/98)
Feb 25 07:22:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.582 157129 INFO neutron.agent.ovn.metadata.agent [-] Port f1abe770-5205-4bae-888a-f2489c2af7a7 in datapath 08121372-a435-401a-b405-778e10d8c2e2 unbound from our chassis#033[00m
Feb 25 07:22:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.584 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08121372-a435-401a-b405-778e10d8c2e2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:22:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.585 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[47246297-ffe4-4057-851d-07d4fb7383ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.587 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 namespace which is not needed anymore#033[00m
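
The namespace cleanup announced here has three parts visible further down: the haproxy sidecar is signalled (the neutron-haproxy-ovnmeta NOTICE/WARNING block below), the veth pair is deleted, and the namespace itself is removed. The link/netns part, sketched with pyroute2 (privsep-wrapped in the real agent; names from the log):

    from pyroute2 import IPRoute, netns

    NS = 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2'

    # Delete the host-side veth end; its peer inside the namespace goes with it.
    with IPRoute() as ipr:
        idx = ipr.link_lookup(ifname='tap08121372-a0')
        if idx:
            ipr.link('del', index=idx[0])

    netns.remove(NS)  # unlinks /run/netns/<name>
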
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.588 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:02 np0005629333 NetworkManager[49836]: <info>  [1772022122.5946] manager: (tapbe69a588-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/99)
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.603 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.615 244018 INFO nova.virt.libvirt.driver [-] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Instance destroyed successfully.#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.616 244018 DEBUG nova.objects.instance [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'resources' on Instance uuid 51d1d661-89db-4958-a2f4-c299ee997cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.730 244018 DEBUG nova.objects.instance [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lazy-loading 'flavor' on Instance uuid 51d1d661-89db-4958-a2f4-c299ee997cde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:22:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Feb 25 07:22:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Feb 25 07:22:02 np0005629333 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[268869]: [NOTICE]   (268891) : haproxy version is 2.8.14-c23fe91
Feb 25 07:22:02 np0005629333 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[268869]: [NOTICE]   (268891) : path to executable is /usr/sbin/haproxy
Feb 25 07:22:02 np0005629333 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[268869]: [WARNING]  (268891) : Exiting Master process...
Feb 25 07:22:02 np0005629333 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[268869]: [WARNING]  (268891) : Exiting Master process...
Feb 25 07:22:02 np0005629333 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[268869]: [ALERT]    (268891) : Current worker (268909) exited with code 143 (Terminated)
Feb 25 07:22:02 np0005629333 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[268869]: [WARNING]  (268891) : All workers exited. Exiting... (0)
Feb 25 07:22:02 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Feb 25 07:22:02 np0005629333 systemd[1]: libpod-b50073db9dbcd731d0b723c13f97680fccceaabc6c416564b26619e87a58647d.scope: Deactivated successfully.
Feb 25 07:22:02 np0005629333 podman[271693]: 2026-02-25 12:22:02.752752601 +0000 UTC m=+0.058912477 container died b50073db9dbcd731d0b723c13f97680fccceaabc6c416564b26619e87a58647d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.752 244018 DEBUG nova.virt.libvirt.vif [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": 
{"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.753 244018 DEBUG nova.network.os_vif_util [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.755 244018 DEBUG nova.network.os_vif_util [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4a:d5:f4,bridge_name='br-int',has_traffic_filtering=True,id=433e6f28-313e-4fe8-b8da-eacc8a0332c8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap433e6f28-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.756 244018 DEBUG os_vif [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:d5:f4,bridge_name='br-int',has_traffic_filtering=True,id=433e6f28-313e-4fe8-b8da-eacc8a0332c8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap433e6f28-31') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.759 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.759 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap433e6f28-31, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.765 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.774 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.777 244018 INFO os_vif [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4a:d5:f4,bridge_name='br-int',has_traffic_filtering=True,id=433e6f28-313e-4fe8-b8da-eacc8a0332c8,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap433e6f28-31')#033[00m
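
The Converting VIF / Converted object / Unplugging vif sequence is nova translating its network_info dict into an os-vif object and handing it to the ovs plugin; the DelPortCommand above is where that ends up. The public entry point, with fields filled from the log (a sketch; a complete VIF also carries the network object and profile details omitted here):

    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()

    flow = vif.VIFOpenVSwitch(
        id='433e6f28-313e-4fe8-b8da-eacc8a0332c8',
        address='fa:16:3e:4a:d5:f4',
        vif_name='tap433e6f28-31',
        bridge_name='br-int',
        plugin='ovs',
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='433e6f28-313e-4fe8-b8da-eacc8a0332c8'))
    inst = instance_info.InstanceInfo(
        uuid='51d1d661-89db-4958-a2f4-c299ee997cde',
        name='instance-0000001b')

    os_vif.unplug(flow, inst)  # drives the DelPortCommand seen above
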
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.778 244018 DEBUG nova.virt.libvirt.vif [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", 
"datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.779 244018 DEBUG nova.network.os_vif_util [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.780 244018 DEBUG nova.network.os_vif_util [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.781 244018 DEBUG os_vif [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.784 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.785 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31f40ed6-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.785 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.790 244018 DEBUG nova.virt.libvirt.vif [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": 
"br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.790 244018 DEBUG nova.network.os_vif_util [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converting VIF {"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.792 244018 DEBUG nova.network.os_vif_util [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:22:02 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b50073db9dbcd731d0b723c13f97680fccceaabc6c416564b26619e87a58647d-userdata-shm.mount: Deactivated successfully.
Feb 25 07:22:02 np0005629333 systemd[1]: var-lib-containers-storage-overlay-b250d49de0d9de34b918b625012f9e944f53e47c7cb2c90afd9a5ea1fff5ce10-merged.mount: Deactivated successfully.
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.802 244018 DEBUG nova.storage.rbd_utils [None req-2312e454-6a95-42a1-acb3-d3fef949d88f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] cloning vms/e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk@56170dc03de240289a8d156f9d514494 to images/7c5bf52b-c6c9-41dc-8a9d-b16bc36dfe7b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
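
This clone, together with the create_snap at 12:22:02.080, belongs to the other request in flight (req-2312e454, an image snapshot of a different instance) rather than the delete. Through the python-rbd bindings the pair comes down to roughly the following; pool and image names are from the log, the conffile path is an assumption:

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
        with cluster.open_ioctx('vms') as vms, \
                cluster.open_ioctx('images') as images:
            with rbd.Image(vms, 'e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk') as img:
                img.create_snap('56170dc03de240289a8d156f9d514494')
                # a clone source snapshot must be protected (pre clone-v2)
                img.protect_snap('56170dc03de240289a8d156f9d514494')
            rbd.RBD().clone(vms, 'e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk',
                            '56170dc03de240289a8d156f9d514494',
                            images, '7c5bf52b-c6c9-41dc-8a9d-b16bc36dfe7b')
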
Feb 25 07:22:02 np0005629333 podman[271693]: 2026-02-25 12:22:02.807681714 +0000 UTC m=+0.113841520 container cleanup b50073db9dbcd731d0b723c13f97680fccceaabc6c416564b26619e87a58647d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 07:22:02 np0005629333 systemd[1]: libpod-conmon-b50073db9dbcd731d0b723c13f97680fccceaabc6c416564b26619e87a58647d.scope: Deactivated successfully.
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.837 244018 INFO os_vif [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50')#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.838 244018 DEBUG nova.virt.libvirt.vif [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f1abe770-5205-4bae-888a-f2489c2af7a7", "address": "fa:16:3e:85:c5:fe", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1abe770-52", "ovs_interfaceid": "f1abe770-5205-4bae-888a-f2489c2af7a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.838 244018 DEBUG nova.network.os_vif_util [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "f1abe770-5205-4bae-888a-f2489c2af7a7", "address": "fa:16:3e:85:c5:fe", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1abe770-52", "ovs_interfaceid": "f1abe770-5205-4bae-888a-f2489c2af7a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.839 244018 DEBUG nova.network.os_vif_util [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:85:c5:fe,bridge_name='br-int',has_traffic_filtering=True,id=f1abe770-5205-4bae-888a-f2489c2af7a7,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1abe770-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.839 244018 DEBUG os_vif [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:c5:fe,bridge_name='br-int',has_traffic_filtering=True,id=f1abe770-5205-4bae-888a-f2489c2af7a7,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1abe770-52') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.840 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.840 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1abe770-52, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.842 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.844 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.845 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.846 244018 DEBUG nova.virt.libvirt.guest [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:c1:c1:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap31f40ed6-50"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.853 244018 INFO os_vif [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:c5:fe,bridge_name='br-int',has_traffic_filtering=True,id=f1abe770-5205-4bae-888a-f2489c2af7a7,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf1abe770-52')#033[00m
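The record above closes one full VIF unplug cycle: the Nova VIF dict is converted to an os-vif VIFOpenVSwitch object, and the actual removal amounts to a single ovsdbapp DelPortCommand against br-int. For reference, a minimal sketch of issuing the same idempotent deletion directly with ovsdbapp; the OVSDB endpoint tcp:127.0.0.1:6640 and the timeout are illustrative assumptions, not values taken from this host:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local Open_vSwitch database (endpoint is an assumption).
    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    conn = connection.Connection(idl=idl, timeout=10)
    ovsdb = impl_idl.OvsdbIdl(conn)

    # Same shape as the DelPortCommand above: drop the tap device from
    # br-int; if_exists=True turns a missing port into a no-op.
    ovsdb.del_port('tapf1abe770-52', bridge='br-int',
                   if_exists=True).execute(check_error=True)

Because of if_exists=True, repeating the delete after the port is already gone commits an empty transaction, which is what the later "Transaction caused no change" entry reports.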
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.854 244018 DEBUG nova.virt.libvirt.vif [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:20:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be69a588-f795-413e-b981-a20088eea5ed", "address": "fa:16:3e:76:32:d9", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe69a588-f7", "ovs_interfaceid": "be69a588-f795-413e-b981-a20088eea5ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.854 244018 DEBUG nova.network.os_vif_util [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "be69a588-f795-413e-b981-a20088eea5ed", "address": "fa:16:3e:76:32:d9", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe69a588-f7", "ovs_interfaceid": "be69a588-f795-413e-b981-a20088eea5ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.854 244018 DEBUG nova.network.os_vif_util [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:32:d9,bridge_name='br-int',has_traffic_filtering=True,id=be69a588-f795-413e-b981-a20088eea5ed,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe69a588-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.855 244018 DEBUG os_vif [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:32:d9,bridge_name='br-int',has_traffic_filtering=True,id=be69a588-f795-413e-b981-a20088eea5ed,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe69a588-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:22:02 np0005629333 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.868 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.869 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe69a588-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.872 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.874 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.876 244018 DEBUG nova.virt.libvirt.guest [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:c1:c1:8f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap31f40ed6-50"/></interface> not found in domain: <domain type='kvm'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  <name>instance-0000001b</name>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  <uuid>51d1d661-89db-4958-a2f4-c299ee997cde</uuid>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <nova:name>tempest-AttachInterfacesTestJSON-server-1386050966</nova:name>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:20:53</nova:creationTime>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:22:02 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:        <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:        <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:        <nova:port uuid="433e6f28-313e-4fe8-b8da-eacc8a0332c8">
Feb 25 07:22:02 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  <memory unit='KiB'>131072</memory>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  <vcpu placement='static'>1</vcpu>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  <sysinfo type='smbios'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <entry name='manufacturer'>RDO</entry>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <entry name='product'>OpenStack Compute</entry>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <entry name='serial'>51d1d661-89db-4958-a2f4-c299ee997cde</entry>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <entry name='uuid'>51d1d661-89db-4958-a2f4-c299ee997cde</entry>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <entry name='family'>Virtual Machine</entry>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <boot dev='hd'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <smbios mode='sysinfo'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <vmcoreinfo state='on'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  <cpu mode='host-model' check='partial'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  <clock offset='utc'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <timer name='pit' tickpolicy='delay'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <timer name='rtc' tickpolicy='catchup'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <timer name='hpet' present='no'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  <on_poweroff>destroy</on_poweroff>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  <on_reboot>restart</on_reboot>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  <on_crash>destroy</on_crash>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <disk type='network' device='disk'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <driver name='qemu' type='raw' cache='none'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <auth username='openstack'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:        <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <source protocol='rbd' name='vms/51d1d661-89db-4958-a2f4-c299ee997cde_disk'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:        <host name='192.168.122.100' port='6789'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target dev='vda' bus='virtio'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <disk type='network' device='cdrom'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <driver name='qemu' type='raw' cache='none'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <auth username='openstack'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:        <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <source protocol='rbd' name='vms/51d1d661-89db-4958-a2f4-c299ee997cde_disk.config'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:        <host name='192.168.122.100' port='6789'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target dev='sda' bus='sata'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <readonly/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <controller type='pci' index='0' model='pcie-root'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target chassis='1' port='0x10'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target chassis='2' port='0x11'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target chassis='3' port='0x12'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target chassis='4' port='0x13'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target chassis='5' port='0x14'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target chassis='6' port='0x15'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target chassis='7' port='0x16'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target chassis='8' port='0x17'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target chassis='9' port='0x18'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target chassis='10' port='0x19'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target chassis='11' port='0x1a'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target chassis='12' port='0x1b'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target chassis='13' port='0x1c'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target chassis='14' port='0x1d'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target chassis='15' port='0x1e'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target chassis='16' port='0x1f'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target chassis='17' port='0x20'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target chassis='18' port='0x21'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target chassis='19' port='0x22'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target chassis='20' port='0x23'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target chassis='21' port='0x24'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target chassis='22' port='0x25'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target chassis='23' port='0x26'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target chassis='24' port='0x27'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target chassis='25' port='0x28'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model name='pcie-pci-bridge'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <controller type='sata' index='0'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <interface type='ethernet'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <mac address='fa:16:3e:4a:d5:f4'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target dev='tap433e6f28-31'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model type='virtio'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <driver name='vhost' rx_queue_size='512'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <mtu size='1442'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <interface type='ethernet'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <mac address='fa:16:3e:85:c5:fe'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target dev='tapf1abe770-52'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model type='virtio'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <driver name='vhost' rx_queue_size='512'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <mtu size='1442'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <interface type='ethernet'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <mac address='fa:16:3e:76:32:d9'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target dev='tapbe69a588-f7'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model type='virtio'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <driver name='vhost' rx_queue_size='512'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <mtu size='1442'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <serial type='pty'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <log file='/var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/console.log' append='off'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target type='isa-serial' port='0'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:        <model name='isa-serial'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      </target>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <console type='pty'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <log file='/var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/console.log' append='off'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <target type='serial' port='0'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </console>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <input type='tablet' bus='usb'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='usb' bus='0' port='1'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <input type='mouse' bus='ps2'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <input type='keyboard' bus='ps2'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <graphics type='vnc' port='-1' autoport='yes' listen='::0'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <listen type='address' address='::0'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </graphics>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <audio id='1' type='none'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <model type='virtio' heads='1' primary='yes'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <watchdog model='itco' action='reset'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <memballoon model='virtio'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <stats period='10'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <rng model='virtio'>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <backend model='random'>/dev/urandom</backend>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:22:02 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:22:02 np0005629333 nova_compute[244014]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
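The dump above is Guest.get_interface_by_cfg() failing to locate tap31f40ed6-50 and logging the requested <interface> element alongside the full live domain XML for comparison. A rough standalone equivalent using the libvirt Python bindings, matching on MAC address; qemu:///system is the conventional system connection URI, while the UUID and MAC are taken from this log:

    import xml.etree.ElementTree as ET

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('51d1d661-89db-4958-a2f4-c299ee997cde')

    # Fetch the live domain XML and look for an <interface> carrying the
    # wanted MAC -- roughly what nova's Guest.get_interface_by_cfg() does.
    root = ET.fromstring(dom.XMLDesc(0))
    wanted = 'fa:16:3e:c1:c1:8f'
    matches = [iface for iface in root.findall('./devices/interface')
               if iface.find('mac') is not None
               and iface.find('mac').get('address') == wanted]
    print(matches if matches else 'interface not found in domain')
    conn.close()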
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.878 244018 WARNING nova.virt.libvirt.driver [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Detaching interface fa:16:3e:c1:c1:8f failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap31f40ed6-50' not found.#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.880 244018 DEBUG nova.virt.libvirt.vif [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.881 244018 DEBUG nova.network.os_vif_util [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converting VIF {"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.882 244018 DEBUG nova.network.os_vif_util [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.883 244018 DEBUG os_vif [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:22:02 np0005629333 podman[271731]: 2026-02-25 12:22:02.88484559 +0000 UTC m=+0.054042269 container remove b50073db9dbcd731d0b723c13f97680fccceaabc6c416564b26619e87a58647d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.885 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.886 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31f40ed6-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.887 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.889 244018 INFO os_vif [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:32:d9,bridge_name='br-int',has_traffic_filtering=True,id=be69a588-f795-413e-b981-a20088eea5ed,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe69a588-f7')#033[00m
Feb 25 07:22:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.895 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d60eedac-5b3f-428e-9f40-420095564672]: (4, ('Wed Feb 25 12:22:02 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 (b50073db9dbcd731d0b723c13f97680fccceaabc6c416564b26619e87a58647d)\nb50073db9dbcd731d0b723c13f97680fccceaabc6c416564b26619e87a58647d\nWed Feb 25 12:22:02 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 (b50073db9dbcd731d0b723c13f97680fccceaabc6c416564b26619e87a58647d)\nb50073db9dbcd731d0b723c13f97680fccceaabc6c416564b26619e87a58647d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.896 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e935a566-1be5-41fc-9c73-c588d066c3f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.897 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:02 np0005629333 kernel: tap08121372-a0: left promiscuous mode
Feb 25 07:22:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.904 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ac4730c8-35b8-4170-ac55-c7db7ab0b057]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.916 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.918 244018 INFO os_vif [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:c1:8f,bridge_name='br-int',has_traffic_filtering=True,id=31f40ed6-505b-4061-b861-ea2720b0ff62,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap31f40ed6-50')#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.919 244018 DEBUG nova.virt.libvirt.guest [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1386050966</nova:name>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  <nova:creationTime>2026-02-25 12:22:02</nova:creationTime>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  <nova:flavor name="m1.nano">
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <nova:memory>128</nova:memory>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <nova:disk>1</nova:disk>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <nova:swap>0</nova:swap>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <nova:vcpus>1</nova:vcpus>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  </nova:flavor>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  <nova:owner>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  </nova:owner>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  <nova:ports>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <nova:port uuid="433e6f28-313e-4fe8-b8da-eacc8a0332c8">
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <nova:port uuid="f1abe770-5205-4bae-888a-f2489c2af7a7">
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    <nova:port uuid="be69a588-f795-413e-b981-a20088eea5ed">
Feb 25 07:22:02 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:22:02 np0005629333 nova_compute[244014]:  </nova:ports>
Feb 25 07:22:02 np0005629333 nova_compute[244014]: </nova:instance>
Feb 25 07:22:02 np0005629333 nova_compute[244014]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
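The set_metadata call above rewrites the <nova:instance> block so the port list reflects the detach of 31f40ed6-505b-4061-b861-ea2720b0ff62, leaving ports 433e6f28..., f1abe770... and be69a588... recorded. Underneath, this is libvirt's virDomainSetMetadata; a minimal sketch with a deliberately tiny illustrative payload, applied to both the live guest and its persistent definition:

    import libvirt

    NS = 'http://openstack.org/xmlns/libvirt/nova/1.1'
    # Illustrative payload only; Nova writes the full <nova:instance>
    # document shown in the log.
    metadata = ('<nova:instance xmlns:nova="%s">'
                '<nova:name>example</nova:name></nova:instance>' % NS)

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('51d1d661-89db-4958-a2f4-c299ee997cde')
    # Store the XML under the 'instance' key in the nova namespace, on
    # both the running guest and its persistent config.
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, metadata,
                    'instance', NS,
                    libvirt.VIR_DOMAIN_AFFECT_LIVE |
                    libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    conn.close()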
Feb 25 07:22:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.919 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed2cdfc-da11-4e81-9ce1-b57ad2f8e5d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.920 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[144e32bf-c329-488a-a8b1-31f4ce98b874]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.924 244018 DEBUG nova.compute.manager [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-deleted-be69a588-f795-413e-b981-a20088eea5ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.924 244018 INFO nova.compute.manager [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Neutron deleted interface be69a588-f795-413e-b981-a20088eea5ed; detaching it from the instance and deleting it from the info cache#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.924 244018 DEBUG nova.network.neutron [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updating instance_info_cache with network_info: [{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f1abe770-5205-4bae-888a-f2489c2af7a7", "address": "fa:16:3e:85:c5:fe", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1abe770-52", "ovs_interfaceid": "f1abe770-5205-4bae-888a-f2489c2af7a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
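The network-vif-deleted event handled here is how Neutron tells Nova that a port vanished out from under a running instance; it arrives through Nova's os-server-external-events API. A sketch of emitting the same event with python-novaclient; the Keystone URL and all credentials below are placeholders, and only the server UUID, event name and port tag come from this log:

    from keystoneauth1 import loading
    from keystoneauth1 import session as ks_session
    from novaclient import client as nova_client

    # Placeholder credentials -- every value here is an assumption.
    auth = loading.get_plugin_loader('password').load_from_options(
        auth_url='http://keystone.example.com:5000/v3',
        username='neutron', password='secret', project_name='service',
        user_domain_name='Default', project_domain_name='Default')
    nova = nova_client.Client('2.1', session=ks_session.Session(auth=auth))

    # Deliver the event Nova logged above: port be69a588... disappeared
    # from instance 51d1d661...; Nova responds by detaching it from the
    # guest and pruning it from the info cache.
    nova.server_external_events.create([{
        'server_uuid': '51d1d661-89db-4958-a2f4-c299ee997cde',
        'name': 'network-vif-deleted',
        'tag': 'be69a588-f795-413e-b981-a20088eea5ed',
    }])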
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.937 244018 DEBUG nova.storage.rbd_utils [None req-2312e454-6a95-42a1-acb3-d3fef949d88f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] flattening images/7c5bf52b-c6c9-41dc-8a9d-b16bc36dfe7b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb 25 07:22:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.938 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ba144d37-078e-4aec-8961-ef8dc9d3b57b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402555, 'reachable_time': 23431, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271792, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.941 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:22:02 np0005629333 systemd[1]: run-netns-ovnmeta\x2d08121372\x2da435\x2d401a\x2db405\x2d778e10d8c2e2.mount: Deactivated successfully.
Feb 25 07:22:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:02.943 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[29b9448d-3a59-440f-946e-8a3724107b73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.990 244018 DEBUG nova.virt.libvirt.vif [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be69a588-f795-413e-b981-a20088eea5ed", "address": "fa:16:3e:76:32:d9", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe69a588-f7", "ovs_interfaceid": "be69a588-f795-413e-b981-a20088eea5ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.992 244018 DEBUG nova.network.os_vif_util [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converting VIF {"id": "be69a588-f795-413e-b981-a20088eea5ed", "address": "fa:16:3e:76:32:d9", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe69a588-f7", "ovs_interfaceid": "be69a588-f795-413e-b981-a20088eea5ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:22:02 np0005629333 nova_compute[244014]: 2026-02-25 12:22:02.996 244018 DEBUG nova.network.os_vif_util [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:32:d9,bridge_name='br-int',has_traffic_filtering=True,id=be69a588-f795-413e-b981-a20088eea5ed,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe69a588-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.006 244018 DEBUG nova.virt.libvirt.guest [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:76:32:d9"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbe69a588-f7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.015 244018 DEBUG nova.virt.libvirt.driver [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Attempting to detach device tapbe69a588-f7 from instance 51d1d661-89db-4958-a2f4-c299ee997cde from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.016 244018 DEBUG nova.virt.libvirt.guest [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] detach device xml: <interface type="ethernet">
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  <mac address="fa:16:3e:76:32:d9"/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  <model type="virtio"/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  <mtu size="1442"/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  <target dev="tapbe69a588-f7"/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]: </interface>
Feb 25 07:22:03 np0005629333 nova_compute[244014]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.042 244018 DEBUG nova.virt.libvirt.guest [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:76:32:d9"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbe69a588-f7"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.046 244018 DEBUG nova.virt.libvirt.guest [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:76:32:d9"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapbe69a588-f7"/></interface> not found in domain: <domain type='kvm'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  <name>instance-0000001b</name>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  <uuid>51d1d661-89db-4958-a2f4-c299ee997cde</uuid>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <nova:name>tempest-AttachInterfacesTestJSON-server-1386050966</nova:name>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:22:02</nova:creationTime>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:22:03 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:        <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:        <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:        <nova:port uuid="433e6f28-313e-4fe8-b8da-eacc8a0332c8">
Feb 25 07:22:03 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:        <nova:port uuid="f1abe770-5205-4bae-888a-f2489c2af7a7">
Feb 25 07:22:03 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:        <nova:port uuid="be69a588-f795-413e-b981-a20088eea5ed">
Feb 25 07:22:03 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  <memory unit='KiB'>131072</memory>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  <vcpu placement='static'>1</vcpu>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  <sysinfo type='smbios'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <entry name='manufacturer'>RDO</entry>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <entry name='product'>OpenStack Compute</entry>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <entry name='serial'>51d1d661-89db-4958-a2f4-c299ee997cde</entry>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <entry name='uuid'>51d1d661-89db-4958-a2f4-c299ee997cde</entry>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <entry name='family'>Virtual Machine</entry>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <boot dev='hd'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <smbios mode='sysinfo'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <vmcoreinfo state='on'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  <cpu mode='host-model' check='partial'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  <clock offset='utc'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <timer name='pit' tickpolicy='delay'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <timer name='rtc' tickpolicy='catchup'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <timer name='hpet' present='no'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  <on_poweroff>destroy</on_poweroff>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  <on_reboot>restart</on_reboot>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  <on_crash>destroy</on_crash>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <disk type='network' device='disk'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <driver name='qemu' type='raw' cache='none'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <auth username='openstack'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:        <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <source protocol='rbd' name='vms/51d1d661-89db-4958-a2f4-c299ee997cde_disk'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:        <host name='192.168.122.100' port='6789'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target dev='vda' bus='virtio'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <disk type='network' device='cdrom'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <driver name='qemu' type='raw' cache='none'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <auth username='openstack'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:        <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <source protocol='rbd' name='vms/51d1d661-89db-4958-a2f4-c299ee997cde_disk.config'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:        <host name='192.168.122.100' port='6789'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target dev='sda' bus='sata'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <readonly/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <controller type='pci' index='0' model='pcie-root'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target chassis='1' port='0x10'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target chassis='2' port='0x11'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target chassis='3' port='0x12'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target chassis='4' port='0x13'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target chassis='5' port='0x14'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target chassis='6' port='0x15'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target chassis='7' port='0x16'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target chassis='8' port='0x17'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target chassis='9' port='0x18'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target chassis='10' port='0x19'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target chassis='11' port='0x1a'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target chassis='12' port='0x1b'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target chassis='13' port='0x1c'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target chassis='14' port='0x1d'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target chassis='15' port='0x1e'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target chassis='16' port='0x1f'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target chassis='17' port='0x20'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target chassis='18' port='0x21'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target chassis='19' port='0x22'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target chassis='20' port='0x23'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target chassis='21' port='0x24'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target chassis='22' port='0x25'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target chassis='23' port='0x26'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target chassis='24' port='0x27'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target chassis='25' port='0x28'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <model name='pcie-pci-bridge'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <controller type='sata' index='0'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <interface type='ethernet'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <mac address='fa:16:3e:4a:d5:f4'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target dev='tap433e6f28-31'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <model type='virtio'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <driver name='vhost' rx_queue_size='512'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <mtu size='1442'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <interface type='ethernet'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <mac address='fa:16:3e:85:c5:fe'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target dev='tapf1abe770-52'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <model type='virtio'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <driver name='vhost' rx_queue_size='512'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <mtu size='1442'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <serial type='pty'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <log file='/var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/console.log' append='off'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target type='isa-serial' port='0'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:        <model name='isa-serial'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      </target>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <console type='pty'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <log file='/var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde/console.log' append='off'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <target type='serial' port='0'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </console>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <input type='tablet' bus='usb'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='usb' bus='0' port='1'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <input type='mouse' bus='ps2'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <input type='keyboard' bus='ps2'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <graphics type='vnc' port='-1' autoport='yes' listen='::0'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <listen type='address' address='::0'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </graphics>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <audio id='1' type='none'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <model type='virtio' heads='1' primary='yes'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <watchdog model='itco' action='reset'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <memballoon model='virtio'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <stats period='10'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <rng model='virtio'>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <backend model='random'>/dev/urandom</backend>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:22:03 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:22:03 np0005629333 nova_compute[244014]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.048 244018 INFO nova.virt.libvirt.driver [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Successfully detached device tapbe69a588-f7 from instance 51d1d661-89db-4958-a2f4-c299ee997cde from the persistent domain config.#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.049 244018 DEBUG nova.virt.libvirt.vif [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:20:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1386050966',display_name='tempest-AttachInterfacesTestJSON-server-1386050966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1386050966',id=27,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3Qs4kdLW5bqYrRLWEBzJw8UcU+3lkulOyYAhWsA+bRH1j3S9Z7uwL/IpjoLl46+M631clnAHdxFJg3d7VIkui/eViORSe/Qn//YRzqQpBGvASKIZ6dBgaRkJi6p/BZXw==',key_name='tempest-keypair-1265508460',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:20:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-6aany624',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=51d1d661-89db-4958-a2f4-c299ee997cde,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be69a588-f795-413e-b981-a20088eea5ed", "address": "fa:16:3e:76:32:d9", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe69a588-f7", "ovs_interfaceid": "be69a588-f795-413e-b981-a20088eea5ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.049 244018 DEBUG nova.network.os_vif_util [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converting VIF {"id": "be69a588-f795-413e-b981-a20088eea5ed", "address": "fa:16:3e:76:32:d9", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe69a588-f7", "ovs_interfaceid": "be69a588-f795-413e-b981-a20088eea5ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.050 244018 DEBUG nova.network.os_vif_util [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:32:d9,bridge_name='br-int',has_traffic_filtering=True,id=be69a588-f795-413e-b981-a20088eea5ed,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe69a588-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.051 244018 DEBUG os_vif [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:32:d9,bridge_name='br-int',has_traffic_filtering=True,id=be69a588-f795-413e-b981-a20088eea5ed,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe69a588-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.052 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.052 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe69a588-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.053 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.054 244018 INFO os_vif [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:32:d9,bridge_name='br-int',has_traffic_filtering=True,id=be69a588-f795-413e-b981-a20088eea5ed,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe69a588-f7')#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.055 244018 DEBUG nova.virt.libvirt.guest [req-76c6d834-43ac-4a37-b89c-fec4d71bb59c req-888b6a24-7846-4627-b911-869e022be7db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  <nova:name>tempest-AttachInterfacesTestJSON-server-1386050966</nova:name>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  <nova:creationTime>2026-02-25 12:22:03</nova:creationTime>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  <nova:flavor name="m1.nano">
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <nova:memory>128</nova:memory>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <nova:disk>1</nova:disk>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <nova:swap>0</nova:swap>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <nova:vcpus>1</nova:vcpus>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  </nova:flavor>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  <nova:owner>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  </nova:owner>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  <nova:ports>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <nova:port uuid="433e6f28-313e-4fe8-b8da-eacc8a0332c8">
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    <nova:port uuid="f1abe770-5205-4bae-888a-f2489c2af7a7">
Feb 25 07:22:03 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:22:03 np0005629333 nova_compute[244014]:  </nova:ports>
Feb 25 07:22:03 np0005629333 nova_compute[244014]: </nova:instance>
Feb 25 07:22:03 np0005629333 nova_compute[244014]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.187 244018 DEBUG nova.storage.rbd_utils [None req-2312e454-6a95-42a1-acb3-d3fef949d88f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] removing snapshot(56170dc03de240289a8d156f9d514494) on rbd image(e9eb76fe-9616-40a4-aa53-0054cc5c3a57_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.297 244018 INFO nova.virt.libvirt.driver [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Deleting instance files /var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde_del#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.298 244018 INFO nova.virt.libvirt.driver [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Deletion of /var/lib/nova/instances/51d1d661-89db-4958-a2f4-c299ee997cde_del complete#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.365 244018 INFO nova.compute.manager [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Took 1.02 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.366 244018 DEBUG oslo.service.loopingcall [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.366 244018 DEBUG nova.compute.manager [-] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.366 244018 DEBUG nova.network.neutron [-] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.500 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.585 244018 DEBUG nova.network.neutron [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Updating instance_info_cache with network_info: [{"id": "0fef626d-412c-4101-95eb-eadb3354247f", "address": "fa:16:3e:67:08:21", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fef626d-41", "ovs_interfaceid": "0fef626d-412c-4101-95eb-eadb3354247f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.607 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Releasing lock "refresh_cache-5a7dc142-2b11-4214-87b7-636f27ccacbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.608 244018 DEBUG nova.compute.manager [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Instance network_info: |[{"id": "0fef626d-412c-4101-95eb-eadb3354247f", "address": "fa:16:3e:67:08:21", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fef626d-41", "ovs_interfaceid": "0fef626d-412c-4101-95eb-eadb3354247f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.609 244018 DEBUG oslo_concurrency.lockutils [req-73db7ff8-0a92-47ad-bc14-c668b231a8ae req-a583c495-440f-4e7a-83a4-c43a1e70cf46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-5a7dc142-2b11-4214-87b7-636f27ccacbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.609 244018 DEBUG nova.network.neutron [req-73db7ff8-0a92-47ad-bc14-c668b231a8ae req-a583c495-440f-4e7a-83a4-c43a1e70cf46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Refreshing network info cache for port 0fef626d-412c-4101-95eb-eadb3354247f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.616 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Start _get_guest_xml network_info=[{"id": "0fef626d-412c-4101-95eb-eadb3354247f", "address": "fa:16:3e:67:08:21", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fef626d-41", "ovs_interfaceid": "0fef626d-412c-4101-95eb-eadb3354247f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.625 244018 WARNING nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.637 244018 DEBUG nova.virt.libvirt.host [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.638 244018 DEBUG nova.virt.libvirt.host [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.642 244018 DEBUG nova.virt.libvirt.host [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.642 244018 DEBUG nova.virt.libvirt.host [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.643 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.643 244018 DEBUG nova.virt.hardware [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.644 244018 DEBUG nova.virt.hardware [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.644 244018 DEBUG nova.virt.hardware [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.645 244018 DEBUG nova.virt.hardware [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.645 244018 DEBUG nova.virt.hardware [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.645 244018 DEBUG nova.virt.hardware [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.646 244018 DEBUG nova.virt.hardware [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.646 244018 DEBUG nova.virt.hardware [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.646 244018 DEBUG nova.virt.hardware [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.646 244018 DEBUG nova.virt.hardware [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.647 244018 DEBUG nova.virt.hardware [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.651 244018 DEBUG oslo_concurrency.processutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:22:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 325 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 114 op/s
Feb 25 07:22:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Feb 25 07:22:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Feb 25 07:22:03 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.812 244018 DEBUG nova.storage.rbd_utils [None req-2312e454-6a95-42a1-acb3-d3fef949d88f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] creating snapshot(snap) on rbd image(7c5bf52b-c6c9-41dc-8a9d-b16bc36dfe7b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
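[editor's note] The rbd_utils.create_snap() call above drives Ceph through the python-rados/python-rbd bindings rather than the CLI. A minimal standalone sketch of the same operation, assuming the pool name "vms", the client id "openstack", and the conf path that appear elsewhere in this log (this is not Nova's actual rbd_utils code):

    import rados
    import rbd

    # Connect as client.openstack with the same conf the log shows.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')  # pool assumed to hold nova disks
        try:
            # Image and snapshot names copied from the log line above.
            image = rbd.Image(ioctx, '7c5bf52b-c6c9-41dc-8a9d-b16bc36dfe7b')
            try:
                image.create_snap('snap')
            finally:
                image.close()
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()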
Feb 25 07:22:03 np0005629333 nova_compute[244014]: 2026-02-25 12:22:03.856 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:22:04 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1695468625' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.210 244018 DEBUG oslo_concurrency.processutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
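[editor's note] The "Running cmd" / "CMD ... returned: 0" pair above is oslo.concurrency's processutils shelling out to the Ceph CLI so Nova can resolve monitor addresses. A sketch of the same round trip, with the command line copied from the log and assuming only that the mon dump JSON carries a top-level "mons" list:

    import json
    from oslo_concurrency import processutils

    def mon_dump(client='openstack', conf='/etc/ceph/ceph.conf'):
        # execute() raises ProcessExecutionError on a non-zero exit,
        # which is why the log only ever records "returned: 0" here.
        out, _err = processutils.execute(
            'ceph', 'mon', 'dump', '--format=json',
            '--id', client, '--conf', conf)
        return json.loads(out)

    # e.g. [m.get('name') for m in mon_dump().get('mons', [])]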
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.230 244018 DEBUG nova.storage.rbd_utils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] rbd image 5a7dc142-2b11-4214-87b7-636f27ccacbf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.233 244018 DEBUG oslo_concurrency.processutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:22:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.732 244018 DEBUG nova.compute.manager [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-unplugged-433e6f28-313e-4fe8-b8da-eacc8a0332c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.733 244018 DEBUG oslo_concurrency.lockutils [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.733 244018 DEBUG oslo_concurrency.lockutils [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.733 244018 DEBUG oslo_concurrency.lockutils [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
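[editor's note] The Acquiring / acquired / "released" triplets above come from oslo.concurrency's process-local locks, which Nova uses to serialize external-event handling per instance. A minimal sketch of the same primitive; lockutils.lock() is the real oslo API, the lock name is copied from the log, and the surrounding function is hypothetical rather than Nova's InstanceEvents code:

    from oslo_concurrency import lockutils

    def pop_event(pending_events):
        # One internal (process-local) lock per instance keeps
        # concurrent event deliveries from racing on the same state.
        with lockutils.lock('51d1d661-89db-4958-a2f4-c299ee997cde-events'):
            return pending_events.pop() if pending_events else None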
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.734 244018 DEBUG nova.compute.manager [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] No waiting events found dispatching network-vif-unplugged-433e6f28-313e-4fe8-b8da-eacc8a0332c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.734 244018 DEBUG nova.compute.manager [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-unplugged-433e6f28-313e-4fe8-b8da-eacc8a0332c8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.734 244018 DEBUG nova.compute.manager [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-plugged-433e6f28-313e-4fe8-b8da-eacc8a0332c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.734 244018 DEBUG oslo_concurrency.lockutils [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.735 244018 DEBUG oslo_concurrency.lockutils [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.735 244018 DEBUG oslo_concurrency.lockutils [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.735 244018 DEBUG nova.compute.manager [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] No waiting events found dispatching network-vif-plugged-433e6f28-313e-4fe8-b8da-eacc8a0332c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.735 244018 WARNING nova.compute.manager [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received unexpected event network-vif-plugged-433e6f28-313e-4fe8-b8da-eacc8a0332c8 for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.736 244018 DEBUG nova.compute.manager [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-unplugged-f1abe770-5205-4bae-888a-f2489c2af7a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.736 244018 DEBUG oslo_concurrency.lockutils [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.736 244018 DEBUG oslo_concurrency.lockutils [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.736 244018 DEBUG oslo_concurrency.lockutils [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.737 244018 DEBUG nova.compute.manager [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] No waiting events found dispatching network-vif-unplugged-f1abe770-5205-4bae-888a-f2489c2af7a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.737 244018 DEBUG nova.compute.manager [req-af028087-bacc-47a4-a60f-9e6e43d8ab4f req-2c5a806e-16b1-4c2e-b1bf-e4bbba4ebb49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-unplugged-f1abe770-5205-4bae-888a-f2489c2af7a7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:22:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:22:04 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/398283395' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:22:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.776 244018 DEBUG oslo_concurrency.processutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.777 244018 DEBUG nova.virt.libvirt.vif [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:21:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-389676934',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-389676934',id=29,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='04734aae68d34fac8fb592fc015632fd',ramdisk_id='',reservation_id='r-x9nbtz7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1981016367',owner_user_name='tempest-AttachInterfacesV270Test-1981016367-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:21:57Z,user_data=None,user_id='7d27f8a357c2443a9140598fd9ec73e1',uuid=5a7dc142-2b11-4214-87b7-636f27ccacbf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0fef626d-412c-4101-95eb-eadb3354247f", "address": "fa:16:3e:67:08:21", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fef626d-41", "ovs_interfaceid": "0fef626d-412c-4101-95eb-eadb3354247f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.777 244018 DEBUG nova.network.os_vif_util [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Converting VIF {"id": "0fef626d-412c-4101-95eb-eadb3354247f", "address": "fa:16:3e:67:08:21", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fef626d-41", "ovs_interfaceid": "0fef626d-412c-4101-95eb-eadb3354247f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.778 244018 DEBUG nova.network.os_vif_util [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:08:21,bridge_name='br-int',has_traffic_filtering=True,id=0fef626d-412c-4101-95eb-eadb3354247f,network=Network(6599ab7f-86c0-4af9-b532-eeb7a134fca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fef626d-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.779 244018 DEBUG nova.objects.instance [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lazy-loading 'pci_devices' on Instance uuid 5a7dc142-2b11-4214-87b7-636f27ccacbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:22:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Feb 25 07:22:04 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.971 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:22:04 np0005629333 nova_compute[244014]:  <uuid>5a7dc142-2b11-4214-87b7-636f27ccacbf</uuid>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:  <name>instance-0000001d</name>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <nova:name>tempest-AttachInterfacesV270Test-server-389676934</nova:name>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:22:03</nova:creationTime>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:22:04 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:        <nova:user uuid="7d27f8a357c2443a9140598fd9ec73e1">tempest-AttachInterfacesV270Test-1981016367-project-member</nova:user>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:        <nova:project uuid="04734aae68d34fac8fb592fc015632fd">tempest-AttachInterfacesV270Test-1981016367</nova:project>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:        <nova:port uuid="0fef626d-412c-4101-95eb-eadb3354247f">
Feb 25 07:22:04 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <entry name="serial">5a7dc142-2b11-4214-87b7-636f27ccacbf</entry>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <entry name="uuid">5a7dc142-2b11-4214-87b7-636f27ccacbf</entry>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/5a7dc142-2b11-4214-87b7-636f27ccacbf_disk">
Feb 25 07:22:04 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:22:04 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/5a7dc142-2b11-4214-87b7-636f27ccacbf_disk.config">
Feb 25 07:22:04 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:22:04 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:67:08:21"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <target dev="tap0fef626d-41"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/5a7dc142-2b11-4214-87b7-636f27ccacbf/console.log" append="off"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:22:04 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:22:04 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:22:04 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:22:04 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
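[editor's note] The XML dump ending above is what the driver ultimately hands to libvirt. A sketch of defining and booting a domain from such XML with the libvirt Python bindings; Nova actually goes through its own Host/Guest wrapper classes, and the file name here is hypothetical:

    import libvirt

    # XML saved from the _get_guest_xml output above (hypothetical file).
    xml = open('instance-0000001d.xml').read()

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)   # persist the domain definition
        dom.create()                # start it, like "virsh start"
        print(dom.name(), 'active:', bool(dom.isActive()))
    finally:
        conn.close()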
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.971 244018 DEBUG nova.compute.manager [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Preparing to wait for external event network-vif-plugged-0fef626d-412c-4101-95eb-eadb3354247f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.971 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Acquiring lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.971 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.972 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.972 244018 DEBUG nova.virt.libvirt.vif [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:21:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-389676934',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-389676934',id=29,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='04734aae68d34fac8fb592fc015632fd',ramdisk_id='',reservation_id='r-x9nbtz7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1981016367',owner_user_name='tempest-AttachInterfacesV270Test-1981016367-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:21:57Z,user_data=None,user_id='7d27f8a357c2443a9140598fd9ec73e1',uuid=5a7dc142-2b11-4214-87b7-636f27ccacbf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0fef626d-412c-4101-95eb-eadb3354247f", "address": "fa:16:3e:67:08:21", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fef626d-41", "ovs_interfaceid": "0fef626d-412c-4101-95eb-eadb3354247f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.972 244018 DEBUG nova.network.os_vif_util [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Converting VIF {"id": "0fef626d-412c-4101-95eb-eadb3354247f", "address": "fa:16:3e:67:08:21", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fef626d-41", "ovs_interfaceid": "0fef626d-412c-4101-95eb-eadb3354247f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.973 244018 DEBUG nova.network.os_vif_util [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:08:21,bridge_name='br-int',has_traffic_filtering=True,id=0fef626d-412c-4101-95eb-eadb3354247f,network=Network(6599ab7f-86c0-4af9-b532-eeb7a134fca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fef626d-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.973 244018 DEBUG os_vif [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:08:21,bridge_name='br-int',has_traffic_filtering=True,id=0fef626d-412c-4101-95eb-eadb3354247f,network=Network(6599ab7f-86c0-4af9-b532-eeb7a134fca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fef626d-41') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.973 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.974 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.974 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.976 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.976 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0fef626d-41, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.977 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0fef626d-41, col_values=(('external_ids', {'iface-id': '0fef626d-412c-4101-95eb-eadb3354247f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:08:21', 'vm-uuid': '5a7dc142-2b11-4214-87b7-636f27ccacbf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.978 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:04 np0005629333 NetworkManager[49836]: <info>  [1772022124.9789] manager: (tap0fef626d-41): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.980 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.983 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:04 np0005629333 nova_compute[244014]: 2026-02-25 12:22:04.984 244018 INFO os_vif [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:08:21,bridge_name='br-int',has_traffic_filtering=True,id=0fef626d-412c-4101-95eb-eadb3354247f,network=Network(6599ab7f-86c0-4af9-b532-eeb7a134fca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fef626d-41')#033[00m
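[editor's note] The plug sequence above (AddBridgeCommand, then AddPortCommand plus DbSetCommand on Interface.external_ids) is os-vif driving ovsdbapp against the local OVS database. A sketch of an equivalent transaction; the db.sock path is an assumption, while the bridge, port name, iface-id, MAC, and vm-uuid are copied from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # One transaction, two commands, mirroring txn n=1 idx=0/idx=1 above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap0fef626d-41', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap0fef626d-41',
            ('external_ids', {
                'iface-id': '0fef626d-412c-4101-95eb-eadb3354247f',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:67:08:21',
                'vm-uuid': '5a7dc142-2b11-4214-87b7-636f27ccacbf'})))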
Feb 25 07:22:05 np0005629333 nova_compute[244014]: 2026-02-25 12:22:05.031 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:22:05 np0005629333 nova_compute[244014]: 2026-02-25 12:22:05.031 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:22:05 np0005629333 nova_compute[244014]: 2026-02-25 12:22:05.031 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] No VIF found with MAC fa:16:3e:67:08:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:22:05 np0005629333 nova_compute[244014]: 2026-02-25 12:22:05.031 244018 INFO nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Using config drive#033[00m
Feb 25 07:22:05 np0005629333 nova_compute[244014]: 2026-02-25 12:22:05.056 244018 DEBUG nova.storage.rbd_utils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] rbd image 5a7dc142-2b11-4214-87b7-636f27ccacbf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:22:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 325 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.5 MiB/s wr, 63 op/s
Feb 25 07:22:05 np0005629333 nova_compute[244014]: 2026-02-25 12:22:05.731 244018 INFO nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Creating config drive at /var/lib/nova/instances/5a7dc142-2b11-4214-87b7-636f27ccacbf/disk.config#033[00m
Feb 25 07:22:05 np0005629333 nova_compute[244014]: 2026-02-25 12:22:05.737 244018 DEBUG oslo_concurrency.processutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5a7dc142-2b11-4214-87b7-636f27ccacbf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5givxko3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:22:05 np0005629333 nova_compute[244014]: 2026-02-25 12:22:05.764 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updating instance_info_cache with network_info: [{"id": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "address": "fa:16:3e:4a:d5:f4", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap433e6f28-31", "ovs_interfaceid": "433e6f28-313e-4fe8-b8da-eacc8a0332c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "31f40ed6-505b-4061-b861-ea2720b0ff62", "address": "fa:16:3e:c1:c1:8f", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31f40ed6-50", "ovs_interfaceid": "31f40ed6-505b-4061-b861-ea2720b0ff62", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f1abe770-5205-4bae-888a-f2489c2af7a7", "address": "fa:16:3e:85:c5:fe", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1abe770-52", "ovs_interfaceid": "f1abe770-5205-4bae-888a-f2489c2af7a7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be69a588-f795-413e-b981-a20088eea5ed", "address": "fa:16:3e:76:32:d9", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe69a588-f7", "ovs_interfaceid": "be69a588-f795-413e-b981-a20088eea5ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:22:05 np0005629333 nova_compute[244014]: 2026-02-25 12:22:05.799 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:22:05 np0005629333 nova_compute[244014]: 2026-02-25 12:22:05.799 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 25 07:22:05 np0005629333 nova_compute[244014]: 2026-02-25 12:22:05.800 244018 DEBUG oslo_concurrency.lockutils [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquired lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:22:05 np0005629333 nova_compute[244014]: 2026-02-25 12:22:05.801 244018 DEBUG nova.network.neutron [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:22:05 np0005629333 nova_compute[244014]: 2026-02-25 12:22:05.803 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:22:05 np0005629333 nova_compute[244014]: 2026-02-25 12:22:05.805 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:22:05 np0005629333 nova_compute[244014]: 2026-02-25 12:22:05.806 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:22:05 np0005629333 nova_compute[244014]: 2026-02-25 12:22:05.806 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:22:05 np0005629333 nova_compute[244014]: 2026-02-25 12:22:05.870 244018 DEBUG oslo_concurrency.processutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5a7dc142-2b11-4214-87b7-636f27ccacbf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5givxko3" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:22:05 np0005629333 nova_compute[244014]: 2026-02-25 12:22:05.904 244018 DEBUG nova.storage.rbd_utils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] rbd image 5a7dc142-2b11-4214-87b7-636f27ccacbf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:22:05 np0005629333 nova_compute[244014]: 2026-02-25 12:22:05.908 244018 DEBUG oslo_concurrency.processutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5a7dc142-2b11-4214-87b7-636f27ccacbf/disk.config 5a7dc142-2b11-4214-87b7-636f27ccacbf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:22:05 np0005629333 nova_compute[244014]: 2026-02-25 12:22:05.933 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:22:05 np0005629333 nova_compute[244014]: 2026-02-25 12:22:05.935 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:22:05 np0005629333 nova_compute[244014]: 2026-02-25 12:22:05.966 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:05 np0005629333 nova_compute[244014]: 2026-02-25 12:22:05.967 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:05 np0005629333 nova_compute[244014]: 2026-02-25 12:22:05.967 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:05 np0005629333 nova_compute[244014]: 2026-02-25 12:22:05.968 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:22:05 np0005629333 nova_compute[244014]: 2026-02-25 12:22:05.968 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.066 244018 DEBUG oslo_concurrency.processutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5a7dc142-2b11-4214-87b7-636f27ccacbf/disk.config 5a7dc142-2b11-4214-87b7-636f27ccacbf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.068 244018 INFO nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Deleting local config drive /var/lib/nova/instances/5a7dc142-2b11-4214-87b7-636f27ccacbf/disk.config because it was imported into RBD.#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.089 244018 DEBUG nova.network.neutron [req-73db7ff8-0a92-47ad-bc14-c668b231a8ae req-a583c495-440f-4e7a-83a4-c43a1e70cf46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Updated VIF entry in instance network info cache for port 0fef626d-412c-4101-95eb-eadb3354247f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.090 244018 DEBUG nova.network.neutron [req-73db7ff8-0a92-47ad-bc14-c668b231a8ae req-a583c495-440f-4e7a-83a4-c43a1e70cf46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Updating instance_info_cache with network_info: [{"id": "0fef626d-412c-4101-95eb-eadb3354247f", "address": "fa:16:3e:67:08:21", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fef626d-41", "ovs_interfaceid": "0fef626d-412c-4101-95eb-eadb3354247f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:22:06 np0005629333 kernel: tap0fef626d-41: entered promiscuous mode
Feb 25 07:22:06 np0005629333 NetworkManager[49836]: <info>  [1772022126.1148] manager: (tap0fef626d-41): new Tun device (/org/freedesktop/NetworkManager/Devices/101)
Feb 25 07:22:06 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:06Z|00210|binding|INFO|Claiming lport 0fef626d-412c-4101-95eb-eadb3354247f for this chassis.
Feb 25 07:22:06 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:06Z|00211|binding|INFO|0fef626d-412c-4101-95eb-eadb3354247f: Claiming fa:16:3e:67:08:21 10.100.0.12
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.118 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.122 244018 DEBUG oslo_concurrency.lockutils [req-73db7ff8-0a92-47ad-bc14-c668b231a8ae req-a583c495-440f-4e7a-83a4-c43a1e70cf46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-5a7dc142-2b11-4214-87b7-636f27ccacbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.129 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:08:21 10.100.0.12'], port_security=['fa:16:3e:67:08:21 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5a7dc142-2b11-4214-87b7-636f27ccacbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '04734aae68d34fac8fb592fc015632fd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4de9378b-f5fd-4838-89b9-d0ae11ab0ea8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fdb8d004-ad0d-4775-bacb-c6f8c36164f5, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=0fef626d-412c-4101-95eb-eadb3354247f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.131 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 0fef626d-412c-4101-95eb-eadb3354247f in datapath 6599ab7f-86c0-4af9-b532-eeb7a134fca8 bound to our chassis#033[00m
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.134 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6599ab7f-86c0-4af9-b532-eeb7a134fca8#033[00m
Feb 25 07:22:06 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:06Z|00212|binding|INFO|Setting lport 0fef626d-412c-4101-95eb-eadb3354247f ovn-installed in OVS
Feb 25 07:22:06 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:06Z|00213|binding|INFO|Setting lport 0fef626d-412c-4101-95eb-eadb3354247f up in Southbound
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.140 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.142 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.147 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a076dde0-dd77-42b0-a0d4-aff23efcedf4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.149 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6599ab7f-81 in ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.150 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6599ab7f-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.151 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4b669cf1-3951-44f4-a952-d3fcbd0c11d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.152 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[049bae00-f53d-4743-9499-8803d9d23118]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:06 np0005629333 systemd-machined[210048]: New machine qemu-33-instance-0000001d.
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.163 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[7dc1c34d-58d3-4ae9-875c-b25312de28d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:06 np0005629333 systemd[1]: Started Virtual Machine qemu-33-instance-0000001d.
Feb 25 07:22:06 np0005629333 systemd-udevd[272010]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.187 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[596ee765-26e9-4359-9638-9f564acea1ef]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:06 np0005629333 NetworkManager[49836]: <info>  [1772022126.2010] device (tap0fef626d-41): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:22:06 np0005629333 NetworkManager[49836]: <info>  [1772022126.2026] device (tap0fef626d-41): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.226 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e2267cc2-45af-4ba4-be4b-b0893a4a285a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.233 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b213083e-8ab0-40a9-b9a4-bca05bf80c1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:06 np0005629333 NetworkManager[49836]: <info>  [1772022126.2348] manager: (tap6599ab7f-80): new Veth device (/org/freedesktop/NetworkManager/Devices/102)
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.261 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1f3fe8c4-0cb1-4d5c-9912-7a627a22e222]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.268 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b7ed0f6e-431e-4d79-a6db-48621dad16a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:06 np0005629333 NetworkManager[49836]: <info>  [1772022126.2960] device (tap6599ab7f-80): carrier: link connected
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.301 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5ac5f02f-d6f5-4fad-88c0-35706038eace]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.320 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b5d62bb6-c787-4400-8be2-dbe3918907cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6599ab7f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:4a:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409570, 'reachable_time': 24913, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272040, 'error': None, 'target': 'ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.334 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1133d88a-f2f4-4c75-8c17-2db581f9d622]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe72:4a2d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 409570, 'tstamp': 409570}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272041, 'error': None, 'target': 'ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.336 244018 DEBUG nova.network.neutron [-] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.357 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e795fdda-03a4-4d69-a0a2-e91d4ce4170b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6599ab7f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:4a:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409570, 'reachable_time': 24913, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272042, 'error': None, 'target': 'ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.365 244018 INFO nova.compute.manager [-] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Took 3.00 seconds to deallocate network for instance.#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.373 244018 INFO nova.network.neutron [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Port 433e6f28-313e-4fe8-b8da-eacc8a0332c8 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.374 244018 INFO nova.network.neutron [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Port 31f40ed6-505b-4061-b861-ea2720b0ff62 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.393 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b0653097-7c44-419c-81fb-296b814be0a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.420 244018 DEBUG oslo_concurrency.lockutils [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.421 244018 DEBUG oslo_concurrency.lockutils [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.465 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[057af4d7-e58d-46de-8038-85b783dbee66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.467 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6599ab7f-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.468 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.469 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6599ab7f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:06 np0005629333 kernel: tap6599ab7f-80: entered promiscuous mode
Feb 25 07:22:06 np0005629333 NetworkManager[49836]: <info>  [1772022126.4728] manager: (tap6599ab7f-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.472 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.480 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6599ab7f-80, col_values=(('external_ids', {'iface-id': 'd8168125-f59d-4dc2-b239-e9ce5117e64b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:06 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:06Z|00214|binding|INFO|Releasing lport d8168125-f59d-4dc2-b239-e9ce5117e64b from this chassis (sb_readonly=0)
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.482 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.486 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6599ab7f-86c0-4af9-b532-eeb7a134fca8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6599ab7f-86c0-4af9-b532-eeb7a134fca8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.488 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.487 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8e35cff5-3e29-4da5-9eca-7d0bf65bdda8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.490 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-6599ab7f-86c0-4af9-b532-eeb7a134fca8
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/6599ab7f-86c0-4af9-b532-eeb7a134fca8.pid.haproxy
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 6599ab7f-86c0-4af9-b532-eeb7a134fca8
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:22:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:06.493 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'env', 'PROCESS_TAG=haproxy-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6599ab7f-86c0-4af9-b532-eeb7a134fca8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.575 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022126.5745094, 5a7dc142-2b11-4214-87b7-636f27ccacbf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.575 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] VM Started (Lifecycle Event)#033[00m
Feb 25 07:22:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:22:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1066128238' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.615 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.616 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.622 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022126.5746312, 5a7dc142-2b11-4214-87b7-636f27ccacbf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.622 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.649 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.654 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.698 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.754 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.755 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.764 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.764 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.820 244018 DEBUG nova.compute.manager [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-plugged-f1abe770-5205-4bae-888a-f2489c2af7a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.821 244018 DEBUG oslo_concurrency.lockutils [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.821 244018 DEBUG oslo_concurrency.lockutils [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.822 244018 DEBUG oslo_concurrency.lockutils [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.822 244018 DEBUG nova.compute.manager [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] No waiting events found dispatching network-vif-plugged-f1abe770-5205-4bae-888a-f2489c2af7a7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.822 244018 WARNING nova.compute.manager [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received unexpected event network-vif-plugged-f1abe770-5205-4bae-888a-f2489c2af7a7 for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.823 244018 DEBUG nova.compute.manager [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-deleted-433e6f28-313e-4fe8-b8da-eacc8a0332c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.823 244018 DEBUG nova.compute.manager [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Received event network-vif-deleted-f1abe770-5205-4bae-888a-f2489c2af7a7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.823 244018 DEBUG nova.compute.manager [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received event network-vif-plugged-0fef626d-412c-4101-95eb-eadb3354247f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.824 244018 DEBUG oslo_concurrency.lockutils [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.824 244018 DEBUG oslo_concurrency.lockutils [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.824 244018 DEBUG oslo_concurrency.lockutils [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.824 244018 DEBUG nova.compute.manager [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Processing event network-vif-plugged-0fef626d-412c-4101-95eb-eadb3354247f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.825 244018 DEBUG nova.compute.manager [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received event network-vif-plugged-0fef626d-412c-4101-95eb-eadb3354247f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.825 244018 DEBUG oslo_concurrency.lockutils [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.825 244018 DEBUG oslo_concurrency.lockutils [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.826 244018 DEBUG oslo_concurrency.lockutils [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.826 244018 DEBUG nova.compute.manager [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] No waiting events found dispatching network-vif-plugged-0fef626d-412c-4101-95eb-eadb3354247f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.826 244018 WARNING nova.compute.manager [req-9cc31776-c7d3-47b8-91fb-1060b0738ec3 req-165c014e-a3fd-4ef4-8357-a9b42110ae28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received unexpected event network-vif-plugged-0fef626d-412c-4101-95eb-eadb3354247f for instance with vm_state building and task_state spawning.
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.827 244018 DEBUG nova.compute.manager [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.832 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.833 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022126.8330507, 5a7dc142-2b11-4214-87b7-636f27ccacbf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.834 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] VM Resumed (Lifecycle Event)
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.841 244018 INFO nova.virt.libvirt.driver [-] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Instance spawned successfully.
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.842 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.868 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.879 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.884 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:22:06 np0005629333 podman[272119]: 2026-02-25 12:22:06.886035432 +0000 UTC m=+0.077679471 container create 677e628f3a81fd3832b29927a5666feb48b09d1d8b185b424fcfc3c1f030757b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.885 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.886 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.887 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.888 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.888 244018 DEBUG nova.virt.libvirt.driver [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.918 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:22:06 np0005629333 nova_compute[244014]: 2026-02-25 12:22:06.928 244018 DEBUG oslo_concurrency.processutils [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:22:06 np0005629333 podman[272119]: 2026-02-25 12:22:06.849662077 +0000 UTC m=+0.041306186 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:22:06 np0005629333 systemd[1]: Started libpod-conmon-677e628f3a81fd3832b29927a5666feb48b09d1d8b185b424fcfc3c1f030757b.scope.
Feb 25 07:22:06 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:22:06 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb7a359f29f9a30a87daa097bb03214ce4fe36c45b982aeae264aafa91f3402f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:22:07 np0005629333 podman[272119]: 2026-02-25 12:22:07.002132815 +0000 UTC m=+0.193776884 container init 677e628f3a81fd3832b29927a5666feb48b09d1d8b185b424fcfc3c1f030757b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:22:07 np0005629333 podman[272119]: 2026-02-25 12:22:07.010493853 +0000 UTC m=+0.202137922 container start 677e628f3a81fd3832b29927a5666feb48b09d1d8b185b424fcfc3c1f030757b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:22:07 np0005629333 nova_compute[244014]: 2026-02-25 12:22:07.029 244018 INFO nova.virt.libvirt.driver [None req-2312e454-6a95-42a1-acb3-d3fef949d88f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Snapshot image upload complete
Feb 25 07:22:07 np0005629333 nova_compute[244014]: 2026-02-25 12:22:07.030 244018 INFO nova.compute.manager [None req-2312e454-6a95-42a1-acb3-d3fef949d88f f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Took 6.61 seconds to snapshot the instance on the hypervisor.
Feb 25 07:22:07 np0005629333 nova_compute[244014]: 2026-02-25 12:22:07.043 244018 INFO nova.compute.manager [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Took 9.74 seconds to spawn the instance on the hypervisor.
Feb 25 07:22:07 np0005629333 nova_compute[244014]: 2026-02-25 12:22:07.043 244018 DEBUG nova.compute.manager [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:22:07 np0005629333 neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8[272135]: [NOTICE]   (272139) : New worker (272141) forked
Feb 25 07:22:07 np0005629333 neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8[272135]: [NOTICE]   (272139) : Loading success.
Feb 25 07:22:07 np0005629333 nova_compute[244014]: 2026-02-25 12:22:07.160 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:22:07 np0005629333 nova_compute[244014]: 2026-02-25 12:22:07.161 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4159MB free_disk=59.900690112262964GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 07:22:07 np0005629333 nova_compute[244014]: 2026-02-25 12:22:07.162 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:22:07 np0005629333 nova_compute[244014]: 2026-02-25 12:22:07.174 244018 INFO nova.compute.manager [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Took 11.21 seconds to build instance.
Feb 25 07:22:07 np0005629333 nova_compute[244014]: 2026-02-25 12:22:07.210 244018 INFO nova.network.neutron [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Port be69a588-f795-413e-b981-a20088eea5ed from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Feb 25 07:22:07 np0005629333 nova_compute[244014]: 2026-02-25 12:22:07.211 244018 DEBUG nova.network.neutron [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Updating instance_info_cache with network_info: [{"id": "f1abe770-5205-4bae-888a-f2489c2af7a7", "address": "fa:16:3e:85:c5:fe", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf1abe770-52", "ovs_interfaceid": "f1abe770-5205-4bae-888a-f2489c2af7a7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:22:07 np0005629333 nova_compute[244014]: 2026-02-25 12:22:07.262 244018 DEBUG oslo_concurrency.lockutils [None req-cddec4c2-e04f-44d8-bc1c-03499e38b4e8 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.359s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:22:07 np0005629333 nova_compute[244014]: 2026-02-25 12:22:07.329 244018 DEBUG oslo_concurrency.lockutils [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Releasing lock "refresh_cache-51d1d661-89db-4958-a2f4-c299ee997cde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:22:07 np0005629333 nova_compute[244014]: 2026-02-25 12:22:07.459 244018 DEBUG oslo_concurrency.lockutils [None req-1fa3b975-be01-4b25-970b-1fc3fb8ecf8e ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "interface-51d1d661-89db-4958-a2f4-c299ee997cde-31f40ed6-505b-4061-b861-ea2720b0ff62" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 8.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:22:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:22:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4122485027' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:22:07 np0005629333 nova_compute[244014]: 2026-02-25 12:22:07.502 244018 DEBUG oslo_concurrency.processutils [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:22:07 np0005629333 nova_compute[244014]: 2026-02-25 12:22:07.509 244018 DEBUG nova.compute.provider_tree [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:22:07 np0005629333 nova_compute[244014]: 2026-02-25 12:22:07.544 244018 DEBUG nova.scheduler.client.report [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:22:07 np0005629333 nova_compute[244014]: 2026-02-25 12:22:07.571 244018 DEBUG oslo_concurrency.lockutils [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:22:07 np0005629333 nova_compute[244014]: 2026-02-25 12:22:07.575 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.413s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:22:07 np0005629333 nova_compute[244014]: 2026-02-25 12:22:07.605 244018 INFO nova.scheduler.client.report [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Deleted allocations for instance 51d1d661-89db-4958-a2f4-c299ee997cde
Feb 25 07:22:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 293 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 6.2 MiB/s wr, 242 op/s
Feb 25 07:22:07 np0005629333 nova_compute[244014]: 2026-02-25 12:22:07.686 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance e9eb76fe-9616-40a4-aa53-0054cc5c3a57 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 07:22:07 np0005629333 nova_compute[244014]: 2026-02-25 12:22:07.687 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 5a7dc142-2b11-4214-87b7-636f27ccacbf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 07:22:07 np0005629333 nova_compute[244014]: 2026-02-25 12:22:07.687 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 07:22:07 np0005629333 nova_compute[244014]: 2026-02-25 12:22:07.687 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 07:22:07 np0005629333 nova_compute[244014]: 2026-02-25 12:22:07.702 244018 DEBUG oslo_concurrency.lockutils [None req-50786254-1161-49de-865c-8f84ad236424 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "51d1d661-89db-4958-a2f4-c299ee997cde" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:22:07 np0005629333 nova_compute[244014]: 2026-02-25 12:22:07.751 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:22:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:07.800 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:22:08 np0005629333 nova_compute[244014]: 2026-02-25 12:22:08.038 244018 DEBUG oslo_concurrency.lockutils [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Acquiring lock "interface-5a7dc142-2b11-4214-87b7-636f27ccacbf-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:22:08 np0005629333 nova_compute[244014]: 2026-02-25 12:22:08.039 244018 DEBUG oslo_concurrency.lockutils [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "interface-5a7dc142-2b11-4214-87b7-636f27ccacbf-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:22:08 np0005629333 nova_compute[244014]: 2026-02-25 12:22:08.039 244018 DEBUG nova.objects.instance [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lazy-loading 'flavor' on Instance uuid 5a7dc142-2b11-4214-87b7-636f27ccacbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:22:08 np0005629333 nova_compute[244014]: 2026-02-25 12:22:08.068 244018 DEBUG nova.objects.instance [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lazy-loading 'pci_requests' on Instance uuid 5a7dc142-2b11-4214-87b7-636f27ccacbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:22:08 np0005629333 nova_compute[244014]: 2026-02-25 12:22:08.083 244018 DEBUG nova.network.neutron [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:22:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:22:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3767490391' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:22:08 np0005629333 nova_compute[244014]: 2026-02-25 12:22:08.326 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:22:08 np0005629333 nova_compute[244014]: 2026-02-25 12:22:08.332 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:22:08 np0005629333 nova_compute[244014]: 2026-02-25 12:22:08.354 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:22:08 np0005629333 nova_compute[244014]: 2026-02-25 12:22:08.406 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 07:22:08 np0005629333 nova_compute[244014]: 2026-02-25 12:22:08.413 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:22:08 np0005629333 nova_compute[244014]: 2026-02-25 12:22:08.423 244018 DEBUG nova.policy [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7d27f8a357c2443a9140598fd9ec73e1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '04734aae68d34fac8fb592fc015632fd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:22:08 np0005629333 nova_compute[244014]: 2026-02-25 12:22:08.436 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:22:08 np0005629333 nova_compute[244014]: 2026-02-25 12:22:08.437 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 07:22:08 np0005629333 nova_compute[244014]: 2026-02-25 12:22:08.503 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:22:09 np0005629333 nova_compute[244014]: 2026-02-25 12:22:09.133 244018 DEBUG nova.network.neutron [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Successfully created port: 448b460d-ecda-4125-9d74-8560b946b896 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:22:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:22:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Feb 25 07:22:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Feb 25 07:22:09 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Feb 25 07:22:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 293 MiB data, 513 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 181 op/s
Feb 25 07:22:09 np0005629333 nova_compute[244014]: 2026-02-25 12:22:09.904 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:22:09 np0005629333 nova_compute[244014]: 2026-02-25 12:22:09.905 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:22:09 np0005629333 nova_compute[244014]: 2026-02-25 12:22:09.905 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 07:22:09 np0005629333 nova_compute[244014]: 2026-02-25 12:22:09.905 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:22:09 np0005629333 nova_compute[244014]: 2026-02-25 12:22:09.906 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 07:22:09 np0005629333 nova_compute[244014]: 2026-02-25 12:22:09.931 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 07:22:09 np0005629333 nova_compute[244014]: 2026-02-25 12:22:09.934 244018 DEBUG nova.network.neutron [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Successfully updated port: 448b460d-ecda-4125-9d74-8560b946b896 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:22:09 np0005629333 nova_compute[244014]: 2026-02-25 12:22:09.969 244018 DEBUG oslo_concurrency.lockutils [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Acquiring lock "refresh_cache-5a7dc142-2b11-4214-87b7-636f27ccacbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:22:09 np0005629333 nova_compute[244014]: 2026-02-25 12:22:09.969 244018 DEBUG oslo_concurrency.lockutils [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Acquired lock "refresh_cache-5a7dc142-2b11-4214-87b7-636f27ccacbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:22:09 np0005629333 nova_compute[244014]: 2026-02-25 12:22:09.969 244018 DEBUG nova.network.neutron [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:22:09 np0005629333 nova_compute[244014]: 2026-02-25 12:22:09.979 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:22:10 np0005629333 nova_compute[244014]: 2026-02-25 12:22:10.165 244018 WARNING nova.network.neutron [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] 6599ab7f-86c0-4af9-b532-eeb7a134fca8 already exists in list: networks containing: ['6599ab7f-86c0-4af9-b532-eeb7a134fca8']. ignoring it
Feb 25 07:22:10 np0005629333 nova_compute[244014]: 2026-02-25 12:22:10.827 244018 DEBUG oslo_concurrency.lockutils [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:22:10 np0005629333 nova_compute[244014]: 2026-02-25 12:22:10.827 244018 DEBUG oslo_concurrency.lockutils [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:22:10 np0005629333 nova_compute[244014]: 2026-02-25 12:22:10.828 244018 DEBUG oslo_concurrency.lockutils [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:22:10 np0005629333 nova_compute[244014]: 2026-02-25 12:22:10.828 244018 DEBUG oslo_concurrency.lockutils [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:22:10 np0005629333 nova_compute[244014]: 2026-02-25 12:22:10.828 244018 DEBUG oslo_concurrency.lockutils [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:22:10 np0005629333 nova_compute[244014]: 2026-02-25 12:22:10.830 244018 INFO nova.compute.manager [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Terminating instance
Feb 25 07:22:10 np0005629333 nova_compute[244014]: 2026-02-25 12:22:10.831 244018 DEBUG nova.compute.manager [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 07:22:10 np0005629333 kernel: tap6e87f383-db (unregistering): left promiscuous mode
Feb 25 07:22:10 np0005629333 NetworkManager[49836]: <info>  [1772022130.8692] device (tap6e87f383-db): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:22:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:10Z|00215|binding|INFO|Releasing lport 6e87f383-dbcb-4dad-b195-bc813617ad12 from this chassis (sb_readonly=0)
Feb 25 07:22:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:10Z|00216|binding|INFO|Setting lport 6e87f383-dbcb-4dad-b195-bc813617ad12 down in Southbound
Feb 25 07:22:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:10Z|00217|binding|INFO|Removing iface tap6e87f383-db ovn-installed in OVS
Feb 25 07:22:10 np0005629333 nova_compute[244014]: 2026-02-25 12:22:10.880 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:22:10 np0005629333 nova_compute[244014]: 2026-02-25 12:22:10.890 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:22:10 np0005629333 nova_compute[244014]: 2026-02-25 12:22:10.897 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:22:10 np0005629333 nova_compute[244014]: 2026-02-25 12:22:10.910 244018 DEBUG nova.compute.manager [req-d59c6561-9a0a-4500-8e61-d94b2e00d440 req-77416e74-71ff-4597-8c34-bd8c778965c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received event network-changed-448b460d-ecda-4125-9d74-8560b946b896 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:22:10 np0005629333 nova_compute[244014]: 2026-02-25 12:22:10.911 244018 DEBUG nova.compute.manager [req-d59c6561-9a0a-4500-8e61-d94b2e00d440 req-77416e74-71ff-4597-8c34-bd8c778965c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Refreshing instance network info cache due to event network-changed-448b460d-ecda-4125-9d74-8560b946b896. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:22:10 np0005629333 nova_compute[244014]: 2026-02-25 12:22:10.911 244018 DEBUG oslo_concurrency.lockutils [req-d59c6561-9a0a-4500-8e61-d94b2e00d440 req-77416e74-71ff-4597-8c34-bd8c778965c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-5a7dc142-2b11-4214-87b7-636f27ccacbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:22:10 np0005629333 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Feb 25 07:22:10 np0005629333 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001c.scope: Consumed 3.789s CPU time.
Feb 25 07:22:10 np0005629333 systemd-machined[210048]: Machine qemu-32-instance-0000001c terminated.
Feb 25 07:22:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:10.935 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:80:09 10.100.0.13'], port_security=['fa:16:3e:9e:80:09 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e9eb76fe-9616-40a4-aa53-0054cc5c3a57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '851e1817495944c1ad7ac421ab226d13', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a8a99d02-a675-470f-9701-584a41fcc90e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9e85d49-f0c0-457f-8aeb-85579a3d5aa5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6e87f383-dbcb-4dad-b195-bc813617ad12) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:22:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:10.937 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6e87f383-dbcb-4dad-b195-bc813617ad12 in datapath 6a1663dd-25aa-4b0e-a1c0-42434d371a21 unbound from our chassis
Feb 25 07:22:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:10.939 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6a1663dd-25aa-4b0e-a1c0-42434d371a21, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 07:22:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:10.940 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ef65d113-ec4b-4397-8f6f-881279998113]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:22:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:10.941 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 namespace which is not needed anymore
Feb 25 07:22:11 np0005629333 nova_compute[244014]: 2026-02-25 12:22:11.079 244018 INFO nova.virt.libvirt.driver [-] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Instance destroyed successfully.
Feb 25 07:22:11 np0005629333 nova_compute[244014]: 2026-02-25 12:22:11.080 244018 DEBUG nova.objects.instance [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lazy-loading 'resources' on Instance uuid e9eb76fe-9616-40a4-aa53-0054cc5c3a57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:22:11 np0005629333 nova_compute[244014]: 2026-02-25 12:22:11.097 244018 DEBUG nova.virt.libvirt.vif [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:21:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1303628417',display_name='tempest-ImagesTestJSON-server-1303628417',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1303628417',id=28,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:21:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='851e1817495944c1ad7ac421ab226d13',ramdisk_id='',reservation_id='r-18jw7ang',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-1353368211',owner_user_name='tempest-ImagesTestJSON-1353368211-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:07Z,user_data=None,user_id='f1cfc3e643014e1c984d71182534fd24',uuid=e9eb76fe-9616-40a4-aa53-0054cc5c3a57,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "6e87f383-dbcb-4dad-b195-bc813617ad12", "address": "fa:16:3e:9e:80:09", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e87f383-db", "ovs_interfaceid": "6e87f383-dbcb-4dad-b195-bc813617ad12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 07:22:11 np0005629333 nova_compute[244014]: 2026-02-25 12:22:11.098 244018 DEBUG nova.network.os_vif_util [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converting VIF {"id": "6e87f383-dbcb-4dad-b195-bc813617ad12", "address": "fa:16:3e:9e:80:09", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e87f383-db", "ovs_interfaceid": "6e87f383-dbcb-4dad-b195-bc813617ad12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:22:11 np0005629333 nova_compute[244014]: 2026-02-25 12:22:11.100 244018 DEBUG nova.network.os_vif_util [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:80:09,bridge_name='br-int',has_traffic_filtering=True,id=6e87f383-dbcb-4dad-b195-bc813617ad12,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e87f383-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:22:11 np0005629333 nova_compute[244014]: 2026-02-25 12:22:11.101 244018 DEBUG os_vif [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:80:09,bridge_name='br-int',has_traffic_filtering=True,id=6e87f383-dbcb-4dad-b195-bc813617ad12,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e87f383-db') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 07:22:11 np0005629333 nova_compute[244014]: 2026-02-25 12:22:11.103 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:22:11 np0005629333 nova_compute[244014]: 2026-02-25 12:22:11.104 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e87f383-db, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:22:11 np0005629333 nova_compute[244014]: 2026-02-25 12:22:11.106 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:22:11 np0005629333 nova_compute[244014]: 2026-02-25 12:22:11.109 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 07:22:11 np0005629333 nova_compute[244014]: 2026-02-25 12:22:11.110 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:22:11 np0005629333 nova_compute[244014]: 2026-02-25 12:22:11.113 244018 INFO os_vif [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:80:09,bridge_name='br-int',has_traffic_filtering=True,id=6e87f383-dbcb-4dad-b195-bc813617ad12,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e87f383-db')
Feb 25 07:22:11 np0005629333 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[271287]: [NOTICE]   (271291) : haproxy version is 2.8.14-c23fe91
Feb 25 07:22:11 np0005629333 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[271287]: [NOTICE]   (271291) : path to executable is /usr/sbin/haproxy
Feb 25 07:22:11 np0005629333 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[271287]: [WARNING]  (271291) : Exiting Master process...
Feb 25 07:22:11 np0005629333 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[271287]: [ALERT]    (271291) : Current worker (271293) exited with code 143 (Terminated)
Feb 25 07:22:11 np0005629333 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[271287]: [WARNING]  (271291) : All workers exited. Exiting... (0)
Feb 25 07:22:11 np0005629333 systemd[1]: libpod-32e2e67f265024bd8ca460335f43bf713d9bf8daeebf886f60f6318d469a72a1.scope: Deactivated successfully.
Feb 25 07:22:11 np0005629333 podman[272217]: 2026-02-25 12:22:11.128055125 +0000 UTC m=+0.068885031 container died 32e2e67f265024bd8ca460335f43bf713d9bf8daeebf886f60f6318d469a72a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 07:22:11 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-32e2e67f265024bd8ca460335f43bf713d9bf8daeebf886f60f6318d469a72a1-userdata-shm.mount: Deactivated successfully.
Feb 25 07:22:11 np0005629333 systemd[1]: var-lib-containers-storage-overlay-dda58046a993fafc00305b047e988f5ee511b02fbba1a417a37a87ca1579e71e-merged.mount: Deactivated successfully.
Feb 25 07:22:11 np0005629333 podman[272217]: 2026-02-25 12:22:11.20167106 +0000 UTC m=+0.142500926 container cleanup 32e2e67f265024bd8ca460335f43bf713d9bf8daeebf886f60f6318d469a72a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:22:11 np0005629333 systemd[1]: libpod-conmon-32e2e67f265024bd8ca460335f43bf713d9bf8daeebf886f60f6318d469a72a1.scope: Deactivated successfully.
Feb 25 07:22:11 np0005629333 podman[272273]: 2026-02-25 12:22:11.27867217 +0000 UTC m=+0.050231460 container remove 32e2e67f265024bd8ca460335f43bf713d9bf8daeebf886f60f6318d469a72a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 07:22:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:11.289 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a26800a9-df0b-4b1b-be30-f6525fcdde0d]: (4, ('Wed Feb 25 12:22:11 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 (32e2e67f265024bd8ca460335f43bf713d9bf8daeebf886f60f6318d469a72a1)\n32e2e67f265024bd8ca460335f43bf713d9bf8daeebf886f60f6318d469a72a1\nWed Feb 25 12:22:11 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 (32e2e67f265024bd8ca460335f43bf713d9bf8daeebf886f60f6318d469a72a1)\n32e2e67f265024bd8ca460335f43bf713d9bf8daeebf886f60f6318d469a72a1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:11.292 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e4c3ebbf-95ca-4c9e-9b62-4402e4a187dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:11.293 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a1663dd-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:11 np0005629333 nova_compute[244014]: 2026-02-25 12:22:11.296 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:11 np0005629333 kernel: tap6a1663dd-20: left promiscuous mode
Feb 25 07:22:11 np0005629333 nova_compute[244014]: 2026-02-25 12:22:11.304 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:11.307 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[05940354-3f3f-4456-bb82-048be2684fa3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:11.323 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a0620579-bc59-4f1a-a736-1d424fa14c45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:11.324 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[317e12bf-56ab-4cb8-a383-292592259d03]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:11.354 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[08e84599-54fe-47ec-ba14-1644916ddd0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407961, 'reachable_time': 15149, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272289, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:11 np0005629333 systemd[1]: run-netns-ovnmeta\x2d6a1663dd\x2d25aa\x2d4b0e\x2da1c0\x2d42434d371a21.mount: Deactivated successfully.
Feb 25 07:22:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:11.365 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:22:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:11.365 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[a5273ac6-4a8b-4c28-9051-03d0f4f35d59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:11 np0005629333 nova_compute[244014]: 2026-02-25 12:22:11.481 244018 INFO nova.virt.libvirt.driver [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Deleting instance files /var/lib/nova/instances/e9eb76fe-9616-40a4-aa53-0054cc5c3a57_del#033[00m
Feb 25 07:22:11 np0005629333 nova_compute[244014]: 2026-02-25 12:22:11.482 244018 INFO nova.virt.libvirt.driver [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Deletion of /var/lib/nova/instances/e9eb76fe-9616-40a4-aa53-0054cc5c3a57_del complete#033[00m
Feb 25 07:22:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 277 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.7 MiB/s wr, 207 op/s
Feb 25 07:22:11 np0005629333 nova_compute[244014]: 2026-02-25 12:22:11.685 244018 INFO nova.compute.manager [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Took 0.85 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:22:11 np0005629333 nova_compute[244014]: 2026-02-25 12:22:11.686 244018 DEBUG oslo.service.loopingcall [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:22:11 np0005629333 nova_compute[244014]: 2026-02-25 12:22:11.687 244018 DEBUG nova.compute.manager [-] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:22:11 np0005629333 nova_compute[244014]: 2026-02-25 12:22:11.687 244018 DEBUG nova.network.neutron [-] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.519 244018 DEBUG nova.network.neutron [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Updating instance_info_cache with network_info: [{"id": "0fef626d-412c-4101-95eb-eadb3354247f", "address": "fa:16:3e:67:08:21", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fef626d-41", "ovs_interfaceid": "0fef626d-412c-4101-95eb-eadb3354247f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "448b460d-ecda-4125-9d74-8560b946b896", "address": "fa:16:3e:17:fa:d0", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap448b460d-ec", "ovs_interfaceid": "448b460d-ecda-4125-9d74-8560b946b896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.587 244018 DEBUG oslo_concurrency.lockutils [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Releasing lock "refresh_cache-5a7dc142-2b11-4214-87b7-636f27ccacbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.588 244018 DEBUG oslo_concurrency.lockutils [req-d59c6561-9a0a-4500-8e61-d94b2e00d440 req-77416e74-71ff-4597-8c34-bd8c778965c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-5a7dc142-2b11-4214-87b7-636f27ccacbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.589 244018 DEBUG nova.network.neutron [req-d59c6561-9a0a-4500-8e61-d94b2e00d440 req-77416e74-71ff-4597-8c34-bd8c778965c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Refreshing network info cache for port 448b460d-ecda-4125-9d74-8560b946b896 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:22:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 205 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 2.4 MiB/s wr, 277 op/s
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.709 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.714 244018 DEBUG nova.virt.libvirt.vif [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:21:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-389676934',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-389676934',id=29,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:22:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='04734aae68d34fac8fb592fc015632fd',ramdisk_id='',reservation_id='r-x9nbtz7e',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-1981016367',owner_user_name='tempest-AttachInterfacesV270Test-1981016367-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:07Z,user_data=None,user_id='7d27f8a357c2443a9140598fd9ec73e1',uuid=5a7dc142-2b11-4214-87b7-636f27ccacbf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "448b460d-ecda-4125-9d74-8560b946b896", "address": "fa:16:3e:17:fa:d0", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap448b460d-ec", "ovs_interfaceid": "448b460d-ecda-4125-9d74-8560b946b896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.714 244018 DEBUG nova.network.os_vif_util [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Converting VIF {"id": "448b460d-ecda-4125-9d74-8560b946b896", "address": "fa:16:3e:17:fa:d0", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap448b460d-ec", "ovs_interfaceid": "448b460d-ecda-4125-9d74-8560b946b896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.715 244018 DEBUG nova.network.os_vif_util [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:fa:d0,bridge_name='br-int',has_traffic_filtering=True,id=448b460d-ecda-4125-9d74-8560b946b896,network=Network(6599ab7f-86c0-4af9-b532-eeb7a134fca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap448b460d-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.716 244018 DEBUG os_vif [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:fa:d0,bridge_name='br-int',has_traffic_filtering=True,id=448b460d-ecda-4125-9d74-8560b946b896,network=Network(6599ab7f-86c0-4af9-b532-eeb7a134fca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap448b460d-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.716 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.717 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.717 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.721 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.721 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap448b460d-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.722 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap448b460d-ec, col_values=(('external_ids', {'iface-id': '448b460d-ecda-4125-9d74-8560b946b896', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:17:fa:d0', 'vm-uuid': '5a7dc142-2b11-4214-87b7-636f27ccacbf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.723 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:13 np0005629333 NetworkManager[49836]: <info>  [1772022133.7254] manager: (tap448b460d-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.727 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.729 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.730 244018 INFO os_vif [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:fa:d0,bridge_name='br-int',has_traffic_filtering=True,id=448b460d-ecda-4125-9d74-8560b946b896,network=Network(6599ab7f-86c0-4af9-b532-eeb7a134fca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap448b460d-ec')#033[00m
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.731 244018 DEBUG nova.virt.libvirt.vif [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:21:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-389676934',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-389676934',id=29,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:22:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='04734aae68d34fac8fb592fc015632fd',ramdisk_id='',reservation_id='r-x9nbtz7e',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-1981016367',owner_user_name='tempest-AttachInterfacesV270Test-1981016367-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:07Z,user_data=None,user_id='7d27f8a357c2443a9140598fd9ec73e1',uuid=5a7dc142-2b11-4214-87b7-636f27ccacbf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "448b460d-ecda-4125-9d74-8560b946b896", "address": "fa:16:3e:17:fa:d0", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap448b460d-ec", "ovs_interfaceid": "448b460d-ecda-4125-9d74-8560b946b896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.732 244018 DEBUG nova.network.os_vif_util [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Converting VIF {"id": "448b460d-ecda-4125-9d74-8560b946b896", "address": "fa:16:3e:17:fa:d0", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap448b460d-ec", "ovs_interfaceid": "448b460d-ecda-4125-9d74-8560b946b896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.733 244018 DEBUG nova.network.os_vif_util [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:fa:d0,bridge_name='br-int',has_traffic_filtering=True,id=448b460d-ecda-4125-9d74-8560b946b896,network=Network(6599ab7f-86c0-4af9-b532-eeb7a134fca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap448b460d-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.735 244018 DEBUG nova.virt.libvirt.guest [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] attach device xml: <interface type="ethernet">
Feb 25 07:22:13 np0005629333 nova_compute[244014]:  <mac address="fa:16:3e:17:fa:d0"/>
Feb 25 07:22:13 np0005629333 nova_compute[244014]:  <model type="virtio"/>
Feb 25 07:22:13 np0005629333 nova_compute[244014]:  <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:22:13 np0005629333 nova_compute[244014]:  <mtu size="1442"/>
Feb 25 07:22:13 np0005629333 nova_compute[244014]:  <target dev="tap448b460d-ec"/>
Feb 25 07:22:13 np0005629333 nova_compute[244014]: </interface>
Feb 25 07:22:13 np0005629333 nova_compute[244014]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Feb 25 07:22:13 np0005629333 kernel: tap448b460d-ec: entered promiscuous mode
Feb 25 07:22:13 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:13Z|00218|binding|INFO|Claiming lport 448b460d-ecda-4125-9d74-8560b946b896 for this chassis.
Feb 25 07:22:13 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:13Z|00219|binding|INFO|448b460d-ecda-4125-9d74-8560b946b896: Claiming fa:16:3e:17:fa:d0 10.100.0.8
Feb 25 07:22:13 np0005629333 NetworkManager[49836]: <info>  [1772022133.7548] manager: (tap448b460d-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/105)
Feb 25 07:22:13 np0005629333 systemd-udevd[272195]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.756 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:13 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:13Z|00220|binding|INFO|Setting lport 448b460d-ecda-4125-9d74-8560b946b896 ovn-installed in OVS
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.765 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:13 np0005629333 NetworkManager[49836]: <info>  [1772022133.7700] device (tap448b460d-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:22:13 np0005629333 NetworkManager[49836]: <info>  [1772022133.7712] device (tap448b460d-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.772 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:13.782 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:fa:d0 10.100.0.8'], port_security=['fa:16:3e:17:fa:d0 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '5a7dc142-2b11-4214-87b7-636f27ccacbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '04734aae68d34fac8fb592fc015632fd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4de9378b-f5fd-4838-89b9-d0ae11ab0ea8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fdb8d004-ad0d-4775-bacb-c6f8c36164f5, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=448b460d-ecda-4125-9d74-8560b946b896) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:22:13 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:13Z|00221|binding|INFO|Setting lport 448b460d-ecda-4125-9d74-8560b946b896 up in Southbound
Feb 25 07:22:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:13.783 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 448b460d-ecda-4125-9d74-8560b946b896 in datapath 6599ab7f-86c0-4af9-b532-eeb7a134fca8 bound to our chassis#033[00m
Feb 25 07:22:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:13.785 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6599ab7f-86c0-4af9-b532-eeb7a134fca8#033[00m
Feb 25 07:22:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:13.796 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[45905596-a4aa-411a-8fd8-ea1fe531b074]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:13.823 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b4aa3c95-957e-4bf6-96f1-972d1f065244]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:13.825 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[654e9ace-deb1-4fdc-ae12-f4df49d412f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:13.852 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e3a28312-e551-44e2-8aed-9320c1ae6b1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:13.864 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a3c8b935-5302-4c59-8ecb-2b09a62e852b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6599ab7f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:4a:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409570, 'reachable_time': 24913, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272302, 'error': None, 'target': 'ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.882 244018 DEBUG nova.virt.libvirt.driver [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.882 244018 DEBUG nova.virt.libvirt.driver [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.882 244018 DEBUG nova.virt.libvirt.driver [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] No VIF found with MAC fa:16:3e:67:08:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.882 244018 DEBUG nova.virt.libvirt.driver [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] No VIF found with MAC fa:16:3e:17:fa:d0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:22:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:13.883 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[210d8713-c223-4617-ac09-3d485a4b2912]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6599ab7f-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 409583, 'tstamp': 409583}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272303, 'error': None, 'target': 'ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6599ab7f-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 409586, 'tstamp': 409586}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272303, 'error': None, 'target': 'ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:13.885 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6599ab7f-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:13.890 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6599ab7f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:13.891 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.892 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:13.892 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6599ab7f-80, col_values=(('external_ids', {'iface-id': 'd8168125-f59d-4dc2-b239-e9ce5117e64b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:13.894 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.983 244018 DEBUG nova.virt.libvirt.guest [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:22:13 np0005629333 nova_compute[244014]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:22:13 np0005629333 nova_compute[244014]:  <nova:name>tempest-AttachInterfacesV270Test-server-389676934</nova:name>
Feb 25 07:22:13 np0005629333 nova_compute[244014]:  <nova:creationTime>2026-02-25 12:22:13</nova:creationTime>
Feb 25 07:22:13 np0005629333 nova_compute[244014]:  <nova:flavor name="m1.nano">
Feb 25 07:22:13 np0005629333 nova_compute[244014]:    <nova:memory>128</nova:memory>
Feb 25 07:22:13 np0005629333 nova_compute[244014]:    <nova:disk>1</nova:disk>
Feb 25 07:22:13 np0005629333 nova_compute[244014]:    <nova:swap>0</nova:swap>
Feb 25 07:22:13 np0005629333 nova_compute[244014]:    <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:22:13 np0005629333 nova_compute[244014]:    <nova:vcpus>1</nova:vcpus>
Feb 25 07:22:13 np0005629333 nova_compute[244014]:  </nova:flavor>
Feb 25 07:22:13 np0005629333 nova_compute[244014]:  <nova:owner>
Feb 25 07:22:13 np0005629333 nova_compute[244014]:    <nova:user uuid="7d27f8a357c2443a9140598fd9ec73e1">tempest-AttachInterfacesV270Test-1981016367-project-member</nova:user>
Feb 25 07:22:13 np0005629333 nova_compute[244014]:    <nova:project uuid="04734aae68d34fac8fb592fc015632fd">tempest-AttachInterfacesV270Test-1981016367</nova:project>
Feb 25 07:22:13 np0005629333 nova_compute[244014]:  </nova:owner>
Feb 25 07:22:13 np0005629333 nova_compute[244014]:  <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:22:13 np0005629333 nova_compute[244014]:  <nova:ports>
Feb 25 07:22:13 np0005629333 nova_compute[244014]:    <nova:port uuid="0fef626d-412c-4101-95eb-eadb3354247f">
Feb 25 07:22:13 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 07:22:13 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:22:13 np0005629333 nova_compute[244014]:    <nova:port uuid="448b460d-ecda-4125-9d74-8560b946b896">
Feb 25 07:22:13 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 25 07:22:13 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:22:13 np0005629333 nova_compute[244014]:  </nova:ports>
Feb 25 07:22:13 np0005629333 nova_compute[244014]: </nova:instance>
Feb 25 07:22:13 np0005629333 nova_compute[244014]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Feb 25 07:22:13 np0005629333 nova_compute[244014]: 2026-02-25 12:22:13.985 244018 DEBUG nova.network.neutron [-] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:22:14 np0005629333 nova_compute[244014]: 2026-02-25 12:22:14.032 244018 DEBUG oslo_concurrency.lockutils [None req-f300bded-482e-471e-95c4-b06087a6885d 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "interface-5a7dc142-2b11-4214-87b7-636f27ccacbf-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:14 np0005629333 nova_compute[244014]: 2026-02-25 12:22:14.073 244018 INFO nova.compute.manager [-] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Took 2.39 seconds to deallocate network for instance.#033[00m
Feb 25 07:22:14 np0005629333 nova_compute[244014]: 2026-02-25 12:22:14.214 244018 DEBUG oslo_concurrency.lockutils [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:14 np0005629333 nova_compute[244014]: 2026-02-25 12:22:14.214 244018 DEBUG oslo_concurrency.lockutils [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:22:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Feb 25 07:22:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Feb 25 07:22:14 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Feb 25 07:22:14 np0005629333 nova_compute[244014]: 2026-02-25 12:22:14.291 244018 DEBUG oslo_concurrency.processutils [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:22:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:22:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2230240323' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:22:14 np0005629333 nova_compute[244014]: 2026-02-25 12:22:14.836 244018 DEBUG oslo_concurrency.processutils [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:22:14 np0005629333 nova_compute[244014]: 2026-02-25 12:22:14.842 244018 DEBUG nova.compute.provider_tree [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:22:14 np0005629333 nova_compute[244014]: 2026-02-25 12:22:14.862 244018 DEBUG nova.scheduler.client.report [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:22:14 np0005629333 nova_compute[244014]: 2026-02-25 12:22:14.892 244018 DEBUG oslo_concurrency.lockutils [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:14 np0005629333 nova_compute[244014]: 2026-02-25 12:22:14.928 244018 INFO nova.scheduler.client.report [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Deleted allocations for instance e9eb76fe-9616-40a4-aa53-0054cc5c3a57#033[00m
Feb 25 07:22:15 np0005629333 nova_compute[244014]: 2026-02-25 12:22:15.006 244018 DEBUG oslo_concurrency.lockutils [None req-fcdc3b46-4e31-45d9-8544-085fcbfcc0e4 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:15 np0005629333 nova_compute[244014]: 2026-02-25 12:22:15.194 244018 DEBUG nova.compute.manager [req-08c8a8f0-0466-44bf-af77-8a1119b6557c req-9425031a-3b1b-4bad-adac-a5befb99ad1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Received event network-vif-unplugged-6e87f383-dbcb-4dad-b195-bc813617ad12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:22:15 np0005629333 nova_compute[244014]: 2026-02-25 12:22:15.194 244018 DEBUG oslo_concurrency.lockutils [req-08c8a8f0-0466-44bf-af77-8a1119b6557c req-9425031a-3b1b-4bad-adac-a5befb99ad1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:15 np0005629333 nova_compute[244014]: 2026-02-25 12:22:15.195 244018 DEBUG oslo_concurrency.lockutils [req-08c8a8f0-0466-44bf-af77-8a1119b6557c req-9425031a-3b1b-4bad-adac-a5befb99ad1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:15 np0005629333 nova_compute[244014]: 2026-02-25 12:22:15.195 244018 DEBUG oslo_concurrency.lockutils [req-08c8a8f0-0466-44bf-af77-8a1119b6557c req-9425031a-3b1b-4bad-adac-a5befb99ad1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:15 np0005629333 nova_compute[244014]: 2026-02-25 12:22:15.196 244018 DEBUG nova.compute.manager [req-08c8a8f0-0466-44bf-af77-8a1119b6557c req-9425031a-3b1b-4bad-adac-a5befb99ad1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] No waiting events found dispatching network-vif-unplugged-6e87f383-dbcb-4dad-b195-bc813617ad12 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:22:15 np0005629333 nova_compute[244014]: 2026-02-25 12:22:15.196 244018 WARNING nova.compute.manager [req-08c8a8f0-0466-44bf-af77-8a1119b6557c req-9425031a-3b1b-4bad-adac-a5befb99ad1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Received unexpected event network-vif-unplugged-6e87f383-dbcb-4dad-b195-bc813617ad12 for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:22:15 np0005629333 nova_compute[244014]: 2026-02-25 12:22:15.196 244018 DEBUG nova.compute.manager [req-08c8a8f0-0466-44bf-af77-8a1119b6557c req-9425031a-3b1b-4bad-adac-a5befb99ad1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Received event network-vif-plugged-6e87f383-dbcb-4dad-b195-bc813617ad12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:22:15 np0005629333 nova_compute[244014]: 2026-02-25 12:22:15.196 244018 DEBUG oslo_concurrency.lockutils [req-08c8a8f0-0466-44bf-af77-8a1119b6557c req-9425031a-3b1b-4bad-adac-a5befb99ad1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:15 np0005629333 nova_compute[244014]: 2026-02-25 12:22:15.197 244018 DEBUG oslo_concurrency.lockutils [req-08c8a8f0-0466-44bf-af77-8a1119b6557c req-9425031a-3b1b-4bad-adac-a5befb99ad1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:15 np0005629333 nova_compute[244014]: 2026-02-25 12:22:15.197 244018 DEBUG oslo_concurrency.lockutils [req-08c8a8f0-0466-44bf-af77-8a1119b6557c req-9425031a-3b1b-4bad-adac-a5befb99ad1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e9eb76fe-9616-40a4-aa53-0054cc5c3a57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:15 np0005629333 nova_compute[244014]: 2026-02-25 12:22:15.197 244018 DEBUG nova.compute.manager [req-08c8a8f0-0466-44bf-af77-8a1119b6557c req-9425031a-3b1b-4bad-adac-a5befb99ad1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] No waiting events found dispatching network-vif-plugged-6e87f383-dbcb-4dad-b195-bc813617ad12 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:22:15 np0005629333 nova_compute[244014]: 2026-02-25 12:22:15.198 244018 WARNING nova.compute.manager [req-08c8a8f0-0466-44bf-af77-8a1119b6557c req-9425031a-3b1b-4bad-adac-a5befb99ad1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Received unexpected event network-vif-plugged-6e87f383-dbcb-4dad-b195-bc813617ad12 for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:22:15 np0005629333 nova_compute[244014]: 2026-02-25 12:22:15.290 244018 DEBUG nova.network.neutron [req-d59c6561-9a0a-4500-8e61-d94b2e00d440 req-77416e74-71ff-4597-8c34-bd8c778965c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Updated VIF entry in instance network info cache for port 448b460d-ecda-4125-9d74-8560b946b896. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:22:15 np0005629333 nova_compute[244014]: 2026-02-25 12:22:15.293 244018 DEBUG nova.network.neutron [req-d59c6561-9a0a-4500-8e61-d94b2e00d440 req-77416e74-71ff-4597-8c34-bd8c778965c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Updating instance_info_cache with network_info: [{"id": "0fef626d-412c-4101-95eb-eadb3354247f", "address": "fa:16:3e:67:08:21", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fef626d-41", "ovs_interfaceid": "0fef626d-412c-4101-95eb-eadb3354247f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "448b460d-ecda-4125-9d74-8560b946b896", "address": "fa:16:3e:17:fa:d0", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap448b460d-ec", "ovs_interfaceid": "448b460d-ecda-4125-9d74-8560b946b896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:22:15 np0005629333 nova_compute[244014]: 2026-02-25 12:22:15.332 244018 DEBUG oslo_concurrency.lockutils [req-d59c6561-9a0a-4500-8e61-d94b2e00d440 req-77416e74-71ff-4597-8c34-bd8c778965c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-5a7dc142-2b11-4214-87b7-636f27ccacbf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:22:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 205 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.2 KiB/s wr, 172 op/s
Feb 25 07:22:15 np0005629333 nova_compute[244014]: 2026-02-25 12:22:15.882 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "ffd5cedf-474c-4977-807e-22a276ceb002" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:15 np0005629333 nova_compute[244014]: 2026-02-25 12:22:15.883 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "ffd5cedf-474c-4977-807e-22a276ceb002" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:15 np0005629333 nova_compute[244014]: 2026-02-25 12:22:15.926 244018 DEBUG nova.compute.manager [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:22:16 np0005629333 nova_compute[244014]: 2026-02-25 12:22:16.097 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:16 np0005629333 nova_compute[244014]: 2026-02-25 12:22:16.097 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:16 np0005629333 nova_compute[244014]: 2026-02-25 12:22:16.105 244018 DEBUG nova.virt.hardware [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:22:16 np0005629333 nova_compute[244014]: 2026-02-25 12:22:16.106 244018 INFO nova.compute.claims [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:22:16 np0005629333 nova_compute[244014]: 2026-02-25 12:22:16.268 244018 DEBUG oslo_concurrency.processutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:22:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:22:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2112032542' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:22:16 np0005629333 nova_compute[244014]: 2026-02-25 12:22:16.795 244018 DEBUG oslo_concurrency.processutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:22:16 np0005629333 nova_compute[244014]: 2026-02-25 12:22:16.804 244018 DEBUG nova.compute.provider_tree [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:22:16 np0005629333 nova_compute[244014]: 2026-02-25 12:22:16.825 244018 DEBUG nova.scheduler.client.report [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:22:16 np0005629333 nova_compute[244014]: 2026-02-25 12:22:16.856 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:16 np0005629333 nova_compute[244014]: 2026-02-25 12:22:16.857 244018 DEBUG nova.compute.manager [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:22:16 np0005629333 nova_compute[244014]: 2026-02-25 12:22:16.914 244018 DEBUG nova.compute.manager [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:22:16 np0005629333 nova_compute[244014]: 2026-02-25 12:22:16.915 244018 DEBUG nova.network.neutron [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:22:16 np0005629333 nova_compute[244014]: 2026-02-25 12:22:16.965 244018 INFO nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:22:16 np0005629333 nova_compute[244014]: 2026-02-25 12:22:16.991 244018 DEBUG nova.compute.manager [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:22:17 np0005629333 nova_compute[244014]: 2026-02-25 12:22:17.142 244018 DEBUG nova.compute.manager [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:22:17 np0005629333 nova_compute[244014]: 2026-02-25 12:22:17.144 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:22:17 np0005629333 nova_compute[244014]: 2026-02-25 12:22:17.144 244018 INFO nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Creating image(s)#033[00m
Feb 25 07:22:17 np0005629333 nova_compute[244014]: 2026-02-25 12:22:17.176 244018 DEBUG nova.storage.rbd_utils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image ffd5cedf-474c-4977-807e-22a276ceb002_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:22:17 np0005629333 nova_compute[244014]: 2026-02-25 12:22:17.213 244018 DEBUG nova.storage.rbd_utils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image ffd5cedf-474c-4977-807e-22a276ceb002_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:22:17 np0005629333 nova_compute[244014]: 2026-02-25 12:22:17.244 244018 DEBUG nova.storage.rbd_utils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image ffd5cedf-474c-4977-807e-22a276ceb002_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:22:17 np0005629333 nova_compute[244014]: 2026-02-25 12:22:17.249 244018 DEBUG oslo_concurrency.processutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:22:17 np0005629333 nova_compute[244014]: 2026-02-25 12:22:17.336 244018 DEBUG oslo_concurrency.processutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:22:17 np0005629333 nova_compute[244014]: 2026-02-25 12:22:17.337 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:17 np0005629333 nova_compute[244014]: 2026-02-25 12:22:17.337 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:17 np0005629333 nova_compute[244014]: 2026-02-25 12:22:17.337 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:17 np0005629333 nova_compute[244014]: 2026-02-25 12:22:17.354 244018 DEBUG nova.storage.rbd_utils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image ffd5cedf-474c-4977-807e-22a276ceb002_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:22:17 np0005629333 nova_compute[244014]: 2026-02-25 12:22:17.357 244018 DEBUG oslo_concurrency.processutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 ffd5cedf-474c-4977-807e-22a276ceb002_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:22:17 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:17Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:67:08:21 10.100.0.12
Feb 25 07:22:17 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:17Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:67:08:21 10.100.0.12
Feb 25 07:22:17 np0005629333 nova_compute[244014]: 2026-02-25 12:22:17.521 244018 DEBUG nova.policy [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1cfc3e643014e1c984d71182534fd24', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '851e1817495944c1ad7ac421ab226d13', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:22:17 np0005629333 nova_compute[244014]: 2026-02-25 12:22:17.543 244018 DEBUG oslo_concurrency.processutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 ffd5cedf-474c-4977-807e-22a276ceb002_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:22:17 np0005629333 nova_compute[244014]: 2026-02-25 12:22:17.617 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022122.6103446, 51d1d661-89db-4958-a2f4-c299ee997cde => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:22:17 np0005629333 nova_compute[244014]: 2026-02-25 12:22:17.618 244018 INFO nova.compute.manager [-] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:22:17 np0005629333 nova_compute[244014]: 2026-02-25 12:22:17.626 244018 DEBUG nova.storage.rbd_utils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] resizing rbd image ffd5cedf-474c-4977-807e-22a276ceb002_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 25 07:22:17 np0005629333 nova_compute[244014]: 2026-02-25 12:22:17.668 244018 DEBUG nova.compute.manager [None req-57fda520-de7c-4c73-b2f7-94b535be15f1 - - - - - -] [instance: 51d1d661-89db-4958-a2f4-c299ee997cde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:22:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 205 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.1 MiB/s wr, 202 op/s
Feb 25 07:22:17 np0005629333 nova_compute[244014]: 2026-02-25 12:22:17.734 244018 DEBUG nova.objects.instance [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lazy-loading 'migration_context' on Instance uuid ffd5cedf-474c-4977-807e-22a276ceb002 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:22:17 np0005629333 nova_compute[244014]: 2026-02-25 12:22:17.768 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:22:17 np0005629333 nova_compute[244014]: 2026-02-25 12:22:17.768 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Ensure instance console log exists: /var/lib/nova/instances/ffd5cedf-474c-4977-807e-22a276ceb002/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:22:17 np0005629333 nova_compute[244014]: 2026-02-25 12:22:17.769 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:17 np0005629333 nova_compute[244014]: 2026-02-25 12:22:17.769 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:17 np0005629333 nova_compute[244014]: 2026-02-25 12:22:17.769 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:17 np0005629333 nova_compute[244014]: 2026-02-25 12:22:17.824 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:17 np0005629333 nova_compute[244014]: 2026-02-25 12:22:17.825 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:17 np0005629333 nova_compute[244014]: 2026-02-25 12:22:17.966 244018 DEBUG nova.compute.manager [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.109 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.110 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.119 244018 DEBUG nova.virt.hardware [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.119 244018 INFO nova.compute.claims [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.198 244018 DEBUG nova.compute.manager [req-c72280f0-2726-4067-a409-c16c388b5016 req-6982479b-af54-4218-824f-52ce60940349 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Received event network-vif-deleted-6e87f383-dbcb-4dad-b195-bc813617ad12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.199 244018 DEBUG nova.compute.manager [req-c72280f0-2726-4067-a409-c16c388b5016 req-6982479b-af54-4218-824f-52ce60940349 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received event network-vif-plugged-448b460d-ecda-4125-9d74-8560b946b896 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.200 244018 DEBUG oslo_concurrency.lockutils [req-c72280f0-2726-4067-a409-c16c388b5016 req-6982479b-af54-4218-824f-52ce60940349 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.200 244018 DEBUG oslo_concurrency.lockutils [req-c72280f0-2726-4067-a409-c16c388b5016 req-6982479b-af54-4218-824f-52ce60940349 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.201 244018 DEBUG oslo_concurrency.lockutils [req-c72280f0-2726-4067-a409-c16c388b5016 req-6982479b-af54-4218-824f-52ce60940349 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.202 244018 DEBUG nova.compute.manager [req-c72280f0-2726-4067-a409-c16c388b5016 req-6982479b-af54-4218-824f-52ce60940349 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] No waiting events found dispatching network-vif-plugged-448b460d-ecda-4125-9d74-8560b946b896 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.202 244018 WARNING nova.compute.manager [req-c72280f0-2726-4067-a409-c16c388b5016 req-6982479b-af54-4218-824f-52ce60940349 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received unexpected event network-vif-plugged-448b460d-ecda-4125-9d74-8560b946b896 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.203 244018 DEBUG nova.compute.manager [req-c72280f0-2726-4067-a409-c16c388b5016 req-6982479b-af54-4218-824f-52ce60940349 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received event network-vif-plugged-448b460d-ecda-4125-9d74-8560b946b896 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.204 244018 DEBUG oslo_concurrency.lockutils [req-c72280f0-2726-4067-a409-c16c388b5016 req-6982479b-af54-4218-824f-52ce60940349 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.205 244018 DEBUG oslo_concurrency.lockutils [req-c72280f0-2726-4067-a409-c16c388b5016 req-6982479b-af54-4218-824f-52ce60940349 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.205 244018 DEBUG oslo_concurrency.lockutils [req-c72280f0-2726-4067-a409-c16c388b5016 req-6982479b-af54-4218-824f-52ce60940349 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.206 244018 DEBUG nova.compute.manager [req-c72280f0-2726-4067-a409-c16c388b5016 req-6982479b-af54-4218-824f-52ce60940349 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] No waiting events found dispatching network-vif-plugged-448b460d-ecda-4125-9d74-8560b946b896 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.206 244018 WARNING nova.compute.manager [req-c72280f0-2726-4067-a409-c16c388b5016 req-6982479b-af54-4218-824f-52ce60940349 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received unexpected event network-vif-plugged-448b460d-ecda-4125-9d74-8560b946b896 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.258 244018 DEBUG nova.network.neutron [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Successfully created port: fd320060-eaf8-4fd7-9325-e3793617bc7b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.304 244018 DEBUG oslo_concurrency.lockutils [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Acquiring lock "5a7dc142-2b11-4214-87b7-636f27ccacbf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.305 244018 DEBUG oslo_concurrency.lockutils [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.306 244018 DEBUG oslo_concurrency.lockutils [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Acquiring lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.306 244018 DEBUG oslo_concurrency.lockutils [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.307 244018 DEBUG oslo_concurrency.lockutils [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.309 244018 INFO nova.compute.manager [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Terminating instance#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.311 244018 DEBUG nova.compute.manager [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:22:18 np0005629333 kernel: tap0fef626d-41 (unregistering): left promiscuous mode
Feb 25 07:22:18 np0005629333 NetworkManager[49836]: <info>  [1772022138.3619] device (tap0fef626d-41): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:22:18 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:18Z|00222|binding|INFO|Releasing lport 0fef626d-412c-4101-95eb-eadb3354247f from this chassis (sb_readonly=0)
Feb 25 07:22:18 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:18Z|00223|binding|INFO|Setting lport 0fef626d-412c-4101-95eb-eadb3354247f down in Southbound
Feb 25 07:22:18 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:18Z|00224|binding|INFO|Removing iface tap0fef626d-41 ovn-installed in OVS
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.368 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.376 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:08:21 10.100.0.12'], port_security=['fa:16:3e:67:08:21 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5a7dc142-2b11-4214-87b7-636f27ccacbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '04734aae68d34fac8fb592fc015632fd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4de9378b-f5fd-4838-89b9-d0ae11ab0ea8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fdb8d004-ad0d-4775-bacb-c6f8c36164f5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=0fef626d-412c-4101-95eb-eadb3354247f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.375 244018 DEBUG oslo_concurrency.processutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:22:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.377 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 0fef626d-412c-4101-95eb-eadb3354247f in datapath 6599ab7f-86c0-4af9-b532-eeb7a134fca8 unbound from our chassis#033[00m
Feb 25 07:22:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.379 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6599ab7f-86c0-4af9-b532-eeb7a134fca8#033[00m
Feb 25 07:22:18 np0005629333 kernel: tap448b460d-ec (unregistering): left promiscuous mode
Feb 25 07:22:18 np0005629333 NetworkManager[49836]: <info>  [1772022138.3831] device (tap448b460d-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:22:18 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:18Z|00225|binding|INFO|Releasing lport 448b460d-ecda-4125-9d74-8560b946b896 from this chassis (sb_readonly=0)
Feb 25 07:22:18 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:18Z|00226|binding|INFO|Setting lport 448b460d-ecda-4125-9d74-8560b946b896 down in Southbound
Feb 25 07:22:18 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:18Z|00227|binding|INFO|Removing iface tap448b460d-ec ovn-installed in OVS
Feb 25 07:22:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.398 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[acc1a7de-49ff-4ec4-b5cf-98dfcf59bf2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.407 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.413 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:fa:d0 10.100.0.8'], port_security=['fa:16:3e:17:fa:d0 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '5a7dc142-2b11-4214-87b7-636f27ccacbf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '04734aae68d34fac8fb592fc015632fd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4de9378b-f5fd-4838-89b9-d0ae11ab0ea8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fdb8d004-ad0d-4775-bacb-c6f8c36164f5, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=448b460d-ecda-4125-9d74-8560b946b896) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:22:18 np0005629333 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Feb 25 07:22:18 np0005629333 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001d.scope: Consumed 11.303s CPU time.
Feb 25 07:22:18 np0005629333 systemd-machined[210048]: Machine qemu-33-instance-0000001d terminated.
Feb 25 07:22:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.444 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[aa4662b7-7c28-49d1-8a53-745f65ead3d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.448 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9d2fd6e6-d127-4ece-b8b3-423471eb310b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.484 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1965b643-d195-41ab-856d-dfa506fb0021]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.507 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[923f37d4-a075-485e-9dd5-2795f914b452]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6599ab7f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:4a:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409570, 'reachable_time': 24913, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272535, 'error': None, 'target': 'ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.536 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8a4a216c-12f5-4067-bd79-f06eef8f0566]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6599ab7f-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 409583, 'tstamp': 409583}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272554, 'error': None, 'target': 'ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6599ab7f-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 409586, 'tstamp': 409586}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272554, 'error': None, 'target': 'ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.539 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6599ab7f-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.542 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:18 np0005629333 NetworkManager[49836]: <info>  [1772022138.5443] manager: (tap448b460d-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/106)
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.554 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.555 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6599ab7f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.556 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:22:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.557 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6599ab7f-80, col_values=(('external_ids', {'iface-id': 'd8168125-f59d-4dc2-b239-e9ce5117e64b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.558 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:22:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.560 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 448b460d-ecda-4125-9d74-8560b946b896 in datapath 6599ab7f-86c0-4af9-b532-eeb7a134fca8 unbound from our chassis#033[00m
Feb 25 07:22:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.563 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6599ab7f-86c0-4af9-b532-eeb7a134fca8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:22:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.564 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a23c3ec2-f04b-4bea-9d74-3550d1ff0e17]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.565 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8 namespace which is not needed anymore#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.571 244018 INFO nova.virt.libvirt.driver [-] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Instance destroyed successfully.#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.571 244018 DEBUG nova.objects.instance [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lazy-loading 'resources' on Instance uuid 5a7dc142-2b11-4214-87b7-636f27ccacbf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.587 244018 DEBUG nova.virt.libvirt.vif [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:21:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-389676934',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-389676934',id=29,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:22:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='04734aae68d34fac8fb592fc015632fd',ramdisk_id='',reservation_id='r-x9nbtz7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-1981016367',owner_user_name='tempest-AttachInterfacesV270Test-1981016367-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:07Z,user_data=None,user_id='7d27f8a357c2443a9140598fd9ec73e1',uuid=5a7dc142-2b11-4214-87b7-636f27ccacbf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0fef626d-412c-4101-95eb-eadb3354247f", "address": "fa:16:3e:67:08:21", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fef626d-41", "ovs_interfaceid": "0fef626d-412c-4101-95eb-eadb3354247f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.587 244018 DEBUG nova.network.os_vif_util [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Converting VIF {"id": "0fef626d-412c-4101-95eb-eadb3354247f", "address": "fa:16:3e:67:08:21", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0fef626d-41", "ovs_interfaceid": "0fef626d-412c-4101-95eb-eadb3354247f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.588 244018 DEBUG nova.network.os_vif_util [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:67:08:21,bridge_name='br-int',has_traffic_filtering=True,id=0fef626d-412c-4101-95eb-eadb3354247f,network=Network(6599ab7f-86c0-4af9-b532-eeb7a134fca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fef626d-41') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.588 244018 DEBUG os_vif [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:08:21,bridge_name='br-int',has_traffic_filtering=True,id=0fef626d-412c-4101-95eb-eadb3354247f,network=Network(6599ab7f-86c0-4af9-b532-eeb7a134fca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fef626d-41') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.590 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.590 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0fef626d-41, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.592 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.594 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.597 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.600 244018 INFO os_vif [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:08:21,bridge_name='br-int',has_traffic_filtering=True,id=0fef626d-412c-4101-95eb-eadb3354247f,network=Network(6599ab7f-86c0-4af9-b532-eeb7a134fca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0fef626d-41')#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.600 244018 DEBUG nova.virt.libvirt.vif [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:21:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-389676934',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-389676934',id=29,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:22:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='04734aae68d34fac8fb592fc015632fd',ramdisk_id='',reservation_id='r-x9nbtz7e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-1981016367',owner_user_name='tempest-AttachInterfacesV270Test-1981016367-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:22:07Z,user_data=None,user_id='7d27f8a357c2443a9140598fd9ec73e1',uuid=5a7dc142-2b11-4214-87b7-636f27ccacbf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "448b460d-ecda-4125-9d74-8560b946b896", "address": "fa:16:3e:17:fa:d0", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap448b460d-ec", "ovs_interfaceid": "448b460d-ecda-4125-9d74-8560b946b896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.601 244018 DEBUG nova.network.os_vif_util [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Converting VIF {"id": "448b460d-ecda-4125-9d74-8560b946b896", "address": "fa:16:3e:17:fa:d0", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap448b460d-ec", "ovs_interfaceid": "448b460d-ecda-4125-9d74-8560b946b896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.601 244018 DEBUG nova.network.os_vif_util [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:fa:d0,bridge_name='br-int',has_traffic_filtering=True,id=448b460d-ecda-4125-9d74-8560b946b896,network=Network(6599ab7f-86c0-4af9-b532-eeb7a134fca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap448b460d-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.602 244018 DEBUG os_vif [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:fa:d0,bridge_name='br-int',has_traffic_filtering=True,id=448b460d-ecda-4125-9d74-8560b946b896,network=Network(6599ab7f-86c0-4af9-b532-eeb7a134fca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap448b460d-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.603 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.603 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap448b460d-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.604 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.608 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.609 244018 INFO os_vif [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:fa:d0,bridge_name='br-int',has_traffic_filtering=True,id=448b460d-ecda-4125-9d74-8560b946b896,network=Network(6599ab7f-86c0-4af9-b532-eeb7a134fca8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap448b460d-ec')#033[00m
Feb 25 07:22:18 np0005629333 neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8[272135]: [NOTICE]   (272139) : haproxy version is 2.8.14-c23fe91
Feb 25 07:22:18 np0005629333 neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8[272135]: [NOTICE]   (272139) : path to executable is /usr/sbin/haproxy
Feb 25 07:22:18 np0005629333 neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8[272135]: [WARNING]  (272139) : Exiting Master process...
Feb 25 07:22:18 np0005629333 neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8[272135]: [ALERT]    (272139) : Current worker (272141) exited with code 143 (Terminated)
Feb 25 07:22:18 np0005629333 neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8[272135]: [WARNING]  (272139) : All workers exited. Exiting... (0)
Feb 25 07:22:18 np0005629333 systemd[1]: libpod-677e628f3a81fd3832b29927a5666feb48b09d1d8b185b424fcfc3c1f030757b.scope: Deactivated successfully.
Feb 25 07:22:18 np0005629333 podman[272613]: 2026-02-25 12:22:18.710237622 +0000 UTC m=+0.051612589 container died 677e628f3a81fd3832b29927a5666feb48b09d1d8b185b424fcfc3c1f030757b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.712 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:18 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-677e628f3a81fd3832b29927a5666feb48b09d1d8b185b424fcfc3c1f030757b-userdata-shm.mount: Deactivated successfully.
Feb 25 07:22:18 np0005629333 systemd[1]: var-lib-containers-storage-overlay-bb7a359f29f9a30a87daa097bb03214ce4fe36c45b982aeae264aafa91f3402f-merged.mount: Deactivated successfully.
Feb 25 07:22:18 np0005629333 podman[272613]: 2026-02-25 12:22:18.749459578 +0000 UTC m=+0.090834595 container cleanup 677e628f3a81fd3832b29927a5666feb48b09d1d8b185b424fcfc3c1f030757b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 07:22:18 np0005629333 systemd[1]: libpod-conmon-677e628f3a81fd3832b29927a5666feb48b09d1d8b185b424fcfc3c1f030757b.scope: Deactivated successfully.
Feb 25 07:22:18 np0005629333 podman[272645]: 2026-02-25 12:22:18.828503847 +0000 UTC m=+0.049606612 container remove 677e628f3a81fd3832b29927a5666feb48b09d1d8b185b424fcfc3c1f030757b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:22:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.835 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1f54c809-d7bb-457c-bb46-2d4ec32ff0bd]: (4, ('Wed Feb 25 12:22:18 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8 (677e628f3a81fd3832b29927a5666feb48b09d1d8b185b424fcfc3c1f030757b)\n677e628f3a81fd3832b29927a5666feb48b09d1d8b185b424fcfc3c1f030757b\nWed Feb 25 12:22:18 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8 (677e628f3a81fd3832b29927a5666feb48b09d1d8b185b424fcfc3c1f030757b)\n677e628f3a81fd3832b29927a5666feb48b09d1d8b185b424fcfc3c1f030757b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.837 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[77f6d966-2a0b-4dec-8c47-b4121178f369]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.838 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6599ab7f-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.839 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:18 np0005629333 kernel: tap6599ab7f-80: left promiscuous mode
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.852 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.857 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0c1f66ba-537b-4056-80dc-c587e081e3e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.868 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[16c969d2-a281-4a94-97d1-b8d54b41a539]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.869 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e6efc135-3a10-49e1-b9a4-5e3664e96543]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.881 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7f41c142-358e-4e14-809e-4dca6f8f6654]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409562, 'reachable_time': 16997, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272662, 'error': None, 'target': 'ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.883 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6599ab7f-86c0-4af9-b532-eeb7a134fca8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:22:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:18.883 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[e98adacf-e425-41ad-8e7e-0bd11df9bb9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:18 np0005629333 systemd[1]: run-netns-ovnmeta\x2d6599ab7f\x2d86c0\x2d4af9\x2db532\x2deeb7a134fca8.mount: Deactivated successfully.
Feb 25 07:22:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:22:18 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2580611927' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.929 244018 DEBUG oslo_concurrency.processutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.937 244018 INFO nova.virt.libvirt.driver [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Deleting instance files /var/lib/nova/instances/5a7dc142-2b11-4214-87b7-636f27ccacbf_del#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.938 244018 INFO nova.virt.libvirt.driver [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Deletion of /var/lib/nova/instances/5a7dc142-2b11-4214-87b7-636f27ccacbf_del complete#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.943 244018 DEBUG nova.compute.provider_tree [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:22:18 np0005629333 nova_compute[244014]: 2026-02-25 12:22:18.972 244018 DEBUG nova.scheduler.client.report [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:22:19 np0005629333 nova_compute[244014]: 2026-02-25 12:22:19.076 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.966s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:19 np0005629333 nova_compute[244014]: 2026-02-25 12:22:19.078 244018 DEBUG nova.compute.manager [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:22:19 np0005629333 nova_compute[244014]: 2026-02-25 12:22:19.090 244018 INFO nova.compute.manager [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:22:19 np0005629333 nova_compute[244014]: 2026-02-25 12:22:19.090 244018 DEBUG oslo.service.loopingcall [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:22:19 np0005629333 nova_compute[244014]: 2026-02-25 12:22:19.091 244018 DEBUG nova.compute.manager [-] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:22:19 np0005629333 nova_compute[244014]: 2026-02-25 12:22:19.091 244018 DEBUG nova.network.neutron [-] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:22:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:22:19 np0005629333 nova_compute[244014]: 2026-02-25 12:22:19.271 244018 DEBUG nova.compute.manager [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:22:19 np0005629333 nova_compute[244014]: 2026-02-25 12:22:19.272 244018 DEBUG nova.network.neutron [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:22:19 np0005629333 nova_compute[244014]: 2026-02-25 12:22:19.308 244018 INFO nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:22:19 np0005629333 nova_compute[244014]: 2026-02-25 12:22:19.356 244018 DEBUG nova.compute.manager [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:22:19 np0005629333 nova_compute[244014]: 2026-02-25 12:22:19.520 244018 DEBUG nova.compute.manager [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:22:19 np0005629333 nova_compute[244014]: 2026-02-25 12:22:19.522 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:22:19 np0005629333 nova_compute[244014]: 2026-02-25 12:22:19.523 244018 INFO nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Creating image(s)#033[00m
Feb 25 07:22:19 np0005629333 nova_compute[244014]: 2026-02-25 12:22:19.556 244018 DEBUG nova.storage.rbd_utils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:22:19 np0005629333 nova_compute[244014]: 2026-02-25 12:22:19.592 244018 DEBUG nova.storage.rbd_utils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:22:19 np0005629333 nova_compute[244014]: 2026-02-25 12:22:19.623 244018 DEBUG nova.storage.rbd_utils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:22:19 np0005629333 nova_compute[244014]: 2026-02-25 12:22:19.627 244018 DEBUG oslo_concurrency.processutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:22:19 np0005629333 nova_compute[244014]: 2026-02-25 12:22:19.660 244018 DEBUG nova.policy [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea407839a07d46608b6348caf676d12d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6a771ad0ce454d809d66825f69248fa7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:22:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 205 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 933 KiB/s wr, 170 op/s
Feb 25 07:22:19 np0005629333 nova_compute[244014]: 2026-02-25 12:22:19.709 244018 DEBUG oslo_concurrency.processutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:22:19 np0005629333 nova_compute[244014]: 2026-02-25 12:22:19.710 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:19 np0005629333 nova_compute[244014]: 2026-02-25 12:22:19.711 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:19 np0005629333 nova_compute[244014]: 2026-02-25 12:22:19.711 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:19 np0005629333 nova_compute[244014]: 2026-02-25 12:22:19.744 244018 DEBUG nova.storage.rbd_utils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:22:19 np0005629333 nova_compute[244014]: 2026-02-25 12:22:19.747 244018 DEBUG oslo_concurrency.processutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.017 244018 DEBUG oslo_concurrency.processutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.096 244018 DEBUG nova.storage.rbd_utils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] resizing rbd image 33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.193 244018 DEBUG nova.objects.instance [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'migration_context' on Instance uuid 33db1662-e67d-41de-b8d6-ea93b40cf7cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.209 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.210 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Ensure instance console log exists: /var/lib/nova/instances/33db1662-e67d-41de-b8d6-ea93b40cf7cd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.211 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.212 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.212 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.419 244018 DEBUG nova.network.neutron [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Successfully updated port: fd320060-eaf8-4fd7-9325-e3793617bc7b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.448 244018 DEBUG nova.compute.manager [req-b7050af2-8dcb-44ae-82eb-05bb26432fe0 req-49a64329-4b35-40cb-ad9e-c5fab7ce4509 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received event network-vif-deleted-0fef626d-412c-4101-95eb-eadb3354247f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.449 244018 INFO nova.compute.manager [req-b7050af2-8dcb-44ae-82eb-05bb26432fe0 req-49a64329-4b35-40cb-ad9e-c5fab7ce4509 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Neutron deleted interface 0fef626d-412c-4101-95eb-eadb3354247f; detaching it from the instance and deleting it from the info cache#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.449 244018 DEBUG nova.network.neutron [req-b7050af2-8dcb-44ae-82eb-05bb26432fe0 req-49a64329-4b35-40cb-ad9e-c5fab7ce4509 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Updating instance_info_cache with network_info: [{"id": "448b460d-ecda-4125-9d74-8560b946b896", "address": "fa:16:3e:17:fa:d0", "network": {"id": "6599ab7f-86c0-4af9-b532-eeb7a134fca8", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-945769192-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04734aae68d34fac8fb592fc015632fd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap448b460d-ec", "ovs_interfaceid": "448b460d-ecda-4125-9d74-8560b946b896", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.645 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "refresh_cache-ffd5cedf-474c-4977-807e-22a276ceb002" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.645 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquired lock "refresh_cache-ffd5cedf-474c-4977-807e-22a276ceb002" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.646 244018 DEBUG nova.network.neutron [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.655 244018 DEBUG nova.network.neutron [-] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.728 244018 DEBUG nova.compute.manager [req-b7050af2-8dcb-44ae-82eb-05bb26432fe0 req-49a64329-4b35-40cb-ad9e-c5fab7ce4509 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Detach interface failed, port_id=0fef626d-412c-4101-95eb-eadb3354247f, reason: Instance 5a7dc142-2b11-4214-87b7-636f27ccacbf could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.867 244018 DEBUG nova.compute.manager [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received event network-vif-unplugged-0fef626d-412c-4101-95eb-eadb3354247f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.867 244018 DEBUG oslo_concurrency.lockutils [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.868 244018 DEBUG oslo_concurrency.lockutils [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.868 244018 DEBUG oslo_concurrency.lockutils [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.868 244018 DEBUG nova.compute.manager [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] No waiting events found dispatching network-vif-unplugged-0fef626d-412c-4101-95eb-eadb3354247f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.869 244018 DEBUG nova.compute.manager [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received event network-vif-unplugged-0fef626d-412c-4101-95eb-eadb3354247f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.869 244018 DEBUG nova.compute.manager [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received event network-vif-plugged-0fef626d-412c-4101-95eb-eadb3354247f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.869 244018 DEBUG oslo_concurrency.lockutils [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.870 244018 DEBUG oslo_concurrency.lockutils [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.870 244018 DEBUG oslo_concurrency.lockutils [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.870 244018 DEBUG nova.compute.manager [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] No waiting events found dispatching network-vif-plugged-0fef626d-412c-4101-95eb-eadb3354247f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.871 244018 WARNING nova.compute.manager [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received unexpected event network-vif-plugged-0fef626d-412c-4101-95eb-eadb3354247f for instance with vm_state active and task_state deleting.#033[00m
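[annotation] The WARNING above is the fall-through of that pop: a network-vif-plugged notification arrived from Neutron while the instance was already being torn down, so no coroutine was waiting on it. A hedged reconstruction of that branch (names are illustrative, not Nova's exact internals):

    import logging

    LOG = logging.getLogger(__name__)

    def dispatch_external_event(instance, event_key, waiters):
        waiter = waiters.pop(event_key, None)
        if waiter is not None:
            waiter.send(event_key)  # wake whoever registered for this event
        else:
            LOG.warning("Received unexpected event %s for instance with "
                        "vm_state %s and task_state %s.",
                        event_key, instance["vm_state"], instance["task_state"])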
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.871 244018 DEBUG nova.compute.manager [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received event network-vif-unplugged-448b460d-ecda-4125-9d74-8560b946b896 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.871 244018 DEBUG oslo_concurrency.lockutils [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.871 244018 DEBUG oslo_concurrency.lockutils [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.872 244018 DEBUG oslo_concurrency.lockutils [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.872 244018 DEBUG nova.compute.manager [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] No waiting events found dispatching network-vif-unplugged-448b460d-ecda-4125-9d74-8560b946b896 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.872 244018 DEBUG nova.compute.manager [req-442f830c-03f9-4761-a040-5ece55c55a2b req-3a0ad7c1-31e4-4686-b11a-9a8ba7af89ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received event network-vif-unplugged-448b460d-ecda-4125-9d74-8560b946b896 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.890 244018 INFO nova.compute.manager [-] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Took 1.80 seconds to deallocate network for instance.#033[00m
Feb 25 07:22:20 np0005629333 nova_compute[244014]: 2026-02-25 12:22:20.975 244018 DEBUG nova.network.neutron [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:22:21 np0005629333 nova_compute[244014]: 2026-02-25 12:22:21.043 244018 DEBUG oslo_concurrency.lockutils [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:21 np0005629333 nova_compute[244014]: 2026-02-25 12:22:21.044 244018 DEBUG oslo_concurrency.lockutils [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:21 np0005629333 nova_compute[244014]: 2026-02-25 12:22:21.116 244018 DEBUG oslo_concurrency.processutils [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:22:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:22:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/264283392' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:22:21 np0005629333 nova_compute[244014]: 2026-02-25 12:22:21.654 244018 DEBUG oslo_concurrency.processutils [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
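[annotation] The ceph df round trip above (dispatched by the mon, answered in 0.537s) is how the resource tracker sizes RBD-backed disk. A sketch of the same probe with oslo's processutils, parsing the stock ceph df JSON fields; error handling omitted, field names per the standard ceph df schema:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    stats = json.loads(out)["stats"]
    total_gb = stats["total_bytes"] / (1024 ** 3)
    avail_gb = stats["total_avail_bytes"] / (1024 ** 3)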
Feb 25 07:22:21 np0005629333 nova_compute[244014]: 2026-02-25 12:22:21.660 244018 DEBUG nova.compute.provider_tree [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:22:21 np0005629333 nova_compute[244014]: 2026-02-25 12:22:21.668 244018 DEBUG nova.network.neutron [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Successfully created port: 67db480d-6a66-4c54-be9c-5375a0d664cd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:22:21 np0005629333 nova_compute[244014]: 2026-02-25 12:22:21.683 244018 DEBUG nova.scheduler.client.report [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
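[annotation] The inventory line above fixes this node's schedulable capacity; placement derives it as (total - reserved) * allocation_ratio per resource class. Checking the arithmetic against the logged values:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2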
Feb 25 07:22:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 235 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.7 MiB/s wr, 158 op/s
Feb 25 07:22:21 np0005629333 nova_compute[244014]: 2026-02-25 12:22:21.732 244018 DEBUG oslo_concurrency.lockutils [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:21 np0005629333 nova_compute[244014]: 2026-02-25 12:22:21.841 244018 INFO nova.scheduler.client.report [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Deleted allocations for instance 5a7dc142-2b11-4214-87b7-636f27ccacbf#033[00m
Feb 25 07:22:22 np0005629333 nova_compute[244014]: 2026-02-25 12:22:22.000 244018 DEBUG oslo_concurrency.lockutils [None req-e12beadc-0014-44fe-a9c7-b086ded76d0a 7d27f8a357c2443a9140598fd9ec73e1 04734aae68d34fac8fb592fc015632fd - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
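[annotation] "Deleted allocations for instance ..." just above maps to a single placement API call, DELETE /allocations/{consumer_uuid}. A bare sketch with requests; the endpoint and token are hypothetical stand-ins for what keystoneauth normally wires up:

    import requests

    PLACEMENT = "http://placement.example.com"  # hypothetical endpoint
    TOKEN = "gAAAA..."                          # hypothetical auth token

    requests.delete(
        PLACEMENT + "/allocations/5a7dc142-2b11-4214-87b7-636f27ccacbf",
        headers={"X-Auth-Token": TOKEN})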
Feb 25 07:22:22 np0005629333 nova_compute[244014]: 2026-02-25 12:22:22.316 244018 DEBUG nova.network.neutron [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Updating instance_info_cache with network_info: [{"id": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "address": "fa:16:3e:76:73:07", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd320060-ea", "ovs_interfaceid": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:22:22 np0005629333 nova_compute[244014]: 2026-02-25 12:22:22.429 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Releasing lock "refresh_cache-ffd5cedf-474c-4977-807e-22a276ceb002" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:22:22 np0005629333 nova_compute[244014]: 2026-02-25 12:22:22.430 244018 DEBUG nova.compute.manager [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Instance network_info: |[{"id": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "address": "fa:16:3e:76:73:07", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd320060-ea", "ovs_interfaceid": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
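[annotation] The network_info blob logged twice above carries every field the guest XML further down will consume. Pulling out the ones that matter, with values copied from the log and trimmed to the relevant keys:

    vif = {
        "devname": "tapfd320060-ea",
        "address": "fa:16:3e:76:73:07",
        "network": {"meta": {"mtu": 1442},
                    "subnets": [{"ips": [{"address": "10.100.0.12"}]}]},
    }
    print(vif["devname"])                                     # <target dev="tapfd320060-ea"/>
    print(vif["address"])                                     # <mac address="fa:16:3e:76:73:07"/>
    print(vif["network"]["meta"]["mtu"])                      # <mtu size="1442"/>
    print(vif["network"]["subnets"][0]["ips"][0]["address"])  # <nova:ip ... address="10.100.0.12"/>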
Feb 25 07:22:22 np0005629333 nova_compute[244014]: 2026-02-25 12:22:22.434 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Start _get_guest_xml network_info=[{"id": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "address": "fa:16:3e:76:73:07", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd320060-ea", "ovs_interfaceid": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:22:22 np0005629333 nova_compute[244014]: 2026-02-25 12:22:22.440 244018 WARNING nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:22:22 np0005629333 nova_compute[244014]: 2026-02-25 12:22:22.447 244018 DEBUG nova.virt.libvirt.host [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:22:22 np0005629333 nova_compute[244014]: 2026-02-25 12:22:22.448 244018 DEBUG nova.virt.libvirt.host [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:22:22 np0005629333 nova_compute[244014]: 2026-02-25 12:22:22.451 244018 DEBUG nova.virt.libvirt.host [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:22:22 np0005629333 nova_compute[244014]: 2026-02-25 12:22:22.452 244018 DEBUG nova.virt.libvirt.host [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
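[annotation] The two "Searching host ... CPU controller" probes (cgroups v1 misses, v2 hits) reduce to inspecting the unified cgroup hierarchy. A stand-alone approximation of the v2 check, not Nova's exact code:

    from pathlib import Path

    def has_cgroupsv2_cpu_controller():
        path = Path("/sys/fs/cgroup/cgroup.controllers")
        return path.exists() and "cpu" in path.read_text().split()

    print(has_cgroupsv2_cpu_controller())  # True on this host, per the log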
Feb 25 07:22:22 np0005629333 nova_compute[244014]: 2026-02-25 12:22:22.452 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:22:22 np0005629333 nova_compute[244014]: 2026-02-25 12:22:22.453 244018 DEBUG nova.virt.hardware [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:22:22 np0005629333 nova_compute[244014]: 2026-02-25 12:22:22.453 244018 DEBUG nova.virt.hardware [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:22:22 np0005629333 nova_compute[244014]: 2026-02-25 12:22:22.454 244018 DEBUG nova.virt.hardware [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:22:22 np0005629333 nova_compute[244014]: 2026-02-25 12:22:22.455 244018 DEBUG nova.virt.hardware [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:22:22 np0005629333 nova_compute[244014]: 2026-02-25 12:22:22.456 244018 DEBUG nova.virt.hardware [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:22:22 np0005629333 nova_compute[244014]: 2026-02-25 12:22:22.456 244018 DEBUG nova.virt.hardware [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:22:22 np0005629333 nova_compute[244014]: 2026-02-25 12:22:22.456 244018 DEBUG nova.virt.hardware [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:22:22 np0005629333 nova_compute[244014]: 2026-02-25 12:22:22.457 244018 DEBUG nova.virt.hardware [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:22:22 np0005629333 nova_compute[244014]: 2026-02-25 12:22:22.457 244018 DEBUG nova.virt.hardware [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:22:22 np0005629333 nova_compute[244014]: 2026-02-25 12:22:22.457 244018 DEBUG nova.virt.hardware [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:22:22 np0005629333 nova_compute[244014]: 2026-02-25 12:22:22.458 244018 DEBUG nova.virt.hardware [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
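[annotation] The topology walk above starts from no flavor or image constraints (preferred 0:0:0, limits 65536 each) and one vCPU, so exactly one candidate survives. An illustrative enumeration of the valid sockets:cores:threads splits under those limits:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], as logged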
Feb 25 07:22:22 np0005629333 nova_compute[244014]: 2026-02-25 12:22:22.463 244018 DEBUG oslo_concurrency.processutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:22:22 np0005629333 nova_compute[244014]: 2026-02-25 12:22:22.587 244018 DEBUG nova.compute.manager [req-1f8d1afc-9155-4b05-9553-c95b16442038 req-6c30d5d9-c0ab-4879-8e1e-6031511e5a64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received event network-vif-deleted-448b460d-ecda-4125-9d74-8560b946b896 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:22:22 np0005629333 nova_compute[244014]: 2026-02-25 12:22:22.682 244018 DEBUG nova.network.neutron [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Successfully updated port: 67db480d-6a66-4c54-be9c-5375a0d664cd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:22:22 np0005629333 nova_compute[244014]: 2026-02-25 12:22:22.716 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:22:22 np0005629333 nova_compute[244014]: 2026-02-25 12:22:22.716 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquired lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:22:22 np0005629333 nova_compute[244014]: 2026-02-25 12:22:22.717 244018 DEBUG nova.network.neutron [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:22:22 np0005629333 nova_compute[244014]: 2026-02-25 12:22:22.927 244018 DEBUG nova.network.neutron [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:22:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:22:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/364510408' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.051 244018 DEBUG oslo_concurrency.processutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.081 244018 DEBUG nova.storage.rbd_utils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image ffd5cedf-474c-4977-807e-22a276ceb002_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
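[annotation] The ceph mon dump calls bracketing this span are how the driver learns the monitor endpoints that reappear as <host name="192.168.122.100" port="6789"/> in the disk XML below, and the "does not exist" debug is an ImageNotFound probe for the config-drive image. A sketch of both with the rados/rbd python bindings; key names follow the standard mon dump JSON:

    import json, subprocess
    import rados, rbd

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    # public_addr looks like "192.168.122.100:6789/0"
    hosts = [m["public_addr"].split("/")[0] for m in json.loads(out)["mons"]]

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")
    try:
        rbd.Image(ioctx, "ffd5cedf-474c-4977-807e-22a276ceb002_disk.config").close()
    except rbd.ImageNotFound:
        pass  # matches the logged "rbd image ... does not exist"
    finally:
        ioctx.close()
        cluster.shutdown()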
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.085 244018 DEBUG oslo_concurrency.processutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.514 244018 DEBUG nova.compute.manager [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received event network-vif-plugged-448b460d-ecda-4125-9d74-8560b946b896 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.515 244018 DEBUG oslo_concurrency.lockutils [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.516 244018 DEBUG oslo_concurrency.lockutils [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.516 244018 DEBUG oslo_concurrency.lockutils [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5a7dc142-2b11-4214-87b7-636f27ccacbf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.516 244018 DEBUG nova.compute.manager [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] No waiting events found dispatching network-vif-plugged-448b460d-ecda-4125-9d74-8560b946b896 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.516 244018 WARNING nova.compute.manager [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Received unexpected event network-vif-plugged-448b460d-ecda-4125-9d74-8560b946b896 for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.517 244018 DEBUG nova.compute.manager [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Received event network-changed-fd320060-eaf8-4fd7-9325-e3793617bc7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.517 244018 DEBUG nova.compute.manager [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Refreshing instance network info cache due to event network-changed-fd320060-eaf8-4fd7-9325-e3793617bc7b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.517 244018 DEBUG oslo_concurrency.lockutils [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-ffd5cedf-474c-4977-807e-22a276ceb002" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.518 244018 DEBUG oslo_concurrency.lockutils [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-ffd5cedf-474c-4977-807e-22a276ceb002" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.518 244018 DEBUG nova.network.neutron [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Refreshing network info cache for port fd320060-eaf8-4fd7-9325-e3793617bc7b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.604 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:23 np0005629333 podman[272937]: 2026-02-25 12:22:23.614981452 +0000 UTC m=+0.095554810 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:22:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:22:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/598363268' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:22:23 np0005629333 podman[272938]: 2026-02-25 12:22:23.638739078 +0000 UTC m=+0.119741918 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
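[annotation] The two podman health_status=healthy events above are the periodic healthchecks configured for the OVN containers ('test': '/openstack/healthcheck' in their config_data). The same probe can be run on demand; a small loop over the container names taken from the log:

    import subprocess

    for name in ("ovn_metadata_agent", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else "unhealthy")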
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.652 244018 DEBUG oslo_concurrency.processutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.654 244018 DEBUG nova.virt.libvirt.vif [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:22:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-337821408',display_name='tempest-ImagesTestJSON-server-337821408',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-337821408',id=30,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='851e1817495944c1ad7ac421ab226d13',ramdisk_id='',reservation_id='r-a00v4oxk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1353368211',owner_user_name='tempest-ImagesTestJSON-1353368211-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:22:17Z,user_data=None,user_id='f1cfc3e643014e1c984d71182534fd24',uuid=ffd5cedf-474c-4977-807e-22a276ceb002,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "address": "fa:16:3e:76:73:07", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd320060-ea", "ovs_interfaceid": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.654 244018 DEBUG nova.network.os_vif_util [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converting VIF {"id": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "address": "fa:16:3e:76:73:07", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd320060-ea", "ovs_interfaceid": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.656 244018 DEBUG nova.network.os_vif_util [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:73:07,bridge_name='br-int',has_traffic_filtering=True,id=fd320060-eaf8-4fd7-9325-e3793617bc7b,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd320060-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
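[annotation] The conversion logged above turns Nova's VIF dict into an os-vif object before plugging. With the public os_vif library, the converted object looks roughly like this (field values copied from the "Converted object" line; network and port_profile omitted for brevity):

    from os_vif.objects import vif as vif_obj

    vif = vif_obj.VIFOpenVSwitch(
        id="fd320060-eaf8-4fd7-9325-e3793617bc7b",
        address="fa:16:3e:76:73:07",
        bridge_name="br-int",
        vif_name="tapfd320060-ea",
        has_traffic_filtering=True,
        preserve_on_delete=False)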
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.657 244018 DEBUG nova.objects.instance [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lazy-loading 'pci_devices' on Instance uuid ffd5cedf-474c-4977-807e-22a276ceb002 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.680 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:22:23 np0005629333 nova_compute[244014]:  <uuid>ffd5cedf-474c-4977-807e-22a276ceb002</uuid>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:  <name>instance-0000001e</name>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <nova:name>tempest-ImagesTestJSON-server-337821408</nova:name>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:22:22</nova:creationTime>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:22:23 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:        <nova:user uuid="f1cfc3e643014e1c984d71182534fd24">tempest-ImagesTestJSON-1353368211-project-member</nova:user>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:        <nova:project uuid="851e1817495944c1ad7ac421ab226d13">tempest-ImagesTestJSON-1353368211</nova:project>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:        <nova:port uuid="fd320060-eaf8-4fd7-9325-e3793617bc7b">
Feb 25 07:22:23 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <entry name="serial">ffd5cedf-474c-4977-807e-22a276ceb002</entry>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <entry name="uuid">ffd5cedf-474c-4977-807e-22a276ceb002</entry>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/ffd5cedf-474c-4977-807e-22a276ceb002_disk">
Feb 25 07:22:23 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:22:23 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/ffd5cedf-474c-4977-807e-22a276ceb002_disk.config">
Feb 25 07:22:23 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:22:23 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:76:73:07"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <target dev="tapfd320060-ea"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/ffd5cedf-474c-4977-807e-22a276ceb002/console.log" append="off"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:22:23 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:22:23 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:22:23 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:22:23 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.680 244018 DEBUG nova.compute.manager [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Preparing to wait for external event network-vif-plugged-fd320060-eaf8-4fd7-9325-e3793617bc7b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.681 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "ffd5cedf-474c-4977-807e-22a276ceb002-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.681 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "ffd5cedf-474c-4977-807e-22a276ceb002-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.681 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "ffd5cedf-474c-4977-807e-22a276ceb002-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
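The acquire/release pair above is oslo.concurrency's named-lock helper serializing access to the per-instance event table keyed "<uuid>-events". A rough sketch of the same pattern (the event dict and function are hypothetical stand-ins for the manager's internals):

    from oslo_concurrency import lockutils

    _events = {}

    def create_or_get_event(instance_uuid, name, tag):
        # Same string-keyed lock as "ffd5cedf-...-events" in the log.
        with lockutils.lock('%s-events' % instance_uuid):
            return _events.setdefault((name, tag), object())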
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.682 244018 DEBUG nova.virt.libvirt.vif [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:22:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-337821408',display_name='tempest-ImagesTestJSON-server-337821408',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-337821408',id=30,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='851e1817495944c1ad7ac421ab226d13',ramdisk_id='',reservation_id='r-a00v4oxk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1353368211',owner_user_name='tempest-ImagesTestJSON-1353368211-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:22:17Z,user_data=None,user_id='f1cfc3e643014e1c984d71182534fd24',uuid=ffd5cedf-474c-4977-807e-22a276ceb002,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "address": "fa:16:3e:76:73:07", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd320060-ea", "ovs_interfaceid": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.683 244018 DEBUG nova.network.os_vif_util [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converting VIF {"id": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "address": "fa:16:3e:76:73:07", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd320060-ea", "ovs_interfaceid": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:22:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 246 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 503 KiB/s rd, 6.8 MiB/s wr, 182 op/s
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.684 244018 DEBUG nova.network.os_vif_util [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:73:07,bridge_name='br-int',has_traffic_filtering=True,id=fd320060-eaf8-4fd7-9325-e3793617bc7b,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd320060-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.686 244018 DEBUG os_vif [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:73:07,bridge_name='br-int',has_traffic_filtering=True,id=fd320060-eaf8-4fd7-9325-e3793617bc7b,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd320060-ea') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
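os-vif operates on typed VIF objects rather than the raw port dict, which is what the nova_to_osvif_vif conversion above produces before plug() is called. A sketch of driving the same plug directly, with field values copied from the log (the InstanceInfo name is assumed from the systemd machine unit seen later):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin

    vif_obj = vif.VIFOpenVSwitch(
        id='fd320060-eaf8-4fd7-9325-e3793617bc7b',
        address='fa:16:3e:76:73:07',
        vif_name='tapfd320060-ea',
        bridge_name='br-int',
        has_traffic_filtering=True,
        network=network.Network(id='6a1663dd-25aa-4b0e-a1c0-42434d371a21'))
    inst = instance_info.InstanceInfo(
        uuid='ffd5cedf-474c-4977-807e-22a276ceb002',
        name='instance-0000001e')
    os_vif.plug(vif_obj, inst)  # issues the ovsdbapp transactions below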
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.687 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.688 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.688 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.692 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.692 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd320060-ea, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.693 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfd320060-ea, col_values=(('external_ids', {'iface-id': 'fd320060-eaf8-4fd7-9325-e3793617bc7b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:73:07', 'vm-uuid': 'ffd5cedf-474c-4977-807e-22a276ceb002'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
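The two transactions above (an idempotent add-bridge, then add-port plus external_ids) are ovsdbapp commands against the local ovsdb-server. A minimal sketch of the same sequence; the socket path is an assumption, since the plugin takes it from configuration:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapfd320060-ea', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapfd320060-ea',
            ('external_ids', {
                'iface-id': 'fd320060-eaf8-4fd7-9325-e3793617bc7b',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:76:73:07',
                'vm-uuid': 'ffd5cedf-474c-4977-807e-22a276ceb002'})))

The iface-id in external_ids is what lets ovn-controller match the OVS interface to the Neutron port when it claims the lport a moment later.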
Feb 25 07:22:23 np0005629333 NetworkManager[49836]: <info>  [1772022143.6963] manager: (tapfd320060-ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.699 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.703 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.704 244018 INFO os_vif [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:73:07,bridge_name='br-int',has_traffic_filtering=True,id=fd320060-eaf8-4fd7-9325-e3793617bc7b,network=Network(6a1663dd-25aa-4b0e-a1c0-42434d371a21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd320060-ea')#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.716 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.762 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.763 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.763 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] No VIF found with MAC fa:16:3e:76:73:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.763 244018 INFO nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Using config drive#033[00m
Feb 25 07:22:23 np0005629333 nova_compute[244014]: 2026-02-25 12:22:23.786 244018 DEBUG nova.storage.rbd_utils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image ffd5cedf-474c-4977-807e-22a276ceb002_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
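rbd_utils probes for the image before deciding whether to import; opening the image and catching ImageNotFound is the usual shape of that check with the python-rbd bindings. A sketch, using the same pool ('vms') and client id ('openstack') as the logged driver commands:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            name = 'ffd5cedf-474c-4977-807e-22a276ceb002_disk.config'
            rbd.Image(ioctx, name).close()
            exists = True
        except rbd.ImageNotFound:
            exists = False  # matches the "does not exist" line above
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()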
Feb 25 07:22:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:22:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:22:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:22:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:22:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:22:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:22:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:22:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:22:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:22:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:22:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:22:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:22:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:22:24 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:22:24 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:22:24 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
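Each handle_command line above is a structured mon command dispatched by the cephadm mgr module; the same interface is reachable from python-rados. A small sketch, assuming admin credentials in the usual keyring locations:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.admin')
    cluster.connect()
    try:
        # Same JSON shape the monitor logs in handle_command.
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({'prefix': 'config generate-minimal-conf'}), b'')
        print(outbuf.decode())
    finally:
        cluster.shutdown()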
Feb 25 07:22:24 np0005629333 podman[273117]: 2026-02-25 12:22:24.614353166 +0000 UTC m=+0.059129573 container create 62ba32c99c935942c9716bb6d5b7b943a634df6498174f87a48662e6d43cf63d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_heisenberg, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Feb 25 07:22:24 np0005629333 systemd[1]: Started libpod-conmon-62ba32c99c935942c9716bb6d5b7b943a634df6498174f87a48662e6d43cf63d.scope.
Feb 25 07:22:24 np0005629333 podman[273117]: 2026-02-25 12:22:24.590079826 +0000 UTC m=+0.034856263 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:22:24 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:22:24 np0005629333 nova_compute[244014]: 2026-02-25 12:22:24.690 244018 INFO nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Creating config drive at /var/lib/nova/instances/ffd5cedf-474c-4977-807e-22a276ceb002/disk.config#033[00m
Feb 25 07:22:24 np0005629333 nova_compute[244014]: 2026-02-25 12:22:24.697 244018 DEBUG oslo_concurrency.processutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ffd5cedf-474c-4977-807e-22a276ceb002/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp179iq57l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:22:24 np0005629333 podman[273117]: 2026-02-25 12:22:24.708265198 +0000 UTC m=+0.153041645 container init 62ba32c99c935942c9716bb6d5b7b943a634df6498174f87a48662e6d43cf63d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:22:24 np0005629333 podman[273117]: 2026-02-25 12:22:24.716800641 +0000 UTC m=+0.161577048 container start 62ba32c99c935942c9716bb6d5b7b943a634df6498174f87a48662e6d43cf63d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_heisenberg, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 07:22:24 np0005629333 ecstatic_heisenberg[273134]: 167 167
Feb 25 07:22:24 np0005629333 systemd[1]: libpod-62ba32c99c935942c9716bb6d5b7b943a634df6498174f87a48662e6d43cf63d.scope: Deactivated successfully.
Feb 25 07:22:24 np0005629333 podman[273117]: 2026-02-25 12:22:24.725725185 +0000 UTC m=+0.170501592 container attach 62ba32c99c935942c9716bb6d5b7b943a634df6498174f87a48662e6d43cf63d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_heisenberg, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:22:24 np0005629333 podman[273117]: 2026-02-25 12:22:24.726557329 +0000 UTC m=+0.171333736 container died 62ba32c99c935942c9716bb6d5b7b943a634df6498174f87a48662e6d43cf63d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_heisenberg, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:22:24 np0005629333 systemd[1]: var-lib-containers-storage-overlay-eb780e7d05b6c9cc2f4d9d4992afe712fdeb9a48d35127c8a0b1c3c237db18bb-merged.mount: Deactivated successfully.
Feb 25 07:22:24 np0005629333 podman[273117]: 2026-02-25 12:22:24.773841294 +0000 UTC m=+0.218617691 container remove 62ba32c99c935942c9716bb6d5b7b943a634df6498174f87a48662e6d43cf63d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_heisenberg, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:22:24 np0005629333 systemd[1]: libpod-conmon-62ba32c99c935942c9716bb6d5b7b943a634df6498174f87a48662e6d43cf63d.scope: Deactivated successfully.
Feb 25 07:22:24 np0005629333 nova_compute[244014]: 2026-02-25 12:22:24.828 244018 DEBUG oslo_concurrency.processutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ffd5cedf-474c-4977-807e-22a276ceb002/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp179iq57l" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
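The config drive is a plain ISO 9660 image: mkisofs packs the staged metadata directory (/tmp/tmp179iq57l here) into disk.config, with the publisher string passed as a single argv element even though it prints unquoted in the log. The equivalent call through oslo.concurrency (iso_path and tmpdir are hypothetical placeholders):

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        '/usr/bin/mkisofs', '-o', iso_path,
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', tmpdir)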
Feb 25 07:22:24 np0005629333 nova_compute[244014]: 2026-02-25 12:22:24.864 244018 DEBUG nova.storage.rbd_utils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] rbd image ffd5cedf-474c-4977-807e-22a276ceb002_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:22:24 np0005629333 nova_compute[244014]: 2026-02-25 12:22:24.869 244018 DEBUG oslo_concurrency.processutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ffd5cedf-474c-4977-807e-22a276ceb002/disk.config ffd5cedf-474c-4977-807e-22a276ceb002_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:22:24 np0005629333 podman[273180]: 2026-02-25 12:22:24.946386464 +0000 UTC m=+0.062739597 container create 49c9e5346b672ece8579d0117de3a76ccd1f9ca647f4c1d476a039dc28a4dea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_heyrovsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 07:22:24 np0005629333 systemd[1]: Started libpod-conmon-49c9e5346b672ece8579d0117de3a76ccd1f9ca647f4c1d476a039dc28a4dea3.scope.
Feb 25 07:22:25 np0005629333 podman[273180]: 2026-02-25 12:22:24.917634235 +0000 UTC m=+0.033987428 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:22:25 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.022 244018 DEBUG nova.network.neutron [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Updating instance_info_cache with network_info: [{"id": "67db480d-6a66-4c54-be9c-5375a0d664cd", "address": "fa:16:3e:4a:ac:51", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67db480d-6a", "ovs_interfaceid": "67db480d-6a66-4c54-be9c-5375a0d664cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:22:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b24ebb71a112860422c43da6a45f97454bba9286c9a336e7c807dd647040b23f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:22:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b24ebb71a112860422c43da6a45f97454bba9286c9a336e7c807dd647040b23f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:22:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b24ebb71a112860422c43da6a45f97454bba9286c9a336e7c807dd647040b23f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:22:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b24ebb71a112860422c43da6a45f97454bba9286c9a336e7c807dd647040b23f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:22:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b24ebb71a112860422c43da6a45f97454bba9286c9a336e7c807dd647040b23f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.032 244018 DEBUG oslo_concurrency.processutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ffd5cedf-474c-4977-807e-22a276ceb002/disk.config ffd5cedf-474c-4977-807e-22a276ceb002_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.033 244018 INFO nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Deleting local config drive /var/lib/nova/instances/ffd5cedf-474c-4977-807e-22a276ceb002/disk.config because it was imported into RBD.#033[00m
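Because this deployment backs instance disks with RBD, the freshly built ISO is pushed into the vms pool and the local copy deleted. The driver shells out to the rbd CLI as logged; below is a rough python-rbd equivalent of what `rbd import --image-format=2` does (a sketch, not the driver's code):

    import os
    import rados
    import rbd

    def import_image(path, pool, name, chunk=4 * 1024 * 1024):
        cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                              rados_id='openstack')
        cluster.connect()
        try:
            ioctx = cluster.open_ioctx(pool)
            try:
                # old_format=False (the default) yields a format-2 image.
                rbd.RBD().create(ioctx, name, os.path.getsize(path))
                with rbd.Image(ioctx, name) as img, open(path, 'rb') as f:
                    offset = 0
                    while data := f.read(chunk):
                        img.write(data, offset)
                        offset += len(data)
            finally:
                ioctx.close()
        finally:
            cluster.shutdown()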
Feb 25 07:22:25 np0005629333 podman[273180]: 2026-02-25 12:22:25.052624346 +0000 UTC m=+0.168977469 container init 49c9e5346b672ece8579d0117de3a76ccd1f9ca647f4c1d476a039dc28a4dea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_heyrovsky, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Feb 25 07:22:25 np0005629333 podman[273180]: 2026-02-25 12:22:25.063319181 +0000 UTC m=+0.179672294 container start 49c9e5346b672ece8579d0117de3a76ccd1f9ca647f4c1d476a039dc28a4dea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_heyrovsky, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.065 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Releasing lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.065 244018 DEBUG nova.compute.manager [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Instance network_info: |[{"id": "67db480d-6a66-4c54-be9c-5375a0d664cd", "address": "fa:16:3e:4a:ac:51", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67db480d-6a", "ovs_interfaceid": "67db480d-6a66-4c54-be9c-5375a0d664cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:22:25 np0005629333 podman[273180]: 2026-02-25 12:22:25.067476819 +0000 UTC m=+0.183829962 container attach 49c9e5346b672ece8579d0117de3a76ccd1f9ca647f4c1d476a039dc28a4dea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_heyrovsky, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.068 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Start _get_guest_xml network_info=[{"id": "67db480d-6a66-4c54-be9c-5375a0d664cd", "address": "fa:16:3e:4a:ac:51", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67db480d-6a", "ovs_interfaceid": "67db480d-6a66-4c54-be9c-5375a0d664cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.077 244018 WARNING nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.085 244018 DEBUG nova.virt.libvirt.host [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.086 244018 DEBUG nova.virt.libvirt.host [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.089 244018 DEBUG nova.virt.libvirt.host [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.090 244018 DEBUG nova.virt.libvirt.host [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
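The v1-then-v2 probe above boils down to asking whether a 'cpu' controller is available in the mounted hierarchy; on a cgroups-v2 host that is a single file read. A minimal sketch (the function name is illustrative, and /sys/fs/cgroup as the unified mount point is an assumption):

    def has_cgroupsv2_cpu_controller():
        try:
            with open('/sys/fs/cgroup/cgroup.controllers') as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # no unified (v2) hierarchy mounted there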
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.090 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.091 244018 DEBUG nova.virt.hardware [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.091 244018 DEBUG nova.virt.hardware [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.092 244018 DEBUG nova.virt.hardware [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.092 244018 DEBUG nova.virt.hardware [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.092 244018 DEBUG nova.virt.hardware [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.093 244018 DEBUG nova.virt.hardware [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.093 244018 DEBUG nova.virt.hardware [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.093 244018 DEBUG nova.virt.hardware [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.094 244018 DEBUG nova.virt.hardware [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.094 244018 DEBUG nova.virt.hardware [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.094 244018 DEBUG nova.virt.hardware [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
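With flavor and image limits all unset (the 0:0:0 lines above), the topology search is effectively unconstrained up to the 65536-per-dimension caps, and for 1 vCPU the only factorization is 1:1:1. A standalone sketch of that enumeration (an approximation of hardware.py's logic, not a copy):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield (sockets, cores, threads) triples whose product is vcpus.
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], as logged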
Feb 25 07:22:25 np0005629333 kernel: tapfd320060-ea: entered promiscuous mode
Feb 25 07:22:25 np0005629333 NetworkManager[49836]: <info>  [1772022145.0995] manager: (tapfd320060-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/108)
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.099 244018 DEBUG oslo_concurrency.processutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:22:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:25Z|00228|binding|INFO|Claiming lport fd320060-eaf8-4fd7-9325-e3793617bc7b for this chassis.
Feb 25 07:22:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:25Z|00229|binding|INFO|fd320060-eaf8-4fd7-9325-e3793617bc7b: Claiming fa:16:3e:76:73:07 10.100.0.12
Feb 25 07:22:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:25Z|00230|binding|INFO|Setting lport fd320060-eaf8-4fd7-9325-e3793617bc7b ovn-installed in OVS
Feb 25 07:22:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:25Z|00231|binding|INFO|Setting lport fd320060-eaf8-4fd7-9325-e3793617bc7b up in Southbound
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.109 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:73:07 10.100.0.12'], port_security=['fa:16:3e:76:73:07 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'ffd5cedf-474c-4977-807e-22a276ceb002', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '851e1817495944c1ad7ac421ab226d13', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a8a99d02-a675-470f-9701-584a41fcc90e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b9e85d49-f0c0-457f-8aeb-85579a3d5aa5, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=fd320060-eaf8-4fd7-9325-e3793617bc7b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.111 157129 INFO neutron.agent.ovn.metadata.agent [-] Port fd320060-eaf8-4fd7-9325-e3793617bc7b in datapath 6a1663dd-25aa-4b0e-a1c0-42434d371a21 bound to our chassis#033[00m
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.113 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a1663dd-25aa-4b0e-a1c0-42434d371a21#033[00m
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.124 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b674aa5b-fb3f-4972-a041-2ddc32e11b1f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.124 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6a1663dd-21 in ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.126 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.126 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6a1663dd-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.126 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ac149cc2-7549-445e-83ab-c3de59db99ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.128 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[34574215-7bd4-4c50-b348-cd5fea70b166]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
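Provisioning metadata for the datapath means building a veth pair with one end (tap6a1663dd-21) inside the ovnmeta- namespace, which is what the privsep'd ip_lib calls above are doing. Roughly the same moves with pyroute2, the library those helpers wrap (a sketch; error handling and idempotency omitted):

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21'
    netns.create(ns)  # raises if the namespace already exists
    ipr = IPRoute()
    try:
        ipr.link('add', ifname='tap6a1663dd-20', kind='veth',
                 peer='tap6a1663dd-21')
        peer = ipr.link_lookup(ifname='tap6a1663dd-21')[0]
        ipr.link('set', index=peer, net_ns_fd=ns)  # move one end into the ns
    finally:
        ipr.close()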
Feb 25 07:22:25 np0005629333 systemd-udevd[273237]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.137 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[8e5181c5-165d-4a06-9997-c6aba41522bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:25 np0005629333 systemd-machined[210048]: New machine qemu-34-instance-0000001e.
Feb 25 07:22:25 np0005629333 NetworkManager[49836]: <info>  [1772022145.1522] device (tapfd320060-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:22:25 np0005629333 NetworkManager[49836]: <info>  [1772022145.1531] device (tapfd320060-ea): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:22:25 np0005629333 systemd[1]: Started Virtual Machine qemu-34-instance-0000001e.
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.161 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7dffecb1-bb7b-4022-8528-f646d787e4ab]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.193 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5fd4e376-293c-4729-ba46-8ab8a7f6273e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:25 np0005629333 NetworkManager[49836]: <info>  [1772022145.1998] manager: (tap6a1663dd-20): new Veth device (/org/freedesktop/NetworkManager/Devices/109)
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.199 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[284b9438-3f3a-4dae-867a-79ba0918980c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.229 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ba272279-7c4e-47ac-b0e5-163367631c2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.232 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5fdf2401-e6d1-42e6-81a1-bf97d6a32924]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:25 np0005629333 NetworkManager[49836]: <info>  [1772022145.2477] device (tap6a1663dd-20): carrier: link connected
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.249 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d24c77fb-4c7a-4b69-b1ac-a63b11716e58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.263 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[81394fbb-858a-4048-bc15-e7bf6e428746]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a1663dd-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:0c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411465, 'reachable_time': 26508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273287, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.276 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8852c2ab-f587-49d3-9740-458165f18f0a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe47:cb3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411465, 'tstamp': 411465}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273290, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
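
The fe80:: address in the RTM_NEWADDR reply above is the kernel's EUI-64 link-local for the tap device's MAC fa:16:3e:47:0c:b3: flip the universal/local bit of the first octet and splice ff:fe into the middle. A minimal Python sketch (the function name is illustrative, not taken from the agent's code):

    def mac_to_ipv6_link_local(mac: str) -> str:
        # EUI-64: flip the U/L bit (0x02) of the first octet, insert ff:fe
        # between the OUI and the NIC half, and prefix with fe80::/64.
        b = [int(x, 16) for x in mac.split(":")]
        b[0] ^= 0x02
        groups = [(b[0] << 8) | b[1], (b[2] << 8) | 0xFF,
                  0xFE00 | b[3], (b[4] << 8) | b[5]]
        return "fe80::" + ":".join(f"{g:x}" for g in groups)

    assert mac_to_ipv6_link_local("fa:16:3e:47:0c:b3") == "fe80::f816:3eff:fe47:cb3"
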
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.291 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4caf6a0b-4170-48c2-882f-1b220f33c95e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a1663dd-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:47:0c:b3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411465, 'reachable_time': 26508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273291, 'error': None, 'target': 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
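
Comparing the two RTM_NEWLINK dumps above: tx_packets went 1 to 2 and tx_bytes 90 to 176, i.e. one more 86-byte frame left the device; the IPv6-level counters (outoctets 76 to 148, outmcastpkts 1 to 2) account for 72 of those bytes and the 14-byte Ethernet header for the rest. A small helper to surface such deltas, with values copied from the replies above:

    def stats_delta(before: dict, after: dict) -> dict:
        # Return only the counters that changed between two IFLA_STATS64 dumps.
        return {k: after[k] - before[k] for k in before if after[k] != before[k]}

    print(stats_delta({"tx_packets": 1, "tx_bytes": 90},
                      {"tx_packets": 2, "tx_bytes": 176}))
    # -> {'tx_packets': 1, 'tx_bytes': 86}
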
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.318 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[76bfe854-1961-4509-acbb-98aee18e600e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.373 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b69f7a06-deae-4d3c-a14d-22c73d799442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.374 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a1663dd-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:25 np0005629333 kernel: tap6a1663dd-20: entered promiscuous mode
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.375 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.375 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a1663dd-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.377 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:25 np0005629333 NetworkManager[49836]: <info>  [1772022145.3778] manager: (tap6a1663dd-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.381 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a1663dd-20, col_values=(('external_ids', {'iface-id': '7f515737-b36a-4caa-affc-8ad110539172'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
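
The three single-command transactions above (DelPortCommand from br-ex with if_exists, AddPortCommand onto br-int, DbSetCommand of external_ids:iface-id) rewire the metadata tap and hand it to OVN, which binds ports by iface-id. A rough ovsdbapp equivalent, batched into one transaction for brevity (the agent runs them as separate "txn n=1" transactions, as logged) and assuming the default local ovsdb-server socket:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed endpoint; the agent connects via its own configured socket.
    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                          "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port("tap6a1663dd-20", bridge="br-ex", if_exists=True))
        txn.add(api.add_port("br-int", "tap6a1663dd-20", may_exist=True))
        txn.add(api.db_set("Interface", "tap6a1663dd-20",
                           ("external_ids",
                            {"iface-id": "7f515737-b36a-4caa-affc-8ad110539172"})))
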
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.383 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:25Z|00232|binding|INFO|Releasing lport 7f515737-b36a-4caa-affc-8ad110539172 from this chassis (sb_readonly=0)
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.386 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6a1663dd-25aa-4b0e-a1c0-42434d371a21.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6a1663dd-25aa-4b0e-a1c0-42434d371a21.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.386 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a807a260-bc60-458b-a59f-675b10702446]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.387 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-6a1663dd-25aa-4b0e-a1c0-42434d371a21
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/6a1663dd-25aa-4b0e-a1c0-42434d371a21.pid.haproxy
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 6a1663dd-25aa-4b0e-a1c0-42434d371a21
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:22:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:25.388 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'env', 'PROCESS_TAG=haproxy-6a1663dd-25aa-4b0e-a1c0-42434d371a21', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6a1663dd-25aa-4b0e-a1c0-42434d371a21.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
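
The generated haproxy config binds 169.254.169.254:80 inside the ovnmeta- namespace, forwards requests to the Unix-socket backend /var/lib/neutron/metadata_proxy (haproxy treats a server address beginning with "/" as a Unix socket), and stamps each request with X-OVN-Network-ID so the metadata service can identify the network; the rootwrap command above then launches haproxy inside that namespace. A hypothetical smoke test, which would have to run in the same namespace (e.g. via ip netns exec ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21 python3 probe.py):

    import http.client

    # 169.254.169.254:80 only exists inside the ovnmeta- namespace.
    conn = http.client.HTTPConnection("169.254.169.254", 80, timeout=5)
    conn.request("GET", "/openstack")
    resp = conn.getresponse()
    print(resp.status, resp.reason)
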
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.388 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:25 np0005629333 wizardly_heyrovsky[273216]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:22:25 np0005629333 wizardly_heyrovsky[273216]: --> All data devices are unavailable
Feb 25 07:22:25 np0005629333 systemd[1]: libpod-49c9e5346b672ece8579d0117de3a76ccd1f9ca647f4c1d476a039dc28a4dea3.scope: Deactivated successfully.
Feb 25 07:22:25 np0005629333 podman[273180]: 2026-02-25 12:22:25.530773901 +0000 UTC m=+0.647127014 container died 49c9e5346b672ece8579d0117de3a76ccd1f9ca647f4c1d476a039dc28a4dea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 07:22:25 np0005629333 systemd[1]: var-lib-containers-storage-overlay-b24ebb71a112860422c43da6a45f97454bba9286c9a336e7c807dd647040b23f-merged.mount: Deactivated successfully.
Feb 25 07:22:25 np0005629333 podman[273180]: 2026-02-25 12:22:25.587139714 +0000 UTC m=+0.703492847 container remove 49c9e5346b672ece8579d0117de3a76ccd1f9ca647f4c1d476a039dc28a4dea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 07:22:25 np0005629333 systemd[1]: libpod-conmon-49c9e5346b672ece8579d0117de3a76ccd1f9ca647f4c1d476a039dc28a4dea3.scope: Deactivated successfully.
Feb 25 07:22:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:22:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1617231523' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.653 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022145.652667, ffd5cedf-474c-4977-807e-22a276ceb002 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.654 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] VM Started (Lifecycle Event)#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.660 244018 DEBUG oslo_concurrency.processutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.685 244018 DEBUG nova.storage.rbd_utils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:22:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 246 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 440 KiB/s rd, 6.0 MiB/s wr, 159 op/s
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.689 244018 DEBUG oslo_concurrency.processutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
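
Nova's RBD utils discover the Ceph monitor addresses by shelling out to the ceph CLI, as the CMD lines above show; oslo's processutils.execute runs the command and raises ProcessExecutionError on a non-zero exit. A sketch of the same call, with the monitor names picked out of the JSON (field names as produced by "ceph mon dump --format=json"):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    print([m["name"] for m in json.loads(out)["mons"]])
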
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.709 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.716 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022145.652835, ffd5cedf-474c-4977-807e-22a276ceb002 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.716 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.738 244018 DEBUG nova.compute.manager [req-473462fe-6654-4592-8f2b-0224faa28ce5 req-ecc3b36b-d7b6-471f-afb0-45775afec12b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Received event network-vif-plugged-fd320060-eaf8-4fd7-9325-e3793617bc7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.739 244018 DEBUG oslo_concurrency.lockutils [req-473462fe-6654-4592-8f2b-0224faa28ce5 req-ecc3b36b-d7b6-471f-afb0-45775afec12b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ffd5cedf-474c-4977-807e-22a276ceb002-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.739 244018 DEBUG oslo_concurrency.lockutils [req-473462fe-6654-4592-8f2b-0224faa28ce5 req-ecc3b36b-d7b6-471f-afb0-45775afec12b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ffd5cedf-474c-4977-807e-22a276ceb002-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.739 244018 DEBUG oslo_concurrency.lockutils [req-473462fe-6654-4592-8f2b-0224faa28ce5 req-ecc3b36b-d7b6-471f-afb0-45775afec12b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ffd5cedf-474c-4977-807e-22a276ceb002-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
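
The Acquiring/acquired/released trio above is oslo.concurrency's standard lock tracing; the per-instance "<uuid>-events" lock serializes event bookkeeping for one instance. Equivalent usage of the same primitive, with the lock name copied from the log:

    from oslo_concurrency import lockutils

    with lockutils.lock("ffd5cedf-474c-4977-807e-22a276ceb002-events"):
        pass  # the _pop_event() body runs here; held ~0.000s above
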
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.740 244018 DEBUG nova.compute.manager [req-473462fe-6654-4592-8f2b-0224faa28ce5 req-ecc3b36b-d7b6-471f-afb0-45775afec12b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Processing event network-vif-plugged-fd320060-eaf8-4fd7-9325-e3793617bc7b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.741 244018 DEBUG nova.compute.manager [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.744 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.745 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.749 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022145.7451842, ffd5cedf-474c-4977-807e-22a276ceb002 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.749 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.752 244018 INFO nova.virt.libvirt.driver [-] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Instance spawned successfully.#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.752 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:22:25 np0005629333 podman[273432]: 2026-02-25 12:22:25.755402721 +0000 UTC m=+0.051554167 container create f7ea2e331bb1786bb460f9ca91459e1208caaa52a96182ce9c31382a2d0bf899 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.778 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.781 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.781 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.782 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.782 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.783 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.784 244018 DEBUG nova.virt.libvirt.driver [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.788 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:22:25 np0005629333 systemd[1]: Started libpod-conmon-f7ea2e331bb1786bb460f9ca91459e1208caaa52a96182ce9c31382a2d0bf899.scope.
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.816 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
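
Power states in the sync message above are numeric (DB power_state 0 is NOSTATE, VM power_state 1 is RUNNING), and the sync is skipped because task_state is still 'spawning'. A simplified sketch of that decision, not nova's actual code:

    # Hypothetical reduction of the behaviour logged above: while a task is
    # in flight, lifecycle-driven power-state sync is skipped to avoid racing
    # the operation that is still changing the instance.
    def should_sync_power_state(task_state, db_power_state, vm_power_state):
        if task_state is not None:          # e.g. 'spawning'
            return False                    # "has a pending task ... Skip."
        return db_power_state != vm_power_state
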
Feb 25 07:22:25 np0005629333 podman[273432]: 2026-02-25 12:22:25.729903895 +0000 UTC m=+0.026055391 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:22:25 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:22:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9334fe215ef094e8186d73e604d53efb736aee56ac5daec6868b6733a23f6a9f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
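
The XFS warning above flags the classic 32-bit time_t horizon: 0x7fffffff seconds after the epoch. Checking the cutoff:

    import datetime

    # 0x7fffffff == 2147483647, the signed 32-bit time_t maximum.
    print(datetime.datetime.fromtimestamp(0x7FFFFFFF, tz=datetime.timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
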
Feb 25 07:22:25 np0005629333 podman[273432]: 2026-02-25 12:22:25.849191569 +0000 UTC m=+0.145343055 container init f7ea2e331bb1786bb460f9ca91459e1208caaa52a96182ce9c31382a2d0bf899 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.853 244018 INFO nova.compute.manager [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Took 8.71 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.853 244018 DEBUG nova.compute.manager [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:22:25 np0005629333 podman[273432]: 2026-02-25 12:22:25.854738167 +0000 UTC m=+0.150889613 container start f7ea2e331bb1786bb460f9ca91459e1208caaa52a96182ce9c31382a2d0bf899 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 25 07:22:25 np0005629333 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[273483]: [NOTICE]   (273496) : New worker (273498) forked
Feb 25 07:22:25 np0005629333 neutron-haproxy-ovnmeta-6a1663dd-25aa-4b0e-a1c0-42434d371a21[273483]: [NOTICE]   (273496) : Loading success.
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.921 244018 INFO nova.compute.manager [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Took 9.85 seconds to build instance.#033[00m
Feb 25 07:22:25 np0005629333 nova_compute[244014]: 2026-02-25 12:22:25.942 244018 DEBUG oslo_concurrency.lockutils [None req-5dfdeb05-a07a-4f8c-954d-9502ba0e78dd f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "ffd5cedf-474c-4977-807e-22a276ceb002" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:26 np0005629333 podman[273519]: 2026-02-25 12:22:26.018089665 +0000 UTC m=+0.031640871 container create 14d28ffe8ddd315d40b33731843454c041db6bdb78eb16eec351d768af217925 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_aryabhata, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:22:26 np0005629333 systemd[1]: Started libpod-conmon-14d28ffe8ddd315d40b33731843454c041db6bdb78eb16eec351d768af217925.scope.
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.076 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022131.0753112, e9eb76fe-9616-40a4-aa53-0054cc5c3a57 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.077 244018 INFO nova.compute.manager [-] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:22:26 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:22:26 np0005629333 podman[273519]: 2026-02-25 12:22:26.096941998 +0000 UTC m=+0.110493264 container init 14d28ffe8ddd315d40b33731843454c041db6bdb78eb16eec351d768af217925 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:22:26 np0005629333 podman[273519]: 2026-02-25 12:22:26.004528819 +0000 UTC m=+0.018080055 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:22:26 np0005629333 podman[273519]: 2026-02-25 12:22:26.104681409 +0000 UTC m=+0.118232665 container start 14d28ffe8ddd315d40b33731843454c041db6bdb78eb16eec351d768af217925 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_aryabhata, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:22:26 np0005629333 zen_aryabhata[273535]: 167 167
Feb 25 07:22:26 np0005629333 systemd[1]: libpod-14d28ffe8ddd315d40b33731843454c041db6bdb78eb16eec351d768af217925.scope: Deactivated successfully.
Feb 25 07:22:26 np0005629333 podman[273519]: 2026-02-25 12:22:26.109480895 +0000 UTC m=+0.123032151 container attach 14d28ffe8ddd315d40b33731843454c041db6bdb78eb16eec351d768af217925 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Feb 25 07:22:26 np0005629333 podman[273519]: 2026-02-25 12:22:26.110182095 +0000 UTC m=+0.123733311 container died 14d28ffe8ddd315d40b33731843454c041db6bdb78eb16eec351d768af217925 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_aryabhata, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.115 244018 DEBUG nova.network.neutron [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Updated VIF entry in instance network info cache for port fd320060-eaf8-4fd7-9325-e3793617bc7b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.116 244018 DEBUG nova.network.neutron [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Updating instance_info_cache with network_info: [{"id": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "address": "fa:16:3e:76:73:07", "network": {"id": "6a1663dd-25aa-4b0e-a1c0-42434d371a21", "bridge": "br-int", "label": "tempest-ImagesTestJSON-759982574-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "851e1817495944c1ad7ac421ab226d13", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd320060-ea", "ovs_interfaceid": "fd320060-eaf8-4fd7-9325-e3793617bc7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
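
The network_info blob cached above is plain JSON, so the interesting fields (device name, fixed IPs) are easy to pull out. A sketch over a fragment copied from the cache entry:

    import json

    network_info = json.loads("""[{"id": "fd320060-eaf8-4fd7-9325-e3793617bc7b",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.12", "type": "fixed"}]}]},
      "devname": "tapfd320060-ea"}]""")

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["devname"], ip["address"])  # tapfd320060-ea 10.100.0.12
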
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.119 244018 DEBUG nova.compute.manager [None req-7515afb6-cf31-44d6-a463-352e3f416521 - - - - - -] [instance: e9eb76fe-9616-40a4-aa53-0054cc5c3a57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:22:26 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6235d5e1ec722d2de6bb742e2e7263a8c6864d14277088de20fbf04ed8742f75-merged.mount: Deactivated successfully.
Feb 25 07:22:26 np0005629333 podman[273519]: 2026-02-25 12:22:26.142731071 +0000 UTC m=+0.156282277 container remove 14d28ffe8ddd315d40b33731843454c041db6bdb78eb16eec351d768af217925 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 07:22:26 np0005629333 systemd[1]: libpod-conmon-14d28ffe8ddd315d40b33731843454c041db6bdb78eb16eec351d768af217925.scope: Deactivated successfully.
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.179 244018 DEBUG oslo_concurrency.lockutils [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-ffd5cedf-474c-4977-807e-22a276ceb002" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.180 244018 DEBUG nova.compute.manager [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Received event network-changed-67db480d-6a66-4c54-be9c-5375a0d664cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.180 244018 DEBUG nova.compute.manager [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Refreshing instance network info cache due to event network-changed-67db480d-6a66-4c54-be9c-5375a0d664cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.180 244018 DEBUG oslo_concurrency.lockutils [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.181 244018 DEBUG oslo_concurrency.lockutils [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.181 244018 DEBUG nova.network.neutron [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Refreshing network info cache for port 67db480d-6a66-4c54-be9c-5375a0d664cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:22:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:26Z|00233|binding|INFO|Releasing lport 7f515737-b36a-4caa-affc-8ad110539172 from this chassis (sb_readonly=0)
Feb 25 07:22:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:22:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/201932827' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.210 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.221 244018 DEBUG oslo_concurrency.processutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.222 244018 DEBUG nova.virt.libvirt.vif [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:22:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-244265435',display_name='tempest-tempest.common.compute-instance-244265435',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-244265435',id=31,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOggDAkE/yMJzyb2WB93GwUtYZKr98E6F94LkHMlcbkujcDoPEUatUQ4cc3OD4kTDqtr/+QJWixJY7iaylVdULWnRyMAsh+/JrBYSM2MkEWihh9m/4SAmgeADQeMRQwlMA==',key_name='tempest-keypair-1926067471',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-y80s0s52',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:22:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=33db1662-e67d-41de-b8d6-ea93b40cf7cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "67db480d-6a66-4c54-be9c-5375a0d664cd", "address": "fa:16:3e:4a:ac:51", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67db480d-6a", "ovs_interfaceid": "67db480d-6a66-4c54-be9c-5375a0d664cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.223 244018 DEBUG nova.network.os_vif_util [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "67db480d-6a66-4c54-be9c-5375a0d664cd", "address": "fa:16:3e:4a:ac:51", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67db480d-6a", "ovs_interfaceid": "67db480d-6a66-4c54-be9c-5375a0d664cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.223 244018 DEBUG nova.network.os_vif_util [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:ac:51,bridge_name='br-int',has_traffic_filtering=True,id=67db480d-6a66-4c54-be9c-5375a0d664cd,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67db480d-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
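
Both VIF representations above carry devname tap67db480d-6a, i.e. "tap" plus the leading characters of the Neutron port UUID, truncated to fit Linux's interface-name limit. The 14-character cut below is inferred from the names in this log, not taken from nova's source:

    def tap_name(port_id: str, prefix: str = "tap") -> str:
        # IFNAMSIZ caps interface names, so the port UUID is truncated.
        return (prefix + port_id)[:14]

    assert tap_name("67db480d-6a66-4c54-be9c-5375a0d664cd") == "tap67db480d-6a"
    assert tap_name("fd320060-eaf8-4fd7-9325-e3793617bc7b") == "tapfd320060-ea"
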
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.224 244018 DEBUG nova.objects.instance [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 33db1662-e67d-41de-b8d6-ea93b40cf7cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:22:26 np0005629333 podman[273561]: 2026-02-25 12:22:26.272673809 +0000 UTC m=+0.038598480 container create 18dcbb910735cf0f97b9891ed6eee01ee4f7b55b904609592521b49aa82530b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:22:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:26Z|00234|binding|INFO|Releasing lport 7f515737-b36a-4caa-affc-8ad110539172 from this chassis (sb_readonly=0)
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.298 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:26 np0005629333 systemd[1]: Started libpod-conmon-18dcbb910735cf0f97b9891ed6eee01ee4f7b55b904609592521b49aa82530b1.scope.
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.338 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:22:26 np0005629333 nova_compute[244014]:  <uuid>33db1662-e67d-41de-b8d6-ea93b40cf7cd</uuid>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:  <name>instance-0000001f</name>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <nova:name>tempest-tempest.common.compute-instance-244265435</nova:name>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:22:25</nova:creationTime>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:22:26 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:        <nova:user uuid="ea407839a07d46608b6348caf676d12d">tempest-AttachInterfacesTestJSON-1625212989-project-member</nova:user>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:        <nova:project uuid="6a771ad0ce454d809d66825f69248fa7">tempest-AttachInterfacesTestJSON-1625212989</nova:project>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:        <nova:port uuid="67db480d-6a66-4c54-be9c-5375a0d664cd">
Feb 25 07:22:26 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <entry name="serial">33db1662-e67d-41de-b8d6-ea93b40cf7cd</entry>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <entry name="uuid">33db1662-e67d-41de-b8d6-ea93b40cf7cd</entry>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk">
Feb 25 07:22:26 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:22:26 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk.config">
Feb 25 07:22:26 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:22:26 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:4a:ac:51"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <target dev="tap67db480d-6a"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/33db1662-e67d-41de-b8d6-ea93b40cf7cd/console.log" append="off"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:22:26 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:22:26 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:22:26 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:22:26 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.343 244018 DEBUG nova.compute.manager [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Preparing to wait for external event network-vif-plugged-67db480d-6a66-4c54-be9c-5375a0d664cd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 07:22:26 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.343 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.345 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.345 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.346 244018 DEBUG nova.virt.libvirt.vif [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:22:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-244265435',display_name='tempest-tempest.common.compute-instance-244265435',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-244265435',id=31,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOggDAkE/yMJzyb2WB93GwUtYZKr98E6F94LkHMlcbkujcDoPEUatUQ4cc3OD4kTDqtr/+QJWixJY7iaylVdULWnRyMAsh+/JrBYSM2MkEWihh9m/4SAmgeADQeMRQwlMA==',key_name='tempest-keypair-1926067471',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6a771ad0ce454d809d66825f69248fa7',ramdisk_id='',reservation_id='r-y80s0s52',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1625212989',owner_user_name='tempest-AttachInterfacesTestJSON-1625212989-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:22:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea407839a07d46608b6348caf676d12d',uuid=33db1662-e67d-41de-b8d6-ea93b40cf7cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "67db480d-6a66-4c54-be9c-5375a0d664cd", "address": "fa:16:3e:4a:ac:51", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67db480d-6a", "ovs_interfaceid": "67db480d-6a66-4c54-be9c-5375a0d664cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 07:22:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37ce79f13d78a07f954baa22de194b13d7b0ab9684c1feb2ee6932b4319eace9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.347 244018 DEBUG nova.network.os_vif_util [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converting VIF {"id": "67db480d-6a66-4c54-be9c-5375a0d664cd", "address": "fa:16:3e:4a:ac:51", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67db480d-6a", "ovs_interfaceid": "67db480d-6a66-4c54-be9c-5375a0d664cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.348 244018 DEBUG nova.network.os_vif_util [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4a:ac:51,bridge_name='br-int',has_traffic_filtering=True,id=67db480d-6a66-4c54-be9c-5375a0d664cd,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67db480d-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.348 244018 DEBUG os_vif [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:ac:51,bridge_name='br-int',has_traffic_filtering=True,id=67db480d-6a66-4c54-be9c-5375a0d664cd,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67db480d-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 07:22:26 np0005629333 podman[273561]: 2026-02-25 12:22:26.254315676 +0000 UTC m=+0.020240407 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.349 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:22:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37ce79f13d78a07f954baa22de194b13d7b0ab9684c1feb2ee6932b4319eace9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.349 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.350 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 07:22:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37ce79f13d78a07f954baa22de194b13d7b0ab9684c1feb2ee6932b4319eace9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:22:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37ce79f13d78a07f954baa22de194b13d7b0ab9684c1feb2ee6932b4319eace9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.353 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.353 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67db480d-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.353 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap67db480d-6a, col_values=(('external_ids', {'iface-id': '67db480d-6a66-4c54-be9c-5375a0d664cd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4a:ac:51', 'vm-uuid': '33db1662-e67d-41de-b8d6-ea93b40cf7cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:22:26 np0005629333 NetworkManager[49836]: <info>  [1772022146.3556] manager: (tap67db480d-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.357 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.359 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 07:22:26 np0005629333 podman[273561]: 2026-02-25 12:22:26.369091662 +0000 UTC m=+0.135016393 container init 18dcbb910735cf0f97b9891ed6eee01ee4f7b55b904609592521b49aa82530b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.370 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.371 244018 INFO os_vif [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4a:ac:51,bridge_name='br-int',has_traffic_filtering=True,id=67db480d-6a66-4c54-be9c-5375a0d664cd,network=Network(08121372-a435-401a-b405-778e10d8c2e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap67db480d-6a')
Feb 25 07:22:26 np0005629333 podman[273561]: 2026-02-25 12:22:26.380331222 +0000 UTC m=+0.146255923 container start 18dcbb910735cf0f97b9891ed6eee01ee4f7b55b904609592521b49aa82530b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_goldstine, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:22:26 np0005629333 podman[273561]: 2026-02-25 12:22:26.38448503 +0000 UTC m=+0.150409741 container attach 18dcbb910735cf0f97b9891ed6eee01ee4f7b55b904609592521b49aa82530b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.495 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.497 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.497 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] No VIF found with MAC fa:16:3e:4a:ac:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.498 244018 INFO nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Using config drive
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.529 244018 DEBUG nova.storage.rbd_utils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]: {
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:    "0": [
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:        {
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "devices": [
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "/dev/loop3"
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            ],
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "lv_name": "ceph_lv0",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "lv_size": "21470642176",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "name": "ceph_lv0",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "tags": {
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.cluster_name": "ceph",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.crush_device_class": "",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.encrypted": "0",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.objectstore": "bluestore",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.osd_id": "0",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.type": "block",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.vdo": "0",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.with_tpm": "0"
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            },
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "type": "block",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "vg_name": "ceph_vg0"
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:        }
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:    ],
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:    "1": [
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:        {
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "devices": [
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "/dev/loop4"
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            ],
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "lv_name": "ceph_lv1",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "lv_size": "21470642176",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "name": "ceph_lv1",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "tags": {
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.cluster_name": "ceph",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.crush_device_class": "",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.encrypted": "0",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.objectstore": "bluestore",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.osd_id": "1",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.type": "block",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.vdo": "0",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.with_tpm": "0"
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            },
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "type": "block",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "vg_name": "ceph_vg1"
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:        }
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:    ],
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:    "2": [
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:        {
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "devices": [
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "/dev/loop5"
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            ],
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "lv_name": "ceph_lv2",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "lv_size": "21470642176",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "name": "ceph_lv2",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "tags": {
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.cluster_name": "ceph",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.crush_device_class": "",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.encrypted": "0",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.objectstore": "bluestore",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.osd_id": "2",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.type": "block",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.vdo": "0",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:                "ceph.with_tpm": "0"
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            },
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "type": "block",
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:            "vg_name": "ceph_vg2"
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:        }
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]:    ]
Feb 25 07:22:26 np0005629333 awesome_goldstine[273577]: }
Feb 25 07:22:26 np0005629333 systemd[1]: libpod-18dcbb910735cf0f97b9891ed6eee01ee4f7b55b904609592521b49aa82530b1.scope: Deactivated successfully.
Feb 25 07:22:26 np0005629333 podman[273561]: 2026-02-25 12:22:26.683153288 +0000 UTC m=+0.449077999 container died 18dcbb910735cf0f97b9891ed6eee01ee4f7b55b904609592521b49aa82530b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 07:22:26 np0005629333 podman[273561]: 2026-02-25 12:22:26.727609542 +0000 UTC m=+0.493534243 container remove 18dcbb910735cf0f97b9891ed6eee01ee4f7b55b904609592521b49aa82530b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_goldstine, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:22:26 np0005629333 systemd[1]: var-lib-containers-storage-overlay-37ce79f13d78a07f954baa22de194b13d7b0ab9684c1feb2ee6932b4319eace9-merged.mount: Deactivated successfully.
Feb 25 07:22:26 np0005629333 systemd[1]: libpod-conmon-18dcbb910735cf0f97b9891ed6eee01ee4f7b55b904609592521b49aa82530b1.scope: Deactivated successfully.
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.897 244018 INFO nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Creating config drive at /var/lib/nova/instances/33db1662-e67d-41de-b8d6-ea93b40cf7cd/disk.config
Feb 25 07:22:26 np0005629333 nova_compute[244014]: 2026-02-25 12:22:26.903 244018 DEBUG oslo_concurrency.processutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/33db1662-e67d-41de-b8d6-ea93b40cf7cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpe5jtm2do execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.036 244018 DEBUG oslo_concurrency.processutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/33db1662-e67d-41de-b8d6-ea93b40cf7cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpe5jtm2do" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.066 244018 DEBUG nova.storage.rbd_utils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.077 244018 DEBUG oslo_concurrency.processutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/33db1662-e67d-41de-b8d6-ea93b40cf7cd/disk.config 33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:22:27 np0005629333 podman[273702]: 2026-02-25 12:22:27.188862016 +0000 UTC m=+0.052816264 container create ca0a471a1e400be034cf310216ad470212c4060b6cf1929e7930a9ef940cff49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:22:27 np0005629333 systemd[1]: Started libpod-conmon-ca0a471a1e400be034cf310216ad470212c4060b6cf1929e7930a9ef940cff49.scope.
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.240 244018 DEBUG oslo_concurrency.processutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/33db1662-e67d-41de-b8d6-ea93b40cf7cd/disk.config 33db1662-e67d-41de-b8d6-ea93b40cf7cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.241 244018 INFO nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Deleting local config drive /var/lib/nova/instances/33db1662-e67d-41de-b8d6-ea93b40cf7cd/disk.config because it was imported into RBD.
Feb 25 07:22:27 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:22:27 np0005629333 podman[273702]: 2026-02-25 12:22:27.163254208 +0000 UTC m=+0.027208476 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:22:27 np0005629333 podman[273702]: 2026-02-25 12:22:27.269811549 +0000 UTC m=+0.133765867 container init ca0a471a1e400be034cf310216ad470212c4060b6cf1929e7930a9ef940cff49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:22:27 np0005629333 podman[273702]: 2026-02-25 12:22:27.278439975 +0000 UTC m=+0.142394203 container start ca0a471a1e400be034cf310216ad470212c4060b6cf1929e7930a9ef940cff49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 07:22:27 np0005629333 NetworkManager[49836]: <info>  [1772022147.2808] manager: (tap67db480d-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/112)
Feb 25 07:22:27 np0005629333 podman[273702]: 2026-02-25 12:22:27.282100259 +0000 UTC m=+0.146054517 container attach ca0a471a1e400be034cf310216ad470212c4060b6cf1929e7930a9ef940cff49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 07:22:27 np0005629333 kernel: tap67db480d-6a: entered promiscuous mode
Feb 25 07:22:27 np0005629333 systemd-udevd[273253]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:22:27 np0005629333 happy_brown[273736]: 167 167
Feb 25 07:22:27 np0005629333 systemd[1]: libpod-ca0a471a1e400be034cf310216ad470212c4060b6cf1929e7930a9ef940cff49.scope: Deactivated successfully.
Feb 25 07:22:27 np0005629333 conmon[273736]: conmon ca0a471a1e400be034cf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ca0a471a1e400be034cf310216ad470212c4060b6cf1929e7930a9ef940cff49.scope/container/memory.events
Feb 25 07:22:27 np0005629333 podman[273702]: 2026-02-25 12:22:27.288024268 +0000 UTC m=+0.151978526 container died ca0a471a1e400be034cf310216ad470212c4060b6cf1929e7930a9ef940cff49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 07:22:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:27Z|00235|binding|INFO|Claiming lport 67db480d-6a66-4c54-be9c-5375a0d664cd for this chassis.
Feb 25 07:22:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:27Z|00236|binding|INFO|67db480d-6a66-4c54-be9c-5375a0d664cd: Claiming fa:16:3e:4a:ac:51 10.100.0.11
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.291 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:22:27 np0005629333 NetworkManager[49836]: <info>  [1772022147.2983] device (tap67db480d-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:22:27 np0005629333 NetworkManager[49836]: <info>  [1772022147.2992] device (tap67db480d-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.305 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4a:ac:51 10.100.0.11'], port_security=['fa:16:3e:4a:ac:51 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '33db1662-e67d-41de-b8d6-ea93b40cf7cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08121372-a435-401a-b405-778e10d8c2e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6a771ad0ce454d809d66825f69248fa7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cc690998-27c4-44c1-9496-60c99dea61b7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=162b2412-bbdb-42ef-a136-e6c9552d22a4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=67db480d-6a66-4c54-be9c-5375a0d664cd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.308 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 67db480d-6a66-4c54-be9c-5375a0d664cd in datapath 08121372-a435-401a-b405-778e10d8c2e2 bound to our chassis
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.309 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:22:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:27Z|00237|binding|INFO|Setting lport 67db480d-6a66-4c54-be9c-5375a0d664cd ovn-installed in OVS
Feb 25 07:22:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:27Z|00238|binding|INFO|Setting lport 67db480d-6a66-4c54-be9c-5375a0d664cd up in Southbound
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.317 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08121372-a435-401a-b405-778e10d8c2e2
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.316 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.327 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9c03cc11-471d-4aed-94cd-c39827f6266e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.328 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap08121372-a1 in ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 07:22:27 np0005629333 systemd-machined[210048]: New machine qemu-35-instance-0000001f.
Feb 25 07:22:27 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d4dc5cbccd02c79d4bf120842d6b24638ebb8f4160b4b0607d3d6400ff10a598-merged.mount: Deactivated successfully.
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.332 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap08121372-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.333 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e6d2b5-e606-4dab-b5f9-38b3f5c960c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.334 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[19c9456e-15e4-4bb5-8afd-33b310cc4839]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:22:27 np0005629333 systemd[1]: Started Virtual Machine qemu-35-instance-0000001f.
Feb 25 07:22:27 np0005629333 podman[273702]: 2026-02-25 12:22:27.345976427 +0000 UTC m=+0.209930665 container remove ca0a471a1e400be034cf310216ad470212c4060b6cf1929e7930a9ef940cff49 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.350 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[5a28507c-4e39-408e-9294-c6796346a7c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:22:27 np0005629333 systemd[1]: libpod-conmon-ca0a471a1e400be034cf310216ad470212c4060b6cf1929e7930a9ef940cff49.scope: Deactivated successfully.
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.372 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[25c69af4-3b90-4264-8916-dafeadc7b309]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.394 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[70b0b920-03a9-49b0-af2a-21f7ddc02cd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:27 np0005629333 NetworkManager[49836]: <info>  [1772022147.4124] manager: (tap08121372-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/113)
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.414 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[38a80d62-e506-4f81-97dd-2231a2b92df2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.440 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d9ad82af-45a7-481b-bcc9-b4fde177542c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.444 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[369b3b96-fca9-446b-bce2-c1d9acf52aec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:27 np0005629333 NetworkManager[49836]: <info>  [1772022147.4653] device (tap08121372-a0): carrier: link connected
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.471 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f3952259-ee53-435a-92bd-ec09fad485a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.491 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0b01ae92-fb0d-46a8-960c-bc8f1bab0dc0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411687, 'reachable_time': 20583, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273789, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.506 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[90dad9e7-25e6-448c-ac6a-3dac7611d924]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:732b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411687, 'tstamp': 411687}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273797, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.519 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cf7f8803-031a-453b-9dcf-bf7c24483210]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08121372-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:73:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411687, 'reachable_time': 20583, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273798, 'error': None, 'target': 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
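The two large privsep replies above (12:22:27.491 and .519) are RTM_NEWLINK dumps for the veth half tap08121372-a1 inside the ovnmeta-08121372-a435-401a-b405-778e10d8c2e2 namespace; the attrs layout matches pyroute2's netlink messages, which is the library neutron's privileged ip_lib drives under the hood. A minimal sketch of the same query done directly with pyroute2 (assumes root and that the namespace exists; illustrative, not the agent's actual code path):

    from pyroute2 import NetNS

    # Namespace and interface names taken from the log lines above.
    NS_NAME = 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2'

    with NetNS(NS_NAME) as ns:
        # Resolve the ifindex, then dump the full RTM_NEWLINK message for it.
        idx = ns.link_lookup(ifname='tap08121372-a1')[0]
        (link,) = ns.get_links(idx)
        print(link.get_attr('IFLA_ADDRESS'))    # fa:16:3e:2b:73:2b in the dump
        print(link.get_attr('IFLA_OPERSTATE'))  # 'UP'
        print(link.get_attr('IFLA_MTU'))        # 1500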
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.543 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4fc442bb-81ea-4c31-b0ab-7999abad7ab3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:27 np0005629333 podman[273790]: 2026-02-25 12:22:27.556240939 +0000 UTC m=+0.059996228 container create 70ae4fceb954a5983a74468cecf1e03fc820afee15e29160e0829d835a33c90b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_jepsen, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 07:22:27 np0005629333 systemd[1]: Started libpod-conmon-70ae4fceb954a5983a74468cecf1e03fc820afee15e29160e0829d835a33c90b.scope.
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.609 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[33a15e24-012a-46e9-8487-d36c3dffccfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.612 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08121372-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.612 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.613 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08121372-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:22:27 np0005629333 kernel: tap08121372-a0: entered promiscuous mode
Feb 25 07:22:27 np0005629333 NetworkManager[49836]: <info>  [1772022147.6163] manager: (tap08121372-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.615 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:27 np0005629333 podman[273790]: 2026-02-25 12:22:27.528610763 +0000 UTC m=+0.032366132 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.627 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08121372-a0, col_values=(('external_ids', {'iface-id': 'ef44c128-3fa4-4475-b63c-4818a50ede40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
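The three ovsdbapp transactions at 12:22:27.612 through .627 rewire the host side of the metadata veth: DelPortCommand removes tap08121372-a0 from br-ex if it is there (a no-op here, hence "Transaction caused no change"), AddPortCommand plugs it into br-int, and DbSetCommand stamps the Interface with the iface-id that ovn-controller matches against the logical port. A hedged sketch of the equivalent operations through the ovs-vsctl CLI (illustrative only; the agent commits these via ovsdbapp, not a subprocess):

    import subprocess

    PORT = 'tap08121372-a0'
    IFACE_ID = 'ef44c128-3fa4-4475-b63c-4818a50ede40'  # from DbSetCommand above

    # Same three steps as the logged transactions, one ovs-vsctl call each.
    subprocess.run(['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', PORT], check=True)
    subprocess.run(['ovs-vsctl', '--may-exist', 'add-port', 'br-int', PORT], check=True)
    subprocess.run(['ovs-vsctl', 'set', 'Interface', PORT,
                    'external_ids:iface-id=' + IFACE_ID], check=True)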
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.628 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.629 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:27Z|00239|binding|INFO|Releasing lport ef44c128-3fa4-4475-b63c-4818a50ede40 from this chassis (sb_readonly=0)
Feb 25 07:22:27 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.639 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/08121372-a435-401a-b405-778e10d8c2e2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/08121372-a435-401a-b405-778e10d8c2e2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.639 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.639 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bafcfce1-722b-4e15-89bd-bf1929f7745a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.640 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-08121372-a435-401a-b405-778e10d8c2e2
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/08121372-a435-401a-b405-778e10d8c2e2.pid.haproxy
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 08121372-a435-401a-b405-778e10d8c2e2
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:22:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:22:27.641 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-08121372-a435-401a-b405-778e10d8c2e2', 'env', 'PROCESS_TAG=haproxy-08121372-a435-401a-b405-778e10d8c2e2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/08121372-a435-401a-b405-778e10d8c2e2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
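The haproxy_cfg dumped above binds 169.254.169.254:80 inside the ovnmeta namespace, proxies requests to the agent's /var/lib/neutron/metadata_proxy socket, and adds the X-OVN-Network-ID header so the metadata service can map each request back to network 08121372-a435-401a-b405-778e10d8c2e2. Once the proxy reports "Loading success" further down, a guest on that network can fetch its metadata through the usual link-local address; a minimal sketch of such a request, run from inside the guest rather than the host (assumes the requests package is available in the guest image):

    import requests

    # Well-known link-local metadata endpoint served by the haproxy above.
    resp = requests.get(
        'http://169.254.169.254/openstack/latest/meta_data.json', timeout=5)
    resp.raise_for_status()
    print(resp.json()['uuid'])  # the instance UUID as nova knows it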
Feb 25 07:22:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f284aac424b9f14ba868f44fc0169e26fef3cd2f4abef20321764ea577401fb3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:22:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f284aac424b9f14ba868f44fc0169e26fef3cd2f4abef20321764ea577401fb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:22:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f284aac424b9f14ba868f44fc0169e26fef3cd2f4abef20321764ea577401fb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:22:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f284aac424b9f14ba868f44fc0169e26fef3cd2f4abef20321764ea577401fb3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:22:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 246 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 846 KiB/s rd, 5.7 MiB/s wr, 178 op/s
Feb 25 07:22:27 np0005629333 podman[273790]: 2026-02-25 12:22:27.696793278 +0000 UTC m=+0.200548647 container init 70ae4fceb954a5983a74468cecf1e03fc820afee15e29160e0829d835a33c90b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_jepsen, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:22:27 np0005629333 podman[273790]: 2026-02-25 12:22:27.70422977 +0000 UTC m=+0.207985079 container start 70ae4fceb954a5983a74468cecf1e03fc820afee15e29160e0829d835a33c90b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_jepsen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:22:27 np0005629333 podman[273790]: 2026-02-25 12:22:27.751457673 +0000 UTC m=+0.255212962 container attach 70ae4fceb954a5983a74468cecf1e03fc820afee15e29160e0829d835a33c90b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_jepsen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.882 244018 DEBUG nova.compute.manager [req-107a8671-002e-428b-9634-4631d56464d8 req-400efaf4-da9d-4791-b8ea-1977d011bc8d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Received event network-vif-plugged-67db480d-6a66-4c54-be9c-5375a0d664cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.883 244018 DEBUG oslo_concurrency.lockutils [req-107a8671-002e-428b-9634-4631d56464d8 req-400efaf4-da9d-4791-b8ea-1977d011bc8d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.883 244018 DEBUG oslo_concurrency.lockutils [req-107a8671-002e-428b-9634-4631d56464d8 req-400efaf4-da9d-4791-b8ea-1977d011bc8d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.884 244018 DEBUG oslo_concurrency.lockutils [req-107a8671-002e-428b-9634-4631d56464d8 req-400efaf4-da9d-4791-b8ea-1977d011bc8d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.884 244018 DEBUG nova.compute.manager [req-107a8671-002e-428b-9634-4631d56464d8 req-400efaf4-da9d-4791-b8ea-1977d011bc8d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Processing event network-vif-plugged-67db480d-6a66-4c54-be9c-5375a0d664cd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.886 244018 DEBUG nova.network.neutron [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Updated VIF entry in instance network info cache for port 67db480d-6a66-4c54-be9c-5375a0d664cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.887 244018 DEBUG nova.network.neutron [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Updating instance_info_cache with network_info: [{"id": "67db480d-6a66-4c54-be9c-5375a0d664cd", "address": "fa:16:3e:4a:ac:51", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67db480d-6a", "ovs_interfaceid": "67db480d-6a66-4c54-be9c-5375a0d664cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.908 244018 DEBUG oslo_concurrency.lockutils [req-70d9c3fa-71ed-4bdc-afe4-98c332a2528e req-e4d2deeb-5442-47a1-a48a-3f7e3df74cae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.947 244018 DEBUG nova.compute.manager [req-f61295ba-06c4-4eef-b5d0-40194fefef92 req-8196fe9f-d1e1-4d7b-bc41-91cdabef1064 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Received event network-vif-plugged-fd320060-eaf8-4fd7-9325-e3793617bc7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.948 244018 DEBUG oslo_concurrency.lockutils [req-f61295ba-06c4-4eef-b5d0-40194fefef92 req-8196fe9f-d1e1-4d7b-bc41-91cdabef1064 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ffd5cedf-474c-4977-807e-22a276ceb002-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.948 244018 DEBUG oslo_concurrency.lockutils [req-f61295ba-06c4-4eef-b5d0-40194fefef92 req-8196fe9f-d1e1-4d7b-bc41-91cdabef1064 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ffd5cedf-474c-4977-807e-22a276ceb002-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.949 244018 DEBUG oslo_concurrency.lockutils [req-f61295ba-06c4-4eef-b5d0-40194fefef92 req-8196fe9f-d1e1-4d7b-bc41-91cdabef1064 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ffd5cedf-474c-4977-807e-22a276ceb002-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.949 244018 DEBUG nova.compute.manager [req-f61295ba-06c4-4eef-b5d0-40194fefef92 req-8196fe9f-d1e1-4d7b-bc41-91cdabef1064 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] No waiting events found dispatching network-vif-plugged-fd320060-eaf8-4fd7-9325-e3793617bc7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.949 244018 WARNING nova.compute.manager [req-f61295ba-06c4-4eef-b5d0-40194fefef92 req-8196fe9f-d1e1-4d7b-bc41-91cdabef1064 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Received unexpected event network-vif-plugged-fd320060-eaf8-4fd7-9325-e3793617bc7b for instance with vm_state active and task_state None.#033[00m
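The Acquiring/acquired/released triplets around the "<instance-uuid>-events" locks come from oslo_concurrency's lockutils, which nova uses to serialize external-event dispatch per instance; the WARNING at 12:22:27.949 only means the network-vif-plugged event arrived after ffd5cedf-474c-4977-807e-22a276ceb002 was already active, so no waiter was registered for it. A minimal sketch of the same locking pattern (the body is hypothetical; nova's real critical section pops or parks event waiters):

    from oslo_concurrency import lockutils

    INSTANCE_UUID = 'ffd5cedf-474c-4977-807e-22a276ceb002'  # from the log

    # One lock per instance, named like nova's "<uuid>-events" convention.
    with lockutils.lock(INSTANCE_UUID + '-events'):
        # Critical section: match the incoming network-vif-plugged event
        # against any registered waiter, exactly one thread at a time.
        pass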
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.959 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022147.9587524, 33db1662-e67d-41de-b8d6-ea93b40cf7cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.959 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] VM Started (Lifecycle Event)#033[00m
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.961 244018 DEBUG nova.compute.manager [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.964 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.967 244018 INFO nova.virt.libvirt.driver [-] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Instance spawned successfully.#033[00m
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.967 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.983 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.989 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.992 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.992 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.993 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.993 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.994 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:22:27 np0005629333 nova_compute[244014]: 2026-02-25 12:22:27.994 244018 DEBUG nova.virt.libvirt.driver [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:22:28 np0005629333 nova_compute[244014]: 2026-02-25 12:22:28.024 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:22:28 np0005629333 nova_compute[244014]: 2026-02-25 12:22:28.024 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022147.960511, 33db1662-e67d-41de-b8d6-ea93b40cf7cd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:22:28 np0005629333 nova_compute[244014]: 2026-02-25 12:22:28.024 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:22:28 np0005629333 podman[273894]: 2026-02-25 12:22:28.039515319 +0000 UTC m=+0.053218565 container create 015e855d399683b7ef58ebd371f2f8e9d0e1f2955b1ac09c0dd295f2c2bd68f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 25 07:22:28 np0005629333 nova_compute[244014]: 2026-02-25 12:22:28.049 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:22:28 np0005629333 nova_compute[244014]: 2026-02-25 12:22:28.052 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022147.9633794, 33db1662-e67d-41de-b8d6-ea93b40cf7cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:22:28 np0005629333 nova_compute[244014]: 2026-02-25 12:22:28.052 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:22:28 np0005629333 nova_compute[244014]: 2026-02-25 12:22:28.059 244018 INFO nova.compute.manager [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Took 8.54 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:22:28 np0005629333 nova_compute[244014]: 2026-02-25 12:22:28.060 244018 DEBUG nova.compute.manager [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:22:28 np0005629333 systemd[1]: Started libpod-conmon-015e855d399683b7ef58ebd371f2f8e9d0e1f2955b1ac09c0dd295f2c2bd68f0.scope.
Feb 25 07:22:28 np0005629333 nova_compute[244014]: 2026-02-25 12:22:28.084 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:22:28 np0005629333 nova_compute[244014]: 2026-02-25 12:22:28.087 244018 DEBUG oslo_concurrency.lockutils [None req-554dfaaf-0154-490c-987b-24235c51a652 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Acquiring lock "ffd5cedf-474c-4977-807e-22a276ceb002" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:28 np0005629333 nova_compute[244014]: 2026-02-25 12:22:28.087 244018 DEBUG oslo_concurrency.lockutils [None req-554dfaaf-0154-490c-987b-24235c51a652 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lock "ffd5cedf-474c-4977-807e-22a276ceb002" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:28 np0005629333 nova_compute[244014]: 2026-02-25 12:22:28.087 244018 DEBUG nova.compute.manager [None req-554dfaaf-0154-490c-987b-24235c51a652 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:22:28 np0005629333 nova_compute[244014]: 2026-02-25 12:22:28.090 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:22:28 np0005629333 nova_compute[244014]: 2026-02-25 12:22:28.093 244018 DEBUG nova.compute.manager [None req-554dfaaf-0154-490c-987b-24235c51a652 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Feb 25 07:22:28 np0005629333 nova_compute[244014]: 2026-02-25 12:22:28.093 244018 DEBUG nova.objects.instance [None req-554dfaaf-0154-490c-987b-24235c51a652 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] Lazy-loading 'flavor' on Instance uuid ffd5cedf-474c-4977-807e-22a276ceb002 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:22:28 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:22:28 np0005629333 podman[273894]: 2026-02-25 12:22:28.009441004 +0000 UTC m=+0.023144280 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:22:28 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef7d83654588edb9f69d59a0176fd12c6608dd39d362a35f0ac0ad74b24bf41d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:22:28 np0005629333 nova_compute[244014]: 2026-02-25 12:22:28.132 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:22:28 np0005629333 nova_compute[244014]: 2026-02-25 12:22:28.134 244018 DEBUG nova.virt.libvirt.driver [None req-554dfaaf-0154-490c-987b-24235c51a652 f1cfc3e643014e1c984d71182534fd24 851e1817495944c1ad7ac421ab226d13 - - default default] [instance: ffd5cedf-474c-4977-807e-22a276ceb002] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 25 07:22:28 np0005629333 podman[273894]: 2026-02-25 12:22:28.149012495 +0000 UTC m=+0.162715741 container init 015e855d399683b7ef58ebd371f2f8e9d0e1f2955b1ac09c0dd295f2c2bd68f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 07:22:28 np0005629333 nova_compute[244014]: 2026-02-25 12:22:28.150 244018 INFO nova.compute.manager [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Took 10.07 seconds to build instance.#033[00m
Feb 25 07:22:28 np0005629333 podman[273894]: 2026-02-25 12:22:28.156359814 +0000 UTC m=+0.170063060 container start 015e855d399683b7ef58ebd371f2f8e9d0e1f2955b1ac09c0dd295f2c2bd68f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 25 07:22:28 np0005629333 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[273920]: [NOTICE]   (273952) : New worker (273959) forked
Feb 25 07:22:28 np0005629333 neutron-haproxy-ovnmeta-08121372-a435-401a-b405-778e10d8c2e2[273920]: [NOTICE]   (273952) : Loading success.
Feb 25 07:22:28 np0005629333 nova_compute[244014]: 2026-02-25 12:22:28.183 244018 DEBUG oslo_concurrency.lockutils [None req-9b01ced9-51bd-494d-beb1-0eb4f2a0fd32 ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.358s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:28 np0005629333 lvm[273987]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:22:28 np0005629333 lvm[273987]: VG ceph_vg1 finished
Feb 25 07:22:28 np0005629333 lvm[273986]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:22:28 np0005629333 lvm[273986]: VG ceph_vg0 finished
Feb 25 07:22:28 np0005629333 lvm[273989]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:22:28 np0005629333 lvm[273989]: VG ceph_vg2 finished
Feb 25 07:22:28 np0005629333 unruffled_jepsen[273815]: {}
Feb 25 07:22:28 np0005629333 systemd[1]: libpod-70ae4fceb954a5983a74468cecf1e03fc820afee15e29160e0829d835a33c90b.scope: Deactivated successfully.
Feb 25 07:22:28 np0005629333 podman[273992]: 2026-02-25 12:22:28.484321215 +0000 UTC m=+0.023818539 container died 70ae4fceb954a5983a74468cecf1e03fc820afee15e29160e0829d835a33c90b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 07:22:28 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f284aac424b9f14ba868f44fc0169e26fef3cd2f4abef20321764ea577401fb3-merged.mount: Deactivated successfully.
Feb 25 07:22:28 np0005629333 podman[273992]: 2026-02-25 12:22:28.522559283 +0000 UTC m=+0.062056537 container remove 70ae4fceb954a5983a74468cecf1e03fc820afee15e29160e0829d835a33c90b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_jepsen, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3)
Feb 25 07:22:28 np0005629333 systemd[1]: libpod-conmon-70ae4fceb954a5983a74468cecf1e03fc820afee15e29160e0829d835a33c90b.scope: Deactivated successfully.
Feb 25 07:22:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:22:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:22:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:22:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:22:28 np0005629333 nova_compute[244014]: 2026-02-25 12:22:28.716 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:22:29 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:22:29 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:22:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 246 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 758 KiB/s rd, 4.9 MiB/s wr, 151 op/s
Feb 25 07:22:30 np0005629333 nova_compute[244014]: 2026-02-25 12:22:30.207 244018 DEBUG nova.compute.manager [req-80e6a839-d4d6-4b67-a2c8-7f0ca5f4b737 req-83d22b16-145b-4fd1-9297-95c721f320a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Received event network-vif-plugged-67db480d-6a66-4c54-be9c-5375a0d664cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:22:30 np0005629333 nova_compute[244014]: 2026-02-25 12:22:30.209 244018 DEBUG oslo_concurrency.lockutils [req-80e6a839-d4d6-4b67-a2c8-7f0ca5f4b737 req-83d22b16-145b-4fd1-9297-95c721f320a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:30 np0005629333 nova_compute[244014]: 2026-02-25 12:22:30.210 244018 DEBUG oslo_concurrency.lockutils [req-80e6a839-d4d6-4b67-a2c8-7f0ca5f4b737 req-83d22b16-145b-4fd1-9297-95c721f320a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:30 np0005629333 nova_compute[244014]: 2026-02-25 12:22:30.210 244018 DEBUG oslo_concurrency.lockutils [req-80e6a839-d4d6-4b67-a2c8-7f0ca5f4b737 req-83d22b16-145b-4fd1-9297-95c721f320a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "33db1662-e67d-41de-b8d6-ea93b40cf7cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:30 np0005629333 nova_compute[244014]: 2026-02-25 12:22:30.211 244018 DEBUG nova.compute.manager [req-80e6a839-d4d6-4b67-a2c8-7f0ca5f4b737 req-83d22b16-145b-4fd1-9297-95c721f320a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] No waiting events found dispatching network-vif-plugged-67db480d-6a66-4c54-be9c-5375a0d664cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:22:30 np0005629333 nova_compute[244014]: 2026-02-25 12:22:30.211 244018 WARNING nova.compute.manager [req-80e6a839-d4d6-4b67-a2c8-7f0ca5f4b737 req-83d22b16-145b-4fd1-9297-95c721f320a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Received unexpected event network-vif-plugged-67db480d-6a66-4c54-be9c-5375a0d664cd for instance with vm_state active and task_state None.#033[00m
Feb 25 07:22:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:22:30
Feb 25 07:22:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:22:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:22:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['backups', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'vms', 'images', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root']
Feb 25 07:22:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:22:31 np0005629333 nova_compute[244014]: 2026-02-25 12:22:31.354 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:22:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:22:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:22:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:22:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:22:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:22:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 246 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.9 MiB/s wr, 216 op/s
Feb 25 07:22:31 np0005629333 nova_compute[244014]: 2026-02-25 12:22:31.703 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:31 np0005629333 NetworkManager[49836]: <info>  [1772022151.7045] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Feb 25 07:22:31 np0005629333 NetworkManager[49836]: <info>  [1772022151.7062] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Feb 25 07:22:31 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:31Z|00240|binding|INFO|Releasing lport ef44c128-3fa4-4475-b63c-4818a50ede40 from this chassis (sb_readonly=0)
Feb 25 07:22:31 np0005629333 nova_compute[244014]: 2026-02-25 12:22:31.743 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:31 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:31Z|00241|binding|INFO|Releasing lport 7f515737-b36a-4caa-affc-8ad110539172 from this chassis (sb_readonly=0)
Feb 25 07:22:31 np0005629333 nova_compute[244014]: 2026-02-25 12:22:31.759 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:22:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:22:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:22:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:22:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:22:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:22:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:22:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:22:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:22:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:22:32 np0005629333 nova_compute[244014]: 2026-02-25 12:22:32.397 244018 DEBUG nova.compute.manager [req-d4749919-baee-472d-a196-ed55d4a979ed req-390aa6ab-8e08-42f1-96d3-6a8c1f027324 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Received event network-changed-67db480d-6a66-4c54-be9c-5375a0d664cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:22:32 np0005629333 nova_compute[244014]: 2026-02-25 12:22:32.398 244018 DEBUG nova.compute.manager [req-d4749919-baee-472d-a196-ed55d4a979ed req-390aa6ab-8e08-42f1-96d3-6a8c1f027324 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Refreshing instance network info cache due to event network-changed-67db480d-6a66-4c54-be9c-5375a0d664cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:22:32 np0005629333 nova_compute[244014]: 2026-02-25 12:22:32.399 244018 DEBUG oslo_concurrency.lockutils [req-d4749919-baee-472d-a196-ed55d4a979ed req-390aa6ab-8e08-42f1-96d3-6a8c1f027324 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:22:32 np0005629333 nova_compute[244014]: 2026-02-25 12:22:32.399 244018 DEBUG oslo_concurrency.lockutils [req-d4749919-baee-472d-a196-ed55d4a979ed req-390aa6ab-8e08-42f1-96d3-6a8c1f027324 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:22:32 np0005629333 nova_compute[244014]: 2026-02-25 12:22:32.400 244018 DEBUG nova.network.neutron [req-d4749919-baee-472d-a196-ed55d4a979ed req-390aa6ab-8e08-42f1-96d3-6a8c1f027324 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Refreshing network info cache for port 67db480d-6a66-4c54-be9c-5375a0d664cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:22:33 np0005629333 nova_compute[244014]: 2026-02-25 12:22:33.563 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022138.5626101, 5a7dc142-2b11-4214-87b7-636f27ccacbf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:22:33 np0005629333 nova_compute[244014]: 2026-02-25 12:22:33.564 244018 INFO nova.compute.manager [-] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:22:33 np0005629333 nova_compute[244014]: 2026-02-25 12:22:33.589 244018 DEBUG nova.compute.manager [None req-f1e12a04-6e59-454b-a6c8-d66ac19bc22b - - - - - -] [instance: 5a7dc142-2b11-4214-87b7-636f27ccacbf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:22:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 246 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.6 MiB/s wr, 236 op/s
Feb 25 07:22:33 np0005629333 nova_compute[244014]: 2026-02-25 12:22:33.717 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:34 np0005629333 nova_compute[244014]: 2026-02-25 12:22:34.195 244018 DEBUG nova.network.neutron [req-d4749919-baee-472d-a196-ed55d4a979ed req-390aa6ab-8e08-42f1-96d3-6a8c1f027324 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Updated VIF entry in instance network info cache for port 67db480d-6a66-4c54-be9c-5375a0d664cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:22:34 np0005629333 nova_compute[244014]: 2026-02-25 12:22:34.196 244018 DEBUG nova.network.neutron [req-d4749919-baee-472d-a196-ed55d4a979ed req-390aa6ab-8e08-42f1-96d3-6a8c1f027324 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 33db1662-e67d-41de-b8d6-ea93b40cf7cd] Updating instance_info_cache with network_info: [{"id": "67db480d-6a66-4c54-be9c-5375a0d664cd", "address": "fa:16:3e:4a:ac:51", "network": {"id": "08121372-a435-401a-b405-778e10d8c2e2", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1571729129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6a771ad0ce454d809d66825f69248fa7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap67db480d-6a", "ovs_interfaceid": "67db480d-6a66-4c54-be9c-5375a0d664cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:22:34 np0005629333 nova_compute[244014]: 2026-02-25 12:22:34.227 244018 DEBUG oslo_concurrency.lockutils [req-d4749919-baee-472d-a196-ed55d4a979ed req-390aa6ab-8e08-42f1-96d3-6a8c1f027324 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-33db1662-e67d-41de-b8d6-ea93b40cf7cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:22:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:34Z|00242|binding|INFO|Releasing lport ef44c128-3fa4-4475-b63c-4818a50ede40 from this chassis (sb_readonly=0)
Feb 25 07:22:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:22:34Z|00243|binding|INFO|Releasing lport 7f515737-b36a-4caa-affc-8ad110539172 from this chassis (sb_readonly=0)
Feb 25 07:22:34 np0005629333 nova_compute[244014]: 2026-02-25 12:22:34.278 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:22:35 np0005629333 nova_compute[244014]: 2026-02-25 12:22:35.668 244018 DEBUG oslo_concurrency.lockutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:35 np0005629333 nova_compute[244014]: 2026-02-25 12:22:35.668 244018 DEBUG oslo_concurrency.lockutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "8715ba65-46d1-406d-8a4d-b5ff5067e2c1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 246 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Feb 25 07:22:35 np0005629333 nova_compute[244014]: 2026-02-25 12:22:35.704 244018 DEBUG nova.compute.manager [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:22:35 np0005629333 nova_compute[244014]: 2026-02-25 12:22:35.803 244018 DEBUG oslo_concurrency.lockutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:35 np0005629333 nova_compute[244014]: 2026-02-25 12:22:35.804 244018 DEBUG oslo_concurrency.lockutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:35 np0005629333 nova_compute[244014]: 2026-02-25 12:22:35.821 244018 DEBUG nova.virt.hardware [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:22:35 np0005629333 nova_compute[244014]: 2026-02-25 12:22:35.822 244018 INFO nova.compute.claims [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:22:36 np0005629333 nova_compute[244014]: 2026-02-25 12:22:36.195 244018 DEBUG oslo_concurrency.processutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:22:36 np0005629333 nova_compute[244014]: 2026-02-25 12:22:36.356 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:22:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:22:37 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3773599575' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:22:37 np0005629333 nova_compute[244014]: 2026-02-25 12:22:37.049 244018 DEBUG oslo_concurrency.processutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.854s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:22:37 np0005629333 nova_compute[244014]: 2026-02-25 12:22:37.056 244018 DEBUG nova.compute.provider_tree [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:22:37 np0005629333 nova_compute[244014]: 2026-02-25 12:22:37.083 244018 DEBUG nova.scheduler.client.report [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:22:37 np0005629333 nova_compute[244014]: 2026-02-25 12:22:37.121 244018 DEBUG oslo_concurrency.lockutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.317s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:37 np0005629333 nova_compute[244014]: 2026-02-25 12:22:37.122 244018 DEBUG nova.compute.manager [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:22:37 np0005629333 nova_compute[244014]: 2026-02-25 12:22:37.174 244018 DEBUG nova.compute.manager [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:22:37 np0005629333 nova_compute[244014]: 2026-02-25 12:22:37.174 244018 DEBUG nova.network.neutron [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:22:37 np0005629333 nova_compute[244014]: 2026-02-25 12:22:37.211 244018 INFO nova.virt.libvirt.driver [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:22:37 np0005629333 nova_compute[244014]: 2026-02-25 12:22:37.239 244018 DEBUG nova.compute.manager [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:22:37 np0005629333 nova_compute[244014]: 2026-02-25 12:22:37.386 244018 DEBUG nova.compute.manager [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:22:37 np0005629333 nova_compute[244014]: 2026-02-25 12:22:37.387 244018 DEBUG nova.virt.libvirt.driver [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:22:37 np0005629333 nova_compute[244014]: 2026-02-25 12:22:37.388 244018 INFO nova.virt.libvirt.driver [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] [instance: 8715ba65-46d1-406d-8a4d-b5ff5067e2c1] Creating image(s)#033[00m
Feb 25 07:22:37 np0005629333 nova_compute[244014]: 2026-02-25 12:22:37.415 244018 DEBUG nova.storage.rbd_utils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 8715ba65-46d1-406d-8a4d-b5ff5067e2c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:22:37 np0005629333 nova_compute[244014]: 2026-02-25 12:22:37.443 244018 DEBUG nova.storage.rbd_utils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 8715ba65-46d1-406d-8a4d-b5ff5067e2c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:22:37 np0005629333 nova_compute[244014]: 2026-02-25 12:22:37.464 244018 DEBUG nova.storage.rbd_utils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 8715ba65-46d1-406d-8a4d-b5ff5067e2c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:22:37 np0005629333 nova_compute[244014]: 2026-02-25 12:22:37.467 244018 DEBUG oslo_concurrency.processutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:22:37 np0005629333 nova_compute[244014]: 2026-02-25 12:22:37.516 244018 DEBUG oslo_concurrency.processutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:22:37 np0005629333 nova_compute[244014]: 2026-02-25 12:22:37.518 244018 DEBUG oslo_concurrency.lockutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:22:37 np0005629333 nova_compute[244014]: 2026-02-25 12:22:37.519 244018 DEBUG oslo_concurrency.lockutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:22:37 np0005629333 nova_compute[244014]: 2026-02-25 12:22:37.520 244018 DEBUG oslo_concurrency.lockutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:22:37 np0005629333 nova_compute[244014]: 2026-02-25 12:22:37.549 244018 DEBUG nova.storage.rbd_utils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] rbd image 8715ba65-46d1-406d-8a4d-b5ff5067e2c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:22:37 np0005629333 nova_compute[244014]: 2026-02-25 12:22:37.552 244018 DEBUG oslo_concurrency.processutils [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 8715ba65-46d1-406d-8a4d-b5ff5067e2c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:22:37 np0005629333 nova_compute[244014]: 2026-02-25 12:22:37.612 244018 DEBUG nova.policy [None req-6d4f6c9e-af89-4ae4-bc3c-1810853be9db ea407839a07d46608b6348caf676d12d 6a771ad0ce454d809d66825f69248fa7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea407839a07d46608b6348caf676d12d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6a771ad0ce454d809d66825f69248fa7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:22:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 257 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.2 MiB/s wr, 166 op/s
Feb 25 07:25:08 np0005629333 nova_compute[244014]: 2026-02-25 12:25:08.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:25:08 np0005629333 nova_compute[244014]: 2026-02-25 12:25:08.900 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:08 np0005629333 nova_compute[244014]: 2026-02-25 12:25:08.900 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:08 np0005629333 nova_compute[244014]: 2026-02-25 12:25:08.901 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:08 np0005629333 nova_compute[244014]: 2026-02-25 12:25:08.901 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:25:08 np0005629333 nova_compute[244014]: 2026-02-25 12:25:08.902 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.024 244018 INFO nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Creating config drive at /var/lib/nova/instances/1238794a-063b-4ac0-a7d9-3590353e3207/disk.config#033[00m
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.032 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1238794a-063b-4ac0-a7d9-3590353e3207/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8ifvi6la execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.073 244018 DEBUG nova.compute.manager [req-7eac5312-2910-482d-97c2-a24777eacefc req-94f9915b-8151-48ba-9a3f-05e975f0e2a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Received event network-vif-plugged-fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.074 244018 DEBUG oslo_concurrency.lockutils [req-7eac5312-2910-482d-97c2-a24777eacefc req-94f9915b-8151-48ba-9a3f-05e975f0e2a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.076 244018 DEBUG oslo_concurrency.lockutils [req-7eac5312-2910-482d-97c2-a24777eacefc req-94f9915b-8151-48ba-9a3f-05e975f0e2a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.077 244018 DEBUG oslo_concurrency.lockutils [req-7eac5312-2910-482d-97c2-a24777eacefc req-94f9915b-8151-48ba-9a3f-05e975f0e2a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.078 244018 DEBUG nova.compute.manager [req-7eac5312-2910-482d-97c2-a24777eacefc req-94f9915b-8151-48ba-9a3f-05e975f0e2a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] No waiting events found dispatching network-vif-plugged-fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.078 244018 WARNING nova.compute.manager [req-7eac5312-2910-482d-97c2-a24777eacefc req-94f9915b-8151-48ba-9a3f-05e975f0e2a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Received unexpected event network-vif-plugged-fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 for instance with vm_state active and task_state shelving.#033[00m
Feb 25 07:25:09 np0005629333 rsyslogd[1020]: imjournal: 10175 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.173 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1238794a-063b-4ac0-a7d9-3590353e3207/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8ifvi6la" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.279 244018 DEBUG nova.storage.rbd_utils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image 1238794a-063b-4ac0-a7d9-3590353e3207_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.290 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1238794a-063b-4ac0-a7d9-3590353e3207/disk.config 1238794a-063b-4ac0-a7d9-3590353e3207_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:25:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:25:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3816542441' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.521 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.542 244018 DEBUG oslo_concurrency.processutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1238794a-063b-4ac0-a7d9-3590353e3207/disk.config 1238794a-063b-4ac0-a7d9-3590353e3207_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.252s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.543 244018 INFO nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Deleting local config drive /var/lib/nova/instances/1238794a-063b-4ac0-a7d9-3590353e3207/disk.config because it was imported into RBD.#033[00m
Feb 25 07:25:09 np0005629333 kernel: tap9f614955-c9: entered promiscuous mode
Feb 25 07:25:09 np0005629333 systemd-udevd[284495]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:25:09 np0005629333 NetworkManager[49836]: <info>  [1772022309.5903] manager: (tap9f614955-c9): new Tun device (/org/freedesktop/NetworkManager/Devices/178)
Feb 25 07:25:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:09Z|00389|binding|INFO|Claiming lport 9f614955-c92f-41c2-a47e-6d65c378bf82 for this chassis.
Feb 25 07:25:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:09Z|00390|binding|INFO|9f614955-c92f-41c2-a47e-6d65c378bf82: Claiming fa:16:3e:db:31:fe 10.100.0.7
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.594 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:09.603 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:31:fe 10.100.0.7'], port_security=['fa:16:3e:db:31:fe 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1238794a-063b-4ac0-a7d9-3590353e3207', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c97c5b11-7517-46fe-a6ca-63894792908c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8744bdbc0f1499388aab5f477246beb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9622f97c-a9c4-423f-b49e-154152bd6881', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b7ae960-1b2b-4f15-b35e-8e889d9ccce8, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=9f614955-c92f-41c2-a47e-6d65c378bf82) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:25:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:09.606 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 9f614955-c92f-41c2-a47e-6d65c378bf82 in datapath c97c5b11-7517-46fe-a6ca-63894792908c bound to our chassis#033[00m
Feb 25 07:25:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:09.609 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c97c5b11-7517-46fe-a6ca-63894792908c#033[00m
Feb 25 07:25:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:09Z|00391|binding|INFO|Setting lport 9f614955-c92f-41c2-a47e-6d65c378bf82 ovn-installed in OVS
Feb 25 07:25:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:09Z|00392|binding|INFO|Setting lport 9f614955-c92f-41c2-a47e-6d65c378bf82 up in Southbound
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.614 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:09 np0005629333 NetworkManager[49836]: <info>  [1772022309.6176] device (tap9f614955-c9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:25:09 np0005629333 NetworkManager[49836]: <info>  [1772022309.6186] device (tap9f614955-c9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.619 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:09 np0005629333 systemd-machined[210048]: New machine qemu-51-instance-0000002e.
Feb 25 07:25:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:25:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:09.623 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a280c7ce-25da-4dbf-bbcb-c3cd5fa3f560]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:09 np0005629333 systemd[1]: Started Virtual Machine qemu-51-instance-0000002e.
Feb 25 07:25:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:09.650 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[df6f9241-6896-414c-8ec7-5de3fe3fe571]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:09.653 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[70a9c200-db29-4a4e-9a73-2023fd96c1e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:09.677 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c7383a91-b363-4439-a38a-c909d9303abc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:09.699 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2bbf93be-c0d1-4a99-834f-121bd2184d7b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc97c5b11-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:61:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 114], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 427672, 'reachable_time': 15170, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284776, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.715 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.715 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:09.716 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8e20ea54-78ca-46ca-b84a-14c290ab0242]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc97c5b11-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 427682, 'tstamp': 427682}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284777, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc97c5b11-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 427685, 'tstamp': 427685}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284777, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:09.718 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc97c5b11-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.720 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:09.722 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc97c5b11-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:09.722 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:25:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:09.723 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc97c5b11-70, col_values=(('external_ids', {'iface-id': 'db412aa7-4ad4-4eb8-b61f-dd3e71d5329d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:09.724 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.726 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.726 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.730 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000002c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.730 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000002c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.734 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.734 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.737 244018 DEBUG nova.compute.manager [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:25:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 293 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 170 KiB/s rd, 3.6 MiB/s wr, 72 op/s
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.836 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.836 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.841 244018 DEBUG nova.virt.hardware [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.842 244018 INFO nova.compute.claims [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.927 244018 DEBUG nova.network.neutron [req-d7d932c3-fd87-4c32-aa08-390e7cb78f1b req-be46a7d7-8ee3-4b50-832e-1e9b923b8d66 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Updated VIF entry in instance network info cache for port 9f614955-c92f-41c2-a47e-6d65c378bf82. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.928 244018 DEBUG nova.network.neutron [req-d7d932c3-fd87-4c32-aa08-390e7cb78f1b req-be46a7d7-8ee3-4b50-832e-1e9b923b8d66 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Updating instance_info_cache with network_info: [{"id": "9f614955-c92f-41c2-a47e-6d65c378bf82", "address": "fa:16:3e:db:31:fe", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f614955-c9", "ovs_interfaceid": "9f614955-c92f-41c2-a47e-6d65c378bf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.930 244018 DEBUG nova.scheduler.client.report [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.938 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.939 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4016MB free_disk=59.92535855714232GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
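[In the pci_devices list above, vendor_id 8086 is Intel and 1af4 is Red Hat, Inc., the vendor ID carried by virtio devices (product 1000 is virtio-net, 1001 virtio-block, and so on). A quick decode of one entry as a sketch; the two-entry vendor table is hand-written for this log, not something nova ships:

    # Hypothetical vendor table covering only the IDs seen on this host.
    PCI_VENDORS = {'8086': 'Intel', '1af4': 'Red Hat, Inc. (virtio)'}
    dev = {"address": "0000:00:04.0", "vendor_id": "1af4", "product_id": "1001"}
    print(dev["address"], PCI_VENDORS[dev["vendor_id"]], dev["product_id"])
    # -> 0000:00:04.0 Red Hat, Inc. (virtio) 1001   (a virtio block device)
]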
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.940 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.941 244018 DEBUG oslo_concurrency.lockutils [req-d7d932c3-fd87-4c32-aa08-390e7cb78f1b req-be46a7d7-8ee3-4b50-832e-1e9b923b8d66 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-1238794a-063b-4ac0-a7d9-3590353e3207" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.946 244018 DEBUG nova.scheduler.client.report [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.946 244018 DEBUG nova.compute.provider_tree [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
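[The inventory dict pushed above is what placement schedules against: effective capacity per resource class is (total - reserved) * allocation_ratio. Recomputing the figures from the logged data:

    # Values copied verbatim from the inventory refresh above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f"{rc}: {capacity:g} schedulable")
    # VCPU: 32 schedulable, MEMORY_MB: 7167 schedulable, DISK_GB: 52.2 schedulable
]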
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.965 244018 DEBUG nova.scheduler.client.report [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 07:25:09 np0005629333 nova_compute[244014]: 2026-02-25 12:25:09.988 244018 DEBUG nova.scheduler.client.report [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.094 244018 DEBUG oslo_concurrency.processutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.366 244018 DEBUG nova.compute.manager [req-97f4b0ab-afda-4a38-981a-7b50cb72242c req-93d24d2c-2972-4f63-9ffc-97ec450ad90d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Received event network-vif-plugged-4f6b67d0-056b-4277-a604-6221f16e30b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.367 244018 DEBUG oslo_concurrency.lockutils [req-97f4b0ab-afda-4a38-981a-7b50cb72242c req-93d24d2c-2972-4f63-9ffc-97ec450ad90d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "22450f11-d042-48c5-941e-fd544e58d84a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.367 244018 DEBUG oslo_concurrency.lockutils [req-97f4b0ab-afda-4a38-981a-7b50cb72242c req-93d24d2c-2972-4f63-9ffc-97ec450ad90d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "22450f11-d042-48c5-941e-fd544e58d84a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.368 244018 DEBUG oslo_concurrency.lockutils [req-97f4b0ab-afda-4a38-981a-7b50cb72242c req-93d24d2c-2972-4f63-9ffc-97ec450ad90d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "22450f11-d042-48c5-941e-fd544e58d84a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.368 244018 DEBUG nova.compute.manager [req-97f4b0ab-afda-4a38-981a-7b50cb72242c req-93d24d2c-2972-4f63-9ffc-97ec450ad90d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] No waiting events found dispatching network-vif-plugged-4f6b67d0-056b-4277-a604-6221f16e30b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.368 244018 WARNING nova.compute.manager [req-97f4b0ab-afda-4a38-981a-7b50cb72242c req-93d24d2c-2972-4f63-9ffc-97ec450ad90d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Received unexpected event network-vif-plugged-4f6b67d0-056b-4277-a604-6221f16e30b2 for instance with vm_state active and task_state None.
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.372 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022310.3720725, 1238794a-063b-4ac0-a7d9-3590353e3207 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.372 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] VM Started (Lifecycle Event)
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.407 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.410 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022310.3721282, 1238794a-063b-4ac0-a7d9-3590353e3207 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.410 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] VM Paused (Lifecycle Event)
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.428 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.430 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.449 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] During sync_power_state the instance has a pending task (spawning). Skip.
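[The integers in the "Synchronizing instance power state" line above come from nova's power-state enum: the database still holds 0 (no state recorded, spawn unfinished) while libvirt reports 3 (the domain is briefly paused during spawn), so the sync is skipped. A lookup table mirroring nova.compute.power_state as commonly defined; treat the exact values as an assumption for this release:

    # Mirrors nova.compute.power_state (assumed values).
    POWER_STATES = {
        0: 'NOSTATE',    # DB value: spawn has not recorded a state yet
        1: 'RUNNING',
        3: 'PAUSED',     # hypervisor value: libvirt holds the domain paused mid-spawn
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }
    print(POWER_STATES[0], '->', POWER_STATES[3])   # NOSTATE -> PAUSED
]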
Feb 25 07:25:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:25:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3552167995' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.673 244018 DEBUG oslo_concurrency.processutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
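[The `ceph df` probe above (and its dispatch on the ceph-mon side) is issued through oslo.concurrency's processutils, which emits the "Running cmd"/"returned" pairs seen throughout this section. A self-contained sketch of the same call; the JSON keys read at the end follow the usual `ceph df --format=json` schema, which varies slightly across Ceph releases:

    import json
    from oslo_concurrency import processutils

    # execute() returns (stdout, stderr) and raises ProcessExecutionError
    # on a non-zero exit status by default.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # Top-level 'stats' and per-pool 'pools' entries; nova's RBD driver reads
    # the pool capacity figures from here for the hypervisor disk stats.
    print(stats['stats']['total_avail_bytes'])
]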
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.678 244018 DEBUG nova.compute.provider_tree [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.711 244018 DEBUG nova.scheduler.client.report [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.741 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.742 244018 DEBUG nova.compute.manager [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.746 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.807s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.824 244018 DEBUG nova.compute.manager [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.825 244018 DEBUG nova.network.neutron [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.860 244018 INFO nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.869 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.869 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 22450f11-d042-48c5-941e-fd544e58d84a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.870 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 1238794a-063b-4ac0-a7d9-3590353e3207 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.870 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance b8086e43-4c45-422f-a3b5-fa665c256b30 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.871 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.872 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
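[The "Final resource view" figures follow from the four placement allocations listed just above (1 vCPU / 128 MB / 1 GB each); used_ram additionally folds in the 512 MB of reserved host memory from the inventory data, which is why it reads 1024 MB rather than 512:

    # Reconstructing the final resource view from the logged allocations.
    allocations = [{'MEMORY_MB': 128, 'VCPU': 1, 'DISK_GB': 1}] * 4
    reserved_ram_mb = 512   # 'reserved' for MEMORY_MB in the inventory above

    used_ram = reserved_ram_mb + sum(a['MEMORY_MB'] for a in allocations)
    used_vcpus = sum(a['VCPU'] for a in allocations)
    used_disk = sum(a['DISK_GB'] for a in allocations)
    print(used_ram, used_vcpus, used_disk)   # 1024 4 4 -- matches the log line
]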
Feb 25 07:25:10 np0005629333 nova_compute[244014]: 2026-02-25 12:25:10.896 244018 DEBUG nova.compute.manager [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:25:11 np0005629333 nova_compute[244014]: 2026-02-25 12:25:11.003 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:25:11 np0005629333 nova_compute[244014]: 2026-02-25 12:25:11.040 244018 DEBUG nova.compute.manager [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:25:11 np0005629333 nova_compute[244014]: 2026-02-25 12:25:11.043 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:25:11 np0005629333 nova_compute[244014]: 2026-02-25 12:25:11.048 244018 INFO nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Creating image(s)
Feb 25 07:25:11 np0005629333 nova_compute[244014]: 2026-02-25 12:25:11.078 244018 DEBUG nova.storage.rbd_utils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image b8086e43-4c45-422f-a3b5-fa665c256b30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:25:11 np0005629333 nova_compute[244014]: 2026-02-25 12:25:11.108 244018 DEBUG nova.storage.rbd_utils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image b8086e43-4c45-422f-a3b5-fa665c256b30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:25:11 np0005629333 nova_compute[244014]: 2026-02-25 12:25:11.130 244018 DEBUG nova.storage.rbd_utils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image b8086e43-4c45-422f-a3b5-fa665c256b30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:25:11 np0005629333 nova_compute[244014]: 2026-02-25 12:25:11.133 244018 DEBUG oslo_concurrency.processutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:25:11 np0005629333 nova_compute[244014]: 2026-02-25 12:25:11.161 244018 DEBUG nova.policy [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b774fd0c04fc403d9ddb205f1e6abbc5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c85a955249394f0faf7c890f5cd0df32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:25:11 np0005629333 nova_compute[244014]: 2026-02-25 12:25:11.215 244018 DEBUG oslo_concurrency.processutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
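[The long qemu-img command above is not assembled by hand: passing a ProcessLimits object to processutils.execute() prepends the `python3 -m oslo_concurrency.prlimit` wrapper with the matching --as/--cpu flags, so a runaway qemu-img is capped at 1 GiB of address space and 30 s of CPU. A sketch reproducing the logged invocation, assuming nova's usual limits:

    from oslo_concurrency import processutils

    # Matches "--as=1073741824 --cpu=30" in the logged command line.
    limits = processutils.ProcessLimits(
        address_space=1 * 1024 ** 3,   # 1073741824 bytes
        cpu_time=30)                   # seconds

    out, _ = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=limits)
]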
Feb 25 07:25:11 np0005629333 nova_compute[244014]: 2026-02-25 12:25:11.216 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:25:11 np0005629333 nova_compute[244014]: 2026-02-25 12:25:11.218 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:25:11 np0005629333 nova_compute[244014]: 2026-02-25 12:25:11.218 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:25:11 np0005629333 nova_compute[244014]: 2026-02-25 12:25:11.247 244018 DEBUG nova.storage.rbd_utils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image b8086e43-4c45-422f-a3b5-fa665c256b30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:25:11 np0005629333 nova_compute[244014]: 2026-02-25 12:25:11.252 244018 DEBUG oslo_concurrency.processutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 b8086e43-4c45-422f-a3b5-fa665c256b30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:25:11 np0005629333 nova_compute[244014]: 2026-02-25 12:25:11.529 244018 DEBUG oslo_concurrency.processutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 b8086e43-4c45-422f-a3b5-fa665c256b30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:25:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:25:11 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1632242059' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:25:11 np0005629333 nova_compute[244014]: 2026-02-25 12:25:11.593 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:25:11 np0005629333 nova_compute[244014]: 2026-02-25 12:25:11.602 244018 DEBUG nova.storage.rbd_utils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] resizing rbd image b8086e43-4c45-422f-a3b5-fa665c256b30_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
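[The base image was just imported into the `vms` pool and is now resized to the flavor's root disk, expressed in bytes; 1073741824 is exactly 1 GiB, matching the m1.nano flavor (root_gb=1) dumped later in this section at 12:25:13.921:

    # The resize target above is the flavor root disk in bytes.
    root_gb = 1                     # m1.nano root_gb (see the flavor dump below)
    size = root_gb * 1024 ** 3
    assert size == 1073741824      # matches "resizing ... to 1073741824"
]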
Feb 25 07:25:11 np0005629333 nova_compute[244014]: 2026-02-25 12:25:11.633 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:25:11 np0005629333 nova_compute[244014]: 2026-02-25 12:25:11.685 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:25:11 np0005629333 nova_compute[244014]: 2026-02-25 12:25:11.694 244018 DEBUG nova.objects.instance [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'migration_context' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:25:11 np0005629333 nova_compute[244014]: 2026-02-25 12:25:11.712 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:25:11 np0005629333 nova_compute[244014]: 2026-02-25 12:25:11.712 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Ensure instance console log exists: /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:25:11 np0005629333 nova_compute[244014]: 2026-02-25 12:25:11.713 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:25:11 np0005629333 nova_compute[244014]: 2026-02-25 12:25:11.714 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:25:11 np0005629333 nova_compute[244014]: 2026-02-25 12:25:11.714 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:25:11 np0005629333 nova_compute[244014]: 2026-02-25 12:25:11.716 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 07:25:11 np0005629333 nova_compute[244014]: 2026-02-25 12:25:11.716 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.970s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:25:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 300 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.1 MiB/s wr, 197 op/s
Feb 25 07:25:11 np0005629333 nova_compute[244014]: 2026-02-25 12:25:11.837 244018 DEBUG nova.network.neutron [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Successfully created port: abdb97b5-8e9d-4929-af6f-bfb06c067878 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:25:12 np0005629333 nova_compute[244014]: 2026-02-25 12:25:12.643 244018 DEBUG nova.network.neutron [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Successfully updated port: abdb97b5-8e9d-4929-af6f-bfb06c067878 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:25:12 np0005629333 nova_compute[244014]: 2026-02-25 12:25:12.664 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:25:12 np0005629333 nova_compute[244014]: 2026-02-25 12:25:12.665 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquired lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:25:12 np0005629333 nova_compute[244014]: 2026-02-25 12:25:12.665 244018 DEBUG nova.network.neutron [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:25:12 np0005629333 nova_compute[244014]: 2026-02-25 12:25:12.893 244018 DEBUG nova.network.neutron [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.057 244018 DEBUG nova.compute.manager [req-da8ca6f0-4e60-4a35-9582-2a6968527156 req-1052804e-c3ee-48ad-ab17-a59ac67e5a80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received event network-changed-abdb97b5-8e9d-4929-af6f-bfb06c067878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.057 244018 DEBUG nova.compute.manager [req-da8ca6f0-4e60-4a35-9582-2a6968527156 req-1052804e-c3ee-48ad-ab17-a59ac67e5a80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Refreshing instance network info cache due to event network-changed-abdb97b5-8e9d-4929-af6f-bfb06c067878. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.058 244018 DEBUG oslo_concurrency.lockutils [req-da8ca6f0-4e60-4a35-9582-2a6968527156 req-1052804e-c3ee-48ad-ab17-a59ac67e5a80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.438 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.758 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "e291d969-fcea-4f60-a478-f7b81a91ccd9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.758 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:25:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 316 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.7 MiB/s wr, 215 op/s
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.793 244018 DEBUG nova.compute.manager [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.850 244018 DEBUG nova.network.neutron [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updating instance_info_cache with network_info: [{"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.871 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Releasing lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.872 244018 DEBUG nova.compute.manager [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Instance network_info: |[{"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
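[The network_info blob cached above is a list of VIF dicts, each carrying its network, subnets, and fixed IPs. A self-contained walk over a trimmed copy of the logged structure, pulling out the fixed address:

    # Trimmed from the network_info logged above; only the fields the walk needs.
    network_info = [{
        "id": "abdb97b5-8e9d-4929-af6f-bfb06c067878",
        "network": {"subnets": [{"cidr": "10.100.0.0/28",
                                 "ips": [{"address": "10.100.0.6", "type": "fixed"}]}]},
    }]
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                if ip["type"] == "fixed":
                    print(vif["id"], ip["address"])   # -> abdb97b5-... 10.100.0.6
]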
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.875 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.876 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.878 244018 DEBUG oslo_concurrency.lockutils [req-da8ca6f0-4e60-4a35-9582-2a6968527156 req-1052804e-c3ee-48ad-ab17-a59ac67e5a80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.878 244018 DEBUG nova.network.neutron [req-da8ca6f0-4e60-4a35-9582-2a6968527156 req-1052804e-c3ee-48ad-ab17-a59ac67e5a80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Refreshing network info cache for port abdb97b5-8e9d-4929-af6f-bfb06c067878 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.885 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Start _get_guest_xml network_info=[{"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.894 244018 WARNING nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.900 244018 DEBUG nova.virt.hardware [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.900 244018 INFO nova.compute.claims [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.914 244018 DEBUG nova.virt.libvirt.host [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.916 244018 DEBUG nova.virt.libvirt.host [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.919 244018 DEBUG nova.virt.libvirt.host [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.920 244018 DEBUG nova.virt.libvirt.host [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.921 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.921 244018 DEBUG nova.virt.hardware [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.922 244018 DEBUG nova.virt.hardware [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.923 244018 DEBUG nova.virt.hardware [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.923 244018 DEBUG nova.virt.hardware [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.924 244018 DEBUG nova.virt.hardware [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.924 244018 DEBUG nova.virt.hardware [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.925 244018 DEBUG nova.virt.hardware [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.925 244018 DEBUG nova.virt.hardware [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.926 244018 DEBUG nova.virt.hardware [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.926 244018 DEBUG nova.virt.hardware [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.927 244018 DEBUG nova.virt.hardware [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
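[The topology lines above enumerate every (sockets, cores, threads) split of the flavor's vCPU count under the logged 65536-per-dimension limits, which for vcpus=1 leaves only 1:1:1. A toy enumeration in the same spirit; not nova's exact algorithm or preference ordering:

    from itertools import product

    # Enumerate (sockets, cores, threads) triples whose product equals the
    # vCPU count, capped per dimension as in the "limits were ... 65536" line.
    def possible_topologies(vcpus, max_each=65536):
        bound = min(vcpus, max_each)
        return [(s, c, t)
                for s, c, t in product(range(1, bound + 1), repeat=3)
                if s * c * t == vcpus]

    print(possible_topologies(1))   # [(1, 1, 1)] -- the single topology in the log
]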
Feb 25 07:25:13 np0005629333 nova_compute[244014]: 2026-02-25 12:25:13.931 244018 DEBUG oslo_concurrency.processutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:25:14 np0005629333 nova_compute[244014]: 2026-02-25 12:25:14.146 244018 DEBUG oslo_concurrency.processutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:25:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:25:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/380632174' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:25:14 np0005629333 nova_compute[244014]: 2026-02-25 12:25:14.488 244018 DEBUG oslo_concurrency.processutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:25:14 np0005629333 nova_compute[244014]: 2026-02-25 12:25:14.507 244018 DEBUG nova.storage.rbd_utils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image b8086e43-4c45-422f-a3b5-fa665c256b30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:25:14 np0005629333 nova_compute[244014]: 2026-02-25 12:25:14.511 244018 DEBUG oslo_concurrency.processutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:25:14 np0005629333 nova_compute[244014]: 2026-02-25 12:25:14.622 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:25:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:25:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/884668611' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:25:14 np0005629333 nova_compute[244014]: 2026-02-25 12:25:14.706 244018 DEBUG oslo_concurrency.processutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
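
Both subprocess calls above shell out to the ceph CLI as client.openstack: the mon dump feeds RBD connection setup for the instance disk, while the df feeds the resource tracker's pool capacity accounting. The df call can be reproduced by hand; the JSON field names below ("stats", "total_bytes", "total_avail_bytes") are assumptions about the ceph df schema, so verify them against your Ceph release:

import json
import subprocess

# Exact command from the log; parse the cluster-wide stats block.
out = subprocess.check_output(
    ["ceph", "df", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
stats = json.loads(out)["stats"]           # assumed top-level key
print("total bytes:", stats["total_bytes"])
print("avail bytes:", stats["total_avail_bytes"])
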
Feb 25 07:25:14 np0005629333 nova_compute[244014]: 2026-02-25 12:25:14.711 244018 DEBUG nova.compute.provider_tree [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:25:14 np0005629333 nova_compute[244014]: 2026-02-25 12:25:14.717 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:25:14 np0005629333 nova_compute[244014]: 2026-02-25 12:25:14.717 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 07:25:14 np0005629333 nova_compute[244014]: 2026-02-25 12:25:14.965 244018 DEBUG nova.scheduler.client.report [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
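
The unchanged inventory above determines what placement will let the scheduler pack onto this host: capacity per resource class is (total - reserved) * allocation_ratio. A worked check of the reported figures:

# Worked check of the inventory data logged above.
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)
# -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2
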
Feb 25 07:25:14 np0005629333 nova_compute[244014]: 2026-02-25 12:25:14.998 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.000 244018 DEBUG nova.compute.manager [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:25:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:25:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/888031494' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.092 244018 DEBUG nova.compute.manager [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.093 244018 DEBUG nova.network.neutron [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.099 244018 DEBUG oslo_concurrency.processutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.100 244018 DEBUG nova.virt.libvirt.vif [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:25:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2111996537',display_name='tempest-ServerActionsTestOtherB-server-2111996537',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2111996537',id=47,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFQ4/ViXjDl7sfXbfHy1Rj0+WVS30xG/+F445xoJQyz45huoziS5Ge/69+H9D3xA69BQvF6LAGpEuOAI4T0oNr5YUcMHaOf8cBGICZoqOX1SEjGVzLtcjONvsNISgitMaQ==',key_name='tempest-keypair-221288102',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c85a955249394f0faf7c890f5cd0df32',ramdisk_id='',reservation_id='r-huq779yj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1539976047',owner_user_name='tempest-ServerActionsTestOtherB-1539976047-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:25:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b774fd0c04fc403d9ddb205f1e6abbc5',uuid=b8086e43-4c45-422f-a3b5-fa665c256b30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.101 244018 DEBUG nova.network.os_vif_util [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converting VIF {"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.102 244018 DEBUG nova.network.os_vif_util [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:53:87,bridge_name='br-int',has_traffic_filtering=True,id=abdb97b5-8e9d-4929-af6f-bfb06c067878,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdb97b5-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.103 244018 DEBUG nova.objects.instance [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'pci_devices' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.118 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:25:15 np0005629333 nova_compute[244014]:  <uuid>b8086e43-4c45-422f-a3b5-fa665c256b30</uuid>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:  <name>instance-0000002f</name>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerActionsTestOtherB-server-2111996537</nova:name>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:25:13</nova:creationTime>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:25:15 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:        <nova:user uuid="b774fd0c04fc403d9ddb205f1e6abbc5">tempest-ServerActionsTestOtherB-1539976047-project-member</nova:user>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:        <nova:project uuid="c85a955249394f0faf7c890f5cd0df32">tempest-ServerActionsTestOtherB-1539976047</nova:project>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:        <nova:port uuid="abdb97b5-8e9d-4929-af6f-bfb06c067878">
Feb 25 07:25:15 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <entry name="serial">b8086e43-4c45-422f-a3b5-fa665c256b30</entry>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <entry name="uuid">b8086e43-4c45-422f-a3b5-fa665c256b30</entry>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/b8086e43-4c45-422f-a3b5-fa665c256b30_disk">
Feb 25 07:25:15 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:25:15 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/b8086e43-4c45-422f-a3b5-fa665c256b30_disk.config">
Feb 25 07:25:15 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:25:15 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:f8:53:87"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <target dev="tapabdb97b5-8e"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/console.log" append="off"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:25:15 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:25:15 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:25:15 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:25:15 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
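
The XML above is the complete guest definition nova-compute hands to libvirt: an RBD-backed virtio root disk, an RBD config-drive CD-ROM on SATA, one OVS-plugged virtio NIC at MTU 1442, virtio RNG/balloon/video, and a q35 machine with a host-model CPU. As a sketch of the API shape only (nova drives libvirt through its own guest wrappers, not this call), XML like it could be launched by hand with the libvirt Python bindings:

import libvirt

# Start a transient domain from a saved copy of the XML logged above.
# "domain.xml" is a hypothetical file; the XML content comes from the log.
conn = libvirt.open("qemu:///system")
with open("domain.xml") as f:
    dom = conn.createXML(f.read(), 0)
print(dom.name(), dom.UUIDString())
conn.close()
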
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.124 244018 DEBUG nova.compute.manager [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Preparing to wait for external event network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.124 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.124 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.125 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.125 244018 DEBUG nova.virt.libvirt.vif [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:25:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2111996537',display_name='tempest-ServerActionsTestOtherB-server-2111996537',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2111996537',id=47,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFQ4/ViXjDl7sfXbfHy1Rj0+WVS30xG/+F445xoJQyz45huoziS5Ge/69+H9D3xA69BQvF6LAGpEuOAI4T0oNr5YUcMHaOf8cBGICZoqOX1SEjGVzLtcjONvsNISgitMaQ==',key_name='tempest-keypair-221288102',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c85a955249394f0faf7c890f5cd0df32',ramdisk_id='',reservation_id='r-huq779yj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1539976047',owner_user_name='tempest-ServerActionsTestOtherB-1539976047-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:25:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b774fd0c04fc403d9ddb205f1e6abbc5',uuid=b8086e43-4c45-422f-a3b5-fa665c256b30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.126 244018 DEBUG nova.network.os_vif_util [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converting VIF {"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.127 244018 DEBUG nova.network.os_vif_util [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:53:87,bridge_name='br-int',has_traffic_filtering=True,id=abdb97b5-8e9d-4929-af6f-bfb06c067878,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdb97b5-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.127 244018 DEBUG os_vif [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:53:87,bridge_name='br-int',has_traffic_filtering=True,id=abdb97b5-8e9d-4929-af6f-bfb06c067878,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdb97b5-8e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.128 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.128 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.129 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.130 244018 INFO nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.133 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.133 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabdb97b5-8e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.134 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapabdb97b5-8e, col_values=(('external_ids', {'iface-id': 'abdb97b5-8e9d-4929-af6f-bfb06c067878', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:53:87', 'vm-uuid': 'b8086e43-4c45-422f-a3b5-fa665c256b30'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.135 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:15 np0005629333 NetworkManager[49836]: <info>  [1772022315.1365] manager: (tapabdb97b5-8e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/179)
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.138 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.142 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.142 244018 INFO os_vif [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:53:87,bridge_name='br-int',has_traffic_filtering=True,id=abdb97b5-8e9d-4929-af6f-bfb06c067878,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdb97b5-8e')#033[00m
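
The successful plug above boils down to the two OVSDB operations in the preceding transaction: AddPortCommand attaches tapabdb97b5-8e to br-int, and DbSetCommand stamps the Neutron port ID, MAC, and instance UUID into the interface's external_ids so OVN can bind the port. A CLI-level equivalent of that transaction (os-vif speaks the OVSDB IDL directly rather than shelling out):

import subprocess

port = "tapabdb97b5-8e"
subprocess.check_call([
    "ovs-vsctl", "--may-exist", "add-port", "br-int", port, "--",
    "set", "Interface", port,
    "external_ids:iface-id=abdb97b5-8e9d-4929-af6f-bfb06c067878",
    "external_ids:iface-status=active",
    "external_ids:attached-mac=fa:16:3e:f8:53:87",
    "external_ids:vm-uuid=b8086e43-4c45-422f-a3b5-fa665c256b30",
])
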
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.148 244018 DEBUG nova.compute.manager [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.262 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.263 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.264 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No VIF found with MAC fa:16:3e:f8:53:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.265 244018 INFO nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Using config drive#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.294 244018 DEBUG nova.storage.rbd_utils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image b8086e43-4c45-422f-a3b5-fa665c256b30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.302 244018 DEBUG nova.compute.manager [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.304 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.305 244018 INFO nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Creating image(s)#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.329 244018 DEBUG nova.storage.rbd_utils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image e291d969-fcea-4f60-a478-f7b81a91ccd9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.355 244018 DEBUG nova.storage.rbd_utils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image e291d969-fcea-4f60-a478-f7b81a91ccd9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.379 244018 DEBUG nova.storage.rbd_utils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image e291d969-fcea-4f60-a478-f7b81a91ccd9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.382 244018 DEBUG oslo_concurrency.processutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.411 244018 DEBUG nova.policy [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '89e71139346a40899212d5bc35835720', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f976004e0b334963a69c2519fca200d2', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.457 244018 DEBUG oslo_concurrency.processutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
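
The probe above inspects the cached base image with qemu-img info, wrapped in oslo_concurrency.prlimit so a malformed image can burn at most 1 GiB of address space and 30 s of CPU before the child is killed. The same invocation, reproduced verbatim with its JSON output parsed (field names per qemu-img's output schema):

import json
import subprocess

out = subprocess.check_output([
    "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
    "--as=1073741824", "--cpu=30", "--",
    "env", "LC_ALL=C", "LANG=C",
    "qemu-img", "info",
    "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6",
    "--force-share", "--output=json",
])
info = json.loads(out)
print(info["format"], info["virtual-size"])
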
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.457 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.458 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.459 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.479 244018 DEBUG nova.storage.rbd_utils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image e291d969-fcea-4f60-a478-f7b81a91ccd9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.482 244018 DEBUG oslo_concurrency.processutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 e291d969-fcea-4f60-a478-f7b81a91ccd9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.505 244018 DEBUG nova.network.neutron [req-da8ca6f0-4e60-4a35-9582-2a6968527156 req-1052804e-c3ee-48ad-ab17-a59ac67e5a80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updated VIF entry in instance network info cache for port abdb97b5-8e9d-4929-af6f-bfb06c067878. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.506 244018 DEBUG nova.network.neutron [req-da8ca6f0-4e60-4a35-9582-2a6968527156 req-1052804e-c3ee-48ad-ab17-a59ac67e5a80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updating instance_info_cache with network_info: [{"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.530 244018 DEBUG oslo_concurrency.lockutils [req-da8ca6f0-4e60-4a35-9582-2a6968527156 req-1052804e-c3ee-48ad-ab17-a59ac67e5a80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:25:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 316 MiB data, 591 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.5 MiB/s wr, 187 op/s
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.987 244018 INFO nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Creating config drive at /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/disk.config#033[00m
Feb 25 07:25:15 np0005629333 nova_compute[244014]: 2026-02-25 12:25:15.993 244018 DEBUG oslo_concurrency.processutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpi8gres87 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.130 244018 DEBUG oslo_concurrency.processutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpi8gres87" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
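
The config drive is an ISO 9660 image built with exactly the mkisofs flags logged above (Rock Ridge + Joliet, lowercase and multi-dot names allowed, volume label config-2, metadata staged in a tmpdir), then pushed into the Ceph vms pool by the rbd import just below. A sketch of the same invocation, with a hypothetical /tmp/metadata standing in for the logged tmpdir:

import subprocess

subprocess.check_call([
    "/usr/bin/mkisofs", "-o", "disk.config",
    "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
    "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
    "-quiet", "-J", "-r", "-V", "config-2",
    "/tmp/metadata",  # hypothetical stand-in for the tmpdir in the log
])
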
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.241 244018 DEBUG nova.storage.rbd_utils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image b8086e43-4c45-422f-a3b5-fa665c256b30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.260 244018 DEBUG oslo_concurrency.processutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/disk.config b8086e43-4c45-422f-a3b5-fa665c256b30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.288 244018 DEBUG nova.network.neutron [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Successfully created port: 07860675-4ac4-43a4-ab6b-bacd17801fc2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.300 244018 DEBUG nova.compute.manager [req-533460a8-8a80-4f76-9207-b87bf1746f06 req-7b1ea306-21c8-4561-bbaf-43fbc0a2b7a2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Received event network-vif-plugged-9f614955-c92f-41c2-a47e-6d65c378bf82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.301 244018 DEBUG oslo_concurrency.lockutils [req-533460a8-8a80-4f76-9207-b87bf1746f06 req-7b1ea306-21c8-4561-bbaf-43fbc0a2b7a2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.301 244018 DEBUG oslo_concurrency.lockutils [req-533460a8-8a80-4f76-9207-b87bf1746f06 req-7b1ea306-21c8-4561-bbaf-43fbc0a2b7a2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.302 244018 DEBUG oslo_concurrency.lockutils [req-533460a8-8a80-4f76-9207-b87bf1746f06 req-7b1ea306-21c8-4561-bbaf-43fbc0a2b7a2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.303 244018 DEBUG nova.compute.manager [req-533460a8-8a80-4f76-9207-b87bf1746f06 req-7b1ea306-21c8-4561-bbaf-43fbc0a2b7a2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Processing event network-vif-plugged-9f614955-c92f-41c2-a47e-6d65c378bf82 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:25:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:16.302 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.305 244018 DEBUG nova.compute.manager [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.309 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022316.3093193, 1238794a-063b-4ac0-a7d9-3590353e3207 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.310 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.329 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.333 244018 INFO nova.virt.libvirt.driver [-] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Instance spawned successfully.#033[00m
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.333 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.347 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.350 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.390 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.402 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.402 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.403 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.403 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.405 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.405 244018 DEBUG nova.virt.libvirt.driver [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.456 244018 DEBUG oslo_concurrency.processutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 e291d969-fcea-4f60-a478-f7b81a91ccd9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.974s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.497 244018 INFO nova.compute.manager [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Took 14.00 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.497 244018 DEBUG nova.compute.manager [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.549 244018 DEBUG nova.storage.rbd_utils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] resizing rbd image e291d969-fcea-4f60-a478-f7b81a91ccd9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.696 244018 INFO nova.compute.manager [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Took 16.34 seconds to build instance.#033[00m
Feb 25 07:25:16 np0005629333 nova_compute[244014]: 2026-02-25 12:25:16.720 244018 DEBUG oslo_concurrency.lockutils [None req-b9a3747f-4571-48ba-8fd5-00e60821064d 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "1238794a-063b-4ac0-a7d9-3590353e3207" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.442s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:17 np0005629333 nova_compute[244014]: 2026-02-25 12:25:17.332 244018 DEBUG oslo_concurrency.processutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/disk.config b8086e43-4c45-422f-a3b5-fa665c256b30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:25:17 np0005629333 nova_compute[244014]: 2026-02-25 12:25:17.333 244018 INFO nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Deleting local config drive /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/disk.config because it was imported into RBD.#033[00m
Feb 25 07:25:17 np0005629333 NetworkManager[49836]: <info>  [1772022317.3799] manager: (tapabdb97b5-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/180)
Feb 25 07:25:17 np0005629333 kernel: tapabdb97b5-8e: entered promiscuous mode
Feb 25 07:25:17 np0005629333 nova_compute[244014]: 2026-02-25 12:25:17.383 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:17 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:17Z|00393|binding|INFO|Claiming lport abdb97b5-8e9d-4929-af6f-bfb06c067878 for this chassis.
Feb 25 07:25:17 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:17Z|00394|binding|INFO|abdb97b5-8e9d-4929-af6f-bfb06c067878: Claiming fa:16:3e:f8:53:87 10.100.0.6
Feb 25 07:25:17 np0005629333 nova_compute[244014]: 2026-02-25 12:25:17.388 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:17 np0005629333 systemd-udevd[285335]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.407 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:53:87 10.100.0.6'], port_security=['fa:16:3e:f8:53:87 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b8086e43-4c45-422f-a3b5-fa665c256b30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64c22162-7e15-45de-8fd2-8c9a24f27006', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c85a955249394f0faf7c890f5cd0df32', 'neutron:revision_number': '2', 'neutron:security_group_ids': '35edb9b7-5285-41a3-a867-f1cc587b3ad5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9495f97-67e6-4da7-a9b0-f643c9e48076, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=abdb97b5-8e9d-4929-af6f-bfb06c067878) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.408 157129 INFO neutron.agent.ovn.metadata.agent [-] Port abdb97b5-8e9d-4929-af6f-bfb06c067878 in datapath 64c22162-7e15-45de-8fd2-8c9a24f27006 bound to our chassis#033[00m
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.410 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64c22162-7e15-45de-8fd2-8c9a24f27006#033[00m
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.417 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[48096c7a-2b31-434c-ac93-5ffe8cfec417]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.418 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap64c22162-71 in ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:25:17 np0005629333 NetworkManager[49836]: <info>  [1772022317.4291] device (tapabdb97b5-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:25:17 np0005629333 NetworkManager[49836]: <info>  [1772022317.4300] device (tapabdb97b5-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.431 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap64c22162-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.432 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5e83f23a-c5fb-4468-bc4e-cc577f15bf9e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.433 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8301af38-9968-4fe9-b420-79121176d3af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:17 np0005629333 systemd-machined[210048]: New machine qemu-52-instance-0000002f.
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.441 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[3e39619b-a8a3-4b91-879f-102a49fecb80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:17 np0005629333 nova_compute[244014]: 2026-02-25 12:25:17.444 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:17 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:17Z|00395|binding|INFO|Setting lport abdb97b5-8e9d-4929-af6f-bfb06c067878 ovn-installed in OVS
Feb 25 07:25:17 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:17Z|00396|binding|INFO|Setting lport abdb97b5-8e9d-4929-af6f-bfb06c067878 up in Southbound
Feb 25 07:25:17 np0005629333 systemd[1]: Started Virtual Machine qemu-52-instance-0000002f.
Feb 25 07:25:17 np0005629333 nova_compute[244014]: 2026-02-25 12:25:17.451 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.450 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[81205c3b-01ab-4699-951d-dfe245c3db93]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.478 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0d309e13-733e-47e7-a54b-0e16a07c6436]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.483 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ddca2a32-e1d1-4577-8ecf-0898df3c4445]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:17 np0005629333 systemd-udevd[285341]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:25:17 np0005629333 NetworkManager[49836]: <info>  [1772022317.4846] manager: (tap64c22162-70): new Veth device (/org/freedesktop/NetworkManager/Devices/181)
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.510 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e06a410c-fade-4ab4-8445-99a8a19ff4e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.513 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9815e7ee-a624-48ab-bfc2-922d9d44dc7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:17 np0005629333 NetworkManager[49836]: <info>  [1772022317.5322] device (tap64c22162-70): carrier: link connected
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.537 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[561538d9-00a9-4130-939f-0c8beea8da7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.549 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[61459568-feab-4785-8206-ad66532931e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64c22162-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:1c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428693, 'reachable_time': 31211, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285371, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.562 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1c24f3fb-2ae9-462c-b500-b7a915fdbd64]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe08:1cca'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428693, 'tstamp': 428693}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285372, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.572 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4a74ba5f-a695-4864-8496-82c703e92baf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64c22162-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:1c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428693, 'reachable_time': 31211, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 285373, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.598 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cc3225e2-ed61-4b6f-bc60-ec0c1051b875]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.651 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[757248a8-be73-474d-aa9d-832fd185580a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.653 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64c22162-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.654 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.654 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64c22162-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:17 np0005629333 nova_compute[244014]: 2026-02-25 12:25:17.656 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:17 np0005629333 kernel: tap64c22162-70: entered promiscuous mode
Feb 25 07:25:17 np0005629333 nova_compute[244014]: 2026-02-25 12:25:17.658 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:17 np0005629333 NetworkManager[49836]: <info>  [1772022317.6598] manager: (tap64c22162-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/182)
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.662 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64c22162-70, col_values=(('external_ids', {'iface-id': '81f0f54c-4e04-4adf-952f-b6d0fe9698c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:17 np0005629333 nova_compute[244014]: 2026-02-25 12:25:17.663 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:17 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:17Z|00397|binding|INFO|Releasing lport 81f0f54c-4e04-4adf-952f-b6d0fe9698c7 from this chassis (sb_readonly=0)
Feb 25 07:25:17 np0005629333 nova_compute[244014]: 2026-02-25 12:25:17.665 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.666 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/64c22162-7e15-45de-8fd2-8c9a24f27006.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/64c22162-7e15-45de-8fd2-8c9a24f27006.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.667 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e903aa24-38d5-47d8-b24f-afc1b66b68aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.668 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-64c22162-7e15-45de-8fd2-8c9a24f27006
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/64c22162-7e15-45de-8fd2-8c9a24f27006.pid.haproxy
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 64c22162-7e15-45de-8fd2-8c9a24f27006
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:25:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:17.670 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'env', 'PROCESS_TAG=haproxy-64c22162-7e15-45de-8fd2-8c9a24f27006', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/64c22162-7e15-45de-8fd2-8c9a24f27006.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 07:25:17 np0005629333 nova_compute[244014]: 2026-02-25 12:25:17.674 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 362 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 6.4 MiB/s wr, 294 op/s
Feb 25 07:25:17 np0005629333 nova_compute[244014]: 2026-02-25 12:25:17.898 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:25:17 np0005629333 nova_compute[244014]: 2026-02-25 12:25:17.911 244018 DEBUG nova.objects.instance [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lazy-loading 'migration_context' on Instance uuid e291d969-fcea-4f60-a478-f7b81a91ccd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:25:17 np0005629333 nova_compute[244014]: 2026-02-25 12:25:17.931 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:25:17 np0005629333 nova_compute[244014]: 2026-02-25 12:25:17.931 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Ensure instance console log exists: /var/lib/nova/instances/e291d969-fcea-4f60-a478-f7b81a91ccd9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:25:17 np0005629333 nova_compute[244014]: 2026-02-25 12:25:17.932 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:17 np0005629333 nova_compute[244014]: 2026-02-25 12:25:17.932 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:17 np0005629333 nova_compute[244014]: 2026-02-25 12:25:17.932 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:18 np0005629333 podman[285441]: 2026-02-25 12:25:18.01733181 +0000 UTC m=+0.033226941 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:25:18 np0005629333 podman[285441]: 2026-02-25 12:25:18.411673391 +0000 UTC m=+0.427568542 container create 61e4c109bda741d406f5418ff4a666a3832342de2e3c5435c4c55719b3407174 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 25 07:25:18 np0005629333 nova_compute[244014]: 2026-02-25 12:25:18.430 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022318.4300559, b8086e43-4c45-422f-a3b5-fa665c256b30 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:25:18 np0005629333 nova_compute[244014]: 2026-02-25 12:25:18.431 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] VM Started (Lifecycle Event)#033[00m
Feb 25 07:25:18 np0005629333 nova_compute[244014]: 2026-02-25 12:25:18.439 244018 DEBUG nova.virt.libvirt.driver [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Feb 25 07:25:18 np0005629333 nova_compute[244014]: 2026-02-25 12:25:18.462 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:25:18 np0005629333 nova_compute[244014]: 2026-02-25 12:25:18.465 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022318.4301653, b8086e43-4c45-422f-a3b5-fa665c256b30 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:25:18 np0005629333 nova_compute[244014]: 2026-02-25 12:25:18.466 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:25:18 np0005629333 nova_compute[244014]: 2026-02-25 12:25:18.500 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:25:18 np0005629333 nova_compute[244014]: 2026-02-25 12:25:18.503 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:25:18 np0005629333 systemd[1]: Started libpod-conmon-61e4c109bda741d406f5418ff4a666a3832342de2e3c5435c4c55719b3407174.scope.
Feb 25 07:25:18 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:25:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee9ea23fb31ef780d3141b9da7996fc6467cb5ff171d46fe3017a00206862844/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:25:18 np0005629333 nova_compute[244014]: 2026-02-25 12:25:18.583 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:25:18 np0005629333 podman[285441]: 2026-02-25 12:25:18.596158192 +0000 UTC m=+0.612053393 container init 61e4c109bda741d406f5418ff4a666a3832342de2e3c5435c4c55719b3407174 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 25 07:25:18 np0005629333 podman[285441]: 2026-02-25 12:25:18.60314415 +0000 UTC m=+0.619039301 container start 61e4c109bda741d406f5418ff4a666a3832342de2e3c5435c4c55719b3407174 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 25 07:25:18 np0005629333 neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006[285480]: [NOTICE]   (285484) : New worker (285486) forked
Feb 25 07:25:18 np0005629333 neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006[285480]: [NOTICE]   (285484) : Loading success.
Feb 25 07:25:18 np0005629333 nova_compute[244014]: 2026-02-25 12:25:18.735 244018 DEBUG nova.network.neutron [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Successfully updated port: 07860675-4ac4-43a4-ab6b-bacd17801fc2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:25:18 np0005629333 nova_compute[244014]: 2026-02-25 12:25:18.765 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "refresh_cache-e291d969-fcea-4f60-a478-f7b81a91ccd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:25:18 np0005629333 nova_compute[244014]: 2026-02-25 12:25:18.766 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquired lock "refresh_cache-e291d969-fcea-4f60-a478-f7b81a91ccd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:25:18 np0005629333 nova_compute[244014]: 2026-02-25 12:25:18.766 244018 DEBUG nova.network.neutron [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.078 244018 DEBUG nova.compute.manager [req-b039b08c-3118-4e84-b529-1d5df25f068d req-60fed3d1-34b9-41fa-92eb-16b9674b5601 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Received event network-vif-plugged-9f614955-c92f-41c2-a47e-6d65c378bf82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.079 244018 DEBUG oslo_concurrency.lockutils [req-b039b08c-3118-4e84-b529-1d5df25f068d req-60fed3d1-34b9-41fa-92eb-16b9674b5601 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.079 244018 DEBUG oslo_concurrency.lockutils [req-b039b08c-3118-4e84-b529-1d5df25f068d req-60fed3d1-34b9-41fa-92eb-16b9674b5601 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.080 244018 DEBUG oslo_concurrency.lockutils [req-b039b08c-3118-4e84-b529-1d5df25f068d req-60fed3d1-34b9-41fa-92eb-16b9674b5601 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.080 244018 DEBUG nova.compute.manager [req-b039b08c-3118-4e84-b529-1d5df25f068d req-60fed3d1-34b9-41fa-92eb-16b9674b5601 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] No waiting events found dispatching network-vif-plugged-9f614955-c92f-41c2-a47e-6d65c378bf82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.080 244018 WARNING nova.compute.manager [req-b039b08c-3118-4e84-b529-1d5df25f068d req-60fed3d1-34b9-41fa-92eb-16b9674b5601 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Received unexpected event network-vif-plugged-9f614955-c92f-41c2-a47e-6d65c378bf82 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.081 244018 DEBUG nova.compute.manager [req-b039b08c-3118-4e84-b529-1d5df25f068d req-60fed3d1-34b9-41fa-92eb-16b9674b5601 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received event network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.081 244018 DEBUG oslo_concurrency.lockutils [req-b039b08c-3118-4e84-b529-1d5df25f068d req-60fed3d1-34b9-41fa-92eb-16b9674b5601 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.081 244018 DEBUG oslo_concurrency.lockutils [req-b039b08c-3118-4e84-b529-1d5df25f068d req-60fed3d1-34b9-41fa-92eb-16b9674b5601 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.082 244018 DEBUG oslo_concurrency.lockutils [req-b039b08c-3118-4e84-b529-1d5df25f068d req-60fed3d1-34b9-41fa-92eb-16b9674b5601 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.082 244018 DEBUG nova.compute.manager [req-b039b08c-3118-4e84-b529-1d5df25f068d req-60fed3d1-34b9-41fa-92eb-16b9674b5601 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Processing event network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.083 244018 DEBUG nova.compute.manager [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.087 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022319.0871675, b8086e43-4c45-422f-a3b5-fa665c256b30 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.087 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.095 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.098 244018 INFO nova.virt.libvirt.driver [-] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Instance spawned successfully.#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.098 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.128 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.133 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.133 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.134 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.134 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.134 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.135 244018 DEBUG nova.virt.libvirt.driver [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.139 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.173 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.204 244018 INFO nova.compute.manager [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Took 8.16 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.205 244018 DEBUG nova.compute.manager [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.275 244018 INFO nova.compute.manager [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Took 9.47 seconds to build instance.#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.312 244018 DEBUG oslo_concurrency.lockutils [None req-9b943a63-8233-437a-b804-d6fb8df4f36b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
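
The lockutils lines show the whole build serialized under a lock named after the instance UUID, held here for 9.597 seconds. A sketch of the same pattern with oslo.concurrency; the lock body is a placeholder, only the locking call is the library's real API:

    # Sketch of per-instance serialization via oslo.concurrency.
    import time
    from oslo_concurrency import lockutils

    instance_uuid = "b8086e43-4c45-422f-a3b5-fa665c256b30"  # from the log

    # While this lock is held, a concurrent delete or rebuild of the same
    # instance blocks, which is why terminate requests queue behind spawns.
    with lockutils.lock(instance_uuid):
        start = time.monotonic()
        ...  # build-and-run work would happen here
        print(f"held {time.monotonic() - start:.3f}s")  # cf. "held 9.597s"
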
Feb 25 07:25:19 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:19Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8b:cd:f8 10.100.0.12
Feb 25 07:25:19 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:19Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8b:cd:f8 10.100.0.12
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.623 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.714 244018 DEBUG nova.network.neutron [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.768 244018 DEBUG nova.compute.manager [req-c183a5cd-20c9-41b4-80d3-2649440d9a5c req-d68c95c4-b5ef-4386-a4c1-8df026b3d9c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Received event network-changed-07860675-4ac4-43a4-ab6b-bacd17801fc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.768 244018 DEBUG nova.compute.manager [req-c183a5cd-20c9-41b4-80d3-2649440d9a5c req-d68c95c4-b5ef-4386-a4c1-8df026b3d9c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Refreshing instance network info cache due to event network-changed-07860675-4ac4-43a4-ab6b-bacd17801fc2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:25:19 np0005629333 nova_compute[244014]: 2026-02-25 12:25:19.769 244018 DEBUG oslo_concurrency.lockutils [req-c183a5cd-20c9-41b4-80d3-2649440d9a5c req-d68c95c4-b5ef-4386-a4c1-8df026b3d9c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-e291d969-fcea-4f60-a478-f7b81a91ccd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:25:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 362 MiB data, 613 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 3.0 MiB/s wr, 249 op/s
Feb 25 07:25:20 np0005629333 nova_compute[244014]: 2026-02-25 12:25:20.135 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:20 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:20Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fc:58:40 10.100.0.9
Feb 25 07:25:20 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:20Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fc:58:40 10.100.0.9
Feb 25 07:25:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 423 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 7.3 MiB/s rd, 6.0 MiB/s wr, 377 op/s
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.154 244018 DEBUG nova.compute.manager [req-6a86b125-8ed0-49f1-9ce6-e136be937e00 req-6b196023-1ab4-46cb-a60d-714277933d63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received event network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.155 244018 DEBUG oslo_concurrency.lockutils [req-6a86b125-8ed0-49f1-9ce6-e136be937e00 req-6b196023-1ab4-46cb-a60d-714277933d63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.156 244018 DEBUG oslo_concurrency.lockutils [req-6a86b125-8ed0-49f1-9ce6-e136be937e00 req-6b196023-1ab4-46cb-a60d-714277933d63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.156 244018 DEBUG oslo_concurrency.lockutils [req-6a86b125-8ed0-49f1-9ce6-e136be937e00 req-6b196023-1ab4-46cb-a60d-714277933d63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.157 244018 DEBUG nova.compute.manager [req-6a86b125-8ed0-49f1-9ce6-e136be937e00 req-6b196023-1ab4-46cb-a60d-714277933d63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] No waiting events found dispatching network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.158 244018 WARNING nova.compute.manager [req-6a86b125-8ed0-49f1-9ce6-e136be937e00 req-6b196023-1ab4-46cb-a60d-714277933d63 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received unexpected event network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 for instance with vm_state active and task_state None.#033[00m
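
The warning above is the tail of Nova's external-event plumbing: Neutron delivered network-vif-plugged for the port, but no waiter was registered because the instance is already active, so the event is reported instead of dispatched. A sketch of that pop-or-warn behavior, with the waiter table and names assumed:

    # Hedged sketch of the pop_instance_event behavior seen in the log.
    waiters = {}  # (instance_uuid, event_name) -> callback

    def pop_instance_event(instance_uuid, event_name):
        cb = waiters.pop((instance_uuid, event_name), None)
        if cb is None:
            # Nobody is blocked on this event; matches the WARNING above.
            print(f"unexpected event {event_name} for {instance_uuid}")
            return
        cb()

    pop_instance_event("b8086e43-4c45-422f-a3b5-fa665c256b30",
                       "network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878")
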
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.202 244018 DEBUG nova.network.neutron [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Updating instance_info_cache with network_info: [{"id": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "address": "fa:16:3e:07:ea:76", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07860675-4a", "ovs_interfaceid": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.224 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Releasing lock "refresh_cache-e291d969-fcea-4f60-a478-f7b81a91ccd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.224 244018 DEBUG nova.compute.manager [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Instance network_info: |[{"id": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "address": "fa:16:3e:07:ea:76", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07860675-4a", "ovs_interfaceid": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.226 244018 DEBUG oslo_concurrency.lockutils [req-c183a5cd-20c9-41b4-80d3-2649440d9a5c req-d68c95c4-b5ef-4386-a4c1-8df026b3d9c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-e291d969-fcea-4f60-a478-f7b81a91ccd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.227 244018 DEBUG nova.network.neutron [req-c183a5cd-20c9-41b4-80d3-2649440d9a5c req-d68c95c4-b5ef-4386-a4c1-8df026b3d9c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Refreshing network info cache for port 07860675-4ac4-43a4-ab6b-bacd17801fc2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.232 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Start _get_guest_xml network_info=[{"id": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "address": "fa:16:3e:07:ea:76", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07860675-4a", "ovs_interfaceid": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.238 244018 WARNING nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.245 244018 DEBUG nova.virt.libvirt.host [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.246 244018 DEBUG nova.virt.libvirt.host [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.254 244018 DEBUG nova.virt.libvirt.host [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.255 244018 DEBUG nova.virt.libvirt.host [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
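
The driver probes cgroups v1 first and falls back to v2; on this host only the v2 unified hierarchy exposes a cpu controller. A portable way to reproduce the probe, assuming the standard kernel mount points:

    # Sketch of the cgroup CPU-controller probe (standard mount points assumed).
    from pathlib import Path

    def has_cgroupsv1_cpu():
        return Path("/sys/fs/cgroup/cpu").is_dir()  # legacy v1 hierarchy

    def has_cgroupsv2_cpu():
        controllers = Path("/sys/fs/cgroup/cgroup.controllers")
        return controllers.is_file() and "cpu" in controllers.read_text().split()

    print(has_cgroupsv1_cpu(), has_cgroupsv2_cpu())  # log shows: missing, found
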
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.256 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.256 244018 DEBUG nova.virt.hardware [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.257 244018 DEBUG nova.virt.hardware [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.258 244018 DEBUG nova.virt.hardware [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.258 244018 DEBUG nova.virt.hardware [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.259 244018 DEBUG nova.virt.hardware [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.260 244018 DEBUG nova.virt.hardware [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.260 244018 DEBUG nova.virt.hardware [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.261 244018 DEBUG nova.virt.hardware [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.261 244018 DEBUG nova.virt.hardware [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.262 244018 DEBUG nova.virt.hardware [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.262 244018 DEBUG nova.virt.hardware [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
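
With no topology constraints from flavor or image, Nova enumerates the sockets:cores:threads factorizations of the vCPU count within the 65536 limits, and for one vCPU that leaves only 1:1:1. A brute-force sketch of the enumeration; Nova's real ordering and preference logic is more involved:

    # Sketch: all (sockets, cores, threads) triples whose product equals vcpus.
    def possible_topologies(vcpus, limit=65536):
        for s in range(1, min(vcpus, limit) + 1):
            for c in range(1, min(vcpus, limit) + 1):
                for t in range(1, min(vcpus, limit) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # -> [(1, 1, 1)], as logged
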
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.267 244018 DEBUG oslo_concurrency.processutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:25:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:25:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3323372262' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.773 244018 DEBUG oslo_concurrency.lockutils [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "22450f11-d042-48c5-941e-fd544e58d84a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.774 244018 DEBUG oslo_concurrency.lockutils [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "22450f11-d042-48c5-941e-fd544e58d84a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.775 244018 DEBUG oslo_concurrency.lockutils [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "22450f11-d042-48c5-941e-fd544e58d84a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.776 244018 DEBUG oslo_concurrency.lockutils [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "22450f11-d042-48c5-941e-fd544e58d84a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.776 244018 DEBUG oslo_concurrency.lockutils [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "22450f11-d042-48c5-941e-fd544e58d84a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.779 244018 INFO nova.compute.manager [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Terminating instance#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.780 244018 DEBUG nova.compute.manager [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.782 244018 DEBUG oslo_concurrency.processutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.811 244018 DEBUG nova.storage.rbd_utils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image e291d969-fcea-4f60-a478-f7b81a91ccd9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.817 244018 DEBUG oslo_concurrency.processutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
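
Before touching RBD, Nova shells out to the ceph CLI to discover the monitor map, which is the subprocess logged above (the first run returned 0 in 0.515s). A sketch of the same call through oslo.concurrency's processutils; it needs a reachable cluster and the 'openstack' keyring, so treat it as illustrative:

    # Sketch of monitor discovery via the ceph CLI (cluster access assumed).
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    print([m["name"] for m in json.loads(out)["mons"]])
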
Feb 25 07:25:22 np0005629333 kernel: tap4f6b67d0-05 (unregistering): left promiscuous mode
Feb 25 07:25:22 np0005629333 NetworkManager[49836]: <info>  [1772022322.9003] device (tap4f6b67d0-05): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:25:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:22Z|00398|binding|INFO|Releasing lport 4f6b67d0-056b-4277-a604-6221f16e30b2 from this chassis (sb_readonly=0)
Feb 25 07:25:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:22Z|00399|binding|INFO|Setting lport 4f6b67d0-056b-4277-a604-6221f16e30b2 down in Southbound
Feb 25 07:25:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:22Z|00400|binding|INFO|Removing iface tap4f6b67d0-05 ovn-installed in OVS
Feb 25 07:25:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:22.919 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:cd:f8 10.100.0.12'], port_security=['fa:16:3e:8b:cd:f8 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '22450f11-d042-48c5-941e-fd544e58d84a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c97c5b11-7517-46fe-a6ca-63894792908c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8744bdbc0f1499388aab5f477246beb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9622f97c-a9c4-423f-b49e-154152bd6881', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b7ae960-1b2b-4f15-b35e-8e889d9ccce8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=4f6b67d0-056b-4277-a604-6221f16e30b2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:25:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:22.921 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 4f6b67d0-056b-4277-a604-6221f16e30b2 in datapath c97c5b11-7517-46fe-a6ca-63894792908c unbound from our chassis#033[00m
Feb 25 07:25:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:22.922 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c97c5b11-7517-46fe-a6ca-63894792908c#033[00m
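
The metadata agent keys everything off Port_Binding rows: a port unbinding from this chassis triggers a re-check of the datapath, which either (re)provisions the metadata namespace or tears it down once no VIF ports remain, as happens a second later in this log. A toy sketch of that decision, names assumed:

    # Hedged sketch of the agent's provision-or-teardown choice per datapath.
    def on_port_binding_change(network_id, vif_ports_on_chassis):
        if vif_ports_on_chassis:
            return f"provision metadata for network {network_id}"
        return f"tear down ovnmeta-{network_id} namespace"

    print(on_port_binding_change("c97c5b11-7517-46fe-a6ca-63894792908c", []))
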
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.920 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.927 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:22.936 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b410593f-798d-467c-b2a5-5d7315363496]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.947 244018 DEBUG oslo_concurrency.lockutils [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "1238794a-063b-4ac0-a7d9-3590353e3207" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.947 244018 DEBUG oslo_concurrency.lockutils [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "1238794a-063b-4ac0-a7d9-3590353e3207" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.948 244018 DEBUG oslo_concurrency.lockutils [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.948 244018 DEBUG oslo_concurrency.lockutils [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.948 244018 DEBUG oslo_concurrency.lockutils [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.949 244018 INFO nova.compute.manager [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Terminating instance#033[00m
Feb 25 07:25:22 np0005629333 nova_compute[244014]: 2026-02-25 12:25:22.950 244018 DEBUG nova.compute.manager [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:25:22 np0005629333 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Feb 25 07:25:22 np0005629333 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002d.scope: Consumed 11.743s CPU time.
Feb 25 07:25:22 np0005629333 systemd-machined[210048]: Machine qemu-50-instance-0000002d terminated.
Feb 25 07:25:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:22.963 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c9af9d4d-6344-4cda-9aed-24c9d25407d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:22.966 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c30a0132-ad00-441c-b6e7-f121c5e13425]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:22.986 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4e2ab585-01db-479d-b4a4-2e7fe0ddc6d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.001 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[57aa28b4-d99b-4294-ab1e-7863af16d17e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc97c5b11-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:61:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 114], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 427672, 'reachable_time': 15170, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285567, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:23 np0005629333 kernel: tap9f614955-c9 (unregistering): left promiscuous mode
Feb 25 07:25:23 np0005629333 NetworkManager[49836]: <info>  [1772022323.0121] device (tap9f614955-c9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:25:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.017 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[660aa12f-4bc4-4fd1-a0f6-cbf4d6b30d16]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc97c5b11-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 427682, 'tstamp': 427682}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285568, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc97c5b11-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 427685, 'tstamp': 427685}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285568, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
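
The privsep replies above are pyroute2-style netlink dumps taken inside the ovnmeta- namespace, showing the proxy's two addresses: 10.100.0.2/28 on the tenant subnet and the link-local metadata address 169.254.169.254/32. A sketch that lists the same data with pyroute2 directly; the namespace must still exist and root privileges are assumed:

    # Sketch: dump addresses inside the ovnmeta- namespace with pyroute2.
    from pyroute2 import NetNS

    # flags=0: fail if the namespace is gone rather than creating it.
    with NetNS("ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c", flags=0) as ns:
        for addr in ns.get_addr():
            attrs = dict(addr["attrs"])
            print(attrs.get("IFA_ADDRESS"), addr["prefixlen"])
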
Feb 25 07:25:23 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:23Z|00401|binding|INFO|Releasing lport 9f614955-c92f-41c2-a47e-6d65c378bf82 from this chassis (sb_readonly=0)
Feb 25 07:25:23 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:23Z|00402|binding|INFO|Setting lport 9f614955-c92f-41c2-a47e-6d65c378bf82 down in Southbound
Feb 25 07:25:23 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:23Z|00403|binding|INFO|Removing iface tap9f614955-c9 ovn-installed in OVS
Feb 25 07:25:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.020 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc97c5b11-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.022 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.025 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.030 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc97c5b11-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.031 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:25:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.032 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:31:fe 10.100.0.7'], port_security=['fa:16:3e:db:31:fe 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1238794a-063b-4ac0-a7d9-3590353e3207', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c97c5b11-7517-46fe-a6ca-63894792908c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8744bdbc0f1499388aab5f477246beb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9622f97c-a9c4-423f-b49e-154152bd6881', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b7ae960-1b2b-4f15-b35e-8e889d9ccce8, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=9f614955-c92f-41c2-a47e-6d65c378bf82) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:25:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.034 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc97c5b11-70, col_values=(('external_ids', {'iface-id': 'db412aa7-4ad4-4eb8-b61f-dd3e71d5329d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.035 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:25:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.036 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 9f614955-c92f-41c2-a47e-6d65c378bf82 in datapath c97c5b11-7517-46fe-a6ca-63894792908c unbound from our chassis#033[00m
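
The DelPortCommand/AddPortCommand/DbSetCommand entries are ovsdbapp transactions against the local Open vSwitch database. A sketch batching the same three steps with ovsdbapp's Open_vSwitch schema API; the socket path is the usual default and an assumption about this host:

    # Sketch of an ovsdbapp transaction mirroring the logged commands.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/var/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port("tapc97c5b11-70", bridge="br-ex", if_exists=True))
        txn.add(api.add_port("br-int", "tapc97c5b11-70", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tapc97c5b11-70",
            ("external_ids",
             {"iface-id": "db412aa7-4ad4-4eb8-b61f-dd3e71d5329d"})))
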
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.036 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.039 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c97c5b11-7517-46fe-a6ca-63894792908c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:25:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.040 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[250ba1a5-1e81-482a-b1c6-f43dd163874b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.040 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c namespace which is not needed anymore#033[00m
Feb 25 07:25:23 np0005629333 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002e.scope: Deactivated successfully.
Feb 25 07:25:23 np0005629333 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002e.scope: Consumed 7.403s CPU time.
Feb 25 07:25:23 np0005629333 systemd-machined[210048]: Machine qemu-51-instance-0000002e terminated.
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.076 244018 INFO nova.virt.libvirt.driver [-] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Instance destroyed successfully.#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.077 244018 DEBUG nova.objects.instance [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lazy-loading 'resources' on Instance uuid 22450f11-d042-48c5-941e-fd544e58d84a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.098 244018 DEBUG nova.virt.libvirt.vif [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:24:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2025017343',display_name='tempest-tempest.common.compute-instance-2025017343-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2025017343-1',id=45,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:25:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c8744bdbc0f1499388aab5f477246beb',ramdisk_id='',reservation_id='r-74gnqqyr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1900553995',owner_user_name='tempest-MultipleCreateTestJSON-1900553995-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:25:08Z,user_data=None,user_id='0899f3fdb57d46a395d07753dd261241',uuid=22450f11-d042-48c5-941e-fd544e58d84a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f6b67d0-056b-4277-a604-6221f16e30b2", "address": "fa:16:3e:8b:cd:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f6b67d0-05", "ovs_interfaceid": "4f6b67d0-056b-4277-a604-6221f16e30b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.098 244018 DEBUG nova.network.os_vif_util [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converting VIF {"id": "4f6b67d0-056b-4277-a604-6221f16e30b2", "address": "fa:16:3e:8b:cd:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f6b67d0-05", "ovs_interfaceid": "4f6b67d0-056b-4277-a604-6221f16e30b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.099 244018 DEBUG nova.network.os_vif_util [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:cd:f8,bridge_name='br-int',has_traffic_filtering=True,id=4f6b67d0-056b-4277-a604-6221f16e30b2,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f6b67d0-05') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.099 244018 DEBUG os_vif [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:cd:f8,bridge_name='br-int',has_traffic_filtering=True,id=4f6b67d0-056b-4277-a604-6221f16e30b2,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f6b67d0-05') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.101 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.102 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f6b67d0-05, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.104 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.105 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.107 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.108 244018 INFO os_vif [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:cd:f8,bridge_name='br-int',has_traffic_filtering=True,id=4f6b67d0-056b-4277-a604-6221f16e30b2,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f6b67d0-05')#033[00m
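
The unplug path runs through os-vif: Nova converts its VIF dict to a VIFOpenVSwitch object and the 'ovs' plugin removes the port, producing the DelPortCommand above. A minimal os-vif sketch with field values copied from the log; running it needs a local Open vSwitch, and the object set here is trimmed to the essentials:

    # Sketch of the os-vif unplug call chain (local OVS assumed).
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin among others

    my_vif = vif.VIFOpenVSwitch(
        id="4f6b67d0-056b-4277-a604-6221f16e30b2",
        address="fa:16:3e:8b:cd:f8",
        vif_name="tap4f6b67d0-05",
        bridge_name="br-int",
        network=network.Network(id="c97c5b11-7517-46fe-a6ca-63894792908c"))
    instance = instance_info.InstanceInfo(
        uuid="22450f11-d042-48c5-941e-fd544e58d84a",
        name="instance-0000002d")

    os_vif.unplug(my_vif, instance)
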
Feb 25 07:25:23 np0005629333 neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c[284633]: [NOTICE]   (284638) : haproxy version is 2.8.14-c23fe91
Feb 25 07:25:23 np0005629333 neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c[284633]: [NOTICE]   (284638) : path to executable is /usr/sbin/haproxy
Feb 25 07:25:23 np0005629333 neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c[284633]: [WARNING]  (284638) : Exiting Master process...
Feb 25 07:25:23 np0005629333 neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c[284633]: [ALERT]    (284638) : Current worker (284649) exited with code 143 (Terminated)
Feb 25 07:25:23 np0005629333 neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c[284633]: [WARNING]  (284638) : All workers exited. Exiting... (0)
Feb 25 07:25:23 np0005629333 systemd[1]: libpod-b4f05d582605c0073c469746dbd4841db6061815d6bdcb4cf3d14bdc5e5afc87.scope: Deactivated successfully.
Feb 25 07:25:23 np0005629333 podman[285607]: 2026-02-25 12:25:23.160929811 +0000 UTC m=+0.044920743 container died b4f05d582605c0073c469746dbd4841db6061815d6bdcb4cf3d14bdc5e5afc87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.184 244018 INFO nova.virt.libvirt.driver [-] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Instance destroyed successfully.#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.185 244018 DEBUG nova.objects.instance [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lazy-loading 'resources' on Instance uuid 1238794a-063b-4ac0-a7d9-3590353e3207 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:25:23 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b4f05d582605c0073c469746dbd4841db6061815d6bdcb4cf3d14bdc5e5afc87-userdata-shm.mount: Deactivated successfully.
Feb 25 07:25:23 np0005629333 systemd[1]: var-lib-containers-storage-overlay-8f8c15061c2508c9a22a1e470245ba40e3bbbed4564cf9ed0210b81c7f79f5d3-merged.mount: Deactivated successfully.
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.231 244018 DEBUG nova.virt.libvirt.vif [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:24:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2025017343',display_name='tempest-tempest.common.compute-instance-2025017343-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2025017343-2',id=46,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2026-02-25T12:25:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c8744bdbc0f1499388aab5f477246beb',ramdisk_id='',reservation_id='r-74gnqqyr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1900553995',owner_user_name='tempest-MultipleCreateTestJSON-1900553995-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:25:16Z,user_data=None,user_id='0899f3fdb57d46a395d07753dd261241',uuid=1238794a-063b-4ac0-a7d9-3590353e3207,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9f614955-c92f-41c2-a47e-6d65c378bf82", "address": "fa:16:3e:db:31:fe", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f614955-c9", "ovs_interfaceid": "9f614955-c92f-41c2-a47e-6d65c378bf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.232 244018 DEBUG nova.network.os_vif_util [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converting VIF {"id": "9f614955-c92f-41c2-a47e-6d65c378bf82", "address": "fa:16:3e:db:31:fe", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f614955-c9", "ovs_interfaceid": "9f614955-c92f-41c2-a47e-6d65c378bf82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:25:23 np0005629333 podman[285607]: 2026-02-25 12:25:23.232672631 +0000 UTC m=+0.116663553 container cleanup b4f05d582605c0073c469746dbd4841db6061815d6bdcb4cf3d14bdc5e5afc87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.233 244018 DEBUG nova.network.os_vif_util [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:31:fe,bridge_name='br-int',has_traffic_filtering=True,id=9f614955-c92f-41c2-a47e-6d65c378bf82,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f614955-c9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.233 244018 DEBUG os_vif [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:31:fe,bridge_name='br-int',has_traffic_filtering=True,id=9f614955-c92f-41c2-a47e-6d65c378bf82,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f614955-c9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.235 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.235 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f614955-c9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.240 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:25:23 np0005629333 systemd[1]: libpod-conmon-b4f05d582605c0073c469746dbd4841db6061815d6bdcb4cf3d14bdc5e5afc87.scope: Deactivated successfully.
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.243 244018 INFO os_vif [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:31:fe,bridge_name='br-int',has_traffic_filtering=True,id=9f614955-c92f-41c2-a47e-6d65c378bf82,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f614955-c9')#033[00m
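The DelPortCommand transactions above come from ovsdbapp's Open_vSwitch schema API. A rough sketch of that layer follows; the ovsdb-server socket path is an assumption for illustration (nova takes it from configuration):

    # Sketch of the ovsdbapp layer behind "Running txn ... DelPortCommand".
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # assumed endpoint
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Same shape as DelPortCommand(port=..., bridge=br-int, if_exists=True)
    api.del_port('tap9f614955-c9', bridge='br-int',
                 if_exists=True).execute(check_error=True)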
Feb 25 07:25:23 np0005629333 podman[285664]: 2026-02-25 12:25:23.316835973 +0000 UTC m=+0.065343340 container remove b4f05d582605c0073c469746dbd4841db6061815d6bdcb4cf3d14bdc5e5afc87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:25:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.326 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[066d5a57-1671-4e65-83e3-8cd029ea8f74]: (4, ('Wed Feb 25 12:25:23 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c (b4f05d582605c0073c469746dbd4841db6061815d6bdcb4cf3d14bdc5e5afc87)\nb4f05d582605c0073c469746dbd4841db6061815d6bdcb4cf3d14bdc5e5afc87\nWed Feb 25 12:25:23 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c (b4f05d582605c0073c469746dbd4841db6061815d6bdcb4cf3d14bdc5e5afc87)\nb4f05d582605c0073c469746dbd4841db6061815d6bdcb4cf3d14bdc5e5afc87\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
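The privsep reply above carries the stdout of a stop-then-delete helper for the haproxy sidecar container. Functionally it reduces to two podman calls; a sketch, with the container name taken from the log:

    from oslo_concurrency import processutils

    name = 'neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c'
    processutils.execute('podman', 'stop', name)  # SIGTERM -> worker exit 143 above
    processutils.execute('podman', 'rm', name)    # matches the 'container remove' event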
Feb 25 07:25:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.329 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b3574ddf-abf0-4dbd-a096-a174b96fd29f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.330 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc97c5b11-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:23 np0005629333 kernel: tapc97c5b11-70: left promiscuous mode
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.334 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.341 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.342 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a3674ba5-7701-419b-98f5-e5be1643e33b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.355 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[01ca78ac-f750-43fe-8670-86d7ffeccf86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.357 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7db5edf7-f409-4414-8011-312086be8fda]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:25:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/968102273' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:25:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.373 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5c55aa01-fb80-413f-9b78-9507647906b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 427665, 'reachable_time': 19920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285699, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:23 np0005629333 systemd[1]: run-netns-ovnmeta\x2dc97c5b11\x2d7517\x2d46fe\x2da6ca\x2d63894792908c.mount: Deactivated successfully.
Feb 25 07:25:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.379 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:25:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:23.380 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[209a15be-f223-49c6-bf90-0adeaa33314c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
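The long RTM_NEWLINK attribute dump a few lines up is a pyroute2 link dump taken inside the ovnmeta namespace just before deletion, and the removal itself goes through pyroute2 under neutron's privileged ip_lib. A condensed sketch of the pair, assuming the pyroute2 library:

    from pyroute2 import NetNS, netns

    ns_name = 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c'

    with NetNS(ns_name) as ns:                  # netlink socket in the namespace
        for msg in ns.link('dump'):             # yields RTM_NEWLINK messages
            print(msg.get_attr('IFLA_IFNAME'),  # 'lo' in the dump above
                  msg.get_attr('IFLA_MTU'))     # 65536

    netns.remove(ns_name)  # unmounts /run/netns/<ns>, hence the systemd mount line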
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.385 244018 DEBUG oslo_concurrency.processutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
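The 0.568s command above is nova querying the Ceph monitor map ahead of an RBD operation. The same call through oslo_concurrency.processutils, the helper whose execute() the log cites:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    mon_map = json.loads(out)
    print([m['name'] for m in mon_map['mons']])  # monitor names from the map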
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.387 244018 DEBUG nova.virt.libvirt.vif [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:25:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-20106714',display_name='tempest-ImagesOneServerNegativeTestJSON-server-20106714',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-20106714',id=48,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f976004e0b334963a69c2519fca200d2',ramdisk_id='',reservation_id='r-agofmyat',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1374162185',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1374162185-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:25:15Z,user_data=None,user_id='89e71139346a40899212d5bc35835720',uuid=e291d969-fcea-4f60-a478-f7b81a91ccd9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "address": "fa:16:3e:07:ea:76", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07860675-4a", "ovs_interfaceid": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.388 244018 DEBUG nova.network.os_vif_util [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converting VIF {"id": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "address": "fa:16:3e:07:ea:76", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07860675-4a", "ovs_interfaceid": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.388 244018 DEBUG nova.network.os_vif_util [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:ea:76,bridge_name='br-int',has_traffic_filtering=True,id=07860675-4ac4-43a4-ab6b-bacd17801fc2,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07860675-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.392 244018 DEBUG nova.objects.instance [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lazy-loading 'pci_devices' on Instance uuid e291d969-fcea-4f60-a478-f7b81a91ccd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.418 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:25:23 np0005629333 nova_compute[244014]:  <uuid>e291d969-fcea-4f60-a478-f7b81a91ccd9</uuid>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:  <name>instance-00000030</name>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-20106714</nova:name>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:25:22</nova:creationTime>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:25:23 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:        <nova:user uuid="89e71139346a40899212d5bc35835720">tempest-ImagesOneServerNegativeTestJSON-1374162185-project-member</nova:user>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:        <nova:project uuid="f976004e0b334963a69c2519fca200d2">tempest-ImagesOneServerNegativeTestJSON-1374162185</nova:project>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:        <nova:port uuid="07860675-4ac4-43a4-ab6b-bacd17801fc2">
Feb 25 07:25:23 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <entry name="serial">e291d969-fcea-4f60-a478-f7b81a91ccd9</entry>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <entry name="uuid">e291d969-fcea-4f60-a478-f7b81a91ccd9</entry>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/e291d969-fcea-4f60-a478-f7b81a91ccd9_disk">
Feb 25 07:25:23 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:25:23 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/e291d969-fcea-4f60-a478-f7b81a91ccd9_disk.config">
Feb 25 07:25:23 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:25:23 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:07:ea:76"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <target dev="tap07860675-4a"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/e291d969-fcea-4f60-a478-f7b81a91ccd9/console.log" append="off"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:25:23 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:25:23 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:25:23 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:25:23 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
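After _get_guest_xml, the libvirt driver defines and boots a domain from exactly this document. A minimal sketch with the libvirt python bindings; reading the XML from a file is an assumption made only to keep the snippet self-contained:

    import libvirt

    xml = open('instance-00000030.xml').read()  # the <domain> document above

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)   # persist the domain definition
        dom.createWithFlags(0)      # power it on; nova then waits for the
                                    # network-vif-plugged event seen below
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()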
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.420 244018 DEBUG nova.compute.manager [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Preparing to wait for external event network-vif-plugged-07860675-4ac4-43a4-ab6b-bacd17801fc2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.420 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.421 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.421 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
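The Acquiring/acquired/released triplet above is oslo_concurrency.lockutils guarding the per-instance event registry; the same pattern in miniature:

    from oslo_concurrency import lockutils

    instance_uuid = 'e291d969-fcea-4f60-a478-f7b81a91ccd9'  # from the log

    with lockutils.lock('%s-events' % instance_uuid):
        # nova registers the network-vif-plugged event it will wait on here
        pass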
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.423 244018 DEBUG nova.virt.libvirt.vif [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:25:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-20106714',display_name='tempest-ImagesOneServerNegativeTestJSON-server-20106714',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-20106714',id=48,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f976004e0b334963a69c2519fca200d2',ramdisk_id='',reservation_id='r-agofmyat',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1374162185',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1374162185-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:25:15Z,user_data=None,user_id='89e71139346a40899212d5bc35835720',uuid=e291d969-fcea-4f60-a478-f7b81a91ccd9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "address": "fa:16:3e:07:ea:76", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07860675-4a", "ovs_interfaceid": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.424 244018 DEBUG nova.network.os_vif_util [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converting VIF {"id": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "address": "fa:16:3e:07:ea:76", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07860675-4a", "ovs_interfaceid": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.425 244018 DEBUG nova.network.os_vif_util [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:ea:76,bridge_name='br-int',has_traffic_filtering=True,id=07860675-4ac4-43a4-ab6b-bacd17801fc2,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07860675-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.426 244018 DEBUG os_vif [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:ea:76,bridge_name='br-int',has_traffic_filtering=True,id=07860675-4ac4-43a4-ab6b-bacd17801fc2,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07860675-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.427 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.428 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.428 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.433 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.433 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07860675-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.434 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap07860675-4a, col_values=(('external_ids', {'iface-id': '07860675-4ac4-43a4-ab6b-bacd17801fc2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:07:ea:76', 'vm-uuid': 'e291d969-fcea-4f60-a478-f7b81a91ccd9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:23 np0005629333 NetworkManager[49836]: <info>  [1772022323.4374] manager: (tap07860675-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/183)
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.439 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.443 244018 INFO os_vif [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:ea:76,bridge_name='br-int',has_traffic_filtering=True,id=07860675-4ac4-43a4-ab6b-bacd17801fc2,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07860675-4a')#033[00m
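The plug side batches AddPortCommand and DbSetCommand into one transaction (command idx=0 and idx=1 above). Sketched with the same ovsdbapp handle built in the del_port example earlier; port names and external_ids are copied from the log:

    # Assumes the api object from the del_port sketch above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap07860675-4a', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap07860675-4a',
            ('external_ids', {
                'iface-id': '07860675-4ac4-43a4-ab6b-bacd17801fc2',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:07:ea:76',
                'vm-uuid': 'e291d969-fcea-4f60-a478-f7b81a91ccd9'})))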
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.505 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.506 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.506 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] No VIF found with MAC fa:16:3e:07:ea:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.507 244018 INFO nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Using config drive#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.535 244018 DEBUG nova.storage.rbd_utils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image e291d969-fcea-4f60-a478-f7b81a91ccd9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
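The rbd_utils line above is an existence probe that swallows ImageNotFound. With the Ceph python bindings directly, pool and credentials as in the log, it looks roughly like:

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            try:
                with rbd.Image(ioctx,
                               'e291d969-fcea-4f60-a478-f7b81a91ccd9_disk.config'):
                    exists = True
            except rbd.ImageNotFound:
                exists = False  # the case logged above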
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.545 244018 INFO nova.virt.libvirt.driver [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Deleting instance files /var/lib/nova/instances/22450f11-d042-48c5-941e-fd544e58d84a_del#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.545 244018 INFO nova.virt.libvirt.driver [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Deletion of /var/lib/nova/instances/22450f11-d042-48c5-941e-fd544e58d84a_del complete#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.604 244018 INFO nova.compute.manager [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.604 244018 DEBUG oslo.service.loopingcall [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.605 244018 DEBUG nova.compute.manager [-] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.605 244018 DEBUG nova.network.neutron [-] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.659 244018 INFO nova.virt.libvirt.driver [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Deleting instance files /var/lib/nova/instances/1238794a-063b-4ac0-a7d9-3590353e3207_del#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.660 244018 INFO nova.virt.libvirt.driver [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Deletion of /var/lib/nova/instances/1238794a-063b-4ac0-a7d9-3590353e3207_del complete#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.731 244018 INFO nova.compute.manager [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.732 244018 DEBUG oslo.service.loopingcall [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.732 244018 DEBUG nova.compute.manager [-] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:25:23 np0005629333 nova_compute[244014]: 2026-02-25 12:25:23.732 244018 DEBUG nova.network.neutron [-] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:25:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 451 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 7.3 MiB/s wr, 332 op/s
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.469 244018 INFO nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Creating config drive at /var/lib/nova/instances/e291d969-fcea-4f60-a478-f7b81a91ccd9/disk.config#033[00m
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.476 244018 DEBUG oslo_concurrency.processutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e291d969-fcea-4f60-a478-f7b81a91ccd9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmps5rn5u30 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
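The config drive is just the mkisofs invocation logged above, run through processutils; reduced to code, where /tmp/tmps5rn5u30 is the temporary metadata tree nova renders first:

    from oslo_concurrency import processutils

    processutils.execute(
        '/usr/bin/mkisofs', '-o',
        '/var/lib/nova/instances/e291d969-fcea-4f60-a478-f7b81a91ccd9/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmps5rn5u30')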
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.517 244018 DEBUG nova.network.neutron [req-c183a5cd-20c9-41b4-80d3-2649440d9a5c req-d68c95c4-b5ef-4386-a4c1-8df026b3d9c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Updated VIF entry in instance network info cache for port 07860675-4ac4-43a4-ab6b-bacd17801fc2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.519 244018 DEBUG nova.network.neutron [req-c183a5cd-20c9-41b4-80d3-2649440d9a5c req-d68c95c4-b5ef-4386-a4c1-8df026b3d9c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Updating instance_info_cache with network_info: [{"id": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "address": "fa:16:3e:07:ea:76", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07860675-4a", "ovs_interfaceid": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.526 244018 DEBUG nova.compute.manager [req-5dbb5b3e-5449-4a71-b096-945069b87f0b req-7c79ab0e-57a8-4e79-a1fa-e7fe69624867 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Received event network-vif-unplugged-4f6b67d0-056b-4277-a604-6221f16e30b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.527 244018 DEBUG oslo_concurrency.lockutils [req-5dbb5b3e-5449-4a71-b096-945069b87f0b req-7c79ab0e-57a8-4e79-a1fa-e7fe69624867 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "22450f11-d042-48c5-941e-fd544e58d84a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.527 244018 DEBUG oslo_concurrency.lockutils [req-5dbb5b3e-5449-4a71-b096-945069b87f0b req-7c79ab0e-57a8-4e79-a1fa-e7fe69624867 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "22450f11-d042-48c5-941e-fd544e58d84a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.528 244018 DEBUG oslo_concurrency.lockutils [req-5dbb5b3e-5449-4a71-b096-945069b87f0b req-7c79ab0e-57a8-4e79-a1fa-e7fe69624867 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "22450f11-d042-48c5-941e-fd544e58d84a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.528 244018 DEBUG nova.compute.manager [req-5dbb5b3e-5449-4a71-b096-945069b87f0b req-7c79ab0e-57a8-4e79-a1fa-e7fe69624867 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] No waiting events found dispatching network-vif-unplugged-4f6b67d0-056b-4277-a604-6221f16e30b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.529 244018 DEBUG nova.compute.manager [req-5dbb5b3e-5449-4a71-b096-945069b87f0b req-7c79ab0e-57a8-4e79-a1fa-e7fe69624867 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Received event network-vif-unplugged-4f6b67d0-056b-4277-a604-6221f16e30b2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.541 244018 DEBUG oslo_concurrency.lockutils [req-c183a5cd-20c9-41b4-80d3-2649440d9a5c req-d68c95c4-b5ef-4386-a4c1-8df026b3d9c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-e291d969-fcea-4f60-a478-f7b81a91ccd9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.616 244018 DEBUG oslo_concurrency.processutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e291d969-fcea-4f60-a478-f7b81a91ccd9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmps5rn5u30" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:25:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.644 244018 DEBUG nova.storage.rbd_utils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image e291d969-fcea-4f60-a478-f7b81a91ccd9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.648 244018 DEBUG oslo_concurrency.processutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e291d969-fcea-4f60-a478-f7b81a91ccd9/disk.config e291d969-fcea-4f60-a478-f7b81a91ccd9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.672 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.677 244018 DEBUG nova.compute.manager [req-5cfd1e58-6bb1-4a16-b0aa-7a6eac6a21d8 req-a6b8acbf-54d5-429d-adc0-b499d9bbbe99 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Received event network-vif-unplugged-9f614955-c92f-41c2-a47e-6d65c378bf82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.678 244018 DEBUG oslo_concurrency.lockutils [req-5cfd1e58-6bb1-4a16-b0aa-7a6eac6a21d8 req-a6b8acbf-54d5-429d-adc0-b499d9bbbe99 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.678 244018 DEBUG oslo_concurrency.lockutils [req-5cfd1e58-6bb1-4a16-b0aa-7a6eac6a21d8 req-a6b8acbf-54d5-429d-adc0-b499d9bbbe99 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.678 244018 DEBUG oslo_concurrency.lockutils [req-5cfd1e58-6bb1-4a16-b0aa-7a6eac6a21d8 req-a6b8acbf-54d5-429d-adc0-b499d9bbbe99 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.678 244018 DEBUG nova.compute.manager [req-5cfd1e58-6bb1-4a16-b0aa-7a6eac6a21d8 req-a6b8acbf-54d5-429d-adc0-b499d9bbbe99 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] No waiting events found dispatching network-vif-unplugged-9f614955-c92f-41c2-a47e-6d65c378bf82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.679 244018 DEBUG nova.compute.manager [req-5cfd1e58-6bb1-4a16-b0aa-7a6eac6a21d8 req-a6b8acbf-54d5-429d-adc0-b499d9bbbe99 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Received event network-vif-unplugged-9f614955-c92f-41c2-a47e-6d65c378bf82 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.704 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:25:24 np0005629333 NetworkManager[49836]: <info>  [1772022324.7053] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/184)
Feb 25 07:25:24 np0005629333 NetworkManager[49836]: <info>  [1772022324.7059] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/185)
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.736 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:25:24 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:24Z|00404|binding|INFO|Releasing lport 7c89df63-779c-4e1c-9e58-76a4001fabc2 from this chassis (sb_readonly=0)
Feb 25 07:25:24 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:24Z|00405|binding|INFO|Releasing lport 81f0f54c-4e04-4adf-952f-b6d0fe9698c7 from this chassis (sb_readonly=0)
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.763 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.819 244018 DEBUG oslo_concurrency.processutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e291d969-fcea-4f60-a478-f7b81a91ccd9/disk.config e291d969-fcea-4f60-a478-f7b81a91ccd9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.820 244018 INFO nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Deleting local config drive /var/lib/nova/instances/e291d969-fcea-4f60-a478-f7b81a91ccd9/disk.config because it was imported into RBD.
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.861 244018 DEBUG nova.network.neutron [-] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:25:24 np0005629333 systemd-udevd[285539]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:25:24 np0005629333 kernel: tap07860675-4a: entered promiscuous mode
Feb 25 07:25:24 np0005629333 NetworkManager[49836]: <info>  [1772022324.8723] manager: (tap07860675-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/186)
Feb 25 07:25:24 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:24Z|00406|binding|INFO|Claiming lport 07860675-4ac4-43a4-ab6b-bacd17801fc2 for this chassis.
Feb 25 07:25:24 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:24Z|00407|binding|INFO|07860675-4ac4-43a4-ab6b-bacd17801fc2: Claiming fa:16:3e:07:ea:76 10.100.0.14
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.876 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.879 244018 INFO nova.compute.manager [-] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Took 1.27 seconds to deallocate network for instance.
Feb 25 07:25:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:24.880 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:ea:76 10.100.0.14'], port_security=['fa:16:3e:07:ea:76 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e291d969-fcea-4f60-a478-f7b81a91ccd9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f976004e0b334963a69c2519fca200d2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ca95284b-67f9-4e09-a57b-7847021c2465', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=974b795b-e2d8-4683-ac80-b366113e2dd8, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=07860675-4ac4-43a4-ab6b-bacd17801fc2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:25:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:24.881 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 07860675-4ac4-43a4-ab6b-bacd17801fc2 in datapath 7693903d-d5e2-4b50-a39b-bbbcc4148329 bound to our chassis
Feb 25 07:25:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:24.882 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7693903d-d5e2-4b50-a39b-bbbcc4148329
Feb 25 07:25:24 np0005629333 NetworkManager[49836]: <info>  [1772022324.8852] device (tap07860675-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:25:24 np0005629333 NetworkManager[49836]: <info>  [1772022324.8878] device (tap07860675-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:25:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:24.894 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[779b9d43-5afe-4da0-a7ea-47442ab53b56]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:25:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:24.895 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7693903d-d1 in ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.899 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:25:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:24.897 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7693903d-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 07:25:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:24.898 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1780b18d-1fa4-4657-a311-f3c36306c4a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:25:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:24.900 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ec8ba17a-3f10-4e20-9f75-ca93aee4c675]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:25:24 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:24Z|00408|binding|INFO|Setting lport 07860675-4ac4-43a4-ab6b-bacd17801fc2 ovn-installed in OVS
Feb 25 07:25:24 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:24Z|00409|binding|INFO|Setting lport 07860675-4ac4-43a4-ab6b-bacd17801fc2 up in Southbound
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.904 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:25:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:24.911 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[a01e0248-be83-4960-b3bf-a9b1ecf5e02b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:25:24 np0005629333 systemd-machined[210048]: New machine qemu-53-instance-00000030.
Feb 25 07:25:24 np0005629333 systemd[1]: Started Virtual Machine qemu-53-instance-00000030.
Feb 25 07:25:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:24.935 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c9f37601-8521-4f36-97c9-12bafbe93aa8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.939 244018 DEBUG oslo_concurrency.lockutils [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:25:24 np0005629333 nova_compute[244014]: 2026-02-25 12:25:24.940 244018 DEBUG oslo_concurrency.lockutils [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:25:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:24.962 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c8aa73bb-2739-4534-b91f-038433ce1585]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:25:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:24.967 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ebc3e762-dc6c-47ce-9576-cb6b4d5b9b6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:25:24 np0005629333 NetworkManager[49836]: <info>  [1772022324.9688] manager: (tap7693903d-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/187)
Feb 25 07:25:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:24.988 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a0a0c67a-60d6-4256-bf4d-a2bf2970c96b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:25:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:24.991 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0941addc-b11d-4dda-88d9-95b95cef1ef0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:25:25 np0005629333 NetworkManager[49836]: <info>  [1772022325.0070] device (tap7693903d-d0): carrier: link connected
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:25.011 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b1529e-e8b5-432f-bb92-0cfb0f28eeb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:25.022 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8276ab82-9e12-4058-b00b-48f9f174a40d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7693903d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:f2:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429441, 'reachable_time': 35620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285807, 'error': None, 'target': 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:25.041 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e248e757-f563-456f-945c-2b1eca435123]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec3:f240'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 429441, 'tstamp': 429441}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285808, 'error': None, 'target': 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:25.053 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c0991a23-d5a0-422b-ba46-c54dca2be63d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7693903d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:f2:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429441, 'reachable_time': 35620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 285809, 'error': None, 'target': 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:25.079 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[43c6e9b6-f1c0-4833-92d8-55a6ffc8c61c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:25:25 np0005629333 nova_compute[244014]: 2026-02-25 12:25:25.092 244018 DEBUG oslo_concurrency.processutils [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:25.134 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[38c43af1-8f0f-4fed-a043-5cd0d290f2f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:25.136 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7693903d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:25.137 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:25.138 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7693903d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:25:25 np0005629333 NetworkManager[49836]: <info>  [1772022325.1413] manager: (tap7693903d-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/188)
Feb 25 07:25:25 np0005629333 kernel: tap7693903d-d0: entered promiscuous mode
Feb 25 07:25:25 np0005629333 nova_compute[244014]: 2026-02-25 12:25:25.140 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:25.144 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7693903d-d0, col_values=(('external_ids', {'iface-id': '6dc5897c-8765-434f-a79d-19523884d8ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:25:25 np0005629333 nova_compute[244014]: 2026-02-25 12:25:25.145 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:25:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:25Z|00410|binding|INFO|Releasing lport 6dc5897c-8765-434f-a79d-19523884d8ae from this chassis (sb_readonly=0)
Feb 25 07:25:25 np0005629333 nova_compute[244014]: 2026-02-25 12:25:25.157 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:25.159 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7693903d-d5e2-4b50-a39b-bbbcc4148329.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7693903d-d5e2-4b50-a39b-bbbcc4148329.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 07:25:25 np0005629333 nova_compute[244014]: 2026-02-25 12:25:25.159 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:25.160 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e52780bf-0809-4ff6-86e2-47bdf3d74d5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:25.161 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-7693903d-d5e2-4b50-a39b-bbbcc4148329
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/7693903d-d5e2-4b50-a39b-bbbcc4148329.pid.haproxy
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 7693903d-d5e2-4b50-a39b-bbbcc4148329
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 07:25:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:25.162 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'env', 'PROCESS_TAG=haproxy-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7693903d-d5e2-4b50-a39b-bbbcc4148329.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 07:25:25 np0005629333 nova_compute[244014]: 2026-02-25 12:25:25.500 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022325.4997602, e291d969-fcea-4f60-a478-f7b81a91ccd9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:25:25 np0005629333 nova_compute[244014]: 2026-02-25 12:25:25.500 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] VM Started (Lifecycle Event)
Feb 25 07:25:25 np0005629333 nova_compute[244014]: 2026-02-25 12:25:25.523 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:25:25 np0005629333 nova_compute[244014]: 2026-02-25 12:25:25.530 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022325.500846, e291d969-fcea-4f60-a478-f7b81a91ccd9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:25:25 np0005629333 nova_compute[244014]: 2026-02-25 12:25:25.530 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] VM Paused (Lifecycle Event)
Feb 25 07:25:25 np0005629333 nova_compute[244014]: 2026-02-25 12:25:25.535 244018 DEBUG nova.network.neutron [-] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:25:25 np0005629333 nova_compute[244014]: 2026-02-25 12:25:25.549 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:25:25 np0005629333 nova_compute[244014]: 2026-02-25 12:25:25.552 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:25:25 np0005629333 nova_compute[244014]: 2026-02-25 12:25:25.563 244018 INFO nova.compute.manager [-] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Took 1.83 seconds to deallocate network for instance.
Feb 25 07:25:25 np0005629333 nova_compute[244014]: 2026-02-25 12:25:25.568 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:25:25 np0005629333 podman[285903]: 2026-02-25 12:25:25.584996794 +0000 UTC m=+0.060566665 container create e8e468a20062a375f4a78f5f5701c3191b8bc49d7f32d13d785dfe337dc4569a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 25 07:25:25 np0005629333 nova_compute[244014]: 2026-02-25 12:25:25.603 244018 DEBUG oslo_concurrency.lockutils [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:25:25 np0005629333 systemd[1]: Started libpod-conmon-e8e468a20062a375f4a78f5f5701c3191b8bc49d7f32d13d785dfe337dc4569a.scope.
Feb 25 07:25:25 np0005629333 podman[285903]: 2026-02-25 12:25:25.54737451 +0000 UTC m=+0.022944411 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:25:25 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:25:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/332be3fc7ea941666457dff9d1433e95fd9071ac0ddde7234a55bd6539d7c2bc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:25:25 np0005629333 podman[285903]: 2026-02-25 12:25:25.663383343 +0000 UTC m=+0.138953234 container init e8e468a20062a375f4a78f5f5701c3191b8bc49d7f32d13d785dfe337dc4569a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 25 07:25:25 np0005629333 podman[285903]: 2026-02-25 12:25:25.666911733 +0000 UTC m=+0.142481634 container start e8e468a20062a375f4a78f5f5701c3191b8bc49d7f32d13d785dfe337dc4569a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 07:25:25 np0005629333 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[285918]: [NOTICE]   (285922) : New worker (285924) forked
Feb 25 07:25:25 np0005629333 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[285918]: [NOTICE]   (285922) : Loading success.
Feb 25 07:25:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:25:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/452448757' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:25:25 np0005629333 nova_compute[244014]: 2026-02-25 12:25:25.734 244018 DEBUG oslo_concurrency.processutils [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:25:25 np0005629333 nova_compute[244014]: 2026-02-25 12:25:25.739 244018 DEBUG nova.compute.provider_tree [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:25:25 np0005629333 nova_compute[244014]: 2026-02-25 12:25:25.758 244018 DEBUG nova.scheduler.client.report [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:25:25 np0005629333 nova_compute[244014]: 2026-02-25 12:25:25.780 244018 DEBUG oslo_concurrency.lockutils [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:25:25 np0005629333 nova_compute[244014]: 2026-02-25 12:25:25.783 244018 DEBUG oslo_concurrency.lockutils [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:25:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 451 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 6.7 MiB/s wr, 314 op/s
Feb 25 07:25:25 np0005629333 nova_compute[244014]: 2026-02-25 12:25:25.827 244018 INFO nova.scheduler.client.report [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Deleted allocations for instance 22450f11-d042-48c5-941e-fd544e58d84a
Feb 25 07:25:25 np0005629333 nova_compute[244014]: 2026-02-25 12:25:25.930 244018 DEBUG oslo_concurrency.processutils [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:25:25 np0005629333 nova_compute[244014]: 2026-02-25 12:25:25.964 244018 DEBUG oslo_concurrency.lockutils [None req-5773e6b1-43c0-46e8-85f5-4c272fd622e1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "22450f11-d042-48c5-941e-fd544e58d84a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.190s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:25:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:25:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3937285555' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:25:26 np0005629333 nova_compute[244014]: 2026-02-25 12:25:26.519 244018 DEBUG oslo_concurrency.processutils [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:25:26 np0005629333 nova_compute[244014]: 2026-02-25 12:25:26.525 244018 DEBUG nova.compute.provider_tree [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:25:26 np0005629333 nova_compute[244014]: 2026-02-25 12:25:26.541 244018 DEBUG nova.scheduler.client.report [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:25:26 np0005629333 nova_compute[244014]: 2026-02-25 12:25:26.575 244018 DEBUG oslo_concurrency.lockutils [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:26 np0005629333 nova_compute[244014]: 2026-02-25 12:25:26.617 244018 INFO nova.scheduler.client.report [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Deleted allocations for instance 1238794a-063b-4ac0-a7d9-3590353e3207#033[00m
Feb 25 07:25:26 np0005629333 nova_compute[244014]: 2026-02-25 12:25:26.653 244018 DEBUG nova.compute.manager [req-5b7cb101-e11b-42e6-b60e-ca270d8494cf req-8ae01e8d-ed46-46f0-b003-84e1ec760dd8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Received event network-vif-plugged-4f6b67d0-056b-4277-a604-6221f16e30b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:25:26 np0005629333 nova_compute[244014]: 2026-02-25 12:25:26.654 244018 DEBUG oslo_concurrency.lockutils [req-5b7cb101-e11b-42e6-b60e-ca270d8494cf req-8ae01e8d-ed46-46f0-b003-84e1ec760dd8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "22450f11-d042-48c5-941e-fd544e58d84a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:26 np0005629333 nova_compute[244014]: 2026-02-25 12:25:26.654 244018 DEBUG oslo_concurrency.lockutils [req-5b7cb101-e11b-42e6-b60e-ca270d8494cf req-8ae01e8d-ed46-46f0-b003-84e1ec760dd8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "22450f11-d042-48c5-941e-fd544e58d84a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:26 np0005629333 nova_compute[244014]: 2026-02-25 12:25:26.654 244018 DEBUG oslo_concurrency.lockutils [req-5b7cb101-e11b-42e6-b60e-ca270d8494cf req-8ae01e8d-ed46-46f0-b003-84e1ec760dd8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "22450f11-d042-48c5-941e-fd544e58d84a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:26 np0005629333 nova_compute[244014]: 2026-02-25 12:25:26.655 244018 DEBUG nova.compute.manager [req-5b7cb101-e11b-42e6-b60e-ca270d8494cf req-8ae01e8d-ed46-46f0-b003-84e1ec760dd8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] No waiting events found dispatching network-vif-plugged-4f6b67d0-056b-4277-a604-6221f16e30b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:25:26 np0005629333 nova_compute[244014]: 2026-02-25 12:25:26.655 244018 WARNING nova.compute.manager [req-5b7cb101-e11b-42e6-b60e-ca270d8494cf req-8ae01e8d-ed46-46f0-b003-84e1ec760dd8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Received unexpected event network-vif-plugged-4f6b67d0-056b-4277-a604-6221f16e30b2 for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:25:26 np0005629333 nova_compute[244014]: 2026-02-25 12:25:26.655 244018 DEBUG nova.compute.manager [req-5b7cb101-e11b-42e6-b60e-ca270d8494cf req-8ae01e8d-ed46-46f0-b003-84e1ec760dd8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Received event network-vif-deleted-4f6b67d0-056b-4277-a604-6221f16e30b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
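The pop_instance_event sequence above looks for a registered waiter under the per-instance "-events" lock; since instance 22450f11 is already deleted, no waiter exists, so the vif-plugged event is logged as unexpected and dropped. A rough sketch of that pattern with illustrative names (not Nova's implementation):

    import threading

    class InstanceEvents:
        """Illustrative stand-in for the per-instance event registry."""
        def __init__(self):
            self._lock = threading.Lock()  # plays the role of the "<uuid>-events" lock
            self._waiters = {}             # (instance_uuid, event_name) -> Event

        def prepare(self, instance_uuid, event_name):
            with self._lock:
                ev = threading.Event()
                self._waiters[(instance_uuid, event_name)] = ev
                return ev

        def pop(self, instance_uuid, event_name):
            with self._lock:
                return self._waiters.pop((instance_uuid, event_name), None)

    events = InstanceEvents()
    waiter = events.pop('22450f11-d042-48c5-941e-fd544e58d84a', 'network-vif-plugged')
    if waiter is None:                     # nothing registered: the WARNING path above
        print('No waiting events found; dropping unexpected event')
    else:
        waiter.set()                       # would wake a wait_for_instance_event() caller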
Feb 25 07:25:26 np0005629333 nova_compute[244014]: 2026-02-25 12:25:26.718 244018 DEBUG oslo_concurrency.lockutils [None req-fdf2dd69-d3ab-4f5c-8906-2f0e1f8420e3 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "1238794a-063b-4ac0-a7d9-3590353e3207" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
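The acquire/release pairs throughout these entries come from oslo.concurrency, which serializes named critical sections and logs how long each lock was waited on and held (3.770s for the terminate above). A minimal sketch of the decorator form, reusing the lock names from the log:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_usage():
        pass  # resource-tracker bookkeeping runs with the lock held

    @lockutils.synchronized('1238794a-063b-4ac0-a7d9-3590353e3207')
    def do_terminate_instance():
        pass  # per-instance serialization; held 3.770s in the entry above

    update_usage()
    do_terminate_instance()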
Feb 25 07:25:26 np0005629333 nova_compute[244014]: 2026-02-25 12:25:26.764 244018 DEBUG nova.compute.manager [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Received event network-vif-plugged-9f614955-c92f-41c2-a47e-6d65c378bf82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:25:26 np0005629333 nova_compute[244014]: 2026-02-25 12:25:26.764 244018 DEBUG oslo_concurrency.lockutils [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:26 np0005629333 nova_compute[244014]: 2026-02-25 12:25:26.764 244018 DEBUG oslo_concurrency.lockutils [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:26 np0005629333 nova_compute[244014]: 2026-02-25 12:25:26.764 244018 DEBUG oslo_concurrency.lockutils [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "1238794a-063b-4ac0-a7d9-3590353e3207-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:26 np0005629333 nova_compute[244014]: 2026-02-25 12:25:26.764 244018 DEBUG nova.compute.manager [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] No waiting events found dispatching network-vif-plugged-9f614955-c92f-41c2-a47e-6d65c378bf82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:25:26 np0005629333 nova_compute[244014]: 2026-02-25 12:25:26.764 244018 WARNING nova.compute.manager [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Received unexpected event network-vif-plugged-9f614955-c92f-41c2-a47e-6d65c378bf82 for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:25:26 np0005629333 nova_compute[244014]: 2026-02-25 12:25:26.765 244018 DEBUG nova.compute.manager [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received event network-changed-abdb97b5-8e9d-4929-af6f-bfb06c067878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:25:26 np0005629333 nova_compute[244014]: 2026-02-25 12:25:26.765 244018 DEBUG nova.compute.manager [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Refreshing instance network info cache due to event network-changed-abdb97b5-8e9d-4929-af6f-bfb06c067878. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:25:26 np0005629333 nova_compute[244014]: 2026-02-25 12:25:26.765 244018 DEBUG oslo_concurrency.lockutils [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:25:26 np0005629333 nova_compute[244014]: 2026-02-25 12:25:26.765 244018 DEBUG oslo_concurrency.lockutils [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:25:26 np0005629333 nova_compute[244014]: 2026-02-25 12:25:26.765 244018 DEBUG nova.network.neutron [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Refreshing network info cache for port abdb97b5-8e9d-4929-af6f-bfb06c067878 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:25:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 326 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 6.7 MiB/s wr, 379 op/s
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.262 244018 DEBUG nova.network.neutron [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updated VIF entry in instance network info cache for port abdb97b5-8e9d-4929-af6f-bfb06c067878. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.263 244018 DEBUG nova.network.neutron [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updating instance_info_cache with network_info: [{"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.300 244018 DEBUG oslo_concurrency.lockutils [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
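The instance_info_cache update above stores the whole VIF model as JSON, including the fixed IP 10.100.0.6 and its floating IP 192.168.122.229. For illustration, a trimmed copy of that payload and how the addresses nest:

    import json

    # Trimmed to one VIF; keys follow the structure in the cache entry above.
    network_info = json.loads('''[{
      "id": "abdb97b5-8e9d-4929-af6f-bfb06c067878",
      "address": "fa:16:3e:f8:53:87",
      "devname": "tapabdb97b5-8e",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
          "ips": [{"address": "10.100.0.6",
                   "floating_ips": [{"address": "192.168.122.229"}]}]}]}
    }]''')

    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                fips = [f['address'] for f in ip.get('floating_ips', [])]
                print(vif['devname'], ip['address'], 'floating:', fips)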
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.301 244018 DEBUG nova.compute.manager [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Received event network-vif-plugged-07860675-4ac4-43a4-ab6b-bacd17801fc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.301 244018 DEBUG oslo_concurrency.lockutils [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.302 244018 DEBUG oslo_concurrency.lockutils [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.302 244018 DEBUG oslo_concurrency.lockutils [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.303 244018 DEBUG nova.compute.manager [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Processing event network-vif-plugged-07860675-4ac4-43a4-ab6b-bacd17801fc2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.303 244018 DEBUG nova.compute.manager [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Received event network-vif-deleted-9f614955-c92f-41c2-a47e-6d65c378bf82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.304 244018 DEBUG nova.compute.manager [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Received event network-vif-plugged-07860675-4ac4-43a4-ab6b-bacd17801fc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.304 244018 DEBUG oslo_concurrency.lockutils [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.305 244018 DEBUG oslo_concurrency.lockutils [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.306 244018 DEBUG oslo_concurrency.lockutils [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.306 244018 DEBUG nova.compute.manager [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] No waiting events found dispatching network-vif-plugged-07860675-4ac4-43a4-ab6b-bacd17801fc2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.307 244018 WARNING nova.compute.manager [req-93bc21ae-e8fc-4467-897a-50263399d7ca req-5eeae9cf-988c-47fc-9f23-cb25a430d12f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Received unexpected event network-vif-plugged-07860675-4ac4-43a4-ab6b-bacd17801fc2 for instance with vm_state building and task_state spawning.#033[00m
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.308 244018 DEBUG nova.compute.manager [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.313 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022328.312849, e291d969-fcea-4f60-a478-f7b81a91ccd9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.314 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.317 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.321 244018 INFO nova.virt.libvirt.driver [-] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Instance spawned successfully.#033[00m
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.322 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.386 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.390 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
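The numeric states in the sync line above follow nova.compute.power_state (0 = NOSTATE, 1 = RUNNING, 3 = PAUSED, 4 = SHUTDOWN, 6 = CRASHED, 7 = SUSPENDED): the database still records NOSTATE while the hypervisor already reports RUNNING. A small sketch of the comparison:

    # nova.compute.power_state constants, as used in the entry above.
    POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                    4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    db_power_state, vm_power_state = 0, 1  # values from the log entry
    if db_power_state != vm_power_state:
        print(f"out of sync: DB={POWER_STATES[db_power_state]} "
              f"hypervisor={POWER_STATES[vm_power_state]}")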
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.406 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.407 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.407 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.408 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.408 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.408 244018 DEBUG nova.virt.libvirt.driver [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
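The six "Found default for ..." lines record hypervisor-chosen defaults on the instance so the same buses and models survive rebuilds even if the defaults later change. A sketch of the fill-in-only-if-undefined step, with the values from the log:

    # Defaults as reported above; a hypothetical image with no properties set.
    defaults = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }
    image_props = {}
    for prop, value in defaults.items():
        image_props.setdefault(prop, value)  # only fills properties left undefined
    print(image_props)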
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.438 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
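The recurring "[POLLIN] on fd 29" entries are ovsdbapp's poll loop waking up on activity from the OVSDB socket. A stdlib-only illustration of the same wakeup mechanism (not ovsdbapp's code):

    import select
    import socket

    sock_a, sock_b = socket.socketpair()
    poller = select.poll()
    poller.register(sock_a, select.POLLIN)

    sock_b.send(b'update')                # simulate an OVSDB notification
    for fd, event in poller.poll(1000):   # returns (fd, eventmask) pairs
        if event & select.POLLIN:
            print(f"[POLLIN] on fd {fd}")  # same shape as the log entries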
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.478 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.514 244018 INFO nova.compute.manager [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Took 13.21 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.515 244018 DEBUG nova.compute.manager [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.581 244018 INFO nova.compute.manager [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Took 14.73 seconds to build instance.#033[00m
Feb 25 07:25:28 np0005629333 podman[285957]: 2026-02-25 12:25:28.73771177 +0000 UTC m=+0.080400177 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 07:25:28 np0005629333 podman[285958]: 2026-02-25 12:25:28.758886099 +0000 UTC m=+0.093824466 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
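The two health_status=healthy entries are podman periodically executing each container's configured healthcheck (the /openstack/healthcheck test mounted in the config above). The same check can be triggered by hand; a hedged wrapper around the real subcommand:

    import subprocess

    for name in ('ovn_metadata_agent', 'ovn_controller'):
        # 'podman healthcheck run' exits 0 when the configured test passes.
        rc = subprocess.run(['podman', 'healthcheck', 'run', name]).returncode
        print(name, 'healthy' if rc == 0 else f'unhealthy (rc={rc})')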
Feb 25 07:25:28 np0005629333 nova_compute[244014]: 2026-02-25 12:25:28.810 244018 DEBUG oslo_concurrency.lockutils [None req-0506b899-c640-44f6-8585-98371f24a29e 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:29 np0005629333 nova_compute[244014]: 2026-02-25 12:25:29.543 244018 DEBUG nova.virt.libvirt.driver [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
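"Resending shutdown" above is the libvirt driver's clean-shutdown loop: it keeps re-issuing an ACPI power-off while the guest stays in state 1 (running), until a timeout expires. A sketch of that loop with the libvirt-python bindings (illustrative, not Nova's exact code):

    import time
    import libvirt

    def clean_shutdown(dom, timeout=60, retry_interval=10):
        """dom is a libvirt.virDomain; returns True once the guest powers off."""
        dom.shutdown()                           # first ACPI shutdown request
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            state, _ = dom.state()
            if state == libvirt.VIR_DOMAIN_SHUTOFF:
                return True
            time.sleep(retry_interval)
            dom.shutdown()                       # the "resending shutdown" step
        return False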
Feb 25 07:25:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:25:29 np0005629333 nova_compute[244014]: 2026-02-25 12:25:29.628 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 326 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.8 MiB/s wr, 272 op/s
Feb 25 07:25:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:25:30
Feb 25 07:25:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:25:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:25:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.log', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'images', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', '.mgr', 'default.rgw.meta']
Feb 25 07:25:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
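The balancer run above is the upmap optimizer scanning every pool and concluding there is nothing to move ("prepared 0/10 upmap changes"). The same state can be inspected from the CLI; a thin wrapper around real subcommands:

    import subprocess

    for cmd in (['ceph', 'balancer', 'status'],
                ['ceph', 'balancer', 'eval']):  # scores the current distribution
        out = subprocess.run(cmd, capture_output=True, text=True).stdout
        print(' '.join(cmd), '->', out.strip())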
Feb 25 07:25:31 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:31Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f8:53:87 10.100.0.6
Feb 25 07:25:31 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:31Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f8:53:87 10.100.0.6
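The DHCPOFFER/DHCPACK pair comes from ovn-controller's pinctrl thread, which answers DHCP for bound ports locally on the chassis instead of relying on a DHCP agent; 10.100.0.6 on the /28 matches the subnet in the cache update above. A scapy sketch of roughly what such an offer carries (illustrative only):

    from scapy.all import BOOTP, DHCP, IP, UDP, Ether

    offer = (Ether(dst='fa:16:3e:f8:53:87') /
             IP(src='10.100.0.1', dst='10.100.0.6') /
             UDP(sport=67, dport=68) /
             BOOTP(op=2, yiaddr='10.100.0.6',
                   chaddr=bytes.fromhex('fa163ef85387')) /
             DHCP(options=[('message-type', 'offer'),
                           ('server_id', '10.100.0.1'),
                           ('subnet_mask', '255.255.255.240'),  # /28
                           'end']))
    offer.show()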
Feb 25 07:25:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:25:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:25:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:25:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:25:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:25:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:25:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 346 MiB data, 629 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 6.4 MiB/s wr, 373 op/s
Feb 25 07:25:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:25:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:25:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:25:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:25:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:25:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:25:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:25:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:25:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:25:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:25:31 np0005629333 kernel: tapfcfcdd66-6c (unregistering): left promiscuous mode
Feb 25 07:25:31 np0005629333 NetworkManager[49836]: <info>  [1772022331.8709] device (tapfcfcdd66-6c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:25:31 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:31Z|00411|binding|INFO|Releasing lport fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 from this chassis (sb_readonly=0)
Feb 25 07:25:31 np0005629333 nova_compute[244014]: 2026-02-25 12:25:31.882 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:31 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:31Z|00412|binding|INFO|Setting lport fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 down in Southbound
Feb 25 07:25:31 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:31Z|00413|binding|INFO|Removing iface tapfcfcdd66-6c ovn-installed in OVS
Feb 25 07:25:31 np0005629333 nova_compute[244014]: 2026-02-25 12:25:31.885 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:31.899 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fc:58:40 10.100.0.9'], port_security=['fa:16:3e:fc:58:40 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'daab2f813dbd467685c22833bf875ec9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ccb27c94-53b6-4e09-9a40-41a94659ce9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96ce8d12-3a71-4b1e-a397-65d0b61f3794, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:25:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:31.902 157129 INFO neutron.agent.ovn.metadata.agent [-] Port fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 in datapath a0d45b1c-1680-4599-a27a-6e3335c94c99 unbound from our chassis#033[00m
Feb 25 07:25:31 np0005629333 nova_compute[244014]: 2026-02-25 12:25:31.904 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:31.906 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0d45b1c-1680-4599-a27a-6e3335c94c99, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:25:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:31.907 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[16e46228-4704-468f-b1ae-5676b104df5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:31.909 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 namespace which is not needed anymore#033[00m
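The teardown above happens in two steps: an OVSDB transaction removes the metadata tap port from br-int (the DelPortCommand a few lines below), then the now-empty ovnmeta-<network> namespace is deleted via privsep. A hedged equivalent using the ovs-vsctl CLI and pyroute2:

    import subprocess
    from pyroute2 import netns

    NETWORK_ID = 'a0d45b1c-1680-4599-a27a-6e3335c94c99'

    # Same effect as the DelPortCommand in the transaction below.
    subprocess.run(['ovs-vsctl', '--if-exists', 'del-port', 'tapa0d45b1c-10'])
    # Same effect as neutron.privileged.agent.linux.ip_lib.remove_netns.
    netns.remove(f'ovnmeta-{NETWORK_ID}')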
Feb 25 07:25:31 np0005629333 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002c.scope: Deactivated successfully.
Feb 25 07:25:31 np0005629333 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002c.scope: Consumed 12.439s CPU time.
Feb 25 07:25:31 np0005629333 systemd-machined[210048]: Machine qemu-49-instance-0000002c terminated.
Feb 25 07:25:32 np0005629333 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[284244]: [NOTICE]   (284248) : haproxy version is 2.8.14-c23fe91
Feb 25 07:25:32 np0005629333 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[284244]: [NOTICE]   (284248) : path to executable is /usr/sbin/haproxy
Feb 25 07:25:32 np0005629333 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[284244]: [WARNING]  (284248) : Exiting Master process...
Feb 25 07:25:32 np0005629333 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[284244]: [WARNING]  (284248) : Exiting Master process...
Feb 25 07:25:32 np0005629333 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[284244]: [ALERT]    (284248) : Current worker (284250) exited with code 143 (Terminated)
Feb 25 07:25:32 np0005629333 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[284244]: [WARNING]  (284248) : All workers exited. Exiting... (0)
Feb 25 07:25:32 np0005629333 systemd[1]: libpod-e24e7ad85649571a627c87aefa789be66fa44446fda13641328680056e211148.scope: Deactivated successfully.
Feb 25 07:25:32 np0005629333 podman[286024]: 2026-02-25 12:25:32.03440744 +0000 UTC m=+0.046535258 container died e24e7ad85649571a627c87aefa789be66fa44446fda13641328680056e211148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:25:32 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e24e7ad85649571a627c87aefa789be66fa44446fda13641328680056e211148-userdata-shm.mount: Deactivated successfully.
Feb 25 07:25:32 np0005629333 systemd[1]: var-lib-containers-storage-overlay-fd73bb85b3ddca49f944375bbbcd5f0c148a4c567244e730f0a7b2967982d80b-merged.mount: Deactivated successfully.
Feb 25 07:25:32 np0005629333 podman[286024]: 2026-02-25 12:25:32.065979104 +0000 UTC m=+0.078106922 container cleanup e24e7ad85649571a627c87aefa789be66fa44446fda13641328680056e211148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:25:32 np0005629333 systemd[1]: libpod-conmon-e24e7ad85649571a627c87aefa789be66fa44446fda13641328680056e211148.scope: Deactivated successfully.
Feb 25 07:25:32 np0005629333 nova_compute[244014]: 2026-02-25 12:25:32.100 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:32 np0005629333 nova_compute[244014]: 2026-02-25 12:25:32.104 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:32 np0005629333 podman[286053]: 2026-02-25 12:25:32.127463834 +0000 UTC m=+0.046718733 container remove e24e7ad85649571a627c87aefa789be66fa44446fda13641328680056e211148 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:25:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:32.132 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a4ab57b4-409f-4639-b63b-0b630d505425]: (4, ('Wed Feb 25 12:25:31 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 (e24e7ad85649571a627c87aefa789be66fa44446fda13641328680056e211148)\ne24e7ad85649571a627c87aefa789be66fa44446fda13641328680056e211148\nWed Feb 25 12:25:32 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 (e24e7ad85649571a627c87aefa789be66fa44446fda13641328680056e211148)\ne24e7ad85649571a627c87aefa789be66fa44446fda13641328680056e211148\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:32.133 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[72b42977-01d8-4203-8dcf-ca9782e7180c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:32.134 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0d45b1c-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:32 np0005629333 kernel: tapa0d45b1c-10: left promiscuous mode
Feb 25 07:25:32 np0005629333 nova_compute[244014]: 2026-02-25 12:25:32.138 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:32 np0005629333 nova_compute[244014]: 2026-02-25 12:25:32.144 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:32.148 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4fcea0f9-157d-4e68-94c1-4b58a690ac06]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:32.166 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5c9f7ec2-4b00-4732-aa02-fa7585649634]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:32.167 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[13da39f1-a679-4b0a-b60a-8edc2d53ca7b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:32.178 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5f1af9b3-5c20-48dd-810e-ab5881a89956]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 427161, 'reachable_time': 28255, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286082, 'error': None, 'target': 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:32 np0005629333 systemd[1]: run-netns-ovnmeta\x2da0d45b1c\x2d1680\x2d4599\x2da27a\x2d6e3335c94c99.mount: Deactivated successfully.
Feb 25 07:25:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:32.183 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:25:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:32.183 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[046468d3-0103-4565-b4e1-da72a8f20d34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:32 np0005629333 nova_compute[244014]: 2026-02-25 12:25:32.207 244018 DEBUG nova.compute.manager [req-d479582c-e395-469f-bcf4-b8c76540c9ef req-7db74252-19f0-48f5-b8eb-0362a76d3ea8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Received event network-vif-unplugged-fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:25:32 np0005629333 nova_compute[244014]: 2026-02-25 12:25:32.208 244018 DEBUG oslo_concurrency.lockutils [req-d479582c-e395-469f-bcf4-b8c76540c9ef req-7db74252-19f0-48f5-b8eb-0362a76d3ea8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:32 np0005629333 nova_compute[244014]: 2026-02-25 12:25:32.208 244018 DEBUG oslo_concurrency.lockutils [req-d479582c-e395-469f-bcf4-b8c76540c9ef req-7db74252-19f0-48f5-b8eb-0362a76d3ea8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:32 np0005629333 nova_compute[244014]: 2026-02-25 12:25:32.209 244018 DEBUG oslo_concurrency.lockutils [req-d479582c-e395-469f-bcf4-b8c76540c9ef req-7db74252-19f0-48f5-b8eb-0362a76d3ea8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:32 np0005629333 nova_compute[244014]: 2026-02-25 12:25:32.209 244018 DEBUG nova.compute.manager [req-d479582c-e395-469f-bcf4-b8c76540c9ef req-7db74252-19f0-48f5-b8eb-0362a76d3ea8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] No waiting events found dispatching network-vif-unplugged-fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:25:32 np0005629333 nova_compute[244014]: 2026-02-25 12:25:32.210 244018 WARNING nova.compute.manager [req-d479582c-e395-469f-bcf4-b8c76540c9ef req-7db74252-19f0-48f5-b8eb-0362a76d3ea8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Received unexpected event network-vif-unplugged-fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 for instance with vm_state active and task_state shelving.#033[00m
Feb 25 07:25:32 np0005629333 nova_compute[244014]: 2026-02-25 12:25:32.556 244018 INFO nova.virt.libvirt.driver [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Instance shutdown successfully after 24 seconds.#033[00m
Feb 25 07:25:32 np0005629333 nova_compute[244014]: 2026-02-25 12:25:32.563 244018 INFO nova.virt.libvirt.driver [-] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Instance destroyed successfully.#033[00m
Feb 25 07:25:32 np0005629333 nova_compute[244014]: 2026-02-25 12:25:32.563 244018 DEBUG nova.objects.instance [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'numa_topology' on Instance uuid b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:25:32 np0005629333 nova_compute[244014]: 2026-02-25 12:25:32.943 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:32 np0005629333 nova_compute[244014]: 2026-02-25 12:25:32.943 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:25:33 np0005629333 nova_compute[244014]: 2026-02-25 12:25:33.256 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "ec873c3c-bf46-4537-8c29-b23a3133d281" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:25:33 np0005629333 nova_compute[244014]: 2026-02-25 12:25:33.257 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:25:33 np0005629333 nova_compute[244014]: 2026-02-25 12:25:33.286 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:25:33 np0005629333 nova_compute[244014]: 2026-02-25 12:25:33.301 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:25:33 np0005629333 nova_compute[244014]: 2026-02-25 12:25:33.442 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:25:33 np0005629333 nova_compute[244014]: 2026-02-25 12:25:33.566 244018 INFO nova.virt.libvirt.driver [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Beginning cold snapshot process
Feb 25 07:25:33 np0005629333 nova_compute[244014]: 2026-02-25 12:25:33.573 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:25:33 np0005629333 nova_compute[244014]: 2026-02-25 12:25:33.574 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:25:33 np0005629333 nova_compute[244014]: 2026-02-25 12:25:33.583 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:25:33 np0005629333 nova_compute[244014]: 2026-02-25 12:25:33.584 244018 INFO nova.compute.claims [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:25:33 np0005629333 nova_compute[244014]: 2026-02-25 12:25:33.592 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:25:33 np0005629333 nova_compute[244014]: 2026-02-25 12:25:33.745 244018 DEBUG nova.virt.libvirt.imagebackend [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 25 07:25:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 358 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 4.0 MiB/s wr, 275 op/s
Feb 25 07:25:34 np0005629333 nova_compute[244014]: 2026-02-25 12:25:34.031 244018 DEBUG nova.storage.rbd_utils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] creating snapshot(1299e66f82a04239b9607b3d45305c57) on rbd image(b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 07:25:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Feb 25 07:25:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Feb 25 07:25:34 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Feb 25 07:25:34 np0005629333 nova_compute[244014]: 2026-02-25 12:25:34.389 244018 DEBUG nova.storage.rbd_utils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] cloning vms/b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6_disk@1299e66f82a04239b9607b3d45305c57 to images/ebcf3504-22bf-4cd8-b1d7-d238a69aa7a5 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 07:25:34 np0005629333 nova_compute[244014]: 2026-02-25 12:25:34.489 244018 DEBUG nova.storage.rbd_utils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] flattening images/ebcf3504-22bf-4cd8-b1d7-d238a69aa7a5 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 25 07:25:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:25:34 np0005629333 nova_compute[244014]: 2026-02-25 12:25:34.810 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:25:34 np0005629333 nova_compute[244014]: 2026-02-25 12:25:34.815 244018 DEBUG nova.compute.manager [req-3994aa0c-e737-4d08-b472-03574c22f9a7 req-7931a9b8-b3b0-4902-9ffd-5d5ea1748efc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Received event network-vif-plugged-fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:25:34 np0005629333 nova_compute[244014]: 2026-02-25 12:25:34.816 244018 DEBUG oslo_concurrency.lockutils [req-3994aa0c-e737-4d08-b472-03574c22f9a7 req-7931a9b8-b3b0-4902-9ffd-5d5ea1748efc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:25:34 np0005629333 nova_compute[244014]: 2026-02-25 12:25:34.816 244018 DEBUG oslo_concurrency.lockutils [req-3994aa0c-e737-4d08-b472-03574c22f9a7 req-7931a9b8-b3b0-4902-9ffd-5d5ea1748efc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:25:34 np0005629333 nova_compute[244014]: 2026-02-25 12:25:34.816 244018 DEBUG oslo_concurrency.lockutils [req-3994aa0c-e737-4d08-b472-03574c22f9a7 req-7931a9b8-b3b0-4902-9ffd-5d5ea1748efc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:25:34 np0005629333 nova_compute[244014]: 2026-02-25 12:25:34.817 244018 DEBUG nova.compute.manager [req-3994aa0c-e737-4d08-b472-03574c22f9a7 req-7931a9b8-b3b0-4902-9ffd-5d5ea1748efc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] No waiting events found dispatching network-vif-plugged-fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:25:34 np0005629333 nova_compute[244014]: 2026-02-25 12:25:34.817 244018 WARNING nova.compute.manager [req-3994aa0c-e737-4d08-b472-03574c22f9a7 req-7931a9b8-b3b0-4902-9ffd-5d5ea1748efc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Received unexpected event network-vif-plugged-fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5 for instance with vm_state active and task_state shelving_image_uploading.
Feb 25 07:25:34 np0005629333 nova_compute[244014]: 2026-02-25 12:25:34.906 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:25:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:25:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1300447857' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:25:35 np0005629333 nova_compute[244014]: 2026-02-25 12:25:35.430 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:25:35 np0005629333 nova_compute[244014]: 2026-02-25 12:25:35.438 244018 DEBUG nova.compute.provider_tree [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:25:35 np0005629333 nova_compute[244014]: 2026-02-25 12:25:35.679 244018 DEBUG nova.scheduler.client.report [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:25:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 358 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 234 op/s
Feb 25 07:25:35 np0005629333 nova_compute[244014]: 2026-02-25 12:25:35.914 244018 DEBUG nova.storage.rbd_utils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] removing snapshot(1299e66f82a04239b9607b3d45305c57) on rbd image(b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 25 07:25:36 np0005629333 nova_compute[244014]: 2026-02-25 12:25:36.520 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.946s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:25:36 np0005629333 nova_compute[244014]: 2026-02-25 12:25:36.521 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:25:36 np0005629333 nova_compute[244014]: 2026-02-25 12:25:36.524 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 2.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:25:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Feb 25 07:25:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Feb 25 07:25:36 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Feb 25 07:25:36 np0005629333 nova_compute[244014]: 2026-02-25 12:25:36.946 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:25:36 np0005629333 nova_compute[244014]: 2026-02-25 12:25:36.947 244018 INFO nova.compute.claims [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:25:37 np0005629333 nova_compute[244014]: 2026-02-25 12:25:37.001 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:25:37 np0005629333 nova_compute[244014]: 2026-02-25 12:25:37.002 244018 DEBUG nova.network.neutron [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:25:37 np0005629333 nova_compute[244014]: 2026-02-25 12:25:37.011 244018 DEBUG nova.storage.rbd_utils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] creating snapshot(snap) on rbd image(ebcf3504-22bf-4cd8-b1d7-d238a69aa7a5) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 07:25:37 np0005629333 nova_compute[244014]: 2026-02-25 12:25:37.040 244018 INFO nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:25:37 np0005629333 nova_compute[244014]: 2026-02-25 12:25:37.065 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:25:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Feb 25 07:25:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Feb 25 07:25:37 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Feb 25 07:25:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 438 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 8.9 MiB/s wr, 203 op/s
Feb 25 07:25:38 np0005629333 nova_compute[244014]: 2026-02-25 12:25:38.013 244018 DEBUG nova.policy [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0899f3fdb57d46a395d07753dd261241', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c8744bdbc0f1499388aab5f477246beb', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:25:38 np0005629333 nova_compute[244014]: 2026-02-25 12:25:38.074 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022323.072457, 22450f11-d042-48c5-941e-fd544e58d84a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:25:38 np0005629333 nova_compute[244014]: 2026-02-25 12:25:38.074 244018 INFO nova.compute.manager [-] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] VM Stopped (Lifecycle Event)
Feb 25 07:25:38 np0005629333 nova_compute[244014]: 2026-02-25 12:25:38.095 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:25:38 np0005629333 nova_compute[244014]: 2026-02-25 12:25:38.096 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:25:38 np0005629333 nova_compute[244014]: 2026-02-25 12:25:38.096 244018 INFO nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Creating image(s)
Feb 25 07:25:38 np0005629333 nova_compute[244014]: 2026-02-25 12:25:38.116 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image ec873c3c-bf46-4537-8c29-b23a3133d281_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:25:38 np0005629333 nova_compute[244014]: 2026-02-25 12:25:38.151 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image ec873c3c-bf46-4537-8c29-b23a3133d281_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:25:38 np0005629333 nova_compute[244014]: 2026-02-25 12:25:38.185 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image ec873c3c-bf46-4537-8c29-b23a3133d281_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:25:38 np0005629333 nova_compute[244014]: 2026-02-25 12:25:38.189 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:25:38 np0005629333 nova_compute[244014]: 2026-02-25 12:25:38.223 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022323.1808317, 1238794a-063b-4ac0-a7d9-3590353e3207 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:25:38 np0005629333 nova_compute[244014]: 2026-02-25 12:25:38.224 244018 INFO nova.compute.manager [-] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] VM Stopped (Lifecycle Event)
Feb 25 07:25:38 np0005629333 nova_compute[244014]: 2026-02-25 12:25:38.226 244018 DEBUG nova.compute.manager [None req-bee6cd0a-00be-4b0e-a459-3c29806908ad - - - - - -] [instance: 22450f11-d042-48c5-941e-fd544e58d84a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:25:38 np0005629333 nova_compute[244014]: 2026-02-25 12:25:38.227 244018 DEBUG nova.compute.manager [None req-a30f093f-9b1c-4327-88e7-e2a4faad9f43 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:25:38 np0005629333 nova_compute[244014]: 2026-02-25 12:25:38.252 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:25:38 np0005629333 nova_compute[244014]: 2026-02-25 12:25:38.253 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:25:38 np0005629333 nova_compute[244014]: 2026-02-25 12:25:38.253 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:25:38 np0005629333 nova_compute[244014]: 2026-02-25 12:25:38.253 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:25:38 np0005629333 nova_compute[244014]: 2026-02-25 12:25:38.280 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image ec873c3c-bf46-4537-8c29-b23a3133d281_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:25:38 np0005629333 nova_compute[244014]: 2026-02-25 12:25:38.285 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 ec873c3c-bf46-4537-8c29-b23a3133d281_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:25:38 np0005629333 nova_compute[244014]: 2026-02-25 12:25:38.444 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:25:38 np0005629333 nova_compute[244014]: 2026-02-25 12:25:38.846 244018 DEBUG nova.compute.manager [None req-bf77f507-d49e-4eb0-9557-fee9c174e121 - - - - - -] [instance: 1238794a-063b-4ac0-a7d9-3590353e3207] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:25:38 np0005629333 nova_compute[244014]: 2026-02-25 12:25:38.849 244018 DEBUG nova.network.neutron [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Successfully created port: 0b2cc5f4-bf41-4e03-8c24-1e711f742942 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:25:38 np0005629333 nova_compute[244014]: 2026-02-25 12:25:38.855 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:25:38 np0005629333 nova_compute[244014]: 2026-02-25 12:25:38.895 244018 INFO nova.compute.manager [None req-a30f093f-9b1c-4327-88e7-e2a4faad9f43 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] instance snapshotting
Feb 25 07:25:39 np0005629333 nova_compute[244014]: 2026-02-25 12:25:39.064 244018 WARNING nova.compute.manager [None req-a30f093f-9b1c-4327-88e7-e2a4faad9f43 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Image not found during snapshot: nova.exception.ImageNotFound: Image 30923a52-0362-49d2-a639-798fbe2c4beb could not be found.
Feb 25 07:25:39 np0005629333 nova_compute[244014]: 2026-02-25 12:25:39.632 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:25:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:25:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/439742937' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:25:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:25:39 np0005629333 nova_compute[244014]: 2026-02-25 12:25:39.702 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.847s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:25:39 np0005629333 nova_compute[244014]: 2026-02-25 12:25:39.707 244018 DEBUG nova.compute.provider_tree [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:25:39 np0005629333 nova_compute[244014]: 2026-02-25 12:25:39.765 244018 DEBUG nova.network.neutron [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Successfully updated port: 0b2cc5f4-bf41-4e03-8c24-1e711f742942 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:25:39 np0005629333 nova_compute[244014]: 2026-02-25 12:25:39.777 244018 DEBUG nova.scheduler.client.report [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:25:39 np0005629333 nova_compute[244014]: 2026-02-25 12:25:39.782 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "refresh_cache-ec873c3c-bf46-4537-8c29-b23a3133d281" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:25:39 np0005629333 nova_compute[244014]: 2026-02-25 12:25:39.783 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquired lock "refresh_cache-ec873c3c-bf46-4537-8c29-b23a3133d281" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:25:39 np0005629333 nova_compute[244014]: 2026-02-25 12:25:39.783 244018 DEBUG nova.network.neutron [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:25:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 438 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 144 op/s
Feb 25 07:25:39 np0005629333 nova_compute[244014]: 2026-02-25 12:25:39.804 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 3.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:25:39 np0005629333 nova_compute[244014]: 2026-02-25 12:25:39.805 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:25:39 np0005629333 nova_compute[244014]: 2026-02-25 12:25:39.848 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:25:39 np0005629333 nova_compute[244014]: 2026-02-25 12:25:39.849 244018 DEBUG nova.network.neutron [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:25:39 np0005629333 nova_compute[244014]: 2026-02-25 12:25:39.861 244018 DEBUG nova.compute.manager [req-400f4913-daf3-45b5-814e-29d0fbf4ffbd req-62025a39-bfe2-484e-8da1-ab017b51bf49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Received event network-changed-0b2cc5f4-bf41-4e03-8c24-1e711f742942 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:25:39 np0005629333 nova_compute[244014]: 2026-02-25 12:25:39.861 244018 DEBUG nova.compute.manager [req-400f4913-daf3-45b5-814e-29d0fbf4ffbd req-62025a39-bfe2-484e-8da1-ab017b51bf49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Refreshing instance network info cache due to event network-changed-0b2cc5f4-bf41-4e03-8c24-1e711f742942. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:25:39 np0005629333 nova_compute[244014]: 2026-02-25 12:25:39.861 244018 DEBUG oslo_concurrency.lockutils [req-400f4913-daf3-45b5-814e-29d0fbf4ffbd req-62025a39-bfe2-484e-8da1-ab017b51bf49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-ec873c3c-bf46-4537-8c29-b23a3133d281" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:25:39 np0005629333 nova_compute[244014]: 2026-02-25 12:25:39.862 244018 DEBUG oslo_concurrency.lockutils [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "e291d969-fcea-4f60-a478-f7b81a91ccd9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:25:39 np0005629333 nova_compute[244014]: 2026-02-25 12:25:39.862 244018 DEBUG oslo_concurrency.lockutils [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:25:39 np0005629333 nova_compute[244014]: 2026-02-25 12:25:39.863 244018 DEBUG oslo_concurrency.lockutils [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:25:39 np0005629333 nova_compute[244014]: 2026-02-25 12:25:39.863 244018 DEBUG oslo_concurrency.lockutils [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:25:39 np0005629333 nova_compute[244014]: 2026-02-25 12:25:39.863 244018 DEBUG oslo_concurrency.lockutils [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:25:39 np0005629333 nova_compute[244014]: 2026-02-25 12:25:39.864 244018 INFO nova.compute.manager [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Terminating instance
Feb 25 07:25:39 np0005629333 nova_compute[244014]: 2026-02-25 12:25:39.865 244018 DEBUG nova.compute.manager [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 07:25:39 np0005629333 nova_compute[244014]: 2026-02-25 12:25:39.870 244018 INFO nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:25:39 np0005629333 nova_compute[244014]: 2026-02-25 12:25:39.885 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:25:39 np0005629333 nova_compute[244014]: 2026-02-25 12:25:39.994 244018 DEBUG nova.network.neutron [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:25:40 np0005629333 nova_compute[244014]: 2026-02-25 12:25:40.001 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:25:40 np0005629333 nova_compute[244014]: 2026-02-25 12:25:40.003 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:25:40 np0005629333 nova_compute[244014]: 2026-02-25 12:25:40.004 244018 INFO nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Creating image(s)
Feb 25 07:25:40 np0005629333 nova_compute[244014]: 2026-02-25 12:25:40.570 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:25:40 np0005629333 nova_compute[244014]: 2026-02-25 12:25:40.602 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:25:40 np0005629333 nova_compute[244014]: 2026-02-25 12:25:40.626 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:25:40 np0005629333 nova_compute[244014]: 2026-02-25 12:25:40.629 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:25:40 np0005629333 nova_compute[244014]: 2026-02-25 12:25:40.650 244018 DEBUG nova.policy [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0899f3fdb57d46a395d07753dd261241', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c8744bdbc0f1499388aab5f477246beb', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:25:40 np0005629333 nova_compute[244014]: 2026-02-25 12:25:40.680 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:25:40 np0005629333 nova_compute[244014]: 2026-02-25 12:25:40.680 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:25:40 np0005629333 nova_compute[244014]: 2026-02-25 12:25:40.681 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:25:40 np0005629333 nova_compute[244014]: 2026-02-25 12:25:40.681 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:25:40 np0005629333 nova_compute[244014]: 2026-02-25 12:25:40.701 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:25:40 np0005629333 nova_compute[244014]: 2026-02-25 12:25:40.706 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:25:41 np0005629333 nova_compute[244014]: 2026-02-25 12:25:41.731 244018 DEBUG nova.network.neutron [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Updating instance_info_cache with network_info: [{"id": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "address": "fa:16:3e:1f:2e:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2cc5f4-bf", "ovs_interfaceid": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:25:41 np0005629333 nova_compute[244014]: 2026-02-25 12:25:41.738 244018 DEBUG nova.network.neutron [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Successfully created port: 58ea35ed-ff5d-4827-a801-431f7536d78d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:25:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 444 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 7.6 MiB/s wr, 138 op/s
Feb 25 07:25:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:25:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:25:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:25:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:25:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002031964281383588 of space, bias 1.0, pg target 0.6095892844150764 quantized to 32 (current 32)
Feb 25 07:25:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:25:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:25:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:25:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:25:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:25:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003248144253363575 of space, bias 1.0, pg target 0.9744432760090725 quantized to 32 (current 32)
Feb 25 07:25:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:25:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.932041486911633e-07 of space, bias 4.0, pg target 0.001071844978429396 quantized to 16 (current 16)
Feb 25 07:25:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:25:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:25:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:25:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:25:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:25:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:25:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:25:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:25:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:25:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 07:25:42 np0005629333 nova_compute[244014]: 2026-02-25 12:25:42.236 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Releasing lock "refresh_cache-ec873c3c-bf46-4537-8c29-b23a3133d281" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:25:42 np0005629333 nova_compute[244014]: 2026-02-25 12:25:42.237 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Instance network_info: |[{"id": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "address": "fa:16:3e:1f:2e:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2cc5f4-bf", "ovs_interfaceid": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 07:25:42 np0005629333 nova_compute[244014]: 2026-02-25 12:25:42.237 244018 DEBUG oslo_concurrency.lockutils [req-400f4913-daf3-45b5-814e-29d0fbf4ffbd req-62025a39-bfe2-484e-8da1-ab017b51bf49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-ec873c3c-bf46-4537-8c29-b23a3133d281" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:25:42 np0005629333 nova_compute[244014]: 2026-02-25 12:25:42.238 244018 DEBUG nova.network.neutron [req-400f4913-daf3-45b5-814e-29d0fbf4ffbd req-62025a39-bfe2-484e-8da1-ab017b51bf49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Refreshing network info cache for port 0b2cc5f4-bf41-4e03-8c24-1e711f742942 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:25:42 np0005629333 nova_compute[244014]: 2026-02-25 12:25:42.996 244018 DEBUG nova.network.neutron [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Successfully updated port: 58ea35ed-ff5d-4827-a801-431f7536d78d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:25:43 np0005629333 kernel: tap07860675-4a (unregistering): left promiscuous mode
Feb 25 07:25:43 np0005629333 NetworkManager[49836]: <info>  [1772022343.0125] device (tap07860675-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.012 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 ec873c3c-bf46-4537-8c29-b23a3133d281_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.727s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:25:43 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:43Z|00414|binding|INFO|Releasing lport 07860675-4ac4-43a4-ab6b-bacd17801fc2 from this chassis (sb_readonly=0)
Feb 25 07:25:43 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:43Z|00415|binding|INFO|Setting lport 07860675-4ac4-43a4-ab6b-bacd17801fc2 down in Southbound
Feb 25 07:25:43 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:43Z|00416|binding|INFO|Removing iface tap07860675-4a ovn-installed in OVS
Feb 25 07:25:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:43.032 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:ea:76 10.100.0.14'], port_security=['fa:16:3e:07:ea:76 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e291d969-fcea-4f60-a478-f7b81a91ccd9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f976004e0b334963a69c2519fca200d2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ca95284b-67f9-4e09-a57b-7847021c2465', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=974b795b-e2d8-4683-ac80-b366113e2dd8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=07860675-4ac4-43a4-ab6b-bacd17801fc2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:25:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:43.034 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 07860675-4ac4-43a4-ab6b-bacd17801fc2 in datapath 7693903d-d5e2-4b50-a39b-bbbcc4148329 unbound from our chassis
Feb 25 07:25:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:43.036 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7693903d-d5e2-4b50-a39b-bbbcc4148329, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 07:25:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:43.038 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5027d710-b499-491b-b444-7f2a96b7228d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:25:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:43.039 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 namespace which is not needed anymore
Feb 25 07:25:43 np0005629333 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000030.scope: Deactivated successfully.
Feb 25 07:25:43 np0005629333 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000030.scope: Consumed 10.855s CPU time.
Feb 25 07:25:43 np0005629333 systemd-machined[210048]: Machine qemu-53-instance-00000030 terminated.
Feb 25 07:25:43 np0005629333 systemd-udevd[286468]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:25:43 np0005629333 NetworkManager[49836]: <info>  [1772022343.0839] manager: (tap07860675-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/189)
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.126 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.132 244018 DEBUG nova.compute.manager [req-9976292b-b405-4117-acbe-5da40c542e93 req-a9d34996-aef0-4732-9169-f22236095bc5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Received event network-changed-58ea35ed-ff5d-4827-a801-431f7536d78d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.132 244018 DEBUG nova.compute.manager [req-9976292b-b405-4117-acbe-5da40c542e93 req-a9d34996-aef0-4732-9169-f22236095bc5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Refreshing instance network info cache due to event network-changed-58ea35ed-ff5d-4827-a801-431f7536d78d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.133 244018 DEBUG oslo_concurrency.lockutils [req-9976292b-b405-4117-acbe-5da40c542e93 req-a9d34996-aef0-4732-9169-f22236095bc5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.134 244018 DEBUG oslo_concurrency.lockutils [req-9976292b-b405-4117-acbe-5da40c542e93 req-a9d34996-aef0-4732-9169-f22236095bc5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.134 244018 DEBUG nova.network.neutron [req-9976292b-b405-4117-acbe-5da40c542e93 req-a9d34996-aef0-4732-9169-f22236095bc5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Refreshing network info cache for port 58ea35ed-ff5d-4827-a801-431f7536d78d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
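
The Acquiring/Acquired pair around "refresh_cache-<uuid>" is oslo.concurrency's lockutils serializing rebuilds of a single instance's network-info cache (the matching Releasing appears at 12:25:44.343). The pattern, as a minimal sketch with a hypothetical helper in place of the real cache rebuild:

    from oslo_concurrency import lockutils

    instance_uuid = '3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2'
    with lockutils.lock('refresh_cache-%s' % instance_uuid):
        # only one greenthread at a time may rebuild this instance's
        # network info cache from Neutron
        refresh_network_info_cache(instance_uuid)  # hypothetical helper
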
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.138 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "refresh_cache-3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.138 244018 DEBUG nova.compute.manager [None req-daed9e93-fa86-46b3-9114-c10c3375fd64 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:25:43 np0005629333 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[285918]: [NOTICE]   (285922) : haproxy version is 2.8.14-c23fe91
Feb 25 07:25:43 np0005629333 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[285918]: [NOTICE]   (285922) : path to executable is /usr/sbin/haproxy
Feb 25 07:25:43 np0005629333 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[285918]: [WARNING]  (285922) : Exiting Master process...
Feb 25 07:25:43 np0005629333 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[285918]: [ALERT]    (285922) : Current worker (285924) exited with code 143 (Terminated)
Feb 25 07:25:43 np0005629333 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[285918]: [WARNING]  (285922) : All workers exited. Exiting... (0)
Feb 25 07:25:43 np0005629333 systemd[1]: libpod-e8e468a20062a375f4a78f5f5701c3191b8bc49d7f32d13d785dfe337dc4569a.scope: Deactivated successfully.
Feb 25 07:25:43 np0005629333 podman[286513]: 2026-02-25 12:25:43.171616716 +0000 UTC m=+0.041253038 container died e8e468a20062a375f4a78f5f5701c3191b8bc49d7f32d13d785dfe337dc4569a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.178 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.183 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] resizing rbd image ec873c3c-bf46-4537-8c29-b23a3133d281_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
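
The resize to 1073741824 bytes matches the flavor's root_gb=1 (1 GiB): the imported image is smaller than the flavor's root disk, so nova grows the RBD image to the flavor size. Nova's rbd_utils does this through the librbd Python binding; a rough sketch, assuming the same 'openstack' client and 'vms' pool as the import command:

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx, 'ec873c3c-bf46-4537-8c29-b23a3133d281_disk') as image:
                image.resize(1 * 1024 ** 3)  # grow the root disk to 1 GiB
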
Feb 25 07:25:43 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e8e468a20062a375f4a78f5f5701c3191b8bc49d7f32d13d785dfe337dc4569a-userdata-shm.mount: Deactivated successfully.
Feb 25 07:25:43 np0005629333 systemd[1]: var-lib-containers-storage-overlay-332be3fc7ea941666457dff9d1433e95fd9071ac0ddde7234a55bd6539d7c2bc-merged.mount: Deactivated successfully.
Feb 25 07:25:43 np0005629333 podman[286513]: 2026-02-25 12:25:43.198560728 +0000 UTC m=+0.068197060 container cleanup e8e468a20062a375f4a78f5f5701c3191b8bc49d7f32d13d785dfe337dc4569a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:25:43 np0005629333 systemd[1]: libpod-conmon-e8e468a20062a375f4a78f5f5701c3191b8bc49d7f32d13d785dfe337dc4569a.scope: Deactivated successfully.
Feb 25 07:25:43 np0005629333 podman[286576]: 2026-02-25 12:25:43.254504542 +0000 UTC m=+0.037245705 container remove e8e468a20062a375f4a78f5f5701c3191b8bc49d7f32d13d785dfe337dc4569a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:25:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:43.259 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d0e22a0b-b0be-43e7-8058-575bd7fcc225]: (4, ('Wed Feb 25 12:25:43 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 (e8e468a20062a375f4a78f5f5701c3191b8bc49d7f32d13d785dfe337dc4569a)\ne8e468a20062a375f4a78f5f5701c3191b8bc49d7f32d13d785dfe337dc4569a\nWed Feb 25 12:25:43 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 (e8e468a20062a375f4a78f5f5701c3191b8bc49d7f32d13d785dfe337dc4569a)\ne8e468a20062a375f4a78f5f5701c3191b8bc49d7f32d13d785dfe337dc4569a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
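
The privsep reply above carries the output of the wrapper that stops and then deletes the per-network haproxy container. A rough equivalent of that sequence with the plain podman CLI (subprocess is used here only for illustration; the agent goes through its privsep daemon instead):

    import subprocess

    name = 'neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329'
    subprocess.run(['podman', 'stop', name], check=True)  # SIGTERM; haproxy exits 143
    subprocess.run(['podman', 'rm', name], check=True)    # remove the stopped container
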
Feb 25 07:25:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:43.260 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[536cc2b3-1f23-4348-b349-357586b0a92c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:43.261 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7693903d-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
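
DelPortCommand is ovsdbapp's OVSDB transaction for removing a port; bridge=None means the port is looked up across bridges, and if_exists=True turns a missing port into a no-op instead of an error. A minimal sketch of issuing the same delete against the local switch, assuming the stock ovsdbapp Open_vSwitch schema API and the default database socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/var/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    api.del_port('tap7693903d-d0', if_exists=True).execute(check_error=True)
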
Feb 25 07:25:43 np0005629333 kernel: tap7693903d-d0: left promiscuous mode
Feb 25 07:25:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:43.276 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b0d8aa1b-01ed-46ae-b895-573675ac9f53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.287 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:43.289 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f23253b5-1137-408f-80a8-710665535799]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:43.290 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ad8bfea3-a861-4656-b7b1-1cf8f3f1eba6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.290 244018 INFO nova.virt.libvirt.driver [-] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Instance destroyed successfully.#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.291 244018 INFO nova.compute.manager [None req-daed9e93-fa86-46b3-9114-c10c3375fd64 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] instance snapshotting#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.291 244018 DEBUG nova.objects.instance [None req-daed9e93-fa86-46b3-9114-c10c3375fd64 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'flavor' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.293 244018 DEBUG nova.objects.instance [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lazy-loading 'resources' on Instance uuid e291d969-fcea-4f60-a478-f7b81a91ccd9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.296 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] resizing rbd image 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 25 07:25:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:43.303 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[45ebb007-2dee-47d0-a0d7-2b3d0309fb85]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 429436, 'reachable_time': 16281, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286653, 'error': None, 'target': 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:43.306 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:25:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:43.306 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[c8fa2db7-1be4-4123-92a8-01297a3ef527]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:43 np0005629333 systemd[1]: run-netns-ovnmeta\x2d7693903d\x2dd5e2\x2d4b50\x2da39b\x2dbbbcc4148329.mount: Deactivated successfully.
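
remove_netns deletes the now-empty metadata namespace, and systemd then reports the corresponding bind mount under /run/netns going away. Neutron performs the deletion inside its privsep daemon; the underlying operation is roughly equivalent to:

    from pyroute2 import netns

    # drop the per-network metadata namespace once its last port is gone
    netns.remove('ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329')
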
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.318 244018 DEBUG nova.objects.instance [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lazy-loading 'migration_context' on Instance uuid ec873c3c-bf46-4537-8c29-b23a3133d281 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.350 244018 DEBUG nova.objects.instance [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lazy-loading 'migration_context' on Instance uuid 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.446 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.613 244018 DEBUG nova.compute.manager [req-c3e50967-8850-4d83-bf1e-2f7310ee2243 req-635d41f4-c722-4986-9093-c11cf02fc28c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Received event network-vif-unplugged-07860675-4ac4-43a4-ab6b-bacd17801fc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.614 244018 DEBUG oslo_concurrency.lockutils [req-c3e50967-8850-4d83-bf1e-2f7310ee2243 req-635d41f4-c722-4986-9093-c11cf02fc28c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.614 244018 DEBUG oslo_concurrency.lockutils [req-c3e50967-8850-4d83-bf1e-2f7310ee2243 req-635d41f4-c722-4986-9093-c11cf02fc28c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.614 244018 DEBUG oslo_concurrency.lockutils [req-c3e50967-8850-4d83-bf1e-2f7310ee2243 req-635d41f4-c722-4986-9093-c11cf02fc28c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.614 244018 DEBUG nova.compute.manager [req-c3e50967-8850-4d83-bf1e-2f7310ee2243 req-635d41f4-c722-4986-9093-c11cf02fc28c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] No waiting events found dispatching network-vif-unplugged-07860675-4ac4-43a4-ab6b-bacd17801fc2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.615 244018 DEBUG nova.compute.manager [req-c3e50967-8850-4d83-bf1e-2f7310ee2243 req-635d41f4-c722-4986-9093-c11cf02fc28c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Received event network-vif-unplugged-07860675-4ac4-43a4-ab6b-bacd17801fc2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.615 244018 DEBUG nova.network.neutron [req-9976292b-b405-4117-acbe-5da40c542e93 req-a9d34996-aef0-4732-9169-f22236095bc5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:25:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 472 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 9.4 MiB/s wr, 165 op/s
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.877 244018 DEBUG nova.virt.libvirt.vif [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:25:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-20106714',display_name='tempest-ImagesOneServerNegativeTestJSON-server-20106714',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-20106714',id=48,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:25:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f976004e0b334963a69c2519fca200d2',ramdisk_id='',reservation_id='r-agofmyat',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1374162185',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1374162185-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:25:39Z,user_data=None,user_id='89e71139346a40899212d5bc35835720',uuid=e291d969-fcea-4f60-a478-f7b81a91ccd9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "address": "fa:16:3e:07:ea:76", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07860675-4a", "ovs_interfaceid": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.878 244018 DEBUG nova.network.os_vif_util [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converting VIF {"id": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "address": "fa:16:3e:07:ea:76", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07860675-4a", "ovs_interfaceid": "07860675-4ac4-43a4-ab6b-bacd17801fc2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.878 244018 DEBUG nova.network.os_vif_util [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:07:ea:76,bridge_name='br-int',has_traffic_filtering=True,id=07860675-4ac4-43a4-ab6b-bacd17801fc2,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07860675-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.879 244018 DEBUG os_vif [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:ea:76,bridge_name='br-int',has_traffic_filtering=True,id=07860675-4ac4-43a4-ab6b-bacd17801fc2,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07860675-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.882 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.882 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07860675-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.885 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.885 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Ensure instance console log exists: /var/lib/nova/instances/ec873c3c-bf46-4537-8c29-b23a3133d281/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.886 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.886 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.886 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.888 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Start _get_guest_xml network_info=[{"id": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "address": "fa:16:3e:1f:2e:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2cc5f4-bf", "ovs_interfaceid": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.888 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.888 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Ensure instance console log exists: /var/lib/nova/instances/3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.889 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.889 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.889 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.889 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.890 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.893 244018 INFO os_vif [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:07:ea:76,bridge_name='br-int',has_traffic_filtering=True,id=07860675-4ac4-43a4-ab6b-bacd17801fc2,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07860675-4a')#033[00m
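
The unplug itself is performed by the os-vif library: nova converts its VIF dict to the VIFOpenVSwitch object shown above and hands it to the ovs plugin, which issues the br-int DelPortCommand logged at 12:25:43.882. The driver-level call pattern, assuming vif and instance_info are the already-built os-vif objects:

    import os_vif

    os_vif.initialize()                # load the registered plugins (ovs, ...)
    os_vif.unplug(vif, instance_info)  # vif: the VIFOpenVSwitch from the log
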
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.912 244018 WARNING nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.918 244018 DEBUG nova.virt.libvirt.host [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.918 244018 DEBUG nova.virt.libvirt.host [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.923 244018 DEBUG nova.virt.libvirt.host [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.923 244018 DEBUG nova.virt.libvirt.host [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
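
The two probes above look for a CPU controller first on cgroups v1 (absent on this host) and then on the unified v2 hierarchy, where it is found. On v2 the check reduces to reading a single file; a simplified sketch, assuming the unified hierarchy is mounted at the default path:

    def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
        # on cgroups v2 the available controllers are listed in one file
        with open('%s/cgroup.controllers' % root) as f:
            return 'cpu' in f.read().split()
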
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.923 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.923 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.924 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.924 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.924 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.924 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.924 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.925 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.925 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.925 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.925 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.926 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
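
With no flavor or image topology constraints, the limits default to 65536 sockets/cores/threads and the preferences to 0, so the only topology whose product equals the single vCPU is 1 socket x 1 core x 1 thread. A simplified version of the enumeration nova.virt.hardware performs (loop bounds tightened to the vCPU count so the sketch stays cheap):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # yield every (sockets, cores, threads) whose product is the vCPU count
        for s in range(1, min(max_sockets, vcpus) + 1):
            for c in range(1, min(max_cores, vcpus) + 1):
                for t in range(1, min(max_threads, vcpus) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log
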
Feb 25 07:25:43 np0005629333 nova_compute[244014]: 2026-02-25 12:25:43.928 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:25:44 np0005629333 nova_compute[244014]: 2026-02-25 12:25:44.230 244018 DEBUG nova.network.neutron [req-9976292b-b405-4117-acbe-5da40c542e93 req-a9d34996-aef0-4732-9169-f22236095bc5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:25:44 np0005629333 nova_compute[244014]: 2026-02-25 12:25:44.343 244018 DEBUG oslo_concurrency.lockutils [req-9976292b-b405-4117-acbe-5da40c542e93 req-a9d34996-aef0-4732-9169-f22236095bc5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:25:44 np0005629333 nova_compute[244014]: 2026-02-25 12:25:44.350 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquired lock "refresh_cache-3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:25:44 np0005629333 nova_compute[244014]: 2026-02-25 12:25:44.351 244018 DEBUG nova.network.neutron [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:25:44 np0005629333 nova_compute[244014]: 2026-02-25 12:25:44.499 244018 INFO nova.virt.libvirt.driver [None req-daed9e93-fa86-46b3-9114-c10c3375fd64 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Beginning live snapshot process#033[00m
Feb 25 07:25:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:25:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Feb 25 07:25:44 np0005629333 nova_compute[244014]: 2026-02-25 12:25:44.991 244018 DEBUG nova.network.neutron [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:25:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:25:44 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/535292027' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:25:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Feb 25 07:25:44 np0005629333 nova_compute[244014]: 2026-02-25 12:25:44.996 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:45 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.023 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
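
The "ceph mon dump --format=json" round-trips above are nova's RBD driver discovering the monitor addresses it will embed in the guest's network-disk definitions. A sketch of the call and of extracting the monitor list, with key names as in the standard ceph JSON output:

    import json
    from oslo_concurrency import processutils

    out, _ = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    mons = json.loads(out)['mons']             # one entry per monitor
    addrs = [m['public_addr'] for m in mons]   # 'addr:port/nonce' strings
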
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.039 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image ec873c3c-bf46-4537-8c29-b23a3133d281_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.042 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.199 244018 INFO nova.virt.libvirt.driver [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Snapshot image upload complete#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.200 244018 DEBUG nova.compute.manager [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.204 244018 INFO nova.virt.libvirt.driver [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Deleting instance files /var/lib/nova/instances/e291d969-fcea-4f60-a478-f7b81a91ccd9_del#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.205 244018 INFO nova.virt.libvirt.driver [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Deletion of /var/lib/nova/instances/e291d969-fcea-4f60-a478-f7b81a91ccd9_del complete#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.209 244018 DEBUG nova.virt.libvirt.imagebackend [None req-daed9e93-fa86-46b3-9114-c10c3375fd64 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.305 244018 INFO nova.compute.manager [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Took 5.44 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.306 244018 DEBUG oslo.service.loopingcall [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
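
_try_deallocate_network runs the Neutron deallocation inside an oslo.service looping call so transient failures can be retried rather than leaking the allocation; the "Waiting for function" line is the caller blocking on that loop. The waiting pattern, sketched with FixedIntervalLoopingCall and a hypothetical stand-in for the real Neutron call:

    from oslo_service import loopingcall

    def _deallocate_network_with_retries():
        deallocate_for_instance()          # hypothetical stand-in for the Neutron call
        raise loopingcall.LoopingCallDone()

    timer = loopingcall.FixedIntervalLoopingCall(_deallocate_network_with_retries)
    timer.start(interval=2).wait()         # blocks until LoopingCallDone is raised
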
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.307 244018 DEBUG nova.compute.manager [-] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.307 244018 DEBUG nova.network.neutron [-] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.317 244018 INFO nova.compute.manager [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Shelve offloading#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.332 244018 INFO nova.virt.libvirt.driver [-] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Instance destroyed successfully.#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.333 244018 DEBUG nova.compute.manager [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.335 244018 DEBUG oslo_concurrency.lockutils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "refresh_cache-b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.336 244018 DEBUG oslo_concurrency.lockutils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquired lock "refresh_cache-b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.337 244018 DEBUG nova.network.neutron [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.499 244018 DEBUG nova.storage.rbd_utils [None req-daed9e93-fa86-46b3-9114-c10c3375fd64 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] creating snapshot(1242784c9e544b4a9c689b1d2aa99219) on rbd image(b8086e43-4c45-422f-a3b5-fa665c256b30_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
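
create_snap is the first step of the live snapshot begun at 12:25:44.499: a point-in-time RBD snapshot of the running guest's disk that the subsequent image upload reads from. Through the librbd binding the operation is roughly (assuming the 'vms' pool used for instance disks above):

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx, 'b8086e43-4c45-422f-a3b5-fa665c256b30_disk') as image:
                image.create_snap('1242784c9e544b4a9c689b1d2aa99219')
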
Feb 25 07:25:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:25:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3902237722' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.550 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.555 244018 DEBUG nova.virt.libvirt.vif [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:25:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-2038151869',display_name='tempest-MultipleCreateTestJSON-server-2038151869-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-2038151869-2',id=50,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8744bdbc0f1499388aab5f477246beb',ramdisk_id='',reservation_id='r-e70320tn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1900553995',owner_user_name='tempest-MultipleCreateTestJSON-1900553995-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:25:37Z,user_data=None,user_id='0899f3fdb57d46a395d07753dd261241',uuid=ec873c3c-bf46-4537-8c29-b23a3133d281,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "address": "fa:16:3e:1f:2e:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2cc5f4-bf", "ovs_interfaceid": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.556 244018 DEBUG nova.network.os_vif_util [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converting VIF {"id": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "address": "fa:16:3e:1f:2e:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2cc5f4-bf", "ovs_interfaceid": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.558 244018 DEBUG nova.network.os_vif_util [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:2e:f8,bridge_name='br-int',has_traffic_filtering=True,id=0b2cc5f4-bf41-4e03-8c24-1e711f742942,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b2cc5f4-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.560 244018 DEBUG nova.objects.instance [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lazy-loading 'pci_devices' on Instance uuid ec873c3c-bf46-4537-8c29-b23a3133d281 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:25:45 np0005629333 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.584 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:25:45 np0005629333 nova_compute[244014]:  <uuid>ec873c3c-bf46-4537-8c29-b23a3133d281</uuid>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:  <name>instance-00000032</name>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <nova:name>tempest-MultipleCreateTestJSON-server-2038151869-2</nova:name>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:25:43</nova:creationTime>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:25:45 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:        <nova:user uuid="0899f3fdb57d46a395d07753dd261241">tempest-MultipleCreateTestJSON-1900553995-project-member</nova:user>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:        <nova:project uuid="c8744bdbc0f1499388aab5f477246beb">tempest-MultipleCreateTestJSON-1900553995</nova:project>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:        <nova:port uuid="0b2cc5f4-bf41-4e03-8c24-1e711f742942">
Feb 25 07:25:45 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <entry name="serial">ec873c3c-bf46-4537-8c29-b23a3133d281</entry>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <entry name="uuid">ec873c3c-bf46-4537-8c29-b23a3133d281</entry>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/ec873c3c-bf46-4537-8c29-b23a3133d281_disk">
Feb 25 07:25:45 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:25:45 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/ec873c3c-bf46-4537-8c29-b23a3133d281_disk.config">
Feb 25 07:25:45 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:25:45 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:1f:2e:f8"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <target dev="tap0b2cc5f4-bf"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/ec873c3c-bf46-4537-8c29-b23a3133d281/console.log" append="off"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:25:45 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:25:45 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:25:45 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:25:45 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.593 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Preparing to wait for external event network-vif-plugged-0b2cc5f4-bf41-4e03-8c24-1e711f742942 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.593 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.593 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.594 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.595 244018 DEBUG nova.virt.libvirt.vif [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:25:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-2038151869',display_name='tempest-MultipleCreateTestJSON-server-2038151869-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-2038151869-2',id=50,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8744bdbc0f1499388aab5f477246beb',ramdisk_id='',reservation_id='r-e70320tn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1900553995',owner_user_name='tempest-MultipleCreateTestJSON-1900553995-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:25:37Z,user_data=None,user_id='0899f3fdb57d46a395d07753dd261241',uuid=ec873c3c-bf46-4537-8c29-b23a3133d281,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "address": "fa:16:3e:1f:2e:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2cc5f4-bf", "ovs_interfaceid": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.595 244018 DEBUG nova.network.os_vif_util [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converting VIF {"id": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "address": "fa:16:3e:1f:2e:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2cc5f4-bf", "ovs_interfaceid": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.596 244018 DEBUG nova.network.os_vif_util [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:2e:f8,bridge_name='br-int',has_traffic_filtering=True,id=0b2cc5f4-bf41-4e03-8c24-1e711f742942,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b2cc5f4-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.596 244018 DEBUG os_vif [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:2e:f8,bridge_name='br-int',has_traffic_filtering=True,id=0b2cc5f4-bf41-4e03-8c24-1e711f742942,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b2cc5f4-bf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.597 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.599 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.599 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.603 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.604 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b2cc5f4-bf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.604 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0b2cc5f4-bf, col_values=(('external_ids', {'iface-id': '0b2cc5f4-bf41-4e03-8c24-1e711f742942', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1f:2e:f8', 'vm-uuid': 'ec873c3c-bf46-4537-8c29-b23a3133d281'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.606 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.608 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:25:45 np0005629333 NetworkManager[49836]: <info>  [1772022345.6083] manager: (tap0b2cc5f4-bf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/190)
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.612 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.613 244018 INFO os_vif [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:2e:f8,bridge_name='br-int',has_traffic_filtering=True,id=0b2cc5f4-bf41-4e03-8c24-1e711f742942,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b2cc5f4-bf')#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.672 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.673 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.673 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] No VIF found with MAC fa:16:3e:1f:2e:f8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.674 244018 INFO nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Using config drive#033[00m
Feb 25 07:25:45 np0005629333 nova_compute[244014]: 2026-02-25 12:25:45.698 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image ec873c3c-bf46-4537-8c29-b23a3133d281_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:25:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 472 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 3.5 MiB/s wr, 57 op/s
Feb 25 07:25:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:25:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:25:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:25:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:25:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:25:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:25:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:25:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:25:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:25:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:25:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:25:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.097 244018 DEBUG nova.network.neutron [req-400f4913-daf3-45b5-814e-29d0fbf4ffbd req-62025a39-bfe2-484e-8da1-ab017b51bf49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Updated VIF entry in instance network info cache for port 0b2cc5f4-bf41-4e03-8c24-1e711f742942. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.098 244018 DEBUG nova.network.neutron [req-400f4913-daf3-45b5-814e-29d0fbf4ffbd req-62025a39-bfe2-484e-8da1-ab017b51bf49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Updating instance_info_cache with network_info: [{"id": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "address": "fa:16:3e:1f:2e:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2cc5f4-bf", "ovs_interfaceid": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.116 244018 DEBUG oslo_concurrency.lockutils [req-400f4913-daf3-45b5-814e-29d0fbf4ffbd req-62025a39-bfe2-484e-8da1-ab017b51bf49 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-ec873c3c-bf46-4537-8c29-b23a3133d281" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.135 244018 DEBUG nova.compute.manager [req-a5c1c2e8-1fbb-41dc-8843-dc56a1cf62b4 req-267b838f-57c0-4ae6-b193-1f6970028787 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Received event network-vif-plugged-07860675-4ac4-43a4-ab6b-bacd17801fc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.135 244018 DEBUG oslo_concurrency.lockutils [req-a5c1c2e8-1fbb-41dc-8843-dc56a1cf62b4 req-267b838f-57c0-4ae6-b193-1f6970028787 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.136 244018 DEBUG oslo_concurrency.lockutils [req-a5c1c2e8-1fbb-41dc-8843-dc56a1cf62b4 req-267b838f-57c0-4ae6-b193-1f6970028787 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.136 244018 DEBUG oslo_concurrency.lockutils [req-a5c1c2e8-1fbb-41dc-8843-dc56a1cf62b4 req-267b838f-57c0-4ae6-b193-1f6970028787 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.137 244018 DEBUG nova.compute.manager [req-a5c1c2e8-1fbb-41dc-8843-dc56a1cf62b4 req-267b838f-57c0-4ae6-b193-1f6970028787 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] No waiting events found dispatching network-vif-plugged-07860675-4ac4-43a4-ab6b-bacd17801fc2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.137 244018 WARNING nova.compute.manager [req-a5c1c2e8-1fbb-41dc-8843-dc56a1cf62b4 req-267b838f-57c0-4ae6-b193-1f6970028787 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Received unexpected event network-vif-plugged-07860675-4ac4-43a4-ab6b-bacd17801fc2 for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.311 244018 INFO nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Creating config drive at /var/lib/nova/instances/ec873c3c-bf46-4537-8c29-b23a3133d281/disk.config#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.320 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ec873c3c-bf46-4537-8c29-b23a3133d281/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_mv_swr_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:25:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Feb 25 07:25:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Feb 25 07:25:46 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:25:46 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:25:46 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.373 244018 DEBUG nova.network.neutron [-] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:25:46 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.409 244018 DEBUG nova.storage.rbd_utils [None req-daed9e93-fa86-46b3-9114-c10c3375fd64 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] cloning vms/b8086e43-4c45-422f-a3b5-fa665c256b30_disk@1242784c9e544b4a9c689b1d2aa99219 to images/5c4d8498-0ce5-4898-96f9-042972db1ddb clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.450 244018 INFO nova.compute.manager [-] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Took 1.14 seconds to deallocate network for instance.#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.463 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ec873c3c-bf46-4537-8c29-b23a3133d281/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_mv_swr_" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.491 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image ec873c3c-bf46-4537-8c29-b23a3133d281_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.495 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ec873c3c-bf46-4537-8c29-b23a3133d281/disk.config ec873c3c-bf46-4537-8c29-b23a3133d281_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.524 244018 DEBUG nova.storage.rbd_utils [None req-daed9e93-fa86-46b3-9114-c10c3375fd64 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] flattening images/5c4d8498-0ce5-4898-96f9-042972db1ddb flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.569 244018 DEBUG nova.network.neutron [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Updating instance_info_cache with network_info: [{"id": "58ea35ed-ff5d-4827-a801-431f7536d78d", "address": "fa:16:3e:03:e4:94", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58ea35ed-ff", "ovs_interfaceid": "58ea35ed-ff5d-4827-a801-431f7536d78d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.571 244018 DEBUG oslo_concurrency.lockutils [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.571 244018 DEBUG oslo_concurrency.lockutils [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:46 np0005629333 podman[287045]: 2026-02-25 12:25:46.571210669 +0000 UTC m=+0.052729434 container create b8db06c61530c9985816d42b8699b821d80cb314864fc10cd2b5f2c418b27fd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_kare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 07:25:46 np0005629333 podman[287045]: 2026-02-25 12:25:46.554051513 +0000 UTC m=+0.035570258 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:25:46 np0005629333 systemd[1]: Started libpod-conmon-b8db06c61530c9985816d42b8699b821d80cb314864fc10cd2b5f2c418b27fd4.scope.
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.668 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Releasing lock "refresh_cache-3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.668 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Instance network_info: |[{"id": "58ea35ed-ff5d-4827-a801-431f7536d78d", "address": "fa:16:3e:03:e4:94", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58ea35ed-ff", "ovs_interfaceid": "58ea35ed-ff5d-4827-a801-431f7536d78d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.673 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Start _get_guest_xml network_info=[{"id": "58ea35ed-ff5d-4827-a801-431f7536d78d", "address": "fa:16:3e:03:e4:94", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58ea35ed-ff", "ovs_interfaceid": "58ea35ed-ff5d-4827-a801-431f7536d78d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.685 244018 WARNING nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:25:46 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.691 244018 DEBUG nova.virt.libvirt.host [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.696 244018 DEBUG nova.virt.libvirt.host [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.704 244018 DEBUG nova.virt.libvirt.host [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.704 244018 DEBUG nova.virt.libvirt.host [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.705 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.705 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.707 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.708 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.708 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.709 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.709 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.710 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.711 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.711 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.711 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.712 244018 DEBUG nova.virt.hardware [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
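The 0:0:0 preferences and limits above mean "unconstrained", so Nova falls back to enumerating every (sockets, cores, threads) factorization of the vCPU count; with one vCPU the only candidate is 1:1:1. A sketch of the combinatorial core (not Nova's exact routine, which also folds in flavor and image preferences when sorting the results):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Every (sockets, cores, threads) split whose product equals vcpus.
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    topos.append((s, c, t))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)] -- the single topology logged above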
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.717 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
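Nova shells out to the ceph CLI here rather than binding librados directly. The same call is easy to reproduce and parse; the command line below is taken verbatim from the log, while the JSON field names are those emitted by current Ceph releases and worth checking against your version:

    import json
    import subprocess

    cmd = ["ceph", "mon", "dump", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    monmap = json.loads(out)
    # e.g. list the monitor names from the monmap
    print([mon["name"] for mon in monmap.get("mons", [])])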
Feb 25 07:25:46 np0005629333 podman[287045]: 2026-02-25 12:25:46.718556259 +0000 UTC m=+0.200075064 container init b8db06c61530c9985816d42b8699b821d80cb314864fc10cd2b5f2c418b27fd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_kare, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 07:25:46 np0005629333 podman[287045]: 2026-02-25 12:25:46.725647889 +0000 UTC m=+0.207166664 container start b8db06c61530c9985816d42b8699b821d80cb314864fc10cd2b5f2c418b27fd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_kare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Feb 25 07:25:46 np0005629333 bold_kare[287098]: 167 167
Feb 25 07:25:46 np0005629333 systemd[1]: libpod-b8db06c61530c9985816d42b8699b821d80cb314864fc10cd2b5f2c418b27fd4.scope: Deactivated successfully.
Feb 25 07:25:46 np0005629333 podman[287045]: 2026-02-25 12:25:46.740882271 +0000 UTC m=+0.222401056 container attach b8db06c61530c9985816d42b8699b821d80cb314864fc10cd2b5f2c418b27fd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 07:25:46 np0005629333 podman[287045]: 2026-02-25 12:25:46.741196079 +0000 UTC m=+0.222714834 container died b8db06c61530c9985816d42b8699b821d80cb314864fc10cd2b5f2c418b27fd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_kare, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.781 244018 DEBUG oslo_concurrency.processutils [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:25:46 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7db3bddc775f896f3dabe0e77d5dbe14bc4c1a709fed230308146e9ff20ca5ed-merged.mount: Deactivated successfully.
Feb 25 07:25:46 np0005629333 podman[287045]: 2026-02-25 12:25:46.848345352 +0000 UTC m=+0.329864097 container remove b8db06c61530c9985816d42b8699b821d80cb314864fc10cd2b5f2c418b27fd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 07:25:46 np0005629333 systemd[1]: libpod-conmon-b8db06c61530c9985816d42b8699b821d80cb314864fc10cd2b5f2c418b27fd4.scope: Deactivated successfully.
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.913 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ec873c3c-bf46-4537-8c29-b23a3133d281/disk.config ec873c3c-bf46-4537-8c29-b23a3133d281_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.914 244018 INFO nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Deleting local config drive /var/lib/nova/instances/ec873c3c-bf46-4537-8c29-b23a3133d281/disk.config because it was imported into RBD.#033[00m
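Config-drive handling for an RBD-backed instance is the two-step dance visible in the two lines above: import the locally built disk.config into the vms pool as a format-2 image, then delete the local copy only once the import has returned 0. A sketch using the exact command and paths from the log:

    import os
    import subprocess

    instance = "ec873c3c-bf46-4537-8c29-b23a3133d281"
    local = f"/var/lib/nova/instances/{instance}/disk.config"
    subprocess.run(["rbd", "import", "--pool", "vms", local,
                    f"{instance}_disk.config", "--image-format=2",
                    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
                   check=True)
    os.unlink(local)  # safe only because the import above succeeded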
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.927 244018 DEBUG nova.storage.rbd_utils [None req-daed9e93-fa86-46b3-9114-c10c3375fd64 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] removing snapshot(1242784c9e544b4a9c689b1d2aa99219) on rbd image(b8086e43-4c45-422f-a3b5-fa665c256b30_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 25 07:25:46 np0005629333 NetworkManager[49836]: <info>  [1772022346.9584] manager: (tap0b2cc5f4-bf): new Tun device (/org/freedesktop/NetworkManager/Devices/191)
Feb 25 07:25:46 np0005629333 kernel: tap0b2cc5f4-bf: entered promiscuous mode
Feb 25 07:25:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:46Z|00417|binding|INFO|Claiming lport 0b2cc5f4-bf41-4e03-8c24-1e711f742942 for this chassis.
Feb 25 07:25:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:46Z|00418|binding|INFO|0b2cc5f4-bf41-4e03-8c24-1e711f742942: Claiming fa:16:3e:1f:2e:f8 10.100.0.3
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.962 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:46.975 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:2e:f8 10.100.0.3'], port_security=['fa:16:3e:1f:2e:f8 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ec873c3c-bf46-4537-8c29-b23a3133d281', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c97c5b11-7517-46fe-a6ca-63894792908c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8744bdbc0f1499388aab5f477246beb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9622f97c-a9c4-423f-b49e-154152bd6881', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b7ae960-1b2b-4f15-b35e-8e889d9ccce8, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=0b2cc5f4-bf41-4e03-8c24-1e711f742942) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:25:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:46.977 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 0b2cc5f4-bf41-4e03-8c24-1e711f742942 in datapath c97c5b11-7517-46fe-a6ca-63894792908c bound to our chassis#033[00m
Feb 25 07:25:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:46.978 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c97c5b11-7517-46fe-a6ca-63894792908c#033[00m
Feb 25 07:25:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:46Z|00419|binding|INFO|Setting lport 0b2cc5f4-bf41-4e03-8c24-1e711f742942 ovn-installed in OVS
Feb 25 07:25:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:46Z|00420|binding|INFO|Setting lport 0b2cc5f4-bf41-4e03-8c24-1e711f742942 up in Southbound
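The claim/up sequence can be cross-checked against the Southbound database. A hedged sketch using ovn-sbctl's generic database commands (available in any recent OVN; column names per the Port_Binding rows shown above):

    import subprocess

    port = "0b2cc5f4-bf41-4e03-8c24-1e711f742942"
    out = subprocess.run(
        ["ovn-sbctl", "--bare", "--columns=chassis,up",
         "find", "Port_Binding", f"logical_port={port}"],
        capture_output=True, text=True, check=True).stdout
    print(out)  # chassis UUID, then "true" once the lport has been set up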
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.984 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:46 np0005629333 nova_compute[244014]: 2026-02-25 12:25:46.991 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:46.990 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d395df89-7870-4994-8578-34c228025b0a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:46.992 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc97c5b11-71 in ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:25:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:46.994 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc97c5b11-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:25:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:46.994 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3d99ba5d-cf2d-4343-863f-e2f630eddca2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:46 np0005629333 systemd-udevd[287206]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:25:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:46.996 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bb754f0a-8281-4e3c-801c-84fad379631a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:46 np0005629333 systemd-machined[210048]: New machine qemu-54-instance-00000032.
Feb 25 07:25:47 np0005629333 NetworkManager[49836]: <info>  [1772022347.0064] device (tap0b2cc5f4-bf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:25:47 np0005629333 NetworkManager[49836]: <info>  [1772022347.0071] device (tap0b2cc5f4-bf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.005 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[52a5ccdc-49b2-4718-93bd-36e89b46b752]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:47 np0005629333 systemd[1]: Started Virtual Machine qemu-54-instance-00000032.
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.018 244018 DEBUG nova.network.neutron [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Updating instance_info_cache with network_info: [{"id": "fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5", "address": "fa:16:3e:fc:58:40", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcfcdd66-6c", "ovs_interfaceid": "fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
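The cached network_info is a JSON list of VIF dicts; fixed addresses live under network.subnets[].ips[]. A small helper matching the structure of the entry above:

    def fixed_ips(network_info):
        # Map each VIF id to its list of fixed IP addresses.
        return {
            vif["id"]: [ip["address"]
                        for subnet in vif["network"]["subnets"]
                        for ip in subnet["ips"]]
            for vif in network_info
        }

    # For the cache entry above this returns
    # {'fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5': ['10.100.0.9']}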
Feb 25 07:25:47 np0005629333 podman[287193]: 2026-02-25 12:25:47.025742402 +0000 UTC m=+0.046918499 container create 4ca53240ba254bd86ec50ce894d2190869c5bac569ac437feb3e4efd271ffccb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_proskuriakova, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.031 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[40bc70cc-7c0b-45cf-8ac1-22917d8ff1de]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:47 np0005629333 systemd[1]: Started libpod-conmon-4ca53240ba254bd86ec50ce894d2190869c5bac569ac437feb3e4efd271ffccb.scope.
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.045 244018 DEBUG oslo_concurrency.lockutils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Releasing lock "refresh_cache-b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.060 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[dc901aba-866c-4eae-b1f0-aa9ef23b6bee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:47 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:25:47 np0005629333 NetworkManager[49836]: <info>  [1772022347.0659] manager: (tapc97c5b11-70): new Veth device (/org/freedesktop/NetworkManager/Devices/192)
Feb 25 07:25:47 np0005629333 systemd-udevd[287210]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.064 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5630f102-9c0e-4c2d-8b13-b15d46af3572]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:47 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27a5bf22faa301f16dcfdf1e408fac247e6c8cad36d3a2bda55d42f55bc67b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:25:47 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27a5bf22faa301f16dcfdf1e408fac247e6c8cad36d3a2bda55d42f55bc67b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:25:47 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27a5bf22faa301f16dcfdf1e408fac247e6c8cad36d3a2bda55d42f55bc67b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:25:47 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27a5bf22faa301f16dcfdf1e408fac247e6c8cad36d3a2bda55d42f55bc67b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:25:47 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27a5bf22faa301f16dcfdf1e408fac247e6c8cad36d3a2bda55d42f55bc67b5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.092 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[11921cbd-005d-4891-97ee-9c6572b86190]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:47 np0005629333 podman[287193]: 2026-02-25 12:25:47.092659416 +0000 UTC m=+0.113835523 container init 4ca53240ba254bd86ec50ce894d2190869c5bac569ac437feb3e4efd271ffccb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_proskuriakova, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.095 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d7e494e7-25bb-455a-9a27-205613ee97b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:47 np0005629333 podman[287193]: 2026-02-25 12:25:47.010708567 +0000 UTC m=+0.031884694 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:25:47 np0005629333 podman[287193]: 2026-02-25 12:25:47.112168538 +0000 UTC m=+0.133344645 container start 4ca53240ba254bd86ec50ce894d2190869c5bac569ac437feb3e4efd271ffccb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 07:25:47 np0005629333 NetworkManager[49836]: <info>  [1772022347.1130] device (tapc97c5b11-70): carrier: link connected
Feb 25 07:25:47 np0005629333 podman[287193]: 2026-02-25 12:25:47.115454941 +0000 UTC m=+0.136631048 container attach 4ca53240ba254bd86ec50ce894d2190869c5bac569ac437feb3e4efd271ffccb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_proskuriakova, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.116 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022332.1154826, b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.116 244018 INFO nova.compute.manager [-] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.117 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[df5e2b16-d348-4c67-8d48-fd3755f8f31a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.128 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4f193c05-537d-4788-bfb1-71d5de6f2d49]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc97c5b11-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:61:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431652, 'reachable_time': 20919, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287255, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.139 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3edf1e77-6c46-43e3-92c2-2f7860368891]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe87:610a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 431652, 'tstamp': 431652}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287256, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.150 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2310cc77-8f2c-4e23-8967-cedf4dcd98f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc97c5b11-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:61:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431652, 'reachable_time': 20919, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287257, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
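These privsep replies are raw pyroute2 netlink dumps taken inside the metadata namespace (the 'target' field in each header). A hedged sketch of the same query done directly with pyroute2, reusing the namespace and interface names from the dump above (requires the namespace to exist and sufficient privileges):

    from pyroute2 import NetNS

    with NetNS("ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c") as ns:
        idx = ns.link_lookup(ifname="tapc97c5b11-71")[0]
        link = ns.get_links(idx)[0]
        # fa:16:3e:87:61:0a and UP, per the RTM_NEWLINK dump above
        print(link.get_attr("IFLA_ADDRESS"), link.get_attr("IFLA_OPERSTATE"))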
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.153 244018 DEBUG nova.compute.manager [None req-0b64b02f-611b-4288-bc63-0edaccc3fc6f - - - - - -] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.157 244018 DEBUG nova.compute.manager [None req-0b64b02f-611b-4288-bc63-0edaccc3fc6f - - - - - -] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.167 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[09d9d5f7-cddf-4bec-b7b1-1ecb20d7fac4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.186 244018 INFO nova.compute.manager [None req-0b64b02f-611b-4288-bc63-0edaccc3fc6f - - - - - -] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] During sync_power_state the instance has a pending task (shelving_offloading). Skip.#033[00m
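power_state 4 is SHUTDOWN in Nova's power-state enum, so the "Stopped" lifecycle event agrees with both the database and the hypervisor; the sync is then skipped anyway because of the pending shelving_offloading task. The constants, as defined in nova/compute/power_state.py (verify against your Nova release):

    NOSTATE = 0x00
    RUNNING = 0x01
    PAUSED = 0x03
    SHUTDOWN = 0x04  # the DB and VM power_state logged above
    CRASHED = 0x06
    SUSPENDED = 0x07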
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.202 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[894032d3-d3de-4a4c-a45e-406fb956d956]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.203 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc97c5b11-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.203 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.204 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc97c5b11-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:47 np0005629333 NetworkManager[49836]: <info>  [1772022347.2065] manager: (tapc97c5b11-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/193)
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.207 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:47 np0005629333 kernel: tapc97c5b11-70: entered promiscuous mode
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.210 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc97c5b11-70, col_values=(('external_ids', {'iface-id': 'db412aa7-4ad4-4eb8-b61f-dd3e71d5329d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:47 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:47Z|00421|binding|INFO|Releasing lport db412aa7-4ad4-4eb8-b61f-dd3e71d5329d from this chassis (sb_readonly=0)
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.210 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.217 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.220 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.222 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c97c5b11-7517-46fe-a6ca-63894792908c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c97c5b11-7517-46fe-a6ca-63894792908c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.222 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[366719e8-1c4d-419e-90bb-0e9362e968df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.223 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-c97c5b11-7517-46fe-a6ca-63894792908c
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/c97c5b11-7517-46fe-a6ca-63894792908c.pid.haproxy
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID c97c5b11-7517-46fe-a6ca-63894792908c
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:25:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:47.223 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'env', 'PROCESS_TAG=haproxy-c97c5b11-7517-46fe-a6ca-63894792908c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c97c5b11-7517-46fe-a6ca-63894792908c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
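Before spawning, the agent probed the pidfile (the earlier "Unable to access ... pid.haproxy" line) and found none, so it launches haproxy inside the ovnmeta namespace through rootwrap with the config just rendered. A sketch of that liveness probe, assuming the same pidfile layout:

    import os

    def proxy_running(pidfile):
        # No pidfile, an unparsable pid, or a dead process all mean "not running".
        try:
            pid = int(open(pidfile).read().strip())
        except (FileNotFoundError, ValueError):
            return False
        return os.path.exists(f"/proc/{pid}")

    print(proxy_running("/var/lib/neutron/external/pids/"
                        "c97c5b11-7517-46fe-a6ca-63894792908c.pid.haproxy"))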
Feb 25 07:25:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:25:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4159697921' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.313 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:25:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:25:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3314089494' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.334 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.341 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:25:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.365 244018 DEBUG oslo_concurrency.processutils [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:25:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Feb 25 07:25:47 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.378 244018 DEBUG nova.compute.provider_tree [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.396 244018 DEBUG nova.storage.rbd_utils [None req-daed9e93-fa86-46b3-9114-c10c3375fd64 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] creating snapshot(snap) on rbd image(5c4d8498-0ce5-4898-96f9-042972db1ddb) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.430 244018 DEBUG nova.scheduler.client.report [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
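Placement turns this inventory into schedulable capacity as (total - reserved) * allocation_ratio, so the figures above work out to 32 VCPU, 7167 MB of RAM, and roughly 52.2 GB of disk. A quick check, with inventory values copied from the log line above:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2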
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.474 244018 DEBUG oslo_concurrency.lockutils [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.502 244018 INFO nova.scheduler.client.report [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Deleted allocations for instance e291d969-fcea-4f60-a478-f7b81a91ccd9#033[00m
Feb 25 07:25:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:25:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1305819421' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:25:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:25:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1305819421' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 07:25:47 np0005629333 sweet_proskuriakova[287224]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:25:47 np0005629333 sweet_proskuriakova[287224]: --> All data devices are unavailable
Feb 25 07:25:47 np0005629333 systemd[1]: libpod-4ca53240ba254bd86ec50ce894d2190869c5bac569ac437feb3e4efd271ffccb.scope: Deactivated successfully.
Feb 25 07:25:47 np0005629333 podman[287193]: 2026-02-25 12:25:47.568537834 +0000 UTC m=+0.589713941 container died 4ca53240ba254bd86ec50ce894d2190869c5bac569ac437feb3e4efd271ffccb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.577 244018 DEBUG oslo_concurrency.lockutils [None req-e1045bde-9d77-4d18-baa0-3eb9bea06557 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "e291d969-fcea-4f60-a478-f7b81a91ccd9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:47 np0005629333 podman[287364]: 2026-02-25 12:25:47.61680097 +0000 UTC m=+0.099997051 container create 26d09110bafeffdf906186af360597b7063b1c4f77ebb55bbae343edd6e074b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 07:25:47 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a27a5bf22faa301f16dcfdf1e408fac247e6c8cad36d3a2bda55d42f55bc67b5-merged.mount: Deactivated successfully.
Feb 25 07:25:47 np0005629333 podman[287364]: 2026-02-25 12:25:47.535273673 +0000 UTC m=+0.018469794 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:25:47 np0005629333 podman[287193]: 2026-02-25 12:25:47.645912784 +0000 UTC m=+0.667088891 container remove 4ca53240ba254bd86ec50ce894d2190869c5bac569ac437feb3e4efd271ffccb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_proskuriakova, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.644 244018 DEBUG nova.compute.manager [req-c0871cc0-369b-4693-a9ff-53e19bbe671a req-d0643c60-c8b6-4be2-a26d-5f95e799f98f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Received event network-vif-plugged-0b2cc5f4-bf41-4e03-8c24-1e711f742942 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.646 244018 DEBUG oslo_concurrency.lockutils [req-c0871cc0-369b-4693-a9ff-53e19bbe671a req-d0643c60-c8b6-4be2-a26d-5f95e799f98f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.646 244018 DEBUG oslo_concurrency.lockutils [req-c0871cc0-369b-4693-a9ff-53e19bbe671a req-d0643c60-c8b6-4be2-a26d-5f95e799f98f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.647 244018 DEBUG oslo_concurrency.lockutils [req-c0871cc0-369b-4693-a9ff-53e19bbe671a req-d0643c60-c8b6-4be2-a26d-5f95e799f98f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.647 244018 DEBUG nova.compute.manager [req-c0871cc0-369b-4693-a9ff-53e19bbe671a req-d0643c60-c8b6-4be2-a26d-5f95e799f98f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Processing event network-vif-plugged-0b2cc5f4-bf41-4e03-8c24-1e711f742942 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 07:25:47 np0005629333 systemd[1]: libpod-conmon-4ca53240ba254bd86ec50ce894d2190869c5bac569ac437feb3e4efd271ffccb.scope: Deactivated successfully.
Feb 25 07:25:47 np0005629333 systemd[1]: Started libpod-conmon-26d09110bafeffdf906186af360597b7063b1c4f77ebb55bbae343edd6e074b1.scope.
Feb 25 07:25:47 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:25:47 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed524380e5edf8481fd2be2b83ddd592e929273edf1401db6f69e8d7d1c59845/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:25:47 np0005629333 podman[287364]: 2026-02-25 12:25:47.713356513 +0000 UTC m=+0.196552614 container init 26d09110bafeffdf906186af360597b7063b1c4f77ebb55bbae343edd6e074b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:25:47 np0005629333 podman[287364]: 2026-02-25 12:25:47.717371877 +0000 UTC m=+0.200567958 container start 26d09110bafeffdf906186af360597b7063b1c4f77ebb55bbae343edd6e074b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 25 07:25:47 np0005629333 neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c[287391]: [NOTICE]   (287418) : New worker (287439) forked
Feb 25 07:25:47 np0005629333 neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c[287391]: [NOTICE]   (287418) : Loading success.
Feb 25 07:25:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 505 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 11 MiB/s wr, 255 op/s
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.860 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.860 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022347.8598304, ec873c3c-bf46-4537-8c29-b23a3133d281 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.861 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] VM Started (Lifecycle Event)
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.865 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.868 244018 INFO nova.virt.libvirt.driver [-] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Instance spawned successfully.
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.868 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 07:25:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:25:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/206562268' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.886 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
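[annotation] The 0.545s CMD above is nova shelling out to the ceph CLI to fetch the monitor map before wiring up RBD disks. A minimal sketch of the same query, assuming the client.openstack keyring and the /etc/ceph/ceph.conf named in the log are readable; the JSON field names are the usual `mon dump` layout and may vary by Ceph release:

    import json
    import subprocess

    # Same invocation that oslo_concurrency.processutils logged above.
    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    mon_map = json.loads(out)
    # "mons" is a list of monitor entries; keys assumed, hence .get().
    for mon in mon_map.get("mons", []):
        print(mon.get("name"), mon.get("addr"))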
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.887 244018 DEBUG nova.virt.libvirt.vif [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:25:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-2038151869',display_name='tempest-MultipleCreateTestJSON-server-2038151869-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-2038151869-1',id=49,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8744bdbc0f1499388aab5f477246beb',ramdisk_id='',reservation_id='r-e70320tn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1900553995',owner_user_name='tempest-MultipleCreateTestJSON-1900553995-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:25:39Z,user_data=None,user_id='0899f3fdb57d46a395d07753dd261241',uuid=3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "58ea35ed-ff5d-4827-a801-431f7536d78d", "address": "fa:16:3e:03:e4:94", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58ea35ed-ff", "ovs_interfaceid": "58ea35ed-ff5d-4827-a801-431f7536d78d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.887 244018 DEBUG nova.network.os_vif_util [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converting VIF {"id": "58ea35ed-ff5d-4827-a801-431f7536d78d", "address": "fa:16:3e:03:e4:94", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58ea35ed-ff", "ovs_interfaceid": "58ea35ed-ff5d-4827-a801-431f7536d78d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.888 244018 DEBUG nova.network.os_vif_util [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:e4:94,bridge_name='br-int',has_traffic_filtering=True,id=58ea35ed-ff5d-4827-a801-431f7536d78d,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58ea35ed-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.889 244018 DEBUG nova.objects.instance [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lazy-loading 'pci_devices' on Instance uuid 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.890 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.895 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
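[annotation] In the sync line above the integers are nova power states: DB power_state 0 is NOSTATE (the database row has not caught up yet) while VM power_state 1 is RUNNING, so the two disagree only because the spawn is still in flight. A small decoder for reading these lines, with values as defined in nova.compute.power_state:

    # Integer-to-name map mirroring nova.compute.power_state constants.
    POWER_STATE = {
        0: "NOSTATE",
        1: "RUNNING",
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }
    print(POWER_STATE[0], "->", POWER_STATE[1])  # NOSTATE -> RUNNING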
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.897 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.897 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.897 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.898 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.898 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.898 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.905 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:25:47 np0005629333 nova_compute[244014]:  <uuid>3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2</uuid>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:  <name>instance-00000031</name>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <nova:name>tempest-MultipleCreateTestJSON-server-2038151869-1</nova:name>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:25:46</nova:creationTime>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:25:47 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:        <nova:user uuid="0899f3fdb57d46a395d07753dd261241">tempest-MultipleCreateTestJSON-1900553995-project-member</nova:user>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:        <nova:project uuid="c8744bdbc0f1499388aab5f477246beb">tempest-MultipleCreateTestJSON-1900553995</nova:project>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:        <nova:port uuid="58ea35ed-ff5d-4827-a801-431f7536d78d">
Feb 25 07:25:47 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <entry name="serial">3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2</entry>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <entry name="uuid">3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2</entry>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_disk">
Feb 25 07:25:47 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:25:47 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_disk.config">
Feb 25 07:25:47 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:25:47 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:03:e4:94"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <target dev="tap58ea35ed-ff"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2/console.log" append="off"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:25:47 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:25:47 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:25:47 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:25:47 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
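[annotation] The <domain> document ending above is the guest XML nova generated for instance-00000031. Once the domain is defined, the same XML can be read back from libvirt; a sketch using the libvirt Python bindings, assuming the default qemu:///system URI that nova-compute uses:

    import libvirt

    # Read back the guest XML that _get_guest_xml logged above.
    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2")
    print(dom.XMLDesc(0))  # includes runtime details libvirt fills in
    conn.close()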
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.906 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Preparing to wait for external event network-vif-plugged-58ea35ed-ff5d-4827-a801-431f7536d78d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.906 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.906 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.906 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.906 244018 DEBUG nova.virt.libvirt.vif [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:25:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-2038151869',display_name='tempest-MultipleCreateTestJSON-server-2038151869-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-2038151869-1',id=49,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c8744bdbc0f1499388aab5f477246beb',ramdisk_id='',reservation_id='r-e70320tn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1900553995',owner_user_name='tempest-MultipleCreateTestJSON-1900553995-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:25:39Z,user_data=None,user_id='0899f3fdb57d46a395d07753dd261241',uuid=3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "58ea35ed-ff5d-4827-a801-431f7536d78d", "address": "fa:16:3e:03:e4:94", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58ea35ed-ff", "ovs_interfaceid": "58ea35ed-ff5d-4827-a801-431f7536d78d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.907 244018 DEBUG nova.network.os_vif_util [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converting VIF {"id": "58ea35ed-ff5d-4827-a801-431f7536d78d", "address": "fa:16:3e:03:e4:94", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58ea35ed-ff", "ovs_interfaceid": "58ea35ed-ff5d-4827-a801-431f7536d78d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.907 244018 DEBUG nova.network.os_vif_util [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:e4:94,bridge_name='br-int',has_traffic_filtering=True,id=58ea35ed-ff5d-4827-a801-431f7536d78d,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58ea35ed-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.907 244018 DEBUG os_vif [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:e4:94,bridge_name='br-int',has_traffic_filtering=True,id=58ea35ed-ff5d-4827-a801-431f7536d78d,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58ea35ed-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.908 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.908 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.908 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.910 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.910 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58ea35ed-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.911 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap58ea35ed-ff, col_values=(('external_ids', {'iface-id': '58ea35ed-ff5d-4827-a801-431f7536d78d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:03:e4:94', 'vm-uuid': '3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.912 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:25:47 np0005629333 NetworkManager[49836]: <info>  [1772022347.9129] manager: (tap58ea35ed-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/194)
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.916 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.917 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.918 244018 INFO os_vif [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:e4:94,bridge_name='br-int',has_traffic_filtering=True,id=58ea35ed-ff5d-4827-a801-431f7536d78d,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58ea35ed-ff')
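[annotation] The plug sequence above runs two ovsdbapp transactions: AddBridgeCommand (a no-op here, hence "Transaction caused no change", since br-int already exists) and AddPortCommand plus a DbSetCommand on the new Interface row. Roughly the same effect from the CLI, with the port name, iface-id, MAC and VM UUID copied from the log lines above; a sketch, not how os-vif itself applies the change:

    import subprocess

    # AddBridgeCommand(name=br-int, may_exist=True, datapath_type=system)
    subprocess.check_call(
        ["ovs-vsctl", "--may-exist", "add-br", "br-int",
         "--", "set", "Bridge", "br-int", "datapath_type=system"])
    # AddPortCommand + DbSetCommand(external_ids) on the Interface row.
    subprocess.check_call(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", "tap58ea35ed-ff",
         "--", "set", "Interface", "tap58ea35ed-ff",
         "external_ids:iface-id=58ea35ed-ff5d-4827-a801-431f7536d78d",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:03:e4:94",
         "external_ids:vm-uuid=3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2"])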
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.926 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.926 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022347.8600066, ec873c3c-bf46-4537-8c29-b23a3133d281 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.926 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] VM Paused (Lifecycle Event)
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.959 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.964 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022347.8652508, ec873c3c-bf46-4537-8c29-b23a3133d281 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.964 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] VM Resumed (Lifecycle Event)
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.971 244018 INFO nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Took 9.88 seconds to spawn the instance on the hypervisor.
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.971 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:25:47 np0005629333 nova_compute[244014]: 2026-02-25 12:25:47.995 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:25:48 np0005629333 nova_compute[244014]: 2026-02-25 12:25:48.000 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:25:48 np0005629333 nova_compute[244014]: 2026-02-25 12:25:48.000 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:25:48 np0005629333 nova_compute[244014]: 2026-02-25 12:25:48.001 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] No VIF found with MAC fa:16:3e:03:e4:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 07:25:48 np0005629333 nova_compute[244014]: 2026-02-25 12:25:48.001 244018 INFO nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Using config drive
Feb 25 07:25:48 np0005629333 nova_compute[244014]: 2026-02-25 12:25:48.021 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:25:48 np0005629333 nova_compute[244014]: 2026-02-25 12:25:48.036 244018 INFO nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Took 14.67 seconds to build instance.
Feb 25 07:25:48 np0005629333 nova_compute[244014]: 2026-02-25 12:25:48.039 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:25:48 np0005629333 nova_compute[244014]: 2026-02-25 12:25:48.068 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:25:48 np0005629333 podman[287535]: 2026-02-25 12:25:48.075144502 +0000 UTC m=+0.041066693 container create a3fe61deb5a73b43a53e7110792443cb1eaa6c8d084d0cdf7517beab54d711a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_raman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Feb 25 07:25:48 np0005629333 systemd[1]: Started libpod-conmon-a3fe61deb5a73b43a53e7110792443cb1eaa6c8d084d0cdf7517beab54d711a5.scope.
Feb 25 07:25:48 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:25:48 np0005629333 podman[287535]: 2026-02-25 12:25:48.056430332 +0000 UTC m=+0.022352503 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:25:48 np0005629333 podman[287535]: 2026-02-25 12:25:48.16198422 +0000 UTC m=+0.127906421 container init a3fe61deb5a73b43a53e7110792443cb1eaa6c8d084d0cdf7517beab54d711a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 07:25:48 np0005629333 podman[287535]: 2026-02-25 12:25:48.168159364 +0000 UTC m=+0.134081565 container start a3fe61deb5a73b43a53e7110792443cb1eaa6c8d084d0cdf7517beab54d711a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_raman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 07:25:48 np0005629333 podman[287535]: 2026-02-25 12:25:48.170905662 +0000 UTC m=+0.136827863 container attach a3fe61deb5a73b43a53e7110792443cb1eaa6c8d084d0cdf7517beab54d711a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_raman, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 07:25:48 np0005629333 inspiring_raman[287550]: 167 167
Feb 25 07:25:48 np0005629333 systemd[1]: libpod-a3fe61deb5a73b43a53e7110792443cb1eaa6c8d084d0cdf7517beab54d711a5.scope: Deactivated successfully.
Feb 25 07:25:48 np0005629333 podman[287535]: 2026-02-25 12:25:48.173128885 +0000 UTC m=+0.139051096 container died a3fe61deb5a73b43a53e7110792443cb1eaa6c8d084d0cdf7517beab54d711a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_raman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:25:48 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a6eec19b16acb79f6922783fde96875702d154d7892be7583ba291d2cf132e38-merged.mount: Deactivated successfully.
Feb 25 07:25:48 np0005629333 podman[287535]: 2026-02-25 12:25:48.219645262 +0000 UTC m=+0.185567473 container remove a3fe61deb5a73b43a53e7110792443cb1eaa6c8d084d0cdf7517beab54d711a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_raman, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:25:48 np0005629333 systemd[1]: libpod-conmon-a3fe61deb5a73b43a53e7110792443cb1eaa6c8d084d0cdf7517beab54d711a5.scope: Deactivated successfully.
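[annotation] The create/init/start/attach/died/remove burst above is a short-lived cephadm helper container (inspiring_raman) that exits as soon as it prints its output. The key=value pairs podman shows in those events (CEPH_REF, org.label-schema.*, and so on) appear to be labels inherited from the image; if they were set as labels at build time, the same set can be read back from the image itself (a sketch):

    import json
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    out = subprocess.check_output(["podman", "image", "inspect", IMAGE])
    labels = json.loads(out)[0].get("Labels") or {}
    print(labels.get("CEPH_REF"), labels.get("org.label-schema.build-date"))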
Feb 25 07:25:48 np0005629333 nova_compute[244014]: 2026-02-25 12:25:48.272 244018 DEBUG nova.compute.manager [req-40f69660-4c29-4ef1-bd5b-b3e9f5df5050 req-7b1e29fe-5370-4cdc-ae66-575e16513f88 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Received event network-vif-deleted-07860675-4ac4-43a4-ab6b-bacd17801fc2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:25:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Feb 25 07:25:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Feb 25 07:25:48 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Feb 25 07:25:48 np0005629333 podman[287573]: 2026-02-25 12:25:48.468378901 +0000 UTC m=+0.090165793 container create caedf22d8ac4aeb5bd1e71554e9dfb1a9dc112c98343ac1ac50de4e44f443d47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_black, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 07:25:48 np0005629333 podman[287573]: 2026-02-25 12:25:48.434602505 +0000 UTC m=+0.056389457 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:25:48 np0005629333 systemd[1]: Started libpod-conmon-caedf22d8ac4aeb5bd1e71554e9dfb1a9dc112c98343ac1ac50de4e44f443d47.scope.
Feb 25 07:25:48 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:25:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9114c1557232f43568203555f2085c0728815fca0dadbd9c1410cd4f5228815b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:25:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9114c1557232f43568203555f2085c0728815fca0dadbd9c1410cd4f5228815b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:25:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9114c1557232f43568203555f2085c0728815fca0dadbd9c1410cd4f5228815b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:25:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9114c1557232f43568203555f2085c0728815fca0dadbd9c1410cd4f5228815b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:25:48 np0005629333 podman[287573]: 2026-02-25 12:25:48.59731641 +0000 UTC m=+0.219103342 container init caedf22d8ac4aeb5bd1e71554e9dfb1a9dc112c98343ac1ac50de4e44f443d47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:25:48 np0005629333 podman[287573]: 2026-02-25 12:25:48.605044899 +0000 UTC m=+0.226831791 container start caedf22d8ac4aeb5bd1e71554e9dfb1a9dc112c98343ac1ac50de4e44f443d47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_black, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:25:48 np0005629333 podman[287573]: 2026-02-25 12:25:48.608884498 +0000 UTC m=+0.230671450 container attach caedf22d8ac4aeb5bd1e71554e9dfb1a9dc112c98343ac1ac50de4e44f443d47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_black, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:25:48 np0005629333 nova_compute[244014]: 2026-02-25 12:25:48.677 244018 INFO nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Creating config drive at /var/lib/nova/instances/3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2/disk.config#033[00m
Feb 25 07:25:48 np0005629333 nova_compute[244014]: 2026-02-25 12:25:48.684 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpumloveo3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:25:48 np0005629333 nova_compute[244014]: 2026-02-25 12:25:48.743 244018 INFO nova.virt.libvirt.driver [-] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Instance destroyed successfully.#033[00m
Feb 25 07:25:48 np0005629333 nova_compute[244014]: 2026-02-25 12:25:48.745 244018 DEBUG nova.objects.instance [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'resources' on Instance uuid b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:25:48 np0005629333 nova_compute[244014]: 2026-02-25 12:25:48.778 244018 DEBUG nova.virt.libvirt.vif [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:24:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-818536104',display_name='tempest-DeleteServersTestJSON-server-818536104',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-818536104',id=44,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:25:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='daab2f813dbd467685c22833bf875ec9',ramdisk_id='',reservation_id='r-f8ey9dlk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-285157757',owner_user_name='tempest-DeleteServersTestJSON-285157757-project-member',shelved_at='2026-02-25T12:25:45.200330',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='ebcf3504-22bf-4cd8-b1d7-d238a69aa7a5'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:25:33Z,user_data=None,user_id='9c44c1a95c8d4d97bc1d7dde284dcf1b',uuid=b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5", "address": "fa:16:3e:fc:58:40", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcfcdd66-6c", "ovs_interfaceid": "fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:25:48 np0005629333 nova_compute[244014]: 2026-02-25 12:25:48.779 244018 DEBUG nova.network.os_vif_util [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converting VIF {"id": "fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5", "address": "fa:16:3e:fc:58:40", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfcfcdd66-6c", "ovs_interfaceid": "fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:25:48 np0005629333 nova_compute[244014]: 2026-02-25 12:25:48.781 244018 DEBUG nova.network.os_vif_util [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fc:58:40,bridge_name='br-int',has_traffic_filtering=True,id=fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcfcdd66-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:25:48 np0005629333 nova_compute[244014]: 2026-02-25 12:25:48.782 244018 DEBUG os_vif [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:58:40,bridge_name='br-int',has_traffic_filtering=True,id=fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcfcdd66-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:25:48 np0005629333 nova_compute[244014]: 2026-02-25 12:25:48.785 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:48 np0005629333 nova_compute[244014]: 2026-02-25 12:25:48.786 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfcfcdd66-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:48 np0005629333 nova_compute[244014]: 2026-02-25 12:25:48.788 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:48 np0005629333 nova_compute[244014]: 2026-02-25 12:25:48.791 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:25:48 np0005629333 nova_compute[244014]: 2026-02-25 12:25:48.793 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:48 np0005629333 nova_compute[244014]: 2026-02-25 12:25:48.796 244018 INFO os_vif [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fc:58:40,bridge_name='br-int',has_traffic_filtering=True,id=fcfcdd66-6c38-4a6a-93a9-15a5ce842fd5,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfcfcdd66-6c')#033[00m
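[annotation] The unplug sequence from 12:25:48.782 through 12:25:48.796 reduces to a single OVSDB transaction: DelPortCommand removes tapfcfcdd66-6c from br-int. A rough shell-level analogue via ovs-vsctl follows; os-vif actually drives this through ovsdbapp's Python OVSDB IDL, not ovs-vsctl, so this sketch shows the equivalent effect, not the mechanism used here:

    import subprocess

    # Equivalent effect of DelPortCommand(port=tapfcfcdd66-6c, bridge=br-int,
    # if_exists=True): drop the instance's tap port from the integration bridge.
    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tapfcfcdd66-6c"],
        check=True,
    )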
Feb 25 07:25:48 np0005629333 nova_compute[244014]: 2026-02-25 12:25:48.836 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpumloveo3" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:25:48 np0005629333 nova_compute[244014]: 2026-02-25 12:25:48.869 244018 DEBUG nova.storage.rbd_utils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] rbd image 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:25:48 np0005629333 nova_compute[244014]: 2026-02-25 12:25:48.874 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2/disk.config 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
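[annotation] The two subprocess invocations above are nova's config-drive flow end to end: mkisofs packs the instance metadata into an ISO, then rbd import moves it into the Ceph "vms" pool so the local file can be deleted (logged at 12:25:49.052 below). A minimal Python sketch of the same pair of calls, using the arguments exactly as logged; the temp directory /tmp/tmpumloveo3 is copied from the log, and this mirrors what oslo_concurrency.processutils executes rather than nova's real code path:

    import subprocess

    instance = "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2"
    iso = f"/var/lib/nova/instances/{instance}/disk.config"

    # Build the config-drive ISO (volume label "config-2", per the log line).
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
         "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmpumloveo3"],
        check=True,
    )

    # Import it into RBD under the name nova checked for at 12:25:48.869.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{instance}_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )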
Feb 25 07:25:48 np0005629333 cranky_black[287590]: {
Feb 25 07:25:48 np0005629333 cranky_black[287590]:    "0": [
Feb 25 07:25:48 np0005629333 cranky_black[287590]:        {
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "devices": [
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "/dev/loop3"
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            ],
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "lv_name": "ceph_lv0",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "lv_size": "21470642176",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "name": "ceph_lv0",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "tags": {
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.cluster_name": "ceph",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.crush_device_class": "",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.encrypted": "0",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.objectstore": "bluestore",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.osd_id": "0",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.type": "block",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.vdo": "0",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.with_tpm": "0"
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            },
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "type": "block",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "vg_name": "ceph_vg0"
Feb 25 07:25:48 np0005629333 cranky_black[287590]:        }
Feb 25 07:25:48 np0005629333 cranky_black[287590]:    ],
Feb 25 07:25:48 np0005629333 cranky_black[287590]:    "1": [
Feb 25 07:25:48 np0005629333 cranky_black[287590]:        {
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "devices": [
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "/dev/loop4"
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            ],
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "lv_name": "ceph_lv1",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "lv_size": "21470642176",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "name": "ceph_lv1",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "tags": {
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.cluster_name": "ceph",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.crush_device_class": "",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.encrypted": "0",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.objectstore": "bluestore",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.osd_id": "1",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.type": "block",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.vdo": "0",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.with_tpm": "0"
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            },
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "type": "block",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "vg_name": "ceph_vg1"
Feb 25 07:25:48 np0005629333 cranky_black[287590]:        }
Feb 25 07:25:48 np0005629333 cranky_black[287590]:    ],
Feb 25 07:25:48 np0005629333 cranky_black[287590]:    "2": [
Feb 25 07:25:48 np0005629333 cranky_black[287590]:        {
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "devices": [
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "/dev/loop5"
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            ],
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "lv_name": "ceph_lv2",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "lv_size": "21470642176",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "name": "ceph_lv2",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "tags": {
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.cluster_name": "ceph",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.crush_device_class": "",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.encrypted": "0",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.objectstore": "bluestore",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.osd_id": "2",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.type": "block",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.vdo": "0",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:                "ceph.with_tpm": "0"
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            },
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "type": "block",
Feb 25 07:25:48 np0005629333 cranky_black[287590]:            "vg_name": "ceph_vg2"
Feb 25 07:25:48 np0005629333 cranky_black[287590]:        }
Feb 25 07:25:48 np0005629333 cranky_black[287590]:    ]
Feb 25 07:25:48 np0005629333 cranky_black[287590]: }
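[annotation] The JSON emitted by the cranky_black container is keyed by OSD id ("0", "1", "2"), each mapping to the logical volume backing that OSD, with the ceph.* LV tags present both as a flat lv_tags string and as a parsed tags object. This is the shape of ceph-volume lvm list --format json output; the container's entrypoint is not visible in the log, so that command is an assumption. A sketch reducing the report to osd id -> (backing device, OSD fsid):

    import json

    def osd_devices(report: str) -> dict[str, tuple[str, str]]:
        """Map OSD id -> (first backing device, ceph.osd_fsid) from the report."""
        out = {}
        for osd_id, lvs in json.loads(report).items():
            for lv in lvs:
                out[osd_id] = (lv["devices"][0], lv["tags"]["ceph.osd_fsid"])
        return out

    # For the report above this yields:
    # {"0": ("/dev/loop3", "d19afe3c-7923-4776-bcc2-88886150b441"),
    #  "1": ("/dev/loop4", "a25b4fc6-1504-44d3-aca7-62c5ef316350"),
    #  "2": ("/dev/loop5", "f84d59d3-cae3-44c8-8bca-9fa4643cfc60")}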
Feb 25 07:25:48 np0005629333 systemd[1]: libpod-caedf22d8ac4aeb5bd1e71554e9dfb1a9dc112c98343ac1ac50de4e44f443d47.scope: Deactivated successfully.
Feb 25 07:25:48 np0005629333 podman[287573]: 2026-02-25 12:25:48.905348378 +0000 UTC m=+0.527135230 container died caedf22d8ac4aeb5bd1e71554e9dfb1a9dc112c98343ac1ac50de4e44f443d47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 07:25:48 np0005629333 systemd[1]: var-lib-containers-storage-overlay-9114c1557232f43568203555f2085c0728815fca0dadbd9c1410cd4f5228815b-merged.mount: Deactivated successfully.
Feb 25 07:25:48 np0005629333 podman[287573]: 2026-02-25 12:25:48.942259363 +0000 UTC m=+0.564046215 container remove caedf22d8ac4aeb5bd1e71554e9dfb1a9dc112c98343ac1ac50de4e44f443d47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_black, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 07:25:48 np0005629333 systemd[1]: libpod-conmon-caedf22d8ac4aeb5bd1e71554e9dfb1a9dc112c98343ac1ac50de4e44f443d47.scope: Deactivated successfully.
Feb 25 07:25:49 np0005629333 nova_compute[244014]: 2026-02-25 12:25:49.050 244018 DEBUG oslo_concurrency.processutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2/disk.config 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:25:49 np0005629333 nova_compute[244014]: 2026-02-25 12:25:49.052 244018 INFO nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Deleting local config drive /var/lib/nova/instances/3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2/disk.config because it was imported into RBD.#033[00m
Feb 25 07:25:49 np0005629333 kernel: tap58ea35ed-ff: entered promiscuous mode
Feb 25 07:25:49 np0005629333 NetworkManager[49836]: <info>  [1772022349.0992] manager: (tap58ea35ed-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/195)
Feb 25 07:25:49 np0005629333 systemd-udevd[287237]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:25:49 np0005629333 nova_compute[244014]: 2026-02-25 12:25:49.100 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:49 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:49Z|00422|binding|INFO|Claiming lport 58ea35ed-ff5d-4827-a801-431f7536d78d for this chassis.
Feb 25 07:25:49 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:49Z|00423|binding|INFO|58ea35ed-ff5d-4827-a801-431f7536d78d: Claiming fa:16:3e:03:e4:94 10.100.0.7
Feb 25 07:25:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:49.107 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:e4:94 10.100.0.7'], port_security=['fa:16:3e:03:e4:94 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c97c5b11-7517-46fe-a6ca-63894792908c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8744bdbc0f1499388aab5f477246beb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9622f97c-a9c4-423f-b49e-154152bd6881', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b7ae960-1b2b-4f15-b35e-8e889d9ccce8, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=58ea35ed-ff5d-4827-a801-431f7536d78d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:25:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:49.108 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 58ea35ed-ff5d-4827-a801-431f7536d78d in datapath c97c5b11-7517-46fe-a6ca-63894792908c bound to our chassis#033[00m
Feb 25 07:25:49 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:49Z|00424|binding|INFO|Setting lport 58ea35ed-ff5d-4827-a801-431f7536d78d ovn-installed in OVS
Feb 25 07:25:49 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:49Z|00425|binding|INFO|Setting lport 58ea35ed-ff5d-4827-a801-431f7536d78d up in Southbound
Feb 25 07:25:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:49.111 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c97c5b11-7517-46fe-a6ca-63894792908c#033[00m
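[annotation] The ovn_controller messages 00422 through 00425 are the normal OVN port-binding handshake: the chassis claims the logical port, installs it in OVS, then flips the Southbound Port_Binding row up, which is what the metadata agent matched on above. A hedged way to inspect the same row from the command line, assuming the standard OVN client tools are available on this node (the log itself never runs this):

    import subprocess

    # Look up the Southbound Port_Binding row ovn-controller just claimed.
    # Column names (logical_port, chassis, up) follow the OVN SB schema.
    subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding",
         "logical_port=58ea35ed-ff5d-4827-a801-431f7536d78d"],
        check=True,
    )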
Feb 25 07:25:49 np0005629333 nova_compute[244014]: 2026-02-25 12:25:49.111 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:49 np0005629333 NetworkManager[49836]: <info>  [1772022349.1119] device (tap58ea35ed-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:25:49 np0005629333 NetworkManager[49836]: <info>  [1772022349.1124] device (tap58ea35ed-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:25:49 np0005629333 nova_compute[244014]: 2026-02-25 12:25:49.114 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:49.122 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d70379b9-8c1e-43b7-b531-303b236143fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:49 np0005629333 systemd-machined[210048]: New machine qemu-55-instance-00000031.
Feb 25 07:25:49 np0005629333 systemd[1]: Started Virtual Machine qemu-55-instance-00000031.
Feb 25 07:25:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:49.141 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f17c7144-72ea-4865-9868-035ee58b2904]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:49.144 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[360e8de9-9b2d-4e08-ba99-a0dd0591c088]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:49 np0005629333 nova_compute[244014]: 2026-02-25 12:25:49.145 244018 INFO nova.virt.libvirt.driver [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Deleting instance files /var/lib/nova/instances/b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6_del#033[00m
Feb 25 07:25:49 np0005629333 nova_compute[244014]: 2026-02-25 12:25:49.146 244018 INFO nova.virt.libvirt.driver [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6] Deletion of /var/lib/nova/instances/b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6_del complete#033[00m
Feb 25 07:25:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:49.165 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[05b2e913-41ff-44be-b368-48452e79384c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:49.178 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2c2305c3-1ac7-4974-b37d-458243c9bd4c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc97c5b11-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:61:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431652, 'reachable_time': 20919, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287743, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:49.189 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c74cd329-3bd2-4726-a12d-04df196918b2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc97c5b11-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 431658, 'tstamp': 431658}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287744, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc97c5b11-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 431660, 'tstamp': 431660}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287744, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:49.191 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc97c5b11-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:49 np0005629333 nova_compute[244014]: 2026-02-25 12:25:49.192 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:49.193 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc97c5b11-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:49.193 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:25:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:49.194 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc97c5b11-70, col_values=(('external_ids', {'iface-id': 'db412aa7-4ad4-4eb8-b61f-dd3e71d5329d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:49.194 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:25:49 np0005629333 nova_compute[244014]: 2026-02-25 12:25:49.263 244018 INFO nova.scheduler.client.report [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Deleted allocations for instance b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6#033[00m
Feb 25 07:25:49 np0005629333 nova_compute[244014]: 2026-02-25 12:25:49.331 244018 DEBUG oslo_concurrency.lockutils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:49 np0005629333 nova_compute[244014]: 2026-02-25 12:25:49.332 244018 DEBUG oslo_concurrency.lockutils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:49 np0005629333 podman[287773]: 2026-02-25 12:25:49.424140071 +0000 UTC m=+0.055215004 container create 6ad177bc6504e87c384c2c11a55cbf86a672197017855402bc5ab8d4976cccaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:25:49 np0005629333 systemd[1]: Started libpod-conmon-6ad177bc6504e87c384c2c11a55cbf86a672197017855402bc5ab8d4976cccaa.scope.
Feb 25 07:25:49 np0005629333 nova_compute[244014]: 2026-02-25 12:25:49.471 244018 DEBUG oslo_concurrency.processutils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:25:49 np0005629333 podman[287773]: 2026-02-25 12:25:49.396349654 +0000 UTC m=+0.027424627 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:25:49 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:25:49 np0005629333 nova_compute[244014]: 2026-02-25 12:25:49.512 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022349.511386, 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:25:49 np0005629333 nova_compute[244014]: 2026-02-25 12:25:49.513 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] VM Started (Lifecycle Event)#033[00m
Feb 25 07:25:49 np0005629333 podman[287773]: 2026-02-25 12:25:49.523483512 +0000 UTC m=+0.154558445 container init 6ad177bc6504e87c384c2c11a55cbf86a672197017855402bc5ab8d4976cccaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jepsen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:25:49 np0005629333 podman[287773]: 2026-02-25 12:25:49.528784342 +0000 UTC m=+0.159859275 container start 6ad177bc6504e87c384c2c11a55cbf86a672197017855402bc5ab8d4976cccaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jepsen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 07:25:49 np0005629333 podman[287773]: 2026-02-25 12:25:49.532165938 +0000 UTC m=+0.163240861 container attach 6ad177bc6504e87c384c2c11a55cbf86a672197017855402bc5ab8d4976cccaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jepsen, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3)
Feb 25 07:25:49 np0005629333 compassionate_jepsen[287813]: 167 167
Feb 25 07:25:49 np0005629333 systemd[1]: libpod-6ad177bc6504e87c384c2c11a55cbf86a672197017855402bc5ab8d4976cccaa.scope: Deactivated successfully.
Feb 25 07:25:49 np0005629333 podman[287773]: 2026-02-25 12:25:49.535014238 +0000 UTC m=+0.166089161 container died 6ad177bc6504e87c384c2c11a55cbf86a672197017855402bc5ab8d4976cccaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jepsen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:25:49 np0005629333 nova_compute[244014]: 2026-02-25 12:25:49.539 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:25:49 np0005629333 nova_compute[244014]: 2026-02-25 12:25:49.546 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022349.511532, 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:25:49 np0005629333 nova_compute[244014]: 2026-02-25 12:25:49.546 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:25:49 np0005629333 systemd[1]: var-lib-containers-storage-overlay-b1532f849c4a6033fcc8126a72acc55f4f9f8ee435d311c0351bee119183f7fe-merged.mount: Deactivated successfully.
Feb 25 07:25:49 np0005629333 nova_compute[244014]: 2026-02-25 12:25:49.571 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:25:49 np0005629333 nova_compute[244014]: 2026-02-25 12:25:49.577 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:25:49 np0005629333 podman[287773]: 2026-02-25 12:25:49.58100902 +0000 UTC m=+0.212083943 container remove 6ad177bc6504e87c384c2c11a55cbf86a672197017855402bc5ab8d4976cccaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 07:25:49 np0005629333 systemd[1]: libpod-conmon-6ad177bc6504e87c384c2c11a55cbf86a672197017855402bc5ab8d4976cccaa.scope: Deactivated successfully.
Feb 25 07:25:49 np0005629333 nova_compute[244014]: 2026-02-25 12:25:49.611 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
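[annotation] The sync at 12:25:49.577 compares the DB power_state (0) against what libvirt reports (3) and then skips because the instance still has a spawning task. The numeric codes follow nova's power_state module (the shelved instance dump above likewise shows power_state=4 for a shut-down domain); a small lookup sketch, with the constants copied by hand here rather than imported from nova:

    # Numeric power states as used in the log lines above.
    POWER_STATE = {
        0: "NOSTATE",
        1: "RUNNING",
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }

    assert POWER_STATE[3] == "PAUSED"   # VM power_state in the sync message
    assert POWER_STATE[0] == "NOSTATE"  # DB power_state before spawn finishes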
Feb 25 07:25:49 np0005629333 nova_compute[244014]: 2026-02-25 12:25:49.636 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:49 np0005629333 podman[287858]: 2026-02-25 12:25:49.711962606 +0000 UTC m=+0.034752804 container create 30c57b1d20928f4e053210efab33801415f2008fe9e13b10b256285583d5b8f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_lamarr, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 07:25:49 np0005629333 systemd[1]: Started libpod-conmon-30c57b1d20928f4e053210efab33801415f2008fe9e13b10b256285583d5b8f1.scope.
Feb 25 07:25:49 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:25:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62dd4f6e36b767a08c0291371882b67c87f4011d23494fcc796781fde240aa06/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:25:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62dd4f6e36b767a08c0291371882b67c87f4011d23494fcc796781fde240aa06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:25:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62dd4f6e36b767a08c0291371882b67c87f4011d23494fcc796781fde240aa06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:25:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62dd4f6e36b767a08c0291371882b67c87f4011d23494fcc796781fde240aa06/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:25:49 np0005629333 podman[287858]: 2026-02-25 12:25:49.789726066 +0000 UTC m=+0.112516294 container init 30c57b1d20928f4e053210efab33801415f2008fe9e13b10b256285583d5b8f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_lamarr, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:25:49 np0005629333 podman[287858]: 2026-02-25 12:25:49.697321642 +0000 UTC m=+0.020111860 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:25:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 505 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 9.3 MiB/s wr, 258 op/s
Feb 25 07:25:49 np0005629333 podman[287858]: 2026-02-25 12:25:49.797725923 +0000 UTC m=+0.120516151 container start 30c57b1d20928f4e053210efab33801415f2008fe9e13b10b256285583d5b8f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 07:25:49 np0005629333 podman[287858]: 2026-02-25 12:25:49.801571361 +0000 UTC m=+0.124361589 container attach 30c57b1d20928f4e053210efab33801415f2008fe9e13b10b256285583d5b8f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 07:25:49 np0005629333 nova_compute[244014]: 2026-02-25 12:25:49.866 244018 DEBUG nova.compute.manager [req-1445bab1-c1e7-4c4e-8e79-6129eb3476f1 req-2fe47710-d740-4cc7-b535-2586d524fb40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Received event network-vif-plugged-0b2cc5f4-bf41-4e03-8c24-1e711f742942 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:25:49 np0005629333 nova_compute[244014]: 2026-02-25 12:25:49.867 244018 DEBUG oslo_concurrency.lockutils [req-1445bab1-c1e7-4c4e-8e79-6129eb3476f1 req-2fe47710-d740-4cc7-b535-2586d524fb40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:49 np0005629333 nova_compute[244014]: 2026-02-25 12:25:49.868 244018 DEBUG oslo_concurrency.lockutils [req-1445bab1-c1e7-4c4e-8e79-6129eb3476f1 req-2fe47710-d740-4cc7-b535-2586d524fb40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:49 np0005629333 nova_compute[244014]: 2026-02-25 12:25:49.868 244018 DEBUG oslo_concurrency.lockutils [req-1445bab1-c1e7-4c4e-8e79-6129eb3476f1 req-2fe47710-d740-4cc7-b535-2586d524fb40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:49 np0005629333 nova_compute[244014]: 2026-02-25 12:25:49.868 244018 DEBUG nova.compute.manager [req-1445bab1-c1e7-4c4e-8e79-6129eb3476f1 req-2fe47710-d740-4cc7-b535-2586d524fb40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] No waiting events found dispatching network-vif-plugged-0b2cc5f4-bf41-4e03-8c24-1e711f742942 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:25:49 np0005629333 nova_compute[244014]: 2026-02-25 12:25:49.869 244018 WARNING nova.compute.manager [req-1445bab1-c1e7-4c4e-8e79-6129eb3476f1 req-2fe47710-d740-4cc7-b535-2586d524fb40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Received unexpected event network-vif-plugged-0b2cc5f4-bf41-4e03-8c24-1e711f742942 for instance with vm_state active and task_state None.#033[00m
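The six nova_compute records above show the external-event path serializing on a per-instance "<uuid>-events" lock before popping a waiter, then logging how long the lock was waited on and held. A minimal sketch of that locking discipline using only the standard library (an illustration of the pattern the log shows, not nova's actual code):

    import threading
    import time

    _locks = {}                # lock name -> threading.Lock
    _guard = threading.Lock()  # protects the _locks registry itself

    class synchronized:
        """Context manager that logs waited/held times like the lines above."""
        def __init__(self, name):
            self.name = name
            with _guard:
                self.lock = _locks.setdefault(name, threading.Lock())
        def __enter__(self):
            t0 = time.monotonic()
            self.lock.acquire()
            self.acquired = time.monotonic()
            print(f'Lock "{self.name}" acquired :: waited {self.acquired - t0:.3f}s')
        def __exit__(self, *exc):
            self.lock.release()
            print(f'Lock "{self.name}" released :: held {time.monotonic() - self.acquired:.3f}s')

    with synchronized("ec873c3c-bf46-4537-8c29-b23a3133d281-events"):
        pass  # the pop_instance_event work would happen here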
Feb 25 07:25:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:25:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:25:50 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1124586646' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.073 244018 DEBUG oslo_concurrency.processutils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
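The audit dispatch and the processutils return above are the two ends of one probe: the compute agent shells out to ceph df and the monitor logs the dispatched mon_command. A minimal sketch of the same probe, assuming the client.openstack keyring and /etc/ceph/ceph.conf are in place; the "stats" keys reflect the usual ceph df JSON layout and are an assumption here:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]  # assumed top-level key of ceph df JSON
    print(stats["total_bytes"], stats["total_avail_bytes"])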
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.079 244018 DEBUG nova.compute.provider_tree [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.099 244018 DEBUG nova.scheduler.client.report [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
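The inventory dict above fully determines what placement will schedule: capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the values logged:

    # Values copied from the inventory line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2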
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.125 244018 INFO nova.virt.libvirt.driver [None req-daed9e93-fa86-46b3-9114-c10c3375fd64 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Snapshot image upload complete#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.126 244018 INFO nova.compute.manager [None req-daed9e93-fa86-46b3-9114-c10c3375fd64 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Took 6.29 seconds to snapshot the instance on the hypervisor.#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.130 244018 DEBUG oslo_concurrency.lockutils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.211 244018 DEBUG oslo_concurrency.lockutils [None req-6e026918-df92-4730-ac11-bc33c22bb056 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "b5cb3a5a-f44d-43a7-b6c0-e9c5c9a418d6" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 41.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.366 244018 DEBUG nova.compute.manager [req-2284421b-02a0-4202-b210-c5ac847e792f req-f58345c8-9243-433f-a9d8-99b5e6c7f69f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Received event network-vif-plugged-58ea35ed-ff5d-4827-a801-431f7536d78d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.366 244018 DEBUG oslo_concurrency.lockutils [req-2284421b-02a0-4202-b210-c5ac847e792f req-f58345c8-9243-433f-a9d8-99b5e6c7f69f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.367 244018 DEBUG oslo_concurrency.lockutils [req-2284421b-02a0-4202-b210-c5ac847e792f req-f58345c8-9243-433f-a9d8-99b5e6c7f69f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.367 244018 DEBUG oslo_concurrency.lockutils [req-2284421b-02a0-4202-b210-c5ac847e792f req-f58345c8-9243-433f-a9d8-99b5e6c7f69f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.367 244018 DEBUG nova.compute.manager [req-2284421b-02a0-4202-b210-c5ac847e792f req-f58345c8-9243-433f-a9d8-99b5e6c7f69f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Processing event network-vif-plugged-58ea35ed-ff5d-4827-a801-431f7536d78d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.367 244018 DEBUG nova.compute.manager [req-2284421b-02a0-4202-b210-c5ac847e792f req-f58345c8-9243-433f-a9d8-99b5e6c7f69f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Received event network-vif-plugged-58ea35ed-ff5d-4827-a801-431f7536d78d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.367 244018 DEBUG oslo_concurrency.lockutils [req-2284421b-02a0-4202-b210-c5ac847e792f req-f58345c8-9243-433f-a9d8-99b5e6c7f69f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.367 244018 DEBUG oslo_concurrency.lockutils [req-2284421b-02a0-4202-b210-c5ac847e792f req-f58345c8-9243-433f-a9d8-99b5e6c7f69f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.367 244018 DEBUG oslo_concurrency.lockutils [req-2284421b-02a0-4202-b210-c5ac847e792f req-f58345c8-9243-433f-a9d8-99b5e6c7f69f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.368 244018 DEBUG nova.compute.manager [req-2284421b-02a0-4202-b210-c5ac847e792f req-f58345c8-9243-433f-a9d8-99b5e6c7f69f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] No waiting events found dispatching network-vif-plugged-58ea35ed-ff5d-4827-a801-431f7536d78d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.368 244018 WARNING nova.compute.manager [req-2284421b-02a0-4202-b210-c5ac847e792f req-f58345c8-9243-433f-a9d8-99b5e6c7f69f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Received unexpected event network-vif-plugged-58ea35ed-ff5d-4827-a801-431f7536d78d for instance with vm_state building and task_state spawning.#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.370 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.374 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022350.373836, 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.374 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.375 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.379 244018 INFO nova.virt.libvirt.driver [-] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Instance spawned successfully.#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.380 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.397 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.401 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
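The sync record above compares the database's view (power_state 0) with what the hypervisor reports (power_state 1). A sketch of that comparison; the numeric meanings follow nova.compute.power_state as I understand it (0 = NOSTATE, 1 = RUNNING), which is an assumption about that module:

    NOSTATE, RUNNING = 0, 1  # assumed nova.compute.power_state values

    db_power_state, vm_power_state = NOSTATE, RUNNING  # from the log line above
    if db_power_state != vm_power_state:
        # nova would reconcile here, but defers while a task such as
        # 'spawning' is pending -- exactly what the later "Skip" line shows.
        print("states diverge; defer to the pending task")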
Feb 25 07:25:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.404 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.404 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.404 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.405 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.405 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.405 244018 DEBUG nova.virt.libvirt.driver [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
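Collected from the six "Found default for ..." records above, these defaults become image properties on the instance; the same values reappear later in this log in the instance's system_metadata dump (image_hw_cdrom_bus='sata' and so on):

    # The defaults registered above, as the image-property dict nova persists.
    defaults = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }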
Feb 25 07:25:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Feb 25 07:25:50 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.433 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.469 244018 INFO nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Took 10.47 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.470 244018 DEBUG nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.517 244018 DEBUG nova.compute.manager [None req-daed9e93-fa86-46b3-9114-c10c3375fd64 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.530 244018 INFO nova.compute.manager [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Took 17.17 seconds to build instance.#033[00m
Feb 25 07:25:50 np0005629333 nova_compute[244014]: 2026-02-25 12:25:50.555 244018 DEBUG oslo_concurrency.lockutils [None req-ebec8559-c7f2-4351-890a-2736831cdd86 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:50 np0005629333 lvm[287951]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:25:50 np0005629333 lvm[287951]: VG ceph_vg0 finished
Feb 25 07:25:50 np0005629333 lvm[287953]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:25:50 np0005629333 lvm[287953]: VG ceph_vg1 finished
Feb 25 07:25:50 np0005629333 lvm[287955]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:25:50 np0005629333 lvm[287955]: VG ceph_vg2 finished
Feb 25 07:25:50 np0005629333 lvm[287957]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:25:50 np0005629333 lvm[287957]: VG ceph_vg2 finished
Feb 25 07:25:50 np0005629333 quizzical_lamarr[287872]: {}
Feb 25 07:25:50 np0005629333 lvm[287959]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:25:50 np0005629333 lvm[287959]: VG ceph_vg2 finished
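The lvm records above are event-driven pvscan autoactivation: each loop device coming online completes its VG (loop3 -> ceph_vg0, loop4 -> ceph_vg1, loop5 -> ceph_vg2, the last reported by several udev workers). A quick way to confirm that layout afterwards, assuming the LVM tools and root:

    import subprocess

    # One "PV VG" pair per line, e.g. "/dev/loop3 ceph_vg0".
    print(subprocess.run(
        ["pvs", "--noheadings", "-o", "pv_name,vg_name"],
        check=True, capture_output=True, text=True,
    ).stdout)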
Feb 25 07:25:50 np0005629333 systemd[1]: libpod-30c57b1d20928f4e053210efab33801415f2008fe9e13b10b256285583d5b8f1.scope: Deactivated successfully.
Feb 25 07:25:50 np0005629333 podman[287858]: 2026-02-25 12:25:50.741021799 +0000 UTC m=+1.063811997 container died 30c57b1d20928f4e053210efab33801415f2008fe9e13b10b256285583d5b8f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_lamarr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Feb 25 07:25:50 np0005629333 systemd[1]: libpod-30c57b1d20928f4e053210efab33801415f2008fe9e13b10b256285583d5b8f1.scope: Consumed 1.190s CPU time.
Feb 25 07:25:50 np0005629333 systemd[1]: var-lib-containers-storage-overlay-62dd4f6e36b767a08c0291371882b67c87f4011d23494fcc796781fde240aa06-merged.mount: Deactivated successfully.
Feb 25 07:25:50 np0005629333 podman[287858]: 2026-02-25 12:25:50.779770476 +0000 UTC m=+1.102560674 container remove 30c57b1d20928f4e053210efab33801415f2008fe9e13b10b256285583d5b8f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_lamarr, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:25:50 np0005629333 systemd[1]: libpod-conmon-30c57b1d20928f4e053210efab33801415f2008fe9e13b10b256285583d5b8f1.scope: Deactivated successfully.
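Taken together, the podman records around this point (init, start, attach, died, remove, plus the two libpod scopes) are the journal signature of a single short-lived container run. A sketch that would produce the same event sequence against the image digest from the log; "true" stands in for the real payload, which the log does not show, and --rm (the usual trigger for the remove event) is an assumption:

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    # Emits init/start/attach/died/remove container events in the journal.
    subprocess.run(["podman", "run", "--rm", image, "true"], check=True)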
Feb 25 07:25:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:25:50 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:25:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:25:50 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:25:51 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:25:51 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:25:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 456 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 15 MiB/s rd, 12 MiB/s wr, 552 op/s
Feb 25 07:25:53 np0005629333 nova_compute[244014]: 2026-02-25 12:25:53.789 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 405 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 5.0 MiB/s wr, 480 op/s
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.159 244018 DEBUG nova.compute.manager [None req-2cd5f0bd-19e7-47f7-97a7-ab5d46f4435b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.177 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "a826c2fd-1af8-4b55-b801-90ce87d04466" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.178 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "a826c2fd-1af8-4b55-b801-90ce87d04466" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.228 244018 INFO nova.compute.manager [None req-2cd5f0bd-19e7-47f7-97a7-ab5d46f4435b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] instance snapshotting#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.229 244018 DEBUG nova.objects.instance [None req-2cd5f0bd-19e7-47f7-97a7-ab5d46f4435b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'flavor' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.232 244018 DEBUG nova.compute.manager [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.314 244018 DEBUG oslo_concurrency.lockutils [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.315 244018 DEBUG oslo_concurrency.lockutils [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.316 244018 DEBUG oslo_concurrency.lockutils [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.316 244018 DEBUG oslo_concurrency.lockutils [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.317 244018 DEBUG oslo_concurrency.lockutils [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.318 244018 INFO nova.compute.manager [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Terminating instance#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.320 244018 DEBUG nova.compute.manager [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.336 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.336 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.346 244018 DEBUG nova.virt.hardware [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.347 244018 INFO nova.compute.claims [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:25:54 np0005629333 kernel: tap58ea35ed-ff (unregistering): left promiscuous mode
Feb 25 07:25:54 np0005629333 NetworkManager[49836]: <info>  [1772022354.3583] device (tap58ea35ed-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.363 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:54Z|00426|binding|INFO|Releasing lport 58ea35ed-ff5d-4827-a801-431f7536d78d from this chassis (sb_readonly=0)
Feb 25 07:25:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:54Z|00427|binding|INFO|Setting lport 58ea35ed-ff5d-4827-a801-431f7536d78d down in Southbound
Feb 25 07:25:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:54Z|00428|binding|INFO|Removing iface tap58ea35ed-ff ovn-installed in OVS
Feb 25 07:25:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.372 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:e4:94 10.100.0.7'], port_security=['fa:16:3e:03:e4:94 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c97c5b11-7517-46fe-a6ca-63894792908c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8744bdbc0f1499388aab5f477246beb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9622f97c-a9c4-423f-b49e-154152bd6881', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b7ae960-1b2b-4f15-b35e-8e889d9ccce8, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=58ea35ed-ff5d-4827-a801-431f7536d78d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:25:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.374 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 58ea35ed-ff5d-4827-a801-431f7536d78d in datapath c97c5b11-7517-46fe-a6ca-63894792908c unbound from our chassis#033[00m
Feb 25 07:25:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.376 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c97c5b11-7517-46fe-a6ca-63894792908c#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.379 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.398 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a23d6e9a-fe6c-4a74-9343-c07699178d08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:54 np0005629333 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000031.scope: Deactivated successfully.
Feb 25 07:25:54 np0005629333 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000031.scope: Consumed 4.384s CPU time.
Feb 25 07:25:54 np0005629333 systemd-machined[210048]: Machine qemu-55-instance-00000031 terminated.
Feb 25 07:25:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.420 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[aafc41a7-1ff5-4222-8698-3e9b134a29d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.426 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[448ab6a9-a54d-48ad-a957-ffb451b0c52a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.447 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[134d773f-58bf-4b58-9b5e-409a1e3384e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.466 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[789257e8-946f-41f2-8e5e-72149cfdff73]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc97c5b11-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:61:0a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431652, 'reachable_time': 20919, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288010, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.474 244018 DEBUG oslo_concurrency.lockutils [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "ec873c3c-bf46-4537-8c29-b23a3133d281" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.474 244018 DEBUG oslo_concurrency.lockutils [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.475 244018 DEBUG oslo_concurrency.lockutils [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.475 244018 DEBUG oslo_concurrency.lockutils [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.476 244018 DEBUG oslo_concurrency.lockutils [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.477 244018 INFO nova.compute.manager [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Terminating instance#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.479 244018 DEBUG nova.compute.manager [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:25:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.484 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee5ec55-dabf-4c6e-a80b-82c85082d71e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc97c5b11-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 431658, 'tstamp': 431658}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288011, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc97c5b11-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 431660, 'tstamp': 431660}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288011, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.486 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc97c5b11-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.489 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.492 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.492 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc97c5b11-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.493 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:25:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.493 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc97c5b11-70, col_values=(('external_ids', {'iface-id': 'db412aa7-4ad4-4eb8-b61f-dd3e71d5329d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.493 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:25:54 np0005629333 kernel: tap0b2cc5f4-bf (unregistering): left promiscuous mode
Feb 25 07:25:54 np0005629333 NetworkManager[49836]: <info>  [1772022354.5261] device (tap0b2cc5f4-bf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.538 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:54Z|00429|binding|INFO|Releasing lport 0b2cc5f4-bf41-4e03-8c24-1e711f742942 from this chassis (sb_readonly=0)
Feb 25 07:25:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:54Z|00430|binding|INFO|Setting lport 0b2cc5f4-bf41-4e03-8c24-1e711f742942 down in Southbound
Feb 25 07:25:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:25:54Z|00431|binding|INFO|Removing iface tap0b2cc5f4-bf ovn-installed in OVS
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.541 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.545 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:2e:f8 10.100.0.3'], port_security=['fa:16:3e:1f:2e:f8 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ec873c3c-bf46-4537-8c29-b23a3133d281', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c97c5b11-7517-46fe-a6ca-63894792908c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c8744bdbc0f1499388aab5f477246beb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9622f97c-a9c4-423f-b49e-154152bd6881', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9b7ae960-1b2b-4f15-b35e-8e889d9ccce8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=0b2cc5f4-bf41-4e03-8c24-1e711f742942) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:25:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.546 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 0b2cc5f4-bf41-4e03-8c24-1e711f742942 in datapath c97c5b11-7517-46fe-a6ca-63894792908c unbound from our chassis#033[00m
Feb 25 07:25:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.548 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c97c5b11-7517-46fe-a6ca-63894792908c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:25:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.550 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f1880ff0-c44e-4211-9136-e20814ef28dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.551 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c namespace which is not needed anymore#033[00m
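Once the last VIF in datapath c97c5b11-7517-46fe-a6ca-63894792908c is unbound, the metadata agent tears down its ovnmeta- namespace, as logged above. A quick post-condition check, assuming iproute2 and root:

    import subprocess

    ns = subprocess.run(["ip", "netns", "list"], check=True,
                        capture_output=True, text=True).stdout
    assert "ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c" not in ns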
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.551 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.566 244018 INFO nova.virt.libvirt.driver [-] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Instance destroyed successfully.#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.566 244018 DEBUG nova.objects.instance [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lazy-loading 'resources' on Instance uuid 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.570 244018 DEBUG oslo_concurrency.processutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:25:54 np0005629333 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000032.scope: Deactivated successfully.
Feb 25 07:25:54 np0005629333 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000032.scope: Consumed 7.498s CPU time.
Feb 25 07:25:54 np0005629333 systemd-machined[210048]: Machine qemu-54-instance-00000032 terminated.
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.606 244018 DEBUG nova.virt.libvirt.vif [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:25:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-2038151869',display_name='tempest-MultipleCreateTestJSON-server-2038151869-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-2038151869-1',id=49,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:25:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c8744bdbc0f1499388aab5f477246beb',ramdisk_id='',reservation_id='r-e70320tn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1900553995',owner_user_name='tempest-MultipleCreateTestJSON-1900553995-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:25:50Z,user_data=None,user_id='0899f3fdb57d46a395d07753dd261241',uuid=3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "58ea35ed-ff5d-4827-a801-431f7536d78d", "address": "fa:16:3e:03:e4:94", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58ea35ed-ff", "ovs_interfaceid": "58ea35ed-ff5d-4827-a801-431f7536d78d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.607 244018 DEBUG nova.network.os_vif_util [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converting VIF {"id": "58ea35ed-ff5d-4827-a801-431f7536d78d", "address": "fa:16:3e:03:e4:94", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap58ea35ed-ff", "ovs_interfaceid": "58ea35ed-ff5d-4827-a801-431f7536d78d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.608 244018 DEBUG nova.network.os_vif_util [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:e4:94,bridge_name='br-int',has_traffic_filtering=True,id=58ea35ed-ff5d-4827-a801-431f7536d78d,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58ea35ed-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.608 244018 DEBUG os_vif [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:e4:94,bridge_name='br-int',has_traffic_filtering=True,id=58ea35ed-ff5d-4827-a801-431f7536d78d,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58ea35ed-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.610 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.611 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58ea35ed-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.612 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.615 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.617 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.620 244018 INFO os_vif [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:e4:94,bridge_name='br-int',has_traffic_filtering=True,id=58ea35ed-ff5d-4827-a801-431f7536d78d,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap58ea35ed-ff')#033[00m
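The unplug sequence above (nova.virt.libvirt.vif -> os_vif.unplug -> DelPortCommand) is nova handing the converted VIFOpenVSwitch object to os-vif, whose ovs plugin removes the tap port from br-int through an ovsdbapp transaction. A minimal sketch of that final step, assuming ovsdbapp is installed and the default local ovsdb-server socket path; port and bridge names are taken from the log:

    # Delete an OVS port the way os-vif's ovs plugin does, via ovsdbapp's
    # idl-based API; if_exists=True matches the DelPortCommand in the log.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'  # assumption: default socket path
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    api.del_port('tap58ea35ed-ff', bridge='br-int', if_exists=True).execute(
        check_error=True)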
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.637 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.682 244018 INFO nova.virt.libvirt.driver [None req-2cd5f0bd-19e7-47f7-97a7-ab5d46f4435b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Beginning live snapshot process#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.716 244018 INFO nova.virt.libvirt.driver [-] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Instance destroyed successfully.#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.717 244018 DEBUG nova.objects.instance [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lazy-loading 'resources' on Instance uuid ec873c3c-bf46-4537-8c29-b23a3133d281 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:25:54 np0005629333 neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c[287391]: [NOTICE]   (287418) : haproxy version is 2.8.14-c23fe91
Feb 25 07:25:54 np0005629333 neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c[287391]: [NOTICE]   (287418) : path to executable is /usr/sbin/haproxy
Feb 25 07:25:54 np0005629333 neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c[287391]: [WARNING]  (287418) : Exiting Master process...
Feb 25 07:25:54 np0005629333 neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c[287391]: [ALERT]    (287418) : Current worker (287439) exited with code 143 (Terminated)
Feb 25 07:25:54 np0005629333 neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c[287391]: [WARNING]  (287418) : All workers exited. Exiting... (0)
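Exit code 143 in the haproxy ALERT above is not a crash: it is the shell convention 128 + signal number, i.e. the worker was killed by SIGTERM (15) as part of the orderly container stop. A one-line check:

    import signal
    assert 128 + signal.SIGTERM == 143  # SIGTERM is 15 on Linux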
Feb 25 07:25:54 np0005629333 systemd[1]: libpod-26d09110bafeffdf906186af360597b7063b1c4f77ebb55bbae343edd6e074b1.scope: Deactivated successfully.
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.733 244018 DEBUG nova.virt.libvirt.vif [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:25:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-2038151869',display_name='tempest-MultipleCreateTestJSON-server-2038151869-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-2038151869-2',id=50,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2026-02-25T12:25:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c8744bdbc0f1499388aab5f477246beb',ramdisk_id='',reservation_id='r-e70320tn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1900553995',owner_user_name='tempest-MultipleCreateTestJSON-1900553995-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:25:48Z,user_data=None,user_id='0899f3fdb57d46a395d07753dd261241',uuid=ec873c3c-bf46-4537-8c29-b23a3133d281,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "address": "fa:16:3e:1f:2e:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2cc5f4-bf", "ovs_interfaceid": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.734 244018 DEBUG nova.network.os_vif_util [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converting VIF {"id": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "address": "fa:16:3e:1f:2e:f8", "network": {"id": "c97c5b11-7517-46fe-a6ca-63894792908c", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610828338-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c8744bdbc0f1499388aab5f477246beb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b2cc5f4-bf", "ovs_interfaceid": "0b2cc5f4-bf41-4e03-8c24-1e711f742942", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.736 244018 DEBUG nova.network.os_vif_util [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:2e:f8,bridge_name='br-int',has_traffic_filtering=True,id=0b2cc5f4-bf41-4e03-8c24-1e711f742942,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b2cc5f4-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.736 244018 DEBUG os_vif [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:2e:f8,bridge_name='br-int',has_traffic_filtering=True,id=0b2cc5f4-bf41-4e03-8c24-1e711f742942,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b2cc5f4-bf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.739 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:54 np0005629333 podman[288062]: 2026-02-25 12:25:54.740501579 +0000 UTC m=+0.068087838 container died 26d09110bafeffdf906186af360597b7063b1c4f77ebb55bbae343edd6e074b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.740 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b2cc5f4-bf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.743 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.748 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.751 244018 INFO os_vif [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:2e:f8,bridge_name='br-int',has_traffic_filtering=True,id=0b2cc5f4-bf41-4e03-8c24-1e711f742942,network=Network(c97c5b11-7517-46fe-a6ca-63894792908c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b2cc5f4-bf')#033[00m
Feb 25 07:25:54 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ed524380e5edf8481fd2be2b83ddd592e929273edf1401db6f69e8d7d1c59845-merged.mount: Deactivated successfully.
Feb 25 07:25:54 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-26d09110bafeffdf906186af360597b7063b1c4f77ebb55bbae343edd6e074b1-userdata-shm.mount: Deactivated successfully.
Feb 25 07:25:54 np0005629333 podman[288062]: 2026-02-25 12:25:54.779868024 +0000 UTC m=+0.107454283 container cleanup 26d09110bafeffdf906186af360597b7063b1c4f77ebb55bbae343edd6e074b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 07:25:54 np0005629333 systemd[1]: libpod-conmon-26d09110bafeffdf906186af360597b7063b1c4f77ebb55bbae343edd6e074b1.scope: Deactivated successfully.
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.840 244018 DEBUG nova.virt.libvirt.imagebackend [None req-2cd5f0bd-19e7-47f7-97a7-ab5d46f4435b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Feb 25 07:25:54 np0005629333 podman[288150]: 2026-02-25 12:25:54.862831311 +0000 UTC m=+0.058546247 container remove 26d09110bafeffdf906186af360597b7063b1c4f77ebb55bbae343edd6e074b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 25 07:25:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.869 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[19876bc5-91d0-4512-bf57-88198c7de211]: (4, ('Wed Feb 25 12:25:54 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c (26d09110bafeffdf906186af360597b7063b1c4f77ebb55bbae343edd6e074b1)\n26d09110bafeffdf906186af360597b7063b1c4f77ebb55bbae343edd6e074b1\nWed Feb 25 12:25:54 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c (26d09110bafeffdf906186af360597b7063b1c4f77ebb55bbae343edd6e074b1)\n26d09110bafeffdf906186af360597b7063b1c4f77ebb55bbae343edd6e074b1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
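The privsep reply above echoes the wrapper that the metadata agent runs with elevated privileges: stop the per-network haproxy container, then delete it. The equivalent operations, sketched with subprocess (container name from the log; the real agent routes this through oslo.privsep rather than calling podman from the agent process itself):

    import subprocess

    NAME = 'neutron-haproxy-ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c'
    subprocess.run(['podman', 'stop', NAME], check=True)  # SIGTERM -> exit 143 above
    subprocess.run(['podman', 'rm', NAME], check=True)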
Feb 25 07:25:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.871 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ac5de2e2-69e5-4fc8-98fd-47a8fd886c6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.872 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc97c5b11-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:25:54 np0005629333 kernel: tapc97c5b11-70: left promiscuous mode
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.874 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:54 np0005629333 nova_compute[244014]: 2026-02-25 12:25:54.885 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.888 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4518b2c8-d986-4a83-b01f-d3db8f9e3ce2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.899 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b0387d0c-fb04-48af-972d-03c45a069415]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.900 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cf079a03-6f3e-4777-aac0-fb481bf64e72]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.916 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[db91499b-df57-455e-bf44-03d3b760c4af]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 431646, 'reachable_time': 44850, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288191, 'error': None, 'target': 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:25:54 np0005629333 systemd[1]: run-netns-ovnmeta\x2dc97c5b11\x2d7517\x2d46fe\x2da6ca\x2d63894792908c.mount: Deactivated successfully.
Feb 25 07:25:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.922 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:25:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:54.922 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[dc683313-ff2b-4ed5-8e78-3bcf75bb4905]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
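With the haproxy container gone, the agent tears down the per-network namespace; remove_netns in neutron's privileged ip_lib is a thin wrapper over pyroute2. A sketch assuming pyroute2 is installed and the caller has sufficient privileges, with the namespace name taken from the log:

    from pyroute2 import netns

    NS = 'ovnmeta-c97c5b11-7517-46fe-a6ca-63894792908c'
    if NS in netns.listnetns():
        netns.remove(NS)  # unlinks /var/run/netns/<NS>, as remove_netns() does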
Feb 25 07:25:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:25:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Feb 25 07:25:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Feb 25 07:25:55 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Feb 25 07:25:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:55.008 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:55.008 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:25:55.012 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
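The three lockutils lines above are oslo.concurrency's standard pattern: "inner" is the wrapper installed by the synchronized decorator, which logs the acquire, the wait time, and the hold time around the decorated body. A minimal sketch of the same pattern (function body illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        # runs with the named in-process lock held; lockutils emits the
        # acquired/released DEBUG lines seen above
        pass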
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.078 244018 INFO nova.virt.libvirt.driver [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Deleting instance files /var/lib/nova/instances/3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_del#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.079 244018 INFO nova.virt.libvirt.driver [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Deletion of /var/lib/nova/instances/3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2_del complete#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.084 244018 DEBUG nova.storage.rbd_utils [None req-2cd5f0bd-19e7-47f7-97a7-ab5d46f4435b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] creating snapshot(1bd836acae524e0d8fac760064da62b9) on rbd image(b8086e43-4c45-422f-a3b5-fa665c256b30_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 25 07:25:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:25:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3141398883' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.125 244018 DEBUG nova.compute.manager [req-b50fcfd4-3c27-4b2b-99fe-1728ceb22d43 req-d783226e-1427-49e8-b31a-59c416e72deb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Received event network-vif-unplugged-0b2cc5f4-bf41-4e03-8c24-1e711f742942 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.126 244018 DEBUG oslo_concurrency.lockutils [req-b50fcfd4-3c27-4b2b-99fe-1728ceb22d43 req-d783226e-1427-49e8-b31a-59c416e72deb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.126 244018 DEBUG oslo_concurrency.lockutils [req-b50fcfd4-3c27-4b2b-99fe-1728ceb22d43 req-d783226e-1427-49e8-b31a-59c416e72deb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.127 244018 DEBUG oslo_concurrency.lockutils [req-b50fcfd4-3c27-4b2b-99fe-1728ceb22d43 req-d783226e-1427-49e8-b31a-59c416e72deb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.127 244018 DEBUG nova.compute.manager [req-b50fcfd4-3c27-4b2b-99fe-1728ceb22d43 req-d783226e-1427-49e8-b31a-59c416e72deb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] No waiting events found dispatching network-vif-unplugged-0b2cc5f4-bf41-4e03-8c24-1e711f742942 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.127 244018 DEBUG nova.compute.manager [req-b50fcfd4-3c27-4b2b-99fe-1728ceb22d43 req-d783226e-1427-49e8-b31a-59c416e72deb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Received event network-vif-unplugged-0b2cc5f4-bf41-4e03-8c24-1e711f742942 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
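The network-vif-unplugged lines above are nova's external-event plumbing: neutron posts the event to the API, which routes it to this compute, and pop_instance_event looks up a registered waiter under the per-instance "<uuid>-events" lock. "No waiting events found" simply means nothing was blocking on the unplug, which is normal during delete. A toy sketch of that waiter registry (names hypothetical, not nova's actual classes):

    import threading
    from collections import defaultdict

    _lock = threading.Lock()      # stands in for the "<uuid>-events" lock
    _waiters = defaultdict(dict)  # instance uuid -> {event_name: Event}

    def prepare(uuid, name):      # called by code that intends to wait
        with _lock:
            ev = _waiters[uuid][name] = threading.Event()
        return ev

    def pop_event(uuid, name):    # called when the external event arrives
        with _lock:
            ev = _waiters[uuid].pop(name, None)
        if ev:
            ev.set()              # wake the waiter
        # else: "No waiting events found", log and drop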
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.128 244018 DEBUG oslo_concurrency.processutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.132 244018 DEBUG nova.compute.provider_tree [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.149 244018 DEBUG nova.scheduler.client.report [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
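The inventory dict above is what the resource tracker reports to placement; usable capacity per resource class is (total - reserved) * allocation_ratio, so this host schedules as 32 vCPUs, 7167 MB of RAM, and 52.2 GB of disk. A worked check against the logged numbers:

    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2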
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.185 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.186 244018 DEBUG nova.compute.manager [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.195 244018 INFO nova.compute.manager [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Took 0.87 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.196 244018 DEBUG oslo.service.loopingcall [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.197 244018 DEBUG nova.compute.manager [-] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.198 244018 DEBUG nova.network.neutron [-] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.250 244018 DEBUG nova.compute.manager [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.251 244018 DEBUG nova.network.neutron [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.262 244018 INFO nova.virt.libvirt.driver [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Deleting instance files /var/lib/nova/instances/ec873c3c-bf46-4537-8c29-b23a3133d281_del#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.264 244018 INFO nova.virt.libvirt.driver [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Deletion of /var/lib/nova/instances/ec873c3c-bf46-4537-8c29-b23a3133d281_del complete#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.303 244018 INFO nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.329 244018 DEBUG nova.compute.manager [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.339 244018 INFO nova.compute.manager [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.340 244018 DEBUG oslo.service.loopingcall [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.341 244018 DEBUG nova.compute.manager [-] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.341 244018 DEBUG nova.network.neutron [-] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.416 244018 DEBUG nova.compute.manager [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.418 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.419 244018 INFO nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Creating image(s)#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.451 244018 DEBUG nova.storage.rbd_utils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image a826c2fd-1af8-4b55-b801-90ce87d04466_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.485 244018 DEBUG nova.storage.rbd_utils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image a826c2fd-1af8-4b55-b801-90ce87d04466_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.518 244018 DEBUG nova.storage.rbd_utils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image a826c2fd-1af8-4b55-b801-90ce87d04466_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.522 244018 DEBUG oslo_concurrency.processutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.554 244018 DEBUG nova.compute.manager [req-3e591108-e88f-4925-8f1b-83ce44cd8b10 req-4a4d8a87-1f31-4f77-b2bd-c9664c468e25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Received event network-vif-unplugged-58ea35ed-ff5d-4827-a801-431f7536d78d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.554 244018 DEBUG oslo_concurrency.lockutils [req-3e591108-e88f-4925-8f1b-83ce44cd8b10 req-4a4d8a87-1f31-4f77-b2bd-c9664c468e25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.555 244018 DEBUG oslo_concurrency.lockutils [req-3e591108-e88f-4925-8f1b-83ce44cd8b10 req-4a4d8a87-1f31-4f77-b2bd-c9664c468e25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.555 244018 DEBUG oslo_concurrency.lockutils [req-3e591108-e88f-4925-8f1b-83ce44cd8b10 req-4a4d8a87-1f31-4f77-b2bd-c9664c468e25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.555 244018 DEBUG nova.compute.manager [req-3e591108-e88f-4925-8f1b-83ce44cd8b10 req-4a4d8a87-1f31-4f77-b2bd-c9664c468e25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] No waiting events found dispatching network-vif-unplugged-58ea35ed-ff5d-4827-a801-431f7536d78d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.556 244018 DEBUG nova.compute.manager [req-3e591108-e88f-4925-8f1b-83ce44cd8b10 req-4a4d8a87-1f31-4f77-b2bd-c9664c468e25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Received event network-vif-unplugged-58ea35ed-ff5d-4827-a801-431f7536d78d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.556 244018 DEBUG nova.compute.manager [req-3e591108-e88f-4925-8f1b-83ce44cd8b10 req-4a4d8a87-1f31-4f77-b2bd-c9664c468e25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Received event network-vif-plugged-58ea35ed-ff5d-4827-a801-431f7536d78d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.557 244018 DEBUG oslo_concurrency.lockutils [req-3e591108-e88f-4925-8f1b-83ce44cd8b10 req-4a4d8a87-1f31-4f77-b2bd-c9664c468e25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.557 244018 DEBUG oslo_concurrency.lockutils [req-3e591108-e88f-4925-8f1b-83ce44cd8b10 req-4a4d8a87-1f31-4f77-b2bd-c9664c468e25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.557 244018 DEBUG oslo_concurrency.lockutils [req-3e591108-e88f-4925-8f1b-83ce44cd8b10 req-4a4d8a87-1f31-4f77-b2bd-c9664c468e25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.557 244018 DEBUG nova.compute.manager [req-3e591108-e88f-4925-8f1b-83ce44cd8b10 req-4a4d8a87-1f31-4f77-b2bd-c9664c468e25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] No waiting events found dispatching network-vif-plugged-58ea35ed-ff5d-4827-a801-431f7536d78d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.558 244018 WARNING nova.compute.manager [req-3e591108-e88f-4925-8f1b-83ce44cd8b10 req-4a4d8a87-1f31-4f77-b2bd-c9664c468e25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Received unexpected event network-vif-plugged-58ea35ed-ff5d-4827-a801-431f7536d78d for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.602 244018 DEBUG oslo_concurrency.processutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
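The qemu-img probe above runs under oslo.concurrency's prlimit wrapper, which re-execs the command with address space capped at 1 GiB (--as=1073741824) and CPU time at 30 s (--cpu=30) so a malformed image cannot wedge the compute service. A sketch of the same guarded call, with the base-image path from the log:

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824,  # --as
                                        cpu_time=30)               # --cpu
    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json', prlimit=limits)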
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.602 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.603 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.604 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.635 244018 DEBUG nova.storage.rbd_utils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image a826c2fd-1af8-4b55-b801-90ce87d04466_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.640 244018 DEBUG oslo_concurrency.processutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 a826c2fd-1af8-4b55-b801-90ce87d04466_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:25:55 np0005629333 nova_compute[244014]: 2026-02-25 12:25:55.750 244018 DEBUG nova.policy [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '89e71139346a40899212d5bc35835720', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f976004e0b334963a69c2519fca200d2', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
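The "Policy check ... failed" DEBUG above is a routine authorization result, not an error: the tempest user holds only the reader/member roles, so nova denies network:attach_external_network and proceeds without external-network attach rights. A sketch of the oslo.policy check (the check string here is illustrative; nova's shipped default differs):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(
        policy.RuleDefault('network:attach_external_network', 'role:admin'))
    print(enforcer.enforce('network:attach_external_network', {},
                           {'roles': ['reader', 'member']}))  # False, as logged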
Feb 25 07:25:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 405 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 9.0 MiB/s rd, 4.3 MiB/s wr, 416 op/s
Feb 25 07:25:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Feb 25 07:25:56 np0005629333 nova_compute[244014]: 2026-02-25 12:25:56.421 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "679fb15f-b258-473a-8cdc-a2c143eb4d92" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:56 np0005629333 nova_compute[244014]: 2026-02-25 12:25:56.422 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Feb 25 07:25:56 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Feb 25 07:25:56 np0005629333 nova_compute[244014]: 2026-02-25 12:25:56.462 244018 DEBUG nova.compute.manager [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:25:56 np0005629333 nova_compute[244014]: 2026-02-25 12:25:56.526 244018 DEBUG nova.storage.rbd_utils [None req-2cd5f0bd-19e7-47f7-97a7-ab5d46f4435b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] cloning vms/b8086e43-4c45-422f-a3b5-fa665c256b30_disk@1bd836acae524e0d8fac760064da62b9 to images/47933427-b31f-475b-a243-3521fd903d10 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 25 07:25:56 np0005629333 nova_compute[244014]: 2026-02-25 12:25:56.579 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:56 np0005629333 nova_compute[244014]: 2026-02-25 12:25:56.579 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:56 np0005629333 nova_compute[244014]: 2026-02-25 12:25:56.585 244018 DEBUG nova.virt.hardware [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:25:56 np0005629333 nova_compute[244014]: 2026-02-25 12:25:56.586 244018 INFO nova.compute.claims [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:25:56 np0005629333 nova_compute[244014]: 2026-02-25 12:25:56.683 244018 DEBUG oslo_concurrency.processutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 a826c2fd-1af8-4b55-b801-90ce87d04466_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:25:56 np0005629333 nova_compute[244014]: 2026-02-25 12:25:56.720 244018 DEBUG nova.storage.rbd_utils [None req-2cd5f0bd-19e7-47f7-97a7-ab5d46f4435b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] flattening images/47933427-b31f-475b-a243-3521fd903d10 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
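The two rbd_utils lines above show Nova cloning a snapshot of a vms-pool image into the images pool and then flattening the clone so it no longer depends on its parent. A minimal sketch of that clone-then-flatten sequence with the librbd Python bindings; pool, image, and snapshot names are copied from the log, and the snapshot is assumed to already be protected (Nova arranges that before cloning):

    # Clone-then-flatten sequence from the rbd_utils lines above,
    # using the rados/rbd Python bindings.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    try:
        with cluster.open_ioctx('vms') as src, \
                cluster.open_ioctx('images') as dst:
            # Clone the protected snapshot into the images pool.
            rbd.RBD().clone(src, 'b8086e43-4c45-422f-a3b5-fa665c256b30_disk',
                            '1bd836acae524e0d8fac760064da62b9',
                            dst, '47933427-b31f-475b-a243-3521fd903d10')
            # Flatten copies all parent data so the clone stands alone
            # and the parent snapshot can later be removed.
            with rbd.Image(dst, '47933427-b31f-475b-a243-3521fd903d10') as img:
                img.flatten()
    finally:
        cluster.shutdown()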
Feb 25 07:25:56 np0005629333 nova_compute[244014]: 2026-02-25 12:25:56.838 244018 DEBUG nova.network.neutron [-] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:25:56 np0005629333 nova_compute[244014]: 2026-02-25 12:25:56.846 244018 DEBUG nova.storage.rbd_utils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] resizing rbd image a826c2fd-1af8-4b55-b801-90ce87d04466_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 25 07:25:56 np0005629333 nova_compute[244014]: 2026-02-25 12:25:56.976 244018 DEBUG nova.network.neutron [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Successfully created port: 4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
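"Successfully created port" is Nova asking Neutron for a minimal port on the tenant network during the build. A hedged sketch of the equivalent request through openstacksdk, not Nova's internal client; the cloud name is a placeholder, the UUIDs are taken from surrounding log lines:

    # Roughly what "_create_port_minimal" asks Neutron for, expressed
    # with openstacksdk. Cloud name "mycloud" is a placeholder.
    import openstack

    conn = openstack.connect(cloud='mycloud')  # reads clouds.yaml
    port = conn.network.create_port(
        network_id='7693903d-d5e2-4b50-a39b-bbbcc4148329',
        device_id='a826c2fd-1af8-4b55-b801-90ce87d04466',
        device_owner='compute:nova',
    )
    print(port.id)  # e.g. 4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf above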
Feb 25 07:25:56 np0005629333 nova_compute[244014]: 2026-02-25 12:25:56.980 244018 INFO nova.compute.manager [-] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Took 1.78 seconds to deallocate network for instance.#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.081 244018 DEBUG nova.network.neutron [-] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.083 244018 DEBUG oslo_concurrency.lockutils [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.127 244018 DEBUG oslo_concurrency.processutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.165 244018 DEBUG nova.objects.instance [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lazy-loading 'migration_context' on Instance uuid a826c2fd-1af8-4b55-b801-90ce87d04466 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.168 244018 INFO nova.compute.manager [-] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Took 1.83 seconds to deallocate network for instance.#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.179 244018 DEBUG nova.compute.manager [req-a367d2f4-c0a1-4224-ad1a-3d91ae176194 req-b6425175-adb9-418a-96f9-f29ba7e6096d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Received event network-vif-plugged-0b2cc5f4-bf41-4e03-8c24-1e711f742942 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.180 244018 DEBUG oslo_concurrency.lockutils [req-a367d2f4-c0a1-4224-ad1a-3d91ae176194 req-b6425175-adb9-418a-96f9-f29ba7e6096d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.180 244018 DEBUG oslo_concurrency.lockutils [req-a367d2f4-c0a1-4224-ad1a-3d91ae176194 req-b6425175-adb9-418a-96f9-f29ba7e6096d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.181 244018 DEBUG oslo_concurrency.lockutils [req-a367d2f4-c0a1-4224-ad1a-3d91ae176194 req-b6425175-adb9-418a-96f9-f29ba7e6096d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.181 244018 DEBUG nova.compute.manager [req-a367d2f4-c0a1-4224-ad1a-3d91ae176194 req-b6425175-adb9-418a-96f9-f29ba7e6096d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] No waiting events found dispatching network-vif-plugged-0b2cc5f4-bf41-4e03-8c24-1e711f742942 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.181 244018 WARNING nova.compute.manager [req-a367d2f4-c0a1-4224-ad1a-3d91ae176194 req-b6425175-adb9-418a-96f9-f29ba7e6096d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Received unexpected event network-vif-plugged-0b2cc5f4-bf41-4e03-8c24-1e711f742942 for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.182 244018 DEBUG nova.compute.manager [req-a367d2f4-c0a1-4224-ad1a-3d91ae176194 req-b6425175-adb9-418a-96f9-f29ba7e6096d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Received event network-vif-deleted-0b2cc5f4-bf41-4e03-8c24-1e711f742942 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
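The preceding event lines show why the WARNING appears: Neutron's network-vif-plugged event arrived after the instance had already moved to task_state deleting, so pop_instance_event found no registered waiter. A simplified model of that pop-or-warn event table, illustrative only and not Nova's actual InstanceEvents class:

    # Simplified model of the "pop_instance_event" pattern: the build
    # path registers a waiter per (instance, event); external events
    # either wake the waiter or get reported as unexpected.
    import threading

    _waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_for_event(instance_uuid, event_name):
        ev = threading.Event()
        _waiters[(instance_uuid, event_name)] = ev
        return ev  # build path later calls ev.wait(timeout)

    def external_instance_event(instance_uuid, event_name):
        ev = _waiters.pop((instance_uuid, event_name), None)
        if ev is None:
            print(f'Received unexpected event {event_name} '
                  f'for instance {instance_uuid}')
        else:
            ev.set()  # wake the thread waiting in the build path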
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.184 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.184 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Ensure instance console log exists: /var/lib/nova/instances/a826c2fd-1af8-4b55-b801-90ce87d04466/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.185 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.185 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.186 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.201 244018 DEBUG nova.storage.rbd_utils [None req-2cd5f0bd-19e7-47f7-97a7-ab5d46f4435b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] removing snapshot(1bd836acae524e0d8fac760064da62b9) on rbd image(b8086e43-4c45-422f-a3b5-fa665c256b30_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.223 244018 DEBUG oslo_concurrency.lockutils [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Feb 25 07:25:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Feb 25 07:25:57 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.547 244018 DEBUG nova.compute.manager [req-5609c235-524b-45b9-abb1-7137b5747bd5 req-b90cff67-f2fd-4b93-a03a-f6cd7dea4077 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Received event network-vif-deleted-58ea35ed-ff5d-4827-a801-431f7536d78d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.576 244018 DEBUG nova.storage.rbd_utils [None req-2cd5f0bd-19e7-47f7-97a7-ab5d46f4435b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] creating snapshot(snap) on rbd image(47933427-b31f-475b-a243-3521fd903d10) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 25 07:25:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:25:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/23727402' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.673 244018 DEBUG oslo_concurrency.processutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
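Nova's RBD image backend shells out to "ceph df --format=json", as above, to learn pool and cluster capacity. A sketch of issuing the same command and reading the totals; the JSON field names follow recent Ceph releases and should be verified against the deployed version:

    # Run the same "ceph df" query seen above and read cluster totals.
    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    df = json.loads(out)
    stats = df['stats']  # field names per recent Ceph; may vary
    print('used %.1f%% of %d bytes'
          % (100.0 * stats['total_used_bytes'] / stats['total_bytes'],
             stats['total_bytes']))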
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.680 244018 DEBUG nova.compute.provider_tree [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.708 244018 DEBUG nova.scheduler.client.report [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
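The inventory dict reported to Placement determines schedulable capacity as (total - reserved) * allocation_ratio, quantized to step_size. For the numbers in the line above that works out to 32 VCPU, 7167 MB of RAM, and 52 GB of disk:

    # Effective capacity from the inventory dict logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,
                      'allocation_ratio': 4.0, 'step_size': 1},
        'MEMORY_MB': {'total': 7679, 'reserved': 512,
                      'allocation_ratio': 1.0, 'step_size': 1},
        'DISK_GB':   {'total': 59,   'reserved': 1,
                      'allocation_ratio': 0.9, 'step_size': 1},
    }
    for rc, inv in inventory.items():
        usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        usable = int(usable // inv['step_size']) * inv['step_size']
        print(rc, usable)  # VCPU 32, MEMORY_MB 7167, DISK_GB 52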
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.736 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.737 244018 DEBUG nova.compute.manager [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.739 244018 DEBUG oslo_concurrency.lockutils [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.793 244018 DEBUG nova.compute.manager [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.793 244018 DEBUG nova.network.neutron [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:25:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 366 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 4.5 MiB/s wr, 329 op/s
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.821 244018 INFO nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.824 244018 DEBUG nova.network.neutron [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Successfully updated port: 4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.846 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "refresh_cache-a826c2fd-1af8-4b55-b801-90ce87d04466" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.846 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquired lock "refresh_cache-a826c2fd-1af8-4b55-b801-90ce87d04466" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.847 244018 DEBUG nova.network.neutron [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.848 244018 DEBUG nova.compute.manager [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.870 244018 DEBUG oslo_concurrency.processutils [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.960 244018 DEBUG nova.compute.manager [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.962 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.962 244018 INFO nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Creating image(s)#033[00m
Feb 25 07:25:57 np0005629333 nova_compute[244014]: 2026-02-25 12:25:57.984 244018 DEBUG nova.storage.rbd_utils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image 679fb15f-b258-473a-8cdc-a2c143eb4d92_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.008 244018 DEBUG nova.storage.rbd_utils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image 679fb15f-b258-473a-8cdc-a2c143eb4d92_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.034 244018 DEBUG nova.storage.rbd_utils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image 679fb15f-b258-473a-8cdc-a2c143eb4d92_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.038 244018 DEBUG oslo_concurrency.processutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.070 244018 DEBUG nova.policy [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9c44c1a95c8d4d97bc1d7dde284dcf1b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'daab2f813dbd467685c22833bf875ec9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
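The failed network:attach_external_network check above is an oslo.policy authorization: the request credentials carry only the reader and member roles, so an admin-only rule denies. A minimal standalone sketch of that kind of check; the "role:admin" rule string is illustrative, Nova registers its own default for this policy:

    # Standalone oslo.policy check of the kind logged above.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(
        policy.RuleDefault('network:attach_external_network',
                           'role:admin'))  # illustrative rule only

    creds = {'user_id': '9c44c1a95c8d4d97bc1d7dde284dcf1b',
             'project_id': 'daab2f813dbd467685c22833bf875ec9',
             'roles': ['reader', 'member']}
    print(enforcer.enforce('network:attach_external_network', {}, creds))
    # -> False, matching the "Policy check ... failed" debug line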
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.073 244018 DEBUG nova.network.neutron [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.119 244018 DEBUG oslo_concurrency.processutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
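The qemu-img probe above is wrapped in oslo.concurrency's prlimit helper so a malformed or hostile image cannot exhaust memory or CPU while being inspected (1 GiB address space and 30 s of CPU, per the command line). A sketch of the same call through the processutils API, with the image path copied from the log:

    # qemu-img probe under resource limits, as in the prlimit-wrapped
    # command above (1 GiB address space, 30 s CPU).
    import json
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1 * 1024**3,
                                        cpu_time=30)
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json', prlimit=limits)
    info = json.loads(out)
    print(info['format'], info['virtual-size'])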
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.120 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.120 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.121 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.139 244018 DEBUG nova.storage.rbd_utils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image 679fb15f-b258-473a-8cdc-a2c143eb4d92_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.142 244018 DEBUG oslo_concurrency.processutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 679fb15f-b258-473a-8cdc-a2c143eb4d92_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.165 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022343.1000125, e291d969-fcea-4f60-a478-f7b81a91ccd9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.165 244018 INFO nova.compute.manager [-] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.186 244018 DEBUG nova.compute.manager [None req-92343db1-01ce-4928-a1f5-ffa60b4c7d5a - - - - - -] [instance: e291d969-fcea-4f60-a478-f7b81a91ccd9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:25:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:25:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1788305962' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.510 244018 DEBUG oslo_concurrency.processutils [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.514 244018 DEBUG nova.compute.provider_tree [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:25:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Feb 25 07:25:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Feb 25 07:25:58 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.530 244018 DEBUG nova.scheduler.client.report [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.536 244018 DEBUG oslo_concurrency.processutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 679fb15f-b258-473a-8cdc-a2c143eb4d92_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.570 244018 DEBUG oslo_concurrency.lockutils [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.573 244018 DEBUG oslo_concurrency.lockutils [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.350s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.617 244018 DEBUG nova.storage.rbd_utils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] resizing rbd image 679fb15f-b258-473a-8cdc-a2c143eb4d92_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
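The resize target 1073741824 is the flavor's 1 GB root disk (root_gb=1 on the m1.nano flavor that appears later in this log) converted to bytes: 1 * 1024^3. A short librbd sketch of the same resize, with the image name from the line above:

    # 1073741824 bytes == root_gb (1) * 1024**3, the m1.nano root disk.
    import rados
    import rbd

    size = 1 * 1024 ** 3  # 1073741824 bytes, matching the log line
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    try:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx,
                           '679fb15f-b258-473a-8cdc-a2c143eb4d92_disk') as img:
                img.resize(size)  # grow the freshly imported image to 1 GiB
    finally:
        cluster.shutdown()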
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.658 244018 INFO nova.scheduler.client.report [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Deleted allocations for instance 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2#033[00m
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.725 244018 DEBUG nova.objects.instance [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'migration_context' on Instance uuid 679fb15f-b258-473a-8cdc-a2c143eb4d92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.746 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.746 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Ensure instance console log exists: /var/lib/nova/instances/679fb15f-b258-473a-8cdc-a2c143eb4d92/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.747 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.747 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.747 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.760 244018 DEBUG oslo_concurrency.lockutils [None req-c8544c44-9c8b-4284-aca0-2e5452e653b1 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.801 244018 DEBUG oslo_concurrency.processutils [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:25:58 np0005629333 nova_compute[244014]: 2026-02-25 12:25:58.845 244018 DEBUG nova.network.neutron [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Successfully created port: fea2b354-8a21-4bff-bd90-431f8a17aa19 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:25:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:25:59 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3085999945' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.382 244018 DEBUG oslo_concurrency.processutils [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.388 244018 DEBUG nova.compute.provider_tree [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.405 244018 DEBUG nova.scheduler.client.report [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.433 244018 DEBUG oslo_concurrency.lockutils [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.474 244018 DEBUG nova.network.neutron [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Updating instance_info_cache with network_info: [{"id": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "address": "fa:16:3e:b6:1d:3c", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf8f3ad-0e", "ovs_interfaceid": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.491 244018 INFO nova.scheduler.client.report [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Deleted allocations for instance ec873c3c-bf46-4537-8c29-b23a3133d281#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.495 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Releasing lock "refresh_cache-a826c2fd-1af8-4b55-b801-90ce87d04466" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.496 244018 DEBUG nova.compute.manager [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Instance network_info: |[{"id": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "address": "fa:16:3e:b6:1d:3c", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf8f3ad-0e", "ovs_interfaceid": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.500 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Start _get_guest_xml network_info=[{"id": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "address": "fa:16:3e:b6:1d:3c", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf8f3ad-0e", "ovs_interfaceid": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
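The network_info blob passed into _get_guest_xml above is plain JSON; extracting the addresses the libvirt wiring cares about is a short nested walk. A sketch over the same structure, abbreviated to the fields actually used here:

    # Walk the network_info structure from the line above and pull out
    # the MAC, fixed IPs, and MTU. Data abbreviated from the log.
    network_info = [{
        'id': '4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf',
        'address': 'fa:16:3e:b6:1d:3c',
        'network': {'meta': {'mtu': 1442},
                    'subnets': [{'cidr': '10.100.0.0/28',
                                 'ips': [{'address': '10.100.0.4'}]}]},
    }]
    for vif in network_info:
        ips = [ip['address']
               for subnet in vif['network']['subnets']
               for ip in subnet['ips']]
        print(vif['id'], vif['address'], ips,
              vif['network']['meta']['mtu'])
        # -> 4bf8f3ad-... fa:16:3e:b6:1d:3c ['10.100.0.4'] 1442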
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.510 244018 WARNING nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.516 244018 DEBUG nova.virt.libvirt.host [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.516 244018 DEBUG nova.virt.libvirt.host [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.521 244018 DEBUG nova.virt.libvirt.host [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.522 244018 DEBUG nova.virt.libvirt.host [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.523 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.523 244018 DEBUG nova.virt.hardware [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.524 244018 DEBUG nova.virt.hardware [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.524 244018 DEBUG nova.virt.hardware [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.524 244018 DEBUG nova.virt.hardware [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.525 244018 DEBUG nova.virt.hardware [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.525 244018 DEBUG nova.virt.hardware [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.525 244018 DEBUG nova.virt.hardware [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.526 244018 DEBUG nova.virt.hardware [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.526 244018 DEBUG nova.virt.hardware [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.526 244018 DEBUG nova.virt.hardware [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.527 244018 DEBUG nova.virt.hardware [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
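With no flavor or image constraints (limits and preferences all 0:0:0, maxima 65536 each, per the lines above), the topology search reduces to enumerating sockets/cores/threads factorizations of the vCPU count; for 1 vCPU the only candidate is 1:1:1. A simplified version of that enumeration, without Nova's preference-based ordering:

    # Simplified version of the sockets/cores/threads enumeration that
    # produced "Possible topologies [VirtCPUTopology(cores=1,...)]".
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            for cores in range(1, min(vcpus, max_cores) + 1):
                for threads in range(1, min(vcpus, max_threads) + 1):
                    if sockets * cores * threads == vcpus:
                        yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]
    print(list(possible_topologies(4)))  # (1,1,4), (1,2,2), (2,2,1), ...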
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.531 244018 DEBUG oslo_concurrency.processutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.570 244018 DEBUG oslo_concurrency.lockutils [None req-7a3bd5d4-d99f-4961-a0ad-06083b2e8834 0899f3fdb57d46a395d07753dd261241 c8744bdbc0f1499388aab5f477246beb - - default default] Lock "ec873c3c-bf46-4537-8c29-b23a3133d281" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.640 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.669 244018 DEBUG nova.compute.manager [req-2b5f8b66-9563-4ebe-9391-e1208044f3d8 req-7623d489-520e-41e8-8f41-29145ff156bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Received event network-changed-4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.677 244018 DEBUG nova.compute.manager [req-2b5f8b66-9563-4ebe-9391-e1208044f3d8 req-7623d489-520e-41e8-8f41-29145ff156bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Refreshing instance network info cache due to event network-changed-4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.678 244018 DEBUG oslo_concurrency.lockutils [req-2b5f8b66-9563-4ebe-9391-e1208044f3d8 req-7623d489-520e-41e8-8f41-29145ff156bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-a826c2fd-1af8-4b55-b801-90ce87d04466" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.678 244018 DEBUG oslo_concurrency.lockutils [req-2b5f8b66-9563-4ebe-9391-e1208044f3d8 req-7623d489-520e-41e8-8f41-29145ff156bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-a826c2fd-1af8-4b55-b801-90ce87d04466" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.678 244018 DEBUG nova.network.neutron [req-2b5f8b66-9563-4ebe-9391-e1208044f3d8 req-7623d489-520e-41e8-8f41-29145ff156bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Refreshing network info cache for port 4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:25:59 np0005629333 podman[288703]: 2026-02-25 12:25:59.729759502 +0000 UTC m=+0.071815184 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 07:25:59 np0005629333 nova_compute[244014]: 2026-02-25 12:25:59.743 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:25:59 np0005629333 podman[288713]: 2026-02-25 12:25:59.764787773 +0000 UTC m=+0.108808830 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
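
Both podman lines report health_status=healthy for checks configured as {'mount': ..., 'test': '/openstack/healthcheck'}. One way to trigger the same check by hand, assuming podman's documented behavior that "healthcheck run" exits nonzero on failure (the helper function is illustrative):

    import subprocess

    def container_is_healthy(name):
        # Executes the container's configured healthcheck
        # (here: /openstack/healthcheck) and reports its exit status.
        result = subprocess.run(['podman', 'healthcheck', 'run', name])
        return result.returncode == 0

    print(container_is_healthy('ovn_metadata_agent'))
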
Feb 25 07:25:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 366 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.5 MiB/s wr, 211 op/s
Feb 25 07:25:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:26:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:26:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2567944676' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.069 244018 DEBUG oslo_concurrency.processutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
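
The CMD lines show nova shelling out to the ceph CLI through oslo.concurrency's processutils and timing the call. A hedged sketch of that call pattern, reusing the exact arguments logged above (the wrapper function and JSON decode are mine):

    import json
    from oslo_concurrency import processutils

    def get_monmap():
        # execute() returns (stdout, stderr) and raises
        # ProcessExecutionError on a nonzero exit code.
        out, _err = processutils.execute(
            'ceph', 'mon', 'dump', '--format=json',
            '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
        return json.loads(out)
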
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.137 244018 DEBUG nova.storage.rbd_utils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image a826c2fd-1af8-4b55-b801-90ce87d04466_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.141 244018 DEBUG oslo_concurrency.processutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.250 244018 DEBUG nova.network.neutron [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Successfully updated port: fea2b354-8a21-4bff-bd90-431f8a17aa19 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.269 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "refresh_cache-679fb15f-b258-473a-8cdc-a2c143eb4d92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.269 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquired lock "refresh_cache-679fb15f-b258-473a-8cdc-a2c143eb4d92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.270 244018 DEBUG nova.network.neutron [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.472 244018 INFO nova.virt.libvirt.driver [None req-2cd5f0bd-19e7-47f7-97a7-ab5d46f4435b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Snapshot image upload complete#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.473 244018 INFO nova.compute.manager [None req-2cd5f0bd-19e7-47f7-97a7-ab5d46f4435b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Took 6.20 seconds to snapshot the instance on the hypervisor.#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.479 244018 DEBUG nova.network.neutron [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:26:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:26:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3227801658' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.691 244018 DEBUG oslo_concurrency.processutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.693 244018 DEBUG nova.virt.libvirt.vif [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:25:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-131355373',display_name='tempest-ImagesOneServerNegativeTestJSON-server-131355373',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-131355373',id=51,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f976004e0b334963a69c2519fca200d2',ramdisk_id='',reservation_id='r-w9t1cq22',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1374162185',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1374162185-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:25:55Z,user_data=None,user_id='89e71139346a40899212d5bc35835720',uuid=a826c2fd-1af8-4b55-b801-90ce87d04466,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "address": "fa:16:3e:b6:1d:3c", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf8f3ad-0e", "ovs_interfaceid": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.694 244018 DEBUG nova.network.os_vif_util [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converting VIF {"id": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "address": "fa:16:3e:b6:1d:3c", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf8f3ad-0e", "ovs_interfaceid": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.695 244018 DEBUG nova.network.os_vif_util [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:1d:3c,bridge_name='br-int',has_traffic_filtering=True,id=4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bf8f3ad-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.697 244018 DEBUG nova.objects.instance [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lazy-loading 'pci_devices' on Instance uuid a826c2fd-1af8-4b55-b801-90ce87d04466 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.717 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:26:00 np0005629333 nova_compute[244014]:  <uuid>a826c2fd-1af8-4b55-b801-90ce87d04466</uuid>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:  <name>instance-00000033</name>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-131355373</nova:name>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:25:59</nova:creationTime>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:26:00 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:        <nova:user uuid="89e71139346a40899212d5bc35835720">tempest-ImagesOneServerNegativeTestJSON-1374162185-project-member</nova:user>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:        <nova:project uuid="f976004e0b334963a69c2519fca200d2">tempest-ImagesOneServerNegativeTestJSON-1374162185</nova:project>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:        <nova:port uuid="4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf">
Feb 25 07:26:00 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <entry name="serial">a826c2fd-1af8-4b55-b801-90ce87d04466</entry>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <entry name="uuid">a826c2fd-1af8-4b55-b801-90ce87d04466</entry>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/a826c2fd-1af8-4b55-b801-90ce87d04466_disk">
Feb 25 07:26:00 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:26:00 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/a826c2fd-1af8-4b55-b801-90ce87d04466_disk.config">
Feb 25 07:26:00 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:26:00 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:b6:1d:3c"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <target dev="tap4bf8f3ad-0e"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/a826c2fd-1af8-4b55-b801-90ce87d04466/console.log" append="off"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:26:00 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:26:00 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:26:00 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:26:00 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
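
The domain XML dumped above can be inspected offline; a short sketch that pulls the nova-specific metadata back out with ElementTree, assuming the XML has been saved to a local file (the filename is hypothetical; the namespace URI is taken from the XML itself):

    import xml.etree.ElementTree as ET

    NOVA_NS = {'nova': 'http://openstack.org/xmlns/libvirt/nova/1.1'}

    root = ET.parse('domain.xml').getroot()      # local copy of the XML above
    meta = root.find('metadata/nova:instance', NOVA_NS)
    print(meta.find('nova:name', NOVA_NS).text)  # the tempest server name
    flavor = meta.find('nova:flavor', NOVA_NS)
    print(flavor.get('name'),                          # m1.nano
          flavor.find('nova:memory', NOVA_NS).text)    # 128
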
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.718 244018 DEBUG nova.compute.manager [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Preparing to wait for external event network-vif-plugged-4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.718 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "a826c2fd-1af8-4b55-b801-90ce87d04466-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.719 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "a826c2fd-1af8-4b55-b801-90ce87d04466-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.719 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "a826c2fd-1af8-4b55-b801-90ce87d04466-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.721 244018 DEBUG nova.virt.libvirt.vif [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:25:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-131355373',display_name='tempest-ImagesOneServerNegativeTestJSON-server-131355373',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-131355373',id=51,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f976004e0b334963a69c2519fca200d2',ramdisk_id='',reservation_id='r-w9t1cq22',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1374162185',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1374162185-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:25:55Z,user_data=None,user_id='89e71139346a40899212d5bc35835720',uuid=a826c2fd-1af8-4b55-b801-90ce87d04466,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "address": "fa:16:3e:b6:1d:3c", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf8f3ad-0e", "ovs_interfaceid": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.721 244018 DEBUG nova.network.os_vif_util [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converting VIF {"id": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "address": "fa:16:3e:b6:1d:3c", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf8f3ad-0e", "ovs_interfaceid": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.723 244018 DEBUG nova.network.os_vif_util [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:1d:3c,bridge_name='br-int',has_traffic_filtering=True,id=4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bf8f3ad-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.723 244018 DEBUG os_vif [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:1d:3c,bridge_name='br-int',has_traffic_filtering=True,id=4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bf8f3ad-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.724 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.725 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.726 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.729 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.730 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bf8f3ad-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.730 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4bf8f3ad-0e, col_values=(('external_ids', {'iface-id': '4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b6:1d:3c', 'vm-uuid': 'a826c2fd-1af8-4b55-b801-90ce87d04466'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:00 np0005629333 NetworkManager[49836]: <info>  [1772022360.7336] manager: (tap4bf8f3ad-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/196)
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.732 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.743 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.744 244018 INFO os_vif [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:1d:3c,bridge_name='br-int',has_traffic_filtering=True,id=4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bf8f3ad-0e')#033[00m
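
The plug sequence above runs two ovsdbapp transactions: AddPortCommand attaches tap4bf8f3ad-0e to br-int, and DbSetCommand stamps the Interface row with the neutron port id and MAC so ovn-controller can claim it. A hedged sketch of the same calls through ovsdbapp's public Open_vSwitch API — connection details are illustrative; os-vif wires this up internally:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # One transaction reproducing the two commands logged above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap4bf8f3ad-0e', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap4bf8f3ad-0e',
            ('external_ids',
             {'iface-id': '4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf',
              'attached-mac': 'fa:16:3e:b6:1d:3c'})))
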
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.818 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.818 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.819 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] No VIF found with MAC fa:16:3e:b6:1d:3c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.819 244018 INFO nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Using config drive#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.849 244018 DEBUG nova.storage.rbd_utils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image a826c2fd-1af8-4b55-b801-90ce87d04466_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.864 244018 DEBUG nova.compute.manager [None req-2cd5f0bd-19e7-47f7-97a7-ab5d46f4435b b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Found 2 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.901 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb 25 07:26:00 np0005629333 nova_compute[244014]: 2026-02-25 12:26:00.901 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
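
Lines like "Running periodic task ComputeManager._heal_instance_info_cache" come from oslo.service's periodic task machinery; the task rebuilds its instance list and skips instances still in the Building state. A minimal hedged sketch of that decorator pattern (class name, spacing, and task body are illustrative):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            # Skip instances whose caches are still being built,
            # as the "Skipping network cache update" lines above do.
            pass

    Manager().run_periodic_tasks(context=None)
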
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.146 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.146 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.146 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.147 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.470 244018 INFO nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Creating config drive at /var/lib/nova/instances/a826c2fd-1af8-4b55-b801-90ce87d04466/disk.config#033[00m
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.478 244018 DEBUG oslo_concurrency.processutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a826c2fd-1af8-4b55-b801-90ce87d04466/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp38szqe_b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.550 244018 DEBUG nova.network.neutron [req-2b5f8b66-9563-4ebe-9391-e1208044f3d8 req-7623d489-520e-41e8-8f41-29145ff156bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Updated VIF entry in instance network info cache for port 4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.551 244018 DEBUG nova.network.neutron [req-2b5f8b66-9563-4ebe-9391-e1208044f3d8 req-7623d489-520e-41e8-8f41-29145ff156bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Updating instance_info_cache with network_info: [{"id": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "address": "fa:16:3e:b6:1d:3c", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf8f3ad-0e", "ovs_interfaceid": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.569 244018 DEBUG oslo_concurrency.lockutils [req-2b5f8b66-9563-4ebe-9391-e1208044f3d8 req-7623d489-520e-41e8-8f41-29145ff156bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-a826c2fd-1af8-4b55-b801-90ce87d04466" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:26:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:26:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:26:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:26:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:26:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:26:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.616 244018 DEBUG oslo_concurrency.processutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a826c2fd-1af8-4b55-b801-90ce87d04466/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp38szqe_b" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.652 244018 DEBUG nova.storage.rbd_utils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image a826c2fd-1af8-4b55-b801-90ce87d04466_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.657 244018 DEBUG oslo_concurrency.processutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a826c2fd-1af8-4b55-b801-90ce87d04466/disk.config a826c2fd-1af8-4b55-b801-90ce87d04466_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 463 MiB data, 713 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 14 MiB/s wr, 393 op/s
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.834 244018 DEBUG oslo_concurrency.processutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a826c2fd-1af8-4b55-b801-90ce87d04466/disk.config a826c2fd-1af8-4b55-b801-90ce87d04466_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.835 244018 INFO nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Deleting local config drive /var/lib/nova/instances/a826c2fd-1af8-4b55-b801-90ce87d04466/disk.config because it was imported into RBD.#033[00m
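
The config-drive sequence above has three steps: pack the staged metadata into an ISO9660 image with mkisofs, import the ISO into the vms RBD pool as <uuid>_disk.config (matching the cdrom <source> in the domain XML), then delete the local copy. A condensed sketch using the key CLI arguments that were logged (the staging directory is illustrative):

    import os
    from oslo_concurrency import processutils

    uuid = 'a826c2fd-1af8-4b55-b801-90ce87d04466'
    iso = '/var/lib/nova/instances/%s/disk.config' % uuid

    # 1. Build the ISO9660 config drive from a staged metadata tree.
    processutils.execute(
        '/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
        '-allow-multidot', '-l', '-J', '-r', '-V', 'config-2',
        '/tmp/staged-metadata')
    # 2. Import it into RBD so libvirt attaches it over the network.
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', iso,
        '%s_disk.config' % uuid, '--image-format=2',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    # 3. The local file is redundant once imported.
    os.unlink(iso)
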
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.886 244018 DEBUG nova.network.neutron [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Updating instance_info_cache with network_info: [{"id": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "address": "fa:16:3e:28:ef:45", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea2b354-8a", "ovs_interfaceid": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.916 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Releasing lock "refresh_cache-679fb15f-b258-473a-8cdc-a2c143eb4d92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.917 244018 DEBUG nova.compute.manager [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Instance network_info: |[{"id": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "address": "fa:16:3e:28:ef:45", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea2b354-8a", "ovs_interfaceid": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:26:01 np0005629333 kernel: tap4bf8f3ad-0e: entered promiscuous mode
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.921 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Start _get_guest_xml network_info=[{"id": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "address": "fa:16:3e:28:ef:45", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea2b354-8a", "ovs_interfaceid": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:26:01 np0005629333 NetworkManager[49836]: <info>  [1772022361.9225] manager: (tap4bf8f3ad-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/197)
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.923 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:01 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:01Z|00432|binding|INFO|Claiming lport 4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf for this chassis.
Feb 25 07:26:01 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:01Z|00433|binding|INFO|4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf: Claiming fa:16:3e:b6:1d:3c 10.100.0.4
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.931 244018 WARNING nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:26:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:01.935 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:1d:3c 10.100.0.4'], port_security=['fa:16:3e:b6:1d:3c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a826c2fd-1af8-4b55-b801-90ce87d04466', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f976004e0b334963a69c2519fca200d2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ca95284b-67f9-4e09-a57b-7847021c2465', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=974b795b-e2d8-4683-ac80-b366113e2dd8, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:26:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:01.937 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf in datapath 7693903d-d5e2-4b50-a39b-bbbcc4148329 bound to our chassis#033[00m
Feb 25 07:26:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:01.941 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7693903d-d5e2-4b50-a39b-bbbcc4148329#033[00m
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.941 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.943 244018 DEBUG nova.virt.libvirt.host [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.944 244018 DEBUG nova.virt.libvirt.host [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:26:01 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:01Z|00434|binding|INFO|Setting lport 4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf ovn-installed in OVS
Feb 25 07:26:01 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:01Z|00435|binding|INFO|Setting lport 4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf up in Southbound
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.945 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.955 244018 DEBUG nova.virt.libvirt.host [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:26:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:01.956 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a542f6be-5ce0-4eb4-9e66-976179e5afdf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:01.957 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7693903d-d1 in ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:26:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:01.959 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7693903d-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:26:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:01.959 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ffdd9def-a7da-4da4-b7d2-06761290cfff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.957 244018 DEBUG nova.virt.libvirt.host [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
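
The pair of cgroup probes above (CPU controller missing on the v1 hierarchy, found on v2) reflects how the host's cgroup layout is detected: on a cgroup-v2 host the enabled controllers are listed in /sys/fs/cgroup/cgroup.controllers, while on v1 each controller is mounted as its own directory. A minimal sketch of such a check, not nova's actual implementation:

    # Minimal sketch (not nova's code) of a cgroup CPU-controller probe.
    # cgroup v2 lists enabled controllers in one file; cgroup v1 mounts
    # each controller under its own /sys/fs/cgroup/<name> directory.
    import os

    def has_cgroup_cpu_controller() -> bool:
        v2_file = "/sys/fs/cgroup/cgroup.controllers"
        if os.path.exists(v2_file):  # unified (v2) hierarchy
            with open(v2_file) as f:
                return "cpu" in f.read().split()
        return os.path.isdir("/sys/fs/cgroup/cpu")  # legacy (v1) hierarchy
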
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.959 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.959 244018 DEBUG nova.virt.hardware [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.960 244018 DEBUG nova.virt.hardware [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.961 244018 DEBUG nova.virt.hardware [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:26:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:01.961 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9871f67a-5abe-468e-9af2-e0c25f03892a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.961 244018 DEBUG nova.virt.hardware [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.961 244018 DEBUG nova.virt.hardware [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.962 244018 DEBUG nova.virt.hardware [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.962 244018 DEBUG nova.virt.hardware [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.963 244018 DEBUG nova.virt.hardware [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.963 244018 DEBUG nova.virt.hardware [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.964 244018 DEBUG nova.virt.hardware [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.964 244018 DEBUG nova.virt.hardware [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
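
The hardware.py DEBUG lines above trace CPU topology selection: the flavor and image set no limits or preferences (0:0:0, with caps of 65536 per dimension), so the driver enumerates every way to factor the vCPU count into sockets, cores and threads; with 1 vCPU the only candidate is 1:1:1. A rough sketch of that enumeration, assuming a plain factorization search rather than nova's exact code:

    # Rough sketch of the topology enumeration the DEBUG lines trace:
    # factor vcpus into sockets * cores * threads under per-axis maxima.
    from itertools import product

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        limit = lambda cap: range(1, min(vcpus, cap) + 1)
        return [(s, c, t)
                for s, c, t in product(limit(max_sockets), limit(max_cores),
                                       limit(max_threads))
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)] -- matches the log above
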
Feb 25 07:26:01 np0005629333 systemd-machined[210048]: New machine qemu-56-instance-00000033.
Feb 25 07:26:01 np0005629333 nova_compute[244014]: 2026-02-25 12:26:01.969 244018 DEBUG oslo_concurrency.processutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:01.977 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[4beaf90a-0747-47d9-830e-dfad2d82ba13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:01 np0005629333 systemd[1]: Started Virtual Machine qemu-56-instance-00000033.
Feb 25 07:26:01 np0005629333 systemd-udevd[288881]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.002 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[048854ef-0418-44e8-b78a-389e2d82736c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:02 np0005629333 NetworkManager[49836]: <info>  [1772022362.0090] device (tap4bf8f3ad-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:26:02 np0005629333 NetworkManager[49836]: <info>  [1772022362.0102] device (tap4bf8f3ad-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.027 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1afddf7e-e429-43df-bbac-7638735ab72a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:02 np0005629333 nova_compute[244014]: 2026-02-25 12:26:02.029 244018 DEBUG nova.compute.manager [req-78269d9e-e463-4e29-b07a-c72b477f6a02 req-ee005971-2365-401f-a6e3-676169890ecf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Received event network-changed-fea2b354-8a21-4bff-bd90-431f8a17aa19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:26:02 np0005629333 nova_compute[244014]: 2026-02-25 12:26:02.030 244018 DEBUG nova.compute.manager [req-78269d9e-e463-4e29-b07a-c72b477f6a02 req-ee005971-2365-401f-a6e3-676169890ecf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Refreshing instance network info cache due to event network-changed-fea2b354-8a21-4bff-bd90-431f8a17aa19. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:26:02 np0005629333 nova_compute[244014]: 2026-02-25 12:26:02.030 244018 DEBUG oslo_concurrency.lockutils [req-78269d9e-e463-4e29-b07a-c72b477f6a02 req-ee005971-2365-401f-a6e3-676169890ecf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-679fb15f-b258-473a-8cdc-a2c143eb4d92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:26:02 np0005629333 nova_compute[244014]: 2026-02-25 12:26:02.030 244018 DEBUG oslo_concurrency.lockutils [req-78269d9e-e463-4e29-b07a-c72b477f6a02 req-ee005971-2365-401f-a6e3-676169890ecf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-679fb15f-b258-473a-8cdc-a2c143eb4d92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:26:02 np0005629333 nova_compute[244014]: 2026-02-25 12:26:02.031 244018 DEBUG nova.network.neutron [req-78269d9e-e463-4e29-b07a-c72b477f6a02 req-ee005971-2365-401f-a6e3-676169890ecf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Refreshing network info cache for port fea2b354-8a21-4bff-bd90-431f8a17aa19 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:26:02 np0005629333 systemd-udevd[288884]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.033 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb5ea11-4f47-4f45-bf9c-c57544170021]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:02 np0005629333 NetworkManager[49836]: <info>  [1772022362.0353] manager: (tap7693903d-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/198)
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.067 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0b013e80-98dd-4d85-86c1-d348f6595361]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.070 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[acbba05d-ce46-4a5d-a5ba-6cb9d4ea9748]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:02 np0005629333 NetworkManager[49836]: <info>  [1772022362.0917] device (tap7693903d-d0): carrier: link connected
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.098 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9e4c4ee0-a36a-44d4-8ef7-72414e274211]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.110 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[248e0952-30af-4c81-8dda-2726e10684fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7693903d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:f2:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 433149, 'reachable_time': 31366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288911, 'error': None, 'target': 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.126 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9d9d3407-3ce2-46fe-974c-81183d1a4c8c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec3:f240'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 433149, 'tstamp': 433149}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288913, 'error': None, 'target': 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.145 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ed21c935-af64-4ab9-9244-9929b39373d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7693903d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:f2:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 433149, 'reachable_time': 31366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288932, 'error': None, 'target': 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.167 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[66869389-2c8a-4946-bb8a-4dabdfadc636]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.210 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[23a46632-4851-483b-9e6a-c18a57cd217d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.212 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7693903d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.213 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.214 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7693903d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:02 np0005629333 NetworkManager[49836]: <info>  [1772022362.2165] manager: (tap7693903d-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/199)
Feb 25 07:26:02 np0005629333 kernel: tap7693903d-d0: entered promiscuous mode
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.220 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7693903d-d0, col_values=(('external_ids', {'iface-id': '6dc5897c-8765-434f-a79d-19523884d8ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
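
The three ovsdbapp transactions above replumb the metadata VETH: delete tap7693903d-d0 from br-ex if present, add it to br-int, and set external_ids:iface-id so ovn-controller can bind the port (the release of the old binding follows immediately below). A hedged sketch of issuing the same commands through ovsdbapp's public API; the method names are inferred from the command classes in the log, and the socket path and exact signatures may vary by ovsdbapp version:

    # Sketch: replaying the logged DelPortCommand / AddPortCommand /
    # DbSetCommand against the local OVS DB via ovsdbapp (socket path and
    # signatures assumed; verify against your ovsdbapp release).
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                          "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port("tap7693903d-d0", bridge="br-ex", if_exists=True))
        txn.add(api.add_port("br-int", "tap7693903d-d0", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap7693903d-d0",
            ("external_ids", {"iface-id": "6dc5897c-8765-434f-a79d-19523884d8ae"})))
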
Feb 25 07:26:02 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:02Z|00436|binding|INFO|Releasing lport 6dc5897c-8765-434f-a79d-19523884d8ae from this chassis (sb_readonly=0)
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.224 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7693903d-d5e2-4b50-a39b-bbbcc4148329.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7693903d-d5e2-4b50-a39b-bbbcc4148329.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.225 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c8693c16-6f1b-4b80-bece-11a78831af7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.225 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-7693903d-d5e2-4b50-a39b-bbbcc4148329
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/7693903d-d5e2-4b50-a39b-bbbcc4148329.pid.haproxy
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 7693903d-d5e2-4b50-a39b-bbbcc4148329
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
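
The generated configuration binds haproxy to 169.254.169.254:80 inside the ovnmeta- namespace and forwards every request to the agent's UNIX socket at /var/lib/neutron/metadata_proxy, injecting an X-OVN-Network-ID header so the agent knows which network the request originated from. A minimal illustrative client for that backend path, not part of neutron, assuming it runs somewhere the socket is visible:

    # Sketch: an HTTP request over the UNIX socket named by the "server
    # metadata /var/lib/neutron/metadata_proxy" line, carrying the
    # X-OVN-Network-ID header that haproxy adds. Illustrative only.
    import socket

    def metadata_request(path="/openstack/latest/meta_data.json",
                         sock="/var/lib/neutron/metadata_proxy",
                         network_id="7693903d-d5e2-4b50-a39b-bbbcc4148329"):
        req = (f"GET {path} HTTP/1.1\r\n"
               "Host: 169.254.169.254\r\n"
               f"X-OVN-Network-ID: {network_id}\r\n"
               "Connection: close\r\n\r\n")
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock)
            s.sendall(req.encode())
            chunks = []
            while data := s.recv(4096):
                chunks.append(data)
        return b"".join(chunks).decode(errors="replace")
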
Feb 25 07:26:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:02.226 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'env', 'PROCESS_TAG=haproxy-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7693903d-d5e2-4b50-a39b-bbbcc4148329.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 07:26:02 np0005629333 nova_compute[244014]: 2026-02-25 12:26:02.222 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:02 np0005629333 nova_compute[244014]: 2026-02-25 12:26:02.237 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:26:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1815433461' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:26:02 np0005629333 nova_compute[244014]: 2026-02-25 12:26:02.533 244018 DEBUG oslo_concurrency.processutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
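
The RBD backend shells out to the exact `ceph mon dump --format=json` invocation logged above to discover the monitor endpoints; those addresses are what later surface as the <host name=... port=.../> elements in the guest disk XML. A sketch of extracting them from the JSON, assuming the usual "mons"/"public_addr" layout of the dump (address formats can differ across Ceph releases):

    # Sketch: parse `ceph mon dump --format=json` (the command logged
    # above) into (host, port) pairs. "public_addr" is typically
    # "ip:port/nonce"; IPv6 and msgr2 address formats would need more care.
    import json
    import subprocess

    def rbd_mon_addresses():
        out = subprocess.check_output(
            ["ceph", "mon", "dump", "--format=json",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
        hosts = []
        for mon in json.loads(out).get("mons", []):
            addr = mon.get("public_addr", "").split("/")[0]
            if addr:
                host, _, port = addr.rpartition(":")
                hosts.append((host, port))
        return hosts
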
Feb 25 07:26:02 np0005629333 nova_compute[244014]: 2026-02-25 12:26:02.557 244018 DEBUG nova.storage.rbd_utils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image 679fb15f-b258-473a-8cdc-a2c143eb4d92_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:26:02 np0005629333 nova_compute[244014]: 2026-02-25 12:26:02.561 244018 DEBUG oslo_concurrency.processutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:02 np0005629333 nova_compute[244014]: 2026-02-25 12:26:02.625 244018 DEBUG nova.compute.manager [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:02 np0005629333 podman[288964]: 2026-02-25 12:26:02.567098881 +0000 UTC m=+0.048232866 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:26:02 np0005629333 podman[288964]: 2026-02-25 12:26:02.698864681 +0000 UTC m=+0.179998606 container create c7c03332f63549aaa6c7ae37db7e2c5546622e445dbfae87bdb87a41364cbc0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 25 07:26:02 np0005629333 nova_compute[244014]: 2026-02-25 12:26:02.743 244018 INFO nova.compute.manager [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] instance snapshotting#033[00m
Feb 25 07:26:02 np0005629333 nova_compute[244014]: 2026-02-25 12:26:02.744 244018 DEBUG nova.objects.instance [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'flavor' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:26:02 np0005629333 systemd[1]: Started libpod-conmon-c7c03332f63549aaa6c7ae37db7e2c5546622e445dbfae87bdb87a41364cbc0c.scope.
Feb 25 07:26:02 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:26:02 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f375a03ac1bcd5215d8d83e4e0e939faf3fd84985807e733642b344e8fa21f6d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:26:02 np0005629333 nova_compute[244014]: 2026-02-25 12:26:02.832 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022362.8320043, a826c2fd-1af8-4b55-b801-90ce87d04466 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:26:02 np0005629333 nova_compute[244014]: 2026-02-25 12:26:02.833 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] VM Started (Lifecycle Event)#033[00m
Feb 25 07:26:02 np0005629333 nova_compute[244014]: 2026-02-25 12:26:02.861 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:02 np0005629333 nova_compute[244014]: 2026-02-25 12:26:02.866 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022362.8325772, a826c2fd-1af8-4b55-b801-90ce87d04466 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:26:02 np0005629333 nova_compute[244014]: 2026-02-25 12:26:02.866 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:26:02 np0005629333 podman[288964]: 2026-02-25 12:26:02.878033571 +0000 UTC m=+0.359167506 container init c7c03332f63549aaa6c7ae37db7e2c5546622e445dbfae87bdb87a41364cbc0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 07:26:02 np0005629333 podman[288964]: 2026-02-25 12:26:02.885236305 +0000 UTC m=+0.366370210 container start c7c03332f63549aaa6c7ae37db7e2c5546622e445dbfae87bdb87a41364cbc0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:26:02 np0005629333 nova_compute[244014]: 2026-02-25 12:26:02.899 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:02 np0005629333 nova_compute[244014]: 2026-02-25 12:26:02.904 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:26:02 np0005629333 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[289057]: [NOTICE]   (289064) : New worker (289066) forked
Feb 25 07:26:02 np0005629333 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[289057]: [NOTICE]   (289064) : Loading success.
Feb 25 07:26:02 np0005629333 nova_compute[244014]: 2026-02-25 12:26:02.946 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
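
The two manager lines above show the lifecycle-event guard: the hypervisor reports the domain paused (VM power_state 3) while the DB still says building/spawning (power_state 0), and because a task is pending the sync is skipped rather than stomping on the in-flight spawn. A simplified sketch of that guard, not nova's code:

    # Simplified sketch of the sync_power_state guard the log describes:
    # never reconcile power state while the instance has a pending task.
    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            return f"pending task ({task_state}); skip"
        if db_power_state != vm_power_state:
            return "reconcile DB power state / vm_state"
        return "in sync"

    print(sync_power_state(0, 3, "spawning"))  # pending task (spawning); skip
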
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.077 244018 INFO nova.virt.libvirt.driver [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Beginning live snapshot process#033[00m
Feb 25 07:26:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:26:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1308210811' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.226 244018 DEBUG oslo_concurrency.processutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.666s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.229 244018 DEBUG nova.virt.libvirt.imagebackend [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.231 244018 DEBUG nova.virt.libvirt.vif [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:25:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1785986325',display_name='tempest-DeleteServersTestJSON-server-1785986325',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1785986325',id=52,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='daab2f813dbd467685c22833bf875ec9',ramdisk_id='',reservation_id='r-j1l7797p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-285157757',owner_user_name='tempest-DeleteServersTestJSON-285157757-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:25:57Z,user_data=None,user_id='9c44c1a95c8d4d97bc1d7dde284dcf1b',uuid=679fb15f-b258-473a-8cdc-a2c143eb4d92,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "address": "fa:16:3e:28:ef:45", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea2b354-8a", "ovs_interfaceid": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.231 244018 DEBUG nova.network.os_vif_util [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converting VIF {"id": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "address": "fa:16:3e:28:ef:45", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea2b354-8a", "ovs_interfaceid": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.232 244018 DEBUG nova.network.os_vif_util [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:ef:45,bridge_name='br-int',has_traffic_filtering=True,id=fea2b354-8a21-4bff-bd90-431f8a17aa19,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfea2b354-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.233 244018 DEBUG nova.objects.instance [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 679fb15f-b258-473a-8cdc-a2c143eb4d92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.268 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:26:03 np0005629333 nova_compute[244014]:  <uuid>679fb15f-b258-473a-8cdc-a2c143eb4d92</uuid>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:  <name>instance-00000034</name>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <nova:name>tempest-DeleteServersTestJSON-server-1785986325</nova:name>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:26:01</nova:creationTime>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:26:03 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:        <nova:user uuid="9c44c1a95c8d4d97bc1d7dde284dcf1b">tempest-DeleteServersTestJSON-285157757-project-member</nova:user>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:        <nova:project uuid="daab2f813dbd467685c22833bf875ec9">tempest-DeleteServersTestJSON-285157757</nova:project>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:        <nova:port uuid="fea2b354-8a21-4bff-bd90-431f8a17aa19">
Feb 25 07:26:03 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <entry name="serial">679fb15f-b258-473a-8cdc-a2c143eb4d92</entry>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <entry name="uuid">679fb15f-b258-473a-8cdc-a2c143eb4d92</entry>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/679fb15f-b258-473a-8cdc-a2c143eb4d92_disk">
Feb 25 07:26:03 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:26:03 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/679fb15f-b258-473a-8cdc-a2c143eb4d92_disk.config">
Feb 25 07:26:03 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:26:03 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:28:ef:45"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <target dev="tapfea2b354-8a"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/679fb15f-b258-473a-8cdc-a2c143eb4d92/console.log" append="off"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:26:03 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:26:03 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:26:03 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:26:03 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
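[annotation] The block ending above is the complete guest definition Nova's libvirt driver rendered in _get_guest_xml for instance 679fb15f-b258-473a-8cdc-a2c143eb4d92. A minimal sketch of the define-and-start step this XML feeds into, using the libvirt Python bindings; the connection URI and reading the XML from a file are illustrative assumptions, not values from this log:

```python
# Sketch: hand a prebuilt domain XML to libvirt and boot it, roughly the
# step that follows _get_guest_xml in the driver. URI/path are assumptions.
import libvirt

def define_and_start(xml_path="domain.xml", uri="qemu:///system"):
    with open(xml_path) as f:
        xml = f.read()
    conn = libvirt.open(uri)        # needs libvirtd and suitable privileges
    try:
        dom = conn.defineXML(xml)   # persist the definition
        dom.create()                # start the guest (equivalent of virsh start)
        return dom.name()
    finally:
        conn.close()
```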
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.269 244018 DEBUG nova.compute.manager [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Preparing to wait for external event network-vif-plugged-fea2b354-8a21-4bff-bd90-431f8a17aa19 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.269 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.269 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.270 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.270 244018 DEBUG nova.virt.libvirt.vif [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:25:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1785986325',display_name='tempest-DeleteServersTestJSON-server-1785986325',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1785986325',id=52,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='daab2f813dbd467685c22833bf875ec9',ramdisk_id='',reservation_id='r-j1l7797p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-285157757',owner_user_name='tempest-DeleteServersTestJSON-285157757-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:25:57Z,user_data=None,user_id='9c44c1a95c8d4d97bc1d7dde284dcf1b',uuid=679fb15f-b258-473a-8cdc-a2c143eb4d92,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "address": "fa:16:3e:28:ef:45", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea2b354-8a", "ovs_interfaceid": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.271 244018 DEBUG nova.network.os_vif_util [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converting VIF {"id": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "address": "fa:16:3e:28:ef:45", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea2b354-8a", "ovs_interfaceid": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.271 244018 DEBUG nova.network.os_vif_util [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:ef:45,bridge_name='br-int',has_traffic_filtering=True,id=fea2b354-8a21-4bff-bd90-431f8a17aa19,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfea2b354-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.272 244018 DEBUG os_vif [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:ef:45,bridge_name='br-int',has_traffic_filtering=True,id=fea2b354-8a21-4bff-bd90-431f8a17aa19,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfea2b354-8a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.272 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.273 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.273 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.276 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.276 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfea2b354-8a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.277 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfea2b354-8a, col_values=(('external_ids', {'iface-id': 'fea2b354-8a21-4bff-bd90-431f8a17aa19', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:ef:45', 'vm-uuid': '679fb15f-b258-473a-8cdc-a2c143eb4d92'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.278 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:03 np0005629333 NetworkManager[49836]: <info>  [1772022363.2793] manager: (tapfea2b354-8a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/200)
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.281 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.284 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.285 244018 INFO os_vif [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:ef:45,bridge_name='br-int',has_traffic_filtering=True,id=fea2b354-8a21-4bff-bd90-431f8a17aa19,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfea2b354-8a')#033[00m
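[annotation] The three OVSDB transactions above (AddBridgeCommand, AddPortCommand, DbSetCommand) are os-vif's idempotent plug of the tap device into br-int. A rough ovs-vsctl restatement with the names and IDs copied from those lines; os-vif drives the OVSDB IDL directly rather than shelling out, so this is an illustration only:

```python
# Sketch: the same plug expressed as one ovs-vsctl invocation; values are
# taken from the AddPortCommand/DbSetCommand transactions above.
import subprocess

subprocess.run(
    ["ovs-vsctl",
     "--may-exist", "add-br", "br-int",
     "--", "set", "Bridge", "br-int", "datapath_type=system",
     "--", "--may-exist", "add-port", "br-int", "tapfea2b354-8a",
     "--", "set", "Interface", "tapfea2b354-8a",
     "external_ids:iface-id=fea2b354-8a21-4bff-bd90-431f8a17aa19",
     "external_ids:iface-status=active",
     "external_ids:attached-mac=fa:16:3e:28:ef:45",
     "external_ids:vm-uuid=679fb15f-b258-473a-8cdc-a2c143eb4d92"],
    check=True)
```

The iface-id in external_ids is the key ovn-controller matches against a logical port, which is why the binding messages at 12:26:05 can claim this port once the interface shows up in OVS.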
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.369 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.369 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.369 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] No VIF found with MAC fa:16:3e:28:ef:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.370 244018 INFO nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Using config drive#033[00m
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.393 244018 DEBUG nova.storage.rbd_utils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image 679fb15f-b258-473a-8cdc-a2c143eb4d92_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:26:03 np0005629333 nova_compute[244014]: 2026-02-25 12:26:03.567 244018 DEBUG nova.storage.rbd_utils [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] creating snapshot(620e1832902b4b9abcaddca4af5c4c8f) on rbd image(b8086e43-4c45-422f-a3b5-fa665c256b30_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 25 07:26:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 484 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 12 MiB/s wr, 278 op/s
Feb 25 07:26:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Feb 25 07:26:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Feb 25 07:26:04 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Feb 25 07:26:04 np0005629333 nova_compute[244014]: 2026-02-25 12:26:04.644 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:04 np0005629333 nova_compute[244014]: 2026-02-25 12:26:04.751 244018 INFO nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Creating config drive at /var/lib/nova/instances/679fb15f-b258-473a-8cdc-a2c143eb4d92/disk.config#033[00m
Feb 25 07:26:04 np0005629333 nova_compute[244014]: 2026-02-25 12:26:04.759 244018 DEBUG oslo_concurrency.processutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/679fb15f-b258-473a-8cdc-a2c143eb4d92/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpwrh01z72 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:04 np0005629333 nova_compute[244014]: 2026-02-25 12:26:04.904 244018 DEBUG oslo_concurrency.processutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/679fb15f-b258-473a-8cdc-a2c143eb4d92/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpwrh01z72" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
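[annotation] The mkisofs run above packages a staged metadata tree (the /tmp/tmpwrh01z72 directory) into the ISO9660 config drive the guest will see as its cdrom, labeled config-2 so cloud-init can locate it. A sketch of the same step; the staging directory is whatever Nova rendered, shown here as a parameter:

```python
# Sketch: build a config-drive ISO with the flags visible in the logged
# command line. `staging_dir` is the caller's rendered metadata tree.
import subprocess

def build_config_drive(staging_dir, out_iso):
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", out_iso,
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute",  # nova passes its full version string
         "-quiet", "-J", "-r",
         "-V", "config-2",                   # volume label cloud-init searches for
         staging_dir],
        check=True)
```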
Feb 25 07:26:04 np0005629333 nova_compute[244014]: 2026-02-25 12:26:04.940 244018 DEBUG nova.storage.rbd_utils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image 679fb15f-b258-473a-8cdc-a2c143eb4d92_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:26:04 np0005629333 nova_compute[244014]: 2026-02-25 12:26:04.945 244018 DEBUG oslo_concurrency.processutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/679fb15f-b258-473a-8cdc-a2c143eb4d92/disk.config 679fb15f-b258-473a-8cdc-a2c143eb4d92_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:26:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.085 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updating instance_info_cache with network_info: [{"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.112 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.112 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 25 07:26:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Feb 25 07:26:05 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.294 244018 DEBUG nova.storage.rbd_utils [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] cloning vms/b8086e43-4c45-422f-a3b5-fa665c256b30_disk@620e1832902b4b9abcaddca4af5c4c8f to images/37d89e85-5a66-4a69-8ead-230ec4360b24 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.674 244018 DEBUG oslo_concurrency.processutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/679fb15f-b258-473a-8cdc-a2c143eb4d92/disk.config 679fb15f-b258-473a-8cdc-a2c143eb4d92_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.729s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.675 244018 INFO nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Deleting local config drive /var/lib/nova/instances/679fb15f-b258-473a-8cdc-a2c143eb4d92/disk.config because it was imported into RBD.#033[00m
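[annotation] With the instance's storage in Ceph, the ISO is then imported into the vms pool as 679fb15f-b258-473a-8cdc-a2c143eb4d92_disk.config and the local copy deleted, exactly the two steps logged above. A sketch of that pair, with arguments copied from the logged rbd command and error handling omitted:

```python
# Sketch: import the local config drive into RBD, then remove the local
# file, mirroring the logged `rbd import` and the deletion notice above.
import os
import subprocess

def import_and_cleanup(local_path, image_name, pool="vms",
                       client_id="openstack", conf="/etc/ceph/ceph.conf"):
    subprocess.run(
        ["rbd", "import", "--pool", pool, local_path, image_name,
         "--image-format=2", "--id", client_id, "--conf", conf],
        check=True)
    os.unlink(local_path)  # safe once the image exists in the pool
```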
Feb 25 07:26:05 np0005629333 kernel: tapfea2b354-8a: entered promiscuous mode
Feb 25 07:26:05 np0005629333 NetworkManager[49836]: <info>  [1772022365.7260] manager: (tapfea2b354-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/201)
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.730 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:05 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:05Z|00437|binding|INFO|Claiming lport fea2b354-8a21-4bff-bd90-431f8a17aa19 for this chassis.
Feb 25 07:26:05 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:05Z|00438|binding|INFO|fea2b354-8a21-4bff-bd90-431f8a17aa19: Claiming fa:16:3e:28:ef:45 10.100.0.12
Feb 25 07:26:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.744 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:ef:45 10.100.0.12'], port_security=['fa:16:3e:28:ef:45 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '679fb15f-b258-473a-8cdc-a2c143eb4d92', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'daab2f813dbd467685c22833bf875ec9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ccb27c94-53b6-4e09-9a40-41a94659ce9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96ce8d12-3a71-4b1e-a397-65d0b61f3794, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=fea2b354-8a21-4bff-bd90-431f8a17aa19) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.746 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.748 157129 INFO neutron.agent.ovn.metadata.agent [-] Port fea2b354-8a21-4bff-bd90-431f8a17aa19 in datapath a0d45b1c-1680-4599-a27a-6e3335c94c99 bound to our chassis#033[00m
Feb 25 07:26:05 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:05Z|00439|binding|INFO|Setting lport fea2b354-8a21-4bff-bd90-431f8a17aa19 ovn-installed in OVS
Feb 25 07:26:05 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:05Z|00440|binding|INFO|Setting lport fea2b354-8a21-4bff-bd90-431f8a17aa19 up in Southbound
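[annotation] ovn-controller has now claimed the logical port for this chassis and set it up in the Southbound database; that transition is what lets Neutron emit the network-vif-plugged event Nova is waiting on. The same state can be inspected by hand with ovn-sbctl; a sketch using standard ovn-sbctl usage, shown for illustration:

```python
# Sketch: read chassis/up for the logical port named in the binding lines
# above from the OVN Southbound DB.
import subprocess

out = subprocess.run(
    ["ovn-sbctl", "--columns=chassis,up", "find", "Port_Binding",
     "logical_port=fea2b354-8a21-4bff-bd90-431f8a17aa19"],
    check=True, capture_output=True, text=True)
print(out.stdout)
```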
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.750 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.751 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a0d45b1c-1680-4599-a27a-6e3335c94c99#033[00m
Feb 25 07:26:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.764 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[18e529dd-fc1b-492c-a8e5-8dff531fb946]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.765 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa0d45b1c-11 in ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:26:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.767 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa0d45b1c-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:26:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.767 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4f005b2b-ae95-4305-8edf-0a985850a14e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.768 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c3a7005a-b25d-4dcf-8b95-99771afe8347]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
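[annotation] Provisioning metadata for the network means building an ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 namespace with a VETH pair: tapa0d45b1c-10 stays in the root namespace to be plugged into br-int, while its peer tapa0d45b1c-11 is moved inside for the metadata listener. A rough iproute2 equivalent of what the privsep helper does here; illustrative only, since the agent uses pyroute2 through oslo.privsep rather than the ip binary:

```python
# Sketch: namespace plus veth pair as in the "Creating VETH tapa0d45b1c-11
# in ovnmeta-a0d45b1c-..." step above; requires root.
import subprocess

def provision(ns, outer, inner):
    run = lambda *cmd: subprocess.run(cmd, check=True)
    run("ip", "netns", "add", ns)
    run("ip", "link", "add", outer, "type", "veth", "peer", "name", inner)
    run("ip", "link", "set", inner, "netns", ns)   # move peer into the namespace
    run("ip", "link", "set", outer, "up")
    run("ip", "-n", ns, "link", "set", inner, "up")

provision("ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99",
          "tapa0d45b1c-10", "tapa0d45b1c-11")
```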
Feb 25 07:26:05 np0005629333 systemd-machined[210048]: New machine qemu-57-instance-00000034.
Feb 25 07:26:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.780 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[4c639fa9-0dbc-416a-be23-364d1e02a650]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.791 244018 DEBUG nova.compute.manager [req-6064286e-b71c-4787-ad2e-74d3a79e1a99 req-13f748d7-3140-4851-8622-5f1ba2d59c3b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Received event network-vif-plugged-4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.792 244018 DEBUG oslo_concurrency.lockutils [req-6064286e-b71c-4787-ad2e-74d3a79e1a99 req-13f748d7-3140-4851-8622-5f1ba2d59c3b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "a826c2fd-1af8-4b55-b801-90ce87d04466-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.792 244018 DEBUG oslo_concurrency.lockutils [req-6064286e-b71c-4787-ad2e-74d3a79e1a99 req-13f748d7-3140-4851-8622-5f1ba2d59c3b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "a826c2fd-1af8-4b55-b801-90ce87d04466-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.793 244018 DEBUG oslo_concurrency.lockutils [req-6064286e-b71c-4787-ad2e-74d3a79e1a99 req-13f748d7-3140-4851-8622-5f1ba2d59c3b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "a826c2fd-1af8-4b55-b801-90ce87d04466-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.793 244018 DEBUG nova.compute.manager [req-6064286e-b71c-4787-ad2e-74d3a79e1a99 req-13f748d7-3140-4851-8622-5f1ba2d59c3b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Processing event network-vif-plugged-4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.793 244018 DEBUG nova.compute.manager [req-6064286e-b71c-4787-ad2e-74d3a79e1a99 req-13f748d7-3140-4851-8622-5f1ba2d59c3b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Received event network-vif-plugged-4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:26:05 np0005629333 systemd[1]: Started Virtual Machine qemu-57-instance-00000034.
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.794 244018 DEBUG oslo_concurrency.lockutils [req-6064286e-b71c-4787-ad2e-74d3a79e1a99 req-13f748d7-3140-4851-8622-5f1ba2d59c3b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "a826c2fd-1af8-4b55-b801-90ce87d04466-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.794 244018 DEBUG oslo_concurrency.lockutils [req-6064286e-b71c-4787-ad2e-74d3a79e1a99 req-13f748d7-3140-4851-8622-5f1ba2d59c3b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "a826c2fd-1af8-4b55-b801-90ce87d04466-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.794 244018 DEBUG oslo_concurrency.lockutils [req-6064286e-b71c-4787-ad2e-74d3a79e1a99 req-13f748d7-3140-4851-8622-5f1ba2d59c3b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "a826c2fd-1af8-4b55-b801-90ce87d04466-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.795 244018 DEBUG nova.compute.manager [req-6064286e-b71c-4787-ad2e-74d3a79e1a99 req-13f748d7-3140-4851-8622-5f1ba2d59c3b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] No waiting events found dispatching network-vif-plugged-4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:26:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.794 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2ca5e77a-6bc2-4b10-812f-d4bdbfec94ed]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.795 244018 WARNING nova.compute.manager [req-6064286e-b71c-4787-ad2e-74d3a79e1a99 req-13f748d7-3140-4851-8622-5f1ba2d59c3b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Received unexpected event network-vif-plugged-4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf for instance with vm_state building and task_state spawning.#033[00m
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.796 244018 DEBUG nova.compute.manager [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
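[annotation] The lock/pop traffic above is Nova's external-event handshake: a waiter registers interest (prepare_for_instance_event) before the VIF is plugged, Neutron's event completes that waiter, and a second delivery of the same event, as for instance a826c2fd-1af8-4b55-b801-90ce87d04466 here, finds nobody registered and is logged as unexpected. A minimal sketch of the pattern, deliberately reduced to a dict of threading.Event objects; Nova's real registry is nova.compute.manager.InstanceEvents:

```python
# Sketch: prepare-then-wait for an external event; delivering an event with
# no registered waiter is what produces the "unexpected event" WARNING.
import threading

class EventRegistry:
    def __init__(self):
        self._lock = threading.Lock()
        self._waiters = {}                     # event name -> threading.Event

    def prepare(self, name):
        with self._lock:
            return self._waiters.setdefault(name, threading.Event())

    def deliver(self, name):
        with self._lock:
            ev = self._waiters.pop(name, None)
        if ev is None:
            print(f"unexpected event {name}")  # no one was waiting
        else:
            ev.set()

reg = EventRegistry()
w = reg.prepare("network-vif-plugged")   # register before plugging the VIF
reg.deliver("network-vif-plugged")       # completes the waiter
assert w.wait(timeout=300)
reg.deliver("network-vif-plugged")       # second delivery -> unexpected
```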
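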
Feb 25 07:26:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 484 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 8.6 MiB/s wr, 189 op/s
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.803 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022365.802834, a826c2fd-1af8-4b55-b801-90ce87d04466 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.803 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:26:05 np0005629333 systemd-udevd[289242]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.810 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.817 244018 INFO nova.virt.libvirt.driver [-] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Instance spawned successfully.#033[00m
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.818 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:26:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.818 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3bfeeded-cba0-41e5-9ffe-258220845475]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:05 np0005629333 NetworkManager[49836]: <info>  [1772022365.8235] device (tapfea2b354-8a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:26:05 np0005629333 NetworkManager[49836]: <info>  [1772022365.8245] device (tapfea2b354-8a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:26:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.830 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6229621e-2335-4a03-9d4b-e0d2d7111f9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:05 np0005629333 systemd-udevd[289245]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:26:05 np0005629333 NetworkManager[49836]: <info>  [1772022365.8313] manager: (tapa0d45b1c-10): new Veth device (/org/freedesktop/NetworkManager/Devices/202)
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.829 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.863 244018 DEBUG nova.storage.rbd_utils [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] flattening images/37d89e85-5a66-4a69-8ead-230ec4360b24 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
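[annotation] The create_snap (12:26:03.567), clone (12:26:05.294) and flatten lines trace the snapshot path for instance b8086e43-4c45-422f-a3b5-fa665c256b30: snapshot the running disk in vms, clone that snapshot into images, then flatten the clone so it no longer references its parent. A sketch with the python-rbd bindings, using pool and image names from the log; note a snapshot must be protected before it can be cloned:

```python
# Sketch: snapshot -> clone -> flatten across pools, as in the three
# rbd_utils log lines; credentials match the logged rbd CLI usage.
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
cluster.connect()
vms, images = cluster.open_ioctx("vms"), cluster.open_ioctx("images")
try:
    src = "b8086e43-4c45-422f-a3b5-fa665c256b30_disk"
    snap = "620e1832902b4b9abcaddca4af5c4c8f"
    dst = "37d89e85-5a66-4a69-8ead-230ec4360b24"
    with rbd.Image(vms, src) as img:
        img.create_snap(snap)
        img.protect_snap(snap)   # cloning requires a protected snapshot
    rbd.RBD().clone(vms, src, snap, images, dst)
    with rbd.Image(images, dst) as clone:
        clone.flatten()          # copy parent data; break the dependency
finally:
    vms.close(); images.close(); cluster.shutdown()
```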
Feb 25 07:26:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.870 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[95455d47-613e-4f89-829c-a41cec13dead]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.874 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[159467a2-75f2-4f3b-841e-89072495960a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:05 np0005629333 NetworkManager[49836]: <info>  [1772022365.8999] device (tapa0d45b1c-10): carrier: link connected
Feb 25 07:26:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.906 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[fae049b7-ef2b-430f-a3f2-dec2bbb3cbd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.923 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed4552f-d5bf-46ce-a0e4-7ed9a8f137dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0d45b1c-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:2b:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 433530, 'reachable_time': 28858, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289289, 'error': None, 'target': 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.938 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4de577c5-99b9-46d0-bb50-ed49b0e3915f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:2bc6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 433530, 'tstamp': 433530}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289290, 'error': None, 'target': 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.963 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0c96fc5d-3a05-48c5-a9de-6708907f106a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0d45b1c-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:2b:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 433530, 'reachable_time': 28858, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289291, 'error': None, 'target': 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.976 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.978 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.981 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.982 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.983 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.983 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.984 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:05 np0005629333 nova_compute[244014]: 2026-02-25 12:26:05.984 244018 DEBUG nova.virt.libvirt.driver [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:05.998 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[716f5e85-29e1-434e-b7a1-c9d456191cca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:06 np0005629333 nova_compute[244014]: 2026-02-25 12:26:06.026 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
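[annotation] The power-state sync above compares the DB value (0, NOSTATE) with the hypervisor's view (1, RUNNING) and skips because the spawn task still owns the instance. The hypervisor side of that comparison reduces to one libvirt call; a sketch, with the URI and domain name shown only as examples (instance-00000034 is the machine systemd started at 12:26:05):

```python
# Sketch: fetch the domain state the sync compares against the DB;
# libvirt.VIR_DOMAIN_RUNNING == 1, matching "VM power_state: 1" above.
import libvirt

def domain_state(name, uri="qemu:///system"):
    conn = libvirt.open(uri)
    try:
        state, _reason = conn.lookupByName(name).state()
        return state
    finally:
        conn.close()

print(domain_state("instance-00000034"))
```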
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:06.064 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5fabcba7-611c-46ed-9804-6f6b5ae9937f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:06.066 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0d45b1c-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:06.067 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:06.068 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0d45b1c-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:06 np0005629333 kernel: tapa0d45b1c-10: entered promiscuous mode
Feb 25 07:26:06 np0005629333 NetworkManager[49836]: <info>  [1772022366.0720] manager: (tapa0d45b1c-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/203)
Feb 25 07:26:06 np0005629333 nova_compute[244014]: 2026-02-25 12:26:06.072 244018 INFO nova.compute.manager [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Took 10.66 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:26:06 np0005629333 nova_compute[244014]: 2026-02-25 12:26:06.073 244018 DEBUG nova.compute.manager [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:06 np0005629333 nova_compute[244014]: 2026-02-25 12:26:06.073 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:06.080 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa0d45b1c-10, col_values=(('external_ids', {'iface-id': '7c89df63-779c-4e1c-9e58-76a4001fabc2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
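[The three ovsdbapp commands above (DelPortCommand, AddPortCommand, DbSetCommand) show the metadata agent removing tapa0d45b1c-10 from br-ex if present, plugging it into br-int, and tagging the interface with its Neutron port id. A minimal sketch of issuing the same transaction through ovsdbapp follows; the OVSDB socket path and timeout are assumptions, while the port, bridge, and iface-id values are copied from the log.]

    # Minimal ovsdbapp sketch; socket path and timeout are assumed.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # One transaction carrying the same three commands as the log above.
    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.del_port('tapa0d45b1c-10', bridge='br-ex', if_exists=True))
        txn.add(ovs.add_port('br-int', 'tapa0d45b1c-10', may_exist=True))
        txn.add(ovs.db_set(
            'Interface', 'tapa0d45b1c-10',
            ('external_ids',
             {'iface-id': '7c89df63-779c-4e1c-9e58-76a4001fabc2'})))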
Feb 25 07:26:06 np0005629333 nova_compute[244014]: 2026-02-25 12:26:06.081 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:06 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:06Z|00441|binding|INFO|Releasing lport 7c89df63-779c-4e1c-9e58-76a4001fabc2 from this chassis (sb_readonly=0)
Feb 25 07:26:06 np0005629333 nova_compute[244014]: 2026-02-25 12:26:06.082 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:06.083 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a0d45b1c-1680-4599-a27a-6e3335c94c99.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a0d45b1c-1680-4599-a27a-6e3335c94c99.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:06.084 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[47de337f-e114-406b-838d-faf5c3687f48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:06.085 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-a0d45b1c-1680-4599-a27a-6e3335c94c99
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/a0d45b1c-1680-4599-a27a-6e3335c94c99.pid.haproxy
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID a0d45b1c-1680-4599-a27a-6e3335c94c99
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
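[The block above is the full haproxy configuration the agent renders for network a0d45b1c-1680-4599-a27a-6e3335c94c99: a listener on 169.254.169.254:80 forwarding to the Unix socket at /var/lib/neutron/metadata_proxy, with an X-OVN-Network-ID header added per request. A rendered file like this can be sanity-checked with haproxy's check-only mode before it is launched; a sketch, assuming the haproxy binary is on PATH and using the config path from the create_process line below.]

    import subprocess

    # '-c' is haproxy's check-only mode: the config is parsed, nothing starts.
    cfg = '/var/lib/neutron/ovn-metadata-proxy/a0d45b1c-1680-4599-a27a-6e3335c94c99.conf'
    res = subprocess.run(['haproxy', '-c', '-f', cfg],
                         capture_output=True, text=True)
    if res.returncode != 0:
        raise RuntimeError('haproxy config check failed: ' + res.stderr)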
Feb 25 07:26:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:06.086 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'env', 'PROCESS_TAG=haproxy-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a0d45b1c-1680-4599-a27a-6e3335c94c99.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 07:26:06 np0005629333 nova_compute[244014]: 2026-02-25 12:26:06.090 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:06 np0005629333 nova_compute[244014]: 2026-02-25 12:26:06.154 244018 INFO nova.compute.manager [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Took 11.84 seconds to build instance.#033[00m
Feb 25 07:26:06 np0005629333 nova_compute[244014]: 2026-02-25 12:26:06.181 244018 DEBUG nova.network.neutron [req-78269d9e-e463-4e29-b07a-c72b477f6a02 req-ee005971-2365-401f-a6e3-676169890ecf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Updated VIF entry in instance network info cache for port fea2b354-8a21-4bff-bd90-431f8a17aa19. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:26:06 np0005629333 nova_compute[244014]: 2026-02-25 12:26:06.182 244018 DEBUG nova.network.neutron [req-78269d9e-e463-4e29-b07a-c72b477f6a02 req-ee005971-2365-401f-a6e3-676169890ecf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Updating instance_info_cache with network_info: [{"id": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "address": "fa:16:3e:28:ef:45", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea2b354-8a", "ovs_interfaceid": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:26:06 np0005629333 nova_compute[244014]: 2026-02-25 12:26:06.200 244018 DEBUG oslo_concurrency.lockutils [None req-399fef52-b12a-4faa-9450-84c972715d02 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "a826c2fd-1af8-4b55-b801-90ce87d04466" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:06 np0005629333 nova_compute[244014]: 2026-02-25 12:26:06.204 244018 DEBUG oslo_concurrency.lockutils [req-78269d9e-e463-4e29-b07a-c72b477f6a02 req-ee005971-2365-401f-a6e3-676169890ecf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-679fb15f-b258-473a-8cdc-a2c143eb4d92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:26:06 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:06Z|00442|binding|INFO|Releasing lport 7c89df63-779c-4e1c-9e58-76a4001fabc2 from this chassis (sb_readonly=0)
Feb 25 07:26:06 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:06Z|00443|binding|INFO|Releasing lport 81f0f54c-4e04-4adf-952f-b6d0fe9698c7 from this chassis (sb_readonly=0)
Feb 25 07:26:06 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:06Z|00444|binding|INFO|Releasing lport 6dc5897c-8765-434f-a79d-19523884d8ae from this chassis (sb_readonly=0)
Feb 25 07:26:06 np0005629333 nova_compute[244014]: 2026-02-25 12:26:06.298 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:06 np0005629333 nova_compute[244014]: 2026-02-25 12:26:06.307 244018 DEBUG nova.compute.manager [req-38de0482-6df2-4742-95f1-0c7c98ffb9e4 req-db80227c-f66a-4565-abfc-ef3253aab614 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Received event network-vif-plugged-fea2b354-8a21-4bff-bd90-431f8a17aa19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:26:06 np0005629333 nova_compute[244014]: 2026-02-25 12:26:06.308 244018 DEBUG oslo_concurrency.lockutils [req-38de0482-6df2-4742-95f1-0c7c98ffb9e4 req-db80227c-f66a-4565-abfc-ef3253aab614 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:06 np0005629333 nova_compute[244014]: 2026-02-25 12:26:06.308 244018 DEBUG oslo_concurrency.lockutils [req-38de0482-6df2-4742-95f1-0c7c98ffb9e4 req-db80227c-f66a-4565-abfc-ef3253aab614 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:06 np0005629333 nova_compute[244014]: 2026-02-25 12:26:06.308 244018 DEBUG oslo_concurrency.lockutils [req-38de0482-6df2-4742-95f1-0c7c98ffb9e4 req-db80227c-f66a-4565-abfc-ef3253aab614 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:06 np0005629333 nova_compute[244014]: 2026-02-25 12:26:06.308 244018 DEBUG nova.compute.manager [req-38de0482-6df2-4742-95f1-0c7c98ffb9e4 req-db80227c-f66a-4565-abfc-ef3253aab614 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Processing event network-vif-plugged-fea2b354-8a21-4bff-bd90-431f8a17aa19 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
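[The Acquiring/acquired/released triplet around "679fb15f-...-events" comes from oslo.concurrency's lockutils, which serializes event handling per instance and emits the waited/held debug lines from its wrapper. A minimal sketch of the same pattern; the function body is illustrative, only the lock key is taken from the log.]

    from oslo_concurrency import lockutils

    # Callers on the same per-instance key run one at a time; the
    # decorator logs the "waited"/"held" durations seen above.
    @lockutils.synchronized('679fb15f-b258-473a-8cdc-a2c143eb4d92-events')
    def _pop_event():
        pass  # illustrative body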
Feb 25 07:26:06 np0005629333 podman[289341]: 2026-02-25 12:26:06.436161691 +0000 UTC m=+0.033488329 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:26:07 np0005629333 podman[289341]: 2026-02-25 12:26:07.567911021 +0000 UTC m=+1.165237659 container create dcfe6ead5fc444f1ddac2814a604c840a5d4738734a0af9e5cc86c6a606d5232 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:26:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 484 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 9.6 MiB/s rd, 7.9 MiB/s wr, 310 op/s
Feb 25 07:26:07 np0005629333 nova_compute[244014]: 2026-02-25 12:26:07.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:26:07 np0005629333 nova_compute[244014]: 2026-02-25 12:26:07.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:26:07 np0005629333 nova_compute[244014]: 2026-02-25 12:26:07.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:26:07 np0005629333 nova_compute[244014]: 2026-02-25 12:26:07.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
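[The "Running periodic task ComputeManager._*" lines are oslo.service's periodic-task loop walking decorated methods on the manager. A minimal sketch of how such a task is declared, assuming default configuration; the method name mirrors one from the log.]

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        # run_periodic_tasks() invokes this on its spacing interval and
        # emits the "Running periodic task" debug line seen above.
        @periodic_task.periodic_task(spacing=60)
        def _check_instance_build_time(self, context):
            pass  # illustrative body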
Feb 25 07:26:07 np0005629333 systemd[1]: Started libpod-conmon-dcfe6ead5fc444f1ddac2814a604c840a5d4738734a0af9e5cc86c6a606d5232.scope.
Feb 25 07:26:07 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:26:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef35dca2b6cc01530d501e999723c27176f167b9ee93f4dca8bdbb9412fbbf6a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:26:07 np0005629333 nova_compute[244014]: 2026-02-25 12:26:07.957 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022367.9574857, 679fb15f-b258-473a-8cdc-a2c143eb4d92 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:26:07 np0005629333 nova_compute[244014]: 2026-02-25 12:26:07.958 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] VM Started (Lifecycle Event)#033[00m
Feb 25 07:26:07 np0005629333 nova_compute[244014]: 2026-02-25 12:26:07.960 244018 DEBUG nova.compute.manager [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:26:07 np0005629333 nova_compute[244014]: 2026-02-25 12:26:07.963 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:26:07 np0005629333 nova_compute[244014]: 2026-02-25 12:26:07.968 244018 INFO nova.virt.libvirt.driver [-] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Instance spawned successfully.#033[00m
Feb 25 07:26:07 np0005629333 nova_compute[244014]: 2026-02-25 12:26:07.969 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:26:07 np0005629333 nova_compute[244014]: 2026-02-25 12:26:07.982 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:07 np0005629333 nova_compute[244014]: 2026-02-25 12:26:07.987 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:26:07 np0005629333 nova_compute[244014]: 2026-02-25 12:26:07.991 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:07 np0005629333 nova_compute[244014]: 2026-02-25 12:26:07.991 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:07 np0005629333 nova_compute[244014]: 2026-02-25 12:26:07.992 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:07 np0005629333 nova_compute[244014]: 2026-02-25 12:26:07.992 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:07 np0005629333 nova_compute[244014]: 2026-02-25 12:26:07.992 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:07 np0005629333 nova_compute[244014]: 2026-02-25 12:26:07.993 244018 DEBUG nova.virt.libvirt.driver [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:08 np0005629333 nova_compute[244014]: 2026-02-25 12:26:08.016 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:26:08 np0005629333 nova_compute[244014]: 2026-02-25 12:26:08.016 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022367.9576776, 679fb15f-b258-473a-8cdc-a2c143eb4d92 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:26:08 np0005629333 nova_compute[244014]: 2026-02-25 12:26:08.017 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:26:08 np0005629333 nova_compute[244014]: 2026-02-25 12:26:08.043 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:08 np0005629333 nova_compute[244014]: 2026-02-25 12:26:08.046 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022367.9635012, 679fb15f-b258-473a-8cdc-a2c143eb4d92 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:26:08 np0005629333 nova_compute[244014]: 2026-02-25 12:26:08.046 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:26:08 np0005629333 nova_compute[244014]: 2026-02-25 12:26:08.054 244018 INFO nova.compute.manager [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Took 10.09 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:26:08 np0005629333 nova_compute[244014]: 2026-02-25 12:26:08.055 244018 DEBUG nova.compute.manager [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:08 np0005629333 nova_compute[244014]: 2026-02-25 12:26:08.066 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:08 np0005629333 nova_compute[244014]: 2026-02-25 12:26:08.068 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:26:08 np0005629333 nova_compute[244014]: 2026-02-25 12:26:08.120 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
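[This is the third occurrence in this window (twice for 679fb15f-..., once for a826c2fd-...) of the same pattern: the lifecycle sync sees DB power_state 0 against VM power_state 1 but declines to act because task_state is still 'spawning'. A distilled sketch of that guard; the function and return values are illustrative, not Nova's actual code, though the skip-on-pending-task rule matches the log.]

    # Power states per the log: 0 = NOSTATE (DB), 1 = RUNNING (hypervisor).
    def maybe_sync_power_state(task_state, db_power_state, vm_power_state):
        if task_state is not None:          # e.g. 'spawning': skip, as logged
            return 'skip: pending task'
        if db_power_state != vm_power_state:
            return 'reconcile DB with hypervisor'
        return 'in sync'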
Feb 25 07:26:08 np0005629333 nova_compute[244014]: 2026-02-25 12:26:08.148 244018 INFO nova.compute.manager [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Took 11.59 seconds to build instance.#033[00m
Feb 25 07:26:08 np0005629333 nova_compute[244014]: 2026-02-25 12:26:08.173 244018 DEBUG oslo_concurrency.lockutils [None req-a24368bc-437e-4170-a502-3bc2129bb115 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:08 np0005629333 podman[289341]: 2026-02-25 12:26:08.182816093 +0000 UTC m=+1.780142691 container init dcfe6ead5fc444f1ddac2814a604c840a5d4738734a0af9e5cc86c6a606d5232 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 25 07:26:08 np0005629333 podman[289341]: 2026-02-25 12:26:08.190070508 +0000 UTC m=+1.787397106 container start dcfe6ead5fc444f1ddac2814a604c840a5d4738734a0af9e5cc86c6a606d5232 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:26:08 np0005629333 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[289379]: [NOTICE]   (289385) : New worker (289387) forked
Feb 25 07:26:08 np0005629333 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[289379]: [NOTICE]   (289385) : Loading success.
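[haproxy is now serving from inside the podman wrapper container created at 12:26:07 and started at 12:26:08. One quick way to confirm the wrapper's state from the host, as a sketch; the container name is copied from the log, and the podman CLI is assumed to be on the host.]

    import subprocess

    name = 'neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99'
    # Go-template query against podman's inspect output.
    status = subprocess.check_output(
        ['podman', 'inspect', '--format', '{{.State.Status}}', name],
        text=True).strip()
    assert status == 'running', status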
Feb 25 07:26:08 np0005629333 nova_compute[244014]: 2026-02-25 12:26:08.281 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:08 np0005629333 nova_compute[244014]: 2026-02-25 12:26:08.809 244018 DEBUG nova.compute.manager [req-9f0b781c-3895-41ad-aa44-1cbaf960d239 req-abda2d74-f4c7-44b4-ae71-7ea0fe0c7aa5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Received event network-vif-plugged-fea2b354-8a21-4bff-bd90-431f8a17aa19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:26:08 np0005629333 nova_compute[244014]: 2026-02-25 12:26:08.810 244018 DEBUG oslo_concurrency.lockutils [req-9f0b781c-3895-41ad-aa44-1cbaf960d239 req-abda2d74-f4c7-44b4-ae71-7ea0fe0c7aa5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:08 np0005629333 nova_compute[244014]: 2026-02-25 12:26:08.810 244018 DEBUG oslo_concurrency.lockutils [req-9f0b781c-3895-41ad-aa44-1cbaf960d239 req-abda2d74-f4c7-44b4-ae71-7ea0fe0c7aa5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:08 np0005629333 nova_compute[244014]: 2026-02-25 12:26:08.810 244018 DEBUG oslo_concurrency.lockutils [req-9f0b781c-3895-41ad-aa44-1cbaf960d239 req-abda2d74-f4c7-44b4-ae71-7ea0fe0c7aa5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:08 np0005629333 nova_compute[244014]: 2026-02-25 12:26:08.810 244018 DEBUG nova.compute.manager [req-9f0b781c-3895-41ad-aa44-1cbaf960d239 req-abda2d74-f4c7-44b4-ae71-7ea0fe0c7aa5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] No waiting events found dispatching network-vif-plugged-fea2b354-8a21-4bff-bd90-431f8a17aa19 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:26:08 np0005629333 nova_compute[244014]: 2026-02-25 12:26:08.810 244018 WARNING nova.compute.manager [req-9f0b781c-3895-41ad-aa44-1cbaf960d239 req-abda2d74-f4c7-44b4-ae71-7ea0fe0c7aa5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Received unexpected event network-vif-plugged-fea2b354-8a21-4bff-bd90-431f8a17aa19 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:26:08 np0005629333 nova_compute[244014]: 2026-02-25 12:26:08.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:26:08 np0005629333 nova_compute[244014]: 2026-02-25 12:26:08.920 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:08 np0005629333 nova_compute[244014]: 2026-02-25 12:26:08.921 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:08 np0005629333 nova_compute[244014]: 2026-02-25 12:26:08.921 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:08 np0005629333 nova_compute[244014]: 2026-02-25 12:26:08.921 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:26:08 np0005629333 nova_compute[244014]: 2026-02-25 12:26:08.922 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:26:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1732650868' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:26:09 np0005629333 nova_compute[244014]: 2026-02-25 12:26:09.522 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
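[The resource tracker audits Ceph-backed disk by shelling out to the `ceph df` command shown above (0.600s here). A sketch of the same call and the headline fields, assuming the standard JSON layout of `ceph df --format=json`; flags are copied from the log.]

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    # Compare with the pgmap lines: 59 GiB / 60 GiB avail.
    print('free GiB:', stats['total_avail_bytes'] / 1024 ** 3)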
Feb 25 07:26:09 np0005629333 nova_compute[244014]: 2026-02-25 12:26:09.563 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022354.5611358, 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:26:09 np0005629333 nova_compute[244014]: 2026-02-25 12:26:09.563 244018 INFO nova.compute.manager [-] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:26:09 np0005629333 nova_compute[244014]: 2026-02-25 12:26:09.596 244018 DEBUG nova.compute.manager [None req-e4580369-bca1-4f97-9536-68e60431feae - - - - - -] [instance: 3536c6e8-1cc8-418f-9ad8-8a5096b9c8d2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:09 np0005629333 nova_compute[244014]: 2026-02-25 12:26:09.623 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:26:09 np0005629333 nova_compute[244014]: 2026-02-25 12:26:09.623 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:26:09 np0005629333 nova_compute[244014]: 2026-02-25 12:26:09.629 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:26:09 np0005629333 nova_compute[244014]: 2026-02-25 12:26:09.629 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:26:09 np0005629333 nova_compute[244014]: 2026-02-25 12:26:09.634 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:26:09 np0005629333 nova_compute[244014]: 2026-02-25 12:26:09.634 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:26:09 np0005629333 nova_compute[244014]: 2026-02-25 12:26:09.646 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:09 np0005629333 nova_compute[244014]: 2026-02-25 12:26:09.709 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022354.708602, ec873c3c-bf46-4537-8c29-b23a3133d281 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:26:09 np0005629333 nova_compute[244014]: 2026-02-25 12:26:09.709 244018 INFO nova.compute.manager [-] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:26:09 np0005629333 nova_compute[244014]: 2026-02-25 12:26:09.734 244018 DEBUG nova.storage.rbd_utils [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] removing snapshot(620e1832902b4b9abcaddca4af5c4c8f) on rbd image(b8086e43-4c45-422f-a3b5-fa665c256b30_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 25 07:26:09 np0005629333 nova_compute[244014]: 2026-02-25 12:26:09.741 244018 DEBUG nova.compute.manager [None req-c37593e0-4014-4c7a-a1d2-5be128e34bba - - - - - -] [instance: ec873c3c-bf46-4537-8c29-b23a3133d281] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 484 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 1.0 MiB/s wr, 142 op/s
Feb 25 07:26:09 np0005629333 nova_compute[244014]: 2026-02-25 12:26:09.817 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:09.822 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:26:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:09.824 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
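[The matched UPDATE above is ovsdbapp's event framework noticing nb_cfg move from 13 to 14 on the single SB_Global row, after which the agent defers its chassis-table refresh (7 seconds here). A sketch of how such an event is declared against the southbound IDL; registration with the IDL's notify handler varies by agent and is left out, and the handler body is illustrative.]

    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        # Matches any update to SB_Global, as in the repr logged above:
        # events=('update',), table='SB_Global', conditions=None.
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

        def run(self, event, row, old):
            # the real agent schedules a delayed chassis update here
            print('nb_cfg is now', row.nb_cfg)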
Feb 25 07:26:09 np0005629333 nova_compute[244014]: 2026-02-25 12:26:09.914 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:26:09 np0005629333 nova_compute[244014]: 2026-02-25 12:26:09.915 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3472MB free_disk=59.900443555787206GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 07:26:09 np0005629333 nova_compute[244014]: 2026-02-25 12:26:09.916 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:09 np0005629333 nova_compute[244014]: 2026-02-25 12:26:09.916 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:10 np0005629333 nova_compute[244014]: 2026-02-25 12:26:10.033 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance b8086e43-4c45-422f-a3b5-fa665c256b30 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:26:10 np0005629333 nova_compute[244014]: 2026-02-25 12:26:10.034 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance a826c2fd-1af8-4b55-b801-90ce87d04466 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:26:10 np0005629333 nova_compute[244014]: 2026-02-25 12:26:10.034 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 679fb15f-b258-473a-8cdc-a2c143eb4d92 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:26:10 np0005629333 nova_compute[244014]: 2026-02-25 12:26:10.034 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 07:26:10 np0005629333 nova_compute[244014]: 2026-02-25 12:26:10.035 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 07:26:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:26:10 np0005629333 nova_compute[244014]: 2026-02-25 12:26:10.134 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Feb 25 07:26:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Feb 25 07:26:10 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Feb 25 07:26:10 np0005629333 nova_compute[244014]: 2026-02-25 12:26:10.550 244018 DEBUG nova.storage.rbd_utils [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] creating snapshot(snap) on rbd image(37d89e85-5a66-4a69-8ead-230ec4360b24) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 25 07:26:10 np0005629333 nova_compute[244014]: 2026-02-25 12:26:10.692 244018 DEBUG nova.compute.manager [None req-e80dbe77-6d57-4cdf-a987-c1abb7d4834a 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:26:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3604942363' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:26:10 np0005629333 nova_compute[244014]: 2026-02-25 12:26:10.719 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:26:10 np0005629333 nova_compute[244014]: 2026-02-25 12:26:10.724 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:26:10 np0005629333 nova_compute[244014]: 2026-02-25 12:26:10.744 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
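[Placement derives capacity per resource class as (total - reserved) * allocation_ratio, so the unchanged inventory above advertises 32 VCPU, 7167 MB of RAM, and 52.2 GB of disk. The arithmetic, spelled out with the values from the log line:]

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2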
Feb 25 07:26:10 np0005629333 nova_compute[244014]: 2026-02-25 12:26:10.753 244018 INFO nova.compute.manager [None req-e80dbe77-6d57-4cdf-a987-c1abb7d4834a 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] instance snapshotting#033[00m
Feb 25 07:26:10 np0005629333 nova_compute[244014]: 2026-02-25 12:26:10.794 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 07:26:10 np0005629333 nova_compute[244014]: 2026-02-25 12:26:10.795 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.879s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:10 np0005629333 nova_compute[244014]: 2026-02-25 12:26:10.837 244018 DEBUG oslo_concurrency.lockutils [None req-08368522-f82e-41c2-9bfb-6acb020d3bce 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "679fb15f-b258-473a-8cdc-a2c143eb4d92" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:10 np0005629333 nova_compute[244014]: 2026-02-25 12:26:10.837 244018 DEBUG oslo_concurrency.lockutils [None req-08368522-f82e-41c2-9bfb-6acb020d3bce 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:10 np0005629333 nova_compute[244014]: 2026-02-25 12:26:10.838 244018 DEBUG nova.compute.manager [None req-08368522-f82e-41c2-9bfb-6acb020d3bce 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:10 np0005629333 nova_compute[244014]: 2026-02-25 12:26:10.841 244018 DEBUG nova.compute.manager [None req-08368522-f82e-41c2-9bfb-6acb020d3bce 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Feb 25 07:26:10 np0005629333 nova_compute[244014]: 2026-02-25 12:26:10.842 244018 DEBUG nova.objects.instance [None req-08368522-f82e-41c2-9bfb-6acb020d3bce 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'flavor' on Instance uuid 679fb15f-b258-473a-8cdc-a2c143eb4d92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:26:10 np0005629333 nova_compute[244014]: 2026-02-25 12:26:10.868 244018 DEBUG nova.virt.libvirt.driver [None req-08368522-f82e-41c2-9bfb-6acb020d3bce 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 25 07:26:11 np0005629333 nova_compute[244014]: 2026-02-25 12:26:11.131 244018 INFO nova.virt.libvirt.driver [None req-e80dbe77-6d57-4cdf-a987-c1abb7d4834a 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Beginning live snapshot process#033[00m
Feb 25 07:26:11 np0005629333 nova_compute[244014]: 2026-02-25 12:26:11.285 244018 DEBUG nova.virt.libvirt.imagebackend [None req-e80dbe77-6d57-4cdf-a987-c1abb7d4834a 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Feb 25 07:26:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Feb 25 07:26:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Feb 25 07:26:11 np0005629333 nova_compute[244014]: 2026-02-25 12:26:11.742 244018 DEBUG nova.storage.rbd_utils [None req-e80dbe77-6d57-4cdf-a987-c1abb7d4834a 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] creating snapshot(004186d8a7564dabbca17fe0d43e3e73) on rbd image(a826c2fd-1af8-4b55-b801-90ce87d04466_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 25 07:26:11 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Feb 25 07:26:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 507 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 9.8 MiB/s rd, 1.3 MiB/s wr, 256 op/s
Feb 25 07:26:11 np0005629333 nova_compute[244014]: 2026-02-25 12:26:11.938 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:26:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Feb 25 07:26:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Feb 25 07:26:12 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Feb 25 07:26:13 np0005629333 nova_compute[244014]: 2026-02-25 12:26:13.284 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:13 np0005629333 nova_compute[244014]: 2026-02-25 12:26:13.538 244018 DEBUG nova.storage.rbd_utils [None req-e80dbe77-6d57-4cdf-a987-c1abb7d4834a 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] cloning vms/a826c2fd-1af8-4b55-b801-90ce87d04466_disk@004186d8a7564dabbca17fe0d43e3e73 to images/cd3e1cc7-9e00-4edc-af0c-3ab968bb14ad clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 25 07:26:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 563 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 7.8 MiB/s wr, 264 op/s
Feb 25 07:26:13 np0005629333 nova_compute[244014]: 2026-02-25 12:26:13.831 244018 INFO nova.virt.libvirt.driver [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Snapshot image upload complete#033[00m
Feb 25 07:26:13 np0005629333 nova_compute[244014]: 2026-02-25 12:26:13.832 244018 INFO nova.compute.manager [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Took 11.06 seconds to snapshot the instance on the hypervisor.#033[00m
Feb 25 07:26:13 np0005629333 nova_compute[244014]: 2026-02-25 12:26:13.849 244018 DEBUG nova.storage.rbd_utils [None req-e80dbe77-6d57-4cdf-a987-c1abb7d4834a 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] flattening images/cd3e1cc7-9e00-4edc-af0c-3ab968bb14ad flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb 25 07:26:13 np0005629333 nova_compute[244014]: 2026-02-25 12:26:13.937 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:26:13 np0005629333 nova_compute[244014]: 2026-02-25 12:26:13.937 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
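The two periodic-task entries above come from oslo.service: nova's ComputeManager methods are decorated with @periodic_task.periodic_task, collected on a PeriodicTasks subclass, and fired by run_periodic_tasks(). A minimal sketch of that machinery using only the public oslo.service API; the 60-second spacing is an arbitrary example and the method body is elided:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)  # arbitrary example interval
        def _poll_volume_usage(self, context):
            pass  # nova's version gathers per-instance volume usage here

    # A service loop calls manager.run_periodic_tasks(context) on a timer,
    # which emits the "Running periodic task ..." debug lines seen above.
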
Feb 25 07:26:14 np0005629333 nova_compute[244014]: 2026-02-25 12:26:14.235 244018 DEBUG nova.compute.manager [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Found 3 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Feb 25 07:26:14 np0005629333 nova_compute[244014]: 2026-02-25 12:26:14.236 244018 DEBUG nova.compute.manager [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Rotating out 1 backups _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4458#033[00m
Feb 25 07:26:14 np0005629333 nova_compute[244014]: 2026-02-25 12:26:14.236 244018 DEBUG nova.compute.manager [None req-419af498-6a4b-48e3-8b5b-d20b3a078ddc b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Deleting image 5c4d8498-0ce5-4898-96f9-042972db1ddb _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4463#033[00m
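The three _rotate_backups lines describe a simple retention policy: keep the `rotation` newest backup images of the instance and delete the rest, so 3 images with rotation=2 rotates exactly one out. A hedged sketch of that selection logic; rotate_backups and delete_image are hypothetical names for illustration, not nova's actual code:

    def rotate_backups(images, rotation, delete_image):
        """Keep the `rotation` newest backups, delete the older ones.

        `images` is a list of dicts carrying 'id' and 'created_at';
        `delete_image` is a callable such as a glance client's delete.
        """
        newest_first = sorted(images, key=lambda i: i['created_at'], reverse=True)
        for image in newest_first[rotation:]:   # "Found 3 images (rotation: 2)"
            delete_image(image['id'])           # "Deleting image 5c4d8498-..."
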
Feb 25 07:26:14 np0005629333 nova_compute[244014]: 2026-02-25 12:26:14.381 244018 DEBUG nova.storage.rbd_utils [None req-e80dbe77-6d57-4cdf-a987-c1abb7d4834a 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] removing snapshot(004186d8a7564dabbca17fe0d43e3e73) on rbd image(a826c2fd-1af8-4b55-b801-90ce87d04466_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 25 07:26:14 np0005629333 nova_compute[244014]: 2026-02-25 12:26:14.649 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:26:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Feb 25 07:26:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Feb 25 07:26:15 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Feb 25 07:26:15 np0005629333 nova_compute[244014]: 2026-02-25 12:26:15.244 244018 DEBUG nova.storage.rbd_utils [None req-e80dbe77-6d57-4cdf-a987-c1abb7d4834a 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] creating snapshot(snap) on rbd image(cd3e1cc7-9e00-4edc-af0c-3ab968bb14ad) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
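Taken together, the nova.storage.rbd_utils lines above trace Nova's Ceph-native snapshot path: snapshot the instance disk in the vms pool, clone that snapshot into the images pool, flatten the clone, drop the source snapshot, then snapshot the clone for Glance. A minimal sketch of the same sequence against the librbd Python bindings, assuming the client id and pool names shown in the log and omitting all error handling; the protect/unprotect calls are an assumption (librbd requires a protected parent snapshot for cloning) and do not appear in the log:

    import rados
    import rbd

    # Same client identity the log shows elsewhere ("--id openstack").
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    vms = cluster.open_ioctx('vms')        # instance disks
    images = cluster.open_ioctx('images')  # glance store

    src = 'a826c2fd-1af8-4b55-b801-90ce87d04466_disk'
    snap = '004186d8a7564dabbca17fe0d43e3e73'
    dst = 'cd3e1cc7-9e00-4edc-af0c-3ab968bb14ad'

    with rbd.Image(vms, src) as disk:
        disk.create_snap(snap)                      # "creating snapshot(...)"
        disk.protect_snap(snap)                     # assumption: needed to clone

    rbd.RBD().clone(vms, src, snap, images, dst,
                    features=rbd.RBD_FEATURE_LAYERING)  # "cloning vms/...@... to images/..."

    with rbd.Image(images, dst) as clone:
        clone.flatten()                             # "flattening images/..."

    with rbd.Image(vms, src) as disk:
        disk.unprotect_snap(snap)                   # safe once the clone is flat
        disk.remove_snap(snap)                      # "removing snapshot(...)"

    with rbd.Image(images, dst) as clone:
        clone.create_snap('snap')                   # "creating snapshot(snap)"
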
Feb 25 07:26:15 np0005629333 nova_compute[244014]: 2026-02-25 12:26:15.331 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 563 MiB data, 768 MiB used, 59 GiB / 60 GiB avail; 9.3 MiB/s rd, 8.7 MiB/s wr, 297 op/s
Feb 25 07:26:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Feb 25 07:26:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Feb 25 07:26:16 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Feb 25 07:26:16 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver [None req-e80dbe77-6d57-4cdf-a987-c1abb7d4834a 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image cd3e1cc7-9e00-4edc-af0c-3ab968bb14ad could not be found.
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     image = self._client.call(
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID cd3e1cc7-9e00-4edc-af0c-3ab968bb14ad
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver 
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver 
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     image = self._client.call(
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image cd3e1cc7-9e00-4edc-af0c-3ab968bb14ad could not be found.
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.545 244018 ERROR nova.virt.libvirt.driver #033[00m
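The double traceback above is best read bottom-up: glanceclient raised HTTPNotFound for the 404, and nova.image.glance caught it in update(), translated it to nova.exception.ImageNotFound via _reraise_translated_image_exception(), and re-raised it with the original traceback attached, which is why both stacks appear separated by the 'During handling of the above exception' marker. A stripped-down sketch of that translation pattern, with stand-in exception classes rather than the real nova/glanceclient ones:

    import sys

    class HTTPNotFound(Exception):
        """Stand-in for glanceclient.exc.HTTPNotFound."""

    class ImageNotFound(Exception):
        """Stand-in for nova.exception.ImageNotFound."""

    def _translate_image_exception(image_id, exc):
        # A 404 from the image service becomes a Nova-level "not found".
        if isinstance(exc, HTTPNotFound):
            return ImageNotFound('Image %s could not be found.' % image_id)
        return exc

    def update(image_id):
        try:
            # Stands in for the glanceclient call chain in the first traceback.
            raise HTTPNotFound('HTTP 404 Not Found: No image found with ID %s'
                               % image_id)
        except Exception:
            # Mirrors _reraise_translated_image_exception: swap the exception
            # type but keep the original traceback for the log.
            _type, value, trace = sys.exc_info()
            raise _translate_image_exception(image_id, value).with_traceback(trace)
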
Feb 25 07:26:16 np0005629333 nova_compute[244014]: 2026-02-25 12:26:16.613 244018 DEBUG nova.storage.rbd_utils [None req-e80dbe77-6d57-4cdf-a987-c1abb7d4834a 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] removing snapshot(snap) on rbd image(cd3e1cc7-9e00-4edc-af0c-3ab968bb14ad) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 25 07:26:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:16.826 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Feb 25 07:26:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Feb 25 07:26:17 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Feb 25 07:26:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 547 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 8.8 MiB/s wr, 306 op/s
Feb 25 07:26:18 np0005629333 nova_compute[244014]: 2026-02-25 12:26:18.289 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:18 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:18Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b6:1d:3c 10.100.0.4
Feb 25 07:26:18 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:18Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b6:1d:3c 10.100.0.4
Feb 25 07:26:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Feb 25 07:26:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Feb 25 07:26:18 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Feb 25 07:26:19 np0005629333 nova_compute[244014]: 2026-02-25 12:26:19.108 244018 WARNING nova.compute.manager [None req-e80dbe77-6d57-4cdf-a987-c1abb7d4834a 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Image not found during snapshot: nova.exception.ImageNotFound: Image cd3e1cc7-9e00-4edc-af0c-3ab968bb14ad could not be found.#033[00m
Feb 25 07:26:19 np0005629333 nova_compute[244014]: 2026-02-25 12:26:19.651 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Feb 25 07:26:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Feb 25 07:26:19 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Feb 25 07:26:19 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:19Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:28:ef:45 10.100.0.12
Feb 25 07:26:19 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:19Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:28:ef:45 10.100.0.12
Feb 25 07:26:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 547 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 11 MiB/s wr, 370 op/s
Feb 25 07:26:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:26:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Feb 25 07:26:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Feb 25 07:26:20 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Feb 25 07:26:20 np0005629333 nova_compute[244014]: 2026-02-25 12:26:20.492 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:20 np0005629333 nova_compute[244014]: 2026-02-25 12:26:20.933 244018 DEBUG oslo_concurrency.lockutils [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "a826c2fd-1af8-4b55-b801-90ce87d04466" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:20 np0005629333 nova_compute[244014]: 2026-02-25 12:26:20.934 244018 DEBUG oslo_concurrency.lockutils [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "a826c2fd-1af8-4b55-b801-90ce87d04466" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:20 np0005629333 nova_compute[244014]: 2026-02-25 12:26:20.934 244018 DEBUG oslo_concurrency.lockutils [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "a826c2fd-1af8-4b55-b801-90ce87d04466-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:20 np0005629333 nova_compute[244014]: 2026-02-25 12:26:20.935 244018 DEBUG oslo_concurrency.lockutils [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "a826c2fd-1af8-4b55-b801-90ce87d04466-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:20 np0005629333 nova_compute[244014]: 2026-02-25 12:26:20.935 244018 DEBUG oslo_concurrency.lockutils [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "a826c2fd-1af8-4b55-b801-90ce87d04466-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
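The Acquiring/acquired/released triplets above are oslo.concurrency's lockutils serializing work per instance: terminate_instance takes a lock named after the instance UUID, and event clearing takes a second '-events' lock nested inside it. A minimal sketch with the same public API, using the UUID from the log and eliding the actual work:

    from oslo_concurrency import lockutils

    INSTANCE_UUID = 'a826c2fd-1af8-4b55-b801-90ce87d04466'

    def do_terminate_instance():
        # Produces the Acquiring/acquired/released debug lines seen above.
        with lockutils.lock(INSTANCE_UUID):
            with lockutils.lock(INSTANCE_UUID + '-events'):
                pass  # clear pending external events
            # ...then shut down and destroy the instance...
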
Feb 25 07:26:20 np0005629333 nova_compute[244014]: 2026-02-25 12:26:20.937 244018 INFO nova.compute.manager [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Terminating instance#033[00m
Feb 25 07:26:20 np0005629333 nova_compute[244014]: 2026-02-25 12:26:20.939 244018 DEBUG nova.compute.manager [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:26:20 np0005629333 nova_compute[244014]: 2026-02-25 12:26:20.979 244018 DEBUG nova.virt.libvirt.driver [None req-08368522-f82e-41c2-9bfb-6acb020d3bce 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Feb 25 07:26:21 np0005629333 kernel: tap4bf8f3ad-0e (unregistering): left promiscuous mode
Feb 25 07:26:21 np0005629333 NetworkManager[49836]: <info>  [1772022381.4623] device (tap4bf8f3ad-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:26:21 np0005629333 nova_compute[244014]: 2026-02-25 12:26:21.470 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:21Z|00445|binding|INFO|Releasing lport 4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf from this chassis (sb_readonly=0)
Feb 25 07:26:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:21Z|00446|binding|INFO|Setting lport 4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf down in Southbound
Feb 25 07:26:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:21Z|00447|binding|INFO|Removing iface tap4bf8f3ad-0e ovn-installed in OVS
Feb 25 07:26:21 np0005629333 nova_compute[244014]: 2026-02-25 12:26:21.474 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:21.481 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b6:1d:3c 10.100.0.4'], port_security=['fa:16:3e:b6:1d:3c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'a826c2fd-1af8-4b55-b801-90ce87d04466', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f976004e0b334963a69c2519fca200d2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ca95284b-67f9-4e09-a57b-7847021c2465', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=974b795b-e2d8-4683-ac80-b366113e2dd8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:26:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:21.483 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf in datapath 7693903d-d5e2-4b50-a39b-bbbcc4148329 unbound from our chassis#033[00m
Feb 25 07:26:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:21.486 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7693903d-d5e2-4b50-a39b-bbbcc4148329, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:26:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:21.487 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4bda1118-083e-4d0d-968f-f16261eac76a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:21.488 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 namespace which is not needed anymore#033[00m
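The Matched UPDATE line shows how the OVN metadata agent reacts to port teardown: an ovsdbapp RowEvent watches the southbound Port_Binding table, and when a bound port on this chassis flips down, the agent unbinds it and tears down the ovnmeta- namespace. A hedged sketch of such an event class against the public ovsdbapp API; the match condition is simplified compared to neutron's real agent:

    from ovsdbapp import event as ovsdb_event

    class PortBindingUpdatedEvent(ovsdb_event.RowEvent):
        def __init__(self):
            # Mirrors the repr in the log:
            # events=('update',), table='Port_Binding', conditions=None
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Simplified: fire when a previously-up port reports up=[False].
            return getattr(old, 'up', None) == [True] and row.up == [False]

        def run(self, event, row, old):
            print('Port %s unbound from our chassis' % row.logical_port)
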
Feb 25 07:26:21 np0005629333 nova_compute[244014]: 2026-02-25 12:26:21.489 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:21 np0005629333 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000033.scope: Deactivated successfully.
Feb 25 07:26:21 np0005629333 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000033.scope: Consumed 12.048s CPU time.
Feb 25 07:26:21 np0005629333 systemd-machined[210048]: Machine qemu-56-instance-00000033 terminated.
Feb 25 07:26:21 np0005629333 nova_compute[244014]: 2026-02-25 12:26:21.581 244018 INFO nova.virt.libvirt.driver [-] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Instance destroyed successfully.#033[00m
Feb 25 07:26:21 np0005629333 nova_compute[244014]: 2026-02-25 12:26:21.583 244018 DEBUG nova.objects.instance [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lazy-loading 'resources' on Instance uuid a826c2fd-1af8-4b55-b801-90ce87d04466 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:26:21 np0005629333 nova_compute[244014]: 2026-02-25 12:26:21.603 244018 DEBUG nova.virt.libvirt.vif [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:25:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-131355373',display_name='tempest-ImagesOneServerNegativeTestJSON-server-131355373',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-131355373',id=51,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f976004e0b334963a69c2519fca200d2',ramdisk_id='',reservation_id='r-w9t1cq22',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1374162185',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1374162185-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:26:19Z,user_data=None,user_id='89e71139346a40899212d5bc35835720',uuid=a826c2fd-1af8-4b55-b801-90ce87d04466,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "address": "fa:16:3e:b6:1d:3c", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf8f3ad-0e", "ovs_interfaceid": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:26:21 np0005629333 nova_compute[244014]: 2026-02-25 12:26:21.604 244018 DEBUG nova.network.os_vif_util [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converting VIF {"id": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "address": "fa:16:3e:b6:1d:3c", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4bf8f3ad-0e", "ovs_interfaceid": "4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:26:21 np0005629333 nova_compute[244014]: 2026-02-25 12:26:21.606 244018 DEBUG nova.network.os_vif_util [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b6:1d:3c,bridge_name='br-int',has_traffic_filtering=True,id=4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bf8f3ad-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:26:21 np0005629333 nova_compute[244014]: 2026-02-25 12:26:21.607 244018 DEBUG os_vif [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:1d:3c,bridge_name='br-int',has_traffic_filtering=True,id=4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bf8f3ad-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:26:21 np0005629333 nova_compute[244014]: 2026-02-25 12:26:21.611 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:21 np0005629333 nova_compute[244014]: 2026-02-25 12:26:21.612 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bf8f3ad-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:21 np0005629333 nova_compute[244014]: 2026-02-25 12:26:21.614 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:21 np0005629333 nova_compute[244014]: 2026-02-25 12:26:21.616 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:21 np0005629333 nova_compute[244014]: 2026-02-25 12:26:21.620 244018 INFO os_vif [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b6:1d:3c,bridge_name='br-int',has_traffic_filtering=True,id=4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4bf8f3ad-0e')#033[00m
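os-vif's unplug is a single OVSDB transaction: the DelPortCommand above removes tap4bf8f3ad-0e from br-int, if it exists. The equivalent call, as a small sketch against ovsdbapp's Open_vSwitch schema API; the unix-socket endpoint is an assumption (the deployment's configured ovsdb_connection would be used in practice):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/var/run/openvswitch/db.sock', 'Open_vSwitch')  # assumed endpoint
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Queues the same DelPortCommand(port=..., bridge=br-int, if_exists=True)
    # seen in the transaction log above.
    api.del_port('tap4bf8f3ad-0e', bridge='br-int', if_exists=True).execute(
        check_error=True)
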
Feb 25 07:26:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 499 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.6 MiB/s wr, 241 op/s
Feb 25 07:26:21 np0005629333 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[289057]: [NOTICE]   (289064) : haproxy version is 2.8.14-c23fe91
Feb 25 07:26:21 np0005629333 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[289057]: [NOTICE]   (289064) : path to executable is /usr/sbin/haproxy
Feb 25 07:26:21 np0005629333 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[289057]: [ALERT]    (289064) : Current worker (289066) exited with code 143 (Terminated)
Feb 25 07:26:21 np0005629333 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[289057]: [WARNING]  (289064) : All workers exited. Exiting... (0)
Feb 25 07:26:21 np0005629333 systemd[1]: libpod-c7c03332f63549aaa6c7ae37db7e2c5546622e445dbfae87bdb87a41364cbc0c.scope: Deactivated successfully.
Feb 25 07:26:21 np0005629333 podman[289691]: 2026-02-25 12:26:21.880579331 +0000 UTC m=+0.262107740 container died c7c03332f63549aaa6c7ae37db7e2c5546622e445dbfae87bdb87a41364cbc0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 25 07:26:21 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c7c03332f63549aaa6c7ae37db7e2c5546622e445dbfae87bdb87a41364cbc0c-userdata-shm.mount: Deactivated successfully.
Feb 25 07:26:21 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f375a03ac1bcd5215d8d83e4e0e939faf3fd84985807e733642b344e8fa21f6d-merged.mount: Deactivated successfully.
Feb 25 07:26:21 np0005629333 podman[289691]: 2026-02-25 12:26:21.948926005 +0000 UTC m=+0.330454404 container cleanup c7c03332f63549aaa6c7ae37db7e2c5546622e445dbfae87bdb87a41364cbc0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 25 07:26:21 np0005629333 systemd[1]: libpod-conmon-c7c03332f63549aaa6c7ae37db7e2c5546622e445dbfae87bdb87a41364cbc0c.scope: Deactivated successfully.
Feb 25 07:26:22 np0005629333 podman[289739]: 2026-02-25 12:26:22.029833934 +0000 UTC m=+0.052778105 container remove c7c03332f63549aaa6c7ae37db7e2c5546622e445dbfae87bdb87a41364cbc0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 07:26:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:22.035 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[14e39747-63fa-4dd9-aace-022af9bc9d1f]: (4, ('Wed Feb 25 12:26:21 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 (c7c03332f63549aaa6c7ae37db7e2c5546622e445dbfae87bdb87a41364cbc0c)\nc7c03332f63549aaa6c7ae37db7e2c5546622e445dbfae87bdb87a41364cbc0c\nWed Feb 25 12:26:21 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 (c7c03332f63549aaa6c7ae37db7e2c5546622e445dbfae87bdb87a41364cbc0c)\nc7c03332f63549aaa6c7ae37db7e2c5546622e445dbfae87bdb87a41364cbc0c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:22.037 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[09440b3b-9707-43a6-af14-948e52ce5a42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:22.039 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7693903d-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:22 np0005629333 nova_compute[244014]: 2026-02-25 12:26:22.041 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:22 np0005629333 kernel: tap7693903d-d0: left promiscuous mode
Feb 25 07:26:22 np0005629333 nova_compute[244014]: 2026-02-25 12:26:22.055 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:22.057 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4b1c822a-2076-467a-9745-c31a80220f5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:22.071 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[19b7d3ff-8671-4db5-94e4-2d75c7581198]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:22.072 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7d63e3b8-fe8a-4685-99b7-f89813b40d0f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:22.083 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[40520456-b0ff-4f80-9af6-9c1862b0b50a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 433143, 'reachable_time': 27786, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289755, 'error': None, 'target': 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:22.085 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:26:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:22.086 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[880c7a4e-07ac-4b3a-9575-4d1f6f21665a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:22 np0005629333 systemd[1]: run-netns-ovnmeta\x2d7693903d\x2dd5e2\x2d4b50\x2da39b\x2dbbbcc4148329.mount: Deactivated successfully.
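Namespace cleanup runs through neutron's privsep daemon, which ultimately calls pyroute2 to delete ovnmeta-<network-id>; systemd then reports the run-netns mount gone. The same operation done directly with pyroute2, as a sketch (it needs the privileges the privsep helper normally provides):

    from pyroute2 import netns

    NS = 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329'

    if NS in netns.listnetns():
        netns.remove(NS)  # unlinks the /var/run/netns entry, as logged
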
Feb 25 07:26:22 np0005629333 nova_compute[244014]: 2026-02-25 12:26:22.133 244018 INFO nova.virt.libvirt.driver [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Deleting instance files /var/lib/nova/instances/a826c2fd-1af8-4b55-b801-90ce87d04466_del#033[00m
Feb 25 07:26:22 np0005629333 nova_compute[244014]: 2026-02-25 12:26:22.134 244018 INFO nova.virt.libvirt.driver [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Deletion of /var/lib/nova/instances/a826c2fd-1af8-4b55-b801-90ce87d04466_del complete#033[00m
Feb 25 07:26:22 np0005629333 nova_compute[244014]: 2026-02-25 12:26:22.194 244018 INFO nova.compute.manager [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Took 1.25 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:26:22 np0005629333 nova_compute[244014]: 2026-02-25 12:26:22.194 244018 DEBUG oslo.service.loopingcall [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:26:22 np0005629333 nova_compute[244014]: 2026-02-25 12:26:22.195 244018 DEBUG nova.compute.manager [-] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:26:22 np0005629333 nova_compute[244014]: 2026-02-25 12:26:22.195 244018 DEBUG nova.network.neutron [-] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:26:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 366 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 865 KiB/s rd, 5.0 MiB/s wr, 343 op/s
Feb 25 07:26:23 np0005629333 nova_compute[244014]: 2026-02-25 12:26:23.946 244018 DEBUG nova.network.neutron [-] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:26:23 np0005629333 nova_compute[244014]: 2026-02-25 12:26:23.964 244018 INFO nova.compute.manager [-] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Took 1.77 seconds to deallocate network for instance.#033[00m
Feb 25 07:26:24 np0005629333 nova_compute[244014]: 2026-02-25 12:26:24.016 244018 DEBUG oslo_concurrency.lockutils [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:24 np0005629333 nova_compute[244014]: 2026-02-25 12:26:24.017 244018 DEBUG oslo_concurrency.lockutils [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:24 np0005629333 nova_compute[244014]: 2026-02-25 12:26:24.121 244018 DEBUG oslo_concurrency.processutils [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
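The resource tracker sizes the Ceph backend by shelling out to the exact command logged above rather than binding librados. A small sketch that runs it and reads back the totals; the JSON key names are an assumption based on recent Ceph releases, not something the log confirms:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    df = json.loads(out)

    # Assumed layout: cluster totals under "stats", per-pool data under "pools".
    print('total:', df['stats']['total_bytes'],
          'avail:', df['stats']['total_avail_bytes'])
    for pool in df['pools']:
        print(pool['name'], pool['stats']['bytes_used'])
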
Feb 25 07:26:24 np0005629333 kernel: tapfea2b354-8a (unregistering): left promiscuous mode
Feb 25 07:26:24 np0005629333 NetworkManager[49836]: <info>  [1772022384.3577] device (tapfea2b354-8a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:26:24 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:24Z|00448|binding|INFO|Releasing lport fea2b354-8a21-4bff-bd90-431f8a17aa19 from this chassis (sb_readonly=0)
Feb 25 07:26:24 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:24Z|00449|binding|INFO|Setting lport fea2b354-8a21-4bff-bd90-431f8a17aa19 down in Southbound
Feb 25 07:26:24 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:24Z|00450|binding|INFO|Removing iface tapfea2b354-8a ovn-installed in OVS
Feb 25 07:26:24 np0005629333 nova_compute[244014]: 2026-02-25 12:26:24.371 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:24.378 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:ef:45 10.100.0.12'], port_security=['fa:16:3e:28:ef:45 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '679fb15f-b258-473a-8cdc-a2c143eb4d92', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'daab2f813dbd467685c22833bf875ec9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ccb27c94-53b6-4e09-9a40-41a94659ce9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96ce8d12-3a71-4b1e-a397-65d0b61f3794, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=fea2b354-8a21-4bff-bd90-431f8a17aa19) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:26:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:24.381 157129 INFO neutron.agent.ovn.metadata.agent [-] Port fea2b354-8a21-4bff-bd90-431f8a17aa19 in datapath a0d45b1c-1680-4599-a27a-6e3335c94c99 unbound from our chassis#033[00m
Feb 25 07:26:24 np0005629333 nova_compute[244014]: 2026-02-25 12:26:24.383 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:24.384 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0d45b1c-1680-4599-a27a-6e3335c94c99, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:26:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:24.385 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9749f414-116a-4e5b-98f5-888ac0c96f65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:24.386 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 namespace which is not needed anymore#033[00m
Feb 25 07:26:24 np0005629333 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000034.scope: Deactivated successfully.
Feb 25 07:26:24 np0005629333 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000034.scope: Consumed 12.174s CPU time.
Feb 25 07:26:24 np0005629333 systemd-machined[210048]: Machine qemu-57-instance-00000034 terminated.
Feb 25 07:26:24 np0005629333 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[289379]: [NOTICE]   (289385) : haproxy version is 2.8.14-c23fe91
Feb 25 07:26:24 np0005629333 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[289379]: [NOTICE]   (289385) : path to executable is /usr/sbin/haproxy
Feb 25 07:26:24 np0005629333 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[289379]: [WARNING]  (289385) : Exiting Master process...
Feb 25 07:26:24 np0005629333 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[289379]: [WARNING]  (289385) : Exiting Master process...
Feb 25 07:26:24 np0005629333 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[289379]: [ALERT]    (289385) : Current worker (289387) exited with code 143 (Terminated)
Feb 25 07:26:24 np0005629333 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[289379]: [WARNING]  (289385) : All workers exited. Exiting... (0)
Feb 25 07:26:24 np0005629333 systemd[1]: libpod-dcfe6ead5fc444f1ddac2814a604c840a5d4738734a0af9e5cc86c6a606d5232.scope: Deactivated successfully.
Feb 25 07:26:24 np0005629333 podman[289798]: 2026-02-25 12:26:24.555502814 +0000 UTC m=+0.056173971 container died dcfe6ead5fc444f1ddac2814a604c840a5d4738734a0af9e5cc86c6a606d5232 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:26:24 np0005629333 nova_compute[244014]: 2026-02-25 12:26:24.587 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:24 np0005629333 nova_compute[244014]: 2026-02-25 12:26:24.593 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:24 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dcfe6ead5fc444f1ddac2814a604c840a5d4738734a0af9e5cc86c6a606d5232-userdata-shm.mount: Deactivated successfully.
Feb 25 07:26:24 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ef35dca2b6cc01530d501e999723c27176f167b9ee93f4dca8bdbb9412fbbf6a-merged.mount: Deactivated successfully.
Feb 25 07:26:24 np0005629333 podman[289798]: 2026-02-25 12:26:24.618147846 +0000 UTC m=+0.118818993 container cleanup dcfe6ead5fc444f1ddac2814a604c840a5d4738734a0af9e5cc86c6a606d5232 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 07:26:24 np0005629333 systemd[1]: libpod-conmon-dcfe6ead5fc444f1ddac2814a604c840a5d4738734a0af9e5cc86c6a606d5232.scope: Deactivated successfully.
Feb 25 07:26:24 np0005629333 nova_compute[244014]: 2026-02-25 12:26:24.655 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:26:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/334945106' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:26:24 np0005629333 nova_compute[244014]: 2026-02-25 12:26:24.685 244018 DEBUG oslo_concurrency.processutils [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:26:24 np0005629333 podman[289832]: 2026-02-25 12:26:24.688862848 +0000 UTC m=+0.049084130 container remove dcfe6ead5fc444f1ddac2814a604c840a5d4738734a0af9e5cc86c6a606d5232 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 25 07:26:24 np0005629333 nova_compute[244014]: 2026-02-25 12:26:24.691 244018 DEBUG nova.compute.provider_tree [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:26:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:24.693 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[deb4c5eb-a7fe-4904-8d8c-1e0358a84211]: (4, ('Wed Feb 25 12:26:24 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 (dcfe6ead5fc444f1ddac2814a604c840a5d4738734a0af9e5cc86c6a606d5232)\ndcfe6ead5fc444f1ddac2814a604c840a5d4738734a0af9e5cc86c6a606d5232\nWed Feb 25 12:26:24 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 (dcfe6ead5fc444f1ddac2814a604c840a5d4738734a0af9e5cc86c6a606d5232)\ndcfe6ead5fc444f1ddac2814a604c840a5d4738734a0af9e5cc86c6a606d5232\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:24.695 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[37655ca1-0964-4348-a1f0-2af4b43bb4f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:24.695 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0d45b1c-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:24 np0005629333 kernel: tapa0d45b1c-10: left promiscuous mode
Feb 25 07:26:24 np0005629333 nova_compute[244014]: 2026-02-25 12:26:24.697 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:24 np0005629333 nova_compute[244014]: 2026-02-25 12:26:24.708 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:24 np0005629333 nova_compute[244014]: 2026-02-25 12:26:24.712 244018 DEBUG nova.scheduler.client.report [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:26:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:24.712 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0b7c52e0-1613-4974-88f1-21db1f9467e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:24.725 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6d54e9de-351c-4779-934d-a70ffbba52af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:24.726 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7c3418e8-c746-4464-ab65-660e71457c08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:24.737 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a8f1e411-4d9a-413a-921f-d411bc9c2537]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 433522, 'reachable_time': 24205, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289863, 'error': None, 'target': 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:24.739 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:26:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:24.739 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[8d8d1b4f-4aab-46d6-ac9c-a3b0ed470c2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:24 np0005629333 systemd[1]: run-netns-ovnmeta\x2da0d45b1c\x2d1680\x2d4599\x2da27a\x2d6e3335c94c99.mount: Deactivated successfully.
Feb 25 07:26:24 np0005629333 nova_compute[244014]: 2026-02-25 12:26:24.740 244018 DEBUG oslo_concurrency.lockutils [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:24 np0005629333 nova_compute[244014]: 2026-02-25 12:26:24.801 244018 INFO nova.scheduler.client.report [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Deleted allocations for instance a826c2fd-1af8-4b55-b801-90ce87d04466#033[00m
Feb 25 07:26:24 np0005629333 nova_compute[244014]: 2026-02-25 12:26:24.886 244018 DEBUG oslo_concurrency.lockutils [None req-15d6c545-ac52-4a85-a77a-53b6ba18e572 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "a826c2fd-1af8-4b55-b801-90ce87d04466" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:24 np0005629333 nova_compute[244014]: 2026-02-25 12:26:24.976 244018 DEBUG nova.compute.manager [req-98d39001-bfa5-4481-985d-e79394288d37 req-b2df0f6d-8f14-4f9a-9fa0-f9af1acd18ec 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Received event network-vif-deleted-4bf8f3ad-0e9d-4f4d-a425-2b64066d6edf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:26:24 np0005629333 nova_compute[244014]: 2026-02-25 12:26:24.998 244018 INFO nova.virt.libvirt.driver [None req-08368522-f82e-41c2-9bfb-6acb020d3bce 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Instance shutdown successfully after 14 seconds.#033[00m
Feb 25 07:26:25 np0005629333 nova_compute[244014]: 2026-02-25 12:26:25.005 244018 INFO nova.virt.libvirt.driver [-] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Instance destroyed successfully.#033[00m
Feb 25 07:26:25 np0005629333 nova_compute[244014]: 2026-02-25 12:26:25.006 244018 DEBUG nova.objects.instance [None req-08368522-f82e-41c2-9bfb-6acb020d3bce 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'numa_topology' on Instance uuid 679fb15f-b258-473a-8cdc-a2c143eb4d92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:26:25 np0005629333 nova_compute[244014]: 2026-02-25 12:26:25.020 244018 DEBUG nova.compute.manager [None req-08368522-f82e-41c2-9bfb-6acb020d3bce 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:25 np0005629333 nova_compute[244014]: 2026-02-25 12:26:25.058 244018 DEBUG oslo_concurrency.lockutils [None req-08368522-f82e-41c2-9bfb-6acb020d3bce 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 14.220s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:26:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Feb 25 07:26:25 np0005629333 nova_compute[244014]: 2026-02-25 12:26:25.105 244018 DEBUG nova.compute.manager [req-62046900-93aa-4b04-a873-eed640f71e0a req-7308f64a-5ab4-4052-8de6-78e12627c297 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Received event network-vif-unplugged-fea2b354-8a21-4bff-bd90-431f8a17aa19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:26:25 np0005629333 nova_compute[244014]: 2026-02-25 12:26:25.106 244018 DEBUG oslo_concurrency.lockutils [req-62046900-93aa-4b04-a873-eed640f71e0a req-7308f64a-5ab4-4052-8de6-78e12627c297 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:25 np0005629333 nova_compute[244014]: 2026-02-25 12:26:25.107 244018 DEBUG oslo_concurrency.lockutils [req-62046900-93aa-4b04-a873-eed640f71e0a req-7308f64a-5ab4-4052-8de6-78e12627c297 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:25 np0005629333 nova_compute[244014]: 2026-02-25 12:26:25.107 244018 DEBUG oslo_concurrency.lockutils [req-62046900-93aa-4b04-a873-eed640f71e0a req-7308f64a-5ab4-4052-8de6-78e12627c297 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:25 np0005629333 nova_compute[244014]: 2026-02-25 12:26:25.108 244018 DEBUG nova.compute.manager [req-62046900-93aa-4b04-a873-eed640f71e0a req-7308f64a-5ab4-4052-8de6-78e12627c297 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] No waiting events found dispatching network-vif-unplugged-fea2b354-8a21-4bff-bd90-431f8a17aa19 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:26:25 np0005629333 nova_compute[244014]: 2026-02-25 12:26:25.108 244018 WARNING nova.compute.manager [req-62046900-93aa-4b04-a873-eed640f71e0a req-7308f64a-5ab4-4052-8de6-78e12627c297 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Received unexpected event network-vif-unplugged-fea2b354-8a21-4bff-bd90-431f8a17aa19 for instance with vm_state stopped and task_state None.#033[00m
Feb 25 07:26:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Feb 25 07:26:25 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Feb 25 07:26:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 366 MiB data, 695 MiB used, 59 GiB / 60 GiB avail; 852 KiB/s rd, 4.9 MiB/s wr, 338 op/s
Feb 25 07:26:26 np0005629333 nova_compute[244014]: 2026-02-25 12:26:26.615 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:27 np0005629333 nova_compute[244014]: 2026-02-25 12:26:27.515 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:27 np0005629333 nova_compute[244014]: 2026-02-25 12:26:27.516 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:27 np0005629333 nova_compute[244014]: 2026-02-25 12:26:27.560 244018 DEBUG nova.compute.manager [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:26:27 np0005629333 nova_compute[244014]: 2026-02-25 12:26:27.589 244018 DEBUG nova.compute.manager [req-065dda7b-3412-406d-b5c1-1089c7e5ca45 req-2c649646-b2c3-4996-8f35-8500b83507f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Received event network-vif-plugged-fea2b354-8a21-4bff-bd90-431f8a17aa19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:26:27 np0005629333 nova_compute[244014]: 2026-02-25 12:26:27.590 244018 DEBUG oslo_concurrency.lockutils [req-065dda7b-3412-406d-b5c1-1089c7e5ca45 req-2c649646-b2c3-4996-8f35-8500b83507f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:27 np0005629333 nova_compute[244014]: 2026-02-25 12:26:27.591 244018 DEBUG oslo_concurrency.lockutils [req-065dda7b-3412-406d-b5c1-1089c7e5ca45 req-2c649646-b2c3-4996-8f35-8500b83507f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:27 np0005629333 nova_compute[244014]: 2026-02-25 12:26:27.591 244018 DEBUG oslo_concurrency.lockutils [req-065dda7b-3412-406d-b5c1-1089c7e5ca45 req-2c649646-b2c3-4996-8f35-8500b83507f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:27 np0005629333 nova_compute[244014]: 2026-02-25 12:26:27.592 244018 DEBUG nova.compute.manager [req-065dda7b-3412-406d-b5c1-1089c7e5ca45 req-2c649646-b2c3-4996-8f35-8500b83507f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] No waiting events found dispatching network-vif-plugged-fea2b354-8a21-4bff-bd90-431f8a17aa19 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:26:27 np0005629333 nova_compute[244014]: 2026-02-25 12:26:27.592 244018 WARNING nova.compute.manager [req-065dda7b-3412-406d-b5c1-1089c7e5ca45 req-2c649646-b2c3-4996-8f35-8500b83507f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Received unexpected event network-vif-plugged-fea2b354-8a21-4bff-bd90-431f8a17aa19 for instance with vm_state stopped and task_state None.#033[00m
Feb 25 07:26:27 np0005629333 nova_compute[244014]: 2026-02-25 12:26:27.630 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:27 np0005629333 nova_compute[244014]: 2026-02-25 12:26:27.630 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:27 np0005629333 nova_compute[244014]: 2026-02-25 12:26:27.640 244018 DEBUG nova.virt.hardware [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:26:27 np0005629333 nova_compute[244014]: 2026-02-25 12:26:27.640 244018 INFO nova.compute.claims [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:26:27 np0005629333 nova_compute[244014]: 2026-02-25 12:26:27.797 244018 DEBUG oslo_concurrency.processutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 312 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 673 KiB/s rd, 3.8 MiB/s wr, 294 op/s
Feb 25 07:26:27 np0005629333 nova_compute[244014]: 2026-02-25 12:26:27.897 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "56267d17-0733-4abe-b916-d1a25e516514" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:27 np0005629333 nova_compute[244014]: 2026-02-25 12:26:27.898 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:27 np0005629333 nova_compute[244014]: 2026-02-25 12:26:27.921 244018 DEBUG nova.compute.manager [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:26:27 np0005629333 nova_compute[244014]: 2026-02-25 12:26:27.995 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.011 244018 DEBUG oslo_concurrency.lockutils [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "679fb15f-b258-473a-8cdc-a2c143eb4d92" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.012 244018 DEBUG oslo_concurrency.lockutils [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.013 244018 DEBUG oslo_concurrency.lockutils [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.013 244018 DEBUG oslo_concurrency.lockutils [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.014 244018 DEBUG oslo_concurrency.lockutils [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.016 244018 INFO nova.compute.manager [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Terminating instance#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.018 244018 DEBUG nova.compute.manager [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.028 244018 INFO nova.virt.libvirt.driver [-] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Instance destroyed successfully.#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.029 244018 DEBUG nova.objects.instance [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'resources' on Instance uuid 679fb15f-b258-473a-8cdc-a2c143eb4d92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.044 244018 DEBUG nova.virt.libvirt.vif [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:25:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1785986325',display_name='tempest-DeleteServersTestJSON-server-1785986325',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1785986325',id=52,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='daab2f813dbd467685c22833bf875ec9',ramdisk_id='',reservation_id='r-j1l7797p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-285157757',owner_user_name='tempest-DeleteServersTestJSON-285157757-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:26:25Z,user_data=None,user_id='9c44c1a95c8d4d97bc1d7dde284dcf1b',uuid=679fb15f-b258-473a-8cdc-a2c143eb4d92,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "address": "fa:16:3e:28:ef:45", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea2b354-8a", "ovs_interfaceid": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.045 244018 DEBUG nova.network.os_vif_util [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converting VIF {"id": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "address": "fa:16:3e:28:ef:45", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfea2b354-8a", "ovs_interfaceid": "fea2b354-8a21-4bff-bd90-431f8a17aa19", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.046 244018 DEBUG nova.network.os_vif_util [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:ef:45,bridge_name='br-int',has_traffic_filtering=True,id=fea2b354-8a21-4bff-bd90-431f8a17aa19,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfea2b354-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.047 244018 DEBUG os_vif [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:ef:45,bridge_name='br-int',has_traffic_filtering=True,id=fea2b354-8a21-4bff-bd90-431f8a17aa19,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfea2b354-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.050 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.051 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfea2b354-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.055 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.059 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.062 244018 INFO os_vif [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:ef:45,bridge_name='br-int',has_traffic_filtering=True,id=fea2b354-8a21-4bff-bd90-431f8a17aa19,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfea2b354-8a')#033[00m
Feb 25 07:26:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:26:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/702035612' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.444 244018 DEBUG oslo_concurrency.processutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.646s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.450 244018 DEBUG nova.compute.provider_tree [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.472 244018 DEBUG nova.scheduler.client.report [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.509 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.513 244018 DEBUG nova.compute.manager [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.516 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.524 244018 DEBUG nova.virt.hardware [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.524 244018 INFO nova.compute.claims [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.586 244018 DEBUG nova.compute.manager [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.587 244018 DEBUG nova.network.neutron [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.612 244018 INFO nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.628 244018 INFO nova.virt.libvirt.driver [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Deleting instance files /var/lib/nova/instances/679fb15f-b258-473a-8cdc-a2c143eb4d92_del#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.630 244018 INFO nova.virt.libvirt.driver [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Deletion of /var/lib/nova/instances/679fb15f-b258-473a-8cdc-a2c143eb4d92_del complete#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.640 244018 DEBUG nova.compute.manager [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.721 244018 INFO nova.compute.manager [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Took 0.70 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.722 244018 DEBUG oslo.service.loopingcall [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.722 244018 DEBUG nova.compute.manager [-] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.723 244018 DEBUG nova.network.neutron [-] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.734 244018 DEBUG oslo_concurrency.processutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.765 244018 DEBUG nova.policy [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1f8bbe7db4454108aca005daa72d5c22', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56700581ea88438ba482d90bc702ced3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.793 244018 DEBUG nova.compute.manager [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.796 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.796 244018 INFO nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Creating image(s)#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.830 244018 DEBUG nova.storage.rbd_utils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.862 244018 DEBUG nova.storage.rbd_utils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.888 244018 DEBUG nova.storage.rbd_utils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.894 244018 DEBUG oslo_concurrency.processutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.945 244018 DEBUG oslo_concurrency.processutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.946 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.947 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.947 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.974 244018 DEBUG nova.storage.rbd_utils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:26:28 np0005629333 nova_compute[244014]: 2026-02-25 12:26:28.980 244018 DEBUG oslo_concurrency.processutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.017 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "43b8959e-9cf0-42ca-aa1f-8a380321c971" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.017 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.035 244018 DEBUG nova.compute.manager [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.136 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.313 244018 DEBUG nova.network.neutron [-] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:26:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:26:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/192472623' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.336 244018 INFO nova.compute.manager [-] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Took 0.61 seconds to deallocate network for instance.
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.352 244018 DEBUG oslo_concurrency.processutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.361 244018 DEBUG nova.compute.provider_tree [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.384 244018 DEBUG nova.scheduler.client.report [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.391 244018 DEBUG oslo_concurrency.lockutils [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.405 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.890s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.406 244018 DEBUG nova.compute.manager [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.408 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.415 244018 DEBUG nova.virt.hardware [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.415 244018 INFO nova.compute.claims [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.461 244018 DEBUG nova.compute.manager [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.462 244018 DEBUG nova.network.neutron [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.486 244018 INFO nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.495 244018 DEBUG oslo_concurrency.processutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.536 244018 DEBUG nova.compute.manager [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.598 244018 DEBUG nova.storage.rbd_utils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] resizing rbd image 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.641 244018 DEBUG nova.compute.manager [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.643 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.644 244018 INFO nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Creating image(s)
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.676 244018 DEBUG nova.storage.rbd_utils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image 56267d17-0733-4abe-b916-d1a25e516514_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.711 244018 DEBUG nova.storage.rbd_utils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image 56267d17-0733-4abe-b916-d1a25e516514_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.744 244018 DEBUG nova.storage.rbd_utils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image 56267d17-0733-4abe-b916-d1a25e516514_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.750 244018 DEBUG oslo_concurrency.processutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.790 244018 DEBUG nova.policy [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '89e71139346a40899212d5bc35835720', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f976004e0b334963a69c2519fca200d2', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.793 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.797 244018 DEBUG nova.network.neutron [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Successfully created port: ee46268d-740d-4ff9-8b65-4a81fc61eec3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.805 244018 DEBUG nova.compute.manager [req-34e94e5c-e1f3-4d3d-bacb-5278c821e6d3 req-3d958801-3ad4-4f47-a22d-80d062663e9d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Received event network-vif-deleted-fea2b354-8a21-4bff-bd90-431f8a17aa19 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:26:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 312 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 561 KiB/s rd, 3.2 MiB/s wr, 245 op/s
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.830 244018 DEBUG oslo_concurrency.processutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.831 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.831 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.832 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.858 244018 DEBUG nova.storage.rbd_utils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image 56267d17-0733-4abe-b916-d1a25e516514_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.869 244018 DEBUG oslo_concurrency.processutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 56267d17-0733-4abe-b916-d1a25e516514_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:26:29 np0005629333 nova_compute[244014]: 2026-02-25 12:26:29.924 244018 DEBUG oslo_concurrency.processutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.002 244018 DEBUG nova.objects.instance [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'migration_context' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.025 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.025 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Ensure instance console log exists: /var/lib/nova/instances/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.026 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.027 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.027 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:26:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.186 244018 DEBUG oslo_concurrency.processutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 56267d17-0733-4abe-b916-d1a25e516514_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.230 244018 DEBUG nova.storage.rbd_utils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] resizing rbd image 56267d17-0733-4abe-b916-d1a25e516514_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.327 244018 DEBUG nova.objects.instance [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lazy-loading 'migration_context' on Instance uuid 56267d17-0733-4abe-b916-d1a25e516514 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.342 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.342 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Ensure instance console log exists: /var/lib/nova/instances/56267d17-0733-4abe-b916-d1a25e516514/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.343 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.343 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.343 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.420 244018 DEBUG nova.network.neutron [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Successfully created port: dd03c667-f058-4e13-bb03-517ed838be2e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:26:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:26:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3710562482' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.549 244018 DEBUG oslo_concurrency.processutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.553 244018 DEBUG nova.compute.provider_tree [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.566 244018 DEBUG nova.scheduler.client.report [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.570 244018 DEBUG nova.network.neutron [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Successfully updated port: ee46268d-740d-4ff9-8b65-4a81fc61eec3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.586 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.586 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquired lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.586 244018 DEBUG nova.network.neutron [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.588 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.589 244018 DEBUG nova.compute.manager [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.590 244018 DEBUG oslo_concurrency.lockutils [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.635 244018 DEBUG nova.compute.manager [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.636 244018 DEBUG nova.network.neutron [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.650 244018 DEBUG nova.compute.manager [req-e44c9157-e2e4-4975-a01a-6591efd2a936 req-f0b9c788-a6dd-4865-98f3-dd3161dbe02d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-changed-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.651 244018 DEBUG nova.compute.manager [req-e44c9157-e2e4-4975-a01a-6591efd2a936 req-f0b9c788-a6dd-4865-98f3-dd3161dbe02d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Refreshing instance network info cache due to event network-changed-ee46268d-740d-4ff9-8b65-4a81fc61eec3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.652 244018 DEBUG oslo_concurrency.lockutils [req-e44c9157-e2e4-4975-a01a-6591efd2a936 req-f0b9c788-a6dd-4865-98f3-dd3161dbe02d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.661 244018 INFO nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.698 244018 DEBUG nova.compute.manager [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:26:30 np0005629333 podman[290281]: 2026-02-25 12:26:30.733129458 +0000 UTC m=+0.066923805 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.731 244018 DEBUG nova.network.neutron [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.741 244018 DEBUG oslo_concurrency.processutils [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:26:30 np0005629333 podman[290282]: 2026-02-25 12:26:30.757010713 +0000 UTC m=+0.089435052 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.831 244018 DEBUG nova.compute.manager [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.832 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.832 244018 INFO nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Creating image(s)
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.853 244018 DEBUG nova.storage.rbd_utils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image 43b8959e-9cf0-42ca-aa1f-8a380321c971_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.873 244018 DEBUG nova.storage.rbd_utils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image 43b8959e-9cf0-42ca-aa1f-8a380321c971_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.893 244018 DEBUG nova.storage.rbd_utils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image 43b8959e-9cf0-42ca-aa1f-8a380321c971_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.896 244018 DEBUG oslo_concurrency.processutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:26:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:26:30
Feb 25 07:26:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:26:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:26:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'backups', 'images']
Feb 25 07:26:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.925 244018 DEBUG nova.policy [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b774fd0c04fc403d9ddb205f1e6abbc5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c85a955249394f0faf7c890f5cd0df32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.964 244018 DEBUG oslo_concurrency.processutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.965 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.965 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:26:30 np0005629333 nova_compute[244014]: 2026-02-25 12:26:30.966 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.003 244018 DEBUG nova.storage.rbd_utils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image 43b8959e-9cf0-42ca-aa1f-8a380321c971_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.006 244018 DEBUG oslo_concurrency.processutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 43b8959e-9cf0-42ca-aa1f-8a380321c971_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.144 244018 DEBUG nova.network.neutron [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Successfully updated port: dd03c667-f058-4e13-bb03-517ed838be2e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.164 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "refresh_cache-56267d17-0733-4abe-b916-d1a25e516514" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.164 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquired lock "refresh_cache-56267d17-0733-4abe-b916-d1a25e516514" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.165 244018 DEBUG nova.network.neutron [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:26:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:26:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1631888206' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.312 244018 DEBUG oslo_concurrency.processutils [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.320 244018 DEBUG nova.compute.provider_tree [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.347 244018 DEBUG nova.scheduler.client.report [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.374 244018 DEBUG oslo_concurrency.lockutils [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.379 244018 DEBUG nova.network.neutron [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.403 244018 INFO nova.scheduler.client.report [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Deleted allocations for instance 679fb15f-b258-473a-8cdc-a2c143eb4d92
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.463 244018 DEBUG oslo_concurrency.lockutils [None req-ddef47ed-bdc3-42ef-8f3f-4f8551699a5d 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "679fb15f-b258-473a-8cdc-a2c143eb4d92" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.451s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:26:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:26:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:26:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:26:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:26:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:26:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.629 244018 DEBUG oslo_concurrency.processutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 43b8959e-9cf0-42ca-aa1f-8a380321c971_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.714 244018 DEBUG nova.storage.rbd_utils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] resizing rbd image 43b8959e-9cf0-42ca-aa1f-8a380321c971_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 07:26:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 297 MiB data, 645 MiB used, 59 GiB / 60 GiB avail; 439 KiB/s rd, 3.0 MiB/s wr, 192 op/s
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.819 244018 DEBUG nova.network.neutron [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Updating instance_info_cache with network_info: [{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.829 244018 DEBUG nova.objects.instance [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'migration_context' on Instance uuid 43b8959e-9cf0-42ca-aa1f-8a380321c971 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.842 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:26:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.843 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Ensure instance console log exists: /var/lib/nova/instances/43b8959e-9cf0-42ca-aa1f-8a380321c971/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.844 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.845 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:26:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:26:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.846 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:26:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.848 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Releasing lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:26:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:26:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.849 244018 DEBUG nova.compute.manager [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance network_info: |[{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:26:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:26:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:26:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.851 244018 DEBUG oslo_concurrency.lockutils [req-e44c9157-e2e4-4975-a01a-6591efd2a936 req-f0b9c788-a6dd-4865-98f3-dd3161dbe02d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.851 244018 DEBUG nova.network.neutron [req-e44c9157-e2e4-4975-a01a-6591efd2a936 req-f0b9c788-a6dd-4865-98f3-dd3161dbe02d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Refreshing network info cache for port ee46268d-740d-4ff9-8b65-4a81fc61eec3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.856 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Start _get_guest_xml network_info=[{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.862 244018 DEBUG nova.network.neutron [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Successfully created port: 69011b0a-5af7-4bef-a14c-8d83e63e08ae _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.868 244018 WARNING nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.878 244018 DEBUG nova.virt.libvirt.host [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.879 244018 DEBUG nova.virt.libvirt.host [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.882 244018 DEBUG nova.virt.libvirt.host [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.883 244018 DEBUG nova.virt.libvirt.host [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.883 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.884 244018 DEBUG nova.virt.hardware [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.884 244018 DEBUG nova.virt.hardware [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.885 244018 DEBUG nova.virt.hardware [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.885 244018 DEBUG nova.virt.hardware [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.886 244018 DEBUG nova.virt.hardware [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.886 244018 DEBUG nova.virt.hardware [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.886 244018 DEBUG nova.virt.hardware [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.887 244018 DEBUG nova.virt.hardware [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.887 244018 DEBUG nova.virt.hardware [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.888 244018 DEBUG nova.virt.hardware [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.888 244018 DEBUG nova.virt.hardware [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.893 244018 DEBUG oslo_concurrency.processutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.994 244018 DEBUG nova.compute.manager [req-28c89c4b-bc3e-418b-9ad1-3c65ff1b3487 req-93897ab1-0a9b-4408-bf20-5e505972020e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Received event network-changed-dd03c667-f058-4e13-bb03-517ed838be2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.995 244018 DEBUG nova.compute.manager [req-28c89c4b-bc3e-418b-9ad1-3c65ff1b3487 req-93897ab1-0a9b-4408-bf20-5e505972020e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Refreshing instance network info cache due to event network-changed-dd03c667-f058-4e13-bb03-517ed838be2e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:26:31 np0005629333 nova_compute[244014]: 2026-02-25 12:26:31.996 244018 DEBUG oslo_concurrency.lockutils [req-28c89c4b-bc3e-418b-9ad1-3c65ff1b3487 req-93897ab1-0a9b-4408-bf20-5e505972020e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-56267d17-0733-4abe-b916-d1a25e516514" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.164 244018 DEBUG nova.network.neutron [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Updating instance_info_cache with network_info: [{"id": "dd03c667-f058-4e13-bb03-517ed838be2e", "address": "fa:16:3e:87:52:b8", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd03c667-f0", "ovs_interfaceid": "dd03c667-f058-4e13-bb03-517ed838be2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.183 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Releasing lock "refresh_cache-56267d17-0733-4abe-b916-d1a25e516514" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.183 244018 DEBUG nova.compute.manager [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Instance network_info: |[{"id": "dd03c667-f058-4e13-bb03-517ed838be2e", "address": "fa:16:3e:87:52:b8", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd03c667-f0", "ovs_interfaceid": "dd03c667-f058-4e13-bb03-517ed838be2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.184 244018 DEBUG oslo_concurrency.lockutils [req-28c89c4b-bc3e-418b-9ad1-3c65ff1b3487 req-93897ab1-0a9b-4408-bf20-5e505972020e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-56267d17-0733-4abe-b916-d1a25e516514" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.184 244018 DEBUG nova.network.neutron [req-28c89c4b-bc3e-418b-9ad1-3c65ff1b3487 req-93897ab1-0a9b-4408-bf20-5e505972020e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Refreshing network info cache for port dd03c667-f058-4e13-bb03-517ed838be2e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.190 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Start _get_guest_xml network_info=[{"id": "dd03c667-f058-4e13-bb03-517ed838be2e", "address": "fa:16:3e:87:52:b8", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd03c667-f0", "ovs_interfaceid": "dd03c667-f058-4e13-bb03-517ed838be2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.195 244018 WARNING nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.200 244018 DEBUG nova.virt.libvirt.host [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.201 244018 DEBUG nova.virt.libvirt.host [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.209 244018 DEBUG nova.virt.libvirt.host [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.210 244018 DEBUG nova.virt.libvirt.host [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.210 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.211 244018 DEBUG nova.virt.hardware [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.212 244018 DEBUG nova.virt.hardware [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.212 244018 DEBUG nova.virt.hardware [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.212 244018 DEBUG nova.virt.hardware [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.213 244018 DEBUG nova.virt.hardware [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.213 244018 DEBUG nova.virt.hardware [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.213 244018 DEBUG nova.virt.hardware [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.214 244018 DEBUG nova.virt.hardware [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.214 244018 DEBUG nova.virt.hardware [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.214 244018 DEBUG nova.virt.hardware [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.215 244018 DEBUG nova.virt.hardware [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.220 244018 DEBUG oslo_concurrency.processutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:26:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/878136816' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.413 244018 DEBUG oslo_concurrency.processutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.447 244018 DEBUG nova.storage.rbd_utils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.453 244018 DEBUG oslo_concurrency.processutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.780 244018 DEBUG nova.network.neutron [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Successfully updated port: 69011b0a-5af7-4bef-a14c-8d83e63e08ae _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:26:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:26:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4145244569' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.797 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "refresh_cache-43b8959e-9cf0-42ca-aa1f-8a380321c971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.797 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquired lock "refresh_cache-43b8959e-9cf0-42ca-aa1f-8a380321c971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.798 244018 DEBUG nova.network.neutron [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.807 244018 DEBUG oslo_concurrency.processutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.835 244018 DEBUG nova.storage.rbd_utils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image 56267d17-0733-4abe-b916-d1a25e516514_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.840 244018 DEBUG oslo_concurrency.processutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.877 244018 DEBUG nova.compute.manager [req-6ce61633-62a4-4b22-93ad-844245fb32a4 req-5425d966-e669-4657-b137-ee80d7e2e112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Received event network-changed-69011b0a-5af7-4bef-a14c-8d83e63e08ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.878 244018 DEBUG nova.compute.manager [req-6ce61633-62a4-4b22-93ad-844245fb32a4 req-5425d966-e669-4657-b137-ee80d7e2e112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Refreshing instance network info cache due to event network-changed-69011b0a-5af7-4bef-a14c-8d83e63e08ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.878 244018 DEBUG oslo_concurrency.lockutils [req-6ce61633-62a4-4b22-93ad-844245fb32a4 req-5425d966-e669-4657-b137-ee80d7e2e112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-43b8959e-9cf0-42ca-aa1f-8a380321c971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:26:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:26:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2388521735' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.975 244018 DEBUG oslo_concurrency.processutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.978 244018 DEBUG nova.virt.libvirt.vif [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:26:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.979 244018 DEBUG nova.network.os_vif_util [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.981 244018 DEBUG nova.network.os_vif_util [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:26:32 np0005629333 nova_compute[244014]: 2026-02-25 12:26:32.983 244018 DEBUG nova.objects.instance [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.000 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  <uuid>8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</uuid>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  <name>instance-00000035</name>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerActionsTestJSON-server-1971346294</nova:name>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:26:31</nova:creationTime>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:        <nova:user uuid="1f8bbe7db4454108aca005daa72d5c22">tempest-ServerActionsTestJSON-436476112-project-member</nova:user>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:        <nova:project uuid="56700581ea88438ba482d90bc702ced3">tempest-ServerActionsTestJSON-436476112</nova:project>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:        <nova:port uuid="ee46268d-740d-4ff9-8b65-4a81fc61eec3">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <entry name="serial">8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</entry>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <entry name="uuid">8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</entry>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk.config">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:ba:87:f1"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <target dev="tapee46268d-74"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b/console.log" append="off"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:26:33 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:26:33 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.002 244018 DEBUG nova.compute.manager [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Preparing to wait for external event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.002 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.003 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.003 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.005 244018 DEBUG nova.virt.libvirt.vif [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:26:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.006 244018 DEBUG nova.network.os_vif_util [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.007 244018 DEBUG nova.network.os_vif_util [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.038 244018 DEBUG os_vif [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
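os_vif.plug() at os_vif/__init__.py:76 is the generic entry point shown above: Nova hands it the converted VIFOpenVSwitch object plus a small InstanceInfo. A minimal sketch of that call shape, with field values copied from the log; treat it as illustrative rather than the Nova call site:

    # Minimal sketch of the os-vif entry point the log shows being invoked.
    import os_vif
    from os_vif.objects import instance_info

    os_vif.initialize()  # loads the plugin set, including the "ovs" plugin
    info = instance_info.InstanceInfo(
        uuid="8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b",
        name="tempest-ServerActionsTestJSON-server-1971346294")
    # "vif" would be the VIFOpenVSwitch object printed in the record above:
    # os_vif.plug(vif, info)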
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.039 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.039 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.040 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.046 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.046 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee46268d-74, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.047 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee46268d-74, col_values=(('external_ids', {'iface-id': 'ee46268d-740d-4ff9-8b65-4a81fc61eec3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:87:f1', 'vm-uuid': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.049 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:33 np0005629333 NetworkManager[49836]: <info>  [1772022393.0497] manager: (tapee46268d-74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/204)
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.051 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.054 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.054 244018 INFO os_vif [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74')#033[00m
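The three OVSDB commands in the transactions above (AddBridgeCommand, AddPortCommand, DbSetCommand) map one-to-one onto ovs-vsctl operations. A rough CLI equivalent, wrapped in Python for illustration; os-vif actually talks to ovsdb-server directly through ovsdbapp rather than shelling out:

    # Rough CLI equivalent of the logged transaction; values copied from the
    # DbSetCommand record at 12:26:33.047. Illustrative only.
    import subprocess

    def plug_ovs_port(bridge, dev, iface_id, mac, vm_uuid):
        # AddBridgeCommand(name=br-int, may_exist=True, datapath_type=system)
        subprocess.run(["ovs-vsctl", "--may-exist", "add-br", bridge, "--",
                        "set", "Bridge", bridge, "datapath_type=system"],
                       check=True)
        # AddPortCommand + DbSetCommand, combined into a single transaction.
        subprocess.run(["ovs-vsctl", "--may-exist", "add-port", bridge, dev,
                        "--", "set", "Interface", dev,
                        f"external_ids:iface-id={iface_id}",
                        "external_ids:iface-status=active",
                        f'external_ids:attached-mac="{mac}"',
                        f'external_ids:vm-uuid="{vm_uuid}"'],
                       check=True)

    # plug_ovs_port("br-int", "tapee46268d-74",
    #               "ee46268d-740d-4ff9-8b65-4a81fc61eec3",
    #               "fa:16:3e:ba:87:f1",
    #               "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b")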
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.105 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.105 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.105 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] No VIF found with MAC fa:16:3e:ba:87:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.106 244018 INFO nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Using config drive#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.120 244018 DEBUG nova.storage.rbd_utils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.124 244018 DEBUG nova.network.neutron [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.213 244018 DEBUG nova.network.neutron [req-e44c9157-e2e4-4975-a01a-6591efd2a936 req-f0b9c788-a6dd-4865-98f3-dd3161dbe02d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Updated VIF entry in instance network info cache for port ee46268d-740d-4ff9-8b65-4a81fc61eec3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.214 244018 DEBUG nova.network.neutron [req-e44c9157-e2e4-4975-a01a-6591efd2a936 req-f0b9c788-a6dd-4865-98f3-dd3161dbe02d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Updating instance_info_cache with network_info: [{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.243 244018 DEBUG oslo_concurrency.lockutils [req-e44c9157-e2e4-4975-a01a-6591efd2a936 req-f0b9c788-a6dd-4865-98f3-dd3161dbe02d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:26:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:26:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3186710184' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.397 244018 DEBUG oslo_concurrency.processutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
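The 0.557 s recorded above is Nova shelling out to the same "ceph mon dump" that the monitor's audit channel logs; the JSON monmap supplies the <host name="192.168.122.100" port="6789"/> entries in the rbd disk definitions. A minimal sketch of that lookup; the helper name is illustrative and the exact JSON fields vary by Ceph release:

    # Minimal sketch: run the logged "ceph mon dump" and pull monitor
    # addresses for the <host name=... port=.../> elements. Illustrative.
    import json
    import subprocess

    def get_mon_addrs(user="openstack", conf="/etc/ceph/ceph.conf"):
        out = subprocess.run(
            ["ceph", "mon", "dump", "--format=json", "--id", user,
             "--conf", conf],
            check=True, capture_output=True, text=True).stdout
        # stdout carries the JSON monmap; the epoch banner goes to stderr.
        return [m["addr"] for m in json.loads(out)["mons"]]

    # e.g. ["192.168.122.100:6789/0"]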
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.398 244018 DEBUG nova.virt.libvirt.vif [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:26:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1327428650',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1327428650',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1327428650',id=54,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f976004e0b334963a69c2519fca200d2',ramdisk_id='',reservation_id='r-pc621z15',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1374162185',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1374162185-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:26:29Z,user_data=None,user_id='89e71139346a40899212d5bc35835720',uuid=56267d17-0733-4abe-b916-d1a25e516514,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dd03c667-f058-4e13-bb03-517ed838be2e", "address": "fa:16:3e:87:52:b8", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd03c667-f0", "ovs_interfaceid": "dd03c667-f058-4e13-bb03-517ed838be2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.399 244018 DEBUG nova.network.os_vif_util [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converting VIF {"id": "dd03c667-f058-4e13-bb03-517ed838be2e", "address": "fa:16:3e:87:52:b8", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd03c667-f0", "ovs_interfaceid": "dd03c667-f058-4e13-bb03-517ed838be2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.400 244018 DEBUG nova.network.os_vif_util [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:52:b8,bridge_name='br-int',has_traffic_filtering=True,id=dd03c667-f058-4e13-bb03-517ed838be2e,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd03c667-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.402 244018 DEBUG nova.objects.instance [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 56267d17-0733-4abe-b916-d1a25e516514 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.421 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  <uuid>56267d17-0733-4abe-b916-d1a25e516514</uuid>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  <name>instance-00000036</name>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1327428650</nova:name>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:26:32</nova:creationTime>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:        <nova:user uuid="89e71139346a40899212d5bc35835720">tempest-ImagesOneServerNegativeTestJSON-1374162185-project-member</nova:user>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:        <nova:project uuid="f976004e0b334963a69c2519fca200d2">tempest-ImagesOneServerNegativeTestJSON-1374162185</nova:project>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:        <nova:port uuid="dd03c667-f058-4e13-bb03-517ed838be2e">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <entry name="serial">56267d17-0733-4abe-b916-d1a25e516514</entry>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <entry name="uuid">56267d17-0733-4abe-b916-d1a25e516514</entry>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/56267d17-0733-4abe-b916-d1a25e516514_disk">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/56267d17-0733-4abe-b916-d1a25e516514_disk.config">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:87:52:b8"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <target dev="tapdd03c667-f0"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/56267d17-0733-4abe-b916-d1a25e516514/console.log" append="off"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:26:33 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:26:33 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:26:33 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:26:33 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
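Both disks in the domain above are rbd-backed, and the rbd_utils messages in this log ("rbd image ... does not exist") come from an existence probe made before the config drive is imported. A quick way to reproduce that probe from the CLI, sketched in Python; the helper is illustrative, not nova.storage.rbd_utils:

    # Minimal sketch: probe whether an rbd image referenced by
    # <source protocol="rbd" name="pool/image"> exists. Illustrative only.
    import subprocess

    def rbd_image_exists(pool, image, user="openstack",
                         conf="/etc/ceph/ceph.conf"):
        # "rbd info" exits non-zero when the image is missing, the same
        # condition rbd_utils reports as "... does not exist" in this log.
        res = subprocess.run(
            ["rbd", "info", f"{pool}/{image}", "--id", user, "--conf", conf],
            capture_output=True)
        return res.returncode == 0

    # rbd_image_exists("vms", "56267d17-0733-4abe-b916-d1a25e516514_disk")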
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.421 244018 DEBUG nova.compute.manager [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Preparing to wait for external event network-vif-plugged-dd03c667-f058-4e13-bb03-517ed838be2e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.422 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "56267d17-0733-4abe-b916-d1a25e516514-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.422 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.422 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.424 244018 DEBUG nova.virt.libvirt.vif [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:26:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1327428650',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1327428650',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1327428650',id=54,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f976004e0b334963a69c2519fca200d2',ramdisk_id='',reservation_id='r-pc621z15',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1374162185',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1374162185-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:26:29Z,user_data=None,user_id='89e71139346a40899212d5bc35835720',uuid=56267d17-0733-4abe-b916-d1a25e516514,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dd03c667-f058-4e13-bb03-517ed838be2e", "address": "fa:16:3e:87:52:b8", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd03c667-f0", "ovs_interfaceid": "dd03c667-f058-4e13-bb03-517ed838be2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.424 244018 DEBUG nova.network.os_vif_util [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converting VIF {"id": "dd03c667-f058-4e13-bb03-517ed838be2e", "address": "fa:16:3e:87:52:b8", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd03c667-f0", "ovs_interfaceid": "dd03c667-f058-4e13-bb03-517ed838be2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.425 244018 DEBUG nova.network.os_vif_util [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:52:b8,bridge_name='br-int',has_traffic_filtering=True,id=dd03c667-f058-4e13-bb03-517ed838be2e,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd03c667-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.426 244018 DEBUG os_vif [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:52:b8,bridge_name='br-int',has_traffic_filtering=True,id=dd03c667-f058-4e13-bb03-517ed838be2e,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd03c667-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.427 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.427 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.428 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.431 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.431 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdd03c667-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.432 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdd03c667-f0, col_values=(('external_ids', {'iface-id': 'dd03c667-f058-4e13-bb03-517ed838be2e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:52:b8', 'vm-uuid': '56267d17-0733-4abe-b916-d1a25e516514'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.435 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:33 np0005629333 NetworkManager[49836]: <info>  [1772022393.4367] manager: (tapdd03c667-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.438 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.442 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.443 244018 INFO os_vif [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:52:b8,bridge_name='br-int',has_traffic_filtering=True,id=dd03c667-f058-4e13-bb03-517ed838be2e,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd03c667-f0')#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.461 244018 DEBUG nova.network.neutron [req-28c89c4b-bc3e-418b-9ad1-3c65ff1b3487 req-93897ab1-0a9b-4408-bf20-5e505972020e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Updated VIF entry in instance network info cache for port dd03c667-f058-4e13-bb03-517ed838be2e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.461 244018 DEBUG nova.network.neutron [req-28c89c4b-bc3e-418b-9ad1-3c65ff1b3487 req-93897ab1-0a9b-4408-bf20-5e505972020e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Updating instance_info_cache with network_info: [{"id": "dd03c667-f058-4e13-bb03-517ed838be2e", "address": "fa:16:3e:87:52:b8", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd03c667-f0", "ovs_interfaceid": "dd03c667-f058-4e13-bb03-517ed838be2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.485 244018 DEBUG oslo_concurrency.lockutils [req-28c89c4b-bc3e-418b-9ad1-3c65ff1b3487 req-93897ab1-0a9b-4408-bf20-5e505972020e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-56267d17-0733-4abe-b916-d1a25e516514" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.491 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.492 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.492 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] No VIF found with MAC fa:16:3e:87:52:b8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.493 244018 INFO nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Using config drive#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.522 244018 DEBUG nova.storage.rbd_utils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image 56267d17-0733-4abe-b916-d1a25e516514_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.532 244018 INFO nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Creating config drive at /var/lib/nova/instances/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b/disk.config#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.541 244018 DEBUG oslo_concurrency.processutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpckp0qi7m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.684 244018 DEBUG oslo_concurrency.processutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpckp0qi7m" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.723 244018 DEBUG nova.storage.rbd_utils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.727 244018 DEBUG oslo_concurrency.processutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b/disk.config 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
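Taken together, the two subprocess records above build the config drive locally with mkisofs and then push it into the vms pool with "rbd import". A condensed sketch of the same pair of commands, with flags copied verbatim from the log; the wrapper function is illustrative:

    # Minimal sketch: the mkisofs + "rbd import" pair logged above.
    import subprocess

    def build_and_import_config_drive(content_dir, iso_path, image_name,
                                      pool="vms", user="openstack",
                                      conf="/etc/ceph/ceph.conf"):
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", iso_path,
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-publisher",
             "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
             "-quiet", "-J", "-r",
             "-V", "config-2",   # the volume label cloud-init searches for
             content_dir],
            check=True)
        subprocess.run(
            ["rbd", "import", "--pool", pool, iso_path, image_name,
             "--image-format=2", "--id", user, "--conf", conf],
            check=True)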
Feb 25 07:26:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 367 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 94 KiB/s rd, 6.2 MiB/s wr, 150 op/s
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.966 244018 INFO nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Creating config drive at /var/lib/nova/instances/56267d17-0733-4abe-b916-d1a25e516514/disk.config#033[00m
Feb 25 07:26:33 np0005629333 nova_compute[244014]: 2026-02-25 12:26:33.973 244018 DEBUG oslo_concurrency.processutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/56267d17-0733-4abe-b916-d1a25e516514/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpiy9b6owk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.007 244018 DEBUG nova.network.neutron [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Updating instance_info_cache with network_info: [{"id": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "address": "fa:16:3e:df:0d:06", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69011b0a-5a", "ovs_interfaceid": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.033 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Releasing lock "refresh_cache-43b8959e-9cf0-42ca-aa1f-8a380321c971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.034 244018 DEBUG nova.compute.manager [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Instance network_info: |[{"id": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "address": "fa:16:3e:df:0d:06", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69011b0a-5a", "ovs_interfaceid": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.035 244018 DEBUG oslo_concurrency.lockutils [req-6ce61633-62a4-4b22-93ad-844245fb32a4 req-5425d966-e669-4657-b137-ee80d7e2e112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-43b8959e-9cf0-42ca-aa1f-8a380321c971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.035 244018 DEBUG nova.network.neutron [req-6ce61633-62a4-4b22-93ad-844245fb32a4 req-5425d966-e669-4657-b137-ee80d7e2e112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Refreshing network info cache for port 69011b0a-5af7-4bef-a14c-8d83e63e08ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.040 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Start _get_guest_xml network_info=[{"id": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "address": "fa:16:3e:df:0d:06", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69011b0a-5a", "ovs_interfaceid": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
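The network_info blob logged repeatedly above is plain JSON once the surrounding log text is stripped, so the fields most readers want can be pulled out directly. A short sketch, assuming the list has already been json.loads()'ed into network_info; the values in the comments are the ones visible in the log lines above:

    vif = network_info[0]
    mac    = vif['address']                                     # 'fa:16:3e:df:0d:06'
    mtu    = vif['network']['meta']['mtu']                      # 1442
    fixed  = vif['network']['subnets'][0]['ips'][0]['address']  # '10.100.0.5'
    bridge = vif['details']['bridge_name']                      # 'br-int'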
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.046 244018 WARNING nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.051 244018 DEBUG nova.virt.libvirt.host [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.051 244018 DEBUG nova.virt.libvirt.host [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.062 244018 DEBUG nova.virt.libvirt.host [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.062 244018 DEBUG nova.virt.libvirt.host [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
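The two probes above first look for a cgroup-v1 cpu controller, miss, then find one under cgroups v2. On a cgroups-v2 host the active controllers are listed in a single file, so a rough equivalent of the second probe (an assumption about the mechanism, not nova's literal code) is:

    # Prints True on this host, matching "CPU controller found on host." above.
    with open('/sys/fs/cgroup/cgroup.controllers') as f:
        has_cpu_controller = 'cpu' in f.read().split()
    print(has_cpu_controller)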
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.063 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.063 244018 DEBUG nova.virt.hardware [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.064 244018 DEBUG nova.virt.hardware [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.064 244018 DEBUG nova.virt.hardware [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.064 244018 DEBUG nova.virt.hardware [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.065 244018 DEBUG nova.virt.hardware [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.065 244018 DEBUG nova.virt.hardware [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.065 244018 DEBUG nova.virt.hardware [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.066 244018 DEBUG nova.virt.hardware [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.066 244018 DEBUG nova.virt.hardware [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.066 244018 DEBUG nova.virt.hardware [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.067 244018 DEBUG nova.virt.hardware [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
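The topology trace above reads as: no flavor or image constraints (all limits and preferences 0:0:0), maxima defaulted to 65536 each, so the search over sockets:cores:threads factorizations of 1 vCPU yields exactly one candidate, 1:1:1. A toy version of that enumeration, illustrative only and not nova's implementation:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Every (sockets, cores, threads) triple whose product is the vCPU count.
        return [(s, c, t)
                for s in range(1, min(vcpus, max_sockets) + 1)
                for c in range(1, min(vcpus, max_cores) + 1)
                for t in range(1, min(vcpus, max_threads) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))   # [(1, 1, 1)] -> "Got 1 possible topologies"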
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.071 244018 DEBUG oslo_concurrency.processutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.113 244018 DEBUG oslo_concurrency.processutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/56267d17-0733-4abe-b916-d1a25e516514/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpiy9b6owk" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.147 244018 DEBUG nova.storage.rbd_utils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] rbd image 56267d17-0733-4abe-b916-d1a25e516514_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.152 244018 DEBUG oslo_concurrency.processutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/56267d17-0733-4abe-b916-d1a25e516514/disk.config 56267d17-0733-4abe-b916-d1a25e516514_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.189 244018 DEBUG oslo_concurrency.processutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b/disk.config 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.197 244018 INFO nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Deleting local config drive /var/lib/nova/instances/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b/disk.config because it was imported into RBD.#033[00m
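The pattern in the lines above is import-then-clean-up: the freshly built disk.config is pushed into the vms pool as a format-2 RBD image, and the local copy is deleted once the import returns 0. A sketch of the same effect through the python-rbd bindings rather than the rbd CLI (an illustrative alternative, not nova's code path; the source path and image name are placeholders):

    import os
    import rados
    import rbd

    SRC = '/var/lib/nova/instances/PLACEHOLDER/disk.config'   # placeholder path
    IMAGE = 'PLACEHOLDER_disk.config'                         # placeholder image name

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            # Create the image (format 2 is the default), then stream the file in.
            rbd.RBD().create(ioctx, IMAGE, os.path.getsize(SRC))
            with rbd.Image(ioctx, IMAGE) as image, open(SRC, 'rb') as src:
                offset = 0
                while chunk := src.read(4 * 1024 * 1024):
                    image.write(chunk, offset)
                    offset += len(chunk)
            os.unlink(SRC)   # "Deleting local config drive ... imported into RBD"
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()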
Feb 25 07:26:34 np0005629333 kernel: tapee46268d-74: entered promiscuous mode
Feb 25 07:26:34 np0005629333 NetworkManager[49836]: <info>  [1772022394.2563] manager: (tapee46268d-74): new Tun device (/org/freedesktop/NetworkManager/Devices/206)
Feb 25 07:26:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:34Z|00451|binding|INFO|Claiming lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 for this chassis.
Feb 25 07:26:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:34Z|00452|binding|INFO|ee46268d-740d-4ff9-8b65-4a81fc61eec3: Claiming fa:16:3e:ba:87:f1 10.100.0.11
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.260 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.272 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:87:f1 10.100.0.11'], port_security=['fa:16:3e:ba:87:f1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c2ca716d-3f7c-490b-954a-bca009559baa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ee46268d-740d-4ff9-8b65-4a81fc61eec3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.274 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ee46268d-740d-4ff9-8b65-4a81fc61eec3 in datapath ce318891-cf3c-4d99-af7c-c01770f38194 bound to our chassis#033[00m
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.277 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce318891-cf3c-4d99-af7c-c01770f38194#033[00m
Feb 25 07:26:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:34Z|00453|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 ovn-installed in OVS
Feb 25 07:26:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:34Z|00454|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 up in Southbound
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.280 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.289 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[55b3b8af-382c-4220-87ca-2d258197bde0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.290 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce318891-c1 in ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.293 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce318891-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.293 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[18aed446-261f-4da5-9542-39eeb3e855b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.294 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[141b1b41-e2f8-4d3e-9b81-1b68c271a00d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
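"Creating VETH tapce318891-c1 in ovnmeta-..." is the metadata agent plumbing one end of a veth pair into the per-network namespace; the surrounding privsep reply lines are the answers to those delegated netlink calls. A rough pyroute2 equivalent with hypothetical names, run as root (neutron itself routes these calls through its privsep daemon rather than issuing netlink directly like this):

    from pyroute2 import IPRoute, netns

    NS = 'ovnmeta-example'                       # hypothetical namespace name
    netns.create(NS)
    with IPRoute() as ipr:
        ipr.link('add', ifname='tap-c0', kind='veth', peer='tap-c1')
        peer = ipr.link_lookup(ifname='tap-c1')[0]
        ipr.link('set', index=peer, net_ns_fd=NS)    # move one end into the netns
        ipr.link('set', index=ipr.link_lookup(ifname='tap-c0')[0], state='up')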
Feb 25 07:26:34 np0005629333 systemd-machined[210048]: New machine qemu-58-instance-00000035.
Feb 25 07:26:34 np0005629333 systemd-udevd[290793]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:26:34 np0005629333 systemd[1]: Started Virtual Machine qemu-58-instance-00000035.
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.305 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[2fd52680-5317-44be-9f80-c542ba8f5565]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:34 np0005629333 NetworkManager[49836]: <info>  [1772022394.3080] device (tapee46268d-74): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:26:34 np0005629333 NetworkManager[49836]: <info>  [1772022394.3098] device (tapee46268d-74): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.326 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ed60fc16-cefc-4767-8658-ce818203f5b7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.351 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a2983ba6-4543-4eaf-ad20-5975a5b4ea0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:34 np0005629333 NetworkManager[49836]: <info>  [1772022394.3564] manager: (tapce318891-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/207)
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.355 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cc7fd718-f47e-43e9-9c32-9c39e7f3c088]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:34 np0005629333 systemd-udevd[290796]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.378 244018 DEBUG oslo_concurrency.processutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/56267d17-0733-4abe-b916-d1a25e516514/disk.config 56267d17-0733-4abe-b916-d1a25e516514_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.227s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.379 244018 INFO nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Deleting local config drive /var/lib/nova/instances/56267d17-0733-4abe-b916-d1a25e516514/disk.config because it was imported into RBD.#033[00m
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.379 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c9c6d1c7-fd87-40d2-b47e-dff37f27481e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.382 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d3123649-77ca-4a24-8def-7bd5ba7ff025]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.395 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.395 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:34 np0005629333 NetworkManager[49836]: <info>  [1772022394.3992] device (tapce318891-c0): carrier: link connected
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.402 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d4fab75f-558b-48ee-9b45-c426167a0323]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.412 244018 DEBUG nova.compute.manager [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:26:34 np0005629333 NetworkManager[49836]: <info>  [1772022394.4203] manager: (tapdd03c667-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/208)
Feb 25 07:26:34 np0005629333 kernel: tapdd03c667-f0: entered promiscuous mode
Feb 25 07:26:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:34Z|00455|binding|INFO|Claiming lport dd03c667-f058-4e13-bb03-517ed838be2e for this chassis.
Feb 25 07:26:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:34Z|00456|binding|INFO|dd03c667-f058-4e13-bb03-517ed838be2e: Claiming fa:16:3e:87:52:b8 10.100.0.13
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.422 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:34 np0005629333 NetworkManager[49836]: <info>  [1772022394.4294] device (tapdd03c667-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:26:34 np0005629333 NetworkManager[49836]: <info>  [1772022394.4305] device (tapdd03c667-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.418 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5d25f3c9-8d79-43ae-ac75-ab633469507f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436380, 'reachable_time': 33874, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290835, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.433 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:52:b8 10.100.0.13'], port_security=['fa:16:3e:87:52:b8 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '56267d17-0733-4abe-b916-d1a25e516514', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f976004e0b334963a69c2519fca200d2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ca95284b-67f9-4e09-a57b-7847021c2465', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=974b795b-e2d8-4683-ac80-b366113e2dd8, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=dd03c667-f058-4e13-bb03-517ed838be2e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.432 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:34Z|00457|binding|INFO|Setting lport dd03c667-f058-4e13-bb03-517ed838be2e ovn-installed in OVS
Feb 25 07:26:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:34Z|00458|binding|INFO|Setting lport dd03c667-f058-4e13-bb03-517ed838be2e up in Southbound
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.436 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:34 np0005629333 systemd-machined[210048]: New machine qemu-59-instance-00000036.
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.444 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[91d54acb-8a9c-45cf-a854-1de78bb22852]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:c3a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 436380, 'tstamp': 436380}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290843, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:34 np0005629333 systemd[1]: Started Virtual Machine qemu-59-instance-00000036.
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.468 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3bcd1013-2bba-405d-a206-32833369db92]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436380, 'reachable_time': 33874, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290846, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.469 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.469 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.481 244018 DEBUG nova.virt.hardware [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.481 244018 INFO nova.compute.claims [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.496 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a9a91f-1c08-48de-a25e-eacbd84d8c3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.537 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[55a9f54e-56f3-46c9-abbe-7e9a95634561]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.547 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.547 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.548 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce318891-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:34 np0005629333 kernel: tapce318891-c0: entered promiscuous mode
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.550 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:34 np0005629333 NetworkManager[49836]: <info>  [1772022394.5512] manager: (tapce318891-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/209)
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.555 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce318891-c0, col_values=(('external_ids', {'iface-id': '3b184c15-8ef4-4e11-bd18-e1253a4ff440'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
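The three transactions above (DelPortCommand, AddPortCommand, DbSetCommand) rewire the namespace-side veth: drop it from br-ex if it is there, add it to br-int, and stamp the interface with the neutron port's iface-id so ovn-controller can bind it. A hedged sketch of issuing the same commands with ovsdbapp; the socket path and timeout are assumptions, not values from this log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # One transaction, same three commands as the do_commit lines above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapce318891-c0', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapce318891-c0', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapce318891-c0',
            ('external_ids', {'iface-id': '3b184c15-8ef4-4e11-bd18-e1253a4ff440'})))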
Feb 25 07:26:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:34Z|00459|binding|INFO|Releasing lport 3b184c15-8ef4-4e11-bd18-e1253a4ff440 from this chassis (sb_readonly=0)
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.557 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.558 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2114601c-3bc3-4df9-91f1-82b32c8ab7bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.558 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.558 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:26:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:34.559 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'env', 'PROCESS_TAG=haproxy-ce318891-cf3c-4d99-af7c-c01770f38194', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce318891-cf3c-4d99-af7c-c01770f38194.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
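The generated haproxy config above binds 169.254.169.254:80 inside the ovnmeta namespace, forwards requests to the /var/lib/neutron/metadata_proxy socket, and adds X-OVN-Network-ID so the backend can resolve which network the caller is on. From a guest on that network the proxy answers the standard metadata URL; a minimal check, assuming the requests package is available in the guest:

    import requests

    # 169.254.169.254 is the fixed metadata address; the path below is the
    # standard OpenStack metadata API endpoint.
    r = requests.get('http://169.254.169.254/openstack/latest/meta_data.json',
                     timeout=5)
    print(r.json()['uuid'])   # the instance's own UUID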
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.564 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:26:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4068374435' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.657 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.659 244018 DEBUG oslo_concurrency.processutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
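"ceph mon dump --format=json" returns the monitor map that the RBD-backed storage code uses to locate the cluster; the ceph-mon audit lines above show the same command arriving at the leader. A small sketch of running and parsing it (field names follow the standard mon dump JSON; .get() hedges against version differences):

    import json
    import subprocess

    raw = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    monmap = json.loads(raw)
    for mon in monmap['mons']:
        print(mon['name'], mon.get('public_addr'))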
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.680 244018 DEBUG nova.storage.rbd_utils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image 43b8959e-9cf0-42ca-aa1f-8a380321c971_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.683 244018 DEBUG oslo_concurrency.processutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.721 244018 DEBUG oslo_concurrency.processutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.749 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022394.7491221, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.750 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Started (Lifecycle Event)#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.768 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.790 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022394.7492712, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.791 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.808 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.811 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.832 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
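The numbers in "current DB power_state: 0, VM power_state: 3" follow nova's power-state enumeration, where 0 is NOSTATE and 3 is PAUSED, which is why the lifecycle line above reads "VM Paused"; the sync is then skipped because the spawning task still owns the instance. For reference:

    # Mapping per nova.compute.power_state.
    POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                    4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}
    print(POWER_STATES[3])   # PAUSED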
Feb 25 07:26:34 np0005629333 podman[290985]: 2026-02-25 12:26:34.914949268 +0000 UTC m=+0.050980663 container create 0c88556e16fdb202933ed83f84dce9a3e4f63746e4037f5398377a4eb7316225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:26:34 np0005629333 systemd[1]: Started libpod-conmon-0c88556e16fdb202933ed83f84dce9a3e4f63746e4037f5398377a4eb7316225.scope.
Feb 25 07:26:34 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:26:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/669065cfe9b355ae4db308310ad10b864c500e39dbf8d8abf3254990187da5c4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:26:34 np0005629333 podman[290985]: 2026-02-25 12:26:34.884427085 +0000 UTC m=+0.020458510 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.984 244018 DEBUG nova.compute.manager [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.985 244018 DEBUG oslo_concurrency.lockutils [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.986 244018 DEBUG oslo_concurrency.lockutils [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.986 244018 DEBUG oslo_concurrency.lockutils [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.987 244018 DEBUG nova.compute.manager [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Processing event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.987 244018 DEBUG nova.compute.manager [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.988 244018 DEBUG oslo_concurrency.lockutils [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.988 244018 DEBUG oslo_concurrency.lockutils [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.989 244018 DEBUG oslo_concurrency.lockutils [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.990 244018 DEBUG nova.compute.manager [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.990 244018 WARNING nova.compute.manager [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state building and task_state spawning.#033[00m
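[annotation] The oslo_concurrency.lockutils lines above show nova serializing external instance events with a named in-process lock per instance. A minimal sketch of that pattern, with the lock name format taken from the log and an illustrative body:

from oslo_concurrency import lockutils

instance_uuid = '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b'  # from the log above

# lockutils.lock() returns a context manager around a named lock
# (in-process by default; file-based with external=True). Nova uses the
# "<instance-uuid>-events" name visible in the Acquiring/acquired/released
# trio above.
with lockutils.lock(f'{instance_uuid}-events'):
    # Nova pops any waiter registered for this event inside the lock; when
    # no waiter exists it emits the "Received unexpected event" WARNING
    # logged above, which is benign during spawn.
    pass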
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.991 244018 DEBUG nova.compute.manager [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Received event network-vif-plugged-dd03c667-f058-4e13-bb03-517ed838be2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.991 244018 DEBUG oslo_concurrency.lockutils [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "56267d17-0733-4abe-b916-d1a25e516514-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.992 244018 DEBUG oslo_concurrency.lockutils [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.992 244018 DEBUG oslo_concurrency.lockutils [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.993 244018 DEBUG nova.compute.manager [req-0489a867-bb6a-40f7-a664-cd41f9abd65e req-2400d00c-c55e-4410-b41a-89237dfb5c26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Processing event network-vif-plugged-dd03c667-f058-4e13-bb03-517ed838be2e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.994 244018 DEBUG nova.compute.manager [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.997 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022394.9976788, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:26:34 np0005629333 nova_compute[244014]: 2026-02-25 12:26:34.998 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:26:34 np0005629333 podman[290985]: 2026-02-25 12:26:34.999228194 +0000 UTC m=+0.135259609 container init 0c88556e16fdb202933ed83f84dce9a3e4f63746e4037f5398377a4eb7316225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.001 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:26:35 np0005629333 podman[290985]: 2026-02-25 12:26:35.004468312 +0000 UTC m=+0.140499707 container start 0c88556e16fdb202933ed83f84dce9a3e4f63746e4037f5398377a4eb7316225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
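[annotation] The podman events above record the image and label set of the haproxy side-car container as it is initialized and started. Those labels can be read back with podman inspect; a small sketch, assuming the podman CLI is on PATH:

import json
import subprocess

name = 'neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194'
# `podman inspect` prints a JSON array with one object per container.
out = subprocess.check_output(['podman', 'inspect', name])
labels = json.loads(out)[0]['Config']['Labels']
print(labels.get('tcib_build_tag'))  # 8419493e1fd846703d277695e03fc5eb per the log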
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.008 244018 INFO nova.virt.libvirt.driver [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance spawned successfully.#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.009 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.016 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:35 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[291013]: [NOTICE]   (291038) : New worker (291045) forked
Feb 25 07:26:35 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[291013]: [NOTICE]   (291038) : Loading success.
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.029 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
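[annotation] The numeric states in the sync message above (DB power_state: 0, VM power_state: 1) are nova's power_state constants. A local mirror of the two relevant values, rather than importing nova itself:

# Mirrors nova.compute.power_state for the values in the log above: the
# guest is already RUNNING while the database still records NOSTATE, so
# the handler defers to the pending "spawning" task instead of resyncing.
NOSTATE, RUNNING = 0, 1
STATE_MAP = {NOSTATE: 'pending', RUNNING: 'running'}
print(STATE_MAP[0], '->', STATE_MAP[1])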
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.035 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.036 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.036 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.037 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.038 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.039 244018 DEBUG nova.virt.libvirt.driver [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.048 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.061 157129 INFO neutron.agent.ovn.metadata.agent [-] Port dd03c667-f058-4e13-bb03-517ed838be2e in datapath 7693903d-d5e2-4b50-a39b-bbbcc4148329 unbound from our chassis#033[00m
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.063 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7693903d-d5e2-4b50-a39b-bbbcc4148329#033[00m
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.070 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5786b26c-3990-422e-b2dd-347a33629b9f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.070 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7693903d-d1 in ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.072 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7693903d-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.072 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[48960a31-0060-4bb9-a29a-b28cb813c563]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.073 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e676696c-78c7-4bd3-a91c-bda5781d0ec3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.084 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[7bacb9a1-662c-40f2-b108-81503437177c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
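[annotation] The "Creating VETH tap7693903d-d1 in ovnmeta-..." step above runs through the privsep daemon (the interleaved privsep: reply lines). A hedged sketch of the equivalent pyroute2 calls, not neutron's actual code path; requires CAP_NET_ADMIN:

from pyroute2 import IPRoute, netns

NS = 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329'  # namespace from the log

netns.create(NS)  # idempotence not handled here; the agent manages this
with IPRoute() as ipr:
    # veth pair: the -d0 end stays in the root namespace, -d1 moves into NS
    ipr.link('add', ifname='tap7693903d-d0', kind='veth',
             peer='tap7693903d-d1')
    peer = ipr.link_lookup(ifname='tap7693903d-d1')[0]
    ipr.link('set', index=peer, net_ns_fd=NS)
    host = ipr.link_lookup(ifname='tap7693903d-d0')[0]
    ipr.link('set', index=host, state='up')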
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.090 244018 INFO nova.compute.manager [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Took 6.30 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.090 244018 DEBUG nova.compute.manager [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.098 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[29932cc3-966a-4d56-9c88-960bb82bdb44]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
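[annotation] The privsep reply above carries the stdout of a sysctl invocation ('net.ipv4.conf.all.promote_secondaries = 1', empty stderr, exit 0). The same knob is readable directly from procfs, where the dotted key maps onto the path:

with open('/proc/sys/net/ipv4/conf/all/promote_secondaries') as f:
    print(f.read().strip())  # expected: 1, matching the reply above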
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.100 244018 DEBUG nova.compute.manager [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.101 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022395.1008575, 56267d17-0733-4abe-b916-d1a25e516514 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.101 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 56267d17-0733-4abe-b916-d1a25e516514] VM Started (Lifecycle Event)#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.103 244018 DEBUG nova.network.neutron [req-6ce61633-62a4-4b22-93ad-844245fb32a4 req-5425d966-e669-4657-b137-ee80d7e2e112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Updated VIF entry in instance network info cache for port 69011b0a-5af7-4bef-a14c-8d83e63e08ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.103 244018 DEBUG nova.network.neutron [req-6ce61633-62a4-4b22-93ad-844245fb32a4 req-5425d966-e669-4657-b137-ee80d7e2e112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Updating instance_info_cache with network_info: [{"id": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "address": "fa:16:3e:df:0d:06", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69011b0a-5a", "ovs_interfaceid": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.106 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:26:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.121 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.123 244018 INFO nova.virt.libvirt.driver [-] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Instance spawned successfully.#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.123 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.124 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[69a0cbfc-53fb-471b-b985-edd365302e7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.127 244018 DEBUG oslo_concurrency.lockutils [req-6ce61633-62a4-4b22-93ad-844245fb32a4 req-5425d966-e669-4657-b137-ee80d7e2e112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-43b8959e-9cf0-42ca-aa1f-8a380321c971" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:26:35 np0005629333 NetworkManager[49836]: <info>  [1772022395.1301] manager: (tap7693903d-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/210)
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.128 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[49f174a9-2853-48ac-96f6-28d5bf5a0584]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.132 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.159 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7c5eee53-58f1-49fb-8c82-07d822cb4734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.164 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a596d94b-9ac5-48a9-a8ab-bfb881544142]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:35 np0005629333 NetworkManager[49836]: <info>  [1772022395.1865] device (tap7693903d-d0): carrier: link connected
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.192 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b72a45a0-ebdb-433a-b759-95f473309154]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.206 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff12b8c-9dea-47d8-b7e2-e427b0fcd070]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7693903d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:f2:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436459, 'reachable_time': 40040, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291065, 'error': None, 'target': 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.219 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ca2af1a4-4344-4216-8fe2-d262085eabb3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec3:f240'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 436459, 'tstamp': 436459}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291066, 'error': None, 'target': 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.231 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[be9117ad-6e72-42af-90ba-e77505af2048]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7693903d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c3:f2:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 138], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436459, 'reachable_time': 40040, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291067, 'error': None, 'target': 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.248 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b0590cd2-0832-4e71-a436-4539162af819]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:26:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2360327645' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.271 244018 DEBUG oslo_concurrency.processutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
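[annotation] The CMD line above shows nova shelling out to the ceph CLI through oslo's processutils and timing the call (0.588s). A minimal reproduction with the same argument list as logged; assumes the ceph client and the client.openstack keyring are available:

import json

from oslo_concurrency import processutils

# processutils.execute() returns (stdout, stderr) and raises
# ProcessExecutionError on a non-zero exit code.
out, _err = processutils.execute(
    'ceph', 'mon', 'dump', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
mon_map = json.loads(out)
print([m['name'] for m in mon_map.get('mons', [])])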
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.272 244018 DEBUG nova.virt.libvirt.vif [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:26:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1336556788',display_name='tempest-ServerActionsTestOtherB-server-1336556788',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1336556788',id=55,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c85a955249394f0faf7c890f5cd0df32',ramdisk_id='',reservation_id='r-8l7ggktq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1539976047',owner_user_name='tempest-ServerActionsTestOtherB-1539976047-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:26:30Z,user_data=None,user_id='b774fd0c04fc403d9ddb205f1e6abbc5',uuid=43b8959e-9cf0-42ca-aa1f-8a380321c971,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "address": "fa:16:3e:df:0d:06", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69011b0a-5a", "ovs_interfaceid": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.273 244018 DEBUG nova.network.os_vif_util [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converting VIF {"id": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "address": "fa:16:3e:df:0d:06", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69011b0a-5a", "ovs_interfaceid": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.274 244018 DEBUG nova.network.os_vif_util [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:0d:06,bridge_name='br-int',has_traffic_filtering=True,id=69011b0a-5af7-4bef-a14c-8d83e63e08ae,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69011b0a-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:26:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:26:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1159604673' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.276 244018 DEBUG nova.objects.instance [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'pci_devices' on Instance uuid 43b8959e-9cf0-42ca-aa1f-8a380321c971 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.283 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[61f221e3-c684-4c48-8752-4ecdca297281]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.285 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7693903d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.285 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.285 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7693903d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.287 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:35 np0005629333 NetworkManager[49836]: <info>  [1772022395.2877] manager: (tap7693903d-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/211)
Feb 25 07:26:35 np0005629333 kernel: tap7693903d-d0: entered promiscuous mode
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.289 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7693903d-d0, col_values=(('external_ids', {'iface-id': '6dc5897c-8765-434f-a79d-19523884d8ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
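[annotation] The three ovsdbapp commands above (DelPortCommand, AddPortCommand, DbSetCommand) move the veth's host end onto br-int and tag it with the OVN port id so ovn-controller can bind it. The agent speaks OVSDB directly; for illustration, the same transaction expressed as ovs-vsctl operations:

import subprocess

PORT = 'tap7693903d-d0'
IFACE_ID = '6dc5897c-8765-434f-a79d-19523884d8ae'  # from DbSetCommand above

for cmd in (
    ['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', PORT],   # DelPortCommand
    ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', PORT],  # AddPortCommand
    ['ovs-vsctl', 'set', 'Interface', PORT,                    # DbSetCommand
     'external_ids:iface-id=%s' % IFACE_ID],
):
    subprocess.run(cmd, check=True)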
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.290 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.292 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:35 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:35Z|00460|binding|INFO|Releasing lport 6dc5897c-8765-434f-a79d-19523884d8ae from this chassis (sb_readonly=0)
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.294 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7693903d-d5e2-4b50-a39b-bbbcc4148329.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7693903d-d5e2-4b50-a39b-bbbcc4148329.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.297 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8619b1d0-6ff2-413d-a194-b58ac50ca068]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.297 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-7693903d-d5e2-4b50-a39b-bbbcc4148329
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/7693903d-d5e2-4b50-a39b-bbbcc4148329.pid.haproxy
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 7693903d-d5e2-4b50-a39b-bbbcc4148329
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:26:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:35.298 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'env', 'PROCESS_TAG=haproxy-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7693903d-d5e2-4b50-a39b-bbbcc4148329.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
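[annotation] The config dump ending at create_config_file above is written to disk and then haproxy is launched inside the ovnmeta namespace via rootwrap (the command on the preceding line). A hedged sketch of rendering such a per-network config; the template and field names below are illustrative, not neutron's actual template:

from string import Template

_CFG = Template("""\
global
    log         /dev/log local0 debug
    log-tag     haproxy-metadata-proxy-$network_id
    user        root
    group       root
    maxconn     1024
    pidfile     $pidfile
    daemon

listen listener
    bind 169.254.169.254:80
    server metadata $socket
    http-request add-header X-OVN-Network-ID $network_id
""")

network_id = '7693903d-d5e2-4b50-a39b-bbbcc4148329'
cfg = _CFG.substitute(
    network_id=network_id,
    pidfile='/var/lib/neutron/external/pids/%s.pid.haproxy' % network_id,
    socket='/var/lib/neutron/metadata_proxy')  # unix socket served by the agent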
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.301 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.302 244018 DEBUG oslo_concurrency.processutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.319 244018 DEBUG nova.compute.provider_tree [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.324 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 56267d17-0733-4abe-b916-d1a25e516514] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.324 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022395.1012468, 56267d17-0733-4abe-b916-d1a25e516514 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.325 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 56267d17-0733-4abe-b916-d1a25e516514] VM Paused (Lifecycle Event)#033[00m
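[annotation] The _get_guest_xml dump that follows is the domain definition nova generated for instance 43b8959e. Once the guest is defined, the live copy can be fetched from libvirtd with libvirt-python:

import libvirt

conn = libvirt.open('qemu:///system')  # system URI used by nova_compute
dom = conn.lookupByUUIDString('43b8959e-9cf0-42ca-aa1f-8a380321c971')
print(dom.XMLDesc(0))  # same shape as the <domain type="kvm"> dump below
conn.close()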
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.330 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:26:35 np0005629333 nova_compute[244014]:  <uuid>43b8959e-9cf0-42ca-aa1f-8a380321c971</uuid>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:  <name>instance-00000037</name>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerActionsTestOtherB-server-1336556788</nova:name>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:26:34</nova:creationTime>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:26:35 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:        <nova:user uuid="b774fd0c04fc403d9ddb205f1e6abbc5">tempest-ServerActionsTestOtherB-1539976047-project-member</nova:user>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:        <nova:project uuid="c85a955249394f0faf7c890f5cd0df32">tempest-ServerActionsTestOtherB-1539976047</nova:project>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:        <nova:port uuid="69011b0a-5af7-4bef-a14c-8d83e63e08ae">
Feb 25 07:26:35 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <entry name="serial">43b8959e-9cf0-42ca-aa1f-8a380321c971</entry>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <entry name="uuid">43b8959e-9cf0-42ca-aa1f-8a380321c971</entry>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/43b8959e-9cf0-42ca-aa1f-8a380321c971_disk">
Feb 25 07:26:35 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:26:35 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/43b8959e-9cf0-42ca-aa1f-8a380321c971_disk.config">
Feb 25 07:26:35 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:26:35 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:df:0d:06"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <target dev="tap69011b0a-5a"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/43b8959e-9cf0-42ca-aa1f-8a380321c971/console.log" append="off"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:26:35 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:26:35 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:26:35 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:26:35 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
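The XML above completes the guest definition that nova-compute generated for instance 43b8959e-9cf0-42ca-aa1f-8a380321c971 and is about to hand to libvirt. As a minimal sketch of how such a document is submitted through the libvirt Python bindings (the qemu:///system URI and the transient createXML() call are illustrative assumptions, not shown in this log):

    import libvirt  # libvirt-python bindings

    conn = libvirt.open("qemu:///system")  # assumption: local system hypervisor

    # domain_xml stands in for the full <domain>...</domain> document
    # dumped above; elided here.
    domain_xml = "<domain type='kvm'>...</domain>"

    # createXML() defines and boots a transient guest in one step; a
    # persistent define-then-start sequence would use defineXML()
    # followed by dom.create() instead.
    dom = conn.createXML(domain_xml, 0)
    print(dom.name(), dom.UUIDString())
    conn.close()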
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.336 244018 DEBUG nova.compute.manager [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Preparing to wait for external event network-vif-plugged-69011b0a-5af7-4bef-a14c-8d83e63e08ae prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.337 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.337 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.338 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
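The three lockutils lines above are the prepare half of Nova's external-event handshake: a waiter for network-vif-plugged-69011b0a-... is registered under the per-instance "-events" lock before the VIF is plugged, so the later block-and-wait cannot race the callback from Neutron. A schematic reduction of that prepare-then-wait pattern (the registry and function names are illustrative, not Nova's internal API):

    import threading

    # Hypothetical registry keyed by (instance_uuid, event_name).
    _events = {}
    _lock = threading.Lock()

    def prepare(instance_uuid, event_name):
        # Mirrors _create_or_get_event under the "<uuid>-events" lock.
        with _lock:
            return _events.setdefault((instance_uuid, event_name),
                                      threading.Event())

    def deliver(instance_uuid, event_name):
        # Called when the external event arrives (e.g. from Neutron).
        with _lock:
            ev = _events.get((instance_uuid, event_name))
        if ev:
            ev.set()

    waiter = prepare("43b8959e-9cf0-42ca-aa1f-8a380321c971",
                     "network-vif-plugged-69011b0a-5af7-4bef-a14c-8d83e63e08ae")
    # ... plug the VIF, then block until the event is delivered:
    # waiter.wait(timeout=300)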
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.338 244018 DEBUG nova.virt.libvirt.vif [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:26:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1336556788',display_name='tempest-ServerActionsTestOtherB-server-1336556788',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1336556788',id=55,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c85a955249394f0faf7c890f5cd0df32',ramdisk_id='',reservation_id='r-8l7ggktq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1539976047',owner_user_name='tempest-ServerActionsTestOtherB-1539976047-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:26:30Z,user_data=None,user_id='b774fd0c04fc403d9ddb205f1e6abbc5',uuid=43b8959e-9cf0-42ca-aa1f-8a380321c971,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "address": "fa:16:3e:df:0d:06", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69011b0a-5a", "ovs_interfaceid": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.339 244018 DEBUG nova.network.os_vif_util [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converting VIF {"id": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "address": "fa:16:3e:df:0d:06", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69011b0a-5a", "ovs_interfaceid": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.340 244018 DEBUG nova.network.os_vif_util [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:0d:06,bridge_name='br-int',has_traffic_filtering=True,id=69011b0a-5af7-4bef-a14c-8d83e63e08ae,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69011b0a-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.340 244018 DEBUG os_vif [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:0d:06,bridge_name='br-int',has_traffic_filtering=True,id=69011b0a-5af7-4bef-a14c-8d83e63e08ae,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69011b0a-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
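At this point the Nova-internal VIF dict has been converted to an os-vif VIFOpenVSwitch object and handed to the os_vif library for plugging. A sketch of driving the same library directly, with field values copied from the object in the log (a reduced reconstruction, not the exact objects Nova builds; os_vif.initialize() must run first so the ovs plugin is loaded):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the plugins (ovs, linux_bridge, ...)

    my_vif = vif.VIFOpenVSwitch(
        id="69011b0a-5af7-4bef-a14c-8d83e63e08ae",
        address="fa:16:3e:df:0d:06",
        vif_name="tap69011b0a-5a",
        bridge_name="br-int",
        network=network.Network(id="64c22162-7e15-45de-8fd2-8c9a24f27006"),
    )
    inst = instance_info.InstanceInfo(
        uuid="43b8959e-9cf0-42ca-aa1f-8a380321c971",
        name="instance-00000037",
    )
    os_vif.plug(my_vif, inst)  # corresponds to the "Plugging vif ..." line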
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.342 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.343 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.343 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
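The AddBridgeCommand transaction above goes through ovsdbapp's native-OVSDB IDL; since br-int already exists with datapath_type=system, the commit is reported as a no-op. A sketch of issuing the same command against a local ovsdb-server (socket path assumed):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = "unix:/run/openvswitch/db.sock"  # assumption
    idl = connection.OvsdbIdl.from_server(OVSDB, "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Same command as logged: AddBridgeCommand(name=br-int,
    # may_exist=True, datapath_type=system).
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))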
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.350 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.351 244018 DEBUG nova.scheduler.client.report [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.358 244018 INFO nova.compute.manager [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Took 7.75 seconds to build instance.#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.362 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.363 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.364 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.365 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.365 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.366 244018 DEBUG nova.virt.libvirt.driver [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.371 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.371 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69011b0a-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.371 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap69011b0a-5a, col_values=(('external_ids', {'iface-id': '69011b0a-5af7-4bef-a14c-8d83e63e08ae', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:df:0d:06', 'vm-uuid': '43b8959e-9cf0-42ca-aa1f-8a380321c971'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
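The two-command transaction above (AddPortCommand plus DbSetCommand) is what actually wires the tap device into br-int and stamps it with the external_ids that ovn-controller matches against the logical port. The same transaction through ovsdbapp, with the column values copied from the log (socket path assumed, as in the earlier bridge sketch):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                          "Open_vSwitch")  # path assumed
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tap69011b0a-5a", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap69011b0a-5a",
            ("external_ids", {
                "iface-id": "69011b0a-5af7-4bef-a14c-8d83e63e08ae",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:df:0d:06",
                "vm-uuid": "43b8959e-9cf0-42ca-aa1f-8a380321c971"})))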
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.373 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:35 np0005629333 NetworkManager[49836]: <info>  [1772022395.3739] manager: (tap69011b0a-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/212)
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.374 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.375 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022395.1050055, 56267d17-0733-4abe-b916-d1a25e516514 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.375 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 56267d17-0733-4abe-b916-d1a25e516514] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.377 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.378 244018 INFO os_vif [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:0d:06,bridge_name='br-int',has_traffic_filtering=True,id=69011b0a-5af7-4bef-a14c-8d83e63e08ae,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69011b0a-5a')#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.398 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.399 244018 DEBUG nova.compute.manager [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.406 244018 DEBUG oslo_concurrency.lockutils [None req-24cdebe1-b825-401c-8313-50e22d24519a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.890s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.424 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.427 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.455 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 56267d17-0733-4abe-b916-d1a25e516514] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
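The two lines above are Nova's lifecycle reconciliation at work: libvirt reported the guest Resumed, the database still records power_state 0 (NOSTATE) while the hypervisor reports 1 (RUNNING), but the pending task_state of spawning means the sync is deliberately skipped rather than fighting the in-flight build. Schematically (a reduction for illustration, not Nova's code; constants mirror the power-state values printed above):

    NOSTATE, RUNNING = 0, 1

    def sync_power_state(db_power_state, vm_power_state, task_state):
        # "During sync_power_state the instance has a pending task
        # (spawning). Skip." corresponds to the first branch.
        if task_state is not None:
            return "skip"          # an operation is in flight; do nothing
        if db_power_state != vm_power_state:
            return "update-db"     # reconcile the stored state
        return "in-sync"

    assert sync_power_state(NOSTATE, RUNNING, "spawning") == "skip"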
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.458 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.459 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.459 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No VIF found with MAC fa:16:3e:df:0d:06, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.459 244018 INFO nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Using config drive#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.479 244018 DEBUG nova.storage.rbd_utils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image 43b8959e-9cf0-42ca-aa1f-8a380321c971_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
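The "does not exist" line above is the RBD existence probe Nova performs before creating the config-drive image. With the python-rbd bindings the same probe looks roughly like this (pool, image name, client id, and conf path copied from surrounding log lines; context-manager support in python-rados is assumed):

    import rados
    import rbd

    with rados.Rados(conffile="/etc/ceph/ceph.conf",
                     name="client.openstack") as cluster:
        with cluster.open_ioctx("vms") as ioctx:
            try:
                image = rbd.Image(
                    ioctx, "43b8959e-9cf0-42ca-aa1f-8a380321c971_disk.config")
                image.close()
            except rbd.ImageNotFound:
                # Matches the "rbd image ... does not exist" DEBUG line.
                print("image does not exist")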
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.489 244018 INFO nova.compute.manager [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Took 5.85 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.490 244018 DEBUG nova.compute.manager [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.490 244018 DEBUG nova.compute.manager [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.491 244018 DEBUG nova.network.neutron [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.516 244018 INFO nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.537 244018 DEBUG nova.compute.manager [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.549 244018 INFO nova.compute.manager [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Took 7.58 seconds to build instance.#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.574 244018 DEBUG oslo_concurrency.lockutils [None req-a3609d45-e0a9-4e5f-b2c5-94b432e51cfd 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.618 244018 DEBUG nova.compute.manager [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.619 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.620 244018 INFO nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Creating image(s)#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.641 244018 DEBUG nova.storage.rbd_utils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.665 244018 DEBUG nova.storage.rbd_utils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.690 244018 DEBUG nova.storage.rbd_utils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.698 244018 DEBUG oslo_concurrency.processutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:35 np0005629333 podman[291122]: 2026-02-25 12:26:35.703500686 +0000 UTC m=+0.089670539 container create 41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.740 244018 DEBUG nova.policy [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9c44c1a95c8d4d97bc1d7dde284dcf1b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'daab2f813dbd467685c22833bf875ec9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
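The policy line above is not an error: Nova probes network:attach_external_network on every boot, and for this reader/member token the check simply fails, meaning the request may not attach to external networks. A minimal oslo.policy reconstruction of that check (the admin-only default rule string is an assumption; the credential values are taken from the log):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    # Assumption: admin-only default, which reader/member roles fail.
    enforcer.register_default(
        policy.RuleDefault("network:attach_external_network",
                           "is_admin:True"))

    creds = {"roles": ["reader", "member"], "is_admin": False,
             "user_id": "9c44c1a95c8d4d97bc1d7dde284dcf1b",
             "project_id": "daab2f813dbd467685c22833bf875ec9"}
    print(enforcer.enforce("network:attach_external_network", {}, creds))
    # -> False, matching the "Policy check ... failed" DEBUG line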
Feb 25 07:26:35 np0005629333 systemd[1]: Started libpod-conmon-41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115.scope.
Feb 25 07:26:35 np0005629333 podman[291122]: 2026-02-25 12:26:35.664601455 +0000 UTC m=+0.050771418 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:26:35 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:26:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b6407dc8ff3720d9297a51c2a82a3e2ac0165aaf6a4b9b13749f15d5541d8db/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.796 244018 DEBUG oslo_concurrency.processutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
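The command above shows how Nova probes the cached base image: qemu-img info runs under oslo.concurrency's prlimit wrapper (1 GiB address space, 30 s CPU) with --force-share so a concurrent reader does not block the probe. The same invocation through the library that produced this log line (paths copied from the log):

    import json

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C", "qemu-img", "info",
        "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6",
        "--force-share", "--output=json",
        prlimit=processutils.ProcessLimits(
            address_space=1024 * 1024 * 1024,  # --as=1073741824
            cpu_time=30),                      # --cpu=30
    )
    info = json.loads(out)
    print(info["format"], info["virtual-size"])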
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.797 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.797 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.798 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:35 np0005629333 podman[291122]: 2026-02-25 12:26:35.799237905 +0000 UTC m=+0.185407808 container init 41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
Feb 25 07:26:35 np0005629333 podman[291122]: 2026-02-25 12:26:35.809425793 +0000 UTC m=+0.195595686 container start 41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:26:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 367 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 6.1 MiB/s wr, 147 op/s
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.819 244018 DEBUG nova.storage.rbd_utils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:26:35 np0005629333 nova_compute[244014]: 2026-02-25 12:26:35.824 244018 DEBUG oslo_concurrency.processutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:35 np0005629333 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[291192]: [NOTICE]   (291212) : New worker (291219) forked
Feb 25 07:26:35 np0005629333 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[291192]: [NOTICE]   (291212) : Loading success.
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.070 244018 INFO nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Creating config drive at /var/lib/nova/instances/43b8959e-9cf0-42ca-aa1f-8a380321c971/disk.config#033[00m
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.075 244018 DEBUG oslo_concurrency.processutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/43b8959e-9cf0-42ca-aa1f-8a380321c971/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpsq0q_haq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.138 244018 DEBUG oslo_concurrency.processutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.203 244018 DEBUG nova.storage.rbd_utils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] resizing rbd image d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
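The import at 12:26:35.824 followed by the resize above is how the RBD image backend materializes the root disk: the flat base file is imported as a format-2 image into the vms pool, then grown in place to the flavor's 1 GiB (1073741824 bytes). The equivalent pair of CLI calls, wrapped in Python for illustration (import arguments copied from the logged command; the resize invocation is a sketch of the same operation Nova performs through the rbd bindings):

    import subprocess

    BASE = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
    IMAGE = "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk"

    # Import the flat base image as a format-2 RBD image in pool "vms".
    subprocess.run(["rbd", "import", "--pool", "vms", BASE, IMAGE,
                    "--image-format=2", "--id", "openstack",
                    "--conf", "/etc/ceph/ceph.conf"], check=True)

    # Grow it to the flavor's root disk size (1 GiB per the log).
    subprocess.run(["rbd", "resize", "--pool", "vms", "--image", IMAGE,
                    "--size", "1G", "--id", "openstack",
                    "--conf", "/etc/ceph/ceph.conf"], check=True)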
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.232 244018 DEBUG oslo_concurrency.processutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/43b8959e-9cf0-42ca-aa1f-8a380321c971/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpsq0q_haq" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
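The mkisofs run above builds the config drive: an ISO 9660 filesystem labeled config-2 with Joliet (-J) and Rock Ridge (-r) extensions, generated from a temporary directory of metadata files. A reconstruction of the same invocation (the staging directory stands in for the /tmp/tmpsq0q_haq tempdir in the log, and the output path is abbreviated):

    import subprocess

    subprocess.run(
        ["/usr/bin/mkisofs",
         "-o", "/var/lib/nova/instances/<instance-uuid>/disk.config",
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute",
         "-quiet", "-J", "-r", "-V", "config-2",
         "/tmp/staged-metadata"],  # assumption: stand-in for the tempdir
        check=True,
    )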
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.259 244018 DEBUG nova.storage.rbd_utils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image 43b8959e-9cf0-42ca-aa1f-8a380321c971_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.263 244018 DEBUG oslo_concurrency.processutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/43b8959e-9cf0-42ca-aa1f-8a380321c971/disk.config 43b8959e-9cf0-42ca-aa1f-8a380321c971_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.326 244018 DEBUG nova.objects.instance [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'migration_context' on Instance uuid d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.341 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.341 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Ensure instance console log exists: /var/lib/nova/instances/d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.341 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.342 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.342 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.378 244018 DEBUG oslo_concurrency.processutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/43b8959e-9cf0-42ca-aa1f-8a380321c971/disk.config 43b8959e-9cf0-42ca-aa1f-8a380321c971_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.379 244018 INFO nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Deleting local config drive /var/lib/nova/instances/43b8959e-9cf0-42ca-aa1f-8a380321c971/disk.config because it was imported into RBD.#033[00m
Feb 25 07:26:36 np0005629333 kernel: tap69011b0a-5a: entered promiscuous mode
Feb 25 07:26:36 np0005629333 NetworkManager[49836]: <info>  [1772022396.4129] manager: (tap69011b0a-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/213)
Feb 25 07:26:36 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:36Z|00461|binding|INFO|Claiming lport 69011b0a-5af7-4bef-a14c-8d83e63e08ae for this chassis.
Feb 25 07:26:36 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:36Z|00462|binding|INFO|69011b0a-5af7-4bef-a14c-8d83e63e08ae: Claiming fa:16:3e:df:0d:06 10.100.0.5
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.414 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:36 np0005629333 NetworkManager[49836]: <info>  [1772022396.4243] device (tap69011b0a-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:26:36 np0005629333 NetworkManager[49836]: <info>  [1772022396.4250] device (tap69011b0a-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:26:36 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:36Z|00463|binding|INFO|Setting lport 69011b0a-5af7-4bef-a14c-8d83e63e08ae ovn-installed in OVS
Feb 25 07:26:36 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:36Z|00464|binding|INFO|Setting lport 69011b0a-5af7-4bef-a14c-8d83e63e08ae up in Southbound
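The four ovn_controller lines above are the chassis claiming the logical port: it matches the iface-id external-id set on the OVS interface earlier, claims the lport, marks it ovn-installed in the local OVS database, and flips up in the Southbound Port_Binding row. One way to verify the result from the CLI, wrapped in Python for consistency (local ovn-sbctl access is assumed):

    import subprocess

    # Which chassis holds the binding, and is the port up?
    subprocess.run(
        ["ovn-sbctl", "--columns=chassis,up", "find", "Port_Binding",
         "logical_port=69011b0a-5af7-4bef-a14c-8d83e63e08ae"],
        check=True,
    )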
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.428 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:36.428 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:0d:06 10.100.0.5'], port_security=['fa:16:3e:df:0d:06 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '43b8959e-9cf0-42ca-aa1f-8a380321c971', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64c22162-7e15-45de-8fd2-8c9a24f27006', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c85a955249394f0faf7c890f5cd0df32', 'neutron:revision_number': '2', 'neutron:security_group_ids': '10bdd349-ebee-42f5-8295-ca2b7d5c5d74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9495f97-67e6-4da7-a9b0-f643c9e48076, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=69011b0a-5af7-4bef-a14c-8d83e63e08ae) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:26:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:36.430 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 69011b0a-5af7-4bef-a14c-8d83e63e08ae in datapath 64c22162-7e15-45de-8fd2-8c9a24f27006 bound to our chassis#033[00m
Feb 25 07:26:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:36.432 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64c22162-7e15-45de-8fd2-8c9a24f27006#033[00m
Feb 25 07:26:36 np0005629333 systemd-machined[210048]: New machine qemu-60-instance-00000037.
Feb 25 07:26:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:36.449 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9a77c24c-883b-49c5-83e9-dd78f1f86c77]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:36 np0005629333 systemd[1]: Started Virtual Machine qemu-60-instance-00000037.
Feb 25 07:26:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:36.469 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[42cedd75-191d-4b4e-b0db-4bdff8caafab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:36.479 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[295e2408-1fba-4bf2-898b-cf64522c5fdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:36.497 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7b6728bc-176b-4db8-8eed-3d9305d1d1ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:36.507 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9c82bbaf-378a-427a-ba50-470509a23e05]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64c22162-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:1c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428693, 'reachable_time': 31211, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291382, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:36.520 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[00256589-2aae-4dc8-8f15-716d51d31131]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428702, 'tstamp': 428702}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291383, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428705, 'tstamp': 428705}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291383, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
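The two privsep replies above dump the link and addresses inside the metadata namespace: tap64c22162-71 carries both the subnet-local 10.100.0.2/28 and the link-local metadata address 169.254.169.254/32, where the haproxy started earlier listens. The same inspection by hand, wrapped in Python for consistency (namespace name copied from the netlink 'target' field above):

    import subprocess

    NS = "ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006"

    # Equivalent to the RTM_NEWLINK/RTM_NEWADDR dumps returned via privsep.
    subprocess.run(["ip", "netns", "exec", NS,
                    "ip", "addr", "show", "dev", "tap64c22162-71"],
                   check=True)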
Feb 25 07:26:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:36.521 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64c22162-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.523 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.524 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:36.524 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64c22162-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:36.524 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:26:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:36.525 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64c22162-70, col_values=(('external_ids', {'iface-id': '81f0f54c-4e04-4adf-952f-b6d0fe9698c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:36.525 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
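The three "Running txn" lines above are ovsdbapp commands moving the OVS end of the veth pair: delete any stale tap64c22162-70 from br-ex, add it to br-int, and set external_ids:iface-id so ovn-controller can bind it. The agent issues them as separate single-command transactions; a hedged sketch of the equivalent calls, assuming api is an initialized ovsdbapp.schema.open_vswitch.impl_idl.OvsdbIdl, batches them into one transaction for brevity:

    # Sketch of the DelPort/AddPort/DbSet commands seen above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap64c22162-70', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap64c22162-70', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap64c22162-70',
            ('external_ids',
             {'iface-id': '81f0f54c-4e04-4adf-952f-b6d0fe9698c7'})))

Because the port is already attached and the column already holds that value, both commits report "Transaction caused no change", exactly as logged.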
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.577 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022381.5763638, a826c2fd-1af8-4b55-b801-90ce87d04466 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.577 244018 INFO nova.compute.manager [-] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.596 244018 DEBUG nova.compute.manager [None req-2506aba5-5168-4ee6-a5fe-447a1ceb6c70 - - - - - -] [instance: a826c2fd-1af8-4b55-b801-90ce87d04466] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.836 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022396.836163, 43b8959e-9cf0-42ca-aa1f-8a380321c971 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.837 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] VM Started (Lifecycle Event)#033[00m
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.864 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.868 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022396.8365889, 43b8959e-9cf0-42ca-aa1f-8a380321c971 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.868 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.892 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.895 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:26:36 np0005629333 nova_compute[244014]: 2026-02-25 12:26:36.929 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
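The "Synchronizing instance power state" line compares the database's view (power_state 0, NOSTATE) with the hypervisor's (power_state 3, PAUSED) while the instance is still building. A simplified sketch of that decision, not Nova's actual code; the numeric constants mirror nova.compute.power_state:

    # Simplified power-state sync: a pending task wins over any
    # hypervisor-reported state, which is why the log says "Skip".
    NOSTATE, RUNNING, PAUSED = 0, 1, 3

    def sync_power_state(db_state, vm_state_on_host, task_state):
        if task_state is not None:          # e.g. 'spawning'
            return 'skip: pending task %s' % task_state
        if db_state != vm_state_on_host:
            return 'update DB %s -> %s' % (db_state, vm_state_on_host)
        return 'in sync'

    print(sync_power_state(NOSTATE, PAUSED, 'spawning'))
    # -> skip: pending task spawning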
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.015 244018 DEBUG nova.network.neutron [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Successfully created port: a4d3e156-0255-47ba-811a-4ddaf7c16468 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.119 244018 DEBUG nova.compute.manager [req-3524a921-3c1f-4576-b9b4-f83f017b1f85 req-225e8b94-8ea1-46c1-89e1-e6dbb01be9af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Received event network-vif-plugged-dd03c667-f058-4e13-bb03-517ed838be2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.120 244018 DEBUG oslo_concurrency.lockutils [req-3524a921-3c1f-4576-b9b4-f83f017b1f85 req-225e8b94-8ea1-46c1-89e1-e6dbb01be9af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "56267d17-0733-4abe-b916-d1a25e516514-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.120 244018 DEBUG oslo_concurrency.lockutils [req-3524a921-3c1f-4576-b9b4-f83f017b1f85 req-225e8b94-8ea1-46c1-89e1-e6dbb01be9af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.120 244018 DEBUG oslo_concurrency.lockutils [req-3524a921-3c1f-4576-b9b4-f83f017b1f85 req-225e8b94-8ea1-46c1-89e1-e6dbb01be9af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.121 244018 DEBUG nova.compute.manager [req-3524a921-3c1f-4576-b9b4-f83f017b1f85 req-225e8b94-8ea1-46c1-89e1-e6dbb01be9af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] No waiting events found dispatching network-vif-plugged-dd03c667-f058-4e13-bb03-517ed838be2e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.121 244018 WARNING nova.compute.manager [req-3524a921-3c1f-4576-b9b4-f83f017b1f85 req-225e8b94-8ea1-46c1-89e1-e6dbb01be9af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Received unexpected event network-vif-plugged-dd03c667-f058-4e13-bb03-517ed838be2e for instance with vm_state active and task_state None.#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.121 244018 DEBUG nova.compute.manager [req-3524a921-3c1f-4576-b9b4-f83f017b1f85 req-225e8b94-8ea1-46c1-89e1-e6dbb01be9af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Received event network-vif-plugged-69011b0a-5af7-4bef-a14c-8d83e63e08ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.122 244018 DEBUG oslo_concurrency.lockutils [req-3524a921-3c1f-4576-b9b4-f83f017b1f85 req-225e8b94-8ea1-46c1-89e1-e6dbb01be9af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.122 244018 DEBUG oslo_concurrency.lockutils [req-3524a921-3c1f-4576-b9b4-f83f017b1f85 req-225e8b94-8ea1-46c1-89e1-e6dbb01be9af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.122 244018 DEBUG oslo_concurrency.lockutils [req-3524a921-3c1f-4576-b9b4-f83f017b1f85 req-225e8b94-8ea1-46c1-89e1-e6dbb01be9af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
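Each acquire/release pair above is oslo.concurrency serializing access to a per-instance event queue under a "<uuid>-events" lock, with the waited/held timings appended by lockutils itself. A minimal sketch of the pattern (illustrative, not Nova's implementation):

    # Per-instance event queue guarded by a named oslo.concurrency lock;
    # lockutils emits the acquired/released DEBUG lines automatically.
    from oslo_concurrency import lockutils

    def pop_instance_event(instance_uuid, events_by_instance):
        with lockutils.lock(instance_uuid + '-events'):
            return events_by_instance.pop(instance_uuid, None)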
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.123 244018 DEBUG nova.compute.manager [req-3524a921-3c1f-4576-b9b4-f83f017b1f85 req-225e8b94-8ea1-46c1-89e1-e6dbb01be9af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Processing event network-vif-plugged-69011b0a-5af7-4bef-a14c-8d83e63e08ae _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.124 244018 DEBUG nova.compute.manager [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.127 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022397.1269784, 43b8959e-9cf0-42ca-aa1f-8a380321c971 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.128 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.130 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.134 244018 INFO nova.virt.libvirt.driver [-] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Instance spawned successfully.#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.135 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.158 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.163 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.164 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.164 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.165 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.166 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.166 244018 DEBUG nova.virt.libvirt.driver [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
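The six "Found default for ..." lines record the buses and models libvirt actually chose, so that later operations keep them stable. A sketch of the effect: each undefined hw_* property is persisted into system_metadata under an image_ prefix, which is where image_hw_cdrom_bus='sata' and friends appear in the instance dump further down. The helper below is illustrative, not Nova's code:

    # Persist driver-chosen defaults for undefined image properties.
    # The values are the ones logged above.
    defaults = {'hw_cdrom_bus': 'sata', 'hw_disk_bus': 'virtio',
                'hw_input_bus': 'usb', 'hw_pointer_model': 'usbtablet',
                'hw_video_model': 'virtio', 'hw_vif_model': 'virtio'}

    def register_defaults(system_metadata, defaults):
        for prop, value in defaults.items():
            system_metadata.setdefault('image_' + prop, value)
        return system_metadata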
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.171 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.216 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.250 244018 INFO nova.compute.manager [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Took 6.42 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.250 244018 DEBUG nova.compute.manager [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.295 244018 DEBUG oslo_concurrency.lockutils [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "56267d17-0733-4abe-b916-d1a25e516514" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.295 244018 DEBUG oslo_concurrency.lockutils [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.295 244018 DEBUG oslo_concurrency.lockutils [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "56267d17-0733-4abe-b916-d1a25e516514-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.295 244018 DEBUG oslo_concurrency.lockutils [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.296 244018 DEBUG oslo_concurrency.lockutils [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.297 244018 INFO nova.compute.manager [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Terminating instance#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.298 244018 DEBUG nova.compute.manager [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.316 244018 INFO nova.compute.manager [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Took 8.21 seconds to build instance.#033[00m
Feb 25 07:26:37 np0005629333 kernel: tapdd03c667-f0 (unregistering): left promiscuous mode
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.336 244018 DEBUG oslo_concurrency.lockutils [None req-fd7f9745-8f46-4b20-a909-989b74dfa46d b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.319s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:37Z|00465|binding|INFO|Releasing lport dd03c667-f058-4e13-bb03-517ed838be2e from this chassis (sb_readonly=0)
Feb 25 07:26:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:37Z|00466|binding|INFO|Setting lport dd03c667-f058-4e13-bb03-517ed838be2e down in Southbound
Feb 25 07:26:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:37Z|00467|binding|INFO|Removing iface tapdd03c667-f0 ovn-installed in OVS
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.338 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:37 np0005629333 NetworkManager[49836]: <info>  [1772022397.3459] device (tapdd03c667-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:26:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:37.353 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:52:b8 10.100.0.13'], port_security=['fa:16:3e:87:52:b8 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '56267d17-0733-4abe-b916-d1a25e516514', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f976004e0b334963a69c2519fca200d2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ca95284b-67f9-4e09-a57b-7847021c2465', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=974b795b-e2d8-4683-ac80-b366113e2dd8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=dd03c667-f058-4e13-bb03-517ed838be2e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:26:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:37.356 157129 INFO neutron.agent.ovn.metadata.agent [-] Port dd03c667-f058-4e13-bb03-517ed838be2e in datapath 7693903d-d5e2-4b50-a39b-bbbcc4148329 unbound from our chassis#033[00m
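The UPDATE match above is an ovsdbapp row event: the Port_Binding row for dd03c667 lost its chassis and went up=[False], so the agent treats the port as unbound from this host. A hedged sketch of such an event class (names illustrative, not neutron's code):

    # An ovsdbapp RowEvent that fires when a Port_Binding row stops
    # being bound to this chassis.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortUnboundEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Old row carried a chassis, new row does not -> unbound.
            return getattr(old, 'chassis', None) and not row.chassis

        def run(self, event, row, old):
            print('Port %s unbound from our chassis' % row.logical_port)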
Feb 25 07:26:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:37.359 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7693903d-d5e2-4b50-a39b-bbbcc4148329, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:26:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:37.360 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[576c3a00-47f9-4a9d-afc9-308eb73e16e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:37.361 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 namespace which is not needed anymore#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.367 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:37 np0005629333 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000036.scope: Deactivated successfully.
Feb 25 07:26:37 np0005629333 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000036.scope: Consumed 2.783s CPU time.
Feb 25 07:26:37 np0005629333 systemd-machined[210048]: Machine qemu-59-instance-00000036 terminated.
Feb 25 07:26:37 np0005629333 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[291192]: [NOTICE]   (291212) : haproxy version is 2.8.14-c23fe91
Feb 25 07:26:37 np0005629333 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[291192]: [NOTICE]   (291212) : path to executable is /usr/sbin/haproxy
Feb 25 07:26:37 np0005629333 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[291192]: [WARNING]  (291212) : Exiting Master process...
Feb 25 07:26:37 np0005629333 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[291192]: [WARNING]  (291212) : Exiting Master process...
Feb 25 07:26:37 np0005629333 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[291192]: [ALERT]    (291212) : Current worker (291219) exited with code 143 (Terminated)
Feb 25 07:26:37 np0005629333 neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329[291192]: [WARNING]  (291212) : All workers exited. Exiting... (0)
Feb 25 07:26:37 np0005629333 systemd[1]: libpod-41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115.scope: Deactivated successfully.
Feb 25 07:26:37 np0005629333 conmon[291192]: conmon 41085a498c43d4215289 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115.scope/container/memory.events
Feb 25 07:26:37 np0005629333 podman[291450]: 2026-02-25 12:26:37.518016128 +0000 UTC m=+0.060714999 container died 41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.532 244018 INFO nova.virt.libvirt.driver [-] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Instance destroyed successfully.#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.533 244018 DEBUG nova.objects.instance [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lazy-loading 'resources' on Instance uuid 56267d17-0733-4abe-b916-d1a25e516514 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:26:37 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115-userdata-shm.mount: Deactivated successfully.
Feb 25 07:26:37 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5b6407dc8ff3720d9297a51c2a82a3e2ac0165aaf6a4b9b13749f15d5541d8db-merged.mount: Deactivated successfully.
Feb 25 07:26:37 np0005629333 podman[291450]: 2026-02-25 12:26:37.573470407 +0000 UTC m=+0.116169278 container cleanup 41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 25 07:26:37 np0005629333 systemd[1]: libpod-conmon-41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115.scope: Deactivated successfully.
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.589 244018 DEBUG nova.virt.libvirt.vif [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1327428650',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1327428650',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1327428650',id=54,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f976004e0b334963a69c2519fca200d2',ramdisk_id='',reservation_id='r-pc621z15',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-1374162185',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-1374162185-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:26:35Z,user_data=None,user_id='89e71139346a40899212d5bc35835720',uuid=56267d17-0733-4abe-b916-d1a25e516514,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dd03c667-f058-4e13-bb03-517ed838be2e", "address": "fa:16:3e:87:52:b8", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd03c667-f0", "ovs_interfaceid": "dd03c667-f058-4e13-bb03-517ed838be2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.590 244018 DEBUG nova.network.os_vif_util [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converting VIF {"id": "dd03c667-f058-4e13-bb03-517ed838be2e", "address": "fa:16:3e:87:52:b8", "network": {"id": "7693903d-d5e2-4b50-a39b-bbbcc4148329", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-131439805-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f976004e0b334963a69c2519fca200d2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd03c667-f0", "ovs_interfaceid": "dd03c667-f058-4e13-bb03-517ed838be2e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.591 244018 DEBUG nova.network.os_vif_util [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:52:b8,bridge_name='br-int',has_traffic_filtering=True,id=dd03c667-f058-4e13-bb03-517ed838be2e,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd03c667-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.592 244018 DEBUG os_vif [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:52:b8,bridge_name='br-int',has_traffic_filtering=True,id=dd03c667-f058-4e13-bb03-517ed838be2e,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd03c667-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.595 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.595 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd03c667-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.597 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.600 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.603 244018 INFO os_vif [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:52:b8,bridge_name='br-int',has_traffic_filtering=True,id=dd03c667-f058-4e13-bb03-517ed838be2e,network=Network(7693903d-d5e2-4b50-a39b-bbbcc4148329),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd03c667-f0')#033[00m
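The unplug above is Nova handing the converted VIFOpenVSwitch object to os-vif, whose ovs plugin emitted the DelPortCommand on br-int. A minimal sketch of that call path; only the fields the ovs plugin plausibly needs are set, with values copied from the log:

    # Rebuild the logged VIF minimally and unplug it through os-vif.
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin, among others
    net = network.Network(id='7693903d-d5e2-4b50-a39b-bbbcc4148329',
                          bridge='br-int')
    ovs_vif = vif.VIFOpenVSwitch(
        id='dd03c667-f058-4e13-bb03-517ed838be2e',
        address='fa:16:3e:87:52:b8',
        vif_name='tapdd03c667-f0',
        bridge_name='br-int',
        network=net)
    info = instance_info.InstanceInfo(
        uuid='56267d17-0733-4abe-b916-d1a25e516514',
        name='instance-00000036')
    os_vif.unplug(ovs_vif, info)  # -> DelPortCommand(tapdd03c667-f0, br-int)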
Feb 25 07:26:37 np0005629333 podman[291490]: 2026-02-25 12:26:37.656049415 +0000 UTC m=+0.059649280 container remove 41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 25 07:26:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:37.664 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ef018af5-d8ae-4382-a5f6-cb745f8f409a]: (4, ('Wed Feb 25 12:26:37 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 (41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115)\n41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115\nWed Feb 25 12:26:37 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 (41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115)\n41085a498c43d421528932ebb1e273652d232a9ced8039295b9b7912c6839115\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
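The reply above carries the stdout of the agent's root-side wrapper stopping and deleting the per-network haproxy container; the resulting SIGTERM is what produced haproxy's "exited with code 143" a few lines earlier. The equivalent teardown, sketched with subprocess (the actual wrapper is a shell script run through privsep):

    # Stop and remove the per-network metadata haproxy container.
    import subprocess

    name = 'neutron-haproxy-ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329'
    subprocess.run(['podman', 'stop', name], check=True)  # SIGTERM -> 143
    subprocess.run(['podman', 'rm', name], check=True)    # 'container remove'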
Feb 25 07:26:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:37.666 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b13b8b1f-5716-4bcc-b19b-33bc491f3334]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:37.667 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7693903d-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.669 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:37 np0005629333 kernel: tap7693903d-d0: left promiscuous mode
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.677 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:37.680 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ad82f17c-bbf9-4c7b-a4b8-22baaf174887]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:37.694 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7e493225-a91e-48ab-9afe-f3a5414131d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:37.696 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[53433b70-22ae-4917-89e2-27a1c6a62c52]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:37.721 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[41d3b2b5-4d11-459e-bd27-2f0a2c9d3006]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436452, 'reachable_time': 31271, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291520, 'error': None, 'target': 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:37 np0005629333 systemd[1]: run-netns-ovnmeta\x2d7693903d\x2dd5e2\x2d4b50\x2da39b\x2dbbbcc4148329.mount: Deactivated successfully.
Feb 25 07:26:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:37.727 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:26:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:37.728 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[972ef46e-33e1-451e-9659-54fea56c75e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
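remove_netns is the final teardown step: after the loopback-only link dump confirms the namespace is empty, neutron's privileged ip_lib removes it, and systemd reports the run-netns mount unit deactivating. A sketch with pyroute2, which the privileged helper wraps:

    # Remove the now-empty OVN metadata namespace (requires root).
    from pyroute2 import netns

    name = 'ovnmeta-7693903d-d5e2-4b50-a39b-bbbcc4148329'
    if name in netns.listnetns():
        netns.remove(name)  # unlinks /var/run/netns/<name>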
Feb 25 07:26:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 418 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.2 MiB/s wr, 313 op/s
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.913 244018 INFO nova.virt.libvirt.driver [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Deleting instance files /var/lib/nova/instances/56267d17-0733-4abe-b916-d1a25e516514_del#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.914 244018 INFO nova.virt.libvirt.driver [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Deletion of /var/lib/nova/instances/56267d17-0733-4abe-b916-d1a25e516514_del complete#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.982 244018 INFO nova.compute.manager [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.983 244018 DEBUG oslo.service.loopingcall [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.983 244018 DEBUG nova.compute.manager [-] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:26:37 np0005629333 nova_compute[244014]: 2026-02-25 12:26:37.983 244018 DEBUG nova.network.neutron [-] [instance: 56267d17-0733-4abe-b916-d1a25e516514] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
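deallocate_for_instance() asks Neutron to drop every port bound to the instance; the VIF above had preserve_on_delete=false, so its port is deleted rather than left behind. A hedged illustration using openstacksdk; Nova itself goes through its own Neutron client wrapper:

    # Delete all Neutron ports attached to the instance being destroyed.
    import openstack

    conn = openstack.connect(cloud='envvars')
    for port in conn.network.ports(
            device_id='56267d17-0733-4abe-b916-d1a25e516514'):
        conn.network.delete_port(port, ignore_missing=True)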
Feb 25 07:26:38 np0005629333 nova_compute[244014]: 2026-02-25 12:26:38.350 244018 DEBUG nova.compute.manager [req-c1748933-d2a0-4c2b-867b-7d3b67ca6251 req-b9d86225-4b4f-4d2b-b2ec-1a95945a13a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-changed-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:26:38 np0005629333 nova_compute[244014]: 2026-02-25 12:26:38.350 244018 DEBUG nova.compute.manager [req-c1748933-d2a0-4c2b-867b-7d3b67ca6251 req-b9d86225-4b4f-4d2b-b2ec-1a95945a13a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Refreshing instance network info cache due to event network-changed-ee46268d-740d-4ff9-8b65-4a81fc61eec3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:26:38 np0005629333 nova_compute[244014]: 2026-02-25 12:26:38.350 244018 DEBUG oslo_concurrency.lockutils [req-c1748933-d2a0-4c2b-867b-7d3b67ca6251 req-b9d86225-4b4f-4d2b-b2ec-1a95945a13a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:26:38 np0005629333 nova_compute[244014]: 2026-02-25 12:26:38.351 244018 DEBUG oslo_concurrency.lockutils [req-c1748933-d2a0-4c2b-867b-7d3b67ca6251 req-b9d86225-4b4f-4d2b-b2ec-1a95945a13a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:26:38 np0005629333 nova_compute[244014]: 2026-02-25 12:26:38.351 244018 DEBUG nova.network.neutron [req-c1748933-d2a0-4c2b-867b-7d3b67ca6251 req-b9d86225-4b4f-4d2b-b2ec-1a95945a13a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Refreshing network info cache for port ee46268d-740d-4ff9-8b65-4a81fc61eec3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:26:38 np0005629333 nova_compute[244014]: 2026-02-25 12:26:38.626 244018 INFO nova.compute.manager [None req-53df2c5c-bcc5-4781-af56-8409d0c14aa0 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Get console output#033[00m
Feb 25 07:26:38 np0005629333 nova_compute[244014]: 2026-02-25 12:26:38.632 244018 INFO oslo.privsep.daemon [None req-53df2c5c-bcc5-4781-af56-8409d0c14aa0 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp6qasf0gg/privsep.sock']#033[00m
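"Running privsep helper" is oslo.privsep forking its root-side daemon over sudo/nova-rootwrap the first time this context is needed, here to serve the console-output request above. The pattern behind it: a PrivContext declares the capabilities to retain, and decorated functions execute inside the daemon. The context declaration below matches the nova.privsep.sys_admin_pctxt named on the helper command line (capability list abbreviated); the reader function is illustrative:

    # oslo.privsep pattern: declare a privileged context, then mark
    # functions that must run in the root-side daemon.
    from oslo_privsep import capabilities, priv_context

    sys_admin_pctxt = priv_context.PrivContext(
        'nova',
        cfg_section='nova_sys_admin',
        pctxt_name='sys_admin_pctxt',
        capabilities=[capabilities.CAP_SYS_ADMIN],
    )

    @sys_admin_pctxt.entrypoint
    def last_bytes(path, num):
        # Illustrative helper: runs as root inside the privsep daemon,
        # returning the tail of a file such as a guest console log.
        with open(path, 'rb') as f:
            f.seek(0, 2)
            f.seek(max(0, f.tell() - num))
            return f.read()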
Feb 25 07:26:38 np0005629333 nova_compute[244014]: 2026-02-25 12:26:38.936 244018 DEBUG nova.network.neutron [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Successfully updated port: a4d3e156-0255-47ba-811a-4ddaf7c16468 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:26:38 np0005629333 nova_compute[244014]: 2026-02-25 12:26:38.960 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "refresh_cache-d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:26:38 np0005629333 nova_compute[244014]: 2026-02-25 12:26:38.961 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquired lock "refresh_cache-d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:26:38 np0005629333 nova_compute[244014]: 2026-02-25 12:26:38.961 244018 DEBUG nova.network.neutron [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.200 244018 DEBUG nova.network.neutron [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.430 244018 DEBUG nova.network.neutron [-] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.449 244018 INFO nova.compute.manager [-] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Took 1.47 seconds to deallocate network for instance.#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.493 244018 DEBUG oslo_concurrency.lockutils [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.494 244018 DEBUG oslo_concurrency.lockutils [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.596 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022384.5960622, 679fb15f-b258-473a-8cdc-a2c143eb4d92 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.597 244018 INFO nova.compute.manager [-] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.616 244018 DEBUG nova.compute.manager [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Received event network-vif-plugged-69011b0a-5af7-4bef-a14c-8d83e63e08ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.616 244018 DEBUG oslo_concurrency.lockutils [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.617 244018 DEBUG oslo_concurrency.lockutils [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.617 244018 DEBUG oslo_concurrency.lockutils [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.617 244018 DEBUG nova.compute.manager [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] No waiting events found dispatching network-vif-plugged-69011b0a-5af7-4bef-a14c-8d83e63e08ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.618 244018 WARNING nova.compute.manager [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Received unexpected event network-vif-plugged-69011b0a-5af7-4bef-a14c-8d83e63e08ae for instance with vm_state active and task_state None.#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.618 244018 DEBUG nova.compute.manager [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Received event network-vif-unplugged-dd03c667-f058-4e13-bb03-517ed838be2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.618 244018 DEBUG oslo_concurrency.lockutils [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "56267d17-0733-4abe-b916-d1a25e516514-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.618 244018 DEBUG oslo_concurrency.lockutils [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.619 244018 DEBUG oslo_concurrency.lockutils [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.619 244018 DEBUG nova.compute.manager [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] No waiting events found dispatching network-vif-unplugged-dd03c667-f058-4e13-bb03-517ed838be2e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.619 244018 WARNING nova.compute.manager [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Received unexpected event network-vif-unplugged-dd03c667-f058-4e13-bb03-517ed838be2e for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.620 244018 DEBUG nova.compute.manager [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Received event network-vif-plugged-dd03c667-f058-4e13-bb03-517ed838be2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.620 244018 DEBUG oslo_concurrency.lockutils [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "56267d17-0733-4abe-b916-d1a25e516514-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.620 244018 DEBUG oslo_concurrency.lockutils [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.621 244018 DEBUG oslo_concurrency.lockutils [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.621 244018 DEBUG nova.compute.manager [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] No waiting events found dispatching network-vif-plugged-dd03c667-f058-4e13-bb03-517ed838be2e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.621 244018 WARNING nova.compute.manager [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Received unexpected event network-vif-plugged-dd03c667-f058-4e13-bb03-517ed838be2e for instance with vm_state deleted and task_state None.#033[00m
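Annotation: the repeated Received event / No waiting events found / Received unexpected event sequences show Nova's external-event dispatch: Neutron posts network-vif-* notifications, and the compute manager matches them against events a task registered in advance. Because instance 56267d17 is already deleted, nothing is waiting, so the events are logged as unexpected and dropped. An illustrative registry built on threading.Event (a sketch of the idea, not Nova's actual code):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._events = {}          # {instance_uuid: {event_name: threading.Event}}
            self._lock = threading.Lock()

        def prepare(self, uuid, name):
            """Register interest *before* triggering the async operation."""
            with self._lock:
                return self._events.setdefault(uuid, {}).setdefault(
                    name, threading.Event())

        def pop(self, uuid, name):
            """Deliver an external event; None means nobody was waiting."""
            with self._lock:
                ev = self._events.get(uuid, {}).pop(name, None)
            if ev is None:
                print(f"WARNING: unexpected event {name} for {uuid}")
                return None
            ev.set()               # wake the waiter
            return ev

    reg = InstanceEvents()
    # No prepare() happened for the deleted instance, so this warns -- as logged above.
    reg.pop('56267d17-0733-4abe-b916-d1a25e516514', 'network-vif-plugged-dd03c667')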
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.621 244018 DEBUG nova.compute.manager [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Received event network-changed-a4d3e156-0255-47ba-811a-4ddaf7c16468 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.622 244018 DEBUG nova.compute.manager [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Refreshing instance network info cache due to event network-changed-a4d3e156-0255-47ba-811a-4ddaf7c16468. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.622 244018 DEBUG oslo_concurrency.lockutils [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.625 244018 DEBUG nova.compute.manager [None req-740ac07a-bb3f-48bb-a219-c99c1806dd91 - - - - - -] [instance: 679fb15f-b258-473a-8cdc-a2c143eb4d92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.660 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.665 244018 DEBUG oslo_concurrency.processutils [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.797 244018 INFO oslo.privsep.daemon [None req-53df2c5c-bcc5-4781-af56-8409d0c14aa0 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.671 291526 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.675 291526 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.677 291526 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.677 291526 INFO oslo.privsep.daemon [-] privsep daemon running as pid 291526#033[00m
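Annotation: the privsep lines show nova_compute spawning its privileged helper via rootwrap: a root process (pid 291526) drops to exactly the capability set printed above and then services privileged calls over a channel. Roughly how such a context is declared with oslo.privsep (a sketch; Nova's real definitions live under nova/privsep/ and the function below is a hypothetical example):

    from oslo_privsep import capabilities, priv_context

    # Capability set matching the eff/prm list printed in the log line above.
    sys_admin_pctxt = priv_context.PrivContext(
        'nova',
        cfg_section='nova_sys_admin',
        pset=[capabilities.CAP_CHOWN,
              capabilities.CAP_DAC_OVERRIDE,
              capabilities.CAP_DAC_READ_SEARCH,
              capabilities.CAP_FOWNER,
              capabilities.CAP_NET_ADMIN,
              capabilities.CAP_SYS_ADMIN],
    )

    @sys_admin_pctxt.entrypoint
    def read_console_log(path):
        # Executes inside the root daemon, not in the nova-compute process.
        with open(path, 'rb') as f:
            return f.read()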
Feb 25 07:26:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 418 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.1 MiB/s wr, 289 op/s
Feb 25 07:26:39 np0005629333 nova_compute[244014]: 2026-02-25 12:26:39.940 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Feb 25 07:26:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:26:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:26:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3570298839' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.195 244018 DEBUG oslo_concurrency.processutils [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.200 244018 DEBUG nova.compute.provider_tree [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.219 244018 DEBUG nova.scheduler.client.report [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
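Annotation: with RBD-backed ephemeral storage, the resource tracker refreshes DISK_GB by shelling out to ceph df (0.530s above) and comparing the result against the inventory last reported to Placement; here nothing changed, so no PUT is issued. A sketch of that probe, reusing the --id/--conf from the log; the JSON key names follow standard ceph df --format=json output and are an assumption about the installed Ceph version:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    total_gb = stats['total_bytes'] // 1024 ** 3
    avail_gb = stats['total_avail_bytes'] // 1024 ** 3
    # Feeds an inventory record like the DISK_GB entry logged above:
    # {'total': 59, 'reserved': 1, ..., 'allocation_ratio': 0.9}
    print(total_gb, avail_gb)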
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.246 244018 DEBUG oslo_concurrency.lockutils [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.270 244018 INFO nova.scheduler.client.report [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Deleted allocations for instance 56267d17-0733-4abe-b916-d1a25e516514#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.365 244018 DEBUG oslo_concurrency.lockutils [None req-8f5c5555-5db1-4a46-bfe9-81dd94ff48a2 89e71139346a40899212d5bc35835720 f976004e0b334963a69c2519fca200d2 - - default default] Lock "56267d17-0733-4abe-b916-d1a25e516514" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.447 244018 DEBUG nova.compute.manager [req-6301e5b5-0962-4265-b118-a55d486c89b1 req-51d7d4fd-c05f-4607-85ab-0383e8952358 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Received event network-vif-deleted-dd03c667-f058-4e13-bb03-517ed838be2e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.493 244018 DEBUG nova.network.neutron [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Updating instance_info_cache with network_info: [{"id": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "address": "fa:16:3e:61:0e:2a", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d3e156-02", "ovs_interfaceid": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.519 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Releasing lock "refresh_cache-d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.520 244018 DEBUG nova.compute.manager [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Instance network_info: |[{"id": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "address": "fa:16:3e:61:0e:2a", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d3e156-02", "ovs_interfaceid": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.520 244018 DEBUG oslo_concurrency.lockutils [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.521 244018 DEBUG nova.network.neutron [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Refreshing network info cache for port a4d3e156-0255-47ba-811a-4ddaf7c16468 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
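Annotation: the instance_info_cache payloads dumped above are the JSON-serialized network model Nova stores per instance. Pulling addresses out of such a blob is plain dict traversal; a trimmed, runnable example using the port from this boot:

    import json

    # Cut-down network_info entry, shaped like the cache update logged above.
    network_info = json.loads("""[{"id": "a4d3e156-0255-47ba-811a-4ddaf7c16468",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.11", "type": "fixed", "floating_ips": []}]}]}}]""")

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["id"], ip["type"], ip["address"])
    # -> a4d3e156-0255-47ba-811a-4ddaf7c16468 fixed 10.100.0.11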
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.527 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Start _get_guest_xml network_info=[{"id": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "address": "fa:16:3e:61:0e:2a", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d3e156-02", "ovs_interfaceid": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.541 244018 WARNING nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.553 244018 DEBUG nova.virt.libvirt.host [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.553 244018 DEBUG nova.virt.libvirt.host [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.558 244018 DEBUG nova.virt.libvirt.host [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.558 244018 DEBUG nova.virt.libvirt.host [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.559 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.559 244018 DEBUG nova.virt.hardware [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.560 244018 DEBUG nova.virt.hardware [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.560 244018 DEBUG nova.virt.hardware [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.561 244018 DEBUG nova.virt.hardware [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.561 244018 DEBUG nova.virt.hardware [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.561 244018 DEBUG nova.virt.hardware [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.562 244018 DEBUG nova.virt.hardware [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.562 244018 DEBUG nova.virt.hardware [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.562 244018 DEBUG nova.virt.hardware [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.563 244018 DEBUG nova.virt.hardware [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.563 244018 DEBUG nova.virt.hardware [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
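Annotation: the hardware.py lines pick a guest CPU topology by enumerating factorizations of the vCPU count under the 65536 per-level caps; with 1 vCPU the only candidate is 1:1:1, which is what appears in the generated domain XML below. An equivalent standalone sketch (not Nova's exact code):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """Yield (sockets, cores, threads) triples whose product is vcpus."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))        # [(1, 1, 1)] -- "Got 1 possible topologies"
    print(len(list(possible_topologies(8))))   # several candidates for an 8-vCPU flavor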
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.568 244018 DEBUG oslo_concurrency.processutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.770 244018 DEBUG nova.network.neutron [req-c1748933-d2a0-4c2b-867b-7d3b67ca6251 req-b9d86225-4b4f-4d2b-b2ec-1a95945a13a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Updated VIF entry in instance network info cache for port ee46268d-740d-4ff9-8b65-4a81fc61eec3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.771 244018 DEBUG nova.network.neutron [req-c1748933-d2a0-4c2b-867b-7d3b67ca6251 req-b9d86225-4b4f-4d2b-b2ec-1a95945a13a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Updating instance_info_cache with network_info: [{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:26:40 np0005629333 nova_compute[244014]: 2026-02-25 12:26:40.796 244018 DEBUG oslo_concurrency.lockutils [req-c1748933-d2a0-4c2b-867b-7d3b67ca6251 req-b9d86225-4b4f-4d2b-b2ec-1a95945a13a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:26:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:26:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1166624250' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:26:41 np0005629333 nova_compute[244014]: 2026-02-25 12:26:41.122 244018 DEBUG oslo_concurrency.processutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:26:41 np0005629333 nova_compute[244014]: 2026-02-25 12:26:41.158 244018 DEBUG nova.storage.rbd_utils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
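Annotation: rbd_utils probes for the config-drive image before deciding whether to build one; "does not exist" here simply means the vms/..._disk.config object has not been created yet. The same existence check with the python-rbd bindings (a sketch; pool, client id, and image name are taken from the log):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        with rbd.Image(ioctx, 'd0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk.config'):
            print('image exists')
    except rbd.ImageNotFound:
        print('image does not exist')   # the case logged above
    finally:
        ioctx.close()
        cluster.shutdown()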
Feb 25 07:26:41 np0005629333 nova_compute[244014]: 2026-02-25 12:26:41.171 244018 DEBUG oslo_concurrency.processutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:26:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3477816515' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:26:41 np0005629333 nova_compute[244014]: 2026-02-25 12:26:41.694 244018 DEBUG oslo_concurrency.processutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:26:41 np0005629333 nova_compute[244014]: 2026-02-25 12:26:41.697 244018 DEBUG nova.virt.libvirt.vif [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:26:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1913570277',display_name='tempest-DeleteServersTestJSON-server-1913570277',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1913570277',id=56,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='daab2f813dbd467685c22833bf875ec9',ramdisk_id='',reservation_id='r-hpk4oof7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-285157757',owner_user_name='tempest-DeleteServersTestJSON-285157757-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:26:35Z,user_data=None,user_id='9c44c1a95c8d4d97bc1d7dde284dcf1b',uuid=d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "address": "fa:16:3e:61:0e:2a", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d3e156-02", "ovs_interfaceid": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:26:41 np0005629333 nova_compute[244014]: 2026-02-25 12:26:41.698 244018 DEBUG nova.network.os_vif_util [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converting VIF {"id": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "address": "fa:16:3e:61:0e:2a", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d3e156-02", "ovs_interfaceid": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:26:41 np0005629333 nova_compute[244014]: 2026-02-25 12:26:41.699 244018 DEBUG nova.network.os_vif_util [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:0e:2a,bridge_name='br-int',has_traffic_filtering=True,id=a4d3e156-0255-47ba-811a-4ddaf7c16468,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4d3e156-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:26:41 np0005629333 nova_compute[244014]: 2026-02-25 12:26:41.701 244018 DEBUG nova.objects.instance [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'pci_devices' on Instance uuid d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:26:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 407 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 7.1 MiB/s wr, 320 op/s
Feb 25 07:26:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:26:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:26:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:26:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:26:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0020518122360856455 of space, bias 1.0, pg target 0.6155436708256936 quantized to 32 (current 32)
Feb 25 07:26:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:26:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:26:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:26:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:26:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:26:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002491905081122308 of space, bias 1.0, pg target 0.7475715243366924 quantized to 32 (current 32)
Feb 25 07:26:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:26:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0829516171927781e-06 of space, bias 4.0, pg target 0.0012995419406313337 quantized to 16 (current 16)
Feb 25 07:26:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:26:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:26:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:26:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:26:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:26:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:26:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:26:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:26:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:26:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
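Annotation: each pg_autoscaler _maybe_adjust pass computes a pool's PG target as its share of raw capacity times the cluster's PG budget, then quantizes. The 'vms' numbers reproduce exactly if the budget is 300 PGs, e.g. mon_target_pg_per_osd=100 across 3 OSDs; both values are assumptions inferred from the 60 GiB cluster, not stated in the log:

    usage_fraction = 0.0020518122360856455   # 'vms' share of raw space, from the log
    pg_budget = 100 * 3                      # mon_target_pg_per_osd * num_osds (assumed)
    pg_target = usage_fraction * pg_budget
    print(pg_target)                         # ~0.61554367..., matching "pg target" above

    # Quantization then rounds to a power of two with per-pool floors, and pg_num
    # is only changed when the result differs substantially from the current value
    # (the exact rule is version-dependent) -- hence "quantized to 32 (current 32)"
    # and no adjustment being made.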
Feb 25 07:26:42 np0005629333 nova_compute[244014]: 2026-02-25 12:26:42.363 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:26:42 np0005629333 nova_compute[244014]:  <uuid>d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84</uuid>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:  <name>instance-00000038</name>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <nova:name>tempest-DeleteServersTestJSON-server-1913570277</nova:name>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:26:40</nova:creationTime>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:26:42 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:        <nova:user uuid="9c44c1a95c8d4d97bc1d7dde284dcf1b">tempest-DeleteServersTestJSON-285157757-project-member</nova:user>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:        <nova:project uuid="daab2f813dbd467685c22833bf875ec9">tempest-DeleteServersTestJSON-285157757</nova:project>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:        <nova:port uuid="a4d3e156-0255-47ba-811a-4ddaf7c16468">
Feb 25 07:26:42 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <entry name="serial">d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84</entry>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <entry name="uuid">d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84</entry>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk">
Feb 25 07:26:42 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:26:42 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk.config">
Feb 25 07:26:42 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:26:42 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:61:0e:2a"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <target dev="tapa4d3e156-02"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84/console.log" append="off"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:26:42 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:26:42 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:26:42 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:26:42 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
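
[Editor's note] The XML above is the tail of the guest definition Nova rendered for instance d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84: a vhost-net virtio NIC with a 512-entry RX queue and the tenant-network MTU of 1442, a pty serial console logged to console.log, VNC graphics, virtio video/RNG/balloon, and a q35 PCIe topology with a pcie-root plus 24 hot-pluggable pcie-root-port controllers. A minimal, illustrative sketch of pulling those device facts back out of such a document with only the Python standard library (domain_xml is assumed to hold the complete <domain> element, of which the log shows the end):

    import xml.etree.ElementTree as ET

    def summarize_devices(domain_xml: str) -> dict:
        """Extract a few of the device facts visible in the logged XML."""
        devices = ET.fromstring(domain_xml).find("devices")
        root_ports = [c for c in devices.findall("controller")
                      if c.get("type") == "pci" and c.get("model") == "pcie-root-port"]
        return {
            "pcie_root_ports": len(root_ports),                                # 24 above
            "video_model": devices.find("video/model").get("type"),            # virtio
            "rng_backend": devices.find("rng/backend").text,                   # /dev/urandom
            "balloon_stats_period": devices.find("memballoon/stats").get("period"),  # 10
        }
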
Feb 25 07:26:42 np0005629333 nova_compute[244014]: 2026-02-25 12:26:42.364 244018 DEBUG nova.compute.manager [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Preparing to wait for external event network-vif-plugged-a4d3e156-0255-47ba-811a-4ddaf7c16468 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:26:42 np0005629333 nova_compute[244014]: 2026-02-25 12:26:42.364 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:42 np0005629333 nova_compute[244014]: 2026-02-25 12:26:42.364 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:42 np0005629333 nova_compute[244014]: 2026-02-25 12:26:42.365 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
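
[Editor's note] The prepare/acquire/release sequence above is Nova's external-event handshake: the compute manager registers the event it expects (network-vif-plugged-a4d3e156-0255-47ba-811a-4ddaf7c16468) *before* doing the work that triggers it, then blocks until Neutron reports the event back (received at 12:26:45 further down). A rough sketch of the underlying pattern, assuming a plain threading model — the class and method names here are illustrative, not Nova's actual implementation:

    import threading

    class InstanceEvents:
        """Map (instance_uuid, event_name) -> threading.Event (illustrative)."""
        def __init__(self):
            self._events = {}
            self._lock = threading.Lock()  # plays the role of the "<uuid>-events" lock above

        def prepare(self, uuid: str, name: str) -> threading.Event:
            # prepare_for_instance_event: register before plugging the VIF
            with self._lock:
                return self._events.setdefault((uuid, name), threading.Event())

        def pop(self, uuid: str, name: str) -> None:
            # pop_instance_event: called when the external event arrives
            with self._lock:
                event = self._events.pop((uuid, name), None)
            if event is not None:
                event.set()

    # waiter = events.prepare(uuid, "network-vif-plugged-a4d3e156-0255-47ba-811a-4ddaf7c16468")
    # ... plug the VIF, define and start the guest ...
    # waiter.wait(timeout=300)   # "Instance event wait completed" in the log below
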
Feb 25 07:26:42 np0005629333 nova_compute[244014]: 2026-02-25 12:26:42.365 244018 DEBUG nova.virt.libvirt.vif [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:26:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1913570277',display_name='tempest-DeleteServersTestJSON-server-1913570277',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1913570277',id=56,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='daab2f813dbd467685c22833bf875ec9',ramdisk_id='',reservation_id='r-hpk4oof7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-285157757',owner_user_name='tempest-DeleteServersTestJSON-285157757-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:26:35Z,user_data=None,user_id='9c44c1a95c8d4d97bc1d7dde284dcf1b',uuid=d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "address": "fa:16:3e:61:0e:2a", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d3e156-02", "ovs_interfaceid": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:26:42 np0005629333 nova_compute[244014]: 2026-02-25 12:26:42.366 244018 DEBUG nova.network.os_vif_util [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converting VIF {"id": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "address": "fa:16:3e:61:0e:2a", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d3e156-02", "ovs_interfaceid": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:26:42 np0005629333 nova_compute[244014]: 2026-02-25 12:26:42.366 244018 DEBUG nova.network.os_vif_util [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:0e:2a,bridge_name='br-int',has_traffic_filtering=True,id=a4d3e156-0255-47ba-811a-4ddaf7c16468,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4d3e156-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
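
[Editor's note] The two debug lines above show the same port twice: first as Nova's network-info dict, then as the typed os-vif object the plugin consumes. A reduced sketch of that mapping, assuming a plain dataclass in place of the real oslo.versionedobjects class (field names follow the logged VIFOpenVSwitch repr):

    from dataclasses import dataclass

    @dataclass
    class VIFOpenVSwitchSketch:
        """Reduced stand-in for os_vif's VIFOpenVSwitch (illustrative only)."""
        id: str
        address: str
        bridge_name: str
        vif_name: str
        has_traffic_filtering: bool
        active: bool

    def nova_to_osvif(vif: dict) -> VIFOpenVSwitchSketch:
        details = vif.get("details", {})
        return VIFOpenVSwitchSketch(
            id=vif["id"],
            address=vif["address"],
            bridge_name=details.get("bridge_name", vif["network"]["bridge"]),  # br-int
            vif_name=vif["devname"],                                           # tapa4d3e156-02
            has_traffic_filtering=details.get("port_filter", False),
            active=vif["active"],
        )
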
Feb 25 07:26:42 np0005629333 nova_compute[244014]: 2026-02-25 12:26:42.367 244018 DEBUG os_vif [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:0e:2a,bridge_name='br-int',has_traffic_filtering=True,id=a4d3e156-0255-47ba-811a-4ddaf7c16468,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4d3e156-02') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:26:42 np0005629333 nova_compute[244014]: 2026-02-25 12:26:42.367 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:42 np0005629333 nova_compute[244014]: 2026-02-25 12:26:42.368 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:42 np0005629333 nova_compute[244014]: 2026-02-25 12:26:42.368 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:26:42 np0005629333 nova_compute[244014]: 2026-02-25 12:26:42.371 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:42 np0005629333 nova_compute[244014]: 2026-02-25 12:26:42.371 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa4d3e156-02, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:42 np0005629333 nova_compute[244014]: 2026-02-25 12:26:42.372 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa4d3e156-02, col_values=(('external_ids', {'iface-id': 'a4d3e156-0255-47ba-811a-4ddaf7c16468', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:0e:2a', 'vm-uuid': 'd0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:42 np0005629333 nova_compute[244014]: 2026-02-25 12:26:42.373 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
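
[Editor's note] The plug itself is three idempotent OVSDB operations: ensure br-int exists (a no-op here, hence "Transaction caused no change"), add the tap port with may_exist=True, and set the Neutron external_ids that let ovn-controller match the OVS interface to its logical port. A sketch of issuing the same commands through the ovsdbapp library logged above — the database socket path and timeout are assumptions, and the two logged transactions are collapsed into one here for brevity:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Each "Running txn" line in the log corresponds to one such transaction.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tapa4d3e156-02", may_exist=True))
        txn.add(api.db_set("Interface", "tapa4d3e156-02",
                           ("external_ids", {
                               "iface-id": "a4d3e156-0255-47ba-811a-4ddaf7c16468",
                               "attached-mac": "fa:16:3e:61:0e:2a",
                           })))
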
Feb 25 07:26:42 np0005629333 NetworkManager[49836]: <info>  [1772022402.3748] manager: (tapa4d3e156-02): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/214)
Feb 25 07:26:42 np0005629333 nova_compute[244014]: 2026-02-25 12:26:42.380 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:26:42 np0005629333 nova_compute[244014]: 2026-02-25 12:26:42.381 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:42 np0005629333 nova_compute[244014]: 2026-02-25 12:26:42.382 244018 INFO os_vif [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:0e:2a,bridge_name='br-int',has_traffic_filtering=True,id=a4d3e156-0255-47ba-811a-4ddaf7c16468,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4d3e156-02')#033[00m
Feb 25 07:26:42 np0005629333 nova_compute[244014]: 2026-02-25 12:26:42.592 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:26:42 np0005629333 nova_compute[244014]: 2026-02-25 12:26:42.592 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:26:42 np0005629333 nova_compute[244014]: 2026-02-25 12:26:42.592 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] No VIF found with MAC fa:16:3e:61:0e:2a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:26:42 np0005629333 nova_compute[244014]: 2026-02-25 12:26:42.593 244018 INFO nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Using config drive#033[00m
Feb 25 07:26:42 np0005629333 nova_compute[244014]: 2026-02-25 12:26:42.621 244018 DEBUG nova.storage.rbd_utils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:26:43 np0005629333 nova_compute[244014]: 2026-02-25 12:26:43.680 244018 INFO nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Creating config drive at /var/lib/nova/instances/d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84/disk.config#033[00m
Feb 25 07:26:43 np0005629333 nova_compute[244014]: 2026-02-25 12:26:43.683 244018 DEBUG oslo_concurrency.processutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpke6237ft execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 372 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 6.3 MiB/s wr, 364 op/s
Feb 25 07:26:43 np0005629333 nova_compute[244014]: 2026-02-25 12:26:43.831 244018 DEBUG oslo_concurrency.processutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpke6237ft" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:26:43 np0005629333 nova_compute[244014]: 2026-02-25 12:26:43.858 244018 DEBUG nova.storage.rbd_utils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] rbd image d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:26:43 np0005629333 nova_compute[244014]: 2026-02-25 12:26:43.862 244018 DEBUG oslo_concurrency.processutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84/disk.config d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:44 np0005629333 nova_compute[244014]: 2026-02-25 12:26:44.148 244018 DEBUG oslo_concurrency.processutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84/disk.config d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:26:44 np0005629333 nova_compute[244014]: 2026-02-25 12:26:44.149 244018 INFO nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Deleting local config drive /var/lib/nova/instances/d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84/disk.config because it was imported into RBD.#033[00m
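
[Editor's note] Because the instance's disks live in RBD, the config drive takes the same path: build the ISO locally with mkisofs, import it into the vms pool as <uuid>_disk.config, then drop the local copy. A condensed sketch of that sequence using the commands logged above (the -publisher argument is omitted here; this is illustrative, not Nova's code):

    import os
    import subprocess

    def build_and_import_config_drive(uuid: str, metadata_dir: str) -> None:
        iso = f"/var/lib/nova/instances/{uuid}/disk.config"
        # Mirrors the logged mkisofs invocation (volume label must be config-2).
        subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                        "-allow-multidot", "-l", "-quiet", "-J", "-r",
                        "-V", "config-2", metadata_dir], check=True)
        # Mirrors the logged rbd import into the "vms" pool.
        subprocess.run(["rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
                        "--image-format=2", "--id", "openstack",
                        "--conf", "/etc/ceph/ceph.conf"], check=True)
        os.unlink(iso)  # the local ISO is redundant once imported (see log line above)
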
Feb 25 07:26:44 np0005629333 NetworkManager[49836]: <info>  [1772022404.1992] manager: (tapa4d3e156-02): new Tun device (/org/freedesktop/NetworkManager/Devices/215)
Feb 25 07:26:44 np0005629333 kernel: tapa4d3e156-02: entered promiscuous mode
Feb 25 07:26:44 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:44Z|00468|binding|INFO|Claiming lport a4d3e156-0255-47ba-811a-4ddaf7c16468 for this chassis.
Feb 25 07:26:44 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:44Z|00469|binding|INFO|a4d3e156-0255-47ba-811a-4ddaf7c16468: Claiming fa:16:3e:61:0e:2a 10.100.0.11
Feb 25 07:26:44 np0005629333 nova_compute[244014]: 2026-02-25 12:26:44.209 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.216 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:0e:2a 10.100.0.11'], port_security=['fa:16:3e:61:0e:2a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'daab2f813dbd467685c22833bf875ec9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ccb27c94-53b6-4e09-9a40-41a94659ce9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96ce8d12-3a71-4b1e-a397-65d0b61f3794, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=a4d3e156-0255-47ba-811a-4ddaf7c16468) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.217 157129 INFO neutron.agent.ovn.metadata.agent [-] Port a4d3e156-0255-47ba-811a-4ddaf7c16468 in datapath a0d45b1c-1680-4599-a27a-6e3335c94c99 bound to our chassis#033[00m
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.219 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a0d45b1c-1680-4599-a27a-6e3335c94c99#033[00m
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.228 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0b57c957-fccc-42ae-bbfb-610912b12846]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:44 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:44Z|00470|binding|INFO|Setting lport a4d3e156-0255-47ba-811a-4ddaf7c16468 ovn-installed in OVS
Feb 25 07:26:44 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:44Z|00471|binding|INFO|Setting lport a4d3e156-0255-47ba-811a-4ddaf7c16468 up in Southbound
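
[Editor's note] ovn-controller has now claimed the logical port for this chassis, stamped the OVS interface ovn-installed, and flipped the Southbound Port_Binding to up — which is what ultimately lets Neutron emit the network-vif-plugged event Nova registered earlier. A small illustrative check against the Southbound DB (assumes ovn-sbctl on this host can reach the SB database):

    import subprocess

    def lport_is_up(logical_port: str) -> bool:
        """True once ovn-controller has set the Port_Binding up, as logged above."""
        out = subprocess.run(
            ["ovn-sbctl", "--bare", "--columns=up", "find", "Port_Binding",
             f"logical_port={logical_port}"],
            capture_output=True, text=True, check=True)
        return out.stdout.strip() == "true"

    # lport_is_up("a4d3e156-0255-47ba-811a-4ddaf7c16468") -> True after 12:26:44
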
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.228 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa0d45b1c-11 in ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.231 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa0d45b1c-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.231 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a6d93af6-4012-4475-84a2-a5301460d8d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.232 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2f72d63f-ec94-4929-8530-83a9a0d77b73]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:44 np0005629333 nova_compute[244014]: 2026-02-25 12:26:44.230 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.245 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[1a41013b-25a8-4042-96a9-2768ec6a86b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:44 np0005629333 systemd-machined[210048]: New machine qemu-61-instance-00000038.
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.257 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[603bcaa8-2981-4ce1-9802-ffb6bbd98632]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:44 np0005629333 systemd[1]: Started Virtual Machine qemu-61-instance-00000038.
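
[Editor's note] systemd-machined registering qemu-61-instance-00000038 and the matching machine scope mean the QEMU process is up; libvirt tracks the same guest as domain instance-00000038. An illustrative cross-check from the host (assumes the standard qemu:///system URI):

    import subprocess

    def domain_is_running(domain: str = "instance-00000038") -> bool:
        """Compare the machined registration above against libvirt's view."""
        names = subprocess.run(
            ["virsh", "-c", "qemu:///system", "list", "--name"],
            capture_output=True, text=True, check=True).stdout.split()
        return domain in names
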
Feb 25 07:26:44 np0005629333 systemd-udevd[291691]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:26:44 np0005629333 NetworkManager[49836]: <info>  [1772022404.2916] device (tapa4d3e156-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:26:44 np0005629333 NetworkManager[49836]: <info>  [1772022404.2929] device (tapa4d3e156-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.296 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f90c70db-1960-47a3-828b-0ff5677b2721]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:44 np0005629333 NetworkManager[49836]: <info>  [1772022404.3032] manager: (tapa0d45b1c-10): new Veth device (/org/freedesktop/NetworkManager/Devices/216)
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.303 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[30e7872c-5cd4-4694-9fbc-c07642ac6769]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.333 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[6d2b335d-b3d1-442a-a9d8-4093490237e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.337 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b5054d55-9ffa-4db0-a837-62c2bee024d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:44 np0005629333 NetworkManager[49836]: <info>  [1772022404.3542] device (tapa0d45b1c-10): carrier: link connected
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.359 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[6d721806-584f-4180-86ff-a66032ada2a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.372 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d5be93ca-1793-4f47-93d1-d13bc05d534e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0d45b1c-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:2b:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437376, 'reachable_time': 35387, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291719, 'error': None, 'target': 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.397 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1d53c3e7-00d6-4260-920a-1332ddf91531]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:2bc6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437376, 'tstamp': 437376}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291720, 'error': None, 'target': 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.410 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[63201e63-9f50-4198-a591-2735d0db3eff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0d45b1c-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:2b:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437376, 'reachable_time': 35387, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291721, 'error': None, 'target': 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.451 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[49a72f49-083d-4297-b0b8-62723f8820df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.499 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bd70650d-6624-489a-90b5-5d4bd9665fd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
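
[Editor's note] Behind the privsep replies above, the agent creates a veth pair, keeps tapa0d45b1c-10 in the root namespace for OVS, and moves tapa0d45b1c-11 into the ovnmeta-a0d45b1c-... namespace where the netlink link dumps were taken. A rough sketch of the equivalent pyroute2 calls — illustrative only; the agent's real helper runs via neutron.privileged, and the namespace is assumed to exist already:

    from pyroute2 import IPRoute

    def provision_veth(ns: str, outer: str, inner: str) -> None:
        ipr = IPRoute()
        # Create the veth pair: one end for br-int, one for the metadata namespace.
        ipr.link("add", ifname=outer, kind="veth", peer=inner)
        inner_idx = ipr.link_lookup(ifname=inner)[0]
        ipr.link("set", index=inner_idx, net_ns_fd=ns)  # move into ovnmeta-<net>
        ipr.link("set", index=ipr.link_lookup(ifname=outer)[0], state="up")

    # provision_veth("ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99",
    #                "tapa0d45b1c-10", "tapa0d45b1c-11")
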
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.500 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0d45b1c-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.501 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.501 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0d45b1c-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:44 np0005629333 kernel: tapa0d45b1c-10: entered promiscuous mode
Feb 25 07:26:44 np0005629333 nova_compute[244014]: 2026-02-25 12:26:44.503 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:44 np0005629333 NetworkManager[49836]: <info>  [1772022404.5043] manager: (tapa0d45b1c-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.510 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa0d45b1c-10, col_values=(('external_ids', {'iface-id': '7c89df63-779c-4e1c-9e58-76a4001fabc2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:44 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:44Z|00472|binding|INFO|Releasing lport 7c89df63-779c-4e1c-9e58-76a4001fabc2 from this chassis (sb_readonly=0)
Feb 25 07:26:44 np0005629333 nova_compute[244014]: 2026-02-25 12:26:44.511 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:44 np0005629333 nova_compute[244014]: 2026-02-25 12:26:44.512 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.515 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a0d45b1c-1680-4599-a27a-6e3335c94c99.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a0d45b1c-1680-4599-a27a-6e3335c94c99.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.516 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8011f0db-8bd2-4077-bf1f-93008ebb78e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.517 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-a0d45b1c-1680-4599-a27a-6e3335c94c99
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/a0d45b1c-1680-4599-a27a-6e3335c94c99.pid.haproxy
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID a0d45b1c-1680-4599-a27a-6e3335c94c99
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:26:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:44.518 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'env', 'PROCESS_TAG=haproxy-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a0d45b1c-1680-4599-a27a-6e3335c94c99.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
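
[Editor's note] The rendered haproxy_cfg above binds the link-local metadata address 169.254.169.254:80 inside the namespace, proxies to the /var/lib/neutron/metadata_proxy socket, and adds the X-OVN-Network-ID header so the metadata agent can identify the requesting network. The launch then happens inside the namespace, as the rootwrap command above shows; a stripped-down sketch of that step (sudo/rootwrap and the PROCESS_TAG environment are omitted, so this assumes sufficient privilege):

    import subprocess

    def start_metadata_proxy(network_id: str) -> None:
        """Launch haproxy in the network's ovnmeta namespace, per the logged command."""
        subprocess.run(
            ["ip", "netns", "exec", f"ovnmeta-{network_id}",
             "haproxy", "-f",
             f"/var/lib/neutron/ovn-metadata-proxy/{network_id}.conf"],
            check=True)
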
Feb 25 07:26:44 np0005629333 nova_compute[244014]: 2026-02-25 12:26:44.525 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:44 np0005629333 nova_compute[244014]: 2026-02-25 12:26:44.662 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:44 np0005629333 nova_compute[244014]: 2026-02-25 12:26:44.755 244018 DEBUG nova.network.neutron [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Updated VIF entry in instance network info cache for port a4d3e156-0255-47ba-811a-4ddaf7c16468. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:26:44 np0005629333 nova_compute[244014]: 2026-02-25 12:26:44.756 244018 DEBUG nova.network.neutron [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Updating instance_info_cache with network_info: [{"id": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "address": "fa:16:3e:61:0e:2a", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d3e156-02", "ovs_interfaceid": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:26:44 np0005629333 nova_compute[244014]: 2026-02-25 12:26:44.758 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022404.7578166, d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:26:44 np0005629333 nova_compute[244014]: 2026-02-25 12:26:44.758 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] VM Started (Lifecycle Event)#033[00m
Feb 25 07:26:44 np0005629333 nova_compute[244014]: 2026-02-25 12:26:44.784 244018 DEBUG oslo_concurrency.lockutils [req-2c025f72-d864-4ee7-b383-d908595a5d11 req-d1b46674-89a5-4016-a2ae-ea4e6511ada0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:26:44 np0005629333 nova_compute[244014]: 2026-02-25 12:26:44.791 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:44 np0005629333 nova_compute[244014]: 2026-02-25 12:26:44.801 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022404.7580752, d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:26:44 np0005629333 nova_compute[244014]: 2026-02-25 12:26:44.802 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:26:44 np0005629333 nova_compute[244014]: 2026-02-25 12:26:44.835 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:44 np0005629333 nova_compute[244014]: 2026-02-25 12:26:44.839 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:26:44 np0005629333 nova_compute[244014]: 2026-02-25 12:26:44.866 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:26:44 np0005629333 podman[291793]: 2026-02-25 12:26:44.885796845 +0000 UTC m=+0.057963022 container create d8a78348c24fdfc70916f61f5d57168cee7364eb6ba19adce2fa4c9dba408c72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Feb 25 07:26:44 np0005629333 systemd[1]: Started libpod-conmon-d8a78348c24fdfc70916f61f5d57168cee7364eb6ba19adce2fa4c9dba408c72.scope.
Feb 25 07:26:44 np0005629333 podman[291793]: 2026-02-25 12:26:44.854502489 +0000 UTC m=+0.026668666 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:26:44 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:26:44 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/163036a2769903f79b3e9bb16ada6a0f381d1efd3f2b026f8dd2d7aac6106578/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:26:44 np0005629333 podman[291793]: 2026-02-25 12:26:44.977626274 +0000 UTC m=+0.149792431 container init d8a78348c24fdfc70916f61f5d57168cee7364eb6ba19adce2fa4c9dba408c72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 25 07:26:44 np0005629333 podman[291793]: 2026-02-25 12:26:44.982241394 +0000 UTC m=+0.154407521 container start d8a78348c24fdfc70916f61f5d57168cee7364eb6ba19adce2fa4c9dba408c72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 25 07:26:45 np0005629333 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[291808]: [NOTICE]   (291812) : New worker (291814) forked
Feb 25 07:26:45 np0005629333 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[291808]: [NOTICE]   (291812) : Loading success.
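
[Editor's note] In this podified deployment the haproxy runs inside a podman container named after the namespace (neutron-haproxy-ovnmeta-<network-id>); the NOTICE lines confirm the worker forked and the config loaded. An illustrative way to confirm the container state from the host:

    import subprocess

    def proxy_container_status(network_id: str) -> str:
        name = f"neutron-haproxy-ovnmeta-{network_id}"
        return subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Status}}", name],
            capture_output=True, text=True, check=True).stdout.strip()

    # Expected to return "running" once "Loading success." has been logged.
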
Feb 25 07:26:45 np0005629333 nova_compute[244014]: 2026-02-25 12:26:45.033 244018 DEBUG nova.compute.manager [req-72c40636-f206-447f-a50c-bb1aefb58e6d req-ffd21d9e-7359-4041-ba3a-795827c6e0b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Received event network-vif-plugged-a4d3e156-0255-47ba-811a-4ddaf7c16468 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:26:45 np0005629333 nova_compute[244014]: 2026-02-25 12:26:45.034 244018 DEBUG oslo_concurrency.lockutils [req-72c40636-f206-447f-a50c-bb1aefb58e6d req-ffd21d9e-7359-4041-ba3a-795827c6e0b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:45 np0005629333 nova_compute[244014]: 2026-02-25 12:26:45.034 244018 DEBUG oslo_concurrency.lockutils [req-72c40636-f206-447f-a50c-bb1aefb58e6d req-ffd21d9e-7359-4041-ba3a-795827c6e0b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:45 np0005629333 nova_compute[244014]: 2026-02-25 12:26:45.034 244018 DEBUG oslo_concurrency.lockutils [req-72c40636-f206-447f-a50c-bb1aefb58e6d req-ffd21d9e-7359-4041-ba3a-795827c6e0b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:26:45 np0005629333 nova_compute[244014]: 2026-02-25 12:26:45.035 244018 DEBUG nova.compute.manager [req-72c40636-f206-447f-a50c-bb1aefb58e6d req-ffd21d9e-7359-4041-ba3a-795827c6e0b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Processing event network-vif-plugged-a4d3e156-0255-47ba-811a-4ddaf7c16468 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 07:26:45 np0005629333 nova_compute[244014]: 2026-02-25 12:26:45.036 244018 DEBUG nova.compute.manager [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 07:26:45 np0005629333 nova_compute[244014]: 2026-02-25 12:26:45.048 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022405.0394669, d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:26:45 np0005629333 nova_compute[244014]: 2026-02-25 12:26:45.049 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] VM Resumed (Lifecycle Event)
Feb 25 07:26:45 np0005629333 nova_compute[244014]: 2026-02-25 12:26:45.051 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 07:26:45 np0005629333 nova_compute[244014]: 2026-02-25 12:26:45.058 244018 INFO nova.virt.libvirt.driver [-] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Instance spawned successfully.
Feb 25 07:26:45 np0005629333 nova_compute[244014]: 2026-02-25 12:26:45.058 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 07:26:45 np0005629333 nova_compute[244014]: 2026-02-25 12:26:45.072 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:26:45 np0005629333 nova_compute[244014]: 2026-02-25 12:26:45.078 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:26:45 np0005629333 nova_compute[244014]: 2026-02-25 12:26:45.082 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:26:45 np0005629333 nova_compute[244014]: 2026-02-25 12:26:45.082 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:26:45 np0005629333 nova_compute[244014]: 2026-02-25 12:26:45.083 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:26:45 np0005629333 nova_compute[244014]: 2026-02-25 12:26:45.083 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:26:45 np0005629333 nova_compute[244014]: 2026-02-25 12:26:45.084 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:26:45 np0005629333 nova_compute[244014]: 2026-02-25 12:26:45.084 244018 DEBUG nova.virt.libvirt.driver [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
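[editor's note] The six "Found default" lines above are the libvirt driver pinning guest ABI choices at first boot so they stay stable across future rebuilds and migrations; the values are persisted into the instance's system_metadata as image_hw_* keys (the same values reappear in the instance dump at the end of this excerpt). A sketch of the recorded mapping, values taken directly from the lines above:

    # Defaults the libvirt driver just registered for this guest; Nova
    # stores them as image_hw_* keys in the instance system_metadata
    # (compare the system_metadata block in the instance dump further down).
    registered_defaults = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }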
Feb 25 07:26:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:26:45 np0005629333 nova_compute[244014]: 2026-02-25 12:26:45.122 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:26:45 np0005629333 nova_compute[244014]: 2026-02-25 12:26:45.152 244018 INFO nova.compute.manager [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Took 9.53 seconds to spawn the instance on the hypervisor.
Feb 25 07:26:45 np0005629333 nova_compute[244014]: 2026-02-25 12:26:45.152 244018 DEBUG nova.compute.manager [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:26:45 np0005629333 nova_compute[244014]: 2026-02-25 12:26:45.224 244018 INFO nova.compute.manager [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Took 10.77 seconds to build instance.
Feb 25 07:26:45 np0005629333 nova_compute[244014]: 2026-02-25 12:26:45.245 244018 DEBUG oslo_concurrency.lockutils [None req-6e196e39-437e-431a-82c3-c6d2b83f7c0e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
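[editor's note] The Acquiring / acquired / "released" triples that recur throughout this log are oslo.concurrency's lockutils logging its decorator and context-manager forms; the build above ran for 10.850s under the per-instance lock. A minimal sketch of the same pattern (the function body is a hypothetical stand-in, the lock names are the ones from the log):

    from oslo_concurrency import lockutils

    # Per-instance serialization, as _locked_do_build_and_run_instance does.
    @lockutils.synchronized('d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84')
    def do_build_and_run_instance():
        pass  # build steps run while the per-instance lock is held

    # The "-events" lock lines use the equivalent context-manager form:
    with lockutils.lock('d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events'):
        pass  # pop/queue instance events under the lock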
Feb 25 07:26:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 372 MiB data, 667 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 2.0 MiB/s wr, 285 op/s
Feb 25 07:26:47 np0005629333 nova_compute[244014]: 2026-02-25 12:26:47.040 244018 DEBUG nova.objects.instance [None req-ea726091-0f1a-4f29-aed0-0c444d808ba9 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'pci_devices' on Instance uuid d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:26:47 np0005629333 nova_compute[244014]: 2026-02-25 12:26:47.067 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022407.0676155, d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:26:47 np0005629333 nova_compute[244014]: 2026-02-25 12:26:47.068 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] VM Paused (Lifecycle Event)
Feb 25 07:26:47 np0005629333 nova_compute[244014]: 2026-02-25 12:26:47.085 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:26:47 np0005629333 nova_compute[244014]: 2026-02-25 12:26:47.090 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:26:47 np0005629333 nova_compute[244014]: 2026-02-25 12:26:47.116 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] During sync_power_state the instance has a pending task (suspending). Skip.
Feb 25 07:26:47 np0005629333 nova_compute[244014]: 2026-02-25 12:26:47.374 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
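[editor's note] The numeric states in the two "Synchronizing instance power state" messages decode via Nova's power-state constants; assuming the standard values from nova.compute.power_state:

    # nova.compute.power_state constants (standard values in that module).
    STATE_MAP = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                 4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    # "DB power_state: 0, VM power_state: 1" -> DB NOSTATE, libvirt RUNNING
    # "DB power_state: 1, VM power_state: 3" -> DB RUNNING, libvirt PAUSED
    print(STATE_MAP[1], STATE_MAP[3])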
Feb 25 07:26:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:26:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2160361272' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:26:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:26:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2160361272' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 07:26:47 np0005629333 kernel: tapa4d3e156-02 (unregistering): left promiscuous mode
Feb 25 07:26:47 np0005629333 NetworkManager[49836]: <info>  [1772022407.5272] device (tapa4d3e156-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:26:47 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:47Z|00473|binding|INFO|Releasing lport a4d3e156-0255-47ba-811a-4ddaf7c16468 from this chassis (sb_readonly=0)
Feb 25 07:26:47 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:47Z|00474|binding|INFO|Setting lport a4d3e156-0255-47ba-811a-4ddaf7c16468 down in Southbound
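[editor's note] ovn-controller is clearing the chassis column and the up flag on the Southbound Port_Binding row for this lport; the metadata agent reacts to exactly that update two lines below. A hedged way to inspect the row from this host (assumes ovn-sbctl on this node can reach the SB database):

    import json
    import subprocess

    # Look up the Port_Binding row ovn-controller just released.
    out = subprocess.check_output(
        ['ovn-sbctl', '--format=json', 'find', 'Port_Binding',
         'logical_port=a4d3e156-0255-47ba-811a-4ddaf7c16468'])
    print(json.loads(out))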
Feb 25 07:26:47 np0005629333 nova_compute[244014]: 2026-02-25 12:26:47.541 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:26:47 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:47Z|00475|binding|INFO|Removing iface tapa4d3e156-02 ovn-installed in OVS
Feb 25 07:26:47 np0005629333 nova_compute[244014]: 2026-02-25 12:26:47.547 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:26:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:47.553 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:0e:2a 10.100.0.11'], port_security=['fa:16:3e:61:0e:2a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'daab2f813dbd467685c22833bf875ec9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ccb27c94-53b6-4e09-9a40-41a94659ce9c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=96ce8d12-3a71-4b1e-a397-65d0b61f3794, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=a4d3e156-0255-47ba-811a-4ddaf7c16468) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:26:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:47.555 157129 INFO neutron.agent.ovn.metadata.agent [-] Port a4d3e156-0255-47ba-811a-4ddaf7c16468 in datapath a0d45b1c-1680-4599-a27a-6e3335c94c99 unbound from our chassis
Feb 25 07:26:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:47.557 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0d45b1c-1680-4599-a27a-6e3335c94c99, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 07:26:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:47.558 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8559cf85-ae46-4fa9-a0db-353c611e57a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:26:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:47.559 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 namespace which is not needed anymore
Feb 25 07:26:47 np0005629333 nova_compute[244014]: 2026-02-25 12:26:47.565 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:26:47 np0005629333 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000038.scope: Deactivated successfully.
Feb 25 07:26:47 np0005629333 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000038.scope: Consumed 2.643s CPU time.
Feb 25 07:26:47 np0005629333 systemd-machined[210048]: Machine qemu-61-instance-00000038 terminated.
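[editor's note] The \x2d runs in the scope names two lines up are systemd unit-name escaping ('\x2d' is '-'), which is why systemd-machined reports the same machine as qemu-61-instance-00000038. A tiny decoder, as a sketch:

    import re

    def systemd_unescape(name):
        """Decode the \\xXX escapes systemd uses in unit names."""
        return re.sub(r'\\x([0-9a-fA-F]{2})',
                      lambda m: chr(int(m.group(1), 16)), name)

    # -> 'machine-qemu-61-instance-00000038.scope'
    print(systemd_unescape(r'machine-qemu\x2d61\x2dinstance\x2d00000038.scope'))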
Feb 25 07:26:47 np0005629333 nova_compute[244014]: 2026-02-25 12:26:47.696 244018 DEBUG nova.compute.manager [None req-ea726091-0f1a-4f29-aed0-0c444d808ba9 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:26:47 np0005629333 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[291808]: [NOTICE]   (291812) : haproxy version is 2.8.14-c23fe91
Feb 25 07:26:47 np0005629333 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[291808]: [NOTICE]   (291812) : path to executable is /usr/sbin/haproxy
Feb 25 07:26:47 np0005629333 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[291808]: [WARNING]  (291812) : Exiting Master process...
Feb 25 07:26:47 np0005629333 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[291808]: [ALERT]    (291812) : Current worker (291814) exited with code 143 (Terminated)
Feb 25 07:26:47 np0005629333 neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99[291808]: [WARNING]  (291812) : All workers exited. Exiting... (0)
Feb 25 07:26:47 np0005629333 systemd[1]: libpod-d8a78348c24fdfc70916f61f5d57168cee7364eb6ba19adce2fa4c9dba408c72.scope: Deactivated successfully.
Feb 25 07:26:47 np0005629333 podman[291847]: 2026-02-25 12:26:47.73638199 +0000 UTC m=+0.070471426 container died d8a78348c24fdfc70916f61f5d57168cee7364eb6ba19adce2fa4c9dba408c72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:26:47 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d8a78348c24fdfc70916f61f5d57168cee7364eb6ba19adce2fa4c9dba408c72-userdata-shm.mount: Deactivated successfully.
Feb 25 07:26:47 np0005629333 systemd[1]: var-lib-containers-storage-overlay-163036a2769903f79b3e9bb16ada6a0f381d1efd3f2b026f8dd2d7aac6106578-merged.mount: Deactivated successfully.
Feb 25 07:26:47 np0005629333 podman[291847]: 2026-02-25 12:26:47.773907262 +0000 UTC m=+0.107996688 container cleanup d8a78348c24fdfc70916f61f5d57168cee7364eb6ba19adce2fa4c9dba408c72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 07:26:47 np0005629333 systemd[1]: libpod-conmon-d8a78348c24fdfc70916f61f5d57168cee7364eb6ba19adce2fa4c9dba408c72.scope: Deactivated successfully.
Feb 25 07:26:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 400 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 5.0 MiB/s wr, 413 op/s
Feb 25 07:26:47 np0005629333 podman[291885]: 2026-02-25 12:26:47.841821604 +0000 UTC m=+0.051009165 container remove d8a78348c24fdfc70916f61f5d57168cee7364eb6ba19adce2fa4c9dba408c72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 25 07:26:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:47.848 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c48b7e4b-13dd-4acb-9f6f-9de8045a7f52]: (4, ('Wed Feb 25 12:26:47 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 (d8a78348c24fdfc70916f61f5d57168cee7364eb6ba19adce2fa4c9dba408c72)\nd8a78348c24fdfc70916f61f5d57168cee7364eb6ba19adce2fa4c9dba408c72\nWed Feb 25 12:26:47 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 (d8a78348c24fdfc70916f61f5d57168cee7364eb6ba19adce2fa4c9dba408c72)\nd8a78348c24fdfc70916f61f5d57168cee7364eb6ba19adce2fa4c9dba408c72\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:26:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:47.849 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d54a34bf-775c-4686-8701-979f3a4a197d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:26:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:47.851 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0d45b1c-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:26:47 np0005629333 nova_compute[244014]: 2026-02-25 12:26:47.853 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
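[editor's note] The ovsdbapp transaction above (DelPortCommand with if_exists=True) is what detaches the metadata tap; the kernel's "left promiscuous mode" line below is the direct consequence. The one-off CLI equivalent, wrapped in Python as a sketch:

    import subprocess

    # CLI equivalent of DelPortCommand(port='tapa0d45b1c-10', if_exists=True).
    subprocess.check_call(
        ['ovs-vsctl', '--if-exists', 'del-port', 'tapa0d45b1c-10'])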
Feb 25 07:26:47 np0005629333 kernel: tapa0d45b1c-10: left promiscuous mode
Feb 25 07:26:47 np0005629333 nova_compute[244014]: 2026-02-25 12:26:47.867 244018 DEBUG nova.compute.manager [req-3c18c075-e2cd-437d-8a37-6ac71a41f977 req-5b5b0f8c-272f-42aa-9fb2-0fc3b1e654aa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Received event network-vif-plugged-a4d3e156-0255-47ba-811a-4ddaf7c16468 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:26:47 np0005629333 nova_compute[244014]: 2026-02-25 12:26:47.868 244018 DEBUG oslo_concurrency.lockutils [req-3c18c075-e2cd-437d-8a37-6ac71a41f977 req-5b5b0f8c-272f-42aa-9fb2-0fc3b1e654aa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:26:47 np0005629333 nova_compute[244014]: 2026-02-25 12:26:47.868 244018 DEBUG oslo_concurrency.lockutils [req-3c18c075-e2cd-437d-8a37-6ac71a41f977 req-5b5b0f8c-272f-42aa-9fb2-0fc3b1e654aa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:26:47 np0005629333 nova_compute[244014]: 2026-02-25 12:26:47.869 244018 DEBUG oslo_concurrency.lockutils [req-3c18c075-e2cd-437d-8a37-6ac71a41f977 req-5b5b0f8c-272f-42aa-9fb2-0fc3b1e654aa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:26:47 np0005629333 nova_compute[244014]: 2026-02-25 12:26:47.870 244018 DEBUG nova.compute.manager [req-3c18c075-e2cd-437d-8a37-6ac71a41f977 req-5b5b0f8c-272f-42aa-9fb2-0fc3b1e654aa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] No waiting events found dispatching network-vif-plugged-a4d3e156-0255-47ba-811a-4ddaf7c16468 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:26:47 np0005629333 nova_compute[244014]: 2026-02-25 12:26:47.870 244018 WARNING nova.compute.manager [req-3c18c075-e2cd-437d-8a37-6ac71a41f977 req-5b5b0f8c-272f-42aa-9fb2-0fc3b1e654aa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Received unexpected event network-vif-plugged-a4d3e156-0255-47ba-811a-4ddaf7c16468 for instance with vm_state suspended and task_state None.
Feb 25 07:26:47 np0005629333 nova_compute[244014]: 2026-02-25 12:26:47.871 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:26:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:47.875 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4a119115-bf8d-47b0-baa8-e4c8643544bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:26:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:47.889 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8a055e47-7970-4c84-ad4f-b98c8a987a0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:26:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:47.890 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f92c503e-4854-4f05-a98e-7b56771be6cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:26:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:47.905 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bf12e7e4-d527-4973-88d5-d432752f3970]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437369, 'reachable_time': 39228, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291905, 'error': None, 'target': 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:26:47 np0005629333 systemd[1]: run-netns-ovnmeta\x2da0d45b1c\x2d1680\x2d4599\x2da27a\x2d6e3335c94c99.mount: Deactivated successfully.
Feb 25 07:26:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:47.911 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 07:26:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:47.911 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[980887b2-d563-4b20-bf05-c26071a92e36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
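[editor's note] remove_netns in neutron's privileged ip_lib deletes the namespace through pyroute2 inside the privsep daemon (hence the reply lines). A minimal equivalent, as a sketch (requires root; namespace name taken from the log):

    from pyroute2 import netns

    # Delete the metadata namespace the agent just tore down (needs root).
    ns = 'ovnmeta-a0d45b1c-1680-4599-a27a-6e3335c94c99'
    if ns in netns.listnetns():
        netns.remove(ns)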
Feb 25 07:26:48 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:48Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ba:87:f1 10.100.0.11
Feb 25 07:26:48 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:48Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ba:87:f1 10.100.0.11
Feb 25 07:26:48 np0005629333 nova_compute[244014]: 2026-02-25 12:26:48.057 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "0612ec20-e725-4516-a153-f1fe9be74f75" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:26:48 np0005629333 nova_compute[244014]: 2026-02-25 12:26:48.058 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "0612ec20-e725-4516-a153-f1fe9be74f75" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:26:48 np0005629333 nova_compute[244014]: 2026-02-25 12:26:48.073 244018 DEBUG nova.compute.manager [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:26:48 np0005629333 nova_compute[244014]: 2026-02-25 12:26:48.146 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:26:48 np0005629333 nova_compute[244014]: 2026-02-25 12:26:48.147 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:26:48 np0005629333 nova_compute[244014]: 2026-02-25 12:26:48.157 244018 DEBUG nova.virt.hardware [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:26:48 np0005629333 nova_compute[244014]: 2026-02-25 12:26:48.159 244018 INFO nova.compute.claims [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:26:48 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:48Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:df:0d:06 10.100.0.5
Feb 25 07:26:48 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:48Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:df:0d:06 10.100.0.5
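[editor's note] These DHCPOFFER/DHCPACK lines are OVN's built-in responder (the pinctrl thread) answering from the logical port's DHCPv4 options; no dnsmasq process is involved. For reference, such options rows can be created by hand roughly like this (a sketch only; every value below is hypothetical, not read from this deployment):

    import subprocess

    # Hypothetical recreation of DHCPv4 options like those pinctrl serves.
    uuid = subprocess.check_output(
        ['ovn-nbctl', 'dhcp-options-create', '10.100.0.0/28']).decode().strip()
    subprocess.check_call(
        ['ovn-nbctl', 'dhcp-options-set-options', uuid,
         'lease_time=43200', 'router=10.100.0.1',
         'server_id=10.100.0.1', 'server_mac=fa:16:3e:00:00:01'])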
Feb 25 07:26:48 np0005629333 nova_compute[244014]: 2026-02-25 12:26:48.360 244018 DEBUG oslo_concurrency.processutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:26:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:26:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2568133218' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:26:48 np0005629333 nova_compute[244014]: 2026-02-25 12:26:48.967 244018 DEBUG oslo_concurrency.processutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
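[editor's note] Nova shells out to ceph df through oslo.concurrency's processutils; the "Running cmd" / "returned: 0 in 0.607s" pair above is execute() logging its start and end, and the two ceph-mon audit lines in between show the monitor dispatching the command. A minimal reproduction, assuming the same client.openstack keyring is readable:

    import json
    from oslo_concurrency import processutils

    # Same probe Nova's RBD utils run to size the Ceph backend.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    print(json.loads(out)['stats'])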
Feb 25 07:26:48 np0005629333 nova_compute[244014]: 2026-02-25 12:26:48.977 244018 DEBUG nova.compute.provider_tree [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:26:48 np0005629333 nova_compute[244014]: 2026-02-25 12:26:48.996 244018 DEBUG nova.scheduler.client.report [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:26:49 np0005629333 nova_compute[244014]: 2026-02-25 12:26:49.052 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:26:49 np0005629333 nova_compute[244014]: 2026-02-25 12:26:49.053 244018 DEBUG nova.compute.manager [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:26:49 np0005629333 nova_compute[244014]: 2026-02-25 12:26:49.135 244018 DEBUG nova.compute.manager [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:26:49 np0005629333 nova_compute[244014]: 2026-02-25 12:26:49.136 244018 DEBUG nova.network.neutron [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:26:49 np0005629333 nova_compute[244014]: 2026-02-25 12:26:49.167 244018 INFO nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:26:49 np0005629333 nova_compute[244014]: 2026-02-25 12:26:49.188 244018 DEBUG nova.compute.manager [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:26:49 np0005629333 nova_compute[244014]: 2026-02-25 12:26:49.409 244018 DEBUG nova.compute.manager [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:26:49 np0005629333 nova_compute[244014]: 2026-02-25 12:26:49.411 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:26:49 np0005629333 nova_compute[244014]: 2026-02-25 12:26:49.412 244018 INFO nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Creating image(s)
Feb 25 07:26:49 np0005629333 nova_compute[244014]: 2026-02-25 12:26:49.442 244018 DEBUG nova.storage.rbd_utils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image 0612ec20-e725-4516-a153-f1fe9be74f75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:26:49 np0005629333 nova_compute[244014]: 2026-02-25 12:26:49.479 244018 DEBUG nova.storage.rbd_utils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image 0612ec20-e725-4516-a153-f1fe9be74f75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:26:49 np0005629333 nova_compute[244014]: 2026-02-25 12:26:49.514 244018 DEBUG nova.storage.rbd_utils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image 0612ec20-e725-4516-a153-f1fe9be74f75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:26:49 np0005629333 nova_compute[244014]: 2026-02-25 12:26:49.519 244018 DEBUG oslo_concurrency.processutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:26:49 np0005629333 nova_compute[244014]: 2026-02-25 12:26:49.585 244018 DEBUG oslo_concurrency.processutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
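[editor's note] The qemu-img probe above is deliberately wrapped in oslo_concurrency.prlimit, capping the child at 1 GiB of address space and 30 s of CPU so a hostile image cannot exhaust the compute host. The library form of the same guard, as a sketch:

    import json
    from oslo_concurrency import processutils

    # Reproduces the guarded probe: --as=1073741824 --cpu=30 from the log.
    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json', prlimit=limits)
    print(json.loads(out)['format'])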
Feb 25 07:26:49 np0005629333 nova_compute[244014]: 2026-02-25 12:26:49.586 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:26:49 np0005629333 nova_compute[244014]: 2026-02-25 12:26:49.587 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:26:49 np0005629333 nova_compute[244014]: 2026-02-25 12:26:49.588 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:26:49 np0005629333 nova_compute[244014]: 2026-02-25 12:26:49.617 244018 DEBUG nova.storage.rbd_utils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image 0612ec20-e725-4516-a153-f1fe9be74f75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:26:49 np0005629333 nova_compute[244014]: 2026-02-25 12:26:49.621 244018 DEBUG oslo_concurrency.processutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 0612ec20-e725-4516-a153-f1fe9be74f75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:26:49 np0005629333 nova_compute[244014]: 2026-02-25 12:26:49.665 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:26:49 np0005629333 nova_compute[244014]: 2026-02-25 12:26:49.734 244018 DEBUG nova.policy [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b774fd0c04fc403d9ddb205f1e6abbc5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c85a955249394f0faf7c890f5cd0df32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
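[editor's note] The failed network:attach_external_network check above is oslo.policy returning False for a non-admin caller, not an error condition. The shape of the call, as a minimal stand-in for Nova's own enforcer wiring (rule name and credentials taken from the log line):

    from oslo_config import cfg
    from oslo_policy import policy

    # Minimal stand-in; Nova wires its own Enforcer, rule registry and
    # request context around this same enforce() call.
    enforcer = policy.Enforcer(cfg.CONF)
    allowed = enforcer.enforce(
        'network:attach_external_network',
        target={'project_id': 'c85a955249394f0faf7c890f5cd0df32'},
        creds={'roles': ['reader', 'member'], 'is_admin': False})
    print(allowed)  # False without an admin role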
Feb 25 07:26:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 400 MiB data, 694 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.0 MiB/s wr, 224 op/s
Feb 25 07:26:49 np0005629333 nova_compute[244014]: 2026-02-25 12:26:49.964 244018 DEBUG oslo_concurrency.processutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 0612ec20-e725-4516-a153-f1fe9be74f75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
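[editor's note] The rbd import that just returned (and the resize to 1073741824 bytes a few lines below) can also be driven through the python-rbd bindings instead of the CLI. A hedged sketch; it reads the base image fully into memory, so it is only suitable for small files like this 1 GiB-flavor base image:

    import rados
    import rbd

    SRC = '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'
    NAME = '0612ec20-e725-4516-a153-f1fe9be74f75_disk'

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with open(SRC, 'rb') as f:
                data = f.read()
            rbd.RBD().create(ioctx, NAME, len(data))  # format 2 by default
            with rbd.Image(ioctx, NAME) as image:
                image.write(data, 0)
                image.resize(1073741824)              # matches the log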
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.002 244018 DEBUG nova.compute.manager [req-2ee5a5fb-b11b-4890-8f94-c2c55733531c req-358a9dd1-10cb-4e7c-9bde-108caa7cdc65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Received event network-vif-unplugged-a4d3e156-0255-47ba-811a-4ddaf7c16468 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.002 244018 DEBUG oslo_concurrency.lockutils [req-2ee5a5fb-b11b-4890-8f94-c2c55733531c req-358a9dd1-10cb-4e7c-9bde-108caa7cdc65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.003 244018 DEBUG oslo_concurrency.lockutils [req-2ee5a5fb-b11b-4890-8f94-c2c55733531c req-358a9dd1-10cb-4e7c-9bde-108caa7cdc65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.003 244018 DEBUG oslo_concurrency.lockutils [req-2ee5a5fb-b11b-4890-8f94-c2c55733531c req-358a9dd1-10cb-4e7c-9bde-108caa7cdc65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.004 244018 DEBUG nova.compute.manager [req-2ee5a5fb-b11b-4890-8f94-c2c55733531c req-358a9dd1-10cb-4e7c-9bde-108caa7cdc65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] No waiting events found dispatching network-vif-unplugged-a4d3e156-0255-47ba-811a-4ddaf7c16468 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.004 244018 WARNING nova.compute.manager [req-2ee5a5fb-b11b-4890-8f94-c2c55733531c req-358a9dd1-10cb-4e7c-9bde-108caa7cdc65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Received unexpected event network-vif-unplugged-a4d3e156-0255-47ba-811a-4ddaf7c16468 for instance with vm_state suspended and task_state None.
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.004 244018 DEBUG nova.compute.manager [req-2ee5a5fb-b11b-4890-8f94-c2c55733531c req-358a9dd1-10cb-4e7c-9bde-108caa7cdc65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Received event network-vif-plugged-a4d3e156-0255-47ba-811a-4ddaf7c16468 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.005 244018 DEBUG oslo_concurrency.lockutils [req-2ee5a5fb-b11b-4890-8f94-c2c55733531c req-358a9dd1-10cb-4e7c-9bde-108caa7cdc65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.005 244018 DEBUG oslo_concurrency.lockutils [req-2ee5a5fb-b11b-4890-8f94-c2c55733531c req-358a9dd1-10cb-4e7c-9bde-108caa7cdc65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.006 244018 DEBUG oslo_concurrency.lockutils [req-2ee5a5fb-b11b-4890-8f94-c2c55733531c req-358a9dd1-10cb-4e7c-9bde-108caa7cdc65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.006 244018 DEBUG nova.compute.manager [req-2ee5a5fb-b11b-4890-8f94-c2c55733531c req-358a9dd1-10cb-4e7c-9bde-108caa7cdc65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] No waiting events found dispatching network-vif-plugged-a4d3e156-0255-47ba-811a-4ddaf7c16468 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.006 244018 WARNING nova.compute.manager [req-2ee5a5fb-b11b-4890-8f94-c2c55733531c req-358a9dd1-10cb-4e7c-9bde-108caa7cdc65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Received unexpected event network-vif-plugged-a4d3e156-0255-47ba-811a-4ddaf7c16468 for instance with vm_state suspended and task_state None.
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.063 244018 DEBUG nova.storage.rbd_utils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] resizing rbd image 0612ec20-e725-4516-a153-f1fe9be74f75_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.104 244018 DEBUG oslo_concurrency.lockutils [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.105 244018 DEBUG oslo_concurrency.lockutils [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.105 244018 DEBUG oslo_concurrency.lockutils [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.106 244018 DEBUG oslo_concurrency.lockutils [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.106 244018 DEBUG oslo_concurrency.lockutils [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:26:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.110 244018 INFO nova.compute.manager [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Terminating instance
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.112 244018 DEBUG nova.compute.manager [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.167 244018 INFO nova.virt.libvirt.driver [-] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Instance destroyed successfully.
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.169 244018 DEBUG nova.objects.instance [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lazy-loading 'resources' on Instance uuid d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.178 244018 DEBUG nova.objects.instance [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'migration_context' on Instance uuid 0612ec20-e725-4516-a153-f1fe9be74f75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.192 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.193 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Ensure instance console log exists: /var/lib/nova/instances/0612ec20-e725-4516-a153-f1fe9be74f75/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:26:50 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:50Z|00476|binding|INFO|Releasing lport 81f0f54c-4e04-4adf-952f-b6d0fe9698c7 from this chassis (sb_readonly=0)
Feb 25 07:26:50 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:50Z|00477|binding|INFO|Releasing lport 3b184c15-8ef4-4e11-bd18-e1253a4ff440 from this chassis (sb_readonly=0)
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.194 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.195 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.196 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.198 244018 DEBUG nova.virt.libvirt.vif [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1913570277',display_name='tempest-DeleteServersTestJSON-server-1913570277',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1913570277',id=56,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='daab2f813dbd467685c22833bf875ec9',ramdisk_id='',reservation_id='r-hpk4oof7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-285157757',owner_user_name='tempest-DeleteServersTestJSON-285157757-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:26:47Z,user_data=None,user_id='9c44c1a95c8d4d97bc1d7dde284dcf1b',uuid=d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "address": "fa:16:3e:61:0e:2a", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d3e156-02", "ovs_interfaceid": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.198 244018 DEBUG nova.network.os_vif_util [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converting VIF {"id": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "address": "fa:16:3e:61:0e:2a", "network": {"id": "a0d45b1c-1680-4599-a27a-6e3335c94c99", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-124460717-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daab2f813dbd467685c22833bf875ec9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d3e156-02", "ovs_interfaceid": "a4d3e156-0255-47ba-811a-4ddaf7c16468", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.199 244018 DEBUG nova.network.os_vif_util [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:0e:2a,bridge_name='br-int',has_traffic_filtering=True,id=a4d3e156-0255-47ba-811a-4ddaf7c16468,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4d3e156-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.200 244018 DEBUG os_vif [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:0e:2a,bridge_name='br-int',has_traffic_filtering=True,id=a4d3e156-0255-47ba-811a-4ddaf7c16468,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4d3e156-02') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.202 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.203 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa4d3e156-02, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.207 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.209 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.238 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.240 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.242 244018 INFO os_vif [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:0e:2a,bridge_name='br-int',has_traffic_filtering=True,id=a4d3e156-0255-47ba-811a-4ddaf7c16468,network=Network(a0d45b1c-1680-4599-a27a-6e3335c94c99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4d3e156-02')#033[00m
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.424 244018 DEBUG nova.network.neutron [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Successfully created port: 5951ca77-9d6a-41db-aaf8-3508d21701f3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.611 244018 INFO nova.virt.libvirt.driver [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Deleting instance files /var/lib/nova/instances/d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_del#033[00m
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.612 244018 INFO nova.virt.libvirt.driver [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Deletion of /var/lib/nova/instances/d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84_del complete#033[00m
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.686 244018 INFO nova.compute.manager [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Took 0.57 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.687 244018 DEBUG oslo.service.loopingcall [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.687 244018 DEBUG nova.compute.manager [-] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:26:50 np0005629333 nova_compute[244014]: 2026-02-25 12:26:50.687 244018 DEBUG nova.network.neutron [-] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:26:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:26:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:26:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:26:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:26:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:26:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:26:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:26:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:26:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:26:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:26:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:26:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:26:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 413 MiB data, 724 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.4 MiB/s wr, 264 op/s
Feb 25 07:26:52 np0005629333 podman[292256]: 2026-02-25 12:26:52.178395705 +0000 UTC m=+0.085116650 container create fc4a4860f528c88139aae7600856c123d86464366f1f9802be196b675bb309e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 07:26:52 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:26:52 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:26:52 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:26:52 np0005629333 podman[292256]: 2026-02-25 12:26:52.1153433 +0000 UTC m=+0.022064265 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:26:52 np0005629333 systemd[1]: Started libpod-conmon-fc4a4860f528c88139aae7600856c123d86464366f1f9802be196b675bb309e5.scope.
Feb 25 07:26:52 np0005629333 nova_compute[244014]: 2026-02-25 12:26:52.254 244018 DEBUG nova.network.neutron [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Successfully updated port: 5951ca77-9d6a-41db-aaf8-3508d21701f3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:26:52 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:26:52 np0005629333 nova_compute[244014]: 2026-02-25 12:26:52.271 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "refresh_cache-0612ec20-e725-4516-a153-f1fe9be74f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:26:52 np0005629333 nova_compute[244014]: 2026-02-25 12:26:52.271 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquired lock "refresh_cache-0612ec20-e725-4516-a153-f1fe9be74f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:26:52 np0005629333 nova_compute[244014]: 2026-02-25 12:26:52.271 244018 DEBUG nova.network.neutron [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:26:52 np0005629333 podman[292256]: 2026-02-25 12:26:52.276930004 +0000 UTC m=+0.183650989 container init fc4a4860f528c88139aae7600856c123d86464366f1f9802be196b675bb309e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 07:26:52 np0005629333 podman[292256]: 2026-02-25 12:26:52.285035573 +0000 UTC m=+0.191756518 container start fc4a4860f528c88139aae7600856c123d86464366f1f9802be196b675bb309e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_elgamal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:26:52 np0005629333 podman[292256]: 2026-02-25 12:26:52.288606094 +0000 UTC m=+0.195327029 container attach fc4a4860f528c88139aae7600856c123d86464366f1f9802be196b675bb309e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_elgamal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:26:52 np0005629333 reverent_elgamal[292274]: 167 167
Feb 25 07:26:52 np0005629333 systemd[1]: libpod-fc4a4860f528c88139aae7600856c123d86464366f1f9802be196b675bb309e5.scope: Deactivated successfully.
Feb 25 07:26:52 np0005629333 podman[292256]: 2026-02-25 12:26:52.292474133 +0000 UTC m=+0.199195058 container died fc4a4860f528c88139aae7600856c123d86464366f1f9802be196b675bb309e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 07:26:52 np0005629333 systemd[1]: var-lib-containers-storage-overlay-54e5bc5d52300805adb94b1af1293dd35ef8e5abd63bbb27ab7906e94f36c1af-merged.mount: Deactivated successfully.
Feb 25 07:26:52 np0005629333 podman[292256]: 2026-02-25 12:26:52.336610723 +0000 UTC m=+0.243331648 container remove fc4a4860f528c88139aae7600856c123d86464366f1f9802be196b675bb309e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:26:52 np0005629333 systemd[1]: libpod-conmon-fc4a4860f528c88139aae7600856c123d86464366f1f9802be196b675bb309e5.scope: Deactivated successfully.
Feb 25 07:26:52 np0005629333 nova_compute[244014]: 2026-02-25 12:26:52.409 244018 DEBUG nova.compute.manager [req-616fcb37-6f38-4c7c-9e2b-94463eceddf1 req-9c621897-61e8-4adb-b0ad-36e6b3086be4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Received event network-changed-5951ca77-9d6a-41db-aaf8-3508d21701f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:26:52 np0005629333 nova_compute[244014]: 2026-02-25 12:26:52.410 244018 DEBUG nova.compute.manager [req-616fcb37-6f38-4c7c-9e2b-94463eceddf1 req-9c621897-61e8-4adb-b0ad-36e6b3086be4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Refreshing instance network info cache due to event network-changed-5951ca77-9d6a-41db-aaf8-3508d21701f3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:26:52 np0005629333 nova_compute[244014]: 2026-02-25 12:26:52.411 244018 DEBUG oslo_concurrency.lockutils [req-616fcb37-6f38-4c7c-9e2b-94463eceddf1 req-9c621897-61e8-4adb-b0ad-36e6b3086be4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-0612ec20-e725-4516-a153-f1fe9be74f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:26:52 np0005629333 nova_compute[244014]: 2026-02-25 12:26:52.453 244018 DEBUG nova.network.neutron [-] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:26:52 np0005629333 nova_compute[244014]: 2026-02-25 12:26:52.479 244018 INFO nova.compute.manager [-] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Took 1.79 seconds to deallocate network for instance.#033[00m
Feb 25 07:26:52 np0005629333 nova_compute[244014]: 2026-02-25 12:26:52.520 244018 DEBUG nova.network.neutron [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:26:52 np0005629333 nova_compute[244014]: 2026-02-25 12:26:52.527 244018 DEBUG oslo_concurrency.lockutils [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:52 np0005629333 podman[292299]: 2026-02-25 12:26:52.527345841 +0000 UTC m=+0.055533023 container create 2d84c409de550fe6ae7be7a704aff79d5ddc251d74d3642dae7b10df9c3bc297 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:26:52 np0005629333 nova_compute[244014]: 2026-02-25 12:26:52.527 244018 DEBUG oslo_concurrency.lockutils [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:52 np0005629333 nova_compute[244014]: 2026-02-25 12:26:52.529 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022397.52653, 56267d17-0733-4abe-b916-d1a25e516514 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:26:52 np0005629333 nova_compute[244014]: 2026-02-25 12:26:52.529 244018 INFO nova.compute.manager [-] [instance: 56267d17-0733-4abe-b916-d1a25e516514] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:26:52 np0005629333 nova_compute[244014]: 2026-02-25 12:26:52.551 244018 DEBUG nova.compute.manager [None req-4e8a76df-d682-4c13-b90b-a24f00faf586 - - - - - -] [instance: 56267d17-0733-4abe-b916-d1a25e516514] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:52 np0005629333 systemd[1]: Started libpod-conmon-2d84c409de550fe6ae7be7a704aff79d5ddc251d74d3642dae7b10df9c3bc297.scope.
Feb 25 07:26:52 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:26:52 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d3abbac97ca719265ebb82d12410e91644545f552281152cc3cab0df63358ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:26:52 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d3abbac97ca719265ebb82d12410e91644545f552281152cc3cab0df63358ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:26:52 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d3abbac97ca719265ebb82d12410e91644545f552281152cc3cab0df63358ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:26:52 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d3abbac97ca719265ebb82d12410e91644545f552281152cc3cab0df63358ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:26:52 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d3abbac97ca719265ebb82d12410e91644545f552281152cc3cab0df63358ff/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:26:52 np0005629333 podman[292299]: 2026-02-25 12:26:52.50504274 +0000 UTC m=+0.033229962 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:26:52 np0005629333 podman[292299]: 2026-02-25 12:26:52.606014467 +0000 UTC m=+0.134201699 container init 2d84c409de550fe6ae7be7a704aff79d5ddc251d74d3642dae7b10df9c3bc297 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:26:52 np0005629333 podman[292299]: 2026-02-25 12:26:52.618619384 +0000 UTC m=+0.146806566 container start 2d84c409de550fe6ae7be7a704aff79d5ddc251d74d3642dae7b10df9c3bc297 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 07:26:52 np0005629333 podman[292299]: 2026-02-25 12:26:52.623489232 +0000 UTC m=+0.151676464 container attach 2d84c409de550fe6ae7be7a704aff79d5ddc251d74d3642dae7b10df9c3bc297 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_darwin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 07:26:52 np0005629333 nova_compute[244014]: 2026-02-25 12:26:52.704 244018 DEBUG oslo_concurrency.processutils [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:53 np0005629333 lucid_darwin[292316]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:26:53 np0005629333 lucid_darwin[292316]: --> All data devices are unavailable
Feb 25 07:26:53 np0005629333 systemd[1]: libpod-2d84c409de550fe6ae7be7a704aff79d5ddc251d74d3642dae7b10df9c3bc297.scope: Deactivated successfully.
Feb 25 07:26:53 np0005629333 podman[292299]: 2026-02-25 12:26:53.098930127 +0000 UTC m=+0.627117269 container died 2d84c409de550fe6ae7be7a704aff79d5ddc251d74d3642dae7b10df9c3bc297 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 07:26:53 np0005629333 systemd[1]: var-lib-containers-storage-overlay-2d3abbac97ca719265ebb82d12410e91644545f552281152cc3cab0df63358ff-merged.mount: Deactivated successfully.
Feb 25 07:26:53 np0005629333 podman[292299]: 2026-02-25 12:26:53.129844752 +0000 UTC m=+0.658031934 container remove 2d84c409de550fe6ae7be7a704aff79d5ddc251d74d3642dae7b10df9c3bc297 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_darwin, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 07:26:53 np0005629333 systemd[1]: libpod-conmon-2d84c409de550fe6ae7be7a704aff79d5ddc251d74d3642dae7b10df9c3bc297.scope: Deactivated successfully.
Feb 25 07:26:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:26:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3436349269' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.265 244018 DEBUG oslo_concurrency.processutils [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.269 244018 DEBUG nova.compute.provider_tree [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.300 244018 DEBUG nova.scheduler.client.report [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.330 244018 DEBUG oslo_concurrency.lockutils [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.351 244018 INFO nova.scheduler.client.report [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Deleted allocations for instance d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84#033[00m
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.448 244018 DEBUG oslo_concurrency.lockutils [None req-65ecfb70-5616-421d-bff2-3008d3c8371e 9c44c1a95c8d4d97bc1d7dde284dcf1b daab2f813dbd467685c22833bf875ec9 - - default default] Lock "d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.344s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.508 244018 DEBUG nova.network.neutron [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Updating instance_info_cache with network_info: [{"id": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "address": "fa:16:3e:a7:90:29", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5951ca77-9d", "ovs_interfaceid": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.541 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Releasing lock "refresh_cache-0612ec20-e725-4516-a153-f1fe9be74f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.542 244018 DEBUG nova.compute.manager [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Instance network_info: |[{"id": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "address": "fa:16:3e:a7:90:29", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5951ca77-9d", "ovs_interfaceid": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.542 244018 DEBUG oslo_concurrency.lockutils [req-616fcb37-6f38-4c7c-9e2b-94463eceddf1 req-9c621897-61e8-4adb-b0ad-36e6b3086be4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-0612ec20-e725-4516-a153-f1fe9be74f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.543 244018 DEBUG nova.network.neutron [req-616fcb37-6f38-4c7c-9e2b-94463eceddf1 req-9c621897-61e8-4adb-b0ad-36e6b3086be4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Refreshing network info cache for port 5951ca77-9d6a-41db-aaf8-3508d21701f3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.547 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Start _get_guest_xml network_info=[{"id": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "address": "fa:16:3e:a7:90:29", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5951ca77-9d", "ovs_interfaceid": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.553 244018 WARNING nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.572 244018 DEBUG nova.virt.libvirt.host [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.573 244018 DEBUG nova.virt.libvirt.host [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.580 244018 DEBUG nova.virt.libvirt.host [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.581 244018 DEBUG nova.virt.libvirt.host [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.582 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.582 244018 DEBUG nova.virt.hardware [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.583 244018 DEBUG nova.virt.hardware [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.584 244018 DEBUG nova.virt.hardware [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.584 244018 DEBUG nova.virt.hardware [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.584 244018 DEBUG nova.virt.hardware [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.585 244018 DEBUG nova.virt.hardware [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.585 244018 DEBUG nova.virt.hardware [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.586 244018 DEBUG nova.virt.hardware [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.586 244018 DEBUG nova.virt.hardware [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.587 244018 DEBUG nova.virt.hardware [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.587 244018 DEBUG nova.virt.hardware [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:26:53 np0005629333 podman[292434]: 2026-02-25 12:26:53.591994932 +0000 UTC m=+0.062108539 container create f4366b05d13589991ba6ffa4b5fce106b52ca0882317bc6f84553edd9c851482 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:26:53 np0005629333 nova_compute[244014]: 2026-02-25 12:26:53.593 244018 DEBUG oslo_concurrency.processutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:53 np0005629333 systemd[1]: Started libpod-conmon-f4366b05d13589991ba6ffa4b5fce106b52ca0882317bc6f84553edd9c851482.scope.
Feb 25 07:26:53 np0005629333 podman[292434]: 2026-02-25 12:26:53.565016158 +0000 UTC m=+0.035129675 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:26:53 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:26:53 np0005629333 podman[292434]: 2026-02-25 12:26:53.726807237 +0000 UTC m=+0.196920724 container init f4366b05d13589991ba6ffa4b5fce106b52ca0882317bc6f84553edd9c851482 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_raman, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 07:26:53 np0005629333 podman[292434]: 2026-02-25 12:26:53.73573782 +0000 UTC m=+0.205851297 container start f4366b05d13589991ba6ffa4b5fce106b52ca0882317bc6f84553edd9c851482 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_raman, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 07:26:53 np0005629333 wonderful_raman[292450]: 167 167
Feb 25 07:26:53 np0005629333 systemd[1]: libpod-f4366b05d13589991ba6ffa4b5fce106b52ca0882317bc6f84553edd9c851482.scope: Deactivated successfully.
Feb 25 07:26:53 np0005629333 podman[292434]: 2026-02-25 12:26:53.744075386 +0000 UTC m=+0.214188913 container attach f4366b05d13589991ba6ffa4b5fce106b52ca0882317bc6f84553edd9c851482 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_raman, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:26:53 np0005629333 podman[292434]: 2026-02-25 12:26:53.747947845 +0000 UTC m=+0.218061322 container died f4366b05d13589991ba6ffa4b5fce106b52ca0882317bc6f84553edd9c851482 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_raman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 07:26:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 438 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 6.0 MiB/s wr, 317 op/s
Feb 25 07:26:53 np0005629333 systemd[1]: var-lib-containers-storage-overlay-74438388b85b99b05cb213c34d0a25c08a291d2db83f9740623de3b66a656f36-merged.mount: Deactivated successfully.
Feb 25 07:26:53 np0005629333 podman[292434]: 2026-02-25 12:26:53.968503808 +0000 UTC m=+0.438617295 container remove f4366b05d13589991ba6ffa4b5fce106b52ca0882317bc6f84553edd9c851482 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_raman, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:26:54 np0005629333 systemd[1]: libpod-conmon-f4366b05d13589991ba6ffa4b5fce106b52ca0882317bc6f84553edd9c851482.scope: Deactivated successfully.
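The create / init / start / attach / died / remove sequence above is the footprint of a one-shot container: cephadm spawns a throwaway ceph container for each CLI probe and removes it as soon as it exits. A minimal sketch of driving the same lifecycle from Python follows; the image digest is the one pulled in the log, but the wrapper function and the command it runs are illustrative assumptions, not taken from this host.

import subprocess

# Hypothetical one-shot query in the style cephadm uses: run a command in a
# transient ceph container (--rm removes it on exit, matching the
# create/start/attach/died/remove events journald records above).
IMAGE = "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"

def run_in_ceph_container(*cmd):
    """Run cmd inside a throwaway container and return its stdout."""
    return subprocess.run(
        ["podman", "run", "--rm", "--net=host", IMAGE, *cmd],
        capture_output=True, text=True, check=True,
    ).stdout

if __name__ == "__main__":
    print(run_in_ceph_container("ceph", "--version"))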
Feb 25 07:26:54 np0005629333 podman[292494]: 2026-02-25 12:26:54.152800503 +0000 UTC m=+0.058389053 container create 02b4a8a624adc11a0995a60402216004432202f6215e670471931038694aefc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_benz, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 07:26:54 np0005629333 systemd[1]: Started libpod-conmon-02b4a8a624adc11a0995a60402216004432202f6215e670471931038694aefc4.scope.
Feb 25 07:26:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:26:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2872472128' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:26:54 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:26:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309cf003cb58abb02efa06ff5647b9af010919c0f9b8f07c2f36c3f59fa3158d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:26:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309cf003cb58abb02efa06ff5647b9af010919c0f9b8f07c2f36c3f59fa3158d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:26:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309cf003cb58abb02efa06ff5647b9af010919c0f9b8f07c2f36c3f59fa3158d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:26:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309cf003cb58abb02efa06ff5647b9af010919c0f9b8f07c2f36c3f59fa3158d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:26:54 np0005629333 podman[292494]: 2026-02-25 12:26:54.128885367 +0000 UTC m=+0.034473967 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.224 244018 DEBUG oslo_concurrency.processutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
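The "Running cmd (subprocess)" / "CMD ... returned: 0" pair above is nova's rbd utility shelling out to the ceph CLI through oslo.concurrency. A minimal sketch of that call, assuming the same client id and conf path shown in the log; processutils.execute returns a (stdout, stderr) tuple and raises ProcessExecutionError on a non-zero exit.

import json
from oslo_concurrency import processutils

# Mirrors the mon dump the log records: shell out to the ceph CLI as
# client.openstack and parse the JSON monmap it prints.
out, _err = processutils.execute(
    'ceph', 'mon', 'dump', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
monmap = json.loads(out)
print([mon['name'] for mon in monmap.get('mons', [])])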
Feb 25 07:26:54 np0005629333 podman[292494]: 2026-02-25 12:26:54.243574571 +0000 UTC m=+0.149163111 container init 02b4a8a624adc11a0995a60402216004432202f6215e670471931038694aefc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 07:26:54 np0005629333 podman[292494]: 2026-02-25 12:26:54.252576496 +0000 UTC m=+0.158165046 container start 02b4a8a624adc11a0995a60402216004432202f6215e670471931038694aefc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_benz, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 07:26:54 np0005629333 podman[292494]: 2026-02-25 12:26:54.256516748 +0000 UTC m=+0.162105278 container attach 02b4a8a624adc11a0995a60402216004432202f6215e670471931038694aefc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.257 244018 DEBUG nova.storage.rbd_utils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image 0612ec20-e725-4516-a153-f1fe9be74f75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.262 244018 DEBUG oslo_concurrency.processutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:54 np0005629333 elegant_benz[292511]: {
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:    "0": [
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:        {
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "devices": [
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "/dev/loop3"
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            ],
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "lv_name": "ceph_lv0",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "lv_size": "21470642176",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "name": "ceph_lv0",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "tags": {
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.cluster_name": "ceph",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.crush_device_class": "",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.encrypted": "0",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.objectstore": "bluestore",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.osd_id": "0",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.type": "block",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.vdo": "0",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.with_tpm": "0"
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            },
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "type": "block",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "vg_name": "ceph_vg0"
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:        }
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:    ],
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:    "1": [
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:        {
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "devices": [
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "/dev/loop4"
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            ],
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "lv_name": "ceph_lv1",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "lv_size": "21470642176",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "name": "ceph_lv1",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "tags": {
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.cluster_name": "ceph",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.crush_device_class": "",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.encrypted": "0",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.objectstore": "bluestore",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.osd_id": "1",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.type": "block",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.vdo": "0",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.with_tpm": "0"
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            },
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "type": "block",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "vg_name": "ceph_vg1"
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:        }
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:    ],
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:    "2": [
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:        {
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "devices": [
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "/dev/loop5"
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            ],
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "lv_name": "ceph_lv2",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "lv_size": "21470642176",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "name": "ceph_lv2",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "tags": {
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.cluster_name": "ceph",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.crush_device_class": "",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.encrypted": "0",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.objectstore": "bluestore",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.osd_id": "2",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.type": "block",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.vdo": "0",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:                "ceph.with_tpm": "0"
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            },
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "type": "block",
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:            "vg_name": "ceph_vg2"
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:        }
Feb 25 07:26:54 np0005629333 elegant_benz[292511]:    ]
Feb 25 07:26:54 np0005629333 elegant_benz[292511]: }
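The JSON the elegant_benz container just printed is ceph-volume lvm list output: a map of OSD id to the logical volumes backing it, with the ceph.* LV tags expanded under "tags". A short sketch of reducing that report to an osd_id -> (device, lv_path, osd_fsid) summary; all field names come straight from the block above, while the function itself is illustrative.

import json

# Condense a `ceph-volume lvm list --format json` report like the one above.
def summarize(report_text):
    report = json.loads(report_text)
    rows = {}
    for osd_id, lvs in report.items():
        for lv in lvs:
            if lv.get("type") == "block":       # only the block LV per OSD
                rows[int(osd_id)] = (
                    lv["devices"][0],            # e.g. /dev/loop3
                    lv["lv_path"],               # e.g. /dev/ceph_vg0/ceph_lv0
                    lv["tags"]["ceph.osd_fsid"],
                )
    return rows

# Against the log's data this yields three rows, one per OSD:
# {0: ('/dev/loop3', '/dev/ceph_vg0/ceph_lv0', 'd19afe3c-...'), 1: ..., 2: ...}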
Feb 25 07:26:54 np0005629333 systemd[1]: libpod-02b4a8a624adc11a0995a60402216004432202f6215e670471931038694aefc4.scope: Deactivated successfully.
Feb 25 07:26:54 np0005629333 podman[292494]: 2026-02-25 12:26:54.560279185 +0000 UTC m=+0.465867815 container died 02b4a8a624adc11a0995a60402216004432202f6215e670471931038694aefc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_benz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 07:26:54 np0005629333 systemd[1]: var-lib-containers-storage-overlay-309cf003cb58abb02efa06ff5647b9af010919c0f9b8f07c2f36c3f59fa3158d-merged.mount: Deactivated successfully.
Feb 25 07:26:54 np0005629333 podman[292494]: 2026-02-25 12:26:54.610157156 +0000 UTC m=+0.515745716 container remove 02b4a8a624adc11a0995a60402216004432202f6215e670471931038694aefc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:26:54 np0005629333 systemd[1]: libpod-conmon-02b4a8a624adc11a0995a60402216004432202f6215e670471931038694aefc4.scope: Deactivated successfully.
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.669 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.692 244018 DEBUG nova.compute.manager [req-2b8a0610-fa99-4e1c-8299-f78ef6d1acf3 req-c5d88f2c-eb2f-4f84-b10b-6d7329d77c21 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Received event network-vif-deleted-a4d3e156-0255-47ba-811a-4ddaf7c16468 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:26:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:26:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3626153450' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.799 244018 DEBUG oslo_concurrency.processutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.800 244018 DEBUG nova.virt.libvirt.vif [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:26:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-526421737',display_name='tempest-ServerActionsTestOtherB-server-526421737',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-526421737',id=57,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c85a955249394f0faf7c890f5cd0df32',ramdisk_id='',reservation_id='r-l9xxjiai',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1539976047',owner_user_name='tempest-ServerActionsTestOtherB-1539976047-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:26:49Z,user_data=None,user_id='b774fd0c04fc403d9ddb205f1e6abbc5',uuid=0612ec20-e725-4516-a153-f1fe9be74f75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "address": "fa:16:3e:a7:90:29", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5951ca77-9d", "ovs_interfaceid": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.801 244018 DEBUG nova.network.os_vif_util [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converting VIF {"id": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "address": "fa:16:3e:a7:90:29", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5951ca77-9d", "ovs_interfaceid": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.802 244018 DEBUG nova.network.os_vif_util [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:90:29,bridge_name='br-int',has_traffic_filtering=True,id=5951ca77-9d6a-41db-aaf8-3508d21701f3,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5951ca77-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.804 244018 DEBUG nova.objects.instance [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0612ec20-e725-4516-a153-f1fe9be74f75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.823 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:26:54 np0005629333 nova_compute[244014]:  <uuid>0612ec20-e725-4516-a153-f1fe9be74f75</uuid>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:  <name>instance-00000039</name>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerActionsTestOtherB-server-526421737</nova:name>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:26:53</nova:creationTime>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:26:54 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:        <nova:user uuid="b774fd0c04fc403d9ddb205f1e6abbc5">tempest-ServerActionsTestOtherB-1539976047-project-member</nova:user>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:        <nova:project uuid="c85a955249394f0faf7c890f5cd0df32">tempest-ServerActionsTestOtherB-1539976047</nova:project>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:        <nova:port uuid="5951ca77-9d6a-41db-aaf8-3508d21701f3">
Feb 25 07:26:54 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <entry name="serial">0612ec20-e725-4516-a153-f1fe9be74f75</entry>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <entry name="uuid">0612ec20-e725-4516-a153-f1fe9be74f75</entry>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/0612ec20-e725-4516-a153-f1fe9be74f75_disk">
Feb 25 07:26:54 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:26:54 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/0612ec20-e725-4516-a153-f1fe9be74f75_disk.config">
Feb 25 07:26:54 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:26:54 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:a7:90:29"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <target dev="tap5951ca77-9d"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/0612ec20-e725-4516-a153-f1fe9be74f75/console.log" append="off"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:26:54 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:26:54 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:26:54 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:26:54 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
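The guest XML dumped by _get_guest_xml above defines both disks as network disks over RBD with cephx auth (the vms/<uuid>_disk image on vda and the config-drive image on sda, both against the monitor at 192.168.122.100:6789). A stdlib-only sketch of pulling those RBD sources back out of such a document, assuming the logged XML is held in a string named domain_xml:

import xml.etree.ElementTree as ET

# Extract each RBD-backed disk from a libvirt <domain> document like the one
# nova logged: target device, pool/image name, and the monitor endpoint.
def rbd_disks(domain_xml):
    root = ET.fromstring(domain_xml)
    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        if src is not None and src.get("protocol") == "rbd":
            host = src.find("host")              # present in the logged XML
            yield {
                "target": disk.find("target").get("dev"),  # vda / sda
                "image": src.get("name"),                  # vms/<uuid>_disk
                "monitor": f'{host.get("name")}:{host.get("port")}',
            }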
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.823 244018 DEBUG nova.compute.manager [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Preparing to wait for external event network-vif-plugged-5951ca77-9d6a-41db-aaf8-3508d21701f3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.824 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.824 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.824 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.825 244018 DEBUG nova.virt.libvirt.vif [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:26:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-526421737',display_name='tempest-ServerActionsTestOtherB-server-526421737',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-526421737',id=57,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c85a955249394f0faf7c890f5cd0df32',ramdisk_id='',reservation_id='r-l9xxjiai',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1539976047',owner_user_name='tempest-ServerActionsTestOtherB-1539976047-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:26:49Z,user_data=None,user_id='b774fd0c04fc403d9ddb205f1e6abbc5',uuid=0612ec20-e725-4516-a153-f1fe9be74f75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "address": "fa:16:3e:a7:90:29", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5951ca77-9d", "ovs_interfaceid": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.826 244018 DEBUG nova.network.os_vif_util [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converting VIF {"id": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "address": "fa:16:3e:a7:90:29", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5951ca77-9d", "ovs_interfaceid": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.827 244018 DEBUG nova.network.os_vif_util [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:90:29,bridge_name='br-int',has_traffic_filtering=True,id=5951ca77-9d6a-41db-aaf8-3508d21701f3,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5951ca77-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.827 244018 DEBUG os_vif [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:90:29,bridge_name='br-int',has_traffic_filtering=True,id=5951ca77-9d6a-41db-aaf8-3508d21701f3,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5951ca77-9d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.828 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.828 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.829 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.833 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.833 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5951ca77-9d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.834 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5951ca77-9d, col_values=(('external_ids', {'iface-id': '5951ca77-9d6a-41db-aaf8-3508d21701f3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a7:90:29', 'vm-uuid': '0612ec20-e725-4516-a153-f1fe9be74f75'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.836 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:54 np0005629333 NetworkManager[49836]: <info>  [1772022414.8370] manager: (tap5951ca77-9d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.839 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.843 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.844 244018 INFO os_vif [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:90:29,bridge_name='br-int',has_traffic_filtering=True,id=5951ca77-9d6a-41db-aaf8-3508d21701f3,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5951ca77-9d')#033[00m
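The AddPortCommand and DbSetCommand transactions above are os-vif writing the tap port and its external_ids into OVSDB; OVN then matches on iface-id to bind the logical port. A hedged sketch of the equivalent write done as a single ovs-vsctl transaction from Python; the port name, iface-id, MAC, and vm-uuid are copied from the log lines, but doing this by hand is illustrative only, since os-vif performs it through ovsdbapp.

import subprocess

# CLI equivalent of the logged OVSDB transaction: add the port to br-int
# (no-op if it already exists) and set the external_ids in the same commit.
subprocess.run([
    "ovs-vsctl",
    "--", "--may-exist", "add-port", "br-int", "tap5951ca77-9d",
    "--", "set", "Interface", "tap5951ca77-9d",
    "external_ids:iface-id=5951ca77-9d6a-41db-aaf8-3508d21701f3",
    "external_ids:iface-status=active",
    "external_ids:attached-mac=fa:16:3e:a7:90:29",
    "external_ids:vm-uuid=0612ec20-e725-4516-a153-f1fe9be74f75",
], check=True)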
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.885 244018 DEBUG nova.network.neutron [req-616fcb37-6f38-4c7c-9e2b-94463eceddf1 req-9c621897-61e8-4adb-b0ad-36e6b3086be4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Updated VIF entry in instance network info cache for port 5951ca77-9d6a-41db-aaf8-3508d21701f3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:26:54 np0005629333 nova_compute[244014]: 2026-02-25 12:26:54.885 244018 DEBUG nova.network.neutron [req-616fcb37-6f38-4c7c-9e2b-94463eceddf1 req-9c621897-61e8-4adb-b0ad-36e6b3086be4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Updating instance_info_cache with network_info: [{"id": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "address": "fa:16:3e:a7:90:29", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5951ca77-9d", "ovs_interfaceid": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:26:55 np0005629333 nova_compute[244014]: 2026-02-25 12:26:55.005 244018 DEBUG oslo_concurrency.lockutils [req-616fcb37-6f38-4c7c-9e2b-94463eceddf1 req-9c621897-61e8-4adb-b0ad-36e6b3086be4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-0612ec20-e725-4516-a153-f1fe9be74f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:26:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.008 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.009 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.010 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:55 np0005629333 nova_compute[244014]: 2026-02-25 12:26:55.012 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:26:55 np0005629333 nova_compute[244014]: 2026-02-25 12:26:55.013 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:26:55 np0005629333 nova_compute[244014]: 2026-02-25 12:26:55.014 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No VIF found with MAC fa:16:3e:a7:90:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:26:55 np0005629333 nova_compute[244014]: 2026-02-25 12:26:55.015 244018 INFO nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Using config drive#033[00m
Feb 25 07:26:55 np0005629333 nova_compute[244014]: 2026-02-25 12:26:55.052 244018 DEBUG nova.storage.rbd_utils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image 0612ec20-e725-4516-a153-f1fe9be74f75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:26:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:26:55 np0005629333 podman[292655]: 2026-02-25 12:26:55.123266848 +0000 UTC m=+0.056789158 container create e268d3ef7cc565a57746f6975c4b08e17740b8fef3ca6201967258f674da861e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:26:55 np0005629333 systemd[1]: Started libpod-conmon-e268d3ef7cc565a57746f6975c4b08e17740b8fef3ca6201967258f674da861e.scope.
Feb 25 07:26:55 np0005629333 podman[292655]: 2026-02-25 12:26:55.099061863 +0000 UTC m=+0.032584233 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:26:55 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:26:55 np0005629333 podman[292655]: 2026-02-25 12:26:55.212244956 +0000 UTC m=+0.145767256 container init e268d3ef7cc565a57746f6975c4b08e17740b8fef3ca6201967258f674da861e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:26:55 np0005629333 podman[292655]: 2026-02-25 12:26:55.219889643 +0000 UTC m=+0.153411933 container start e268d3ef7cc565a57746f6975c4b08e17740b8fef3ca6201967258f674da861e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_ganguly, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 07:26:55 np0005629333 podman[292655]: 2026-02-25 12:26:55.224189974 +0000 UTC m=+0.157712294 container attach e268d3ef7cc565a57746f6975c4b08e17740b8fef3ca6201967258f674da861e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 07:26:55 np0005629333 pedantic_ganguly[292671]: 167 167
Feb 25 07:26:55 np0005629333 systemd[1]: libpod-e268d3ef7cc565a57746f6975c4b08e17740b8fef3ca6201967258f674da861e.scope: Deactivated successfully.
Feb 25 07:26:55 np0005629333 podman[292655]: 2026-02-25 12:26:55.226249963 +0000 UTC m=+0.159772283 container died e268d3ef7cc565a57746f6975c4b08e17740b8fef3ca6201967258f674da861e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:26:55 np0005629333 systemd[1]: var-lib-containers-storage-overlay-4a03d63988efeff3422ea892508cd78ef1d783f8fe477150b460e1aa03bfad9d-merged.mount: Deactivated successfully.
Feb 25 07:26:55 np0005629333 podman[292655]: 2026-02-25 12:26:55.271889744 +0000 UTC m=+0.205412054 container remove e268d3ef7cc565a57746f6975c4b08e17740b8fef3ca6201967258f674da861e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_ganguly, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 25 07:26:55 np0005629333 systemd[1]: libpod-conmon-e268d3ef7cc565a57746f6975c4b08e17740b8fef3ca6201967258f674da861e.scope: Deactivated successfully.
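
The create, init, start, attach, died, remove sequence above is the footprint of a single short-lived "podman run --rm": cephadm periodically executes helper commands inside the ceph image, and the container (pedantic_ganguly) printed "167 167", which is the ceph user/group ID in these images. The actual command is not recorded in the journal; a hypothetical equivalent that produces the same lifecycle events:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # --rm removes the container on exit, which is why "died" and "remove"
    # follow "start" within a fraction of a second in the journal.
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "stat", "-c", "%u %g", "/var/lib/ceph"],  # hypothetical payload
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # e.g. "167 167"
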
Feb 25 07:26:55 np0005629333 nova_compute[244014]: 2026-02-25 12:26:55.287 244018 DEBUG oslo_concurrency.lockutils [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:55 np0005629333 nova_compute[244014]: 2026-02-25 12:26:55.288 244018 DEBUG oslo_concurrency.lockutils [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:55 np0005629333 nova_compute[244014]: 2026-02-25 12:26:55.289 244018 INFO nova.compute.manager [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Rebooting instance#033[00m
Feb 25 07:26:55 np0005629333 nova_compute[244014]: 2026-02-25 12:26:55.306 244018 DEBUG oslo_concurrency.lockutils [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:26:55 np0005629333 nova_compute[244014]: 2026-02-25 12:26:55.306 244018 DEBUG oslo_concurrency.lockutils [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquired lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:26:55 np0005629333 nova_compute[244014]: 2026-02-25 12:26:55.306 244018 DEBUG nova.network.neutron [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:26:55 np0005629333 nova_compute[244014]: 2026-02-25 12:26:55.405 244018 INFO nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Creating config drive at /var/lib/nova/instances/0612ec20-e725-4516-a153-f1fe9be74f75/disk.config#033[00m
Feb 25 07:26:55 np0005629333 nova_compute[244014]: 2026-02-25 12:26:55.412 244018 DEBUG oslo_concurrency.processutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0612ec20-e725-4516-a153-f1fe9be74f75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpowdgiq3q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:55 np0005629333 podman[292694]: 2026-02-25 12:26:55.49114874 +0000 UTC m=+0.085426729 container create b8de8ac065a215e0b23df124f97cbcb4b941d40381a355169ba9c8f5af53d993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_fermi, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle)
Feb 25 07:26:55 np0005629333 podman[292694]: 2026-02-25 12:26:55.426179681 +0000 UTC m=+0.020457690 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:26:55 np0005629333 systemd[1]: Started libpod-conmon-b8de8ac065a215e0b23df124f97cbcb4b941d40381a355169ba9c8f5af53d993.scope.
Feb 25 07:26:55 np0005629333 nova_compute[244014]: 2026-02-25 12:26:55.555 244018 DEBUG oslo_concurrency.processutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0612ec20-e725-4516-a153-f1fe9be74f75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpowdgiq3q" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
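
A config drive is just an ISO9660 image built from a staging directory, and the exact command nova ran is logged verbatim above. A sketch reproducing it via subprocess (paths and publisher string copied from the log; the /tmp staging directory only exists while nova is building the drive):

    import subprocess

    src = "/tmp/tmpowdgiq3q"  # staging dir nova populated with metadata files
    iso = ("/var/lib/nova/instances/"
           "0612ec20-e725-4516-a153-f1fe9be74f75/disk.config")

    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso,
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
         "-quiet", "-J", "-r", "-V", "config-2", src],
        check=True,
    )

The volume label config-2 is what cloud-init probes for when it looks for a config drive inside the guest.
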
Feb 25 07:26:55 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:26:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c318287c6bdcf7c6f65e75d2b32ee4652ceda2d174449994cd4497453e8733b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:26:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c318287c6bdcf7c6f65e75d2b32ee4652ceda2d174449994cd4497453e8733b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:26:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c318287c6bdcf7c6f65e75d2b32ee4652ceda2d174449994cd4497453e8733b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:26:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c318287c6bdcf7c6f65e75d2b32ee4652ceda2d174449994cd4497453e8733b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:26:55 np0005629333 nova_compute[244014]: 2026-02-25 12:26:55.599 244018 DEBUG nova.storage.rbd_utils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image 0612ec20-e725-4516-a153-f1fe9be74f75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:26:55 np0005629333 podman[292694]: 2026-02-25 12:26:55.605027062 +0000 UTC m=+0.199305101 container init b8de8ac065a215e0b23df124f97cbcb4b941d40381a355169ba9c8f5af53d993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:26:55 np0005629333 nova_compute[244014]: 2026-02-25 12:26:55.605 244018 DEBUG oslo_concurrency.processutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0612ec20-e725-4516-a153-f1fe9be74f75/disk.config 0612ec20-e725-4516-a153-f1fe9be74f75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:55 np0005629333 podman[292694]: 2026-02-25 12:26:55.614930063 +0000 UTC m=+0.209208022 container start b8de8ac065a215e0b23df124f97cbcb4b941d40381a355169ba9c8f5af53d993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_fermi, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 07:26:55 np0005629333 podman[292694]: 2026-02-25 12:26:55.620171731 +0000 UTC m=+0.214449780 container attach b8de8ac065a215e0b23df124f97cbcb4b941d40381a355169ba9c8f5af53d993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_fermi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:26:55 np0005629333 nova_compute[244014]: 2026-02-25 12:26:55.759 244018 DEBUG oslo_concurrency.processutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0612ec20-e725-4516-a153-f1fe9be74f75/disk.config 0612ec20-e725-4516-a153-f1fe9be74f75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:26:55 np0005629333 nova_compute[244014]: 2026-02-25 12:26:55.761 244018 INFO nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Deleting local config drive /var/lib/nova/instances/0612ec20-e725-4516-a153-f1fe9be74f75/disk.config because it was imported into RBD.#033[00m
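
Because rbd_utils reported no existing 0612ec20-..._disk.config image, nova imports the freshly built ISO into the vms pool and then deletes the local copy; both steps appear verbatim in the lines above. The equivalent calls as a sketch:

    import os
    import subprocess

    local = ("/var/lib/nova/instances/"
             "0612ec20-e725-4516-a153-f1fe9be74f75/disk.config")

    subprocess.run(
        ["rbd", "import", "--pool", "vms", local,
         "0612ec20-e725-4516-a153-f1fe9be74f75_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )
    os.unlink(local)  # "Deleting local config drive ... imported into RBD"
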
Feb 25 07:26:55 np0005629333 NetworkManager[49836]: <info>  [1772022415.8069] manager: (tap5951ca77-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/219)
Feb 25 07:26:55 np0005629333 kernel: tap5951ca77-9d: entered promiscuous mode
Feb 25 07:26:55 np0005629333 nova_compute[244014]: 2026-02-25 12:26:55.815 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:55 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:55Z|00478|binding|INFO|Claiming lport 5951ca77-9d6a-41db-aaf8-3508d21701f3 for this chassis.
Feb 25 07:26:55 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:55Z|00479|binding|INFO|5951ca77-9d6a-41db-aaf8-3508d21701f3: Claiming fa:16:3e:a7:90:29 10.100.0.11
Feb 25 07:26:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 438 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.0 MiB/s wr, 252 op/s
Feb 25 07:26:55 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:55Z|00480|binding|INFO|Setting lport 5951ca77-9d6a-41db-aaf8-3508d21701f3 ovn-installed in OVS
Feb 25 07:26:55 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:55Z|00481|binding|INFO|Setting lport 5951ca77-9d6a-41db-aaf8-3508d21701f3 up in Southbound
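
ovn-controller's claim sequence above (claim the lport, mark it ovn-installed in OVS, set it up in the Southbound DB) can be verified against the Southbound database afterwards. A hedged sketch, assuming ovn-sbctl is available on the node:

    import subprocess

    port = "5951ca77-9d6a-41db-aaf8-3508d21701f3"

    # Show which chassis claimed the logical port and whether it is up.
    subprocess.run(
        ["ovn-sbctl", "--columns=chassis,up",
         "find", "Port_Binding", f"logical_port={port}"],
        check=True,
    )

The PortBindingUpdatedEvent the metadata agent matches a few lines below is triggered by exactly this chassis-column change (old=Port_Binding(chassis=[])).
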
Feb 25 07:26:55 np0005629333 nova_compute[244014]: 2026-02-25 12:26:55.830 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:55 np0005629333 systemd-udevd[292776]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:26:55 np0005629333 nova_compute[244014]: 2026-02-25 12:26:55.833 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.834 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:90:29 10.100.0.11'], port_security=['fa:16:3e:a7:90:29 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0612ec20-e725-4516-a153-f1fe9be74f75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64c22162-7e15-45de-8fd2-8c9a24f27006', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c85a955249394f0faf7c890f5cd0df32', 'neutron:revision_number': '2', 'neutron:security_group_ids': '10bdd349-ebee-42f5-8295-ca2b7d5c5d74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9495f97-67e6-4da7-a9b0-f643c9e48076, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=5951ca77-9d6a-41db-aaf8-3508d21701f3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:26:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.839 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 5951ca77-9d6a-41db-aaf8-3508d21701f3 in datapath 64c22162-7e15-45de-8fd2-8c9a24f27006 bound to our chassis#033[00m
Feb 25 07:26:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.841 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64c22162-7e15-45de-8fd2-8c9a24f27006#033[00m
Feb 25 07:26:55 np0005629333 NetworkManager[49836]: <info>  [1772022415.8490] device (tap5951ca77-9d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:26:55 np0005629333 NetworkManager[49836]: <info>  [1772022415.8496] device (tap5951ca77-9d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:26:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.853 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[04d92c6a-1538-4ce0-a72e-04e029b42001]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:55 np0005629333 systemd-machined[210048]: New machine qemu-62-instance-00000039.
Feb 25 07:26:55 np0005629333 systemd[1]: Started Virtual Machine qemu-62-instance-00000039.
Feb 25 07:26:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.878 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b12e7f52-c358-44fc-b3cb-1c2c17b56878]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.880 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9f3ddece-2f7c-4fd4-8021-0dadbb12e837]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.902 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[366ba607-3e13-485b-bc5c-6cf7e2cb5969]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.915 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3a61a8f4-cc80-4456-b25f-40c0e0b0295c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64c22162-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:1c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428693, 'reachable_time': 31584, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292793, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.929 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ab3401a1-348e-42ea-a6b4-aff76786fa2d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428702, 'tstamp': 428702}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292795, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428705, 'tstamp': 428705}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292795, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
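
The "Provisioning metadata" step materializes as a network namespace named ovnmeta-<network-uuid>; the two privsep replies above show its veth leg tap64c22162-71 carrying 10.100.0.2/28 plus the well-known metadata address 169.254.169.254/32. A sketch inspecting the result (assumes root on the compute node):

    import subprocess

    ns = "ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006"

    # List the IPv4 addresses the agent just configured in the namespace.
    subprocess.run(["ip", "netns", "exec", ns, "ip", "-4", "addr", "show"],
                   check=True)
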
Feb 25 07:26:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.930 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64c22162-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.933 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64c22162-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:55 np0005629333 nova_compute[244014]: 2026-02-25 12:26:55.933 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.933 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:26:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.934 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64c22162-70, col_values=(('external_ids', {'iface-id': '81f0f54c-4e04-4adf-952f-b6d0fe9698c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:55.934 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
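
The three ovsdbapp transactions above map one-to-one onto ovs-vsctl: DelPortCommand(if_exists=True), AddPortCommand(may_exist=True), and DbSetCommand on the interface's external_ids. The two "Transaction caused no change" results mean the database was already in the desired state, so these are idempotent re-assertions. The equivalent CLI calls as a sketch:

    import subprocess

    tap = "tap64c22162-70"

    for cmd in (
        ["ovs-vsctl", "--if-exists", "del-port", "br-ex", tap],
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", tap],
        ["ovs-vsctl", "set", "Interface", tap,
         "external_ids:iface-id=81f0f54c-4e04-4adf-952f-b6d0fe9698c7"],
    ):
        subprocess.run(cmd, check=True)
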
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.141 244018 DEBUG nova.compute.manager [req-b770b531-4473-48a9-805e-5d7be4a33422 req-eaf14dea-167a-4b6b-a59d-5bff035dbeec 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Received event network-vif-plugged-5951ca77-9d6a-41db-aaf8-3508d21701f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.142 244018 DEBUG oslo_concurrency.lockutils [req-b770b531-4473-48a9-805e-5d7be4a33422 req-eaf14dea-167a-4b6b-a59d-5bff035dbeec 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.142 244018 DEBUG oslo_concurrency.lockutils [req-b770b531-4473-48a9-805e-5d7be4a33422 req-eaf14dea-167a-4b6b-a59d-5bff035dbeec 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.142 244018 DEBUG oslo_concurrency.lockutils [req-b770b531-4473-48a9-805e-5d7be4a33422 req-eaf14dea-167a-4b6b-a59d-5bff035dbeec 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.142 244018 DEBUG nova.compute.manager [req-b770b531-4473-48a9-805e-5d7be4a33422 req-eaf14dea-167a-4b6b-a59d-5bff035dbeec 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Processing event network-vif-plugged-5951ca77-9d6a-41db-aaf8-3508d21701f3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
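
The network-vif-plugged handling above is one half of nova's event rendezvous: the spawning thread registers the event it expects and blocks, while this externally delivered Neutron event pops it, which is why the later "Instance event wait completed in 0 seconds" message shows no waiting at all. A minimal stdlib sketch of the pattern (class and method names hypothetical):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._events = {}  # (instance_uuid, event_name) -> threading.Event

        def prepare(self, instance, name):
            ev = threading.Event()
            self._events[(instance, name)] = ev
            return ev

        def pop(self, instance, name):
            ev = self._events.pop((instance, name), None)
            if ev is not None:
                ev.set()  # wakes whoever is blocked in the wait below

    events = InstanceEvents()
    ev = events.prepare("0612ec20", "network-vif-plugged-5951ca77")
    events.pop("0612ec20", "network-vif-plugged-5951ca77")  # external event arrives
    assert ev.wait(timeout=300)  # returns immediately: already set
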
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Feb 25 07:26:56 np0005629333 lvm[292853]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:26:56 np0005629333 lvm[292853]: VG ceph_vg0 finished
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.239775) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022416239836, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2285, "num_deletes": 263, "total_data_size": 3230759, "memory_usage": 3291224, "flush_reason": "Manual Compaction"}
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Feb 25 07:26:56 np0005629333 lvm[292855]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:26:56 np0005629333 lvm[292855]: VG ceph_vg1 finished
Feb 25 07:26:56 np0005629333 lvm[292857]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:26:56 np0005629333 lvm[292857]: VG ceph_vg2 finished
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022416263935, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3178108, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25854, "largest_seqno": 28138, "table_properties": {"data_size": 3167750, "index_size": 6532, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 22830, "raw_average_key_size": 21, "raw_value_size": 3146507, "raw_average_value_size": 2916, "num_data_blocks": 284, "num_entries": 1079, "num_filter_entries": 1079, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772022244, "oldest_key_time": 1772022244, "file_creation_time": 1772022416, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 24228 microseconds, and 8930 cpu microseconds.
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.264000) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3178108 bytes OK
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.264028) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.267041) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.267066) EVENT_LOG_v1 {"time_micros": 1772022416267058, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.267090) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3220933, prev total WAL file size 3220933, number of live WAL files 2.
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.267934) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3103KB)], [59(6823KB)]
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022416267989, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10165305, "oldest_snapshot_seqno": -1}
Feb 25 07:26:56 np0005629333 lvm[292859]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:26:56 np0005629333 lvm[292859]: VG ceph_vg2 finished
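
The paired "PV ... online, VG ... is complete" / "VG ... finished" messages come from LVM's event-driven autoactivation reacting to the loop devices that back the Ceph OSD volume groups. A sketch listing the same PV-to-VG membership with machine-readable output (assumes the LVM tools on the host):

    import json
    import subprocess

    out = subprocess.run(
        ["pvs", "--reportformat", "json", "-o", "pv_name,vg_name"],
        capture_output=True, text=True, check=True,
    )
    for pv in json.loads(out.stdout)["report"][0]["pv"]:
        print(pv["pv_name"], "->", pv["vg_name"])  # e.g. /dev/loop3 -> ceph_vg0
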
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5391 keys, 8527064 bytes, temperature: kUnknown
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022416311852, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8527064, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8489836, "index_size": 22654, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 133973, "raw_average_key_size": 24, "raw_value_size": 8391779, "raw_average_value_size": 1556, "num_data_blocks": 929, "num_entries": 5391, "num_filter_entries": 5391, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772022416, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.312035) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8527064 bytes
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.313816) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 231.5 rd, 194.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 6.7 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5923, records dropped: 532 output_compression: NoCompression
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.313833) EVENT_LOG_v1 {"time_micros": 1772022416313823, "job": 32, "event": "compaction_finished", "compaction_time_micros": 43917, "compaction_time_cpu_micros": 13040, "output_level": 6, "num_output_files": 1, "total_output_size": 8527064, "num_input_records": 5923, "num_output_records": 5391, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022416314178, "job": 32, "event": "table_file_deletion", "file_number": 61}
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022416315001, "job": 32, "event": "table_file_deletion", "file_number": 59}
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.267840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.315045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.315050) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.315051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.315053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:26:56.315054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
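
The mon's rocksdb lines embed machine-readable EVENT_LOG_v1 JSON. For the compaction above, job 32 read 3.0 MB from L0 plus 6.7 MB from L6 and wrote 8.1 MB, giving write-amplify 8.1/3.0 ≈ 2.7 and read-write-amplify (3.0+6.7+8.1)/3.0 ≈ 5.9, exactly the figures in the summary line. A sketch extracting those events from a journal excerpt (the input file name is hypothetical):

    import json
    import re

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    with open("ceph-mon.log") as fh:  # hypothetical extract of the lines above
        for line in fh:
            m = EVENT.search(line)
            if m is None:
                continue
            ev = json.loads(m.group(1))
            if ev.get("event") == "compaction_finished":
                print(ev["job"], "level", ev["output_level"],
                      ev["total_output_size"], "bytes",
                      ev["compaction_time_micros"], "us")
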
Feb 25 07:26:56 np0005629333 affectionate_fermi[292713]: {}
Feb 25 07:26:56 np0005629333 systemd[1]: libpod-b8de8ac065a215e0b23df124f97cbcb4b941d40381a355169ba9c8f5af53d993.scope: Deactivated successfully.
Feb 25 07:26:56 np0005629333 systemd[1]: libpod-b8de8ac065a215e0b23df124f97cbcb4b941d40381a355169ba9c8f5af53d993.scope: Consumed 1.071s CPU time.
Feb 25 07:26:56 np0005629333 podman[292694]: 2026-02-25 12:26:56.367569084 +0000 UTC m=+0.961847113 container died b8de8ac065a215e0b23df124f97cbcb4b941d40381a355169ba9c8f5af53d993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 07:26:56 np0005629333 systemd[1]: var-lib-containers-storage-overlay-4c318287c6bdcf7c6f65e75d2b32ee4652ceda2d174449994cd4497453e8733b-merged.mount: Deactivated successfully.
Feb 25 07:26:56 np0005629333 podman[292694]: 2026-02-25 12:26:56.422457267 +0000 UTC m=+1.016735236 container remove b8de8ac065a215e0b23df124f97cbcb4b941d40381a355169ba9c8f5af53d993 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_fermi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 07:26:56 np0005629333 systemd[1]: libpod-conmon-b8de8ac065a215e0b23df124f97cbcb4b941d40381a355169ba9c8f5af53d993.scope: Deactivated successfully.
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:26:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.566 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022416.566098, 0612ec20-e725-4516-a153-f1fe9be74f75 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.567 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] VM Started (Lifecycle Event)#033[00m
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.571 244018 DEBUG nova.compute.manager [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.574 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.578 244018 INFO nova.virt.libvirt.driver [-] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Instance spawned successfully.#033[00m
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.578 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.701 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.702 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.705 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.706 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.707 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.708 244018 DEBUG nova.virt.libvirt.driver [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.713 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.718 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.761 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.761 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022416.566317, 0612ec20-e725-4516-a153-f1fe9be74f75 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.762 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.773 244018 INFO nova.compute.manager [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Took 7.36 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.773 244018 DEBUG nova.compute.manager [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.786 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.790 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022416.57372, 0612ec20-e725-4516-a153-f1fe9be74f75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.791 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.816 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.822 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.840 244018 INFO nova.compute.manager [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Took 8.72 seconds to build instance.#033[00m
Feb 25 07:26:56 np0005629333 nova_compute[244014]: 2026-02-25 12:26:56.867 244018 DEBUG oslo_concurrency.lockutils [None req-466e5e45-36d8-4cab-ac96-f83cff42f7a2 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "0612ec20-e725-4516-a153-f1fe9be74f75" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.136 244018 DEBUG nova.network.neutron [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Updating instance_info_cache with network_info: [{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.152 244018 DEBUG oslo_concurrency.lockutils [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Releasing lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.154 244018 DEBUG nova.compute.manager [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:57 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:26:57 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:26:57 np0005629333 kernel: tapee46268d-74 (unregistering): left promiscuous mode
Feb 25 07:26:57 np0005629333 NetworkManager[49836]: <info>  [1772022417.3348] device (tapee46268d-74): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:26:57 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:57Z|00482|binding|INFO|Releasing lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 from this chassis (sb_readonly=0)
Feb 25 07:26:57 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:57Z|00483|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 down in Southbound
Feb 25 07:26:57 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:57Z|00484|binding|INFO|Removing iface tapee46268d-74 ovn-installed in OVS
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.349 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:57.354 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:87:f1 10.100.0.11'], port_security=['fa:16:3e:ba:87:f1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c2ca716d-3f7c-490b-954a-bca009559baa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ee46268d-740d-4ff9-8b65-4a81fc61eec3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.357 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:57.356 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ee46268d-740d-4ff9-8b65-4a81fc61eec3 in datapath ce318891-cf3c-4d99-af7c-c01770f38194 unbound from our chassis#033[00m
Feb 25 07:26:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:57.359 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce318891-cf3c-4d99-af7c-c01770f38194, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:26:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:57.361 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[681f618f-a409-4177-88f6-f5580c5e3058]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:57.362 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 namespace which is not needed anymore#033[00m
Feb 25 07:26:57 np0005629333 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000035.scope: Deactivated successfully.
Feb 25 07:26:57 np0005629333 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000035.scope: Consumed 12.252s CPU time.
Feb 25 07:26:57 np0005629333 systemd-machined[210048]: Machine qemu-58-instance-00000035 terminated.
Feb 25 07:26:57 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[291013]: [NOTICE]   (291038) : haproxy version is 2.8.14-c23fe91
Feb 25 07:26:57 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[291013]: [NOTICE]   (291038) : path to executable is /usr/sbin/haproxy
Feb 25 07:26:57 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[291013]: [WARNING]  (291038) : Exiting Master process...
Feb 25 07:26:57 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[291013]: [WARNING]  (291038) : Exiting Master process...
Feb 25 07:26:57 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[291013]: [ALERT]    (291038) : Current worker (291045) exited with code 143 (Terminated)
Feb 25 07:26:57 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[291013]: [WARNING]  (291038) : All workers exited. Exiting... (0)
Feb 25 07:26:57 np0005629333 systemd[1]: libpod-0c88556e16fdb202933ed83f84dce9a3e4f63746e4037f5398377a4eb7316225.scope: Deactivated successfully.
Feb 25 07:26:57 np0005629333 podman[292964]: 2026-02-25 12:26:57.518669811 +0000 UTC m=+0.053750192 container died 0c88556e16fdb202933ed83f84dce9a3e4f63746e4037f5398377a4eb7316225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.528 244018 INFO nova.virt.libvirt.driver [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance destroyed successfully.#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.529 244018 DEBUG nova.objects.instance [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'resources' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:26:57 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0c88556e16fdb202933ed83f84dce9a3e4f63746e4037f5398377a4eb7316225-userdata-shm.mount: Deactivated successfully.
Feb 25 07:26:57 np0005629333 systemd[1]: var-lib-containers-storage-overlay-669065cfe9b355ae4db308310ad10b864c500e39dbf8d8abf3254990187da5c4-merged.mount: Deactivated successfully.
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.550 244018 DEBUG nova.virt.libvirt.vif [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:26:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.551 244018 DEBUG nova.network.os_vif_util [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.552 244018 DEBUG nova.network.os_vif_util [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.552 244018 DEBUG os_vif [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.554 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.554 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee46268d-74, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.557 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.557 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:57 np0005629333 podman[292964]: 2026-02-25 12:26:57.558332744 +0000 UTC m=+0.093413105 container cleanup 0c88556e16fdb202933ed83f84dce9a3e4f63746e4037f5398377a4eb7316225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.560 244018 INFO os_vif [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74')#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.566 244018 DEBUG nova.virt.libvirt.driver [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Start _get_guest_xml network_info=[{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.570 244018 WARNING nova.virt.libvirt.driver [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.576 244018 DEBUG nova.virt.libvirt.host [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.576 244018 DEBUG nova.virt.libvirt.host [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.579 244018 DEBUG nova.virt.libvirt.host [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.580 244018 DEBUG nova.virt.libvirt.host [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.580 244018 DEBUG nova.virt.libvirt.driver [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.580 244018 DEBUG nova.virt.hardware [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.581 244018 DEBUG nova.virt.hardware [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.581 244018 DEBUG nova.virt.hardware [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:26:57 np0005629333 systemd[1]: libpod-conmon-0c88556e16fdb202933ed83f84dce9a3e4f63746e4037f5398377a4eb7316225.scope: Deactivated successfully.
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.581 244018 DEBUG nova.virt.hardware [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.581 244018 DEBUG nova.virt.hardware [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.581 244018 DEBUG nova.virt.hardware [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.582 244018 DEBUG nova.virt.hardware [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.582 244018 DEBUG nova.virt.hardware [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.582 244018 DEBUG nova.virt.hardware [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.582 244018 DEBUG nova.virt.hardware [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.582 244018 DEBUG nova.virt.hardware [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.583 244018 DEBUG nova.objects.instance [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:26:57 np0005629333 podman[293004]: 2026-02-25 12:26:57.623426196 +0000 UTC m=+0.037659647 container remove 0c88556e16fdb202933ed83f84dce9a3e4f63746e4037f5398377a4eb7316225 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.625 244018 DEBUG oslo_concurrency.processutils [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:57.640 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9a25b50d-6e95-485e-a186-a970ddcadfb4]: (4, ('Wed Feb 25 12:26:57 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 (0c88556e16fdb202933ed83f84dce9a3e4f63746e4037f5398377a4eb7316225)\n0c88556e16fdb202933ed83f84dce9a3e4f63746e4037f5398377a4eb7316225\nWed Feb 25 12:26:57 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 (0c88556e16fdb202933ed83f84dce9a3e4f63746e4037f5398377a4eb7316225)\n0c88556e16fdb202933ed83f84dce9a3e4f63746e4037f5398377a4eb7316225\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:57.642 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0bb4918b-ad99-4019-8ea6-a004b6f84989]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:57.643 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:57 np0005629333 kernel: tapce318891-c0: left promiscuous mode
Feb 25 07:26:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:57.650 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6553f18d-51bb-44be-b48f-74527c5754a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.659 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:57.660 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f17db2a3-390d-4bc2-90f2-5fcafa22658f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:57.664 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[35e413c9-f5c2-4c1f-a3cd-c15e43205153]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:57.677 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[14779900-4fd7-4e32-a691-fa313ad19492]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436375, 'reachable_time': 26540, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293020, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:57 np0005629333 systemd[1]: run-netns-ovnmeta\x2dce318891\x2dcf3c\x2d4d99\x2daf7c\x2dc01770f38194.mount: Deactivated successfully.
Feb 25 07:26:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:57.679 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:26:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:57.680 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[64aadbeb-a3c0-4676-b371-4bc15a8b3b69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 438 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 934 KiB/s rd, 3.7 MiB/s wr, 180 op/s
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.897 244018 DEBUG nova.compute.manager [req-08130b1b-7ace-4418-b47e-ff7e74c61f05 req-79c04fce-e98d-42be-998c-bf240e1385c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.897 244018 DEBUG oslo_concurrency.lockutils [req-08130b1b-7ace-4418-b47e-ff7e74c61f05 req-79c04fce-e98d-42be-998c-bf240e1385c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.898 244018 DEBUG oslo_concurrency.lockutils [req-08130b1b-7ace-4418-b47e-ff7e74c61f05 req-79c04fce-e98d-42be-998c-bf240e1385c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.898 244018 DEBUG oslo_concurrency.lockutils [req-08130b1b-7ace-4418-b47e-ff7e74c61f05 req-79c04fce-e98d-42be-998c-bf240e1385c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.899 244018 DEBUG nova.compute.manager [req-08130b1b-7ace-4418-b47e-ff7e74c61f05 req-79c04fce-e98d-42be-998c-bf240e1385c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:26:57 np0005629333 nova_compute[244014]: 2026-02-25 12:26:57.899 244018 WARNING nova.compute.manager [req-08130b1b-7ace-4418-b47e-ff7e74c61f05 req-79c04fce-e98d-42be-998c-bf240e1385c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.155 244018 INFO nova.compute.manager [None req-d8dffaa2-f9e3-4d2f-8619-690073e20fb8 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Pausing#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.156 244018 DEBUG nova.objects.instance [None req-d8dffaa2-f9e3-4d2f-8619-690073e20fb8 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'flavor' on Instance uuid 0612ec20-e725-4516-a153-f1fe9be74f75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:26:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:26:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3647707096' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.173 244018 DEBUG oslo_concurrency.processutils [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:26:58 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:58Z|00485|binding|INFO|Releasing lport 81f0f54c-4e04-4adf-952f-b6d0fe9698c7 from this chassis (sb_readonly=0)
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.232 244018 DEBUG oslo_concurrency.processutils [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.262 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.268 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022418.26787, 0612ec20-e725-4516-a153-f1fe9be74f75 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.268 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.271 244018 DEBUG nova.compute.manager [None req-d8dffaa2-f9e3-4d2f-8619-690073e20fb8 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.304 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.306 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.352 244018 DEBUG nova.compute.manager [req-0caaa265-11c3-439b-a59d-2ed94ae0529c req-be8d43e1-8053-417c-8538-2a74357bd39f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Received event network-vif-plugged-5951ca77-9d6a-41db-aaf8-3508d21701f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.352 244018 DEBUG oslo_concurrency.lockutils [req-0caaa265-11c3-439b-a59d-2ed94ae0529c req-be8d43e1-8053-417c-8538-2a74357bd39f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.353 244018 DEBUG oslo_concurrency.lockutils [req-0caaa265-11c3-439b-a59d-2ed94ae0529c req-be8d43e1-8053-417c-8538-2a74357bd39f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.353 244018 DEBUG oslo_concurrency.lockutils [req-0caaa265-11c3-439b-a59d-2ed94ae0529c req-be8d43e1-8053-417c-8538-2a74357bd39f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.354 244018 DEBUG nova.compute.manager [req-0caaa265-11c3-439b-a59d-2ed94ae0529c req-be8d43e1-8053-417c-8538-2a74357bd39f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] No waiting events found dispatching network-vif-plugged-5951ca77-9d6a-41db-aaf8-3508d21701f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.354 244018 WARNING nova.compute.manager [req-0caaa265-11c3-439b-a59d-2ed94ae0529c req-be8d43e1-8053-417c-8538-2a74357bd39f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Received unexpected event network-vif-plugged-5951ca77-9d6a-41db-aaf8-3508d21701f3 for instance with vm_state active and task_state pausing.#033[00m
Feb 25 07:26:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:26:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3124443606' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.774 244018 DEBUG oslo_concurrency.processutils [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.777 244018 DEBUG nova.virt.libvirt.vif [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:26:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.778 244018 DEBUG nova.network.os_vif_util [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.780 244018 DEBUG nova.network.os_vif_util [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
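
nova_to_osvif_vif translates nova's legacy vif dict (logged two records up) into an os-vif versioned object; the "Converted object" line above is that object's repr. Building the equivalent object by hand looks roughly like this (field names are read straight off the log line; kwargs-per-field construction is standard oslo.versionedobjects behaviour, but treat the snippet as an illustration, not nova's actual code path):

    from os_vif.objects import vif as vif_obj

    vif = vif_obj.VIFOpenVSwitch(
        id='ee46268d-740d-4ff9-8b65-4a81fc61eec3',
        address='fa:16:3e:ba:87:f1',
        bridge_name='br-int',
        vif_name='tapee46268d-74',
        has_traffic_filtering=True,   # OVN enforces security groups in OVS
        preserve_on_delete=False,
        active=True)
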
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.782 244018 DEBUG nova.objects.instance [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.814 244018 DEBUG nova.virt.libvirt.driver [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:26:58 np0005629333 nova_compute[244014]:  <uuid>8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</uuid>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:  <name>instance-00000035</name>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerActionsTestJSON-server-1971346294</nova:name>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:26:57</nova:creationTime>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:26:58 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:        <nova:user uuid="1f8bbe7db4454108aca005daa72d5c22">tempest-ServerActionsTestJSON-436476112-project-member</nova:user>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:        <nova:project uuid="56700581ea88438ba482d90bc702ced3">tempest-ServerActionsTestJSON-436476112</nova:project>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:        <nova:port uuid="ee46268d-740d-4ff9-8b65-4a81fc61eec3">
Feb 25 07:26:58 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <entry name="serial">8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</entry>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <entry name="uuid">8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</entry>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk">
Feb 25 07:26:58 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:26:58 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk.config">
Feb 25 07:26:58 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:26:58 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:ba:87:f1"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <target dev="tapee46268d-74"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b/console.log" append="off"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <input type="keyboard" bus="usb"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:26:58 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:26:58 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:26:58 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:26:58 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
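
Everything between "End _get_guest_xml" and the driver.py:7555 reference above is one log record: the complete libvirt domain XML that nova is about to define for the rebooted guest. When working from a log like this one, the Python stdlib is enough to pull out the interesting pieces; a sketch, assuming the <domain> block above has been saved to a file:

    import xml.etree.ElementTree as ET

    xml = open('domain.xml').read()   # the <domain>...</domain> text above
    root = ET.fromstring(xml)
    for disk in root.findall('./devices/disk'):
        src = disk.find('source')
        # prints the rbd sources: vms/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk
        # (virtio disk) and its .config companion (the sata config-drive cdrom)
        print(disk.get('device'), src.get('protocol'), src.get('name'))
    # the tap device that OVN claims a few records later
    print(root.find('./devices/interface/target').get('dev'))
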
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.816 244018 DEBUG nova.virt.libvirt.driver [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.816 244018 DEBUG nova.virt.libvirt.driver [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.818 244018 DEBUG nova.virt.libvirt.vif [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:26:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.818 244018 DEBUG nova.network.os_vif_util [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.820 244018 DEBUG nova.network.os_vif_util [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.820 244018 DEBUG os_vif [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
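
"Plugging vif" marks the hand-off into the os-vif library proper, which dispatches to the 'ovs' plugin named in the object. The caller side of that API is small; a sketch assuming the vif object from the conversion sketch above (uuid and name are copied from the log; InstanceInfo's uuid/name fields are the documented ones):

    import os_vif
    from os_vif.objects.instance_info import InstanceInfo

    os_vif.initialize()   # loads the plug/unplug plugins, including 'ovs'
    info = InstanceInfo(uuid='8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b',
                        name='instance-00000035')
    os_vif.plug(vif, info)   # ends up in the ovsdb transactions logged below
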
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.821 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.822 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.823 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.827 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.827 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee46268d-74, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.828 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee46268d-74, col_values=(('external_ids', {'iface-id': 'ee46268d-740d-4ff9-8b65-4a81fc61eec3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:87:f1', 'vm-uuid': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
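
The AddPortCommand/DbSetCommand pair above (idx=0 and idx=1 of one "txn n=1") ride in a single OVSDB transaction against the local Open vSwitch database; the earlier AddBridgeCommand was a separate, no-op transaction since br-int already exists. Reproduced against ovsdbapp's Open vSwitch schema API it looks roughly like this (the socket path is an assumption for a typical host; values mirror the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Build an IDL connection to the local ovsdb-server (path assumed)
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    conn = connection.Connection(idl=idl, timeout=10)
    api = impl_idl.OvsdbIdl(conn)

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapee46268d-74', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapee46268d-74',
            ('external_ids',
             {'iface-id': 'ee46268d-740d-4ff9-8b65-4a81fc61eec3',
              'iface-status': 'active',
              'attached-mac': 'fa:16:3e:ba:87:f1',
              'vm-uuid': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b'})))

The iface-id written here is what ovn-controller matches against the southbound Port_Binding, which is why the "Claiming lport" messages follow a few records later.
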
Feb 25 07:26:58 np0005629333 NetworkManager[49836]: <info>  [1772022418.8315] manager: (tapee46268d-74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/220)
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.832 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.837 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.840 244018 INFO os_vif [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74')#033[00m
Feb 25 07:26:58 np0005629333 kernel: tapee46268d-74: entered promiscuous mode
Feb 25 07:26:58 np0005629333 NetworkManager[49836]: <info>  [1772022418.9303] manager: (tapee46268d-74): new Tun device (/org/freedesktop/NetworkManager/Devices/221)
Feb 25 07:26:58 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:58Z|00486|binding|INFO|Claiming lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 for this chassis.
Feb 25 07:26:58 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:58Z|00487|binding|INFO|ee46268d-740d-4ff9-8b65-4a81fc61eec3: Claiming fa:16:3e:ba:87:f1 10.100.0.11
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.931 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:58.939 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:87:f1 10.100.0.11'], port_security=['fa:16:3e:ba:87:f1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'c2ca716d-3f7c-490b-954a-bca009559baa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ee46268d-740d-4ff9-8b65-4a81fc61eec3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:26:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:58.941 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ee46268d-740d-4ff9-8b65-4a81fc61eec3 in datapath ce318891-cf3c-4d99-af7c-c01770f38194 bound to our chassis#033[00m
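
The "Matched UPDATE: PortBindingUpdatedEvent(...)" record above is ovsdbapp's IDL event machinery at work: the metadata agent registers row events against the southbound Port_Binding table and reacts when the chassis column flips to this host. The shape of such an event class, per the ovsdbapp API (the real neutron class carries more filtering logic; this is only a skeleton):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdated(row_event.RowEvent):
        def __init__(self):
            # events/table/conditions mirror the Matched UPDATE line above:
            # (('update',), 'Port_Binding', None)
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # row exposes the columns dumped in the log record above
            print('port %s bound, chassis=%s' % (row.logical_port, row.chassis))
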
Feb 25 07:26:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:58.944 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce318891-cf3c-4d99-af7c-c01770f38194#033[00m
Feb 25 07:26:58 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:58Z|00488|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 ovn-installed in OVS
Feb 25 07:26:58 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:58Z|00489|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 up in Southbound
Feb 25 07:26:58 np0005629333 nova_compute[244014]: 2026-02-25 12:26:58.948 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:58.956 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8b5474b0-588f-452c-9318-6ee281fa178f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:58.957 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce318891-c1 in ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
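
"Creating VETH tapce318891-c1 in ovnmeta-..." is neutron's privileged ip_lib driving pyroute2: one end of a veth pair stays in the root namespace (and is plugged into br-int a few records below), while the peer is created directly inside the metadata namespace. A bare pyroute2 sketch of that step, assuming the named namespace already exists and that this simplification of neutron's actual call sequence is acceptable:

    from pyroute2 import IPRoute

    ipr = IPRoute()
    # -c0 end stays in the root namespace; the -c1 peer is born inside
    # the ovnmeta- namespace (net_ns_fd accepts a named netns)
    ipr.link('add', ifname='tapce318891-c0', kind='veth',
             peer={'ifname': 'tapce318891-c1',
                   'net_ns_fd': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194'})
    ipr.close()
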
Feb 25 07:26:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:58.959 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce318891-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:26:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:58.959 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f4445bed-7088-436d-8327-f9c9a1482a88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:58.961 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[304831df-7a4c-4a16-91cd-cb9652e82a7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:58 np0005629333 systemd-machined[210048]: New machine qemu-63-instance-00000035.
Feb 25 07:26:58 np0005629333 systemd-udevd[293098]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:26:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:58.974 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[59cc087c-d0d7-4882-a81d-c27dc5490a55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:58 np0005629333 systemd[1]: Started Virtual Machine qemu-63-instance-00000035.
Feb 25 07:26:58 np0005629333 NetworkManager[49836]: <info>  [1772022418.9867] device (tapee46268d-74): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:26:58 np0005629333 NetworkManager[49836]: <info>  [1772022418.9884] device (tapee46268d-74): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:26:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:58.990 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[959419a3-7223-4628-baa6-90f4353411e8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.018 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f9ad794d-351b-4e45-8409-8ca9834ee23c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:59 np0005629333 systemd-udevd[293102]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:26:59 np0005629333 NetworkManager[49836]: <info>  [1772022419.0270] manager: (tapce318891-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/222)
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.028 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[22c71db0-d241-4c86-ab23-063e17ca4fdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.055 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[31a7eef7-9a2e-4264-b31b-8cb995560cf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.059 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0a8f0af2-b3cb-40c9-893b-a22f5f2afb6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:59 np0005629333 NetworkManager[49836]: <info>  [1772022419.0810] device (tapce318891-c0): carrier: link connected
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.087 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c5714778-368e-4189-b05d-c09028e77893]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.100 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cd42f1ca-2221-42bd-a484-c19667f7aeb5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 147], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438848, 'reachable_time': 24499, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293130, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.110 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bae719af-088c-437c-9c8e-ba013c6acb28]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:c3a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438848, 'tstamp': 438848}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293131, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.126 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[393926f5-dc64-4db9-ba2a-0a6a58f14494]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 147], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438848, 'reachable_time': 24499, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293132, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
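
The two huge privsep replies above are pyroute2 RTM_NEWLINK messages serialized whole into the log; the payload lives in the attrs list (IFLA_IFNAME, IFLA_ADDRESS, IFLA_STATS64, and so on). Queried live instead of read back from the log, the same data comes out like this (a sketch; the namespace name is taken from the records above):

    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194')
    for msg in ns.get_links():
        # get_attr() resolves the ['IFLA_*', value] pairs shown in the log
        print(msg.get_attr('IFLA_IFNAME'),
              msg.get_attr('IFLA_ADDRESS'),
              msg['state'])
    ns.close()
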
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.149 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a9628a36-f67f-43a0-8bcb-4c6b0ab44d3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.204 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7cc3e8ed-ab9c-44ba-918b-0fa708c7ebd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.206 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.206 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.207 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce318891-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:59 np0005629333 nova_compute[244014]: 2026-02-25 12:26:59.209 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:59 np0005629333 kernel: tapce318891-c0: entered promiscuous mode
Feb 25 07:26:59 np0005629333 NetworkManager[49836]: <info>  [1772022419.2129] manager: (tapce318891-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/223)
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.213 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce318891-c0, col_values=(('external_ids', {'iface-id': '3b184c15-8ef4-4e11-bd18-e1253a4ff440'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:26:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:26:59Z|00490|binding|INFO|Releasing lport 3b184c15-8ef4-4e11-bd18-e1253a4ff440 from this chassis (sb_readonly=0)
Feb 25 07:26:59 np0005629333 nova_compute[244014]: 2026-02-25 12:26:59.215 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:59 np0005629333 nova_compute[244014]: 2026-02-25 12:26:59.229 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:59 np0005629333 nova_compute[244014]: 2026-02-25 12:26:59.233 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.234 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.236 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eb2eaff5-15de-43e1-8c45-61a94f47875c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.237 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:26:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:26:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:26:59.238 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'env', 'PROCESS_TAG=haproxy-ce318891-cf3c-4d99-af7c-c01770f38194', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce318891-cf3c-4d99-af7c-c01770f38194.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
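
The "Running command" record shows how the proxy is actually started: neutron-rootwrap escalates privileges, ip netns exec enters the ovnmeta- namespace, and haproxy reads the config rendered above. Stripped of rootwrap and the PROCESS_TAG environment wrapper, the launch reduces to a plain subprocess call (root privileges assumed; haproxy then daemonizes itself per the 'daemon' keyword in its config and writes the pidfile the agent polled for a moment earlier):

    import subprocess

    subprocess.check_call([
        'ip', 'netns', 'exec',
        'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194',
        'haproxy', '-f',
        '/var/lib/neutron/ovn-metadata-proxy/'
        'ce318891-cf3c-4d99-af7c-c01770f38194.conf'])
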
Feb 25 07:26:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Feb 25 07:26:59 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Feb 25 07:26:59 np0005629333 nova_compute[244014]: 2026-02-25 12:26:59.593 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 25 07:26:59 np0005629333 nova_compute[244014]: 2026-02-25 12:26:59.593 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022419.592792, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:26:59 np0005629333 nova_compute[244014]: 2026-02-25 12:26:59.593 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:26:59 np0005629333 nova_compute[244014]: 2026-02-25 12:26:59.598 244018 DEBUG nova.compute.manager [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:26:59 np0005629333 nova_compute[244014]: 2026-02-25 12:26:59.602 244018 INFO nova.virt.libvirt.driver [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance rebooted successfully.#033[00m
Feb 25 07:26:59 np0005629333 nova_compute[244014]: 2026-02-25 12:26:59.603 244018 DEBUG nova.compute.manager [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:59 np0005629333 podman[293205]: 2026-02-25 12:26:59.61466364 +0000 UTC m=+0.060286017 container create 3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:26:59 np0005629333 nova_compute[244014]: 2026-02-25 12:26:59.640 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:59 np0005629333 nova_compute[244014]: 2026-02-25 12:26:59.644 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:26:59 np0005629333 systemd[1]: Started libpod-conmon-3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a.scope.
Feb 25 07:26:59 np0005629333 nova_compute[244014]: 2026-02-25 12:26:59.668 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022419.5975893, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:26:59 np0005629333 nova_compute[244014]: 2026-02-25 12:26:59.668 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Started (Lifecycle Event)#033[00m
Feb 25 07:26:59 np0005629333 nova_compute[244014]: 2026-02-25 12:26:59.671 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:26:59 np0005629333 nova_compute[244014]: 2026-02-25 12:26:59.677 244018 DEBUG oslo_concurrency.lockutils [None req-c2ae4ad3-038b-4439-ba61-59f3a4c80c5a 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.389s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
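
The Lock released record above closes the bracket opened when the hard reboot began: the whole do_reboot_instance path ran under a per-instance oslo.concurrency lock for 4.389s, which is what serializes competing operations (reboot, rebuild, delete) on a single instance. The primitive in isolation, with an illustrative function body:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b')
    def do_reboot_instance():
        ...   # at most one critical section per instance UUID at a time
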
Feb 25 07:26:59 np0005629333 podman[293205]: 2026-02-25 12:26:59.581825281 +0000 UTC m=+0.027447668 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:26:59 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:26:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eba61f7f2d26200ebeb270f5b7762f03e0ef58df99ed676891fb1a63a7baeb70/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:26:59 np0005629333 nova_compute[244014]: 2026-02-25 12:26:59.693 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:26:59 np0005629333 nova_compute[244014]: 2026-02-25 12:26:59.702 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
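Each libvirt lifecycle event above ("Resumed", then "Started") funnels into the same reconciliation: compare the power state Nova has on record against what the hypervisor reports, and only act on a mismatch. A simplified sketch of that decision (the real handle_lifecycle_event also guards against races with in-flight tasks):

    RUNNING = 1  # the numeric power_state printed as "1" in the lines above

    def sync_power_state(db_power, vm_power, task_state):
        if db_power == vm_power:
            return 'in-sync'      # the case logged here: DB 1 == VM 1
        if task_state is not None:
            return 'defer'        # an operation (e.g. a hard reboot) owns
                                  # the instance; let it finish first
        return 'update-db'        # persist the hypervisor's view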
Feb 25 07:26:59 np0005629333 podman[293205]: 2026-02-25 12:26:59.708327481 +0000 UTC m=+0.153949908 container init 3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 25 07:26:59 np0005629333 podman[293205]: 2026-02-25 12:26:59.716736499 +0000 UTC m=+0.162358896 container start 3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:26:59 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[293222]: [NOTICE]   (293226) : New worker (293228) forked
Feb 25 07:26:59 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[293222]: [NOTICE]   (293226) : Loading success.
Feb 25 07:26:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 438 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 612 KiB/s rd, 2.5 MiB/s wr, 165 op/s
Feb 25 07:27:00 np0005629333 nova_compute[244014]: 2026-02-25 12:27:00.019 244018 DEBUG nova.compute.manager [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:27:00 np0005629333 nova_compute[244014]: 2026-02-25 12:27:00.019 244018 DEBUG oslo_concurrency.lockutils [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:00 np0005629333 nova_compute[244014]: 2026-02-25 12:27:00.020 244018 DEBUG oslo_concurrency.lockutils [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:00 np0005629333 nova_compute[244014]: 2026-02-25 12:27:00.021 244018 DEBUG oslo_concurrency.lockutils [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:00 np0005629333 nova_compute[244014]: 2026-02-25 12:27:00.021 244018 DEBUG nova.compute.manager [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:27:00 np0005629333 nova_compute[244014]: 2026-02-25 12:27:00.022 244018 WARNING nova.compute.manager [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:27:00 np0005629333 nova_compute[244014]: 2026-02-25 12:27:00.022 244018 DEBUG nova.compute.manager [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:27:00 np0005629333 nova_compute[244014]: 2026-02-25 12:27:00.023 244018 DEBUG oslo_concurrency.lockutils [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:00 np0005629333 nova_compute[244014]: 2026-02-25 12:27:00.023 244018 DEBUG oslo_concurrency.lockutils [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:00 np0005629333 nova_compute[244014]: 2026-02-25 12:27:00.024 244018 DEBUG oslo_concurrency.lockutils [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:00 np0005629333 nova_compute[244014]: 2026-02-25 12:27:00.024 244018 DEBUG nova.compute.manager [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:27:00 np0005629333 nova_compute[244014]: 2026-02-25 12:27:00.024 244018 WARNING nova.compute.manager [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:27:00 np0005629333 nova_compute[244014]: 2026-02-25 12:27:00.025 244018 DEBUG nova.compute.manager [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:27:00 np0005629333 nova_compute[244014]: 2026-02-25 12:27:00.025 244018 DEBUG oslo_concurrency.lockutils [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:00 np0005629333 nova_compute[244014]: 2026-02-25 12:27:00.026 244018 DEBUG oslo_concurrency.lockutils [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:00 np0005629333 nova_compute[244014]: 2026-02-25 12:27:00.026 244018 DEBUG oslo_concurrency.lockutils [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:00 np0005629333 nova_compute[244014]: 2026-02-25 12:27:00.026 244018 DEBUG nova.compute.manager [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:27:00 np0005629333 nova_compute[244014]: 2026-02-25 12:27:00.026 244018 WARNING nova.compute.manager [req-ea4a154b-af74-42bb-b6d1-480e34c34eb6 req-04085586-6634-4469-b1fe-773c26ae7136 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state None.#033[00m
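The repeated "No waiting events found ... Received unexpected event" pairs above show the external-event race plainly: the reboot completed and released its waiters before Neutron's three network-vif-plugged notifications arrived, so each pop finds nothing. The dispatch is essentially a keyed registry of waiters, loosely mirroring nova.compute.manager.InstanceEvents (a simplified sketch; the real class keeps per-instance dicts under the "-events" lock seen in the log):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._waiters = {}  # (instance_uuid, event_name) -> threading.Event

        def prepare(self, uuid, name):
            ev = threading.Event()
            self._waiters[(uuid, name)] = ev
            return ev

        def pop_instance_event(self, uuid, name):
            # None here is what produces "No waiting events found" followed
            # by the "Received unexpected event" warning.
            return self._waiters.pop((uuid, name), None)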
Feb 25 07:27:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:27:00 np0005629333 nova_compute[244014]: 2026-02-25 12:27:00.994 244018 INFO nova.compute.manager [None req-cca10bbf-0f06-4916-959d-8382b9e0e3a0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Get console output#033[00m
Feb 25 07:27:01 np0005629333 nova_compute[244014]: 2026-02-25 12:27:01.002 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
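The "can't concat NoneType to bytes" above is an ordinary Python TypeError from appending a failed pty read to a bytes buffer, caught and downgraded to INFO. A two-line reproduction and the usual guard (illustrative, not Nova's code):

    buf = b""
    chunk = None         # what a read can yield at EOF or on error
    # buf += chunk       # raises: TypeError: can't concat NoneType to bytes
    buf += chunk or b""  # defensive fix: treat None as empty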
Feb 25 07:27:01 np0005629333 nova_compute[244014]: 2026-02-25 12:27:01.079 244018 DEBUG oslo_concurrency.lockutils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "0612ec20-e725-4516-a153-f1fe9be74f75" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:01 np0005629333 nova_compute[244014]: 2026-02-25 12:27:01.079 244018 DEBUG oslo_concurrency.lockutils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "0612ec20-e725-4516-a153-f1fe9be74f75" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:01 np0005629333 nova_compute[244014]: 2026-02-25 12:27:01.080 244018 INFO nova.compute.manager [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Shelving#033[00m
Feb 25 07:27:01 np0005629333 kernel: tap5951ca77-9d (unregistering): left promiscuous mode
Feb 25 07:27:01 np0005629333 NetworkManager[49836]: <info>  [1772022421.1364] device (tap5951ca77-9d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:27:01 np0005629333 nova_compute[244014]: 2026-02-25 12:27:01.141 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:01 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:01Z|00491|binding|INFO|Releasing lport 5951ca77-9d6a-41db-aaf8-3508d21701f3 from this chassis (sb_readonly=0)
Feb 25 07:27:01 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:01Z|00492|binding|INFO|Setting lport 5951ca77-9d6a-41db-aaf8-3508d21701f3 down in Southbound
Feb 25 07:27:01 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:01Z|00493|binding|INFO|Removing iface tap5951ca77-9d ovn-installed in OVS
Feb 25 07:27:01 np0005629333 nova_compute[244014]: 2026-02-25 12:27:01.149 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:01 np0005629333 nova_compute[244014]: 2026-02-25 12:27:01.156 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.156 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:90:29 10.100.0.11'], port_security=['fa:16:3e:a7:90:29 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0612ec20-e725-4516-a153-f1fe9be74f75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64c22162-7e15-45de-8fd2-8c9a24f27006', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c85a955249394f0faf7c890f5cd0df32', 'neutron:revision_number': '4', 'neutron:security_group_ids': '10bdd349-ebee-42f5-8295-ca2b7d5c5d74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9495f97-67e6-4da7-a9b0-f643c9e48076, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=5951ca77-9d6a-41db-aaf8-3508d21701f3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.159 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 5951ca77-9d6a-41db-aaf8-3508d21701f3 in datapath 64c22162-7e15-45de-8fd2-8c9a24f27006 unbound from our chassis#033[00m
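The "Matched UPDATE: PortBindingUpdatedEvent" lines above come from ovsdbapp's row-event machinery: every Port_Binding update from the OVN southbound is tested against registered event objects, and a chassis change drives the bound/unbound decisions. A minimal sketch of such an event class, assuming ovsdbapp is available (this subclass is illustrative; the agent's real event also inspects the chassis column and datapath):

    from ovsdbapp import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None --
            # the same tuple the log prints for the matched event.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Compare row.chassis with old.chassis to decide between
            # "bound to our chassis" and "unbound from our chassis".
            print(event, row.logical_port)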
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.162 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64c22162-7e15-45de-8fd2-8c9a24f27006#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.173 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c0350793-529c-4f88-bca9-015d4c9ced6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:01 np0005629333 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000039.scope: Deactivated successfully.
Feb 25 07:27:01 np0005629333 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000039.scope: Consumed 2.383s CPU time.
Feb 25 07:27:01 np0005629333 systemd-machined[210048]: Machine qemu-62-instance-00000039 terminated.
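In the scope lines above, "\x2d" is systemd's escape for "-" inside unit names; decoding it recovers the machine name that systemd-machined then reports plainly. A small decoder covering just the \xNN escapes seen here (full systemd unescaping also remaps "/" and leading dots):

    import re

    def systemd_unescape(name: str) -> str:
        return re.sub(r'\\x([0-9a-fA-F]{2})',
                      lambda m: chr(int(m.group(1), 16)), name)

    print(systemd_unescape(r'machine-qemu\x2d62\x2dinstance\x2d00000039.scope'))
    # -> machine-qemu-62-instance-00000039.scope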
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.199 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a565ab83-2cd4-46b7-9f20-2b0c9ae7189a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.203 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ccca2450-6aaa-47f6-a231-a2b090b94875]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:01 np0005629333 podman[293239]: 2026-02-25 12:27:01.215449325 +0000 UTC m=+0.057866669 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.43.0)
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.225 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[293e1400-6799-4d76-8f55-3597e0688b77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:01 np0005629333 podman[293242]: 2026-02-25 12:27:01.253598065 +0000 UTC m=+0.087115127 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.261 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2514a43f-7452-4db2-8ea6-e5869bd80aff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64c22162-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:1c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428693, 'reachable_time': 31584, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293283, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.277 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c12bbe1c-1fd8-4801-87a7-b7022233cb9f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428702, 'tstamp': 428702}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293291, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428705, 'tstamp': 428705}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293291, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
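The two privsep replies above decode the namespace-side interface state: tap64c22162-71 carries the subnet address 10.100.0.2/28 plus the metadata VIP 169.254.169.254/32 inside the ovnmeta-64c22162-... namespace. The agent gathers this through privsep'd netlink calls; a roughly equivalent direct query with pyroute2 is sketched below (names taken from the log; the label filter is an assumption about how the dump would be narrowed):

    from pyroute2 import NetNS

    with NetNS('ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006') as ns:
        # RTM_NEWADDR messages, like the two in the reply above
        for addr in ns.get_addr(label='tap64c22162-71'):
            print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])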
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.279 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64c22162-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:01 np0005629333 nova_compute[244014]: 2026-02-25 12:27:01.280 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:01 np0005629333 nova_compute[244014]: 2026-02-25 12:27:01.284 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.285 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64c22162-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.285 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.285 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64c22162-70, col_values=(('external_ids', {'iface-id': '81f0f54c-4e04-4adf-952f-b6d0fe9698c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.286 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
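The DelPort/AddPort/DbSet commands above are the agent converging the namespace tap to its desired wiring; "Transaction caused no change" confirms each condition already held, which is what makes re-provisioning idempotent. A sketch of composing the same commands with ovsdbapp (connection setup omitted; 'api' is assumed to be an open_vswitch-schema ovsdbapp API instance, and note the log actually runs each command in its own single-command transaction):

    # api = ovsdbapp OVS API bound to the local ovsdb-server (setup omitted)
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap64c22162-70', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap64c22162-70', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap64c22162-70',
            ('external_ids', {'iface-id': '81f0f54c-4e04-4adf-952f-b6d0fe9698c7'})))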
Feb 25 07:27:01 np0005629333 kernel: tap5951ca77-9d: entered promiscuous mode
Feb 25 07:27:01 np0005629333 NetworkManager[49836]: <info>  [1772022421.3146] manager: (tap5951ca77-9d): new Tun device (/org/freedesktop/NetworkManager/Devices/224)
Feb 25 07:27:01 np0005629333 kernel: tap5951ca77-9d (unregistering): left promiscuous mode
Feb 25 07:27:01 np0005629333 systemd-udevd[293117]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:27:01 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:01Z|00494|binding|INFO|Claiming lport 5951ca77-9d6a-41db-aaf8-3508d21701f3 for this chassis.
Feb 25 07:27:01 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:01Z|00495|binding|INFO|5951ca77-9d6a-41db-aaf8-3508d21701f3: Claiming fa:16:3e:a7:90:29 10.100.0.11
Feb 25 07:27:01 np0005629333 nova_compute[244014]: 2026-02-25 12:27:01.319 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:01 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:01Z|00496|binding|INFO|Setting lport 5951ca77-9d6a-41db-aaf8-3508d21701f3 ovn-installed in OVS
Feb 25 07:27:01 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:01Z|00497|if_status|INFO|Dropped 2 log messages in last 214 seconds (most recently, 214 seconds ago) due to excessive rate
Feb 25 07:27:01 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:01Z|00498|if_status|INFO|Not setting lport 5951ca77-9d6a-41db-aaf8-3508d21701f3 down as sb is readonly
Feb 25 07:27:01 np0005629333 nova_compute[244014]: 2026-02-25 12:27:01.334 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:01 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:01Z|00499|binding|INFO|Releasing lport 5951ca77-9d6a-41db-aaf8-3508d21701f3 from this chassis (sb_readonly=0)
Feb 25 07:27:01 np0005629333 nova_compute[244014]: 2026-02-25 12:27:01.338 244018 INFO nova.virt.libvirt.driver [-] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Instance destroyed successfully.#033[00m
Feb 25 07:27:01 np0005629333 nova_compute[244014]: 2026-02-25 12:27:01.339 244018 DEBUG nova.objects.instance [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0612ec20-e725-4516-a153-f1fe9be74f75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.341 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:90:29 10.100.0.11'], port_security=['fa:16:3e:a7:90:29 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0612ec20-e725-4516-a153-f1fe9be74f75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64c22162-7e15-45de-8fd2-8c9a24f27006', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c85a955249394f0faf7c890f5cd0df32', 'neutron:revision_number': '4', 'neutron:security_group_ids': '10bdd349-ebee-42f5-8295-ca2b7d5c5d74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9495f97-67e6-4da7-a9b0-f643c9e48076, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=5951ca77-9d6a-41db-aaf8-3508d21701f3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:27:01 np0005629333 nova_compute[244014]: 2026-02-25 12:27:01.344 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.347 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 5951ca77-9d6a-41db-aaf8-3508d21701f3 in datapath 64c22162-7e15-45de-8fd2-8c9a24f27006 bound to our chassis#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.350 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:90:29 10.100.0.11'], port_security=['fa:16:3e:a7:90:29 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '0612ec20-e725-4516-a153-f1fe9be74f75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64c22162-7e15-45de-8fd2-8c9a24f27006', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c85a955249394f0faf7c890f5cd0df32', 'neutron:revision_number': '4', 'neutron:security_group_ids': '10bdd349-ebee-42f5-8295-ca2b7d5c5d74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9495f97-67e6-4da7-a9b0-f643c9e48076, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=5951ca77-9d6a-41db-aaf8-3508d21701f3) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.353 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64c22162-7e15-45de-8fd2-8c9a24f27006#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.369 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[167ed9a0-3c07-489e-983d-22a9203ba8ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.386 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c9547e70-d821-4d70-9908-e9bf6987d9d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.390 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4e54c359-f103-4840-83a0-2b5196642fed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:01 np0005629333 nova_compute[244014]: 2026-02-25 12:27:01.400 244018 DEBUG nova.compute.manager [req-7a778495-82be-4036-b1a2-5677d5675d35 req-bb3d0b55-bae5-40c7-974a-96b9fcc21608 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Received event network-vif-unplugged-5951ca77-9d6a-41db-aaf8-3508d21701f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:27:01 np0005629333 nova_compute[244014]: 2026-02-25 12:27:01.400 244018 DEBUG oslo_concurrency.lockutils [req-7a778495-82be-4036-b1a2-5677d5675d35 req-bb3d0b55-bae5-40c7-974a-96b9fcc21608 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:01 np0005629333 nova_compute[244014]: 2026-02-25 12:27:01.400 244018 DEBUG oslo_concurrency.lockutils [req-7a778495-82be-4036-b1a2-5677d5675d35 req-bb3d0b55-bae5-40c7-974a-96b9fcc21608 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:01 np0005629333 nova_compute[244014]: 2026-02-25 12:27:01.400 244018 DEBUG oslo_concurrency.lockutils [req-7a778495-82be-4036-b1a2-5677d5675d35 req-bb3d0b55-bae5-40c7-974a-96b9fcc21608 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:01 np0005629333 nova_compute[244014]: 2026-02-25 12:27:01.401 244018 DEBUG nova.compute.manager [req-7a778495-82be-4036-b1a2-5677d5675d35 req-bb3d0b55-bae5-40c7-974a-96b9fcc21608 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] No waiting events found dispatching network-vif-unplugged-5951ca77-9d6a-41db-aaf8-3508d21701f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:27:01 np0005629333 nova_compute[244014]: 2026-02-25 12:27:01.401 244018 WARNING nova.compute.manager [req-7a778495-82be-4036-b1a2-5677d5675d35 req-bb3d0b55-bae5-40c7-974a-96b9fcc21608 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Received unexpected event network-vif-unplugged-5951ca77-9d6a-41db-aaf8-3508d21701f3 for instance with vm_state paused and task_state shelving.#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.415 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3c590edf-4e52-4f40-8dd2-679e88b5329c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.427 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3c611a27-489f-4947-a2fe-be706a3e2e81]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64c22162-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:1c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428693, 'reachable_time': 31584, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293302, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.444 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f526e6f9-a4eb-4752-b9f9-e9540312028c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428702, 'tstamp': 428702}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293303, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428705, 'tstamp': 428705}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293303, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.447 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64c22162-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:01 np0005629333 nova_compute[244014]: 2026-02-25 12:27:01.450 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:01 np0005629333 nova_compute[244014]: 2026-02-25 12:27:01.452 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.453 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64c22162-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.454 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.454 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64c22162-70, col_values=(('external_ids', {'iface-id': '81f0f54c-4e04-4adf-952f-b6d0fe9698c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.455 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.457 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 5951ca77-9d6a-41db-aaf8-3508d21701f3 in datapath 64c22162-7e15-45de-8fd2-8c9a24f27006 unbound from our chassis#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.459 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64c22162-7e15-45de-8fd2-8c9a24f27006#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.470 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b5240f9f-3afd-4490-8d53-16d5326b72ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.488 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2621ab86-97ef-424e-93ee-92cbb23a8890]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.491 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[bda54c9b-f334-469b-afb8-0b4b601e9f32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.511 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4264fac6-9673-4559-988e-0a150a118e79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.526 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[06242db4-2e80-4139-a5d2-b12ed82af825]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64c22162-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:1c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 14, 'rx_bytes': 700, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 14, 'rx_bytes': 700, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428693, 'reachable_time': 31584, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293310, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.537 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bc17ada6-10c1-4290-bc9c-236abf9f16ed]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428702, 'tstamp': 428702}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293311, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428705, 'tstamp': 428705}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293311, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
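[editor's note] The two privsep replies above are pyroute2 netlink messages: an RTM_NEWLINK dump of tap64c22162-71 followed by the RTM_NEWADDR entries for its fixed IP (10.100.0.2/28) and the 169.254.169.254 metadata address. A minimal sketch of reading the same data back with pyroute2, assuming root access; the namespace name is taken from the 'target' field in the reply header:

    from pyroute2 import NetNS

    # Open the OVN metadata namespace named in the reply's 'target' field.
    with NetNS('ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006') as ns:
        for link in ns.get_links():
            name = link.get_attr('IFLA_IFNAME')
            if name and name.startswith('tap'):
                print(name, link.get_attr('IFLA_ADDRESS'),
                      link.get_attr('IFLA_OPERSTATE'))
                # RTM_NEWADDR entries for this ifindex: the subnet address
                # plus the 169.254.169.254 metadata VIP seen above.
                for addr in ns.get_addr(index=link['index']):
                    print('  %s/%s' % (addr.get_attr('IFA_ADDRESS'),
                                       addr['prefixlen']))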
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.539 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64c22162-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:01 np0005629333 nova_compute[244014]: 2026-02-25 12:27:01.541 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:01 np0005629333 nova_compute[244014]: 2026-02-25 12:27:01.544 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.545 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64c22162-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.545 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.547 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64c22162-70, col_values=(('external_ids', {'iface-id': '81f0f54c-4e04-4adf-952f-b6d0fe9698c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:01.547 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
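[editor's note] The three ovsdbapp transactions above re-plug the metadata port: drop tap64c22162-70 from br-ex if present, add it to br-int, and point its external_ids:iface-id at the metadata port UUID; the last two commits report "Transaction caused no change" because the rows already match. A sketch of issuing the same commands through ovsdbapp; the socket endpoint here is an assumption (the agent uses its configured ovsdb_connection):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Endpoint is an assumption, for illustration only.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # The same three commands the agent logs, batched into one transaction.
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap64c22162-70', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap64c22162-70', may_exist=True))
        txn.add(api.db_set('Interface', 'tap64c22162-70',
                           ('external_ids',
                            {'iface-id': '81f0f54c-4e04-4adf-952f-b6d0fe9698c7'})))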
Feb 25 07:27:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:27:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:27:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:27:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:27:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:27:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:27:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 438 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 59 KiB/s wr, 118 op/s
Feb 25 07:27:01 np0005629333 nova_compute[244014]: 2026-02-25 12:27:01.881 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:27:01 np0005629333 nova_compute[244014]: 2026-02-25 12:27:01.883 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 07:27:01 np0005629333 nova_compute[244014]: 2026-02-25 12:27:01.884 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 07:27:02 np0005629333 nova_compute[244014]: 2026-02-25 12:27:02.001 244018 INFO nova.virt.libvirt.driver [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Beginning cold snapshot process#033[00m
Feb 25 07:27:02 np0005629333 nova_compute[244014]: 2026-02-25 12:27:02.071 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:27:02 np0005629333 nova_compute[244014]: 2026-02-25 12:27:02.071 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:27:02 np0005629333 nova_compute[244014]: 2026-02-25 12:27:02.072 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 25 07:27:02 np0005629333 nova_compute[244014]: 2026-02-25 12:27:02.073 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:02 np0005629333 nova_compute[244014]: 2026-02-25 12:27:02.159 244018 INFO nova.compute.manager [None req-d1df27ec-1aa2-4e7c-bc7b-cc612375e94e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Get console output#033[00m
Feb 25 07:27:02 np0005629333 nova_compute[244014]: 2026-02-25 12:27:02.168 244018 DEBUG nova.virt.libvirt.imagebackend [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Feb 25 07:27:02 np0005629333 nova_compute[244014]: 2026-02-25 12:27:02.174 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Feb 25 07:27:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Feb 25 07:27:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Feb 25 07:27:02 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Feb 25 07:27:02 np0005629333 nova_compute[244014]: 2026-02-25 12:27:02.376 244018 DEBUG nova.storage.rbd_utils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] creating snapshot(e1535f1184504ef7b5b0e4adb03eb1de) on rbd image(0612ec20-e725-4516-a153-f1fe9be74f75_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 25 07:27:02 np0005629333 nova_compute[244014]: 2026-02-25 12:27:02.697 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022407.6968205, d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:27:02 np0005629333 nova_compute[244014]: 2026-02-25 12:27:02.698 244018 INFO nova.compute.manager [-] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:27:02 np0005629333 nova_compute[244014]: 2026-02-25 12:27:02.732 244018 DEBUG nova.compute.manager [None req-427c5340-36ca-4f1c-8de7-5c223952a13d - - - - - -] [instance: d0b80f5d-6ece-4cd0-9ff8-89b0e6a1bf84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Feb 25 07:27:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Feb 25 07:27:03 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Feb 25 07:27:03 np0005629333 nova_compute[244014]: 2026-02-25 12:27:03.376 244018 DEBUG nova.storage.rbd_utils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] cloning vms/0612ec20-e725-4516-a153-f1fe9be74f75_disk@e1535f1184504ef7b5b0e4adb03eb1de to images/bac7e237-2902-4866-9b21-65c1a5c9d2ec clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 25 07:27:03 np0005629333 nova_compute[244014]: 2026-02-25 12:27:03.493 244018 DEBUG nova.storage.rbd_utils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] flattening images/bac7e237-2902-4866-9b21-65c1a5c9d2ec flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb 25 07:27:03 np0005629333 nova_compute[244014]: 2026-02-25 12:27:03.706 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updating instance_info_cache with network_info: [{"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:27:03 np0005629333 nova_compute[244014]: 2026-02-25 12:27:03.719 244018 DEBUG nova.storage.rbd_utils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] removing snapshot(e1535f1184504ef7b5b0e4adb03eb1de) on rbd image(0612ec20-e725-4516-a153-f1fe9be74f75_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
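[editor's note] The rbd_utils lines above trace nova's cold snapshot path against RBD: snapshot the instance disk in the vms pool, clone the snapshot into the images pool, flatten the clone so it no longer depends on its parent, then delete the working snapshot. A librbd sketch of the same sequence using the pool and image names from the log; the protect/unprotect steps are an assumption here (clone parents generally must be protected), and nova's rbd_utils wraps equivalent calls:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        vms = cluster.open_ioctx('vms')
        images = cluster.open_ioctx('images')
        src_name = '0612ec20-e725-4516-a153-f1fe9be74f75_disk'
        snap = 'e1535f1184504ef7b5b0e4adb03eb1de'
        src = rbd.Image(vms, src_name)
        try:
            src.create_snap(snap)              # "creating snapshot(...)"
            src.protect_snap(snap)             # assumption: parent protected
            rbd.RBD().clone(vms, src_name, snap,
                            images, 'bac7e237-2902-4866-9b21-65c1a5c9d2ec')
            dst = rbd.Image(images, 'bac7e237-2902-4866-9b21-65c1a5c9d2ec')
            try:
                dst.flatten()                  # "flattening images/bac7e237-..."
            finally:
                dst.close()
            src.unprotect_snap(snap)
            src.remove_snap(snap)              # "removing snapshot(...)"
        finally:
            src.close()
    finally:
        cluster.shutdown()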
Feb 25 07:27:03 np0005629333 nova_compute[244014]: 2026-02-25 12:27:03.733 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:27:03 np0005629333 nova_compute[244014]: 2026-02-25 12:27:03.734 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 25 07:27:03 np0005629333 nova_compute[244014]: 2026-02-25 12:27:03.778 244018 DEBUG nova.compute.manager [req-d52d914f-514c-400d-9fe7-affcba1848e9 req-3085d2ef-bad0-4682-89f0-669053ea9120 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Received event network-vif-plugged-5951ca77-9d6a-41db-aaf8-3508d21701f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:27:03 np0005629333 nova_compute[244014]: 2026-02-25 12:27:03.778 244018 DEBUG oslo_concurrency.lockutils [req-d52d914f-514c-400d-9fe7-affcba1848e9 req-3085d2ef-bad0-4682-89f0-669053ea9120 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:03 np0005629333 nova_compute[244014]: 2026-02-25 12:27:03.779 244018 DEBUG oslo_concurrency.lockutils [req-d52d914f-514c-400d-9fe7-affcba1848e9 req-3085d2ef-bad0-4682-89f0-669053ea9120 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:03 np0005629333 nova_compute[244014]: 2026-02-25 12:27:03.779 244018 DEBUG oslo_concurrency.lockutils [req-d52d914f-514c-400d-9fe7-affcba1848e9 req-3085d2ef-bad0-4682-89f0-669053ea9120 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0612ec20-e725-4516-a153-f1fe9be74f75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:03 np0005629333 nova_compute[244014]: 2026-02-25 12:27:03.779 244018 DEBUG nova.compute.manager [req-d52d914f-514c-400d-9fe7-affcba1848e9 req-3085d2ef-bad0-4682-89f0-669053ea9120 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] No waiting events found dispatching network-vif-plugged-5951ca77-9d6a-41db-aaf8-3508d21701f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:27:03 np0005629333 nova_compute[244014]: 2026-02-25 12:27:03.780 244018 WARNING nova.compute.manager [req-d52d914f-514c-400d-9fe7-affcba1848e9 req-3085d2ef-bad0-4682-89f0-669053ea9120 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Received unexpected event network-vif-plugged-5951ca77-9d6a-41db-aaf8-3508d21701f3 for instance with vm_state paused and task_state shelving_image_uploading.#033[00m
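[editor's note] The acquire/release pair around pop_instance_event above is nova's per-instance event queue: a named lock "<uuid>-events" guards the lookup, and because nothing had registered a waiter for network-vif-plugged before the Neutron event arrived, dispatch falls through to the "Received unexpected event" warning. A simplified sketch of that pattern with oslo.concurrency named locks; the queue structure below is hypothetical, not nova's actual InstanceEvents class:

    import collections
    from oslo_concurrency import lockutils

    # uuid -> event tags a waiter has registered for (hypothetical store).
    _waiting = collections.defaultdict(list)

    def pop_instance_event(instance_uuid, tag):
        # Same "<uuid>-events" named-lock pattern as the log above.
        with lockutils.lock('%s-events' % instance_uuid):
            if tag in _waiting[instance_uuid]:
                _waiting[instance_uuid].remove(tag)
                return tag
        # "No waiting events found dispatching ..." -> the caller then
        # logs the "Received unexpected event" warning.
        return None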
Feb 25 07:27:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 438 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 7.3 MiB/s rd, 3.3 KiB/s wr, 300 op/s
Feb 25 07:27:03 np0005629333 nova_compute[244014]: 2026-02-25 12:27:03.831 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Feb 25 07:27:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Feb 25 07:27:04 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Feb 25 07:27:04 np0005629333 nova_compute[244014]: 2026-02-25 12:27:04.346 244018 DEBUG nova.storage.rbd_utils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] creating snapshot(snap) on rbd image(bac7e237-2902-4866-9b21-65c1a5c9d2ec) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 25 07:27:04 np0005629333 nova_compute[244014]: 2026-02-25 12:27:04.671 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:27:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Feb 25 07:27:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Feb 25 07:27:05 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Feb 25 07:27:05 np0005629333 nova_compute[244014]: 2026-02-25 12:27:05.664 244018 DEBUG oslo_concurrency.lockutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "160a4e3f-b197-4b82-a2ff-cebf79df47df" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:05 np0005629333 nova_compute[244014]: 2026-02-25 12:27:05.665 244018 DEBUG oslo_concurrency.lockutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "160a4e3f-b197-4b82-a2ff-cebf79df47df" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:05 np0005629333 nova_compute[244014]: 2026-02-25 12:27:05.687 244018 DEBUG nova.compute.manager [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:27:05 np0005629333 nova_compute[244014]: 2026-02-25 12:27:05.805 244018 DEBUG oslo_concurrency.lockutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:05 np0005629333 nova_compute[244014]: 2026-02-25 12:27:05.806 244018 DEBUG oslo_concurrency.lockutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:05 np0005629333 nova_compute[244014]: 2026-02-25 12:27:05.812 244018 DEBUG nova.virt.hardware [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:27:05 np0005629333 nova_compute[244014]: 2026-02-25 12:27:05.813 244018 INFO nova.compute.claims [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:27:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 438 MiB data, 735 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 2.7 KiB/s wr, 289 op/s
Feb 25 07:27:06 np0005629333 nova_compute[244014]: 2026-02-25 12:27:06.000 244018 DEBUG oslo_concurrency.processutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:27:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1604082407' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:27:06 np0005629333 nova_compute[244014]: 2026-02-25 12:27:06.615 244018 DEBUG oslo_concurrency.processutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
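[editor's note] The `ceph df --format=json` run above (0.615 s) is how the resource tracker samples cluster capacity before claiming disk for the new instance; the matching mon-side audit line appears just before it. A sketch of the same probe with oslo.concurrency, reading the cluster totals out of the JSON (the keys shown are standard `ceph df` output fields):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    # Cluster-wide totals feed the DISK_GB inventory reported to placement.
    print(stats['total_bytes'], stats['total_avail_bytes'])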
Feb 25 07:27:06 np0005629333 nova_compute[244014]: 2026-02-25 12:27:06.625 244018 DEBUG nova.compute.provider_tree [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:27:06 np0005629333 nova_compute[244014]: 2026-02-25 12:27:06.664 244018 DEBUG nova.scheduler.client.report [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:27:06 np0005629333 nova_compute[244014]: 2026-02-25 12:27:06.699 244018 DEBUG oslo_concurrency.lockutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.893s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
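[editor's note] The inventory dict above is what placement compares against: for each resource class, usable capacity is (total - reserved) * allocation_ratio. With the logged values that works out to 32 VCPU, 7167 MB of RAM, and 52.2 GB of disk:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2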
Feb 25 07:27:06 np0005629333 nova_compute[244014]: 2026-02-25 12:27:06.700 244018 DEBUG nova.compute.manager [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:27:06 np0005629333 nova_compute[244014]: 2026-02-25 12:27:06.763 244018 DEBUG nova.compute.manager [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:27:06 np0005629333 nova_compute[244014]: 2026-02-25 12:27:06.764 244018 DEBUG nova.network.neutron [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:27:06 np0005629333 nova_compute[244014]: 2026-02-25 12:27:06.787 244018 INFO nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:27:06 np0005629333 nova_compute[244014]: 2026-02-25 12:27:06.818 244018 DEBUG nova.compute.manager [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.119 244018 DEBUG nova.compute.manager [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.122 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.122 244018 INFO nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Creating image(s)#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.160 244018 DEBUG nova.storage.rbd_utils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] rbd image 160a4e3f-b197-4b82-a2ff-cebf79df47df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.195 244018 DEBUG nova.storage.rbd_utils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] rbd image 160a4e3f-b197-4b82-a2ff-cebf79df47df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.230 244018 DEBUG nova.storage.rbd_utils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] rbd image 160a4e3f-b197-4b82-a2ff-cebf79df47df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.235 244018 DEBUG oslo_concurrency.processutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.271 244018 INFO nova.virt.libvirt.driver [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Snapshot image upload complete#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.272 244018 DEBUG nova.compute.manager [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.278 244018 DEBUG oslo_concurrency.lockutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.279 244018 DEBUG oslo_concurrency.lockutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.313 244018 DEBUG nova.compute.manager [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.324 244018 DEBUG oslo_concurrency.processutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
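[editor's note] The qemu-img probe above runs under oslo_concurrency.prlimit so a malformed base image cannot exhaust memory or CPU: address space is capped at 1 GiB and CPU time at 30 s, matching the --as/--cpu flags in the logged command line. The same limits can be expressed directly through processutils:

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30))
    # stdout is the JSON image description nova parses for format and size.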
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.324 244018 DEBUG oslo_concurrency.lockutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.325 244018 DEBUG oslo_concurrency.lockutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.325 244018 DEBUG oslo_concurrency.lockutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.351 244018 DEBUG nova.storage.rbd_utils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] rbd image 160a4e3f-b197-4b82-a2ff-cebf79df47df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.356 244018 DEBUG oslo_concurrency.processutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 160a4e3f-b197-4b82-a2ff-cebf79df47df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.397 244018 INFO nova.compute.manager [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Shelve offloading#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.410 244018 INFO nova.virt.libvirt.driver [-] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Instance destroyed successfully.#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.410 244018 DEBUG nova.compute.manager [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.413 244018 DEBUG oslo_concurrency.lockutils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "refresh_cache-0612ec20-e725-4516-a153-f1fe9be74f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.413 244018 DEBUG oslo_concurrency.lockutils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquired lock "refresh_cache-0612ec20-e725-4516-a153-f1fe9be74f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.413 244018 DEBUG nova.network.neutron [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.545 244018 DEBUG oslo_concurrency.lockutils [None req-f1581a0f-1dcd-4400-89f8-a529a5e4b804 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.547 244018 DEBUG oslo_concurrency.lockutils [None req-f1581a0f-1dcd-4400-89f8-a529a5e4b804 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.547 244018 DEBUG nova.compute.manager [None req-f1581a0f-1dcd-4400-89f8-a529a5e4b804 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.555 244018 DEBUG nova.compute.manager [None req-f1581a0f-1dcd-4400-89f8-a529a5e4b804 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.557 244018 DEBUG nova.objects.instance [None req-f1581a0f-1dcd-4400-89f8-a529a5e4b804 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'flavor' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.582 244018 DEBUG nova.virt.libvirt.driver [None req-f1581a0f-1dcd-4400-89f8-a529a5e4b804 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
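[editor's note] The _clean_shutdown trace above asks the guest to power off via ACPI and then watches libvirt for the state change. A minimal libvirt-python sketch of that flow; the 60-second poll window below is an arbitrary choice for illustration, not nova's configured retry values:

    import time
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b')
    dom.shutdown()                       # ACPI shutdown request
    for _ in range(60):                  # poll for a graceful power-off
        state, _reason = dom.state()
        if state == libvirt.VIR_DOMAIN_SHUTOFF:
            break
        time.sleep(1)
    else:
        dom.destroy()                    # hard stop if the guest ignored ACPI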
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.627 244018 DEBUG oslo_concurrency.lockutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.628 244018 DEBUG oslo_concurrency.lockutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.637 244018 DEBUG nova.virt.hardware [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.638 244018 INFO nova.compute.claims [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.646 244018 DEBUG oslo_concurrency.processutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 160a4e3f-b197-4b82-a2ff-cebf79df47df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.683 244018 DEBUG nova.network.neutron [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.684 244018 DEBUG nova.compute.manager [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.731 244018 DEBUG nova.storage.rbd_utils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] resizing rbd image 160a4e3f-b197-4b82-a2ff-cebf79df47df_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
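[editor's note] Once the `rbd import` of the cached base image returns (logged a few lines below), the disk is resized to the flavor's 1 GiB root disk, hence the 1073741824 above. With the librbd bindings that is a single call; the cluster connection and the 'vms' Ioctx from the earlier librbd sketch are assumed:

    # 'vms' is the pool Ioctx opened as in the snapshot sketch above.
    image = rbd.Image(vms, '160a4e3f-b197-4b82-a2ff-cebf79df47df_disk')
    try:
        image.resize(1 * 1024 ** 3)      # root_gb=1 -> 1073741824 bytes
    finally:
        image.close()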
Feb 25 07:27:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 485 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 3.9 MiB/s wr, 257 op/s
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.833 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.844 244018 DEBUG nova.objects.instance [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lazy-loading 'migration_context' on Instance uuid 160a4e3f-b197-4b82-a2ff-cebf79df47df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.860 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.860 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Ensure instance console log exists: /var/lib/nova/instances/160a4e3f-b197-4b82-a2ff-cebf79df47df/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.861 244018 DEBUG oslo_concurrency.lockutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.861 244018 DEBUG oslo_concurrency.lockutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.861 244018 DEBUG oslo_concurrency.lockutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.863 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.867 244018 WARNING nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.871 244018 DEBUG nova.virt.libvirt.host [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.871 244018 DEBUG nova.virt.libvirt.host [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.875 244018 DEBUG nova.virt.libvirt.host [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.875 244018 DEBUG nova.virt.libvirt.host [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.876 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.876 244018 DEBUG nova.virt.hardware [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.876 244018 DEBUG nova.virt.hardware [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.877 244018 DEBUG nova.virt.hardware [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.877 244018 DEBUG nova.virt.hardware [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.877 244018 DEBUG nova.virt.hardware [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.877 244018 DEBUG nova.virt.hardware [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.878 244018 DEBUG nova.virt.hardware [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.878 244018 DEBUG nova.virt.hardware [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.878 244018 DEBUG nova.virt.hardware [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.878 244018 DEBUG nova.virt.hardware [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.879 244018 DEBUG nova.virt.hardware [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
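[editor's note] The hardware.py lines above walk nova's CPU topology search for the 1-vCPU m1.nano flavor: with no flavor or image constraints the limits default to 65536 sockets/cores/threads, and the only factorization of one vCPU is 1:1:1. A simplified sketch of that enumeration (nova's version additionally sorts by the preferred topology):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Every (sockets, cores, threads) triple whose product is vcpus.
        tops = []
        for s in range(1, min(max_sockets, vcpus) + 1):
            for c in range(1, min(max_cores, vcpus) + 1):
                if vcpus % (s * c) == 0:
                    t = vcpus // (s * c)
                    if t <= max_threads:
                        tops.append((s, c, t))
        return tops

    print(possible_topologies(1))   # [(1, 1, 1)] -> "Got 1 possible topologies"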
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.881 244018 DEBUG oslo_concurrency.processutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.915 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:27:07 np0005629333 nova_compute[244014]: 2026-02-25 12:27:07.980 244018 DEBUG oslo_concurrency.processutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:27:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/725141415' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:27:08 np0005629333 nova_compute[244014]: 2026-02-25 12:27:08.492 244018 DEBUG oslo_concurrency.processutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:27:08 np0005629333 nova_compute[244014]: 2026-02-25 12:27:08.525 244018 DEBUG nova.storage.rbd_utils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] rbd image 160a4e3f-b197-4b82-a2ff-cebf79df47df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:27:08 np0005629333 nova_compute[244014]: 2026-02-25 12:27:08.530 244018 DEBUG oslo_concurrency.processutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:27:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4077633717' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:27:08 np0005629333 nova_compute[244014]: 2026-02-25 12:27:08.582 244018 DEBUG oslo_concurrency.processutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:27:08 np0005629333 nova_compute[244014]: 2026-02-25 12:27:08.589 244018 DEBUG nova.compute.provider_tree [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:27:08 np0005629333 nova_compute[244014]: 2026-02-25 12:27:08.607 244018 DEBUG nova.scheduler.client.report [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
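The inventory record above is what placement schedules against; usable capacity per resource class is (total - reserved) * allocation_ratio. Checking the three classes from the logged data:

    # Effective capacity implied by the inventory in the previous line.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, v in inventory.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2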
Feb 25 07:27:08 np0005629333 nova_compute[244014]: 2026-02-25 12:27:08.628 244018 DEBUG oslo_concurrency.lockutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:08 np0005629333 nova_compute[244014]: 2026-02-25 12:27:08.629 244018 DEBUG nova.compute.manager [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:27:08 np0005629333 nova_compute[244014]: 2026-02-25 12:27:08.740 244018 DEBUG nova.compute.manager [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:27:08 np0005629333 nova_compute[244014]: 2026-02-25 12:27:08.740 244018 DEBUG nova.network.neutron [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:27:08 np0005629333 nova_compute[244014]: 2026-02-25 12:27:08.833 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:08 np0005629333 nova_compute[244014]: 2026-02-25 12:27:08.850 244018 INFO nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:27:08 np0005629333 nova_compute[244014]: 2026-02-25 12:27:08.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:27:08 np0005629333 nova_compute[244014]: 2026-02-25 12:27:08.936 244018 DEBUG nova.compute.manager [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:27:08 np0005629333 nova_compute[244014]: 2026-02-25 12:27:08.973 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:08 np0005629333 nova_compute[244014]: 2026-02-25 12:27:08.974 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:08 np0005629333 nova_compute[244014]: 2026-02-25 12:27:08.974 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:08 np0005629333 nova_compute[244014]: 2026-02-25 12:27:08.975 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:27:08 np0005629333 nova_compute[244014]: 2026-02-25 12:27:08.975 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:27:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3139178603' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.059 244018 DEBUG oslo_concurrency.processutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.063 244018 DEBUG nova.objects.instance [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lazy-loading 'pci_devices' on Instance uuid 160a4e3f-b197-4b82-a2ff-cebf79df47df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.113 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:27:09 np0005629333 nova_compute[244014]:  <uuid>160a4e3f-b197-4b82-a2ff-cebf79df47df</uuid>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:  <name>instance-0000003a</name>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      <nova:name>tempest-ListImageFiltersTestJSON-server-1856460464</nova:name>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:27:07</nova:creationTime>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:27:09 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:        <nova:user uuid="29db18fbf1f1410384934731b9c53cb5">tempest-ListImageFiltersTestJSON-1391082587-project-member</nova:user>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:        <nova:project uuid="ddbe63406adc44468e143b66e5b1c207">tempest-ListImageFiltersTestJSON-1391082587</nova:project>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      <nova:ports/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      <entry name="serial">160a4e3f-b197-4b82-a2ff-cebf79df47df</entry>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      <entry name="uuid">160a4e3f-b197-4b82-a2ff-cebf79df47df</entry>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/160a4e3f-b197-4b82-a2ff-cebf79df47df_disk">
Feb 25 07:27:09 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:27:09 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/160a4e3f-b197-4b82-a2ff-cebf79df47df_disk.config">
Feb 25 07:27:09 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:27:09 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/160a4e3f-b197-4b82-a2ff-cebf79df47df/console.log" append="off"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:27:09 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:27:09 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:27:09 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:27:09 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
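The domain definition dumped above is plain libvirt XML and can be inspected mechanically. A short sketch that lists each disk's target device and RBD source, assuming the dump was saved to guest.xml (an illustrative file name):

    import xml.etree.ElementTree as ET

    root = ET.parse("guest.xml").getroot()
    for disk in root.findall("./devices/disk"):
        target = disk.find("target").get("dev")
        source = disk.find("source")
        print(target, source.get("protocol"), source.get("name"))
    # vda rbd vms/160a4e3f-b197-4b82-a2ff-cebf79df47df_disk
    # sda rbd vms/160a4e3f-b197-4b82-a2ff-cebf79df47df_disk.config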
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.115 244018 DEBUG nova.compute.manager [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.116 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.117 244018 INFO nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Creating image(s)#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.137 244018 DEBUG nova.storage.rbd_utils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] rbd image bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.163 244018 DEBUG nova.storage.rbd_utils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] rbd image bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.191 244018 DEBUG nova.storage.rbd_utils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] rbd image bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.193 244018 DEBUG oslo_concurrency.processutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.281 244018 DEBUG oslo_concurrency.processutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
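The qemu-img probe above runs under oslo_concurrency.prlimit, which caps address space at 1073741824 bytes (1 GiB, --as) and CPU time at 30 seconds (--cpu) so a malformed image cannot wedge the compute agent. Roughly the same effect with the stdlib resource module; a sketch, not the oslo implementation:

    import json
    import resource
    import subprocess

    def _limits():
        # Mirror --as=1073741824 and --cpu=30 from the logged command.
        resource.setrlimit(resource.RLIMIT_AS, (1 << 30, 1 << 30))
        resource.setrlimit(resource.RLIMIT_CPU, (30, 30))

    out = subprocess.check_output(
        ["qemu-img", "info", "--force-share", "--output=json",
         "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"],
        env={"LC_ALL": "C", "LANG": "C"}, preexec_fn=_limits)
    print(json.loads(out)["virtual-size"])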
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.281 244018 DEBUG oslo_concurrency.lockutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.282 244018 DEBUG oslo_concurrency.lockutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.283 244018 DEBUG oslo_concurrency.lockutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.302 244018 DEBUG nova.storage.rbd_utils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] rbd image bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.306 244018 DEBUG oslo_concurrency.processutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.336 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.337 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.338 244018 INFO nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Using config drive#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.357 244018 DEBUG nova.storage.rbd_utils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] rbd image 160a4e3f-b197-4b82-a2ff-cebf79df47df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:27:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:27:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3933452317' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.575 244018 DEBUG oslo_concurrency.processutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.604 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.639 244018 DEBUG nova.storage.rbd_utils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] resizing rbd image bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
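Together, the import at 12:27:09.306 and the resize here provision the guest's root disk from the local image cache: push the cached base file into the vms pool, then grow it to the flavor's 1 GB root disk (1073741824 bytes = 1 * 1024**3). Nova performs the resize through its rbd_utils bindings; the equivalent CLI calls, as a sketch:

    import subprocess

    base = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
    image = "bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk"
    ceph = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    subprocess.check_call(["rbd", "import", "--pool", "vms", base, image,
                           "--image-format=2"] + ceph)
    # 1 GiB, matching "resizing rbd image ... to 1073741824" above.
    subprocess.check_call(["rbd", "resize", "--pool", "vms", image,
                           "--size", "1G"] + ceph)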
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.673 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.684 244018 INFO nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Creating config drive at /var/lib/nova/instances/160a4e3f-b197-4b82-a2ff-cebf79df47df/disk.config#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.688 244018 DEBUG oslo_concurrency.processutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/160a4e3f-b197-4b82-a2ff-cebf79df47df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1s80scfm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.779 244018 DEBUG nova.objects.instance [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lazy-loading 'migration_context' on Instance uuid bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.793 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.794 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Ensure instance console log exists: /var/lib/nova/instances/bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.795 244018 DEBUG oslo_concurrency.lockutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.795 244018 DEBUG oslo_concurrency.lockutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.795 244018 DEBUG oslo_concurrency.lockutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.798 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.798 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.805 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.806 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:27:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 485 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.3 MiB/s wr, 135 op/s
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.836 244018 DEBUG oslo_concurrency.processutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/160a4e3f-b197-4b82-a2ff-cebf79df47df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1s80scfm" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.861 244018 DEBUG nova.storage.rbd_utils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] rbd image 160a4e3f-b197-4b82-a2ff-cebf79df47df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.865 244018 DEBUG oslo_concurrency.processutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/160a4e3f-b197-4b82-a2ff-cebf79df47df/disk.config 160a4e3f-b197-4b82-a2ff-cebf79df47df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.894 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.895 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.899 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.900 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.903 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000037 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:27:09 np0005629333 nova_compute[244014]: 2026-02-25 12:27:09.903 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000037 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.051 244018 DEBUG oslo_concurrency.processutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/160a4e3f-b197-4b82-a2ff-cebf79df47df/disk.config 160a4e3f-b197-4b82-a2ff-cebf79df47df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.051 244018 INFO nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Deleting local config drive /var/lib/nova/instances/160a4e3f-b197-4b82-a2ff-cebf79df47df/disk.config because it was imported into RBD.#033[00m
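Lines 12:27:09.688 through 12:27:10.051 are the whole config-drive lifecycle on a Ceph-backed host: build an ISO9660 volume labelled config-2 from a temporary metadata directory, import it into the vms pool as <uuid>_disk.config, then delete the local file. A condensed sketch with the logged flags; the paths and the <uuid> placeholder are illustrative:

    import os
    import subprocess

    tmp = "/tmp/metadata_dir"                            # assumed temp dir
    iso = "/var/lib/nova/instances/<uuid>/disk.config"   # placeholder path
    subprocess.check_call(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-publisher",
         "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
         "-quiet", "-J", "-r", "-V", "config-2", tmp])
    subprocess.check_call(
        ["rbd", "import", "--pool", "vms", iso, "<uuid>_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    os.unlink(iso)  # "Deleting local config drive ... imported into RBD"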
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.065 244018 DEBUG nova.network.neutron [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.066 244018 DEBUG nova.compute.manager [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.068 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.084 244018 WARNING nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.089 244018 DEBUG nova.virt.libvirt.host [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.089 244018 DEBUG nova.virt.libvirt.host [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.095 244018 DEBUG nova.virt.libvirt.host [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.095 244018 DEBUG nova.virt.libvirt.host [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.096 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.096 244018 DEBUG nova.virt.hardware [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.097 244018 DEBUG nova.virt.hardware [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.097 244018 DEBUG nova.virt.hardware [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.097 244018 DEBUG nova.virt.hardware [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.097 244018 DEBUG nova.virt.hardware [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.097 244018 DEBUG nova.virt.hardware [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.098 244018 DEBUG nova.virt.hardware [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.098 244018 DEBUG nova.virt.hardware [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.098 244018 DEBUG nova.virt.hardware [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.098 244018 DEBUG nova.virt.hardware [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.099 244018 DEBUG nova.virt.hardware [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.102 244018 DEBUG oslo_concurrency.processutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:10 np0005629333 systemd-machined[210048]: New machine qemu-64-instance-0000003a.
Feb 25 07:27:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:27:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Feb 25 07:27:10 np0005629333 systemd[1]: Started Virtual Machine qemu-64-instance-0000003a.
Feb 25 07:27:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Feb 25 07:27:10 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.196 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.197 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3387MB free_disk=59.827746075578034GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
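The pci_devices payload in the resource view above is a JSON array of passthrough-candidate devices. When reading these logs it can help to collapse that array to vendor:product counts; a throwaway sketch, assuming the array was saved to pci_devices.json:

    import collections
    import json

    with open("pci_devices.json") as f:
        devices = json.load(f)
    counts = collections.Counter(
        (d["vendor_id"], d["product_id"]) for d in devices)
    for (vendor, product), n in sorted(counts.items()):
        print(f"{vendor}:{product} x{n}")
    # e.g. 1af4:1000 x2 (virtio-net), 8086:7010 x1 (PIIX3 IDE), ...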
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.197 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.198 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.239 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:10.240 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:27:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:10.240 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.305 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance b8086e43-4c45-422f-a3b5-fa665c256b30 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.305 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.306 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 43b8959e-9cf0-42ca-aa1f-8a380321c971 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.306 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 0612ec20-e725-4516-a153-f1fe9be74f75 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.306 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 160a4e3f-b197-4b82-a2ff-cebf79df47df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.306 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.307 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.307 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
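The used_ram/used_disk/used_vcpus figures in the final resource view are simple sums over the six tracked instances, plus the 512 MB host memory reservation visible in the inventory logged at 12:27:11 below. A back-of-the-envelope check in Python, assuming the per-instance allocations shown above:

    # Sanity check of the "Final resource view" figures above: six
    # instances, each allocated 1 DISK_GB, 128 MEMORY_MB, 1 VCPU.
    instances = [{'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}] * 6
    reserved_host_memory_mb = 512  # from the inventory log line below

    used_ram = reserved_host_memory_mb + sum(i['MEMORY_MB'] for i in instances)
    used_disk = sum(i['DISK_GB'] for i in instances)
    used_vcpus = sum(i['VCPU'] for i in instances)

    assert (used_ram, used_disk, used_vcpus) == (1280, 6, 6)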
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.449 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.554 244018 DEBUG nova.network.neutron [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Updating instance_info_cache with network_info: [{"id": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "address": "fa:16:3e:a7:90:29", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5951ca77-9d", "ovs_interfaceid": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.580 244018 DEBUG oslo_concurrency.lockutils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Releasing lock "refresh_cache-0612ec20-e725-4516-a153-f1fe9be74f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:27:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:27:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3026256746' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.633 244018 DEBUG oslo_concurrency.processutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.671 244018 DEBUG nova.storage.rbd_utils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] rbd image bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.684 244018 DEBUG oslo_concurrency.processutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.710 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022430.6713681, 160a4e3f-b197-4b82-a2ff-cebf79df47df => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.711 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.715 244018 DEBUG nova.compute.manager [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.715 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.719 244018 INFO nova.virt.libvirt.driver [-] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Instance spawned successfully.#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.719 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.744 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.749 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.749 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.749 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.750 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.750 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.750 244018 DEBUG nova.virt.libvirt.driver [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.754 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.792 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
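The sync above compares the DB-recorded power_state (0) against what libvirt reports (1); in nova these are small integer constants (nova.compute.power_state: NOSTATE=0, RUNNING=1). A sketch of the guard behind the "pending task ... Skip." message, with the constant values assumed from those definitions:

    # Sketch of the power-state comparison behind "Synchronizing
    # instance power state"; constants mirror nova.compute.power_state.
    NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0, 1, 3, 4

    db_power_state = NOSTATE   # what the DB recorded (0 in the log)
    vm_power_state = RUNNING   # what the hypervisor reports (1 in the log)
    task_state = 'spawning'

    if task_state is not None:
        # While a task is in flight, the sync defers to the task
        # rather than rewriting the DB power state.
        print('pending task (%s). Skip.' % task_state)
    elif db_power_state != vm_power_state:
        print('would update DB power_state to %d' % vm_power_state)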
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.792 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022430.6724477, 160a4e3f-b197-4b82-a2ff-cebf79df47df => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.792 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] VM Started (Lifecycle Event)#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.820 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.823 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.833 244018 INFO nova.compute.manager [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Took 3.71 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.833 244018 DEBUG nova.compute.manager [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.843 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.886 244018 INFO nova.compute.manager [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Took 5.11 seconds to build instance.#033[00m
Feb 25 07:27:10 np0005629333 nova_compute[244014]: 2026-02-25 12:27:10.914 244018 DEBUG oslo_concurrency.lockutils [None req-eb86428b-bd2d-44e2-bf16-4179575d4068 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "160a4e3f-b197-4b82-a2ff-cebf79df47df" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.249s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:27:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1415389268' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:27:11 np0005629333 nova_compute[244014]: 2026-02-25 12:27:11.010 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
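The paired "Running cmd (subprocess)" / "CMD ... returned: 0" lines come from oslo.concurrency's processutils.execute, which nova's RBD code uses to shell out to the ceph CLI. A minimal sketch of the "ceph df" call above, assuming a reachable cluster and the client.openstack credentials from the audit lines:

    # Sketch of the "ceph df" call the resource tracker makes above,
    # via oslo.concurrency (assumes a reachable cluster and the
    # client.openstack keyring referenced in /etc/ceph/ceph.conf).
    import json

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

    stats = json.loads(out)
    # Cluster-wide totals feed the DISK_GB inventory reported to
    # placement (total/avail bytes rounded down to GiB).
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])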
Feb 25 07:27:11 np0005629333 nova_compute[244014]: 2026-02-25 12:27:11.015 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:27:11 np0005629333 nova_compute[244014]: 2026-02-25 12:27:11.033 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
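Placement derives each resource class's schedulable capacity from that inventory as (total - reserved) x allocation_ratio. Applying the formula to the logged values shows the headroom the scheduler actually sees; a short check:

    # Effective capacity implied by the inventory logged above:
    # capacity = (total - reserved) * allocation_ratio.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2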
Feb 25 07:27:11 np0005629333 nova_compute[244014]: 2026-02-25 12:27:11.058 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 07:27:11 np0005629333 nova_compute[244014]: 2026-02-25 12:27:11.059 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:27:11 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1226214579' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:27:11 np0005629333 nova_compute[244014]: 2026-02-25 12:27:11.173 244018 DEBUG oslo_concurrency.processutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:27:11 np0005629333 nova_compute[244014]: 2026-02-25 12:27:11.175 244018 DEBUG nova.objects.instance [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lazy-loading 'pci_devices' on Instance uuid bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:11 np0005629333 nova_compute[244014]: 2026-02-25 12:27:11.193 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:27:11 np0005629333 nova_compute[244014]:  <uuid>bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36</uuid>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:  <name>instance-0000003b</name>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      <nova:name>tempest-ListImageFiltersTestJSON-server-428345901</nova:name>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:27:10</nova:creationTime>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:27:11 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:        <nova:user uuid="29db18fbf1f1410384934731b9c53cb5">tempest-ListImageFiltersTestJSON-1391082587-project-member</nova:user>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:        <nova:project uuid="ddbe63406adc44468e143b66e5b1c207">tempest-ListImageFiltersTestJSON-1391082587</nova:project>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      <nova:ports/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      <entry name="serial">bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36</entry>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      <entry name="uuid">bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36</entry>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk">
Feb 25 07:27:11 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:27:11 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk.config">
Feb 25 07:27:11 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:27:11 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36/console.log" append="off"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:27:11 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:27:11 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:27:11 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:27:11 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
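The domain XML above is what nova hands to libvirt to define the guest; both disks are network disks served over RBD from the monitor at 192.168.122.100:6789. A small stdlib sketch that pulls the RBD sources out of XML shaped like this dump (trimmed here to the two disk elements):

    # Extract the RBD sources from domain XML like the dump above
    # (trimmed to the <devices> disks; stdlib only).
    import xml.etree.ElementTree as ET

    xml = """
    <domain type="kvm">
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk"/>
          <target dev="vda" bus="virtio"/>
        </disk>
        <disk type="network" device="cdrom">
          <source protocol="rbd" name="vms/bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk.config"/>
          <target dev="sda" bus="sata"/>
        </disk>
      </devices>
    </domain>
    """

    root = ET.fromstring(xml)
    for disk in root.iter('disk'):
        src = disk.find('source')
        tgt = disk.find('target')
        print(tgt.get('dev'), '->', src.get('protocol'), src.get('name'))
    # vda -> rbd vms/bd7e6bb5-..._disk
    # sda -> rbd vms/bd7e6bb5-..._disk.config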
Feb 25 07:27:11 np0005629333 nova_compute[244014]: 2026-02-25 12:27:11.250 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:27:11 np0005629333 nova_compute[244014]: 2026-02-25 12:27:11.250 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:27:11 np0005629333 nova_compute[244014]: 2026-02-25 12:27:11.251 244018 INFO nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Using config drive#033[00m
Feb 25 07:27:11 np0005629333 nova_compute[244014]: 2026-02-25 12:27:11.290 244018 DEBUG nova.storage.rbd_utils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] rbd image bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:27:11 np0005629333 nova_compute[244014]: 2026-02-25 12:27:11.748 244018 INFO nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Creating config drive at /var/lib/nova/instances/bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36/disk.config#033[00m
Feb 25 07:27:11 np0005629333 nova_compute[244014]: 2026-02-25 12:27:11.758 244018 DEBUG oslo_concurrency.processutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp95i13reb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 516 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.8 MiB/s wr, 180 op/s
Feb 25 07:27:11 np0005629333 nova_compute[244014]: 2026-02-25 12:27:11.902 244018 DEBUG oslo_concurrency.processutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp95i13reb" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
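The config drive itself is just an ISO 9660 image built from a temporary directory of metadata files, so the mkisofs invocation above maps directly onto a subprocess call. A sketch with the same flags, assuming /usr/bin/mkisofs is installed; the input and output paths here are illustrative stand-ins for nova's tempdir and instance directory:

    # Rebuild the config-drive ISO command logged above.
    # 'staging_dir' stands in for the metadata tree nova lays out in
    # a tempdir (/tmp/tmp95i13reb in the log).
    import subprocess

    iso_path = '/tmp/disk.config'       # illustrative output path
    staging_dir = '/tmp/config_drive'   # illustrative input tree

    subprocess.run(
        ['/usr/bin/mkisofs', '-o', iso_path,
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute 27.5.2',  # logged unquoted above
         '-quiet', '-J', '-r',
         '-V', 'config-2',   # the volume label cloud-init looks for
         staging_dir],
        check=True)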
Feb 25 07:27:11 np0005629333 nova_compute[244014]: 2026-02-25 12:27:11.926 244018 DEBUG nova.storage.rbd_utils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] rbd image bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:27:11 np0005629333 nova_compute[244014]: 2026-02-25 12:27:11.930 244018 DEBUG oslo_concurrency.processutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36/disk.config bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.055 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.056 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.056 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.056 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
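The burst of "Running periodic task ComputeManager._*" lines is oslo.service's periodic-task machinery: methods decorated with @periodic_task are collected onto the manager class and dispatched on a shared timer. A minimal sketch of the pattern, assuming oslo.service and oslo.config; the task body is illustrative:

    # Minimal sketch of the oslo.service pattern that produces the
    # "Running periodic task ..." lines above.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=10)
        def _poll_something(self, context):
            # Illustrative body; nova's tasks poll rescued instances,
            # unconfirmed resizes, volume usage, and so on.
            pass

        def periodic_tasks(self, context):
            return self.run_periodic_tasks(context)

    mgr = Manager()
    mgr.periodic_tasks(context=None)  # dispatches any tasks now due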
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.089 244018 DEBUG oslo_concurrency.processutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36/disk.config bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.089 244018 INFO nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Deleting local config drive /var/lib/nova/instances/bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36/disk.config because it was imported into RBD.#033[00m
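Once built, the ISO is imported into the vms pool and the local copy removed, as the two lines above show. The equivalent steps as a subprocess sketch, reusing the pool, image name, and credentials from the logged command:

    # Sketch of the "rbd import" + local-cleanup step logged above.
    import os
    import subprocess

    local_iso = ('/var/lib/nova/instances/'
                 'bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36/disk.config')
    image_name = 'bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk.config'

    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', local_iso, image_name,
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)

    # "Deleting local config drive ... because it was imported into RBD."
    os.unlink(local_iso)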
Feb 25 07:27:12 np0005629333 systemd-machined[210048]: New machine qemu-65-instance-0000003b.
Feb 25 07:27:12 np0005629333 systemd[1]: Started Virtual Machine qemu-65-instance-0000003b.
Feb 25 07:27:12 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:12Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ba:87:f1 10.100.0.11
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.607 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022432.606436, bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.608 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.611 244018 DEBUG nova.compute.manager [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.611 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.616 244018 INFO nova.virt.libvirt.driver [-] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Instance spawned successfully.#033[00m
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.616 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.649 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.655 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.656 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.657 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.657 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.658 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.658 244018 DEBUG nova.virt.libvirt.driver [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.667 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.703 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.703 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022432.6116762, bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.703 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] VM Started (Lifecycle Event)#033[00m
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.737 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.741 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.747 244018 INFO nova.compute.manager [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Took 3.63 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.747 244018 DEBUG nova.compute.manager [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.762 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.823 244018 INFO nova.compute.manager [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Took 5.42 seconds to build instance.#033[00m
Feb 25 07:27:12 np0005629333 nova_compute[244014]: 2026-02-25 12:27:12.845 244018 DEBUG oslo_concurrency.lockutils [None req-12b41efb-4817-4b7d-8f1f-5a331607da7e 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 577 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 7.5 MiB/s wr, 313 op/s
Feb 25 07:27:13 np0005629333 nova_compute[244014]: 2026-02-25 12:27:13.839 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:13 np0005629333 nova_compute[244014]: 2026-02-25 12:27:13.975 244018 INFO nova.virt.libvirt.driver [-] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Instance destroyed successfully.#033[00m
Feb 25 07:27:13 np0005629333 nova_compute[244014]: 2026-02-25 12:27:13.976 244018 DEBUG nova.objects.instance [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'resources' on Instance uuid 0612ec20-e725-4516-a153-f1fe9be74f75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:14 np0005629333 nova_compute[244014]: 2026-02-25 12:27:14.675 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:15 np0005629333 nova_compute[244014]: 2026-02-25 12:27:15.024 244018 DEBUG nova.virt.libvirt.vif [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-526421737',display_name='tempest-ServerActionsTestOtherB-server-526421737',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-526421737',id=57,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c85a955249394f0faf7c890f5cd0df32',ramdisk_id='',reservation_id='r-l9xxjiai',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1539976047',owner_user_name='tempest-ServerActionsTestOtherB-1539976047-project-member',shelved_at='2026-02-25T12:27:07.272339',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='bac7e237-2902-4866-9b21-65c1a5c9d2ec'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:27:02Z,user_data=None,user_id='b774fd0c04fc403d9ddb205f1e6abbc5',uuid=0612ec20-e725-4516-a153-f1fe9be74f75,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "address": "fa:16:3e:a7:90:29", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5951ca77-9d", "ovs_interfaceid": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:27:15 np0005629333 nova_compute[244014]: 2026-02-25 12:27:15.025 244018 DEBUG nova.network.os_vif_util [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converting VIF {"id": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "address": "fa:16:3e:a7:90:29", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5951ca77-9d", "ovs_interfaceid": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:27:15 np0005629333 nova_compute[244014]: 2026-02-25 12:27:15.026 244018 DEBUG nova.network.os_vif_util [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:90:29,bridge_name='br-int',has_traffic_filtering=True,id=5951ca77-9d6a-41db-aaf8-3508d21701f3,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5951ca77-9d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:27:15 np0005629333 nova_compute[244014]: 2026-02-25 12:27:15.027 244018 DEBUG os_vif [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:90:29,bridge_name='br-int',has_traffic_filtering=True,id=5951ca77-9d6a-41db-aaf8-3508d21701f3,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5951ca77-9d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:27:15 np0005629333 nova_compute[244014]: 2026-02-25 12:27:15.030 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:15 np0005629333 nova_compute[244014]: 2026-02-25 12:27:15.030 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5951ca77-9d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:15 np0005629333 nova_compute[244014]: 2026-02-25 12:27:15.033 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:15 np0005629333 nova_compute[244014]: 2026-02-25 12:27:15.036 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:27:15 np0005629333 nova_compute[244014]: 2026-02-25 12:27:15.039 244018 INFO os_vif [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:90:29,bridge_name='br-int',has_traffic_filtering=True,id=5951ca77-9d6a-41db-aaf8-3508d21701f3,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5951ca77-9d')#033[00m
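The unplug above commits a single DelPortCommand through ovsdbapp's transaction API against the local ovsdb-server. A sketch of the same operation via ovsdbapp's Open vSwitch schema helper, assuming a local server on the standard unix socket; the socket path is an assumption:

    # Sketch of the DelPortCommand transaction logged above, via
    # ovsdbapp's Open vSwitch schema API (assumes a local ovsdb-server
    # at the default socket path).
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'   # assumed socket path
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Same semantics as the logged command: remove the tap port from
    # br-int, silently succeeding if it is already gone (if_exists=True).
    api.del_port('tap5951ca77-9d', bridge='br-int', if_exists=True).execute(
        check_error=True)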
Feb 25 07:27:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:27:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:15.242 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:27:15 np0005629333 nova_compute[244014]: 2026-02-25 12:27:15.401 244018 INFO nova.virt.libvirt.driver [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Deleting instance files /var/lib/nova/instances/0612ec20-e725-4516-a153-f1fe9be74f75_del
Feb 25 07:27:15 np0005629333 nova_compute[244014]: 2026-02-25 12:27:15.402 244018 INFO nova.virt.libvirt.driver [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Deletion of /var/lib/nova/instances/0612ec20-e725-4516-a153-f1fe9be74f75_del complete
Feb 25 07:27:15 np0005629333 nova_compute[244014]: 2026-02-25 12:27:15.526 244018 INFO nova.scheduler.client.report [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Deleted allocations for instance 0612ec20-e725-4516-a153-f1fe9be74f75
Feb 25 07:27:15 np0005629333 nova_compute[244014]: 2026-02-25 12:27:15.584 244018 DEBUG oslo_concurrency.lockutils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:27:15 np0005629333 nova_compute[244014]: 2026-02-25 12:27:15.585 244018 DEBUG oslo_concurrency.lockutils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:27:15 np0005629333 nova_compute[244014]: 2026-02-25 12:27:15.725 244018 DEBUG oslo_concurrency.processutils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:27:15 np0005629333 nova_compute[244014]: 2026-02-25 12:27:15.791 244018 DEBUG nova.compute.manager [req-a41b956d-a99c-4d91-919e-01122135884f req-81c02a2a-1409-4c7c-a2c9-8496ddbf0e8b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Received event network-changed-5951ca77-9d6a-41db-aaf8-3508d21701f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:27:15 np0005629333 nova_compute[244014]: 2026-02-25 12:27:15.792 244018 DEBUG nova.compute.manager [req-a41b956d-a99c-4d91-919e-01122135884f req-81c02a2a-1409-4c7c-a2c9-8496ddbf0e8b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Refreshing instance network info cache due to event network-changed-5951ca77-9d6a-41db-aaf8-3508d21701f3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:27:15 np0005629333 nova_compute[244014]: 2026-02-25 12:27:15.793 244018 DEBUG oslo_concurrency.lockutils [req-a41b956d-a99c-4d91-919e-01122135884f req-81c02a2a-1409-4c7c-a2c9-8496ddbf0e8b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-0612ec20-e725-4516-a153-f1fe9be74f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:27:15 np0005629333 nova_compute[244014]: 2026-02-25 12:27:15.793 244018 DEBUG oslo_concurrency.lockutils [req-a41b956d-a99c-4d91-919e-01122135884f req-81c02a2a-1409-4c7c-a2c9-8496ddbf0e8b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-0612ec20-e725-4516-a153-f1fe9be74f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:27:15 np0005629333 nova_compute[244014]: 2026-02-25 12:27:15.794 244018 DEBUG nova.network.neutron [req-a41b956d-a99c-4d91-919e-01122135884f req-81c02a2a-1409-4c7c-a2c9-8496ddbf0e8b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Refreshing network info cache for port 5951ca77-9d6a-41db-aaf8-3508d21701f3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:27:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 577 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 6.4 MiB/s wr, 266 op/s
Feb 25 07:27:15 np0005629333 nova_compute[244014]: 2026-02-25 12:27:15.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:27:15 np0005629333 nova_compute[244014]: 2026-02-25 12:27:15.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 07:27:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:27:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3956758993' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:27:16 np0005629333 nova_compute[244014]: 2026-02-25 12:27:16.260 244018 DEBUG oslo_concurrency.processutils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:27:16 np0005629333 nova_compute[244014]: 2026-02-25 12:27:16.266 244018 DEBUG nova.compute.provider_tree [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:27:16 np0005629333 nova_compute[244014]: 2026-02-25 12:27:16.292 244018 DEBUG nova.scheduler.client.report [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:27:16 np0005629333 nova_compute[244014]: 2026-02-25 12:27:16.317 244018 DEBUG oslo_concurrency.lockutils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:27:16 np0005629333 nova_compute[244014]: 2026-02-25 12:27:16.332 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022421.332212, 0612ec20-e725-4516-a153-f1fe9be74f75 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:27:16 np0005629333 nova_compute[244014]: 2026-02-25 12:27:16.333 244018 INFO nova.compute.manager [-] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] VM Stopped (Lifecycle Event)
Feb 25 07:27:16 np0005629333 nova_compute[244014]: 2026-02-25 12:27:16.371 244018 DEBUG nova.compute.manager [None req-583d0699-2ba5-4a25-97d9-21bf1daa1fb0 - - - - - -] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:27:16 np0005629333 nova_compute[244014]: 2026-02-25 12:27:16.383 244018 DEBUG oslo_concurrency.lockutils [None req-c9db98d5-a8c6-471a-9712-72dcb5105be7 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "0612ec20-e725-4516-a153-f1fe9be74f75" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 15.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:27:16 np0005629333 nova_compute[244014]: 2026-02-25 12:27:16.543 244018 DEBUG nova.compute.manager [None req-a506a779-80b8-4ab6-ba80-08f04d4a6932 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:27:16 np0005629333 nova_compute[244014]: 2026-02-25 12:27:16.597 244018 INFO nova.compute.manager [None req-a506a779-80b8-4ab6-ba80-08f04d4a6932 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] instance snapshotting
Feb 25 07:27:16 np0005629333 nova_compute[244014]: 2026-02-25 12:27:16.869 244018 INFO nova.virt.libvirt.driver [None req-a506a779-80b8-4ab6-ba80-08f04d4a6932 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Beginning live snapshot process
Feb 25 07:27:17 np0005629333 nova_compute[244014]: 2026-02-25 12:27:17.141 244018 DEBUG nova.virt.libvirt.imagebackend [None req-a506a779-80b8-4ab6-ba80-08f04d4a6932 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 25 07:27:17 np0005629333 nova_compute[244014]: 2026-02-25 12:27:17.350 244018 DEBUG nova.storage.rbd_utils [None req-a506a779-80b8-4ab6-ba80-08f04d4a6932 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] creating snapshot(a332b96ae3c14c26b3b29692ee7dfe37) on rbd image(160a4e3f-b197-4b82-a2ff-cebf79df47df_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 07:27:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Feb 25 07:27:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Feb 25 07:27:17 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Feb 25 07:27:17 np0005629333 nova_compute[244014]: 2026-02-25 12:27:17.526 244018 DEBUG nova.storage.rbd_utils [None req-a506a779-80b8-4ab6-ba80-08f04d4a6932 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] cloning vms/160a4e3f-b197-4b82-a2ff-cebf79df47df_disk@a332b96ae3c14c26b3b29692ee7dfe37 to images/c955e791-f328-407c-9295-73bc5d2887e7 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 07:27:17 np0005629333 nova_compute[244014]: 2026-02-25 12:27:17.608 244018 DEBUG nova.storage.rbd_utils [None req-a506a779-80b8-4ab6-ba80-08f04d4a6932 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] flattening images/c955e791-f328-407c-9295-73bc5d2887e7 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 25 07:27:17 np0005629333 nova_compute[244014]: 2026-02-25 12:27:17.770 244018 DEBUG nova.virt.libvirt.driver [None req-f1581a0f-1dcd-4400-89f8-a529a5e4b804 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 25 07:27:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 532 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 5.4 MiB/s wr, 419 op/s
Feb 25 07:27:17 np0005629333 nova_compute[244014]: 2026-02-25 12:27:17.981 244018 DEBUG nova.storage.rbd_utils [None req-a506a779-80b8-4ab6-ba80-08f04d4a6932 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] removing snapshot(a332b96ae3c14c26b3b29692ee7dfe37) on rbd image(160a4e3f-b197-4b82-a2ff-cebf79df47df_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 25 07:27:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Feb 25 07:27:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Feb 25 07:27:18 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Feb 25 07:27:18 np0005629333 nova_compute[244014]: 2026-02-25 12:27:18.563 244018 DEBUG nova.storage.rbd_utils [None req-a506a779-80b8-4ab6-ba80-08f04d4a6932 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] creating snapshot(snap) on rbd image(c955e791-f328-407c-9295-73bc5d2887e7) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 07:27:18 np0005629333 nova_compute[244014]: 2026-02-25 12:27:18.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:27:18 np0005629333 nova_compute[244014]: 2026-02-25 12:27:18.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 07:27:18 np0005629333 nova_compute[244014]: 2026-02-25 12:27:18.891 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 07:27:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Feb 25 07:27:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Feb 25 07:27:19 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Feb 25 07:27:19 np0005629333 nova_compute[244014]: 2026-02-25 12:27:19.678 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:27:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 532 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 48 KiB/s wr, 261 op/s
Feb 25 07:27:20 np0005629333 nova_compute[244014]: 2026-02-25 12:27:20.032 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:27:20 np0005629333 nova_compute[244014]: 2026-02-25 12:27:20.101 244018 DEBUG nova.network.neutron [req-a41b956d-a99c-4d91-919e-01122135884f req-81c02a2a-1409-4c7c-a2c9-8496ddbf0e8b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Updated VIF entry in instance network info cache for port 5951ca77-9d6a-41db-aaf8-3508d21701f3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 07:27:20 np0005629333 nova_compute[244014]: 2026-02-25 12:27:20.101 244018 DEBUG nova.network.neutron [req-a41b956d-a99c-4d91-919e-01122135884f req-81c02a2a-1409-4c7c-a2c9-8496ddbf0e8b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0612ec20-e725-4516-a153-f1fe9be74f75] Updating instance_info_cache with network_info: [{"id": "5951ca77-9d6a-41db-aaf8-3508d21701f3", "address": "fa:16:3e:a7:90:29", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": null, "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap5951ca77-9d", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:27:20 np0005629333 nova_compute[244014]: 2026-02-25 12:27:20.123 244018 DEBUG oslo_concurrency.lockutils [req-a41b956d-a99c-4d91-919e-01122135884f req-81c02a2a-1409-4c7c-a2c9-8496ddbf0e8b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-0612ec20-e725-4516-a153-f1fe9be74f75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:27:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:27:20 np0005629333 kernel: tapee46268d-74 (unregistering): left promiscuous mode
Feb 25 07:27:20 np0005629333 NetworkManager[49836]: <info>  [1772022440.1315] device (tapee46268d-74): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:27:20 np0005629333 nova_compute[244014]: 2026-02-25 12:27:20.135 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:27:20 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:20Z|00500|binding|INFO|Releasing lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 from this chassis (sb_readonly=0)
Feb 25 07:27:20 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:20Z|00501|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 down in Southbound
Feb 25 07:27:20 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:20Z|00502|binding|INFO|Removing iface tapee46268d-74 ovn-installed in OVS
Feb 25 07:27:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:20.142 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:87:f1 10.100.0.11'], port_security=['fa:16:3e:ba:87:f1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'c2ca716d-3f7c-490b-954a-bca009559baa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ee46268d-740d-4ff9-8b65-4a81fc61eec3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:27:20 np0005629333 nova_compute[244014]: 2026-02-25 12:27:20.144 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:27:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:20.145 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ee46268d-740d-4ff9-8b65-4a81fc61eec3 in datapath ce318891-cf3c-4d99-af7c-c01770f38194 unbound from our chassis
Feb 25 07:27:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:20.147 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce318891-cf3c-4d99-af7c-c01770f38194, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 07:27:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:20.151 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e88eb5bc-da6a-468b-be68-725d95aca44b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:27:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:20.154 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 namespace which is not needed anymore
Feb 25 07:27:20 np0005629333 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000035.scope: Deactivated successfully.
Feb 25 07:27:20 np0005629333 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000035.scope: Consumed 12.639s CPU time.
Feb 25 07:27:20 np0005629333 systemd-machined[210048]: Machine qemu-63-instance-00000035 terminated.
Feb 25 07:27:20 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[293222]: [NOTICE]   (293226) : haproxy version is 2.8.14-c23fe91
Feb 25 07:27:20 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[293222]: [NOTICE]   (293226) : path to executable is /usr/sbin/haproxy
Feb 25 07:27:20 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[293222]: [WARNING]  (293226) : Exiting Master process...
Feb 25 07:27:20 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[293222]: [ALERT]    (293226) : Current worker (293228) exited with code 143 (Terminated)
Feb 25 07:27:20 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[293222]: [WARNING]  (293226) : All workers exited. Exiting... (0)
Feb 25 07:27:20 np0005629333 systemd[1]: libpod-3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a.scope: Deactivated successfully.
Feb 25 07:27:20 np0005629333 conmon[293222]: conmon 3a7f482e2061e992266d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a.scope/container/memory.events
Feb 25 07:27:20 np0005629333 podman[294439]: 2026-02-25 12:27:20.283010669 +0000 UTC m=+0.045463868 container died 3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 07:27:20 np0005629333 nova_compute[244014]: 2026-02-25 12:27:20.301 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:27:20 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a-userdata-shm.mount: Deactivated successfully.
Feb 25 07:27:20 np0005629333 systemd[1]: var-lib-containers-storage-overlay-eba61f7f2d26200ebeb270f5b7762f03e0ef58df99ed676891fb1a63a7baeb70-merged.mount: Deactivated successfully.
Feb 25 07:27:20 np0005629333 podman[294439]: 2026-02-25 12:27:20.328835716 +0000 UTC m=+0.091288905 container cleanup 3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 25 07:27:20 np0005629333 systemd[1]: libpod-conmon-3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a.scope: Deactivated successfully.
Feb 25 07:27:20 np0005629333 podman[294469]: 2026-02-25 12:27:20.390166231 +0000 UTC m=+0.044642514 container remove 3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:27:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:20.395 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d7d0ee1a-69d1-48ca-b68e-b666d707df28]: (4, ('Wed Feb 25 12:27:20 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 (3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a)\n3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a\nWed Feb 25 12:27:20 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 (3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a)\n3a7f482e2061e992266da4754a5befd53c7f4112964ae064dda88ea14183060a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:27:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:20.396 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[750c4e91-5d50-4aa9-96f5-b8e65bbd4c04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:27:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:20.397 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:27:20 np0005629333 nova_compute[244014]: 2026-02-25 12:27:20.399 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:27:20 np0005629333 kernel: tapce318891-c0: left promiscuous mode
Feb 25 07:27:20 np0005629333 nova_compute[244014]: 2026-02-25 12:27:20.411 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:27:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:20.417 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[906afe99-9bbf-43ed-b297-d8097d6a0a5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:27:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:20.432 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[48a5da02-a959-43af-812b-c059dcf8adb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:27:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:20.433 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[11a59933-1ce5-411c-94b4-9799b4f05362]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:27:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:20.443 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[65e35d28-cc93-4ff0-ab1c-89f67ae041c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438842, 'reachable_time': 35302, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294496, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:27:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:20.447 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 07:27:20 np0005629333 systemd[1]: run-netns-ovnmeta\x2dce318891\x2dcf3c\x2d4d99\x2daf7c\x2dc01770f38194.mount: Deactivated successfully.
Feb 25 07:27:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:20.447 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[0a1061c7-1489-4c62-a6b6-b9ccf076c50c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:27:20 np0005629333 nova_compute[244014]: 2026-02-25 12:27:20.697 244018 DEBUG nova.compute.manager [req-3e83cc3f-3d3b-4795-953a-c7642fdf203a req-88dc757d-f91c-490e-af97-6a00afd48d7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:27:20 np0005629333 nova_compute[244014]: 2026-02-25 12:27:20.699 244018 DEBUG oslo_concurrency.lockutils [req-3e83cc3f-3d3b-4795-953a-c7642fdf203a req-88dc757d-f91c-490e-af97-6a00afd48d7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:27:20 np0005629333 nova_compute[244014]: 2026-02-25 12:27:20.699 244018 DEBUG oslo_concurrency.lockutils [req-3e83cc3f-3d3b-4795-953a-c7642fdf203a req-88dc757d-f91c-490e-af97-6a00afd48d7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:27:20 np0005629333 nova_compute[244014]: 2026-02-25 12:27:20.700 244018 DEBUG oslo_concurrency.lockutils [req-3e83cc3f-3d3b-4795-953a-c7642fdf203a req-88dc757d-f91c-490e-af97-6a00afd48d7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:27:20 np0005629333 nova_compute[244014]: 2026-02-25 12:27:20.700 244018 DEBUG nova.compute.manager [req-3e83cc3f-3d3b-4795-953a-c7642fdf203a req-88dc757d-f91c-490e-af97-6a00afd48d7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:27:20 np0005629333 nova_compute[244014]: 2026-02-25 12:27:20.701 244018 WARNING nova.compute.manager [req-3e83cc3f-3d3b-4795-953a-c7642fdf203a req-88dc757d-f91c-490e-af97-6a00afd48d7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state powering-off.
Feb 25 07:27:20 np0005629333 nova_compute[244014]: 2026-02-25 12:27:20.788 244018 INFO nova.virt.libvirt.driver [None req-f1581a0f-1dcd-4400-89f8-a529a5e4b804 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance shutdown successfully after 13 seconds.
Feb 25 07:27:20 np0005629333 nova_compute[244014]: 2026-02-25 12:27:20.795 244018 INFO nova.virt.libvirt.driver [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance destroyed successfully.
Feb 25 07:27:20 np0005629333 nova_compute[244014]: 2026-02-25 12:27:20.796 244018 DEBUG nova.objects.instance [None req-f1581a0f-1dcd-4400-89f8-a529a5e4b804 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'numa_topology' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:27:20 np0005629333 nova_compute[244014]: 2026-02-25 12:27:20.817 244018 DEBUG nova.compute.manager [None req-f1581a0f-1dcd-4400-89f8-a529a5e4b804 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:27:20 np0005629333 nova_compute[244014]: 2026-02-25 12:27:20.867 244018 DEBUG oslo_concurrency.lockutils [None req-f1581a0f-1dcd-4400-89f8-a529a5e4b804 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.321s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:27:20 np0005629333 nova_compute[244014]: 2026-02-25 12:27:20.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:27:20 np0005629333 nova_compute[244014]: 2026-02-25 12:27:20.900 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:27:20 np0005629333 nova_compute[244014]: 2026-02-25 12:27:20.900 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 07:27:20 np0005629333 nova_compute[244014]: 2026-02-25 12:27:20.917 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:27:21 np0005629333 nova_compute[244014]: 2026-02-25 12:27:21.065 244018 INFO nova.virt.libvirt.driver [None req-a506a779-80b8-4ab6-ba80-08f04d4a6932 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Snapshot image upload complete
Feb 25 07:27:21 np0005629333 nova_compute[244014]: 2026-02-25 12:27:21.066 244018 INFO nova.compute.manager [None req-a506a779-80b8-4ab6-ba80-08f04d4a6932 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Took 4.47 seconds to snapshot the instance on the hypervisor.
Feb 25 07:27:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 305 active+clean; 543 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 413 KiB/s wr, 291 op/s
Feb 25 07:27:22 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Feb 25 07:27:23 np0005629333 nova_compute[244014]: 2026-02-25 12:27:23.727 244018 DEBUG nova.compute.manager [req-f34159d1-8ff9-48f0-b73e-488d18d801d6 req-3ec4fa96-0957-4edd-9a71-6a27c5cca43c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:27:23 np0005629333 nova_compute[244014]: 2026-02-25 12:27:23.728 244018 DEBUG oslo_concurrency.lockutils [req-f34159d1-8ff9-48f0-b73e-488d18d801d6 req-3ec4fa96-0957-4edd-9a71-6a27c5cca43c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:27:23 np0005629333 nova_compute[244014]: 2026-02-25 12:27:23.729 244018 DEBUG oslo_concurrency.lockutils [req-f34159d1-8ff9-48f0-b73e-488d18d801d6 req-3ec4fa96-0957-4edd-9a71-6a27c5cca43c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:27:23 np0005629333 nova_compute[244014]: 2026-02-25 12:27:23.729 244018 DEBUG oslo_concurrency.lockutils [req-f34159d1-8ff9-48f0-b73e-488d18d801d6 req-3ec4fa96-0957-4edd-9a71-6a27c5cca43c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:27:23 np0005629333 nova_compute[244014]: 2026-02-25 12:27:23.729 244018 DEBUG nova.compute.manager [req-f34159d1-8ff9-48f0-b73e-488d18d801d6 req-3ec4fa96-0957-4edd-9a71-6a27c5cca43c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:27:23 np0005629333 nova_compute[244014]: 2026-02-25 12:27:23.730 244018 WARNING nova.compute.manager [req-f34159d1-8ff9-48f0-b73e-488d18d801d6 req-3ec4fa96-0957-4edd-9a71-6a27c5cca43c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state stopped and task_state None.
Feb 25 07:27:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 584 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 4.8 MiB/s wr, 220 op/s
Feb 25 07:27:23 np0005629333 nova_compute[244014]: 2026-02-25 12:27:23.873 244018 DEBUG nova.objects.instance [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'flavor' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:27:23 np0005629333 nova_compute[244014]: 2026-02-25 12:27:23.894 244018 DEBUG oslo_concurrency.lockutils [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:27:23 np0005629333 nova_compute[244014]: 2026-02-25 12:27:23.894 244018 DEBUG oslo_concurrency.lockutils [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquired lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:27:23 np0005629333 nova_compute[244014]: 2026-02-25 12:27:23.894 244018 DEBUG nova.network.neutron [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:27:23 np0005629333 nova_compute[244014]: 2026-02-25 12:27:23.894 244018 DEBUG nova.objects.instance [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'info_cache' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:27:24 np0005629333 nova_compute[244014]: 2026-02-25 12:27:24.030 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Acquiring lock "b2915349-3797-40de-a554-2de79463723b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:27:24 np0005629333 nova_compute[244014]: 2026-02-25 12:27:24.031 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "b2915349-3797-40de-a554-2de79463723b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:27:24 np0005629333 nova_compute[244014]: 2026-02-25 12:27:24.050 244018 DEBUG nova.compute.manager [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:27:24 np0005629333 nova_compute[244014]: 2026-02-25 12:27:24.128 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:27:24 np0005629333 nova_compute[244014]: 2026-02-25 12:27:24.129 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:27:24 np0005629333 nova_compute[244014]: 2026-02-25 12:27:24.137 244018 DEBUG nova.virt.hardware [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:27:24 np0005629333 nova_compute[244014]: 2026-02-25 12:27:24.137 244018 INFO nova.compute.claims [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:27:24 np0005629333 nova_compute[244014]: 2026-02-25 12:27:24.356 244018 DEBUG oslo_concurrency.processutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:27:24 np0005629333 nova_compute[244014]: 2026-02-25 12:27:24.453 244018 DEBUG oslo_concurrency.lockutils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:27:24 np0005629333 nova_compute[244014]: 2026-02-25 12:27:24.453 244018 DEBUG oslo_concurrency.lockutils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:27:24 np0005629333 nova_compute[244014]: 2026-02-25 12:27:24.454 244018 INFO nova.compute.manager [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Shelving
Feb 25 07:27:24 np0005629333 nova_compute[244014]: 2026-02-25 12:27:24.477 244018 DEBUG nova.virt.libvirt.driver [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 07:27:24 np0005629333 nova_compute[244014]: 2026-02-25 12:27:24.680 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:27:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:27:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2589183456' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:27:24 np0005629333 nova_compute[244014]: 2026-02-25 12:27:24.914 244018 DEBUG oslo_concurrency.processutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:27:24 np0005629333 nova_compute[244014]: 2026-02-25 12:27:24.921 244018 DEBUG nova.compute.provider_tree [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:27:24 np0005629333 nova_compute[244014]: 2026-02-25 12:27:24.942 244018 DEBUG nova.scheduler.client.report [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:27:24 np0005629333 nova_compute[244014]: 2026-02-25 12:27:24.970 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:24 np0005629333 nova_compute[244014]: 2026-02-25 12:27:24.972 244018 DEBUG nova.compute.manager [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.026 244018 DEBUG nova.compute.manager [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.027 244018 DEBUG nova.network.neutron [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.036 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.053 244018 INFO nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.072 244018 DEBUG nova.compute.manager [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.080 244018 DEBUG nova.network.neutron [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Updating instance_info_cache with network_info: [{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.126 244018 DEBUG oslo_concurrency.lockutils [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Releasing lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
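The "Releasing lock refresh_cache-<uuid>" message is oslo.concurrency's lockutils, which nova uses to serialize rebuilds of an instance's network-info cache (and, further down, mdev allocation under the "vgpu_resources" lock). A minimal sketch of both forms, with placeholder bodies:

    from oslo_concurrency import lockutils

    uuid = "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b"

    # Context-manager form, as in the refresh_cache-<uuid> messages: only
    # one thread at a time may rebuild this instance's info cache.
    with lockutils.lock("refresh_cache-%s" % uuid):
        pass  # update_instance_cache_with_nw_info(...) would run here

    # Decorator form, as used around _allocate_mdevs further below.
    @lockutils.synchronized("vgpu_resources")
    def allocate_mdevs():
        pass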
Feb 25 07:27:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:27:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Feb 25 07:27:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Feb 25 07:27:25 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.171 244018 INFO nova.virt.libvirt.driver [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance destroyed successfully.#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.172 244018 DEBUG nova.objects.instance [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'numa_topology' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.244 244018 DEBUG nova.objects.instance [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'resources' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.293 244018 DEBUG nova.virt.libvirt.vif [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:27:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.294 244018 DEBUG nova.network.os_vif_util [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.295 244018 DEBUG nova.network.os_vif_util [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.296 244018 DEBUG os_vif [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.299 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.300 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee46268d-74, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
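DelPortCommand is ovsdbapp's OVSDB transaction removing the instance's tap device from br-int during VIF unplug. A hedged sketch of issuing the same command through ovsdbapp's public Open_vSwitch API; the socket path and timeout are assumptions, not values from this log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Equivalent of the logged txn: DelPortCommand(port=tapee46268d-74,
    # bridge=br-int, if_exists=True), committed as one transaction.
    api.del_port("tapee46268d-74", bridge="br-int",
                 if_exists=True).execute(check_error=True)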
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.305 244018 DEBUG nova.compute.manager [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.307 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.308 244018 INFO nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Creating image(s)#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.347 244018 DEBUG nova.storage.rbd_utils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] rbd image b2915349-3797-40de-a554-2de79463723b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.381 244018 DEBUG nova.storage.rbd_utils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] rbd image b2915349-3797-40de-a554-2de79463723b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.417 244018 DEBUG nova.storage.rbd_utils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] rbd image b2915349-3797-40de-a554-2de79463723b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.423 244018 DEBUG oslo_concurrency.processutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.467 244018 DEBUG nova.policy [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9f1b75cc204c46bba5f57dbfecdde9ad', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '09adeeb80de1411fb55d81d407987bba', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.471 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.476 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.480 244018 INFO os_vif [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74')#033[00m
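"Successfully unplugged vif VIFOpenVSwitch(...)" is the os-vif library's ovs plugin finishing the teardown begun above. A minimal sketch of driving the same public API directly, with field values copied from the logged VIF object (this bypasses nova's own bookkeeping, so treat it as illustration only):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the ovs plugin via stevedore

    my_vif = vif.VIFOpenVSwitch(
        id="ee46268d-740d-4ff9-8b65-4a81fc61eec3",
        address="fa:16:3e:ba:87:f1",
        vif_name="tapee46268d-74",
        bridge_name="br-int",
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id="ee46268d-740d-4ff9-8b65-4a81fc61eec3"),
        network=network.Network(id="ce318891-cf3c-4d99-af7c-c01770f38194"))

    inst = instance_info.InstanceInfo(
        uuid="8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b",
        name="instance-00000035")

    os_vif.unplug(my_vif, inst)  # removes tapee46268d-74 from br-int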
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.488 244018 DEBUG nova.virt.libvirt.driver [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Start _get_guest_xml network_info=[{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.492 244018 WARNING nova.virt.libvirt.driver [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.512 244018 DEBUG oslo_concurrency.processutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
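The qemu-img probe above runs under oslo.concurrency's prlimit wrapper, which caps the child process at 1 GiB of address space and 30 s of CPU so a malformed image cannot wedge the compute service. The same call reproduced directly against the path from the log:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "qemu-img", "info",
        "/var/lib/nova/instances/_base/"
        "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6",
        "--force-share", "--output=json",
        env_variables={"LC_ALL": "C", "LANG": "C"},
        prlimit=processutils.ProcessLimits(address_space=1024 ** 3,
                                           cpu_time=30))
    info = json.loads(out)
    print(info.get("format"), info.get("virtual-size"))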
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.513 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.513 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.514 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.535 244018 DEBUG nova.storage.rbd_utils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] rbd image b2915349-3797-40de-a554-2de79463723b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.540 244018 DEBUG oslo_concurrency.processutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 b2915349-3797-40de-a554-2de79463723b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.575 244018 DEBUG nova.virt.libvirt.host [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.576 244018 DEBUG nova.virt.libvirt.host [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.578 244018 DEBUG nova.compute.manager [None req-6ead0b10-c826-4c3f-b4f0-d83eb046807a 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.581 244018 DEBUG nova.virt.libvirt.host [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.582 244018 DEBUG nova.virt.libvirt.host [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.583 244018 DEBUG nova.virt.libvirt.driver [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.583 244018 DEBUG nova.virt.hardware [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.585 244018 DEBUG nova.virt.hardware [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.585 244018 DEBUG nova.virt.hardware [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.586 244018 DEBUG nova.virt.hardware [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.586 244018 DEBUG nova.virt.hardware [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.586 244018 DEBUG nova.virt.hardware [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.586 244018 DEBUG nova.virt.hardware [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.586 244018 DEBUG nova.virt.hardware [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.587 244018 DEBUG nova.virt.hardware [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.587 244018 DEBUG nova.virt.hardware [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.587 244018 DEBUG nova.virt.hardware [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
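The topology block above shows the selection logic with no hints at all: flavor and image limits and preferences are 0:0:0, so nova enumerates the factorizations of the 1 vCPU count against the 65536-per-level ceiling and ends up with the single candidate 1 socket x 1 core x 1 thread. A hedged re-derivation of that enumeration step (a simplification of nova.virt.hardware, not the code itself):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Enumerate (sockets, cores, threads) with s*c*t == vcpus, each
        # level capped by the limits logged above.
        found = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    found.append((s, c, t))
        return found

    print(possible_topologies(1))  # [(1, 1, 1)] -- the logged result
    print(possible_topologies(4))  # several candidates for 4 vCPUs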
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.587 244018 DEBUG nova.objects.instance [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.626 244018 DEBUG oslo_concurrency.processutils [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.656 244018 INFO nova.compute.manager [None req-6ead0b10-c826-4c3f-b4f0-d83eb046807a 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] instance snapshotting#033[00m
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.823 244018 DEBUG oslo_concurrency.processutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 b2915349-3797-40de-a554-2de79463723b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:27:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 584 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.1 MiB/s wr, 118 op/s
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.902 244018 DEBUG nova.storage.rbd_utils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] resizing rbd image b2915349-3797-40de-a554-2de79463723b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
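After the rbd import, nova resizes the freshly created image up to the flavor's root disk (root_gb=1, hence 1073741824 bytes). The resize in rbd_utils goes through the python rbd binding; a standalone equivalent using the pool, client id, and conf file from the log:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            image = rbd.Image(ioctx,
                              "b2915349-3797-40de-a554-2de79463723b_disk")
            try:
                image.resize(1073741824)  # 1 GiB, matching root_gb=1
            finally:
                image.close()
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()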
Feb 25 07:27:25 np0005629333 nova_compute[244014]: 2026-02-25 12:27:25.989 244018 DEBUG nova.objects.instance [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lazy-loading 'migration_context' on Instance uuid b2915349-3797-40de-a554-2de79463723b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.041 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.042 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Ensure instance console log exists: /var/lib/nova/instances/b2915349-3797-40de-a554-2de79463723b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.043 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.043 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.043 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:27:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2699635394' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.222 244018 DEBUG oslo_concurrency.processutils [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
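The "ceph mon dump --format=json" round trips above (visible on the ceph-mon side as the handle_command and audit entries) are how the RBD driver discovers the monitor addresses to put into the guest's disk XML. A minimal reproduction of the call; the JSON fields follow the standard mon dump schema:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    monmap = json.loads(out)
    for mon in monmap["mons"]:
        # e.g. "compute-0 192.168.122.100:6789"
        print(mon["name"], mon["public_addrs"]["addrvec"][0]["addr"])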
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.262 244018 DEBUG oslo_concurrency.processutils [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.303 244018 INFO nova.virt.libvirt.driver [None req-6ead0b10-c826-4c3f-b4f0-d83eb046807a 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Beginning live snapshot process#033[00m
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.520 244018 DEBUG nova.virt.libvirt.imagebackend [None req-6ead0b10-c826-4c3f-b4f0-d83eb046807a 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Feb 25 07:27:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:27:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1249750884' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:27:26 np0005629333 kernel: tapabdb97b5-8e (unregistering): left promiscuous mode
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.828 244018 DEBUG oslo_concurrency.processutils [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:27:26 np0005629333 NetworkManager[49836]: <info>  [1772022446.8297] device (tapabdb97b5-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.830 244018 DEBUG nova.virt.libvirt.vif [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:27:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.831 244018 DEBUG nova.network.os_vif_util [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.832 244018 DEBUG nova.network.os_vif_util [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:27:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:26Z|00503|binding|INFO|Releasing lport abdb97b5-8e9d-4929-af6f-bfb06c067878 from this chassis (sb_readonly=0)
Feb 25 07:27:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:26Z|00504|binding|INFO|Setting lport abdb97b5-8e9d-4929-af6f-bfb06c067878 down in Southbound
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.833 244018 DEBUG nova.objects.instance [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.834 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:26Z|00505|binding|INFO|Removing iface tapabdb97b5-8e ovn-installed in OVS
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.836 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.847 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.865 244018 DEBUG nova.storage.rbd_utils [None req-6ead0b10-c826-4c3f-b4f0-d83eb046807a 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] creating snapshot(a8b6c9ac7dad412996d3d497216ce479) on rbd image(bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
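The live-snapshot path for instance bd7e6bb5 starts by creating an RBD snapshot of the instance disk, which Ceph can later clone without pausing the guest. The same create_snap call via the python binding, using the image and snapshot names from the log (context-manager form; nova's rbd_utils manages the handles explicitly):

    import rados
    import rbd

    with rados.Rados(conffile="/etc/ceph/ceph.conf",
                     rados_id="openstack") as cluster:
        with cluster.open_ioctx("vms") as ioctx:
            with rbd.Image(ioctx,
                           "bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk") as im:
                im.create_snap("a8b6c9ac7dad412996d3d497216ce479")
                # a clone-to-glance flow would typically protect it next:
                # im.protect_snap("a8b6c9ac7dad412996d3d497216ce479")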
Feb 25 07:27:26 np0005629333 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Feb 25 07:27:26 np0005629333 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002f.scope: Consumed 16.659s CPU time.
Feb 25 07:27:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:26.882 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:53:87 10.100.0.6'], port_security=['fa:16:3e:f8:53:87 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b8086e43-4c45-422f-a3b5-fa665c256b30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64c22162-7e15-45de-8fd2-8c9a24f27006', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c85a955249394f0faf7c890f5cd0df32', 'neutron:revision_number': '4', 'neutron:security_group_ids': '35edb9b7-5285-41a3-a867-f1cc587b3ad5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9495f97-67e6-4da7-a9b0-f643c9e48076, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=abdb97b5-8e9d-4929-af6f-bfb06c067878) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:27:26 np0005629333 systemd-machined[210048]: Machine qemu-52-instance-0000002f terminated.
Feb 25 07:27:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:26.885 157129 INFO neutron.agent.ovn.metadata.agent [-] Port abdb97b5-8e9d-4929-af6f-bfb06c067878 in datapath 64c22162-7e15-45de-8fd2-8c9a24f27006 unbound from our chassis#033[00m
Feb 25 07:27:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:26.888 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64c22162-7e15-45de-8fd2-8c9a24f27006#033[00m
Feb 25 07:27:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:26.903 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4dca787b-1924-4c95-bc3d-a900460ae08d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
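The "Matched UPDATE: PortBindingUpdatedEvent" line above shows how the OVN metadata agent reacts to southbound changes: it registers ovsdbapp row events against the Port_Binding table and provisions or tears down metadata namespaces when ports move on or off this chassis. Skeleton of such an event class; the handler body is a hypothetical stand-in:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Matches the logged constructor: events=('update',),
            # table='Port_Binding', conditions=None.
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)
            self.event_name = "PortBindingUpdatedEvent"

        def run(self, event, row, old):
            # Invoked for each matched row change, e.g. a port bound to
            # or unbound from this chassis.
            print("port", row.logical_port, "updated")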
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.907 244018 DEBUG nova.virt.libvirt.driver [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:27:26 np0005629333 nova_compute[244014]:  <uuid>8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</uuid>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:  <name>instance-00000035</name>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerActionsTestJSON-server-1971346294</nova:name>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:27:25</nova:creationTime>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:27:26 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:        <nova:user uuid="1f8bbe7db4454108aca005daa72d5c22">tempest-ServerActionsTestJSON-436476112-project-member</nova:user>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:        <nova:project uuid="56700581ea88438ba482d90bc702ced3">tempest-ServerActionsTestJSON-436476112</nova:project>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:        <nova:port uuid="ee46268d-740d-4ff9-8b65-4a81fc61eec3">
Feb 25 07:27:26 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <entry name="serial">8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</entry>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <entry name="uuid">8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</entry>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk">
Feb 25 07:27:26 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:27:26 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk.config">
Feb 25 07:27:26 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:27:26 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:ba:87:f1"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <target dev="tapee46268d-74"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b/console.log" append="off"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <input type="keyboard" bus="usb"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:27:26 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:27:26 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:27:26 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:27:26 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.910 244018 DEBUG nova.virt.libvirt.driver [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.910 244018 DEBUG nova.virt.libvirt.driver [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.912 244018 DEBUG nova.virt.libvirt.vif [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:27:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.912 244018 DEBUG nova.network.os_vif_util [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.914 244018 DEBUG nova.network.os_vif_util [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.914 244018 DEBUG os_vif [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.915 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.916 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.916 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.920 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.920 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee46268d-74, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.921 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee46268d-74, col_values=(('external_ids', {'iface-id': 'ee46268d-740d-4ff9-8b65-4a81fc61eec3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:87:f1', 'vm-uuid': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.923 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:26 np0005629333 NetworkManager[49836]: <info>  [1772022446.9250] manager: (tapee46268d-74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.928 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.932 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:26 np0005629333 nova_compute[244014]: 2026-02-25 12:27:26.933 244018 INFO os_vif [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74')#033[00m
Feb 25 07:27:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:26.933 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[979cfdf8-9dc2-4c13-a321-a7413d265591]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:26.938 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[dd382c7f-45bd-495b-8294-053a1c9c1f81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:26.964 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c480a8be-0f6c-4408-8fe6-7957e67f68b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:26.984 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9e555133-07a0-485b-99dc-ab4ffe172a8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64c22162-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:1c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 16, 'rx_bytes': 700, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 16, 'rx_bytes': 700, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428693, 'reachable_time': 31584, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294813, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.003 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0a30084d-7c7b-4040-bc45-22c63df1b424]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428702, 'tstamp': 428702}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294818, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428705, 'tstamp': 428705}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294818, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.005 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64c22162-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.007 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:27 np0005629333 systemd-udevd[294782]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:27:27 np0005629333 NetworkManager[49836]: <info>  [1772022447.0162] manager: (tapee46268d-74): new Tun device (/org/freedesktop/NetworkManager/Devices/226)
Feb 25 07:27:27 np0005629333 kernel: tapee46268d-74: entered promiscuous mode
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.019 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:27Z|00506|binding|INFO|Claiming lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 for this chassis.
Feb 25 07:27:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:27Z|00507|binding|INFO|ee46268d-740d-4ff9-8b65-4a81fc61eec3: Claiming fa:16:3e:ba:87:f1 10.100.0.11
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.022 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64c22162-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.022 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.023 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64c22162-70, col_values=(('external_ids', {'iface-id': '81f0f54c-4e04-4adf-952f-b6d0fe9698c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.024 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:27:27 np0005629333 NetworkManager[49836]: <info>  [1772022447.0311] device (tapee46268d-74): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:27:27 np0005629333 NetworkManager[49836]: <info>  [1772022447.0324] device (tapee46268d-74): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:27:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:27Z|00508|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 ovn-installed in OVS
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.038 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.043 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:27 np0005629333 NetworkManager[49836]: <info>  [1772022447.0474] manager: (tapabdb97b5-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/227)
Feb 25 07:27:27 np0005629333 kernel: tapabdb97b5-8e: entered promiscuous mode
Feb 25 07:27:27 np0005629333 kernel: tapabdb97b5-8e (unregistering): left promiscuous mode
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.057 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:27Z|00509|if_status|INFO|Not updating pb chassis for abdb97b5-8e9d-4929-af6f-bfb06c067878 now as sb is readonly
Feb 25 07:27:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:27Z|00510|if_status|INFO|Dropped 2 log messages in last 26 seconds (most recently, 26 seconds ago) due to excessive rate
Feb 25 07:27:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:27Z|00511|if_status|INFO|Not setting lport abdb97b5-8e9d-4929-af6f-bfb06c067878 down as sb is readonly
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.073 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:27 np0005629333 systemd-machined[210048]: New machine qemu-66-instance-00000035.
Feb 25 07:27:27 np0005629333 systemd[1]: Started Virtual Machine qemu-66-instance-00000035.
Feb 25 07:27:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:27Z|00512|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 up in Southbound
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.238 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:87:f1 10.100.0.11'], port_security=['fa:16:3e:ba:87:f1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'c2ca716d-3f7c-490b-954a-bca009559baa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ee46268d-740d-4ff9-8b65-4a81fc61eec3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.240 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ee46268d-740d-4ff9-8b65-4a81fc61eec3 in datapath ce318891-cf3c-4d99-af7c-c01770f38194 bound to our chassis#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.243 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce318891-cf3c-4d99-af7c-c01770f38194#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.254 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5e284922-15c2-4945-adaf-fd1e2a5d9d74]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.255 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce318891-c1 in ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.258 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce318891-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.258 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a4b5dc4f-927e-4016-a0f6-1e2120a612ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.259 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[86afdd55-3fd9-4e29-90da-c994453b9ec1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.272 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[324b79c4-69b2-4119-a87b-721aa3c6e99b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.295 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8d151f28-bc6a-425a-9473-920d89a62a8d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.322 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1c8ce840-5426-4156-8f9e-973d2be79b30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.326 244018 DEBUG nova.network.neutron [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Successfully created port: be60a13c-bf18-4eb6-ab12-d76c0abf9525 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.329 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[616682c2-d067-41a8-8e38-575f1894c579]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:27 np0005629333 NetworkManager[49836]: <info>  [1772022447.3323] manager: (tapce318891-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/228)
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.365 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2609454a-3bde-44cd-a6c9-0c7154e11fc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.369 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e777bd65-687f-4865-9a33-e37ef47abfaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:27 np0005629333 NetworkManager[49836]: <info>  [1772022447.3922] device (tapce318891-c0): carrier: link connected
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.396 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f8254e67-38d9-43c9-a0f0-e115dbace23c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.418 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4bbd112a-edd3-4cd9-a1d6-3995b0946f7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441679, 'reachable_time': 17977, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294870, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.434 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[084009a9-91c3-4927-91e1-9a359e9b62d0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:c3a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 441679, 'tstamp': 441679}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294871, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.451 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[21c4a839-c1b7-43b8-877c-c7fc07544d62]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441679, 'reachable_time': 17977, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294872, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.474 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6f1823b4-71cf-4a92-b26b-685eb72ac1da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.525 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a5433787-1c51-49fc-ad41-6d5b06003872]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.527 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.528 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.528 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce318891-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.530 244018 INFO nova.virt.libvirt.driver [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Instance shutdown successfully after 3 seconds.#033[00m
Feb 25 07:27:27 np0005629333 NetworkManager[49836]: <info>  [1772022447.5324] manager: (tapce318891-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/229)
Feb 25 07:27:27 np0005629333 kernel: tapce318891-c0: entered promiscuous mode
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.534 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.539 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.541 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce318891-c0, col_values=(('external_ids', {'iface-id': '3b184c15-8ef4-4e11-bd18-e1253a4ff440'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.544 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:27Z|00513|binding|INFO|Releasing lport 3b184c15-8ef4-4e11-bd18-e1253a4ff440 from this chassis (sb_readonly=0)
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.548 244018 INFO nova.virt.libvirt.driver [-] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Instance destroyed successfully.#033[00m
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.549 244018 DEBUG nova.objects.instance [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'numa_topology' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.562 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.565 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.566 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.567 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a0121760-f049-4100-b890-1e68480ff1d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.568 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:27:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:27.568 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'env', 'PROCESS_TAG=haproxy-ce318891-cf3c-4d99-af7c-c01770f38194', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce318891-cf3c-4d99-af7c-c01770f38194.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 07:27:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Feb 25 07:27:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Feb 25 07:27:27 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.657 244018 DEBUG nova.storage.rbd_utils [None req-6ead0b10-c826-4c3f-b4f0-d83eb046807a 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] cloning vms/bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk@a8b6c9ac7dad412996d3d497216ce479 to images/a8d16fd6-a547-4180-934b-00fd58f79b87 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.701 244018 DEBUG nova.compute.manager [req-1f5b4774-8860-4785-869c-9385d58ca809 req-21d3aba9-bd45-4c23-9316-cdea60999414 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received event network-vif-unplugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.702 244018 DEBUG oslo_concurrency.lockutils [req-1f5b4774-8860-4785-869c-9385d58ca809 req-21d3aba9-bd45-4c23-9316-cdea60999414 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.703 244018 DEBUG oslo_concurrency.lockutils [req-1f5b4774-8860-4785-869c-9385d58ca809 req-21d3aba9-bd45-4c23-9316-cdea60999414 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.703 244018 DEBUG oslo_concurrency.lockutils [req-1f5b4774-8860-4785-869c-9385d58ca809 req-21d3aba9-bd45-4c23-9316-cdea60999414 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.704 244018 DEBUG nova.compute.manager [req-1f5b4774-8860-4785-869c-9385d58ca809 req-21d3aba9-bd45-4c23-9316-cdea60999414 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] No waiting events found dispatching network-vif-unplugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.704 244018 WARNING nova.compute.manager [req-1f5b4774-8860-4785-869c-9385d58ca809 req-21d3aba9-bd45-4c23-9316-cdea60999414 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received unexpected event network-vif-unplugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 for instance with vm_state active and task_state shelving.#033[00m
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.714 244018 DEBUG nova.compute.manager [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.715 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.716 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022447.715182, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.716 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.725 244018 INFO nova.virt.libvirt.driver [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance rebooted successfully.#033[00m
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.725 244018 DEBUG nova.compute.manager [None req-539c503b-e082-4be7-84f4-2c705c0e9753 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.747 244018 DEBUG nova.storage.rbd_utils [None req-6ead0b10-c826-4c3f-b4f0-d83eb046807a 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] flattening images/a8d16fd6-a547-4180-934b-00fd58f79b87 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.788 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.793 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.836 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022447.715263, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.836 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Started (Lifecycle Event)#033[00m
Feb 25 07:27:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 691 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 12 MiB/s wr, 322 op/s
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.866 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.871 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:27:27 np0005629333 nova_compute[244014]: 2026-02-25 12:27:27.887 244018 INFO nova.virt.libvirt.driver [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Beginning cold snapshot process#033[00m
Feb 25 07:27:28 np0005629333 podman[295004]: 2026-02-25 12:27:27.947994508 +0000 UTC m=+0.021622363 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:27:28 np0005629333 nova_compute[244014]: 2026-02-25 12:27:28.070 244018 DEBUG nova.virt.libvirt.imagebackend [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Feb 25 07:27:28 np0005629333 podman[295004]: 2026-02-25 12:27:28.073052727 +0000 UTC m=+0.146680562 container create 32106443e4d9cce5597e1e35d916e34cc8e899ac548b632e8c9e58485a32cad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:27:28 np0005629333 systemd[1]: Started libpod-conmon-32106443e4d9cce5597e1e35d916e34cc8e899ac548b632e8c9e58485a32cad8.scope.
Feb 25 07:27:28 np0005629333 nova_compute[244014]: 2026-02-25 12:27:28.146 244018 DEBUG nova.storage.rbd_utils [None req-6ead0b10-c826-4c3f-b4f0-d83eb046807a 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] removing snapshot(a8b6c9ac7dad412996d3d497216ce479) on rbd image(bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 25 07:27:28 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:27:28 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb72b0e246b6d9347232c841bd42065a2553e7580308af9097a9325d7f9de613/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:27:28 np0005629333 podman[295004]: 2026-02-25 12:27:28.166565904 +0000 UTC m=+0.240193789 container init 32106443e4d9cce5597e1e35d916e34cc8e899ac548b632e8c9e58485a32cad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 07:27:28 np0005629333 podman[295004]: 2026-02-25 12:27:28.171221515 +0000 UTC m=+0.244849360 container start 32106443e4d9cce5597e1e35d916e34cc8e899ac548b632e8c9e58485a32cad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:27:28 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[295066]: [NOTICE]   (295073) : New worker (295075) forked
Feb 25 07:27:28 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[295066]: [NOTICE]   (295073) : Loading success.
Feb 25 07:27:28 np0005629333 nova_compute[244014]: 2026-02-25 12:27:28.236 244018 DEBUG nova.network.neutron [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Successfully updated port: be60a13c-bf18-4eb6-ab12-d76c0abf9525 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:27:28 np0005629333 nova_compute[244014]: 2026-02-25 12:27:28.256 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Acquiring lock "refresh_cache-b2915349-3797-40de-a554-2de79463723b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:27:28 np0005629333 nova_compute[244014]: 2026-02-25 12:27:28.257 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Acquired lock "refresh_cache-b2915349-3797-40de-a554-2de79463723b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:27:28 np0005629333 nova_compute[244014]: 2026-02-25 12:27:28.257 244018 DEBUG nova.network.neutron [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:27:28 np0005629333 nova_compute[244014]: 2026-02-25 12:27:28.291 244018 DEBUG nova.storage.rbd_utils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] creating snapshot(b5e987aa5269471aa49cf0d7cd2b10d6) on rbd image(b8086e43-4c45-422f-a3b5-fa665c256b30_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 25 07:27:28 np0005629333 nova_compute[244014]: 2026-02-25 12:27:28.433 244018 DEBUG nova.network.neutron [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:27:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Feb 25 07:27:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Feb 25 07:27:28 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Feb 25 07:27:28 np0005629333 nova_compute[244014]: 2026-02-25 12:27:28.641 244018 DEBUG nova.compute.manager [req-ab4491f6-30c4-4ad3-890c-809de39fc1e8 req-89f3b589-10c8-44fb-a12e-3024c7b10daf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Received event network-changed-be60a13c-bf18-4eb6-ab12-d76c0abf9525 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:27:28 np0005629333 nova_compute[244014]: 2026-02-25 12:27:28.642 244018 DEBUG nova.compute.manager [req-ab4491f6-30c4-4ad3-890c-809de39fc1e8 req-89f3b589-10c8-44fb-a12e-3024c7b10daf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Refreshing instance network info cache due to event network-changed-be60a13c-bf18-4eb6-ab12-d76c0abf9525. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:27:28 np0005629333 nova_compute[244014]: 2026-02-25 12:27:28.643 244018 DEBUG oslo_concurrency.lockutils [req-ab4491f6-30c4-4ad3-890c-809de39fc1e8 req-89f3b589-10c8-44fb-a12e-3024c7b10daf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-b2915349-3797-40de-a554-2de79463723b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:27:28 np0005629333 nova_compute[244014]: 2026-02-25 12:27:28.661 244018 DEBUG nova.storage.rbd_utils [None req-6ead0b10-c826-4c3f-b4f0-d83eb046807a 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] creating snapshot(snap) on rbd image(a8d16fd6-a547-4180-934b-00fd58f79b87) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 25 07:27:28 np0005629333 nova_compute[244014]: 2026-02-25 12:27:28.725 244018 DEBUG nova.storage.rbd_utils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] cloning vms/b8086e43-4c45-422f-a3b5-fa665c256b30_disk@b5e987aa5269471aa49cf0d7cd2b10d6 to images/a1bfc991-8c23-4709-b2d6-80b0d18f6430 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 25 07:27:28 np0005629333 nova_compute[244014]: 2026-02-25 12:27:28.851 244018 DEBUG nova.storage.rbd_utils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] flattening images/a1bfc991-8c23-4709-b2d6-80b0d18f6430 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb 25 07:27:29 np0005629333 nova_compute[244014]: 2026-02-25 12:27:29.333 244018 DEBUG nova.storage.rbd_utils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] removing snapshot(b5e987aa5269471aa49cf0d7cd2b10d6) on rbd image(b8086e43-4c45-422f-a3b5-fa665c256b30_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
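The four rbd_utils calls above (create_snap, clone, flatten, remove_snap) are the direct-to-Ceph snapshot upload pattern: snapshot the ephemeral disk, clone the snapshot into the images pool, flatten the clone so it no longer depends on its parent, then drop the temporary snapshot. A minimal sketch of the same sequence with the rados/rbd Python bindings; pool names match the log, but the image and snapshot names are placeholders, and clones additionally require the parent snapshot to be protected:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    try:
        with cluster.open_ioctx('vms') as vms, cluster.open_ioctx('images') as images:
            with rbd.Image(vms, 'SRC_DISK') as src:        # placeholder image name
                src.create_snap('tmp-snap')
                src.protect_snap('tmp-snap')               # clone parents must be protected
            rbd.RBD().clone(vms, 'SRC_DISK', 'tmp-snap', images, 'DST_IMAGE')
            with rbd.Image(images, 'DST_IMAGE') as dst:
                dst.flatten()                              # copy parent data, break the dependency
            with rbd.Image(vms, 'SRC_DISK') as src:
                src.unprotect_snap('tmp-snap')
                src.remove_snap('tmp-snap')
    finally:
        cluster.shutdown()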
Feb 25 07:27:29 np0005629333 nova_compute[244014]: 2026-02-25 12:27:29.549 244018 DEBUG nova.network.neutron [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Updating instance_info_cache with network_info: [{"id": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "address": "fa:16:3e:78:7f:56", "network": {"id": "c6988a49-992e-4d2d-a359-182201377e6a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-391653903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09adeeb80de1411fb55d81d407987bba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe60a13c-bf", "ovs_interfaceid": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
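network_info payloads like the one just cached are a JSON list of VIF dicts. For log spelunking it is handy to pull out the fixed addresses; a small helper matching the structure shown in the line above (field names taken from that payload):

    import json

    def fixed_ips(network_info):
        """Collect fixed IP addresses from a nova network_info list."""
        ips = []
        for vif in network_info:
            for subnet in vif['network']['subnets']:
                ips.extend(ip['address']
                           for ip in subnet['ips'] if ip['type'] == 'fixed')
        return ips

    # e.g. json.loads() the [...] payload from the log line, then:
    # fixed_ips(payload) -> ['10.100.0.5']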
Feb 25 07:27:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Feb 25 07:27:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Feb 25 07:27:29 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Feb 25 07:27:29 np0005629333 nova_compute[244014]: 2026-02-25 12:27:29.682 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:29 np0005629333 nova_compute[244014]: 2026-02-25 12:27:29.713 244018 DEBUG nova.storage.rbd_utils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] creating snapshot(snap) on rbd image(a1bfc991-8c23-4709-b2d6-80b0d18f6430) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 25 07:27:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 691 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 14 MiB/s wr, 363 op/s
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.063 244018 DEBUG nova.compute.manager [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received event network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.063 244018 DEBUG oslo_concurrency.lockutils [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.063 244018 DEBUG oslo_concurrency.lockutils [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.064 244018 DEBUG oslo_concurrency.lockutils [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.064 244018 DEBUG nova.compute.manager [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] No waiting events found dispatching network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.064 244018 WARNING nova.compute.manager [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received unexpected event network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
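The "No waiting events found" / "Received unexpected event" pair above is the usual outcome when Neutron sends an event nothing on the compute side registered for: the pop finds no waiter, so the manager logs a warning and moves on. A stripped-down sketch of that registry shape (illustrative, not nova's actual class):

    import threading

    class EventRegistry:
        """Waiters keyed by (instance, event) name; pop returns None if absent."""
        def __init__(self):
            self._waiters = {}

        def prepare(self, key):
            ev = threading.Event()
            self._waiters[key] = ev        # registered before the operation starts
            return ev

        def pop(self, key):
            # None here is what surfaces as "No waiting events found" plus
            # the "unexpected event" warning in the lines above.
            return self._waiters.pop(key, None)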
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.064 244018 DEBUG nova.compute.manager [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.065 244018 DEBUG oslo_concurrency.lockutils [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.065 244018 DEBUG oslo_concurrency.lockutils [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.065 244018 DEBUG oslo_concurrency.lockutils [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.065 244018 DEBUG nova.compute.manager [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.065 244018 WARNING nova.compute.manager [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.066 244018 DEBUG nova.compute.manager [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.066 244018 DEBUG oslo_concurrency.lockutils [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.066 244018 DEBUG oslo_concurrency.lockutils [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.066 244018 DEBUG oslo_concurrency.lockutils [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.066 244018 DEBUG nova.compute.manager [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.067 244018 WARNING nova.compute.manager [req-a9f2a178-143c-4377-a7f9-f881775f869d req-a2894095-1aa7-4b51-b598-763ce6f1bac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.070 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Releasing lock "refresh_cache-b2915349-3797-40de-a554-2de79463723b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.071 244018 DEBUG nova.compute.manager [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Instance network_info: |[{"id": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "address": "fa:16:3e:78:7f:56", "network": {"id": "c6988a49-992e-4d2d-a359-182201377e6a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-391653903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09adeeb80de1411fb55d81d407987bba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe60a13c-bf", "ovs_interfaceid": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.071 244018 DEBUG oslo_concurrency.lockutils [req-ab4491f6-30c4-4ad3-890c-809de39fc1e8 req-89f3b589-10c8-44fb-a12e-3024c7b10daf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-b2915349-3797-40de-a554-2de79463723b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.071 244018 DEBUG nova.network.neutron [req-ab4491f6-30c4-4ad3-890c-809de39fc1e8 req-89f3b589-10c8-44fb-a12e-3024c7b10daf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Refreshing network info cache for port be60a13c-bf18-4eb6-ab12-d76c0abf9525 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.074 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Start _get_guest_xml network_info=[{"id": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "address": "fa:16:3e:78:7f:56", "network": {"id": "c6988a49-992e-4d2d-a359-182201377e6a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-391653903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09adeeb80de1411fb55d81d407987bba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe60a13c-bf", "ovs_interfaceid": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.081 244018 WARNING nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.093 244018 DEBUG nova.virt.libvirt.host [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.094 244018 DEBUG nova.virt.libvirt.host [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.097 244018 DEBUG nova.virt.libvirt.host [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.097 244018 DEBUG nova.virt.libvirt.host [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
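The two probes above (v1 miss, then v2 hit) indicate a host running the unified cgroup hierarchy, where the available controllers are listed in a single file at the cgroup root. The v2 check therefore reduces to roughly the following (a sketch of the standard kernel interface, not nova's exact code):

    from pathlib import Path

    def has_cgroupsv2_cpu_controller():
        controllers = Path('/sys/fs/cgroup/cgroup.controllers')
        return controllers.exists() and 'cpu' in controllers.read_text().split()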
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.097 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.098 244018 DEBUG nova.virt.hardware [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.098 244018 DEBUG nova.virt.hardware [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.098 244018 DEBUG nova.virt.hardware [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.098 244018 DEBUG nova.virt.hardware [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.099 244018 DEBUG nova.virt.hardware [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.099 244018 DEBUG nova.virt.hardware [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.099 244018 DEBUG nova.virt.hardware [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.099 244018 DEBUG nova.virt.hardware [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.099 244018 DEBUG nova.virt.hardware [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.100 244018 DEBUG nova.virt.hardware [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.100 244018 DEBUG nova.virt.hardware [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
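The topology search above enumerates (sockets, cores, threads) triples whose product equals the vCPU count, bounded by the 65536-per-dimension limits, then sorts them by preference. A compact re-implementation of the enumeration step (illustrative; nova's version additionally orders results against the preferred topology):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            for cores in range(1, min(vcpus, max_cores) + 1):
                for threads in range(1, min(vcpus, max_threads) + 1):
                    if sockets * cores * threads == vcpus:
                        yield sockets, cores, threads

    print(list(possible_topologies(1)))   # [(1, 1, 1)] -- the single topology logged above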
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.103 244018 DEBUG oslo_concurrency.processutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:27:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Feb 25 07:27:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Feb 25 07:27:30 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Feb 25 07:27:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:27:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2519301337' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.780 244018 DEBUG oslo_concurrency.processutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.677s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
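The "Running cmd (subprocess)" / "returned: 0" pair above is oslo's subprocess wrapper shelling out to the ceph CLI so the RBD driver can learn the monitor addresses. The equivalent standalone call, using the same command line as in the log:

    import json

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    mons = json.loads(out)['mons']    # monitor list consumed by the RBD driver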
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.808 244018 DEBUG nova.storage.rbd_utils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] rbd image b2915349-3797-40de-a554-2de79463723b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:27:30 np0005629333 nova_compute[244014]: 2026-02-25 12:27:30.814 244018 DEBUG oslo_concurrency.processutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:27:30
Feb 25 07:27:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:27:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:27:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['.rgw.root', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', '.mgr', 'default.rgw.meta', 'backups']
Feb 25 07:27:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:27:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:27:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1850460076' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.398 244018 DEBUG oslo_concurrency.processutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.400 244018 DEBUG nova.virt.libvirt.vif [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:27:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1911361008',display_name='tempest-InstanceActionsNegativeTestJSON-server-1911361008',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1911361008',id=60,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09adeeb80de1411fb55d81d407987bba',ramdisk_id='',reservation_id='r-dfoydzhg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1721839948',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1721839948-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:27:25Z,user_data=None,user_id='9f1b75cc204c46bba5f57dbfecdde9ad',uuid=b2915349-3797-40de-a554-2de79463723b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "address": "fa:16:3e:78:7f:56", "network": {"id": "c6988a49-992e-4d2d-a359-182201377e6a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-391653903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09adeeb80de1411fb55d81d407987bba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe60a13c-bf", "ovs_interfaceid": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.401 244018 DEBUG nova.network.os_vif_util [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Converting VIF {"id": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "address": "fa:16:3e:78:7f:56", "network": {"id": "c6988a49-992e-4d2d-a359-182201377e6a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-391653903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09adeeb80de1411fb55d81d407987bba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe60a13c-bf", "ovs_interfaceid": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.402 244018 DEBUG nova.network.os_vif_util [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:7f:56,bridge_name='br-int',has_traffic_filtering=True,id=be60a13c-bf18-4eb6-ab12-d76c0abf9525,network=Network(c6988a49-992e-4d2d-a359-182201377e6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe60a13c-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.403 244018 DEBUG nova.objects.instance [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lazy-loading 'pci_devices' on Instance uuid b2915349-3797-40de-a554-2de79463723b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.428 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:27:31 np0005629333 nova_compute[244014]:  <uuid>b2915349-3797-40de-a554-2de79463723b</uuid>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:  <name>instance-0000003c</name>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <nova:name>tempest-InstanceActionsNegativeTestJSON-server-1911361008</nova:name>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:27:30</nova:creationTime>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:27:31 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:        <nova:user uuid="9f1b75cc204c46bba5f57dbfecdde9ad">tempest-InstanceActionsNegativeTestJSON-1721839948-project-member</nova:user>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:        <nova:project uuid="09adeeb80de1411fb55d81d407987bba">tempest-InstanceActionsNegativeTestJSON-1721839948</nova:project>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:        <nova:port uuid="be60a13c-bf18-4eb6-ab12-d76c0abf9525">
Feb 25 07:27:31 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <entry name="serial">b2915349-3797-40de-a554-2de79463723b</entry>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <entry name="uuid">b2915349-3797-40de-a554-2de79463723b</entry>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/b2915349-3797-40de-a554-2de79463723b_disk">
Feb 25 07:27:31 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:27:31 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/b2915349-3797-40de-a554-2de79463723b_disk.config">
Feb 25 07:27:31 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:27:31 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:78:7f:56"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <target dev="tapbe60a13c-bf"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/b2915349-3797-40de-a554-2de79463723b/console.log" append="off"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:27:31 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:27:31 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:27:31 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:27:31 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.430 244018 DEBUG nova.compute.manager [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Preparing to wait for external event network-vif-plugged-be60a13c-bf18-4eb6-ab12-d76c0abf9525 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
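Nova registers this waiter before it defines and starts the guest; Neutron later completes it by POSTing to Nova's os-server-external-events API once OVN reports the port up. A hedged sketch of that delivery (the endpoint path and event body follow the public Nova API; the URL and token are placeholders):

    # Sketch of the "network-vif-plugged" event Neutron sends to Nova's
    # os-server-external-events API. URL and token are placeholders.
    import requests

    body = {"events": [{
        "name": "network-vif-plugged",
        "tag": "be60a13c-bf18-4eb6-ab12-d76c0abf9525",         # port ID from this log
        "server_uuid": "b2915349-3797-40de-a554-2de79463723b",
        "status": "completed",
    }]}
    resp = requests.post("http://nova-api.example.com/v2.1/os-server-external-events",
                         json=body, headers={"X-Auth-Token": "TOKEN"})
    resp.raise_for_status()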
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.430 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Acquiring lock "b2915349-3797-40de-a554-2de79463723b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.431 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "b2915349-3797-40de-a554-2de79463723b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.431 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "b2915349-3797-40de-a554-2de79463723b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.432 244018 DEBUG nova.virt.libvirt.vif [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:27:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1911361008',display_name='tempest-InstanceActionsNegativeTestJSON-server-1911361008',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1911361008',id=60,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='09adeeb80de1411fb55d81d407987bba',ramdisk_id='',reservation_id='r-dfoydzhg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1721839948',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1721839948-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:27:25Z,user_data=None,user_id='9f1b75cc204c46bba5f57dbfecdde9ad',uuid=b2915349-3797-40de-a554-2de79463723b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "address": "fa:16:3e:78:7f:56", "network": {"id": "c6988a49-992e-4d2d-a359-182201377e6a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-391653903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09adeeb80de1411fb55d81d407987bba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe60a13c-bf", "ovs_interfaceid": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.433 244018 DEBUG nova.network.os_vif_util [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Converting VIF {"id": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "address": "fa:16:3e:78:7f:56", "network": {"id": "c6988a49-992e-4d2d-a359-182201377e6a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-391653903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09adeeb80de1411fb55d81d407987bba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe60a13c-bf", "ovs_interfaceid": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.433 244018 DEBUG nova.network.os_vif_util [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:7f:56,bridge_name='br-int',has_traffic_filtering=True,id=be60a13c-bf18-4eb6-ab12-d76c0abf9525,network=Network(c6988a49-992e-4d2d-a359-182201377e6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe60a13c-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.434 244018 DEBUG os_vif [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:7f:56,bridge_name='br-int',has_traffic_filtering=True,id=be60a13c-bf18-4eb6-ab12-d76c0abf9525,network=Network(c6988a49-992e-4d2d-a359-182201377e6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe60a13c-bf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.435 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.435 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.435 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.440 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.440 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe60a13c-bf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.441 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbe60a13c-bf, col_values=(('external_ids', {'iface-id': 'be60a13c-bf18-4eb6-ab12-d76c0abf9525', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:78:7f:56', 'vm-uuid': 'b2915349-3797-40de-a554-2de79463723b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
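The AddPortCommand/DbSetCommand pair above corresponds roughly to the following ovsdbapp calls; a sketch, with the OVSDB socket endpoint as an assumption rather than something taken from this log:

    # Rough ovsdbapp equivalent of the transaction logged above. The db.sock
    # endpoint is an assumption; port names and external_ids come from the log.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                          "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tapbe60a13c-bf", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tapbe60a13c-bf",
            ("external_ids", {"iface-id": "be60a13c-bf18-4eb6-ab12-d76c0abf9525",
                              "attached-mac": "fa:16:3e:78:7f:56"})))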
Feb 25 07:27:31 np0005629333 NetworkManager[49836]: <info>  [1772022451.4437] manager: (tapbe60a13c-bf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/230)
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.442 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.445 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.452 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.453 244018 INFO os_vif [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:7f:56,bridge_name='br-int',has_traffic_filtering=True,id=be60a13c-bf18-4eb6-ab12-d76c0abf9525,network=Network(c6988a49-992e-4d2d-a359-182201377e6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe60a13c-bf')#033[00m
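The plug succeeded through os-vif's generic entry point. A self-contained sketch of that call path (field values copied from the VIFOpenVSwitch object logged above; this is illustrative, not Nova's exact code):

    # Sketch of the os_vif.plug() call behind "Successfully plugged vif".
    # Field values are copied from the VIFOpenVSwitch object in the log.
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()
    my_vif = vif.VIFOpenVSwitch(
        id="be60a13c-bf18-4eb6-ab12-d76c0abf9525",
        address="fa:16:3e:78:7f:56",
        vif_name="tapbe60a13c-bf",
        bridge_name="br-int",
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id="be60a13c-bf18-4eb6-ab12-d76c0abf9525"),
        network=network.Network(id="c6988a49-992e-4d2d-a359-182201377e6a"))
    inst = instance_info.InstanceInfo(
        uuid="b2915349-3797-40de-a554-2de79463723b", name="instance-0000003c")
    os_vif.plug(my_vif, inst)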
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.516 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.517 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.518 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] No VIF found with MAC fa:16:3e:78:7f:56, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.518 244018 INFO nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Using config drive#033[00m
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.551 244018 DEBUG nova.storage.rbd_utils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] rbd image b2915349-3797-40de-a554-2de79463723b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:27:31 np0005629333 podman[295276]: 2026-02-25 12:27:31.561143365 +0000 UTC m=+0.075142228 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.563 244018 INFO nova.virt.libvirt.driver [None req-6ead0b10-c826-4c3f-b4f0-d83eb046807a 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Snapshot image upload complete#033[00m
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.564 244018 INFO nova.compute.manager [None req-6ead0b10-c826-4c3f-b4f0-d83eb046807a 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Took 5.91 seconds to snapshot the instance on the hypervisor.#033[00m
Feb 25 07:27:31 np0005629333 podman[295277]: 2026-02-25 12:27:31.570055827 +0000 UTC m=+0.078529183 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:27:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:27:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:27:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:27:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:27:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:27:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.602 244018 DEBUG nova.network.neutron [req-ab4491f6-30c4-4ad3-890c-809de39fc1e8 req-89f3b589-10c8-44fb-a12e-3024c7b10daf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Updated VIF entry in instance network info cache for port be60a13c-bf18-4eb6-ab12-d76c0abf9525. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.602 244018 DEBUG nova.network.neutron [req-ab4491f6-30c4-4ad3-890c-809de39fc1e8 req-89f3b589-10c8-44fb-a12e-3024c7b10daf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Updating instance_info_cache with network_info: [{"id": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "address": "fa:16:3e:78:7f:56", "network": {"id": "c6988a49-992e-4d2d-a359-182201377e6a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-391653903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09adeeb80de1411fb55d81d407987bba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe60a13c-bf", "ovs_interfaceid": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.629 244018 DEBUG oslo_concurrency.lockutils [req-ab4491f6-30c4-4ad3-890c-809de39fc1e8 req-89f3b589-10c8-44fb-a12e-3024c7b10daf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-b2915349-3797-40de-a554-2de79463723b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:27:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 737 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 5.8 MiB/s wr, 335 op/s
Feb 25 07:27:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:27:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:27:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:27:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:27:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:27:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:27:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:27:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:27:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:27:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.970 244018 INFO nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Creating config drive at /var/lib/nova/instances/b2915349-3797-40de-a554-2de79463723b/disk.config#033[00m
Feb 25 07:27:31 np0005629333 nova_compute[244014]: 2026-02-25 12:27:31.980 244018 DEBUG oslo_concurrency.processutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b2915349-3797-40de-a554-2de79463723b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5dpdrarh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.133 244018 DEBUG oslo_concurrency.processutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b2915349-3797-40de-a554-2de79463723b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5dpdrarh" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.167 244018 DEBUG nova.storage.rbd_utils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] rbd image b2915349-3797-40de-a554-2de79463723b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.172 244018 DEBUG oslo_concurrency.processutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b2915349-3797-40de-a554-2de79463723b/disk.config b2915349-3797-40de-a554-2de79463723b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.317 244018 DEBUG oslo_concurrency.processutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b2915349-3797-40de-a554-2de79463723b/disk.config b2915349-3797-40de-a554-2de79463723b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.318 244018 INFO nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Deleting local config drive /var/lib/nova/instances/b2915349-3797-40de-a554-2de79463723b/disk.config because it was imported into RBD.#033[00m
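The three steps above (mkisofs builds the config-2 ISO, rbd import copies it into the vms pool, the local file is deleted) can be replayed by hand. A sketch using subprocess, with flags and paths copied from the log and the staging directory standing in for Nova's temp dir:

    # Sketch of the config-drive flow logged above. The staging directory is
    # a placeholder; everything else is copied from the log lines.
    import os
    import subprocess

    iso = "/var/lib/nova/instances/b2915349-3797-40de-a554-2de79463723b/disk.config"
    subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                    "-allow-multidot", "-l", "-quiet", "-J", "-r",
                    "-V", "config-2", "/tmp/config_drive_staging"], check=True)
    subprocess.run(["rbd", "import", "--pool", "vms", iso,
                    "b2915349-3797-40de-a554-2de79463723b_disk.config",
                    "--image-format=2", "--id", "openstack",
                    "--conf", "/etc/ceph/ceph.conf"], check=True)
    os.remove(iso)  # mirrors "Deleting local config drive ... imported into RBD"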
Feb 25 07:27:32 np0005629333 kernel: tapbe60a13c-bf: entered promiscuous mode
Feb 25 07:27:32 np0005629333 NetworkManager[49836]: <info>  [1772022452.3761] manager: (tapbe60a13c-bf): new Tun device (/org/freedesktop/NetworkManager/Devices/231)
Feb 25 07:27:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:32Z|00514|binding|INFO|Claiming lport be60a13c-bf18-4eb6-ab12-d76c0abf9525 for this chassis.
Feb 25 07:27:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:32Z|00515|binding|INFO|be60a13c-bf18-4eb6-ab12-d76c0abf9525: Claiming fa:16:3e:78:7f:56 10.100.0.5
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.418 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.423 244018 INFO nova.virt.libvirt.driver [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Snapshot image upload complete#033[00m
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.423 244018 DEBUG nova.compute.manager [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.428 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:7f:56 10.100.0.5'], port_security=['fa:16:3e:78:7f:56 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b2915349-3797-40de-a554-2de79463723b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6988a49-992e-4d2d-a359-182201377e6a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09adeeb80de1411fb55d81d407987bba', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a7fbc9cc-1faa-472a-913b-870c01f46476', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=129e28e8-2627-4723-80a9-1e2113f78748, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=be60a13c-bf18-4eb6-ab12-d76c0abf9525) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.429 157129 INFO neutron.agent.ovn.metadata.agent [-] Port be60a13c-bf18-4eb6-ab12-d76c0abf9525 in datapath c6988a49-992e-4d2d-a359-182201377e6a bound to our chassis#033[00m
Feb 25 07:27:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:32Z|00516|binding|INFO|Setting lport be60a13c-bf18-4eb6-ab12-d76c0abf9525 ovn-installed in OVS
Feb 25 07:27:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:32Z|00517|binding|INFO|Setting lport be60a13c-bf18-4eb6-ab12-d76c0abf9525 up in Southbound
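At this point the chassis has claimed the logical port and marked it up in the Southbound database. A sketch of verifying that binding from the command line (assumes ovn-sbctl is installed and can reach the Southbound DB from wherever it runs):

    # Sketch: inspect the Port_Binding that ovn-controller just claimed.
    # Assumes ovn-sbctl can reach the Southbound DB.
    import subprocess

    out = subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding",
         "logical_port=be60a13c-bf18-4eb6-ab12-d76c0abf9525"],
        capture_output=True, text=True, check=True).stdout
    print(out)  # shows chassis, mac, and up once the claim has landed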
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.430 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c6988a49-992e-4d2d-a359-182201377e6a#033[00m
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.431 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.434 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:32 np0005629333 systemd-udevd[295388]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.444 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bc66d52b-b050-48c8-8239-2b0886ee430a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.445 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc6988a49-91 in ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
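The agent creates that veth pair through privsep and pyroute2; the equivalent iproute2 operations are roughly as follows (names copied from the log; run as root):

    # Sketch of the veth provisioning step above as plain iproute2 calls
    # (the agent itself goes through privsep/pyroute2). Run as root.
    import subprocess

    ns = "ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a"
    subprocess.run(["ip", "netns", "add", ns], check=False)  # may already exist
    subprocess.run(["ip", "link", "add", "tapc6988a49-90",
                    "type", "veth", "peer", "name", "tapc6988a49-91"], check=True)
    subprocess.run(["ip", "link", "set", "tapc6988a49-91", "netns", ns], check=True)
    subprocess.run(["ip", "netns", "exec", ns,
                    "ip", "link", "set", "tapc6988a49-91", "up"], check=True)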
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.447 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc6988a49-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.447 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0f7112e1-47bd-4f5b-a4de-219cdcf3ef16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.448 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[08b62aef-4296-4cef-a8cc-69f1f7a4dc2e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:32 np0005629333 NetworkManager[49836]: <info>  [1772022452.4503] device (tapbe60a13c-bf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:27:32 np0005629333 NetworkManager[49836]: <info>  [1772022452.4522] device (tapbe60a13c-bf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:27:32 np0005629333 systemd-machined[210048]: New machine qemu-67-instance-0000003c.
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.458 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[54c22e39-e3b9-4c27-8150-bd08eaabf8c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:32 np0005629333 systemd[1]: Started Virtual Machine qemu-67-instance-0000003c.
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.480 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[44a17314-84a9-40c2-82f0-723dd61dfdaa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.514 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ec223556-ec71-4e1e-8d7e-a271bb2e4581]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.519 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b6298527-1037-4b46-885e-f9ee087997ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:32 np0005629333 NetworkManager[49836]: <info>  [1772022452.5214] manager: (tapc6988a49-90): new Veth device (/org/freedesktop/NetworkManager/Devices/232)
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.552 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[66f66186-f6be-45cb-bafa-bc20160b3b6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.553 244018 INFO nova.compute.manager [None req-f7aff48c-c972-466c-ae6d-c69218c95f2e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Pausing#033[00m
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.554 244018 DEBUG nova.objects.instance [None req-f7aff48c-c972-466c-ae6d-c69218c95f2e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'flavor' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.556 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c34bfe12-f6f3-4f19-ac4e-7e4dfede6d75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:32 np0005629333 NetworkManager[49836]: <info>  [1772022452.5761] device (tapc6988a49-90): carrier: link connected
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.584 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[85ca81bf-f045-4b55-940f-93ae69f6ac5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.590 244018 INFO nova.compute.manager [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Shelve offloading#033[00m
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.597 244018 INFO nova.virt.libvirt.driver [-] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Instance destroyed successfully.#033[00m
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.598 244018 DEBUG nova.compute.manager [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.600 244018 DEBUG oslo_concurrency.lockutils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.600 244018 DEBUG oslo_concurrency.lockutils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquired lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.600 244018 DEBUG nova.network.neutron [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.606 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc5d82c-b811-4028-9416-68d2ac9be700]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6988a49-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:db:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 154], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442198, 'reachable_time': 35522, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295427, 'error': None, 'target': 'ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.617 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[78ad5fcd-2648-4ef7-9bbe-48fb50462eeb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7f:db8e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442198, 'tstamp': 442198}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295428, 'error': None, 'target': 'ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.633 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9edd0707-7ea9-4b4e-b6f1-b74d517f192a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6988a49-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7f:db:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 154], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442198, 'reachable_time': 35522, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295429, 'error': None, 'target': 'ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.657 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[740fffc7-e82d-41f3-b74e-a1a26b7576ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
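The long RTM_NEWLINK/RTM_NEWADDR payloads above are raw pyroute2 netlink replies crossing the privsep boundary. Asking the same question directly looks roughly like this (requires pyroute2 and root; namespace and interface names are taken from the log):

    # Sketch: query the link the agent just configured, directly via pyroute2
    # (the dumps above are the same data relayed through privsep).
    from pyroute2 import NetNS

    with NetNS("ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a") as ns:
        (idx,) = ns.link_lookup(ifname="tapc6988a49-91")
        (link,) = ns.get_links(idx)
        print(link.get_attr("IFLA_ADDRESS"),    # fa:16:3e:7f:db:8e in the dump
              link.get_attr("IFLA_OPERSTATE"))  # 'UP'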
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.687 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022452.6873376, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.687 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.689 244018 DEBUG nova.compute.manager [None req-f7aff48c-c972-466c-ae6d-c69218c95f2e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.710 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4d3adbf6-679a-4a5e-ab6e-d36084bda2f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.712 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6988a49-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.712 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.713 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6988a49-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:32 np0005629333 NetworkManager[49836]: <info>  [1772022452.7166] manager: (tapc6988a49-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/233)
Feb 25 07:27:32 np0005629333 kernel: tapc6988a49-90: entered promiscuous mode
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.715 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.719 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.721 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.725 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc6988a49-90, col_values=(('external_ids', {'iface-id': '1b406748-e819-4bf2-b148-753ecf2ebbe2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.727 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
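The integers in that line are Nova's power-state codes: the database still says 1 (running) while libvirt now reports 3 (paused), so the manager records the pause. For reference, a local mirror of the relevant nova.compute.power_state values:

    # Mirror of the relevant nova.compute.power_state values, so the line
    # "DB power_state: 1, VM power_state: 3" reads: DB=running, VM=paused.
    POWER_STATE = {0: "nostate", 1: "running", 3: "paused",
                   4: "shutdown", 6: "crashed", 7: "suspended"}
    print(POWER_STATE[1], "->", POWER_STATE[3])  # running -> paused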
Feb 25 07:27:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:32Z|00518|binding|INFO|Releasing lport 1b406748-e819-4bf2-b148-753ecf2ebbe2 from this chassis (sb_readonly=0)
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.728 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.735 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.737 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.739 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c6988a49-992e-4d2d-a359-182201377e6a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c6988a49-992e-4d2d-a359-182201377e6a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.740 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0876a098-79eb-4013-a91b-c8ea01cd702c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.741 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-c6988a49-992e-4d2d-a359-182201377e6a
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/c6988a49-992e-4d2d-a359-182201377e6a.pid.haproxy
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID c6988a49-992e-4d2d-a359-182201377e6a
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:27:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:32.741 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a', 'env', 'PROCESS_TAG=haproxy-c6988a49-992e-4d2d-a359-182201377e6a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c6988a49-992e-4d2d-a359-182201377e6a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
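
Everything needed to reproduce that spawn by hand is in the two entries above: the rendered haproxy config is written to /var/lib/neutron/ovn-metadata-proxy/<network>.conf and haproxy is launched inside the ovnmeta-<network> namespace through rootwrap. A sketch in Python with the command list copied from the log (running it needs the same rootwrap and namespace setup, so treat it as illustrative):

    import subprocess

    network_id = "c6988a49-992e-4d2d-a359-182201377e6a"  # network UUID from the log
    cmd = [
        "sudo", "neutron-rootwrap", "/etc/neutron/rootwrap.conf",
        "ip", "netns", "exec", f"ovnmeta-{network_id}",
        "env", f"PROCESS_TAG=haproxy-{network_id}",
        "haproxy", "-f", f"/var/lib/neutron/ovn-metadata-proxy/{network_id}.conf",
    ]
    # haproxy backgrounds itself because the rendered config sets 'daemon'
    subprocess.run(cmd, check=True)
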
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.977 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022452.9768987, b2915349-3797-40de-a554-2de79463723b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.978 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b2915349-3797-40de-a554-2de79463723b] VM Started (Lifecycle Event)#033[00m
Feb 25 07:27:32 np0005629333 nova_compute[244014]: 2026-02-25 12:27:32.997 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b2915349-3797-40de-a554-2de79463723b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:33 np0005629333 nova_compute[244014]: 2026-02-25 12:27:33.001 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022452.9806504, b2915349-3797-40de-a554-2de79463723b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:27:33 np0005629333 nova_compute[244014]: 2026-02-25 12:27:33.001 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b2915349-3797-40de-a554-2de79463723b] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:27:33 np0005629333 nova_compute[244014]: 2026-02-25 12:27:33.024 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b2915349-3797-40de-a554-2de79463723b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:33 np0005629333 nova_compute[244014]: 2026-02-25 12:27:33.027 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b2915349-3797-40de-a554-2de79463723b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:27:33 np0005629333 nova_compute[244014]: 2026-02-25 12:27:33.047 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b2915349-3797-40de-a554-2de79463723b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:27:33 np0005629333 podman[295508]: 2026-02-25 12:27:33.176470721 +0000 UTC m=+0.071282438 container create e11b39fafd6ac4a45f29365e08f25ee0cb228d4f432d74cc23c15520f7ce33c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:27:33 np0005629333 podman[295508]: 2026-02-25 12:27:33.143900039 +0000 UTC m=+0.038711796 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:27:33 np0005629333 systemd[1]: Started libpod-conmon-e11b39fafd6ac4a45f29365e08f25ee0cb228d4f432d74cc23c15520f7ce33c9.scope.
Feb 25 07:27:33 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:27:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21b4837bf9aa32acef51cbe6f98489e364315734f563f075261a323ae88457b7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:27:33 np0005629333 podman[295508]: 2026-02-25 12:27:33.289225912 +0000 UTC m=+0.184037659 container init e11b39fafd6ac4a45f29365e08f25ee0cb228d4f432d74cc23c15520f7ce33c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:27:33 np0005629333 podman[295508]: 2026-02-25 12:27:33.295960653 +0000 UTC m=+0.190772370 container start e11b39fafd6ac4a45f29365e08f25ee0cb228d4f432d74cc23c15520f7ce33c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:27:33 np0005629333 neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a[295523]: [NOTICE]   (295527) : New worker (295529) forked
Feb 25 07:27:33 np0005629333 neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a[295523]: [NOTICE]   (295527) : Loading success.
Feb 25 07:27:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 849 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 20 MiB/s rd, 16 MiB/s wr, 482 op/s
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.164 244018 DEBUG nova.compute.manager [req-98a8adfe-752d-4342-9f52-3c714b9a21e5 req-5bba8c33-a33a-4616-a738-ae7c6155292d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Received event network-vif-plugged-be60a13c-bf18-4eb6-ab12-d76c0abf9525 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.164 244018 DEBUG oslo_concurrency.lockutils [req-98a8adfe-752d-4342-9f52-3c714b9a21e5 req-5bba8c33-a33a-4616-a738-ae7c6155292d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b2915349-3797-40de-a554-2de79463723b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.165 244018 DEBUG oslo_concurrency.lockutils [req-98a8adfe-752d-4342-9f52-3c714b9a21e5 req-5bba8c33-a33a-4616-a738-ae7c6155292d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b2915349-3797-40de-a554-2de79463723b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.165 244018 DEBUG oslo_concurrency.lockutils [req-98a8adfe-752d-4342-9f52-3c714b9a21e5 req-5bba8c33-a33a-4616-a738-ae7c6155292d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b2915349-3797-40de-a554-2de79463723b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.166 244018 DEBUG nova.compute.manager [req-98a8adfe-752d-4342-9f52-3c714b9a21e5 req-5bba8c33-a33a-4616-a738-ae7c6155292d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Processing event network-vif-plugged-be60a13c-bf18-4eb6-ab12-d76c0abf9525 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
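
The acquire/release bracket around the event pop (waited 0.000s, held 0.000s) is oslo.concurrency's standard named-lock pattern. A minimal sketch using the "<instance-uuid>-events" naming visible in the log (the handler body is a placeholder):

    from oslo_concurrency import lockutils

    instance_uuid = "b2915349-3797-40de-a554-2de79463723b"
    with lockutils.lock(f"{instance_uuid}-events"):
        # pop and dispatch the pending network-vif-plugged event; wait and
        # hold durations are logged exactly as in the lines above
        pass
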
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.167 244018 DEBUG nova.compute.manager [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.171 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022454.1715372, b2915349-3797-40de-a554-2de79463723b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.172 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b2915349-3797-40de-a554-2de79463723b] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.176 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.181 244018 INFO nova.virt.libvirt.driver [-] [instance: b2915349-3797-40de-a554-2de79463723b] Instance spawned successfully.#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.181 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.206 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b2915349-3797-40de-a554-2de79463723b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.215 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b2915349-3797-40de-a554-2de79463723b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.220 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.221 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.222 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.222 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.223 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.223 244018 DEBUG nova.virt.libvirt.driver [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.247 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b2915349-3797-40de-a554-2de79463723b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.281 244018 INFO nova.compute.manager [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Took 8.98 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.282 244018 DEBUG nova.compute.manager [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.396 244018 INFO nova.compute.manager [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Took 10.30 seconds to build instance.#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.423 244018 DEBUG oslo_concurrency.lockutils [None req-b5bce04f-9c6b-4dc0-831a-930543d86369 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "b2915349-3797-40de-a554-2de79463723b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.392s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.685 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.723 244018 DEBUG nova.compute.manager [None req-06e7ccc1-d7db-436e-9a38-85d40724ff2b 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.784 244018 INFO nova.compute.manager [None req-06e7ccc1-d7db-436e-9a38-85d40724ff2b 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] instance snapshotting#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.793 244018 INFO nova.compute.manager [None req-f71c8029-49a4-47cd-be1d-685b1ed79a50 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Unpausing#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.794 244018 DEBUG nova.objects.instance [None req-f71c8029-49a4-47cd-be1d-685b1ed79a50 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'flavor' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.830 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022454.8306704, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.831 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:27:34 np0005629333 virtqemud[243235]: argument unsupported: QEMU guest agent is not configured
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.835 244018 DEBUG nova.virt.libvirt.guest [None req-f71c8029-49a4-47cd-be1d-685b1ed79a50 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.835 244018 DEBUG nova.compute.manager [None req-f71c8029-49a4-47cd-be1d-685b1ed79a50 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.852 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.864 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:27:34 np0005629333 nova_compute[244014]: 2026-02-25 12:27:34.892 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Feb 25 07:27:35 np0005629333 nova_compute[244014]: 2026-02-25 12:27:35.070 244018 INFO nova.virt.libvirt.driver [None req-06e7ccc1-d7db-436e-9a38-85d40724ff2b 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Beginning live snapshot process#033[00m
Feb 25 07:27:35 np0005629333 nova_compute[244014]: 2026-02-25 12:27:35.131 244018 DEBUG nova.network.neutron [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updating instance_info_cache with network_info: [{"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:27:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:27:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Feb 25 07:27:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Feb 25 07:27:35 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Feb 25 07:27:35 np0005629333 nova_compute[244014]: 2026-02-25 12:27:35.157 244018 DEBUG oslo_concurrency.lockutils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Releasing lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:27:35 np0005629333 nova_compute[244014]: 2026-02-25 12:27:35.238 244018 DEBUG nova.virt.libvirt.imagebackend [None req-06e7ccc1-d7db-436e-9a38-85d40724ff2b 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Feb 25 07:27:35 np0005629333 nova_compute[244014]: 2026-02-25 12:27:35.480 244018 DEBUG nova.storage.rbd_utils [None req-06e7ccc1-d7db-436e-9a38-85d40724ff2b 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] creating snapshot(164300b8e5ac4fcebf239e5a5f4ee14f) on rbd image(160a4e3f-b197-4b82-a2ff-cebf79df47df_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 25 07:27:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 849 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 19 MiB/s rd, 15 MiB/s wr, 468 op/s
Feb 25 07:27:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Feb 25 07:27:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Feb 25 07:27:36 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Feb 25 07:27:36 np0005629333 nova_compute[244014]: 2026-02-25 12:27:36.209 244018 DEBUG nova.storage.rbd_utils [None req-06e7ccc1-d7db-436e-9a38-85d40724ff2b 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] cloning vms/160a4e3f-b197-4b82-a2ff-cebf79df47df_disk@164300b8e5ac4fcebf239e5a5f4ee14f to images/d6d3f72c-5f81-4cf6-8807-0685977b2a2c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 25 07:27:36 np0005629333 nova_compute[244014]: 2026-02-25 12:27:36.318 244018 DEBUG nova.storage.rbd_utils [None req-06e7ccc1-d7db-436e-9a38-85d40724ff2b 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] flattening images/d6d3f72c-5f81-4cf6-8807-0685977b2a2c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
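
The live-snapshot path here is the classic RBD shallow-copy dance: snapshot the ephemeral disk in the vms pool, clone that snapshot into the images pool, then flatten the clone so the temporary snapshot can be dropped (the remove_snap at 12:27:36.928 below). A rough equivalent with the python-rbd bindings, using the pool and image names from the log (the protect/unprotect calls are an assumption; nova's exact snapshot options are not shown in these lines):

    import rados
    import rbd

    SRC = "160a4e3f-b197-4b82-a2ff-cebf79df47df_disk"
    SNAP = "164300b8e5ac4fcebf239e5a5f4ee14f"
    DST = "d6d3f72c-5f81-4cf6-8807-0685977b2a2c"

    with rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack") as cluster:
        with cluster.open_ioctx("vms") as vms, cluster.open_ioctx("images") as images:
            with rbd.Image(vms, SRC) as src:
                src.create_snap(SNAP)
                src.protect_snap(SNAP)      # assumption: clones need a protected snap
            rbd.RBD().clone(vms, SRC, SNAP, images, DST)
            with rbd.Image(images, DST) as dst:
                dst.flatten()               # detach the clone from its parent snapshot
            with rbd.Image(vms, SRC) as src:
                src.unprotect_snap(SNAP)
                src.remove_snap(SNAP)       # matches the remove_snap line below
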
Feb 25 07:27:36 np0005629333 nova_compute[244014]: 2026-02-25 12:27:36.436 244018 INFO nova.virt.libvirt.driver [-] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Instance destroyed successfully.#033[00m
Feb 25 07:27:36 np0005629333 nova_compute[244014]: 2026-02-25 12:27:36.438 244018 DEBUG nova.objects.instance [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'resources' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:36 np0005629333 nova_compute[244014]: 2026-02-25 12:27:36.443 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:36 np0005629333 nova_compute[244014]: 2026-02-25 12:27:36.460 244018 DEBUG nova.virt.libvirt.vif [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:25:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2111996537',display_name='tempest-ServerActionsTestOtherB-server-2111996537',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2111996537',id=47,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFQ4/ViXjDl7sfXbfHy1Rj0+WVS30xG/+F445xoJQyz45huoziS5Ge/69+H9D3xA69BQvF6LAGpEuOAI4T0oNr5YUcMHaOf8cBGICZoqOX1SEjGVzLtcjONvsNISgitMaQ==',key_name='tempest-keypair-221288102',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:25:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c85a955249394f0faf7c890f5cd0df32',ramdisk_id='',reservation_id='r-huq779yj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1539976047',owner_user_name='tempest-ServerActionsTestOtherB-1539976047-project-member',shelved_at='2026-02-25T12:27:32.423369',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='a1bfc991-8c23-4709-b2d6-80b0d18f6430'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:27:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b774fd0c04fc403d9ddb205f1e6abbc5',uuid=b8086e43-4c45-422f-a3b5-fa665c256b30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:27:36 np0005629333 nova_compute[244014]: 2026-02-25 12:27:36.460 244018 DEBUG nova.network.os_vif_util [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converting VIF {"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:27:36 np0005629333 nova_compute[244014]: 2026-02-25 12:27:36.462 244018 DEBUG nova.network.os_vif_util [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:53:87,bridge_name='br-int',has_traffic_filtering=True,id=abdb97b5-8e9d-4929-af6f-bfb06c067878,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdb97b5-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:27:36 np0005629333 nova_compute[244014]: 2026-02-25 12:27:36.462 244018 DEBUG os_vif [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:53:87,bridge_name='br-int',has_traffic_filtering=True,id=abdb97b5-8e9d-4929-af6f-bfb06c067878,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdb97b5-8e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:27:36 np0005629333 nova_compute[244014]: 2026-02-25 12:27:36.465 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:36 np0005629333 nova_compute[244014]: 2026-02-25 12:27:36.466 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabdb97b5-8e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:36 np0005629333 nova_compute[244014]: 2026-02-25 12:27:36.471 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:36 np0005629333 nova_compute[244014]: 2026-02-25 12:27:36.474 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:27:36 np0005629333 nova_compute[244014]: 2026-02-25 12:27:36.477 244018 INFO os_vif [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:53:87,bridge_name='br-int',has_traffic_filtering=True,id=abdb97b5-8e9d-4929-af6f-bfb06c067878,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdb97b5-8e')#033[00m
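
The unplug itself reduces to one OVSDB transaction deleting the tap port from br-int (the DelPortCommand at 12:27:36.466 above). Roughly the same call through ovsdbapp's Open_vSwitch API; the socket path here is an assumption, as os-vif wires up its own connection:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    # if_exists=True mirrors the DelPortCommand arguments printed in the log
    api.del_port("tapabdb97b5-8e", bridge="br-int", if_exists=True).execute(check_error=True)
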
Feb 25 07:27:36 np0005629333 nova_compute[244014]: 2026-02-25 12:27:36.928 244018 DEBUG nova.storage.rbd_utils [None req-06e7ccc1-d7db-436e-9a38-85d40724ff2b 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] removing snapshot(164300b8e5ac4fcebf239e5a5f4ee14f) on rbd image(160a4e3f-b197-4b82-a2ff-cebf79df47df_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.002 244018 DEBUG nova.compute.manager [req-92cfa7d6-b6b8-4adb-870b-99fa5fd8604f req-41d94204-22a2-4fff-a09d-43420daf732a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Received event network-vif-plugged-be60a13c-bf18-4eb6-ab12-d76c0abf9525 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.003 244018 DEBUG oslo_concurrency.lockutils [req-92cfa7d6-b6b8-4adb-870b-99fa5fd8604f req-41d94204-22a2-4fff-a09d-43420daf732a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b2915349-3797-40de-a554-2de79463723b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.003 244018 DEBUG oslo_concurrency.lockutils [req-92cfa7d6-b6b8-4adb-870b-99fa5fd8604f req-41d94204-22a2-4fff-a09d-43420daf732a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b2915349-3797-40de-a554-2de79463723b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.003 244018 DEBUG oslo_concurrency.lockutils [req-92cfa7d6-b6b8-4adb-870b-99fa5fd8604f req-41d94204-22a2-4fff-a09d-43420daf732a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b2915349-3797-40de-a554-2de79463723b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.004 244018 DEBUG nova.compute.manager [req-92cfa7d6-b6b8-4adb-870b-99fa5fd8604f req-41d94204-22a2-4fff-a09d-43420daf732a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] No waiting events found dispatching network-vif-plugged-be60a13c-bf18-4eb6-ab12-d76c0abf9525 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.004 244018 WARNING nova.compute.manager [req-92cfa7d6-b6b8-4adb-870b-99fa5fd8604f req-41d94204-22a2-4fff-a09d-43420daf732a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Received unexpected event network-vif-plugged-be60a13c-bf18-4eb6-ab12-d76c0abf9525 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.045 244018 DEBUG nova.compute.manager [req-ae2831d7-28d3-45f1-9bc5-18728eca7173 req-f3cc3155-a77e-4f95-8a25-1f1912b21e67 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received event network-changed-abdb97b5-8e9d-4929-af6f-bfb06c067878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.045 244018 DEBUG nova.compute.manager [req-ae2831d7-28d3-45f1-9bc5-18728eca7173 req-f3cc3155-a77e-4f95-8a25-1f1912b21e67 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Refreshing instance network info cache due to event network-changed-abdb97b5-8e9d-4929-af6f-bfb06c067878. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.046 244018 DEBUG oslo_concurrency.lockutils [req-ae2831d7-28d3-45f1-9bc5-18728eca7173 req-f3cc3155-a77e-4f95-8a25-1f1912b21e67 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.046 244018 DEBUG oslo_concurrency.lockutils [req-ae2831d7-28d3-45f1-9bc5-18728eca7173 req-f3cc3155-a77e-4f95-8a25-1f1912b21e67 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.047 244018 DEBUG nova.network.neutron [req-ae2831d7-28d3-45f1-9bc5-18728eca7173 req-f3cc3155-a77e-4f95-8a25-1f1912b21e67 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Refreshing network info cache for port abdb97b5-8e9d-4929-af6f-bfb06c067878 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.066 244018 INFO nova.virt.libvirt.driver [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Deleting instance files /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30_del#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.068 244018 INFO nova.virt.libvirt.driver [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Deletion of /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30_del complete#033[00m
Feb 25 07:27:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Feb 25 07:27:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Feb 25 07:27:37 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.172 244018 INFO nova.scheduler.client.report [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Deleted allocations for instance b8086e43-4c45-422f-a3b5-fa665c256b30#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.200 244018 DEBUG nova.storage.rbd_utils [None req-06e7ccc1-d7db-436e-9a38-85d40724ff2b 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] creating snapshot(snap) on rbd image(d6d3f72c-5f81-4cf6-8807-0685977b2a2c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.277 244018 DEBUG oslo_concurrency.lockutils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.278 244018 DEBUG oslo_concurrency.lockutils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.294 244018 DEBUG oslo_concurrency.lockutils [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Acquiring lock "b2915349-3797-40de-a554-2de79463723b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.295 244018 DEBUG oslo_concurrency.lockutils [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "b2915349-3797-40de-a554-2de79463723b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.296 244018 DEBUG oslo_concurrency.lockutils [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Acquiring lock "b2915349-3797-40de-a554-2de79463723b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.297 244018 DEBUG oslo_concurrency.lockutils [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "b2915349-3797-40de-a554-2de79463723b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.299 244018 DEBUG oslo_concurrency.lockutils [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "b2915349-3797-40de-a554-2de79463723b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.302 244018 INFO nova.compute.manager [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Terminating instance#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.304 244018 DEBUG nova.compute.manager [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:27:37 np0005629333 kernel: tapbe60a13c-bf (unregistering): left promiscuous mode
Feb 25 07:27:37 np0005629333 NetworkManager[49836]: <info>  [1772022457.3523] device (tapbe60a13c-bf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.359 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:37Z|00519|binding|INFO|Releasing lport be60a13c-bf18-4eb6-ab12-d76c0abf9525 from this chassis (sb_readonly=0)
Feb 25 07:27:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:37Z|00520|binding|INFO|Setting lport be60a13c-bf18-4eb6-ab12-d76c0abf9525 down in Southbound
Feb 25 07:27:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:37Z|00521|binding|INFO|Removing iface tapbe60a13c-bf ovn-installed in OVS
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.365 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:37.370 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:7f:56 10.100.0.5'], port_security=['fa:16:3e:78:7f:56 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b2915349-3797-40de-a554-2de79463723b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6988a49-992e-4d2d-a359-182201377e6a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '09adeeb80de1411fb55d81d407987bba', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a7fbc9cc-1faa-472a-913b-870c01f46476', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=129e28e8-2627-4723-80a9-1e2113f78748, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=be60a13c-bf18-4eb6-ab12-d76c0abf9525) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:27:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:37.371 157129 INFO neutron.agent.ovn.metadata.agent [-] Port be60a13c-bf18-4eb6-ab12-d76c0abf9525 in datapath c6988a49-992e-4d2d-a359-182201377e6a unbound from our chassis#033[00m
Feb 25 07:27:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:37.372 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c6988a49-992e-4d2d-a359-182201377e6a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:27:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:37.373 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5c06cfaf-abc5-4de6-838e-b7b7cef3ae3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:37.373 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a namespace which is not needed anymore#033[00m
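
The agent reacts to the Port_Binding update above through ovsdbapp's row-event machinery: once no VIF bound to this chassis remains in datapath c6988a49-992e-4d2d-a359-182201377e6a, it tears down the ovnmeta- namespace. A skeletal version of such an event class; the constructor arguments mirror the "Matched UPDATE" line, while the run() body is a placeholder:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None,
            # exactly as printed in the matched-event line above
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # old.chassis held this chassis and row.chassis is now empty:
            # the port was unbound, so re-evaluate whether the metadata
            # namespace for row.datapath is still needed
            pass
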
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.384 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.397 244018 DEBUG oslo_concurrency.processutils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
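
That ceph df probe goes through oslo.concurrency's processutils, which is also what emits the "Running cmd (subprocess)" line. The equivalent call, with flags copied verbatim from the log:

    from oslo_concurrency import processutils

    stdout, stderr = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    # stdout holds the JSON pool/usage report used for storage capacity accounting
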
Feb 25 07:27:37 np0005629333 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Feb 25 07:27:37 np0005629333 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003c.scope: Consumed 3.726s CPU time.
Feb 25 07:27:37 np0005629333 systemd-machined[210048]: Machine qemu-67-instance-0000003c terminated.
Feb 25 07:27:37 np0005629333 neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a[295523]: [NOTICE]   (295527) : haproxy version is 2.8.14-c23fe91
Feb 25 07:27:37 np0005629333 neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a[295523]: [NOTICE]   (295527) : path to executable is /usr/sbin/haproxy
Feb 25 07:27:37 np0005629333 neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a[295523]: [WARNING]  (295527) : Exiting Master process...
Feb 25 07:27:37 np0005629333 neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a[295523]: [ALERT]    (295527) : Current worker (295529) exited with code 143 (Terminated)
Feb 25 07:27:37 np0005629333 neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a[295523]: [WARNING]  (295527) : All workers exited. Exiting... (0)
Feb 25 07:27:37 np0005629333 systemd[1]: libpod-e11b39fafd6ac4a45f29365e08f25ee0cb228d4f432d74cc23c15520f7ce33c9.scope: Deactivated successfully.
Feb 25 07:27:37 np0005629333 podman[295723]: 2026-02-25 12:27:37.515627032 +0000 UTC m=+0.049754579 container died e11b39fafd6ac4a45f29365e08f25ee0cb228d4f432d74cc23c15520f7ce33c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.542 244018 INFO nova.virt.libvirt.driver [-] [instance: b2915349-3797-40de-a554-2de79463723b] Instance destroyed successfully.#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.543 244018 DEBUG nova.objects.instance [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lazy-loading 'resources' on Instance uuid b2915349-3797-40de-a554-2de79463723b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:37 np0005629333 systemd[1]: var-lib-containers-storage-overlay-21b4837bf9aa32acef51cbe6f98489e364315734f563f075261a323ae88457b7-merged.mount: Deactivated successfully.
Feb 25 07:27:37 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e11b39fafd6ac4a45f29365e08f25ee0cb228d4f432d74cc23c15520f7ce33c9-userdata-shm.mount: Deactivated successfully.
Feb 25 07:27:37 np0005629333 podman[295723]: 2026-02-25 12:27:37.560721238 +0000 UTC m=+0.094848785 container cleanup e11b39fafd6ac4a45f29365e08f25ee0cb228d4f432d74cc23c15520f7ce33c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223)
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.563 244018 DEBUG nova.virt.libvirt.vif [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:27:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1911361008',display_name='tempest-InstanceActionsNegativeTestJSON-server-1911361008',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1911361008',id=60,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:27:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='09adeeb80de1411fb55d81d407987bba',ramdisk_id='',reservation_id='r-dfoydzhg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1721839948',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1721839948-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:27:34Z,user_data=None,user_id='9f1b75cc204c46bba5f57dbfecdde9ad',uuid=b2915349-3797-40de-a554-2de79463723b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "address": "fa:16:3e:78:7f:56", "network": {"id": "c6988a49-992e-4d2d-a359-182201377e6a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-391653903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09adeeb80de1411fb55d81d407987bba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe60a13c-bf", "ovs_interfaceid": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.564 244018 DEBUG nova.network.os_vif_util [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Converting VIF {"id": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "address": "fa:16:3e:78:7f:56", "network": {"id": "c6988a49-992e-4d2d-a359-182201377e6a", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-391653903-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "09adeeb80de1411fb55d81d407987bba", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe60a13c-bf", "ovs_interfaceid": "be60a13c-bf18-4eb6-ab12-d76c0abf9525", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.566 244018 DEBUG nova.network.os_vif_util [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:7f:56,bridge_name='br-int',has_traffic_filtering=True,id=be60a13c-bf18-4eb6-ab12-d76c0abf9525,network=Network(c6988a49-992e-4d2d-a359-182201377e6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe60a13c-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.567 244018 DEBUG os_vif [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:7f:56,bridge_name='br-int',has_traffic_filtering=True,id=be60a13c-bf18-4eb6-ab12-d76c0abf9525,network=Network(c6988a49-992e-4d2d-a359-182201377e6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe60a13c-bf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.571 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.571 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe60a13c-bf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.575 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.578 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.581 244018 INFO os_vif [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:7f:56,bridge_name='br-int',has_traffic_filtering=True,id=be60a13c-bf18-4eb6-ab12-d76c0abf9525,network=Network(c6988a49-992e-4d2d-a359-182201377e6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe60a13c-bf')#033[00m
Feb 25 07:27:37 np0005629333 systemd[1]: libpod-conmon-e11b39fafd6ac4a45f29365e08f25ee0cb228d4f432d74cc23c15520f7ce33c9.scope: Deactivated successfully.
Feb 25 07:27:37 np0005629333 podman[295781]: 2026-02-25 12:27:37.623481175 +0000 UTC m=+0.042652588 container remove e11b39fafd6ac4a45f29365e08f25ee0cb228d4f432d74cc23c15520f7ce33c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 25 07:27:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:37.628 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4c969b4a-d140-4c04-b792-69f03b265a48]: (4, ('Wed Feb 25 12:27:37 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a (e11b39fafd6ac4a45f29365e08f25ee0cb228d4f432d74cc23c15520f7ce33c9)\ne11b39fafd6ac4a45f29365e08f25ee0cb228d4f432d74cc23c15520f7ce33c9\nWed Feb 25 12:27:37 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a (e11b39fafd6ac4a45f29365e08f25ee0cb228d4f432d74cc23c15520f7ce33c9)\ne11b39fafd6ac4a45f29365e08f25ee0cb228d4f432d74cc23c15520f7ce33c9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:37.629 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dcadbdb4-f3bf-44f7-87be-5e702571dd81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:37.630 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6988a49-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:37 np0005629333 kernel: tapc6988a49-90: left promiscuous mode
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.632 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.640 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.641 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:37.645 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[30881c5d-7963-429d-b1de-d0455770e6f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:37.659 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ba62574b-d2ab-4b22-a36b-02c00b02def5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:37.661 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f64a33a8-ba30-480d-9dd2-d850e774d911]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:37.674 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e8f24074-7ec0-4ca8-91ee-296bd29d61c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442191, 'reachable_time': 20694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295814, 'error': None, 'target': 'ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:37.677 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c6988a49-992e-4d2d-a359-182201377e6a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:27:37 np0005629333 systemd[1]: run-netns-ovnmeta\x2dc6988a49\x2d992e\x2d4d2d\x2da359\x2d182201377e6a.mount: Deactivated successfully.
Feb 25 07:27:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:37.677 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[98a54d47-9b4b-4cf8-ab16-b07d06105278]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.839 244018 INFO nova.virt.libvirt.driver [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Deleting instance files /var/lib/nova/instances/b2915349-3797-40de-a554-2de79463723b_del#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.839 244018 INFO nova.virt.libvirt.driver [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Deletion of /var/lib/nova/instances/b2915349-3797-40de-a554-2de79463723b_del complete#033[00m
Feb 25 07:27:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 867 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 21 MiB/s rd, 16 MiB/s wr, 536 op/s
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.903 244018 INFO nova.compute.manager [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.904 244018 DEBUG oslo.service.loopingcall [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.904 244018 DEBUG nova.compute.manager [-] [instance: b2915349-3797-40de-a554-2de79463723b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.905 244018 DEBUG nova.network.neutron [-] [instance: b2915349-3797-40de-a554-2de79463723b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:27:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:27:37 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/392415029' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.951 244018 DEBUG oslo_concurrency.processutils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.957 244018 DEBUG nova.compute.provider_tree [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:27:37 np0005629333 nova_compute[244014]: 2026-02-25 12:27:37.983 244018 DEBUG nova.scheduler.client.report [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:27:38 np0005629333 nova_compute[244014]: 2026-02-25 12:27:38.032 244018 DEBUG oslo_concurrency.lockutils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Feb 25 07:27:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Feb 25 07:27:38 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Feb 25 07:27:38 np0005629333 nova_compute[244014]: 2026-02-25 12:27:38.699 244018 DEBUG oslo_concurrency.lockutils [None req-863a0ffa-d54e-4f91-ac57-2dd105177383 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 14.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:39 np0005629333 nova_compute[244014]: 2026-02-25 12:27:39.687 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 867 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 6.0 MiB/s wr, 369 op/s
Feb 25 07:27:40 np0005629333 nova_compute[244014]: 2026-02-25 12:27:40.005 244018 DEBUG nova.compute.manager [req-b767969b-60c2-4949-8e04-557a4bd50ff8 req-b80cbe2d-17b9-4dd0-aace-eea17d881267 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Received event network-vif-unplugged-be60a13c-bf18-4eb6-ab12-d76c0abf9525 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:27:40 np0005629333 nova_compute[244014]: 2026-02-25 12:27:40.005 244018 DEBUG oslo_concurrency.lockutils [req-b767969b-60c2-4949-8e04-557a4bd50ff8 req-b80cbe2d-17b9-4dd0-aace-eea17d881267 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b2915349-3797-40de-a554-2de79463723b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:40 np0005629333 nova_compute[244014]: 2026-02-25 12:27:40.006 244018 DEBUG oslo_concurrency.lockutils [req-b767969b-60c2-4949-8e04-557a4bd50ff8 req-b80cbe2d-17b9-4dd0-aace-eea17d881267 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b2915349-3797-40de-a554-2de79463723b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:40 np0005629333 nova_compute[244014]: 2026-02-25 12:27:40.006 244018 DEBUG oslo_concurrency.lockutils [req-b767969b-60c2-4949-8e04-557a4bd50ff8 req-b80cbe2d-17b9-4dd0-aace-eea17d881267 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b2915349-3797-40de-a554-2de79463723b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:40 np0005629333 nova_compute[244014]: 2026-02-25 12:27:40.006 244018 DEBUG nova.compute.manager [req-b767969b-60c2-4949-8e04-557a4bd50ff8 req-b80cbe2d-17b9-4dd0-aace-eea17d881267 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] No waiting events found dispatching network-vif-unplugged-be60a13c-bf18-4eb6-ab12-d76c0abf9525 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:27:40 np0005629333 nova_compute[244014]: 2026-02-25 12:27:40.006 244018 DEBUG nova.compute.manager [req-b767969b-60c2-4949-8e04-557a4bd50ff8 req-b80cbe2d-17b9-4dd0-aace-eea17d881267 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Received event network-vif-unplugged-be60a13c-bf18-4eb6-ab12-d76c0abf9525 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:27:40 np0005629333 nova_compute[244014]: 2026-02-25 12:27:40.007 244018 DEBUG nova.compute.manager [req-b767969b-60c2-4949-8e04-557a4bd50ff8 req-b80cbe2d-17b9-4dd0-aace-eea17d881267 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Received event network-vif-plugged-be60a13c-bf18-4eb6-ab12-d76c0abf9525 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:27:40 np0005629333 nova_compute[244014]: 2026-02-25 12:27:40.007 244018 DEBUG oslo_concurrency.lockutils [req-b767969b-60c2-4949-8e04-557a4bd50ff8 req-b80cbe2d-17b9-4dd0-aace-eea17d881267 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b2915349-3797-40de-a554-2de79463723b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:40 np0005629333 nova_compute[244014]: 2026-02-25 12:27:40.007 244018 DEBUG oslo_concurrency.lockutils [req-b767969b-60c2-4949-8e04-557a4bd50ff8 req-b80cbe2d-17b9-4dd0-aace-eea17d881267 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b2915349-3797-40de-a554-2de79463723b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:40 np0005629333 nova_compute[244014]: 2026-02-25 12:27:40.007 244018 DEBUG oslo_concurrency.lockutils [req-b767969b-60c2-4949-8e04-557a4bd50ff8 req-b80cbe2d-17b9-4dd0-aace-eea17d881267 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b2915349-3797-40de-a554-2de79463723b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:40 np0005629333 nova_compute[244014]: 2026-02-25 12:27:40.008 244018 DEBUG nova.compute.manager [req-b767969b-60c2-4949-8e04-557a4bd50ff8 req-b80cbe2d-17b9-4dd0-aace-eea17d881267 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] No waiting events found dispatching network-vif-plugged-be60a13c-bf18-4eb6-ab12-d76c0abf9525 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:27:40 np0005629333 nova_compute[244014]: 2026-02-25 12:27:40.008 244018 WARNING nova.compute.manager [req-b767969b-60c2-4949-8e04-557a4bd50ff8 req-b80cbe2d-17b9-4dd0-aace-eea17d881267 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Received unexpected event network-vif-plugged-be60a13c-bf18-4eb6-ab12-d76c0abf9525 for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:27:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:27:40 np0005629333 nova_compute[244014]: 2026-02-25 12:27:40.172 244018 DEBUG nova.network.neutron [-] [instance: b2915349-3797-40de-a554-2de79463723b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:27:40 np0005629333 nova_compute[244014]: 2026-02-25 12:27:40.210 244018 INFO nova.compute.manager [-] [instance: b2915349-3797-40de-a554-2de79463723b] Took 2.30 seconds to deallocate network for instance.#033[00m
Feb 25 07:27:40 np0005629333 nova_compute[244014]: 2026-02-25 12:27:40.265 244018 DEBUG oslo_concurrency.lockutils [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:40 np0005629333 nova_compute[244014]: 2026-02-25 12:27:40.265 244018 DEBUG oslo_concurrency.lockutils [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:40 np0005629333 nova_compute[244014]: 2026-02-25 12:27:40.385 244018 DEBUG nova.network.neutron [req-ae2831d7-28d3-45f1-9bc5-18728eca7173 req-f3cc3155-a77e-4f95-8a25-1f1912b21e67 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updated VIF entry in instance network info cache for port abdb97b5-8e9d-4929-af6f-bfb06c067878. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:27:40 np0005629333 nova_compute[244014]: 2026-02-25 12:27:40.385 244018 DEBUG nova.network.neutron [req-ae2831d7-28d3-45f1-9bc5-18728eca7173 req-f3cc3155-a77e-4f95-8a25-1f1912b21e67 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updating instance_info_cache with network_info: [{"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": null, "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:27:40 np0005629333 nova_compute[244014]: 2026-02-25 12:27:40.425 244018 DEBUG oslo_concurrency.lockutils [req-ae2831d7-28d3-45f1-9bc5-18728eca7173 req-f3cc3155-a77e-4f95-8a25-1f1912b21e67 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:27:40 np0005629333 nova_compute[244014]: 2026-02-25 12:27:40.449 244018 DEBUG oslo_concurrency.processutils [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:40 np0005629333 nova_compute[244014]: 2026-02-25 12:27:40.497 244018 DEBUG nova.compute.manager [req-6c6d41ce-1601-4f77-b9f4-21749c28f3ba req-79a4d743-8446-41b3-85b8-0ff3b8d3501f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b2915349-3797-40de-a554-2de79463723b] Received event network-vif-deleted-be60a13c-bf18-4eb6-ab12-d76c0abf9525 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:27:40 np0005629333 nova_compute[244014]: 2026-02-25 12:27:40.625 244018 INFO nova.virt.libvirt.driver [None req-06e7ccc1-d7db-436e-9a38-85d40724ff2b 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Snapshot image upload complete#033[00m
Feb 25 07:27:40 np0005629333 nova_compute[244014]: 2026-02-25 12:27:40.626 244018 INFO nova.compute.manager [None req-06e7ccc1-d7db-436e-9a38-85d40724ff2b 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Took 5.84 seconds to snapshot the instance on the hypervisor.#033[00m
Feb 25 07:27:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:27:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2691922157' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:27:40 np0005629333 nova_compute[244014]: 2026-02-25 12:27:40.991 244018 DEBUG oslo_concurrency.processutils [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:27:41 np0005629333 nova_compute[244014]: 2026-02-25 12:27:40.999 244018 DEBUG nova.compute.provider_tree [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:27:41 np0005629333 nova_compute[244014]: 2026-02-25 12:27:41.019 244018 DEBUG nova.scheduler.client.report [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:27:41 np0005629333 nova_compute[244014]: 2026-02-25 12:27:41.051 244018 DEBUG oslo_concurrency.lockutils [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:41 np0005629333 nova_compute[244014]: 2026-02-25 12:27:41.094 244018 INFO nova.scheduler.client.report [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Deleted allocations for instance b2915349-3797-40de-a554-2de79463723b#033[00m
Feb 25 07:27:41 np0005629333 nova_compute[244014]: 2026-02-25 12:27:41.175 244018 DEBUG oslo_concurrency.lockutils [None req-66004c4b-cbfa-4c96-8b96-e4acdbe47a44 9f1b75cc204c46bba5f57dbfecdde9ad 09adeeb80de1411fb55d81d407987bba - - default default] Lock "b2915349-3797-40de-a554-2de79463723b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:41 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:41Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ba:87:f1 10.100.0.11
Feb 25 07:27:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 856 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 396 op/s
Feb 25 07:27:42 np0005629333 nova_compute[244014]: 2026-02-25 12:27:42.063 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022447.062671, b8086e43-4c45-422f-a3b5-fa665c256b30 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:27:42 np0005629333 nova_compute[244014]: 2026-02-25 12:27:42.064 244018 INFO nova.compute.manager [-] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:27:42 np0005629333 nova_compute[244014]: 2026-02-25 12:27:42.100 244018 DEBUG nova.compute.manager [None req-cfad8324-b99c-481b-9e4f-4217ff02b6eb - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:27:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:27:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:27:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:27:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0035085793762395815 of space, bias 1.0, pg target 1.0525738128718745 quantized to 32 (current 32)
Feb 25 07:27:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:27:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:27:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:27:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:27:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:27:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.005452531556362146 of space, bias 1.0, pg target 1.6303069353522814 quantized to 32 (current 32)
Feb 25 07:27:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:27:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0971725860315506e-06 of space, bias 4.0, pg target 0.0013078297225496084 quantized to 16 (current 16)
Feb 25 07:27:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:27:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:27:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:27:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011370018558312169 quantized to 32 (current 32)
Feb 25 07:27:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:27:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012507020414143388 quantized to 32 (current 32)
Feb 25 07:27:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:27:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:27:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:27:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015160024744416227 quantized to 32 (current 32)
Feb 25 07:27:42 np0005629333 nova_compute[244014]: 2026-02-25 12:27:42.576 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:43 np0005629333 nova_compute[244014]: 2026-02-25 12:27:43.343 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:43 np0005629333 nova_compute[244014]: 2026-02-25 12:27:43.344 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:43 np0005629333 nova_compute[244014]: 2026-02-25 12:27:43.344 244018 INFO nova.compute.manager [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Unshelving#033[00m
Feb 25 07:27:43 np0005629333 nova_compute[244014]: 2026-02-25 12:27:43.492 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:43 np0005629333 nova_compute[244014]: 2026-02-25 12:27:43.493 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:43 np0005629333 nova_compute[244014]: 2026-02-25 12:27:43.499 244018 DEBUG nova.objects.instance [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'pci_requests' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:43 np0005629333 nova_compute[244014]: 2026-02-25 12:27:43.518 244018 DEBUG nova.objects.instance [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'numa_topology' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:43 np0005629333 nova_compute[244014]: 2026-02-25 12:27:43.533 244018 DEBUG nova.virt.hardware [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:27:43 np0005629333 nova_compute[244014]: 2026-02-25 12:27:43.534 244018 INFO nova.compute.claims [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:27:43 np0005629333 nova_compute[244014]: 2026-02-25 12:27:43.730 244018 DEBUG oslo_concurrency.processutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 802 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 6.1 MiB/s wr, 418 op/s
Feb 25 07:27:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:27:44 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1085921296' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:27:44 np0005629333 nova_compute[244014]: 2026-02-25 12:27:44.343 244018 DEBUG oslo_concurrency.processutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:27:44 np0005629333 nova_compute[244014]: 2026-02-25 12:27:44.349 244018 DEBUG nova.compute.provider_tree [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:27:44 np0005629333 nova_compute[244014]: 2026-02-25 12:27:44.365 244018 DEBUG nova.scheduler.client.report [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:27:44 np0005629333 nova_compute[244014]: 2026-02-25 12:27:44.387 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.894s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:44 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:44Z|00522|binding|INFO|Releasing lport 81f0f54c-4e04-4adf-952f-b6d0fe9698c7 from this chassis (sb_readonly=0)
Feb 25 07:27:44 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:44Z|00523|binding|INFO|Releasing lport 3b184c15-8ef4-4e11-bd18-e1253a4ff440 from this chassis (sb_readonly=0)
Feb 25 07:27:44 np0005629333 nova_compute[244014]: 2026-02-25 12:27:44.583 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:44 np0005629333 nova_compute[244014]: 2026-02-25 12:27:44.622 244018 INFO nova.network.neutron [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updating port abdb97b5-8e9d-4929-af6f-bfb06c067878 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Feb 25 07:27:44 np0005629333 nova_compute[244014]: 2026-02-25 12:27:44.690 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:27:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Feb 25 07:27:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Feb 25 07:27:45 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Feb 25 07:27:45 np0005629333 nova_compute[244014]: 2026-02-25 12:27:45.392 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:27:45 np0005629333 nova_compute[244014]: 2026-02-25 12:27:45.393 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquired lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:27:45 np0005629333 nova_compute[244014]: 2026-02-25 12:27:45.393 244018 DEBUG nova.network.neutron [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:27:45 np0005629333 nova_compute[244014]: 2026-02-25 12:27:45.545 244018 DEBUG nova.compute.manager [req-986e7b14-64e6-4fcc-8bd4-2957d7d5c4b0 req-8f848d47-a82a-4bed-8b30-7429d295dda2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received event network-changed-abdb97b5-8e9d-4929-af6f-bfb06c067878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:27:45 np0005629333 nova_compute[244014]: 2026-02-25 12:27:45.545 244018 DEBUG nova.compute.manager [req-986e7b14-64e6-4fcc-8bd4-2957d7d5c4b0 req-8f848d47-a82a-4bed-8b30-7429d295dda2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Refreshing instance network info cache due to event network-changed-abdb97b5-8e9d-4929-af6f-bfb06c067878. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:27:45 np0005629333 nova_compute[244014]: 2026-02-25 12:27:45.546 244018 DEBUG oslo_concurrency.lockutils [req-986e7b14-64e6-4fcc-8bd4-2957d7d5c4b0 req-8f848d47-a82a-4bed-8b30-7429d295dda2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:27:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 802 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.3 MiB/s wr, 184 op/s
Feb 25 07:27:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:27:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2118540703' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:27:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:27:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2118540703' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 07:27:47 np0005629333 nova_compute[244014]: 2026-02-25 12:27:47.579 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 802 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.9 MiB/s wr, 153 op/s
Feb 25 07:27:48 np0005629333 nova_compute[244014]: 2026-02-25 12:27:48.791 244018 DEBUG nova.network.neutron [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updating instance_info_cache with network_info: [{"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:27:49 np0005629333 nova_compute[244014]: 2026-02-25 12:27:49.223 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Releasing lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:27:49 np0005629333 nova_compute[244014]: 2026-02-25 12:27:49.226 244018 DEBUG nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:27:49 np0005629333 nova_compute[244014]: 2026-02-25 12:27:49.226 244018 INFO nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Creating image(s)#033[00m
Feb 25 07:27:49 np0005629333 nova_compute[244014]: 2026-02-25 12:27:49.309 244018 DEBUG nova.storage.rbd_utils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image b8086e43-4c45-422f-a3b5-fa665c256b30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:27:49 np0005629333 nova_compute[244014]: 2026-02-25 12:27:49.315 244018 DEBUG nova.objects.instance [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'trusted_certs' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:49 np0005629333 nova_compute[244014]: 2026-02-25 12:27:49.317 244018 DEBUG oslo_concurrency.lockutils [req-986e7b14-64e6-4fcc-8bd4-2957d7d5c4b0 req-8f848d47-a82a-4bed-8b30-7429d295dda2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:27:49 np0005629333 nova_compute[244014]: 2026-02-25 12:27:49.318 244018 DEBUG nova.network.neutron [req-986e7b14-64e6-4fcc-8bd4-2957d7d5c4b0 req-8f848d47-a82a-4bed-8b30-7429d295dda2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Refreshing network info cache for port abdb97b5-8e9d-4929-af6f-bfb06c067878 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:27:49 np0005629333 nova_compute[244014]: 2026-02-25 12:27:49.377 244018 DEBUG nova.storage.rbd_utils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image b8086e43-4c45-422f-a3b5-fa665c256b30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:27:49 np0005629333 nova_compute[244014]: 2026-02-25 12:27:49.411 244018 DEBUG nova.storage.rbd_utils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image b8086e43-4c45-422f-a3b5-fa665c256b30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:27:49 np0005629333 nova_compute[244014]: 2026-02-25 12:27:49.415 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "79eac0f5768899c715881ae5099eb80ea7cb356e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:49 np0005629333 nova_compute[244014]: 2026-02-25 12:27:49.417 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "79eac0f5768899c715881ae5099eb80ea7cb356e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:49 np0005629333 nova_compute[244014]: 2026-02-25 12:27:49.692 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:49 np0005629333 nova_compute[244014]: 2026-02-25 12:27:49.710 244018 DEBUG nova.virt.libvirt.imagebackend [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Image locations are: [{'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/a1bfc991-8c23-4709-b2d6-80b0d18f6430/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/a1bfc991-8c23-4709-b2d6-80b0d18f6430/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Feb 25 07:27:49 np0005629333 nova_compute[244014]: 2026-02-25 12:27:49.779 244018 DEBUG nova.virt.libvirt.imagebackend [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Selected location: {'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/a1bfc991-8c23-4709-b2d6-80b0d18f6430/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Feb 25 07:27:49 np0005629333 nova_compute[244014]: 2026-02-25 12:27:49.780 244018 DEBUG nova.storage.rbd_utils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] cloning images/a1bfc991-8c23-4709-b2d6-80b0d18f6430@snap to None/b8086e43-4c45-422f-a3b5-fa665c256b30_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 25 07:27:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 802 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.9 MiB/s wr, 148 op/s
Feb 25 07:27:50 np0005629333 nova_compute[244014]: 2026-02-25 12:27:50.053 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "79eac0f5768899c715881ae5099eb80ea7cb356e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:27:50 np0005629333 nova_compute[244014]: 2026-02-25 12:27:50.223 244018 DEBUG nova.objects.instance [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'migration_context' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:50 np0005629333 nova_compute[244014]: 2026-02-25 12:27:50.306 244018 DEBUG nova.storage.rbd_utils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] flattening vms/b8086e43-4c45-422f-a3b5-fa665c256b30_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb 25 07:27:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Feb 25 07:27:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Feb 25 07:27:50 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Feb 25 07:27:50 np0005629333 nova_compute[244014]: 2026-02-25 12:27:50.610 244018 DEBUG oslo_concurrency.lockutils [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:50 np0005629333 nova_compute[244014]: 2026-02-25 12:27:50.610 244018 DEBUG oslo_concurrency.lockutils [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:50 np0005629333 nova_compute[244014]: 2026-02-25 12:27:50.611 244018 INFO nova.compute.manager [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Rebooting instance#033[00m
Feb 25 07:27:50 np0005629333 nova_compute[244014]: 2026-02-25 12:27:50.629 244018 DEBUG oslo_concurrency.lockutils [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:27:50 np0005629333 nova_compute[244014]: 2026-02-25 12:27:50.630 244018 DEBUG oslo_concurrency.lockutils [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquired lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:27:50 np0005629333 nova_compute[244014]: 2026-02-25 12:27:50.630 244018 DEBUG nova.network.neutron [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:27:50 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Feb 25 07:27:51 np0005629333 nova_compute[244014]: 2026-02-25 12:27:51.010 244018 DEBUG nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Image rbd:vms/b8086e43-4c45-422f-a3b5-fa665c256b30_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Feb 25 07:27:51 np0005629333 nova_compute[244014]: 2026-02-25 12:27:51.011 244018 DEBUG nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:27:51 np0005629333 nova_compute[244014]: 2026-02-25 12:27:51.012 244018 DEBUG nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Ensure instance console log exists: /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:27:51 np0005629333 nova_compute[244014]: 2026-02-25 12:27:51.012 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:51 np0005629333 nova_compute[244014]: 2026-02-25 12:27:51.013 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:51 np0005629333 nova_compute[244014]: 2026-02-25 12:27:51.013 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:51 np0005629333 nova_compute[244014]: 2026-02-25 12:27:51.018 244018 DEBUG nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Start _get_guest_xml network_info=[{"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-02-25T12:27:24Z,direct_url=<?>,disk_format='raw',id=a1bfc991-8c23-4709-b2d6-80b0d18f6430,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-2111996537-shelved',owner='c85a955249394f0faf7c890f5cd0df32',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-25T12:27:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:27:51 np0005629333 nova_compute[244014]: 2026-02-25 12:27:51.023 244018 WARNING nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:27:51 np0005629333 nova_compute[244014]: 2026-02-25 12:27:51.169 244018 DEBUG nova.virt.libvirt.host [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:27:51 np0005629333 nova_compute[244014]: 2026-02-25 12:27:51.170 244018 DEBUG nova.virt.libvirt.host [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:27:51 np0005629333 nova_compute[244014]: 2026-02-25 12:27:51.174 244018 DEBUG nova.virt.libvirt.host [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:27:51 np0005629333 nova_compute[244014]: 2026-02-25 12:27:51.175 244018 DEBUG nova.virt.libvirt.host [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:27:51 np0005629333 nova_compute[244014]: 2026-02-25 12:27:51.175 244018 DEBUG nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:27:51 np0005629333 nova_compute[244014]: 2026-02-25 12:27:51.175 244018 DEBUG nova.virt.hardware [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-02-25T12:27:24Z,direct_url=<?>,disk_format='raw',id=a1bfc991-8c23-4709-b2d6-80b0d18f6430,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-2111996537-shelved',owner='c85a955249394f0faf7c890f5cd0df32',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-25T12:27:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:27:51 np0005629333 nova_compute[244014]: 2026-02-25 12:27:51.176 244018 DEBUG nova.virt.hardware [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:27:51 np0005629333 nova_compute[244014]: 2026-02-25 12:27:51.176 244018 DEBUG nova.virt.hardware [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:27:51 np0005629333 nova_compute[244014]: 2026-02-25 12:27:51.176 244018 DEBUG nova.virt.hardware [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:27:51 np0005629333 nova_compute[244014]: 2026-02-25 12:27:51.177 244018 DEBUG nova.virt.hardware [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:27:51 np0005629333 nova_compute[244014]: 2026-02-25 12:27:51.177 244018 DEBUG nova.virt.hardware [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:27:51 np0005629333 nova_compute[244014]: 2026-02-25 12:27:51.177 244018 DEBUG nova.virt.hardware [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:27:51 np0005629333 nova_compute[244014]: 2026-02-25 12:27:51.177 244018 DEBUG nova.virt.hardware [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:27:51 np0005629333 nova_compute[244014]: 2026-02-25 12:27:51.178 244018 DEBUG nova.virt.hardware [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:27:51 np0005629333 nova_compute[244014]: 2026-02-25 12:27:51.178 244018 DEBUG nova.virt.hardware [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:27:51 np0005629333 nova_compute[244014]: 2026-02-25 12:27:51.178 244018 DEBUG nova.virt.hardware [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:27:51 np0005629333 nova_compute[244014]: 2026-02-25 12:27:51.178 244018 DEBUG nova.objects.instance [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:51 np0005629333 nova_compute[244014]: 2026-02-25 12:27:51.201 244018 DEBUG oslo_concurrency.processutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:27:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2017960151' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:27:51 np0005629333 nova_compute[244014]: 2026-02-25 12:27:51.787 244018 DEBUG oslo_concurrency.processutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:27:51 np0005629333 nova_compute[244014]: 2026-02-25 12:27:51.806 244018 DEBUG nova.storage.rbd_utils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image b8086e43-4c45-422f-a3b5-fa665c256b30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:27:51 np0005629333 nova_compute[244014]: 2026-02-25 12:27:51.809 244018 DEBUG oslo_concurrency.processutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 844 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.4 MiB/s wr, 74 op/s
Feb 25 07:27:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:27:52 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2435174376' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.296 244018 DEBUG oslo_concurrency.processutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.298 244018 DEBUG nova.virt.libvirt.vif [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:25:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2111996537',display_name='tempest-ServerActionsTestOtherB-server-2111996537',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2111996537',id=47,image_ref='a1bfc991-8c23-4709-b2d6-80b0d18f6430',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-221288102',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:25:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='c85a955249394f0faf7c890f5cd0df32',ramdisk_id='',reservation_id='r-huq779yj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1539976047',owner_user_name='tempest-ServerActionsTestOtherB-1539976047-project-member',shelved_at='2026-02-25T12:27:32.423369',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='a1bfc991-8c23-4709-b2d6-80b0d18f6430'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:27:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b774fd0c04fc403d9ddb205f1e6abbc5',uuid=b8086e43-4c45-422f-a3b5-fa665c256b30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.299 244018 DEBUG nova.network.os_vif_util [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converting VIF {"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.300 244018 DEBUG nova.network.os_vif_util [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:53:87,bridge_name='br-int',has_traffic_filtering=True,id=abdb97b5-8e9d-4929-af6f-bfb06c067878,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdb97b5-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.301 244018 DEBUG nova.objects.instance [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'pci_devices' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.341 244018 DEBUG nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:27:52 np0005629333 nova_compute[244014]:  <uuid>b8086e43-4c45-422f-a3b5-fa665c256b30</uuid>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:  <name>instance-0000002f</name>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerActionsTestOtherB-server-2111996537</nova:name>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:27:51</nova:creationTime>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:27:52 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:        <nova:user uuid="b774fd0c04fc403d9ddb205f1e6abbc5">tempest-ServerActionsTestOtherB-1539976047-project-member</nova:user>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:        <nova:project uuid="c85a955249394f0faf7c890f5cd0df32">tempest-ServerActionsTestOtherB-1539976047</nova:project>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="a1bfc991-8c23-4709-b2d6-80b0d18f6430"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:        <nova:port uuid="abdb97b5-8e9d-4929-af6f-bfb06c067878">
Feb 25 07:27:52 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <entry name="serial">b8086e43-4c45-422f-a3b5-fa665c256b30</entry>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <entry name="uuid">b8086e43-4c45-422f-a3b5-fa665c256b30</entry>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/b8086e43-4c45-422f-a3b5-fa665c256b30_disk">
Feb 25 07:27:52 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:27:52 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/b8086e43-4c45-422f-a3b5-fa665c256b30_disk.config">
Feb 25 07:27:52 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:27:52 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:f8:53:87"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <target dev="tapabdb97b5-8e"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/console.log" append="off"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <input type="keyboard" bus="usb"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:27:52 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:27:52 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:27:52 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:27:52 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.343 244018 DEBUG nova.compute.manager [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Preparing to wait for external event network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.344 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.344 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.344 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.345 244018 DEBUG nova.virt.libvirt.vif [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:25:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2111996537',display_name='tempest-ServerActionsTestOtherB-server-2111996537',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2111996537',id=47,image_ref='a1bfc991-8c23-4709-b2d6-80b0d18f6430',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-221288102',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:25:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='c85a955249394f0faf7c890f5cd0df32',ramdisk_id='',reservation_id='r-huq779yj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1539976047',owner_user_name='tempest-ServerActionsTestOtherB-1539976047-project-member',shelved_at='2026-02-25T12:27:32.423369',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='a1bfc991-8c23-4709-b2d6-80b0d18f6430'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:27:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b774fd0c04fc403d9ddb205f1e6abbc5',uuid=b8086e43-4c45-422f-a3b5-fa665c256b30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.345 244018 DEBUG nova.network.os_vif_util [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converting VIF {"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.346 244018 DEBUG nova.network.os_vif_util [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:53:87,bridge_name='br-int',has_traffic_filtering=True,id=abdb97b5-8e9d-4929-af6f-bfb06c067878,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdb97b5-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.346 244018 DEBUG os_vif [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:53:87,bridge_name='br-int',has_traffic_filtering=True,id=abdb97b5-8e9d-4929-af6f-bfb06c067878,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdb97b5-8e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.347 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.348 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.348 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.352 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.352 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabdb97b5-8e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.353 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapabdb97b5-8e, col_values=(('external_ids', {'iface-id': 'abdb97b5-8e9d-4929-af6f-bfb06c067878', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:53:87', 'vm-uuid': 'b8086e43-4c45-422f-a3b5-fa665c256b30'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.354 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:52 np0005629333 NetworkManager[49836]: <info>  [1772022472.3554] manager: (tapabdb97b5-8e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/234)
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.356 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.362 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.362 244018 INFO os_vif [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:53:87,bridge_name='br-int',has_traffic_filtering=True,id=abdb97b5-8e9d-4929-af6f-bfb06c067878,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdb97b5-8e')#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.410 244018 DEBUG nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.411 244018 DEBUG nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.411 244018 DEBUG nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] No VIF found with MAC fa:16:3e:f8:53:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.411 244018 INFO nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Using config drive#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.440 244018 DEBUG nova.storage.rbd_utils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image b8086e43-4c45-422f-a3b5-fa665c256b30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.461 244018 DEBUG nova.objects.instance [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'ec2_ids' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.505 244018 DEBUG nova.objects.instance [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'keypairs' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.538 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022457.5376704, b2915349-3797-40de-a554-2de79463723b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.538 244018 INFO nova.compute.manager [-] [instance: b2915349-3797-40de-a554-2de79463723b] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:27:52 np0005629333 nova_compute[244014]: 2026-02-25 12:27:52.558 244018 DEBUG nova.compute.manager [None req-b1637fb9-7cce-4702-8bcc-ff31a0e649e4 - - - - - -] [instance: b2915349-3797-40de-a554-2de79463723b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:53 np0005629333 nova_compute[244014]: 2026-02-25 12:27:53.135 244018 DEBUG nova.network.neutron [req-986e7b14-64e6-4fcc-8bd4-2957d7d5c4b0 req-8f848d47-a82a-4bed-8b30-7429d295dda2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updated VIF entry in instance network info cache for port abdb97b5-8e9d-4929-af6f-bfb06c067878. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:27:53 np0005629333 nova_compute[244014]: 2026-02-25 12:27:53.136 244018 DEBUG nova.network.neutron [req-986e7b14-64e6-4fcc-8bd4-2957d7d5c4b0 req-8f848d47-a82a-4bed-8b30-7429d295dda2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updating instance_info_cache with network_info: [{"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
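[annotation] The instance_info_cache entry above is a JSON list of VIFs, each carrying its network, subnets, fixed IPs and any attached floating IPs. A minimal sketch of pulling the address pairs out of such a blob; the field layout is assumed to match the log line above and the helper name is hypothetical:

    import json

    def address_pairs(network_info_json):
        """Collect (fixed IP, floating IPs) pairs from a cached network_info
        blob shaped like the log entry above."""
        pairs = []
        for vif in json.loads(network_info_json):
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    floats = [f["address"] for f in ip.get("floating_ips", [])]
                    pairs.append((ip["address"], floats))
        return pairs

For the VIF logged above this yields ("10.100.0.6", ["192.168.122.229"]).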
Feb 25 07:27:53 np0005629333 nova_compute[244014]: 2026-02-25 12:27:53.169 244018 DEBUG oslo_concurrency.lockutils [req-986e7b14-64e6-4fcc-8bd4-2957d7d5c4b0 req-8f848d47-a82a-4bed-8b30-7429d295dda2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-b8086e43-4c45-422f-a3b5-fa665c256b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:27:53 np0005629333 nova_compute[244014]: 2026-02-25 12:27:53.200 244018 INFO nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Creating config drive at /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/disk.config#033[00m
Feb 25 07:27:53 np0005629333 nova_compute[244014]: 2026-02-25 12:27:53.205 244018 DEBUG oslo_concurrency.processutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_rgib05k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:53 np0005629333 nova_compute[244014]: 2026-02-25 12:27:53.343 244018 DEBUG oslo_concurrency.processutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_rgib05k" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:27:53 np0005629333 nova_compute[244014]: 2026-02-25 12:27:53.363 244018 DEBUG nova.storage.rbd_utils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] rbd image b8086e43-4c45-422f-a3b5-fa665c256b30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:27:53 np0005629333 nova_compute[244014]: 2026-02-25 12:27:53.366 244018 DEBUG oslo_concurrency.processutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/disk.config b8086e43-4c45-422f-a3b5-fa665c256b30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Feb 25 07:27:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Feb 25 07:27:53 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Feb 25 07:27:53 np0005629333 nova_compute[244014]: 2026-02-25 12:27:53.539 244018 DEBUG oslo_concurrency.processutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/disk.config b8086e43-4c45-422f-a3b5-fa665c256b30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:27:53 np0005629333 nova_compute[244014]: 2026-02-25 12:27:53.540 244018 INFO nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Deleting local config drive /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30/disk.config because it was imported into RBD.#033[00m
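[annotation] The three steps just logged form the whole config-drive round trip: mkisofs builds an ISO9660 image labelled config-2, rbd import copies it into the vms pool, and the local file is removed once imported. A self-contained sketch of the same flow, assuming a reachable Ceph cluster with a vms pool and an openstack cephx user; the helper itself is hypothetical, the commands mirror the log:

    import os
    import subprocess

    def build_and_import_config_drive(instance_uuid, metadata_dir,
                                      pool="vms", ceph_id="openstack",
                                      conf="/etc/ceph/ceph.conf"):
        # Build an ISO9660 config drive labelled config-2 (the label guests probe for).
        iso = f"/var/lib/nova/instances/{instance_uuid}/disk.config"
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
             "-allow-multidot", "-l", "-publisher", "OpenStack Compute",
             "-quiet", "-J", "-r", "-V", "config-2", metadata_dir],
            check=True)
        # Copy the ISO into RBD so the instance attaches it from Ceph, not local disk.
        subprocess.run(
            ["rbd", "import", "--pool", pool, iso,
             f"{instance_uuid}_disk.config", "--image-format=2",
             "--id", ceph_id, "--conf", conf],
            check=True)
        # Once imported, the local copy is redundant (cf. the deletion at 12:27:53.540).
        os.remove(iso)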
Feb 25 07:27:53 np0005629333 kernel: tapabdb97b5-8e: entered promiscuous mode
Feb 25 07:27:53 np0005629333 NetworkManager[49836]: <info>  [1772022473.6021] manager: (tapabdb97b5-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/235)
Feb 25 07:27:53 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:53Z|00524|binding|INFO|Claiming lport abdb97b5-8e9d-4929-af6f-bfb06c067878 for this chassis.
Feb 25 07:27:53 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:53Z|00525|binding|INFO|abdb97b5-8e9d-4929-af6f-bfb06c067878: Claiming fa:16:3e:f8:53:87 10.100.0.6
Feb 25 07:27:53 np0005629333 nova_compute[244014]: 2026-02-25 12:27:53.603 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:53 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:53Z|00526|binding|INFO|Setting lport abdb97b5-8e9d-4929-af6f-bfb06c067878 ovn-installed in OVS
Feb 25 07:27:53 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:53Z|00527|binding|INFO|Setting lport abdb97b5-8e9d-4929-af6f-bfb06c067878 up in Southbound
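[annotation] The claim above happens in three observable stages: the chassis claims the lport, marks ovn-installed on the OVS interface, and flips the Port_Binding up in the southbound DB. A hedged way to check the end state, wrapped in Python; this assumes ovn-sbctl can reach the southbound database from wherever it runs (in this deployment, inside the ovn_controller container):

    import subprocess

    def binding_status(logical_port):
        # --bare prints raw column values; chassis stays empty until claimed.
        out = subprocess.run(
            ["ovn-sbctl", "--bare", "--columns=chassis,up",
             "find", "Port_Binding", f"logical_port={logical_port}"],
            check=True, capture_output=True, text=True).stdout.split()
        return {"chassis": out[0] if out else None,
                "up": bool(out) and out[-1] == "true"}

    # e.g. binding_status("abdb97b5-8e9d-4929-af6f-bfb06c067878")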
Feb 25 07:27:53 np0005629333 nova_compute[244014]: 2026-02-25 12:27:53.623 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:53.623 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:53:87 10.100.0.6'], port_security=['fa:16:3e:f8:53:87 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b8086e43-4c45-422f-a3b5-fa665c256b30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64c22162-7e15-45de-8fd2-8c9a24f27006', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c85a955249394f0faf7c890f5cd0df32', 'neutron:revision_number': '7', 'neutron:security_group_ids': '35edb9b7-5285-41a3-a867-f1cc587b3ad5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9495f97-67e6-4da7-a9b0-f643c9e48076, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=abdb97b5-8e9d-4929-af6f-bfb06c067878) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:27:53 np0005629333 nova_compute[244014]: 2026-02-25 12:27:53.626 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:53.626 157129 INFO neutron.agent.ovn.metadata.agent [-] Port abdb97b5-8e9d-4929-af6f-bfb06c067878 in datapath 64c22162-7e15-45de-8fd2-8c9a24f27006 bound to our chassis#033[00m
Feb 25 07:27:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:53.628 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64c22162-7e15-45de-8fd2-8c9a24f27006#033[00m
Feb 25 07:27:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:53.646 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[324e64d3-404c-4551-a2ad-7e772598cbab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:53 np0005629333 systemd-machined[210048]: New machine qemu-68-instance-0000002f.
Feb 25 07:27:53 np0005629333 systemd[1]: Started Virtual Machine qemu-68-instance-0000002f.
Feb 25 07:27:53 np0005629333 systemd-udevd[296216]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:27:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:53.683 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4dcc3188-e43f-4fcb-8e40-dbedd86edfda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:53.687 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1745ca41-cf22-4868-9024-b2a0d691d5f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:53 np0005629333 NetworkManager[49836]: <info>  [1772022473.6897] device (tapabdb97b5-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:27:53 np0005629333 NetworkManager[49836]: <info>  [1772022473.6909] device (tapabdb97b5-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:27:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:53.715 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ec3988a1-67ec-409b-8ca7-7720df7ebe79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:53.732 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d5e0cf6c-3ddc-496e-9311-b7cdca772910]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64c22162-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:1c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 18, 'rx_bytes': 700, 'tx_bytes': 944, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 18, 'rx_bytes': 700, 'tx_bytes': 944, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428693, 'reachable_time': 31584, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296224, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:53.752 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[30a026f2-f948-4387-a1a1-8476d0169343]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428702, 'tstamp': 428702}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296227, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428705, 'tstamp': 428705}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296227, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:53.753 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64c22162-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:53 np0005629333 nova_compute[244014]: 2026-02-25 12:27:53.755 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:53 np0005629333 nova_compute[244014]: 2026-02-25 12:27:53.757 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:53.758 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64c22162-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:53.758 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:27:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:53.758 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64c22162-70, col_values=(('external_ids', {'iface-id': '81f0f54c-4e04-4adf-952f-b6d0fe9698c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:53.759 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
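[annotation] Note the two "Transaction caused no change" results: the agent's DelPort/AddPort/DbSet commands carry if_exists/may_exist semantics, so replaying them against an already-converged OVS database is a no-op. The same idempotent sequence, sketched with plain ovs-vsctl calls (a hypothetical helper; the port and iface-id values are taken from the log):

    import subprocess

    def ensure_metadata_port(port, iface_id):
        # Safe to re-run: --if-exists / --may-exist make each step idempotent,
        # mirroring the ovsdbapp commands logged above.
        subprocess.run(["ovs-vsctl", "--if-exists", "del-port", "br-ex", port],
                       check=True)
        subprocess.run(["ovs-vsctl", "--may-exist", "add-port", "br-int", port],
                       check=True)
        subprocess.run(["ovs-vsctl", "set", "Interface", port,
                        f"external_ids:iface-id={iface_id}"], check=True)

    # ensure_metadata_port("tap64c22162-70", "81f0f54c-4e04-4adf-952f-b6d0fe9698c7")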
Feb 25 07:27:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 835 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.9 MiB/s wr, 153 op/s
Feb 25 07:27:53 np0005629333 nova_compute[244014]: 2026-02-25 12:27:53.913 244018 DEBUG nova.network.neutron [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Updating instance_info_cache with network_info: [{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:27:53 np0005629333 nova_compute[244014]: 2026-02-25 12:27:53.938 244018 DEBUG oslo_concurrency.lockutils [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Releasing lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:27:53 np0005629333 nova_compute[244014]: 2026-02-25 12:27:53.940 244018 DEBUG nova.compute.manager [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.104 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022474.1039176, b8086e43-4c45-422f-a3b5-fa665c256b30 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.105 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] VM Started (Lifecycle Event)#033[00m
Feb 25 07:27:54 np0005629333 kernel: tapee46268d-74 (unregistering): left promiscuous mode
Feb 25 07:27:54 np0005629333 NetworkManager[49836]: <info>  [1772022474.1185] device (tapee46268d-74): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.124 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:54Z|00528|binding|INFO|Releasing lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 from this chassis (sb_readonly=0)
Feb 25 07:27:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:54Z|00529|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 down in Southbound
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.127 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:54Z|00530|binding|INFO|Removing iface tapee46268d-74 ovn-installed in OVS
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.130 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.136 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022474.108122, b8086e43-4c45-422f-a3b5-fa665c256b30 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.136 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:27:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:54.136 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:87:f1 10.100.0.11'], port_security=['fa:16:3e:ba:87:f1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'c2ca716d-3f7c-490b-954a-bca009559baa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ee46268d-740d-4ff9-8b65-4a81fc61eec3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:27:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:54.138 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ee46268d-740d-4ff9-8b65-4a81fc61eec3 in datapath ce318891-cf3c-4d99-af7c-c01770f38194 unbound from our chassis#033[00m
Feb 25 07:27:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:54.141 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce318891-cf3c-4d99-af7c-c01770f38194, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.141 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:54.142 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f99e3dcb-a9cf-4b4a-b822-3c9f6299c150]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:54.143 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 namespace which is not needed anymore#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.163 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:54 np0005629333 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000035.scope: Deactivated successfully.
Feb 25 07:27:54 np0005629333 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000035.scope: Consumed 12.334s CPU time.
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.168 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:27:54 np0005629333 systemd-machined[210048]: Machine qemu-66-instance-00000035 terminated.
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.187 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
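[annotation] The "pending task ... Skip" line is the guard that stops lifecycle noise during a spawn from being treated as a real power-state divergence. Condensed into a sketch (illustrative only, not nova's actual code path):

    def should_correct_power_state(task_state, db_power_state, vm_power_state):
        # A lifecycle event only triggers corrective action when no task is
        # in flight; above, task_state is 'spawning', so the sync is skipped.
        if task_state is not None:
            return False
        return db_power_state != vm_power_state

    # should_correct_power_state('spawning', 4, 3) -> False (the case logged above)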
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.255 244018 DEBUG nova.compute.manager [req-12e5385c-8192-4a9f-bd04-35c41d8fc1c2 req-cae868a7-2509-407c-86e5-83fdc2d36673 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received event network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.255 244018 DEBUG oslo_concurrency.lockutils [req-12e5385c-8192-4a9f-bd04-35c41d8fc1c2 req-cae868a7-2509-407c-86e5-83fdc2d36673 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.256 244018 DEBUG oslo_concurrency.lockutils [req-12e5385c-8192-4a9f-bd04-35c41d8fc1c2 req-cae868a7-2509-407c-86e5-83fdc2d36673 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.257 244018 DEBUG oslo_concurrency.lockutils [req-12e5385c-8192-4a9f-bd04-35c41d8fc1c2 req-cae868a7-2509-407c-86e5-83fdc2d36673 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.257 244018 DEBUG nova.compute.manager [req-12e5385c-8192-4a9f-bd04-35c41d8fc1c2 req-cae868a7-2509-407c-86e5-83fdc2d36673 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Processing event network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.258 244018 DEBUG nova.compute.manager [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.262 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022474.2612882, b8086e43-4c45-422f-a3b5-fa665c256b30 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.262 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.265 244018 DEBUG nova.virt.libvirt.driver [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.273 244018 INFO nova.virt.libvirt.driver [-] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Instance spawned successfully.#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.281 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.286 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.291 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.296 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.300 244018 INFO nova.virt.libvirt.driver [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance destroyed successfully.#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.301 244018 DEBUG nova.objects.instance [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'resources' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.318 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:27:54 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[295066]: [NOTICE]   (295073) : haproxy version is 2.8.14-c23fe91
Feb 25 07:27:54 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[295066]: [NOTICE]   (295073) : path to executable is /usr/sbin/haproxy
Feb 25 07:27:54 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[295066]: [WARNING]  (295073) : Exiting Master process...
Feb 25 07:27:54 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[295066]: [ALERT]    (295073) : Current worker (295075) exited with code 143 (Terminated)
Feb 25 07:27:54 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[295066]: [WARNING]  (295073) : All workers exited. Exiting... (0)
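[annotation] Exit code 143 here is the haproxy worker dying on SIGTERM (128 + 15), i.e. the expected result of the agent stopping the container, not a crash. Shell-style decoding of such codes:

    import signal

    code = 143
    if code > 128:
        # 143 - 128 = 15 -> SIGTERM
        print(signal.Signals(code - 128).name)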
Feb 25 07:27:54 np0005629333 systemd[1]: libpod-32106443e4d9cce5597e1e35d916e34cc8e899ac548b632e8c9e58485a32cad8.scope: Deactivated successfully.
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.325 244018 DEBUG nova.virt.libvirt.vif [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:27:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.325 244018 DEBUG nova.network.os_vif_util [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.327 244018 DEBUG nova.network.os_vif_util [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.327 244018 DEBUG os_vif [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.330 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.330 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee46268d-74, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.332 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:54 np0005629333 podman[296291]: 2026-02-25 12:27:54.33393653 +0000 UTC m=+0.069335113 container died 32106443e4d9cce5597e1e35d916e34cc8e899ac548b632e8c9e58485a32cad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.336 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.339 244018 INFO os_vif [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74')#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.352 244018 DEBUG nova.virt.libvirt.driver [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Start _get_guest_xml network_info=[{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.361 244018 WARNING nova.virt.libvirt.driver [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:27:54 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-32106443e4d9cce5597e1e35d916e34cc8e899ac548b632e8c9e58485a32cad8-userdata-shm.mount: Deactivated successfully.
Feb 25 07:27:54 np0005629333 systemd[1]: var-lib-containers-storage-overlay-eb72b0e246b6d9347232c841bd42065a2553e7580308af9097a9325d7f9de613-merged.mount: Deactivated successfully.
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.375 244018 DEBUG nova.virt.libvirt.host [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.376 244018 DEBUG nova.virt.libvirt.host [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.382 244018 DEBUG nova.virt.libvirt.host [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.382 244018 DEBUG nova.virt.libvirt.host [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.383 244018 DEBUG nova.virt.libvirt.driver [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.383 244018 DEBUG nova.virt.hardware [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.383 244018 DEBUG nova.virt.hardware [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.384 244018 DEBUG nova.virt.hardware [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.384 244018 DEBUG nova.virt.hardware [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.384 244018 DEBUG nova.virt.hardware [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.385 244018 DEBUG nova.virt.hardware [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.385 244018 DEBUG nova.virt.hardware [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.385 244018 DEBUG nova.virt.hardware [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.386 244018 DEBUG nova.virt.hardware [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.387 244018 DEBUG nova.virt.hardware [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.387 244018 DEBUG nova.virt.hardware [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.387 244018 DEBUG nova.objects.instance [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:54 np0005629333 podman[296291]: 2026-02-25 12:27:54.388246347 +0000 UTC m=+0.123644890 container cleanup 32106443e4d9cce5597e1e35d916e34cc8e899ac548b632e8c9e58485a32cad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 25 07:27:54 np0005629333 systemd[1]: libpod-conmon-32106443e4d9cce5597e1e35d916e34cc8e899ac548b632e8c9e58485a32cad8.scope: Deactivated successfully.
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.410 244018 DEBUG oslo_concurrency.processutils [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Feb 25 07:27:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Feb 25 07:27:54 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Feb 25 07:27:54 np0005629333 podman[296326]: 2026-02-25 12:27:54.461708586 +0000 UTC m=+0.053370012 container remove 32106443e4d9cce5597e1e35d916e34cc8e899ac548b632e8c9e58485a32cad8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:27:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:54.470 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4403e540-1701-416a-9627-6491f75bc007]: (4, ('Wed Feb 25 12:27:54 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 (32106443e4d9cce5597e1e35d916e34cc8e899ac548b632e8c9e58485a32cad8)\n32106443e4d9cce5597e1e35d916e34cc8e899ac548b632e8c9e58485a32cad8\nWed Feb 25 12:27:54 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 (32106443e4d9cce5597e1e35d916e34cc8e899ac548b632e8c9e58485a32cad8)\n32106443e4d9cce5597e1e35d916e34cc8e899ac548b632e8c9e58485a32cad8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:54.471 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f9feccb1-dba4-4628-8f0c-26acefe61ba7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:54.472 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:54 np0005629333 kernel: tapce318891-c0: left promiscuous mode
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.478 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.488 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:54.491 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[85e2afd5-14b4-4af8-917a-1db5424f52d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:54.511 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[63789194-c637-4089-96fa-46baec28aabe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:54.513 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[88484fda-28a6-4001-b929-6e0802811c23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:54.528 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[571ab088-b755-4ed7-bcdd-3ad0cf903ee3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441672, 'reachable_time': 35801, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296341, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:54 np0005629333 systemd[1]: run-netns-ovnmeta\x2dce318891\x2dcf3c\x2d4d99\x2daf7c\x2dc01770f38194.mount: Deactivated successfully.
Feb 25 07:27:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:54.530 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:27:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:54.530 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[f34e5279-982a-492e-89f2-edb8a3d97b1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.695 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.942 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:27:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3644554855' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:27:54 np0005629333 nova_compute[244014]: 2026-02-25 12:27:54.973 244018 DEBUG oslo_concurrency.processutils [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:27:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.010 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.010 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.011 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.023 244018 DEBUG oslo_concurrency.lockutils [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.024 244018 DEBUG oslo_concurrency.lockutils [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.024 244018 DEBUG oslo_concurrency.lockutils [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.026 244018 DEBUG oslo_concurrency.lockutils [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.026 244018 DEBUG oslo_concurrency.lockutils [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.029 244018 INFO nova.compute.manager [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Terminating instance#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.031 244018 DEBUG oslo_concurrency.lockutils [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "refresh_cache-bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.032 244018 DEBUG oslo_concurrency.lockutils [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquired lock "refresh_cache-bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.032 244018 DEBUG nova.network.neutron [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.039 244018 DEBUG oslo_concurrency.processutils [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:27:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:27:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Feb 25 07:27:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Feb 25 07:27:55 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.202 244018 DEBUG nova.network.neutron [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.504 244018 DEBUG nova.compute.manager [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.599 244018 DEBUG oslo_concurrency.lockutils [None req-62bbec65-ddef-4fa8-95ea-48a40f75a157 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 12.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:27:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4210909642' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.620 244018 DEBUG oslo_concurrency.processutils [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.622 244018 DEBUG nova.virt.libvirt.vif [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:27:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": 
{"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.622 244018 DEBUG nova.network.os_vif_util [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.623 244018 DEBUG nova.network.os_vif_util [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.625 244018 DEBUG nova.objects.instance [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.639 244018 DEBUG nova.virt.libvirt.driver [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:27:55 np0005629333 nova_compute[244014]:  <uuid>8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</uuid>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:  <name>instance-00000035</name>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerActionsTestJSON-server-1971346294</nova:name>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:27:54</nova:creationTime>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:27:55 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:        <nova:user uuid="1f8bbe7db4454108aca005daa72d5c22">tempest-ServerActionsTestJSON-436476112-project-member</nova:user>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:        <nova:project uuid="56700581ea88438ba482d90bc702ced3">tempest-ServerActionsTestJSON-436476112</nova:project>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:        <nova:port uuid="ee46268d-740d-4ff9-8b65-4a81fc61eec3">
Feb 25 07:27:55 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <entry name="serial">8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</entry>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <entry name="uuid">8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</entry>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk">
Feb 25 07:27:55 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:27:55 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk.config">
Feb 25 07:27:55 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:27:55 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:ba:87:f1"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <target dev="tapee46268d-74"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b/console.log" append="off"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <input type="keyboard" bus="usb"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:27:55 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:27:55 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:27:55 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:27:55 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.645 244018 DEBUG nova.virt.libvirt.driver [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.645 244018 DEBUG nova.virt.libvirt.driver [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.646 244018 DEBUG nova.virt.libvirt.vif [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:27:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": 
{"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.646 244018 DEBUG nova.network.os_vif_util [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.647 244018 DEBUG nova.network.os_vif_util [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.648 244018 DEBUG os_vif [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.650 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.650 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.651 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.654 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.655 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee46268d-74, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.655 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee46268d-74, col_values=(('external_ids', {'iface-id': 'ee46268d-740d-4ff9-8b65-4a81fc61eec3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:87:f1', 'vm-uuid': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.657 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:55 np0005629333 NetworkManager[49836]: <info>  [1772022475.6581] manager: (tapee46268d-74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/236)
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.660 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.664 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.665 244018 INFO os_vif [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74')#033[00m
Feb 25 07:27:55 np0005629333 kernel: tapee46268d-74: entered promiscuous mode
Feb 25 07:27:55 np0005629333 systemd-udevd[296218]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:27:55 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:55Z|00531|binding|INFO|Claiming lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 for this chassis.
Feb 25 07:27:55 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:55Z|00532|binding|INFO|ee46268d-740d-4ff9-8b65-4a81fc61eec3: Claiming fa:16:3e:ba:87:f1 10.100.0.11
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.737 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:55 np0005629333 NetworkManager[49836]: <info>  [1772022475.7407] manager: (tapee46268d-74): new Tun device (/org/freedesktop/NetworkManager/Devices/237)
Feb 25 07:27:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.747 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:87:f1 10.100.0.11'], port_security=['fa:16:3e:ba:87:f1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'c2ca716d-3f7c-490b-954a-bca009559baa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ee46268d-740d-4ff9-8b65-4a81fc61eec3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:27:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.748 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ee46268d-740d-4ff9-8b65-4a81fc61eec3 in datapath ce318891-cf3c-4d99-af7c-c01770f38194 bound to our chassis#033[00m
Feb 25 07:27:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.750 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce318891-cf3c-4d99-af7c-c01770f38194#033[00m
Feb 25 07:27:55 np0005629333 NetworkManager[49836]: <info>  [1772022475.7521] device (tapee46268d-74): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:27:55 np0005629333 NetworkManager[49836]: <info>  [1772022475.7528] device (tapee46268d-74): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:27:55 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:55Z|00533|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 ovn-installed in OVS
Feb 25 07:27:55 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:55Z|00534|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 up in Southbound
Feb 25 07:27:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.760 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[348a5fa6-b656-4844-963c-6046c5d26358]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:55 np0005629333 nova_compute[244014]: 2026-02-25 12:27:55.761 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.761 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce318891-c1 in ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:27:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.764 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce318891-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:27:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.764 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bf03320b-ddea-42bd-8efb-81bcf9810cd7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.764 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0b23570b-84ab-492b-8938-7135b98cb793]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.778 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[569a3c9d-58fc-411e-8a8a-fb871ef35418]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:55 np0005629333 systemd-machined[210048]: New machine qemu-69-instance-00000035.
Feb 25 07:27:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.792 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4f9d4590-6081-4f31-898a-423e33090336]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:55 np0005629333 systemd[1]: Started Virtual Machine qemu-69-instance-00000035.
Feb 25 07:27:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.821 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f8f3bb35-c063-4aa1-8f3a-f29fd556c335]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:55 np0005629333 NetworkManager[49836]: <info>  [1772022475.8279] manager: (tapce318891-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/238)
Feb 25 07:27:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.826 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3fcdde45-b0ba-4f01-b6a9-bcdf1b22b723]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 835 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 8.8 MiB/s rd, 8.7 MiB/s wr, 227 op/s
Feb 25 07:27:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.868 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cd295640-5147-481b-a126-58915d030f60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.871 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e3a62e68-9e49-4535-92a7-da4feb83bf34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:55 np0005629333 NetworkManager[49836]: <info>  [1772022475.8923] device (tapce318891-c0): carrier: link connected
Feb 25 07:27:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.897 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d46193bf-0797-4d01-bb90-1780a23ebbff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.917 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[06a57f03-fea4-40ac-a65e-efcbbc9d0950]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444530, 'reachable_time': 37500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296449, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
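The large reply above is a pyroute2 RTM_NEWLINK dump rendered as plain dicts: netlink attributes sit in [name, value] pairs under 'attrs', and nested containers (IFLA_LINKINFO, IFLA_AF_SPEC) repeat the same shape. A small reader for dumps like this (an illustrative helper, not neutron code):

    def attr(msg, name, default=None):
        """Fetch a netlink attribute from a pyroute2-style message dict."""
        for key, value in msg.get("attrs", []):
            if key == name:
                return value
        return default

    # Against the RTM_NEWLINK message above:
    #   attr(msg, "IFLA_IFNAME")                            -> 'tapce318891-c1'
    #   attr(msg, "IFLA_ADDRESS")                           -> 'fa:16:3e:44:c3:a0'
    #   attr(attr(msg, "IFLA_LINKINFO"), "IFLA_INFO_KIND")  -> 'veth'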
Feb 25 07:27:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.931 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9b09eae9-74fa-4ee5-84db-223b7acdb2f8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:c3a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444530, 'tstamp': 444530}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296450, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.947 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b1c192-8902-432b-94a3-3822aafc9e04]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444530, 'reachable_time': 37500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296451, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:55.978 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[836474ad-207b-47ef-85fc-a2ccf8979eef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:56.044 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d3170aa6-7286-4251-8e1a-2a6b4a22f674]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:56.045 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:56.045 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:56.046 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce318891-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:27:56 np0005629333 NetworkManager[49836]: <info>  [1772022476.0488] manager: (tapce318891-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/239)
Feb 25 07:27:56 np0005629333 kernel: tapce318891-c0: entered promiscuous mode
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.048 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:56.052 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce318891-c0, col_values=(('external_ids', {'iface-id': '3b184c15-8ef4-4e11-bd18-e1253a4ff440'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
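The three ovsdbapp transactions above move the veth's root-namespace end onto the integration bridge and tag it for OVN: delete from br-ex if present, add to br-int, then set external_ids:iface-id so ovn-controller can bind it. Their ovs-vsctl equivalents, as a sketch (assumes ovs-vsctl is on PATH; the agent itself uses the OVSDB IDL, not the CLI):

    import subprocess

    port = "tapce318891-c0"
    iface_id = "3b184c15-8ef4-4e11-bd18-e1253a4ff440"
    for cmd in (
        ["ovs-vsctl", "--if-exists", "del-port", "br-ex", port],
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", port],
        ["ovs-vsctl", "set", "Interface", port,
         f"external_ids:iface-id={iface_id}"],
    ):
        subprocess.run(cmd, check=True)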
Feb 25 07:27:56 np0005629333 ovn_controller[147040]: 2026-02-25T12:27:56Z|00535|binding|INFO|Releasing lport 3b184c15-8ef4-4e11-bd18-e1253a4ff440 from this chassis (sb_readonly=0)
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.055 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:56.057 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:56.058 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cd4732e7-35df-4d14-b3c3-c76648df6e07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:56.059 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:27:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:27:56.060 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'env', 'PROCESS_TAG=haproxy-ce318891-cf3c-4d99-af7c-c01770f38194', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce318891-cf3c-4d99-af7c-c01770f38194.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
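With the config written, the agent launches haproxy inside the namespace through neutron-rootwrap. Stripped of the rootwrap indirection (i.e. run directly as root), the spawn logged above is roughly:

    import subprocess

    net = "ce318891-cf3c-4d99-af7c-c01770f38194"
    subprocess.run(
        ["ip", "netns", "exec", f"ovnmeta-{net}",
         "env", f"PROCESS_TAG=haproxy-{net}",
         "haproxy", "-f",
         f"/var/lib/neutron/ovn-metadata-proxy/{net}.conf"],
        check=True)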
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.062 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.334 244018 DEBUG nova.network.neutron [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.350 244018 DEBUG oslo_concurrency.lockutils [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Releasing lock "refresh_cache-bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.351 244018 DEBUG nova.compute.manager [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.394 244018 DEBUG nova.compute.manager [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received event network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.394 244018 DEBUG oslo_concurrency.lockutils [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.394 244018 DEBUG oslo_concurrency.lockutils [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.395 244018 DEBUG oslo_concurrency.lockutils [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
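The acquire / "acquired :: waited" / "released :: held" triplets here (and repeated below) are oslo.concurrency's standard lock logging; the code pattern behind them is just a named lock around the event pop, roughly (a sketch, not nova's actual code):

    from oslo_concurrency import lockutils

    with lockutils.lock("b8086e43-4c45-422f-a3b5-fa665c256b30-events"):
        # critical section: nova pops the pending instance event here
        ...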
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.395 244018 DEBUG nova.compute.manager [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] No waiting events found dispatching network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.395 244018 WARNING nova.compute.manager [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received unexpected event network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.395 244018 DEBUG nova.compute.manager [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.396 244018 DEBUG oslo_concurrency.lockutils [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.397 244018 DEBUG oslo_concurrency.lockutils [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.397 244018 DEBUG oslo_concurrency.lockutils [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.397 244018 DEBUG nova.compute.manager [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.397 244018 WARNING nova.compute.manager [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.398 244018 DEBUG nova.compute.manager [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.398 244018 DEBUG oslo_concurrency.lockutils [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.398 244018 DEBUG oslo_concurrency.lockutils [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.399 244018 DEBUG oslo_concurrency.lockutils [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.399 244018 DEBUG nova.compute.manager [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.399 244018 WARNING nova.compute.manager [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.399 244018 DEBUG nova.compute.manager [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.400 244018 DEBUG oslo_concurrency.lockutils [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.400 244018 DEBUG oslo_concurrency.lockutils [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.400 244018 DEBUG oslo_concurrency.lockutils [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.400 244018 DEBUG nova.compute.manager [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.400 244018 WARNING nova.compute.manager [req-aeda2758-7689-40a7-8079-4b586f8dd0d0 req-2f1d3e3e-96ed-4563-9a96-a9cb84699e24 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state reboot_started_hard.#033[00m
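The "No waiting events found" / "Received unexpected event" pairs above mean the vif-plugged notifications arrived while nothing was blocked in wait_for_instance_event, apparently because the hard reboot had already moved past the point of waiting for the plug; the events are logged and dropped. The underlying mechanism is essentially a keyed event table, sketched here as an illustration only:

    import threading

    waiters = {}  # event name -> threading.Event

    def pop_instance_event(name):
        ev = waiters.pop(name, None)
        if ev is None:
            print(f"No waiting events found dispatching {name}")
        else:
            ev.set()  # wakes the thread parked in wait_for_instance_event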
Feb 25 07:27:56 np0005629333 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000003b.scope: Deactivated successfully.
Feb 25 07:27:56 np0005629333 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000003b.scope: Consumed 14.759s CPU time.
Feb 25 07:27:56 np0005629333 systemd-machined[210048]: Machine qemu-65-instance-0000003b terminated.
Feb 25 07:27:56 np0005629333 podman[296482]: 2026-02-25 12:27:56.468786688 +0000 UTC m=+0.086636243 container create 43047c72e8074fabab76ae337cff05445967f3daebf90afef05a64abdd8d0080 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 07:27:56 np0005629333 systemd[1]: Started libpod-conmon-43047c72e8074fabab76ae337cff05445967f3daebf90afef05a64abdd8d0080.scope.
Feb 25 07:27:56 np0005629333 podman[296482]: 2026-02-25 12:27:56.41160793 +0000 UTC m=+0.029457505 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:27:56 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:27:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43a9a945243e4aa6728c2c83d101f229caef91189a564a26a79cc76c0a932bd4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:27:56 np0005629333 podman[296482]: 2026-02-25 12:27:56.566530085 +0000 UTC m=+0.184379690 container init 43047c72e8074fabab76ae337cff05445967f3daebf90afef05a64abdd8d0080 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223)
Feb 25 07:27:56 np0005629333 podman[296482]: 2026-02-25 12:27:56.577107444 +0000 UTC m=+0.194957019 container start 43047c72e8074fabab76ae337cff05445967f3daebf90afef05a64abdd8d0080 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.581 244018 INFO nova.virt.libvirt.driver [-] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Instance destroyed successfully.#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.582 244018 DEBUG nova.objects.instance [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lazy-loading 'resources' on Instance uuid bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:27:56 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[296497]: [NOTICE]   (296548) : New worker (296562) forked
Feb 25 07:27:56 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[296497]: [NOTICE]   (296548) : Loading success.
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.756 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.757 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022476.7564242, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.757 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.759 244018 DEBUG nova.compute.manager [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.762 244018 INFO nova.virt.libvirt.driver [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance rebooted successfully.#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.762 244018 DEBUG nova.compute.manager [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.792 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.795 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.820 244018 DEBUG oslo_concurrency.lockutils [None req-fb30cdba-999e-4b6f-bed8-035ca0049a44 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.821 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022476.757529, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.821 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Started (Lifecycle Event)#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.965 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:27:56 np0005629333 nova_compute[244014]: 2026-02-25 12:27:56.968 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:27:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:27:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:27:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:27:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:27:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:27:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:27:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:27:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:27:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:27:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:27:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:27:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:27:57 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:27:57 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:27:57 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
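The mon_command dispatches above appear to come from the cephadm mgr module refreshing keyrings and minimal confs. The same commands can be issued from Python with the rados bindings (a sketch; assumes an admin keyring readable from /etc/ceph/ceph.conf's keyring path):

    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "config generate-minimal-conf"}), b"")
        print(out.decode())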
Feb 25 07:27:57 np0005629333 nova_compute[244014]: 2026-02-25 12:27:57.690 244018 INFO nova.virt.libvirt.driver [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Deleting instance files /var/lib/nova/instances/bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_del#033[00m
Feb 25 07:27:57 np0005629333 nova_compute[244014]: 2026-02-25 12:27:57.691 244018 INFO nova.virt.libvirt.driver [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Deletion of /var/lib/nova/instances/bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36_del complete#033[00m
Feb 25 07:27:57 np0005629333 podman[296722]: 2026-02-25 12:27:57.777825536 +0000 UTC m=+0.054996748 container create a3218bf72228955af139e96cbaa1dcd65162dfcadc6b3edc352adf923d5d920c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_khorana, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:27:57 np0005629333 systemd[1]: Started libpod-conmon-a3218bf72228955af139e96cbaa1dcd65162dfcadc6b3edc352adf923d5d920c.scope.
Feb 25 07:27:57 np0005629333 podman[296722]: 2026-02-25 12:27:57.743724331 +0000 UTC m=+0.020895583 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:27:57 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:27:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 580 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 9.7 MiB/s rd, 3.3 MiB/s wr, 426 op/s
Feb 25 07:27:57 np0005629333 nova_compute[244014]: 2026-02-25 12:27:57.854 244018 INFO nova.compute.manager [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Took 1.50 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:27:57 np0005629333 nova_compute[244014]: 2026-02-25 12:27:57.855 244018 DEBUG oslo.service.loopingcall [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:27:57 np0005629333 nova_compute[244014]: 2026-02-25 12:27:57.855 244018 DEBUG nova.compute.manager [-] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:27:57 np0005629333 nova_compute[244014]: 2026-02-25 12:27:57.855 244018 DEBUG nova.network.neutron [-] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:27:57 np0005629333 podman[296722]: 2026-02-25 12:27:57.866163946 +0000 UTC m=+0.143335248 container init a3218bf72228955af139e96cbaa1dcd65162dfcadc6b3edc352adf923d5d920c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 07:27:57 np0005629333 podman[296722]: 2026-02-25 12:27:57.873516614 +0000 UTC m=+0.150687866 container start a3218bf72228955af139e96cbaa1dcd65162dfcadc6b3edc352adf923d5d920c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 07:27:57 np0005629333 podman[296722]: 2026-02-25 12:27:57.877932299 +0000 UTC m=+0.155103541 container attach a3218bf72228955af139e96cbaa1dcd65162dfcadc6b3edc352adf923d5d920c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 07:27:57 np0005629333 interesting_khorana[296738]: 167 167
Feb 25 07:27:57 np0005629333 systemd[1]: libpod-a3218bf72228955af139e96cbaa1dcd65162dfcadc6b3edc352adf923d5d920c.scope: Deactivated successfully.
Feb 25 07:27:57 np0005629333 podman[296722]: 2026-02-25 12:27:57.881230902 +0000 UTC m=+0.158402144 container died a3218bf72228955af139e96cbaa1dcd65162dfcadc6b3edc352adf923d5d920c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_khorana, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:27:57 np0005629333 systemd[1]: var-lib-containers-storage-overlay-adaa34dcf965cf40c66b6085bb41edaaf98b0324a3f5cee74187a774bc175bcb-merged.mount: Deactivated successfully.
Feb 25 07:27:57 np0005629333 podman[296722]: 2026-02-25 12:27:57.927370858 +0000 UTC m=+0.204542080 container remove a3218bf72228955af139e96cbaa1dcd65162dfcadc6b3edc352adf923d5d920c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:27:57 np0005629333 systemd[1]: libpod-conmon-a3218bf72228955af139e96cbaa1dcd65162dfcadc6b3edc352adf923d5d920c.scope: Deactivated successfully.
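The create → init → start → attach → died → remove sequence above is one short-lived container run (cephadm frequently spot-checks the host this way), and the lone "167 167" it printed looks like a uid/gid probe of the ceph user. A guessed equivalent invocation, with the image digest taken from the log and the probed path purely hypothetical:

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    # hypothetical probe: print the owning uid/gid of a path inside the image
    subprocess.run(["podman", "run", "--rm", image,
                    "stat", "-c", "%u %g", "/var/lib/ceph"], check=True)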
Feb 25 07:27:58 np0005629333 nova_compute[244014]: 2026-02-25 12:27:58.045 244018 DEBUG nova.network.neutron [-] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:27:58 np0005629333 nova_compute[244014]: 2026-02-25 12:27:58.061 244018 DEBUG nova.network.neutron [-] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:27:58 np0005629333 nova_compute[244014]: 2026-02-25 12:27:58.081 244018 INFO nova.compute.manager [-] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Took 0.23 seconds to deallocate network for instance.#033[00m
Feb 25 07:27:58 np0005629333 podman[296761]: 2026-02-25 12:27:58.112822027 +0000 UTC m=+0.058485006 container create 32cbc73087f33748a610ebe2c31feaebe46a0af2ef4c5c40fd12d4a9343adc9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_satoshi, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:27:58 np0005629333 nova_compute[244014]: 2026-02-25 12:27:58.124 244018 DEBUG oslo_concurrency.lockutils [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:27:58 np0005629333 nova_compute[244014]: 2026-02-25 12:27:58.126 244018 DEBUG oslo_concurrency.lockutils [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:27:58 np0005629333 systemd[1]: Started libpod-conmon-32cbc73087f33748a610ebe2c31feaebe46a0af2ef4c5c40fd12d4a9343adc9d.scope.
Feb 25 07:27:58 np0005629333 podman[296761]: 2026-02-25 12:27:58.092279525 +0000 UTC m=+0.037942484 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:27:58 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:27:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f69ea29c9e991a3a9110fedbc270e30cc9ce12209b358d2ec83123f5f96007/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:27:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f69ea29c9e991a3a9110fedbc270e30cc9ce12209b358d2ec83123f5f96007/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:27:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f69ea29c9e991a3a9110fedbc270e30cc9ce12209b358d2ec83123f5f96007/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:27:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f69ea29c9e991a3a9110fedbc270e30cc9ce12209b358d2ec83123f5f96007/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:27:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f69ea29c9e991a3a9110fedbc270e30cc9ce12209b358d2ec83123f5f96007/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:27:58 np0005629333 podman[296761]: 2026-02-25 12:27:58.250242976 +0000 UTC m=+0.195905985 container init 32cbc73087f33748a610ebe2c31feaebe46a0af2ef4c5c40fd12d4a9343adc9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_satoshi, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 07:27:58 np0005629333 podman[296761]: 2026-02-25 12:27:58.266367393 +0000 UTC m=+0.212030372 container start 32cbc73087f33748a610ebe2c31feaebe46a0af2ef4c5c40fd12d4a9343adc9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3)
Feb 25 07:27:58 np0005629333 podman[296761]: 2026-02-25 12:27:58.270825229 +0000 UTC m=+0.216488208 container attach 32cbc73087f33748a610ebe2c31feaebe46a0af2ef4c5c40fd12d4a9343adc9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:27:58 np0005629333 nova_compute[244014]: 2026-02-25 12:27:58.283 244018 DEBUG oslo_concurrency.processutils [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:27:58 np0005629333 nova_compute[244014]: 2026-02-25 12:27:58.642 244018 DEBUG nova.compute.manager [req-fc145ba6-74e9-4a33-a8d3-fc3a633c1aef req-1dfb41ec-8064-44d1-9b4c-0d72a54c9b28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:27:58 np0005629333 nova_compute[244014]: 2026-02-25 12:27:58.644 244018 DEBUG oslo_concurrency.lockutils [req-fc145ba6-74e9-4a33-a8d3-fc3a633c1aef req-1dfb41ec-8064-44d1-9b4c-0d72a54c9b28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:27:58 np0005629333 nova_compute[244014]: 2026-02-25 12:27:58.644 244018 DEBUG oslo_concurrency.lockutils [req-fc145ba6-74e9-4a33-a8d3-fc3a633c1aef req-1dfb41ec-8064-44d1-9b4c-0d72a54c9b28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:27:58 np0005629333 nova_compute[244014]: 2026-02-25 12:27:58.645 244018 DEBUG oslo_concurrency.lockutils [req-fc145ba6-74e9-4a33-a8d3-fc3a633c1aef req-1dfb41ec-8064-44d1-9b4c-0d72a54c9b28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:27:58 np0005629333 nova_compute[244014]: 2026-02-25 12:27:58.645 244018 DEBUG nova.compute.manager [req-fc145ba6-74e9-4a33-a8d3-fc3a633c1aef req-1dfb41ec-8064-44d1-9b4c-0d72a54c9b28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:27:58 np0005629333 nova_compute[244014]: 2026-02-25 12:27:58.646 244018 WARNING nova.compute.manager [req-fc145ba6-74e9-4a33-a8d3-fc3a633c1aef req-1dfb41ec-8064-44d1-9b4c-0d72a54c9b28 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state None.
Feb 25 07:27:58 np0005629333 admiring_satoshi[296777]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:27:58 np0005629333 admiring_satoshi[296777]: --> All data devices are unavailable
Feb 25 07:27:58 np0005629333 systemd[1]: libpod-32cbc73087f33748a610ebe2c31feaebe46a0af2ef4c5c40fd12d4a9343adc9d.scope: Deactivated successfully.
Feb 25 07:27:58 np0005629333 podman[296761]: 2026-02-25 12:27:58.790261848 +0000 UTC m=+0.735924827 container died 32cbc73087f33748a610ebe2c31feaebe46a0af2ef4c5c40fd12d4a9343adc9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_satoshi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 07:27:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:27:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1167045000' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:27:58 np0005629333 systemd[1]: var-lib-containers-storage-overlay-11f69ea29c9e991a3a9110fedbc270e30cc9ce12209b358d2ec83123f5f96007-merged.mount: Deactivated successfully.
Feb 25 07:27:58 np0005629333 nova_compute[244014]: 2026-02-25 12:27:58.840 244018 DEBUG oslo_concurrency.processutils [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:27:58 np0005629333 podman[296761]: 2026-02-25 12:27:58.843873316 +0000 UTC m=+0.789536295 container remove 32cbc73087f33748a610ebe2c31feaebe46a0af2ef4c5c40fd12d4a9343adc9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_satoshi, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:27:58 np0005629333 systemd[1]: libpod-conmon-32cbc73087f33748a610ebe2c31feaebe46a0af2ef4c5c40fd12d4a9343adc9d.scope: Deactivated successfully.
Feb 25 07:27:58 np0005629333 nova_compute[244014]: 2026-02-25 12:27:58.879 244018 DEBUG nova.compute.provider_tree [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:27:58 np0005629333 nova_compute[244014]: 2026-02-25 12:27:58.902 244018 DEBUG nova.scheduler.client.report [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:27:58 np0005629333 nova_compute[244014]: 2026-02-25 12:27:58.944 244018 DEBUG oslo_concurrency.lockutils [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:27:59 np0005629333 nova_compute[244014]: 2026-02-25 12:27:59.025 244018 INFO nova.scheduler.client.report [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Deleted allocations for instance bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36
Feb 25 07:27:59 np0005629333 nova_compute[244014]: 2026-02-25 12:27:59.091 244018 DEBUG oslo_concurrency.lockutils [None req-f137cc63-ddb3-4d51-a359-fc77d04bef30 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:27:59 np0005629333 podman[296892]: 2026-02-25 12:27:59.303574906 +0000 UTC m=+0.054359990 container create 0f79c020393287a098b937a76499f4b41db7b3cac150d34bca5abc81ab056c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 07:27:59 np0005629333 systemd[1]: Started libpod-conmon-0f79c020393287a098b937a76499f4b41db7b3cac150d34bca5abc81ab056c01.scope.
Feb 25 07:27:59 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:27:59 np0005629333 podman[296892]: 2026-02-25 12:27:59.36521516 +0000 UTC m=+0.116000254 container init 0f79c020393287a098b937a76499f4b41db7b3cac150d34bca5abc81ab056c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_khayyam, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 07:27:59 np0005629333 podman[296892]: 2026-02-25 12:27:59.372766934 +0000 UTC m=+0.123552008 container start 0f79c020393287a098b937a76499f4b41db7b3cac150d34bca5abc81ab056c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:27:59 np0005629333 podman[296892]: 2026-02-25 12:27:59.277883259 +0000 UTC m=+0.028668383 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:27:59 np0005629333 podman[296892]: 2026-02-25 12:27:59.376031196 +0000 UTC m=+0.126816290 container attach 0f79c020393287a098b937a76499f4b41db7b3cac150d34bca5abc81ab056c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_khayyam, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 07:27:59 np0005629333 amazing_khayyam[296908]: 167 167
Feb 25 07:27:59 np0005629333 podman[296892]: 2026-02-25 12:27:59.379638799 +0000 UTC m=+0.130423913 container died 0f79c020393287a098b937a76499f4b41db7b3cac150d34bca5abc81ab056c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_khayyam, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:27:59 np0005629333 systemd[1]: libpod-0f79c020393287a098b937a76499f4b41db7b3cac150d34bca5abc81ab056c01.scope: Deactivated successfully.
Feb 25 07:27:59 np0005629333 systemd[1]: var-lib-containers-storage-overlay-3aab89e1ce8e4f742a7ee84231cf2312625904772350d5335b2c5022bd9716c0-merged.mount: Deactivated successfully.
Feb 25 07:27:59 np0005629333 podman[296892]: 2026-02-25 12:27:59.418703674 +0000 UTC m=+0.169488758 container remove 0f79c020393287a098b937a76499f4b41db7b3cac150d34bca5abc81ab056c01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_khayyam, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 07:27:59 np0005629333 systemd[1]: libpod-conmon-0f79c020393287a098b937a76499f4b41db7b3cac150d34bca5abc81ab056c01.scope: Deactivated successfully.
Feb 25 07:27:59 np0005629333 podman[296931]: 2026-02-25 12:27:59.590208168 +0000 UTC m=+0.054532674 container create 659ff074334d38b482764875d6a153af78a3627288f8402989e3b7e2d16ae69c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_jepsen, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:27:59 np0005629333 systemd[1]: Started libpod-conmon-659ff074334d38b482764875d6a153af78a3627288f8402989e3b7e2d16ae69c.scope.
Feb 25 07:27:59 np0005629333 podman[296931]: 2026-02-25 12:27:59.566417465 +0000 UTC m=+0.030742041 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:27:59 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:27:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02f06d7702a45c7b3b07c518e0a7eb9cc9c4590bb82e0ebf4d0f39184fda2be0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:27:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02f06d7702a45c7b3b07c518e0a7eb9cc9c4590bb82e0ebf4d0f39184fda2be0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:27:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02f06d7702a45c7b3b07c518e0a7eb9cc9c4590bb82e0ebf4d0f39184fda2be0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:27:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02f06d7702a45c7b3b07c518e0a7eb9cc9c4590bb82e0ebf4d0f39184fda2be0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:27:59 np0005629333 podman[296931]: 2026-02-25 12:27:59.708714662 +0000 UTC m=+0.173039198 container init 659ff074334d38b482764875d6a153af78a3627288f8402989e3b7e2d16ae69c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_jepsen, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:27:59 np0005629333 nova_compute[244014]: 2026-02-25 12:27:59.706 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:27:59 np0005629333 podman[296931]: 2026-02-25 12:27:59.716504602 +0000 UTC m=+0.180829128 container start 659ff074334d38b482764875d6a153af78a3627288f8402989e3b7e2d16ae69c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_jepsen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 07:27:59 np0005629333 podman[296931]: 2026-02-25 12:27:59.720612979 +0000 UTC m=+0.184937485 container attach 659ff074334d38b482764875d6a153af78a3627288f8402989e3b7e2d16ae69c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_jepsen, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 07:27:59 np0005629333 nova_compute[244014]: 2026-02-25 12:27:59.773 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:27:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 580 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 52 KiB/s wr, 300 op/s
Feb 25 07:27:59 np0005629333 nova_compute[244014]: 2026-02-25 12:27:59.898 244018 DEBUG oslo_concurrency.lockutils [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "160a4e3f-b197-4b82-a2ff-cebf79df47df" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:27:59 np0005629333 nova_compute[244014]: 2026-02-25 12:27:59.899 244018 DEBUG oslo_concurrency.lockutils [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "160a4e3f-b197-4b82-a2ff-cebf79df47df" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:27:59 np0005629333 nova_compute[244014]: 2026-02-25 12:27:59.900 244018 DEBUG oslo_concurrency.lockutils [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "160a4e3f-b197-4b82-a2ff-cebf79df47df-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:27:59 np0005629333 nova_compute[244014]: 2026-02-25 12:27:59.900 244018 DEBUG oslo_concurrency.lockutils [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "160a4e3f-b197-4b82-a2ff-cebf79df47df-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:27:59 np0005629333 nova_compute[244014]: 2026-02-25 12:27:59.901 244018 DEBUG oslo_concurrency.lockutils [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "160a4e3f-b197-4b82-a2ff-cebf79df47df-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:27:59 np0005629333 nova_compute[244014]: 2026-02-25 12:27:59.902 244018 INFO nova.compute.manager [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Terminating instance
Feb 25 07:27:59 np0005629333 nova_compute[244014]: 2026-02-25 12:27:59.904 244018 DEBUG oslo_concurrency.lockutils [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "refresh_cache-160a4e3f-b197-4b82-a2ff-cebf79df47df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:27:59 np0005629333 nova_compute[244014]: 2026-02-25 12:27:59.904 244018 DEBUG oslo_concurrency.lockutils [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquired lock "refresh_cache-160a4e3f-b197-4b82-a2ff-cebf79df47df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:27:59 np0005629333 nova_compute[244014]: 2026-02-25 12:27:59.904 244018 DEBUG nova.network.neutron [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:28:00 np0005629333 great_jepsen[296947]: {
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:    "0": [
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:        {
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "devices": [
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "/dev/loop3"
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            ],
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "lv_name": "ceph_lv0",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "lv_size": "21470642176",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "name": "ceph_lv0",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "tags": {
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.cluster_name": "ceph",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.crush_device_class": "",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.encrypted": "0",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.objectstore": "bluestore",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.osd_id": "0",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.type": "block",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.vdo": "0",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.with_tpm": "0"
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            },
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "type": "block",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "vg_name": "ceph_vg0"
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:        }
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:    ],
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:    "1": [
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:        {
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "devices": [
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "/dev/loop4"
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            ],
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "lv_name": "ceph_lv1",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "lv_size": "21470642176",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "name": "ceph_lv1",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "tags": {
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.cluster_name": "ceph",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.crush_device_class": "",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.encrypted": "0",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.objectstore": "bluestore",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.osd_id": "1",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.type": "block",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.vdo": "0",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.with_tpm": "0"
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            },
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "type": "block",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "vg_name": "ceph_vg1"
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:        }
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:    ],
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:    "2": [
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:        {
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "devices": [
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "/dev/loop5"
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            ],
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "lv_name": "ceph_lv2",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "lv_size": "21470642176",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "name": "ceph_lv2",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "tags": {
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.cluster_name": "ceph",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.crush_device_class": "",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.encrypted": "0",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.objectstore": "bluestore",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.osd_id": "2",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.type": "block",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.vdo": "0",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:                "ceph.with_tpm": "0"
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            },
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "type": "block",
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:            "vg_name": "ceph_vg2"
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:        }
Feb 25 07:28:00 np0005629333 great_jepsen[296947]:    ]
Feb 25 07:28:00 np0005629333 great_jepsen[296947]: }
Feb 25 07:28:00 np0005629333 systemd[1]: libpod-659ff074334d38b482764875d6a153af78a3627288f8402989e3b7e2d16ae69c.scope: Deactivated successfully.
Feb 25 07:28:00 np0005629333 podman[296931]: 2026-02-25 12:28:00.050815684 +0000 UTC m=+0.515140170 container died 659ff074334d38b482764875d6a153af78a3627288f8402989e3b7e2d16ae69c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_jepsen, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 07:28:00 np0005629333 systemd[1]: var-lib-containers-storage-overlay-02f06d7702a45c7b3b07c518e0a7eb9cc9c4590bb82e0ebf4d0f39184fda2be0-merged.mount: Deactivated successfully.
Feb 25 07:28:00 np0005629333 podman[296931]: 2026-02-25 12:28:00.086095072 +0000 UTC m=+0.550419548 container remove 659ff074334d38b482764875d6a153af78a3627288f8402989e3b7e2d16ae69c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:28:00 np0005629333 nova_compute[244014]: 2026-02-25 12:28:00.098 244018 DEBUG nova.network.neutron [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:28:00 np0005629333 systemd[1]: libpod-conmon-659ff074334d38b482764875d6a153af78a3627288f8402989e3b7e2d16ae69c.scope: Deactivated successfully.
Feb 25 07:28:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:28:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Feb 25 07:28:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Feb 25 07:28:00 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Feb 25 07:28:00 np0005629333 nova_compute[244014]: 2026-02-25 12:28:00.463 244018 DEBUG nova.network.neutron [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:28:00 np0005629333 nova_compute[244014]: 2026-02-25 12:28:00.480 244018 DEBUG oslo_concurrency.lockutils [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Releasing lock "refresh_cache-160a4e3f-b197-4b82-a2ff-cebf79df47df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:28:00 np0005629333 nova_compute[244014]: 2026-02-25 12:28:00.481 244018 DEBUG nova.compute.manager [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 07:28:00 np0005629333 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Feb 25 07:28:00 np0005629333 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d0000003a.scope: Consumed 14.289s CPU time.
Feb 25 07:28:00 np0005629333 systemd-machined[210048]: Machine qemu-64-instance-0000003a terminated.
Feb 25 07:28:00 np0005629333 podman[297031]: 2026-02-25 12:28:00.581524704 +0000 UTC m=+0.067857012 container create 950c586fbd23947d4920215832ba5dbac4fafda2f2c298bc1e8f3f43d7e32460 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 07:28:00 np0005629333 systemd[1]: Started libpod-conmon-950c586fbd23947d4920215832ba5dbac4fafda2f2c298bc1e8f3f43d7e32460.scope.
Feb 25 07:28:00 np0005629333 podman[297031]: 2026-02-25 12:28:00.547271254 +0000 UTC m=+0.033603572 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:28:00 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:28:00 np0005629333 nova_compute[244014]: 2026-02-25 12:28:00.658 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:28:00 np0005629333 podman[297031]: 2026-02-25 12:28:00.66018698 +0000 UTC m=+0.146519288 container init 950c586fbd23947d4920215832ba5dbac4fafda2f2c298bc1e8f3f43d7e32460 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_shamir, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 07:28:00 np0005629333 podman[297031]: 2026-02-25 12:28:00.669049931 +0000 UTC m=+0.155382209 container start 950c586fbd23947d4920215832ba5dbac4fafda2f2c298bc1e8f3f43d7e32460 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 07:28:00 np0005629333 podman[297031]: 2026-02-25 12:28:00.67288791 +0000 UTC m=+0.159220268 container attach 950c586fbd23947d4920215832ba5dbac4fafda2f2c298bc1e8f3f43d7e32460 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_shamir, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 07:28:00 np0005629333 priceless_shamir[297046]: 167 167
Feb 25 07:28:00 np0005629333 systemd[1]: libpod-950c586fbd23947d4920215832ba5dbac4fafda2f2c298bc1e8f3f43d7e32460.scope: Deactivated successfully.
Feb 25 07:28:00 np0005629333 podman[297031]: 2026-02-25 12:28:00.675104712 +0000 UTC m=+0.161437020 container died 950c586fbd23947d4920215832ba5dbac4fafda2f2c298bc1e8f3f43d7e32460 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_shamir, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 07:28:00 np0005629333 nova_compute[244014]: 2026-02-25 12:28:00.701 244018 INFO nova.virt.libvirt.driver [-] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Instance destroyed successfully.
Feb 25 07:28:00 np0005629333 nova_compute[244014]: 2026-02-25 12:28:00.701 244018 DEBUG nova.objects.instance [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lazy-loading 'resources' on Instance uuid 160a4e3f-b197-4b82-a2ff-cebf79df47df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:28:00 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a86e46fc80c4633abd3ba6dff11910203d7188ae0affae4cb7626efce0140231-merged.mount: Deactivated successfully.
Feb 25 07:28:00 np0005629333 podman[297031]: 2026-02-25 12:28:00.73757087 +0000 UTC m=+0.223903148 container remove 950c586fbd23947d4920215832ba5dbac4fafda2f2c298bc1e8f3f43d7e32460 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_shamir, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:28:00 np0005629333 systemd[1]: libpod-conmon-950c586fbd23947d4920215832ba5dbac4fafda2f2c298bc1e8f3f43d7e32460.scope: Deactivated successfully.
Feb 25 07:28:00 np0005629333 podman[297092]: 2026-02-25 12:28:00.961858558 +0000 UTC m=+0.068338805 container create 837aef9fd0909fdc0a5401e5df308209222a0a871113eb599b84b36316f7bc4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:28:01 np0005629333 systemd[1]: Started libpod-conmon-837aef9fd0909fdc0a5401e5df308209222a0a871113eb599b84b36316f7bc4e.scope.
Feb 25 07:28:01 np0005629333 podman[297092]: 2026-02-25 12:28:00.938428465 +0000 UTC m=+0.044908802 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:28:01 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:28:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/556b365383e7b3fad21c5c626887e848e47cc361e110b111c312871b9f5067a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:28:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/556b365383e7b3fad21c5c626887e848e47cc361e110b111c312871b9f5067a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:28:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/556b365383e7b3fad21c5c626887e848e47cc361e110b111c312871b9f5067a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:28:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/556b365383e7b3fad21c5c626887e848e47cc361e110b111c312871b9f5067a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:28:01 np0005629333 podman[297092]: 2026-02-25 12:28:01.070814141 +0000 UTC m=+0.177294408 container init 837aef9fd0909fdc0a5401e5df308209222a0a871113eb599b84b36316f7bc4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_thompson, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:28:01 np0005629333 podman[297092]: 2026-02-25 12:28:01.080603368 +0000 UTC m=+0.187083615 container start 837aef9fd0909fdc0a5401e5df308209222a0a871113eb599b84b36316f7bc4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 07:28:01 np0005629333 podman[297092]: 2026-02-25 12:28:01.084079407 +0000 UTC m=+0.190559664 container attach 837aef9fd0909fdc0a5401e5df308209222a0a871113eb599b84b36316f7bc4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 07:28:01 np0005629333 nova_compute[244014]: 2026-02-25 12:28:01.116 244018 INFO nova.virt.libvirt.driver [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Deleting instance files /var/lib/nova/instances/160a4e3f-b197-4b82-a2ff-cebf79df47df_del
Feb 25 07:28:01 np0005629333 nova_compute[244014]: 2026-02-25 12:28:01.117 244018 INFO nova.virt.libvirt.driver [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Deletion of /var/lib/nova/instances/160a4e3f-b197-4b82-a2ff-cebf79df47df_del complete
Feb 25 07:28:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Feb 25 07:28:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Feb 25 07:28:01 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Feb 25 07:28:01 np0005629333 nova_compute[244014]: 2026-02-25 12:28:01.201 244018 INFO nova.compute.manager [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:28:01 np0005629333 nova_compute[244014]: 2026-02-25 12:28:01.202 244018 DEBUG oslo.service.loopingcall [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:28:01 np0005629333 nova_compute[244014]: 2026-02-25 12:28:01.202 244018 DEBUG nova.compute.manager [-] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:28:01 np0005629333 nova_compute[244014]: 2026-02-25 12:28:01.202 244018 DEBUG nova.network.neutron [-] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:28:01 np0005629333 nova_compute[244014]: 2026-02-25 12:28:01.579 244018 DEBUG nova.network.neutron [-] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:28:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:28:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:28:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:28:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:28:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:28:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:28:01 np0005629333 nova_compute[244014]: 2026-02-25 12:28:01.595 244018 DEBUG nova.network.neutron [-] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:28:01 np0005629333 nova_compute[244014]: 2026-02-25 12:28:01.613 244018 INFO nova.compute.manager [-] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Took 0.41 seconds to deallocate network for instance.#033[00m
Feb 25 07:28:01 np0005629333 nova_compute[244014]: 2026-02-25 12:28:01.667 244018 DEBUG oslo_concurrency.lockutils [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:01 np0005629333 nova_compute[244014]: 2026-02-25 12:28:01.667 244018 DEBUG oslo_concurrency.lockutils [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
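The Acquiring/acquired/"released" triplets logged from lockutils.py are oslo.concurrency's standard instrumentation around a named lock; the waited/held durations reported later come from the same wrapper. The calling pattern on the nova side looks like this (function name illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_usage():
        pass  # body runs with the named lock held; wait/hold times are logged

    # the same thing, open-coded as a context manager:
    with lockutils.lock('compute_resources'):
        pass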
Feb 25 07:28:01 np0005629333 podman[297174]: 2026-02-25 12:28:01.726160599 +0000 UTC m=+0.074549321 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 07:28:01 np0005629333 lvm[297222]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:28:01 np0005629333 lvm[297222]: VG ceph_vg0 finished
Feb 25 07:28:01 np0005629333 lvm[297228]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:28:01 np0005629333 lvm[297228]: VG ceph_vg1 finished
Feb 25 07:28:01 np0005629333 podman[297178]: 2026-02-25 12:28:01.748306055 +0000 UTC m=+0.093903878 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible)
Feb 25 07:28:01 np0005629333 lvm[297230]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:28:01 np0005629333 lvm[297230]: VG ceph_vg2 finished
Feb 25 07:28:01 np0005629333 nova_compute[244014]: 2026-02-25 12:28:01.767 244018 DEBUG oslo_concurrency.processutils [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:28:01 np0005629333 nervous_thompson[297111]: {}
Feb 25 07:28:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 451 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 54 KiB/s wr, 458 op/s
Feb 25 07:28:01 np0005629333 systemd[1]: libpod-837aef9fd0909fdc0a5401e5df308209222a0a871113eb599b84b36316f7bc4e.scope: Deactivated successfully.
Feb 25 07:28:01 np0005629333 systemd[1]: libpod-837aef9fd0909fdc0a5401e5df308209222a0a871113eb599b84b36316f7bc4e.scope: Consumed 1.078s CPU time.
Feb 25 07:28:01 np0005629333 podman[297092]: 2026-02-25 12:28:01.865981386 +0000 UTC m=+0.972461633 container died 837aef9fd0909fdc0a5401e5df308209222a0a871113eb599b84b36316f7bc4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:28:01 np0005629333 systemd[1]: var-lib-containers-storage-overlay-556b365383e7b3fad21c5c626887e848e47cc361e110b111c312871b9f5067a9-merged.mount: Deactivated successfully.
Feb 25 07:28:01 np0005629333 podman[297092]: 2026-02-25 12:28:01.902473229 +0000 UTC m=+1.008953476 container remove 837aef9fd0909fdc0a5401e5df308209222a0a871113eb599b84b36316f7bc4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_thompson, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
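Container nervous_thompson goes through the full short-lived-container cycle in this window (init, start, attach, died, remove), which is typical of a cephadm helper invocation. The same event stream the podman[297092] lines record can be watched directly; a sketch using the podman events CLI (JSON field names vary across podman releases, so treat the keys below as illustrative):

    import json
    import subprocess

    proc = subprocess.Popen(
        ['podman', 'events', '--format', 'json',
         '--filter', 'event=died'],        # filter syntax per podman-events(1)
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:               # one JSON object per event
        ev = json.loads(line)
        print(ev.get('Status'), ev.get('Name'))  # e.g. "died nervous_thompson"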
Feb 25 07:28:01 np0005629333 systemd[1]: libpod-conmon-837aef9fd0909fdc0a5401e5df308209222a0a871113eb599b84b36316f7bc4e.scope: Deactivated successfully.
Feb 25 07:28:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:28:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:28:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:28:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:28:02 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:28:02 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:28:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:28:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1538028854' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:28:02 np0005629333 nova_compute[244014]: 2026-02-25 12:28:02.363 244018 DEBUG oslo_concurrency.processutils [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
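The Running cmd / returned pair above shows nova shelling out for Ceph pool capacity after the delete (the corresponding mon-side dispatch is the client.openstack df audit entry). A stand-alone equivalent, with the ceph arguments taken verbatim from the log; the JSON keys are those emitted by current ceph releases and the parsing is illustrative:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)['stats']
    free_gb = stats['total_avail_bytes'] / 1024 ** 3
    print(f'{free_gb:.1f} GiB available')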
Feb 25 07:28:02 np0005629333 nova_compute[244014]: 2026-02-25 12:28:02.372 244018 DEBUG nova.compute.provider_tree [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:28:02 np0005629333 nova_compute[244014]: 2026-02-25 12:28:02.398 244018 DEBUG nova.scheduler.client.report [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
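Per placement's documented capacity formula, the usable amount of each resource class in the inventory printed above is (total - reserved) * allocation_ratio; a quick worked check against the logged values:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, usable)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2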
Feb 25 07:28:02 np0005629333 nova_compute[244014]: 2026-02-25 12:28:02.422 244018 DEBUG oslo_concurrency.lockutils [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:02 np0005629333 nova_compute[244014]: 2026-02-25 12:28:02.462 244018 INFO nova.scheduler.client.report [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Deleted allocations for instance 160a4e3f-b197-4b82-a2ff-cebf79df47df#033[00m
Feb 25 07:28:02 np0005629333 nova_compute[244014]: 2026-02-25 12:28:02.541 244018 DEBUG oslo_concurrency.lockutils [None req-2a1499c5-6b97-4022-85fa-2964f8f3609c 29db18fbf1f1410384934731b9c53cb5 ddbe63406adc44468e143b66e5b1c207 - - default default] Lock "160a4e3f-b197-4b82-a2ff-cebf79df47df" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.391 244018 DEBUG oslo_concurrency.lockutils [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "43b8959e-9cf0-42ca-aa1f-8a380321c971" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.392 244018 DEBUG oslo_concurrency.lockutils [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.392 244018 DEBUG oslo_concurrency.lockutils [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.393 244018 DEBUG oslo_concurrency.lockutils [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.393 244018 DEBUG oslo_concurrency.lockutils [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.395 244018 INFO nova.compute.manager [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Terminating instance#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.397 244018 DEBUG nova.compute.manager [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:28:03 np0005629333 kernel: tap69011b0a-5a (unregistering): left promiscuous mode
Feb 25 07:28:03 np0005629333 NetworkManager[49836]: <info>  [1772022483.4530] device (tap69011b0a-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:28:03 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:03Z|00536|binding|INFO|Releasing lport 69011b0a-5af7-4bef-a14c-8d83e63e08ae from this chassis (sb_readonly=0)
Feb 25 07:28:03 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:03Z|00537|binding|INFO|Setting lport 69011b0a-5af7-4bef-a14c-8d83e63e08ae down in Southbound
Feb 25 07:28:03 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:03Z|00538|binding|INFO|Removing iface tap69011b0a-5a ovn-installed in OVS
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.464 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.472 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:03.478 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:0d:06 10.100.0.5'], port_security=['fa:16:3e:df:0d:06 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '43b8959e-9cf0-42ca-aa1f-8a380321c971', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64c22162-7e15-45de-8fd2-8c9a24f27006', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c85a955249394f0faf7c890f5cd0df32', 'neutron:revision_number': '4', 'neutron:security_group_ids': '10bdd349-ebee-42f5-8295-ca2b7d5c5d74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9495f97-67e6-4da7-a9b0-f643c9e48076, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=69011b0a-5af7-4bef-a14c-8d83e63e08ae) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:28:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:03.483 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 69011b0a-5af7-4bef-a14c-8d83e63e08ae in datapath 64c22162-7e15-45de-8fd2-8c9a24f27006 unbound from our chassis#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.485 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:03.486 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 64c22162-7e15-45de-8fd2-8c9a24f27006#033[00m
Feb 25 07:28:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:03.498 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d62bb3a4-08eb-46fe-a778-8d5e3917fae8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:03 np0005629333 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000037.scope: Deactivated successfully.
Feb 25 07:28:03 np0005629333 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000037.scope: Consumed 14.469s CPU time.
Feb 25 07:28:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:03.518 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[135c072a-b4e2-4313-ac88-27ba041a814f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:03 np0005629333 systemd-machined[210048]: Machine qemu-60-instance-00000037 terminated.
Feb 25 07:28:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:03.525 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[efe1cdba-33f7-4740-85b8-bb3041362f8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:03.548 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[03962804-dd0a-4033-bd3e-fc362e0e53c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:03.565 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6341d750-2d8c-4705-9626-30692ad3e1e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap64c22162-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:08:1c:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 20, 'rx_bytes': 700, 'tx_bytes': 1028, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 20, 'rx_bytes': 700, 'tx_bytes': 1028, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 117], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428693, 'reachable_time': 31584, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297299, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:03.577 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[30cac6ed-d47f-42f7-b982-1aabc9c75ea8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428702, 'tstamp': 428702}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297300, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap64c22162-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 428705, 'tstamp': 428705}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297300, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
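The two privsep replies above are netlink dumps (RTM_NEWLINK, then RTM_NEWADDR) that the privileged helper ran inside the ovnmeta-* metadata namespace on the agent's behalf; note the 'target' field naming the namespace. A direct, root-only equivalent with pyroute2 (namespace name taken from the log; attribute access per pyroute2's documented API):

    from pyroute2 import NetNS

    with NetNS('ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006') as ns:
        for addr in ns.get_addr():               # RTM_NEWADDR records
            print(addr.get_attr('IFA_ADDRESS'))  # e.g. 10.100.0.2 and
                                                 # 169.254.169.254 above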
Feb 25 07:28:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:03.579 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64c22162-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.580 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.584 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:03.585 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64c22162-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:03.585 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:28:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:03.586 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap64c22162-70, col_values=(('external_ids', {'iface-id': '81f0f54c-4e04-4adf-952f-b6d0fe9698c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:03.587 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
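The three ovsdbapp transactions above (DelPortCommand, AddPortCommand, DbSetCommand) map one-to-one onto ovs-vsctl operations; the two "Transaction caused no change" results show they are idempotent re-applications of state already in OVSDB. An illustrative shell-out equivalent, with port, bridge, and iface-id taken from the log lines:

    import subprocess

    for cmd in (
        ['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', 'tap64c22162-70'],
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tap64c22162-70'],
        ['ovs-vsctl', 'set', 'Interface', 'tap64c22162-70',
         'external_ids:iface-id=81f0f54c-4e04-4adf-952f-b6d0fe9698c7'],
    ):
        subprocess.run(cmd, check=True)  # no-op if state already matches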
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.628 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Acquiring lock "caff6378-2d93-4b73-8d58-da0e74a6d46e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.628 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.632 244018 INFO nova.virt.libvirt.driver [-] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Instance destroyed successfully.#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.632 244018 DEBUG nova.objects.instance [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'resources' on Instance uuid 43b8959e-9cf0-42ca-aa1f-8a380321c971 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.648 244018 DEBUG nova.virt.libvirt.vif [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1336556788',display_name='tempest-ServerActionsTestOtherB-server-1336556788',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1336556788',id=55,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c85a955249394f0faf7c890f5cd0df32',ramdisk_id='',reservation_id='r-8l7ggktq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1539976047',owner_user_name='tempest-ServerActionsTestOtherB-1539976047-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:26:37Z,user_data=None,user_id='b774fd0c04fc403d9ddb205f1e6abbc5',uuid=43b8959e-9cf0-42ca-aa1f-8a380321c971,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "address": "fa:16:3e:df:0d:06", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69011b0a-5a", "ovs_interfaceid": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.649 244018 DEBUG nova.network.os_vif_util [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converting VIF {"id": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "address": "fa:16:3e:df:0d:06", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap69011b0a-5a", "ovs_interfaceid": "69011b0a-5af7-4bef-a14c-8d83e63e08ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.649 244018 DEBUG nova.network.os_vif_util [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:0d:06,bridge_name='br-int',has_traffic_filtering=True,id=69011b0a-5af7-4bef-a14c-8d83e63e08ae,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69011b0a-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.650 244018 DEBUG os_vif [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:0d:06,bridge_name='br-int',has_traffic_filtering=True,id=69011b0a-5af7-4bef-a14c-8d83e63e08ae,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69011b0a-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.652 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.652 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69011b0a-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.653 244018 DEBUG nova.compute.manager [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.657 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.658 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.660 244018 INFO os_vif [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:0d:06,bridge_name='br-int',has_traffic_filtering=True,id=69011b0a-5af7-4bef-a14c-8d83e63e08ae,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap69011b0a-5a')#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.739 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.741 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.749 244018 DEBUG nova.virt.hardware [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.750 244018 INFO nova.compute.claims [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:28:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 410 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 46 KiB/s wr, 436 op/s
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.906 244018 INFO nova.virt.libvirt.driver [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Deleting instance files /var/lib/nova/instances/43b8959e-9cf0-42ca-aa1f-8a380321c971_del#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.907 244018 INFO nova.virt.libvirt.driver [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Deletion of /var/lib/nova/instances/43b8959e-9cf0-42ca-aa1f-8a380321c971_del complete#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.914 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.915 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.948 244018 DEBUG nova.compute.manager [req-b9e9c6c2-5fd2-45ab-993a-cd2424cc7d76 req-43be49d5-b2a9-4d78-a5b3-eacb50a223bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Received event network-vif-unplugged-69011b0a-5af7-4bef-a14c-8d83e63e08ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.948 244018 DEBUG oslo_concurrency.lockutils [req-b9e9c6c2-5fd2-45ab-993a-cd2424cc7d76 req-43be49d5-b2a9-4d78-a5b3-eacb50a223bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.949 244018 DEBUG oslo_concurrency.lockutils [req-b9e9c6c2-5fd2-45ab-993a-cd2424cc7d76 req-43be49d5-b2a9-4d78-a5b3-eacb50a223bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.949 244018 DEBUG oslo_concurrency.lockutils [req-b9e9c6c2-5fd2-45ab-993a-cd2424cc7d76 req-43be49d5-b2a9-4d78-a5b3-eacb50a223bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.950 244018 DEBUG nova.compute.manager [req-b9e9c6c2-5fd2-45ab-993a-cd2424cc7d76 req-43be49d5-b2a9-4d78-a5b3-eacb50a223bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] No waiting events found dispatching network-vif-unplugged-69011b0a-5af7-4bef-a14c-8d83e63e08ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.950 244018 DEBUG nova.compute.manager [req-b9e9c6c2-5fd2-45ab-993a-cd2424cc7d76 req-43be49d5-b2a9-4d78-a5b3-eacb50a223bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Received event network-vif-unplugged-69011b0a-5af7-4bef-a14c-8d83e63e08ae for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
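"No waiting events found" above means neutron's network-vif-unplugged notification arrived while nothing was blocked waiting for it (the instance was already in task_state deleting, so the event is simply logged and dropped). The underlying pattern is a per-instance map of waiters, roughly like this (illustrative, not nova's code):

    import threading
    from collections import defaultdict

    waiters: dict[str, dict[str, threading.Event]] = defaultdict(dict)

    def expect(instance: str, event: str) -> threading.Event:
        ev = threading.Event()
        waiters[instance][event] = ev     # a caller will later ev.wait()
        return ev

    def dispatch(instance: str, event: str) -> None:
        ev = waiters[instance].pop(event, None)
        if ev is None:
            print(f'No waiting events found dispatching {event}')
        else:
            ev.set()                      # wakes the blocked caller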
Feb 25 07:28:03 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.952 244018 DEBUG oslo_concurrency.processutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:28:04 np0005629333 nova_compute[244014]: 2026-02-25 12:28:03.999 244018 INFO nova.compute.manager [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:28:04 np0005629333 nova_compute[244014]: 2026-02-25 12:28:04.001 244018 DEBUG oslo.service.loopingcall [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:28:04 np0005629333 nova_compute[244014]: 2026-02-25 12:28:04.001 244018 DEBUG nova.compute.manager [-] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:28:04 np0005629333 nova_compute[244014]: 2026-02-25 12:28:04.002 244018 DEBUG nova.network.neutron [-] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:28:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Feb 25 07:28:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Feb 25 07:28:04 np0005629333 nova_compute[244014]: 2026-02-25 12:28:04.188 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:28:04 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Feb 25 07:28:04 np0005629333 nova_compute[244014]: 2026-02-25 12:28:04.193 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:28:04 np0005629333 nova_compute[244014]: 2026-02-25 12:28:04.194 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 25 07:28:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:28:04 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2846933071' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:28:04 np0005629333 nova_compute[244014]: 2026-02-25 12:28:04.534 244018 DEBUG oslo_concurrency.processutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:28:04 np0005629333 nova_compute[244014]: 2026-02-25 12:28:04.541 244018 DEBUG nova.compute.provider_tree [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:28:04 np0005629333 nova_compute[244014]: 2026-02-25 12:28:04.706 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:04 np0005629333 nova_compute[244014]: 2026-02-25 12:28:04.855 244018 DEBUG nova.scheduler.client.report [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:28:04 np0005629333 nova_compute[244014]: 2026-02-25 12:28:04.899 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:04 np0005629333 nova_compute[244014]: 2026-02-25 12:28:04.900 244018 DEBUG nova.compute.manager [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:28:04 np0005629333 nova_compute[244014]: 2026-02-25 12:28:04.981 244018 DEBUG nova.compute.manager [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:28:04 np0005629333 nova_compute[244014]: 2026-02-25 12:28:04.982 244018 DEBUG nova.network.neutron [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:28:05 np0005629333 nova_compute[244014]: 2026-02-25 12:28:05.013 244018 INFO nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:28:05 np0005629333 nova_compute[244014]: 2026-02-25 12:28:05.041 244018 DEBUG nova.compute.manager [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:28:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:28:05 np0005629333 nova_compute[244014]: 2026-02-25 12:28:05.387 244018 DEBUG nova.policy [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '322943d9098b419f8d2d573cc6b7fe8e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'fbfdd47eb32e4d4aadc46f464899cf16', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
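The failed check above is expected for a non-admin token: network:attach_external_network defaults to an admin-only rule, so a request carrying only the reader/member roles is refused and nova treats the external network as not directly attachable. A hedged sketch of how oslo.policy evaluates such a rule (the rule strings below are illustrative defaults, not this deployment's actual policy file):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_defaults([
        policy.RuleDefault('context_is_admin', 'role:admin'),
        policy.RuleDefault('network:attach_external_network',
                           'rule:context_is_admin'),
    ])

    creds = {'roles': ['reader', 'member'], 'is_admin': False}
    # do_raise=False returns a boolean instead of raising PolicyNotAuthorized.
    allowed = enforcer.enforce('network:attach_external_network', {}, creds,
                               do_raise=False)
    print(allowed)  # False, matching the policy-check failure logged above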
Feb 25 07:28:05 np0005629333 nova_compute[244014]: 2026-02-25 12:28:05.398 244018 DEBUG nova.compute.manager [req-6bb749f3-6d1a-47a5-ae30-ea8645ad9b9c req-a5244247-2a11-4d3e-a50c-79f56cc1576f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Received event network-vif-deleted-69011b0a-5af7-4bef-a14c-8d83e63e08ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:28:05 np0005629333 nova_compute[244014]: 2026-02-25 12:28:05.399 244018 INFO nova.compute.manager [req-6bb749f3-6d1a-47a5-ae30-ea8645ad9b9c req-a5244247-2a11-4d3e-a50c-79f56cc1576f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Neutron deleted interface 69011b0a-5af7-4bef-a14c-8d83e63e08ae; detaching it from the instance and deleting it from the info cache#033[00m
Feb 25 07:28:05 np0005629333 nova_compute[244014]: 2026-02-25 12:28:05.399 244018 DEBUG nova.network.neutron [req-6bb749f3-6d1a-47a5-ae30-ea8645ad9b9c req-a5244247-2a11-4d3e-a50c-79f56cc1576f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:28:05 np0005629333 nova_compute[244014]: 2026-02-25 12:28:05.401 244018 DEBUG nova.network.neutron [-] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:28:05 np0005629333 nova_compute[244014]: 2026-02-25 12:28:05.678 244018 INFO nova.compute.manager [-] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Took 1.68 seconds to deallocate network for instance.#033[00m
Feb 25 07:28:05 np0005629333 nova_compute[244014]: 2026-02-25 12:28:05.680 244018 DEBUG nova.compute.manager [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:28:05 np0005629333 nova_compute[244014]: 2026-02-25 12:28:05.682 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:28:05 np0005629333 nova_compute[244014]: 2026-02-25 12:28:05.683 244018 INFO nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Creating image(s)#033[00m
Feb 25 07:28:05 np0005629333 nova_compute[244014]: 2026-02-25 12:28:05.713 244018 DEBUG nova.storage.rbd_utils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] rbd image caff6378-2d93-4b73-8d58-da0e74a6d46e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:28:05 np0005629333 nova_compute[244014]: 2026-02-25 12:28:05.743 244018 DEBUG nova.storage.rbd_utils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] rbd image caff6378-2d93-4b73-8d58-da0e74a6d46e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:28:05 np0005629333 nova_compute[244014]: 2026-02-25 12:28:05.775 244018 DEBUG nova.storage.rbd_utils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] rbd image caff6378-2d93-4b73-8d58-da0e74a6d46e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:28:05 np0005629333 nova_compute[244014]: 2026-02-25 12:28:05.779 244018 DEBUG oslo_concurrency.processutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:28:05 np0005629333 nova_compute[244014]: 2026-02-25 12:28:05.824 244018 DEBUG nova.compute.manager [req-6bb749f3-6d1a-47a5-ae30-ea8645ad9b9c req-a5244247-2a11-4d3e-a50c-79f56cc1576f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Detach interface failed, port_id=69011b0a-5af7-4bef-a14c-8d83e63e08ae, reason: Instance 43b8959e-9cf0-42ca-aa1f-8a380321c971 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 25 07:28:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 410 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.7 KiB/s wr, 259 op/s
Feb 25 07:28:05 np0005629333 nova_compute[244014]: 2026-02-25 12:28:05.857 244018 DEBUG oslo_concurrency.processutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
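The qemu-img probe logged above runs under oslo.concurrency's prlimit wrapper: --as=1073741824 caps the child's address space at 1 GiB and --cpu=30 caps its CPU time at 30 s, so a malformed base image cannot balloon or hang the inspection. A sketch of the equivalent call (paths copied from the log; a sketch, not nova's exact code):

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(
            address_space=1073741824,  # --as
            cpu_time=30))              # --cpu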
Feb 25 07:28:05 np0005629333 nova_compute[244014]: 2026-02-25 12:28:05.858 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:05 np0005629333 nova_compute[244014]: 2026-02-25 12:28:05.859 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:05 np0005629333 nova_compute[244014]: 2026-02-25 12:28:05.860 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
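The acquire/release pair around the image-cache hash above is oslo.concurrency's named-lock pattern: callers fetching the same base image serialize on a lock keyed by the image checksum, which is why the lock name equals the _base filename. A minimal sketch of the two usual forms (the function bodies are illustrative):

    from oslo_concurrency import lockutils

    # Decorator form, as behind fetch_func_sync in the log above:
    @lockutils.synchronized('a63dc6dbb387022d47a8ca49bddcc4af2508a4d6')
    def fetch_base_image():
        ...  # only one caller at a time populates this cache entry

    # Context-manager form, as behind the "compute_resources" claims:
    with lockutils.lock('compute_resources'):
        ...  # resource tracker bookkeeping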
Feb 25 07:28:05 np0005629333 nova_compute[244014]: 2026-02-25 12:28:05.891 244018 DEBUG nova.storage.rbd_utils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] rbd image caff6378-2d93-4b73-8d58-da0e74a6d46e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:28:05 np0005629333 nova_compute[244014]: 2026-02-25 12:28:05.896 244018 DEBUG oslo_concurrency.processutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 caff6378-2d93-4b73-8d58-da0e74a6d46e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:28:06 np0005629333 nova_compute[244014]: 2026-02-25 12:28:06.050 244018 DEBUG oslo_concurrency.lockutils [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:06 np0005629333 nova_compute[244014]: 2026-02-25 12:28:06.051 244018 DEBUG oslo_concurrency.lockutils [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Feb 25 07:28:06 np0005629333 nova_compute[244014]: 2026-02-25 12:28:06.209 244018 DEBUG nova.compute.manager [req-6729d7c3-068e-4bc9-9b4a-c17b8a34c71b req-a2f35e91-55a8-4324-8354-d6985731184b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Received event network-vif-plugged-69011b0a-5af7-4bef-a14c-8d83e63e08ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:28:06 np0005629333 nova_compute[244014]: 2026-02-25 12:28:06.210 244018 DEBUG oslo_concurrency.lockutils [req-6729d7c3-068e-4bc9-9b4a-c17b8a34c71b req-a2f35e91-55a8-4324-8354-d6985731184b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:06 np0005629333 nova_compute[244014]: 2026-02-25 12:28:06.210 244018 DEBUG oslo_concurrency.lockutils [req-6729d7c3-068e-4bc9-9b4a-c17b8a34c71b req-a2f35e91-55a8-4324-8354-d6985731184b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:06 np0005629333 nova_compute[244014]: 2026-02-25 12:28:06.210 244018 DEBUG oslo_concurrency.lockutils [req-6729d7c3-068e-4bc9-9b4a-c17b8a34c71b req-a2f35e91-55a8-4324-8354-d6985731184b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:06 np0005629333 nova_compute[244014]: 2026-02-25 12:28:06.211 244018 DEBUG nova.compute.manager [req-6729d7c3-068e-4bc9-9b4a-c17b8a34c71b req-a2f35e91-55a8-4324-8354-d6985731184b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] No waiting events found dispatching network-vif-plugged-69011b0a-5af7-4bef-a14c-8d83e63e08ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:28:06 np0005629333 nova_compute[244014]: 2026-02-25 12:28:06.211 244018 WARNING nova.compute.manager [req-6729d7c3-068e-4bc9-9b4a-c17b8a34c71b req-a2f35e91-55a8-4324-8354-d6985731184b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Received unexpected event network-vif-plugged-69011b0a-5af7-4bef-a14c-8d83e63e08ae for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:28:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Feb 25 07:28:06 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Feb 25 07:28:06 np0005629333 nova_compute[244014]: 2026-02-25 12:28:06.231 244018 DEBUG oslo_concurrency.processutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 caff6378-2d93-4b73-8d58-da0e74a6d46e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:28:06 np0005629333 nova_compute[244014]: 2026-02-25 12:28:06.304 244018 DEBUG nova.storage.rbd_utils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] resizing rbd image caff6378-2d93-4b73-8d58-da0e74a6d46e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
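After the CLI import succeeds, nova grows the image to the flavor's root disk size (1073741824 bytes = 1 GiB). The resize goes through the librbd Python bindings; a sketch using the connection parameters visible in the logged rbd command (--id openstack, --conf /etc/ceph/ceph.conf), not nova's exact code:

    import rados
    import rbd

    # Grow the freshly imported image to the flavor's 1 GiB root disk.
    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx,
                           'caff6378-2d93-4b73-8d58-da0e74a6d46e_disk') as image:
                image.resize(1073741824)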
Feb 25 07:28:06 np0005629333 nova_compute[244014]: 2026-02-25 12:28:06.512 244018 DEBUG nova.objects.instance [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lazy-loading 'migration_context' on Instance uuid caff6378-2d93-4b73-8d58-da0e74a6d46e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:28:06 np0005629333 nova_compute[244014]: 2026-02-25 12:28:06.655 244018 DEBUG nova.network.neutron [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Successfully created port: 21ae71a3-b142-47f7-a2b0-b62a7625a31d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:28:06 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:06Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f8:53:87 10.100.0.6
Feb 25 07:28:06 np0005629333 nova_compute[244014]: 2026-02-25 12:28:06.707 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:28:06 np0005629333 nova_compute[244014]: 2026-02-25 12:28:06.708 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Ensure instance console log exists: /var/lib/nova/instances/caff6378-2d93-4b73-8d58-da0e74a6d46e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:28:06 np0005629333 nova_compute[244014]: 2026-02-25 12:28:06.708 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:06 np0005629333 nova_compute[244014]: 2026-02-25 12:28:06.708 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:06 np0005629333 nova_compute[244014]: 2026-02-25 12:28:06.709 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:07 np0005629333 nova_compute[244014]: 2026-02-25 12:28:07.084 244018 DEBUG oslo_concurrency.processutils [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:28:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:28:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1125683448' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:28:07 np0005629333 nova_compute[244014]: 2026-02-25 12:28:07.625 244018 DEBUG oslo_concurrency.processutils [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
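The 'ceph df --format=json' round-trip above (0.541 s) is how the resource tracker refreshes pool capacity for the DISK_GB inventory. A sketch of issuing and parsing that query (key names follow the ceph df JSON schema; error handling elided):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    df = json.loads(out)
    total_gb = df['stats']['total_bytes'] / 1024 ** 3
    avail_gb = df['stats']['total_avail_bytes'] / 1024 ** 3
    print(total_gb, avail_gb)  # ~60 and ~59 for the cluster in this log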
Feb 25 07:28:07 np0005629333 nova_compute[244014]: 2026-02-25 12:28:07.633 244018 DEBUG nova.compute.provider_tree [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:28:07 np0005629333 nova_compute[244014]: 2026-02-25 12:28:07.637 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Updating instance_info_cache with network_info: [{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:28:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 360 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.2 MiB/s wr, 475 op/s
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.496 244018 DEBUG nova.network.neutron [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Successfully updated port: 21ae71a3-b142-47f7-a2b0-b62a7625a31d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.502 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.502 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.504 244018 DEBUG nova.compute.manager [req-8b365c66-cb36-41b6-b20e-c905ae66e4c6 req-8a6c9b52-a2ba-419f-94cc-419d4019bb27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Received event network-changed-21ae71a3-b142-47f7-a2b0-b62a7625a31d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.504 244018 DEBUG nova.compute.manager [req-8b365c66-cb36-41b6-b20e-c905ae66e4c6 req-8a6c9b52-a2ba-419f-94cc-419d4019bb27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Refreshing instance network info cache due to event network-changed-21ae71a3-b142-47f7-a2b0-b62a7625a31d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.504 244018 DEBUG oslo_concurrency.lockutils [req-8b365c66-cb36-41b6-b20e-c905ae66e4c6 req-8a6c9b52-a2ba-419f-94cc-419d4019bb27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-caff6378-2d93-4b73-8d58-da0e74a6d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.505 244018 DEBUG oslo_concurrency.lockutils [req-8b365c66-cb36-41b6-b20e-c905ae66e4c6 req-8a6c9b52-a2ba-419f-94cc-419d4019bb27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-caff6378-2d93-4b73-8d58-da0e74a6d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.505 244018 DEBUG nova.network.neutron [req-8b365c66-cb36-41b6-b20e-c905ae66e4c6 req-8a6c9b52-a2ba-419f-94cc-419d4019bb27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Refreshing network info cache for port 21ae71a3-b142-47f7-a2b0-b62a7625a31d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.507 244018 DEBUG nova.scheduler.client.report [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.512 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.514 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.517 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Acquiring lock "refresh_cache-caff6378-2d93-4b73-8d58-da0e74a6d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.556 244018 DEBUG oslo_concurrency.lockutils [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.558 244018 WARNING nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] While synchronizing instance power states, found 4 instances in the database and 2 instances on the hypervisor.#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.559 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Triggering sync for uuid b8086e43-4c45-422f-a3b5-fa665c256b30 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.559 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Triggering sync for uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.559 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Triggering sync for uuid 43b8959e-9cf0-42ca-aa1f-8a380321c971 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.559 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Triggering sync for uuid caff6378-2d93-4b73-8d58-da0e74a6d46e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
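The 4-versus-2 warning above is benign here: caff6378 is still being built and 43b8959e/b8086e43 are mid-delete, so the database momentarily lists more instances than libvirt reports. The periodic task compares the two counts, then reconciles each UUID under its own lock, which produces the "Triggering sync" lines. A sketch of that bookkeeping (names approximate nova's; the surrounding plumbing is elided):

    # db_instances: rows from the cell database for this host.
    # driver: the virt driver (libvirt) backing this compute node.
    num_db = len(db_instances)           # 4 in this log
    num_vm = driver.get_num_instances()  # 2 on the hypervisor
    if num_db != num_vm:
        LOG.warning('While synchronizing instance power states, found '
                    '%d instances in the database and %d instances on '
                    'the hypervisor.', num_db, num_vm)
    for inst in db_instances:
        sync_one(inst.uuid)  # per-UUID lock, then compare power states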
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.559 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.560 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.560 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.560 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.561 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "43b8959e-9cf0-42ca-aa1f-8a380321c971" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.561 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "caff6378-2d93-4b73-8d58-da0e74a6d46e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.582 244018 INFO nova.scheduler.client.report [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Deleted allocations for instance 43b8959e-9cf0-42ca-aa1f-8a380321c971#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.604 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.608 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.654 244018 DEBUG oslo_concurrency.lockutils [None req-bee69532-ce64-4c09-aa35-6bbc8bc59d74 b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.263s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.655 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.656 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.669 244018 DEBUG nova.network.neutron [req-8b365c66-cb36-41b6-b20e-c905ae66e4c6 req-8a6c9b52-a2ba-419f-94cc-419d4019bb27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.678 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "43b8959e-9cf0-42ca-aa1f-8a380321c971" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:08 np0005629333 nova_compute[244014]: 2026-02-25 12:28:08.924 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.046 244018 DEBUG nova.network.neutron [req-8b365c66-cb36-41b6-b20e-c905ae66e4c6 req-8a6c9b52-a2ba-419f-94cc-419d4019bb27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:28:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:09Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ba:87:f1 10.100.0.11
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.069 244018 DEBUG oslo_concurrency.lockutils [req-8b365c66-cb36-41b6-b20e-c905ae66e4c6 req-8a6c9b52-a2ba-419f-94cc-419d4019bb27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-caff6378-2d93-4b73-8d58-da0e74a6d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.070 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Acquired lock "refresh_cache-caff6378-2d93-4b73-8d58-da0e74a6d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.070 244018 DEBUG nova.network.neutron [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.263 244018 DEBUG nova.network.neutron [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.496 244018 DEBUG oslo_concurrency.lockutils [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.497 244018 DEBUG oslo_concurrency.lockutils [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.497 244018 DEBUG oslo_concurrency.lockutils [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.498 244018 DEBUG oslo_concurrency.lockutils [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.498 244018 DEBUG oslo_concurrency.lockutils [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.500 244018 INFO nova.compute.manager [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Terminating instance#033[00m
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.501 244018 DEBUG nova.compute.manager [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:28:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Feb 25 07:28:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Feb 25 07:28:09 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Feb 25 07:28:09 np0005629333 kernel: tapabdb97b5-8e (unregistering): left promiscuous mode
Feb 25 07:28:09 np0005629333 NetworkManager[49836]: <info>  [1772022489.5640] device (tapabdb97b5-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:28:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:09Z|00539|binding|INFO|Releasing lport abdb97b5-8e9d-4929-af6f-bfb06c067878 from this chassis (sb_readonly=0)
Feb 25 07:28:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:09Z|00540|binding|INFO|Setting lport abdb97b5-8e9d-4929-af6f-bfb06c067878 down in Southbound
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.569 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:09Z|00541|binding|INFO|Removing iface tapabdb97b5-8e ovn-installed in OVS
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.574 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:09.582 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:53:87 10.100.0.6'], port_security=['fa:16:3e:f8:53:87 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b8086e43-4c45-422f-a3b5-fa665c256b30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-64c22162-7e15-45de-8fd2-8c9a24f27006', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c85a955249394f0faf7c890f5cd0df32', 'neutron:revision_number': '9', 'neutron:security_group_ids': '35edb9b7-5285-41a3-a867-f1cc587b3ad5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.229', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a9495f97-67e6-4da7-a9b0-f643c9e48076, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=abdb97b5-8e9d-4929-af6f-bfb06c067878) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.583 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:09.586 157129 INFO neutron.agent.ovn.metadata.agent [-] Port abdb97b5-8e9d-4929-af6f-bfb06c067878 in datapath 64c22162-7e15-45de-8fd2-8c9a24f27006 unbound from our chassis#033[00m
Feb 25 07:28:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:09.589 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 64c22162-7e15-45de-8fd2-8c9a24f27006, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:28:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:09.590 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6e932f6b-21b0-4193-a016-a1b96312d96a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:09.591 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006 namespace which is not needed anymore#033[00m
Feb 25 07:28:09 np0005629333 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Feb 25 07:28:09 np0005629333 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000002f.scope: Consumed 11.788s CPU time.
Feb 25 07:28:09 np0005629333 systemd-machined[210048]: Machine qemu-68-instance-0000002f terminated.
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.708 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.721 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.727 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.734 244018 INFO nova.virt.libvirt.driver [-] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Instance destroyed successfully.#033[00m
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.734 244018 DEBUG nova.objects.instance [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lazy-loading 'resources' on Instance uuid b8086e43-4c45-422f-a3b5-fa665c256b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.752 244018 DEBUG nova.virt.libvirt.vif [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:25:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2111996537',display_name='tempest-ServerActionsTestOtherB-server-2111996537',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2111996537',id=47,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFQ4/ViXjDl7sfXbfHy1Rj0+WVS30xG/+F445xoJQyz45huoziS5Ge/69+H9D3xA69BQvF6LAGpEuOAI4T0oNr5YUcMHaOf8cBGICZoqOX1SEjGVzLtcjONvsNISgitMaQ==',key_name='tempest-keypair-221288102',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:27:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c85a955249394f0faf7c890f5cd0df32',ramdisk_id='',reservation_id='r-huq779yj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1539976047',owner_user_name='tempest-ServerActionsTestOtherB-1539976047-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:27:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b774fd0c04fc403d9ddb205f1e6abbc5',uuid=b8086e43-4c45-422f-a3b5-fa665c256b30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.753 244018 DEBUG nova.network.os_vif_util [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converting VIF {"id": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "address": "fa:16:3e:f8:53:87", "network": {"id": "64c22162-7e15-45de-8fd2-8c9a24f27006", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-804659748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c85a955249394f0faf7c890f5cd0df32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdb97b5-8e", "ovs_interfaceid": "abdb97b5-8e9d-4929-af6f-bfb06c067878", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.754 244018 DEBUG nova.network.os_vif_util [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:53:87,bridge_name='br-int',has_traffic_filtering=True,id=abdb97b5-8e9d-4929-af6f-bfb06c067878,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdb97b5-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.755 244018 DEBUG os_vif [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:53:87,bridge_name='br-int',has_traffic_filtering=True,id=abdb97b5-8e9d-4929-af6f-bfb06c067878,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdb97b5-8e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.756 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.757 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabdb97b5-8e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
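The unplug is committed to the local Open vSwitch database through ovsdbapp as a one-command transaction; if_exists=True makes the delete idempotent, so a retried unplug is harmless. A minimal sketch against ovsdbapp's OVS API (connection setup omitted; 'api' stands for an ovsdbapp.schema.open_vswitch.impl_idl.OvsdbIdl instance):

    # Mirrors the logged transaction: txn n=1, DelPortCommand on br-int.
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapabdb97b5-8e',
                             bridge='br-int',
                             if_exists=True))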
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.758 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.761 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:09 np0005629333 neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006[285480]: [NOTICE]   (285484) : haproxy version is 2.8.14-c23fe91
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.765 244018 INFO os_vif [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:53:87,bridge_name='br-int',has_traffic_filtering=True,id=abdb97b5-8e9d-4929-af6f-bfb06c067878,network=Network(64c22162-7e15-45de-8fd2-8c9a24f27006),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdb97b5-8e')#033[00m
Feb 25 07:28:09 np0005629333 neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006[285480]: [NOTICE]   (285484) : path to executable is /usr/sbin/haproxy
Feb 25 07:28:09 np0005629333 neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006[285480]: [WARNING]  (285484) : Exiting Master process...
Feb 25 07:28:09 np0005629333 neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006[285480]: [WARNING]  (285484) : Exiting Master process...
Feb 25 07:28:09 np0005629333 neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006[285480]: [ALERT]    (285484) : Current worker (285486) exited with code 143 (Terminated)
Feb 25 07:28:09 np0005629333 neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006[285480]: [WARNING]  (285484) : All workers exited. Exiting... (0)
Feb 25 07:28:09 np0005629333 systemd[1]: libpod-61e4c109bda741d406f5418ff4a666a3832342de2e3c5435c4c55719b3407174.scope: Deactivated successfully.
Feb 25 07:28:09 np0005629333 podman[297567]: 2026-02-25 12:28:09.776940975 +0000 UTC m=+0.069736565 container died 61e4c109bda741d406f5418ff4a666a3832342de2e3c5435c4c55719b3407174 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:28:09 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-61e4c109bda741d406f5418ff4a666a3832342de2e3c5435c4c55719b3407174-userdata-shm.mount: Deactivated successfully.
Feb 25 07:28:09 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ee9ea23fb31ef780d3141b9da7996fc6467cb5ff171d46fe3017a00206862844-merged.mount: Deactivated successfully.
Feb 25 07:28:09 np0005629333 podman[297567]: 2026-02-25 12:28:09.827884887 +0000 UTC m=+0.120680487 container cleanup 61e4c109bda741d406f5418ff4a666a3832342de2e3c5435c4c55719b3407174 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 25 07:28:09 np0005629333 systemd[1]: libpod-conmon-61e4c109bda741d406f5418ff4a666a3832342de2e3c5435c4c55719b3407174.scope: Deactivated successfully.
Feb 25 07:28:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 360 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 270 op/s
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:28:09 np0005629333 podman[297623]: 2026-02-25 12:28:09.900243055 +0000 UTC m=+0.046290401 container remove 61e4c109bda741d406f5418ff4a666a3832342de2e3c5435c4c55719b3407174 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:28:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:09.908 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6962df76-db3f-4ac6-a252-d4887b5a2ec2]: (4, ('Wed Feb 25 12:28:09 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006 (61e4c109bda741d406f5418ff4a666a3832342de2e3c5435c4c55719b3407174)\n61e4c109bda741d406f5418ff4a666a3832342de2e3c5435c4c55719b3407174\nWed Feb 25 12:28:09 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006 (61e4c109bda741d406f5418ff4a666a3832342de2e3c5435c4c55719b3407174)\n61e4c109bda741d406f5418ff4a666a3832342de2e3c5435c4c55719b3407174\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
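[annotation] The privsep reply above wraps the stdout of the agent's teardown: a podman stop followed by a podman rm, each echoing the container ID on success (the two IDs in the payload). The equivalent calls, sketched with subprocess (the loop is illustrative, not the agent's actual script):

    # Stop, then delete, the haproxy sidecar container named in the log.
    import subprocess

    name = 'neutron-haproxy-ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006'
    for verb in ('stop', 'rm'):
        # Each podman call prints the container ID, matching the reply.
        subprocess.run(['podman', verb, name], check=True)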
Feb 25 07:28:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:09.912 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b5201f07-6677-4689-8fd2-0f6ec6844143]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:09.914 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64c22162-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.918 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:09 np0005629333 kernel: tap64c22162-70: left promiscuous mode
Feb 25 07:28:09 np0005629333 nova_compute[244014]: 2026-02-25 12:28:09.930 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:09.934 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[67a0879d-fee6-403f-be5e-e69ee7a03675]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:09.949 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4131b8f2-1482-4a72-a752-db705548f1a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:09.951 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[80caa983-2a27-48a8-81d2-d73536b797d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:09.969 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[745d7ba7-1893-4198-b781-c662f979ced6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 428688, 'reachable_time': 18847, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297638, 'error': None, 'target': 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:09 np0005629333 systemd[1]: run-netns-ovnmeta\x2d64c22162\x2d7e15\x2d45de\x2d8fd2\x2d8c9a24f27006.mount: Deactivated successfully.
Feb 25 07:28:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:09.973 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:28:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:09.973 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[6c3b10c0-fc53-4477-8669-ef5aec32d128]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
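[annotation] neutron's privileged remove_netns (the "Namespace ... deleted" line) is a thin wrapper over pyroute2's network-namespace helpers, run under privsep. A sketch of essentially the same operation, assuming pyroute2 is available:

    # Remove the ovnmeta namespace logged above; the existence check
    # mirrors the idempotent behaviour neutron needs here.
    from pyroute2 import netns

    ns = 'ovnmeta-64c22162-7e15-45de-8fd2-8c9a24f27006'
    if ns in netns.listnetns():
        netns.remove(ns)  # unmounts and unlinks /var/run/netns/<ns>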
Feb 25 07:28:10 np0005629333 nova_compute[244014]: 2026-02-25 12:28:10.137 244018 INFO nova.virt.libvirt.driver [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Deleting instance files /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30_del#033[00m
Feb 25 07:28:10 np0005629333 nova_compute[244014]: 2026-02-25 12:28:10.138 244018 INFO nova.virt.libvirt.driver [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Deletion of /var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30_del complete#033[00m
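[annotation] The "_del" suffix in the two lines above is nova's rename-then-delete pattern: the instance directory is renamed out of the way first, so a crash mid-cleanup never leaves a half-removed tree under the live path. A sketch with the path from the log:

    # Rename-then-delete, as the two log lines above show.
    import os
    import shutil

    base = '/var/lib/nova/instances/b8086e43-4c45-422f-a3b5-fa665c256b30'
    target = base + '_del'
    os.rename(base, target)          # atomic on the same filesystem
    shutil.rmtree(target, ignore_errors=True)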
Feb 25 07:28:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:28:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Feb 25 07:28:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Feb 25 07:28:10 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Feb 25 07:28:10 np0005629333 nova_compute[244014]: 2026-02-25 12:28:10.218 244018 INFO nova.compute.manager [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:28:10 np0005629333 nova_compute[244014]: 2026-02-25 12:28:10.219 244018 DEBUG oslo.service.loopingcall [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:28:10 np0005629333 nova_compute[244014]: 2026-02-25 12:28:10.220 244018 DEBUG nova.compute.manager [-] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:28:10 np0005629333 nova_compute[244014]: 2026-02-25 12:28:10.220 244018 DEBUG nova.network.neutron [-] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:28:10 np0005629333 nova_compute[244014]: 2026-02-25 12:28:10.344 244018 DEBUG nova.compute.manager [req-105cb8a9-47f0-43cd-a1ba-fc57ab01dcf2 req-96a2adc5-7607-498f-965c-7b1c8e7facd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received event network-vif-unplugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:28:10 np0005629333 nova_compute[244014]: 2026-02-25 12:28:10.345 244018 DEBUG oslo_concurrency.lockutils [req-105cb8a9-47f0-43cd-a1ba-fc57ab01dcf2 req-96a2adc5-7607-498f-965c-7b1c8e7facd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:10 np0005629333 nova_compute[244014]: 2026-02-25 12:28:10.346 244018 DEBUG oslo_concurrency.lockutils [req-105cb8a9-47f0-43cd-a1ba-fc57ab01dcf2 req-96a2adc5-7607-498f-965c-7b1c8e7facd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:10 np0005629333 nova_compute[244014]: 2026-02-25 12:28:10.346 244018 DEBUG oslo_concurrency.lockutils [req-105cb8a9-47f0-43cd-a1ba-fc57ab01dcf2 req-96a2adc5-7607-498f-965c-7b1c8e7facd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:10 np0005629333 nova_compute[244014]: 2026-02-25 12:28:10.347 244018 DEBUG nova.compute.manager [req-105cb8a9-47f0-43cd-a1ba-fc57ab01dcf2 req-96a2adc5-7607-498f-965c-7b1c8e7facd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] No waiting events found dispatching network-vif-unplugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:28:10 np0005629333 nova_compute[244014]: 2026-02-25 12:28:10.348 244018 DEBUG nova.compute.manager [req-105cb8a9-47f0-43cd-a1ba-fc57ab01dcf2 req-96a2adc5-7607-498f-965c-7b1c8e7facd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received event network-vif-unplugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:28:10 np0005629333 nova_compute[244014]: 2026-02-25 12:28:10.348 244018 DEBUG nova.compute.manager [req-105cb8a9-47f0-43cd-a1ba-fc57ab01dcf2 req-96a2adc5-7607-498f-965c-7b1c8e7facd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received event network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:28:10 np0005629333 nova_compute[244014]: 2026-02-25 12:28:10.349 244018 DEBUG oslo_concurrency.lockutils [req-105cb8a9-47f0-43cd-a1ba-fc57ab01dcf2 req-96a2adc5-7607-498f-965c-7b1c8e7facd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:10 np0005629333 nova_compute[244014]: 2026-02-25 12:28:10.349 244018 DEBUG oslo_concurrency.lockutils [req-105cb8a9-47f0-43cd-a1ba-fc57ab01dcf2 req-96a2adc5-7607-498f-965c-7b1c8e7facd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:10 np0005629333 nova_compute[244014]: 2026-02-25 12:28:10.350 244018 DEBUG oslo_concurrency.lockutils [req-105cb8a9-47f0-43cd-a1ba-fc57ab01dcf2 req-96a2adc5-7607-498f-965c-7b1c8e7facd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:10 np0005629333 nova_compute[244014]: 2026-02-25 12:28:10.350 244018 DEBUG nova.compute.manager [req-105cb8a9-47f0-43cd-a1ba-fc57ab01dcf2 req-96a2adc5-7607-498f-965c-7b1c8e7facd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] No waiting events found dispatching network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:28:10 np0005629333 nova_compute[244014]: 2026-02-25 12:28:10.351 244018 WARNING nova.compute.manager [req-105cb8a9-47f0-43cd-a1ba-fc57ab01dcf2 req-96a2adc5-7607-498f-965c-7b1c8e7facd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received unexpected event network-vif-plugged-abdb97b5-8e9d-4929-af6f-bfb06c067878 for instance with vm_state active and task_state deleting.#033[00m
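[annotation] The acquire/pop/release triples above are nova's InstanceEvents bookkeeping: under a per-instance "<uuid>-events" lock, an incoming external event either completes a registered waiter or, as with the vif-plugged event here, is reported as unexpected. A simplified sketch of the pattern (the waiters dict and function are illustrative, not nova's actual code):

    # Per-instance lock guarding a dict of event waiters.
    from oslo_concurrency import lockutils

    waiters = {}  # {(instance_uuid, event_name): callback}

    def pop_instance_event(instance_uuid, event_name):
        with lockutils.lock(f'{instance_uuid}-events'):
            return waiters.pop((instance_uuid, event_name), None)

    cb = pop_instance_event('b8086e43-4c45-422f-a3b5-fa665c256b30',
                            'network-vif-plugged')
    if cb is None:
        # "No waiting events found dispatching ..." in the log above.
        print('unexpected event; nothing was waiting for it')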
Feb 25 07:28:10 np0005629333 nova_compute[244014]: 2026-02-25 12:28:10.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:28:10 np0005629333 nova_compute[244014]: 2026-02-25 12:28:10.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.054 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.054 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.055 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.055 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.056 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.105 244018 DEBUG nova.network.neutron [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Updating instance_info_cache with network_info: [{"id": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "address": "fa:16:3e:a1:6b:aa", "network": {"id": "fd427256-862c-42a4-8ee4-681f0377401d", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1066034838-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbfdd47eb32e4d4aadc46f464899cf16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21ae71a3-b1", "ovs_interfaceid": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.132 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Releasing lock "refresh_cache-caff6378-2d93-4b73-8d58-da0e74a6d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.133 244018 DEBUG nova.compute.manager [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Instance network_info: |[{"id": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "address": "fa:16:3e:a1:6b:aa", "network": {"id": "fd427256-862c-42a4-8ee4-681f0377401d", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1066034838-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbfdd47eb32e4d4aadc46f464899cf16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21ae71a3-b1", "ovs_interfaceid": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.135 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Start _get_guest_xml network_info=[{"id": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "address": "fa:16:3e:a1:6b:aa", "network": {"id": "fd427256-862c-42a4-8ee4-681f0377401d", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1066034838-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbfdd47eb32e4d4aadc46f464899cf16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21ae71a3-b1", "ovs_interfaceid": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.140 244018 WARNING nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.147 244018 DEBUG nova.virt.libvirt.host [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.147 244018 DEBUG nova.virt.libvirt.host [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.151 244018 DEBUG nova.virt.libvirt.host [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.151 244018 DEBUG nova.virt.libvirt.host [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.152 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.152 244018 DEBUG nova.virt.hardware [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.152 244018 DEBUG nova.virt.hardware [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.153 244018 DEBUG nova.virt.hardware [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.153 244018 DEBUG nova.virt.hardware [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.153 244018 DEBUG nova.virt.hardware [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.153 244018 DEBUG nova.virt.hardware [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.153 244018 DEBUG nova.virt.hardware [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.154 244018 DEBUG nova.virt.hardware [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.154 244018 DEBUG nova.virt.hardware [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.154 244018 DEBUG nova.virt.hardware [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.154 244018 DEBUG nova.virt.hardware [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
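[annotation] The topology walk above reduces to simple arithmetic: with no flavor or image constraints the limits default to 65536 each and the preference to 0s, and the only sockets:cores:threads factorization of 1 vcpu is 1:1:1. A worked sketch of the enumeration:

    # Enumerate sockets*cores*threads factorizations of vcpus within limits.
    from itertools import product

    def possible_topologies(vcpus, max_each=65536):
        for s, c, t in product(range(1, vcpus + 1), repeat=3):
            if s * c * t == vcpus and max(s, c, t) <= max_each:
                yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- matches the log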
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.157 244018 DEBUG oslo_concurrency.processutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:28:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:28:11 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4247859297' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.577 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022476.5753376, bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.577 244018 INFO nova.compute.manager [-] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.585 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
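[annotation] The resource audit shells out to "ceph df --format=json" (started at 12:28:11.056, returned 0 in 0.529s above) and reads cluster capacity from the JSON. A hedged sketch of the probe; the JSON field names follow recent Ceph releases and are an assumption, not taken from this log:

    # Re-run the capacity probe the audit uses and read the totals.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])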
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.596 244018 DEBUG nova.compute.manager [None req-7aa3b82a-cf7c-4506-8750-c047a9e57d1d - - - - - -] [instance: bd7e6bb5-fcda-4cf5-9133-3f04fb1f9b36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.655 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.655 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:28:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:28:11 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1376598932' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.730 244018 DEBUG oslo_concurrency.processutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.763 244018 DEBUG nova.storage.rbd_utils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] rbd image caff6378-2d93-4b73-8d58-da0e74a6d46e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.766 244018 DEBUG oslo_concurrency.processutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:28:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 306 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.6 MiB/s wr, 444 op/s
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.890 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.891 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3741MB free_disk=59.87575867306441GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.892 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.892 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.967 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.967 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance b8086e43-4c45-422f-a3b5-fa665c256b30 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.967 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance caff6378-2d93-4b73-8d58-da0e74a6d46e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.968 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 07:28:11 np0005629333 nova_compute[244014]: 2026-02-25 12:28:11.968 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
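[annotation] The used_ram figure in the final view is reconstructible from the three m1.nano instances listed in the allocation checks plus nova's reserved host memory; 512 MB is the reserved_host_memory_mb default and an assumption about this host's config, not read from the log:

    # used_ram = reserved host memory + sum of instance memory.
    reserved_host_memory_mb = 512    # nova default; assumed for this host
    instances_mb = [128, 128, 128]   # the three m1.nano instances above
    assert reserved_host_memory_mb + sum(instances_mb) == 896  # used_ram=896MB

used_disk=3GB follows the same way from the three 1 GB root disks; used_vcpus=3 matches one vcpu per instance.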
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.037 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:28:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:12.262 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:28:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:12.263 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.263 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:28:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/97689290' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.290 244018 DEBUG oslo_concurrency.processutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.292 244018 DEBUG nova.virt.libvirt.vif [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:28:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-409359068',display_name='tempest-ServerAddressesNegativeTestJSON-server-409359068',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-409359068',id=61,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fbfdd47eb32e4d4aadc46f464899cf16',ramdisk_id='',reservation_id='r-061938kj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-512544112',owner_user_name='tempest-ServerAddressesNegativeTestJSON-512544112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:28:05Z,user_data=None,user_id='322943d9098b419f8d2d573cc6b7fe8e',uuid=caff6378-2d93-4b73-8d58-da0e74a6d46e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "address": "fa:16:3e:a1:6b:aa", "network": {"id": "fd427256-862c-42a4-8ee4-681f0377401d", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1066034838-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbfdd47eb32e4d4aadc46f464899cf16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21ae71a3-b1", "ovs_interfaceid": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.293 244018 DEBUG nova.network.os_vif_util [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Converting VIF {"id": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "address": "fa:16:3e:a1:6b:aa", "network": {"id": "fd427256-862c-42a4-8ee4-681f0377401d", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1066034838-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbfdd47eb32e4d4aadc46f464899cf16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21ae71a3-b1", "ovs_interfaceid": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.294 244018 DEBUG nova.network.os_vif_util [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:6b:aa,bridge_name='br-int',has_traffic_filtering=True,id=21ae71a3-b142-47f7-a2b0-b62a7625a31d,network=Network(fd427256-862c-42a4-8ee4-681f0377401d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21ae71a3-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.296 244018 DEBUG nova.objects.instance [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lazy-loading 'pci_devices' on Instance uuid caff6378-2d93-4b73-8d58-da0e74a6d46e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.314 244018 DEBUG nova.network.neutron [-] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.319 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:28:12 np0005629333 nova_compute[244014]:  <uuid>caff6378-2d93-4b73-8d58-da0e74a6d46e</uuid>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:  <name>instance-0000003d</name>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerAddressesNegativeTestJSON-server-409359068</nova:name>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:28:11</nova:creationTime>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:28:12 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:        <nova:user uuid="322943d9098b419f8d2d573cc6b7fe8e">tempest-ServerAddressesNegativeTestJSON-512544112-project-member</nova:user>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:        <nova:project uuid="fbfdd47eb32e4d4aadc46f464899cf16">tempest-ServerAddressesNegativeTestJSON-512544112</nova:project>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:        <nova:port uuid="21ae71a3-b142-47f7-a2b0-b62a7625a31d">
Feb 25 07:28:12 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <entry name="serial">caff6378-2d93-4b73-8d58-da0e74a6d46e</entry>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <entry name="uuid">caff6378-2d93-4b73-8d58-da0e74a6d46e</entry>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/caff6378-2d93-4b73-8d58-da0e74a6d46e_disk">
Feb 25 07:28:12 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:28:12 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/caff6378-2d93-4b73-8d58-da0e74a6d46e_disk.config">
Feb 25 07:28:12 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:28:12 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:a1:6b:aa"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <target dev="tap21ae71a3-b1"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/caff6378-2d93-4b73-8d58-da0e74a6d46e/console.log" append="off"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:28:12 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:28:12 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:28:12 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:28:12 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
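The domain XML logged above fully describes the guest: 131072 KiB of RAM (128 MiB, matching the m1.nano flavor element in the metadata), one vCPU, an RBD-backed virtio root disk (vda), a SATA cdrom for the config drive (sda), and the tap21ae71a3-b1 virtio NIC. A minimal sketch of pulling those device details back out of such a dump with the standard library, assuming the XML has been saved to domain.xml:

    # Parse a libvirt domain XML like the one logged above.
    # Only <metadata> carries a namespace, so top-level tags match directly.
    import xml.etree.ElementTree as ET

    root = ET.parse("domain.xml").getroot()
    print("memory KiB:", root.findtext("memory"))   # 131072 KiB == 128 MiB
    for disk in root.iter("disk"):
        src, tgt = disk.find("source"), disk.find("target")
        print(tgt.get("dev"), src.get("protocol"), src.get("name"))
    for iface in root.iter("interface"):
        print(iface.find("mac").get("address"),
              iface.find("target").get("dev"))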
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.320 244018 DEBUG nova.compute.manager [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Preparing to wait for external event network-vif-plugged-21ae71a3-b142-47f7-a2b0-b62a7625a31d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.320 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Acquiring lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.320 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.321 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
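The three lockutils lines above are the standard oslo.concurrency pattern: the library itself logs the acquire, the wait time, and the hold time around the per-instance "<uuid>-events" critical section. A minimal sketch of the same construct, with the lock name copied from the log and an illustrative body:

    # oslo.concurrency named-lock pattern seen in the log lines above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("caff6378-2d93-4b73-8d58-da0e74a6d46e-events")
    def _create_or_get_event():
        # critical section: look up or register the instance event
        pass

    _create_or_get_event()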
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.321 244018 DEBUG nova.virt.libvirt.vif [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:28:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-409359068',display_name='tempest-ServerAddressesNegativeTestJSON-server-409359068',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-409359068',id=61,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='fbfdd47eb32e4d4aadc46f464899cf16',ramdisk_id='',reservation_id='r-061938kj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-512544112',owner_user_name='tempest-ServerAddressesNegativeTestJSON-512544112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:28:05Z,user_data=None,user_id='322943d9098b419f8d2d573cc6b7fe8e',uuid=caff6378-2d93-4b73-8d58-da0e74a6d46e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "address": "fa:16:3e:a1:6b:aa", "network": {"id": "fd427256-862c-42a4-8ee4-681f0377401d", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1066034838-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbfdd47eb32e4d4aadc46f464899cf16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21ae71a3-b1", "ovs_interfaceid": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.322 244018 DEBUG nova.network.os_vif_util [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Converting VIF {"id": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "address": "fa:16:3e:a1:6b:aa", "network": {"id": "fd427256-862c-42a4-8ee4-681f0377401d", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1066034838-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbfdd47eb32e4d4aadc46f464899cf16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21ae71a3-b1", "ovs_interfaceid": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.322 244018 DEBUG nova.network.os_vif_util [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:6b:aa,bridge_name='br-int',has_traffic_filtering=True,id=21ae71a3-b142-47f7-a2b0-b62a7625a31d,network=Network(fd427256-862c-42a4-8ee4-681f0377401d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21ae71a3-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.323 244018 DEBUG os_vif [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:6b:aa,bridge_name='br-int',has_traffic_filtering=True,id=21ae71a3-b142-47f7-a2b0-b62a7625a31d,network=Network(fd427256-862c-42a4-8ee4-681f0377401d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21ae71a3-b1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
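The two lines above show the hand-off from nova's VIF dict to an os-vif object and then to os_vif.plug(). A standalone sketch of that call under the assumption that the documented os-vif object API applies; the field values are copied from the log, and the plug is what drives the OVSDB transactions that follow:

    # Plugging an OVS VIF through os-vif (sketch; values from the log).
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # load the os-vif plugins (ovs among them)
    my_vif = vif.VIFOpenVSwitch(
        id="21ae71a3-b142-47f7-a2b0-b62a7625a31d",
        address="fa:16:3e:a1:6b:aa",
        bridge_name="br-int",
        vif_name="tap21ae71a3-b1",
        network=network.Network(id="fd427256-862c-42a4-8ee4-681f0377401d"),
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id="21ae71a3-b142-47f7-a2b0-b62a7625a31d"),
    )
    inst = instance_info.InstanceInfo(
        uuid="caff6378-2d93-4b73-8d58-da0e74a6d46e",
        name="instance-0000003d")
    os_vif.plug(my_vif, inst)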
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.323 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.324 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.324 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.327 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.328 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21ae71a3-b1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.328 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap21ae71a3-b1, col_values=(('external_ids', {'iface-id': '21ae71a3-b142-47f7-a2b0-b62a7625a31d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a1:6b:aa', 'vm-uuid': 'caff6378-2d93-4b73-8d58-da0e74a6d46e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
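The two-command transaction above (AddPortCommand plus a DbSetCommand on the Interface row) is roughly what a single ovs-vsctl invocation expresses. A sketch of the equivalent CLI call, assuming ovs-vsctl on this host talks to the same ovsdb-server:

    # Approximate CLI equivalent of the ovsdbapp transaction above.
    import subprocess

    port = "tap21ae71a3-b1"
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", port,
         "--", "set", "Interface", port,
         "external_ids:iface-id=21ae71a3-b142-47f7-a2b0-b62a7625a31d",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:a1:6b:aa",
         "external_ids:vm-uuid=caff6378-2d93-4b73-8d58-da0e74a6d46e"],
        check=True)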
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.329 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:12 np0005629333 NetworkManager[49836]: <info>  [1772022492.3305] manager: (tap21ae71a3-b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/240)
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.332 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.335 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.336 244018 INFO os_vif [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:6b:aa,bridge_name='br-int',has_traffic_filtering=True,id=21ae71a3-b142-47f7-a2b0-b62a7625a31d,network=Network(fd427256-862c-42a4-8ee4-681f0377401d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21ae71a3-b1')#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.342 244018 INFO nova.compute.manager [-] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Took 2.12 seconds to deallocate network for instance.#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.413 244018 DEBUG oslo_concurrency.lockutils [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.419 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.420 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.420 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] No VIF found with MAC fa:16:3e:a1:6b:aa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.421 244018 INFO nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Using config drive#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.455 244018 DEBUG nova.storage.rbd_utils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] rbd image caff6378-2d93-4b73-8d58-da0e74a6d46e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.496 244018 DEBUG nova.compute.manager [req-e6b9ebf6-69fd-4508-8b1a-155e39e8fa11 req-3dc83c4d-f50f-45bd-ad2c-1b9340969744 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Received event network-vif-deleted-abdb97b5-8e9d-4929-af6f-bfb06c067878 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:28:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:28:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4170653208' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.636 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
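The 0.599 s "ceph df" probe above is issued through oslo_concurrency.processutils, which logs the command and its return code. A minimal sketch of the same invocation with the JSON output parsed; the printed keys follow the ceph df JSON schema:

    # Re-run the storage probe from the log line above and read the totals.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    stats = json.loads(out)
    print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])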
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.643 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.662 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
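The inventory above is what placement schedules against: per resource class, usable capacity is (total - reserved) * allocation_ratio. Worked out for this host in a short sketch:

    # Schedulable capacity implied by the inventory logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inventory.items():
        usable = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(rc, usable)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2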
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.691 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.692 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.693 244018 DEBUG oslo_concurrency.lockutils [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:12 np0005629333 nova_compute[244014]: 2026-02-25 12:28:12.783 244018 DEBUG oslo_concurrency.processutils [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:28:13 np0005629333 nova_compute[244014]: 2026-02-25 12:28:13.333 244018 INFO nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Creating config drive at /var/lib/nova/instances/caff6378-2d93-4b73-8d58-da0e74a6d46e/disk.config#033[00m
Feb 25 07:28:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:28:13 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1048601496' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:28:13 np0005629333 nova_compute[244014]: 2026-02-25 12:28:13.341 244018 DEBUG oslo_concurrency.processutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/caff6378-2d93-4b73-8d58-da0e74a6d46e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3609w17g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:28:13 np0005629333 nova_compute[244014]: 2026-02-25 12:28:13.365 244018 DEBUG oslo_concurrency.processutils [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:28:13 np0005629333 nova_compute[244014]: 2026-02-25 12:28:13.372 244018 DEBUG nova.compute.provider_tree [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:28:13 np0005629333 nova_compute[244014]: 2026-02-25 12:28:13.385 244018 DEBUG nova.scheduler.client.report [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:28:14 np0005629333 nova_compute[244014]: 2026-02-25 12:28:13.406 244018 DEBUG oslo_concurrency.lockutils [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:14 np0005629333 nova_compute[244014]: 2026-02-25 12:28:13.427 244018 INFO nova.scheduler.client.report [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Deleted allocations for instance b8086e43-4c45-422f-a3b5-fa665c256b30#033[00m
Feb 25 07:28:14 np0005629333 nova_compute[244014]: 2026-02-25 12:28:13.471 244018 DEBUG oslo_concurrency.processutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/caff6378-2d93-4b73-8d58-da0e74a6d46e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3609w17g" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
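The mkisofs run above packs the generated metadata into a Joliet/Rock Ridge ISO with volume label config-2, the label guests probe for a config drive. A sketch that lists its contents with isoinfo (shipped alongside mkisofs in the cdrkit/genisoimage toolchain); this assumes it runs before the local file is deleted after the RBD import below:

    # List the config-drive ISO; expect openstack/latest/meta_data.json etc.
    import subprocess

    iso = ("/var/lib/nova/instances/"
           "caff6378-2d93-4b73-8d58-da0e74a6d46e/disk.config")
    subprocess.run(["isoinfo", "-R", "-l", "-i", iso], check=True)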
Feb 25 07:28:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 281 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.8 MiB/s wr, 374 op/s
Feb 25 07:28:14 np0005629333 nova_compute[244014]: 2026-02-25 12:28:14.545 244018 DEBUG nova.storage.rbd_utils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] rbd image caff6378-2d93-4b73-8d58-da0e74a6d46e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:28:14 np0005629333 nova_compute[244014]: 2026-02-25 12:28:14.548 244018 DEBUG oslo_concurrency.processutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/caff6378-2d93-4b73-8d58-da0e74a6d46e/disk.config caff6378-2d93-4b73-8d58-da0e74a6d46e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:28:14 np0005629333 nova_compute[244014]: 2026-02-25 12:28:14.579 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:28:14 np0005629333 nova_compute[244014]: 2026-02-25 12:28:14.582 244018 DEBUG oslo_concurrency.lockutils [None req-31f355d9-9448-4a8b-9a96-1b770c48e2ec b774fd0c04fc403d9ddb205f1e6abbc5 c85a955249394f0faf7c890f5cd0df32 - - default default] Lock "b8086e43-4c45-422f-a3b5-fa665c256b30" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.086s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:14 np0005629333 nova_compute[244014]: 2026-02-25 12:28:14.584 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:28:14 np0005629333 nova_compute[244014]: 2026-02-25 12:28:14.696 244018 DEBUG oslo_concurrency.processutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/caff6378-2d93-4b73-8d58-da0e74a6d46e/disk.config caff6378-2d93-4b73-8d58-da0e74a6d46e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
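With the import returned 0, the config drive now lives in the vms pool and the local copy is about to be removed. A sketch for verifying the uploaded image with the same client credentials the import used:

    # Confirm the config-drive image exists in RBD after the import above.
    import json
    import subprocess

    out = subprocess.run(
        ["rbd", "info", "--format", "json",
         "vms/caff6378-2d93-4b73-8d58-da0e74a6d46e_disk.config",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    print(json.loads(out)["size"], "bytes")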
Feb 25 07:28:14 np0005629333 nova_compute[244014]: 2026-02-25 12:28:14.697 244018 INFO nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Deleting local config drive /var/lib/nova/instances/caff6378-2d93-4b73-8d58-da0e74a6d46e/disk.config because it was imported into RBD.#033[00m
Feb 25 07:28:14 np0005629333 nova_compute[244014]: 2026-02-25 12:28:14.710 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:14 np0005629333 kernel: tap21ae71a3-b1: entered promiscuous mode
Feb 25 07:28:14 np0005629333 NetworkManager[49836]: <info>  [1772022494.7502] manager: (tap21ae71a3-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/241)
Feb 25 07:28:14 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:14Z|00542|binding|INFO|Claiming lport 21ae71a3-b142-47f7-a2b0-b62a7625a31d for this chassis.
Feb 25 07:28:14 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:14Z|00543|binding|INFO|21ae71a3-b142-47f7-a2b0-b62a7625a31d: Claiming fa:16:3e:a1:6b:aa 10.100.0.7
Feb 25 07:28:14 np0005629333 nova_compute[244014]: 2026-02-25 12:28:14.751 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.762 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:6b:aa 10.100.0.7'], port_security=['fa:16:3e:a1:6b:aa 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'caff6378-2d93-4b73-8d58-da0e74a6d46e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd427256-862c-42a4-8ee4-681f0377401d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fbfdd47eb32e4d4aadc46f464899cf16', 'neutron:revision_number': '2', 'neutron:security_group_ids': '61b17d81-cbe9-4d4b-9f04-df6306382ea5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=48a62b60-46ed-47ca-bd9c-153a4a8b3bce, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=21ae71a3-b142-47f7-a2b0-b62a7625a31d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:28:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.764 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 21ae71a3-b142-47f7-a2b0-b62a7625a31d in datapath fd427256-862c-42a4-8ee4-681f0377401d bound to our chassis#033[00m
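PortBindingUpdatedEvent above is an ovsdbapp IDL row watcher: it fires when a Port_Binding row changes (here, the chassis column being set). A minimal sketch of how such an event class is shaped, assuming the ovsdbapp RowEvent base visible in the log's repr; the handler body is illustrative:

    # Shape of an ovsdbapp row event like the one matched above.
    from ovsdbapp.backend.ovs_idl import event

    class PortBindingUpdated(event.RowEvent):
        def __init__(self):
            # watch 'update' events on the southbound Port_Binding table
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def run(self, event, row, old):
            # invoked with the new and old row when a binding changes
            print("binding updated:", row.logical_port)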
Feb 25 07:28:14 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:14Z|00544|binding|INFO|Setting lport 21ae71a3-b142-47f7-a2b0-b62a7625a31d ovn-installed in OVS
Feb 25 07:28:14 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:14Z|00545|binding|INFO|Setting lport 21ae71a3-b142-47f7-a2b0-b62a7625a31d up in Southbound
Feb 25 07:28:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.766 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fd427256-862c-42a4-8ee4-681f0377401d#033[00m
Feb 25 07:28:14 np0005629333 nova_compute[244014]: 2026-02-25 12:28:14.767 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.779 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1715bd7a-b995-4f87-91fa-f2506d604339]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.780 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfd427256-81 in ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:28:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.782 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfd427256-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:28:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.782 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8524f905-c4ca-4f74-b367-3f86c8721df3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.784 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[01f9dc86-1555-4377-880d-9ce30da19f51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
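"Creating VETH tapfd427256-81 in ovnmeta-..." above is the agent building a veth pair with one end moved into the per-network namespace; the surrounding privsep replies are the privileged netlink calls it makes. A minimal pyroute2 sketch of the same shape (illustrative interface and namespace names, requires root):

    # Veth pair with one end in a named namespace, as the agent does above.
    from pyroute2 import IPRoute, netns

    NS = "ovnmeta-example"                 # illustrative namespace name
    netns.create(NS)
    ipr = IPRoute()
    ipr.link("add", ifname="tap-ex0", peer="tap-ex1", kind="veth")
    peer_idx = ipr.link_lookup(ifname="tap-ex1")[0]
    ipr.link("set", index=peer_idx, net_ns_fd=NS)  # move peer into namespace
    ipr.close()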
Feb 25 07:28:14 np0005629333 systemd-udevd[297843]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:28:14 np0005629333 systemd-machined[210048]: New machine qemu-70-instance-0000003d.
Feb 25 07:28:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.796 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[02d754bb-8371-4cab-8895-317843421fc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:14 np0005629333 NetworkManager[49836]: <info>  [1772022494.8038] device (tap21ae71a3-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:28:14 np0005629333 NetworkManager[49836]: <info>  [1772022494.8045] device (tap21ae71a3-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:28:14 np0005629333 systemd[1]: Started Virtual Machine qemu-70-instance-0000003d.
Feb 25 07:28:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.811 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cfa268d2-58d4-4bc9-8c48-8326faa19c2a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.841 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9a00817b-37dd-43f9-a597-2fc4b8234c41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.847 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[acc8a9ff-6155-4c05-85d4-18193ae86794]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:14 np0005629333 NetworkManager[49836]: <info>  [1772022494.8496] manager: (tapfd427256-80): new Veth device (/org/freedesktop/NetworkManager/Devices/242)
Feb 25 07:28:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.879 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[50fb0ac2-bc68-48a4-a96b-76db49b851f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.883 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3a88aeef-7977-4853-8227-d06224cc011b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:14 np0005629333 NetworkManager[49836]: <info>  [1772022494.9043] device (tapfd427256-80): carrier: link connected
Feb 25 07:28:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.907 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b3b73af8-095a-4ec0-99ea-f4f4943cb467]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.925 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a4920f20-13fc-40c2-ac8b-c8333bd086c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd427256-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:8b:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446431, 'reachable_time': 31818, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297875, 'error': None, 'target': 'ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.941 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1fddcdbb-8928-4a80-b0f8-43ec8826ffe1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe67:8b6e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 446431, 'tstamp': 446431}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297876, 'error': None, 'target': 'ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:14.963 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cd276803-2f5b-4f69-9444-7d6dcef97d54]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfd427256-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:67:8b:6e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446431, 'reachable_time': 31818, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297877, 'error': None, 'target': 'ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:15.000 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a1415673-e4bf-4c48-b0f0-2256e9f66c11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:15.060 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dbabe378-8d00-4c1c-95df-80c804f1b951]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:15.065 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd427256-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:15.066 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:15.066 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd427256-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.068 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:15 np0005629333 kernel: tapfd427256-80: entered promiscuous mode
Feb 25 07:28:15 np0005629333 NetworkManager[49836]: <info>  [1772022495.0698] manager: (tapfd427256-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/243)
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.071 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:15.075 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfd427256-80, col_values=(('external_ids', {'iface-id': '4c9fe647-e7fc-4a1b-aee3-9f86c40cf6d7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.076 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:15 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:15Z|00546|binding|INFO|Releasing lport 4c9fe647-e7fc-4a1b-aee3-9f86c40cf6d7 from this chassis (sb_readonly=0)
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.078 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:15.078 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fd427256-862c-42a4-8ee4-681f0377401d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fd427256-862c-42a4-8ee4-681f0377401d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:15.079 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[70a525a9-d4de-474c-a4ae-4022c3f78c49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:15.081 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-fd427256-862c-42a4-8ee4-681f0377401d
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/fd427256-862c-42a4-8ee4-681f0377401d.pid.haproxy
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID fd427256-862c-42a4-8ee4-681f0377401d
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:28:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:15.081 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d', 'env', 'PROCESS_TAG=haproxy-fd427256-862c-42a4-8ee4-681f0377401d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fd427256-862c-42a4-8ee4-681f0377401d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
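Taken together, the config dump and the rootwrap command show the metadata path for this network: a dedicated haproxy inside the ovnmeta- namespace listens on 169.254.169.254:80, forwards to the agent's unix socket at /var/lib/neutron/metadata_proxy, and tags each request with X-OVN-Network-ID. A rough equivalent of the launch step, with sudo, rootwrap and the PROCESS_TAG environment stripped out:

    import subprocess

    network_id = 'fd427256-862c-42a4-8ee4-681f0377401d'  # from the log above
    cfg = f'/var/lib/neutron/ovn-metadata-proxy/{network_id}.conf'

    # haproxy daemonizes itself (the generated config says "daemon"),
    # so run() returns once the master process has forked.
    subprocess.run(['ip', 'netns', 'exec', f'ovnmeta-{network_id}',
                    'haproxy', '-f', cfg], check=True)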
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.089 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:28:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Feb 25 07:28:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Feb 25 07:28:15 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.230 244018 DEBUG nova.compute.manager [req-7f1a0d98-2987-4143-876f-17fb601e0bcf req-7d29d795-3b40-4199-9f20-dde7ae75a46d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Received event network-vif-plugged-21ae71a3-b142-47f7-a2b0-b62a7625a31d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.230 244018 DEBUG oslo_concurrency.lockutils [req-7f1a0d98-2987-4143-876f-17fb601e0bcf req-7d29d795-3b40-4199-9f20-dde7ae75a46d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.230 244018 DEBUG oslo_concurrency.lockutils [req-7f1a0d98-2987-4143-876f-17fb601e0bcf req-7d29d795-3b40-4199-9f20-dde7ae75a46d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.231 244018 DEBUG oslo_concurrency.lockutils [req-7f1a0d98-2987-4143-876f-17fb601e0bcf req-7d29d795-3b40-4199-9f20-dde7ae75a46d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.231 244018 DEBUG nova.compute.manager [req-7f1a0d98-2987-4143-876f-17fb601e0bcf req-7d29d795-3b40-4199-9f20-dde7ae75a46d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Processing event network-vif-plugged-21ae71a3-b142-47f7-a2b0-b62a7625a31d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
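Each pop_instance_event call above runs under a per-instance "<uuid>-events" lock, so event delivery cannot race with waiters registering or clearing events. A minimal illustration of the oslo.concurrency primitive involved; the function body is a placeholder, only the lock name comes from the log:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('caff6378-2d93-4b73-8d58-da0e74a6d46e-events')
    def pop_event():
        # Critical section: only one thread may touch this instance's
        # pending-event table at a time.
        pass

    pop_event()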
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.335 244018 DEBUG nova.compute.manager [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.335 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022495.3342507, caff6378-2d93-4b73-8d58-da0e74a6d46e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.336 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] VM Started (Lifecycle Event)#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.340 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.344 244018 INFO nova.virt.libvirt.driver [-] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Instance spawned successfully.#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.344 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.371 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.375 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.390 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.391 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.392 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.392 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.393 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.394 244018 DEBUG nova.virt.libvirt.driver [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.405 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.406 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022495.3378887, caff6378-2d93-4b73-8d58-da0e74a6d46e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.406 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.442 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.447 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022495.340199, caff6378-2d93-4b73-8d58-da0e74a6d46e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.447 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.473 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.477 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.485 244018 INFO nova.compute.manager [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Took 9.80 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.487 244018 DEBUG nova.compute.manager [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.498 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:28:15 np0005629333 podman[297951]: 2026-02-25 12:28:15.535013105 +0000 UTC m=+0.068718865 container create c2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.557 244018 INFO nova.compute.manager [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Took 11.85 seconds to build instance.#033[00m
Feb 25 07:28:15 np0005629333 systemd[1]: Started libpod-conmon-c2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6.scope.
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.587 244018 DEBUG oslo_concurrency.lockutils [None req-b897786d-e935-44c4-87a0-d6611cc8e472 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.587 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 7.026s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.587 244018 INFO nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.587 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:15 np0005629333 podman[297951]: 2026-02-25 12:28:15.502960308 +0000 UTC m=+0.036666138 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:28:15 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:28:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/741e5774616400b8936dd56216d1caef309170a34fbe27eb97875339193d1ec0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:28:15 np0005629333 podman[297951]: 2026-02-25 12:28:15.630348464 +0000 UTC m=+0.164054284 container init c2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 07:28:15 np0005629333 podman[297951]: 2026-02-25 12:28:15.637410363 +0000 UTC m=+0.171116123 container start c2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d, io.buildah.version=1.43.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
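podman records the proxy container's whole lifecycle here: image pull, create, init, start. Something like the following reproduces the create-and-start step; the image and name are the logged ones, but the real invocation adds host networking, mounts and labels omitted in this sketch:

    import subprocess

    image = ('quay.io/podified-antelope-centos9/'
             'openstack-neutron-metadata-agent-ovn:current-podified')
    name = 'neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d'

    # create + start in one step; podman emits the same create/init/start
    # events seen above.
    subprocess.run(['podman', 'run', '--detach', '--name', name, image],
                   check=True)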
Feb 25 07:28:15 np0005629333 neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d[297966]: [NOTICE]   (297970) : New worker (297972) forked
Feb 25 07:28:15 np0005629333 neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d[297966]: [NOTICE]   (297970) : Loading success.
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.701 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022480.699482, 160a4e3f-b197-4b82-a2ff-cebf79df47df => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.701 244018 INFO nova.compute.manager [-] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:28:15 np0005629333 nova_compute[244014]: 2026-02-25 12:28:15.725 244018 DEBUG nova.compute.manager [None req-040ae3d5-94b3-4c26-b839-d874f5f27b09 - - - - - -] [instance: 160a4e3f-b197-4b82-a2ff-cebf79df47df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:28:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 281 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 982 KiB/s rd, 15 KiB/s wr, 195 op/s
Feb 25 07:28:16 np0005629333 nova_compute[244014]: 2026-02-25 12:28:16.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:28:16 np0005629333 nova_compute[244014]: 2026-02-25 12:28:16.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.264 244018 DEBUG oslo_concurrency.lockutils [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Acquiring lock "caff6378-2d93-4b73-8d58-da0e74a6d46e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.265 244018 DEBUG oslo_concurrency.lockutils [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.265 244018 DEBUG oslo_concurrency.lockutils [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Acquiring lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.266 244018 DEBUG oslo_concurrency.lockutils [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.266 244018 DEBUG oslo_concurrency.lockutils [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.268 244018 INFO nova.compute.manager [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Terminating instance#033[00m
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.269 244018 DEBUG nova.compute.manager [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:28:17 np0005629333 kernel: tap21ae71a3-b1 (unregistering): left promiscuous mode
Feb 25 07:28:17 np0005629333 NetworkManager[49836]: <info>  [1772022497.3046] device (tap21ae71a3-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:28:17 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:17Z|00547|binding|INFO|Releasing lport 21ae71a3-b142-47f7-a2b0-b62a7625a31d from this chassis (sb_readonly=0)
Feb 25 07:28:17 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:17Z|00548|binding|INFO|Setting lport 21ae71a3-b142-47f7-a2b0-b62a7625a31d down in Southbound
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.310 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:17 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:17Z|00549|binding|INFO|Removing iface tap21ae71a3-b1 ovn-installed in OVS
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.313 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.318 244018 DEBUG nova.compute.manager [req-40190565-bbf6-4634-8e50-2386445a213c req-d50065fc-d9ff-473e-b420-7dd970e17e4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Received event network-vif-plugged-21ae71a3-b142-47f7-a2b0-b62a7625a31d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.319 244018 DEBUG oslo_concurrency.lockutils [req-40190565-bbf6-4634-8e50-2386445a213c req-d50065fc-d9ff-473e-b420-7dd970e17e4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:17.319 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:6b:aa 10.100.0.7'], port_security=['fa:16:3e:a1:6b:aa 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'caff6378-2d93-4b73-8d58-da0e74a6d46e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fd427256-862c-42a4-8ee4-681f0377401d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fbfdd47eb32e4d4aadc46f464899cf16', 'neutron:revision_number': '4', 'neutron:security_group_ids': '61b17d81-cbe9-4d4b-9f04-df6306382ea5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=48a62b60-46ed-47ca-bd9c-153a4a8b3bce, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=21ae71a3-b142-47f7-a2b0-b62a7625a31d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
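The agent notices the unbind through ovsdbapp's row-event framework: an event object declares the table and event types it matches, and the IDL notify loop hands it each matching row. A stripped-down event class in that style; the run() body is a placeholder, not neutron's logic:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Match updates on the southbound Port_Binding table,
            # as in the "Matched UPDATE" line above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Placeholder: the real agent re-provisions or tears down
            # the metadata namespace for row.datapath here.
            print('port', row.logical_port, 'changed')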
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.319 244018 DEBUG oslo_concurrency.lockutils [req-40190565-bbf6-4634-8e50-2386445a213c req-d50065fc-d9ff-473e-b420-7dd970e17e4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.319 244018 DEBUG oslo_concurrency.lockutils [req-40190565-bbf6-4634-8e50-2386445a213c req-d50065fc-d9ff-473e-b420-7dd970e17e4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.320 244018 DEBUG nova.compute.manager [req-40190565-bbf6-4634-8e50-2386445a213c req-d50065fc-d9ff-473e-b420-7dd970e17e4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] No waiting events found dispatching network-vif-plugged-21ae71a3-b142-47f7-a2b0-b62a7625a31d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.320 244018 WARNING nova.compute.manager [req-40190565-bbf6-4634-8e50-2386445a213c req-d50065fc-d9ff-473e-b420-7dd970e17e4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Received unexpected event network-vif-plugged-21ae71a3-b142-47f7-a2b0-b62a7625a31d for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:28:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:17.321 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 21ae71a3-b142-47f7-a2b0-b62a7625a31d in datapath fd427256-862c-42a4-8ee4-681f0377401d unbound from our chassis#033[00m
Feb 25 07:28:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:17.324 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fd427256-862c-42a4-8ee4-681f0377401d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:28:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:17.325 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4b510df0-95b3-4d15-8092-6b7c8c104941]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:17.326 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d namespace which is not needed anymore#033[00m
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.328 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.329 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:17 np0005629333 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003d.scope: Deactivated successfully.
Feb 25 07:28:17 np0005629333 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003d.scope: Consumed 2.485s CPU time.
Feb 25 07:28:17 np0005629333 systemd-machined[210048]: Machine qemu-70-instance-0000003d terminated.
Feb 25 07:28:17 np0005629333 neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d[297966]: [NOTICE]   (297970) : haproxy version is 2.8.14-c23fe91
Feb 25 07:28:17 np0005629333 neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d[297966]: [NOTICE]   (297970) : path to executable is /usr/sbin/haproxy
Feb 25 07:28:17 np0005629333 neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d[297966]: [WARNING]  (297970) : Exiting Master process...
Feb 25 07:28:17 np0005629333 neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d[297966]: [ALERT]    (297970) : Current worker (297972) exited with code 143 (Terminated)
Feb 25 07:28:17 np0005629333 neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d[297966]: [WARNING]  (297970) : All workers exited. Exiting... (0)
Feb 25 07:28:17 np0005629333 systemd[1]: libpod-c2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6.scope: Deactivated successfully.
Feb 25 07:28:17 np0005629333 conmon[297966]: conmon c2c93aca61e07f2afbed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6.scope/container/memory.events
Feb 25 07:28:17 np0005629333 podman[298003]: 2026-02-25 12:28:17.488572243 +0000 UTC m=+0.054977297 container died c2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.507 244018 INFO nova.virt.libvirt.driver [-] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Instance destroyed successfully.#033[00m
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.509 244018 DEBUG nova.objects.instance [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lazy-loading 'resources' on Instance uuid caff6378-2d93-4b73-8d58-da0e74a6d46e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:28:17 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6-userdata-shm.mount: Deactivated successfully.
Feb 25 07:28:17 np0005629333 systemd[1]: var-lib-containers-storage-overlay-741e5774616400b8936dd56216d1caef309170a34fbe27eb97875339193d1ec0-merged.mount: Deactivated successfully.
Feb 25 07:28:17 np0005629333 podman[298003]: 2026-02-25 12:28:17.540534294 +0000 UTC m=+0.106939368 container cleanup c2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.540 244018 DEBUG nova.virt.libvirt.vif [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:28:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-409359068',display_name='tempest-ServerAddressesNegativeTestJSON-server-409359068',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-409359068',id=61,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:28:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fbfdd47eb32e4d4aadc46f464899cf16',ramdisk_id='',reservation_id='r-061938kj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesNegativeTestJSON-512544112',owner_user_name='tempest-ServerAddressesNegativeTestJSON-512544112-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:28:15Z,user_data=None,user_id='322943d9098b419f8d2d573cc6b7fe8e',uuid=caff6378-2d93-4b73-8d58-da0e74a6d46e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "address": "fa:16:3e:a1:6b:aa", "network": {"id": "fd427256-862c-42a4-8ee4-681f0377401d", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1066034838-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbfdd47eb32e4d4aadc46f464899cf16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21ae71a3-b1", "ovs_interfaceid": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.541 244018 DEBUG nova.network.os_vif_util [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Converting VIF {"id": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "address": "fa:16:3e:a1:6b:aa", "network": {"id": "fd427256-862c-42a4-8ee4-681f0377401d", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1066034838-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fbfdd47eb32e4d4aadc46f464899cf16", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21ae71a3-b1", "ovs_interfaceid": "21ae71a3-b142-47f7-a2b0-b62a7625a31d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.543 244018 DEBUG nova.network.os_vif_util [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:6b:aa,bridge_name='br-int',has_traffic_filtering=True,id=21ae71a3-b142-47f7-a2b0-b62a7625a31d,network=Network(fd427256-862c-42a4-8ee4-681f0377401d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21ae71a3-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.544 244018 DEBUG os_vif [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:6b:aa,bridge_name='br-int',has_traffic_filtering=True,id=21ae71a3-b142-47f7-a2b0-b62a7625a31d,network=Network(fd427256-862c-42a4-8ee4-681f0377401d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21ae71a3-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.547 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.548 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21ae71a3-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.552 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:17 np0005629333 systemd[1]: libpod-conmon-c2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6.scope: Deactivated successfully.
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.556 244018 INFO os_vif [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:6b:aa,bridge_name='br-int',has_traffic_filtering=True,id=21ae71a3-b142-47f7-a2b0-b62a7625a31d,network=Network(fd427256-862c-42a4-8ee4-681f0377401d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21ae71a3-b1')#033[00m
Feb 25 07:28:17 np0005629333 podman[298042]: 2026-02-25 12:28:17.62448854 +0000 UTC m=+0.059507566 container remove c2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 07:28:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:17.631 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[86f3cc12-689d-4ac3-b893-319d1c59a131]: (4, ('Wed Feb 25 12:28:17 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d (c2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6)\nc2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6\nWed Feb 25 12:28:17 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d (c2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6)\nc2c93aca61e07f2afbedfdcba092c4b0aacae945f99daf08e8f1c2204d061fe6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:17.633 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ed14f5ee-73e5-4d6e-814a-3f09954ee61d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:17.634 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd427256-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.637 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:17 np0005629333 kernel: tapfd427256-80: left promiscuous mode
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.650 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:17.654 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7d91bfad-f777-42a0-8666-1c70b3ada04f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:17.675 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[74d1852b-15e9-4e36-8404-84735c3d2483]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:17.677 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c7966e39-bff7-4dca-8e11-0d8fd6e984e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:17.695 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[46345c1c-cb8c-4740-a4b1-909c152db1a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446424, 'reachable_time': 22937, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298076, 'error': None, 'target': 'ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
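The privsep reply above is a raw netlink RTM_NEWLINK dump of the loopback device, taken inside the ovnmeta namespace just before it is torn down. Roughly the same query can be made with pyroute2; the namespace name is the one from the log:

    from pyroute2 import NetNS

    # Enumerate links inside the metadata namespace over netlink.
    with NetNS('ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d') as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'), link['state'])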
Feb 25 07:28:17 np0005629333 systemd[1]: run-netns-ovnmeta\x2dfd427256\x2d862c\x2d42a4\x2d8ee4\x2d681f0377401d.mount: Deactivated successfully.
Feb 25 07:28:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:17.699 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fd427256-862c-42a4-8ee4-681f0377401d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:28:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:17.699 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[252231f8-acfb-46e7-83e5-8d351f55776a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 281 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 46 KiB/s wr, 264 op/s
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.871 244018 INFO nova.virt.libvirt.driver [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Deleting instance files /var/lib/nova/instances/caff6378-2d93-4b73-8d58-da0e74a6d46e_del
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.872 244018 INFO nova.virt.libvirt.driver [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Deletion of /var/lib/nova/instances/caff6378-2d93-4b73-8d58-da0e74a6d46e_del complete
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.943 244018 INFO nova.compute.manager [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Took 0.67 seconds to destroy the instance on the hypervisor.
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.944 244018 DEBUG oslo.service.loopingcall [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.944 244018 DEBUG nova.compute.manager [-] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 07:28:17 np0005629333 nova_compute[244014]: 2026-02-25 12:28:17.945 244018 DEBUG nova.network.neutron [-] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 07:28:18 np0005629333 nova_compute[244014]: 2026-02-25 12:28:18.609 244018 DEBUG nova.network.neutron [-] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:28:18 np0005629333 nova_compute[244014]: 2026-02-25 12:28:18.625 244018 INFO nova.compute.manager [-] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Took 0.68 seconds to deallocate network for instance.
Feb 25 07:28:18 np0005629333 nova_compute[244014]: 2026-02-25 12:28:18.630 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022483.6297674, 43b8959e-9cf0-42ca-aa1f-8a380321c971 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:28:18 np0005629333 nova_compute[244014]: 2026-02-25 12:28:18.631 244018 INFO nova.compute.manager [-] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] VM Stopped (Lifecycle Event)
Feb 25 07:28:18 np0005629333 nova_compute[244014]: 2026-02-25 12:28:18.659 244018 DEBUG nova.compute.manager [None req-9eca5cda-9cbb-45b4-9187-e96ff7bc1e36 - - - - - -] [instance: 43b8959e-9cf0-42ca-aa1f-8a380321c971] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:28:18 np0005629333 nova_compute[244014]: 2026-02-25 12:28:18.665 244018 DEBUG oslo_concurrency.lockutils [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:28:18 np0005629333 nova_compute[244014]: 2026-02-25 12:28:18.665 244018 DEBUG oslo_concurrency.lockutils [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:28:18 np0005629333 nova_compute[244014]: 2026-02-25 12:28:18.764 244018 DEBUG oslo_concurrency.processutils [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:28:18 np0005629333 nova_compute[244014]: 2026-02-25 12:28:18.804 244018 DEBUG nova.compute.manager [req-1e3174d4-f346-4a6b-937f-ca4ff8467ace req-c9002bfa-064e-41eb-90ac-8b85d15034dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Received event network-vif-deleted-21ae71a3-b142-47f7-a2b0-b62a7625a31d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:28:18 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:18Z|00550|binding|INFO|Releasing lport 3b184c15-8ef4-4e11-bd18-e1253a4ff440 from this chassis (sb_readonly=0)
Feb 25 07:28:19 np0005629333 nova_compute[244014]: 2026-02-25 12:28:19.023 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:28:19 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:19.265 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:28:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:28:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/951023251' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:28:19 np0005629333 nova_compute[244014]: 2026-02-25 12:28:19.367 244018 DEBUG oslo_concurrency.processutils [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
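nova refreshes Ceph pool capacity by shelling out to ceph df as client.openstack, as the dispatch on the monitor and the CMD-returned line above show. A minimal sketch of the same probe via oslo.concurrency (assumes a reachable cluster with the logged keyring, and that a pool named 'vms' exists, as it does later in this log):

    # Sketch: replay the "ceph df" capacity probe from the log and pull
    # out one pool's stats from the JSON it returns.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    vms = next(p for p in stats['pools'] if p['name'] == 'vms')
    print(vms['stats'])  # per-pool usage, the input to DISK_GB inventory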
Feb 25 07:28:19 np0005629333 nova_compute[244014]: 2026-02-25 12:28:19.374 244018 DEBUG nova.compute.provider_tree [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:28:19 np0005629333 nova_compute[244014]: 2026-02-25 12:28:19.402 244018 DEBUG nova.scheduler.client.report [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
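The inventory record above is what placement schedules against: effective capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the logged values:

    # Effective capacity implied by the inventory in the log line above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2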
Feb 25 07:28:19 np0005629333 nova_compute[244014]: 2026-02-25 12:28:19.435 244018 DEBUG oslo_concurrency.lockutils [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:28:19 np0005629333 nova_compute[244014]: 2026-02-25 12:28:19.454 244018 DEBUG nova.compute.manager [req-a606ba28-64bf-471e-82cb-8bf87c5d4219 req-288e6048-3f6d-43ef-8166-b70242b19728 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Received event network-vif-unplugged-21ae71a3-b142-47f7-a2b0-b62a7625a31d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:28:19 np0005629333 nova_compute[244014]: 2026-02-25 12:28:19.455 244018 DEBUG oslo_concurrency.lockutils [req-a606ba28-64bf-471e-82cb-8bf87c5d4219 req-288e6048-3f6d-43ef-8166-b70242b19728 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:28:19 np0005629333 nova_compute[244014]: 2026-02-25 12:28:19.455 244018 DEBUG oslo_concurrency.lockutils [req-a606ba28-64bf-471e-82cb-8bf87c5d4219 req-288e6048-3f6d-43ef-8166-b70242b19728 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:28:19 np0005629333 nova_compute[244014]: 2026-02-25 12:28:19.456 244018 DEBUG oslo_concurrency.lockutils [req-a606ba28-64bf-471e-82cb-8bf87c5d4219 req-288e6048-3f6d-43ef-8166-b70242b19728 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:28:19 np0005629333 nova_compute[244014]: 2026-02-25 12:28:19.456 244018 DEBUG nova.compute.manager [req-a606ba28-64bf-471e-82cb-8bf87c5d4219 req-288e6048-3f6d-43ef-8166-b70242b19728 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] No waiting events found dispatching network-vif-unplugged-21ae71a3-b142-47f7-a2b0-b62a7625a31d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:28:19 np0005629333 nova_compute[244014]: 2026-02-25 12:28:19.457 244018 WARNING nova.compute.manager [req-a606ba28-64bf-471e-82cb-8bf87c5d4219 req-288e6048-3f6d-43ef-8166-b70242b19728 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Received unexpected event network-vif-unplugged-21ae71a3-b142-47f7-a2b0-b62a7625a31d for instance with vm_state deleted and task_state None.
Feb 25 07:28:19 np0005629333 nova_compute[244014]: 2026-02-25 12:28:19.457 244018 DEBUG nova.compute.manager [req-a606ba28-64bf-471e-82cb-8bf87c5d4219 req-288e6048-3f6d-43ef-8166-b70242b19728 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Received event network-vif-plugged-21ae71a3-b142-47f7-a2b0-b62a7625a31d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:28:19 np0005629333 nova_compute[244014]: 2026-02-25 12:28:19.457 244018 DEBUG oslo_concurrency.lockutils [req-a606ba28-64bf-471e-82cb-8bf87c5d4219 req-288e6048-3f6d-43ef-8166-b70242b19728 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:28:19 np0005629333 nova_compute[244014]: 2026-02-25 12:28:19.458 244018 DEBUG oslo_concurrency.lockutils [req-a606ba28-64bf-471e-82cb-8bf87c5d4219 req-288e6048-3f6d-43ef-8166-b70242b19728 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:28:19 np0005629333 nova_compute[244014]: 2026-02-25 12:28:19.458 244018 DEBUG oslo_concurrency.lockutils [req-a606ba28-64bf-471e-82cb-8bf87c5d4219 req-288e6048-3f6d-43ef-8166-b70242b19728 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:28:19 np0005629333 nova_compute[244014]: 2026-02-25 12:28:19.458 244018 DEBUG nova.compute.manager [req-a606ba28-64bf-471e-82cb-8bf87c5d4219 req-288e6048-3f6d-43ef-8166-b70242b19728 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] No waiting events found dispatching network-vif-plugged-21ae71a3-b142-47f7-a2b0-b62a7625a31d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:28:19 np0005629333 nova_compute[244014]: 2026-02-25 12:28:19.459 244018 WARNING nova.compute.manager [req-a606ba28-64bf-471e-82cb-8bf87c5d4219 req-288e6048-3f6d-43ef-8166-b70242b19728 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Received unexpected event network-vif-plugged-21ae71a3-b142-47f7-a2b0-b62a7625a31d for instance with vm_state deleted and task_state None.
Feb 25 07:28:19 np0005629333 nova_compute[244014]: 2026-02-25 12:28:19.463 244018 INFO nova.scheduler.client.report [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Deleted allocations for instance caff6378-2d93-4b73-8d58-da0e74a6d46e
Feb 25 07:28:19 np0005629333 nova_compute[244014]: 2026-02-25 12:28:19.543 244018 DEBUG oslo_concurrency.lockutils [None req-395b2090-905a-4922-b7c3-7995c41c8d1a 322943d9098b419f8d2d573cc6b7fe8e fbfdd47eb32e4d4aadc46f464899cf16 - - default default] Lock "caff6378-2d93-4b73-8d58-da0e74a6d46e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.279s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
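The Acquiring/acquired/released triples throughout this log come from oslo.concurrency's lockutils wrapper, which also times the reported wait and hold intervals. A minimal sketch of the two usage patterns (lock names copied from the log; bodies are placeholders):

    # Sketch of the lock pattern behind the lockutils lines above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_usage():
        pass  # critical section: resource tracker bookkeeping

    # equivalent context-manager form, per-instance event lock:
    with lockutils.lock('caff6378-2d93-4b73-8d58-da0e74a6d46e-events'):
        pass  # pop/dispatch instance events while holding the lock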
Feb 25 07:28:19 np0005629333 nova_compute[244014]: 2026-02-25 12:28:19.712 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:28:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 281 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 38 KiB/s wr, 217 op/s
Feb 25 07:28:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:28:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 248 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 33 KiB/s wr, 116 op/s
Feb 25 07:28:22 np0005629333 nova_compute[244014]: 2026-02-25 12:28:22.552 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:28:23 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:23Z|00551|binding|INFO|Releasing lport 3b184c15-8ef4-4e11-bd18-e1253a4ff440 from this chassis (sb_readonly=0)
Feb 25 07:28:23 np0005629333 nova_compute[244014]: 2026-02-25 12:28:23.406 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:28:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 235 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 32 KiB/s wr, 121 op/s
Feb 25 07:28:24 np0005629333 nova_compute[244014]: 2026-02-25 12:28:24.715 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:28:24 np0005629333 nova_compute[244014]: 2026-02-25 12:28:24.733 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022489.7320051, b8086e43-4c45-422f-a3b5-fa665c256b30 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:28:24 np0005629333 nova_compute[244014]: 2026-02-25 12:28:24.734 244018 INFO nova.compute.manager [-] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] VM Stopped (Lifecycle Event)
Feb 25 07:28:24 np0005629333 nova_compute[244014]: 2026-02-25 12:28:24.754 244018 DEBUG nova.compute.manager [None req-c660e2bb-19a0-4ae8-a36f-3580bc5492ed - - - - - -] [instance: b8086e43-4c45-422f-a3b5-fa665c256b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:28:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:28:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 235 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 30 KiB/s wr, 113 op/s
Feb 25 07:28:26 np0005629333 nova_compute[244014]: 2026-02-25 12:28:26.988 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:28:27 np0005629333 nova_compute[244014]: 2026-02-25 12:28:27.554 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:28:27 np0005629333 nova_compute[244014]: 2026-02-25 12:28:27.693 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:28:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 235 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 101 op/s
Feb 25 07:28:29 np0005629333 nova_compute[244014]: 2026-02-25 12:28:29.717 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:28:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 235 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 4.8 KiB/s wr, 27 op/s
Feb 25 07:28:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:28:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:28:30
Feb 25 07:28:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:28:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:28:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'vms', 'default.rgw.meta', 'images', '.rgw.root', 'volumes', 'default.rgw.control', 'default.rgw.log', '.mgr']
Feb 25 07:28:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:28:31 np0005629333 nova_compute[244014]: 2026-02-25 12:28:31.078 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:28:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:28:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:28:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:28:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:28:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:28:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:28:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:28:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:28:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:28:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:28:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:28:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:28:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:28:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 235 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 4.8 KiB/s wr, 28 op/s
Feb 25 07:28:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:28:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:28:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:28:32 np0005629333 nova_compute[244014]: 2026-02-25 12:28:32.505 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022497.5050864, caff6378-2d93-4b73-8d58-da0e74a6d46e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:28:32 np0005629333 nova_compute[244014]: 2026-02-25 12:28:32.506 244018 INFO nova.compute.manager [-] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] VM Stopped (Lifecycle Event)
Feb 25 07:28:32 np0005629333 nova_compute[244014]: 2026-02-25 12:28:32.530 244018 DEBUG nova.compute.manager [None req-36b35eb2-c45f-453e-b18f-c29ad76644f0 - - - - - -] [instance: caff6378-2d93-4b73-8d58-da0e74a6d46e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:28:32 np0005629333 nova_compute[244014]: 2026-02-25 12:28:32.557 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:28:32 np0005629333 podman[298100]: 2026-02-25 12:28:32.743927599 +0000 UTC m=+0.081365478 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 07:28:32 np0005629333 podman[298101]: 2026-02-25 12:28:32.776358939 +0000 UTC m=+0.110101433 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 25 07:28:32 np0005629333 nova_compute[244014]: 2026-02-25 12:28:32.840 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:28:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 235 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.3 KiB/s wr, 20 op/s
Feb 25 07:28:33 np0005629333 nova_compute[244014]: 2026-02-25 12:28:33.874 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:28:34 np0005629333 nova_compute[244014]: 2026-02-25 12:28:34.719 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:28:35 np0005629333 nova_compute[244014]: 2026-02-25 12:28:35.057 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Acquiring lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:28:35 np0005629333 nova_compute[244014]: 2026-02-25 12:28:35.058 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:28:35 np0005629333 nova_compute[244014]: 2026-02-25 12:28:35.075 244018 DEBUG nova.compute.manager [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:28:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:28:35 np0005629333 nova_compute[244014]: 2026-02-25 12:28:35.175 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:28:35 np0005629333 nova_compute[244014]: 2026-02-25 12:28:35.176 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:28:35 np0005629333 nova_compute[244014]: 2026-02-25 12:28:35.186 244018 DEBUG nova.virt.hardware [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:28:35 np0005629333 nova_compute[244014]: 2026-02-25 12:28:35.186 244018 INFO nova.compute.claims [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:28:35 np0005629333 nova_compute[244014]: 2026-02-25 12:28:35.335 244018 DEBUG oslo_concurrency.processutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:28:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 235 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 0 op/s
Feb 25 07:28:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:28:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2522776792' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:28:35 np0005629333 nova_compute[244014]: 2026-02-25 12:28:35.897 244018 DEBUG oslo_concurrency.processutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:28:35 np0005629333 nova_compute[244014]: 2026-02-25 12:28:35.905 244018 DEBUG nova.compute.provider_tree [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:28:35 np0005629333 nova_compute[244014]: 2026-02-25 12:28:35.926 244018 DEBUG nova.scheduler.client.report [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:28:35 np0005629333 nova_compute[244014]: 2026-02-25 12:28:35.953 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:28:35 np0005629333 nova_compute[244014]: 2026-02-25 12:28:35.954 244018 DEBUG nova.compute.manager [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.000 244018 DEBUG nova.compute.manager [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.001 244018 DEBUG nova.network.neutron [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.029 244018 INFO nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.057 244018 DEBUG nova.compute.manager [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.155 244018 DEBUG nova.compute.manager [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.157 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.158 244018 INFO nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Creating image(s)
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.188 244018 DEBUG nova.storage.rbd_utils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] rbd image 9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.220 244018 DEBUG nova.storage.rbd_utils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] rbd image 9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.253 244018 DEBUG nova.storage.rbd_utils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] rbd image 9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.258 244018 DEBUG oslo_concurrency.processutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.346 244018 DEBUG oslo_concurrency.processutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
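The qemu-img probe above is wrapped in oslo_concurrency.prlimit, capping address space at 1 GiB and CPU time at 30 s before inspecting the cached base image. A sketch of issuing the same capped call through processutils (the prlimit keyword mirrors the logged --as/--cpu values; assumes the base image path exists locally):

    # Sketch: resource-capped qemu-img info, as in the two log lines above.
    import json
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
    out, _ = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=limits)
    info = json.loads(out)
    print(info['format'], info['virtual-size'])  # base image format and size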
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.347 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.348 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.349 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.382 244018 DEBUG nova.storage.rbd_utils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] rbd image 9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.388 244018 DEBUG oslo_concurrency.processutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.416 244018 DEBUG nova.policy [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f1a7d945e359492faaab7c197de3e9e8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3d46f1174f384dc3be789d4301748e2d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.682 244018 DEBUG oslo_concurrency.processutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.730 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.732 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.778 244018 DEBUG nova.storage.rbd_utils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] resizing rbd image 9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
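The lines above show the Ceph-backed image flow for this boot: the cached base image is imported into the vms pool and then grown to the flavor's root-disk size (1073741824 bytes, i.e. 1 GiB). A CLI-equivalent sketch (nova's rbd_utils performs the resize through the librbd binding rather than the rbd CLI; '1G' stands in for the logged byte count):

    # Sketch: import-then-resize, mirroring the commands and values logged.
    import subprocess

    base = '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'
    disk = '9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk'
    subprocess.run(['rbd', 'import', '--pool', 'vms', base, disk,
                    '--image-format=2', '--id', 'openstack',
                    '--conf', '/etc/ceph/ceph.conf'], check=True)
    # grow to the flavor root disk size (shrinking would need --allow-shrink)
    subprocess.run(['rbd', 'resize', 'vms/' + disk, '--size', '1G',
                    '--id', 'openstack',
                    '--conf', '/etc/ceph/ceph.conf'], check=True)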
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.817 244018 DEBUG nova.compute.manager [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.884 244018 DEBUG nova.objects.instance [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lazy-loading 'migration_context' on Instance uuid 9630899b-57d8-4e46-b9e0-8762f0f4f2cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.908 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.909 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Ensure instance console log exists: /var/lib/nova/instances/9630899b-57d8-4e46-b9e0-8762f0f4f2cb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.909 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.910 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.910 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.922 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.923 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.933 244018 DEBUG nova.virt.hardware [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:28:36 np0005629333 nova_compute[244014]: 2026-02-25 12:28:36.934 244018 INFO nova.compute.claims [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:28:37 np0005629333 nova_compute[244014]: 2026-02-25 12:28:37.132 244018 DEBUG oslo_concurrency.processutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:28:37 np0005629333 nova_compute[244014]: 2026-02-25 12:28:37.301 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:28:37 np0005629333 nova_compute[244014]: 2026-02-25 12:28:37.462 244018 DEBUG nova.network.neutron [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Successfully created port: 1b61d923-a06e-4948-b459-b6cf5b8c668d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:28:37 np0005629333 nova_compute[244014]: 2026-02-25 12:28:37.559 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:28:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:28:37 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/772933509' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:28:37 np0005629333 nova_compute[244014]: 2026-02-25 12:28:37.671 244018 DEBUG oslo_concurrency.processutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:28:37 np0005629333 nova_compute[244014]: 2026-02-25 12:28:37.678 244018 DEBUG nova.compute.provider_tree [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:28:37 np0005629333 nova_compute[244014]: 2026-02-25 12:28:37.711 244018 DEBUG nova.scheduler.client.report [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:28:37 np0005629333 nova_compute[244014]: 2026-02-25 12:28:37.737 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:28:37 np0005629333 nova_compute[244014]: 2026-02-25 12:28:37.739 244018 DEBUG nova.compute.manager [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:28:37 np0005629333 nova_compute[244014]: 2026-02-25 12:28:37.810 244018 DEBUG nova.compute.manager [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:28:37 np0005629333 nova_compute[244014]: 2026-02-25 12:28:37.811 244018 DEBUG nova.network.neutron [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:28:37 np0005629333 nova_compute[244014]: 2026-02-25 12:28:37.833 244018 INFO nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:28:37 np0005629333 nova_compute[244014]: 2026-02-25 12:28:37.854 244018 DEBUG nova.compute.manager [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:28:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 264 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 9.4 KiB/s rd, 826 KiB/s wr, 14 op/s
Feb 25 07:28:37 np0005629333 nova_compute[244014]: 2026-02-25 12:28:37.976 244018 DEBUG nova.compute.manager [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:28:37 np0005629333 nova_compute[244014]: 2026-02-25 12:28:37.978 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:28:37 np0005629333 nova_compute[244014]: 2026-02-25 12:28:37.979 244018 INFO nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Creating image(s)
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.013 244018 DEBUG nova.storage.rbd_utils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.051 244018 DEBUG nova.storage.rbd_utils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.086 244018 DEBUG nova.storage.rbd_utils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.091 244018 DEBUG oslo_concurrency.processutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.130 244018 DEBUG nova.policy [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1f8bbe7db4454108aca005daa72d5c22', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56700581ea88438ba482d90bc702ced3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.193 244018 DEBUG oslo_concurrency.processutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
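The qemu-img probe above runs under oslo.concurrency's prlimit wrapper: the child process gets a 1 GiB address-space cap (--as=1073741824) and a 30-second CPU cap (--cpu=30), so a malformed or hostile image cannot make qemu-img consume the host while parsing headers. A minimal sketch of the same guard using only the standard library (path and limits copied from the command line above):

    import json
    import resource
    import subprocess

    def qemu_img_info(path, as_bytes=1073741824, cpu_seconds=30):
        # Apply RLIMIT_AS/RLIMIT_CPU in the child before exec, roughly what
        # "python3 -m oslo_concurrency.prlimit --as=... --cpu=... -- ..." does.
        def _limits():
            resource.setrlimit(resource.RLIMIT_AS, (as_bytes, as_bytes))
            resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))

        proc = subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            env={"LC_ALL": "C", "LANG": "C"},
            capture_output=True, check=True, preexec_fn=_limits)
        return json.loads(proc.stdout)

    info = qemu_img_info(
        "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6")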
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.194 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.195 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.196 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
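The acquire/release pair above (waited 0.001s, held 0.001s) is oslo.concurrency's instrumented lock around the image-cache fetch: every build that needs base image a63dc6db... serializes on the image hash, so the first request downloads it and later requests fall through almost instantly, as here. A hedged sketch of the pattern (fetch_image() is a hypothetical stand-in for the real download step):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("a63dc6dbb387022d47a8ca49bddcc4af2508a4d6")
    def fetch_func_sync():
        # Hypothetical body: only the first caller downloads the base image
        # into /var/lib/nova/instances/_base; later callers find it cached
        # and hold the lock only for the check, matching the 0.001s above.
        fetch_image()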
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.228 244018 DEBUG nova.storage.rbd_utils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.232 244018 DEBUG oslo_concurrency.processutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.522 244018 DEBUG oslo_concurrency.processutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.619 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "8086400b-ac70-4c79-928b-4f1966084384" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.619 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.625 244018 DEBUG nova.storage.rbd_utils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] resizing rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
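Taken together, the lines since 12:28:38.013 are the whole Ceph-backed provisioning step for instance ee9cd98b: confirm the disk is absent from the vms pool, rbd-import the cached base image as a format-2 image, then grow it to the flavor's 1073741824-byte (1 GiB) root disk. A sketch of the resize step with the python-rbd bindings (pool, client id and image name copied from the log; error handling omitted):

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")
    try:
        with rbd.Image(ioctx, "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk") as image:
            if image.size() < 1073741824:   # only ever grow the root disk
                image.resize(1073741824)    # rbd sizes are in bytes
    finally:
        ioctx.close()
        cluster.shutdown()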
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.676 244018 DEBUG nova.compute.manager [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.709 244018 DEBUG nova.network.neutron [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Successfully updated port: 1b61d923-a06e-4948-b459-b6cf5b8c668d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.715 244018 DEBUG nova.objects.instance [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'migration_context' on Instance uuid ee9cd98b-1ca6-48e7-aa44-a09caf048a1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.738 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Acquiring lock "refresh_cache-9630899b-57d8-4e46-b9e0-8762f0f4f2cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.739 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Acquired lock "refresh_cache-9630899b-57d8-4e46-b9e0-8762f0f4f2cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.739 244018 DEBUG nova.network.neutron [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.748 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.749 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Ensure instance console log exists: /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.749 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.749 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.749 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.762 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.762 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.768 244018 DEBUG nova.virt.hardware [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.769 244018 INFO nova.compute.claims [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.942 244018 DEBUG nova.network.neutron [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:28:38 np0005629333 nova_compute[244014]: 2026-02-25 12:28:38.948 244018 DEBUG oslo_concurrency.processutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:28:39 np0005629333 nova_compute[244014]: 2026-02-25 12:28:39.112 244018 DEBUG nova.network.neutron [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Successfully created port: 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:28:39 np0005629333 nova_compute[244014]: 2026-02-25 12:28:39.250 244018 DEBUG nova.compute.manager [req-08720f54-7c71-4ccb-9fe4-75617dffdd9d req-9442e942-2263-4a3f-a44c-75af60831b57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Received event network-changed-1b61d923-a06e-4948-b459-b6cf5b8c668d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:28:39 np0005629333 nova_compute[244014]: 2026-02-25 12:28:39.250 244018 DEBUG nova.compute.manager [req-08720f54-7c71-4ccb-9fe4-75617dffdd9d req-9442e942-2263-4a3f-a44c-75af60831b57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Refreshing instance network info cache due to event network-changed-1b61d923-a06e-4948-b459-b6cf5b8c668d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:28:39 np0005629333 nova_compute[244014]: 2026-02-25 12:28:39.251 244018 DEBUG oslo_concurrency.lockutils [req-08720f54-7c71-4ccb-9fe4-75617dffdd9d req-9442e942-2263-4a3f-a44c-75af60831b57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-9630899b-57d8-4e46-b9e0-8762f0f4f2cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:28:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:28:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3428883068' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:28:39 np0005629333 nova_compute[244014]: 2026-02-25 12:28:39.499 244018 DEBUG oslo_concurrency.processutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:28:39 np0005629333 nova_compute[244014]: 2026-02-25 12:28:39.507 244018 DEBUG nova.compute.provider_tree [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:28:39 np0005629333 nova_compute[244014]: 2026-02-25 12:28:39.528 244018 DEBUG nova.scheduler.client.report [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
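The inventory blob above is what the resource tracker reports to Placement; usable capacity per resource class is (total - reserved) * allocation_ratio, so this 8-core, 7679 MB, 59 GB host advertises 32 schedulable VCPUs, 7167 MB of RAM, and 52.2 GB of disk. A worked check of that arithmetic:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2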
Feb 25 07:28:39 np0005629333 nova_compute[244014]: 2026-02-25 12:28:39.553 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:28:39 np0005629333 nova_compute[244014]: 2026-02-25 12:28:39.555 244018 DEBUG nova.compute.manager [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:28:39 np0005629333 nova_compute[244014]: 2026-02-25 12:28:39.648 244018 DEBUG nova.compute.manager [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:28:39 np0005629333 nova_compute[244014]: 2026-02-25 12:28:39.649 244018 DEBUG nova.network.neutron [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:28:39 np0005629333 nova_compute[244014]: 2026-02-25 12:28:39.673 244018 INFO nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:28:39 np0005629333 nova_compute[244014]: 2026-02-25 12:28:39.696 244018 DEBUG nova.compute.manager [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:28:39 np0005629333 nova_compute[244014]: 2026-02-25 12:28:39.722 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:28:39 np0005629333 nova_compute[244014]: 2026-02-25 12:28:39.813 244018 DEBUG nova.compute.manager [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:28:39 np0005629333 nova_compute[244014]: 2026-02-25 12:28:39.815 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:28:39 np0005629333 nova_compute[244014]: 2026-02-25 12:28:39.815 244018 INFO nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Creating image(s)
Feb 25 07:28:39 np0005629333 nova_compute[244014]: 2026-02-25 12:28:39.845 244018 DEBUG nova.storage.rbd_utils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] rbd image 8086400b-ac70-4c79-928b-4f1966084384_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:28:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 264 MiB data, 680 MiB used, 59 GiB / 60 GiB avail; 9.4 KiB/s rd, 825 KiB/s wr, 14 op/s
Feb 25 07:28:39 np0005629333 nova_compute[244014]: 2026-02-25 12:28:39.882 244018 DEBUG nova.storage.rbd_utils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] rbd image 8086400b-ac70-4c79-928b-4f1966084384_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:28:39 np0005629333 nova_compute[244014]: 2026-02-25 12:28:39.917 244018 DEBUG nova.storage.rbd_utils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] rbd image 8086400b-ac70-4c79-928b-4f1966084384_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:28:39 np0005629333 nova_compute[244014]: 2026-02-25 12:28:39.923 244018 DEBUG oslo_concurrency.processutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:28:39 np0005629333 nova_compute[244014]: 2026-02-25 12:28:39.973 244018 DEBUG nova.network.neutron [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Updating instance_info_cache with network_info: [{"id": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "address": "fa:16:3e:f6:48:42", "network": {"id": "0406cfb3-4360-4052-8e1d-7019c4224092", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1148317190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d46f1174f384dc3be789d4301748e2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b61d923-a0", "ovs_interfaceid": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:28:39 np0005629333 nova_compute[244014]: 2026-02-25 12:28:39.982 244018 DEBUG nova.policy [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bcb4ded096bc4f7993f96ca892b82333', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '780a93a8758a4bd78b22fe68ed6276cf', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
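This policy failure (and the identical one at 12:28:38.130 for the other request) is benign: the caller holds only the reader and member roles, so Nova simply declines to treat the network as external rather than raising an error. A hedged sketch of the same check with oslo.policy (the is_admin:True default shown here is an assumption matching Nova's usual default for this rule):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(
        policy.RuleDefault("network:attach_external_network", "is_admin:True"))

    creds = {"roles": ["reader", "member"], "is_admin": False,
             "project_id": "780a93a8758a4bd78b22fe68ed6276cf"}
    # enforce() returns False instead of raising when do_raise is not set.
    print(enforcer.enforce("network:attach_external_network", {}, creds))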
Feb 25 07:28:39 np0005629333 nova_compute[244014]: 2026-02-25 12:28:39.996 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Acquiring lock "826789b1-e26a-4569-bd77-bd1ef76388be" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:28:39 np0005629333 nova_compute[244014]: 2026-02-25 12:28:39.996 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.002 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Releasing lock "refresh_cache-9630899b-57d8-4e46-b9e0-8762f0f4f2cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.003 244018 DEBUG nova.compute.manager [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Instance network_info: |[{"id": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "address": "fa:16:3e:f6:48:42", "network": {"id": "0406cfb3-4360-4052-8e1d-7019c4224092", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1148317190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d46f1174f384dc3be789d4301748e2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b61d923-a0", "ovs_interfaceid": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.004 244018 DEBUG oslo_concurrency.lockutils [req-08720f54-7c71-4ccb-9fe4-75617dffdd9d req-9442e942-2263-4a3f-a44c-75af60831b57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-9630899b-57d8-4e46-b9e0-8762f0f4f2cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.004 244018 DEBUG nova.network.neutron [req-08720f54-7c71-4ccb-9fe4-75617dffdd9d req-9442e942-2263-4a3f-a44c-75af60831b57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Refreshing network info cache for port 1b61d923-a06e-4948-b459-b6cf5b8c668d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.010 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Start _get_guest_xml network_info=[{"id": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "address": "fa:16:3e:f6:48:42", "network": {"id": "0406cfb3-4360-4052-8e1d-7019c4224092", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1148317190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d46f1174f384dc3be789d4301748e2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b61d923-a0", "ovs_interfaceid": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
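The _get_guest_xml call above gathers everything needed to render the libvirt domain: the OVS port's network_info, a disk mapping that puts the root disk on virtio as vda and the config drive on a SATA cdrom, the cirros image metadata, and the block device info. For an RBD-backed root disk like this one, the rendered disk element conventionally takes libvirt's network-disk form; the following is a hand-built illustration of that standard syntax (monitor address taken from the ceph-mon audit lines, 192.168.122.100; the auth <secret/> reference and cache mode are assumptions), not XML captured from this host:

    import xml.etree.ElementTree as ET

    disk = ET.Element("disk", type="network", device="disk")
    ET.SubElement(disk, "driver", name="qemu", type="raw")
    source = ET.SubElement(disk, "source", protocol="rbd",
                           name="vms/9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk")
    ET.SubElement(source, "host", name="192.168.122.100", port="6789")
    ET.SubElement(disk, "auth", username="openstack")  # <secret/> child omitted
    ET.SubElement(disk, "target", dev="vda", bus="virtio")
    print(ET.tostring(disk, encoding="unicode"))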
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.012 244018 DEBUG nova.compute.manager [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.022 244018 WARNING nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.031 244018 DEBUG oslo_concurrency.processutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.032 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.033 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.033 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.061 244018 DEBUG nova.storage.rbd_utils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] rbd image 8086400b-ac70-4c79-928b-4f1966084384_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.066 244018 DEBUG oslo_concurrency.processutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 8086400b-ac70-4c79-928b-4f1966084384_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.109 244018 DEBUG nova.network.neutron [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Successfully updated port: 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.123 244018 DEBUG nova.virt.libvirt.host [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.124 244018 DEBUG nova.virt.libvirt.host [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.129 244018 DEBUG nova.virt.libvirt.host [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.129 244018 DEBUG nova.virt.libvirt.host [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
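The four host.py lines above are the driver probing for a CPU controller: the cgroups-v1 check comes up empty on this unified-hierarchy host, the cgroups-v2 check succeeds, and with that Nova knows CPU shares and quota can be applied to guests. On a cgroups-v2 host the second probe reduces to a single file read, roughly:

    def host_has_cpu_controller() -> bool:
        # cgroups v2 lists every enabled controller, space-separated,
        # in one file at the root of the unified hierarchy.
        try:
            with open("/sys/fs/cgroup/cgroup.controllers") as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False  # no unified hierarchy mounted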
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.130 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.130 244018 DEBUG nova.virt.hardware [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.131 244018 DEBUG nova.virt.hardware [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.132 244018 DEBUG nova.virt.hardware [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.132 244018 DEBUG nova.virt.hardware [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.133 244018 DEBUG nova.virt.hardware [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.133 244018 DEBUG nova.virt.hardware [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.133 244018 DEBUG nova.virt.hardware [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.134 244018 DEBUG nova.virt.hardware [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.135 244018 DEBUG nova.virt.hardware [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.135 244018 DEBUG nova.virt.hardware [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.135 244018 DEBUG nova.virt.hardware [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
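The run from "Flavor limits 0:0:0" to "Sorted desired topologies" is Nova choosing a guest CPU layout: neither flavor nor image expresses a preference (all 0), the limits are effectively unbounded (65536 each), and a 1-vCPU m1.nano admits exactly one factorization, sockets=1, cores=1, threads=1. The enumeration is essentially every (sockets, cores, threads) triple whose product is the vCPU count; a simplified sketch of what nova.virt.hardware computes (the real code also orders candidates by preference):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield every factorization of vcpus that fits within the limits.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]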
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.141 244018 DEBUG oslo_concurrency.processutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:28:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.184 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.186 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.192 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "refresh_cache-ee9cd98b-1ca6-48e7-aa44-a09caf048a1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.193 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquired lock "refresh_cache-ee9cd98b-1ca6-48e7-aa44-a09caf048a1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.193 244018 DEBUG nova.network.neutron [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.206 244018 DEBUG nova.virt.hardware [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.207 244018 INFO nova.compute.claims [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.371 244018 DEBUG nova.network.neutron [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.386 244018 DEBUG oslo_concurrency.processutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.412 244018 DEBUG oslo_concurrency.processutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 8086400b-ac70-4c79-928b-4f1966084384_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.482 244018 DEBUG nova.storage.rbd_utils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] resizing rbd image 8086400b-ac70-4c79-928b-4f1966084384_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.580 244018 DEBUG nova.objects.instance [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lazy-loading 'migration_context' on Instance uuid 8086400b-ac70-4c79-928b-4f1966084384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.596 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.596 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Ensure instance console log exists: /var/lib/nova/instances/8086400b-ac70-4c79-928b-4f1966084384/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.597 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.597 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.597 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.623 244018 DEBUG nova.network.neutron [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Successfully created port: 05178abb-a113-4013-9194-9243afe9d0ff _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:28:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:28:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/275815993' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.744 244018 DEBUG oslo_concurrency.processutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
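ceph mon dump --format=json, requested at 12:28:40.141 and audited by the monitor just above, is how the driver learns the monitor addresses to embed in RBD-backed disk definitions. A minimal parse of that output (assuming the ceph CLI's usual JSON shape: a top-level "mons" list whose entries carry an "addr" such as "192.168.122.100:6789/0"):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True)
    monmap = json.loads(out.stdout)
    # Drop the trailing "/nonce" to get host:port pairs usable in libvirt XML.
    addrs = [m["addr"].rsplit("/", 1)[0] for m in monmap["mons"]]
    print(addrs)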
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.765 244018 DEBUG nova.storage.rbd_utils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] rbd image 9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.769 244018 DEBUG oslo_concurrency.processutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:28:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:28:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4096300758' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.947 244018 DEBUG oslo_concurrency.processutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.953 244018 DEBUG nova.compute.provider_tree [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.969 244018 DEBUG nova.scheduler.client.report [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.995 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:28:40 np0005629333 nova_compute[244014]: 2026-02-25 12:28:40.996 244018 DEBUG nova.compute.manager [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.045 244018 DEBUG nova.compute.manager [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.046 244018 DEBUG nova.network.neutron [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.065 244018 INFO nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.083 244018 DEBUG nova.compute.manager [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.179 244018 DEBUG nova.compute.manager [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.180 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.180 244018 INFO nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Creating image(s)
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.208 244018 DEBUG nova.storage.rbd_utils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] rbd image 826789b1-e26a-4569-bd77-bd1ef76388be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.238 244018 DEBUG nova.storage.rbd_utils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] rbd image 826789b1-e26a-4569-bd77-bd1ef76388be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:28:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:28:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3528588789' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.267 244018 DEBUG nova.storage.rbd_utils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] rbd image 826789b1-e26a-4569-bd77-bd1ef76388be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
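
The repeated "rbd image ... does not exist" probes above are nova's rbd_utils opening the would-be root disk before creating it. A sketch of the same existence check through the python rados/rbd bindings (pool and client names from the log; the exact connection handling is an assumption for illustration):

    import rados
    import rbd

    # Probe whether the instance's root disk already exists in the vms pool.
    with rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            try:
                with rbd.Image(ioctx, '826789b1-e26a-4569-bd77-bd1ef76388be_disk',
                               read_only=True):
                    exists = True
            except rbd.ImageNotFound:
                exists = False
    print(exists)  # False at this point in the boot, per the log
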
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.272 244018 DEBUG oslo_concurrency.processutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
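
The qemu-img probe above is deliberately wrapped in oslo_concurrency.prlimit, capping address space at 1 GiB and CPU time at 30 s so a malformed base image cannot wedge the compute host. Roughly the same call through processutils (paths taken from the log):

    from oslo_concurrency import processutils

    # Resource-capped qemu-img info, equivalent to the logged command line:
    # processutils builds the `python3 -m oslo_concurrency.prlimit` wrapper.
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1073741824, cpu_time=30),
    )
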
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.314 244018 DEBUG oslo_concurrency.processutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
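
The `ceph mon dump --format=json` call above (audited by ceph-mon a few lines up as client.openstack) is how the driver discovers monitor endpoints for the RBD disks. A sketch of extracting monitor hosts, assuming Ceph's usual mon-map JSON layout:

    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    mon_map = json.loads(out)
    # public_addr is typically "ip:port/nonce"; keep the ip:port part.
    hosts = [m['public_addr'].split('/')[0] for m in mon_map.get('mons', [])]
    print(hosts)  # expected: ['192.168.122.100:6789'] on this deployment
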
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.316 244018 DEBUG nova.virt.libvirt.vif [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:28:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-761477046',display_name='tempest-ServerAddressesTestJSON-server-761477046',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-761477046',id=62,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3d46f1174f384dc3be789d4301748e2d',ramdisk_id='',reservation_id='r-0u0fmhzw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-785917861',owner_user_name='tempest-ServerAddressesTestJSON-785917861-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:28:36Z,user_data=None,user_id='f1a7d945e359492faaab7c197de3e9e8',uuid=9630899b-57d8-4e46-b9e0-8762f0f4f2cb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "address": "fa:16:3e:f6:48:42", "network": {"id": "0406cfb3-4360-4052-8e1d-7019c4224092", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1148317190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d46f1174f384dc3be789d4301748e2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b61d923-a0", "ovs_interfaceid": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.317 244018 DEBUG nova.network.os_vif_util [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Converting VIF {"id": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "address": "fa:16:3e:f6:48:42", "network": {"id": "0406cfb3-4360-4052-8e1d-7019c4224092", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1148317190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d46f1174f384dc3be789d4301748e2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b61d923-a0", "ovs_interfaceid": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.318 244018 DEBUG nova.network.os_vif_util [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:48:42,bridge_name='br-int',has_traffic_filtering=True,id=1b61d923-a06e-4948-b459-b6cf5b8c668d,network=Network(0406cfb3-4360-4052-8e1d-7019c4224092),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b61d923-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.320 244018 DEBUG nova.objects.instance [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lazy-loading 'pci_devices' on Instance uuid 9630899b-57d8-4e46-b9e0-8762f0f4f2cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.340 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:28:41 np0005629333 nova_compute[244014]:  <uuid>9630899b-57d8-4e46-b9e0-8762f0f4f2cb</uuid>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:  <name>instance-0000003e</name>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerAddressesTestJSON-server-761477046</nova:name>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:28:40</nova:creationTime>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:28:41 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:        <nova:user uuid="f1a7d945e359492faaab7c197de3e9e8">tempest-ServerAddressesTestJSON-785917861-project-member</nova:user>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:        <nova:project uuid="3d46f1174f384dc3be789d4301748e2d">tempest-ServerAddressesTestJSON-785917861</nova:project>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:        <nova:port uuid="1b61d923-a06e-4948-b459-b6cf5b8c668d">
Feb 25 07:28:41 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <entry name="serial">9630899b-57d8-4e46-b9e0-8762f0f4f2cb</entry>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <entry name="uuid">9630899b-57d8-4e46-b9e0-8762f0f4f2cb</entry>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk">
Feb 25 07:28:41 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:28:41 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk.config">
Feb 25 07:28:41 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:28:41 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:f6:48:42"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <target dev="tap1b61d923-a0"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/9630899b-57d8-4e46-b9e0-8762f0f4f2cb/console.log" append="off"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:28:41 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:28:41 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:28:41 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:28:41 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
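
The <domain> document above, dumped at the end of _get_guest_xml, is what the driver hands to libvirt. A minimal sketch of launching a guest from such XML with the libvirt python bindings (domain.xml stands in for the document above; nova itself goes through defineXML plus a separate create step with far more error handling):

    import libvirt

    conn = libvirt.open('qemu:///system')
    with open('domain.xml') as f:        # the <domain type="kvm"> XML above
        xml = f.read()
    dom = conn.defineXML(xml)            # persist the domain definition
    dom.createWithFlags(0)               # power it on
    print(dom.name(), dom.UUIDString())  # instance-0000003e / 9630899b-57d8-4e46-b9e0-8762f0f4f2cb
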
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.341 244018 DEBUG nova.compute.manager [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Preparing to wait for external event network-vif-plugged-1b61d923-a06e-4948-b459-b6cf5b8c668d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.341 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Acquiring lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.342 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.342 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
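
prepare_for_instance_event above registers interest in network-vif-plugged-<port> before the tap is plugged, so spawn can later block until Neutron posts the external event; getting this ordering wrong is the classic vif-plugging-timeout failure. A simplified sketch of the pattern (the dict/Event machinery is illustrative; nova's real implementation lives in InstanceEvents):

    import threading

    events = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_for_instance_event(instance_uuid, event_name):
        return events.setdefault((instance_uuid, event_name), threading.Event())

    def external_instance_event(instance_uuid, event_name):
        # Invoked when Neutron POSTs os-server-external-events to nova-api.
        events[(instance_uuid, event_name)].set()

    ev = prepare_for_instance_event(
        '9630899b-57d8-4e46-b9e0-8762f0f4f2cb',
        'network-vif-plugged-1b61d923-a06e-4948-b459-b6cf5b8c668d')
    # ... plug the VIF and define the domain ...
    ev.wait(timeout=300)  # nova's vif_plugging_timeout defaults to 300s
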
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.343 244018 DEBUG nova.virt.libvirt.vif [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:28:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-761477046',display_name='tempest-ServerAddressesTestJSON-server-761477046',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-761477046',id=62,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3d46f1174f384dc3be789d4301748e2d',ramdisk_id='',reservation_id='r-0u0fmhzw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-785917861',owner_user_name='tempest-ServerAddressesTestJSON-785917861-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:28:36Z,user_data=None,user_id='f1a7d945e359492faaab7c197de3e9e8',uuid=9630899b-57d8-4e46-b9e0-8762f0f4f2cb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "address": "fa:16:3e:f6:48:42", "network": {"id": "0406cfb3-4360-4052-8e1d-7019c4224092", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1148317190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d46f1174f384dc3be789d4301748e2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b61d923-a0", "ovs_interfaceid": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.343 244018 DEBUG nova.network.os_vif_util [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Converting VIF {"id": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "address": "fa:16:3e:f6:48:42", "network": {"id": "0406cfb3-4360-4052-8e1d-7019c4224092", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1148317190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d46f1174f384dc3be789d4301748e2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b61d923-a0", "ovs_interfaceid": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.344 244018 DEBUG nova.network.os_vif_util [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:48:42,bridge_name='br-int',has_traffic_filtering=True,id=1b61d923-a06e-4948-b459-b6cf5b8c668d,network=Network(0406cfb3-4360-4052-8e1d-7019c4224092),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b61d923-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.345 244018 DEBUG os_vif [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:48:42,bridge_name='br-int',has_traffic_filtering=True,id=1b61d923-a06e-4948-b459-b6cf5b8c668d,network=Network(0406cfb3-4360-4052-8e1d-7019c4224092),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b61d923-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
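
The "Plugging vif" entry above is the os-vif library entry point: nova_to_osvif_vif converts the Neutron-shaped dict into the typed VIFOpenVSwitch object logged earlier, then os_vif.plug() dispatches to the ovs plugin. A sketch built from the fields in the logged repr (construction shown directly for illustration; most fields omitted from the Network object):

    import os_vif
    from os_vif.objects import network, vif
    from os_vif.objects.instance_info import InstanceInfo

    os_vif.initialize()  # loads the ovs plugin among others
    v = vif.VIFOpenVSwitch(
        id='1b61d923-a06e-4948-b459-b6cf5b8c668d',
        address='fa:16:3e:f6:48:42',
        network=network.Network(id='0406cfb3-4360-4052-8e1d-7019c4224092',
                                bridge='br-int', mtu=1442),
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='1b61d923-a06e-4948-b459-b6cf5b8c668d'),
        plugin='ovs', vif_name='tap1b61d923-a0', bridge_name='br-int',
        has_traffic_filtering=True, preserve_on_delete=False, active=False)
    info = InstanceInfo(uuid='9630899b-57d8-4e46-b9e0-8762f0f4f2cb',
                        name='instance-0000003e')
    os_vif.plug(v, info)  # on success, logs "Successfully plugged vif ..."
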
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.346 244018 DEBUG nova.network.neutron [req-08720f54-7c71-4ccb-9fe4-75617dffdd9d req-9442e942-2263-4a3f-a44c-75af60831b57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Updated VIF entry in instance network info cache for port 1b61d923-a06e-4948-b459-b6cf5b8c668d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.347 244018 DEBUG nova.network.neutron [req-08720f54-7c71-4ccb-9fe4-75617dffdd9d req-9442e942-2263-4a3f-a44c-75af60831b57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Updating instance_info_cache with network_info: [{"id": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "address": "fa:16:3e:f6:48:42", "network": {"id": "0406cfb3-4360-4052-8e1d-7019c4224092", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1148317190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d46f1174f384dc3be789d4301748e2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b61d923-a0", "ovs_interfaceid": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.348 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.348 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.349 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.352 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.352 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1b61d923-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.353 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1b61d923-a0, col_values=(('external_ids', {'iface-id': '1b61d923-a06e-4948-b459-b6cf5b8c668d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:48:42', 'vm-uuid': '9630899b-57d8-4e46-b9e0-8762f0f4f2cb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
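
The transaction above (AddBridgeCommand, AddPortCommand, DbSetCommand) is the ovs plugin wiring tap1b61d923-a0 into br-int and stamping the external_ids that OVN uses to bind the port. Equivalent calls through ovsdbapp's OVSDB API; the socket path is an assumption, the commands mirror the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap1b61d923-a0', may_exist=True))
        txn.add(api.db_set('Interface', 'tap1b61d923-a0',
                           ('external_ids',
                            {'iface-id': '1b61d923-a06e-4948-b459-b6cf5b8c668d',
                             'iface-status': 'active',
                             'attached-mac': 'fa:16:3e:f6:48:42',
                             'vm-uuid': '9630899b-57d8-4e46-b9e0-8762f0f4f2cb'})))

Because AddBridgeCommand carries may_exist=True, the first command is a no-op here, which is why the first transaction reports "Transaction caused no change".
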
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.355 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:41 np0005629333 NetworkManager[49836]: <info>  [1772022521.3563] manager: (tap1b61d923-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/244)
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.357 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.361 244018 DEBUG nova.compute.manager [req-1c0e5a47-8dee-4efe-a0c9-796288c5fd87 req-f147db26-b7fb-419d-8c5b-1303f4c4a413 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received event network-changed-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.361 244018 DEBUG nova.compute.manager [req-1c0e5a47-8dee-4efe-a0c9-796288c5fd87 req-f147db26-b7fb-419d-8c5b-1303f4c4a413 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Refreshing instance network info cache due to event network-changed-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.361 244018 DEBUG oslo_concurrency.lockutils [req-1c0e5a47-8dee-4efe-a0c9-796288c5fd87 req-f147db26-b7fb-419d-8c5b-1303f4c4a413 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-ee9cd98b-1ca6-48e7-aa44-a09caf048a1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.363 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.363 244018 INFO os_vif [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:48:42,bridge_name='br-int',has_traffic_filtering=True,id=1b61d923-a06e-4948-b459-b6cf5b8c668d,network=Network(0406cfb3-4360-4052-8e1d-7019c4224092),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b61d923-a0')#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.367 244018 DEBUG oslo_concurrency.lockutils [req-08720f54-7c71-4ccb-9fe4-75617dffdd9d req-9442e942-2263-4a3f-a44c-75af60831b57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-9630899b-57d8-4e46-b9e0-8762f0f4f2cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.371 244018 DEBUG oslo_concurrency.processutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.371 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.372 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.372 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.403 244018 DEBUG nova.storage.rbd_utils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] rbd image 826789b1-e26a-4569-bd77-bd1ef76388be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.408 244018 DEBUG oslo_concurrency.processutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 826789b1-e26a-4569-bd77-bd1ef76388be_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.489 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.490 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.490 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] No VIF found with MAC fa:16:3e:f6:48:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.491 244018 INFO nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Using config drive#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.526 244018 DEBUG nova.storage.rbd_utils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] rbd image 9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
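
"Using config drive" plus the _disk.config probe above correspond to the sata cdrom in the XML earlier: nova builds a small ISO9660 volume labelled config-2 and, with the RBD image backend, stores it as <uuid>_disk.config. A rough sketch of the two steps; the mkisofs flags and temp paths are assumptions, and nova drives this through its configdrive builder rather than a bare shell-out:

    import subprocess

    # 1) build the metadata ISO (volume label config-2 is what cloud-init probes)
    subprocess.check_call([
        'mkisofs', '-o', '/tmp/disk.config', '-V', 'config-2', '-r', '-J',
        '/tmp/configdrive-content',
    ])
    # 2) store it in ceph as the instance's .config image
    subprocess.check_call([
        'rbd', 'import', '--pool', 'vms', '/tmp/disk.config',
        '9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk.config',
        '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
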
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.537 244018 DEBUG nova.policy [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '05ca7159581049009a4223cf01ebf146', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'be8db082f3894d28b63a3709be538262', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
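
The failed network:attach_external_network check above is informational rather than an error: the reader/member token simply lacks the rule, which only matters if the requested network is marked external. A sketch of the underlying oslo.policy check; the default rule string used here is an assumption, see nova's policy defaults for the real one:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'role:admin'))  # assumed default

    creds = {'roles': ['reader', 'member'],
             'project_id': 'be8db082f3894d28b63a3709be538262'}
    print(enforcer.enforce('network:attach_external_network', {}, creds))
    # -> False for this token, matching the log
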
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.575 244018 DEBUG nova.network.neutron [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Updating instance_info_cache with network_info: [{"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.593 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Releasing lock "refresh_cache-ee9cd98b-1ca6-48e7-aa44-a09caf048a1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.593 244018 DEBUG nova.compute.manager [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Instance network_info: |[{"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.593 244018 DEBUG oslo_concurrency.lockutils [req-1c0e5a47-8dee-4efe-a0c9-796288c5fd87 req-f147db26-b7fb-419d-8c5b-1303f4c4a413 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-ee9cd98b-1ca6-48e7-aa44-a09caf048a1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.594 244018 DEBUG nova.network.neutron [req-1c0e5a47-8dee-4efe-a0c9-796288c5fd87 req-f147db26-b7fb-419d-8c5b-1303f4c4a413 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Refreshing network info cache for port 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.598 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Start _get_guest_xml network_info=[{"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.604 244018 WARNING nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.618 244018 DEBUG nova.virt.libvirt.host [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.619 244018 DEBUG nova.virt.libvirt.host [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.623 244018 DEBUG nova.virt.libvirt.host [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.624 244018 DEBUG nova.virt.libvirt.host [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
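
The v1-then-v2 probe above decides whether libvirt can apply CPU shares and quota for guests; on this host the v1 controller is missing but the unified hierarchy exposes it, so the check succeeds on the second pass. A sketch of the v2 half, assuming the standard cgroup2 mount point:

    def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
        # cgroup.controllers lists the controllers available on a v2 host.
        try:
            with open(f'{root}/cgroup.controllers') as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # no unified hierarchy mounted here

    print(has_cgroupsv2_cpu_controller())  # True on this host, per the log
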
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.624 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.624 244018 DEBUG nova.virt.hardware [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.625 244018 DEBUG nova.virt.hardware [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.625 244018 DEBUG nova.virt.hardware [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.625 244018 DEBUG nova.virt.hardware [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.625 244018 DEBUG nova.virt.hardware [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.625 244018 DEBUG nova.virt.hardware [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.626 244018 DEBUG nova.virt.hardware [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.626 244018 DEBUG nova.virt.hardware [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.626 244018 DEBUG nova.virt.hardware [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.626 244018 DEBUG nova.virt.hardware [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.626 244018 DEBUG nova.virt.hardware [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
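
The topology walk above reduces to a factorization problem: with no flavor or image preference (0:0:0) and the 65536 default limits, nova enumerates every sockets x cores x threads split of the vCPU count and sorts by preference, and for 1 vCPU only 1:1:1 exists. A simplified sketch of the enumeration:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Simplified: nova additionally filters and sorts by flavor/image
        # preferences before picking the first entry.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log
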
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.630 244018 DEBUG oslo_concurrency.processutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.714 244018 DEBUG oslo_concurrency.processutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 826789b1-e26a-4569-bd77-bd1ef76388be_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.807 244018 DEBUG nova.storage.rbd_utils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] resizing rbd image 826789b1-e26a-4569-bd77-bd1ef76388be_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
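
The import/resize pair above is the RBD image backend materialising the root disk: the cached base image is pushed into the vms pool, then grown to the flavor's root_gb (1073741824 bytes = 1 GiB). A sketch of both steps; the import command is verbatim from the log, and the resize is shown via the python binding, roughly what rbd_utils.resize wraps:

    import subprocess
    import rados
    import rbd

    name = '826789b1-e26a-4569-bd77-bd1ef76388be_disk'
    subprocess.check_call([  # verbatim command from the log
        'rbd', 'import', '--pool', 'vms',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        name, '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf',
    ])
    with rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx, name) as image:
                image.resize(1073741824)  # flavor root_gb = 1 GiB
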
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.862 244018 DEBUG nova.network.neutron [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Successfully updated port: 05178abb-a113-4013-9194-9243afe9d0ff _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:28:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 350 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 4.3 MiB/s wr, 80 op/s
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.902 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "refresh_cache-8086400b-ac70-4c79-928b-4f1966084384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.903 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquired lock "refresh_cache-8086400b-ac70-4c79-928b-4f1966084384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.903 244018 DEBUG nova.network.neutron [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.910 244018 DEBUG nova.objects.instance [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lazy-loading 'migration_context' on Instance uuid 826789b1-e26a-4569-bd77-bd1ef76388be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.929 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.930 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Ensure instance console log exists: /var/lib/nova/instances/826789b1-e26a-4569-bd77-bd1ef76388be/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.930 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.931 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.931 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.981 244018 DEBUG nova.compute.manager [req-cbb1b097-e9d0-4eea-831a-7b77e866f437 req-ec51d979-3ffd-496c-af27-5badf73731a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Received event network-changed-05178abb-a113-4013-9194-9243afe9d0ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.982 244018 DEBUG nova.compute.manager [req-cbb1b097-e9d0-4eea-831a-7b77e866f437 req-ec51d979-3ffd-496c-af27-5badf73731a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Refreshing instance network info cache due to event network-changed-05178abb-a113-4013-9194-9243afe9d0ff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:28:41 np0005629333 nova_compute[244014]: 2026-02-25 12:28:41.982 244018 DEBUG oslo_concurrency.lockutils [req-cbb1b097-e9d0-4eea-831a-7b77e866f437 req-ec51d979-3ffd-496c-af27-5badf73731a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8086400b-ac70-4c79-928b-4f1966084384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.044 244018 INFO nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Creating config drive at /var/lib/nova/instances/9630899b-57d8-4e46-b9e0-8762f0f4f2cb/disk.config#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.048 244018 DEBUG oslo_concurrency.processutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9630899b-57d8-4e46-b9e0-8762f0f4f2cb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpfkaefgbm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:28:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:28:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1691004792' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.193 244018 DEBUG oslo_concurrency.processutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
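
`ceph mon dump --format=json` is how the RBD driver discovers the monitor addresses that later surface as `<host>` elements in the guest disk XML below. A small sketch of that discovery step, assuming the usual JSON layout with a top-level "mons" list carrying "public_addr" entries:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    for mon in json.loads(out)['mons']:
        # public_addr looks like '192.168.122.100:6789/0'; keep host:port
        print(mon['name'], mon['public_addr'].split('/')[0])
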
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.214 244018 DEBUG nova.storage.rbd_utils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.217 244018 DEBUG oslo_concurrency.processutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.236 244018 DEBUG oslo_concurrency.processutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9630899b-57d8-4e46-b9e0-8762f0f4f2cb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpfkaefgbm" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
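
The config drive for instance 9630899b is an ISO 9660 image labelled config-2 (the volume label cloud-init probes for), built from a staged metadata tree and, on this Ceph-backed host, immediately imported into RBD. The same mkisofs invocation wrapped in Python, assuming the metadata files have been written under a temporary directory first (note the publisher string is a single argument, even though the journal renders it unquoted):

    import subprocess
    import tempfile

    with tempfile.TemporaryDirectory() as staging:
        # ... stage openstack/latest/meta_data.json, user_data, etc. here ...
        subprocess.run(
            ['/usr/bin/mkisofs', '-o', 'disk.config',
             '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
             '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
             '-quiet', '-J', '-r', '-V', 'config-2', staging],
            check=True)
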
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.238 244018 DEBUG nova.network.neutron [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:28:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.266 244018 DEBUG nova.storage.rbd_utils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] rbd image 9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:28:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:28:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:28:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:28:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016095451353241661 of space, bias 1.0, pg target 0.48286354059724984 quantized to 32 (current 32)
Feb 25 07:28:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:28:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:28:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:28:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:28:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:28:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002492519703322481 of space, bias 1.0, pg target 0.7477559109967442 quantized to 32 (current 32)
Feb 25 07:28:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.270 244018 DEBUG oslo_concurrency.processutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9630899b-57d8-4e46-b9e0-8762f0f4f2cb/disk.config 9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:28:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.773530469614834e-07 of space, bias 4.0, pg target 0.0010528236563537802 quantized to 16 (current 16)
Feb 25 07:28:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:28:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:28:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:28:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:28:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:28:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:28:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:28:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:28:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:28:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
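
Every effective_target_ratio/Pool pair above runs the same arithmetic: ideal PG count ≈ capacity_ratio × bias × a per-root PG budget, which works out to about 300 in this cluster (inferred from the logged numbers, e.g. 0.0016095 × 1.0 × 300 ≈ 0.483 for 'vms'); the ideal is then rounded to a power of two and floored at the pool's pg_num_min, which is why an ideal of 0.48 still prints "quantized to 32". A worked check under those assumptions (the 300 budget and the per-pool floors are inferred from the output, not read from config):

    ROOT_PG_BUDGET = 300.0  # inferred; depends on OSD count and mon_target_pg_per_osd

    def nearest_power_of_two(n):
        p = 1
        while p * 2 <= n:
            p *= 2
        return p * 2 if (n - p) > (p * 2 - n) else p

    def pg_target(capacity_ratio, bias, pg_num_min):
        ideal = capacity_ratio * bias * ROOT_PG_BUDGET
        return max(pg_num_min, nearest_power_of_two(max(ideal, 1)))

    print(pg_target(0.0016095451353241661, 1.0, 32))  # 32 -- 'vms' (default floor)
    print(pg_target(8.773530469614834e-07, 4.0, 16))  # 16 -- 'cephfs.cephfs.meta'
    print(pg_target(7.185749983720779e-06, 1.0, 1))   # 1  -- '.mgr' (evidently floored at 1)
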
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.335 244018 DEBUG nova.network.neutron [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Successfully created port: ba138ed1-c811-4043-9bd6-e1a5c6127f84 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.439 244018 DEBUG oslo_concurrency.processutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9630899b-57d8-4e46-b9e0-8762f0f4f2cb/disk.config 9630899b-57d8-4e46-b9e0-8762f0f4f2cb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.440 244018 INFO nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Deleting local config drive /var/lib/nova/instances/9630899b-57d8-4e46-b9e0-8762f0f4f2cb/disk.config because it was imported into RBD.#033[00m
Feb 25 07:28:42 np0005629333 kernel: tap1b61d923-a0: entered promiscuous mode
Feb 25 07:28:42 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:42Z|00552|binding|INFO|Claiming lport 1b61d923-a06e-4948-b459-b6cf5b8c668d for this chassis.
Feb 25 07:28:42 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:42Z|00553|binding|INFO|1b61d923-a06e-4948-b459-b6cf5b8c668d: Claiming fa:16:3e:f6:48:42 10.100.0.7
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.494 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.497 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:42 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:42Z|00554|binding|INFO|Setting lport 1b61d923-a06e-4948-b459-b6cf5b8c668d ovn-installed in OVS
Feb 25 07:28:42 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:42Z|00555|binding|INFO|Setting lport 1b61d923-a06e-4948-b459-b6cf5b8c668d up in Southbound
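
Port bring-up for instance 9630899b is visible from three angles at once here: the kernel creates the tap, ovn-controller claims the logical port for this chassis and flags it ovn-installed in OVS, then marks it up in the Southbound database, which is what Neutron eventually reports as the port going ACTIVE. One way to watch the same state from the chassis, shelling out to ovn-sbctl (a sketch; assumes the default SB connection works from this host):

    import subprocess

    lport = '1b61d923-a06e-4948-b459-b6cf5b8c668d'
    # Which chassis has claimed the logical port, and is it up?
    print(subprocess.check_output(
        ['ovn-sbctl', '--columns=logical_port,chassis,up',
         'find', 'Port_Binding', 'logical_port=%s' % lport],
        text=True))
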
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.509 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:48:42 10.100.0.7'], port_security=['fa:16:3e:f6:48:42 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9630899b-57d8-4e46-b9e0-8762f0f4f2cb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0406cfb3-4360-4052-8e1d-7019c4224092', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d46f1174f384dc3be789d4301748e2d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6a0b035a-6f2a-459e-b461-0e5414640091', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4783117-d743-4200-8206-169bf90ef6ab, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=1b61d923-a06e-4948-b459-b6cf5b8c668d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.511 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.512 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 1b61d923-a06e-4948-b459-b6cf5b8c668d in datapath 0406cfb3-4360-4052-8e1d-7019c4224092 bound to our chassis#033[00m
Feb 25 07:28:42 np0005629333 NetworkManager[49836]: <info>  [1772022522.5164] manager: (tap1b61d923-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/245)
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.517 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.519 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0406cfb3-4360-4052-8e1d-7019c4224092#033[00m
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.530 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1c4487cd-4abb-4ad7-b0e2-3a276f3dc5c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.531 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0406cfb3-41 in ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.533 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0406cfb3-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.533 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[904eff9a-0ce4-4de8-8091-bbb9e0a5b808]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.534 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0089a305-a759-44e4-a497-33c85008b14b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
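
To serve 169.254.169.254 inside the tenant network, the metadata agent provisions a per-network namespace (ovnmeta-<network-uuid>) plus a VETH pair: tap0406cfb3-40 stays in the root namespace while its peer tap0406cfb3-41 is moved into the namespace. The surrounding privsep replies are exactly those netlink operations running in the privileged helper. A pyroute2 sketch of the same plumbing, assuming the namespace is already registered under /var/run/netns:

    from pyroute2 import IPRoute

    ns = 'ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092'
    with IPRoute() as ipr:
        # create the veth pair in the root namespace
        ipr.link('add', ifname='tap0406cfb3-40', kind='veth',
                 peer={'ifname': 'tap0406cfb3-41'})
        # move the peer end into the metadata namespace
        idx = ipr.link_lookup(ifname='tap0406cfb3-41')[0]
        ipr.link('set', index=idx, net_ns_fd=ns)
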
Feb 25 07:28:42 np0005629333 systemd-udevd[299097]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:28:42 np0005629333 systemd-machined[210048]: New machine qemu-71-instance-0000003e.
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.553 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[cf09f10c-f2f0-457e-a702-4e1d8b083aa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:42 np0005629333 NetworkManager[49836]: <info>  [1772022522.5607] device (tap1b61d923-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:28:42 np0005629333 NetworkManager[49836]: <info>  [1772022522.5616] device (tap1b61d923-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:28:42 np0005629333 systemd[1]: Started Virtual Machine qemu-71-instance-0000003e.
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.572 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[716e4291-1518-4835-a88b-5c5df4413d39]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.606 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cb8ed7e1-533f-411e-9ece-a88d957edbde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:42 np0005629333 NetworkManager[49836]: <info>  [1772022522.6136] manager: (tap0406cfb3-40): new Veth device (/org/freedesktop/NetworkManager/Devices/246)
Feb 25 07:28:42 np0005629333 systemd-udevd[299100]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.611 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e365d2d9-e08f-4495-a2e0-7fb8106ca739]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.651 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[34263ff2-82d4-42ce-91fc-f21b400c6ddb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.656 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[244013a8-d397-483e-9146-7acd1e8aab0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:42 np0005629333 NetworkManager[49836]: <info>  [1772022522.6782] device (tap0406cfb3-40): carrier: link connected
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.684 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cd0eb9b1-2588-49b8-8365-6fb88b7e983b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.702 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[73ae7c56-8467-4d6f-b493-a813a158a133]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0406cfb3-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:38:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449208, 'reachable_time': 32070, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299128, 'error': None, 'target': 'ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.717 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0b520daf-a6b5-40f2-9149-e29866aa1f5b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fead:38ac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449208, 'tstamp': 449208}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299129, 'error': None, 'target': 'ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.736 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0548b219-2754-40a1-b6ff-d78b0dda16d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0406cfb3-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:38:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449208, 'reachable_time': 32070, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299130, 'error': None, 'target': 'ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
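
The two large privsep replies above are raw pyroute2 RTM_NEWLINK/RTM_NEWADDR messages for tap0406cfb3-41, dumped from inside the namespace (note 'target': 'ovnmeta-...' in the header and 'state': 'up'). The useful fields hide in the attrs list of [name, value] pairs; a tiny helper for reading the dict form shown in the log:

    def get_attr(msg, name):
        """Fetch one attribute value from a pyroute2 message rendered as a dict."""
        return dict((k, v) for k, v in msg['attrs'] if k != 'UNKNOWN').get(name)

    # Applied to the RTM_NEWLINK message above:
    #   get_attr(msg, 'IFLA_IFNAME')  -> 'tap0406cfb3-41'
    #   get_attr(msg, 'IFLA_ADDRESS') -> 'fa:16:3e:ad:38:ac'
    #   get_attr(msg, 'IFLA_MTU')     -> 1500
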
Feb 25 07:28:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:28:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/691462048' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.764 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eb88e89c-253a-4b2b-938d-cfd25bde7656]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.769 244018 DEBUG oslo_concurrency.processutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.771 244018 DEBUG nova.virt.libvirt.vif [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:28:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-745827155',display_name='tempest-tempest.common.compute-instance-745827155',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-745827155',id=63,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-ce0sx5g6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:28:37Z,user_data=None,user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=ee9cd98b-1ca6-48e7-aa44-a09caf048a1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.771 244018 DEBUG nova.network.os_vif_util [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.772 244018 DEBUG nova.network.os_vif_util [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3b:33,bridge_name='br-int',has_traffic_filtering=True,id=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9848fa6c-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
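
Before plugging, Nova translates its own VIF dict into an os-vif VIFOpenVSwitch object, the neutral format consumed by the os-vif OVS plugin (the plug call at os_vif/__init__.py:76 a few lines below). Roughly what that consumer-side API looks like, using only fields visible in the converted object above (a sketch; a real plug carries the full network/subnet detail):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the ovs/linux_bridge/... plugin entry points

    v = vif.VIFOpenVSwitch(
        id='9848fa6c-0a42-4cac-a3d8-2b90d5b7920c',
        address='fa:16:3e:2e:3b:33',
        bridge_name='br-int',
        vif_name='tap9848fa6c-0a',
        plugin='ovs',
        network=network.Network(id='ce318891-cf3c-4d99-af7c-c01770f38194'))
    inst = instance_info.InstanceInfo(
        uuid='ee9cd98b-1ca6-48e7-aa44-a09caf048a1c',
        name='instance-0000003f')
    os_vif.plug(v, inst)  # hands off to the 'ovs' plugin named in the VIF
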
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.773 244018 DEBUG nova.objects.instance [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'pci_devices' on Instance uuid ee9cd98b-1ca6-48e7-aa44-a09caf048a1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.789 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:28:42 np0005629333 nova_compute[244014]:  <uuid>ee9cd98b-1ca6-48e7-aa44-a09caf048a1c</uuid>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:  <name>instance-0000003f</name>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <nova:name>tempest-tempest.common.compute-instance-745827155</nova:name>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:28:41</nova:creationTime>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:28:42 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:        <nova:user uuid="1f8bbe7db4454108aca005daa72d5c22">tempest-ServerActionsTestJSON-436476112-project-member</nova:user>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:        <nova:project uuid="56700581ea88438ba482d90bc702ced3">tempest-ServerActionsTestJSON-436476112</nova:project>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:        <nova:port uuid="9848fa6c-0a42-4cac-a3d8-2b90d5b7920c">
Feb 25 07:28:42 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <entry name="serial">ee9cd98b-1ca6-48e7-aa44-a09caf048a1c</entry>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <entry name="uuid">ee9cd98b-1ca6-48e7-aa44-a09caf048a1c</entry>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk">
Feb 25 07:28:42 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:28:42 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk.config">
Feb 25 07:28:42 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:28:42 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:2e:3b:33"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <target dev="tap9848fa6c-0a"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/console.log" append="off"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:28:42 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:28:42 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:28:42 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:28:42 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
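
That XML document is the end of _get_guest_xml; Nova next hands it to libvirt to define and boot the domain (the neighbouring instance-0000003e went through the same step seconds earlier, hence systemd's "Started Virtual Machine" above). The libvirt side, reduced to the python bindings (a sketch, assuming the logged XML has been saved to a file):

    import libvirt

    xml = open('domain.xml').read()  # the <domain> document logged above

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)  # persist the definition (like `virsh define`)
        dom.create()               # boot it (like `virsh start`)
    finally:
        conn.close()
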
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.790 244018 DEBUG nova.compute.manager [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Preparing to wait for external event network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.790 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.791 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.791 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
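
Note the ordering here: the expected network-vif-plugged event is registered (under the per-instance "-events" lock) before the VIF is actually plugged, so the callback driven by Neutron can never arrive ahead of its waiter. The shape of that prepare-then-wait pattern, reduced to a threading sketch (names hypothetical, not Nova's implementation):

    import threading

    _events = {}
    _lock = threading.Lock()

    def prepare_for_event(key):
        with _lock:  # the "...-events" lock in the log
            return _events.setdefault(key, threading.Event())

    def deliver_event(key):
        with _lock:
            ev = _events.get(key)
        if ev:
            ev.set()  # external event arrived, wake the waiter

    # register first, then trigger the action, then wait:
    ev = prepare_for_event('network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c')
    # ... plug the VIF ...
    ev.wait(timeout=300)
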
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.792 244018 DEBUG nova.virt.libvirt.vif [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:28:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-745827155',display_name='tempest-tempest.common.compute-instance-745827155',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-745827155',id=63,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-ce0sx5g6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:28:37Z,user_data=None,user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=ee9cd98b-1ca6-48e7-aa44-a09caf048a1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.792 244018 DEBUG nova.network.os_vif_util [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.793 244018 DEBUG nova.network.os_vif_util [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3b:33,bridge_name='br-int',has_traffic_filtering=True,id=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9848fa6c-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.793 244018 DEBUG os_vif [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3b:33,bridge_name='br-int',has_traffic_filtering=True,id=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9848fa6c-0a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
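
[annotation] Before libvirt sees the interface, nova converts the Neutron VIF dict into a versioned os-vif object (the two conversion lines above) and hands it to the matching plugin. A minimal sketch of driving the same public os-vif API outside nova, with all values copied from the log record; treat it as illustrative, not nova's exact call sequence:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin via stevedore

    net = network.Network(id='ce318891-cf3c-4d99-af7c-c01770f38194',
                          bridge='br-int')
    port = vif.VIFOpenVSwitch(
        id='9848fa6c-0a42-4cac-a3d8-2b90d5b7920c',
        address='fa:16:3e:2e:3b:33',
        vif_name='tap9848fa6c-0a',
        bridge_name='br-int',
        network=net,
        plugin='ovs',
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='9848fa6c-0a42-4cac-a3d8-2b90d5b7920c'),
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False)
    inst = instance_info.InstanceInfo(
        uuid='ee9cd98b-1ca6-48e7-aa44-a09caf048a1c',
        name='tempest-tempest.common.compute-instance-745827155')

    os_vif.plug(port, inst)  # drives the ovsdbapp transactions logged below
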
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.794 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.794 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.795 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.797 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.797 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9848fa6c-0a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.797 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9848fa6c-0a, col_values=(('external_ids', {'iface-id': '9848fa6c-0a42-4cac-a3d8-2b90d5b7920c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:3b:33', 'vm-uuid': 'ee9cd98b-1ca6-48e7-aa44-a09caf048a1c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.799 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
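
[annotation] The ovsdbapp transactions above are the whole OVS-side plug: an idempotent AddBridgeCommand (may_exist=True, hence "Transaction caused no change" on the pre-existing br-int), then AddPortCommand plus a DbSetCommand stamping the Interface's external_ids so ovn-controller can match the tap device to its logical port. A sketch of issuing the same second transaction through ovsdbapp's public API; the ovsdb-server socket path is an assumption for a typical host:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed local ovsdb-server socket; adjust for your deployment.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    ext_ids = {'iface-id': '9848fa6c-0a42-4cac-a3d8-2b90d5b7920c',
               'iface-status': 'active',
               'attached-mac': 'fa:16:3e:2e:3b:33',
               'vm-uuid': 'ee9cd98b-1ca6-48e7-aa44-a09caf048a1c'}

    # One transaction, mirroring command idx=0 and idx=1 in the log.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap9848fa6c-0a', may_exist=True))
        txn.add(api.db_set('Interface', 'tap9848fa6c-0a',
                           ('external_ids', ext_ids)))
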
Feb 25 07:28:42 np0005629333 NetworkManager[49836]: <info>  [1772022522.8000] manager: (tap9848fa6c-0a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/247)
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.801 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.804 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.804 244018 INFO os_vif [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3b:33,bridge_name='br-int',has_traffic_filtering=True,id=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9848fa6c-0a')#033[00m
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.814 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bee743e5-8a12-43b2-8721-b9328abdae1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.815 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0406cfb3-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.815 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.816 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0406cfb3-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.817 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:42 np0005629333 kernel: tap0406cfb3-40: entered promiscuous mode
Feb 25 07:28:42 np0005629333 NetworkManager[49836]: <info>  [1772022522.8203] manager: (tap0406cfb3-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/248)
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.822 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.823 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0406cfb3-40, col_values=(('external_ids', {'iface-id': 'd0900c78-8fda-4698-ae1c-119352f6dad6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:42 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:42Z|00556|binding|INFO|Releasing lport d0900c78-8fda-4698-ae1c-119352f6dad6 from this chassis (sb_readonly=0)
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.824 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.832 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.835 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.835 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0406cfb3-4360-4052-8e1d-7019c4224092.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0406cfb3-4360-4052-8e1d-7019c4224092.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.836 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[058b6fae-c366-4465-b746-4e33e5bda1ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.837 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-0406cfb3-4360-4052-8e1d-7019c4224092
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/0406cfb3-4360-4052-8e1d-7019c4224092.pid.haproxy
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 0406cfb3-4360-4052-8e1d-7019c4224092
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:28:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:42.838 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092', 'env', 'PROCESS_TAG=haproxy-0406cfb3-4360-4052-8e1d-7019c4224092', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0406cfb3-4360-4052-8e1d-7019c4224092.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
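
[annotation] The rendered configuration above has haproxy bind 169.254.169.254:80 inside the ovnmeta- namespace, tag each request with an X-OVN-Network-ID header, and relay to the metadata backend socket at /var/lib/neutron/metadata_proxy; the rootwrap command then launches it inside that namespace. An illustrative way to poke the proxy from the host (not part of the agent); a request that does not originate from a known port IP may well be rejected by the nova side:

    import subprocess

    # Namespace name and bind address come from the config dump above;
    # requires 'ip netns exec' and 'curl' on the host.
    ns = 'ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092'
    out = subprocess.run(
        ['ip', 'netns', 'exec', ns,
         'curl', '-s', 'http://169.254.169.254/openstack'],
        capture_output=True, text=True, check=False)
    print(out.stdout or out.stderr)
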
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.873 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.873 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.874 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] No VIF found with MAC fa:16:3e:2e:3b:33, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.874 244018 INFO nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Using config drive#033[00m
Feb 25 07:28:42 np0005629333 nova_compute[244014]: 2026-02-25 12:28:42.899 244018 DEBUG nova.storage.rbd_utils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:28:43 np0005629333 podman[299227]: 2026-02-25 12:28:43.191831004 +0000 UTC m=+0.055200356 container create 92077ce90f2b639bc84fd4013caf4923123cd64a38a3252e0076d8721934b070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 07:28:43 np0005629333 nova_compute[244014]: 2026-02-25 12:28:43.210 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022523.2096448, 9630899b-57d8-4e46-b9e0-8762f0f4f2cb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:28:43 np0005629333 nova_compute[244014]: 2026-02-25 12:28:43.211 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] VM Started (Lifecycle Event)#033[00m
Feb 25 07:28:43 np0005629333 systemd[1]: Started libpod-conmon-92077ce90f2b639bc84fd4013caf4923123cd64a38a3252e0076d8721934b070.scope.
Feb 25 07:28:43 np0005629333 nova_compute[244014]: 2026-02-25 12:28:43.234 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:28:43 np0005629333 nova_compute[244014]: 2026-02-25 12:28:43.241 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022523.2103922, 9630899b-57d8-4e46-b9e0-8762f0f4f2cb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:28:43 np0005629333 nova_compute[244014]: 2026-02-25 12:28:43.242 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:28:43 np0005629333 podman[299227]: 2026-02-25 12:28:43.163669776 +0000 UTC m=+0.027039158 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:28:43 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:28:43 np0005629333 nova_compute[244014]: 2026-02-25 12:28:43.265 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:28:43 np0005629333 nova_compute[244014]: 2026-02-25 12:28:43.269 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
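
[annotation] The numeric states in that line (and in the "Resumed" sync further down, DB 0 vs VM 1) are nova.compute.power_state constants: libvirt reports the freshly created guest as paused until nova resumes it. A small decoder for reading such lines, using the values nova defines:

    # power_state constants as defined by nova.compute.power_state.
    STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
             4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    # "current DB power_state: 0, VM power_state: 3":
    print(STATE[0], '->', STATE[3])  # NOSTATE -> PAUSED
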
Feb 25 07:28:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e401b80a9939de6acae8801df5427e297dd65f2008b59d33321f3f6cd457d4f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:28:43 np0005629333 podman[299227]: 2026-02-25 12:28:43.285358727 +0000 UTC m=+0.148728149 container init 92077ce90f2b639bc84fd4013caf4923123cd64a38a3252e0076d8721934b070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 07:28:43 np0005629333 podman[299227]: 2026-02-25 12:28:43.293892259 +0000 UTC m=+0.157261631 container start 92077ce90f2b639bc84fd4013caf4923123cd64a38a3252e0076d8721934b070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:28:43 np0005629333 nova_compute[244014]: 2026-02-25 12:28:43.297 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:28:43 np0005629333 neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092[299243]: [NOTICE]   (299247) : New worker (299249) forked
Feb 25 07:28:43 np0005629333 neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092[299243]: [NOTICE]   (299247) : Loading success.
Feb 25 07:28:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 401 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 6.3 MiB/s wr, 109 op/s
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.118 244018 DEBUG nova.network.neutron [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Updating instance_info_cache with network_info: [{"id": "05178abb-a113-4013-9194-9243afe9d0ff", "address": "fa:16:3e:7e:1d:a7", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05178abb-a1", "ovs_interfaceid": "05178abb-a113-4013-9194-9243afe9d0ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.131 244018 DEBUG nova.network.neutron [req-1c0e5a47-8dee-4efe-a0c9-796288c5fd87 req-f147db26-b7fb-419d-8c5b-1303f4c4a413 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Updated VIF entry in instance network info cache for port 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.132 244018 DEBUG nova.network.neutron [req-1c0e5a47-8dee-4efe-a0c9-796288c5fd87 req-f147db26-b7fb-419d-8c5b-1303f4c4a413 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Updating instance_info_cache with network_info: [{"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.279 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Releasing lock "refresh_cache-8086400b-ac70-4c79-928b-4f1966084384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.279 244018 DEBUG nova.compute.manager [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Instance network_info: |[{"id": "05178abb-a113-4013-9194-9243afe9d0ff", "address": "fa:16:3e:7e:1d:a7", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05178abb-a1", "ovs_interfaceid": "05178abb-a113-4013-9194-9243afe9d0ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.281 244018 DEBUG oslo_concurrency.lockutils [req-cbb1b097-e9d0-4eea-831a-7b77e866f437 req-ec51d979-3ffd-496c-af27-5badf73731a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8086400b-ac70-4c79-928b-4f1966084384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.282 244018 DEBUG nova.network.neutron [req-cbb1b097-e9d0-4eea-831a-7b77e866f437 req-ec51d979-3ffd-496c-af27-5badf73731a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Refreshing network info cache for port 05178abb-a113-4013-9194-9243afe9d0ff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.288 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Start _get_guest_xml network_info=[{"id": "05178abb-a113-4013-9194-9243afe9d0ff", "address": "fa:16:3e:7e:1d:a7", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05178abb-a1", "ovs_interfaceid": "05178abb-a113-4013-9194-9243afe9d0ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.291 244018 DEBUG oslo_concurrency.lockutils [req-1c0e5a47-8dee-4efe-a0c9-796288c5fd87 req-f147db26-b7fb-419d-8c5b-1303f4c4a413 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-ee9cd98b-1ca6-48e7-aa44-a09caf048a1c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.299 244018 WARNING nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.311 244018 DEBUG nova.virt.libvirt.host [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.312 244018 DEBUG nova.virt.libvirt.host [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.317 244018 DEBUG nova.virt.libvirt.host [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.318 244018 DEBUG nova.virt.libvirt.host [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.319 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.319 244018 DEBUG nova.virt.hardware [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.320 244018 DEBUG nova.virt.hardware [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.321 244018 DEBUG nova.virt.hardware [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.321 244018 DEBUG nova.virt.hardware [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.322 244018 DEBUG nova.virt.hardware [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.322 244018 DEBUG nova.virt.hardware [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.323 244018 DEBUG nova.virt.hardware [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.323 244018 DEBUG nova.virt.hardware [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.324 244018 DEBUG nova.virt.hardware [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.324 244018 DEBUG nova.virt.hardware [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.325 244018 DEBUG nova.virt.hardware [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
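
[annotation] With no flavor or image topology constraints (all the 0:0:0 limits and preferences above), nova enumerates every (sockets, cores, threads) split of the vCPU count under the 65536-per-dimension ceiling; for one vCPU the only candidate is 1:1:1. A rough sketch of that enumeration, simplified from what the hardware module actually does:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield (sockets, cores, threads) triples multiplying to vcpus."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log
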
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.330 244018 DEBUG oslo_concurrency.processutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.725 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:28:44 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1548982358' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.936 244018 DEBUG oslo_concurrency.processutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
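
[annotation] nova shells out to "ceph mon dump" here to learn the monitor addresses that end up in the guest's RBD disk definition. An illustrative reproduction of the call and the JSON it parses; the field layout follows ceph's JSON output and may shift between releases:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    mons = json.loads(out).get('mons', [])
    print([m.get('name') for m in mons], [m.get('addr') for m in mons])
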
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.960 244018 DEBUG nova.storage.rbd_utils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] rbd image 8086400b-ac70-4c79-928b-4f1966084384_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:28:44 np0005629333 nova_compute[244014]: 2026-02-25 12:28:44.965 244018 DEBUG oslo_concurrency.processutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.003 244018 INFO nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Creating config drive at /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/disk.config#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.009 244018 DEBUG oslo_concurrency.processutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpe6lfj44a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.078 244018 DEBUG nova.compute.manager [req-f383cae7-a03b-4ae2-9441-51ae2863f6c4 req-495a4152-630f-46ca-98a7-79188e9d6816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Received event network-vif-plugged-1b61d923-a06e-4948-b459-b6cf5b8c668d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.079 244018 DEBUG oslo_concurrency.lockutils [req-f383cae7-a03b-4ae2-9441-51ae2863f6c4 req-495a4152-630f-46ca-98a7-79188e9d6816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.080 244018 DEBUG oslo_concurrency.lockutils [req-f383cae7-a03b-4ae2-9441-51ae2863f6c4 req-495a4152-630f-46ca-98a7-79188e9d6816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.080 244018 DEBUG oslo_concurrency.lockutils [req-f383cae7-a03b-4ae2-9441-51ae2863f6c4 req-495a4152-630f-46ca-98a7-79188e9d6816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.081 244018 DEBUG nova.compute.manager [req-f383cae7-a03b-4ae2-9441-51ae2863f6c4 req-495a4152-630f-46ca-98a7-79188e9d6816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Processing event network-vif-plugged-1b61d923-a06e-4948-b459-b6cf5b8c668d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.081 244018 DEBUG nova.compute.manager [req-f383cae7-a03b-4ae2-9441-51ae2863f6c4 req-495a4152-630f-46ca-98a7-79188e9d6816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Received event network-vif-plugged-1b61d923-a06e-4948-b459-b6cf5b8c668d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.082 244018 DEBUG oslo_concurrency.lockutils [req-f383cae7-a03b-4ae2-9441-51ae2863f6c4 req-495a4152-630f-46ca-98a7-79188e9d6816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.082 244018 DEBUG oslo_concurrency.lockutils [req-f383cae7-a03b-4ae2-9441-51ae2863f6c4 req-495a4152-630f-46ca-98a7-79188e9d6816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.082 244018 DEBUG oslo_concurrency.lockutils [req-f383cae7-a03b-4ae2-9441-51ae2863f6c4 req-495a4152-630f-46ca-98a7-79188e9d6816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.083 244018 DEBUG nova.compute.manager [req-f383cae7-a03b-4ae2-9441-51ae2863f6c4 req-495a4152-630f-46ca-98a7-79188e9d6816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] No waiting events found dispatching network-vif-plugged-1b61d923-a06e-4948-b459-b6cf5b8c668d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.083 244018 WARNING nova.compute.manager [req-f383cae7-a03b-4ae2-9441-51ae2863f6c4 req-495a4152-630f-46ca-98a7-79188e9d6816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Received unexpected event network-vif-plugged-1b61d923-a06e-4948-b459-b6cf5b8c668d for instance with vm_state building and task_state spawning.#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.084 244018 DEBUG nova.compute.manager [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.089 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022525.089278, 9630899b-57d8-4e46-b9e0-8762f0f4f2cb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.090 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.093 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.097 244018 INFO nova.virt.libvirt.driver [-] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Instance spawned successfully.#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.097 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.116 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.126 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.131 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.131 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.132 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.133 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.133 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.134 244018 DEBUG nova.virt.libvirt.driver [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.149 244018 DEBUG oslo_concurrency.processutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpe6lfj44a" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
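
[annotation] The config drive is a plain ISO9660 image: nova stages the metadata tree in a tmpdir (the /tmp/tmpe6lfj44a above) and runs mkisofs over it with the flags shown, labelling the volume config-2, the label cloud-init searches for. A sketch of the same invocation with the paths from the log and an abbreviated publisher string:

    import subprocess

    src = '/tmp/tmpe6lfj44a'  # staged metadata tree (ephemeral, from the log)
    iso = ('/var/lib/nova/instances/'
           'ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/disk.config')
    subprocess.check_call(
        ['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
         '-allow-multidot', '-l', '-publisher', 'OpenStack Compute',
         '-quiet', '-J', '-r',
         '-V', 'config-2',  # volume label cloud-init looks for
         src])
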
Feb 25 07:28:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.183 244018 DEBUG nova.storage.rbd_utils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.187 244018 DEBUG oslo_concurrency.processutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/disk.config ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.220 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.223 244018 INFO nova.compute.manager [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Took 9.07 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.223 244018 DEBUG nova.compute.manager [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.295 244018 INFO nova.compute.manager [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Took 10.17 seconds to build instance.#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.313 244018 DEBUG oslo_concurrency.lockutils [None req-801095e0-6054-4c4c-857c-070c7b03e488 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.354 244018 DEBUG oslo_concurrency.processutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/disk.config ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.354 244018 INFO nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Deleting local config drive /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/disk.config because it was imported into RBD.#033[00m
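
[annotation] Because this host uses the RBD image backend, the finished ISO does not stay on local disk: it is imported into the vms pool as <uuid>_disk.config and the local copy deleted, as the two lines above record. A sketch mirroring the logged command rather than nova's internal helpers:

    import os
    import subprocess

    uuid = 'ee9cd98b-1ca6-48e7-aa44-a09caf048a1c'
    local = '/var/lib/nova/instances/%s/disk.config' % uuid
    subprocess.check_call(
        ['rbd', 'import', '--pool', 'vms', local,
         '%s_disk.config' % uuid, '--image-format=2',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    os.remove(local)  # the driver deletes the local file once imported
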
Feb 25 07:28:45 np0005629333 NetworkManager[49836]: <info>  [1772022525.4075] manager: (tap9848fa6c-0a): new Tun device (/org/freedesktop/NetworkManager/Devices/249)
Feb 25 07:28:45 np0005629333 systemd-udevd[299114]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:28:45 np0005629333 kernel: tap9848fa6c-0a: entered promiscuous mode
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.413 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:45 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:45Z|00557|binding|INFO|Claiming lport 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c for this chassis.
Feb 25 07:28:45 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:45Z|00558|binding|INFO|9848fa6c-0a42-4cac-a3d8-2b90d5b7920c: Claiming fa:16:3e:2e:3b:33 10.100.0.14
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.425 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:45.431 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:3b:33 10.100.0.14'], port_security=['fa:16:3e:2e:3b:33 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ee9cd98b-1ca6-48e7-aa44-a09caf048a1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf6e8407-f371-4f64-a4aa-eb412980736d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:28:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:45.434 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c in datapath ce318891-cf3c-4d99-af7c-c01770f38194 bound to our chassis#033[00m
Feb 25 07:28:45 np0005629333 NetworkManager[49836]: <info>  [1772022525.4379] device (tap9848fa6c-0a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:28:45 np0005629333 NetworkManager[49836]: <info>  [1772022525.4387] device (tap9848fa6c-0a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:28:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:45.441 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce318891-cf3c-4d99-af7c-c01770f38194#033[00m
Feb 25 07:28:45 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:45Z|00559|binding|INFO|Setting lport 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c ovn-installed in OVS
Feb 25 07:28:45 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:45Z|00560|binding|INFO|Setting lport 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c up in Southbound
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.445 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:45.459 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[13a63070-70a2-4a97-bd57-2be4c14a8ccc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:45 np0005629333 systemd-machined[210048]: New machine qemu-72-instance-0000003f.
Feb 25 07:28:45 np0005629333 systemd[1]: Started Virtual Machine qemu-72-instance-0000003f.
Feb 25 07:28:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:45.489 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c150bcb7-6b15-46c5-9b3c-3df29303fb80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:45.496 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f656a9a1-5866-4c06-84af-4ad1f5713c0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:45.525 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2a86d515-84ec-484c-92f0-92bf2be27876]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:45.547 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f4aecf67-e9f7-4d42-b7d0-d35705f9ab83]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444530, 'reachable_time': 37500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299383, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:28:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3881799203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:28:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:45.564 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f0387b0d-ca95-4b4e-96ca-141984565e69]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce318891-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444541, 'tstamp': 444541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299384, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce318891-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444544, 'tstamp': 444544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299384, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:45.568 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.570 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.572 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:45.574 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce318891-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:45.575 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:28:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:45.575 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce318891-c0, col_values=(('external_ids', {'iface-id': '3b184c15-8ef4-4e11-bd18-e1253a4ff440'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:45.576 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.594 244018 DEBUG oslo_concurrency.processutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
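The ceph mon dump --format=json call that just returned is how the libvirt RBD backend learns which monitor endpoints to place in the guest disk XML (the <host name=... port=.../> elements in the domain dump below). A hedged sketch of parsing that output; it assumes the standard mon dump JSON layout, a "mons" list whose "public_addr" values look like "192.168.122.100:6789/0":

    # Sketch: derive monitor (host, port) pairs from "ceph mon dump".
    import json
    import subprocess

    def monitor_endpoints(conf='/etc/ceph/ceph.conf', user='openstack'):
        raw = subprocess.check_output(
            ['ceph', 'mon', 'dump', '--format=json', '--id', user,
             '--conf', conf])
        endpoints = []
        for mon in json.loads(raw).get('mons', []):
            addr = mon['public_addr'].split('/')[0]  # drop the "/0" nonce
            host, _, port = addr.rpartition(':')
            endpoints.append((host, port))
        return endpoints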
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.597 244018 DEBUG nova.virt.libvirt.vif [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:28:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-105273812',display_name='tempest-SecurityGroupsTestJSON-server-105273812',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-105273812',id=64,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='780a93a8758a4bd78b22fe68ed6276cf',ramdisk_id='',reservation_id='r-6s6re1xg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-195828884',owner_user_name='tempest-SecurityGroupsTestJSON-195828884-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:28:39Z,user_data=None,user_id='bcb4ded096bc4f7993f96ca892b82333',uuid=8086400b-ac70-4c79-928b-4f1966084384,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05178abb-a113-4013-9194-9243afe9d0ff", "address": "fa:16:3e:7e:1d:a7", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05178abb-a1", "ovs_interfaceid": "05178abb-a113-4013-9194-9243afe9d0ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.598 244018 DEBUG nova.network.os_vif_util [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converting VIF {"id": "05178abb-a113-4013-9194-9243afe9d0ff", "address": "fa:16:3e:7e:1d:a7", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05178abb-a1", "ovs_interfaceid": "05178abb-a113-4013-9194-9243afe9d0ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.599 244018 DEBUG nova.network.os_vif_util [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:1d:a7,bridge_name='br-int',has_traffic_filtering=True,id=05178abb-a113-4013-9194-9243afe9d0ff,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05178abb-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.602 244018 DEBUG nova.objects.instance [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lazy-loading 'pci_devices' on Instance uuid 8086400b-ac70-4c79-928b-4f1966084384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.623 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:28:45 np0005629333 nova_compute[244014]:  <uuid>8086400b-ac70-4c79-928b-4f1966084384</uuid>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:  <name>instance-00000040</name>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <nova:name>tempest-SecurityGroupsTestJSON-server-105273812</nova:name>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:28:44</nova:creationTime>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:28:45 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:        <nova:user uuid="bcb4ded096bc4f7993f96ca892b82333">tempest-SecurityGroupsTestJSON-195828884-project-member</nova:user>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:        <nova:project uuid="780a93a8758a4bd78b22fe68ed6276cf">tempest-SecurityGroupsTestJSON-195828884</nova:project>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:        <nova:port uuid="05178abb-a113-4013-9194-9243afe9d0ff">
Feb 25 07:28:45 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <entry name="serial">8086400b-ac70-4c79-928b-4f1966084384</entry>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <entry name="uuid">8086400b-ac70-4c79-928b-4f1966084384</entry>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/8086400b-ac70-4c79-928b-4f1966084384_disk">
Feb 25 07:28:45 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:28:45 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/8086400b-ac70-4c79-928b-4f1966084384_disk.config">
Feb 25 07:28:45 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:28:45 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:7e:1d:a7"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <target dev="tap05178abb-a1"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/8086400b-ac70-4c79-928b-4f1966084384/console.log" append="off"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:28:45 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:28:45 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:28:45 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:28:45 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
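The block above is the complete domain XML nova hands to libvirt for instance-00000040. As a quick illustration, a standard-library sketch that pulls the RBD sources, Ceph auth user and target devices back out of such a document; element and attribute names are taken from the XML above:

    # Sketch: recover the RBD disk layout from a domain XML like the dump
    # above, using only the standard library.
    import xml.etree.ElementTree as ET

    def rbd_disks(domain_xml):
        root = ET.fromstring(domain_xml)
        disks = []
        for disk in root.findall("./devices/disk[@type='network']"):
            source = disk.find('source')
            if source is None or source.get('protocol') != 'rbd':
                continue
            host = source.find('host')
            auth = disk.find('auth')
            disks.append({
                'image': source.get('name'),          # e.g. "vms/<uuid>_disk"
                'monitor': (host.get('name'), host.get('port')),
                'auth_user': auth.get('username') if auth is not None else None,
                'target': disk.find('target').get('dev'),   # "vda" / "sda"
            })
        return disks

For the XML above this yields two entries, the vda root disk and the sda config-drive cdrom, both served from the vms pool through monitor 192.168.122.100:6789.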
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.625 244018 DEBUG nova.compute.manager [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Preparing to wait for external event network-vif-plugged-05178abb-a113-4013-9194-9243afe9d0ff prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.625 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "8086400b-ac70-4c79-928b-4f1966084384-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.626 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.627 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
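The prepare_for_instance_event and lock lines show nova registering a waiter for network-vif-plugged before the guest is started, so the notification cannot be lost to a race. A simplified sketch of that register-then-wait pattern with plain threading primitives; the real implementation lives in nova.compute.manager.InstanceEvents, and the class and method names below are illustrative, not nova's API:

    # Sketch: register the event *before* launching the domain, so a
    # callback that arrives early still finds a waiter to wake up.
    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}          # (instance_uuid, event_name) -> Event

        def prepare(self, instance_uuid, name):
            with self._lock:           # same role as the lockutils lock above
                return self._events.setdefault((instance_uuid, name),
                                               threading.Event())

        def deliver(self, instance_uuid, name):
            with self._lock:
                ev = self._events.pop((instance_uuid, name), None)
            if ev is not None:
                ev.set()

    events = InstanceEvents()
    waiter = events.prepare('8086400b-ac70-4c79-928b-4f1966084384',
                            'network-vif-plugged-05178abb-a113-4013-9194-9243afe9d0ff')
    # ... create and launch the libvirt domain here ...
    # waiter.wait(timeout=300)  # roughly what vif_plugging_timeout bounds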
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.628 244018 DEBUG nova.virt.libvirt.vif [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:28:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-105273812',display_name='tempest-SecurityGroupsTestJSON-server-105273812',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-105273812',id=64,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='780a93a8758a4bd78b22fe68ed6276cf',ramdisk_id='',reservation_id='r-6s6re1xg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-195828884',owner_user_name='tempest-SecurityGroupsTestJSON-195828884-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:28:39Z,user_data=None,user_id='bcb4ded096bc4f7993f96ca892b82333',uuid=8086400b-ac70-4c79-928b-4f1966084384,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05178abb-a113-4013-9194-9243afe9d0ff", "address": "fa:16:3e:7e:1d:a7", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05178abb-a1", "ovs_interfaceid": "05178abb-a113-4013-9194-9243afe9d0ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.629 244018 DEBUG nova.network.os_vif_util [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converting VIF {"id": "05178abb-a113-4013-9194-9243afe9d0ff", "address": "fa:16:3e:7e:1d:a7", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05178abb-a1", "ovs_interfaceid": "05178abb-a113-4013-9194-9243afe9d0ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.630 244018 DEBUG nova.network.os_vif_util [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:1d:a7,bridge_name='br-int',has_traffic_filtering=True,id=05178abb-a113-4013-9194-9243afe9d0ff,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05178abb-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.631 244018 DEBUG os_vif [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:1d:a7,bridge_name='br-int',has_traffic_filtering=True,id=05178abb-a113-4013-9194-9243afe9d0ff,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05178abb-a1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.632 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.632 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.633 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.637 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.638 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05178abb-a1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.639 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05178abb-a1, col_values=(('external_ids', {'iface-id': '05178abb-a113-4013-9194-9243afe9d0ff', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7e:1d:a7', 'vm-uuid': '8086400b-ac70-4c79-928b-4f1966084384'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:45 np0005629333 NetworkManager[49836]: <info>  [1772022525.6427] manager: (tap05178abb-a1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/250)
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.641 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.646 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.650 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.651 244018 INFO os_vif [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:1d:a7,bridge_name='br-int',has_traffic_filtering=True,id=05178abb-a113-4013-9194-9243afe9d0ff,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05178abb-a1')#033[00m
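The plug that just succeeded was two OVSDB commands: AddPortCommand on br-int followed by DbSetCommand writing iface-id, iface-status, attached-mac and vm-uuid into the Interface's external_ids. For reference, a sketch of the equivalent ovs-vsctl invocation; device names and IDs are copied from the log, and os-vif itself talks OVSDB via ovsdbapp rather than shelling out:

    # Sketch: CLI equivalent of the AddPortCommand + DbSetCommand
    # transaction os-vif just committed, as a single ovs-vsctl call.
    import subprocess

    def plug_vif(bridge, dev, iface_id, mac, vm_uuid):
        subprocess.check_call([
            'ovs-vsctl', '--may-exist', 'add-port', bridge, dev,
            '--', 'set', 'Interface', dev,
            'external_ids:iface-id=%s' % iface_id,
            'external_ids:iface-status=active',
            'external_ids:attached-mac=%s' % mac,
            'external_ids:vm-uuid=%s' % vm_uuid])

    plug_vif('br-int', 'tap05178abb-a1',
             '05178abb-a113-4013-9194-9243afe9d0ff',
             'fa:16:3e:7e:1d:a7', '8086400b-ac70-4c79-928b-4f1966084384')

Once the iface-id lands in external_ids, ovn-controller notices it, claims the lport for this chassis and flips it up in the Southbound DB, which is exactly the binding sequence that follows below.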
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.709 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.710 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.710 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] No VIF found with MAC fa:16:3e:7e:1d:a7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.710 244018 INFO nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Using config drive#033[00m
Feb 25 07:28:45 np0005629333 nova_compute[244014]: 2026-02-25 12:28:45.731 244018 DEBUG nova.storage.rbd_utils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] rbd image 8086400b-ac70-4c79-928b-4f1966084384_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:28:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 401 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 6.3 MiB/s wr, 109 op/s
Feb 25 07:28:46 np0005629333 nova_compute[244014]: 2026-02-25 12:28:46.279 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022526.2788637, ee9cd98b-1ca6-48e7-aa44-a09caf048a1c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:28:46 np0005629333 nova_compute[244014]: 2026-02-25 12:28:46.279 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] VM Started (Lifecycle Event)#033[00m
Feb 25 07:28:46 np0005629333 nova_compute[244014]: 2026-02-25 12:28:46.308 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:28:46 np0005629333 nova_compute[244014]: 2026-02-25 12:28:46.313 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022526.2797914, ee9cd98b-1ca6-48e7-aa44-a09caf048a1c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:28:46 np0005629333 nova_compute[244014]: 2026-02-25 12:28:46.313 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:28:46 np0005629333 nova_compute[244014]: 2026-02-25 12:28:46.338 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:28:46 np0005629333 nova_compute[244014]: 2026-02-25 12:28:46.347 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
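The synchronization line compares the DB's power_state 0 against the hypervisor's 3. Those integers come from nova's power_state module; the values below are reproduced from the nova source as a reference sketch and should be checked against the deployed release:

    # Reference sketch: the power-state integers being compared in the
    # "Synchronizing instance power state" line (DB: 0, VM: 3).
    NOSTATE = 0x00     # never booted / state unknown (the DB side here)
    RUNNING = 0x01
    PAUSED = 0x03      # libvirt reports this while the guest is created
                       # paused, before nova resumes it
    SHUTDOWN = 0x04
    CRASHED = 0x06
    SUSPENDED = 0x07

Read this way, a Paused event against a building/spawning instance is the normal mid-boot window, which is why the manager skips the sync a few lines later instead of forcing a state change.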
Feb 25 07:28:46 np0005629333 nova_compute[244014]: 2026-02-25 12:28:46.356 244018 DEBUG nova.network.neutron [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Successfully updated port: ba138ed1-c811-4043-9bd6-e1a5c6127f84 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:28:46 np0005629333 nova_compute[244014]: 2026-02-25 12:28:46.370 244018 INFO nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Creating config drive at /var/lib/nova/instances/8086400b-ac70-4c79-928b-4f1966084384/disk.config#033[00m
Feb 25 07:28:46 np0005629333 nova_compute[244014]: 2026-02-25 12:28:46.376 244018 DEBUG oslo_concurrency.processutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8086400b-ac70-4c79-928b-4f1966084384/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5kkvbou0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:28:46 np0005629333 nova_compute[244014]: 2026-02-25 12:28:46.406 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:28:46 np0005629333 nova_compute[244014]: 2026-02-25 12:28:46.408 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Acquiring lock "refresh_cache-826789b1-e26a-4569-bd77-bd1ef76388be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:28:46 np0005629333 nova_compute[244014]: 2026-02-25 12:28:46.408 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Acquired lock "refresh_cache-826789b1-e26a-4569-bd77-bd1ef76388be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:28:46 np0005629333 nova_compute[244014]: 2026-02-25 12:28:46.408 244018 DEBUG nova.network.neutron [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:28:46 np0005629333 nova_compute[244014]: 2026-02-25 12:28:46.509 244018 DEBUG oslo_concurrency.processutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8086400b-ac70-4c79-928b-4f1966084384/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5kkvbou0" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:28:46 np0005629333 nova_compute[244014]: 2026-02-25 12:28:46.547 244018 DEBUG nova.storage.rbd_utils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] rbd image 8086400b-ac70-4c79-928b-4f1966084384_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:28:46 np0005629333 nova_compute[244014]: 2026-02-25 12:28:46.550 244018 DEBUG oslo_concurrency.processutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8086400b-ac70-4c79-928b-4f1966084384/disk.config 8086400b-ac70-4c79-928b-4f1966084384_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:28:46 np0005629333 nova_compute[244014]: 2026-02-25 12:28:46.575 244018 DEBUG nova.network.neutron [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:28:46 np0005629333 nova_compute[244014]: 2026-02-25 12:28:46.668 244018 DEBUG oslo_concurrency.processutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8086400b-ac70-4c79-928b-4f1966084384/disk.config 8086400b-ac70-4c79-928b-4f1966084384_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:28:46 np0005629333 nova_compute[244014]: 2026-02-25 12:28:46.669 244018 INFO nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Deleting local config drive /var/lib/nova/instances/8086400b-ac70-4c79-928b-4f1966084384/disk.config because it was imported into RBD.#033[00m
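Same config-drive flow as before, now for instance 8086400b...: mkisofs builds a config-2 ISO from a temp directory, rbd import pushes it into the vms pool, and the local file is deleted. A sketch of the ISO-build step, reusing the flags from the logged command; the openstack/latest/meta_data.json layout is the conventional config-2 tree, and the metadata content here is a placeholder:

    # Sketch: build a config-drive ISO the way the logged mkisofs call does.
    import json
    import os
    import subprocess
    import tempfile

    def build_config_drive(output_path, meta_data):
        with tempfile.TemporaryDirectory() as tmp:
            latest = os.path.join(tmp, 'openstack', 'latest')
            os.makedirs(latest)
            with open(os.path.join(latest, 'meta_data.json'), 'w') as f:
                json.dump(meta_data, f)
            subprocess.check_call([
                'mkisofs', '-o', output_path, '-ldots', '-allow-lowercase',
                '-allow-multidot', '-l', '-publisher', 'OpenStack Compute',
                '-quiet', '-J', '-r', '-V', 'config-2', tmp])

    build_config_drive('/tmp/disk.config',
                       {'uuid': '8086400b-ac70-4c79-928b-4f1966084384'})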
Feb 25 07:28:46 np0005629333 kernel: tap05178abb-a1: entered promiscuous mode
Feb 25 07:28:46 np0005629333 NetworkManager[49836]: <info>  [1772022526.7233] manager: (tap05178abb-a1): new Tun device (/org/freedesktop/NetworkManager/Devices/251)
Feb 25 07:28:46 np0005629333 systemd-udevd[299450]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:28:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:46Z|00561|binding|INFO|Claiming lport 05178abb-a113-4013-9194-9243afe9d0ff for this chassis.
Feb 25 07:28:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:46Z|00562|binding|INFO|05178abb-a113-4013-9194-9243afe9d0ff: Claiming fa:16:3e:7e:1d:a7 10.100.0.10
Feb 25 07:28:46 np0005629333 nova_compute[244014]: 2026-02-25 12:28:46.727 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:46 np0005629333 nova_compute[244014]: 2026-02-25 12:28:46.735 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:46Z|00563|binding|INFO|Setting lport 05178abb-a113-4013-9194-9243afe9d0ff ovn-installed in OVS
Feb 25 07:28:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.738 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:1d:a7 10.100.0.10'], port_security=['fa:16:3e:7e:1d:a7 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '8086400b-ac70-4c79-928b-4f1966084384', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a448894-87d7-4c8e-a168-2593011ffed7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '780a93a8758a4bd78b22fe68ed6276cf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '809ecded-d7db-4d4f-aed2-3cfc6bac71b9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc8be04a-7591-4819-939d-39b48e9a96cf, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=05178abb-a113-4013-9194-9243afe9d0ff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:28:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:46Z|00564|binding|INFO|Setting lport 05178abb-a113-4013-9194-9243afe9d0ff up in Southbound
Feb 25 07:28:46 np0005629333 NetworkManager[49836]: <info>  [1772022526.7411] device (tap05178abb-a1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:28:46 np0005629333 NetworkManager[49836]: <info>  [1772022526.7420] device (tap05178abb-a1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:28:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.743 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 05178abb-a113-4013-9194-9243afe9d0ff in datapath 9a448894-87d7-4c8e-a168-2593011ffed7 bound to our chassis#033[00m
Feb 25 07:28:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.746 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9a448894-87d7-4c8e-a168-2593011ffed7#033[00m
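[annotation] The "Matched UPDATE: PortBindingUpdatedEvent" line above is ovsdbapp's row-event machinery: the agent watches the OVN Southbound Port_Binding table and reacts when a port gains a chassis (note old=Port_Binding(chassis=[]) in the match). A minimal, illustrative watcher in the same style — this is a sketch against ovsdbapp's RowEvent interface, not neutron's actual class:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBoundToChassisEvent(row_event.RowEvent):
        """Fire when a Port_Binding row goes from unbound to bound."""

        def __init__(self):
            # Watch only 'update' events on the SB Port_Binding table.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # The old snapshot carried chassis=[]; the new row has one.
            return bool(row.chassis) and not getattr(old, 'chassis', [])

        def run(self, event, row, old):
            print('Port %s bound to our chassis' % row.logical_port)

On a match, the agent logs "bound to our chassis" and kicks off metadata provisioning for the datapath, which is what the next lines show.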
Feb 25 07:28:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.760 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[476994eb-e12c-434d-9d03-839507ee21a2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.761 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9a448894-81 in ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:28:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.763 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9a448894-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:28:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.763 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[59c2d0e8-1e89-4150-a70b-2d36baa73175]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
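[annotation] "Creating VETH tap9a448894-81 in ovnmeta-..." is the agent building a veth pair with one end inside the per-network metadata namespace; the privsep replies around it are the rootwrapped netlink calls. A rough pyroute2 sketch of the same plumbing (neutron goes through its privileged ip_lib; this needs root and the names are taken from the log):

    from pyroute2 import IPRoute, netns

    NS = 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7'
    if NS not in netns.listnetns():
        netns.create(NS)

    with IPRoute() as ipr:
        # tap9a448894-80 stays in the root namespace (it gets plugged
        # into br-int below); the peer is created directly in NS.
        ipr.link('add', ifname='tap9a448894-80', kind='veth',
                 peer={'ifname': 'tap9a448894-81', 'net_ns_fd': NS})
        idx = ipr.link_lookup(ifname='tap9a448894-80')[0]
        ipr.link('set', index=idx, state='up')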
Feb 25 07:28:46 np0005629333 systemd-machined[210048]: New machine qemu-73-instance-00000040.
Feb 25 07:28:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.765 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5e32d2c0-9ace-4f47-a0e7-f6992c9afe91]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.774 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[869689d0-45e0-4177-b79b-5e8ff7cb6780]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:46 np0005629333 systemd[1]: Started Virtual Machine qemu-73-instance-00000040.
Feb 25 07:28:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.787 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b8231bd5-632c-4519-a4fc-04df8274bf4f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:46 np0005629333 nova_compute[244014]: 2026-02-25 12:28:46.792 244018 DEBUG nova.compute.manager [req-fac583c4-0af4-4e32-9a07-43a6011ab998 req-c21937a6-452a-4ee7-bdff-28c4aec46742 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Received event network-changed-ba138ed1-c811-4043-9bd6-e1a5c6127f84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:28:46 np0005629333 nova_compute[244014]: 2026-02-25 12:28:46.793 244018 DEBUG nova.compute.manager [req-fac583c4-0af4-4e32-9a07-43a6011ab998 req-c21937a6-452a-4ee7-bdff-28c4aec46742 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Refreshing instance network info cache due to event network-changed-ba138ed1-c811-4043-9bd6-e1a5c6127f84. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:28:46 np0005629333 nova_compute[244014]: 2026-02-25 12:28:46.794 244018 DEBUG oslo_concurrency.lockutils [req-fac583c4-0af4-4e32-9a07-43a6011ab998 req-c21937a6-452a-4ee7-bdff-28c4aec46742 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-826789b1-e26a-4569-bd77-bd1ef76388be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:28:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.839 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f02cb8e9-0674-4c65-89b9-c3469130ce1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:46 np0005629333 NetworkManager[49836]: <info>  [1772022526.8480] manager: (tap9a448894-80): new Veth device (/org/freedesktop/NetworkManager/Devices/252)
Feb 25 07:28:46 np0005629333 systemd-udevd[299505]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:28:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.849 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[32069880-0e7e-466a-99b3-4f324740ea90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.898 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f69bf4b7-9426-417a-8a20-9be5a75f1805]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.902 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ed06973f-6749-457d-b79b-35f83a074c2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:46 np0005629333 NetworkManager[49836]: <info>  [1772022526.9294] device (tap9a448894-80): carrier: link connected
Feb 25 07:28:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.934 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[46fca849-b87d-4eb2-b2bf-787664f3c72d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.952 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[93a59b10-20b0-4aa5-8d03-d38ba17eeeaf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9a448894-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:4c:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449633, 'reachable_time': 29076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299537, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.969 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[96701aa6-9150-4ff4-9e54-f2535950c503]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb9:4c93'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449633, 'tstamp': 449633}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299538, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:46.983 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[391b59ab-4ca3-4fff-9bf6-c0303044f90b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9a448894-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:4c:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449633, 'reachable_time': 29076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299539, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
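[annotation] The two RTM_NEWLINK replies above are raw pyroute2 netlink dumps relayed through the privsep daemon. The useful fields (name, MAC, operational state) are buried in the 'attrs' list; a short sketch of pulling them out of such a dump inside the metadata namespace (requires root, like neutron's privsep helper):

    from pyroute2 import NetNS

    with NetNS('ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7') as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),     # e.g. tap9a448894-81
                  link.get_attr('IFLA_ADDRESS'),    # fa:16:3e:b9:4c:93
                  link.get_attr('IFLA_OPERSTATE'))  # 'UP' once carrier is on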
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:47.013 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b5d73fcd-632e-480b-a902-b7fb2ff4e874]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:47.063 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[43968bf0-009d-4421-b54a-98a9212d8f08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:47.065 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a448894-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:47.065 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:47.066 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9a448894-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.068 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:47 np0005629333 kernel: tap9a448894-80: entered promiscuous mode
Feb 25 07:28:47 np0005629333 NetworkManager[49836]: <info>  [1772022527.0698] manager: (tap9a448894-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/253)
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:47.075 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9a448894-80, col_values=(('external_ids', {'iface-id': '6ef06fbe-6eb9-4d55-bbd8-9394a70da39f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
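[annotation] The three ovsdbapp transactions above move the root-namespace veth end onto br-int and stamp it with the OVN iface-id so ovn-controller can bind it. A sketch of the same commands against ovsdbapp's Open_vSwitch API (the log runs them as three separate one-command transactions; here they are batched for brevity, and the socket path is the usual default, not taken from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with ovs.transaction(check_error=True) as txn:
        # Clean up any stale attachment, then plug into br-int.
        txn.add(ovs.del_port('tap9a448894-80', bridge='br-ex', if_exists=True))
        txn.add(ovs.add_port('br-int', 'tap9a448894-80', may_exist=True))
        # iface-id is what OVN matches against the logical port.
        txn.add(ovs.db_set(
            'Interface', 'tap9a448894-80',
            ('external_ids',
             {'iface-id': '6ef06fbe-6eb9-4d55-bbd8-9394a70da39f'})))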
Feb 25 07:28:47 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:47Z|00565|binding|INFO|Releasing lport 6ef06fbe-6eb9-4d55-bbd8-9394a70da39f from this chassis (sb_readonly=0)
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.077 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.090 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:47.091 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9a448894-87d7-4c8e-a168-2593011ffed7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9a448894-87d7-4c8e-a168-2593011ffed7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:47.092 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cb02802c-9830-4926-bcc5-e1453699e2bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:47.093 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-9a448894-87d7-4c8e-a168-2593011ffed7
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/9a448894-87d7-4c8e-a168-2593011ffed7.pid.haproxy
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 9a448894-87d7-4c8e-a168-2593011ffed7
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
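[annotation] The haproxy configuration dumped above is rendered by neutron.agent.ovn.metadata.driver from per-network values. An abridged stand-in using string.Template (the defaults section is omitted; values are copied from the log, the template itself is illustrative):

    from string import Template

    NETWORK = '9a448894-87d7-4c8e-a168-2593011ffed7'

    TMPL = Template("""\
    global
        log         /dev/log local0 debug
        log-tag     haproxy-metadata-proxy-$network_id
        user        root
        group       root
        maxconn     1024
        pidfile     $pidfile
        daemon

    listen listener
        bind 169.254.169.254:80
        server metadata $socket
        http-request add-header X-OVN-Network-ID $network_id
    """)

    cfg = TMPL.substitute(
        network_id=NETWORK,
        pidfile='/var/lib/neutron/external/pids/%s.pid.haproxy' % NETWORK,
        socket='/var/lib/neutron/metadata_proxy')
    print(cfg)

The proxy binds the link-local 169.254.169.254:80 inside the namespace, forwards to the shared UNIX socket, and tags each request with the network ID so the metadata service can resolve the caller.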
Feb 25 07:28:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:47.095 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'env', 'PROCESS_TAG=haproxy-9a448894-87d7-4c8e-a168-2593011ffed7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9a448894-87d7-4c8e-a168-2593011ffed7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.147 244018 DEBUG nova.network.neutron [req-cbb1b097-e9d0-4eea-831a-7b77e866f437 req-ec51d979-3ffd-496c-af27-5badf73731a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Updated VIF entry in instance network info cache for port 05178abb-a113-4013-9194-9243afe9d0ff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.147 244018 DEBUG nova.network.neutron [req-cbb1b097-e9d0-4eea-831a-7b77e866f437 req-ec51d979-3ffd-496c-af27-5badf73731a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Updating instance_info_cache with network_info: [{"id": "05178abb-a113-4013-9194-9243afe9d0ff", "address": "fa:16:3e:7e:1d:a7", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05178abb-a1", "ovs_interfaceid": "05178abb-a113-4013-9194-9243afe9d0ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.223 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022527.222624, 8086400b-ac70-4c79-928b-4f1966084384 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.224 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8086400b-ac70-4c79-928b-4f1966084384] VM Started (Lifecycle Event)#033[00m
Feb 25 07:28:47 np0005629333 podman[299613]: 2026-02-25 12:28:47.504733768 +0000 UTC m=+0.062605597 container create 620ca25fa8851b65461fdb951800fbc4d72272096ab3622b616f1ba2e99f8df7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.512 244018 DEBUG nova.compute.manager [req-b9857048-60da-48cc-86a4-d6c9809594a0 req-e3816227-cdda-4035-b12a-e54330632e84 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received event network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.514 244018 DEBUG oslo_concurrency.lockutils [req-b9857048-60da-48cc-86a4-d6c9809594a0 req-e3816227-cdda-4035-b12a-e54330632e84 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.514 244018 DEBUG oslo_concurrency.lockutils [req-b9857048-60da-48cc-86a4-d6c9809594a0 req-e3816227-cdda-4035-b12a-e54330632e84 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.515 244018 DEBUG oslo_concurrency.lockutils [req-b9857048-60da-48cc-86a4-d6c9809594a0 req-e3816227-cdda-4035-b12a-e54330632e84 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.515 244018 DEBUG nova.compute.manager [req-b9857048-60da-48cc-86a4-d6c9809594a0 req-e3816227-cdda-4035-b12a-e54330632e84 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Processing event network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.516 244018 DEBUG nova.compute.manager [req-b9857048-60da-48cc-86a4-d6c9809594a0 req-e3816227-cdda-4035-b12a-e54330632e84 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received event network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.516 244018 DEBUG oslo_concurrency.lockutils [req-b9857048-60da-48cc-86a4-d6c9809594a0 req-e3816227-cdda-4035-b12a-e54330632e84 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.517 244018 DEBUG oslo_concurrency.lockutils [req-b9857048-60da-48cc-86a4-d6c9809594a0 req-e3816227-cdda-4035-b12a-e54330632e84 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.517 244018 DEBUG oslo_concurrency.lockutils [req-b9857048-60da-48cc-86a4-d6c9809594a0 req-e3816227-cdda-4035-b12a-e54330632e84 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.518 244018 DEBUG nova.compute.manager [req-b9857048-60da-48cc-86a4-d6c9809594a0 req-e3816227-cdda-4035-b12a-e54330632e84 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] No waiting events found dispatching network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.518 244018 WARNING nova.compute.manager [req-b9857048-60da-48cc-86a4-d6c9809594a0 req-e3816227-cdda-4035-b12a-e54330632e84 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received unexpected event network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c for instance with vm_state building and task_state spawning.#033[00m
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.521 244018 DEBUG nova.compute.manager [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
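[annotation] The lock/pop/dispatch churn above, including the "No waiting events found ... Received unexpected event" warning, is nova's external-event rendezvous: the spawn path registers a waiter for network-vif-plugged, and neutron's event may arrive before, after, or twice. A minimal threading-based sketch of that pop-or-warn pattern (nova uses eventlet events keyed per instance; this is a simplification):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._events = {}              # {instance_uuid: {name: Event}}
            self._lock = threading.Lock()  # the "<uuid>-events" lock

        def prepare(self, uuid, name):
            ev = threading.Event()
            with self._lock:
                self._events.setdefault(uuid, {})[name] = ev
            return ev                      # waiter blocks on ev.wait(timeout)

        def pop_and_signal(self, uuid, name):
            with self._lock:
                ev = self._events.get(uuid, {}).pop(name, None)
            if ev is None:
                # No registered waiter -> the "unexpected event" warning.
                print('unexpected event %s for %s' % (name, uuid))
            else:
                ev.set()                   # wakes wait_for_instance_event

A duplicate delivery (as in the doubled network-vif-plugged lines above) finds the registry already emptied and is logged as unexpected, which is why that warning is harmless during spawn.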
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.522 244018 DEBUG oslo_concurrency.lockutils [req-cbb1b097-e9d0-4eea-831a-7b77e866f437 req-ec51d979-3ffd-496c-af27-5badf73731a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8086400b-ac70-4c79-928b-4f1966084384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.530 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.537 244018 INFO nova.virt.libvirt.driver [-] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Instance spawned successfully.#033[00m
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.539 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.544 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.552 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022527.2237253, 8086400b-ac70-4c79-928b-4f1966084384 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.553 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8086400b-ac70-4c79-928b-4f1966084384] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:28:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:28:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1766219868' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:28:47 np0005629333 systemd[1]: Started libpod-conmon-620ca25fa8851b65461fdb951800fbc4d72272096ab3622b616f1ba2e99f8df7.scope.
Feb 25 07:28:47 np0005629333 podman[299613]: 2026-02-25 12:28:47.471225687 +0000 UTC m=+0.029097576 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:28:47 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:28:47 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2584817c9d371ffcf257a70731a89404fd3bb1f9d05f4f6659b9234327334bd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:28:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 420 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 7.1 MiB/s wr, 192 op/s
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.923 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.929 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
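[annotation] The sync line above compares DB power_state 0 against VM power_state 3. Those integers come from nova.compute.power_state (0=NOSTATE, 1=RUNNING, 3=PAUSED, 4=SHUTDOWN); a sketch of the decision, with the "pending task, skip" rule from the log a few lines below:

    NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0x00, 0x01, 0x03, 0x04

    def sync_power_state(task_state, db_power, vm_power):
        if task_state is not None:
            # "During sync_power_state the instance has a pending task
            # (spawning). Skip." - never fight an in-flight operation.
            return 'skip'
        return 'in-sync' if db_power == vm_power else 'update-db'

    # The Paused lifecycle event above, while task_state is spawning:
    print(sync_power_state('spawning', NOSTATE, PAUSED))  # -> 'skip'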
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.933 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.933 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.933 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.934 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.934 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:47 np0005629333 nova_compute[244014]: 2026-02-25 12:28:47.934 244018 DEBUG nova.virt.libvirt.driver [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:48 np0005629333 nova_compute[244014]: 2026-02-25 12:28:48.417 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8086400b-ac70-4c79-928b-4f1966084384] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:28:48 np0005629333 nova_compute[244014]: 2026-02-25 12:28:48.417 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022527.5282967, ee9cd98b-1ca6-48e7-aa44-a09caf048a1c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:28:48 np0005629333 nova_compute[244014]: 2026-02-25 12:28:48.417 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:28:48 np0005629333 nova_compute[244014]: 2026-02-25 12:28:48.665 244018 INFO nova.compute.manager [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Took 10.69 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:28:48 np0005629333 nova_compute[244014]: 2026-02-25 12:28:48.666 244018 DEBUG nova.compute.manager [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:28:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:28:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1766219868' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
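[annotation] The two mon audit lines are client.openstack issuing "df" and "osd pool get-quota" mon commands (cinder/nova polling capacity). A sketch of the same calls through the python rados bindings, assuming the client.openstack keyring is readable and that your rados build supports the context-manager form:

    import json
    import rados

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     name='client.openstack') as cluster:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes",
                     "format": "json"}):
            # mon_command returns (retcode, output buffer, status string).
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
            print(ret, json.loads(out or '{}'))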
Feb 25 07:28:48 np0005629333 podman[299613]: 2026-02-25 12:28:48.676937849 +0000 UTC m=+1.234809758 container init 620ca25fa8851b65461fdb951800fbc4d72272096ab3622b616f1ba2e99f8df7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:28:48 np0005629333 podman[299613]: 2026-02-25 12:28:48.685532302 +0000 UTC m=+1.243404141 container start 620ca25fa8851b65461fdb951800fbc4d72272096ab3622b616f1ba2e99f8df7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:28:48 np0005629333 neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7[299628]: [NOTICE]   (299632) : New worker (299634) forked
Feb 25 07:28:48 np0005629333 neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7[299628]: [NOTICE]   (299632) : Loading success.
Feb 25 07:28:48 np0005629333 nova_compute[244014]: 2026-02-25 12:28:48.715 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:28:48 np0005629333 nova_compute[244014]: 2026-02-25 12:28:48.719 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:28:48 np0005629333 nova_compute[244014]: 2026-02-25 12:28:48.749 244018 INFO nova.compute.manager [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Took 11.85 seconds to build instance.#033[00m
Feb 25 07:28:48 np0005629333 nova_compute[244014]: 2026-02-25 12:28:48.782 244018 DEBUG oslo_concurrency.lockutils [None req-d372a00b-bfb6-4c8f-8ba6-0f3882541ea5 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
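[annotation] All of the Acquiring/Acquired/Releasing lines in this log, including the 12.050s hold above, come from oslo.concurrency's lockutils, which logs wait and hold times around each critical section. Both forms seen in the log, sketched with a hypothetical refresh function:

    from oslo_concurrency import lockutils

    # Context-manager form: the "refresh_cache-<uuid>" lock lines.
    def refresh_instance_cache(uuid, refresh_fn):
        with lockutils.lock('refresh_cache-%s' % uuid):
            return refresh_fn()

    # Decorator form: the '"<name>" acquired by ... :: waited/held' lines.
    @lockutils.synchronized('8086400b-ac70-4c79-928b-4f1966084384-events')
    def pop_event():
        pass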
Feb 25 07:28:49 np0005629333 nova_compute[244014]: 2026-02-25 12:28:49.727 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 420 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 6.3 MiB/s wr, 179 op/s
Feb 25 07:28:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.642 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.923 244018 DEBUG nova.network.neutron [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Updating instance_info_cache with network_info: [{"id": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "address": "fa:16:3e:bd:a8:78", "network": {"id": "23ab8bc1-e701-454d-a828-42d18cbd9afc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1699028190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be8db082f3894d28b63a3709be538262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba138ed1-c8", "ovs_interfaceid": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.951 244018 DEBUG nova.compute.manager [req-6762c082-6dac-4503-a676-6d61aab933da req-0fae1fe1-e070-4390-8aba-4c88e990fb8c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Received event network-vif-plugged-05178abb-a113-4013-9194-9243afe9d0ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.951 244018 DEBUG oslo_concurrency.lockutils [req-6762c082-6dac-4503-a676-6d61aab933da req-0fae1fe1-e070-4390-8aba-4c88e990fb8c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8086400b-ac70-4c79-928b-4f1966084384-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.952 244018 DEBUG oslo_concurrency.lockutils [req-6762c082-6dac-4503-a676-6d61aab933da req-0fae1fe1-e070-4390-8aba-4c88e990fb8c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.952 244018 DEBUG oslo_concurrency.lockutils [req-6762c082-6dac-4503-a676-6d61aab933da req-0fae1fe1-e070-4390-8aba-4c88e990fb8c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.952 244018 DEBUG nova.compute.manager [req-6762c082-6dac-4503-a676-6d61aab933da req-0fae1fe1-e070-4390-8aba-4c88e990fb8c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Processing event network-vif-plugged-05178abb-a113-4013-9194-9243afe9d0ff _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.953 244018 DEBUG nova.compute.manager [req-6762c082-6dac-4503-a676-6d61aab933da req-0fae1fe1-e070-4390-8aba-4c88e990fb8c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Received event network-vif-plugged-05178abb-a113-4013-9194-9243afe9d0ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.953 244018 DEBUG oslo_concurrency.lockutils [req-6762c082-6dac-4503-a676-6d61aab933da req-0fae1fe1-e070-4390-8aba-4c88e990fb8c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8086400b-ac70-4c79-928b-4f1966084384-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.953 244018 DEBUG oslo_concurrency.lockutils [req-6762c082-6dac-4503-a676-6d61aab933da req-0fae1fe1-e070-4390-8aba-4c88e990fb8c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.953 244018 DEBUG oslo_concurrency.lockutils [req-6762c082-6dac-4503-a676-6d61aab933da req-0fae1fe1-e070-4390-8aba-4c88e990fb8c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.954 244018 DEBUG nova.compute.manager [req-6762c082-6dac-4503-a676-6d61aab933da req-0fae1fe1-e070-4390-8aba-4c88e990fb8c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] No waiting events found dispatching network-vif-plugged-05178abb-a113-4013-9194-9243afe9d0ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.954 244018 WARNING nova.compute.manager [req-6762c082-6dac-4503-a676-6d61aab933da req-0fae1fe1-e070-4390-8aba-4c88e990fb8c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Received unexpected event network-vif-plugged-05178abb-a113-4013-9194-9243afe9d0ff for instance with vm_state building and task_state spawning.#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.955 244018 DEBUG nova.compute.manager [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.957 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Releasing lock "refresh_cache-826789b1-e26a-4569-bd77-bd1ef76388be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.958 244018 DEBUG nova.compute.manager [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Instance network_info: |[{"id": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "address": "fa:16:3e:bd:a8:78", "network": {"id": "23ab8bc1-e701-454d-a828-42d18cbd9afc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1699028190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be8db082f3894d28b63a3709be538262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba138ed1-c8", "ovs_interfaceid": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.959 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022530.9581943, 8086400b-ac70-4c79-928b-4f1966084384 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.959 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8086400b-ac70-4c79-928b-4f1966084384] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.960 244018 DEBUG oslo_concurrency.lockutils [req-fac583c4-0af4-4e32-9a07-43a6011ab998 req-c21937a6-452a-4ee7-bdff-28c4aec46742 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-826789b1-e26a-4569-bd77-bd1ef76388be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.961 244018 DEBUG nova.network.neutron [req-fac583c4-0af4-4e32-9a07-43a6011ab998 req-c21937a6-452a-4ee7-bdff-28c4aec46742 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Refreshing network info cache for port ba138ed1-c811-4043-9bd6-e1a5c6127f84 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.965 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Start _get_guest_xml network_info=[{"id": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "address": "fa:16:3e:bd:a8:78", "network": {"id": "23ab8bc1-e701-454d-a828-42d18cbd9afc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1699028190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be8db082f3894d28b63a3709be538262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba138ed1-c8", "ovs_interfaceid": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.965 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.969 244018 INFO nova.virt.libvirt.driver [-] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Instance spawned successfully.#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.970 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.971 244018 WARNING nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.978 244018 DEBUG nova.virt.libvirt.host [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.978 244018 DEBUG nova.virt.libvirt.host [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.982 244018 DEBUG nova.virt.libvirt.host [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.983 244018 DEBUG nova.virt.libvirt.host [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.983 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.983 244018 DEBUG nova.virt.hardware [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.984 244018 DEBUG nova.virt.hardware [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.984 244018 DEBUG nova.virt.hardware [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.984 244018 DEBUG nova.virt.hardware [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.985 244018 DEBUG nova.virt.hardware [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.985 244018 DEBUG nova.virt.hardware [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.985 244018 DEBUG nova.virt.hardware [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.985 244018 DEBUG nova.virt.hardware [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.986 244018 DEBUG nova.virt.hardware [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.986 244018 DEBUG nova.virt.hardware [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.986 244018 DEBUG nova.virt.hardware [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:28:50 np0005629333 nova_compute[244014]: 2026-02-25 12:28:50.989 244018 DEBUG oslo_concurrency.processutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.027 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.040 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.045 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.046 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.047 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.047 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.048 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.049 244018 DEBUG nova.virt.libvirt.driver [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.072 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8086400b-ac70-4c79-928b-4f1966084384] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.106 244018 INFO nova.compute.manager [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Took 11.29 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.107 244018 DEBUG nova.compute.manager [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.179 244018 INFO nova.compute.manager [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Took 12.43 seconds to build instance.#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.205 244018 DEBUG oslo_concurrency.lockutils [None req-59f532c1-db2f-425e-9a23-607736e548a2 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.319 244018 DEBUG oslo_concurrency.lockutils [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Acquiring lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.320 244018 DEBUG oslo_concurrency.lockutils [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.321 244018 DEBUG oslo_concurrency.lockutils [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Acquiring lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.321 244018 DEBUG oslo_concurrency.lockutils [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.322 244018 DEBUG oslo_concurrency.lockutils [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.324 244018 INFO nova.compute.manager [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Terminating instance#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.325 244018 DEBUG nova.compute.manager [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:28:51 np0005629333 kernel: tap1b61d923-a0 (unregistering): left promiscuous mode
Feb 25 07:28:51 np0005629333 NetworkManager[49836]: <info>  [1772022531.3593] device (tap1b61d923-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:28:51 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:51Z|00566|binding|INFO|Releasing lport 1b61d923-a06e-4948-b459-b6cf5b8c668d from this chassis (sb_readonly=0)
Feb 25 07:28:51 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:51Z|00567|binding|INFO|Setting lport 1b61d923-a06e-4948-b459-b6cf5b8c668d down in Southbound
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.365 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:51 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:51Z|00568|binding|INFO|Removing iface tap1b61d923-a0 ovn-installed in OVS
Feb 25 07:28:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:51.375 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:48:42 10.100.0.7'], port_security=['fa:16:3e:f6:48:42 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9630899b-57d8-4e46-b9e0-8762f0f4f2cb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0406cfb3-4360-4052-8e1d-7019c4224092', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d46f1174f384dc3be789d4301748e2d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6a0b035a-6f2a-459e-b461-0e5414640091', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4783117-d743-4200-8206-169bf90ef6ab, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=1b61d923-a06e-4948-b459-b6cf5b8c668d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:28:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:51.376 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 1b61d923-a06e-4948-b459-b6cf5b8c668d in datapath 0406cfb3-4360-4052-8e1d-7019c4224092 unbound from our chassis#033[00m
Feb 25 07:28:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:51.378 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0406cfb3-4360-4052-8e1d-7019c4224092, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:28:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:51.381 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[455a5187-238e-43e2-9ed9-8e036422b959]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:51.382 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092 namespace which is not needed anymore#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.388 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:51 np0005629333 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Feb 25 07:28:51 np0005629333 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000003e.scope: Consumed 7.071s CPU time.
Feb 25 07:28:51 np0005629333 systemd-machined[210048]: Machine qemu-71-instance-0000003e terminated.
Feb 25 07:28:51 np0005629333 neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092[299243]: [NOTICE]   (299247) : haproxy version is 2.8.14-c23fe91
Feb 25 07:28:51 np0005629333 neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092[299243]: [NOTICE]   (299247) : path to executable is /usr/sbin/haproxy
Feb 25 07:28:51 np0005629333 neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092[299243]: [WARNING]  (299247) : Exiting Master process...
Feb 25 07:28:51 np0005629333 neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092[299243]: [ALERT]    (299247) : Current worker (299249) exited with code 143 (Terminated)
Feb 25 07:28:51 np0005629333 neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092[299243]: [WARNING]  (299247) : All workers exited. Exiting... (0)
Feb 25 07:28:51 np0005629333 systemd[1]: libpod-92077ce90f2b639bc84fd4013caf4923123cd64a38a3252e0076d8721934b070.scope: Deactivated successfully.
Feb 25 07:28:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:28:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1151926361' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:28:51 np0005629333 podman[299687]: 2026-02-25 12:28:51.53444313 +0000 UTC m=+0.056041880 container died 92077ce90f2b639bc84fd4013caf4923123cd64a38a3252e0076d8721934b070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.548 244018 DEBUG oslo_concurrency.processutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:28:51 np0005629333 systemd[1]: var-lib-containers-storage-overlay-2e401b80a9939de6acae8801df5427e297dd65f2008b59d33321f3f6cd457d4f-merged.mount: Deactivated successfully.
Feb 25 07:28:51 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-92077ce90f2b639bc84fd4013caf4923123cd64a38a3252e0076d8721934b070-userdata-shm.mount: Deactivated successfully.
Feb 25 07:28:51 np0005629333 podman[299687]: 2026-02-25 12:28:51.585663092 +0000 UTC m=+0.107261812 container cleanup 92077ce90f2b639bc84fd4013caf4923123cd64a38a3252e0076d8721934b070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.595 244018 DEBUG nova.storage.rbd_utils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] rbd image 826789b1-e26a-4569-bd77-bd1ef76388be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.604 244018 DEBUG oslo_concurrency.processutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:28:51 np0005629333 systemd[1]: libpod-conmon-92077ce90f2b639bc84fd4013caf4923123cd64a38a3252e0076d8721934b070.scope: Deactivated successfully.
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.629 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.639 244018 INFO nova.virt.libvirt.driver [-] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Instance destroyed successfully.#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.639 244018 DEBUG nova.objects.instance [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lazy-loading 'resources' on Instance uuid 9630899b-57d8-4e46-b9e0-8762f0f4f2cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.655 244018 DEBUG nova.virt.libvirt.vif [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:28:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-761477046',display_name='tempest-ServerAddressesTestJSON-server-761477046',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-761477046',id=62,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:28:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3d46f1174f384dc3be789d4301748e2d',ramdisk_id='',reservation_id='r-0u0fmhzw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-785917861',owner_user_name='tempest-ServerAddressesTestJSON-785917861-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:28:45Z,user_data=None,user_id='f1a7d945e359492faaab7c197de3e9e8',uuid=9630899b-57d8-4e46-b9e0-8762f0f4f2cb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "address": "fa:16:3e:f6:48:42", "network": {"id": "0406cfb3-4360-4052-8e1d-7019c4224092", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1148317190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d46f1174f384dc3be789d4301748e2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b61d923-a0", "ovs_interfaceid": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.656 244018 DEBUG nova.network.os_vif_util [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Converting VIF {"id": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "address": "fa:16:3e:f6:48:42", "network": {"id": "0406cfb3-4360-4052-8e1d-7019c4224092", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1148317190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d46f1174f384dc3be789d4301748e2d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b61d923-a0", "ovs_interfaceid": "1b61d923-a06e-4948-b459-b6cf5b8c668d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.657 244018 DEBUG nova.network.os_vif_util [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:48:42,bridge_name='br-int',has_traffic_filtering=True,id=1b61d923-a06e-4948-b459-b6cf5b8c668d,network=Network(0406cfb3-4360-4052-8e1d-7019c4224092),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b61d923-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.657 244018 DEBUG os_vif [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:48:42,bridge_name='br-int',has_traffic_filtering=True,id=1b61d923-a06e-4948-b459-b6cf5b8c668d,network=Network(0406cfb3-4360-4052-8e1d-7019c4224092),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b61d923-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.659 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.660 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b61d923-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.663 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.665 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.668 244018 INFO os_vif [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:48:42,bridge_name='br-int',has_traffic_filtering=True,id=1b61d923-a06e-4948-b459-b6cf5b8c668d,network=Network(0406cfb3-4360-4052-8e1d-7019c4224092),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b61d923-a0')#033[00m
Feb 25 07:28:51 np0005629333 podman[299744]: 2026-02-25 12:28:51.762233019 +0000 UTC m=+0.149794568 container remove 92077ce90f2b639bc84fd4013caf4923123cd64a38a3252e0076d8721934b070 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:28:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:51.767 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[47526ffa-2eda-485b-9d0c-956c3cce5ca5]: (4, ('Wed Feb 25 12:28:51 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092 (92077ce90f2b639bc84fd4013caf4923123cd64a38a3252e0076d8721934b070)\n92077ce90f2b639bc84fd4013caf4923123cd64a38a3252e0076d8721934b070\nWed Feb 25 12:28:51 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092 (92077ce90f2b639bc84fd4013caf4923123cd64a38a3252e0076d8721934b070)\n92077ce90f2b639bc84fd4013caf4923123cd64a38a3252e0076d8721934b070\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:51.768 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[33bef22d-b901-4172-b8e8-d91be5ac89f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:51.770 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0406cfb3-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:51 np0005629333 kernel: tap0406cfb3-40: left promiscuous mode
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.782 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:51.783 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f55e6983-1dc8-4823-a33c-7127ae4bfb37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.793 244018 DEBUG nova.compute.manager [req-6e063c46-c854-48a3-bf86-3d73991cce58 req-7f0f6612-cbec-43ce-bd15-0f5f56cd232b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Received event network-vif-unplugged-1b61d923-a06e-4948-b459-b6cf5b8c668d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.794 244018 DEBUG oslo_concurrency.lockutils [req-6e063c46-c854-48a3-bf86-3d73991cce58 req-7f0f6612-cbec-43ce-bd15-0f5f56cd232b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.794 244018 DEBUG oslo_concurrency.lockutils [req-6e063c46-c854-48a3-bf86-3d73991cce58 req-7f0f6612-cbec-43ce-bd15-0f5f56cd232b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.794 244018 DEBUG oslo_concurrency.lockutils [req-6e063c46-c854-48a3-bf86-3d73991cce58 req-7f0f6612-cbec-43ce-bd15-0f5f56cd232b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.795 244018 DEBUG nova.compute.manager [req-6e063c46-c854-48a3-bf86-3d73991cce58 req-7f0f6612-cbec-43ce-bd15-0f5f56cd232b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] No waiting events found dispatching network-vif-unplugged-1b61d923-a06e-4948-b459-b6cf5b8c668d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.795 244018 DEBUG nova.compute.manager [req-6e063c46-c854-48a3-bf86-3d73991cce58 req-7f0f6612-cbec-43ce-bd15-0f5f56cd232b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Received event network-vif-unplugged-1b61d923-a06e-4948-b459-b6cf5b8c668d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:28:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:51.798 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b14a8525-04c1-48df-9c07-d64274d5d805]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:51.799 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[76865ea3-eb54-4d66-ba7a-1b94925885c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:51.812 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fafa093e-8c40-49a2-8d3c-b3762e84c18d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449200, 'reachable_time': 24261, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299799, 'error': None, 'target': 'ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:51.814 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0406cfb3-4360-4052-8e1d-7019c4224092 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:28:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:51.814 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[0efcc455-c5bf-4b42-95b2-c027f7902008]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:51 np0005629333 systemd[1]: run-netns-ovnmeta\x2d0406cfb3\x2d4360\x2d4052\x2d8e1d\x2d7019c4224092.mount: Deactivated successfully.
Feb 25 07:28:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 408 MiB data, 760 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 6.3 MiB/s wr, 245 op/s
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.956 244018 INFO nova.virt.libvirt.driver [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Deleting instance files /var/lib/nova/instances/9630899b-57d8-4e46-b9e0-8762f0f4f2cb_del#033[00m
Feb 25 07:28:51 np0005629333 nova_compute[244014]: 2026-02-25 12:28:51.956 244018 INFO nova.virt.libvirt.driver [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Deletion of /var/lib/nova/instances/9630899b-57d8-4e46-b9e0-8762f0f4f2cb_del complete#033[00m
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.004 244018 INFO nova.compute.manager [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.005 244018 DEBUG oslo.service.loopingcall [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.005 244018 DEBUG nova.compute.manager [-] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.006 244018 DEBUG nova.network.neutron [-] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.139 244018 DEBUG nova.network.neutron [req-fac583c4-0af4-4e32-9a07-43a6011ab998 req-c21937a6-452a-4ee7-bdff-28c4aec46742 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Updated VIF entry in instance network info cache for port ba138ed1-c811-4043-9bd6-e1a5c6127f84. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.140 244018 DEBUG nova.network.neutron [req-fac583c4-0af4-4e32-9a07-43a6011ab998 req-c21937a6-452a-4ee7-bdff-28c4aec46742 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Updating instance_info_cache with network_info: [{"id": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "address": "fa:16:3e:bd:a8:78", "network": {"id": "23ab8bc1-e701-454d-a828-42d18cbd9afc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1699028190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be8db082f3894d28b63a3709be538262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba138ed1-c8", "ovs_interfaceid": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:28:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:28:52 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3694668567' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.219 244018 DEBUG oslo_concurrency.processutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.224 244018 DEBUG nova.virt.libvirt.vif [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:28:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-846051174',display_name='tempest-ServerPasswordTestJSON-server-846051174',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-846051174',id=65,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='be8db082f3894d28b63a3709be538262',ramdisk_id='',reservation_id='r-624y952x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1533125897',owner_user_name='tempest-ServerPasswordTestJSON-1533125897-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:28:41Z,user_data=None,user_id='05ca7159581049009a4223cf01ebf146',uuid=826789b1-e26a-4569-bd77-bd1ef76388be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "address": "fa:16:3e:bd:a8:78", "network": {"id": "23ab8bc1-e701-454d-a828-42d18cbd9afc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1699028190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be8db082f3894d28b63a3709be538262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba138ed1-c8", "ovs_interfaceid": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.224 244018 DEBUG nova.network.os_vif_util [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Converting VIF {"id": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "address": "fa:16:3e:bd:a8:78", "network": {"id": "23ab8bc1-e701-454d-a828-42d18cbd9afc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1699028190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be8db082f3894d28b63a3709be538262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba138ed1-c8", "ovs_interfaceid": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.225 244018 DEBUG nova.network.os_vif_util [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:a8:78,bridge_name='br-int',has_traffic_filtering=True,id=ba138ed1-c811-4043-9bd6-e1a5c6127f84,network=Network(23ab8bc1-e701-454d-a828-42d18cbd9afc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba138ed1-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.228 244018 DEBUG nova.objects.instance [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lazy-loading 'pci_devices' on Instance uuid 826789b1-e26a-4569-bd77-bd1ef76388be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.615 244018 DEBUG oslo_concurrency.lockutils [req-fac583c4-0af4-4e32-9a07-43a6011ab998 req-c21937a6-452a-4ee7-bdff-28c4aec46742 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-826789b1-e26a-4569-bd77-bd1ef76388be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.632 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:28:52 np0005629333 nova_compute[244014]:  <uuid>826789b1-e26a-4569-bd77-bd1ef76388be</uuid>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:  <name>instance-00000041</name>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerPasswordTestJSON-server-846051174</nova:name>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:28:50</nova:creationTime>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:28:52 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:        <nova:user uuid="05ca7159581049009a4223cf01ebf146">tempest-ServerPasswordTestJSON-1533125897-project-member</nova:user>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:        <nova:project uuid="be8db082f3894d28b63a3709be538262">tempest-ServerPasswordTestJSON-1533125897</nova:project>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:        <nova:port uuid="ba138ed1-c811-4043-9bd6-e1a5c6127f84">
Feb 25 07:28:52 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <entry name="serial">826789b1-e26a-4569-bd77-bd1ef76388be</entry>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <entry name="uuid">826789b1-e26a-4569-bd77-bd1ef76388be</entry>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/826789b1-e26a-4569-bd77-bd1ef76388be_disk">
Feb 25 07:28:52 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:28:52 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/826789b1-e26a-4569-bd77-bd1ef76388be_disk.config">
Feb 25 07:28:52 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:28:52 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:bd:a8:78"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <target dev="tapba138ed1-c8"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/826789b1-e26a-4569-bd77-bd1ef76388be/console.log" append="off"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:28:52 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:28:52 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:28:52 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:28:52 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
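
    [annotation] The domain document that ends above is what gets handed to libvirt next. Nova drives this through its own host/guest wrappers; a minimal sketch of the underlying libvirt-python calls, assuming the XML above is available as a string:

        import libvirt

        xml = "<domain type='kvm'>...</domain>"  # the document logged above

        conn = libvirt.open("qemu:///system")
        try:
            dom = conn.defineXML(xml)  # persist the definition
            dom.create()               # boot instance-00000041
        finally:
            conn.close()
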
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.633 244018 DEBUG nova.compute.manager [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Preparing to wait for external event network-vif-plugged-ba138ed1-c811-4043-9bd6-e1a5c6127f84 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.633 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Acquiring lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.634 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.634 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
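
    [annotation] The prepare/acquire/release sequence above registers a waiter for network-vif-plugged before the port is actually plugged, so a fast Neutron callback cannot be missed. Reduced to its core as a hand-rolled sketch (this is not nova's actual code):

        import threading

        events = {}

        def prepare(tag):
            # Register the waiter *before* starting the work that fires it.
            events[tag] = threading.Event()

        def deliver(tag):
            # Called when the external event arrives from Neutron.
            if tag in events:
                events[tag].set()

        tag = "network-vif-plugged-ba138ed1-c811-4043-9bd6-e1a5c6127f84"
        prepare(tag)
        # ... plug the VIF; OVN binds the port and Neutron posts the event ...
        deliver(tag)
        events[tag].wait(timeout=300)  # nova bounds this wait with vif_plugging_timeout
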
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.635 244018 DEBUG nova.virt.libvirt.vif [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:28:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-846051174',display_name='tempest-ServerPasswordTestJSON-server-846051174',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-846051174',id=65,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='be8db082f3894d28b63a3709be538262',ramdisk_id='',reservation_id='r-624y952x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1533125897',owner_user_name='tempest-ServerPasswordTestJSON-1533125897-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:28:41Z,user_data=None,user_id='05ca7159581049009a4223cf01ebf146',uuid=826789b1-e26a-4569-bd77-bd1ef76388be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "address": "fa:16:3e:bd:a8:78", "network": {"id": "23ab8bc1-e701-454d-a828-42d18cbd9afc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1699028190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be8db082f3894d28b63a3709be538262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba138ed1-c8", "ovs_interfaceid": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.636 244018 DEBUG nova.network.os_vif_util [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Converting VIF {"id": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "address": "fa:16:3e:bd:a8:78", "network": {"id": "23ab8bc1-e701-454d-a828-42d18cbd9afc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1699028190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be8db082f3894d28b63a3709be538262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba138ed1-c8", "ovs_interfaceid": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.636 244018 DEBUG nova.network.os_vif_util [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:a8:78,bridge_name='br-int',has_traffic_filtering=True,id=ba138ed1-c811-4043-9bd6-e1a5c6127f84,network=Network(23ab8bc1-e701-454d-a828-42d18cbd9afc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba138ed1-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.637 244018 DEBUG os_vif [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:a8:78,bridge_name='br-int',has_traffic_filtering=True,id=ba138ed1-c811-4043-9bd6-e1a5c6127f84,network=Network(23ab8bc1-e701-454d-a828-42d18cbd9afc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba138ed1-c8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.638 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.638 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.639 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.642 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.643 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba138ed1-c8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.643 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapba138ed1-c8, col_values=(('external_ids', {'iface-id': 'ba138ed1-c811-4043-9bd6-e1a5c6127f84', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bd:a8:78', 'vm-uuid': '826789b1-e26a-4569-bd77-bd1ef76388be'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.645 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:52 np0005629333 NetworkManager[49836]: <info>  [1772022532.6465] manager: (tapba138ed1-c8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/254)
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.648 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.651 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:52 np0005629333 nova_compute[244014]: 2026-02-25 12:28:52.652 244018 INFO os_vif [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:a8:78,bridge_name='br-int',has_traffic_filtering=True,id=ba138ed1-c811-4043-9bd6-e1a5c6127f84,network=Network(23ab8bc1-e701-454d-a828-42d18cbd9afc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba138ed1-c8')#033[00m
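
    [annotation] The three ovsdbapp commands logged above (AddBridgeCommand, AddPortCommand, DbSetCommand on Interface) map onto a single ovs-vsctl transaction. A CLI-equivalent sketch, driven from Python with the column values from the log:

        import subprocess

        subprocess.check_call(
            ["ovs-vsctl",
             "--may-exist", "add-br", "br-int",
             "--", "set", "Bridge", "br-int", "datapath_type=system",
             "--", "--may-exist", "add-port", "br-int", "tapba138ed1-c8",
             "--", "set", "Interface", "tapba138ed1-c8",
             "external_ids:iface-id=ba138ed1-c811-4043-9bd6-e1a5c6127f84",
             "external_ids:iface-status=active",
             "external_ids:attached-mac=fa:16:3e:bd:a8:78",
             "external_ids:vm-uuid=826789b1-e26a-4569-bd77-bd1ef76388be"])
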
Feb 25 07:28:53 np0005629333 nova_compute[244014]: 2026-02-25 12:28:53.230 244018 DEBUG nova.network.neutron [-] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:28:53 np0005629333 nova_compute[244014]: 2026-02-25 12:28:53.236 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:28:53 np0005629333 nova_compute[244014]: 2026-02-25 12:28:53.237 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:28:53 np0005629333 nova_compute[244014]: 2026-02-25 12:28:53.237 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] No VIF found with MAC fa:16:3e:bd:a8:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:28:53 np0005629333 nova_compute[244014]: 2026-02-25 12:28:53.238 244018 INFO nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Using config drive#033[00m
Feb 25 07:28:53 np0005629333 nova_compute[244014]: 2026-02-25 12:28:53.264 244018 DEBUG nova.storage.rbd_utils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] rbd image 826789b1-e26a-4569-bd77-bd1ef76388be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
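
    [annotation] The "rbd image ... does not exist" probe above comes from nova's rbd_utils opening the image and treating a not-found error as safe-to-create. A sketch with the python-rbd bindings, using the pool and client id from this log:

        import rados
        import rbd

        cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
        cluster.connect()
        try:
            ioctx = cluster.open_ioctx("vms")
            try:
                image = rbd.Image(ioctx, "826789b1-e26a-4569-bd77-bd1ef76388be_disk.config")
                image.close()
                print("image exists")
            except rbd.ImageNotFound:
                print("image does not exist")  # matches the DEBUG line above
            finally:
                ioctx.close()
        finally:
            cluster.shutdown()
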
Feb 25 07:28:53 np0005629333 nova_compute[244014]: 2026-02-25 12:28:53.273 244018 INFO nova.compute.manager [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Rebuilding instance#033[00m
Feb 25 07:28:53 np0005629333 nova_compute[244014]: 2026-02-25 12:28:53.276 244018 INFO nova.compute.manager [-] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Took 1.27 seconds to deallocate network for instance.#033[00m
Feb 25 07:28:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 391 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 2.8 MiB/s wr, 239 op/s
Feb 25 07:28:54 np0005629333 nova_compute[244014]: 2026-02-25 12:28:54.387 244018 DEBUG nova.compute.manager [req-75a109a1-31a7-488d-a264-8ba4f996bbee req-cf52dc2b-0f08-495f-8fdf-c63609b76b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Received event network-changed-05178abb-a113-4013-9194-9243afe9d0ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:28:54 np0005629333 nova_compute[244014]: 2026-02-25 12:28:54.387 244018 DEBUG nova.compute.manager [req-75a109a1-31a7-488d-a264-8ba4f996bbee req-cf52dc2b-0f08-495f-8fdf-c63609b76b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Refreshing instance network info cache due to event network-changed-05178abb-a113-4013-9194-9243afe9d0ff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:28:54 np0005629333 nova_compute[244014]: 2026-02-25 12:28:54.387 244018 DEBUG oslo_concurrency.lockutils [req-75a109a1-31a7-488d-a264-8ba4f996bbee req-cf52dc2b-0f08-495f-8fdf-c63609b76b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8086400b-ac70-4c79-928b-4f1966084384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:28:54 np0005629333 nova_compute[244014]: 2026-02-25 12:28:54.387 244018 DEBUG oslo_concurrency.lockutils [req-75a109a1-31a7-488d-a264-8ba4f996bbee req-cf52dc2b-0f08-495f-8fdf-c63609b76b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8086400b-ac70-4c79-928b-4f1966084384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:28:54 np0005629333 nova_compute[244014]: 2026-02-25 12:28:54.387 244018 DEBUG nova.network.neutron [req-75a109a1-31a7-488d-a264-8ba4f996bbee req-cf52dc2b-0f08-495f-8fdf-c63609b76b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Refreshing network info cache for port 05178abb-a113-4013-9194-9243afe9d0ff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:28:54 np0005629333 nova_compute[244014]: 2026-02-25 12:28:54.448 244018 DEBUG oslo_concurrency.lockutils [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:54 np0005629333 nova_compute[244014]: 2026-02-25 12:28:54.449 244018 DEBUG oslo_concurrency.lockutils [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:54 np0005629333 nova_compute[244014]: 2026-02-25 12:28:54.600 244018 DEBUG oslo_concurrency.processutils [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:28:54 np0005629333 nova_compute[244014]: 2026-02-25 12:28:54.676 244018 DEBUG nova.objects.instance [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'trusted_certs' on Instance uuid ee9cd98b-1ca6-48e7-aa44-a09caf048a1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:28:54 np0005629333 nova_compute[244014]: 2026-02-25 12:28:54.696 244018 DEBUG nova.compute.manager [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:28:54 np0005629333 nova_compute[244014]: 2026-02-25 12:28:54.729 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:54 np0005629333 nova_compute[244014]: 2026-02-25 12:28:54.744 244018 DEBUG nova.objects.instance [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'pci_requests' on Instance uuid ee9cd98b-1ca6-48e7-aa44-a09caf048a1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:28:54 np0005629333 nova_compute[244014]: 2026-02-25 12:28:54.760 244018 DEBUG nova.objects.instance [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'pci_devices' on Instance uuid ee9cd98b-1ca6-48e7-aa44-a09caf048a1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:28:54 np0005629333 nova_compute[244014]: 2026-02-25 12:28:54.775 244018 DEBUG nova.objects.instance [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'resources' on Instance uuid ee9cd98b-1ca6-48e7-aa44-a09caf048a1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:28:54 np0005629333 nova_compute[244014]: 2026-02-25 12:28:54.786 244018 DEBUG nova.objects.instance [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'migration_context' on Instance uuid ee9cd98b-1ca6-48e7-aa44-a09caf048a1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:28:54 np0005629333 nova_compute[244014]: 2026-02-25 12:28:54.798 244018 DEBUG nova.objects.instance [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 25 07:28:54 np0005629333 nova_compute[244014]: 2026-02-25 12:28:54.802 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
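
    [annotation] The _clean_shutdown entry above is the rebuild path asking the guest to power off gracefully before resorting to a hard stop. The underlying libvirt calls, sketched without nova's retry bookkeeping (the 60s window here is illustrative; nova's is configurable):

        import time
        import libvirt

        conn = libvirt.open("qemu:///system")
        dom = conn.lookupByUUIDString("ee9cd98b-1ca6-48e7-aa44-a09caf048a1c")
        dom.shutdown()            # ACPI shutdown request; the guest may ignore it
        for _ in range(60):       # poll until the domain goes inactive
            if not dom.isActive():
                break
            time.sleep(1)
        else:
            dom.destroy()         # hard stop fallback
        conn.close()
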
Feb 25 07:28:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:55.011 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:55.011 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:55.012 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:28:55 np0005629333 nova_compute[244014]: 2026-02-25 12:28:55.190 244018 DEBUG nova.compute.manager [req-feb58b03-2be8-45e9-aa7d-ea5bed2d6da9 req-e3545075-6900-491e-ad8b-14e0402b2e09 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Received event network-vif-plugged-1b61d923-a06e-4948-b459-b6cf5b8c668d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:28:55 np0005629333 nova_compute[244014]: 2026-02-25 12:28:55.190 244018 DEBUG oslo_concurrency.lockutils [req-feb58b03-2be8-45e9-aa7d-ea5bed2d6da9 req-e3545075-6900-491e-ad8b-14e0402b2e09 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:55 np0005629333 nova_compute[244014]: 2026-02-25 12:28:55.191 244018 DEBUG oslo_concurrency.lockutils [req-feb58b03-2be8-45e9-aa7d-ea5bed2d6da9 req-e3545075-6900-491e-ad8b-14e0402b2e09 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:55 np0005629333 nova_compute[244014]: 2026-02-25 12:28:55.191 244018 DEBUG oslo_concurrency.lockutils [req-feb58b03-2be8-45e9-aa7d-ea5bed2d6da9 req-e3545075-6900-491e-ad8b-14e0402b2e09 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:55 np0005629333 nova_compute[244014]: 2026-02-25 12:28:55.192 244018 DEBUG nova.compute.manager [req-feb58b03-2be8-45e9-aa7d-ea5bed2d6da9 req-e3545075-6900-491e-ad8b-14e0402b2e09 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] No waiting events found dispatching network-vif-plugged-1b61d923-a06e-4948-b459-b6cf5b8c668d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:28:55 np0005629333 nova_compute[244014]: 2026-02-25 12:28:55.192 244018 WARNING nova.compute.manager [req-feb58b03-2be8-45e9-aa7d-ea5bed2d6da9 req-e3545075-6900-491e-ad8b-14e0402b2e09 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Received unexpected event network-vif-plugged-1b61d923-a06e-4948-b459-b6cf5b8c668d for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:28:55 np0005629333 nova_compute[244014]: 2026-02-25 12:28:55.192 244018 DEBUG nova.compute.manager [req-feb58b03-2be8-45e9-aa7d-ea5bed2d6da9 req-e3545075-6900-491e-ad8b-14e0402b2e09 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Received event network-vif-deleted-1b61d923-a06e-4948-b459-b6cf5b8c668d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:28:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:28:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3715193955' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:28:55 np0005629333 nova_compute[244014]: 2026-02-25 12:28:55.247 244018 DEBUG oslo_concurrency.processutils [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:28:55 np0005629333 nova_compute[244014]: 2026-02-25 12:28:55.251 244018 DEBUG nova.compute.provider_tree [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:28:55 np0005629333 nova_compute[244014]: 2026-02-25 12:28:55.408 244018 DEBUG nova.scheduler.client.report [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
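
    [annotation] The inventory dict above is what placement uses to size the host: schedulable capacity per resource class is (total - reserved) * allocation_ratio. Worked out for these numbers:

        # Capacity math for provider cb4dae98-2ac3-4218-9445-2320139e12ad,
        # using the inventory values from the log line above.
        inventory = {
            "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
            "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
            "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
        }
        for rc, inv in inventory.items():
            print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
        # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
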
Feb 25 07:28:55 np0005629333 nova_compute[244014]: 2026-02-25 12:28:55.431 244018 DEBUG oslo_concurrency.lockutils [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.982s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:55 np0005629333 nova_compute[244014]: 2026-02-25 12:28:55.455 244018 INFO nova.scheduler.client.report [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Deleted allocations for instance 9630899b-57d8-4e46-b9e0-8762f0f4f2cb#033[00m
Feb 25 07:28:55 np0005629333 nova_compute[244014]: 2026-02-25 12:28:55.522 244018 DEBUG oslo_concurrency.lockutils [None req-8f5e7490-ff39-4ad6-9ed6-ce23c2cb7739 f1a7d945e359492faaab7c197de3e9e8 3d46f1174f384dc3be789d4301748e2d - - default default] Lock "9630899b-57d8-4e46-b9e0-8762f0f4f2cb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 391 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 825 KiB/s wr, 209 op/s
Feb 25 07:28:56 np0005629333 nova_compute[244014]: 2026-02-25 12:28:56.471 244018 INFO nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Creating config drive at /var/lib/nova/instances/826789b1-e26a-4569-bd77-bd1ef76388be/disk.config#033[00m
Feb 25 07:28:56 np0005629333 nova_compute[244014]: 2026-02-25 12:28:56.476 244018 DEBUG oslo_concurrency.processutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/826789b1-e26a-4569-bd77-bd1ef76388be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpnjw6iv01 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:28:56 np0005629333 nova_compute[244014]: 2026-02-25 12:28:56.617 244018 DEBUG oslo_concurrency.processutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/826789b1-e26a-4569-bd77-bd1ef76388be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpnjw6iv01" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:28:56 np0005629333 nova_compute[244014]: 2026-02-25 12:28:56.652 244018 DEBUG nova.storage.rbd_utils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] rbd image 826789b1-e26a-4569-bd77-bd1ef76388be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:28:56 np0005629333 nova_compute[244014]: 2026-02-25 12:28:56.656 244018 DEBUG oslo_concurrency.processutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/826789b1-e26a-4569-bd77-bd1ef76388be/disk.config 826789b1-e26a-4569-bd77-bd1ef76388be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:28:56 np0005629333 nova_compute[244014]: 2026-02-25 12:28:56.798 244018 DEBUG oslo_concurrency.processutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/826789b1-e26a-4569-bd77-bd1ef76388be/disk.config 826789b1-e26a-4569-bd77-bd1ef76388be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:28:56 np0005629333 nova_compute[244014]: 2026-02-25 12:28:56.800 244018 INFO nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Deleting local config drive /var/lib/nova/instances/826789b1-e26a-4569-bd77-bd1ef76388be/disk.config because it was imported into RBD.#033[00m
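
    [annotation] The five records above are the whole config-drive round trip: build an ISO9660 image locally with mkisofs, import it into the Ceph vms pool as <uuid>_disk.config, then delete the local file. The same sequence as the two subprocess calls nova logged (the /tmp staging directory is a throwaway tempdir):

        import os
        import subprocess

        base = "/var/lib/nova/instances/826789b1-e26a-4569-bd77-bd1ef76388be"
        iso = os.path.join(base, "disk.config")

        # Build the config drive (flags exactly as logged; the volume label
        # must be config-2 for cloud-init to recognize it).
        subprocess.check_call(
            ["/usr/bin/mkisofs", "-o", iso,
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
             "-quiet", "-J", "-r", "-V", "config-2",
             "/tmp/tmpnjw6iv01"])

        # Push it into RBD, then drop the local copy (as the INFO line says).
        subprocess.check_call(
            ["rbd", "import", "--pool", "vms", iso,
             "826789b1-e26a-4569-bd77-bd1ef76388be_disk.config",
             "--image-format=2", "--id", "openstack",
             "--conf", "/etc/ceph/ceph.conf"])
        os.remove(iso)
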
Feb 25 07:28:56 np0005629333 kernel: tapba138ed1-c8: entered promiscuous mode
Feb 25 07:28:56 np0005629333 NetworkManager[49836]: <info>  [1772022536.8384] manager: (tapba138ed1-c8): new Tun device (/org/freedesktop/NetworkManager/Devices/255)
Feb 25 07:28:56 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:56Z|00569|binding|INFO|Claiming lport ba138ed1-c811-4043-9bd6-e1a5c6127f84 for this chassis.
Feb 25 07:28:56 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:56Z|00570|binding|INFO|ba138ed1-c811-4043-9bd6-e1a5c6127f84: Claiming fa:16:3e:bd:a8:78 10.100.0.14
Feb 25 07:28:56 np0005629333 nova_compute[244014]: 2026-02-25 12:28:56.841 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:56 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:56Z|00571|binding|INFO|Setting lport ba138ed1-c811-4043-9bd6-e1a5c6127f84 ovn-installed in OVS
Feb 25 07:28:56 np0005629333 nova_compute[244014]: 2026-02-25 12:28:56.857 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:56 np0005629333 nova_compute[244014]: 2026-02-25 12:28:56.864 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:56 np0005629333 systemd-udevd[299895]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:28:56 np0005629333 NetworkManager[49836]: <info>  [1772022536.8790] device (tapba138ed1-c8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:28:56 np0005629333 NetworkManager[49836]: <info>  [1772022536.8799] device (tapba138ed1-c8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:28:57 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:57Z|00572|binding|INFO|Setting lport ba138ed1-c811-4043-9bd6-e1a5c6127f84 up in Southbound
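
    [annotation] At this point ovn-controller has claimed the logical port and marked it up in the Southbound database (the three binding lines above). One way to verify the binding from the chassis, assuming ovn-sbctl can reach the Southbound DB:

        import subprocess

        # The chassis and up columns of this row should now reflect this host,
        # matching the Port_Binding update the metadata agent logs just below.
        print(subprocess.check_output(
            ["ovn-sbctl", "find", "Port_Binding",
             "logical_port=ba138ed1-c811-4043-9bd6-e1a5c6127f84"]).decode())
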
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.032 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:a8:78 10.100.0.14'], port_security=['fa:16:3e:bd:a8:78 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '826789b1-e26a-4569-bd77-bd1ef76388be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-23ab8bc1-e701-454d-a828-42d18cbd9afc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be8db082f3894d28b63a3709be538262', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3c589948-8c4c-4f73-8545-f79b3c943dbd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bcb3a436-cbdb-491a-80cb-92f630c9f966, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ba138ed1-c811-4043-9bd6-e1a5c6127f84) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.033 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ba138ed1-c811-4043-9bd6-e1a5c6127f84 in datapath 23ab8bc1-e701-454d-a828-42d18cbd9afc bound to our chassis#033[00m
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.034 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 23ab8bc1-e701-454d-a828-42d18cbd9afc#033[00m
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.041 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7f5d9203-0d38-4fc3-a9b4-bfabd9b4f2fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.042 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap23ab8bc1-e1 in ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
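
    [annotation] The metadata agent is building a per-network namespace here: a veth pair whose -e0 end stays in the root namespace while the -e1 end moves into ovnmeta-<network>. A rough iproute2 equivalent of what it does through pyroute2/privsep (a sketch, not the agent's code path):

        import subprocess

        ns = "ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc"
        for cmd in (
            ["ip", "netns", "add", ns],
            ["ip", "link", "add", "tap23ab8bc1-e0",
             "type", "veth", "peer", "name", "tap23ab8bc1-e1"],
            ["ip", "link", "set", "tap23ab8bc1-e1", "netns", ns],
            ["ip", "-n", ns, "link", "set", "tap23ab8bc1-e1", "up"],
            ["ip", "link", "set", "tap23ab8bc1-e0", "up"],
        ):
            subprocess.check_call(cmd)
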
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.045 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap23ab8bc1-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.045 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cdfb4ea2-7f2b-433d-87d2-056384b08983]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.046 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[38a97cbc-52c5-4acc-b214-b650cf8b4696]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:57 np0005629333 systemd-machined[210048]: New machine qemu-74-instance-00000041.
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.054 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[1b891dea-19cf-469e-88c2-c197ac6a4a18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.065 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bc29f777-9e57-44d7-9a29-f05fe5a08751]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:57 np0005629333 systemd[1]: Started Virtual Machine qemu-74-instance-00000041.
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.085 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b7be0200-6fc9-499e-852c-662f92f75e09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:57 np0005629333 systemd-udevd[299897]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:28:57 np0005629333 NetworkManager[49836]: <info>  [1772022537.0908] manager: (tap23ab8bc1-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/256)
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.091 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[67c4db90-d8b2-4a93-a735-fe183c068849]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.140 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[88b47e2c-51a2-46b6-ad59-4ab2cc58d46b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.143 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[656713b4-02df-4016-9376-e2403c25aa8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:57 np0005629333 NetworkManager[49836]: <info>  [1772022537.1667] device (tap23ab8bc1-e0): carrier: link connected
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.174 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f0c1227d-1d1a-4ccb-85c5-90774a29442f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.191 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[09345465-07a8-4848-b151-07535beb377a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap23ab8bc1-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:bf:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 450657, 'reachable_time': 39331, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299931, 'error': None, 'target': 'ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.206 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[38d140f6-962d-493a-9cef-231aacaaf5b6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe18:bf8a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 450657, 'tstamp': 450657}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299932, 'error': None, 'target': 'ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.221 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[69f2069e-ad7d-4695-b9b3-70c00b94aa6c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap23ab8bc1-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:bf:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 450657, 'reachable_time': 39331, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299933, 'error': None, 'target': 'ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.247 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fb23e843-3b3e-463a-8a86-fa74ee0c37b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.305 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f2f0b87b-fb3f-44d1-8b26-10e5bc0d7e52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.307 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap23ab8bc1-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.307 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.308 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap23ab8bc1-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:57 np0005629333 nova_compute[244014]: 2026-02-25 12:28:57.311 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:57 np0005629333 NetworkManager[49836]: <info>  [1772022537.3121] manager: (tap23ab8bc1-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/257)
Feb 25 07:28:57 np0005629333 kernel: tap23ab8bc1-e0: entered promiscuous mode
Feb 25 07:28:57 np0005629333 nova_compute[244014]: 2026-02-25 12:28:57.314 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.316 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap23ab8bc1-e0, col_values=(('external_ids', {'iface-id': 'aa84a84b-80b1-4514-9f74-691107edfa38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:28:57 np0005629333 nova_compute[244014]: 2026-02-25 12:28:57.317 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:57 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:57Z|00573|binding|INFO|Releasing lport aa84a84b-80b1-4514-9f74-691107edfa38 from this chassis (sb_readonly=0)
Feb 25 07:28:57 np0005629333 nova_compute[244014]: 2026-02-25 12:28:57.331 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:57 np0005629333 nova_compute[244014]: 2026-02-25 12:28:57.332 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.333 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/23ab8bc1-e701-454d-a828-42d18cbd9afc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/23ab8bc1-e701-454d-a828-42d18cbd9afc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.334 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fefb0849-2094-414a-b72c-4ea975c8475c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.334 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-23ab8bc1-e701-454d-a828-42d18cbd9afc
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/23ab8bc1-e701-454d-a828-42d18cbd9afc.pid.haproxy
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 23ab8bc1-e701-454d-a828-42d18cbd9afc
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:28:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:28:57.335 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc', 'env', 'PROCESS_TAG=haproxy-23ab8bc1-e701-454d-a828-42d18cbd9afc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/23ab8bc1-e701-454d-a828-42d18cbd9afc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 07:28:57 np0005629333 nova_compute[244014]: 2026-02-25 12:28:57.358 244018 DEBUG nova.compute.manager [req-5dcb7a35-ab56-4002-8bc7-fc783e28d25e req-ceb1d362-5683-465e-8c23-615bf036d3d6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Received event network-changed-05178abb-a113-4013-9194-9243afe9d0ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:28:57 np0005629333 nova_compute[244014]: 2026-02-25 12:28:57.359 244018 DEBUG nova.compute.manager [req-5dcb7a35-ab56-4002-8bc7-fc783e28d25e req-ceb1d362-5683-465e-8c23-615bf036d3d6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Refreshing instance network info cache due to event network-changed-05178abb-a113-4013-9194-9243afe9d0ff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:28:57 np0005629333 nova_compute[244014]: 2026-02-25 12:28:57.359 244018 DEBUG oslo_concurrency.lockutils [req-5dcb7a35-ab56-4002-8bc7-fc783e28d25e req-ceb1d362-5683-465e-8c23-615bf036d3d6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8086400b-ac70-4c79-928b-4f1966084384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:28:57 np0005629333 nova_compute[244014]: 2026-02-25 12:28:57.627 244018 DEBUG nova.network.neutron [req-75a109a1-31a7-488d-a264-8ba4f996bbee req-cf52dc2b-0f08-495f-8fdf-c63609b76b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Updated VIF entry in instance network info cache for port 05178abb-a113-4013-9194-9243afe9d0ff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:28:57 np0005629333 nova_compute[244014]: 2026-02-25 12:28:57.636 244018 DEBUG nova.network.neutron [req-75a109a1-31a7-488d-a264-8ba4f996bbee req-cf52dc2b-0f08-495f-8fdf-c63609b76b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Updating instance_info_cache with network_info: [{"id": "05178abb-a113-4013-9194-9243afe9d0ff", "address": "fa:16:3e:7e:1d:a7", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05178abb-a1", "ovs_interfaceid": "05178abb-a113-4013-9194-9243afe9d0ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:28:57 np0005629333 nova_compute[244014]: 2026-02-25 12:28:57.646 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:57 np0005629333 nova_compute[244014]: 2026-02-25 12:28:57.655 244018 DEBUG oslo_concurrency.lockutils [req-75a109a1-31a7-488d-a264-8ba4f996bbee req-cf52dc2b-0f08-495f-8fdf-c63609b76b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8086400b-ac70-4c79-928b-4f1966084384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:28:57 np0005629333 nova_compute[244014]: 2026-02-25 12:28:57.656 244018 DEBUG oslo_concurrency.lockutils [req-5dcb7a35-ab56-4002-8bc7-fc783e28d25e req-ceb1d362-5683-465e-8c23-615bf036d3d6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8086400b-ac70-4c79-928b-4f1966084384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:28:57 np0005629333 nova_compute[244014]: 2026-02-25 12:28:57.657 244018 DEBUG nova.network.neutron [req-5dcb7a35-ab56-4002-8bc7-fc783e28d25e req-ceb1d362-5683-465e-8c23-615bf036d3d6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Refreshing network info cache for port 05178abb-a113-4013-9194-9243afe9d0ff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:28:57 np0005629333 podman[299994]: 2026-02-25 12:28:57.602583836 +0000 UTC m=+0.023566469 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:28:57 np0005629333 nova_compute[244014]: 2026-02-25 12:28:57.712 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022537.7121496, 826789b1-e26a-4569-bd77-bd1ef76388be => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:28:57 np0005629333 nova_compute[244014]: 2026-02-25 12:28:57.713 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] VM Started (Lifecycle Event)#033[00m
Feb 25 07:28:57 np0005629333 nova_compute[244014]: 2026-02-25 12:28:57.733 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:28:57 np0005629333 nova_compute[244014]: 2026-02-25 12:28:57.737 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022537.713391, 826789b1-e26a-4569-bd77-bd1ef76388be => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:28:57 np0005629333 nova_compute[244014]: 2026-02-25 12:28:57.737 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:28:57 np0005629333 nova_compute[244014]: 2026-02-25 12:28:57.759 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:28:57 np0005629333 nova_compute[244014]: 2026-02-25 12:28:57.764 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:28:57 np0005629333 nova_compute[244014]: 2026-02-25 12:28:57.785 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:28:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 374 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 838 KiB/s wr, 249 op/s
Feb 25 07:28:58 np0005629333 podman[299994]: 2026-02-25 12:28:58.265771262 +0000 UTC m=+0.686753865 container create 8d8aa376e4a861fa2f7225446901311747ad0d5b7243089f1e3bba47c6fba1a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 25 07:28:58 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:58Z|00574|binding|INFO|Releasing lport aa84a84b-80b1-4514-9f74-691107edfa38 from this chassis (sb_readonly=0)
Feb 25 07:28:58 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:58Z|00575|binding|INFO|Releasing lport 3b184c15-8ef4-4e11-bd18-e1253a4ff440 from this chassis (sb_readonly=0)
Feb 25 07:28:58 np0005629333 ovn_controller[147040]: 2026-02-25T12:28:58Z|00576|binding|INFO|Releasing lport 6ef06fbe-6eb9-4d55-bbd8-9394a70da39f from this chassis (sb_readonly=0)
Feb 25 07:28:58 np0005629333 nova_compute[244014]: 2026-02-25 12:28:58.384 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:58 np0005629333 systemd[1]: Started libpod-conmon-8d8aa376e4a861fa2f7225446901311747ad0d5b7243089f1e3bba47c6fba1a6.scope.
Feb 25 07:28:58 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:28:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767066c256d5f6bdcbbcdf1ad704e26891b6cc82d028e5003584f4e4aff9b4b3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:28:58 np0005629333 podman[299994]: 2026-02-25 12:28:58.794193777 +0000 UTC m=+1.215176370 container init 8d8aa376e4a861fa2f7225446901311747ad0d5b7243089f1e3bba47c6fba1a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:28:58 np0005629333 podman[299994]: 2026-02-25 12:28:58.803197933 +0000 UTC m=+1.224180526 container start 8d8aa376e4a861fa2f7225446901311747ad0d5b7243089f1e3bba47c6fba1a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 25 07:28:58 np0005629333 neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc[300020]: [NOTICE]   (300024) : New worker (300026) forked
Feb 25 07:28:58 np0005629333 neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc[300020]: [NOTICE]   (300024) : Loading success.
Feb 25 07:28:59 np0005629333 nova_compute[244014]: 2026-02-25 12:28:59.456 244018 DEBUG nova.compute.manager [req-ca6ddb1f-c508-4839-8363-dcd37070e41c req-9286d71d-4895-4501-b8d7-279bfdb9d891 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Received event network-vif-plugged-ba138ed1-c811-4043-9bd6-e1a5c6127f84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:28:59 np0005629333 nova_compute[244014]: 2026-02-25 12:28:59.457 244018 DEBUG oslo_concurrency.lockutils [req-ca6ddb1f-c508-4839-8363-dcd37070e41c req-9286d71d-4895-4501-b8d7-279bfdb9d891 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:59 np0005629333 nova_compute[244014]: 2026-02-25 12:28:59.458 244018 DEBUG oslo_concurrency.lockutils [req-ca6ddb1f-c508-4839-8363-dcd37070e41c req-9286d71d-4895-4501-b8d7-279bfdb9d891 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:59 np0005629333 nova_compute[244014]: 2026-02-25 12:28:59.458 244018 DEBUG oslo_concurrency.lockutils [req-ca6ddb1f-c508-4839-8363-dcd37070e41c req-9286d71d-4895-4501-b8d7-279bfdb9d891 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:59 np0005629333 nova_compute[244014]: 2026-02-25 12:28:59.459 244018 DEBUG nova.compute.manager [req-ca6ddb1f-c508-4839-8363-dcd37070e41c req-9286d71d-4895-4501-b8d7-279bfdb9d891 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Processing event network-vif-plugged-ba138ed1-c811-4043-9bd6-e1a5c6127f84 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:28:59 np0005629333 nova_compute[244014]: 2026-02-25 12:28:59.459 244018 DEBUG nova.compute.manager [req-ca6ddb1f-c508-4839-8363-dcd37070e41c req-9286d71d-4895-4501-b8d7-279bfdb9d891 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Received event network-vif-plugged-ba138ed1-c811-4043-9bd6-e1a5c6127f84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:28:59 np0005629333 nova_compute[244014]: 2026-02-25 12:28:59.459 244018 DEBUG oslo_concurrency.lockutils [req-ca6ddb1f-c508-4839-8363-dcd37070e41c req-9286d71d-4895-4501-b8d7-279bfdb9d891 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:28:59 np0005629333 nova_compute[244014]: 2026-02-25 12:28:59.460 244018 DEBUG oslo_concurrency.lockutils [req-ca6ddb1f-c508-4839-8363-dcd37070e41c req-9286d71d-4895-4501-b8d7-279bfdb9d891 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:28:59 np0005629333 nova_compute[244014]: 2026-02-25 12:28:59.460 244018 DEBUG oslo_concurrency.lockutils [req-ca6ddb1f-c508-4839-8363-dcd37070e41c req-9286d71d-4895-4501-b8d7-279bfdb9d891 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:28:59 np0005629333 nova_compute[244014]: 2026-02-25 12:28:59.460 244018 DEBUG nova.compute.manager [req-ca6ddb1f-c508-4839-8363-dcd37070e41c req-9286d71d-4895-4501-b8d7-279bfdb9d891 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] No waiting events found dispatching network-vif-plugged-ba138ed1-c811-4043-9bd6-e1a5c6127f84 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:28:59 np0005629333 nova_compute[244014]: 2026-02-25 12:28:59.461 244018 WARNING nova.compute.manager [req-ca6ddb1f-c508-4839-8363-dcd37070e41c req-9286d71d-4895-4501-b8d7-279bfdb9d891 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Received unexpected event network-vif-plugged-ba138ed1-c811-4043-9bd6-e1a5c6127f84 for instance with vm_state building and task_state spawning.#033[00m
Feb 25 07:28:59 np0005629333 nova_compute[244014]: 2026-02-25 12:28:59.461 244018 DEBUG nova.compute.manager [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:28:59 np0005629333 nova_compute[244014]: 2026-02-25 12:28:59.464 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022539.4643273, 826789b1-e26a-4569-bd77-bd1ef76388be => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:28:59 np0005629333 nova_compute[244014]: 2026-02-25 12:28:59.464 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:28:59 np0005629333 nova_compute[244014]: 2026-02-25 12:28:59.480 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:28:59 np0005629333 nova_compute[244014]: 2026-02-25 12:28:59.482 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:28:59 np0005629333 nova_compute[244014]: 2026-02-25 12:28:59.486 244018 INFO nova.virt.libvirt.driver [-] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Instance spawned successfully.#033[00m
Feb 25 07:28:59 np0005629333 nova_compute[244014]: 2026-02-25 12:28:59.486 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:28:59 np0005629333 nova_compute[244014]: 2026-02-25 12:28:59.488 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:28:59 np0005629333 nova_compute[244014]: 2026-02-25 12:28:59.732 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:28:59 np0005629333 nova_compute[244014]: 2026-02-25 12:28:59.848 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:59 np0005629333 nova_compute[244014]: 2026-02-25 12:28:59.849 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:59 np0005629333 nova_compute[244014]: 2026-02-25 12:28:59.850 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:59 np0005629333 nova_compute[244014]: 2026-02-25 12:28:59.850 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:59 np0005629333 nova_compute[244014]: 2026-02-25 12:28:59.851 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:59 np0005629333 nova_compute[244014]: 2026-02-25 12:28:59.851 244018 DEBUG nova.virt.libvirt.driver [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:28:59 np0005629333 nova_compute[244014]: 2026-02-25 12:28:59.866 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:28:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 374 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 14 KiB/s wr, 166 op/s
Feb 25 07:28:59 np0005629333 nova_compute[244014]: 2026-02-25 12:28:59.946 244018 INFO nova.compute.manager [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Took 18.77 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:28:59 np0005629333 nova_compute[244014]: 2026-02-25 12:28:59.947 244018 DEBUG nova.compute.manager [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:00 np0005629333 nova_compute[244014]: 2026-02-25 12:29:00.027 244018 INFO nova.compute.manager [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Took 19.91 seconds to build instance.#033[00m
Feb 25 07:29:00 np0005629333 nova_compute[244014]: 2026-02-25 12:29:00.050 244018 DEBUG oslo_concurrency.lockutils [None req-9bedd9df-8a74-484f-9f16-d137a2262787 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.053s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:29:00 np0005629333 nova_compute[244014]: 2026-02-25 12:29:00.619 244018 DEBUG nova.network.neutron [req-5dcb7a35-ab56-4002-8bc7-fc783e28d25e req-ceb1d362-5683-465e-8c23-615bf036d3d6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Updated VIF entry in instance network info cache for port 05178abb-a113-4013-9194-9243afe9d0ff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:29:00 np0005629333 nova_compute[244014]: 2026-02-25 12:29:00.620 244018 DEBUG nova.network.neutron [req-5dcb7a35-ab56-4002-8bc7-fc783e28d25e req-ceb1d362-5683-465e-8c23-615bf036d3d6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Updating instance_info_cache with network_info: [{"id": "05178abb-a113-4013-9194-9243afe9d0ff", "address": "fa:16:3e:7e:1d:a7", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05178abb-a1", "ovs_interfaceid": "05178abb-a113-4013-9194-9243afe9d0ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:29:00 np0005629333 nova_compute[244014]: 2026-02-25 12:29:00.641 244018 DEBUG oslo_concurrency.lockutils [req-5dcb7a35-ab56-4002-8bc7-fc783e28d25e req-ceb1d362-5683-465e-8c23-615bf036d3d6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8086400b-ac70-4c79-928b-4f1966084384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:29:01 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:01Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2e:3b:33 10.100.0.14
Feb 25 07:29:01 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:01Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2e:3b:33 10.100.0.14
Feb 25 07:29:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:29:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:29:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:29:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:29:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:29:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:29:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 390 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 1.3 MiB/s wr, 217 op/s
Feb 25 07:29:02 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:02Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7e:1d:a7 10.100.0.10
Feb 25 07:29:02 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:02Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7e:1d:a7 10.100.0.10
Feb 25 07:29:02 np0005629333 nova_compute[244014]: 2026-02-25 12:29:02.556 244018 DEBUG oslo_concurrency.lockutils [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Acquiring lock "826789b1-e26a-4569-bd77-bd1ef76388be" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:02 np0005629333 nova_compute[244014]: 2026-02-25 12:29:02.556 244018 DEBUG oslo_concurrency.lockutils [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:02 np0005629333 nova_compute[244014]: 2026-02-25 12:29:02.556 244018 DEBUG oslo_concurrency.lockutils [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Acquiring lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:02 np0005629333 nova_compute[244014]: 2026-02-25 12:29:02.556 244018 DEBUG oslo_concurrency.lockutils [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:02 np0005629333 nova_compute[244014]: 2026-02-25 12:29:02.557 244018 DEBUG oslo_concurrency.lockutils [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:02 np0005629333 nova_compute[244014]: 2026-02-25 12:29:02.557 244018 INFO nova.compute.manager [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Terminating instance#033[00m
Feb 25 07:29:02 np0005629333 nova_compute[244014]: 2026-02-25 12:29:02.558 244018 DEBUG nova.compute.manager [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:29:02 np0005629333 kernel: tapba138ed1-c8 (unregistering): left promiscuous mode
Feb 25 07:29:02 np0005629333 NetworkManager[49836]: <info>  [1772022542.5980] device (tapba138ed1-c8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:29:02 np0005629333 nova_compute[244014]: 2026-02-25 12:29:02.604 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:02 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:02Z|00577|binding|INFO|Releasing lport ba138ed1-c811-4043-9bd6-e1a5c6127f84 from this chassis (sb_readonly=0)
Feb 25 07:29:02 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:02Z|00578|binding|INFO|Setting lport ba138ed1-c811-4043-9bd6-e1a5c6127f84 down in Southbound
Feb 25 07:29:02 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:02Z|00579|binding|INFO|Removing iface tapba138ed1-c8 ovn-installed in OVS
Feb 25 07:29:02 np0005629333 nova_compute[244014]: 2026-02-25 12:29:02.608 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:02 np0005629333 nova_compute[244014]: 2026-02-25 12:29:02.614 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:02.618 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:a8:78 10.100.0.14'], port_security=['fa:16:3e:bd:a8:78 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '826789b1-e26a-4569-bd77-bd1ef76388be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-23ab8bc1-e701-454d-a828-42d18cbd9afc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'be8db082f3894d28b63a3709be538262', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3c589948-8c4c-4f73-8545-f79b3c943dbd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bcb3a436-cbdb-491a-80cb-92f630c9f966, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ba138ed1-c811-4043-9bd6-e1a5c6127f84) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:29:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:02.621 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ba138ed1-c811-4043-9bd6-e1a5c6127f84 in datapath 23ab8bc1-e701-454d-a828-42d18cbd9afc unbound from our chassis#033[00m
Feb 25 07:29:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:02.622 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 23ab8bc1-e701-454d-a828-42d18cbd9afc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:29:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:02.624 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e4569eeb-9dca-42fa-815c-cfa5c3655da3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:02.624 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc namespace which is not needed anymore#033[00m
Feb 25 07:29:02 np0005629333 nova_compute[244014]: 2026-02-25 12:29:02.649 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:02 np0005629333 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000041.scope: Deactivated successfully.
Feb 25 07:29:02 np0005629333 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000041.scope: Consumed 3.819s CPU time.
Feb 25 07:29:02 np0005629333 systemd-machined[210048]: Machine qemu-74-instance-00000041 terminated.
Feb 25 07:29:02 np0005629333 neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc[300020]: [NOTICE]   (300024) : haproxy version is 2.8.14-c23fe91
Feb 25 07:29:02 np0005629333 neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc[300020]: [NOTICE]   (300024) : path to executable is /usr/sbin/haproxy
Feb 25 07:29:02 np0005629333 neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc[300020]: [WARNING]  (300024) : Exiting Master process...
Feb 25 07:29:02 np0005629333 neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc[300020]: [WARNING]  (300024) : Exiting Master process...
Feb 25 07:29:02 np0005629333 neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc[300020]: [ALERT]    (300024) : Current worker (300026) exited with code 143 (Terminated)
Feb 25 07:29:02 np0005629333 neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc[300020]: [WARNING]  (300024) : All workers exited. Exiting... (0)
Feb 25 07:29:02 np0005629333 systemd[1]: libpod-8d8aa376e4a861fa2f7225446901311747ad0d5b7243089f1e3bba47c6fba1a6.scope: Deactivated successfully.
Feb 25 07:29:02 np0005629333 podman[300138]: 2026-02-25 12:29:02.743486619 +0000 UTC m=+0.040587492 container died 8d8aa376e4a861fa2f7225446901311747ad0d5b7243089f1e3bba47c6fba1a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 07:29:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:29:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:29:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:29:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:29:02 np0005629333 systemd[1]: var-lib-containers-storage-overlay-767066c256d5f6bdcbbcdf1ad704e26891b6cc82d028e5003584f4e4aff9b4b3-merged.mount: Deactivated successfully.
Feb 25 07:29:02 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8d8aa376e4a861fa2f7225446901311747ad0d5b7243089f1e3bba47c6fba1a6-userdata-shm.mount: Deactivated successfully.
Feb 25 07:29:02 np0005629333 podman[300138]: 2026-02-25 12:29:02.784997746 +0000 UTC m=+0.082098639 container cleanup 8d8aa376e4a861fa2f7225446901311747ad0d5b7243089f1e3bba47c6fba1a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 07:29:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:29:02 np0005629333 nova_compute[244014]: 2026-02-25 12:29:02.786 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:29:02 np0005629333 systemd[1]: libpod-conmon-8d8aa376e4a861fa2f7225446901311747ad0d5b7243089f1e3bba47c6fba1a6.scope: Deactivated successfully.
Feb 25 07:29:02 np0005629333 nova_compute[244014]: 2026-02-25 12:29:02.793 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:29:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:29:02 np0005629333 nova_compute[244014]: 2026-02-25 12:29:02.811 244018 INFO nova.virt.libvirt.driver [-] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Instance destroyed successfully.#033[00m
Feb 25 07:29:02 np0005629333 nova_compute[244014]: 2026-02-25 12:29:02.812 244018 DEBUG nova.objects.instance [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lazy-loading 'resources' on Instance uuid 826789b1-e26a-4569-bd77-bd1ef76388be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:29:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:29:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:29:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
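
The ceph-mon audit entries above show the cephadm module (acting as mgr.compute-0.jzfame) refreshing the cluster's minimal client configuration and auth keyrings; each request arrives at the leader mon as a mon_command and is then logged on the audit channel. A minimal sketch of issuing the same mon commands from any node with admin credentials, assuming the ceph CLI and /etc/ceph/ceph.conf are in place:

    import subprocess

    def ceph(*args):
        # Run a ceph CLI command; each call is dispatched to the monitor as a
        # mon_command and appears on the audit channel like the lines above.
        return subprocess.run(("ceph",) + args, check=True,
                              capture_output=True, text=True).stdout

    minimal_conf = ceph("config", "generate-minimal-conf")  # the [DBG] dispatch above
    admin_keyring = ceph("auth", "get", "client.admin")     # the [INF] dispatch above
    print(minimal_conf)
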
Feb 25 07:29:02 np0005629333 podman[300177]: 2026-02-25 12:29:02.861465314 +0000 UTC m=+0.054088684 container remove 8d8aa376e4a861fa2f7225446901311747ad0d5b7243089f1e3bba47c6fba1a6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:29:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:02.867 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bdb33748-9392-46e6-993d-c6bf6fd41d1c]: (4, ('Wed Feb 25 12:29:02 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc (8d8aa376e4a861fa2f7225446901311747ad0d5b7243089f1e3bba47c6fba1a6)\n8d8aa376e4a861fa2f7225446901311747ad0d5b7243089f1e3bba47c6fba1a6\nWed Feb 25 12:29:02 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc (8d8aa376e4a861fa2f7225446901311747ad0d5b7243089f1e3bba47c6fba1a6)\n8d8aa376e4a861fa2f7225446901311747ad0d5b7243089f1e3bba47c6fba1a6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:02.868 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7d54b366-adb1-446f-9138-2650d66b0356]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:02.869 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap23ab8bc1-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:02 np0005629333 nova_compute[244014]: 2026-02-25 12:29:02.870 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:02 np0005629333 nova_compute[244014]: 2026-02-25 12:29:02.877 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:02 np0005629333 kernel: tap23ab8bc1-e0: left promiscuous mode
Feb 25 07:29:02 np0005629333 podman[300161]: 2026-02-25 12:29:02.880529865 +0000 UTC m=+0.091895427 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 07:29:02 np0005629333 nova_compute[244014]: 2026-02-25 12:29:02.881 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:02.884 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eca6ad54-3784-45db-8492-294dff622a53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:02.898 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8179e798-284e-4cd3-8d3b-71d55bfcefc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:02.899 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[580ab355-c336-43a7-9786-809eb26fd7f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:02.915 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bc85f059-17cb-4b1a-ac4d-bd8d651a6dca]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 450649, 'reachable_time': 32336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300269, 'error': None, 'target': 'ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:02 np0005629333 systemd[1]: run-netns-ovnmeta\x2d23ab8bc1\x2de701\x2d454d\x2da828\x2d42d18cbd9afc.mount: Deactivated successfully.
Feb 25 07:29:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:02.918 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:29:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:02.918 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[2a576c1b-5e64-4d72-8462-74ce4fc02edb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
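
The pair of lines above is the privsep daemon tearing down the ovnmeta-23ab8bc1-… namespace once its haproxy sidecar container is gone; neutron's remove_netns ultimately unlinks the namespace via pyroute2. A minimal sketch of the same call, assuming root privileges (in production it runs inside the privsep daemon), with the namespace name taken from the log:

    import errno
    from pyroute2 import netns

    NS = "ovnmeta-23ab8bc1-e701-454d-a828-42d18cbd9afc"
    try:
        netns.remove(NS)  # unlinks /run/netns/<NS>; systemd then logs the mount deactivating
    except OSError as exc:
        if exc.errno != errno.ENOENT:  # tolerate "already deleted"
            raise
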
Feb 25 07:29:02 np0005629333 podman[300176]: 2026-02-25 12:29:02.941097753 +0000 UTC m=+0.123168744 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
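
Both health_status=healthy events above come from podman's healthcheck timer executing the test mounted at /openstack/healthcheck (see the 'healthcheck' entry in the config_data). The same probe can be triggered on demand; a sketch, assuming the container names from the log:

    import subprocess

    for name in ("ovn_metadata_agent", "ovn_controller"):
        # "podman healthcheck run" executes the container's configured test
        # and exits non-zero when the check fails.
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")
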
Feb 25 07:29:03 np0005629333 nova_compute[244014]: 2026-02-25 12:29:03.064 244018 DEBUG nova.virt.libvirt.vif [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:28:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-846051174',display_name='tempest-ServerPasswordTestJSON-server-846051174',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-846051174',id=65,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:28:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='be8db082f3894d28b63a3709be538262',ramdisk_id='',reservation_id='r-624y952x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerPasswordTestJSON-1533125897',owner_user_name='tempest-ServerPasswordTestJSON-1533125897-project-member',password_0='',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:29:02Z,user_data=None,user_id='05ca7159581049009a4223cf01ebf146',uuid=826789b1-e26a-4569-bd77-bd1ef76388be,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "address": "fa:16:3e:bd:a8:78", "network": {"id": "23ab8bc1-e701-454d-a828-42d18cbd9afc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1699028190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be8db082f3894d28b63a3709be538262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba138ed1-c8", "ovs_interfaceid": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:29:03 np0005629333 nova_compute[244014]: 2026-02-25 12:29:03.065 244018 DEBUG nova.network.os_vif_util [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Converting VIF {"id": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "address": "fa:16:3e:bd:a8:78", "network": {"id": "23ab8bc1-e701-454d-a828-42d18cbd9afc", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1699028190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "be8db082f3894d28b63a3709be538262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba138ed1-c8", "ovs_interfaceid": "ba138ed1-c811-4043-9bd6-e1a5c6127f84", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:29:03 np0005629333 nova_compute[244014]: 2026-02-25 12:29:03.066 244018 DEBUG nova.network.os_vif_util [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:a8:78,bridge_name='br-int',has_traffic_filtering=True,id=ba138ed1-c811-4043-9bd6-e1a5c6127f84,network=Network(23ab8bc1-e701-454d-a828-42d18cbd9afc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba138ed1-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:29:03 np0005629333 nova_compute[244014]: 2026-02-25 12:29:03.066 244018 DEBUG os_vif [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:a8:78,bridge_name='br-int',has_traffic_filtering=True,id=ba138ed1-c811-4043-9bd6-e1a5c6127f84,network=Network(23ab8bc1-e701-454d-a828-42d18cbd9afc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba138ed1-c8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:29:03 np0005629333 nova_compute[244014]: 2026-02-25 12:29:03.068 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:03 np0005629333 nova_compute[244014]: 2026-02-25 12:29:03.069 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba138ed1-c8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:03 np0005629333 nova_compute[244014]: 2026-02-25 12:29:03.071 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:03 np0005629333 nova_compute[244014]: 2026-02-25 12:29:03.073 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:29:03 np0005629333 nova_compute[244014]: 2026-02-25 12:29:03.077 244018 INFO os_vif [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:a8:78,bridge_name='br-int',has_traffic_filtering=True,id=ba138ed1-c811-4043-9bd6-e1a5c6127f84,network=Network(23ab8bc1-e701-454d-a828-42d18cbd9afc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba138ed1-c8')#033[00m
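
The unplug sequence above ends with os-vif deleting the instance's tap port from br-int via an ovsdbapp DelPortCommand. A minimal sketch of the same transaction driven directly through ovsdbapp's Open vSwitch API; the ovsdb socket path is an assumption, while the port and bridge names are taken from the logged command:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                          "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # if_exists=True keeps the delete idempotent, matching the logged
    # DelPortCommand(..., port=tapba138ed1-c8, bridge=br-int, if_exists=True)
    api.del_port("tapba138ed1-c8", bridge="br-int",
                 if_exists=True).execute(check_error=True)
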
Feb 25 07:29:03 np0005629333 podman[300316]: 2026-02-25 12:29:03.173823751 +0000 UTC m=+0.023222459 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:29:03 np0005629333 podman[300316]: 2026-02-25 12:29:03.280055694 +0000 UTC m=+0.129454362 container create 4d062eb0472a163949585027c98df2a833d320b39b5c6307d4ccf2b26b327ee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hofstadter, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 07:29:03 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:29:03 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:29:03 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:29:03 np0005629333 systemd[1]: Started libpod-conmon-4d062eb0472a163949585027c98df2a833d320b39b5c6307d4ccf2b26b327ee8.scope.
Feb 25 07:29:03 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:29:03 np0005629333 podman[300316]: 2026-02-25 12:29:03.370123658 +0000 UTC m=+0.219522356 container init 4d062eb0472a163949585027c98df2a833d320b39b5c6307d4ccf2b26b327ee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hofstadter, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:29:03 np0005629333 podman[300316]: 2026-02-25 12:29:03.382654733 +0000 UTC m=+0.232053411 container start 4d062eb0472a163949585027c98df2a833d320b39b5c6307d4ccf2b26b327ee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hofstadter, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 07:29:03 np0005629333 podman[300316]: 2026-02-25 12:29:03.387065968 +0000 UTC m=+0.236464636 container attach 4d062eb0472a163949585027c98df2a833d320b39b5c6307d4ccf2b26b327ee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:29:03 np0005629333 funny_hofstadter[300332]: 167 167
Feb 25 07:29:03 np0005629333 systemd[1]: libpod-4d062eb0472a163949585027c98df2a833d320b39b5c6307d4ccf2b26b327ee8.scope: Deactivated successfully.
Feb 25 07:29:03 np0005629333 podman[300316]: 2026-02-25 12:29:03.39099373 +0000 UTC m=+0.240392398 container died 4d062eb0472a163949585027c98df2a833d320b39b5c6307d4ccf2b26b327ee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:29:03 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c6727e85ca8755540978617833a2740aa501d9128825fd4a5eb532fe2a667ace-merged.mount: Deactivated successfully.
Feb 25 07:29:03 np0005629333 podman[300316]: 2026-02-25 12:29:03.429720728 +0000 UTC m=+0.279119396 container remove 4d062eb0472a163949585027c98df2a833d320b39b5c6307d4ccf2b26b327ee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Feb 25 07:29:03 np0005629333 systemd[1]: libpod-conmon-4d062eb0472a163949585027c98df2a833d320b39b5c6307d4ccf2b26b327ee8.scope: Deactivated successfully.
Feb 25 07:29:03 np0005629333 nova_compute[244014]: 2026-02-25 12:29:03.515 244018 INFO nova.virt.libvirt.driver [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Deleting instance files /var/lib/nova/instances/826789b1-e26a-4569-bd77-bd1ef76388be_del#033[00m
Feb 25 07:29:03 np0005629333 nova_compute[244014]: 2026-02-25 12:29:03.516 244018 INFO nova.virt.libvirt.driver [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Deletion of /var/lib/nova/instances/826789b1-e26a-4569-bd77-bd1ef76388be_del complete#033[00m
Feb 25 07:29:03 np0005629333 nova_compute[244014]: 2026-02-25 12:29:03.532 244018 DEBUG nova.compute.manager [req-85ca27bc-4a93-40c8-bf70-792b6965f65c req-86fca6b5-1970-499b-8374-c1f513144908 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Received event network-vif-unplugged-ba138ed1-c811-4043-9bd6-e1a5c6127f84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:03 np0005629333 nova_compute[244014]: 2026-02-25 12:29:03.532 244018 DEBUG oslo_concurrency.lockutils [req-85ca27bc-4a93-40c8-bf70-792b6965f65c req-86fca6b5-1970-499b-8374-c1f513144908 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:03 np0005629333 nova_compute[244014]: 2026-02-25 12:29:03.533 244018 DEBUG oslo_concurrency.lockutils [req-85ca27bc-4a93-40c8-bf70-792b6965f65c req-86fca6b5-1970-499b-8374-c1f513144908 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:03 np0005629333 nova_compute[244014]: 2026-02-25 12:29:03.533 244018 DEBUG oslo_concurrency.lockutils [req-85ca27bc-4a93-40c8-bf70-792b6965f65c req-86fca6b5-1970-499b-8374-c1f513144908 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:03 np0005629333 nova_compute[244014]: 2026-02-25 12:29:03.533 244018 DEBUG nova.compute.manager [req-85ca27bc-4a93-40c8-bf70-792b6965f65c req-86fca6b5-1970-499b-8374-c1f513144908 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] No waiting events found dispatching network-vif-unplugged-ba138ed1-c811-4043-9bd6-e1a5c6127f84 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:29:03 np0005629333 nova_compute[244014]: 2026-02-25 12:29:03.533 244018 DEBUG nova.compute.manager [req-85ca27bc-4a93-40c8-bf70-792b6965f65c req-86fca6b5-1970-499b-8374-c1f513144908 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Received event network-vif-unplugged-ba138ed1-c811-4043-9bd6-e1a5c6127f84 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
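
The event-handling block above brackets pop_instance_event with a per-instance "<uuid>-events" lock, which is what produces the Acquiring/acquired/released triple in the DEBUG output. A sketch of the underlying oslo.concurrency pattern, with a hypothetical handler body:

    from oslo_concurrency import lockutils

    def pop_instance_event(instance_uuid, event_name):
        # The lock name mirrors nova's "<instance-uuid>-events" scheme; entering
        # and leaving this block emits the acquired/released DEBUG lines above.
        with lockutils.lock(f"{instance_uuid}-events"):
            # ... pop any waiter registered for event_name (none registered here,
            # hence "No waiting events found" in the log) ...
            return None
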
Feb 25 07:29:03 np0005629333 podman[300356]: 2026-02-25 12:29:03.550656257 +0000 UTC m=+0.034585341 container create 8fc1e4fb6cd27cdbc78e6487241a75f62536030dc0cf38b22ef8cd160500bf13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_newton, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:29:03 np0005629333 nova_compute[244014]: 2026-02-25 12:29:03.572 244018 INFO nova.compute.manager [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Took 1.01 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:29:03 np0005629333 nova_compute[244014]: 2026-02-25 12:29:03.572 244018 DEBUG oslo.service.loopingcall [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:29:03 np0005629333 nova_compute[244014]: 2026-02-25 12:29:03.573 244018 DEBUG nova.compute.manager [-] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:29:03 np0005629333 nova_compute[244014]: 2026-02-25 12:29:03.573 244018 DEBUG nova.network.neutron [-] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:29:03 np0005629333 systemd[1]: Started libpod-conmon-8fc1e4fb6cd27cdbc78e6487241a75f62536030dc0cf38b22ef8cd160500bf13.scope.
Feb 25 07:29:03 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:29:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbe86844e603c8ecce5a06bbe39221bf02b0a3a6ae27a6d6bfec5f00df9b6c14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:29:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbe86844e603c8ecce5a06bbe39221bf02b0a3a6ae27a6d6bfec5f00df9b6c14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:29:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbe86844e603c8ecce5a06bbe39221bf02b0a3a6ae27a6d6bfec5f00df9b6c14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:29:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbe86844e603c8ecce5a06bbe39221bf02b0a3a6ae27a6d6bfec5f00df9b6c14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:29:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbe86844e603c8ecce5a06bbe39221bf02b0a3a6ae27a6d6bfec5f00df9b6c14/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:29:03 np0005629333 podman[300356]: 2026-02-25 12:29:03.537443113 +0000 UTC m=+0.021372207 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:29:03 np0005629333 podman[300356]: 2026-02-25 12:29:03.642026898 +0000 UTC m=+0.125956002 container init 8fc1e4fb6cd27cdbc78e6487241a75f62536030dc0cf38b22ef8cd160500bf13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_newton, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 07:29:03 np0005629333 podman[300356]: 2026-02-25 12:29:03.646591788 +0000 UTC m=+0.130520862 container start 8fc1e4fb6cd27cdbc78e6487241a75f62536030dc0cf38b22ef8cd160500bf13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_newton, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 07:29:03 np0005629333 podman[300356]: 2026-02-25 12:29:03.649228863 +0000 UTC m=+0.133157937 container attach 8fc1e4fb6cd27cdbc78e6487241a75f62536030dc0cf38b22ef8cd160500bf13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_newton, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 07:29:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 427 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 3.9 MiB/s wr, 277 op/s
Feb 25 07:29:04 np0005629333 reverent_newton[300372]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:29:04 np0005629333 reverent_newton[300372]: --> All data devices are unavailable
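
reverent_newton is one of the short-lived cephadm helper containers seen throughout this window; here it runs ceph-volume against the candidate devices, finds three LVM devices that are already consumed, and reports none available for new OSDs. Device availability can be inspected the same way; a sketch, assuming ceph-volume is installed on the host and that its JSON inventory carries the usual path/available/rejected_reasons fields:

    import json
    import subprocess

    # "ceph-volume inventory --format json" lists each device with an
    # "available" flag and the reasons it was rejected (e.g. an LVM member).
    out = subprocess.run(["ceph-volume", "inventory", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    for dev in json.loads(out):
        print(dev["path"], dev["available"], dev.get("rejected_reasons", []))
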
Feb 25 07:29:04 np0005629333 systemd[1]: libpod-8fc1e4fb6cd27cdbc78e6487241a75f62536030dc0cf38b22ef8cd160500bf13.scope: Deactivated successfully.
Feb 25 07:29:04 np0005629333 podman[300356]: 2026-02-25 12:29:04.073035851 +0000 UTC m=+0.556964935 container died 8fc1e4fb6cd27cdbc78e6487241a75f62536030dc0cf38b22ef8cd160500bf13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 07:29:04 np0005629333 systemd[1]: var-lib-containers-storage-overlay-dbe86844e603c8ecce5a06bbe39221bf02b0a3a6ae27a6d6bfec5f00df9b6c14-merged.mount: Deactivated successfully.
Feb 25 07:29:04 np0005629333 podman[300356]: 2026-02-25 12:29:04.124469689 +0000 UTC m=+0.608398803 container remove 8fc1e4fb6cd27cdbc78e6487241a75f62536030dc0cf38b22ef8cd160500bf13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_newton, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:29:04 np0005629333 systemd[1]: libpod-conmon-8fc1e4fb6cd27cdbc78e6487241a75f62536030dc0cf38b22ef8cd160500bf13.scope: Deactivated successfully.
Feb 25 07:29:04 np0005629333 nova_compute[244014]: 2026-02-25 12:29:04.323 244018 DEBUG nova.network.neutron [-] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:29:04 np0005629333 nova_compute[244014]: 2026-02-25 12:29:04.343 244018 INFO nova.compute.manager [-] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Took 0.77 seconds to deallocate network for instance.#033[00m
Feb 25 07:29:04 np0005629333 nova_compute[244014]: 2026-02-25 12:29:04.410 244018 DEBUG oslo_concurrency.lockutils [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:04 np0005629333 nova_compute[244014]: 2026-02-25 12:29:04.410 244018 DEBUG oslo_concurrency.lockutils [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:04 np0005629333 nova_compute[244014]: 2026-02-25 12:29:04.415 244018 DEBUG nova.compute.manager [req-dd486d5a-61d5-4390-aa67-961ac8607cec req-33bf3965-7d88-4686-aefd-d7dc3a445000 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Received event network-vif-deleted-ba138ed1-c811-4043-9bd6-e1a5c6127f84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:04 np0005629333 podman[300466]: 2026-02-25 12:29:04.536453792 +0000 UTC m=+0.035438146 container create 93f5403e4c9878db2007220d137e021c00daff18ee06921c30443cf426e71afa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mccarthy, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 07:29:04 np0005629333 nova_compute[244014]: 2026-02-25 12:29:04.570 244018 DEBUG oslo_concurrency.processutils [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:04 np0005629333 systemd[1]: Started libpod-conmon-93f5403e4c9878db2007220d137e021c00daff18ee06921c30443cf426e71afa.scope.
Feb 25 07:29:04 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:29:04 np0005629333 podman[300466]: 2026-02-25 12:29:04.520630353 +0000 UTC m=+0.019614697 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:29:04 np0005629333 podman[300466]: 2026-02-25 12:29:04.623291805 +0000 UTC m=+0.122276219 container init 93f5403e4c9878db2007220d137e021c00daff18ee06921c30443cf426e71afa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mccarthy, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:29:04 np0005629333 podman[300466]: 2026-02-25 12:29:04.634932735 +0000 UTC m=+0.133917089 container start 93f5403e4c9878db2007220d137e021c00daff18ee06921c30443cf426e71afa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 07:29:04 np0005629333 podman[300466]: 2026-02-25 12:29:04.638433324 +0000 UTC m=+0.137417728 container attach 93f5403e4c9878db2007220d137e021c00daff18ee06921c30443cf426e71afa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:29:04 np0005629333 cranky_mccarthy[300483]: 167 167
Feb 25 07:29:04 np0005629333 systemd[1]: libpod-93f5403e4c9878db2007220d137e021c00daff18ee06921c30443cf426e71afa.scope: Deactivated successfully.
Feb 25 07:29:04 np0005629333 podman[300466]: 2026-02-25 12:29:04.640508743 +0000 UTC m=+0.139493097 container died 93f5403e4c9878db2007220d137e021c00daff18ee06921c30443cf426e71afa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 07:29:04 np0005629333 systemd[1]: var-lib-containers-storage-overlay-72d8009d5776e297b7b27261bcc33f44c9e0c2e7322f85689a12fccea5c18c58-merged.mount: Deactivated successfully.
Feb 25 07:29:04 np0005629333 podman[300466]: 2026-02-25 12:29:04.68731308 +0000 UTC m=+0.186297444 container remove 93f5403e4c9878db2007220d137e021c00daff18ee06921c30443cf426e71afa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 07:29:04 np0005629333 systemd[1]: libpod-conmon-93f5403e4c9878db2007220d137e021c00daff18ee06921c30443cf426e71afa.scope: Deactivated successfully.
Feb 25 07:29:04 np0005629333 nova_compute[244014]: 2026-02-25 12:29:04.735 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:04 np0005629333 nova_compute[244014]: 2026-02-25 12:29:04.857 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Feb 25 07:29:04 np0005629333 nova_compute[244014]: 2026-02-25 12:29:04.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:29:04 np0005629333 nova_compute[244014]: 2026-02-25 12:29:04.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 07:29:04 np0005629333 podman[300528]: 2026-02-25 12:29:04.9048686 +0000 UTC m=+0.085959139 container create b921d7348e148caf8d7f1f794f80061cccaad6c7ef08788cbe5460c7fc30c4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:29:04 np0005629333 podman[300528]: 2026-02-25 12:29:04.84495108 +0000 UTC m=+0.026041669 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:29:04 np0005629333 systemd[1]: Started libpod-conmon-b921d7348e148caf8d7f1f794f80061cccaad6c7ef08788cbe5460c7fc30c4e5.scope.
Feb 25 07:29:05 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:29:05 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0445bdfbd75dd41a98cda322291371e74dd8847b3b6099bedefad35672f242a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:29:05 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0445bdfbd75dd41a98cda322291371e74dd8847b3b6099bedefad35672f242a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:29:05 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0445bdfbd75dd41a98cda322291371e74dd8847b3b6099bedefad35672f242a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:29:05 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0445bdfbd75dd41a98cda322291371e74dd8847b3b6099bedefad35672f242a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:29:05 np0005629333 podman[300528]: 2026-02-25 12:29:05.032975472 +0000 UTC m=+0.214065991 container init b921d7348e148caf8d7f1f794f80061cccaad6c7ef08788cbe5460c7fc30c4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_neumann, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:29:05 np0005629333 podman[300528]: 2026-02-25 12:29:05.04313408 +0000 UTC m=+0.224224579 container start b921d7348e148caf8d7f1f794f80061cccaad6c7ef08788cbe5460c7fc30c4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_neumann, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 07:29:05 np0005629333 podman[300528]: 2026-02-25 12:29:05.046202267 +0000 UTC m=+0.227292766 container attach b921d7348e148caf8d7f1f794f80061cccaad6c7ef08788cbe5460c7fc30c4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 07:29:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:29:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/374794373' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:29:05 np0005629333 nova_compute[244014]: 2026-02-25 12:29:05.133 244018 DEBUG oslo_concurrency.processutils [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:29:05 np0005629333 nova_compute[244014]: 2026-02-25 12:29:05.142 244018 DEBUG nova.compute.provider_tree [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:29:05 np0005629333 nova_compute[244014]: 2026-02-25 12:29:05.149 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 07:29:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]: {
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:    "0": [
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:        {
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "devices": [
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "/dev/loop3"
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            ],
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "lv_name": "ceph_lv0",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "lv_size": "21470642176",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "name": "ceph_lv0",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "tags": {
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.cluster_name": "ceph",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.crush_device_class": "",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.encrypted": "0",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.objectstore": "bluestore",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.osd_id": "0",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.type": "block",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.vdo": "0",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.with_tpm": "0"
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            },
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "type": "block",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "vg_name": "ceph_vg0"
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:        }
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:    ],
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:    "1": [
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:        {
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "devices": [
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "/dev/loop4"
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            ],
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "lv_name": "ceph_lv1",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "lv_size": "21470642176",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "name": "ceph_lv1",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "tags": {
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.cluster_name": "ceph",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.crush_device_class": "",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.encrypted": "0",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.objectstore": "bluestore",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.osd_id": "1",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.type": "block",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.vdo": "0",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.with_tpm": "0"
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            },
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "type": "block",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "vg_name": "ceph_vg1"
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:        }
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:    ],
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:    "2": [
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:        {
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "devices": [
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "/dev/loop5"
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            ],
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "lv_name": "ceph_lv2",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "lv_size": "21470642176",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "name": "ceph_lv2",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "tags": {
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.cluster_name": "ceph",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.crush_device_class": "",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.encrypted": "0",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.objectstore": "bluestore",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.osd_id": "2",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.type": "block",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.vdo": "0",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:                "ceph.with_tpm": "0"
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            },
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "type": "block",
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:            "vg_name": "ceph_vg2"
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:        }
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]:    ]
Feb 25 07:29:05 np0005629333 priceless_neumann[300545]: }
Feb 25 07:29:05 np0005629333 systemd[1]: libpod-b921d7348e148caf8d7f1f794f80061cccaad6c7ef08788cbe5460c7fc30c4e5.scope: Deactivated successfully.
Feb 25 07:29:05 np0005629333 podman[300528]: 2026-02-25 12:29:05.379433577 +0000 UTC m=+0.560524116 container died b921d7348e148caf8d7f1f794f80061cccaad6c7ef08788cbe5460c7fc30c4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_neumann, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:29:05 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f0445bdfbd75dd41a98cda322291371e74dd8847b3b6099bedefad35672f242a-merged.mount: Deactivated successfully.
Feb 25 07:29:05 np0005629333 podman[300528]: 2026-02-25 12:29:05.4306612 +0000 UTC m=+0.611751709 container remove b921d7348e148caf8d7f1f794f80061cccaad6c7ef08788cbe5460c7fc30c4e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 07:29:05 np0005629333 systemd[1]: libpod-conmon-b921d7348e148caf8d7f1f794f80061cccaad6c7ef08788cbe5460c7fc30c4e5.scope: Deactivated successfully.
Feb 25 07:29:05 np0005629333 nova_compute[244014]: 2026-02-25 12:29:05.514 244018 DEBUG nova.scheduler.client.report [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:29:05 np0005629333 nova_compute[244014]: 2026-02-25 12:29:05.557 244018 DEBUG oslo_concurrency.lockutils [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:05 np0005629333 nova_compute[244014]: 2026-02-25 12:29:05.590 244018 INFO nova.scheduler.client.report [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Deleted allocations for instance 826789b1-e26a-4569-bd77-bd1ef76388be#033[00m
Feb 25 07:29:05 np0005629333 nova_compute[244014]: 2026-02-25 12:29:05.740 244018 DEBUG nova.compute.manager [req-8059118e-0ec9-4a47-84d3-dd7737ffe9fa req-933df3ad-1600-4913-9ed6-fe224f0f957a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Received event network-vif-plugged-ba138ed1-c811-4043-9bd6-e1a5c6127f84 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:05 np0005629333 nova_compute[244014]: 2026-02-25 12:29:05.740 244018 DEBUG oslo_concurrency.lockutils [req-8059118e-0ec9-4a47-84d3-dd7737ffe9fa req-933df3ad-1600-4913-9ed6-fe224f0f957a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:05 np0005629333 nova_compute[244014]: 2026-02-25 12:29:05.741 244018 DEBUG oslo_concurrency.lockutils [req-8059118e-0ec9-4a47-84d3-dd7737ffe9fa req-933df3ad-1600-4913-9ed6-fe224f0f957a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:05 np0005629333 nova_compute[244014]: 2026-02-25 12:29:05.741 244018 DEBUG oslo_concurrency.lockutils [req-8059118e-0ec9-4a47-84d3-dd7737ffe9fa req-933df3ad-1600-4913-9ed6-fe224f0f957a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:05 np0005629333 nova_compute[244014]: 2026-02-25 12:29:05.741 244018 DEBUG nova.compute.manager [req-8059118e-0ec9-4a47-84d3-dd7737ffe9fa req-933df3ad-1600-4913-9ed6-fe224f0f957a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] No waiting events found dispatching network-vif-plugged-ba138ed1-c811-4043-9bd6-e1a5c6127f84 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:29:05 np0005629333 nova_compute[244014]: 2026-02-25 12:29:05.742 244018 WARNING nova.compute.manager [req-8059118e-0ec9-4a47-84d3-dd7737ffe9fa req-933df3ad-1600-4913-9ed6-fe224f0f957a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Received unexpected event network-vif-plugged-ba138ed1-c811-4043-9bd6-e1a5c6127f84 for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:29:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 427 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.9 MiB/s wr, 218 op/s
Feb 25 07:29:05 np0005629333 podman[300631]: 2026-02-25 12:29:05.995846707 +0000 UTC m=+0.068104552 container create f124aa606ed27a6bd498acb9bfae5558404778bd1286b7ad23d0ffcf72da605d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 07:29:06 np0005629333 podman[300631]: 2026-02-25 12:29:05.958770726 +0000 UTC m=+0.031028641 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:29:06 np0005629333 nova_compute[244014]: 2026-02-25 12:29:06.253 244018 DEBUG oslo_concurrency.lockutils [None req-a3bd4748-ba29-4de4-ac2c-dd2f37b84f49 05ca7159581049009a4223cf01ebf146 be8db082f3894d28b63a3709be538262 - - default default] Lock "826789b1-e26a-4569-bd77-bd1ef76388be" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:06 np0005629333 systemd[1]: Started libpod-conmon-f124aa606ed27a6bd498acb9bfae5558404778bd1286b7ad23d0ffcf72da605d.scope.
Feb 25 07:29:06 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:29:06 np0005629333 podman[300631]: 2026-02-25 12:29:06.348504058 +0000 UTC m=+0.420762003 container init f124aa606ed27a6bd498acb9bfae5558404778bd1286b7ad23d0ffcf72da605d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_shockley, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 07:29:06 np0005629333 podman[300631]: 2026-02-25 12:29:06.357785841 +0000 UTC m=+0.430043716 container start f124aa606ed27a6bd498acb9bfae5558404778bd1286b7ad23d0ffcf72da605d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_shockley, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:29:06 np0005629333 stupefied_shockley[300648]: 167 167
Feb 25 07:29:06 np0005629333 systemd[1]: libpod-f124aa606ed27a6bd498acb9bfae5558404778bd1286b7ad23d0ffcf72da605d.scope: Deactivated successfully.
Feb 25 07:29:06 np0005629333 podman[300631]: 2026-02-25 12:29:06.481043416 +0000 UTC m=+0.553301281 container attach f124aa606ed27a6bd498acb9bfae5558404778bd1286b7ad23d0ffcf72da605d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_shockley, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:29:06 np0005629333 podman[300631]: 2026-02-25 12:29:06.481734996 +0000 UTC m=+0.553992861 container died f124aa606ed27a6bd498acb9bfae5558404778bd1286b7ad23d0ffcf72da605d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_shockley, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:29:06 np0005629333 nova_compute[244014]: 2026-02-25 12:29:06.636 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022531.5631955, 9630899b-57d8-4e46-b9e0-8762f0f4f2cb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:29:06 np0005629333 nova_compute[244014]: 2026-02-25 12:29:06.636 244018 INFO nova.compute.manager [-] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:29:06 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c52a3132975f3e9e1b76132660b628d327853c2e9e2ce16428a735dbbd0b0a78-merged.mount: Deactivated successfully.
Feb 25 07:29:06 np0005629333 nova_compute[244014]: 2026-02-25 12:29:06.669 244018 DEBUG nova.compute.manager [None req-351521fe-e0b9-4fde-bc76-de32c1dfb050 - - - - - -] [instance: 9630899b-57d8-4e46-b9e0-8762f0f4f2cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:06 np0005629333 podman[300631]: 2026-02-25 12:29:06.805868267 +0000 UTC m=+0.878126112 container remove f124aa606ed27a6bd498acb9bfae5558404778bd1286b7ad23d0ffcf72da605d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_shockley, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 07:29:06 np0005629333 systemd[1]: libpod-conmon-f124aa606ed27a6bd498acb9bfae5558404778bd1286b7ad23d0ffcf72da605d.scope: Deactivated successfully.
Feb 25 07:29:07 np0005629333 podman[300674]: 2026-02-25 12:29:07.012159216 +0000 UTC m=+0.079680370 container create b50b351b6506cc5270ea15156882904dffff2266ef9ed13b7dca2378fe9c8ecd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_golick, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 07:29:07 np0005629333 podman[300674]: 2026-02-25 12:29:06.964072293 +0000 UTC m=+0.031593517 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:29:07 np0005629333 systemd[1]: Started libpod-conmon-b50b351b6506cc5270ea15156882904dffff2266ef9ed13b7dca2378fe9c8ecd.scope.
Feb 25 07:29:07 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:29:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f9135927551e557500372eaf360e65893f2dcd5de224e740d89afa4e21224d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:29:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f9135927551e557500372eaf360e65893f2dcd5de224e740d89afa4e21224d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:29:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f9135927551e557500372eaf360e65893f2dcd5de224e740d89afa4e21224d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:29:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f9135927551e557500372eaf360e65893f2dcd5de224e740d89afa4e21224d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:29:07 np0005629333 podman[300674]: 2026-02-25 12:29:07.190234426 +0000 UTC m=+0.257755570 container init b50b351b6506cc5270ea15156882904dffff2266ef9ed13b7dca2378fe9c8ecd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_golick, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:29:07 np0005629333 podman[300674]: 2026-02-25 12:29:07.200403495 +0000 UTC m=+0.267924619 container start b50b351b6506cc5270ea15156882904dffff2266ef9ed13b7dca2378fe9c8ecd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:29:07 np0005629333 podman[300674]: 2026-02-25 12:29:07.223538191 +0000 UTC m=+0.291059325 container attach b50b351b6506cc5270ea15156882904dffff2266ef9ed13b7dca2378fe9c8ecd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_golick, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:29:07 np0005629333 kernel: tap9848fa6c-0a (unregistering): left promiscuous mode
Feb 25 07:29:07 np0005629333 NetworkManager[49836]: <info>  [1772022547.2422] device (tap9848fa6c-0a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:29:07 np0005629333 nova_compute[244014]: 2026-02-25 12:29:07.252 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:07 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:07Z|00580|binding|INFO|Releasing lport 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c from this chassis (sb_readonly=0)
Feb 25 07:29:07 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:07Z|00581|binding|INFO|Setting lport 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c down in Southbound
Feb 25 07:29:07 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:07Z|00582|binding|INFO|Removing iface tap9848fa6c-0a ovn-installed in OVS
Feb 25 07:29:07 np0005629333 nova_compute[244014]: 2026-02-25 12:29:07.257 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:07.262 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:3b:33 10.100.0.14'], port_security=['fa:16:3e:2e:3b:33 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ee9cd98b-1ca6-48e7-aa44-a09caf048a1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf6e8407-f371-4f64-a4aa-eb412980736d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:29:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:07.267 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c in datapath ce318891-cf3c-4d99-af7c-c01770f38194 unbound from our chassis#033[00m
Feb 25 07:29:07 np0005629333 nova_compute[244014]: 2026-02-25 12:29:07.268 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:07.270 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce318891-cf3c-4d99-af7c-c01770f38194#033[00m
Feb 25 07:29:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:07.285 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a898f40c-a1b6-4907-88d7-3c835f4d7e65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:07 np0005629333 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Feb 25 07:29:07 np0005629333 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000003f.scope: Consumed 12.385s CPU time.
Feb 25 07:29:07 np0005629333 systemd-machined[210048]: Machine qemu-72-instance-0000003f terminated.
Feb 25 07:29:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:07.318 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5ebb7dd3-8377-4a2c-b8f5-7aedc6b7d972]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:07.322 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[84fb996b-d0c0-4ea7-ac6b-3d9efb6a81bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:07.347 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[03748ac3-684f-4b0b-817e-7dcc8adcd6e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:07.363 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[69043fba-6a79-4d71-880a-e237cacfa55e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444530, 'reachable_time': 37500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300708, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:07.377 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ac6ba236-3b3e-4c76-bbce-d731a1b93811]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce318891-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444541, 'tstamp': 444541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300709, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce318891-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444544, 'tstamp': 444544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300709, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:07.379 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:07 np0005629333 nova_compute[244014]: 2026-02-25 12:29:07.380 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:07 np0005629333 nova_compute[244014]: 2026-02-25 12:29:07.384 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:07.385 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce318891-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:07.385 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:29:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:07.386 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce318891-c0, col_values=(('external_ids', {'iface-id': '3b184c15-8ef4-4e11-bd18-e1253a4ff440'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:07.386 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:29:07 np0005629333 nova_compute[244014]: 2026-02-25 12:29:07.530 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:07 np0005629333 lvm[300792]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:29:07 np0005629333 lvm[300792]: VG ceph_vg0 finished
Feb 25 07:29:07 np0005629333 nova_compute[244014]: 2026-02-25 12:29:07.873 244018 INFO nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Instance shutdown successfully after 13 seconds.#033[00m
Feb 25 07:29:07 np0005629333 nova_compute[244014]: 2026-02-25 12:29:07.878 244018 INFO nova.virt.libvirt.driver [-] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Instance destroyed successfully.#033[00m
Feb 25 07:29:07 np0005629333 nova_compute[244014]: 2026-02-25 12:29:07.882 244018 INFO nova.virt.libvirt.driver [-] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Instance destroyed successfully.#033[00m
Feb 25 07:29:07 np0005629333 lvm[300794]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:29:07 np0005629333 lvm[300794]: VG ceph_vg1 finished
Feb 25 07:29:07 np0005629333 nova_compute[244014]: 2026-02-25 12:29:07.883 244018 DEBUG nova.virt.libvirt.vif [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:28:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-745827155',display_name='tempest-ServerActionsTestJSON-server-1953116819',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-745827155',id=63,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:28:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-ce0sx5g6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:28:52Z,user_data=None,user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=ee9cd98b-1ca6-48e7-aa44-a09caf048a1c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:29:07 np0005629333 nova_compute[244014]: 2026-02-25 12:29:07.883 244018 DEBUG nova.network.os_vif_util [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:29:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 393 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 4.3 MiB/s wr, 272 op/s
Feb 25 07:29:07 np0005629333 nova_compute[244014]: 2026-02-25 12:29:07.884 244018 DEBUG nova.network.os_vif_util [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3b:33,bridge_name='br-int',has_traffic_filtering=True,id=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9848fa6c-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:29:07 np0005629333 nova_compute[244014]: 2026-02-25 12:29:07.885 244018 DEBUG os_vif [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3b:33,bridge_name='br-int',has_traffic_filtering=True,id=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9848fa6c-0a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:29:07 np0005629333 nova_compute[244014]: 2026-02-25 12:29:07.887 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:07 np0005629333 nova_compute[244014]: 2026-02-25 12:29:07.887 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9848fa6c-0a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:07 np0005629333 nova_compute[244014]: 2026-02-25 12:29:07.889 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:07 np0005629333 nova_compute[244014]: 2026-02-25 12:29:07.890 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:07 np0005629333 nova_compute[244014]: 2026-02-25 12:29:07.892 244018 INFO os_vif [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3b:33,bridge_name='br-int',has_traffic_filtering=True,id=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9848fa6c-0a')#033[00m
Feb 25 07:29:07 np0005629333 lvm[300795]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:29:07 np0005629333 lvm[300795]: VG ceph_vg2 finished
Feb 25 07:29:07 np0005629333 quirky_golick[300691]: {}
Feb 25 07:29:08 np0005629333 nova_compute[244014]: 2026-02-25 12:29:08.000 244018 DEBUG nova.compute.manager [req-4d647a4a-6aae-4872-9806-4ab1bd3510f0 req-c8118dec-7bd5-447e-b92f-430cf2b63b9e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received event network-vif-unplugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:29:08 np0005629333 nova_compute[244014]: 2026-02-25 12:29:08.001 244018 DEBUG oslo_concurrency.lockutils [req-4d647a4a-6aae-4872-9806-4ab1bd3510f0 req-c8118dec-7bd5-447e-b92f-430cf2b63b9e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:29:08 np0005629333 nova_compute[244014]: 2026-02-25 12:29:08.002 244018 DEBUG oslo_concurrency.lockutils [req-4d647a4a-6aae-4872-9806-4ab1bd3510f0 req-c8118dec-7bd5-447e-b92f-430cf2b63b9e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:29:08 np0005629333 nova_compute[244014]: 2026-02-25 12:29:08.002 244018 DEBUG oslo_concurrency.lockutils [req-4d647a4a-6aae-4872-9806-4ab1bd3510f0 req-c8118dec-7bd5-447e-b92f-430cf2b63b9e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:08 np0005629333 nova_compute[244014]: 2026-02-25 12:29:08.002 244018 DEBUG nova.compute.manager [req-4d647a4a-6aae-4872-9806-4ab1bd3510f0 req-c8118dec-7bd5-447e-b92f-430cf2b63b9e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] No waiting events found dispatching network-vif-unplugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:29:08 np0005629333 nova_compute[244014]: 2026-02-25 12:29:08.002 244018 WARNING nova.compute.manager [req-4d647a4a-6aae-4872-9806-4ab1bd3510f0 req-c8118dec-7bd5-447e-b92f-430cf2b63b9e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received unexpected event network-vif-unplugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c for instance with vm_state active and task_state rebuilding.
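 
[Editor's note] The acquire/release trio above is oslo.concurrency's named-lock decorator applied to a local function, which is why the lock holder is logged as pop_instance_event.<locals>._pop_event. A minimal sketch of the same shape; the event table and return convention are hypothetical stand-ins, only the "<uuid>-events" lock name is taken from the log:

    from oslo_concurrency import lockutils

    _events = {'ee9cd98b-1ca6-48e7-aa44-a09caf048a1c': {}}  # hypothetical state

    def pop_instance_event(instance_uuid, event_name):
        @lockutils.synchronized(f'{instance_uuid}-events')
        def _pop_event():
            # Returns the registered waiter, or None -- the "No waiting events
            # found" line above means this lookup came back empty.
            return _events.get(instance_uuid, {}).pop(event_name, None)
        return _pop_event()
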
Feb 25 07:29:08 np0005629333 systemd[1]: libpod-b50b351b6506cc5270ea15156882904dffff2266ef9ed13b7dca2378fe9c8ecd.scope: Deactivated successfully.
Feb 25 07:29:08 np0005629333 systemd[1]: libpod-b50b351b6506cc5270ea15156882904dffff2266ef9ed13b7dca2378fe9c8ecd.scope: Consumed 1.151s CPU time.
Feb 25 07:29:08 np0005629333 podman[300674]: 2026-02-25 12:29:08.032583003 +0000 UTC m=+1.100104157 container died b50b351b6506cc5270ea15156882904dffff2266ef9ed13b7dca2378fe9c8ecd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_golick, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:29:08 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f4f9135927551e557500372eaf360e65893f2dcd5de224e740d89afa4e21224d-merged.mount: Deactivated successfully.
Feb 25 07:29:08 np0005629333 podman[300674]: 2026-02-25 12:29:08.07336505 +0000 UTC m=+1.140886204 container remove b50b351b6506cc5270ea15156882904dffff2266ef9ed13b7dca2378fe9c8ecd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_golick, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:29:08 np0005629333 systemd[1]: libpod-conmon-b50b351b6506cc5270ea15156882904dffff2266ef9ed13b7dca2378fe9c8ecd.scope: Deactivated successfully.
Feb 25 07:29:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:29:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:29:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:29:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:29:08 np0005629333 nova_compute[244014]: 2026-02-25 12:29:08.174 244018 INFO nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Deleting instance files /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_del
Feb 25 07:29:08 np0005629333 nova_compute[244014]: 2026-02-25 12:29:08.175 244018 INFO nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Deletion of /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_del complete
Feb 25 07:29:08 np0005629333 nova_compute[244014]: 2026-02-25 12:29:08.338 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:29:08 np0005629333 nova_compute[244014]: 2026-02-25 12:29:08.339 244018 INFO nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Creating image(s)
Feb 25 07:29:08 np0005629333 nova_compute[244014]: 2026-02-25 12:29:08.360 244018 DEBUG nova.storage.rbd_utils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:29:08 np0005629333 nova_compute[244014]: 2026-02-25 12:29:08.389 244018 DEBUG nova.storage.rbd_utils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:29:08 np0005629333 nova_compute[244014]: 2026-02-25 12:29:08.428 244018 DEBUG nova.storage.rbd_utils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:29:08 np0005629333 nova_compute[244014]: 2026-02-25 12:29:08.433 244018 DEBUG oslo_concurrency.processutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:29:08 np0005629333 nova_compute[244014]: 2026-02-25 12:29:08.523 244018 DEBUG oslo_concurrency.processutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
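
[Editor's note] The qemu-img probe above is wrapped by oslo.concurrency's prlimit support: passing a ProcessLimits object makes execute() re-exec the command under "python -m oslo_concurrency.prlimit" with the --as/--cpu caps seen in the logged command line (1 GiB of address space, 30 s of CPU). A sketch with the same limits and arguments taken from the log:

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824,  # --as
                                        cpu_time=30)               # --cpu
    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538',
        '--force-share', '--output=json', prlimit=limits)
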
Feb 25 07:29:08 np0005629333 nova_compute[244014]: 2026-02-25 12:29:08.525 244018 DEBUG oslo_concurrency.lockutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:29:08 np0005629333 nova_compute[244014]: 2026-02-25 12:29:08.526 244018 DEBUG oslo_concurrency.lockutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:29:08 np0005629333 nova_compute[244014]: 2026-02-25 12:29:08.526 244018 DEBUG oslo_concurrency.lockutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:08 np0005629333 nova_compute[244014]: 2026-02-25 12:29:08.553 244018 DEBUG nova.storage.rbd_utils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:29:08 np0005629333 nova_compute[244014]: 2026-02-25 12:29:08.556 244018 DEBUG oslo_concurrency.processutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:29:08 np0005629333 nova_compute[244014]: 2026-02-25 12:29:08.862 244018 DEBUG oslo_concurrency.processutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:29:08 np0005629333 nova_compute[244014]: 2026-02-25 12:29:08.899 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:29:08 np0005629333 nova_compute[244014]: 2026-02-25 12:29:08.900 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:29:08 np0005629333 nova_compute[244014]: 2026-02-25 12:29:08.945 244018 DEBUG nova.storage.rbd_utils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] resizing rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
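
[Editor's note] The base image is imported into the vms pool via the rbd CLI, then grown to the flavor's 1 GiB root disk. The resize that rbd_utils.py:288 logs can be sketched with the python rados/rbd bindings as well; pool, image name, client id, conf path and target size are all taken from the log lines:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, 'ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk') as image:
                image.resize(1073741824)  # bytes; flavor root_gb=1
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
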
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.038 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.039 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Ensure instance console log exists: /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.039 244018 DEBUG oslo_concurrency.lockutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.040 244018 DEBUG oslo_concurrency.lockutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.040 244018 DEBUG oslo_concurrency.lockutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.043 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Start _get_guest_xml network_info=[{"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.047 244018 WARNING nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.054 244018 DEBUG nova.virt.libvirt.host [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.054 244018 DEBUG nova.virt.libvirt.host [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.065 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "abe229eb-2238-4237-a7f2-83b8476ac1dc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.065 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.066 244018 DEBUG nova.virt.libvirt.host [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.067 244018 DEBUG nova.virt.libvirt.host [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
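
[Editor's note] The two searches above show the host failing the cgroups v1 check and passing the v2 one, i.e. a unified-hierarchy host. A hedged sketch of what a v2 probe looks like (the path is the standard cgroup2 mount point; nova's actual check differs in detail):

    # True when the unified hierarchy advertises the cpu controller, which is
    # what "CPU controller found on host" corresponds to above.
    def has_cgroupsv2_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
        with open(path) as f:
            return 'cpu' in f.read().split()
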
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.068 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.068 244018 DEBUG nova.virt.hardware [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.068 244018 DEBUG nova.virt.hardware [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.069 244018 DEBUG nova.virt.hardware [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.069 244018 DEBUG nova.virt.hardware [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.069 244018 DEBUG nova.virt.hardware [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.069 244018 DEBUG nova.virt.hardware [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.070 244018 DEBUG nova.virt.hardware [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.070 244018 DEBUG nova.virt.hardware [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.070 244018 DEBUG nova.virt.hardware [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.071 244018 DEBUG nova.virt.hardware [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.071 244018 DEBUG nova.virt.hardware [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
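
[Editor's note] The hardware.py lines above enumerate every (sockets, cores, threads) factorization of the vCPU count within the 65536-per-dimension limits; with one vCPU the only candidate is 1:1:1, which is what ends up in the guest XML <topology> element below. An illustrative re-implementation of that enumeration (not nova's exact code):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Every triple whose product equals the vCPU count is a candidate.
        topos = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            for cores in range(1, min(vcpus, max_cores) + 1):
                for threads in range(1, min(vcpus, max_threads) + 1):
                    if sockets * cores * threads == vcpus:
                        topos.append((sockets, cores, threads))
        return topos

    assert possible_topologies(1) == [(1, 1, 1)]   # matches the log
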
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.071 244018 DEBUG nova.objects.instance [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid ee9cd98b-1ca6-48e7-aa44-a09caf048a1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.099 244018 DEBUG oslo_concurrency.processutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:29:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:29:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.132 244018 DEBUG nova.compute.manager [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.244 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.245 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.253 244018 DEBUG nova.virt.hardware [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.253 244018 INFO nova.compute.claims [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.398 244018 DEBUG oslo_concurrency.processutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:29:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:29:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3579413670' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.641 244018 DEBUG oslo_concurrency.processutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
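
[Editor's note] "ceph mon dump" is how the driver learns the monitor endpoints that later surface as <host name=.../> entries in the guest XML below. A sketch of that extraction, assuming the usual mon dump JSON layout (a 'mons' list whose 'public_addr' has the form ip:port/nonce); field names are an assumption about the schema, the command line is copied from the log:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    mons = json.loads(out)['mons']
    # "192.168.122.100:6789/0" -> ['192.168.122.100', '6789']
    endpoints = [m['public_addr'].split('/')[0].rsplit(':', 1) for m in mons]
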
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.676 244018 DEBUG nova.storage.rbd_utils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.684 244018 DEBUG oslo_concurrency.processutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.737 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:29:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 393 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 232 op/s
Feb 25 07:29:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:29:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3169542644' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.979 244018 DEBUG oslo_concurrency.processutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:29:09 np0005629333 nova_compute[244014]: 2026-02-25 12:29:09.984 244018 DEBUG nova.compute.provider_tree [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.002 244018 DEBUG nova.scheduler.client.report [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
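
[Editor's note] The inventory record above determines schedulable capacity via placement's standard formula, capacity = (total - reserved) * allocation_ratio, evaluated per resource class. Worked out for this host:

    inventory = {  # values copied from the log line above
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
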
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.032 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.033 244018 DEBUG nova.compute.manager [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.087 244018 DEBUG nova.compute.manager [req-4dabb436-eeee-4e2e-915d-208f6cc53301 req-6e217c16-7558-4651-a3ed-7947336961c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received event network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.088 244018 DEBUG oslo_concurrency.lockutils [req-4dabb436-eeee-4e2e-915d-208f6cc53301 req-6e217c16-7558-4651-a3ed-7947336961c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.088 244018 DEBUG oslo_concurrency.lockutils [req-4dabb436-eeee-4e2e-915d-208f6cc53301 req-6e217c16-7558-4651-a3ed-7947336961c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.089 244018 DEBUG oslo_concurrency.lockutils [req-4dabb436-eeee-4e2e-915d-208f6cc53301 req-6e217c16-7558-4651-a3ed-7947336961c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.089 244018 DEBUG nova.compute.manager [req-4dabb436-eeee-4e2e-915d-208f6cc53301 req-6e217c16-7558-4651-a3ed-7947336961c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] No waiting events found dispatching network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.090 244018 WARNING nova.compute.manager [req-4dabb436-eeee-4e2e-915d-208f6cc53301 req-6e217c16-7558-4651-a3ed-7947336961c0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received unexpected event network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c for instance with vm_state active and task_state rebuild_spawning.
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.097 244018 DEBUG nova.compute.manager [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.098 244018 DEBUG nova.network.neutron [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.120 244018 INFO nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.140 244018 DEBUG nova.compute.manager [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:29:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:29:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:29:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3641828908' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.199 244018 DEBUG oslo_concurrency.processutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.200 244018 DEBUG nova.virt.libvirt.vif [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:28:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-745827155',display_name='tempest-ServerActionsTestJSON-server-1953116819',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-745827155',id=63,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:28:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-ce0sx5g6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:29:08Z,user_data=None,user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=ee9cd98b-1ca6-48e7-aa44-a09caf048a1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.201 244018 DEBUG nova.network.os_vif_util [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.201 244018 DEBUG nova.network.os_vif_util [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3b:33,bridge_name='br-int',has_traffic_filtering=True,id=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9848fa6c-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.205 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:29:10 np0005629333 nova_compute[244014]:  <uuid>ee9cd98b-1ca6-48e7-aa44-a09caf048a1c</uuid>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:  <name>instance-0000003f</name>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerActionsTestJSON-server-1953116819</nova:name>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:29:09</nova:creationTime>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:29:10 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:        <nova:user uuid="1f8bbe7db4454108aca005daa72d5c22">tempest-ServerActionsTestJSON-436476112-project-member</nova:user>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:        <nova:project uuid="56700581ea88438ba482d90bc702ced3">tempest-ServerActionsTestJSON-436476112</nova:project>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="f0ef5a9a-23b8-4883-8e47-feb7403a11d8"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:        <nova:port uuid="9848fa6c-0a42-4cac-a3d8-2b90d5b7920c">
Feb 25 07:29:10 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <entry name="serial">ee9cd98b-1ca6-48e7-aa44-a09caf048a1c</entry>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <entry name="uuid">ee9cd98b-1ca6-48e7-aa44-a09caf048a1c</entry>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk">
Feb 25 07:29:10 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:29:10 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk.config">
Feb 25 07:29:10 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:29:10 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:2e:3b:33"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <target dev="tap9848fa6c-0a"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/console.log" append="off"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:29:10 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:29:10 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:29:10 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:29:10 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.207 244018 DEBUG nova.compute.manager [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Preparing to wait for external event network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.207 244018 DEBUG oslo_concurrency.lockutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.207 244018 DEBUG oslo_concurrency.lockutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.208 244018 DEBUG oslo_concurrency.lockutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.208 244018 DEBUG nova.virt.libvirt.vif [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:28:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-745827155',display_name='tempest-ServerActionsTestJSON-server-1953116819',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-745827155',id=63,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:28:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-ce0sx5g6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:29:08Z,user_data=None,user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=ee9cd98b-1ca6-48e7-aa44-a09caf048a1c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.209 244018 DEBUG nova.network.os_vif_util [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.210 244018 DEBUG nova.network.os_vif_util [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3b:33,bridge_name='br-int',has_traffic_filtering=True,id=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9848fa6c-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.210 244018 DEBUG os_vif [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3b:33,bridge_name='br-int',has_traffic_filtering=True,id=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9848fa6c-0a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.211 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.211 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.211 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.215 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.215 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9848fa6c-0a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.215 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9848fa6c-0a, col_values=(('external_ids', {'iface-id': '9848fa6c-0a42-4cac-a3d8-2b90d5b7920c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:3b:33', 'vm-uuid': 'ee9cd98b-1ca6-48e7-aa44-a09caf048a1c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.217 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:10 np0005629333 NetworkManager[49836]: <info>  [1772022550.2184] manager: (tap9848fa6c-0a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/258)
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.219 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.224 244018 DEBUG nova.compute.manager [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.226 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.226 244018 INFO nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Creating image(s)#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.247 244018 DEBUG nova.storage.rbd_utils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] rbd image abe229eb-2238-4237-a7f2-83b8476ac1dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.265 244018 DEBUG nova.storage.rbd_utils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] rbd image abe229eb-2238-4237-a7f2-83b8476ac1dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.286 244018 DEBUG nova.storage.rbd_utils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] rbd image abe229eb-2238-4237-a7f2-83b8476ac1dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.289 244018 DEBUG oslo_concurrency.processutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.319 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.322 244018 INFO os_vif [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3b:33,bridge_name='br-int',has_traffic_filtering=True,id=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9848fa6c-0a')#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.355 244018 DEBUG oslo_concurrency.processutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.356 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.357 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.358 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.385 244018 DEBUG nova.storage.rbd_utils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] rbd image abe229eb-2238-4237-a7f2-83b8476ac1dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.389 244018 DEBUG oslo_concurrency.processutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 abe229eb-2238-4237-a7f2-83b8476ac1dc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.422 244018 DEBUG nova.policy [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bcb4ded096bc4f7993f96ca892b82333', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '780a93a8758a4bd78b22fe68ed6276cf', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.446 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.446 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.447 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] No VIF found with MAC fa:16:3e:2e:3b:33, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.447 244018 INFO nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Using config drive#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.469 244018 DEBUG nova.storage.rbd_utils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.487 244018 DEBUG nova.objects.instance [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'ec2_ids' on Instance uuid ee9cd98b-1ca6-48e7-aa44-a09caf048a1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.516 244018 DEBUG nova.objects.instance [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'keypairs' on Instance uuid ee9cd98b-1ca6-48e7-aa44-a09caf048a1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.675 244018 DEBUG oslo_concurrency.processutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 abe229eb-2238-4237-a7f2-83b8476ac1dc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.741 244018 DEBUG nova.storage.rbd_utils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] resizing rbd image abe229eb-2238-4237-a7f2-83b8476ac1dc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.784 244018 INFO nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Creating config drive at /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/disk.config#033[00m
Feb 25 07:29:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:10Z|00583|binding|INFO|Releasing lport 3b184c15-8ef4-4e11-bd18-e1253a4ff440 from this chassis (sb_readonly=0)
Feb 25 07:29:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:10Z|00584|binding|INFO|Releasing lport 6ef06fbe-6eb9-4d55-bbd8-9394a70da39f from this chassis (sb_readonly=0)
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.789 244018 DEBUG oslo_concurrency.processutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpbfsah201 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.851 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.858 244018 DEBUG nova.objects.instance [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lazy-loading 'migration_context' on Instance uuid abe229eb-2238-4237-a7f2-83b8476ac1dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.874 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.875 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Ensure instance console log exists: /var/lib/nova/instances/abe229eb-2238-4237-a7f2-83b8476ac1dc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.875 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.876 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.876 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.929 244018 DEBUG oslo_concurrency.processutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpbfsah201" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.960 244018 DEBUG nova.storage.rbd_utils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] rbd image ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:29:10 np0005629333 nova_compute[244014]: 2026-02-25 12:29:10.964 244018 DEBUG oslo_concurrency.processutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/disk.config ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:11 np0005629333 nova_compute[244014]: 2026-02-25 12:29:11.114 244018 DEBUG oslo_concurrency.processutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/disk.config ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:29:11 np0005629333 nova_compute[244014]: 2026-02-25 12:29:11.115 244018 INFO nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Deleting local config drive /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c/disk.config because it was imported into RBD.#033[00m
Feb 25 07:29:11 np0005629333 kernel: tap9848fa6c-0a: entered promiscuous mode
Feb 25 07:29:11 np0005629333 NetworkManager[49836]: <info>  [1772022551.1689] manager: (tap9848fa6c-0a): new Tun device (/org/freedesktop/NetworkManager/Devices/259)
Feb 25 07:29:11 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:11Z|00585|binding|INFO|Claiming lport 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c for this chassis.
Feb 25 07:29:11 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:11Z|00586|binding|INFO|9848fa6c-0a42-4cac-a3d8-2b90d5b7920c: Claiming fa:16:3e:2e:3b:33 10.100.0.14
Feb 25 07:29:11 np0005629333 nova_compute[244014]: 2026-02-25 12:29:11.169 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:11.178 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:3b:33 10.100.0.14'], port_security=['fa:16:3e:2e:3b:33 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ee9cd98b-1ca6-48e7-aa44-a09caf048a1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'cf6e8407-f371-4f64-a4aa-eb412980736d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:29:11 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:11Z|00587|binding|INFO|Setting lport 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c ovn-installed in OVS
Feb 25 07:29:11 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:11Z|00588|binding|INFO|Setting lport 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c up in Southbound
Feb 25 07:29:11 np0005629333 nova_compute[244014]: 2026-02-25 12:29:11.184 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:11 np0005629333 nova_compute[244014]: 2026-02-25 12:29:11.186 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:11.186 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c in datapath ce318891-cf3c-4d99-af7c-c01770f38194 bound to our chassis#033[00m
Feb 25 07:29:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:11.189 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce318891-cf3c-4d99-af7c-c01770f38194#033[00m
Feb 25 07:29:11 np0005629333 systemd-udevd[301342]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:29:11 np0005629333 systemd-machined[210048]: New machine qemu-75-instance-0000003f.
Feb 25 07:29:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:11.206 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cc710d1d-46cc-4a99-a687-877269b74952]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:11 np0005629333 NetworkManager[49836]: <info>  [1772022551.2105] device (tap9848fa6c-0a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:29:11 np0005629333 NetworkManager[49836]: <info>  [1772022551.2121] device (tap9848fa6c-0a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:29:11 np0005629333 systemd[1]: Started Virtual Machine qemu-75-instance-0000003f.
Feb 25 07:29:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:11.236 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5774e92b-2c27-437f-adfd-c72dd827cfc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:11.239 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8e103f64-b7ed-45b8-90f7-c6d0ec528291]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:11.264 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[24b4d2b1-81e4-4448-af35-7a21a2aded92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:11.284 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3708e6e2-8e87-4274-a93c-d8396376ee52]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 616, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 616, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444530, 'reachable_time': 37500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301355, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:11.303 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[316699bf-3e00-49fa-92e0-4b20d891de3e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce318891-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444541, 'tstamp': 444541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301357, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce318891-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444544, 'tstamp': 444544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301357, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:11.304 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:11 np0005629333 nova_compute[244014]: 2026-02-25 12:29:11.306 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:11 np0005629333 nova_compute[244014]: 2026-02-25 12:29:11.308 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:11.308 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce318891-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:11.309 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:29:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:11.309 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce318891-c0, col_values=(('external_ids', {'iface-id': '3b184c15-8ef4-4e11-bd18-e1253a4ff440'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:11.309 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:29:11 np0005629333 nova_compute[244014]: 2026-02-25 12:29:11.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:29:11 np0005629333 nova_compute[244014]: 2026-02-25 12:29:11.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:29:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 411 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.3 MiB/s wr, 294 op/s
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.214 244018 DEBUG nova.compute.manager [req-140e4b40-c8d0-47e0-85b6-3db8e4573119 req-4003ed9b-e02e-4bda-8bf2-324680702511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received event network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.215 244018 DEBUG oslo_concurrency.lockutils [req-140e4b40-c8d0-47e0-85b6-3db8e4573119 req-4003ed9b-e02e-4bda-8bf2-324680702511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.215 244018 DEBUG oslo_concurrency.lockutils [req-140e4b40-c8d0-47e0-85b6-3db8e4573119 req-4003ed9b-e02e-4bda-8bf2-324680702511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.216 244018 DEBUG oslo_concurrency.lockutils [req-140e4b40-c8d0-47e0-85b6-3db8e4573119 req-4003ed9b-e02e-4bda-8bf2-324680702511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.217 244018 DEBUG nova.compute.manager [req-140e4b40-c8d0-47e0-85b6-3db8e4573119 req-4003ed9b-e02e-4bda-8bf2-324680702511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Processing event network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.217 244018 DEBUG nova.compute.manager [req-140e4b40-c8d0-47e0-85b6-3db8e4573119 req-4003ed9b-e02e-4bda-8bf2-324680702511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received event network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.218 244018 DEBUG oslo_concurrency.lockutils [req-140e4b40-c8d0-47e0-85b6-3db8e4573119 req-4003ed9b-e02e-4bda-8bf2-324680702511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.218 244018 DEBUG oslo_concurrency.lockutils [req-140e4b40-c8d0-47e0-85b6-3db8e4573119 req-4003ed9b-e02e-4bda-8bf2-324680702511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.219 244018 DEBUG oslo_concurrency.lockutils [req-140e4b40-c8d0-47e0-85b6-3db8e4573119 req-4003ed9b-e02e-4bda-8bf2-324680702511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.219 244018 DEBUG nova.compute.manager [req-140e4b40-c8d0-47e0-85b6-3db8e4573119 req-4003ed9b-e02e-4bda-8bf2-324680702511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] No waiting events found dispatching network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.220 244018 WARNING nova.compute.manager [req-140e4b40-c8d0-47e0-85b6-3db8e4573119 req-4003ed9b-e02e-4bda-8bf2-324680702511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received unexpected event network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c for instance with vm_state active and task_state rebuild_spawning.#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.396 244018 DEBUG nova.network.neutron [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Successfully created port: ab721623-aa6d-494f-8b90-6ffd63b7a33f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.410 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for ee9cd98b-1ca6-48e7-aa44-a09caf048a1c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.410 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022552.4099329, ee9cd98b-1ca6-48e7-aa44-a09caf048a1c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.410 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] VM Started (Lifecycle Event)#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.413 244018 DEBUG nova.compute.manager [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.417 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.421 244018 INFO nova.virt.libvirt.driver [-] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Instance spawned successfully.#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.421 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.431 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.436 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.445 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.445 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.448 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.448 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.449 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.449 244018 DEBUG nova.virt.libvirt.driver [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
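The six "Found default for ..." lines above show the libvirt driver pinning its current bus/model defaults into the instance record during the rebuild, so a later rebuild under different defaults does not silently change the guest's virtual hardware. A minimal sketch of that record-if-unset pattern follows; the values match the log, but the dict and function names (DRIVER_DEFAULTS, register_undefined_details) are illustrative, not Nova's real helpers.

    # Minimal sketch of the register-defaults-if-unset pattern from the
    # log above. DRIVER_DEFAULTS and register_undefined_details are
    # illustrative names, not Nova's API; the values mirror the
    # "Found default for ..." lines.
    DRIVER_DEFAULTS = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }

    def register_undefined_details(system_metadata):
        """Record a default for each image property the image left unset."""
        for prop, default in DRIVER_DEFAULTS.items():
            key = 'image_' + prop  # instance system_metadata uses this prefix
            system_metadata.setdefault(key, default)
        return system_metadata

    print(register_undefined_details({'image_hw_disk_bus': 'scsi'}))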
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.480 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.481 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022552.4126215, ee9cd98b-1ca6-48e7-aa44-a09caf048a1c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.482 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.515 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.520 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022552.4160485, ee9cd98b-1ca6-48e7-aa44-a09caf048a1c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.521 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.525 244018 DEBUG nova.compute.manager [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.548 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.551 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.576 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
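Both "Skip." lines above come from the power-state synchronization guard: while any task_state is set (here rebuild_spawning), the manager refuses to reconcile database and hypervisor power state, because the in-flight task may legitimately change it at any moment. A simplified sketch of that guard, with stand-in names rather than Nova's exact code:

    # Rough sketch of the guard behind "During sync_power_state the
    # instance has a pending task (rebuild_spawning). Skip."
    class Instance:
        def __init__(self, task_state, power_state):
            self.task_state = task_state
            self.power_state = power_state

    def sync_power_state(instance, vm_power_state):
        if instance.task_state is not None:
            # A task owns the instance right now; acting on a possibly
            # stale power reading could race with it, so do nothing.
            return 'skipped: pending task %s' % instance.task_state
        if instance.power_state != vm_power_state:
            instance.power_state = vm_power_state  # reconcile DB with hypervisor
            return 'updated'
        return 'in sync'

    print(sync_power_state(Instance('rebuild_spawning', 1), 1))  # skipped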
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.587 244018 DEBUG oslo_concurrency.lockutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.588 244018 DEBUG oslo_concurrency.lockutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.588 244018 DEBUG nova.objects.instance [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.642 244018 DEBUG oslo_concurrency.lockutils [None req-0ff2c225-0b52-4aca-9ffb-6592e9d65ef7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
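The Acquiring/acquired/released triplet around "compute_resources" is oslo.concurrency's lockutils instrumentation; Nova serializes all resource-tracker mutations behind that one named lock. A self-contained example of the same API (the decorated function body is a stand-in):

    # The named-lock pattern behind the "compute_resources" lines above,
    # using the same oslo.concurrency API. The function body is a stand-in.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def finish_evacuation_example():
        # Runs with the named semaphore held; lockutils logs the
        # waited/held durations exactly as seen in the log.
        pass

    if __name__ == '__main__':
        finish_evacuation_example()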
Feb 25 07:29:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:12.668 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:29:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:12.670 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
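The "Matched UPDATE: SbGlobalUpdateEvent" line is ovsdbapp's row-event machinery firing on an SB_Global change, and the following line shows the agent deliberately debouncing its chassis write by one second. A condensed sketch of such an event class, assuming ovsdbapp's RowEvent base; the agent object and its update_chassis method are stand-ins, not neutron's code:

    # Condensed sketch of an ovsdbapp row event like the one matched above.
    import threading

    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        def __init__(self, agent):
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)
            self.agent = agent
            self.event_name = 'SbGlobalUpdateEvent'

        def run(self, event, row, old):
            # Debounce: delay the Chassis_Private update so rapid nb_cfg
            # bumps coalesce ("Delaying updating chassis table for 1 seconds").
            threading.Timer(1, self.agent.update_chassis, [row.nb_cfg]).start()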
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.670 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 07:29:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.0 total, 600.0 interval
Cumulative writes: 6637 writes, 29K keys, 6637 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
Cumulative WAL: 6637 writes, 6637 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1781 writes, 8156 keys, 1781 commit groups, 1.0 writes per commit group, ingest: 10.59 MB, 0.02 MB/s
Interval WAL: 1781 writes, 1781 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     89.4      0.36              0.09        16    0.023       0      0       0.0       0.0
  L6      1/0    8.13 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    130.2    106.3      1.03              0.30        15    0.068     70K   8315       0.0       0.0
 Sum      1/0    8.13 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     96.0    101.9      1.39              0.40        31    0.045     70K   8315       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.7    166.6    171.4      0.24              0.11         8    0.030     22K   2558       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    130.2    106.3      1.03              0.30        15    0.068     70K   8315       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     90.3      0.36              0.09        15    0.024       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 2400.0 total, 600.0 interval
Flush(GB): cumulative 0.032, interval 0.008
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.14 GB write, 0.06 MB/s write, 0.13 GB read, 0.06 MB/s read, 1.4 seconds
Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.2 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561a1af858d0#2 capacity: 304.00 MB usage: 15.84 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000134 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(989,15.28 MB,5.02476%) FilterBlock(32,203.17 KB,0.0652665%) IndexBlock(32,371.19 KB,0.119239%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.905 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:29:12 np0005629333 nova_compute[244014]: 2026-02-25 12:29:12.905 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:13 np0005629333 nova_compute[244014]: 2026-02-25 12:29:13.257 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:29:13 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4088956665' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:29:13 np0005629333 nova_compute[244014]: 2026-02-25 12:29:13.492 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
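The resource audit shells out to ceph for cluster usage, and oslo.concurrency logs the exact command plus its exit code and runtime (0 in 0.587s here); the two ceph-mon lines in between are the monitor dispatching that same "df" command for client.openstack. The equivalent call, standalone, with arguments taken straight from the log:

    # Standalone version of the "Running cmd (subprocess): ceph df ..."
    # call above, via the same oslo.concurrency helper nova uses.
    import json

    from oslo_concurrency import processutils

    def ceph_df():
        out, _err = processutils.execute(
            'ceph', 'df', '--format=json',
            '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
        return json.loads(out)

    if __name__ == '__main__':
        # Cluster-wide totals; nova's RBD driver derives free disk from these.
        print(ceph_df()['stats']['total_avail_bytes'])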
Feb 25 07:29:13 np0005629333 nova_compute[244014]: 2026-02-25 12:29:13.591 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:29:13 np0005629333 nova_compute[244014]: 2026-02-25 12:29:13.591 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:29:13 np0005629333 nova_compute[244014]: 2026-02-25 12:29:13.594 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:29:13 np0005629333 nova_compute[244014]: 2026-02-25 12:29:13.594 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:29:13 np0005629333 nova_compute[244014]: 2026-02-25 12:29:13.597 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:29:13 np0005629333 nova_compute[244014]: 2026-02-25 12:29:13.597 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000040 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:29:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:13.671 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:13 np0005629333 nova_compute[244014]: 2026-02-25 12:29:13.719 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:29:13 np0005629333 nova_compute[244014]: 2026-02-25 12:29:13.720 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3536MB free_disk=59.85440388228744GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 07:29:13 np0005629333 nova_compute[244014]: 2026-02-25 12:29:13.720 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:13 np0005629333 nova_compute[244014]: 2026-02-25 12:29:13.720 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 407 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 6.6 MiB/s wr, 271 op/s
Feb 25 07:29:13 np0005629333 nova_compute[244014]: 2026-02-25 12:29:13.898 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:29:13 np0005629333 nova_compute[244014]: 2026-02-25 12:29:13.898 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance ee9cd98b-1ca6-48e7-aa44-a09caf048a1c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:29:13 np0005629333 nova_compute[244014]: 2026-02-25 12:29:13.899 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 8086400b-ac70-4c79-928b-4f1966084384 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:29:13 np0005629333 nova_compute[244014]: 2026-02-25 12:29:13.899 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance abe229eb-2238-4237-a7f2-83b8476ac1dc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:29:13 np0005629333 nova_compute[244014]: 2026-02-25 12:29:13.899 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 07:29:13 np0005629333 nova_compute[244014]: 2026-02-25 12:29:13.900 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 07:29:13 np0005629333 nova_compute[244014]: 2026-02-25 12:29:13.993 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:29:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3763794133' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:29:14 np0005629333 nova_compute[244014]: 2026-02-25 12:29:14.545 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:29:14 np0005629333 nova_compute[244014]: 2026-02-25 12:29:14.549 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:29:14 np0005629333 nova_compute[244014]: 2026-02-25 12:29:14.577 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:29:14 np0005629333 nova_compute[244014]: 2026-02-25 12:29:14.603 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 07:29:14 np0005629333 nova_compute[244014]: 2026-02-25 12:29:14.603 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
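The inventory dict logged at 12:29:14.577 is what placement actually schedules against: usable capacity per resource class is (total - reserved) * allocation_ratio, so this host offers 32 schedulable VCPUs, 7167 MB of RAM, and about 52 GB of disk despite the raw 8 / 7679 / 59 figures in the final resource view. Worked out in a few lines:

    # Placement's capacity rule applied to the inventory logged above:
    # capacity = (total - reserved) * allocation_ratio.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2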
Feb 25 07:29:14 np0005629333 nova_compute[244014]: 2026-02-25 12:29:14.695 244018 DEBUG nova.network.neutron [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Successfully updated port: ab721623-aa6d-494f-8b90-6ffd63b7a33f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:29:14 np0005629333 nova_compute[244014]: 2026-02-25 12:29:14.710 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:29:14 np0005629333 nova_compute[244014]: 2026-02-25 12:29:14.711 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquired lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:29:14 np0005629333 nova_compute[244014]: 2026-02-25 12:29:14.711 244018 DEBUG nova.network.neutron [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:29:14 np0005629333 nova_compute[244014]: 2026-02-25 12:29:14.740 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:14 np0005629333 nova_compute[244014]: 2026-02-25 12:29:14.822 244018 DEBUG nova.compute.manager [req-fd6a20cd-88e9-4098-90fa-0d7a3d7bb575 req-3254a2bd-275d-4c8d-aba3-79d10c1a08cf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received event network-changed-ab721623-aa6d-494f-8b90-6ffd63b7a33f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:14 np0005629333 nova_compute[244014]: 2026-02-25 12:29:14.823 244018 DEBUG nova.compute.manager [req-fd6a20cd-88e9-4098-90fa-0d7a3d7bb575 req-3254a2bd-275d-4c8d-aba3-79d10c1a08cf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Refreshing instance network info cache due to event network-changed-ab721623-aa6d-494f-8b90-6ffd63b7a33f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:29:14 np0005629333 nova_compute[244014]: 2026-02-25 12:29:14.823 244018 DEBUG oslo_concurrency.lockutils [req-fd6a20cd-88e9-4098-90fa-0d7a3d7bb575 req-3254a2bd-275d-4c8d-aba3-79d10c1a08cf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:29:14 np0005629333 nova_compute[244014]: 2026-02-25 12:29:14.917 244018 DEBUG nova.network.neutron [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:29:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:29:15 np0005629333 nova_compute[244014]: 2026-02-25 12:29:15.218 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:15 np0005629333 nova_compute[244014]: 2026-02-25 12:29:15.604 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:29:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 407 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 252 KiB/s rd, 3.9 MiB/s wr, 144 op/s
Feb 25 07:29:15 np0005629333 nova_compute[244014]: 2026-02-25 12:29:15.992 244018 DEBUG nova.network.neutron [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Updating instance_info_cache with network_info: [{"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.019 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Releasing lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.020 244018 DEBUG nova.compute.manager [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Instance network_info: |[{"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
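The network_info blob logged twice above is the structure nova caches per instance and then feeds into guest XML generation. Extracting the practically useful fields from it is plain dict traversal; the literal below is a trimmed copy of the logged VIF:

    # Pulling the useful fields out of the network_info structure logged
    # above (trimmed to one VIF; keys copied from the log).
    network_info = [{
        'id': 'ab721623-aa6d-494f-8b90-6ffd63b7a33f',
        'address': 'fa:16:3e:53:6c:fd',
        'devname': 'tapab721623-aa',
        'network': {
            'bridge': 'br-int',
            'meta': {'mtu': 1442},
            'subnets': [{
                'cidr': '10.100.0.0/28',
                'ips': [{'address': '10.100.0.13', 'type': 'fixed'}],
            }],
        },
    }]
    for vif in network_info:
        ips = [ip['address']
               for subnet in vif['network']['subnets']
               for ip in subnet['ips']]
        print(vif['devname'], vif['address'], ips)
        # -> tapab721623-aa fa:16:3e:53:6c:fd ['10.100.0.13']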
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.021 244018 DEBUG oslo_concurrency.lockutils [req-fd6a20cd-88e9-4098-90fa-0d7a3d7bb575 req-3254a2bd-275d-4c8d-aba3-79d10c1a08cf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.022 244018 DEBUG nova.network.neutron [req-fd6a20cd-88e9-4098-90fa-0d7a3d7bb575 req-3254a2bd-275d-4c8d-aba3-79d10c1a08cf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Refreshing network info cache for port ab721623-aa6d-494f-8b90-6ffd63b7a33f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.027 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Start _get_guest_xml network_info=[{"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.033 244018 WARNING nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.038 244018 DEBUG nova.virt.libvirt.host [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.039 244018 DEBUG nova.virt.libvirt.host [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.045 244018 DEBUG nova.virt.libvirt.host [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.045 244018 DEBUG nova.virt.libvirt.host [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
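Before building the guest XML, nova checks for a usable CPU controller, trying cgroups v1 first and then v2; on this host only the unified hierarchy exposes one. One way to reproduce that probe outside nova, assuming the conventional mount points (nova's own checks differ in detail):

    # Rough reimplementation of the cgroup CPU-controller probe logged
    # above. Assumes the conventional /sys/fs/cgroup layout.
    import os

    def has_cgroupsv1_cpu_controller():
        return os.path.isdir('/sys/fs/cgroup/cpu')

    def has_cgroupsv2_cpu_controller():
        try:
            with open('/sys/fs/cgroup/cgroup.controllers') as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # not a unified-hierarchy host

    if __name__ == '__main__':
        print('v1 cpu controller:', has_cgroupsv1_cpu_controller())
        print('v2 cpu controller:', has_cgroupsv2_cpu_controller())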
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.046 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.046 244018 DEBUG nova.virt.hardware [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.046 244018 DEBUG nova.virt.hardware [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.047 244018 DEBUG nova.virt.hardware [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.047 244018 DEBUG nova.virt.hardware [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.047 244018 DEBUG nova.virt.hardware [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.047 244018 DEBUG nova.virt.hardware [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.047 244018 DEBUG nova.virt.hardware [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.047 244018 DEBUG nova.virt.hardware [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.048 244018 DEBUG nova.virt.hardware [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.048 244018 DEBUG nova.virt.hardware [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.048 244018 DEBUG nova.virt.hardware [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
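With preferred topology 0:0:0 and limits of 65536 apiece, the driver enumerates every sockets*cores*threads factorization of the vCPU count and then sorts the results against the preference; for 1 vCPU the only factorization is 1:1:1, hence "Got 1 possible topologies". A compact version of the enumeration step (ordering and preference handling omitted, so this is a simplification of nova's logic):

    # Enumerate every (sockets, cores, threads) whose product equals the
    # vCPU count, within the given limits, as in the log above.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            for cores in range(1, min(vcpus // sockets, max_cores) + 1):
                if (vcpus // sockets) % cores:
                    continue
                threads = vcpus // sockets // cores
                if threads <= max_threads:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]
    print(list(possible_topologies(4)))  # (1,1,4), (1,2,2), (1,4,1), (2,1,2), ...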
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.050 244018 DEBUG oslo_concurrency.processutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.466 244018 DEBUG oslo_concurrency.lockutils [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.466 244018 DEBUG oslo_concurrency.lockutils [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.467 244018 DEBUG oslo_concurrency.lockutils [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.467 244018 DEBUG oslo_concurrency.lockutils [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.467 244018 DEBUG oslo_concurrency.lockutils [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.469 244018 INFO nova.compute.manager [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Terminating instance#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.470 244018 DEBUG nova.compute.manager [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:29:16 np0005629333 kernel: tap9848fa6c-0a (unregistering): left promiscuous mode
Feb 25 07:29:16 np0005629333 NetworkManager[49836]: <info>  [1772022556.5451] device (tap9848fa6c-0a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.551 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:16Z|00589|binding|INFO|Releasing lport 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c from this chassis (sb_readonly=0)
Feb 25 07:29:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:16Z|00590|binding|INFO|Setting lport 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c down in Southbound
Feb 25 07:29:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:16Z|00591|binding|INFO|Removing iface tap9848fa6c-0a ovn-installed in OVS
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.557 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.562 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:16.565 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:3b:33 10.100.0.14'], port_security=['fa:16:3e:2e:3b:33 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ee9cd98b-1ca6-48e7-aa44-a09caf048a1c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'cf6e8407-f371-4f64-a4aa-eb412980736d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:29:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:16.568 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 9848fa6c-0a42-4cac-a3d8-2b90d5b7920c in datapath ce318891-cf3c-4d99-af7c-c01770f38194 unbound from our chassis#033[00m
Feb 25 07:29:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:16.572 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce318891-cf3c-4d99-af7c-c01770f38194#033[00m
Feb 25 07:29:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:16.588 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2943d324-7220-4d23-8446-79e7e689f602]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:16 np0005629333 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Feb 25 07:29:16 np0005629333 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d0000003f.scope: Consumed 5.326s CPU time.
Feb 25 07:29:16 np0005629333 systemd-machined[210048]: Machine qemu-75-instance-0000003f terminated.
Feb 25 07:29:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:16.621 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[76fe03af-147b-475b-a712-7623f4d8b215]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:16.625 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[00d89bfe-e20e-407f-b9f7-f445f867625f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:16.649 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[206aad5a-3684-4a38-8f26-daf7604f356e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:16.663 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5645f3a6-3506-4b74-bd53-465e6c3b73d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 616, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 616, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444530, 'reachable_time': 37500, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301477, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:16.676 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3893f223-db96-452e-b6d0-e48ca0a03aeb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapce318891-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444541, 'tstamp': 444541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301478, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce318891-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444544, 'tstamp': 444544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301478, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:16.678 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.679 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.683 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:16.684 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce318891-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:16.684 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:29:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:16.684 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce318891-c0, col_values=(('external_ids', {'iface-id': '3b184c15-8ef4-4e11-bd18-e1253a4ff440'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:16.685 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:29:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:29:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2728083525' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.701 244018 INFO nova.virt.libvirt.driver [-] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Instance destroyed successfully.#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.702 244018 DEBUG nova.objects.instance [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'resources' on Instance uuid ee9cd98b-1ca6-48e7-aa44-a09caf048a1c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.719 244018 DEBUG nova.virt.libvirt.vif [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:28:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-745827155',display_name='tempest-ServerActionsTestJSON-server-1953116819',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-745827155',id=63,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:29:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-ce0sx5g6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:29:12Z,user_data=None,user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=ee9cd98b-1ca6-48e7-aa44-a09caf048a1c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.719 244018 DEBUG nova.network.os_vif_util [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "address": "fa:16:3e:2e:3b:33", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9848fa6c-0a", "ovs_interfaceid": "9848fa6c-0a42-4cac-a3d8-2b90d5b7920c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.720 244018 DEBUG nova.network.os_vif_util [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3b:33,bridge_name='br-int',has_traffic_filtering=True,id=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9848fa6c-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.720 244018 DEBUG os_vif [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3b:33,bridge_name='br-int',has_traffic_filtering=True,id=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9848fa6c-0a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.722 244018 DEBUG oslo_concurrency.processutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.672s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.722 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.722 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9848fa6c-0a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.742 244018 DEBUG nova.storage.rbd_utils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] rbd image abe229eb-2238-4237-a7f2-83b8476ac1dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.746 244018 DEBUG oslo_concurrency.processutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.767 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.769 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.771 244018 INFO os_vif [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:3b:33,bridge_name='br-int',has_traffic_filtering=True,id=9848fa6c-0a42-4cac-a3d8-2b90d5b7920c,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9848fa6c-0a')#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.859 244018 DEBUG nova.compute.manager [req-50d61208-30b4-4d69-9af6-3d070504fbfd req-282f4fc4-d015-41ca-bd81-7b321808261f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received event network-vif-unplugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.859 244018 DEBUG oslo_concurrency.lockutils [req-50d61208-30b4-4d69-9af6-3d070504fbfd req-282f4fc4-d015-41ca-bd81-7b321808261f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.860 244018 DEBUG oslo_concurrency.lockutils [req-50d61208-30b4-4d69-9af6-3d070504fbfd req-282f4fc4-d015-41ca-bd81-7b321808261f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.860 244018 DEBUG oslo_concurrency.lockutils [req-50d61208-30b4-4d69-9af6-3d070504fbfd req-282f4fc4-d015-41ca-bd81-7b321808261f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.860 244018 DEBUG nova.compute.manager [req-50d61208-30b4-4d69-9af6-3d070504fbfd req-282f4fc4-d015-41ca-bd81-7b321808261f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] No waiting events found dispatching network-vif-unplugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:29:16 np0005629333 nova_compute[244014]: 2026-02-25 12:29:16.860 244018 DEBUG nova.compute.manager [req-50d61208-30b4-4d69-9af6-3d070504fbfd req-282f4fc4-d015-41ca-bd81-7b321808261f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received event network-vif-unplugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.220 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "bf261ccf-c216-4383-a22a-7f0553198152" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.221 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.238 244018 DEBUG nova.compute.manager [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.309 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.309 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.318 244018 DEBUG nova.virt.hardware [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.318 244018 INFO nova.compute.claims [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.323 244018 DEBUG nova.network.neutron [req-fd6a20cd-88e9-4098-90fa-0d7a3d7bb575 req-3254a2bd-275d-4c8d-aba3-79d10c1a08cf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Updated VIF entry in instance network info cache for port ab721623-aa6d-494f-8b90-6ffd63b7a33f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.324 244018 DEBUG nova.network.neutron [req-fd6a20cd-88e9-4098-90fa-0d7a3d7bb575 req-3254a2bd-275d-4c8d-aba3-79d10c1a08cf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Updating instance_info_cache with network_info: [{"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:29:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:29:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3015791985' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.360 244018 DEBUG oslo_concurrency.lockutils [req-fd6a20cd-88e9-4098-90fa-0d7a3d7bb575 req-3254a2bd-275d-4c8d-aba3-79d10c1a08cf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.377 244018 DEBUG oslo_concurrency.processutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.379 244018 DEBUG nova.virt.libvirt.vif [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:29:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1679149420',display_name='tempest-SecurityGroupsTestJSON-server-1679149420',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1679149420',id=66,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='780a93a8758a4bd78b22fe68ed6276cf',ramdisk_id='',reservation_id='r-q96j5rpg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-195828884',owner_user_name='tempest-SecurityGroupsTestJSON-195828884-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:29:10Z,user_data=None,user_id='bcb4ded096bc4f7993f96ca892b82333',uuid=abe229eb-2238-4237-a7f2-83b8476ac1dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.380 244018 DEBUG nova.network.os_vif_util [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converting VIF {"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.381 244018 DEBUG nova.network.os_vif_util [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:6c:fd,bridge_name='br-int',has_traffic_filtering=True,id=ab721623-aa6d-494f-8b90-6ffd63b7a33f,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab721623-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.383 244018 DEBUG nova.objects.instance [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lazy-loading 'pci_devices' on Instance uuid abe229eb-2238-4237-a7f2-83b8476ac1dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.402 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:29:17 np0005629333 nova_compute[244014]:  <uuid>abe229eb-2238-4237-a7f2-83b8476ac1dc</uuid>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:  <name>instance-00000042</name>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <nova:name>tempest-SecurityGroupsTestJSON-server-1679149420</nova:name>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:29:16</nova:creationTime>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:29:17 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:        <nova:user uuid="bcb4ded096bc4f7993f96ca892b82333">tempest-SecurityGroupsTestJSON-195828884-project-member</nova:user>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:        <nova:project uuid="780a93a8758a4bd78b22fe68ed6276cf">tempest-SecurityGroupsTestJSON-195828884</nova:project>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:        <nova:port uuid="ab721623-aa6d-494f-8b90-6ffd63b7a33f">
Feb 25 07:29:17 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <entry name="serial">abe229eb-2238-4237-a7f2-83b8476ac1dc</entry>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <entry name="uuid">abe229eb-2238-4237-a7f2-83b8476ac1dc</entry>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/abe229eb-2238-4237-a7f2-83b8476ac1dc_disk">
Feb 25 07:29:17 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:29:17 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/abe229eb-2238-4237-a7f2-83b8476ac1dc_disk.config">
Feb 25 07:29:17 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:29:17 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:53:6c:fd"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <target dev="tapab721623-aa"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/abe229eb-2238-4237-a7f2-83b8476ac1dc/console.log" append="off"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:29:17 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:29:17 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:29:17 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:29:17 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.403 244018 DEBUG nova.compute.manager [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Preparing to wait for external event network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.403 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.404 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.404 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.405 244018 DEBUG nova.virt.libvirt.vif [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:29:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1679149420',display_name='tempest-SecurityGroupsTestJSON-server-1679149420',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1679149420',id=66,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='780a93a8758a4bd78b22fe68ed6276cf',ramdisk_id='',reservation_id='r-q96j5rpg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-195828884',owner_user_name='tempest-SecurityGroupsTestJSON-195828884-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:29:10Z,user_data=None,user_id='bcb4ded096bc4f7993f96ca892b82333',uuid=abe229eb-2238-4237-a7f2-83b8476ac1dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.405 244018 DEBUG nova.network.os_vif_util [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converting VIF {"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.414 244018 DEBUG nova.network.os_vif_util [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:6c:fd,bridge_name='br-int',has_traffic_filtering=True,id=ab721623-aa6d-494f-8b90-6ffd63b7a33f,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab721623-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.415 244018 DEBUG os_vif [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:6c:fd,bridge_name='br-int',has_traffic_filtering=True,id=ab721623-aa6d-494f-8b90-6ffd63b7a33f,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab721623-aa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.416 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.417 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.417 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.425 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.425 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapab721623-aa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.426 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapab721623-aa, col_values=(('external_ids', {'iface-id': 'ab721623-aa6d-494f-8b90-6ffd63b7a33f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:6c:fd', 'vm-uuid': 'abe229eb-2238-4237-a7f2-83b8476ac1dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.428 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:17 np0005629333 NetworkManager[49836]: <info>  [1772022557.4297] manager: (tapab721623-aa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/260)
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.431 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.435 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.437 244018 INFO os_vif [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:6c:fd,bridge_name='br-int',has_traffic_filtering=True,id=ab721623-aa6d-494f-8b90-6ffd63b7a33f,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab721623-aa')#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.523 244018 DEBUG oslo_concurrency.processutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.556 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.557 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.557 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] No VIF found with MAC fa:16:3e:53:6c:fd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.558 244018 INFO nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Using config drive#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.582 244018 DEBUG nova.storage.rbd_utils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] rbd image abe229eb-2238-4237-a7f2-83b8476ac1dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.798 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022542.797305, 826789b1-e26a-4569-bd77-bd1ef76388be => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.798 244018 INFO nova.compute.manager [-] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:29:17 np0005629333 nova_compute[244014]: 2026-02-25 12:29:17.816 244018 DEBUG nova.compute.manager [None req-3a84685d-6a19-4319-a4c3-aeb4e53d0c8a - - - - - -] [instance: 826789b1-e26a-4569-bd77-bd1ef76388be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 407 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 215 op/s
Feb 25 07:29:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:29:18 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1170211199' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.078 244018 DEBUG oslo_concurrency.processutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.085 244018 DEBUG nova.compute.provider_tree [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.103 244018 DEBUG nova.scheduler.client.report [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.126 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.127 244018 DEBUG nova.compute.manager [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.180 244018 DEBUG nova.compute.manager [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.181 244018 DEBUG nova.network.neutron [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.202 244018 INFO nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.219 244018 DEBUG nova.compute.manager [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.316 244018 DEBUG nova.compute.manager [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.319 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.320 244018 INFO nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Creating image(s)
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.354 244018 DEBUG nova.storage.rbd_utils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image bf261ccf-c216-4383-a22a-7f0553198152_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.392 244018 DEBUG nova.storage.rbd_utils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image bf261ccf-c216-4383-a22a-7f0553198152_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.424 244018 DEBUG nova.storage.rbd_utils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image bf261ccf-c216-4383-a22a-7f0553198152_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.428 244018 DEBUG oslo_concurrency.processutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.500 244018 DEBUG oslo_concurrency.processutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.502 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.503 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.503 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.534 244018 DEBUG nova.storage.rbd_utils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image bf261ccf-c216-4383-a22a-7f0553198152_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.538 244018 DEBUG oslo_concurrency.processutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 bf261ccf-c216-4383-a22a-7f0553198152_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.689 244018 INFO nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Creating config drive at /var/lib/nova/instances/abe229eb-2238-4237-a7f2-83b8476ac1dc/disk.config
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.693 244018 DEBUG oslo_concurrency.processutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/abe229eb-2238-4237-a7f2-83b8476ac1dc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpg_4gn57z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.744 244018 DEBUG nova.policy [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3aef97e1c87341848423edc65828540c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c4445209a7384565a93895032b4f077e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.840 244018 DEBUG oslo_concurrency.processutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/abe229eb-2238-4237-a7f2-83b8476ac1dc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpg_4gn57z" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.870 244018 DEBUG nova.storage.rbd_utils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] rbd image abe229eb-2238-4237-a7f2-83b8476ac1dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.874 244018 DEBUG oslo_concurrency.processutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/abe229eb-2238-4237-a7f2-83b8476ac1dc/disk.config abe229eb-2238-4237-a7f2-83b8476ac1dc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.906 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:29:18 np0005629333 nova_compute[244014]: 2026-02-25 12:29:18.907 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 07:29:19 np0005629333 nova_compute[244014]: 2026-02-25 12:29:19.221 244018 DEBUG nova.compute.manager [req-903bc10f-fc6d-485b-b278-077063945e50 req-1b37a53b-d4fb-4089-89f4-992155e2a6f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received event network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:29:19 np0005629333 nova_compute[244014]: 2026-02-25 12:29:19.222 244018 DEBUG oslo_concurrency.lockutils [req-903bc10f-fc6d-485b-b278-077063945e50 req-1b37a53b-d4fb-4089-89f4-992155e2a6f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:29:19 np0005629333 nova_compute[244014]: 2026-02-25 12:29:19.222 244018 DEBUG oslo_concurrency.lockutils [req-903bc10f-fc6d-485b-b278-077063945e50 req-1b37a53b-d4fb-4089-89f4-992155e2a6f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:29:19 np0005629333 nova_compute[244014]: 2026-02-25 12:29:19.223 244018 DEBUG oslo_concurrency.lockutils [req-903bc10f-fc6d-485b-b278-077063945e50 req-1b37a53b-d4fb-4089-89f4-992155e2a6f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:19 np0005629333 nova_compute[244014]: 2026-02-25 12:29:19.223 244018 DEBUG nova.compute.manager [req-903bc10f-fc6d-485b-b278-077063945e50 req-1b37a53b-d4fb-4089-89f4-992155e2a6f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] No waiting events found dispatching network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:29:19 np0005629333 nova_compute[244014]: 2026-02-25 12:29:19.223 244018 WARNING nova.compute.manager [req-903bc10f-fc6d-485b-b278-077063945e50 req-1b37a53b-d4fb-4089-89f4-992155e2a6f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received unexpected event network-vif-plugged-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c for instance with vm_state active and task_state deleting.
Feb 25 07:29:19 np0005629333 nova_compute[244014]: 2026-02-25 12:29:19.743 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:29:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 407 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 161 op/s
Feb 25 07:29:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:29:20 np0005629333 nova_compute[244014]: 2026-02-25 12:29:20.492 244018 DEBUG oslo_concurrency.processutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 bf261ccf-c216-4383-a22a-7f0553198152_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.954s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:29:20 np0005629333 nova_compute[244014]: 2026-02-25 12:29:20.582 244018 DEBUG nova.storage.rbd_utils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] resizing rbd image bf261ccf-c216-4383-a22a-7f0553198152_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 07:29:20 np0005629333 nova_compute[244014]: 2026-02-25 12:29:20.968 244018 DEBUG nova.network.neutron [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Successfully created port: c75a4bcb-9292-41aa-b0c3-14b1433392e2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:29:21 np0005629333 nova_compute[244014]: 2026-02-25 12:29:21.125 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:29:21 np0005629333 nova_compute[244014]: 2026-02-25 12:29:21.126 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:29:21 np0005629333 nova_compute[244014]: 2026-02-25 12:29:21.147 244018 DEBUG nova.compute.manager [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:29:21 np0005629333 nova_compute[244014]: 2026-02-25 12:29:21.212 244018 DEBUG oslo_concurrency.processutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/abe229eb-2238-4237-a7f2-83b8476ac1dc/disk.config abe229eb-2238-4237-a7f2-83b8476ac1dc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:29:21 np0005629333 nova_compute[244014]: 2026-02-25 12:29:21.213 244018 INFO nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Deleting local config drive /var/lib/nova/instances/abe229eb-2238-4237-a7f2-83b8476ac1dc/disk.config because it was imported into RBD.
Feb 25 07:29:21 np0005629333 nova_compute[244014]: 2026-02-25 12:29:21.225 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:29:21 np0005629333 nova_compute[244014]: 2026-02-25 12:29:21.225 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:29:21 np0005629333 nova_compute[244014]: 2026-02-25 12:29:21.257 244018 DEBUG nova.virt.hardware [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:29:21 np0005629333 nova_compute[244014]: 2026-02-25 12:29:21.257 244018 INFO nova.compute.claims [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:29:21 np0005629333 kernel: tapab721623-aa: entered promiscuous mode
Feb 25 07:29:21 np0005629333 NetworkManager[49836]: <info>  [1772022561.2782] manager: (tapab721623-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/261)
Feb 25 07:29:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:21Z|00592|binding|INFO|Claiming lport ab721623-aa6d-494f-8b90-6ffd63b7a33f for this chassis.
Feb 25 07:29:21 np0005629333 nova_compute[244014]: 2026-02-25 12:29:21.278 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:29:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:21Z|00593|binding|INFO|ab721623-aa6d-494f-8b90-6ffd63b7a33f: Claiming fa:16:3e:53:6c:fd 10.100.0.13
Feb 25 07:29:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:21.287 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:6c:fd 10.100.0.13'], port_security=['fa:16:3e:53:6c:fd 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'abe229eb-2238-4237-a7f2-83b8476ac1dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a448894-87d7-4c8e-a168-2593011ffed7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '780a93a8758a4bd78b22fe68ed6276cf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '809ecded-d7db-4d4f-aed2-3cfc6bac71b9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc8be04a-7591-4819-939d-39b48e9a96cf, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ab721623-aa6d-494f-8b90-6ffd63b7a33f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:29:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:21.288 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ab721623-aa6d-494f-8b90-6ffd63b7a33f in datapath 9a448894-87d7-4c8e-a168-2593011ffed7 bound to our chassis
Feb 25 07:29:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:21.291 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9a448894-87d7-4c8e-a168-2593011ffed7
Feb 25 07:29:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:21Z|00594|binding|INFO|Setting lport ab721623-aa6d-494f-8b90-6ffd63b7a33f ovn-installed in OVS
Feb 25 07:29:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:21Z|00595|binding|INFO|Setting lport ab721623-aa6d-494f-8b90-6ffd63b7a33f up in Southbound
Feb 25 07:29:21 np0005629333 nova_compute[244014]: 2026-02-25 12:29:21.295 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:29:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:21.310 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e67adfb5-7e6e-4512-9cdd-66106f62f578]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:29:21 np0005629333 systemd-machined[210048]: New machine qemu-76-instance-00000042.
Feb 25 07:29:21 np0005629333 systemd-udevd[301797]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:29:21 np0005629333 systemd[1]: Started Virtual Machine qemu-76-instance-00000042.
Feb 25 07:29:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:21.342 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c6c6c366-a427-454c-bd73-b0bb8fedc7eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:29:21 np0005629333 NetworkManager[49836]: <info>  [1772022561.3482] device (tapab721623-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:29:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:21.347 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[13cd4534-eddb-485c-a41a-8976248d491a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:29:21 np0005629333 NetworkManager[49836]: <info>  [1772022561.3495] device (tapab721623-aa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:29:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:21.383 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d3083412-87d9-4613-a860-3b7c099aaa76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:29:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:21.401 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9e139ce4-7a5e-4257-b9ed-c9c06b89ae52]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9a448894-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:4c:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449633, 'reachable_time': 29076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301803, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:29:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:21.420 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a384e1c6-e3d5-4428-b8bb-87a578936821]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9a448894-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449644, 'tstamp': 449644}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301808, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9a448894-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449646, 'tstamp': 449646}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301808, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:29:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:21.423 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a448894-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:29:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:21.427 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9a448894-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:29:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:21.428 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 07:29:21 np0005629333 nova_compute[244014]: 2026-02-25 12:29:21.428 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:29:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:21.429 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9a448894-80, col_values=(('external_ids', {'iface-id': '6ef06fbe-6eb9-4d55-bbd8-9394a70da39f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:29:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:21.430 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 07:29:21 np0005629333 nova_compute[244014]: 2026-02-25 12:29:21.443 244018 DEBUG oslo_concurrency.processutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:29:21 np0005629333 nova_compute[244014]: 2026-02-25 12:29:21.484 244018 DEBUG nova.compute.manager [req-8ec2bdd4-24ed-4094-a1ce-b865ca7fcda9 req-09896613-68a3-46db-ba08-39e285187289 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received event network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:29:21 np0005629333 nova_compute[244014]: 2026-02-25 12:29:21.485 244018 DEBUG oslo_concurrency.lockutils [req-8ec2bdd4-24ed-4094-a1ce-b865ca7fcda9 req-09896613-68a3-46db-ba08-39e285187289 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:29:21 np0005629333 nova_compute[244014]: 2026-02-25 12:29:21.486 244018 DEBUG oslo_concurrency.lockutils [req-8ec2bdd4-24ed-4094-a1ce-b865ca7fcda9 req-09896613-68a3-46db-ba08-39e285187289 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:29:21 np0005629333 nova_compute[244014]: 2026-02-25 12:29:21.486 244018 DEBUG oslo_concurrency.lockutils [req-8ec2bdd4-24ed-4094-a1ce-b865ca7fcda9 req-09896613-68a3-46db-ba08-39e285187289 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:21 np0005629333 nova_compute[244014]: 2026-02-25 12:29:21.487 244018 DEBUG nova.compute.manager [req-8ec2bdd4-24ed-4094-a1ce-b865ca7fcda9 req-09896613-68a3-46db-ba08-39e285187289 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Processing event network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 07:29:21 np0005629333 nova_compute[244014]: 2026-02-25 12:29:21.872 244018 DEBUG nova.objects.instance [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'migration_context' on Instance uuid bf261ccf-c216-4383-a22a-7f0553198152 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:29:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 395 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.2 MiB/s wr, 197 op/s
Feb 25 07:29:21 np0005629333 nova_compute[244014]: 2026-02-25 12:29:21.889 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:29:21 np0005629333 nova_compute[244014]: 2026-02-25 12:29:21.889 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Ensure instance console log exists: /var/lib/nova/instances/bf261ccf-c216-4383-a22a-7f0553198152/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:29:21 np0005629333 nova_compute[244014]: 2026-02-25 12:29:21.891 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:29:21 np0005629333 nova_compute[244014]: 2026-02-25 12:29:21.892 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:29:21 np0005629333 nova_compute[244014]: 2026-02-25 12:29:21.892 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:29:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2591715701' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.009 244018 DEBUG oslo_concurrency.processutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.015 244018 DEBUG nova.compute.provider_tree [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.033 244018 DEBUG nova.scheduler.client.report [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.048 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022562.048216, abe229eb-2238-4237-a7f2-83b8476ac1dc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.049 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] VM Started (Lifecycle Event)
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.050 244018 DEBUG nova.compute.manager [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.053 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.054 244018 DEBUG nova.compute.manager [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.056 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.060 244018 INFO nova.virt.libvirt.driver [-] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Instance spawned successfully.
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.060 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.076 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.079 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.090 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.090 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.091 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.091 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.092 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.092 244018 DEBUG nova.virt.libvirt.driver [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.106 244018 DEBUG nova.compute.manager [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.107 244018 DEBUG nova.network.neutron [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.114 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.115 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.115 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022562.0490148, abe229eb-2238-4237-a7f2-83b8476ac1dc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.115 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] VM Paused (Lifecycle Event)
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.151 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.151 244018 INFO nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.157 244018 INFO nova.compute.manager [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Took 11.93 seconds to spawn the instance on the hypervisor.
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.157 244018 DEBUG nova.compute.manager [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.160 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022562.0526254, abe229eb-2238-4237-a7f2-83b8476ac1dc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.160 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] VM Resumed (Lifecycle Event)
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.185 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.186 244018 DEBUG nova.compute.manager [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.191 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.224 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.249 244018 INFO nova.compute.manager [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Took 13.02 seconds to build instance.
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.280 244018 DEBUG oslo_concurrency.lockutils [None req-90a69dce-6028-4ec9-85d8-bc05b3182000 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.299 244018 DEBUG nova.compute.manager [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.301 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.301 244018 INFO nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Creating image(s)
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.327 244018 DEBUG nova.storage.rbd_utils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.354 244018 DEBUG nova.storage.rbd_utils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.378 244018 DEBUG nova.storage.rbd_utils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.381 244018 DEBUG oslo_concurrency.processutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.428 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.473 244018 DEBUG oslo_concurrency.processutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.474 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.475 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.475 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.511 244018 DEBUG nova.storage.rbd_utils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.516 244018 DEBUG oslo_concurrency.processutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.545 244018 DEBUG nova.policy [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3aef97e1c87341848423edc65828540c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c4445209a7384565a93895032b4f077e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.553 244018 INFO nova.virt.libvirt.driver [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Deleting instance files /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_del#033[00m
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.554 244018 INFO nova.virt.libvirt.driver [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Deletion of /var/lib/nova/instances/ee9cd98b-1ca6-48e7-aa44-a09caf048a1c_del complete#033[00m
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.616 244018 INFO nova.compute.manager [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Took 6.15 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.617 244018 DEBUG oslo.service.loopingcall [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.618 244018 DEBUG nova.compute.manager [-] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.618 244018 DEBUG nova.network.neutron [-] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.641 244018 DEBUG nova.network.neutron [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Successfully updated port: c75a4bcb-9292-41aa-b0c3-14b1433392e2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.659 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "refresh_cache-bf261ccf-c216-4383-a22a-7f0553198152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.659 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquired lock "refresh_cache-bf261ccf-c216-4383-a22a-7f0553198152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.660 244018 DEBUG nova.network.neutron [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.748 244018 DEBUG nova.compute.manager [req-9b9b3791-0633-48ae-bc48-88a2b0adcb19 req-4eca0568-0096-458a-ba3a-a6be3b620216 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Received event network-changed-c75a4bcb-9292-41aa-b0c3-14b1433392e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.749 244018 DEBUG nova.compute.manager [req-9b9b3791-0633-48ae-bc48-88a2b0adcb19 req-4eca0568-0096-458a-ba3a-a6be3b620216 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Refreshing instance network info cache due to event network-changed-c75a4bcb-9292-41aa-b0c3-14b1433392e2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.749 244018 DEBUG oslo_concurrency.lockutils [req-9b9b3791-0633-48ae-bc48-88a2b0adcb19 req-4eca0568-0096-458a-ba3a-a6be3b620216 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-bf261ccf-c216-4383-a22a-7f0553198152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:29:22 np0005629333 nova_compute[244014]: 2026-02-25 12:29:22.909 244018 DEBUG nova.network.neutron [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:29:23 np0005629333 nova_compute[244014]: 2026-02-25 12:29:23.654 244018 DEBUG oslo_concurrency.processutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:29:23 np0005629333 nova_compute[244014]: 2026-02-25 12:29:23.805 244018 DEBUG nova.storage.rbd_utils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] resizing rbd image f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 25 07:29:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 407 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.3 MiB/s wr, 153 op/s
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.011 244018 DEBUG nova.compute.manager [req-d173ea54-5ab1-4a24-967b-ef217d5c8d3b req-fad877a0-61cc-4298-8066-c7b41d1481b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received event network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.011 244018 DEBUG oslo_concurrency.lockutils [req-d173ea54-5ab1-4a24-967b-ef217d5c8d3b req-fad877a0-61cc-4298-8066-c7b41d1481b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.011 244018 DEBUG oslo_concurrency.lockutils [req-d173ea54-5ab1-4a24-967b-ef217d5c8d3b req-fad877a0-61cc-4298-8066-c7b41d1481b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.011 244018 DEBUG oslo_concurrency.lockutils [req-d173ea54-5ab1-4a24-967b-ef217d5c8d3b req-fad877a0-61cc-4298-8066-c7b41d1481b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.011 244018 DEBUG nova.compute.manager [req-d173ea54-5ab1-4a24-967b-ef217d5c8d3b req-fad877a0-61cc-4298-8066-c7b41d1481b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] No waiting events found dispatching network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.011 244018 WARNING nova.compute.manager [req-d173ea54-5ab1-4a24-967b-ef217d5c8d3b req-fad877a0-61cc-4298-8066-c7b41d1481b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received unexpected event network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f for instance with vm_state active and task_state None.
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.135 244018 DEBUG nova.objects.instance [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'migration_context' on Instance uuid f7b18575-d1fc-423f-a596-8ca6d8ed08fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.148 244018 DEBUG nova.network.neutron [-] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.164 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.164 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Ensure instance console log exists: /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.164 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.165 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.165 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.169 244018 INFO nova.compute.manager [-] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Took 1.55 seconds to deallocate network for instance.
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.208 244018 DEBUG oslo_concurrency.lockutils [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.209 244018 DEBUG oslo_concurrency.lockutils [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.318 244018 DEBUG nova.network.neutron [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Successfully created port: 27c957cd-d68f-48d8-b2e1-170275200ed3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.350 244018 DEBUG oslo_concurrency.processutils [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.539 244018 DEBUG nova.network.neutron [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Updating instance_info_cache with network_info: [{"id": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "address": "fa:16:3e:03:8c:25", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75a4bcb-92", "ovs_interfaceid": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.558 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Releasing lock "refresh_cache-bf261ccf-c216-4383-a22a-7f0553198152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.559 244018 DEBUG nova.compute.manager [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Instance network_info: |[{"id": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "address": "fa:16:3e:03:8c:25", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75a4bcb-92", "ovs_interfaceid": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.559 244018 DEBUG oslo_concurrency.lockutils [req-9b9b3791-0633-48ae-bc48-88a2b0adcb19 req-4eca0568-0096-458a-ba3a-a6be3b620216 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-bf261ccf-c216-4383-a22a-7f0553198152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.560 244018 DEBUG nova.network.neutron [req-9b9b3791-0633-48ae-bc48-88a2b0adcb19 req-4eca0568-0096-458a-ba3a-a6be3b620216 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Refreshing network info cache for port c75a4bcb-9292-41aa-b0c3-14b1433392e2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.563 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Start _get_guest_xml network_info=[{"id": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "address": "fa:16:3e:03:8c:25", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75a4bcb-92", "ovs_interfaceid": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.569 244018 WARNING nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.578 244018 DEBUG nova.virt.libvirt.host [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.579 244018 DEBUG nova.virt.libvirt.host [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.583 244018 DEBUG nova.virt.libvirt.host [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.584 244018 DEBUG nova.virt.libvirt.host [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.584 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.584 244018 DEBUG nova.virt.hardware [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.585 244018 DEBUG nova.virt.hardware [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.585 244018 DEBUG nova.virt.hardware [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.586 244018 DEBUG nova.virt.hardware [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.586 244018 DEBUG nova.virt.hardware [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.586 244018 DEBUG nova.virt.hardware [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.586 244018 DEBUG nova.virt.hardware [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.587 244018 DEBUG nova.virt.hardware [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.587 244018 DEBUG nova.virt.hardware [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.587 244018 DEBUG nova.virt.hardware [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.587 244018 DEBUG nova.virt.hardware [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.591 244018 DEBUG oslo_concurrency.processutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.754 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.832 244018 DEBUG nova.compute.manager [req-929ff49a-c566-47a5-b3ff-1278a6a267d7 req-27ae22af-2154-4b5b-9926-f2e0e745b8cd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Received event network-vif-deleted-9848fa6c-0a42-4cac-a3d8-2b90d5b7920c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.879 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:29:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:29:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1826584454' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.936 244018 DEBUG oslo_concurrency.processutils [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.940 244018 DEBUG nova.compute.provider_tree [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.957 244018 DEBUG nova.scheduler.client.report [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:29:24 np0005629333 nova_compute[244014]: 2026-02-25 12:29:24.979 244018 DEBUG oslo_concurrency.lockutils [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.010 244018 INFO nova.scheduler.client.report [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Deleted allocations for instance ee9cd98b-1ca6-48e7-aa44-a09caf048a1c
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.080 244018 DEBUG oslo_concurrency.lockutils [None req-ddbcf79c-3f14-4027-9bf7-e7e61dbd9421 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "ee9cd98b-1ca6-48e7-aa44-a09caf048a1c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:29:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4228440592' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:29:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.203 244018 DEBUG oslo_concurrency.processutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.226 244018 DEBUG nova.storage.rbd_utils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image bf261ccf-c216-4383-a22a-7f0553198152_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.229 244018 DEBUG oslo_concurrency.processutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:29:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:29:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/35679060' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.784 244018 DEBUG oslo_concurrency.processutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.787 244018 DEBUG nova.virt.libvirt.vif [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:29:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-789545168',display_name='tempest-ServerRescueNegativeTestJSON-server-789545168',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-789545168',id=67,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4445209a7384565a93895032b4f077e',ramdisk_id='',reservation_id='r-3pw48dcp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-75220162',owner_user_name='tempest-ServerRescueNegativeTestJSON-75220162-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:29:18Z,user_data=None,user_id='3aef97e1c87341848423edc65828540c',uuid=bf261ccf-c216-4383-a22a-7f0553198152,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "address": "fa:16:3e:03:8c:25", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75a4bcb-92", "ovs_interfaceid": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.788 244018 DEBUG nova.network.os_vif_util [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Converting VIF {"id": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "address": "fa:16:3e:03:8c:25", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75a4bcb-92", "ovs_interfaceid": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.790 244018 DEBUG nova.network.os_vif_util [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:8c:25,bridge_name='br-int',has_traffic_filtering=True,id=c75a4bcb-9292-41aa-b0c3-14b1433392e2,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75a4bcb-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.792 244018 DEBUG nova.objects.instance [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'pci_devices' on Instance uuid bf261ccf-c216-4383-a22a-7f0553198152 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.811 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:29:25 np0005629333 nova_compute[244014]:  <uuid>bf261ccf-c216-4383-a22a-7f0553198152</uuid>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:  <name>instance-00000043</name>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-789545168</nova:name>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:29:24</nova:creationTime>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:29:25 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:        <nova:user uuid="3aef97e1c87341848423edc65828540c">tempest-ServerRescueNegativeTestJSON-75220162-project-member</nova:user>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:        <nova:project uuid="c4445209a7384565a93895032b4f077e">tempest-ServerRescueNegativeTestJSON-75220162</nova:project>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:        <nova:port uuid="c75a4bcb-9292-41aa-b0c3-14b1433392e2">
Feb 25 07:29:25 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <entry name="serial">bf261ccf-c216-4383-a22a-7f0553198152</entry>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <entry name="uuid">bf261ccf-c216-4383-a22a-7f0553198152</entry>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/bf261ccf-c216-4383-a22a-7f0553198152_disk">
Feb 25 07:29:25 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:29:25 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/bf261ccf-c216-4383-a22a-7f0553198152_disk.config">
Feb 25 07:29:25 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:29:25 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:03:8c:25"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <target dev="tapc75a4bcb-92"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/bf261ccf-c216-4383-a22a-7f0553198152/console.log" append="off"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:29:25 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:29:25 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:29:25 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:29:25 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
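The record ending above is the complete libvirt domain XML Nova rendered for instance bf261ccf-c216-4383-a22a-7f0553198152. When triaging such a dump, the Ceph-backed disk sources and the auth secret UUID are usually the interesting parts; the following is a minimal stdlib Python sketch for pulling them out, not Nova's own code, with the embedded XML abbreviated from the record above:

# Minimal sketch: extract RBD disk sources from a libvirt domain XML dump.
# Not Nova's code; the sample XML is abbreviated from the log record above.
import xml.etree.ElementTree as ET

domain_xml = """
<domain>
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/bf261ccf-c216-4383-a22a-7f0553198152_disk">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <auth username="openstack">
        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
      </auth>
      <target dev="vda" bus="virtio"/>
    </disk>
  </devices>
</domain>
"""

root = ET.fromstring(domain_xml)
for disk in root.iter("disk"):
    src = disk.find("source")
    if src is None or src.get("protocol") != "rbd":
        continue  # only the network/rbd disks matter here
    host = src.find("host")
    auth = disk.find("auth")
    secret = auth.find("secret") if auth is not None else None
    print(disk.find("target").get("dev"),
          src.get("name"),
          f"{host.get('name')}:{host.get('port')}" if host is not None else "-",
          secret.get("uuid") if secret is not None else "-")

Run against the full record, this prints one line per RBD disk (vda and the sda config-drive CD-ROM), each with its pool/image name, monitor endpoint, and secret UUID.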
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.813 244018 DEBUG nova.compute.manager [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Preparing to wait for external event network-vif-plugged-c75a4bcb-9292-41aa-b0c3-14b1433392e2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.814 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "bf261ccf-c216-4383-a22a-7f0553198152-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.815 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.815 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.816 244018 DEBUG nova.virt.libvirt.vif [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:29:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-789545168',display_name='tempest-ServerRescueNegativeTestJSON-server-789545168',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-789545168',id=67,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4445209a7384565a93895032b4f077e',ramdisk_id='',reservation_id='r-3pw48dcp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-75220162',owner_user_name='tempest-ServerRescueNegativeTestJSON-75220162-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:29:18Z,user_data=None,user_id='3aef97e1c87341848423edc65828540c',uuid=bf261ccf-c216-4383-a22a-7f0553198152,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "address": "fa:16:3e:03:8c:25", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75a4bcb-92", "ovs_interfaceid": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.817 244018 DEBUG nova.network.os_vif_util [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Converting VIF {"id": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "address": "fa:16:3e:03:8c:25", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75a4bcb-92", "ovs_interfaceid": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.818 244018 DEBUG nova.network.os_vif_util [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:8c:25,bridge_name='br-int',has_traffic_filtering=True,id=c75a4bcb-9292-41aa-b0c3-14b1433392e2,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75a4bcb-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.819 244018 DEBUG os_vif [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:8c:25,bridge_name='br-int',has_traffic_filtering=True,id=c75a4bcb-9292-41aa-b0c3-14b1433392e2,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75a4bcb-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.821 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.822 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.823 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.827 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.828 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc75a4bcb-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.829 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc75a4bcb-92, col_values=(('external_ids', {'iface-id': 'c75a4bcb-9292-41aa-b0c3-14b1433392e2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:03:8c:25', 'vm-uuid': 'bf261ccf-c216-4383-a22a-7f0553198152'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:25 np0005629333 NetworkManager[49836]: <info>  [1772022565.8336] manager: (tapc75a4bcb-92): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/262)
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.832 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.838 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.841 244018 INFO os_vif [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:8c:25,bridge_name='br-int',has_traffic_filtering=True,id=c75a4bcb-9292-41aa-b0c3-14b1433392e2,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75a4bcb-92')#033[00m
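The AddPortCommand/DbSetCommand transaction logged just before this is what os-vif drove through ovsdbapp to plug the port. The same plug can be reproduced by hand with ovs-vsctl, using the external_ids keys shown in the DbSetCommand; a hedged sketch, with the port, MAC, and UUID values copied from the log (this is not what Nova executes internally, since it speaks to ovsdb-server directly via ovsdbapp):

# Sketch: replicate the os-vif OVS plug with ovs-vsctl (values from the log).
import subprocess

port = "tapc75a4bcb-92"
iface_id = "c75a4bcb-9292-41aa-b0c3-14b1433392e2"
mac = "fa:16:3e:03:8c:25"
vm_uuid = "bf261ccf-c216-4383-a22a-7f0553198152"

subprocess.run(
    ["ovs-vsctl",
     "--may-exist", "add-port", "br-int", port,   # AddPortCommand(may_exist=True)
     "--", "set", "Interface", port,              # DbSetCommand on table Interface
     f"external_ids:iface-id={iface_id}",
     "external_ids:iface-status=active",
     f"external_ids:attached-mac={mac}",
     f"external_ids:vm-uuid={vm_uuid}"],
    check=True,
)

The iface-id external_id is what ovn-controller later matches when it claims the logical port for this chassis.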
Feb 25 07:29:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 407 MiB data, 782 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 124 op/s
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.948 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.949 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.949 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] No VIF found with MAC fa:16:3e:03:8c:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.950 244018 INFO nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Using config drive#033[00m
Feb 25 07:29:25 np0005629333 nova_compute[244014]: 2026-02-25 12:29:25.980 244018 DEBUG nova.storage.rbd_utils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image bf261ccf-c216-4383-a22a-7f0553198152_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:29:26 np0005629333 nova_compute[244014]: 2026-02-25 12:29:26.180 244018 DEBUG nova.compute.manager [req-f7dbf7c4-074f-43ea-93d4-40e095900a9d req-f3d7e32c-15f0-497b-8555-47573f1d0987 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received event network-changed-ab721623-aa6d-494f-8b90-6ffd63b7a33f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:26 np0005629333 nova_compute[244014]: 2026-02-25 12:29:26.181 244018 DEBUG nova.compute.manager [req-f7dbf7c4-074f-43ea-93d4-40e095900a9d req-f3d7e32c-15f0-497b-8555-47573f1d0987 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Refreshing instance network info cache due to event network-changed-ab721623-aa6d-494f-8b90-6ffd63b7a33f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:29:26 np0005629333 nova_compute[244014]: 2026-02-25 12:29:26.181 244018 DEBUG oslo_concurrency.lockutils [req-f7dbf7c4-074f-43ea-93d4-40e095900a9d req-f3d7e32c-15f0-497b-8555-47573f1d0987 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:29:26 np0005629333 nova_compute[244014]: 2026-02-25 12:29:26.182 244018 DEBUG oslo_concurrency.lockutils [req-f7dbf7c4-074f-43ea-93d4-40e095900a9d req-f3d7e32c-15f0-497b-8555-47573f1d0987 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:29:26 np0005629333 nova_compute[244014]: 2026-02-25 12:29:26.182 244018 DEBUG nova.network.neutron [req-f7dbf7c4-074f-43ea-93d4-40e095900a9d req-f3d7e32c-15f0-497b-8555-47573f1d0987 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Refreshing network info cache for port ab721623-aa6d-494f-8b90-6ffd63b7a33f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:29:26 np0005629333 nova_compute[244014]: 2026-02-25 12:29:26.186 244018 DEBUG nova.network.neutron [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Successfully updated port: 27c957cd-d68f-48d8-b2e1-170275200ed3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:29:26 np0005629333 nova_compute[244014]: 2026-02-25 12:29:26.213 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "refresh_cache-f7b18575-d1fc-423f-a596-8ca6d8ed08fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:29:26 np0005629333 nova_compute[244014]: 2026-02-25 12:29:26.213 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquired lock "refresh_cache-f7b18575-d1fc-423f-a596-8ca6d8ed08fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:29:26 np0005629333 nova_compute[244014]: 2026-02-25 12:29:26.214 244018 DEBUG nova.network.neutron [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:29:26 np0005629333 nova_compute[244014]: 2026-02-25 12:29:26.354 244018 DEBUG oslo_concurrency.lockutils [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "abe229eb-2238-4237-a7f2-83b8476ac1dc" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:26 np0005629333 nova_compute[244014]: 2026-02-25 12:29:26.356 244018 DEBUG oslo_concurrency.lockutils [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:26 np0005629333 nova_compute[244014]: 2026-02-25 12:29:26.356 244018 INFO nova.compute.manager [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Rebooting instance#033[00m
Feb 25 07:29:26 np0005629333 nova_compute[244014]: 2026-02-25 12:29:26.374 244018 DEBUG oslo_concurrency.lockutils [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:29:26 np0005629333 nova_compute[244014]: 2026-02-25 12:29:26.743 244018 DEBUG nova.network.neutron [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:29:27 np0005629333 nova_compute[244014]: 2026-02-25 12:29:27.252 244018 INFO nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Creating config drive at /var/lib/nova/instances/bf261ccf-c216-4383-a22a-7f0553198152/disk.config#033[00m
Feb 25 07:29:27 np0005629333 nova_compute[244014]: 2026-02-25 12:29:27.258 244018 DEBUG oslo_concurrency.processutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bf261ccf-c216-4383-a22a-7f0553198152/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxkhxuyd4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:27 np0005629333 nova_compute[244014]: 2026-02-25 12:29:27.314 244018 DEBUG nova.network.neutron [req-9b9b3791-0633-48ae-bc48-88a2b0adcb19 req-4eca0568-0096-458a-ba3a-a6be3b620216 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Updated VIF entry in instance network info cache for port c75a4bcb-9292-41aa-b0c3-14b1433392e2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:29:27 np0005629333 nova_compute[244014]: 2026-02-25 12:29:27.315 244018 DEBUG nova.network.neutron [req-9b9b3791-0633-48ae-bc48-88a2b0adcb19 req-4eca0568-0096-458a-ba3a-a6be3b620216 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Updating instance_info_cache with network_info: [{"id": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "address": "fa:16:3e:03:8c:25", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75a4bcb-92", "ovs_interfaceid": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:29:27 np0005629333 nova_compute[244014]: 2026-02-25 12:29:27.336 244018 DEBUG oslo_concurrency.lockutils [req-9b9b3791-0633-48ae-bc48-88a2b0adcb19 req-4eca0568-0096-458a-ba3a-a6be3b620216 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-bf261ccf-c216-4383-a22a-7f0553198152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:29:27 np0005629333 nova_compute[244014]: 2026-02-25 12:29:27.357 244018 DEBUG nova.compute.manager [req-04e1d027-a528-4acf-9398-3aa1f4699ab0 req-7cd4b149-6740-4cd1-ba22-ecbe6658b90f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received event network-changed-27c957cd-d68f-48d8-b2e1-170275200ed3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:27 np0005629333 nova_compute[244014]: 2026-02-25 12:29:27.358 244018 DEBUG nova.compute.manager [req-04e1d027-a528-4acf-9398-3aa1f4699ab0 req-7cd4b149-6740-4cd1-ba22-ecbe6658b90f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Refreshing instance network info cache due to event network-changed-27c957cd-d68f-48d8-b2e1-170275200ed3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:29:27 np0005629333 nova_compute[244014]: 2026-02-25 12:29:27.359 244018 DEBUG oslo_concurrency.lockutils [req-04e1d027-a528-4acf-9398-3aa1f4699ab0 req-7cd4b149-6740-4cd1-ba22-ecbe6658b90f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-f7b18575-d1fc-423f-a596-8ca6d8ed08fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:29:27 np0005629333 nova_compute[244014]: 2026-02-25 12:29:27.407 244018 DEBUG oslo_concurrency.processutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bf261ccf-c216-4383-a22a-7f0553198152/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxkhxuyd4" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
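The mkisofs invocation above builds the config-drive ISO from a temporary staging directory. Note that the multi-word -publisher value appears unquoted only because oslo.concurrency logs the argv joined with spaces; passing the argv as a list, as in this sketch of the same invocation, sidesteps the quoting question entirely:

# Sketch: rebuild the config-drive ISO with the same mkisofs flags Nova logged.
import subprocess

subprocess.run(
    ["/usr/bin/mkisofs",
     "-o", "/var/lib/nova/instances/bf261ccf-c216-4383-a22a-7f0553198152/disk.config",
     "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
     "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
     "-quiet", "-J", "-r",
     "-V", "config-2",          # the volume label cloud-init looks for
     "/tmp/tmpxkhxuyd4"],       # staging directory from the log (a temp path)
    check=True,
)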
Feb 25 07:29:27 np0005629333 nova_compute[244014]: 2026-02-25 12:29:27.448 244018 DEBUG nova.storage.rbd_utils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image bf261ccf-c216-4383-a22a-7f0553198152_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:29:27 np0005629333 nova_compute[244014]: 2026-02-25 12:29:27.453 244018 DEBUG oslo_concurrency.processutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bf261ccf-c216-4383-a22a-7f0553198152/disk.config bf261ccf-c216-4383-a22a-7f0553198152_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 453 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 221 op/s
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.291 244018 DEBUG oslo_concurrency.processutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bf261ccf-c216-4383-a22a-7f0553198152/disk.config bf261ccf-c216-4383-a22a-7f0553198152_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.837s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.292 244018 INFO nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Deleting local config drive /var/lib/nova/instances/bf261ccf-c216-4383-a22a-7f0553198152/disk.config because it was imported into RBD.#033[00m
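Because the instance's disks live in Ceph, the freshly built ISO is immediately imported into the vms pool as <uuid>_disk.config and the local file is removed, exactly as the two records above report. A sketch of the same two steps, reusing the logged rbd arguments:

# Sketch: import the config drive into RBD and drop the local copy,
# matching the `rbd import` invocation and cleanup reported above.
import os
import subprocess

local = "/var/lib/nova/instances/bf261ccf-c216-4383-a22a-7f0553198152/disk.config"
image = "bf261ccf-c216-4383-a22a-7f0553198152_disk.config"

subprocess.run(
    ["rbd", "import", "--pool", "vms", local, image,
     "--image-format=2",               # format 2, as in the logged command
     "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    check=True,
)
os.unlink(local)  # "Deleting local config drive ... imported into RBD"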
Feb 25 07:29:28 np0005629333 kernel: tapc75a4bcb-92: entered promiscuous mode
Feb 25 07:29:28 np0005629333 NetworkManager[49836]: <info>  [1772022568.3402] manager: (tapc75a4bcb-92): new Tun device (/org/freedesktop/NetworkManager/Devices/263)
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.343 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:28 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:28Z|00596|binding|INFO|Claiming lport c75a4bcb-9292-41aa-b0c3-14b1433392e2 for this chassis.
Feb 25 07:29:28 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:28Z|00597|binding|INFO|c75a4bcb-9292-41aa-b0c3-14b1433392e2: Claiming fa:16:3e:03:8c:25 10.100.0.12
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.361 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:28 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:28Z|00598|binding|INFO|Setting lport c75a4bcb-9292-41aa-b0c3-14b1433392e2 ovn-installed in OVS
Feb 25 07:29:28 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:28Z|00599|binding|INFO|Setting lport c75a4bcb-9292-41aa-b0c3-14b1433392e2 up in Southbound
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.363 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:8c:25 10.100.0.12'], port_security=['fa:16:3e:03:8c:25 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'bf261ccf-c216-4383-a22a-7f0553198152', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4445209a7384565a93895032b4f077e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8918a64f-99f7-4eb3-a626-bacb054cff5c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28589015-6a4f-416a-a72c-4ed4061d2d31, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=c75a4bcb-9292-41aa-b0c3-14b1433392e2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.365 157129 INFO neutron.agent.ovn.metadata.agent [-] Port c75a4bcb-9292-41aa-b0c3-14b1433392e2 in datapath 85b56c79-01b6-47e7-ab3b-02e44acca3d3 bound to our chassis#033[00m
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.368 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85b56c79-01b6-47e7-ab3b-02e44acca3d3#033[00m
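Once the port is bound to this chassis, the metadata agent provisions a per-network namespace (the log below shows it creating VETH tap85b56c79-01 inside ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3). A quick way to confirm the namespace came up, sketched here with the namespace name taken from the log (requires root and the iproute2 tooling already in use on this host):

# Sketch: inspect the OVN metadata namespace the agent provisions.
import subprocess

ns = "ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3"
subprocess.run(["ip", "netns", "exec", ns, "ip", "-o", "addr"], check=True)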
Feb 25 07:29:28 np0005629333 systemd-machined[210048]: New machine qemu-77-instance-00000043.
Feb 25 07:29:28 np0005629333 systemd[1]: Started Virtual Machine qemu-77-instance-00000043.
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.387 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[72e7bdb3-eea6-4d3d-b811-92d90fc252f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.389 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap85b56c79-01 in ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.391 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap85b56c79-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.391 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6e5448d1-8675-421d-a527-1743fac7098d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.392 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7a6578be-70c0-42aa-a2d1-20886f17babd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.405 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[204edc39-d02b-4498-abed-b7e1755c2eca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.418 244018 DEBUG nova.network.neutron [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Updating instance_info_cache with network_info: [{"id": "27c957cd-d68f-48d8-b2e1-170275200ed3", "address": "fa:16:3e:bf:b7:62", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27c957cd-d6", "ovs_interfaceid": "27c957cd-d68f-48d8-b2e1-170275200ed3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.421 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6c1b1421-9c3b-41ab-a3b6-cc6382e7ddff]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:28 np0005629333 systemd-udevd[302224]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:29:28 np0005629333 NetworkManager[49836]: <info>  [1772022568.4458] device (tapc75a4bcb-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:29:28 np0005629333 NetworkManager[49836]: <info>  [1772022568.4467] device (tapc75a4bcb-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.452 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cea68dd5-8253-49c9-800a-aeea971d158b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:28 np0005629333 systemd-udevd[302229]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:29:28 np0005629333 NetworkManager[49836]: <info>  [1772022568.4601] manager: (tap85b56c79-00): new Veth device (/org/freedesktop/NetworkManager/Devices/264)
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.458 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[633894f5-88d1-40e7-a9cb-aadfcd4081b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.463 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Releasing lock "refresh_cache-f7b18575-d1fc-423f-a596-8ca6d8ed08fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.464 244018 DEBUG nova.compute.manager [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Instance network_info: |[{"id": "27c957cd-d68f-48d8-b2e1-170275200ed3", "address": "fa:16:3e:bf:b7:62", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27c957cd-d6", "ovs_interfaceid": "27c957cd-d68f-48d8-b2e1-170275200ed3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.464 244018 DEBUG oslo_concurrency.lockutils [req-04e1d027-a528-4acf-9398-3aa1f4699ab0 req-7cd4b149-6740-4cd1-ba22-ecbe6658b90f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-f7b18575-d1fc-423f-a596-8ca6d8ed08fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.465 244018 DEBUG nova.network.neutron [req-04e1d027-a528-4acf-9398-3aa1f4699ab0 req-7cd4b149-6740-4cd1-ba22-ecbe6658b90f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Refreshing network info cache for port 27c957cd-d68f-48d8-b2e1-170275200ed3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.468 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Start _get_guest_xml network_info=[{"id": "27c957cd-d68f-48d8-b2e1-170275200ed3", "address": "fa:16:3e:bf:b7:62", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27c957cd-d6", "ovs_interfaceid": "27c957cd-d68f-48d8-b2e1-170275200ed3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.485 244018 WARNING nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.496 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b801dd27-a0d2-4cfd-acb6-8fa07e819d86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.500 244018 DEBUG nova.virt.libvirt.host [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.501 244018 DEBUG nova.virt.libvirt.host [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.507 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a8654fea-21f2-48fc-9ee8-028386423de7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.511 244018 DEBUG nova.virt.libvirt.host [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.511 244018 DEBUG nova.virt.libvirt.host [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.512 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.512 244018 DEBUG nova.virt.hardware [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.512 244018 DEBUG nova.virt.hardware [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.513 244018 DEBUG nova.virt.hardware [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.513 244018 DEBUG nova.virt.hardware [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.513 244018 DEBUG nova.virt.hardware [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.513 244018 DEBUG nova.virt.hardware [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.513 244018 DEBUG nova.virt.hardware [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.514 244018 DEBUG nova.virt.hardware [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.514 244018 DEBUG nova.virt.hardware [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.514 244018 DEBUG nova.virt.hardware [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.514 244018 DEBUG nova.virt.hardware [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
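The hardware.py records above show Nova enumerating valid guest CPU topologies: with no flavor or image constraints, the limits default to 65536 per dimension, and for a 1-vCPU flavor the only factorization is sockets=1, cores=1, threads=1. A toy re-derivation of that enumeration (not nova.virt.hardware itself) makes the search explicit:

# Toy re-derivation of the topology search logged above: enumerate
# (sockets, cores, threads) triples whose product equals the vCPU count,
# within per-dimension limits. Not Nova's implementation.
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    out = []
    for s in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % s:
            continue
        for c in range(1, min(vcpus // s, max_cores) + 1):
            if (vcpus // s) % c:
                continue
            t = vcpus // (s * c)
            if t <= max_threads:
                out.append((s, c, t))
    return out

print(possible_topologies(1))   # [(1, 1, 1)] -> matches "Got 1 possible topologies"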
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.517 244018 DEBUG oslo_concurrency.processutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
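Nova shells out to `ceph mon dump --format=json` to discover monitor addresses for the RBD connection. A sketch of running the same command and listing the monitors; the JSON layout (a top-level "mons" list with "name"/"addr" fields) is the usual shape of this dump and is assumed here, since the output itself is not shown in the log:

# Sketch: run the logged `ceph mon dump` and print monitor addresses.
# The "mons"/"name"/"addr" field names are assumed, not taken from this log.
import json
import subprocess

out = subprocess.run(
    ["ceph", "mon", "dump", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True,
).stdout
dump = json.loads(out)
for mon in dump.get("mons", []):
    print(mon.get("name"), mon.get("addr"))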
Feb 25 07:29:28 np0005629333 NetworkManager[49836]: <info>  [1772022568.5248] device (tap85b56c79-00): carrier: link connected
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.532 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8f25b533-b1df-45a5-8d2f-82f9bd4e1533]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.546 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b688eba6-e3e3-4eb6-a4bf-b34dfe90061e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85b56c79-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:20:24:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453793, 'reachable_time': 15353, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302250, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.561 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0211d7c1-331f-4b86-89cb-7910f551b408]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe20:24ce'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453793, 'tstamp': 453793}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302251, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.579 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[75ba23f5-b007-4a02-be33-884aff627ba1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85b56c79-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:20:24:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453793, 'reachable_time': 15353, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302252, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.603 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[69a642c1-756d-459c-b6f8-caabfe6c558a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.648 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[81cc115d-e2e7-4a27-b571-23342271ac1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.649 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85b56c79-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.650 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.650 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85b56c79-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:28 np0005629333 kernel: tap85b56c79-00: entered promiscuous mode
Feb 25 07:29:28 np0005629333 NetworkManager[49836]: <info>  [1772022568.6550] manager: (tap85b56c79-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/265)
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.657 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.660 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85b56c79-00, col_values=(('external_ids', {'iface-id': 'ad3b1754-7f6c-4114-8056-d68f2e9a25d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:28 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:28Z|00600|binding|INFO|Releasing lport ad3b1754-7f6c-4114-8056-d68f2e9a25d5 from this chassis (sb_readonly=0)
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.664 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/85b56c79-01b6-47e7-ab3b-02e44acca3d3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/85b56c79-01b6-47e7-ab3b-02e44acca3d3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.666 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[37930e9c-7931-4b9d-974e-bb459132fc10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.666 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-85b56c79-01b6-47e7-ab3b-02e44acca3d3
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/85b56c79-01b6-47e7-ab3b-02e44acca3d3.pid.haproxy
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 85b56c79-01b6-47e7-ab3b-02e44acca3d3
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:29:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:28.670 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'env', 'PROCESS_TAG=haproxy-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/85b56c79-01b6-47e7-ab3b-02e44acca3d3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.679 244018 DEBUG nova.network.neutron [req-f7dbf7c4-074f-43ea-93d4-40e095900a9d req-f3d7e32c-15f0-497b-8555-47573f1d0987 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Updated VIF entry in instance network info cache for port ab721623-aa6d-494f-8b90-6ffd63b7a33f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.679 244018 DEBUG nova.network.neutron [req-f7dbf7c4-074f-43ea-93d4-40e095900a9d req-f3d7e32c-15f0-497b-8555-47573f1d0987 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Updating instance_info_cache with network_info: [{"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.704 244018 DEBUG oslo_concurrency.lockutils [req-f7dbf7c4-074f-43ea-93d4-40e095900a9d req-f3d7e32c-15f0-497b-8555-47573f1d0987 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.705 244018 DEBUG oslo_concurrency.lockutils [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquired lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.705 244018 DEBUG nova.network.neutron [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.851 244018 DEBUG oslo_concurrency.lockutils [None req-8cf4db25-7a22-42e5-9873-9d4911a29c18 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.854 244018 DEBUG oslo_concurrency.lockutils [None req-8cf4db25-7a22-42e5-9873-9d4911a29c18 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.854 244018 DEBUG nova.compute.manager [None req-8cf4db25-7a22-42e5-9873-9d4911a29c18 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.872 244018 DEBUG nova.compute.manager [None req-8cf4db25-7a22-42e5-9873-9d4911a29c18 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.874 244018 DEBUG nova.objects.instance [None req-8cf4db25-7a22-42e5-9873-9d4911a29c18 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'flavor' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.887 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022568.8867865, bf261ccf-c216-4383-a22a-7f0553198152 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.887 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] VM Started (Lifecycle Event)#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.904 244018 DEBUG nova.virt.libvirt.driver [None req-8cf4db25-7a22-42e5-9873-9d4911a29c18 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.910 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.918 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022568.8875976, bf261ccf-c216-4383-a22a-7f0553198152 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.918 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.942 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.947 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:29:28 np0005629333 nova_compute[244014]: 2026-02-25 12:29:28.982 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:29:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:29:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1526574973' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.073 244018 DEBUG oslo_concurrency.processutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:29:29 np0005629333 podman[302344]: 2026-02-25 12:29:28.996389601 +0000 UTC m=+0.024183847 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.113 244018 DEBUG nova.storage.rbd_utils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.117 244018 DEBUG oslo_concurrency.processutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:29 np0005629333 podman[302344]: 2026-02-25 12:29:29.52883654 +0000 UTC m=+0.556630726 container create e2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:29:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:29:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3537716141' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.706 244018 DEBUG oslo_concurrency.processutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:29:29 np0005629333 systemd[1]: Started libpod-conmon-e2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e.scope.
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.710 244018 DEBUG nova.virt.libvirt.vif [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:29:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1779266838',display_name='tempest-ServerRescueNegativeTestJSON-server-1779266838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1779266838',id=68,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4445209a7384565a93895032b4f077e',ramdisk_id='',reservation_id='r-ut2etugv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-75220162',owner_user_name='tempest-ServerRescueNegativeTestJSON-75220162-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:29:22Z,user_data=None,user_id='3aef97e1c87341848423edc65828540c',uuid=f7b18575-d1fc-423f-a596-8ca6d8ed08fa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27c957cd-d68f-48d8-b2e1-170275200ed3", "address": "fa:16:3e:bf:b7:62", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27c957cd-d6", "ovs_interfaceid": "27c957cd-d68f-48d8-b2e1-170275200ed3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.711 244018 DEBUG nova.network.os_vif_util [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Converting VIF {"id": "27c957cd-d68f-48d8-b2e1-170275200ed3", "address": "fa:16:3e:bf:b7:62", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27c957cd-d6", "ovs_interfaceid": "27c957cd-d68f-48d8-b2e1-170275200ed3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.713 244018 DEBUG nova.network.os_vif_util [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:b7:62,bridge_name='br-int',has_traffic_filtering=True,id=27c957cd-d68f-48d8-b2e1-170275200ed3,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27c957cd-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.715 244018 DEBUG nova.objects.instance [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'pci_devices' on Instance uuid f7b18575-d1fc-423f-a596-8ca6d8ed08fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:29 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.740 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:29:29 np0005629333 nova_compute[244014]:  <uuid>f7b18575-d1fc-423f-a596-8ca6d8ed08fa</uuid>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:  <name>instance-00000044</name>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-1779266838</nova:name>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:29:28</nova:creationTime>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:29:29 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:        <nova:user uuid="3aef97e1c87341848423edc65828540c">tempest-ServerRescueNegativeTestJSON-75220162-project-member</nova:user>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:        <nova:project uuid="c4445209a7384565a93895032b4f077e">tempest-ServerRescueNegativeTestJSON-75220162</nova:project>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:        <nova:port uuid="27c957cd-d68f-48d8-b2e1-170275200ed3">
Feb 25 07:29:29 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <entry name="serial">f7b18575-d1fc-423f-a596-8ca6d8ed08fa</entry>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <entry name="uuid">f7b18575-d1fc-423f-a596-8ca6d8ed08fa</entry>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:29:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d4ba589ec8c9c7feb5a1c1945ac7be1440c19e55dcd78f4845f04574fea8475/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk">
Feb 25 07:29:29 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:29:29 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.config">
Feb 25 07:29:29 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:29:29 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:bf:b7:62"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <target dev="tap27c957cd-d6"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/console.log" append="off"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:29:29 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:29:29 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:29:29 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:29:29 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.740 244018 DEBUG nova.compute.manager [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Preparing to wait for external event network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.741 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.741 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.741 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.742 244018 DEBUG nova.virt.libvirt.vif [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:29:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1779266838',display_name='tempest-ServerRescueNegativeTestJSON-server-1779266838',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1779266838',id=68,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c4445209a7384565a93895032b4f077e',ramdisk_id='',reservation_id='r-ut2etugv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-75220162',owner_user_name='tempest-ServerRescueNegativeTestJSON-75220162-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:29:22Z,user_data=None,user_id='3aef97e1c87341848423edc65828540c',uuid=f7b18575-d1fc-423f-a596-8ca6d8ed08fa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "27c957cd-d68f-48d8-b2e1-170275200ed3", "address": "fa:16:3e:bf:b7:62", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27c957cd-d6", "ovs_interfaceid": "27c957cd-d68f-48d8-b2e1-170275200ed3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.742 244018 DEBUG nova.network.os_vif_util [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Converting VIF {"id": "27c957cd-d68f-48d8-b2e1-170275200ed3", "address": "fa:16:3e:bf:b7:62", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27c957cd-d6", "ovs_interfaceid": "27c957cd-d68f-48d8-b2e1-170275200ed3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.742 244018 DEBUG nova.network.os_vif_util [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:b7:62,bridge_name='br-int',has_traffic_filtering=True,id=27c957cd-d68f-48d8-b2e1-170275200ed3,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27c957cd-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.743 244018 DEBUG os_vif [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:b7:62,bridge_name='br-int',has_traffic_filtering=True,id=27c957cd-d68f-48d8-b2e1-170275200ed3,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27c957cd-d6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.743 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.743 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.744 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.748 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.753 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.753 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27c957cd-d6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.754 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap27c957cd-d6, col_values=(('external_ids', {'iface-id': '27c957cd-d68f-48d8-b2e1-170275200ed3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bf:b7:62', 'vm-uuid': 'f7b18575-d1fc-423f-a596-8ca6d8ed08fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:29 np0005629333 NetworkManager[49836]: <info>  [1772022569.7559] manager: (tap27c957cd-d6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/266)
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.758 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.763 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.765 244018 INFO os_vif [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:b7:62,bridge_name='br-int',has_traffic_filtering=True,id=27c957cd-d68f-48d8-b2e1-170275200ed3,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27c957cd-d6')#033[00m
Feb 25 07:29:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 453 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 151 op/s
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.896 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.955 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.956 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.956 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] No VIF found with MAC fa:16:3e:bf:b7:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.956 244018 INFO nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Using config drive#033[00m
Feb 25 07:29:29 np0005629333 nova_compute[244014]: 2026-02-25 12:29:29.986 244018 DEBUG nova.storage.rbd_utils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:29:30 np0005629333 podman[302344]: 2026-02-25 12:29:30.048175627 +0000 UTC m=+1.075969823 container init e2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 25 07:29:30 np0005629333 podman[302344]: 2026-02-25 12:29:30.057257624 +0000 UTC m=+1.085051820 container start e2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 25 07:29:30 np0005629333 neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3[302401]: [NOTICE]   (302426) : New worker (302428) forked
Feb 25 07:29:30 np0005629333 neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3[302401]: [NOTICE]   (302426) : Loading success.
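[annotation] The neutron-haproxy-ovnmeta container started above wraps a per-network haproxy that answers 169.254.169.254 metadata requests inside that network's namespace. Conceptually it amounts to the sketch below; this is hedged, since the real agent renders the haproxy config from a template and launches it through the podman container seen above, and the config path here is a placeholder, not the deployment's actual file:

    import subprocess

    # Run haproxy inside the ovnmeta namespace for this network.
    # The config path is invented for illustration.
    subprocess.check_call([
        'ip', 'netns', 'exec', 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3',
        'haproxy', '-f', '/etc/neutron/ovn-metadata-proxy/85b56c79.conf'])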
Feb 25 07:29:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:29:30 np0005629333 nova_compute[244014]: 2026-02-25 12:29:30.414 244018 INFO nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Creating config drive at /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/disk.config#033[00m
Feb 25 07:29:30 np0005629333 nova_compute[244014]: 2026-02-25 12:29:30.419 244018 DEBUG oslo_concurrency.processutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpwdbfqr1r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:30 np0005629333 nova_compute[244014]: 2026-02-25 12:29:30.560 244018 DEBUG oslo_concurrency.processutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpwdbfqr1r" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
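[annotation] The config drive is a throwaway ISO9660 volume labelled config-2, built from a staged temporary directory. The two log lines above show the full invocation and its 0.141s round trip; reproduced through oslo.concurrency as a sketch, with every argument copied from the log (this is not nova's wrapper itself):

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        '/usr/bin/mkisofs', '-o',
        '/var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmpwdbfqr1r')  # the staged metadata tree from the log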
Feb 25 07:29:30 np0005629333 nova_compute[244014]: 2026-02-25 12:29:30.632 244018 DEBUG nova.storage.rbd_utils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:29:30 np0005629333 nova_compute[244014]: 2026-02-25 12:29:30.637 244018 DEBUG oslo_concurrency.processutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/disk.config f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:30 np0005629333 nova_compute[244014]: 2026-02-25 12:29:30.779 244018 DEBUG nova.network.neutron [req-04e1d027-a528-4acf-9398-3aa1f4699ab0 req-7cd4b149-6740-4cd1-ba22-ecbe6658b90f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Updated VIF entry in instance network info cache for port 27c957cd-d68f-48d8-b2e1-170275200ed3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:29:30 np0005629333 nova_compute[244014]: 2026-02-25 12:29:30.780 244018 DEBUG nova.network.neutron [req-04e1d027-a528-4acf-9398-3aa1f4699ab0 req-7cd4b149-6740-4cd1-ba22-ecbe6658b90f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Updating instance_info_cache with network_info: [{"id": "27c957cd-d68f-48d8-b2e1-170275200ed3", "address": "fa:16:3e:bf:b7:62", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27c957cd-d6", "ovs_interfaceid": "27c957cd-d68f-48d8-b2e1-170275200ed3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
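[annotation] The instance_info_cache payload above is plain JSON. A small sketch for pulling the fixed IPs and MTU out of such a cache entry, assuming the list has already been parsed into a Python object named network_info:

    def fixed_ips(network_info):
        # Yield (port_id, address, mtu) for every fixed IP in the cache entry.
        for vif in network_info:
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    if ip['type'] == 'fixed':
                        yield vif['id'], ip['address'], vif['network']['meta']['mtu']

    # For the entry above this yields:
    # ('27c957cd-d68f-48d8-b2e1-170275200ed3', '10.100.0.4', 1442)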
Feb 25 07:29:30 np0005629333 nova_compute[244014]: 2026-02-25 12:29:30.804 244018 DEBUG oslo_concurrency.lockutils [req-04e1d027-a528-4acf-9398-3aa1f4699ab0 req-7cd4b149-6740-4cd1-ba22-ecbe6658b90f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-f7b18575-d1fc-423f-a596-8ca6d8ed08fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:29:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:29:30
Feb 25 07:29:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:29:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:29:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['volumes', 'images', 'default.rgw.control', 'backups', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta']
Feb 25 07:29:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
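[annotation] The balancer pass above ran in upmap mode over all eleven pools and prepared 0 of an allowed 10 upmap changes, i.e. the cluster is already balanced. The same state can be read back from the mgr; a hedged sketch, assuming the openstack client has the mgr caps this command needs:

    import json
    import subprocess

    status = json.loads(subprocess.check_output(
        ['ceph', 'balancer', 'status', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']))
    print(status['mode'], status['active'])   # expect: upmap True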
Feb 25 07:29:30 np0005629333 nova_compute[244014]: 2026-02-25 12:29:30.934 244018 DEBUG nova.network.neutron [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Updating instance_info_cache with network_info: [{"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:29:30 np0005629333 nova_compute[244014]: 2026-02-25 12:29:30.963 244018 DEBUG oslo_concurrency.lockutils [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Releasing lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:29:30 np0005629333 nova_compute[244014]: 2026-02-25 12:29:30.965 244018 DEBUG nova.compute.manager [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:31 np0005629333 kernel: tapab721623-aa (unregistering): left promiscuous mode
Feb 25 07:29:31 np0005629333 NetworkManager[49836]: <info>  [1772022571.3438] device (tapab721623-aa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:29:31 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:31Z|00601|binding|INFO|Releasing lport ab721623-aa6d-494f-8b90-6ffd63b7a33f from this chassis (sb_readonly=0)
Feb 25 07:29:31 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:31Z|00602|binding|INFO|Setting lport ab721623-aa6d-494f-8b90-6ffd63b7a33f down in Southbound
Feb 25 07:29:31 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:31Z|00603|binding|INFO|Removing iface tapab721623-aa ovn-installed in OVS
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.356 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.368 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:6c:fd 10.100.0.13'], port_security=['fa:16:3e:53:6c:fd 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'abe229eb-2238-4237-a7f2-83b8476ac1dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a448894-87d7-4c8e-a168-2593011ffed7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '780a93a8758a4bd78b22fe68ed6276cf', 'neutron:revision_number': '5', 'neutron:security_group_ids': '809ecded-d7db-4d4f-aed2-3cfc6bac71b9 bdf2b6e4-9059-4e5b-93fb-9739a22eb004', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc8be04a-7591-4819-939d-39b48e9a96cf, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ab721623-aa6d-494f-8b90-6ffd63b7a33f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:29:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.371 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ab721623-aa6d-494f-8b90-6ffd63b7a33f in datapath 9a448894-87d7-4c8e-a168-2593011ffed7 unbound from our chassis#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.371 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.374 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9a448894-87d7-4c8e-a168-2593011ffed7#033[00m
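[annotation] The unbind above is driven by the event matched at 12:29:31.368: the agent registers row events against the southbound Port_Binding table and reacts when a port's chassis column changes. A simplified, hypothetical event class in the ovsdbapp style, not neutron's actual implementation (the agent hook name is invented):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingChassisEvent(row_event.RowEvent):
        """Fire on Port_Binding updates where the chassis column changed."""

        def __init__(self, agent):
            self.agent = agent
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # 'old' only carries columns that changed, so a chassis
            # attribute on it is the bind/unbind signal seen above.
            return hasattr(old, 'chassis')

        def run(self, event, row, old):
            # Illustrative agent hook: re-evaluate metadata for the datapath.
            self.agent.update_datapath(row.datapath, row.logical_port)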
Feb 25 07:29:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.391 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5fcdcf09-7d6d-473e-8da3-48a5bd834a2a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:31 np0005629333 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000042.scope: Deactivated successfully.
Feb 25 07:29:31 np0005629333 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000042.scope: Consumed 9.871s CPU time.
Feb 25 07:29:31 np0005629333 systemd-machined[210048]: Machine qemu-76-instance-00000042 terminated.
Feb 25 07:29:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.422 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[16c24d04-1c6b-46ff-bd34-3c40186cee75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.426 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b1f5eced-6631-4c74-8089-13ff795dacf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.458 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2a8bd20e-71be-45b0-b9e1-46ef0b289e34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.479 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7a1755b1-e917-4828-b54c-43f5d4330c51]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9a448894-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:4c:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449633, 'reachable_time': 29076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302490, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.498 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[54ce0d52-0935-471e-aad3-881d9cc17165]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9a448894-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449644, 'tstamp': 449644}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302491, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9a448894-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449646, 'tstamp': 449646}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302491, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
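[annotation] The two privsep replies above are pyroute2 netlink dumps (an RTM_NEWLINK, then RTM_NEWADDR records) taken inside the ovnmeta namespace; they confirm tap9a448894-81 is up and carries both the subnet address 10.100.0.2/28 and the metadata address 169.254.169.254/32. Roughly equivalent, as a hedged sketch:

    from pyroute2 import NetNS

    with NetNS('ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7') as ns:
        idx = ns.link_lookup(ifname='tap9a448894-81')[0]
        for addr in ns.get_addr(index=idx):
            # Prints 10.100.0.2 and 169.254.169.254, per the dump above.
            print(addr.get_attr('IFA_ADDRESS'))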
Feb 25 07:29:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.500 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a448894-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.502 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.510 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.511 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9a448894-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.511 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:29:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.512 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9a448894-80, col_values=(('external_ids', {'iface-id': '6ef06fbe-6eb9-4d55-bbd8-9394a70da39f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.513 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
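[annotation] The transactions above re-home the metadata tap: delete tap9a448894-80 from br-ex if it is there, add it to br-int, and point its external_ids:iface-id at the metadata port. The same commands through ovsdbapp's Open_vSwitch schema API, as a hedged sketch; the socket path and timeout are assumptions:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.del_port('tap9a448894-80', bridge='br-ex', if_exists=True))
        txn.add(ovs.add_port('br-int', 'tap9a448894-80', may_exist=True))
        txn.add(ovs.db_set(
            'Interface', 'tap9a448894-80',
            ('external_ids', {'iface-id': '6ef06fbe-6eb9-4d55-bbd8-9394a70da39f'})))

Note that the agent actually issues these as three separate one-command transactions, which is why each log line above reads "txn n=1"; the second and third report "Transaction caused no change" because the port was already on br-int with the right iface-id.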
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.570 244018 INFO nova.virt.libvirt.driver [-] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Instance destroyed successfully.#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.571 244018 DEBUG nova.objects.instance [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lazy-loading 'resources' on Instance uuid abe229eb-2238-4237-a7f2-83b8476ac1dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.585 244018 DEBUG nova.virt.libvirt.vif [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:29:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1679149420',display_name='tempest-SecurityGroupsTestJSON-server-1679149420',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1679149420',id=66,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:29:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='780a93a8758a4bd78b22fe68ed6276cf',ramdisk_id='',reservation_id='r-q96j5rpg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-195828884',owner_user_name='tempest-SecurityGroupsTestJSON-195828884-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:29:31Z,user_data=None,user_id='bcb4ded096bc4f7993f96ca892b82333',uuid=abe229eb-2238-4237-a7f2-83b8476ac1dc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.586 244018 DEBUG nova.network.os_vif_util [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converting VIF {"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.587 244018 DEBUG nova.network.os_vif_util [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:53:6c:fd,bridge_name='br-int',has_traffic_filtering=True,id=ab721623-aa6d-494f-8b90-6ffd63b7a33f,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab721623-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.588 244018 DEBUG os_vif [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:6c:fd,bridge_name='br-int',has_traffic_filtering=True,id=ab721623-aa6d-494f-8b90-6ffd63b7a33f,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab721623-aa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:29:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:29:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:29:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:29:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.591 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.592 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab721623-aa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:29:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.595 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.597 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.601 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.604 244018 INFO os_vif [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:6c:fd,bridge_name='br-int',has_traffic_filtering=True,id=ab721623-aa6d-494f-8b90-6ffd63b7a33f,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab721623-aa')#033[00m
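[annotation] The unplug logged above goes through os-vif's ovs plugin, which issues the DelPortCommand on br-int seen at 12:29:31.592. Driving the same operation directly, as a hedged sketch; the VIF fields mirror the VIFOpenVSwitch repr in the log, while the InstanceInfo name is only a guess:

    import os_vif
    from os_vif import objects

    os_vif.initialize()   # loads the 'ovs' plugin among others
    vif = objects.vif.VIFOpenVSwitch(
        id='ab721623-aa6d-494f-8b90-6ffd63b7a33f',
        address='fa:16:3e:53:6c:fd',
        bridge_name='br-int',
        vif_name='tapab721623-aa',
        plugin='ovs',
        network=objects.network.Network(id='9a448894-87d7-4c8e-a168-2593011ffed7'))
    instance = objects.instance_info.InstanceInfo(
        uuid='abe229eb-2238-4237-a7f2-83b8476ac1dc',
        name='instance-00000042')   # name is illustrative
    os_vif.unplug(vif, instance)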
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.614 244018 DEBUG nova.virt.libvirt.driver [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Start _get_guest_xml network_info=[{"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.619 244018 WARNING nova.virt.libvirt.driver [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.625 244018 DEBUG nova.virt.libvirt.host [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.626 244018 DEBUG nova.virt.libvirt.host [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.630 244018 DEBUG nova.virt.libvirt.host [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.631 244018 DEBUG nova.virt.libvirt.host [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.631 244018 DEBUG nova.virt.libvirt.driver [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.632 244018 DEBUG nova.virt.hardware [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.633 244018 DEBUG nova.virt.hardware [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.633 244018 DEBUG nova.virt.hardware [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.634 244018 DEBUG nova.virt.hardware [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.634 244018 DEBUG nova.virt.hardware [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.635 244018 DEBUG nova.virt.hardware [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.635 244018 DEBUG nova.virt.hardware [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.635 244018 DEBUG nova.virt.hardware [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.636 244018 DEBUG nova.virt.hardware [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.636 244018 DEBUG nova.virt.hardware [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.637 244018 DEBUG nova.virt.hardware [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
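[annotation] The topology search in the lines above is an enumeration of (sockets, cores, threads) triples whose product matches the vCPU count, filtered by the 65536 default limits and sorted by preference; with one vCPU the only candidate is 1:1:1. A toy re-derivation, not nova's code:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Enumerate every factorization of the vCPU count into
        # sockets * cores * threads, within the limits from the log.
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            for cores in range(1, min(vcpus, max_cores) + 1):
                for threads in range(1, min(vcpus, max_threads) + 1):
                    if sockets * cores * threads == vcpus:
                        yield sockets, cores, threads

    print(list(possible_topologies(1)))   # [(1, 1, 1)], matching the log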
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.637 244018 DEBUG nova.objects.instance [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lazy-loading 'vcpu_model' on Instance uuid abe229eb-2238-4237-a7f2-83b8476ac1dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.675 244018 DEBUG oslo_concurrency.processutils [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
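[annotation] nova shells out to "ceph mon dump --format=json" to learn the monitor addresses it needs when assembling RBD disk definitions. Parsing that output, as a hedged sketch (the key names follow the standard mon dump JSON):

    import json
    import subprocess

    mon_dump = json.loads(subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']))
    mon_addrs = [m['public_addr'] for m in mon_dump['mons']]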
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.712 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022556.6982589, ee9cd98b-1ca6-48e7-aa44-a09caf048a1c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.713 244018 INFO nova.compute.manager [-] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.739 244018 DEBUG nova.compute.manager [None req-39648e57-89ff-41b4-9760-0fcf089e4b43 - - - - - -] [instance: ee9cd98b-1ca6-48e7-aa44-a09caf048a1c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.854 244018 DEBUG oslo_concurrency.processutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/disk.config f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.216s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.855 244018 INFO nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Deleting local config drive /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/disk.config because it was imported into RBD.#033[00m
Feb 25 07:29:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:29:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:29:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:29:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:29:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:29:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:29:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:29:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:29:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:29:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:29:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 453 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 159 op/s
Feb 25 07:29:31 np0005629333 kernel: tap27c957cd-d6: entered promiscuous mode
Feb 25 07:29:31 np0005629333 systemd-udevd[302482]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:29:31 np0005629333 NetworkManager[49836]: <info>  [1772022571.9242] manager: (tap27c957cd-d6): new Tun device (/org/freedesktop/NetworkManager/Devices/267)
Feb 25 07:29:31 np0005629333 NetworkManager[49836]: <info>  [1772022571.9352] device (tap27c957cd-d6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:29:31 np0005629333 NetworkManager[49836]: <info>  [1772022571.9356] device (tap27c957cd-d6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.922 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.939 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:31 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:31Z|00604|binding|INFO|Claiming lport 27c957cd-d68f-48d8-b2e1-170275200ed3 for this chassis.
Feb 25 07:29:31 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:31Z|00605|binding|INFO|27c957cd-d68f-48d8-b2e1-170275200ed3: Claiming fa:16:3e:bf:b7:62 10.100.0.4
Feb 25 07:29:31 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:31Z|00606|binding|INFO|Setting lport 27c957cd-d68f-48d8-b2e1-170275200ed3 ovn-installed in OVS
Feb 25 07:29:31 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:31Z|00607|binding|INFO|Setting lport 27c957cd-d68f-48d8-b2e1-170275200ed3 up in Southbound
Feb 25 07:29:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.961 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:b7:62 10.100.0.4'], port_security=['fa:16:3e:bf:b7:62 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f7b18575-d1fc-423f-a596-8ca6d8ed08fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4445209a7384565a93895032b4f077e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8918a64f-99f7-4eb3-a626-bacb054cff5c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28589015-6a4f-416a-a72c-4ed4061d2d31, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=27c957cd-d68f-48d8-b2e1-170275200ed3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:29:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.962 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 27c957cd-d68f-48d8-b2e1-170275200ed3 in datapath 85b56c79-01b6-47e7-ab3b-02e44acca3d3 bound to our chassis#033[00m
Feb 25 07:29:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.963 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85b56c79-01b6-47e7-ab3b-02e44acca3d3#033[00m
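[annotation] Provisioning follows the naming scheme visible throughout this log: the namespace is 'ovnmeta-' plus the full network UUID, and the veth pair takes the first ten characters of the UUID with a 0/1 suffix. A sketch derived purely from the names in this log, not neutron's exact helper:

    def metadata_names(network_id):
        base = 'tap' + network_id[:10]
        return {
            'namespace': 'ovnmeta-' + network_id,  # e.g. ovnmeta-85b56c79-...
            'ovs_side': base + '0',   # plugged into br-int (tap85b56c79-00)
            'ns_side': base + '1',    # carries 169.254.169.254 (tap85b56c79-01)
        }

    print(metadata_names('85b56c79-01b6-47e7-ab3b-02e44acca3d3'))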
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.970 244018 INFO nova.virt.libvirt.driver [None req-8cf4db25-7a22-42e5-9873-9d4911a29c18 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance shutdown successfully after 3 seconds.#033[00m
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.971 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:31.977 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2591a4cc-67ce-4cb8-a038-df7371b76d38]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:31 np0005629333 kernel: tapee46268d-74 (unregistering): left promiscuous mode
Feb 25 07:29:31 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:31Z|00608|binding|INFO|Releasing lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 from this chassis (sb_readonly=0)
Feb 25 07:29:31 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:31Z|00609|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 down in Southbound
Feb 25 07:29:31 np0005629333 systemd-machined[210048]: New machine qemu-78-instance-00000044.
Feb 25 07:29:31 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:31Z|00610|binding|INFO|Removing iface tapee46268d-74 ovn-installed in OVS
Feb 25 07:29:31 np0005629333 nova_compute[244014]: 2026-02-25 12:29:31.997 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:31 np0005629333 NetworkManager[49836]: <info>  [1772022571.9993] device (tapee46268d-74): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:29:32 np0005629333 nova_compute[244014]: 2026-02-25 12:29:32.001 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:32 np0005629333 systemd[1]: Started Virtual Machine qemu-78-instance-00000044.
Feb 25 07:29:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.006 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:87:f1 10.100.0.11'], port_security=['fa:16:3e:ba:87:f1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'c2ca716d-3f7c-490b-954a-bca009559baa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ee46268d-740d-4ff9-8b65-4a81fc61eec3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:29:32 np0005629333 nova_compute[244014]: 2026-02-25 12:29:32.018 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.028 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[fbc906c9-7b0a-4ed9-8b4f-8c1aab8829cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.033 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3909477f-7655-471a-9e8a-213658b9e14c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:32 np0005629333 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000035.scope: Deactivated successfully.
Feb 25 07:29:32 np0005629333 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000035.scope: Consumed 15.848s CPU time.
Feb 25 07:29:32 np0005629333 systemd-machined[210048]: Machine qemu-69-instance-00000035 terminated.
Feb 25 07:29:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.061 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[843a3a68-21ed-4979-aaa6-7d3ce0387280]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.094 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0219aac7-f1a2-46a7-9690-6a73ba8a4571]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85b56c79-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:20:24:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453793, 'reachable_time': 15353, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302556, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:29:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.111 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3f5bc890-8ec7-4090-8da6-d140262d506c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap85b56c79-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453803, 'tstamp': 453803}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302558, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap85b56c79-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453805, 'tstamp': 453805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302558, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:29:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.114 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85b56c79-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:29:32 np0005629333 nova_compute[244014]: 2026-02-25 12:29:32.116 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:29:32 np0005629333 nova_compute[244014]: 2026-02-25 12:29:32.121 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:29:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.122 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85b56c79-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:29:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.122 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 07:29:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.123 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85b56c79-00, col_values=(('external_ids', {'iface-id': 'ad3b1754-7f6c-4114-8056-d68f2e9a25d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:29:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.124 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 07:29:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.125 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ee46268d-740d-4ff9-8b65-4a81fc61eec3 in datapath ce318891-cf3c-4d99-af7c-c01770f38194 unbound from our chassis
Feb 25 07:29:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.128 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce318891-cf3c-4d99-af7c-c01770f38194, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 07:29:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.128 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a120c8cb-520b-409e-83d6-ffe298f524ef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:29:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:32.129 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 namespace which is not needed anymore
Feb 25 07:29:32 np0005629333 nova_compute[244014]: 2026-02-25 12:29:32.220 244018 INFO nova.virt.libvirt.driver [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance destroyed successfully.
Feb 25 07:29:32 np0005629333 nova_compute[244014]: 2026-02-25 12:29:32.221 244018 DEBUG nova.objects.instance [None req-8cf4db25-7a22-42e5-9873-9d4911a29c18 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'numa_topology' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:29:32 np0005629333 nova_compute[244014]: 2026-02-25 12:29:32.248 244018 DEBUG nova.compute.manager [None req-8cf4db25-7a22-42e5-9873-9d4911a29c18 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:29:32 np0005629333 nova_compute[244014]: 2026-02-25 12:29:32.317 244018 DEBUG oslo_concurrency.lockutils [None req-8cf4db25-7a22-42e5-9873-9d4911a29c18 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:29:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/867522823' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:29:32 np0005629333 nova_compute[244014]: 2026-02-25 12:29:32.356 244018 DEBUG oslo_concurrency.processutils [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.682s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:29:32 np0005629333 nova_compute[244014]: 2026-02-25 12:29:32.391 244018 DEBUG oslo_concurrency.processutils [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:29:32 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[296497]: [NOTICE]   (296548) : haproxy version is 2.8.14-c23fe91
Feb 25 07:29:32 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[296497]: [NOTICE]   (296548) : path to executable is /usr/sbin/haproxy
Feb 25 07:29:32 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[296497]: [WARNING]  (296548) : Exiting Master process...
Feb 25 07:29:32 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[296497]: [ALERT]    (296548) : Current worker (296562) exited with code 143 (Terminated)
Feb 25 07:29:32 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[296497]: [WARNING]  (296548) : All workers exited. Exiting... (0)
Feb 25 07:29:32 np0005629333 systemd[1]: libpod-43047c72e8074fabab76ae337cff05445967f3daebf90afef05a64abdd8d0080.scope: Deactivated successfully.
Feb 25 07:29:32 np0005629333 podman[302588]: 2026-02-25 12:29:32.42440745 +0000 UTC m=+0.149526741 container died 43047c72e8074fabab76ae337cff05445967f3daebf90afef05a64abdd8d0080 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true)
Feb 25 07:29:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:29:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/446312852' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.000 244018 DEBUG oslo_concurrency.processutils [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.003 244018 DEBUG nova.virt.libvirt.vif [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:29:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1679149420',display_name='tempest-SecurityGroupsTestJSON-server-1679149420',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1679149420',id=66,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:29:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='780a93a8758a4bd78b22fe68ed6276cf',ramdisk_id='',reservation_id='r-q96j5rpg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-195828884',owner_user_name='tempest-SecurityGroupsTestJSON-195828884-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:29:31Z,user_data=None,user_id='bcb4ded096bc4f7993f96ca892b82333',uuid=abe229eb-2238-4237-a7f2-83b8476ac1dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.004 244018 DEBUG nova.network.os_vif_util [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converting VIF {"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.005 244018 DEBUG nova.network.os_vif_util [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:53:6c:fd,bridge_name='br-int',has_traffic_filtering=True,id=ab721623-aa6d-494f-8b90-6ffd63b7a33f,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab721623-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.007 244018 DEBUG nova.objects.instance [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lazy-loading 'pci_devices' on Instance uuid abe229eb-2238-4237-a7f2-83b8476ac1dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.026 244018 DEBUG nova.virt.libvirt.driver [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:29:33 np0005629333 nova_compute[244014]:  <uuid>abe229eb-2238-4237-a7f2-83b8476ac1dc</uuid>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:  <name>instance-00000042</name>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <nova:name>tempest-SecurityGroupsTestJSON-server-1679149420</nova:name>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:29:31</nova:creationTime>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:29:33 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:        <nova:user uuid="bcb4ded096bc4f7993f96ca892b82333">tempest-SecurityGroupsTestJSON-195828884-project-member</nova:user>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:        <nova:project uuid="780a93a8758a4bd78b22fe68ed6276cf">tempest-SecurityGroupsTestJSON-195828884</nova:project>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:        <nova:port uuid="ab721623-aa6d-494f-8b90-6ffd63b7a33f">
Feb 25 07:29:33 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <entry name="serial">abe229eb-2238-4237-a7f2-83b8476ac1dc</entry>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <entry name="uuid">abe229eb-2238-4237-a7f2-83b8476ac1dc</entry>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/abe229eb-2238-4237-a7f2-83b8476ac1dc_disk">
Feb 25 07:29:33 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:29:33 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/abe229eb-2238-4237-a7f2-83b8476ac1dc_disk.config">
Feb 25 07:29:33 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:29:33 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:53:6c:fd"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <target dev="tapab721623-aa"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/abe229eb-2238-4237-a7f2-83b8476ac1dc/console.log" append="off"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <input type="keyboard" bus="usb"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:29:33 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:29:33 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:29:33 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:29:33 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.028 244018 DEBUG nova.virt.libvirt.driver [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] skipping disk for instance-00000042 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.028 244018 DEBUG nova.virt.libvirt.driver [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] skipping disk for instance-00000042 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.031 244018 DEBUG nova.virt.libvirt.vif [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:29:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1679149420',display_name='tempest-SecurityGroupsTestJSON-server-1679149420',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1679149420',id=66,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:29:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='780a93a8758a4bd78b22fe68ed6276cf',ramdisk_id='',reservation_id='r-q96j5rpg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-195828884',owner_user_name='tempest-SecurityGroupsTestJSON-195828884-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:29:31Z,user_data=None,user_id='bcb4ded096bc4f7993f96ca892b82333',uuid=abe229eb-2238-4237-a7f2-83b8476ac1dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.031 244018 DEBUG nova.network.os_vif_util [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converting VIF {"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.032 244018 DEBUG nova.network.os_vif_util [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:53:6c:fd,bridge_name='br-int',has_traffic_filtering=True,id=ab721623-aa6d-494f-8b90-6ffd63b7a33f,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab721623-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.033 244018 DEBUG os_vif [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:6c:fd,bridge_name='br-int',has_traffic_filtering=True,id=ab721623-aa6d-494f-8b90-6ffd63b7a33f,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab721623-aa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.034 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.035 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.035 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.039 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.040 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapab721623-aa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.040 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapab721623-aa, col_values=(('external_ids', {'iface-id': 'ab721623-aa6d-494f-8b90-6ffd63b7a33f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:6c:fd', 'vm-uuid': 'abe229eb-2238-4237-a7f2-83b8476ac1dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:29:33 np0005629333 NetworkManager[49836]: <info>  [1772022573.0442] manager: (tapab721623-aa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/268)
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.046 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.051 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.052 244018 INFO os_vif [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:6c:fd,bridge_name='br-int',has_traffic_filtering=True,id=ab721623-aa6d-494f-8b90-6ffd63b7a33f,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab721623-aa')
Feb 25 07:29:33 np0005629333 NetworkManager[49836]: <info>  [1772022573.4009] manager: (tapab721623-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/269)
Feb 25 07:29:33 np0005629333 kernel: tapab721623-aa: entered promiscuous mode
Feb 25 07:29:33 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:33Z|00611|binding|INFO|Claiming lport ab721623-aa6d-494f-8b90-6ffd63b7a33f for this chassis.
Feb 25 07:29:33 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:33Z|00612|binding|INFO|ab721623-aa6d-494f-8b90-6ffd63b7a33f: Claiming fa:16:3e:53:6c:fd 10.100.0.13
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.405 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:29:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:33.415 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:6c:fd 10.100.0.13'], port_security=['fa:16:3e:53:6c:fd 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'abe229eb-2238-4237-a7f2-83b8476ac1dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a448894-87d7-4c8e-a168-2593011ffed7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '780a93a8758a4bd78b22fe68ed6276cf', 'neutron:revision_number': '6', 'neutron:security_group_ids': '809ecded-d7db-4d4f-aed2-3cfc6bac71b9 bdf2b6e4-9059-4e5b-93fb-9739a22eb004', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc8be04a-7591-4819-939d-39b48e9a96cf, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ab721623-aa6d-494f-8b90-6ffd63b7a33f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:29:33 np0005629333 NetworkManager[49836]: <info>  [1772022573.4179] device (tapab721623-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:29:33 np0005629333 NetworkManager[49836]: <info>  [1772022573.4189] device (tapab721623-aa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:29:33 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:33Z|00613|binding|INFO|Setting lport ab721623-aa6d-494f-8b90-6ffd63b7a33f ovn-installed in OVS
Feb 25 07:29:33 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:33Z|00614|binding|INFO|Setting lport ab721623-aa6d-494f-8b90-6ffd63b7a33f up in Southbound
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.434 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:29:33 np0005629333 systemd-machined[210048]: New machine qemu-79-instance-00000042.
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.436 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022573.4359405, f7b18575-d1fc-423f-a596-8ca6d8ed08fa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.436 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] VM Started (Lifecycle Event)
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.440 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:29:33 np0005629333 systemd[1]: Started Virtual Machine qemu-79-instance-00000042.
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.467 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.472 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022573.4373517, f7b18575-d1fc-423f-a596-8ca6d8ed08fa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.473 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] VM Paused (Lifecycle Event)
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.495 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.498 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.518 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.539 244018 DEBUG nova.compute.manager [req-f41c47c5-a32b-4223-a192-67c4fc070725 req-84a18666-b6d4-444c-8b99-922df59f41ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Received event network-vif-plugged-c75a4bcb-9292-41aa-b0c3-14b1433392e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.540 244018 DEBUG oslo_concurrency.lockutils [req-f41c47c5-a32b-4223-a192-67c4fc070725 req-84a18666-b6d4-444c-8b99-922df59f41ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "bf261ccf-c216-4383-a22a-7f0553198152-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.540 244018 DEBUG oslo_concurrency.lockutils [req-f41c47c5-a32b-4223-a192-67c4fc070725 req-84a18666-b6d4-444c-8b99-922df59f41ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.540 244018 DEBUG oslo_concurrency.lockutils [req-f41c47c5-a32b-4223-a192-67c4fc070725 req-84a18666-b6d4-444c-8b99-922df59f41ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.541 244018 DEBUG nova.compute.manager [req-f41c47c5-a32b-4223-a192-67c4fc070725 req-84a18666-b6d4-444c-8b99-922df59f41ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Processing event network-vif-plugged-c75a4bcb-9292-41aa-b0c3-14b1433392e2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.541 244018 DEBUG nova.compute.manager [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.545 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022573.545179, bf261ccf-c216-4383-a22a-7f0553198152 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.546 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] VM Resumed (Lifecycle Event)
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.548 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.558 244018 INFO nova.virt.libvirt.driver [-] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Instance spawned successfully.
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.558 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.567 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.571 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.582 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.582 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.590 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.591 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.592 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.592 244018 DEBUG nova.virt.libvirt.driver [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.598 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.601 244018 DEBUG nova.compute.manager [req-f258d385-c84d-4a78-ad22-5eef9bd1ba55 req-d5765433-12ce-4907-a928-c3f980f2a344 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received event network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.602 244018 DEBUG oslo_concurrency.lockutils [req-f258d385-c84d-4a78-ad22-5eef9bd1ba55 req-d5765433-12ce-4907-a928-c3f980f2a344 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.602 244018 DEBUG oslo_concurrency.lockutils [req-f258d385-c84d-4a78-ad22-5eef9bd1ba55 req-d5765433-12ce-4907-a928-c3f980f2a344 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.602 244018 DEBUG oslo_concurrency.lockutils [req-f258d385-c84d-4a78-ad22-5eef9bd1ba55 req-d5765433-12ce-4907-a928-c3f980f2a344 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.603 244018 DEBUG nova.compute.manager [req-f258d385-c84d-4a78-ad22-5eef9bd1ba55 req-d5765433-12ce-4907-a928-c3f980f2a344 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Processing event network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.604 244018 DEBUG nova.compute.manager [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.613 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.613 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022573.6130521, f7b18575-d1fc-423f-a596-8ca6d8ed08fa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.613 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] VM Resumed (Lifecycle Event)
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.617 244018 INFO nova.virt.libvirt.driver [-] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Instance spawned successfully.
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.617 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.643 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.649 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.652 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.652 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.652 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.653 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.653 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.653 244018 DEBUG nova.virt.libvirt.driver [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.660 244018 INFO nova.compute.manager [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Took 15.34 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.660 244018 DEBUG nova.compute.manager [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.691 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.737 244018 INFO nova.compute.manager [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Took 11.44 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.738 244018 DEBUG nova.compute.manager [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.740 244018 INFO nova.compute.manager [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Took 16.45 seconds to build instance.#033[00m
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.763 244018 DEBUG oslo_concurrency.lockutils [None req-6d9571f3-2389-450b-8a4b-0e969de50dc6 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.811 244018 INFO nova.compute.manager [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Took 12.61 seconds to build instance.#033[00m
Feb 25 07:29:33 np0005629333 nova_compute[244014]: 2026-02-25 12:29:33.831 244018 DEBUG oslo_concurrency.lockutils [None req-6092fcf7-0916-4a75-bba7-6ea56fdadcbb 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
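[editor note] The oslo_concurrency.lockutils DEBUG lines above (Acquiring / acquired / "released", with waited/held timings, all attributed to "inner" in lockutils.py) come from the lock wrapper Nova puts around per-instance work such as _locked_do_build_and_run_instance. A minimal sketch of that pattern using the real oslo.concurrency API; the function name and lock name here are illustrative, not Nova's actual code:

    from oslo_concurrency import lockutils

    # Serialize work on one instance UUID, as in the
    # "_locked_do_build_and_run_instance" lines above.
    @lockutils.synchronized('bf261ccf-c216-4383-a22a-7f0553198152')
    def do_build():
        # Runs with the named lock held; lockutils itself logs
        # 'acquired ... :: waited Ns' on entry and
        # '"released" ... :: held Ns' on exit at DEBUG level.
        pass

    do_build()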
Feb 25 07:29:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 453 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.0 MiB/s wr, 129 op/s
Feb 25 07:29:33 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-43047c72e8074fabab76ae337cff05445967f3daebf90afef05a64abdd8d0080-userdata-shm.mount: Deactivated successfully.
Feb 25 07:29:33 np0005629333 systemd[1]: var-lib-containers-storage-overlay-43a9a945243e4aa6728c2c83d101f229caef91189a564a26a79cc76c0a932bd4-merged.mount: Deactivated successfully.
Feb 25 07:29:34 np0005629333 nova_compute[244014]: 2026-02-25 12:29:34.002 244018 DEBUG nova.objects.instance [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'flavor' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:29:34 np0005629333 nova_compute[244014]: 2026-02-25 12:29:34.022 244018 DEBUG oslo_concurrency.lockutils [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:29:34 np0005629333 nova_compute[244014]: 2026-02-25 12:29:34.022 244018 DEBUG oslo_concurrency.lockutils [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquired lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:29:34 np0005629333 nova_compute[244014]: 2026-02-25 12:29:34.022 244018 DEBUG nova.network.neutron [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:29:34 np0005629333 nova_compute[244014]: 2026-02-25 12:29:34.022 244018 DEBUG nova.objects.instance [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'info_cache' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:29:34 np0005629333 podman[302695]: 2026-02-25 12:29:34.044217994 +0000 UTC m=+0.940363837 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Feb 25 07:29:34 np0005629333 podman[302696]: 2026-02-25 12:29:34.060125175 +0000 UTC m=+0.964333667 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 07:29:34 np0005629333 nova_compute[244014]: 2026-02-25 12:29:34.147 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Acquiring lock "d945940d-a1b5-4a36-b980-efda3a9efda6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:29:34 np0005629333 nova_compute[244014]: 2026-02-25 12:29:34.147 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:29:34 np0005629333 nova_compute[244014]: 2026-02-25 12:29:34.167 244018 DEBUG nova.compute.manager [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:29:34 np0005629333 nova_compute[244014]: 2026-02-25 12:29:34.244 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:29:34 np0005629333 nova_compute[244014]: 2026-02-25 12:29:34.245 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:29:34 np0005629333 nova_compute[244014]: 2026-02-25 12:29:34.251 244018 DEBUG nova.virt.hardware [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:29:34 np0005629333 nova_compute[244014]: 2026-02-25 12:29:34.251 244018 INFO nova.compute.claims [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:29:34 np0005629333 nova_compute[244014]: 2026-02-25 12:29:34.441 244018 DEBUG oslo_concurrency.processutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:29:34 np0005629333 nova_compute[244014]: 2026-02-25 12:29:34.653 244018 DEBUG nova.compute.manager [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 07:29:34 np0005629333 nova_compute[244014]: 2026-02-25 12:29:34.654 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for abe229eb-2238-4237-a7f2-83b8476ac1dc due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Feb 25 07:29:34 np0005629333 nova_compute[244014]: 2026-02-25 12:29:34.655 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022574.6543152, abe229eb-2238-4237-a7f2-83b8476ac1dc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:29:34 np0005629333 nova_compute[244014]: 2026-02-25 12:29:34.655 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] VM Resumed (Lifecycle Event)
Feb 25 07:29:34 np0005629333 nova_compute[244014]: 2026-02-25 12:29:34.662 244018 INFO nova.virt.libvirt.driver [-] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Instance rebooted successfully.
Feb 25 07:29:34 np0005629333 nova_compute[244014]: 2026-02-25 12:29:34.662 244018 DEBUG nova.compute.manager [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:29:34 np0005629333 nova_compute[244014]: 2026-02-25 12:29:34.708 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:29:34 np0005629333 nova_compute[244014]: 2026-02-25 12:29:34.712 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:29:34 np0005629333 nova_compute[244014]: 2026-02-25 12:29:34.749 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:29:34 np0005629333 nova_compute[244014]: 2026-02-25 12:29:34.779 244018 DEBUG oslo_concurrency.lockutils [None req-e6ba1594-073d-4912-9ca5-9f2e337405ae bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 8.424s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:34 np0005629333 nova_compute[244014]: 2026-02-25 12:29:34.800 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022574.6552982, abe229eb-2238-4237-a7f2-83b8476ac1dc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:29:34 np0005629333 nova_compute[244014]: 2026-02-25 12:29:34.801 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] VM Started (Lifecycle Event)
Feb 25 07:29:34 np0005629333 nova_compute[244014]: 2026-02-25 12:29:34.821 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:29:34 np0005629333 nova_compute[244014]: 2026-02-25 12:29:34.826 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:29:35 np0005629333 podman[302588]: 2026-02-25 12:29:35.023625178 +0000 UTC m=+2.748744429 container cleanup 43047c72e8074fabab76ae337cff05445967f3daebf90afef05a64abdd8d0080 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 25 07:29:35 np0005629333 systemd[1]: libpod-conmon-43047c72e8074fabab76ae337cff05445967f3daebf90afef05a64abdd8d0080.scope: Deactivated successfully.
Feb 25 07:29:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:29:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:29:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3825455234' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.371 244018 DEBUG oslo_concurrency.processutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.929s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
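[editor note] The "Running cmd (subprocess): ceph df --format=json ..." / "CMD ... returned: 0" pair (and the matching ceph-mon audit "dispatch" line above) is Nova's RBD storage driver polling cluster capacity during the resource claim. A standalone sketch of the same probe with plain subprocess; it assumes the ceph CLI and the client.openstack keyring from the log are available, and the JSON keys shown are standard `ceph df --format=json` output:

    import json
    import subprocess

    # Same command string Nova logs via oslo_concurrency.processutils.
    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    raw = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    df = json.loads(raw)
    # Cluster-wide totals; per-pool usage sits under df["pools"].
    print(df["stats"]["total_bytes"], df["stats"]["total_avail_bytes"])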
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.376 244018 DEBUG nova.compute.provider_tree [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.406 244018 DEBUG nova.scheduler.client.report [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.445 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.446 244018 DEBUG nova.compute.manager [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.512 244018 DEBUG nova.compute.manager [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.512 244018 DEBUG nova.network.neutron [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.531 244018 INFO nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.553 244018 DEBUG nova.compute.manager [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.586 244018 INFO nova.compute.manager [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Rescuing
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.586 244018 DEBUG oslo_concurrency.lockutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "refresh_cache-f7b18575-d1fc-423f-a596-8ca6d8ed08fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.586 244018 DEBUG oslo_concurrency.lockutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquired lock "refresh_cache-f7b18575-d1fc-423f-a596-8ca6d8ed08fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.586 244018 DEBUG nova.network.neutron [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.645 244018 DEBUG nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Received event network-vif-plugged-c75a4bcb-9292-41aa-b0c3-14b1433392e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.646 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "bf261ccf-c216-4383-a22a-7f0553198152-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.646 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.646 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.646 244018 DEBUG nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] No waiting events found dispatching network-vif-plugged-c75a4bcb-9292-41aa-b0c3-14b1433392e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.647 244018 WARNING nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Received unexpected event network-vif-plugged-c75a4bcb-9292-41aa-b0c3-14b1433392e2 for instance with vm_state active and task_state None.
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.647 244018 DEBUG nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received event network-vif-unplugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.647 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.647 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.647 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.648 244018 DEBUG nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] No waiting events found dispatching network-vif-unplugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.648 244018 WARNING nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received unexpected event network-vif-unplugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f for instance with vm_state active and task_state None.
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.648 244018 DEBUG nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received event network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.648 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.648 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.649 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.649 244018 DEBUG nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] No waiting events found dispatching network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.649 244018 WARNING nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received unexpected event network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f for instance with vm_state active and task_state None.
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.649 244018 DEBUG nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.649 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.649 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.650 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.650 244018 DEBUG nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.650 244018 WARNING nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state stopped and task_state powering-on.
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.650 244018 DEBUG nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.650 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.651 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.651 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.651 244018 DEBUG nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.651 244018 WARNING nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state stopped and task_state powering-on.
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.652 244018 DEBUG nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received event network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.652 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.652 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.652 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.652 244018 DEBUG nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] No waiting events found dispatching network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.653 244018 WARNING nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received unexpected event network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f for instance with vm_state active and task_state None.
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.653 244018 DEBUG nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received event network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.653 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.653 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.653 244018 DEBUG oslo_concurrency.lockutils [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.654 244018 DEBUG nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] No waiting events found dispatching network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.654 244018 WARNING nova.compute.manager [req-951a19a8-eaff-491c-8435-8b5da429d7b2 req-490a1fac-8570-4b10-9d3f-8771b3d66f6e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received unexpected event network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f for instance with vm_state active and task_state None.
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.661 244018 DEBUG nova.compute.manager [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.662 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.662 244018 INFO nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Creating image(s)
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.690 244018 DEBUG nova.storage.rbd_utils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] rbd image d945940d-a1b5-4a36-b980-efda3a9efda6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:29:35 np0005629333 nova_compute[244014]: 2026-02-25 12:29:35.713 244018 DEBUG nova.storage.rbd_utils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] rbd image d945940d-a1b5-4a36-b980-efda3a9efda6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:29:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 453 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.273 244018 DEBUG nova.storage.rbd_utils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] rbd image d945940d-a1b5-4a36-b980-efda3a9efda6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.279 244018 DEBUG oslo_concurrency.processutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.314 244018 DEBUG nova.policy [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '74af2f394ab04b06b55e62150e81b6b1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '61371c73c9fb4961886c5c22f8f871e1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.334 244018 DEBUG nova.compute.manager [req-31c372a4-ce03-42af-9165-0bb78531a6de req-e6da9e25-0a2e-4507-bcdb-13a78c7215e1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received event network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.335 244018 DEBUG oslo_concurrency.lockutils [req-31c372a4-ce03-42af-9165-0bb78531a6de req-e6da9e25-0a2e-4507-bcdb-13a78c7215e1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.335 244018 DEBUG oslo_concurrency.lockutils [req-31c372a4-ce03-42af-9165-0bb78531a6de req-e6da9e25-0a2e-4507-bcdb-13a78c7215e1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.335 244018 DEBUG oslo_concurrency.lockutils [req-31c372a4-ce03-42af-9165-0bb78531a6de req-e6da9e25-0a2e-4507-bcdb-13a78c7215e1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.336 244018 DEBUG nova.compute.manager [req-31c372a4-ce03-42af-9165-0bb78531a6de req-e6da9e25-0a2e-4507-bcdb-13a78c7215e1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] No waiting events found dispatching network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.336 244018 WARNING nova.compute.manager [req-31c372a4-ce03-42af-9165-0bb78531a6de req-e6da9e25-0a2e-4507-bcdb-13a78c7215e1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received unexpected event network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 for instance with vm_state active and task_state rescuing.#033[00m
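The three lockutils lines above (12:29:36.335) are oslo.concurrency's standard acquire/held/release pattern around the per-instance event queue. A minimal sketch of that primitive, with the lock name taken from the log and a placeholder body:

    # Sketch only: the same oslo.concurrency lock pattern nova logs
    # above. lockutils.lock() returns a context manager; the "waited"
    # and "held" durations in the log come from its built-in debug
    # logging, not from anything the caller has to do.
    from oslo_concurrency import lockutils

    instance_uuid = "f7b18575-d1fc-423f-a596-8ca6d8ed08fa"

    with lockutils.lock(f"{instance_uuid}-events"):
        # pop and dispatch any waiting network-vif-plugged event here
        pass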
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.368 244018 DEBUG oslo_concurrency.processutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
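The prlimit-wrapped qemu-img probe above (started at 12:29:36.279, returned 0 in 0.089s) can be reproduced standalone. The command and limits are copied verbatim from the log; the parsing assumes nothing beyond qemu-img's documented --output=json format:

    # Sketch: run `qemu-img info` under oslo.concurrency's prlimit
    # wrapper (1 GiB address-space cap, 30 s CPU cap) and parse the
    # JSON it prints, as nova's imagebackend does for cached images.
    import json
    import subprocess

    base = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824", "--cpu=30", "--",
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", base, "--force-share", "--output=json",
    ]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    info = json.loads(out)
    print(info["format"], info["virtual-size"])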
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.369 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.370 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.370 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.411 244018 DEBUG nova.storage.rbd_utils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] rbd image d945940d-a1b5-4a36-b980-efda3a9efda6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.415 244018 DEBUG oslo_concurrency.processutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d945940d-a1b5-4a36-b980-efda3a9efda6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
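The rbd import above copies the cached base image into the Ceph vms pool as the new instance's root disk. The same invocation as a standalone sketch (every argument is verbatim from the log line; --image-format=2 selects the RBD v2 image format):

    # Sketch of the import nova runs via oslo.concurrency's
    # processutils: local base image file -> RBD image in pool "vms".
    import subprocess

    subprocess.run(
        [
            "rbd", "import", "--pool", "vms",
            "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6",
            "d945940d-a1b5-4a36-b980-efda3a9efda6_disk",
            "--image-format=2",
            "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
        ],
        check=True,
    )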
Feb 25 07:29:36 np0005629333 podman[302842]: 2026-02-25 12:29:36.603603701 +0000 UTC m=+1.557957780 container remove 43047c72e8074fabab76ae337cff05445967f3daebf90afef05a64abdd8d0080 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:29:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.612 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[be27f3dc-b54f-46b2-9cb2-d4cebfd52f52]: (4, ('Wed Feb 25 12:29:32 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 (43047c72e8074fabab76ae337cff05445967f3daebf90afef05a64abdd8d0080)\n43047c72e8074fabab76ae337cff05445967f3daebf90afef05a64abdd8d0080\nWed Feb 25 12:29:35 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 (43047c72e8074fabab76ae337cff05445967f3daebf90afef05a64abdd8d0080)\n43047c72e8074fabab76ae337cff05445967f3daebf90afef05a64abdd8d0080\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.614 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f7359ea8-c0b5-4da0-9254-948cc1f1a4c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.614 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.616 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:36 np0005629333 kernel: tapce318891-c0: left promiscuous mode
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.627 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.630 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[53d2a483-df62-4ca2-8a01-6e1cec06347e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.641 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[11bf57f4-2a7f-47dd-8e26-9bb0cfe023c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.644 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5719c827-5606-4161-9e78-5efb3a4ae505]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.655 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2e7e4d69-bfdc-4aaf-8a45-cdeba1aca87c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444522, 'reachable_time': 39645, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302952, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:36 np0005629333 systemd[1]: run-netns-ovnmeta\x2dce318891\x2dcf3c\x2d4d99\x2daf7c\x2dc01770f38194.mount: Deactivated successfully.
Feb 25 07:29:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.659 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:29:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.659 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[80f267af-32ce-4681-8b7d-79b33c3ab7ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.660 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ab721623-aa6d-494f-8b90-6ffd63b7a33f in datapath 9a448894-87d7-4c8e-a168-2593011ffed7 unbound from our chassis#033[00m
Feb 25 07:29:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.661 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9a448894-87d7-4c8e-a168-2593011ffed7#033[00m
Feb 25 07:29:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.675 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d944f74f-7520-466c-b2a1-13f2be13f86a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.691 244018 DEBUG nova.network.neutron [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Updating instance_info_cache with network_info: [{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.717 244018 DEBUG oslo_concurrency.lockutils [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Releasing lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:29:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.731 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3532cf3f-c0dc-43e0-a7f7-ee925cb57a56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.734 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ebaf3f98-e250-4909-94b8-8cb6bbf63c20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.756 244018 INFO nova.virt.libvirt.driver [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance destroyed successfully.#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.757 244018 DEBUG nova.objects.instance [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'numa_topology' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.773 244018 DEBUG nova.objects.instance [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'resources' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.777 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c6933f26-5fde-4daa-b4fd-0f049ff72bbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.789 244018 DEBUG nova.virt.libvirt.vif [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:29:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": 
"l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.789 244018 DEBUG nova.network.os_vif_util [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.790 244018 DEBUG nova.network.os_vif_util [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.791 244018 DEBUG os_vif [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:29:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.790 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7ef9df95-90f0-448c-b501-ef10139ff652]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9a448894-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:4c:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449633, 'reachable_time': 29076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302958, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.794 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.794 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee46268d-74, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.796 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.799 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.801 244018 INFO os_vif [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74')#033[00m
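The unplug above is an ovsdbapp transaction (DelPortCommand with if_exists=True) issued directly to the OVSDB server. As a rough shell-level equivalent, hedged because os-vif does not actually shell out:

    # Sketch: the same effect as the logged DelPortCommand, via the
    # ovs-vsctl CLI. --if-exists mirrors the transaction's if_exists=True.
    import subprocess

    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tapee46268d-74"],
        check=True,
    )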
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.810 244018 DEBUG nova.virt.libvirt.driver [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Start _get_guest_xml network_info=[{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:29:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.810 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6097d166-556f-4288-9261-135f3db01628]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9a448894-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449644, 'tstamp': 449644}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302959, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9a448894-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449646, 'tstamp': 449646}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302959, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.812 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a448894-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.815 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9a448894-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.815 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.815 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:29:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.816 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9a448894-80, col_values=(('external_ids', {'iface-id': '6ef06fbe-6eb9-4d55-bbd8-9394a70da39f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:36.816 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.817 244018 WARNING nova.virt.libvirt.driver [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.825 244018 DEBUG nova.virt.libvirt.host [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.826 244018 DEBUG nova.virt.libvirt.host [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.830 244018 DEBUG nova.virt.libvirt.host [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.831 244018 DEBUG nova.virt.libvirt.host [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
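The two probes above show nova finding no CPU controller on cgroups v1 but one on v2. On a v2 (unified hierarchy) host the check reduces to reading one file; a sketch assuming the conventional /sys/fs/cgroup mount point:

    # Sketch: cgroups v2 exposes the available controllers as a single
    # space-separated list at the root of the unified hierarchy.
    from pathlib import Path

    controllers = Path("/sys/fs/cgroup/cgroup.controllers")
    has_cpu = controllers.exists() and "cpu" in controllers.read_text().split()
    print("cgroup v2 cpu controller:", has_cpu)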
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.831 244018 DEBUG nova.virt.libvirt.driver [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.832 244018 DEBUG nova.virt.hardware [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.832 244018 DEBUG nova.virt.hardware [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.833 244018 DEBUG nova.virt.hardware [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.833 244018 DEBUG nova.virt.hardware [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.833 244018 DEBUG nova.virt.hardware [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.834 244018 DEBUG nova.virt.hardware [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.834 244018 DEBUG nova.virt.hardware [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.834 244018 DEBUG nova.virt.hardware [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.835 244018 DEBUG nova.virt.hardware [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.835 244018 DEBUG nova.virt.hardware [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.835 244018 DEBUG nova.virt.hardware [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
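The topology lines above walk nova's search: no flavor or image constraints (limits and preferences all 0:0:0, maxima 65536), one vCPU, hence exactly one candidate, 1:1:1. A toy enumeration of the same idea, which is illustrative only and not nova's actual implementation:

    # Toy sketch: every (sockets, cores, threads) factorization of the
    # vCPU count, each field capped at the 65536 maximum from the log.
    def possible_topologies(vcpus: int, maximum: int = 65536):
        for sockets in range(1, min(vcpus, maximum) + 1):
            if vcpus % sockets:
                continue
            rest = vcpus // sockets
            for cores in range(1, min(rest, maximum) + 1):
                if rest % cores:
                    continue
                threads = rest // cores
                if threads <= maximum:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], as in the log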
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.836 244018 DEBUG nova.objects.instance [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:36 np0005629333 nova_compute[244014]: 2026-02-25 12:29:36.856 244018 DEBUG oslo_concurrency.processutils [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:37 np0005629333 nova_compute[244014]: 2026-02-25 12:29:37.325 244018 DEBUG nova.network.neutron [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Successfully created port: 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:29:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:29:37 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/611464997' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:29:37 np0005629333 nova_compute[244014]: 2026-02-25 12:29:37.697 244018 DEBUG oslo_concurrency.processutils [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.841s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
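The ceph mon dump round trip above (dispatched by the mon at 07:29:37, back in 0.841s) is how nova resolves monitor addresses before writing an RBD-backed disk into the guest XML. A sketch of the same query; the exact JSON layout varies by Ceph release, so the "mons"/"name"/"public_addrs" keys here are an assumption about recent releases:

    # Sketch: fetch the monmap as client.openstack and list monitors.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    monmap = json.loads(out)
    for mon in monmap["mons"]:
        print(mon["name"], mon["public_addrs"])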
Feb 25 07:29:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 453 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 7.6 MiB/s rd, 1.8 MiB/s wr, 320 op/s
Feb 25 07:29:38 np0005629333 nova_compute[244014]: 2026-02-25 12:29:38.404 244018 DEBUG nova.network.neutron [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Updating instance_info_cache with network_info: [{"id": "27c957cd-d68f-48d8-b2e1-170275200ed3", "address": "fa:16:3e:bf:b7:62", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27c957cd-d6", "ovs_interfaceid": "27c957cd-d68f-48d8-b2e1-170275200ed3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:29:38 np0005629333 nova_compute[244014]: 2026-02-25 12:29:38.409 244018 DEBUG nova.compute.manager [req-8662c787-c531-4610-85a9-bfa0bfdaef94 req-fc2f069f-0f2e-40a1-b17b-a0f5b1b3db94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received event network-changed-ab721623-aa6d-494f-8b90-6ffd63b7a33f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:38 np0005629333 nova_compute[244014]: 2026-02-25 12:29:38.410 244018 DEBUG nova.compute.manager [req-8662c787-c531-4610-85a9-bfa0bfdaef94 req-fc2f069f-0f2e-40a1-b17b-a0f5b1b3db94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Refreshing instance network info cache due to event network-changed-ab721623-aa6d-494f-8b90-6ffd63b7a33f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:29:38 np0005629333 nova_compute[244014]: 2026-02-25 12:29:38.410 244018 DEBUG oslo_concurrency.lockutils [req-8662c787-c531-4610-85a9-bfa0bfdaef94 req-fc2f069f-0f2e-40a1-b17b-a0f5b1b3db94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:29:38 np0005629333 nova_compute[244014]: 2026-02-25 12:29:38.411 244018 DEBUG oslo_concurrency.lockutils [req-8662c787-c531-4610-85a9-bfa0bfdaef94 req-fc2f069f-0f2e-40a1-b17b-a0f5b1b3db94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:29:38 np0005629333 nova_compute[244014]: 2026-02-25 12:29:38.411 244018 DEBUG nova.network.neutron [req-8662c787-c531-4610-85a9-bfa0bfdaef94 req-fc2f069f-0f2e-40a1-b17b-a0f5b1b3db94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Refreshing network info cache for port ab721623-aa6d-494f-8b90-6ffd63b7a33f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:29:38 np0005629333 nova_compute[244014]: 2026-02-25 12:29:38.416 244018 DEBUG oslo_concurrency.processutils [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:38 np0005629333 nova_compute[244014]: 2026-02-25 12:29:38.444 244018 DEBUG oslo_concurrency.lockutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Releasing lock "refresh_cache-f7b18575-d1fc-423f-a596-8ca6d8ed08fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:29:38 np0005629333 nova_compute[244014]: 2026-02-25 12:29:38.752 244018 DEBUG nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 25 07:29:38 np0005629333 nova_compute[244014]: 2026-02-25 12:29:38.765 244018 DEBUG oslo_concurrency.processutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d945940d-a1b5-4a36-b980-efda3a9efda6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.350s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:29:38 np0005629333 nova_compute[244014]: 2026-02-25 12:29:38.850 244018 DEBUG nova.storage.rbd_utils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] resizing rbd image d945940d-a1b5-4a36-b980-efda3a9efda6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
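The resize above grows the freshly imported disk to the flavor's 1 GiB root volume (1073741824 bytes). nova's rbd_utils does this through the Python rbd bindings; a CLI sketch of the equivalent, noting that rbd's --size defaults to MiB, hence 1024:

    # Sketch: grow vms/d945940d-..._disk to 1 GiB (1024 MiB).
    import subprocess

    subprocess.run(
        ["rbd", "resize", "--pool", "vms", "--size", "1024",
         "d945940d-a1b5-4a36-b980-efda3a9efda6_disk",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )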
Feb 25 07:29:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:29:38 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1587152409' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.011 244018 DEBUG oslo_concurrency.processutils [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.013 244018 DEBUG nova.virt.libvirt.vif [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:29:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, 
"connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.014 244018 DEBUG nova.network.os_vif_util [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.016 244018 DEBUG nova.network.os_vif_util [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
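
The Converting/Converted pair above is nova_to_osvif_vif translating the legacy VIF dict into an os-vif versioned object for the ovs plugin. For orientation, a sketch that builds the same object directly against os-vif's public object model, with every field value copied from the logged repr (illustrative only, not nova's converter code; the network field is omitted):

    # Constructing the logged VIFOpenVSwitch by hand with os-vif.
    from os_vif.objects import vif as vif_obj

    vif = vif_obj.VIFOpenVSwitch(
        id="ee46268d-740d-4ff9-8b65-4a81fc61eec3",
        address="fa:16:3e:ba:87:f1",
        bridge_name="br-int",
        has_traffic_filtering=True,
        preserve_on_delete=False,
        vif_name="tapee46268d-74",
        active=False,
    )
    print(vif)  # repr matches the "Converted object" line, minus network
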
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.018 244018 DEBUG nova.objects.instance [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.034 244018 DEBUG nova.virt.libvirt.driver [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:29:39 np0005629333 nova_compute[244014]:  <uuid>8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</uuid>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:  <name>instance-00000035</name>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerActionsTestJSON-server-1971346294</nova:name>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:29:36</nova:creationTime>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:29:39 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:        <nova:user uuid="1f8bbe7db4454108aca005daa72d5c22">tempest-ServerActionsTestJSON-436476112-project-member</nova:user>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:        <nova:project uuid="56700581ea88438ba482d90bc702ced3">tempest-ServerActionsTestJSON-436476112</nova:project>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:        <nova:port uuid="ee46268d-740d-4ff9-8b65-4a81fc61eec3">
Feb 25 07:29:39 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <entry name="serial">8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</entry>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <entry name="uuid">8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b</entry>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk">
Feb 25 07:29:39 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:29:39 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_disk.config">
Feb 25 07:29:39 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:29:39 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:ba:87:f1"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <target dev="tapee46268d-74"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b/console.log" append="off"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <input type="keyboard" bus="usb"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:29:39 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:29:39 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:29:39 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:29:39 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
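
That closes the rendered domain XML. Nova hands this document to libvirt through its own host wrapper; the equivalent stand-alone operation with the libvirt-python bindings looks roughly like this (the file name and direct virDomain calls are assumptions for illustration, not nova's code path):

    # Minimal sketch: define and boot a domain from XML with libvirt-python.
    # "domain.xml" is assumed to hold the <domain> document logged above.
    import libvirt

    with open("domain.xml") as f:
        xml = f.read()

    conn = libvirt.open("qemu:///system")  # same URI nova-compute uses for KVM
    try:
        dom = conn.defineXML(xml)          # persistent definition
        dom.create()                       # power on
        print(dom.name(), dom.isActive())
    finally:
        conn.close()
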
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.043 244018 DEBUG nova.virt.libvirt.driver [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.044 244018 DEBUG nova.virt.libvirt.driver [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.045 244018 DEBUG nova.virt.libvirt.vif [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:29:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": 
true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.046 244018 DEBUG nova.network.os_vif_util [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.048 244018 DEBUG nova.network.os_vif_util [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.049 244018 DEBUG os_vif [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.050 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.051 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.052 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.056 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.057 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee46268d-74, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.058 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee46268d-74, col_values=(('external_ids', {'iface-id': 'ee46268d-740d-4ff9-8b65-4a81fc61eec3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:87:f1', 'vm-uuid': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
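
The AddPortCommand/DbSetCommand pair is os-vif's ovs plugin attaching the tap to br-int and stamping the Neutron port UUID into external_ids:iface-id, which is the key ovn-controller matches a moment later. A hedged sketch of issuing the same two commands through ovsdbapp's Open_vSwitch schema API (the socket path and timeout are assumptions; os-vif manages its own IDL connection internally):

    # Illustrative ovsdbapp transaction mirroring the logged commands.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    external_ids = {
        "iface-id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:ba:87:f1",
        "vm-uuid": "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b",
    }
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tapee46268d-74", may_exist=True))
        txn.add(api.db_set("Interface", "tapee46268d-74",
                           ("external_ids", external_ids)))
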
Feb 25 07:29:39 np0005629333 NetworkManager[49836]: <info>  [1772022579.0610] manager: (tapee46268d-74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/270)
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.062 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.067 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.068 244018 INFO os_vif [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74')#033[00m
Feb 25 07:29:39 np0005629333 kernel: tapee46268d-74: entered promiscuous mode
Feb 25 07:29:39 np0005629333 NetworkManager[49836]: <info>  [1772022579.1547] manager: (tapee46268d-74): new Tun device (/org/freedesktop/NetworkManager/Devices/271)
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.154 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:39 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:39Z|00615|binding|INFO|Claiming lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 for this chassis.
Feb 25 07:29:39 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:39Z|00616|binding|INFO|ee46268d-740d-4ff9-8b65-4a81fc61eec3: Claiming fa:16:3e:ba:87:f1 10.100.0.11
Feb 25 07:29:39 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:39Z|00617|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 ovn-installed in OVS
Feb 25 07:29:39 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:39Z|00618|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 up in Southbound
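
At this point the chassis owns the logical port: claimed, ovn-installed set in OVS, and up in the southbound Port_Binding row. One way to confirm the binding from the host, sketched as a subprocess wrapper (looking up Port_Binding rows by logical port name is normal ovn-sbctl usage, but treat the exact invocation as an assumption about this deployment):

    # Hypothetical check that the lport is bound to this chassis.
    import subprocess

    lport = "ee46268d-740d-4ff9-8b65-4a81fc61eec3"
    out = subprocess.check_output(
        ["ovn-sbctl", "--columns=chassis,up", "list", "Port_Binding", lport],
        text=True,
    )
    print(out)  # chassis should reference compute-0, up should be true
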
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.164 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:87:f1 10.100.0.11'], port_security=['fa:16:3e:ba:87:f1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'c2ca716d-3f7c-490b-954a-bca009559baa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ee46268d-740d-4ff9-8b65-4a81fc61eec3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.167 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ee46268d-740d-4ff9-8b65-4a81fc61eec3 in datapath ce318891-cf3c-4d99-af7c-c01770f38194 bound to our chassis#033[00m
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.170 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce318891-cf3c-4d99-af7c-c01770f38194#033[00m
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.174 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.178 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0982127e-1b31-4483-b733-523668d5b7d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.179 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce318891-c1 in ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:29:39 np0005629333 systemd-udevd[303093]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.182 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce318891-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.182 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b827c021-bc4a-4ed1-8985-f31e4c8160e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.184 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fc07a779-e37a-4ee5-894a-cc00ce4811c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:39 np0005629333 NetworkManager[49836]: <info>  [1772022579.1911] device (tapee46268d-74): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:29:39 np0005629333 NetworkManager[49836]: <info>  [1772022579.1917] device (tapee46268d-74): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.196 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[71daed12-3fb2-4fd7-ab84-9f27baad6088]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:39 np0005629333 systemd-machined[210048]: New machine qemu-80-instance-00000035.
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.210 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[88125daa-1780-4855-af3e-a08c7c5e9489]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:39 np0005629333 systemd[1]: Started Virtual Machine qemu-80-instance-00000035.
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.239 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f5111a29-c98c-4ce7-bcd2-927a957a0a99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.246 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fc0b03db-152f-4d6f-9862-93f0e83f96d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:39 np0005629333 NetworkManager[49836]: <info>  [1772022579.2488] manager: (tapce318891-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/272)
Feb 25 07:29:39 np0005629333 systemd-udevd[303098]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.278 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7d8d3ecf-d169-4f21-a3b2-152eee6f4950]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.281 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d799bc6c-70b6-477b-b4e4-d36cbc5754d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:39 np0005629333 NetworkManager[49836]: <info>  [1772022579.3012] device (tapce318891-c0): carrier: link connected
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.303 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[66b36504-7508-4789-bca9-def41eb824c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.316 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c426acab-0bb4-4f38-b5c5-dc05585798d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 185], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454870, 'reachable_time': 42375, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303127, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.327 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4767c139-ee34-48e3-99d4-744daf077a12]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:c3a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454870, 'tstamp': 454870}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303128, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.340 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[22ee209a-92f5-4a5d-b957-e76a09ff7501]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 185], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454870, 'reachable_time': 42375, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303129, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.359 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6737fb57-9948-4f5b-ac19-c63b4240749f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.398 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1aa498d4-4152-4c61-9a0c-c15bf1d250db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.400 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.400 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.400 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce318891-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:39 np0005629333 kernel: tapce318891-c0: entered promiscuous mode
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.402 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:39 np0005629333 NetworkManager[49836]: <info>  [1772022579.4028] manager: (tapce318891-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/273)
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.404 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce318891-c0, col_values=(('external_ids', {'iface-id': '3b184c15-8ef4-4e11-bd18-e1253a4ff440'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:39 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:39Z|00619|binding|INFO|Releasing lport 3b184c15-8ef4-4e11-bd18-e1253a4ff440 from this chassis (sb_readonly=0)
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.410 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.414 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.415 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5c0e4c19-e400-492b-bf40-7e3ff80fc9d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.416 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
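
The generated haproxy config binds 169.254.169.254:80 inside the ovnmeta namespace and relays to the agent's unix socket at /var/lib/neutron/metadata_proxy, adding the X-OVN-Network-ID header the agent uses to scope the request. A rough probe of that listener from the hypervisor, once the proxy below is spawned (requires root; curl on the host is an assumption, and the agent may still reject a probe whose source address maps to no instance):

    # Hypothetical probe of the metadata proxy inside the OVN
    # metadata namespace created above.
    import subprocess

    ns = "ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194"
    subprocess.run(
        ["ip", "netns", "exec", ns,
         "curl", "-s", "http://169.254.169.254/openstack"],
        check=True,
    )
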
Feb 25 07:29:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:39.417 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'env', 'PROCESS_TAG=haproxy-ce318891-cf3c-4d99-af7c-c01770f38194', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce318891-cf3c-4d99-af7c-c01770f38194.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.459 244018 DEBUG nova.network.neutron [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Successfully updated port: 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.479 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Acquiring lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.479 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Acquired lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.480 244018 DEBUG nova.network.neutron [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.751 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:39 np0005629333 podman[303191]: 2026-02-25 12:29:39.739441905 +0000 UTC m=+0.022767506 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:29:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 453 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 34 KiB/s wr, 224 op/s
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.900 244018 DEBUG nova.objects.instance [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lazy-loading 'migration_context' on Instance uuid d945940d-a1b5-4a36-b980-efda3a9efda6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.985 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.986 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Ensure instance console log exists: /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.987 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.988 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:39 np0005629333 nova_compute[244014]: 2026-02-25 12:29:39.988 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.125 244018 DEBUG nova.network.neutron [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.184 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.185 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022580.177592, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.192 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.195 244018 DEBUG nova.compute.manager [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.200 244018 INFO nova.virt.libvirt.driver [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance rebooted successfully.#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.200 244018 DEBUG nova.compute.manager [None req-834d9398-4d2b-4cbf-a117-8c4535ee59a7 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.229 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.233 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.262 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.263 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022580.189978, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.263 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Started (Lifecycle Event)#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.296 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.299 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
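The two "Synchronizing instance power state" lines above show the check nova makes whenever libvirt emits a lifecycle event: the DB record (vm_state, task_state, power_state) is compared with what the hypervisor reports, and when a task such as powering-on still owns the instance the sync is skipped, as logged at 12:29:40.262. A simplified sketch of that decision, using the nova.compute.power_state constant values (1 = RUNNING, 4 = SHUTDOWN):

    # Simplified sketch of the power-state sync decision logged above;
    # the constants match nova.compute.power_state (1=RUNNING, 4=SHUTDOWN).
    RUNNING, SHUTDOWN = 1, 4

    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            # e.g. "powering-on": another operation owns the instance.
            return 'skip: pending task {}'.format(task_state)
        if db_power_state != vm_power_state:
            # Out of sync: nova updates the DB and may force a stop/start.
            return 'update DB {} -> {}'.format(db_power_state, vm_power_state)
        return 'in sync'

    print(sync_power_state(SHUTDOWN, RUNNING, 'powering-on'))  # skipped, as logged
    print(sync_power_state(RUNNING, RUNNING, None))            # in sync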
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.307 244018 DEBUG oslo_concurrency.lockutils [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "abe229eb-2238-4237-a7f2-83b8476ac1dc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.308 244018 DEBUG oslo_concurrency.lockutils [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.308 244018 DEBUG oslo_concurrency.lockutils [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.309 244018 DEBUG oslo_concurrency.lockutils [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.309 244018 DEBUG oslo_concurrency.lockutils [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.311 244018 INFO nova.compute.manager [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Terminating instance#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.312 244018 DEBUG nova.compute.manager [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.379 244018 DEBUG nova.compute.manager [req-d36dafd3-ae58-4e60-9d71-0966a82a5607 req-d23dda63-2fdd-47b4-8326-7979668debbf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.379 244018 DEBUG oslo_concurrency.lockutils [req-d36dafd3-ae58-4e60-9d71-0966a82a5607 req-d23dda63-2fdd-47b4-8326-7979668debbf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.379 244018 DEBUG oslo_concurrency.lockutils [req-d36dafd3-ae58-4e60-9d71-0966a82a5607 req-d23dda63-2fdd-47b4-8326-7979668debbf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.380 244018 DEBUG oslo_concurrency.lockutils [req-d36dafd3-ae58-4e60-9d71-0966a82a5607 req-d23dda63-2fdd-47b4-8326-7979668debbf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.380 244018 DEBUG nova.compute.manager [req-d36dafd3-ae58-4e60-9d71-0966a82a5607 req-d23dda63-2fdd-47b4-8326-7979668debbf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.380 244018 WARNING nova.compute.manager [req-d36dafd3-ae58-4e60-9d71-0966a82a5607 req-d23dda63-2fdd-47b4-8326-7979668debbf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state None.#033[00m
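The "No waiting events found" / "Received unexpected event" pair is the receiving half of nova's external-event handshake: neutron reports network-vif-plugged after the reboot, but no waiter is registered for that event on this instance, so popping it finds nothing and the event is logged as unexpected. A toy version of that registry (names are hypothetical; the module-level lock stands in for the per-instance "<uuid>-events" lock seen in the lines above):

    # Toy sketch of the waiter registry behind "No waiting events found".
    # Names are hypothetical; the behaviour mirrors the log lines above.
    import threading

    _waiters = {}               # (instance_uuid, event_name) -> Event
    _lock = threading.Lock()    # stands in for the "<uuid>-events" lock

    def prepare(instance_uuid, event_name):
        with _lock:
            ev = _waiters[(instance_uuid, event_name)] = threading.Event()
        return ev               # caller blocks on ev.wait(timeout)

    def pop(instance_uuid, event_name):
        with _lock:
            ev = _waiters.pop((instance_uuid, event_name), None)
        if ev is None:
            print('Received unexpected event', event_name)
        else:
            ev.set()            # wakes wait_for_instance_event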
Feb 25 07:29:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:29:40 np0005629333 podman[303191]: 2026-02-25 12:29:40.601362267 +0000 UTC m=+0.884687878 container create 84c2a7e1679bfc942ea4275a6f6001f42fea18892b1ffccbd4c59228d1dc9f85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_managed=true)
Feb 25 07:29:40 np0005629333 kernel: tapab721623-aa (unregistering): left promiscuous mode
Feb 25 07:29:40 np0005629333 NetworkManager[49836]: <info>  [1772022580.6350] device (tapab721623-aa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:29:40 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:40Z|00620|binding|INFO|Releasing lport ab721623-aa6d-494f-8b90-6ffd63b7a33f from this chassis (sb_readonly=0)
Feb 25 07:29:40 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:40Z|00621|binding|INFO|Setting lport ab721623-aa6d-494f-8b90-6ffd63b7a33f down in Southbound
Feb 25 07:29:40 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:40Z|00622|binding|INFO|Removing iface tapab721623-aa ovn-installed in OVS
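The three binding messages above are ovn-controller clearing the port's chassis and setting it down in the Southbound database as the tap device disappears. The same Port_Binding row can be inspected from the CLI; a sketch assuming ovn-sbctl is on PATH and can reach the SB DB from this host (the logical port name is from the log, the JSON output format is a standard db-ctl option):

    # Sketch: inspect the Southbound Port_Binding that ovn-controller
    # just released. Assumes ovn-sbctl can reach the SB database.
    import json
    import subprocess

    out = subprocess.check_output([
        'ovn-sbctl', '--format=json', 'find', 'Port_Binding',
        'logical_port=ab721623-aa6d-494f-8b90-6ffd63b7a33f',
    ])
    table = json.loads(out)
    # After the release the "chassis" column is empty and "up" is false.
    print(table['headings'])
    for row in table['data']:
        print(row)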
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.654 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:40 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:40.664 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:6c:fd 10.100.0.13'], port_security=['fa:16:3e:53:6c:fd 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'abe229eb-2238-4237-a7f2-83b8476ac1dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a448894-87d7-4c8e-a168-2593011ffed7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '780a93a8758a4bd78b22fe68ed6276cf', 'neutron:revision_number': '8', 'neutron:security_group_ids': '809ecded-d7db-4d4f-aed2-3cfc6bac71b9 bdf2b6e4-9059-4e5b-93fb-9739a22eb004 fb44570f-d13c-4d72-bde7-f388c86c70de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc8be04a-7591-4819-939d-39b48e9a96cf, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ab721623-aa6d-494f-8b90-6ffd63b7a33f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.671 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:40 np0005629333 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000042.scope: Deactivated successfully.
Feb 25 07:29:40 np0005629333 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000042.scope: Consumed 6.572s CPU time.
Feb 25 07:29:40 np0005629333 systemd-machined[210048]: Machine qemu-79-instance-00000042 terminated.
Feb 25 07:29:40 np0005629333 NetworkManager[49836]: <info>  [1772022580.7282] manager: (tapab721623-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/274)
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.744 244018 INFO nova.virt.libvirt.driver [-] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Instance destroyed successfully.#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.745 244018 DEBUG nova.objects.instance [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lazy-loading 'resources' on Instance uuid abe229eb-2238-4237-a7f2-83b8476ac1dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.759 244018 DEBUG nova.virt.libvirt.vif [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:29:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1679149420',display_name='tempest-SecurityGroupsTestJSON-server-1679149420',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1679149420',id=66,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:29:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='780a93a8758a4bd78b22fe68ed6276cf',ramdisk_id='',reservation_id='r-q96j5rpg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-195828884',owner_user_name='tempest-SecurityGroupsTestJSON-195828884-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:29:34Z,user_data=None,user_id='bcb4ded096bc4f7993f96ca892b82333',uuid=abe229eb-2238-4237-a7f2-83b8476ac1dc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.760 244018 DEBUG nova.network.os_vif_util [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converting VIF {"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.763 244018 DEBUG nova.network.os_vif_util [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:53:6c:fd,bridge_name='br-int',has_traffic_filtering=True,id=ab721623-aa6d-494f-8b90-6ffd63b7a33f,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab721623-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.764 244018 DEBUG os_vif [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:6c:fd,bridge_name='br-int',has_traffic_filtering=True,id=ab721623-aa6d-494f-8b90-6ffd63b7a33f,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab721623-aa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.768 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.769 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab721623-aa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.771 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.775 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:29:40 np0005629333 nova_compute[244014]: 2026-02-25 12:29:40.779 244018 INFO os_vif [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:6c:fd,bridge_name='br-int',has_traffic_filtering=True,id=ab721623-aa6d-494f-8b90-6ffd63b7a33f,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapab721623-aa')#033[00m
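The DelPortCommand transaction above is os-vif driving ovsdbapp against the local switch: deleting tapab721623-aa from br-int with if_exists=True so a repeated unplug stays idempotent. A minimal sketch of issuing the same command directly through ovsdbapp (the OVSDB socket path is an assumption; os-vif builds the connection internally):

    # Minimal ovsdbapp sketch of the DelPortCommand transaction above.
    # The connection string is an assumption; os-vif wires this internally.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # if_exists=True turns a missing port into a no-op, matching the log.
    api.del_port('tapab721623-aa', bridge='br-int',
                 if_exists=True).execute(check_error=True)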
Feb 25 07:29:40 np0005629333 systemd[1]: Started libpod-conmon-84c2a7e1679bfc942ea4275a6f6001f42fea18892b1ffccbd4c59228d1dc9f85.scope.
Feb 25 07:29:40 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:29:40 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47676d99690e7ef9a70c5e78706610c8c77da71cb8b8a0b03963190efdeac6fe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.155 244018 DEBUG nova.network.neutron [req-8662c787-c531-4610-85a9-bfa0bfdaef94 req-fc2f069f-0f2e-40a1-b17b-a0f5b1b3db94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Updated VIF entry in instance network info cache for port ab721623-aa6d-494f-8b90-6ffd63b7a33f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.157 244018 DEBUG nova.network.neutron [req-8662c787-c531-4610-85a9-bfa0bfdaef94 req-fc2f069f-0f2e-40a1-b17b-a0f5b1b3db94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Updating instance_info_cache with network_info: [{"id": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "address": "fa:16:3e:53:6c:fd", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapab721623-aa", "ovs_interfaceid": "ab721623-aa6d-494f-8b90-6ffd63b7a33f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.176 244018 DEBUG oslo_concurrency.lockutils [req-8662c787-c531-4610-85a9-bfa0bfdaef94 req-fc2f069f-0f2e-40a1-b17b-a0f5b1b3db94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-abe229eb-2238-4237-a7f2-83b8476ac1dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.211 244018 DEBUG nova.compute.manager [req-a214dfc8-f31b-4122-9130-35c5c13c2e79 req-cd41393f-82a0-455f-8157-641475c2db12 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-changed-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.212 244018 DEBUG nova.compute.manager [req-a214dfc8-f31b-4122-9130-35c5c13c2e79 req-cd41393f-82a0-455f-8157-641475c2db12 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Refreshing instance network info cache due to event network-changed-197929cb-aaa4-48f0-a831-7ac3f4ac5b37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.212 244018 DEBUG oslo_concurrency.lockutils [req-a214dfc8-f31b-4122-9130-35c5c13c2e79 req-cd41393f-82a0-455f-8157-641475c2db12 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:29:41 np0005629333 podman[303191]: 2026-02-25 12:29:41.222865462 +0000 UTC m=+1.506191063 container init 84c2a7e1679bfc942ea4275a6f6001f42fea18892b1ffccbd4c59228d1dc9f85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 25 07:29:41 np0005629333 podman[303191]: 2026-02-25 12:29:41.230857679 +0000 UTC m=+1.514183260 container start 84c2a7e1679bfc942ea4275a6f6001f42fea18892b1ffccbd4c59228d1dc9f85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS)
Feb 25 07:29:41 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[303267]: [NOTICE]   (303271) : New worker (303273) forked
Feb 25 07:29:41 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[303267]: [NOTICE]   (303271) : Loading success.
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.280 244018 DEBUG nova.network.neutron [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Updating instance_info_cache with network_info: [{"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.309 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Releasing lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.310 244018 DEBUG nova.compute.manager [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Instance network_info: |[{"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.310 244018 DEBUG oslo_concurrency.lockutils [req-a214dfc8-f31b-4122-9130-35c5c13c2e79 req-cd41393f-82a0-455f-8157-641475c2db12 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.311 244018 DEBUG nova.network.neutron [req-a214dfc8-f31b-4122-9130-35c5c13c2e79 req-cd41393f-82a0-455f-8157-641475c2db12 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Refreshing network info cache for port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.314 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Start _get_guest_xml network_info=[{"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.318 244018 WARNING nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.324 244018 DEBUG nova.virt.libvirt.host [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.325 244018 DEBUG nova.virt.libvirt.host [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.329 244018 DEBUG nova.virt.libvirt.host [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.329 244018 DEBUG nova.virt.libvirt.host [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.330 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.330 244018 DEBUG nova.virt.hardware [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.331 244018 DEBUG nova.virt.hardware [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.331 244018 DEBUG nova.virt.hardware [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.331 244018 DEBUG nova.virt.hardware [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.331 244018 DEBUG nova.virt.hardware [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.332 244018 DEBUG nova.virt.hardware [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.332 244018 DEBUG nova.virt.hardware [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.332 244018 DEBUG nova.virt.hardware [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.333 244018 DEBUG nova.virt.hardware [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.333 244018 DEBUG nova.virt.hardware [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.333 244018 DEBUG nova.virt.hardware [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
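The topology walk above (limits 65536:65536:65536, one vCPU, a single possible topology 1:1:1) is nova enumerating every sockets x cores x threads factorization of the vCPU count and then sorting the candidates by preference. A compact sketch of that enumeration; the preference sorting nova applies afterwards is omitted here:

    # Compact sketch of the CPU-topology enumeration logged above.
    # Preference sorting and flavor/image overrides are omitted.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], as logged
    print(list(possible_topologies(4)))  # (1,1,4), (1,2,2), (2,2,1), ...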
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.336 244018 DEBUG oslo_concurrency.processutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
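The subprocess above is nova's RBD backend asking Ceph for the current monitor map; oslo.concurrency's processutils wraps the fork/exec and logs the "Running cmd" / "returned" pair (the 0.984 s round trip is logged at 12:29:42.320 below). The same call can be issued directly, with the arguments verbatim from the log:

    # The ceph invocation nova logs above, issued via oslo.concurrency.
    # Arguments are verbatim from the log; needs a reachable Ceph cluster.
    import json
    from oslo_concurrency import processutils

    stdout, stderr = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

    mon_map = json.loads(stdout)
    print([mon['name'] for mon in mon_map['mons']])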
Feb 25 07:29:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:41.648 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ab721623-aa6d-494f-8b90-6ffd63b7a33f in datapath 9a448894-87d7-4c8e-a168-2593011ffed7 unbound from our chassis#033[00m
Feb 25 07:29:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:41.650 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9a448894-87d7-4c8e-a168-2593011ffed7#033[00m
Feb 25 07:29:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:41.665 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0479a1d9-4b26-4252-bcf8-3ebfdf433fb2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:41.691 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b8f06e15-bc6f-4dca-af54-e896713b99bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:41.694 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[57a80eaa-e239-4ef1-83dd-3449086721e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:41.722 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7aab4d1a-a809-4fb8-b2a7-7faa4c859b22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:41.741 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3e7613de-9629-4c2a-a761-7ca748d74457]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9a448894-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:4c:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449633, 'reachable_time': 29076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303307, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:41.758 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fd4b6c96-a438-4f2d-bcca-9034bd93d177]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9a448894-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449644, 'tstamp': 449644}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303308, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9a448894-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449646, 'tstamp': 449646}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303308, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
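The two large privsep replies above are netlink dumps (RTM_NEWLINK, then RTM_NEWADDR) taken from inside the ovnmeta-9a448894-... namespace: the tap device holds both the subnet address 10.100.0.2/28 and the metadata address 169.254.169.254/32. A sketch of the same query with pyroute2, which is what the agent runs under its privsep daemon (namespace name from the log; requires root):

    # Sketch of the netlink query behind the privsep replies above, done
    # directly with pyroute2 (the agent does this via privsep). Needs root.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7') as ns:
        for addr in ns.get_addr():
            print(addr.get_attr('IFA_LABEL'),
                  '%s/%s' % (addr.get_attr('IFA_ADDRESS'),
                             addr['prefixlen']))
        # Expected, per the RTM_NEWADDR messages in the log:
        #   tap9a448894-81 10.100.0.2/28
        #   tap9a448894-81 169.254.169.254/32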
Feb 25 07:29:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:41.760 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a448894-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:41 np0005629333 nova_compute[244014]: 2026-02-25 12:29:41.762 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:41.766 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9a448894-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:41.766 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:29:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:41.767 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9a448894-80, col_values=(('external_ids', {'iface-id': '6ef06fbe-6eb9-4d55-bbd8-9394a70da39f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:41.767 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:29:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 473 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 857 KiB/s wr, 284 op/s
Feb 25 07:29:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:29:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:29:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2223370145' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:29:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:29:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:29:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:29:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0027299510273700845 of space, bias 1.0, pg target 0.8189853082110253 quantized to 32 (current 32)
Feb 25 07:29:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:29:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:29:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:29:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:29:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:29:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002492563996361888 of space, bias 1.0, pg target 0.7477691989085664 quantized to 32 (current 32)
Feb 25 07:29:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:29:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.701649371663396e-07 of space, bias 4.0, pg target 0.0010441979245996076 quantized to 16 (current 16)
Feb 25 07:29:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:29:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:29:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:29:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:29:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:29:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:29:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:29:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:29:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:29:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
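[annotation] The pg_autoscaler lines above are one pass of _maybe_adjust: for each pool, pg target = (fraction of cluster capacity used) x bias x a cluster-wide PG budget. The budget implied by these numbers is 300, which is what the default mon_target_pg_per_osd = 100 would give across this cluster's three OSDs. A minimal sketch re-deriving the logged targets from the logged inputs (the 300 budget is inferred, not logged):

    # Re-derive the pg_autoscaler targets printed above.
    # PG_BUDGET = 300 is an assumption: mon_target_pg_per_osd (default 100)
    # times the 3 OSDs backing this 60 GiB cluster.
    PG_BUDGET = 300

    pools = {  # name -> (capacity fraction used, bias), copied from the log
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (0.0027299510273700845, 1.0),
        "images":             (0.002492563996361888,  1.0),
        "cephfs.cephfs.meta": (8.701649371663396e-07, 4.0),
    }

    for name, (used, bias) in pools.items():
        print(f"{name}: pg target {used * bias * PG_BUDGET}")
    # .mgr               -> ~0.0021557249951162337  (matches the log)
    # vms                -> ~0.8189853082110253     (matches the log)
    # cephfs.cephfs.meta -> ~0.0010441979245996076  (matches the log)

The "quantized to" figure is then the nearest power of two floored at each pool's pg_num_min, which is why a 0.82 target still reads "quantized to 32" (the autoscaler's default floor) while .mgr and the CephFS metadata pool, which carry lower floors, settle at 1 and 16.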
Feb 25 07:29:42 np0005629333 nova_compute[244014]: 2026-02-25 12:29:42.320 244018 DEBUG oslo_concurrency.processutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.984s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:29:42 np0005629333 nova_compute[244014]: 2026-02-25 12:29:42.367 244018 DEBUG nova.storage.rbd_utils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] rbd image d945940d-a1b5-4a36-b980-efda3a9efda6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:29:42 np0005629333 nova_compute[244014]: 2026-02-25 12:29:42.372 244018 DEBUG oslo_concurrency.processutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:29:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1108628078' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:29:42 np0005629333 nova_compute[244014]: 2026-02-25 12:29:42.923 244018 DEBUG oslo_concurrency.processutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
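[annotation] nova's RBD backend does not query the monitors through librados for this; it shells out to the ceph CLI (via oslo_concurrency.processutils, as logged) to fetch the mon map it later turns into the <host> entries of the disk XML below. A by-hand equivalent of the logged command, using the same client id and conf path (an inspection sketch, not nova code):

    import json
    import subprocess

    # Exactly the command logged above; requires the client.openstack
    # keyring referenced by /etc/ceph/ceph.conf to be readable.
    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    monmap = json.loads(out)
    for mon in monmap["mons"]:
        # e.g. the 192.168.122.100:6789 endpoint used in the domain XML
        print(mon["name"], mon.get("public_addr"))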
Feb 25 07:29:42 np0005629333 nova_compute[244014]: 2026-02-25 12:29:42.926 244018 DEBUG nova.virt.libvirt.vif [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:29:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-579439343',display_name='tempest-ServerRescueTestJSONUnderV235-server-579439343',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-579439343',id=69,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='61371c73c9fb4961886c5c22f8f871e1',ramdisk_id='',reservation_id='r-yaoxup28',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1059268888',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1059268888-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:29:35Z,user_data=None,user_id='74af2f394ab04b06b55e62150e81b6b1',uuid=d945940d-a1b5-4a36-b980-efda3a9efda6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:29:42 np0005629333 nova_compute[244014]: 2026-02-25 12:29:42.926 244018 DEBUG nova.network.os_vif_util [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Converting VIF {"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:29:42 np0005629333 nova_compute[244014]: 2026-02-25 12:29:42.927 244018 DEBUG nova.network.os_vif_util [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=197929cb-aaa4-48f0-a831-7ac3f4ac5b37,network=Network(a36e5ee4-5e24-4d80-83ee-01c487ff157c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap197929cb-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:29:42 np0005629333 nova_compute[244014]: 2026-02-25 12:29:42.929 244018 DEBUG nova.objects.instance [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lazy-loading 'pci_devices' on Instance uuid d945940d-a1b5-4a36-b980-efda3a9efda6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:42 np0005629333 nova_compute[244014]: 2026-02-25 12:29:42.954 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:29:42 np0005629333 nova_compute[244014]:  <uuid>d945940d-a1b5-4a36-b980-efda3a9efda6</uuid>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:  <name>instance-00000045</name>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerRescueTestJSONUnderV235-server-579439343</nova:name>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:29:41</nova:creationTime>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:29:42 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:        <nova:user uuid="74af2f394ab04b06b55e62150e81b6b1">tempest-ServerRescueTestJSONUnderV235-1059268888-project-member</nova:user>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:        <nova:project uuid="61371c73c9fb4961886c5c22f8f871e1">tempest-ServerRescueTestJSONUnderV235-1059268888</nova:project>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:        <nova:port uuid="197929cb-aaa4-48f0-a831-7ac3f4ac5b37">
Feb 25 07:29:42 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <entry name="serial">d945940d-a1b5-4a36-b980-efda3a9efda6</entry>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <entry name="uuid">d945940d-a1b5-4a36-b980-efda3a9efda6</entry>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/d945940d-a1b5-4a36-b980-efda3a9efda6_disk">
Feb 25 07:29:42 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:29:42 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/d945940d-a1b5-4a36-b980-efda3a9efda6_disk.config">
Feb 25 07:29:42 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:29:42 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:3f:10:e8"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <target dev="tap197929cb-aa"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/console.log" append="off"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:29:42 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:29:42 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:29:42 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:29:42 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
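[annotation] The generated domain is a small q35 guest whose storage lives entirely in Ceph: the root disk vms/..._disk on virtio (vda) and a config-drive cdrom vms/..._disk.config on SATA (sda), both authenticating through the libvirt ceph secret 8ac33163-6221-5d58-9a39-8b6933fe7762. A quick way to pull that storage layout back out of such XML, e.g. when comparing against virsh dumpxml output (a sketch; assumes the domain XML above is saved as dom.xml):

    import xml.etree.ElementTree as ET

    root = ET.parse("dom.xml").getroot()
    for disk in root.findall("./devices/disk"):
        src, tgt = disk.find("source"), disk.find("target")
        print(disk.get("device"), tgt.get("dev"),
              src.get("protocol"), src.get("name"))
    # disk  vda rbd vms/d945940d-a1b5-4a36-b980-efda3a9efda6_disk
    # cdrom sda rbd vms/d945940d-a1b5-4a36-b980-efda3a9efda6_disk.config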
Feb 25 07:29:42 np0005629333 nova_compute[244014]: 2026-02-25 12:29:42.955 244018 DEBUG nova.compute.manager [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Preparing to wait for external event network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:29:42 np0005629333 nova_compute[244014]: 2026-02-25 12:29:42.955 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Acquiring lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:42 np0005629333 nova_compute[244014]: 2026-02-25 12:29:42.956 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:42 np0005629333 nova_compute[244014]: 2026-02-25 12:29:42.956 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:42 np0005629333 nova_compute[244014]: 2026-02-25 12:29:42.957 244018 DEBUG nova.virt.libvirt.vif [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:29:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-579439343',display_name='tempest-ServerRescueTestJSONUnderV235-server-579439343',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-579439343',id=69,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='61371c73c9fb4961886c5c22f8f871e1',ramdisk_id='',reservation_id='r-yaoxup28',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1059268888',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1059268888-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:29:35Z,user_data=None,user_id='74af2f394ab04b06b55e62150e81b6b1',uuid=d945940d-a1b5-4a36-b980-efda3a9efda6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:29:42 np0005629333 nova_compute[244014]: 2026-02-25 12:29:42.958 244018 DEBUG nova.network.os_vif_util [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Converting VIF {"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:29:42 np0005629333 nova_compute[244014]: 2026-02-25 12:29:42.959 244018 DEBUG nova.network.os_vif_util [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=197929cb-aaa4-48f0-a831-7ac3f4ac5b37,network=Network(a36e5ee4-5e24-4d80-83ee-01c487ff157c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap197929cb-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:29:42 np0005629333 nova_compute[244014]: 2026-02-25 12:29:42.959 244018 DEBUG os_vif [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=197929cb-aaa4-48f0-a831-7ac3f4ac5b37,network=Network(a36e5ee4-5e24-4d80-83ee-01c487ff157c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap197929cb-aa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:29:42 np0005629333 nova_compute[244014]: 2026-02-25 12:29:42.960 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:42 np0005629333 nova_compute[244014]: 2026-02-25 12:29:42.960 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:42 np0005629333 nova_compute[244014]: 2026-02-25 12:29:42.961 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:29:42 np0005629333 nova_compute[244014]: 2026-02-25 12:29:42.964 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:42 np0005629333 nova_compute[244014]: 2026-02-25 12:29:42.964 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap197929cb-aa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:42 np0005629333 nova_compute[244014]: 2026-02-25 12:29:42.965 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap197929cb-aa, col_values=(('external_ids', {'iface-id': '197929cb-aaa4-48f0-a831-7ac3f4ac5b37', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3f:10:e8', 'vm-uuid': 'd945940d-a1b5-4a36-b980-efda3a9efda6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:42 np0005629333 nova_compute[244014]: 2026-02-25 12:29:42.967 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:42 np0005629333 NetworkManager[49836]: <info>  [1772022582.9688] manager: (tap197929cb-aa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/275)
Feb 25 07:29:42 np0005629333 nova_compute[244014]: 2026-02-25 12:29:42.974 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:42 np0005629333 nova_compute[244014]: 2026-02-25 12:29:42.975 244018 INFO os_vif [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=197929cb-aaa4-48f0-a831-7ac3f4ac5b37,network=Network(a36e5ee4-5e24-4d80-83ee-01c487ff157c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap197929cb-aa')#033[00m
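[annotation] The three ovsdbapp commands above (AddBridgeCommand, AddPortCommand, DbSetCommand) are the whole os-vif OVS plug: ensure br-int exists, add the tap port, and stamp the Interface row with the external_ids that let ovn-controller match it to Neutron port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 and bind the chassis. The same transaction expressed through the ovs-vsctl CLI, values copied from the log (a reproduction sketch; nova itself goes through the OVSDB IDL, not the CLI):

    import subprocess

    # AddBridgeCommand(name=br-int, may_exist=True, datapath_type=system)
    subprocess.run(["ovs-vsctl", "--may-exist", "add-br", "br-int", "--",
                    "set", "Bridge", "br-int", "datapath_type=system"],
                   check=True)

    # AddPortCommand + DbSetCommand on the Interface row
    subprocess.run([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", "tap197929cb-aa",
        "--", "set", "Interface", "tap197929cb-aa",
        "external_ids:iface-id=197929cb-aaa4-48f0-a831-7ac3f4ac5b37",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:3f:10:e8",
        "external_ids:vm-uuid=d945940d-a1b5-4a36-b980-efda3a9efda6",
    ], check=True)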
Feb 25 07:29:43 np0005629333 nova_compute[244014]: 2026-02-25 12:29:43.137 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:29:43 np0005629333 nova_compute[244014]: 2026-02-25 12:29:43.138 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:29:43 np0005629333 nova_compute[244014]: 2026-02-25 12:29:43.138 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] No VIF found with MAC fa:16:3e:3f:10:e8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:29:43 np0005629333 nova_compute[244014]: 2026-02-25 12:29:43.138 244018 INFO nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Using config drive#033[00m
Feb 25 07:29:43 np0005629333 nova_compute[244014]: 2026-02-25 12:29:43.532 244018 DEBUG nova.storage.rbd_utils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] rbd image d945940d-a1b5-4a36-b980-efda3a9efda6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:29:43 np0005629333 nova_compute[244014]: 2026-02-25 12:29:43.541 244018 DEBUG nova.network.neutron [req-a214dfc8-f31b-4122-9130-35c5c13c2e79 req-cd41393f-82a0-455f-8157-641475c2db12 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Updated VIF entry in instance network info cache for port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:29:43 np0005629333 nova_compute[244014]: 2026-02-25 12:29:43.542 244018 DEBUG nova.network.neutron [req-a214dfc8-f31b-4122-9130-35c5c13c2e79 req-cd41393f-82a0-455f-8157-641475c2db12 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Updating instance_info_cache with network_info: [{"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:29:43 np0005629333 nova_compute[244014]: 2026-02-25 12:29:43.548 244018 DEBUG nova.compute.manager [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:43 np0005629333 nova_compute[244014]: 2026-02-25 12:29:43.548 244018 DEBUG oslo_concurrency.lockutils [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:43 np0005629333 nova_compute[244014]: 2026-02-25 12:29:43.548 244018 DEBUG oslo_concurrency.lockutils [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:43 np0005629333 nova_compute[244014]: 2026-02-25 12:29:43.549 244018 DEBUG oslo_concurrency.lockutils [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:43 np0005629333 nova_compute[244014]: 2026-02-25 12:29:43.549 244018 DEBUG nova.compute.manager [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:29:43 np0005629333 nova_compute[244014]: 2026-02-25 12:29:43.549 244018 WARNING nova.compute.manager [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:29:43 np0005629333 nova_compute[244014]: 2026-02-25 12:29:43.550 244018 DEBUG nova.compute.manager [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received event network-vif-unplugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:43 np0005629333 nova_compute[244014]: 2026-02-25 12:29:43.550 244018 DEBUG oslo_concurrency.lockutils [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:43 np0005629333 nova_compute[244014]: 2026-02-25 12:29:43.550 244018 DEBUG oslo_concurrency.lockutils [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:43 np0005629333 nova_compute[244014]: 2026-02-25 12:29:43.551 244018 DEBUG oslo_concurrency.lockutils [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:43 np0005629333 nova_compute[244014]: 2026-02-25 12:29:43.551 244018 DEBUG nova.compute.manager [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] No waiting events found dispatching network-vif-unplugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:29:43 np0005629333 nova_compute[244014]: 2026-02-25 12:29:43.551 244018 DEBUG nova.compute.manager [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received event network-vif-unplugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:29:43 np0005629333 nova_compute[244014]: 2026-02-25 12:29:43.552 244018 DEBUG nova.compute.manager [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received event network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:43 np0005629333 nova_compute[244014]: 2026-02-25 12:29:43.552 244018 DEBUG oslo_concurrency.lockutils [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:43 np0005629333 nova_compute[244014]: 2026-02-25 12:29:43.552 244018 DEBUG oslo_concurrency.lockutils [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:43 np0005629333 nova_compute[244014]: 2026-02-25 12:29:43.553 244018 DEBUG oslo_concurrency.lockutils [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:43 np0005629333 nova_compute[244014]: 2026-02-25 12:29:43.553 244018 DEBUG nova.compute.manager [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] No waiting events found dispatching network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:29:43 np0005629333 nova_compute[244014]: 2026-02-25 12:29:43.553 244018 WARNING nova.compute.manager [req-a3e4b467-9e95-42af-aaf2-20794b2aae9c req-684cf23d-cb67-4931-9690-69be538531f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received unexpected event network-vif-plugged-ab721623-aa6d-494f-8b90-6ffd63b7a33f for instance with vm_state active and task_state deleting.#033[00m
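[annotation] The lock/pop choreography above is nova's rendezvous for Neutron events: prepare_for_instance_event registers a waiter keyed on (instance, event name) before the operation starts, and when Neutron later delivers the event, pop_instance_event either wakes that waiter or, if nobody registered (these instances are already active or deleting), logs "No waiting events found" plus the "unexpected event" warning. The shape of that mechanism, reduced to a threading sketch with hypothetical names (the real version lives in nova/compute/manager.py):

    import threading

    _waiters = {}              # (instance_uuid, event_name) -> threading.Event
    _lock = threading.Lock()   # the "...-events" lock in the log

    def prepare_for_event(instance, name):
        with _lock:
            return _waiters.setdefault((instance, name), threading.Event())

    def deliver_event(instance, name):
        with _lock:
            waiter = _waiters.pop((instance, name), None)
        if waiter is None:
            print(f"No waiting events found dispatching {name}")  # warn path
        else:
            waiter.set()  # unblocks the thread waiting on the plug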
Feb 25 07:29:43 np0005629333 nova_compute[244014]: 2026-02-25 12:29:43.579 244018 DEBUG oslo_concurrency.lockutils [req-a214dfc8-f31b-4122-9130-35c5c13c2e79 req-cd41393f-82a0-455f-8157-641475c2db12 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:29:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 500 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 1.8 MiB/s wr, 317 op/s
Feb 25 07:29:44 np0005629333 nova_compute[244014]: 2026-02-25 12:29:44.306 244018 INFO nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Creating config drive at /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/disk.config#033[00m
Feb 25 07:29:44 np0005629333 nova_compute[244014]: 2026-02-25 12:29:44.312 244018 DEBUG oslo_concurrency.processutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpcq3tr85h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:44 np0005629333 nova_compute[244014]: 2026-02-25 12:29:44.449 244018 DEBUG oslo_concurrency.processutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpcq3tr85h" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:29:44 np0005629333 nova_compute[244014]: 2026-02-25 12:29:44.802 244018 DEBUG nova.storage.rbd_utils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] rbd image d945940d-a1b5-4a36-b980-efda3a9efda6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:29:44 np0005629333 nova_compute[244014]: 2026-02-25 12:29:44.806 244018 DEBUG oslo_concurrency.processutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/disk.config d945940d-a1b5-4a36-b980-efda3a9efda6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
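[annotation] Because the instance is RBD-backed, the config drive is assembled locally and then pushed into the vms pool, which is what the repeated "rbd image ..._disk.config does not exist" probes earlier were checking for. The two commands, exactly as logged, reduced to a shell-out sketch (/tmp/tmpcq3tr85h is nova's ephemeral staging tempdir):

    import subprocess

    iso = ("/var/lib/nova/instances/"
           "d945940d-a1b5-4a36-b980-efda3a9efda6/disk.config")

    # Build the ISO9660 config drive (volume label config-2) from the
    # staged metadata directory.
    subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-ldots",
                    "-allow-lowercase", "-allow-multidot", "-l", "-publisher",
                    "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
                    "-quiet", "-J", "-r", "-V", "config-2",
                    "/tmp/tmpcq3tr85h"], check=True)

    # Import it into Ceph so libvirt can attach it as the rbd cdrom (sda)
    # declared in the domain XML above.
    subprocess.run(["rbd", "import", "--pool", "vms", iso,
                    "d945940d-a1b5-4a36-b980-efda3a9efda6_disk.config",
                    "--image-format=2", "--id", "openstack",
                    "--conf", "/etc/ceph/ceph.conf"], check=True)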
Feb 25 07:29:44 np0005629333 nova_compute[244014]: 2026-02-25 12:29:44.836 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:44 np0005629333 nova_compute[244014]: 2026-02-25 12:29:44.867 244018 DEBUG nova.objects.instance [None req-cf4e8229-fdc1-426c-989d-74ca743cc7c1 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:44 np0005629333 nova_compute[244014]: 2026-02-25 12:29:44.894 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022584.893929, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:29:44 np0005629333 nova_compute[244014]: 2026-02-25 12:29:44.894 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:29:44 np0005629333 nova_compute[244014]: 2026-02-25 12:29:44.925 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:44 np0005629333 nova_compute[244014]: 2026-02-25 12:29:44.931 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:29:45 np0005629333 nova_compute[244014]: 2026-02-25 12:29:45.172 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
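[annotation] The numeric states in the "Synchronizing instance power state" line are nova.compute.power_state constants: the database still says 1 (RUNNING) while libvirt reports 3 (PAUSED), because pausing the domain is the first step of the suspend in flight; with task_state suspending, the sync correctly skips. The mapping, for reading such lines:

    # Constants as defined in nova/compute/power_state.py
    POWER_STATES = {
        0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
        4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED",
    }
    print(POWER_STATES[1], "->", POWER_STATES[3])  # RUNNING -> PAUSED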
Feb 25 07:29:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:29:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 500 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 1.8 MiB/s wr, 310 op/s
Feb 25 07:29:46 np0005629333 kernel: tapee46268d-74 (unregistering): left promiscuous mode
Feb 25 07:29:46 np0005629333 NetworkManager[49836]: <info>  [1772022586.4840] device (tapee46268d-74): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:29:46 np0005629333 nova_compute[244014]: 2026-02-25 12:29:46.489 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:46Z|00623|binding|INFO|Releasing lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 from this chassis (sb_readonly=0)
Feb 25 07:29:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:46Z|00624|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 down in Southbound
Feb 25 07:29:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:46Z|00625|binding|INFO|Removing iface tapee46268d-74 ovn-installed in OVS
Feb 25 07:29:46 np0005629333 nova_compute[244014]: 2026-02-25 12:29:46.491 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:46.498 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:87:f1 10.100.0.11'], port_security=['fa:16:3e:ba:87:f1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'c2ca716d-3f7c-490b-954a-bca009559baa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ee46268d-740d-4ff9-8b65-4a81fc61eec3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:29:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:46.500 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ee46268d-740d-4ff9-8b65-4a81fc61eec3 in datapath ce318891-cf3c-4d99-af7c-c01770f38194 unbound from our chassis#033[00m
Feb 25 07:29:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:46.503 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce318891-cf3c-4d99-af7c-c01770f38194, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:29:46 np0005629333 nova_compute[244014]: 2026-02-25 12:29:46.506 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:46.505 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[efbe512e-808c-4924-907e-a57b12bf2c0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:46.508 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 namespace which is not needed anymore#033[00m
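[annotation] Each network with ports bound on this chassis gets an ovnmeta-<network-uuid> namespace hosting a haproxy that proxies 169.254.169.254 metadata traffic; once the last VIF on ce318891-... unbound, the agent tears the namespace down, which is what terminates the haproxy below. Listing the surviving proxies by hand on the compute node (a sketch; requires root, naming taken from the log line above):

    import subprocess

    # ovnmeta- namespaces still present on this chassis
    out = subprocess.check_output(["ip", "netns", "list"], text=True)
    print([line.split()[0] for line in out.splitlines()
           if line.startswith("ovnmeta-")])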
Feb 25 07:29:46 np0005629333 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000035.scope: Deactivated successfully.
Feb 25 07:29:46 np0005629333 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000035.scope: Consumed 5.400s CPU time.
Feb 25 07:29:46 np0005629333 systemd-machined[210048]: Machine qemu-80-instance-00000035 terminated.
Feb 25 07:29:46 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Feb 25 07:29:46 np0005629333 nova_compute[244014]: 2026-02-25 12:29:46.579 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:46 np0005629333 nova_compute[244014]: 2026-02-25 12:29:46.587 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:46 np0005629333 nova_compute[244014]: 2026-02-25 12:29:46.588 244018 DEBUG nova.compute.manager [None req-cf4e8229-fdc1-426c-989d-74ca743cc7c1 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:46 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[303267]: [NOTICE]   (303271) : haproxy version is 2.8.14-c23fe91
Feb 25 07:29:46 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[303267]: [NOTICE]   (303271) : path to executable is /usr/sbin/haproxy
Feb 25 07:29:46 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[303267]: [WARNING]  (303271) : Exiting Master process...
Feb 25 07:29:46 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[303267]: [ALERT]    (303271) : Current worker (303273) exited with code 143 (Terminated)
Feb 25 07:29:46 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[303267]: [WARNING]  (303271) : All workers exited. Exiting... (0)
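[annotation] Exit code 143 here is the usual 128 + signal-number encoding: the worker died from SIGTERM (15) sent during the master's orderly shutdown, not from a crash, which is why haproxy itself still exits 0 on the next line. Quick check:

    import signal
    assert 128 + signal.SIGTERM == 143  # SIGTERM == 15 on Linux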
Feb 25 07:29:46 np0005629333 systemd[1]: libpod-84c2a7e1679bfc942ea4275a6f6001f42fea18892b1ffccbd4c59228d1dc9f85.scope: Deactivated successfully.
Feb 25 07:29:46 np0005629333 podman[303454]: 2026-02-25 12:29:46.764356304 +0000 UTC m=+0.143163061 container died 84c2a7e1679bfc942ea4275a6f6001f42fea18892b1ffccbd4c59228d1dc9f85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 25 07:29:46 np0005629333 nova_compute[244014]: 2026-02-25 12:29:46.833 244018 DEBUG nova.compute.manager [req-f0befaab-db5d-4217-bd7d-75fb5c0a313f req-5c753961-cda9-43cd-91c0-5ca8473632ec 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:46 np0005629333 nova_compute[244014]: 2026-02-25 12:29:46.834 244018 DEBUG oslo_concurrency.lockutils [req-f0befaab-db5d-4217-bd7d-75fb5c0a313f req-5c753961-cda9-43cd-91c0-5ca8473632ec 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:46 np0005629333 nova_compute[244014]: 2026-02-25 12:29:46.834 244018 DEBUG oslo_concurrency.lockutils [req-f0befaab-db5d-4217-bd7d-75fb5c0a313f req-5c753961-cda9-43cd-91c0-5ca8473632ec 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:46 np0005629333 nova_compute[244014]: 2026-02-25 12:29:46.835 244018 DEBUG oslo_concurrency.lockutils [req-f0befaab-db5d-4217-bd7d-75fb5c0a313f req-5c753961-cda9-43cd-91c0-5ca8473632ec 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:46 np0005629333 nova_compute[244014]: 2026-02-25 12:29:46.835 244018 DEBUG nova.compute.manager [req-f0befaab-db5d-4217-bd7d-75fb5c0a313f req-5c753961-cda9-43cd-91c0-5ca8473632ec 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:29:46 np0005629333 nova_compute[244014]: 2026-02-25 12:29:46.836 244018 WARNING nova.compute.manager [req-f0befaab-db5d-4217-bd7d-75fb5c0a313f req-5c753961-cda9-43cd-91c0-5ca8473632ec 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state suspended and task_state None.#033[00m
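The acquire/pop/release run above is nova's per-instance event latch: an arriving neutron event either completes a waiter previously registered through wait_for_instance_event, or finds none and is logged as unexpected, as here, where the suspended instance has nothing waiting on the unplug. A rough illustrative sketch of that latch pattern (hypothetical, not nova's actual implementation):

    # Hypothetical latch: deliveries with no registered waiter are
    # "unexpected", mirroring the WARNING above.
    import threading
    from collections import defaultdict

    class InstanceEventLatch:
        def __init__(self):
            self._lock = threading.Lock()        # the "<uuid>-events" lock
            self._waiters = defaultdict(dict)    # uuid -> {event name: Event}

        def prepare(self, uuid, name):
            with self._lock:
                ev = self._waiters[uuid][name] = threading.Event()
            return ev                            # waiter blocks on ev.wait()

        def pop(self, uuid, name):
            with self._lock:                     # acquire ... pop ... release
                return self._waiters[uuid].pop(name, None)

    latch = InstanceEventLatch()
    ev = latch.pop("8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b",
                   "network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3")
    if ev is None:
        print("no waiting events found; treating event as unexpected")
    else:
        ev.set()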
Feb 25 07:29:47 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-84c2a7e1679bfc942ea4275a6f6001f42fea18892b1ffccbd4c59228d1dc9f85-userdata-shm.mount: Deactivated successfully.
Feb 25 07:29:47 np0005629333 systemd[1]: var-lib-containers-storage-overlay-47676d99690e7ef9a70c5e78706610c8c77da71cb8b8a0b03963190efdeac6fe-merged.mount: Deactivated successfully.
Feb 25 07:29:47 np0005629333 podman[303454]: 2026-02-25 12:29:47.33375856 +0000 UTC m=+0.712565327 container cleanup 84c2a7e1679bfc942ea4275a6f6001f42fea18892b1ffccbd4c59228d1dc9f85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:29:47 np0005629333 systemd[1]: libpod-conmon-84c2a7e1679bfc942ea4275a6f6001f42fea18892b1ffccbd4c59228d1dc9f85.scope: Deactivated successfully.
Feb 25 07:29:47 np0005629333 nova_compute[244014]: 2026-02-25 12:29:47.347 244018 DEBUG oslo_concurrency.processutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/disk.config d945940d-a1b5-4a36-b980-efda3a9efda6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:29:47 np0005629333 nova_compute[244014]: 2026-02-25 12:29:47.348 244018 INFO nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Deleting local config drive /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/disk.config because it was imported into RBD.#033[00m
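The import above is a plain rbd CLI call driven through oslo_concurrency.processutils, after which the local source file is redundant and gets deleted. A hedged sketch of the equivalent sequence, with the command line copied from the log and a hypothetical helper name:

    # Sketch assuming the rbd CLI and the ceph.conf/id shown in the log.
    import os
    from oslo_concurrency import processutils

    def import_config_drive(instance_uuid):
        src = "/var/lib/nova/instances/%s/disk.config" % instance_uuid
        processutils.execute(
            "rbd", "import", "--pool", "vms", src,
            "%s_disk.config" % instance_uuid,
            "--image-format=2", "--id", "openstack",
            "--conf", "/etc/ceph/ceph.conf")
        os.unlink(src)  # "Deleting local config drive ... imported into RBD"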
Feb 25 07:29:47 np0005629333 systemd-udevd[303422]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:29:47 np0005629333 NetworkManager[49836]: <info>  [1772022587.4131] manager: (tap197929cb-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/276)
Feb 25 07:29:47 np0005629333 kernel: tap197929cb-aa: entered promiscuous mode
Feb 25 07:29:47 np0005629333 nova_compute[244014]: 2026-02-25 12:29:47.416 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:47 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:47Z|00626|binding|INFO|Claiming lport 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 for this chassis.
Feb 25 07:29:47 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:47Z|00627|binding|INFO|197929cb-aaa4-48f0-a831-7ac3f4ac5b37: Claiming fa:16:3e:3f:10:e8 10.100.0.4
Feb 25 07:29:47 np0005629333 NetworkManager[49836]: <info>  [1772022587.4287] device (tap197929cb-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:29:47 np0005629333 NetworkManager[49836]: <info>  [1772022587.4314] device (tap197929cb-aa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:29:47 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:47Z|00628|binding|INFO|Setting lport 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 ovn-installed in OVS
Feb 25 07:29:47 np0005629333 nova_compute[244014]: 2026-02-25 12:29:47.434 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:47 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:47Z|00629|binding|INFO|Setting lport 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 up in Southbound
Feb 25 07:29:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:47.440 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:10:e8 10.100.0.4'], port_security=['fa:16:3e:3f:10:e8 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd945940d-a1b5-4a36-b980-efda3a9efda6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a36e5ee4-5e24-4d80-83ee-01c487ff157c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61371c73c9fb4961886c5c22f8f871e1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '95094e28-96cd-49ef-b6c0-712f53ce443e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4bfc4d05-cf0b-42bf-9420-27556b829f8f, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=197929cb-aaa4-48f0-a831-7ac3f4ac5b37) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
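The match above is an ovsdbapp RowEvent subscription on the southbound Port_Binding table; the repr in the DEBUG line (events=('update',), table='Port_Binding', conditions=None) corresponds one-to-one with the RowEvent constructor arguments. An illustrative subclass, assuming ovsdbapp's public event API; the body is a sketch, not neutron's code:

    # Sketch of a Port_Binding update subscription like the one matched above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedSketch(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.event_name = 'PortBindingUpdatedSketch'

        def run(self, event, row, old):
            # 'old' carries only the changed columns; in the match above it
            # was old=Port_Binding(chassis=[]), i.e. the port was just bound.
            print('Port_Binding %s updated, up=%s' % (row.logical_port, row.up))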
Feb 25 07:29:47 np0005629333 nova_compute[244014]: 2026-02-25 12:29:47.446 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:47 np0005629333 systemd-machined[210048]: New machine qemu-81-instance-00000045.
Feb 25 07:29:47 np0005629333 systemd[1]: Started Virtual Machine qemu-81-instance-00000045.
Feb 25 07:29:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:29:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2950221945' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:29:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:29:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2950221945' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 07:29:47 np0005629333 podman[303486]: 2026-02-25 12:29:47.716118683 +0000 UTC m=+0.352786825 container remove 84c2a7e1679bfc942ea4275a6f6001f42fea18892b1ffccbd4c59228d1dc9f85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 07:29:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:47.721 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[71bcecc7-752f-4125-97e3-166e02ba4764]: (4, ('Wed Feb 25 12:29:46 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 (84c2a7e1679bfc942ea4275a6f6001f42fea18892b1ffccbd4c59228d1dc9f85)\n84c2a7e1679bfc942ea4275a6f6001f42fea18892b1ffccbd4c59228d1dc9f85\nWed Feb 25 12:29:47 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 (84c2a7e1679bfc942ea4275a6f6001f42fea18892b1ffccbd4c59228d1dc9f85)\n84c2a7e1679bfc942ea4275a6f6001f42fea18892b1ffccbd4c59228d1dc9f85\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:47.723 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[806eac2d-8b0a-44db-8594-cbf5033ea585]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:47.724 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:47 np0005629333 nova_compute[244014]: 2026-02-25 12:29:47.726 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:47 np0005629333 kernel: tapce318891-c0: left promiscuous mode
Feb 25 07:29:47 np0005629333 nova_compute[244014]: 2026-02-25 12:29:47.758 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:47.768 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[26cd8a5e-e808-4853-bba1-ade62cf6b606]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:47.792 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ce6bddcb-c378-44f9-b14e-69fe5e72dd6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:47.794 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ccf5526a-2713-40cc-af99-c1bbaa634331]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:47.815 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8df4e874-f775-4653-8628-e343ff64baae]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454864, 'reachable_time': 41913, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303524, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:47 np0005629333 systemd[1]: run-netns-ovnmeta\x2dce318891\x2dcf3c\x2d4d99\x2daf7c\x2dc01770f38194.mount: Deactivated successfully.
Feb 25 07:29:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:47.821 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:29:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:47.821 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[986f38cb-10b3-492b-b249-17e5bb392955]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:47.822 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 in datapath a36e5ee4-5e24-4d80-83ee-01c487ff157c unbound from our chassis#033[00m
Feb 25 07:29:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:47.823 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a36e5ee4-5e24-4d80-83ee-01c487ff157c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 25 07:29:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:47.824 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[815c4d3d-ed20-4797-b187-31dc154ae3c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 485 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 5.0 MiB/s wr, 370 op/s
Feb 25 07:29:47 np0005629333 nova_compute[244014]: 2026-02-25 12:29:47.968 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:48 np0005629333 nova_compute[244014]: 2026-02-25 12:29:48.206 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022588.2057252, d945940d-a1b5-4a36-b980-efda3a9efda6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:29:48 np0005629333 nova_compute[244014]: 2026-02-25 12:29:48.208 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] VM Started (Lifecycle Event)#033[00m
Feb 25 07:29:48 np0005629333 nova_compute[244014]: 2026-02-25 12:29:48.238 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:48 np0005629333 nova_compute[244014]: 2026-02-25 12:29:48.246 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022588.2058125, d945940d-a1b5-4a36-b980-efda3a9efda6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:29:48 np0005629333 nova_compute[244014]: 2026-02-25 12:29:48.246 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:29:48 np0005629333 nova_compute[244014]: 2026-02-25 12:29:48.278 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:48 np0005629333 nova_compute[244014]: 2026-02-25 12:29:48.281 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:29:48 np0005629333 nova_compute[244014]: 2026-02-25 12:29:48.317 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
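For reference, the integers in the synchronization message above ("current DB power_state: 0, VM power_state: 3") are nova.compute.power_state constants. The values below are as commonly defined upstream and worth verifying against the installed tree:

    # Assumed mapping (verify against nova/compute/power_state.py):
    NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0x00, 0x01, 0x03, 0x04
    CRASHED, SUSPENDED = 0x06, 0x07
    # So the "Paused" lifecycle event saw the VM at 3 (PAUSED) while the DB
    # still held 0 (NOSTATE) for the spawning instance; the "Resumed" event
    # at 12:29:49.380 below reports VM power_state 1 (RUNNING).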
Feb 25 07:29:48 np0005629333 nova_compute[244014]: 2026-02-25 12:29:48.350 244018 INFO nova.virt.libvirt.driver [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Deleting instance files /var/lib/nova/instances/abe229eb-2238-4237-a7f2-83b8476ac1dc_del#033[00m
Feb 25 07:29:48 np0005629333 nova_compute[244014]: 2026-02-25 12:29:48.352 244018 INFO nova.virt.libvirt.driver [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Deletion of /var/lib/nova/instances/abe229eb-2238-4237-a7f2-83b8476ac1dc_del complete#033[00m
Feb 25 07:29:48 np0005629333 nova_compute[244014]: 2026-02-25 12:29:48.459 244018 INFO nova.compute.manager [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Took 8.15 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:29:48 np0005629333 nova_compute[244014]: 2026-02-25 12:29:48.460 244018 DEBUG oslo.service.loopingcall [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:29:48 np0005629333 nova_compute[244014]: 2026-02-25 12:29:48.461 244018 DEBUG nova.compute.manager [-] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:29:48 np0005629333 nova_compute[244014]: 2026-02-25 12:29:48.461 244018 DEBUG nova.network.neutron [-] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:29:48 np0005629333 nova_compute[244014]: 2026-02-25 12:29:48.568 244018 INFO nova.compute.manager [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Resuming#033[00m
Feb 25 07:29:48 np0005629333 nova_compute[244014]: 2026-02-25 12:29:48.569 244018 DEBUG nova.objects.instance [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'flavor' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:48 np0005629333 nova_compute[244014]: 2026-02-25 12:29:48.677 244018 DEBUG oslo_concurrency.lockutils [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:29:48 np0005629333 nova_compute[244014]: 2026-02-25 12:29:48.678 244018 DEBUG oslo_concurrency.lockutils [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquired lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:29:48 np0005629333 nova_compute[244014]: 2026-02-25 12:29:48.678 244018 DEBUG nova.network.neutron [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:29:48 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:48Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:03:8c:25 10.100.0.12
Feb 25 07:29:48 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:48Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:03:8c:25 10.100.0.12
Feb 25 07:29:48 np0005629333 nova_compute[244014]: 2026-02-25 12:29:48.864 244018 DEBUG nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Feb 25 07:29:49 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:49Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bf:b7:62 10.100.0.4
Feb 25 07:29:49 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:49Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bf:b7:62 10.100.0.4
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.325 244018 DEBUG nova.compute.manager [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.326 244018 DEBUG oslo_concurrency.lockutils [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.326 244018 DEBUG oslo_concurrency.lockutils [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.326 244018 DEBUG oslo_concurrency.lockutils [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.327 244018 DEBUG nova.compute.manager [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.327 244018 WARNING nova.compute.manager [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state suspended and task_state resuming.#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.327 244018 DEBUG nova.compute.manager [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.328 244018 DEBUG oslo_concurrency.lockutils [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.328 244018 DEBUG oslo_concurrency.lockutils [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.328 244018 DEBUG oslo_concurrency.lockutils [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.329 244018 DEBUG nova.compute.manager [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Processing event network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.329 244018 DEBUG nova.compute.manager [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.329 244018 DEBUG oslo_concurrency.lockutils [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.330 244018 DEBUG oslo_concurrency.lockutils [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.330 244018 DEBUG oslo_concurrency.lockutils [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.330 244018 DEBUG nova.compute.manager [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] No waiting events found dispatching network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.331 244018 WARNING nova.compute.manager [req-78679cb1-29b7-4ae5-a5b0-6036e1860d8d req-d01743ee-59a9-4a5f-90ff-e0a8dcc3b957 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received unexpected event network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 for instance with vm_state building and task_state spawning.#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.331 244018 DEBUG nova.compute.manager [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.338 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022589.3354928, d945940d-a1b5-4a36-b980-efda3a9efda6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.338 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.340 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.345 244018 INFO nova.virt.libvirt.driver [-] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Instance spawned successfully.#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.345 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.372 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.372 244018 DEBUG nova.network.neutron [-] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.380 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.389 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.389 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.390 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.391 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.391 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.392 244018 DEBUG nova.virt.libvirt.driver [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
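Collected from the six DEBUG lines above, the image-property defaults the driver registered for instance d945940d-a1b5-4a36-b980-efda3a9efda6 were:

    # Recap of values taken directly from the log lines above.
    image_property_defaults = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }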
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.423 244018 INFO nova.compute.manager [-] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Took 0.96 seconds to deallocate network for instance.#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.431 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.576 244018 DEBUG nova.compute.manager [req-2e824472-ceb5-42bd-9166-eef5c5a7caba req-43df687e-8bf5-461e-a840-7344d4ef49bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Received event network-vif-deleted-ab721623-aa6d-494f-8b90-6ffd63b7a33f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.610 244018 INFO nova.compute.manager [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Took 13.95 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.610 244018 DEBUG nova.compute.manager [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.699 244018 DEBUG oslo_concurrency.lockutils [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.700 244018 DEBUG oslo_concurrency.lockutils [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.756 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.794 244018 INFO nova.compute.manager [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Took 15.58 seconds to build instance.#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.827 244018 DEBUG oslo_concurrency.lockutils [None req-6814efdb-2f0c-45c6-9c04-d0809a90d8e7 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:49 np0005629333 nova_compute[244014]: 2026-02-25 12:29:49.857 244018 DEBUG oslo_concurrency.processutils [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 485 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.0 MiB/s wr, 161 op/s
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.332 244018 DEBUG nova.network.neutron [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Updating instance_info_cache with network_info: [{"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.359 244018 DEBUG oslo_concurrency.lockutils [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Releasing lock "refresh_cache-8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.366 244018 DEBUG nova.virt.libvirt.vif [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:29:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.367 244018 DEBUG nova.network.os_vif_util [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.369 244018 DEBUG nova.network.os_vif_util [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.369 244018 DEBUG os_vif [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.371 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.372 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.373 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.376 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.377 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee46268d-74, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.377 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee46268d-74, col_values=(('external_ids', {'iface-id': 'ee46268d-740d-4ff9-8b65-4a81fc61eec3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:87:f1', 'vm-uuid': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.378 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.379 244018 INFO os_vif [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74')#033[00m
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.402 244018 DEBUG nova.objects.instance [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'numa_topology' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:29:50 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4123865339' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.417 244018 DEBUG oslo_concurrency.processutils [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.429 244018 DEBUG nova.compute.provider_tree [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.442 244018 DEBUG nova.scheduler.client.report [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:29:50 np0005629333 kernel: tapee46268d-74: entered promiscuous mode
Feb 25 07:29:50 np0005629333 NetworkManager[49836]: <info>  [1772022590.4676] manager: (tapee46268d-74): new Tun device (/org/freedesktop/NetworkManager/Devices/277)
Feb 25 07:29:50 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:50Z|00630|binding|INFO|Claiming lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 for this chassis.
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.474 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:50 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:50Z|00631|binding|INFO|ee46268d-740d-4ff9-8b65-4a81fc61eec3: Claiming fa:16:3e:ba:87:f1 10.100.0.11
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.493 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:87:f1 10.100.0.11'], port_security=['fa:16:3e:ba:87:f1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '13', 'neutron:security_group_ids': 'c2ca716d-3f7c-490b-954a-bca009559baa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ee46268d-740d-4ff9-8b65-4a81fc61eec3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.494 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ee46268d-740d-4ff9-8b65-4a81fc61eec3 in datapath ce318891-cf3c-4d99-af7c-c01770f38194 bound to our chassis#033[00m
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.496 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce318891-cf3c-4d99-af7c-c01770f38194#033[00m
Feb 25 07:29:50 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:50Z|00632|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 ovn-installed in OVS
Feb 25 07:29:50 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:50Z|00633|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 up in Southbound
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.503 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.503 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[568078eb-5c4a-4d0d-9e23-342cd6caf7a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.504 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce318891-c1 in ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.506 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce318891-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.506 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4e5d9de0-c20e-4a97-beec-71b0c915e33a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.507 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[09a128ca-7de9-4c21-8227-717d7c76313d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.515 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[7c7c8900-9b15-45ce-877d-28e7f6d68112]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:50 np0005629333 systemd-machined[210048]: New machine qemu-82-instance-00000035.
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.524 244018 DEBUG oslo_concurrency.lockutils [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:50 np0005629333 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 07:29:50 np0005629333 systemd[1]: Started Virtual Machine qemu-82-instance-00000035.
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.526 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f0ff1990-dfcb-4585-8555-ff1ba73131cb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.545 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[998e1bc7-2081-4d07-b3cf-3cc1666364d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:50 np0005629333 systemd-udevd[303611]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:29:50 np0005629333 NetworkManager[49836]: <info>  [1772022590.5508] manager: (tapce318891-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/278)
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.550 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a5760220-1436-4ae7-8586-a54a286700c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:50 np0005629333 systemd-udevd[303613]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.553 244018 INFO nova.scheduler.client.report [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Deleted allocations for instance abe229eb-2238-4237-a7f2-83b8476ac1dc#033[00m
Feb 25 07:29:50 np0005629333 NetworkManager[49836]: <info>  [1772022590.5581] device (tapee46268d-74): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:29:50 np0005629333 NetworkManager[49836]: <info>  [1772022590.5587] device (tapee46268d-74): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.577 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[232b8af4-630d-4588-8030-dfe442b1ee4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.579 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0219eee3-c6e0-4421-b0d3-7a9206fea53b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:50 np0005629333 NetworkManager[49836]: <info>  [1772022590.5945] device (tapce318891-c0): carrier: link connected
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.596 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0c1f2adb-a0d8-4764-ae89-daba1f489594]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.607 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc31043-6602-4558-938b-2e68ad4a16f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 190], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456000, 'reachable_time': 24131, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303639, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.614 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0b321ddc-7d29-4a33-80ac-8c5e988ffb7d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:c3a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 456000, 'tstamp': 456000}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303640, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.623 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f8518fcd-4533-45d5-9ffb-4bd118548a5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce318891-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:c3:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 190], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456000, 'reachable_time': 24131, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303641, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.645 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[38825170-758a-4a66-9219-64175a91f6fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.653 244018 DEBUG oslo_concurrency.lockutils [None req-8de78b16-18c2-42c8-822d-992ee94e67e6 bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "abe229eb-2238-4237-a7f2-83b8476ac1dc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.345s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.699 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d857c223-b04b-4aa9-b42b-63f2403ec233]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.700 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.700 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.701 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce318891-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.702 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:50 np0005629333 NetworkManager[49836]: <info>  [1772022590.7029] manager: (tapce318891-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/279)
Feb 25 07:29:50 np0005629333 kernel: tapce318891-c0: entered promiscuous mode
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.705 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.706 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce318891-c0, col_values=(('external_ids', {'iface-id': '3b184c15-8ef4-4e11-bd18-e1253a4ff440'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:50 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:50Z|00634|binding|INFO|Releasing lport 3b184c15-8ef4-4e11-bd18-e1253a4ff440 from this chassis (sb_readonly=0)
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.707 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.713 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.716 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.716 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.717 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[adf1c13f-b68c-4c88-80d6-f00089e2733d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.718 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/ce318891-cf3c-4d99-af7c-c01770f38194.pid.haproxy
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID ce318891-cf3c-4d99-af7c-c01770f38194
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:29:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:50.718 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'env', 'PROCESS_TAG=haproxy-ce318891-cf3c-4d99-af7c-c01770f38194', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce318891-cf3c-4d99-af7c-c01770f38194.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 07:29:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.923 244018 INFO nova.compute.manager [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Rescuing#033[00m
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.924 244018 DEBUG oslo_concurrency.lockutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Acquiring lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.925 244018 DEBUG oslo_concurrency.lockutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Acquired lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:29:50 np0005629333 nova_compute[244014]: 2026-02-25 12:29:50.925 244018 DEBUG nova.network.neutron [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:29:51 np0005629333 nova_compute[244014]: 2026-02-25 12:29:51.042 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 25 07:29:51 np0005629333 nova_compute[244014]: 2026-02-25 12:29:51.043 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022591.0407772, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:29:51 np0005629333 nova_compute[244014]: 2026-02-25 12:29:51.043 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Started (Lifecycle Event)#033[00m
Feb 25 07:29:51 np0005629333 nova_compute[244014]: 2026-02-25 12:29:51.056 244018 DEBUG nova.compute.manager [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:29:51 np0005629333 nova_compute[244014]: 2026-02-25 12:29:51.056 244018 DEBUG nova.objects.instance [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:51 np0005629333 nova_compute[244014]: 2026-02-25 12:29:51.069 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:51 np0005629333 nova_compute[244014]: 2026-02-25 12:29:51.074 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:29:51 np0005629333 nova_compute[244014]: 2026-02-25 12:29:51.078 244018 INFO nova.virt.libvirt.driver [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance running successfully.#033[00m
Feb 25 07:29:51 np0005629333 virtqemud[243235]: argument unsupported: QEMU guest agent is not configured
Feb 25 07:29:51 np0005629333 nova_compute[244014]: 2026-02-25 12:29:51.081 244018 DEBUG nova.virt.libvirt.guest [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Feb 25 07:29:51 np0005629333 nova_compute[244014]: 2026-02-25 12:29:51.081 244018 DEBUG nova.compute.manager [None req-2636a3f4-96b6-4ce8-a8db-c8793a8c0be0 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:51 np0005629333 nova_compute[244014]: 2026-02-25 12:29:51.113 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Feb 25 07:29:51 np0005629333 nova_compute[244014]: 2026-02-25 12:29:51.114 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022591.0556786, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:29:51 np0005629333 nova_compute[244014]: 2026-02-25 12:29:51.114 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:29:51 np0005629333 podman[303717]: 2026-02-25 12:29:51.134725996 +0000 UTC m=+0.098150324 container create 22b26ae90a68dbf270a9ef8007d567987583402d803a22661062c87111ed844d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:29:51 np0005629333 nova_compute[244014]: 2026-02-25 12:29:51.140 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:51 np0005629333 nova_compute[244014]: 2026-02-25 12:29:51.148 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:29:51 np0005629333 podman[303717]: 2026-02-25 12:29:51.073893051 +0000 UTC m=+0.037317389 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:29:51 np0005629333 systemd[1]: Started libpod-conmon-22b26ae90a68dbf270a9ef8007d567987583402d803a22661062c87111ed844d.scope.
Feb 25 07:29:51 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:29:51 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a08bd54308eaf352c167ef973cce0fd266db351b043b8cd64c7edc8d50209f31/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:29:51 np0005629333 podman[303717]: 2026-02-25 12:29:51.230647116 +0000 UTC m=+0.194071464 container init 22b26ae90a68dbf270a9ef8007d567987583402d803a22661062c87111ed844d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:29:51 np0005629333 podman[303717]: 2026-02-25 12:29:51.235446272 +0000 UTC m=+0.198870590 container start 22b26ae90a68dbf270a9ef8007d567987583402d803a22661062c87111ed844d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223)
Feb 25 07:29:51 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[303731]: [NOTICE]   (303735) : New worker (303737) forked
Feb 25 07:29:51 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[303731]: [NOTICE]   (303735) : Loading success.
Feb 25 07:29:51 np0005629333 nova_compute[244014]: 2026-02-25 12:29:51.807 244018 DEBUG nova.compute.manager [req-7422836f-8e8a-417b-89a3-099167eb47b0 req-3f64922c-98c7-4c95-8f70-426bc8caa780 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:51 np0005629333 nova_compute[244014]: 2026-02-25 12:29:51.807 244018 DEBUG oslo_concurrency.lockutils [req-7422836f-8e8a-417b-89a3-099167eb47b0 req-3f64922c-98c7-4c95-8f70-426bc8caa780 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:51 np0005629333 nova_compute[244014]: 2026-02-25 12:29:51.808 244018 DEBUG oslo_concurrency.lockutils [req-7422836f-8e8a-417b-89a3-099167eb47b0 req-3f64922c-98c7-4c95-8f70-426bc8caa780 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:51 np0005629333 nova_compute[244014]: 2026-02-25 12:29:51.809 244018 DEBUG oslo_concurrency.lockutils [req-7422836f-8e8a-417b-89a3-099167eb47b0 req-3f64922c-98c7-4c95-8f70-426bc8caa780 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:51 np0005629333 nova_compute[244014]: 2026-02-25 12:29:51.809 244018 DEBUG nova.compute.manager [req-7422836f-8e8a-417b-89a3-099167eb47b0 req-3f64922c-98c7-4c95-8f70-426bc8caa780 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:29:51 np0005629333 nova_compute[244014]: 2026-02-25 12:29:51.809 244018 WARNING nova.compute.manager [req-7422836f-8e8a-417b-89a3-099167eb47b0 req-3f64922c-98c7-4c95-8f70-426bc8caa780 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:29:51 np0005629333 nova_compute[244014]: 2026-02-25 12:29:51.809 244018 DEBUG nova.compute.manager [req-7422836f-8e8a-417b-89a3-099167eb47b0 req-3f64922c-98c7-4c95-8f70-426bc8caa780 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:51 np0005629333 nova_compute[244014]: 2026-02-25 12:29:51.810 244018 DEBUG oslo_concurrency.lockutils [req-7422836f-8e8a-417b-89a3-099167eb47b0 req-3f64922c-98c7-4c95-8f70-426bc8caa780 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:51 np0005629333 nova_compute[244014]: 2026-02-25 12:29:51.810 244018 DEBUG oslo_concurrency.lockutils [req-7422836f-8e8a-417b-89a3-099167eb47b0 req-3f64922c-98c7-4c95-8f70-426bc8caa780 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:51 np0005629333 nova_compute[244014]: 2026-02-25 12:29:51.810 244018 DEBUG oslo_concurrency.lockutils [req-7422836f-8e8a-417b-89a3-099167eb47b0 req-3f64922c-98c7-4c95-8f70-426bc8caa780 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:51 np0005629333 nova_compute[244014]: 2026-02-25 12:29:51.810 244018 DEBUG nova.compute.manager [req-7422836f-8e8a-417b-89a3-099167eb47b0 req-3f64922c-98c7-4c95-8f70-426bc8caa780 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:29:51 np0005629333 nova_compute[244014]: 2026-02-25 12:29:51.811 244018 WARNING nova.compute.manager [req-7422836f-8e8a-417b-89a3-099167eb47b0 req-3f64922c-98c7-4c95-8f70-426bc8caa780 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:29:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 509 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 5.9 MiB/s wr, 259 op/s
Feb 25 07:29:52 np0005629333 kernel: tap27c957cd-d6 (unregistering): left promiscuous mode
Feb 25 07:29:52 np0005629333 NetworkManager[49836]: <info>  [1772022592.3664] device (tap27c957cd-d6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:29:52 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:52Z|00635|binding|INFO|Releasing lport 27c957cd-d68f-48d8-b2e1-170275200ed3 from this chassis (sb_readonly=0)
Feb 25 07:29:52 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:52Z|00636|binding|INFO|Setting lport 27c957cd-d68f-48d8-b2e1-170275200ed3 down in Southbound
Feb 25 07:29:52 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:52Z|00637|binding|INFO|Removing iface tap27c957cd-d6 ovn-installed in OVS
Feb 25 07:29:52 np0005629333 nova_compute[244014]: 2026-02-25 12:29:52.376 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:52.383 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:b7:62 10.100.0.4'], port_security=['fa:16:3e:bf:b7:62 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f7b18575-d1fc-423f-a596-8ca6d8ed08fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4445209a7384565a93895032b4f077e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8918a64f-99f7-4eb3-a626-bacb054cff5c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28589015-6a4f-416a-a72c-4ed4061d2d31, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=27c957cd-d68f-48d8-b2e1-170275200ed3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:29:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:52.386 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 27c957cd-d68f-48d8-b2e1-170275200ed3 in datapath 85b56c79-01b6-47e7-ab3b-02e44acca3d3 unbound from our chassis#033[00m
Feb 25 07:29:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:52.390 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85b56c79-01b6-47e7-ab3b-02e44acca3d3#033[00m
Feb 25 07:29:52 np0005629333 nova_compute[244014]: 2026-02-25 12:29:52.394 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:52.400 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[550c85b9-45e1-45ec-a7e7-f35c8670b583]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:52 np0005629333 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000044.scope: Deactivated successfully.
Feb 25 07:29:52 np0005629333 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000044.scope: Consumed 11.379s CPU time.
Feb 25 07:29:52 np0005629333 systemd-machined[210048]: Machine qemu-78-instance-00000044 terminated.
Feb 25 07:29:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:52.426 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[82bd3f4b-c900-4e39-b218-f4529ae6baa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:52.430 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7faf8b54-d30a-4fc1-9074-b85f5aadac3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:52.452 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[bf1a8f9f-ca60-401b-8ba0-2783ce633774]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:52.466 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c2dd83be-f5e5-40b0-8ebe-eb7136489e79]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85b56c79-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:20:24:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453793, 'reachable_time': 15353, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303755, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:52.476 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a04de21d-758d-4c55-8787-7d3626f19586]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap85b56c79-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453803, 'tstamp': 453803}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303756, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap85b56c79-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453805, 'tstamp': 453805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303756, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
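The two privsep replies above are pyroute2 netlink messages serialized back to the agent: an RTM_NEWLINK dump for tap85b56c79-01, then RTM_NEWADDR records showing both the subnet address (10.100.0.2/28) and the 169.254.169.254/32 metadata address bound inside the ovnmeta namespace. A minimal sketch of reproducing that dump directly with pyroute2, assuming root and that the namespace still exists:

```python
from pyroute2 import NetNS

ns_name = 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3'
with NetNS(ns_name) as ns:
    # RTM_NEWLINK-style dump for the tap device (cf. the first reply)
    idx = ns.link_lookup(ifname='tap85b56c79-01')[0]
    for msg in ns.get_links(idx):
        print(msg.get_attr('IFLA_IFNAME'),
              msg.get_attr('IFLA_OPERSTATE'),
              msg.get_attr('IFLA_ADDRESS'))
    # RTM_NEWADDR-style dump (cf. the second reply: the subnet address
    # plus the metadata address on the same interface)
    for msg in ns.get_addr(index=idx):
        print(msg.get_attr('IFA_ADDRESS'), '/', msg['prefixlen'])
```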
Feb 25 07:29:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:52.478 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85b56c79-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:52 np0005629333 nova_compute[244014]: 2026-02-25 12:29:52.481 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:52 np0005629333 nova_compute[244014]: 2026-02-25 12:29:52.485 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:52.486 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85b56c79-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:52.487 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:29:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:52.489 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85b56c79-00, col_values=(('external_ids', {'iface-id': 'ad3b1754-7f6c-4114-8056-d68f2e9a25d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:52.490 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
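The three ovsdbapp transactions above move the metadata tap off br-ex, ensure it is on br-int, and stamp its iface-id; the last two report "Transaction caused no change" because the port was already wired. A sketch of the equivalent direct ovsdbapp calls, assuming the default local ovsdb socket path:

```python
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

conn = connection.Connection(
    idl=connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                        'Open_vSwitch'),
    timeout=10)
api = impl_idl.OvsdbIdl(conn)

with api.transaction(check_error=True) as txn:
    txn.add(api.del_port('tap85b56c79-00', bridge='br-ex', if_exists=True))
    txn.add(api.add_port('br-int', 'tap85b56c79-00', may_exist=True))
    txn.add(api.db_set(
        'Interface', 'tap85b56c79-00',
        ('external_ids',
         {'iface-id': 'ad3b1754-7f6c-4114-8056-d68f2e9a25d5'})))
```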
Feb 25 07:29:52 np0005629333 nova_compute[244014]: 2026-02-25 12:29:52.681 244018 DEBUG nova.network.neutron [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Updating instance_info_cache with network_info: [{"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:29:52 np0005629333 nova_compute[244014]: 2026-02-25 12:29:52.732 244018 DEBUG oslo_concurrency.lockutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Releasing lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:29:52 np0005629333 nova_compute[244014]: 2026-02-25 12:29:52.776 244018 DEBUG nova.compute.manager [req-0f201201-1e35-4a3d-a285-00e4fd448597 req-6d974db4-59cd-418e-bf44-19789559d001 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received event network-vif-unplugged-27c957cd-d68f-48d8-b2e1-170275200ed3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:52 np0005629333 nova_compute[244014]: 2026-02-25 12:29:52.776 244018 DEBUG oslo_concurrency.lockutils [req-0f201201-1e35-4a3d-a285-00e4fd448597 req-6d974db4-59cd-418e-bf44-19789559d001 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:52 np0005629333 nova_compute[244014]: 2026-02-25 12:29:52.777 244018 DEBUG oslo_concurrency.lockutils [req-0f201201-1e35-4a3d-a285-00e4fd448597 req-6d974db4-59cd-418e-bf44-19789559d001 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:52 np0005629333 nova_compute[244014]: 2026-02-25 12:29:52.777 244018 DEBUG oslo_concurrency.lockutils [req-0f201201-1e35-4a3d-a285-00e4fd448597 req-6d974db4-59cd-418e-bf44-19789559d001 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:52 np0005629333 nova_compute[244014]: 2026-02-25 12:29:52.777 244018 DEBUG nova.compute.manager [req-0f201201-1e35-4a3d-a285-00e4fd448597 req-6d974db4-59cd-418e-bf44-19789559d001 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] No waiting events found dispatching network-vif-unplugged-27c957cd-d68f-48d8-b2e1-170275200ed3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:29:52 np0005629333 nova_compute[244014]: 2026-02-25 12:29:52.778 244018 WARNING nova.compute.manager [req-0f201201-1e35-4a3d-a285-00e4fd448597 req-6d974db4-59cd-418e-bf44-19789559d001 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received unexpected event network-vif-unplugged-27c957cd-d68f-48d8-b2e1-170275200ed3 for instance with vm_state active and task_state rescuing.#033[00m
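The Acquiring/acquired/released triplet above is oslo.concurrency's named-lock helper serializing event handling per instance; with no waiter registered, the pop falls through and the vif-unplugged event is logged as unexpected. A minimal sketch of the lock pattern (names illustrative, not nova's actual code):

```python
from oslo_concurrency import lockutils

# Serializes all event pops for one instance on a process-local named lock,
# producing the Acquiring/acquired/released DEBUG lines seen above.
@lockutils.synchronized('f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events')
def _pop_event():
    return None  # no waiting event registered -> caller logs "unexpected"

_pop_event()
```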
Feb 25 07:29:52 np0005629333 nova_compute[244014]: 2026-02-25 12:29:52.910 244018 INFO nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Instance shutdown successfully after 14 seconds.#033[00m
Feb 25 07:29:52 np0005629333 nova_compute[244014]: 2026-02-25 12:29:52.916 244018 INFO nova.virt.libvirt.driver [-] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Instance destroyed successfully.#033[00m
Feb 25 07:29:52 np0005629333 nova_compute[244014]: 2026-02-25 12:29:52.917 244018 DEBUG nova.objects.instance [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'numa_topology' on Instance uuid f7b18575-d1fc-423f-a596-8ca6d8ed08fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:52 np0005629333 nova_compute[244014]: 2026-02-25 12:29:52.935 244018 INFO nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Attempting rescue#033[00m
Feb 25 07:29:52 np0005629333 nova_compute[244014]: 2026-02-25 12:29:52.936 244018 DEBUG nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Feb 25 07:29:52 np0005629333 nova_compute[244014]: 2026-02-25 12:29:52.941 244018 DEBUG nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Feb 25 07:29:52 np0005629333 nova_compute[244014]: 2026-02-25 12:29:52.941 244018 INFO nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Creating image(s)#033[00m
Feb 25 07:29:52 np0005629333 nova_compute[244014]: 2026-02-25 12:29:52.980 244018 DEBUG nova.storage.rbd_utils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:29:52 np0005629333 nova_compute[244014]: 2026-02-25 12:29:52.987 244018 DEBUG nova.objects.instance [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'trusted_certs' on Instance uuid f7b18575-d1fc-423f-a596-8ca6d8ed08fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:52 np0005629333 nova_compute[244014]: 2026-02-25 12:29:52.994 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.049 244018 DEBUG nova.storage.rbd_utils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.083 244018 DEBUG nova.storage.rbd_utils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.088 244018 DEBUG oslo_concurrency.processutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.132 244018 DEBUG nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.175 244018 DEBUG oslo_concurrency.processutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
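The qemu-img probe above is wrapped in oslo.concurrency's prlimit helper, capping the child at 1 GiB of address space and 30 s of CPU so a corrupt or hostile image cannot wedge the compute process. A sketch of the same guarded call, with the path taken from the log:

```python
from oslo_concurrency import processutils

# Expands to the logged command line:
#   python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 --
#       env LC_ALL=C LANG=C qemu-img info <base> --force-share --output=json
out, err = processutils.execute(
    'env', 'LC_ALL=C', 'LANG=C',
    'qemu-img', 'info',
    '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
    '--force-share', '--output=json',
    prlimit=processutils.ProcessLimits(address_space=1073741824,
                                       cpu_time=30))
```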
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.176 244018 DEBUG oslo_concurrency.lockutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.177 244018 DEBUG oslo_concurrency.lockutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.177 244018 DEBUG oslo_concurrency.lockutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.213 244018 DEBUG nova.storage.rbd_utils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.220 244018 DEBUG oslo_concurrency.processutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.417 244018 DEBUG oslo_concurrency.lockutils [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.418 244018 DEBUG oslo_concurrency.lockutils [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.418 244018 DEBUG oslo_concurrency.lockutils [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.418 244018 DEBUG oslo_concurrency.lockutils [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.419 244018 DEBUG oslo_concurrency.lockutils [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.420 244018 INFO nova.compute.manager [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Terminating instance#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.421 244018 DEBUG nova.compute.manager [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:29:53 np0005629333 kernel: tapee46268d-74 (unregistering): left promiscuous mode
Feb 25 07:29:53 np0005629333 NetworkManager[49836]: <info>  [1772022593.4934] device (tapee46268d-74): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.504 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:53 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:53Z|00638|binding|INFO|Releasing lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 from this chassis (sb_readonly=0)
Feb 25 07:29:53 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:53Z|00639|binding|INFO|Setting lport ee46268d-740d-4ff9-8b65-4a81fc61eec3 down in Southbound
Feb 25 07:29:53 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:53Z|00640|binding|INFO|Removing iface tapee46268d-74 ovn-installed in OVS
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.519 244018 DEBUG oslo_concurrency.processutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
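The import that just returned copies the cached base image into the Ceph "vms" pool as the instance's _disk.rescue volume. The same command, runnable stand-alone with plain subprocess (nova itself goes through processutils.execute):

```python
import subprocess

# Push the cached base image into RBD as the rescue disk for the instance.
subprocess.run(
    ['rbd', 'import', '--pool', 'vms',
     '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
     'f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.rescue',
     '--image-format=2', '--id', 'openstack',
     '--conf', '/etc/ceph/ceph.conf'],
    check=True)
```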
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.519 244018 DEBUG nova.objects.instance [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'migration_context' on Instance uuid f7b18575-d1fc-423f-a596-8ca6d8ed08fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.521 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:53.521 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:87:f1 10.100.0.11'], port_security=['fa:16:3e:ba:87:f1 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce318891-cf3c-4d99-af7c-c01770f38194', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56700581ea88438ba482d90bc702ced3', 'neutron:revision_number': '14', 'neutron:security_group_ids': 'c2ca716d-3f7c-490b-954a-bca009559baa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0958bb9f-eb63-44ee-b380-21c56b170304, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ee46268d-740d-4ff9-8b65-4a81fc61eec3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:29:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:53.524 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ee46268d-740d-4ff9-8b65-4a81fc61eec3 in datapath ce318891-cf3c-4d99-af7c-c01770f38194 unbound from our chassis#033[00m
Feb 25 07:29:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:53.526 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce318891-cf3c-4d99-af7c-c01770f38194, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:29:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:53.527 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[13184264-f485-440d-9136-bff0fd6e99e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:53.528 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 namespace which is not needed anymore#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.539 244018 DEBUG nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.539 244018 DEBUG nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Start _get_guest_xml network_info=[{"id": "27c957cd-d68f-48d8-b2e1-170275200ed3", "address": "fa:16:3e:bf:b7:62", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "vif_mac": "fa:16:3e:bf:b7:62"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27c957cd-d6", "ovs_interfaceid": "27c957cd-d68f-48d8-b2e1-170275200ed3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.540 244018 DEBUG nova.objects.instance [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'resources' on Instance uuid f7b18575-d1fc-423f-a596-8ca6d8ed08fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:53 np0005629333 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d00000035.scope: Deactivated successfully.
Feb 25 07:29:53 np0005629333 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d00000035.scope: Consumed 2.837s CPU time.
Feb 25 07:29:53 np0005629333 systemd-machined[210048]: Machine qemu-82-instance-00000035 terminated.
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.562 244018 WARNING nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.572 244018 DEBUG nova.virt.libvirt.host [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.573 244018 DEBUG nova.virt.libvirt.host [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.577 244018 DEBUG nova.virt.libvirt.host [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.577 244018 DEBUG nova.virt.libvirt.host [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.577 244018 DEBUG nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.578 244018 DEBUG nova.virt.hardware [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.578 244018 DEBUG nova.virt.hardware [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.578 244018 DEBUG nova.virt.hardware [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.579 244018 DEBUG nova.virt.hardware [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.579 244018 DEBUG nova.virt.hardware [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.579 244018 DEBUG nova.virt.hardware [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.579 244018 DEBUG nova.virt.hardware [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.580 244018 DEBUG nova.virt.hardware [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.580 244018 DEBUG nova.virt.hardware [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.580 244018 DEBUG nova.virt.hardware [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.581 244018 DEBUG nova.virt.hardware [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
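The topology walk above starts from empty preferences (0:0:0) and effectively unbounded limits (65536 each), then enumerates every (sockets, cores, threads) factorization of the vCPU count; for a 1-vCPU m1.nano that leaves only 1:1:1. A deliberately simplified sketch of that search, not nova's exact implementation:

```python
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    """Yield (sockets, cores, threads) splits whose product is vcpus."""
    for sockets in range(1, min(vcpus, max_sockets) + 1):
        for cores in range(1, min(vcpus, max_cores) + 1):
            for threads in range(1, min(vcpus, max_threads) + 1):
                if sockets * cores * threads == vcpus:
                    yield (sockets, cores, threads)

print(list(possible_topologies(1)))   # [(1, 1, 1)], as logged above
```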
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.581 244018 DEBUG nova.objects.instance [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'vcpu_model' on Instance uuid f7b18575-d1fc-423f-a596-8ca6d8ed08fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.601 244018 DEBUG oslo_concurrency.processutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.658 244018 INFO nova.virt.libvirt.driver [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Instance destroyed successfully.#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.658 244018 DEBUG nova.objects.instance [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lazy-loading 'resources' on Instance uuid 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.676 244018 DEBUG nova.virt.libvirt.vif [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:26:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1971346294',display_name='tempest-ServerActionsTestJSON-server-1971346294',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1971346294',id=53,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAxL27ottNuqfXH6nySrIpiq52DbdIwstuJNvjKVA2mjXoBhB8Hf28a6S+Sox62IJx/Akv2MX8rF28TRT28AB2t2jhcJkKsJ3yIrvpBvNuGbxcLEouYwPlp1/Hru0erD1g==',key_name='tempest-keypair-1811376271',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:26:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56700581ea88438ba482d90bc702ced3',ramdisk_id='',reservation_id='r-d1p0icxu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-436476112',owner_user_name='tempest-ServerActionsTestJSON-436476112-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:29:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1f8bbe7db4454108aca005daa72d5c22',uuid=8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, 
"connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.677 244018 DEBUG nova.network.os_vif_util [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converting VIF {"id": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "address": "fa:16:3e:ba:87:f1", "network": {"id": "ce318891-cf3c-4d99-af7c-c01770f38194", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-721382231-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56700581ea88438ba482d90bc702ced3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee46268d-74", "ovs_interfaceid": "ee46268d-740d-4ff9-8b65-4a81fc61eec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.677 244018 DEBUG nova.network.os_vif_util [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.678 244018 DEBUG os_vif [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.680 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.680 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee46268d-74, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.682 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.684 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.686 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:53 np0005629333 nova_compute[244014]: 2026-02-25 12:29:53.689 244018 INFO os_vif [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:87:f1,bridge_name='br-int',has_traffic_filtering=True,id=ee46268d-740d-4ff9-8b65-4a81fc61eec3,network=Network(ce318891-cf3c-4d99-af7c-c01770f38194),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee46268d-74')#033[00m
Feb 25 07:29:53 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[303731]: [NOTICE]   (303735) : haproxy version is 2.8.14-c23fe91
Feb 25 07:29:53 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[303731]: [NOTICE]   (303735) : path to executable is /usr/sbin/haproxy
Feb 25 07:29:53 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[303731]: [WARNING]  (303735) : Exiting Master process...
Feb 25 07:29:53 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[303731]: [ALERT]    (303735) : Current worker (303737) exited with code 143 (Terminated)
Feb 25 07:29:53 np0005629333 neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194[303731]: [WARNING]  (303735) : All workers exited. Exiting... (0)
Feb 25 07:29:53 np0005629333 systemd[1]: libpod-22b26ae90a68dbf270a9ef8007d567987583402d803a22661062c87111ed844d.scope: Deactivated successfully.
Feb 25 07:29:53 np0005629333 podman[303888]: 2026-02-25 12:29:53.704062235 +0000 UTC m=+0.074110812 container died 22b26ae90a68dbf270a9ef8007d567987583402d803a22661062c87111ed844d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:29:53 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-22b26ae90a68dbf270a9ef8007d567987583402d803a22661062c87111ed844d-userdata-shm.mount: Deactivated successfully.
Feb 25 07:29:53 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a08bd54308eaf352c167ef973cce0fd266db351b043b8cd64c7edc8d50209f31-merged.mount: Deactivated successfully.
Feb 25 07:29:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 519 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.3 MiB/s wr, 265 op/s
Feb 25 07:29:53 np0005629333 podman[303888]: 2026-02-25 12:29:53.934905851 +0000 UTC m=+0.304954448 container cleanup 22b26ae90a68dbf270a9ef8007d567987583402d803a22661062c87111ed844d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 25 07:29:53 np0005629333 systemd[1]: libpod-conmon-22b26ae90a68dbf270a9ef8007d567987583402d803a22661062c87111ed844d.scope: Deactivated successfully.
Feb 25 07:29:54 np0005629333 nova_compute[244014]: 2026-02-25 12:29:54.004 244018 DEBUG nova.compute.manager [req-380379e5-6a4d-41f1-9f91-5e1711e1b3b7 req-f34dca80-c8d9-4f0d-9b8e-3d9fa3ac0a4d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:54 np0005629333 nova_compute[244014]: 2026-02-25 12:29:54.005 244018 DEBUG oslo_concurrency.lockutils [req-380379e5-6a4d-41f1-9f91-5e1711e1b3b7 req-f34dca80-c8d9-4f0d-9b8e-3d9fa3ac0a4d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:54 np0005629333 nova_compute[244014]: 2026-02-25 12:29:54.005 244018 DEBUG oslo_concurrency.lockutils [req-380379e5-6a4d-41f1-9f91-5e1711e1b3b7 req-f34dca80-c8d9-4f0d-9b8e-3d9fa3ac0a4d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:54 np0005629333 nova_compute[244014]: 2026-02-25 12:29:54.006 244018 DEBUG oslo_concurrency.lockutils [req-380379e5-6a4d-41f1-9f91-5e1711e1b3b7 req-f34dca80-c8d9-4f0d-9b8e-3d9fa3ac0a4d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:54 np0005629333 nova_compute[244014]: 2026-02-25 12:29:54.006 244018 DEBUG nova.compute.manager [req-380379e5-6a4d-41f1-9f91-5e1711e1b3b7 req-f34dca80-c8d9-4f0d-9b8e-3d9fa3ac0a4d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:29:54 np0005629333 nova_compute[244014]: 2026-02-25 12:29:54.006 244018 DEBUG nova.compute.manager [req-380379e5-6a4d-41f1-9f91-5e1711e1b3b7 req-f34dca80-c8d9-4f0d-9b8e-3d9fa3ac0a4d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-unplugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:29:54 np0005629333 podman[303964]: 2026-02-25 12:29:54.104293045 +0000 UTC m=+0.137748227 container remove 22b26ae90a68dbf270a9ef8007d567987583402d803a22661062c87111ed844d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 07:29:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:54.113 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[99b0e8ab-75c4-4303-8831-37129f44272a]: (4, ('Wed Feb 25 12:29:53 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 (22b26ae90a68dbf270a9ef8007d567987583402d803a22661062c87111ed844d)\n22b26ae90a68dbf270a9ef8007d567987583402d803a22661062c87111ed844d\nWed Feb 25 12:29:53 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 (22b26ae90a68dbf270a9ef8007d567987583402d803a22661062c87111ed844d)\n22b26ae90a68dbf270a9ef8007d567987583402d803a22661062c87111ed844d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
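The privsep reply above carries the stdout of the teardown wrapper: stop, then delete, the per-network haproxy container. The equivalent podman calls, sketched with plain subprocess (the agent drives this through a privileged helper script):

```python
import subprocess

# Stop and remove the per-network metadata haproxy container, matching the
# "Stopping container ... / Deleting container ..." wrapper output above.
name = 'neutron-haproxy-ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194'
subprocess.run(['podman', 'stop', name], check=True)
subprocess.run(['podman', 'rm', name], check=True)
```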
Feb 25 07:29:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:54.116 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f454ebd9-daec-4b28-b241-3162bde8485c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:54.117 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce318891-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:54 np0005629333 nova_compute[244014]: 2026-02-25 12:29:54.122 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:54 np0005629333 kernel: tapce318891-c0: left promiscuous mode
Feb 25 07:29:54 np0005629333 nova_compute[244014]: 2026-02-25 12:29:54.147 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:54.150 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[12ee1714-ce11-4133-9df8-bd82c69cb989]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/358877739' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:29:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:54.164 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3b2a99f8-922d-49e1-88ee-1b088fd9acd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:54.167 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[76d4e0a4-60cd-47b3-b227-6e9fb3add5fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:54 np0005629333 nova_compute[244014]: 2026-02-25 12:29:54.179 244018 DEBUG oslo_concurrency.processutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:29:54 np0005629333 nova_compute[244014]: 2026-02-25 12:29:54.180 244018 DEBUG oslo_concurrency.processutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:54.183 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5d69bcaa-589e-43eb-bc77-2540a8692afa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455995, 'reachable_time': 28972, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303985, 'error': None, 'target': 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:54 np0005629333 systemd[1]: run-netns-ovnmeta\x2dce318891\x2dcf3c\x2d4d99\x2daf7c\x2dc01770f38194.mount: Deactivated successfully.
Feb 25 07:29:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:54.189 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:29:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:54.189 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[e55f8bbc-bb9a-4b8d-a89f-de2ca130b49a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
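
The privsep reply above is a raw RTM_NEWLINK netlink dump of the loopback device inside the ovnmeta namespace, followed by the namespace teardown logged by remove_netns. A minimal sketch of the same two operations with pyroute2, the library neutron's privileged ip_lib wraps; the namespace name is copied from the log, and the snippet assumes the namespace still exists and the caller has the required privileges:

    # Hedged sketch: dump link attributes inside a network namespace with
    # pyroute2, then delete the namespace, mirroring the two steps above.
    from pyroute2 import NetNS, netns

    NS = 'ovnmeta-ce318891-cf3c-4d99-af7c-c01770f38194'  # name from the log

    with NetNS(NS) as ns:                     # netlink socket inside the netns
        for link in ns.get_links():           # one RTM_NEWLINK message per iface
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_MTU'),
                  link.get_attr('IFLA_OPERSTATE'))  # e.g. "lo 65536 UNKNOWN"

    netns.remove(NS)                          # what remove_netns does above
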
Feb 25 07:29:54 np0005629333 nova_compute[244014]: 2026-02-25 12:29:54.352 244018 INFO nova.virt.libvirt.driver [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Deleting instance files /var/lib/nova/instances/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_del#033[00m
Feb 25 07:29:54 np0005629333 nova_compute[244014]: 2026-02-25 12:29:54.353 244018 INFO nova.virt.libvirt.driver [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Deletion of /var/lib/nova/instances/8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b_del complete#033[00m
Feb 25 07:29:54 np0005629333 nova_compute[244014]: 2026-02-25 12:29:54.442 244018 INFO nova.compute.manager [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Took 1.02 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:29:54 np0005629333 nova_compute[244014]: 2026-02-25 12:29:54.443 244018 DEBUG oslo.service.loopingcall [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:29:54 np0005629333 nova_compute[244014]: 2026-02-25 12:29:54.443 244018 DEBUG nova.compute.manager [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:29:54 np0005629333 nova_compute[244014]: 2026-02-25 12:29:54.443 244018 DEBUG nova.network.neutron [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.514099) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022594514147, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2484, "num_deletes": 515, "total_data_size": 3274670, "memory_usage": 3327520, "flush_reason": "Manual Compaction"}
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022594533810, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3211321, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28139, "largest_seqno": 30622, "table_properties": {"data_size": 3200577, "index_size": 6406, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3269, "raw_key_size": 25751, "raw_average_key_size": 19, "raw_value_size": 3176708, "raw_average_value_size": 2453, "num_data_blocks": 280, "num_entries": 1295, "num_filter_entries": 1295, "num_deletions": 515, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772022416, "oldest_key_time": 1772022416, "file_creation_time": 1772022594, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 19765 microseconds, and 8348 cpu microseconds.
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.533863) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3211321 bytes OK
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.533891) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.536247) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.536269) EVENT_LOG_v1 {"time_micros": 1772022594536262, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.536291) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3263151, prev total WAL file size 3263151, number of live WAL files 2.
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.537315) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3136KB)], [62(8327KB)]
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022594537373, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 11738385, "oldest_snapshot_seqno": -1}
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5642 keys, 9910825 bytes, temperature: kUnknown
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022594594911, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 9910825, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9869660, "index_size": 25951, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14149, "raw_key_size": 141476, "raw_average_key_size": 25, "raw_value_size": 9764991, "raw_average_value_size": 1730, "num_data_blocks": 1058, "num_entries": 5642, "num_filter_entries": 5642, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772022594, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.595172) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 9910825 bytes
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.596523) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.7 rd, 172.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.1 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 6686, records dropped: 1044 output_compression: NoCompression
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.596553) EVENT_LOG_v1 {"time_micros": 1772022594596539, "job": 34, "event": "compaction_finished", "compaction_time_micros": 57622, "compaction_time_cpu_micros": 28449, "output_level": 6, "num_output_files": 1, "total_output_size": 9910825, "num_input_records": 6686, "num_output_records": 5642, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022594597114, "job": 34, "event": "table_file_deletion", "file_number": 64}
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022594598169, "job": 34, "event": "table_file_deletion", "file_number": 62}
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.537239) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.598207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.598214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.598217) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.598220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:29:54.598223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:29:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3918405371' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:29:54 np0005629333 nova_compute[244014]: 2026-02-25 12:29:54.747 244018 DEBUG oslo_concurrency.processutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
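
The Running cmd / CMD returned pair above is nova's RBD driver fetching the monitor map by shelling out to the ceph CLI through oslo_concurrency.processutils. A minimal sketch of the same round trip, assuming the ceph CLI and the client.openstack keyring seen in the log are available on the host:

    # Sketch of the "ceph mon dump" call nova issues above; not nova's exact
    # code, just the same oslo_concurrency.processutils round trip.
    import json

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

    monmap = json.loads(out)
    print(monmap['epoch'], [m['name'] for m in monmap['mons']])
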
Feb 25 07:29:54 np0005629333 nova_compute[244014]: 2026-02-25 12:29:54.749 244018 DEBUG oslo_concurrency.processutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:54 np0005629333 nova_compute[244014]: 2026-02-25 12:29:54.794 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:55.011 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:55.012 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:55.013 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
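
The acquiring/acquired/released triplet above is oslo's named-lock decorator serializing the agent's child-process check. A sketch of the pattern; the function body here is a stand-in, not neutron's actual code:

    # The lock pattern behind the three DEBUG lines above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        # runs with the "_check_child_processes" lock held; the acquired/
        # released log lines bracket exactly this call
        pass

    _check_child_processes()
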
Feb 25 07:29:55 np0005629333 nova_compute[244014]: 2026-02-25 12:29:55.128 244018 DEBUG nova.compute.manager [req-04ee81e3-49be-4581-b962-48284c9abd3c req-ba9c3cf2-4d29-46d8-9c41-f6d93ef09069 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received event network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:55 np0005629333 nova_compute[244014]: 2026-02-25 12:29:55.129 244018 DEBUG oslo_concurrency.lockutils [req-04ee81e3-49be-4581-b962-48284c9abd3c req-ba9c3cf2-4d29-46d8-9c41-f6d93ef09069 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:55 np0005629333 nova_compute[244014]: 2026-02-25 12:29:55.129 244018 DEBUG oslo_concurrency.lockutils [req-04ee81e3-49be-4581-b962-48284c9abd3c req-ba9c3cf2-4d29-46d8-9c41-f6d93ef09069 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:55 np0005629333 nova_compute[244014]: 2026-02-25 12:29:55.130 244018 DEBUG oslo_concurrency.lockutils [req-04ee81e3-49be-4581-b962-48284c9abd3c req-ba9c3cf2-4d29-46d8-9c41-f6d93ef09069 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:55 np0005629333 nova_compute[244014]: 2026-02-25 12:29:55.130 244018 DEBUG nova.compute.manager [req-04ee81e3-49be-4581-b962-48284c9abd3c req-ba9c3cf2-4d29-46d8-9c41-f6d93ef09069 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] No waiting events found dispatching network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:29:55 np0005629333 nova_compute[244014]: 2026-02-25 12:29:55.131 244018 WARNING nova.compute.manager [req-04ee81e3-49be-4581-b962-48284c9abd3c req-ba9c3cf2-4d29-46d8-9c41-f6d93ef09069 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received unexpected event network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 for instance with vm_state active and task_state rescuing.#033[00m
Feb 25 07:29:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:29:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1776476394' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:29:55 np0005629333 nova_compute[244014]: 2026-02-25 12:29:55.399 244018 DEBUG oslo_concurrency.processutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:29:55 np0005629333 nova_compute[244014]: 2026-02-25 12:29:55.401 244018 DEBUG nova.virt.libvirt.vif [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:29:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1779266838',display_name='tempest-ServerRescueNegativeTestJSON-server-1779266838',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1779266838',id=68,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:29:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c4445209a7384565a93895032b4f077e',ramdisk_id='',reservation_id='r-ut2etugv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-75220162',owner_user_name='tempest-ServerRescueNegativeTestJSON-75220162-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:29:33Z,user_data=None,user_id='3aef97e1c87341848423edc65828540c',uuid=f7b18575-d1fc-423f-a596-8ca6d8ed08fa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "27c957cd-d68f-48d8-b2e1-170275200ed3", "address": "fa:16:3e:bf:b7:62", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "vif_mac": "fa:16:3e:bf:b7:62"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27c957cd-d6", "ovs_interfaceid": "27c957cd-d68f-48d8-b2e1-170275200ed3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:29:55 np0005629333 nova_compute[244014]: 2026-02-25 12:29:55.402 244018 DEBUG nova.network.os_vif_util [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Converting VIF {"id": "27c957cd-d68f-48d8-b2e1-170275200ed3", "address": "fa:16:3e:bf:b7:62", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "vif_mac": "fa:16:3e:bf:b7:62"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27c957cd-d6", "ovs_interfaceid": "27c957cd-d68f-48d8-b2e1-170275200ed3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:29:55 np0005629333 nova_compute[244014]: 2026-02-25 12:29:55.403 244018 DEBUG nova.network.os_vif_util [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bf:b7:62,bridge_name='br-int',has_traffic_filtering=True,id=27c957cd-d68f-48d8-b2e1-170275200ed3,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27c957cd-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:29:55 np0005629333 nova_compute[244014]: 2026-02-25 12:29:55.405 244018 DEBUG nova.objects.instance [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'pci_devices' on Instance uuid f7b18575-d1fc-423f-a596-8ca6d8ed08fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:55 np0005629333 nova_compute[244014]: 2026-02-25 12:29:55.432 244018 DEBUG nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:29:55 np0005629333 nova_compute[244014]:  <uuid>f7b18575-d1fc-423f-a596-8ca6d8ed08fa</uuid>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:  <name>instance-00000044</name>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-1779266838</nova:name>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:29:53</nova:creationTime>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:29:55 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:        <nova:user uuid="3aef97e1c87341848423edc65828540c">tempest-ServerRescueNegativeTestJSON-75220162-project-member</nova:user>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:        <nova:project uuid="c4445209a7384565a93895032b4f077e">tempest-ServerRescueNegativeTestJSON-75220162</nova:project>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:        <nova:port uuid="27c957cd-d68f-48d8-b2e1-170275200ed3">
Feb 25 07:29:55 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <entry name="serial">f7b18575-d1fc-423f-a596-8ca6d8ed08fa</entry>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <entry name="uuid">f7b18575-d1fc-423f-a596-8ca6d8ed08fa</entry>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.rescue">
Feb 25 07:29:55 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:29:55 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk">
Feb 25 07:29:55 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:29:55 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <target dev="vdb" bus="virtio"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.config.rescue">
Feb 25 07:29:55 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:29:55 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:bf:b7:62"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <target dev="tap27c957cd-d6"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/console.log" append="off"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:29:55 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:29:55 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:29:55 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:29:55 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
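
The XML dumped above is the guest definition nova hands to libvirt for the rescue boot. A hedged sketch of feeding such a document to libvirt with the Python bindings; this is not nova's exact code path (which goes through nova.virt.libvirt.host), and domain.xml is a hypothetical file standing in for the dump:

    import libvirt

    with open('domain.xml') as f:           # hypothetical copy of the XML above
        xml = f.read()

    conn = libvirt.open('qemu:///system')   # local system hypervisor
    dom = conn.defineXML(xml)               # persist the domain definition
    dom.createWithFlags(0)                  # power it on
    print(dom.name(), dom.UUIDString())     # instance-00000044, f7b18575-...
    conn.close()
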
Feb 25 07:29:55 np0005629333 nova_compute[244014]: 2026-02-25 12:29:55.444 244018 INFO nova.virt.libvirt.driver [-] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Instance destroyed successfully.#033[00m
Feb 25 07:29:55 np0005629333 nova_compute[244014]: 2026-02-25 12:29:55.507 244018 DEBUG nova.network.neutron [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:29:55 np0005629333 nova_compute[244014]: 2026-02-25 12:29:55.522 244018 DEBUG nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:29:55 np0005629333 nova_compute[244014]: 2026-02-25 12:29:55.524 244018 DEBUG nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:29:55 np0005629333 nova_compute[244014]: 2026-02-25 12:29:55.524 244018 DEBUG nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:29:55 np0005629333 nova_compute[244014]: 2026-02-25 12:29:55.525 244018 DEBUG nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] No VIF found with MAC fa:16:3e:bf:b7:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:29:55 np0005629333 nova_compute[244014]: 2026-02-25 12:29:55.526 244018 INFO nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Using config drive#033[00m
Feb 25 07:29:55 np0005629333 nova_compute[244014]: 2026-02-25 12:29:55.550 244018 DEBUG nova.storage.rbd_utils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:29:55 np0005629333 nova_compute[244014]: 2026-02-25 12:29:55.558 244018 INFO nova.compute.manager [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Took 1.11 seconds to deallocate network for instance.#033[00m
Feb 25 07:29:55 np0005629333 nova_compute[244014]: 2026-02-25 12:29:55.581 244018 DEBUG nova.objects.instance [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'ec2_ids' on Instance uuid f7b18575-d1fc-423f-a596-8ca6d8ed08fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:55 np0005629333 nova_compute[244014]: 2026-02-25 12:29:55.621 244018 DEBUG nova.objects.instance [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'keypairs' on Instance uuid f7b18575-d1fc-423f-a596-8ca6d8ed08fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:55 np0005629333 nova_compute[244014]: 2026-02-25 12:29:55.632 244018 DEBUG oslo_concurrency.lockutils [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:55 np0005629333 nova_compute[244014]: 2026-02-25 12:29:55.632 244018 DEBUG oslo_concurrency.lockutils [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:29:55 np0005629333 nova_compute[244014]: 2026-02-25 12:29:55.741 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022580.7403479, abe229eb-2238-4237-a7f2-83b8476ac1dc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:29:55 np0005629333 nova_compute[244014]: 2026-02-25 12:29:55.742 244018 INFO nova.compute.manager [-] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:29:55 np0005629333 nova_compute[244014]: 2026-02-25 12:29:55.769 244018 DEBUG nova.compute.manager [None req-12f5ec44-1447-4ed9-afbe-0b5b828bd547 - - - - - -] [instance: abe229eb-2238-4237-a7f2-83b8476ac1dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:55 np0005629333 nova_compute[244014]: 2026-02-25 12:29:55.803 244018 DEBUG oslo_concurrency.processutils [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 519 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 224 op/s
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.033 244018 INFO nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Creating config drive at /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/disk.config.rescue#033[00m
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.039 244018 DEBUG oslo_concurrency.processutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpgbk_prr1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.127 244018 DEBUG nova.compute.manager [req-f2533574-d281-4b10-b9d3-c91c999fea56 req-a75a04e1-399e-4680-9f44-29103af4157b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.128 244018 DEBUG oslo_concurrency.lockutils [req-f2533574-d281-4b10-b9d3-c91c999fea56 req-a75a04e1-399e-4680-9f44-29103af4157b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.128 244018 DEBUG oslo_concurrency.lockutils [req-f2533574-d281-4b10-b9d3-c91c999fea56 req-a75a04e1-399e-4680-9f44-29103af4157b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.128 244018 DEBUG oslo_concurrency.lockutils [req-f2533574-d281-4b10-b9d3-c91c999fea56 req-a75a04e1-399e-4680-9f44-29103af4157b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.129 244018 DEBUG nova.compute.manager [req-f2533574-d281-4b10-b9d3-c91c999fea56 req-a75a04e1-399e-4680-9f44-29103af4157b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] No waiting events found dispatching network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.129 244018 WARNING nova.compute.manager [req-f2533574-d281-4b10-b9d3-c91c999fea56 req-a75a04e1-399e-4680-9f44-29103af4157b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received unexpected event network-vif-plugged-ee46268d-740d-4ff9-8b65-4a81fc61eec3 for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.129 244018 DEBUG nova.compute.manager [req-f2533574-d281-4b10-b9d3-c91c999fea56 req-a75a04e1-399e-4680-9f44-29103af4157b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Received event network-vif-deleted-ee46268d-740d-4ff9-8b65-4a81fc61eec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.178 244018 DEBUG oslo_concurrency.processutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpgbk_prr1" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
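
The config drive built above is a plain ISO 9660 image with Joliet (-J) and Rock Ridge (-r) extensions and the volume label config-2 that guests look for. A sketch reproducing the logged mkisofs invocation with subprocess; the staging directory is the temporary metadata tree from the log and will not exist on another host:

    # Rebuild the config drive exactly as the logged mkisofs command does.
    import subprocess

    staging = '/tmp/tmpgbk_prr1'   # metadata tree nova staged (path from the log)
    subprocess.run(
        ['/usr/bin/mkisofs', '-o', 'disk.config.rescue',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
         '-quiet', '-J', '-r', '-V', 'config-2', staging],
        check=True)
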
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.210 244018 DEBUG nova.storage.rbd_utils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] rbd image f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.213 244018 DEBUG oslo_concurrency.processutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/disk.config.rescue f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:29:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1754663529' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.353 244018 DEBUG oslo_concurrency.processutils [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/disk.config.rescue f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.354 244018 INFO nova.virt.libvirt.driver [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Deleting local config drive /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa/disk.config.rescue because it was imported into RBD.#033[00m
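
With RBD-backed instances the finished ISO is then imported into the vms pool and the local copy deleted, as the import/return/delete lines above show. A sketch of the same step, mirroring the logged rbd CLI arguments:

    # Push the local config drive into the "vms" RBD pool, then drop the copy.
    import os
    import subprocess

    local = ('/var/lib/nova/instances/'
             'f7b18575-d1fc-423f-a596-8ca6d8ed08fa/disk.config.rescue')
    rbd_name = 'f7b18575-d1fc-423f-a596-8ca6d8ed08fa_disk.config.rescue'

    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', local, rbd_name,
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)
    os.unlink(local)               # nova deletes the local copy once imported
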
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.367 244018 DEBUG oslo_concurrency.processutils [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.372 244018 DEBUG nova.compute.provider_tree [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.392 244018 DEBUG nova.scheduler.client.report [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
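
Placement derives schedulable capacity per resource class as (total - reserved) * allocation_ratio, so the inventory above yields 32 VCPU, 7167 MEMORY_MB and 52 DISK_GB. A quick check in Python:

    # Effective capacity behind the inventory record logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
        print(rc, cap)   # VCPU 32, MEMORY_MB 7167, DISK_GB 52
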
Feb 25 07:29:56 np0005629333 systemd-udevd[303868]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:29:56 np0005629333 kernel: tap27c957cd-d6: entered promiscuous mode
Feb 25 07:29:56 np0005629333 NetworkManager[49836]: <info>  [1772022596.4065] manager: (tap27c957cd-d6): new Tun device (/org/freedesktop/NetworkManager/Devices/280)
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.407 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:56 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:56Z|00641|binding|INFO|Claiming lport 27c957cd-d68f-48d8-b2e1-170275200ed3 for this chassis.
Feb 25 07:29:56 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:56Z|00642|binding|INFO|27c957cd-d68f-48d8-b2e1-170275200ed3: Claiming fa:16:3e:bf:b7:62 10.100.0.4
Feb 25 07:29:56 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:56Z|00643|binding|INFO|Setting lport 27c957cd-d68f-48d8-b2e1-170275200ed3 ovn-installed in OVS
Feb 25 07:29:56 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:56Z|00644|binding|INFO|Setting lport 27c957cd-d68f-48d8-b2e1-170275200ed3 up in Southbound
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.419 244018 DEBUG oslo_concurrency.lockutils [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:56.416 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:b7:62 10.100.0.4'], port_security=['fa:16:3e:bf:b7:62 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f7b18575-d1fc-423f-a596-8ca6d8ed08fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4445209a7384565a93895032b4f077e', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8918a64f-99f7-4eb3-a626-bacb054cff5c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28589015-6a4f-416a-a72c-4ed4061d2d31, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=27c957cd-d68f-48d8-b2e1-170275200ed3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:29:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:56.418 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 27c957cd-d68f-48d8-b2e1-170275200ed3 in datapath 85b56c79-01b6-47e7-ab3b-02e44acca3d3 bound to our chassis#033[00m
Feb 25 07:29:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:56.421 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85b56c79-01b6-47e7-ab3b-02e44acca3d3#033[00m
Feb 25 07:29:56 np0005629333 NetworkManager[49836]: <info>  [1772022596.4236] device (tap27c957cd-d6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:29:56 np0005629333 NetworkManager[49836]: <info>  [1772022596.4242] device (tap27c957cd-d6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.427 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:56.435 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7769574d-1675-480a-b9eb-ce6b67d74381]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:56 np0005629333 systemd-machined[210048]: New machine qemu-83-instance-00000044.
Feb 25 07:29:56 np0005629333 systemd[1]: Started Virtual Machine qemu-83-instance-00000044.
Feb 25 07:29:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:56.456 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cbc9dcfc-0668-4ca3-8dbd-946741dcacd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:56.459 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f88636af-c322-4794-ad6d-fb1eabc1c2e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.474 244018 INFO nova.scheduler.client.report [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Deleted allocations for instance 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b#033[00m
Feb 25 07:29:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:56.487 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[79d34b4b-9ddf-4532-9573-5302f1a82f09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:56.507 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2c394152-1742-4f5c-87a6-806b62363af1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85b56c79-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:20:24:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453793, 'reachable_time': 15353, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304136, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:56.520 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f6b66be0-8e84-4dd5-ac71-282a32cf00dc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap85b56c79-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453803, 'tstamp': 453803}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304138, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap85b56c79-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453805, 'tstamp': 453805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304138, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:56.521 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85b56c79-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.523 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:56.524 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85b56c79-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:56.524 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:29:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:56.525 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85b56c79-00, col_values=(('external_ids', {'iface-id': 'ad3b1754-7f6c-4114-8056-d68f2e9a25d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:56.525 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.562 244018 DEBUG oslo_concurrency.lockutils [None req-23ffb4c4-001a-4e2b-89fb-7db477d9418e 1f8bbe7db4454108aca005daa72d5c22 56700581ea88438ba482d90bc702ced3 - - default default] Lock "8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.144s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.857 244018 DEBUG oslo_concurrency.lockutils [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "8086400b-ac70-4c79-928b-4f1966084384" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.858 244018 DEBUG oslo_concurrency.lockutils [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.858 244018 DEBUG oslo_concurrency.lockutils [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "8086400b-ac70-4c79-928b-4f1966084384-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.858 244018 DEBUG oslo_concurrency.lockutils [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.859 244018 DEBUG oslo_concurrency.lockutils [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.860 244018 INFO nova.compute.manager [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Terminating instance#033[00m
Feb 25 07:29:56 np0005629333 nova_compute[244014]: 2026-02-25 12:29:56.861 244018 DEBUG nova.compute.manager [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:29:56 np0005629333 kernel: tap05178abb-a1 (unregistering): left promiscuous mode
Feb 25 07:29:56 np0005629333 NetworkManager[49836]: <info>  [1772022596.9926] device (tap05178abb-a1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.002 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:57 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:57Z|00645|binding|INFO|Releasing lport 05178abb-a113-4013-9194-9243afe9d0ff from this chassis (sb_readonly=0)
Feb 25 07:29:57 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:57Z|00646|binding|INFO|Setting lport 05178abb-a113-4013-9194-9243afe9d0ff down in Southbound
Feb 25 07:29:57 np0005629333 ovn_controller[147040]: 2026-02-25T12:29:57Z|00647|binding|INFO|Removing iface tap05178abb-a1 ovn-installed in OVS
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.005 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:57.015 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:1d:a7 10.100.0.10'], port_security=['fa:16:3e:7e:1d:a7 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '8086400b-ac70-4c79-928b-4f1966084384', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a448894-87d7-4c8e-a168-2593011ffed7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '780a93a8758a4bd78b22fe68ed6276cf', 'neutron:revision_number': '6', 'neutron:security_group_ids': '3f8dac82-3613-4042-9783-376a897175f3 809ecded-d7db-4d4f-aed2-3cfc6bac71b9 8484ff9f-cb46-4540-b482-1e509f2bf2a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc8be04a-7591-4819-939d-39b48e9a96cf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=05178abb-a113-4013-9194-9243afe9d0ff) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.017 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:57.019 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 05178abb-a113-4013-9194-9243afe9d0ff in datapath 9a448894-87d7-4c8e-a168-2593011ffed7 unbound from our chassis#033[00m
Feb 25 07:29:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:57.021 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9a448894-87d7-4c8e-a168-2593011ffed7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:29:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:57.023 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0e9b3b89-9955-4a5b-b9c7-fd01e5470c56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:57.024 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7 namespace which is not needed anymore#033[00m
Feb 25 07:29:57 np0005629333 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000040.scope: Deactivated successfully.
Feb 25 07:29:57 np0005629333 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000040.scope: Consumed 13.675s CPU time.
Feb 25 07:29:57 np0005629333 systemd-machined[210048]: Machine qemu-73-instance-00000040 terminated.
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.101 244018 INFO nova.virt.libvirt.driver [-] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Instance destroyed successfully.#033[00m
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.102 244018 DEBUG nova.objects.instance [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lazy-loading 'resources' on Instance uuid 8086400b-ac70-4c79-928b-4f1966084384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.131 244018 DEBUG nova.virt.libvirt.vif [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:28:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-105273812',display_name='tempest-SecurityGroupsTestJSON-server-105273812',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-105273812',id=64,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:28:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='780a93a8758a4bd78b22fe68ed6276cf',ramdisk_id='',reservation_id='r-6s6re1xg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-195828884',owner_user_name='tempest-SecurityGroupsTestJSON-195828884-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:28:51Z,user_data=None,user_id='bcb4ded096bc4f7993f96ca892b82333',uuid=8086400b-ac70-4c79-928b-4f1966084384,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05178abb-a113-4013-9194-9243afe9d0ff", "address": "fa:16:3e:7e:1d:a7", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05178abb-a1", "ovs_interfaceid": "05178abb-a113-4013-9194-9243afe9d0ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.134 244018 DEBUG nova.network.os_vif_util [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converting VIF {"id": "05178abb-a113-4013-9194-9243afe9d0ff", "address": "fa:16:3e:7e:1d:a7", "network": {"id": "9a448894-87d7-4c8e-a168-2593011ffed7", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1452519884-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "780a93a8758a4bd78b22fe68ed6276cf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05178abb-a1", "ovs_interfaceid": "05178abb-a113-4013-9194-9243afe9d0ff", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.136 244018 DEBUG nova.network.os_vif_util [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7e:1d:a7,bridge_name='br-int',has_traffic_filtering=True,id=05178abb-a113-4013-9194-9243afe9d0ff,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05178abb-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.136 244018 DEBUG os_vif [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:1d:a7,bridge_name='br-int',has_traffic_filtering=True,id=05178abb-a113-4013-9194-9243afe9d0ff,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05178abb-a1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.139 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.140 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05178abb-a1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.143 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.145 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.148 244018 INFO os_vif [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:1d:a7,bridge_name='br-int',has_traffic_filtering=True,id=05178abb-a113-4013-9194-9243afe9d0ff,network=Network(9a448894-87d7-4c8e-a168-2593011ffed7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05178abb-a1')#033[00m
Feb 25 07:29:57 np0005629333 neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7[299628]: [NOTICE]   (299632) : haproxy version is 2.8.14-c23fe91
Feb 25 07:29:57 np0005629333 neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7[299628]: [NOTICE]   (299632) : path to executable is /usr/sbin/haproxy
Feb 25 07:29:57 np0005629333 neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7[299628]: [WARNING]  (299632) : Exiting Master process...
Feb 25 07:29:57 np0005629333 neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7[299628]: [WARNING]  (299632) : Exiting Master process...
Feb 25 07:29:57 np0005629333 neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7[299628]: [ALERT]    (299632) : Current worker (299634) exited with code 143 (Terminated)
Feb 25 07:29:57 np0005629333 neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7[299628]: [WARNING]  (299632) : All workers exited. Exiting... (0)
Feb 25 07:29:57 np0005629333 systemd[1]: libpod-620ca25fa8851b65461fdb951800fbc4d72272096ab3622b616f1ba2e99f8df7.scope: Deactivated successfully.
Feb 25 07:29:57 np0005629333 podman[304228]: 2026-02-25 12:29:57.176144044 +0000 UTC m=+0.056793431 container died 620ca25fa8851b65461fdb951800fbc4d72272096ab3622b616f1ba2e99f8df7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 25 07:29:57 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-620ca25fa8851b65461fdb951800fbc4d72272096ab3622b616f1ba2e99f8df7-userdata-shm.mount: Deactivated successfully.
Feb 25 07:29:57 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d2584817c9d371ffcf257a70731a89404fd3bb1f9d05f4f6659b9234327334bd-merged.mount: Deactivated successfully.
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.218 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for f7b18575-d1fc-423f-a596-8ca6d8ed08fa due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.218 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022597.2176583, f7b18575-d1fc-423f-a596-8ca6d8ed08fa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.218 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:29:57 np0005629333 podman[304228]: 2026-02-25 12:29:57.22044083 +0000 UTC m=+0.101090227 container cleanup 620ca25fa8851b65461fdb951800fbc4d72272096ab3622b616f1ba2e99f8df7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.222 244018 DEBUG nova.compute.manager [None req-f76dccbc-eb4a-4c03-8723-f149ded794cd 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:57 np0005629333 systemd[1]: libpod-conmon-620ca25fa8851b65461fdb951800fbc4d72272096ab3622b616f1ba2e99f8df7.scope: Deactivated successfully.
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.271 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.273 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:29:57 np0005629333 podman[304278]: 2026-02-25 12:29:57.307100228 +0000 UTC m=+0.057812391 container remove 620ca25fa8851b65461fdb951800fbc4d72272096ab3622b616f1ba2e99f8df7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.313 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Feb 25 07:29:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:57.313 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[aaab6cea-813a-4480-8864-e7c9c9f7a2b4]: (4, ('Wed Feb 25 12:29:57 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7 (620ca25fa8851b65461fdb951800fbc4d72272096ab3622b616f1ba2e99f8df7)\n620ca25fa8851b65461fdb951800fbc4d72272096ab3622b616f1ba2e99f8df7\nWed Feb 25 12:29:57 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7 (620ca25fa8851b65461fdb951800fbc4d72272096ab3622b616f1ba2e99f8df7)\n620ca25fa8851b65461fdb951800fbc4d72272096ab3622b616f1ba2e99f8df7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.314 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022597.2190325, f7b18575-d1fc-423f-a596-8ca6d8ed08fa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.314 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] VM Started (Lifecycle Event)#033[00m
Feb 25 07:29:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:57.315 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[076d6dc6-66ea-4f6e-8c20-63c908de4f8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:57.315 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a448894-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.317 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:57 np0005629333 kernel: tap9a448894-80: left promiscuous mode
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.326 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:29:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:57.328 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1f02f46e-2be2-4675-972b-25fcfb64ab4b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.336 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:29:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:57.339 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c0375fb2-1a8d-49a3-8470-b637f327fe1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:57.340 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3441946c-c9b3-48b9-9d92-2b0985000dda]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.340 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:29:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:57.353 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[61decc58-42e5-41d1-a3b7-5c755070efda]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449623, 'reachable_time': 18705, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304293, 'error': None, 'target': 'ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:57 np0005629333 systemd[1]: run-netns-ovnmeta\x2d9a448894\x2d87d7\x2d4c8e\x2da168\x2d2593011ffed7.mount: Deactivated successfully.
Feb 25 07:29:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:57.355 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9a448894-87d7-4c8e-a168-2593011ffed7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:29:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:29:57.356 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[e68b0565-2786-4dfd-8dea-7a1350ec244e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.482 244018 INFO nova.virt.libvirt.driver [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Deleting instance files /var/lib/nova/instances/8086400b-ac70-4c79-928b-4f1966084384_del#033[00m
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.483 244018 INFO nova.virt.libvirt.driver [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Deletion of /var/lib/nova/instances/8086400b-ac70-4c79-928b-4f1966084384_del complete#033[00m
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.555 244018 INFO nova.compute.manager [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Took 0.69 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.556 244018 DEBUG oslo.service.loopingcall [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.556 244018 DEBUG nova.compute.manager [-] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:29:57 np0005629333 nova_compute[244014]: 2026-02-25 12:29:57.556 244018 DEBUG nova.network.neutron [-] [instance: 8086400b-ac70-4c79-928b-4f1966084384] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:29:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 457 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 283 op/s
Feb 25 07:29:58 np0005629333 nova_compute[244014]: 2026-02-25 12:29:58.240 244018 DEBUG nova.compute.manager [req-de4ebe55-7ce1-46bb-bad2-189749eded7d req-804a9182-4851-4ea7-9c64-f070757433dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received event network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:58 np0005629333 nova_compute[244014]: 2026-02-25 12:29:58.241 244018 DEBUG oslo_concurrency.lockutils [req-de4ebe55-7ce1-46bb-bad2-189749eded7d req-804a9182-4851-4ea7-9c64-f070757433dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:58 np0005629333 nova_compute[244014]: 2026-02-25 12:29:58.241 244018 DEBUG oslo_concurrency.lockutils [req-de4ebe55-7ce1-46bb-bad2-189749eded7d req-804a9182-4851-4ea7-9c64-f070757433dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:58 np0005629333 nova_compute[244014]: 2026-02-25 12:29:58.242 244018 DEBUG oslo_concurrency.lockutils [req-de4ebe55-7ce1-46bb-bad2-189749eded7d req-804a9182-4851-4ea7-9c64-f070757433dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:58 np0005629333 nova_compute[244014]: 2026-02-25 12:29:58.242 244018 DEBUG nova.compute.manager [req-de4ebe55-7ce1-46bb-bad2-189749eded7d req-804a9182-4851-4ea7-9c64-f070757433dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] No waiting events found dispatching network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:29:58 np0005629333 nova_compute[244014]: 2026-02-25 12:29:58.243 244018 WARNING nova.compute.manager [req-de4ebe55-7ce1-46bb-bad2-189749eded7d req-804a9182-4851-4ea7-9c64-f070757433dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received unexpected event network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 for instance with vm_state rescued and task_state None.#033[00m
Feb 25 07:29:58 np0005629333 nova_compute[244014]: 2026-02-25 12:29:58.243 244018 DEBUG nova.compute.manager [req-de4ebe55-7ce1-46bb-bad2-189749eded7d req-804a9182-4851-4ea7-9c64-f070757433dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received event network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:58 np0005629333 nova_compute[244014]: 2026-02-25 12:29:58.244 244018 DEBUG oslo_concurrency.lockutils [req-de4ebe55-7ce1-46bb-bad2-189749eded7d req-804a9182-4851-4ea7-9c64-f070757433dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:58 np0005629333 nova_compute[244014]: 2026-02-25 12:29:58.244 244018 DEBUG oslo_concurrency.lockutils [req-de4ebe55-7ce1-46bb-bad2-189749eded7d req-804a9182-4851-4ea7-9c64-f070757433dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:58 np0005629333 nova_compute[244014]: 2026-02-25 12:29:58.245 244018 DEBUG oslo_concurrency.lockutils [req-de4ebe55-7ce1-46bb-bad2-189749eded7d req-804a9182-4851-4ea7-9c64-f070757433dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:29:58 np0005629333 nova_compute[244014]: 2026-02-25 12:29:58.245 244018 DEBUG nova.compute.manager [req-de4ebe55-7ce1-46bb-bad2-189749eded7d req-804a9182-4851-4ea7-9c64-f070757433dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] No waiting events found dispatching network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:29:58 np0005629333 nova_compute[244014]: 2026-02-25 12:29:58.246 244018 WARNING nova.compute.manager [req-de4ebe55-7ce1-46bb-bad2-189749eded7d req-804a9182-4851-4ea7-9c64-f070757433dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received unexpected event network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 for instance with vm_state rescued and task_state None.#033[00m
Feb 25 07:29:58 np0005629333 nova_compute[244014]: 2026-02-25 12:29:58.867 244018 DEBUG nova.network.neutron [-] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:29:58 np0005629333 nova_compute[244014]: 2026-02-25 12:29:58.888 244018 INFO nova.compute.manager [-] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Took 1.33 seconds to deallocate network for instance.#033[00m
Feb 25 07:29:58 np0005629333 nova_compute[244014]: 2026-02-25 12:29:58.938 244018 DEBUG oslo_concurrency.lockutils [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:29:58 np0005629333 nova_compute[244014]: 2026-02-25 12:29:58.939 244018 DEBUG oslo_concurrency.lockutils [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:29:58 np0005629333 nova_compute[244014]: 2026-02-25 12:29:58.961 244018 DEBUG nova.compute.manager [req-3bb00d0e-1883-4dda-afa8-5d50771db852 req-b9d3249a-6d0e-4b18-89d9-4b3b946ee062 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Received event network-vif-deleted-05178abb-a113-4013-9194-9243afe9d0ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:29:59 np0005629333 nova_compute[244014]: 2026-02-25 12:29:59.067 244018 DEBUG oslo_concurrency.processutils [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:29:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:29:59 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/289405999' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:29:59 np0005629333 nova_compute[244014]: 2026-02-25 12:29:59.729 244018 DEBUG oslo_concurrency.processutils [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.661s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:29:59 np0005629333 nova_compute[244014]: 2026-02-25 12:29:59.735 244018 DEBUG nova.compute.provider_tree [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:29:59 np0005629333 nova_compute[244014]: 2026-02-25 12:29:59.759 244018 DEBUG nova.scheduler.client.report [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:29:59 np0005629333 nova_compute[244014]: 2026-02-25 12:29:59.766 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:29:59 np0005629333 nova_compute[244014]: 2026-02-25 12:29:59.790 244018 DEBUG oslo_concurrency.lockutils [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:29:59 np0005629333 nova_compute[244014]: 2026-02-25 12:29:59.826 244018 INFO nova.scheduler.client.report [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Deleted allocations for instance 8086400b-ac70-4c79-928b-4f1966084384
Feb 25 07:29:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 457 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.9 MiB/s wr, 223 op/s
Feb 25 07:29:59 np0005629333 nova_compute[244014]: 2026-02-25 12:29:59.914 244018 DEBUG oslo_concurrency.lockutils [None req-6e5a53b9-fcaa-44fb-8a22-348da17262dc bcb4ded096bc4f7993f96ca892b82333 780a93a8758a4bd78b22fe68ed6276cf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:30:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:30:01 np0005629333 nova_compute[244014]: 2026-02-25 12:30:01.064 244018 DEBUG nova.compute.manager [req-0115614a-6643-42bf-8655-f035ba6d65a8 req-fd365190-fe26-461f-8cd8-34633e57e3ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Received event network-vif-unplugged-05178abb-a113-4013-9194-9243afe9d0ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:30:01 np0005629333 nova_compute[244014]: 2026-02-25 12:30:01.064 244018 DEBUG oslo_concurrency.lockutils [req-0115614a-6643-42bf-8655-f035ba6d65a8 req-fd365190-fe26-461f-8cd8-34633e57e3ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8086400b-ac70-4c79-928b-4f1966084384-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:30:01 np0005629333 nova_compute[244014]: 2026-02-25 12:30:01.064 244018 DEBUG oslo_concurrency.lockutils [req-0115614a-6643-42bf-8655-f035ba6d65a8 req-fd365190-fe26-461f-8cd8-34633e57e3ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:30:01 np0005629333 nova_compute[244014]: 2026-02-25 12:30:01.064 244018 DEBUG oslo_concurrency.lockutils [req-0115614a-6643-42bf-8655-f035ba6d65a8 req-fd365190-fe26-461f-8cd8-34633e57e3ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:30:01 np0005629333 nova_compute[244014]: 2026-02-25 12:30:01.065 244018 DEBUG nova.compute.manager [req-0115614a-6643-42bf-8655-f035ba6d65a8 req-fd365190-fe26-461f-8cd8-34633e57e3ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] No waiting events found dispatching network-vif-unplugged-05178abb-a113-4013-9194-9243afe9d0ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:30:01 np0005629333 nova_compute[244014]: 2026-02-25 12:30:01.065 244018 WARNING nova.compute.manager [req-0115614a-6643-42bf-8655-f035ba6d65a8 req-fd365190-fe26-461f-8cd8-34633e57e3ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Received unexpected event network-vif-unplugged-05178abb-a113-4013-9194-9243afe9d0ff for instance with vm_state deleted and task_state None.
Feb 25 07:30:01 np0005629333 nova_compute[244014]: 2026-02-25 12:30:01.065 244018 DEBUG nova.compute.manager [req-0115614a-6643-42bf-8655-f035ba6d65a8 req-fd365190-fe26-461f-8cd8-34633e57e3ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Received event network-vif-plugged-05178abb-a113-4013-9194-9243afe9d0ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:30:01 np0005629333 nova_compute[244014]: 2026-02-25 12:30:01.065 244018 DEBUG oslo_concurrency.lockutils [req-0115614a-6643-42bf-8655-f035ba6d65a8 req-fd365190-fe26-461f-8cd8-34633e57e3ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8086400b-ac70-4c79-928b-4f1966084384-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:30:01 np0005629333 nova_compute[244014]: 2026-02-25 12:30:01.065 244018 DEBUG oslo_concurrency.lockutils [req-0115614a-6643-42bf-8655-f035ba6d65a8 req-fd365190-fe26-461f-8cd8-34633e57e3ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:30:01 np0005629333 nova_compute[244014]: 2026-02-25 12:30:01.065 244018 DEBUG oslo_concurrency.lockutils [req-0115614a-6643-42bf-8655-f035ba6d65a8 req-fd365190-fe26-461f-8cd8-34633e57e3ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8086400b-ac70-4c79-928b-4f1966084384-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:30:01 np0005629333 nova_compute[244014]: 2026-02-25 12:30:01.066 244018 DEBUG nova.compute.manager [req-0115614a-6643-42bf-8655-f035ba6d65a8 req-fd365190-fe26-461f-8cd8-34633e57e3ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] No waiting events found dispatching network-vif-plugged-05178abb-a113-4013-9194-9243afe9d0ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:30:01 np0005629333 nova_compute[244014]: 2026-02-25 12:30:01.066 244018 WARNING nova.compute.manager [req-0115614a-6643-42bf-8655-f035ba6d65a8 req-fd365190-fe26-461f-8cd8-34633e57e3ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Received unexpected event network-vif-plugged-05178abb-a113-4013-9194-9243afe9d0ff for instance with vm_state deleted and task_state None.
Feb 25 07:30:01 np0005629333 nova_compute[244014]: 2026-02-25 12:30:01.366 244018 INFO nova.compute.manager [None req-771650aa-3e1a-409a-8bbf-bd6b74dfe0e8 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Pausing
Feb 25 07:30:01 np0005629333 nova_compute[244014]: 2026-02-25 12:30:01.370 244018 DEBUG nova.objects.instance [None req-771650aa-3e1a-409a-8bbf-bd6b74dfe0e8 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'flavor' on Instance uuid bf261ccf-c216-4383-a22a-7f0553198152 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:30:01 np0005629333 nova_compute[244014]: 2026-02-25 12:30:01.418 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022601.4179032, bf261ccf-c216-4383-a22a-7f0553198152 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:30:01 np0005629333 nova_compute[244014]: 2026-02-25 12:30:01.419 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] VM Paused (Lifecycle Event)
Feb 25 07:30:01 np0005629333 nova_compute[244014]: 2026-02-25 12:30:01.422 244018 DEBUG nova.compute.manager [None req-771650aa-3e1a-409a-8bbf-bd6b74dfe0e8 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:30:01 np0005629333 nova_compute[244014]: 2026-02-25 12:30:01.454 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:30:01 np0005629333 nova_compute[244014]: 2026-02-25 12:30:01.459 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:30:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:30:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:30:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:30:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:30:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:30:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:30:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 421 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.2 MiB/s wr, 320 op/s
Feb 25 07:30:02 np0005629333 nova_compute[244014]: 2026-02-25 12:30:02.143 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:30:02 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:02Z|00648|binding|INFO|Releasing lport ad3b1754-7f6c-4114-8056-d68f2e9a25d5 from this chassis (sb_readonly=0)
Feb 25 07:30:02 np0005629333 nova_compute[244014]: 2026-02-25 12:30:02.504 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:30:02 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:02Z|00649|binding|INFO|Releasing lport ad3b1754-7f6c-4114-8056-d68f2e9a25d5 from this chassis (sb_readonly=0)
Feb 25 07:30:02 np0005629333 nova_compute[244014]: 2026-02-25 12:30:02.553 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:30:03 np0005629333 nova_compute[244014]: 2026-02-25 12:30:03.205 244018 DEBUG nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 25 07:30:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 435 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.1 MiB/s wr, 277 op/s
Feb 25 07:30:04 np0005629333 nova_compute[244014]: 2026-02-25 12:30:04.468 244018 INFO nova.compute.manager [None req-94377022-49b0-490c-95b7-9ce5549d5586 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Unpausing
Feb 25 07:30:04 np0005629333 nova_compute[244014]: 2026-02-25 12:30:04.469 244018 DEBUG nova.objects.instance [None req-94377022-49b0-490c-95b7-9ce5549d5586 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'flavor' on Instance uuid bf261ccf-c216-4383-a22a-7f0553198152 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:30:04 np0005629333 nova_compute[244014]: 2026-02-25 12:30:04.512 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022604.5119896, bf261ccf-c216-4383-a22a-7f0553198152 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:30:04 np0005629333 nova_compute[244014]: 2026-02-25 12:30:04.512 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] VM Resumed (Lifecycle Event)
Feb 25 07:30:04 np0005629333 virtqemud[243235]: argument unsupported: QEMU guest agent is not configured
Feb 25 07:30:04 np0005629333 nova_compute[244014]: 2026-02-25 12:30:04.518 244018 DEBUG nova.virt.libvirt.guest [None req-94377022-49b0-490c-95b7-9ce5549d5586 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Feb 25 07:30:04 np0005629333 nova_compute[244014]: 2026-02-25 12:30:04.518 244018 DEBUG nova.compute.manager [None req-94377022-49b0-490c-95b7-9ce5549d5586 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:30:04 np0005629333 nova_compute[244014]: 2026-02-25 12:30:04.553 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:30:04 np0005629333 nova_compute[244014]: 2026-02-25 12:30:04.557 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:30:04 np0005629333 nova_compute[244014]: 2026-02-25 12:30:04.582 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] During sync_power_state the instance has a pending task (unpausing). Skip.
Feb 25 07:30:04 np0005629333 podman[304318]: 2026-02-25 12:30:04.740797918 +0000 UTC m=+0.074327489 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:30:04 np0005629333 nova_compute[244014]: 2026-02-25 12:30:04.763 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:30:04 np0005629333 podman[304319]: 2026-02-25 12:30:04.777766266 +0000 UTC m=+0.108344193 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 07:30:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:30:05 np0005629333 kernel: tap197929cb-aa (unregistering): left promiscuous mode
Feb 25 07:30:05 np0005629333 nova_compute[244014]: 2026-02-25 12:30:05.750 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:30:05 np0005629333 NetworkManager[49836]: <info>  [1772022605.7532] device (tap197929cb-aa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:30:05 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:05Z|00650|binding|INFO|Releasing lport 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 from this chassis (sb_readonly=0)
Feb 25 07:30:05 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:05Z|00651|binding|INFO|Setting lport 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 down in Southbound
Feb 25 07:30:05 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:05Z|00652|binding|INFO|Removing iface tap197929cb-aa ovn-installed in OVS
Feb 25 07:30:05 np0005629333 nova_compute[244014]: 2026-02-25 12:30:05.759 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:30:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:05.763 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:10:e8 10.100.0.4'], port_security=['fa:16:3e:3f:10:e8 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd945940d-a1b5-4a36-b980-efda3a9efda6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a36e5ee4-5e24-4d80-83ee-01c487ff157c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61371c73c9fb4961886c5c22f8f871e1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '95094e28-96cd-49ef-b6c0-712f53ce443e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4bfc4d05-cf0b-42bf-9420-27556b829f8f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=197929cb-aaa4-48f0-a831-7ac3f4ac5b37) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:30:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:05.764 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 in datapath a36e5ee4-5e24-4d80-83ee-01c487ff157c unbound from our chassis
Feb 25 07:30:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:05.765 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a36e5ee4-5e24-4d80-83ee-01c487ff157c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 25 07:30:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:05.766 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[67743a01-91a9-4d44-826c-2b1d15139753]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:30:05 np0005629333 nova_compute[244014]: 2026-02-25 12:30:05.774 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:30:05 np0005629333 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000045.scope: Deactivated successfully.
Feb 25 07:30:05 np0005629333 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000045.scope: Consumed 12.277s CPU time.
Feb 25 07:30:05 np0005629333 systemd-machined[210048]: Machine qemu-81-instance-00000045 terminated.
Feb 25 07:30:05 np0005629333 nova_compute[244014]: 2026-02-25 12:30:05.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:30:05 np0005629333 nova_compute[244014]: 2026-02-25 12:30:05.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 07:30:05 np0005629333 nova_compute[244014]: 2026-02-25 12:30:05.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 07:30:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 435 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 211 op/s
Feb 25 07:30:05 np0005629333 nova_compute[244014]: 2026-02-25 12:30:05.978 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:30:05 np0005629333 nova_compute[244014]: 2026-02-25 12:30:05.986 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:30:06 np0005629333 nova_compute[244014]: 2026-02-25 12:30:06.095 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-bf261ccf-c216-4383-a22a-7f0553198152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:30:06 np0005629333 nova_compute[244014]: 2026-02-25 12:30:06.095 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-bf261ccf-c216-4383-a22a-7f0553198152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:30:06 np0005629333 nova_compute[244014]: 2026-02-25 12:30:06.096 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 07:30:06 np0005629333 nova_compute[244014]: 2026-02-25 12:30:06.097 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid bf261ccf-c216-4383-a22a-7f0553198152 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:30:06 np0005629333 nova_compute[244014]: 2026-02-25 12:30:06.219 244018 INFO nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Instance shutdown successfully after 13 seconds.
Feb 25 07:30:06 np0005629333 nova_compute[244014]: 2026-02-25 12:30:06.227 244018 INFO nova.virt.libvirt.driver [-] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Instance destroyed successfully.
Feb 25 07:30:06 np0005629333 nova_compute[244014]: 2026-02-25 12:30:06.228 244018 DEBUG nova.objects.instance [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lazy-loading 'numa_topology' on Instance uuid d945940d-a1b5-4a36-b980-efda3a9efda6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:30:06 np0005629333 nova_compute[244014]: 2026-02-25 12:30:06.250 244018 INFO nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Attempting rescue
Feb 25 07:30:06 np0005629333 nova_compute[244014]: 2026-02-25 12:30:06.252 244018 DEBUG nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Feb 25 07:30:06 np0005629333 nova_compute[244014]: 2026-02-25 12:30:06.257 244018 DEBUG nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Feb 25 07:30:06 np0005629333 nova_compute[244014]: 2026-02-25 12:30:06.258 244018 INFO nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Creating image(s)
Feb 25 07:30:06 np0005629333 nova_compute[244014]: 2026-02-25 12:30:06.290 244018 DEBUG nova.storage.rbd_utils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] rbd image d945940d-a1b5-4a36-b980-efda3a9efda6_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:30:06 np0005629333 nova_compute[244014]: 2026-02-25 12:30:06.294 244018 DEBUG nova.objects.instance [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid d945940d-a1b5-4a36-b980-efda3a9efda6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:30:06 np0005629333 nova_compute[244014]: 2026-02-25 12:30:06.355 244018 DEBUG nova.storage.rbd_utils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] rbd image d945940d-a1b5-4a36-b980-efda3a9efda6_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:30:06 np0005629333 nova_compute[244014]: 2026-02-25 12:30:06.390 244018 DEBUG nova.storage.rbd_utils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] rbd image d945940d-a1b5-4a36-b980-efda3a9efda6_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:30:06 np0005629333 nova_compute[244014]: 2026-02-25 12:30:06.395 244018 DEBUG oslo_concurrency.processutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:30:06 np0005629333 nova_compute[244014]: 2026-02-25 12:30:06.481 244018 DEBUG oslo_concurrency.processutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:30:06 np0005629333 nova_compute[244014]: 2026-02-25 12:30:06.482 244018 DEBUG oslo_concurrency.lockutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:30:06 np0005629333 nova_compute[244014]: 2026-02-25 12:30:06.483 244018 DEBUG oslo_concurrency.lockutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:30:06 np0005629333 nova_compute[244014]: 2026-02-25 12:30:06.484 244018 DEBUG oslo_concurrency.lockutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:30:06 np0005629333 nova_compute[244014]: 2026-02-25 12:30:06.520 244018 DEBUG nova.storage.rbd_utils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] rbd image d945940d-a1b5-4a36-b980-efda3a9efda6_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:30:06 np0005629333 nova_compute[244014]: 2026-02-25 12:30:06.527 244018 DEBUG oslo_concurrency.processutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d945940d-a1b5-4a36-b980-efda3a9efda6_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:30:06 np0005629333 nova_compute[244014]: 2026-02-25 12:30:06.568 244018 DEBUG nova.compute.manager [req-14d6b380-6f27-4fdc-a956-4009674720de req-5f22cb96-793e-45e5-b6f1-abad4da1a833 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-vif-unplugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:30:06 np0005629333 nova_compute[244014]: 2026-02-25 12:30:06.569 244018 DEBUG oslo_concurrency.lockutils [req-14d6b380-6f27-4fdc-a956-4009674720de req-5f22cb96-793e-45e5-b6f1-abad4da1a833 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:30:06 np0005629333 nova_compute[244014]: 2026-02-25 12:30:06.569 244018 DEBUG oslo_concurrency.lockutils [req-14d6b380-6f27-4fdc-a956-4009674720de req-5f22cb96-793e-45e5-b6f1-abad4da1a833 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:30:06 np0005629333 nova_compute[244014]: 2026-02-25 12:30:06.570 244018 DEBUG oslo_concurrency.lockutils [req-14d6b380-6f27-4fdc-a956-4009674720de req-5f22cb96-793e-45e5-b6f1-abad4da1a833 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:30:06 np0005629333 nova_compute[244014]: 2026-02-25 12:30:06.570 244018 DEBUG nova.compute.manager [req-14d6b380-6f27-4fdc-a956-4009674720de req-5f22cb96-793e-45e5-b6f1-abad4da1a833 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] No waiting events found dispatching network-vif-unplugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:30:06 np0005629333 nova_compute[244014]: 2026-02-25 12:30:06.570 244018 WARNING nova.compute.manager [req-14d6b380-6f27-4fdc-a956-4009674720de req-5f22cb96-793e-45e5-b6f1-abad4da1a833 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received unexpected event network-vif-unplugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 for instance with vm_state active and task_state rescuing.
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.059 244018 DEBUG oslo_concurrency.processutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d945940d-a1b5-4a36-b980-efda3a9efda6_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.060 244018 DEBUG nova.objects.instance [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lazy-loading 'migration_context' on Instance uuid d945940d-a1b5-4a36-b980-efda3a9efda6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.077 244018 DEBUG nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.078 244018 DEBUG nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Start _get_guest_xml network_info=[{"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "vif_mac": "fa:16:3e:3f:10:e8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.079 244018 DEBUG nova.objects.instance [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lazy-loading 'resources' on Instance uuid d945940d-a1b5-4a36-b980-efda3a9efda6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.081 244018 DEBUG oslo_concurrency.lockutils [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.082 244018 DEBUG oslo_concurrency.lockutils [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.082 244018 DEBUG oslo_concurrency.lockutils [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.082 244018 DEBUG oslo_concurrency.lockutils [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.083 244018 DEBUG oslo_concurrency.lockutils [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.085 244018 INFO nova.compute.manager [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Terminating instance
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.087 244018 DEBUG nova.compute.manager [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.097 244018 WARNING nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.103 244018 DEBUG nova.virt.libvirt.host [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.104 244018 DEBUG nova.virt.libvirt.host [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.113 244018 DEBUG nova.virt.libvirt.host [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.114 244018 DEBUG nova.virt.libvirt.host [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.115 244018 DEBUG nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.115 244018 DEBUG nova.virt.hardware [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.116 244018 DEBUG nova.virt.hardware [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.116 244018 DEBUG nova.virt.hardware [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.117 244018 DEBUG nova.virt.hardware [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.117 244018 DEBUG nova.virt.hardware [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.118 244018 DEBUG nova.virt.hardware [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.118 244018 DEBUG nova.virt.hardware [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.119 244018 DEBUG nova.virt.hardware [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.119 244018 DEBUG nova.virt.hardware [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.119 244018 DEBUG nova.virt.hardware [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.120 244018 DEBUG nova.virt.hardware [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.120 244018 DEBUG nova.objects.instance [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid d945940d-a1b5-4a36-b980-efda3a9efda6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:30:07 np0005629333 kernel: tap27c957cd-d6 (unregistering): left promiscuous mode
Feb 25 07:30:07 np0005629333 NetworkManager[49836]: <info>  [1772022607.1386] device (tap27c957cd-d6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.144 244018 DEBUG oslo_concurrency.processutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:30:07 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:07Z|00653|binding|INFO|Releasing lport 27c957cd-d68f-48d8-b2e1-170275200ed3 from this chassis (sb_readonly=0)
Feb 25 07:30:07 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:07Z|00654|binding|INFO|Setting lport 27c957cd-d68f-48d8-b2e1-170275200ed3 down in Southbound
Feb 25 07:30:07 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:07Z|00655|binding|INFO|Removing iface tap27c957cd-d6 ovn-installed in OVS
Feb 25 07:30:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:07.162 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:b7:62 10.100.0.4'], port_security=['fa:16:3e:bf:b7:62 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f7b18575-d1fc-423f-a596-8ca6d8ed08fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4445209a7384565a93895032b4f077e', 'neutron:revision_number': '6', 'neutron:security_group_ids': '8918a64f-99f7-4eb3-a626-bacb054cff5c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28589015-6a4f-416a-a72c-4ed4061d2d31, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=27c957cd-d68f-48d8-b2e1-170275200ed3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:30:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:07.164 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 27c957cd-d68f-48d8-b2e1-170275200ed3 in datapath 85b56c79-01b6-47e7-ab3b-02e44acca3d3 unbound from our chassis#033[00m
Feb 25 07:30:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:07.166 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 85b56c79-01b6-47e7-ab3b-02e44acca3d3#033[00m
Feb 25 07:30:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:07.177 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e3cf0d1f-46c1-4cb9-832d-38bad3efcb14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.179 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.182 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:30:07 np0005629333 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d00000044.scope: Deactivated successfully.
Feb 25 07:30:07 np0005629333 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d00000044.scope: Consumed 10.661s CPU time.
Feb 25 07:30:07 np0005629333 systemd-machined[210048]: Machine qemu-83-instance-00000044 terminated.
Feb 25 07:30:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:07.206 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e8673af6-cca6-4816-813b-abe47268cbe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:07.210 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[64e97d70-7d12-4496-b9e2-ea121c677e86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:07.241 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d89e94-8dcc-404e-a5ec-37e3d445b4bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:07.259 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[30c89533-d536-4e37-ada1-6e9d2cdf08e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap85b56c79-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:20:24:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 179], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453793, 'reachable_time': 15353, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304488, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:07.270 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[30f2eec5-a34a-45dc-a2b1-a50d4776939e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap85b56c79-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453803, 'tstamp': 453803}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304489, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap85b56c79-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453805, 'tstamp': 453805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304489, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:07.271 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85b56c79-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.273 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.280 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:07.281 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap85b56c79-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:30:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:07.282 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:30:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:07.282 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap85b56c79-00, col_values=(('external_ids', {'iface-id': 'ad3b1754-7f6c-4114-8056-d68f2e9a25d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:30:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:07.282 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.327 244018 INFO nova.virt.libvirt.driver [-] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Instance destroyed successfully.#033[00m
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.329 244018 DEBUG nova.objects.instance [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'resources' on Instance uuid f7b18575-d1fc-423f-a596-8ca6d8ed08fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.365 244018 DEBUG nova.virt.libvirt.vif [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:29:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1779266838',display_name='tempest-ServerRescueNegativeTestJSON-server-1779266838',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1779266838',id=68,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:29:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c4445209a7384565a93895032b4f077e',ramdisk_id='',reservation_id='r-ut2etugv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-75220162',owner_user_name='tempest-ServerRescueNegativeTestJSON-75220162-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:29:57Z,user_data=None,user_id='3aef97e1c87341848423edc65828540c',uuid=f7b18575-d1fc-423f-a596-8ca6d8ed08fa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "27c957cd-d68f-48d8-b2e1-170275200ed3", "address": "fa:16:3e:bf:b7:62", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27c957cd-d6", "ovs_interfaceid": "27c957cd-d68f-48d8-b2e1-170275200ed3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.365 244018 DEBUG nova.network.os_vif_util [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Converting VIF {"id": "27c957cd-d68f-48d8-b2e1-170275200ed3", "address": "fa:16:3e:bf:b7:62", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27c957cd-d6", "ovs_interfaceid": "27c957cd-d68f-48d8-b2e1-170275200ed3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.367 244018 DEBUG nova.network.os_vif_util [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bf:b7:62,bridge_name='br-int',has_traffic_filtering=True,id=27c957cd-d68f-48d8-b2e1-170275200ed3,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27c957cd-d6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.367 244018 DEBUG os_vif [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:b7:62,bridge_name='br-int',has_traffic_filtering=True,id=27c957cd-d68f-48d8-b2e1-170275200ed3,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27c957cd-d6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.370 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.370 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27c957cd-d6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.372 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.375 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.376 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.379 244018 INFO os_vif [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:b7:62,bridge_name='br-int',has_traffic_filtering=True,id=27c957cd-d68f-48d8-b2e1-170275200ed3,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27c957cd-d6')#033[00m
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.552 244018 DEBUG nova.compute.manager [req-62be2842-122e-4ae4-a053-f98d6584d434 req-1ebec25b-e0a5-42b7-af64-6c40a6dfd884 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received event network-vif-unplugged-27c957cd-d68f-48d8-b2e1-170275200ed3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.552 244018 DEBUG oslo_concurrency.lockutils [req-62be2842-122e-4ae4-a053-f98d6584d434 req-1ebec25b-e0a5-42b7-af64-6c40a6dfd884 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.553 244018 DEBUG oslo_concurrency.lockutils [req-62be2842-122e-4ae4-a053-f98d6584d434 req-1ebec25b-e0a5-42b7-af64-6c40a6dfd884 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.553 244018 DEBUG oslo_concurrency.lockutils [req-62be2842-122e-4ae4-a053-f98d6584d434 req-1ebec25b-e0a5-42b7-af64-6c40a6dfd884 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.553 244018 DEBUG nova.compute.manager [req-62be2842-122e-4ae4-a053-f98d6584d434 req-1ebec25b-e0a5-42b7-af64-6c40a6dfd884 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] No waiting events found dispatching network-vif-unplugged-27c957cd-d68f-48d8-b2e1-170275200ed3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.554 244018 DEBUG nova.compute.manager [req-62be2842-122e-4ae4-a053-f98d6584d434 req-1ebec25b-e0a5-42b7-af64-6c40a6dfd884 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received event network-vif-unplugged-27c957cd-d68f-48d8-b2e1-170275200ed3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:30:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:30:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3187409761' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.703 244018 DEBUG oslo_concurrency.processutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.705 244018 DEBUG oslo_concurrency.processutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 452 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.3 MiB/s wr, 234 op/s
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.921 244018 INFO nova.virt.libvirt.driver [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Deleting instance files /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa_del#033[00m
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.922 244018 INFO nova.virt.libvirt.driver [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Deletion of /var/lib/nova/instances/f7b18575-d1fc-423f-a596-8ca6d8ed08fa_del complete#033[00m
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.992 244018 INFO nova.compute.manager [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Took 0.90 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.995 244018 DEBUG oslo.service.loopingcall [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.996 244018 DEBUG nova.compute.manager [-] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:30:07 np0005629333 nova_compute[244014]: 2026-02-25 12:30:07.996 244018 DEBUG nova.network.neutron [-] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:30:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:30:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3834581766' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.226 244018 DEBUG oslo_concurrency.processutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.227 244018 DEBUG oslo_concurrency.processutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.389 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Updating instance_info_cache with network_info: [{"id": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "address": "fa:16:3e:03:8c:25", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75a4bcb-92", "ovs_interfaceid": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.457 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-bf261ccf-c216-4383-a22a-7f0553198152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.458 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.651 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022593.6500142, 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.652 244018 INFO nova.compute.manager [-] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.658 244018 DEBUG nova.compute.manager [req-26482a6e-ed08-47ee-bf5d-98fc2eec0f79 req-4060210b-891d-4129-bf62-4cc9180c087d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.659 244018 DEBUG oslo_concurrency.lockutils [req-26482a6e-ed08-47ee-bf5d-98fc2eec0f79 req-4060210b-891d-4129-bf62-4cc9180c087d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.660 244018 DEBUG oslo_concurrency.lockutils [req-26482a6e-ed08-47ee-bf5d-98fc2eec0f79 req-4060210b-891d-4129-bf62-4cc9180c087d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.660 244018 DEBUG oslo_concurrency.lockutils [req-26482a6e-ed08-47ee-bf5d-98fc2eec0f79 req-4060210b-891d-4129-bf62-4cc9180c087d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.660 244018 DEBUG nova.compute.manager [req-26482a6e-ed08-47ee-bf5d-98fc2eec0f79 req-4060210b-891d-4129-bf62-4cc9180c087d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] No waiting events found dispatching network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.661 244018 WARNING nova.compute.manager [req-26482a6e-ed08-47ee-bf5d-98fc2eec0f79 req-4060210b-891d-4129-bf62-4cc9180c087d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received unexpected event network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 for instance with vm_state active and task_state rescuing.#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.675 244018 DEBUG nova.network.neutron [-] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.691 244018 DEBUG nova.compute.manager [None req-c69c3ff9-3a62-4f35-891e-8dd24a5b9402 - - - - - -] [instance: 8e8ab5d0-fccf-42cf-9ca2-2d63c4515c4b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.704 244018 INFO nova.compute.manager [-] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Took 0.71 seconds to deallocate network for instance.#033[00m
Feb 25 07:30:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:30:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3246071265' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.727 244018 DEBUG oslo_concurrency.processutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.729 244018 DEBUG nova.virt.libvirt.vif [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:29:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-579439343',display_name='tempest-ServerRescueTestJSONUnderV235-server-579439343',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-579439343',id=69,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:29:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='61371c73c9fb4961886c5c22f8f871e1',ramdisk_id='',reservation_id='r-yaoxup28',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1059268888',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1059268888-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:29:49Z,user_data=None,user_id='74af2f394ab04b06b55e62150e81b6b1',uuid=d945940d-a1b5-4a36-b980-efda3a9efda6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "vif_mac": "fa:16:3e:3f:10:e8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.730 244018 DEBUG nova.network.os_vif_util [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Converting VIF {"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "vif_mac": "fa:16:3e:3f:10:e8"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.731 244018 DEBUG nova.network.os_vif_util [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3f:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=197929cb-aaa4-48f0-a831-7ac3f4ac5b37,network=Network(a36e5ee4-5e24-4d80-83ee-01c487ff157c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap197929cb-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.732 244018 DEBUG nova.objects.instance [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lazy-loading 'pci_devices' on Instance uuid d945940d-a1b5-4a36-b980-efda3a9efda6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.749 244018 DEBUG nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:30:08 np0005629333 nova_compute[244014]:  <uuid>d945940d-a1b5-4a36-b980-efda3a9efda6</uuid>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:  <name>instance-00000045</name>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerRescueTestJSONUnderV235-server-579439343</nova:name>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:30:07</nova:creationTime>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:30:08 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:        <nova:user uuid="74af2f394ab04b06b55e62150e81b6b1">tempest-ServerRescueTestJSONUnderV235-1059268888-project-member</nova:user>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:        <nova:project uuid="61371c73c9fb4961886c5c22f8f871e1">tempest-ServerRescueTestJSONUnderV235-1059268888</nova:project>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:        <nova:port uuid="197929cb-aaa4-48f0-a831-7ac3f4ac5b37">
Feb 25 07:30:08 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <entry name="serial">d945940d-a1b5-4a36-b980-efda3a9efda6</entry>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <entry name="uuid">d945940d-a1b5-4a36-b980-efda3a9efda6</entry>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/d945940d-a1b5-4a36-b980-efda3a9efda6_disk.rescue">
Feb 25 07:30:08 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:30:08 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/d945940d-a1b5-4a36-b980-efda3a9efda6_disk">
Feb 25 07:30:08 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:30:08 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <target dev="vdb" bus="virtio"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/d945940d-a1b5-4a36-b980-efda3a9efda6_disk.config.rescue">
Feb 25 07:30:08 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:30:08 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:3f:10:e8"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <target dev="tap197929cb-aa"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/console.log" append="off"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:30:08 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:30:08 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:30:08 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:30:08 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.751 244018 DEBUG oslo_concurrency.lockutils [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.752 244018 DEBUG oslo_concurrency.lockutils [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.759 244018 INFO nova.virt.libvirt.driver [-] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Instance destroyed successfully.#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.833 244018 DEBUG nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.834 244018 DEBUG nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.834 244018 DEBUG nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.835 244018 DEBUG nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] No VIF found with MAC fa:16:3e:3f:10:e8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.837 244018 INFO nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Using config drive#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.873 244018 DEBUG nova.storage.rbd_utils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] rbd image d945940d-a1b5-4a36-b980-efda3a9efda6_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.886 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.900 244018 DEBUG nova.objects.instance [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lazy-loading 'ec2_ids' on Instance uuid d945940d-a1b5-4a36-b980-efda3a9efda6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:30:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:30:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:30:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:30:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.932 244018 DEBUG oslo_concurrency.processutils [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:30:08 np0005629333 nova_compute[244014]: 2026-02-25 12:30:08.966 244018 DEBUG nova.objects.instance [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lazy-loading 'keypairs' on Instance uuid d945940d-a1b5-4a36-b980-efda3a9efda6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:30:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:30:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:30:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:30:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:30:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:30:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:30:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:30:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:30:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:30:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
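The ceph-mon audit entries above show the cephadm mgr dispatching monitor commands (config generate-minimal-conf, auth get, osd tree for destroyed OSDs). A minimal sketch of issuing the same commands out-of-band with the ceph CLI, assuming a local /etc/ceph/ceph.conf and a keyring with sufficient caps (both assumptions, not taken from the log):

    # Sketch only: replay the mon_command prefixes seen in the audit log via the ceph CLI.
    import subprocess

    def mon_command(*args: str) -> str:
        """Run one ceph CLI command against the local cluster and return stdout."""
        result = subprocess.run(
            ["ceph", "--conf", "/etc/ceph/ceph.conf", *args],
            check=True, capture_output=True, text=True,
        )
        return result.stdout

    # The same prefixes that appear in the handle_command entries above:
    minimal_conf = mon_command("config", "generate-minimal-conf")
    admin_keyring = mon_command("auth", "get", "client.admin")
    destroyed_osds = mon_command("osd", "tree", "destroyed", "--format", "json")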
Feb 25 07:30:09 np0005629333 nova_compute[244014]: 2026-02-25 12:30:09.340 244018 INFO nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Creating config drive at /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/disk.config.rescue#033[00m
Feb 25 07:30:09 np0005629333 nova_compute[244014]: 2026-02-25 12:30:09.349 244018 DEBUG oslo_concurrency.processutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkhvsrgt7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:30:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3319225679' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:30:09 np0005629333 podman[304775]: 2026-02-25 12:30:09.485059663 +0000 UTC m=+0.079783543 container create 7f8358b350565310890188fcc88e33b75820e33a628dc1a81f01151ff8546de8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:30:09 np0005629333 nova_compute[244014]: 2026-02-25 12:30:09.489 244018 DEBUG oslo_concurrency.processutils [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:30:09 np0005629333 nova_compute[244014]: 2026-02-25 12:30:09.494 244018 DEBUG oslo_concurrency.processutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkhvsrgt7" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:30:09 np0005629333 nova_compute[244014]: 2026-02-25 12:30:09.529 244018 DEBUG nova.storage.rbd_utils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] rbd image d945940d-a1b5-4a36-b980-efda3a9efda6_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:30:09 np0005629333 podman[304775]: 2026-02-25 12:30:09.438544214 +0000 UTC m=+0.033268154 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:30:09 np0005629333 nova_compute[244014]: 2026-02-25 12:30:09.534 244018 DEBUG oslo_concurrency.processutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/disk.config.rescue d945940d-a1b5-4a36-b980-efda3a9efda6_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:09 np0005629333 systemd[1]: Started libpod-conmon-7f8358b350565310890188fcc88e33b75820e33a628dc1a81f01151ff8546de8.scope.
Feb 25 07:30:09 np0005629333 nova_compute[244014]: 2026-02-25 12:30:09.575 244018 DEBUG nova.compute.provider_tree [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:30:09 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:30:09 np0005629333 nova_compute[244014]: 2026-02-25 12:30:09.595 244018 DEBUG nova.scheduler.client.report [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:30:09 np0005629333 nova_compute[244014]: 2026-02-25 12:30:09.626 244018 DEBUG oslo_concurrency.lockutils [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
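For context, the inventory payload logged above determines schedulable capacity in placement as (total - reserved) * allocation_ratio per resource class. A minimal sketch of that arithmetic using the exact figures from the log line:

    # Sketch: effective capacity implied by the logged placement inventory.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for resource_class, inv in inventory.items():
        # Placement admits allocations while used + requested stays within this bound.
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{resource_class}: {capacity:g} schedulable")
    # -> VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2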
Feb 25 07:30:09 np0005629333 podman[304775]: 2026-02-25 12:30:09.647900851 +0000 UTC m=+0.242624741 container init 7f8358b350565310890188fcc88e33b75820e33a628dc1a81f01151ff8546de8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:30:09 np0005629333 nova_compute[244014]: 2026-02-25 12:30:09.651 244018 INFO nova.scheduler.client.report [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Deleted allocations for instance f7b18575-d1fc-423f-a596-8ca6d8ed08fa#033[00m
Feb 25 07:30:09 np0005629333 podman[304775]: 2026-02-25 12:30:09.656891896 +0000 UTC m=+0.251615786 container start 7f8358b350565310890188fcc88e33b75820e33a628dc1a81f01151ff8546de8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:30:09 np0005629333 relaxed_ptolemy[304814]: 167 167
Feb 25 07:30:09 np0005629333 systemd[1]: libpod-7f8358b350565310890188fcc88e33b75820e33a628dc1a81f01151ff8546de8.scope: Deactivated successfully.
Feb 25 07:30:09 np0005629333 nova_compute[244014]: 2026-02-25 12:30:09.694 244018 DEBUG nova.compute.manager [req-c1a5fcac-c25e-41ff-8315-d2a349442499 req-478137ad-e96b-4608-a279-64044a786a26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received event network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:30:09 np0005629333 nova_compute[244014]: 2026-02-25 12:30:09.696 244018 DEBUG oslo_concurrency.lockutils [req-c1a5fcac-c25e-41ff-8315-d2a349442499 req-478137ad-e96b-4608-a279-64044a786a26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:09 np0005629333 nova_compute[244014]: 2026-02-25 12:30:09.696 244018 DEBUG oslo_concurrency.lockutils [req-c1a5fcac-c25e-41ff-8315-d2a349442499 req-478137ad-e96b-4608-a279-64044a786a26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:09 np0005629333 nova_compute[244014]: 2026-02-25 12:30:09.697 244018 DEBUG oslo_concurrency.lockutils [req-c1a5fcac-c25e-41ff-8315-d2a349442499 req-478137ad-e96b-4608-a279-64044a786a26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:09 np0005629333 nova_compute[244014]: 2026-02-25 12:30:09.697 244018 DEBUG nova.compute.manager [req-c1a5fcac-c25e-41ff-8315-d2a349442499 req-478137ad-e96b-4608-a279-64044a786a26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] No waiting events found dispatching network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:30:09 np0005629333 nova_compute[244014]: 2026-02-25 12:30:09.698 244018 WARNING nova.compute.manager [req-c1a5fcac-c25e-41ff-8315-d2a349442499 req-478137ad-e96b-4608-a279-64044a786a26 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received unexpected event network-vif-plugged-27c957cd-d68f-48d8-b2e1-170275200ed3 for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:30:09 np0005629333 podman[304775]: 2026-02-25 12:30:09.710500416 +0000 UTC m=+0.305224356 container attach 7f8358b350565310890188fcc88e33b75820e33a628dc1a81f01151ff8546de8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle)
Feb 25 07:30:09 np0005629333 podman[304775]: 2026-02-25 12:30:09.711370881 +0000 UTC m=+0.306094771 container died 7f8358b350565310890188fcc88e33b75820e33a628dc1a81f01151ff8546de8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:30:09 np0005629333 nova_compute[244014]: 2026-02-25 12:30:09.716 244018 DEBUG oslo_concurrency.lockutils [None req-d9563a02-3b72-4cb0-9751-a6c1a64e6554 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "f7b18575-d1fc-423f-a596-8ca6d8ed08fa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:09 np0005629333 nova_compute[244014]: 2026-02-25 12:30:09.764 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:09 np0005629333 nova_compute[244014]: 2026-02-25 12:30:09.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:30:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 452 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.5 MiB/s wr, 176 op/s
Feb 25 07:30:09 np0005629333 systemd[1]: var-lib-containers-storage-overlay-39ce674c49af47e01cf173d3e9ff2dc63879be75715e1bf81f3f8fee4d2429c9-merged.mount: Deactivated successfully.
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.049 244018 DEBUG oslo_concurrency.processutils [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/disk.config.rescue d945940d-a1b5-4a36-b980-efda3a9efda6_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.050 244018 INFO nova.virt.libvirt.driver [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Deleting local config drive /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6/disk.config.rescue because it was imported into RBD.#033[00m
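The sequence above — mkisofs builds the config drive in the instance directory, rbd import copies it into the vms pool, then the local file is deleted — can be reproduced by hand. A minimal sketch mirroring the logged commands; the source directory is a placeholder, and the -publisher/-quiet flags are omitted for brevity:

    # Sketch: rebuild a rescue config drive and import it into RBD, as the driver logs above.
    import os
    import subprocess

    uuid = "d945940d-a1b5-4a36-b980-efda3a9efda6"  # instance UUID from the log
    iso = f"/var/lib/nova/instances/{uuid}/disk.config.rescue"
    source_dir = "/tmp/configdrive-contents"       # placeholder for the metadata tree

    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-J", "-r", "-V", "config-2", source_dir],
        check=True,
    )
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config.rescue",
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )
    os.unlink(iso)  # drop the local copy once it lives in RBD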
Feb 25 07:30:10 np0005629333 kernel: tap197929cb-aa: entered promiscuous mode
Feb 25 07:30:10 np0005629333 NetworkManager[49836]: <info>  [1772022610.1121] manager: (tap197929cb-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/281)
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.111 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:10Z|00656|binding|INFO|Claiming lport 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 for this chassis.
Feb 25 07:30:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:10Z|00657|binding|INFO|197929cb-aaa4-48f0-a831-7ac3f4ac5b37: Claiming fa:16:3e:3f:10:e8 10.100.0.4
Feb 25 07:30:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:10.120 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:10:e8 10.100.0.4'], port_security=['fa:16:3e:3f:10:e8 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd945940d-a1b5-4a36-b980-efda3a9efda6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a36e5ee4-5e24-4d80-83ee-01c487ff157c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61371c73c9fb4961886c5c22f8f871e1', 'neutron:revision_number': '5', 'neutron:security_group_ids': '95094e28-96cd-49ef-b6c0-712f53ce443e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4bfc4d05-cf0b-42bf-9420-27556b829f8f, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=197929cb-aaa4-48f0-a831-7ac3f4ac5b37) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:30:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:10.123 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 in datapath a36e5ee4-5e24-4d80-83ee-01c487ff157c bound to our chassis#033[00m
Feb 25 07:30:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:10.127 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a36e5ee4-5e24-4d80-83ee-01c487ff157c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.128 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:10Z|00658|binding|INFO|Setting lport 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 ovn-installed in OVS
Feb 25 07:30:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:10Z|00659|binding|INFO|Setting lport 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 up in Southbound
Feb 25 07:30:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:10.129 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[29edfadd-486c-4da2-84ae-7bee46631fcc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.133 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.134 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:10 np0005629333 systemd-machined[210048]: New machine qemu-84-instance-00000045.
Feb 25 07:30:10 np0005629333 systemd-udevd[304863]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:30:10 np0005629333 NetworkManager[49836]: <info>  [1772022610.1665] device (tap197929cb-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:30:10 np0005629333 NetworkManager[49836]: <info>  [1772022610.1672] device (tap197929cb-aa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:30:10 np0005629333 systemd[1]: Started Virtual Machine qemu-84-instance-00000045.
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.476 244018 DEBUG oslo_concurrency.lockutils [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "bf261ccf-c216-4383-a22a-7f0553198152" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.477 244018 DEBUG oslo_concurrency.lockutils [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.477 244018 DEBUG oslo_concurrency.lockutils [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "bf261ccf-c216-4383-a22a-7f0553198152-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.478 244018 DEBUG oslo_concurrency.lockutils [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.478 244018 DEBUG oslo_concurrency.lockutils [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.480 244018 INFO nova.compute.manager [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Terminating instance#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.481 244018 DEBUG nova.compute.manager [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:30:10 np0005629333 podman[304775]: 2026-02-25 12:30:10.601332218 +0000 UTC m=+1.196056098 container remove 7f8358b350565310890188fcc88e33b75820e33a628dc1a81f01151ff8546de8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ptolemy, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 07:30:10 np0005629333 systemd[1]: libpod-conmon-7f8358b350565310890188fcc88e33b75820e33a628dc1a81f01151ff8546de8.scope: Deactivated successfully.
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.750 244018 DEBUG nova.compute.manager [req-cf6310f9-3e33-4bbf-9e1f-49ab8421a419 req-c9729718-f75f-4269-8303-0c52d967d898 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Received event network-vif-deleted-27c957cd-d68f-48d8-b2e1-170275200ed3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.750 244018 DEBUG nova.compute.manager [req-cf6310f9-3e33-4bbf-9e1f-49ab8421a419 req-c9729718-f75f-4269-8303-0c52d967d898 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.750 244018 DEBUG oslo_concurrency.lockutils [req-cf6310f9-3e33-4bbf-9e1f-49ab8421a419 req-c9729718-f75f-4269-8303-0c52d967d898 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.750 244018 DEBUG oslo_concurrency.lockutils [req-cf6310f9-3e33-4bbf-9e1f-49ab8421a419 req-c9729718-f75f-4269-8303-0c52d967d898 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.751 244018 DEBUG oslo_concurrency.lockutils [req-cf6310f9-3e33-4bbf-9e1f-49ab8421a419 req-c9729718-f75f-4269-8303-0c52d967d898 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.751 244018 DEBUG nova.compute.manager [req-cf6310f9-3e33-4bbf-9e1f-49ab8421a419 req-c9729718-f75f-4269-8303-0c52d967d898 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] No waiting events found dispatching network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.751 244018 WARNING nova.compute.manager [req-cf6310f9-3e33-4bbf-9e1f-49ab8421a419 req-c9729718-f75f-4269-8303-0c52d967d898 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received unexpected event network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 for instance with vm_state active and task_state rescuing.#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.751 244018 DEBUG nova.compute.manager [req-cf6310f9-3e33-4bbf-9e1f-49ab8421a419 req-c9729718-f75f-4269-8303-0c52d967d898 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.752 244018 DEBUG oslo_concurrency.lockutils [req-cf6310f9-3e33-4bbf-9e1f-49ab8421a419 req-c9729718-f75f-4269-8303-0c52d967d898 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.752 244018 DEBUG oslo_concurrency.lockutils [req-cf6310f9-3e33-4bbf-9e1f-49ab8421a419 req-c9729718-f75f-4269-8303-0c52d967d898 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.752 244018 DEBUG oslo_concurrency.lockutils [req-cf6310f9-3e33-4bbf-9e1f-49ab8421a419 req-c9729718-f75f-4269-8303-0c52d967d898 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.752 244018 DEBUG nova.compute.manager [req-cf6310f9-3e33-4bbf-9e1f-49ab8421a419 req-c9729718-f75f-4269-8303-0c52d967d898 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] No waiting events found dispatching network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.752 244018 WARNING nova.compute.manager [req-cf6310f9-3e33-4bbf-9e1f-49ab8421a419 req-c9729718-f75f-4269-8303-0c52d967d898 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received unexpected event network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 for instance with vm_state active and task_state rescuing.#033[00m
Feb 25 07:30:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:30:10 np0005629333 kernel: tapc75a4bcb-92 (unregistering): left promiscuous mode
Feb 25 07:30:10 np0005629333 NetworkManager[49836]: <info>  [1772022610.7996] device (tapc75a4bcb-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.807 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:10Z|00660|binding|INFO|Releasing lport c75a4bcb-9292-41aa-b0c3-14b1433392e2 from this chassis (sb_readonly=0)
Feb 25 07:30:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:10Z|00661|binding|INFO|Setting lport c75a4bcb-9292-41aa-b0c3-14b1433392e2 down in Southbound
Feb 25 07:30:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:10Z|00662|binding|INFO|Removing iface tapc75a4bcb-92 ovn-installed in OVS
Feb 25 07:30:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:10.818 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:8c:25 10.100.0.12'], port_security=['fa:16:3e:03:8c:25 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'bf261ccf-c216-4383-a22a-7f0553198152', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c4445209a7384565a93895032b4f077e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8918a64f-99f7-4eb3-a626-bacb054cff5c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=28589015-6a4f-416a-a72c-4ed4061d2d31, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=c75a4bcb-9292-41aa-b0c3-14b1433392e2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:30:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:10.820 157129 INFO neutron.agent.ovn.metadata.agent [-] Port c75a4bcb-9292-41aa-b0c3-14b1433392e2 in datapath 85b56c79-01b6-47e7-ab3b-02e44acca3d3 unbound from our chassis#033[00m
Feb 25 07:30:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:10.823 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 85b56c79-01b6-47e7-ab3b-02e44acca3d3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:30:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:10.824 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[aabe164c-44eb-4c66-9799-272e0e23f8cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:10.824 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3 namespace which is not needed anymore#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.829 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:10 np0005629333 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000043.scope: Deactivated successfully.
Feb 25 07:30:10 np0005629333 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000043.scope: Consumed 12.463s CPU time.
Feb 25 07:30:10 np0005629333 systemd-machined[210048]: Machine qemu-77-instance-00000043 terminated.
Feb 25 07:30:10 np0005629333 podman[304904]: 2026-02-25 12:30:10.863810751 +0000 UTC m=+0.109691311 container create f68060e6291d9c3413e2c65e86a75d8b4b9b68dcfae9846a4c8ad0e0568e9b57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_swanson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle)
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:30:10 np0005629333 podman[304904]: 2026-02-25 12:30:10.79358154 +0000 UTC m=+0.039462150 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.900 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.905 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:10 np0005629333 systemd[1]: Started libpod-conmon-f68060e6291d9c3413e2c65e86a75d8b4b9b68dcfae9846a4c8ad0e0568e9b57.scope.
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.918 244018 INFO nova.virt.libvirt.driver [-] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Instance destroyed successfully.#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.918 244018 DEBUG nova.objects.instance [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lazy-loading 'resources' on Instance uuid bf261ccf-c216-4383-a22a-7f0553198152 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:30:10 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.937 244018 DEBUG nova.virt.libvirt.vif [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:29:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-789545168',display_name='tempest-ServerRescueNegativeTestJSON-server-789545168',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-789545168',id=67,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:29:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c4445209a7384565a93895032b4f077e',ramdisk_id='',reservation_id='r-3pw48dcp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-75220162',owner_user_name='tempest-ServerRescueNegativeTestJSON-75220162-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:30:04Z,user_data=None,user_id='3aef97e1c87341848423edc65828540c',uuid=bf261ccf-c216-4383-a22a-7f0553198152,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "address": "fa:16:3e:03:8c:25", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75a4bcb-92", "ovs_interfaceid": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.937 244018 DEBUG nova.network.os_vif_util [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Converting VIF {"id": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "address": "fa:16:3e:03:8c:25", "network": {"id": "85b56c79-01b6-47e7-ab3b-02e44acca3d3", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-1893799933-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c4445209a7384565a93895032b4f077e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75a4bcb-92", "ovs_interfaceid": "c75a4bcb-9292-41aa-b0c3-14b1433392e2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.938 244018 DEBUG nova.network.os_vif_util [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:03:8c:25,bridge_name='br-int',has_traffic_filtering=True,id=c75a4bcb-9292-41aa-b0c3-14b1433392e2,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75a4bcb-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.938 244018 DEBUG os_vif [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:8c:25,bridge_name='br-int',has_traffic_filtering=True,id=c75a4bcb-9292-41aa-b0c3-14b1433392e2,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75a4bcb-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:30:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ebe46a3d7e34e74ac77e833ca7df55a22a83fcdd968f588d1a0d3c09f644cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.940 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.940 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc75a4bcb-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:30:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ebe46a3d7e34e74ac77e833ca7df55a22a83fcdd968f588d1a0d3c09f644cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:30:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ebe46a3d7e34e74ac77e833ca7df55a22a83fcdd968f588d1a0d3c09f644cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:30:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ebe46a3d7e34e74ac77e833ca7df55a22a83fcdd968f588d1a0d3c09f644cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.942 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77ebe46a3d7e34e74ac77e833ca7df55a22a83fcdd968f588d1a0d3c09f644cf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.952 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.957 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:10 np0005629333 nova_compute[244014]: 2026-02-25 12:30:10.959 244018 INFO os_vif [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:8c:25,bridge_name='br-int',has_traffic_filtering=True,id=c75a4bcb-9292-41aa-b0c3-14b1433392e2,network=Network(85b56c79-01b6-47e7-ab3b-02e44acca3d3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75a4bcb-92')#033[00m
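The unplug above is issued through ovsdbapp as DelPortCommand(port=tapc75a4bcb-92, bridge=br-int, if_exists=True). A minimal sketch of the equivalent one-off operation via the ovs-vsctl CLI (assumed to be on PATH):

    # Sketch: CLI equivalent of the DelPortCommand transaction logged above.
    import subprocess

    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tapc75a4bcb-92"],
        check=True,
    )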
Feb 25 07:30:11 np0005629333 nova_compute[244014]: 2026-02-25 12:30:11.008 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for d945940d-a1b5-4a36-b980-efda3a9efda6 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 25 07:30:11 np0005629333 nova_compute[244014]: 2026-02-25 12:30:11.008 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022611.008239, d945940d-a1b5-4a36-b980-efda3a9efda6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:30:11 np0005629333 nova_compute[244014]: 2026-02-25 12:30:11.009 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:30:11 np0005629333 nova_compute[244014]: 2026-02-25 12:30:11.012 244018 DEBUG nova.compute.manager [None req-65b4eb42-95d3-47c7-a4f0-1a15626b88ad 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:30:11 np0005629333 nova_compute[244014]: 2026-02-25 12:30:11.036 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:30:11 np0005629333 nova_compute[244014]: 2026-02-25 12:30:11.038 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:30:11 np0005629333 podman[304904]: 2026-02-25 12:30:11.04892165 +0000 UTC m=+0.294802210 container init f68060e6291d9c3413e2c65e86a75d8b4b9b68dcfae9846a4c8ad0e0568e9b57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_swanson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 07:30:11 np0005629333 podman[304904]: 2026-02-25 12:30:11.059983693 +0000 UTC m=+0.305864253 container start f68060e6291d9c3413e2c65e86a75d8b4b9b68dcfae9846a4c8ad0e0568e9b57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_swanson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:30:11 np0005629333 podman[304904]: 2026-02-25 12:30:11.065797018 +0000 UTC m=+0.311677568 container attach f68060e6291d9c3413e2c65e86a75d8b4b9b68dcfae9846a4c8ad0e0568e9b57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:30:11 np0005629333 nova_compute[244014]: 2026-02-25 12:30:11.076 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022611.0104368, d945940d-a1b5-4a36-b980-efda3a9efda6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:30:11 np0005629333 nova_compute[244014]: 2026-02-25 12:30:11.077 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] VM Started (Lifecycle Event)
Feb 25 07:30:11 np0005629333 neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3[302401]: [NOTICE]   (302426) : haproxy version is 2.8.14-c23fe91
Feb 25 07:30:11 np0005629333 neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3[302401]: [NOTICE]   (302426) : path to executable is /usr/sbin/haproxy
Feb 25 07:30:11 np0005629333 neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3[302401]: [WARNING]  (302426) : Exiting Master process...
Feb 25 07:30:11 np0005629333 neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3[302401]: [ALERT]    (302426) : Current worker (302428) exited with code 143 (Terminated)
Feb 25 07:30:11 np0005629333 neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3[302401]: [WARNING]  (302426) : All workers exited. Exiting... (0)
Feb 25 07:30:11 np0005629333 systemd[1]: libpod-e2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e.scope: Deactivated successfully.
Feb 25 07:30:11 np0005629333 conmon[302401]: conmon e2300c096a2f5eef6b0f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e.scope/container/memory.events
Feb 25 07:30:11 np0005629333 nova_compute[244014]: 2026-02-25 12:30:11.099 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:30:11 np0005629333 podman[304983]: 2026-02-25 12:30:11.100809611 +0000 UTC m=+0.171401220 container died e2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 25 07:30:11 np0005629333 nova_compute[244014]: 2026-02-25 12:30:11.106 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:30:11 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e-userdata-shm.mount: Deactivated successfully.
Feb 25 07:30:11 np0005629333 systemd[1]: var-lib-containers-storage-overlay-4d4ba589ec8c9c7feb5a1c1945ac7be1440c19e55dcd78f4845f04574fea8475-merged.mount: Deactivated successfully.
Feb 25 07:30:11 np0005629333 podman[304983]: 2026-02-25 12:30:11.151579401 +0000 UTC m=+0.222171000 container cleanup e2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:30:11 np0005629333 systemd[1]: libpod-conmon-e2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e.scope: Deactivated successfully.
Feb 25 07:30:11 np0005629333 podman[305034]: 2026-02-25 12:30:11.22912996 +0000 UTC m=+0.046897261 container remove e2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 25 07:30:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:11.233 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9f916d67-3fd2-4328-8c43-aadc19a76f3a]: (4, ('Wed Feb 25 12:30:10 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3 (e2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e)\ne2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e\nWed Feb 25 12:30:11 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3 (e2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e)\ne2300c096a2f5eef6b0f90854fb9153e3f2948f0fa4813b0d2e637ef38690b5e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:30:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:11.235 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[912d6fdc-a1e0-4d2b-b435-f5dc66a88a09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:30:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:11.236 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap85b56c79-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:30:11 np0005629333 nova_compute[244014]: 2026-02-25 12:30:11.238 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:30:11 np0005629333 kernel: tap85b56c79-00: left promiscuous mode
Feb 25 07:30:11 np0005629333 nova_compute[244014]: 2026-02-25 12:30:11.244 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:30:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:11.247 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2411aeae-d619-4ed2-9729-035fceceebf4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:30:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:11.265 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1bcca248-83d8-4c4a-9377-41be0dcf5b49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:30:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:11.270 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[871ac734-e478-44da-a289-bc1faefdb471]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:30:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:11.284 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6414912b-993c-4fc9-8d56-4f38539d1b2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453785, 'reachable_time': 43175, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305050, 'error': None, 'target': 'ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:30:11 np0005629333 systemd[1]: run-netns-ovnmeta\x2d85b56c79\x2d01b6\x2d47e7\x2dab3b\x2d02e44acca3d3.mount: Deactivated successfully.
Feb 25 07:30:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:11.287 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-85b56c79-01b6-47e7-ab3b-02e44acca3d3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 07:30:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:11.288 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[01015799-c10a-4e1d-b3f3-0e39502f07df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:30:11 np0005629333 nova_compute[244014]: 2026-02-25 12:30:11.338 244018 INFO nova.virt.libvirt.driver [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Deleting instance files /var/lib/nova/instances/bf261ccf-c216-4383-a22a-7f0553198152_del
Feb 25 07:30:11 np0005629333 nova_compute[244014]: 2026-02-25 12:30:11.339 244018 INFO nova.virt.libvirt.driver [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Deletion of /var/lib/nova/instances/bf261ccf-c216-4383-a22a-7f0553198152_del complete
Feb 25 07:30:11 np0005629333 nova_compute[244014]: 2026-02-25 12:30:11.402 244018 INFO nova.compute.manager [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Took 0.92 seconds to destroy the instance on the hypervisor.
Feb 25 07:30:11 np0005629333 nova_compute[244014]: 2026-02-25 12:30:11.403 244018 DEBUG oslo.service.loopingcall [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 07:30:11 np0005629333 nova_compute[244014]: 2026-02-25 12:30:11.403 244018 DEBUG nova.compute.manager [-] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 07:30:11 np0005629333 nova_compute[244014]: 2026-02-25 12:30:11.403 244018 DEBUG nova.network.neutron [-] [instance: bf261ccf-c216-4383-a22a-7f0553198152] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 07:30:11 np0005629333 thirsty_swanson[304976]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:30:11 np0005629333 thirsty_swanson[304976]: --> All data devices are unavailable
Feb 25 07:30:11 np0005629333 systemd[1]: libpod-f68060e6291d9c3413e2c65e86a75d8b4b9b68dcfae9846a4c8ad0e0568e9b57.scope: Deactivated successfully.
Feb 25 07:30:11 np0005629333 podman[304904]: 2026-02-25 12:30:11.592367091 +0000 UTC m=+0.838247651 container died f68060e6291d9c3413e2c65e86a75d8b4b9b68dcfae9846a4c8ad0e0568e9b57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_swanson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 07:30:11 np0005629333 systemd[1]: var-lib-containers-storage-overlay-77ebe46a3d7e34e74ac77e833ca7df55a22a83fcdd968f588d1a0d3c09f644cf-merged.mount: Deactivated successfully.
Feb 25 07:30:11 np0005629333 nova_compute[244014]: 2026-02-25 12:30:11.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:30:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 408 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 217 op/s
Feb 25 07:30:11 np0005629333 podman[304904]: 2026-02-25 12:30:11.96298352 +0000 UTC m=+1.208864030 container remove f68060e6291d9c3413e2c65e86a75d8b4b9b68dcfae9846a4c8ad0e0568e9b57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 07:30:11 np0005629333 systemd[1]: libpod-conmon-f68060e6291d9c3413e2c65e86a75d8b4b9b68dcfae9846a4c8ad0e0568e9b57.scope: Deactivated successfully.
Feb 25 07:30:12 np0005629333 nova_compute[244014]: 2026-02-25 12:30:12.099 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022597.098086, 8086400b-ac70-4c79-928b-4f1966084384 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:30:12 np0005629333 nova_compute[244014]: 2026-02-25 12:30:12.099 244018 INFO nova.compute.manager [-] [instance: 8086400b-ac70-4c79-928b-4f1966084384] VM Stopped (Lifecycle Event)
Feb 25 07:30:12 np0005629333 nova_compute[244014]: 2026-02-25 12:30:12.120 244018 DEBUG nova.compute.manager [None req-a6b66b3f-d954-4db5-9409-a940349a8506 - - - - - -] [instance: 8086400b-ac70-4c79-928b-4f1966084384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:30:12 np0005629333 nova_compute[244014]: 2026-02-25 12:30:12.203 244018 DEBUG nova.network.neutron [-] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:30:12 np0005629333 nova_compute[244014]: 2026-02-25 12:30:12.226 244018 INFO nova.compute.manager [-] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Took 0.82 seconds to deallocate network for instance.
Feb 25 07:30:12 np0005629333 nova_compute[244014]: 2026-02-25 12:30:12.281 244018 DEBUG oslo_concurrency.lockutils [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:30:12 np0005629333 nova_compute[244014]: 2026-02-25 12:30:12.281 244018 DEBUG oslo_concurrency.lockutils [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:30:12 np0005629333 nova_compute[244014]: 2026-02-25 12:30:12.301 244018 DEBUG nova.scheduler.client.report [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 07:30:12 np0005629333 nova_compute[244014]: 2026-02-25 12:30:12.319 244018 DEBUG nova.scheduler.client.report [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 07:30:12 np0005629333 nova_compute[244014]: 2026-02-25 12:30:12.319 244018 DEBUG nova.compute.provider_tree [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 07:30:12 np0005629333 nova_compute[244014]: 2026-02-25 12:30:12.340 244018 DEBUG nova.scheduler.client.report [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 07:30:12 np0005629333 nova_compute[244014]: 2026-02-25 12:30:12.363 244018 DEBUG nova.scheduler.client.report [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 07:30:12 np0005629333 nova_compute[244014]: 2026-02-25 12:30:12.420 244018 DEBUG oslo_concurrency.processutils [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:30:12 np0005629333 podman[305144]: 2026-02-25 12:30:12.492399933 +0000 UTC m=+0.058233252 container create 6ee49bd97186056584ebc111fef97fbf0f366005a3366dc82e7dcf16827ac978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:30:12 np0005629333 systemd[1]: Started libpod-conmon-6ee49bd97186056584ebc111fef97fbf0f366005a3366dc82e7dcf16827ac978.scope.
Feb 25 07:30:12 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:30:12 np0005629333 podman[305144]: 2026-02-25 12:30:12.466061706 +0000 UTC m=+0.031895116 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:30:12 np0005629333 podman[305144]: 2026-02-25 12:30:12.5779855 +0000 UTC m=+0.143818859 container init 6ee49bd97186056584ebc111fef97fbf0f366005a3366dc82e7dcf16827ac978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:30:12 np0005629333 podman[305144]: 2026-02-25 12:30:12.58360632 +0000 UTC m=+0.149439629 container start 6ee49bd97186056584ebc111fef97fbf0f366005a3366dc82e7dcf16827ac978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Feb 25 07:30:12 np0005629333 podman[305144]: 2026-02-25 12:30:12.587214372 +0000 UTC m=+0.153047721 container attach 6ee49bd97186056584ebc111fef97fbf0f366005a3366dc82e7dcf16827ac978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 25 07:30:12 np0005629333 nice_lovelace[305162]: 167 167
Feb 25 07:30:12 np0005629333 systemd[1]: libpod-6ee49bd97186056584ebc111fef97fbf0f366005a3366dc82e7dcf16827ac978.scope: Deactivated successfully.
Feb 25 07:30:12 np0005629333 podman[305144]: 2026-02-25 12:30:12.588412586 +0000 UTC m=+0.154245895 container died 6ee49bd97186056584ebc111fef97fbf0f366005a3366dc82e7dcf16827ac978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_lovelace, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:30:12 np0005629333 systemd[1]: var-lib-containers-storage-overlay-3a3acee57314c787e20438ea5caca15a18f1dca625b53f986f8d8b1f400173a2-merged.mount: Deactivated successfully.
Feb 25 07:30:12 np0005629333 podman[305144]: 2026-02-25 12:30:12.622649437 +0000 UTC m=+0.188482776 container remove 6ee49bd97186056584ebc111fef97fbf0f366005a3366dc82e7dcf16827ac978 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_lovelace, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:30:12 np0005629333 systemd[1]: libpod-conmon-6ee49bd97186056584ebc111fef97fbf0f366005a3366dc82e7dcf16827ac978.scope: Deactivated successfully.
Feb 25 07:30:12 np0005629333 podman[305204]: 2026-02-25 12:30:12.788992724 +0000 UTC m=+0.058685745 container create 89ae5c9d0c07527075e8601aa74fadaf177d841949d158430eee4bfd7e8d33ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keldysh, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:30:12 np0005629333 systemd[1]: Started libpod-conmon-89ae5c9d0c07527075e8601aa74fadaf177d841949d158430eee4bfd7e8d33ed.scope.
Feb 25 07:30:12 np0005629333 podman[305204]: 2026-02-25 12:30:12.763334046 +0000 UTC m=+0.033027127 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:30:12 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:30:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a741be0c23eb2bfe3d05cc207dad273e536d6620ccc8491fe745c0d7a5f9c90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:30:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a741be0c23eb2bfe3d05cc207dad273e536d6620ccc8491fe745c0d7a5f9c90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:30:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a741be0c23eb2bfe3d05cc207dad273e536d6620ccc8491fe745c0d7a5f9c90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:30:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a741be0c23eb2bfe3d05cc207dad273e536d6620ccc8491fe745c0d7a5f9c90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:30:12 np0005629333 nova_compute[244014]: 2026-02-25 12:30:12.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:30:12 np0005629333 nova_compute[244014]: 2026-02-25 12:30:12.879 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:30:12 np0005629333 podman[305204]: 2026-02-25 12:30:12.884504003 +0000 UTC m=+0.154197064 container init 89ae5c9d0c07527075e8601aa74fadaf177d841949d158430eee4bfd7e8d33ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 07:30:12 np0005629333 podman[305204]: 2026-02-25 12:30:12.896778521 +0000 UTC m=+0.166471542 container start 89ae5c9d0c07527075e8601aa74fadaf177d841949d158430eee4bfd7e8d33ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keldysh, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 07:30:12 np0005629333 podman[305204]: 2026-02-25 12:30:12.900563928 +0000 UTC m=+0.170256949 container attach 89ae5c9d0c07527075e8601aa74fadaf177d841949d158430eee4bfd7e8d33ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:30:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:30:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2322769673' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:30:12 np0005629333 nova_compute[244014]: 2026-02-25 12:30:12.969 244018 DEBUG oslo_concurrency.processutils [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:30:12 np0005629333 nova_compute[244014]: 2026-02-25 12:30:12.976 244018 DEBUG nova.compute.provider_tree [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:30:13 np0005629333 nova_compute[244014]: 2026-02-25 12:30:13.001 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:30:13 np0005629333 nova_compute[244014]: 2026-02-25 12:30:13.052 244018 DEBUG nova.compute.manager [req-3051fbc7-2760-49db-833d-579c86dcbd65 req-a26cf4b0-b744-423c-9bd3-3f00b1d2a152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Received event network-vif-unplugged-c75a4bcb-9292-41aa-b0c3-14b1433392e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:30:13 np0005629333 nova_compute[244014]: 2026-02-25 12:30:13.054 244018 DEBUG oslo_concurrency.lockutils [req-3051fbc7-2760-49db-833d-579c86dcbd65 req-a26cf4b0-b744-423c-9bd3-3f00b1d2a152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "bf261ccf-c216-4383-a22a-7f0553198152-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:30:13 np0005629333 nova_compute[244014]: 2026-02-25 12:30:13.056 244018 DEBUG oslo_concurrency.lockutils [req-3051fbc7-2760-49db-833d-579c86dcbd65 req-a26cf4b0-b744-423c-9bd3-3f00b1d2a152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:30:13 np0005629333 nova_compute[244014]: 2026-02-25 12:30:13.057 244018 DEBUG oslo_concurrency.lockutils [req-3051fbc7-2760-49db-833d-579c86dcbd65 req-a26cf4b0-b744-423c-9bd3-3f00b1d2a152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:30:13 np0005629333 nova_compute[244014]: 2026-02-25 12:30:13.057 244018 DEBUG nova.compute.manager [req-3051fbc7-2760-49db-833d-579c86dcbd65 req-a26cf4b0-b744-423c-9bd3-3f00b1d2a152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] No waiting events found dispatching network-vif-unplugged-c75a4bcb-9292-41aa-b0c3-14b1433392e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:30:13 np0005629333 nova_compute[244014]: 2026-02-25 12:30:13.058 244018 WARNING nova.compute.manager [req-3051fbc7-2760-49db-833d-579c86dcbd65 req-a26cf4b0-b744-423c-9bd3-3f00b1d2a152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Received unexpected event network-vif-unplugged-c75a4bcb-9292-41aa-b0c3-14b1433392e2 for instance with vm_state deleted and task_state None.
Feb 25 07:30:13 np0005629333 nova_compute[244014]: 2026-02-25 12:30:13.058 244018 DEBUG nova.compute.manager [req-3051fbc7-2760-49db-833d-579c86dcbd65 req-a26cf4b0-b744-423c-9bd3-3f00b1d2a152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Received event network-vif-plugged-c75a4bcb-9292-41aa-b0c3-14b1433392e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:30:13 np0005629333 nova_compute[244014]: 2026-02-25 12:30:13.059 244018 DEBUG oslo_concurrency.lockutils [req-3051fbc7-2760-49db-833d-579c86dcbd65 req-a26cf4b0-b744-423c-9bd3-3f00b1d2a152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "bf261ccf-c216-4383-a22a-7f0553198152-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:30:13 np0005629333 nova_compute[244014]: 2026-02-25 12:30:13.059 244018 DEBUG oslo_concurrency.lockutils [req-3051fbc7-2760-49db-833d-579c86dcbd65 req-a26cf4b0-b744-423c-9bd3-3f00b1d2a152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:30:13 np0005629333 nova_compute[244014]: 2026-02-25 12:30:13.060 244018 DEBUG oslo_concurrency.lockutils [req-3051fbc7-2760-49db-833d-579c86dcbd65 req-a26cf4b0-b744-423c-9bd3-3f00b1d2a152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:30:13 np0005629333 nova_compute[244014]: 2026-02-25 12:30:13.060 244018 DEBUG nova.compute.manager [req-3051fbc7-2760-49db-833d-579c86dcbd65 req-a26cf4b0-b744-423c-9bd3-3f00b1d2a152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] No waiting events found dispatching network-vif-plugged-c75a4bcb-9292-41aa-b0c3-14b1433392e2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:30:13 np0005629333 nova_compute[244014]: 2026-02-25 12:30:13.061 244018 WARNING nova.compute.manager [req-3051fbc7-2760-49db-833d-579c86dcbd65 req-a26cf4b0-b744-423c-9bd3-3f00b1d2a152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Received unexpected event network-vif-plugged-c75a4bcb-9292-41aa-b0c3-14b1433392e2 for instance with vm_state deleted and task_state None.
Feb 25 07:30:13 np0005629333 nova_compute[244014]: 2026-02-25 12:30:13.061 244018 DEBUG nova.compute.manager [req-3051fbc7-2760-49db-833d-579c86dcbd65 req-a26cf4b0-b744-423c-9bd3-3f00b1d2a152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Received event network-vif-deleted-c75a4bcb-9292-41aa-b0c3-14b1433392e2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:30:13 np0005629333 nova_compute[244014]: 2026-02-25 12:30:13.073 244018 DEBUG nova.scheduler.client.report [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]: {
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:    "0": [
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:        {
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "devices": [
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "/dev/loop3"
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            ],
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "lv_name": "ceph_lv0",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "lv_size": "21470642176",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "name": "ceph_lv0",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "tags": {
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.cluster_name": "ceph",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.crush_device_class": "",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.encrypted": "0",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.objectstore": "bluestore",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.osd_id": "0",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.type": "block",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.vdo": "0",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.with_tpm": "0"
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            },
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "type": "block",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "vg_name": "ceph_vg0"
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:        }
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:    ],
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:    "1": [
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:        {
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "devices": [
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "/dev/loop4"
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            ],
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "lv_name": "ceph_lv1",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "lv_size": "21470642176",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "name": "ceph_lv1",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "tags": {
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.cluster_name": "ceph",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.crush_device_class": "",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.encrypted": "0",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.objectstore": "bluestore",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.osd_id": "1",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.type": "block",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.vdo": "0",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.with_tpm": "0"
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            },
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "type": "block",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "vg_name": "ceph_vg1"
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:        }
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:    ],
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:    "2": [
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:        {
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "devices": [
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "/dev/loop5"
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            ],
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "lv_name": "ceph_lv2",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "lv_size": "21470642176",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "name": "ceph_lv2",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "tags": {
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.cluster_name": "ceph",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.crush_device_class": "",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.encrypted": "0",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.objectstore": "bluestore",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.osd_id": "2",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.type": "block",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.vdo": "0",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:                "ceph.with_tpm": "0"
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            },
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "type": "block",
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:            "vg_name": "ceph_vg2"
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:        }
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]:    ]
Feb 25 07:30:13 np0005629333 nifty_keldysh[305222]: }
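The nifty_keldysh output above is what appears to be `ceph-volume lvm list --format json` from a short-lived cephadm container: a map of OSD id ("1", "2", ...) to the logical volumes backing each OSD, with the ceph.* metadata carried both as a flat `lv_tags` string and as the parsed `tags` object. A minimal sketch of pulling the OSD-to-device mapping out of a captured copy (the file name is an assumption, not from the log):

    import json

    # Parse a saved copy of the listing above.
    with open("ceph-volume-lvm-list.json") as f:
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items()):
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid={tags.get('ceph.osd_fsid')}, "
                  f"objectstore={tags.get('ceph.objectstore')})")

For osd 2 above this would print the /dev/ceph_vg2/ceph_lv2 volume on /dev/loop5 with its bluestore osd_fsid.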
Feb 25 07:30:13 np0005629333 systemd[1]: libpod-89ae5c9d0c07527075e8601aa74fadaf177d841949d158430eee4bfd7e8d33ed.scope: Deactivated successfully.
Feb 25 07:30:13 np0005629333 podman[305204]: 2026-02-25 12:30:13.163870545 +0000 UTC m=+0.433563606 container died 89ae5c9d0c07527075e8601aa74fadaf177d841949d158430eee4bfd7e8d33ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:30:13 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5a741be0c23eb2bfe3d05cc207dad273e536d6620ccc8491fe745c0d7a5f9c90-merged.mount: Deactivated successfully.
Feb 25 07:30:13 np0005629333 podman[305204]: 2026-02-25 12:30:13.214233743 +0000 UTC m=+0.483926754 container remove 89ae5c9d0c07527075e8601aa74fadaf177d841949d158430eee4bfd7e8d33ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:30:13 np0005629333 systemd[1]: libpod-conmon-89ae5c9d0c07527075e8601aa74fadaf177d841949d158430eee4bfd7e8d33ed.scope: Deactivated successfully.
Feb 25 07:30:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:13.458 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:30:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:13.459 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 25 07:30:13 np0005629333 nova_compute[244014]: 2026-02-25 12:30:13.459 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:13 np0005629333 nova_compute[244014]: 2026-02-25 12:30:13.464 244018 DEBUG oslo_concurrency.lockutils [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:13 np0005629333 nova_compute[244014]: 2026-02-25 12:30:13.467 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.467s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:13 np0005629333 nova_compute[244014]: 2026-02-25 12:30:13.468 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:13 np0005629333 nova_compute[244014]: 2026-02-25 12:30:13.468 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:30:13 np0005629333 nova_compute[244014]: 2026-02-25 12:30:13.469 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:13 np0005629333 nova_compute[244014]: 2026-02-25 12:30:13.589 244018 INFO nova.scheduler.client.report [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Deleted allocations for instance bf261ccf-c216-4383-a22a-7f0553198152#033[00m
Feb 25 07:30:13 np0005629333 podman[305307]: 2026-02-25 12:30:13.676360808 +0000 UTC m=+0.046111589 container create a354767be8b9029882df904eb8ada11530b29756f5095288e0221ea3b97557f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_booth, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 07:30:13 np0005629333 systemd[1]: Started libpod-conmon-a354767be8b9029882df904eb8ada11530b29756f5095288e0221ea3b97557f5.scope.
Feb 25 07:30:13 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:30:13 np0005629333 podman[305307]: 2026-02-25 12:30:13.658010037 +0000 UTC m=+0.027760838 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:30:13 np0005629333 podman[305307]: 2026-02-25 12:30:13.763490749 +0000 UTC m=+0.133241570 container init a354767be8b9029882df904eb8ada11530b29756f5095288e0221ea3b97557f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 07:30:13 np0005629333 nova_compute[244014]: 2026-02-25 12:30:13.767 244018 DEBUG oslo_concurrency.lockutils [None req-d68baa93-9992-4136-9544-5bb785d80421 3aef97e1c87341848423edc65828540c c4445209a7384565a93895032b4f077e - - default default] Lock "bf261ccf-c216-4383-a22a-7f0553198152" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.290s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:13 np0005629333 podman[305307]: 2026-02-25 12:30:13.771142405 +0000 UTC m=+0.140893176 container start a354767be8b9029882df904eb8ada11530b29756f5095288e0221ea3b97557f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_booth, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:30:13 np0005629333 podman[305307]: 2026-02-25 12:30:13.775068167 +0000 UTC m=+0.144818958 container attach a354767be8b9029882df904eb8ada11530b29756f5095288e0221ea3b97557f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_booth, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:30:13 np0005629333 bold_booth[305340]: 167 167
Feb 25 07:30:13 np0005629333 systemd[1]: libpod-a354767be8b9029882df904eb8ada11530b29756f5095288e0221ea3b97557f5.scope: Deactivated successfully.
Feb 25 07:30:13 np0005629333 podman[305307]: 2026-02-25 12:30:13.777153936 +0000 UTC m=+0.146904687 container died a354767be8b9029882df904eb8ada11530b29756f5095288e0221ea3b97557f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_booth, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 07:30:13 np0005629333 systemd[1]: var-lib-containers-storage-overlay-383781a54c6752887e248227683818866c6af9180adade514d8b66ed9dff04cd-merged.mount: Deactivated successfully.
Feb 25 07:30:13 np0005629333 podman[305307]: 2026-02-25 12:30:13.828547793 +0000 UTC m=+0.198298554 container remove a354767be8b9029882df904eb8ada11530b29756f5095288e0221ea3b97557f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_booth, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 07:30:13 np0005629333 systemd[1]: libpod-conmon-a354767be8b9029882df904eb8ada11530b29756f5095288e0221ea3b97557f5.scope: Deactivated successfully.
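The bold_booth sequence just above traces one short-lived cephadm helper container through the full podman lifecycle: image pull, create, init, start, attach, exit ("died" within milliseconds of starting, after printing "167 167"), remove, and the matching systemd scope and overlay-mount teardown. The same event stream can be watched live with standard podman tooling; a sketch (the time window is an arbitrary choice):

    import subprocess

    # Stream recent container lifecycle events (create/init/start/attach/
    # died/remove); runs until interrupted.
    subprocess.run(["podman", "events", "--since", "5m"], check=True)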
Feb 25 07:30:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 279 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 214 op/s
Feb 25 07:30:14 np0005629333 podman[305363]: 2026-02-25 12:30:14.000904981 +0000 UTC m=+0.052375826 container create c279bef2f20fb0e53a10ea471930e4a8e6e75b4151fc15ea3fdf85e1b06d9086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_taussig, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:30:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:30:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4101531742' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:30:14 np0005629333 systemd[1]: Started libpod-conmon-c279bef2f20fb0e53a10ea471930e4a8e6e75b4151fc15ea3fdf85e1b06d9086.scope.
Feb 25 07:30:14 np0005629333 nova_compute[244014]: 2026-02-25 12:30:14.071 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
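The resource tracker here shells out to the exact command shown in the log, `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf`, to size the RBD-backed disk pool; the ceph-mon audit lines above show the matching `{"prefix": "df", "format": "json"}` dispatch from client.openstack. A sketch of the same round trip outside Nova (the summary fields read are standard `ceph df` JSON output, but the printing is illustrative):

    import json
    import subprocess

    # Same command Nova runs; needs a reachable cluster and the
    # client.openstack keyring referenced by ceph.conf.
    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout

    stats = json.loads(out)["stats"]
    print(f"{stats['total_avail_bytes'] / 2**30:.1f} GiB free of "
          f"{stats['total_bytes'] / 2**30:.1f} GiB")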
Feb 25 07:30:14 np0005629333 podman[305363]: 2026-02-25 12:30:13.978188217 +0000 UTC m=+0.029659082 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:30:14 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:30:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d43e1317d961cd7f0f34ea745f3a16587e636f1f83e6dc60777b648ca38c5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:30:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d43e1317d961cd7f0f34ea745f3a16587e636f1f83e6dc60777b648ca38c5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:30:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d43e1317d961cd7f0f34ea745f3a16587e636f1f83e6dc60777b648ca38c5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:30:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d43e1317d961cd7f0f34ea745f3a16587e636f1f83e6dc60777b648ca38c5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:30:14 np0005629333 podman[305363]: 2026-02-25 12:30:14.107622787 +0000 UTC m=+0.159093622 container init c279bef2f20fb0e53a10ea471930e4a8e6e75b4151fc15ea3fdf85e1b06d9086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_taussig, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 07:30:14 np0005629333 podman[305363]: 2026-02-25 12:30:14.114328137 +0000 UTC m=+0.165798972 container start c279bef2f20fb0e53a10ea471930e4a8e6e75b4151fc15ea3fdf85e1b06d9086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_taussig, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 07:30:14 np0005629333 podman[305363]: 2026-02-25 12:30:14.118968909 +0000 UTC m=+0.170439794 container attach c279bef2f20fb0e53a10ea471930e4a8e6e75b4151fc15ea3fdf85e1b06d9086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_taussig, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 07:30:14 np0005629333 nova_compute[244014]: 2026-02-25 12:30:14.261 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:30:14 np0005629333 nova_compute[244014]: 2026-02-25 12:30:14.262 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:30:14 np0005629333 nova_compute[244014]: 2026-02-25 12:30:14.262 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000045 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:30:14 np0005629333 nova_compute[244014]: 2026-02-25 12:30:14.410 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:30:14 np0005629333 nova_compute[244014]: 2026-02-25 12:30:14.411 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3735MB free_disk=59.855410382151604GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 07:30:14 np0005629333 nova_compute[244014]: 2026-02-25 12:30:14.412 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:14 np0005629333 nova_compute[244014]: 2026-02-25 12:30:14.412 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:14.460 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:30:14 np0005629333 nova_compute[244014]: 2026-02-25 12:30:14.480 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance d945940d-a1b5-4a36-b980-efda3a9efda6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:30:14 np0005629333 nova_compute[244014]: 2026-02-25 12:30:14.481 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 07:30:14 np0005629333 nova_compute[244014]: 2026-02-25 12:30:14.481 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 07:30:14 np0005629333 nova_compute[244014]: 2026-02-25 12:30:14.521 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:14 np0005629333 lvm[305479]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:30:14 np0005629333 lvm[305479]: VG ceph_vg0 finished
Feb 25 07:30:14 np0005629333 lvm[305481]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:30:14 np0005629333 lvm[305481]: VG ceph_vg1 finished
Feb 25 07:30:14 np0005629333 lvm[305483]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:30:14 np0005629333 lvm[305483]: VG ceph_vg2 finished
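The three lvm[] pairs above are LVM event-based activation reporting that each loop-device PV has come online and that its single-PV volume group is therefore complete. The resulting layout can be cross-checked against the ceph-volume listing earlier; a sketch using the VG names from the log:

    import subprocess

    # `lvs` accepts VG names as positional arguments.
    subprocess.run(
        ["lvs", "-o", "lv_name,vg_name,devices",
         "ceph_vg0", "ceph_vg1", "ceph_vg2"],
        check=True,
    )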
Feb 25 07:30:14 np0005629333 nova_compute[244014]: 2026-02-25 12:30:14.766 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:14 np0005629333 nostalgic_taussig[305380]: {}
Feb 25 07:30:14 np0005629333 systemd[1]: libpod-c279bef2f20fb0e53a10ea471930e4a8e6e75b4151fc15ea3fdf85e1b06d9086.scope: Deactivated successfully.
Feb 25 07:30:14 np0005629333 podman[305363]: 2026-02-25 12:30:14.852246042 +0000 UTC m=+0.903716907 container died c279bef2f20fb0e53a10ea471930e4a8e6e75b4151fc15ea3fdf85e1b06d9086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_taussig, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:30:14 np0005629333 systemd[1]: libpod-c279bef2f20fb0e53a10ea471930e4a8e6e75b4151fc15ea3fdf85e1b06d9086.scope: Consumed 1.030s CPU time.
Feb 25 07:30:14 np0005629333 systemd[1]: var-lib-containers-storage-overlay-86d43e1317d961cd7f0f34ea745f3a16587e636f1f83e6dc60777b648ca38c5b-merged.mount: Deactivated successfully.
Feb 25 07:30:14 np0005629333 podman[305363]: 2026-02-25 12:30:14.911223825 +0000 UTC m=+0.962694690 container remove c279bef2f20fb0e53a10ea471930e4a8e6e75b4151fc15ea3fdf85e1b06d9086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_taussig, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Feb 25 07:30:14 np0005629333 systemd[1]: libpod-conmon-c279bef2f20fb0e53a10ea471930e4a8e6e75b4151fc15ea3fdf85e1b06d9086.scope: Deactivated successfully.
Feb 25 07:30:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:30:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:30:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:30:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:30:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:30:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1633452403' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:30:15 np0005629333 nova_compute[244014]: 2026-02-25 12:30:15.097 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:30:15 np0005629333 nova_compute[244014]: 2026-02-25 12:30:15.104 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:30:15 np0005629333 nova_compute[244014]: 2026-02-25 12:30:15.219 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
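The inventory dict in this report-client line is what Placement turns into schedulable capacity: per resource class, capacity = (total - reserved) * allocation_ratio. Worked through with the logged numbers (the formula is the standard Placement calculation; the script is just arithmetic):

    # Inventory as logged for provider cb4dae98-2ac3-4218-9445-2320139e12ad.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g}")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2

Which is why the tracker can report 8 total vcpus while Placement accepts claims up to 32.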
Feb 25 07:30:15 np0005629333 nova_compute[244014]: 2026-02-25 12:30:15.409 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 07:30:15 np0005629333 nova_compute[244014]: 2026-02-25 12:30:15.410 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.998s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 07:30:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 23K writes, 94K keys, 23K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.04 MB/s#012Cumulative WAL: 23K writes, 8073 syncs, 2.96 writes per sync, written: 0.09 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 45K keys, 11K commit groups, 1.0 writes per commit group, ingest: 45.83 MB, 0.08 MB/s#012Interval WAL: 11K writes, 4632 syncs, 2.56 writes per sync, written: 0.04 GB, 0.08 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 07:30:15 np0005629333 nova_compute[244014]: 2026-02-25 12:30:15.944 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:16 np0005629333 nova_compute[244014]: 2026-02-25 12:30:16.093 244018 DEBUG nova.compute.manager [req-eb943151-2b35-4c25-be1f-2b985155c7f5 req-1c555028-62f5-4d3d-9b6f-ea941aac1da1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-changed-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:30:16 np0005629333 nova_compute[244014]: 2026-02-25 12:30:16.094 244018 DEBUG nova.compute.manager [req-eb943151-2b35-4c25-be1f-2b985155c7f5 req-1c555028-62f5-4d3d-9b6f-ea941aac1da1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Refreshing instance network info cache due to event network-changed-197929cb-aaa4-48f0-a831-7ac3f4ac5b37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:30:16 np0005629333 nova_compute[244014]: 2026-02-25 12:30:16.094 244018 DEBUG oslo_concurrency.lockutils [req-eb943151-2b35-4c25-be1f-2b985155c7f5 req-1c555028-62f5-4d3d-9b6f-ea941aac1da1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:30:16 np0005629333 nova_compute[244014]: 2026-02-25 12:30:16.094 244018 DEBUG oslo_concurrency.lockutils [req-eb943151-2b35-4c25-be1f-2b985155c7f5 req-1c555028-62f5-4d3d-9b6f-ea941aac1da1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:30:16 np0005629333 nova_compute[244014]: 2026-02-25 12:30:16.094 244018 DEBUG nova.network.neutron [req-eb943151-2b35-4c25-be1f-2b985155c7f5 req-1c555028-62f5-4d3d-9b6f-ea941aac1da1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Refreshing network info cache for port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:30:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 279 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 159 op/s
Feb 25 07:30:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:30:16 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:30:16 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:30:17 np0005629333 nova_compute[244014]: 2026-02-25 12:30:17.543 244018 DEBUG nova.compute.manager [req-71810bc1-f913-4f71-846e-93c29adc8bf6 req-caa12d45-471b-41ca-8e87-53f716c801d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-changed-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:30:17 np0005629333 nova_compute[244014]: 2026-02-25 12:30:17.544 244018 DEBUG nova.compute.manager [req-71810bc1-f913-4f71-846e-93c29adc8bf6 req-caa12d45-471b-41ca-8e87-53f716c801d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Refreshing instance network info cache due to event network-changed-197929cb-aaa4-48f0-a831-7ac3f4ac5b37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:30:17 np0005629333 nova_compute[244014]: 2026-02-25 12:30:17.545 244018 DEBUG oslo_concurrency.lockutils [req-71810bc1-f913-4f71-846e-93c29adc8bf6 req-caa12d45-471b-41ca-8e87-53f716c801d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:30:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 279 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 183 op/s
Feb 25 07:30:18 np0005629333 nova_compute[244014]: 2026-02-25 12:30:18.409 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:30:18 np0005629333 nova_compute[244014]: 2026-02-25 12:30:18.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:30:18 np0005629333 nova_compute[244014]: 2026-02-25 12:30:18.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 07:30:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 07:30:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.2 total, 600.0 interval#012Cumulative writes: 25K writes, 100K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.04 MB/s#012Cumulative WAL: 25K writes, 8835 syncs, 2.92 writes per sync, written: 0.09 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 12K writes, 45K keys, 12K commit groups, 1.0 writes per commit group, ingest: 47.52 MB, 0.08 MB/s#012Interval WAL: 12K writes, 4809 syncs, 2.52 writes per sync, written: 0.05 GB, 0.08 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 07:30:19 np0005629333 nova_compute[244014]: 2026-02-25 12:30:19.768 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 279 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 489 KiB/s wr, 160 op/s
Feb 25 07:30:20 np0005629333 nova_compute[244014]: 2026-02-25 12:30:20.373 244018 DEBUG nova.network.neutron [req-eb943151-2b35-4c25-be1f-2b985155c7f5 req-1c555028-62f5-4d3d-9b6f-ea941aac1da1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Updated VIF entry in instance network info cache for port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:30:20 np0005629333 nova_compute[244014]: 2026-02-25 12:30:20.373 244018 DEBUG nova.network.neutron [req-eb943151-2b35-4c25-be1f-2b985155c7f5 req-1c555028-62f5-4d3d-9b6f-ea941aac1da1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Updating instance_info_cache with network_info: [{"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
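The network_info blob cached here is a list of VIF dicts; the fixed address, MTU, and OVN/OVS binding details for port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 are all nested inside it. A parsing sketch over a literal trimmed to the fields it touches (all values copied from the line above):

    # Trimmed from the logged cache update; only the fields used below.
    network_info = [{
        "id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37",
        "address": "fa:16:3e:3f:10:e8",
        "network": {
            "meta": {"mtu": 1442},
            "subnets": [{"ips": [{"address": "10.100.0.4"}]}],
        },
    }]

    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["id"], vif["address"], ips, vif["network"]["meta"]["mtu"])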
Feb 25 07:30:20 np0005629333 nova_compute[244014]: 2026-02-25 12:30:20.412 244018 DEBUG oslo_concurrency.lockutils [req-eb943151-2b35-4c25-be1f-2b985155c7f5 req-1c555028-62f5-4d3d-9b6f-ea941aac1da1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:30:20 np0005629333 nova_compute[244014]: 2026-02-25 12:30:20.413 244018 DEBUG oslo_concurrency.lockutils [req-71810bc1-f913-4f71-846e-93c29adc8bf6 req-caa12d45-471b-41ca-8e87-53f716c801d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:30:20 np0005629333 nova_compute[244014]: 2026-02-25 12:30:20.413 244018 DEBUG nova.network.neutron [req-71810bc1-f913-4f71-846e-93c29adc8bf6 req-caa12d45-471b-41ca-8e87-53f716c801d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Refreshing network info cache for port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:30:20 np0005629333 nova_compute[244014]: 2026-02-25 12:30:20.947 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:21 np0005629333 nova_compute[244014]: 2026-02-25 12:30:21.001 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:30:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 279 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 490 KiB/s wr, 174 op/s
Feb 25 07:30:22 np0005629333 nova_compute[244014]: 2026-02-25 12:30:22.324 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022607.3221087, f7b18575-d1fc-423f-a596-8ca6d8ed08fa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:30:22 np0005629333 nova_compute[244014]: 2026-02-25 12:30:22.324 244018 INFO nova.compute.manager [-] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:30:22 np0005629333 nova_compute[244014]: 2026-02-25 12:30:22.403 244018 DEBUG nova.compute.manager [None req-822b9d25-2dc2-4944-8032-363fdbdfb602 - - - - - -] [instance: f7b18575-d1fc-423f-a596-8ca6d8ed08fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:30:23 np0005629333 nova_compute[244014]: 2026-02-25 12:30:23.304 244018 DEBUG nova.network.neutron [req-71810bc1-f913-4f71-846e-93c29adc8bf6 req-caa12d45-471b-41ca-8e87-53f716c801d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Updated VIF entry in instance network info cache for port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:30:23 np0005629333 nova_compute[244014]: 2026-02-25 12:30:23.305 244018 DEBUG nova.network.neutron [req-71810bc1-f913-4f71-846e-93c29adc8bf6 req-caa12d45-471b-41ca-8e87-53f716c801d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Updating instance_info_cache with network_info: [{"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:30:23 np0005629333 nova_compute[244014]: 2026-02-25 12:30:23.321 244018 DEBUG oslo_concurrency.lockutils [req-71810bc1-f913-4f71-846e-93c29adc8bf6 req-caa12d45-471b-41ca-8e87-53f716c801d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:30:23 np0005629333 nova_compute[244014]: 2026-02-25 12:30:23.872 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:23 np0005629333 NetworkManager[49836]: <info>  [1772022623.8734] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/282)
Feb 25 07:30:23 np0005629333 NetworkManager[49836]: <info>  [1772022623.8749] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/283)
Feb 25 07:30:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 279 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 25 KiB/s wr, 158 op/s
Feb 25 07:30:23 np0005629333 nova_compute[244014]: 2026-02-25 12:30:23.930 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:23 np0005629333 nova_compute[244014]: 2026-02-25 12:30:23.945 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:24 np0005629333 nova_compute[244014]: 2026-02-25 12:30:24.582 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "5732c5fb-59b3-4590-b65a-a696b9c90152" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:24 np0005629333 nova_compute[244014]: 2026-02-25 12:30:24.582 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "5732c5fb-59b3-4590-b65a-a696b9c90152" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:24 np0005629333 nova_compute[244014]: 2026-02-25 12:30:24.605 244018 DEBUG nova.compute.manager [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:30:24 np0005629333 nova_compute[244014]: 2026-02-25 12:30:24.681 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:24 np0005629333 nova_compute[244014]: 2026-02-25 12:30:24.681 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:24 np0005629333 nova_compute[244014]: 2026-02-25 12:30:24.690 244018 DEBUG nova.virt.hardware [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:30:24 np0005629333 nova_compute[244014]: 2026-02-25 12:30:24.690 244018 INFO nova.compute.claims [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:30:24 np0005629333 nova_compute[244014]: 2026-02-25 12:30:24.751 244018 DEBUG nova.compute.manager [req-55d4a188-8deb-4997-96eb-976603f9139a req-ec1384cc-63eb-48ec-aa23-8d292915ea19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-changed-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:30:24 np0005629333 nova_compute[244014]: 2026-02-25 12:30:24.752 244018 DEBUG nova.compute.manager [req-55d4a188-8deb-4997-96eb-976603f9139a req-ec1384cc-63eb-48ec-aa23-8d292915ea19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Refreshing instance network info cache due to event network-changed-197929cb-aaa4-48f0-a831-7ac3f4ac5b37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:30:24 np0005629333 nova_compute[244014]: 2026-02-25 12:30:24.752 244018 DEBUG oslo_concurrency.lockutils [req-55d4a188-8deb-4997-96eb-976603f9139a req-ec1384cc-63eb-48ec-aa23-8d292915ea19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:30:24 np0005629333 nova_compute[244014]: 2026-02-25 12:30:24.753 244018 DEBUG oslo_concurrency.lockutils [req-55d4a188-8deb-4997-96eb-976603f9139a req-ec1384cc-63eb-48ec-aa23-8d292915ea19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:30:24 np0005629333 nova_compute[244014]: 2026-02-25 12:30:24.753 244018 DEBUG nova.network.neutron [req-55d4a188-8deb-4997-96eb-976603f9139a req-ec1384cc-63eb-48ec-aa23-8d292915ea19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Refreshing network info cache for port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:30:24 np0005629333 nova_compute[244014]: 2026-02-25 12:30:24.773 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:24 np0005629333 nova_compute[244014]: 2026-02-25 12:30:24.844 244018 DEBUG oslo_concurrency.processutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 07:30:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.6 total, 600.0 interval
Cumulative writes: 20K writes, 83K keys, 20K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s
Cumulative WAL: 20K writes, 6665 syncs, 3.07 writes per sync, written: 0.08 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 9652 writes, 39K keys, 9652 commit groups, 1.0 writes per commit group, ingest: 43.03 MB, 0.07 MB/s
Interval WAL: 9652 writes, 3681 syncs, 2.62 writes per sync, written: 0.04 GB, 0.07 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
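
The DB Stats block above rounds its write counters ("20K writes") but prints exact per-sync ratios, so the two can be cross-checked against each other. A quick verification in Python, using only figures quoted in the dump:

    # Figures copied from the RocksDB "DB Stats" dump above.
    cumulative_syncs = 6665          # "Cumulative WAL: ... 6665 syncs"
    cumulative_ratio = 3.07          # "3.07 writes per sync"
    interval_writes, interval_syncs = 9652, 3681

    # 6665 * 3.07 ~= 20462 WAL writes, which the dump rounds to "20K".
    print(f"implied cumulative WAL writes: {cumulative_syncs * cumulative_ratio:.0f}")
    # 9652 / 3681 ~= 2.62, matching "2.62 writes per sync" on the Interval line.
    print(f"interval writes per sync: {interval_writes / interval_syncs:.2f}")
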
Feb 25 07:30:25 np0005629333 nova_compute[244014]: 2026-02-25 12:30:25.304 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:30:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/245090991' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:30:25 np0005629333 nova_compute[244014]: 2026-02-25 12:30:25.428 244018 DEBUG oslo_concurrency.processutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
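
nova_compute sizes its disk inventory for Placement by shelling out to `ceph df`, exactly as logged above (0.583s for the round trip). A standalone sketch of the same call and the fields of interest; the pool name `vms` and the JSON field names are assumptions based on this log and on recent Ceph releases:

    import json
    import subprocess

    # Same command nova logs above; needs the client.openstack keyring.
    out = subprocess.check_output([
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    df = json.loads(out)

    # Cluster-wide totals live under "stats", per-pool usage under "pools".
    stats = df["stats"]
    print("total:", stats["total_bytes"], "avail:", stats["total_avail_bytes"])
    for pool in df["pools"]:
        if pool["name"] == "vms":   # the pool the instance disk is imported into below
            print("vms used:", pool["stats"]["bytes_used"])
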
Feb 25 07:30:25 np0005629333 nova_compute[244014]: 2026-02-25 12:30:25.433 244018 DEBUG nova.compute.provider_tree [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:30:25 np0005629333 nova_compute[244014]: 2026-02-25 12:30:25.452 244018 DEBUG nova.scheduler.client.report [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:30:25 np0005629333 nova_compute[244014]: 2026-02-25 12:30:25.474 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
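
The Acquiring/Acquired/"released" triplets in this trace are oslo.concurrency's lock logging: the resource tracker serialises claims under the "compute_resources" lock, and the event handler above does the same with a per-instance "refresh_cache-<uuid>" lock. The same pattern in application code, sketched with oslo.concurrency (lock names taken from the log, function bodies illustrative):

    from oslo_concurrency import lockutils

    # Decorator form: the body runs under the named lock, and oslo emits the
    # 'Lock ... acquired by ...' / '"released"' DEBUG lines seen above.
    @lockutils.synchronized("compute_resources")
    def instance_claim():
        ...  # mutate shared resource-tracker state

    # Context-manager form, matching the explicit Acquiring/Acquired/Releasing
    # sequence logged for the refresh_cache-<uuid> locks.
    with lockutils.lock("refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6"):
        pass  # refresh the instance's network info cache
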
Feb 25 07:30:25 np0005629333 nova_compute[244014]: 2026-02-25 12:30:25.475 244018 DEBUG nova.compute.manager [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:30:25 np0005629333 nova_compute[244014]: 2026-02-25 12:30:25.519 244018 DEBUG nova.compute.manager [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:30:25 np0005629333 nova_compute[244014]: 2026-02-25 12:30:25.520 244018 DEBUG nova.network.neutron [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:30:25 np0005629333 nova_compute[244014]: 2026-02-25 12:30:25.542 244018 INFO nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:30:25 np0005629333 nova_compute[244014]: 2026-02-25 12:30:25.560 244018 DEBUG nova.compute.manager [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:30:25 np0005629333 nova_compute[244014]: 2026-02-25 12:30:25.655 244018 DEBUG nova.compute.manager [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:30:25 np0005629333 nova_compute[244014]: 2026-02-25 12:30:25.657 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:30:25 np0005629333 nova_compute[244014]: 2026-02-25 12:30:25.658 244018 INFO nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Creating image(s)#033[00m
Feb 25 07:30:25 np0005629333 nova_compute[244014]: 2026-02-25 12:30:25.689 244018 DEBUG nova.storage.rbd_utils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 5732c5fb-59b3-4590-b65a-a696b9c90152_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:30:25 np0005629333 nova_compute[244014]: 2026-02-25 12:30:25.719 244018 DEBUG nova.storage.rbd_utils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 5732c5fb-59b3-4590-b65a-a696b9c90152_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:30:25 np0005629333 nova_compute[244014]: 2026-02-25 12:30:25.751 244018 DEBUG nova.storage.rbd_utils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 5732c5fb-59b3-4590-b65a-a696b9c90152_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:30:25 np0005629333 nova_compute[244014]: 2026-02-25 12:30:25.756 244018 DEBUG oslo_concurrency.processutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:25 np0005629333 nova_compute[244014]: 2026-02-25 12:30:25.823 244018 DEBUG oslo_concurrency.processutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
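
Before the base image is pushed into Ceph, nova probes it with `qemu-img info` wrapped in oslo.concurrency's prlimit helper, capping the child at 1 GiB of address space and 30 s of CPU (the logged --as=1073741824 --cpu=30) so a malformed image cannot run away with resources. A sketch of the same guarded call through processutils, with the limits from the log:

    import json
    from oslo_concurrency import processutils

    # 1 GiB address-space and 30 s CPU caps, matching the logged prlimit flags.
    limits = processutils.ProcessLimits(address_space=1 << 30, cpu_time=30)

    out, _err = processutils.execute(
        "qemu-img", "info",
        "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6",
        "--force-share", "--output=json",
        prlimit=limits,
        env_variables={"LC_ALL": "C", "LANG": "C"},  # the "env LC_ALL=C LANG=C" in the log
    )
    info = json.loads(out)
    print(info["format"], info["virtual-size"])  # e.g. qcow2 and the size in bytes
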
Feb 25 07:30:25 np0005629333 nova_compute[244014]: 2026-02-25 12:30:25.824 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:25 np0005629333 nova_compute[244014]: 2026-02-25 12:30:25.825 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:25 np0005629333 nova_compute[244014]: 2026-02-25 12:30:25.826 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:25 np0005629333 nova_compute[244014]: 2026-02-25 12:30:25.860 244018 DEBUG nova.storage.rbd_utils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 5732c5fb-59b3-4590-b65a-a696b9c90152_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:30:25 np0005629333 nova_compute[244014]: 2026-02-25 12:30:25.866 244018 DEBUG oslo_concurrency.processutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 5732c5fb-59b3-4590-b65a-a696b9c90152_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:25 np0005629333 nova_compute[244014]: 2026-02-25 12:30:25.913 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022610.9121666, bf261ccf-c216-4383-a22a-7f0553198152 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:30:25 np0005629333 nova_compute[244014]: 2026-02-25 12:30:25.914 244018 INFO nova.compute.manager [-] [instance: bf261ccf-c216-4383-a22a-7f0553198152] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:30:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 279 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 11 KiB/s wr, 65 op/s
Feb 25 07:30:25 np0005629333 nova_compute[244014]: 2026-02-25 12:30:25.938 244018 DEBUG nova.compute.manager [None req-222c7227-50a8-4c19-a01c-f6bb22bec95a - - - - - -] [instance: bf261ccf-c216-4383-a22a-7f0553198152] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:30:25 np0005629333 nova_compute[244014]: 2026-02-25 12:30:25.950 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:26 np0005629333 nova_compute[244014]: 2026-02-25 12:30:26.102 244018 DEBUG oslo_concurrency.processutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 5732c5fb-59b3-4590-b65a-a696b9c90152_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.236s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:30:26 np0005629333 nova_compute[244014]: 2026-02-25 12:30:26.186 244018 DEBUG nova.policy [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b63928451c6a4137bb65e25561326aff', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8315f545d21f4f8d9a43d810f50e7b78', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
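
The policy failure above is benign: network:attach_external_network requires a rule the instance owner's 'reader'/'member' roles do not satisfy, so nova records the denial and proceeds with an ordinary tenant network. The same check in isolation with oslo.policy; the rule string and defaults here are illustrative, not nova's shipped policy file:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    # Hypothetical defaults mirroring the shape of nova's rule.
    enforcer.register_default(policy.RuleDefault("context_is_admin", "role:admin"))
    enforcer.register_default(policy.RuleDefault(
        "network:attach_external_network", "rule:context_is_admin"))

    creds = {"roles": ["reader", "member"],
             "project_id": "8315f545d21f4f8d9a43d810f50e7b78"}
    # Returns False for these credentials, which nova logs as a policy failure.
    print(enforcer.enforce("network:attach_external_network", {}, creds))
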
Feb 25 07:30:26 np0005629333 nova_compute[244014]: 2026-02-25 12:30:26.196 244018 DEBUG nova.storage.rbd_utils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] resizing rbd image 5732c5fb-59b3-4590-b65a-a696b9c90152_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
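
Because the image was not already present in the vms pool (the repeated "rbd image ... does not exist" probes above), nova falls back to a flat import: push the cached base file into RBD, then grow it to the flavor's 1 GiB root disk (1073741824 bytes). The same two steps, sketched with the rbd CLI plus the python-rados/python-rbd bindings that nova's rbd_utils uses for the resize:

    import subprocess
    import rados
    import rbd

    NAME = "5732c5fb-59b3-4590-b65a-a696b9c90152_disk"

    # Step 1: the CLI import logged above.
    subprocess.check_call([
        "rbd", "import", "--pool", "vms",
        "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6",
        NAME, "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])

    # Step 2: resize to the flavor root size via the binding.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            with rbd.Image(ioctx, NAME) as image:
                image.resize(1073741824)  # root_gb=1, as logged
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
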
Feb 25 07:30:26 np0005629333 nova_compute[244014]: 2026-02-25 12:30:26.285 244018 DEBUG nova.objects.instance [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'migration_context' on Instance uuid 5732c5fb-59b3-4590-b65a-a696b9c90152 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:30:26 np0005629333 nova_compute[244014]: 2026-02-25 12:30:26.301 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:30:26 np0005629333 nova_compute[244014]: 2026-02-25 12:30:26.302 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Ensure instance console log exists: /var/lib/nova/instances/5732c5fb-59b3-4590-b65a-a696b9c90152/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:30:26 np0005629333 nova_compute[244014]: 2026-02-25 12:30:26.302 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:26 np0005629333 nova_compute[244014]: 2026-02-25 12:30:26.302 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:26 np0005629333 nova_compute[244014]: 2026-02-25 12:30:26.302 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:30:27 np0005629333 ceph-mgr[76641]: [devicehealth INFO root] Check health
Feb 25 07:30:27 np0005629333 nova_compute[244014]: 2026-02-25 12:30:27.173 244018 DEBUG nova.network.neutron [req-55d4a188-8deb-4997-96eb-976603f9139a req-ec1384cc-63eb-48ec-aa23-8d292915ea19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Updated VIF entry in instance network info cache for port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:30:27 np0005629333 nova_compute[244014]: 2026-02-25 12:30:27.174 244018 DEBUG nova.network.neutron [req-55d4a188-8deb-4997-96eb-976603f9139a req-ec1384cc-63eb-48ec-aa23-8d292915ea19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Updating instance_info_cache with network_info: [{"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:30:27 np0005629333 nova_compute[244014]: 2026-02-25 12:30:27.198 244018 DEBUG oslo_concurrency.lockutils [req-55d4a188-8deb-4997-96eb-976603f9139a req-ec1384cc-63eb-48ec-aa23-8d292915ea19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
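
The network_info blob cached above is a list of VIF dicts, with addresses nested under network.subnets[].ips[] and any floating IPs hanging off each fixed IP. A small walker over that structure, matching the layout as printed in the log (the input file name is hypothetical):

    import json

    def addresses(network_info):
        """Yield (port_id, fixed_ip, floating_ips) from a nova network_info list."""
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    floats = [f["address"] for f in ip.get("floating_ips", [])]
                    yield vif["id"], ip["address"], floats

    with open("network_info.json") as f:     # e.g. the list logged above
        cached = json.load(f)
    for port, fixed, floats in addresses(cached):
        print(port, fixed, floats)           # 197929cb-... 10.100.0.4 ['192.168.122.211']
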
Feb 25 07:30:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 325 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 113 op/s
Feb 25 07:30:28 np0005629333 nova_compute[244014]: 2026-02-25 12:30:28.306 244018 DEBUG nova.network.neutron [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Successfully created port: d09a3a83-0b97-4635-9188-5ab61ccc4626 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:30:29 np0005629333 nova_compute[244014]: 2026-02-25 12:30:29.772 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 325 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 628 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Feb 25 07:30:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:30:30
Feb 25 07:30:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:30:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:30:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'backups', 'default.rgw.control', '.mgr', 'images', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', '.rgw.root']
Feb 25 07:30:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:30:30 np0005629333 nova_compute[244014]: 2026-02-25 12:30:30.953 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:31 np0005629333 nova_compute[244014]: 2026-02-25 12:30:31.156 244018 DEBUG nova.network.neutron [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Successfully updated port: d09a3a83-0b97-4635-9188-5ab61ccc4626 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:30:31 np0005629333 nova_compute[244014]: 2026-02-25 12:30:31.176 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "refresh_cache-5732c5fb-59b3-4590-b65a-a696b9c90152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:30:31 np0005629333 nova_compute[244014]: 2026-02-25 12:30:31.177 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquired lock "refresh_cache-5732c5fb-59b3-4590-b65a-a696b9c90152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:30:31 np0005629333 nova_compute[244014]: 2026-02-25 12:30:31.177 244018 DEBUG nova.network.neutron [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:30:31 np0005629333 nova_compute[244014]: 2026-02-25 12:30:31.368 244018 DEBUG nova.compute.manager [req-afa91eca-a0d7-4c88-b40a-22fb8e8ed9e1 req-d15d447f-bf2f-4e86-a0fa-a65feb209105 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Received event network-changed-d09a3a83-0b97-4635-9188-5ab61ccc4626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:30:31 np0005629333 nova_compute[244014]: 2026-02-25 12:30:31.369 244018 DEBUG nova.compute.manager [req-afa91eca-a0d7-4c88-b40a-22fb8e8ed9e1 req-d15d447f-bf2f-4e86-a0fa-a65feb209105 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Refreshing instance network info cache due to event network-changed-d09a3a83-0b97-4635-9188-5ab61ccc4626. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:30:31 np0005629333 nova_compute[244014]: 2026-02-25 12:30:31.369 244018 DEBUG oslo_concurrency.lockutils [req-afa91eca-a0d7-4c88-b40a-22fb8e8ed9e1 req-d15d447f-bf2f-4e86-a0fa-a65feb209105 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-5732c5fb-59b3-4590-b65a-a696b9c90152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:30:31 np0005629333 nova_compute[244014]: 2026-02-25 12:30:31.469 244018 DEBUG nova.network.neutron [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:30:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:30:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:30:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:30:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:30:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:30:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:30:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:30:31 np0005629333 nova_compute[244014]: 2026-02-25 12:30:31.865 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:30:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:30:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:30:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:30:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:30:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:30:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:30:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:30:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:30:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:30:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 327 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 628 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Feb 25 07:30:31 np0005629333 nova_compute[244014]: 2026-02-25 12:30:31.921 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:32 np0005629333 nova_compute[244014]: 2026-02-25 12:30:32.918 244018 DEBUG nova.network.neutron [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Updating instance_info_cache with network_info: [{"id": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "address": "fa:16:3e:23:63:29", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd09a3a83-0b", "ovs_interfaceid": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:30:32 np0005629333 nova_compute[244014]: 2026-02-25 12:30:32.967 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Releasing lock "refresh_cache-5732c5fb-59b3-4590-b65a-a696b9c90152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:30:32 np0005629333 nova_compute[244014]: 2026-02-25 12:30:32.968 244018 DEBUG nova.compute.manager [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Instance network_info: |[{"id": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "address": "fa:16:3e:23:63:29", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd09a3a83-0b", "ovs_interfaceid": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:30:32 np0005629333 nova_compute[244014]: 2026-02-25 12:30:32.969 244018 DEBUG oslo_concurrency.lockutils [req-afa91eca-a0d7-4c88-b40a-22fb8e8ed9e1 req-d15d447f-bf2f-4e86-a0fa-a65feb209105 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-5732c5fb-59b3-4590-b65a-a696b9c90152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:30:32 np0005629333 nova_compute[244014]: 2026-02-25 12:30:32.970 244018 DEBUG nova.network.neutron [req-afa91eca-a0d7-4c88-b40a-22fb8e8ed9e1 req-d15d447f-bf2f-4e86-a0fa-a65feb209105 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Refreshing network info cache for port d09a3a83-0b97-4635-9188-5ab61ccc4626 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:30:32 np0005629333 nova_compute[244014]: 2026-02-25 12:30:32.976 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Start _get_guest_xml network_info=[{"id": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "address": "fa:16:3e:23:63:29", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd09a3a83-0b", "ovs_interfaceid": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:30:32 np0005629333 nova_compute[244014]: 2026-02-25 12:30:32.983 244018 WARNING nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:30:32 np0005629333 nova_compute[244014]: 2026-02-25 12:30:32.990 244018 DEBUG nova.virt.libvirt.host [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:30:32 np0005629333 nova_compute[244014]: 2026-02-25 12:30:32.991 244018 DEBUG nova.virt.libvirt.host [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:30:33 np0005629333 nova_compute[244014]: 2026-02-25 12:30:33.004 244018 DEBUG nova.virt.libvirt.host [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:30:33 np0005629333 nova_compute[244014]: 2026-02-25 12:30:33.005 244018 DEBUG nova.virt.libvirt.host [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
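
The two probes above are nova deciding whether CPU shares/quota can be enforced for the guest: it looks for a 'cpu' controller first in the cgroups v1 layout, then in the unified v2 hierarchy, and on this host only the v2 probe succeeds. Conceptually both checks reduce to inspecting /sys/fs/cgroup; a simplified sketch (function names are illustrative, not nova's):

    import os

    def has_cgroupsv1_cpu():
        # v1 mounts one hierarchy per controller, e.g. /sys/fs/cgroup/cpu.
        return os.path.isdir("/sys/fs/cgroup/cpu")

    def has_cgroupsv2_cpu():
        # v2 is a single unified hierarchy; enabled controllers are listed
        # space-separated in cgroup.controllers.
        try:
            with open("/sys/fs/cgroup/cgroup.controllers") as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False

    print(has_cgroupsv1_cpu(), has_cgroupsv2_cpu())  # on this host: False, True
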
Feb 25 07:30:33 np0005629333 nova_compute[244014]: 2026-02-25 12:30:33.006 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:30:33 np0005629333 nova_compute[244014]: 2026-02-25 12:30:33.006 244018 DEBUG nova.virt.hardware [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:30:33 np0005629333 nova_compute[244014]: 2026-02-25 12:30:33.007 244018 DEBUG nova.virt.hardware [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:30:33 np0005629333 nova_compute[244014]: 2026-02-25 12:30:33.007 244018 DEBUG nova.virt.hardware [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:30:33 np0005629333 nova_compute[244014]: 2026-02-25 12:30:33.008 244018 DEBUG nova.virt.hardware [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:30:33 np0005629333 nova_compute[244014]: 2026-02-25 12:30:33.008 244018 DEBUG nova.virt.hardware [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:30:33 np0005629333 nova_compute[244014]: 2026-02-25 12:30:33.008 244018 DEBUG nova.virt.hardware [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:30:33 np0005629333 nova_compute[244014]: 2026-02-25 12:30:33.009 244018 DEBUG nova.virt.hardware [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:30:33 np0005629333 nova_compute[244014]: 2026-02-25 12:30:33.009 244018 DEBUG nova.virt.hardware [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:30:33 np0005629333 nova_compute[244014]: 2026-02-25 12:30:33.010 244018 DEBUG nova.virt.hardware [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:30:33 np0005629333 nova_compute[244014]: 2026-02-25 12:30:33.010 244018 DEBUG nova.virt.hardware [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:30:33 np0005629333 nova_compute[244014]: 2026-02-25 12:30:33.010 244018 DEBUG nova.virt.hardware [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
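
With no flavor or image constraints (the limits above default to 65536 and the preferences to 0), choosing a topology is just enumerating the sockets x cores x threads factorisations of the vCPU count; for one vCPU the only factorisation is 1:1:1, hence the single topology logged. A simplified version of that enumeration (nova's real code additionally orders candidates by the preferred topology):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """All (sockets, cores, threads) with sockets * cores * threads == vcpus."""
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    topos.append((s, c, t))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)] -- matching "Got 1 possible topologies"
    print(possible_topologies(4))  # [(1,1,4), (1,2,2), (1,4,1), (2,1,2), (2,2,1), (4,1,1)]
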
Feb 25 07:30:33 np0005629333 nova_compute[244014]: 2026-02-25 12:30:33.015 244018 DEBUG oslo_concurrency.processutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:30:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2758504434' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:30:33 np0005629333 nova_compute[244014]: 2026-02-25 12:30:33.528 244018 DEBUG oslo_concurrency.processutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:30:33 np0005629333 nova_compute[244014]: 2026-02-25 12:30:33.559 244018 DEBUG nova.storage.rbd_utils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 5732c5fb-59b3-4590-b65a-a696b9c90152_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:30:33 np0005629333 nova_compute[244014]: 2026-02-25 12:30:33.564 244018 DEBUG oslo_concurrency.processutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:33 np0005629333 nova_compute[244014]: 2026-02-25 12:30:33.601 244018 DEBUG nova.compute.manager [req-6562fd7c-58da-47e3-8ea4-975dda99cf81 req-f700e374-28b0-4c67-a8be-3d59271aba4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-changed-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:30:33 np0005629333 nova_compute[244014]: 2026-02-25 12:30:33.601 244018 DEBUG nova.compute.manager [req-6562fd7c-58da-47e3-8ea4-975dda99cf81 req-f700e374-28b0-4c67-a8be-3d59271aba4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Refreshing instance network info cache due to event network-changed-197929cb-aaa4-48f0-a831-7ac3f4ac5b37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:30:33 np0005629333 nova_compute[244014]: 2026-02-25 12:30:33.602 244018 DEBUG oslo_concurrency.lockutils [req-6562fd7c-58da-47e3-8ea4-975dda99cf81 req-f700e374-28b0-4c67-a8be-3d59271aba4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:30:33 np0005629333 nova_compute[244014]: 2026-02-25 12:30:33.602 244018 DEBUG oslo_concurrency.lockutils [req-6562fd7c-58da-47e3-8ea4-975dda99cf81 req-f700e374-28b0-4c67-a8be-3d59271aba4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:30:33 np0005629333 nova_compute[244014]: 2026-02-25 12:30:33.602 244018 DEBUG nova.network.neutron [req-6562fd7c-58da-47e3-8ea4-975dda99cf81 req-f700e374-28b0-4c67-a8be-3d59271aba4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Refreshing network info cache for port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:30:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 327 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 505 KiB/s rd, 1.8 MiB/s wr, 74 op/s
Feb 25 07:30:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:30:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3407926462' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.124 244018 DEBUG oslo_concurrency.processutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
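
The `ceph mon dump` calls here (one per RBD disk) give nova the monitor endpoints it writes into the <host> elements of the RBD <source> stanzas in the guest XML that follows. A sketch of extracting those endpoints from the dump; the "addr" field layout ("ip:port/nonce") is an assumption based on common Ceph output:

    import json
    import subprocess

    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    for mon in json.loads(out)["mons"]:
        host, port_nonce = mon["addr"].rsplit(":", 1)
        port = port_nonce.split("/")[0]
        print(mon["name"], host, port)   # e.g. compute-0 192.168.122.100 6789
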
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.125 244018 DEBUG nova.virt.libvirt.vif [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:30:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-187776660',display_name='tempest-ServerActionsTestOtherA-server-187776660',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-187776660',id=70,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH+oU121pR6KUJ0WVaybHO4eKD7+IjTA6CrXc/3HOS7yAeJNztQB4sxwZQLs0fjSQoqiFpts6gDtbDhfFq9vINorEApwAS8sG60gm5VPKd60x5tzDBbplVpTIhXJOJeGqw==',key_name='tempest-keypair-2086603261',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8315f545d21f4f8d9a43d810f50e7b78',ramdisk_id='',reservation_id='r-0x55w3gp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1615886603',owner_user_name='tempest-ServerActionsTestOtherA-1615886603-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:30:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b63928451c6a4137bb65e25561326aff',uuid=5732c5fb-59b3-4590-b65a-a696b9c90152,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "address": "fa:16:3e:23:63:29", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd09a3a83-0b", "ovs_interfaceid": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.125 244018 DEBUG nova.network.os_vif_util [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converting VIF {"id": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "address": "fa:16:3e:23:63:29", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd09a3a83-0b", "ovs_interfaceid": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.126 244018 DEBUG nova.network.os_vif_util [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:63:29,bridge_name='br-int',has_traffic_filtering=True,id=d09a3a83-0b97-4635-9188-5ab61ccc4626,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd09a3a83-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.127 244018 DEBUG nova.objects.instance [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5732c5fb-59b3-4590-b65a-a696b9c90152 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.142 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:30:34 np0005629333 nova_compute[244014]:  <uuid>5732c5fb-59b3-4590-b65a-a696b9c90152</uuid>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:  <name>instance-00000046</name>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerActionsTestOtherA-server-187776660</nova:name>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:30:32</nova:creationTime>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:30:34 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:        <nova:user uuid="b63928451c6a4137bb65e25561326aff">tempest-ServerActionsTestOtherA-1615886603-project-member</nova:user>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:        <nova:project uuid="8315f545d21f4f8d9a43d810f50e7b78">tempest-ServerActionsTestOtherA-1615886603</nova:project>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:        <nova:port uuid="d09a3a83-0b97-4635-9188-5ab61ccc4626">
Feb 25 07:30:34 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <entry name="serial">5732c5fb-59b3-4590-b65a-a696b9c90152</entry>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <entry name="uuid">5732c5fb-59b3-4590-b65a-a696b9c90152</entry>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/5732c5fb-59b3-4590-b65a-a696b9c90152_disk">
Feb 25 07:30:34 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:30:34 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/5732c5fb-59b3-4590-b65a-a696b9c90152_disk.config">
Feb 25 07:30:34 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:30:34 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:23:63:29"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <target dev="tapd09a3a83-0b"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/5732c5fb-59b3-4590-b65a-a696b9c90152/console.log" append="off"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:30:34 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:30:34 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:30:34 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:30:34 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
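The XML document logged by _get_guest_xml above is the complete libvirt domain definition Nova generated for instance-00000046. As a point of reference only (not part of the log), a minimal sketch of reading the same definition back from libvirt on the compute host, assuming the libvirt-python bindings and local access to the qemu:///system URI:

    import libvirt

    # Connect to the local libvirt daemon that nova_compute drives.
    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.lookupByUUIDString('5732c5fb-59b3-4590-b65a-a696b9c90152')
        # Returns the same <domain type="kvm"> document seen in the log above.
        print(dom.XMLDesc(0))
    finally:
        conn.close()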
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.144 244018 DEBUG nova.compute.manager [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Preparing to wait for external event network-vif-plugged-d09a3a83-0b97-4635-9188-5ab61ccc4626 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.144 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "5732c5fb-59b3-4590-b65a-a696b9c90152-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.144 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "5732c5fb-59b3-4590-b65a-a696b9c90152-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.144 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "5732c5fb-59b3-4590-b65a-a696b9c90152-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.145 244018 DEBUG nova.virt.libvirt.vif [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:30:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-187776660',display_name='tempest-ServerActionsTestOtherA-server-187776660',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-187776660',id=70,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH+oU121pR6KUJ0WVaybHO4eKD7+IjTA6CrXc/3HOS7yAeJNztQB4sxwZQLs0fjSQoqiFpts6gDtbDhfFq9vINorEApwAS8sG60gm5VPKd60x5tzDBbplVpTIhXJOJeGqw==',key_name='tempest-keypair-2086603261',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8315f545d21f4f8d9a43d810f50e7b78',ramdisk_id='',reservation_id='r-0x55w3gp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1615886603',owner_user_name='tempest-ServerActionsTestOtherA-1615886603-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:30:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b63928451c6a4137bb65e25561326aff',uuid=5732c5fb-59b3-4590-b65a-a696b9c90152,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "address": "fa:16:3e:23:63:29", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd09a3a83-0b", "ovs_interfaceid": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.145 244018 DEBUG nova.network.os_vif_util [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converting VIF {"id": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "address": "fa:16:3e:23:63:29", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd09a3a83-0b", "ovs_interfaceid": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.146 244018 DEBUG nova.network.os_vif_util [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:63:29,bridge_name='br-int',has_traffic_filtering=True,id=d09a3a83-0b97-4635-9188-5ab61ccc4626,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd09a3a83-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.146 244018 DEBUG os_vif [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:63:29,bridge_name='br-int',has_traffic_filtering=True,id=d09a3a83-0b97-4635-9188-5ab61ccc4626,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd09a3a83-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.149 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.149 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.149 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.152 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.153 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd09a3a83-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.153 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd09a3a83-0b, col_values=(('external_ids', {'iface-id': 'd09a3a83-0b97-4635-9188-5ab61ccc4626', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:23:63:29', 'vm-uuid': '5732c5fb-59b3-4590-b65a-a696b9c90152'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.154 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:34 np0005629333 NetworkManager[49836]: <info>  [1772022634.1562] manager: (tapd09a3a83-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/284)
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.157 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.161 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.162 244018 INFO os_vif [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:63:29,bridge_name='br-int',has_traffic_filtering=True,id=d09a3a83-0b97-4635-9188-5ab61ccc4626,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd09a3a83-0b')#033[00m
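The ovsdbapp transactions above (AddBridgeCommand, then AddPortCommand plus a DbSetCommand on the Interface record) are what the "Successfully plugged vif" message summarizes. A rough command-line equivalent, for illustration only; os-vif actually talks to ovsdb-server through the ovsdbapp IDL rather than shelling out, and the names below are copied from the log records:

    import subprocess

    def vsctl(*args):
        # Thin illustrative wrapper around ovs-vsctl.
        subprocess.run(['ovs-vsctl', *args], check=True)

    vsctl('--may-exist', 'add-br', 'br-int',
          '--', 'set', 'Bridge', 'br-int', 'datapath_type=system')
    vsctl('--may-exist', 'add-port', 'br-int', 'tapd09a3a83-0b',
          '--', 'set', 'Interface', 'tapd09a3a83-0b',
          'external_ids:iface-id=d09a3a83-0b97-4635-9188-5ab61ccc4626',
          'external_ids:iface-status=active',
          'external_ids:attached-mac=fa:16:3e:23:63:29',
          'external_ids:vm-uuid=5732c5fb-59b3-4590-b65a-a696b9c90152')

The iface-id external_id is what lets ovn-controller match the OVS interface to the Neutron port binding, which is exactly the claim it logs a moment later.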
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.206 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.207 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.207 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] No VIF found with MAC fa:16:3e:23:63:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.207 244018 INFO nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Using config drive#033[00m
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.227 244018 DEBUG nova.storage.rbd_utils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 5732c5fb-59b3-4590-b65a-a696b9c90152_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:30:34 np0005629333 nova_compute[244014]: 2026-02-25 12:30:34.774 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:35 np0005629333 nova_compute[244014]: 2026-02-25 12:30:35.212 244018 INFO nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Creating config drive at /var/lib/nova/instances/5732c5fb-59b3-4590-b65a-a696b9c90152/disk.config#033[00m
Feb 25 07:30:35 np0005629333 nova_compute[244014]: 2026-02-25 12:30:35.217 244018 DEBUG oslo_concurrency.processutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5732c5fb-59b3-4590-b65a-a696b9c90152/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpg5mi_yur execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:35 np0005629333 nova_compute[244014]: 2026-02-25 12:30:35.356 244018 DEBUG oslo_concurrency.processutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5732c5fb-59b3-4590-b65a-a696b9c90152/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpg5mi_yur" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:30:35 np0005629333 nova_compute[244014]: 2026-02-25 12:30:35.381 244018 DEBUG nova.storage.rbd_utils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 5732c5fb-59b3-4590-b65a-a696b9c90152_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:30:35 np0005629333 nova_compute[244014]: 2026-02-25 12:30:35.385 244018 DEBUG oslo_concurrency.processutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5732c5fb-59b3-4590-b65a-a696b9c90152/disk.config 5732c5fb-59b3-4590-b65a-a696b9c90152_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:35 np0005629333 nova_compute[244014]: 2026-02-25 12:30:35.533 244018 DEBUG oslo_concurrency.processutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5732c5fb-59b3-4590-b65a-a696b9c90152/disk.config 5732c5fb-59b3-4590-b65a-a696b9c90152_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:30:35 np0005629333 nova_compute[244014]: 2026-02-25 12:30:35.534 244018 INFO nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Deleting local config drive /var/lib/nova/instances/5732c5fb-59b3-4590-b65a-a696b9c90152/disk.config because it was imported into RBD.#033[00m
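The config-drive sequence above reduces to two external commands, both visible verbatim in the log: mkisofs builds an ISO9660 image labeled config-2 from a staged temporary directory, and rbd import pushes it into the Ceph vms pool before the local copy is removed. A condensed sketch of that sequence (paths and names copied from the log records; the temp directory is per-run, and the -publisher argument is omitted here):

    import os
    import subprocess

    inst = '5732c5fb-59b3-4590-b65a-a696b9c90152'
    iso = f'/var/lib/nova/instances/{inst}/disk.config'

    # Build the config drive ISO from the staged metadata directory.
    subprocess.run(['/usr/bin/mkisofs', '-o', iso, '-ldots',
                    '-allow-lowercase', '-allow-multidot', '-l',
                    '-quiet', '-J', '-r', '-V', 'config-2',
                    '/tmp/tmpg5mi_yur'], check=True)

    # Import into the Ceph 'vms' pool, then drop the local file,
    # matching the "Deleting local config drive" message above.
    subprocess.run(['rbd', 'import', '--pool', 'vms', iso,
                    f'{inst}_disk.config', '--image-format=2',
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)
    os.remove(iso)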
Feb 25 07:30:35 np0005629333 kernel: tapd09a3a83-0b: entered promiscuous mode
Feb 25 07:30:35 np0005629333 NetworkManager[49836]: <info>  [1772022635.5709] manager: (tapd09a3a83-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/285)
Feb 25 07:30:35 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:35Z|00663|binding|INFO|Claiming lport d09a3a83-0b97-4635-9188-5ab61ccc4626 for this chassis.
Feb 25 07:30:35 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:35Z|00664|binding|INFO|d09a3a83-0b97-4635-9188-5ab61ccc4626: Claiming fa:16:3e:23:63:29 10.100.0.11
Feb 25 07:30:35 np0005629333 nova_compute[244014]: 2026-02-25 12:30:35.571 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.581 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:63:29 10.100.0.11'], port_security=['fa:16:3e:23:63:29 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '5732c5fb-59b3-4590-b65a-a696b9c90152', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd796561-bd80-4610-8abc-655ee9e3676f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8315f545d21f4f8d9a43d810f50e7b78', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9b1ce41f-d52c-4d7a-8899-239050236d80', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b5d31ff-0cb2-4a33-884a-0100942c964b, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=d09a3a83-0b97-4635-9188-5ab61ccc4626) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.582 157129 INFO neutron.agent.ovn.metadata.agent [-] Port d09a3a83-0b97-4635-9188-5ab61ccc4626 in datapath cd796561-bd80-4610-8abc-655ee9e3676f bound to our chassis#033[00m
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.583 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cd796561-bd80-4610-8abc-655ee9e3676f#033[00m
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.591 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3fd2cf66-ab90-44bb-860d-f156de9f5c7a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.591 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcd796561-b1 in ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.593 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcd796561-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.593 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f2ae3e-7960-4aa3-bde9-0f0379babb18]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.594 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8212a547-132d-4bcc-a835-d4c60df58274]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.601 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[dbc9428f-08ff-43a3-a58e-66c8f4d11a07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:35 np0005629333 systemd[1]: Started Virtual Machine qemu-85-instance-00000046.
Feb 25 07:30:35 np0005629333 systemd-machined[210048]: New machine qemu-85-instance-00000046.
Feb 25 07:30:35 np0005629333 nova_compute[244014]: 2026-02-25 12:30:35.620 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.622 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[070674f2-a82d-4e6e-8211-122f5a6e9b35]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:35 np0005629333 systemd-udevd[305878]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:30:35 np0005629333 nova_compute[244014]: 2026-02-25 12:30:35.625 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:35 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:35Z|00665|binding|INFO|Setting lport d09a3a83-0b97-4635-9188-5ab61ccc4626 ovn-installed in OVS
Feb 25 07:30:35 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:35Z|00666|binding|INFO|Setting lport d09a3a83-0b97-4635-9188-5ab61ccc4626 up in Southbound
Feb 25 07:30:35 np0005629333 NetworkManager[49836]: <info>  [1772022635.6418] device (tapd09a3a83-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:30:35 np0005629333 NetworkManager[49836]: <info>  [1772022635.6429] device (tapd09a3a83-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:30:35 np0005629333 nova_compute[244014]: 2026-02-25 12:30:35.643 244018 DEBUG nova.network.neutron [req-afa91eca-a0d7-4c88-b40a-22fb8e8ed9e1 req-d15d447f-bf2f-4e86-a0fa-a65feb209105 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Updated VIF entry in instance network info cache for port d09a3a83-0b97-4635-9188-5ab61ccc4626. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:30:35 np0005629333 nova_compute[244014]: 2026-02-25 12:30:35.644 244018 DEBUG nova.network.neutron [req-afa91eca-a0d7-4c88-b40a-22fb8e8ed9e1 req-d15d447f-bf2f-4e86-a0fa-a65feb209105 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Updating instance_info_cache with network_info: [{"id": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "address": "fa:16:3e:23:63:29", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd09a3a83-0b", "ovs_interfaceid": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.652 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[693c0485-3a6e-46b2-bfd5-3b4bcb7ddb31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:35 np0005629333 systemd-udevd[305888]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.656 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3b19b827-cee7-4e0f-b149-b913895ce259]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:35 np0005629333 NetworkManager[49836]: <info>  [1772022635.6583] manager: (tapcd796561-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/286)
Feb 25 07:30:35 np0005629333 nova_compute[244014]: 2026-02-25 12:30:35.669 244018 DEBUG oslo_concurrency.lockutils [req-afa91eca-a0d7-4c88-b40a-22fb8e8ed9e1 req-d15d447f-bf2f-4e86-a0fa-a65feb209105 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-5732c5fb-59b3-4590-b65a-a696b9c90152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.680 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[45064a85-b332-4771-9bc0-e99f9ea285fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.683 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ec1c54e6-19c3-4111-930a-c70e6c28a038]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:35 np0005629333 podman[305850]: 2026-02-25 12:30:35.686009464 +0000 UTC m=+0.087184754 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 25 07:30:35 np0005629333 podman[305849]: 2026-02-25 12:30:35.686138487 +0000 UTC m=+0.088243483 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:30:35 np0005629333 NetworkManager[49836]: <info>  [1772022635.6979] device (tapcd796561-b0): carrier: link connected
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.701 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ef721427-80f9-4986-a093-c9e958c972c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.713 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ba6c7367-452f-4951-b573-2139b86ba278]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd796561-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:48:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 200], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460510, 'reachable_time': 29897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305927, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.724 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[76a247fd-c00a-4226-9854-494dbceacf0c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea5:48dc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460510, 'tstamp': 460510}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305928, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.732 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb8e090-b78c-4522-9f18-51f40f25a68f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd796561-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:48:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 200], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460510, 'reachable_time': 29897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305929, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.750 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a00ef26e-d642-43f3-87cf-7d9dba788d8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.778 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ccf961ab-91c2-4b98-918b-a4a4a3cbcaaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
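The privsep replies above correspond to the metadata datapath plumbing: the agent creates a VETH pair and moves tapcd796561-b1 into the ovnmeta-cd796561-... namespace (it does this through oslo.privsep and pyroute2, not by shelling out). A rough shell-level equivalent, illustrative only, with names taken from the log; the OVS side of the move appears in the transactions that follow:

    import subprocess

    ns = 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f'

    def run(*cmd):
        subprocess.run(cmd, check=True)

    run('ip', 'netns', 'add', ns)
    # tapcd796561-b0 stays in the root namespace (plugged into br-int by
    # the AddPortCommand below); its peer lives in the metadata namespace.
    run('ip', 'link', 'add', 'tapcd796561-b0',
        'type', 'veth', 'peer', 'name', 'tapcd796561-b1')
    run('ip', 'link', 'set', 'tapcd796561-b1', 'netns', ns)
    run('ip', 'netns', 'exec', ns,
        'ip', 'link', 'set', 'tapcd796561-b1', 'up')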
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.779 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd796561-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.779 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.780 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd796561-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:30:35 np0005629333 NetworkManager[49836]: <info>  [1772022635.7821] manager: (tapcd796561-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/287)
Feb 25 07:30:35 np0005629333 nova_compute[244014]: 2026-02-25 12:30:35.782 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:35 np0005629333 kernel: tapcd796561-b0: entered promiscuous mode
Feb 25 07:30:35 np0005629333 nova_compute[244014]: 2026-02-25 12:30:35.784 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.784 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcd796561-b0, col_values=(('external_ids', {'iface-id': 'd456b40d-31d4-4447-97a8-c536382c29f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:30:35 np0005629333 nova_compute[244014]: 2026-02-25 12:30:35.785 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:35 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:35Z|00667|binding|INFO|Releasing lport d456b40d-31d4-4447-97a8-c536382c29f9 from this chassis (sb_readonly=0)
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.786 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cd796561-bd80-4610-8abc-655ee9e3676f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cd796561-bd80-4610-8abc-655ee9e3676f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.787 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[37e46f7f-8e15-490b-af2c-a46e0e186bf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.788 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-cd796561-bd80-4610-8abc-655ee9e3676f
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/cd796561-bd80-4610-8abc-655ee9e3676f.pid.haproxy
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID cd796561-bd80-4610-8abc-655ee9e3676f
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:30:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:35.788 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'env', 'PROCESS_TAG=haproxy-cd796561-bd80-4610-8abc-655ee9e3676f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cd796561-bd80-4610-8abc-655ee9e3676f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
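[annotation] Stripped of sudo and neutron-rootwrap, the spawn above reduces to running haproxy inside the port's ovnmeta- namespace. A sketch assuming root privileges and an already-created namespace, with argv mirroring the logged command line:

    import subprocess

    NETNS = 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f'
    CONF = ('/var/lib/neutron/ovn-metadata-proxy/'
            'cd796561-bd80-4610-8abc-655ee9e3676f.conf')

    # haproxy backgrounds itself via the 'daemon' directive in the
    # rendered config, so run() returns once the master has forked.
    subprocess.run(
        ['ip', 'netns', 'exec', NETNS,
         'env', 'PROCESS_TAG=haproxy-cd796561-bd80-4610-8abc-655ee9e3676f',
         'haproxy', '-f', CONF],
        check=True)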
Feb 25 07:30:35 np0005629333 nova_compute[244014]: 2026-02-25 12:30:35.790 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 327 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 253 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Feb 25 07:30:35 np0005629333 nova_compute[244014]: 2026-02-25 12:30:35.929 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022635.9287539, 5732c5fb-59b3-4590-b65a-a696b9c90152 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:30:35 np0005629333 nova_compute[244014]: 2026-02-25 12:30:35.930 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] VM Started (Lifecycle Event)#033[00m
Feb 25 07:30:35 np0005629333 nova_compute[244014]: 2026-02-25 12:30:35.953 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:30:35 np0005629333 nova_compute[244014]: 2026-02-25 12:30:35.957 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022635.9289703, 5732c5fb-59b3-4590-b65a-a696b9c90152 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:30:35 np0005629333 nova_compute[244014]: 2026-02-25 12:30:35.957 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:30:35 np0005629333 nova_compute[244014]: 2026-02-25 12:30:35.975 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:30:35 np0005629333 nova_compute[244014]: 2026-02-25 12:30:35.978 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:30:35 np0005629333 nova_compute[244014]: 2026-02-25 12:30:35.999 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
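[annotation] The Paused lifecycle event makes the manager compare the database power state (0, NOSTATE) against what the hypervisor reports (3, PAUSED), but it refuses to act while the instance still carries a task_state, hence the "Skip" above. A toy version of that decision (constants follow nova.compute.power_state):

    NOSTATE, RUNNING, PAUSED = 0, 1, 3  # nova.compute.power_state values

    def sync_power_state(db_state, vm_state, task_state):
        # Mirrors the skip seen above: a pending task owns the instance,
        # so lifecycle-driven sync must not race it.
        if task_state is not None:
            return 'skip: pending task %s' % task_state
        if db_state != vm_state:
            return 'update DB %s -> %s' % (db_state, vm_state)
        return 'in sync'

    print(sync_power_state(NOSTATE, PAUSED, 'spawning'))
    # -> skip: pending task spawning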
Feb 25 07:30:36 np0005629333 podman[306003]: 2026-02-25 12:30:36.113052743 +0000 UTC m=+0.057162101 container create 91b65c1d93e47db67b1a178d17ac256376360ec98fc106c85e509ff223fbe5b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 07:30:36 np0005629333 systemd[1]: Started libpod-conmon-91b65c1d93e47db67b1a178d17ac256376360ec98fc106c85e509ff223fbe5b0.scope.
Feb 25 07:30:36 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:30:36 np0005629333 podman[306003]: 2026-02-25 12:30:36.078893445 +0000 UTC m=+0.023002843 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:30:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f940080901bec8896734a9c7645d302a54dac0d4387fd56144cab0605773a5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:30:36 np0005629333 podman[306003]: 2026-02-25 12:30:36.193505864 +0000 UTC m=+0.137615282 container init 91b65c1d93e47db67b1a178d17ac256376360ec98fc106c85e509ff223fbe5b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 25 07:30:36 np0005629333 podman[306003]: 2026-02-25 12:30:36.20078854 +0000 UTC m=+0.144897928 container start 91b65c1d93e47db67b1a178d17ac256376360ec98fc106c85e509ff223fbe5b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 25 07:30:36 np0005629333 neutron-haproxy-ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f[306019]: [NOTICE]   (306023) : New worker (306025) forked
Feb 25 07:30:36 np0005629333 neutron-haproxy-ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f[306019]: [NOTICE]   (306023) : Loading success.
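[annotation] podman walks the usual create, init, start sequence for the neutron-haproxy-ovnmeta container, after which the haproxy master forks its worker. One way to observe the same sequence out of band is to stream podman events as JSON; a sketch only, since the JSON field names can vary between podman releases:

    import json
    import subprocess

    NAME = 'neutron-haproxy-ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f'
    proc = subprocess.Popen(
        ['podman', 'events', '--format', 'json',
         '--filter', 'container=%s' % NAME],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        # Expect statuses like create, init, start for this container.
        print(ev.get('Status'), ev.get('Name'))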
Feb 25 07:30:36 np0005629333 nova_compute[244014]: 2026-02-25 12:30:36.603 244018 DEBUG nova.network.neutron [req-6562fd7c-58da-47e3-8ea4-975dda99cf81 req-f700e374-28b0-4c67-a8be-3d59271aba4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Updated VIF entry in instance network info cache for port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:30:36 np0005629333 nova_compute[244014]: 2026-02-25 12:30:36.604 244018 DEBUG nova.network.neutron [req-6562fd7c-58da-47e3-8ea4-975dda99cf81 req-f700e374-28b0-4c67-a8be-3d59271aba4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Updating instance_info_cache with network_info: [{"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:30:36 np0005629333 nova_compute[244014]: 2026-02-25 12:30:36.634 244018 DEBUG oslo_concurrency.lockutils [req-6562fd7c-58da-47e3-8ea4-975dda99cf81 req-f700e374-28b0-4c67-a8be-3d59271aba4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d945940d-a1b5-4a36-b980-efda3a9efda6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:30:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:30:36 np0005629333 nova_compute[244014]: 2026-02-25 12:30:36.712 244018 DEBUG nova.compute.manager [req-5071cb96-955d-417f-af51-2fbd1390f6d6 req-09b1ed19-288f-41a8-b43a-0d13b632ed58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Received event network-vif-plugged-d09a3a83-0b97-4635-9188-5ab61ccc4626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:30:36 np0005629333 nova_compute[244014]: 2026-02-25 12:30:36.713 244018 DEBUG oslo_concurrency.lockutils [req-5071cb96-955d-417f-af51-2fbd1390f6d6 req-09b1ed19-288f-41a8-b43a-0d13b632ed58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5732c5fb-59b3-4590-b65a-a696b9c90152-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:36 np0005629333 nova_compute[244014]: 2026-02-25 12:30:36.713 244018 DEBUG oslo_concurrency.lockutils [req-5071cb96-955d-417f-af51-2fbd1390f6d6 req-09b1ed19-288f-41a8-b43a-0d13b632ed58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5732c5fb-59b3-4590-b65a-a696b9c90152-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:36 np0005629333 nova_compute[244014]: 2026-02-25 12:30:36.714 244018 DEBUG oslo_concurrency.lockutils [req-5071cb96-955d-417f-af51-2fbd1390f6d6 req-09b1ed19-288f-41a8-b43a-0d13b632ed58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5732c5fb-59b3-4590-b65a-a696b9c90152-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:36 np0005629333 nova_compute[244014]: 2026-02-25 12:30:36.714 244018 DEBUG nova.compute.manager [req-5071cb96-955d-417f-af51-2fbd1390f6d6 req-09b1ed19-288f-41a8-b43a-0d13b632ed58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Processing event network-vif-plugged-d09a3a83-0b97-4635-9188-5ab61ccc4626 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:30:36 np0005629333 nova_compute[244014]: 2026-02-25 12:30:36.715 244018 DEBUG nova.compute.manager [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:30:36 np0005629333 nova_compute[244014]: 2026-02-25 12:30:36.720 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022636.720331, 5732c5fb-59b3-4590-b65a-a696b9c90152 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:30:36 np0005629333 nova_compute[244014]: 2026-02-25 12:30:36.720 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:30:36 np0005629333 nova_compute[244014]: 2026-02-25 12:30:36.722 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:30:36 np0005629333 nova_compute[244014]: 2026-02-25 12:30:36.726 244018 INFO nova.virt.libvirt.driver [-] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Instance spawned successfully.#033[00m
Feb 25 07:30:36 np0005629333 nova_compute[244014]: 2026-02-25 12:30:36.726 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:30:36 np0005629333 nova_compute[244014]: 2026-02-25 12:30:36.747 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:30:36 np0005629333 nova_compute[244014]: 2026-02-25 12:30:36.752 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:30:36 np0005629333 nova_compute[244014]: 2026-02-25 12:30:36.752 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:30:36 np0005629333 nova_compute[244014]: 2026-02-25 12:30:36.753 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:30:36 np0005629333 nova_compute[244014]: 2026-02-25 12:30:36.753 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:30:36 np0005629333 nova_compute[244014]: 2026-02-25 12:30:36.754 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:30:36 np0005629333 nova_compute[244014]: 2026-02-25 12:30:36.754 244018 DEBUG nova.virt.libvirt.driver [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:30:36 np0005629333 nova_compute[244014]: 2026-02-25 12:30:36.757 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:30:36 np0005629333 nova_compute[244014]: 2026-02-25 12:30:36.795 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:30:36 np0005629333 nova_compute[244014]: 2026-02-25 12:30:36.821 244018 INFO nova.compute.manager [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Took 11.16 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:30:36 np0005629333 nova_compute[244014]: 2026-02-25 12:30:36.821 244018 DEBUG nova.compute.manager [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:30:36 np0005629333 nova_compute[244014]: 2026-02-25 12:30:36.887 244018 INFO nova.compute.manager [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Took 12.24 seconds to build instance.#033[00m
Feb 25 07:30:36 np0005629333 nova_compute[244014]: 2026-02-25 12:30:36.907 244018 DEBUG oslo_concurrency.lockutils [None req-2b6d2ce1-68de-4de5-97fc-f34a70f84ab8 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "5732c5fb-59b3-4590-b65a-a696b9c90152" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.325s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 327 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 447 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.272 244018 DEBUG oslo_concurrency.lockutils [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Acquiring lock "d945940d-a1b5-4a36-b980-efda3a9efda6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.273 244018 DEBUG oslo_concurrency.lockutils [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.273 244018 DEBUG oslo_concurrency.lockutils [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Acquiring lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.274 244018 DEBUG oslo_concurrency.lockutils [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.274 244018 DEBUG oslo_concurrency.lockutils [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.276 244018 INFO nova.compute.manager [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Terminating instance#033[00m
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.277 244018 DEBUG nova.compute.manager [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:30:38 np0005629333 kernel: tap197929cb-aa (unregistering): left promiscuous mode
Feb 25 07:30:38 np0005629333 NetworkManager[49836]: <info>  [1772022638.3460] device (tap197929cb-aa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:30:38 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:38Z|00668|binding|INFO|Releasing lport 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 from this chassis (sb_readonly=0)
Feb 25 07:30:38 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:38Z|00669|binding|INFO|Setting lport 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 down in Southbound
Feb 25 07:30:38 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:38Z|00670|binding|INFO|Removing iface tap197929cb-aa ovn-installed in OVS
Feb 25 07:30:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:38.362 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:10:e8 10.100.0.4'], port_security=['fa:16:3e:3f:10:e8 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd945940d-a1b5-4a36-b980-efda3a9efda6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a36e5ee4-5e24-4d80-83ee-01c487ff157c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '61371c73c9fb4961886c5c22f8f871e1', 'neutron:revision_number': '8', 'neutron:security_group_ids': '95094e28-96cd-49ef-b6c0-712f53ce443e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4bfc4d05-cf0b-42bf-9420-27556b829f8f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=197929cb-aaa4-48f0-a831-7ac3f4ac5b37) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:30:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:38.363 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 197929cb-aaa4-48f0-a831-7ac3f4ac5b37 in datapath a36e5ee4-5e24-4d80-83ee-01c487ff157c unbound from our chassis#033[00m
Feb 25 07:30:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:38.365 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a36e5ee4-5e24-4d80-83ee-01c487ff157c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
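[annotation] The teardown is event driven: the agent keeps a Port_Binding watcher on the OVN southbound, and the matched UPDATE above (chassis emptied, up flipping to false) tells it the port left this chassis. The rough shape of such a watcher, following ovsdbapp's RowEvent protocol; class and method bodies here are illustrative, not Neutron's exact implementation:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Matches the signature logged above:
            # events=('update',), table='Port_Binding', conditions=None
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # 'old' only carries changed columns, so this fires exactly
            # when the binding's chassis assignment was modified.
            return hasattr(old, 'chassis')

        def run(self, event, row, old):
            print('chassis changed for lport %s' % row.logical_port)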
Feb 25 07:30:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:38.366 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[50c2b022-4cfd-4da1-9320-92e6364a6883]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.367 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:38 np0005629333 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d00000045.scope: Deactivated successfully.
Feb 25 07:30:38 np0005629333 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d00000045.scope: Consumed 12.983s CPU time.
Feb 25 07:30:38 np0005629333 systemd-machined[210048]: Machine qemu-84-instance-00000045 terminated.
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.519 244018 INFO nova.virt.libvirt.driver [-] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Instance destroyed successfully.#033[00m
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.520 244018 DEBUG nova.objects.instance [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lazy-loading 'resources' on Instance uuid d945940d-a1b5-4a36-b980-efda3a9efda6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.556 244018 DEBUG nova.virt.libvirt.vif [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:29:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-579439343',display_name='tempest-ServerRescueTestJSONUnderV235-server-579439343',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-579439343',id=69,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:30:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='61371c73c9fb4961886c5c22f8f871e1',ramdisk_id='',reservation_id='r-yaoxup28',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-1059268888',owner_user_name='tempest-ServerRescueTestJSONUnderV235-1059268888-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:30:11Z,user_data=None,user_id='74af2f394ab04b06b55e62150e81b6b1',uuid=d945940d-a1b5-4a36-b980-efda3a9efda6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.557 244018 DEBUG nova.network.os_vif_util [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Converting VIF {"id": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "address": "fa:16:3e:3f:10:e8", "network": {"id": "a36e5ee4-5e24-4d80-83ee-01c487ff157c", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-1143367099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "61371c73c9fb4961886c5c22f8f871e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap197929cb-aa", "ovs_interfaceid": "197929cb-aaa4-48f0-a831-7ac3f4ac5b37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.559 244018 DEBUG nova.network.os_vif_util [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3f:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=197929cb-aaa4-48f0-a831-7ac3f4ac5b37,network=Network(a36e5ee4-5e24-4d80-83ee-01c487ff157c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap197929cb-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.559 244018 DEBUG os_vif [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=197929cb-aaa4-48f0-a831-7ac3f4ac5b37,network=Network(a36e5ee4-5e24-4d80-83ee-01c487ff157c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap197929cb-aa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.562 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.562 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap197929cb-aa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.564 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.567 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.570 244018 INFO os_vif [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:10:e8,bridge_name='br-int',has_traffic_filtering=True,id=197929cb-aaa4-48f0-a831-7ac3f4ac5b37,network=Network(a36e5ee4-5e24-4d80-83ee-01c487ff157c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap197929cb-aa')#033[00m
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.934 244018 DEBUG nova.compute.manager [req-1871b12c-ab34-4bf9-a7b4-57d951c2f962 req-52e4ed01-b76e-4a16-98db-a485fe57e7af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Received event network-vif-plugged-d09a3a83-0b97-4635-9188-5ab61ccc4626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.935 244018 DEBUG oslo_concurrency.lockutils [req-1871b12c-ab34-4bf9-a7b4-57d951c2f962 req-52e4ed01-b76e-4a16-98db-a485fe57e7af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5732c5fb-59b3-4590-b65a-a696b9c90152-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.935 244018 DEBUG oslo_concurrency.lockutils [req-1871b12c-ab34-4bf9-a7b4-57d951c2f962 req-52e4ed01-b76e-4a16-98db-a485fe57e7af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5732c5fb-59b3-4590-b65a-a696b9c90152-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.936 244018 DEBUG oslo_concurrency.lockutils [req-1871b12c-ab34-4bf9-a7b4-57d951c2f962 req-52e4ed01-b76e-4a16-98db-a485fe57e7af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5732c5fb-59b3-4590-b65a-a696b9c90152-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.936 244018 DEBUG nova.compute.manager [req-1871b12c-ab34-4bf9-a7b4-57d951c2f962 req-52e4ed01-b76e-4a16-98db-a485fe57e7af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] No waiting events found dispatching network-vif-plugged-d09a3a83-0b97-4635-9188-5ab61ccc4626 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.937 244018 WARNING nova.compute.manager [req-1871b12c-ab34-4bf9-a7b4-57d951c2f962 req-52e4ed01-b76e-4a16-98db-a485fe57e7af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Received unexpected event network-vif-plugged-d09a3a83-0b97-4635-9188-5ab61ccc4626 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.937 244018 DEBUG nova.compute.manager [req-1871b12c-ab34-4bf9-a7b4-57d951c2f962 req-52e4ed01-b76e-4a16-98db-a485fe57e7af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-vif-unplugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.937 244018 DEBUG oslo_concurrency.lockutils [req-1871b12c-ab34-4bf9-a7b4-57d951c2f962 req-52e4ed01-b76e-4a16-98db-a485fe57e7af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.938 244018 DEBUG oslo_concurrency.lockutils [req-1871b12c-ab34-4bf9-a7b4-57d951c2f962 req-52e4ed01-b76e-4a16-98db-a485fe57e7af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.938 244018 DEBUG oslo_concurrency.lockutils [req-1871b12c-ab34-4bf9-a7b4-57d951c2f962 req-52e4ed01-b76e-4a16-98db-a485fe57e7af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.938 244018 DEBUG nova.compute.manager [req-1871b12c-ab34-4bf9-a7b4-57d951c2f962 req-52e4ed01-b76e-4a16-98db-a485fe57e7af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] No waiting events found dispatching network-vif-unplugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:30:38 np0005629333 nova_compute[244014]: 2026-02-25 12:30:38.939 244018 DEBUG nova.compute.manager [req-1871b12c-ab34-4bf9-a7b4-57d951c2f962 req-52e4ed01-b76e-4a16-98db-a485fe57e7af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-vif-unplugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:30:39 np0005629333 nova_compute[244014]: 2026-02-25 12:30:39.098 244018 INFO nova.virt.libvirt.driver [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Deleting instance files /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6_del#033[00m
Feb 25 07:30:39 np0005629333 nova_compute[244014]: 2026-02-25 12:30:39.099 244018 INFO nova.virt.libvirt.driver [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Deletion of /var/lib/nova/instances/d945940d-a1b5-4a36-b980-efda3a9efda6_del complete#033[00m
Feb 25 07:30:39 np0005629333 nova_compute[244014]: 2026-02-25 12:30:39.155 244018 INFO nova.compute.manager [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Took 0.88 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:30:39 np0005629333 nova_compute[244014]: 2026-02-25 12:30:39.155 244018 DEBUG oslo.service.loopingcall [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:30:39 np0005629333 nova_compute[244014]: 2026-02-25 12:30:39.156 244018 DEBUG nova.compute.manager [-] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:30:39 np0005629333 nova_compute[244014]: 2026-02-25 12:30:39.156 244018 DEBUG nova.network.neutron [-] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:30:39 np0005629333 nova_compute[244014]: 2026-02-25 12:30:39.777 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 327 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 194 KiB/s rd, 17 KiB/s wr, 16 op/s
Feb 25 07:30:40 np0005629333 nova_compute[244014]: 2026-02-25 12:30:40.178 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:40 np0005629333 NetworkManager[49836]: <info>  [1772022640.1790] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/288)
Feb 25 07:30:40 np0005629333 NetworkManager[49836]: <info>  [1772022640.1798] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/289)
Feb 25 07:30:40 np0005629333 nova_compute[244014]: 2026-02-25 12:30:40.208 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:40 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:40Z|00671|binding|INFO|Releasing lport d456b40d-31d4-4447-97a8-c536382c29f9 from this chassis (sb_readonly=0)
Feb 25 07:30:40 np0005629333 nova_compute[244014]: 2026-02-25 12:30:40.230 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:40 np0005629333 nova_compute[244014]: 2026-02-25 12:30:40.501 244018 DEBUG nova.network.neutron [-] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:30:40 np0005629333 nova_compute[244014]: 2026-02-25 12:30:40.520 244018 INFO nova.compute.manager [-] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Took 1.36 seconds to deallocate network for instance.#033[00m
Feb 25 07:30:40 np0005629333 nova_compute[244014]: 2026-02-25 12:30:40.561 244018 DEBUG oslo_concurrency.lockutils [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:40 np0005629333 nova_compute[244014]: 2026-02-25 12:30:40.561 244018 DEBUG oslo_concurrency.lockutils [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:40 np0005629333 nova_compute[244014]: 2026-02-25 12:30:40.641 244018 DEBUG oslo_concurrency.processutils [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:41 np0005629333 nova_compute[244014]: 2026-02-25 12:30:41.055 244018 DEBUG nova.compute.manager [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:30:41 np0005629333 nova_compute[244014]: 2026-02-25 12:30:41.056 244018 DEBUG oslo_concurrency.lockutils [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:41 np0005629333 nova_compute[244014]: 2026-02-25 12:30:41.057 244018 DEBUG oslo_concurrency.lockutils [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:41 np0005629333 nova_compute[244014]: 2026-02-25 12:30:41.057 244018 DEBUG oslo_concurrency.lockutils [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:41 np0005629333 nova_compute[244014]: 2026-02-25 12:30:41.057 244018 DEBUG nova.compute.manager [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] No waiting events found dispatching network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:30:41 np0005629333 nova_compute[244014]: 2026-02-25 12:30:41.058 244018 WARNING nova.compute.manager [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received unexpected event network-vif-plugged-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:30:41 np0005629333 nova_compute[244014]: 2026-02-25 12:30:41.058 244018 DEBUG nova.compute.manager [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Received event network-vif-deleted-197929cb-aaa4-48f0-a831-7ac3f4ac5b37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:30:41 np0005629333 nova_compute[244014]: 2026-02-25 12:30:41.059 244018 DEBUG nova.compute.manager [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Received event network-changed-d09a3a83-0b97-4635-9188-5ab61ccc4626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:30:41 np0005629333 nova_compute[244014]: 2026-02-25 12:30:41.059 244018 DEBUG nova.compute.manager [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Refreshing instance network info cache due to event network-changed-d09a3a83-0b97-4635-9188-5ab61ccc4626. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:30:41 np0005629333 nova_compute[244014]: 2026-02-25 12:30:41.060 244018 DEBUG oslo_concurrency.lockutils [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-5732c5fb-59b3-4590-b65a-a696b9c90152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:30:41 np0005629333 nova_compute[244014]: 2026-02-25 12:30:41.060 244018 DEBUG oslo_concurrency.lockutils [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-5732c5fb-59b3-4590-b65a-a696b9c90152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:30:41 np0005629333 nova_compute[244014]: 2026-02-25 12:30:41.060 244018 DEBUG nova.network.neutron [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Refreshing network info cache for port d09a3a83-0b97-4635-9188-5ab61ccc4626 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:30:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:30:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1138094607' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:30:41 np0005629333 nova_compute[244014]: 2026-02-25 12:30:41.231 244018 DEBUG oslo_concurrency.processutils [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
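[annotation] The resource tracker measures Ceph-backed disk by shelling out, via oslo.concurrency, to the same command logged above. Reproduced directly; the --id/--conf values mirror the log, and the key name assumes ceph's usual `df --format=json` layout:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # Cluster-wide free space, the figure the tracker cares about.
    print(stats['stats']['total_avail_bytes'])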
Feb 25 07:30:41 np0005629333 nova_compute[244014]: 2026-02-25 12:30:41.239 244018 DEBUG nova.compute.provider_tree [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:30:41 np0005629333 nova_compute[244014]: 2026-02-25 12:30:41.275 244018 DEBUG nova.scheduler.client.report [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
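[annotation] The inventory dict above is what placement schedules against; usable capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the logged values:

    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2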
Feb 25 07:30:41 np0005629333 nova_compute[244014]: 2026-02-25 12:30:41.309 244018 DEBUG oslo_concurrency.lockutils [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:41 np0005629333 nova_compute[244014]: 2026-02-25 12:30:41.348 244018 INFO nova.scheduler.client.report [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Deleted allocations for instance d945940d-a1b5-4a36-b980-efda3a9efda6#033[00m
Feb 25 07:30:41 np0005629333 nova_compute[244014]: 2026-02-25 12:30:41.420 244018 DEBUG oslo_concurrency.lockutils [None req-18e85598-03e9-4930-bc9a-13dba2e2e60b 74af2f394ab04b06b55e62150e81b6b1 61371c73c9fb4961886c5c22f8f871e1 - - default default] Lock "d945940d-a1b5-4a36-b980-efda3a9efda6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:41 np0005629333 nova_compute[244014]: 2026-02-25 12:30:41.659 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Acquiring lock "25eece29-8689-47a8-b930-0492bf528de5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:41 np0005629333 nova_compute[244014]: 2026-02-25 12:30:41.660 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:41 np0005629333 nova_compute[244014]: 2026-02-25 12:30:41.678 244018 DEBUG nova.compute.manager [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:30:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:30:41 np0005629333 nova_compute[244014]: 2026-02-25 12:30:41.759 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:30:41 np0005629333 nova_compute[244014]: 2026-02-25 12:30:41.760 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:30:41 np0005629333 nova_compute[244014]: 2026-02-25 12:30:41.768 244018 DEBUG nova.virt.hardware [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:30:41 np0005629333 nova_compute[244014]: 2026-02-25 12:30:41.769 244018 INFO nova.compute.claims [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:30:41 np0005629333 nova_compute[244014]: 2026-02-25 12:30:41.915 244018 DEBUG oslo_concurrency.processutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:30:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 271 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 19 KiB/s wr, 83 op/s
Feb 25 07:30:42 np0005629333 nova_compute[244014]: 2026-02-25 12:30:42.009 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:30:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:30:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:30:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:30:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:30:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009143007851876558 of space, bias 1.0, pg target 0.2742902355562967 quantized to 32 (current 32)
Feb 25 07:30:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:30:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:30:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:30:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:30:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:30:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024925910720929524 of space, bias 1.0, pg target 0.7477773216278857 quantized to 32 (current 32)
Feb 25 07:30:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:30:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.641567330827097e-07 of space, bias 4.0, pg target 0.0010369880796992515 quantized to 16 (current 16)
Feb 25 07:30:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:30:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:30:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:30:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:30:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:30:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:30:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:30:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:30:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:30:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 07:30:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:30:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/683198314' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:30:42 np0005629333 nova_compute[244014]: 2026-02-25 12:30:42.484 244018 DEBUG oslo_concurrency.processutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:30:42 np0005629333 nova_compute[244014]: 2026-02-25 12:30:42.491 244018 DEBUG nova.compute.provider_tree [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:30:42 np0005629333 nova_compute[244014]: 2026-02-25 12:30:42.525 244018 DEBUG nova.scheduler.client.report [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:30:42 np0005629333 nova_compute[244014]: 2026-02-25 12:30:42.555 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:30:42 np0005629333 nova_compute[244014]: 2026-02-25 12:30:42.556 244018 DEBUG nova.compute.manager [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:30:42 np0005629333 nova_compute[244014]: 2026-02-25 12:30:42.615 244018 DEBUG nova.compute.manager [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:30:42 np0005629333 nova_compute[244014]: 2026-02-25 12:30:42.616 244018 DEBUG nova.network.neutron [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:30:42 np0005629333 nova_compute[244014]: 2026-02-25 12:30:42.644 244018 INFO nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:30:42 np0005629333 nova_compute[244014]: 2026-02-25 12:30:42.667 244018 DEBUG nova.compute.manager [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:30:42 np0005629333 nova_compute[244014]: 2026-02-25 12:30:42.756 244018 DEBUG nova.network.neutron [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Updated VIF entry in instance network info cache for port d09a3a83-0b97-4635-9188-5ab61ccc4626. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 07:30:42 np0005629333 nova_compute[244014]: 2026-02-25 12:30:42.757 244018 DEBUG nova.network.neutron [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5732c5fb-59b3-4590-b65a-a696b9c90152] Updating instance_info_cache with network_info: [{"id": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "address": "fa:16:3e:23:63:29", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd09a3a83-0b", "ovs_interfaceid": "d09a3a83-0b97-4635-9188-5ab61ccc4626", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:30:42 np0005629333 nova_compute[244014]: 2026-02-25 12:30:42.784 244018 DEBUG oslo_concurrency.lockutils [req-815ce73a-d0fd-4c09-ab6e-9bdb99597169 req-15abdd84-6875-4033-a7ad-fc2ef70a2d4e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-5732c5fb-59b3-4590-b65a-a696b9c90152" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:30:42 np0005629333 nova_compute[244014]: 2026-02-25 12:30:42.788 244018 DEBUG nova.compute.manager [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:30:42 np0005629333 nova_compute[244014]: 2026-02-25 12:30:42.790 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:30:42 np0005629333 nova_compute[244014]: 2026-02-25 12:30:42.791 244018 INFO nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Creating image(s)
Feb 25 07:30:42 np0005629333 nova_compute[244014]: 2026-02-25 12:30:42.823 244018 DEBUG nova.storage.rbd_utils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] rbd image 25eece29-8689-47a8-b930-0492bf528de5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:30:42 np0005629333 nova_compute[244014]: 2026-02-25 12:30:42.859 244018 DEBUG nova.storage.rbd_utils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] rbd image 25eece29-8689-47a8-b930-0492bf528de5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:30:42 np0005629333 nova_compute[244014]: 2026-02-25 12:30:42.890 244018 DEBUG nova.storage.rbd_utils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] rbd image 25eece29-8689-47a8-b930-0492bf528de5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:30:42 np0005629333 nova_compute[244014]: 2026-02-25 12:30:42.894 244018 DEBUG oslo_concurrency.processutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:30:42 np0005629333 nova_compute[244014]: 2026-02-25 12:30:42.926 244018 DEBUG nova.policy [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7f34eea4e6284ff7af83727c45d504ae', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f3f7599abe54e879797365670ae88f0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:30:42 np0005629333 nova_compute[244014]: 2026-02-25 12:30:42.979 244018 DEBUG oslo_concurrency.processutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:30:42 np0005629333 nova_compute[244014]: 2026-02-25 12:30:42.979 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:30:42 np0005629333 nova_compute[244014]: 2026-02-25 12:30:42.980 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:30:42 np0005629333 nova_compute[244014]: 2026-02-25 12:30:42.980 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:30:43 np0005629333 nova_compute[244014]: 2026-02-25 12:30:43.003 244018 DEBUG nova.storage.rbd_utils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] rbd image 25eece29-8689-47a8-b930-0492bf528de5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:30:43 np0005629333 nova_compute[244014]: 2026-02-25 12:30:43.006 244018 DEBUG oslo_concurrency.processutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 25eece29-8689-47a8-b930-0492bf528de5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:30:43 np0005629333 nova_compute[244014]: 2026-02-25 12:30:43.278 244018 DEBUG oslo_concurrency.processutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 25eece29-8689-47a8-b930-0492bf528de5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:30:43 np0005629333 nova_compute[244014]: 2026-02-25 12:30:43.350 244018 DEBUG nova.storage.rbd_utils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] resizing rbd image 25eece29-8689-47a8-b930-0492bf528de5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 07:30:43 np0005629333 nova_compute[244014]: 2026-02-25 12:30:43.448 244018 DEBUG nova.objects.instance [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lazy-loading 'migration_context' on Instance uuid 25eece29-8689-47a8-b930-0492bf528de5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:30:43 np0005629333 nova_compute[244014]: 2026-02-25 12:30:43.466 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:30:43 np0005629333 nova_compute[244014]: 2026-02-25 12:30:43.467 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Ensure instance console log exists: /var/lib/nova/instances/25eece29-8689-47a8-b930-0492bf528de5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:30:43 np0005629333 nova_compute[244014]: 2026-02-25 12:30:43.467 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:30:43 np0005629333 nova_compute[244014]: 2026-02-25 12:30:43.468 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:30:43 np0005629333 nova_compute[244014]: 2026-02-25 12:30:43.468 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:30:43 np0005629333 nova_compute[244014]: 2026-02-25 12:30:43.566 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:30:43 np0005629333 nova_compute[244014]: 2026-02-25 12:30:43.797 244018 DEBUG nova.network.neutron [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Successfully created port: 8c2528a5-82a5-4855-b5eb-dd3a5eba5030 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:30:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 200 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 18 KiB/s wr, 127 op/s
Feb 25 07:30:44 np0005629333 nova_compute[244014]: 2026-02-25 12:30:44.696 244018 DEBUG nova.network.neutron [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Successfully updated port: 8c2528a5-82a5-4855-b5eb-dd3a5eba5030 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:30:44 np0005629333 nova_compute[244014]: 2026-02-25 12:30:44.712 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Acquiring lock "refresh_cache-25eece29-8689-47a8-b930-0492bf528de5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:30:44 np0005629333 nova_compute[244014]: 2026-02-25 12:30:44.713 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Acquired lock "refresh_cache-25eece29-8689-47a8-b930-0492bf528de5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:30:44 np0005629333 nova_compute[244014]: 2026-02-25 12:30:44.713 244018 DEBUG nova.network.neutron [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:30:44 np0005629333 nova_compute[244014]: 2026-02-25 12:30:44.780 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:30:44 np0005629333 nova_compute[244014]: 2026-02-25 12:30:44.794 244018 DEBUG nova.compute.manager [req-87c9341c-1536-4549-a7a9-8a68d44c44db req-bb0bfaf4-b1a3-41de-9f61-bba42490cdfc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Received event network-changed-8c2528a5-82a5-4855-b5eb-dd3a5eba5030 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:30:44 np0005629333 nova_compute[244014]: 2026-02-25 12:30:44.794 244018 DEBUG nova.compute.manager [req-87c9341c-1536-4549-a7a9-8a68d44c44db req-bb0bfaf4-b1a3-41de-9f61-bba42490cdfc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Refreshing instance network info cache due to event network-changed-8c2528a5-82a5-4855-b5eb-dd3a5eba5030. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:30:44 np0005629333 nova_compute[244014]: 2026-02-25 12:30:44.794 244018 DEBUG oslo_concurrency.lockutils [req-87c9341c-1536-4549-a7a9-8a68d44c44db req-bb0bfaf4-b1a3-41de-9f61-bba42490cdfc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-25eece29-8689-47a8-b930-0492bf528de5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:30:44 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:44Z|00672|binding|INFO|Releasing lport d456b40d-31d4-4447-97a8-c536382c29f9 from this chassis (sb_readonly=0)
Feb 25 07:30:44 np0005629333 nova_compute[244014]: 2026-02-25 12:30:44.858 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:30:44 np0005629333 nova_compute[244014]: 2026-02-25 12:30:44.892 244018 DEBUG nova.network.neutron [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:30:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 200 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 18 KiB/s wr, 127 op/s
Feb 25 07:30:46 np0005629333 nova_compute[244014]: 2026-02-25 12:30:46.187 244018 DEBUG nova.network.neutron [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Updating instance_info_cache with network_info: [{"id": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "address": "fa:16:3e:a1:57:14", "network": {"id": "748b52bb-559a-4d75-b4f7-a397fd5e7e77", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1734495710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f3f7599abe54e879797365670ae88f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2528a5-82", "ovs_interfaceid": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:30:46 np0005629333 nova_compute[244014]: 2026-02-25 12:30:46.216 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Releasing lock "refresh_cache-25eece29-8689-47a8-b930-0492bf528de5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:30:46 np0005629333 nova_compute[244014]: 2026-02-25 12:30:46.217 244018 DEBUG nova.compute.manager [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Instance network_info: |[{"id": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "address": "fa:16:3e:a1:57:14", "network": {"id": "748b52bb-559a-4d75-b4f7-a397fd5e7e77", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1734495710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f3f7599abe54e879797365670ae88f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2528a5-82", "ovs_interfaceid": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 07:30:46 np0005629333 nova_compute[244014]: 2026-02-25 12:30:46.217 244018 DEBUG oslo_concurrency.lockutils [req-87c9341c-1536-4549-a7a9-8a68d44c44db req-bb0bfaf4-b1a3-41de-9f61-bba42490cdfc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-25eece29-8689-47a8-b930-0492bf528de5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:30:46 np0005629333 nova_compute[244014]: 2026-02-25 12:30:46.218 244018 DEBUG nova.network.neutron [req-87c9341c-1536-4549-a7a9-8a68d44c44db req-bb0bfaf4-b1a3-41de-9f61-bba42490cdfc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Refreshing network info cache for port 8c2528a5-82a5-4855-b5eb-dd3a5eba5030 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:30:46 np0005629333 nova_compute[244014]: 2026-02-25 12:30:46.222 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Start _get_guest_xml network_info=[{"id": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "address": "fa:16:3e:a1:57:14", "network": {"id": "748b52bb-559a-4d75-b4f7-a397fd5e7e77", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1734495710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f3f7599abe54e879797365670ae88f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2528a5-82", "ovs_interfaceid": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 07:30:46 np0005629333 nova_compute[244014]: 2026-02-25 12:30:46.229 244018 WARNING nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:30:46 np0005629333 nova_compute[244014]: 2026-02-25 12:30:46.243 244018 DEBUG nova.virt.libvirt.host [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:30:46 np0005629333 nova_compute[244014]: 2026-02-25 12:30:46.244 244018 DEBUG nova.virt.libvirt.host [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:30:46 np0005629333 nova_compute[244014]: 2026-02-25 12:30:46.250 244018 DEBUG nova.virt.libvirt.host [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 07:30:46 np0005629333 nova_compute[244014]: 2026-02-25 12:30:46.251 244018 DEBUG nova.virt.libvirt.host [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 07:30:46 np0005629333 nova_compute[244014]: 2026-02-25 12:30:46.252 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:30:46 np0005629333 nova_compute[244014]: 2026-02-25 12:30:46.252 244018 DEBUG nova.virt.hardware [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 07:30:46 np0005629333 nova_compute[244014]: 2026-02-25 12:30:46.252 244018 DEBUG nova.virt.hardware [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 07:30:46 np0005629333 nova_compute[244014]: 2026-02-25 12:30:46.253 244018 DEBUG nova.virt.hardware [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 07:30:46 np0005629333 nova_compute[244014]: 2026-02-25 12:30:46.253 244018 DEBUG nova.virt.hardware [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 07:30:46 np0005629333 nova_compute[244014]: 2026-02-25 12:30:46.253 244018 DEBUG nova.virt.hardware [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 07:30:46 np0005629333 nova_compute[244014]: 2026-02-25 12:30:46.254 244018 DEBUG nova.virt.hardware [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 07:30:46 np0005629333 nova_compute[244014]: 2026-02-25 12:30:46.254 244018 DEBUG nova.virt.hardware [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 07:30:46 np0005629333 nova_compute[244014]: 2026-02-25 12:30:46.254 244018 DEBUG nova.virt.hardware [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 07:30:46 np0005629333 nova_compute[244014]: 2026-02-25 12:30:46.255 244018 DEBUG nova.virt.hardware [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 07:30:46 np0005629333 nova_compute[244014]: 2026-02-25 12:30:46.255 244018 DEBUG nova.virt.hardware [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 07:30:46 np0005629333 nova_compute[244014]: 2026-02-25 12:30:46.255 244018 DEBUG nova.virt.hardware [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 07:30:46 np0005629333 nova_compute[244014]: 2026-02-25 12:30:46.259 244018 DEBUG oslo_concurrency.processutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:30:46 np0005629333 nova_compute[244014]: 2026-02-25 12:30:46.355 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:30:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:30:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:30:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2563724747' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:30:46 np0005629333 nova_compute[244014]: 2026-02-25 12:30:46.919 244018 DEBUG oslo_concurrency.processutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.660s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:30:46 np0005629333 nova_compute[244014]: 2026-02-25 12:30:46.938 244018 DEBUG nova.storage.rbd_utils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] rbd image 25eece29-8689-47a8-b930-0492bf528de5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:30:46 np0005629333 nova_compute[244014]: 2026-02-25 12:30:46.941 244018 DEBUG oslo_concurrency.processutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:30:47 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:47Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:23:63:29 10.100.0.11
Feb 25 07:30:47 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:47Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:23:63:29 10.100.0.11
Feb 25 07:30:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:30:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3035742841' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.473 244018 DEBUG oslo_concurrency.processutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.476 244018 DEBUG nova.virt.libvirt.vif [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:30:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-910589157',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-910589157',id=71,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f3f7599abe54e879797365670ae88f0',ramdisk_id='',reservation_id='r-ct0nuxro',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1961555021',owner_user_name='tempest-ServerTagsTestJSON-1961555021-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:30:42Z,user_data=None,user_id='7f34eea4e6284ff7af83727c45d504ae',uuid=25eece29-8689-47a8-b930-0492bf528de5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "address": "fa:16:3e:a1:57:14", "network": {"id": "748b52bb-559a-4d75-b4f7-a397fd5e7e77", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1734495710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f3f7599abe54e879797365670ae88f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2528a5-82", "ovs_interfaceid": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.477 244018 DEBUG nova.network.os_vif_util [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Converting VIF {"id": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "address": "fa:16:3e:a1:57:14", "network": {"id": "748b52bb-559a-4d75-b4f7-a397fd5e7e77", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1734495710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f3f7599abe54e879797365670ae88f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2528a5-82", "ovs_interfaceid": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.479 244018 DEBUG nova.network.os_vif_util [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:57:14,bridge_name='br-int',has_traffic_filtering=True,id=8c2528a5-82a5-4855-b5eb-dd3a5eba5030,network=Network(748b52bb-559a-4d75-b4f7-a397fd5e7e77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c2528a5-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.481 244018 DEBUG nova.objects.instance [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 25eece29-8689-47a8-b930-0492bf528de5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.498 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:30:47 np0005629333 nova_compute[244014]:  <uuid>25eece29-8689-47a8-b930-0492bf528de5</uuid>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:  <name>instance-00000047</name>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerTagsTestJSON-server-910589157</nova:name>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:30:46</nova:creationTime>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:30:47 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:        <nova:user uuid="7f34eea4e6284ff7af83727c45d504ae">tempest-ServerTagsTestJSON-1961555021-project-member</nova:user>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:        <nova:project uuid="1f3f7599abe54e879797365670ae88f0">tempest-ServerTagsTestJSON-1961555021</nova:project>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:        <nova:port uuid="8c2528a5-82a5-4855-b5eb-dd3a5eba5030">
Feb 25 07:30:47 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <entry name="serial">25eece29-8689-47a8-b930-0492bf528de5</entry>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <entry name="uuid">25eece29-8689-47a8-b930-0492bf528de5</entry>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/25eece29-8689-47a8-b930-0492bf528de5_disk">
Feb 25 07:30:47 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:30:47 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/25eece29-8689-47a8-b930-0492bf528de5_disk.config">
Feb 25 07:30:47 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:30:47 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:a1:57:14"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <target dev="tap8c2528a5-82"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/25eece29-8689-47a8-b930-0492bf528de5/console.log" append="off"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:30:47 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:30:47 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:30:47 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:30:47 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.501 244018 DEBUG nova.compute.manager [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Preparing to wait for external event network-vif-plugged-8c2528a5-82a5-4855-b5eb-dd3a5eba5030 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.503 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Acquiring lock "25eece29-8689-47a8-b930-0492bf528de5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.503 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.504 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
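The acquire/release pair above is oslo.concurrency's named internal lock serializing access to the per-instance event table before the compute manager waits for network-vif-plugged. A minimal sketch of the same pattern, with the lock name taken from the log and the guarded body assumed:

    from oslo_concurrency import lockutils

    # lockutils.lock() yields a process-internal semaphore by name; nova keys
    # it as "<instance uuid>-events" so event registration and pop serialize.
    with lockutils.lock('25eece29-8689-47a8-b930-0492bf528de5-events'):
        pass  # create or look up the pending network-vif-plugged event here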
Feb 25 07:30:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.505 244018 DEBUG nova.virt.libvirt.vif [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:30:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-910589157',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-910589157',id=71,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f3f7599abe54e879797365670ae88f0',ramdisk_id='',reservation_id='r-ct0nuxro',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1961555021',owner_user_name='tempest-ServerTagsTestJSON-1961555021-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:30:42Z,user_data=None,user_id='7f34eea4e6284ff7af83727c45d504ae',uuid=25eece29-8689-47a8-b930-0492bf528de5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "address": "fa:16:3e:a1:57:14", "network": {"id": "748b52bb-559a-4d75-b4f7-a397fd5e7e77", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1734495710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f3f7599abe54e879797365670ae88f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2528a5-82", "ovs_interfaceid": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:30:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2900533914' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.505 244018 DEBUG nova.network.os_vif_util [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Converting VIF {"id": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "address": "fa:16:3e:a1:57:14", "network": {"id": "748b52bb-559a-4d75-b4f7-a397fd5e7e77", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1734495710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f3f7599abe54e879797365670ae88f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2528a5-82", "ovs_interfaceid": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:30:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.506 244018 DEBUG nova.network.os_vif_util [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:57:14,bridge_name='br-int',has_traffic_filtering=True,id=8c2528a5-82a5-4855-b5eb-dd3a5eba5030,network=Network(748b52bb-559a-4d75-b4f7-a397fd5e7e77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c2528a5-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:30:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2900533914' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
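The interleaved ceph-mon entries are the storage drivers' periodic capacity probes ("df", "osd pool get-quota") arriving as monitor commands from client.openstack. A sketch of issuing the same df command through the librados Python bindings; the conffile and client id are the ones the log's rbd invocations use, the rest is an assumption:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        # Same payload the monitor logged: {"prefix":"df","format":"json"}
        ret, outbuf, errs = cluster.mon_command(
            json.dumps({'prefix': 'df', 'format': 'json'}), b'')
        print(json.loads(outbuf))
    finally:
        cluster.shutdown()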
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.507 244018 DEBUG os_vif [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:57:14,bridge_name='br-int',has_traffic_filtering=True,id=8c2528a5-82a5-4855-b5eb-dd3a5eba5030,network=Network(748b52bb-559a-4d75-b4f7-a397fd5e7e77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c2528a5-82') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.508 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.509 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.509 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.514 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.514 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c2528a5-82, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.515 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8c2528a5-82, col_values=(('external_ids', {'iface-id': '8c2528a5-82a5-4855-b5eb-dd3a5eba5030', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a1:57:14', 'vm-uuid': '25eece29-8689-47a8-b930-0492bf528de5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.517 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:47 np0005629333 NetworkManager[49836]: <info>  [1772022647.5183] manager: (tap8c2528a5-82): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/290)
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.520 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.521 244018 INFO os_vif [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:57:14,bridge_name='br-int',has_traffic_filtering=True,id=8c2528a5-82a5-4855-b5eb-dd3a5eba5030,network=Network(748b52bb-559a-4d75-b4f7-a397fd5e7e77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c2528a5-82')#033[00m
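The plug that os_vif just reported as successful is the pair of ovsdbapp transactions above: AddPortCommand on br-int plus DbSetCommand stamping the Interface row's external_ids. A rough command-line equivalent, for illustration only, since os-vif speaks the OVSDB protocol directly rather than shelling out; every name and id below is copied from the log:

    import subprocess

    subprocess.run(
        ['ovs-vsctl',
         '--', '--may-exist', 'add-port', 'br-int', 'tap8c2528a5-82',
         '--', 'set', 'Interface', 'tap8c2528a5-82',
         'external_ids:iface-id=8c2528a5-82a5-4855-b5eb-dd3a5eba5030',
         'external_ids:iface-status=active',
         'external_ids:attached-mac=fa:16:3e:a1:57:14',
         'external_ids:vm-uuid=25eece29-8689-47a8-b930-0492bf528de5'],
        check=True)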
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.578 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.578 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.579 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] No VIF found with MAC fa:16:3e:a1:57:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.579 244018 INFO nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Using config drive#033[00m
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.598 244018 DEBUG nova.storage.rbd_utils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] rbd image 25eece29-8689-47a8-b930-0492bf528de5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.873 244018 DEBUG nova.network.neutron [req-87c9341c-1536-4549-a7a9-8a68d44c44db req-bb0bfaf4-b1a3-41de-9f61-bba42490cdfc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Updated VIF entry in instance network info cache for port 8c2528a5-82a5-4855-b5eb-dd3a5eba5030. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.874 244018 DEBUG nova.network.neutron [req-87c9341c-1536-4549-a7a9-8a68d44c44db req-bb0bfaf4-b1a3-41de-9f61-bba42490cdfc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Updating instance_info_cache with network_info: [{"id": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "address": "fa:16:3e:a1:57:14", "network": {"id": "748b52bb-559a-4d75-b4f7-a397fd5e7e77", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1734495710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f3f7599abe54e879797365670ae88f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2528a5-82", "ovs_interfaceid": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.895 244018 DEBUG oslo_concurrency.lockutils [req-87c9341c-1536-4549-a7a9-8a68d44c44db req-bb0bfaf4-b1a3-41de-9f61-bba42490cdfc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-25eece29-8689-47a8-b930-0492bf528de5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:30:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 262 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.1 MiB/s wr, 178 op/s
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.941 244018 INFO nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Creating config drive at /var/lib/nova/instances/25eece29-8689-47a8-b930-0492bf528de5/disk.config#033[00m
Feb 25 07:30:47 np0005629333 nova_compute[244014]: 2026-02-25 12:30:47.945 244018 DEBUG oslo_concurrency.processutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/25eece29-8689-47a8-b930-0492bf528de5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmph0dwn_9c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.071 244018 DEBUG oslo_concurrency.processutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/25eece29-8689-47a8-b930-0492bf528de5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmph0dwn_9c" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.110 244018 DEBUG nova.storage.rbd_utils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] rbd image 25eece29-8689-47a8-b930-0492bf528de5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.115 244018 DEBUG oslo_concurrency.processutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/25eece29-8689-47a8-b930-0492bf528de5/disk.config 25eece29-8689-47a8-b930-0492bf528de5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.255 244018 DEBUG oslo_concurrency.processutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/25eece29-8689-47a8-b930-0492bf528de5/disk.config 25eece29-8689-47a8-b930-0492bf528de5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.256 244018 INFO nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Deleting local config drive /var/lib/nova/instances/25eece29-8689-47a8-b930-0492bf528de5/disk.config because it was imported into RBD.#033[00m
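After the import the config drive exists only as the RBD image vms/25eece29-8689-47a8-b930-0492bf528de5_disk.config, which is exactly what the domain XML's sata cdrom points at. A sketch for confirming the import with the python-rbd bindings; pool, image name, and client id come from the log, the verification step itself is an assumption:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            name = '25eece29-8689-47a8-b930-0492bf528de5_disk.config'
            with rbd.Image(ioctx, name) as img:
                print(img.size())  # bytes of the imported ISO9660 config drive
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()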
Feb 25 07:30:48 np0005629333 NetworkManager[49836]: <info>  [1772022648.2984] manager: (tap8c2528a5-82): new Tun device (/org/freedesktop/NetworkManager/Devices/291)
Feb 25 07:30:48 np0005629333 kernel: tap8c2528a5-82: entered promiscuous mode
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.302 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:48 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:48Z|00673|binding|INFO|Claiming lport 8c2528a5-82a5-4855-b5eb-dd3a5eba5030 for this chassis.
Feb 25 07:30:48 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:48Z|00674|binding|INFO|8c2528a5-82a5-4855-b5eb-dd3a5eba5030: Claiming fa:16:3e:a1:57:14 10.100.0.7
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.311 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:57:14 10.100.0.7'], port_security=['fa:16:3e:a1:57:14 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '25eece29-8689-47a8-b930-0492bf528de5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-748b52bb-559a-4d75-b4f7-a397fd5e7e77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f3f7599abe54e879797365670ae88f0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c8caf3e4-1d71-4f11-8550-c5b31d856ffc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f94153ad-cd99-45e9-920b-47039b1f034f, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=8c2528a5-82a5-4855-b5eb-dd3a5eba5030) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:30:48 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:48Z|00675|binding|INFO|Setting lport 8c2528a5-82a5-4855-b5eb-dd3a5eba5030 ovn-installed in OVS
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.313 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 8c2528a5-82a5-4855-b5eb-dd3a5eba5030 in datapath 748b52bb-559a-4d75-b4f7-a397fd5e7e77 bound to our chassis#033[00m
Feb 25 07:30:48 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:48Z|00676|binding|INFO|Setting lport 8c2528a5-82a5-4855-b5eb-dd3a5eba5030 up in Southbound
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.316 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 748b52bb-559a-4d75-b4f7-a397fd5e7e77#033[00m
Feb 25 07:30:48 np0005629333 systemd-udevd[306411]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.318 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.323 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.325 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ae097a23-e25f-42f5-95e5-99dd8eff81af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.326 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap748b52bb-51 in ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.328 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap748b52bb-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.329 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1b9c9956-4f8d-46ba-8bae-2acdb193a9df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.330 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8b90feb4-37af-40c6-ad83-db28b042b26a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:48 np0005629333 NetworkManager[49836]: <info>  [1772022648.3364] device (tap8c2528a5-82): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:30:48 np0005629333 NetworkManager[49836]: <info>  [1772022648.3369] device (tap8c2528a5-82): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:30:48 np0005629333 systemd-machined[210048]: New machine qemu-86-instance-00000047.
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.345 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[409d7142-a740-4b48-8350-b4b685f2e700]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:48 np0005629333 systemd[1]: Started Virtual Machine qemu-86-instance-00000047.
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.360 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a055118c-3ee4-40bd-9d81-70ccfe5183a9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.391 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4eb4a9c5-d555-4d78-bb69-23ef691e2f1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.399 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5edb7c5d-060c-4ac5-af79-a2b899cc5292]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:48 np0005629333 NetworkManager[49836]: <info>  [1772022648.4005] manager: (tap748b52bb-50): new Veth device (/org/freedesktop/NetworkManager/Devices/292)
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.426 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cd9f1c14-e4bc-4bb7-86fc-ece2c51ab008]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.429 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4934d40f-92a6-4b0e-bd8a-c7e6c9d59659]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:48 np0005629333 NetworkManager[49836]: <info>  [1772022648.4509] device (tap748b52bb-50): carrier: link connected
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.455 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[30472656-2af6-4d7b-9cc4-21e3a5635bc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.469 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9565e0a7-069f-4c58-a164-96dd3b2496e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap748b52bb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:f3:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461785, 'reachable_time': 25756, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306447, 'error': None, 'target': 'ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.490 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d725c026-f197-4590-aa08-36776ff8cb79]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe16:f364'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461785, 'tstamp': 461785}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306448, 'error': None, 'target': 'ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.506 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[07b0d5b6-983d-4551-9346-3c7ae3eecd73]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap748b52bb-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:f3:64'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 203], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461785, 'reachable_time': 25756, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306449, 'error': None, 'target': 'ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
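The two oversized privsep replies above are raw pyroute2 RTM_NEWLINK dumps for the veth leg tap748b52bb-51 inside the ovnmeta namespace, serialized back from neutron's privileged daemon. A sketch of the underlying query; neutron actually goes through neutron.privileged.agent.linux.ip_lib, so calling pyroute2 directly like this is an assumption:

    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77')
    try:
        idx = ns.link_lookup(ifname='tap748b52bb-51')[0]
        # Returns the same IFLA_* attribute families seen in the logged reply.
        print(ns.get_links(idx))
    finally:
        ns.close()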
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.537 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[12dd21ba-b1b4-430b-b124-cc014d646c37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.586 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[88f01dde-c1b5-4917-a1f7-969dac8fa7d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.588 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap748b52bb-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.589 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.589 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap748b52bb-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.591 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:48 np0005629333 NetworkManager[49836]: <info>  [1772022648.5923] manager: (tap748b52bb-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/293)
Feb 25 07:30:48 np0005629333 kernel: tap748b52bb-50: entered promiscuous mode
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.594 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.596 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap748b52bb-50, col_values=(('external_ids', {'iface-id': '8c2f9c93-8b5a-45ae-baad-19b412bcdcc8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.597 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:48 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:48Z|00677|binding|INFO|Releasing lport 8c2f9c93-8b5a-45ae-baad-19b412bcdcc8 from this chassis (sb_readonly=0)
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.599 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/748b52bb-559a-4d75-b4f7-a397fd5e7e77.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/748b52bb-559a-4d75-b4f7-a397fd5e7e77.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.600 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[13e36a55-6d42-4f36-a3ca-fe3c9b292352]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.601 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-748b52bb-559a-4d75-b4f7-a397fd5e7e77
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/748b52bb-559a-4d75-b4f7-a397fd5e7e77.pid.haproxy
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 748b52bb-559a-4d75-b4f7-a397fd5e7e77
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
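The generated haproxy configuration binds the metadata VIP 169.254.169.254:80 inside the ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77 namespace, tags each request with an X-OVN-Network-ID header, and relays it to the agent's UNIX socket at /var/lib/neutron/metadata_proxy. A sketch of poking it from the namespace once haproxy is up; this needs root, whether the reply is real metadata or an error depends on the source address mapping to a known port, and the curl layout is an assumption:

    import subprocess

    out = subprocess.run(
        ['ip', 'netns', 'exec', 'ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77',
         'curl', '-s', 'http://169.254.169.254/openstack/latest/meta_data.json'],
        capture_output=True, text=True, check=True)
    print(out.stdout)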
Feb 25 07:30:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:48.602 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77', 'env', 'PROCESS_TAG=haproxy-748b52bb-559a-4d75-b4f7-a397fd5e7e77', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/748b52bb-559a-4d75-b4f7-a397fd5e7e77.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.604 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.791 244018 DEBUG nova.compute.manager [req-7a1d20e1-de81-4f44-9ae4-d19275dd626e req-289c9ba6-4017-4754-985b-f733b4f2993a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Received event network-vif-plugged-8c2528a5-82a5-4855-b5eb-dd3a5eba5030 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.791 244018 DEBUG oslo_concurrency.lockutils [req-7a1d20e1-de81-4f44-9ae4-d19275dd626e req-289c9ba6-4017-4754-985b-f733b4f2993a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "25eece29-8689-47a8-b930-0492bf528de5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.791 244018 DEBUG oslo_concurrency.lockutils [req-7a1d20e1-de81-4f44-9ae4-d19275dd626e req-289c9ba6-4017-4754-985b-f733b4f2993a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.792 244018 DEBUG oslo_concurrency.lockutils [req-7a1d20e1-de81-4f44-9ae4-d19275dd626e req-289c9ba6-4017-4754-985b-f733b4f2993a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.792 244018 DEBUG nova.compute.manager [req-7a1d20e1-de81-4f44-9ae4-d19275dd626e req-289c9ba6-4017-4754-985b-f733b4f2993a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Processing event network-vif-plugged-8c2528a5-82a5-4855-b5eb-dd3a5eba5030 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
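[editor's note] The Acquiring/acquired/released trio above is the standard oslo.concurrency pattern nova uses to serialize per-instance event handling. A small sketch of the same pattern, with the lock name taken from the log and a hypothetical handler body:

    from oslo_concurrency import lockutils

    def pop_instance_event(instance_uuid, event_name):
        # Entering and leaving this context produces the "acquired"/
        # "released" DEBUG lines seen above.
        with lockutils.lock(f"{instance_uuid}-events"):
            pass  # look up and signal the waiter registered for event_name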
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.896 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022648.89564, 25eece29-8689-47a8-b930-0492bf528de5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.896 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 25eece29-8689-47a8-b930-0492bf528de5] VM Started (Lifecycle Event)#033[00m
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.898 244018 DEBUG nova.compute.manager [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.901 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.904 244018 INFO nova.virt.libvirt.driver [-] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Instance spawned successfully.#033[00m
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.904 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.930 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.931 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.931 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.932 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.932 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.932 244018 DEBUG nova.virt.libvirt.driver [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.935 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:30:48 np0005629333 nova_compute[244014]: 2026-02-25 12:30:48.938 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
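[editor's note] In the sync line above, the database still records power_state 0 while the hypervisor reports 1; these are nova's power-state constants (nova.compute.power_state: NOSTATE=0, RUNNING=1). The "Skip." that follows at 12:30:49.070 happens because task_state is still 'spawning'. A hedged sketch of that guard:

    # nova.compute.power_state constants relevant to the lines above.
    NOSTATE, RUNNING = 0, 1

    def should_sync_power_state(db_state, vm_state, task_state):
        # Sketch of the behaviour logged above: a pending task such as
        # 'spawning' suppresses the sync ("Skip."); otherwise any mismatch
        # between DB and hypervisor state triggers a resync.
        if task_state is not None:
            return False
        return db_state != vm_state

    assert should_sync_power_state(NOSTATE, RUNNING, "spawning") is False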
Feb 25 07:30:48 np0005629333 podman[306523]: 2026-02-25 12:30:48.972761221 +0000 UTC m=+0.053872529 container create 3ad8cc0134873527b474d022df82b4563639cec7ba0bd708d828106c9a4e2f4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 07:30:49 np0005629333 systemd[1]: Started libpod-conmon-3ad8cc0134873527b474d022df82b4563639cec7ba0bd708d828106c9a4e2f4b.scope.
Feb 25 07:30:49 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:30:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb79d7d65570a7eb27f03f2a9bdb296e9b93b584c351901416e7c35c831aebbc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:30:49 np0005629333 podman[306523]: 2026-02-25 12:30:48.949762978 +0000 UTC m=+0.030874306 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:30:49 np0005629333 podman[306523]: 2026-02-25 12:30:49.05629941 +0000 UTC m=+0.137410718 container init 3ad8cc0134873527b474d022df82b4563639cec7ba0bd708d828106c9a4e2f4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:30:49 np0005629333 podman[306523]: 2026-02-25 12:30:49.063816663 +0000 UTC m=+0.144927971 container start 3ad8cc0134873527b474d022df82b4563639cec7ba0bd708d828106c9a4e2f4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
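[editor's note] The podman lines carry a monotonic offset ("m=+...s") measured from the start of the podman invocation, so the create/init/start triple above converts directly into startup latencies:

    # Offsets copied from the three podman log lines above.
    create, init, start = 0.053872529, 0.137410718, 0.144927971
    print(f"create -> init:  {(init - create) * 1000:.1f} ms")   # ~83.5 ms
    print(f"init   -> start: {(start - init) * 1000:.1f} ms")    # ~7.5 ms
    print(f"create -> start: {(start - create) * 1000:.1f} ms")  # ~91.1 ms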
Feb 25 07:30:49 np0005629333 nova_compute[244014]: 2026-02-25 12:30:49.070 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 25eece29-8689-47a8-b930-0492bf528de5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:30:49 np0005629333 nova_compute[244014]: 2026-02-25 12:30:49.071 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022648.8959663, 25eece29-8689-47a8-b930-0492bf528de5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:30:49 np0005629333 nova_compute[244014]: 2026-02-25 12:30:49.071 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 25eece29-8689-47a8-b930-0492bf528de5] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:30:49 np0005629333 nova_compute[244014]: 2026-02-25 12:30:49.073 244018 INFO nova.compute.manager [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Took 6.28 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:30:49 np0005629333 nova_compute[244014]: 2026-02-25 12:30:49.073 244018 DEBUG nova.compute.manager [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:30:49 np0005629333 neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77[306538]: [NOTICE]   (306542) : New worker (306544) forked
Feb 25 07:30:49 np0005629333 neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77[306538]: [NOTICE]   (306542) : Loading success.
Feb 25 07:30:49 np0005629333 nova_compute[244014]: 2026-02-25 12:30:49.341 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:30:49 np0005629333 nova_compute[244014]: 2026-02-25 12:30:49.347 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022648.9006157, 25eece29-8689-47a8-b930-0492bf528de5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:30:49 np0005629333 nova_compute[244014]: 2026-02-25 12:30:49.347 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 25eece29-8689-47a8-b930-0492bf528de5] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:30:49 np0005629333 nova_compute[244014]: 2026-02-25 12:30:49.369 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:30:49 np0005629333 nova_compute[244014]: 2026-02-25 12:30:49.372 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:30:49 np0005629333 nova_compute[244014]: 2026-02-25 12:30:49.380 244018 INFO nova.compute.manager [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Took 7.64 seconds to build instance.#033[00m
Feb 25 07:30:49 np0005629333 nova_compute[244014]: 2026-02-25 12:30:49.427 244018 DEBUG oslo_concurrency.lockutils [None req-eee63682-35e1-43c7-b489-620b5334bdd9 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:49 np0005629333 nova_compute[244014]: 2026-02-25 12:30:49.782 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 262 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.0 MiB/s wr, 162 op/s
Feb 25 07:30:51 np0005629333 nova_compute[244014]: 2026-02-25 12:30:51.216 244018 DEBUG nova.compute.manager [req-a494aa71-266a-4e0a-9d69-86200bb21eda req-fed4fb2a-a3a9-4008-8843-cf9f21e9d77b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Received event network-vif-plugged-8c2528a5-82a5-4855-b5eb-dd3a5eba5030 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:30:51 np0005629333 nova_compute[244014]: 2026-02-25 12:30:51.218 244018 DEBUG oslo_concurrency.lockutils [req-a494aa71-266a-4e0a-9d69-86200bb21eda req-fed4fb2a-a3a9-4008-8843-cf9f21e9d77b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "25eece29-8689-47a8-b930-0492bf528de5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:51 np0005629333 nova_compute[244014]: 2026-02-25 12:30:51.218 244018 DEBUG oslo_concurrency.lockutils [req-a494aa71-266a-4e0a-9d69-86200bb21eda req-fed4fb2a-a3a9-4008-8843-cf9f21e9d77b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:51 np0005629333 nova_compute[244014]: 2026-02-25 12:30:51.219 244018 DEBUG oslo_concurrency.lockutils [req-a494aa71-266a-4e0a-9d69-86200bb21eda req-fed4fb2a-a3a9-4008-8843-cf9f21e9d77b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:51 np0005629333 nova_compute[244014]: 2026-02-25 12:30:51.219 244018 DEBUG nova.compute.manager [req-a494aa71-266a-4e0a-9d69-86200bb21eda req-fed4fb2a-a3a9-4008-8843-cf9f21e9d77b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] No waiting events found dispatching network-vif-plugged-8c2528a5-82a5-4855-b5eb-dd3a5eba5030 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:30:51 np0005629333 nova_compute[244014]: 2026-02-25 12:30:51.219 244018 WARNING nova.compute.manager [req-a494aa71-266a-4e0a-9d69-86200bb21eda req-fed4fb2a-a3a9-4008-8843-cf9f21e9d77b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Received unexpected event network-vif-plugged-8c2528a5-82a5-4855-b5eb-dd3a5eba5030 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:30:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:30:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 278 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.9 MiB/s wr, 261 op/s
Feb 25 07:30:52 np0005629333 nova_compute[244014]: 2026-02-25 12:30:52.252 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Acquiring lock "7715b665-70af-4b84-b8a3-85ccea6ab805" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:52 np0005629333 nova_compute[244014]: 2026-02-25 12:30:52.254 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:52 np0005629333 nova_compute[244014]: 2026-02-25 12:30:52.283 244018 DEBUG nova.compute.manager [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:30:52 np0005629333 nova_compute[244014]: 2026-02-25 12:30:52.378 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:52 np0005629333 nova_compute[244014]: 2026-02-25 12:30:52.379 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:52 np0005629333 nova_compute[244014]: 2026-02-25 12:30:52.387 244018 DEBUG nova.virt.hardware [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:30:52 np0005629333 nova_compute[244014]: 2026-02-25 12:30:52.387 244018 INFO nova.compute.claims [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:30:52 np0005629333 nova_compute[244014]: 2026-02-25 12:30:52.519 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:52 np0005629333 nova_compute[244014]: 2026-02-25 12:30:52.543 244018 DEBUG oslo_concurrency.processutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:30:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2129785163' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.069 244018 DEBUG oslo_concurrency.processutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.076 244018 DEBUG nova.compute.provider_tree [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.103 244018 DEBUG nova.scheduler.client.report [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
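[editor's note] Placement turns the inventory dict above into usable capacity per resource class as (total - reserved) * allocation_ratio. Worked out for the values this node reports:

    # Inventory copied from the report.py line above (unchanged fields omitted).
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2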
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.140 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.141 244018 DEBUG nova.compute.manager [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.210 244018 DEBUG nova.compute.manager [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.211 244018 DEBUG nova.network.neutron [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.237 244018 INFO nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.264 244018 DEBUG nova.compute.manager [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.390 244018 DEBUG nova.compute.manager [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.392 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.393 244018 INFO nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Creating image(s)#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.419 244018 DEBUG nova.storage.rbd_utils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] rbd image 7715b665-70af-4b84-b8a3-85ccea6ab805_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.445 244018 DEBUG nova.storage.rbd_utils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] rbd image 7715b665-70af-4b84-b8a3-85ccea6ab805_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.477 244018 DEBUG nova.storage.rbd_utils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] rbd image 7715b665-70af-4b84-b8a3-85ccea6ab805_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.481 244018 DEBUG oslo_concurrency.processutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.516 244018 DEBUG nova.policy [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '87b7b8b029dc48549d8d5982d7329f63', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c7f30b1d5a1f4604bb44f655b6be0571', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
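[editor's note] The DEBUG line above is oslo.policy reporting an authorization result of False for a caller holding only reader/member roles. A hedged, self-contained sketch of that kind of check; the role:admin default here is an assumption for illustration, not nova's actual rule definition:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    # Assumed default for illustration; nova registers its own rule set.
    enforcer.register_default(
        policy.RuleDefault("network:attach_external_network", "role:admin"))

    creds = {  # trimmed from the credential dict in the log line above
        "roles": ["reader", "member"],
        "project_id": "c7f30b1d5a1f4604bb44f655b6be0571",
        "user_id": "87b7b8b029dc48549d8d5982d7329f63",
    }
    # do_raise=False yields the logged "Policy check ... failed" outcome
    # as a False return value instead of an exception.
    allowed = enforcer.authorize(
        "network:attach_external_network", {}, creds, do_raise=False)
    print(allowed)  # False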
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.522 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022638.5156546, d945940d-a1b5-4a36-b980-efda3a9efda6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.522 244018 INFO nova.compute.manager [-] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.524 244018 DEBUG oslo_concurrency.lockutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "e954c936-91fe-4aa5-8c91-78ec08c85221" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.524 244018 DEBUG oslo_concurrency.lockutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "e954c936-91fe-4aa5-8c91-78ec08c85221" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.551 244018 DEBUG nova.compute.manager [None req-def9d0e1-9cc0-45cd-a8f2-960ffcd6cd3a - - - - - -] [instance: d945940d-a1b5-4a36-b980-efda3a9efda6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.560 244018 DEBUG nova.compute.manager [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.566 244018 DEBUG oslo_concurrency.processutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
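[editor's note] The qemu-img probe above runs under oslo.concurrency's prlimit wrapper, which caps the child's address space at 1 GiB and its CPU time at 30 s so a malformed image cannot wedge the compute host. The same invocation, reproduced and parsed:

    import json
    import subprocess

    BASE = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"

    # Command copied from the log line above; --force-share allows probing
    # an image that may be open elsewhere, --output=json gives parseable output.
    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824", "--cpu=30", "--",
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", BASE, "--force-share", "--output=json",
    ]
    info = json.loads(subprocess.check_output(cmd))
    print(info["format"], info["virtual-size"])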
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.566 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.567 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.567 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.590 244018 DEBUG nova.storage.rbd_utils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] rbd image 7715b665-70af-4b84-b8a3-85ccea6ab805_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.592 244018 DEBUG oslo_concurrency.processutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 7715b665-70af-4b84-b8a3-85ccea6ab805_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
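[editor's note] With no existing RBD image (the three "does not exist" probes above), nova imports the cached base file into the vms pool under the instance's _disk name. The logged command, reproduced as a subprocess call:

    import subprocess

    # The import seen above: seed the RBD pool 'vms' with the cached base
    # image as <instance_uuid>_disk, image format 2 (layering-capable).
    subprocess.run([
        "rbd", "import", "--pool", "vms",
        "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6",
        "7715b665-70af-4b84-b8a3-85ccea6ab805_disk",
        "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ], check=True)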
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.634 244018 DEBUG oslo_concurrency.lockutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.635 244018 DEBUG oslo_concurrency.lockutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.645 244018 DEBUG nova.virt.hardware [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.645 244018 INFO nova.compute.claims [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.689 244018 DEBUG oslo_concurrency.lockutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "9b9beb6f-4642-40d1-b9e5-96345c682341" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.689 244018 DEBUG oslo_concurrency.lockutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "9b9beb6f-4642-40d1-b9e5-96345c682341" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.709 244018 DEBUG nova.compute.manager [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.776 244018 DEBUG oslo_concurrency.lockutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:53 np0005629333 nova_compute[244014]: 2026-02-25 12:30:53.838 244018 DEBUG oslo_concurrency.processutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 279 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 208 op/s
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.182 244018 DEBUG oslo_concurrency.lockutils [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Acquiring lock "25eece29-8689-47a8-b930-0492bf528de5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.183 244018 DEBUG oslo_concurrency.lockutils [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.183 244018 DEBUG oslo_concurrency.lockutils [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Acquiring lock "25eece29-8689-47a8-b930-0492bf528de5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.183 244018 DEBUG oslo_concurrency.lockutils [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.184 244018 DEBUG oslo_concurrency.lockutils [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.185 244018 INFO nova.compute.manager [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Terminating instance#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.186 244018 DEBUG nova.compute.manager [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:30:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:30:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1602529104' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:30:54 np0005629333 kernel: tap8c2528a5-82 (unregistering): left promiscuous mode
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.469 244018 DEBUG oslo_concurrency.processutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:30:54 np0005629333 NetworkManager[49836]: <info>  [1772022654.4719] device (tap8c2528a5-82): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.477 244018 DEBUG nova.compute.provider_tree [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:30:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:54Z|00678|binding|INFO|Releasing lport 8c2528a5-82a5-4855-b5eb-dd3a5eba5030 from this chassis (sb_readonly=0)
Feb 25 07:30:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:54Z|00679|binding|INFO|Setting lport 8c2528a5-82a5-4855-b5eb-dd3a5eba5030 down in Southbound
Feb 25 07:30:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:30:54Z|00680|binding|INFO|Removing iface tap8c2528a5-82 ovn-installed in OVS
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.481 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.489 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:54 np0005629333 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d00000047.scope: Deactivated successfully.
Feb 25 07:30:54 np0005629333 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d00000047.scope: Consumed 5.915s CPU time.
Feb 25 07:30:54 np0005629333 systemd-machined[210048]: Machine qemu-86-instance-00000047 terminated.
Feb 25 07:30:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:54.636 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:57:14 10.100.0.7'], port_security=['fa:16:3e:a1:57:14 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '25eece29-8689-47a8-b930-0492bf528de5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-748b52bb-559a-4d75-b4f7-a397fd5e7e77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f3f7599abe54e879797365670ae88f0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c8caf3e4-1d71-4f11-8550-c5b31d856ffc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f94153ad-cd99-45e9-920b-47039b1f034f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=8c2528a5-82a5-4855-b5eb-dd3a5eba5030) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
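[editor's note] The "Matched UPDATE" line above is ovsdbapp's event matcher firing on a Port_Binding row whose up/chassis columns changed when the port was released. A hedged sketch of how such an event class is declared against ovsdbapp's IDL event base (the handler body is illustrative, not the agent's actual code):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fire on updates to Port_Binding rows, as matched in the log."""

        def __init__(self):
            # Matches the repr above: events=('update',),
            # table='Port_Binding', conditions=None.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' carries the previous values of the changed columns,
            # e.g. up=[True] before the port went down above.
            print('port', row.logical_port, 'up ->', row.up)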
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.636 244018 INFO nova.virt.libvirt.driver [-] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Instance destroyed successfully.#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.637 244018 DEBUG nova.objects.instance [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lazy-loading 'resources' on Instance uuid 25eece29-8689-47a8-b930-0492bf528de5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:30:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:54.639 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 8c2528a5-82a5-4855-b5eb-dd3a5eba5030 in datapath 748b52bb-559a-4d75-b4f7-a397fd5e7e77 unbound from our chassis#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.639 244018 DEBUG nova.scheduler.client.report [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:30:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:54.641 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 748b52bb-559a-4d75-b4f7-a397fd5e7e77, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:30:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:54.643 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cd638720-4d89-4b34-a6ba-1acf1b3258ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:54.644 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77 namespace which is not needed anymore#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.670 244018 DEBUG nova.virt.libvirt.vif [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:30:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-910589157',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-910589157',id=71,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:30:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f3f7599abe54e879797365670ae88f0',ramdisk_id='',reservation_id='r-ct0nuxro',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerTagsTestJSON-1961555021',owner_user_name='tempest-ServerTagsTestJSON-1961555021-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:30:49Z,user_data=None,user_id='7f34eea4e6284ff7af83727c45d504ae',uuid=25eece29-8689-47a8-b930-0492bf528de5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "address": "fa:16:3e:a1:57:14", "network": {"id": "748b52bb-559a-4d75-b4f7-a397fd5e7e77", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1734495710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f3f7599abe54e879797365670ae88f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2528a5-82", "ovs_interfaceid": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.671 244018 DEBUG nova.network.os_vif_util [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Converting VIF {"id": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "address": "fa:16:3e:a1:57:14", "network": {"id": "748b52bb-559a-4d75-b4f7-a397fd5e7e77", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1734495710-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f3f7599abe54e879797365670ae88f0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8c2528a5-82", "ovs_interfaceid": "8c2528a5-82a5-4855-b5eb-dd3a5eba5030", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.673 244018 DEBUG nova.network.os_vif_util [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:57:14,bridge_name='br-int',has_traffic_filtering=True,id=8c2528a5-82a5-4855-b5eb-dd3a5eba5030,network=Network(748b52bb-559a-4d75-b4f7-a397fd5e7e77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c2528a5-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.674 244018 DEBUG os_vif [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:57:14,bridge_name='br-int',has_traffic_filtering=True,id=8c2528a5-82a5-4855-b5eb-dd3a5eba5030,network=Network(748b52bb-559a-4d75-b4f7-a397fd5e7e77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c2528a5-82') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.678 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.679 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c2528a5-82, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.682 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.685 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.687 244018 INFO os_vif [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:57:14,bridge_name='br-int',has_traffic_filtering=True,id=8c2528a5-82a5-4855-b5eb-dd3a5eba5030,network=Network(748b52bb-559a-4d75-b4f7-a397fd5e7e77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8c2528a5-82')#033[00m
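[editor's note] Lines 12:30:54.670 through 12:30:54.687 show nova converting its VIF dict to an os-vif object and delegating the unplug to the 'ovs' plugin, which issued the DelPortCommand above. A rough, hedged sketch of that os-vif call path; field values are copied from the logged VIF, but the exact set of fields required by the plugin is an assumption:

    import os_vif
    from os_vif.objects import instance_info, vif

    # Load the os-vif plugins (nova does this once at driver startup).
    os_vif.initialize()

    vif_obj = vif.VIFOpenVSwitch(
        id="8c2528a5-82a5-4855-b5eb-dd3a5eba5030",
        address="fa:16:3e:a1:57:14",
        bridge_name="br-int",
        vif_name="tap8c2528a5-82",
        plugin="ovs",
    )
    info = instance_info.InstanceInfo(
        uuid="25eece29-8689-47a8-b930-0492bf528de5",
        name="instance-00000047",
    )
    # Dispatches to the 'ovs' plugin, which removes the tap port from
    # br-int, mirroring the "Successfully unplugged vif" line above.
    os_vif.unplug(vif_obj, info)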
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.772 244018 DEBUG oslo_concurrency.lockutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.773 244018 DEBUG nova.compute.manager [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.776 244018 DEBUG oslo_concurrency.lockutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.785 244018 DEBUG nova.virt.hardware [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.786 244018 INFO nova.compute.claims [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.788 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.809 244018 DEBUG nova.network.neutron [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Successfully created port: e61932f1-9a36-4c95-a52f-470c182ac70f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.845 244018 DEBUG nova.compute.manager [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.849 244018 DEBUG nova.compute.manager [req-6753044f-64f2-4723-9ea4-5df1b29bc82d req-48b2053a-517a-452f-8486-4d75729009a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Received event network-vif-unplugged-8c2528a5-82a5-4855-b5eb-dd3a5eba5030 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.849 244018 DEBUG oslo_concurrency.lockutils [req-6753044f-64f2-4723-9ea4-5df1b29bc82d req-48b2053a-517a-452f-8486-4d75729009a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "25eece29-8689-47a8-b930-0492bf528de5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.849 244018 DEBUG oslo_concurrency.lockutils [req-6753044f-64f2-4723-9ea4-5df1b29bc82d req-48b2053a-517a-452f-8486-4d75729009a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.849 244018 DEBUG oslo_concurrency.lockutils [req-6753044f-64f2-4723-9ea4-5df1b29bc82d req-48b2053a-517a-452f-8486-4d75729009a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.850 244018 DEBUG nova.compute.manager [req-6753044f-64f2-4723-9ea4-5df1b29bc82d req-48b2053a-517a-452f-8486-4d75729009a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] No waiting events found dispatching network-vif-unplugged-8c2528a5-82a5-4855-b5eb-dd3a5eba5030 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.850 244018 DEBUG nova.compute.manager [req-6753044f-64f2-4723-9ea4-5df1b29bc82d req-48b2053a-517a-452f-8486-4d75729009a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Received event network-vif-unplugged-8c2528a5-82a5-4855-b5eb-dd3a5eba5030 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
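
The Acquiring/acquired/released triplets around "25eece29-...-events" are oslo.concurrency's standard lock logging from lockutils.py. A minimal sketch of the pattern that produces those three DEBUG lines, with an illustrative lock name (not nova's):

    # Minimal sketch of the oslo.concurrency lock wrapper behind the
    # 'Acquiring lock ... / Lock ... acquired / Lock ... "released"' lines.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('demo-instance-events')  # illustrative name
    def pop_event(events, key):
        # Body runs under the named in-process semaphore; entry and exit
        # are what lockutils logs as acquired/released.
        return events.pop(key, None)

    print(pop_event({'network-vif-unplugged': 'done'},
                    'network-vif-unplugged'))
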
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.862 244018 INFO nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.880 244018 DEBUG nova.compute.manager [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.962 244018 DEBUG oslo_concurrency.processutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.987 244018 DEBUG nova.compute.manager [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.989 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:30:54 np0005629333 nova_compute[244014]: 2026-02-25 12:30:54.989 244018 INFO nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Creating image(s)#033[00m
Feb 25 07:30:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:55.012 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:55.012 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:55.013 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:55 np0005629333 neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77[306538]: [NOTICE]   (306542) : haproxy version is 2.8.14-c23fe91
Feb 25 07:30:55 np0005629333 neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77[306538]: [NOTICE]   (306542) : path to executable is /usr/sbin/haproxy
Feb 25 07:30:55 np0005629333 neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77[306538]: [WARNING]  (306542) : Exiting Master process...
Feb 25 07:30:55 np0005629333 neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77[306538]: [ALERT]    (306542) : Current worker (306544) exited with code 143 (Terminated)
Feb 25 07:30:55 np0005629333 neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77[306538]: [WARNING]  (306542) : All workers exited. Exiting... (0)
Feb 25 07:30:55 np0005629333 systemd[1]: libpod-3ad8cc0134873527b474d022df82b4563639cec7ba0bd708d828106c9a4e2f4b.scope: Deactivated successfully.
Feb 25 07:30:55 np0005629333 podman[306734]: 2026-02-25 12:30:55.126899796 +0000 UTC m=+0.393295604 container died 3ad8cc0134873527b474d022df82b4563639cec7ba0bd708d828106c9a4e2f4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.147 244018 DEBUG nova.storage.rbd_utils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image e954c936-91fe-4aa5-8c91-78ec08c85221_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.168 244018 DEBUG nova.storage.rbd_utils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image e954c936-91fe-4aa5-8c91-78ec08c85221_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.190 244018 DEBUG nova.storage.rbd_utils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image e954c936-91fe-4aa5-8c91-78ec08c85221_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.192 244018 DEBUG oslo_concurrency.processutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.244 244018 DEBUG oslo_concurrency.processutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
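
The qemu-img probe runs under oslo.concurrency's prlimit wrapper, capping address space at 1 GiB (--as=1073741824) and CPU time at 30 s so a corrupt or hostile image cannot hang the compute service. A sketch of the same invocation, reusing the base-image path from the log:

    # Minimal sketch: 'qemu-img info' under the same resource caps the
    # logged command shows (--as=1073741824, --cpu=30).
    import json
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1024 * 1024 * 1024,
                                        cpu_time=30)
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=limits)
    print(json.loads(out)['format'])
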
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.245 244018 DEBUG oslo_concurrency.lockutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.246 244018 DEBUG oslo_concurrency.lockutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.246 244018 DEBUG oslo_concurrency.lockutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.269 244018 DEBUG nova.storage.rbd_utils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image e954c936-91fe-4aa5-8c91-78ec08c85221_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.273 244018 DEBUG oslo_concurrency.processutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 e954c936-91fe-4aa5-8c91-78ec08c85221_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:30:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2909746229' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.534 244018 DEBUG oslo_concurrency.processutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
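
That 0.572 s "ceph df" call is what the ceph-mon audit lines above record as a mon_command from client.openstack. The same query can be issued without the CLI through the python-rados bindings; a sketch assuming the same conffile and client id:

    # Minimal sketch: send the same {"prefix": "df", "format": "json"}
    # mon command the CLI sends, as client.openstack.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    try:
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({'prefix': 'df', 'format': 'json'}), b'')
        stats = json.loads(outbuf)
        # nova's disk accounting reads cluster/pool usage out of this.
        print(stats['stats']['total_bytes'],
              stats['stats']['total_avail_bytes'])
    finally:
        cluster.shutdown()
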
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.542 244018 DEBUG nova.compute.provider_tree [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.561 244018 DEBUG nova.scheduler.client.report [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
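
Usable capacity follows from that inventory as (total - reserved) × allocation_ratio per resource class; a worked example with the logged numbers:

    # Worked example: effective schedulable capacity from the inventory above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, usable)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
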
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.594 244018 DEBUG oslo_concurrency.lockutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.596 244018 DEBUG nova.compute.manager [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.655 244018 DEBUG nova.compute.manager [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.674 244018 INFO nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.691 244018 DEBUG nova.network.neutron [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Successfully updated port: e61932f1-9a36-4c95-a52f-470c182ac70f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.695 244018 DEBUG nova.compute.manager [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.704 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Acquiring lock "refresh_cache-7715b665-70af-4b84-b8a3-85ccea6ab805" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.704 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Acquired lock "refresh_cache-7715b665-70af-4b84-b8a3-85ccea6ab805" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.705 244018 DEBUG nova.network.neutron [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.745 244018 DEBUG oslo_concurrency.processutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 7715b665-70af-4b84-b8a3-85ccea6ab805_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.781 244018 DEBUG nova.compute.manager [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.782 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.783 244018 INFO nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Creating image(s)#033[00m
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.804 244018 DEBUG nova.storage.rbd_utils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.827 244018 DEBUG nova.storage.rbd_utils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.845 244018 DEBUG nova.storage.rbd_utils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.848 244018 DEBUG oslo_concurrency.processutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:55 np0005629333 nova_compute[244014]: 2026-02-25 12:30:55.906 244018 DEBUG nova.storage.rbd_utils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] resizing rbd image 7715b665-70af-4b84-b8a3-85ccea6ab805_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 25 07:30:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 279 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Feb 25 07:30:56 np0005629333 nova_compute[244014]: 2026-02-25 12:30:56.011 244018 DEBUG oslo_concurrency.processutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:30:56 np0005629333 nova_compute[244014]: 2026-02-25 12:30:56.012 244018 DEBUG oslo_concurrency.lockutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:56 np0005629333 nova_compute[244014]: 2026-02-25 12:30:56.013 244018 DEBUG oslo_concurrency.lockutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:56 np0005629333 nova_compute[244014]: 2026-02-25 12:30:56.013 244018 DEBUG oslo_concurrency.lockutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:56 np0005629333 nova_compute[244014]: 2026-02-25 12:30:56.090 244018 DEBUG nova.storage.rbd_utils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:30:56 np0005629333 nova_compute[244014]: 2026-02-25 12:30:56.094 244018 DEBUG oslo_concurrency.processutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 9b9beb6f-4642-40d1-b9e5-96345c682341_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
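
The repeated "does not exist" probes and the "rbd import" that follows are nova's rbd_utils checking the vms pool before seeding the disk from the cached base image; the earlier resize line then grows an imported disk to the flavor's 1 GiB root size. A sketch of the probe-and-resize half with the python rbd bindings, using the pool, client id, and size from the log:

    # Minimal sketch of what the rbd_utils lines correspond to: probe for
    # an image in the 'vms' pool and, if present, resize it to 1 GiB
    # (matching the logged resize to 1073741824 bytes).
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        with rbd.Image(ioctx, '7715b665-70af-4b84-b8a3-85ccea6ab805_disk') as img:
            img.resize(1073741824)
    except rbd.ImageNotFound:
        # The condition rbd_utils logs as 'rbd image ... does not exist'
        # before falling back to 'rbd import'.
        print('image not there yet')
    finally:
        ioctx.close()
        cluster.shutdown()
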
Feb 25 07:30:56 np0005629333 nova_compute[244014]: 2026-02-25 12:30:56.124 244018 DEBUG nova.network.neutron [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:30:56 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3ad8cc0134873527b474d022df82b4563639cec7ba0bd708d828106c9a4e2f4b-userdata-shm.mount: Deactivated successfully.
Feb 25 07:30:56 np0005629333 systemd[1]: var-lib-containers-storage-overlay-eb79d7d65570a7eb27f03f2a9bdb296e9b93b584c351901416e7c35c831aebbc-merged.mount: Deactivated successfully.
Feb 25 07:30:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:30:56 np0005629333 podman[306734]: 2026-02-25 12:30:56.818191477 +0000 UTC m=+2.084587315 container cleanup 3ad8cc0134873527b474d022df82b4563639cec7ba0bd708d828106c9a4e2f4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 07:30:56 np0005629333 systemd[1]: libpod-conmon-3ad8cc0134873527b474d022df82b4563639cec7ba0bd708d828106c9a4e2f4b.scope: Deactivated successfully.
Feb 25 07:30:56 np0005629333 nova_compute[244014]: 2026-02-25 12:30:56.841 244018 DEBUG nova.network.neutron [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Updating instance_info_cache with network_info: [{"id": "e61932f1-9a36-4c95-a52f-470c182ac70f", "address": "fa:16:3e:38:25:49", "network": {"id": "2f200cab-355f-4832-b410-50e7782a19b7", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-633209158-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7f30b1d5a1f4604bb44f655b6be0571", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61932f1-9a", "ovs_interfaceid": "e61932f1-9a36-4c95-a52f-470c182ac70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:30:56 np0005629333 nova_compute[244014]: 2026-02-25 12:30:56.864 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Releasing lock "refresh_cache-7715b665-70af-4b84-b8a3-85ccea6ab805" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:30:56 np0005629333 nova_compute[244014]: 2026-02-25 12:30:56.865 244018 DEBUG nova.compute.manager [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Instance network_info: |[{"id": "e61932f1-9a36-4c95-a52f-470c182ac70f", "address": "fa:16:3e:38:25:49", "network": {"id": "2f200cab-355f-4832-b410-50e7782a19b7", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-633209158-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7f30b1d5a1f4604bb44f655b6be0571", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61932f1-9a", "ovs_interfaceid": "e61932f1-9a36-4c95-a52f-470c182ac70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.199 244018 DEBUG nova.compute.manager [req-7aa60001-e015-4d2a-97f6-727f30426470 req-8128c3ed-8f0d-419f-8c41-c200f61b3805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Received event network-vif-plugged-8c2528a5-82a5-4855-b5eb-dd3a5eba5030 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.200 244018 DEBUG oslo_concurrency.lockutils [req-7aa60001-e015-4d2a-97f6-727f30426470 req-8128c3ed-8f0d-419f-8c41-c200f61b3805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "25eece29-8689-47a8-b930-0492bf528de5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.200 244018 DEBUG oslo_concurrency.lockutils [req-7aa60001-e015-4d2a-97f6-727f30426470 req-8128c3ed-8f0d-419f-8c41-c200f61b3805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.200 244018 DEBUG oslo_concurrency.lockutils [req-7aa60001-e015-4d2a-97f6-727f30426470 req-8128c3ed-8f0d-419f-8c41-c200f61b3805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.201 244018 DEBUG nova.compute.manager [req-7aa60001-e015-4d2a-97f6-727f30426470 req-8128c3ed-8f0d-419f-8c41-c200f61b3805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] No waiting events found dispatching network-vif-plugged-8c2528a5-82a5-4855-b5eb-dd3a5eba5030 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.201 244018 WARNING nova.compute.manager [req-7aa60001-e015-4d2a-97f6-727f30426470 req-8128c3ed-8f0d-419f-8c41-c200f61b3805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Received unexpected event network-vif-plugged-8c2528a5-82a5-4855-b5eb-dd3a5eba5030 for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.201 244018 DEBUG nova.compute.manager [req-7aa60001-e015-4d2a-97f6-727f30426470 req-8128c3ed-8f0d-419f-8c41-c200f61b3805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Received event network-changed-e61932f1-9a36-4c95-a52f-470c182ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.202 244018 DEBUG nova.compute.manager [req-7aa60001-e015-4d2a-97f6-727f30426470 req-8128c3ed-8f0d-419f-8c41-c200f61b3805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Refreshing instance network info cache due to event network-changed-e61932f1-9a36-4c95-a52f-470c182ac70f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.202 244018 DEBUG oslo_concurrency.lockutils [req-7aa60001-e015-4d2a-97f6-727f30426470 req-8128c3ed-8f0d-419f-8c41-c200f61b3805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-7715b665-70af-4b84-b8a3-85ccea6ab805" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.202 244018 DEBUG oslo_concurrency.lockutils [req-7aa60001-e015-4d2a-97f6-727f30426470 req-8128c3ed-8f0d-419f-8c41-c200f61b3805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-7715b665-70af-4b84-b8a3-85ccea6ab805" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.203 244018 DEBUG nova.network.neutron [req-7aa60001-e015-4d2a-97f6-727f30426470 req-8128c3ed-8f0d-419f-8c41-c200f61b3805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Refreshing network info cache for port e61932f1-9a36-4c95-a52f-470c182ac70f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.411 244018 DEBUG nova.objects.instance [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lazy-loading 'migration_context' on Instance uuid 7715b665-70af-4b84-b8a3-85ccea6ab805 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.426 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.426 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Ensure instance console log exists: /var/lib/nova/instances/7715b665-70af-4b84-b8a3-85ccea6ab805/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.427 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.427 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.427 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.430 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Start _get_guest_xml network_info=[{"id": "e61932f1-9a36-4c95-a52f-470c182ac70f", "address": "fa:16:3e:38:25:49", "network": {"id": "2f200cab-355f-4832-b410-50e7782a19b7", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-633209158-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7f30b1d5a1f4604bb44f655b6be0571", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61932f1-9a", "ovs_interfaceid": "e61932f1-9a36-4c95-a52f-470c182ac70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.433 244018 WARNING nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.437 244018 DEBUG nova.virt.libvirt.host [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.438 244018 DEBUG nova.virt.libvirt.host [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.441 244018 DEBUG nova.virt.libvirt.host [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.441 244018 DEBUG nova.virt.libvirt.host [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
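
The host probe above finds no CPU controller on cgroups v1, then finds one on the unified v2 hierarchy. On a v2 host that check reduces to reading a single file; a minimal sketch assuming the standard mount point:

    # Minimal sketch of a cgroups-v2 CPU controller probe: on a unified
    # hierarchy the active controllers are listed in one file.
    def has_cgroupsv2_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
        try:
            with open(path) as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # not a cgroups-v2 (unified) host

    print(has_cgroupsv2_cpu_controller())
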
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.441 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.442 244018 DEBUG nova.virt.hardware [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.442 244018 DEBUG nova.virt.hardware [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.442 244018 DEBUG nova.virt.hardware [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.443 244018 DEBUG nova.virt.hardware [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.443 244018 DEBUG nova.virt.hardware [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.443 244018 DEBUG nova.virt.hardware [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.443 244018 DEBUG nova.virt.hardware [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.443 244018 DEBUG nova.virt.hardware [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.444 244018 DEBUG nova.virt.hardware [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.444 244018 DEBUG nova.virt.hardware [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.444 244018 DEBUG nova.virt.hardware [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
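
The topology walk (flavor and image limits and preferences all 0:0:0, caps of 65536 per dimension) enumerates the sockets:cores:threads factorizations of the vCPU count, which for the 1-vCPU m1.nano flavor collapses to the single topology 1:1:1. An illustrative enumeration of that search space (not nova's actual code):

    # Illustrative: sockets*cores*threads triples that multiply to the vCPU
    # count and respect per-dimension limits. For vcpus=1 the only result
    # is (1, 1, 1), matching the log.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            for cores in range(1, min(vcpus // sockets, max_cores) + 1):
                if (vcpus // sockets) % cores:
                    continue
                threads = vcpus // (sockets * cores)
                if threads <= max_threads:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]
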
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.447 244018 DEBUG oslo_concurrency.processutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:57 np0005629333 podman[307034]: 2026-02-25 12:30:57.516288523 +0000 UTC m=+0.672853142 container remove 3ad8cc0134873527b474d022df82b4563639cec7ba0bd708d828106c9a4e2f4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:30:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:57.521 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ab71f7ed-2a17-4daa-807a-7f6a90cb2712]: (4, ('Wed Feb 25 12:30:54 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77 (3ad8cc0134873527b474d022df82b4563639cec7ba0bd708d828106c9a4e2f4b)\n3ad8cc0134873527b474d022df82b4563639cec7ba0bd708d828106c9a4e2f4b\nWed Feb 25 12:30:56 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77 (3ad8cc0134873527b474d022df82b4563639cec7ba0bd708d828106c9a4e2f4b)\n3ad8cc0134873527b474d022df82b4563639cec7ba0bd708d828106c9a4e2f4b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:57.522 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f7f3139c-ce1f-426d-99df-099c6d8249a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:57.523 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap748b52bb-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.525 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:57 np0005629333 kernel: tap748b52bb-50: left promiscuous mode
Feb 25 07:30:57 np0005629333 nova_compute[244014]: 2026-02-25 12:30:57.543 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:57.545 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6392ca98-469b-4ca6-af25-a80fcc3b8ab5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:57.566 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[07767f53-7d8a-4b69-843c-22d578fe2165]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:57.568 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[47324682-9a42-41cc-b97a-b09523b4c857]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:57.584 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b309c1b0-22b7-4d4c-9400-fd923a6e354a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461779, 'reachable_time': 44390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307070, 'error': None, 'target': 'ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:57.587 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:30:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:30:57.587 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[e861710d-409e-4f73-a64e-bd2c99bed4d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:30:57 np0005629333 systemd[1]: run-netns-ovnmeta\x2d748b52bb\x2d559a\x2d4d75\x2db4f7\x2da397fd5e7e77.mount: Deactivated successfully.
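
The privsep exchange above is the metadata agent tearing down the ovnmeta- namespace: the daemon first dumps the links left in the namespace (only 'lo', returned as an RTM_NEWLINK message with its IFLA_* attributes), then remove_netns deletes the namespace and systemd reaps its run-netns bind mount. A minimal sketch of those two steps with pyroute2, the library neutron's privileged ip_lib wraps; it assumes root privileges and an existing namespace:

    # Sketch: dump links in a named netns, then delete it, as the agent's
    # privsep daemon does above. Requires pyroute2 and CAP_SYS_ADMIN.
    from pyroute2 import NetNS, netns

    ns_name = 'ovnmeta-748b52bb-559a-4d75-b4f7-a397fd5e7e77'

    with NetNS(ns_name) as ip:
        # get_links() returns netlink messages like the RTM_NEWLINK dump
        # above, one per interface (here just 'lo').
        for link in ip.get_links():
            print(link.get_attr('IFLA_IFNAME'), link.get_attr('IFLA_OPERSTATE'))

    # neutron.privileged.agent.linux.ip_lib.remove_netns ends up here;
    # systemd then logs the run-netns mount deactivation seen above.
    netns.remove(ns_name)
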
Feb 25 07:30:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 337 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.9 MiB/s wr, 198 op/s
Feb 25 07:30:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:30:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/735797724' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.041 244018 DEBUG oslo_concurrency.processutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.063 244018 DEBUG nova.storage.rbd_utils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] rbd image 7715b665-70af-4b84-b8a3-85ccea6ab805_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.065 244018 DEBUG oslo_concurrency.processutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.166 244018 DEBUG oslo_concurrency.processutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 e954c936-91fe-4aa5-8c91-78ec08c85221_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.893s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.246 244018 DEBUG nova.storage.rbd_utils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] resizing rbd image e954c936-91fe-4aa5-8c91-78ec08c85221_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
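
While this instance's XML is being built, sibling requests import the cached base image into the vms pool with `rbd import` and then grow it to the flavor's 1 GiB root disk. nova.storage.rbd_utils performs the resize through the python rbd bindings rather than the CLI; a rough sketch, assuming python-rados/python-rbd and a readable client.openstack keyring:

    # Sketch: resize an RBD image to the flavor's root_gb, as
    # rbd_utils.resize does after the "rbd import" above.
    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx,
                           'e954c936-91fe-4aa5-8c91-78ec08c85221_disk') as img:
                img.resize(1 * 1024 ** 3)  # 1073741824 bytes, as logged
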
Feb 25 07:30:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:30:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/674155135' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.699 244018 DEBUG oslo_concurrency.processutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
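
The CMD/Running-cmd pairs are oslo.concurrency's processutils logging each `ceph mon dump --format=json` invocation together with its exit code and wall-clock time; nova's rbd driver parses the JSON to learn the monitor addresses that later appear as <host> elements in the guest XML. A minimal equivalent, assuming oslo.concurrency is installed:

    # Sketch: how the rbd driver discovers monitor addresses. execute()
    # emits the "Running cmd"/"returned: 0 in N s" debug pairs seen above.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    mons = json.loads(out)['mons']
    print([(m['name'], m['addr']) for m in mons])
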
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.700 244018 DEBUG nova.virt.libvirt.vif [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:30:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1810610267',display_name='tempest-ServersTestManualDisk-server-1810610267',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1810610267',id=72,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+ZyZhedZYP+GQcHR1V4WhhTfwoBFUWOed32RhXsQrJwpPpF6RwVYj0ZbkDWlwidrgnyGCLLNNfWgQRbNPFMLe3H8LOerGP8hP+UJf+LIu8QRlzX2YtSutjJfTqBBvhdg==',key_name='tempest-keypair-1369517783',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c7f30b1d5a1f4604bb44f655b6be0571',ramdisk_id='',reservation_id='r-60vlmj26',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-155925755',owner_user_name='tempest-ServersTestManualDisk-155925755-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:30:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='87b7b8b029dc48549d8d5982d7329f63',uuid=7715b665-70af-4b84-b8a3-85ccea6ab805,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e61932f1-9a36-4c95-a52f-470c182ac70f", "address": "fa:16:3e:38:25:49", "network": {"id": "2f200cab-355f-4832-b410-50e7782a19b7", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-633209158-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7f30b1d5a1f4604bb44f655b6be0571", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61932f1-9a", "ovs_interfaceid": 
"e61932f1-9a36-4c95-a52f-470c182ac70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.701 244018 DEBUG nova.network.os_vif_util [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Converting VIF {"id": "e61932f1-9a36-4c95-a52f-470c182ac70f", "address": "fa:16:3e:38:25:49", "network": {"id": "2f200cab-355f-4832-b410-50e7782a19b7", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-633209158-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7f30b1d5a1f4604bb44f655b6be0571", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61932f1-9a", "ovs_interfaceid": "e61932f1-9a36-4c95-a52f-470c182ac70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.702 244018 DEBUG nova.network.os_vif_util [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:25:49,bridge_name='br-int',has_traffic_filtering=True,id=e61932f1-9a36-4c95-a52f-470c182ac70f,network=Network(2f200cab-355f-4832-b410-50e7782a19b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61932f1-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
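
nova_to_osvif_vif translates nova's JSON VIF into the os-vif versioned object printed on the 'Converted object' line. A hand-built sketch of that object using os-vif's public classes; the field values are copied from the log and the Network is trimmed to the fields relevant here, so treat it as illustrative rather than a faithful reconstruction:

    # Sketch: the VIFOpenVSwitch object produced above, constructed
    # directly. Depending on the os-vif version you may need to call
    # os_vif.initialize() before using the plugin machinery.
    from os_vif.objects import network as net_obj
    from os_vif.objects import vif as vif_obj

    network = net_obj.Network(id='2f200cab-355f-4832-b410-50e7782a19b7',
                              bridge='br-int', mtu=1442)
    vif = vif_obj.VIFOpenVSwitch(
        id='e61932f1-9a36-4c95-a52f-470c182ac70f',
        address='fa:16:3e:38:25:49',
        network=network,
        vif_name='tape61932f1-9a',
        bridge_name='br-int',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False)
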
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.704 244018 DEBUG nova.objects.instance [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7715b665-70af-4b84-b8a3-85ccea6ab805 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.712 244018 DEBUG nova.network.neutron [req-7aa60001-e015-4d2a-97f6-727f30426470 req-8128c3ed-8f0d-419f-8c41-c200f61b3805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Updated VIF entry in instance network info cache for port e61932f1-9a36-4c95-a52f-470c182ac70f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.713 244018 DEBUG nova.network.neutron [req-7aa60001-e015-4d2a-97f6-727f30426470 req-8128c3ed-8f0d-419f-8c41-c200f61b3805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Updating instance_info_cache with network_info: [{"id": "e61932f1-9a36-4c95-a52f-470c182ac70f", "address": "fa:16:3e:38:25:49", "network": {"id": "2f200cab-355f-4832-b410-50e7782a19b7", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-633209158-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7f30b1d5a1f4604bb44f655b6be0571", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61932f1-9a", "ovs_interfaceid": "e61932f1-9a36-4c95-a52f-470c182ac70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.729 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:30:58 np0005629333 nova_compute[244014]:  <uuid>7715b665-70af-4b84-b8a3-85ccea6ab805</uuid>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:  <name>instance-00000048</name>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServersTestManualDisk-server-1810610267</nova:name>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:30:57</nova:creationTime>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:30:58 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:        <nova:user uuid="87b7b8b029dc48549d8d5982d7329f63">tempest-ServersTestManualDisk-155925755-project-member</nova:user>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:        <nova:project uuid="c7f30b1d5a1f4604bb44f655b6be0571">tempest-ServersTestManualDisk-155925755</nova:project>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:        <nova:port uuid="e61932f1-9a36-4c95-a52f-470c182ac70f">
Feb 25 07:30:58 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <entry name="serial">7715b665-70af-4b84-b8a3-85ccea6ab805</entry>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <entry name="uuid">7715b665-70af-4b84-b8a3-85ccea6ab805</entry>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/7715b665-70af-4b84-b8a3-85ccea6ab805_disk">
Feb 25 07:30:58 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:30:58 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/7715b665-70af-4b84-b8a3-85ccea6ab805_disk.config">
Feb 25 07:30:58 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:30:58 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:38:25:49"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <target dev="tape61932f1-9a"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/7715b665-70af-4b84-b8a3-85ccea6ab805/console.log" append="off"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:30:58 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:30:58 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:30:58 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:30:58 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
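
That closes the rendered domain XML; the driver now hands the document to libvirt to define and start instance-00000048. A bare-bones sketch with libvirt-python, assuming the XML above was saved to a local file (nova itself goes through its Guest wrapper rather than calling libvirt this directly):

    # Sketch: define and boot the domain rendered above. The file path is
    # illustrative; nova keeps the XML in memory.
    import libvirt

    xml = open('/tmp/instance-00000048.xml').read()
    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)   # persist the domain definition
        dom.createWithFlags(0)      # power on; nova then blocks waiting
                                    # for the network-vif-plugged event
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()
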
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.730 244018 DEBUG nova.compute.manager [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Preparing to wait for external event network-vif-plugged-e61932f1-9a36-4c95-a52f-470c182ac70f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.730 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Acquiring lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.730 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.730 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
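
Before starting the guest, the compute manager registers interest in network-vif-plugged; the three lock lines are the per-instance "<uuid>-events" critical section that guards the event registry. The same pattern with oslo.concurrency, sketched:

    # Sketch: the "<uuid>-events" lock seen above. synchronized() emits
    # the same Acquiring/acquired/released debug lines.
    from oslo_concurrency import lockutils

    instance_uuid = '7715b665-70af-4b84-b8a3-85ccea6ab805'

    @lockutils.synchronized(instance_uuid + '-events')
    def _create_or_get_event():
        # nova stores a threading.Event here so the driver can block in
        # wait_for_instance_event() until neutron reports the VIF as up.
        pass

    _create_or_get_event()
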
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.731 244018 DEBUG nova.virt.libvirt.vif [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:30:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1810610267',display_name='tempest-ServersTestManualDisk-server-1810610267',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1810610267',id=72,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+ZyZhedZYP+GQcHR1V4WhhTfwoBFUWOed32RhXsQrJwpPpF6RwVYj0ZbkDWlwidrgnyGCLLNNfWgQRbNPFMLe3H8LOerGP8hP+UJf+LIu8QRlzX2YtSutjJfTqBBvhdg==',key_name='tempest-keypair-1369517783',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c7f30b1d5a1f4604bb44f655b6be0571',ramdisk_id='',reservation_id='r-60vlmj26',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-155925755',owner_user_name='tempest-ServersTestManualDisk-155925755-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:30:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='87b7b8b029dc48549d8d5982d7329f63',uuid=7715b665-70af-4b84-b8a3-85ccea6ab805,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e61932f1-9a36-4c95-a52f-470c182ac70f", "address": "fa:16:3e:38:25:49", "network": {"id": "2f200cab-355f-4832-b410-50e7782a19b7", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-633209158-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7f30b1d5a1f4604bb44f655b6be0571", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61932f1-9a", "ovs_interfaceid": 
"e61932f1-9a36-4c95-a52f-470c182ac70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.731 244018 DEBUG nova.network.os_vif_util [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Converting VIF {"id": "e61932f1-9a36-4c95-a52f-470c182ac70f", "address": "fa:16:3e:38:25:49", "network": {"id": "2f200cab-355f-4832-b410-50e7782a19b7", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-633209158-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7f30b1d5a1f4604bb44f655b6be0571", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61932f1-9a", "ovs_interfaceid": "e61932f1-9a36-4c95-a52f-470c182ac70f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.732 244018 DEBUG nova.network.os_vif_util [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:25:49,bridge_name='br-int',has_traffic_filtering=True,id=e61932f1-9a36-4c95-a52f-470c182ac70f,network=Network(2f200cab-355f-4832-b410-50e7782a19b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61932f1-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.732 244018 DEBUG os_vif [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:25:49,bridge_name='br-int',has_traffic_filtering=True,id=e61932f1-9a36-4c95-a52f-470c182ac70f,network=Network(2f200cab-355f-4832-b410-50e7782a19b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61932f1-9a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.732 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.733 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.733 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.734 244018 DEBUG oslo_concurrency.lockutils [req-7aa60001-e015-4d2a-97f6-727f30426470 req-8128c3ed-8f0d-419f-8c41-c200f61b3805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-7715b665-70af-4b84-b8a3-85ccea6ab805" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.736 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.736 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape61932f1-9a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.737 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape61932f1-9a, col_values=(('external_ids', {'iface-id': 'e61932f1-9a36-4c95-a52f-470c182ac70f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:25:49', 'vm-uuid': '7715b665-70af-4b84-b8a3-85ccea6ab805'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.738 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:58 np0005629333 NetworkManager[49836]: <info>  [1772022658.7390] manager: (tape61932f1-9a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/294)
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.739 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.744 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.744 244018 INFO os_vif [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:25:49,bridge_name='br-int',has_traffic_filtering=True,id=e61932f1-9a36-4c95-a52f-470c182ac70f,network=Network(2f200cab-355f-4832-b410-50e7782a19b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61932f1-9a')#033[00m
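
The plug itself is three ovsdbapp IDL commands: AddBridgeCommand (a no-op here, hence 'Transaction caused no change', since br-int already exists), AddPortCommand for the tap device, and DbSetCommand writing the external_ids that OVN matches against the logical switch port. A sketch of the same transaction through ovsdbapp's Open_vSwitch schema API; the unix-socket path is an assumption, the values are from the log:

    # Sketch: the AddBridge/AddPort/DbSet sequence issued via ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tape61932f1-9a', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tape61932f1-9a',
            ('external_ids',
             {'iface-id': 'e61932f1-9a36-4c95-a52f-470c182ac70f',
              'iface-status': 'active',
              'attached-mac': 'fa:16:3e:38:25:49',
              'vm-uuid': '7715b665-70af-4b84-b8a3-85ccea6ab805'})))
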
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.914 244018 DEBUG oslo_concurrency.processutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 9b9beb6f-4642-40d1-b9e5-96345c682341_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.819s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:30:58 np0005629333 nova_compute[244014]: 2026-02-25 12:30:58.998 244018 DEBUG nova.storage.rbd_utils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] resizing rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.045 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.046 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.046 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] No VIF found with MAC fa:16:3e:38:25:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.046 244018 INFO nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Using config drive#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.065 244018 DEBUG nova.storage.rbd_utils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] rbd image 7715b665-70af-4b84-b8a3-85ccea6ab805_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.212 244018 DEBUG nova.objects.instance [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lazy-loading 'migration_context' on Instance uuid e954c936-91fe-4aa5-8c91-78ec08c85221 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.243 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.243 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Ensure instance console log exists: /var/lib/nova/instances/e954c936-91fe-4aa5-8c91-78ec08c85221/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.243 244018 DEBUG oslo_concurrency.lockutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.244 244018 DEBUG oslo_concurrency.lockutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.244 244018 DEBUG oslo_concurrency.lockutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.245 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.254 244018 WARNING nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.262 244018 DEBUG nova.virt.libvirt.host [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.262 244018 DEBUG nova.virt.libvirt.host [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.265 244018 DEBUG nova.virt.libvirt.host [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.266 244018 DEBUG nova.virt.libvirt.host [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
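
The driver probes for a CPU controller first on the cgroups v1 hierarchy (missing on this host) and then on the v2 unified hierarchy (found), which decides whether CPU shares can be applied to the guest. The v2 probe reduces to reading one file; a sketch, with the conventional mount point as an assumption:

    # Sketch: the cgroups v2 CPU-controller check logged above.
    def has_cgroupsv2_cpu_controller(
            path='/sys/fs/cgroup/cgroup.controllers'):
        try:
            with open(path) as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # not a cgroups v2 (unified) host

    print(has_cgroupsv2_cpu_controller())
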
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.266 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.266 244018 DEBUG nova.virt.hardware [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.266 244018 DEBUG nova.virt.hardware [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.267 244018 DEBUG nova.virt.hardware [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.267 244018 DEBUG nova.virt.hardware [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.267 244018 DEBUG nova.virt.hardware [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.267 244018 DEBUG nova.virt.hardware [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.267 244018 DEBUG nova.virt.hardware [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.268 244018 DEBUG nova.virt.hardware [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.268 244018 DEBUG nova.virt.hardware [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.268 244018 DEBUG nova.virt.hardware [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.268 244018 DEBUG nova.virt.hardware [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
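
With flavor and image limits all unset (0:0:0 against a 65536 ceiling), topology selection degenerates to enumerating the ways sockets x cores x threads can equal the vCPU count; for one vCPU the only split is 1:1:1, matching the single topology logged. A toy version of that enumeration, ignoring the preference sorting nova also applies:

    # Sketch: enumerate socket/core/thread splits for a vCPU count,
    # mirroring "Build topologies ... Got 1 possible topologies" above.
    def possible_topologies(vcpus):
        for sockets in range(1, vcpus + 1):
            for cores in range(1, vcpus + 1):
                for threads in range(1, vcpus + 1):
                    if sockets * cores * threads == vcpus:
                        yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]
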
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.271 244018 DEBUG oslo_concurrency.processutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.319 244018 DEBUG nova.objects.instance [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b9beb6f-4642-40d1-b9e5-96345c682341 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.336 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.337 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Ensure instance console log exists: /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.337 244018 DEBUG oslo_concurrency.lockutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.337 244018 DEBUG oslo_concurrency.lockutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.338 244018 DEBUG oslo_concurrency.lockutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.339 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.342 244018 WARNING nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.346 244018 DEBUG nova.virt.libvirt.host [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.347 244018 DEBUG nova.virt.libvirt.host [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.348 244018 DEBUG nova.virt.libvirt.host [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.348 244018 DEBUG nova.virt.libvirt.host [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.349 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.349 244018 DEBUG nova.virt.hardware [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.349 244018 DEBUG nova.virt.hardware [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.349 244018 DEBUG nova.virt.hardware [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.350 244018 DEBUG nova.virt.hardware [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.350 244018 DEBUG nova.virt.hardware [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.350 244018 DEBUG nova.virt.hardware [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.350 244018 DEBUG nova.virt.hardware [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.350 244018 DEBUG nova.virt.hardware [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.350 244018 DEBUG nova.virt.hardware [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.350 244018 DEBUG nova.virt.hardware [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.351 244018 DEBUG nova.virt.hardware [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.353 244018 DEBUG oslo_concurrency.processutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.433 244018 INFO nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Creating config drive at /var/lib/nova/instances/7715b665-70af-4b84-b8a3-85ccea6ab805/disk.config#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.438 244018 DEBUG oslo_concurrency.processutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7715b665-70af-4b84-b8a3-85ccea6ab805/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkgllwqmt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.584 244018 DEBUG oslo_concurrency.processutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7715b665-70af-4b84-b8a3-85ccea6ab805/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkgllwqmt" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.682 244018 DEBUG nova.storage.rbd_utils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] rbd image 7715b665-70af-4b84-b8a3-85ccea6ab805_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.687 244018 DEBUG oslo_concurrency.processutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7715b665-70af-4b84-b8a3-85ccea6ab805/disk.config 7715b665-70af-4b84-b8a3-85ccea6ab805_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.729 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.730 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.744 244018 DEBUG nova.compute.manager [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.789 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.810 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.811 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.816 244018 DEBUG nova.virt.hardware [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.817 244018 INFO nova.compute.claims [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:30:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:30:59 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2225140866' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:30:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:30:59 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2232766859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.849 244018 DEBUG oslo_concurrency.processutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.869 244018 DEBUG nova.storage.rbd_utils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image e954c936-91fe-4aa5-8c91-78ec08c85221_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.872 244018 DEBUG oslo_concurrency.processutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.899 244018 DEBUG oslo_concurrency.processutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.927 244018 DEBUG nova.storage.rbd_utils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:30:59 np0005629333 nova_compute[244014]: 2026-02-25 12:30:59.930 244018 DEBUG oslo_concurrency.processutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:30:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 337 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.9 MiB/s wr, 147 op/s
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.162 244018 DEBUG oslo_concurrency.processutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:31:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:31:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1465789387' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.434 244018 DEBUG oslo_concurrency.processutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.437 244018 DEBUG nova.objects.instance [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lazy-loading 'pci_devices' on Instance uuid e954c936-91fe-4aa5-8c91-78ec08c85221 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:31:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:31:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3574508948' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.447 244018 DEBUG oslo_concurrency.processutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7715b665-70af-4b84-b8a3-85ccea6ab805/disk.config 7715b665-70af-4b84-b8a3-85ccea6ab805_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.760s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.448 244018 INFO nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Deleting local config drive /var/lib/nova/instances/7715b665-70af-4b84-b8a3-85ccea6ab805/disk.config because it was imported into RBD.#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.469 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  <uuid>e954c936-91fe-4aa5-8c91-78ec08c85221</uuid>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  <name>instance-00000049</name>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerShowV247Test-server-978520516</nova:name>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:30:59</nova:creationTime>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:        <nova:user uuid="28ac489e13c44a30a1aafde4937f3cff">tempest-ServerShowV247Test-1730972552-project-member</nova:user>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:        <nova:project uuid="e7c1be4d53154ae793a86c8bcc2c5b47">tempest-ServerShowV247Test-1730972552</nova:project>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <nova:ports/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <entry name="serial">e954c936-91fe-4aa5-8c91-78ec08c85221</entry>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <entry name="uuid">e954c936-91fe-4aa5-8c91-78ec08c85221</entry>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/e954c936-91fe-4aa5-8c91-78ec08c85221_disk">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/e954c936-91fe-4aa5-8c91-78ec08c85221_disk.config">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/e954c936-91fe-4aa5-8c91-78ec08c85221/console.log" append="off"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:31:00 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:31:00 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.471 244018 DEBUG oslo_concurrency.processutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.472 244018 DEBUG nova.objects.instance [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b9beb6f-4642-40d1-b9e5-96345c682341 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.490 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  <uuid>9b9beb6f-4642-40d1-b9e5-96345c682341</uuid>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  <name>instance-0000004a</name>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerShowV247Test-server-1350466245</nova:name>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:30:59</nova:creationTime>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:        <nova:user uuid="28ac489e13c44a30a1aafde4937f3cff">tempest-ServerShowV247Test-1730972552-project-member</nova:user>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:        <nova:project uuid="e7c1be4d53154ae793a86c8bcc2c5b47">tempest-ServerShowV247Test-1730972552</nova:project>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <nova:ports/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <entry name="serial">9b9beb6f-4642-40d1-b9e5-96345c682341</entry>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <entry name="uuid">9b9beb6f-4642-40d1-b9e5-96345c682341</entry>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/9b9beb6f-4642-40d1-b9e5-96345c682341_disk">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/9b9beb6f-4642-40d1-b9e5-96345c682341_disk.config">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/console.log" append="off"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:31:00 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:31:00 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:31:00 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:31:00 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.503 244018 INFO nova.virt.libvirt.driver [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Deleting instance files /var/lib/nova/instances/25eece29-8689-47a8-b930-0492bf528de5_del#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.504 244018 INFO nova.virt.libvirt.driver [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Deletion of /var/lib/nova/instances/25eece29-8689-47a8-b930-0492bf528de5_del complete#033[00m
Feb 25 07:31:00 np0005629333 kernel: tape61932f1-9a: entered promiscuous mode
Feb 25 07:31:00 np0005629333 NetworkManager[49836]: <info>  [1772022660.5060] manager: (tape61932f1-9a): new Tun device (/org/freedesktop/NetworkManager/Devices/295)
Feb 25 07:31:00 np0005629333 systemd-udevd[307073]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.509 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.512 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:00 np0005629333 ovn_controller[147040]: 2026-02-25T12:31:00Z|00681|binding|INFO|Claiming lport e61932f1-9a36-4c95-a52f-470c182ac70f for this chassis.
Feb 25 07:31:00 np0005629333 ovn_controller[147040]: 2026-02-25T12:31:00Z|00682|binding|INFO|e61932f1-9a36-4c95-a52f-470c182ac70f: Claiming fa:16:3e:38:25:49 10.100.0.5
Feb 25 07:31:00 np0005629333 NetworkManager[49836]: <info>  [1772022660.5173] device (tape61932f1-9a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:31:00 np0005629333 NetworkManager[49836]: <info>  [1772022660.5179] device (tape61932f1-9a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.525 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:25:49 10.100.0.5'], port_security=['fa:16:3e:38:25:49 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7715b665-70af-4b84-b8a3-85ccea6ab805', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f200cab-355f-4832-b410-50e7782a19b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c7f30b1d5a1f4604bb44f655b6be0571', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dbda8792-bcc5-48a3-8da7-ae12bedd4d63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba165cab-01bd-4b15-a1be-78df84ef667b, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=e61932f1-9a36-4c95-a52f-470c182ac70f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:31:00 np0005629333 ovn_controller[147040]: 2026-02-25T12:31:00Z|00683|binding|INFO|Setting lport e61932f1-9a36-4c95-a52f-470c182ac70f ovn-installed in OVS
Feb 25 07:31:00 np0005629333 ovn_controller[147040]: 2026-02-25T12:31:00Z|00684|binding|INFO|Setting lport e61932f1-9a36-4c95-a52f-470c182ac70f up in Southbound
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.528 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.527 157129 INFO neutron.agent.ovn.metadata.agent [-] Port e61932f1-9a36-4c95-a52f-470c182ac70f in datapath 2f200cab-355f-4832-b410-50e7782a19b7 bound to our chassis#033[00m
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.529 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2f200cab-355f-4832-b410-50e7782a19b7#033[00m
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.536 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b7a37775-1190-4ecb-ac32-0d7f804eeca9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.536 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2f200cab-31 in ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.539 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2f200cab-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.539 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0194b5c0-fa6b-47b7-a937-380be93cc96a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.540 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9851d08e-940d-4746-a6dd-831a51f867c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.553 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[2cbff711-bfa1-4ee3-9a3a-e3aa292f63ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:00 np0005629333 systemd-machined[210048]: New machine qemu-87-instance-00000048.
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.564 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[99b0628f-d279-41b5-a9cc-976ca15ea808]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:00 np0005629333 systemd[1]: Started Virtual Machine qemu-87-instance-00000048.
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.579 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.583 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.584 244018 INFO nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Using config drive#033[00m
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.587 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c077243f-bc02-4fde-81f8-509e34d9aa9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:00 np0005629333 NetworkManager[49836]: <info>  [1772022660.5919] manager: (tap2f200cab-30): new Veth device (/org/freedesktop/NetworkManager/Devices/296)
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.592 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[07528d38-0667-4db6-941f-4220c90c93f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:00 np0005629333 systemd-udevd[307528]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.617 244018 DEBUG nova.storage.rbd_utils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image e954c936-91fe-4aa5-8c91-78ec08c85221_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.619 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3d05860c-ba12-43fd-8b2f-6696c6c1c9d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.624 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[6c8f7896-35b9-46ef-bf08-fb34b7a83fc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:00 np0005629333 NetworkManager[49836]: <info>  [1772022660.6436] device (tap2f200cab-30): carrier: link connected
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.644 244018 INFO nova.compute.manager [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Took 6.46 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.645 244018 DEBUG oslo.service.loopingcall [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.645 244018 DEBUG nova.compute.manager [-] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.645 244018 DEBUG nova.network.neutron [-] [instance: 25eece29-8689-47a8-b930-0492bf528de5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.648 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e7bf8d-0b39-48ac-8eb0-94053955ec96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.652 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.652 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.653 244018 INFO nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Using config drive#033[00m
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.663 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a043d328-618e-4ffa-bb67-967eb617e701]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f200cab-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:99:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463005, 'reachable_time': 43854, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307551, 'error': None, 'target': 'ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.673 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b4014b6b-c5e7-4fdd-9544-278f11312a10]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe55:9962'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 463005, 'tstamp': 463005}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307566, 'error': None, 'target': 'ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.681 244018 DEBUG nova.storage.rbd_utils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.692 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[edad8808-2e7e-4c01-a47d-0766b120576a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f200cab-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:99:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 206], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463005, 'reachable_time': 43854, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307571, 'error': None, 'target': 'ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.718 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8f15001b-f2c9-4d09-9aa0-7cddf29daa7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:31:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3442358198' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.741 244018 DEBUG oslo_concurrency.processutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.745 244018 DEBUG nova.compute.provider_tree [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.768 244018 DEBUG nova.scheduler.client.report [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.787 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d2aa5b42-d7f6-436e-9a4f-7b4447a1eb33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.789 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f200cab-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.790 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.790 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f200cab-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:31:00 np0005629333 kernel: tap2f200cab-30: entered promiscuous mode
Feb 25 07:31:00 np0005629333 NetworkManager[49836]: <info>  [1772022660.7936] manager: (tap2f200cab-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/297)
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.792 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.796 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2f200cab-30, col_values=(('external_ids', {'iface-id': '87b3311c-e91e-486e-ab14-ba8c3046ee03'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:31:00 np0005629333 ovn_controller[147040]: 2026-02-25T12:31:00Z|00685|binding|INFO|Releasing lport 87b3311c-e91e-486e-ab14-ba8c3046ee03 from this chassis (sb_readonly=0)
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.798 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.987s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.799 244018 DEBUG nova.compute.manager [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.801 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2f200cab-355f-4832-b410-50e7782a19b7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2f200cab-355f-4832-b410-50e7782a19b7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.802 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[73bf487b-2001-4b0e-a75b-4772d7ae1fab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.803 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-2f200cab-355f-4832-b410-50e7782a19b7
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/2f200cab-355f-4832-b410-50e7782a19b7.pid.haproxy
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 2f200cab-355f-4832-b410-50e7782a19b7
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:31:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:00.804 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7', 'env', 'PROCESS_TAG=haproxy-2f200cab-355f-4832-b410-50e7782a19b7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2f200cab-355f-4832-b410-50e7782a19b7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.804 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.848 244018 DEBUG nova.compute.manager [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.849 244018 DEBUG nova.network.neutron [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.869 244018 INFO nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.890 244018 INFO nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Creating config drive at /var/lib/nova/instances/e954c936-91fe-4aa5-8c91-78ec08c85221/disk.config#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.894 244018 DEBUG oslo_concurrency.processutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e954c936-91fe-4aa5-8c91-78ec08c85221/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_ucae5xe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.923 244018 INFO nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Creating config drive at /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/disk.config#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.926 244018 DEBUG oslo_concurrency.processutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp25k7oviy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.954 244018 DEBUG nova.compute.manager [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.958 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022660.9372306, 7715b665-70af-4b84-b8a3-85ccea6ab805 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.958 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] VM Started (Lifecycle Event)#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.988 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.993 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022660.9373624, 7715b665-70af-4b84-b8a3-85ccea6ab805 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:31:00 np0005629333 nova_compute[244014]: 2026-02-25 12:31:00.993 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.021 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.025 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.030 244018 DEBUG oslo_concurrency.processutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e954c936-91fe-4aa5-8c91-78ec08c85221/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_ucae5xe" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.053 244018 DEBUG nova.storage.rbd_utils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image e954c936-91fe-4aa5-8c91-78ec08c85221_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.056 244018 DEBUG oslo_concurrency.processutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e954c936-91fe-4aa5-8c91-78ec08c85221/disk.config e954c936-91fe-4aa5-8c91-78ec08c85221_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.078 244018 DEBUG oslo_concurrency.processutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp25k7oviy" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.100 244018 DEBUG nova.storage.rbd_utils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.107 244018 DEBUG oslo_concurrency.processutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/disk.config 9b9beb6f-4642-40d1-b9e5-96345c682341_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.182 244018 DEBUG oslo_concurrency.processutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e954c936-91fe-4aa5-8c91-78ec08c85221/disk.config e954c936-91fe-4aa5-8c91-78ec08c85221_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.184 244018 INFO nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Deleting local config drive /var/lib/nova/instances/e954c936-91fe-4aa5-8c91-78ec08c85221/disk.config because it was imported into RBD.#033[00m
Feb 25 07:31:01 np0005629333 podman[307706]: 2026-02-25 12:31:01.199969571 +0000 UTC m=+0.058468559 container create 1ec6ee91f3c3be0dfe98ab540b248e251c3c364e30b34d7208b6e6af4cede795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.207 244018 DEBUG nova.compute.manager [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.211 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.212 244018 INFO nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Creating image(s)#033[00m
Feb 25 07:31:01 np0005629333 systemd[1]: Started libpod-conmon-1ec6ee91f3c3be0dfe98ab540b248e251c3c364e30b34d7208b6e6af4cede795.scope.
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.239 244018 DEBUG nova.storage.rbd_utils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:31:01 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:31:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/562f19f351018e8168f67d2f0ec9c0e4adfbd26e742451e2aa28d80d269eda69/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:31:01 np0005629333 podman[307706]: 2026-02-25 12:31:01.167895242 +0000 UTC m=+0.026394310 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.264 244018 DEBUG nova.storage.rbd_utils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:31:01 np0005629333 podman[307706]: 2026-02-25 12:31:01.271339475 +0000 UTC m=+0.129838493 container init 1ec6ee91f3c3be0dfe98ab540b248e251c3c364e30b34d7208b6e6af4cede795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 07:31:01 np0005629333 podman[307706]: 2026-02-25 12:31:01.276989685 +0000 UTC m=+0.135488673 container start 1ec6ee91f3c3be0dfe98ab540b248e251c3c364e30b34d7208b6e6af4cede795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.291 244018 DEBUG nova.storage.rbd_utils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.295 244018 DEBUG oslo_concurrency.processutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:31:01 np0005629333 systemd-machined[210048]: New machine qemu-88-instance-00000049.
Feb 25 07:31:01 np0005629333 neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7[307765]: [NOTICE]   (307803) : New worker (307813) forked
Feb 25 07:31:01 np0005629333 neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7[307765]: [NOTICE]   (307803) : Loading success.
Feb 25 07:31:01 np0005629333 systemd[1]: Started Virtual Machine qemu-88-instance-00000049.
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.319 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.320 244018 DEBUG oslo_concurrency.processutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/disk.config 9b9beb6f-4642-40d1-b9e5-96345c682341_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.320 244018 INFO nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Deleting local config drive /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/disk.config because it was imported into RBD.#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.363 244018 DEBUG oslo_concurrency.processutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.363 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.364 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.364 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.382 244018 DEBUG nova.storage.rbd_utils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.385 244018 DEBUG oslo_concurrency.processutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.492 244018 DEBUG nova.policy [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b63928451c6a4137bb65e25561326aff', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8315f545d21f4f8d9a43d810f50e7b78', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:31:01 np0005629333 systemd-machined[210048]: New machine qemu-89-instance-0000004a.
Feb 25 07:31:01 np0005629333 systemd[1]: Started Virtual Machine qemu-89-instance-0000004a.
Feb 25 07:31:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:31:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:31:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:31:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:31:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:31:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.718 244018 DEBUG oslo_concurrency.processutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:31:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.788 244018 DEBUG nova.storage.rbd_utils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] resizing rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.888 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022661.8566487, e954c936-91fe-4aa5-8c91-78ec08c85221 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.888 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.890 244018 DEBUG nova.compute.manager [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.890 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.894 244018 DEBUG nova.objects.instance [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'migration_context' on Instance uuid 3f143481-f5f9-45d3-9d0b-b66e77ee0714 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.897 244018 INFO nova.virt.libvirt.driver [-] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Instance spawned successfully.#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.897 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:31:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 376 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.7 MiB/s wr, 224 op/s
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.933 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.934 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.935 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Ensure instance console log exists: /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.935 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.935 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.935 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.940 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.943 244018 DEBUG nova.compute.manager [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.943 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.944 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.945 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.945 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.945 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.946 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.946 244018 DEBUG nova.virt.libvirt.driver [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.952 244018 DEBUG nova.compute.manager [req-88fccf28-523c-4c82-9d7a-5d3f8091e5b9 req-41746def-6d6a-422a-9bec-e8f9efe937bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Received event network-vif-plugged-e61932f1-9a36-4c95-a52f-470c182ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.953 244018 DEBUG oslo_concurrency.lockutils [req-88fccf28-523c-4c82-9d7a-5d3f8091e5b9 req-41746def-6d6a-422a-9bec-e8f9efe937bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.953 244018 DEBUG oslo_concurrency.lockutils [req-88fccf28-523c-4c82-9d7a-5d3f8091e5b9 req-41746def-6d6a-422a-9bec-e8f9efe937bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.953 244018 DEBUG oslo_concurrency.lockutils [req-88fccf28-523c-4c82-9d7a-5d3f8091e5b9 req-41746def-6d6a-422a-9bec-e8f9efe937bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.953 244018 DEBUG nova.compute.manager [req-88fccf28-523c-4c82-9d7a-5d3f8091e5b9 req-41746def-6d6a-422a-9bec-e8f9efe937bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Processing event network-vif-plugged-e61932f1-9a36-4c95-a52f-470c182ac70f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.954 244018 DEBUG nova.compute.manager [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.954 244018 INFO nova.virt.libvirt.driver [-] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Instance spawned successfully.#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.955 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.957 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.959 244018 INFO nova.virt.libvirt.driver [-] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Instance spawned successfully.#033[00m
Feb 25 07:31:01 np0005629333 nova_compute[244014]: 2026-02-25 12:31:01.959 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.015 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.016 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.017 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.017 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.017 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.018 244018 DEBUG nova.virt.libvirt.driver [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.021 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.022 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022661.8569694, e954c936-91fe-4aa5-8c91-78ec08c85221 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.022 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] VM Started (Lifecycle Event)
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.036 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.036 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.037 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.037 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.038 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.039 244018 DEBUG nova.virt.libvirt.driver [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
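
The defaults found above (SATA CD-ROM, virtio disk/video/VIF, USB input, usbtablet pointer) are the typical choices for a q35 guest: q35 has no IDE controller, so CD-ROMs fall back to SATA, and everything else favours paravirtual virtio devices. A hedged sketch of how such a lookup could be shaped; the table itself is illustrative, not nova's code:

    # Sketch: machine-type-aware defaults matching the values in the log above.
    def default_for(prop, machine_type='q35'):
        table = {
            # q35 lacks IDE, hence SATA; older 'pc' machines would use IDE
            'hw_cdrom_bus': 'sata' if machine_type == 'q35' else 'ide',
            'hw_disk_bus': 'virtio',
            'hw_input_bus': 'usb',
            'hw_pointer_model': 'usbtablet',
            'hw_video_model': 'virtio',
            'hw_vif_model': 'virtio',
        }
        return table.get(prop)

    assert default_for('hw_cdrom_bus') == 'sata'
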
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.087 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.096 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.118 244018 INFO nova.compute.manager [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Took 7.13 seconds to spawn the instance on the hypervisor.
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.118 244018 DEBUG nova.compute.manager [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.132 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.132 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022661.930519, 9b9beb6f-4642-40d1-b9e5-96345c682341 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.132 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] VM Resumed (Lifecycle Event)
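
The pattern in the lines above: a lifecycle event arrives from libvirt, the manager compares DB and hypervisor power state, but while a task such as 'spawning' is still in flight it deliberately skips the update and lets the spawn path finish. A simplified stand-in for that decision (not nova's handle_lifecycle_event):

    # Sketch: power-state sync is a no-op while task_state is set; otherwise
    # the DB converges on the hypervisor's view.
    def sync_power_state(task_state, db_power_state, vm_power_state):
        if task_state is not None:
            return db_power_state, 'skipped: pending task (%s)' % task_state
        return vm_power_state, 'synchronized'

    # matches the log: vm_state building, task_state spawning, DB 0, VM 1
    assert sync_power_state('spawning', 0, 1) == (0, 'skipped: pending task (spawning)')
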
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.148 244018 INFO nova.compute.manager [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Took 6.37 seconds to spawn the instance on the hypervisor.
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.148 244018 DEBUG nova.compute.manager [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.171 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.174 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.204 244018 INFO nova.compute.manager [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Took 8.81 seconds to spawn the instance on the hypervisor.
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.204 244018 DEBUG nova.compute.manager [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.210 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.210 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022661.9440706, 9b9beb6f-4642-40d1-b9e5-96345c682341 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.211 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] VM Started (Lifecycle Event)
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.213 244018 INFO nova.compute.manager [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Took 8.60 seconds to build instance.
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.223 244018 INFO nova.compute.manager [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Took 8.47 seconds to build instance.
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.540 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.544 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.550 244018 DEBUG oslo_concurrency.lockutils [None req-9fe9016a-b32a-4f1f-8274-35dda3d4ccfc 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "e954c936-91fe-4aa5-8c91-78ec08c85221" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.559 244018 DEBUG oslo_concurrency.lockutils [None req-2d6dfbf0-ee71-4904-a261-16bbb40cb45c 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "9b9beb6f-4642-40d1-b9e5-96345c682341" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
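
The lockutils lines above show per-instance named locks around the whole build, with "waited"/"held" timing printed at acquire and release. A stand-in rebuilt with the standard library to show the bookkeeping shape; this is illustrative, not the oslo.concurrency implementation:

    # Sketch: a named-lock decorator that reports wait and hold times in the
    # same style as the lockutils lines above.
    import threading
    import time
    from collections import defaultdict

    _locks = defaultdict(threading.Lock)     # one lock per name, e.g. instance uuid

    def synchronized(name):
        def wrap(fn):
            def inner(*args, **kwargs):
                t0 = time.monotonic()
                with _locks[name]:
                    print(f'Lock "{name}" acquired :: waited {time.monotonic() - t0:.3f}s')
                    t1 = time.monotonic()
                    try:
                        return fn(*args, **kwargs)
                    finally:
                        print(f'Lock "{name}" released :: held {time.monotonic() - t1:.3f}s')
            return inner
        return wrap
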
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.583 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022661.9685843, 7715b665-70af-4b84-b8a3-85ccea6ab805 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.583 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] VM Resumed (Lifecycle Event)
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.603 244018 INFO nova.compute.manager [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Took 10.26 seconds to build instance.
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.610 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.614 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.656 244018 DEBUG oslo_concurrency.lockutils [None req-b686389a-8322-403a-8bd8-be258d238a45 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.402s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.755 244018 DEBUG nova.network.neutron [-] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.773 244018 INFO nova.compute.manager [-] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Took 2.13 seconds to deallocate network for instance.
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.846 244018 DEBUG oslo_concurrency.lockutils [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.847 244018 DEBUG oslo_concurrency.lockutils [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.855 244018 DEBUG nova.network.neutron [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Successfully created port: a308a2fe-7f22-4520-8be1-36876d0d5361 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:31:02 np0005629333 nova_compute[244014]: 2026-02-25 12:31:02.997 244018 DEBUG oslo_concurrency.processutils [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:31:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:31:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3580024248' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:31:03 np0005629333 nova_compute[244014]: 2026-02-25 12:31:03.541 244018 DEBUG oslo_concurrency.processutils [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
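
The resource tracker shells out to the exact command logged above to size Ceph-backed storage. A minimal sketch of issuing that call and reading back cluster capacity; the JSON field names follow the common "ceph df" layout ("stats" with total/avail byte counters) and should be treated as assumptions if your Ceph release differs:

    # Sketch: run "ceph df --format=json" (same flags as the log) and extract
    # total and available capacity in GiB.
    import json
    import subprocess

    def ceph_capacity_gib(ceph_id='openstack', conf='/etc/ceph/ceph.conf'):
        out = subprocess.check_output(
            ['ceph', 'df', '--format=json', '--id', ceph_id, '--conf', conf])
        stats = json.loads(out)['stats']      # assumed top-level key
        gib = 1024 ** 3
        return stats['total_bytes'] // gib, stats['total_avail_bytes'] // gib
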
Feb 25 07:31:03 np0005629333 nova_compute[244014]: 2026-02-25 12:31:03.548 244018 DEBUG nova.compute.provider_tree [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:31:03 np0005629333 nova_compute[244014]: 2026-02-25 12:31:03.575 244018 DEBUG nova.scheduler.client.report [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
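
"Inventory has not changed" works because the provider's inventory is a plain dict keyed by resource class, so deciding whether to re-report to placement reduces to dict equality against the last payload. The values below are copied from the log line above; the helper is illustrative:

    # Sketch: skip the placement update when the inventory payload is unchanged.
    def inventory_changed(previous, current):
        return previous != current            # dict equality is already deep

    reported = {
        'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8,
                 'step_size': 1, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1,
                      'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59,
                    'step_size': 1, 'allocation_ratio': 0.9},
    }
    assert not inventory_changed(reported, dict(reported))
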
Feb 25 07:31:03 np0005629333 nova_compute[244014]: 2026-02-25 12:31:03.602 244018 DEBUG oslo_concurrency.lockutils [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:31:03 np0005629333 nova_compute[244014]: 2026-02-25 12:31:03.656 244018 INFO nova.scheduler.client.report [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Deleted allocations for instance 25eece29-8689-47a8-b930-0492bf528de5
Feb 25 07:31:03 np0005629333 nova_compute[244014]: 2026-02-25 12:31:03.726 244018 DEBUG oslo_concurrency.lockutils [None req-939febf3-43bb-4325-9d40-fa98c7fcaa36 7f34eea4e6284ff7af83727c45d504ae 1f3f7599abe54e879797365670ae88f0 - - default default] Lock "25eece29-8689-47a8-b930-0492bf528de5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:31:03 np0005629333 nova_compute[244014]: 2026-02-25 12:31:03.740 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:31:03 np0005629333 nova_compute[244014]: 2026-02-25 12:31:03.831 244018 DEBUG nova.network.neutron [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Successfully updated port: a308a2fe-7f22-4520-8be1-36876d0d5361 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:31:03 np0005629333 nova_compute[244014]: 2026-02-25 12:31:03.849 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "refresh_cache-3f143481-f5f9-45d3-9d0b-b66e77ee0714" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:31:03 np0005629333 nova_compute[244014]: 2026-02-25 12:31:03.850 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquired lock "refresh_cache-3f143481-f5f9-45d3-9d0b-b66e77ee0714" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:31:03 np0005629333 nova_compute[244014]: 2026-02-25 12:31:03.850 244018 DEBUG nova.network.neutron [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:31:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 396 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 6.1 MiB/s wr, 183 op/s
Feb 25 07:31:04 np0005629333 nova_compute[244014]: 2026-02-25 12:31:04.046 244018 DEBUG nova.network.neutron [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:31:04 np0005629333 nova_compute[244014]: 2026-02-25 12:31:04.261 244018 DEBUG nova.compute.manager [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Received event network-vif-plugged-e61932f1-9a36-4c95-a52f-470c182ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:31:04 np0005629333 nova_compute[244014]: 2026-02-25 12:31:04.262 244018 DEBUG oslo_concurrency.lockutils [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:31:04 np0005629333 nova_compute[244014]: 2026-02-25 12:31:04.262 244018 DEBUG oslo_concurrency.lockutils [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:31:04 np0005629333 nova_compute[244014]: 2026-02-25 12:31:04.263 244018 DEBUG oslo_concurrency.lockutils [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:31:04 np0005629333 nova_compute[244014]: 2026-02-25 12:31:04.264 244018 DEBUG nova.compute.manager [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] No waiting events found dispatching network-vif-plugged-e61932f1-9a36-4c95-a52f-470c182ac70f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:31:04 np0005629333 nova_compute[244014]: 2026-02-25 12:31:04.264 244018 WARNING nova.compute.manager [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Received unexpected event network-vif-plugged-e61932f1-9a36-4c95-a52f-470c182ac70f for instance with vm_state active and task_state None.
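
The pop_instance_event sequence above is a waiter registry: code paths that expect an external event (such as network-vif-plugged) register a waiter per (instance, event) pair, and the event handler pops and signals it under a per-instance "-events" lock. When nothing registered the event, nova logs the "unexpected event" warning seen above, which is what happens here because the spawn already finished. A simplified stand-in, not nova's real classes:

    # Sketch: per-(instance, event) waiters signalled by the external-event path.
    import threading

    _waiters = {}                  # (instance_uuid, event_name) -> threading.Event
    _waiters_lock = threading.Lock()

    def prepare_for_event(instance_uuid, event_name):
        ev = threading.Event()
        with _waiters_lock:
            _waiters[(instance_uuid, event_name)] = ev
        return ev                  # caller blocks on ev.wait(timeout)

    def pop_instance_event(instance_uuid, event_name):
        with _waiters_lock:        # plays the role of the "-events" lock above
            ev = _waiters.pop((instance_uuid, event_name), None)
        if ev is None:
            print(f'Received unexpected event {event_name} for {instance_uuid}')
        else:
            ev.set()               # wake the thread waiting on this event
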
Feb 25 07:31:04 np0005629333 nova_compute[244014]: 2026-02-25 12:31:04.265 244018 DEBUG nova.compute.manager [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Received event network-vif-deleted-8c2528a5-82a5-4855-b5eb-dd3a5eba5030 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:31:04 np0005629333 nova_compute[244014]: 2026-02-25 12:31:04.265 244018 DEBUG nova.compute.manager [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received event network-changed-a308a2fe-7f22-4520-8be1-36876d0d5361 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:31:04 np0005629333 nova_compute[244014]: 2026-02-25 12:31:04.266 244018 DEBUG nova.compute.manager [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Refreshing instance network info cache due to event network-changed-a308a2fe-7f22-4520-8be1-36876d0d5361. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:31:04 np0005629333 nova_compute[244014]: 2026-02-25 12:31:04.269 244018 DEBUG oslo_concurrency.lockutils [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-3f143481-f5f9-45d3-9d0b-b66e77ee0714" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:31:04 np0005629333 nova_compute[244014]: 2026-02-25 12:31:04.295 244018 INFO nova.compute.manager [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Rebuilding instance
Feb 25 07:31:04 np0005629333 nova_compute[244014]: 2026-02-25 12:31:04.562 244018 DEBUG nova.objects.instance [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9b9beb6f-4642-40d1-b9e5-96345c682341 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:31:04 np0005629333 nova_compute[244014]: 2026-02-25 12:31:04.585 244018 DEBUG nova.compute.manager [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:31:04 np0005629333 nova_compute[244014]: 2026-02-25 12:31:04.641 244018 DEBUG nova.objects.instance [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lazy-loading 'pci_requests' on Instance uuid 9b9beb6f-4642-40d1-b9e5-96345c682341 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:31:04 np0005629333 nova_compute[244014]: 2026-02-25 12:31:04.655 244018 DEBUG nova.objects.instance [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b9beb6f-4642-40d1-b9e5-96345c682341 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:31:04 np0005629333 nova_compute[244014]: 2026-02-25 12:31:04.667 244018 DEBUG nova.objects.instance [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lazy-loading 'resources' on Instance uuid 9b9beb6f-4642-40d1-b9e5-96345c682341 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:31:04 np0005629333 nova_compute[244014]: 2026-02-25 12:31:04.682 244018 DEBUG nova.objects.instance [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b9beb6f-4642-40d1-b9e5-96345c682341 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:31:04 np0005629333 nova_compute[244014]: 2026-02-25 12:31:04.700 244018 DEBUG nova.objects.instance [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 25 07:31:04 np0005629333 nova_compute[244014]: 2026-02-25 12:31:04.704 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
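
The run of "Lazy-loading '<field>' on Instance" lines during the rebuild comes from an object model that fetches fields omitted from the original query on first attribute access. A minimal __getattr__-based version of the same idea; the DB fetch is a stand-in, and this is not nova's obj_load_attr:

    # Sketch: load-on-first-access fields, cached after the first read.
    class LazyInstance:
        LAZY_FIELDS = {'trusted_certs', 'pci_requests', 'pci_devices',
                       'resources', 'migration_context'}

        def __init__(self, uuid):
            self.uuid = uuid

        def __getattr__(self, name):          # only called for missing attrs
            if name in self.LAZY_FIELDS:
                print("Lazy-loading '%s' on Instance uuid %s" % (name, self.uuid))
                value = None                  # stand-in for a real DB round trip
                setattr(self, name, value)    # cache so later reads skip the DB
                return value
            raise AttributeError(name)

    inst = LazyInstance('9b9beb6f-4642-40d1-b9e5-96345c682341')
    inst.trusted_certs                        # triggers one lazy-load line
    inst.trusted_certs                        # second read is served from cache
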
Feb 25 07:31:04 np0005629333 nova_compute[244014]: 2026-02-25 12:31:04.790 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.141 244018 DEBUG nova.network.neutron [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Updating instance_info_cache with network_info: [{"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.158 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Releasing lock "refresh_cache-3f143481-f5f9-45d3-9d0b-b66e77ee0714" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.159 244018 DEBUG nova.compute.manager [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Instance network_info: |[{"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.159 244018 DEBUG oslo_concurrency.lockutils [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-3f143481-f5f9-45d3-9d0b-b66e77ee0714" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.160 244018 DEBUG nova.network.neutron [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Refreshing network info cache for port a308a2fe-7f22-4520-8be1-36876d0d5361 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.162 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Start _get_guest_xml network_info=[{"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
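
The disk_info printed in the _get_guest_xml line above maps the image-backed root disk to virtio as vda and the config-drive ISO to SATA as sda, matching the defaults registered earlier. A small helper that reproduces exactly that structure; illustrative only, nova derives it in its blockinfo module:

    # Sketch: assemble the disk_info mapping seen in the log line above.
    def build_disk_mapping(disk_bus='virtio', cdrom_bus='sata'):
        root = {'bus': disk_bus, 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}
        return {
            'disk_bus': disk_bus,
            'cdrom_bus': cdrom_bus,
            'mapping': {
                'root': root,
                'disk': dict(root),          # image-backed root disk alias
                'disk.config': {'bus': cdrom_bus, 'dev': 'sda', 'type': 'cdrom'},
            },
        }

    assert build_disk_mapping()['mapping']['disk.config']['dev'] == 'sda'
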
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.166 244018 WARNING nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.171 244018 DEBUG nova.virt.libvirt.host [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.172 244018 DEBUG nova.virt.libvirt.host [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.175 244018 DEBUG nova.virt.libvirt.host [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.175 244018 DEBUG nova.virt.libvirt.host [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
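
The v1/v2 probe above reflects how the two cgroup layouts differ: a cgroups-v2 (unified) host lists its available controllers in cgroup.controllers, while v1 exposes one directory per controller. On this host the v1 check misses and the v2 check hits. A minimal sketch of the same detection, using the conventional mount point:

    # Sketch: find a 'cpu' controller under either cgroup layout.
    import os

    def has_cpu_controller(root='/sys/fs/cgroup'):
        v2_list = os.path.join(root, 'cgroup.controllers')
        if os.path.exists(v2_list):                       # cgroups v2 (unified)
            with open(v2_list) as f:
                return 'cpu' in f.read().split()
        return os.path.isdir(os.path.join(root, 'cpu'))   # cgroups v1 hierarchy
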
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.176 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.176 244018 DEBUG nova.virt.hardware [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.177 244018 DEBUG nova.virt.hardware [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.177 244018 DEBUG nova.virt.hardware [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.177 244018 DEBUG nova.virt.hardware [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.177 244018 DEBUG nova.virt.hardware [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.178 244018 DEBUG nova.virt.hardware [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.178 244018 DEBUG nova.virt.hardware [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.178 244018 DEBUG nova.virt.hardware [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.179 244018 DEBUG nova.virt.hardware [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.179 244018 DEBUG nova.virt.hardware [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.179 244018 DEBUG nova.virt.hardware [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
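
The topology walk above enumerates (sockets, cores, threads) combinations whose product equals the vCPU count, capped by the flavor/image limits; with no constraints the limits default to 65536 each, and a 1-vCPU m1.nano guest admits only 1:1:1. A simplified sketch of that enumeration (nova additionally orders the results by preference):

    # Sketch: all CPU topologies for a given vCPU count within limits.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        found = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        found.append((s, c, t))
        return found

    assert possible_topologies(1) == [(1, 1, 1)]   # matches "Got 1 possible topologies"
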
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.183 244018 DEBUG oslo_concurrency.processutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:31:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:31:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2437135054' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.818 244018 DEBUG oslo_concurrency.processutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.841 244018 DEBUG nova.storage.rbd_utils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
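
The "rbd image ... does not exist" check above probes Ceph for the instance's config-drive image before deciding whether to create it. Nova uses the python rbd bindings for this; a hedged stand-in via the rbd CLI, where the pool name 'vms' is an assumption (nova's usual default for images_rbd_pool):

    # Sketch: existence probe using "rbd info", which exits non-zero when the
    # image is absent.
    import subprocess

    def rbd_image_exists(name, pool='vms', ceph_id='openstack',
                         conf='/etc/ceph/ceph.conf'):
        res = subprocess.run(
            ['rbd', 'info', f'{pool}/{name}', '--id', ceph_id, '--conf', conf],
            capture_output=True)
        return res.returncode == 0
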
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.846 244018 DEBUG oslo_concurrency.processutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.878 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 07:31:05 np0005629333 nova_compute[244014]: 2026-02-25 12:31:05.919 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
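
_heal_instance_info_cache above is one of the manager's periodic tasks: oslo.service invokes registered methods on a fixed interval, and this run found nothing to refresh. The shape of such a scheduler, reduced to a plain daemon-thread loop; the interval value is an assumption, and this is not the oslo.service implementation:

    # Sketch: run a task function on a fixed interval in the background.
    import threading
    import time

    def run_periodically(task, interval=60.0):
        def loop():
            while True:
                task()                 # e.g. a heal_instance_info_cache stand-in
                time.sleep(interval)
        t = threading.Thread(target=loop, daemon=True)
        t.start()
        return t
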
Feb 25 07:31:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 396 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 677 KiB/s rd, 6.1 MiB/s wr, 168 op/s
Feb 25 07:31:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:31:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/220219505' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.363 244018 DEBUG oslo_concurrency.processutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.364 244018 DEBUG nova.virt.libvirt.vif [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:30:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1795002660',display_name='tempest-tempest.common.compute-instance-1795002660',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1795002660',id=75,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8315f545d21f4f8d9a43d810f50e7b78',ramdisk_id='',reservation_id='r-p68nlja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1615886603',owner_user_name='tempest-ServerActionsTestOtherA-1615886603-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:31:01Z,user_data=None,user_id='b63928451c6a4137bb65e25561326aff',uuid=3f143481-f5f9-45d3-9d0b-b66e77ee0714,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.364 244018 DEBUG nova.network.os_vif_util [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converting VIF {"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.365 244018 DEBUG nova.network.os_vif_util [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:c8:fb,bridge_name='br-int',has_traffic_filtering=True,id=a308a2fe-7f22-4520-8be1-36876d0d5361,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa308a2fe-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
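
The "Converting VIF ... Converted object VIFOpenVSwitch(...)" pair above is a translation from nova's untyped VIF dict into a typed os-vif object. A reduced sketch of that mapping; the dataclass is a stand-in for os_vif's VIFOpenVSwitch, and the field selection mirrors the attributes visible in the converted object logged above:

    # Sketch: copy the relevant keys of nova's VIF dict into a typed object.
    from dataclasses import dataclass

    @dataclass
    class VIFOpenVSwitch:              # stand-in, not the real os_vif class
        id: str
        address: str
        bridge_name: str
        vif_name: str
        active: bool
        preserve_on_delete: bool

    def nova_to_osvif_vif(vif):
        assert vif['type'] == 'ovs'    # this converter only handles OVS ports
        return VIFOpenVSwitch(
            id=vif['id'],
            address=vif['address'],
            bridge_name=vif['details']['bridge_name'],
            vif_name=vif['devname'],
            active=vif['active'],
            preserve_on_delete=vif['preserve_on_delete'],
        )
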
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.367 244018 DEBUG nova.objects.instance [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3f143481-f5f9-45d3-9d0b-b66e77ee0714 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.396 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:31:06 np0005629333 nova_compute[244014]:  <uuid>3f143481-f5f9-45d3-9d0b-b66e77ee0714</uuid>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:  <name>instance-0000004b</name>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <nova:name>tempest-tempest.common.compute-instance-1795002660</nova:name>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:31:05</nova:creationTime>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:31:06 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:        <nova:user uuid="b63928451c6a4137bb65e25561326aff">tempest-ServerActionsTestOtherA-1615886603-project-member</nova:user>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:        <nova:project uuid="8315f545d21f4f8d9a43d810f50e7b78">tempest-ServerActionsTestOtherA-1615886603</nova:project>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:        <nova:port uuid="a308a2fe-7f22-4520-8be1-36876d0d5361">
Feb 25 07:31:06 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <entry name="serial">3f143481-f5f9-45d3-9d0b-b66e77ee0714</entry>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <entry name="uuid">3f143481-f5f9-45d3-9d0b-b66e77ee0714</entry>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk">
Feb 25 07:31:06 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:31:06 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk.config">
Feb 25 07:31:06 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:31:06 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:da:c8:fb"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <target dev="tapa308a2fe-7f"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/console.log" append="off"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:31:06 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:31:06 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:31:06 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:31:06 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
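The XML dump above is the complete guest definition Nova hands to libvirt. Note that <memory> is expressed in KiB (131072 KiB = 128 MiB, matching the m1.nano flavor), both disks are RBD network disks in the vms pool, and the interface is a plain tap device that os-vif attaches to br-int. A short self-contained sketch, using an abbreviated copy of the same XML, of pulling the interesting fields back out with the standard library:

    # Summarize the guest definition logged above. The XML here is abbreviated;
    # in practice capture the full document from the log or via
    # `virsh dumpxml instance-0000004b`.
    import xml.etree.ElementTree as ET

    xml = """<domain type="kvm">
      <name>instance-0000004b</name>
      <memory>131072</memory>  <!-- libvirt's default unit is KiB: 128 MiB -->
      <vcpu>1</vcpu>
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk"/>
          <target dev="vda" bus="virtio"/>
        </disk>
        <interface type="ethernet">
          <mac address="fa:16:3e:da:c8:fb"/>
          <target dev="tapa308a2fe-7f"/>
        </interface>
      </devices>
    </domain>"""

    dom = ET.fromstring(xml)
    mem_mib = int(dom.findtext('memory')) // 1024
    print(f"{dom.findtext('name')}: {mem_mib} MiB, {dom.findtext('vcpu')} vCPU")
    for disk in dom.iter('disk'):
        src = disk.find('source')
        print('disk:', disk.get('device'), src.get('protocol'), src.get('name'))
    for iface in dom.iter('interface'):
        print('vif:', iface.find('mac').get('address'), '->',
              iface.find('target').get('dev'))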
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.396 244018 DEBUG nova.compute.manager [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Preparing to wait for external event network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.396 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.397 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.397 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.397 244018 DEBUG nova.virt.libvirt.vif [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:30:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1795002660',display_name='tempest-tempest.common.compute-instance-1795002660',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1795002660',id=75,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8315f545d21f4f8d9a43d810f50e7b78',ramdisk_id='',reservation_id='r-p68nlja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1615886603',owner_user_name='tempest-ServerActionsTestOtherA-1615886603-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:31:01Z,user_data=None,user_id='b63928451c6a4137bb65e25561326aff',uuid=3f143481-f5f9-45d3-9d0b-b66e77ee0714,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.398 244018 DEBUG nova.network.os_vif_util [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converting VIF {"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.398 244018 DEBUG nova.network.os_vif_util [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:c8:fb,bridge_name='br-int',has_traffic_filtering=True,id=a308a2fe-7f22-4520-8be1-36876d0d5361,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa308a2fe-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.399 244018 DEBUG os_vif [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:c8:fb,bridge_name='br-int',has_traffic_filtering=True,id=a308a2fe-7f22-4520-8be1-36876d0d5361,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa308a2fe-7f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.399 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.399 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.400 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.403 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.403 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa308a2fe-7f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.403 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa308a2fe-7f, col_values=(('external_ids', {'iface-id': 'a308a2fe-7f22-4520-8be1-36876d0d5361', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:c8:fb', 'vm-uuid': '3f143481-f5f9-45d3-9d0b-b66e77ee0714'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:31:06 np0005629333 NetworkManager[49836]: <info>  [1772022666.4062] manager: (tapa308a2fe-7f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/298)
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.405 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.408 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.411 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.411 244018 INFO os_vif [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:c8:fb,bridge_name='br-int',has_traffic_filtering=True,id=a308a2fe-7f22-4520-8be1-36876d0d5361,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa308a2fe-7f')#033[00m
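The ovsdbapp entries above are the two transactions behind os-vif's ovs plugin: an AddBridgeCommand that is a no-op because br-int already exists ("Transaction caused no change"), then an AddPortCommand plus a DbSetCommand that writes the external_ids OVN matches on. A sketch of the same transaction through ovsdbapp's Open_vSwitch API; the unix socket path is an assumption (the stock /run/openvswitch/db.sock), while the command arguments mirror the log:

    # Sketch of the logged AddPortCommand/DbSetCommand transaction via
    # ovsdbapp. Requires access to the local OVSDB socket (typically root).
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'   # assumed endpoint
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    external_ids = {
        'iface-id': 'a308a2fe-7f22-4520-8be1-36876d0d5361',  # Neutron port UUID
        'iface-status': 'active',
        'attached-mac': 'fa:16:3e:da:c8:fb',
        'vm-uuid': '3f143481-f5f9-45d3-9d0b-b66e77ee0714',
    }
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapa308a2fe-7f', may_exist=True))
        txn.add(api.db_set('Interface', 'tapa308a2fe-7f',
                           ('external_ids', external_ids)))

ovn-controller watches for interfaces whose iface-id matches a southbound Port_Binding's logical_port; that match is what produces the "Claiming lport" messages at 12:31:07 below.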
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.538 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.538 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.538 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] No VIF found with MAC fa:16:3e:da:c8:fb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.539 244018 INFO nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Using config drive#033[00m
Feb 25 07:31:06 np0005629333 podman[308124]: 2026-02-25 12:31:06.550506688 +0000 UTC m=+0.109885617 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.564 244018 DEBUG nova.storage.rbd_utils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.570 244018 DEBUG nova.compute.manager [req-ad7973e0-3764-4f33-a12f-7d3c481a5819 req-80367ab1-7da2-4c49-991b-072168779842 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Received event network-changed-e61932f1-9a36-4c95-a52f-470c182ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.570 244018 DEBUG nova.compute.manager [req-ad7973e0-3764-4f33-a12f-7d3c481a5819 req-80367ab1-7da2-4c49-991b-072168779842 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Refreshing instance network info cache due to event network-changed-e61932f1-9a36-4c95-a52f-470c182ac70f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.570 244018 DEBUG oslo_concurrency.lockutils [req-ad7973e0-3764-4f33-a12f-7d3c481a5819 req-80367ab1-7da2-4c49-991b-072168779842 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-7715b665-70af-4b84-b8a3-85ccea6ab805" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.571 244018 DEBUG oslo_concurrency.lockutils [req-ad7973e0-3764-4f33-a12f-7d3c481a5819 req-80367ab1-7da2-4c49-991b-072168779842 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-7715b665-70af-4b84-b8a3-85ccea6ab805" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.571 244018 DEBUG nova.network.neutron [req-ad7973e0-3764-4f33-a12f-7d3c481a5819 req-80367ab1-7da2-4c49-991b-072168779842 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Refreshing network info cache for port e61932f1-9a36-4c95-a52f-470c182ac70f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:31:06 np0005629333 podman[308125]: 2026-02-25 12:31:06.599867868 +0000 UTC m=+0.154281606 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:31:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.729 244018 DEBUG nova.network.neutron [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Updated VIF entry in instance network info cache for port a308a2fe-7f22-4520-8be1-36876d0d5361. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.730 244018 DEBUG nova.network.neutron [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Updating instance_info_cache with network_info: [{"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:31:06 np0005629333 nova_compute[244014]: 2026-02-25 12:31:06.755 244018 DEBUG oslo_concurrency.lockutils [req-acc33c67-c619-46bb-9ff6-9bffaa105c5b req-2bf81253-c20d-4d1c-a9de-a9b493575658 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-3f143481-f5f9-45d3-9d0b-b66e77ee0714" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
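The Acquiring/Acquired/Releasing triplets around refresh_cache-<uuid> and <uuid>-events come from oslo.concurrency's lockutils, which logs every named-lock transition at DEBUG. The underlying pattern, sketched minimally (lock names follow the conventions visible above; the bodies are placeholders):

    # Minimal sketch of the oslo.concurrency usage behind the
    # "Acquiring lock ... / acquired / released" lines in this log.
    from oslo_concurrency import lockutils

    instance_uuid = '3f143481-f5f9-45d3-9d0b-b66e77ee0714'

    with lockutils.lock('refresh_cache-%s' % instance_uuid):
        # Rebuild the instance network info cache here; the lock ensures
        # only one worker refreshes a given instance's cache at a time.
        pass

    # Decorator form, as used for the per-instance event queue:
    @lockutils.synchronized('%s-events' % instance_uuid)
    def _pop_event():
        pass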
Feb 25 07:31:07 np0005629333 nova_compute[244014]: 2026-02-25 12:31:07.259 244018 INFO nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Creating config drive at /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/disk.config#033[00m
Feb 25 07:31:07 np0005629333 nova_compute[244014]: 2026-02-25 12:31:07.267 244018 DEBUG oslo_concurrency.processutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpv7o3qpwn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:31:07 np0005629333 nova_compute[244014]: 2026-02-25 12:31:07.400 244018 DEBUG oslo_concurrency.processutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpv7o3qpwn" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:31:07 np0005629333 nova_compute[244014]: 2026-02-25 12:31:07.423 244018 DEBUG nova.storage.rbd_utils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:31:07 np0005629333 nova_compute[244014]: 2026-02-25 12:31:07.427 244018 DEBUG oslo_concurrency.processutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/disk.config 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:31:07 np0005629333 nova_compute[244014]: 2026-02-25 12:31:07.565 244018 DEBUG oslo_concurrency.processutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/disk.config 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:31:07 np0005629333 nova_compute[244014]: 2026-02-25 12:31:07.566 244018 INFO nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Deleting local config drive /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/disk.config because it was imported into RBD.#033[00m
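The entries from 12:31:07.267 through .566 show the config-drive round trip: build an ISO9660 image locally with mkisofs, rbd-import it into the vms pool as <uuid>_disk.config, then delete the local copy. The same sequence as a runnable sketch; the commands are copied from the log, and /tmp/tmpv7o3qpwn was simply Nova's temporary staging directory for the metadata files:

    # The exact commands oslo.concurrency logged above, wrapped in subprocess.
    # Paths, pool name and UUIDs are taken from this log.
    import os
    import subprocess

    inst = '3f143481-f5f9-45d3-9d0b-b66e77ee0714'
    iso = f'/var/lib/nova/instances/{inst}/disk.config'

    subprocess.run(
        ['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
         '-allow-multidot', '-l', '-publisher',
         'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
         '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpv7o3qpwn'],
        check=True)

    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', iso, f'{inst}_disk.config',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)

    os.unlink(iso)   # "Deleting local config drive ... imported into RBD"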
Feb 25 07:31:07 np0005629333 kernel: tapa308a2fe-7f: entered promiscuous mode
Feb 25 07:31:07 np0005629333 NetworkManager[49836]: <info>  [1772022667.6050] manager: (tapa308a2fe-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/299)
Feb 25 07:31:07 np0005629333 nova_compute[244014]: 2026-02-25 12:31:07.606 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:07 np0005629333 ovn_controller[147040]: 2026-02-25T12:31:07Z|00686|binding|INFO|Claiming lport a308a2fe-7f22-4520-8be1-36876d0d5361 for this chassis.
Feb 25 07:31:07 np0005629333 ovn_controller[147040]: 2026-02-25T12:31:07Z|00687|binding|INFO|a308a2fe-7f22-4520-8be1-36876d0d5361: Claiming fa:16:3e:da:c8:fb 10.100.0.4
Feb 25 07:31:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:07.619 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:c8:fb 10.100.0.4'], port_security=['fa:16:3e:da:c8:fb 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3f143481-f5f9-45d3-9d0b-b66e77ee0714', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd796561-bd80-4610-8abc-655ee9e3676f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8315f545d21f4f8d9a43d810f50e7b78', 'neutron:revision_number': '2', 'neutron:security_group_ids': '290a60c1-6d36-455f-8454-5a2e312fec46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b5d31ff-0cb2-4a33-884a-0100942c964b, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=a308a2fe-7f22-4520-8be1-36876d0d5361) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:31:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:07.620 157129 INFO neutron.agent.ovn.metadata.agent [-] Port a308a2fe-7f22-4520-8be1-36876d0d5361 in datapath cd796561-bd80-4610-8abc-655ee9e3676f bound to our chassis#033[00m
Feb 25 07:31:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:07.627 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cd796561-bd80-4610-8abc-655ee9e3676f#033[00m
Feb 25 07:31:07 np0005629333 ovn_controller[147040]: 2026-02-25T12:31:07Z|00688|binding|INFO|Setting lport a308a2fe-7f22-4520-8be1-36876d0d5361 ovn-installed in OVS
Feb 25 07:31:07 np0005629333 ovn_controller[147040]: 2026-02-25T12:31:07Z|00689|binding|INFO|Setting lport a308a2fe-7f22-4520-8be1-36876d0d5361 up in Southbound
Feb 25 07:31:07 np0005629333 nova_compute[244014]: 2026-02-25 12:31:07.629 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:07 np0005629333 nova_compute[244014]: 2026-02-25 12:31:07.635 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:07 np0005629333 systemd-machined[210048]: New machine qemu-90-instance-0000004b.
Feb 25 07:31:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:07.646 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[342eba33-5e6d-47ad-ab7f-3574f2117238]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:07 np0005629333 systemd[1]: Started Virtual Machine qemu-90-instance-0000004b.
Feb 25 07:31:07 np0005629333 systemd-udevd[308243]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:31:07 np0005629333 NetworkManager[49836]: <info>  [1772022667.6763] device (tapa308a2fe-7f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:31:07 np0005629333 NetworkManager[49836]: <info>  [1772022667.6769] device (tapa308a2fe-7f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:31:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:07.682 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[35c38c72-d169-453c-ba31-73333cd87e47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:07.684 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b51b41fa-0caf-45b6-b90b-cc73a3b2beff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:07.717 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[83e64448-1a38-4aab-9264-e3d90895ba67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:07.732 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fd088751-3c3b-433e-acd5-68865542c7b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd796561-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:48:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 200], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460510, 'reachable_time': 29897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308254, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:07.743 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[37f61b81-4a80-4608-a814-1f39ce88fdc6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcd796561-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460517, 'tstamp': 460517}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308255, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcd796561-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460518, 'tstamp': 460518}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308255, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
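The two privsep replies above are netlink dumps in pyroute2's format (the IFLA_*/IFA_* attribute lists), taken inside the ovnmeta-cd796561-... namespace named in each reply's 'target' header: the veth leg tapcd796561-b1 is up, carrying 10.100.0.2/28 and the metadata VIP 169.254.169.254/32. A sketch that reproduces the address query, assuming pyroute2 and root privileges:

    # Re-run the address query the privsep daemon answered above. The
    # namespace name ovnmeta-<network uuid> comes from the 'target' field
    # of the netlink reply headers. Assumes pyroute2 is available.
    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f')
    try:
        for addr in ns.get_addr(label='tapcd796561-b1'):
            attrs = dict(addr['attrs'])
            print(f"{attrs['IFA_ADDRESS']}/{addr['prefixlen']}")
            # expected: 10.100.0.2/28 and the metadata VIP 169.254.169.254/32
    finally:
        ns.close()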
Feb 25 07:31:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:07.745 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd796561-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:31:07 np0005629333 nova_compute[244014]: 2026-02-25 12:31:07.748 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:07.751 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd796561-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:31:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:07.751 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:31:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:07.752 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcd796561-b0, col_values=(('external_ids', {'iface-id': 'd456b40d-31d4-4447-97a8-c536382c29f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:31:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:07.752 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:31:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 419 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 7.2 MiB/s wr, 361 op/s
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.140 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022668.1383505, 3f143481-f5f9-45d3-9d0b-b66e77ee0714 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.141 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] VM Started (Lifecycle Event)#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.169 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.172 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022668.1397903, 3f143481-f5f9-45d3-9d0b-b66e77ee0714 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.173 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.204 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.208 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.232 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
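The power-state sync messages compare the database value against what libvirt reports, using the integer constants from nova.compute.power_state: the guest is created paused (3) while Nova finishes the VIF and config-drive work, then resumed to running (1). A decoder for the values seen here (constants as commonly defined in the nova tree; verify against your version):

    # Decode the numeric power states in the sync messages above.
    POWER_STATE = {
        0: 'NOSTATE',    # the DB value before the first sync
        1: 'RUNNING',    # VM power_state once resumed
        3: 'PAUSED',     # the guest is started paused during spawn
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }
    print(POWER_STATE[3], '->', POWER_STATE[1])  # the transition logged here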
Feb 25 07:31:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:31:08Z|00690|binding|INFO|Releasing lport 87b3311c-e91e-486e-ab14-ba8c3046ee03 from this chassis (sb_readonly=0)
Feb 25 07:31:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:31:08Z|00691|binding|INFO|Releasing lport d456b40d-31d4-4447-97a8-c536382c29f9 from this chassis (sb_readonly=0)
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.265 244018 DEBUG nova.compute.manager [req-56772232-9eb7-4e1c-96dc-6c8687984e82 req-db14442a-ebc4-40ac-a426-c4201f74b835 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received event network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.267 244018 DEBUG oslo_concurrency.lockutils [req-56772232-9eb7-4e1c-96dc-6c8687984e82 req-db14442a-ebc4-40ac-a426-c4201f74b835 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.268 244018 DEBUG oslo_concurrency.lockutils [req-56772232-9eb7-4e1c-96dc-6c8687984e82 req-db14442a-ebc4-40ac-a426-c4201f74b835 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.268 244018 DEBUG oslo_concurrency.lockutils [req-56772232-9eb7-4e1c-96dc-6c8687984e82 req-db14442a-ebc4-40ac-a426-c4201f74b835 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.269 244018 DEBUG nova.compute.manager [req-56772232-9eb7-4e1c-96dc-6c8687984e82 req-db14442a-ebc4-40ac-a426-c4201f74b835 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Processing event network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.284 244018 DEBUG nova.compute.manager [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
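This closes the handshake opened at 12:31:06.396: Nova registered a waiter for network-vif-plugged before launching the domain, OVN bound the port and Neutron sent the event back (12:31:08.265), and the wait completed in under a second, well inside the default 300 s vif_plugging_timeout. A schematic sketch of that prepare/deliver/wait pattern (not Nova's actual code, which is greenthread-based):

    # Schematic illustration of the external-event handshake in this log:
    # register interest in network-vif-plugged before plugging, then block
    # until the event is delivered back from Neutron.
    import threading

    class InstanceEvents:
        def __init__(self):
            self._events = {}          # (name, tag) -> threading.Event
            self._lock = threading.Lock()

        def prepare(self, name, tag):
            with self._lock:           # the "<uuid>-events" lock in the log
                return self._events.setdefault((name, tag), threading.Event())

        def pop(self, name, tag):
            with self._lock:
                return self._events.pop((name, tag), None)

    events = InstanceEvents()
    port = 'a308a2fe-7f22-4520-8be1-36876d0d5361'
    waiter = events.prepare('network-vif-plugged', port)

    # ... plug the VIF, define and launch the domain ...

    # Delivered via the external-event API once Neutron sees the binding up:
    ev = events.pop('network-vif-plugged', port)
    if ev:
        ev.set()

    waiter.wait(timeout=300)           # mirrors the 300 s vif_plugging_timeout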
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.288 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022668.288267, 3f143481-f5f9-45d3-9d0b-b66e77ee0714 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.289 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.303 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.307 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.312 244018 INFO nova.virt.libvirt.driver [-] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Instance spawned successfully.#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.313 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.321 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.327 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.346 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.347 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.348 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.349 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.350 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.350 244018 DEBUG nova.virt.libvirt.driver [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.357 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.439 244018 INFO nova.compute.manager [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Took 7.23 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.440 244018 DEBUG nova.compute.manager [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.523 244018 INFO nova.compute.manager [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Took 8.74 seconds to build instance.#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.556 244018 DEBUG oslo_concurrency.lockutils [None req-5dd3ded0-244d-4ac1-961a-bd83c9700835 b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.581 244018 DEBUG nova.network.neutron [req-ad7973e0-3764-4f33-a12f-7d3c481a5819 req-80367ab1-7da2-4c49-991b-072168779842 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Updated VIF entry in instance network info cache for port e61932f1-9a36-4c95-a52f-470c182ac70f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.582 244018 DEBUG nova.network.neutron [req-ad7973e0-3764-4f33-a12f-7d3c481a5819 req-80367ab1-7da2-4c49-991b-072168779842 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Updating instance_info_cache with network_info: [{"id": "e61932f1-9a36-4c95-a52f-470c182ac70f", "address": "fa:16:3e:38:25:49", "network": {"id": "2f200cab-355f-4832-b410-50e7782a19b7", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-633209158-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7f30b1d5a1f4604bb44f655b6be0571", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61932f1-9a", "ovs_interfaceid": "e61932f1-9a36-4c95-a52f-470c182ac70f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.610 244018 DEBUG oslo_concurrency.lockutils [req-ad7973e0-3764-4f33-a12f-7d3c481a5819 req-80367ab1-7da2-4c49-991b-072168779842 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-7715b665-70af-4b84-b8a3-85ccea6ab805" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:31:08 np0005629333 nova_compute[244014]: 2026-02-25 12:31:08.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:31:09 np0005629333 nova_compute[244014]: 2026-02-25 12:31:09.634 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022654.6325033, 25eece29-8689-47a8-b930-0492bf528de5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:31:09 np0005629333 nova_compute[244014]: 2026-02-25 12:31:09.635 244018 INFO nova.compute.manager [-] [instance: 25eece29-8689-47a8-b930-0492bf528de5] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:31:09 np0005629333 nova_compute[244014]: 2026-02-25 12:31:09.656 244018 DEBUG nova.compute.manager [None req-5ed2e0a9-5a44-4f30-a48a-5b33741ec828 - - - - - -] [instance: 25eece29-8689-47a8-b930-0492bf528de5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:31:09 np0005629333 nova_compute[244014]: 2026-02-25 12:31:09.792 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 419 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 5.1 MiB/s wr, 327 op/s
Feb 25 07:31:10 np0005629333 nova_compute[244014]: 2026-02-25 12:31:10.392 244018 DEBUG nova.compute.manager [req-c5174979-8251-4d0a-8540-42c4daaa071d req-12f38a1b-d881-44af-affd-8f01a6507e5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received event network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:31:10 np0005629333 nova_compute[244014]: 2026-02-25 12:31:10.393 244018 DEBUG oslo_concurrency.lockutils [req-c5174979-8251-4d0a-8540-42c4daaa071d req-12f38a1b-d881-44af-affd-8f01a6507e5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:31:10 np0005629333 nova_compute[244014]: 2026-02-25 12:31:10.393 244018 DEBUG oslo_concurrency.lockutils [req-c5174979-8251-4d0a-8540-42c4daaa071d req-12f38a1b-d881-44af-affd-8f01a6507e5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:31:10 np0005629333 nova_compute[244014]: 2026-02-25 12:31:10.393 244018 DEBUG oslo_concurrency.lockutils [req-c5174979-8251-4d0a-8540-42c4daaa071d req-12f38a1b-d881-44af-affd-8f01a6507e5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:31:10 np0005629333 nova_compute[244014]: 2026-02-25 12:31:10.393 244018 DEBUG nova.compute.manager [req-c5174979-8251-4d0a-8540-42c4daaa071d req-12f38a1b-d881-44af-affd-8f01a6507e5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] No waiting events found dispatching network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:31:10 np0005629333 nova_compute[244014]: 2026-02-25 12:31:10.394 244018 WARNING nova.compute.manager [req-c5174979-8251-4d0a-8540-42c4daaa071d req-12f38a1b-d881-44af-affd-8f01a6507e5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received unexpected event network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 for instance with vm_state active and task_state None.
Feb 25 07:31:10 np0005629333 nova_compute[244014]: 2026-02-25 12:31:10.606 244018 DEBUG oslo_concurrency.lockutils [None req-a1534fad-b5ea-4148-b370-375060246dbc b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:31:10 np0005629333 nova_compute[244014]: 2026-02-25 12:31:10.606 244018 DEBUG oslo_concurrency.lockutils [None req-a1534fad-b5ea-4148-b370-375060246dbc b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:31:10 np0005629333 nova_compute[244014]: 2026-02-25 12:31:10.606 244018 DEBUG nova.compute.manager [None req-a1534fad-b5ea-4148-b370-375060246dbc b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:31:10 np0005629333 nova_compute[244014]: 2026-02-25 12:31:10.609 244018 DEBUG nova.compute.manager [None req-a1534fad-b5ea-4148-b370-375060246dbc b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Feb 25 07:31:10 np0005629333 nova_compute[244014]: 2026-02-25 12:31:10.610 244018 DEBUG nova.objects.instance [None req-a1534fad-b5ea-4148-b370-375060246dbc b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'flavor' on Instance uuid 3f143481-f5f9-45d3-9d0b-b66e77ee0714 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:31:10 np0005629333 nova_compute[244014]: 2026-02-25 12:31:10.646 244018 DEBUG nova.virt.libvirt.driver [None req-a1534fad-b5ea-4148-b370-375060246dbc b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 07:31:10 np0005629333 nova_compute[244014]: 2026-02-25 12:31:10.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:31:11 np0005629333 nova_compute[244014]: 2026-02-25 12:31:11.407 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:31:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:31:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 419 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 5.1 MiB/s wr, 367 op/s
Feb 25 07:31:12 np0005629333 nova_compute[244014]: 2026-02-25 12:31:12.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:31:12 np0005629333 nova_compute[244014]: 2026-02-25 12:31:12.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:31:12 np0005629333 nova_compute[244014]: 2026-02-25 12:31:12.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:31:12 np0005629333 nova_compute[244014]: 2026-02-25 12:31:12.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:31:12 np0005629333 nova_compute[244014]: 2026-02-25 12:31:12.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:31:12 np0005629333 nova_compute[244014]: 2026-02-25 12:31:12.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:31:12 np0005629333 nova_compute[244014]: 2026-02-25 12:31:12.911 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:31:12 np0005629333 nova_compute[244014]: 2026-02-25 12:31:12.911 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 07:31:12 np0005629333 nova_compute[244014]: 2026-02-25 12:31:12.911 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:31:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:31:13 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/20037407' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:31:13 np0005629333 nova_compute[244014]: 2026-02-25 12:31:13.569 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:31:13 np0005629333 nova_compute[244014]: 2026-02-25 12:31:13.646 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:31:13 np0005629333 nova_compute[244014]: 2026-02-25 12:31:13.646 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000004a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:31:13 np0005629333 nova_compute[244014]: 2026-02-25 12:31:13.651 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:31:13 np0005629333 nova_compute[244014]: 2026-02-25 12:31:13.651 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:31:13 np0005629333 nova_compute[244014]: 2026-02-25 12:31:13.655 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:31:13 np0005629333 nova_compute[244014]: 2026-02-25 12:31:13.655 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000048 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:31:13 np0005629333 nova_compute[244014]: 2026-02-25 12:31:13.660 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000049 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:31:13 np0005629333 nova_compute[244014]: 2026-02-25 12:31:13.660 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000049 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:31:13 np0005629333 nova_compute[244014]: 2026-02-25 12:31:13.664 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:31:13 np0005629333 nova_compute[244014]: 2026-02-25 12:31:13.664 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000004b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:31:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:13.805 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:31:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:13.806 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 07:31:13 np0005629333 nova_compute[244014]: 2026-02-25 12:31:13.805 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:31:13 np0005629333 nova_compute[244014]: 2026-02-25 12:31:13.854 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:31:13 np0005629333 nova_compute[244014]: 2026-02-25 12:31:13.855 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3005MB free_disk=59.85860504861921GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 07:31:13 np0005629333 nova_compute[244014]: 2026-02-25 12:31:13.855 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:31:13 np0005629333 nova_compute[244014]: 2026-02-25 12:31:13.855 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:31:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 419 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 2.4 MiB/s wr, 320 op/s
Feb 25 07:31:14 np0005629333 nova_compute[244014]: 2026-02-25 12:31:14.100 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 5732c5fb-59b3-4590-b65a-a696b9c90152 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 07:31:14 np0005629333 nova_compute[244014]: 2026-02-25 12:31:14.100 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 7715b665-70af-4b84-b8a3-85ccea6ab805 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 07:31:14 np0005629333 nova_compute[244014]: 2026-02-25 12:31:14.101 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance e954c936-91fe-4aa5-8c91-78ec08c85221 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 07:31:14 np0005629333 nova_compute[244014]: 2026-02-25 12:31:14.101 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 9b9beb6f-4642-40d1-b9e5-96345c682341 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 07:31:14 np0005629333 nova_compute[244014]: 2026-02-25 12:31:14.101 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 3f143481-f5f9-45d3-9d0b-b66e77ee0714 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 07:31:14 np0005629333 nova_compute[244014]: 2026-02-25 12:31:14.101 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 07:31:14 np0005629333 nova_compute[244014]: 2026-02-25 12:31:14.101 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 07:31:14 np0005629333 nova_compute[244014]: 2026-02-25 12:31:14.207 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:31:14 np0005629333 ovn_controller[147040]: 2026-02-25T12:31:14Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:38:25:49 10.100.0.5
Feb 25 07:31:14 np0005629333 ovn_controller[147040]: 2026-02-25T12:31:14Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:38:25:49 10.100.0.5
Feb 25 07:31:14 np0005629333 nova_compute[244014]: 2026-02-25 12:31:14.751 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Feb 25 07:31:14 np0005629333 nova_compute[244014]: 2026-02-25 12:31:14.794 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:31:14 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Feb 25 07:31:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:31:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1930314918' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:31:14 np0005629333 nova_compute[244014]: 2026-02-25 12:31:14.949 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.741s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:31:14 np0005629333 nova_compute[244014]: 2026-02-25 12:31:14.955 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:31:14 np0005629333 nova_compute[244014]: 2026-02-25 12:31:14.994 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:31:15 np0005629333 nova_compute[244014]: 2026-02-25 12:31:15.022 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 07:31:15 np0005629333 nova_compute[244014]: 2026-02-25 12:31:15.023 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.168s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:31:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:31:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:31:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:31:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:31:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:15.808 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:31:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 419 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 1.0 MiB/s wr, 262 op/s
Feb 25 07:31:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:31:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:31:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:31:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:31:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:31:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:31:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:31:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:31:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:31:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:31:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:31:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:31:16 np0005629333 nova_compute[244014]: 2026-02-25 12:31:16.409 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:31:16 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:31:16 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:31:16 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:31:16 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:31:16 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:31:16 np0005629333 podman[308559]: 2026-02-25 12:31:16.552111868 +0000 UTC m=+0.057740129 container create b9ff2c0fc71129fc869b15f82fc4e4023e5c10c0e3b460ea10fb76d9628f241c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:31:16 np0005629333 systemd[1]: Started libpod-conmon-b9ff2c0fc71129fc869b15f82fc4e4023e5c10c0e3b460ea10fb76d9628f241c.scope.
Feb 25 07:31:16 np0005629333 podman[308559]: 2026-02-25 12:31:16.523904398 +0000 UTC m=+0.029532659 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:31:16 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:31:16 np0005629333 podman[308559]: 2026-02-25 12:31:16.648254224 +0000 UTC m=+0.153882485 container init b9ff2c0fc71129fc869b15f82fc4e4023e5c10c0e3b460ea10fb76d9628f241c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_joliot, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 07:31:16 np0005629333 podman[308559]: 2026-02-25 12:31:16.658577047 +0000 UTC m=+0.164205318 container start b9ff2c0fc71129fc869b15f82fc4e4023e5c10c0e3b460ea10fb76d9628f241c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_joliot, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 07:31:16 np0005629333 podman[308559]: 2026-02-25 12:31:16.664488764 +0000 UTC m=+0.170117125 container attach b9ff2c0fc71129fc869b15f82fc4e4023e5c10c0e3b460ea10fb76d9628f241c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 07:31:16 np0005629333 hardcore_joliot[308575]: 167 167
Feb 25 07:31:16 np0005629333 systemd[1]: libpod-b9ff2c0fc71129fc869b15f82fc4e4023e5c10c0e3b460ea10fb76d9628f241c.scope: Deactivated successfully.
Feb 25 07:31:16 np0005629333 podman[308559]: 2026-02-25 12:31:16.680046286 +0000 UTC m=+0.185674567 container died b9ff2c0fc71129fc869b15f82fc4e4023e5c10c0e3b460ea10fb76d9628f241c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_joliot, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:31:16 np0005629333 systemd[1]: var-lib-containers-storage-overlay-8725b5f832f0fd1045a12d874d634f58ce113e4306344a225cad4240be5678e4-merged.mount: Deactivated successfully.
Feb 25 07:31:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:31:16 np0005629333 podman[308559]: 2026-02-25 12:31:16.725775562 +0000 UTC m=+0.231403833 container remove b9ff2c0fc71129fc869b15f82fc4e4023e5c10c0e3b460ea10fb76d9628f241c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_joliot, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:31:16 np0005629333 systemd[1]: libpod-conmon-b9ff2c0fc71129fc869b15f82fc4e4023e5c10c0e3b460ea10fb76d9628f241c.scope: Deactivated successfully.
Feb 25 07:31:16 np0005629333 podman[308600]: 2026-02-25 12:31:16.912279761 +0000 UTC m=+0.072474046 container create e893661fa2ac28d5ff3dfa16285f58b102ae922bcf8df35c487e57c51d71035e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_hawking, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:31:16 np0005629333 systemd[1]: Started libpod-conmon-e893661fa2ac28d5ff3dfa16285f58b102ae922bcf8df35c487e57c51d71035e.scope.
Feb 25 07:31:16 np0005629333 podman[308600]: 2026-02-25 12:31:16.884811742 +0000 UTC m=+0.045006097 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:31:16 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:31:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91940f3a79f2199f12086aec020dcccb7290bbd47cfed1c147b9ece290d5b3e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:31:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91940f3a79f2199f12086aec020dcccb7290bbd47cfed1c147b9ece290d5b3e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:31:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91940f3a79f2199f12086aec020dcccb7290bbd47cfed1c147b9ece290d5b3e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:31:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91940f3a79f2199f12086aec020dcccb7290bbd47cfed1c147b9ece290d5b3e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:31:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91940f3a79f2199f12086aec020dcccb7290bbd47cfed1c147b9ece290d5b3e4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:31:17 np0005629333 podman[308600]: 2026-02-25 12:31:17.022995451 +0000 UTC m=+0.183189816 container init e893661fa2ac28d5ff3dfa16285f58b102ae922bcf8df35c487e57c51d71035e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_hawking, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 07:31:17 np0005629333 podman[308600]: 2026-02-25 12:31:17.040521578 +0000 UTC m=+0.200715843 container start e893661fa2ac28d5ff3dfa16285f58b102ae922bcf8df35c487e57c51d71035e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:31:17 np0005629333 podman[308600]: 2026-02-25 12:31:17.048115963 +0000 UTC m=+0.208310308 container attach e893661fa2ac28d5ff3dfa16285f58b102ae922bcf8df35c487e57c51d71035e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_hawking, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:31:17 np0005629333 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Feb 25 07:31:17 np0005629333 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d0000004a.scope: Consumed 12.738s CPU time.
Feb 25 07:31:17 np0005629333 systemd-machined[210048]: Machine qemu-89-instance-0000004a terminated.
Feb 25 07:31:17 np0005629333 happy_hawking[308617]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:31:17 np0005629333 happy_hawking[308617]: --> All data devices are unavailable
Feb 25 07:31:17 np0005629333 systemd[1]: libpod-e893661fa2ac28d5ff3dfa16285f58b102ae922bcf8df35c487e57c51d71035e.scope: Deactivated successfully.
Feb 25 07:31:17 np0005629333 podman[308600]: 2026-02-25 12:31:17.613553688 +0000 UTC m=+0.773747963 container died e893661fa2ac28d5ff3dfa16285f58b102ae922bcf8df35c487e57c51d71035e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_hawking, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:31:17 np0005629333 systemd[1]: var-lib-containers-storage-overlay-91940f3a79f2199f12086aec020dcccb7290bbd47cfed1c147b9ece290d5b3e4-merged.mount: Deactivated successfully.
Feb 25 07:31:17 np0005629333 podman[308600]: 2026-02-25 12:31:17.657921796 +0000 UTC m=+0.818116071 container remove e893661fa2ac28d5ff3dfa16285f58b102ae922bcf8df35c487e57c51d71035e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_hawking, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:31:17 np0005629333 systemd[1]: libpod-conmon-e893661fa2ac28d5ff3dfa16285f58b102ae922bcf8df35c487e57c51d71035e.scope: Deactivated successfully.
Feb 25 07:31:17 np0005629333 nova_compute[244014]: 2026-02-25 12:31:17.768 244018 INFO nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Instance shutdown successfully after 13 seconds.
Feb 25 07:31:17 np0005629333 nova_compute[244014]: 2026-02-25 12:31:17.774 244018 INFO nova.virt.libvirt.driver [-] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Instance destroyed successfully.
Feb 25 07:31:17 np0005629333 nova_compute[244014]: 2026-02-25 12:31:17.779 244018 INFO nova.virt.libvirt.driver [-] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Instance destroyed successfully.
Feb 25 07:31:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 517 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.5 MiB/s wr, 448 op/s
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.024 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:31:18 np0005629333 podman[308733]: 2026-02-25 12:31:18.078088321 +0000 UTC m=+0.045890453 container create 51024856e59e2138e7ab159fa5bd7cbf1b6e78a41bf2412aa779dd709b3b2e14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_einstein, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.098 244018 INFO nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Deleting instance files /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341_del#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.099 244018 INFO nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Deletion of /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341_del complete#033[00m
Feb 25 07:31:18 np0005629333 systemd[1]: Started libpod-conmon-51024856e59e2138e7ab159fa5bd7cbf1b6e78a41bf2412aa779dd709b3b2e14.scope.
Feb 25 07:31:18 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:31:18 np0005629333 podman[308733]: 2026-02-25 12:31:18.140189502 +0000 UTC m=+0.107991644 container init 51024856e59e2138e7ab159fa5bd7cbf1b6e78a41bf2412aa779dd709b3b2e14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_einstein, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 07:31:18 np0005629333 podman[308733]: 2026-02-25 12:31:18.147809278 +0000 UTC m=+0.115611420 container start 51024856e59e2138e7ab159fa5bd7cbf1b6e78a41bf2412aa779dd709b3b2e14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_einstein, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 07:31:18 np0005629333 podman[308733]: 2026-02-25 12:31:18.054359958 +0000 UTC m=+0.022162090 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:31:18 np0005629333 stupefied_einstein[308749]: 167 167
Feb 25 07:31:18 np0005629333 systemd[1]: libpod-51024856e59e2138e7ab159fa5bd7cbf1b6e78a41bf2412aa779dd709b3b2e14.scope: Deactivated successfully.
Feb 25 07:31:18 np0005629333 podman[308733]: 2026-02-25 12:31:18.153336375 +0000 UTC m=+0.121138517 container attach 51024856e59e2138e7ab159fa5bd7cbf1b6e78a41bf2412aa779dd709b3b2e14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_einstein, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:31:18 np0005629333 podman[308733]: 2026-02-25 12:31:18.157307827 +0000 UTC m=+0.125109949 container died 51024856e59e2138e7ab159fa5bd7cbf1b6e78a41bf2412aa779dd709b3b2e14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:31:18 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f6af21c1b6ef1e4e6e3f5b7af2aee9faed8870d8483ad65ba06337b7ddcee0d5-merged.mount: Deactivated successfully.
Feb 25 07:31:18 np0005629333 podman[308733]: 2026-02-25 12:31:18.20360562 +0000 UTC m=+0.171407742 container remove 51024856e59e2138e7ab159fa5bd7cbf1b6e78a41bf2412aa779dd709b3b2e14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_einstein, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 07:31:18 np0005629333 systemd[1]: libpod-conmon-51024856e59e2138e7ab159fa5bd7cbf1b6e78a41bf2412aa779dd709b3b2e14.scope: Deactivated successfully.
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.246 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.246 244018 INFO nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Creating image(s)#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.272 244018 DEBUG nova.storage.rbd_utils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.295 244018 DEBUG nova.storage.rbd_utils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.317 244018 DEBUG nova.storage.rbd_utils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.321 244018 DEBUG oslo_concurrency.processutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:31:18 np0005629333 podman[308825]: 2026-02-25 12:31:18.365418989 +0000 UTC m=+0.042773974 container create 820eccc2ad35e0ac66370ff1a9e62ffcc0a4027e4587f3c60244e0bc6d7f72eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:31:18 np0005629333 systemd[1]: Started libpod-conmon-820eccc2ad35e0ac66370ff1a9e62ffcc0a4027e4587f3c60244e0bc6d7f72eb.scope.
Feb 25 07:31:18 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:31:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43c885edf9bbe7d949c95966ebf04c7fa71748d4370709c9c1119b002eceffc5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:31:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43c885edf9bbe7d949c95966ebf04c7fa71748d4370709c9c1119b002eceffc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:31:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43c885edf9bbe7d949c95966ebf04c7fa71748d4370709c9c1119b002eceffc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:31:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43c885edf9bbe7d949c95966ebf04c7fa71748d4370709c9c1119b002eceffc5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.437 244018 DEBUG oslo_concurrency.processutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json" returned: 0 in 0.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
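The two oslo_concurrency.processutils lines above show nova probing the cached base image with qemu-img under the prlimit wrapper (--as caps the child's address space at 1 GiB, --cpu at 30 s of CPU time). A minimal sketch of the same bounded probe through the library API, assuming oslo.concurrency is importable; the path, limits, and environment are copied from the logged command:

    from oslo_concurrency import processutils

    # Mirror `python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30`
    limits = processutils.ProcessLimits(
        address_space=1073741824,  # bytes, as in --as=1073741824
        cpu_time=30,               # seconds, as in --cpu=30
    )

    # Mirror `env LC_ALL=C LANG=C qemu-img info ... --force-share --output=json`
    out, _err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538',
        '--force-share', '--output=json',
        prlimit=limits,
        env_variables={'LC_ALL': 'C', 'LANG': 'C'},
    )
    print(out)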
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.438 244018 DEBUG oslo_concurrency.lockutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.439 244018 DEBUG oslo_concurrency.lockutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.439 244018 DEBUG oslo_concurrency.lockutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:31:18 np0005629333 podman[308825]: 2026-02-25 12:31:18.341599103 +0000 UTC m=+0.018954128 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:31:18 np0005629333 podman[308825]: 2026-02-25 12:31:18.442229807 +0000 UTC m=+0.119584812 container init 820eccc2ad35e0ac66370ff1a9e62ffcc0a4027e4587f3c60244e0bc6d7f72eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mclaren, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 07:31:18 np0005629333 podman[308825]: 2026-02-25 12:31:18.448485454 +0000 UTC m=+0.125840439 container start 820eccc2ad35e0ac66370ff1a9e62ffcc0a4027e4587f3c60244e0bc6d7f72eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 07:31:18 np0005629333 podman[308825]: 2026-02-25 12:31:18.454575167 +0000 UTC m=+0.131930192 container attach 820eccc2ad35e0ac66370ff1a9e62ffcc0a4027e4587f3c60244e0bc6d7f72eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.467 244018 DEBUG nova.storage.rbd_utils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.470 244018 DEBUG oslo_concurrency.processutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 9b9beb6f-4642-40d1-b9e5-96345c682341_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.734 244018 DEBUG oslo_concurrency.processutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 9b9beb6f-4642-40d1-b9e5-96345c682341_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.264s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]: {
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:    "0": [
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:        {
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "devices": [
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "/dev/loop3"
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            ],
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "lv_name": "ceph_lv0",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "lv_size": "21470642176",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "name": "ceph_lv0",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "tags": {
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.cluster_name": "ceph",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.crush_device_class": "",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.encrypted": "0",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.objectstore": "bluestore",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.osd_id": "0",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.type": "block",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.vdo": "0",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.with_tpm": "0"
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            },
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "type": "block",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "vg_name": "ceph_vg0"
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:        }
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:    ],
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:    "1": [
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:        {
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "devices": [
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "/dev/loop4"
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            ],
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "lv_name": "ceph_lv1",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "lv_size": "21470642176",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "name": "ceph_lv1",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "tags": {
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.cluster_name": "ceph",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.crush_device_class": "",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.encrypted": "0",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.objectstore": "bluestore",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.osd_id": "1",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.type": "block",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.vdo": "0",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.with_tpm": "0"
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            },
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "type": "block",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "vg_name": "ceph_vg1"
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:        }
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:    ],
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:    "2": [
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:        {
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "devices": [
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "/dev/loop5"
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            ],
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "lv_name": "ceph_lv2",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "lv_size": "21470642176",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "name": "ceph_lv2",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "tags": {
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.cluster_name": "ceph",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.crush_device_class": "",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.encrypted": "0",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.objectstore": "bluestore",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.osd_id": "2",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.type": "block",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.vdo": "0",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:                "ceph.with_tpm": "0"
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            },
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "type": "block",
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:            "vg_name": "ceph_vg2"
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:        }
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]:    ]
Feb 25 07:31:18 np0005629333 keen_mclaren[308846]: }
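The keen_mclaren container above has printed the JSON that `ceph-volume lvm list --format json` produces: one top-level key per OSD id ("0", "1", "2"), each holding the logical volume(s) backing that OSD. A short sketch of summarizing that payload once the journald prefix is stripped; the file name osds.json is hypothetical:

    import json

    # osds.json: the JSON above with the 'keen_mclaren[308846]:' prefix removed
    with open('osds.json') as f:
        osds = json.load(f)

    # Map each OSD id to its LV path, backing device(s), and OSD fsid,
    # e.g. "osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (osd_fsid d19afe3c-...)"
    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"(osd_fsid {lv['tags']['ceph.osd_fsid']})")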
Feb 25 07:31:18 np0005629333 systemd[1]: libpod-820eccc2ad35e0ac66370ff1a9e62ffcc0a4027e4587f3c60244e0bc6d7f72eb.scope: Deactivated successfully.
Feb 25 07:31:18 np0005629333 podman[308825]: 2026-02-25 12:31:18.773218083 +0000 UTC m=+0.450573078 container died 820eccc2ad35e0ac66370ff1a9e62ffcc0a4027e4587f3c60244e0bc6d7f72eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:31:18 np0005629333 systemd[1]: var-lib-containers-storage-overlay-43c885edf9bbe7d949c95966ebf04c7fa71748d4370709c9c1119b002eceffc5-merged.mount: Deactivated successfully.
Feb 25 07:31:18 np0005629333 podman[308825]: 2026-02-25 12:31:18.812792945 +0000 UTC m=+0.490147920 container remove 820eccc2ad35e0ac66370ff1a9e62ffcc0a4027e4587f3c60244e0bc6d7f72eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 07:31:18 np0005629333 systemd[1]: libpod-conmon-820eccc2ad35e0ac66370ff1a9e62ffcc0a4027e4587f3c60244e0bc6d7f72eb.scope: Deactivated successfully.
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.834 244018 DEBUG nova.storage.rbd_utils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] resizing rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
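The resize message above comes from nova.storage.rbd_utils, which goes through the rbd Python binding rather than the CLI. A minimal sketch of the equivalent call, assuming the rados/rbd bindings are installed; pool, image name, and target size are taken from the log line:

    import rados
    import rbd

    # Connect as the same cephx user nova uses (--id openstack in the log)
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, '9b9beb6f-4642-40d1-b9e5-96345c682341_disk') as image:
                image.resize(1073741824)  # 1 GiB, matching the m1.nano flavor's root_gb=1
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()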
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.882 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.883 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.959 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.959 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Ensure instance console log exists: /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.960 244018 DEBUG oslo_concurrency.lockutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.960 244018 DEBUG oslo_concurrency.lockutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.960 244018 DEBUG oslo_concurrency.lockutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.961 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.967 244018 WARNING nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.973 244018 DEBUG nova.virt.libvirt.host [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.973 244018 DEBUG nova.virt.libvirt.host [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.976 244018 DEBUG nova.virt.libvirt.host [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.976 244018 DEBUG nova.virt.libvirt.host [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.977 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.978 244018 DEBUG nova.virt.hardware [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.978 244018 DEBUG nova.virt.hardware [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.979 244018 DEBUG nova.virt.hardware [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.979 244018 DEBUG nova.virt.hardware [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.979 244018 DEBUG nova.virt.hardware [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.979 244018 DEBUG nova.virt.hardware [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.979 244018 DEBUG nova.virt.hardware [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.980 244018 DEBUG nova.virt.hardware [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.980 244018 DEBUG nova.virt.hardware [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.980 244018 DEBUG nova.virt.hardware [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.981 244018 DEBUG nova.virt.hardware [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.982 244018 DEBUG nova.objects.instance [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9b9beb6f-4642-40d1-b9e5-96345c682341 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:31:18 np0005629333 nova_compute[244014]: 2026-02-25 12:31:18.999 244018 DEBUG oslo_concurrency.processutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:31:19 np0005629333 podman[309058]: 2026-02-25 12:31:19.275770103 +0000 UTC m=+0.054633460 container create 2f9dfea75a7557a0e40a9ecae41b1f55167960586085e167aaf4ab541740d7da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:31:19 np0005629333 systemd[1]: Started libpod-conmon-2f9dfea75a7557a0e40a9ecae41b1f55167960586085e167aaf4ab541740d7da.scope.
Feb 25 07:31:19 np0005629333 podman[309058]: 2026-02-25 12:31:19.257154605 +0000 UTC m=+0.036017982 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:31:19 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:31:19 np0005629333 podman[309058]: 2026-02-25 12:31:19.389606501 +0000 UTC m=+0.168469868 container init 2f9dfea75a7557a0e40a9ecae41b1f55167960586085e167aaf4ab541740d7da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_feistel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:31:19 np0005629333 podman[309058]: 2026-02-25 12:31:19.399165762 +0000 UTC m=+0.178029129 container start 2f9dfea75a7557a0e40a9ecae41b1f55167960586085e167aaf4ab541740d7da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_feistel, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:31:19 np0005629333 podman[309058]: 2026-02-25 12:31:19.403130735 +0000 UTC m=+0.181994102 container attach 2f9dfea75a7557a0e40a9ecae41b1f55167960586085e167aaf4ab541740d7da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:31:19 np0005629333 goofy_feistel[309074]: 167 167
Feb 25 07:31:19 np0005629333 systemd[1]: libpod-2f9dfea75a7557a0e40a9ecae41b1f55167960586085e167aaf4ab541740d7da.scope: Deactivated successfully.
Feb 25 07:31:19 np0005629333 podman[309058]: 2026-02-25 12:31:19.40685742 +0000 UTC m=+0.185720817 container died 2f9dfea75a7557a0e40a9ecae41b1f55167960586085e167aaf4ab541740d7da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_feistel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 07:31:19 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c0fea8d2f47546ab634dcdb10b00af7a9056721817a50344774451d54c15fe19-merged.mount: Deactivated successfully.
Feb 25 07:31:19 np0005629333 podman[309058]: 2026-02-25 12:31:19.464136155 +0000 UTC m=+0.242999552 container remove 2f9dfea75a7557a0e40a9ecae41b1f55167960586085e167aaf4ab541740d7da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_feistel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:31:19 np0005629333 systemd[1]: libpod-conmon-2f9dfea75a7557a0e40a9ecae41b1f55167960586085e167aaf4ab541740d7da.scope: Deactivated successfully.
Feb 25 07:31:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:31:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/941473776' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:31:19 np0005629333 nova_compute[244014]: 2026-02-25 12:31:19.586 244018 DEBUG oslo_concurrency.processutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
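The `ceph mon dump --format=json` round trip above (dispatched by ceph-mon, timed by oslo.concurrency) is how the libvirt driver learns the monitor endpoints it embeds in the guest's RBD disk definition. A hedged sketch of extracting those endpoints from the returned JSON; the field layout ('mons' list with 'name' and 'addr' as ip:port/nonce) follows the classic monmap format, and mon_dump.json is a hypothetical capture of the command's output:

    import json

    with open('mon_dump.json') as f:
        mon_map = json.load(f)

    for mon in mon_map['mons']:
        # 'addr' is reported as ip:port/nonce; drop the nonce suffix.
        print(mon['name'], mon['addr'].split('/')[0])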
Feb 25 07:31:19 np0005629333 nova_compute[244014]: 2026-02-25 12:31:19.614 244018 DEBUG nova.storage.rbd_utils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:31:19 np0005629333 nova_compute[244014]: 2026-02-25 12:31:19.618 244018 DEBUG oslo_concurrency.processutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:31:19 np0005629333 podman[309106]: 2026-02-25 12:31:19.661572034 +0000 UTC m=+0.048308711 container create 4e2929e2ed98e8e6a05f497d19e34f8e8fc5a77e231b3ea74217698ea780c9b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_williams, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:31:19 np0005629333 systemd[1]: Started libpod-conmon-4e2929e2ed98e8e6a05f497d19e34f8e8fc5a77e231b3ea74217698ea780c9b8.scope.
Feb 25 07:31:19 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:31:19 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8b85be122383e8aceaf8f7f9fd363410c5c69ab549d34747c0e826f2200a350/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:31:19 np0005629333 podman[309106]: 2026-02-25 12:31:19.642529934 +0000 UTC m=+0.029266631 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:31:19 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8b85be122383e8aceaf8f7f9fd363410c5c69ab549d34747c0e826f2200a350/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:31:19 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8b85be122383e8aceaf8f7f9fd363410c5c69ab549d34747c0e826f2200a350/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:31:19 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8b85be122383e8aceaf8f7f9fd363410c5c69ab549d34747c0e826f2200a350/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:31:19 np0005629333 podman[309106]: 2026-02-25 12:31:19.754526149 +0000 UTC m=+0.141262936 container init 4e2929e2ed98e8e6a05f497d19e34f8e8fc5a77e231b3ea74217698ea780c9b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_williams, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:31:19 np0005629333 podman[309106]: 2026-02-25 12:31:19.763644478 +0000 UTC m=+0.150381205 container start 4e2929e2ed98e8e6a05f497d19e34f8e8fc5a77e231b3ea74217698ea780c9b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_williams, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:31:19 np0005629333 podman[309106]: 2026-02-25 12:31:19.777750568 +0000 UTC m=+0.164487285 container attach 4e2929e2ed98e8e6a05f497d19e34f8e8fc5a77e231b3ea74217698ea780c9b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_williams, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:31:19 np0005629333 nova_compute[244014]: 2026-02-25 12:31:19.797 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 517 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 6.4 MiB/s wr, 256 op/s
Feb 25 07:31:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:31:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/661453527' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:31:20 np0005629333 nova_compute[244014]: 2026-02-25 12:31:20.279 244018 DEBUG oslo_concurrency.processutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.661s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:31:20 np0005629333 nova_compute[244014]: 2026-02-25 12:31:20.282 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:31:20 np0005629333 nova_compute[244014]:  <uuid>9b9beb6f-4642-40d1-b9e5-96345c682341</uuid>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:  <name>instance-0000004a</name>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerShowV247Test-server-1350466245</nova:name>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:31:18</nova:creationTime>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:31:20 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:        <nova:user uuid="28ac489e13c44a30a1aafde4937f3cff">tempest-ServerShowV247Test-1730972552-project-member</nova:user>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:        <nova:project uuid="e7c1be4d53154ae793a86c8bcc2c5b47">tempest-ServerShowV247Test-1730972552</nova:project>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="f0ef5a9a-23b8-4883-8e47-feb7403a11d8"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      <nova:ports/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      <entry name="serial">9b9beb6f-4642-40d1-b9e5-96345c682341</entry>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      <entry name="uuid">9b9beb6f-4642-40d1-b9e5-96345c682341</entry>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/9b9beb6f-4642-40d1-b9e5-96345c682341_disk">
Feb 25 07:31:20 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:31:20 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/9b9beb6f-4642-40d1-b9e5-96345c682341_disk.config">
Feb 25 07:31:20 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:31:20 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/console.log" append="off"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:31:20 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:31:20 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:31:20 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:31:20 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:31:20 np0005629333 nova_compute[244014]: 2026-02-25 12:31:20.332 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:31:20 np0005629333 nova_compute[244014]: 2026-02-25 12:31:20.332 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:31:20 np0005629333 nova_compute[244014]: 2026-02-25 12:31:20.332 244018 INFO nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Using config drive#033[00m
Feb 25 07:31:20 np0005629333 nova_compute[244014]: 2026-02-25 12:31:20.348 244018 DEBUG nova.storage.rbd_utils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:31:20 np0005629333 lvm[309251]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:31:20 np0005629333 lvm[309251]: VG ceph_vg0 finished
Feb 25 07:31:20 np0005629333 nova_compute[244014]: 2026-02-25 12:31:20.372 244018 DEBUG nova.objects.instance [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9b9beb6f-4642-40d1-b9e5-96345c682341 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:31:20 np0005629333 lvm[309254]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:31:20 np0005629333 lvm[309254]: VG ceph_vg1 finished
Feb 25 07:31:20 np0005629333 lvm[309256]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:31:20 np0005629333 lvm[309256]: VG ceph_vg2 finished
Feb 25 07:31:20 np0005629333 nova_compute[244014]: 2026-02-25 12:31:20.408 244018 DEBUG nova.objects.instance [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lazy-loading 'keypairs' on Instance uuid 9b9beb6f-4642-40d1-b9e5-96345c682341 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:31:20 np0005629333 lucid_williams[309135]: {}
Feb 25 07:31:20 np0005629333 systemd[1]: libpod-4e2929e2ed98e8e6a05f497d19e34f8e8fc5a77e231b3ea74217698ea780c9b8.scope: Deactivated successfully.
Feb 25 07:31:20 np0005629333 podman[309106]: 2026-02-25 12:31:20.519307037 +0000 UTC m=+0.906043714 container died 4e2929e2ed98e8e6a05f497d19e34f8e8fc5a77e231b3ea74217698ea780c9b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_williams, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:31:20 np0005629333 systemd[1]: libpod-4e2929e2ed98e8e6a05f497d19e34f8e8fc5a77e231b3ea74217698ea780c9b8.scope: Consumed 1.035s CPU time.
Feb 25 07:31:20 np0005629333 systemd[1]: var-lib-containers-storage-overlay-b8b85be122383e8aceaf8f7f9fd363410c5c69ab549d34747c0e826f2200a350-merged.mount: Deactivated successfully.
Feb 25 07:31:20 np0005629333 nova_compute[244014]: 2026-02-25 12:31:20.564 244018 INFO nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Creating config drive at /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/disk.config#033[00m
Feb 25 07:31:20 np0005629333 nova_compute[244014]: 2026-02-25 12:31:20.568 244018 DEBUG oslo_concurrency.processutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1g8sith0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:31:20 np0005629333 podman[309106]: 2026-02-25 12:31:20.575429238 +0000 UTC m=+0.962165915 container remove 4e2929e2ed98e8e6a05f497d19e34f8e8fc5a77e231b3ea74217698ea780c9b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:31:20 np0005629333 systemd[1]: libpod-conmon-4e2929e2ed98e8e6a05f497d19e34f8e8fc5a77e231b3ea74217698ea780c9b8.scope: Deactivated successfully.
Feb 25 07:31:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:31:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:31:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:31:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:31:20 np0005629333 nova_compute[244014]: 2026-02-25 12:31:20.697 244018 DEBUG nova.virt.libvirt.driver [None req-a1534fad-b5ea-4148-b370-375060246dbc b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Feb 25 07:31:20 np0005629333 nova_compute[244014]: 2026-02-25 12:31:20.708 244018 DEBUG oslo_concurrency.processutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1g8sith0" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:31:20 np0005629333 nova_compute[244014]: 2026-02-25 12:31:20.751 244018 DEBUG nova.storage.rbd_utils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] rbd image 9b9beb6f-4642-40d1-b9e5-96345c682341_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:31:20 np0005629333 nova_compute[244014]: 2026-02-25 12:31:20.756 244018 DEBUG oslo_concurrency.processutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/disk.config 9b9beb6f-4642-40d1-b9e5-96345c682341_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:31:20 np0005629333 nova_compute[244014]: 2026-02-25 12:31:20.889 244018 DEBUG oslo_concurrency.processutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/disk.config 9b9beb6f-4642-40d1-b9e5-96345c682341_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:31:20 np0005629333 nova_compute[244014]: 2026-02-25 12:31:20.890 244018 INFO nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Deleting local config drive /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341/disk.config because it was imported into RBD.#033[00m
Feb 25 07:31:20 np0005629333 systemd-machined[210048]: New machine qemu-91-instance-0000004a.
Feb 25 07:31:20 np0005629333 systemd[1]: Started Virtual Machine qemu-91-instance-0000004a.
Feb 25 07:31:21 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:31:21 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:31:21 np0005629333 nova_compute[244014]: 2026-02-25 12:31:21.412 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:31:21 np0005629333 nova_compute[244014]: 2026-02-25 12:31:21.876 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for 9b9beb6f-4642-40d1-b9e5-96345c682341 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 25 07:31:21 np0005629333 nova_compute[244014]: 2026-02-25 12:31:21.877 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022681.8759117, 9b9beb6f-4642-40d1-b9e5-96345c682341 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:31:21 np0005629333 nova_compute[244014]: 2026-02-25 12:31:21.878 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:31:21 np0005629333 nova_compute[244014]: 2026-02-25 12:31:21.882 244018 DEBUG nova.compute.manager [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:31:21 np0005629333 nova_compute[244014]: 2026-02-25 12:31:21.883 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:31:21 np0005629333 nova_compute[244014]: 2026-02-25 12:31:21.887 244018 INFO nova.virt.libvirt.driver [-] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Instance spawned successfully.#033[00m
Feb 25 07:31:21 np0005629333 nova_compute[244014]: 2026-02-25 12:31:21.888 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:31:21 np0005629333 nova_compute[244014]: 2026-02-25 12:31:21.921 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:31:21 np0005629333 nova_compute[244014]: 2026-02-25 12:31:21.925 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:31:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 507 MiB data, 899 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 8.8 MiB/s wr, 352 op/s
Feb 25 07:31:21 np0005629333 nova_compute[244014]: 2026-02-25 12:31:21.946 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Feb 25 07:31:21 np0005629333 nova_compute[244014]: 2026-02-25 12:31:21.947 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022681.8811173, 9b9beb6f-4642-40d1-b9e5-96345c682341 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:31:21 np0005629333 nova_compute[244014]: 2026-02-25 12:31:21.948 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] VM Started (Lifecycle Event)#033[00m
Feb 25 07:31:21 np0005629333 nova_compute[244014]: 2026-02-25 12:31:21.954 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:31:21 np0005629333 nova_compute[244014]: 2026-02-25 12:31:21.954 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:31:21 np0005629333 nova_compute[244014]: 2026-02-25 12:31:21.955 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:31:21 np0005629333 nova_compute[244014]: 2026-02-25 12:31:21.956 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:31:21 np0005629333 nova_compute[244014]: 2026-02-25 12:31:21.956 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:31:21 np0005629333 nova_compute[244014]: 2026-02-25 12:31:21.957 244018 DEBUG nova.virt.libvirt.driver [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:31:21 np0005629333 nova_compute[244014]: 2026-02-25 12:31:21.965 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:31:21 np0005629333 nova_compute[244014]: 2026-02-25 12:31:21.970 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:31:21 np0005629333 nova_compute[244014]: 2026-02-25 12:31:21.999 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Feb 25 07:31:22 np0005629333 nova_compute[244014]: 2026-02-25 12:31:22.019 244018 DEBUG nova.compute.manager [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:31:22 np0005629333 nova_compute[244014]: 2026-02-25 12:31:22.086 244018 DEBUG oslo_concurrency.lockutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:31:22 np0005629333 nova_compute[244014]: 2026-02-25 12:31:22.087 244018 DEBUG oslo_concurrency.lockutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:31:22 np0005629333 nova_compute[244014]: 2026-02-25 12:31:22.087 244018 DEBUG nova.objects.instance [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 25 07:31:22 np0005629333 nova_compute[244014]: 2026-02-25 12:31:22.156 244018 DEBUG oslo_concurrency.lockutils [None req-c2f65268-4403-42a6-be7d-f22eb4bcd995 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:31:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 516 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 10 MiB/s wr, 405 op/s
Feb 25 07:31:24 np0005629333 kernel: tapa308a2fe-7f (unregistering): left promiscuous mode
Feb 25 07:31:24 np0005629333 NetworkManager[49836]: <info>  [1772022684.5591] device (tapa308a2fe-7f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:31:24 np0005629333 ovn_controller[147040]: 2026-02-25T12:31:24Z|00692|binding|INFO|Releasing lport a308a2fe-7f22-4520-8be1-36876d0d5361 from this chassis (sb_readonly=0)
Feb 25 07:31:24 np0005629333 ovn_controller[147040]: 2026-02-25T12:31:24Z|00693|binding|INFO|Setting lport a308a2fe-7f22-4520-8be1-36876d0d5361 down in Southbound
Feb 25 07:31:24 np0005629333 nova_compute[244014]: 2026-02-25 12:31:24.568 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:24 np0005629333 ovn_controller[147040]: 2026-02-25T12:31:24Z|00694|binding|INFO|Removing iface tapa308a2fe-7f ovn-installed in OVS
Feb 25 07:31:24 np0005629333 nova_compute[244014]: 2026-02-25 12:31:24.570 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:24 np0005629333 nova_compute[244014]: 2026-02-25 12:31:24.576 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:24 np0005629333 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d0000004b.scope: Deactivated successfully.
Feb 25 07:31:24 np0005629333 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d0000004b.scope: Consumed 13.090s CPU time.
Feb 25 07:31:24 np0005629333 systemd-machined[210048]: Machine qemu-90-instance-0000004b terminated.
Feb 25 07:31:24 np0005629333 nova_compute[244014]: 2026-02-25 12:31:24.795 244018 INFO nova.virt.libvirt.driver [None req-a1534fad-b5ea-4148-b370-375060246dbc b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Instance shutdown successfully after 14 seconds.#033[00m
Feb 25 07:31:24 np0005629333 nova_compute[244014]: 2026-02-25 12:31:24.798 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:24 np0005629333 nova_compute[244014]: 2026-02-25 12:31:24.804 244018 INFO nova.virt.libvirt.driver [-] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Instance destroyed successfully.#033[00m
Feb 25 07:31:24 np0005629333 nova_compute[244014]: 2026-02-25 12:31:24.804 244018 DEBUG nova.objects.instance [None req-a1534fad-b5ea-4148-b370-375060246dbc b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'numa_topology' on Instance uuid 3f143481-f5f9-45d3-9d0b-b66e77ee0714 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.520 244018 DEBUG oslo_concurrency.lockutils [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Acquiring lock "7715b665-70af-4b84-b8a3-85ccea6ab805" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.521 244018 DEBUG oslo_concurrency.lockutils [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.521 244018 DEBUG oslo_concurrency.lockutils [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Acquiring lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.522 244018 DEBUG oslo_concurrency.lockutils [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.522 244018 DEBUG oslo_concurrency.lockutils [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.523 244018 INFO nova.compute.manager [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Terminating instance#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.524 244018 DEBUG nova.compute.manager [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:31:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.521 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:c8:fb 10.100.0.4'], port_security=['fa:16:3e:da:c8:fb 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3f143481-f5f9-45d3-9d0b-b66e77ee0714', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd796561-bd80-4610-8abc-655ee9e3676f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8315f545d21f4f8d9a43d810f50e7b78', 'neutron:revision_number': '4', 'neutron:security_group_ids': '290a60c1-6d36-455f-8454-5a2e312fec46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3b5d31ff-0cb2-4a33-884a-0100942c964b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=a308a2fe-7f22-4520-8be1-36876d0d5361) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:31:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.523 157129 INFO neutron.agent.ovn.metadata.agent [-] Port a308a2fe-7f22-4520-8be1-36876d0d5361 in datapath cd796561-bd80-4610-8abc-655ee9e3676f unbound from our chassis#033[00m
Feb 25 07:31:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.524 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cd796561-bd80-4610-8abc-655ee9e3676f#033[00m
Feb 25 07:31:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.544 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1bd91573-1bdf-4cd0-8e10-793a3ee70a5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.552 244018 DEBUG nova.compute.manager [None req-a1534fad-b5ea-4148-b370-375060246dbc b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.558 244018 DEBUG oslo_concurrency.lockutils [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "9b9beb6f-4642-40d1-b9e5-96345c682341" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.559 244018 DEBUG oslo_concurrency.lockutils [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "9b9beb6f-4642-40d1-b9e5-96345c682341" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.560 244018 DEBUG oslo_concurrency.lockutils [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "9b9beb6f-4642-40d1-b9e5-96345c682341-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.562 244018 DEBUG oslo_concurrency.lockutils [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "9b9beb6f-4642-40d1-b9e5-96345c682341-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.563 244018 DEBUG oslo_concurrency.lockutils [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "9b9beb6f-4642-40d1-b9e5-96345c682341-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.564 244018 INFO nova.compute.manager [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Terminating instance#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.565 244018 DEBUG oslo_concurrency.lockutils [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "refresh_cache-9b9beb6f-4642-40d1-b9e5-96345c682341" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.565 244018 DEBUG oslo_concurrency.lockutils [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquired lock "refresh_cache-9b9beb6f-4642-40d1-b9e5-96345c682341" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.566 244018 DEBUG nova.network.neutron [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:31:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.582 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[743af188-b430-4105-a0c2-779add01fb88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:25 np0005629333 kernel: tape61932f1-9a (unregistering): left promiscuous mode
Feb 25 07:31:25 np0005629333 NetworkManager[49836]: <info>  [1772022685.5875] device (tape61932f1-9a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:31:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.589 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[52f018ff-fc75-4d46-a398-af09f1c6e62d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:31:25Z|00695|binding|INFO|Releasing lport e61932f1-9a36-4c95-a52f-470c182ac70f from this chassis (sb_readonly=0)
Feb 25 07:31:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:31:25Z|00696|binding|INFO|Setting lport e61932f1-9a36-4c95-a52f-470c182ac70f down in Southbound
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.593 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:31:25Z|00697|binding|INFO|Removing iface tape61932f1-9a ovn-installed in OVS
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.602 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.603 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:25:49 10.100.0.5'], port_security=['fa:16:3e:38:25:49 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7715b665-70af-4b84-b8a3-85ccea6ab805', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f200cab-355f-4832-b410-50e7782a19b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c7f30b1d5a1f4604bb44f655b6be0571', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dbda8792-bcc5-48a3-8da7-ae12bedd4d63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba165cab-01bd-4b15-a1be-78df84ef667b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=e61932f1-9a36-4c95-a52f-470c182ac70f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:31:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.630 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[22554f35-bf68-4162-91b4-8f743b5d01bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:25 np0005629333 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d00000048.scope: Deactivated successfully.
Feb 25 07:31:25 np0005629333 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d00000048.scope: Consumed 12.698s CPU time.
Feb 25 07:31:25 np0005629333 systemd-machined[210048]: Machine qemu-87-instance-00000048 terminated.
Feb 25 07:31:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.644 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e34ddfdc-bc9c-4fec-8b67-ab150c50ad20]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd796561-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:48:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 200], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460510, 'reachable_time': 29897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309422, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.657 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7319865c-6d1c-4dba-a941-701666102baf]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapcd796561-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460517, 'tstamp': 460517}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309423, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcd796561-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460518, 'tstamp': 460518}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309423, 'error': None, 'target': 'ovnmeta-cd796561-bd80-4610-8abc-655ee9e3676f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.658 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd796561-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.660 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.664 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.664 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd796561-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:31:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.665 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:31:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.665 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcd796561-b0, col_values=(('external_ids', {'iface-id': 'd456b40d-31d4-4447-97a8-c536382c29f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:31:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.666 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:31:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.667 157129 INFO neutron.agent.ovn.metadata.agent [-] Port e61932f1-9a36-4c95-a52f-470c182ac70f in datapath 2f200cab-355f-4832-b410-50e7782a19b7 unbound from our chassis#033[00m
Feb 25 07:31:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.668 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2f200cab-355f-4832-b410-50e7782a19b7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:31:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.669 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2d246b04-4b29-48ac-84ee-fd15d12f2206]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.669 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7 namespace which is not needed anymore#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.715 244018 DEBUG oslo_concurrency.lockutils [None req-a1534fad-b5ea-4148-b370-375060246dbc b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 15.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.752 244018 INFO nova.virt.libvirt.driver [-] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Instance destroyed successfully.#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.752 244018 DEBUG nova.objects.instance [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lazy-loading 'resources' on Instance uuid 7715b665-70af-4b84-b8a3-85ccea6ab805 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.778 244018 DEBUG nova.virt.libvirt.vif [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:30:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1810610267',display_name='tempest-ServersTestManualDisk-server-1810610267',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1810610267',id=72,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+ZyZhedZYP+GQcHR1V4WhhTfwoBFUWOed32RhXsQrJwpPpF6RwVYj0ZbkDWlwidrgnyGCLLNNfWgQRbNPFMLe3H8LOerGP8hP+UJf+LIu8QRlzX2YtSutjJfTqBBvhdg==',key_name='tempest-keypair-1369517783',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:31:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c7f30b1d5a1f4604bb44f655b6be0571',ramdisk_id='',reservation_id='r-60vlmj26',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-155925755',owner_user_name='tempest-ServersTestManualDisk-155925755-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:31:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='87b7b8b029dc48549d8d5982d7329f63',uuid=7715b665-70af-4b84-b8a3-85ccea6ab805,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e61932f1-9a36-4c95-a52f-470c182ac70f", "address": "fa:16:3e:38:25:49", "network": {"id": "2f200cab-355f-4832-b410-50e7782a19b7", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-633209158-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7f30b1d5a1f4604bb44f655b6be0571", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61932f1-9a", "ovs_interfaceid": "e61932f1-9a36-4c95-a52f-470c182ac70f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:31:25 np0005629333 neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7[307765]: [NOTICE]   (307803) : haproxy version is 2.8.14-c23fe91
Feb 25 07:31:25 np0005629333 neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7[307765]: [NOTICE]   (307803) : path to executable is /usr/sbin/haproxy
Feb 25 07:31:25 np0005629333 neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7[307765]: [WARNING]  (307803) : Exiting Master process...
Feb 25 07:31:25 np0005629333 neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7[307765]: [ALERT]    (307803) : Current worker (307813) exited with code 143 (Terminated)
Feb 25 07:31:25 np0005629333 neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7[307765]: [WARNING]  (307803) : All workers exited. Exiting... (0)
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.783 244018 DEBUG nova.network.os_vif_util [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Converting VIF {"id": "e61932f1-9a36-4c95-a52f-470c182ac70f", "address": "fa:16:3e:38:25:49", "network": {"id": "2f200cab-355f-4832-b410-50e7782a19b7", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-633209158-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7f30b1d5a1f4604bb44f655b6be0571", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape61932f1-9a", "ovs_interfaceid": "e61932f1-9a36-4c95-a52f-470c182ac70f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:31:25 np0005629333 systemd[1]: libpod-1ec6ee91f3c3be0dfe98ab540b248e251c3c364e30b34d7208b6e6af4cede795.scope: Deactivated successfully.
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.784 244018 DEBUG nova.network.os_vif_util [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:38:25:49,bridge_name='br-int',has_traffic_filtering=True,id=e61932f1-9a36-4c95-a52f-470c182ac70f,network=Network(2f200cab-355f-4832-b410-50e7782a19b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61932f1-9a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.784 244018 DEBUG os_vif [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:38:25:49,bridge_name='br-int',has_traffic_filtering=True,id=e61932f1-9a36-4c95-a52f-470c182ac70f,network=Network(2f200cab-355f-4832-b410-50e7782a19b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61932f1-9a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.786 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.787 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape61932f1-9a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.788 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.790 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:31:25 np0005629333 podman[309445]: 2026-02-25 12:31:25.792042228 +0000 UTC m=+0.046640583 container died 1ec6ee91f3c3be0dfe98ab540b248e251c3c364e30b34d7208b6e6af4cede795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.793 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.795 244018 INFO os_vif [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:38:25:49,bridge_name='br-int',has_traffic_filtering=True,id=e61932f1-9a36-4c95-a52f-470c182ac70f,network=Network(2f200cab-355f-4832-b410-50e7782a19b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape61932f1-9a')#033[00m
Feb 25 07:31:25 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1ec6ee91f3c3be0dfe98ab540b248e251c3c364e30b34d7208b6e6af4cede795-userdata-shm.mount: Deactivated successfully.
Feb 25 07:31:25 np0005629333 systemd[1]: var-lib-containers-storage-overlay-562f19f351018e8168f67d2f0ec9c0e4adfbd26e742451e2aa28d80d269eda69-merged.mount: Deactivated successfully.
Feb 25 07:31:25 np0005629333 podman[309445]: 2026-02-25 12:31:25.840628616 +0000 UTC m=+0.095226971 container cleanup 1ec6ee91f3c3be0dfe98ab540b248e251c3c364e30b34d7208b6e6af4cede795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 07:31:25 np0005629333 systemd[1]: libpod-conmon-1ec6ee91f3c3be0dfe98ab540b248e251c3c364e30b34d7208b6e6af4cede795.scope: Deactivated successfully.
Feb 25 07:31:25 np0005629333 podman[309501]: 2026-02-25 12:31:25.91481026 +0000 UTC m=+0.056975397 container remove 1ec6ee91f3c3be0dfe98ab540b248e251c3c364e30b34d7208b6e6af4cede795 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 07:31:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.922 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[251ff6c7-58d0-47a2-8023-824f46ef5640]: (4, ('Wed Feb 25 12:31:25 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7 (1ec6ee91f3c3be0dfe98ab540b248e251c3c364e30b34d7208b6e6af4cede795)\n1ec6ee91f3c3be0dfe98ab540b248e251c3c364e30b34d7208b6e6af4cede795\nWed Feb 25 12:31:25 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7 (1ec6ee91f3c3be0dfe98ab540b248e251c3c364e30b34d7208b6e6af4cede795)\n1ec6ee91f3c3be0dfe98ab540b248e251c3c364e30b34d7208b6e6af4cede795\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.924 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[67e8b6aa-e0c1-4746-b39f-a626bf122888]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.925 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f200cab-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:31:25 np0005629333 kernel: tap2f200cab-30: left promiscuous mode
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.927 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:25 np0005629333 nova_compute[244014]: 2026-02-25 12:31:25.935 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.939 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e9bfb4a0-2d27-4017-ae3e-2b4e5c242444]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 516 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 10 MiB/s wr, 376 op/s
Feb 25 07:31:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.952 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[819808a7-ff2c-4834-ad40-c5b27f0a6ae4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.953 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c4715f6e-b79f-41cf-8cb7-9ead44aabb2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.969 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[60d70df9-9863-4d95-a4da-8c4f8097db35]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 462999, 'reachable_time': 22537, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309520, 'error': None, 'target': 'ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:25 np0005629333 systemd[1]: run-netns-ovnmeta\x2d2f200cab\x2d355f\x2d4832\x2db410\x2d50e7782a19b7.mount: Deactivated successfully.
Feb 25 07:31:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.974 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2f200cab-355f-4832-b410-50e7782a19b7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:31:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:31:25.974 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[b3fd60e3-285e-426b-9a13-5d725a22df35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:31:26 np0005629333 nova_compute[244014]: 2026-02-25 12:31:26.128 244018 INFO nova.virt.libvirt.driver [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Deleting instance files /var/lib/nova/instances/7715b665-70af-4b84-b8a3-85ccea6ab805_del#033[00m
Feb 25 07:31:26 np0005629333 nova_compute[244014]: 2026-02-25 12:31:26.129 244018 INFO nova.virt.libvirt.driver [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Deletion of /var/lib/nova/instances/7715b665-70af-4b84-b8a3-85ccea6ab805_del complete#033[00m
Feb 25 07:31:26 np0005629333 nova_compute[244014]: 2026-02-25 12:31:26.140 244018 DEBUG nova.compute.manager [req-6ad34906-f358-4e78-988d-3b5a30784b12 req-ed32a51f-fce9-40b3-b852-e123247ecc8e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received event network-vif-unplugged-a308a2fe-7f22-4520-8be1-36876d0d5361 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:31:26 np0005629333 nova_compute[244014]: 2026-02-25 12:31:26.141 244018 DEBUG oslo_concurrency.lockutils [req-6ad34906-f358-4e78-988d-3b5a30784b12 req-ed32a51f-fce9-40b3-b852-e123247ecc8e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:31:26 np0005629333 nova_compute[244014]: 2026-02-25 12:31:26.142 244018 DEBUG oslo_concurrency.lockutils [req-6ad34906-f358-4e78-988d-3b5a30784b12 req-ed32a51f-fce9-40b3-b852-e123247ecc8e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:31:26 np0005629333 nova_compute[244014]: 2026-02-25 12:31:26.142 244018 DEBUG oslo_concurrency.lockutils [req-6ad34906-f358-4e78-988d-3b5a30784b12 req-ed32a51f-fce9-40b3-b852-e123247ecc8e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:31:26 np0005629333 nova_compute[244014]: 2026-02-25 12:31:26.143 244018 DEBUG nova.compute.manager [req-6ad34906-f358-4e78-988d-3b5a30784b12 req-ed32a51f-fce9-40b3-b852-e123247ecc8e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] No waiting events found dispatching network-vif-unplugged-a308a2fe-7f22-4520-8be1-36876d0d5361 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:31:26 np0005629333 nova_compute[244014]: 2026-02-25 12:31:26.143 244018 WARNING nova.compute.manager [req-6ad34906-f358-4e78-988d-3b5a30784b12 req-ed32a51f-fce9-40b3-b852-e123247ecc8e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received unexpected event network-vif-unplugged-a308a2fe-7f22-4520-8be1-36876d0d5361 for instance with vm_state stopped and task_state None.#033[00m
Feb 25 07:31:26 np0005629333 nova_compute[244014]: 2026-02-25 12:31:26.186 244018 DEBUG nova.network.neutron [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:31:26 np0005629333 nova_compute[244014]: 2026-02-25 12:31:26.208 244018 INFO nova.compute.manager [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:31:26 np0005629333 nova_compute[244014]: 2026-02-25 12:31:26.208 244018 DEBUG oslo.service.loopingcall [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:31:26 np0005629333 nova_compute[244014]: 2026-02-25 12:31:26.209 244018 DEBUG nova.compute.manager [-] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:31:26 np0005629333 nova_compute[244014]: 2026-02-25 12:31:26.210 244018 DEBUG nova.network.neutron [-] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:31:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:31:26 np0005629333 nova_compute[244014]: 2026-02-25 12:31:26.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:31:26 np0005629333 nova_compute[244014]: 2026-02-25 12:31:26.976 244018 DEBUG nova.network.neutron [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:31:26 np0005629333 nova_compute[244014]: 2026-02-25 12:31:26.995 244018 DEBUG oslo_concurrency.lockutils [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Releasing lock "refresh_cache-9b9beb6f-4642-40d1-b9e5-96345c682341" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:31:26 np0005629333 nova_compute[244014]: 2026-02-25 12:31:26.996 244018 DEBUG nova.compute.manager [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:31:27 np0005629333 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Feb 25 07:31:27 np0005629333 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d0000004a.scope: Consumed 5.981s CPU time.
Feb 25 07:31:27 np0005629333 systemd-machined[210048]: Machine qemu-91-instance-0000004a terminated.
Feb 25 07:31:27 np0005629333 nova_compute[244014]: 2026-02-25 12:31:27.222 244018 INFO nova.virt.libvirt.driver [-] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Instance destroyed successfully.#033[00m
Feb 25 07:31:27 np0005629333 nova_compute[244014]: 2026-02-25 12:31:27.223 244018 DEBUG nova.objects.instance [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lazy-loading 'resources' on Instance uuid 9b9beb6f-4642-40d1-b9e5-96345c682341 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:31:27 np0005629333 nova_compute[244014]: 2026-02-25 12:31:27.622 244018 INFO nova.virt.libvirt.driver [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Deleting instance files /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341_del#033[00m
Feb 25 07:31:27 np0005629333 nova_compute[244014]: 2026-02-25 12:31:27.624 244018 INFO nova.virt.libvirt.driver [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Deletion of /var/lib/nova/instances/9b9beb6f-4642-40d1-b9e5-96345c682341_del complete#033[00m
Feb 25 07:31:27 np0005629333 nova_compute[244014]: 2026-02-25 12:31:27.648 244018 DEBUG nova.network.neutron [-] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:31:27 np0005629333 nova_compute[244014]: 2026-02-25 12:31:27.696 244018 INFO nova.compute.manager [-] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Took 1.49 seconds to deallocate network for instance.#033[00m
Feb 25 07:31:27 np0005629333 nova_compute[244014]: 2026-02-25 12:31:27.705 244018 INFO nova.compute.manager [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:31:27 np0005629333 nova_compute[244014]: 2026-02-25 12:31:27.706 244018 DEBUG oslo.service.loopingcall [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:31:27 np0005629333 nova_compute[244014]: 2026-02-25 12:31:27.706 244018 DEBUG nova.compute.manager [-] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:31:27 np0005629333 nova_compute[244014]: 2026-02-25 12:31:27.706 244018 DEBUG nova.network.neutron [-] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:31:27 np0005629333 nova_compute[244014]: 2026-02-25 12:31:27.771 244018 DEBUG oslo_concurrency.lockutils [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:31:27 np0005629333 nova_compute[244014]: 2026-02-25 12:31:27.771 244018 DEBUG oslo_concurrency.lockutils [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:31:27 np0005629333 nova_compute[244014]: 2026-02-25 12:31:27.912 244018 DEBUG nova.network.neutron [-] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:31:27 np0005629333 nova_compute[244014]: 2026-02-25 12:31:27.933 244018 DEBUG nova.network.neutron [-] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:31:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 418 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 10 MiB/s wr, 481 op/s
Feb 25 07:31:27 np0005629333 nova_compute[244014]: 2026-02-25 12:31:27.953 244018 DEBUG oslo_concurrency.processutils [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:31:27 np0005629333 nova_compute[244014]: 2026-02-25 12:31:27.983 244018 INFO nova.compute.manager [-] [instance: 9b9beb6f-4642-40d1-b9e5-96345c682341] Took 0.28 seconds to deallocate network for instance.#033[00m
Feb 25 07:31:28 np0005629333 nova_compute[244014]: 2026-02-25 12:31:28.034 244018 DEBUG oslo_concurrency.lockutils [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:31:28 np0005629333 nova_compute[244014]: 2026-02-25 12:31:28.365 244018 DEBUG nova.compute.manager [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received event network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:31:28 np0005629333 nova_compute[244014]: 2026-02-25 12:31:28.365 244018 DEBUG oslo_concurrency.lockutils [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:31:28 np0005629333 nova_compute[244014]: 2026-02-25 12:31:28.366 244018 DEBUG oslo_concurrency.lockutils [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:31:28 np0005629333 nova_compute[244014]: 2026-02-25 12:31:28.366 244018 DEBUG oslo_concurrency.lockutils [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:31:28 np0005629333 nova_compute[244014]: 2026-02-25 12:31:28.366 244018 DEBUG nova.compute.manager [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] No waiting events found dispatching network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:31:28 np0005629333 nova_compute[244014]: 2026-02-25 12:31:28.367 244018 WARNING nova.compute.manager [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Received unexpected event network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 for instance with vm_state stopped and task_state None.#033[00m
Feb 25 07:31:28 np0005629333 nova_compute[244014]: 2026-02-25 12:31:28.367 244018 DEBUG nova.compute.manager [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Received event network-vif-unplugged-e61932f1-9a36-4c95-a52f-470c182ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:31:28 np0005629333 nova_compute[244014]: 2026-02-25 12:31:28.367 244018 DEBUG oslo_concurrency.lockutils [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:31:28 np0005629333 nova_compute[244014]: 2026-02-25 12:31:28.367 244018 DEBUG oslo_concurrency.lockutils [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:31:28 np0005629333 nova_compute[244014]: 2026-02-25 12:31:28.368 244018 DEBUG oslo_concurrency.lockutils [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:31:28 np0005629333 nova_compute[244014]: 2026-02-25 12:31:28.368 244018 DEBUG nova.compute.manager [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] No waiting events found dispatching network-vif-unplugged-e61932f1-9a36-4c95-a52f-470c182ac70f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:31:28 np0005629333 nova_compute[244014]: 2026-02-25 12:31:28.368 244018 WARNING nova.compute.manager [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Received unexpected event network-vif-unplugged-e61932f1-9a36-4c95-a52f-470c182ac70f for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:31:28 np0005629333 nova_compute[244014]: 2026-02-25 12:31:28.368 244018 DEBUG nova.compute.manager [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Received event network-vif-plugged-e61932f1-9a36-4c95-a52f-470c182ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:31:28 np0005629333 nova_compute[244014]: 2026-02-25 12:31:28.369 244018 DEBUG oslo_concurrency.lockutils [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:31:28 np0005629333 nova_compute[244014]: 2026-02-25 12:31:28.369 244018 DEBUG oslo_concurrency.lockutils [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:31:28 np0005629333 nova_compute[244014]: 2026-02-25 12:31:28.369 244018 DEBUG oslo_concurrency.lockutils [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:31:28 np0005629333 nova_compute[244014]: 2026-02-25 12:31:28.369 244018 DEBUG nova.compute.manager [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] No waiting events found dispatching network-vif-plugged-e61932f1-9a36-4c95-a52f-470c182ac70f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:31:28 np0005629333 nova_compute[244014]: 2026-02-25 12:31:28.370 244018 WARNING nova.compute.manager [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Received unexpected event network-vif-plugged-e61932f1-9a36-4c95-a52f-470c182ac70f for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:31:28 np0005629333 nova_compute[244014]: 2026-02-25 12:31:28.370 244018 DEBUG nova.compute.manager [req-f45864be-e763-4610-b35c-10e1ce5b615f req-3ebe430d-3604-4ce2-9ec2-f7fd5c9a84a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7715b665-70af-4b84-b8a3-85ccea6ab805] Received event network-vif-deleted-e61932f1-9a36-4c95-a52f-470c182ac70f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:31:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:31:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4279200398' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:31:28 np0005629333 nova_compute[244014]: 2026-02-25 12:31:28.556 244018 DEBUG oslo_concurrency.processutils [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:31:28 np0005629333 nova_compute[244014]: 2026-02-25 12:31:28.563 244018 DEBUG nova.compute.provider_tree [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:31:28 np0005629333 nova_compute[244014]: 2026-02-25 12:31:28.578 244018 DEBUG nova.scheduler.client.report [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:31:28 np0005629333 nova_compute[244014]: 2026-02-25 12:31:28.604 244018 DEBUG oslo_concurrency.lockutils [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:31:28 np0005629333 nova_compute[244014]: 2026-02-25 12:31:28.606 244018 DEBUG oslo_concurrency.lockutils [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:31:28 np0005629333 nova_compute[244014]: 2026-02-25 12:31:28.646 244018 INFO nova.scheduler.client.report [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Deleted allocations for instance 7715b665-70af-4b84-b8a3-85ccea6ab805#033[00m
Feb 25 07:31:28 np0005629333 nova_compute[244014]: 2026-02-25 12:31:28.744 244018 DEBUG oslo_concurrency.processutils [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:31:28 np0005629333 nova_compute[244014]: 2026-02-25 12:31:28.793 244018 DEBUG oslo_concurrency.lockutils [None req-3e1d4f83-3520-4644-9e8e-83add870c419 87b7b8b029dc48549d8d5982d7329f63 c7f30b1d5a1f4604bb44f655b6be0571 - - default default] Lock "7715b665-70af-4b84-b8a3-85ccea6ab805" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.272s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:31:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:31:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2030116511' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:31:29 np0005629333 nova_compute[244014]: 2026-02-25 12:31:29.346 244018 DEBUG oslo_concurrency.processutils [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:31:29 np0005629333 nova_compute[244014]: 2026-02-25 12:31:29.351 244018 DEBUG nova.compute.provider_tree [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:31:29 np0005629333 nova_compute[244014]: 2026-02-25 12:31:29.373 244018 DEBUG nova.scheduler.client.report [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:31:29 np0005629333 nova_compute[244014]: 2026-02-25 12:31:29.403 244018 DEBUG oslo_concurrency.lockutils [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:31:29 np0005629333 nova_compute[244014]: 2026-02-25 12:31:29.443 244018 INFO nova.scheduler.client.report [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Deleted allocations for instance 9b9beb6f-4642-40d1-b9e5-96345c682341#033[00m
Feb 25 07:31:29 np0005629333 nova_compute[244014]: 2026-02-25 12:31:29.522 244018 DEBUG oslo_concurrency.lockutils [None req-af1076e1-3666-478d-9e6c-cbd680baa1fd 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "9b9beb6f-4642-40d1-b9e5-96345c682341" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:31:29 np0005629333 nova_compute[244014]: 2026-02-25 12:31:29.801 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 418 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.0 MiB/s wr, 295 op/s
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.137 244018 INFO nova.compute.manager [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Rebuilding instance#033[00m
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.582 244018 DEBUG oslo_concurrency.lockutils [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "e954c936-91fe-4aa5-8c91-78ec08c85221" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.583 244018 DEBUG oslo_concurrency.lockutils [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "e954c936-91fe-4aa5-8c91-78ec08c85221" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.584 244018 DEBUG oslo_concurrency.lockutils [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "e954c936-91fe-4aa5-8c91-78ec08c85221-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.584 244018 DEBUG oslo_concurrency.lockutils [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "e954c936-91fe-4aa5-8c91-78ec08c85221-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.584 244018 DEBUG oslo_concurrency.lockutils [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "e954c936-91fe-4aa5-8c91-78ec08c85221-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.585 244018 INFO nova.compute.manager [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Terminating instance
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.586 244018 DEBUG oslo_concurrency.lockutils [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "refresh_cache-e954c936-91fe-4aa5-8c91-78ec08c85221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.586 244018 DEBUG oslo_concurrency.lockutils [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquired lock "refresh_cache-e954c936-91fe-4aa5-8c91-78ec08c85221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.587 244018 DEBUG nova.network.neutron [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.664 244018 DEBUG nova.objects.instance [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 3f143481-f5f9-45d3-9d0b-b66e77ee0714 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.686 244018 DEBUG nova.compute.manager [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.745 244018 DEBUG nova.objects.instance [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'pci_requests' on Instance uuid 3f143481-f5f9-45d3-9d0b-b66e77ee0714 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.756 244018 DEBUG nova.objects.instance [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3f143481-f5f9-45d3-9d0b-b66e77ee0714 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.766 244018 DEBUG nova.objects.instance [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'resources' on Instance uuid 3f143481-f5f9-45d3-9d0b-b66e77ee0714 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.783 244018 DEBUG nova.objects.instance [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'migration_context' on Instance uuid 3f143481-f5f9-45d3-9d0b-b66e77ee0714 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.790 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.793 244018 DEBUG nova.objects.instance [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.797 244018 INFO nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Instance already shutdown.
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.803 244018 INFO nova.virt.libvirt.driver [-] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Instance destroyed successfully.
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.809 244018 INFO nova.virt.libvirt.driver [-] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Instance destroyed successfully.
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.810 244018 DEBUG nova.virt.libvirt.vif [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:30:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1795002660',display_name='tempest-tempest.common.compute-instance-1795002660',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1795002660',id=75,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:31:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='8315f545d21f4f8d9a43d810f50e7b78',ramdisk_id='',reservation_id='r-p68nlja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1615886603',owner_user_name='tempest-ServerActionsTestOtherA-1615886603-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:31:29Z,user_data=None,user_id='b63928451c6a4137bb65e25561326aff',uuid=3f143481-f5f9-45d3-9d0b-b66e77ee0714,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.810 244018 DEBUG nova.network.os_vif_util [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converting VIF {"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.811 244018 DEBUG nova.network.os_vif_util [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:c8:fb,bridge_name='br-int',has_traffic_filtering=True,id=a308a2fe-7f22-4520-8be1-36876d0d5361,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa308a2fe-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.811 244018 DEBUG os_vif [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:c8:fb,bridge_name='br-int',has_traffic_filtering=True,id=a308a2fe-7f22-4520-8be1-36876d0d5361,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa308a2fe-7f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.816 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.817 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa308a2fe-7f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.819 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.821 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.824 244018 INFO os_vif [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:c8:fb,bridge_name='br-int',has_traffic_filtering=True,id=a308a2fe-7f22-4520-8be1-36876d0d5361,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa308a2fe-7f')
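[Editor's note] The unplug above is carried out as an OVSDB transaction: os-vif queues a DelPortCommand (visible in the txn line) that removes the tap port from br-int. A hedged standalone sketch of the same operation with ovsdbapp, assuming the default local OVSDB socket path (port and bridge names taken from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local Open_vSwitch database over the unix socket.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    conn = connection.Connection(idl=idl, timeout=10)
    ovs = impl_idl.OvsdbIdl(conn)

    # Equivalent of DelPortCommand(port=tapa308a2fe-7f, bridge=br-int,
    # if_exists=True) from the transaction logged above.
    ovs.del_port('tapa308a2fe-7f', bridge='br-int',
                 if_exists=True).execute(check_error=True)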
Feb 25 07:31:30 np0005629333 nova_compute[244014]: 2026-02-25 12:31:30.865 244018 DEBUG nova.network.neutron [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:31:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:31:30
Feb 25 07:31:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:31:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:31:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'volumes', 'backups', 'images', '.rgw.root', 'cephfs.cephfs.meta']
Feb 25 07:31:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:31:31 np0005629333 nova_compute[244014]: 2026-02-25 12:31:31.155 244018 DEBUG nova.network.neutron [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:31:31 np0005629333 nova_compute[244014]: 2026-02-25 12:31:31.172 244018 DEBUG oslo_concurrency.lockutils [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Releasing lock "refresh_cache-e954c936-91fe-4aa5-8c91-78ec08c85221" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:31:31 np0005629333 nova_compute[244014]: 2026-02-25 12:31:31.173 244018 DEBUG nova.compute.manager [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 07:31:31 np0005629333 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d00000049.scope: Deactivated successfully.
Feb 25 07:31:31 np0005629333 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d00000049.scope: Consumed 12.876s CPU time.
Feb 25 07:31:31 np0005629333 systemd-machined[210048]: Machine qemu-88-instance-00000049 terminated.
Feb 25 07:31:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:31:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:31:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:31:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:31:31 np0005629333 nova_compute[244014]: 2026-02-25 12:31:31.595 244018 INFO nova.virt.libvirt.driver [-] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Instance destroyed successfully.
Feb 25 07:31:31 np0005629333 nova_compute[244014]: 2026-02-25 12:31:31.595 244018 DEBUG nova.objects.instance [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lazy-loading 'resources' on Instance uuid e954c936-91fe-4aa5-8c91-78ec08c85221 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:31:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:31:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:31:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:31:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:31:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:31:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:31:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:31:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:31:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:31:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:31:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:31:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:31:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:31:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 372 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.0 MiB/s wr, 311 op/s
Feb 25 07:31:32 np0005629333 nova_compute[244014]: 2026-02-25 12:31:32.709 244018 INFO nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Deleting instance files /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714_del
Feb 25 07:31:32 np0005629333 nova_compute[244014]: 2026-02-25 12:31:32.710 244018 INFO nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Deletion of /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714_del complete
Feb 25 07:31:32 np0005629333 nova_compute[244014]: 2026-02-25 12:31:32.857 244018 DEBUG nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:31:32 np0005629333 nova_compute[244014]: 2026-02-25 12:31:32.858 244018 INFO nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Creating image(s)
Feb 25 07:31:32 np0005629333 nova_compute[244014]: 2026-02-25 12:31:32.887 244018 DEBUG nova.storage.rbd_utils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:31:32 np0005629333 nova_compute[244014]: 2026-02-25 12:31:32.919 244018 DEBUG nova.storage.rbd_utils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:31:32 np0005629333 nova_compute[244014]: 2026-02-25 12:31:32.954 244018 DEBUG nova.storage.rbd_utils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:31:32 np0005629333 nova_compute[244014]: 2026-02-25 12:31:32.959 244018 DEBUG oslo_concurrency.processutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:31:33 np0005629333 nova_compute[244014]: 2026-02-25 12:31:33.090 244018 DEBUG oslo_concurrency.processutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
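[Editor's note] The command pair above shows oslo.concurrency wrapping qemu-img in its prlimit shim: it re-execs under oslo_concurrency.prlimit so the probe runs with a 1 GiB address-space cap (--as=1073741824) and a 30 s CPU cap. A hedged sketch of the same probe from Python, using processutils' real prlimit support and the _base image path from this log:

    import json
    from oslo_concurrency import processutils

    # Probe the cached base image with the same rlimits nova applied.
    out, _err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538',
        '--force-share', '--output=json',
        env_variables={'LC_ALL': 'C', 'LANG': 'C'},
        prlimit=processutils.ProcessLimits(address_space=1024 ** 3,
                                           cpu_time=30),
    )
    print(json.loads(out)['virtual-size'])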
Feb 25 07:31:33 np0005629333 nova_compute[244014]: 2026-02-25 12:31:33.092 244018 DEBUG oslo_concurrency.lockutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:31:33 np0005629333 nova_compute[244014]: 2026-02-25 12:31:33.093 244018 DEBUG oslo_concurrency.lockutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:31:33 np0005629333 nova_compute[244014]: 2026-02-25 12:31:33.094 244018 DEBUG oslo_concurrency.lockutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:31:33 np0005629333 nova_compute[244014]: 2026-02-25 12:31:33.124 244018 DEBUG nova.storage.rbd_utils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:31:33 np0005629333 nova_compute[244014]: 2026-02-25 12:31:33.128 244018 DEBUG oslo_concurrency.processutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:31:33 np0005629333 nova_compute[244014]: 2026-02-25 12:31:33.868 244018 DEBUG oslo_concurrency.processutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.740s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:31:33 np0005629333 nova_compute[244014]: 2026-02-25 12:31:33.912 244018 INFO nova.virt.libvirt.driver [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Deleting instance files /var/lib/nova/instances/e954c936-91fe-4aa5-8c91-78ec08c85221_del
Feb 25 07:31:33 np0005629333 nova_compute[244014]: 2026-02-25 12:31:33.913 244018 INFO nova.virt.libvirt.driver [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Deletion of /var/lib/nova/instances/e954c936-91fe-4aa5-8c91-78ec08c85221_del complete
Feb 25 07:31:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 322 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.6 MiB/s wr, 237 op/s
Feb 25 07:31:33 np0005629333 nova_compute[244014]: 2026-02-25 12:31:33.953 244018 DEBUG nova.storage.rbd_utils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] resizing rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
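[Editor's note] The resize target above is just the flavor's root disk converted to bytes: m1.nano (seen later in this log) has root_gb=1, and 1 GiB is 1024**3 bytes.

    root_gb = 1
    print(root_gb * 1024 ** 3)  # 1073741824, the byte count logged above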
Feb 25 07:31:33 np0005629333 nova_compute[244014]: 2026-02-25 12:31:33.988 244018 INFO nova.compute.manager [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Took 2.81 seconds to destroy the instance on the hypervisor.
Feb 25 07:31:33 np0005629333 nova_compute[244014]: 2026-02-25 12:31:33.989 244018 DEBUG oslo.service.loopingcall [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 07:31:33 np0005629333 nova_compute[244014]: 2026-02-25 12:31:33.989 244018 DEBUG nova.compute.manager [-] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 07:31:33 np0005629333 nova_compute[244014]: 2026-02-25 12:31:33.989 244018 DEBUG nova.network.neutron [-] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.038 244018 DEBUG nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.039 244018 DEBUG nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Ensure instance console log exists: /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.039 244018 DEBUG oslo_concurrency.lockutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.039 244018 DEBUG oslo_concurrency.lockutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.040 244018 DEBUG oslo_concurrency.lockutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.041 244018 DEBUG nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Start _get_guest_xml network_info=[{"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.045 244018 WARNING nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.050 244018 DEBUG nova.virt.libvirt.host [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.050 244018 DEBUG nova.virt.libvirt.host [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.053 244018 DEBUG nova.virt.libvirt.host [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.053 244018 DEBUG nova.virt.libvirt.host [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
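[Editor's note] The two probes above check for a CPU controller first under cgroups v1, then v2; on this el9 host only the v2 check succeeds. A sketch of the v2 idea (this mirrors the concept, not nova's exact implementation): on a cgroup-v2 host the enabled controllers are listed in a single file at the cgroup root.

    from pathlib import Path

    def has_cgroupsv2_cpu_controller():
        # /sys/fs/cgroup/cgroup.controllers lists controllers such as
        # "cpuset cpu io memory pids" on a unified-hierarchy host.
        path = Path('/sys/fs/cgroup/cgroup.controllers')
        return path.exists() and 'cpu' in path.read_text().split()

    print(has_cgroupsv2_cpu_controller())  # True on this host, per the log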
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.053 244018 DEBUG nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.053 244018 DEBUG nova.virt.hardware [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.054 244018 DEBUG nova.virt.hardware [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.054 244018 DEBUG nova.virt.hardware [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.054 244018 DEBUG nova.virt.hardware [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.054 244018 DEBUG nova.virt.hardware [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.054 244018 DEBUG nova.virt.hardware [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.055 244018 DEBUG nova.virt.hardware [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.055 244018 DEBUG nova.virt.hardware [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.055 244018 DEBUG nova.virt.hardware [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.055 244018 DEBUG nova.virt.hardware [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.055 244018 DEBUG nova.virt.hardware [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
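[Editor's note] The topology walk above ends in a single candidate because, with no flavor or image constraints (all prefs and limits are 0, i.e. unset), every (sockets, cores, threads) triple whose product equals the vCPU count is admissible, and for 1 vCPU that is only 1:1:1. A self-contained sketch of that enumeration (illustrative, not nova.virt.hardware verbatim):

    def possible_topologies(vcpus, max_each=65536):
        # Yield every triple whose product is exactly the vCPU count,
        # bounded by the per-dimension limit seen in the log (65536).
        bound = min(vcpus, max_each)
        for sockets in range(1, bound + 1):
            for cores in range(1, bound + 1):
                for threads in range(1, bound + 1):
                    if sockets * cores * threads == vcpus:
                        yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log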
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.056 244018 DEBUG nova.objects.instance [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3f143481-f5f9-45d3-9d0b-b66e77ee0714 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.070 244018 DEBUG oslo_concurrency.processutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:31:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:31:34Z|00698|binding|INFO|Releasing lport d456b40d-31d4-4447-97a8-c536382c29f9 from this chassis (sb_readonly=0)
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.172 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.271 244018 DEBUG nova.network.neutron [-] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.291 244018 DEBUG nova.network.neutron [-] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.311 244018 INFO nova.compute.manager [-] [instance: e954c936-91fe-4aa5-8c91-78ec08c85221] Took 0.32 seconds to deallocate network for instance.
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.386 244018 DEBUG oslo_concurrency.lockutils [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.387 244018 DEBUG oslo_concurrency.lockutils [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.529 244018 DEBUG oslo_concurrency.processutils [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:31:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:31:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3486925998' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.617 244018 DEBUG oslo_concurrency.processutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
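[Editor's note] Each "ceph mon dump" the compute service runs shows up twice in this log: once as the processutils CMD line here, and once as a mon-side audit dispatch from entity='client.openstack' (the handle_command lines above). A hedged sketch of the same client call and a way to read its JSON monmap output:

    import json
    import subprocess

    # Same invocation nova's RBD driver runs, per the CMD line above.
    monmap = json.loads(subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']))
    # The monmap JSON carries an epoch and the list of monitors.
    print(monmap['epoch'], [m['name'] for m in monmap.get('mons', [])])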
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.648 244018 DEBUG nova.storage.rbd_utils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.654 244018 DEBUG oslo_concurrency.processutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:31:34 np0005629333 nova_compute[244014]: 2026-02-25 12:31:34.803 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:31:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:31:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3339143367' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.109 244018 DEBUG oslo_concurrency.processutils [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.114 244018 DEBUG nova.compute.provider_tree [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.138 244018 DEBUG nova.scheduler.client.report [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.169 244018 DEBUG oslo_concurrency.lockutils [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.195 244018 INFO nova.scheduler.client.report [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Deleted allocations for instance e954c936-91fe-4aa5-8c91-78ec08c85221
Feb 25 07:31:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:31:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/907256136' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.237 244018 DEBUG oslo_concurrency.processutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.239 244018 DEBUG nova.virt.libvirt.vif [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:30:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1795002660',display_name='tempest-tempest.common.compute-instance-1795002660',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1795002660',id=75,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:31:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='8315f545d21f4f8d9a43d810f50e7b78',ramdisk_id='',reservation_id='r-p68nlja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1615886603',owner_user_name='tempest-ServerActionsTestOtherA-1615886603-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:31:32Z,user_data=None,user_id='b63928451c6a4137bb65e25561326aff',uuid=3f143481-f5f9-45d3-9d0b-b66e77ee0714,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.240 244018 DEBUG nova.network.os_vif_util [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converting VIF {"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.241 244018 DEBUG nova.network.os_vif_util [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:c8:fb,bridge_name='br-int',has_traffic_filtering=True,id=a308a2fe-7f22-4520-8be1-36876d0d5361,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa308a2fe-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
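[Editor's note] The generated libvirt domain XML follows in the next lines. One unit detail worth noting before reading it: libvirt's <memory> element is in KiB, so the m1.nano flavor's memory_mb=128 appears there as 131072.

    memory_mb = 128
    print(memory_mb * 1024)  # 131072, the <memory> value in the XML below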
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.245 244018 DEBUG nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:31:35 np0005629333 nova_compute[244014]:  <uuid>3f143481-f5f9-45d3-9d0b-b66e77ee0714</uuid>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:  <name>instance-0000004b</name>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <nova:name>tempest-tempest.common.compute-instance-1795002660</nova:name>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:31:34</nova:creationTime>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:31:35 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:        <nova:user uuid="b63928451c6a4137bb65e25561326aff">tempest-ServerActionsTestOtherA-1615886603-project-member</nova:user>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:        <nova:project uuid="8315f545d21f4f8d9a43d810f50e7b78">tempest-ServerActionsTestOtherA-1615886603</nova:project>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="f0ef5a9a-23b8-4883-8e47-feb7403a11d8"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:        <nova:port uuid="a308a2fe-7f22-4520-8be1-36876d0d5361">
Feb 25 07:31:35 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <entry name="serial">3f143481-f5f9-45d3-9d0b-b66e77ee0714</entry>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <entry name="uuid">3f143481-f5f9-45d3-9d0b-b66e77ee0714</entry>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk">
Feb 25 07:31:35 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:31:35 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk.config">
Feb 25 07:31:35 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:31:35 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:da:c8:fb"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <target dev="tapa308a2fe-7f"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/console.log" append="off"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:31:35 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:31:35 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:31:35 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:31:35 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
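With _get_guest_xml complete, the driver hands the <domain> document above to libvirt. A hedged sketch of that step via libvirt-python (the file name is hypothetical, and nova goes through its own guest wrapper rather than this direct call):

    import libvirt

    xml = open("domain.xml").read()      # the <domain> document logged above
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(xml)        # make the domain persistent
        dom.createWithFlags(0)           # and boot it
    finally:
        conn.close()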
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.246 244018 DEBUG nova.compute.manager [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Preparing to wait for external event network-vif-plugged-a308a2fe-7f22-4520-8be1-36876d0d5361 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.247 244018 DEBUG oslo_concurrency.lockutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Acquiring lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.247 244018 DEBUG oslo_concurrency.lockutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.248 244018 DEBUG oslo_concurrency.lockutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lock "3f143481-f5f9-45d3-9d0b-b66e77ee0714-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
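The acquire/release pair above is oslo.concurrency's named-lock pattern guarding the per-instance event table. The shape of that pattern, sketched (the body is illustrative):

    from oslo_concurrency import lockutils

    with lockutils.lock("3f143481-f5f9-45d3-9d0b-b66e77ee0714-events"):
        # critical section: create or fetch the event object that
        # network-vif-plugged-a308a2fe-... will later signal
        pass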
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.249 244018 DEBUG nova.virt.libvirt.vif [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:30:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1795002660',display_name='tempest-tempest.common.compute-instance-1795002660',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1795002660',id=75,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:31:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='8315f545d21f4f8d9a43d810f50e7b78',ramdisk_id='',reservation_id='r-p68nlja0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1615886603',owner_user_name='tempest-ServerActionsTestOtherA-1615886603-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:31:32Z,user_data=None,user_id='b63928451c6a4137bb65e25561326aff',uuid=3f143481-f5f9-45d3-9d0b-b66e77ee0714,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.249 244018 DEBUG nova.network.os_vif_util [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converting VIF {"id": "a308a2fe-7f22-4520-8be1-36876d0d5361", "address": "fa:16:3e:da:c8:fb", "network": {"id": "cd796561-bd80-4610-8abc-655ee9e3676f", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2012307138-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8315f545d21f4f8d9a43d810f50e7b78", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa308a2fe-7f", "ovs_interfaceid": "a308a2fe-7f22-4520-8be1-36876d0d5361", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.250 244018 DEBUG nova.network.os_vif_util [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:c8:fb,bridge_name='br-int',has_traffic_filtering=True,id=a308a2fe-7f22-4520-8be1-36876d0d5361,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa308a2fe-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.251 244018 DEBUG os_vif [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:c8:fb,bridge_name='br-int',has_traffic_filtering=True,id=a308a2fe-7f22-4520-8be1-36876d0d5361,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa308a2fe-7f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.253 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.253 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.254 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.258 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.259 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa308a2fe-7f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.260 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa308a2fe-7f, col_values=(('external_ids', {'iface-id': 'a308a2fe-7f22-4520-8be1-36876d0d5361', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:c8:fb', 'vm-uuid': '3f143481-f5f9-45d3-9d0b-b66e77ee0714'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.262 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:35 np0005629333 NetworkManager[49836]: <info>  [1772022695.2633] manager: (tapa308a2fe-7f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/300)
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.266 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.268 244018 INFO os_vif [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:c8:fb,bridge_name='br-int',has_traffic_filtering=True,id=a308a2fe-7f22-4520-8be1-36876d0d5361,network=Network(cd796561-bd80-4610-8abc-655ee9e3676f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa308a2fe-7f')#033[00m
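The plug above amounts to two ovsdbapp transactions: AddBridgeCommand (a no-op here, "Transaction caused no change") followed by AddPortCommand plus DbSetCommand on the new Interface row. A sketch of the same sequence against a local ovsdb-server, assuming the usual unix socket path (connection details vary per deployment):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    external_ids = {
        "iface-id": "a308a2fe-7f22-4520-8be1-36876d0d5361",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:da:c8:fb",
        "vm-uuid": "3f143481-f5f9-45d3-9d0b-b66e77ee0714",
    }
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tapa308a2fe-7f", may_exist=True))
        txn.add(api.db_set("Interface", "tapa308a2fe-7f",
                           ("external_ids", external_ids)))

Writing iface-id into external_ids is what lets ovn-controller match the OVS interface to its logical port; the corresponding claim appears below at 12:31:37.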
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.282 244018 DEBUG oslo_concurrency.lockutils [None req-5d63271c-feb4-4a39-b756-0674b1b0f06b 28ac489e13c44a30a1aafde4937f3cff e7c1be4d53154ae793a86c8bcc2c5b47 - - default default] Lock "e954c936-91fe-4aa5-8c91-78ec08c85221" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.367 244018 DEBUG nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.367 244018 DEBUG nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.368 244018 DEBUG nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] No VIF found with MAC fa:16:3e:da:c8:fb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.370 244018 INFO nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Using config drive#033[00m
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.406 244018 DEBUG nova.storage.rbd_utils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.435 244018 DEBUG nova.objects.instance [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 3f143481-f5f9-45d3-9d0b-b66e77ee0714 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:31:35 np0005629333 nova_compute[244014]: 2026-02-25 12:31:35.497 244018 DEBUG nova.objects.instance [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Lazy-loading 'keypairs' on Instance uuid 3f143481-f5f9-45d3-9d0b-b66e77ee0714 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:31:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 322 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 28 KiB/s wr, 142 op/s
Feb 25 07:31:36 np0005629333 nova_compute[244014]: 2026-02-25 12:31:36.027 244018 INFO nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Creating config drive at /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/disk.config#033[00m
Feb 25 07:31:36 np0005629333 nova_compute[244014]: 2026-02-25 12:31:36.034 244018 DEBUG oslo_concurrency.processutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp7t6jeuso execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:31:36 np0005629333 nova_compute[244014]: 2026-02-25 12:31:36.182 244018 DEBUG oslo_concurrency.processutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp7t6jeuso" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
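The mkisofs run above goes through oslo.concurrency's processutils. An equivalent sketch with the argument list copied from the logged command (note the -publisher value is a single argv element even though the flattened log shows it unquoted):

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        "/usr/bin/mkisofs",
        "-o", "/var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
        "-quiet", "-J", "-r", "-V", "config-2",
        "/tmp/tmp7t6jeuso")
    # raises processutils.ProcessExecutionError on a nonzero exit code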
Feb 25 07:31:36 np0005629333 nova_compute[244014]: 2026-02-25 12:31:36.495 244018 DEBUG nova.storage.rbd_utils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] rbd image 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:31:36 np0005629333 nova_compute[244014]: 2026-02-25 12:31:36.500 244018 DEBUG oslo_concurrency.processutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/disk.config 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:31:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:31:36 np0005629333 podman[309938]: 2026-02-25 12:31:36.803267027 +0000 UTC m=+0.144786166 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 25 07:31:36 np0005629333 podman[309939]: 2026-02-25 12:31:36.901928115 +0000 UTC m=+0.243081224 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 07:31:37 np0005629333 nova_compute[244014]: 2026-02-25 12:31:37.393 244018 DEBUG oslo_concurrency.processutils [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/disk.config 3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.893s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:31:37 np0005629333 nova_compute[244014]: 2026-02-25 12:31:37.394 244018 INFO nova.virt.libvirt.driver [None req-d44fa171-9c79-4e7f-84e1-278c9496efcf b63928451c6a4137bb65e25561326aff 8315f545d21f4f8d9a43d810f50e7b78 - - default default] [instance: 3f143481-f5f9-45d3-9d0b-b66e77ee0714] Deleting local config drive /var/lib/nova/instances/3f143481-f5f9-45d3-9d0b-b66e77ee0714/disk.config because it was imported into RBD.#033[00m
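The "rbd image ... does not exist" probes earlier in this sequence map onto the ceph python bindings. A hedged sketch of that existence check, with pool and image name taken from this log:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            with rbd.Image(ioctx, "3f143481-f5f9-45d3-9d0b-b66e77ee0714_disk.config"):
                exists = True
        except rbd.ImageNotFound:
            exists = False          # triggers the 'rbd import' path above
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()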
Feb 25 07:31:37 np0005629333 kernel: tapa308a2fe-7f: entered promiscuous mode
Feb 25 07:31:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:31:37Z|00699|binding|INFO|Claiming lport a308a2fe-7f22-4520-8be1-36876d0d5361 for this chassis.
Feb 25 07:31:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:31:37Z|00700|binding|INFO|a308a2fe-7f22-4520-8be1-36876d0d5361: Claiming fa:16:3e:da:c8:fb 10.100.0.4
Feb 25 07:31:37 np0005629333 NetworkManager[49836]: <info>  [1772022697.4507] manager: (tapa308a2fe-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/301)
Feb 25 07:31:37 np0005629333 nova_compute[244014]: 2026-02-25 12:31:37.450 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:31:37Z|00701|binding|INFO|Setting lport a308a2fe-7f22-4520-8be1-36876d0d5361 ovn-installed in OVS
Feb 25 07:31:37 np0005629333 nova_compute[244014]: 2026-02-25 12:31:37.461 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:37 np0005629333 nova_compute[244014]: 2026-02-25 12:31:37.477 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:31:37 np0005629333 systemd-udevd[309995]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:31:37 np0005629333 NetworkManager[49836]: <info>  [1772022697.4936] device (tapa308a2fe-7f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:31:37 np0005629333 NetworkManager[49836]: <info>  [1772022697.4945] device (tapa308a2fe-7f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:35:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 305 active+clean; 524 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.5 MiB/s wr, 188 op/s
Feb 25 07:35:10 np0005629333 nova_compute[244014]: 2026-02-25 12:35:10.132 244018 DEBUG oslo_concurrency.lockutils [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:35:10 np0005629333 nova_compute[244014]: 2026-02-25 12:35:10.133 244018 DEBUG oslo_concurrency.lockutils [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:35:10 np0005629333 nova_compute[244014]: 2026-02-25 12:35:10.133 244018 DEBUG oslo_concurrency.lockutils [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:35:10 np0005629333 nova_compute[244014]: 2026-02-25 12:35:10.133 244018 DEBUG oslo_concurrency.lockutils [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:35:10 np0005629333 nova_compute[244014]: 2026-02-25 12:35:10.133 244018 DEBUG oslo_concurrency.lockutils [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:35:10 np0005629333 nova_compute[244014]: 2026-02-25 12:35:10.135 244018 INFO nova.compute.manager [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Terminating instance#033[00m
Feb 25 07:35:10 np0005629333 nova_compute[244014]: 2026-02-25 12:35:10.135 244018 DEBUG nova.compute.manager [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:35:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:35:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2218163997' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:35:10 np0005629333 rsyslogd[1020]: imjournal: 9423 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Feb 25 07:35:10 np0005629333 nova_compute[244014]: 2026-02-25 12:35:10.255 244018 DEBUG oslo_concurrency.processutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:35:10 np0005629333 nova_compute[244014]: 2026-02-25 12:35:10.287 244018 DEBUG nova.storage.rbd_utils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image 77c38424-b0a2-4d31-975a-16f265ff93fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:35:10 np0005629333 nova_compute[244014]: 2026-02-25 12:35:10.293 244018 DEBUG oslo_concurrency.processutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:35:10 np0005629333 nova_compute[244014]: 2026-02-25 12:35:10.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:35:10 np0005629333 nova_compute[244014]: 2026-02-25 12:35:10.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
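_heal_instance_info_cache runs from oslo.service's periodic-task loop. The registration pattern, sketched (class name, spacing, and body are illustrative, not nova's ComputeManager):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            pass  # refresh one instance's network info cache per pass

    # the service loop drives it via Manager().run_periodic_tasks(context)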
Feb 25 07:35:10 np0005629333 kernel: tapd6f9abb7-ac (unregistering): left promiscuous mode
Feb 25 07:35:10 np0005629333 NetworkManager[49836]: <info>  [1772022910.9639] device (tapd6f9abb7-ac): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:35:10 np0005629333 nova_compute[244014]: 2026-02-25 12:35:10.966 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:35:10Z|00875|binding|INFO|Releasing lport d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 from this chassis (sb_readonly=0)
Feb 25 07:35:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:35:10Z|00876|binding|INFO|Setting lport d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 down in Southbound
Feb 25 07:35:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:35:10Z|00877|binding|INFO|Removing iface tapd6f9abb7-ac ovn-installed in OVS
Feb 25 07:35:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:10.981 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:66:6e 10.100.0.8'], port_security=['fa:16:3e:af:66:6e 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '19abddab-88d5-48b8-b98e-1dedccbb8b7f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d34ca23436b401fbaeb0b01190a440a', 'neutron:revision_number': '10', 'neutron:security_group_ids': '3f8cb539-706f-470a-94ac-5465c7a896f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a2c08b8a-392f-4189-9ba4-eaa2e30b0540, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:35:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:10.982 157129 INFO neutron.agent.ovn.metadata.agent [-] Port d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 in datapath a0cf2281-bf49-498f-8de5-70cdba33cd62 unbound from our chassis#033[00m
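The "Matched UPDATE: PortBindingUpdatedEvent(...)" entry above is an ovsdbapp row event firing on the southbound Port_Binding table. The skeleton of such an event, as a sketch (the run body and registration are illustrative; neutron's real handler tears down the metadata namespace, as logged below):

    from ovsdbapp import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None,
            # matching the repr in the log line above
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def run(self, event, row, old):
            print("Port_Binding changed:", row.logical_port)

    # registered on the IDL via idl.notify_handler.watch_event(...)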
Feb 25 07:35:10 np0005629333 nova_compute[244014]: 2026-02-25 12:35:10.983 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:10.983 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0cf2281-bf49-498f-8de5-70cdba33cd62, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:35:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:10.984 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a6fbf84b-aee9-44de-831b-938c63a92aa6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:10.986 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62 namespace which is not needed anymore#033[00m
Feb 25 07:35:11 np0005629333 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Feb 25 07:35:11 np0005629333 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d0000004f.scope: Consumed 5.862s CPU time.
Feb 25 07:35:11 np0005629333 systemd-machined[210048]: Machine qemu-109-instance-0000004f terminated.
Feb 25 07:35:11 np0005629333 podman[322099]: 2026-02-25 12:35:11.0808088 +0000 UTC m=+0.091473467 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.151 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.172 244018 INFO nova.virt.libvirt.driver [-] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Instance destroyed successfully.#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.172 244018 DEBUG nova.objects.instance [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lazy-loading 'resources' on Instance uuid 19abddab-88d5-48b8-b98e-1dedccbb8b7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.185 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-aeaad9e2-4ad0-46fb-b619-7ca2c78443a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.185 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-aeaad9e2-4ad0-46fb-b619-7ca2c78443a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.185 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.188 244018 DEBUG nova.virt.libvirt.vif [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:32:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-293517057',display_name='tempest-ServersNegativeTestJSON-server-293517057',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-293517057',id=79,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:34:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d34ca23436b401fbaeb0b01190a440a',ramdisk_id='',reservation_id='r-lll6a7bq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1613719120',owner_user_name='tempest-ServersNegativeTestJSON-1613719120-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:34:57Z,user_data=None,user_id='84ba7d5e80a44535b25853f3b18e352d',uuid=19abddab-88d5-48b8-b98e-1dedccbb8b7f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "address": "fa:16:3e:af:66:6e", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6f9abb7-ac", "ovs_interfaceid": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.188 244018 DEBUG nova.network.os_vif_util [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Converting VIF {"id": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "address": "fa:16:3e:af:66:6e", "network": {"id": "a0cf2281-bf49-498f-8de5-70cdba33cd62", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1023731774-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d34ca23436b401fbaeb0b01190a440a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6f9abb7-ac", "ovs_interfaceid": "d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.189 244018 DEBUG nova.network.os_vif_util [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:66:6e,bridge_name='br-int',has_traffic_filtering=True,id=d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536,network=Network(a0cf2281-bf49-498f-8de5-70cdba33cd62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6f9abb7-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.189 244018 DEBUG os_vif [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:66:6e,bridge_name='br-int',has_traffic_filtering=True,id=d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536,network=Network(a0cf2281-bf49-498f-8de5-70cdba33cd62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6f9abb7-ac') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.190 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.191 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6f9abb7-ac, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.193 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.194 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.196 244018 INFO os_vif [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:66:6e,bridge_name='br-int',has_traffic_filtering=True,id=d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536,network=Network(a0cf2281-bf49-498f-8de5-70cdba33cd62),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6f9abb7-ac')#033[00m
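[Editor's note] The unplug sequence above (convert the Nova VIF dict to an os-vif object, hand it to the 'ovs' plugin, which issues the DelPortCommand against br-int) can be reproduced with the public os-vif API. A minimal sketch, assuming os-vif and its ovs plugin are installed; the field values are copied from the log, and the libvirt instance name is a hypothetical placeholder:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the os-vif plugin entry points ('ovs' here)

    ovs_vif = vif.VIFOpenVSwitch(
        id='d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536',
        address='fa:16:3e:af:66:6e',
        vif_name='tapd6f9abb7-ac',
        bridge_name='br-int',
        plugin='ovs',
        network=network.Network(id='a0cf2281-bf49-498f-8de5-70cdba33cd62'))
    info = instance_info.InstanceInfo(
        uuid='19abddab-88d5-48b8-b98e-1dedccbb8b7f',
        name='instance-00000056')  # hypothetical name, not from the log

    # Dispatches to the 'ovs' plugin, which removes the port from br-int,
    # i.e. the DelPortCommand(port=tapd6f9abb7-ac, ...) seen at 12:35:11.191.
    os_vif.unplug(ovs_vif, info)
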
Feb 25 07:35:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:35:11 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2038076065' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.256 244018 DEBUG oslo_concurrency.processutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.963s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
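[Editor's note] The 0.963s CMD line above is Nova's RBD utility probing the Ceph monitor map, the request audited by ceph-mon at 12:35:11. A minimal sketch of the same probe through oslo.concurrency, assuming the 'openstack' client keyring from the log is readable; the JSON keys follow the usual 'ceph mon dump' output format:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    mon_map = json.loads(out)
    # e.g. the 192.168.122.100:6789 endpoint reused in the domain XML below
    print([mon['addr'] for mon in mon_map['mons']])
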
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.257 244018 DEBUG nova.virt.libvirt.vif [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:35:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-354613820',display_name='tempest-ServerDiskConfigTestJSON-server-354613820',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-354613820',id=87,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e80b416cdd774e9483545c9e08abf805',ramdisk_id='',reservation_id='r-cr7n0zq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1145834762',owner_user_name='tempest-ServerDiskConfigTestJSON-1145834762-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:35:03Z,user_data=None,user_id='d8636d59ca0d49698907e2edb5dc4967',uuid=77c38424-b0a2-4d31-975a-16f265ff93fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "address": "fa:16:3e:db:ce:56", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb6ecea4-c3", "ovs_interfaceid": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.257 244018 DEBUG nova.network.os_vif_util [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converting VIF {"id": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "address": "fa:16:3e:db:ce:56", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb6ecea4-c3", "ovs_interfaceid": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.258 244018 DEBUG nova.network.os_vif_util [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:ce:56,bridge_name='br-int',has_traffic_filtering=True,id=db6ecea4-c353-4b6f-860e-95c410e7ec39,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb6ecea4-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.259 244018 DEBUG nova.objects.instance [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lazy-loading 'pci_devices' on Instance uuid 77c38424-b0a2-4d31-975a-16f265ff93fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.289 244018 DEBUG nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:35:11 np0005629333 nova_compute[244014]:  <uuid>77c38424-b0a2-4d31-975a-16f265ff93fb</uuid>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:  <name>instance-00000057</name>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-354613820</nova:name>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:35:09</nova:creationTime>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:35:11 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:        <nova:user uuid="d8636d59ca0d49698907e2edb5dc4967">tempest-ServerDiskConfigTestJSON-1145834762-project-member</nova:user>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:        <nova:project uuid="e80b416cdd774e9483545c9e08abf805">tempest-ServerDiskConfigTestJSON-1145834762</nova:project>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:        <nova:port uuid="db6ecea4-c353-4b6f-860e-95c410e7ec39">
Feb 25 07:35:11 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <entry name="serial">77c38424-b0a2-4d31-975a-16f265ff93fb</entry>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <entry name="uuid">77c38424-b0a2-4d31-975a-16f265ff93fb</entry>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/77c38424-b0a2-4d31-975a-16f265ff93fb_disk">
Feb 25 07:35:11 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:35:11 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/77c38424-b0a2-4d31-975a-16f265ff93fb_disk.config">
Feb 25 07:35:11 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:35:11 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:db:ce:56"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <target dev="tapdb6ecea4-c3"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/77c38424-b0a2-4d31-975a-16f265ff93fb/console.log" append="off"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:35:11 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:35:11 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:35:11 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:35:11 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
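[Editor's note] Everything between "End _get_guest_xml xml=<domain ...>" and the trailer above is a single log record: the complete libvirt domain XML for instance-00000057, with its RBD root disk, RBD config-drive CD-ROM, and the tapdb6ecea4-c3 ethernet interface. Outside of Nova's own Guest wrapper, realizing such a document comes down to two libvirt-python calls; a minimal sketch, assuming the XML has been saved to a local file:

    import libvirt

    with open('instance-00000057.xml') as f:  # the <domain> document above
        xml = f.read()

    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml)  # persist the domain definition
    dom.create()               # boot it; Nova first waits for the
                               # network-vif-plugged event prepared below
    conn.close()
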
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.289 244018 DEBUG nova.compute.manager [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Preparing to wait for external event network-vif-plugged-db6ecea4-c353-4b6f-860e-95c410e7ec39 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.290 244018 DEBUG oslo_concurrency.lockutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.290 244018 DEBUG oslo_concurrency.lockutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.290 244018 DEBUG oslo_concurrency.lockutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
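[Editor's note] The Acquiring/acquired/released triple above is oslo.concurrency's named-lock machinery guarding the per-instance event registry; Nova keys the lock on "<instance-uuid>-events". A minimal sketch of the same pattern, with a placeholder body:

    from oslo_concurrency import lockutils

    instance_uuid = '77c38424-b0a2-4d31-975a-16f265ff93fb'  # from the log

    @lockutils.synchronized(f'{instance_uuid}-events')
    def _create_or_get_event():
        # Registry mutation happens only while the named lock is held,
        # which is what emits the acquired/released DEBUG lines.
        pass

    _create_or_get_event()
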
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.291 244018 DEBUG nova.virt.libvirt.vif [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:35:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-354613820',display_name='tempest-ServerDiskConfigTestJSON-server-354613820',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-354613820',id=87,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e80b416cdd774e9483545c9e08abf805',ramdisk_id='',reservation_id='r-cr7n0zq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1145834762',owner_user_name='tempest-ServerDiskConfigTestJSON-1145834762-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:35:03Z,user_data=None,user_id='d8636d59ca0d49698907e2edb5dc4967',uuid=77c38424-b0a2-4d31-975a-16f265ff93fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "address": "fa:16:3e:db:ce:56", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb6ecea4-c3", "ovs_interfaceid": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.291 244018 DEBUG nova.network.os_vif_util [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converting VIF {"id": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "address": "fa:16:3e:db:ce:56", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb6ecea4-c3", "ovs_interfaceid": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.292 244018 DEBUG nova.network.os_vif_util [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:ce:56,bridge_name='br-int',has_traffic_filtering=True,id=db6ecea4-c353-4b6f-860e-95c410e7ec39,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb6ecea4-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.292 244018 DEBUG os_vif [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:ce:56,bridge_name='br-int',has_traffic_filtering=True,id=db6ecea4-c353-4b6f-860e-95c410e7ec39,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb6ecea4-c3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.293 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.293 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.293 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.296 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.297 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb6ecea4-c3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.297 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdb6ecea4-c3, col_values=(('external_ids', {'iface-id': 'db6ecea4-c353-4b6f-860e-95c410e7ec39', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:db:ce:56', 'vm-uuid': '77c38424-b0a2-4d31-975a-16f265ff93fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.298 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:11 np0005629333 NetworkManager[49836]: <info>  [1772022911.2992] manager: (tapdb6ecea4-c3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/367)
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.300 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.302 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.302 244018 INFO os_vif [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:ce:56,bridge_name='br-int',has_traffic_filtering=True,id=db6ecea4-c353-4b6f-860e-95c410e7ec39,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb6ecea4-c3')#033[00m
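[Editor's note] The plug side mirrors the earlier unplug: the os-vif ovs plugin runs an AddPortCommand, then a DbSetCommand that stamps the Interface row's external_ids with the Neutron port id and instance UUID, which is how ovn-controller later binds the port. A minimal sketch of those two ovsdbapp commands in one transaction, assuming the default OVS database socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # assumed path
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapdb6ecea4-c3', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapdb6ecea4-c3',
            ('external_ids', {
                'iface-id': 'db6ecea4-c353-4b6f-860e-95c410e7ec39',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:db:ce:56',
                'vm-uuid': '77c38424-b0a2-4d31-975a-16f265ff93fb'})))
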
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.353 244018 DEBUG nova.compute.manager [req-1d4d313f-7f8c-4384-aff1-3d9b8fb278be req-47e5b7f0-2aed-4f6e-a4ce-0a4e0f1cea9e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received event network-vif-unplugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.354 244018 DEBUG oslo_concurrency.lockutils [req-1d4d313f-7f8c-4384-aff1-3d9b8fb278be req-47e5b7f0-2aed-4f6e-a4ce-0a4e0f1cea9e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.354 244018 DEBUG oslo_concurrency.lockutils [req-1d4d313f-7f8c-4384-aff1-3d9b8fb278be req-47e5b7f0-2aed-4f6e-a4ce-0a4e0f1cea9e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.354 244018 DEBUG oslo_concurrency.lockutils [req-1d4d313f-7f8c-4384-aff1-3d9b8fb278be req-47e5b7f0-2aed-4f6e-a4ce-0a4e0f1cea9e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.355 244018 DEBUG nova.compute.manager [req-1d4d313f-7f8c-4384-aff1-3d9b8fb278be req-47e5b7f0-2aed-4f6e-a4ce-0a4e0f1cea9e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] No waiting events found dispatching network-vif-unplugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.355 244018 DEBUG nova.compute.manager [req-1d4d313f-7f8c-4384-aff1-3d9b8fb278be req-47e5b7f0-2aed-4f6e-a4ce-0a4e0f1cea9e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received event network-vif-unplugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.686 244018 DEBUG nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.686 244018 DEBUG nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.686 244018 DEBUG nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] No VIF found with MAC fa:16:3e:db:ce:56, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.687 244018 INFO nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Using config drive#033[00m
Feb 25 07:35:11 np0005629333 neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62[321503]: [NOTICE]   (321507) : haproxy version is 2.8.14-c23fe91
Feb 25 07:35:11 np0005629333 neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62[321503]: [NOTICE]   (321507) : path to executable is /usr/sbin/haproxy
Feb 25 07:35:11 np0005629333 neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62[321503]: [WARNING]  (321507) : Exiting Master process...
Feb 25 07:35:11 np0005629333 neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62[321503]: [ALERT]    (321507) : Current worker (321509) exited with code 143 (Terminated)
Feb 25 07:35:11 np0005629333 neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62[321503]: [WARNING]  (321507) : All workers exited. Exiting... (0)
Feb 25 07:35:11 np0005629333 systemd[1]: libpod-16b3d02c4792e3bb2d3aa048a3ee2c4cc350ebbeed9123a68e1cc32e424aaadf.scope: Deactivated successfully.
Feb 25 07:35:11 np0005629333 podman[322161]: 2026-02-25 12:35:11.713035067 +0000 UTC m=+0.622455742 container died 16b3d02c4792e3bb2d3aa048a3ee2c4cc350ebbeed9123a68e1cc32e424aaadf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 25 07:35:11 np0005629333 nova_compute[244014]: 2026-02-25 12:35:11.713 244018 DEBUG nova.storage.rbd_utils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image 77c38424-b0a2-4d31-975a-16f265ff93fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:35:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 305 active+clean; 524 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.5 MiB/s wr, 188 op/s
Feb 25 07:35:12 np0005629333 nova_compute[244014]: 2026-02-25 12:35:12.218 244018 INFO nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Creating config drive at /var/lib/nova/instances/77c38424-b0a2-4d31-975a-16f265ff93fb/disk.config#033[00m
Feb 25 07:35:12 np0005629333 nova_compute[244014]: 2026-02-25 12:35:12.222 244018 DEBUG oslo_concurrency.processutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/77c38424-b0a2-4d31-975a-16f265ff93fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpizamvkbe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:35:12 np0005629333 nova_compute[244014]: 2026-02-25 12:35:12.355 244018 DEBUG oslo_concurrency.processutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/77c38424-b0a2-4d31-975a-16f265ff93fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpizamvkbe" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
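[Editor's note] The config drive is an ISO 9660 image with the config-2 volume label, built by shelling out to mkisofs over a temporary directory of metadata files. A minimal sketch of the invocation logged above (/tmp/tmpizamvkbe is whatever tempfile produced on that run; the publisher string is passed as one argv element):

    from oslo_concurrency import processutils

    processutils.execute(
        '/usr/bin/mkisofs', '-o', 'disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpizamvkbe')
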
Feb 25 07:35:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:35:12 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-16b3d02c4792e3bb2d3aa048a3ee2c4cc350ebbeed9123a68e1cc32e424aaadf-userdata-shm.mount: Deactivated successfully.
Feb 25 07:35:12 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1286b91a07c941dc6ee3488e98e5fa87f3ac6bdce355293186c3d25055400acc-merged.mount: Deactivated successfully.
Feb 25 07:35:12 np0005629333 podman[322101]: 2026-02-25 12:35:12.752102024 +0000 UTC m=+1.761340810 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 25 07:35:12 np0005629333 nova_compute[244014]: 2026-02-25 12:35:12.901 244018 DEBUG nova.storage.rbd_utils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] rbd image 77c38424-b0a2-4d31-975a-16f265ff93fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:35:12 np0005629333 nova_compute[244014]: 2026-02-25 12:35:12.907 244018 DEBUG oslo_concurrency.processutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/77c38424-b0a2-4d31-975a-16f265ff93fb/disk.config 77c38424-b0a2-4d31-975a-16f265ff93fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:35:12 np0005629333 nova_compute[244014]: 2026-02-25 12:35:12.942 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Updating instance_info_cache with network_info: [{"id": "e7482fdc-9792-451e-adf6-a816dd7113c8", "address": "fa:16:3e:56:aa:11", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7482fdc-97", "ovs_interfaceid": "e7482fdc-9792-451e-adf6-a816dd7113c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:35:12 np0005629333 nova_compute[244014]: 2026-02-25 12:35:12.971 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-aeaad9e2-4ad0-46fb-b619-7ca2c78443a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:35:12 np0005629333 nova_compute[244014]: 2026-02-25 12:35:12.972 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 25 07:35:13 np0005629333 nova_compute[244014]: 2026-02-25 12:35:13.483 244018 DEBUG nova.compute.manager [req-2c74f3a1-5014-40c1-92b8-be88c77fae8a req-62e0dde1-5b51-4a86-ae62-67398660fe58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received event network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:35:13 np0005629333 nova_compute[244014]: 2026-02-25 12:35:13.483 244018 DEBUG oslo_concurrency.lockutils [req-2c74f3a1-5014-40c1-92b8-be88c77fae8a req-62e0dde1-5b51-4a86-ae62-67398660fe58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:35:13 np0005629333 nova_compute[244014]: 2026-02-25 12:35:13.483 244018 DEBUG oslo_concurrency.lockutils [req-2c74f3a1-5014-40c1-92b8-be88c77fae8a req-62e0dde1-5b51-4a86-ae62-67398660fe58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:35:13 np0005629333 nova_compute[244014]: 2026-02-25 12:35:13.484 244018 DEBUG oslo_concurrency.lockutils [req-2c74f3a1-5014-40c1-92b8-be88c77fae8a req-62e0dde1-5b51-4a86-ae62-67398660fe58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:35:13 np0005629333 nova_compute[244014]: 2026-02-25 12:35:13.484 244018 DEBUG nova.compute.manager [req-2c74f3a1-5014-40c1-92b8-be88c77fae8a req-62e0dde1-5b51-4a86-ae62-67398660fe58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] No waiting events found dispatching network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:35:13 np0005629333 nova_compute[244014]: 2026-02-25 12:35:13.484 244018 WARNING nova.compute.manager [req-2c74f3a1-5014-40c1-92b8-be88c77fae8a req-62e0dde1-5b51-4a86-ae62-67398660fe58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received unexpected event network-vif-plugged-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 for instance with vm_state active and task_state deleting.#033[00m
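[Editor's note] This WARNING is the tail of the external-event handshake: the compute manager only waits for events it registered beforehand (as at 12:35:11.289 for the new instance), so a network-vif-plugged arriving for an instance already in task_state deleting finds no waiter and is dropped. A simplified, hedged sketch of the registry pattern behind "Preparing to wait" / "No waiting events found" (the real code lives in nova.compute.manager.InstanceEvents; names below are illustrative):

    import threading

    _events = {}  # {(instance_uuid, event_name): threading.Event}

    def prepare_for_instance_event(uuid, name):
        # Called before plugging the VIF; returns the object to wait on.
        return _events.setdefault((uuid, name), threading.Event())

    def pop_instance_event(uuid, name):
        ev = _events.pop((uuid, name), None)
        if ev is None:
            print(f'No waiting events found dispatching {name}')  # as logged
            return
        ev.set()  # wakes the spawner blocked on ev.wait(timeout=...)
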
Feb 25 07:35:13 np0005629333 nova_compute[244014]: 2026-02-25 12:35:13.813 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022898.8124812, ef34c4d4-57e0-4af1-af7a-b8ef35d09862 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:35:13 np0005629333 nova_compute[244014]: 2026-02-25 12:35:13.814 244018 INFO nova.compute.manager [-] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:35:13 np0005629333 nova_compute[244014]: 2026-02-25 12:35:13.844 244018 DEBUG nova.compute.manager [None req-5b6b4607-a514-4480-bb63-ad6f2e1efaaf - - - - - -] [instance: ef34c4d4-57e0-4af1-af7a-b8ef35d09862] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:35:13 np0005629333 podman[322161]: 2026-02-25 12:35:13.929294464 +0000 UTC m=+2.838715179 container cleanup 16b3d02c4792e3bb2d3aa048a3ee2c4cc350ebbeed9123a68e1cc32e424aaadf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 07:35:13 np0005629333 systemd[1]: libpod-conmon-16b3d02c4792e3bb2d3aa048a3ee2c4cc350ebbeed9123a68e1cc32e424aaadf.scope: Deactivated successfully.
Feb 25 07:35:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1603: 305 pgs: 305 active+clean; 534 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 201 op/s
Feb 25 07:35:14 np0005629333 nova_compute[244014]: 2026-02-25 12:35:14.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:35:14 np0005629333 nova_compute[244014]: 2026-02-25 12:35:14.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:35:14 np0005629333 nova_compute[244014]: 2026-02-25 12:35:14.903 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:15 np0005629333 podman[322282]: 2026-02-25 12:35:15.161200941 +0000 UTC m=+1.208539768 container remove 16b3d02c4792e3bb2d3aa048a3ee2c4cc350ebbeed9123a68e1cc32e424aaadf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.166 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d89b378c-7b05-49f3-b463-95ae260848ce]: (4, ('Wed Feb 25 12:35:11 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62 (16b3d02c4792e3bb2d3aa048a3ee2c4cc350ebbeed9123a68e1cc32e424aaadf)\n16b3d02c4792e3bb2d3aa048a3ee2c4cc350ebbeed9123a68e1cc32e424aaadf\nWed Feb 25 12:35:13 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62 (16b3d02c4792e3bb2d3aa048a3ee2c4cc350ebbeed9123a68e1cc32e424aaadf)\n16b3d02c4792e3bb2d3aa048a3ee2c4cc350ebbeed9123a68e1cc32e424aaadf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.168 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5778b6ee-7e4a-46bb-82ba-dc4c0dd810b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.173 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0cf2281-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:35:15 np0005629333 kernel: tapa0cf2281-b0: left promiscuous mode
Feb 25 07:35:15 np0005629333 nova_compute[244014]: 2026-02-25 12:35:15.176 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:15 np0005629333 nova_compute[244014]: 2026-02-25 12:35:15.185 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.188 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[be471435-036a-434a-903f-181bcd713490]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.214 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[295ee3e5-9c17-4bc7-be52-7d82f1eba8ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.215 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c60fac3a-93d8-46af-ac21-84fd03c6e213]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.229 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b8ffb47f-380d-49c1-8593-85a16d0e2a50]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 486584, 'reachable_time': 33997, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322304, 'error': None, 'target': 'ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:15 np0005629333 systemd[1]: run-netns-ovnmeta\x2da0cf2281\x2dbf49\x2d498f\x2d8de5\x2d70cdba33cd62.mount: Deactivated successfully.
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.232 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.233 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[e24b7749-6e34-4ba6-a2bd-0c13198acb59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
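[annotation] The three lines above are the tail of a namespace teardown: the privsep helper reports the ovnmeta namespace deleted, and systemd reaps the corresponding bind mount under /run/netns. A minimal sketch of the underlying removal, assuming pyroute2 (the library neutron's privileged ip_lib wraps) is available; the error handling is illustrative:

    from pyroute2 import netns

    # Namespace name copied from the log lines above.
    NS_NAME = 'ovnmeta-a0cf2281-bf49-498f-8de5-70cdba33cd62'

    def remove_netns(name):
        """Delete a network namespace; a missing namespace counts as success."""
        try:
            netns.remove(name)
        except FileNotFoundError:
            pass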
Feb 25 07:35:15 np0005629333 nova_compute[244014]: 2026-02-25 12:35:15.586 244018 DEBUG oslo_concurrency.processutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/77c38424-b0a2-4d31-975a-16f265ff93fb/disk.config 77c38424-b0a2-4d31-975a-16f265ff93fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.679s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:35:15 np0005629333 nova_compute[244014]: 2026-02-25 12:35:15.587 244018 INFO nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Deleting local config drive /var/lib/nova/instances/77c38424-b0a2-4d31-975a-16f265ff93fb/disk.config because it was imported into RBD.#033[00m
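[annotation] The two nova lines above show the config drive being imported into the vms pool (2.679 s) and the local copy then removed. A hedged re-creation of that sequence with the command arguments copied from the logged CMD; the helper name is hypothetical:

    import os
    import subprocess

    def import_config_drive(instance_uuid, pool='vms'):
        local = f'/var/lib/nova/instances/{instance_uuid}/disk.config'
        image = f'{instance_uuid}_disk.config'
        # Same rbd invocation as the CMD logged above; check=True raises on rc != 0.
        subprocess.run(['rbd', 'import', '--pool', pool, local, image,
                        '--image-format=2', '--id', 'openstack',
                        '--conf', '/etc/ceph/ceph.conf'], check=True)
        # "Deleting local config drive ... because it was imported into RBD"
        os.remove(local)

    # Usage, with the instance UUID from the log:
    # import_config_drive('77c38424-b0a2-4d31-975a-16f265ff93fb')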
Feb 25 07:35:15 np0005629333 kernel: tapdb6ecea4-c3: entered promiscuous mode
Feb 25 07:35:15 np0005629333 NetworkManager[49836]: <info>  [1772022915.6220] manager: (tapdb6ecea4-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/368)
Feb 25 07:35:15 np0005629333 nova_compute[244014]: 2026-02-25 12:35:15.624 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:15 np0005629333 ovn_controller[147040]: 2026-02-25T12:35:15Z|00878|binding|INFO|Claiming lport db6ecea4-c353-4b6f-860e-95c410e7ec39 for this chassis.
Feb 25 07:35:15 np0005629333 ovn_controller[147040]: 2026-02-25T12:35:15Z|00879|binding|INFO|db6ecea4-c353-4b6f-860e-95c410e7ec39: Claiming fa:16:3e:db:ce:56 10.100.0.6
Feb 25 07:35:15 np0005629333 systemd-udevd[322306]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:35:15 np0005629333 ovn_controller[147040]: 2026-02-25T12:35:15Z|00880|binding|INFO|Setting lport db6ecea4-c353-4b6f-860e-95c410e7ec39 ovn-installed in OVS
Feb 25 07:35:15 np0005629333 ovn_controller[147040]: 2026-02-25T12:35:15Z|00881|binding|INFO|Setting lport db6ecea4-c353-4b6f-860e-95c410e7ec39 up in Southbound
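[annotation] ovn-controller has now claimed the logical port for this chassis, marked it ovn-installed in OVS, and set it up in the Southbound DB. One way to confirm the resulting state is a Southbound query; a sketch assuming ovn-sbctl is on PATH and can reach the SB DB (`find` and --format=json are standard db-ctl options):

    import json
    import subprocess

    def port_binding_state(lport):
        out = subprocess.run(
            ['ovn-sbctl', '--format=json', '--columns=up,chassis',
             'find', 'Port_Binding', f'logical_port={lport}'],
            capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    print(port_binding_state('db6ecea4-c353-4b6f-860e-95c410e7ec39'))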
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.630 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:ce:56 10.100.0.6'], port_security=['fa:16:3e:db:ce:56 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '77c38424-b0a2-4d31-975a-16f265ff93fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d1639de-d0ac-47b6-9707-253523b26c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e80b416cdd774e9483545c9e08abf805', 'neutron:revision_number': '2', 'neutron:security_group_ids': '71e5f465-6833-47b1-8565-db566c0f3b12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=031ffbbc-3d5b-436f-b976-888884cb2042, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=db6ecea4-c353-4b6f-860e-95c410e7ec39) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.631 157129 INFO neutron.agent.ovn.metadata.agent [-] Port db6ecea4-c353-4b6f-860e-95c410e7ec39 in datapath 9d1639de-d0ac-47b6-9707-253523b26c21 bound to our chassis#033[00m
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.632 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9d1639de-d0ac-47b6-9707-253523b26c21#033[00m
Feb 25 07:35:15 np0005629333 nova_compute[244014]: 2026-02-25 12:35:15.631 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:15 np0005629333 NetworkManager[49836]: <info>  [1772022915.6396] device (tapdb6ecea4-c3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:35:15 np0005629333 NetworkManager[49836]: <info>  [1772022915.6405] device (tapdb6ecea4-c3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:35:15 np0005629333 nova_compute[244014]: 2026-02-25 12:35:15.641 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.646 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b0c95371-ade4-4066-b3b3-aea3de13277f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.647 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9d1639de-d1 in ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.648 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9d1639de-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.648 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[13d87f7f-6dc6-4463-8a4d-6df5d590f463]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.649 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec6cdfa-f4a6-43bd-b342-8cf55243d33e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:15 np0005629333 systemd-machined[210048]: New machine qemu-112-instance-00000057.
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.657 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[4375e6d1-9e56-4209-8d28-89096bcfac6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:15 np0005629333 systemd[1]: Started Virtual Machine qemu-112-instance-00000057.
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.667 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b635210d-2696-4229-b338-d5d1d09379f4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
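[annotation] The (stdout, stderr, returncode) triple in the reply above is the shape of a privileged execute() call: the agent has just enabled promote_secondaries inside the new namespace. Roughly equivalent as a standalone sketch (run against the host here rather than via privsep inside the namespace):

    import subprocess

    r = subprocess.run(
        ['sysctl', '-w', 'net.ipv4.conf.all.promote_secondaries=1'],
        capture_output=True, text=True)
    # Matches the logged reply: ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)
    print((r.stdout, r.stderr, r.returncode))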
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.690 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[32952eab-0977-4ab9-9c69-cea9bd47a0b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.694 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cc96e4a4-9bef-444f-98c7-e85fc91f9156]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:15 np0005629333 NetworkManager[49836]: <info>  [1772022915.6959] manager: (tap9d1639de-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/369)
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.715 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[52e2d979-dbfb-46ff-b47a-e69ec82775a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.717 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2f3b4c37-2058-4f98-bf70-825683bdc6bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:15 np0005629333 NetworkManager[49836]: <info>  [1772022915.7330] device (tap9d1639de-d0): carrier: link connected
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.738 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[741e0c41-db50-436d-8eeb-21d1d740737f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.751 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[292105a5-41aa-42a6-bea8-fc8195bfeeb3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d1639de-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:8c:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 264], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488514, 'reachable_time': 43171, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322352, 'error': None, 'target': 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.767 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[98f7dc95-66b0-4bfa-90d2-072f1b8f275a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe80:8cd3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 488514, 'tstamp': 488514}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322353, 'error': None, 'target': 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.778 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7908d736-a063-4098-b6b6-57623d622830]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d1639de-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:8c:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 264], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488514, 'reachable_time': 43171, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322354, 'error': None, 'target': 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
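[annotation] The two RTM_NEWLINK payloads above (each wrapped across two lines) and the RTM_NEWADDR between them are pyroute2 netlink messages fetched inside the new ovnmeta namespace; the 'target' field in each header names that namespace. A minimal sketch of the same query, assuming pyroute2:

    from pyroute2 import NetNS

    with NetNS('ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21') as ns:
        idx = ns.link_lookup(ifname='tap9d1639de-d1')[0]
        link = ns.get_links(idx)[0]
        # Same attributes as the dump above: fa:16:3e:80:8c:d3, state UP, MTU 1500.
        print(link.get_attr('IFLA_ADDRESS'),
              link.get_attr('IFLA_OPERSTATE'),
              link.get_attr('IFLA_MTU'))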
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.799 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[61458f9e-af6c-4525-9749-e6a84fae58e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.850 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f2038962-435e-42d8-8eb6-a7cc1268e01a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.851 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d1639de-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.852 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.853 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d1639de-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:35:15 np0005629333 NetworkManager[49836]: <info>  [1772022915.8565] manager: (tap9d1639de-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/370)
Feb 25 07:35:15 np0005629333 kernel: tap9d1639de-d0: entered promiscuous mode
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.859 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9d1639de-d0, col_values=(('external_ids', {'iface-id': '2a0bb56b-974f-4df3-ab65-5a5521eee6ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
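[annotation] The three ovsdbapp transactions above move the veth's OVS end onto br-int (the br-ex delete is a no-op, hence "Transaction caused no change") and tag it with the iface-id OVN matches against. A hedged sketch of the same commands through ovsdbapp's public API; the socket endpoint is an assumption, the agent uses its own connection:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # assumed endpoint
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    api.del_port('tap9d1639de-d0', bridge='br-ex', if_exists=True).execute()
    api.add_port('br-int', 'tap9d1639de-d0', may_exist=True).execute()
    api.db_set('Interface', 'tap9d1639de-d0',
               ('external_ids',
                {'iface-id': '2a0bb56b-974f-4df3-ab65-5a5521eee6ab'})).execute()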
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.862 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9d1639de-d0ac-47b6-9707-253523b26c21.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9d1639de-d0ac-47b6-9707-253523b26c21.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:35:15 np0005629333 ovn_controller[147040]: 2026-02-25T12:35:15Z|00882|binding|INFO|Releasing lport 2a0bb56b-974f-4df3-ab65-5a5521eee6ab from this chassis (sb_readonly=0)
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.863 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e64f7c2e-2872-4e02-a0ef-525aa72c0b64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:15 np0005629333 nova_compute[244014]: 2026-02-25 12:35:15.864 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.865 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-9d1639de-d0ac-47b6-9707-253523b26c21
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/9d1639de-d0ac-47b6-9707-253523b26c21.pid.haproxy
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 9d1639de-d0ac-47b6-9707-253523b26c21
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:35:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:15.867 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'env', 'PROCESS_TAG=haproxy-9d1639de-d0ac-47b6-9707-253523b26c21', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9d1639de-d0ac-47b6-9707-253523b26c21.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
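[annotation] The rendered configuration above ends at create_config_file, after which the agent launches haproxy inside the namespace via rootwrap, exactly as the "Running command" line records. The same invocation built in Python as a sketch:

    import subprocess

    NETWORK_ID = '9d1639de-d0ac-47b6-9707-253523b26c21'
    cmd = ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
           'ip', 'netns', 'exec', f'ovnmeta-{NETWORK_ID}',
           'env', f'PROCESS_TAG=haproxy-{NETWORK_ID}',
           'haproxy', '-f',
           f'/var/lib/neutron/ovn-metadata-proxy/{NETWORK_ID}.conf']
    # haproxy self-daemonizes ("daemon" in the config above), so Popen returns
    # promptly; the pidfile under /var/lib/neutron/external/pids appears after.
    subprocess.Popen(cmd)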
Feb 25 07:35:15 np0005629333 nova_compute[244014]: 2026-02-25 12:35:15.870 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:15 np0005629333 nova_compute[244014]: 2026-02-25 12:35:15.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:35:15 np0005629333 nova_compute[244014]: 2026-02-25 12:35:15.899 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:35:15 np0005629333 nova_compute[244014]: 2026-02-25 12:35:15.900 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:35:15 np0005629333 nova_compute[244014]: 2026-02-25 12:35:15.900 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
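[annotation] The acquire/wait/hold bookkeeping above comes from oslo.concurrency: resource-tracker methods such as clean_compute_node_cache are serialized on a single "compute_resources" lock, and the waited/held durations are logged on entry and exit. A minimal usage sketch of the decorator:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Body runs under the lock; the log records waited/held durations.
        pass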
Feb 25 07:35:15 np0005629333 nova_compute[244014]: 2026-02-25 12:35:15.900 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:35:15 np0005629333 nova_compute[244014]: 2026-02-25 12:35:15.900 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:35:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 534 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 143 op/s
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.181 244018 DEBUG nova.compute.manager [req-4ed7c461-8b67-41b8-b83d-9c61f34cf3e9 req-1e0073e8-eb4e-407b-9a56-60f8c1d79e74 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Received event network-vif-plugged-db6ecea4-c353-4b6f-860e-95c410e7ec39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.181 244018 DEBUG oslo_concurrency.lockutils [req-4ed7c461-8b67-41b8-b83d-9c61f34cf3e9 req-1e0073e8-eb4e-407b-9a56-60f8c1d79e74 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.182 244018 DEBUG oslo_concurrency.lockutils [req-4ed7c461-8b67-41b8-b83d-9c61f34cf3e9 req-1e0073e8-eb4e-407b-9a56-60f8c1d79e74 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.182 244018 DEBUG oslo_concurrency.lockutils [req-4ed7c461-8b67-41b8-b83d-9c61f34cf3e9 req-1e0073e8-eb4e-407b-9a56-60f8c1d79e74 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.182 244018 DEBUG nova.compute.manager [req-4ed7c461-8b67-41b8-b83d-9c61f34cf3e9 req-1e0073e8-eb4e-407b-9a56-60f8c1d79e74 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Processing event network-vif-plugged-db6ecea4-c353-4b6f-860e-95c410e7ec39 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:35:16 np0005629333 podman[322423]: 2026-02-25 12:35:16.177898854 +0000 UTC m=+0.019983206 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.298 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:35:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4075301731' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.432 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
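[annotation] nova's RBD image backend sizes the hypervisor's disk view from `ceph df` (the 0.532 s call above, dispatched to the mon as the audit lines show). A sketch of the same call and of pulling pool stats out of the JSON; the key names follow the ceph df JSON format and should be treated as illustrative:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)
    vms = next(p for p in stats['pools'] if p['name'] == 'vms')
    print(vms['stats']['bytes_used'], vms['stats']['max_avail'])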
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.502 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.502 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.505 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.505 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000057 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.508 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000054 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.508 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000054 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.508 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000054 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.510 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.510 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.669 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.670 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3318MB free_disk=59.788564148359GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.670 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.670 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.689 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022916.6893885, 77c38424-b0a2-4d31-975a-16f265ff93fb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.690 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] VM Started (Lifecycle Event)#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.691 244018 DEBUG nova.compute.manager [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.694 244018 DEBUG nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.705 244018 INFO nova.virt.libvirt.driver [-] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Instance spawned successfully.#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.705 244018 DEBUG nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.827 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.830 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
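[annotation] The numeric states in the synchronization line above map onto nova.compute.power_state constants; listed here to make the log readable:

    # nova.compute.power_state values relevant to the line above:
    NOSTATE = 0    # DB power_state: nothing recorded yet (still spawning)
    RUNNING = 1    # VM power_state reported by libvirt
    PAUSED = 3
    SHUTDOWN = 4
    CRASHED = 6
    SUSPENDED = 7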
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.836 244018 DEBUG nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.836 244018 DEBUG nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.838 244018 DEBUG nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.838 244018 DEBUG nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.838 244018 DEBUG nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.839 244018 DEBUG nova.virt.libvirt.driver [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.877 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.877 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022916.691264, 77c38424-b0a2-4d31-975a-16f265ff93fb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.877 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.913 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.915 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance aeaad9e2-4ad0-46fb-b619-7ca2c78443a8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.915 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance e3178d86-5c76-4393-9327-2aac2cb8d81d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.915 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 19abddab-88d5-48b8-b98e-1dedccbb8b7f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.916 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 77c38424-b0a2-4d31-975a-16f265ff93fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.916 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.916 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.923 244018 INFO nova.compute.manager [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Took 12.98 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.925 244018 DEBUG nova.compute.manager [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.926 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022916.6938012, 77c38424-b0a2-4d31-975a-16f265ff93fb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.927 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.944 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.959 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.962 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.964 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.965 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
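[annotation] Given the inventory pushed above, placement's effective capacity per resource class is (total - reserved) * allocation_ratio; a quick check of what the scheduler can place on this node:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2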
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.993 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 25 07:35:16 np0005629333 nova_compute[244014]: 2026-02-25 12:35:16.997 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:35:17 np0005629333 nova_compute[244014]: 2026-02-25 12:35:17.012 244018 INFO nova.compute.manager [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Took 14.06 seconds to build instance.#033[00m
Feb 25 07:35:17 np0005629333 nova_compute[244014]: 2026-02-25 12:35:17.023 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 25 07:35:17 np0005629333 nova_compute[244014]: 2026-02-25 12:35:17.028 244018 DEBUG oslo_concurrency.lockutils [None req-c5878e82-973e-4896-8b7d-13677d6f778d d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:35:17 np0005629333 nova_compute[244014]: 2026-02-25 12:35:17.127 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:35:17 np0005629333 podman[322423]: 2026-02-25 12:35:17.228499377 +0000 UTC m=+1.070583689 container create 16c7c0a2d235e8044621fdf5dcbe4099cc881eed8dc1ea229eaec1cdf05d1eb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:35:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:35:17 np0005629333 systemd[1]: Started libpod-conmon-16c7c0a2d235e8044621fdf5dcbe4099cc881eed8dc1ea229eaec1cdf05d1eb4.scope.
Feb 25 07:35:17 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:35:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd22ca02a4a701e91c040bc18e8d941f5c4b35115a09169483e243124fa15c6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:35:17 np0005629333 podman[322423]: 2026-02-25 12:35:17.888795118 +0000 UTC m=+1.730879440 container init 16c7c0a2d235e8044621fdf5dcbe4099cc881eed8dc1ea229eaec1cdf05d1eb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
Feb 25 07:35:17 np0005629333 podman[322423]: 2026-02-25 12:35:17.895820727 +0000 UTC m=+1.737905039 container start 16c7c0a2d235e8044621fdf5dcbe4099cc881eed8dc1ea229eaec1cdf05d1eb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 25 07:35:17 np0005629333 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[322482]: [NOTICE]   (322487) : New worker (322489) forked
Feb 25 07:35:17 np0005629333 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[322482]: [NOTICE]   (322487) : Loading success.
Feb 25 07:35:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:35:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4255929073' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:35:17 np0005629333 nova_compute[244014]: 2026-02-25 12:35:17.968 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.842s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:35:17 np0005629333 nova_compute[244014]: 2026-02-25 12:35:17.973 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:35:17 np0005629333 nova_compute[244014]: 2026-02-25 12:35:17.996 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:35:18 np0005629333 nova_compute[244014]: 2026-02-25 12:35:18.017 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 07:35:18 np0005629333 nova_compute[244014]: 2026-02-25 12:35:18.018 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:35:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 305 active+clean; 453 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 175 op/s
Feb 25 07:35:18 np0005629333 nova_compute[244014]: 2026-02-25 12:35:18.834 244018 DEBUG nova.compute.manager [req-a08f2b2d-8949-436d-bf1a-7852afbdbaee req-5eaf0aad-ef67-4404-96bf-fbee6a861a15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Received event network-vif-plugged-db6ecea4-c353-4b6f-860e-95c410e7ec39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:35:18 np0005629333 nova_compute[244014]: 2026-02-25 12:35:18.835 244018 DEBUG oslo_concurrency.lockutils [req-a08f2b2d-8949-436d-bf1a-7852afbdbaee req-5eaf0aad-ef67-4404-96bf-fbee6a861a15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:35:18 np0005629333 nova_compute[244014]: 2026-02-25 12:35:18.835 244018 DEBUG oslo_concurrency.lockutils [req-a08f2b2d-8949-436d-bf1a-7852afbdbaee req-5eaf0aad-ef67-4404-96bf-fbee6a861a15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:35:18 np0005629333 nova_compute[244014]: 2026-02-25 12:35:18.835 244018 DEBUG oslo_concurrency.lockutils [req-a08f2b2d-8949-436d-bf1a-7852afbdbaee req-5eaf0aad-ef67-4404-96bf-fbee6a861a15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:35:18 np0005629333 nova_compute[244014]: 2026-02-25 12:35:18.836 244018 DEBUG nova.compute.manager [req-a08f2b2d-8949-436d-bf1a-7852afbdbaee req-5eaf0aad-ef67-4404-96bf-fbee6a861a15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] No waiting events found dispatching network-vif-plugged-db6ecea4-c353-4b6f-860e-95c410e7ec39 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:35:18 np0005629333 nova_compute[244014]: 2026-02-25 12:35:18.836 244018 WARNING nova.compute.manager [req-a08f2b2d-8949-436d-bf1a-7852afbdbaee req-5eaf0aad-ef67-4404-96bf-fbee6a861a15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Received unexpected event network-vif-plugged-db6ecea4-c353-4b6f-860e-95c410e7ec39 for instance with vm_state active and task_state None.
Feb 25 07:35:19 np0005629333 nova_compute[244014]: 2026-02-25 12:35:19.013 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:35:19 np0005629333 nova_compute[244014]: 2026-02-25 12:35:19.014 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:35:19 np0005629333 nova_compute[244014]: 2026-02-25 12:35:19.015 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:35:19 np0005629333 nova_compute[244014]: 2026-02-25 12:35:19.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:35:19 np0005629333 nova_compute[244014]: 2026-02-25 12:35:19.906 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:35:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 305 active+clean; 453 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 323 KiB/s wr, 125 op/s
Feb 25 07:35:21 np0005629333 nova_compute[244014]: 2026-02-25 12:35:21.299 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:35:21 np0005629333 nova_compute[244014]: 2026-02-25 12:35:21.758 244018 DEBUG nova.compute.manager [None req-88bde76d-bf73-4490-b090-91dd8d7ea7c3 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:35:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 453 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 322 KiB/s wr, 125 op/s
Feb 25 07:35:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:35:22 np0005629333 nova_compute[244014]: 2026-02-25 12:35:22.710 244018 DEBUG oslo_concurrency.lockutils [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquiring lock "e3178d86-5c76-4393-9327-2aac2cb8d81d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:35:22 np0005629333 nova_compute[244014]: 2026-02-25 12:35:22.712 244018 DEBUG oslo_concurrency.lockutils [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:35:22 np0005629333 nova_compute[244014]: 2026-02-25 12:35:22.712 244018 DEBUG oslo_concurrency.lockutils [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquiring lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:35:22 np0005629333 nova_compute[244014]: 2026-02-25 12:35:22.713 244018 DEBUG oslo_concurrency.lockutils [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:35:22 np0005629333 nova_compute[244014]: 2026-02-25 12:35:22.714 244018 DEBUG oslo_concurrency.lockutils [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:35:22 np0005629333 nova_compute[244014]: 2026-02-25 12:35:22.716 244018 INFO nova.compute.manager [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Terminating instance
Feb 25 07:35:22 np0005629333 nova_compute[244014]: 2026-02-25 12:35:22.719 244018 DEBUG nova.compute.manager [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 07:35:22 np0005629333 nova_compute[244014]: 2026-02-25 12:35:22.779 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:35:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:22.782 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:35:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:22.783 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 07:35:22 np0005629333 nova_compute[244014]: 2026-02-25 12:35:22.902 244018 DEBUG oslo_concurrency.lockutils [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "77c38424-b0a2-4d31-975a-16f265ff93fb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:35:22 np0005629333 nova_compute[244014]: 2026-02-25 12:35:22.903 244018 DEBUG oslo_concurrency.lockutils [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:35:22 np0005629333 nova_compute[244014]: 2026-02-25 12:35:22.903 244018 DEBUG oslo_concurrency.lockutils [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:35:22 np0005629333 nova_compute[244014]: 2026-02-25 12:35:22.904 244018 DEBUG oslo_concurrency.lockutils [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:35:22 np0005629333 nova_compute[244014]: 2026-02-25 12:35:22.904 244018 DEBUG oslo_concurrency.lockutils [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:35:22 np0005629333 nova_compute[244014]: 2026-02-25 12:35:22.905 244018 INFO nova.compute.manager [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Terminating instance
Feb 25 07:35:22 np0005629333 nova_compute[244014]: 2026-02-25 12:35:22.906 244018 DEBUG nova.compute.manager [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 07:35:23 np0005629333 kernel: tapdb6ecea4-c3 (unregistering): left promiscuous mode
Feb 25 07:35:23 np0005629333 NetworkManager[49836]: <info>  [1772022923.2937] device (tapdb6ecea4-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.299 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:35:23 np0005629333 ovn_controller[147040]: 2026-02-25T12:35:23Z|00883|binding|INFO|Releasing lport db6ecea4-c353-4b6f-860e-95c410e7ec39 from this chassis (sb_readonly=0)
Feb 25 07:35:23 np0005629333 ovn_controller[147040]: 2026-02-25T12:35:23Z|00884|binding|INFO|Setting lport db6ecea4-c353-4b6f-860e-95c410e7ec39 down in Southbound
Feb 25 07:35:23 np0005629333 ovn_controller[147040]: 2026-02-25T12:35:23Z|00885|binding|INFO|Removing iface tapdb6ecea4-c3 ovn-installed in OVS
Feb 25 07:35:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:23.311 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:ce:56 10.100.0.6'], port_security=['fa:16:3e:db:ce:56 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '77c38424-b0a2-4d31-975a-16f265ff93fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d1639de-d0ac-47b6-9707-253523b26c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e80b416cdd774e9483545c9e08abf805', 'neutron:revision_number': '4', 'neutron:security_group_ids': '71e5f465-6833-47b1-8565-db566c0f3b12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=031ffbbc-3d5b-436f-b976-888884cb2042, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=db6ecea4-c353-4b6f-860e-95c410e7ec39) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:35:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:23.312 157129 INFO neutron.agent.ovn.metadata.agent [-] Port db6ecea4-c353-4b6f-860e-95c410e7ec39 in datapath 9d1639de-d0ac-47b6-9707-253523b26c21 unbound from our chassis
Feb 25 07:35:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:23.314 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9d1639de-d0ac-47b6-9707-253523b26c21, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 07:35:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:23.316 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0c9b265c-b045-4633-bcdc-0e7068fddcf7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:35:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:23.316 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 namespace which is not needed anymore
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.318 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:35:23 np0005629333 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d00000057.scope: Deactivated successfully.
Feb 25 07:35:23 np0005629333 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d00000057.scope: Consumed 6.934s CPU time.
Feb 25 07:35:23 np0005629333 systemd-machined[210048]: Machine qemu-112-instance-00000057 terminated.
Feb 25 07:35:23 np0005629333 kernel: tap26c6cf46-a5 (unregistering): left promiscuous mode
Feb 25 07:35:23 np0005629333 NetworkManager[49836]: <info>  [1772022923.3910] device (tap26c6cf46-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:35:23 np0005629333 ovn_controller[147040]: 2026-02-25T12:35:23Z|00886|binding|INFO|Releasing lport 26c6cf46-a52b-4476-9112-4047f420e492 from this chassis (sb_readonly=0)
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.391 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:35:23 np0005629333 ovn_controller[147040]: 2026-02-25T12:35:23Z|00887|binding|INFO|Setting lport 26c6cf46-a52b-4476-9112-4047f420e492 down in Southbound
Feb 25 07:35:23 np0005629333 ovn_controller[147040]: 2026-02-25T12:35:23Z|00888|binding|INFO|Removing iface tap26c6cf46-a5 ovn-installed in OVS
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.393 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:35:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:23.400 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:a3:ed 10.100.0.5'], port_security=['fa:16:3e:a5:a3:ed 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'e3178d86-5c76-4393-9327-2aac2cb8d81d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfc4aeb0-f3b5-4ae8-80e8-5202af72a365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '367d43ab207546c3900a8414f0713ef4', 'neutron:revision_number': '8', 'neutron:security_group_ids': '7ed366e8-b2db-4652-98a9-640647117ebb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ed9c1bd-254e-44c0-aeae-a8dc844b47a7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=26c6cf46-a52b-4476-9112-4047f420e492) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.402 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:35:23 np0005629333 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d00000056.scope: Deactivated successfully.
Feb 25 07:35:23 np0005629333 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d00000056.scope: Consumed 11.458s CPU time.
Feb 25 07:35:23 np0005629333 systemd-machined[210048]: Machine qemu-111-instance-00000056 terminated.
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.491 244018 INFO nova.virt.libvirt.driver [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Deleting instance files /var/lib/nova/instances/19abddab-88d5-48b8-b98e-1dedccbb8b7f_del
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.491 244018 INFO nova.virt.libvirt.driver [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Deletion of /var/lib/nova/instances/19abddab-88d5-48b8-b98e-1dedccbb8b7f_del complete
Feb 25 07:35:23 np0005629333 NetworkManager[49836]: <info>  [1772022923.5342] manager: (tap26c6cf46-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/371)
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.539 244018 INFO nova.virt.libvirt.driver [-] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Instance destroyed successfully.
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.540 244018 DEBUG nova.objects.instance [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lazy-loading 'resources' on Instance uuid 77c38424-b0a2-4d31-975a-16f265ff93fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.547 244018 INFO nova.virt.libvirt.driver [-] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Instance destroyed successfully.
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.548 244018 DEBUG nova.objects.instance [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lazy-loading 'resources' on Instance uuid e3178d86-5c76-4393-9327-2aac2cb8d81d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.549 244018 INFO nova.compute.manager [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Took 13.41 seconds to destroy the instance on the hypervisor.
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.550 244018 DEBUG oslo.service.loopingcall [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.550 244018 DEBUG nova.compute.manager [-] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.550 244018 DEBUG nova.network.neutron [-] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.559 244018 DEBUG nova.virt.libvirt.vif [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:35:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-354613820',display_name='tempest-ServerDiskConfigTestJSON-server-354613820',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-354613820',id=87,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:35:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e80b416cdd774e9483545c9e08abf805',ramdisk_id='',reservation_id='r-cr7n0zq0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1145834762',owner_user_name='tempest-ServerDiskConfigTestJSON-1145834762-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:35:21Z,user_data=None,user_id='d8636d59ca0d49698907e2edb5dc4967',uuid=77c38424-b0a2-4d31-975a-16f265ff93fb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "address": "fa:16:3e:db:ce:56", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb6ecea4-c3", "ovs_interfaceid": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.560 244018 DEBUG nova.network.os_vif_util [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converting VIF {"id": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "address": "fa:16:3e:db:ce:56", "network": {"id": "9d1639de-d0ac-47b6-9707-253523b26c21", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-187103865-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e80b416cdd774e9483545c9e08abf805", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb6ecea4-c3", "ovs_interfaceid": "db6ecea4-c353-4b6f-860e-95c410e7ec39", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.560 244018 DEBUG nova.network.os_vif_util [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:ce:56,bridge_name='br-int',has_traffic_filtering=True,id=db6ecea4-c353-4b6f-860e-95c410e7ec39,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb6ecea4-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.560 244018 DEBUG os_vif [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:ce:56,bridge_name='br-int',has_traffic_filtering=True,id=db6ecea4-c353-4b6f-860e-95c410e7ec39,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb6ecea4-c3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.563 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.563 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb6ecea4-c3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.565 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.567 244018 DEBUG nova.virt.libvirt.vif [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:34:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1397178537',display_name='tempest-ServerRescueTestJSON-server-1397178537',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1397178537',id=86,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:34:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='367d43ab207546c3900a8414f0713ef4',ramdisk_id='',reservation_id='r-j34kam6o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-930018924',owner_user_name='tempest-ServerRescueTestJSON-930018924-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:35:21Z,user_data=None,user_id='cb2c815ef3214a7b897f911b4f53a146',uuid=e3178d86-5c76-4393-9327-2aac2cb8d81d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "26c6cf46-a52b-4476-9112-4047f420e492", "address": "fa:16:3e:a5:a3:ed", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c6cf46-a5", "ovs_interfaceid": "26c6cf46-a52b-4476-9112-4047f420e492", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.567 244018 DEBUG nova.network.os_vif_util [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Converting VIF {"id": "26c6cf46-a52b-4476-9112-4047f420e492", "address": "fa:16:3e:a5:a3:ed", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26c6cf46-a5", "ovs_interfaceid": "26c6cf46-a52b-4476-9112-4047f420e492", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.568 244018 DEBUG nova.network.os_vif_util [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a5:a3:ed,bridge_name='br-int',has_traffic_filtering=True,id=26c6cf46-a52b-4476-9112-4047f420e492,network=Network(dfc4aeb0-f3b5-4ae8-80e8-5202af72a365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c6cf46-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.568 244018 DEBUG os_vif [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:a3:ed,bridge_name='br-int',has_traffic_filtering=True,id=26c6cf46-a52b-4476-9112-4047f420e492,network=Network(dfc4aeb0-f3b5-4ae8-80e8-5202af72a365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c6cf46-a5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.570 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.573 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.574 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26c6cf46-a5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.575 244018 INFO os_vif [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:ce:56,bridge_name='br-int',has_traffic_filtering=True,id=db6ecea4-c353-4b6f-860e-95c410e7ec39,network=Network(9d1639de-d0ac-47b6-9707-253523b26c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb6ecea4-c3')
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.587 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.589 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.591 244018 INFO os_vif [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:a3:ed,bridge_name='br-int',has_traffic_filtering=True,id=26c6cf46-a52b-4476-9112-4047f420e492,network=Network(dfc4aeb0-f3b5-4ae8-80e8-5202af72a365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26c6cf46-a5')
Feb 25 07:35:23 np0005629333 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[322482]: [NOTICE]   (322487) : haproxy version is 2.8.14-c23fe91
Feb 25 07:35:23 np0005629333 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[322482]: [NOTICE]   (322487) : path to executable is /usr/sbin/haproxy
Feb 25 07:35:23 np0005629333 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[322482]: [WARNING]  (322487) : Exiting Master process...
Feb 25 07:35:23 np0005629333 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[322482]: [ALERT]    (322487) : Current worker (322489) exited with code 143 (Terminated)
Feb 25 07:35:23 np0005629333 neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21[322482]: [WARNING]  (322487) : All workers exited. Exiting... (0)
Feb 25 07:35:23 np0005629333 systemd[1]: libpod-16c7c0a2d235e8044621fdf5dcbe4099cc881eed8dc1ea229eaec1cdf05d1eb4.scope: Deactivated successfully.
Feb 25 07:35:23 np0005629333 podman[322528]: 2026-02-25 12:35:23.603836378 +0000 UTC m=+0.193831209 container died 16c7c0a2d235e8044621fdf5dcbe4099cc881eed8dc1ea229eaec1cdf05d1eb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.628 244018 DEBUG nova.compute.manager [req-ca9e5617-390a-47f1-8ac6-3a5af176d709 req-e6133c3d-4940-4a7b-9a47-1e8fc70ebde5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Received event network-vif-unplugged-db6ecea4-c353-4b6f-860e-95c410e7ec39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.629 244018 DEBUG oslo_concurrency.lockutils [req-ca9e5617-390a-47f1-8ac6-3a5af176d709 req-e6133c3d-4940-4a7b-9a47-1e8fc70ebde5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.629 244018 DEBUG oslo_concurrency.lockutils [req-ca9e5617-390a-47f1-8ac6-3a5af176d709 req-e6133c3d-4940-4a7b-9a47-1e8fc70ebde5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.629 244018 DEBUG oslo_concurrency.lockutils [req-ca9e5617-390a-47f1-8ac6-3a5af176d709 req-e6133c3d-4940-4a7b-9a47-1e8fc70ebde5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.630 244018 DEBUG nova.compute.manager [req-ca9e5617-390a-47f1-8ac6-3a5af176d709 req-e6133c3d-4940-4a7b-9a47-1e8fc70ebde5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] No waiting events found dispatching network-vif-unplugged-db6ecea4-c353-4b6f-860e-95c410e7ec39 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:35:23 np0005629333 nova_compute[244014]: 2026-02-25 12:35:23.630 244018 DEBUG nova.compute.manager [req-ca9e5617-390a-47f1-8ac6-3a5af176d709 req-e6133c3d-4940-4a7b-9a47-1e8fc70ebde5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Received event network-vif-unplugged-db6ecea4-c353-4b6f-860e-95c410e7ec39 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 07:35:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 305 active+clean; 407 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 334 KiB/s wr, 246 op/s
Feb 25 07:35:24 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-16c7c0a2d235e8044621fdf5dcbe4099cc881eed8dc1ea229eaec1cdf05d1eb4-userdata-shm.mount: Deactivated successfully.
Feb 25 07:35:24 np0005629333 systemd[1]: var-lib-containers-storage-overlay-4fd22ca02a4a701e91c040bc18e8d941f5c4b35115a09169483e243124fa15c6-merged.mount: Deactivated successfully.
Feb 25 07:35:24 np0005629333 nova_compute[244014]: 2026-02-25 12:35:24.675 244018 DEBUG nova.network.neutron [-] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:35:24 np0005629333 podman[322528]: 2026-02-25 12:35:24.693148294 +0000 UTC m=+1.283143125 container cleanup 16c7c0a2d235e8044621fdf5dcbe4099cc881eed8dc1ea229eaec1cdf05d1eb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:35:24 np0005629333 nova_compute[244014]: 2026-02-25 12:35:24.701 244018 INFO nova.compute.manager [-] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Took 1.15 seconds to deallocate network for instance.
Feb 25 07:35:24 np0005629333 systemd[1]: libpod-conmon-16c7c0a2d235e8044621fdf5dcbe4099cc881eed8dc1ea229eaec1cdf05d1eb4.scope: Deactivated successfully.
Feb 25 07:35:24 np0005629333 nova_compute[244014]: 2026-02-25 12:35:24.745 244018 DEBUG oslo_concurrency.lockutils [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:35:24 np0005629333 nova_compute[244014]: 2026-02-25 12:35:24.746 244018 DEBUG oslo_concurrency.lockutils [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:35:24 np0005629333 nova_compute[244014]: 2026-02-25 12:35:24.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:35:24 np0005629333 nova_compute[244014]: 2026-02-25 12:35:24.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 07:35:24 np0005629333 nova_compute[244014]: 2026-02-25 12:35:24.878 244018 DEBUG oslo_concurrency.processutils [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:35:24 np0005629333 nova_compute[244014]: 2026-02-25 12:35:24.910 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:35:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:35:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1576377988' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:35:25 np0005629333 nova_compute[244014]: 2026-02-25 12:35:25.414 244018 DEBUG oslo_concurrency.processutils [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:35:25 np0005629333 nova_compute[244014]: 2026-02-25 12:35:25.420 244018 DEBUG nova.compute.provider_tree [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:35:25 np0005629333 nova_compute[244014]: 2026-02-25 12:35:25.445 244018 DEBUG nova.scheduler.client.report [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:35:25 np0005629333 nova_compute[244014]: 2026-02-25 12:35:25.475 244018 DEBUG oslo_concurrency.lockutils [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:35:25 np0005629333 nova_compute[244014]: 2026-02-25 12:35:25.500 244018 INFO nova.scheduler.client.report [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Deleted allocations for instance 19abddab-88d5-48b8-b98e-1dedccbb8b7f
Feb 25 07:35:25 np0005629333 nova_compute[244014]: 2026-02-25 12:35:25.583 244018 DEBUG oslo_concurrency.lockutils [None req-26cfc309-1998-4626-bd1e-a9da07efffaa 84ba7d5e80a44535b25853f3b18e352d 0d34ca23436b401fbaeb0b01190a440a - - default default] Lock "19abddab-88d5-48b8-b98e-1dedccbb8b7f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 15.450s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:35:25 np0005629333 nova_compute[244014]: 2026-02-25 12:35:25.737 244018 DEBUG nova.compute.manager [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Received event network-vif-plugged-db6ecea4-c353-4b6f-860e-95c410e7ec39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:35:25 np0005629333 nova_compute[244014]: 2026-02-25 12:35:25.738 244018 DEBUG oslo_concurrency.lockutils [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:35:25 np0005629333 nova_compute[244014]: 2026-02-25 12:35:25.738 244018 DEBUG oslo_concurrency.lockutils [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:35:25 np0005629333 nova_compute[244014]: 2026-02-25 12:35:25.739 244018 DEBUG oslo_concurrency.lockutils [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:35:25 np0005629333 nova_compute[244014]: 2026-02-25 12:35:25.739 244018 DEBUG nova.compute.manager [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] No waiting events found dispatching network-vif-plugged-db6ecea4-c353-4b6f-860e-95c410e7ec39 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:35:25 np0005629333 nova_compute[244014]: 2026-02-25 12:35:25.739 244018 WARNING nova.compute.manager [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Received unexpected event network-vif-plugged-db6ecea4-c353-4b6f-860e-95c410e7ec39 for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:35:25 np0005629333 nova_compute[244014]: 2026-02-25 12:35:25.739 244018 DEBUG nova.compute.manager [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received event network-vif-unplugged-26c6cf46-a52b-4476-9112-4047f420e492 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:35:25 np0005629333 nova_compute[244014]: 2026-02-25 12:35:25.740 244018 DEBUG oslo_concurrency.lockutils [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:35:25 np0005629333 nova_compute[244014]: 2026-02-25 12:35:25.740 244018 DEBUG oslo_concurrency.lockutils [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:35:25 np0005629333 nova_compute[244014]: 2026-02-25 12:35:25.740 244018 DEBUG oslo_concurrency.lockutils [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:35:25 np0005629333 nova_compute[244014]: 2026-02-25 12:35:25.740 244018 DEBUG nova.compute.manager [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] No waiting events found dispatching network-vif-unplugged-26c6cf46-a52b-4476-9112-4047f420e492 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:35:25 np0005629333 nova_compute[244014]: 2026-02-25 12:35:25.740 244018 DEBUG nova.compute.manager [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received event network-vif-unplugged-26c6cf46-a52b-4476-9112-4047f420e492 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:35:25 np0005629333 nova_compute[244014]: 2026-02-25 12:35:25.741 244018 DEBUG nova.compute.manager [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Received event network-vif-deleted-d6f9abb7-ac51-44f8-88ae-b5a8ef3b6536 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:35:25 np0005629333 nova_compute[244014]: 2026-02-25 12:35:25.741 244018 DEBUG nova.compute.manager [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received event network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:35:25 np0005629333 nova_compute[244014]: 2026-02-25 12:35:25.741 244018 DEBUG oslo_concurrency.lockutils [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:35:25 np0005629333 nova_compute[244014]: 2026-02-25 12:35:25.741 244018 DEBUG oslo_concurrency.lockutils [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:35:25 np0005629333 nova_compute[244014]: 2026-02-25 12:35:25.741 244018 DEBUG oslo_concurrency.lockutils [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:35:25 np0005629333 nova_compute[244014]: 2026-02-25 12:35:25.742 244018 DEBUG nova.compute.manager [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] No waiting events found dispatching network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:35:25 np0005629333 nova_compute[244014]: 2026-02-25 12:35:25.742 244018 WARNING nova.compute.manager [req-548c1684-3902-4326-a852-9d312be5c4ff req-d307427d-8ef2-4368-a3f3-41600f812a27 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received unexpected event network-vif-plugged-26c6cf46-a52b-4476-9112-4047f420e492 for instance with vm_state active and task_state deleting.#033[00m
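[annotation] The "No waiting events found ... Received unexpected event" pairs above come from nova's external-event dispatcher: neutron posts network-vif-* notifications, and the compute manager pops a matching waiter under a per-instance lock, warning when nothing is waiting (as happens here because both instances are already in task_state deleting). A stripped-down sketch of that pop-under-lock pattern (names are illustrative, not nova's internals):

    import threading

    _events = {}            # (instance_uuid, event_name) -> threading.Event
    _lock = threading.Lock()

    def pop_instance_event(instance_uuid, event_name):
        with _lock:  # mirrors the "<uuid>-events" lock in the log
            waiter = _events.pop((instance_uuid, event_name), None)
        if waiter is None:
            print(f"WARNING: unexpected event {event_name} for {instance_uuid}")
            return
        waiter.set()  # wake whichever thread was blocked on this event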
Feb 25 07:35:26 np0005629333 podman[322623]: 2026-02-25 12:35:26.028059961 +0000 UTC m=+1.312255648 container remove 16c7c0a2d235e8044621fdf5dcbe4099cc881eed8dc1ea229eaec1cdf05d1eb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223)
Feb 25 07:35:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:26.036 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[41c72b26-6659-4ee6-85ed-556337bce0ef]: (4, ('Wed Feb 25 12:35:23 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 (16c7c0a2d235e8044621fdf5dcbe4099cc881eed8dc1ea229eaec1cdf05d1eb4)\n16c7c0a2d235e8044621fdf5dcbe4099cc881eed8dc1ea229eaec1cdf05d1eb4\nWed Feb 25 12:35:24 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 (16c7c0a2d235e8044621fdf5dcbe4099cc881eed8dc1ea229eaec1cdf05d1eb4)\n16c7c0a2d235e8044621fdf5dcbe4099cc881eed8dc1ea229eaec1cdf05d1eb4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:26.037 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[27828008-f07d-4a5e-a499-b103b24583a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:26.038 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d1639de-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
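[annotation] The DelPortCommand transaction above is ovsdbapp removing the metadata tap from OVS. A roughly equivalent standalone sketch against the local switch (the socket path and timeout are assumptions; if_exists=True matches the logged command, so an already-removed port is not an error):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    api.del_port("tap9d1639de-d0", if_exists=True).execute(check_error=True)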
Feb 25 07:35:26 np0005629333 nova_compute[244014]: 2026-02-25 12:35:26.040 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:26 np0005629333 kernel: tap9d1639de-d0: left promiscuous mode
Feb 25 07:35:26 np0005629333 nova_compute[244014]: 2026-02-25 12:35:26.046 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:26.049 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[54d5be6a-2e85-4c14-bc30-9d331f7791b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 305 active+clean; 407 MiB data, 874 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 25 KiB/s wr, 152 op/s
Feb 25 07:35:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:26.060 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4cab8623-5c25-4105-9e39-46b79a2ecc3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:26.062 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[913aedd8-c900-4eb0-a81c-8d10a81df0ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:26.079 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c4d10f4d-e4d3-4bcc-a4e8-7e81937ebeda]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488509, 'reachable_time': 35625, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322661, 'error': None, 'target': 'ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:26.083 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:35:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:26.083 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[2f3783fe-9054-4ef1-a556-ef1dd17483c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:26 np0005629333 systemd[1]: run-netns-ovnmeta\x2d9d1639de\x2dd0ac\x2d47b6\x2d9707\x2d253523b26c21.mount: Deactivated successfully.
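[annotation] The lines above show neutron's privileged helper removing the ovnmeta-* namespace once its datapath has no metadata port, after which systemd reaps the corresponding bind-mount unit. A minimal sketch of the same removal with pyroute2 (which neutron's ip_lib uses underneath), assuming root privileges on the host:

    from pyroute2 import netns

    ns = "ovnmeta-9d1639de-d0ac-47b6-9707-253523b26c21"
    if ns in netns.listnetns():
        # Unlinks /var/run/netns/<ns>; systemd then deactivates the
        # run-netns-*.mount unit, as in the log line above.
        netns.remove(ns)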
Feb 25 07:35:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:26.085 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 26c6cf46-a52b-4476-9112-4047f420e492 in datapath dfc4aeb0-f3b5-4ae8-80e8-5202af72a365 unbound from our chassis#033[00m
Feb 25 07:35:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:26.086 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network dfc4aeb0-f3b5-4ae8-80e8-5202af72a365 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 25 07:35:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:26.087 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[09bd3a41-3da8-411a-9d09-74ea07c14995]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:26 np0005629333 nova_compute[244014]: 2026-02-25 12:35:26.168 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022911.1674278, 19abddab-88d5-48b8-b98e-1dedccbb8b7f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:35:26 np0005629333 nova_compute[244014]: 2026-02-25 12:35:26.169 244018 INFO nova.compute.manager [-] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:35:26 np0005629333 nova_compute[244014]: 2026-02-25 12:35:26.214 244018 DEBUG nova.compute.manager [None req-0533052a-d532-40ae-a847-c9f23d06a869 - - - - - -] [instance: 19abddab-88d5-48b8-b98e-1dedccbb8b7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:35:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:35:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 357 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 26 KiB/s wr, 176 op/s
Feb 25 07:35:28 np0005629333 nova_compute[244014]: 2026-02-25 12:35:28.577 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:29.785 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:35:29 np0005629333 nova_compute[244014]: 2026-02-25 12:35:29.909 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 357 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 13 KiB/s wr, 144 op/s
Feb 25 07:35:30 np0005629333 nova_compute[244014]: 2026-02-25 12:35:30.904 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:35:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:35:30
Feb 25 07:35:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:35:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:35:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['.rgw.root', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'images', '.mgr', 'volumes', 'default.rgw.log']
Feb 25 07:35:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:35:31 np0005629333 nova_compute[244014]: 2026-02-25 12:35:31.147 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:35:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:35:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:35:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:35:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:35:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:35:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:35:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:35:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:35:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:35:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:35:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:35:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:35:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:35:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:35:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:35:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1612: 305 pgs: 305 active+clean; 302 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 14 KiB/s wr, 169 op/s
Feb 25 07:35:32 np0005629333 nova_compute[244014]: 2026-02-25 12:35:32.497 244018 INFO nova.virt.libvirt.driver [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Deleting instance files /var/lib/nova/instances/e3178d86-5c76-4393-9327-2aac2cb8d81d_del#033[00m
Feb 25 07:35:32 np0005629333 nova_compute[244014]: 2026-02-25 12:35:32.498 244018 INFO nova.virt.libvirt.driver [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Deletion of /var/lib/nova/instances/e3178d86-5c76-4393-9327-2aac2cb8d81d_del complete#033[00m
Feb 25 07:35:32 np0005629333 nova_compute[244014]: 2026-02-25 12:35:32.505 244018 INFO nova.virt.libvirt.driver [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Deleting instance files /var/lib/nova/instances/77c38424-b0a2-4d31-975a-16f265ff93fb_del#033[00m
Feb 25 07:35:32 np0005629333 nova_compute[244014]: 2026-02-25 12:35:32.506 244018 INFO nova.virt.libvirt.driver [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Deletion of /var/lib/nova/instances/77c38424-b0a2-4d31-975a-16f265ff93fb_del complete#033[00m
Feb 25 07:35:32 np0005629333 nova_compute[244014]: 2026-02-25 12:35:32.573 244018 INFO nova.compute.manager [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Took 9.85 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:35:32 np0005629333 nova_compute[244014]: 2026-02-25 12:35:32.574 244018 DEBUG oslo.service.loopingcall [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:35:32 np0005629333 nova_compute[244014]: 2026-02-25 12:35:32.574 244018 DEBUG nova.compute.manager [-] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:35:32 np0005629333 nova_compute[244014]: 2026-02-25 12:35:32.575 244018 DEBUG nova.network.neutron [-] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:35:32 np0005629333 nova_compute[244014]: 2026-02-25 12:35:32.579 244018 INFO nova.compute.manager [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Took 9.67 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:35:32 np0005629333 nova_compute[244014]: 2026-02-25 12:35:32.580 244018 DEBUG oslo.service.loopingcall [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:35:32 np0005629333 nova_compute[244014]: 2026-02-25 12:35:32.581 244018 DEBUG nova.compute.manager [-] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:35:32 np0005629333 nova_compute[244014]: 2026-02-25 12:35:32.581 244018 DEBUG nova.network.neutron [-] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
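[annotation] The "_deallocate_network_with_retries" waits above wrap the neutron call in an oslo.service looping call so transient failures are retried rather than failing the teardown. Nova uses a backoff variant; a fixed-interval sketch shows the same wait-for-completion shape (the interval, retry budget, and deallocate_for_instance stub are all illustrative):

    from oslo_service import loopingcall

    attempts = {"n": 0}

    def deallocate_for_instance():
        # Hypothetical stand-in for the neutron call being retried.
        if attempts["n"] < 2:
            raise RuntimeError("transient neutron error")

    def _deallocate():
        attempts["n"] += 1
        try:
            deallocate_for_instance()
        except RuntimeError:
            return  # leave the loop running; retry on the next interval
        raise loopingcall.LoopingCallDone()  # success: unblock the waiter

    # .start() returns an event; .wait() blocks like the
    # "Waiting for function ... to return" log lines above.
    loopingcall.FixedIntervalLoopingCall(_deallocate).start(interval=2).wait()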
Feb 25 07:35:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:35:33 np0005629333 nova_compute[244014]: 2026-02-25 12:35:33.582 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:33 np0005629333 nova_compute[244014]: 2026-02-25 12:35:33.901 244018 DEBUG nova.network.neutron [-] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:35:33 np0005629333 nova_compute[244014]: 2026-02-25 12:35:33.920 244018 INFO nova.compute.manager [-] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Took 1.35 seconds to deallocate network for instance.#033[00m
Feb 25 07:35:33 np0005629333 nova_compute[244014]: 2026-02-25 12:35:33.969 244018 DEBUG oslo_concurrency.lockutils [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:35:33 np0005629333 nova_compute[244014]: 2026-02-25 12:35:33.969 244018 DEBUG oslo_concurrency.lockutils [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:35:34 np0005629333 nova_compute[244014]: 2026-02-25 12:35:34.029 244018 DEBUG nova.network.neutron [-] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:35:34 np0005629333 nova_compute[244014]: 2026-02-25 12:35:34.050 244018 DEBUG oslo_concurrency.processutils [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:35:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 305 active+clean; 281 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 14 KiB/s wr, 174 op/s
Feb 25 07:35:34 np0005629333 nova_compute[244014]: 2026-02-25 12:35:34.078 244018 INFO nova.compute.manager [-] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Took 1.50 seconds to deallocate network for instance.#033[00m
Feb 25 07:35:34 np0005629333 nova_compute[244014]: 2026-02-25 12:35:34.181 244018 DEBUG oslo_concurrency.lockutils [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:35:34 np0005629333 nova_compute[244014]: 2026-02-25 12:35:34.253 244018 DEBUG nova.compute.manager [req-078bf21a-77d8-49dd-bb6f-1e65fd13c4e8 req-06a4fb17-b06c-4ade-bbca-8892cd520480 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Received event network-vif-deleted-26c6cf46-a52b-4476-9112-4047f420e492 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:35:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:35:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2806862022' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:35:34 np0005629333 nova_compute[244014]: 2026-02-25 12:35:34.619 244018 DEBUG oslo_concurrency.processutils [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:35:34 np0005629333 nova_compute[244014]: 2026-02-25 12:35:34.625 244018 DEBUG nova.compute.provider_tree [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:35:34 np0005629333 nova_compute[244014]: 2026-02-25 12:35:34.668 244018 DEBUG nova.scheduler.client.report [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:35:34 np0005629333 nova_compute[244014]: 2026-02-25 12:35:34.774 244018 DEBUG oslo_concurrency.lockutils [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:35:34 np0005629333 nova_compute[244014]: 2026-02-25 12:35:34.776 244018 DEBUG oslo_concurrency.lockutils [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:35:34 np0005629333 nova_compute[244014]: 2026-02-25 12:35:34.863 244018 DEBUG oslo_concurrency.processutils [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:35:34 np0005629333 nova_compute[244014]: 2026-02-25 12:35:34.898 244018 INFO nova.scheduler.client.report [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Deleted allocations for instance e3178d86-5c76-4393-9327-2aac2cb8d81d#033[00m
Feb 25 07:35:34 np0005629333 nova_compute[244014]: 2026-02-25 12:35:34.911 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:34 np0005629333 nova_compute[244014]: 2026-02-25 12:35:34.965 244018 DEBUG oslo_concurrency.lockutils [None req-5eb63f15-1bf4-4e86-9613-e00d57678a23 cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "e3178d86-5c76-4393-9327-2aac2cb8d81d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.254s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:35:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:35:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/46961917' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:35:35 np0005629333 nova_compute[244014]: 2026-02-25 12:35:35.481 244018 DEBUG oslo_concurrency.processutils [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:35:35 np0005629333 nova_compute[244014]: 2026-02-25 12:35:35.486 244018 DEBUG nova.compute.provider_tree [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:35:35 np0005629333 nova_compute[244014]: 2026-02-25 12:35:35.501 244018 DEBUG nova.scheduler.client.report [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:35:35 np0005629333 nova_compute[244014]: 2026-02-25 12:35:35.526 244018 DEBUG oslo_concurrency.lockutils [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:35:35 np0005629333 nova_compute[244014]: 2026-02-25 12:35:35.570 244018 INFO nova.scheduler.client.report [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Deleted allocations for instance 77c38424-b0a2-4d31-975a-16f265ff93fb#033[00m
Feb 25 07:35:35 np0005629333 nova_compute[244014]: 2026-02-25 12:35:35.657 244018 DEBUG oslo_concurrency.lockutils [None req-27c23c65-9a2b-4874-b2fe-90818c7fc998 d8636d59ca0d49698907e2edb5dc4967 e80b416cdd774e9483545c9e08abf805 - - default default] Lock "77c38424-b0a2-4d31-975a-16f265ff93fb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:35:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 305 active+clean; 281 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 53 op/s
Feb 25 07:35:36 np0005629333 nova_compute[244014]: 2026-02-25 12:35:36.127 244018 DEBUG oslo_concurrency.lockutils [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquiring lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:35:36 np0005629333 nova_compute[244014]: 2026-02-25 12:35:36.128 244018 DEBUG oslo_concurrency.lockutils [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:35:36 np0005629333 nova_compute[244014]: 2026-02-25 12:35:36.129 244018 DEBUG oslo_concurrency.lockutils [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquiring lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:35:36 np0005629333 nova_compute[244014]: 2026-02-25 12:35:36.129 244018 DEBUG oslo_concurrency.lockutils [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:35:36 np0005629333 nova_compute[244014]: 2026-02-25 12:35:36.129 244018 DEBUG oslo_concurrency.lockutils [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:35:36 np0005629333 nova_compute[244014]: 2026-02-25 12:35:36.130 244018 INFO nova.compute.manager [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Terminating instance#033[00m
Feb 25 07:35:36 np0005629333 nova_compute[244014]: 2026-02-25 12:35:36.131 244018 DEBUG nova.compute.manager [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:35:36 np0005629333 nova_compute[244014]: 2026-02-25 12:35:36.438 244018 DEBUG nova.compute.manager [req-647815c2-c9a8-4c8c-80d8-53fb3adc66cd req-f92d57aa-1fb2-410a-bb5c-a5cd0c9f476f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Received event network-vif-deleted-db6ecea4-c353-4b6f-860e-95c410e7ec39 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:35:36 np0005629333 kernel: tape7482fdc-97 (unregistering): left promiscuous mode
Feb 25 07:35:36 np0005629333 NetworkManager[49836]: <info>  [1772022936.4752] device (tape7482fdc-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:35:36 np0005629333 nova_compute[244014]: 2026-02-25 12:35:36.480 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:36 np0005629333 ovn_controller[147040]: 2026-02-25T12:35:36Z|00889|binding|INFO|Releasing lport e7482fdc-9792-451e-adf6-a816dd7113c8 from this chassis (sb_readonly=0)
Feb 25 07:35:36 np0005629333 ovn_controller[147040]: 2026-02-25T12:35:36Z|00890|binding|INFO|Setting lport e7482fdc-9792-451e-adf6-a816dd7113c8 down in Southbound
Feb 25 07:35:36 np0005629333 ovn_controller[147040]: 2026-02-25T12:35:36Z|00891|binding|INFO|Removing iface tape7482fdc-97 ovn-installed in OVS
Feb 25 07:35:36 np0005629333 nova_compute[244014]: 2026-02-25 12:35:36.483 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:36 np0005629333 nova_compute[244014]: 2026-02-25 12:35:36.489 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:36.500 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:aa:11 10.100.0.12'], port_security=['fa:16:3e:56:aa:11 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'aeaad9e2-4ad0-46fb-b619-7ca2c78443a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dfc4aeb0-f3b5-4ae8-80e8-5202af72a365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '367d43ab207546c3900a8414f0713ef4', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7ed366e8-b2db-4652-98a9-640647117ebb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ed9c1bd-254e-44c0-aeae-a8dc844b47a7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=e7482fdc-9792-451e-adf6-a816dd7113c8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
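[annotation] The "Matched UPDATE: PortBindingUpdatedEvent" line above is ovsdbapp's row-event machinery: the metadata agent registers an event against the Port_Binding table and is called back when a row update matches. A skeleton of such a handler (the class body is illustrative; neutron's real event also checks which chassis the port moved to or from):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Fire on updates to Port_Binding rows, with no extra conditions,
            # matching the (events=('update',), table='Port_Binding') in the log.
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def run(self, event, row, old):
            # 'old' carries the prior values of changed columns,
            # e.g. old.chassis / old.up as seen in the logged match.
            print(f"port {row.logical_port} changed on event {event}")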
Feb 25 07:35:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:36.501 157129 INFO neutron.agent.ovn.metadata.agent [-] Port e7482fdc-9792-451e-adf6-a816dd7113c8 in datapath dfc4aeb0-f3b5-4ae8-80e8-5202af72a365 unbound from our chassis#033[00m
Feb 25 07:35:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:36.502 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network dfc4aeb0-f3b5-4ae8-80e8-5202af72a365 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 25 07:35:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:36.504 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bcbe935e-ccef-4c31-b218-dda2c2c1edb2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:35:36 np0005629333 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000054.scope: Deactivated successfully.
Feb 25 07:35:36 np0005629333 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000054.scope: Consumed 13.887s CPU time.
Feb 25 07:35:36 np0005629333 systemd-machined[210048]: Machine qemu-104-instance-00000054 terminated.
Feb 25 07:35:36 np0005629333 nova_compute[244014]: 2026-02-25 12:35:36.576 244018 INFO nova.virt.libvirt.driver [-] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Instance destroyed successfully.#033[00m
Feb 25 07:35:36 np0005629333 nova_compute[244014]: 2026-02-25 12:35:36.576 244018 DEBUG nova.objects.instance [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lazy-loading 'resources' on Instance uuid aeaad9e2-4ad0-46fb-b619-7ca2c78443a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:35:36 np0005629333 nova_compute[244014]: 2026-02-25 12:35:36.591 244018 DEBUG nova.virt.libvirt.vif [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:33:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-842091875',display_name='tempest-ServerRescueTestJSON-server-842091875',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-842091875',id=84,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:34:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='367d43ab207546c3900a8414f0713ef4',ramdisk_id='',reservation_id='r-aun4358p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-930018924',owner_user_name='tempest-ServerRescueTestJSON-930018924-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:34:21Z,user_data=None,user_id='cb2c815ef3214a7b897f911b4f53a146',uuid=aeaad9e2-4ad0-46fb-b619-7ca2c78443a8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "e7482fdc-9792-451e-adf6-a816dd7113c8", "address": "fa:16:3e:56:aa:11", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7482fdc-97", "ovs_interfaceid": "e7482fdc-9792-451e-adf6-a816dd7113c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 07:35:36 np0005629333 nova_compute[244014]: 2026-02-25 12:35:36.591 244018 DEBUG nova.network.os_vif_util [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Converting VIF {"id": "e7482fdc-9792-451e-adf6-a816dd7113c8", "address": "fa:16:3e:56:aa:11", "network": {"id": "dfc4aeb0-f3b5-4ae8-80e8-5202af72a365", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1416447925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "367d43ab207546c3900a8414f0713ef4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape7482fdc-97", "ovs_interfaceid": "e7482fdc-9792-451e-adf6-a816dd7113c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:35:36 np0005629333 nova_compute[244014]: 2026-02-25 12:35:36.592 244018 DEBUG nova.network.os_vif_util [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:56:aa:11,bridge_name='br-int',has_traffic_filtering=True,id=e7482fdc-9792-451e-adf6-a816dd7113c8,network=Network(dfc4aeb0-f3b5-4ae8-80e8-5202af72a365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7482fdc-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:35:36 np0005629333 nova_compute[244014]: 2026-02-25 12:35:36.592 244018 DEBUG os_vif [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:aa:11,bridge_name='br-int',has_traffic_filtering=True,id=e7482fdc-9792-451e-adf6-a816dd7113c8,network=Network(dfc4aeb0-f3b5-4ae8-80e8-5202af72a365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7482fdc-97') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 07:35:36 np0005629333 nova_compute[244014]: 2026-02-25 12:35:36.593 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:35:36 np0005629333 nova_compute[244014]: 2026-02-25 12:35:36.593 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7482fdc-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:35:36 np0005629333 nova_compute[244014]: 2026-02-25 12:35:36.595 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:35:36 np0005629333 nova_compute[244014]: 2026-02-25 12:35:36.596 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:35:36 np0005629333 nova_compute[244014]: 2026-02-25 12:35:36.598 244018 INFO os_vif [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:aa:11,bridge_name='br-int',has_traffic_filtering=True,id=e7482fdc-9792-451e-adf6-a816dd7113c8,network=Network(dfc4aeb0-f3b5-4ae8-80e8-5202af72a365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape7482fdc-97')
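The unplug sequence above is os-vif tearing down the OVS side of the port: Nova's VIF dict is converted to a VIFOpenVSwitch object, and the ovs plugin then removes tape7482fdc-97 from br-int in a single ovsdbapp transaction (the logged DelPortCommand with if_exists=True). A minimal sketch of the same OVSDB call through ovsdbapp's public API, assuming a local ovsdb-server on the usual unix:/run/openvswitch/db.sock socket:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect the Open_vSwitch schema IDL to the local switch database.
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Equivalent of the logged DelPortCommand: drop the port from br-int,
    # tolerating the case where it is already gone.
    api.del_port('tape7482fdc-97', bridge='br-int',
                 if_exists=True).execute(check_error=True)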
Feb 25 07:35:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:35:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 305 active+clean; 281 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 8.0 KiB/s wr, 61 op/s
Feb 25 07:35:38 np0005629333 nova_compute[244014]: 2026-02-25 12:35:38.538 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022923.53617, 77c38424-b0a2-4d31-975a-16f265ff93fb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:35:38 np0005629333 nova_compute[244014]: 2026-02-25 12:35:38.538 244018 INFO nova.compute.manager [-] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] VM Stopped (Lifecycle Event)
Feb 25 07:35:38 np0005629333 nova_compute[244014]: 2026-02-25 12:35:38.545 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022923.5447984, e3178d86-5c76-4393-9327-2aac2cb8d81d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:35:38 np0005629333 nova_compute[244014]: 2026-02-25 12:35:38.546 244018 INFO nova.compute.manager [-] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] VM Stopped (Lifecycle Event)
Feb 25 07:35:38 np0005629333 nova_compute[244014]: 2026-02-25 12:35:38.573 244018 DEBUG nova.compute.manager [None req-83513475-46d9-4e17-8a52-63011aaca117 - - - - - -] [instance: e3178d86-5c76-4393-9327-2aac2cb8d81d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:35:38 np0005629333 nova_compute[244014]: 2026-02-25 12:35:38.575 244018 DEBUG nova.compute.manager [None req-cba4b95b-37d9-4184-b7fc-d7c21dc205bb - - - - - -] [instance: 77c38424-b0a2-4d31-975a-16f265ff93fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:35:38 np0005629333 nova_compute[244014]: 2026-02-25 12:35:38.613 244018 DEBUG nova.compute.manager [req-817c6610-f406-4117-8372-832ec2d9978c req-574d7732-b594-4b14-8daf-b39320f442f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Received event network-vif-unplugged-e7482fdc-9792-451e-adf6-a816dd7113c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:35:38 np0005629333 nova_compute[244014]: 2026-02-25 12:35:38.614 244018 DEBUG oslo_concurrency.lockutils [req-817c6610-f406-4117-8372-832ec2d9978c req-574d7732-b594-4b14-8daf-b39320f442f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:35:38 np0005629333 nova_compute[244014]: 2026-02-25 12:35:38.614 244018 DEBUG oslo_concurrency.lockutils [req-817c6610-f406-4117-8372-832ec2d9978c req-574d7732-b594-4b14-8daf-b39320f442f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:35:38 np0005629333 nova_compute[244014]: 2026-02-25 12:35:38.614 244018 DEBUG oslo_concurrency.lockutils [req-817c6610-f406-4117-8372-832ec2d9978c req-574d7732-b594-4b14-8daf-b39320f442f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:35:38 np0005629333 nova_compute[244014]: 2026-02-25 12:35:38.615 244018 DEBUG nova.compute.manager [req-817c6610-f406-4117-8372-832ec2d9978c req-574d7732-b594-4b14-8daf-b39320f442f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] No waiting events found dispatching network-vif-unplugged-e7482fdc-9792-451e-adf6-a816dd7113c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:35:38 np0005629333 nova_compute[244014]: 2026-02-25 12:35:38.615 244018 DEBUG nova.compute.manager [req-817c6610-f406-4117-8372-832ec2d9978c req-574d7732-b594-4b14-8daf-b39320f442f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Received event network-vif-unplugged-e7482fdc-9792-451e-adf6-a816dd7113c8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 07:35:38 np0005629333 nova_compute[244014]: 2026-02-25 12:35:38.615 244018 DEBUG nova.compute.manager [req-817c6610-f406-4117-8372-832ec2d9978c req-574d7732-b594-4b14-8daf-b39320f442f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Received event network-vif-plugged-e7482fdc-9792-451e-adf6-a816dd7113c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:35:38 np0005629333 nova_compute[244014]: 2026-02-25 12:35:38.616 244018 DEBUG oslo_concurrency.lockutils [req-817c6610-f406-4117-8372-832ec2d9978c req-574d7732-b594-4b14-8daf-b39320f442f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:35:38 np0005629333 nova_compute[244014]: 2026-02-25 12:35:38.616 244018 DEBUG oslo_concurrency.lockutils [req-817c6610-f406-4117-8372-832ec2d9978c req-574d7732-b594-4b14-8daf-b39320f442f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:35:38 np0005629333 nova_compute[244014]: 2026-02-25 12:35:38.617 244018 DEBUG oslo_concurrency.lockutils [req-817c6610-f406-4117-8372-832ec2d9978c req-574d7732-b594-4b14-8daf-b39320f442f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:35:38 np0005629333 nova_compute[244014]: 2026-02-25 12:35:38.617 244018 DEBUG nova.compute.manager [req-817c6610-f406-4117-8372-832ec2d9978c req-574d7732-b594-4b14-8daf-b39320f442f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] No waiting events found dispatching network-vif-plugged-e7482fdc-9792-451e-adf6-a816dd7113c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:35:38 np0005629333 nova_compute[244014]: 2026-02-25 12:35:38.617 244018 WARNING nova.compute.manager [req-817c6610-f406-4117-8372-832ec2d9978c req-574d7732-b594-4b14-8daf-b39320f442f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Received unexpected event network-vif-plugged-e7482fdc-9792-451e-adf6-a816dd7113c8 for instance with vm_state rescued and task_state deleting.
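The acquire/release bursts around each Neutron notification show how Nova serializes external events per instance: a "<uuid>-events" lock guards the table of waiters, any registered waiter is popped under that lock, and an event nobody is waiting for is only logged (the network-vif-plugged warning above is harmless here since the instance is already being deleted). A minimal sketch of the pattern with oslo.concurrency; the _waiters table and function names are illustrative, not Nova's actual structures:

    import threading
    from oslo_concurrency import lockutils

    _waiters = {}  # (instance_uuid, event_tag) -> threading.Event

    def pop_instance_event(instance_uuid, event_tag):
        # One lock per instance, matching the "<uuid>-events" locks in the log.
        with lockutils.lock('%s-events' % instance_uuid):
            waiter = _waiters.pop((instance_uuid, event_tag), None)
        if waiter is None:
            print('No waiting events found dispatching %s' % event_tag)
        else:
            waiter.set()  # wake whichever thread is blocked on this event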
Feb 25 07:35:39 np0005629333 nova_compute[244014]: 2026-02-25 12:35:39.912 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:35:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 281 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 7.0 KiB/s wr, 37 op/s
Feb 25 07:35:41 np0005629333 nova_compute[244014]: 2026-02-25 12:35:41.596 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:35:41 np0005629333 podman[322746]: 2026-02-25 12:35:41.766496094 +0000 UTC m=+0.065526763 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent)
Feb 25 07:35:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1617: 305 pgs: 305 active+clean; 228 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 7.7 KiB/s wr, 48 op/s
Feb 25 07:35:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:35:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:35:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:35:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:35:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005990409087240522 of space, bias 1.0, pg target 0.17971227261721565 quantized to 32 (current 32)
Feb 25 07:35:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:35:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:35:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:35:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:35:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:35:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024928472979332526 of space, bias 1.0, pg target 0.7478541893799757 quantized to 32 (current 32)
Feb 25 07:35:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:35:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.930829390391495e-07 of space, bias 4.0, pg target 0.0009516995268469793 quantized to 16 (current 16)
Feb 25 07:35:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:35:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:35:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:35:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:35:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:35:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:35:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:35:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:35:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:35:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
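The autoscaler figures above are internally consistent: each pool's raw pg target is its share of used capacity, times its bias, times the cluster-wide PG budget, and these numbers imply a budget of 300 (e.g. three OSDs at the default mon_target_pg_per_osd of 100). For 'vms', 0.0005990409 x 1.0 x 300 = 0.17971; for 'cephfs.cephfs.meta', 7.9308e-07 x 4.0 x 300 = 0.00095170; both match the log before each value is quantized to a power of two and clamped by the pool's pg_num minimum. A quick check, with the 300-PG budget inferred from the figures rather than stated in the log:

    def raw_pg_target(usage_ratio, bias, osds=3, target_pg_per_osd=100):
        # Raw value before power-of-two quantization and min/max clamping.
        return usage_ratio * bias * osds * target_pg_per_osd

    print(raw_pg_target(0.0005990409087240522, 1.0))  # ~0.17971  (pool 'vms')
    print(raw_pg_target(7.930829390391495e-07, 4.0))  # ~0.00095  (cephfs meta)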
Feb 25 07:35:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:35:43 np0005629333 podman[322767]: 2026-02-25 12:35:43.72654674 +0000 UTC m=+0.071376548 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 07:35:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 200 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 7.0 KiB/s wr, 33 op/s
Feb 25 07:35:44 np0005629333 nova_compute[244014]: 2026-02-25 12:35:44.914 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:35:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1619: 305 pgs: 305 active+clean; 200 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 6.7 KiB/s wr, 28 op/s
Feb 25 07:35:46 np0005629333 nova_compute[244014]: 2026-02-25 12:35:46.599 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:35:46 np0005629333 nova_compute[244014]: 2026-02-25 12:35:46.633 244018 INFO nova.virt.libvirt.driver [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Deleting instance files /var/lib/nova/instances/aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_del
Feb 25 07:35:46 np0005629333 nova_compute[244014]: 2026-02-25 12:35:46.634 244018 INFO nova.virt.libvirt.driver [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Deletion of /var/lib/nova/instances/aeaad9e2-4ad0-46fb-b619-7ca2c78443a8_del complete
Feb 25 07:35:46 np0005629333 nova_compute[244014]: 2026-02-25 12:35:46.684 244018 INFO nova.compute.manager [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Took 10.55 seconds to destroy the instance on the hypervisor.
Feb 25 07:35:46 np0005629333 nova_compute[244014]: 2026-02-25 12:35:46.684 244018 DEBUG oslo.service.loopingcall [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 07:35:46 np0005629333 nova_compute[244014]: 2026-02-25 12:35:46.685 244018 DEBUG nova.compute.manager [-] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 07:35:46 np0005629333 nova_compute[244014]: 2026-02-25 12:35:46.685 244018 DEBUG nova.network.neutron [-] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 07:35:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:35:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:35:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:35:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:35:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:35:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:35:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:35:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:35:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:35:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:35:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:35:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:35:47 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:35:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:35:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3839980790' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:35:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:35:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3839980790' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 07:35:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:35:47 np0005629333 podman[322933]: 2026-02-25 12:35:47.847316522 +0000 UTC m=+0.105469862 container create e8f5b39c6b8e3a74c429524d5c21395369999aa88f0bbf3fd8d360021b1fdc8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keldysh, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:35:47 np0005629333 podman[322933]: 2026-02-25 12:35:47.763153543 +0000 UTC m=+0.021306913 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:35:47 np0005629333 systemd[1]: Started libpod-conmon-e8f5b39c6b8e3a74c429524d5c21395369999aa88f0bbf3fd8d360021b1fdc8d.scope.
Feb 25 07:35:47 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:35:47 np0005629333 podman[322933]: 2026-02-25 12:35:47.987422332 +0000 UTC m=+0.245575702 container init e8f5b39c6b8e3a74c429524d5c21395369999aa88f0bbf3fd8d360021b1fdc8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:35:47 np0005629333 podman[322933]: 2026-02-25 12:35:47.994254255 +0000 UTC m=+0.252407595 container start e8f5b39c6b8e3a74c429524d5c21395369999aa88f0bbf3fd8d360021b1fdc8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keldysh, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:35:47 np0005629333 tender_keldysh[322948]: 167 167
Feb 25 07:35:47 np0005629333 systemd[1]: libpod-e8f5b39c6b8e3a74c429524d5c21395369999aa88f0bbf3fd8d360021b1fdc8d.scope: Deactivated successfully.
Feb 25 07:35:48 np0005629333 podman[322933]: 2026-02-25 12:35:48.025057056 +0000 UTC m=+0.283210426 container attach e8f5b39c6b8e3a74c429524d5c21395369999aa88f0bbf3fd8d360021b1fdc8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keldysh, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:35:48 np0005629333 podman[322933]: 2026-02-25 12:35:48.026859497 +0000 UTC m=+0.285012837 container died e8f5b39c6b8e3a74c429524d5c21395369999aa88f0bbf3fd8d360021b1fdc8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:35:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 172 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 7.9 KiB/s wr, 57 op/s
Feb 25 07:35:48 np0005629333 systemd[1]: var-lib-containers-storage-overlay-66f2c4fcd165cf095617d9912ddaab2230cf583c7b47dd64c592b1c36006a7d7-merged.mount: Deactivated successfully.
Feb 25 07:35:48 np0005629333 podman[322953]: 2026-02-25 12:35:48.15684664 +0000 UTC m=+0.143953099 container remove e8f5b39c6b8e3a74c429524d5c21395369999aa88f0bbf3fd8d360021b1fdc8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 07:35:48 np0005629333 systemd[1]: libpod-conmon-e8f5b39c6b8e3a74c429524d5c21395369999aa88f0bbf3fd8d360021b1fdc8d.scope: Deactivated successfully.
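The tender_keldysh sequence is cephadm's usual pattern of running one short-lived command in a throwaway ceph container: podman logs create, init, start and attach, the entrypoint prints its output ("167 167", the ceph uid/gid), the container dies, and systemd tears down the matching libpod and conmon scopes. A sketch of the same one-shot pattern from Python; the exact probe command is an assumption (cephadm discovers the uid/gid by stat-ing /var/lib/ceph inside the image), and the image digest is the one from the log:

    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86')

    # One-shot run: --rm makes podman remove the container as soon as the
    # entrypoint exits, collapsing the logged died/remove steps into one.
    out = subprocess.run(
        ['podman', 'run', '--rm', '--entrypoint', 'stat', IMAGE,
         '-c', '%u %g', '/var/lib/ceph'],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # e.g. "167 167"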
Feb 25 07:35:48 np0005629333 nova_compute[244014]: 2026-02-25 12:35:48.202 244018 DEBUG nova.network.neutron [-] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:35:48 np0005629333 nova_compute[244014]: 2026-02-25 12:35:48.225 244018 INFO nova.compute.manager [-] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Took 1.54 seconds to deallocate network for instance.
Feb 25 07:35:48 np0005629333 nova_compute[244014]: 2026-02-25 12:35:48.267 244018 DEBUG oslo_concurrency.lockutils [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:35:48 np0005629333 nova_compute[244014]: 2026-02-25 12:35:48.268 244018 DEBUG oslo_concurrency.lockutils [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:35:48 np0005629333 nova_compute[244014]: 2026-02-25 12:35:48.324 244018 DEBUG oslo_concurrency.processutils [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:35:48 np0005629333 podman[322976]: 2026-02-25 12:35:48.343574008 +0000 UTC m=+0.097514577 container create aaa7d4c3ddfbddb2ed18a3a5eaffb1bb93eb415c14b2b34f2e2a5ace821f9e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 07:35:48 np0005629333 podman[322976]: 2026-02-25 12:35:48.265728758 +0000 UTC m=+0.019669387 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:35:48 np0005629333 nova_compute[244014]: 2026-02-25 12:35:48.372 244018 DEBUG nova.compute.manager [req-1917102e-ddb4-4446-bffb-726c4c557e10 req-a2824ee0-ceb8-427a-b9cb-8ab8e942f3f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Received event network-vif-deleted-e7482fdc-9792-451e-adf6-a816dd7113c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:35:48 np0005629333 systemd[1]: Started libpod-conmon-aaa7d4c3ddfbddb2ed18a3a5eaffb1bb93eb415c14b2b34f2e2a5ace821f9e59.scope.
Feb 25 07:35:48 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:35:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33b98985019f826f2cab6ea2c5e66e35c9053f82ac52cd8783ed801de8b06d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:35:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33b98985019f826f2cab6ea2c5e66e35c9053f82ac52cd8783ed801de8b06d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:35:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33b98985019f826f2cab6ea2c5e66e35c9053f82ac52cd8783ed801de8b06d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:35:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33b98985019f826f2cab6ea2c5e66e35c9053f82ac52cd8783ed801de8b06d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:35:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33b98985019f826f2cab6ea2c5e66e35c9053f82ac52cd8783ed801de8b06d3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:35:48 np0005629333 podman[322976]: 2026-02-25 12:35:48.628676506 +0000 UTC m=+0.382617135 container init aaa7d4c3ddfbddb2ed18a3a5eaffb1bb93eb415c14b2b34f2e2a5ace821f9e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_booth, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:35:48 np0005629333 podman[322976]: 2026-02-25 12:35:48.634872981 +0000 UTC m=+0.388813570 container start aaa7d4c3ddfbddb2ed18a3a5eaffb1bb93eb415c14b2b34f2e2a5ace821f9e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_booth, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 07:35:48 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:35:48 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:35:48 np0005629333 podman[322976]: 2026-02-25 12:35:48.812235353 +0000 UTC m=+0.566175942 container attach aaa7d4c3ddfbddb2ed18a3a5eaffb1bb93eb415c14b2b34f2e2a5ace821f9e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_booth, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 07:35:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:35:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2863393853' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:35:48 np0005629333 nova_compute[244014]: 2026-02-25 12:35:48.947 244018 DEBUG oslo_concurrency.processutils [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
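For this resource-tracker update Nova shells out rather than linking librados: oslo.concurrency's processutils logs the command line, executes it, and logs the exit code and wall time (0 in 0.623s above). A minimal sketch of the same call, assuming the conf file and openstack keyring used in the log:

    import json
    from oslo_concurrency import processutils

    # Raises ProcessExecutionError on a non-zero exit, mirroring Nova's usage.
    stdout, stderr = processutils.execute(
        'ceph', 'df', '--format=json', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
    df = json.loads(stdout)
    print('%d GiB available' % (df['stats']['total_avail_bytes'] // 1024 ** 3))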
Feb 25 07:35:48 np0005629333 nova_compute[244014]: 2026-02-25 12:35:48.952 244018 DEBUG nova.compute.provider_tree [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:35:48 np0005629333 nova_compute[244014]: 2026-02-25 12:35:48.966 244018 DEBUG nova.scheduler.client.report [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:35:48 np0005629333 nova_compute[244014]: 2026-02-25 12:35:48.988 244018 DEBUG oslo_concurrency.lockutils [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:35:49 np0005629333 nova_compute[244014]: 2026-02-25 12:35:49.017 244018 INFO nova.scheduler.client.report [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Deleted allocations for instance aeaad9e2-4ad0-46fb-b619-7ca2c78443a8
Feb 25 07:35:49 np0005629333 nova_compute[244014]: 2026-02-25 12:35:49.093 244018 DEBUG oslo_concurrency.lockutils [None req-5dd64bef-6c95-408a-be05-7e9834d1fa0a cb2c815ef3214a7b897f911b4f53a146 367d43ab207546c3900a8414f0713ef4 - - default default] Lock "aeaad9e2-4ad0-46fb-b619-7ca2c78443a8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.965s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:35:49 np0005629333 happy_booth[322994]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:35:49 np0005629333 happy_booth[322994]: --> All data devices are unavailable
Feb 25 07:35:49 np0005629333 systemd[1]: libpod-aaa7d4c3ddfbddb2ed18a3a5eaffb1bb93eb415c14b2b34f2e2a5ace821f9e59.scope: Deactivated successfully.
Feb 25 07:35:49 np0005629333 podman[323035]: 2026-02-25 12:35:49.159190089 +0000 UTC m=+0.026462799 container died aaa7d4c3ddfbddb2ed18a3a5eaffb1bb93eb415c14b2b34f2e2a5ace821f9e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_booth, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:35:49 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a33b98985019f826f2cab6ea2c5e66e35c9053f82ac52cd8783ed801de8b06d3-merged.mount: Deactivated successfully.
Feb 25 07:35:49 np0005629333 podman[323035]: 2026-02-25 12:35:49.53957632 +0000 UTC m=+0.406848970 container remove aaa7d4c3ddfbddb2ed18a3a5eaffb1bb93eb415c14b2b34f2e2a5ace821f9e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_booth, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 07:35:49 np0005629333 systemd[1]: libpod-conmon-aaa7d4c3ddfbddb2ed18a3a5eaffb1bb93eb415c14b2b34f2e2a5ace821f9e59.scope: Deactivated successfully.
Feb 25 07:35:49 np0005629333 nova_compute[244014]: 2026-02-25 12:35:49.916 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:35:50 np0005629333 podman[323111]: 2026-02-25 12:35:50.026334907 +0000 UTC m=+0.077919303 container create 950065b982a67a7dec3c50e5b9f0db29102b7df01b97f4ee5f489c56a5eaaa90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bassi, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:35:50 np0005629333 podman[323111]: 2026-02-25 12:35:49.972821764 +0000 UTC m=+0.024406160 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:35:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 172 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.2 KiB/s wr, 49 op/s
Feb 25 07:35:50 np0005629333 systemd[1]: Started libpod-conmon-950065b982a67a7dec3c50e5b9f0db29102b7df01b97f4ee5f489c56a5eaaa90.scope.
Feb 25 07:35:50 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:35:50 np0005629333 podman[323111]: 2026-02-25 12:35:50.170543213 +0000 UTC m=+0.222127639 container init 950065b982a67a7dec3c50e5b9f0db29102b7df01b97f4ee5f489c56a5eaaa90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:35:50 np0005629333 podman[323111]: 2026-02-25 12:35:50.176399258 +0000 UTC m=+0.227983644 container start 950065b982a67a7dec3c50e5b9f0db29102b7df01b97f4ee5f489c56a5eaaa90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 07:35:50 np0005629333 hardcore_bassi[323127]: 167 167
Feb 25 07:35:50 np0005629333 systemd[1]: libpod-950065b982a67a7dec3c50e5b9f0db29102b7df01b97f4ee5f489c56a5eaaa90.scope: Deactivated successfully.
Feb 25 07:35:50 np0005629333 conmon[323127]: conmon 950065b982a67a7dec3c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-950065b982a67a7dec3c50e5b9f0db29102b7df01b97f4ee5f489c56a5eaaa90.scope/container/memory.events
Feb 25 07:35:50 np0005629333 podman[323111]: 2026-02-25 12:35:50.20300179 +0000 UTC m=+0.254586216 container attach 950065b982a67a7dec3c50e5b9f0db29102b7df01b97f4ee5f489c56a5eaaa90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bassi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Feb 25 07:35:50 np0005629333 podman[323111]: 2026-02-25 12:35:50.203518415 +0000 UTC m=+0.255102811 container died 950065b982a67a7dec3c50e5b9f0db29102b7df01b97f4ee5f489c56a5eaaa90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bassi, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:35:50 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c6fdf26c9bae01860ca844447f16f72b3122f6afb3b909d91ca18a5d896e7871-merged.mount: Deactivated successfully.
Feb 25 07:35:50 np0005629333 podman[323111]: 2026-02-25 12:35:50.39086937 +0000 UTC m=+0.442453766 container remove 950065b982a67a7dec3c50e5b9f0db29102b7df01b97f4ee5f489c56a5eaaa90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:35:50 np0005629333 systemd[1]: libpod-conmon-950065b982a67a7dec3c50e5b9f0db29102b7df01b97f4ee5f489c56a5eaaa90.scope: Deactivated successfully.
Feb 25 07:35:50 np0005629333 podman[323151]: 2026-02-25 12:35:50.542929577 +0000 UTC m=+0.065931414 container create 2fad42e209b4242dca2dd7e75f0a869c8a1a3950c58c1b29a81bb3c3c1184392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_jackson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:35:50 np0005629333 podman[323151]: 2026-02-25 12:35:50.496656359 +0000 UTC m=+0.019658216 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:35:50 np0005629333 systemd[1]: Started libpod-conmon-2fad42e209b4242dca2dd7e75f0a869c8a1a3950c58c1b29a81bb3c3c1184392.scope.
Feb 25 07:35:50 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:35:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6bfe3abe50d22ca3245972e8c88b1f25ad1889a61233ad4a27994b9da1b9897/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:35:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6bfe3abe50d22ca3245972e8c88b1f25ad1889a61233ad4a27994b9da1b9897/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:35:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6bfe3abe50d22ca3245972e8c88b1f25ad1889a61233ad4a27994b9da1b9897/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:35:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6bfe3abe50d22ca3245972e8c88b1f25ad1889a61233ad4a27994b9da1b9897/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:35:50 np0005629333 podman[323151]: 2026-02-25 12:35:50.68843379 +0000 UTC m=+0.211435647 container init 2fad42e209b4242dca2dd7e75f0a869c8a1a3950c58c1b29a81bb3c3c1184392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:35:50 np0005629333 podman[323151]: 2026-02-25 12:35:50.694934823 +0000 UTC m=+0.217936670 container start 2fad42e209b4242dca2dd7e75f0a869c8a1a3950c58c1b29a81bb3c3c1184392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Feb 25 07:35:50 np0005629333 podman[323151]: 2026-02-25 12:35:50.719300702 +0000 UTC m=+0.242302559 container attach 2fad42e209b4242dca2dd7e75f0a869c8a1a3950c58c1b29a81bb3c3c1184392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]: {
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:    "0": [
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:        {
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "devices": [
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "/dev/loop3"
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            ],
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "lv_name": "ceph_lv0",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "lv_size": "21470642176",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "name": "ceph_lv0",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "tags": {
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.cluster_name": "ceph",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.crush_device_class": "",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.encrypted": "0",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.objectstore": "bluestore",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.osd_id": "0",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.type": "block",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.vdo": "0",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.with_tpm": "0"
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            },
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "type": "block",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "vg_name": "ceph_vg0"
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:        }
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:    ],
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:    "1": [
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:        {
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "devices": [
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "/dev/loop4"
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            ],
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "lv_name": "ceph_lv1",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "lv_size": "21470642176",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "name": "ceph_lv1",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "tags": {
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.cluster_name": "ceph",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.crush_device_class": "",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.encrypted": "0",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.objectstore": "bluestore",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.osd_id": "1",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.type": "block",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.vdo": "0",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.with_tpm": "0"
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            },
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "type": "block",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "vg_name": "ceph_vg1"
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:        }
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:    ],
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:    "2": [
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:        {
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "devices": [
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "/dev/loop5"
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            ],
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "lv_name": "ceph_lv2",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "lv_size": "21470642176",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "name": "ceph_lv2",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "tags": {
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.cluster_name": "ceph",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.crush_device_class": "",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.encrypted": "0",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.objectstore": "bluestore",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.osd_id": "2",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.type": "block",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.vdo": "0",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:                "ceph.with_tpm": "0"
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            },
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "type": "block",
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:            "vg_name": "ceph_vg2"
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:        }
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]:    ]
Feb 25 07:35:50 np0005629333 vigilant_jackson[323167]: }
Feb 25 07:35:50 np0005629333 systemd[1]: libpod-2fad42e209b4242dca2dd7e75f0a869c8a1a3950c58c1b29a81bb3c3c1184392.scope: Deactivated successfully.
Feb 25 07:35:50 np0005629333 podman[323151]: 2026-02-25 12:35:50.981845172 +0000 UTC m=+0.504847019 container died 2fad42e209b4242dca2dd7e75f0a869c8a1a3950c58c1b29a81bb3c3c1184392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_jackson, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 07:35:51 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f6bfe3abe50d22ca3245972e8c88b1f25ad1889a61233ad4a27994b9da1b9897-merged.mount: Deactivated successfully.
Feb 25 07:35:51 np0005629333 podman[323151]: 2026-02-25 12:35:51.424680377 +0000 UTC m=+0.947682244 container remove 2fad42e209b4242dca2dd7e75f0a869c8a1a3950c58c1b29a81bb3c3c1184392 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_jackson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 07:35:51 np0005629333 systemd[1]: libpod-conmon-2fad42e209b4242dca2dd7e75f0a869c8a1a3950c58c1b29a81bb3c3c1184392.scope: Deactivated successfully.
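The JSON that the one-shot `vigilant_jackson` container printed above matches the output of `ceph-volume lvm list --format json`: a map of OSD id to a list of LV records, each carrying the backing devices and the `ceph.*` LV tags. A minimal Python sketch for flattening that blob into an OSD-to-device summary, assuming it has been captured to a file (the filename is hypothetical):

    import json

    # Output of `ceph-volume lvm list --format json`, captured from the
    # container log above (the path is illustrative).
    with open("lvm_list.json") as f:
        osds = json.load(f)

    # Top level maps OSD id -> list of LV records (block/db/wal each get one).
    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: {lv['type']} on {','.join(lv['devices'])} "
                  f"lv={lv['lv_path']} osd_fsid={tags.get('ceph.osd_fsid')}")

For the records above this prints one line per OSD, e.g. `osd.0: block on /dev/loop3 lv=/dev/ceph_vg0/ceph_lv0 osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441`.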
Feb 25 07:35:51 np0005629333 nova_compute[244014]: 2026-02-25 12:35:51.574 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022936.5731418, aeaad9e2-4ad0-46fb-b619-7ca2c78443a8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:35:51 np0005629333 nova_compute[244014]: 2026-02-25 12:35:51.575 244018 INFO nova.compute.manager [-] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:35:51 np0005629333 nova_compute[244014]: 2026-02-25 12:35:51.598 244018 DEBUG nova.compute.manager [None req-b30990db-df00-4615-ba8f-cd2e9d9ea234 - - - - - -] [instance: aeaad9e2-4ad0-46fb-b619-7ca2c78443a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:35:51 np0005629333 nova_compute[244014]: 2026-02-25 12:35:51.601 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:51 np0005629333 podman[323252]: 2026-02-25 12:35:51.817587371 +0000 UTC m=+0.018406861 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:35:52 np0005629333 podman[323252]: 2026-02-25 12:35:52.016462212 +0000 UTC m=+0.217281692 container create 0e45c323d324dd8db27220c1fbf2f805873ad07d68f5b64438f7f4d653f34025 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 07:35:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 153 MiB data, 745 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.3 KiB/s wr, 50 op/s
Feb 25 07:35:52 np0005629333 systemd[1]: Started libpod-conmon-0e45c323d324dd8db27220c1fbf2f805873ad07d68f5b64438f7f4d653f34025.scope.
Feb 25 07:35:52 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:35:52 np0005629333 podman[323252]: 2026-02-25 12:35:52.702184703 +0000 UTC m=+0.903004273 container init 0e45c323d324dd8db27220c1fbf2f805873ad07d68f5b64438f7f4d653f34025 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 07:35:52 np0005629333 podman[323252]: 2026-02-25 12:35:52.710401165 +0000 UTC m=+0.911220675 container start 0e45c323d324dd8db27220c1fbf2f805873ad07d68f5b64438f7f4d653f34025 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:35:52 np0005629333 thirsty_herschel[323268]: 167 167
Feb 25 07:35:52 np0005629333 systemd[1]: libpod-0e45c323d324dd8db27220c1fbf2f805873ad07d68f5b64438f7f4d653f34025.scope: Deactivated successfully.
Feb 25 07:35:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:35:53 np0005629333 podman[323252]: 2026-02-25 12:35:53.13553957 +0000 UTC m=+1.336359150 container attach 0e45c323d324dd8db27220c1fbf2f805873ad07d68f5b64438f7f4d653f34025 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:35:53 np0005629333 podman[323252]: 2026-02-25 12:35:53.136032454 +0000 UTC m=+1.336851964 container died 0e45c323d324dd8db27220c1fbf2f805873ad07d68f5b64438f7f4d653f34025 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:35:53 np0005629333 systemd[1]: var-lib-containers-storage-overlay-749c2b5acc00ca6f5db3a98132a0451fae225059b634177f9882234c7883b41f-merged.mount: Deactivated successfully.
Feb 25 07:35:53 np0005629333 nova_compute[244014]: 2026-02-25 12:35:53.873 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1623: 305 pgs: 305 active+clean; 153 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 39 op/s
Feb 25 07:35:54 np0005629333 podman[323252]: 2026-02-25 12:35:54.232904955 +0000 UTC m=+2.433724445 container remove 0e45c323d324dd8db27220c1fbf2f805873ad07d68f5b64438f7f4d653f34025 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_herschel, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:35:54 np0005629333 systemd[1]: libpod-conmon-0e45c323d324dd8db27220c1fbf2f805873ad07d68f5b64438f7f4d653f34025.scope: Deactivated successfully.
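The second one-shot container (`thirsty_herschel`) printed only `167 167`, the fixed uid/gid of the `ceph` user on RHEL-family images; cephadm runs short-lived probes like this to learn which ids it should own data directories as. A sketch of an equivalent probe with podman; the exact command cephadm issued here is not shown in the log, so treat this as illustrative:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # Ask the image who owns /var/lib/ceph; Ceph images based on CentOS
    # Stream answer "167 167". Illustrative only -- cephadm's actual probe
    # command may differ.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout.split()
    uid, gid = int(out[0]), int(out[1])
    print(uid, gid)  # -> 167 167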
Feb 25 07:35:54 np0005629333 podman[323293]: 2026-02-25 12:35:54.369905347 +0000 UTC m=+0.023675351 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:35:54 np0005629333 podman[323293]: 2026-02-25 12:35:54.63507725 +0000 UTC m=+0.288847194 container create 3e823151dc9ed6343afe8e1c9157de61bc131755b7a25c31f85a815dca28bf9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 07:35:54 np0005629333 systemd[1]: Started libpod-conmon-3e823151dc9ed6343afe8e1c9157de61bc131755b7a25c31f85a815dca28bf9a.scope.
Feb 25 07:35:54 np0005629333 nova_compute[244014]: 2026-02-25 12:35:54.918 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:54 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:35:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/563cbdc86e633d7f55a5bb1a79ee34cda4c6862562076516d8ac8052fa148ce8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:35:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/563cbdc86e633d7f55a5bb1a79ee34cda4c6862562076516d8ac8052fa148ce8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:35:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/563cbdc86e633d7f55a5bb1a79ee34cda4c6862562076516d8ac8052fa148ce8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:35:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/563cbdc86e633d7f55a5bb1a79ee34cda4c6862562076516d8ac8052fa148ce8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:35:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:55.016 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:35:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:55.016 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:35:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:35:55.017 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:35:55 np0005629333 podman[323293]: 2026-02-25 12:35:55.150346953 +0000 UTC m=+0.804116947 container init 3e823151dc9ed6343afe8e1c9157de61bc131755b7a25c31f85a815dca28bf9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chaplygin, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 07:35:55 np0005629333 podman[323293]: 2026-02-25 12:35:55.156937399 +0000 UTC m=+0.810707353 container start 3e823151dc9ed6343afe8e1c9157de61bc131755b7a25c31f85a815dca28bf9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chaplygin, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:35:55 np0005629333 podman[323293]: 2026-02-25 12:35:55.288104946 +0000 UTC m=+0.941874890 container attach 3e823151dc9ed6343afe8e1c9157de61bc131755b7a25c31f85a815dca28bf9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chaplygin, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 07:35:55 np0005629333 lvm[323391]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:35:55 np0005629333 lvm[323388]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:35:55 np0005629333 lvm[323388]: VG ceph_vg0 finished
Feb 25 07:35:55 np0005629333 lvm[323391]: VG ceph_vg2 finished
Feb 25 07:35:55 np0005629333 lvm[323390]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:35:55 np0005629333 lvm[323390]: VG ceph_vg1 finished
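The interleaved `lvm[...]` messages come from LVM's event-driven autoactivation: as each loop device appears, a udev-triggered pvscan records the PV online and, once every PV of a VG has been seen, declares the VG complete and activates it. The same PV-to-VG view can be pulled on demand with lvm2's JSON reporting, sketched here:

    import json
    import subprocess

    # `pvs --reportformat json` returns {"report": [{"pv": [...]}]};
    # map each PV to its owning VG, mirroring the pvscan messages above.
    report = json.loads(subprocess.run(
        ["pvs", "--reportformat", "json", "-o", "pv_name,vg_name"],
        check=True, capture_output=True, text=True,
    ).stdout)
    for pv in report["report"][0]["pv"]:
        print(f"{pv['pv_name']} -> {pv['vg_name']}")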
Feb 25 07:35:55 np0005629333 quirky_chaplygin[323310]: {}
Feb 25 07:35:55 np0005629333 systemd[1]: libpod-3e823151dc9ed6343afe8e1c9157de61bc131755b7a25c31f85a815dca28bf9a.scope: Deactivated successfully.
Feb 25 07:35:55 np0005629333 podman[323293]: 2026-02-25 12:35:55.914073138 +0000 UTC m=+1.567843102 container died 3e823151dc9ed6343afe8e1c9157de61bc131755b7a25c31f85a815dca28bf9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 07:35:55 np0005629333 systemd[1]: libpod-3e823151dc9ed6343afe8e1c9157de61bc131755b7a25c31f85a815dca28bf9a.scope: Consumed 1.044s CPU time.
Feb 25 07:35:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 153 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.3 KiB/s wr, 29 op/s
Feb 25 07:35:56 np0005629333 nova_compute[244014]: 2026-02-25 12:35:56.602 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:35:56 np0005629333 systemd[1]: var-lib-containers-storage-overlay-563cbdc86e633d7f55a5bb1a79ee34cda4c6862562076516d8ac8052fa148ce8-merged.mount: Deactivated successfully.
Feb 25 07:35:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:35:57 np0005629333 podman[323293]: 2026-02-25 12:35:57.801052879 +0000 UTC m=+3.454822833 container remove 3e823151dc9ed6343afe8e1c9157de61bc131755b7a25c31f85a815dca28bf9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 07:35:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:35:57 np0005629333 systemd[1]: libpod-conmon-3e823151dc9ed6343afe8e1c9157de61bc131755b7a25c31f85a815dca28bf9a.scope: Deactivated successfully.
Feb 25 07:35:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:35:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
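The two `config-key set` commands show the cephadm mgr module persisting what it just gathered: per-host device inventory and host metadata are cached in the mon config-key store under `mgr/cephadm/host.<hostname>...` keys. The stored value can be read back directly; a sketch (requires a keyring with mon read caps, and assumes the value is JSON as cephadm writes it):

    import json
    import subprocess

    KEY = "mgr/cephadm/host.compute-0.devices.0"

    # `ceph config-key get` prints the raw stored value.
    raw = subprocess.run(
        ["ceph", "config-key", "get", KEY],
        check=True, capture_output=True, text=True,
    ).stdout
    print(json.dumps(json.loads(raw), indent=2)[:400])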
Feb 25 07:35:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 305 active+clean; 153 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.3 KiB/s wr, 29 op/s
Feb 25 07:35:58 np0005629333 nova_compute[244014]: 2026-02-25 12:35:58.105 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Acquiring lock "be11f836-327c-447c-865a-0088e0554c0d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:35:58 np0005629333 nova_compute[244014]: 2026-02-25 12:35:58.106 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:35:58 np0005629333 nova_compute[244014]: 2026-02-25 12:35:58.124 244018 DEBUG nova.compute.manager [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:35:58 np0005629333 nova_compute[244014]: 2026-02-25 12:35:58.210 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:35:58 np0005629333 nova_compute[244014]: 2026-02-25 12:35:58.211 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:35:58 np0005629333 nova_compute[244014]: 2026-02-25 12:35:58.219 244018 DEBUG nova.virt.hardware [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:35:58 np0005629333 nova_compute[244014]: 2026-02-25 12:35:58.220 244018 INFO nova.compute.claims [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Claim successful on node compute-0.ctlplane.example.com#033[00m
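The Acquiring/acquired/released triplets throughout these nova_compute lines are `oslo_concurrency.lockutils` at DEBUG level: the build is serialized on the instance UUID, and the resource claim on the shared `compute_resources` lock. The same pattern in miniature (lock names taken from the log):

    from oslo_concurrency import lockutils

    # Context-manager form, as used around _locked_do_build_and_run_instance;
    # emits the same acquire/release DEBUG lines when logging is configured.
    with lockutils.lock("be11f836-327c-447c-865a-0088e0554c0d"):
        pass  # critical section

    # Decorator form, as used for methods like ResourceTracker.instance_claim.
    @lockutils.synchronized("compute_resources")
    def instance_claim():
        pass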
Feb 25 07:35:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:35:58 np0005629333 nova_compute[244014]: 2026-02-25 12:35:58.369 244018 DEBUG oslo_concurrency.processutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:35:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:35:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/47908807' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:35:58 np0005629333 nova_compute[244014]: 2026-02-25 12:35:58.892 244018 DEBUG oslo_concurrency.processutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
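To size the Ceph-backed disk inventory, nova shells out to `ceph df --format=json` with the `openstack` cephx id, which is exactly the command the mon audit log dispatches above. Parsing the fields nova cares about, as a sketch:

    import json
    import subprocess

    # Same command the log shows nova running (0.522s round trip above).
    df = json.loads(subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout)

    print("cluster avail bytes:", df["stats"]["total_avail_bytes"])
    for pool in df["pools"]:
        print(pool["name"], pool["stats"]["bytes_used"])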
Feb 25 07:35:58 np0005629333 nova_compute[244014]: 2026-02-25 12:35:58.900 244018 DEBUG nova.compute.provider_tree [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:35:58 np0005629333 nova_compute[244014]: 2026-02-25 12:35:58.924 244018 DEBUG nova.scheduler.client.report [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:35:58 np0005629333 nova_compute[244014]: 2026-02-25 12:35:58.951 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
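The inventory dict logged just above is what placement turns into schedulable capacity, computed per resource class as (total - reserved) * allocation_ratio. Worked through for this host:

    # Inventory as reported by nova.scheduler.client.report above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    # Placement capacity: (total - reserved) * allocation_ratio.
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2

So this node can overcommit to 32 vCPUs, offer 7167 MB of RAM, and place 52.2 GB of disk.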
Feb 25 07:35:58 np0005629333 nova_compute[244014]: 2026-02-25 12:35:58.952 244018 DEBUG nova.compute.manager [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:35:59 np0005629333 nova_compute[244014]: 2026-02-25 12:35:59.018 244018 DEBUG nova.compute.manager [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:35:59 np0005629333 nova_compute[244014]: 2026-02-25 12:35:59.018 244018 DEBUG nova.network.neutron [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:35:59 np0005629333 nova_compute[244014]: 2026-02-25 12:35:59.053 244018 INFO nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:35:59 np0005629333 nova_compute[244014]: 2026-02-25 12:35:59.073 244018 DEBUG nova.compute.manager [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:35:59 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:35:59 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:35:59 np0005629333 nova_compute[244014]: 2026-02-25 12:35:59.165 244018 DEBUG nova.compute.manager [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:35:59 np0005629333 nova_compute[244014]: 2026-02-25 12:35:59.166 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:35:59 np0005629333 nova_compute[244014]: 2026-02-25 12:35:59.167 244018 INFO nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Creating image(s)#033[00m
Feb 25 07:35:59 np0005629333 nova_compute[244014]: 2026-02-25 12:35:59.493 244018 DEBUG nova.storage.rbd_utils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] rbd image be11f836-327c-447c-865a-0088e0554c0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:35:59 np0005629333 nova_compute[244014]: 2026-02-25 12:35:59.521 244018 DEBUG nova.storage.rbd_utils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] rbd image be11f836-327c-447c-865a-0088e0554c0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:35:59 np0005629333 nova_compute[244014]: 2026-02-25 12:35:59.545 244018 DEBUG nova.storage.rbd_utils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] rbd image be11f836-327c-447c-865a-0088e0554c0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:35:59 np0005629333 nova_compute[244014]: 2026-02-25 12:35:59.549 244018 DEBUG oslo_concurrency.processutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:35:59 np0005629333 nova_compute[244014]: 2026-02-25 12:35:59.589 244018 DEBUG nova.policy [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '546ffcfc9e27442d89b984a2d0e30dda', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6c30db4c9b5f480dbb12cc23f1604de1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:35:59 np0005629333 nova_compute[244014]: 2026-02-25 12:35:59.660 244018 DEBUG oslo_concurrency.processutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
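Note how the `qemu-img info` probe is not run bare: nova wraps it in `oslo_concurrency.prlimit` with `--as=1073741824 --cpu=30`, capping address space at 1 GiB and CPU time at 30 s so a pathological image file cannot exhaust the host. From Python the same guard is expressed through `processutils`; a sketch mirroring the logged command:

    from oslo_concurrency import processutils

    # Cap address space (1 GiB) and CPU time (30 s), exactly what the logged
    # `-m oslo_concurrency.prlimit --as=1073741824 --cpu=30` wrapper does.
    limits = processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30)
    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/_base/"
        "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6",
        "--force-share", "--output=json",
        prlimit=limits,
    )
    print(out)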
Feb 25 07:35:59 np0005629333 nova_compute[244014]: 2026-02-25 12:35:59.661 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:35:59 np0005629333 nova_compute[244014]: 2026-02-25 12:35:59.661 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:35:59 np0005629333 nova_compute[244014]: 2026-02-25 12:35:59.662 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:35:59 np0005629333 nova_compute[244014]: 2026-02-25 12:35:59.685 244018 DEBUG nova.storage.rbd_utils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] rbd image be11f836-327c-447c-865a-0088e0554c0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:35:59 np0005629333 nova_compute[244014]: 2026-02-25 12:35:59.688 244018 DEBUG oslo_concurrency.processutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 be11f836-327c-447c-865a-0088e0554c0d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:35:59 np0005629333 nova_compute[244014]: 2026-02-25 12:35:59.920 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 305 active+clean; 153 MiB data, 734 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Feb 25 07:36:01 np0005629333 nova_compute[244014]: 2026-02-25 12:36:01.077 244018 DEBUG nova.network.neutron [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Successfully created port: 3fb1749a-7239-482e-939c-ec165690b798 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:36:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:36:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:36:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:36:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:36:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:36:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:36:01 np0005629333 nova_compute[244014]: 2026-02-25 12:36:01.605 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1627: 305 pgs: 305 active+clean; 161 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 2.4 KiB/s rd, 266 KiB/s wr, 4 op/s
Feb 25 07:36:02 np0005629333 nova_compute[244014]: 2026-02-25 12:36:02.191 244018 DEBUG nova.network.neutron [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Successfully updated port: 3fb1749a-7239-482e-939c-ec165690b798 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:36:02 np0005629333 nova_compute[244014]: 2026-02-25 12:36:02.280 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Acquiring lock "refresh_cache-be11f836-327c-447c-865a-0088e0554c0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:36:02 np0005629333 nova_compute[244014]: 2026-02-25 12:36:02.280 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Acquired lock "refresh_cache-be11f836-327c-447c-865a-0088e0554c0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:36:02 np0005629333 nova_compute[244014]: 2026-02-25 12:36:02.280 244018 DEBUG nova.network.neutron [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:36:02 np0005629333 nova_compute[244014]: 2026-02-25 12:36:02.409 244018 DEBUG nova.compute.manager [req-eda8d7d4-ab81-441a-a66a-941b415b5544 req-9ed2b845-3c02-4125-a8c1-27a1e763ac1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Received event network-changed-3fb1749a-7239-482e-939c-ec165690b798 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:36:02 np0005629333 nova_compute[244014]: 2026-02-25 12:36:02.410 244018 DEBUG nova.compute.manager [req-eda8d7d4-ab81-441a-a66a-941b415b5544 req-9ed2b845-3c02-4125-a8c1-27a1e763ac1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Refreshing instance network info cache due to event network-changed-3fb1749a-7239-482e-939c-ec165690b798. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:36:02 np0005629333 nova_compute[244014]: 2026-02-25 12:36:02.410 244018 DEBUG oslo_concurrency.lockutils [req-eda8d7d4-ab81-441a-a66a-941b415b5544 req-9ed2b845-3c02-4125-a8c1-27a1e763ac1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-be11f836-327c-447c-865a-0088e0554c0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:36:02 np0005629333 nova_compute[244014]: 2026-02-25 12:36:02.654 244018 DEBUG nova.network.neutron [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:36:02 np0005629333 nova_compute[244014]: 2026-02-25 12:36:02.678 244018 DEBUG oslo_concurrency.processutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 be11f836-327c-447c-865a-0088e0554c0d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.990s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:36:02 np0005629333 nova_compute[244014]: 2026-02-25 12:36:02.737 244018 DEBUG nova.storage.rbd_utils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] resizing rbd image be11f836-327c-447c-865a-0088e0554c0d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
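This is the standard libvirt/rbd image-backend spawn path: the repeated existence probes confirm `be11f836-..._disk` is absent, the cached base image is `rbd import`ed into the `vms` pool (2.990s above), then the image is grown to the flavor's 1073741824-byte (1 GiB) root disk. The check-and-resize half through the python-rbd bindings, as a sketch (nova itself mixes the bindings with the CLI):

    import rados
    import rbd

    NAME = "be11f836-327c-447c-865a-0088e0554c0d_disk"

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            if NAME in rbd.RBD().list(ioctx):
                # Grow to the flavor root-disk size, matching the
                # "resizing rbd image ... to 1073741824" line above.
                with rbd.Image(ioctx, NAME) as image:
                    image.resize(1024 ** 3)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()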
Feb 25 07:36:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:36:03 np0005629333 nova_compute[244014]: 2026-02-25 12:36:03.514 244018 DEBUG nova.objects.instance [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lazy-loading 'migration_context' on Instance uuid be11f836-327c-447c-865a-0088e0554c0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:36:03 np0005629333 nova_compute[244014]: 2026-02-25 12:36:03.532 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:36:03 np0005629333 nova_compute[244014]: 2026-02-25 12:36:03.533 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Ensure instance console log exists: /var/lib/nova/instances/be11f836-327c-447c-865a-0088e0554c0d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:36:03 np0005629333 nova_compute[244014]: 2026-02-25 12:36:03.534 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:03 np0005629333 nova_compute[244014]: 2026-02-25 12:36:03.534 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:03 np0005629333 nova_compute[244014]: 2026-02-25 12:36:03.534 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 305 active+clean; 179 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 543 KiB/s wr, 15 op/s
Feb 25 07:36:04 np0005629333 nova_compute[244014]: 2026-02-25 12:36:04.304 244018 DEBUG nova.network.neutron [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Updating instance_info_cache with network_info: [{"id": "3fb1749a-7239-482e-939c-ec165690b798", "address": "fa:16:3e:9d:e4:ea", "network": {"id": "aefc26c0-533a-476c-be4d-dc2b231c8685", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1194995791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c30db4c9b5f480dbb12cc23f1604de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb1749a-72", "ovs_interfaceid": "3fb1749a-7239-482e-939c-ec165690b798", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:36:04 np0005629333 nova_compute[244014]: 2026-02-25 12:36:04.343 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Releasing lock "refresh_cache-be11f836-327c-447c-865a-0088e0554c0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:36:04 np0005629333 nova_compute[244014]: 2026-02-25 12:36:04.343 244018 DEBUG nova.compute.manager [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Instance network_info: |[{"id": "3fb1749a-7239-482e-939c-ec165690b798", "address": "fa:16:3e:9d:e4:ea", "network": {"id": "aefc26c0-533a-476c-be4d-dc2b231c8685", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1194995791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c30db4c9b5f480dbb12cc23f1604de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb1749a-72", "ovs_interfaceid": "3fb1749a-7239-482e-939c-ec165690b798", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:36:04 np0005629333 nova_compute[244014]: 2026-02-25 12:36:04.344 244018 DEBUG oslo_concurrency.lockutils [req-eda8d7d4-ab81-441a-a66a-941b415b5544 req-9ed2b845-3c02-4125-a8c1-27a1e763ac1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-be11f836-327c-447c-865a-0088e0554c0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:36:04 np0005629333 nova_compute[244014]: 2026-02-25 12:36:04.345 244018 DEBUG nova.network.neutron [req-eda8d7d4-ab81-441a-a66a-941b415b5544 req-9ed2b845-3c02-4125-a8c1-27a1e763ac1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Refreshing network info cache for port 3fb1749a-7239-482e-939c-ec165690b798 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:36:04 np0005629333 nova_compute[244014]: 2026-02-25 12:36:04.348 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Start _get_guest_xml network_info=[{"id": "3fb1749a-7239-482e-939c-ec165690b798", "address": "fa:16:3e:9d:e4:ea", "network": {"id": "aefc26c0-533a-476c-be4d-dc2b231c8685", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1194995791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c30db4c9b5f480dbb12cc23f1604de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb1749a-72", "ovs_interfaceid": "3fb1749a-7239-482e-939c-ec165690b798", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:36:04 np0005629333 nova_compute[244014]: 2026-02-25 12:36:04.355 244018 WARNING nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:36:04 np0005629333 nova_compute[244014]: 2026-02-25 12:36:04.364 244018 DEBUG nova.virt.libvirt.host [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:36:04 np0005629333 nova_compute[244014]: 2026-02-25 12:36:04.365 244018 DEBUG nova.virt.libvirt.host [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:36:04 np0005629333 nova_compute[244014]: 2026-02-25 12:36:04.374 244018 DEBUG nova.virt.libvirt.host [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:36:04 np0005629333 nova_compute[244014]: 2026-02-25 12:36:04.374 244018 DEBUG nova.virt.libvirt.host [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:36:04 np0005629333 nova_compute[244014]: 2026-02-25 12:36:04.375 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:36:04 np0005629333 nova_compute[244014]: 2026-02-25 12:36:04.375 244018 DEBUG nova.virt.hardware [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:36:04 np0005629333 nova_compute[244014]: 2026-02-25 12:36:04.375 244018 DEBUG nova.virt.hardware [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:36:04 np0005629333 nova_compute[244014]: 2026-02-25 12:36:04.376 244018 DEBUG nova.virt.hardware [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:36:04 np0005629333 nova_compute[244014]: 2026-02-25 12:36:04.376 244018 DEBUG nova.virt.hardware [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:36:04 np0005629333 nova_compute[244014]: 2026-02-25 12:36:04.376 244018 DEBUG nova.virt.hardware [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:36:04 np0005629333 nova_compute[244014]: 2026-02-25 12:36:04.376 244018 DEBUG nova.virt.hardware [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:36:04 np0005629333 nova_compute[244014]: 2026-02-25 12:36:04.377 244018 DEBUG nova.virt.hardware [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:36:04 np0005629333 nova_compute[244014]: 2026-02-25 12:36:04.377 244018 DEBUG nova.virt.hardware [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:36:04 np0005629333 nova_compute[244014]: 2026-02-25 12:36:04.377 244018 DEBUG nova.virt.hardware [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:36:04 np0005629333 nova_compute[244014]: 2026-02-25 12:36:04.377 244018 DEBUG nova.virt.hardware [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:36:04 np0005629333 nova_compute[244014]: 2026-02-25 12:36:04.378 244018 DEBUG nova.virt.hardware [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:36:04 np0005629333 nova_compute[244014]: 2026-02-25 12:36:04.381 244018 DEBUG oslo_concurrency.processutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:04 np0005629333 nova_compute[244014]: 2026-02-25 12:36:04.921 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:36:04 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3425467780' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:36:04 np0005629333 nova_compute[244014]: 2026-02-25 12:36:04.965 244018 DEBUG oslo_concurrency.processutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:36:04 np0005629333 nova_compute[244014]: 2026-02-25 12:36:04.985 244018 DEBUG nova.storage.rbd_utils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] rbd image be11f836-327c-447c-865a-0088e0554c0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:36:04 np0005629333 nova_compute[244014]: 2026-02-25 12:36:04.989 244018 DEBUG oslo_concurrency.processutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.331 244018 DEBUG oslo_concurrency.lockutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Acquiring lock "2184a715-0ac8-4fc2-aa99-fae3b8e32edf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.332 244018 DEBUG oslo_concurrency.lockutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "2184a715-0ac8-4fc2-aa99-fae3b8e32edf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.368 244018 DEBUG nova.compute.manager [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.474 244018 DEBUG oslo_concurrency.lockutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.475 244018 DEBUG oslo_concurrency.lockutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.483 244018 DEBUG nova.virt.hardware [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.484 244018 INFO nova.compute.claims [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:36:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:36:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3133009143' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.534 244018 DEBUG oslo_concurrency.processutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.535 244018 DEBUG nova.virt.libvirt.vif [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:35:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1424139914',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1424139914',id=88,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6c30db4c9b5f480dbb12cc23f1604de1',ramdisk_id='',reservation_id='r-7eqku0s7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-203832490',owner_user_name='tempest-InstanceActionsV221TestJSON-203832490-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:35:59Z,user_data=None,user_id='546ffcfc9e27442d89b984a2d0e30dda',uuid=be11f836-327c-447c-865a-0088e0554c0d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3fb1749a-7239-482e-939c-ec165690b798", "address": "fa:16:3e:9d:e4:ea", "network": {"id": "aefc26c0-533a-476c-be4d-dc2b231c8685", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1194995791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c30db4c9b5f480dbb12cc23f1604de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb1749a-72", "ovs_interfaceid": "3fb1749a-7239-482e-939c-ec165690b798", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.537 244018 DEBUG nova.network.os_vif_util [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Converting VIF {"id": "3fb1749a-7239-482e-939c-ec165690b798", "address": "fa:16:3e:9d:e4:ea", "network": {"id": "aefc26c0-533a-476c-be4d-dc2b231c8685", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1194995791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c30db4c9b5f480dbb12cc23f1604de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb1749a-72", "ovs_interfaceid": "3fb1749a-7239-482e-939c-ec165690b798", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.539 244018 DEBUG nova.network.os_vif_util [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e4:ea,bridge_name='br-int',has_traffic_filtering=True,id=3fb1749a-7239-482e-939c-ec165690b798,network=Network(aefc26c0-533a-476c-be4d-dc2b231c8685),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fb1749a-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.541 244018 DEBUG nova.objects.instance [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lazy-loading 'pci_devices' on Instance uuid be11f836-327c-447c-865a-0088e0554c0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.578 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:36:05 np0005629333 nova_compute[244014]:  <uuid>be11f836-327c-447c-865a-0088e0554c0d</uuid>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:  <name>instance-00000058</name>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <nova:name>tempest-InstanceActionsV221TestJSON-server-1424139914</nova:name>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:36:04</nova:creationTime>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:36:05 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:        <nova:user uuid="546ffcfc9e27442d89b984a2d0e30dda">tempest-InstanceActionsV221TestJSON-203832490-project-member</nova:user>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:        <nova:project uuid="6c30db4c9b5f480dbb12cc23f1604de1">tempest-InstanceActionsV221TestJSON-203832490</nova:project>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:        <nova:port uuid="3fb1749a-7239-482e-939c-ec165690b798">
Feb 25 07:36:05 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <entry name="serial">be11f836-327c-447c-865a-0088e0554c0d</entry>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <entry name="uuid">be11f836-327c-447c-865a-0088e0554c0d</entry>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/be11f836-327c-447c-865a-0088e0554c0d_disk">
Feb 25 07:36:05 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:36:05 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/be11f836-327c-447c-865a-0088e0554c0d_disk.config">
Feb 25 07:36:05 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:36:05 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:9d:e4:ea"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <target dev="tap3fb1749a-72"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/be11f836-327c-447c-865a-0088e0554c0d/console.log" append="off"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:36:05 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:36:05 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:36:05 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:36:05 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.579 244018 DEBUG nova.compute.manager [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Preparing to wait for external event network-vif-plugged-3fb1749a-7239-482e-939c-ec165690b798 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.580 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Acquiring lock "be11f836-327c-447c-865a-0088e0554c0d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.580 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.580 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.581 244018 DEBUG nova.virt.libvirt.vif [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:35:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1424139914',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1424139914',id=88,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6c30db4c9b5f480dbb12cc23f1604de1',ramdisk_id='',reservation_id='r-7eqku0s7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-203832490',owner_user_name='tempest-InstanceActionsV221TestJSON-203832490-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:35:59Z,user_data=None,user_id='546ffcfc9e27442d89b984a2d0e30dda',uuid=be11f836-327c-447c-865a-0088e0554c0d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3fb1749a-7239-482e-939c-ec165690b798", "address": "fa:16:3e:9d:e4:ea", "network": {"id": "aefc26c0-533a-476c-be4d-dc2b231c8685", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1194995791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c30db4c9b5f480dbb12cc23f1604de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb1749a-72", "ovs_interfaceid": "3fb1749a-7239-482e-939c-ec165690b798", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.581 244018 DEBUG nova.network.os_vif_util [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Converting VIF {"id": "3fb1749a-7239-482e-939c-ec165690b798", "address": "fa:16:3e:9d:e4:ea", "network": {"id": "aefc26c0-533a-476c-be4d-dc2b231c8685", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1194995791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c30db4c9b5f480dbb12cc23f1604de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb1749a-72", "ovs_interfaceid": "3fb1749a-7239-482e-939c-ec165690b798", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.582 244018 DEBUG nova.network.os_vif_util [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e4:ea,bridge_name='br-int',has_traffic_filtering=True,id=3fb1749a-7239-482e-939c-ec165690b798,network=Network(aefc26c0-533a-476c-be4d-dc2b231c8685),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fb1749a-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.582 244018 DEBUG os_vif [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e4:ea,bridge_name='br-int',has_traffic_filtering=True,id=3fb1749a-7239-482e-939c-ec165690b798,network=Network(aefc26c0-533a-476c-be4d-dc2b231c8685),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fb1749a-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.583 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.583 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.584 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.588 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.588 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3fb1749a-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.589 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3fb1749a-72, col_values=(('external_ids', {'iface-id': '3fb1749a-7239-482e-939c-ec165690b798', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:e4:ea', 'vm-uuid': 'be11f836-327c-447c-865a-0088e0554c0d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.590 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.591 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:36:05 np0005629333 NetworkManager[49836]: <info>  [1772022965.5918] manager: (tap3fb1749a-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/372)
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.597 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.599 244018 INFO os_vif [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e4:ea,bridge_name='br-int',has_traffic_filtering=True,id=3fb1749a-7239-482e-939c-ec165690b798,network=Network(aefc26c0-533a-476c-be4d-dc2b231c8685),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fb1749a-72')#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.679 244018 DEBUG oslo_concurrency.processutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.731 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.732 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.732 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] No VIF found with MAC fa:16:3e:9d:e4:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.733 244018 INFO nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Using config drive#033[00m
Feb 25 07:36:05 np0005629333 nova_compute[244014]: 2026-02-25 12:36:05.888 244018 DEBUG nova.storage.rbd_utils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] rbd image be11f836-327c-447c-865a-0088e0554c0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:36:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 179 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 543 KiB/s wr, 15 op/s
Feb 25 07:36:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:36:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3238565797' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:36:06 np0005629333 nova_compute[244014]: 2026-02-25 12:36:06.318 244018 DEBUG oslo_concurrency.processutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:36:06 np0005629333 nova_compute[244014]: 2026-02-25 12:36:06.324 244018 DEBUG nova.compute.provider_tree [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:36:06 np0005629333 nova_compute[244014]: 2026-02-25 12:36:06.339 244018 DEBUG nova.scheduler.client.report [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:36:06 np0005629333 nova_compute[244014]: 2026-02-25 12:36:06.380 244018 DEBUG oslo_concurrency.lockutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:06 np0005629333 nova_compute[244014]: 2026-02-25 12:36:06.381 244018 DEBUG nova.compute.manager [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:36:06 np0005629333 nova_compute[244014]: 2026-02-25 12:36:06.387 244018 INFO nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Creating config drive at /var/lib/nova/instances/be11f836-327c-447c-865a-0088e0554c0d/disk.config#033[00m
Feb 25 07:36:06 np0005629333 nova_compute[244014]: 2026-02-25 12:36:06.391 244018 DEBUG oslo_concurrency.processutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/be11f836-327c-447c-865a-0088e0554c0d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpoc37fecw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:06 np0005629333 nova_compute[244014]: 2026-02-25 12:36:06.479 244018 DEBUG nova.compute.manager [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Feb 25 07:36:06 np0005629333 nova_compute[244014]: 2026-02-25 12:36:06.534 244018 DEBUG oslo_concurrency.processutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/be11f836-327c-447c-865a-0088e0554c0d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpoc37fecw" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:36:06 np0005629333 nova_compute[244014]: 2026-02-25 12:36:06.557 244018 DEBUG nova.storage.rbd_utils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] rbd image be11f836-327c-447c-865a-0088e0554c0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:36:06 np0005629333 nova_compute[244014]: 2026-02-25 12:36:06.560 244018 DEBUG oslo_concurrency.processutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/be11f836-327c-447c-865a-0088e0554c0d/disk.config be11f836-327c-447c-865a-0088e0554c0d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:06 np0005629333 nova_compute[244014]: 2026-02-25 12:36:06.731 244018 INFO nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:36:06 np0005629333 nova_compute[244014]: 2026-02-25 12:36:06.753 244018 DEBUG nova.compute.manager [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:36:06 np0005629333 nova_compute[244014]: 2026-02-25 12:36:06.845 244018 DEBUG nova.compute.manager [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:36:06 np0005629333 nova_compute[244014]: 2026-02-25 12:36:06.846 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:36:06 np0005629333 nova_compute[244014]: 2026-02-25 12:36:06.847 244018 INFO nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Creating image(s)#033[00m
Feb 25 07:36:06 np0005629333 nova_compute[244014]: 2026-02-25 12:36:06.870 244018 DEBUG nova.storage.rbd_utils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:36:06 np0005629333 nova_compute[244014]: 2026-02-25 12:36:06.892 244018 DEBUG nova.storage.rbd_utils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:36:06 np0005629333 nova_compute[244014]: 2026-02-25 12:36:06.910 244018 DEBUG nova.storage.rbd_utils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:36:06 np0005629333 nova_compute[244014]: 2026-02-25 12:36:06.913 244018 DEBUG oslo_concurrency.processutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:06 np0005629333 nova_compute[244014]: 2026-02-25 12:36:06.942 244018 DEBUG nova.network.neutron [req-eda8d7d4-ab81-441a-a66a-941b415b5544 req-9ed2b845-3c02-4125-a8c1-27a1e763ac1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Updated VIF entry in instance network info cache for port 3fb1749a-7239-482e-939c-ec165690b798. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:36:06 np0005629333 nova_compute[244014]: 2026-02-25 12:36:06.943 244018 DEBUG nova.network.neutron [req-eda8d7d4-ab81-441a-a66a-941b415b5544 req-9ed2b845-3c02-4125-a8c1-27a1e763ac1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Updating instance_info_cache with network_info: [{"id": "3fb1749a-7239-482e-939c-ec165690b798", "address": "fa:16:3e:9d:e4:ea", "network": {"id": "aefc26c0-533a-476c-be4d-dc2b231c8685", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1194995791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c30db4c9b5f480dbb12cc23f1604de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb1749a-72", "ovs_interfaceid": "3fb1749a-7239-482e-939c-ec165690b798", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:36:06 np0005629333 nova_compute[244014]: 2026-02-25 12:36:06.945 244018 DEBUG oslo_concurrency.processutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/be11f836-327c-447c-865a-0088e0554c0d/disk.config be11f836-327c-447c-865a-0088e0554c0d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.385s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:36:06 np0005629333 nova_compute[244014]: 2026-02-25 12:36:06.945 244018 INFO nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Deleting local config drive /var/lib/nova/instances/be11f836-327c-447c-865a-0088e0554c0d/disk.config because it was imported into RBD.#033[00m
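The rbd import / delete pair above moves the config drive from the local filesystem into the Ceph "vms" pool. A minimal sketch of the same step, assuming the rbd CLI and the cephx user shown in the log (error handling elided):

    import os
    import subprocess

    def import_then_remove(local_path: str, pool: str, image_name: str,
                           user: str = "openstack",
                           conf: str = "/etc/ceph/ceph.conf") -> None:
        # --image-format=2 selects RBD format 2 images, as in the log.
        subprocess.run(
            ["rbd", "import", "--pool", pool, local_path, image_name,
             "--image-format=2", "--id", user, "--conf", conf],
            check=True)
        # Once the import returns 0, the local copy is redundant.
        os.unlink(local_path)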
Feb 25 07:36:06 np0005629333 nova_compute[244014]: 2026-02-25 12:36:06.960 244018 DEBUG oslo_concurrency.lockutils [req-eda8d7d4-ab81-441a-a66a-941b415b5544 req-9ed2b845-3c02-4125-a8c1-27a1e763ac1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-be11f836-327c-447c-865a-0088e0554c0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:36:06 np0005629333 nova_compute[244014]: 2026-02-25 12:36:06.973 244018 DEBUG oslo_concurrency.processutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
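The qemu-img info call above is wrapped in oslo_concurrency.prlimit, which re-executes the child with RLIMIT_AS and RLIMIT_CPU applied so a malformed image cannot exhaust the host while being inspected. A sketch of the same guarded probe, with limits mirroring the logged --as/--cpu values:

    import json
    import subprocess

    def qemu_img_info(path: str, mem_bytes: int = 1 << 30,
                      cpu_seconds: int = 30) -> dict:
        cmd = ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
               "--as=%d" % mem_bytes, "--cpu=%d" % cpu_seconds, "--",
               "env", "LC_ALL=C", "LANG=C",
               "qemu-img", "info", path, "--force-share", "--output=json"]
        out = subprocess.run(cmd, check=True, capture_output=True,
                             text=True).stdout
        return json.loads(out)

--force-share lets the probe run even while another process holds the image open.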
Feb 25 07:36:06 np0005629333 nova_compute[244014]: 2026-02-25 12:36:06.974 244018 DEBUG oslo_concurrency.lockutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:06 np0005629333 nova_compute[244014]: 2026-02-25 12:36:06.975 244018 DEBUG oslo_concurrency.lockutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:06 np0005629333 nova_compute[244014]: 2026-02-25 12:36:06.975 244018 DEBUG oslo_concurrency.lockutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
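The acquire/release pair on the base-image hash above comes from fetch_func_sync, which serializes image-cache downloads per base image. A minimal sketch of the pattern, assuming only the public lockutils API (the function body is hypothetical):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('a63dc6dbb387022d47a8ca49bddcc4af2508a4d6')
    def fetch_base_image_once(base_path):
        # Real code would download and checksum the base image here.
        # The 0.000s hold time in the log means the image was already
        # cached, so the critical section returned almost immediately.
        pass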
Feb 25 07:36:06 np0005629333 kernel: tap3fb1749a-72: entered promiscuous mode
Feb 25 07:36:06 np0005629333 NetworkManager[49836]: <info>  [1772022966.9803] manager: (tap3fb1749a-72): new Tun device (/org/freedesktop/NetworkManager/Devices/373)
Feb 25 07:36:06 np0005629333 ovn_controller[147040]: 2026-02-25T12:36:06Z|00892|binding|INFO|Claiming lport 3fb1749a-7239-482e-939c-ec165690b798 for this chassis.
Feb 25 07:36:06 np0005629333 ovn_controller[147040]: 2026-02-25T12:36:06Z|00893|binding|INFO|3fb1749a-7239-482e-939c-ec165690b798: Claiming fa:16:3e:9d:e4:ea 10.100.0.14
Feb 25 07:36:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:06.994 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:e4:ea 10.100.0.14'], port_security=['fa:16:3e:9d:e4:ea 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'be11f836-327c-447c-865a-0088e0554c0d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aefc26c0-533a-476c-be4d-dc2b231c8685', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6c30db4c9b5f480dbb12cc23f1604de1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b1c7a807-e7ae-4ff5-a4f8-0f24bfa78da8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3ffc0757-5bda-4254-9536-399017e8e195, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=3fb1749a-7239-482e-939c-ec165690b798) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:36:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:06.995 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 3fb1749a-7239-482e-939c-ec165690b798 in datapath aefc26c0-533a-476c-be4d-dc2b231c8685 bound to our chassis#033[00m
Feb 25 07:36:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:06.996 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aefc26c0-533a-476c-be4d-dc2b231c8685#033[00m
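The "Matched UPDATE: PortBindingUpdatedEvent" entry above is ovsdbapp's row-event dispatch: the agent registers an event on the Port_Binding table and reacts when a port transitions to bound-on-this-chassis (note old=Port_Binding(chassis=[]) in the match). A hypothetical reduction of such an event class, assuming the match_fn hook of ovsdbapp's RowEvent that recent releases provide:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBoundToChassisEvent(row_event.RowEvent):
        def __init__(self, chassis_name):
            self.chassis_name = chassis_name
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Fire only on the unbound -> bound-to-us transition.
            try:
                return (bool(row.chassis) and not old.chassis and
                        row.chassis[0].name == self.chassis_name)
            except AttributeError:
                return False

        def run(self, event, row, old):
            # In the real agent this triggers metadata provisioning for
            # the port's datapath, as the two preceding lines show.
            print('port %s bound to our chassis' % row.logical_port)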
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.001 244018 DEBUG nova.storage.rbd_utils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.003 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7909c137-7f72-45e0-9b9f-2384c8b29e7e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.006 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaefc26c0-51 in ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:36:07 np0005629333 systemd-udevd[323851]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.008 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaefc26c0-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.008 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cb4cbae0-e746-4ac5-820d-833679a4e718]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:07 np0005629333 ovn_controller[147040]: 2026-02-25T12:36:07Z|00894|binding|INFO|Setting lport 3fb1749a-7239-482e-939c-ec165690b798 ovn-installed in OVS
Feb 25 07:36:07 np0005629333 ovn_controller[147040]: 2026-02-25T12:36:07Z|00895|binding|INFO|Setting lport 3fb1749a-7239-482e-939c-ec165690b798 up in Southbound
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.010 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[793f195c-0f62-4248-b98c-da7c97f0c180]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.011 244018 DEBUG oslo_concurrency.processutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:07 np0005629333 systemd-machined[210048]: New machine qemu-113-instance-00000058.
Feb 25 07:36:07 np0005629333 NetworkManager[49836]: <info>  [1772022967.0194] device (tap3fb1749a-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:36:07 np0005629333 NetworkManager[49836]: <info>  [1772022967.0202] device (tap3fb1749a-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.023 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[9a4abfbc-3e7f-442d-80cb-8e5aa7ba26ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:07 np0005629333 systemd[1]: Started Virtual Machine qemu-113-instance-00000058.
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.041 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5b2c02e1-438b-43af-a39f-8c0a571acae1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.048 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.062 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e48f5bce-a476-4042-a074-beb7486ddb9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:07 np0005629333 systemd-udevd[323855]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:36:07 np0005629333 NetworkManager[49836]: <info>  [1772022967.0671] manager: (tapaefc26c0-50): new Veth device (/org/freedesktop/NetworkManager/Devices/374)
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.068 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[85f26b94-1747-4e6f-99c1-9daae2140759]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.088 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7cebb0a4-402d-4bff-9845-d65fa1f8d8ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.091 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[944454bb-3148-403c-b87f-6f79f8dda383]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:07 np0005629333 NetworkManager[49836]: <info>  [1772022967.1087] device (tapaefc26c0-50): carrier: link connected
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.111 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[da396934-2e70-49a9-b9f2-cf8a851f074a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.128 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[27dec32b-2a37-4c6e-8b6a-e3d4a379631e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaefc26c0-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:0c:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 269], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493651, 'reachable_time': 32059, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323900, 'error': None, 'target': 'ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.139 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[145a3de4-f567-485c-8c0a-1ede4fbf7bb6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea3:cc6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 493651, 'tstamp': 493651}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323901, 'error': None, 'target': 'ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.151 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[81573cb8-929d-403d-b96a-8ea848c9c42c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaefc26c0-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:0c:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 269], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493651, 'reachable_time': 32059, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323902, 'error': None, 'target': 'ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.170 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4d090a90-24f8-43ed-81eb-d161f5ae7dbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.208 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[59e1b273-7985-4233-9c7c-7dfe4efdd911]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.209 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaefc26c0-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.209 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.210 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaefc26c0-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.211 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:07 np0005629333 NetworkManager[49836]: <info>  [1772022967.2131] manager: (tapaefc26c0-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/375)
Feb 25 07:36:07 np0005629333 kernel: tapaefc26c0-50: entered promiscuous mode
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.215 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.216 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaefc26c0-50, col_values=(('external_ids', {'iface-id': '25f1e81e-7d2e-4698-8a33-915eba7603a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
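The three ovsdbapp transactions above (DelPortCommand on br-ex, AddPortCommand on br-int, DbSetCommand on the Interface row) plug the metadata veth into the integration bridge and stamp it with the iface-id that ovn-controller uses to bind the port. A CLI-equivalent sketch, assuming ovs-vsctl is available; the agent itself goes through the ovsdbapp IDL, not the CLI:

    import subprocess

    def plug_metadata_port(port: str, iface_id: str,
                           src_bridge: str = "br-ex",
                           dst_bridge: str = "br-int") -> None:
        # --if-exists / --may-exist mirror if_exists=True / may_exist=True
        # in the logged commands, making the sequence idempotent.
        subprocess.run(["ovs-vsctl", "--if-exists", "del-port",
                        src_bridge, port], check=True)
        subprocess.run(["ovs-vsctl", "--may-exist", "add-port",
                        dst_bridge, port], check=True)
        subprocess.run(["ovs-vsctl", "set", "Interface", port,
                        "external_ids:iface-id=%s" % iface_id], check=True)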
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.217 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:07 np0005629333 ovn_controller[147040]: 2026-02-25T12:36:07Z|00896|binding|INFO|Releasing lport 25f1e81e-7d2e-4698-8a33-915eba7603a7 from this chassis (sb_readonly=0)
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.218 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aefc26c0-533a-476c-be4d-dc2b231c8685.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aefc26c0-533a-476c-be4d-dc2b231c8685.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.219 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fd7df206-826a-4999-a06a-23c3975c4d07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.219 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-aefc26c0-533a-476c-be4d-dc2b231c8685
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/aefc26c0-533a-476c-be4d-dc2b231c8685.pid.haproxy
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID aefc26c0-533a-476c-be4d-dc2b231c8685
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:36:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:07.220 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685', 'env', 'PROCESS_TAG=haproxy-aefc26c0-533a-476c-be4d-dc2b231c8685', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aefc26c0-533a-476c-be4d-dc2b231c8685.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
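The generated haproxy config above binds 169.254.169.254:80 inside the network's ovnmeta- namespace and forwards requests (tagged with the X-OVN-Network-ID header) to the metadata proxy socket at /var/lib/neutron/metadata_proxy. The rootwrap command then launches haproxy in that namespace; a simplified sketch of the launch, assuming direct root privileges instead of rootwrap:

    import subprocess

    def spawn_metadata_haproxy(network_id: str, conf_path: str) -> None:
        ns = "ovnmeta-%s" % network_id
        # haproxy daemonizes itself ("daemon" in the generated config)
        # and writes the pidfile named there, so run() returns promptly.
        subprocess.run(
            ["ip", "netns", "exec", ns,
             "env", "PROCESS_TAG=haproxy-%s" % network_id,
             "haproxy", "-f", conf_path],
            check=True)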
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.223 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.388 244018 DEBUG nova.compute.manager [req-e3fe9e0b-1a13-4706-8e76-44e4d0c7c6ba req-e3ccf7bf-0320-4024-91d7-f26b2a6847e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Received event network-vif-plugged-3fb1749a-7239-482e-939c-ec165690b798 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.389 244018 DEBUG oslo_concurrency.lockutils [req-e3fe9e0b-1a13-4706-8e76-44e4d0c7c6ba req-e3ccf7bf-0320-4024-91d7-f26b2a6847e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "be11f836-327c-447c-865a-0088e0554c0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.390 244018 DEBUG oslo_concurrency.lockutils [req-e3fe9e0b-1a13-4706-8e76-44e4d0c7c6ba req-e3ccf7bf-0320-4024-91d7-f26b2a6847e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.390 244018 DEBUG oslo_concurrency.lockutils [req-e3fe9e0b-1a13-4706-8e76-44e4d0c7c6ba req-e3ccf7bf-0320-4024-91d7-f26b2a6847e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.391 244018 DEBUG nova.compute.manager [req-e3fe9e0b-1a13-4706-8e76-44e4d0c7c6ba req-e3ccf7bf-0320-4024-91d7-f26b2a6847e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Processing event network-vif-plugged-3fb1749a-7239-482e-939c-ec165690b798 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
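The lock/pop sequence above is Nova's external-event plumbing: the spawn path registers an expected network-vif-plugged event before plugging the VIF, and when Neutron delivers the event over the API the waiter is woken. A hypothetical reduction of that pattern using only the standard library:

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()   # the "...-events" lock above
            self._events = {}               # (uuid, name) -> threading.Event

        def prepare(self, uuid, name):
            # Called by the spawn thread before plugging the VIF.
            with self._lock:
                return self._events.setdefault((uuid, name),
                                               threading.Event())

        def pop(self, uuid, name):
            # Called when the external event arrives; wakes the waiter.
            with self._lock:
                ev = self._events.pop((uuid, name), None)
            if ev:
                ev.set()

The "Instance event wait completed in 0 seconds" entry a few records below shows the happy path: the event arrived before the spawn thread ever had to block.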
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.603 244018 DEBUG oslo_concurrency.processutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:36:07 np0005629333 podman[323938]: 2026-02-25 12:36:07.546954079 +0000 UTC m=+0.022635261 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.702 244018 DEBUG nova.storage.rbd_utils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] resizing rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
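The resize target above is the flavor's root disk converted to bytes: the imported base image (the ~21 MB cirros qcow2) is grown to the full root_gb. A one-line check, assuming root_gb=1 from the m1.nano flavor dumped later in this log:

    root_gb = 1                     # Flavor(...root_gb=1...) below
    target = root_gb * 1024 ** 3
    assert target == 1073741824     # matches the logged resize argument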
Feb 25 07:36:07 np0005629333 podman[323938]: 2026-02-25 12:36:07.706097877 +0000 UTC m=+0.181779019 container create 46c5d24ad592e89bdd0ec5d6f2895732dc46f0758087abab27fb3ed210180d54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 07:36:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:36:07 np0005629333 systemd[1]: Started libpod-conmon-46c5d24ad592e89bdd0ec5d6f2895732dc46f0758087abab27fb3ed210180d54.scope.
Feb 25 07:36:07 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:36:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f869d10a1ae0f882e6b7b906c9ac51192b52ba8e8071b619209b2a6cd377d6fd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:36:07 np0005629333 podman[323938]: 2026-02-25 12:36:07.879461066 +0000 UTC m=+0.355142238 container init 46c5d24ad592e89bdd0ec5d6f2895732dc46f0758087abab27fb3ed210180d54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:36:07 np0005629333 podman[323938]: 2026-02-25 12:36:07.885429035 +0000 UTC m=+0.361110177 container start 46c5d24ad592e89bdd0ec5d6f2895732dc46f0758087abab27fb3ed210180d54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 07:36:07 np0005629333 neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685[324048]: [NOTICE]   (324053) : New worker (324055) forked
Feb 25 07:36:07 np0005629333 neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685[324048]: [NOTICE]   (324053) : Loading success.
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.910 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022967.9094691, be11f836-327c-447c-865a-0088e0554c0d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.910 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: be11f836-327c-447c-865a-0088e0554c0d] VM Started (Lifecycle Event)#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.912 244018 DEBUG nova.compute.manager [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.916 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.919 244018 INFO nova.virt.libvirt.driver [-] [instance: be11f836-327c-447c-865a-0088e0554c0d] Instance spawned successfully.#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.919 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.952 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: be11f836-327c-447c-865a-0088e0554c0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.958 244018 DEBUG nova.objects.instance [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 2184a715-0ac8-4fc2-aa99-fae3b8e32edf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.963 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: be11f836-327c-447c-865a-0088e0554c0d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
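In the synchronization entry above the power states are nova.compute.power_state integers: the DB still holds 0 (NOSTATE, nothing recorded yet) while libvirt reports 1 (RUNNING), so the handler updates the DB rather than reacting to a real state change. For reference, a sketch of the constants as commonly documented for that module (values assumed, not taken from this log):

    # nova.compute.power_state values (PAUSED skips 2, CRASHED skips 5):
    NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7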
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.966 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.966 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.967 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.967 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.968 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.968 244018 DEBUG nova.virt.libvirt.driver [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.974 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.975 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Ensure instance console log exists: /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.975 244018 DEBUG oslo_concurrency.lockutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.975 244018 DEBUG oslo_concurrency.lockutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.976 244018 DEBUG oslo_concurrency.lockutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.977 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.981 244018 WARNING nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.990 244018 DEBUG nova.virt.libvirt.host [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.991 244018 DEBUG nova.virt.libvirt.host [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.994 244018 DEBUG nova.virt.libvirt.host [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.995 244018 DEBUG nova.virt.libvirt.host [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.995 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.995 244018 DEBUG nova.virt.hardware [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.996 244018 DEBUG nova.virt.hardware [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.996 244018 DEBUG nova.virt.hardware [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.996 244018 DEBUG nova.virt.hardware [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.997 244018 DEBUG nova.virt.hardware [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.997 244018 DEBUG nova.virt.hardware [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.997 244018 DEBUG nova.virt.hardware [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.997 244018 DEBUG nova.virt.hardware [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.998 244018 DEBUG nova.virt.hardware [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.998 244018 DEBUG nova.virt.hardware [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:36:07 np0005629333 nova_compute[244014]: 2026-02-25 12:36:07.998 244018 DEBUG nova.virt.hardware [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
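
The topology walk above starts from the flavor and image constraints (all unset here, hence the 0:0:0 preferences and the 65536 per-dimension limits) and enumerates sockets/cores/threads combinations that multiply out to the instance's vCPU count. A rough sketch of that enumeration, assuming the same "unset means unlimited" convention (the real code in nova/virt/hardware.py additionally handles NUMA and preference sorting):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # sockets * cores * threads must equal vcpus, so no dimension can
        # exceed vcpus; clamp the search space accordingly.
        for s in range(1, min(max_sockets, vcpus) + 1):
            for c in range(1, min(max_cores, vcpus) + 1):
                for t in range(1, min(max_threads, vcpus) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)], matching the log
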
Feb 25 07:36:08 np0005629333 nova_compute[244014]: 2026-02-25 12:36:08.000 244018 DEBUG oslo_concurrency.processutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:08 np0005629333 nova_compute[244014]: 2026-02-25 12:36:08.028 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: be11f836-327c-447c-865a-0088e0554c0d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:36:08 np0005629333 nova_compute[244014]: 2026-02-25 12:36:08.029 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022967.9096217, be11f836-327c-447c-865a-0088e0554c0d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:36:08 np0005629333 nova_compute[244014]: 2026-02-25 12:36:08.030 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: be11f836-327c-447c-865a-0088e0554c0d] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:36:08 np0005629333 nova_compute[244014]: 2026-02-25 12:36:08.032 244018 INFO nova.compute.manager [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Took 8.87 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:36:08 np0005629333 nova_compute[244014]: 2026-02-25 12:36:08.032 244018 DEBUG nova.compute.manager [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:36:08 np0005629333 nova_compute[244014]: 2026-02-25 12:36:08.058 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: be11f836-327c-447c-865a-0088e0554c0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:36:08 np0005629333 nova_compute[244014]: 2026-02-25 12:36:08.062 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022967.915422, be11f836-327c-447c-865a-0088e0554c0d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:36:08 np0005629333 nova_compute[244014]: 2026-02-25 12:36:08.063 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: be11f836-327c-447c-865a-0088e0554c0d] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:36:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 305 active+clean; 200 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 07:36:08 np0005629333 nova_compute[244014]: 2026-02-25 12:36:08.091 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: be11f836-327c-447c-865a-0088e0554c0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:36:08 np0005629333 nova_compute[244014]: 2026-02-25 12:36:08.103 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: be11f836-327c-447c-865a-0088e0554c0d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:36:08 np0005629333 nova_compute[244014]: 2026-02-25 12:36:08.110 244018 INFO nova.compute.manager [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Took 9.93 seconds to build instance.#033[00m
Feb 25 07:36:08 np0005629333 nova_compute[244014]: 2026-02-25 12:36:08.149 244018 DEBUG oslo_concurrency.lockutils [None req-2ec75be2-17d2-4a02-8f2b-f67ec09afff0 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.042s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
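
The build path serializes on a per-instance lock (held 10.042s above) so that concurrent build and terminate requests for the same instance cannot interleave; the terminate request a few lines below queues on the same lock name. oslo.concurrency exposes this pattern directly; a minimal sketch, assuming an in-process fair lock is sufficient (Nova wraps this in its own helpers):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('be11f836-327c-447c-865a-0088e0554c0d')
    def locked_do_build_and_run_instance():
        # Everything here runs with the named lock held; a competing
        # do_terminate_instance using the same lock name waits its turn.
        pass

    locked_do_build_and_run_instance()
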
Feb 25 07:36:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:36:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3999880895' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:36:08 np0005629333 nova_compute[244014]: 2026-02-25 12:36:08.594 244018 DEBUG oslo_concurrency.processutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
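
"ceph mon dump --format=json" is how the RBD layer discovers monitor addresses before connecting. A standalone equivalent using subprocess, with the same client id and conf path shown in the log (the JSON field names are the usual mon dump output; hedged with .get in case of version differences):

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    mon_map = json.loads(out)
    # Each entry carries the monitor's name and its address(es).
    for mon in mon_map.get('mons', []):
        print(mon.get('name'), mon.get('public_addr'))
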
Feb 25 07:36:08 np0005629333 nova_compute[244014]: 2026-02-25 12:36:08.616 244018 DEBUG nova.storage.rbd_utils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:36:08 np0005629333 nova_compute[244014]: 2026-02-25 12:36:08.619 244018 DEBUG oslo_concurrency.processutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:08 np0005629333 nova_compute[244014]: 2026-02-25 12:36:08.705 244018 DEBUG oslo_concurrency.lockutils [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Acquiring lock "be11f836-327c-447c-865a-0088e0554c0d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:08 np0005629333 nova_compute[244014]: 2026-02-25 12:36:08.706 244018 DEBUG oslo_concurrency.lockutils [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:08 np0005629333 nova_compute[244014]: 2026-02-25 12:36:08.706 244018 DEBUG oslo_concurrency.lockutils [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Acquiring lock "be11f836-327c-447c-865a-0088e0554c0d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:08 np0005629333 nova_compute[244014]: 2026-02-25 12:36:08.706 244018 DEBUG oslo_concurrency.lockutils [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:08 np0005629333 nova_compute[244014]: 2026-02-25 12:36:08.707 244018 DEBUG oslo_concurrency.lockutils [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:08 np0005629333 nova_compute[244014]: 2026-02-25 12:36:08.708 244018 INFO nova.compute.manager [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Terminating instance#033[00m
Feb 25 07:36:08 np0005629333 nova_compute[244014]: 2026-02-25 12:36:08.709 244018 DEBUG nova.compute.manager [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:36:08 np0005629333 kernel: tap3fb1749a-72 (unregistering): left promiscuous mode
Feb 25 07:36:08 np0005629333 NetworkManager[49836]: <info>  [1772022968.9069] device (tap3fb1749a-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:36:08 np0005629333 nova_compute[244014]: 2026-02-25 12:36:08.909 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:36:08Z|00897|binding|INFO|Releasing lport 3fb1749a-7239-482e-939c-ec165690b798 from this chassis (sb_readonly=0)
Feb 25 07:36:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:36:08Z|00898|binding|INFO|Setting lport 3fb1749a-7239-482e-939c-ec165690b798 down in Southbound
Feb 25 07:36:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:36:08Z|00899|binding|INFO|Removing iface tap3fb1749a-72 ovn-installed in OVS
Feb 25 07:36:08 np0005629333 nova_compute[244014]: 2026-02-25 12:36:08.915 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:08 np0005629333 nova_compute[244014]: 2026-02-25 12:36:08.919 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:08.925 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:e4:ea 10.100.0.14'], port_security=['fa:16:3e:9d:e4:ea 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'be11f836-327c-447c-865a-0088e0554c0d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aefc26c0-533a-476c-be4d-dc2b231c8685', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6c30db4c9b5f480dbb12cc23f1604de1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b1c7a807-e7ae-4ff5-a4f8-0f24bfa78da8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3ffc0757-5bda-4254-9536-399017e8e195, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=3fb1749a-7239-482e-939c-ec165690b798) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:36:08 np0005629333 nova_compute[244014]: 2026-02-25 12:36:08.926 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:08.934 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 3fb1749a-7239-482e-939c-ec165690b798 in datapath aefc26c0-533a-476c-be4d-dc2b231c8685 unbound from our chassis#033[00m
Feb 25 07:36:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:08.937 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aefc26c0-533a-476c-be4d-dc2b231c8685, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:36:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:08.938 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ade21f94-d5dd-4aed-9cdb-fb1023600597]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:08.940 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685 namespace which is not needed anymore#033[00m
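
The metadata agent reacts to the Port_Binding update above through ovsdbapp's row-event machinery: an event object declares the table and event types it cares about, a matcher decides per row, and run() does the work (here: notice the port went down, find no remaining VIFs, tear the namespace down). A skeletal version, assuming ovsdbapp's RowEvent base class (neutron's real PortBindingUpdatedEvent adds chassis and datapath checks):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Fire only on updates to the Port_Binding table.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' carries just the columns that changed; in the log above
            # the port went from up=[True] with a chassis to unbound.
            print('port', row.logical_port, 'changed, old:', old)
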
Feb 25 07:36:08 np0005629333 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d00000058.scope: Deactivated successfully.
Feb 25 07:36:08 np0005629333 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d00000058.scope: Consumed 1.548s CPU time.
Feb 25 07:36:08 np0005629333 systemd-machined[210048]: Machine qemu-113-instance-00000058 terminated.
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.126 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:09 np0005629333 neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685[324048]: [NOTICE]   (324053) : haproxy version is 2.8.14-c23fe91
Feb 25 07:36:09 np0005629333 neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685[324048]: [NOTICE]   (324053) : path to executable is /usr/sbin/haproxy
Feb 25 07:36:09 np0005629333 neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685[324048]: [WARNING]  (324053) : Exiting Master process...
Feb 25 07:36:09 np0005629333 neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685[324048]: [ALERT]    (324053) : Current worker (324055) exited with code 143 (Terminated)
Feb 25 07:36:09 np0005629333 neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685[324048]: [WARNING]  (324053) : All workers exited. Exiting... (0)
Feb 25 07:36:09 np0005629333 systemd[1]: libpod-46c5d24ad592e89bdd0ec5d6f2895732dc46f0758087abab27fb3ed210180d54.scope: Deactivated successfully.
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.139 244018 INFO nova.virt.libvirt.driver [-] [instance: be11f836-327c-447c-865a-0088e0554c0d] Instance destroyed successfully.#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.140 244018 DEBUG nova.objects.instance [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lazy-loading 'resources' on Instance uuid be11f836-327c-447c-865a-0088e0554c0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:36:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:36:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2781290701' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:36:09 np0005629333 podman[324163]: 2026-02-25 12:36:09.142655876 +0000 UTC m=+0.117168472 container died 46c5d24ad592e89bdd0ec5d6f2895732dc46f0758087abab27fb3ed210180d54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.157 244018 DEBUG nova.virt.libvirt.vif [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:35:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-1424139914',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-1424139914',id=88,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:36:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6c30db4c9b5f480dbb12cc23f1604de1',ramdisk_id='',reservation_id='r-7eqku0s7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsV221TestJSON-203832490',owner_user_name='tempest-InstanceActionsV221TestJSON-203832490-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:36:08Z,user_data=None,user_id='546ffcfc9e27442d89b984a2d0e30dda',uuid=be11f836-327c-447c-865a-0088e0554c0d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3fb1749a-7239-482e-939c-ec165690b798", "address": "fa:16:3e:9d:e4:ea", "network": {"id": "aefc26c0-533a-476c-be4d-dc2b231c8685", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1194995791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c30db4c9b5f480dbb12cc23f1604de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb1749a-72", "ovs_interfaceid": "3fb1749a-7239-482e-939c-ec165690b798", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.158 244018 DEBUG nova.network.os_vif_util [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Converting VIF {"id": "3fb1749a-7239-482e-939c-ec165690b798", "address": "fa:16:3e:9d:e4:ea", "network": {"id": "aefc26c0-533a-476c-be4d-dc2b231c8685", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-1194995791-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c30db4c9b5f480dbb12cc23f1604de1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fb1749a-72", "ovs_interfaceid": "3fb1749a-7239-482e-939c-ec165690b798", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.159 244018 DEBUG nova.network.os_vif_util [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e4:ea,bridge_name='br-int',has_traffic_filtering=True,id=3fb1749a-7239-482e-939c-ec165690b798,network=Network(aefc26c0-533a-476c-be4d-dc2b231c8685),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fb1749a-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.160 244018 DEBUG os_vif [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e4:ea,bridge_name='br-int',has_traffic_filtering=True,id=3fb1749a-7239-482e-939c-ec165690b798,network=Network(aefc26c0-533a-476c-be4d-dc2b231c8685),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fb1749a-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.163 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.163 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3fb1749a-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
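
The DelPortCommand transaction above is the programmatic form of removing the tap port from br-int. The same operation from a shell, or via subprocess, assuming ovs-vsctl is on PATH:

    import subprocess

    # --if-exists mirrors DelPortCommand(if_exists=True): deleting a port
    # that is already gone is a no-op rather than an error.
    subprocess.check_call(
        ['ovs-vsctl', '--if-exists', 'del-port', 'br-int', 'tap3fb1749a-72'])
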
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.165 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.167 244018 DEBUG oslo_concurrency.processutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.168 244018 DEBUG nova.objects.instance [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2184a715-0ac8-4fc2-aa99-fae3b8e32edf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.170 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.172 244018 INFO os_vif [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:e4:ea,bridge_name='br-int',has_traffic_filtering=True,id=3fb1749a-7239-482e-939c-ec165690b798,network=Network(aefc26c0-533a-476c-be4d-dc2b231c8685),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fb1749a-72')#033[00m
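
os-vif consumes the converted VIFOpenVSwitch object rather than Nova's dict form. A minimal unplug along the lines of the log, assuming os-vif's public entry points; the field names come from the object logged above rather than from any particular os-vif release, so treat this as a sketch:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin among others

    net = network.Network(id='aefc26c0-533a-476c-be4d-dc2b231c8685',
                          bridge='br-int')
    profile = vif.VIFPortProfileOpenVSwitch(
        interface_id='3fb1749a-7239-482e-939c-ec165690b798')
    my_vif = vif.VIFOpenVSwitch(
        id='3fb1749a-7239-482e-939c-ec165690b798',
        address='fa:16:3e:9d:e4:ea',
        vif_name='tap3fb1749a-72',
        bridge_name='br-int',
        port_profile=profile,
        network=net)
    inst = instance_info.InstanceInfo(
        uuid='be11f836-327c-447c-865a-0088e0554c0d',
        name='instance-00000058')

    os_vif.unplug(my_vif, inst)
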
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.201 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:36:09 np0005629333 nova_compute[244014]:  <uuid>2184a715-0ac8-4fc2-aa99-fae3b8e32edf</uuid>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:  <name>instance-00000059</name>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerShowV254Test-server-567186647</nova:name>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:36:07</nova:creationTime>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:36:09 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:        <nova:user uuid="f14f324492304e519e215bb56099abdd">tempest-ServerShowV254Test-1036115410-project-member</nova:user>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:        <nova:project uuid="70a704d198834ace8c228d59139dd7b4">tempest-ServerShowV254Test-1036115410</nova:project>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      <nova:ports/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      <entry name="serial">2184a715-0ac8-4fc2-aa99-fae3b8e32edf</entry>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      <entry name="uuid">2184a715-0ac8-4fc2-aa99-fae3b8e32edf</entry>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk">
Feb 25 07:36:09 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:36:09 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk.config">
Feb 25 07:36:09 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:36:09 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/console.log" append="off"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:36:09 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:36:09 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:36:09 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:36:09 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
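
The XML document logged above is what gets handed to libvirt to create the guest. The equivalent direct call through libvirt-python, assuming a local qemu:///system connection and the XML held in a string (Nova's driver additionally pauses the domain around device plugging, as the Paused/Resumed lifecycle events earlier show):

    import libvirt

    xml = open('instance-00000059.xml').read()  # the document logged above
    conn = libvirt.open('qemu:///system')
    try:
        # createXML starts a transient domain immediately from the XML.
        dom = conn.createXML(xml, 0)
        print(dom.name(), dom.ID())
    finally:
        conn.close()
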
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.386 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.387 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.387 244018 INFO nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Using config drive#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.420 244018 DEBUG nova.storage.rbd_utils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:36:09 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-46c5d24ad592e89bdd0ec5d6f2895732dc46f0758087abab27fb3ed210180d54-userdata-shm.mount: Deactivated successfully.
Feb 25 07:36:09 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f869d10a1ae0f882e6b7b906c9ac51192b52ba8e8071b619209b2a6cd377d6fd-merged.mount: Deactivated successfully.
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.753 244018 INFO nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Creating config drive at /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/disk.config#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.757 244018 DEBUG oslo_concurrency.processutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxft2u2uc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.795 244018 DEBUG nova.compute.manager [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Received event network-vif-plugged-3fb1749a-7239-482e-939c-ec165690b798 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.796 244018 DEBUG oslo_concurrency.lockutils [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "be11f836-327c-447c-865a-0088e0554c0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.797 244018 DEBUG oslo_concurrency.lockutils [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.797 244018 DEBUG oslo_concurrency.lockutils [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.797 244018 DEBUG nova.compute.manager [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] No waiting events found dispatching network-vif-plugged-3fb1749a-7239-482e-939c-ec165690b798 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.798 244018 WARNING nova.compute.manager [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Received unexpected event network-vif-plugged-3fb1749a-7239-482e-939c-ec165690b798 for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.798 244018 DEBUG nova.compute.manager [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Received event network-vif-unplugged-3fb1749a-7239-482e-939c-ec165690b798 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.798 244018 DEBUG oslo_concurrency.lockutils [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "be11f836-327c-447c-865a-0088e0554c0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.799 244018 DEBUG oslo_concurrency.lockutils [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.799 244018 DEBUG oslo_concurrency.lockutils [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.799 244018 DEBUG nova.compute.manager [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] No waiting events found dispatching network-vif-unplugged-3fb1749a-7239-482e-939c-ec165690b798 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.800 244018 DEBUG nova.compute.manager [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Received event network-vif-unplugged-3fb1749a-7239-482e-939c-ec165690b798 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.801 244018 DEBUG nova.compute.manager [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Received event network-vif-plugged-3fb1749a-7239-482e-939c-ec165690b798 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.801 244018 DEBUG oslo_concurrency.lockutils [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "be11f836-327c-447c-865a-0088e0554c0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.802 244018 DEBUG oslo_concurrency.lockutils [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.802 244018 DEBUG oslo_concurrency.lockutils [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.803 244018 DEBUG nova.compute.manager [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] No waiting events found dispatching network-vif-plugged-3fb1749a-7239-482e-939c-ec165690b798 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.803 244018 WARNING nova.compute.manager [req-38c92f77-034a-42e9-84ae-1a1576d28249 req-ee342f92-cc50-43fa-9555-792b416de966 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Received unexpected event network-vif-plugged-3fb1749a-7239-482e-939c-ec165690b798 for instance with vm_state active and task_state deleting.#033[00m
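
The "No waiting events found" / "Received unexpected event" pair above comes from Nova's instance-event table: Neutron reports vif plug/unplug through the external-events API, and the compute manager pops a matching waiter if one was registered. With the instance already in task_state deleting nothing is waiting, so the plugged event is merely logged. A toy version of that pop-or-warn pattern, assuming a plain dict of threading.Event waiters (Nova's real structure is per-instance and lock-protected, as the -events lock lines above show):

    import threading

    waiters = {}   # event name -> threading.Event registered by a waiter

    def pop_instance_event(name):
        ev = waiters.pop(name, None)
        if ev is None:
            print('No waiting events found dispatching', name)
            return
        ev.set()   # wakes whoever called ev.wait() on this event

    pop_instance_event(
        'network-vif-plugged-3fb1749a-7239-482e-939c-ec165690b798')
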
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.900 244018 DEBUG oslo_concurrency.processutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxft2u2uc" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
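
The config drive is just an ISO 9660 volume labelled "config-2" built from a staging directory of metadata files. The same invocation, reduced to a subprocess call with the flags taken from the log (the staging path /tmp/tmpxft2u2uc is a throwaway temp dir):

    import subprocess

    subprocess.check_call([
        '/usr/bin/mkisofs',
        '-o', 'disk.config',          # output image
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
        '-quiet', '-J', '-r',
        '-V', 'config-2',             # volume label cloud-init looks for
        '/tmp/tmpxft2u2uc',           # directory tree to pack
    ])
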
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.919 244018 DEBUG nova.storage.rbd_utils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.921 244018 DEBUG oslo_concurrency.processutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/disk.config 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
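
Because the instance's disks live in Ceph, the freshly built ISO is pushed into the vms pool as <uuid>_disk.config; the repeated "rbd image ... does not exist" probes above guard against re-importing an existing image. The CLI form, again via subprocess, using exactly the arguments from the log:

    import subprocess

    subprocess.check_call([
        'rbd', 'import',
        '--pool', 'vms',
        '/var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/disk.config',
        '2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk.config',
        '--image-format=2',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
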
Feb 25 07:36:09 np0005629333 nova_compute[244014]: 2026-02-25 12:36:09.950 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 200 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 07:36:10 np0005629333 podman[324163]: 2026-02-25 12:36:10.155354518 +0000 UTC m=+1.129867144 container cleanup 46c5d24ad592e89bdd0ec5d6f2895732dc46f0758087abab27fb3ed210180d54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:36:10 np0005629333 systemd[1]: libpod-conmon-46c5d24ad592e89bdd0ec5d6f2895732dc46f0758087abab27fb3ed210180d54.scope: Deactivated successfully.
Feb 25 07:36:10 np0005629333 podman[324279]: 2026-02-25 12:36:10.776862093 +0000 UTC m=+0.599623217 container remove 46c5d24ad592e89bdd0ec5d6f2895732dc46f0758087abab27fb3ed210180d54 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 07:36:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:10.782 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[12ba0a9c-a82c-41ed-9310-f19180aeace6]: (4, ('Wed Feb 25 12:36:09 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685 (46c5d24ad592e89bdd0ec5d6f2895732dc46f0758087abab27fb3ed210180d54)\n46c5d24ad592e89bdd0ec5d6f2895732dc46f0758087abab27fb3ed210180d54\nWed Feb 25 12:36:10 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685 (46c5d24ad592e89bdd0ec5d6f2895732dc46f0758087abab27fb3ed210180d54)\n46c5d24ad592e89bdd0ec5d6f2895732dc46f0758087abab27fb3ed210180d54\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:10.784 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b59f20b2-50b3-443e-834d-735214da1bef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:10.785 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaefc26c0-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:36:10 np0005629333 nova_compute[244014]: 2026-02-25 12:36:10.787 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:10 np0005629333 kernel: tapaefc26c0-50: left promiscuous mode
Feb 25 07:36:10 np0005629333 nova_compute[244014]: 2026-02-25 12:36:10.799 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:10.824 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b2e9002d-e37b-412a-9f9b-c4d84aaa2ff5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:10.843 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9da5c1b6-d401-4004-90cd-9a34daf9bd9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:10.844 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c26928-0615-4997-8549-c0a8d8d1e84b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:10.859 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f1938bd6-4d73-4b06-a3ab-418ffaecc5bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 493646, 'reachable_time': 20756, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324296, 'error': None, 'target': 'ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
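
The reply above is a netlink RTM_NEWLINK dump for the loopback device inside the ovnmeta namespace, fetched through the privsep daemon just before teardown. A hypothetical way to reproduce the same query with pyroute2, which neutron's privileged ip_lib builds on (sketch only, not the agent's exact code path):

    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685')
    try:
        for link in ns.get_links():
            # each message carries the attrs seen in the log:
            # IFLA_IFNAME, IFLA_STATS64, IFLA_AF_SPEC, ...
            print(link.get_attr('IFLA_IFNAME'), link['state'])
    finally:
        ns.close()
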
Feb 25 07:36:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:10.861 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:36:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:10.862 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b7d96f-33ab-4dfb-a293-04058118b746]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:10 np0005629333 systemd[1]: run-netns-ovnmeta\x2daefc26c0\x2d533a\x2d476c\x2dbe4d\x2ddc2b231c8685.mount: Deactivated successfully.
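
Namespace teardown completes here: the tap port is removed from OVS, the kernel drops promiscuous mode on tapaefc26c0-50, remove_netns deletes the ovnmeta namespace, and systemd reaps the matching run-netns mount unit. The deletion step itself reduces to one pyroute2 call (a sketch of the equivalent operation, not neutron's literal code):

    from pyroute2 import netns

    # equivalent of `ip netns delete ovnmeta-aefc26c0-...`; neutron runs
    # this under privsep because it requires CAP_SYS_ADMIN
    netns.remove('ovnmeta-aefc26c0-533a-476c-be4d-dc2b231c8685')
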
Feb 25 07:36:11 np0005629333 nova_compute[244014]: 2026-02-25 12:36:11.096 244018 DEBUG oslo_concurrency.processutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/disk.config 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:36:11 np0005629333 nova_compute[244014]: 2026-02-25 12:36:11.097 244018 INFO nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Deleting local config drive /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/disk.config because it was imported into RBD.#033[00m
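
The two lines above show the config drive being moved off local disk: nova runs rbd import against the vms pool and, once the image exists in RBD, deletes /var/lib/nova/instances/<uuid>/disk.config. A minimal sketch of that sequence, assuming the same CLI flags as the logged command (the helper name is hypothetical):

    import os
    import subprocess

    def import_config_drive(instance_uuid, base="/var/lib/nova/instances"):
        local = f"{base}/{instance_uuid}/disk.config"
        # mirror the logged command: import as a format-2 image named
        # <uuid>_disk.config in the vms pool, authenticating as client.openstack
        subprocess.run(
            ["rbd", "import", "--pool", "vms", local,
             f"{instance_uuid}_disk.config", "--image-format=2",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
            check=True,
        )
        os.unlink(local)  # the local copy is redundant once imported
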
Feb 25 07:36:11 np0005629333 systemd-machined[210048]: New machine qemu-114-instance-00000059.
Feb 25 07:36:11 np0005629333 systemd[1]: Started Virtual Machine qemu-114-instance-00000059.
Feb 25 07:36:11 np0005629333 nova_compute[244014]: 2026-02-25 12:36:11.694 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022971.6934757, 2184a715-0ac8-4fc2-aa99-fae3b8e32edf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:36:11 np0005629333 nova_compute[244014]: 2026-02-25 12:36:11.695 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:36:11 np0005629333 nova_compute[244014]: 2026-02-25 12:36:11.697 244018 DEBUG nova.compute.manager [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:36:11 np0005629333 nova_compute[244014]: 2026-02-25 12:36:11.697 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:36:11 np0005629333 nova_compute[244014]: 2026-02-25 12:36:11.703 244018 INFO nova.virt.libvirt.driver [-] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Instance spawned successfully.#033[00m
Feb 25 07:36:11 np0005629333 nova_compute[244014]: 2026-02-25 12:36:11.703 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:36:11 np0005629333 nova_compute[244014]: 2026-02-25 12:36:11.733 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:36:11 np0005629333 nova_compute[244014]: 2026-02-25 12:36:11.738 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:36:11 np0005629333 nova_compute[244014]: 2026-02-25 12:36:11.740 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:11 np0005629333 nova_compute[244014]: 2026-02-25 12:36:11.740 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:11 np0005629333 nova_compute[244014]: 2026-02-25 12:36:11.741 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:11 np0005629333 nova_compute[244014]: 2026-02-25 12:36:11.741 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:11 np0005629333 nova_compute[244014]: 2026-02-25 12:36:11.741 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:11 np0005629333 nova_compute[244014]: 2026-02-25 12:36:11.741 244018 DEBUG nova.virt.libvirt.driver [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:11 np0005629333 nova_compute[244014]: 2026-02-25 12:36:11.786 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:36:11 np0005629333 nova_compute[244014]: 2026-02-25 12:36:11.787 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022971.6952932, 2184a715-0ac8-4fc2-aa99-fae3b8e32edf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:36:11 np0005629333 nova_compute[244014]: 2026-02-25 12:36:11.787 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] VM Started (Lifecycle Event)#033[00m
Feb 25 07:36:11 np0005629333 nova_compute[244014]: 2026-02-25 12:36:11.820 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:36:11 np0005629333 nova_compute[244014]: 2026-02-25 12:36:11.823 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
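
Both lifecycle events ("Resumed", then "Started") trigger the same power-state sync: the DB still records power_state 0 (NOSTATE) while the hypervisor reports 1 (RUNNING), but because task_state is still 'spawning' the handler defers, producing the "pending task ... Skip." lines that follow. A sketch of that decision, assuming nova's power_state constants (NOSTATE == 0, RUNNING == 1):

    NOSTATE, RUNNING = 0, 1  # nova.compute.power_state values

    def sync_power_state(db_state, vm_state, task_state):
        if task_state is not None:            # e.g. 'spawning'
            return "skip: instance has a pending task"
        if db_state != vm_state:
            return "write hypervisor state back to the DB"
        return "in sync"

    print(sync_power_state(NOSTATE, RUNNING, "spawning"))
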
Feb 25 07:36:11 np0005629333 nova_compute[244014]: 2026-02-25 12:36:11.834 244018 INFO nova.compute.manager [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Took 4.99 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:36:11 np0005629333 nova_compute[244014]: 2026-02-25 12:36:11.835 244018 DEBUG nova.compute.manager [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:36:11 np0005629333 nova_compute[244014]: 2026-02-25 12:36:11.861 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:36:11 np0005629333 nova_compute[244014]: 2026-02-25 12:36:11.908 244018 INFO nova.compute.manager [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Took 6.48 seconds to build instance.#033[00m
Feb 25 07:36:11 np0005629333 nova_compute[244014]: 2026-02-25 12:36:11.928 244018 DEBUG oslo_concurrency.lockutils [None req-f57742e5-a7ae-4db8-9111-9f61fe0efccb f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "2184a715-0ac8-4fc2-aa99-fae3b8e32edf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 305 active+clean; 211 MiB data, 763 MiB used, 59 GiB / 60 GiB avail; 445 KiB/s rd, 3.1 MiB/s wr, 76 op/s
Feb 25 07:36:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:36:12 np0005629333 nova_compute[244014]: 2026-02-25 12:36:12.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:36:12 np0005629333 nova_compute[244014]: 2026-02-25 12:36:12.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 07:36:12 np0005629333 nova_compute[244014]: 2026-02-25 12:36:12.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 07:36:12 np0005629333 nova_compute[244014]: 2026-02-25 12:36:12.902 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: be11f836-327c-447c-865a-0088e0554c0d] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Feb 25 07:36:12 np0005629333 podman[324358]: 2026-02-25 12:36:12.91175474 +0000 UTC m=+0.054516572 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
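
The health_status=healthy events come from podman's healthcheck timer, which executes the script mounted from /var/lib/openstack/healthchecks/... as /openstack/healthcheck inside the container. The same check can be driven by hand; a sketch using the podman CLI, where exit code 0 means healthy:

    import subprocess

    r = subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"],
                       capture_output=True, text=True)
    print("healthy" if r.returncode == 0 else "unhealthy")
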
Feb 25 07:36:13 np0005629333 nova_compute[244014]: 2026-02-25 12:36:13.272 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-2184a715-0ac8-4fc2-aa99-fae3b8e32edf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:36:13 np0005629333 nova_compute[244014]: 2026-02-25 12:36:13.272 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-2184a715-0ac8-4fc2-aa99-fae3b8e32edf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:36:13 np0005629333 nova_compute[244014]: 2026-02-25 12:36:13.273 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 25 07:36:13 np0005629333 nova_compute[244014]: 2026-02-25 12:36:13.273 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2184a715-0ac8-4fc2-aa99-fae3b8e32edf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:36:13 np0005629333 nova_compute[244014]: 2026-02-25 12:36:13.786 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:36:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 200 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 1003 KiB/s rd, 3.3 MiB/s wr, 116 op/s
Feb 25 07:36:14 np0005629333 nova_compute[244014]: 2026-02-25 12:36:14.110 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:36:14 np0005629333 nova_compute[244014]: 2026-02-25 12:36:14.126 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-2184a715-0ac8-4fc2-aa99-fae3b8e32edf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:36:14 np0005629333 nova_compute[244014]: 2026-02-25 12:36:14.126 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
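
_heal_instance_info_cache is one of several oslo.service periodic tasks visible in this window; each pass picks instances on the host and force-refreshes their network info cache (here ending with an empty network_info for the freshly built instance). A minimal sketch of how such a task is declared; the spacing value is illustrative, not nova's configured interval:

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            # rebuild the list of instances to heal, then refresh one
            # instance's info_cache per pass (as the log shows)
            pass
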
Feb 25 07:36:14 np0005629333 nova_compute[244014]: 2026-02-25 12:36:14.165 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:14 np0005629333 podman[324377]: 2026-02-25 12:36:14.73146964 +0000 UTC m=+0.075402402 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:36:14 np0005629333 nova_compute[244014]: 2026-02-25 12:36:14.793 244018 INFO nova.compute.manager [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Rebuilding instance#033[00m
Feb 25 07:36:14 np0005629333 nova_compute[244014]: 2026-02-25 12:36:14.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:36:14 np0005629333 nova_compute[244014]: 2026-02-25 12:36:14.925 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:15 np0005629333 nova_compute[244014]: 2026-02-25 12:36:15.245 244018 DEBUG nova.objects.instance [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 2184a715-0ac8-4fc2-aa99-fae3b8e32edf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:36:15 np0005629333 nova_compute[244014]: 2026-02-25 12:36:15.273 244018 DEBUG nova.compute.manager [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:36:15 np0005629333 nova_compute[244014]: 2026-02-25 12:36:15.340 244018 DEBUG nova.objects.instance [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lazy-loading 'pci_requests' on Instance uuid 2184a715-0ac8-4fc2-aa99-fae3b8e32edf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:36:15 np0005629333 nova_compute[244014]: 2026-02-25 12:36:15.361 244018 DEBUG nova.objects.instance [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2184a715-0ac8-4fc2-aa99-fae3b8e32edf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:36:15 np0005629333 nova_compute[244014]: 2026-02-25 12:36:15.415 244018 DEBUG nova.objects.instance [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lazy-loading 'resources' on Instance uuid 2184a715-0ac8-4fc2-aa99-fae3b8e32edf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:36:15 np0005629333 nova_compute[244014]: 2026-02-25 12:36:15.441 244018 DEBUG nova.objects.instance [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lazy-loading 'migration_context' on Instance uuid 2184a715-0ac8-4fc2-aa99-fae3b8e32edf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
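
The run of "Lazy-loading '<field>'" messages is attribute-on-demand loading: the rebuild path touches fields (trusted_certs, pci_requests, pci_devices, resources, migration_context) that were not fetched with the instance, so each first access triggers obj_load_attr. A generic sketch of the pattern, not nova's actual object code:

    class LazyInstance:
        def __init__(self, uuid, loader):
            self.uuid = uuid
            self._loader = loader          # fetches one field from the DB

        def __getattr__(self, name):       # only runs when `name` is absent
            value = self._loader(self.uuid, name)
            setattr(self, name, value)     # cache: the next access is plain
            return value
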
Feb 25 07:36:15 np0005629333 nova_compute[244014]: 2026-02-25 12:36:15.458 244018 DEBUG nova.objects.instance [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 25 07:36:15 np0005629333 nova_compute[244014]: 2026-02-25 12:36:15.462 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 25 07:36:15 np0005629333 nova_compute[244014]: 2026-02-25 12:36:15.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:36:15 np0005629333 nova_compute[244014]: 2026-02-25 12:36:15.945 244018 INFO nova.virt.libvirt.driver [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Deleting instance files /var/lib/nova/instances/be11f836-327c-447c-865a-0088e0554c0d_del#033[00m
Feb 25 07:36:15 np0005629333 nova_compute[244014]: 2026-02-25 12:36:15.947 244018 INFO nova.virt.libvirt.driver [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Deletion of /var/lib/nova/instances/be11f836-327c-447c-865a-0088e0554c0d_del complete#033[00m
Feb 25 07:36:16 np0005629333 nova_compute[244014]: 2026-02-25 12:36:16.008 244018 INFO nova.compute.manager [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Took 7.30 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:36:16 np0005629333 nova_compute[244014]: 2026-02-25 12:36:16.009 244018 DEBUG oslo.service.loopingcall [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:36:16 np0005629333 nova_compute[244014]: 2026-02-25 12:36:16.010 244018 DEBUG nova.compute.manager [-] [instance: be11f836-327c-447c-865a-0088e0554c0d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:36:16 np0005629333 nova_compute[244014]: 2026-02-25 12:36:16.010 244018 DEBUG nova.network.neutron [-] [instance: be11f836-327c-447c-865a-0088e0554c0d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:36:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1634: 305 pgs: 305 active+clean; 200 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 997 KiB/s rd, 3.0 MiB/s wr, 104 op/s
Feb 25 07:36:16 np0005629333 nova_compute[244014]: 2026-02-25 12:36:16.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:36:16 np0005629333 nova_compute[244014]: 2026-02-25 12:36:16.960 244018 DEBUG nova.network.neutron [-] [instance: be11f836-327c-447c-865a-0088e0554c0d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:36:16 np0005629333 nova_compute[244014]: 2026-02-25 12:36:16.977 244018 INFO nova.compute.manager [-] [instance: be11f836-327c-447c-865a-0088e0554c0d] Took 0.97 seconds to deallocate network for instance.#033[00m
Feb 25 07:36:17 np0005629333 nova_compute[244014]: 2026-02-25 12:36:17.024 244018 DEBUG oslo_concurrency.lockutils [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:17 np0005629333 nova_compute[244014]: 2026-02-25 12:36:17.025 244018 DEBUG oslo_concurrency.lockutils [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:17 np0005629333 nova_compute[244014]: 2026-02-25 12:36:17.068 244018 DEBUG nova.compute.manager [req-685cd545-e79d-4889-bedc-d45662c2e5a5 req-afac249e-cefd-4dc4-a54e-1fa467ac4822 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: be11f836-327c-447c-865a-0088e0554c0d] Received event network-vif-deleted-3fb1749a-7239-482e-939c-ec165690b798 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:36:17 np0005629333 nova_compute[244014]: 2026-02-25 12:36:17.087 244018 DEBUG oslo_concurrency.processutils [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:36:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1963544512' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:36:17 np0005629333 nova_compute[244014]: 2026-02-25 12:36:17.667 244018 DEBUG oslo_concurrency.processutils [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
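
The disk accounting behind update_usage shells out to the ceph CLI, and each call shows up in the mon audit log above as client.openstack dispatching a df command. A sketch of the probe and of reading the cluster totals back, assuming the JSON field names of recent Ceph releases:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])
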
Feb 25 07:36:17 np0005629333 nova_compute[244014]: 2026-02-25 12:36:17.674 244018 DEBUG nova.compute.provider_tree [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:36:17 np0005629333 nova_compute[244014]: 2026-02-25 12:36:17.693 244018 DEBUG nova.scheduler.client.report [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:36:17 np0005629333 nova_compute[244014]: 2026-02-25 12:36:17.718 244018 DEBUG oslo_concurrency.lockutils [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:17 np0005629333 nova_compute[244014]: 2026-02-25 12:36:17.747 244018 INFO nova.scheduler.client.report [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Deleted allocations for instance be11f836-327c-447c-865a-0088e0554c0d#033[00m
Feb 25 07:36:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:36:17 np0005629333 nova_compute[244014]: 2026-02-25 12:36:17.812 244018 DEBUG oslo_concurrency.lockutils [None req-758359ee-3757-4054-ad23-58d856a7c87b 546ffcfc9e27442d89b984a2d0e30dda 6c30db4c9b5f480dbb12cc23f1604de1 - - default default] Lock "be11f836-327c-447c-865a-0088e0554c0d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:17 np0005629333 nova_compute[244014]: 2026-02-25 12:36:17.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:36:17 np0005629333 nova_compute[244014]: 2026-02-25 12:36:17.903 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:17 np0005629333 nova_compute[244014]: 2026-02-25 12:36:17.903 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:17 np0005629333 nova_compute[244014]: 2026-02-25 12:36:17.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:17 np0005629333 nova_compute[244014]: 2026-02-25 12:36:17.904 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:36:17 np0005629333 nova_compute[244014]: 2026-02-25 12:36:17.905 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 305 active+clean; 200 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.0 MiB/s wr, 180 op/s
Feb 25 07:36:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:36:18 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3402022151' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:36:18 np0005629333 nova_compute[244014]: 2026-02-25 12:36:18.496 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:36:18 np0005629333 nova_compute[244014]: 2026-02-25 12:36:18.757 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:36:18 np0005629333 nova_compute[244014]: 2026-02-25 12:36:18.758 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000059 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:36:18 np0005629333 nova_compute[244014]: 2026-02-25 12:36:18.889 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:36:18 np0005629333 nova_compute[244014]: 2026-02-25 12:36:18.890 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3666MB free_disk=59.96658870950341GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 07:36:18 np0005629333 nova_compute[244014]: 2026-02-25 12:36:18.890 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:18 np0005629333 nova_compute[244014]: 2026-02-25 12:36:18.891 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:18 np0005629333 nova_compute[244014]: 2026-02-25 12:36:18.964 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 2184a715-0ac8-4fc2-aa99-fae3b8e32edf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:36:18 np0005629333 nova_compute[244014]: 2026-02-25 12:36:18.964 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 07:36:18 np0005629333 nova_compute[244014]: 2026-02-25 12:36:18.964 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 07:36:19 np0005629333 nova_compute[244014]: 2026-02-25 12:36:19.006 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:19 np0005629333 nova_compute[244014]: 2026-02-25 12:36:19.167 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:36:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4031814701' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:36:19 np0005629333 nova_compute[244014]: 2026-02-25 12:36:19.667 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.661s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:36:19 np0005629333 nova_compute[244014]: 2026-02-25 12:36:19.673 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:36:19 np0005629333 nova_compute[244014]: 2026-02-25 12:36:19.699 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
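
The inventory dict repeats unchanged on every audit, and the schedulable limits follow directly from it: placement treats (total - reserved) * allocation_ratio as the capacity for each resource class. Worked out for the values logged here:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        limit = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, limit)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
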
Feb 25 07:36:19 np0005629333 nova_compute[244014]: 2026-02-25 12:36:19.727 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 07:36:19 np0005629333 nova_compute[244014]: 2026-02-25 12:36:19.728 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:19 np0005629333 nova_compute[244014]: 2026-02-25 12:36:19.927 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1636: 305 pgs: 305 active+clean; 200 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 166 op/s
Feb 25 07:36:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Feb 25 07:36:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Feb 25 07:36:20 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Feb 25 07:36:20 np0005629333 nova_compute[244014]: 2026-02-25 12:36:20.729 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:36:20 np0005629333 nova_compute[244014]: 2026-02-25 12:36:20.729 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:36:21 np0005629333 nova_compute[244014]: 2026-02-25 12:36:21.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:36:22 np0005629333 nova_compute[244014]: 2026-02-25 12:36:22.026 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 305 active+clean; 200 MiB data, 755 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 536 KiB/s wr, 167 op/s
Feb 25 07:36:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:36:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 200 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.3 KiB/s wr, 130 op/s
Feb 25 07:36:24 np0005629333 nova_compute[244014]: 2026-02-25 12:36:24.138 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022969.1368487, be11f836-327c-447c-865a-0088e0554c0d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:36:24 np0005629333 nova_compute[244014]: 2026-02-25 12:36:24.138 244018 INFO nova.compute.manager [-] [instance: be11f836-327c-447c-865a-0088e0554c0d] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:36:24 np0005629333 nova_compute[244014]: 2026-02-25 12:36:24.162 244018 DEBUG nova.compute.manager [None req-81a3e6cc-7536-455c-b595-526bc0c04319 - - - - - -] [instance: be11f836-327c-447c-865a-0088e0554c0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:36:24 np0005629333 nova_compute[244014]: 2026-02-25 12:36:24.169 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:24 np0005629333 nova_compute[244014]: 2026-02-25 12:36:24.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:36:24 np0005629333 nova_compute[244014]: 2026-02-25 12:36:24.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
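
The skip is driven by a single config option: reclaim_instance_interval defaults to 0, so soft-delete reclaim never runs and deleted instances are destroyed immediately (as happened with be11f836-... above). A sketch of the gate with oslo.config:

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])

    if CONF.reclaim_instance_interval <= 0:
        print("CONF.reclaim_instance_interval <= 0, skipping...")
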
Feb 25 07:36:24 np0005629333 nova_compute[244014]: 2026-02-25 12:36:24.929 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:25 np0005629333 nova_compute[244014]: 2026-02-25 12:36:25.510 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Feb 25 07:36:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 305 active+clean; 200 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.3 KiB/s wr, 130 op/s
Feb 25 07:36:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Feb 25 07:36:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Feb 25 07:36:26 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Feb 25 07:36:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:36:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 305 active+clean; 345 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 534 KiB/s rd, 17 MiB/s wr, 178 op/s
Feb 25 07:36:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Feb 25 07:36:29 np0005629333 nova_compute[244014]: 2026-02-25 12:36:29.172 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Feb 25 07:36:29 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Feb 25 07:36:29 np0005629333 nova_compute[244014]: 2026-02-25 12:36:29.931 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 345 MiB data, 910 MiB used, 59 GiB / 60 GiB avail; 514 KiB/s rd, 17 MiB/s wr, 147 op/s
Feb 25 07:36:30 np0005629333 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d00000059.scope: Deactivated successfully.
Feb 25 07:36:30 np0005629333 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d00000059.scope: Consumed 12.236s CPU time.
Feb 25 07:36:30 np0005629333 systemd-machined[210048]: Machine qemu-114-instance-00000059 terminated.
Feb 25 07:36:30 np0005629333 nova_compute[244014]: 2026-02-25 12:36:30.539 244018 INFO nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Instance shutdown successfully after 15 seconds.#033[00m
Feb 25 07:36:30 np0005629333 nova_compute[244014]: 2026-02-25 12:36:30.547 244018 INFO nova.virt.libvirt.driver [-] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Instance destroyed successfully.#033[00m
Feb 25 07:36:30 np0005629333 nova_compute[244014]: 2026-02-25 12:36:30.554 244018 INFO nova.virt.libvirt.driver [-] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Instance destroyed successfully.#033[00m
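The lines above complete nova's graceful-stop path: _clean_shutdown (driver.py:4101 earlier) sends an ACPI shutdown request, polls the guest's power state, and resends the request on an interval ("resending shutdown" at 12:36:25, then "shutdown successfully after 15 seconds" at 12:36:30) before the caller falls back to a hard destroy. A minimal sketch of that poll-and-resend loop; the guest handle, its methods, and the SHUTDOWN_OFF constant are hypothetical stand-ins, not nova's real API:

    import time

    SHUTDOWN_OFF = 4  # hypothetical "powered off" constant, not nova's enum

    def clean_shutdown(guest, timeout=60, retry_interval=10):
        # guest is a stand-in exposing shutdown() and get_power_state();
        # the real logic lives in nova/virt/libvirt/driver.py.
        guest.shutdown()                       # initial ACPI request
        for elapsed in range(timeout):
            if guest.get_power_state() == SHUTDOWN_OFF:
                return True                    # "Instance shutdown successfully"
            if elapsed and elapsed % retry_interval == 0:
                guest.shutdown()               # "resending shutdown"
            time.sleep(1)
        return False                           # caller proceeds to destroy()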
Feb 25 07:36:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:36:30
Feb 25 07:36:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:36:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:36:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'backups', 'images', 'volumes', 'default.rgw.log', '.rgw.root', 'default.rgw.control', '.mgr']
Feb 25 07:36:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
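The ceph-mgr balancer block above runs in upmap mode with a 5% max-misplaced ratio across the listed pools; "prepared 0/10 upmap changes" means the PG distribution already met the target, so no pg-upmap entries were issued. A small sketch for checking this from a client node, reusing the --id/--conf flags this host already passes to the ceph CLI (it assumes `ceph balancer status` emits JSON when asked, as mgr commands generally do):

    import json
    import subprocess

    def balancer_status(conf="/etc/ceph/ceph.conf", client="openstack"):
        # Same authentication flags the compute node uses for 'ceph mon dump'.
        out = subprocess.check_output(
            ["ceph", "balancer", "status", "--format=json",
             "--id", client, "--conf", conf])
        return json.loads(out)   # e.g. {'active': True, 'mode': 'upmap', ...}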
Feb 25 07:36:31 np0005629333 nova_compute[244014]: 2026-02-25 12:36:31.310 244018 INFO nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Deleting instance files /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf_del#033[00m
Feb 25 07:36:31 np0005629333 nova_compute[244014]: 2026-02-25 12:36:31.311 244018 INFO nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Deletion of /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf_del complete#033[00m
Feb 25 07:36:31 np0005629333 nova_compute[244014]: 2026-02-25 12:36:31.532 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:36:31 np0005629333 nova_compute[244014]: 2026-02-25 12:36:31.533 244018 INFO nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Creating image(s)#033[00m
Feb 25 07:36:31 np0005629333 nova_compute[244014]: 2026-02-25 12:36:31.551 244018 DEBUG nova.storage.rbd_utils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:36:31 np0005629333 nova_compute[244014]: 2026-02-25 12:36:31.571 244018 DEBUG nova.storage.rbd_utils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:36:31 np0005629333 nova_compute[244014]: 2026-02-25 12:36:31.592 244018 DEBUG nova.storage.rbd_utils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:36:31 np0005629333 nova_compute[244014]: 2026-02-25 12:36:31.595 244018 DEBUG oslo_concurrency.processutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:36:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:36:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:36:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:36:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:36:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:36:31 np0005629333 nova_compute[244014]: 2026-02-25 12:36:31.669 244018 DEBUG oslo_concurrency.processutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
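The qemu-img probe above is wrapped in oslo_concurrency.prlimit, capping the child at a 1 GiB address space (--as=1073741824) and 30 s of CPU (--cpu=30) so a corrupt or adversarial image cannot wedge the compute service. A sketch of the same guarded call through oslo.concurrency's ProcessLimits API; treat the exact signature as an assumption to verify against your oslo.concurrency version:

    from oslo_concurrency import processutils

    def probe_image(path):
        # Mirrors the logged command: a prlimit-bounded 'qemu-img info'.
        limits = processutils.ProcessLimits(
            address_space=1024 ** 3,   # --as=1073741824
            cpu_time=30)               # --cpu=30
        out, _err = processutils.execute(
            "env", "LC_ALL=C", "LANG=C",
            "qemu-img", "info", path, "--force-share", "--output=json",
            prlimit=limits)
        return out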
Feb 25 07:36:31 np0005629333 nova_compute[244014]: 2026-02-25 12:36:31.670 244018 DEBUG oslo_concurrency.lockutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Acquiring lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:31 np0005629333 nova_compute[244014]: 2026-02-25 12:36:31.671 244018 DEBUG oslo_concurrency.lockutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:31 np0005629333 nova_compute[244014]: 2026-02-25 12:36:31.671 244018 DEBUG oslo_concurrency.lockutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
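The acquire/release trio above is nova's image-cache serialization: one named lock per base image (the SHA-1-style key d54266c9...), so concurrent spawns of the same image fetch it once and later arrivals find it already cached, which is why the lock here is held for 0.000s. The pattern, sketched with oslo.concurrency's lock context manager:

    from oslo_concurrency import lockutils

    def cache_base_image(image_key, fetch_func, target):
        # One lock per base-image key; whoever wins performs the fetch,
        # later arrivals see the cached file and return immediately.
        with lockutils.lock(image_key):
            fetch_func(target)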
Feb 25 07:36:31 np0005629333 nova_compute[244014]: 2026-02-25 12:36:31.691 244018 DEBUG nova.storage.rbd_utils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:36:31 np0005629333 nova_compute[244014]: 2026-02-25 12:36:31.694 244018 DEBUG oslo_concurrency.processutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:31.785 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:36:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:31.786 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
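The metadata agent above deliberately defers its chassis-table acknowledgement ("Delaying updating chassis table for 9 seconds") so every chassis does not write back to the OVN southbound DB at the same instant after an nb_cfg bump. A sketch of that jittered write-back; the 0-10 s window is an assumption for illustration, not neutron's configured value:

    import random
    import threading

    def delayed_chassis_update(update_fn, max_delay=10):
        # Spread acknowledgements across [0, max_delay] seconds to avoid
        # a thundering herd against the southbound database.
        delay = random.randint(0, max_delay)
        threading.Timer(delay, update_fn).start()
        return delay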
Feb 25 07:36:31 np0005629333 nova_compute[244014]: 2026-02-25 12:36:31.786 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Feb 25 07:36:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Feb 25 07:36:31 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Feb 25 07:36:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:36:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:36:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:36:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:36:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:36:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:36:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:36:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:36:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:36:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:36:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 190 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 696 KiB/s rd, 23 MiB/s wr, 245 op/s
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.099 244018 DEBUG oslo_concurrency.processutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.162 244018 DEBUG nova.storage.rbd_utils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] resizing rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.256 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
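The import/resize pair above is how this Ceph-backed deployment materializes a root disk: the cached base file is pushed into the vms pool as <uuid>_disk, then grown to the flavor's 1 GiB root_gb. A sketch of the same two steps, using the rbd import command exactly as logged plus the librbd Python binding that nova.storage.rbd_utils wraps:

    import subprocess

    import rados
    import rbd

    BASE = "/var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538"
    DISK = "2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk"

    # Step 1: the import command as logged above.
    subprocess.check_call(
        ["rbd", "import", "--pool", "vms", BASE, DISK, "--image-format=2",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])

    # Step 2: resize to 1073741824 bytes, the librbd equivalent of the
    # rbd_utils.resize call in the 12:36:32.162 line.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            with rbd.Image(ioctx, DISK) as image:
                image.resize(1073741824)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()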
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.257 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Ensure instance console log exists: /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.257 244018 DEBUG oslo_concurrency.lockutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.257 244018 DEBUG oslo_concurrency.lockutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.258 244018 DEBUG oslo_concurrency.lockutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.259 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.263 244018 WARNING nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.279 244018 DEBUG nova.virt.libvirt.host [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.280 244018 DEBUG nova.virt.libvirt.host [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.283 244018 DEBUG nova.virt.libvirt.host [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.284 244018 DEBUG nova.virt.libvirt.host [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.284 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.285 244018 DEBUG nova.virt.hardware [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.285 244018 DEBUG nova.virt.hardware [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.286 244018 DEBUG nova.virt.hardware [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.286 244018 DEBUG nova.virt.hardware [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.286 244018 DEBUG nova.virt.hardware [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.286 244018 DEBUG nova.virt.hardware [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.287 244018 DEBUG nova.virt.hardware [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.287 244018 DEBUG nova.virt.hardware [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.287 244018 DEBUG nova.virt.hardware [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.287 244018 DEBUG nova.virt.hardware [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.288 244018 DEBUG nova.virt.hardware [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
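The hardware-topology walk above starts from empty flavor/image constraints (limits and preferences all 0:0:0, caps of 65536 per dimension) and, for a single vCPU, can only produce sockets=1, cores=1, threads=1. A simplified sketch of the enumeration nova.virt.hardware performs; the real code also orders results by preference, while this just yields valid factorizations:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield (sockets, cores, threads) triples whose product equals the
        # vCPU count, within the per-dimension limits.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)]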
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.288 244018 DEBUG nova.objects.instance [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2184a715-0ac8-4fc2-aa99-fae3b8e32edf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.310 244018 DEBUG oslo_concurrency.processutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:36:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:36:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/309203223' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.849 244018 DEBUG oslo_concurrency.processutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.870 244018 DEBUG nova.storage.rbd_utils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:36:32 np0005629333 nova_compute[244014]: 2026-02-25 12:36:32.874 244018 DEBUG oslo_concurrency.processutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:36:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/817749783' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:36:33 np0005629333 nova_compute[244014]: 2026-02-25 12:36:33.398 244018 DEBUG oslo_concurrency.processutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
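Both `ceph mon dump` calls above exist to discover the monitor endpoints that become the <host> elements in the disk XML below (192.168.122.100:6789). A sketch of that discovery step; the JSON field names ('mons', 'public_addr' with a trailing /nonce) are assumed from standard mon-dump output rather than taken from this log:

    import json
    import subprocess

    def monitor_addresses():
        out = subprocess.check_output(
            ["ceph", "mon", "dump", "--format=json",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
        dump = json.loads(out)
        # public_addr looks like '192.168.122.100:6789/0'; strip the nonce.
        return [m["public_addr"].split("/")[0] for m in dump.get("mons", [])]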
Feb 25 07:36:33 np0005629333 nova_compute[244014]: 2026-02-25 12:36:33.401 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:36:33 np0005629333 nova_compute[244014]:  <uuid>2184a715-0ac8-4fc2-aa99-fae3b8e32edf</uuid>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:  <name>instance-00000059</name>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerShowV254Test-server-567186647</nova:name>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:36:32</nova:creationTime>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:36:33 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:        <nova:user uuid="f14f324492304e519e215bb56099abdd">tempest-ServerShowV254Test-1036115410-project-member</nova:user>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:        <nova:project uuid="70a704d198834ace8c228d59139dd7b4">tempest-ServerShowV254Test-1036115410</nova:project>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="f0ef5a9a-23b8-4883-8e47-feb7403a11d8"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      <nova:ports/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      <entry name="serial">2184a715-0ac8-4fc2-aa99-fae3b8e32edf</entry>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      <entry name="uuid">2184a715-0ac8-4fc2-aa99-fae3b8e32edf</entry>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk">
Feb 25 07:36:33 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:36:33 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk.config">
Feb 25 07:36:33 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:36:33 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/console.log" append="off"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:36:33 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:36:33 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:36:33 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:36:33 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
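Once _get_guest_xml returns, the domain XML above is handed to libvirt, which is what produces systemd's "New machine qemu-115-instance-00000059" line shortly after. A minimal sketch of that hand-off with the libvirt Python binding; nova actually goes through its own Guest/Host wrappers, so treat this as the shape of the call, not nova's code:

    import libvirt

    def launch(xml):
        conn = libvirt.open("qemu:///system")
        try:
            dom = conn.defineXML(xml)   # persist the domain definition
            dom.createWithFlags(0)      # boot it; systemd then starts the scope
            return dom.UUIDString()
        finally:
            conn.close()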
Feb 25 07:36:33 np0005629333 nova_compute[244014]: 2026-02-25 12:36:33.485 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:36:33 np0005629333 nova_compute[244014]: 2026-02-25 12:36:33.486 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:36:33 np0005629333 nova_compute[244014]: 2026-02-25 12:36:33.486 244018 INFO nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Using config drive#033[00m
Feb 25 07:36:33 np0005629333 nova_compute[244014]: 2026-02-25 12:36:33.512 244018 DEBUG nova.storage.rbd_utils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:36:33 np0005629333 nova_compute[244014]: 2026-02-25 12:36:33.541 244018 DEBUG nova.objects.instance [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 2184a715-0ac8-4fc2-aa99-fae3b8e32edf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:36:33 np0005629333 nova_compute[244014]: 2026-02-25 12:36:33.823 244018 INFO nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Creating config drive at /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/disk.config#033[00m
Feb 25 07:36:33 np0005629333 nova_compute[244014]: 2026-02-25 12:36:33.828 244018 DEBUG oslo_concurrency.processutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpjum7k_c5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:33 np0005629333 nova_compute[244014]: 2026-02-25 12:36:33.960 244018 DEBUG oslo_concurrency.processutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpjum7k_c5" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:36:33 np0005629333 nova_compute[244014]: 2026-02-25 12:36:33.982 244018 DEBUG nova.storage.rbd_utils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] rbd image 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:36:33 np0005629333 nova_compute[244014]: 2026-02-25 12:36:33.985 244018 DEBUG oslo_concurrency.processutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/disk.config 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 173 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 497 KiB/s rd, 9.6 MiB/s wr, 208 op/s
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.119 244018 DEBUG oslo_concurrency.processutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/disk.config 2184a715-0ac8-4fc2-aa99-fae3b8e32edf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.120 244018 INFO nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Deleting local config drive /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf/disk.config because it was imported into RBD.#033[00m
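The config-drive sequence above stages a small filesystem tree, packs it with mkisofs into a volume labelled config-2, imports the ISO into the vms pool as <uuid>_disk.config, and deletes the local copy. A condensed sketch of those steps; the minimal openstack/latest/meta_data.json layout is illustrative (nova writes a fuller tree), and the mkisofs flags are taken from the logged command:

    import json
    import pathlib
    import subprocess
    import tempfile

    def build_config_drive(instance_uuid, metadata):
        iso = pathlib.Path(
            f"/var/lib/nova/instances/{instance_uuid}/disk.config")
        with tempfile.TemporaryDirectory() as staging:
            latest = pathlib.Path(staging, "openstack", "latest")
            latest.mkdir(parents=True)
            (latest / "meta_data.json").write_text(json.dumps(metadata))
            subprocess.check_call(
                ["/usr/bin/mkisofs", "-o", str(iso), "-ldots",
                 "-allow-lowercase", "-allow-multidot", "-l", "-J", "-r",
                 "-V", "config-2", staging])
        # Matches the 'rbd import ... _disk.config' command logged above.
        subprocess.check_call(
            ["rbd", "import", "--pool", "vms", str(iso),
             f"{instance_uuid}_disk.config", "--image-format=2",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
        iso.unlink()   # "Deleting local config drive ... imported into RBD"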
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.173 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:34 np0005629333 systemd-machined[210048]: New machine qemu-115-instance-00000059.
Feb 25 07:36:34 np0005629333 systemd[1]: Started Virtual Machine qemu-115-instance-00000059.
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.517 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for 2184a715-0ac8-4fc2-aa99-fae3b8e32edf due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.518 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022994.5158381, 2184a715-0ac8-4fc2-aa99-fae3b8e32edf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.518 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.525 244018 DEBUG nova.compute.manager [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.525 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.530 244018 INFO nova.virt.libvirt.driver [-] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Instance spawned successfully.#033[00m
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.531 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.547 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.555 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.562 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.563 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.564 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.564 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.565 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.566 244018 DEBUG nova.virt.libvirt.driver [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.574 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.574 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772022994.5181427, 2184a715-0ac8-4fc2-aa99-fae3b8e32edf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.575 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] VM Started (Lifecycle Event)#033[00m
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.609 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.613 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.635 244018 DEBUG nova.compute.manager [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.637 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
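The Resumed/Started burst above shows the lifecycle/power-state reconciliation rule: while the instance still has a task in flight (rebuild_spawning here), lifecycle-driven sync is skipped so transient hypervisor events cannot fight the ongoing operation. The decision, reduced to a sketch:

    def sync_power_state(task_state, db_power_state, vm_power_state):
        # Mirrors the handle_lifecycle_event/sync_power_state lines above.
        if task_state is not None:
            return "skip"        # "instance has a pending task (...). Skip."
        if db_power_state != vm_power_state:
            return "update-db"   # adopt what the hypervisor reports
        return "in-sync"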
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.712 244018 DEBUG oslo_concurrency.lockutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.712 244018 DEBUG oslo_concurrency.lockutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.712 244018 DEBUG nova.objects.instance [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.783 244018 DEBUG oslo_concurrency.lockutils [None req-7b4c97a7-ebfa-4711-86da-5f2008278b47 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:34 np0005629333 nova_compute[244014]: 2026-02-25 12:36:34.935 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 173 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 1.1 MiB/s wr, 92 op/s
Feb 25 07:36:36 np0005629333 nova_compute[244014]: 2026-02-25 12:36:36.430 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "0061daee-43d7-458b-8645-0ad3f8fbb2af" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:36:36 np0005629333 nova_compute[244014]: 2026-02-25 12:36:36.431 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:36:36 np0005629333 nova_compute[244014]: 2026-02-25 12:36:36.467 244018 DEBUG nova.compute.manager [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:36:36 np0005629333 nova_compute[244014]: 2026-02-25 12:36:36.580 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:36:36 np0005629333 nova_compute[244014]: 2026-02-25 12:36:36.581 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:36:36 np0005629333 nova_compute[244014]: 2026-02-25 12:36:36.588 244018 DEBUG nova.virt.hardware [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:36:36 np0005629333 nova_compute[244014]: 2026-02-25 12:36:36.588 244018 INFO nova.compute.claims [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:36:36 np0005629333 nova_compute[244014]: 2026-02-25 12:36:36.725 244018 DEBUG oslo_concurrency.processutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:36:36 np0005629333 nova_compute[244014]: 2026-02-25 12:36:36.857 244018 DEBUG oslo_concurrency.lockutils [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Acquiring lock "2184a715-0ac8-4fc2-aa99-fae3b8e32edf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:36:36 np0005629333 nova_compute[244014]: 2026-02-25 12:36:36.858 244018 DEBUG oslo_concurrency.lockutils [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "2184a715-0ac8-4fc2-aa99-fae3b8e32edf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:36:36 np0005629333 nova_compute[244014]: 2026-02-25 12:36:36.858 244018 DEBUG oslo_concurrency.lockutils [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Acquiring lock "2184a715-0ac8-4fc2-aa99-fae3b8e32edf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:36:36 np0005629333 nova_compute[244014]: 2026-02-25 12:36:36.858 244018 DEBUG oslo_concurrency.lockutils [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "2184a715-0ac8-4fc2-aa99-fae3b8e32edf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:36:36 np0005629333 nova_compute[244014]: 2026-02-25 12:36:36.858 244018 DEBUG oslo_concurrency.lockutils [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "2184a715-0ac8-4fc2-aa99-fae3b8e32edf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:36:36 np0005629333 nova_compute[244014]: 2026-02-25 12:36:36.860 244018 INFO nova.compute.manager [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Terminating instance
Feb 25 07:36:36 np0005629333 nova_compute[244014]: 2026-02-25 12:36:36.860 244018 DEBUG oslo_concurrency.lockutils [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Acquiring lock "refresh_cache-2184a715-0ac8-4fc2-aa99-fae3b8e32edf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:36:36 np0005629333 nova_compute[244014]: 2026-02-25 12:36:36.861 244018 DEBUG oslo_concurrency.lockutils [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Acquired lock "refresh_cache-2184a715-0ac8-4fc2-aa99-fae3b8e32edf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:36:36 np0005629333 nova_compute[244014]: 2026-02-25 12:36:36.861 244018 DEBUG nova.network.neutron [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:36:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:36:37 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/391316532' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:36:37 np0005629333 nova_compute[244014]: 2026-02-25 12:36:37.266 244018 DEBUG oslo_concurrency.processutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:36:37 np0005629333 nova_compute[244014]: 2026-02-25 12:36:37.270 244018 DEBUG nova.compute.provider_tree [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:36:37 np0005629333 nova_compute[244014]: 2026-02-25 12:36:37.284 244018 DEBUG nova.scheduler.client.report [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:36:37 np0005629333 nova_compute[244014]: 2026-02-25 12:36:37.307 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:36:37 np0005629333 nova_compute[244014]: 2026-02-25 12:36:37.308 244018 DEBUG nova.compute.manager [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:36:37 np0005629333 nova_compute[244014]: 2026-02-25 12:36:37.368 244018 DEBUG nova.compute.manager [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:36:37 np0005629333 nova_compute[244014]: 2026-02-25 12:36:37.368 244018 DEBUG nova.network.neutron [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:36:37 np0005629333 nova_compute[244014]: 2026-02-25 12:36:37.388 244018 INFO nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:36:37 np0005629333 nova_compute[244014]: 2026-02-25 12:36:37.393 244018 DEBUG nova.network.neutron [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:36:37 np0005629333 nova_compute[244014]: 2026-02-25 12:36:37.410 244018 DEBUG nova.compute.manager [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:36:37 np0005629333 nova_compute[244014]: 2026-02-25 12:36:37.504 244018 DEBUG nova.compute.manager [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:36:37 np0005629333 nova_compute[244014]: 2026-02-25 12:36:37.505 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:36:37 np0005629333 nova_compute[244014]: 2026-02-25 12:36:37.505 244018 INFO nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Creating image(s)
Feb 25 07:36:37 np0005629333 nova_compute[244014]: 2026-02-25 12:36:37.529 244018 DEBUG nova.storage.rbd_utils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 0061daee-43d7-458b-8645-0ad3f8fbb2af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:36:37 np0005629333 nova_compute[244014]: 2026-02-25 12:36:37.559 244018 DEBUG nova.storage.rbd_utils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 0061daee-43d7-458b-8645-0ad3f8fbb2af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:36:37 np0005629333 nova_compute[244014]: 2026-02-25 12:36:37.594 244018 DEBUG nova.storage.rbd_utils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 0061daee-43d7-458b-8645-0ad3f8fbb2af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:36:37 np0005629333 nova_compute[244014]: 2026-02-25 12:36:37.598 244018 DEBUG oslo_concurrency.processutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:36:37 np0005629333 nova_compute[244014]: 2026-02-25 12:36:37.657 244018 DEBUG oslo_concurrency.processutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:36:37 np0005629333 nova_compute[244014]: 2026-02-25 12:36:37.658 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:36:37 np0005629333 nova_compute[244014]: 2026-02-25 12:36:37.660 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:36:37 np0005629333 nova_compute[244014]: 2026-02-25 12:36:37.660 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:36:37 np0005629333 nova_compute[244014]: 2026-02-25 12:36:37.692 244018 DEBUG nova.storage.rbd_utils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 0061daee-43d7-458b-8645-0ad3f8fbb2af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:36:37 np0005629333 nova_compute[244014]: 2026-02-25 12:36:37.696 244018 DEBUG oslo_concurrency.processutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 0061daee-43d7-458b-8645-0ad3f8fbb2af_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:36:37 np0005629333 nova_compute[244014]: 2026-02-25 12:36:37.723 244018 DEBUG nova.policy [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '00eb5c915b2d41f69600acd33967d0f5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '999f2a015b9c4bc98661fe6fe6db06a0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:36:37 np0005629333 nova_compute[244014]: 2026-02-25 12:36:37.758 244018 DEBUG nova.network.neutron [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:36:37 np0005629333 nova_compute[244014]: 2026-02-25 12:36:37.773 244018 DEBUG oslo_concurrency.lockutils [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Releasing lock "refresh_cache-2184a715-0ac8-4fc2-aa99-fae3b8e32edf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:36:37 np0005629333 nova_compute[244014]: 2026-02-25 12:36:37.773 244018 DEBUG nova.compute.manager [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 07:36:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:36:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Feb 25 07:36:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Feb 25 07:36:37 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Feb 25 07:36:37 np0005629333 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d00000059.scope: Deactivated successfully.
Feb 25 07:36:37 np0005629333 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d00000059.scope: Consumed 3.721s CPU time.
Feb 25 07:36:37 np0005629333 systemd-machined[210048]: Machine qemu-115-instance-00000059 terminated.
Feb 25 07:36:37 np0005629333 nova_compute[244014]: 2026-02-25 12:36:37.992 244018 INFO nova.virt.libvirt.driver [-] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Instance destroyed successfully.
Feb 25 07:36:37 np0005629333 nova_compute[244014]: 2026-02-25 12:36:37.992 244018 DEBUG nova.objects.instance [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lazy-loading 'resources' on Instance uuid 2184a715-0ac8-4fc2-aa99-fae3b8e32edf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.094387) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022998094431, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2086, "num_deletes": 254, "total_data_size": 3281800, "memory_usage": 3336680, "flush_reason": "Manual Compaction"}
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Feb 25 07:36:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 200 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.7 MiB/s wr, 263 op/s
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022998348259, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 3212104, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33275, "largest_seqno": 35360, "table_properties": {"data_size": 3202685, "index_size": 5849, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20107, "raw_average_key_size": 20, "raw_value_size": 3183522, "raw_average_value_size": 3258, "num_data_blocks": 257, "num_entries": 977, "num_filter_entries": 977, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772022802, "oldest_key_time": 1772022802, "file_creation_time": 1772022998, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 253989 microseconds, and 6872 cpu microseconds.
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.348365) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 3212104 bytes OK
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.348401) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.468331) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.468375) EVENT_LOG_v1 {"time_micros": 1772022998468361, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.468409) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 3272920, prev total WAL file size 3276081, number of live WAL files 2.
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.469796) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(3136KB)], [74(7862KB)]
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022998469899, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 11262979, "oldest_snapshot_seqno": -1}
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6069 keys, 9603441 bytes, temperature: kUnknown
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022998724675, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 9603441, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9561309, "index_size": 25830, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15237, "raw_key_size": 153209, "raw_average_key_size": 25, "raw_value_size": 9451096, "raw_average_value_size": 1557, "num_data_blocks": 1044, "num_entries": 6069, "num_filter_entries": 6069, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772022998, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:36:38 np0005629333 nova_compute[244014]: 2026-02-25 12:36:38.724 244018 DEBUG nova.network.neutron [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Successfully created port: d689bf7c-d44c-4f39-a2a7-a85e52dcee30 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:36:38 np0005629333 nova_compute[244014]: 2026-02-25 12:36:38.811 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "6076a107-bdef-4c8a-8f75-887cdb4833f0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:36:38 np0005629333 nova_compute[244014]: 2026-02-25 12:36:38.812 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "6076a107-bdef-4c8a-8f75-887cdb4833f0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:36:38 np0005629333 nova_compute[244014]: 2026-02-25 12:36:38.829 244018 DEBUG nova.compute.manager [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.725009) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 9603441 bytes
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.846302) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 44.2 rd, 37.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.7 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 6595, records dropped: 526 output_compression: NoCompression
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.846348) EVENT_LOG_v1 {"time_micros": 1772022998846328, "job": 42, "event": "compaction_finished", "compaction_time_micros": 254890, "compaction_time_cpu_micros": 33697, "output_level": 6, "num_output_files": 1, "total_output_size": 9603441, "num_input_records": 6595, "num_output_records": 6069, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022998846994, "job": 42, "event": "table_file_deletion", "file_number": 76}
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772022998848301, "job": 42, "event": "table_file_deletion", "file_number": 74}
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.469614) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.848380) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.848385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.848386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.848388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:36:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:36:38.848389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:36:38 np0005629333 nova_compute[244014]: 2026-02-25 12:36:38.919 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:36:38 np0005629333 nova_compute[244014]: 2026-02-25 12:36:38.919 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:36:38 np0005629333 nova_compute[244014]: 2026-02-25 12:36:38.930 244018 DEBUG nova.virt.hardware [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:36:38 np0005629333 nova_compute[244014]: 2026-02-25 12:36:38.931 244018 INFO nova.compute.claims [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:36:39 np0005629333 nova_compute[244014]: 2026-02-25 12:36:39.087 244018 DEBUG oslo_concurrency.processutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:36:39 np0005629333 nova_compute[244014]: 2026-02-25 12:36:39.175 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:36:39 np0005629333 nova_compute[244014]: 2026-02-25 12:36:39.749 244018 DEBUG nova.network.neutron [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Successfully updated port: d689bf7c-d44c-4f39-a2a7-a85e52dcee30 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:36:39 np0005629333 nova_compute[244014]: 2026-02-25 12:36:39.768 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "refresh_cache-0061daee-43d7-458b-8645-0ad3f8fbb2af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:36:39 np0005629333 nova_compute[244014]: 2026-02-25 12:36:39.769 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquired lock "refresh_cache-0061daee-43d7-458b-8645-0ad3f8fbb2af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:36:39 np0005629333 nova_compute[244014]: 2026-02-25 12:36:39.770 244018 DEBUG nova.network.neutron [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:36:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:36:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/459317964' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:36:39 np0005629333 nova_compute[244014]: 2026-02-25 12:36:39.814 244018 DEBUG oslo_concurrency.processutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.727s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:36:39 np0005629333 nova_compute[244014]: 2026-02-25 12:36:39.818 244018 DEBUG nova.compute.provider_tree [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:36:39 np0005629333 nova_compute[244014]: 2026-02-25 12:36:39.834 244018 DEBUG nova.scheduler.client.report [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:36:39 np0005629333 nova_compute[244014]: 2026-02-25 12:36:39.860 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:36:39 np0005629333 nova_compute[244014]: 2026-02-25 12:36:39.861 244018 DEBUG nova.compute.manager [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:36:39 np0005629333 nova_compute[244014]: 2026-02-25 12:36:39.893 244018 DEBUG nova.compute.manager [req-6eff659b-eaa6-42fb-82ca-29104ef024fa req-246526f3-6ae1-4029-aa9f-6d25056f69be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received event network-changed-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:36:39 np0005629333 nova_compute[244014]: 2026-02-25 12:36:39.894 244018 DEBUG nova.compute.manager [req-6eff659b-eaa6-42fb-82ca-29104ef024fa req-246526f3-6ae1-4029-aa9f-6d25056f69be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Refreshing instance network info cache due to event network-changed-d689bf7c-d44c-4f39-a2a7-a85e52dcee30. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:36:39 np0005629333 nova_compute[244014]: 2026-02-25 12:36:39.895 244018 DEBUG oslo_concurrency.lockutils [req-6eff659b-eaa6-42fb-82ca-29104ef024fa req-246526f3-6ae1-4029-aa9f-6d25056f69be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-0061daee-43d7-458b-8645-0ad3f8fbb2af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:36:39 np0005629333 nova_compute[244014]: 2026-02-25 12:36:39.922 244018 DEBUG nova.compute.manager [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:36:39 np0005629333 nova_compute[244014]: 2026-02-25 12:36:39.922 244018 DEBUG nova.network.neutron [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:36:39 np0005629333 nova_compute[244014]: 2026-02-25 12:36:39.935 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:36:39 np0005629333 nova_compute[244014]: 2026-02-25 12:36:39.944 244018 INFO nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:36:39 np0005629333 nova_compute[244014]: 2026-02-25 12:36:39.965 244018 DEBUG nova.compute.manager [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:36:39 np0005629333 nova_compute[244014]: 2026-02-25 12:36:39.973 244018 DEBUG nova.network.neutron [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.069 244018 DEBUG nova.compute.manager [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.072 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.073 244018 INFO nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Creating image(s)
Feb 25 07:36:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 200 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 201 op/s
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.107 244018 DEBUG nova.storage.rbd_utils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 6076a107-bdef-4c8a-8f75-887cdb4833f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.134 244018 DEBUG nova.storage.rbd_utils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 6076a107-bdef-4c8a-8f75-887cdb4833f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.168 244018 DEBUG nova.storage.rbd_utils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 6076a107-bdef-4c8a-8f75-887cdb4833f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.172 244018 DEBUG oslo_concurrency.processutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.212 244018 DEBUG nova.policy [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '00eb5c915b2d41f69600acd33967d0f5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '999f2a015b9c4bc98661fe6fe6db06a0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.242 244018 DEBUG oslo_concurrency.processutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.243 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.244 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.244 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.267 244018 DEBUG nova.storage.rbd_utils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 6076a107-bdef-4c8a-8f75-887cdb4833f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.272 244018 DEBUG oslo_concurrency.processutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 6076a107-bdef-4c8a-8f75-887cdb4833f0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.309 244018 DEBUG oslo_concurrency.processutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 0061daee-43d7-458b-8645-0ad3f8fbb2af_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.393 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.394 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.400 244018 DEBUG nova.storage.rbd_utils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] resizing rbd image 0061daee-43d7-458b-8645-0ad3f8fbb2af_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.445 244018 DEBUG nova.compute.manager [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.558 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.559 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.561 244018 DEBUG oslo_concurrency.processutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 6076a107-bdef-4c8a-8f75-887cdb4833f0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.588 244018 DEBUG nova.objects.instance [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'migration_context' on Instance uuid 0061daee-43d7-458b-8645-0ad3f8fbb2af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.593 244018 DEBUG nova.virt.hardware [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.593 244018 INFO nova.compute.claims [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.622 244018 INFO nova.virt.libvirt.driver [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Deleting instance files /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf_del
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.623 244018 INFO nova.virt.libvirt.driver [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Deletion of /var/lib/nova/instances/2184a715-0ac8-4fc2-aa99-fae3b8e32edf_del complete
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.628 244018 DEBUG nova.storage.rbd_utils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] resizing rbd image 6076a107-bdef-4c8a-8f75-887cdb4833f0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.647 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.648 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Ensure instance console log exists: /var/lib/nova/instances/0061daee-43d7-458b-8645-0ad3f8fbb2af/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.648 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.648 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.649 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.688 244018 DEBUG nova.objects.instance [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'migration_context' on Instance uuid 6076a107-bdef-4c8a-8f75-887cdb4833f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.693 244018 INFO nova.compute.manager [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Took 2.92 seconds to destroy the instance on the hypervisor.
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.693 244018 DEBUG oslo.service.loopingcall [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.694 244018 DEBUG nova.compute.manager [-] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.694 244018 DEBUG nova.network.neutron [-] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
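The "Waiting for function ... _deallocate_network_with_retries" line shows the network teardown wrapped in an oslo.service looping call, so transient Neutron failures are retried with back-off instead of failing the delete. A rough sketch under the BackOffLoopingCall contract (return False to back off and retry, raise LoopingCallDone to finish); the deallocate helper here is hypothetical:

    from oslo_service import loopingcall

    def _deallocate_network_with_retries():
        try:
            deallocate_for_instance()            # hypothetical helper
        except Exception:
            return False                         # error: back off and retry
        raise loopingcall.LoopingCallDone()      # success: stop the loop

    loopingcall.BackOffLoopingCall(_deallocate_network_with_retries).start().wait()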
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.702 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.703 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Ensure instance console log exists: /var/lib/nova/instances/6076a107-bdef-4c8a-8f75-887cdb4833f0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.703 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.703 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.704 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:40 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:40.788 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
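Here the metadata agent acknowledges southbound config sequence number 24 by writing it into the chassis record's external_ids; the DbSetCommand above is how ovsdbapp queues that write. A loose sketch of issuing the same transaction (the SB endpoint is illustrative; the agent reads it from its own config):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    # Endpoint assumed for illustration.
    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6642', 'OVN_Southbound')
    api = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.db_set(
            'Chassis_Private', 'a594384c-d614-4492-9e0a-4d6ec095920c',
            ('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'})))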
Feb 25 07:36:40 np0005629333 nova_compute[244014]: 2026-02-25 12:36:40.820 244018 DEBUG oslo_concurrency.processutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.182 244018 DEBUG nova.network.neutron [-] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.210 244018 DEBUG nova.network.neutron [-] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.231 244018 INFO nova.compute.manager [-] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Took 0.54 seconds to deallocate network for instance.#033[00m
Feb 25 07:36:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:36:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1861160029' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.294 244018 DEBUG oslo_concurrency.lockutils [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.299 244018 DEBUG oslo_concurrency.processutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.304 244018 DEBUG nova.compute.provider_tree [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.327 244018 DEBUG nova.scheduler.client.report [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
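What that inventory means for scheduling: Placement treats usable capacity as (total - reserved) * allocation_ratio per resource class, so this provider accepts allocations up to 32 VCPU, 7167 MB of RAM and roughly 52 GB of disk:

    # Capacity implied by the inventory logged above.
    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2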
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.354 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.355 244018 DEBUG nova.compute.manager [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.357 244018 DEBUG oslo_concurrency.lockutils [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.411 244018 DEBUG nova.compute.manager [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.412 244018 DEBUG nova.network.neutron [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.433 244018 INFO nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.451 244018 DEBUG nova.compute.manager [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.461 244018 DEBUG nova.network.neutron [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Successfully created port: 66c41a3b-23a5-4cbf-a70e-416adf1617e5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.492 244018 DEBUG oslo_concurrency.processutils [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.630 244018 DEBUG nova.compute.manager [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.631 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.632 244018 INFO nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Creating image(s)#033[00m
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.657 244018 DEBUG nova.storage.rbd_utils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.678 244018 DEBUG nova.storage.rbd_utils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.700 244018 DEBUG nova.storage.rbd_utils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.704 244018 DEBUG oslo_concurrency.processutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.773 244018 DEBUG nova.network.neutron [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Updating instance_info_cache with network_info: [{"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.785 244018 DEBUG oslo_concurrency.processutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
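Note the wrapper around qemu-img here: oslo_concurrency.prlimit caps the child at 1 GiB of address space (--as=1073741824) and 30 CPU seconds (--cpu=30), so probing an untrusted or malformed qcow2 header cannot exhaust the host. In Python the same guard is requested like this (paths copied from the log):

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(
            address_space=1073741824,  # --as
            cpu_time=30))              # --cpu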
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.786 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.787 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.788 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.818 244018 DEBUG nova.storage.rbd_utils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.823 244018 DEBUG oslo_concurrency.processutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:41 np0005629333 nova_compute[244014]: 2026-02-25 12:36:41.851 244018 DEBUG nova.policy [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '00eb5c915b2d41f69600acd33967d0f5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '999f2a015b9c4bc98661fe6fe6db06a0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.008 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Releasing lock "refresh_cache-0061daee-43d7-458b-8645-0ad3f8fbb2af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.009 244018 DEBUG nova.compute.manager [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Instance network_info: |[{"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.010 244018 DEBUG oslo_concurrency.lockutils [req-6eff659b-eaa6-42fb-82ca-29104ef024fa req-246526f3-6ae1-4029-aa9f-6d25056f69be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-0061daee-43d7-458b-8645-0ad3f8fbb2af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.011 244018 DEBUG nova.network.neutron [req-6eff659b-eaa6-42fb-82ca-29104ef024fa req-246526f3-6ae1-4029-aa9f-6d25056f69be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Refreshing network info cache for port d689bf7c-d44c-4f39-a2a7-a85e52dcee30 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.019 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Start _get_guest_xml network_info=[{"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.025 244018 WARNING nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.031 244018 DEBUG nova.virt.libvirt.host [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.032 244018 DEBUG nova.virt.libvirt.host [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.036 244018 DEBUG nova.virt.libvirt.host [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.038 244018 DEBUG nova.virt.libvirt.host [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:36:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:36:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1470824002' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.039 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.040 244018 DEBUG nova.virt.hardware [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.041 244018 DEBUG nova.virt.hardware [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.042 244018 DEBUG nova.virt.hardware [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.042 244018 DEBUG nova.virt.hardware [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.043 244018 DEBUG nova.virt.hardware [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.043 244018 DEBUG nova.virt.hardware [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.043 244018 DEBUG nova.virt.hardware [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.044 244018 DEBUG nova.virt.hardware [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.044 244018 DEBUG nova.virt.hardware [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.045 244018 DEBUG nova.virt.hardware [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.045 244018 DEBUG nova.virt.hardware [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
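With no topology constraints from flavor or image (all limits and preferences 0:0:0 against maxima of 65536), the driver enumerates every sockets:cores:threads combination whose product covers the vCPU count; for one vCPU the only candidate is 1:1:1, which is exactly what lands in the guest XML further down. An illustrative sketch of that enumeration (not Nova's actual implementation):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield (sockets, cores, threads) triples covering exactly `vcpus`."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]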
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.051 244018 DEBUG oslo_concurrency.processutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.088 244018 DEBUG oslo_concurrency.processutils [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.095 244018 DEBUG nova.compute.provider_tree [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:36:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 232 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.5 MiB/s wr, 215 op/s
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.108 244018 DEBUG oslo_concurrency.processutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.285s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.169 244018 DEBUG nova.storage.rbd_utils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] resizing rbd image 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.191 244018 DEBUG nova.scheduler.client.report [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.238 244018 DEBUG nova.objects.instance [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'migration_context' on Instance uuid 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.376 244018 DEBUG oslo_concurrency.lockutils [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.019s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.383 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.383 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Ensure instance console log exists: /var/lib/nova/instances/7c2fb1e7-04d0-4903-a675-8cda55bbb6ed/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.383 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.384 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.384 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:36:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:36:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:36:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:36:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006996718842186654 of space, bias 1.0, pg target 0.20990156526559964 quantized to 32 (current 32)
Feb 25 07:36:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:36:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:36:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:36:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:36:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:36:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024930145185177097 of space, bias 1.0, pg target 0.7479043555553129 quantized to 32 (current 32)
Feb 25 07:36:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:36:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.699971522888712e-07 of space, bias 4.0, pg target 0.0009239965827466455 quantized to 16 (current 16)
Feb 25 07:36:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:36:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:36:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:36:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:36:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:36:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:36:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:36:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:36:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:36:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
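These pg_autoscaler targets are reproducible from the lines themselves: pg target = usage_ratio * bias * PG budget, where the budget is the OSD count times mon_target_pg_per_osd (assumed here at its default of 100, which with the cluster's 3 OSDs gives 300), then quantized to a power of two. Each pool stays at its current pg_num because by default the autoscaler only acts when target and current differ by more than a factor of three.

    # Reproduce two of the targets logged above (budget = 3 OSDs * 100).
    budget = 3 * 100
    print(0.0024930145185177097 * 1.0 * budget)  # images: ~0.74790435555
    print(7.699971522888712e-07 * 4.0 * budget)  # cephfs.cephfs.meta: ~0.00092399658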
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.571 244018 INFO nova.scheduler.client.report [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Deleted allocations for instance 2184a715-0ac8-4fc2-aa99-fae3b8e32edf#033[00m
Feb 25 07:36:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:36:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3501951083' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.630 244018 DEBUG oslo_concurrency.processutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.651 244018 DEBUG nova.storage.rbd_utils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 0061daee-43d7-458b-8645-0ad3f8fbb2af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.654 244018 DEBUG oslo_concurrency.processutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:42 np0005629333 nova_compute[244014]: 2026-02-25 12:36:42.719 244018 DEBUG oslo_concurrency.lockutils [None req-1ab1b55c-ae3b-43c4-bb5a-2242f186fca0 f14f324492304e519e215bb56099abdd 70a704d198834ace8c228d59139dd7b4 - - default default] Lock "2184a715-0ac8-4fc2-aa99-fae3b8e32edf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:36:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Feb 25 07:36:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Feb 25 07:36:42 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Feb 25 07:36:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:36:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1989944339' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.193 244018 DEBUG oslo_concurrency.processutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
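The repeated `ceph mon dump --format=json` calls are how the RBD image backend learns the monitor addresses that become the <host> entries of the <disk type="network"> element the guest XML below ends with. A hedged sketch of extracting them, using the field names of current Ceph JSON output:

    import json
    import subprocess

    raw = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout
    monmap = json.loads(raw)
    # 'public_addr' looks like "192.168.122.100:6789/0"; drop the /nonce part.
    hosts = [mon['public_addr'].split('/')[0] for mon in monmap['mons']]
    print(hosts)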
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.195 244018 DEBUG nova.virt.libvirt.vif [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:36:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1814919794',display_name='tempest-ListServerFiltersTestJSON-instance-1814919794',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1814919794',id=90,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='999f2a015b9c4bc98661fe6fe6db06a0',ramdisk_id='',reservation_id='r-la6jl8ba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1481541933',owner_user_name='tempest-ListServerFiltersTestJSON-1481541933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:36:37Z,user_data=None,user_id='00eb5c915b2d41f69600acd33967d0f5',uuid=0061daee-43d7-458b-8645-0ad3f8fbb2af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.196 244018 DEBUG nova.network.os_vif_util [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converting VIF {"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.197 244018 DEBUG nova.network.os_vif_util [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ae,bridge_name='br-int',has_traffic_filtering=True,id=d689bf7c-d44c-4f39-a2a7-a85e52dcee30,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd689bf7c-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.198 244018 DEBUG nova.objects.instance [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0061daee-43d7-458b-8645-0ad3f8fbb2af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.395 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:36:43 np0005629333 nova_compute[244014]:  <uuid>0061daee-43d7-458b-8645-0ad3f8fbb2af</uuid>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:  <name>instance-0000005a</name>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-1814919794</nova:name>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:36:42</nova:creationTime>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:36:43 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:        <nova:user uuid="00eb5c915b2d41f69600acd33967d0f5">tempest-ListServerFiltersTestJSON-1481541933-project-member</nova:user>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:        <nova:project uuid="999f2a015b9c4bc98661fe6fe6db06a0">tempest-ListServerFiltersTestJSON-1481541933</nova:project>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:        <nova:port uuid="d689bf7c-d44c-4f39-a2a7-a85e52dcee30">
Feb 25 07:36:43 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <entry name="serial">0061daee-43d7-458b-8645-0ad3f8fbb2af</entry>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <entry name="uuid">0061daee-43d7-458b-8645-0ad3f8fbb2af</entry>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/0061daee-43d7-458b-8645-0ad3f8fbb2af_disk">
Feb 25 07:36:43 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:36:43 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/0061daee-43d7-458b-8645-0ad3f8fbb2af_disk.config">
Feb 25 07:36:43 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:36:43 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:65:76:ae"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <target dev="tapd689bf7c-d4"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/0061daee-43d7-458b-8645-0ad3f8fbb2af/console.log" append="off"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:36:43 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:36:43 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:36:43 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:36:43 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
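The XML above is the complete q35/KVM guest definition nova hands to libvirt. A minimal sketch, assuming the standard libvirt-python bindings, of how such a document would reach the hypervisor; nova's real path goes through nova.virt.libvirt.guest rather than this direct call, and `xml` here stands for the domain document logged above:

    import libvirt

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)      # persist the <domain> definition
        dom.create()                   # boot the defined guest
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()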
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.397 244018 DEBUG nova.compute.manager [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Preparing to wait for external event network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.398 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.399 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.399 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
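The acquire/release pair above is oslo.concurrency's named-lock idiom guarding the per-instance event list while nova registers the network-vif-plugged waiter. A minimal sketch of the same pattern; the lock name mirrors the one logged and the body is illustrative:

    from oslo_concurrency import lockutils

    instance_uuid = '0061daee-43d7-458b-8645-0ad3f8fbb2af'
    with lockutils.lock('%s-events' % instance_uuid):
        # critical section: create or fetch the pending
        # network-vif-plugged event for this instance
        pass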
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.401 244018 DEBUG nova.virt.libvirt.vif [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:36:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1814919794',display_name='tempest-ListServerFiltersTestJSON-instance-1814919794',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1814919794',id=90,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='999f2a015b9c4bc98661fe6fe6db06a0',ramdisk_id='',reservation_id='r-la6jl8ba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1481541933',owner_user_name='tempest-ListServerFiltersTestJSON-1481541933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:36:37Z,user_data=None,user_id='00eb5c915b2d41f69600acd33967d0f5',uuid=0061daee-43d7-458b-8645-0ad3f8fbb2af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.402 244018 DEBUG nova.network.os_vif_util [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converting VIF {"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.403 244018 DEBUG nova.network.os_vif_util [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ae,bridge_name='br-int',has_traffic_filtering=True,id=d689bf7c-d44c-4f39-a2a7-a85e52dcee30,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd689bf7c-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.404 244018 DEBUG os_vif [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ae,bridge_name='br-int',has_traffic_filtering=True,id=d689bf7c-d44c-4f39-a2a7-a85e52dcee30,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd689bf7c-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.405 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.406 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.407 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.412 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.412 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd689bf7c-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.413 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd689bf7c-d4, col_values=(('external_ids', {'iface-id': 'd689bf7c-d44c-4f39-a2a7-a85e52dcee30', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:76:ae', 'vm-uuid': '0061daee-43d7-458b-8645-0ad3f8fbb2af'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.416 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
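The two transactions above ensure br-int exists (a no-op here) and then add the tap port with its external_ids. A hedged sketch of equivalent calls through ovsdbapp's public Open_vSwitch schema API; the socket path and timeout are assumptions, and nova actually reaches OVSDB through os-vif's own wiring rather than this direct setup:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapd689bf7c-d4', may_exist=True))
        txn.add(api.db_set('Interface', 'tapd689bf7c-d4',
                           ('external_ids', {
                               'iface-id': 'd689bf7c-d44c-4f39-a2a7-a85e52dcee30',
                               'iface-status': 'active',
                               'attached-mac': 'fa:16:3e:65:76:ae',
                               'vm-uuid': '0061daee-43d7-458b-8645-0ad3f8fbb2af'})))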
Feb 25 07:36:43 np0005629333 NetworkManager[49836]: <info>  [1772023003.4178] manager: (tapd689bf7c-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/376)
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.420 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.421 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.422 244018 INFO os_vif [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ae,bridge_name='br-int',has_traffic_filtering=True,id=d689bf7c-d44c-4f39-a2a7-a85e52dcee30,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd689bf7c-d4')#033[00m
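The plug itself goes through os-vif's small public surface, as the os_vif/__init__.py:76 reference shows. A minimal sketch, assuming `vif` is the converted VIFOpenVSwitch object from above; the InstanceInfo values are taken from the logged guest definition:

    import os_vif
    from os_vif.objects.instance_info import InstanceInfo

    os_vif.initialize()                 # load the 'ovs' plugin entry point
    instance_info = InstanceInfo(
        uuid='0061daee-43d7-458b-8645-0ad3f8fbb2af',
        name='instance-0000005a')
    os_vif.plug(vif, instance_info)     # drives the OVSDB transactions above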
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.482 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.483 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.483 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] No VIF found with MAC fa:16:3e:65:76:ae, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.483 244018 INFO nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Using config drive#033[00m
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.517 244018 DEBUG nova.storage.rbd_utils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 0061daee-43d7-458b-8645-0ad3f8fbb2af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:36:43 np0005629333 podman[325514]: 2026-02-25 12:36:43.533577272 +0000 UTC m=+0.063405313 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.593 244018 DEBUG nova.network.neutron [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Successfully created port: ca16d7f1-7f94-4d64-906e-e1469230e4f1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.915 244018 INFO nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Creating config drive at /var/lib/nova/instances/0061daee-43d7-458b-8645-0ad3f8fbb2af/disk.config#033[00m
Feb 25 07:36:43 np0005629333 nova_compute[244014]: 2026-02-25 12:36:43.922 244018 DEBUG oslo_concurrency.processutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0061daee-43d7-458b-8645-0ad3f8fbb2af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpd1088dmj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:44 np0005629333 nova_compute[244014]: 2026-02-25 12:36:44.010 244018 DEBUG nova.network.neutron [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Successfully updated port: 66c41a3b-23a5-4cbf-a70e-416adf1617e5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:36:44 np0005629333 nova_compute[244014]: 2026-02-25 12:36:44.035 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "refresh_cache-6076a107-bdef-4c8a-8f75-887cdb4833f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:36:44 np0005629333 nova_compute[244014]: 2026-02-25 12:36:44.036 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquired lock "refresh_cache-6076a107-bdef-4c8a-8f75-887cdb4833f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:36:44 np0005629333 nova_compute[244014]: 2026-02-25 12:36:44.036 244018 DEBUG nova.network.neutron [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:36:44 np0005629333 nova_compute[244014]: 2026-02-25 12:36:44.067 244018 DEBUG oslo_concurrency.processutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0061daee-43d7-458b-8645-0ad3f8fbb2af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpd1088dmj" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
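The config drive is built by shelling out through oslo.concurrency, exactly as the Running cmd / CMD returned pair above records. A sketch reproducing that call with processutils; the arguments are copied from the log, and /tmp/tmpd1088dmj was a throwaway staging directory:

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/0061daee-43d7-458b-8645-0ad3f8fbb2af/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmpd1088dmj')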
Feb 25 07:36:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 305 active+clean; 262 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 7.6 MiB/s wr, 313 op/s
Feb 25 07:36:44 np0005629333 nova_compute[244014]: 2026-02-25 12:36:44.100 244018 DEBUG nova.storage.rbd_utils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 0061daee-43d7-458b-8645-0ad3f8fbb2af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:36:44 np0005629333 nova_compute[244014]: 2026-02-25 12:36:44.104 244018 DEBUG oslo_concurrency.processutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0061daee-43d7-458b-8645-0ad3f8fbb2af/disk.config 0061daee-43d7-458b-8645-0ad3f8fbb2af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:44 np0005629333 nova_compute[244014]: 2026-02-25 12:36:44.147 244018 DEBUG nova.compute.manager [req-e8af6b8e-de62-40ab-ae51-144eecd9013b req-b3005bae-79c1-4e14-b294-cfe9b9802be6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Received event network-changed-66c41a3b-23a5-4cbf-a70e-416adf1617e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:36:44 np0005629333 nova_compute[244014]: 2026-02-25 12:36:44.147 244018 DEBUG nova.compute.manager [req-e8af6b8e-de62-40ab-ae51-144eecd9013b req-b3005bae-79c1-4e14-b294-cfe9b9802be6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Refreshing instance network info cache due to event network-changed-66c41a3b-23a5-4cbf-a70e-416adf1617e5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:36:44 np0005629333 nova_compute[244014]: 2026-02-25 12:36:44.147 244018 DEBUG oslo_concurrency.lockutils [req-e8af6b8e-de62-40ab-ae51-144eecd9013b req-b3005bae-79c1-4e14-b294-cfe9b9802be6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-6076a107-bdef-4c8a-8f75-887cdb4833f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:36:44 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Feb 25 07:36:44 np0005629333 nova_compute[244014]: 2026-02-25 12:36:44.255 244018 DEBUG oslo_concurrency.processutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0061daee-43d7-458b-8645-0ad3f8fbb2af/disk.config 0061daee-43d7-458b-8645-0ad3f8fbb2af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:36:44 np0005629333 nova_compute[244014]: 2026-02-25 12:36:44.256 244018 INFO nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Deleting local config drive /var/lib/nova/instances/0061daee-43d7-458b-8645-0ad3f8fbb2af/disk.config because it was imported into RBD.#033[00m
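The "does not exist" probe a few lines up comes from nova.storage.rbd_utils, which opens the image through the Ceph python bindings; on ImageNotFound, nova falls back to the `rbd import` just logged and then deletes the local copy. A hedged sketch of that check, with pool, image name, and credentials as logged:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, '0061daee-43d7-458b-8645-0ad3f8fbb2af_disk.config'):
                exists = True
        except rbd.ImageNotFound:
            exists = False   # triggers the 'rbd import ...' path seen above
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()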
Feb 25 07:36:44 np0005629333 kernel: tapd689bf7c-d4: entered promiscuous mode
Feb 25 07:36:44 np0005629333 NetworkManager[49836]: <info>  [1772023004.3067] manager: (tapd689bf7c-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/377)
Feb 25 07:36:44 np0005629333 nova_compute[244014]: 2026-02-25 12:36:44.307 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:44 np0005629333 ovn_controller[147040]: 2026-02-25T12:36:44Z|00900|binding|INFO|Claiming lport d689bf7c-d44c-4f39-a2a7-a85e52dcee30 for this chassis.
Feb 25 07:36:44 np0005629333 ovn_controller[147040]: 2026-02-25T12:36:44Z|00901|binding|INFO|d689bf7c-d44c-4f39-a2a7-a85e52dcee30: Claiming fa:16:3e:65:76:ae 10.100.0.8
Feb 25 07:36:44 np0005629333 nova_compute[244014]: 2026-02-25 12:36:44.311 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:44 np0005629333 nova_compute[244014]: 2026-02-25 12:36:44.315 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.323 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:76:ae 10.100.0.8'], port_security=['fa:16:3e:65:76:ae 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0061daee-43d7-458b-8645-0ad3f8fbb2af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38a239da-c933-4cbc-be00-dd127471e198', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '999f2a015b9c4bc98661fe6fe6db06a0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '90d0dcc5-a9c4-4f99-8901-06a2a2854544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80217008-ae39-43e4-aa35-a210336c586d, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=d689bf7c-d44c-4f39-a2a7-a85e52dcee30) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.324 157129 INFO neutron.agent.ovn.metadata.agent [-] Port d689bf7c-d44c-4f39-a2a7-a85e52dcee30 in datapath 38a239da-c933-4cbc-be00-dd127471e198 bound to our chassis#033[00m
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.325 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 38a239da-c933-4cbc-be00-dd127471e198#033[00m
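The metadata agent learned about the binding through an ovsdbapp row event matched against the southbound Port_Binding table, as the "Matched UPDATE" line shows. A skeletal sketch of that idiom; the class name and body are illustrative, and neutron's real event carries chassis-comparison logic:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # here row.logical_port == 'd689bf7c-d44c-4f39-a2a7-a85e52dcee30';
            # provision metadata for row.datapath once it binds to our chassis
            pass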
Feb 25 07:36:44 np0005629333 systemd-machined[210048]: New machine qemu-116-instance-0000005a.
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.338 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed44d18-91b3-40e5-bcbe-4611b36fbab5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.339 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap38a239da-c1 in ovnmeta-38a239da-c933-4cbc-be00-dd127471e198 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
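The veth creation named above runs through neutron's privsep-wrapped ip_lib; a rough equivalent using pyroute2 directly is sketched below. Interface and namespace names are as logged, error handling is omitted, and the peer/net_ns_fd form of the call is an assumption about the pyroute2 API:

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198'
    netns.create(ns)
    ipr = IPRoute()
    # tap38a239da-c0 stays in the root namespace (later plugged into br-int);
    # tap38a239da-c1 is the in-namespace end that will serve 169.254.169.254
    ipr.link('add', ifname='tap38a239da-c0', kind='veth',
             peer={'ifname': 'tap38a239da-c1', 'net_ns_fd': ns})
    ipr.close()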
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.340 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap38a239da-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.341 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f8af60-25ca-4fda-b95e-5ccbf6c3c983]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.341 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7413ddba-f8fc-445d-9cc4-76eb7ef39327]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:44 np0005629333 nova_compute[244014]: 2026-02-25 12:36:44.342 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:44 np0005629333 ovn_controller[147040]: 2026-02-25T12:36:44Z|00902|binding|INFO|Setting lport d689bf7c-d44c-4f39-a2a7-a85e52dcee30 ovn-installed in OVS
Feb 25 07:36:44 np0005629333 ovn_controller[147040]: 2026-02-25T12:36:44Z|00903|binding|INFO|Setting lport d689bf7c-d44c-4f39-a2a7-a85e52dcee30 up in Southbound
Feb 25 07:36:44 np0005629333 nova_compute[244014]: 2026-02-25 12:36:44.344 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.350 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[f747ae6c-9f14-471f-a626-3d6b001f75a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:44 np0005629333 systemd[1]: Started Virtual Machine qemu-116-instance-0000005a.
Feb 25 07:36:44 np0005629333 systemd-udevd[325606]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:36:44 np0005629333 NetworkManager[49836]: <info>  [1772023004.3682] device (tapd689bf7c-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:36:44 np0005629333 NetworkManager[49836]: <info>  [1772023004.3694] device (tapd689bf7c-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.372 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[443a47f6-9dff-4ced-b08d-267d351e9273]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.393 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f9423214-39d6-4f08-aad1-15c71ac6355c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.396 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[93747d7e-5ab6-40c8-b32d-89c726b238a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:44 np0005629333 systemd-udevd[325609]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:36:44 np0005629333 NetworkManager[49836]: <info>  [1772023004.3982] manager: (tap38a239da-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/378)
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.427 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[45061f05-529c-436c-bafa-d31cb301ea2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.429 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b9e85c79-0d2e-4cc3-8401-05f01a7d0927]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:44 np0005629333 NetworkManager[49836]: <info>  [1772023004.4458] device (tap38a239da-c0): carrier: link connected
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.451 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[72d50a8e-40a8-4673-9498-db297528efde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.464 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2a97c7d6-5a52-496d-814f-ede30fc9b143]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38a239da-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:fa:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 497385, 'reachable_time': 39986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325636, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.480 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[16ea7eda-8361-47e6-a7c9-91d2aa2fa1d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef1:fa50'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 497385, 'tstamp': 497385}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325637, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.497 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[21cf56a6-34de-49a8-a77b-f38567fc7c4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38a239da-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:fa:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 497385, 'reachable_time': 39986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 325638, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.523 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1db3945a-20e6-48cf-b4bc-348acfbb5660]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.575 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[489b1060-bb2e-4cd9-9228-febc18198a0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.576 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38a239da-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.577 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.577 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap38a239da-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:36:44 np0005629333 kernel: tap38a239da-c0: entered promiscuous mode
Feb 25 07:36:44 np0005629333 nova_compute[244014]: 2026-02-25 12:36:44.580 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:44 np0005629333 NetworkManager[49836]: <info>  [1772023004.5810] manager: (tap38a239da-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/379)
Feb 25 07:36:44 np0005629333 nova_compute[244014]: 2026-02-25 12:36:44.585 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.587 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap38a239da-c0, col_values=(('external_ids', {'iface-id': 'b61d6004-89be-4a9d-aeb0-ecb4f03f3526'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:36:44 np0005629333 nova_compute[244014]: 2026-02-25 12:36:44.589 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:44 np0005629333 ovn_controller[147040]: 2026-02-25T12:36:44Z|00904|binding|INFO|Releasing lport b61d6004-89be-4a9d-aeb0-ecb4f03f3526 from this chassis (sb_readonly=0)
Feb 25 07:36:44 np0005629333 nova_compute[244014]: 2026-02-25 12:36:44.601 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:44 np0005629333 nova_compute[244014]: 2026-02-25 12:36:44.602 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.604 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/38a239da-c933-4cbc-be00-dd127471e198.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/38a239da-c933-4cbc-be00-dd127471e198.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.605 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[94729851-8004-4ad5-a487-4489879e5457]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.606 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-38a239da-c933-4cbc-be00-dd127471e198
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/38a239da-c933-4cbc-be00-dd127471e198.pid.haproxy
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 38a239da-c933-4cbc-be00-dd127471e198
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:36:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:44.607 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'env', 'PROCESS_TAG=haproxy-38a239da-c933-4cbc-be00-dd127471e198', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/38a239da-c933-4cbc-be00-dd127471e198.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
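[Annotation] The rendered config above binds the well-known metadata address 169.254.169.254:80 inside the ovnmeta- namespace, proxies to the UNIX-socket backend /var/lib/neutron/metadata_proxy (haproxy treats a server address starting with / as a UNIX socket), and stamps every request with X-OVN-Network-ID so the metadata service can resolve the requesting network; the rootwrap command then launches haproxy inside that namespace. A hypothetical smoke test from the host (requires root, and assumes curl is installed):

    import subprocess

    # Query the proxy from inside the namespace it serves; the namespace
    # name is taken from the rootwrap command logged above.
    ns = "ovnmeta-38a239da-c933-4cbc-be00-dd127471e198"
    subprocess.run(["ip", "netns", "exec", ns, "curl", "-sf",
                    "http://169.254.169.254/openstack/latest/meta_data.json"],
                   check=False)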
Feb 25 07:36:44 np0005629333 nova_compute[244014]: 2026-02-25 12:36:44.629 244018 DEBUG nova.compute.manager [req-edc06f77-d275-47eb-bbc4-00db192bb044 req-b32f426e-e655-44f9-ab69-e360b7ae972a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received event network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:36:44 np0005629333 nova_compute[244014]: 2026-02-25 12:36:44.630 244018 DEBUG oslo_concurrency.lockutils [req-edc06f77-d275-47eb-bbc4-00db192bb044 req-b32f426e-e655-44f9-ab69-e360b7ae972a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:44 np0005629333 nova_compute[244014]: 2026-02-25 12:36:44.630 244018 DEBUG oslo_concurrency.lockutils [req-edc06f77-d275-47eb-bbc4-00db192bb044 req-b32f426e-e655-44f9-ab69-e360b7ae972a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:44 np0005629333 nova_compute[244014]: 2026-02-25 12:36:44.630 244018 DEBUG oslo_concurrency.lockutils [req-edc06f77-d275-47eb-bbc4-00db192bb044 req-b32f426e-e655-44f9-ab69-e360b7ae972a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:44 np0005629333 nova_compute[244014]: 2026-02-25 12:36:44.631 244018 DEBUG nova.compute.manager [req-edc06f77-d275-47eb-bbc4-00db192bb044 req-b32f426e-e655-44f9-ab69-e360b7ae972a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Processing event network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
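[Annotation] The Acquiring/acquired/released triple around _pop_event is oslo.concurrency's named-lock pattern, keyed "<instance-uuid>-events" so event delivery for one instance serializes without blocking others. A minimal sketch of the same pattern:

    from oslo_concurrency import lockutils

    # The decorator alone reproduces the acquire/release lines seen above.
    @lockutils.synchronized("0061daee-43d7-458b-8645-0ad3f8fbb2af-events")
    def _pop_event():
        # Body elided; this is where the pending network-vif-plugged
        # event would be popped and dispatched.
        pass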
Feb 25 07:36:44 np0005629333 nova_compute[244014]: 2026-02-25 12:36:44.648 244018 DEBUG nova.network.neutron [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:36:44 np0005629333 nova_compute[244014]: 2026-02-25 12:36:44.936 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:45 np0005629333 podman[325670]: 2026-02-25 12:36:45.004119212 +0000 UTC m=+0.059436971 container create 2e466d152b6d4b411750f1032c64002c2b30eb3be3ca644500a1b88bcd54b2d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 07:36:45 np0005629333 systemd[1]: Started libpod-conmon-2e466d152b6d4b411750f1032c64002c2b30eb3be3ca644500a1b88bcd54b2d1.scope.
Feb 25 07:36:45 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:36:45 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82356394d1b742d50fdf5952aac21f145d54d5f578fc2e46fd418cf1c3cc383b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:36:45 np0005629333 podman[325670]: 2026-02-25 12:36:44.967375703 +0000 UTC m=+0.022693532 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:36:45 np0005629333 podman[325670]: 2026-02-25 12:36:45.072490124 +0000 UTC m=+0.127807943 container init 2e466d152b6d4b411750f1032c64002c2b30eb3be3ca644500a1b88bcd54b2d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 25 07:36:45 np0005629333 podman[325670]: 2026-02-25 12:36:45.077265219 +0000 UTC m=+0.132583008 container start 2e466d152b6d4b411750f1032c64002c2b30eb3be3ca644500a1b88bcd54b2d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0)
Feb 25 07:36:45 np0005629333 neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198[325686]: [NOTICE]   (325699) : New worker (325707) forked
Feb 25 07:36:45 np0005629333 neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198[325686]: [NOTICE]   (325699) : Loading success.
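[Annotation] podman reports the container's create, init, and start phases as separate events (12:36:45.004 through .077 above), after which the containerized haproxy master forks its worker. A compressed sketch of the same verb sequence; nearly all of the agent's real flags (network namespace, mounts, labels) are omitted here:

    import subprocess

    name = "neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198"
    image = ("quay.io/podified-antelope-centos9/"
             "openstack-neutron-metadata-agent-ovn:current-podified")
    subprocess.run(["podman", "create", "--name", name, image], check=True)
    subprocess.run(["podman", "init", name], check=True)
    subprocess.run(["podman", "start", name], check=True)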
Feb 25 07:36:45 np0005629333 podman[325683]: 2026-02-25 12:36:45.133500838 +0000 UTC m=+0.092490225 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.275 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023005.2753124, 0061daee-43d7-458b-8645-0ad3f8fbb2af => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.276 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] VM Started (Lifecycle Event)#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.279 244018 DEBUG nova.compute.manager [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.283 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.287 244018 INFO nova.virt.libvirt.driver [-] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Instance spawned successfully.#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.287 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.318 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.325 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.333 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.334 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.335 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.336 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.337 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.337 244018 DEBUG nova.virt.libvirt.driver [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
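[Annotation] The six defaults just registered, gathered into one mapping (values copied from the log lines above); nova records them against the instance so later operations keep the same virtual hardware buses:

    # Defaults chosen by _register_undefined_instance_details for this guest.
    image_property_defaults = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }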
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.375 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.376 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023005.2754111, 0061daee-43d7-458b-8645-0ad3f8fbb2af => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.376 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.417 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.420 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023005.2827766, 0061daee-43d7-458b-8645-0ad3f8fbb2af => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.420 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.446 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.450 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.457 244018 INFO nova.compute.manager [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Took 7.95 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.458 244018 DEBUG nova.compute.manager [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.474 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
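[Annotation] The repeated "DB power_state: 0, VM power_state: 1" comparison reads naturally against nova's power-state constants; sync is skipped because the task_state is still "spawning". The relevant values, as assumed from upstream nova.compute.power_state (verify against the running tree):

    NOSTATE = 0     # DB value until the guest first reports a state
    RUNNING = 1     # what libvirt reports once the domain starts
    PAUSED = 3
    SHUTDOWN = 4
    CRASHED = 6
    SUSPENDED = 7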
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.522 244018 DEBUG nova.network.neutron [req-6eff659b-eaa6-42fb-82ca-29104ef024fa req-246526f3-6ae1-4029-aa9f-6d25056f69be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Updated VIF entry in instance network info cache for port d689bf7c-d44c-4f39-a2a7-a85e52dcee30. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.522 244018 DEBUG nova.network.neutron [req-6eff659b-eaa6-42fb-82ca-29104ef024fa req-246526f3-6ae1-4029-aa9f-6d25056f69be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Updating instance_info_cache with network_info: [{"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
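[Annotation] The instance_info_cache payload above is plain JSON; a trimmed, runnable reader for the fields most often needed when debugging (structure copied from the log line, with everything else dropped):

    import json

    # Trimmed copy of the cache entry above; only the fields read below survive.
    network_info = json.loads("""
    [{"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30",
      "address": "fa:16:3e:65:76:ae",
      "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198",
                  "subnets": [{"cidr": "10.100.0.0/28",
                               "ips": [{"address": "10.100.0.8"}]}]}}]
    """)
    vif = network_info[0]
    print(vif["address"], vif["network"]["subnets"][0]["ips"][0]["address"])
    # -> fa:16:3e:65:76:ae 10.100.0.8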
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.527 244018 INFO nova.compute.manager [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Took 8.97 seconds to build instance.#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.542 244018 DEBUG oslo_concurrency.lockutils [req-6eff659b-eaa6-42fb-82ca-29104ef024fa req-246526f3-6ae1-4029-aa9f-6d25056f69be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-0061daee-43d7-458b-8645-0ad3f8fbb2af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.544 244018 DEBUG oslo_concurrency.lockutils [None req-ae60f61e-f7dc-44a9-bd96-fc6e87d1ac87 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.928 244018 DEBUG nova.network.neutron [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Successfully updated port: ca16d7f1-7f94-4d64-906e-e1469230e4f1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.944 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "refresh_cache-7c2fb1e7-04d0-4903-a675-8cda55bbb6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.944 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquired lock "refresh_cache-7c2fb1e7-04d0-4903-a675-8cda55bbb6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:36:45 np0005629333 nova_compute[244014]: 2026-02-25 12:36:45.945 244018 DEBUG nova.network.neutron [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:36:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 262 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 5.8 MiB/s wr, 138 op/s
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.221 244018 DEBUG nova.network.neutron [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.290 244018 DEBUG nova.network.neutron [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Updating instance_info_cache with network_info: [{"id": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "address": "fa:16:3e:6d:11:ce", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66c41a3b-23", "ovs_interfaceid": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.302 244018 DEBUG nova.compute.manager [req-2127f626-2d1a-414e-a97a-bdbc14fb47c0 req-69722ca1-b071-4f1f-9051-fc16d376d5d8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Received event network-changed-ca16d7f1-7f94-4d64-906e-e1469230e4f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.303 244018 DEBUG nova.compute.manager [req-2127f626-2d1a-414e-a97a-bdbc14fb47c0 req-69722ca1-b071-4f1f-9051-fc16d376d5d8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Refreshing instance network info cache due to event network-changed-ca16d7f1-7f94-4d64-906e-e1469230e4f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.303 244018 DEBUG oslo_concurrency.lockutils [req-2127f626-2d1a-414e-a97a-bdbc14fb47c0 req-69722ca1-b071-4f1f-9051-fc16d376d5d8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-7c2fb1e7-04d0-4903-a675-8cda55bbb6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.342 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Releasing lock "refresh_cache-6076a107-bdef-4c8a-8f75-887cdb4833f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.343 244018 DEBUG nova.compute.manager [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Instance network_info: |[{"id": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "address": "fa:16:3e:6d:11:ce", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66c41a3b-23", "ovs_interfaceid": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.345 244018 DEBUG oslo_concurrency.lockutils [req-e8af6b8e-de62-40ab-ae51-144eecd9013b req-b3005bae-79c1-4e14-b294-cfe9b9802be6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-6076a107-bdef-4c8a-8f75-887cdb4833f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.345 244018 DEBUG nova.network.neutron [req-e8af6b8e-de62-40ab-ae51-144eecd9013b req-b3005bae-79c1-4e14-b294-cfe9b9802be6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Refreshing network info cache for port 66c41a3b-23a5-4cbf-a70e-416adf1617e5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.351 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Start _get_guest_xml network_info=[{"id": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "address": "fa:16:3e:6d:11:ce", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66c41a3b-23", "ovs_interfaceid": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'f0ef5a9a-23b8-4883-8e47-feb7403a11d8'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.358 244018 WARNING nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.363 244018 DEBUG nova.virt.libvirt.host [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.363 244018 DEBUG nova.virt.libvirt.host [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.366 244018 DEBUG nova.virt.libvirt.host [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.366 244018 DEBUG nova.virt.libvirt.host [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
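[Annotation] The V1-then-V2 probe above ends with the cpu controller found under cgroups v2 (the host is unified-hierarchy, so the v1 check comes up empty). The v2 side of the check reduces to reading one file:

    # cgroups-v2 equivalent of the probe above: the 'cpu' controller is
    # available when it is listed in the root cgroup.controllers file.
    with open("/sys/fs/cgroup/cgroup.controllers") as f:
        print("cpu" in f.read().split())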
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.366 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.367 244018 DEBUG nova.virt.hardware [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.367 244018 DEBUG nova.virt.hardware [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.367 244018 DEBUG nova.virt.hardware [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.368 244018 DEBUG nova.virt.hardware [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.368 244018 DEBUG nova.virt.hardware [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.368 244018 DEBUG nova.virt.hardware [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.368 244018 DEBUG nova.virt.hardware [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.369 244018 DEBUG nova.virt.hardware [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.369 244018 DEBUG nova.virt.hardware [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.369 244018 DEBUG nova.virt.hardware [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.369 244018 DEBUG nova.virt.hardware [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
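[Annotation] With every flavor and image limit unset (logged as 0:0:0), the solver falls back to maxima of 65536 and enumerates factorizations of the vCPU count; for a 1-vCPU m1.nano the only topology is sockets=1, cores=1, threads=1. A simplified sketch of that enumeration (nova's real code also applies preference ordering):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield every (sockets, cores, threads) whose product is vcpus,
        # within the given limits.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]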
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.372 244018 DEBUG oslo_concurrency.processutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.831 244018 DEBUG nova.compute.manager [req-3f99e95b-a162-4149-aeee-681ed13b47fc req-29da5aad-3487-41fb-a083-cb37cbffb240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received event network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.832 244018 DEBUG oslo_concurrency.lockutils [req-3f99e95b-a162-4149-aeee-681ed13b47fc req-29da5aad-3487-41fb-a083-cb37cbffb240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.833 244018 DEBUG oslo_concurrency.lockutils [req-3f99e95b-a162-4149-aeee-681ed13b47fc req-29da5aad-3487-41fb-a083-cb37cbffb240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.833 244018 DEBUG oslo_concurrency.lockutils [req-3f99e95b-a162-4149-aeee-681ed13b47fc req-29da5aad-3487-41fb-a083-cb37cbffb240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.834 244018 DEBUG nova.compute.manager [req-3f99e95b-a162-4149-aeee-681ed13b47fc req-29da5aad-3487-41fb-a083-cb37cbffb240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] No waiting events found dispatching network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.834 244018 WARNING nova.compute.manager [req-3f99e95b-a162-4149-aeee-681ed13b47fc req-29da5aad-3487-41fb-a083-cb37cbffb240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received unexpected event network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:36:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:36:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1967359161' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.924 244018 DEBUG oslo_concurrency.processutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.946 244018 DEBUG nova.storage.rbd_utils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 6076a107-bdef-4c8a-8f75-887cdb4833f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:36:46 np0005629333 nova_compute[244014]: 2026-02-25 12:36:46.950 244018 DEBUG oslo_concurrency.processutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:36:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4057307832' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:36:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:36:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/40011578' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:36:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:36:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/40011578' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.496 244018 DEBUG oslo_concurrency.processutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
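[Annotation] Both "ceph mon dump" invocations above (each answered by the ceph-mon audit lines) go through oslo.concurrency's processutils, which logs the command and its runtime. The equivalent direct call, a sketch outside nova's rbd wrappers:

    from oslo_concurrency import processutils

    # Same command as logged above; execute() returns (stdout, stderr)
    # and raises ProcessExecutionError on a non-zero exit by default.
    out, err = processutils.execute(
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")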
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.498 244018 DEBUG nova.virt.libvirt.vif [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:36:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1261051940',display_name='tempest-ListServerFiltersTestJSON-instance-1261051940',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1261051940',id=91,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='999f2a015b9c4bc98661fe6fe6db06a0',ramdisk_id='',reservation_id='r-c0kkprmf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1481541933',owner_user_name='tempest-ListServerFiltersTestJSON-1481541933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:36:40Z,user_data=None,user_id='00eb5c915b2d41f69600acd33967d0f5',uuid=6076a107-bdef-4c8a-8f75-887cdb4833f0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "address": "fa:16:3e:6d:11:ce", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66c41a3b-23", "ovs_interfaceid": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.499 244018 DEBUG nova.network.os_vif_util [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converting VIF {"id": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "address": "fa:16:3e:6d:11:ce", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66c41a3b-23", "ovs_interfaceid": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.500 244018 DEBUG nova.network.os_vif_util [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:11:ce,bridge_name='br-int',has_traffic_filtering=True,id=66c41a3b-23a5-4cbf-a70e-416adf1617e5,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66c41a3b-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.502 244018 DEBUG nova.objects.instance [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6076a107-bdef-4c8a-8f75-887cdb4833f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
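[Annotation] What follows is the fully rendered libvirt domain XML for instance 6076a107 (q35 machine type, host-model CPU, rbd-backed root disk). Once rendered, nova hands the document to libvirt; a minimal sketch of that hand-off with libvirt-python (nova's actual spawn path adds event registration and rollback on failure, and "domain.xml" here is a hypothetical file holding the <domain> document below):

    import libvirt

    xml = open("domain.xml").read()   # the <domain> document shown below
    conn = libvirt.open("qemu:///system")
    dom = conn.defineXML(xml)         # persist the domain definition
    dom.createWithFlags(0)            # boot it (equivalent to virsh start)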
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.559 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:36:47 np0005629333 nova_compute[244014]:  <uuid>6076a107-bdef-4c8a-8f75-887cdb4833f0</uuid>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:  <name>instance-0000005b</name>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-1261051940</nova:name>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:36:46</nova:creationTime>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:36:47 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:        <nova:user uuid="00eb5c915b2d41f69600acd33967d0f5">tempest-ListServerFiltersTestJSON-1481541933-project-member</nova:user>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:        <nova:project uuid="999f2a015b9c4bc98661fe6fe6db06a0">tempest-ListServerFiltersTestJSON-1481541933</nova:project>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="f0ef5a9a-23b8-4883-8e47-feb7403a11d8"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:        <nova:port uuid="66c41a3b-23a5-4cbf-a70e-416adf1617e5">
Feb 25 07:36:47 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <entry name="serial">6076a107-bdef-4c8a-8f75-887cdb4833f0</entry>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <entry name="uuid">6076a107-bdef-4c8a-8f75-887cdb4833f0</entry>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/6076a107-bdef-4c8a-8f75-887cdb4833f0_disk">
Feb 25 07:36:47 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:36:47 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/6076a107-bdef-4c8a-8f75-887cdb4833f0_disk.config">
Feb 25 07:36:47 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:36:47 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:6d:11:ce"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <target dev="tap66c41a3b-23"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/6076a107-bdef-4c8a-8f75-887cdb4833f0/console.log" append="off"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:36:47 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:36:47 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:36:47 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:36:47 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.561 244018 DEBUG nova.compute.manager [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Preparing to wait for external event network-vif-plugged-66c41a3b-23a5-4cbf-a70e-416adf1617e5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.562 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "6076a107-bdef-4c8a-8f75-887cdb4833f0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.562 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "6076a107-bdef-4c8a-8f75-887cdb4833f0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.562 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "6076a107-bdef-4c8a-8f75-887cdb4833f0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.564 244018 DEBUG nova.virt.libvirt.vif [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:36:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1261051940',display_name='tempest-ListServerFiltersTestJSON-instance-1261051940',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1261051940',id=91,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='999f2a015b9c4bc98661fe6fe6db06a0',ramdisk_id='',reservation_id='r-c0kkprmf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1481541933',owner_user_name='tempest-ListServerFiltersTestJSON-1481541933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:36:40Z,user_data=None,user_id='00eb5c915b2d41f69600acd33967d0f5',uuid=6076a107-bdef-4c8a-8f75-887cdb4833f0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "address": "fa:16:3e:6d:11:ce", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66c41a3b-23", "ovs_interfaceid": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.564 244018 DEBUG nova.network.os_vif_util [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converting VIF {"id": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "address": "fa:16:3e:6d:11:ce", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66c41a3b-23", "ovs_interfaceid": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.565 244018 DEBUG nova.network.os_vif_util [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:11:ce,bridge_name='br-int',has_traffic_filtering=True,id=66c41a3b-23a5-4cbf-a70e-416adf1617e5,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66c41a3b-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.566 244018 DEBUG os_vif [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:11:ce,bridge_name='br-int',has_traffic_filtering=True,id=66c41a3b-23a5-4cbf-a70e-416adf1617e5,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66c41a3b-23') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.567 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.568 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.569 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.573 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.573 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap66c41a3b-23, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.574 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap66c41a3b-23, col_values=(('external_ids', {'iface-id': '66c41a3b-23a5-4cbf-a70e-416adf1617e5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6d:11:ce', 'vm-uuid': '6076a107-bdef-4c8a-8f75-887cdb4833f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.576 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:36:47 np0005629333 NetworkManager[49836]: <info>  [1772023007.5769] manager: (tap66c41a3b-23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/380)
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.579 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.583 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.583 244018 INFO os_vif [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:11:ce,bridge_name='br-int',has_traffic_filtering=True,id=66c41a3b-23a5-4cbf-a70e-416adf1617e5,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66c41a3b-23')
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.640 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.641 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.641 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] No VIF found with MAC fa:16:3e:6d:11:ce, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.641 244018 INFO nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Using config drive
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.660 244018 DEBUG nova.storage.rbd_utils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 6076a107-bdef-4c8a-8f75-887cdb4833f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:36:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.844 244018 DEBUG nova.network.neutron [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Updating instance_info_cache with network_info: [{"id": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "address": "fa:16:3e:aa:9e:27", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca16d7f1-7f", "ovs_interfaceid": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.869 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Releasing lock "refresh_cache-7c2fb1e7-04d0-4903-a675-8cda55bbb6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.870 244018 DEBUG nova.compute.manager [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Instance network_info: |[{"id": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "address": "fa:16:3e:aa:9e:27", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca16d7f1-7f", "ovs_interfaceid": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.871 244018 DEBUG oslo_concurrency.lockutils [req-2127f626-2d1a-414e-a97a-bdbc14fb47c0 req-69722ca1-b071-4f1f-9051-fc16d376d5d8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-7c2fb1e7-04d0-4903-a675-8cda55bbb6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.871 244018 DEBUG nova.network.neutron [req-2127f626-2d1a-414e-a97a-bdbc14fb47c0 req-69722ca1-b071-4f1f-9051-fc16d376d5d8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Refreshing network info cache for port ca16d7f1-7f94-4d64-906e-e1469230e4f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.876 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Start _get_guest_xml network_info=[{"id": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "address": "fa:16:3e:aa:9e:27", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca16d7f1-7f", "ovs_interfaceid": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.885 244018 WARNING nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.890 244018 DEBUG nova.virt.libvirt.host [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.891 244018 DEBUG nova.virt.libvirt.host [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.909 244018 DEBUG nova.virt.libvirt.host [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.910 244018 DEBUG nova.virt.libvirt.host [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.910 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.911 244018 DEBUG nova.virt.hardware [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8a55b37a-157d-41c3-9f41-7dbf617bee81',id=5,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.912 244018 DEBUG nova.virt.hardware [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.913 244018 DEBUG nova.virt.hardware [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.914 244018 DEBUG nova.virt.hardware [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.914 244018 DEBUG nova.virt.hardware [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.915 244018 DEBUG nova.virt.hardware [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.916 244018 DEBUG nova.virt.hardware [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.917 244018 DEBUG nova.virt.hardware [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.917 244018 DEBUG nova.virt.hardware [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.918 244018 DEBUG nova.virt.hardware [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.918 244018 DEBUG nova.virt.hardware [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 07:36:47 np0005629333 nova_compute[244014]: 2026-02-25 12:36:47.923 244018 DEBUG oslo_concurrency.processutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:36:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 305 active+clean; 292 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.4 MiB/s wr, 217 op/s
Feb 25 07:36:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:36:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4077882892' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:36:48 np0005629333 nova_compute[244014]: 2026-02-25 12:36:48.475 244018 DEBUG oslo_concurrency.processutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:36:48 np0005629333 nova_compute[244014]: 2026-02-25 12:36:48.497 244018 DEBUG nova.storage.rbd_utils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:36:48 np0005629333 nova_compute[244014]: 2026-02-25 12:36:48.501 244018 DEBUG oslo_concurrency.processutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:36:48 np0005629333 nova_compute[244014]: 2026-02-25 12:36:48.584 244018 INFO nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Creating config drive at /var/lib/nova/instances/6076a107-bdef-4c8a-8f75-887cdb4833f0/disk.config
Feb 25 07:36:48 np0005629333 nova_compute[244014]: 2026-02-25 12:36:48.587 244018 DEBUG oslo_concurrency.processutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6076a107-bdef-4c8a-8f75-887cdb4833f0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkyv9wk_v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:36:48 np0005629333 nova_compute[244014]: 2026-02-25 12:36:48.716 244018 DEBUG oslo_concurrency.processutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6076a107-bdef-4c8a-8f75-887cdb4833f0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkyv9wk_v" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:36:48 np0005629333 nova_compute[244014]: 2026-02-25 12:36:48.747 244018 DEBUG nova.storage.rbd_utils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 6076a107-bdef-4c8a-8f75-887cdb4833f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:36:48 np0005629333 nova_compute[244014]: 2026-02-25 12:36:48.751 244018 DEBUG oslo_concurrency.processutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6076a107-bdef-4c8a-8f75-887cdb4833f0/disk.config 6076a107-bdef-4c8a-8f75-887cdb4833f0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:36:48 np0005629333 nova_compute[244014]: 2026-02-25 12:36:48.887 244018 DEBUG oslo_concurrency.processutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6076a107-bdef-4c8a-8f75-887cdb4833f0/disk.config 6076a107-bdef-4c8a-8f75-887cdb4833f0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:36:48 np0005629333 nova_compute[244014]: 2026-02-25 12:36:48.889 244018 INFO nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Deleting local config drive /var/lib/nova/instances/6076a107-bdef-4c8a-8f75-887cdb4833f0/disk.config because it was imported into RBD.
Feb 25 07:36:48 np0005629333 NetworkManager[49836]: <info>  [1772023008.9366] manager: (tap66c41a3b-23): new Tun device (/org/freedesktop/NetworkManager/Devices/381)
Feb 25 07:36:48 np0005629333 kernel: tap66c41a3b-23: entered promiscuous mode
Feb 25 07:36:48 np0005629333 nova_compute[244014]: 2026-02-25 12:36:48.940 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:36:48 np0005629333 ovn_controller[147040]: 2026-02-25T12:36:48Z|00905|binding|INFO|Claiming lport 66c41a3b-23a5-4cbf-a70e-416adf1617e5 for this chassis.
Feb 25 07:36:48 np0005629333 ovn_controller[147040]: 2026-02-25T12:36:48Z|00906|binding|INFO|66c41a3b-23a5-4cbf-a70e-416adf1617e5: Claiming fa:16:3e:6d:11:ce 10.100.0.6
Feb 25 07:36:48 np0005629333 ovn_controller[147040]: 2026-02-25T12:36:48Z|00907|binding|INFO|Setting lport 66c41a3b-23a5-4cbf-a70e-416adf1617e5 ovn-installed in OVS
Feb 25 07:36:48 np0005629333 nova_compute[244014]: 2026-02-25 12:36:48.968 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:36:48 np0005629333 systemd-machined[210048]: New machine qemu-117-instance-0000005b.
Feb 25 07:36:48 np0005629333 ovn_controller[147040]: 2026-02-25T12:36:48Z|00908|binding|INFO|Setting lport 66c41a3b-23a5-4cbf-a70e-416adf1617e5 up in Southbound
Feb 25 07:36:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:48.980 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:11:ce 10.100.0.6'], port_security=['fa:16:3e:6d:11:ce 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6076a107-bdef-4c8a-8f75-887cdb4833f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38a239da-c933-4cbc-be00-dd127471e198', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '999f2a015b9c4bc98661fe6fe6db06a0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '90d0dcc5-a9c4-4f99-8901-06a2a2854544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80217008-ae39-43e4-aa35-a210336c586d, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=66c41a3b-23a5-4cbf-a70e-416adf1617e5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:36:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:48.982 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 66c41a3b-23a5-4cbf-a70e-416adf1617e5 in datapath 38a239da-c933-4cbc-be00-dd127471e198 bound to our chassis
Feb 25 07:36:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:48.984 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 38a239da-c933-4cbc-be00-dd127471e198
Feb 25 07:36:48 np0005629333 systemd[1]: Started Virtual Machine qemu-117-instance-0000005b.
Feb 25 07:36:49 np0005629333 systemd-udevd[325964]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:36:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:49.003 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1d45b713-244a-4eb4-9fa0-b3e858d7055f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:36:49 np0005629333 NetworkManager[49836]: <info>  [1772023009.0146] device (tap66c41a3b-23): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:36:49 np0005629333 NetworkManager[49836]: <info>  [1772023009.0167] device (tap66c41a3b-23): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:36:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:36:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1339552868' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:36:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:49.054 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8dee6aa8-1441-4bb1-8ceb-ac644354f4d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:36:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:49.059 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5a97813b-0457-48d4-95e0-ab22ff13dc8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.070 244018 DEBUG oslo_concurrency.processutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.073 244018 DEBUG nova.virt.libvirt.vif [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:36:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1161077750',display_name='tempest-ListServerFiltersTestJSON-instance-1161077750',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1161077750',id=92,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='999f2a015b9c4bc98661fe6fe6db06a0',ramdisk_id='',reservation_id='r-hzdbk73p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1481541933',owner_user_name='tempest-ListServerFiltersTestJSON-1481541933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:36:41Z,user_data=None,user_id='00eb5c915b2d41f69600acd33967d0f5',uuid=7c2fb1e7-04d0-4903-a675-8cda55bbb6ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "address": "fa:16:3e:aa:9e:27", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca16d7f1-7f", "ovs_interfaceid": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.074 244018 DEBUG nova.network.os_vif_util [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converting VIF {"id": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "address": "fa:16:3e:aa:9e:27", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca16d7f1-7f", "ovs_interfaceid": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.076 244018 DEBUG nova.network.os_vif_util [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:9e:27,bridge_name='br-int',has_traffic_filtering=True,id=ca16d7f1-7f94-4d64-906e-e1469230e4f1,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca16d7f1-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.079 244018 DEBUG nova.objects.instance [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:36:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:49.092 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed0fa4c-70dd-4759-bc8f-1c4258ef0f77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.098 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:36:49 np0005629333 nova_compute[244014]:  <uuid>7c2fb1e7-04d0-4903-a675-8cda55bbb6ed</uuid>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:  <name>instance-0000005c</name>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:  <memory>196608</memory>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-1161077750</nova:name>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:36:47</nova:creationTime>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.micro">
Feb 25 07:36:49 np0005629333 nova_compute[244014]:        <nova:memory>192</nova:memory>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:        <nova:user uuid="00eb5c915b2d41f69600acd33967d0f5">tempest-ListServerFiltersTestJSON-1481541933-project-member</nova:user>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:        <nova:project uuid="999f2a015b9c4bc98661fe6fe6db06a0">tempest-ListServerFiltersTestJSON-1481541933</nova:project>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:        <nova:port uuid="ca16d7f1-7f94-4d64-906e-e1469230e4f1">
Feb 25 07:36:49 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <entry name="serial">7c2fb1e7-04d0-4903-a675-8cda55bbb6ed</entry>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <entry name="uuid">7c2fb1e7-04d0-4903-a675-8cda55bbb6ed</entry>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_disk">
Feb 25 07:36:49 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:36:49 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_disk.config">
Feb 25 07:36:49 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:36:49 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:aa:9e:27"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <target dev="tapca16d7f1-7f"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/7c2fb1e7-04d0-4903-a675-8cda55bbb6ed/console.log" append="off"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:36:49 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:36:49 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:36:49 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:36:49 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
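The XML above is the complete guest definition Nova hands to libvirt for instance-0000005c. A hedged sketch of the underlying libvirt-python calls (Nova drives these through its own Guest/Host wrappers, so this shows the library API rather than Nova's exact code path):

    import libvirt

    xml = '...'  # the <domain type="kvm"> document logged above

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)  # persist the domain definition
        dom.create()               # boot the guest (same as `virsh start`)
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()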
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.104 244018 DEBUG nova.compute.manager [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Preparing to wait for external event network-vif-plugged-ca16d7f1-7f94-4d64-906e-e1469230e4f1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.105 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.105 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.105 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
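The three lockutils lines above show one acquire/hold/release cycle on the per-instance events lock. The same primitive reduced to a minimal sketch with oslo.concurrency's public API (both forms take the lock name seen in the log):

    from oslo_concurrency import lockutils

    LOCK = '7c2fb1e7-04d0-4903-a675-8cda55bbb6ed-events'

    @lockutils.synchronized(LOCK)
    def _create_or_get_event():
        # body runs with the named in-process lock held; lockutils emits
        # the Acquiring/acquired/released DEBUG lines seen in this journal
        return {}

    # equivalent context-manager form
    with lockutils.lock(LOCK):
        pass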
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.106 244018 DEBUG nova.virt.libvirt.vif [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:36:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1161077750',display_name='tempest-ListServerFiltersTestJSON-instance-1161077750',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1161077750',id=92,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='999f2a015b9c4bc98661fe6fe6db06a0',ramdisk_id='',reservation_id='r-hzdbk73p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1481541933',owner_user_name='tempest-ListServerFiltersTestJSON-1481541933-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:36:41Z,user_data=None,user_id='00eb5c915b2d41f69600acd33967d0f5',uuid=7c2fb1e7-04d0-4903-a675-8cda55bbb6ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "address": "fa:16:3e:aa:9e:27", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca16d7f1-7f", "ovs_interfaceid": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.106 244018 DEBUG nova.network.os_vif_util [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converting VIF {"id": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "address": "fa:16:3e:aa:9e:27", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca16d7f1-7f", "ovs_interfaceid": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.107 244018 DEBUG nova.network.os_vif_util [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:9e:27,bridge_name='br-int',has_traffic_filtering=True,id=ca16d7f1-7f94-4d64-906e-e1469230e4f1,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca16d7f1-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.107 244018 DEBUG os_vif [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:9e:27,bridge_name='br-int',has_traffic_filtering=True,id=ca16d7f1-7f94-4d64-906e-e1469230e4f1,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca16d7f1-7f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.108 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.108 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.108 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:36:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:49.109 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1e946cb2-300f-484f-b9d2-b4fb3e98f0aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38a239da-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:fa:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 497385, 'reachable_time': 39986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325979, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.110 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.110 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca16d7f1-7f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.111 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapca16d7f1-7f, col_values=(('external_ids', {'iface-id': 'ca16d7f1-7f94-4d64-906e-e1469230e4f1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:aa:9e:27', 'vm-uuid': '7c2fb1e7-04d0-4903-a675-8cda55bbb6ed'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
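The two commands above (AddPortCommand, then DbSetCommand on the Interface row) form one ovsdbapp transaction: add the tap port to br-int and stamp its external_ids so ovn-controller can match the interface to a logical port. A hedged sketch of the same transaction issued directly through ovsdbapp (the connection setup here is an assumption for illustration; os-vif wires up its own IDL connection internally):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapca16d7f1-7f', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapca16d7f1-7f',
            ('external_ids', {
                'iface-id': 'ca16d7f1-7f94-4d64-906e-e1469230e4f1',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:aa:9e:27',
                'vm-uuid': '7c2fb1e7-04d0-4903-a675-8cda55bbb6ed'})))

The iface-id written here is what ovn-controller matches against the logical port name, which is why the "Claiming lport" lines for this port appear further down.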
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.112 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:49 np0005629333 NetworkManager[49836]: <info>  [1772023009.1132] manager: (tapca16d7f1-7f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/382)
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.114 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.120 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.121 244018 INFO os_vif [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:9e:27,bridge_name='br-int',has_traffic_filtering=True,id=ca16d7f1-7f94-4d64-906e-e1469230e4f1,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca16d7f1-7f')#033[00m
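"Successfully plugged vif" is the return path of os-vif's public entry point. A hedged sketch of the call (InstanceInfo field values from the instance data logged above; `vif` is the VIFOpenVSwitch object sketched earlier):

    import os_vif
    from os_vif.objects import instance_info

    os_vif.initialize()
    info = instance_info.InstanceInfo(
        uuid='7c2fb1e7-04d0-4903-a675-8cda55bbb6ed',
        name='instance-0000005c')
    os_vif.plug(vif, info)  # dispatches to the 'ovs' plugin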
Feb 25 07:36:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:49.128 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d71e517d-c4ce-4ca8-b5a5-38d6b032dcbf]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap38a239da-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 497395, 'tstamp': 497395}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325981, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap38a239da-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 497397, 'tstamp': 497397}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325981, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
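The privsep replies above are pyroute2-style netlink dumps taken inside the ovnmeta-38a239da-c933-4cbc-be00-dd127471e198 namespace: the metadata interface tap38a239da-c1 carries 10.100.0.2/28 plus the 169.254.169.254/32 metadata address. A hedged sketch of the same address query with pyroute2:

    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-38a239da-c933-4cbc-be00-dd127471e198')
    try:
        idx = ns.link_lookup(ifname='tap38a239da-c1')[0]
        for addr in ns.get_addr(index=idx):
            # expect 10.100.0.2 and 169.254.169.254, as in the reply above
            print(dict(addr['attrs'])['IFA_ADDRESS'])
    finally:
        ns.close()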
Feb 25 07:36:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:49.129 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38a239da-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.130 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:49.136 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap38a239da-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.136 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:49.136 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:36:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:49.137 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap38a239da-c0, col_values=(('external_ids', {'iface-id': 'b61d6004-89be-4a9d-aeb0-ecb4f03f3526'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:36:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:49.138 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.166 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.167 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.167 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] No VIF found with MAC fa:16:3e:aa:9e:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.167 244018 INFO nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Using config drive#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.202 244018 DEBUG nova.storage.rbd_utils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
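The "does not exist" message above is nova's rbd_utils probing Ceph before creating the config-drive image; opening a missing image raises ImageNotFound. A hedged sketch of the same probe with the python-rados/python-rbd bindings (pool, client id, and conf path taken from the rbd import command logged below):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        with rbd.Image(ioctx, '7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_disk.config'):
            exists = True
    except rbd.ImageNotFound:
        exists = False
    finally:
        ioctx.close()
        cluster.shutdown()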
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.268 244018 DEBUG nova.network.neutron [req-e8af6b8e-de62-40ab-ae51-144eecd9013b req-b3005bae-79c1-4e14-b294-cfe9b9802be6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Updated VIF entry in instance network info cache for port 66c41a3b-23a5-4cbf-a70e-416adf1617e5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.268 244018 DEBUG nova.network.neutron [req-e8af6b8e-de62-40ab-ae51-144eecd9013b req-b3005bae-79c1-4e14-b294-cfe9b9802be6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Updating instance_info_cache with network_info: [{"id": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "address": "fa:16:3e:6d:11:ce", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66c41a3b-23", "ovs_interfaceid": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.305 244018 DEBUG oslo_concurrency.lockutils [req-e8af6b8e-de62-40ab-ae51-144eecd9013b req-b3005bae-79c1-4e14-b294-cfe9b9802be6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-6076a107-bdef-4c8a-8f75-887cdb4833f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.800 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023009.800289, 6076a107-bdef-4c8a-8f75-887cdb4833f0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.801 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] VM Started (Lifecycle Event)#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.817 244018 INFO nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Creating config drive at /var/lib/nova/instances/7c2fb1e7-04d0-4903-a675-8cda55bbb6ed/disk.config#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.825 244018 DEBUG oslo_concurrency.processutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7c2fb1e7-04d0-4903-a675-8cda55bbb6ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpj916sa5i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
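Config-drive generation is a plain subprocess call through oslo.concurrency, exactly as logged above. A minimal sketch of the same execute() call (argv reproduced from the log line; execute() returns a (stdout, stderr) tuple and raises ProcessExecutionError on a non-zero exit):

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/7c2fb1e7-04d0-4903-a675-8cda55bbb6ed/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmpj916sa5i')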
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.864 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.869 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023009.802458, 6076a107-bdef-4c8a-8f75-887cdb4833f0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.870 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.886 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.889 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
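In the sync line above, "DB power_state: 0" versus "VM power_state: 3" compares Nova's integer power states; the guest is briefly paused while libvirt finishes setup. The relevant constants (values as defined in nova/compute/power_state.py):

    # nova.compute.power_state constants referenced by the lines above
    NOSTATE = 0    # DB value while the instance is still building
    RUNNING = 1    # reported once the Resumed lifecycle event lands
    PAUSED = 3     # the VM power_state logged above
    SHUTDOWN = 4
    CRASHED = 6
    SUSPENDED = 7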
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.900 244018 DEBUG nova.compute.manager [req-e6b330bc-3030-4cac-a366-587417f0ec51 req-7c2d148a-3fb9-4d55-8868-ba2154504b29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Received event network-vif-plugged-66c41a3b-23a5-4cbf-a70e-416adf1617e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.900 244018 DEBUG oslo_concurrency.lockutils [req-e6b330bc-3030-4cac-a366-587417f0ec51 req-7c2d148a-3fb9-4d55-8868-ba2154504b29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "6076a107-bdef-4c8a-8f75-887cdb4833f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.901 244018 DEBUG oslo_concurrency.lockutils [req-e6b330bc-3030-4cac-a366-587417f0ec51 req-7c2d148a-3fb9-4d55-8868-ba2154504b29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6076a107-bdef-4c8a-8f75-887cdb4833f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.901 244018 DEBUG oslo_concurrency.lockutils [req-e6b330bc-3030-4cac-a366-587417f0ec51 req-7c2d148a-3fb9-4d55-8868-ba2154504b29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6076a107-bdef-4c8a-8f75-887cdb4833f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.901 244018 DEBUG nova.compute.manager [req-e6b330bc-3030-4cac-a366-587417f0ec51 req-7c2d148a-3fb9-4d55-8868-ba2154504b29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Processing event network-vif-plugged-66c41a3b-23a5-4cbf-a70e-416adf1617e5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.902 244018 DEBUG nova.compute.manager [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
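The prepare/receive/pop sequence across the lines above is Nova's external-event handshake: the compute manager registers an event named network-vif-plugged-<port-uuid> before plugging the VIF, and Neutron's notification pops it once OVN reports the port up. Reduced to a hedged threading sketch (Nova uses eventlet events internally; this shows the shape of the pattern, not its implementation):

    import threading

    events = {}

    def prepare_for_instance_event(name):
        # registered before the VIF is plugged
        events[name] = threading.Event()
        return events[name]

    def external_instance_event(name):
        # invoked when Neutron delivers the event over RPC
        events.pop(name).set()

    ev = prepare_for_instance_event(
        'network-vif-plugged-66c41a3b-23a5-4cbf-a70e-416adf1617e5')
    # ... os-vif plugs the port, OVN binds it, Neutron notifies ...
    external_instance_event(
        'network-vif-plugged-66c41a3b-23a5-4cbf-a70e-416adf1617e5')
    ev.wait(timeout=300)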
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.907 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.913 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.915 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023009.9073553, 6076a107-bdef-4c8a-8f75-887cdb4833f0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.915 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.918 244018 INFO nova.virt.libvirt.driver [-] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Instance spawned successfully.#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.918 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.938 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.943 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.947 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.947 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.948 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.948 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.949 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.949 244018 DEBUG nova.virt.libvirt.driver [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.952 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.966 244018 DEBUG oslo_concurrency.processutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7c2fb1e7-04d0-4903-a675-8cda55bbb6ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpj916sa5i" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.988 244018 DEBUG nova.storage.rbd_utils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] rbd image 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:36:49 np0005629333 nova_compute[244014]: 2026-02-25 12:36:49.991 244018 DEBUG oslo_concurrency.processutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7c2fb1e7-04d0-4903-a675-8cda55bbb6ed/disk.config 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:36:50 np0005629333 nova_compute[244014]: 2026-02-25 12:36:50.024 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:36:50 np0005629333 nova_compute[244014]: 2026-02-25 12:36:50.026 244018 INFO nova.compute.manager [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Took 9.96 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:36:50 np0005629333 nova_compute[244014]: 2026-02-25 12:36:50.027 244018 DEBUG nova.compute.manager [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:36:50 np0005629333 nova_compute[244014]: 2026-02-25 12:36:50.085 244018 INFO nova.compute.manager [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Took 11.19 seconds to build instance.#033[00m
Feb 25 07:36:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 305 active+clean; 292 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.4 MiB/s wr, 217 op/s
Feb 25 07:36:50 np0005629333 nova_compute[244014]: 2026-02-25 12:36:50.105 244018 DEBUG oslo_concurrency.lockutils [None req-323436de-f8e3-4f44-87d7-98e92691afce 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "6076a107-bdef-4c8a-8f75-887cdb4833f0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.293s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:50 np0005629333 nova_compute[244014]: 2026-02-25 12:36:50.156 244018 DEBUG oslo_concurrency.processutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7c2fb1e7-04d0-4903-a675-8cda55bbb6ed/disk.config 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:36:50 np0005629333 nova_compute[244014]: 2026-02-25 12:36:50.157 244018 INFO nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Deleting local config drive /var/lib/nova/instances/7c2fb1e7-04d0-4903-a675-8cda55bbb6ed/disk.config because it was imported into RBD.#033[00m
Feb 25 07:36:50 np0005629333 NetworkManager[49836]: <info>  [1772023010.2021] manager: (tapca16d7f1-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/383)
Feb 25 07:36:50 np0005629333 kernel: tapca16d7f1-7f: entered promiscuous mode
Feb 25 07:36:50 np0005629333 nova_compute[244014]: 2026-02-25 12:36:50.203 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:50 np0005629333 ovn_controller[147040]: 2026-02-25T12:36:50Z|00909|binding|INFO|Claiming lport ca16d7f1-7f94-4d64-906e-e1469230e4f1 for this chassis.
Feb 25 07:36:50 np0005629333 ovn_controller[147040]: 2026-02-25T12:36:50Z|00910|binding|INFO|ca16d7f1-7f94-4d64-906e-e1469230e4f1: Claiming fa:16:3e:aa:9e:27 10.100.0.3
Feb 25 07:36:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:50.211 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:9e:27 10.100.0.3'], port_security=['fa:16:3e:aa:9e:27 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '7c2fb1e7-04d0-4903-a675-8cda55bbb6ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38a239da-c933-4cbc-be00-dd127471e198', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '999f2a015b9c4bc98661fe6fe6db06a0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '90d0dcc5-a9c4-4f99-8901-06a2a2854544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80217008-ae39-43e4-aa35-a210336c586d, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ca16d7f1-7f94-4d64-906e-e1469230e4f1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:36:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:50.212 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ca16d7f1-7f94-4d64-906e-e1469230e4f1 in datapath 38a239da-c933-4cbc-be00-dd127471e198 bound to our chassis#033[00m
Feb 25 07:36:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:50.213 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 38a239da-c933-4cbc-be00-dd127471e198#033[00m
Feb 25 07:36:50 np0005629333 ovn_controller[147040]: 2026-02-25T12:36:50Z|00911|binding|INFO|Setting lport ca16d7f1-7f94-4d64-906e-e1469230e4f1 ovn-installed in OVS
Feb 25 07:36:50 np0005629333 ovn_controller[147040]: 2026-02-25T12:36:50Z|00912|binding|INFO|Setting lport ca16d7f1-7f94-4d64-906e-e1469230e4f1 up in Southbound
Feb 25 07:36:50 np0005629333 NetworkManager[49836]: <info>  [1772023010.2165] device (tapca16d7f1-7f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:36:50 np0005629333 nova_compute[244014]: 2026-02-25 12:36:50.216 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:50 np0005629333 NetworkManager[49836]: <info>  [1772023010.2173] device (tapca16d7f1-7f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:36:50 np0005629333 nova_compute[244014]: 2026-02-25 12:36:50.225 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:50 np0005629333 systemd-machined[210048]: New machine qemu-118-instance-0000005c.
Feb 25 07:36:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:50.241 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[18c69aae-3c50-4645-895a-37542a8f5937]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:50 np0005629333 systemd[1]: Started Virtual Machine qemu-118-instance-0000005c.
Feb 25 07:36:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:50.262 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[13c2b547-68ee-498c-bbcc-9c571fb57467]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:50.264 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4c866303-1a56-4fbc-b718-8213ee7d9857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:50.283 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[26b0ede3-35cf-4307-812a-0da8be4feff4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:50.299 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[518f15f8-7ec9-4ce2-bc9f-f6c297716ad9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38a239da-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:fa:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 497385, 'reachable_time': 39986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326110, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:50.312 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[520a7f98-f042-4e61-b38c-7484cdf4ed29]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap38a239da-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 497395, 'tstamp': 497395}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326111, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap38a239da-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 497397, 'tstamp': 497397}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326111, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:36:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:50.313 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38a239da-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:36:50 np0005629333 nova_compute[244014]: 2026-02-25 12:36:50.314 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:50 np0005629333 nova_compute[244014]: 2026-02-25 12:36:50.315 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:50.317 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap38a239da-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:36:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:50.317 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:36:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:50.318 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap38a239da-c0, col_values=(('external_ids', {'iface-id': 'b61d6004-89be-4a9d-aeb0-ecb4f03f3526'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:36:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:50.318 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:36:50 np0005629333 nova_compute[244014]: 2026-02-25 12:36:50.735 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023010.7348795, 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:36:50 np0005629333 nova_compute[244014]: 2026-02-25 12:36:50.736 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] VM Started (Lifecycle Event)#033[00m
Feb 25 07:36:50 np0005629333 nova_compute[244014]: 2026-02-25 12:36:50.764 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:36:50 np0005629333 nova_compute[244014]: 2026-02-25 12:36:50.768 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023010.735014, 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:36:50 np0005629333 nova_compute[244014]: 2026-02-25 12:36:50.769 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:36:50 np0005629333 nova_compute[244014]: 2026-02-25 12:36:50.784 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:36:50 np0005629333 nova_compute[244014]: 2026-02-25 12:36:50.787 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:36:50 np0005629333 nova_compute[244014]: 2026-02-25 12:36:50.793 244018 DEBUG nova.network.neutron [req-2127f626-2d1a-414e-a97a-bdbc14fb47c0 req-69722ca1-b071-4f1f-9051-fc16d376d5d8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Updated VIF entry in instance network info cache for port ca16d7f1-7f94-4d64-906e-e1469230e4f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:36:50 np0005629333 nova_compute[244014]: 2026-02-25 12:36:50.794 244018 DEBUG nova.network.neutron [req-2127f626-2d1a-414e-a97a-bdbc14fb47c0 req-69722ca1-b071-4f1f-9051-fc16d376d5d8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Updating instance_info_cache with network_info: [{"id": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "address": "fa:16:3e:aa:9e:27", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca16d7f1-7f", "ovs_interfaceid": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:36:50 np0005629333 nova_compute[244014]: 2026-02-25 12:36:50.809 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:36:50 np0005629333 nova_compute[244014]: 2026-02-25 12:36:50.814 244018 DEBUG oslo_concurrency.lockutils [req-2127f626-2d1a-414e-a97a-bdbc14fb47c0 req-69722ca1-b071-4f1f-9051-fc16d376d5d8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-7c2fb1e7-04d0-4903-a675-8cda55bbb6ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:36:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 305 active+clean; 293 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.0 MiB/s wr, 233 op/s
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.135 244018 DEBUG nova.compute.manager [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Received event network-vif-plugged-66c41a3b-23a5-4cbf-a70e-416adf1617e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.135 244018 DEBUG oslo_concurrency.lockutils [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "6076a107-bdef-4c8a-8f75-887cdb4833f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.135 244018 DEBUG oslo_concurrency.lockutils [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6076a107-bdef-4c8a-8f75-887cdb4833f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.136 244018 DEBUG oslo_concurrency.lockutils [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6076a107-bdef-4c8a-8f75-887cdb4833f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.136 244018 DEBUG nova.compute.manager [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] No waiting events found dispatching network-vif-plugged-66c41a3b-23a5-4cbf-a70e-416adf1617e5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.136 244018 WARNING nova.compute.manager [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Received unexpected event network-vif-plugged-66c41a3b-23a5-4cbf-a70e-416adf1617e5 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.136 244018 DEBUG nova.compute.manager [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Received event network-vif-plugged-ca16d7f1-7f94-4d64-906e-e1469230e4f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.136 244018 DEBUG oslo_concurrency.lockutils [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.136 244018 DEBUG oslo_concurrency.lockutils [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.137 244018 DEBUG oslo_concurrency.lockutils [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.137 244018 DEBUG nova.compute.manager [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Processing event network-vif-plugged-ca16d7f1-7f94-4d64-906e-e1469230e4f1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.137 244018 DEBUG nova.compute.manager [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Received event network-vif-plugged-ca16d7f1-7f94-4d64-906e-e1469230e4f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.137 244018 DEBUG oslo_concurrency.lockutils [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.137 244018 DEBUG oslo_concurrency.lockutils [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.138 244018 DEBUG oslo_concurrency.lockutils [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.138 244018 DEBUG nova.compute.manager [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] No waiting events found dispatching network-vif-plugged-ca16d7f1-7f94-4d64-906e-e1469230e4f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.138 244018 WARNING nova.compute.manager [req-1c4b0e00-b7cf-4997-a72d-c46d1bbb9177 req-268b7a92-7830-42d9-ac1d-305b27ca8e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Received unexpected event network-vif-plugged-ca16d7f1-7f94-4d64-906e-e1469230e4f1 for instance with vm_state building and task_state spawning.#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.138 244018 DEBUG nova.compute.manager [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.141 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023012.1414227, 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.141 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.143 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.145 244018 INFO nova.virt.libvirt.driver [-] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Instance spawned successfully.#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.145 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.174 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.186 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.192 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.192 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.193 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.194 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.195 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.195 244018 DEBUG nova.virt.libvirt.driver [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.230 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.261 244018 INFO nova.compute.manager [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Took 10.63 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.261 244018 DEBUG nova.compute.manager [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.327 244018 INFO nova.compute.manager [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Took 11.83 seconds to build instance.#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.346 244018 DEBUG oslo_concurrency.lockutils [None req-4967032f-cb41-40e9-94c1-0a0d281c1e25 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.990 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772022997.9888468, 2184a715-0ac8-4fc2-aa99-fae3b8e32edf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:36:52 np0005629333 nova_compute[244014]: 2026-02-25 12:36:52.991 244018 INFO nova.compute.manager [-] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:36:53 np0005629333 nova_compute[244014]: 2026-02-25 12:36:53.085 244018 DEBUG nova.compute.manager [None req-23e7d739-d1dd-45af-84aa-649f197d7681 - - - - - -] [instance: 2184a715-0ac8-4fc2-aa99-fae3b8e32edf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:36:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1659: 305 pgs: 305 active+clean; 293 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 1.5 MiB/s wr, 180 op/s
Feb 25 07:36:54 np0005629333 nova_compute[244014]: 2026-02-25 12:36:54.114 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:54 np0005629333 nova_compute[244014]: 2026-02-25 12:36:54.939 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:55.016 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:36:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:55.017 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:36:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:36:55.018 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:36:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 293 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.4 MiB/s wr, 169 op/s
Feb 25 07:36:57 np0005629333 ovn_controller[147040]: 2026-02-25T12:36:57Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:65:76:ae 10.100.0.8
Feb 25 07:36:57 np0005629333 ovn_controller[147040]: 2026-02-25T12:36:57Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:65:76:ae 10.100.0.8
Feb 25 07:36:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:36:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1661: 305 pgs: 305 active+clean; 325 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 3.4 MiB/s wr, 287 op/s
Feb 25 07:36:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:36:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:36:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:36:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:36:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:36:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:36:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:36:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:36:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:36:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:36:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:36:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:36:59 np0005629333 nova_compute[244014]: 2026-02-25 12:36:59.117 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:59 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:36:59 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:36:59 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:36:59 np0005629333 podman[326297]: 2026-02-25 12:36:59.254644324 +0000 UTC m=+0.033403035 container create 721e551f6a4a986370240ae470d311b5c36797f3627957283e1d81fa28dbb307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 07:36:59 np0005629333 systemd[1]: Started libpod-conmon-721e551f6a4a986370240ae470d311b5c36797f3627957283e1d81fa28dbb307.scope.
Feb 25 07:36:59 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:36:59 np0005629333 podman[326297]: 2026-02-25 12:36:59.327613156 +0000 UTC m=+0.106371917 container init 721e551f6a4a986370240ae470d311b5c36797f3627957283e1d81fa28dbb307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:36:59 np0005629333 podman[326297]: 2026-02-25 12:36:59.238456096 +0000 UTC m=+0.017214847 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:36:59 np0005629333 podman[326297]: 2026-02-25 12:36:59.336797376 +0000 UTC m=+0.115556097 container start 721e551f6a4a986370240ae470d311b5c36797f3627957283e1d81fa28dbb307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_euclid, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 07:36:59 np0005629333 podman[326297]: 2026-02-25 12:36:59.339678507 +0000 UTC m=+0.118437318 container attach 721e551f6a4a986370240ae470d311b5c36797f3627957283e1d81fa28dbb307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_euclid, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 07:36:59 np0005629333 fervent_euclid[326313]: 167 167
Feb 25 07:36:59 np0005629333 systemd[1]: libpod-721e551f6a4a986370240ae470d311b5c36797f3627957283e1d81fa28dbb307.scope: Deactivated successfully.
Feb 25 07:36:59 np0005629333 podman[326297]: 2026-02-25 12:36:59.343185716 +0000 UTC m=+0.121944447 container died 721e551f6a4a986370240ae470d311b5c36797f3627957283e1d81fa28dbb307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_euclid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 07:36:59 np0005629333 systemd[1]: var-lib-containers-storage-overlay-15f08a8f123dbf29353256e6bb5052dcff2805476144aae4b8653a2e27635ca8-merged.mount: Deactivated successfully.
Feb 25 07:36:59 np0005629333 podman[326297]: 2026-02-25 12:36:59.379821382 +0000 UTC m=+0.158580103 container remove 721e551f6a4a986370240ae470d311b5c36797f3627957283e1d81fa28dbb307 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:36:59 np0005629333 systemd[1]: libpod-conmon-721e551f6a4a986370240ae470d311b5c36797f3627957283e1d81fa28dbb307.scope: Deactivated successfully.
Feb 25 07:36:59 np0005629333 podman[326336]: 2026-02-25 12:36:59.512595554 +0000 UTC m=+0.027060316 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:36:59 np0005629333 podman[326336]: 2026-02-25 12:36:59.781842804 +0000 UTC m=+0.296307536 container create 897a3856d2e0cb96a9bb34cdf5e95bd8fb53bba004c63f13f8d002c2ee053408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pasteur, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 07:36:59 np0005629333 systemd[1]: Started libpod-conmon-897a3856d2e0cb96a9bb34cdf5e95bd8fb53bba004c63f13f8d002c2ee053408.scope.
Feb 25 07:36:59 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:36:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd32251cfd9138fc7aa38c5574acbb7c16b3ed4544f1448be25e3ab55545f39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:36:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd32251cfd9138fc7aa38c5574acbb7c16b3ed4544f1448be25e3ab55545f39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:36:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd32251cfd9138fc7aa38c5574acbb7c16b3ed4544f1448be25e3ab55545f39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:36:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd32251cfd9138fc7aa38c5574acbb7c16b3ed4544f1448be25e3ab55545f39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:36:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd32251cfd9138fc7aa38c5574acbb7c16b3ed4544f1448be25e3ab55545f39/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:36:59 np0005629333 podman[326336]: 2026-02-25 12:36:59.928870559 +0000 UTC m=+0.443335311 container init 897a3856d2e0cb96a9bb34cdf5e95bd8fb53bba004c63f13f8d002c2ee053408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pasteur, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:36:59 np0005629333 podman[326336]: 2026-02-25 12:36:59.936129554 +0000 UTC m=+0.450594306 container start 897a3856d2e0cb96a9bb34cdf5e95bd8fb53bba004c63f13f8d002c2ee053408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 07:36:59 np0005629333 nova_compute[244014]: 2026-02-25 12:36:59.946 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:36:59 np0005629333 podman[326336]: 2026-02-25 12:36:59.980090447 +0000 UTC m=+0.494555179 container attach 897a3856d2e0cb96a9bb34cdf5e95bd8fb53bba004c63f13f8d002c2ee053408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:37:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 325 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 201 op/s
Feb 25 07:37:00 np0005629333 quirky_pasteur[326352]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:37:00 np0005629333 quirky_pasteur[326352]: --> All data devices are unavailable
Feb 25 07:37:00 np0005629333 systemd[1]: libpod-897a3856d2e0cb96a9bb34cdf5e95bd8fb53bba004c63f13f8d002c2ee053408.scope: Deactivated successfully.
Feb 25 07:37:00 np0005629333 podman[326336]: 2026-02-25 12:37:00.379342581 +0000 UTC m=+0.893807343 container died 897a3856d2e0cb96a9bb34cdf5e95bd8fb53bba004c63f13f8d002c2ee053408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pasteur, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 07:37:00 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ecd32251cfd9138fc7aa38c5574acbb7c16b3ed4544f1448be25e3ab55545f39-merged.mount: Deactivated successfully.
Feb 25 07:37:00 np0005629333 podman[326336]: 2026-02-25 12:37:00.622355229 +0000 UTC m=+1.136819961 container remove 897a3856d2e0cb96a9bb34cdf5e95bd8fb53bba004c63f13f8d002c2ee053408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 07:37:00 np0005629333 systemd[1]: libpod-conmon-897a3856d2e0cb96a9bb34cdf5e95bd8fb53bba004c63f13f8d002c2ee053408.scope: Deactivated successfully.
Feb 25 07:37:01 np0005629333 podman[326446]: 2026-02-25 12:37:01.035829275 +0000 UTC m=+0.040440184 container create 26da9e7e76af979aa84a94be6e535572de4f06831b01f71b7225789bb3767f17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:37:01 np0005629333 systemd[1]: Started libpod-conmon-26da9e7e76af979aa84a94be6e535572de4f06831b01f71b7225789bb3767f17.scope.
Feb 25 07:37:01 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:37:01 np0005629333 podman[326446]: 2026-02-25 12:37:01.113478969 +0000 UTC m=+0.118089868 container init 26da9e7e76af979aa84a94be6e535572de4f06831b01f71b7225789bb3767f17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 07:37:01 np0005629333 podman[326446]: 2026-02-25 12:37:01.017199998 +0000 UTC m=+0.021810907 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:37:01 np0005629333 podman[326446]: 2026-02-25 12:37:01.117789471 +0000 UTC m=+0.122400380 container start 26da9e7e76af979aa84a94be6e535572de4f06831b01f71b7225789bb3767f17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hawking, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 07:37:01 np0005629333 thirsty_hawking[326462]: 167 167
Feb 25 07:37:01 np0005629333 systemd[1]: libpod-26da9e7e76af979aa84a94be6e535572de4f06831b01f71b7225789bb3767f17.scope: Deactivated successfully.
Feb 25 07:37:01 np0005629333 podman[326446]: 2026-02-25 12:37:01.123940875 +0000 UTC m=+0.128551794 container attach 26da9e7e76af979aa84a94be6e535572de4f06831b01f71b7225789bb3767f17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hawking, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 07:37:01 np0005629333 podman[326446]: 2026-02-25 12:37:01.124559462 +0000 UTC m=+0.129170371 container died 26da9e7e76af979aa84a94be6e535572de4f06831b01f71b7225789bb3767f17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 07:37:01 np0005629333 systemd[1]: var-lib-containers-storage-overlay-8a202e237e77d740205a7fffea1840055449fcba953cf64590e00ac3fcee4a21-merged.mount: Deactivated successfully.
Feb 25 07:37:01 np0005629333 podman[326446]: 2026-02-25 12:37:01.172136977 +0000 UTC m=+0.176747886 container remove 26da9e7e76af979aa84a94be6e535572de4f06831b01f71b7225789bb3767f17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 07:37:01 np0005629333 systemd[1]: libpod-conmon-26da9e7e76af979aa84a94be6e535572de4f06831b01f71b7225789bb3767f17.scope: Deactivated successfully.
Feb 25 07:37:01 np0005629333 podman[326487]: 2026-02-25 12:37:01.34066906 +0000 UTC m=+0.059538344 container create f656a2be4fa9558c67a2f95a9a6a64a683d077d2d194373a8bbf38fdd243df31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_noyce, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:37:01 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:01Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6d:11:ce 10.100.0.6
Feb 25 07:37:01 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:01Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6d:11:ce 10.100.0.6
Feb 25 07:37:01 np0005629333 systemd[1]: Started libpod-conmon-f656a2be4fa9558c67a2f95a9a6a64a683d077d2d194373a8bbf38fdd243df31.scope.
Feb 25 07:37:01 np0005629333 podman[326487]: 2026-02-25 12:37:01.317614708 +0000 UTC m=+0.036484032 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:37:01 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:37:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53407574a9fc29540ec0a7787606527ecfaa7cadb6474cda56fae3ad4eabf12f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:37:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53407574a9fc29540ec0a7787606527ecfaa7cadb6474cda56fae3ad4eabf12f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:37:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53407574a9fc29540ec0a7787606527ecfaa7cadb6474cda56fae3ad4eabf12f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:37:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53407574a9fc29540ec0a7787606527ecfaa7cadb6474cda56fae3ad4eabf12f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:37:01 np0005629333 podman[326487]: 2026-02-25 12:37:01.586863918 +0000 UTC m=+0.305733192 container init f656a2be4fa9558c67a2f95a9a6a64a683d077d2d194373a8bbf38fdd243df31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 07:37:01 np0005629333 podman[326487]: 2026-02-25 12:37:01.593137085 +0000 UTC m=+0.312006339 container start f656a2be4fa9558c67a2f95a9a6a64a683d077d2d194373a8bbf38fdd243df31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_noyce, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:37:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:37:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:37:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:37:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:37:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:37:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:37:01 np0005629333 podman[326487]: 2026-02-25 12:37:01.735730295 +0000 UTC m=+0.454599559 container attach f656a2be4fa9558c67a2f95a9a6a64a683d077d2d194373a8bbf38fdd243df31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_noyce, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]: {
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:    "0": [
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:        {
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "devices": [
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "/dev/loop3"
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            ],
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "lv_name": "ceph_lv0",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "lv_size": "21470642176",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "name": "ceph_lv0",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "tags": {
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.cluster_name": "ceph",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.crush_device_class": "",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.encrypted": "0",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.objectstore": "bluestore",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.osd_id": "0",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.type": "block",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.vdo": "0",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.with_tpm": "0"
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            },
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "type": "block",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "vg_name": "ceph_vg0"
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:        }
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:    ],
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:    "1": [
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:        {
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "devices": [
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "/dev/loop4"
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            ],
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "lv_name": "ceph_lv1",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "lv_size": "21470642176",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "name": "ceph_lv1",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "tags": {
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.cluster_name": "ceph",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.crush_device_class": "",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.encrypted": "0",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.objectstore": "bluestore",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.osd_id": "1",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.type": "block",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.vdo": "0",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.with_tpm": "0"
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            },
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "type": "block",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "vg_name": "ceph_vg1"
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:        }
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:    ],
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:    "2": [
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:        {
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "devices": [
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "/dev/loop5"
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            ],
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "lv_name": "ceph_lv2",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "lv_size": "21470642176",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "name": "ceph_lv2",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "tags": {
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.cluster_name": "ceph",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.crush_device_class": "",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.encrypted": "0",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.objectstore": "bluestore",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.osd_id": "2",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.type": "block",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.vdo": "0",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:                "ceph.with_tpm": "0"
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            },
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "type": "block",
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:            "vg_name": "ceph_vg2"
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:        }
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]:    ]
Feb 25 07:37:01 np0005629333 quizzical_noyce[326503]: }
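
The JSON printed by quizzical_noyce above is keyed by OSD id, one logical-volume record per OSD, carrying the LVM tags cephadm uses for discovery. It has the shape of ceph-volume lvm list --format json output (an inference from the content; the log does not name the command). A minimal Python sketch for summarizing such a dump; the filename osd_list.json is hypothetical:

    import json

    # Parse output shaped like the block above: {"0": [{...}], "1": [{...}], ...}
    # "osd_list.json" is a hypothetical capture of that JSON, not a file from the log.
    with open("osd_list.json") as f:
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']} "
                  f"objectstore={tags['ceph.objectstore']}")

Run against the block above this would print three lines, mapping osd.0/1/2 to /dev/ceph_vg0..2/ceph_lv0..2 on /dev/loop3..5.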
Feb 25 07:37:01 np0005629333 systemd[1]: libpod-f656a2be4fa9558c67a2f95a9a6a64a683d077d2d194373a8bbf38fdd243df31.scope: Deactivated successfully.
Feb 25 07:37:01 np0005629333 conmon[326503]: conmon f656a2be4fa9558c67a2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f656a2be4fa9558c67a2f95a9a6a64a683d077d2d194373a8bbf38fdd243df31.scope/container/memory.events
Feb 25 07:37:01 np0005629333 podman[326487]: 2026-02-25 12:37:01.878159031 +0000 UTC m=+0.597028315 container died f656a2be4fa9558c67a2f95a9a6a64a683d077d2d194373a8bbf38fdd243df31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 07:37:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 341 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 3.3 MiB/s wr, 243 op/s
Feb 25 07:37:02 np0005629333 systemd[1]: var-lib-containers-storage-overlay-53407574a9fc29540ec0a7787606527ecfaa7cadb6474cda56fae3ad4eabf12f-merged.mount: Deactivated successfully.
Feb 25 07:37:02 np0005629333 podman[326487]: 2026-02-25 12:37:02.610612622 +0000 UTC m=+1.329481866 container remove f656a2be4fa9558c67a2f95a9a6a64a683d077d2d194373a8bbf38fdd243df31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_noyce, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:37:02 np0005629333 systemd[1]: libpod-conmon-f656a2be4fa9558c67a2f95a9a6a64a683d077d2d194373a8bbf38fdd243df31.scope: Deactivated successfully.
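
The f656a2be... container above ran the full short-lived podman lifecycle this journal keeps repeating: image pull, create, init, start, attach, died, remove, then deactivation of the libpod-conmon scope. A sketch that follows the same event stream with podman events; the JSON field names are as observed in recent podman and may vary by version:

    import json
    import subprocess

    # Stream container lifecycle events (create/init/start/attach/died/remove),
    # the same sequence the journal shows for these short-lived cephadm probes.
    # Runs until interrupted (Ctrl-C).
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        # Field names as seen in recent podman JSON events; versions may differ.
        print(ev.get("Time"), ev.get("Status"), ev.get("Name"), ev.get("Image"))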
Feb 25 07:37:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:37:03 np0005629333 podman[326589]: 2026-02-25 12:37:03.024604441 +0000 UTC m=+0.022596080 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:37:03 np0005629333 podman[326589]: 2026-02-25 12:37:03.335463897 +0000 UTC m=+0.333455446 container create 7689547b44eab031d275f76355195eb0a49ba6808ae0e5ac5bc8f3d1a9536355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 07:37:03 np0005629333 systemd[1]: Started libpod-conmon-7689547b44eab031d275f76355195eb0a49ba6808ae0e5ac5bc8f3d1a9536355.scope.
Feb 25 07:37:03 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:37:03 np0005629333 nova_compute[244014]: 2026-02-25 12:37:03.822 244018 DEBUG oslo_concurrency.lockutils [None req-2d70de51-a2d2-45f7-bb5e-4c5eeb09068d 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "0061daee-43d7-458b-8645-0ad3f8fbb2af" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:37:03 np0005629333 nova_compute[244014]: 2026-02-25 12:37:03.824 244018 DEBUG oslo_concurrency.lockutils [None req-2d70de51-a2d2-45f7-bb5e-4c5eeb09068d 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:37:03 np0005629333 nova_compute[244014]: 2026-02-25 12:37:03.824 244018 DEBUG nova.compute.manager [None req-2d70de51-a2d2-45f7-bb5e-4c5eeb09068d 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:37:03 np0005629333 nova_compute[244014]: 2026-02-25 12:37:03.828 244018 DEBUG nova.compute.manager [None req-2d70de51-a2d2-45f7-bb5e-4c5eeb09068d 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Feb 25 07:37:03 np0005629333 nova_compute[244014]: 2026-02-25 12:37:03.828 244018 DEBUG nova.objects.instance [None req-2d70de51-a2d2-45f7-bb5e-4c5eeb09068d 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'flavor' on Instance uuid 0061daee-43d7-458b-8645-0ad3f8fbb2af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:37:03 np0005629333 nova_compute[244014]: 2026-02-25 12:37:03.857 244018 DEBUG nova.virt.libvirt.driver [None req-2d70de51-a2d2-45f7-bb5e-4c5eeb09068d 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
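
The Acquiring/acquired lines above are oslo.concurrency's lock tracing: do_stop_instance serializes on the instance UUID before checking power state and requesting a clean shutdown. A minimal sketch of that locking pattern (illustrative only, not Nova's actual code):

    from oslo_concurrency import lockutils

    INSTANCE_UUID = "0061daee-43d7-458b-8645-0ad3f8fbb2af"  # from the log above

    # lockutils.synchronized emits the Acquiring/acquired/released DEBUG lines
    # seen above; they are logged from its "inner" wrapper (lockutils.py:404/409/423).
    @lockutils.synchronized(INSTANCE_UUID)
    def do_stop_instance():
        # Stand-in for Nova's power-state check and clean guest shutdown.
        print(f"stopping {INSTANCE_UUID}")

    do_stop_instance()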
Feb 25 07:37:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 351 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.2 MiB/s wr, 211 op/s
Feb 25 07:37:04 np0005629333 nova_compute[244014]: 2026-02-25 12:37:04.124 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:04 np0005629333 podman[326589]: 2026-02-25 12:37:04.190353733 +0000 UTC m=+1.188345372 container init 7689547b44eab031d275f76355195eb0a49ba6808ae0e5ac5bc8f3d1a9536355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 07:37:04 np0005629333 podman[326589]: 2026-02-25 12:37:04.196897588 +0000 UTC m=+1.194889137 container start 7689547b44eab031d275f76355195eb0a49ba6808ae0e5ac5bc8f3d1a9536355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:37:04 np0005629333 happy_dirac[326606]: 167 167
Feb 25 07:37:04 np0005629333 systemd[1]: libpod-7689547b44eab031d275f76355195eb0a49ba6808ae0e5ac5bc8f3d1a9536355.scope: Deactivated successfully.
Feb 25 07:37:04 np0005629333 podman[326589]: 2026-02-25 12:37:04.370796455 +0000 UTC m=+1.368788104 container attach 7689547b44eab031d275f76355195eb0a49ba6808ae0e5ac5bc8f3d1a9536355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 07:37:04 np0005629333 podman[326589]: 2026-02-25 12:37:04.3712952 +0000 UTC m=+1.369286789 container died 7689547b44eab031d275f76355195eb0a49ba6808ae0e5ac5bc8f3d1a9536355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dirac, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 07:37:04 np0005629333 nova_compute[244014]: 2026-02-25 12:37:04.948 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:05 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c4f94ccfc84b667e3ecbc6ee9fc3fcae98ef109078cc5ae4dc7504eaa1971b16-merged.mount: Deactivated successfully.
Feb 25 07:37:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1665: 305 pgs: 305 active+clean; 351 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.2 MiB/s wr, 182 op/s
Feb 25 07:37:07 np0005629333 podman[326589]: 2026-02-25 12:37:07.304494654 +0000 UTC m=+4.302486243 container remove 7689547b44eab031d275f76355195eb0a49ba6808ae0e5ac5bc8f3d1a9536355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dirac, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 07:37:07 np0005629333 systemd[1]: libpod-conmon-7689547b44eab031d275f76355195eb0a49ba6808ae0e5ac5bc8f3d1a9536355.scope: Deactivated successfully.
Feb 25 07:37:07 np0005629333 podman[326630]: 2026-02-25 12:37:07.50529047 +0000 UTC m=+0.034785862 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:37:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:37:08 np0005629333 podman[326630]: 2026-02-25 12:37:08.012466463 +0000 UTC m=+0.541961795 container create e55fcd29f71a2dcbe12b876744d704cabae8680179001b2d05678cdaf2e01153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 07:37:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 370 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.7 MiB/s wr, 217 op/s
Feb 25 07:37:08 np0005629333 systemd[1]: Started libpod-conmon-e55fcd29f71a2dcbe12b876744d704cabae8680179001b2d05678cdaf2e01153.scope.
Feb 25 07:37:08 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:37:08 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d8cc31121381984df3f43e602664d586716c6d0b333549a816f3501cb703b8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:37:08 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d8cc31121381984df3f43e602664d586716c6d0b333549a816f3501cb703b8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:37:08 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d8cc31121381984df3f43e602664d586716c6d0b333549a816f3501cb703b8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:37:08 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d8cc31121381984df3f43e602664d586716c6d0b333549a816f3501cb703b8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:37:08 np0005629333 podman[326630]: 2026-02-25 12:37:08.641434912 +0000 UTC m=+1.170930294 container init e55fcd29f71a2dcbe12b876744d704cabae8680179001b2d05678cdaf2e01153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:37:08 np0005629333 podman[326630]: 2026-02-25 12:37:08.651676531 +0000 UTC m=+1.181171863 container start e55fcd29f71a2dcbe12b876744d704cabae8680179001b2d05678cdaf2e01153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_meninsky, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:37:08 np0005629333 podman[326630]: 2026-02-25 12:37:08.805716078 +0000 UTC m=+1.335211470 container attach e55fcd29f71a2dcbe12b876744d704cabae8680179001b2d05678cdaf2e01153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_meninsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:37:09 np0005629333 nova_compute[244014]: 2026-02-25 12:37:09.127 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:09 np0005629333 lvm[326726]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:37:09 np0005629333 lvm[326726]: VG ceph_vg1 finished
Feb 25 07:37:09 np0005629333 lvm[326725]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:37:09 np0005629333 lvm[326725]: VG ceph_vg0 finished
Feb 25 07:37:09 np0005629333 lvm[326728]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:37:09 np0005629333 lvm[326728]: VG ceph_vg2 finished
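
The lvm messages above are event-driven autoactivation: as each loop-backed PV comes online, its VG becomes complete and activation proceeds. The resulting groups can be checked afterwards with LVM's JSON reporting, sketched here:

    import json
    import subprocess

    # LVM reporting tools support --reportformat json; confirm the ceph_vg*
    # groups the autoactivation messages above declared complete.
    out = subprocess.run(
        ["vgs", "--reportformat", "json", "-o", "vg_name,pv_count,lv_count"],
        capture_output=True, text=True, check=True,
    ).stdout
    for vg in json.loads(out)["report"][0]["vg"]:
        print(vg["vg_name"], vg["pv_count"], vg["lv_count"])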
Feb 25 07:37:09 np0005629333 xenodochial_meninsky[326646]: {}
Feb 25 07:37:09 np0005629333 podman[326630]: 2026-02-25 12:37:09.495117543 +0000 UTC m=+2.024612865 container died e55fcd29f71a2dcbe12b876744d704cabae8680179001b2d05678cdaf2e01153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_meninsky, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:37:09 np0005629333 systemd[1]: libpod-e55fcd29f71a2dcbe12b876744d704cabae8680179001b2d05678cdaf2e01153.scope: Deactivated successfully.
Feb 25 07:37:09 np0005629333 systemd[1]: libpod-e55fcd29f71a2dcbe12b876744d704cabae8680179001b2d05678cdaf2e01153.scope: Consumed 1.025s CPU time.
Feb 25 07:37:09 np0005629333 systemd[1]: var-lib-containers-storage-overlay-3d8cc31121381984df3f43e602664d586716c6d0b333549a816f3501cb703b8f-merged.mount: Deactivated successfully.
Feb 25 07:37:09 np0005629333 nova_compute[244014]: 2026-02-25 12:37:09.949 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 370 MiB data, 918 MiB used, 59 GiB / 60 GiB avail; 376 KiB/s rd, 3.6 MiB/s wr, 99 op/s
Feb 25 07:37:10 np0005629333 podman[326630]: 2026-02-25 12:37:10.315283977 +0000 UTC m=+2.844779309 container remove e55fcd29f71a2dcbe12b876744d704cabae8680179001b2d05678cdaf2e01153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_meninsky, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:37:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:37:10 np0005629333 systemd[1]: libpod-conmon-e55fcd29f71a2dcbe12b876744d704cabae8680179001b2d05678cdaf2e01153.scope: Deactivated successfully.
Feb 25 07:37:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:37:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:37:10 np0005629333 kernel: tapd689bf7c-d4 (unregistering): left promiscuous mode
Feb 25 07:37:10 np0005629333 NetworkManager[49836]: <info>  [1772023030.8746] device (tapd689bf7c-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:37:10 np0005629333 nova_compute[244014]: 2026-02-25 12:37:10.885 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:10Z|00913|binding|INFO|Releasing lport d689bf7c-d44c-4f39-a2a7-a85e52dcee30 from this chassis (sb_readonly=0)
Feb 25 07:37:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:10Z|00914|binding|INFO|Setting lport d689bf7c-d44c-4f39-a2a7-a85e52dcee30 down in Southbound
Feb 25 07:37:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:10Z|00915|binding|INFO|Removing iface tapd689bf7c-d4 ovn-installed in OVS
Feb 25 07:37:10 np0005629333 nova_compute[244014]: 2026-02-25 12:37:10.890 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:10 np0005629333 nova_compute[244014]: 2026-02-25 12:37:10.901 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:10.906 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:76:ae 10.100.0.8'], port_security=['fa:16:3e:65:76:ae 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0061daee-43d7-458b-8645-0ad3f8fbb2af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38a239da-c933-4cbc-be00-dd127471e198', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '999f2a015b9c4bc98661fe6fe6db06a0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '90d0dcc5-a9c4-4f99-8901-06a2a2854544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80217008-ae39-43e4-aa35-a210336c586d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=d689bf7c-d44c-4f39-a2a7-a85e52dcee30) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:37:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:10.908 157129 INFO neutron.agent.ovn.metadata.agent [-] Port d689bf7c-d44c-4f39-a2a7-a85e52dcee30 in datapath 38a239da-c933-4cbc-be00-dd127471e198 unbound from our chassis#033[00m
Feb 25 07:37:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:10.912 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 38a239da-c933-4cbc-be00-dd127471e198#033[00m
Feb 25 07:37:10 np0005629333 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Feb 25 07:37:10 np0005629333 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005a.scope: Consumed 12.684s CPU time.
Feb 25 07:37:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:10.931 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0fa67ddb-e7fd-4e24-b269-77929d6898ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:10 np0005629333 systemd-machined[210048]: Machine qemu-116-instance-0000005a terminated.
Feb 25 07:37:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:10.955 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[132ca7a4-dca9-4f51-a1e1-1a81ffd04a9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:10.958 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[31360445-5567-4f02-8286-2701a1e19704]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:10.977 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f984bca3-6fcb-4bb7-a8ec-1699247df27a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:10.991 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6759532b-65c2-40f4-ad4b-c8bac98f6eef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38a239da-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:fa:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 15, 'tx_packets': 9, 'rx_bytes': 910, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 15, 'tx_packets': 9, 'rx_bytes': 910, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 497385, 'reachable_time': 19502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326755, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:11 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:37:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:11.027 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0f92935c-c7f2-485e-862a-e33a08a0bc08]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap38a239da-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 497395, 'tstamp': 497395}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326756, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap38a239da-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 497397, 'tstamp': 497397}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326756, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
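
The two privsep replies above are netlink dumps (RTM_NEWLINK, then RTM_NEWADDR) taken inside the ovnmeta-38a239da-... namespace; the metadata tap carries 10.100.0.2/28 plus the IPv4 link-local 169.254.169.254 metadata address. A sketch of the same address dump with pyroute2, which is what runs underneath privsep here (run as root on the host; the namespace name is taken from the log):

    from pyroute2 import NetNS

    NS = "ovnmeta-38a239da-c933-4cbc-be00-dd127471e198"  # namespace from the log

    # Dump addresses the way the RTM_NEWADDR replies above were produced.
    with NetNS(NS) as ns:
        for msg in ns.get_addr():
            attrs = dict(msg["attrs"])
            print(attrs.get("IFA_LABEL"), attrs.get("IFA_ADDRESS"),
                  f"/{msg['prefixlen']}")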
Feb 25 07:37:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:11.028 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38a239da-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:37:11 np0005629333 nova_compute[244014]: 2026-02-25 12:37:11.030 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:11 np0005629333 nova_compute[244014]: 2026-02-25 12:37:11.038 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:11.038 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap38a239da-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:37:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:11.039 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:37:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:11.039 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap38a239da-c0, col_values=(('external_ids', {'iface-id': 'b61d6004-89be-4a9d-aeb0-ecb4f03f3526'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:37:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:11.040 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
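
The three ovsdbapp commands above re-plug the metadata port: delete tap38a239da-c0 from br-ex if it exists, add it to br-int, and set external_ids:iface-id so ovn-controller can bind it (the latter two were no-ops here, hence "Transaction caused no change"). The same calls sketched directly against ovsdbapp, whose parameter names match the log; the ovsdb socket path is an assumed default, not from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local ovsdb-server; the socket path is assumed.
    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Same three commands the agent logged, batched into one transaction.
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port("tap38a239da-c0", bridge="br-ex", if_exists=True))
        txn.add(api.add_port("br-int", "tap38a239da-c0", may_exist=True))
        txn.add(api.db_set("Interface", "tap38a239da-c0",
                           ("external_ids",
                            {"iface-id": "b61d6004-89be-4a9d-aeb0-ecb4f03f3526"})))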
Feb 25 07:37:11 np0005629333 nova_compute[244014]: 2026-02-25 12:37:11.126 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:11 np0005629333 nova_compute[244014]: 2026-02-25 12:37:11.135 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:11 np0005629333 nova_compute[244014]: 2026-02-25 12:37:11.138 244018 INFO nova.virt.libvirt.driver [None req-2d70de51-a2d2-45f7-bb5e-4c5eeb09068d 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Instance shutdown successfully after 7 seconds.#033[00m
Feb 25 07:37:11 np0005629333 nova_compute[244014]: 2026-02-25 12:37:11.159 244018 INFO nova.virt.libvirt.driver [-] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Instance destroyed successfully.#033[00m
Feb 25 07:37:11 np0005629333 nova_compute[244014]: 2026-02-25 12:37:11.160 244018 DEBUG nova.objects.instance [None req-2d70de51-a2d2-45f7-bb5e-4c5eeb09068d 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0061daee-43d7-458b-8645-0ad3f8fbb2af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:37:11 np0005629333 nova_compute[244014]: 2026-02-25 12:37:11.206 244018 DEBUG nova.compute.manager [None req-2d70de51-a2d2-45f7-bb5e-4c5eeb09068d 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:37:11 np0005629333 nova_compute[244014]: 2026-02-25 12:37:11.268 244018 DEBUG oslo_concurrency.lockutils [None req-2d70de51-a2d2-45f7-bb5e-4c5eeb09068d 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 7.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:37:11 np0005629333 nova_compute[244014]: 2026-02-25 12:37:11.371 244018 DEBUG nova.compute.manager [req-1c67da9e-1146-463a-9b49-86fa6403cb4d req-23d7beb2-6e21-460d-9df8-fdb87ecc56d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received event network-vif-unplugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:37:11 np0005629333 nova_compute[244014]: 2026-02-25 12:37:11.372 244018 DEBUG oslo_concurrency.lockutils [req-1c67da9e-1146-463a-9b49-86fa6403cb4d req-23d7beb2-6e21-460d-9df8-fdb87ecc56d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:37:11 np0005629333 nova_compute[244014]: 2026-02-25 12:37:11.373 244018 DEBUG oslo_concurrency.lockutils [req-1c67da9e-1146-463a-9b49-86fa6403cb4d req-23d7beb2-6e21-460d-9df8-fdb87ecc56d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:37:11 np0005629333 nova_compute[244014]: 2026-02-25 12:37:11.373 244018 DEBUG oslo_concurrency.lockutils [req-1c67da9e-1146-463a-9b49-86fa6403cb4d req-23d7beb2-6e21-460d-9df8-fdb87ecc56d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:37:11 np0005629333 nova_compute[244014]: 2026-02-25 12:37:11.374 244018 DEBUG nova.compute.manager [req-1c67da9e-1146-463a-9b49-86fa6403cb4d req-23d7beb2-6e21-460d-9df8-fdb87ecc56d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] No waiting events found dispatching network-vif-unplugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:37:11 np0005629333 nova_compute[244014]: 2026-02-25 12:37:11.374 244018 WARNING nova.compute.manager [req-1c67da9e-1146-463a-9b49-86fa6403cb4d req-23d7beb2-6e21-460d-9df8-fdb87ecc56d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received unexpected event network-vif-unplugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 for instance with vm_state stopped and task_state None.#033[00m
Feb 25 07:37:11 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:11Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:aa:9e:27 10.100.0.3
Feb 25 07:37:11 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:11Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:aa:9e:27 10.100.0.3
Feb 25 07:37:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 372 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 445 KiB/s rd, 3.7 MiB/s wr, 113 op/s
Feb 25 07:37:12 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:37:12 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:37:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:37:12 np0005629333 nova_compute[244014]: 2026-02-25 12:37:12.932 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Acquiring lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:37:12 np0005629333 nova_compute[244014]: 2026-02-25 12:37:12.934 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:37:12 np0005629333 nova_compute[244014]: 2026-02-25 12:37:12.958 244018 DEBUG nova.compute.manager [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:37:13 np0005629333 nova_compute[244014]: 2026-02-25 12:37:13.079 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:37:13 np0005629333 nova_compute[244014]: 2026-02-25 12:37:13.079 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:37:13 np0005629333 nova_compute[244014]: 2026-02-25 12:37:13.088 244018 DEBUG nova.virt.hardware [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:37:13 np0005629333 nova_compute[244014]: 2026-02-25 12:37:13.089 244018 INFO nova.compute.claims [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:37:13 np0005629333 podman[326793]: 2026-02-25 12:37:13.730123403 +0000 UTC m=+0.061145847 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 07:37:14 np0005629333 nova_compute[244014]: 2026-02-25 12:37:14.053 244018 DEBUG oslo_concurrency.processutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:37:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 384 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 312 KiB/s rd, 3.0 MiB/s wr, 80 op/s
Feb 25 07:37:14 np0005629333 nova_compute[244014]: 2026-02-25 12:37:14.130 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:14 np0005629333 nova_compute[244014]: 2026-02-25 12:37:14.534 244018 DEBUG nova.compute.manager [req-fa383c28-e9a8-4634-a295-658c19d4d4be req-0383c3de-ed0f-48cf-8e83-36283ec68435 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received event network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:37:14 np0005629333 nova_compute[244014]: 2026-02-25 12:37:14.535 244018 DEBUG oslo_concurrency.lockutils [req-fa383c28-e9a8-4634-a295-658c19d4d4be req-0383c3de-ed0f-48cf-8e83-36283ec68435 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:37:14 np0005629333 nova_compute[244014]: 2026-02-25 12:37:14.535 244018 DEBUG oslo_concurrency.lockutils [req-fa383c28-e9a8-4634-a295-658c19d4d4be req-0383c3de-ed0f-48cf-8e83-36283ec68435 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:37:14 np0005629333 nova_compute[244014]: 2026-02-25 12:37:14.536 244018 DEBUG oslo_concurrency.lockutils [req-fa383c28-e9a8-4634-a295-658c19d4d4be req-0383c3de-ed0f-48cf-8e83-36283ec68435 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:37:14 np0005629333 nova_compute[244014]: 2026-02-25 12:37:14.536 244018 DEBUG nova.compute.manager [req-fa383c28-e9a8-4634-a295-658c19d4d4be req-0383c3de-ed0f-48cf-8e83-36283ec68435 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] No waiting events found dispatching network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:37:14 np0005629333 nova_compute[244014]: 2026-02-25 12:37:14.536 244018 WARNING nova.compute.manager [req-fa383c28-e9a8-4634-a295-658c19d4d4be req-0383c3de-ed0f-48cf-8e83-36283ec68435 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received unexpected event network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 for instance with vm_state stopped and task_state None.#033[00m
Feb 25 07:37:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:37:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1789193406' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:37:14 np0005629333 nova_compute[244014]: 2026-02-25 12:37:14.608 244018 DEBUG oslo_concurrency.processutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:37:14 np0005629333 nova_compute[244014]: 2026-02-25 12:37:14.614 244018 DEBUG nova.compute.provider_tree [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:37:14 np0005629333 nova_compute[244014]: 2026-02-25 12:37:14.629 244018 DEBUG nova.scheduler.client.report [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:37:14 np0005629333 nova_compute[244014]: 2026-02-25 12:37:14.652 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:37:14 np0005629333 nova_compute[244014]: 2026-02-25 12:37:14.654 244018 DEBUG nova.compute.manager [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:37:14 np0005629333 nova_compute[244014]: 2026-02-25 12:37:14.702 244018 DEBUG nova.compute.manager [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:37:14 np0005629333 nova_compute[244014]: 2026-02-25 12:37:14.702 244018 DEBUG nova.network.neutron [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:37:14 np0005629333 nova_compute[244014]: 2026-02-25 12:37:14.725 244018 INFO nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:37:14 np0005629333 nova_compute[244014]: 2026-02-25 12:37:14.769 244018 DEBUG nova.compute.manager [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:37:14 np0005629333 nova_compute[244014]: 2026-02-25 12:37:14.821 244018 DEBUG nova.objects.instance [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'flavor' on Instance uuid 0061daee-43d7-458b-8645-0ad3f8fbb2af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:37:14 np0005629333 nova_compute[244014]: 2026-02-25 12:37:14.865 244018 DEBUG oslo_concurrency.lockutils [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "refresh_cache-0061daee-43d7-458b-8645-0ad3f8fbb2af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:37:14 np0005629333 nova_compute[244014]: 2026-02-25 12:37:14.865 244018 DEBUG oslo_concurrency.lockutils [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquired lock "refresh_cache-0061daee-43d7-458b-8645-0ad3f8fbb2af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:37:14 np0005629333 nova_compute[244014]: 2026-02-25 12:37:14.865 244018 DEBUG nova.network.neutron [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:37:14 np0005629333 nova_compute[244014]: 2026-02-25 12:37:14.866 244018 DEBUG nova.objects.instance [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'info_cache' on Instance uuid 0061daee-43d7-458b-8645-0ad3f8fbb2af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:37:14 np0005629333 nova_compute[244014]: 2026-02-25 12:37:14.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:37:14 np0005629333 nova_compute[244014]: 2026-02-25 12:37:14.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 07:37:14 np0005629333 nova_compute[244014]: 2026-02-25 12:37:14.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 07:37:15 np0005629333 nova_compute[244014]: 2026-02-25 12:37:15.038 244018 DEBUG nova.policy [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3187080005d24d4b8fc920bcd975004b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '04e53ffc2c7b47b993b4fd34d0c71d77', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:37:15 np0005629333 nova_compute[244014]: 2026-02-25 12:37:15.040 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:15 np0005629333 nova_compute[244014]: 2026-02-25 12:37:15.042 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb 25 07:37:15 np0005629333 nova_compute[244014]: 2026-02-25 12:37:15.043 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-0061daee-43d7-458b-8645-0ad3f8fbb2af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:37:15 np0005629333 nova_compute[244014]: 2026-02-25 12:37:15.043 244018 DEBUG nova.compute.manager [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:37:15 np0005629333 nova_compute[244014]: 2026-02-25 12:37:15.044 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:37:15 np0005629333 nova_compute[244014]: 2026-02-25 12:37:15.044 244018 INFO nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Creating image(s)#033[00m
Feb 25 07:37:15 np0005629333 nova_compute[244014]: 2026-02-25 12:37:15.064 244018 DEBUG nova.storage.rbd_utils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] rbd image ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:37:15 np0005629333 nova_compute[244014]: 2026-02-25 12:37:15.084 244018 DEBUG nova.storage.rbd_utils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] rbd image ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:37:15 np0005629333 nova_compute[244014]: 2026-02-25 12:37:15.107 244018 DEBUG nova.storage.rbd_utils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] rbd image ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:37:15 np0005629333 nova_compute[244014]: 2026-02-25 12:37:15.110 244018 DEBUG oslo_concurrency.processutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:37:15 np0005629333 nova_compute[244014]: 2026-02-25 12:37:15.192 244018 DEBUG oslo_concurrency.processutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:37:15 np0005629333 nova_compute[244014]: 2026-02-25 12:37:15.193 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:37:15 np0005629333 nova_compute[244014]: 2026-02-25 12:37:15.193 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:37:15 np0005629333 nova_compute[244014]: 2026-02-25 12:37:15.194 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:37:15 np0005629333 nova_compute[244014]: 2026-02-25 12:37:15.214 244018 DEBUG nova.storage.rbd_utils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] rbd image ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:37:15 np0005629333 nova_compute[244014]: 2026-02-25 12:37:15.217 244018 DEBUG oslo_concurrency.processutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:37:15 np0005629333 nova_compute[244014]: 2026-02-25 12:37:15.472 244018 DEBUG oslo_concurrency.processutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.254s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:37:15 np0005629333 nova_compute[244014]: 2026-02-25 12:37:15.560 244018 DEBUG nova.storage.rbd_utils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] resizing rbd image ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 25 07:37:15 np0005629333 nova_compute[244014]: 2026-02-25 12:37:15.641 244018 DEBUG nova.objects.instance [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lazy-loading 'migration_context' on Instance uuid ed3e74a3-3eba-4446-aba8-cb3936a346e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:37:15 np0005629333 nova_compute[244014]: 2026-02-25 12:37:15.657 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:37:15 np0005629333 nova_compute[244014]: 2026-02-25 12:37:15.658 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Ensure instance console log exists: /var/lib/nova/instances/ed3e74a3-3eba-4446-aba8-cb3936a346e0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:37:15 np0005629333 nova_compute[244014]: 2026-02-25 12:37:15.658 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:37:15 np0005629333 nova_compute[244014]: 2026-02-25 12:37:15.658 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:37:15 np0005629333 nova_compute[244014]: 2026-02-25 12:37:15.659 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:37:15 np0005629333 podman[327001]: 2026-02-25 12:37:15.739352113 +0000 UTC m=+0.091576626 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 25 07:37:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 384 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 163 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.195 244018 DEBUG nova.network.neutron [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Successfully created port: 7e3c890f-a02f-487b-9c31-7341f5caa914 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.273 244018 DEBUG nova.network.neutron [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Updating instance_info_cache with network_info: [{"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.291 244018 DEBUG oslo_concurrency.lockutils [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Releasing lock "refresh_cache-0061daee-43d7-458b-8645-0ad3f8fbb2af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.294 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-0061daee-43d7-458b-8645-0ad3f8fbb2af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.295 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.295 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0061daee-43d7-458b-8645-0ad3f8fbb2af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.344 244018 INFO nova.virt.libvirt.driver [-] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Instance destroyed successfully.#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.344 244018 DEBUG nova.objects.instance [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0061daee-43d7-458b-8645-0ad3f8fbb2af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.358 244018 DEBUG nova.objects.instance [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'resources' on Instance uuid 0061daee-43d7-458b-8645-0ad3f8fbb2af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.372 244018 DEBUG nova.virt.libvirt.vif [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:36:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1814919794',display_name='tempest-ListServerFiltersTestJSON-instance-1814919794',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1814919794',id=90,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:36:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='999f2a015b9c4bc98661fe6fe6db06a0',ramdisk_id='',reservation_id='r-la6jl8ba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1481541933',owner_user_name='tempest-ListServerFiltersTestJSON-1481541933-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:37:11Z,user_data=None,user_id='00eb5c915b2d41f69600acd33967d0f5',uuid=0061daee-43d7-458b-8645-0ad3f8fbb2af,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.373 244018 DEBUG nova.network.os_vif_util [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converting VIF {"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.374 244018 DEBUG nova.network.os_vif_util [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ae,bridge_name='br-int',has_traffic_filtering=True,id=d689bf7c-d44c-4f39-a2a7-a85e52dcee30,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd689bf7c-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.375 244018 DEBUG os_vif [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ae,bridge_name='br-int',has_traffic_filtering=True,id=d689bf7c-d44c-4f39-a2a7-a85e52dcee30,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd689bf7c-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.378 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.378 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd689bf7c-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.381 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.383 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.387 244018 INFO os_vif [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ae,bridge_name='br-int',has_traffic_filtering=True,id=d689bf7c-d44c-4f39-a2a7-a85e52dcee30,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd689bf7c-d4')#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.393 244018 DEBUG nova.virt.libvirt.driver [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Start _get_guest_xml network_info=[{"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.398 244018 WARNING nova.virt.libvirt.driver [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.402 244018 DEBUG nova.virt.libvirt.host [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.403 244018 DEBUG nova.virt.libvirt.host [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.406 244018 DEBUG nova.virt.libvirt.host [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.406 244018 DEBUG nova.virt.libvirt.host [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.407 244018 DEBUG nova.virt.libvirt.driver [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.407 244018 DEBUG nova.virt.hardware [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.407 244018 DEBUG nova.virt.hardware [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.408 244018 DEBUG nova.virt.hardware [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.408 244018 DEBUG nova.virt.hardware [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.408 244018 DEBUG nova.virt.hardware [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.409 244018 DEBUG nova.virt.hardware [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.409 244018 DEBUG nova.virt.hardware [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.409 244018 DEBUG nova.virt.hardware [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.410 244018 DEBUG nova.virt.hardware [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.410 244018 DEBUG nova.virt.hardware [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.410 244018 DEBUG nova.virt.hardware [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.411 244018 DEBUG nova.objects.instance [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0061daee-43d7-458b-8645-0ad3f8fbb2af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:37:16 np0005629333 nova_compute[244014]: 2026-02-25 12:37:16.434 244018 DEBUG oslo_concurrency.processutils [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:37:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:37:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/394028940' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.007 244018 DEBUG oslo_concurrency.processutils [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.039 244018 DEBUG oslo_concurrency.processutils [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:37:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:37:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3998435508' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.590 244018 DEBUG oslo_concurrency.processutils [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.591 244018 DEBUG nova.virt.libvirt.vif [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:36:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1814919794',display_name='tempest-ListServerFiltersTestJSON-instance-1814919794',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1814919794',id=90,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:36:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='999f2a015b9c4bc98661fe6fe6db06a0',ramdisk_id='',reservation_id='r-la6jl8ba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1481541933',owner_user_name='tempest-ListServerFiltersTestJSON-1481541933-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:37:11Z,user_data=None,user_id='00eb5c915b2d41f69600acd33967d0f5',uuid=0061daee-43d7-458b-8645-0ad3f8fbb2af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.592 244018 DEBUG nova.network.os_vif_util [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converting VIF {"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.593 244018 DEBUG nova.network.os_vif_util [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ae,bridge_name='br-int',has_traffic_filtering=True,id=d689bf7c-d44c-4f39-a2a7-a85e52dcee30,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd689bf7c-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
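The "Converted object" line above shows the os-vif object nova builds from the Neutron VIF dict. A rough sketch of constructing the equivalent object with the os_vif library, using only field values visible in the log (subnets and port_profile omitted for brevity, so this is illustrative rather than a byte-for-byte reconstruction):

    from os_vif.objects import network, vif

    # Values copied from the log record above.
    net = network.Network(
        id='38a239da-c933-4cbc-be00-dd127471e198',
        bridge='br-int',
        mtu=1442)
    ovs_vif = vif.VIFOpenVSwitch(
        id='d689bf7c-d44c-4f39-a2a7-a85e52dcee30',
        address='fa:16:3e:65:76:ae',
        network=net,
        vif_name='tapd689bf7c-d4',
        bridge_name='br-int',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False)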
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.594 244018 DEBUG nova.objects.instance [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0061daee-43d7-458b-8645-0ad3f8fbb2af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.615 244018 DEBUG nova.virt.libvirt.driver [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:37:17 np0005629333 nova_compute[244014]:  <uuid>0061daee-43d7-458b-8645-0ad3f8fbb2af</uuid>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:  <name>instance-0000005a</name>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-1814919794</nova:name>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:37:16</nova:creationTime>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:37:17 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:        <nova:user uuid="00eb5c915b2d41f69600acd33967d0f5">tempest-ListServerFiltersTestJSON-1481541933-project-member</nova:user>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:        <nova:project uuid="999f2a015b9c4bc98661fe6fe6db06a0">tempest-ListServerFiltersTestJSON-1481541933</nova:project>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:        <nova:port uuid="d689bf7c-d44c-4f39-a2a7-a85e52dcee30">
Feb 25 07:37:17 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <entry name="serial">0061daee-43d7-458b-8645-0ad3f8fbb2af</entry>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <entry name="uuid">0061daee-43d7-458b-8645-0ad3f8fbb2af</entry>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/0061daee-43d7-458b-8645-0ad3f8fbb2af_disk">
Feb 25 07:37:17 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:37:17 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/0061daee-43d7-458b-8645-0ad3f8fbb2af_disk.config">
Feb 25 07:37:17 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:37:17 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:65:76:ae"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <target dev="tapd689bf7c-d4"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/0061daee-43d7-458b-8645-0ad3f8fbb2af/console.log" append="off"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <input type="keyboard" bus="usb"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:37:17 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:37:17 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:37:17 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:37:17 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
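_get_guest_xml only renders the domain document; separate libvirt calls define and boot it. A minimal sketch of feeding XML like the block above to libvirt directly with the libvirt-python bindings (not nova's exact code path; the file name is a placeholder):

    import libvirt

    with open('instance-0000005a.xml') as f:  # the <domain> document above
        xml = f.read()

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)  # persist the domain definition
        dom.create()               # start the guest, as the power-on does
    finally:
        conn.close()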
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.618 244018 DEBUG nova.virt.libvirt.driver [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.619 244018 DEBUG nova.virt.libvirt.driver [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.621 244018 DEBUG nova.virt.libvirt.vif [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:36:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1814919794',display_name='tempest-ListServerFiltersTestJSON-instance-1814919794',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1814919794',id=90,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:36:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='999f2a015b9c4bc98661fe6fe6db06a0',ramdisk_id='',reservation_id='r-la6jl8ba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1481541933',owner_user_name='tempest-ListServerFiltersTestJSON-1481541933-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:37:11Z,user_data=None,user_id='00eb5c915b2d41f69600acd33967d0f5',uuid=0061daee-43d7-458b-8645-0ad3f8fbb2af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.622 244018 DEBUG nova.network.os_vif_util [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converting VIF {"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.623 244018 DEBUG nova.network.os_vif_util [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ae,bridge_name='br-int',has_traffic_filtering=True,id=d689bf7c-d44c-4f39-a2a7-a85e52dcee30,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd689bf7c-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.624 244018 DEBUG os_vif [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ae,bridge_name='br-int',has_traffic_filtering=True,id=d689bf7c-d44c-4f39-a2a7-a85e52dcee30,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd689bf7c-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.625 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.626 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.627 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.632 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.632 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd689bf7c-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.633 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd689bf7c-d4, col_values=(('external_ids', {'iface-id': 'd689bf7c-d44c-4f39-a2a7-a85e52dcee30', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:76:ae', 'vm-uuid': '0061daee-43d7-458b-8645-0ad3f8fbb2af'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
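These two ovsdbapp commands are the whole OVS side of the plug: add the tap port to br-int, then tag its Interface row so ovn-controller can match it to the logical port. A rough equivalent with the ovsdbapp library, assuming the usual local ovsdb-server socket path (external_ids trimmed to the two keys OVN actually matches on):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Socket path is an assumption for a typical host.
    idl = connection.OvsdbIdl.from_server(
        'unix:/var/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Mirrors AddPortCommand(idx=0) and DbSetCommand(idx=1) from the log.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapd689bf7c-d4', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapd689bf7c-d4',
            ('external_ids', {
                'iface-id': 'd689bf7c-d44c-4f39-a2a7-a85e52dcee30',
                'attached-mac': 'fa:16:3e:65:76:ae'})))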
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.635 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:17 np0005629333 NetworkManager[49836]: <info>  [1772023037.6367] manager: (tapd689bf7c-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.638 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.641 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.643 244018 INFO os_vif [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ae,bridge_name='br-int',has_traffic_filtering=True,id=d689bf7c-d44c-4f39-a2a7-a85e52dcee30,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd689bf7c-d4')#033[00m
Feb 25 07:37:17 np0005629333 kernel: tapd689bf7c-d4: entered promiscuous mode
Feb 25 07:37:17 np0005629333 NetworkManager[49836]: <info>  [1772023037.7462] manager: (tapd689bf7c-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/385)
Feb 25 07:37:17 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:17Z|00916|binding|INFO|Claiming lport d689bf7c-d44c-4f39-a2a7-a85e52dcee30 for this chassis.
Feb 25 07:37:17 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:17Z|00917|binding|INFO|d689bf7c-d44c-4f39-a2a7-a85e52dcee30: Claiming fa:16:3e:65:76:ae 10.100.0.8
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.749 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.756 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:17 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:17Z|00918|binding|INFO|Setting lport d689bf7c-d44c-4f39-a2a7-a85e52dcee30 ovn-installed in OVS
Feb 25 07:37:17 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:17Z|00919|binding|INFO|Setting lport d689bf7c-d44c-4f39-a2a7-a85e52dcee30 up in Southbound
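At this point ovn-controller has claimed the logical port and marked it up in the Southbound database. A sketch of verifying that binding from the same host with the ovn-sbctl CLI (invocation assumed typical for an OVN chassis, not taken from this deployment):

    import subprocess

    # 'find' filters the Port_Binding table by column value; the chassis
    # column should point at this host once the claim above has landed.
    print(subprocess.check_output(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=d689bf7c-d44c-4f39-a2a7-a85e52dcee30'],
        text=True))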
Feb 25 07:37:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:17.760 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:76:ae 10.100.0.8'], port_security=['fa:16:3e:65:76:ae 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0061daee-43d7-458b-8645-0ad3f8fbb2af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38a239da-c933-4cbc-be00-dd127471e198', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '999f2a015b9c4bc98661fe6fe6db06a0', 'neutron:revision_number': '5', 'neutron:security_group_ids': '90d0dcc5-a9c4-4f99-8901-06a2a2854544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80217008-ae39-43e4-aa35-a210336c586d, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=d689bf7c-d44c-4f39-a2a7-a85e52dcee30) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.760 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:17.763 157129 INFO neutron.agent.ovn.metadata.agent [-] Port d689bf7c-d44c-4f39-a2a7-a85e52dcee30 in datapath 38a239da-c933-4cbc-be00-dd127471e198 bound to our chassis#033[00m
Feb 25 07:37:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:17.766 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 38a239da-c933-4cbc-be00-dd127471e198#033[00m
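"Provisioning metadata" means the agent builds (or reuses) an ovnmeta-<network> namespace holding 169.254.169.254, which the privsep address replies further below confirm. A sketch of inspecting that namespace with pyroute2, the library the agent itself drives through privsep (namespace name taken from the log):

    from pyroute2 import NetNS

    NS = 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198'
    with NetNS(NS) as ns:
        for addr in ns.get_addr():
            # expect 10.100.0.2/28 and 169.254.169.254/32 on the tap device
            print(addr.get_attr('IFA_ADDRESS'))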
Feb 25 07:37:17 np0005629333 systemd-udevd[327105]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:37:17 np0005629333 systemd-machined[210048]: New machine qemu-119-instance-0000005a.
Feb 25 07:37:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:17.785 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[223664d8-039b-4d6b-987e-279aab6cc2b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:17 np0005629333 NetworkManager[49836]: <info>  [1772023037.7869] device (tapd689bf7c-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:37:17 np0005629333 NetworkManager[49836]: <info>  [1772023037.7893] device (tapd689bf7c-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:37:17 np0005629333 systemd[1]: Started Virtual Machine qemu-119-instance-0000005a.
Feb 25 07:37:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:17.816 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[63be026d-a6fb-4d23-84f4-f97814d9eaf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:17.821 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb0cfd6-5847-4cc8-9796-ff0f33ead275]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.826 244018 DEBUG nova.network.neutron [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Successfully updated port: 7e3c890f-a02f-487b-9c31-7341f5caa914 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.846 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Acquiring lock "refresh_cache-ed3e74a3-3eba-4446-aba8-cb3936a346e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.847 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Acquired lock "refresh_cache-ed3e74a3-3eba-4446-aba8-cb3936a346e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.847 244018 DEBUG nova.network.neutron [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:37:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:17.848 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[6fed7bbb-47df-4739-8850-4d7a9f6d70dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:17.863 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d71576fd-93e7-47e7-9748-1750a6917033]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38a239da-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:fa:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 11, 'rx_bytes': 1246, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 11, 'rx_bytes': 1246, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 497385, 'reachable_time': 19502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327118, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:17.877 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fe3db334-4675-48cc-a776-0e4e54c94095]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap38a239da-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 497395, 'tstamp': 497395}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327119, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap38a239da-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 497397, 'tstamp': 497397}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327119, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:17.879 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38a239da-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.881 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.882 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:17.883 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap38a239da-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:37:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:17.883 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:37:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:17.884 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap38a239da-c0, col_values=(('external_ids', {'iface-id': 'b61d6004-89be-4a9d-aeb0-ecb4f03f3526'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:37:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:17.884 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.962 244018 DEBUG nova.compute.manager [req-6fe62b31-3e3e-4e9e-96d9-5f224fff1b7d req-2186ea1a-748d-4ce8-b2ac-9beb3766f104 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Received event network-changed-7e3c890f-a02f-487b-9c31-7341f5caa914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.962 244018 DEBUG nova.compute.manager [req-6fe62b31-3e3e-4e9e-96d9-5f224fff1b7d req-2186ea1a-748d-4ce8-b2ac-9beb3766f104 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Refreshing instance network info cache due to event network-changed-7e3c890f-a02f-487b-9c31-7341f5caa914. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:37:17 np0005629333 nova_compute[244014]: 2026-02-25 12:37:17.962 244018 DEBUG oslo_concurrency.lockutils [req-6fe62b31-3e3e-4e9e-96d9-5f224fff1b7d req-2186ea1a-748d-4ce8-b2ac-9beb3766f104 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-ed3e74a3-3eba-4446-aba8-cb3936a346e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:37:18 np0005629333 nova_compute[244014]: 2026-02-25 12:37:18.076 244018 DEBUG nova.compute.manager [req-c7f919ca-2150-40ab-89c1-0a9ac9547ce7 req-15a2c592-8a26-4b52-827e-296e3dd7df97 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received event network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:37:18 np0005629333 nova_compute[244014]: 2026-02-25 12:37:18.077 244018 DEBUG oslo_concurrency.lockutils [req-c7f919ca-2150-40ab-89c1-0a9ac9547ce7 req-15a2c592-8a26-4b52-827e-296e3dd7df97 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:37:18 np0005629333 nova_compute[244014]: 2026-02-25 12:37:18.077 244018 DEBUG oslo_concurrency.lockutils [req-c7f919ca-2150-40ab-89c1-0a9ac9547ce7 req-15a2c592-8a26-4b52-827e-296e3dd7df97 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:37:18 np0005629333 nova_compute[244014]: 2026-02-25 12:37:18.078 244018 DEBUG oslo_concurrency.lockutils [req-c7f919ca-2150-40ab-89c1-0a9ac9547ce7 req-15a2c592-8a26-4b52-827e-296e3dd7df97 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:37:18 np0005629333 nova_compute[244014]: 2026-02-25 12:37:18.079 244018 DEBUG nova.compute.manager [req-c7f919ca-2150-40ab-89c1-0a9ac9547ce7 req-15a2c592-8a26-4b52-827e-296e3dd7df97 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] No waiting events found dispatching network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:37:18 np0005629333 nova_compute[244014]: 2026-02-25 12:37:18.079 244018 WARNING nova.compute.manager [req-c7f919ca-2150-40ab-89c1-0a9ac9547ce7 req-15a2c592-8a26-4b52-827e-296e3dd7df97 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received unexpected event network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 for instance with vm_state stopped and task_state powering-on.#033[00m
Feb 25 07:37:18 np0005629333 nova_compute[244014]: 2026-02-25 12:37:18.085 244018 DEBUG nova.network.neutron [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:37:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 305 active+clean; 437 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 197 KiB/s rd, 4.0 MiB/s wr, 98 op/s
Feb 25 07:37:18 np0005629333 nova_compute[244014]: 2026-02-25 12:37:18.124 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Updating instance_info_cache with network_info: [{"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:37:18 np0005629333 nova_compute[244014]: 2026-02-25 12:37:18.140 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-0061daee-43d7-458b-8645-0ad3f8fbb2af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:37:18 np0005629333 nova_compute[244014]: 2026-02-25 12:37:18.140 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 25 07:37:18 np0005629333 nova_compute[244014]: 2026-02-25 12:37:18.141 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:37:18 np0005629333 nova_compute[244014]: 2026-02-25 12:37:18.141 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.151 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for 0061daee-43d7-458b-8645-0ad3f8fbb2af due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.151 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023039.1507766, 0061daee-43d7-458b-8645-0ad3f8fbb2af => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.151 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.154 244018 DEBUG nova.compute.manager [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.158 244018 INFO nova.virt.libvirt.driver [-] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Instance rebooted successfully.#033[00m
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.159 244018 DEBUG nova.compute.manager [None req-019ed112-5b5c-4a91-a51c-50deb2bd7081 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.167 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.171 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.188 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.189 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023039.1535273, 0061daee-43d7-458b-8645-0ad3f8fbb2af => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.189 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] VM Started (Lifecycle Event)#033[00m
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.214 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.218 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
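The numeric states in the two "Synchronizing instance power state" lines come from nova.compute.power_state: the database still records 4 (SHUTDOWN) while libvirt now reports 1 (RUNNING), and the pending powering-on task is why the sync skips the instance. A quick check of that mapping, assuming a host where the nova package is importable:

    from nova.compute import power_state

    # STATE_MAP translates the integers used in the log records above.
    print(power_state.STATE_MAP[1])   # 'running'  (VM power_state: 1)
    print(power_state.STATE_MAP[4])   # 'shutdown' (DB power_state: 4)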
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.351 244018 DEBUG nova.network.neutron [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Updating instance_info_cache with network_info: [{"id": "7e3c890f-a02f-487b-9c31-7341f5caa914", "address": "fa:16:3e:e9:ff:e0", "network": {"id": "a3ce9bca-ffe4-4184-87c6-054df1c1e0b8", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-641075665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04e53ffc2c7b47b993b4fd34d0c71d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3c890f-a0", "ovs_interfaceid": "7e3c890f-a02f-487b-9c31-7341f5caa914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.375 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Releasing lock "refresh_cache-ed3e74a3-3eba-4446-aba8-cb3936a346e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.375 244018 DEBUG nova.compute.manager [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Instance network_info: |[{"id": "7e3c890f-a02f-487b-9c31-7341f5caa914", "address": "fa:16:3e:e9:ff:e0", "network": {"id": "a3ce9bca-ffe4-4184-87c6-054df1c1e0b8", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-641075665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04e53ffc2c7b47b993b4fd34d0c71d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3c890f-a0", "ovs_interfaceid": "7e3c890f-a02f-487b-9c31-7341f5caa914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.376 244018 DEBUG oslo_concurrency.lockutils [req-6fe62b31-3e3e-4e9e-96d9-5f224fff1b7d req-2186ea1a-748d-4ce8-b2ac-9beb3766f104 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-ed3e74a3-3eba-4446-aba8-cb3936a346e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.376 244018 DEBUG nova.network.neutron [req-6fe62b31-3e3e-4e9e-96d9-5f224fff1b7d req-2186ea1a-748d-4ce8-b2ac-9beb3766f104 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Refreshing network info cache for port 7e3c890f-a02f-487b-9c31-7341f5caa914 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.381 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Start _get_guest_xml network_info=[{"id": "7e3c890f-a02f-487b-9c31-7341f5caa914", "address": "fa:16:3e:e9:ff:e0", "network": {"id": "a3ce9bca-ffe4-4184-87c6-054df1c1e0b8", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-641075665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04e53ffc2c7b47b993b4fd34d0c71d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3c890f-a0", "ovs_interfaceid": "7e3c890f-a02f-487b-9c31-7341f5caa914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.385 244018 WARNING nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.392 244018 DEBUG nova.virt.libvirt.host [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.392 244018 DEBUG nova.virt.libvirt.host [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.400 244018 DEBUG nova.virt.libvirt.host [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.401 244018 DEBUG nova.virt.libvirt.host [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.401 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.401 244018 DEBUG nova.virt.hardware [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.402 244018 DEBUG nova.virt.hardware [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.402 244018 DEBUG nova.virt.hardware [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.402 244018 DEBUG nova.virt.hardware [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.402 244018 DEBUG nova.virt.hardware [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.403 244018 DEBUG nova.virt.hardware [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.403 244018 DEBUG nova.virt.hardware [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.403 244018 DEBUG nova.virt.hardware [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.403 244018 DEBUG nova.virt.hardware [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.404 244018 DEBUG nova.virt.hardware [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.404 244018 DEBUG nova.virt.hardware [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.407 244018 DEBUG oslo_concurrency.processutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.899 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.899 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.899 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.900 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.900 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:37:19 np0005629333 nova_compute[244014]: 2026-02-25 12:37:19.954 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:37:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:37:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1709585853' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.078 244018 DEBUG oslo_concurrency.processutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.671s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:37:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 305 active+clean; 437 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 130 KiB/s rd, 2.5 MiB/s wr, 63 op/s
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.118 244018 DEBUG nova.storage.rbd_utils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] rbd image ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.122 244018 DEBUG oslo_concurrency.processutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:37:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:37:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4272266667' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.472 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.563 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.564 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.568 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.568 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.571 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.571 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:37:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:37:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3369517480' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.645 244018 DEBUG oslo_concurrency.processutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.646 244018 DEBUG nova.virt.libvirt.vif [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:37:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1563092304',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1563092304',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1563092304',id=93,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='04e53ffc2c7b47b993b4fd34d0c71d77',ramdisk_id='',reservation_id='r-5zdt11tm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-685623650',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-685623650-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:37:14Z,user_data=None,user_id='3187080005d24d4b8fc920bcd975004b',uuid=ed3e74a3-3eba-4446-aba8-cb3936a346e0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7e3c890f-a02f-487b-9c31-7341f5caa914", "address": "fa:16:3e:e9:ff:e0", "network": {"id": "a3ce9bca-ffe4-4184-87c6-054df1c1e0b8", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-641075665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04e53ffc2c7b47b993b4fd34d0c71d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3c890f-a0", "ovs_interfaceid": "7e3c890f-a02f-487b-9c31-7341f5caa914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.647 244018 DEBUG nova.network.os_vif_util [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Converting VIF {"id": "7e3c890f-a02f-487b-9c31-7341f5caa914", "address": "fa:16:3e:e9:ff:e0", "network": {"id": "a3ce9bca-ffe4-4184-87c6-054df1c1e0b8", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-641075665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04e53ffc2c7b47b993b4fd34d0c71d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3c890f-a0", "ovs_interfaceid": "7e3c890f-a02f-487b-9c31-7341f5caa914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.648 244018 DEBUG nova.network.os_vif_util [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:ff:e0,bridge_name='br-int',has_traffic_filtering=True,id=7e3c890f-a02f-487b-9c31-7341f5caa914,network=Network(a3ce9bca-ffe4-4184-87c6-054df1c1e0b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e3c890f-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.650 244018 DEBUG nova.objects.instance [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lazy-loading 'pci_devices' on Instance uuid ed3e74a3-3eba-4446-aba8-cb3936a346e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.703 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:37:20 np0005629333 nova_compute[244014]:  <uuid>ed3e74a3-3eba-4446-aba8-cb3936a346e0</uuid>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:  <name>instance-0000005d</name>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServersNegativeTestMultiTenantJSON-server-1563092304</nova:name>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:37:19</nova:creationTime>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:37:20 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:        <nova:user uuid="3187080005d24d4b8fc920bcd975004b">tempest-ServersNegativeTestMultiTenantJSON-685623650-project-member</nova:user>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:        <nova:project uuid="04e53ffc2c7b47b993b4fd34d0c71d77">tempest-ServersNegativeTestMultiTenantJSON-685623650</nova:project>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:        <nova:port uuid="7e3c890f-a02f-487b-9c31-7341f5caa914">
Feb 25 07:37:20 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <entry name="serial">ed3e74a3-3eba-4446-aba8-cb3936a346e0</entry>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <entry name="uuid">ed3e74a3-3eba-4446-aba8-cb3936a346e0</entry>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk">
Feb 25 07:37:20 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:37:20 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk.config">
Feb 25 07:37:20 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:37:20 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:e9:ff:e0"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <target dev="tap7e3c890f-a0"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/ed3e74a3-3eba-4446-aba8-cb3936a346e0/console.log" append="off"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:37:20 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:37:20 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:37:20 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:37:20 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.710 244018 DEBUG nova.compute.manager [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Preparing to wait for external event network-vif-plugged-7e3c890f-a02f-487b-9c31-7341f5caa914 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.710 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Acquiring lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.710 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.711 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.711 244018 DEBUG nova.virt.libvirt.vif [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:37:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1563092304',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1563092304',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1563092304',id=93,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='04e53ffc2c7b47b993b4fd34d0c71d77',ramdisk_id='',reservation_id='r-5zdt11tm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-685623650',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-685623650-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:37:14Z,user_data=None,user_id='3187080005d24d4b8fc920bcd975004b',uuid=ed3e74a3-3eba-4446-aba8-cb3936a346e0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7e3c890f-a02f-487b-9c31-7341f5caa914", "address": "fa:16:3e:e9:ff:e0", "network": {"id": "a3ce9bca-ffe4-4184-87c6-054df1c1e0b8", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-641075665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04e53ffc2c7b47b993b4fd34d0c71d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3c890f-a0", "ovs_interfaceid": "7e3c890f-a02f-487b-9c31-7341f5caa914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.712 244018 DEBUG nova.network.os_vif_util [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Converting VIF {"id": "7e3c890f-a02f-487b-9c31-7341f5caa914", "address": "fa:16:3e:e9:ff:e0", "network": {"id": "a3ce9bca-ffe4-4184-87c6-054df1c1e0b8", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-641075665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04e53ffc2c7b47b993b4fd34d0c71d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3c890f-a0", "ovs_interfaceid": "7e3c890f-a02f-487b-9c31-7341f5caa914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.712 244018 DEBUG nova.network.os_vif_util [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:ff:e0,bridge_name='br-int',has_traffic_filtering=True,id=7e3c890f-a02f-487b-9c31-7341f5caa914,network=Network(a3ce9bca-ffe4-4184-87c6-054df1c1e0b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e3c890f-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.713 244018 DEBUG os_vif [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:ff:e0,bridge_name='br-int',has_traffic_filtering=True,id=7e3c890f-a02f-487b-9c31-7341f5caa914,network=Network(a3ce9bca-ffe4-4184-87c6-054df1c1e0b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e3c890f-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.716 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.716 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.717 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.719 244018 DEBUG nova.compute.manager [req-be93a9e7-706a-46e8-8744-af2fc915b061 req-9496b7dc-df3c-4d45-b368-dd3141de970d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received event network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.720 244018 DEBUG oslo_concurrency.lockutils [req-be93a9e7-706a-46e8-8744-af2fc915b061 req-9496b7dc-df3c-4d45-b368-dd3141de970d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.721 244018 DEBUG oslo_concurrency.lockutils [req-be93a9e7-706a-46e8-8744-af2fc915b061 req-9496b7dc-df3c-4d45-b368-dd3141de970d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.721 244018 DEBUG oslo_concurrency.lockutils [req-be93a9e7-706a-46e8-8744-af2fc915b061 req-9496b7dc-df3c-4d45-b368-dd3141de970d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.722 244018 DEBUG nova.compute.manager [req-be93a9e7-706a-46e8-8744-af2fc915b061 req-9496b7dc-df3c-4d45-b368-dd3141de970d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] No waiting events found dispatching network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.722 244018 WARNING nova.compute.manager [req-be93a9e7-706a-46e8-8744-af2fc915b061 req-9496b7dc-df3c-4d45-b368-dd3141de970d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received unexpected event network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 for instance with vm_state active and task_state None.
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.726 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.726 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7e3c890f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.727 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7e3c890f-a0, col_values=(('external_ids', {'iface-id': '7e3c890f-a02f-487b-9c31-7341f5caa914', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e9:ff:e0', 'vm-uuid': 'ed3e74a3-3eba-4446-aba8-cb3936a346e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.729 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:37:20 np0005629333 NetworkManager[49836]: <info>  [1772023040.7302] manager: (tap7e3c890f-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/386)
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.732 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.734 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.735 244018 INFO os_vif [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:ff:e0,bridge_name='br-int',has_traffic_filtering=True,id=7e3c890f-a02f-487b-9c31-7341f5caa914,network=Network(a3ce9bca-ffe4-4184-87c6-054df1c1e0b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e3c890f-a0')#033[00m
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.799 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.799 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.800 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] No VIF found with MAC fa:16:3e:e9:ff:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.800 244018 INFO nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Using config drive#033[00m
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.825 244018 DEBUG nova.storage.rbd_utils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] rbd image ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.842 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.844 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3190MB free_disk=59.830501983873546GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.844 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:37:20 np0005629333 nova_compute[244014]: 2026-02-25 12:37:20.844 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:37:21 np0005629333 nova_compute[244014]: 2026-02-25 12:37:21.151 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 0061daee-43d7-458b-8645-0ad3f8fbb2af actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:37:21 np0005629333 nova_compute[244014]: 2026-02-25 12:37:21.151 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 6076a107-bdef-4c8a-8f75-887cdb4833f0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:37:21 np0005629333 nova_compute[244014]: 2026-02-25 12:37:21.152 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:37:21 np0005629333 nova_compute[244014]: 2026-02-25 12:37:21.152 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance ed3e74a3-3eba-4446-aba8-cb3936a346e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:37:21 np0005629333 nova_compute[244014]: 2026-02-25 12:37:21.153 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 07:37:21 np0005629333 nova_compute[244014]: 2026-02-25 12:37:21.153 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1088MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
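[editor's note] The "Final resource view" numbers reconcile exactly with the four placement allocations listed a few lines up plus the 512 MB memory reservation visible in the inventory line later at 12:37:22.163. A quick check, assuming those four instances are the only tenants of this host:

    # MEMORY_MB allocations from the four instance lines above.
    instances_mb = [128, 128, 192, 128]
    reserved_mb = 512                 # "reserved" MEMORY_MB from the inventory

    assert reserved_mb + sum(instances_mb) == 1088   # used_ram=1088MB
    assert len(instances_mb) == 4                    # used_vcpus=4 (1 VCPU each)
    assert 4 * 1 == 4                                # DISK_GB: 4 x 1 GB -> used_disk=4GB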
Feb 25 07:37:21 np0005629333 nova_compute[244014]: 2026-02-25 12:37:21.473 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:37:21 np0005629333 nova_compute[244014]: 2026-02-25 12:37:21.646 244018 INFO nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Creating config drive at /var/lib/nova/instances/ed3e74a3-3eba-4446-aba8-cb3936a346e0/disk.config#033[00m
Feb 25 07:37:21 np0005629333 nova_compute[244014]: 2026-02-25 12:37:21.653 244018 DEBUG oslo_concurrency.processutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ed3e74a3-3eba-4446-aba8-cb3936a346e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpom2r0x5w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:37:21 np0005629333 nova_compute[244014]: 2026-02-25 12:37:21.792 244018 DEBUG oslo_concurrency.processutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ed3e74a3-3eba-4446-aba8-cb3936a346e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpom2r0x5w" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
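[editor's note] The config drive is built by shelling out through oslo.concurrency's processutils, exactly as the CMD lines show. A sketch of the equivalent call under that assumption; the flags, paths, and the /tmp staging directory are copied from the log, not invented:

    from oslo_concurrency import processutils

    # Pack the staged metadata directory into an ISO9660 image labelled
    # "config-2", the volume label cloud-init probes for at boot.
    out, err = processutils.execute(
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/ed3e74a3-3eba-4446-aba8-cb3936a346e0/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmpom2r0x5w')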
Feb 25 07:37:21 np0005629333 nova_compute[244014]: 2026-02-25 12:37:21.838 244018 DEBUG nova.storage.rbd_utils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] rbd image ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:37:21 np0005629333 nova_compute[244014]: 2026-02-25 12:37:21.842 244018 DEBUG oslo_concurrency.processutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ed3e74a3-3eba-4446-aba8-cb3936a346e0/disk.config ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:37:21 np0005629333 nova_compute[244014]: 2026-02-25 12:37:21.875 244018 DEBUG nova.network.neutron [req-6fe62b31-3e3e-4e9e-96d9-5f224fff1b7d req-2186ea1a-748d-4ce8-b2ac-9beb3766f104 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Updated VIF entry in instance network info cache for port 7e3c890f-a02f-487b-9c31-7341f5caa914. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:37:21 np0005629333 nova_compute[244014]: 2026-02-25 12:37:21.876 244018 DEBUG nova.network.neutron [req-6fe62b31-3e3e-4e9e-96d9-5f224fff1b7d req-2186ea1a-748d-4ce8-b2ac-9beb3766f104 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Updating instance_info_cache with network_info: [{"id": "7e3c890f-a02f-487b-9c31-7341f5caa914", "address": "fa:16:3e:e9:ff:e0", "network": {"id": "a3ce9bca-ffe4-4184-87c6-054df1c1e0b8", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-641075665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04e53ffc2c7b47b993b4fd34d0c71d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3c890f-a0", "ovs_interfaceid": "7e3c890f-a02f-487b-9c31-7341f5caa914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:37:21 np0005629333 nova_compute[244014]: 2026-02-25 12:37:21.901 244018 DEBUG oslo_concurrency.lockutils [req-6fe62b31-3e3e-4e9e-96d9-5f224fff1b7d req-2186ea1a-748d-4ce8-b2ac-9beb3766f104 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-ed3e74a3-3eba-4446-aba8-cb3936a346e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.000 244018 DEBUG oslo_concurrency.processutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ed3e74a3-3eba-4446-aba8-cb3936a346e0/disk.config ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.001 244018 INFO nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Deleting local config drive /var/lib/nova/instances/ed3e74a3-3eba-4446-aba8-cb3936a346e0/disk.config because it was imported into RBD.#033[00m
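[editor's note] The sequence at 12:37:21.838-12:37:22.001 is check-for-existing-image, rbd import, delete local copy. A sketch of the same flow using the ceph rados/rbd Python bindings for the existence check and the rbd CLI for the import, mirroring the log; this is illustrative, not nova's exact code:

    import subprocess
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        with cluster.open_ioctx('vms') as ioctx:
            name = 'ed3e74a3-3eba-4446-aba8-cb3936a346e0_disk.config'
            try:
                rbd.Image(ioctx, name).close()       # already imported?
            except rbd.ImageNotFound:
                # Matches "rbd image ... does not exist" followed by the import.
                subprocess.run(
                    ['rbd', 'import', '--pool', 'vms',
                     '/var/lib/nova/instances/ed3e74a3-3eba-4446-aba8-cb3936a346e0/disk.config',
                     name, '--image-format=2',
                     '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                    check=True)
    finally:
        cluster.shutdown()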
Feb 25 07:37:22 np0005629333 kernel: tap7e3c890f-a0: entered promiscuous mode
Feb 25 07:37:22 np0005629333 NetworkManager[49836]: <info>  [1772023042.0411] manager: (tap7e3c890f-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/387)
Feb 25 07:37:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:22Z|00920|binding|INFO|Claiming lport 7e3c890f-a02f-487b-9c31-7341f5caa914 for this chassis.
Feb 25 07:37:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:22Z|00921|binding|INFO|7e3c890f-a02f-487b-9c31-7341f5caa914: Claiming fa:16:3e:e9:ff:e0 10.100.0.14
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.048 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:22 np0005629333 systemd-udevd[327340]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:37:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:37:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1809555701' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.072 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:ff:e0 10.100.0.14'], port_security=['fa:16:3e:e9:ff:e0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ed3e74a3-3eba-4446-aba8-cb3936a346e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '04e53ffc2c7b47b993b4fd34d0c71d77', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a4d5db6f-4b6b-4f9f-b234-d31c315a5616', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9322afa8-c9d9-4d31-97f4-43f86a60b202, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=7e3c890f-a02f-487b-9c31-7341f5caa914) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.075 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 7e3c890f-a02f-487b-9c31-7341f5caa914 in datapath a3ce9bca-ffe4-4184-87c6-054df1c1e0b8 bound to our chassis#033[00m
Feb 25 07:37:22 np0005629333 NetworkManager[49836]: <info>  [1772023042.0769] device (tap7e3c890f-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:37:22 np0005629333 NetworkManager[49836]: <info>  [1772023042.0773] device (tap7e3c890f-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.077 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a3ce9bca-ffe4-4184-87c6-054df1c1e0b8#033[00m
Feb 25 07:37:22 np0005629333 systemd-machined[210048]: New machine qemu-120-instance-0000005d.
Feb 25 07:37:22 np0005629333 systemd[1]: Started Virtual Machine qemu-120-instance-0000005d.
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.086 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[70be9ae8-a71c-4cb6-a2ee-b5318b6bc0a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.086 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa3ce9bca-f1 in ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
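[editor's note] Behind this line the agent (via its privsep daemon) builds a veth pair: tapa3ce9bca-f0 stays in the root namespace and is later plugged into br-int, while tapa3ce9bca-f1 is moved into the ovnmeta- namespace for the metadata proxy. A rough sketch of that plumbing with pyroute2, the library neutron's privileged ip_lib wraps; names are from the log, and idempotence/error handling are omitted:

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8'
    netns.create(ns)                     # assumes the namespace does not exist yet

    ipr = IPRoute()
    # Create the pair, then push the -f1 end into the metadata namespace.
    ipr.link('add', ifname='tapa3ce9bca-f0', kind='veth', peer='tapa3ce9bca-f1')
    idx = ipr.link_lookup(ifname='tapa3ce9bca-f1')[0]
    ipr.link('set', index=idx, net_ns_fd=ns)
    ipr.close()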
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.088 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa3ce9bca-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.088 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cc9733eb-46b6-47ba-be45-38553262cbb9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.089 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee15c16-eff6-4776-b914-65fe7c866197]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:22Z|00922|binding|INFO|Setting lport 7e3c890f-a02f-487b-9c31-7341f5caa914 ovn-installed in OVS
Feb 25 07:37:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:22Z|00923|binding|INFO|Setting lport 7e3c890f-a02f-487b-9c31-7341f5caa914 up in Southbound
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.101 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[9f105aa4-ace2-45f0-973e-b4aa156b706c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.101 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.104 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
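[editor's note] The periodic resource update sizes the RBD-backed disk pool by shelling out to `ceph df --format=json` (0.631 s round trip here, audited by the monitor a few lines up). A sketch of reading the same numbers, assuming the ceph CLI and the client.openstack keyring from the log:

    import json
    import subprocess

    raw = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(raw)

    # Cluster-wide totals; per-pool numbers live under stats['pools'].
    total = stats['stats']['total_bytes']
    avail = stats['stats']['total_avail_bytes']
    print(f"{avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")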
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.111 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cf380293-574f-4eae-b101-a0eace7823bd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 437 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 126 op/s
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.117 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.131 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cde97d0b-b09a-4ff3-a2d7-3d22d563379a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:22 np0005629333 NetworkManager[49836]: <info>  [1772023042.1390] manager: (tapa3ce9bca-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/388)
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.138 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9cc7a1fb-3d83-4f39-930a-2404b91d7436]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.162 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[637cd4d0-192c-4546-9ace-5fa1267f279e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.163 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
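[editor's note] The inventory dict above is what placement admits against; effective schedulable capacity per resource class is (total - reserved) * allocation_ratio. Worked numbers for this host:

    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2 -> what the scheduler can place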
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.164 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[47759ce7-daf5-4b73-8b7b-fbd7aaa78f49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:22 np0005629333 NetworkManager[49836]: <info>  [1772023042.1791] device (tapa3ce9bca-f0): carrier: link connected
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.182 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e2310683-ba4c-412e-8f0a-c5a895a7fed6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.195 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c4245f03-828f-4858-802e-92026e3c242b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3ce9bca-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:b3:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501158, 'reachable_time': 39897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327376, 'error': None, 'target': 'ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.206 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5f601247-e536-430d-aa81-d3ac756eaa80]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feec:b353'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 501158, 'tstamp': 501158}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327377, 'error': None, 'target': 'ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.218 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7952a9ba-bf46-4b26-a415-a9b25c13d77e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa3ce9bca-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:b3:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501158, 'reachable_time': 39897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 327378, 'error': None, 'target': 'ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.225 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.226 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.381s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.241 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[77e8febb-0931-4879-8d67-920ecc05c42f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.288 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4e7977c3-6dda-4d27-a7fa-a5a3877d7fad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.289 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3ce9bca-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.289 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.290 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa3ce9bca-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.292 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:22 np0005629333 kernel: tapa3ce9bca-f0: entered promiscuous mode
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.294 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:22 np0005629333 NetworkManager[49836]: <info>  [1772023042.2952] manager: (tapa3ce9bca-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/389)
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.298 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa3ce9bca-f0, col_values=(('external_ids', {'iface-id': '241e63c3-9527-4162-9586-5bef285d531c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
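[editor's note] The three ovsdbapp transactions above (DelPortCommand from br-ex, AddPortCommand to br-int, DbSetCommand of external_ids:iface-id) are what lets ovn-controller bind the metadata veth to logical port 241e63c3-...; the "Releasing lport" message right after is it reacting to the iface-id change. The same sequence expressed with the ovs-vsctl CLI, as a sketch:

    import subprocess

    port = 'tapa3ce9bca-f0'
    iface_id = '241e63c3-9527-4162-9586-5bef285d531c'

    # Mirror of DelPortCommand / AddPortCommand / DbSetCommand above.
    subprocess.run(['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', port], check=True)
    subprocess.run(['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port], check=True)
    subprocess.run(['ovs-vsctl', 'set', 'Interface', port,
                    f'external_ids:iface-id={iface_id}'], check=True)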
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.299 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:22Z|00924|binding|INFO|Releasing lport 241e63c3-9527-4162-9586-5bef285d531c from this chassis (sb_readonly=0)
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.300 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.302 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a3ce9bca-ffe4-4184-87c6-054df1c1e0b8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a3ce9bca-ffe4-4184-87c6-054df1c1e0b8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.303 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e870baa9-4617-4a83-a339-f534639f8f9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.304 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/a3ce9bca-ffe4-4184-87c6-054df1c1e0b8.pid.haproxy
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID a3ce9bca-ffe4-4184-87c6-054df1c1e0b8
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:37:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:22.304 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8', 'env', 'PROCESS_TAG=haproxy-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a3ce9bca-ffe4-4184-87c6-054df1c1e0b8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
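[editor's note] The rendered haproxy config above binds 169.254.169.254:80 inside the ovnmeta- namespace, forwards to the agent's unix socket at /var/lib/neutron/metadata_proxy (an absolute path as a server address means a unix socket to haproxy), and tags each request with X-OVN-Network-ID so the agent can resolve the caller's network. From a guest on that network the classic probe looks like the following; requests is an assumed client, any HTTP library works:

    import requests

    # Run inside a guest on network a3ce9bca-...: haproxy in the namespace
    # answers on the link-local metadata address and relays to the agent.
    r = requests.get('http://169.254.169.254/openstack/latest/meta_data.json',
                     timeout=10)
    print(r.json()['uuid'])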
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.307 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.628 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023042.628258, ed3e74a3-3eba-4446-aba8-cb3936a346e0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.629 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] VM Started (Lifecycle Event)#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.661 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.666 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023042.628526, ed3e74a3-3eba-4446-aba8-cb3936a346e0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.667 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:37:22 np0005629333 podman[327452]: 2026-02-25 12:37:22.668265664 +0000 UTC m=+0.061729813 container create 26b1af2270e7c023d74f83bfdd07246185d24895a52af853e6acb358bb9d9a7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.703 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:37:22 np0005629333 systemd[1]: Started libpod-conmon-26b1af2270e7c023d74f83bfdd07246185d24895a52af853e6acb358bb9d9a7b.scope.
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.708 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
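[editor's note] "current DB power_state: 0, VM power_state: 3" reads against nova's power-state constants: 0 is NOSTATE (nothing recorded yet for a building instance) and 3 is PAUSED, the brief libvirt pause during spawn that triggered this lifecycle event. The mapping, as defined in nova/compute/power_state.py:

    NOSTATE   = 0x00   # DB has no power state recorded yet
    RUNNING   = 0x01
    PAUSED    = 0x03   # guest paused briefly while spawn completes
    SHUTDOWN  = 0x04
    CRASHED   = 0x06
    SUSPENDED = 0x07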
Feb 25 07:37:22 np0005629333 podman[327452]: 2026-02-25 12:37:22.628047679 +0000 UTC m=+0.021511848 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:37:22 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:37:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c53868578ef99a8810c9684c99d8c731ec4e09f65916a481d668654b6628d16a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:37:22 np0005629333 podman[327452]: 2026-02-25 12:37:22.747946853 +0000 UTC m=+0.141411002 container init 26b1af2270e7c023d74f83bfdd07246185d24895a52af853e6acb358bb9d9a7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:37:22 np0005629333 podman[327452]: 2026-02-25 12:37:22.752137271 +0000 UTC m=+0.145601420 container start 26b1af2270e7c023d74f83bfdd07246185d24895a52af853e6acb358bb9d9a7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.757 244018 DEBUG nova.compute.manager [req-a93c97d8-61ad-4c81-881d-d6c39bfc7ca0 req-1ca95013-fd70-4603-8652-0914900d3ad6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Received event network-vif-plugged-7e3c890f-a02f-487b-9c31-7341f5caa914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.757 244018 DEBUG oslo_concurrency.lockutils [req-a93c97d8-61ad-4c81-881d-d6c39bfc7ca0 req-1ca95013-fd70-4603-8652-0914900d3ad6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.758 244018 DEBUG oslo_concurrency.lockutils [req-a93c97d8-61ad-4c81-881d-d6c39bfc7ca0 req-1ca95013-fd70-4603-8652-0914900d3ad6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.758 244018 DEBUG oslo_concurrency.lockutils [req-a93c97d8-61ad-4c81-881d-d6c39bfc7ca0 req-1ca95013-fd70-4603-8652-0914900d3ad6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.760 244018 DEBUG nova.compute.manager [req-a93c97d8-61ad-4c81-881d-d6c39bfc7ca0 req-1ca95013-fd70-4603-8652-0914900d3ad6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Processing event network-vif-plugged-7e3c890f-a02f-487b-9c31-7341f5caa914 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.761 244018 DEBUG nova.compute.manager [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.765 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.768 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.768 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023042.7645283, ed3e74a3-3eba-4446-aba8-cb3936a346e0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.769 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.778 244018 INFO nova.virt.libvirt.driver [-] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Instance spawned successfully.#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.779 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:37:22 np0005629333 neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8[327467]: [NOTICE]   (327471) : New worker (327473) forked
Feb 25 07:37:22 np0005629333 neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8[327467]: [NOTICE]   (327471) : Loading success.
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.810 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.815 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.815 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.816 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.817 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.818 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.818 244018 DEBUG nova.virt.libvirt.driver [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
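[editor's note] The six "Found default for ..." lines above pin the image-property defaults this libvirt driver registered for the instance; collected in one place for reference:

    # Defaults registered by _register_undefined_instance_details above.
    image_property_defaults = {
        'hw_cdrom_bus':     'sata',
        'hw_disk_bus':      'virtio',
        'hw_input_bus':     'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model':   'virtio',
        'hw_vif_model':     'virtio',
    }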
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.824 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:37:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:37:22 np0005629333 nova_compute[244014]: 2026-02-25 12:37:22.897 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:37:23 np0005629333 nova_compute[244014]: 2026-02-25 12:37:23.001 244018 INFO nova.compute.manager [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Took 7.96 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:37:23 np0005629333 nova_compute[244014]: 2026-02-25 12:37:23.002 244018 DEBUG nova.compute.manager [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:37:23 np0005629333 nova_compute[244014]: 2026-02-25 12:37:23.120 244018 INFO nova.compute.manager [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Took 10.09 seconds to build instance.#033[00m
Feb 25 07:37:23 np0005629333 nova_compute[244014]: 2026-02-25 12:37:23.152 244018 DEBUG oslo_concurrency.lockutils [None req-45e534c1-3ae0-49c5-b7ce-e7a7606ec3ed 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
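[editor's note] The timings just logged (7.96 s to spawn on the hypervisor, 10.09 s end-to-end build, instance lock held 10.218 s) can be scraped from a journal dump for latency tracking. A sketch with regexes tuned to the exact phrasing above; the function name is illustrative:

    import re

    spawn_re = re.compile(r'Took (\d+\.\d+) seconds to spawn the instance')
    build_re = re.compile(r'Took (\d+\.\d+) seconds to build instance')

    def build_timings(lines):
        # Yield (kind, seconds) for every spawn/build completion in the dump.
        for line in lines:
            for kind, rx in (('spawn', spawn_re), ('build', build_re)):
                m = rx.search(line)
                if m:
                    yield kind, float(m.group(1))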
Feb 25 07:37:23 np0005629333 nova_compute[244014]: 2026-02-25 12:37:23.225 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:37:23 np0005629333 nova_compute[244014]: 2026-02-25 12:37:23.226 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:37:23 np0005629333 nova_compute[244014]: 2026-02-25 12:37:23.226 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:37:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 437 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 126 op/s
Feb 25 07:37:24 np0005629333 nova_compute[244014]: 2026-02-25 12:37:24.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:37:24 np0005629333 nova_compute[244014]: 2026-02-25 12:37:24.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 07:37:24 np0005629333 nova_compute[244014]: 2026-02-25 12:37:24.955 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:25 np0005629333 nova_compute[244014]: 2026-02-25 12:37:25.112 244018 DEBUG nova.compute.manager [req-ad5c41c8-4879-4a0b-ae5c-9a4c5c3a7095 req-b30e8d33-3cc8-43e7-b797-926eb3cf4762 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Received event network-vif-plugged-7e3c890f-a02f-487b-9c31-7341f5caa914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:37:25 np0005629333 nova_compute[244014]: 2026-02-25 12:37:25.113 244018 DEBUG oslo_concurrency.lockutils [req-ad5c41c8-4879-4a0b-ae5c-9a4c5c3a7095 req-b30e8d33-3cc8-43e7-b797-926eb3cf4762 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:37:25 np0005629333 nova_compute[244014]: 2026-02-25 12:37:25.113 244018 DEBUG oslo_concurrency.lockutils [req-ad5c41c8-4879-4a0b-ae5c-9a4c5c3a7095 req-b30e8d33-3cc8-43e7-b797-926eb3cf4762 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:37:25 np0005629333 nova_compute[244014]: 2026-02-25 12:37:25.113 244018 DEBUG oslo_concurrency.lockutils [req-ad5c41c8-4879-4a0b-ae5c-9a4c5c3a7095 req-b30e8d33-3cc8-43e7-b797-926eb3cf4762 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:37:25 np0005629333 nova_compute[244014]: 2026-02-25 12:37:25.114 244018 DEBUG nova.compute.manager [req-ad5c41c8-4879-4a0b-ae5c-9a4c5c3a7095 req-b30e8d33-3cc8-43e7-b797-926eb3cf4762 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] No waiting events found dispatching network-vif-plugged-7e3c890f-a02f-487b-9c31-7341f5caa914 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:37:25 np0005629333 nova_compute[244014]: 2026-02-25 12:37:25.114 244018 WARNING nova.compute.manager [req-ad5c41c8-4879-4a0b-ae5c-9a4c5c3a7095 req-b30e8d33-3cc8-43e7-b797-926eb3cf4762 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Received unexpected event network-vif-plugged-7e3c890f-a02f-487b-9c31-7341f5caa914 for instance with vm_state active and task_state None.#033[00m
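The Acquiring/acquired/"released" trio and the "No waiting events" dispatch above are oslo.concurrency's named-lock pattern around Nova's per-instance event queue; the WARNING just means the vif-plugged notification arrived after boot had already stopped waiting for it. A sketch of the locking pattern, assuming only the documented lockutils API; the pending map and variable names are illustrative:

from oslo_concurrency import lockutils

instance_uuid = 'ed3e74a3-3eba-4446-aba8-cb3936a346e0'  # from the log above
pending = {}  # hypothetical map of event name -> waiter

with lockutils.lock(instance_uuid + '-events'):
    # pop_instance_event and clear_events_for_instance do their
    # bookkeeping under this named in-process lock, so event delivery,
    # boot waits and teardown never mutate the queue concurrently.
    waiter = pending.pop('network-vif-plugged-7e3c890f-a02f-487b-9c31-7341f5caa914', None)

if waiter is None:
    # No one is waiting: log the "Received unexpected event" WARNING.
    pass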
Feb 25 07:37:25 np0005629333 nova_compute[244014]: 2026-02-25 12:37:25.730 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 305 active+clean; 437 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 116 op/s
Feb 25 07:37:26 np0005629333 nova_compute[244014]: 2026-02-25 12:37:26.659 244018 DEBUG oslo_concurrency.lockutils [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Acquiring lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:37:26 np0005629333 nova_compute[244014]: 2026-02-25 12:37:26.660 244018 DEBUG oslo_concurrency.lockutils [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:37:26 np0005629333 nova_compute[244014]: 2026-02-25 12:37:26.660 244018 DEBUG oslo_concurrency.lockutils [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Acquiring lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:37:26 np0005629333 nova_compute[244014]: 2026-02-25 12:37:26.660 244018 DEBUG oslo_concurrency.lockutils [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:37:26 np0005629333 nova_compute[244014]: 2026-02-25 12:37:26.661 244018 DEBUG oslo_concurrency.lockutils [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:37:26 np0005629333 nova_compute[244014]: 2026-02-25 12:37:26.662 244018 INFO nova.compute.manager [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Terminating instance#033[00m
Feb 25 07:37:26 np0005629333 nova_compute[244014]: 2026-02-25 12:37:26.663 244018 DEBUG nova.compute.manager [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:37:26 np0005629333 kernel: tap7e3c890f-a0 (unregistering): left promiscuous mode
Feb 25 07:37:26 np0005629333 NetworkManager[49836]: <info>  [1772023046.7038] device (tap7e3c890f-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:37:26 np0005629333 nova_compute[244014]: 2026-02-25 12:37:26.708 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:26Z|00925|binding|INFO|Releasing lport 7e3c890f-a02f-487b-9c31-7341f5caa914 from this chassis (sb_readonly=0)
Feb 25 07:37:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:26Z|00926|binding|INFO|Setting lport 7e3c890f-a02f-487b-9c31-7341f5caa914 down in Southbound
Feb 25 07:37:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:26Z|00927|binding|INFO|Removing iface tap7e3c890f-a0 ovn-installed in OVS
Feb 25 07:37:26 np0005629333 nova_compute[244014]: 2026-02-25 12:37:26.711 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:26.717 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:ff:e0 10.100.0.14'], port_security=['fa:16:3e:e9:ff:e0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'ed3e74a3-3eba-4446-aba8-cb3936a346e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '04e53ffc2c7b47b993b4fd34d0c71d77', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a4d5db6f-4b6b-4f9f-b234-d31c315a5616', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9322afa8-c9d9-4d31-97f4-43f86a60b202, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=7e3c890f-a02f-487b-9c31-7341f5caa914) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:37:26 np0005629333 nova_compute[244014]: 2026-02-25 12:37:26.721 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:26.723 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 7e3c890f-a02f-487b-9c31-7341f5caa914 in datapath a3ce9bca-ffe4-4184-87c6-054df1c1e0b8 unbound from our chassis#033[00m
Feb 25 07:37:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:26.727 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a3ce9bca-ffe4-4184-87c6-054df1c1e0b8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:37:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:26.729 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f81c1ff8-2864-480f-8c09-9bc5a8f282c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:26.729 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8 namespace which is not needed anymore#033[00m
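PortBindingUpdatedEvent above is an ovsdbapp row event: the agent watches Southbound Port_Binding updates and, once a port's chassis column is cleared, tears down the per-network metadata namespace if no VIFs remain. A hedged sketch of that pattern, assuming ovsdbapp's public RowEvent API; the matching logic is illustrative, not Neutron's exact implementation:

from ovsdbapp.backend.ovs_idl import event as row_event


class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self):
        # Matches the log line: events=('update',), table='Port_Binding'.
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def run(self, event, row, old):
        # 'old' holds prior values of the changed columns; chassis going
        # from set to empty is the "unbound from our chassis" case above.
        if getattr(old, 'chassis', None) and not row.chassis:
            print(f'Port {row.logical_port} unbound; tear down ovnmeta '
                  f'namespace if no VIF ports remain')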
Feb 25 07:37:26 np0005629333 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Feb 25 07:37:26 np0005629333 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d0000005d.scope: Consumed 4.484s CPU time.
Feb 25 07:37:26 np0005629333 systemd-machined[210048]: Machine qemu-120-instance-0000005d terminated.
Feb 25 07:37:26 np0005629333 neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8[327467]: [NOTICE]   (327471) : haproxy version is 2.8.14-c23fe91
Feb 25 07:37:26 np0005629333 neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8[327467]: [NOTICE]   (327471) : path to executable is /usr/sbin/haproxy
Feb 25 07:37:26 np0005629333 neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8[327467]: [WARNING]  (327471) : Exiting Master process...
Feb 25 07:37:26 np0005629333 neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8[327467]: [ALERT]    (327471) : Current worker (327473) exited with code 143 (Terminated)
Feb 25 07:37:26 np0005629333 neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8[327467]: [WARNING]  (327471) : All workers exited. Exiting... (0)
Feb 25 07:37:26 np0005629333 systemd[1]: libpod-26b1af2270e7c023d74f83bfdd07246185d24895a52af853e6acb358bb9d9a7b.scope: Deactivated successfully.
Feb 25 07:37:26 np0005629333 podman[327507]: 2026-02-25 12:37:26.882435637 +0000 UTC m=+0.056285539 container died 26b1af2270e7c023d74f83bfdd07246185d24895a52af853e6acb358bb9d9a7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:37:26 np0005629333 nova_compute[244014]: 2026-02-25 12:37:26.892 244018 INFO nova.virt.libvirt.driver [-] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Instance destroyed successfully.#033[00m
Feb 25 07:37:26 np0005629333 nova_compute[244014]: 2026-02-25 12:37:26.893 244018 DEBUG nova.objects.instance [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lazy-loading 'resources' on Instance uuid ed3e74a3-3eba-4446-aba8-cb3936a346e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:37:26 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-26b1af2270e7c023d74f83bfdd07246185d24895a52af853e6acb358bb9d9a7b-userdata-shm.mount: Deactivated successfully.
Feb 25 07:37:26 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c53868578ef99a8810c9684c99d8c731ec4e09f65916a481d668654b6628d16a-merged.mount: Deactivated successfully.
Feb 25 07:37:26 np0005629333 nova_compute[244014]: 2026-02-25 12:37:26.910 244018 DEBUG nova.virt.libvirt.vif [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:37:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-1563092304',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-1563092304',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-1563092304',id=93,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:37:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='04e53ffc2c7b47b993b4fd34d0c71d77',ramdisk_id='',reservation_id='r-5zdt11tm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-685623650',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-685623650-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:37:23Z,user_data=None,user_id='3187080005d24d4b8fc920bcd975004b',uuid=ed3e74a3-3eba-4446-aba8-cb3936a346e0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7e3c890f-a02f-487b-9c31-7341f5caa914", "address": "fa:16:3e:e9:ff:e0", "network": {"id": "a3ce9bca-ffe4-4184-87c6-054df1c1e0b8", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-641075665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04e53ffc2c7b47b993b4fd34d0c71d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3c890f-a0", "ovs_interfaceid": "7e3c890f-a02f-487b-9c31-7341f5caa914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:37:26 np0005629333 nova_compute[244014]: 2026-02-25 12:37:26.915 244018 DEBUG nova.network.os_vif_util [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Converting VIF {"id": "7e3c890f-a02f-487b-9c31-7341f5caa914", "address": "fa:16:3e:e9:ff:e0", "network": {"id": "a3ce9bca-ffe4-4184-87c6-054df1c1e0b8", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-641075665-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04e53ffc2c7b47b993b4fd34d0c71d77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7e3c890f-a0", "ovs_interfaceid": "7e3c890f-a02f-487b-9c31-7341f5caa914", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:37:26 np0005629333 nova_compute[244014]: 2026-02-25 12:37:26.916 244018 DEBUG nova.network.os_vif_util [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:ff:e0,bridge_name='br-int',has_traffic_filtering=True,id=7e3c890f-a02f-487b-9c31-7341f5caa914,network=Network(a3ce9bca-ffe4-4184-87c6-054df1c1e0b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e3c890f-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:37:26 np0005629333 nova_compute[244014]: 2026-02-25 12:37:26.916 244018 DEBUG os_vif [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:ff:e0,bridge_name='br-int',has_traffic_filtering=True,id=7e3c890f-a02f-487b-9c31-7341f5caa914,network=Network(a3ce9bca-ffe4-4184-87c6-054df1c1e0b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e3c890f-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:37:26 np0005629333 nova_compute[244014]: 2026-02-25 12:37:26.918 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:26 np0005629333 nova_compute[244014]: 2026-02-25 12:37:26.918 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e3c890f-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:37:26 np0005629333 nova_compute[244014]: 2026-02-25 12:37:26.923 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:37:26 np0005629333 nova_compute[244014]: 2026-02-25 12:37:26.925 244018 INFO os_vif [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:ff:e0,bridge_name='br-int',has_traffic_filtering=True,id=7e3c890f-a02f-487b-9c31-7341f5caa914,network=Network(a3ce9bca-ffe4-4184-87c6-054df1c1e0b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7e3c890f-a0')#033[00m
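The unplug above reduces to a single idempotent OVSDB transaction: a DelPortCommand with if_exists=True, so an already-removed tap (the kernel unregistered it moments earlier) is not an error. A minimal standalone sketch, assuming ovsdbapp's public Open_vSwitch API and the usual local ovsdb-server socket path:

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

# Assumed local ovsdb-server socket; adjust for the deployment.
idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

# if_exists=True makes the delete a no-op when the port is already gone,
# which keeps unplug safe to retry.
api.del_port('tap7e3c890f-a0', bridge='br-int', if_exists=True).execute(
    check_error=True)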
Feb 25 07:37:26 np0005629333 podman[327507]: 2026-02-25 12:37:26.931754859 +0000 UTC m=+0.105604761 container cleanup 26b1af2270e7c023d74f83bfdd07246185d24895a52af853e6acb358bb9d9a7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 25 07:37:26 np0005629333 systemd[1]: libpod-conmon-26b1af2270e7c023d74f83bfdd07246185d24895a52af853e6acb358bb9d9a7b.scope: Deactivated successfully.
Feb 25 07:37:26 np0005629333 podman[327556]: 2026-02-25 12:37:26.999166261 +0000 UTC m=+0.045135985 container remove 26b1af2270e7c023d74f83bfdd07246185d24895a52af853e6acb358bb9d9a7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 07:37:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.004 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fb8229cb-c960-4272-9b09-82d2ec9e89db]: (4, ('Wed Feb 25 12:37:26 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8 (26b1af2270e7c023d74f83bfdd07246185d24895a52af853e6acb358bb9d9a7b)\n26b1af2270e7c023d74f83bfdd07246185d24895a52af853e6acb358bb9d9a7b\nWed Feb 25 12:37:26 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8 (26b1af2270e7c023d74f83bfdd07246185d24895a52af853e6acb358bb9d9a7b)\n26b1af2270e7c023d74f83bfdd07246185d24895a52af853e6acb358bb9d9a7b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.005 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0fb48c1b-6937-4f51-a6d4-c639b2cdc385]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.006 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa3ce9bca-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:37:27 np0005629333 kernel: tapa3ce9bca-f0: left promiscuous mode
Feb 25 07:37:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.016 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b7863a2a-d6b1-408c-80d4-60940fcbe338]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:27 np0005629333 nova_compute[244014]: 2026-02-25 12:37:27.016 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.037 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[67fcdb10-444e-43e0-8451-58400bf1c68c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.039 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eb1cb752-40c6-4348-85a0-fb29529c78f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.049 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[478f168d-6aca-48ba-8681-4ab114f0a2f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 501153, 'reachable_time': 40530, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327579, 'error': None, 'target': 'ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:27 np0005629333 systemd[1]: run-netns-ovnmeta\x2da3ce9bca\x2dffe4\x2d4184\x2d87c6\x2d054df1c1e0b8.mount: Deactivated successfully.
Feb 25 07:37:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.052 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:37:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.052 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[7b1c611c-5a15-44be-b0b8-13b80bb0e78b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
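remove_netns above runs inside the privsep daemon; Neutron's ip_lib drives netlink through pyroute2 for this. A hedged sketch of just the final step, assuming pyroute2's netns helpers (the namespace name is taken from the log):

from pyroute2 import netns

ns = 'ovnmeta-a3ce9bca-ffe4-4184-87c6-054df1c1e0b8'  # from the log above
if ns in netns.listnetns():
    netns.remove(ns)  # unmounts and deletes /var/run/netns/<ns>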
Feb 25 07:37:27 np0005629333 nova_compute[244014]: 2026-02-25 12:37:27.181 244018 INFO nova.virt.libvirt.driver [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Deleting instance files /var/lib/nova/instances/ed3e74a3-3eba-4446-aba8-cb3936a346e0_del#033[00m
Feb 25 07:37:27 np0005629333 nova_compute[244014]: 2026-02-25 12:37:27.182 244018 INFO nova.virt.libvirt.driver [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Deletion of /var/lib/nova/instances/ed3e74a3-3eba-4446-aba8-cb3936a346e0_del complete#033[00m
Feb 25 07:37:27 np0005629333 nova_compute[244014]: 2026-02-25 12:37:27.249 244018 INFO nova.compute.manager [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Took 0.59 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:37:27 np0005629333 nova_compute[244014]: 2026-02-25 12:37:27.249 244018 DEBUG oslo.service.loopingcall [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:37:27 np0005629333 nova_compute[244014]: 2026-02-25 12:37:27.249 244018 DEBUG nova.compute.manager [-] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:37:27 np0005629333 nova_compute[244014]: 2026-02-25 12:37:27.250 244018 DEBUG nova.network.neutron [-] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:37:27 np0005629333 nova_compute[244014]: 2026-02-25 12:37:27.758 244018 DEBUG oslo_concurrency.lockutils [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:37:27 np0005629333 nova_compute[244014]: 2026-02-25 12:37:27.758 244018 DEBUG oslo_concurrency.lockutils [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:37:27 np0005629333 nova_compute[244014]: 2026-02-25 12:37:27.758 244018 DEBUG oslo_concurrency.lockutils [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:37:27 np0005629333 nova_compute[244014]: 2026-02-25 12:37:27.759 244018 DEBUG oslo_concurrency.lockutils [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:37:27 np0005629333 nova_compute[244014]: 2026-02-25 12:37:27.759 244018 DEBUG oslo_concurrency.lockutils [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:37:27 np0005629333 nova_compute[244014]: 2026-02-25 12:37:27.760 244018 INFO nova.compute.manager [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Terminating instance#033[00m
Feb 25 07:37:27 np0005629333 nova_compute[244014]: 2026-02-25 12:37:27.760 244018 DEBUG nova.compute.manager [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:37:27 np0005629333 kernel: tapca16d7f1-7f (unregistering): left promiscuous mode
Feb 25 07:37:27 np0005629333 NetworkManager[49836]: <info>  [1772023047.8071] device (tapca16d7f1-7f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:37:27 np0005629333 nova_compute[244014]: 2026-02-25 12:37:27.811 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:27Z|00928|binding|INFO|Releasing lport ca16d7f1-7f94-4d64-906e-e1469230e4f1 from this chassis (sb_readonly=0)
Feb 25 07:37:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:27Z|00929|binding|INFO|Setting lport ca16d7f1-7f94-4d64-906e-e1469230e4f1 down in Southbound
Feb 25 07:37:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:27Z|00930|binding|INFO|Removing iface tapca16d7f1-7f ovn-installed in OVS
Feb 25 07:37:27 np0005629333 nova_compute[244014]: 2026-02-25 12:37:27.813 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:27 np0005629333 nova_compute[244014]: 2026-02-25 12:37:27.817 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.818 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:9e:27 10.100.0.3'], port_security=['fa:16:3e:aa:9e:27 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '7c2fb1e7-04d0-4903-a675-8cda55bbb6ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38a239da-c933-4cbc-be00-dd127471e198', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '999f2a015b9c4bc98661fe6fe6db06a0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '90d0dcc5-a9c4-4f99-8901-06a2a2854544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80217008-ae39-43e4-aa35-a210336c586d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ca16d7f1-7f94-4d64-906e-e1469230e4f1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:37:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.821 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ca16d7f1-7f94-4d64-906e-e1469230e4f1 in datapath 38a239da-c933-4cbc-be00-dd127471e198 unbound from our chassis#033[00m
Feb 25 07:37:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.822 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 38a239da-c933-4cbc-be00-dd127471e198#033[00m
Feb 25 07:37:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:37:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.842 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a01a783c-5572-4592-9abe-681ef7a3b432]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:27 np0005629333 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Feb 25 07:37:27 np0005629333 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d0000005c.scope: Consumed 13.414s CPU time.
Feb 25 07:37:27 np0005629333 systemd-machined[210048]: Machine qemu-118-instance-0000005c terminated.
Feb 25 07:37:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.867 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c02cf11a-229a-45d8-82ba-59eef755ec7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.869 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8eebd2f2-de4f-4f7e-a9a0-023605087925]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.887 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7b99f481-81ba-4437-8156-3d8c679d369d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.900 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fa3d7e9f-2ea9-4706-93d2-cac24a0ec438]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38a239da-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:fa:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 27, 'tx_packets': 13, 'rx_bytes': 1414, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 27, 'tx_packets': 13, 'rx_bytes': 1414, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 497385, 'reachable_time': 19502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327589, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.908 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0ad8d6b8-2d15-4cad-9849-78c0cc8899d0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap38a239da-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 497395, 'tstamp': 497395}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327590, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap38a239da-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 497397, 'tstamp': 497397}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327590, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.910 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38a239da-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:37:27 np0005629333 nova_compute[244014]: 2026-02-25 12:37:27.911 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:27 np0005629333 nova_compute[244014]: 2026-02-25 12:37:27.915 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.915 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap38a239da-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:37:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.915 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:37:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.916 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap38a239da-c0, col_values=(('external_ids', {'iface-id': 'b61d6004-89be-4a9d-aeb0-ecb4f03f3526'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:37:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:27.916 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
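The two no-op transactions above re-assert the metadata tap's plumbing: the port is (re)added to br-int with may_exist=True, and its Interface record carries the external_ids:iface-id that lets ovn-controller bind it to the logical port. A minimal sketch of the same DbSetCommand through ovsdbapp, assuming its public Open_vSwitch API and the same local socket path as the earlier del_port sketch:

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

# ovn-controller matches external_ids:iface-id against the Southbound
# database to bind the OVS interface to its logical switch port.
api.db_set(
    'Interface', 'tap38a239da-c0',
    ('external_ids', {'iface-id': 'b61d6004-89be-4a9d-aeb0-ecb4f03f3526'}),
).execute(check_error=True)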
Feb 25 07:37:27 np0005629333 nova_compute[244014]: 2026-02-25 12:37:27.991 244018 INFO nova.virt.libvirt.driver [-] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Instance destroyed successfully.#033[00m
Feb 25 07:37:27 np0005629333 nova_compute[244014]: 2026-02-25 12:37:27.992 244018 DEBUG nova.objects.instance [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'resources' on Instance uuid 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:37:28 np0005629333 nova_compute[244014]: 2026-02-25 12:37:28.020 244018 DEBUG nova.virt.libvirt.vif [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:36:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1161077750',display_name='tempest-ListServerFiltersTestJSON-instance-1161077750',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1161077750',id=92,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:36:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='999f2a015b9c4bc98661fe6fe6db06a0',ramdisk_id='',reservation_id='r-hzdbk73p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1481541933',owner_user_name='tempest-ListServerFiltersTestJSON-1481541933-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:36:52Z,user_data=None,user_id='00eb5c915b2d41f69600acd33967d0f5',uuid=7c2fb1e7-04d0-4903-a675-8cda55bbb6ed,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "address": "fa:16:3e:aa:9e:27", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca16d7f1-7f", "ovs_interfaceid": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:37:28 np0005629333 nova_compute[244014]: 2026-02-25 12:37:28.021 244018 DEBUG nova.network.os_vif_util [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converting VIF {"id": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "address": "fa:16:3e:aa:9e:27", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca16d7f1-7f", "ovs_interfaceid": "ca16d7f1-7f94-4d64-906e-e1469230e4f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:37:28 np0005629333 nova_compute[244014]: 2026-02-25 12:37:28.022 244018 DEBUG nova.network.os_vif_util [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:9e:27,bridge_name='br-int',has_traffic_filtering=True,id=ca16d7f1-7f94-4d64-906e-e1469230e4f1,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca16d7f1-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:37:28 np0005629333 nova_compute[244014]: 2026-02-25 12:37:28.022 244018 DEBUG os_vif [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:9e:27,bridge_name='br-int',has_traffic_filtering=True,id=ca16d7f1-7f94-4d64-906e-e1469230e4f1,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca16d7f1-7f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:37:28 np0005629333 nova_compute[244014]: 2026-02-25 12:37:28.024 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:28 np0005629333 nova_compute[244014]: 2026-02-25 12:37:28.025 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca16d7f1-7f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:37:28 np0005629333 nova_compute[244014]: 2026-02-25 12:37:28.028 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:28 np0005629333 nova_compute[244014]: 2026-02-25 12:37:28.031 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:37:28 np0005629333 nova_compute[244014]: 2026-02-25 12:37:28.033 244018 INFO os_vif [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:9e:27,bridge_name='br-int',has_traffic_filtering=True,id=ca16d7f1-7f94-4d64-906e-e1469230e4f1,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca16d7f1-7f')#033[00m
Feb 25 07:37:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 305 active+clean; 422 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 195 op/s
Feb 25 07:37:28 np0005629333 nova_compute[244014]: 2026-02-25 12:37:28.278 244018 INFO nova.virt.libvirt.driver [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Deleting instance files /var/lib/nova/instances/7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_del#033[00m
Feb 25 07:37:28 np0005629333 nova_compute[244014]: 2026-02-25 12:37:28.279 244018 INFO nova.virt.libvirt.driver [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Deletion of /var/lib/nova/instances/7c2fb1e7-04d0-4903-a675-8cda55bbb6ed_del complete#033[00m
Feb 25 07:37:28 np0005629333 nova_compute[244014]: 2026-02-25 12:37:28.346 244018 INFO nova.compute.manager [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Took 0.59 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:37:28 np0005629333 nova_compute[244014]: 2026-02-25 12:37:28.347 244018 DEBUG oslo.service.loopingcall [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:37:28 np0005629333 nova_compute[244014]: 2026-02-25 12:37:28.347 244018 DEBUG nova.compute.manager [-] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:37:28 np0005629333 nova_compute[244014]: 2026-02-25 12:37:28.347 244018 DEBUG nova.network.neutron [-] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:37:28 np0005629333 nova_compute[244014]: 2026-02-25 12:37:28.704 244018 DEBUG nova.network.neutron [-] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:37:28 np0005629333 nova_compute[244014]: 2026-02-25 12:37:28.723 244018 INFO nova.compute.manager [-] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Took 1.47 seconds to deallocate network for instance.#033[00m
Feb 25 07:37:28 np0005629333 nova_compute[244014]: 2026-02-25 12:37:28.767 244018 DEBUG oslo_concurrency.lockutils [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:37:28 np0005629333 nova_compute[244014]: 2026-02-25 12:37:28.768 244018 DEBUG oslo_concurrency.lockutils [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:37:28 np0005629333 nova_compute[244014]: 2026-02-25 12:37:28.782 244018 DEBUG nova.compute.manager [req-e08d7017-0088-4b9e-bbff-e384660f0acd req-5d4e7965-75f7-41de-aeb4-87aa24feb9ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Received event network-vif-deleted-7e3c890f-a02f-487b-9c31-7341f5caa914 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:37:28 np0005629333 nova_compute[244014]: 2026-02-25 12:37:28.917 244018 DEBUG oslo_concurrency.processutils [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:37:29 np0005629333 nova_compute[244014]: 2026-02-25 12:37:29.044 244018 DEBUG nova.network.neutron [-] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:37:29 np0005629333 nova_compute[244014]: 2026-02-25 12:37:29.074 244018 INFO nova.compute.manager [-] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Took 0.73 seconds to deallocate network for instance.#033[00m
Feb 25 07:37:29 np0005629333 nova_compute[244014]: 2026-02-25 12:37:29.128 244018 DEBUG nova.compute.manager [req-49eb6003-c712-4992-81a8-bc15991f7984 req-136f26ad-dba7-4a76-947b-e20d51a6e453 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Received event network-vif-deleted-ca16d7f1-7f94-4d64-906e-e1469230e4f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:37:29 np0005629333 nova_compute[244014]: 2026-02-25 12:37:29.139 244018 DEBUG oslo_concurrency.lockutils [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:37:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:37:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4270513869' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:37:29 np0005629333 nova_compute[244014]: 2026-02-25 12:37:29.522 244018 DEBUG oslo_concurrency.processutils [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:37:29 np0005629333 nova_compute[244014]: 2026-02-25 12:37:29.527 244018 DEBUG nova.compute.provider_tree [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:37:29 np0005629333 nova_compute[244014]: 2026-02-25 12:37:29.546 244018 DEBUG nova.scheduler.client.report [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:37:29 np0005629333 nova_compute[244014]: 2026-02-25 12:37:29.574 244018 DEBUG oslo_concurrency.lockutils [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:37:29 np0005629333 nova_compute[244014]: 2026-02-25 12:37:29.576 244018 DEBUG oslo_concurrency.lockutils [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:37:29 np0005629333 nova_compute[244014]: 2026-02-25 12:37:29.607 244018 INFO nova.scheduler.client.report [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Deleted allocations for instance ed3e74a3-3eba-4446-aba8-cb3936a346e0#033[00m
Feb 25 07:37:29 np0005629333 nova_compute[244014]: 2026-02-25 12:37:29.672 244018 DEBUG oslo_concurrency.processutils [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:37:29 np0005629333 nova_compute[244014]: 2026-02-25 12:37:29.699 244018 DEBUG oslo_concurrency.lockutils [None req-5cf2cf44-6070-4a37-9e67-448e77cfe97a 3187080005d24d4b8fc920bcd975004b 04e53ffc2c7b47b993b4fd34d0c71d77 - - default default] Lock "ed3e74a3-3eba-4446-aba8-cb3936a346e0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:37:29 np0005629333 nova_compute[244014]: 2026-02-25 12:37:29.959 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1677: 305 pgs: 305 active+clean; 422 MiB data, 944 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 26 KiB/s wr, 155 op/s
Feb 25 07:37:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:37:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1185976889' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:37:30 np0005629333 nova_compute[244014]: 2026-02-25 12:37:30.221 244018 DEBUG oslo_concurrency.processutils [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:37:30 np0005629333 nova_compute[244014]: 2026-02-25 12:37:30.226 244018 DEBUG nova.compute.provider_tree [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:37:30 np0005629333 nova_compute[244014]: 2026-02-25 12:37:30.244 244018 DEBUG nova.scheduler.client.report [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:37:30 np0005629333 nova_compute[244014]: 2026-02-25 12:37:30.279 244018 DEBUG oslo_concurrency.lockutils [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:37:30 np0005629333 nova_compute[244014]: 2026-02-25 12:37:30.310 244018 INFO nova.scheduler.client.report [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Deleted allocations for instance 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed#033[00m
Feb 25 07:37:30 np0005629333 nova_compute[244014]: 2026-02-25 12:37:30.388 244018 DEBUG oslo_concurrency.lockutils [None req-cd9ecd6e-878f-4831-9c6d-2da54baddd38 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "7c2fb1e7-04d0-4903-a675-8cda55bbb6ed" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:37:30 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:30Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:65:76:ae 10.100.0.8
Feb 25 07:37:30 np0005629333 nova_compute[244014]: 2026-02-25 12:37:30.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:37:30 np0005629333 nova_compute[244014]: 2026-02-25 12:37:30.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 25 07:37:30 np0005629333 nova_compute[244014]: 2026-02-25 12:37:30.894 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb 25 07:37:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:37:30
Feb 25 07:37:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:37:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:37:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'backups', 'volumes', 'default.rgw.meta', 'vms', '.rgw.root', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log']
Feb 25 07:37:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:37:30 np0005629333 nova_compute[244014]: 2026-02-25 12:37:30.992 244018 DEBUG oslo_concurrency.lockutils [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "6076a107-bdef-4c8a-8f75-887cdb4833f0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:37:30 np0005629333 nova_compute[244014]: 2026-02-25 12:37:30.993 244018 DEBUG oslo_concurrency.lockutils [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "6076a107-bdef-4c8a-8f75-887cdb4833f0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:37:30 np0005629333 nova_compute[244014]: 2026-02-25 12:37:30.994 244018 DEBUG oslo_concurrency.lockutils [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "6076a107-bdef-4c8a-8f75-887cdb4833f0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:37:30 np0005629333 nova_compute[244014]: 2026-02-25 12:37:30.994 244018 DEBUG oslo_concurrency.lockutils [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "6076a107-bdef-4c8a-8f75-887cdb4833f0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:37:30 np0005629333 nova_compute[244014]: 2026-02-25 12:37:30.995 244018 DEBUG oslo_concurrency.lockutils [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "6076a107-bdef-4c8a-8f75-887cdb4833f0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:37:30 np0005629333 nova_compute[244014]: 2026-02-25 12:37:30.997 244018 INFO nova.compute.manager [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Terminating instance#033[00m
Feb 25 07:37:30 np0005629333 nova_compute[244014]: 2026-02-25 12:37:30.999 244018 DEBUG nova.compute.manager [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:37:31 np0005629333 kernel: tap66c41a3b-23 (unregistering): left promiscuous mode
Feb 25 07:37:31 np0005629333 NetworkManager[49836]: <info>  [1772023051.0555] device (tap66c41a3b-23): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:37:31 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:31Z|00931|binding|INFO|Releasing lport 66c41a3b-23a5-4cbf-a70e-416adf1617e5 from this chassis (sb_readonly=0)
Feb 25 07:37:31 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:31Z|00932|binding|INFO|Setting lport 66c41a3b-23a5-4cbf-a70e-416adf1617e5 down in Southbound
Feb 25 07:37:31 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:31Z|00933|binding|INFO|Removing iface tap66c41a3b-23 ovn-installed in OVS
Feb 25 07:37:31 np0005629333 nova_compute[244014]: 2026-02-25 12:37:31.062 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:31 np0005629333 nova_compute[244014]: 2026-02-25 12:37:31.065 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:31 np0005629333 nova_compute[244014]: 2026-02-25 12:37:31.072 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:31.074 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:11:ce 10.100.0.6'], port_security=['fa:16:3e:6d:11:ce 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6076a107-bdef-4c8a-8f75-887cdb4833f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38a239da-c933-4cbc-be00-dd127471e198', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '999f2a015b9c4bc98661fe6fe6db06a0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '90d0dcc5-a9c4-4f99-8901-06a2a2854544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80217008-ae39-43e4-aa35-a210336c586d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=66c41a3b-23a5-4cbf-a70e-416adf1617e5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:37:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:31.078 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 66c41a3b-23a5-4cbf-a70e-416adf1617e5 in datapath 38a239da-c933-4cbc-be00-dd127471e198 unbound from our chassis#033[00m
Feb 25 07:37:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:31.080 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 38a239da-c933-4cbc-be00-dd127471e198#033[00m
Feb 25 07:37:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:31.094 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[921cd520-bae0-4893-b9d7-3b7c4282fe79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:31 np0005629333 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Feb 25 07:37:31 np0005629333 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d0000005b.scope: Consumed 13.362s CPU time.
Feb 25 07:37:31 np0005629333 systemd-machined[210048]: Machine qemu-117-instance-0000005b terminated.
Feb 25 07:37:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:31.117 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb6754e-1f41-4d1b-aa6c-56e213fdb6d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:31.120 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b2554a-1295-45be-ab01-dc08fd970a4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:31.145 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d011d24d-0054-4c5c-8324-47dd8fcb1af4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:31.160 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[582d70da-7cbb-4e4c-81ef-d47fa7cd3251]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38a239da-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f1:fa:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 27, 'tx_packets': 15, 'rx_bytes': 1414, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 27, 'tx_packets': 15, 'rx_bytes': 1414, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 272], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 497385, 'reachable_time': 19502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327679, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:31.172 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eb895b3e-1324-40d5-9d1c-7423a481fdb7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap38a239da-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 497395, 'tstamp': 497395}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327680, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap38a239da-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 497397, 'tstamp': 497397}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327680, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:31.173 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38a239da-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:37:31 np0005629333 nova_compute[244014]: 2026-02-25 12:37:31.175 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:31 np0005629333 nova_compute[244014]: 2026-02-25 12:37:31.178 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:31.179 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap38a239da-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:37:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:31.179 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:37:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:31.180 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap38a239da-c0, col_values=(('external_ids', {'iface-id': 'b61d6004-89be-4a9d-aeb0-ecb4f03f3526'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:37:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:31.181 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:37:31 np0005629333 nova_compute[244014]: 2026-02-25 12:37:31.234 244018 INFO nova.virt.libvirt.driver [-] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Instance destroyed successfully.#033[00m
Feb 25 07:37:31 np0005629333 nova_compute[244014]: 2026-02-25 12:37:31.236 244018 DEBUG nova.objects.instance [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'resources' on Instance uuid 6076a107-bdef-4c8a-8f75-887cdb4833f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:37:31 np0005629333 nova_compute[244014]: 2026-02-25 12:37:31.251 244018 DEBUG nova.virt.libvirt.vif [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:36:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1261051940',display_name='tempest-ListServerFiltersTestJSON-instance-1261051940',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1261051940',id=91,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:36:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='999f2a015b9c4bc98661fe6fe6db06a0',ramdisk_id='',reservation_id='r-c0kkprmf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1481541933',owner_user_name='tempest-ListServerFiltersTestJSON-1481541933-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:36:50Z,user_data=None,user_id='00eb5c915b2d41f69600acd33967d0f5',uuid=6076a107-bdef-4c8a-8f75-887cdb4833f0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "address": "fa:16:3e:6d:11:ce", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66c41a3b-23", "ovs_interfaceid": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:37:31 np0005629333 nova_compute[244014]: 2026-02-25 12:37:31.251 244018 DEBUG nova.network.os_vif_util [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converting VIF {"id": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "address": "fa:16:3e:6d:11:ce", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap66c41a3b-23", "ovs_interfaceid": "66c41a3b-23a5-4cbf-a70e-416adf1617e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:37:31 np0005629333 nova_compute[244014]: 2026-02-25 12:37:31.252 244018 DEBUG nova.network.os_vif_util [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:11:ce,bridge_name='br-int',has_traffic_filtering=True,id=66c41a3b-23a5-4cbf-a70e-416adf1617e5,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66c41a3b-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:37:31 np0005629333 nova_compute[244014]: 2026-02-25 12:37:31.252 244018 DEBUG os_vif [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:11:ce,bridge_name='br-int',has_traffic_filtering=True,id=66c41a3b-23a5-4cbf-a70e-416adf1617e5,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66c41a3b-23') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:37:31 np0005629333 nova_compute[244014]: 2026-02-25 12:37:31.254 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:31 np0005629333 nova_compute[244014]: 2026-02-25 12:37:31.254 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap66c41a3b-23, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:37:31 np0005629333 nova_compute[244014]: 2026-02-25 12:37:31.256 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:31 np0005629333 nova_compute[244014]: 2026-02-25 12:37:31.258 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:31 np0005629333 nova_compute[244014]: 2026-02-25 12:37:31.259 244018 INFO os_vif [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:11:ce,bridge_name='br-int',has_traffic_filtering=True,id=66c41a3b-23a5-4cbf-a70e-416adf1617e5,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap66c41a3b-23')#033[00m
Feb 25 07:37:31 np0005629333 nova_compute[244014]: 2026-02-25 12:37:31.518 244018 INFO nova.virt.libvirt.driver [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Deleting instance files /var/lib/nova/instances/6076a107-bdef-4c8a-8f75-887cdb4833f0_del#033[00m
Feb 25 07:37:31 np0005629333 nova_compute[244014]: 2026-02-25 12:37:31.520 244018 INFO nova.virt.libvirt.driver [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Deletion of /var/lib/nova/instances/6076a107-bdef-4c8a-8f75-887cdb4833f0_del complete#033[00m
Feb 25 07:37:31 np0005629333 nova_compute[244014]: 2026-02-25 12:37:31.592 244018 INFO nova.compute.manager [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Took 0.59 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:37:31 np0005629333 nova_compute[244014]: 2026-02-25 12:37:31.593 244018 DEBUG oslo.service.loopingcall [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:37:31 np0005629333 nova_compute[244014]: 2026-02-25 12:37:31.593 244018 DEBUG nova.compute.manager [-] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:37:31 np0005629333 nova_compute[244014]: 2026-02-25 12:37:31.594 244018 DEBUG nova.network.neutron [-] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:37:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:37:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:37:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:37:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:37:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:37:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:37:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:37:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:37:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:37:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:37:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:37:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:37:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:37:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:37:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:37:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:37:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:32.086 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:37:32 np0005629333 nova_compute[244014]: 2026-02-25 12:37:32.086 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:32.087 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 25 07:37:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 308 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 47 KiB/s wr, 226 op/s
Feb 25 07:37:32 np0005629333 nova_compute[244014]: 2026-02-25 12:37:32.511 244018 DEBUG nova.network.neutron [-] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:37:32 np0005629333 nova_compute[244014]: 2026-02-25 12:37:32.533 244018 INFO nova.compute.manager [-] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Took 0.94 seconds to deallocate network for instance.#033[00m
Feb 25 07:37:32 np0005629333 nova_compute[244014]: 2026-02-25 12:37:32.575 244018 DEBUG oslo_concurrency.lockutils [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:37:32 np0005629333 nova_compute[244014]: 2026-02-25 12:37:32.576 244018 DEBUG oslo_concurrency.lockutils [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:37:32 np0005629333 nova_compute[244014]: 2026-02-25 12:37:32.580 244018 DEBUG nova.compute.manager [req-a22e98a5-eb93-4edc-8fa9-d0033405c477 req-7e35add3-9ec8-47e0-b603-9e9315630013 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Received event network-vif-deleted-66c41a3b-23a5-4cbf-a70e-416adf1617e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:37:32 np0005629333 nova_compute[244014]: 2026-02-25 12:37:32.641 244018 DEBUG oslo_concurrency.processutils [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:37:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:37:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:37:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/320362384' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:37:33 np0005629333 nova_compute[244014]: 2026-02-25 12:37:33.206 244018 DEBUG oslo_concurrency.processutils [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:37:33 np0005629333 nova_compute[244014]: 2026-02-25 12:37:33.214 244018 DEBUG nova.compute.provider_tree [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:37:33 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:33Z|00934|binding|INFO|Releasing lport b61d6004-89be-4a9d-aeb0-ecb4f03f3526 from this chassis (sb_readonly=0)
Feb 25 07:37:33 np0005629333 nova_compute[244014]: 2026-02-25 12:37:33.266 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:33 np0005629333 nova_compute[244014]: 2026-02-25 12:37:33.270 244018 DEBUG nova.scheduler.client.report [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:37:33 np0005629333 nova_compute[244014]: 2026-02-25 12:37:33.321 244018 DEBUG oslo_concurrency.lockutils [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:37:33 np0005629333 nova_compute[244014]: 2026-02-25 12:37:33.346 244018 INFO nova.scheduler.client.report [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Deleted allocations for instance 6076a107-bdef-4c8a-8f75-887cdb4833f0#033[00m
Feb 25 07:37:33 np0005629333 nova_compute[244014]: 2026-02-25 12:37:33.412 244018 DEBUG oslo_concurrency.lockutils [None req-6687546e-ae7a-436a-878c-08d4c27e0339 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "6076a107-bdef-4c8a-8f75-887cdb4833f0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.419s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:37:33 np0005629333 nova_compute[244014]: 2026-02-25 12:37:33.890 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:37:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 233 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 36 KiB/s wr, 212 op/s
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.311 244018 DEBUG oslo_concurrency.lockutils [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "0061daee-43d7-458b-8645-0ad3f8fbb2af" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.312 244018 DEBUG oslo_concurrency.lockutils [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.313 244018 DEBUG oslo_concurrency.lockutils [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.314 244018 DEBUG oslo_concurrency.lockutils [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.314 244018 DEBUG oslo_concurrency.lockutils [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.316 244018 INFO nova.compute.manager [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Terminating instance#033[00m
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.317 244018 DEBUG nova.compute.manager [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:37:34 np0005629333 kernel: tapd689bf7c-d4 (unregistering): left promiscuous mode
Feb 25 07:37:34 np0005629333 NetworkManager[49836]: <info>  [1772023054.3676] device (tapd689bf7c-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:37:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:34Z|00935|binding|INFO|Releasing lport d689bf7c-d44c-4f39-a2a7-a85e52dcee30 from this chassis (sb_readonly=0)
Feb 25 07:37:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:34Z|00936|binding|INFO|Setting lport d689bf7c-d44c-4f39-a2a7-a85e52dcee30 down in Southbound
Feb 25 07:37:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:37:34Z|00937|binding|INFO|Removing iface tapd689bf7c-d4 ovn-installed in OVS
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.375 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.376 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:34.381 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:76:ae 10.100.0.8'], port_security=['fa:16:3e:65:76:ae 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0061daee-43d7-458b-8645-0ad3f8fbb2af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38a239da-c933-4cbc-be00-dd127471e198', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '999f2a015b9c4bc98661fe6fe6db06a0', 'neutron:revision_number': '6', 'neutron:security_group_ids': '90d0dcc5-a9c4-4f99-8901-06a2a2854544', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80217008-ae39-43e4-aa35-a210336c586d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=d689bf7c-d44c-4f39-a2a7-a85e52dcee30) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:37:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:34.382 157129 INFO neutron.agent.ovn.metadata.agent [-] Port d689bf7c-d44c-4f39-a2a7-a85e52dcee30 in datapath 38a239da-c933-4cbc-be00-dd127471e198 unbound from our chassis#033[00m
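The two agent lines above come from ovsdbapp's row-event machinery: the metadata agent registers handlers against the Southbound Port_Binding table and reacts when a port's up/chassis columns change. A minimal sketch of such a handler follows; the class name and handler body are illustrative, not the agent's actual code — only the ovsdbapp RowEvent API itself is taken as given.

    # Watch Port_Binding updates the way the matched event above does.
    from ovsdbapp.backend.ovs_idl.event import RowEvent

    class PortBindingWatcher(RowEvent):
        def __init__(self):
            # Fire only on 'update' events for the Port_Binding table
            # (no row-value conditions).
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' carries the previous values of the changed columns,
            # e.g. old.up / old.chassis when a port is unbound.
            print('Port %s updated' % row.logical_port)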
Feb 25 07:37:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:34.383 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 38a239da-c933-4cbc-be00-dd127471e198, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:37:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:34.384 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e905ed8e-0d33-4289-a88f-d422dd631b49]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:34.385 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-38a239da-c933-4cbc-be00-dd127471e198 namespace which is not needed anymore#033[00m
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.390 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:34 np0005629333 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Feb 25 07:37:34 np0005629333 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d0000005a.scope: Consumed 12.407s CPU time.
Feb 25 07:37:34 np0005629333 systemd-machined[210048]: Machine qemu-119-instance-0000005a terminated.
Feb 25 07:37:34 np0005629333 neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198[325686]: [NOTICE]   (325699) : haproxy version is 2.8.14-c23fe91
Feb 25 07:37:34 np0005629333 neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198[325686]: [NOTICE]   (325699) : path to executable is /usr/sbin/haproxy
Feb 25 07:37:34 np0005629333 neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198[325686]: [WARNING]  (325699) : Exiting Master process...
Feb 25 07:37:34 np0005629333 neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198[325686]: [ALERT]    (325699) : Current worker (325707) exited with code 143 (Terminated)
Feb 25 07:37:34 np0005629333 neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198[325686]: [WARNING]  (325699) : All workers exited. Exiting... (0)
Feb 25 07:37:34 np0005629333 systemd[1]: libpod-2e466d152b6d4b411750f1032c64002c2b30eb3be3ca644500a1b88bcd54b2d1.scope: Deactivated successfully.
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.533 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:34 np0005629333 podman[327760]: 2026-02-25 12:37:34.534841844 +0000 UTC m=+0.050383023 container died 2e466d152b6d4b411750f1032c64002c2b30eb3be3ca644500a1b88bcd54b2d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.539 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.550 244018 INFO nova.virt.libvirt.driver [-] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Instance destroyed successfully.#033[00m
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.551 244018 DEBUG nova.objects.instance [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lazy-loading 'resources' on Instance uuid 0061daee-43d7-458b-8645-0ad3f8fbb2af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:37:34 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2e466d152b6d4b411750f1032c64002c2b30eb3be3ca644500a1b88bcd54b2d1-userdata-shm.mount: Deactivated successfully.
Feb 25 07:37:34 np0005629333 systemd[1]: var-lib-containers-storage-overlay-82356394d1b742d50fdf5952aac21f145d54d5f578fc2e46fd418cf1c3cc383b-merged.mount: Deactivated successfully.
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.568 244018 DEBUG nova.virt.libvirt.vif [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:36:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1814919794',display_name='tempest-ListServerFiltersTestJSON-instance-1814919794',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1814919794',id=90,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:36:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='999f2a015b9c4bc98661fe6fe6db06a0',ramdisk_id='',reservation_id='r-la6jl8ba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1481541933',owner_user_name='tempest-ListServerFiltersTestJSON-1481541933-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:37:19Z,user_data=None,user_id='00eb5c915b2d41f69600acd33967d0f5',uuid=0061daee-43d7-458b-8645-0ad3f8fbb2af,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.569 244018 DEBUG nova.network.os_vif_util [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converting VIF {"id": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "address": "fa:16:3e:65:76:ae", "network": {"id": "38a239da-c933-4cbc-be00-dd127471e198", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-727921210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "999f2a015b9c4bc98661fe6fe6db06a0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd689bf7c-d4", "ovs_interfaceid": "d689bf7c-d44c-4f39-a2a7-a85e52dcee30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.570 244018 DEBUG nova.network.os_vif_util [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ae,bridge_name='br-int',has_traffic_filtering=True,id=d689bf7c-d44c-4f39-a2a7-a85e52dcee30,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd689bf7c-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.570 244018 DEBUG os_vif [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ae,bridge_name='br-int',has_traffic_filtering=True,id=d689bf7c-d44c-4f39-a2a7-a85e52dcee30,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd689bf7c-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.571 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.571 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd689bf7c-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.573 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.574 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:34 np0005629333 podman[327760]: 2026-02-25 12:37:34.576775347 +0000 UTC m=+0.092316566 container cleanup 2e466d152b6d4b411750f1032c64002c2b30eb3be3ca644500a1b88bcd54b2d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.576 244018 INFO os_vif [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:76:ae,bridge_name='br-int',has_traffic_filtering=True,id=d689bf7c-d44c-4f39-a2a7-a85e52dcee30,network=Network(38a239da-c933-4cbc-be00-dd127471e198),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd689bf7c-d4')#033[00m
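The unplug sequence above (Converting VIF → Unplugging → DelPortCommand → Successfully unplugged) is Nova driving the public os_vif API; for a VIFOpenVSwitch the ovs plugin issues exactly the DelPortCommand logged at 12:37:34.571. A hedged sketch of that call, with the VIF fields copied from the log for illustration (in Nova they come from nova.network.os_vif_util.nova_to_osvif_vif()):

    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()  # load the plug/unplug plugin entry points (ovs, ...)
    osvif = vif.VIFOpenVSwitch(
        id='d689bf7c-d44c-4f39-a2a7-a85e52dcee30',
        address='fa:16:3e:65:76:ae',
        vif_name='tapd689bf7c-d4',
        bridge_name='br-int',
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='d689bf7c-d44c-4f39-a2a7-a85e52dcee30'))
    info = instance_info.InstanceInfo(
        uuid='0061daee-43d7-458b-8645-0ad3f8fbb2af',
        name='instance-0000005a')
    # ovs plugin -> DelPortCommand(port=tapd689bf7c-d4, bridge=br-int,
    # if_exists=True), as seen in the transaction log above.
    os_vif.unplug(osvif, info)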
Feb 25 07:37:34 np0005629333 systemd[1]: libpod-conmon-2e466d152b6d4b411750f1032c64002c2b30eb3be3ca644500a1b88bcd54b2d1.scope: Deactivated successfully.
Feb 25 07:37:34 np0005629333 podman[327805]: 2026-02-25 12:37:34.653833162 +0000 UTC m=+0.053539652 container remove 2e466d152b6d4b411750f1032c64002c2b30eb3be3ca644500a1b88bcd54b2d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 07:37:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:34.659 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2cadae38-bb44-4a7a-8934-30ac81159022]: (4, ('Wed Feb 25 12:37:34 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198 (2e466d152b6d4b411750f1032c64002c2b30eb3be3ca644500a1b88bcd54b2d1)\n2e466d152b6d4b411750f1032c64002c2b30eb3be3ca644500a1b88bcd54b2d1\nWed Feb 25 12:37:34 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-38a239da-c933-4cbc-be00-dd127471e198 (2e466d152b6d4b411750f1032c64002c2b30eb3be3ca644500a1b88bcd54b2d1)\n2e466d152b6d4b411750f1032c64002c2b30eb3be3ca644500a1b88bcd54b2d1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:34.661 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[277b1df6-2261-4555-b6bd-bcafd0fe5ae5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:34.662 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38a239da-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.664 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:34 np0005629333 kernel: tap38a239da-c0: left promiscuous mode
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.671 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:34.674 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[28e1002e-4a1d-4108-944c-bea2f8ce5b35]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:34.687 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8661e4d9-6f26-47c3-b932-403725b06a17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:34.688 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[732bb5cb-6cd2-46c1-9599-98e9f3ca49d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:34.703 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fd524d18-fbe4-4f33-b3a1-6b9ffd12c573]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 497379, 'reachable_time': 29779, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327831, 'error': None, 'target': 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:37:34 np0005629333 systemd[1]: run-netns-ovnmeta\x2d38a239da\x2dc933\x2d4cbc\x2dbe00\x2ddd127471e198.mount: Deactivated successfully.
Feb 25 07:37:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:34.707 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-38a239da-c933-4cbc-be00-dd127471e198 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:37:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:34.707 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[a8205a0c-e0e3-49e9-9c60-96c537d196d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
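The remove_netns call logged above runs inside the privsep daemon and, in neutron's privileged ip_lib, is backed by pyroute2. A rough equivalent, assuming pyroute2 is available (the namespace name is taken from the log):

    from pyroute2 import netns

    ns = 'ovnmeta-38a239da-c933-4cbc-be00-dd127471e198'
    if ns in netns.listnetns():
        # Unlinks /run/netns/<ns>; systemd then logs the corresponding
        # run-netns-*.mount deactivation seen above.
        netns.remove(ns)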
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.795 244018 DEBUG nova.compute.manager [req-adb82449-ff03-4397-8a8b-5e5bf9e3c5c7 req-c392f170-928f-45fe-aa3d-413f7275c1a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received event network-vif-unplugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.795 244018 DEBUG oslo_concurrency.lockutils [req-adb82449-ff03-4397-8a8b-5e5bf9e3c5c7 req-c392f170-928f-45fe-aa3d-413f7275c1a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.796 244018 DEBUG oslo_concurrency.lockutils [req-adb82449-ff03-4397-8a8b-5e5bf9e3c5c7 req-c392f170-928f-45fe-aa3d-413f7275c1a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.796 244018 DEBUG oslo_concurrency.lockutils [req-adb82449-ff03-4397-8a8b-5e5bf9e3c5c7 req-c392f170-928f-45fe-aa3d-413f7275c1a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.796 244018 DEBUG nova.compute.manager [req-adb82449-ff03-4397-8a8b-5e5bf9e3c5c7 req-c392f170-928f-45fe-aa3d-413f7275c1a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] No waiting events found dispatching network-vif-unplugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.796 244018 DEBUG nova.compute.manager [req-adb82449-ff03-4397-8a8b-5e5bf9e3c5c7 req-c392f170-928f-45fe-aa3d-413f7275c1a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received event network-vif-unplugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
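The Acquiring/acquired/"released" triplets around the event dispatch above are oslo.concurrency's standard lock logging; the "<locals>._pop_event" in the lock holder name indicates a decorated inner function. A minimal sketch of the same per-instance lock pattern (the function body is an assumption for illustration):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('0061daee-43d7-458b-8645-0ad3f8fbb2af-events')
    def _pop_event():
        # Nova pops a waiting event for the instance here, or returns
        # None, which yields the "No waiting events found" line above.
        return None

    _pop_event()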
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.839 244018 INFO nova.virt.libvirt.driver [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Deleting instance files /var/lib/nova/instances/0061daee-43d7-458b-8645-0ad3f8fbb2af_del#033[00m
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.840 244018 INFO nova.virt.libvirt.driver [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Deletion of /var/lib/nova/instances/0061daee-43d7-458b-8645-0ad3f8fbb2af_del complete#033[00m
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.888 244018 INFO nova.compute.manager [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Took 0.57 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.889 244018 DEBUG oslo.service.loopingcall [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.889 244018 DEBUG nova.compute.manager [-] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.890 244018 DEBUG nova.network.neutron [-] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:37:34 np0005629333 nova_compute[244014]: 2026-02-25 12:37:34.960 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:35 np0005629333 nova_compute[244014]: 2026-02-25 12:37:35.534 244018 DEBUG nova.network.neutron [-] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:37:35 np0005629333 nova_compute[244014]: 2026-02-25 12:37:35.551 244018 INFO nova.compute.manager [-] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Took 0.66 seconds to deallocate network for instance.#033[00m
Feb 25 07:37:35 np0005629333 nova_compute[244014]: 2026-02-25 12:37:35.604 244018 DEBUG oslo_concurrency.lockutils [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:37:35 np0005629333 nova_compute[244014]: 2026-02-25 12:37:35.604 244018 DEBUG oslo_concurrency.lockutils [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:37:35 np0005629333 nova_compute[244014]: 2026-02-25 12:37:35.669 244018 DEBUG oslo_concurrency.processutils [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
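The resource tracker's disk probe above shells out to the ceph CLI through oslo.concurrency. A sketch of the same call and a plausible way to read the result; which JSON fields Nova actually consumes is internal to its rbd_utils, so the parsing here is an assumption:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # Cluster-wide totals, matching the pgmap lines from ceph-mgr above.
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])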
Feb 25 07:37:35 np0005629333 nova_compute[244014]: 2026-02-25 12:37:35.703 244018 DEBUG nova.compute.manager [req-a570c5ce-e320-4dea-8a85-8abe6f8e8554 req-106b017b-5f88-4896-aa9e-1a6fe23c329b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received event network-vif-deleted-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:37:35 np0005629333 nova_compute[244014]: 2026-02-25 12:37:35.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:37:35 np0005629333 nova_compute[244014]: 2026-02-25 12:37:35.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 25 07:37:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 233 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 35 KiB/s wr, 198 op/s
Feb 25 07:37:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:37:36 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/316831477' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:37:36 np0005629333 nova_compute[244014]: 2026-02-25 12:37:36.211 244018 DEBUG oslo_concurrency.processutils [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:37:36 np0005629333 nova_compute[244014]: 2026-02-25 12:37:36.218 244018 DEBUG nova.compute.provider_tree [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:37:36 np0005629333 nova_compute[244014]: 2026-02-25 12:37:36.243 244018 DEBUG nova.scheduler.client.report [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
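The inventory dict above fully determines the capacity placement will schedule against: per resource class, usable capacity is (total - reserved) * allocation_ratio. A worked check with the logged numbers:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2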
Feb 25 07:37:36 np0005629333 nova_compute[244014]: 2026-02-25 12:37:36.264 244018 DEBUG oslo_concurrency.lockutils [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:37:36 np0005629333 nova_compute[244014]: 2026-02-25 12:37:36.290 244018 INFO nova.scheduler.client.report [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Deleted allocations for instance 0061daee-43d7-458b-8645-0ad3f8fbb2af#033[00m
Feb 25 07:37:36 np0005629333 nova_compute[244014]: 2026-02-25 12:37:36.360 244018 DEBUG oslo_concurrency.lockutils [None req-d3164bfe-07aa-44fa-aa0c-0bb603eaf209 00eb5c915b2d41f69600acd33967d0f5 999f2a015b9c4bc98661fe6fe6db06a0 - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:37:36 np0005629333 nova_compute[244014]: 2026-02-25 12:37:36.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:37:37 np0005629333 nova_compute[244014]: 2026-02-25 12:37:37.015 244018 DEBUG nova.compute.manager [req-787704a4-ad2c-465f-b594-ef6f058ad58f req-c354ce6f-f0bf-4a9a-8006-a764715bb699 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received event network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:37:37 np0005629333 nova_compute[244014]: 2026-02-25 12:37:37.015 244018 DEBUG oslo_concurrency.lockutils [req-787704a4-ad2c-465f-b594-ef6f058ad58f req-c354ce6f-f0bf-4a9a-8006-a764715bb699 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:37:37 np0005629333 nova_compute[244014]: 2026-02-25 12:37:37.016 244018 DEBUG oslo_concurrency.lockutils [req-787704a4-ad2c-465f-b594-ef6f058ad58f req-c354ce6f-f0bf-4a9a-8006-a764715bb699 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:37:37 np0005629333 nova_compute[244014]: 2026-02-25 12:37:37.016 244018 DEBUG oslo_concurrency.lockutils [req-787704a4-ad2c-465f-b594-ef6f058ad58f req-c354ce6f-f0bf-4a9a-8006-a764715bb699 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0061daee-43d7-458b-8645-0ad3f8fbb2af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:37:37 np0005629333 nova_compute[244014]: 2026-02-25 12:37:37.017 244018 DEBUG nova.compute.manager [req-787704a4-ad2c-465f-b594-ef6f058ad58f req-c354ce6f-f0bf-4a9a-8006-a764715bb699 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] No waiting events found dispatching network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:37:37 np0005629333 nova_compute[244014]: 2026-02-25 12:37:37.018 244018 WARNING nova.compute.manager [req-787704a4-ad2c-465f-b594-ef6f058ad58f req-c354ce6f-f0bf-4a9a-8006-a764715bb699 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Received unexpected event network-vif-plugged-d689bf7c-d44c-4f39-a2a7-a85e52dcee30 for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:37:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:37:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 153 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 36 KiB/s wr, 226 op/s
Feb 25 07:37:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:39.089 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:37:39 np0005629333 nova_compute[244014]: 2026-02-25 12:37:39.149 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:39 np0005629333 nova_compute[244014]: 2026-02-25 12:37:39.574 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:39 np0005629333 nova_compute[244014]: 2026-02-25 12:37:39.963 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 153 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 600 KiB/s rd, 23 KiB/s wr, 148 op/s
Feb 25 07:37:41 np0005629333 nova_compute[244014]: 2026-02-25 12:37:41.751 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:37:41 np0005629333 nova_compute[244014]: 2026-02-25 12:37:41.891 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023046.8893049, ed3e74a3-3eba-4446-aba8-cb3936a346e0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:37:41 np0005629333 nova_compute[244014]: 2026-02-25 12:37:41.892 244018 INFO nova.compute.manager [-] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:37:41 np0005629333 nova_compute[244014]: 2026-02-25 12:37:41.917 244018 DEBUG nova.compute.manager [None req-88669b5d-bf64-4df5-9951-3841ece5c05f - - - - - -] [instance: ed3e74a3-3eba-4446-aba8-cb3936a346e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:37:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 153 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 600 KiB/s rd, 23 KiB/s wr, 148 op/s
Feb 25 07:37:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:37:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:37:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:37:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:37:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.1233866754248558e-05 of space, bias 1.0, pg target 0.0033701600262745672 quantized to 32 (current 32)
Feb 25 07:37:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:37:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:37:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:37:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:37:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:37:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024931183502203227 of space, bias 1.0, pg target 0.7479355050660969 quantized to 32 (current 32)
Feb 25 07:37:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:37:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.626072165167579e-07 of space, bias 4.0, pg target 0.0009151286598201094 quantized to 16 (current 16)
Feb 25 07:37:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:37:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:37:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:37:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:37:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:37:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:37:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:37:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:37:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:37:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
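The pg_autoscaler lines above follow a reproducible formula: the raw pg target is usage_ratio * bias * a cluster-wide PG budget, then quantized to a power of two with hysteresis toward the current value. The budget factor of 300 below is an assumption consistent with the logged numbers (e.g. mon_target_pg_per_osd=100 on a 3-OSD cluster):

    pools = [
        ('.mgr',               7.185749983720779e-06, 1.0),  # quantized to 1
        ('cephfs.cephfs.meta', 7.626072165167579e-07, 4.0),  # stays at 16
    ]
    for name, usage_ratio, bias in pools:
        # Reproduces the "pg target" values logged by the autoscaler.
        print(name, usage_ratio * bias * 300)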
Feb 25 07:37:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:37:42 np0005629333 nova_compute[244014]: 2026-02-25 12:37:42.988 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023047.9853299, 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:37:42 np0005629333 nova_compute[244014]: 2026-02-25 12:37:42.989 244018 INFO nova.compute.manager [-] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:37:43 np0005629333 nova_compute[244014]: 2026-02-25 12:37:43.019 244018 DEBUG nova.compute.manager [None req-4d98a929-0c32-41b5-8842-934245f16d54 - - - - - -] [instance: 7c2fb1e7-04d0-4903-a675-8cda55bbb6ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:37:43 np0005629333 nova_compute[244014]: 2026-02-25 12:37:43.715 244018 DEBUG oslo_concurrency.lockutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Acquiring lock "0ae40499-8d6f-432e-a58e-a28e508f7982" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:37:43 np0005629333 nova_compute[244014]: 2026-02-25 12:37:43.715 244018 DEBUG oslo_concurrency.lockutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lock "0ae40499-8d6f-432e-a58e-a28e508f7982" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:37:43 np0005629333 nova_compute[244014]: 2026-02-25 12:37:43.731 244018 DEBUG nova.compute.manager [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:37:43 np0005629333 nova_compute[244014]: 2026-02-25 12:37:43.810 244018 DEBUG oslo_concurrency.lockutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:37:43 np0005629333 nova_compute[244014]: 2026-02-25 12:37:43.810 244018 DEBUG oslo_concurrency.lockutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:37:43 np0005629333 nova_compute[244014]: 2026-02-25 12:37:43.816 244018 DEBUG nova.virt.hardware [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:37:43 np0005629333 nova_compute[244014]: 2026-02-25 12:37:43.816 244018 INFO nova.compute.claims [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:37:43 np0005629333 nova_compute[244014]: 2026-02-25 12:37:43.945 244018 DEBUG oslo_concurrency.processutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:37:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 153 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 197 KiB/s rd, 2.9 KiB/s wr, 76 op/s
Feb 25 07:37:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:37:44 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2192893107' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:37:44 np0005629333 nova_compute[244014]: 2026-02-25 12:37:44.475 244018 DEBUG oslo_concurrency.processutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:37:44 np0005629333 nova_compute[244014]: 2026-02-25 12:37:44.484 244018 DEBUG nova.compute.provider_tree [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:37:44 np0005629333 nova_compute[244014]: 2026-02-25 12:37:44.508 244018 DEBUG nova.scheduler.client.report [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:37:44 np0005629333 nova_compute[244014]: 2026-02-25 12:37:44.533 244018 DEBUG oslo_concurrency.lockutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:37:44 np0005629333 nova_compute[244014]: 2026-02-25 12:37:44.534 244018 DEBUG nova.compute.manager [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:37:44 np0005629333 nova_compute[244014]: 2026-02-25 12:37:44.577 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:37:44 np0005629333 nova_compute[244014]: 2026-02-25 12:37:44.611 244018 DEBUG nova.compute.manager [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Feb 25 07:37:44 np0005629333 nova_compute[244014]: 2026-02-25 12:37:44.630 244018 INFO nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:37:44 np0005629333 nova_compute[244014]: 2026-02-25 12:37:44.665 244018 DEBUG nova.compute.manager [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:37:44 np0005629333 podman[327877]: 2026-02-25 12:37:44.728999846 +0000 UTC m=+0.076698955 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223)
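The health_status=healthy event above is podman's periodic run of the configured '/openstack/healthcheck' test. One way to read the recorded state afterwards, sketched via podman inspect (the Go-template field names are an assumption about podman's inspect schema):

    import json
    import subprocess

    out = subprocess.run(
        ['podman', 'inspect', '--format', '{{json .State.Health}}',
         'ovn_metadata_agent'],
        capture_output=True, text=True, check=True)
    health = json.loads(out.stdout)
    # Mirrors health_status / health_failing_streak in the event above.
    print(health['Status'], health['FailingStreak'])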
Feb 25 07:37:44 np0005629333 nova_compute[244014]: 2026-02-25 12:37:44.962 244018 DEBUG nova.compute.manager [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:37:44 np0005629333 nova_compute[244014]: 2026-02-25 12:37:44.964 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:37:44 np0005629333 nova_compute[244014]: 2026-02-25 12:37:44.965 244018 INFO nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Creating image(s)
Feb 25 07:37:44 np0005629333 nova_compute[244014]: 2026-02-25 12:37:44.998 244018 DEBUG nova.storage.rbd_utils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] rbd image 0ae40499-8d6f-432e-a58e-a28e508f7982_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:37:45 np0005629333 nova_compute[244014]: 2026-02-25 12:37:45.032 244018 DEBUG nova.storage.rbd_utils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] rbd image 0ae40499-8d6f-432e-a58e-a28e508f7982_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:37:45 np0005629333 nova_compute[244014]: 2026-02-25 12:37:45.065 244018 DEBUG nova.storage.rbd_utils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] rbd image 0ae40499-8d6f-432e-a58e-a28e508f7982_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:37:45 np0005629333 nova_compute[244014]: 2026-02-25 12:37:45.070 244018 DEBUG oslo_concurrency.processutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:37:45 np0005629333 nova_compute[244014]: 2026-02-25 12:37:45.100 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:37:45 np0005629333 nova_compute[244014]: 2026-02-25 12:37:45.176 244018 DEBUG oslo_concurrency.processutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
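The two processutils lines above show nova probing the cached base image with qemu-img, wrapped in oslo_concurrency.prlimit so the probe cannot exceed 1 GiB of address space or 30 s of CPU time. A minimal sketch of the same probe (without the resource limits), assuming only that qemu-img is on PATH; the image path is taken from the log:

    import json
    import subprocess

    base = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
    # --force-share lets us read metadata even while another process holds the image open.
    out = subprocess.run(
        ["qemu-img", "info", base, "--force-share", "--output=json"],
        capture_output=True, text=True, check=True,
    ).stdout
    info = json.loads(out)
    print(info["format"], info["virtual-size"])  # disk format and virtual size in bytes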
Feb 25 07:37:45 np0005629333 nova_compute[244014]: 2026-02-25 12:37:45.177 244018 DEBUG oslo_concurrency.lockutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:37:45 np0005629333 nova_compute[244014]: 2026-02-25 12:37:45.178 244018 DEBUG oslo_concurrency.lockutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:37:45 np0005629333 nova_compute[244014]: 2026-02-25 12:37:45.179 244018 DEBUG oslo_concurrency.lockutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
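The acquire/release pair above is oslo.concurrency's standard locking idiom: the image-cache fetch is serialized on the base image's hash, so two concurrent spawns cannot download or convert the same base image twice. The lock is held for only 0.001 s here because the image is already cached. A sketch of the same idiom, assuming oslo.concurrency is installed; the lock name mirrors the log:

    from oslo_concurrency import lockutils

    # In-process lock keyed by the base image hash, matching the
    # fetch_func_sync wrapper seen in the log lines above.
    with lockutils.lock("a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"):
        # fetch/convert the base image here; a no-op when already cached
        pass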
Feb 25 07:37:45 np0005629333 nova_compute[244014]: 2026-02-25 12:37:45.211 244018 DEBUG nova.storage.rbd_utils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] rbd image 0ae40499-8d6f-432e-a58e-a28e508f7982_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:37:45 np0005629333 nova_compute[244014]: 2026-02-25 12:37:45.216 244018 DEBUG oslo_concurrency.processutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 0ae40499-8d6f-432e-a58e-a28e508f7982_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:37:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 153 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Feb 25 07:37:46 np0005629333 nova_compute[244014]: 2026-02-25 12:37:46.232 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023051.2309868, 6076a107-bdef-4c8a-8f75-887cdb4833f0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:37:46 np0005629333 nova_compute[244014]: 2026-02-25 12:37:46.234 244018 INFO nova.compute.manager [-] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] VM Stopped (Lifecycle Event)
Feb 25 07:37:46 np0005629333 nova_compute[244014]: 2026-02-25 12:37:46.257 244018 DEBUG nova.compute.manager [None req-f6a73a1c-8b6a-445d-b26f-95bd06f0d4fa - - - - - -] [instance: 6076a107-bdef-4c8a-8f75-887cdb4833f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:37:46 np0005629333 podman[327990]: 2026-02-25 12:37:46.764926158 +0000 UTC m=+0.107012771 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
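Both podman lines in this section are periodic healthcheck reports: the configured test (/openstack/healthcheck, bind-mounted into the container per the config_data volumes) ran and came back healthy with a failing streak of 0. One way to trigger the same check by hand, assuming podman's healthcheck subcommand and the container name from the log:

    import subprocess

    # Exit status 0 means the container's configured healthcheck passed.
    res = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if res.returncode == 0 else "unhealthy")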
Feb 25 07:37:47 np0005629333 nova_compute[244014]: 2026-02-25 12:37:47.203 244018 DEBUG oslo_concurrency.processutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 0ae40499-8d6f-432e-a58e-a28e508f7982_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.987s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:37:47 np0005629333 nova_compute[244014]: 2026-02-25 12:37:47.298 244018 DEBUG nova.storage.rbd_utils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] resizing rbd image 0ae40499-8d6f-432e-a58e-a28e508f7982_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
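The import above copies the flat base image into the vms pool, and the resize then grows it to the flavor's 1 GiB root disk (1073741824 bytes). Nova performs the resize through its python rbd binding (nova.storage.rbd_utils.resize, per the log path); the CLI below is an equivalent sketch, assuming the client keyring referenced by --id openstack is in place, with names and paths taken from the log:

    import subprocess

    base = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
    image = "0ae40499-8d6f-432e-a58e-a28e508f7982_disk"
    common = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    subprocess.run(["rbd", "import", "--pool", "vms", base, image,
                    "--image-format=2", *common], check=True)
    # Grow to the flavor root_gb; rbd --size defaults to MiB, so 1 GiB == 1024.
    subprocess.run(["rbd", "resize", "--pool", "vms", "--size", "1024", image,
                    *common], check=True)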
Feb 25 07:37:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:37:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2547865294' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:37:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:37:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2547865294' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
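The two commands the monitor dispatches above are an OpenStack client (here connecting from 192.168.122.10) polling cluster usage and the volumes pool quota, the usual periodic stats query from an RBD-backed storage driver. A sketch of the same queries from the CLI, assuming the ceph client and the openstack key; the JSON field names below are the usual ones for these commands:

    import json
    import subprocess

    common = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf", "--format=json"]

    df = json.loads(subprocess.run(
        ["ceph", "df", *common],
        capture_output=True, text=True, check=True).stdout)
    quota = json.loads(subprocess.run(
        ["ceph", "osd", "pool", "get-quota", "volumes", *common],
        capture_output=True, text=True, check=True).stdout)
    print(df["stats"]["total_avail_bytes"], quota.get("quota_max_bytes"))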
Feb 25 07:37:47 np0005629333 nova_compute[244014]: 2026-02-25 12:37:47.812 244018 DEBUG nova.objects.instance [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lazy-loading 'migration_context' on Instance uuid 0ae40499-8d6f-432e-a58e-a28e508f7982 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:37:47 np0005629333 nova_compute[244014]: 2026-02-25 12:37:47.826 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:37:47 np0005629333 nova_compute[244014]: 2026-02-25 12:37:47.827 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Ensure instance console log exists: /var/lib/nova/instances/0ae40499-8d6f-432e-a58e-a28e508f7982/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:37:47 np0005629333 nova_compute[244014]: 2026-02-25 12:37:47.828 244018 DEBUG oslo_concurrency.lockutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:37:47 np0005629333 nova_compute[244014]: 2026-02-25 12:37:47.829 244018 DEBUG oslo_concurrency.lockutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:37:47 np0005629333 nova_compute[244014]: 2026-02-25 12:37:47.829 244018 DEBUG oslo_concurrency.lockutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:37:47 np0005629333 nova_compute[244014]: 2026-02-25 12:37:47.832 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 07:37:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:37:47 np0005629333 nova_compute[244014]: 2026-02-25 12:37:47.841 244018 WARNING nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:37:47 np0005629333 nova_compute[244014]: 2026-02-25 12:37:47.898 244018 DEBUG nova.virt.libvirt.host [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:37:47 np0005629333 nova_compute[244014]: 2026-02-25 12:37:47.898 244018 DEBUG nova.virt.libvirt.host [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:37:47 np0005629333 nova_compute[244014]: 2026-02-25 12:37:47.901 244018 DEBUG nova.virt.libvirt.host [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 07:37:47 np0005629333 nova_compute[244014]: 2026-02-25 12:37:47.902 244018 DEBUG nova.virt.libvirt.host [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 07:37:47 np0005629333 nova_compute[244014]: 2026-02-25 12:37:47.902 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:37:47 np0005629333 nova_compute[244014]: 2026-02-25 12:37:47.902 244018 DEBUG nova.virt.hardware [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 07:37:47 np0005629333 nova_compute[244014]: 2026-02-25 12:37:47.903 244018 DEBUG nova.virt.hardware [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 07:37:47 np0005629333 nova_compute[244014]: 2026-02-25 12:37:47.903 244018 DEBUG nova.virt.hardware [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 07:37:47 np0005629333 nova_compute[244014]: 2026-02-25 12:37:47.903 244018 DEBUG nova.virt.hardware [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 07:37:47 np0005629333 nova_compute[244014]: 2026-02-25 12:37:47.904 244018 DEBUG nova.virt.hardware [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 07:37:47 np0005629333 nova_compute[244014]: 2026-02-25 12:37:47.904 244018 DEBUG nova.virt.hardware [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 07:37:47 np0005629333 nova_compute[244014]: 2026-02-25 12:37:47.904 244018 DEBUG nova.virt.hardware [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 07:37:47 np0005629333 nova_compute[244014]: 2026-02-25 12:37:47.904 244018 DEBUG nova.virt.hardware [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 07:37:47 np0005629333 nova_compute[244014]: 2026-02-25 12:37:47.904 244018 DEBUG nova.virt.hardware [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 07:37:47 np0005629333 nova_compute[244014]: 2026-02-25 12:37:47.905 244018 DEBUG nova.virt.hardware [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 07:37:47 np0005629333 nova_compute[244014]: 2026-02-25 12:37:47.906 244018 DEBUG nova.virt.hardware [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
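The topology lines above trace nova.virt.hardware's selection: with no flavor or image constraints (limits and preferences all 0:0:0, maxima 65536), the driver enumerates every sockets/cores/threads factorization of the vCPU count, and for the 1-vCPU m1.nano flavor the only candidate is 1:1:1. A simplified paraphrase of that enumeration (not nova's exact code):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """All (sockets, cores, threads) triples whose product is exactly vcpus."""
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        topos.append((s, c, t))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)], matching "Got 1 possible topologies"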
Feb 25 07:37:47 np0005629333 nova_compute[244014]: 2026-02-25 12:37:47.909 244018 DEBUG oslo_concurrency.processutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:37:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 198 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 1.6 MiB/s wr, 50 op/s
Feb 25 07:37:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:37:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3916152046' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:37:48 np0005629333 nova_compute[244014]: 2026-02-25 12:37:48.526 244018 DEBUG oslo_concurrency.processutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
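nova runs ceph mon dump to learn the monitor addresses it must embed as <host> elements in the guest's RBD disk XML (the 192.168.122.100:6789 entries further down). A sketch of extracting those endpoints from the same command, assuming the usual monmap JSON layout with a "mons" list:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    monmap = json.loads(out)
    # v1 addrs look like "192.168.122.100:6789/0"; strip the trailing /nonce.
    hosts = [m["addr"].split("/")[0] for m in monmap["mons"]]
    print(hosts)  # e.g. ['192.168.122.100:6789']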
Feb 25 07:37:48 np0005629333 nova_compute[244014]: 2026-02-25 12:37:48.551 244018 DEBUG nova.storage.rbd_utils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] rbd image 0ae40499-8d6f-432e-a58e-a28e508f7982_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:37:48 np0005629333 nova_compute[244014]: 2026-02-25 12:37:48.555 244018 DEBUG oslo_concurrency.processutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:37:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:37:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3157725244' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:37:49 np0005629333 nova_compute[244014]: 2026-02-25 12:37:49.069 244018 DEBUG oslo_concurrency.processutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:37:49 np0005629333 nova_compute[244014]: 2026-02-25 12:37:49.072 244018 DEBUG nova.objects.instance [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0ae40499-8d6f-432e-a58e-a28e508f7982 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:37:49 np0005629333 nova_compute[244014]: 2026-02-25 12:37:49.091 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:37:49 np0005629333 nova_compute[244014]:  <uuid>0ae40499-8d6f-432e-a58e-a28e508f7982</uuid>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:  <name>instance-0000005e</name>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServersAaction247Test-server-424896276</nova:name>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:37:47</nova:creationTime>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:37:49 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:        <nova:user uuid="410569f12f9b4726b30c6bb2d581bfed">tempest-ServersAaction247Test-801237191-project-member</nova:user>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:        <nova:project uuid="9d70f139f85141ba913156bfb65093a3">tempest-ServersAaction247Test-801237191</nova:project>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      <nova:ports/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      <entry name="serial">0ae40499-8d6f-432e-a58e-a28e508f7982</entry>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      <entry name="uuid">0ae40499-8d6f-432e-a58e-a28e508f7982</entry>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/0ae40499-8d6f-432e-a58e-a28e508f7982_disk">
Feb 25 07:37:49 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:37:49 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/0ae40499-8d6f-432e-a58e-a28e508f7982_disk.config">
Feb 25 07:37:49 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:37:49 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/0ae40499-8d6f-432e-a58e-a28e508f7982/console.log" append="off"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:37:49 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:37:49 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:37:49 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:37:49 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
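With the domain XML rendered, the driver hands it to libvirt to define and start the guest; the systemd-machined "New machine qemu-121-instance-0000005e" line below is the visible result. A minimal sketch with the libvirt python bindings, assuming qemu:///system and that xml holds the document above:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.defineXML(xml)   # persist the domain definition
    dom.create()                # power it on; lifecycle events follow
    print(dom.name(), dom.isActive())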
Feb 25 07:37:49 np0005629333 nova_compute[244014]: 2026-02-25 12:37:49.165 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:37:49 np0005629333 nova_compute[244014]: 2026-02-25 12:37:49.166 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:37:49 np0005629333 nova_compute[244014]: 2026-02-25 12:37:49.167 244018 INFO nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Using config drive
Feb 25 07:37:49 np0005629333 nova_compute[244014]: 2026-02-25 12:37:49.240 244018 DEBUG nova.storage.rbd_utils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] rbd image 0ae40499-8d6f-432e-a58e-a28e508f7982_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:37:49 np0005629333 nova_compute[244014]: 2026-02-25 12:37:49.548 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023054.5479949, 0061daee-43d7-458b-8645-0ad3f8fbb2af => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:37:49 np0005629333 nova_compute[244014]: 2026-02-25 12:37:49.549 244018 INFO nova.compute.manager [-] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] VM Stopped (Lifecycle Event)
Feb 25 07:37:49 np0005629333 nova_compute[244014]: 2026-02-25 12:37:49.565 244018 DEBUG nova.compute.manager [None req-728959d6-2c8d-4ff4-a321-07722eed6026 - - - - - -] [instance: 0061daee-43d7-458b-8645-0ad3f8fbb2af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:37:49 np0005629333 nova_compute[244014]: 2026-02-25 12:37:49.579 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:37:49 np0005629333 nova_compute[244014]: 2026-02-25 12:37:49.616 244018 INFO nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Creating config drive at /var/lib/nova/instances/0ae40499-8d6f-432e-a58e-a28e508f7982/disk.config
Feb 25 07:37:49 np0005629333 nova_compute[244014]: 2026-02-25 12:37:49.622 244018 DEBUG oslo_concurrency.processutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0ae40499-8d6f-432e-a58e-a28e508f7982/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3oxzklq7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:37:49 np0005629333 nova_compute[244014]: 2026-02-25 12:37:49.759 244018 DEBUG oslo_concurrency.processutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0ae40499-8d6f-432e-a58e-a28e508f7982/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3oxzklq7" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:37:49 np0005629333 nova_compute[244014]: 2026-02-25 12:37:49.781 244018 DEBUG nova.storage.rbd_utils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] rbd image 0ae40499-8d6f-432e-a58e-a28e508f7982_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:37:49 np0005629333 nova_compute[244014]: 2026-02-25 12:37:49.783 244018 DEBUG oslo_concurrency.processutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0ae40499-8d6f-432e-a58e-a28e508f7982/disk.config 0ae40499-8d6f-432e-a58e-a28e508f7982_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:37:49 np0005629333 nova_compute[244014]: 2026-02-25 12:37:49.967 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:37:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 305 active+clean; 198 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.6 MiB/s wr, 22 op/s
Feb 25 07:37:50 np0005629333 nova_compute[244014]: 2026-02-25 12:37:50.854 244018 DEBUG oslo_concurrency.processutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0ae40499-8d6f-432e-a58e-a28e508f7982/disk.config 0ae40499-8d6f-432e-a58e-a28e508f7982_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:37:50 np0005629333 nova_compute[244014]: 2026-02-25 12:37:50.856 244018 INFO nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Deleting local config drive /var/lib/nova/instances/0ae40499-8d6f-432e-a58e-a28e508f7982/disk.config because it was imported into RBD.
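The config-drive sequence above is: stage the metadata in a temp dir, build an ISO9660 volume labeled config-2 with mkisofs, import it into the vms pool as <uuid>_disk.config, then delete the local copy. A condensed sketch of those steps, with the staging dir as a stand-in for nova's mkdtemp() path (the /tmp/tmp3oxzklq7 in the log):

    import os
    import subprocess

    staging = "/tmp/configdrive-staging"   # hypothetical; nova uses a temp dir
    iso = "/var/lib/nova/instances/0ae40499-8d6f-432e-a58e-a28e508f7982/disk.config"

    subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                    "-allow-multidot", "-l", "-publisher",
                    "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
                    "-quiet", "-J", "-r", "-V", "config-2", staging], check=True)
    subprocess.run(["rbd", "import", "--pool", "vms", iso,
                    "0ae40499-8d6f-432e-a58e-a28e508f7982_disk.config",
                    "--image-format=2", "--id", "openstack",
                    "--conf", "/etc/ceph/ceph.conf"], check=True)
    os.unlink(iso)  # local copy is redundant once it lives in RBD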
Feb 25 07:37:50 np0005629333 systemd-machined[210048]: New machine qemu-121-instance-0000005e.
Feb 25 07:37:50 np0005629333 systemd[1]: Started Virtual Machine qemu-121-instance-0000005e.
Feb 25 07:37:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 200 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Feb 25 07:37:52 np0005629333 nova_compute[244014]: 2026-02-25 12:37:52.164 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023072.163778, 0ae40499-8d6f-432e-a58e-a28e508f7982 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:37:52 np0005629333 nova_compute[244014]: 2026-02-25 12:37:52.164 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] VM Resumed (Lifecycle Event)
Feb 25 07:37:52 np0005629333 nova_compute[244014]: 2026-02-25 12:37:52.168 244018 DEBUG nova.compute.manager [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 07:37:52 np0005629333 nova_compute[244014]: 2026-02-25 12:37:52.169 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 07:37:52 np0005629333 nova_compute[244014]: 2026-02-25 12:37:52.172 244018 INFO nova.virt.libvirt.driver [-] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Instance spawned successfully.
Feb 25 07:37:52 np0005629333 nova_compute[244014]: 2026-02-25 12:37:52.173 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 07:37:52 np0005629333 nova_compute[244014]: 2026-02-25 12:37:52.194 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:37:52 np0005629333 nova_compute[244014]: 2026-02-25 12:37:52.204 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:37:52 np0005629333 nova_compute[244014]: 2026-02-25 12:37:52.209 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:37:52 np0005629333 nova_compute[244014]: 2026-02-25 12:37:52.210 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:37:52 np0005629333 nova_compute[244014]: 2026-02-25 12:37:52.211 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:37:52 np0005629333 nova_compute[244014]: 2026-02-25 12:37:52.211 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:37:52 np0005629333 nova_compute[244014]: 2026-02-25 12:37:52.212 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:37:52 np0005629333 nova_compute[244014]: 2026-02-25 12:37:52.213 244018 DEBUG nova.virt.libvirt.driver [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:37:52 np0005629333 nova_compute[244014]: 2026-02-25 12:37:52.247 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:37:52 np0005629333 nova_compute[244014]: 2026-02-25 12:37:52.248 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023072.1673963, 0ae40499-8d6f-432e-a58e-a28e508f7982 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:37:52 np0005629333 nova_compute[244014]: 2026-02-25 12:37:52.248 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] VM Started (Lifecycle Event)
Feb 25 07:37:52 np0005629333 nova_compute[244014]: 2026-02-25 12:37:52.278 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:37:52 np0005629333 nova_compute[244014]: 2026-02-25 12:37:52.281 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:37:52 np0005629333 nova_compute[244014]: 2026-02-25 12:37:52.286 244018 INFO nova.compute.manager [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Took 7.32 seconds to spawn the instance on the hypervisor.
Feb 25 07:37:52 np0005629333 nova_compute[244014]: 2026-02-25 12:37:52.286 244018 DEBUG nova.compute.manager [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:37:52 np0005629333 nova_compute[244014]: 2026-02-25 12:37:52.310 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] During sync_power_state the instance has a pending task (spawning). Skip.
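The two "Synchronizing instance power state ... Skip" exchanges above show the lifecycle handler deliberately backing off: the DB still records power_state 0 while libvirt already reports the VM running (1), and because a task is in flight the handler defers to that task instead of overwriting its state. A simplified paraphrase of the guard (not nova's exact code; the attribute names here are illustrative):

    def sync_power_state(instance, vm_power_state):
        # A pending task (here: 'spawning') owns the state transition;
        # syncing now would race it, so skip and let the task finish.
        if instance.task_state is not None:
            return "skip"
        if instance.db_power_state != vm_power_state:
            return "update-db"
        return "in-sync"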
Feb 25 07:37:52 np0005629333 nova_compute[244014]: 2026-02-25 12:37:52.343 244018 INFO nova.compute.manager [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Took 8.57 seconds to build instance.
Feb 25 07:37:52 np0005629333 nova_compute[244014]: 2026-02-25 12:37:52.364 244018 DEBUG oslo_concurrency.lockutils [None req-5a97009e-1f16-4127-ab4c-b7889acb060a 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lock "0ae40499-8d6f-432e-a58e-a28e508f7982" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:37:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:37:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 305 active+clean; 200 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 87 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 25 07:37:54 np0005629333 nova_compute[244014]: 2026-02-25 12:37:54.309 244018 DEBUG nova.compute.manager [None req-e3b30389-4cda-4875-a9dd-611948e67d2d 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:37:54 np0005629333 nova_compute[244014]: 2026-02-25 12:37:54.372 244018 INFO nova.compute.manager [None req-e3b30389-4cda-4875-a9dd-611948e67d2d 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] instance snapshotting
Feb 25 07:37:54 np0005629333 nova_compute[244014]: 2026-02-25 12:37:54.373 244018 DEBUG nova.objects.instance [None req-e3b30389-4cda-4875-a9dd-611948e67d2d 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lazy-loading 'flavor' on Instance uuid 0ae40499-8d6f-432e-a58e-a28e508f7982 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:37:54 np0005629333 nova_compute[244014]: 2026-02-25 12:37:54.513 244018 DEBUG oslo_concurrency.lockutils [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Acquiring lock "0ae40499-8d6f-432e-a58e-a28e508f7982" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:37:54 np0005629333 nova_compute[244014]: 2026-02-25 12:37:54.514 244018 DEBUG oslo_concurrency.lockutils [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lock "0ae40499-8d6f-432e-a58e-a28e508f7982" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:37:54 np0005629333 nova_compute[244014]: 2026-02-25 12:37:54.514 244018 DEBUG oslo_concurrency.lockutils [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Acquiring lock "0ae40499-8d6f-432e-a58e-a28e508f7982-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:37:54 np0005629333 nova_compute[244014]: 2026-02-25 12:37:54.514 244018 DEBUG oslo_concurrency.lockutils [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lock "0ae40499-8d6f-432e-a58e-a28e508f7982-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:37:54 np0005629333 nova_compute[244014]: 2026-02-25 12:37:54.515 244018 DEBUG oslo_concurrency.lockutils [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lock "0ae40499-8d6f-432e-a58e-a28e508f7982-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:37:54 np0005629333 nova_compute[244014]: 2026-02-25 12:37:54.516 244018 INFO nova.compute.manager [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Terminating instance
Feb 25 07:37:54 np0005629333 nova_compute[244014]: 2026-02-25 12:37:54.516 244018 DEBUG oslo_concurrency.lockutils [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Acquiring lock "refresh_cache-0ae40499-8d6f-432e-a58e-a28e508f7982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:37:54 np0005629333 nova_compute[244014]: 2026-02-25 12:37:54.516 244018 DEBUG oslo_concurrency.lockutils [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Acquired lock "refresh_cache-0ae40499-8d6f-432e-a58e-a28e508f7982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:37:54 np0005629333 nova_compute[244014]: 2026-02-25 12:37:54.517 244018 DEBUG nova.network.neutron [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:37:54 np0005629333 nova_compute[244014]: 2026-02-25 12:37:54.580 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:37:54 np0005629333 nova_compute[244014]: 2026-02-25 12:37:54.608 244018 INFO nova.virt.libvirt.driver [None req-e3b30389-4cda-4875-a9dd-611948e67d2d 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Beginning live snapshot process
Feb 25 07:37:54 np0005629333 nova_compute[244014]: 2026-02-25 12:37:54.649 244018 DEBUG nova.compute.manager [None req-e3b30389-4cda-4875-a9dd-611948e67d2d 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Instance disappeared during snapshot _snapshot_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:4390
Feb 25 07:37:54 np0005629333 nova_compute[244014]: 2026-02-25 12:37:54.696 244018 DEBUG nova.network.neutron [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:37:54 np0005629333 nova_compute[244014]: 2026-02-25 12:37:54.968 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:37:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:55.017 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:37:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:55.018 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:37:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:37:55.018 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:37:55 np0005629333 nova_compute[244014]: 2026-02-25 12:37:55.292 244018 DEBUG nova.network.neutron [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:37:55 np0005629333 nova_compute[244014]: 2026-02-25 12:37:55.321 244018 DEBUG oslo_concurrency.lockutils [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Releasing lock "refresh_cache-0ae40499-8d6f-432e-a58e-a28e508f7982" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:37:55 np0005629333 nova_compute[244014]: 2026-02-25 12:37:55.322 244018 DEBUG nova.compute.manager [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:37:55 np0005629333 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Feb 25 07:37:55 np0005629333 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d0000005e.scope: Consumed 4.338s CPU time.
Feb 25 07:37:55 np0005629333 systemd-machined[210048]: Machine qemu-121-instance-0000005e terminated.
Feb 25 07:37:55 np0005629333 nova_compute[244014]: 2026-02-25 12:37:55.544 244018 INFO nova.virt.libvirt.driver [-] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Instance destroyed successfully.
Feb 25 07:37:55 np0005629333 nova_compute[244014]: 2026-02-25 12:37:55.545 244018 DEBUG nova.objects.instance [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lazy-loading 'resources' on Instance uuid 0ae40499-8d6f-432e-a58e-a28e508f7982 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:37:55 np0005629333 nova_compute[244014]: 2026-02-25 12:37:55.697 244018 DEBUG nova.compute.manager [None req-e3b30389-4cda-4875-a9dd-611948e67d2d 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Found 0 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Feb 25 07:37:55 np0005629333 nova_compute[244014]: 2026-02-25 12:37:55.858 244018 INFO nova.virt.libvirt.driver [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Deleting instance files /var/lib/nova/instances/0ae40499-8d6f-432e-a58e-a28e508f7982_del
Feb 25 07:37:55 np0005629333 nova_compute[244014]: 2026-02-25 12:37:55.859 244018 INFO nova.virt.libvirt.driver [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Deletion of /var/lib/nova/instances/0ae40499-8d6f-432e-a58e-a28e508f7982_del complete
Feb 25 07:37:55 np0005629333 nova_compute[244014]: 2026-02-25 12:37:55.932 244018 INFO nova.compute.manager [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Took 0.61 seconds to destroy the instance on the hypervisor.
Feb 25 07:37:55 np0005629333 nova_compute[244014]: 2026-02-25 12:37:55.932 244018 DEBUG oslo.service.loopingcall [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 07:37:55 np0005629333 nova_compute[244014]: 2026-02-25 12:37:55.933 244018 DEBUG nova.compute.manager [-] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 07:37:55 np0005629333 nova_compute[244014]: 2026-02-25 12:37:55.933 244018 DEBUG nova.network.neutron [-] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 07:37:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 305 active+clean; 200 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 87 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 25 07:37:56 np0005629333 nova_compute[244014]: 2026-02-25 12:37:56.321 244018 DEBUG nova.network.neutron [-] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:37:56 np0005629333 nova_compute[244014]: 2026-02-25 12:37:56.335 244018 DEBUG nova.network.neutron [-] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:37:56 np0005629333 nova_compute[244014]: 2026-02-25 12:37:56.359 244018 INFO nova.compute.manager [-] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Took 0.43 seconds to deallocate network for instance.
Feb 25 07:37:56 np0005629333 nova_compute[244014]: 2026-02-25 12:37:56.416 244018 DEBUG oslo_concurrency.lockutils [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:37:56 np0005629333 nova_compute[244014]: 2026-02-25 12:37:56.416 244018 DEBUG oslo_concurrency.lockutils [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:37:56 np0005629333 nova_compute[244014]: 2026-02-25 12:37:56.481 244018 DEBUG oslo_concurrency.processutils [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:37:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:37:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2625209494' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:37:57 np0005629333 nova_compute[244014]: 2026-02-25 12:37:57.060 244018 DEBUG oslo_concurrency.processutils [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
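[editor's note] Nova shells out to the ceph CLI here to refresh disk inventory. A minimal sketch of the same round trip using only the standard library; the command arguments are copied verbatim from the CMD line above, while the JSON keys are the usual `ceph df --format=json` layout and should be verified against this Ceph release:

    import json
    import subprocess

    # Same invocation nova logged above; --id/--conf select the
    # client.openstack keyring on this deployment.
    out = subprocess.check_output([
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    df = json.loads(out)
    # Cluster-wide totals in bytes, as reported by the mon.
    print(df["stats"]["total_bytes"], df["stats"]["total_avail_bytes"])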
Feb 25 07:37:57 np0005629333 nova_compute[244014]: 2026-02-25 12:37:57.067 244018 DEBUG nova.compute.provider_tree [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:37:57 np0005629333 nova_compute[244014]: 2026-02-25 12:37:57.082 244018 DEBUG nova.scheduler.client.report [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:37:57 np0005629333 nova_compute[244014]: 2026-02-25 12:37:57.101 244018 DEBUG oslo_concurrency.lockutils [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:37:57 np0005629333 nova_compute[244014]: 2026-02-25 12:37:57.134 244018 INFO nova.scheduler.client.report [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Deleted allocations for instance 0ae40499-8d6f-432e-a58e-a28e508f7982
Feb 25 07:37:57 np0005629333 nova_compute[244014]: 2026-02-25 12:37:57.211 244018 DEBUG oslo_concurrency.lockutils [None req-6f1168ef-b460-49ac-aa3b-3383fe57d604 410569f12f9b4726b30c6bb2d581bfed 9d70f139f85141ba913156bfb65093a3 - - default default] Lock "0ae40499-8d6f-432e-a58e-a28e508f7982" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
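[editor's note] The Acquiring/acquired/"released" triplets throughout this terminate flow come from oslo.concurrency. A minimal sketch of the same per-instance locking pattern, assuming oslo.concurrency is installed; the UUID is copied from the log, the function body is a placeholder:

    from oslo_concurrency import lockutils

    # synchronized() emits the "inner" Acquiring / acquired / released
    # DEBUG lines seen above (lockutils.py:404/409/423).
    @lockutils.synchronized("0ae40499-8d6f-432e-a58e-a28e508f7982")
    def do_terminate_instance():
        pass  # terminate work runs with the per-instance lock held

    do_terminate_instance()

The "refresh_cache-..." lines use the lower-level lockutils.lock() context manager instead (lockutils.py:312/315/333); both serialize on the same in-process semaphore namespace.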
Feb 25 07:37:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:37:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Feb 25 07:37:59 np0005629333 nova_compute[244014]: 2026-02-25 12:37:59.583 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:37:59 np0005629333 nova_compute[244014]: 2026-02-25 12:37:59.970 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:38:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 145 KiB/s wr, 104 op/s
Feb 25 07:38:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:38:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:38:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:38:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:38:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:38:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:38:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 145 KiB/s wr, 104 op/s
Feb 25 07:38:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:38:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 97 op/s
Feb 25 07:38:04 np0005629333 nova_compute[244014]: 2026-02-25 12:38:04.585 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:38:04 np0005629333 nova_compute[244014]: 2026-02-25 12:38:04.971 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:38:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 90 op/s
Feb 25 07:38:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:38:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 90 op/s
Feb 25 07:38:09 np0005629333 nova_compute[244014]: 2026-02-25 12:38:09.587 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:38:09 np0005629333 nova_compute[244014]: 2026-02-25 12:38:09.973 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:38:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:38:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:38:10Z|00938|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Feb 25 07:38:10 np0005629333 nova_compute[244014]: 2026-02-25 12:38:10.543 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023075.5413747, 0ae40499-8d6f-432e-a58e-a28e508f7982 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:38:10 np0005629333 nova_compute[244014]: 2026-02-25 12:38:10.544 244018 INFO nova.compute.manager [-] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] VM Stopped (Lifecycle Event)
Feb 25 07:38:10 np0005629333 nova_compute[244014]: 2026-02-25 12:38:10.563 244018 DEBUG nova.compute.manager [None req-cfa1ca97-d50e-4f59-bd9c-ef1e2b87dbc5 - - - - - -] [instance: 0ae40499-8d6f-432e-a58e-a28e508f7982] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:38:11 np0005629333 nova_compute[244014]: 2026-02-25 12:38:11.653 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
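[editor's note] The _sync_power_states run above is driven by oslo.service's periodic task loop. A minimal sketch of how such a task is declared, assuming oslo.service is installed; the 600-second spacing is an illustrative assumption, not a value taken from this deployment's config:

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        # The framework's run_periodic_tasks() emits the
        # "Running periodic task ..." DEBUG line seen above.
        @periodic_task.periodic_task(spacing=600)
        def _sync_power_states(self, context):
            pass  # reconcile DB power state with the hypervisor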
Feb 25 07:38:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:38:11 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:38:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:38:11 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:38:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:38:11 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:38:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:38:11 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:38:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:38:11 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:38:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:38:11 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:38:11 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:38:11 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:38:11 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:38:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:38:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:38:12 np0005629333 podman[328455]: 2026-02-25 12:38:12.957783469 +0000 UTC m=+0.038582890 container create 6defcc1981f257817430f571f7da34c43ebc2fdca27b87ff6dc4e3c7c567e05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kapitsa, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:38:12 np0005629333 systemd[1]: Started libpod-conmon-6defcc1981f257817430f571f7da34c43ebc2fdca27b87ff6dc4e3c7c567e05b.scope.
Feb 25 07:38:13 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:38:13 np0005629333 podman[328455]: 2026-02-25 12:38:13.027412894 +0000 UTC m=+0.108212345 container init 6defcc1981f257817430f571f7da34c43ebc2fdca27b87ff6dc4e3c7c567e05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kapitsa, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 07:38:13 np0005629333 podman[328455]: 2026-02-25 12:38:13.033708321 +0000 UTC m=+0.114507762 container start 6defcc1981f257817430f571f7da34c43ebc2fdca27b87ff6dc4e3c7c567e05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:38:13 np0005629333 podman[328455]: 2026-02-25 12:38:12.938780163 +0000 UTC m=+0.019579594 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:38:13 np0005629333 zen_kapitsa[328469]: 167 167
Feb 25 07:38:13 np0005629333 systemd[1]: libpod-6defcc1981f257817430f571f7da34c43ebc2fdca27b87ff6dc4e3c7c567e05b.scope: Deactivated successfully.
Feb 25 07:38:13 np0005629333 podman[328455]: 2026-02-25 12:38:13.038740094 +0000 UTC m=+0.119539565 container attach 6defcc1981f257817430f571f7da34c43ebc2fdca27b87ff6dc4e3c7c567e05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:38:13 np0005629333 podman[328455]: 2026-02-25 12:38:13.03931916 +0000 UTC m=+0.120118591 container died 6defcc1981f257817430f571f7da34c43ebc2fdca27b87ff6dc4e3c7c567e05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kapitsa, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 07:38:13 np0005629333 systemd[1]: var-lib-containers-storage-overlay-8536bf4da4feb708c1efa08a927a81a9f383a54f466b612923f0b12e1817bac9-merged.mount: Deactivated successfully.
Feb 25 07:38:13 np0005629333 podman[328455]: 2026-02-25 12:38:13.08856876 +0000 UTC m=+0.169368181 container remove 6defcc1981f257817430f571f7da34c43ebc2fdca27b87ff6dc4e3c7c567e05b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_kapitsa, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:38:13 np0005629333 systemd[1]: libpod-conmon-6defcc1981f257817430f571f7da34c43ebc2fdca27b87ff6dc4e3c7c567e05b.scope: Deactivated successfully.
Feb 25 07:38:13 np0005629333 podman[328494]: 2026-02-25 12:38:13.219978658 +0000 UTC m=+0.036359657 container create b074b8095c2a4acc0d8f360502992c230eda9a33f2beed5ad143fb157fa1ac55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 07:38:13 np0005629333 systemd[1]: Started libpod-conmon-b074b8095c2a4acc0d8f360502992c230eda9a33f2beed5ad143fb157fa1ac55.scope.
Feb 25 07:38:13 np0005629333 podman[328494]: 2026-02-25 12:38:13.202542056 +0000 UTC m=+0.018923075 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:38:13 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:38:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acdb115712855f3b82b9bc3951d9c7a4a130559ffb75446d1d0d93c1fcfa2c4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:38:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acdb115712855f3b82b9bc3951d9c7a4a130559ffb75446d1d0d93c1fcfa2c4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:38:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acdb115712855f3b82b9bc3951d9c7a4a130559ffb75446d1d0d93c1fcfa2c4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:38:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acdb115712855f3b82b9bc3951d9c7a4a130559ffb75446d1d0d93c1fcfa2c4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:38:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acdb115712855f3b82b9bc3951d9c7a4a130559ffb75446d1d0d93c1fcfa2c4d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:38:13 np0005629333 podman[328494]: 2026-02-25 12:38:13.327678007 +0000 UTC m=+0.144059006 container init b074b8095c2a4acc0d8f360502992c230eda9a33f2beed5ad143fb157fa1ac55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:38:13 np0005629333 podman[328494]: 2026-02-25 12:38:13.33309982 +0000 UTC m=+0.149480839 container start b074b8095c2a4acc0d8f360502992c230eda9a33f2beed5ad143fb157fa1ac55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_hopper, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 07:38:13 np0005629333 podman[328494]: 2026-02-25 12:38:13.337630358 +0000 UTC m=+0.154011347 container attach b074b8095c2a4acc0d8f360502992c230eda9a33f2beed5ad143fb157fa1ac55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_hopper, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:38:13 np0005629333 wizardly_hopper[328511]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:38:13 np0005629333 wizardly_hopper[328511]: --> All data devices are unavailable
Feb 25 07:38:13 np0005629333 systemd[1]: libpod-b074b8095c2a4acc0d8f360502992c230eda9a33f2beed5ad143fb157fa1ac55.scope: Deactivated successfully.
Feb 25 07:38:13 np0005629333 podman[328531]: 2026-02-25 12:38:13.89227191 +0000 UTC m=+0.026782037 container died b074b8095c2a4acc0d8f360502992c230eda9a33f2beed5ad143fb157fa1ac55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_hopper, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:38:13 np0005629333 systemd[1]: var-lib-containers-storage-overlay-acdb115712855f3b82b9bc3951d9c7a4a130559ffb75446d1d0d93c1fcfa2c4d-merged.mount: Deactivated successfully.
Feb 25 07:38:13 np0005629333 podman[328531]: 2026-02-25 12:38:13.946604683 +0000 UTC m=+0.081114730 container remove b074b8095c2a4acc0d8f360502992c230eda9a33f2beed5ad143fb157fa1ac55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_hopper, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 07:38:13 np0005629333 systemd[1]: libpod-conmon-b074b8095c2a4acc0d8f360502992c230eda9a33f2beed5ad143fb157fa1ac55.scope: Deactivated successfully.
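[editor's note] The short-lived, randomly named containers in this stretch (zen_kapitsa, wizardly_hopper, and below pedantic_nightingale, determined_montalcini) are cephadm launching ceph-volume probes inside the pinned ceph image, which explains the rapid create/init/start/attach/died/remove lifecycles. A simplified, hedged approximation of one such probe; real cephadm invocations also pass --privileged plus /dev, /var/lib/ceph, and keyring mounts, omitted here, and an entrypoint override may be needed depending on the image:

    import subprocess

    # Image digest copied from the podman lines above.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # --rm matches the immediate "container died" / "container remove"
    # lifecycle visible in the log.
    subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "ceph-volume", "inventory", "--format", "json"],
        check=True,
    )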
Feb 25 07:38:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:38:14 np0005629333 podman[328609]: 2026-02-25 12:38:14.377710908 +0000 UTC m=+0.045821464 container create 52098891f03848ab07ebee3b7a6bd29a1e214d31e409c000e624c4669b4aa386 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nightingale, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 07:38:14 np0005629333 systemd[1]: Started libpod-conmon-52098891f03848ab07ebee3b7a6bd29a1e214d31e409c000e624c4669b4aa386.scope.
Feb 25 07:38:14 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:38:14 np0005629333 podman[328609]: 2026-02-25 12:38:14.449022881 +0000 UTC m=+0.117133467 container init 52098891f03848ab07ebee3b7a6bd29a1e214d31e409c000e624c4669b4aa386 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:38:14 np0005629333 podman[328609]: 2026-02-25 12:38:14.359837254 +0000 UTC m=+0.027947860 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:38:14 np0005629333 podman[328609]: 2026-02-25 12:38:14.456730198 +0000 UTC m=+0.124840794 container start 52098891f03848ab07ebee3b7a6bd29a1e214d31e409c000e624c4669b4aa386 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Feb 25 07:38:14 np0005629333 pedantic_nightingale[328625]: 167 167
Feb 25 07:38:14 np0005629333 systemd[1]: libpod-52098891f03848ab07ebee3b7a6bd29a1e214d31e409c000e624c4669b4aa386.scope: Deactivated successfully.
Feb 25 07:38:14 np0005629333 conmon[328625]: conmon 52098891f03848ab07eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52098891f03848ab07ebee3b7a6bd29a1e214d31e409c000e624c4669b4aa386.scope/container/memory.events
Feb 25 07:38:14 np0005629333 podman[328609]: 2026-02-25 12:38:14.460835674 +0000 UTC m=+0.128946260 container attach 52098891f03848ab07ebee3b7a6bd29a1e214d31e409c000e624c4669b4aa386 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 07:38:14 np0005629333 podman[328609]: 2026-02-25 12:38:14.463820148 +0000 UTC m=+0.131930734 container died 52098891f03848ab07ebee3b7a6bd29a1e214d31e409c000e624c4669b4aa386 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nightingale, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:38:14 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7bcb527201cb61290c5da3ce10f53b8d0fe82d7ce04135ccfb2ea845c5b2a500-merged.mount: Deactivated successfully.
Feb 25 07:38:14 np0005629333 podman[328609]: 2026-02-25 12:38:14.501261615 +0000 UTC m=+0.169372181 container remove 52098891f03848ab07ebee3b7a6bd29a1e214d31e409c000e624c4669b4aa386 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_nightingale, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:38:14 np0005629333 systemd[1]: libpod-conmon-52098891f03848ab07ebee3b7a6bd29a1e214d31e409c000e624c4669b4aa386.scope: Deactivated successfully.
Feb 25 07:38:14 np0005629333 nova_compute[244014]: 2026-02-25 12:38:14.589 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:38:14 np0005629333 podman[328648]: 2026-02-25 12:38:14.640261847 +0000 UTC m=+0.043691654 container create ced6f174be39deafd1c0527a85e8d82103b099d05ddf6b064169f548e0b77b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_montalcini, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 07:38:14 np0005629333 systemd[1]: Started libpod-conmon-ced6f174be39deafd1c0527a85e8d82103b099d05ddf6b064169f548e0b77b32.scope.
Feb 25 07:38:14 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:38:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9cd2a3a50507c64854ee58b978744617b2b257d07849e62402f433cab1c9319/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:38:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9cd2a3a50507c64854ee58b978744617b2b257d07849e62402f433cab1c9319/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:38:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9cd2a3a50507c64854ee58b978744617b2b257d07849e62402f433cab1c9319/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:38:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9cd2a3a50507c64854ee58b978744617b2b257d07849e62402f433cab1c9319/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:38:14 np0005629333 podman[328648]: 2026-02-25 12:38:14.621741375 +0000 UTC m=+0.025171232 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:38:14 np0005629333 podman[328648]: 2026-02-25 12:38:14.749175211 +0000 UTC m=+0.152605068 container init ced6f174be39deafd1c0527a85e8d82103b099d05ddf6b064169f548e0b77b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_montalcini, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:38:14 np0005629333 podman[328648]: 2026-02-25 12:38:14.754975935 +0000 UTC m=+0.158405782 container start ced6f174be39deafd1c0527a85e8d82103b099d05ddf6b064169f548e0b77b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_montalcini, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:38:14 np0005629333 podman[328648]: 2026-02-25 12:38:14.774757323 +0000 UTC m=+0.178187160 container attach ced6f174be39deafd1c0527a85e8d82103b099d05ddf6b064169f548e0b77b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_montalcini, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 07:38:14 np0005629333 nova_compute[244014]: 2026-02-25 12:38:14.974 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]: {
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:    "0": [
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:        {
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "devices": [
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "/dev/loop3"
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            ],
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "lv_name": "ceph_lv0",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "lv_size": "21470642176",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "name": "ceph_lv0",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "tags": {
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.cluster_name": "ceph",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.crush_device_class": "",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.encrypted": "0",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.objectstore": "bluestore",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.osd_id": "0",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.type": "block",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.vdo": "0",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.with_tpm": "0"
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            },
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "type": "block",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "vg_name": "ceph_vg0"
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:        }
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:    ],
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:    "1": [
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:        {
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "devices": [
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "/dev/loop4"
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            ],
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "lv_name": "ceph_lv1",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "lv_size": "21470642176",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "name": "ceph_lv1",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "tags": {
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.cluster_name": "ceph",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.crush_device_class": "",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.encrypted": "0",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.objectstore": "bluestore",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.osd_id": "1",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.type": "block",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.vdo": "0",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.with_tpm": "0"
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            },
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "type": "block",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "vg_name": "ceph_vg1"
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:        }
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:    ],
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:    "2": [
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:        {
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "devices": [
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "/dev/loop5"
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            ],
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "lv_name": "ceph_lv2",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "lv_size": "21470642176",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "name": "ceph_lv2",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "tags": {
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.cluster_name": "ceph",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.crush_device_class": "",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.encrypted": "0",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.objectstore": "bluestore",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.osd_id": "2",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.type": "block",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.vdo": "0",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:                "ceph.with_tpm": "0"
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            },
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "type": "block",
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:            "vg_name": "ceph_vg2"
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:        }
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]:    ]
Feb 25 07:38:15 np0005629333 determined_montalcini[328665]: }
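The JSON block closed above appears to be the output of a cephadm-launched `ceph-volume lvm list --format json`, keyed by OSD id; each entry carries the same data twice, once as a flat `lv_tags` string and once as a structured `tags` object. A minimal sketch of recovering the mapping from the flat form (the helper name is hypothetical, not a ceph-volume API):

    # Parse the flat "lv_tags" string from `ceph-volume lvm list --format json`
    # into the structured "tags" mapping that appears alongside it above.
    def parse_lv_tags(lv_tags: str) -> dict:
        tags = {}
        for item in lv_tags.split(","):
            key, _, value = item.partition("=")
            tags[key] = value  # empty values (e.g. ceph.crush_device_class=) become ""
        return tags

    sample = ("ceph.block_device=/dev/ceph_vg1/ceph_lv1,"
              "ceph.cluster_name=ceph,ceph.osd_id=1,ceph.type=block")
    assert parse_lv_tags(sample)["ceph.osd_id"] == "1"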
Feb 25 07:38:15 np0005629333 systemd[1]: libpod-ced6f174be39deafd1c0527a85e8d82103b099d05ddf6b064169f548e0b77b32.scope: Deactivated successfully.
Feb 25 07:38:15 np0005629333 podman[328648]: 2026-02-25 12:38:15.075414107 +0000 UTC m=+0.478843954 container died ced6f174be39deafd1c0527a85e8d82103b099d05ddf6b064169f548e0b77b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_montalcini, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:38:15 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f9cd2a3a50507c64854ee58b978744617b2b257d07849e62402f433cab1c9319-merged.mount: Deactivated successfully.
Feb 25 07:38:15 np0005629333 podman[328648]: 2026-02-25 12:38:15.129290948 +0000 UTC m=+0.532720755 container remove ced6f174be39deafd1c0527a85e8d82103b099d05ddf6b064169f548e0b77b32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_montalcini, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 07:38:15 np0005629333 systemd[1]: libpod-conmon-ced6f174be39deafd1c0527a85e8d82103b099d05ddf6b064169f548e0b77b32.scope: Deactivated successfully.
Feb 25 07:38:15 np0005629333 podman[328675]: 2026-02-25 12:38:15.182475569 +0000 UTC m=+0.069238775 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0)
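The health_status=healthy event above comes from podman's periodic healthcheck timer running the container's configured '/openstack/healthcheck' test. One check can be reproduced by hand; a sketch, with the container name taken from the log:

    import subprocess

    # Re-run the container healthcheck behind the periodic
    # "health_status=healthy" events; `podman healthcheck run`
    # exits 0 when the configured test passes.
    ok = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"]
    ).returncode == 0
    print("ovn_metadata_agent healthy:", ok)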
Feb 25 07:38:15 np0005629333 podman[328773]: 2026-02-25 12:38:15.58154974 +0000 UTC m=+0.063756960 container create a191b0b31fb9a61a8cb8a9e6d7e699d39f1e2b7cc2eb74e2b383ad26f6c6fe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 07:38:15 np0005629333 systemd[1]: Started libpod-conmon-a191b0b31fb9a61a8cb8a9e6d7e699d39f1e2b7cc2eb74e2b383ad26f6c6fe6f.scope.
Feb 25 07:38:15 np0005629333 podman[328773]: 2026-02-25 12:38:15.557042849 +0000 UTC m=+0.039250109 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:38:15 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:38:15 np0005629333 podman[328773]: 2026-02-25 12:38:15.668100823 +0000 UTC m=+0.150308053 container init a191b0b31fb9a61a8cb8a9e6d7e699d39f1e2b7cc2eb74e2b383ad26f6c6fe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Feb 25 07:38:15 np0005629333 podman[328773]: 2026-02-25 12:38:15.67577681 +0000 UTC m=+0.157984030 container start a191b0b31fb9a61a8cb8a9e6d7e699d39f1e2b7cc2eb74e2b383ad26f6c6fe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_hermann, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 07:38:15 np0005629333 podman[328773]: 2026-02-25 12:38:15.679281358 +0000 UTC m=+0.161488568 container attach a191b0b31fb9a61a8cb8a9e6d7e699d39f1e2b7cc2eb74e2b383ad26f6c6fe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 07:38:15 np0005629333 intelligent_hermann[328789]: 167 167
Feb 25 07:38:15 np0005629333 systemd[1]: libpod-a191b0b31fb9a61a8cb8a9e6d7e699d39f1e2b7cc2eb74e2b383ad26f6c6fe6f.scope: Deactivated successfully.
Feb 25 07:38:15 np0005629333 podman[328773]: 2026-02-25 12:38:15.681071659 +0000 UTC m=+0.163278879 container died a191b0b31fb9a61a8cb8a9e6d7e699d39f1e2b7cc2eb74e2b383ad26f6c6fe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_hermann, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:38:15 np0005629333 systemd[1]: var-lib-containers-storage-overlay-395e13ee95a79ef64faf036f94139b0562fb1805003b952ed7240620d8a82a3a-merged.mount: Deactivated successfully.
Feb 25 07:38:15 np0005629333 podman[328773]: 2026-02-25 12:38:15.7317812 +0000 UTC m=+0.213988420 container remove a191b0b31fb9a61a8cb8a9e6d7e699d39f1e2b7cc2eb74e2b383ad26f6c6fe6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_hermann, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:38:15 np0005629333 systemd[1]: libpod-conmon-a191b0b31fb9a61a8cb8a9e6d7e699d39f1e2b7cc2eb74e2b383ad26f6c6fe6f.scope: Deactivated successfully.
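The create -> init -> start -> attach -> died -> remove sequence above is the lifecycle of a short-lived one-shot container, the pattern `podman run --rm` produces; cephadm runs such throwaway containers for its gather steps. A sketch under that assumption (the `true` command is illustrative; the image digest is the one from the log):

    import subprocess

    # One-shot container: podman logs create/init/start/attach, the process
    # exits, then died/remove follow, matching the event sequence above.
    image = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    subprocess.run(["podman", "run", "--rm", image, "true"], check=True)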
Feb 25 07:38:15 np0005629333 podman[328815]: 2026-02-25 12:38:15.915902566 +0000 UTC m=+0.056887477 container create 3237459b457920797ab86ffd7d539deff1288ec451989995e5428342c0276a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:38:15 np0005629333 systemd[1]: Started libpod-conmon-3237459b457920797ab86ffd7d539deff1288ec451989995e5428342c0276a4b.scope.
Feb 25 07:38:15 np0005629333 podman[328815]: 2026-02-25 12:38:15.894354068 +0000 UTC m=+0.035339059 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:38:15 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:38:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f73c114b128e8b67ea3d493f5eff06bccebdb9ca222de4baf5e7ea342006b60e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:38:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f73c114b128e8b67ea3d493f5eff06bccebdb9ca222de4baf5e7ea342006b60e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:38:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f73c114b128e8b67ea3d493f5eff06bccebdb9ca222de4baf5e7ea342006b60e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:38:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f73c114b128e8b67ea3d493f5eff06bccebdb9ca222de4baf5e7ea342006b60e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:38:16 np0005629333 podman[328815]: 2026-02-25 12:38:16.022668989 +0000 UTC m=+0.163653970 container init 3237459b457920797ab86ffd7d539deff1288ec451989995e5428342c0276a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_noether, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 07:38:16 np0005629333 podman[328815]: 2026-02-25 12:38:16.030919762 +0000 UTC m=+0.171904703 container start 3237459b457920797ab86ffd7d539deff1288ec451989995e5428342c0276a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:38:16 np0005629333 podman[328815]: 2026-02-25 12:38:16.034273176 +0000 UTC m=+0.175258187 container attach 3237459b457920797ab86ffd7d539deff1288ec451989995e5428342c0276a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_noether, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 07:38:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:38:16 np0005629333 lvm[328908]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:38:16 np0005629333 lvm[328908]: VG ceph_vg0 finished
Feb 25 07:38:16 np0005629333 lvm[328910]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:38:16 np0005629333 lvm[328910]: VG ceph_vg1 finished
Feb 25 07:38:16 np0005629333 lvm[328912]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:38:16 np0005629333 lvm[328912]: VG ceph_vg2 finished
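The lvm[] messages above are event-driven autoactivation: pvscan marks each PV online and, once every PV belonging to a VG is present, reports the VG complete and activates it. The resulting state can be checked through lvm2's JSON reporting; a sketch, assuming the report layout that `vgs --reportformat json` emits on this lvm2 version:

    import json
    import subprocess

    # List VGs with their PV counts via lvm2's JSON report format.
    out = subprocess.run(
        ["vgs", "--reportformat", "json", "-o", "vg_name,pv_count"],
        check=True, capture_output=True, text=True,
    ).stdout
    for vg in json.loads(out)["report"][0]["vg"]:
        print(vg["vg_name"], "PVs:", vg["pv_count"])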
Feb 25 07:38:16 np0005629333 romantic_noether[328831]: {}
Feb 25 07:38:16 np0005629333 systemd[1]: libpod-3237459b457920797ab86ffd7d539deff1288ec451989995e5428342c0276a4b.scope: Deactivated successfully.
Feb 25 07:38:16 np0005629333 podman[328815]: 2026-02-25 12:38:16.757972829 +0000 UTC m=+0.898957730 container died 3237459b457920797ab86ffd7d539deff1288ec451989995e5428342c0276a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_noether, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 07:38:16 np0005629333 systemd[1]: libpod-3237459b457920797ab86ffd7d539deff1288ec451989995e5428342c0276a4b.scope: Consumed 1.005s CPU time.
Feb 25 07:38:16 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f73c114b128e8b67ea3d493f5eff06bccebdb9ca222de4baf5e7ea342006b60e-merged.mount: Deactivated successfully.
Feb 25 07:38:16 np0005629333 podman[328815]: 2026-02-25 12:38:16.818210359 +0000 UTC m=+0.959195270 container remove 3237459b457920797ab86ffd7d539deff1288ec451989995e5428342c0276a4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_noether, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:38:16 np0005629333 systemd[1]: libpod-conmon-3237459b457920797ab86ffd7d539deff1288ec451989995e5428342c0276a4b.scope: Deactivated successfully.
Feb 25 07:38:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:38:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:38:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:38:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:38:16 np0005629333 nova_compute[244014]: 2026-02-25 12:38:16.895 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:38:16 np0005629333 nova_compute[244014]: 2026-02-25 12:38:16.896 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 07:38:16 np0005629333 podman[328923]: 2026-02-25 12:38:16.926739782 +0000 UTC m=+0.115436359 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 25 07:38:16 np0005629333 nova_compute[244014]: 2026-02-25 12:38:16.950 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 07:38:16 np0005629333 nova_compute[244014]: 2026-02-25 12:38:16.951 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:38:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:38:17 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:38:17 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:38:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:38:18 np0005629333 nova_compute[244014]: 2026-02-25 12:38:18.927 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:38:19 np0005629333 nova_compute[244014]: 2026-02-25 12:38:19.593 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:19 np0005629333 nova_compute[244014]: 2026-02-25 12:38:19.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:38:19 np0005629333 nova_compute[244014]: 2026-02-25 12:38:19.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:38:19 np0005629333 nova_compute[244014]: 2026-02-25 12:38:19.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:38:19 np0005629333 nova_compute[244014]: 2026-02-25 12:38:19.911 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:38:19 np0005629333 nova_compute[244014]: 2026-02-25 12:38:19.912 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:38:19 np0005629333 nova_compute[244014]: 2026-02-25 12:38:19.912 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:38:19 np0005629333 nova_compute[244014]: 2026-02-25 12:38:19.913 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:38:19 np0005629333 nova_compute[244014]: 2026-02-25 12:38:19.977 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:38:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:38:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3468064007' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:38:20 np0005629333 nova_compute[244014]: 2026-02-25 12:38:20.467 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
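As the DEBUG lines show, nova's resource tracker measures RBD-backed disk capacity by shelling out to `ceph df --format=json` with the openstack client id. A standalone sketch of the same measurement (the stats key names match current Ceph JSON output, but treat them as an assumption for your release):

    import json
    import subprocess

    # Same command the periodic task runs above, parsed for cluster capacity.
    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(raw)["stats"]
    print("avail GiB:", stats["total_avail_bytes"] / 1024 ** 3,
          "of", stats["total_bytes"] / 1024 ** 3)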
Feb 25 07:38:20 np0005629333 nova_compute[244014]: 2026-02-25 12:38:20.661 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:38:20 np0005629333 nova_compute[244014]: 2026-02-25 12:38:20.662 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3766MB free_disk=59.987605430185795GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 07:38:20 np0005629333 nova_compute[244014]: 2026-02-25 12:38:20.662 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:38:20 np0005629333 nova_compute[244014]: 2026-02-25 12:38:20.662 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:38:20 np0005629333 nova_compute[244014]: 2026-02-25 12:38:20.731 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 07:38:20 np0005629333 nova_compute[244014]: 2026-02-25 12:38:20.731 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 07:38:20 np0005629333 nova_compute[244014]: 2026-02-25 12:38:20.749 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:38:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:38:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/978281410' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:38:21 np0005629333 nova_compute[244014]: 2026-02-25 12:38:21.303 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:38:21 np0005629333 nova_compute[244014]: 2026-02-25 12:38:21.312 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:38:21 np0005629333 nova_compute[244014]: 2026-02-25 12:38:21.337 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:38:21 np0005629333 nova_compute[244014]: 2026-02-25 12:38:21.374 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 07:38:21 np0005629333 nova_compute[244014]: 2026-02-25 12:38:21.374 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
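The Acquiring/acquired/released triplets around "compute_resources" are oslo.concurrency's lock instrumentation: the `inner` frames in the lockutils.py paths above belong to the wrapper that lockutils.synchronized installs, which logs wait and hold durations at DEBUG. A minimal usage sketch (decorator form; nova's actual wiring differs):

    from oslo_concurrency import lockutils

    # Runs with the in-process "compute_resources" lock held; wait/hold
    # times are logged at DEBUG, producing triplets like those above.
    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        pass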
Feb 25 07:38:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:38:22 np0005629333 nova_compute[244014]: 2026-02-25 12:38:22.375 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:38:22 np0005629333 nova_compute[244014]: 2026-02-25 12:38:22.376 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:38:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:38:22 np0005629333 nova_compute[244014]: 2026-02-25 12:38:22.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:38:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:38:24 np0005629333 nova_compute[244014]: 2026-02-25 12:38:24.595 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:24 np0005629333 nova_compute[244014]: 2026-02-25 12:38:24.980 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:38:26 np0005629333 nova_compute[244014]: 2026-02-25 12:38:26.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:38:26 np0005629333 nova_compute[244014]: 2026-02-25 12:38:26.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 07:38:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:38:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:38:29 np0005629333 nova_compute[244014]: 2026-02-25 12:38:29.599 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:29 np0005629333 nova_compute[244014]: 2026-02-25 12:38:29.982 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:38:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:30.844 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:b7:46 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-4896f114-8c32-45b5-8e19-88367b748d23', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4896f114-8c32-45b5-8e19-88367b748d23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6c80f8ceb8a4441b8b67e86c03ab970', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90eeb56b-9178-4d29-a92f-928e902be9b3, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=df10225b-1c1c-46d7-a344-56179f0d6b9b) old=Port_Binding(mac=['fa:16:3e:24:b7:46 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-4896f114-8c32-45b5-8e19-88367b748d23', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4896f114-8c32-45b5-8e19-88367b748d23', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6c80f8ceb8a4441b8b67e86c03ab970', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:38:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:30.848 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port df10225b-1c1c-46d7-a344-56179f0d6b9b in datapath 4896f114-8c32-45b5-8e19-88367b748d23 updated#033[00m
Feb 25 07:38:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:30.850 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4896f114-8c32-45b5-8e19-88367b748d23, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:38:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:30.851 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[647242fc-2b29-4872-b6e9-5854bb7b17aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
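The "Matched UPDATE: PortBindingUpdatedEvent" line is ovsdbapp's event dispatch: the agent registers RowEvent subclasses that declare a table and a set of operations, and matching rows are handed to run(). A skeletal sketch (class body illustrative, not neutron's implementation):

    from ovsdbapp.backend.ovs_idl import event

    # Matches 'update' operations on the southbound Port_Binding table,
    # as in the "Matched UPDATE" debug line above.
    class PortBindingUpdatedEvent(event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def run(self, event, row, old):
            print("Port_Binding updated:", row.logical_port)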
Feb 25 07:38:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:38:30
Feb 25 07:38:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:38:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:38:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'images', 'volumes', 'vms', '.mgr']
Feb 25 07:38:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:38:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:38:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:38:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:38:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:38:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:38:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:38:31 np0005629333 nova_compute[244014]: 2026-02-25 12:38:31.672 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Acquiring lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:38:31 np0005629333 nova_compute[244014]: 2026-02-25 12:38:31.673 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:38:31 np0005629333 nova_compute[244014]: 2026-02-25 12:38:31.689 244018 DEBUG nova.compute.manager [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:38:31 np0005629333 nova_compute[244014]: 2026-02-25 12:38:31.795 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:38:31 np0005629333 nova_compute[244014]: 2026-02-25 12:38:31.796 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:38:31 np0005629333 nova_compute[244014]: 2026-02-25 12:38:31.811 244018 DEBUG nova.virt.hardware [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:38:31 np0005629333 nova_compute[244014]: 2026-02-25 12:38:31.811 244018 INFO nova.compute.claims [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:38:31 np0005629333 nova_compute[244014]: 2026-02-25 12:38:31.923 244018 DEBUG oslo_concurrency.processutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:38:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:38:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:38:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:38:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:38:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:38:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:38:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:38:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:38:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:38:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:38:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:38:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:32.238 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:38:32 np0005629333 nova_compute[244014]: 2026-02-25 12:38:32.238 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:32.243 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 25 07:38:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:38:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/489997292' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:38:32 np0005629333 nova_compute[244014]: 2026-02-25 12:38:32.458 244018 DEBUG oslo_concurrency.processutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:38:32 np0005629333 nova_compute[244014]: 2026-02-25 12:38:32.466 244018 DEBUG nova.compute.provider_tree [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:38:32 np0005629333 nova_compute[244014]: 2026-02-25 12:38:32.514 244018 DEBUG nova.scheduler.client.report [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:38:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:38:32 np0005629333 nova_compute[244014]: 2026-02-25 12:38:32.872 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:38:32 np0005629333 nova_compute[244014]: 2026-02-25 12:38:32.985 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Acquiring lock "9ffe2b64-e10a-4231-8f78-a59ae059826c" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:38:32 np0005629333 nova_compute[244014]: 2026-02-25 12:38:32.986 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "9ffe2b64-e10a-4231-8f78-a59ae059826c" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:38:33 np0005629333 nova_compute[244014]: 2026-02-25 12:38:33.016 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "9ffe2b64-e10a-4231-8f78-a59ae059826c" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:38:33 np0005629333 nova_compute[244014]: 2026-02-25 12:38:33.017 244018 DEBUG nova.compute.manager [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:38:33 np0005629333 nova_compute[244014]: 2026-02-25 12:38:33.122 244018 DEBUG nova.compute.manager [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:38:33 np0005629333 nova_compute[244014]: 2026-02-25 12:38:33.123 244018 DEBUG nova.network.neutron [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:38:33 np0005629333 nova_compute[244014]: 2026-02-25 12:38:33.167 244018 INFO nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:38:33 np0005629333 nova_compute[244014]: 2026-02-25 12:38:33.243 244018 DEBUG nova.compute.manager [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:38:33 np0005629333 nova_compute[244014]: 2026-02-25 12:38:33.373 244018 DEBUG nova.policy [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '04dc6c3292f14b8398bec7165759bd4b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8d5e5d163084460c88c8f594df149ff0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
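[annotation] The "Policy check ... failed" line above is expected rather than an error: the requester only holds the reader and member roles, so it fails the admin-only network:attach_external_network rule and nova simply proceeds without external networks. A rough sketch of the same decision with oslo.policy (the check string here is illustrative, not nova's shipped default):

    # Evaluate a policy rule against request credentials, as nova.policy does.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(
        policy.RuleDefault('network:attach_external_network', 'is_admin:True'))
    creds = {'is_admin': False, 'roles': ['reader', 'member']}
    print(enforcer.enforce('network:attach_external_network', {}, creds))  # False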
Feb 25 07:38:33 np0005629333 nova_compute[244014]: 2026-02-25 12:38:33.402 244018 DEBUG nova.compute.manager [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:38:33 np0005629333 nova_compute[244014]: 2026-02-25 12:38:33.403 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:38:33 np0005629333 nova_compute[244014]: 2026-02-25 12:38:33.404 244018 INFO nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Creating image(s)#033[00m
Feb 25 07:38:33 np0005629333 nova_compute[244014]: 2026-02-25 12:38:33.535 244018 DEBUG nova.storage.rbd_utils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] rbd image 61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:38:33 np0005629333 nova_compute[244014]: 2026-02-25 12:38:33.569 244018 DEBUG nova.storage.rbd_utils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] rbd image 61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:38:33 np0005629333 nova_compute[244014]: 2026-02-25 12:38:33.604 244018 DEBUG nova.storage.rbd_utils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] rbd image 61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:38:33 np0005629333 nova_compute[244014]: 2026-02-25 12:38:33.609 244018 DEBUG oslo_concurrency.processutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:38:33 np0005629333 nova_compute[244014]: 2026-02-25 12:38:33.700 244018 DEBUG oslo_concurrency.processutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
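[annotation] Nova probes the cached base image with qemu-img, wrapped in oslo's prlimit helper so the prober is capped at 1 GiB of address space (--as=1073741824) and 30 s of CPU. A standalone sketch of the same probe without the wrapper, reading qemu-img's JSON output (the path is the cached base image from the line above):

    # Probe an image the way nova does; --force-share allows inspecting an
    # image that may be in use, and --output=json gives parseable results.
    import json
    import os
    import subprocess

    out = subprocess.check_output(
        ['qemu-img', 'info', '--force-share', '--output=json',
         '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'],
        env=dict(os.environ, LC_ALL='C', LANG='C'))
    info = json.loads(out)
    print(info['format'], info['virtual-size'])  # e.g. 'raw' and size in bytes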
Feb 25 07:38:33 np0005629333 nova_compute[244014]: 2026-02-25 12:38:33.701 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:38:33 np0005629333 nova_compute[244014]: 2026-02-25 12:38:33.702 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:38:33 np0005629333 nova_compute[244014]: 2026-02-25 12:38:33.702 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:38:33 np0005629333 nova_compute[244014]: 2026-02-25 12:38:33.726 244018 DEBUG nova.storage.rbd_utils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] rbd image 61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:38:33 np0005629333 nova_compute[244014]: 2026-02-25 12:38:33.730 244018 DEBUG oslo_concurrency.processutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:38:33 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Feb 25 07:38:33 np0005629333 nova_compute[244014]: 2026-02-25 12:38:33.979 244018 DEBUG oslo_concurrency.processutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:38:34 np0005629333 nova_compute[244014]: 2026-02-25 12:38:34.038 244018 DEBUG nova.storage.rbd_utils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] resizing rbd image 61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
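[annotation] The two steps above push the cached base image into the vms pool and then grow the new image to the flavor's 1 GiB root disk (1073741824 bytes; root_gb=1 in the flavor dumped further down). Nova performs the resize through the librbd Python binding; a CLI-equivalent sketch of both steps would be:

    # Import the base image as the instance disk, then resize to root_gb.
    import subprocess

    base = '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'
    disk = '61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk'
    ceph = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

    subprocess.check_call(['rbd', 'import', '--pool', 'vms', base, disk,
                           '--image-format=2'] + ceph)
    subprocess.check_call(['rbd', 'resize', '--pool', 'vms', disk,
                           '--size', '1024'] + ceph)  # rbd sizes default to MiB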
Feb 25 07:38:34 np0005629333 nova_compute[244014]: 2026-02-25 12:38:34.128 244018 DEBUG nova.objects.instance [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lazy-loading 'migration_context' on Instance uuid 61e87fb9-d3ba-4d0b-ae95-86b4272980cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:38:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:38:34 np0005629333 nova_compute[244014]: 2026-02-25 12:38:34.299 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:38:34 np0005629333 nova_compute[244014]: 2026-02-25 12:38:34.299 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Ensure instance console log exists: /var/lib/nova/instances/61e87fb9-d3ba-4d0b-ae95-86b4272980cc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:38:34 np0005629333 nova_compute[244014]: 2026-02-25 12:38:34.300 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:38:34 np0005629333 nova_compute[244014]: 2026-02-25 12:38:34.300 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:38:34 np0005629333 nova_compute[244014]: 2026-02-25 12:38:34.301 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:38:34 np0005629333 nova_compute[244014]: 2026-02-25 12:38:34.313 244018 DEBUG nova.network.neutron [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Successfully created port: 3bb5f129-0f6a-4905-87a3-a81ccde523cd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:38:34 np0005629333 nova_compute[244014]: 2026-02-25 12:38:34.601 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:34 np0005629333 nova_compute[244014]: 2026-02-25 12:38:34.984 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 153 MiB data, 808 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:38:37 np0005629333 nova_compute[244014]: 2026-02-25 12:38:37.138 244018 DEBUG nova.network.neutron [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Successfully updated port: 3bb5f129-0f6a-4905-87a3-a81ccde523cd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:38:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:37.245 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:38:37 np0005629333 nova_compute[244014]: 2026-02-25 12:38:37.386 244018 DEBUG nova.compute.manager [req-97550b11-aa2d-42e7-9d45-1105baaaab03 req-32b08d17-6650-4c04-b6b9-9212c8e65511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received event network-changed-3bb5f129-0f6a-4905-87a3-a81ccde523cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:38:37 np0005629333 nova_compute[244014]: 2026-02-25 12:38:37.387 244018 DEBUG nova.compute.manager [req-97550b11-aa2d-42e7-9d45-1105baaaab03 req-32b08d17-6650-4c04-b6b9-9212c8e65511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Refreshing instance network info cache due to event network-changed-3bb5f129-0f6a-4905-87a3-a81ccde523cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:38:37 np0005629333 nova_compute[244014]: 2026-02-25 12:38:37.387 244018 DEBUG oslo_concurrency.lockutils [req-97550b11-aa2d-42e7-9d45-1105baaaab03 req-32b08d17-6650-4c04-b6b9-9212c8e65511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-61e87fb9-d3ba-4d0b-ae95-86b4272980cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:38:37 np0005629333 nova_compute[244014]: 2026-02-25 12:38:37.387 244018 DEBUG oslo_concurrency.lockutils [req-97550b11-aa2d-42e7-9d45-1105baaaab03 req-32b08d17-6650-4c04-b6b9-9212c8e65511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-61e87fb9-d3ba-4d0b-ae95-86b4272980cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:38:37 np0005629333 nova_compute[244014]: 2026-02-25 12:38:37.388 244018 DEBUG nova.network.neutron [req-97550b11-aa2d-42e7-9d45-1105baaaab03 req-32b08d17-6650-4c04-b6b9-9212c8e65511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Refreshing network info cache for port 3bb5f129-0f6a-4905-87a3-a81ccde523cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:38:37 np0005629333 nova_compute[244014]: 2026-02-25 12:38:37.391 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Acquiring lock "refresh_cache-61e87fb9-d3ba-4d0b-ae95-86b4272980cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:38:37 np0005629333 nova_compute[244014]: 2026-02-25 12:38:37.707 244018 DEBUG nova.network.neutron [req-97550b11-aa2d-42e7-9d45-1105baaaab03 req-32b08d17-6650-4c04-b6b9-9212c8e65511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:38:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:38:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:38:38 np0005629333 nova_compute[244014]: 2026-02-25 12:38:38.277 244018 DEBUG nova.network.neutron [req-97550b11-aa2d-42e7-9d45-1105baaaab03 req-32b08d17-6650-4c04-b6b9-9212c8e65511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:38:38 np0005629333 nova_compute[244014]: 2026-02-25 12:38:38.294 244018 DEBUG oslo_concurrency.lockutils [req-97550b11-aa2d-42e7-9d45-1105baaaab03 req-32b08d17-6650-4c04-b6b9-9212c8e65511 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-61e87fb9-d3ba-4d0b-ae95-86b4272980cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:38:38 np0005629333 nova_compute[244014]: 2026-02-25 12:38:38.296 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Acquired lock "refresh_cache-61e87fb9-d3ba-4d0b-ae95-86b4272980cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:38:38 np0005629333 nova_compute[244014]: 2026-02-25 12:38:38.296 244018 DEBUG nova.network.neutron [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:38:38 np0005629333 nova_compute[244014]: 2026-02-25 12:38:38.568 244018 DEBUG nova.network.neutron [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:38:39 np0005629333 nova_compute[244014]: 2026-02-25 12:38:39.604 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:39 np0005629333 nova_compute[244014]: 2026-02-25 12:38:39.987 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:38:40 np0005629333 nova_compute[244014]: 2026-02-25 12:38:40.355 244018 DEBUG nova.network.neutron [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Updating instance_info_cache with network_info: [{"id": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "address": "fa:16:3e:6a:a7:16", "network": {"id": "41119909-cf1c-417e-b52e-40b8a4b9716e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1320491603-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e5d163084460c88c8f594df149ff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bb5f129-0f", "ovs_interfaceid": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:38:40 np0005629333 nova_compute[244014]: 2026-02-25 12:38:40.383 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Releasing lock "refresh_cache-61e87fb9-d3ba-4d0b-ae95-86b4272980cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:38:40 np0005629333 nova_compute[244014]: 2026-02-25 12:38:40.383 244018 DEBUG nova.compute.manager [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Instance network_info: |[{"id": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "address": "fa:16:3e:6a:a7:16", "network": {"id": "41119909-cf1c-417e-b52e-40b8a4b9716e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1320491603-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e5d163084460c88c8f594df149ff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bb5f129-0f", "ovs_interfaceid": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:38:40 np0005629333 nova_compute[244014]: 2026-02-25 12:38:40.385 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Start _get_guest_xml network_info=[{"id": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "address": "fa:16:3e:6a:a7:16", "network": {"id": "41119909-cf1c-417e-b52e-40b8a4b9716e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1320491603-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e5d163084460c88c8f594df149ff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bb5f129-0f", "ovs_interfaceid": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
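[annotation] The network_info blob logged above is a list of VIF dicts; the MAC, tap device name, MTU and fixed IPs that end up in the guest XML below are all nested inside it. A minimal sketch that digs those fields out of a structure shaped like the logged one (trimmed to the fields used here):

    # Walk a nova network_info list and pull out what the XML generator needs.
    network_info = [{
        'id': '3bb5f129-0f6a-4905-87a3-a81ccde523cd',
        'address': 'fa:16:3e:6a:a7:16',
        'devname': 'tap3bb5f129-0f',
        'network': {'meta': {'mtu': 1442},
                    'subnets': [{'ips': [{'address': '10.100.0.13'}]}]},
    }]
    for vif in network_info:
        ips = [ip['address']
               for subnet in vif['network']['subnets']
               for ip in subnet['ips']]
        print(vif['devname'], vif['address'],
              vif['network']['meta']['mtu'], ips)
    # tap3bb5f129-0f fa:16:3e:6a:a7:16 1442 ['10.100.0.13']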
Feb 25 07:38:40 np0005629333 nova_compute[244014]: 2026-02-25 12:38:40.389 244018 WARNING nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:38:40 np0005629333 nova_compute[244014]: 2026-02-25 12:38:40.394 244018 DEBUG nova.virt.libvirt.host [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:38:40 np0005629333 nova_compute[244014]: 2026-02-25 12:38:40.394 244018 DEBUG nova.virt.libvirt.host [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:38:40 np0005629333 nova_compute[244014]: 2026-02-25 12:38:40.398 244018 DEBUG nova.virt.libvirt.host [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:38:40 np0005629333 nova_compute[244014]: 2026-02-25 12:38:40.399 244018 DEBUG nova.virt.libvirt.host [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:38:40 np0005629333 nova_compute[244014]: 2026-02-25 12:38:40.399 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:38:40 np0005629333 nova_compute[244014]: 2026-02-25 12:38:40.400 244018 DEBUG nova.virt.hardware [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:38:40 np0005629333 nova_compute[244014]: 2026-02-25 12:38:40.400 244018 DEBUG nova.virt.hardware [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:38:40 np0005629333 nova_compute[244014]: 2026-02-25 12:38:40.400 244018 DEBUG nova.virt.hardware [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:38:40 np0005629333 nova_compute[244014]: 2026-02-25 12:38:40.400 244018 DEBUG nova.virt.hardware [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:38:40 np0005629333 nova_compute[244014]: 2026-02-25 12:38:40.401 244018 DEBUG nova.virt.hardware [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:38:40 np0005629333 nova_compute[244014]: 2026-02-25 12:38:40.401 244018 DEBUG nova.virt.hardware [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:38:40 np0005629333 nova_compute[244014]: 2026-02-25 12:38:40.401 244018 DEBUG nova.virt.hardware [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:38:40 np0005629333 nova_compute[244014]: 2026-02-25 12:38:40.401 244018 DEBUG nova.virt.hardware [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:38:40 np0005629333 nova_compute[244014]: 2026-02-25 12:38:40.401 244018 DEBUG nova.virt.hardware [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:38:40 np0005629333 nova_compute[244014]: 2026-02-25 12:38:40.402 244018 DEBUG nova.virt.hardware [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:38:40 np0005629333 nova_compute[244014]: 2026-02-25 12:38:40.402 244018 DEBUG nova.virt.hardware [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
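[annotation] With no constraints from flavor or image (the 0:0:0 limits and preferences above), nova enumerates the sockets x cores x threads factorizations of the vCPU count up to the 65536 caps and sorts them by preference; for 1 vCPU the only factorization is 1:1:1, which is exactly the <topology> element emitted in the XML below. A rough sketch of that enumeration (not nova's exact code):

    # Count candidate CPU topologies whose product equals the vCPU count,
    # mirroring the "Build topologies ... Got 1 possible topologies" lines.
    def possible_topologies(vcpus):
        return [(s, c, t)
                for s in range(1, vcpus + 1)
                for c in range(1, vcpus + 1)
                for t in range(1, vcpus + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))       # [(1, 1, 1)]
    print(len(possible_topologies(4)))  # 6 candidates for a 4-vCPU flavor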
Feb 25 07:38:40 np0005629333 nova_compute[244014]: 2026-02-25 12:38:40.404 244018 DEBUG oslo_concurrency.processutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:38:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:38:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3751624086' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:38:40 np0005629333 nova_compute[244014]: 2026-02-25 12:38:40.994 244018 DEBUG oslo_concurrency.processutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
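[annotation] The mon dump call is how nova discovers monitor endpoints for the rbd <host> elements in the disk XML further down. A minimal sketch of the same lookup (field names follow ceph's mon dump JSON; the 6789 port matches the XML below):

    # Ask ceph for the monmap and list monitor addresses, as nova's
    # rbd_utils does before generating the rbd <source> elements.
    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    monmap = json.loads(out)
    for mon in monmap['mons']:
        print(mon['name'], mon['addr'])  # e.g. compute-0 192.168.122.100:6789/0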
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.015 244018 DEBUG nova.storage.rbd_utils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] rbd image 61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.018 244018 DEBUG oslo_concurrency.processutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:38:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:38:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2927167528' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.592 244018 DEBUG oslo_concurrency.processutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.595 244018 DEBUG nova.virt.libvirt.vif [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:38:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-1119160360',display_name='tempest-ServerGroupTestJSON-server-1119160360',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-1119160360',id=95,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8d5e5d163084460c88c8f594df149ff0',ramdisk_id='',reservation_id='r-s85cq2rq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-45430372',owner_user_name='tempest-ServerGroupTestJSON-45430372-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:38:33Z,user_data=None,user_id='04dc6c3292f14b8398bec7165759bd4b',uuid=61e87fb9-d3ba-4d0b-ae95-86b4272980cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "address": "fa:16:3e:6a:a7:16", "network": {"id": "41119909-cf1c-417e-b52e-40b8a4b9716e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1320491603-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e5d163084460c88c8f594df149ff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bb5f129-0f", "ovs_interfaceid": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.596 244018 DEBUG nova.network.os_vif_util [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Converting VIF {"id": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "address": "fa:16:3e:6a:a7:16", "network": {"id": "41119909-cf1c-417e-b52e-40b8a4b9716e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1320491603-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e5d163084460c88c8f594df149ff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bb5f129-0f", "ovs_interfaceid": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.598 244018 DEBUG nova.network.os_vif_util [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:a7:16,bridge_name='br-int',has_traffic_filtering=True,id=3bb5f129-0f6a-4905-87a3-a81ccde523cd,network=Network(41119909-cf1c-417e-b52e-40b8a4b9716e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bb5f129-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.600 244018 DEBUG nova.objects.instance [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 61e87fb9-d3ba-4d0b-ae95-86b4272980cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.653 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:38:41 np0005629333 nova_compute[244014]:  <uuid>61e87fb9-d3ba-4d0b-ae95-86b4272980cc</uuid>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:  <name>instance-0000005f</name>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerGroupTestJSON-server-1119160360</nova:name>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:38:40</nova:creationTime>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:38:41 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:        <nova:user uuid="04dc6c3292f14b8398bec7165759bd4b">tempest-ServerGroupTestJSON-45430372-project-member</nova:user>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:        <nova:project uuid="8d5e5d163084460c88c8f594df149ff0">tempest-ServerGroupTestJSON-45430372</nova:project>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:        <nova:port uuid="3bb5f129-0f6a-4905-87a3-a81ccde523cd">
Feb 25 07:38:41 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <entry name="serial">61e87fb9-d3ba-4d0b-ae95-86b4272980cc</entry>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <entry name="uuid">61e87fb9-d3ba-4d0b-ae95-86b4272980cc</entry>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk">
Feb 25 07:38:41 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:38:41 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk.config">
Feb 25 07:38:41 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:38:41 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:6a:a7:16"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <target dev="tap3bb5f129-0f"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/61e87fb9-d3ba-4d0b-ae95-86b4272980cc/console.log" append="off"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:38:41 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:38:41 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:38:41 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:38:41 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
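[annotation] The domain XML above is what gets handed to libvirt to define and launch the guest; both disks are network disks backed by rbd with cephx auth, and the single monitor at 192.168.122.100:6789 appears as a <host> element on each. A minimal sketch that parses XML of this shape and lists the rbd volumes (trimmed to one disk; elements copied from above):

    # Pull rbd volume names and monitor endpoints back out of a libvirt
    # domain XML shaped like the one nova just generated.
    import xml.etree.ElementTree as ET

    domain_xml = """<domain type="kvm">
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk">
            <host name="192.168.122.100" port="6789"/>
          </source>
          <target dev="vda" bus="virtio"/>
        </disk>
      </devices>
    </domain>"""

    for disk in ET.fromstring(domain_xml).findall('./devices/disk'):
        src = disk.find('source')
        host = src.find('host')
        print(disk.get('device'), src.get('name'),
              '{}:{}'.format(host.get('name'), host.get('port')))
    # disk vms/61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk 192.168.122.100:6789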
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.655 244018 DEBUG nova.compute.manager [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Preparing to wait for external event network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.656 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Acquiring lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.656 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.657 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
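[annotation] The three lock lines above register a waiter for network-vif-plugged-3bb5f129-... before the port is actually plugged, so the build thread can later block until neutron's callback arrives; registering first avoids the race where the event is delivered before anyone is listening. A rough sketch of that rendezvous with a plain threading.Event (nova's implementation sits on eventlet; the names here are ours):

    # Register-then-wait pattern for an external "vif plugged" notification.
    import threading

    events = {}

    def prepare(key):
        # Register BEFORE starting the guest so a fast callback isn't lost.
        events[key] = threading.Event()
        return events[key]

    def deliver(key):
        # Invoked by the handler for neutron's external event notification.
        events[key].set()

    w = prepare('network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd')
    deliver('network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd')
    w.wait(timeout=300)  # nova's vif_plugging_timeout defaults to 300 s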
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.658 244018 DEBUG nova.virt.libvirt.vif [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:38:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-1119160360',display_name='tempest-ServerGroupTestJSON-server-1119160360',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-1119160360',id=95,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8d5e5d163084460c88c8f594df149ff0',ramdisk_id='',reservation_id='r-s85cq2rq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-45430372',owner_user_name='tempest-ServerGroupTestJSON-45430372-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:38:33Z,user_data=None,user_id='04dc6c3292f14b8398bec7165759bd4b',uuid=61e87fb9-d3ba-4d0b-ae95-86b4272980cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "address": "fa:16:3e:6a:a7:16", "network": {"id": "41119909-cf1c-417e-b52e-40b8a4b9716e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1320491603-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e5d163084460c88c8f594df149ff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bb5f129-0f", "ovs_interfaceid": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.659 244018 DEBUG nova.network.os_vif_util [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Converting VIF {"id": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "address": "fa:16:3e:6a:a7:16", "network": {"id": "41119909-cf1c-417e-b52e-40b8a4b9716e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1320491603-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e5d163084460c88c8f594df149ff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bb5f129-0f", "ovs_interfaceid": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.660 244018 DEBUG nova.network.os_vif_util [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:a7:16,bridge_name='br-int',has_traffic_filtering=True,id=3bb5f129-0f6a-4905-87a3-a81ccde523cd,network=Network(41119909-cf1c-417e-b52e-40b8a4b9716e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bb5f129-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.660 244018 DEBUG os_vif [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:a7:16,bridge_name='br-int',has_traffic_filtering=True,id=3bb5f129-0f6a-4905-87a3-a81ccde523cd,network=Network(41119909-cf1c-417e-b52e-40b8a4b9716e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bb5f129-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.661 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.662 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.663 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.668 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.668 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3bb5f129-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.669 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3bb5f129-0f, col_values=(('external_ids', {'iface-id': '3bb5f129-0f6a-4905-87a3-a81ccde523cd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6a:a7:16', 'vm-uuid': '61e87fb9-d3ba-4d0b-ae95-86b4272980cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.671 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
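
The AddBridgeCommand/AddPortCommand/DbSetCommand entries are ovsdbapp's OVS-schema commands queued into a single transaction against the local ovsdb. A minimal sketch of the same transaction, assuming an already-established ovsdbapp connection object conn:

    from ovsdbapp.schema.open_vswitch import impl_idl

    api = impl_idl.OvsdbIdl(conn)  # conn is assumed, e.g. built from
                                   # unix:/run/openvswitch/db.sock
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap3bb5f129-0f', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap3bb5f129-0f',
            ('external_ids', {
                'iface-id': '3bb5f129-0f6a-4905-87a3-a81ccde523cd',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:6a:a7:16',
                'vm-uuid': '61e87fb9-d3ba-4d0b-ae95-86b4272980cc'})))
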
Feb 25 07:38:41 np0005629333 NetworkManager[49836]: <info>  [1772023121.6728] manager: (tap3bb5f129-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/390)
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.674 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.679 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.680 244018 INFO os_vif [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:a7:16,bridge_name='br-int',has_traffic_filtering=True,id=3bb5f129-0f6a-4905-87a3-a81ccde523cd,network=Network(41119909-cf1c-417e-b52e-40b8a4b9716e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bb5f129-0f')#033[00m
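
From Nova's side the whole sequence is two os-vif calls: initialize() to load the plugins, then plug() with the converted VIFOpenVSwitch object. A hedged standalone sketch (field values copied from the log lines above; treat the exact constructor fields as an assumption):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin used above
    net = network.Network(id='41119909-cf1c-417e-b52e-40b8a4b9716e',
                          bridge='br-int')
    ovs_vif = vif.VIFOpenVSwitch(
        id='3bb5f129-0f6a-4905-87a3-a81ccde523cd',
        address='fa:16:3e:6a:a7:16',
        bridge_name='br-int',
        vif_name='tap3bb5f129-0f',
        network=net)
    inst = instance_info.InstanceInfo(
        uuid='61e87fb9-d3ba-4d0b-ae95-86b4272980cc',
        name='tempest-ServerGroupTestJSON-server-1119160360')
    os_vif.plug(ovs_vif, inst)  # drives the ovsdb transaction logged above
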
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.809 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.810 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.810 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] No VIF found with MAC fa:16:3e:6a:a7:16, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.811 244018 INFO nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Using config drive#033[00m
Feb 25 07:38:41 np0005629333 nova_compute[244014]: 2026-02-25 12:38:41.846 244018 DEBUG nova.storage.rbd_utils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] rbd image 61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:38:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:38:42 np0005629333 nova_compute[244014]: 2026-02-25 12:38:42.379 244018 INFO nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Creating config drive at /var/lib/nova/instances/61e87fb9-d3ba-4d0b-ae95-86b4272980cc/disk.config#033[00m
Feb 25 07:38:42 np0005629333 nova_compute[244014]: 2026-02-25 12:38:42.383 244018 DEBUG oslo_concurrency.processutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/61e87fb9-d3ba-4d0b-ae95-86b4272980cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpyxqlrfkv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
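
The config drive is just an ISO9660 image: Nova renders the metadata files into a temp dir (/tmp/tmpyxqlrfkv here) and packs them with mkisofs under the volume label config-2, which is how cloud-init locates the drive inside the guest. Re-created as a sketch with oslo.concurrency (disk_config_path and tmpdir stand in for the logged paths):

    from oslo_concurrency import processutils

    processutils.execute(
        '/usr/bin/mkisofs', '-o', disk_config_path,
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher',
        'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', tmpdir)
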
Feb 25 07:38:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:38:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:38:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:38:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:38:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00035719931447786177 of space, bias 1.0, pg target 0.10715979434335852 quantized to 32 (current 32)
Feb 25 07:38:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:38:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:38:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:38:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:38:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:38:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024937167797671124 of space, bias 1.0, pg target 0.7481150339301337 quantized to 32 (current 32)
Feb 25 07:38:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:38:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.626072165167579e-07 of space, bias 4.0, pg target 0.0009151286598201094 quantized to 16 (current 16)
Feb 25 07:38:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:38:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:38:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:38:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:38:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:38:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:38:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:38:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:38:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:38:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
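
Each effective_target_ratio/Pool pair above is one autoscaler evaluation: pg_target = capacity_ratio * bias * total_target_pgs, then quantized to a pg_num that is only changed if the difference is large (hence "quantized to 32 (current 32)"). The logged values are reproduced exactly if total_target_pgs is 300, consistent with three OSDs at the default mon_target_pg_per_osd of 100; the 300 figure is inferred, not logged:

    ratio = 0.0024937167797671124   # 'images' share of raw space (logged)
    print(ratio * 1.0 * 300)        # ~0.7481150339301337, as logged
    meta = 7.626072165167579e-07    # 'cephfs.cephfs.meta', bias 4.0
    print(meta * 4.0 * 300)         # ~0.0009151286598201094, as logged
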
Feb 25 07:38:42 np0005629333 nova_compute[244014]: 2026-02-25 12:38:42.536 244018 DEBUG oslo_concurrency.processutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/61e87fb9-d3ba-4d0b-ae95-86b4272980cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpyxqlrfkv" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:38:42 np0005629333 nova_compute[244014]: 2026-02-25 12:38:42.576 244018 DEBUG nova.storage.rbd_utils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] rbd image 61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:38:42 np0005629333 nova_compute[244014]: 2026-02-25 12:38:42.581 244018 DEBUG oslo_concurrency.processutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/61e87fb9-d3ba-4d0b-ae95-86b4272980cc/disk.config 61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:38:42 np0005629333 nova_compute[244014]: 2026-02-25 12:38:42.739 244018 DEBUG oslo_concurrency.processutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/61e87fb9-d3ba-4d0b-ae95-86b4272980cc/disk.config 61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:38:42 np0005629333 nova_compute[244014]: 2026-02-25 12:38:42.741 244018 INFO nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Deleting local config drive /var/lib/nova/instances/61e87fb9-d3ba-4d0b-ae95-86b4272980cc/disk.config because it was imported into RBD.#033[00m
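
With the RBD image backend the ISO cannot stay on local disk, so it is imported into the vms pool as <uuid>_disk.config and the local copy deleted. A sketch verifying the upload with the python-rados/python-rbd bindings (assumes they are installed and that the 'openstack' cephx user may read the pool):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    image = rbd.Image(ioctx,
                      '61e87fb9-d3ba-4d0b-ae95-86b4272980cc_disk.config')
    print(image.size())   # config drive size in bytes
    image.close()
    ioctx.close()
    cluster.shutdown()
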
Feb 25 07:38:42 np0005629333 kernel: tap3bb5f129-0f: entered promiscuous mode
Feb 25 07:38:42 np0005629333 ovn_controller[147040]: 2026-02-25T12:38:42Z|00939|binding|INFO|Claiming lport 3bb5f129-0f6a-4905-87a3-a81ccde523cd for this chassis.
Feb 25 07:38:42 np0005629333 ovn_controller[147040]: 2026-02-25T12:38:42Z|00940|binding|INFO|3bb5f129-0f6a-4905-87a3-a81ccde523cd: Claiming fa:16:3e:6a:a7:16 10.100.0.13
Feb 25 07:38:42 np0005629333 NetworkManager[49836]: <info>  [1772023122.7985] manager: (tap3bb5f129-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/391)
Feb 25 07:38:42 np0005629333 nova_compute[244014]: 2026-02-25 12:38:42.796 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:42 np0005629333 nova_compute[244014]: 2026-02-25 12:38:42.800 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:42 np0005629333 nova_compute[244014]: 2026-02-25 12:38:42.807 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:42 np0005629333 nova_compute[244014]: 2026-02-25 12:38:42.816 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.819 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:a7:16 10.100.0.13'], port_security=['fa:16:3e:6a:a7:16 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '61e87fb9-d3ba-4d0b-ae95-86b4272980cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41119909-cf1c-417e-b52e-40b8a4b9716e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8d5e5d163084460c88c8f594df149ff0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cba43bfc-12c5-414d-b845-95bd5bf66142', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5504f6fc-86f0-41e2-8986-2cb5dda54647, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=3bb5f129-0f6a-4905-87a3-a81ccde523cd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:38:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.821 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 3bb5f129-0f6a-4905-87a3-a81ccde523cd in datapath 41119909-cf1c-417e-b52e-40b8a4b9716e bound to our chassis#033[00m
Feb 25 07:38:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.823 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41119909-cf1c-417e-b52e-40b8a4b9716e#033[00m
Feb 25 07:38:42 np0005629333 systemd-udevd[329348]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:38:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.834 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[162f0066-3bf8-4b0d-9e62-7e67e645c3ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:38:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.835 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap41119909-c1 in ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
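
Provisioning metadata for the datapath means building a veth pair with one end (tap41119909-c0) left in the root namespace for OVS and the other (tap41119909-c1) moved into the ovnmeta- namespace. A sketch of that step with pyroute2, which the agent drives through its privsep daemon; treat the exact keyword arguments as an assumption:

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e'
    netns.create(ns)  # idempotence/cleanup not handled in this sketch
    ip = IPRoute()
    ip.link('add', ifname='tap41119909-c0', kind='veth',
            peer={'ifname': 'tap41119909-c1', 'net_ns_fd': ns})
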
Feb 25 07:38:42 np0005629333 systemd-machined[210048]: New machine qemu-122-instance-0000005f.
Feb 25 07:38:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.838 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap41119909-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:38:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.838 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5bef7f52-3573-4ab0-bfe9-42870c221c80]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:38:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.839 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[19997dc3-b321-4ebf-9cb3-fd5c62e0c6a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:38:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:38:42 np0005629333 NetworkManager[49836]: <info>  [1772023122.8460] device (tap3bb5f129-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:38:42 np0005629333 NetworkManager[49836]: <info>  [1772023122.8471] device (tap3bb5f129-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:38:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.850 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[155bfc39-b0d8-43e7-8346-1699099b8929]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:38:42 np0005629333 systemd[1]: Started Virtual Machine qemu-122-instance-0000005f.
Feb 25 07:38:42 np0005629333 ovn_controller[147040]: 2026-02-25T12:38:42Z|00941|binding|INFO|Setting lport 3bb5f129-0f6a-4905-87a3-a81ccde523cd ovn-installed in OVS
Feb 25 07:38:42 np0005629333 ovn_controller[147040]: 2026-02-25T12:38:42Z|00942|binding|INFO|Setting lport 3bb5f129-0f6a-4905-87a3-a81ccde523cd up in Southbound
Feb 25 07:38:42 np0005629333 nova_compute[244014]: 2026-02-25 12:38:42.855 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.876 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[70abf00f-28e6-46d2-96e7-b059e4b14f04]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:38:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.908 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c31bbc-8cb2-466a-a4c7-829d48e97026]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:38:42 np0005629333 systemd-udevd[329352]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:38:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.916 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9f1b0bd3-3440-4ded-99ef-effe4b767bb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:38:42 np0005629333 NetworkManager[49836]: <info>  [1772023122.9186] manager: (tap41119909-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/392)
Feb 25 07:38:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.950 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ef907e53-e73d-4d62-aa92-cabb11f65c3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:38:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.955 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[86014866-9e4c-493c-bc5f-3309cbab350b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:38:42 np0005629333 NetworkManager[49836]: <info>  [1772023122.9782] device (tap41119909-c0): carrier: link connected
Feb 25 07:38:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:42.981 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[939550ad-5654-47e0-b110-8f7ab408bed5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:43.000 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b45d93c7-9940-4bb9-a791-4954736b2243]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41119909-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:70:37:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 284], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509238, 'reachable_time': 27991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329381, 'error': None, 'target': 'ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:43.022 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5a070214-13aa-4dc2-a1c3-aac99c3ed78b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe70:3757'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 509238, 'tstamp': 509238}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329382, 'error': None, 'target': 'ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:43.034 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[04a9fd87-0edb-457d-b557-d04ed3c1c580]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41119909-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:70:37:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 284], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509238, 'reachable_time': 27991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 329383, 'error': None, 'target': 'ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
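
The two giant privsep replies above are decoded pyroute2 netlink messages (RTM_NEWLINK); each entry of 'attrs' is a [name, value] pair. pyroute2's message objects expose get_attr() for this; on the plain decoded dict the same lookup is a few lines (msg stands for the dict shown above):

    def get_attr(msg, name):
        # return the first netlink attribute with the given name
        for key, value in msg['attrs']:
            if key == name:
                return value

    print(get_attr(msg, 'IFLA_IFNAME'))    # 'tap41119909-c1'
    print(get_attr(msg, 'IFLA_ADDRESS'))   # 'fa:16:3e:70:37:57'
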
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:43.063 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[16d56e91-1b39-4e0e-b79d-06c2b876b344]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:43.113 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a53c9f-d5b5-4c95-a862-1392f9b960a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:43.114 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41119909-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:43.115 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:43.115 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41119909-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.117 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:43 np0005629333 NetworkManager[49836]: <info>  [1772023123.1179] manager: (tap41119909-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/393)
Feb 25 07:38:43 np0005629333 kernel: tap41119909-c0: entered promiscuous mode
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.119 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:43.120 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41119909-c0, col_values=(('external_ids', {'iface-id': 'bce4452d-fc90-4131-bb29-426776368685'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.121 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:43 np0005629333 ovn_controller[147040]: 2026-02-25T12:38:43Z|00943|binding|INFO|Releasing lport bce4452d-fc90-4131-bb29-426776368685 from this chassis (sb_readonly=0)
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.125 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:43.126 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/41119909-cf1c-417e-b52e-40b8a4b9716e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/41119909-cf1c-417e-b52e-40b8a4b9716e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:43.127 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[21297fe4-5135-43b0-b02e-f912ed8c9e54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:43.128 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-41119909-cf1c-417e-b52e-40b8a4b9716e
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/41119909-cf1c-417e-b52e-40b8a4b9716e.pid.haproxy
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 41119909-cf1c-417e-b52e-40b8a4b9716e
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:38:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:43.129 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e', 'env', 'PROCESS_TAG=haproxy-41119909-cf1c-417e-b52e-40b8a4b9716e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/41119909-cf1c-417e-b52e-40b8a4b9716e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
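
The rendered config binds the proxy to 169.254.169.254:80 inside the namespace, forwards to the UNIX socket /var/lib/neutron/metadata_proxy, and tags every request with X-OVN-Network-ID so the metadata service can resolve which network the caller is on. Stripped of rootwrap, the launch above amounts to (a sketch; requires root):

    import subprocess

    ns = 'ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e'
    cfg = ('/var/lib/neutron/ovn-metadata-proxy/'
           '41119909-cf1c-417e-b52e-40b8a4b9716e.conf')
    subprocess.check_call(['ip', 'netns', 'exec', ns, 'haproxy', '-f', cfg])
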
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.184 244018 DEBUG nova.compute.manager [req-1085b3c2-1ad8-421b-be2a-315195c12728 req-8285c5ed-74d7-4da1-9df4-ecd1c894c84a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received event network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.185 244018 DEBUG oslo_concurrency.lockutils [req-1085b3c2-1ad8-421b-be2a-315195c12728 req-8285c5ed-74d7-4da1-9df4-ecd1c894c84a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.186 244018 DEBUG oslo_concurrency.lockutils [req-1085b3c2-1ad8-421b-be2a-315195c12728 req-8285c5ed-74d7-4da1-9df4-ecd1c894c84a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.186 244018 DEBUG oslo_concurrency.lockutils [req-1085b3c2-1ad8-421b-be2a-315195c12728 req-8285c5ed-74d7-4da1-9df4-ecd1c894c84a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.187 244018 DEBUG nova.compute.manager [req-1085b3c2-1ad8-421b-be2a-315195c12728 req-8285c5ed-74d7-4da1-9df4-ecd1c894c84a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Processing event network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:38:43 np0005629333 podman[329415]: 2026-02-25 12:38:43.536303093 +0000 UTC m=+0.073133205 container create bfadc3105e77e49d889167dc49751cf3a38c8d16997ef9f2e35eb81f55f82af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:38:43 np0005629333 systemd[1]: Started libpod-conmon-bfadc3105e77e49d889167dc49751cf3a38c8d16997ef9f2e35eb81f55f82af1.scope.
Feb 25 07:38:43 np0005629333 podman[329415]: 2026-02-25 12:38:43.496569952 +0000 UTC m=+0.033400084 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:38:43 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:38:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a26b6ba428616ece1f36be1800c17c0da2392d8e06e17dc1967c2979f3495730/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:38:43 np0005629333 podman[329415]: 2026-02-25 12:38:43.627865617 +0000 UTC m=+0.164695719 container init bfadc3105e77e49d889167dc49751cf3a38c8d16997ef9f2e35eb81f55f82af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 07:38:43 np0005629333 podman[329415]: 2026-02-25 12:38:43.636508681 +0000 UTC m=+0.173338793 container start bfadc3105e77e49d889167dc49751cf3a38c8d16997ef9f2e35eb81f55f82af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 07:38:43 np0005629333 neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e[329448]: [NOTICE]   (329475) : New worker (329477) forked
Feb 25 07:38:43 np0005629333 neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e[329448]: [NOTICE]   (329475) : Loading success.
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.698 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023123.698035, 61e87fb9-d3ba-4d0b-ae95-86b4272980cc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.699 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] VM Started (Lifecycle Event)#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.701 244018 DEBUG nova.compute.manager [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.705 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.709 244018 INFO nova.virt.libvirt.driver [-] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Instance spawned successfully.#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.709 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.718 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.722 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.751 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.751 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.752 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.752 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.753 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.753 244018 DEBUG nova.virt.libvirt.driver [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.757 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.757 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023123.6981664, 61e87fb9-d3ba-4d0b-ae95-86b4272980cc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.757 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.815 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.820 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023123.7040687, 61e87fb9-d3ba-4d0b-ae95-86b4272980cc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.820 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.850 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.853 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.894 244018 INFO nova.compute.manager [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Took 10.49 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.894 244018 DEBUG nova.compute.manager [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.897 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:38:43 np0005629333 nova_compute[244014]: 2026-02-25 12:38:43.987 244018 INFO nova.compute.manager [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Took 12.25 seconds to build instance.#033[00m
Feb 25 07:38:44 np0005629333 nova_compute[244014]: 2026-02-25 12:38:44.046 244018 DEBUG oslo_concurrency.lockutils [None req-22916ec6-455e-4a05-b505-dc6beb8b325c 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.373s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
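The Acquiring/acquired/"released" triplet around _locked_do_build_and_run_instance is oslo.concurrency's named-lock helper; the lock name is the instance UUID and the "held 12.373s" figure covers the whole build. A minimal sketch of the same pattern (the decorated function body is hypothetical):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("61e87fb9-d3ba-4d0b-ae95-86b4272980cc")
    def _locked_do_build_and_run_instance():
        # Build steps run while the named lock is held; concurrent calls
        # for the same name serialize, producing the waited/held timings
        # seen in the log.
        pass

    _locked_do_build_and_run_instance()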
Feb 25 07:38:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 07:38:44 np0005629333 nova_compute[244014]: 2026-02-25 12:38:44.988 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:45 np0005629333 nova_compute[244014]: 2026-02-25 12:38:45.376 244018 DEBUG nova.compute.manager [req-fd374abd-30a8-4c0f-a4d8-a4ea2f4659ab req-e7b08a18-ace3-42e0-b4b2-aeb69afa514b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received event network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:38:45 np0005629333 nova_compute[244014]: 2026-02-25 12:38:45.377 244018 DEBUG oslo_concurrency.lockutils [req-fd374abd-30a8-4c0f-a4d8-a4ea2f4659ab req-e7b08a18-ace3-42e0-b4b2-aeb69afa514b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:38:45 np0005629333 nova_compute[244014]: 2026-02-25 12:38:45.378 244018 DEBUG oslo_concurrency.lockutils [req-fd374abd-30a8-4c0f-a4d8-a4ea2f4659ab req-e7b08a18-ace3-42e0-b4b2-aeb69afa514b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:38:45 np0005629333 nova_compute[244014]: 2026-02-25 12:38:45.379 244018 DEBUG oslo_concurrency.lockutils [req-fd374abd-30a8-4c0f-a4d8-a4ea2f4659ab req-e7b08a18-ace3-42e0-b4b2-aeb69afa514b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:38:45 np0005629333 nova_compute[244014]: 2026-02-25 12:38:45.379 244018 DEBUG nova.compute.manager [req-fd374abd-30a8-4c0f-a4d8-a4ea2f4659ab req-e7b08a18-ace3-42e0-b4b2-aeb69afa514b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] No waiting events found dispatching network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:38:45 np0005629333 nova_compute[244014]: 2026-02-25 12:38:45.380 244018 WARNING nova.compute.manager [req-fd374abd-30a8-4c0f-a4d8-a4ea2f4659ab req-e7b08a18-ace3-42e0-b4b2-aeb69afa514b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received unexpected event network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd for instance with vm_state active and task_state None.#033[00m
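The WARNING above means Neutron delivered a network-vif-plugged notification through Nova's os-server-external-events API after the spawn had already finished, so no waiter was registered and the event is discarded. A hedged sketch of the request body such a notification carries (auth and endpoint handling omitted):

    # Payload shape for POST /os-server-external-events; UUIDs copied from
    # the log lines above.
    payload = {
        "events": [{
            "name": "network-vif-plugged",
            "server_uuid": "61e87fb9-d3ba-4d0b-ae95-86b4272980cc",
            "tag": "3bb5f129-0f6a-4905-87a3-a81ccde523cd",
            "status": "completed",
        }]
    }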
Feb 25 07:38:45 np0005629333 podman[329487]: 2026-02-25 12:38:45.712134985 +0000 UTC m=+0.058775530 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
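The health_status=healthy record above is podman's periodic healthcheck for ovn_metadata_agent, driven by the 'healthcheck' entry in config_data ('test': '/openstack/healthcheck'). The same probe can be run on demand; a sketch assuming podman access on the host:

    import subprocess

    # Exit status 0 means the container's configured healthcheck passed.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"],
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0 else "unhealthy")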
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.111 244018 DEBUG oslo_concurrency.lockutils [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Acquiring lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.112 244018 DEBUG oslo_concurrency.lockutils [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.113 244018 DEBUG oslo_concurrency.lockutils [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Acquiring lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.113 244018 DEBUG oslo_concurrency.lockutils [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.114 244018 DEBUG oslo_concurrency.lockutils [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.116 244018 INFO nova.compute.manager [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Terminating instance#033[00m
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.117 244018 DEBUG nova.compute.manager [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:38:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 07:38:46 np0005629333 kernel: tap3bb5f129-0f (unregistering): left promiscuous mode
Feb 25 07:38:46 np0005629333 NetworkManager[49836]: <info>  [1772023126.1643] device (tap3bb5f129-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:38:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:38:46Z|00944|binding|INFO|Releasing lport 3bb5f129-0f6a-4905-87a3-a81ccde523cd from this chassis (sb_readonly=0)
Feb 25 07:38:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:38:46Z|00945|binding|INFO|Setting lport 3bb5f129-0f6a-4905-87a3-a81ccde523cd down in Southbound
Feb 25 07:38:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:38:46Z|00946|binding|INFO|Removing iface tap3bb5f129-0f ovn-installed in OVS
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.179 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.183 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.189 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:a7:16 10.100.0.13'], port_security=['fa:16:3e:6a:a7:16 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '61e87fb9-d3ba-4d0b-ae95-86b4272980cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41119909-cf1c-417e-b52e-40b8a4b9716e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8d5e5d163084460c88c8f594df149ff0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cba43bfc-12c5-414d-b845-95bd5bf66142', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5504f6fc-86f0-41e2-8986-2cb5dda54647, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=3bb5f129-0f6a-4905-87a3-a81ccde523cd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:38:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.191 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 3bb5f129-0f6a-4905-87a3-a81ccde523cd in datapath 41119909-cf1c-417e-b52e-40b8a4b9716e unbound from our chassis#033[00m
Feb 25 07:38:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.193 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41119909-cf1c-417e-b52e-40b8a4b9716e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:38:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.195 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b12c9861-1af7-4090-b9c7-72a5596d1c38]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:38:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.196 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e namespace which is not needed anymore#033[00m
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.198 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:46 np0005629333 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Feb 25 07:38:46 np0005629333 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d0000005f.scope: Consumed 3.378s CPU time.
Feb 25 07:38:46 np0005629333 systemd-machined[210048]: Machine qemu-122-instance-0000005f terminated.
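In the systemd lines above, \x2d is systemd's C-style escape for "-" inside unit names, so machine-qemu\x2d122\x2dinstance\x2d0000005f.scope is the machine scope for the qemu domain instance-0000005f. A small decoder for that escape (systemd-escape --unescape does the same on the host):

    import re

    name = r"machine-qemu\x2d122\x2dinstance\x2d0000005f.scope"
    decoded = re.sub(r"\\x([0-9a-fA-F]{2})",
                     lambda m: chr(int(m.group(1), 16)), name)
    print(decoded)  # machine-qemu-122-instance-0000005f.scope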
Feb 25 07:38:46 np0005629333 neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e[329448]: [NOTICE]   (329475) : haproxy version is 2.8.14-c23fe91
Feb 25 07:38:46 np0005629333 neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e[329448]: [NOTICE]   (329475) : path to executable is /usr/sbin/haproxy
Feb 25 07:38:46 np0005629333 neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e[329448]: [WARNING]  (329475) : Exiting Master process...
Feb 25 07:38:46 np0005629333 neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e[329448]: [ALERT]    (329475) : Current worker (329477) exited with code 143 (Terminated)
Feb 25 07:38:46 np0005629333 neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e[329448]: [WARNING]  (329475) : All workers exited. Exiting... (0)
Feb 25 07:38:46 np0005629333 systemd[1]: libpod-bfadc3105e77e49d889167dc49751cf3a38c8d16997ef9f2e35eb81f55f82af1.scope: Deactivated successfully.
Feb 25 07:38:46 np0005629333 podman[329532]: 2026-02-25 12:38:46.314962006 +0000 UTC m=+0.044247889 container died bfadc3105e77e49d889167dc49751cf3a38c8d16997ef9f2e35eb81f55f82af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 07:38:46 np0005629333 kernel: tap3bb5f129-0f: entered promiscuous mode
Feb 25 07:38:46 np0005629333 systemd-udevd[329511]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:38:46 np0005629333 NetworkManager[49836]: <info>  [1772023126.3400] manager: (tap3bb5f129-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/394)
Feb 25 07:38:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:38:46Z|00947|binding|INFO|Claiming lport 3bb5f129-0f6a-4905-87a3-a81ccde523cd for this chassis.
Feb 25 07:38:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:38:46Z|00948|binding|INFO|3bb5f129-0f6a-4905-87a3-a81ccde523cd: Claiming fa:16:3e:6a:a7:16 10.100.0.13
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.341 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:46 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bfadc3105e77e49d889167dc49751cf3a38c8d16997ef9f2e35eb81f55f82af1-userdata-shm.mount: Deactivated successfully.
Feb 25 07:38:46 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a26b6ba428616ece1f36be1800c17c0da2392d8e06e17dc1967c2979f3495730-merged.mount: Deactivated successfully.
Feb 25 07:38:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.353 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:a7:16 10.100.0.13'], port_security=['fa:16:3e:6a:a7:16 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '61e87fb9-d3ba-4d0b-ae95-86b4272980cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41119909-cf1c-417e-b52e-40b8a4b9716e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8d5e5d163084460c88c8f594df149ff0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cba43bfc-12c5-414d-b845-95bd5bf66142', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5504f6fc-86f0-41e2-8986-2cb5dda54647, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=3bb5f129-0f6a-4905-87a3-a81ccde523cd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:38:46 np0005629333 kernel: tap3bb5f129-0f (unregistering): left promiscuous mode
Feb 25 07:38:46 np0005629333 virtnodedevd[243651]: libvirt version: 11.10.0, package: 4.el9 (builder@centos.org, 2026-01-29-15:25:17, )
Feb 25 07:38:46 np0005629333 virtnodedevd[243651]: hostname: compute-0
Feb 25 07:38:46 np0005629333 virtnodedevd[243651]: ethtool ioctl error on tap3bb5f129-0f: No such device
Feb 25 07:38:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:38:46Z|00949|binding|INFO|Setting lport 3bb5f129-0f6a-4905-87a3-a81ccde523cd ovn-installed in OVS
Feb 25 07:38:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:38:46Z|00950|binding|INFO|Setting lport 3bb5f129-0f6a-4905-87a3-a81ccde523cd up in Southbound
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.363 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:38:46Z|00951|binding|INFO|Releasing lport 3bb5f129-0f6a-4905-87a3-a81ccde523cd from this chassis (sb_readonly=1)
Feb 25 07:38:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:38:46Z|00952|if_status|INFO|Dropped 2 log messages in last 328 seconds (most recently, 328 seconds ago) due to excessive rate
Feb 25 07:38:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:38:46Z|00953|if_status|INFO|Not setting lport 3bb5f129-0f6a-4905-87a3-a81ccde523cd down as sb is readonly
Feb 25 07:38:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:38:46Z|00954|binding|INFO|Removing iface tap3bb5f129-0f ovn-installed in OVS
Feb 25 07:38:46 np0005629333 virtnodedevd[243651]: ethtool ioctl error on tap3bb5f129-0f: No such device
Feb 25 07:38:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:38:46Z|00955|binding|INFO|Releasing lport 3bb5f129-0f6a-4905-87a3-a81ccde523cd from this chassis (sb_readonly=0)
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.368 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:38:46Z|00956|binding|INFO|Setting lport 3bb5f129-0f6a-4905-87a3-a81ccde523cd down in Southbound
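The claim/release churn above is ovn-controller reacting to the tap device being recreated and torn down mid-deletion, including one pass where the Southbound DB was read-only (sb_readonly=1), so the lport could not be set down until the next write cycle. The resulting Port_Binding row can be inspected directly; a sketch assuming ovn-sbctl is available on the chassis:

    import json
    import subprocess

    # Query the Southbound Port_Binding row for the logical port from the log.
    out = subprocess.run(
        ["ovn-sbctl", "--format=json", "find", "Port_Binding",
         "logical_port=3bb5f129-0f6a-4905-87a3-a81ccde523cd"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out)["headings"])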
Feb 25 07:38:46 np0005629333 podman[329532]: 2026-02-25 12:38:46.370055701 +0000 UTC m=+0.099341634 container cleanup bfadc3105e77e49d889167dc49751cf3a38c8d16997ef9f2e35eb81f55f82af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.373 244018 INFO nova.virt.libvirt.driver [-] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Instance destroyed successfully.#033[00m
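"Instance destroyed successfully" is the tail of the driver's destroy path, which issues a hard stop against the libvirt domain instance-0000005f (the name embedded in the machine scope above). A minimal equivalent with the python libvirt bindings, assuming local hypervisor access:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-0000005f")
    dom.destroy()  # hard power-off; Nova then undefines the domain
    conn.close()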
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.373 244018 DEBUG nova.objects.instance [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lazy-loading 'resources' on Instance uuid 61e87fb9-d3ba-4d0b-ae95-86b4272980cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:38:46 np0005629333 virtnodedevd[243651]: ethtool ioctl error on tap3bb5f129-0f: No such device
Feb 25 07:38:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.376 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:a7:16 10.100.0.13'], port_security=['fa:16:3e:6a:a7:16 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '61e87fb9-d3ba-4d0b-ae95-86b4272980cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41119909-cf1c-417e-b52e-40b8a4b9716e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8d5e5d163084460c88c8f594df149ff0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cba43bfc-12c5-414d-b845-95bd5bf66142', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5504f6fc-86f0-41e2-8986-2cb5dda54647, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=3bb5f129-0f6a-4905-87a3-a81ccde523cd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:38:46 np0005629333 systemd[1]: libpod-conmon-bfadc3105e77e49d889167dc49751cf3a38c8d16997ef9f2e35eb81f55f82af1.scope: Deactivated successfully.
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.379 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:46 np0005629333 virtnodedevd[243651]: ethtool ioctl error on tap3bb5f129-0f: No such device
Feb 25 07:38:46 np0005629333 virtnodedevd[243651]: ethtool ioctl error on tap3bb5f129-0f: No such device
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.387 244018 DEBUG nova.virt.libvirt.vif [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:38:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-1119160360',display_name='tempest-ServerGroupTestJSON-server-1119160360',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-1119160360',id=95,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:38:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8d5e5d163084460c88c8f594df149ff0',ramdisk_id='',reservation_id='r-s85cq2rq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerGroupTestJSON-45430372',owner_user_name='tempest-ServerGroupTestJSON-45430372-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:38:43Z,user_data=None,user_id='04dc6c3292f14b8398bec7165759bd4b',uuid=61e87fb9-d3ba-4d0b-ae95-86b4272980cc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "address": "fa:16:3e:6a:a7:16", "network": {"id": "41119909-cf1c-417e-b52e-40b8a4b9716e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1320491603-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e5d163084460c88c8f594df149ff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bb5f129-0f", "ovs_interfaceid": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.388 244018 DEBUG nova.network.os_vif_util [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Converting VIF {"id": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "address": "fa:16:3e:6a:a7:16", "network": {"id": "41119909-cf1c-417e-b52e-40b8a4b9716e", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-1320491603-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8d5e5d163084460c88c8f594df149ff0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bb5f129-0f", "ovs_interfaceid": "3bb5f129-0f6a-4905-87a3-a81ccde523cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:38:46 np0005629333 virtnodedevd[243651]: ethtool ioctl error on tap3bb5f129-0f: No such device
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.389 244018 DEBUG nova.network.os_vif_util [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:a7:16,bridge_name='br-int',has_traffic_filtering=True,id=3bb5f129-0f6a-4905-87a3-a81ccde523cd,network=Network(41119909-cf1c-417e-b52e-40b8a4b9716e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bb5f129-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.390 244018 DEBUG os_vif [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:a7:16,bridge_name='br-int',has_traffic_filtering=True,id=3bb5f129-0f6a-4905-87a3-a81ccde523cd,network=Network(41119909-cf1c-417e-b52e-40b8a4b9716e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bb5f129-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:38:46 np0005629333 virtnodedevd[243651]: ethtool ioctl error on tap3bb5f129-0f: No such device
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.392 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.392 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3bb5f129-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
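The transaction above is os-vif removing the instance's tap port from br-int through ovsdbapp. A hedged sketch of the same call against a local ovsdb-server (the socket path is the usual default, not taken from this log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    # Mirrors DelPortCommand(port=tap3bb5f129-0f, bridge=br-int, if_exists=True)
    api.del_port("tap3bb5f129-0f", bridge="br-int",
                 if_exists=True).execute(check_error=True)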
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.394 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:46 np0005629333 virtnodedevd[243651]: ethtool ioctl error on tap3bb5f129-0f: No such device
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.396 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.398 244018 INFO os_vif [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:a7:16,bridge_name='br-int',has_traffic_filtering=True,id=3bb5f129-0f6a-4905-87a3-a81ccde523cd,network=Network(41119909-cf1c-417e-b52e-40b8a4b9716e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bb5f129-0f')#033[00m
Feb 25 07:38:46 np0005629333 podman[329572]: 2026-02-25 12:38:46.430485116 +0000 UTC m=+0.036949413 container remove bfadc3105e77e49d889167dc49751cf3a38c8d16997ef9f2e35eb81f55f82af1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 25 07:38:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.436 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2d6d913a-b9ba-4a16-8626-a14cb15b1e9a]: (4, ('Wed Feb 25 12:38:46 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e (bfadc3105e77e49d889167dc49751cf3a38c8d16997ef9f2e35eb81f55f82af1)\nbfadc3105e77e49d889167dc49751cf3a38c8d16997ef9f2e35eb81f55f82af1\nWed Feb 25 12:38:46 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e (bfadc3105e77e49d889167dc49751cf3a38c8d16997ef9f2e35eb81f55f82af1)\nbfadc3105e77e49d889167dc49751cf3a38c8d16997ef9f2e35eb81f55f82af1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:38:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.437 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6eb40c4b-a642-484f-a0f3-bc10267343ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:38:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.439 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41119909-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.441 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:46 np0005629333 kernel: tap41119909-c0: left promiscuous mode
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.450 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.452 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.453 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2f2fe8b1-7189-49c5-99c4-ec5f3ce34fc7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:38:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.468 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[75e1d72c-0043-47a7-9eb5-2574695061e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:38:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.469 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fd49ffe7-283c-4e6c-b7e3-8c4c5f3002e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:38:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.482 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dfb63cba-692f-44f1-8e81-1513cd31be4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 509231, 'reachable_time': 20922, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329614, 'error': None, 'target': 'ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:38:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.484 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:38:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.484 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[a8f15c8c-8c8a-4660-b763-4369ad56d720]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
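The namespace deletion above runs through Neutron's privsep daemon and, underneath, pyroute2. A minimal equivalent (requires root; the namespace name is copied from the log):

    from pyroute2 import netns

    # Same operation as neutron.privileged.agent.linux.ip_lib.remove_netns.
    netns.remove("ovnmeta-41119909-cf1c-417e-b52e-40b8a4b9716e")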
Feb 25 07:38:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.485 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 3bb5f129-0f6a-4905-87a3-a81ccde523cd in datapath 41119909-cf1c-417e-b52e-40b8a4b9716e unbound from our chassis#033[00m
Feb 25 07:38:46 np0005629333 systemd[1]: run-netns-ovnmeta\x2d41119909\x2dcf1c\x2d417e\x2db52e\x2d40b8a4b9716e.mount: Deactivated successfully.
Feb 25 07:38:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.486 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41119909-cf1c-417e-b52e-40b8a4b9716e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:38:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.486 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dacf3e5a-901e-4e83-a157-0410ae235334]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:38:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.487 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 3bb5f129-0f6a-4905-87a3-a81ccde523cd in datapath 41119909-cf1c-417e-b52e-40b8a4b9716e unbound from our chassis#033[00m
Feb 25 07:38:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.488 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41119909-cf1c-417e-b52e-40b8a4b9716e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:38:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:46.489 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b8b3fed2-786f-4e15-87dd-ef795859e852]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.654 244018 INFO nova.virt.libvirt.driver [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Deleting instance files /var/lib/nova/instances/61e87fb9-d3ba-4d0b-ae95-86b4272980cc_del#033[00m
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.655 244018 INFO nova.virt.libvirt.driver [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Deletion of /var/lib/nova/instances/61e87fb9-d3ba-4d0b-ae95-86b4272980cc_del complete#033[00m
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.733 244018 INFO nova.compute.manager [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Took 0.62 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.735 244018 DEBUG oslo.service.loopingcall [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.736 244018 DEBUG nova.compute.manager [-] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:38:46 np0005629333 nova_compute[244014]: 2026-02-25 12:38:46.736 244018 DEBUG nova.network.neutron [-] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
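The deallocation above is driven through oslo.service's looping-call machinery ("Waiting for function ... _deallocate_network_with_retries to return"), which re-invokes a function until it signals completion. A minimal illustration of that machinery, not Nova's actual wrapper:

    from oslo_service import loopingcall

    attempts = []

    def _try_once():
        attempts.append(1)
        if len(attempts) >= 3:  # pretend the third attempt succeeds
            raise loopingcall.LoopingCallDone(retvalue="deallocated")

    timer = loopingcall.FixedIntervalLoopingCall(_try_once)
    print(timer.start(interval=0.1).wait())  # -> deallocated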
Feb 25 07:38:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:38:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/882259766' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:38:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:38:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/882259766' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
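The audit lines show the OpenStack services' Ceph client (client.openstack at 192.168.122.10) polling cluster capacity and pool quota with librados mon commands; the JSON payloads are exactly what mon_command accepts. An equivalent call, assuming a reachable cluster and a keyring for client.openstack:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b"")
    print(json.loads(out)["stats"]["total_avail_bytes"])
    cluster.shutdown()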
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.523 244018 DEBUG nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received event network-vif-unplugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.524 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.525 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.525 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.526 244018 DEBUG nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] No waiting events found dispatching network-vif-unplugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.526 244018 DEBUG nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received event network-vif-unplugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.527 244018 DEBUG nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received event network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.527 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.528 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.529 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.529 244018 DEBUG nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] No waiting events found dispatching network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.529 244018 WARNING nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received unexpected event network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.530 244018 DEBUG nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received event network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.530 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.531 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.531 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.532 244018 DEBUG nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] No waiting events found dispatching network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.532 244018 WARNING nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received unexpected event network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.532 244018 DEBUG nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received event network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.533 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.534 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.534 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.535 244018 DEBUG nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] No waiting events found dispatching network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.535 244018 WARNING nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received unexpected event network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd for instance with vm_state active and task_state deleting.#033[00m
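The trio repeated three times above (acquire the per-instance events lock, find no waiter, log a WARNING) is Nova's external-event plumbing: Neutron delivers network-vif-plugged notifications, and pop_instance_event hands each one to whichever task registered a waiter for it beforehand. During a delete nothing is waiting, so the event is dropped as "unexpected". A minimal sketch of that registry pattern, with illustrative names rather than Nova's actual API:

    import threading

    class InstanceEvents:
        # Tiny stand-in for nova.compute.manager.InstanceEvents.
        def __init__(self):
            self._lock = threading.Lock()   # plays the "<uuid>-events" lock role
            self._waiters = {}

        def prepare(self, instance_uuid, event_key):
            # Called *before* the operation that will trigger the event.
            waiter = threading.Event()
            with self._lock:
                self._waiters.setdefault(instance_uuid, {})[event_key] = waiter
            return waiter

        def pop(self, instance_uuid, event_key):
            # Called when Neutron delivers the external event.
            with self._lock:
                return self._waiters.get(instance_uuid, {}).pop(event_key, None)

    events = InstanceEvents()
    if events.pop('61e87fb9', 'network-vif-plugged-3bb5f129') is None:
        print('No waiting events found; logged as unexpected')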
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.536 244018 DEBUG nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received event network-vif-unplugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.536 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.536 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.537 244018 DEBUG oslo_concurrency.lockutils [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.537 244018 DEBUG nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] No waiting events found dispatching network-vif-unplugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.538 244018 DEBUG nova.compute.manager [req-fb121410-8bc6-48fc-a586-c006ae6f4f46 req-c7372929-cb5a-4295-9ceb-49abfccfbb3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received event network-vif-unplugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:38:47 np0005629333 podman[329616]: 2026-02-25 12:38:47.778343991 +0000 UTC m=+0.115722117 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0)
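The podman line above is a periodic healthcheck result; the config_data label carries the configured test, /openstack/healthcheck, which runs inside the ovn_controller container. The same check can be driven by hand; a sketch assuming podman is on PATH:

    import subprocess

    # "podman healthcheck run" executes the container's configured test and
    # exits 0 when healthy, matching health_status=healthy in the log above.
    result = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_controller'],
                            capture_output=True, text=True)
    print('healthy' if result.returncode == 0 else 'unhealthy')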
Feb 25 07:38:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.876 244018 DEBUG nova.network.neutron [-] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.893 244018 INFO nova.compute.manager [-] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Took 1.16 seconds to deallocate network for instance.#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.935 244018 DEBUG oslo_concurrency.lockutils [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.936 244018 DEBUG oslo_concurrency.lockutils [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:38:47 np0005629333 nova_compute[244014]: 2026-02-25 12:38:47.994 244018 DEBUG oslo_concurrency.processutils [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:38:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 153 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Feb 25 07:38:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:38:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1650875818' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:38:48 np0005629333 nova_compute[244014]: 2026-02-25 12:38:48.569 244018 DEBUG oslo_concurrency.processutils [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
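Nova shells out to ceph df here because the instances are RBD-backed: Ceph cluster capacity, not local disk, feeds the DISK_GB inventory. A hedged sketch of the same call and the fields of interest (key names follow the ceph df JSON schema):

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    print('total GiB:', stats['total_bytes'] / 1024**3)
    print('avail GiB:', stats['total_avail_bytes'] / 1024**3)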
Feb 25 07:38:48 np0005629333 nova_compute[244014]: 2026-02-25 12:38:48.578 244018 DEBUG nova.compute.provider_tree [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:38:48 np0005629333 nova_compute[244014]: 2026-02-25 12:38:48.601 244018 DEBUG nova.scheduler.client.report [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
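The inventory above translates into schedulable capacity as (total - reserved) * allocation_ratio; a worked check against the logged numbers:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inventory.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2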
Feb 25 07:38:48 np0005629333 nova_compute[244014]: 2026-02-25 12:38:48.625 244018 DEBUG oslo_concurrency.lockutils [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:38:48 np0005629333 nova_compute[244014]: 2026-02-25 12:38:48.674 244018 INFO nova.scheduler.client.report [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Deleted allocations for instance 61e87fb9-d3ba-4d0b-ae95-86b4272980cc#033[00m
Feb 25 07:38:48 np0005629333 nova_compute[244014]: 2026-02-25 12:38:48.762 244018 DEBUG oslo_concurrency.lockutils [None req-b3dabe98-2977-4226-9aab-c74d92811cf5 04dc6c3292f14b8398bec7165759bd4b 8d5e5d163084460c88c8f594df149ff0 - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
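Every Acquiring/acquired/"released" triplet in this log, including the 2.649 s do_terminate_instance hold above, is oslo.concurrency's lock instrumentation at DEBUG level. In application code the pattern is just a decorated callable; a minimal sketch:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('61e87fb9-d3ba-4d0b-ae95-86b4272980cc')
    def do_terminate_instance():
        # Serialized per instance UUID, like terminate_instance above; with
        # debug logging enabled the wrapper emits the waited/held lines.
        pass

    do_terminate_instance()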
Feb 25 07:38:49 np0005629333 nova_compute[244014]: 2026-02-25 12:38:49.766 244018 DEBUG nova.compute.manager [req-63e042ea-b140-4e9c-b234-ae287a5f00f0 req-2f77c99f-e24c-423d-b2db-2f8dfd916d0f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received event network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:38:49 np0005629333 nova_compute[244014]: 2026-02-25 12:38:49.768 244018 DEBUG oslo_concurrency.lockutils [req-63e042ea-b140-4e9c-b234-ae287a5f00f0 req-2f77c99f-e24c-423d-b2db-2f8dfd916d0f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:38:49 np0005629333 nova_compute[244014]: 2026-02-25 12:38:49.769 244018 DEBUG oslo_concurrency.lockutils [req-63e042ea-b140-4e9c-b234-ae287a5f00f0 req-2f77c99f-e24c-423d-b2db-2f8dfd916d0f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:38:49 np0005629333 nova_compute[244014]: 2026-02-25 12:38:49.769 244018 DEBUG oslo_concurrency.lockutils [req-63e042ea-b140-4e9c-b234-ae287a5f00f0 req-2f77c99f-e24c-423d-b2db-2f8dfd916d0f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "61e87fb9-d3ba-4d0b-ae95-86b4272980cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:38:49 np0005629333 nova_compute[244014]: 2026-02-25 12:38:49.770 244018 DEBUG nova.compute.manager [req-63e042ea-b140-4e9c-b234-ae287a5f00f0 req-2f77c99f-e24c-423d-b2db-2f8dfd916d0f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] No waiting events found dispatching network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:38:49 np0005629333 nova_compute[244014]: 2026-02-25 12:38:49.770 244018 WARNING nova.compute.manager [req-63e042ea-b140-4e9c-b234-ae287a5f00f0 req-2f77c99f-e24c-423d-b2db-2f8dfd916d0f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received unexpected event network-vif-plugged-3bb5f129-0f6a-4905-87a3-a81ccde523cd for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:38:49 np0005629333 nova_compute[244014]: 2026-02-25 12:38:49.771 244018 DEBUG nova.compute.manager [req-63e042ea-b140-4e9c-b234-ae287a5f00f0 req-2f77c99f-e24c-423d-b2db-2f8dfd916d0f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Received event network-vif-deleted-3bb5f129-0f6a-4905-87a3-a81ccde523cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:38:49 np0005629333 nova_compute[244014]: 2026-02-25 12:38:49.990 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1717: 305 pgs: 305 active+clean; 153 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 98 op/s
Feb 25 07:38:51 np0005629333 nova_compute[244014]: 2026-02-25 12:38:51.396 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Feb 25 07:38:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:38:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1719: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Feb 25 07:38:54 np0005629333 nova_compute[244014]: 2026-02-25 12:38:54.557 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:54 np0005629333 nova_compute[244014]: 2026-02-25 12:38:54.992 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:55.018 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:38:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:55.019 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:38:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:38:55.019 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:38:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 98 op/s
Feb 25 07:38:56 np0005629333 nova_compute[244014]: 2026-02-25 12:38:56.399 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:38:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:38:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 98 op/s
Feb 25 07:38:59 np0005629333 nova_compute[244014]: 2026-02-25 12:38:59.995 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Feb 25 07:39:01 np0005629333 nova_compute[244014]: 2026-02-25 12:39:01.370 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023126.36903, 61e87fb9-d3ba-4d0b-ae95-86b4272980cc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:39:01 np0005629333 nova_compute[244014]: 2026-02-25 12:39:01.371 244018 INFO nova.compute.manager [-] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] VM Stopped (Lifecycle Event)#033[00m
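The Stopped lifecycle event above comes from a libvirt domain-event callback rather than from the delete request itself, which is why it lands seconds after the instance was already torn down. A hedged sketch of that subscription with the libvirt Python bindings (simplified; the real wiring lives in the nova libvirt driver):

    import libvirt

    def on_lifecycle(conn, dom, event, detail, opaque):
        if event == libvirt.VIR_DOMAIN_EVENT_STOPPED:
            print(dom.UUIDString(), '=> Stopped')

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.open('qemu:///system')
    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, on_lifecycle, None)
    while True:
        libvirt.virEventRunDefaultImpl()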
Feb 25 07:39:01 np0005629333 nova_compute[244014]: 2026-02-25 12:39:01.402 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:01 np0005629333 nova_compute[244014]: 2026-02-25 12:39:01.432 244018 DEBUG nova.compute.manager [None req-53d0a97e-9808-44b2-b575-ae6288ec19e4 - - - - - -] [instance: 61e87fb9-d3ba-4d0b-ae95-86b4272980cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:39:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:39:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:39:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:39:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:39:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:39:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:39:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:01.823 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:2f:7d 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-e5a58f3f-f1c5-482b-ba54-7b7acab1971b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5a58f3f-f1c5-482b-ba54-7b7acab1971b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6c80f8ceb8a4441b8b67e86c03ab970', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81cc8b5c-087a-44ef-8773-89172e384531, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=c863bed3-e120-4e62-b6cf-b3bdec2b0cb1) old=Port_Binding(mac=['fa:16:3e:bc:2f:7d 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-e5a58f3f-f1c5-482b-ba54-7b7acab1971b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5a58f3f-f1c5-482b-ba54-7b7acab1971b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6c80f8ceb8a4441b8b67e86c03ab970', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:39:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:01.825 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port c863bed3-e120-4e62-b6cf-b3bdec2b0cb1 in datapath e5a58f3f-f1c5-482b-ba54-7b7acab1971b updated#033[00m
Feb 25 07:39:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:01.826 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e5a58f3f-f1c5-482b-ba54-7b7acab1971b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:39:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:01.828 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[159aebb3-ea44-453f-bea7-7ee41699b713]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
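The metadata agent above reacts to a Port_Binding update for the network's localport and, finding no VIFs bound on this chassis, tears the metadata namespace down. The same southbound state can be inspected directly; a sketch, where the SB socket path is an assumption inferred from the /run/ovn volume mount in the ovn_controller config earlier:

    import subprocess

    out = subprocess.check_output(
        ['ovn-sbctl', '--db=unix:/run/ovn/ovnsb_db.sock',
         'find', 'Port_Binding', 'type=localport'], text=True)
    print(out)  # one record per metadata localport, incl. neutron:cidrs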
Feb 25 07:39:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Feb 25 07:39:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:39:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:39:04 np0005629333 nova_compute[244014]: 2026-02-25 12:39:04.997 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:39:06 np0005629333 nova_compute[244014]: 2026-02-25 12:39:06.405 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:39:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:39:09 np0005629333 nova_compute[244014]: 2026-02-25 12:39:09.081 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Acquiring lock "e25f3aed-d038-4b27-a15f-9beada2b67b0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:39:09 np0005629333 nova_compute[244014]: 2026-02-25 12:39:09.082 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "e25f3aed-d038-4b27-a15f-9beada2b67b0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:39:09 np0005629333 nova_compute[244014]: 2026-02-25 12:39:09.099 244018 DEBUG nova.compute.manager [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:39:09 np0005629333 nova_compute[244014]: 2026-02-25 12:39:09.202 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:39:09 np0005629333 nova_compute[244014]: 2026-02-25 12:39:09.203 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:39:09 np0005629333 nova_compute[244014]: 2026-02-25 12:39:09.216 244018 DEBUG nova.virt.hardware [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:39:09 np0005629333 nova_compute[244014]: 2026-02-25 12:39:09.216 244018 INFO nova.compute.claims [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Claim successful on node compute-0.ctlplane.example.com#033[00m
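The numa_fit_instance_to_host line above means the NUMA fitter was skipped because this instance carries no NUMA topology; only flavors that request one take that path. Illustrative extra specs that would create an instance NUMA topology:

    # Illustrative flavor extra specs; either would make the NUMA fitter run.
    extra_specs = {
        'hw:numa_nodes': '1',          # one virtual NUMA cell
        'hw:cpu_policy': 'dedicated',  # pinned vCPUs also imply a topology
    }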
Feb 25 07:39:09 np0005629333 nova_compute[244014]: 2026-02-25 12:39:09.336 244018 DEBUG oslo_concurrency.processutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:39:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:09.592 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:2f:7d 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-e5a58f3f-f1c5-482b-ba54-7b7acab1971b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5a58f3f-f1c5-482b-ba54-7b7acab1971b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6c80f8ceb8a4441b8b67e86c03ab970', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=81cc8b5c-087a-44ef-8773-89172e384531, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=c863bed3-e120-4e62-b6cf-b3bdec2b0cb1) old=Port_Binding(mac=['fa:16:3e:bc:2f:7d 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-e5a58f3f-f1c5-482b-ba54-7b7acab1971b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5a58f3f-f1c5-482b-ba54-7b7acab1971b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c6c80f8ceb8a4441b8b67e86c03ab970', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:39:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:09.593 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port c863bed3-e120-4e62-b6cf-b3bdec2b0cb1 in datapath e5a58f3f-f1c5-482b-ba54-7b7acab1971b updated#033[00m
Feb 25 07:39:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:09.594 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e5a58f3f-f1c5-482b-ba54-7b7acab1971b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:39:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:09.595 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e43c0192-5d00-4842-882c-58177e8f7ace]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:39:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3757355745' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:39:09 np0005629333 nova_compute[244014]: 2026-02-25 12:39:09.960 244018 DEBUG oslo_concurrency.processutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:39:09 np0005629333 nova_compute[244014]: 2026-02-25 12:39:09.967 244018 DEBUG nova.compute.provider_tree [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:39:10 np0005629333 nova_compute[244014]: 2026-02-25 12:39:10.001 244018 DEBUG nova.scheduler.client.report [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:39:10 np0005629333 nova_compute[244014]: 2026-02-25 12:39:10.006 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:10 np0005629333 nova_compute[244014]: 2026-02-25 12:39:10.032 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:39:10 np0005629333 nova_compute[244014]: 2026-02-25 12:39:10.033 244018 DEBUG nova.compute.manager [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:39:10 np0005629333 nova_compute[244014]: 2026-02-25 12:39:10.103 244018 DEBUG nova.compute.manager [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:39:10 np0005629333 nova_compute[244014]: 2026-02-25 12:39:10.104 244018 DEBUG nova.network.neutron [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:39:10 np0005629333 nova_compute[244014]: 2026-02-25 12:39:10.128 244018 INFO nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:39:10 np0005629333 nova_compute[244014]: 2026-02-25 12:39:10.152 244018 DEBUG nova.compute.manager [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:39:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:39:10 np0005629333 nova_compute[244014]: 2026-02-25 12:39:10.285 244018 DEBUG nova.compute.manager [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:39:10 np0005629333 nova_compute[244014]: 2026-02-25 12:39:10.288 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:39:10 np0005629333 nova_compute[244014]: 2026-02-25 12:39:10.288 244018 INFO nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Creating image(s)#033[00m
Feb 25 07:39:10 np0005629333 nova_compute[244014]: 2026-02-25 12:39:10.320 244018 DEBUG nova.storage.rbd_utils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] rbd image e25f3aed-d038-4b27-a15f-9beada2b67b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:39:10 np0005629333 nova_compute[244014]: 2026-02-25 12:39:10.344 244018 DEBUG nova.storage.rbd_utils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] rbd image e25f3aed-d038-4b27-a15f-9beada2b67b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:39:10 np0005629333 nova_compute[244014]: 2026-02-25 12:39:10.367 244018 DEBUG nova.storage.rbd_utils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] rbd image e25f3aed-d038-4b27-a15f-9beada2b67b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:39:10 np0005629333 nova_compute[244014]: 2026-02-25 12:39:10.371 244018 DEBUG oslo_concurrency.processutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:39:10 np0005629333 nova_compute[244014]: 2026-02-25 12:39:10.404 244018 DEBUG nova.policy [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0c0513e28db44d81aa02994a8543b844', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b2470f8bb56548cd818113da683292d8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
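The failed policy check above is informational: nova probes network:attach_external_network and, since the requester only holds reader/member roles, simply will not attach external networks for it. A hedged reproduction with oslo.policy, assuming the default admin-only rule:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(
        policy.RuleDefault('network:attach_external_network', 'role:admin'))
    creds = {'roles': ['reader', 'member']}
    print(enforcer.enforce('network:attach_external_network', {}, creds))  # False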
Feb 25 07:39:10 np0005629333 nova_compute[244014]: 2026-02-25 12:39:10.457 244018 DEBUG oslo_concurrency.processutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
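The prlimit wrapper above caps qemu-img at 1 GiB of address space and 30 s of CPU so a crafted image cannot exhaust the compute host while being probed. Via oslo.concurrency the same call looks roughly like:

    from oslo_concurrency import processutils

    out, _ = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30))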
Feb 25 07:39:10 np0005629333 nova_compute[244014]: 2026-02-25 12:39:10.457 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:39:10 np0005629333 nova_compute[244014]: 2026-02-25 12:39:10.458 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:39:10 np0005629333 nova_compute[244014]: 2026-02-25 12:39:10.459 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:39:10 np0005629333 nova_compute[244014]: 2026-02-25 12:39:10.488 244018 DEBUG nova.storage.rbd_utils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] rbd image e25f3aed-d038-4b27-a15f-9beada2b67b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:39:10 np0005629333 nova_compute[244014]: 2026-02-25 12:39:10.492 244018 DEBUG oslo_concurrency.processutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 e25f3aed-d038-4b27-a15f-9beada2b67b0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:39:10 np0005629333 nova_compute[244014]: 2026-02-25 12:39:10.812 244018 DEBUG oslo_concurrency.processutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 e25f3aed-d038-4b27-a15f-9beada2b67b0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.320s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:39:10 np0005629333 nova_compute[244014]: 2026-02-25 12:39:10.880 244018 DEBUG nova.storage.rbd_utils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] resizing rbd image e25f3aed-d038-4b27-a15f-9beada2b67b0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
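The import-then-resize pair above seeds the root disk: the cached base image is pushed into the vms pool and grown to the flavor's 1 GiB. Sketched with the rbd Python bindings instead of the CLI (error handling omitted):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    with rbd.Image(ioctx, 'e25f3aed-d038-4b27-a15f-9beada2b67b0_disk') as img:
        img.resize(1 * 1024**3)  # 1073741824 bytes, as logged above
    ioctx.close()
    cluster.shutdown()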
Feb 25 07:39:10 np0005629333 nova_compute[244014]: 2026-02-25 12:39:10.989 244018 DEBUG nova.objects.instance [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lazy-loading 'migration_context' on Instance uuid e25f3aed-d038-4b27-a15f-9beada2b67b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:39:11 np0005629333 nova_compute[244014]: 2026-02-25 12:39:11.004 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:39:11 np0005629333 nova_compute[244014]: 2026-02-25 12:39:11.005 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Ensure instance console log exists: /var/lib/nova/instances/e25f3aed-d038-4b27-a15f-9beada2b67b0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:39:11 np0005629333 nova_compute[244014]: 2026-02-25 12:39:11.005 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:39:11 np0005629333 nova_compute[244014]: 2026-02-25 12:39:11.005 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:39:11 np0005629333 nova_compute[244014]: 2026-02-25 12:39:11.006 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:39:11 np0005629333 nova_compute[244014]: 2026-02-25 12:39:11.158 244018 DEBUG nova.network.neutron [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Successfully created port: 14cc64dc-7418-4171-b1f4-1e19bed07489 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
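The port creation above is a plain Neutron API call made on the instance's behalf. Roughly the same request via openstacksdk (the cloud name is an assumption; the network id matches the cache update further below):

    import openstack

    conn = openstack.connect(cloud='overcloud')
    port = conn.network.create_port(
        network_id='941e8032-4eea-492d-a5e0-a9cc25d1a9cc',
        device_id='e25f3aed-d038-4b27-a15f-9beada2b67b0',
        device_owner='compute:nova')
    print(port.id, port.mac_address)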
Feb 25 07:39:11 np0005629333 nova_compute[244014]: 2026-02-25 12:39:11.408 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 181 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 978 KiB/s wr, 25 op/s
Feb 25 07:39:12 np0005629333 nova_compute[244014]: 2026-02-25 12:39:12.209 244018 DEBUG nova.network.neutron [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Successfully updated port: 14cc64dc-7418-4171-b1f4-1e19bed07489 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:39:12 np0005629333 nova_compute[244014]: 2026-02-25 12:39:12.225 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Acquiring lock "refresh_cache-e25f3aed-d038-4b27-a15f-9beada2b67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:39:12 np0005629333 nova_compute[244014]: 2026-02-25 12:39:12.225 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Acquired lock "refresh_cache-e25f3aed-d038-4b27-a15f-9beada2b67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:39:12 np0005629333 nova_compute[244014]: 2026-02-25 12:39:12.225 244018 DEBUG nova.network.neutron [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:39:12 np0005629333 nova_compute[244014]: 2026-02-25 12:39:12.544 244018 DEBUG nova.compute.manager [req-c45b7d69-6118-46ba-ac25-ddf6e2e2c728 req-ab3a7a81-590e-46b2-b01a-dfaa57afbf58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Received event network-changed-14cc64dc-7418-4171-b1f4-1e19bed07489 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:39:12 np0005629333 nova_compute[244014]: 2026-02-25 12:39:12.544 244018 DEBUG nova.compute.manager [req-c45b7d69-6118-46ba-ac25-ddf6e2e2c728 req-ab3a7a81-590e-46b2-b01a-dfaa57afbf58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Refreshing instance network info cache due to event network-changed-14cc64dc-7418-4171-b1f4-1e19bed07489. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:39:12 np0005629333 nova_compute[244014]: 2026-02-25 12:39:12.545 244018 DEBUG oslo_concurrency.lockutils [req-c45b7d69-6118-46ba-ac25-ddf6e2e2c728 req-ab3a7a81-590e-46b2-b01a-dfaa57afbf58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-e25f3aed-d038-4b27-a15f-9beada2b67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:39:12 np0005629333 nova_compute[244014]: 2026-02-25 12:39:12.602 244018 DEBUG nova.network.neutron [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:39:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 07:39:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3000.0 total, 600.0 interval
Cumulative writes: 8126 writes, 36K keys, 8126 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
Cumulative WAL: 8126 writes, 8126 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1489 writes, 6691 keys, 1489 commit groups, 1.0 writes per commit group, ingest: 9.13 MB, 0.02 MB/s
Interval WAL: 1489 writes, 1489 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     47.8      0.86              0.12        21    0.041       0      0       0.0       0.0
  L6      1/0    9.16 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.7    101.2     83.4      1.81              0.42        20    0.090    102K    11K       0.0       0.0
 Sum      1/0    9.16 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.7     68.5     71.9      2.67              0.54        41    0.065    102K    11K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.9     38.6     39.4      1.28              0.15        10    0.128     32K   3075       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    101.2     83.4      1.81              0.42        20    0.090    102K    11K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     48.0      0.86              0.12        20    0.043       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 3000.0 total, 600.0 interval
Flush(GB): cumulative 0.040, interval 0.008
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.19 GB write, 0.06 MB/s write, 0.18 GB read, 0.06 MB/s read, 2.7 seconds
Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 1.3 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561a1af858d0#2 capacity: 304.00 MB usage: 22.63 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000402 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(1442,21.84 MB,7.18383%) FilterBlock(42,293.36 KB,0.0942381%) IndexBlock(42,513.19 KB,0.164855%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
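RocksDB emits the dump above as one multi-line message; rsyslog-style forwarding escapes each embedded newline as #012 and ANSI colour resets as #033[00m (the latter still visible at the end of the nova_compute lines below). A minimal Python sketch for undoing that escaping when post-processing such logs; the helper name is ours, not part of any library:

    import re

    def unescape_syslog(line: str) -> str:
        """Drop escaped ANSI colour codes, then expand rsyslog-style
        #NNN octal escapes (e.g. #012 -> newline, #011 -> tab)."""
        line = re.sub(r'#033\[[0-9;]*m', '', line)
        return re.sub(r'#(\d{3})', lambda m: chr(int(m.group(1), 8)), line)

    print(unescape_syslog('** DB Stats **#012Uptime(secs): 3000.0 total#033[00m'))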
Feb 25 07:39:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:39:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:39:14 np0005629333 nova_compute[244014]: 2026-02-25 12:39:14.259 244018 DEBUG nova.network.neutron [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Updating instance_info_cache with network_info: [{"id": "14cc64dc-7418-4171-b1f4-1e19bed07489", "address": "fa:16:3e:60:e2:2c", "network": {"id": "941e8032-4eea-492d-a5e0-a9cc25d1a9cc", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-986479741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2470f8bb56548cd818113da683292d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14cc64dc-74", "ovs_interfaceid": "14cc64dc-7418-4171-b1f4-1e19bed07489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:39:14 np0005629333 nova_compute[244014]: 2026-02-25 12:39:14.279 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Releasing lock "refresh_cache-e25f3aed-d038-4b27-a15f-9beada2b67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:39:14 np0005629333 nova_compute[244014]: 2026-02-25 12:39:14.280 244018 DEBUG nova.compute.manager [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Instance network_info: |[{"id": "14cc64dc-7418-4171-b1f4-1e19bed07489", "address": "fa:16:3e:60:e2:2c", "network": {"id": "941e8032-4eea-492d-a5e0-a9cc25d1a9cc", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-986479741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2470f8bb56548cd818113da683292d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14cc64dc-74", "ovs_interfaceid": "14cc64dc-7418-4171-b1f4-1e19bed07489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:39:14 np0005629333 nova_compute[244014]: 2026-02-25 12:39:14.281 244018 DEBUG oslo_concurrency.lockutils [req-c45b7d69-6118-46ba-ac25-ddf6e2e2c728 req-ab3a7a81-590e-46b2-b01a-dfaa57afbf58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-e25f3aed-d038-4b27-a15f-9beada2b67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:39:14 np0005629333 nova_compute[244014]: 2026-02-25 12:39:14.281 244018 DEBUG nova.network.neutron [req-c45b7d69-6118-46ba-ac25-ddf6e2e2c728 req-ab3a7a81-590e-46b2-b01a-dfaa57afbf58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Refreshing network info cache for port 14cc64dc-7418-4171-b1f4-1e19bed07489 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:39:14 np0005629333 nova_compute[244014]: 2026-02-25 12:39:14.286 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Start _get_guest_xml network_info=[{"id": "14cc64dc-7418-4171-b1f4-1e19bed07489", "address": "fa:16:3e:60:e2:2c", "network": {"id": "941e8032-4eea-492d-a5e0-a9cc25d1a9cc", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-986479741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2470f8bb56548cd818113da683292d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14cc64dc-74", "ovs_interfaceid": "14cc64dc-7418-4171-b1f4-1e19bed07489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:39:14 np0005629333 nova_compute[244014]: 2026-02-25 12:39:14.293 244018 WARNING nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:39:14 np0005629333 nova_compute[244014]: 2026-02-25 12:39:14.300 244018 DEBUG nova.virt.libvirt.host [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:39:14 np0005629333 nova_compute[244014]: 2026-02-25 12:39:14.300 244018 DEBUG nova.virt.libvirt.host [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:39:14 np0005629333 nova_compute[244014]: 2026-02-25 12:39:14.303 244018 DEBUG nova.virt.libvirt.host [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:39:14 np0005629333 nova_compute[244014]: 2026-02-25 12:39:14.303 244018 DEBUG nova.virt.libvirt.host [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
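The pair of probes above is Nova looking for a usable cpu controller, first under cgroup v1 and then under the unified (v2) hierarchy, where the available controllers are listed in /sys/fs/cgroup/cgroup.controllers. A minimal sketch of an equivalent v2 check — ours, not Nova's actual implementation:

    def has_cgroupsv2_cpu_controller(path="/sys/fs/cgroup/cgroup.controllers"):
        """Return True if the unified cgroup hierarchy exposes 'cpu'."""
        try:
            with open(path) as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False  # not a cgroup-v2 host

    print(has_cgroupsv2_cpu_controller())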
Feb 25 07:39:14 np0005629333 nova_compute[244014]: 2026-02-25 12:39:14.304 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:39:14 np0005629333 nova_compute[244014]: 2026-02-25 12:39:14.304 244018 DEBUG nova.virt.hardware [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:39:14 np0005629333 nova_compute[244014]: 2026-02-25 12:39:14.305 244018 DEBUG nova.virt.hardware [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:39:14 np0005629333 nova_compute[244014]: 2026-02-25 12:39:14.305 244018 DEBUG nova.virt.hardware [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:39:14 np0005629333 nova_compute[244014]: 2026-02-25 12:39:14.306 244018 DEBUG nova.virt.hardware [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:39:14 np0005629333 nova_compute[244014]: 2026-02-25 12:39:14.306 244018 DEBUG nova.virt.hardware [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:39:14 np0005629333 nova_compute[244014]: 2026-02-25 12:39:14.307 244018 DEBUG nova.virt.hardware [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:39:14 np0005629333 nova_compute[244014]: 2026-02-25 12:39:14.307 244018 DEBUG nova.virt.hardware [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:39:14 np0005629333 nova_compute[244014]: 2026-02-25 12:39:14.307 244018 DEBUG nova.virt.hardware [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:39:14 np0005629333 nova_compute[244014]: 2026-02-25 12:39:14.308 244018 DEBUG nova.virt.hardware [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:39:14 np0005629333 nova_compute[244014]: 2026-02-25 12:39:14.309 244018 DEBUG nova.virt.hardware [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:39:14 np0005629333 nova_compute[244014]: 2026-02-25 12:39:14.309 244018 DEBUG nova.virt.hardware [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
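With no topology constraints from flavor or image (all 0:0:0) and the default 65536 limits, enumerating topologies for a single vCPU can only yield 1:1:1, which is exactly what the log shows. A rough sketch of that enumeration step, simplified from what nova.virt.hardware does:

    def possible_cpu_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """Yield (sockets, cores, threads) triples whose product equals vcpus."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_cpu_topologies(1)))   # [(1, 1, 1)] -- matches the log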
Feb 25 07:39:14 np0005629333 nova_compute[244014]: 2026-02-25 12:39:14.314 244018 DEBUG oslo_concurrency.processutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:39:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:39:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1171999011' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:39:14 np0005629333 nova_compute[244014]: 2026-02-25 12:39:14.889 244018 DEBUG oslo_concurrency.processutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
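The ceph mon dump call above is a plain subprocess driven through oslo.concurrency. A minimal sketch of the same invocation, assuming a reachable cluster and the client.openstack keyring:

    from oslo_concurrency import processutils

    # Each argument is a separate argv element; execute() returns a
    # (stdout, stderr) tuple and raises ProcessExecutionError on a
    # non-zero exit code.
    stdout, stderr = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    print(stdout)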
Feb 25 07:39:14 np0005629333 nova_compute[244014]: 2026-02-25 12:39:14.920 244018 DEBUG nova.storage.rbd_utils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] rbd image e25f3aed-d038-4b27-a15f-9beada2b67b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:39:14 np0005629333 nova_compute[244014]: 2026-02-25 12:39:14.925 244018 DEBUG oslo_concurrency.processutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:39:15 np0005629333 nova_compute[244014]: 2026-02-25 12:39:15.001 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:39:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3534650665' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:39:15 np0005629333 nova_compute[244014]: 2026-02-25 12:39:15.493 244018 DEBUG oslo_concurrency.processutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:39:15 np0005629333 nova_compute[244014]: 2026-02-25 12:39:15.496 244018 DEBUG nova.virt.libvirt.vif [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:39:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-99198811',display_name='tempest-ServerMetadataNegativeTestJSON-server-99198811',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-99198811',id=96,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b2470f8bb56548cd818113da683292d8',ramdisk_id='',reservation_id='r-8npm3urm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1634701035',owner_user_name='tempest-ServerMetadataNegativeTestJSON-1634701035-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:39:10Z,user_data=None,user_id='0c0513e28db44d81aa02994a8543b844',uuid=e25f3aed-d038-4b27-a15f-9beada2b67b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "14cc64dc-7418-4171-b1f4-1e19bed07489", "address": "fa:16:3e:60:e2:2c", "network": {"id": "941e8032-4eea-492d-a5e0-a9cc25d1a9cc", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-986479741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2470f8bb56548cd818113da683292d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14cc64dc-74", "ovs_interfaceid": "14cc64dc-7418-4171-b1f4-1e19bed07489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:39:15 np0005629333 nova_compute[244014]: 2026-02-25 12:39:15.497 244018 DEBUG nova.network.os_vif_util [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Converting VIF {"id": "14cc64dc-7418-4171-b1f4-1e19bed07489", "address": "fa:16:3e:60:e2:2c", "network": {"id": "941e8032-4eea-492d-a5e0-a9cc25d1a9cc", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-986479741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2470f8bb56548cd818113da683292d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14cc64dc-74", "ovs_interfaceid": "14cc64dc-7418-4171-b1f4-1e19bed07489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:39:15 np0005629333 nova_compute[244014]: 2026-02-25 12:39:15.499 244018 DEBUG nova.network.os_vif_util [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:e2:2c,bridge_name='br-int',has_traffic_filtering=True,id=14cc64dc-7418-4171-b1f4-1e19bed07489,network=Network(941e8032-4eea-492d-a5e0-a9cc25d1a9cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14cc64dc-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:39:15 np0005629333 nova_compute[244014]: 2026-02-25 12:39:15.501 244018 DEBUG nova.objects.instance [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lazy-loading 'pci_devices' on Instance uuid e25f3aed-d038-4b27-a15f-9beada2b67b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:39:15 np0005629333 nova_compute[244014]: 2026-02-25 12:39:15.538 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:39:15 np0005629333 nova_compute[244014]:  <uuid>e25f3aed-d038-4b27-a15f-9beada2b67b0</uuid>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:  <name>instance-00000060</name>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServerMetadataNegativeTestJSON-server-99198811</nova:name>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:39:14</nova:creationTime>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:39:15 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:        <nova:user uuid="0c0513e28db44d81aa02994a8543b844">tempest-ServerMetadataNegativeTestJSON-1634701035-project-member</nova:user>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:        <nova:project uuid="b2470f8bb56548cd818113da683292d8">tempest-ServerMetadataNegativeTestJSON-1634701035</nova:project>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:        <nova:port uuid="14cc64dc-7418-4171-b1f4-1e19bed07489">
Feb 25 07:39:15 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <entry name="serial">e25f3aed-d038-4b27-a15f-9beada2b67b0</entry>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <entry name="uuid">e25f3aed-d038-4b27-a15f-9beada2b67b0</entry>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/e25f3aed-d038-4b27-a15f-9beada2b67b0_disk">
Feb 25 07:39:15 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:39:15 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/e25f3aed-d038-4b27-a15f-9beada2b67b0_disk.config">
Feb 25 07:39:15 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:39:15 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:60:e2:2c"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <target dev="tap14cc64dc-74"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/e25f3aed-d038-4b27-a15f-9beada2b67b0/console.log" append="off"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:39:15 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:39:15 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:39:15 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:39:15 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
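Once _get_guest_xml returns, the driver hands this XML to libvirt. A minimal sketch of that step with libvirt-python, assuming the domain XML above is held in a string xml (Nova's actual call path wraps this in more machinery):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml)   # persist the domain definition
    dom.create()                # boot it; systemd-machined registers the machine below
    print(dom.name(), dom.isActive())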
Feb 25 07:39:15 np0005629333 nova_compute[244014]: 2026-02-25 12:39:15.540 244018 DEBUG nova.compute.manager [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Preparing to wait for external event network-vif-plugged-14cc64dc-7418-4171-b1f4-1e19bed07489 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:39:15 np0005629333 nova_compute[244014]: 2026-02-25 12:39:15.541 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Acquiring lock "e25f3aed-d038-4b27-a15f-9beada2b67b0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:39:15 np0005629333 nova_compute[244014]: 2026-02-25 12:39:15.541 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "e25f3aed-d038-4b27-a15f-9beada2b67b0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:39:15 np0005629333 nova_compute[244014]: 2026-02-25 12:39:15.542 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "e25f3aed-d038-4b27-a15f-9beada2b67b0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
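The lock dance above registers a waiter for the network-vif-plugged event that Neutron sends back once the port is wired up. A stripped-down sketch of the pattern, with threading.Event standing in for Nova's eventlet-based implementation:

    import threading

    events = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_for_instance_event(instance_uuid, name):
        # Create-or-get runs under a per-instance lock in the real code.
        return events.setdefault((instance_uuid, name), threading.Event())

    waiter = prepare_for_instance_event(
        'e25f3aed-d038-4b27-a15f-9beada2b67b0',
        'network-vif-plugged-14cc64dc-7418-4171-b1f4-1e19bed07489')
    # The external-event RPC handler later calls waiter.set(); the spawn
    # path blocks on waiter.wait(timeout=...) before declaring the boot done.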
Feb 25 07:39:15 np0005629333 nova_compute[244014]: 2026-02-25 12:39:15.543 244018 DEBUG nova.virt.libvirt.vif [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:39:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-99198811',display_name='tempest-ServerMetadataNegativeTestJSON-server-99198811',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-99198811',id=96,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b2470f8bb56548cd818113da683292d8',ramdisk_id='',reservation_id='r-8npm3urm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1634701035',owner_user_name='tempest-ServerMetadataNegativeTestJSON-1634701035-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:39:10Z,user_data=None,user_id='0c0513e28db44d81aa02994a8543b844',uuid=e25f3aed-d038-4b27-a15f-9beada2b67b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "14cc64dc-7418-4171-b1f4-1e19bed07489", "address": "fa:16:3e:60:e2:2c", "network": {"id": "941e8032-4eea-492d-a5e0-a9cc25d1a9cc", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-986479741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2470f8bb56548cd818113da683292d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14cc64dc-74", "ovs_interfaceid": "14cc64dc-7418-4171-b1f4-1e19bed07489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:39:15 np0005629333 nova_compute[244014]: 2026-02-25 12:39:15.543 244018 DEBUG nova.network.os_vif_util [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Converting VIF {"id": "14cc64dc-7418-4171-b1f4-1e19bed07489", "address": "fa:16:3e:60:e2:2c", "network": {"id": "941e8032-4eea-492d-a5e0-a9cc25d1a9cc", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-986479741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2470f8bb56548cd818113da683292d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14cc64dc-74", "ovs_interfaceid": "14cc64dc-7418-4171-b1f4-1e19bed07489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:39:15 np0005629333 nova_compute[244014]: 2026-02-25 12:39:15.544 244018 DEBUG nova.network.os_vif_util [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:e2:2c,bridge_name='br-int',has_traffic_filtering=True,id=14cc64dc-7418-4171-b1f4-1e19bed07489,network=Network(941e8032-4eea-492d-a5e0-a9cc25d1a9cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14cc64dc-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:39:15 np0005629333 nova_compute[244014]: 2026-02-25 12:39:15.544 244018 DEBUG os_vif [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:e2:2c,bridge_name='br-int',has_traffic_filtering=True,id=14cc64dc-7418-4171-b1f4-1e19bed07489,network=Network(941e8032-4eea-492d-a5e0-a9cc25d1a9cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14cc64dc-74') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:39:15 np0005629333 nova_compute[244014]: 2026-02-25 12:39:15.545 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:15 np0005629333 nova_compute[244014]: 2026-02-25 12:39:15.545 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:39:15 np0005629333 nova_compute[244014]: 2026-02-25 12:39:15.545 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:39:15 np0005629333 nova_compute[244014]: 2026-02-25 12:39:15.549 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:15 np0005629333 nova_compute[244014]: 2026-02-25 12:39:15.549 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14cc64dc-74, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:39:15 np0005629333 nova_compute[244014]: 2026-02-25 12:39:15.549 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap14cc64dc-74, col_values=(('external_ids', {'iface-id': '14cc64dc-7418-4171-b1f4-1e19bed07489', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:60:e2:2c', 'vm-uuid': 'e25f3aed-d038-4b27-a15f-9beada2b67b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:39:15 np0005629333 NetworkManager[49836]: <info>  [1772023155.5523] manager: (tap14cc64dc-74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/395)
Feb 25 07:39:15 np0005629333 nova_compute[244014]: 2026-02-25 12:39:15.554 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:39:15 np0005629333 nova_compute[244014]: 2026-02-25 12:39:15.556 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:15 np0005629333 nova_compute[244014]: 2026-02-25 12:39:15.558 244018 INFO os_vif [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:e2:2c,bridge_name='br-int',has_traffic_filtering=True,id=14cc64dc-7418-4171-b1f4-1e19bed07489,network=Network(941e8032-4eea-492d-a5e0-a9cc25d1a9cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14cc64dc-74')#033[00m
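The two OVSDB transactions that plugged the VIF (AddPortCommand plus DbSetCommand on the Interface row) amount to a single ovs-vsctl call, sketched here via subprocess (root privileges assumed):

    import subprocess

    subprocess.run([
        'ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tap14cc64dc-74',
        '--', 'set', 'Interface', 'tap14cc64dc-74',
        'external_ids:iface-id=14cc64dc-7418-4171-b1f4-1e19bed07489',
        'external_ids:iface-status=active',
        'external_ids:attached-mac=fa:16:3e:60:e2:2c',
        'external_ids:vm-uuid=e25f3aed-d038-4b27-a15f-9beada2b67b0',
    ], check=True)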
Feb 25 07:39:15 np0005629333 nova_compute[244014]: 2026-02-25 12:39:15.658 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:39:15 np0005629333 nova_compute[244014]: 2026-02-25 12:39:15.659 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:39:15 np0005629333 nova_compute[244014]: 2026-02-25 12:39:15.659 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] No VIF found with MAC fa:16:3e:60:e2:2c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:39:15 np0005629333 nova_compute[244014]: 2026-02-25 12:39:15.660 244018 INFO nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Using config drive#033[00m
Feb 25 07:39:15 np0005629333 nova_compute[244014]: 2026-02-25 12:39:15.693 244018 DEBUG nova.storage.rbd_utils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] rbd image e25f3aed-d038-4b27-a15f-9beada2b67b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:39:16 np0005629333 nova_compute[244014]: 2026-02-25 12:39:16.080 244018 INFO nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Creating config drive at /var/lib/nova/instances/e25f3aed-d038-4b27-a15f-9beada2b67b0/disk.config#033[00m
Feb 25 07:39:16 np0005629333 nova_compute[244014]: 2026-02-25 12:39:16.088 244018 DEBUG oslo_concurrency.processutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e25f3aed-d038-4b27-a15f-9beada2b67b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmps54vjq5m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
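Note that oslo logs the command space-joined, so the -publisher value above looks unquoted; it is in fact a single argv element. A sketch of the same call with an explicit argument list (the /tmp staging path is the one from the log; any rendered metadata tree works):

    import subprocess

    subprocess.run([
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/e25f3aed-d038-4b27-a15f-9beada2b67b0/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmps54vjq5m',   # staging directory holding the metadata tree
    ], check=True)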
Feb 25 07:39:16 np0005629333 nova_compute[244014]: 2026-02-25 12:39:16.123 244018 DEBUG nova.network.neutron [req-c45b7d69-6118-46ba-ac25-ddf6e2e2c728 req-ab3a7a81-590e-46b2-b01a-dfaa57afbf58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Updated VIF entry in instance network info cache for port 14cc64dc-7418-4171-b1f4-1e19bed07489. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:39:16 np0005629333 nova_compute[244014]: 2026-02-25 12:39:16.125 244018 DEBUG nova.network.neutron [req-c45b7d69-6118-46ba-ac25-ddf6e2e2c728 req-ab3a7a81-590e-46b2-b01a-dfaa57afbf58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Updating instance_info_cache with network_info: [{"id": "14cc64dc-7418-4171-b1f4-1e19bed07489", "address": "fa:16:3e:60:e2:2c", "network": {"id": "941e8032-4eea-492d-a5e0-a9cc25d1a9cc", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-986479741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2470f8bb56548cd818113da683292d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14cc64dc-74", "ovs_interfaceid": "14cc64dc-7418-4171-b1f4-1e19bed07489", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:39:16 np0005629333 nova_compute[244014]: 2026-02-25 12:39:16.144 244018 DEBUG oslo_concurrency.lockutils [req-c45b7d69-6118-46ba-ac25-ddf6e2e2c728 req-ab3a7a81-590e-46b2-b01a-dfaa57afbf58 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-e25f3aed-d038-4b27-a15f-9beada2b67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:39:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1730: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:39:16 np0005629333 nova_compute[244014]: 2026-02-25 12:39:16.230 244018 DEBUG oslo_concurrency.processutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e25f3aed-d038-4b27-a15f-9beada2b67b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmps54vjq5m" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:39:16 np0005629333 nova_compute[244014]: 2026-02-25 12:39:16.269 244018 DEBUG nova.storage.rbd_utils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] rbd image e25f3aed-d038-4b27-a15f-9beada2b67b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:39:16 np0005629333 nova_compute[244014]: 2026-02-25 12:39:16.275 244018 DEBUG oslo_concurrency.processutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e25f3aed-d038-4b27-a15f-9beada2b67b0/disk.config e25f3aed-d038-4b27-a15f-9beada2b67b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:39:16 np0005629333 nova_compute[244014]: 2026-02-25 12:39:16.453 244018 DEBUG oslo_concurrency.processutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e25f3aed-d038-4b27-a15f-9beada2b67b0/disk.config e25f3aed-d038-4b27-a15f-9beada2b67b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:39:16 np0005629333 nova_compute[244014]: 2026-02-25 12:39:16.455 244018 INFO nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Deleting local config drive /var/lib/nova/instances/e25f3aed-d038-4b27-a15f-9beada2b67b0/disk.config because it was imported into RBD.#033[00m
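The config drive is staged on local disk, imported into the vms pool, and the local copy then removed, as the three log lines above show. A minimal sketch of those two steps, assuming the same paths and credentials as the log:

    import os
    import subprocess

    local = '/var/lib/nova/instances/e25f3aed-d038-4b27-a15f-9beada2b67b0/disk.config'
    subprocess.run([
        'rbd', 'import', '--pool', 'vms', local,
        'e25f3aed-d038-4b27-a15f-9beada2b67b0_disk.config',
        '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ], check=True)
    os.unlink(local)   # delete the local copy once it lives in RBD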
Feb 25 07:39:16 np0005629333 kernel: tap14cc64dc-74: entered promiscuous mode
Feb 25 07:39:16 np0005629333 NetworkManager[49836]: <info>  [1772023156.5233] manager: (tap14cc64dc-74): new Tun device (/org/freedesktop/NetworkManager/Devices/396)
Feb 25 07:39:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:39:16Z|00957|binding|INFO|Claiming lport 14cc64dc-7418-4171-b1f4-1e19bed07489 for this chassis.
Feb 25 07:39:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:39:16Z|00958|binding|INFO|14cc64dc-7418-4171-b1f4-1e19bed07489: Claiming fa:16:3e:60:e2:2c 10.100.0.12
Feb 25 07:39:16 np0005629333 nova_compute[244014]: 2026-02-25 12:39:16.527 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:16 np0005629333 nova_compute[244014]: 2026-02-25 12:39:16.534 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.551 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:e2:2c 10.100.0.12'], port_security=['fa:16:3e:60:e2:2c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e25f3aed-d038-4b27-a15f-9beada2b67b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-941e8032-4eea-492d-a5e0-a9cc25d1a9cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b2470f8bb56548cd818113da683292d8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0bf5667d-50da-48f9-b7c1-919d155c28ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ce9da3ce-009a-4d1e-9912-d9a8180cd783, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=14cc64dc-7418-4171-b1f4-1e19bed07489) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.554 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 14cc64dc-7418-4171-b1f4-1e19bed07489 in datapath 941e8032-4eea-492d-a5e0-a9cc25d1a9cc bound to our chassis#033[00m
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.556 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 941e8032-4eea-492d-a5e0-a9cc25d1a9cc#033[00m
Feb 25 07:39:16 np0005629333 systemd-machined[210048]: New machine qemu-123-instance-00000060.
Feb 25 07:39:16 np0005629333 systemd-udevd[330000]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.565 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[095e0f93-fa81-4d5c-9570-93a562b8610e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.566 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap941e8032-41 in ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
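
For orientation, the veth provisioning logged here (host side tap941e8032-40, namespace side tap941e8032-41 inside ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc) can be sketched with pyroute2; this is a simplified stand-in for neutron's privsep-wrapped ip_lib helpers, not the agent's own code:

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc'
    netns.create(ns)  # the per-network metadata namespace
    with IPRoute() as ip:
        # Create the veth pair, then move the peer end into the namespace.
        ip.link('add', ifname='tap941e8032-40', kind='veth',
                peer='tap941e8032-41')
        peer = ip.link_lookup(ifname='tap941e8032-41')[0]
        ip.link('set', index=peer, net_ns_fd=ns)
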
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.568 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap941e8032-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.568 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9a7eae67-cf23-41b3-9e9c-36567252001d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.569 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[114f0d8f-e155-4a04-a73d-9dd0ef3d0893]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:16 np0005629333 NetworkManager[49836]: <info>  [1772023156.5736] device (tap14cc64dc-74): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:39:16 np0005629333 NetworkManager[49836]: <info>  [1772023156.5745] device (tap14cc64dc-74): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:39:16 np0005629333 systemd[1]: Started Virtual Machine qemu-123-instance-00000060.
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.581 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[589b9327-42bd-498d-a05c-287d763ce4dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:39:16Z|00959|binding|INFO|Setting lport 14cc64dc-7418-4171-b1f4-1e19bed07489 ovn-installed in OVS
Feb 25 07:39:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:39:16Z|00960|binding|INFO|Setting lport 14cc64dc-7418-4171-b1f4-1e19bed07489 up in Southbound
Feb 25 07:39:16 np0005629333 nova_compute[244014]: 2026-02-25 12:39:16.591 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.607 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1766eacc-a5d6-41b1-ab09-78ea4ab06206]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:16 np0005629333 podman[329975]: 2026-02-25 12:39:16.620275849 +0000 UTC m=+0.122810156 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
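
The health_status=healthy record above comes from podman's periodic healthcheck timer running the container's configured test (/openstack/healthcheck per the config_data). The same probe can be triggered by hand; a minimal sketch using the container name from the log:

    import subprocess

    # 'podman healthcheck run' executes the container's healthcheck command
    # and exits 0 when the container reports healthy.
    subprocess.run(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'],
                   check=False)
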
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.636 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d2e8469b-ed76-4174-83e7-5660d0ffd00c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.640 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0f30065f-ce10-430d-81a5-7d4065f49dc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:16 np0005629333 NetworkManager[49836]: <info>  [1772023156.6415] manager: (tap941e8032-40): new Veth device (/org/freedesktop/NetworkManager/Devices/397)
Feb 25 07:39:16 np0005629333 systemd-udevd[330009]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.670 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c99d39b7-28fa-4b62-8313-4ef6eb604190]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.674 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[fbf975a4-ae09-4a12-a521-33e6d0cbe02c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:16 np0005629333 NetworkManager[49836]: <info>  [1772023156.6909] device (tap941e8032-40): carrier: link connected
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.693 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2b08cc1e-c92c-47f3-9575-824845903548]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.709 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d501b966-177d-4297-b393-b4e1edc160c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap941e8032-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:fa:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 287], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512609, 'reachable_time': 43861, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330040, 'error': None, 'target': 'ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.720 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[47b99608-dec9-422e-ab5e-191be34f02f7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb7:fa7e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 512609, 'tstamp': 512609}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330041, 'error': None, 'target': 'ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.732 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8f787085-a64c-4eaa-b696-77eaec7010a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap941e8032-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:fa:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 287], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512609, 'reachable_time': 43861, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 330042, 'error': None, 'target': 'ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.757 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[06c25158-3129-494d-aefe-9331220d2f72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.800 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d2fad42c-b145-4b67-9dd1-4aa0bb9e1d85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.801 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap941e8032-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.802 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.803 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap941e8032-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:39:16 np0005629333 kernel: tap941e8032-40: entered promiscuous mode
Feb 25 07:39:16 np0005629333 NetworkManager[49836]: <info>  [1772023156.8052] manager: (tap941e8032-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/398)
Feb 25 07:39:16 np0005629333 nova_compute[244014]: 2026-02-25 12:39:16.804 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.812 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap941e8032-40, col_values=(('external_ids', {'iface-id': '3398affd-9bd0-414f-9185-f9baaa68693b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
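
Taken together, the three ovsdbapp transactions above delete any stale tap941e8032-40 from br-ex, add it to br-int, and stamp the interface with its OVN iface-id. An illustrative ovs-vsctl rendering of the same change (not what the agent itself executes):

    import subprocess

    port = 'tap941e8032-40'
    iface_id = '3398affd-9bd0-414f-9185-f9baaa68693b'
    subprocess.check_call([
        'ovs-vsctl',
        '--', '--if-exists', 'del-port', 'br-ex', port,
        '--', '--may-exist', 'add-port', 'br-int', port,
        '--', 'set', 'Interface', port,
        'external_ids:iface-id=%s' % iface_id,
    ])
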
Feb 25 07:39:16 np0005629333 nova_compute[244014]: 2026-02-25 12:39:16.813 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:39:16Z|00961|binding|INFO|Releasing lport 3398affd-9bd0-414f-9185-f9baaa68693b from this chassis (sb_readonly=0)
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.816 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/941e8032-4eea-492d-a5e0-a9cc25d1a9cc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/941e8032-4eea-492d-a5e0-a9cc25d1a9cc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
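
The "Unable to access ... .pid.haproxy" entry is benign: the agent probes for an existing proxy pid file and treats a missing file as "nothing running yet". A sketch of that pattern (names illustrative, not neutron's exact helper):

    def get_value_from_file(path, converter=int):
        # Return the converted file contents, or None when the file is
        # absent or unparseable -- the ENOENT case logged above.
        try:
            with open(path) as f:
                return converter(f.read().strip())
        except (OSError, ValueError):
            return None
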
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.818 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[47360737-7739-4915-b8cf-0f4ae83f9702]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:16 np0005629333 nova_compute[244014]: 2026-02-25 12:39:16.819 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.820 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-941e8032-4eea-492d-a5e0-a9cc25d1a9cc
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/941e8032-4eea-492d-a5e0-a9cc25d1a9cc.pid.haproxy
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 941e8032-4eea-492d-a5e0-a9cc25d1a9cc
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:39:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:16.820 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc', 'env', 'PROCESS_TAG=haproxy-941e8032-4eea-492d-a5e0-a9cc25d1a9cc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/941e8032-4eea-492d-a5e0-a9cc25d1a9cc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
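
Stripped of the rootwrap wrapper, the command above runs haproxy against the freshly written config inside the metadata namespace. A minimal stand-in using plain ip netns exec (neutron itself goes through neutron-rootwrap, as logged):

    import subprocess

    net = '941e8032-4eea-492d-a5e0-a9cc25d1a9cc'
    subprocess.check_call([
        'ip', 'netns', 'exec', 'ovnmeta-%s' % net,
        'env', 'PROCESS_TAG=haproxy-%s' % net,
        'haproxy', '-f',
        '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % net,
    ])
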
Feb 25 07:39:16 np0005629333 nova_compute[244014]: 2026-02-25 12:39:16.860 244018 DEBUG nova.compute.manager [req-a52d6325-7f9f-437e-b800-76b17b0aa7ca req-57306a8a-24c3-4810-954d-f19513fe1e53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Received event network-vif-plugged-14cc64dc-7418-4171-b1f4-1e19bed07489 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:39:16 np0005629333 nova_compute[244014]: 2026-02-25 12:39:16.861 244018 DEBUG oslo_concurrency.lockutils [req-a52d6325-7f9f-437e-b800-76b17b0aa7ca req-57306a8a-24c3-4810-954d-f19513fe1e53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e25f3aed-d038-4b27-a15f-9beada2b67b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:39:16 np0005629333 nova_compute[244014]: 2026-02-25 12:39:16.862 244018 DEBUG oslo_concurrency.lockutils [req-a52d6325-7f9f-437e-b800-76b17b0aa7ca req-57306a8a-24c3-4810-954d-f19513fe1e53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e25f3aed-d038-4b27-a15f-9beada2b67b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:39:16 np0005629333 nova_compute[244014]: 2026-02-25 12:39:16.863 244018 DEBUG oslo_concurrency.lockutils [req-a52d6325-7f9f-437e-b800-76b17b0aa7ca req-57306a8a-24c3-4810-954d-f19513fe1e53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e25f3aed-d038-4b27-a15f-9beada2b67b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:39:16 np0005629333 nova_compute[244014]: 2026-02-25 12:39:16.863 244018 DEBUG nova.compute.manager [req-a52d6325-7f9f-437e-b800-76b17b0aa7ca req-57306a8a-24c3-4810-954d-f19513fe1e53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Processing event network-vif-plugged-14cc64dc-7418-4171-b1f4-1e19bed07489 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
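
The lock/pop sequence above is Nova's external-event rendezvous: the spawn path registers interest in network-vif-plugged and blocks until Neutron's notification pops the event (the wait completes at 12:39:17.356 below). A toy model of the pattern, with threading.Event standing in for Nova's eventlet-based InstanceEvents and all names illustrative:

    import threading

    _events = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_for(uuid, name):
        _events[(uuid, name)] = threading.Event()

    def pop_instance_event(uuid, name):
        # Invoked when the network-vif-plugged notification arrives.
        _events[(uuid, name)].set()

    def wait_for_instance_event(uuid, name, timeout=300):
        if not _events[(uuid, name)].wait(timeout):
            raise TimeoutError('%s never arrived for %s' % (name, uuid))
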
Feb 25 07:39:17 np0005629333 podman[330124]: 2026-02-25 12:39:17.164302502 +0000 UTC m=+0.054052267 container create 3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 07:39:17 np0005629333 systemd[1]: Started libpod-conmon-3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d.scope.
Feb 25 07:39:17 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:39:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50f32400978b76bc1f139aa47fa194c73ab872f9f6624b37050a90a5820e6f2f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:39:17 np0005629333 podman[330124]: 2026-02-25 12:39:17.133629686 +0000 UTC m=+0.023379511 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:39:17 np0005629333 podman[330124]: 2026-02-25 12:39:17.247080758 +0000 UTC m=+0.136830573 container init 3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:39:17 np0005629333 podman[330124]: 2026-02-25 12:39:17.251640786 +0000 UTC m=+0.141390581 container start 3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:39:17 np0005629333 neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc[330158]: [NOTICE]   (330186) : New worker (330194) forked
Feb 25 07:39:17 np0005629333 neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc[330158]: [NOTICE]   (330186) : Loading success.
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.351 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023157.3506556, e25f3aed-d038-4b27-a15f-9beada2b67b0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.353 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] VM Started (Lifecycle Event)#033[00m
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.356 244018 DEBUG nova.compute.manager [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.359 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.362 244018 INFO nova.virt.libvirt.driver [-] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Instance spawned successfully.#033[00m
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.363 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.416 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.423 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.426 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.426 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.427 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.427 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.427 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.428 244018 DEBUG nova.virt.libvirt.driver [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.461 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.461 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023157.3510406, e25f3aed-d038-4b27-a15f-9beada2b67b0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.462 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.479 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.481 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023157.3586068, e25f3aed-d038-4b27-a15f-9beada2b67b0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.482 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.505 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.508 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.516 244018 INFO nova.compute.manager [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Took 7.23 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.516 244018 DEBUG nova.compute.manager [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.527 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.571 244018 INFO nova.compute.manager [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Took 8.42 seconds to build instance.#033[00m
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.589 244018 DEBUG oslo_concurrency.lockutils [None req-127921db-02af-40ae-93f4-4485f97e14e8 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "e25f3aed-d038-4b27-a15f-9beada2b67b0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.507s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
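
The acquire/release pairs threaded through this build (ending with the 8.507s hold above) come from oslo.concurrency. A minimal sketch of the same pattern, with the lock name mirroring the per-instance lock in the log:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('e25f3aed-d038-4b27-a15f-9beada2b67b0')
    def _locked_do_build_and_run_instance():
        ...  # build steps run while the per-instance lock is held
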
Feb 25 07:39:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:39:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:39:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:39:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:39:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:39:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:39:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:39:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:39:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:39:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:39:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:39:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:39:17 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:39:17 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:39:17 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
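
The audited mgr-to-mon traffic above ('config generate-minimal-conf', 'auth get') can be reproduced through the librados Python binding; a hedged sketch, assuming the standard rados module whose mon_command takes a JSON command buffer and returns (ret, outbuf, outs):

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ret, out, errs = cluster.mon_command(
        json.dumps({'prefix': 'config generate-minimal-conf'}), b'')
    print(out.decode())  # a minimal [global] section with fsid and mon_host
    cluster.shutdown()
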
Feb 25 07:39:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 07:39:17 np0005629333 nova_compute[244014]: 2026-02-25 12:39:17.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 07:39:17 np0005629333 podman[330277]: 2026-02-25 12:39:17.942552424 +0000 UTC m=+0.109575443 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 07:39:18 np0005629333 nova_compute[244014]: 2026-02-25 12:39:18.113 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-e25f3aed-d038-4b27-a15f-9beada2b67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:39:18 np0005629333 nova_compute[244014]: 2026-02-25 12:39:18.114 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-e25f3aed-d038-4b27-a15f-9beada2b67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:39:18 np0005629333 nova_compute[244014]: 2026-02-25 12:39:18.114 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 25 07:39:18 np0005629333 nova_compute[244014]: 2026-02-25 12:39:18.115 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid e25f3aed-d038-4b27-a15f-9beada2b67b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:39:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Feb 25 07:39:18 np0005629333 podman[330314]: 2026-02-25 12:39:18.212922614 +0000 UTC m=+0.123120056 container create f0606ae580b6a1f44c1b3b8d5ffb04a33e13db32467b666b0c00de1311f49543 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_carver, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 07:39:18 np0005629333 podman[330314]: 2026-02-25 12:39:18.128366727 +0000 UTC m=+0.038564069 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:39:18 np0005629333 systemd[1]: Started libpod-conmon-f0606ae580b6a1f44c1b3b8d5ffb04a33e13db32467b666b0c00de1311f49543.scope.
Feb 25 07:39:18 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:39:18 np0005629333 podman[330314]: 2026-02-25 12:39:18.331105819 +0000 UTC m=+0.241303171 container init f0606ae580b6a1f44c1b3b8d5ffb04a33e13db32467b666b0c00de1311f49543 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_carver, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:39:18 np0005629333 podman[330314]: 2026-02-25 12:39:18.341373179 +0000 UTC m=+0.251570501 container start f0606ae580b6a1f44c1b3b8d5ffb04a33e13db32467b666b0c00de1311f49543 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_carver, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:39:18 np0005629333 silly_carver[330330]: 167 167
Feb 25 07:39:18 np0005629333 systemd[1]: libpod-f0606ae580b6a1f44c1b3b8d5ffb04a33e13db32467b666b0c00de1311f49543.scope: Deactivated successfully.
Feb 25 07:39:18 np0005629333 podman[330314]: 2026-02-25 12:39:18.34639241 +0000 UTC m=+0.256589762 container attach f0606ae580b6a1f44c1b3b8d5ffb04a33e13db32467b666b0c00de1311f49543 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_carver, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:39:18 np0005629333 podman[330314]: 2026-02-25 12:39:18.347721668 +0000 UTC m=+0.257918980 container died f0606ae580b6a1f44c1b3b8d5ffb04a33e13db32467b666b0c00de1311f49543 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_carver, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:39:18 np0005629333 systemd[1]: var-lib-containers-storage-overlay-270082f391bdf911c47c26fc2d9af20322580860774b51e7471192f466be4bbf-merged.mount: Deactivated successfully.
Feb 25 07:39:18 np0005629333 podman[330314]: 2026-02-25 12:39:18.384433824 +0000 UTC m=+0.294631146 container remove f0606ae580b6a1f44c1b3b8d5ffb04a33e13db32467b666b0c00de1311f49543 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_carver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 07:39:18 np0005629333 systemd[1]: libpod-conmon-f0606ae580b6a1f44c1b3b8d5ffb04a33e13db32467b666b0c00de1311f49543.scope: Deactivated successfully.
Feb 25 07:39:18 np0005629333 podman[330355]: 2026-02-25 12:39:18.530784874 +0000 UTC m=+0.037231862 container create d9155e6a87d8b9d1f5ab3b292701a86218715cede490ca2fd58131a4527eb3f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:39:18 np0005629333 systemd[1]: Started libpod-conmon-d9155e6a87d8b9d1f5ab3b292701a86218715cede490ca2fd58131a4527eb3f6.scope.
Feb 25 07:39:18 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:39:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d37e14a82e721a5c377c0ea8d8b55dade4478b13c08fea30c7f1f6d9669ee2b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:39:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d37e14a82e721a5c377c0ea8d8b55dade4478b13c08fea30c7f1f6d9669ee2b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:39:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d37e14a82e721a5c377c0ea8d8b55dade4478b13c08fea30c7f1f6d9669ee2b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:39:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d37e14a82e721a5c377c0ea8d8b55dade4478b13c08fea30c7f1f6d9669ee2b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:39:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d37e14a82e721a5c377c0ea8d8b55dade4478b13c08fea30c7f1f6d9669ee2b9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:39:18 np0005629333 podman[330355]: 2026-02-25 12:39:18.514222326 +0000 UTC m=+0.020669344 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:39:18 np0005629333 podman[330355]: 2026-02-25 12:39:18.618927411 +0000 UTC m=+0.125374419 container init d9155e6a87d8b9d1f5ab3b292701a86218715cede490ca2fd58131a4527eb3f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 07:39:18 np0005629333 podman[330355]: 2026-02-25 12:39:18.624639432 +0000 UTC m=+0.131086420 container start d9155e6a87d8b9d1f5ab3b292701a86218715cede490ca2fd58131a4527eb3f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_boyd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 07:39:18 np0005629333 podman[330355]: 2026-02-25 12:39:18.62811237 +0000 UTC m=+0.134559358 container attach d9155e6a87d8b9d1f5ab3b292701a86218715cede490ca2fd58131a4527eb3f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_boyd, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 07:39:18 np0005629333 nova_compute[244014]: 2026-02-25 12:39:18.971 244018 DEBUG nova.compute.manager [req-6da99912-f464-4042-8888-fbd7efad9ac9 req-7c2e22f8-8da5-4718-9a22-e3c0a90d0a85 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Received event network-vif-plugged-14cc64dc-7418-4171-b1f4-1e19bed07489 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:39:18 np0005629333 nova_compute[244014]: 2026-02-25 12:39:18.973 244018 DEBUG oslo_concurrency.lockutils [req-6da99912-f464-4042-8888-fbd7efad9ac9 req-7c2e22f8-8da5-4718-9a22-e3c0a90d0a85 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e25f3aed-d038-4b27-a15f-9beada2b67b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:39:18 np0005629333 nova_compute[244014]: 2026-02-25 12:39:18.973 244018 DEBUG oslo_concurrency.lockutils [req-6da99912-f464-4042-8888-fbd7efad9ac9 req-7c2e22f8-8da5-4718-9a22-e3c0a90d0a85 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e25f3aed-d038-4b27-a15f-9beada2b67b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:39:18 np0005629333 nova_compute[244014]: 2026-02-25 12:39:18.973 244018 DEBUG oslo_concurrency.lockutils [req-6da99912-f464-4042-8888-fbd7efad9ac9 req-7c2e22f8-8da5-4718-9a22-e3c0a90d0a85 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e25f3aed-d038-4b27-a15f-9beada2b67b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:39:18 np0005629333 nova_compute[244014]: 2026-02-25 12:39:18.974 244018 DEBUG nova.compute.manager [req-6da99912-f464-4042-8888-fbd7efad9ac9 req-7c2e22f8-8da5-4718-9a22-e3c0a90d0a85 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] No waiting events found dispatching network-vif-plugged-14cc64dc-7418-4171-b1f4-1e19bed07489 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:39:18 np0005629333 nova_compute[244014]: 2026-02-25 12:39:18.975 244018 WARNING nova.compute.manager [req-6da99912-f464-4042-8888-fbd7efad9ac9 req-7c2e22f8-8da5-4718-9a22-e3c0a90d0a85 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Received unexpected event network-vif-plugged-14cc64dc-7418-4171-b1f4-1e19bed07489 for instance with vm_state active and task_state None.#033[00m
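The three lockutils lines above show nova serializing per-instance event delivery behind a lock named "<instance-uuid>-events" before popping the waiter, then warning when nothing was registered for the received network-vif-plugged event. A minimal sketch of that acquire/pop/release pattern, assuming only oslo.concurrency's lockutils; the handler and the waiter registry are illustrative, not nova's actual InstanceEvents code:

```python
# Sketch of the per-instance event lock pattern visible in the log above.
# lockutils.lock() is the real oslo.concurrency API; everything else here
# (the waiters dict, the function name) is a hypothetical stand-in.
from oslo_concurrency import lockutils

def pop_instance_event(instance_uuid, event_name, waiters):
    lock_name = f'{instance_uuid}-events'  # matches the lock name logged above
    with lockutils.lock(lock_name):        # "Acquiring"/"acquired"/"released"
        # Pop a waiter registered for this event, if any; the WARNING line
        # above corresponds to the case where this returns None.
        return waiters.pop((instance_uuid, event_name), None)
```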
Feb 25 07:39:19 np0005629333 zealous_boyd[330372]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:39:19 np0005629333 zealous_boyd[330372]: --> All data devices are unavailable
Feb 25 07:39:19 np0005629333 systemd[1]: libpod-d9155e6a87d8b9d1f5ab3b292701a86218715cede490ca2fd58131a4527eb3f6.scope: Deactivated successfully.
Feb 25 07:39:19 np0005629333 podman[330355]: 2026-02-25 12:39:19.089052867 +0000 UTC m=+0.595499895 container died d9155e6a87d8b9d1f5ab3b292701a86218715cede490ca2fd58131a4527eb3f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_boyd, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 07:39:19 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d37e14a82e721a5c377c0ea8d8b55dade4478b13c08fea30c7f1f6d9669ee2b9-merged.mount: Deactivated successfully.
Feb 25 07:39:19 np0005629333 podman[330355]: 2026-02-25 12:39:19.132556095 +0000 UTC m=+0.639003123 container remove d9155e6a87d8b9d1f5ab3b292701a86218715cede490ca2fd58131a4527eb3f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_boyd, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:39:19 np0005629333 systemd[1]: libpod-conmon-d9155e6a87d8b9d1f5ab3b292701a86218715cede490ca2fd58131a4527eb3f6.scope: Deactivated successfully.
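Each quay.io/ceph/ceph helper in this stretch runs the same six-event podman lifecycle — create, init, start, attach, died, remove — inside a conmon scope that systemd then deactivates; the pattern repeats for epic_swirles, wizardly_maxwell, hardcore_maxwell, and suspicious_golick below. A hedged way to watch that stream live, assuming podman's `events` subcommand and the `Status`/`Name` fields of its JSON event output:

```python
# Hedged sketch: follow the same container lifecycle events podman journals
# above. The --filter key and the JSON field names are assumptions about
# podman's event format, not taken from this log.
import json
import subprocess

def follow_ceph_helper_events():
    proc = subprocess.Popen(
        ['podman', 'events', '--format', 'json',
         '--filter', 'image=quay.io/ceph/ceph'],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:                 # one JSON object per event
        ev = json.loads(line)
        print(ev.get('Status'), ev.get('Name'))  # e.g. "died zealous_boyd"
```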
Feb 25 07:39:19 np0005629333 podman[330468]: 2026-02-25 12:39:19.616141671 +0000 UTC m=+0.060125897 container create 4e7af13db1c8779f6b0257361fa75dbe393794042c5044e118d8e8917a6a920b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_swirles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:39:19 np0005629333 systemd[1]: Started libpod-conmon-4e7af13db1c8779f6b0257361fa75dbe393794042c5044e118d8e8917a6a920b.scope.
Feb 25 07:39:19 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:39:19 np0005629333 podman[330468]: 2026-02-25 12:39:19.589025486 +0000 UTC m=+0.033009772 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:39:19 np0005629333 podman[330468]: 2026-02-25 12:39:19.695199463 +0000 UTC m=+0.139183719 container init 4e7af13db1c8779f6b0257361fa75dbe393794042c5044e118d8e8917a6a920b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_swirles, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:39:19 np0005629333 podman[330468]: 2026-02-25 12:39:19.701459439 +0000 UTC m=+0.145443645 container start 4e7af13db1c8779f6b0257361fa75dbe393794042c5044e118d8e8917a6a920b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:39:19 np0005629333 epic_swirles[330486]: 167 167
Feb 25 07:39:19 np0005629333 podman[330468]: 2026-02-25 12:39:19.706076729 +0000 UTC m=+0.150060955 container attach 4e7af13db1c8779f6b0257361fa75dbe393794042c5044e118d8e8917a6a920b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Feb 25 07:39:19 np0005629333 systemd[1]: libpod-4e7af13db1c8779f6b0257361fa75dbe393794042c5044e118d8e8917a6a920b.scope: Deactivated successfully.
Feb 25 07:39:19 np0005629333 podman[330468]: 2026-02-25 12:39:19.706960394 +0000 UTC m=+0.150944630 container died 4e7af13db1c8779f6b0257361fa75dbe393794042c5044e118d8e8917a6a920b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_swirles, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 07:39:19 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e5bfaa8df3c0884f7648437641d338ca35ed57fcf09e763fbc06a950f75f92bc-merged.mount: Deactivated successfully.
Feb 25 07:39:19 np0005629333 podman[330468]: 2026-02-25 12:39:19.753830977 +0000 UTC m=+0.197815183 container remove 4e7af13db1c8779f6b0257361fa75dbe393794042c5044e118d8e8917a6a920b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_swirles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:39:19 np0005629333 systemd[1]: libpod-conmon-4e7af13db1c8779f6b0257361fa75dbe393794042c5044e118d8e8917a6a920b.scope: Deactivated successfully.
Feb 25 07:39:19 np0005629333 podman[330510]: 2026-02-25 12:39:19.900960049 +0000 UTC m=+0.052525563 container create ccad534afcd9a37a93cd11ef55057d038ade4044575a6ef3bec40755ff79af87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_maxwell, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:39:19 np0005629333 systemd[1]: Started libpod-conmon-ccad534afcd9a37a93cd11ef55057d038ade4044575a6ef3bec40755ff79af87.scope.
Feb 25 07:39:19 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:39:19 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c064270ac3fe9929810cc029ef9b8646a93d4a64535c0fd432f0f53e62d5517/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:39:19 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c064270ac3fe9929810cc029ef9b8646a93d4a64535c0fd432f0f53e62d5517/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:39:19 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c064270ac3fe9929810cc029ef9b8646a93d4a64535c0fd432f0f53e62d5517/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:39:19 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c064270ac3fe9929810cc029ef9b8646a93d4a64535c0fd432f0f53e62d5517/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:39:19 np0005629333 podman[330510]: 2026-02-25 12:39:19.882020065 +0000 UTC m=+0.033585569 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:39:19 np0005629333 podman[330510]: 2026-02-25 12:39:19.986614486 +0000 UTC m=+0.138180010 container init ccad534afcd9a37a93cd11ef55057d038ade4044575a6ef3bec40755ff79af87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_maxwell, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 07:39:19 np0005629333 podman[330510]: 2026-02-25 12:39:19.994440057 +0000 UTC m=+0.146005541 container start ccad534afcd9a37a93cd11ef55057d038ade4044575a6ef3bec40755ff79af87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_maxwell, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 07:39:19 np0005629333 podman[330510]: 2026-02-25 12:39:19.998366188 +0000 UTC m=+0.149931712 container attach ccad534afcd9a37a93cd11ef55057d038ade4044575a6ef3bec40755ff79af87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_maxwell, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:39:20 np0005629333 nova_compute[244014]: 2026-02-25 12:39:20.003 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]: {
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:    "0": [
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:        {
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "devices": [
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "/dev/loop3"
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            ],
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "lv_name": "ceph_lv0",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "lv_size": "21470642176",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "name": "ceph_lv0",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "tags": {
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.cluster_name": "ceph",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.crush_device_class": "",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.encrypted": "0",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.objectstore": "bluestore",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.osd_id": "0",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.type": "block",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.vdo": "0",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.with_tpm": "0"
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            },
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "type": "block",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "vg_name": "ceph_vg0"
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:        }
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:    ],
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:    "1": [
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:        {
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "devices": [
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "/dev/loop4"
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            ],
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "lv_name": "ceph_lv1",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "lv_size": "21470642176",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "name": "ceph_lv1",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "tags": {
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.cluster_name": "ceph",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.crush_device_class": "",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.encrypted": "0",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.objectstore": "bluestore",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.osd_id": "1",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.type": "block",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.vdo": "0",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.with_tpm": "0"
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            },
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "type": "block",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "vg_name": "ceph_vg1"
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:        }
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:    ],
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:    "2": [
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:        {
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "devices": [
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "/dev/loop5"
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            ],
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "lv_name": "ceph_lv2",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "lv_size": "21470642176",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "name": "ceph_lv2",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "tags": {
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.cluster_name": "ceph",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.crush_device_class": "",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.encrypted": "0",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.objectstore": "bluestore",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.osd_id": "2",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.type": "block",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.vdo": "0",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:                "ceph.with_tpm": "0"
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            },
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "type": "block",
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:            "vg_name": "ceph_vg2"
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:        }
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]:    ]
Feb 25 07:39:20 np0005629333 wizardly_maxwell[330526]: }
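The JSON block printed by wizardly_maxwell has the shape of `ceph-volume lvm list --format json` output: top-level keys are OSD ids ("0", "1", "2"), each mapping to a list of logical volumes with their backing devices and ceph.* tags. A small sketch (the helper name is ours) that reduces such a dump to an OSD-id → device map, using only field names that appear verbatim above:

```python
# Parse a "ceph-volume lvm list --format json" style dump like the one
# logged above into {osd_id: {lv_path, devices, osd_fsid}}.
import json

def osd_block_devices(lvm_list_json: str) -> dict:
    data = json.loads(lvm_list_json)
    out = {}
    for osd_id, lvs in data.items():
        for lv in lvs:
            if lv.get('type') == 'block':       # block LV carries the OSD data
                out[osd_id] = {
                    'lv_path': lv['lv_path'],    # e.g. /dev/ceph_vg0/ceph_lv0
                    'devices': lv['devices'],    # e.g. ['/dev/loop3']
                    'osd_fsid': lv['tags']['ceph.osd_fsid'],
                }
    return out
```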
Feb 25 07:39:20 np0005629333 systemd[1]: libpod-ccad534afcd9a37a93cd11ef55057d038ade4044575a6ef3bec40755ff79af87.scope: Deactivated successfully.
Feb 25 07:39:20 np0005629333 podman[330510]: 2026-02-25 12:39:20.300090102 +0000 UTC m=+0.451655646 container died ccad534afcd9a37a93cd11ef55057d038ade4044575a6ef3bec40755ff79af87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:39:20 np0005629333 systemd[1]: var-lib-containers-storage-overlay-9c064270ac3fe9929810cc029ef9b8646a93d4a64535c0fd432f0f53e62d5517-merged.mount: Deactivated successfully.
Feb 25 07:39:20 np0005629333 podman[330510]: 2026-02-25 12:39:20.352310526 +0000 UTC m=+0.503876050 container remove ccad534afcd9a37a93cd11ef55057d038ade4044575a6ef3bec40755ff79af87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 07:39:20 np0005629333 systemd[1]: libpod-conmon-ccad534afcd9a37a93cd11ef55057d038ade4044575a6ef3bec40755ff79af87.scope: Deactivated successfully.
Feb 25 07:39:20 np0005629333 nova_compute[244014]: 2026-02-25 12:39:20.490 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Updating instance_info_cache with network_info: [{"id": "14cc64dc-7418-4171-b1f4-1e19bed07489", "address": "fa:16:3e:60:e2:2c", "network": {"id": "941e8032-4eea-492d-a5e0-a9cc25d1a9cc", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-986479741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2470f8bb56548cd818113da683292d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14cc64dc-74", "ovs_interfaceid": "14cc64dc-7418-4171-b1f4-1e19bed07489", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:39:20 np0005629333 nova_compute[244014]: 2026-02-25 12:39:20.529 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-e25f3aed-d038-4b27-a15f-9beada2b67b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:39:20 np0005629333 nova_compute[244014]: 2026-02-25 12:39:20.529 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
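The network_info blob cached above is a list of VIFs, each carrying its network's subnets and their fixed IPs. An illustrative helper (not nova code) that extracts the fixed addresses from a blob with exactly that shape:

```python
# Pull fixed IPs out of a nova network_info structure like the one cached
# above; the structure is copied from the log, the helper name is ours.
def fixed_ips(network_info):
    ips = []
    for vif in network_info:
        for subnet in vif['network']['subnets']:
            ips += [ip['address'] for ip in subnet['ips']
                    if ip['type'] == 'fixed']
    return ips

# For the blob logged above this would return ['10.100.0.12'],
# the fixed address of port 14cc64dc-7418-4171-b1f4-1e19bed07489.
```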
Feb 25 07:39:20 np0005629333 nova_compute[244014]: 2026-02-25 12:39:20.531 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:39:20 np0005629333 nova_compute[244014]: 2026-02-25 12:39:20.552 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:20 np0005629333 podman[330610]: 2026-02-25 12:39:20.867845314 +0000 UTC m=+0.049493147 container create 3ba9b03dec7b891314ca77408e0fc495784d8dc33b40490affc08add9496593c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_maxwell, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:39:20 np0005629333 nova_compute[244014]: 2026-02-25 12:39:20.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:39:20 np0005629333 nova_compute[244014]: 2026-02-25 12:39:20.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:39:20 np0005629333 systemd[1]: Started libpod-conmon-3ba9b03dec7b891314ca77408e0fc495784d8dc33b40490affc08add9496593c.scope.
Feb 25 07:39:20 np0005629333 nova_compute[244014]: 2026-02-25 12:39:20.917 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:39:20 np0005629333 nova_compute[244014]: 2026-02-25 12:39:20.918 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:39:20 np0005629333 nova_compute[244014]: 2026-02-25 12:39:20.919 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:39:20 np0005629333 nova_compute[244014]: 2026-02-25 12:39:20.919 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:39:20 np0005629333 nova_compute[244014]: 2026-02-25 12:39:20.919 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:39:20 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:39:20 np0005629333 podman[330610]: 2026-02-25 12:39:20.84925501 +0000 UTC m=+0.030902883 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:39:20 np0005629333 podman[330610]: 2026-02-25 12:39:20.952998707 +0000 UTC m=+0.134646630 container init 3ba9b03dec7b891314ca77408e0fc495784d8dc33b40490affc08add9496593c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_maxwell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:39:20 np0005629333 podman[330610]: 2026-02-25 12:39:20.961309982 +0000 UTC m=+0.142957845 container start 3ba9b03dec7b891314ca77408e0fc495784d8dc33b40490affc08add9496593c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 07:39:20 np0005629333 podman[330610]: 2026-02-25 12:39:20.966177699 +0000 UTC m=+0.147825562 container attach 3ba9b03dec7b891314ca77408e0fc495784d8dc33b40490affc08add9496593c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_maxwell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 25 07:39:20 np0005629333 hardcore_maxwell[330626]: 167 167
Feb 25 07:39:20 np0005629333 systemd[1]: libpod-3ba9b03dec7b891314ca77408e0fc495784d8dc33b40490affc08add9496593c.scope: Deactivated successfully.
Feb 25 07:39:20 np0005629333 podman[330610]: 2026-02-25 12:39:20.96760862 +0000 UTC m=+0.149256463 container died 3ba9b03dec7b891314ca77408e0fc495784d8dc33b40490affc08add9496593c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_maxwell, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:39:20 np0005629333 systemd[1]: var-lib-containers-storage-overlay-358560c4a26c331605204b2980e8893ac4495c750cae63e5bf7c1a06317fb36b-merged.mount: Deactivated successfully.
Feb 25 07:39:21 np0005629333 podman[330610]: 2026-02-25 12:39:21.004628694 +0000 UTC m=+0.186276527 container remove 3ba9b03dec7b891314ca77408e0fc495784d8dc33b40490affc08add9496593c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 07:39:21 np0005629333 systemd[1]: libpod-conmon-3ba9b03dec7b891314ca77408e0fc495784d8dc33b40490affc08add9496593c.scope: Deactivated successfully.
Feb 25 07:39:21 np0005629333 podman[330666]: 2026-02-25 12:39:21.189597604 +0000 UTC m=+0.061682461 container create 2ad63788622127788fadc8401fa8360745b15febfae9798c853efccf899bb780 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_golick, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:39:21 np0005629333 systemd[1]: Started libpod-conmon-2ad63788622127788fadc8401fa8360745b15febfae9798c853efccf899bb780.scope.
Feb 25 07:39:21 np0005629333 podman[330666]: 2026-02-25 12:39:21.158995131 +0000 UTC m=+0.031080058 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:39:21 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:39:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e555e7f02c97960cf808b087980c7588722037e03e06820fa35e23134613e963/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:39:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e555e7f02c97960cf808b087980c7588722037e03e06820fa35e23134613e963/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:39:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e555e7f02c97960cf808b087980c7588722037e03e06820fa35e23134613e963/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:39:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e555e7f02c97960cf808b087980c7588722037e03e06820fa35e23134613e963/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:39:21 np0005629333 podman[330666]: 2026-02-25 12:39:21.30285776 +0000 UTC m=+0.174942627 container init 2ad63788622127788fadc8401fa8360745b15febfae9798c853efccf899bb780 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:39:21 np0005629333 podman[330666]: 2026-02-25 12:39:21.314818158 +0000 UTC m=+0.186903035 container start 2ad63788622127788fadc8401fa8360745b15febfae9798c853efccf899bb780 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 07:39:21 np0005629333 podman[330666]: 2026-02-25 12:39:21.319285734 +0000 UTC m=+0.191370611 container attach 2ad63788622127788fadc8401fa8360745b15febfae9798c853efccf899bb780 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:39:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:39:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/604528281' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:39:21 np0005629333 nova_compute[244014]: 2026-02-25 12:39:21.572 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
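The resource tracker's storage probe is visible end to end here: processutils spawns `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf`, the monitor audits the df command from client.openstack, and the call returns 0 in 0.652s. A minimal reproduction of that probe, assuming the standard `ceph df` JSON schema (a top-level `stats` object with `total_avail_bytes`):

```python
# Re-run the same probe nova logs above and report free capacity in GiB.
# processutils.execute is the helper named in the log; the JSON key names
# are assumptions about the "ceph df" schema, not taken from this log.
import json
from oslo_concurrency import processutils

def ceph_cluster_free_gib(conf='/etc/ceph/ceph.conf', user='openstack'):
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json', '--id', user, '--conf', conf)
    stats = json.loads(out)['stats']
    return stats['total_avail_bytes'] / (1024 ** 3)   # ~59 GiB per the pgmap line
```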
Feb 25 07:39:21 np0005629333 nova_compute[244014]: 2026-02-25 12:39:21.649 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:39:21 np0005629333 nova_compute[244014]: 2026-02-25 12:39:21.650 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:39:21 np0005629333 nova_compute[244014]: 2026-02-25 12:39:21.772 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:39:21 np0005629333 nova_compute[244014]: 2026-02-25 12:39:21.774 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3658MB free_disk=59.96670763287693GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 07:39:21 np0005629333 nova_compute[244014]: 2026-02-25 12:39:21.774 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:39:21 np0005629333 nova_compute[244014]: 2026-02-25 12:39:21.774 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:39:21 np0005629333 nova_compute[244014]: 2026-02-25 12:39:21.847 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance e25f3aed-d038-4b27-a15f-9beada2b67b0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:39:21 np0005629333 nova_compute[244014]: 2026-02-25 12:39:21.848 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 07:39:21 np0005629333 nova_compute[244014]: 2026-02-25 12:39:21.848 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 07:39:21 np0005629333 nova_compute[244014]: 2026-02-25 12:39:21.884 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:39:22 np0005629333 lvm[330783]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:39:22 np0005629333 lvm[330783]: VG ceph_vg1 finished
Feb 25 07:39:22 np0005629333 lvm[330780]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:39:22 np0005629333 lvm[330780]: VG ceph_vg0 finished
Feb 25 07:39:22 np0005629333 lvm[330788]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:39:22 np0005629333 lvm[330788]: VG ceph_vg2 finished
Feb 25 07:39:22 np0005629333 suspicious_golick[330684]: {}
Feb 25 07:39:22 np0005629333 podman[330666]: 2026-02-25 12:39:22.130810085 +0000 UTC m=+1.002894932 container died 2ad63788622127788fadc8401fa8360745b15febfae9798c853efccf899bb780 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:39:22 np0005629333 systemd[1]: libpod-2ad63788622127788fadc8401fa8360745b15febfae9798c853efccf899bb780.scope: Deactivated successfully.
Feb 25 07:39:22 np0005629333 systemd[1]: libpod-2ad63788622127788fadc8401fa8360745b15febfae9798c853efccf899bb780.scope: Consumed 1.046s CPU time.
Feb 25 07:39:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 72 op/s
Feb 25 07:39:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:39:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2908164515' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:39:22 np0005629333 nova_compute[244014]: 2026-02-25 12:39:22.427 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:39:22 np0005629333 nova_compute[244014]: 2026-02-25 12:39:22.431 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:39:22 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e555e7f02c97960cf808b087980c7588722037e03e06820fa35e23134613e963-merged.mount: Deactivated successfully.
Feb 25 07:39:22 np0005629333 nova_compute[244014]: 2026-02-25 12:39:22.453 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:39:22 np0005629333 nova_compute[244014]: 2026-02-25 12:39:22.487 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 07:39:22 np0005629333 nova_compute[244014]: 2026-02-25 12:39:22.488 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:39:22 np0005629333 nova_compute[244014]: 2026-02-25 12:39:22.614 244018 DEBUG oslo_concurrency.lockutils [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Acquiring lock "e25f3aed-d038-4b27-a15f-9beada2b67b0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:39:22 np0005629333 nova_compute[244014]: 2026-02-25 12:39:22.615 244018 DEBUG oslo_concurrency.lockutils [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "e25f3aed-d038-4b27-a15f-9beada2b67b0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:39:22 np0005629333 nova_compute[244014]: 2026-02-25 12:39:22.616 244018 DEBUG oslo_concurrency.lockutils [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Acquiring lock "e25f3aed-d038-4b27-a15f-9beada2b67b0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:39:22 np0005629333 nova_compute[244014]: 2026-02-25 12:39:22.617 244018 DEBUG oslo_concurrency.lockutils [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "e25f3aed-d038-4b27-a15f-9beada2b67b0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:39:22 np0005629333 nova_compute[244014]: 2026-02-25 12:39:22.617 244018 DEBUG oslo_concurrency.lockutils [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "e25f3aed-d038-4b27-a15f-9beada2b67b0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:39:22 np0005629333 nova_compute[244014]: 2026-02-25 12:39:22.620 244018 INFO nova.compute.manager [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Terminating instance#033[00m
Feb 25 07:39:22 np0005629333 nova_compute[244014]: 2026-02-25 12:39:22.622 244018 DEBUG nova.compute.manager [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:39:22 np0005629333 podman[330666]: 2026-02-25 12:39:22.819968012 +0000 UTC m=+1.692052899 container remove 2ad63788622127788fadc8401fa8360745b15febfae9798c853efccf899bb780 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_golick, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:39:22 np0005629333 kernel: tap14cc64dc-74 (unregistering): left promiscuous mode
Feb 25 07:39:22 np0005629333 nova_compute[244014]: 2026-02-25 12:39:22.872 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:22 np0005629333 NetworkManager[49836]: <info>  [1772023162.8751] device (tap14cc64dc-74): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:39:22 np0005629333 nova_compute[244014]: 2026-02-25 12:39:22.881 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:39:22Z|00962|binding|INFO|Releasing lport 14cc64dc-7418-4171-b1f4-1e19bed07489 from this chassis (sb_readonly=0)
Feb 25 07:39:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:39:22Z|00963|binding|INFO|Setting lport 14cc64dc-7418-4171-b1f4-1e19bed07489 down in Southbound
Feb 25 07:39:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:39:22Z|00964|binding|INFO|Removing iface tap14cc64dc-74 ovn-installed in OVS
Feb 25 07:39:22 np0005629333 nova_compute[244014]: 2026-02-25 12:39:22.884 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:22.892 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:e2:2c 10.100.0.12'], port_security=['fa:16:3e:60:e2:2c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'e25f3aed-d038-4b27-a15f-9beada2b67b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-941e8032-4eea-492d-a5e0-a9cc25d1a9cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b2470f8bb56548cd818113da683292d8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0bf5667d-50da-48f9-b7c1-919d155c28ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ce9da3ce-009a-4d1e-9912-d9a8180cd783, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=14cc64dc-7418-4171-b1f4-1e19bed07489) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:39:22 np0005629333 nova_compute[244014]: 2026-02-25 12:39:22.893 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:22.897 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 14cc64dc-7418-4171-b1f4-1e19bed07489 in datapath 941e8032-4eea-492d-a5e0-a9cc25d1a9cc unbound from our chassis#033[00m
Feb 25 07:39:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:22.899 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 941e8032-4eea-492d-a5e0-a9cc25d1a9cc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:39:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:22.901 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e166df69-b439-411c-940f-b914a0e2fd5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:22.902 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc namespace which is not needed anymore#033[00m
Feb 25 07:39:22 np0005629333 systemd[1]: libpod-conmon-2ad63788622127788fadc8401fa8360745b15febfae9798c853efccf899bb780.scope: Deactivated successfully.
Feb 25 07:39:22 np0005629333 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000060.scope: Deactivated successfully.
Feb 25 07:39:22 np0005629333 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000060.scope: Consumed 6.043s CPU time.
Feb 25 07:39:22 np0005629333 systemd-machined[210048]: Machine qemu-123-instance-00000060 terminated.
Feb 25 07:39:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:39:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:39:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:39:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:39:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:39:23 np0005629333 neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc[330158]: [NOTICE]   (330186) : haproxy version is 2.8.14-c23fe91
Feb 25 07:39:23 np0005629333 neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc[330158]: [NOTICE]   (330186) : path to executable is /usr/sbin/haproxy
Feb 25 07:39:23 np0005629333 neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc[330158]: [WARNING]  (330186) : Exiting Master process...
Feb 25 07:39:23 np0005629333 neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc[330158]: [WARNING]  (330186) : Exiting Master process...
Feb 25 07:39:23 np0005629333 neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc[330158]: [ALERT]    (330186) : Current worker (330194) exited with code 143 (Terminated)
Feb 25 07:39:23 np0005629333 neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc[330158]: [WARNING]  (330186) : All workers exited. Exiting... (0)
Feb 25 07:39:23 np0005629333 nova_compute[244014]: 2026-02-25 12:39:23.044 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:23 np0005629333 systemd[1]: libpod-3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d.scope: Deactivated successfully.
Feb 25 07:39:23 np0005629333 conmon[330158]: conmon 3ea5b341c07a411098b6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d.scope/container/memory.events
Feb 25 07:39:23 np0005629333 nova_compute[244014]: 2026-02-25 12:39:23.047 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:23 np0005629333 podman[330826]: 2026-02-25 12:39:23.051541257 +0000 UTC m=+0.058469021 container died 3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:39:23 np0005629333 nova_compute[244014]: 2026-02-25 12:39:23.055 244018 INFO nova.virt.libvirt.driver [-] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Instance destroyed successfully.#033[00m
Feb 25 07:39:23 np0005629333 nova_compute[244014]: 2026-02-25 12:39:23.055 244018 DEBUG nova.objects.instance [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lazy-loading 'resources' on Instance uuid e25f3aed-d038-4b27-a15f-9beada2b67b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:39:23 np0005629333 nova_compute[244014]: 2026-02-25 12:39:23.070 244018 DEBUG nova.virt.libvirt.vif [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:39:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-99198811',display_name='tempest-ServerMetadataNegativeTestJSON-server-99198811',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-99198811',id=96,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:39:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b2470f8bb56548cd818113da683292d8',ramdisk_id='',reservation_id='r-8npm3urm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1634701035',owner_user_name='tempest-ServerMetadataNegativeTestJSON-1634701035-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:39:17Z,user_data=None,user_id='0c0513e28db44d81aa02994a8543b844',uuid=e25f3aed-d038-4b27-a15f-9beada2b67b0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "14cc64dc-7418-4171-b1f4-1e19bed07489", "address": "fa:16:3e:60:e2:2c", "network": {"id": "941e8032-4eea-492d-a5e0-a9cc25d1a9cc", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-986479741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2470f8bb56548cd818113da683292d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14cc64dc-74", "ovs_interfaceid": "14cc64dc-7418-4171-b1f4-1e19bed07489", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:39:23 np0005629333 nova_compute[244014]: 2026-02-25 12:39:23.072 244018 DEBUG nova.network.os_vif_util [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Converting VIF {"id": "14cc64dc-7418-4171-b1f4-1e19bed07489", "address": "fa:16:3e:60:e2:2c", "network": {"id": "941e8032-4eea-492d-a5e0-a9cc25d1a9cc", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-986479741-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b2470f8bb56548cd818113da683292d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14cc64dc-74", "ovs_interfaceid": "14cc64dc-7418-4171-b1f4-1e19bed07489", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:39:23 np0005629333 nova_compute[244014]: 2026-02-25 12:39:23.072 244018 DEBUG nova.network.os_vif_util [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:60:e2:2c,bridge_name='br-int',has_traffic_filtering=True,id=14cc64dc-7418-4171-b1f4-1e19bed07489,network=Network(941e8032-4eea-492d-a5e0-a9cc25d1a9cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14cc64dc-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:39:23 np0005629333 nova_compute[244014]: 2026-02-25 12:39:23.073 244018 DEBUG os_vif [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:e2:2c,bridge_name='br-int',has_traffic_filtering=True,id=14cc64dc-7418-4171-b1f4-1e19bed07489,network=Network(941e8032-4eea-492d-a5e0-a9cc25d1a9cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14cc64dc-74') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:39:23 np0005629333 nova_compute[244014]: 2026-02-25 12:39:23.075 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:23 np0005629333 nova_compute[244014]: 2026-02-25 12:39:23.075 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14cc64dc-74, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:39:23 np0005629333 nova_compute[244014]: 2026-02-25 12:39:23.078 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:23 np0005629333 nova_compute[244014]: 2026-02-25 12:39:23.081 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:39:23 np0005629333 nova_compute[244014]: 2026-02-25 12:39:23.083 244018 INFO os_vif [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:60:e2:2c,bridge_name='br-int',has_traffic_filtering=True,id=14cc64dc-7418-4171-b1f4-1e19bed07489,network=Network(941e8032-4eea-492d-a5e0-a9cc25d1a9cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14cc64dc-74')#033[00m
Feb 25 07:39:23 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d-userdata-shm.mount: Deactivated successfully.
Feb 25 07:39:23 np0005629333 systemd[1]: var-lib-containers-storage-overlay-50f32400978b76bc1f139aa47fa194c73ab872f9f6624b37050a90a5820e6f2f-merged.mount: Deactivated successfully.
Feb 25 07:39:23 np0005629333 podman[330826]: 2026-02-25 12:39:23.109177214 +0000 UTC m=+0.116104988 container cleanup 3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 25 07:39:23 np0005629333 systemd[1]: libpod-conmon-3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d.scope: Deactivated successfully.
Feb 25 07:39:23 np0005629333 podman[330905]: 2026-02-25 12:39:23.177270545 +0000 UTC m=+0.045974678 container remove 3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:39:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:23.182 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[013eb1c2-c4be-493c-9e96-4cbb4ad315de]: (4, ('Wed Feb 25 12:39:22 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc (3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d)\n3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d\nWed Feb 25 12:39:23 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc (3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d)\n3ea5b341c07a411098b6987638b1736a127296def506404ca7b8badf0d6f950d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:23.184 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ee78dcbc-debd-4eac-98e6-9f99d03c925b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:23.185 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap941e8032-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:39:23 np0005629333 nova_compute[244014]: 2026-02-25 12:39:23.187 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:23 np0005629333 kernel: tap941e8032-40: left promiscuous mode
Feb 25 07:39:23 np0005629333 nova_compute[244014]: 2026-02-25 12:39:23.194 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:23.197 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ef6a4d8f-2822-4399-b88d-69d4ca41e95f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:23.214 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ef7bf3c5-3ae4-44f4-924e-684f5e387f08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:23.215 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1b4852dd-7b16-4db9-8257-bfe8339e0eff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:23.228 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dc023880-d5db-4ed3-8fed-089e09d9fd15]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 512603, 'reachable_time': 27742, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330923, 'error': None, 'target': 'ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:23 np0005629333 systemd[1]: run-netns-ovnmeta\x2d941e8032\x2d4eea\x2d492d\x2da5e0\x2da9cc25d1a9cc.mount: Deactivated successfully.
Feb 25 07:39:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:23.232 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-941e8032-4eea-492d-a5e0-a9cc25d1a9cc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:39:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:23.232 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[c8cc59ac-045e-467a-a1ae-2ed829e9c819]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:23 np0005629333 nova_compute[244014]: 2026-02-25 12:39:23.488 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:39:23 np0005629333 nova_compute[244014]: 2026-02-25 12:39:23.488 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:39:23 np0005629333 nova_compute[244014]: 2026-02-25 12:39:23.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:39:24 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:39:24 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:39:24 np0005629333 nova_compute[244014]: 2026-02-25 12:39:24.057 244018 INFO nova.virt.libvirt.driver [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Deleting instance files /var/lib/nova/instances/e25f3aed-d038-4b27-a15f-9beada2b67b0_del#033[00m
Feb 25 07:39:24 np0005629333 nova_compute[244014]: 2026-02-25 12:39:24.058 244018 INFO nova.virt.libvirt.driver [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Deletion of /var/lib/nova/instances/e25f3aed-d038-4b27-a15f-9beada2b67b0_del complete#033[00m
Feb 25 07:39:24 np0005629333 nova_compute[244014]: 2026-02-25 12:39:24.132 244018 INFO nova.compute.manager [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Took 1.51 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:39:24 np0005629333 nova_compute[244014]: 2026-02-25 12:39:24.132 244018 DEBUG oslo.service.loopingcall [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:39:24 np0005629333 nova_compute[244014]: 2026-02-25 12:39:24.133 244018 DEBUG nova.compute.manager [-] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:39:24 np0005629333 nova_compute[244014]: 2026-02-25 12:39:24.133 244018 DEBUG nova.network.neutron [-] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:39:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 849 KiB/s wr, 75 op/s
Feb 25 07:39:24 np0005629333 nova_compute[244014]: 2026-02-25 12:39:24.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:39:25 np0005629333 nova_compute[244014]: 2026-02-25 12:39:25.005 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:25 np0005629333 nova_compute[244014]: 2026-02-25 12:39:25.174 244018 DEBUG nova.network.neutron [-] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:39:25 np0005629333 nova_compute[244014]: 2026-02-25 12:39:25.199 244018 INFO nova.compute.manager [-] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Took 1.07 seconds to deallocate network for instance.#033[00m
Feb 25 07:39:25 np0005629333 nova_compute[244014]: 2026-02-25 12:39:25.246 244018 DEBUG oslo_concurrency.lockutils [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:39:25 np0005629333 nova_compute[244014]: 2026-02-25 12:39:25.248 244018 DEBUG oslo_concurrency.lockutils [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:39:25 np0005629333 nova_compute[244014]: 2026-02-25 12:39:25.294 244018 DEBUG oslo_concurrency.processutils [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:39:25 np0005629333 nova_compute[244014]: 2026-02-25 12:39:25.338 244018 DEBUG nova.compute.manager [req-ae228d59-63e5-4610-8165-e50a2f78812f req-93d8e484-1263-488d-89c2-391f8aa477a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Received event network-vif-deleted-14cc64dc-7418-4171-b1f4-1e19bed07489 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:39:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:39:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1411513205' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:39:25 np0005629333 nova_compute[244014]: 2026-02-25 12:39:25.936 244018 DEBUG oslo_concurrency.processutils [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:39:25 np0005629333 nova_compute[244014]: 2026-02-25 12:39:25.945 244018 DEBUG nova.compute.provider_tree [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:39:25 np0005629333 nova_compute[244014]: 2026-02-25 12:39:25.983 244018 DEBUG nova.scheduler.client.report [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:39:26 np0005629333 nova_compute[244014]: 2026-02-25 12:39:26.013 244018 DEBUG oslo_concurrency.lockutils [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:39:26 np0005629333 nova_compute[244014]: 2026-02-25 12:39:26.040 244018 INFO nova.scheduler.client.report [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Deleted allocations for instance e25f3aed-d038-4b27-a15f-9beada2b67b0#033[00m
Feb 25 07:39:26 np0005629333 nova_compute[244014]: 2026-02-25 12:39:26.107 244018 DEBUG oslo_concurrency.lockutils [None req-adfd1b24-fc23-4061-b634-d67dcabfd558 0c0513e28db44d81aa02994a8543b844 b2470f8bb56548cd818113da683292d8 - - default default] Lock "e25f3aed-d038-4b27-a15f-9beada2b67b0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.492s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:39:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 07:39:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:39:28 np0005629333 nova_compute[244014]: 2026-02-25 12:39:28.080 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Feb 25 07:39:28 np0005629333 nova_compute[244014]: 2026-02-25 12:39:28.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:39:28 np0005629333 nova_compute[244014]: 2026-02-25 12:39:28.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 07:39:29 np0005629333 nova_compute[244014]: 2026-02-25 12:39:29.604 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:30 np0005629333 nova_compute[244014]: 2026-02-25 12:39:30.007 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 92 op/s
Feb 25 07:39:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:39:30
Feb 25 07:39:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:39:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:39:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'images', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'backups']
Feb 25 07:39:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:39:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:39:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:39:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:39:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:39:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:39:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:39:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:39:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:39:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:39:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:39:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:39:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:39:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:39:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:39:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:39:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:39:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 92 op/s
Feb 25 07:39:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:39:33 np0005629333 nova_compute[244014]: 2026-02-25 12:39:33.083 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:33 np0005629333 nova_compute[244014]: 2026-02-25 12:39:33.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:39:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:34.134 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:39:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:34.135 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 25 07:39:34 np0005629333 nova_compute[244014]: 2026-02-25 12:39:34.135 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1739: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 864 KiB/s rd, 1.2 KiB/s wr, 54 op/s
Feb 25 07:39:35 np0005629333 nova_compute[244014]: 2026-02-25 12:39:35.009 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Feb 25 07:39:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:39:38 np0005629333 nova_compute[244014]: 2026-02-25 12:39:38.053 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023163.052404, e25f3aed-d038-4b27-a15f-9beada2b67b0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:39:38 np0005629333 nova_compute[244014]: 2026-02-25 12:39:38.054 244018 INFO nova.compute.manager [-] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:39:38 np0005629333 nova_compute[244014]: 2026-02-25 12:39:38.080 244018 DEBUG nova.compute.manager [None req-45d8dab6-73f1-48db-86e6-4e3a9c3b6db2 - - - - - -] [instance: e25f3aed-d038-4b27-a15f-9beada2b67b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:39:38 np0005629333 nova_compute[244014]: 2026-02-25 12:39:38.087 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1741: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Feb 25 07:39:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:39.137 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:39:40 np0005629333 nova_compute[244014]: 2026-02-25 12:39:40.011 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1742: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:39:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Feb 25 07:39:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Feb 25 07:39:41 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Feb 25 07:39:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 9.9 KiB/s rd, 1.4 KiB/s wr, 13 op/s
Feb 25 07:39:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:39:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:39:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:39:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:39:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.1290455653175771e-05 of space, bias 1.0, pg target 0.0033871366959527314 quantized to 32 (current 32)
Feb 25 07:39:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:39:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:39:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:39:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:39:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:39:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024939042760997172 of space, bias 1.0, pg target 0.7481712828299152 quantized to 32 (current 32)
Feb 25 07:39:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:39:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.601542546428212e-07 of space, bias 4.0, pg target 0.0009121851055713854 quantized to 16 (current 16)
Feb 25 07:39:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:39:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:39:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:39:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:39:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:39:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:39:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:39:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:39:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:39:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
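The pg_autoscaler entries above are internally consistent: each "pg target" equals the pool's usage ratio times its bias times a PG budget of 300, which matches the default mon_target_pg_per_osd of 100 multiplied by this cluster's 3 OSDs. A quick check of that arithmetic (simplified; the real module then quantizes to a power of two and proposes no change when the current pg_num is acceptable, hence "quantized to 32 (current 32)" even for nearly empty pools):

    # PG budget assumed from defaults: mon_target_pg_per_osd (100) * 3 OSDs.
    PG_BUDGET = 100 * 3

    def raw_pg_target(usage_ratio, bias):
        return usage_ratio * bias * PG_BUDGET

    # Reproduces the logged values:
    assert abs(raw_pg_target(7.185749983720779e-06, 1.0)
               - 0.0021557249951162337) < 1e-12   # pool '.mgr'
    assert abs(raw_pg_target(7.601542546428212e-07, 4.0)
               - 0.0009121851055713854) < 1e-12   # pool 'cephfs.cephfs.meta'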
Feb 25 07:39:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:39:43 np0005629333 nova_compute[244014]: 2026-02-25 12:39:43.090 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Feb 25 07:39:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Feb 25 07:39:44 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Feb 25 07:39:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1746: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.9 KiB/s wr, 17 op/s
Feb 25 07:39:45 np0005629333 nova_compute[244014]: 2026-02-25 12:39:45.014 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Feb 25 07:39:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Feb 25 07:39:46 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Feb 25 07:39:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1748: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 2.5 KiB/s wr, 23 op/s
Feb 25 07:39:46 np0005629333 podman[330947]: 2026-02-25 12:39:46.746791236 +0000 UTC m=+0.079856655 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, io.buildah.version=1.43.0)
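These periodic health_status=healthy records come from podman's healthcheck timer running the '/openstack/healthcheck' test declared in the container's config_data. The same probe can be triggered by hand; a small sketch, using the container name from the entry above:

    import subprocess

    # Exit status 0 means the container's healthcheck command reported healthy.
    result = subprocess.run(['podman', 'healthcheck', 'run',
                             'ovn_metadata_agent'])
    print('healthy' if result.returncode == 0 else 'unhealthy')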
Feb 25 07:39:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:39:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1417499686' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:39:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:39:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1417499686' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 07:39:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:39:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Feb 25 07:39:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Feb 25 07:39:48 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Feb 25 07:39:48 np0005629333 nova_compute[244014]: 2026-02-25 12:39:48.092 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 5.3 KiB/s wr, 94 op/s
Feb 25 07:39:48 np0005629333 podman[330966]: 2026-02-25 12:39:48.796871769 +0000 UTC m=+0.135130305 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 07:39:48 np0005629333 nova_compute[244014]: 2026-02-25 12:39:48.803 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:39:48 np0005629333 nova_compute[244014]: 2026-02-25 12:39:48.804 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:39:48 np0005629333 nova_compute[244014]: 2026-02-25 12:39:48.821 244018 DEBUG nova.compute.manager [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:39:48 np0005629333 nova_compute[244014]: 2026-02-25 12:39:48.915 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:39:48 np0005629333 nova_compute[244014]: 2026-02-25 12:39:48.916 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
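The paired "Acquiring lock ... / Lock ... acquired :: waited Ns" entries are oslo.concurrency's lockutils instrumentation: nova serializes the whole build on the instance UUID and the resource claim on "compute_resources". A minimal sketch of the same pattern (function name illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def instance_claim():
        # runs with the named semaphore held; entry and exit produce the
        # 'acquired ... waited Ns' / '"released" ... held Ns' log lines
        pass

    # Context-manager form, as used for the per-instance build lock:
    with lockutils.lock('62d2fee1-f07f-44e3-a511-6b9bb341a3ed'):
        instance_claim()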
Feb 25 07:39:48 np0005629333 nova_compute[244014]: 2026-02-25 12:39:48.930 244018 DEBUG nova.virt.hardware [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:39:48 np0005629333 nova_compute[244014]: 2026-02-25 12:39:48.931 244018 INFO nova.compute.claims [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:39:49 np0005629333 nova_compute[244014]: 2026-02-25 12:39:49.053 244018 DEBUG oslo_concurrency.processutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:39:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:39:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1366555760' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:39:49 np0005629333 nova_compute[244014]: 2026-02-25 12:39:49.618 244018 DEBUG oslo_concurrency.processutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
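That "ceph df" subprocess is how nova's RBD image backend samples cluster capacity before claiming disk. A self-contained sketch of the same probe; the field names follow the JSON that "ceph df --format=json" emits on current Ceph releases:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])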
Feb 25 07:39:49 np0005629333 nova_compute[244014]: 2026-02-25 12:39:49.624 244018 DEBUG nova.compute.provider_tree [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:39:49 np0005629333 nova_compute[244014]: 2026-02-25 12:39:49.648 244018 DEBUG nova.scheduler.client.report [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
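The inventory dict above is what placement uses for admission control: effective capacity per resource class is (total - reserved) * allocation_ratio. Working that through for this host:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2 (DISK_GB deliberately
    # undercommitted here, allocation_ratio 0.9)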
Feb 25 07:39:49 np0005629333 nova_compute[244014]: 2026-02-25 12:39:49.673 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:39:49 np0005629333 nova_compute[244014]: 2026-02-25 12:39:49.674 244018 DEBUG nova.compute.manager [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:39:49 np0005629333 nova_compute[244014]: 2026-02-25 12:39:49.736 244018 DEBUG nova.compute.manager [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:39:49 np0005629333 nova_compute[244014]: 2026-02-25 12:39:49.737 244018 DEBUG nova.network.neutron [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:39:49 np0005629333 nova_compute[244014]: 2026-02-25 12:39:49.759 244018 INFO nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:39:49 np0005629333 nova_compute[244014]: 2026-02-25 12:39:49.785 244018 DEBUG nova.compute.manager [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:39:49 np0005629333 nova_compute[244014]: 2026-02-25 12:39:49.894 244018 DEBUG nova.compute.manager [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:39:49 np0005629333 nova_compute[244014]: 2026-02-25 12:39:49.896 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:39:49 np0005629333 nova_compute[244014]: 2026-02-25 12:39:49.896 244018 INFO nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Creating image(s)#033[00m
Feb 25 07:39:49 np0005629333 nova_compute[244014]: 2026-02-25 12:39:49.930 244018 DEBUG nova.storage.rbd_utils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:39:49 np0005629333 nova_compute[244014]: 2026-02-25 12:39:49.967 244018 DEBUG nova.storage.rbd_utils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:39:50 np0005629333 nova_compute[244014]: 2026-02-25 12:39:50.002 244018 DEBUG nova.storage.rbd_utils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:39:50 np0005629333 nova_compute[244014]: 2026-02-25 12:39:50.007 244018 DEBUG oslo_concurrency.processutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:39:50 np0005629333 nova_compute[244014]: 2026-02-25 12:39:50.055 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:50 np0005629333 nova_compute[244014]: 2026-02-25 12:39:50.121 244018 DEBUG oslo_concurrency.processutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
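The qemu-img probe above is deliberately run through oslo_concurrency.prlimit, which re-execs the child with RLIMIT_AS capped at 1 GiB and RLIMIT_CPU at 30 s so a corrupt or hostile image cannot exhaust the host while being inspected. A sketch reproducing the call, with the base-image path shortened to a placeholder:

    import json
    import subprocess

    cmd = ['/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
           '--as=1073741824', '--cpu=30', '--',
           'env', 'LC_ALL=C', 'LANG=C',
           'qemu-img', 'info', '/var/lib/nova/instances/_base/<hash>',
           '--force-share', '--output=json']
    info = json.loads(subprocess.check_output(cmd))
    print(info['format'], info['virtual-size'])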
Feb 25 07:39:50 np0005629333 nova_compute[244014]: 2026-02-25 12:39:50.123 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:39:50 np0005629333 nova_compute[244014]: 2026-02-25 12:39:50.124 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:39:50 np0005629333 nova_compute[244014]: 2026-02-25 12:39:50.125 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:39:50 np0005629333 nova_compute[244014]: 2026-02-25 12:39:50.161 244018 DEBUG nova.storage.rbd_utils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:39:50 np0005629333 nova_compute[244014]: 2026-02-25 12:39:50.166 244018 DEBUG oslo_concurrency.processutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:39:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 305 active+clean; 153 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 5.0 KiB/s wr, 91 op/s
Feb 25 07:39:50 np0005629333 nova_compute[244014]: 2026-02-25 12:39:50.196 244018 DEBUG nova.policy [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c5bc24b5f5048469cf3f701ce511bfa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '503e879cd1f44a16b9baef106ceba949', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:39:50 np0005629333 nova_compute[244014]: 2026-02-25 12:39:50.428 244018 DEBUG oslo_concurrency.processutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:39:50 np0005629333 nova_compute[244014]: 2026-02-25 12:39:50.513 244018 DEBUG nova.storage.rbd_utils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] resizing rbd image 62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
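With the RBD image backend, the root disk is created by importing the cached base image into the 'vms' pool and then growing it to the flavor's root_gb (1 GiB here, hence the resize to 1073741824 bytes). Equivalent CLI steps as a sketch; note nova performs the resize through the librbd Python binding rather than the rbd CLI:

    import subprocess

    base = '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'
    disk = '62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk'
    ceph_args = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

    subprocess.check_call(['rbd', 'import', '--pool', 'vms', base, disk,
                           '--image-format=2'] + ceph_args)
    # 1024 MiB == 1073741824 bytes; rbd sizes default to MiB.
    subprocess.check_call(['rbd', 'resize', '--pool', 'vms', disk,
                           '--size', '1024'] + ceph_args)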
Feb 25 07:39:50 np0005629333 nova_compute[244014]: 2026-02-25 12:39:50.617 244018 DEBUG nova.objects.instance [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'migration_context' on Instance uuid 62d2fee1-f07f-44e3-a511-6b9bb341a3ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:39:50 np0005629333 nova_compute[244014]: 2026-02-25 12:39:50.652 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:39:50 np0005629333 nova_compute[244014]: 2026-02-25 12:39:50.652 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Ensure instance console log exists: /var/lib/nova/instances/62d2fee1-f07f-44e3-a511-6b9bb341a3ed/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:39:50 np0005629333 nova_compute[244014]: 2026-02-25 12:39:50.652 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:39:50 np0005629333 nova_compute[244014]: 2026-02-25 12:39:50.653 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:39:50 np0005629333 nova_compute[244014]: 2026-02-25 12:39:50.653 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:39:51 np0005629333 nova_compute[244014]: 2026-02-25 12:39:51.244 244018 DEBUG nova.network.neutron [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Successfully created port: 521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:39:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1752: 305 pgs: 305 active+clean; 183 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 99 KiB/s rd, 1.8 MiB/s wr, 137 op/s
Feb 25 07:39:52 np0005629333 nova_compute[244014]: 2026-02-25 12:39:52.280 244018 DEBUG nova.network.neutron [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Successfully updated port: 521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:39:52 np0005629333 nova_compute[244014]: 2026-02-25 12:39:52.296 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "refresh_cache-62d2fee1-f07f-44e3-a511-6b9bb341a3ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:39:52 np0005629333 nova_compute[244014]: 2026-02-25 12:39:52.296 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquired lock "refresh_cache-62d2fee1-f07f-44e3-a511-6b9bb341a3ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:39:52 np0005629333 nova_compute[244014]: 2026-02-25 12:39:52.296 244018 DEBUG nova.network.neutron [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:39:52 np0005629333 nova_compute[244014]: 2026-02-25 12:39:52.439 244018 DEBUG nova.compute.manager [req-9123e8d6-e16e-4d8c-94bc-3ef48a790235 req-0e7f7a66-f6da-4e7a-b827-cd7853c308a6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Received event network-changed-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:39:52 np0005629333 nova_compute[244014]: 2026-02-25 12:39:52.440 244018 DEBUG nova.compute.manager [req-9123e8d6-e16e-4d8c-94bc-3ef48a790235 req-0e7f7a66-f6da-4e7a-b827-cd7853c308a6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Refreshing instance network info cache due to event network-changed-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:39:52 np0005629333 nova_compute[244014]: 2026-02-25 12:39:52.441 244018 DEBUG oslo_concurrency.lockutils [req-9123e8d6-e16e-4d8c-94bc-3ef48a790235 req-0e7f7a66-f6da-4e7a-b827-cd7853c308a6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-62d2fee1-f07f-44e3-a511-6b9bb341a3ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:39:52 np0005629333 nova_compute[244014]: 2026-02-25 12:39:52.443 244018 DEBUG nova.network.neutron [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:39:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:39:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Feb 25 07:39:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Feb 25 07:39:53 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Feb 25 07:39:53 np0005629333 nova_compute[244014]: 2026-02-25 12:39:53.095 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:53 np0005629333 nova_compute[244014]: 2026-02-25 12:39:53.418 244018 DEBUG nova.network.neutron [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Updating instance_info_cache with network_info: [{"id": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "address": "fa:16:3e:f6:f0:2e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521e9ad8-9e", "ovs_interfaceid": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:39:53 np0005629333 nova_compute[244014]: 2026-02-25 12:39:53.442 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Releasing lock "refresh_cache-62d2fee1-f07f-44e3-a511-6b9bb341a3ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:39:53 np0005629333 nova_compute[244014]: 2026-02-25 12:39:53.443 244018 DEBUG nova.compute.manager [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Instance network_info: |[{"id": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "address": "fa:16:3e:f6:f0:2e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521e9ad8-9e", "ovs_interfaceid": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
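The network_info blob nova caches per instance is plain JSON, so the fields an operator usually wants (tap device, MAC, fixed IPs, MTU) can be pulled out directly. A sketch over a trimmed copy of the entry above:

    vif = {
        'id': '521e9ad8-9ef3-4823-9ddb-59c3c3fe0674',
        'address': 'fa:16:3e:f6:f0:2e',
        'devname': 'tap521e9ad8-9e',
        'network': {'meta': {'mtu': 1442},
                    'subnets': [{'cidr': '10.100.0.0/28',
                                 'ips': [{'address': '10.100.0.3'}]}]},
    }
    fixed_ips = [ip['address']
                 for subnet in vif['network']['subnets']
                 for ip in subnet['ips']]
    print(vif['devname'], vif['address'], fixed_ips,
          'mtu', vif['network']['meta']['mtu'])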
Feb 25 07:39:53 np0005629333 nova_compute[244014]: 2026-02-25 12:39:53.443 244018 DEBUG oslo_concurrency.lockutils [req-9123e8d6-e16e-4d8c-94bc-3ef48a790235 req-0e7f7a66-f6da-4e7a-b827-cd7853c308a6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-62d2fee1-f07f-44e3-a511-6b9bb341a3ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:39:53 np0005629333 nova_compute[244014]: 2026-02-25 12:39:53.444 244018 DEBUG nova.network.neutron [req-9123e8d6-e16e-4d8c-94bc-3ef48a790235 req-0e7f7a66-f6da-4e7a-b827-cd7853c308a6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Refreshing network info cache for port 521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:39:53 np0005629333 nova_compute[244014]: 2026-02-25 12:39:53.447 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Start _get_guest_xml network_info=[{"id": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "address": "fa:16:3e:f6:f0:2e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521e9ad8-9e", "ovs_interfaceid": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:39:53 np0005629333 nova_compute[244014]: 2026-02-25 12:39:53.453 244018 WARNING nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:39:53 np0005629333 nova_compute[244014]: 2026-02-25 12:39:53.466 244018 DEBUG nova.virt.libvirt.host [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:39:53 np0005629333 nova_compute[244014]: 2026-02-25 12:39:53.467 244018 DEBUG nova.virt.libvirt.host [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:39:53 np0005629333 nova_compute[244014]: 2026-02-25 12:39:53.472 244018 DEBUG nova.virt.libvirt.host [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:39:53 np0005629333 nova_compute[244014]: 2026-02-25 12:39:53.473 244018 DEBUG nova.virt.libvirt.host [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:39:53 np0005629333 nova_compute[244014]: 2026-02-25 12:39:53.473 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:39:53 np0005629333 nova_compute[244014]: 2026-02-25 12:39:53.474 244018 DEBUG nova.virt.hardware [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:39:53 np0005629333 nova_compute[244014]: 2026-02-25 12:39:53.475 244018 DEBUG nova.virt.hardware [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:39:53 np0005629333 nova_compute[244014]: 2026-02-25 12:39:53.475 244018 DEBUG nova.virt.hardware [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:39:53 np0005629333 nova_compute[244014]: 2026-02-25 12:39:53.476 244018 DEBUG nova.virt.hardware [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:39:53 np0005629333 nova_compute[244014]: 2026-02-25 12:39:53.476 244018 DEBUG nova.virt.hardware [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:39:53 np0005629333 nova_compute[244014]: 2026-02-25 12:39:53.477 244018 DEBUG nova.virt.hardware [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:39:53 np0005629333 nova_compute[244014]: 2026-02-25 12:39:53.477 244018 DEBUG nova.virt.hardware [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:39:53 np0005629333 nova_compute[244014]: 2026-02-25 12:39:53.478 244018 DEBUG nova.virt.hardware [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:39:53 np0005629333 nova_compute[244014]: 2026-02-25 12:39:53.479 244018 DEBUG nova.virt.hardware [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:39:53 np0005629333 nova_compute[244014]: 2026-02-25 12:39:53.479 244018 DEBUG nova.virt.hardware [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:39:53 np0005629333 nova_compute[244014]: 2026-02-25 12:39:53.480 244018 DEBUG nova.virt.hardware [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
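The topology walk above ends in the only factorization of one vCPU: 1 socket, 1 core, 1 thread. With neither flavor nor image constraining the topology, nova.virt.hardware enumerates every (sockets, cores, threads) triple whose product equals the vCPU count, within the 65536 limits logged. A simplified enumeration of that search (nova's real code additionally orders the candidates by preference):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield (sockets, cores, threads) with sockets*cores*threads == vcpus."""
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            per_socket = vcpus // sockets
            for cores in range(1, min(per_socket, max_cores) + 1):
                if per_socket % cores:
                    continue
                threads = per_socket // cores
                if threads <= max_threads:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], as logged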
Feb 25 07:39:53 np0005629333 nova_compute[244014]: 2026-02-25 12:39:53.485 244018 DEBUG oslo_concurrency.processutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:39:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:39:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3097274914' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.058 244018 DEBUG oslo_concurrency.processutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.097 244018 DEBUG nova.storage.rbd_utils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.103 244018 DEBUG oslo_concurrency.processutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:39:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 102 KiB/s rd, 2.7 MiB/s wr, 145 op/s
Feb 25 07:39:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:39:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/267101661' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.642 244018 DEBUG oslo_concurrency.processutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.644 244018 DEBUG nova.virt.libvirt.vif [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:39:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-825477072',display_name='tempest-₡-825477072',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--825477072',id=97,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-t20m600l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:39:49Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=62d2fee1-f07f-44e3-a511-6b9bb341a3ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "address": "fa:16:3e:f6:f0:2e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521e9ad8-9e", "ovs_interfaceid": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.645 244018 DEBUG nova.network.os_vif_util [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "address": "fa:16:3e:f6:f0:2e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521e9ad8-9e", "ovs_interfaceid": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.646 244018 DEBUG nova.network.os_vif_util [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:f0:2e,bridge_name='br-int',has_traffic_filtering=True,id=521e9ad8-9ef3-4823-9ddb-59c3c3fe0674,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521e9ad8-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.647 244018 DEBUG nova.objects.instance [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'pci_devices' on Instance uuid 62d2fee1-f07f-44e3-a511-6b9bb341a3ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.668 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:39:54 np0005629333 nova_compute[244014]:  <uuid>62d2fee1-f07f-44e3-a511-6b9bb341a3ed</uuid>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:  <name>instance-00000061</name>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <nova:name>tempest-₡-825477072</nova:name>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:39:53</nova:creationTime>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:39:54 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:        <nova:user uuid="4c5bc24b5f5048469cf3f701ce511bfa">tempest-ServersTestJSON-1472039551-project-member</nova:user>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:        <nova:project uuid="503e879cd1f44a16b9baef106ceba949">tempest-ServersTestJSON-1472039551</nova:project>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:        <nova:port uuid="521e9ad8-9ef3-4823-9ddb-59c3c3fe0674">
Feb 25 07:39:54 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <entry name="serial">62d2fee1-f07f-44e3-a511-6b9bb341a3ed</entry>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <entry name="uuid">62d2fee1-f07f-44e3-a511-6b9bb341a3ed</entry>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk">
Feb 25 07:39:54 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:39:54 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk.config">
Feb 25 07:39:54 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:39:54 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:f6:f0:2e"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <target dev="tap521e9ad8-9e"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/62d2fee1-f07f-44e3-a511-6b9bb341a3ed/console.log" append="off"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:39:54 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:39:54 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:39:54 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:39:54 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.669 244018 DEBUG nova.compute.manager [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Preparing to wait for external event network-vif-plugged-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.669 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.669 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.669 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.670 244018 DEBUG nova.virt.libvirt.vif [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:39:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-825477072',display_name='tempest-₡-825477072',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--825477072',id=97,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-t20m600l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:39:49Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=62d2fee1-f07f-44e3-a511-6b9bb341a3ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "address": "fa:16:3e:f6:f0:2e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521e9ad8-9e", "ovs_interfaceid": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.670 244018 DEBUG nova.network.os_vif_util [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "address": "fa:16:3e:f6:f0:2e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521e9ad8-9e", "ovs_interfaceid": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.671 244018 DEBUG nova.network.os_vif_util [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:f0:2e,bridge_name='br-int',has_traffic_filtering=True,id=521e9ad8-9ef3-4823-9ddb-59c3c3fe0674,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521e9ad8-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.671 244018 DEBUG os_vif [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:f0:2e,bridge_name='br-int',has_traffic_filtering=True,id=521e9ad8-9ef3-4823-9ddb-59c3c3fe0674,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521e9ad8-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.672 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.672 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.673 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.678 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.678 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap521e9ad8-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.679 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap521e9ad8-9e, col_values=(('external_ids', {'iface-id': '521e9ad8-9ef3-4823-9ddb-59c3c3fe0674', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:f0:2e', 'vm-uuid': '62d2fee1-f07f-44e3-a511-6b9bb341a3ed'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.681 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:54 np0005629333 NetworkManager[49836]: <info>  [1772023194.6823] manager: (tap521e9ad8-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/399)
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.685 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.688 244018 INFO os_vif [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:f0:2e,bridge_name='br-int',has_traffic_filtering=True,id=521e9ad8-9ef3-4823-9ddb-59c3c3fe0674,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521e9ad8-9e')#033[00m
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.760 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.760 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.760 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No VIF found with MAC fa:16:3e:f6:f0:2e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.761 244018 INFO nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Using config drive#033[00m
Feb 25 07:39:54 np0005629333 nova_compute[244014]: 2026-02-25 12:39:54.786 244018 DEBUG nova.storage.rbd_utils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:39:55 np0005629333 nova_compute[244014]: 2026-02-25 12:39:55.018 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.019 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:39:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.020 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:39:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.020 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:39:55 np0005629333 nova_compute[244014]: 2026-02-25 12:39:55.308 244018 INFO nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Creating config drive at /var/lib/nova/instances/62d2fee1-f07f-44e3-a511-6b9bb341a3ed/disk.config#033[00m
Feb 25 07:39:55 np0005629333 nova_compute[244014]: 2026-02-25 12:39:55.316 244018 DEBUG oslo_concurrency.processutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/62d2fee1-f07f-44e3-a511-6b9bb341a3ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp2yh8lbi4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:39:55 np0005629333 nova_compute[244014]: 2026-02-25 12:39:55.426 244018 DEBUG nova.network.neutron [req-9123e8d6-e16e-4d8c-94bc-3ef48a790235 req-0e7f7a66-f6da-4e7a-b827-cd7853c308a6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Updated VIF entry in instance network info cache for port 521e9ad8-9ef3-4823-9ddb-59c3c3fe0674. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:39:55 np0005629333 nova_compute[244014]: 2026-02-25 12:39:55.427 244018 DEBUG nova.network.neutron [req-9123e8d6-e16e-4d8c-94bc-3ef48a790235 req-0e7f7a66-f6da-4e7a-b827-cd7853c308a6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Updating instance_info_cache with network_info: [{"id": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "address": "fa:16:3e:f6:f0:2e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521e9ad8-9e", "ovs_interfaceid": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:39:55 np0005629333 nova_compute[244014]: 2026-02-25 12:39:55.446 244018 DEBUG oslo_concurrency.lockutils [req-9123e8d6-e16e-4d8c-94bc-3ef48a790235 req-0e7f7a66-f6da-4e7a-b827-cd7853c308a6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-62d2fee1-f07f-44e3-a511-6b9bb341a3ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:39:55 np0005629333 nova_compute[244014]: 2026-02-25 12:39:55.462 244018 DEBUG oslo_concurrency.processutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/62d2fee1-f07f-44e3-a511-6b9bb341a3ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp2yh8lbi4" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:39:55 np0005629333 nova_compute[244014]: 2026-02-25 12:39:55.499 244018 DEBUG nova.storage.rbd_utils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:39:55 np0005629333 nova_compute[244014]: 2026-02-25 12:39:55.505 244018 DEBUG oslo_concurrency.processutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/62d2fee1-f07f-44e3-a511-6b9bb341a3ed/disk.config 62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:39:55 np0005629333 nova_compute[244014]: 2026-02-25 12:39:55.585 244018 DEBUG oslo_concurrency.lockutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Acquiring lock "01328724-b95f-4b36-809a-ddc156dd0dde" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:39:55 np0005629333 nova_compute[244014]: 2026-02-25 12:39:55.586 244018 DEBUG oslo_concurrency.lockutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lock "01328724-b95f-4b36-809a-ddc156dd0dde" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:39:55 np0005629333 nova_compute[244014]: 2026-02-25 12:39:55.603 244018 DEBUG nova.compute.manager [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:39:55 np0005629333 nova_compute[244014]: 2026-02-25 12:39:55.676 244018 DEBUG oslo_concurrency.processutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/62d2fee1-f07f-44e3-a511-6b9bb341a3ed/disk.config 62d2fee1-f07f-44e3-a511-6b9bb341a3ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:39:55 np0005629333 nova_compute[244014]: 2026-02-25 12:39:55.679 244018 INFO nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Deleting local config drive /var/lib/nova/instances/62d2fee1-f07f-44e3-a511-6b9bb341a3ed/disk.config because it was imported into RBD.#033[00m
Feb 25 07:39:55 np0005629333 nova_compute[244014]: 2026-02-25 12:39:55.723 244018 DEBUG oslo_concurrency.lockutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:39:55 np0005629333 nova_compute[244014]: 2026-02-25 12:39:55.724 244018 DEBUG oslo_concurrency.lockutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:39:55 np0005629333 nova_compute[244014]: 2026-02-25 12:39:55.735 244018 DEBUG nova.virt.hardware [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:39:55 np0005629333 nova_compute[244014]: 2026-02-25 12:39:55.736 244018 INFO nova.compute.claims [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:39:55 np0005629333 kernel: tap521e9ad8-9e: entered promiscuous mode
Feb 25 07:39:55 np0005629333 NetworkManager[49836]: <info>  [1772023195.7431] manager: (tap521e9ad8-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/400)
Feb 25 07:39:55 np0005629333 nova_compute[244014]: 2026-02-25 12:39:55.744 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:55 np0005629333 ovn_controller[147040]: 2026-02-25T12:39:55Z|00965|binding|INFO|Claiming lport 521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 for this chassis.
Feb 25 07:39:55 np0005629333 ovn_controller[147040]: 2026-02-25T12:39:55Z|00966|binding|INFO|521e9ad8-9ef3-4823-9ddb-59c3c3fe0674: Claiming fa:16:3e:f6:f0:2e 10.100.0.3
Feb 25 07:39:55 np0005629333 nova_compute[244014]: 2026-02-25 12:39:55.749 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:55 np0005629333 nova_compute[244014]: 2026-02-25 12:39:55.752 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:55 np0005629333 nova_compute[244014]: 2026-02-25 12:39:55.757 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:55 np0005629333 systemd-udevd[331315]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:39:55 np0005629333 NetworkManager[49836]: <info>  [1772023195.7851] device (tap521e9ad8-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:39:55 np0005629333 NetworkManager[49836]: <info>  [1772023195.7865] device (tap521e9ad8-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:39:55 np0005629333 systemd-machined[210048]: New machine qemu-124-instance-00000061.
Feb 25 07:39:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.791 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:f0:2e 10.100.0.3'], port_security=['fa:16:3e:f6:f0:2e 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '62d2fee1-f07f-44e3-a511-6b9bb341a3ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec8bae53-fe6a-49d1-a733-f00c198be561', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503e879cd1f44a16b9baef106ceba949', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3bf34285-1a67-4c95-bb68-fd577a012f6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18f4e8da-4409-4095-9850-aaee82dd8fd1, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=521e9ad8-9ef3-4823-9ddb-59c3c3fe0674) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:39:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.792 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 in datapath ec8bae53-fe6a-49d1-a733-f00c198be561 bound to our chassis#033[00m
Feb 25 07:39:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.793 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec8bae53-fe6a-49d1-a733-f00c198be561#033[00m
Feb 25 07:39:55 np0005629333 ovn_controller[147040]: 2026-02-25T12:39:55Z|00967|binding|INFO|Setting lport 521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 ovn-installed in OVS
Feb 25 07:39:55 np0005629333 ovn_controller[147040]: 2026-02-25T12:39:55Z|00968|binding|INFO|Setting lport 521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 up in Southbound
Feb 25 07:39:55 np0005629333 nova_compute[244014]: 2026-02-25 12:39:55.799 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:55 np0005629333 systemd[1]: Started Virtual Machine qemu-124-instance-00000061.
Feb 25 07:39:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.802 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3aab6304-03f5-4446-a560-06d6883b70c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.803 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapec8bae53-f1 in ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:39:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.806 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapec8bae53-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:39:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.806 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[64b70197-9017-46be-aa03-8d5cccd33c20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.806 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0e83f127-f969-45a0-b40a-752e2961da80]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.814 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[d99b8ab7-2485-401b-a18e-75acde5c70d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.829 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[917db3d3-7a38-4adb-9c94-10e8e7ac9026]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.852 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3c81d39b-314d-4315-9320-87a6dd124ce0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:55 np0005629333 NetworkManager[49836]: <info>  [1772023195.8586] manager: (tapec8bae53-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/401)
Feb 25 07:39:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.857 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8a64af5c-2855-4f1c-ad90-8426ff1c9c83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:55 np0005629333 systemd-udevd[331319]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:39:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.886 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ed0e4ad2-ea14-498d-bb91-1da31ade15c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.890 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[53e87037-3829-4b3a-97ea-fb7e1b92bba8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:55 np0005629333 NetworkManager[49836]: <info>  [1772023195.9153] device (tapec8bae53-f0): carrier: link connected
Feb 25 07:39:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.921 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[efc5b6da-856c-40ac-a5ea-aa240aa4fb11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.945 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[19ea2ff6-bfeb-4990-bc53-d3a53f154383]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec8bae53-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a5:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516532, 'reachable_time': 18880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331350, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:55 np0005629333 nova_compute[244014]: 2026-02-25 12:39:55.952 244018 DEBUG oslo_concurrency.processutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:39:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.979 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[306d78e1-333f-452a-a972-2b919fb48c59]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4b:a500'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516532, 'tstamp': 516532}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331351, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:55.999 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fdf480fa-3b35-47dd-b856-623b42c498ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec8bae53-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a5:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516532, 'reachable_time': 18880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 331353, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:56.028 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8ea1dc5f-86fc-4e73-be5a-ec9f87364b4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:56.096 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d7ec9756-e762-44e2-ae1e-c243b23ec78e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:56.098 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8bae53-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:56.098 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:56.099 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec8bae53-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:39:56 np0005629333 kernel: tapec8bae53-f0: entered promiscuous mode
Feb 25 07:39:56 np0005629333 NetworkManager[49836]: <info>  [1772023196.1018] manager: (tapec8bae53-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/402)
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.102 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:56.103 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec8bae53-f0, col_values=(('external_ids', {'iface-id': 'e2d1eadf-baf7-4e5c-8052-6c64e8476a26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
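The three ovsdbapp commands above (DelPortCommand, AddPortCommand, DbSetCommand) are the agent re-plugging the tap: drop it from br-ex if it is there, add it to br-int, then stamp the Interface row with the OVN iface-id. A sketch of the same sequence with ovsdbapp against the local switch database; the socket path is an assumption, and the agent actually ran these as three single-command transactions, batched here only for brevity:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapec8bae53-f0', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapec8bae53-f0', may_exist=True))
        txn.add(api.db_set('Interface', 'tapec8bae53-f0',
                           ('external_ids',
                            {'iface-id': 'e2d1eadf-baf7-4e5c-8052-6c64e8476a26'})))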
Feb 25 07:39:56 np0005629333 ovn_controller[147040]: 2026-02-25T12:39:56Z|00969|binding|INFO|Releasing lport e2d1eadf-baf7-4e5c-8052-6c64e8476a26 from this chassis (sb_readonly=0)
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:56.106 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ec8bae53-fe6a-49d1-a733-f00c198be561.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ec8bae53-fe6a-49d1-a733-f00c198be561.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:56.107 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[654c1aec-0e09-425e-afb4-762eab38b5bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:56.108 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-ec8bae53-fe6a-49d1-a733-f00c198be561
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/ec8bae53-fe6a-49d1-a733-f00c198be561.pid.haproxy
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID ec8bae53-fe6a-49d1-a733-f00c198be561
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:39:56 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:39:56.109 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'env', 'PROCESS_TAG=haproxy-ec8bae53-fe6a-49d1-a733-f00c198be561', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ec8bae53-fe6a-49d1-a733-f00c198be561.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
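The config dump and the command above show the whole handoff: create_config_file writes the per-network haproxy config (bind 169.254.169.254:80, forward to the metadata unix socket, add the X-OVN-Network-ID header), then the agent execs haproxy inside the ovnmeta- namespace through rootwrap. Stripped of the rootwrap and PROCESS_TAG plumbing, the spawn reduces to this sketch, run as root:

    import subprocess

    netns = 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561'
    cfg = '/var/lib/neutron/ovn-metadata-proxy/ec8bae53-fe6a-49d1-a733-f00c198be561.conf'
    # haproxy daemonizes itself ('daemon' in the global section above), so
    # this returns as soon as the workers have forked.
    subprocess.run(['ip', 'netns', 'exec', netns, 'haproxy', '-f', cfg],
                   check=True)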
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.124 244018 DEBUG nova.compute.manager [req-fa69c7d4-1dac-42ac-ab7a-bedde1109c0e req-3dda3859-8bb1-44f1-858b-7891b654e84d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Received event network-vif-plugged-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.125 244018 DEBUG oslo_concurrency.lockutils [req-fa69c7d4-1dac-42ac-ab7a-bedde1109c0e req-3dda3859-8bb1-44f1-858b-7891b654e84d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.125 244018 DEBUG oslo_concurrency.lockutils [req-fa69c7d4-1dac-42ac-ab7a-bedde1109c0e req-3dda3859-8bb1-44f1-858b-7891b654e84d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.126 244018 DEBUG oslo_concurrency.lockutils [req-fa69c7d4-1dac-42ac-ab7a-bedde1109c0e req-3dda3859-8bb1-44f1-858b-7891b654e84d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.126 244018 DEBUG nova.compute.manager [req-fa69c7d4-1dac-42ac-ab7a-bedde1109c0e req-3dda3859-8bb1-44f1-858b-7891b654e84d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Processing event network-vif-plugged-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
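The Acquiring/acquired/released trio around _pop_event is oslo.concurrency's per-instance event lock: the '<uuid>-events' lock serializes everyone touching that instance's pending-event table. The same pattern in miniature; the lock name is taken from the log, the event store is a hypothetical stand-in for nova's InstanceEvents:

    from oslo_concurrency import lockutils

    pending = {}  # hypothetical stand-in for InstanceEvents' internal dict

    def pop_event(instance_uuid, event_name):
        with lockutils.lock(instance_uuid + '-events'):
            return pending.get(instance_uuid, {}).pop(event_name, None)

    pop_event('62d2fee1-f07f-44e3-a511-6b9bb341a3ed',
              'network-vif-plugged-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674')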
Feb 25 07:39:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 2.6 MiB/s wr, 73 op/s
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.302 244018 DEBUG nova.compute.manager [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.304 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023196.3041885, 62d2fee1-f07f-44e3-a511-6b9bb341a3ed => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.304 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] VM Started (Lifecycle Event)#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.313 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.318 244018 INFO nova.virt.libvirt.driver [-] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Instance spawned successfully.#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.319 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.362 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.366 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.367 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.368 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.369 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.369 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.370 244018 DEBUG nova.virt.libvirt.driver [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.375 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.418 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.420 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023196.3047404, 62d2fee1-f07f-44e3-a511-6b9bb341a3ed => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.420 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.449 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.455 244018 INFO nova.compute.manager [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Took 6.56 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.456 244018 DEBUG nova.compute.manager [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.457 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023196.3068895, 62d2fee1-f07f-44e3-a511-6b9bb341a3ed => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.457 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.484 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.488 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
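The Started/Paused/Resumed burst is one spawn seen through libvirt's lifecycle events: nova creates the guest paused, then starts it, and emits a LifecycleEvent for each transition while _sync_power_state reconciles against the DB (skipping while task_state is still spawning). A minimal listener for the same events with libvirt-python; a sketch only, nova's real handler lives in nova.virt.libvirt.host:

    import libvirt

    def on_lifecycle(conn, dom, event, detail, _opaque):
        # event codes: 2=STARTED, 3=SUSPENDED (logged as Paused), 4=RESUMED,
        # 5=STOPPED; 'detail' narrows the cause.
        print(dom.UUIDString(), event, detail)

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.openReadOnly('qemu:///system')
    conn.domainEventRegisterAny(None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                                on_lifecycle, None)
    while True:
        libvirt.virEventRunDefaultImpl()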
Feb 25 07:39:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:39:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3048460738' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:39:56 np0005629333 podman[331446]: 2026-02-25 12:39:56.507414767 +0000 UTC m=+0.064110940 container create 5d45eed5684205ccfa7798515ef0592efacd955a22d233c2141fca16aba6dd02 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.509 244018 DEBUG oslo_concurrency.processutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
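The 'returned: 0 in 0.558s' line is the completion half of an oslo.concurrency processutils call; the ceph-mon audit lines just before it are the same request landing on the monitor. The equivalent direct call, same command as logged:

    from oslo_concurrency import processutils

    # Returns (stdout, stderr); raises ProcessExecutionError on non-zero exit.
    out, _err = processutils.execute('ceph', 'df', '--format=json',
                                     '--id', 'openstack',
                                     '--conf', '/etc/ceph/ceph.conf')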
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.513 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.516 244018 DEBUG nova.compute.provider_tree [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.528 244018 INFO nova.compute.manager [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Took 7.65 seconds to build instance.#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.531 244018 DEBUG nova.scheduler.client.report [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
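The inventory comparison above is easy to sanity-check by hand: placement's schedulable capacity per resource class is (total - reserved) * allocation_ratio. With the logged figures:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inventory.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2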
Feb 25 07:39:56 np0005629333 systemd[1]: Started libpod-conmon-5d45eed5684205ccfa7798515ef0592efacd955a22d233c2141fca16aba6dd02.scope.
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.564 244018 DEBUG oslo_concurrency.lockutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.565 244018 DEBUG nova.compute.manager [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.569 244018 DEBUG oslo_concurrency.lockutils [None req-ee49c5c7-e49d-4216-865c-73fe566c8575 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:39:56 np0005629333 podman[331446]: 2026-02-25 12:39:56.47915836 +0000 UTC m=+0.035854553 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:39:56 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:39:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aeb1a3a8bf9efcf56fafcf5d930b23a7cd7c44dae6b62bfc44fc981eafd5921/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.609 244018 DEBUG nova.compute.manager [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:39:56 np0005629333 podman[331446]: 2026-02-25 12:39:56.609768026 +0000 UTC m=+0.166464209 container init 5d45eed5684205ccfa7798515ef0592efacd955a22d233c2141fca16aba6dd02 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.610 244018 DEBUG nova.network.neutron [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:39:56 np0005629333 podman[331446]: 2026-02-25 12:39:56.615494598 +0000 UTC m=+0.172190761 container start 5d45eed5684205ccfa7798515ef0592efacd955a22d233c2141fca16aba6dd02 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.626 244018 INFO nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:39:56 np0005629333 neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561[331464]: [NOTICE]   (331468) : New worker (331470) forked
Feb 25 07:39:56 np0005629333 neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561[331464]: [NOTICE]   (331468) : Loading success.
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.646 244018 DEBUG nova.compute.manager [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.744 244018 DEBUG nova.compute.manager [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.745 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.746 244018 INFO nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Creating image(s)#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.769 244018 DEBUG nova.storage.rbd_utils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] rbd image 01328724-b95f-4b36-809a-ddc156dd0dde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.796 244018 DEBUG nova.storage.rbd_utils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] rbd image 01328724-b95f-4b36-809a-ddc156dd0dde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.824 244018 DEBUG nova.storage.rbd_utils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] rbd image 01328724-b95f-4b36-809a-ddc156dd0dde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.829 244018 DEBUG oslo_concurrency.lockutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Acquiring lock "838576657076f409077f9f38137795a25cd06654" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:39:56 np0005629333 nova_compute[244014]: 2026-02-25 12:39:56.831 244018 DEBUG oslo_concurrency.lockutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lock "838576657076f409077f9f38137795a25cd06654" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.051 244018 DEBUG nova.network.neutron [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.051 244018 DEBUG nova.compute.manager [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.168 244018 DEBUG nova.virt.libvirt.imagebackend [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Image locations are: [{'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/d1c7c812-5951-4aad-935f-d6237846a428/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/d1c7c812-5951-4aad-935f-d6237846a428/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.207 244018 DEBUG nova.virt.libvirt.imagebackend [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Selected location: {'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/d1c7c812-5951-4aad-935f-d6237846a428/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.208 244018 DEBUG nova.storage.rbd_utils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] cloning images/d1c7c812-5951-4aad-935f-d6237846a428@snap to None/01328724-b95f-4b36-809a-ddc156dd0dde_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.299 244018 DEBUG oslo_concurrency.lockutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lock "838576657076f409077f9f38137795a25cd06654" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.418 244018 DEBUG nova.storage.rbd_utils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] resizing rbd image 01328724-b95f-4b36-809a-ddc156dd0dde_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
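The three 'does not exist' probes, the clone and the resize are nova's usual Ceph-backed root-disk path: instead of downloading the image, COW-clone the Glance snapshot, then grow the clone to the flavor's 1 GiB root disk. Roughly the same with python-rbd; the 'images'/'vms' pool names are conventional defaults and an assumption here (the log prints the destination pool as None):

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('images') as src, \
                cluster.open_ioctx('vms') as dst:
            # COW clone of images/<image-id>@snap, as in the 'cloning' line.
            rbd.RBD().clone(src, 'd1c7c812-5951-4aad-935f-d6237846a428', 'snap',
                            dst, '01328724-b95f-4b36-809a-ddc156dd0dde_disk')
            with rbd.Image(dst,
                           '01328724-b95f-4b36-809a-ddc156dd0dde_disk') as img:
                img.resize(1073741824)  # the logged 1 GiB resize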
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.478 244018 DEBUG nova.objects.instance [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lazy-loading 'migration_context' on Instance uuid 01328724-b95f-4b36-809a-ddc156dd0dde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.491 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.492 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Ensure instance console log exists: /var/lib/nova/instances/01328724-b95f-4b36-809a-ddc156dd0dde/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.492 244018 DEBUG oslo_concurrency.lockutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.493 244018 DEBUG oslo_concurrency.lockutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.493 244018 DEBUG oslo_concurrency.lockutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.495 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='3f9e4304c5653f60352b81ed1f3aa247',container_format='bare',created_at=2026-02-25T12:39:52Z,direct_url=<?>,disk_format='raw',id=d1c7c812-5951-4aad-935f-d6237846a428,min_disk=0,min_ram=0,name='tempest-image-dependency-test-1512660417',owner='fc8ee4ee0f07455b8722a80a5e837c79',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2026-02-25T12:39:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'd1c7c812-5951-4aad-935f-d6237846a428'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.499 244018 WARNING nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.505 244018 DEBUG nova.virt.libvirt.host [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.505 244018 DEBUG nova.virt.libvirt.host [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.508 244018 DEBUG nova.virt.libvirt.host [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.508 244018 DEBUG nova.virt.libvirt.host [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.509 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.510 244018 DEBUG nova.virt.hardware [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='3f9e4304c5653f60352b81ed1f3aa247',container_format='bare',created_at=2026-02-25T12:39:52Z,direct_url=<?>,disk_format='raw',id=d1c7c812-5951-4aad-935f-d6237846a428,min_disk=0,min_ram=0,name='tempest-image-dependency-test-1512660417',owner='fc8ee4ee0f07455b8722a80a5e837c79',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2026-02-25T12:39:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.510 244018 DEBUG nova.virt.hardware [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.511 244018 DEBUG nova.virt.hardware [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.511 244018 DEBUG nova.virt.hardware [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.512 244018 DEBUG nova.virt.hardware [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.512 244018 DEBUG nova.virt.hardware [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.513 244018 DEBUG nova.virt.hardware [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.513 244018 DEBUG nova.virt.hardware [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.513 244018 DEBUG nova.virt.hardware [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.514 244018 DEBUG nova.virt.hardware [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.514 244018 DEBUG nova.virt.hardware [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
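The topology walk above is nova.virt.hardware enumerating every (sockets, cores, threads) split of the vCPU count under the default 65536 limits; with one vCPU the only factorization is 1:1:1, hence the single possible topology. A simplified model of that search, not nova's actual code:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- the logged result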
Feb 25 07:39:57 np0005629333 nova_compute[244014]: 2026-02-25 12:39:57.518 244018 DEBUG oslo_concurrency.processutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4142169491' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.039932) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023198039986, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 2052, "num_deletes": 260, "total_data_size": 3184895, "memory_usage": 3227856, "flush_reason": "Manual Compaction"}
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023198053238, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 3128053, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35361, "largest_seqno": 37412, "table_properties": {"data_size": 3118835, "index_size": 5712, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19268, "raw_average_key_size": 20, "raw_value_size": 3100115, "raw_average_value_size": 3239, "num_data_blocks": 252, "num_entries": 957, "num_filter_entries": 957, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772022998, "oldest_key_time": 1772022998, "file_creation_time": 1772023198, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 13336 microseconds, and 4284 cpu microseconds.
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:39:58 np0005629333 nova_compute[244014]: 2026-02-25 12:39:58.052 244018 DEBUG oslo_concurrency.processutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.053275) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 3128053 bytes OK
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.053290) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.055250) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.055262) EVENT_LOG_v1 {"time_micros": 1772023198055258, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.055277) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 3176211, prev total WAL file size 3176211, number of live WAL files 2.
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.055887) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323539' seq:72057594037927935, type:22 .. '6C6F676D0031353133' seq:0, type:0; will stop at (end)
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(3054KB)], [77(9378KB)]
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023198055970, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 12731494, "oldest_snapshot_seqno": -1}
Feb 25 07:39:58 np0005629333 nova_compute[244014]: 2026-02-25 12:39:58.110 244018 DEBUG nova.storage.rbd_utils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] rbd image 01328724-b95f-4b36-809a-ddc156dd0dde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:39:58 np0005629333 nova_compute[244014]: 2026-02-25 12:39:58.115 244018 DEBUG oslo_concurrency.processutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6490 keys, 12596330 bytes, temperature: kUnknown
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023198139023, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 12596330, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12547995, "index_size": 31009, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16261, "raw_key_size": 162969, "raw_average_key_size": 25, "raw_value_size": 12427071, "raw_average_value_size": 1914, "num_data_blocks": 1264, "num_entries": 6490, "num_filter_entries": 6490, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772023198, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.139627) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 12596330 bytes
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.140976) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.6 rd, 151.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 9.2 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(8.1) write-amplify(4.0) OK, records in: 7026, records dropped: 536 output_compression: NoCompression
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.141006) EVENT_LOG_v1 {"time_micros": 1772023198140993, "job": 44, "event": "compaction_finished", "compaction_time_micros": 83447, "compaction_time_cpu_micros": 28636, "output_level": 6, "num_output_files": 1, "total_output_size": 12596330, "num_input_records": 7026, "num_output_records": 6490, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
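The compaction summary's amplification figures follow directly from its byte counts: JOB 44 read 3.0 MB from L0 plus 9.2 MB from L6 and wrote 12.0 MB back to L6. Reproducing the arithmetic:

    l0_in, l6_in, out = 3.0, 9.2, 12.0   # MB, from the 'compacted to' line
    print(round(out / l0_in, 1))                    # 4.0 -> write-amplify
    print(round((l0_in + l6_in + out) / l0_in, 1))  # 8.1 -> read-write-amplify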
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023198141758, "job": 44, "event": "table_file_deletion", "file_number": 79}
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023198143052, "job": 44, "event": "table_file_deletion", "file_number": 77}
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.055766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.143164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.143172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.143176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.143181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:39:58.143185) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:39:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 1015 KiB/s rd, 2.7 MiB/s wr, 154 op/s
Feb 25 07:39:58 np0005629333 nova_compute[244014]: 2026-02-25 12:39:58.223 244018 DEBUG nova.compute.manager [req-cc71d807-f821-427c-903d-45e14d97d7e6 req-b5271348-5991-4746-9fa1-9ea4d9d8c7e7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Received event network-vif-plugged-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:39:58 np0005629333 nova_compute[244014]: 2026-02-25 12:39:58.224 244018 DEBUG oslo_concurrency.lockutils [req-cc71d807-f821-427c-903d-45e14d97d7e6 req-b5271348-5991-4746-9fa1-9ea4d9d8c7e7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:39:58 np0005629333 nova_compute[244014]: 2026-02-25 12:39:58.224 244018 DEBUG oslo_concurrency.lockutils [req-cc71d807-f821-427c-903d-45e14d97d7e6 req-b5271348-5991-4746-9fa1-9ea4d9d8c7e7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:39:58 np0005629333 nova_compute[244014]: 2026-02-25 12:39:58.225 244018 DEBUG oslo_concurrency.lockutils [req-cc71d807-f821-427c-903d-45e14d97d7e6 req-b5271348-5991-4746-9fa1-9ea4d9d8c7e7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:39:58 np0005629333 nova_compute[244014]: 2026-02-25 12:39:58.226 244018 DEBUG nova.compute.manager [req-cc71d807-f821-427c-903d-45e14d97d7e6 req-b5271348-5991-4746-9fa1-9ea4d9d8c7e7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] No waiting events found dispatching network-vif-plugged-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:39:58 np0005629333 nova_compute[244014]: 2026-02-25 12:39:58.226 244018 WARNING nova.compute.manager [req-cc71d807-f821-427c-903d-45e14d97d7e6 req-b5271348-5991-4746-9fa1-9ea4d9d8c7e7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Received unexpected event network-vif-plugged-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 for instance with vm_state active and task_state None.
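
The six lines above trace one pass through nova's external-event plumbing: the neutron-sent network-vif-plugged event arrives, the per-instance event dictionary is locked, no registered waiter is found, and the event is logged as unexpected because the instance is already active. The locking pattern is plain oslo.concurrency; a minimal sketch, where the waiters registry is a stand-in for nova's internal structure:

    from oslo_concurrency import lockutils

    waiters = {}  # stand-in for nova's per-instance event registry

    def pop_instance_event(instance_uuid, event_name):
        # Same shape as the log: an inner function synchronized on
        # "<instance-uuid>-events", which produces the acquire/release lines.
        @lockutils.synchronized(f"{instance_uuid}-events")
        def _pop_event():
            return waiters.pop((instance_uuid, event_name), None)
        return _pop_event()
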
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:39:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2922528019' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:39:58 np0005629333 nova_compute[244014]: 2026-02-25 12:39:58.675 244018 DEBUG oslo_concurrency.processutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:39:58 np0005629333 nova_compute[244014]: 2026-02-25 12:39:58.678 244018 DEBUG nova.objects.instance [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lazy-loading 'pci_devices' on Instance uuid 01328724-b95f-4b36-809a-ddc156dd0dde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:39:58 np0005629333 nova_compute[244014]: 2026-02-25 12:39:58.699 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:39:58 np0005629333 nova_compute[244014]:  <uuid>01328724-b95f-4b36-809a-ddc156dd0dde</uuid>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:  <name>instance-00000062</name>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      <nova:name>instance-depend-image</nova:name>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:39:57</nova:creationTime>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:39:58 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:        <nova:user uuid="d154463d39d44a229e754c4dd30f78d0">tempest-ImageDependencyTests-1556343205-project-member</nova:user>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:        <nova:project uuid="fc8ee4ee0f07455b8722a80a5e837c79">tempest-ImageDependencyTests-1556343205</nova:project>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="d1c7c812-5951-4aad-935f-d6237846a428"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      <nova:ports/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      <entry name="serial">01328724-b95f-4b36-809a-ddc156dd0dde</entry>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      <entry name="uuid">01328724-b95f-4b36-809a-ddc156dd0dde</entry>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/01328724-b95f-4b36-809a-ddc156dd0dde_disk">
Feb 25 07:39:58 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:39:58 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/01328724-b95f-4b36-809a-ddc156dd0dde_disk.config">
Feb 25 07:39:58 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:39:58 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/01328724-b95f-4b36-809a-ddc156dd0dde/console.log" append="off"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:39:58 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:39:58 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:39:58 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:39:58 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
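
The XML block above is the domain definition nova hands to libvirt; once the guest is defined, the same document can be read back from the hypervisor. A short sketch using the libvirt Python binding, assuming it runs on the compute host with access to qemu:///system:

    import libvirt

    # UUID from the <uuid> element of the dumped domain XML above.
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByUUIDString("01328724-b95f-4b36-809a-ddc156dd0dde")
        print(dom.XMLDesc(0))  # the live counterpart of the dumped definition
    finally:
        conn.close()
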
Feb 25 07:39:58 np0005629333 nova_compute[244014]: 2026-02-25 12:39:58.762 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:39:58 np0005629333 nova_compute[244014]: 2026-02-25 12:39:58.764 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:39:58 np0005629333 nova_compute[244014]: 2026-02-25 12:39:58.765 244018 INFO nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Using config drive
Feb 25 07:39:58 np0005629333 nova_compute[244014]: 2026-02-25 12:39:58.800 244018 DEBUG nova.storage.rbd_utils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] rbd image 01328724-b95f-4b36-809a-ddc156dd0dde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:39:58 np0005629333 nova_compute[244014]: 2026-02-25 12:39:58.996 244018 INFO nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Creating config drive at /var/lib/nova/instances/01328724-b95f-4b36-809a-ddc156dd0dde/disk.config
Feb 25 07:39:59 np0005629333 nova_compute[244014]: 2026-02-25 12:39:59.003 244018 DEBUG oslo_concurrency.processutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/01328724-b95f-4b36-809a-ddc156dd0dde/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpj8v99gvx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:39:59 np0005629333 nova_compute[244014]: 2026-02-25 12:39:59.140 244018 DEBUG oslo_concurrency.processutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/01328724-b95f-4b36-809a-ddc156dd0dde/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpj8v99gvx" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:39:59 np0005629333 nova_compute[244014]: 2026-02-25 12:39:59.166 244018 DEBUG nova.storage.rbd_utils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] rbd image 01328724-b95f-4b36-809a-ddc156dd0dde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:39:59 np0005629333 nova_compute[244014]: 2026-02-25 12:39:59.171 244018 DEBUG oslo_concurrency.processutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/01328724-b95f-4b36-809a-ddc156dd0dde/disk.config 01328724-b95f-4b36-809a-ddc156dd0dde_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:39:59 np0005629333 nova_compute[244014]: 2026-02-25 12:39:59.347 244018 DEBUG oslo_concurrency.processutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/01328724-b95f-4b36-809a-ddc156dd0dde/disk.config 01328724-b95f-4b36-809a-ddc156dd0dde_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:39:59 np0005629333 nova_compute[244014]: 2026-02-25 12:39:59.348 244018 INFO nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Deleting local config drive /var/lib/nova/instances/01328724-b95f-4b36-809a-ddc156dd0dde/disk.config because it was imported into RBD.
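
Config-drive creation here is a two-step flow: build a config-2 ISO locally with mkisofs, import it into the vms pool as <uuid>_disk.config, then delete the local copy. A reconstruction of the two commands exactly as logged (the /tmp/tmpj8v99gvx staging directory was a transient tempdir):

    import subprocess

    iso = ("/var/lib/nova/instances/"
           "01328724-b95f-4b36-809a-ddc156dd0dde/disk.config")
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-publisher",
         "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
         "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmpj8v99gvx"],
        check=True)
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso,
         "01328724-b95f-4b36-809a-ddc156dd0dde_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)
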
Feb 25 07:39:59 np0005629333 systemd-machined[210048]: New machine qemu-125-instance-00000062.
Feb 25 07:39:59 np0005629333 systemd[1]: Started Virtual Machine qemu-125-instance-00000062.
Feb 25 07:39:59 np0005629333 nova_compute[244014]: 2026-02-25 12:39:59.682 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:40:00 np0005629333 nova_compute[244014]: 2026-02-25 12:40:00.021 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:40:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1758: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 968 KiB/s rd, 889 KiB/s wr, 87 op/s
Feb 25 07:40:00 np0005629333 nova_compute[244014]: 2026-02-25 12:40:00.330 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023200.3296406, 01328724-b95f-4b36-809a-ddc156dd0dde => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:40:00 np0005629333 nova_compute[244014]: 2026-02-25 12:40:00.331 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] VM Resumed (Lifecycle Event)
Feb 25 07:40:00 np0005629333 nova_compute[244014]: 2026-02-25 12:40:00.335 244018 DEBUG nova.compute.manager [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 07:40:00 np0005629333 nova_compute[244014]: 2026-02-25 12:40:00.336 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 07:40:00 np0005629333 nova_compute[244014]: 2026-02-25 12:40:00.342 244018 INFO nova.virt.libvirt.driver [-] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Instance spawned successfully.
Feb 25 07:40:00 np0005629333 nova_compute[244014]: 2026-02-25 12:40:00.343 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 07:40:00 np0005629333 nova_compute[244014]: 2026-02-25 12:40:00.357 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:40:00 np0005629333 nova_compute[244014]: 2026-02-25 12:40:00.364 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:40:00 np0005629333 nova_compute[244014]: 2026-02-25 12:40:00.368 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:40:00 np0005629333 nova_compute[244014]: 2026-02-25 12:40:00.368 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:40:00 np0005629333 nova_compute[244014]: 2026-02-25 12:40:00.369 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:40:00 np0005629333 nova_compute[244014]: 2026-02-25 12:40:00.369 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:40:00 np0005629333 nova_compute[244014]: 2026-02-25 12:40:00.370 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:40:00 np0005629333 nova_compute[244014]: 2026-02-25 12:40:00.370 244018 DEBUG nova.virt.libvirt.driver [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:40:00 np0005629333 nova_compute[244014]: 2026-02-25 12:40:00.395 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:40:00 np0005629333 nova_compute[244014]: 2026-02-25 12:40:00.396 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023200.331505, 01328724-b95f-4b36-809a-ddc156dd0dde => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:40:00 np0005629333 nova_compute[244014]: 2026-02-25 12:40:00.396 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] VM Started (Lifecycle Event)
Feb 25 07:40:00 np0005629333 nova_compute[244014]: 2026-02-25 12:40:00.419 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:40:00 np0005629333 nova_compute[244014]: 2026-02-25 12:40:00.422 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
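
The power-state sync lines compare integers from nova.compute.power_state: the database still holds 0 while libvirt already reports 1, which is why the handler re-checks and then skips while the spawn task is pending. To the best of my reading of that module, the mapping is:

    # Values as defined in nova.compute.power_state (2 and 5 are unused).
    STATE_MAP = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                 4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}

    # The transition being synchronized above: DB 0 -> hypervisor 1.
    print(STATE_MAP[0], "->", STATE_MAP[1])
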
Feb 25 07:40:00 np0005629333 nova_compute[244014]: 2026-02-25 12:40:00.432 244018 INFO nova.compute.manager [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Took 3.69 seconds to spawn the instance on the hypervisor.
Feb 25 07:40:00 np0005629333 nova_compute[244014]: 2026-02-25 12:40:00.433 244018 DEBUG nova.compute.manager [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:40:00 np0005629333 nova_compute[244014]: 2026-02-25 12:40:00.445 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:40:00 np0005629333 nova_compute[244014]: 2026-02-25 12:40:00.523 244018 INFO nova.compute.manager [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Took 4.84 seconds to build instance.
Feb 25 07:40:00 np0005629333 nova_compute[244014]: 2026-02-25 12:40:00.566 244018 DEBUG oslo_concurrency.lockutils [None req-4a6273de-65b2-4134-8a81-8507cdb3e339 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lock "01328724-b95f-4b36-809a-ddc156dd0dde" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:40:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:40:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:40:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:40:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:40:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:40:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:40:01 np0005629333 nova_compute[244014]: 2026-02-25 12:40:01.845 244018 DEBUG nova.compute.manager [None req-e578cf97-a90c-4bb7-a904-dba1d71bc302 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:40:01 np0005629333 nova_compute[244014]: 2026-02-25 12:40:01.891 244018 INFO nova.compute.manager [None req-e578cf97-a90c-4bb7-a904-dba1d71bc302 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] instance snapshotting
Feb 25 07:40:02 np0005629333 nova_compute[244014]: 2026-02-25 12:40:02.190 244018 INFO nova.virt.libvirt.driver [None req-e578cf97-a90c-4bb7-a904-dba1d71bc302 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Beginning live snapshot process
Feb 25 07:40:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 19 KiB/s wr, 162 op/s
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.259932) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023202259977, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 300, "num_deletes": 251, "total_data_size": 99923, "memory_usage": 107016, "flush_reason": "Manual Compaction"}
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023202266033, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 99270, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37413, "largest_seqno": 37712, "table_properties": {"data_size": 97298, "index_size": 200, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5079, "raw_average_key_size": 18, "raw_value_size": 93422, "raw_average_value_size": 338, "num_data_blocks": 9, "num_entries": 276, "num_filter_entries": 276, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772023198, "oldest_key_time": 1772023198, "file_creation_time": 1772023202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 6151 microseconds, and 963 cpu microseconds.
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.266082) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 99270 bytes OK
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.266101) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.268179) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.268199) EVENT_LOG_v1 {"time_micros": 1772023202268192, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.268218) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 97740, prev total WAL file size 97740, number of live WAL files 2.
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.268583) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
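
The compaction bounds in the line above are hex-encoded store keys; decoding them shows the monitor is compacting its paxos transaction range (presumably versions 3263 through 3515, which it has just trimmed):

    lo = bytes.fromhex("7061786F730033323633")
    hi = bytes.fromhex("7061786F730033353135")
    print(lo, hi)  # b'paxos\x003263' b'paxos\x003515'
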
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(96KB)], [80(12MB)]
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023202268611, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 12695600, "oldest_snapshot_seqno": -1}
Feb 25 07:40:02 np0005629333 nova_compute[244014]: 2026-02-25 12:40:02.348 244018 DEBUG nova.storage.rbd_utils [None req-e578cf97-a90c-4bb7-a904-dba1d71bc302 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] creating snapshot(0d009d3b9cf242088f2a76c7db6396f9) on rbd image(01328724-b95f-4b36-809a-ddc156dd0dde_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6257 keys, 11090288 bytes, temperature: kUnknown
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023202362272, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 11090288, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11044803, "index_size": 28735, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15685, "raw_key_size": 158829, "raw_average_key_size": 25, "raw_value_size": 10929226, "raw_average_value_size": 1746, "num_data_blocks": 1160, "num_entries": 6257, "num_filter_entries": 6257, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772023202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.362667) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 11090288 bytes
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.365059) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.5 rd, 118.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 12.0 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(239.6) write-amplify(111.7) OK, records in: 6766, records dropped: 509 output_compression: NoCompression
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.365076) EVENT_LOG_v1 {"time_micros": 1772023202365068, "job": 46, "event": "compaction_finished", "compaction_time_micros": 93715, "compaction_time_cpu_micros": 21195, "output_level": 6, "num_output_files": 1, "total_output_size": 11090288, "num_input_records": 6766, "num_output_records": 6257, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023202365169, "job": 46, "event": "table_file_deletion", "file_number": 82}
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023202366258, "job": 46, "event": "table_file_deletion", "file_number": 80}
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.268511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.366345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.366354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.366358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.366363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:40:02 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:02.366367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
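
The amplification figures in the job-46 summary above follow directly from the byte counts in its event records: a 99270-byte L0 flush forced a rewrite of the 12.0 MB L6 file into a 10.6 MB one. Reproducing the arithmetic:

    in_l0 = 99270       # flushed L0 table #82
    in_l6 = 12596330    # pre-existing L6 table #80
    out   = 11090288    # new L6 table #83

    print(round(out / in_l0, 1))                    # 111.7  write-amplify
    print(round((in_l0 + in_l6 + out) / in_l0, 1))  # 239.6  read-write-amplify
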
Feb 25 07:40:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:40:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Feb 25 07:40:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Feb 25 07:40:03 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Feb 25 07:40:03 np0005629333 nova_compute[244014]: 2026-02-25 12:40:03.433 244018 DEBUG nova.storage.rbd_utils [None req-e578cf97-a90c-4bb7-a904-dba1d71bc302 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] cloning vms/01328724-b95f-4b36-809a-ddc156dd0dde_disk@0d009d3b9cf242088f2a76c7db6396f9 to images/bc6f423b-2af0-43cc-9bd7-113d1ebfcdf9 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 07:40:03 np0005629333 nova_compute[244014]: 2026-02-25 12:40:03.529 244018 DEBUG nova.storage.rbd_utils [None req-e578cf97-a90c-4bb7-a904-dba1d71bc302 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] flattening images/bc6f423b-2af0-43cc-9bd7-113d1ebfcdf9 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 25 07:40:03 np0005629333 nova_compute[244014]: 2026-02-25 12:40:03.929 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:40:03 np0005629333 nova_compute[244014]: 2026-02-25 12:40:03.930 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:40:03 np0005629333 nova_compute[244014]: 2026-02-25 12:40:03.957 244018 DEBUG nova.compute.manager [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:40:03 np0005629333 nova_compute[244014]: 2026-02-25 12:40:03.998 244018 DEBUG nova.storage.rbd_utils [None req-e578cf97-a90c-4bb7-a904-dba1d71bc302 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] removing snapshot(0d009d3b9cf242088f2a76c7db6396f9) on rbd image(01328724-b95f-4b36-809a-ddc156dd0dde_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Feb 25 07:40:04 np0005629333 nova_compute[244014]: 2026-02-25 12:40:04.038 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:40:04 np0005629333 nova_compute[244014]: 2026-02-25 12:40:04.039 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:40:04 np0005629333 nova_compute[244014]: 2026-02-25 12:40:04.046 244018 DEBUG nova.virt.hardware [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:40:04 np0005629333 nova_compute[244014]: 2026-02-25 12:40:04.047 244018 INFO nova.compute.claims [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:40:04 np0005629333 nova_compute[244014]: 2026-02-25 12:40:04.166 244018 DEBUG oslo_concurrency.processutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:40:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 40 KiB/s wr, 203 op/s
Feb 25 07:40:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Feb 25 07:40:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Feb 25 07:40:04 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Feb 25 07:40:04 np0005629333 nova_compute[244014]: 2026-02-25 12:40:04.684 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:40:04 np0005629333 nova_compute[244014]: 2026-02-25 12:40:04.877 244018 DEBUG nova.storage.rbd_utils [None req-e578cf97-a90c-4bb7-a904-dba1d71bc302 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] creating snapshot(snap) on rbd image(bc6f423b-2af0-43cc-9bd7-113d1ebfcdf9) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
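
The rbd_utils lines from 12:40:02.348 through 12:40:04.877 trace nova's direct RBD snapshot path: snapshot the instance disk in the vms pool, clone it into images, flatten the clone so it no longer depends on its parent, drop the temporary snapshot, and finally snapshot the clone as 'snap' (the name Glance's RBD store uses for COW cloning). A sketch of the same sequence over the python-rbd binding; the image and snapshot names are the ones from the log, and the protect/unprotect calls are an assumption here, since cloning requires a protected snapshot:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        vms = cluster.open_ioctx("vms")
        images = cluster.open_ioctx("images")
        src = "01328724-b95f-4b36-809a-ddc156dd0dde_disk"
        dst = "bc6f423b-2af0-43cc-9bd7-113d1ebfcdf9"
        snap = "0d009d3b9cf242088f2a76c7db6396f9"

        img = rbd.Image(vms, src)
        img.create_snap(snap)
        img.protect_snap(snap)          # clone sources must be protected
        rbd.RBD().clone(vms, src, snap, images, dst)

        clone = rbd.Image(images, dst)
        clone.flatten()                 # detach the clone from its parent
        clone.create_snap("snap")       # Glance RBD store convention
        clone.close()

        img.unprotect_snap(snap)        # safe once the clone is flattened
        img.remove_snap(snap)
        img.close()
        vms.close()
        images.close()
    finally:
        cluster.shutdown()
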
Feb 25 07:40:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:40:04 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3039364253' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:40:05 np0005629333 nova_compute[244014]: 2026-02-25 12:40:05.021 244018 DEBUG oslo_concurrency.processutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.855s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:40:05 np0005629333 nova_compute[244014]: 2026-02-25 12:40:05.026 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:40:05 np0005629333 nova_compute[244014]: 2026-02-25 12:40:05.033 244018 DEBUG nova.compute.provider_tree [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:40:05 np0005629333 nova_compute[244014]: 2026-02-25 12:40:05.058 244018 DEBUG nova.scheduler.client.report [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
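
The DISK_GB inventory above (total 59, reserved 1, allocation_ratio 0.9) is derived from the ceph df call issued a moment earlier. A sketch of the same probe; the top-level "stats" totals shown here are what the JSON exposes, though exactly which fields nova's RBD imagebackend consumes is an assumption:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"] // 1024**3, "GiB total,",
          stats["total_avail_bytes"] // 1024**3, "GiB avail")
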
Feb 25 07:40:05 np0005629333 nova_compute[244014]: 2026-02-25 12:40:05.089 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:40:05 np0005629333 nova_compute[244014]: 2026-02-25 12:40:05.090 244018 DEBUG nova.compute.manager [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:40:05 np0005629333 nova_compute[244014]: 2026-02-25 12:40:05.146 244018 DEBUG nova.compute.manager [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:40:05 np0005629333 nova_compute[244014]: 2026-02-25 12:40:05.147 244018 DEBUG nova.network.neutron [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:40:05 np0005629333 nova_compute[244014]: 2026-02-25 12:40:05.167 244018 INFO nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:40:05 np0005629333 nova_compute[244014]: 2026-02-25 12:40:05.186 244018 DEBUG nova.compute.manager [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:40:05 np0005629333 nova_compute[244014]: 2026-02-25 12:40:05.278 244018 DEBUG nova.compute.manager [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:40:05 np0005629333 nova_compute[244014]: 2026-02-25 12:40:05.279 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:40:05 np0005629333 nova_compute[244014]: 2026-02-25 12:40:05.280 244018 INFO nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Creating image(s)
Feb 25 07:40:05 np0005629333 nova_compute[244014]: 2026-02-25 12:40:05.299 244018 DEBUG nova.storage.rbd_utils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:40:05 np0005629333 nova_compute[244014]: 2026-02-25 12:40:05.322 244018 DEBUG nova.storage.rbd_utils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:40:05 np0005629333 nova_compute[244014]: 2026-02-25 12:40:05.343 244018 DEBUG nova.storage.rbd_utils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:40:05 np0005629333 nova_compute[244014]: 2026-02-25 12:40:05.347 244018 DEBUG oslo_concurrency.processutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:40:05 np0005629333 nova_compute[244014]: 2026-02-25 12:40:05.406 244018 DEBUG oslo_concurrency.processutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
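
The qemu-img probe above runs under oslo.concurrency's prlimit wrapper, capping the child at 1 GiB of address space and 30 s of CPU time (the --as=1073741824 --cpu=30 arguments in the logged command line). The call shape, mirroring the logged argv:

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C", "qemu-img", "info",
        "/var/lib/nova/instances/_base/"
        "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6",
        "--force-share", "--output=json",
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30))
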
Feb 25 07:40:05 np0005629333 nova_compute[244014]: 2026-02-25 12:40:05.408 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:40:05 np0005629333 nova_compute[244014]: 2026-02-25 12:40:05.408 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:40:05 np0005629333 nova_compute[244014]: 2026-02-25 12:40:05.409 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
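(The acquire/release pair around fetch_func_sync is oslo.concurrency's lock, keyed by the same base-image hash, so concurrent spawns on this host cannot fetch the same base image twice; here the image is already cached, so the lock is held for ~0 s. A minimal sketch of the pattern; the function name and lock_path are illustrative:

    # Sketch: serialize a cache fill the way nova's imagebackend does.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
                            external=True, lock_path='/var/lock/nova')
    def fetch_base_image():
        # Download/convert the base image only if it is still missing.
        pass
)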
Feb 25 07:40:05 np0005629333 nova_compute[244014]: 2026-02-25 12:40:05.458 244018 DEBUG nova.storage.rbd_utils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:40:05 np0005629333 nova_compute[244014]: 2026-02-25 12:40:05.462 244018 DEBUG oslo_concurrency.processutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:40:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Feb 25 07:40:05 np0005629333 nova_compute[244014]: 2026-02-25 12:40:05.491 244018 DEBUG nova.policy [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c5bc24b5f5048469cf3f701ce511bfa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '503e879cd1f44a16b9baef106ceba949', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
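(The failed network:attach_external_network check above is routine, not an error: the caller only holds the reader/member roles, and attaching directly to an external network is gated on admin, so nova simply skips that option. A sketch of the same style of check with oslo.policy; the is_admin:True default here is an assumption for illustration, not copied from nova's policy files:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'is_admin:True'))  # assumed default

    creds = {'roles': ['reader', 'member'], 'is_admin': False,
             'project_id': '503e879cd1f44a16b9baef106ceba949'}
    # With do_raise=False, enforce() returns False instead of raising.
    allowed = enforcer.enforce('network:attach_external_network',
                               {}, creds, do_raise=False)
    print(allowed)  # -> False for a non-admin caller, as logged above
)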
Feb 25 07:40:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Feb 25 07:40:05 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Feb 25 07:40:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 305 active+clean; 200 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 26 KiB/s wr, 165 op/s
Feb 25 07:40:06 np0005629333 nova_compute[244014]: 2026-02-25 12:40:06.481 244018 DEBUG oslo_concurrency.processutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
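(Since no 89f166ca..._disk image existed in the vms pool, nova shells out to the rbd CLI to import the cached base image as the instance's root disk; the import took about 1 s for this small CirrOS image. The logged command, replayed as a plain subprocess call:

    import subprocess

    # Same invocation as the CMD line above.
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms',
         '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
         '89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)
)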
Feb 25 07:40:06 np0005629333 nova_compute[244014]: 2026-02-25 12:40:06.580 244018 DEBUG nova.storage.rbd_utils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] resizing rbd image 89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
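(The resize grows the freshly imported image to the flavor's 1 GiB root disk (1073741824 bytes). rbd_utils does this through the python rbd bindings; a minimal sketch under the same pool, image name, and client ID:

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx,
                           '89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk') as image:
                image.resize(1 * 1024 ** 3)  # 1073741824 bytes, as logged
)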
Feb 25 07:40:07 np0005629333 nova_compute[244014]: 2026-02-25 12:40:07.262 244018 DEBUG nova.network.neutron [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Successfully created port: 34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:40:07 np0005629333 nova_compute[244014]: 2026-02-25 12:40:07.971 244018 DEBUG nova.objects.instance [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'migration_context' on Instance uuid 89f166ca-c0fc-48ec-baf9-eda20ff22f2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:40:07 np0005629333 nova_compute[244014]: 2026-02-25 12:40:07.993 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:40:07 np0005629333 nova_compute[244014]: 2026-02-25 12:40:07.994 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Ensure instance console log exists: /var/lib/nova/instances/89f166ca-c0fc-48ec-baf9-eda20ff22f2b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:40:07 np0005629333 nova_compute[244014]: 2026-02-25 12:40:07.995 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:40:07 np0005629333 nova_compute[244014]: 2026-02-25 12:40:07.995 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:40:07 np0005629333 nova_compute[244014]: 2026-02-25 12:40:07.995 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:40:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 305 active+clean; 251 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 188 KiB/s rd, 5.2 MiB/s wr, 203 op/s
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.251470) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023208251532, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 326, "num_deletes": 250, "total_data_size": 120515, "memory_usage": 126640, "flush_reason": "Manual Compaction"}
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023208304024, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 118855, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37713, "largest_seqno": 38038, "table_properties": {"data_size": 116724, "index_size": 295, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5887, "raw_average_key_size": 20, "raw_value_size": 112493, "raw_average_value_size": 389, "num_data_blocks": 13, "num_entries": 289, "num_filter_entries": 289, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772023202, "oldest_key_time": 1772023202, "file_creation_time": 1772023208, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 52673 microseconds, and 1612 cpu microseconds.
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.304143) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 118855 bytes OK
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.304178) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.428148) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.428197) EVENT_LOG_v1 {"time_micros": 1772023208428186, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.428227) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 118235, prev total WAL file size 131791, number of live WAL files 2.
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.428888) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323533' seq:72057594037927935, type:22 .. '6D6772737461740031353034' seq:0, type:0; will stop at (end)
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(116KB)], [83(10MB)]
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023208428959, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 11209143, "oldest_snapshot_seqno": -1}
Feb 25 07:40:08 np0005629333 nova_compute[244014]: 2026-02-25 12:40:08.686 244018 DEBUG nova.network.neutron [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Successfully updated port: 34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6035 keys, 7864064 bytes, temperature: kUnknown
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023208693624, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 7864064, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7824881, "index_size": 23009, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15109, "raw_key_size": 154423, "raw_average_key_size": 25, "raw_value_size": 7717882, "raw_average_value_size": 1278, "num_data_blocks": 921, "num_entries": 6035, "num_filter_entries": 6035, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772023208, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:40:08 np0005629333 nova_compute[244014]: 2026-02-25 12:40:08.715 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "refresh_cache-89f166ca-c0fc-48ec-baf9-eda20ff22f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:40:08 np0005629333 nova_compute[244014]: 2026-02-25 12:40:08.715 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquired lock "refresh_cache-89f166ca-c0fc-48ec-baf9-eda20ff22f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:40:08 np0005629333 nova_compute[244014]: 2026-02-25 12:40:08.716 244018 DEBUG nova.network.neutron [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:40:08 np0005629333 nova_compute[244014]: 2026-02-25 12:40:08.787 244018 DEBUG nova.compute.manager [req-2c51ead4-fb6b-43b6-a593-d8b363b13c5c req-7eaecba6-204b-4105-963c-00a0c7c6cd1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Received event network-changed-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:40:08 np0005629333 nova_compute[244014]: 2026-02-25 12:40:08.788 244018 DEBUG nova.compute.manager [req-2c51ead4-fb6b-43b6-a593-d8b363b13c5c req-7eaecba6-204b-4105-963c-00a0c7c6cd1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Refreshing instance network info cache due to event network-changed-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:40:08 np0005629333 nova_compute[244014]: 2026-02-25 12:40:08.788 244018 DEBUG oslo_concurrency.lockutils [req-2c51ead4-fb6b-43b6-a593-d8b363b13c5c req-7eaecba6-204b-4105-963c-00a0c7c6cd1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-89f166ca-c0fc-48ec-baf9-eda20ff22f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:40:08 np0005629333 nova_compute[244014]: 2026-02-25 12:40:08.909 244018 DEBUG nova.network.neutron [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.694026) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 7864064 bytes
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.916170) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 42.3 rd, 29.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 10.6 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(160.5) write-amplify(66.2) OK, records in: 6546, records dropped: 511 output_compression: NoCompression
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.916239) EVENT_LOG_v1 {"time_micros": 1772023208916214, "job": 48, "event": "compaction_finished", "compaction_time_micros": 264795, "compaction_time_cpu_micros": 29963, "output_level": 6, "num_output_files": 1, "total_output_size": 7864064, "num_input_records": 6546, "num_output_records": 6035, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023208916648, "job": 48, "event": "table_file_deletion", "file_number": 85}
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023208920016, "job": 48, "event": "table_file_deletion", "file_number": 83}
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.428678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.920146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.920156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.920159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.920163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:40:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:40:08.920166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
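(The rocksdb burst above is the monitor compacting its own key/value store: a memtable flush to a level-0 SST (job 47, table #85), then a manual compaction merging it into level 6 (job 48), which dropped 511 dead records and left a single ~7.5 MB table before deleting the inputs. The monitor does this on its own, but the same manual compaction can be triggered by an operator; a sketch using the admin CLI, with the mon name taken from the log:

    import subprocess

    # Ask mon.compute-0 to compact its rocksdb store, as in jobs 47/48 above.
    subprocess.run(['ceph', 'tell', 'mon.compute-0', 'compact'], check=True)
)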
Feb 25 07:40:09 np0005629333 nova_compute[244014]: 2026-02-25 12:40:09.341 244018 INFO nova.virt.libvirt.driver [None req-e578cf97-a90c-4bb7-a904-dba1d71bc302 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Snapshot image upload complete#033[00m
Feb 25 07:40:09 np0005629333 nova_compute[244014]: 2026-02-25 12:40:09.342 244018 INFO nova.compute.manager [None req-e578cf97-a90c-4bb7-a904-dba1d71bc302 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Took 7.45 seconds to snapshot the instance on the hypervisor.#033[00m
Feb 25 07:40:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:40:09Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f6:f0:2e 10.100.0.3
Feb 25 07:40:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:40:09Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f6:f0:2e 10.100.0.3
Feb 25 07:40:09 np0005629333 nova_compute[244014]: 2026-02-25 12:40:09.687 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:10 np0005629333 nova_compute[244014]: 2026-02-25 12:40:10.026 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 251 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 147 KiB/s rd, 4.5 MiB/s wr, 158 op/s
Feb 25 07:40:11 np0005629333 nova_compute[244014]: 2026-02-25 12:40:11.957 244018 DEBUG nova.network.neutron [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Updating instance_info_cache with network_info: [{"id": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "address": "fa:16:3e:bd:ca:34", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34e36f4e-0c", "ovs_interfaceid": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
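(The cache update above carries the whole VIF description as JSON: OVN-bound port 34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 on br-int, MAC fa:16:3e:bd:ca:34, fixed IP 10.100.0.12 in 10.100.0.0/28, MTU 1442, still active: false because the port has not been plugged yet. A short sketch pulling the useful fields out of such a blob; network_info_json stands for the logged list:

    import json

    network_info = json.loads(network_info_json)  # the list logged above
    for vif in network_info:
        ip = vif['network']['subnets'][0]['ips'][0]['address']
        print(vif['id'], vif['address'], ip,
              vif['network']['meta']['mtu'], vif['details']['bridge_name'])
    # -> 34e36f4e-... fa:16:3e:bd:ca:34 10.100.0.12 1442 br-int
)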
Feb 25 07:40:11 np0005629333 nova_compute[244014]: 2026-02-25 12:40:11.976 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Releasing lock "refresh_cache-89f166ca-c0fc-48ec-baf9-eda20ff22f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:40:11 np0005629333 nova_compute[244014]: 2026-02-25 12:40:11.977 244018 DEBUG nova.compute.manager [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Instance network_info: |[{"id": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "address": "fa:16:3e:bd:ca:34", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34e36f4e-0c", "ovs_interfaceid": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:40:11 np0005629333 nova_compute[244014]: 2026-02-25 12:40:11.977 244018 DEBUG oslo_concurrency.lockutils [req-2c51ead4-fb6b-43b6-a593-d8b363b13c5c req-7eaecba6-204b-4105-963c-00a0c7c6cd1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-89f166ca-c0fc-48ec-baf9-eda20ff22f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:40:11 np0005629333 nova_compute[244014]: 2026-02-25 12:40:11.977 244018 DEBUG nova.network.neutron [req-2c51ead4-fb6b-43b6-a593-d8b363b13c5c req-7eaecba6-204b-4105-963c-00a0c7c6cd1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Refreshing network info cache for port 34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:40:11 np0005629333 nova_compute[244014]: 2026-02-25 12:40:11.979 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Start _get_guest_xml network_info=[{"id": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "address": "fa:16:3e:bd:ca:34", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34e36f4e-0c", "ovs_interfaceid": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:40:11 np0005629333 nova_compute[244014]: 2026-02-25 12:40:11.983 244018 WARNING nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:40:11 np0005629333 nova_compute[244014]: 2026-02-25 12:40:11.988 244018 DEBUG nova.virt.libvirt.host [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:40:11 np0005629333 nova_compute[244014]: 2026-02-25 12:40:11.989 244018 DEBUG nova.virt.libvirt.host [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:40:11 np0005629333 nova_compute[244014]: 2026-02-25 12:40:11.993 244018 DEBUG nova.virt.libvirt.host [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:40:11 np0005629333 nova_compute[244014]: 2026-02-25 12:40:11.993 244018 DEBUG nova.virt.libvirt.host [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:40:11 np0005629333 nova_compute[244014]: 2026-02-25 12:40:11.994 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:40:11 np0005629333 nova_compute[244014]: 2026-02-25 12:40:11.994 244018 DEBUG nova.virt.hardware [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:40:11 np0005629333 nova_compute[244014]: 2026-02-25 12:40:11.994 244018 DEBUG nova.virt.hardware [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:40:11 np0005629333 nova_compute[244014]: 2026-02-25 12:40:11.994 244018 DEBUG nova.virt.hardware [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:40:11 np0005629333 nova_compute[244014]: 2026-02-25 12:40:11.995 244018 DEBUG nova.virt.hardware [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:40:11 np0005629333 nova_compute[244014]: 2026-02-25 12:40:11.995 244018 DEBUG nova.virt.hardware [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:40:11 np0005629333 nova_compute[244014]: 2026-02-25 12:40:11.995 244018 DEBUG nova.virt.hardware [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:40:11 np0005629333 nova_compute[244014]: 2026-02-25 12:40:11.995 244018 DEBUG nova.virt.hardware [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:40:11 np0005629333 nova_compute[244014]: 2026-02-25 12:40:11.995 244018 DEBUG nova.virt.hardware [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:40:11 np0005629333 nova_compute[244014]: 2026-02-25 12:40:11.995 244018 DEBUG nova.virt.hardware [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:40:11 np0005629333 nova_compute[244014]: 2026-02-25 12:40:11.995 244018 DEBUG nova.virt.hardware [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:40:11 np0005629333 nova_compute[244014]: 2026-02-25 12:40:11.996 244018 DEBUG nova.virt.hardware [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
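(The topology walk above reduces to: neither flavor nor image constrains sockets/cores/threads (all 0:0:0 preferences against 65536 limits), so for a single vCPU the only valid topology is 1 socket x 1 core x 1 thread. A hypothetical simplification of that search, not nova's actual code:

    # Enumerate (sockets, cores, threads) whose product equals the vCPU count.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            for cores in range(1, min(vcpus // sockets, max_cores) + 1):
                if (vcpus // sockets) % cores:
                    continue
                threads = vcpus // sockets // cores
                if threads <= max_threads:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # -> [(1, 1, 1)], as chosen above
)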
Feb 25 07:40:11 np0005629333 nova_compute[244014]: 2026-02-25 12:40:11.998 244018 DEBUG oslo_concurrency.processutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:40:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 269 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 409 KiB/s rd, 5.5 MiB/s wr, 219 op/s
Feb 25 07:40:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:40:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3105441243' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:40:12 np0005629333 nova_compute[244014]: 2026-02-25 12:40:12.587 244018 DEBUG oslo_concurrency.processutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:40:12 np0005629333 nova_compute[244014]: 2026-02-25 12:40:12.618 244018 DEBUG nova.storage.rbd_utils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:40:12 np0005629333 nova_compute[244014]: 2026-02-25 12:40:12.623 244018 DEBUG oslo_concurrency.processutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:40:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Feb 25 07:40:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Feb 25 07:40:12 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Feb 25 07:40:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:40:13 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/611789799' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.171 244018 DEBUG oslo_concurrency.processutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
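(The two ceph mon dump --format=json calls, one for the root disk and one for the config drive, are how nova discovers the monitor addresses it embeds in the RBD <host> elements of the guest XML below; each round trip also shows up in the monitor's audit channel. A minimal sketch of the call and the field that matters:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    mons = json.loads(out)['mons']
    # Each mon entry carries its public address, e.g. 192.168.122.100:6789.
    print([m['name'] for m in mons])
)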
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.174 244018 DEBUG nova.virt.libvirt.vif [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:40:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1192653093',display_name='tempest-ServersTestJSON-server-1192653093',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1192653093',id=99,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-3ml2upp9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:40:05Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=89f166ca-c0fc-48ec-baf9-eda20ff22f2b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "address": "fa:16:3e:bd:ca:34", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34e36f4e-0c", "ovs_interfaceid": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.175 244018 DEBUG nova.network.os_vif_util [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "address": "fa:16:3e:bd:ca:34", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34e36f4e-0c", "ovs_interfaceid": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.176 244018 DEBUG nova.network.os_vif_util [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:ca:34,bridge_name='br-int',has_traffic_filtering=True,id=34e36f4e-0cdc-4b6b-bbbc-daa0d461be44,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34e36f4e-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.178 244018 DEBUG nova.objects.instance [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'pci_devices' on Instance uuid 89f166ca-c0fc-48ec-baf9-eda20ff22f2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.196 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:40:13 np0005629333 nova_compute[244014]:  <uuid>89f166ca-c0fc-48ec-baf9-eda20ff22f2b</uuid>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:  <name>instance-00000063</name>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServersTestJSON-server-1192653093</nova:name>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:40:11</nova:creationTime>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:40:13 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:        <nova:user uuid="4c5bc24b5f5048469cf3f701ce511bfa">tempest-ServersTestJSON-1472039551-project-member</nova:user>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:        <nova:project uuid="503e879cd1f44a16b9baef106ceba949">tempest-ServersTestJSON-1472039551</nova:project>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:        <nova:port uuid="34e36f4e-0cdc-4b6b-bbbc-daa0d461be44">
Feb 25 07:40:13 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <entry name="serial">89f166ca-c0fc-48ec-baf9-eda20ff22f2b</entry>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <entry name="uuid">89f166ca-c0fc-48ec-baf9-eda20ff22f2b</entry>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk">
Feb 25 07:40:13 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:40:13 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk.config">
Feb 25 07:40:13 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:40:13 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:bd:ca:34"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <target dev="tap34e36f4e-0c"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/89f166ca-c0fc-48ec-baf9-eda20ff22f2b/console.log" append="off"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:40:13 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:40:13 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:40:13 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:40:13 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
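The domain XML above shows the shape of an RBD-backed guest: a virtio root disk and a SATA config-drive cdrom, both read from Ceph over librbd and authenticated through a libvirt secret (the <auth>/<secret> elements). The long run of pcie-root-port controllers is normal for the q35 machine type (image_hw_machine_type='q35' in the instance's system_metadata below): each one pre-allocates a PCIe slot for later hotplug. A minimal sketch of inspecting that wiring with the python3-libvirt bindings, assuming access to qemu:///system and the instance name assigned further down (instance-00000063):

    import libvirt  # python3-libvirt bindings

    conn = libvirt.open('qemu:///system')

    # The disk <auth> elements reference a pre-registered Ceph secret by UUID.
    secret = conn.secretLookupByUUIDString(
        '8ac33163-6221-5d58-9a39-8b6933fe7762')
    print(secret.usageID())  # the Ceph credential this secret stores

    # Fetch the defined domain back and compare its XML with the
    # rendering logged above.
    dom = conn.lookupByName('instance-00000063')
    print(dom.XMLDesc())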
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.198 244018 DEBUG nova.compute.manager [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Preparing to wait for external event network-vif-plugged-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.198 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.198 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.199 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
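The three lock lines above register a waiter for network-vif-plugged-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 before the port is actually plugged, so the Neutron callback cannot race past the waiter; the wait resolves at 12:40:15.897 below ("Instance event wait completed in 0 seconds"). A toy sketch of that prepare/deliver/wait shape (not Nova's code, just the pattern):

    import threading

    _events, _lock = {}, threading.Lock()

    def prepare(tag):
        # Register interest *before* triggering the action.
        with _lock:
            _events[tag] = threading.Event()

    def deliver(tag):
        # Called when the external event arrives.
        with _lock:
            ev = _events.get(tag)
        if ev is not None:
            ev.set()

    def wait(tag, timeout=300):
        with _lock:
            ev = _events[tag]
        if not ev.wait(timeout):
            raise TimeoutError(f'no {tag} received')
        with _lock:
            _events.pop(tag, None)

    tag = 'network-vif-plugged-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44'
    prepare(tag)
    deliver(tag)   # in reality sent by Neutron once OVN binds the port
    wait(tag)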
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.200 244018 DEBUG nova.virt.libvirt.vif [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:40:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1192653093',display_name='tempest-ServersTestJSON-server-1192653093',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1192653093',id=99,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-3ml2upp9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:40:05Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=89f166ca-c0fc-48ec-baf9-eda20ff22f2b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "address": "fa:16:3e:bd:ca:34", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34e36f4e-0c", "ovs_interfaceid": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.201 244018 DEBUG nova.network.os_vif_util [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "address": "fa:16:3e:bd:ca:34", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34e36f4e-0c", "ovs_interfaceid": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.202 244018 DEBUG nova.network.os_vif_util [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:ca:34,bridge_name='br-int',has_traffic_filtering=True,id=34e36f4e-0cdc-4b6b-bbbc-daa0d461be44,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34e36f4e-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.202 244018 DEBUG os_vif [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:ca:34,bridge_name='br-int',has_traffic_filtering=True,id=34e36f4e-0cdc-4b6b-bbbc-daa0d461be44,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34e36f4e-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
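os-vif's entry point takes the converted VIFOpenVSwitch object, not the raw Neutron dict. A trimmed sketch of driving the same plug by hand, assuming os-vif is installed, a local ovsdb-server is reachable, and root privileges; the object field set is abridged from the converted VIF logged above:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin used below

    net = network.Network(id='ec8bae53-fe6a-49d1-a733-f00c198be561',
                          bridge='br-int')
    port = vif.VIFOpenVSwitch(
        id='34e36f4e-0cdc-4b6b-bbbc-daa0d461be44',
        address='fa:16:3e:bd:ca:34',
        vif_name='tap34e36f4e-0c',
        bridge_name='br-int',
        network=net,
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='34e36f4e-0cdc-4b6b-bbbc-daa0d461be44'))
    inst = instance_info.InstanceInfo(
        uuid='89f166ca-c0fc-48ec-baf9-eda20ff22f2b',
        name='instance-00000063')

    os_vif.plug(port, inst)  # drives the ovsdb transactions logged below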
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.203 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.204 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.205 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.209 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.210 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34e36f4e-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.210 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap34e36f4e-0c, col_values=(('external_ids', {'iface-id': '34e36f4e-0cdc-4b6b-bbbc-daa0d461be44', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bd:ca:34', 'vm-uuid': '89f166ca-c0fc-48ec-baf9-eda20ff22f2b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
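The transactions above are plain ovsdbapp commands: an idempotent add-br (hence the "Transaction caused no change" result, since br-int already exists), then add-port plus a db_set of the external_ids that ovn-controller keys on. A sketch issuing the same commands directly with ovsdbapp; the socket endpoint is an assumption, and the log runs them as two separate transactions rather than one:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap34e36f4e-0c', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap34e36f4e-0c',
            ('external_ids', {
                'iface-id': '34e36f4e-0cdc-4b6b-bbbc-daa0d461be44',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:bd:ca:34',
                'vm-uuid': '89f166ca-c0fc-48ec-baf9-eda20ff22f2b'})))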
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.212 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:13 np0005629333 NetworkManager[49836]: <info>  [1772023213.2138] manager: (tap34e36f4e-0c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Feb 25 07:40:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:40:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.215 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.218 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.220 244018 INFO os_vif [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:ca:34,bridge_name='br-int',has_traffic_filtering=True,id=34e36f4e-0cdc-4b6b-bbbc-daa0d461be44,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34e36f4e-0c')#033[00m
Feb 25 07:40:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Feb 25 07:40:13 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.281 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.281 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.281 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No VIF found with MAC fa:16:3e:bd:ca:34, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.282 244018 INFO nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Using config drive#033[00m
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.300 244018 DEBUG nova.storage.rbd_utils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
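The "does not exist" line is rbd_utils probing the vms pool before the config drive is built; the probe is just an open-and-catch on the image. A sketch with the Python rbd/rados bindings, using the pool, client id and conffile from the surrounding log:

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx,
                           '89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk.config',
                           read_only=True):
                print('image exists')
        except rbd.ImageNotFound:
            print('image does not exist')  # the case logged here
        finally:
            ioctx.close()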
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.663 244018 DEBUG oslo_concurrency.lockutils [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Acquiring lock "01328724-b95f-4b36-809a-ddc156dd0dde" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.664 244018 DEBUG oslo_concurrency.lockutils [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lock "01328724-b95f-4b36-809a-ddc156dd0dde" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.664 244018 DEBUG oslo_concurrency.lockutils [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Acquiring lock "01328724-b95f-4b36-809a-ddc156dd0dde-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.665 244018 DEBUG oslo_concurrency.lockutils [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lock "01328724-b95f-4b36-809a-ddc156dd0dde-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.665 244018 DEBUG oslo_concurrency.lockutils [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lock "01328724-b95f-4b36-809a-ddc156dd0dde-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.667 244018 INFO nova.compute.manager [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Terminating instance#033[00m
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.668 244018 DEBUG oslo_concurrency.lockutils [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Acquiring lock "refresh_cache-01328724-b95f-4b36-809a-ddc156dd0dde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.669 244018 DEBUG oslo_concurrency.lockutils [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Acquired lock "refresh_cache-01328724-b95f-4b36-809a-ddc156dd0dde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:40:13 np0005629333 nova_compute[244014]: 2026-02-25 12:40:13.669 244018 DEBUG nova.network.neutron [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:40:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 663 KiB/s rd, 5.9 MiB/s wr, 270 op/s
Feb 25 07:40:14 np0005629333 nova_compute[244014]: 2026-02-25 12:40:14.548 244018 DEBUG nova.network.neutron [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:40:14 np0005629333 nova_compute[244014]: 2026-02-25 12:40:14.586 244018 DEBUG nova.network.neutron [req-2c51ead4-fb6b-43b6-a593-d8b363b13c5c req-7eaecba6-204b-4105-963c-00a0c7c6cd1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Updated VIF entry in instance network info cache for port 34e36f4e-0cdc-4b6b-bbbc-daa0d461be44. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:40:14 np0005629333 nova_compute[244014]: 2026-02-25 12:40:14.587 244018 DEBUG nova.network.neutron [req-2c51ead4-fb6b-43b6-a593-d8b363b13c5c req-7eaecba6-204b-4105-963c-00a0c7c6cd1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Updating instance_info_cache with network_info: [{"id": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "address": "fa:16:3e:bd:ca:34", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34e36f4e-0c", "ovs_interfaceid": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:40:14 np0005629333 nova_compute[244014]: 2026-02-25 12:40:14.607 244018 INFO nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Creating config drive at /var/lib/nova/instances/89f166ca-c0fc-48ec-baf9-eda20ff22f2b/disk.config#033[00m
Feb 25 07:40:14 np0005629333 nova_compute[244014]: 2026-02-25 12:40:14.614 244018 DEBUG oslo_concurrency.processutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/89f166ca-c0fc-48ec-baf9-eda20ff22f2b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5fp12z1s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:40:14 np0005629333 nova_compute[244014]: 2026-02-25 12:40:14.653 244018 DEBUG oslo_concurrency.lockutils [req-2c51ead4-fb6b-43b6-a593-d8b363b13c5c req-7eaecba6-204b-4105-963c-00a0c7c6cd1e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-89f166ca-c0fc-48ec-baf9-eda20ff22f2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:40:14 np0005629333 nova_compute[244014]: 2026-02-25 12:40:14.760 244018 DEBUG oslo_concurrency.processutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/89f166ca-c0fc-48ec-baf9-eda20ff22f2b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5fp12z1s" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:40:14 np0005629333 nova_compute[244014]: 2026-02-25 12:40:14.798 244018 DEBUG nova.storage.rbd_utils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:40:14 np0005629333 nova_compute[244014]: 2026-02-25 12:40:14.803 244018 DEBUG oslo_concurrency.processutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/89f166ca-c0fc-48ec-baf9-eda20ff22f2b/disk.config 89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:40:14 np0005629333 nova_compute[244014]: 2026-02-25 12:40:14.952 244018 DEBUG oslo_concurrency.processutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/89f166ca-c0fc-48ec-baf9-eda20ff22f2b/disk.config 89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:40:14 np0005629333 nova_compute[244014]: 2026-02-25 12:40:14.953 244018 INFO nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Deleting local config drive /var/lib/nova/instances/89f166ca-c0fc-48ec-baf9-eda20ff22f2b/disk.config because it was imported into RBD.#033[00m
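Config-drive creation is two shell-outs: mkisofs renders the staged metadata tree into an ISO9660 image labelled config-2, then rbd import pushes it into the vms pool under the name the domain XML's cdrom <source> already points at, after which the local copy is deleted. A sketch reproducing both steps with oslo's processutils (the /tmp path below is the throwaway tmpdir from the log; any staged tree works):

    from oslo_concurrency import processutils

    base = '/var/lib/nova/instances/89f166ca-c0fc-48ec-baf9-eda20ff22f2b'

    # Render the metadata tree into an ISO9660 config drive.
    processutils.execute(
        'mkisofs', '-o', f'{base}/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmp5fp12z1s')

    # Import it into Ceph as <instance uuid>_disk.config.
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', f'{base}/disk.config',
        '89f166ca-c0fc-48ec-baf9-eda20ff22f2b_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')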
Feb 25 07:40:14 np0005629333 kernel: tap34e36f4e-0c: entered promiscuous mode
Feb 25 07:40:14 np0005629333 NetworkManager[49836]: <info>  [1772023214.9975] manager: (tap34e36f4e-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/404)
Feb 25 07:40:15 np0005629333 ovn_controller[147040]: 2026-02-25T12:40:14Z|00970|binding|INFO|Claiming lport 34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 for this chassis.
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:14.999 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:15 np0005629333 ovn_controller[147040]: 2026-02-25T12:40:14Z|00971|binding|INFO|34e36f4e-0cdc-4b6b-bbbc-daa0d461be44: Claiming fa:16:3e:bd:ca:34 10.100.0.12
Feb 25 07:40:15 np0005629333 ovn_controller[147040]: 2026-02-25T12:40:15Z|00972|binding|INFO|Setting lport 34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 ovn-installed in OVS
Feb 25 07:40:15 np0005629333 ovn_controller[147040]: 2026-02-25T12:40:15Z|00973|binding|INFO|Setting lport 34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 up in Southbound
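The four ovn-controller lines are the claim sequence: pick up the lport for this chassis, claim its MAC/IP, mark ovn-installed on the OVS interface, and flip the Southbound Port_Binding to up. One way to confirm the resulting binding is a quick sketch shelling out to ovn-sbctl, which must be pointed at the SB database on this deployment:

    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', '--columns=chassis,up', 'list', 'Port_Binding',
         '34e36f4e-0cdc-4b6b-bbbc-daa0d461be44'],
        capture_output=True, text=True, check=True)
    print(out.stdout)  # chassis should be this host, up should be true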
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.011 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:15.013 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:ca:34 10.100.0.12'], port_security=['fa:16:3e:bd:ca:34 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '89f166ca-c0fc-48ec-baf9-eda20ff22f2b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec8bae53-fe6a-49d1-a733-f00c198be561', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503e879cd1f44a16b9baef106ceba949', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3bf34285-1a67-4c95-bb68-fd577a012f6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18f4e8da-4409-4095-9850-aaee82dd8fd1, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=34e36f4e-0cdc-4b6b-bbbc-daa0d461be44) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.015 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:15.018 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 in datapath ec8bae53-fe6a-49d1-a733-f00c198be561 bound to our chassis#033[00m
Feb 25 07:40:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:15.020 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec8bae53-fe6a-49d1-a733-f00c198be561#033[00m
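"Provisioning metadata" means the agent ensures an ovnmeta-<network uuid> namespace exists with an interface carrying the subnet address and 169.254.169.254, wired to br-int; the privsep replies further down (the RTM_NEWLINK/RTM_NEWADDR records for tapec8bae53-f1) are its confirmation reads. A sketch of checking the namespace directly with pyroute2, using the namespace name visible in those replies:

    from pyroute2 import NetNS

    with NetNS('ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561') as ns:
        for addr in ns.get_addr():
            attrs = dict(addr['attrs'])
            print(attrs.get('IFA_LABEL'), attrs.get('IFA_ADDRESS'),
                  addr['prefixlen'])
    # Expect 10.100.0.2/28 and 169.254.169.254/32 on tapec8bae53-f1,
    # matching the RTM_NEWADDR records below.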
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.027 244018 DEBUG nova.network.neutron [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.030 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:15 np0005629333 systemd-machined[210048]: New machine qemu-126-instance-00000063.
Feb 25 07:40:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:15.036 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[843c3e9d-1db5-4d80-9aba-abdbf69f7da1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:40:15 np0005629333 systemd[1]: Started Virtual Machine qemu-126-instance-00000063.
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.057 244018 DEBUG oslo_concurrency.lockutils [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Releasing lock "refresh_cache-01328724-b95f-4b36-809a-ddc156dd0dde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.057 244018 DEBUG nova.compute.manager [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:40:15 np0005629333 systemd-udevd[332321]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:40:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:15.073 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[10ca442b-0c33-41ef-9fcf-2a2f8bccd3f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:40:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:15.077 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3a4f3266-0976-4f6f-b402-81c73336e1fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:40:15 np0005629333 NetworkManager[49836]: <info>  [1772023215.0835] device (tap34e36f4e-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:40:15 np0005629333 NetworkManager[49836]: <info>  [1772023215.0843] device (tap34e36f4e-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:40:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:15.100 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8210f8c4-48ba-4aef-8b6a-be6656686a78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:40:15 np0005629333 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000062.scope: Deactivated successfully.
Feb 25 07:40:15 np0005629333 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000062.scope: Consumed 1.370s CPU time.
Feb 25 07:40:15 np0005629333 systemd-machined[210048]: Machine qemu-125-instance-00000062 terminated.
Feb 25 07:40:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:15.120 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[76b471ea-41f3-4289-8a87-a8b3fc85ba63]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec8bae53-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a5:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 574, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 574, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516532, 'reachable_time': 18880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332331, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:40:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:15.131 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[352eec46-a372-446b-b868-76c83b0977ad]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516546, 'tstamp': 516546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332332, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516549, 'tstamp': 516549}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332332, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:40:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:15.133 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8bae53-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.134 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.136 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:15.136 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec8bae53-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:40:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:15.136 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:40:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:15.137 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec8bae53-f0, col_values=(('external_ids', {'iface-id': 'e2d1eadf-baf7-4e5c-8052-6c64e8476a26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:40:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:15.137 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.279 244018 INFO nova.virt.libvirt.driver [-] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Instance destroyed successfully.#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.279 244018 DEBUG nova.objects.instance [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lazy-loading 'resources' on Instance uuid 01328724-b95f-4b36-809a-ddc156dd0dde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.353 244018 DEBUG nova.compute.manager [req-824a55fb-7733-4da2-befe-4a89b4b9c89a req-cbaa85f3-6d57-4923-a37f-2a371e5e96dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Received event network-vif-plugged-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.353 244018 DEBUG oslo_concurrency.lockutils [req-824a55fb-7733-4da2-befe-4a89b4b9c89a req-cbaa85f3-6d57-4923-a37f-2a371e5e96dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.359 244018 DEBUG oslo_concurrency.lockutils [req-824a55fb-7733-4da2-befe-4a89b4b9c89a req-cbaa85f3-6d57-4923-a37f-2a371e5e96dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.359 244018 DEBUG oslo_concurrency.lockutils [req-824a55fb-7733-4da2-befe-4a89b4b9c89a req-cbaa85f3-6d57-4923-a37f-2a371e5e96dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.359 244018 DEBUG nova.compute.manager [req-824a55fb-7733-4da2-befe-4a89b4b9c89a req-cbaa85f3-6d57-4923-a37f-2a371e5e96dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Processing event network-vif-plugged-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:40:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 07:40:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 32K writes, 125K keys, 32K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.04 MB/s#012Cumulative WAL: 32K writes, 11K syncs, 2.84 writes per sync, written: 0.12 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8270 writes, 30K keys, 8270 commit groups, 1.0 writes per commit group, ingest: 32.22 MB, 0.05 MB/s#012Interval WAL: 8270 writes, 3258 syncs, 2.54 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.897 244018 DEBUG nova.compute.manager [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.898 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023215.8968914, 89f166ca-c0fc-48ec-baf9-eda20ff22f2b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.899 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] VM Started (Lifecycle Event)#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.904 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.909 244018 INFO nova.virt.libvirt.driver [-] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Instance spawned successfully.#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.910 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.926 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.934 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
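The power-state sync above compares the value still stored in the database (0) with what libvirt now reports (1). Those numbers are nova.compute.power_state constants; for reference:

    # Values from nova/compute/power_state.py.
    NOSTATE = 0x00    # DB side here: record not yet updated by the spawn
    RUNNING = 0x01    # hypervisor side: the guest is up
    PAUSED = 0x03
    SHUTDOWN = 0x04
    CRASHED = 0x06
    SUSPENDED = 0x07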
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.938 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.939 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.939 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.940 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.941 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.942 244018 DEBUG nova.virt.libvirt.driver [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.954 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.954 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023215.8995128, 89f166ca-c0fc-48ec-baf9-eda20ff22f2b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.955 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:40:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.981 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.985 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023215.9032626, 89f166ca-c0fc-48ec-baf9-eda20ff22f2b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.986 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.993 244018 INFO nova.compute.manager [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Took 10.71 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:40:15 np0005629333 nova_compute[244014]: 2026-02-25 12:40:15.993 244018 DEBUG nova.compute.manager [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:40:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Feb 25 07:40:15 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Feb 25 07:40:16 np0005629333 nova_compute[244014]: 2026-02-25 12:40:16.005 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:40:16 np0005629333 nova_compute[244014]: 2026-02-25 12:40:16.011 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:40:16 np0005629333 nova_compute[244014]: 2026-02-25 12:40:16.028 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:40:16 np0005629333 nova_compute[244014]: 2026-02-25 12:40:16.056 244018 INFO nova.compute.manager [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Took 12.05 seconds to build instance.#033[00m
Feb 25 07:40:16 np0005629333 nova_compute[244014]: 2026-02-25 12:40:16.069 244018 DEBUG oslo_concurrency.lockutils [None req-d8f00b39-b0bb-472c-a9c0-520a32f2d4ca 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:40:16 np0005629333 nova_compute[244014]: 2026-02-25 12:40:16.160 244018 INFO nova.virt.libvirt.driver [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Deleting instance files /var/lib/nova/instances/01328724-b95f-4b36-809a-ddc156dd0dde_del#033[00m
Feb 25 07:40:16 np0005629333 nova_compute[244014]: 2026-02-25 12:40:16.161 244018 INFO nova.virt.libvirt.driver [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Deletion of /var/lib/nova/instances/01328724-b95f-4b36-809a-ddc156dd0dde_del complete#033[00m
Feb 25 07:40:16 np0005629333 nova_compute[244014]: 2026-02-25 12:40:16.221 244018 INFO nova.compute.manager [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Took 1.16 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:40:16 np0005629333 nova_compute[244014]: 2026-02-25 12:40:16.222 244018 DEBUG oslo.service.loopingcall [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:40:16 np0005629333 nova_compute[244014]: 2026-02-25 12:40:16.223 244018 DEBUG nova.compute.manager [-] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:40:16 np0005629333 nova_compute[244014]: 2026-02-25 12:40:16.223 244018 DEBUG nova.network.neutron [-] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:40:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 716 KiB/s rd, 2.6 MiB/s wr, 179 op/s
Feb 25 07:40:16 np0005629333 nova_compute[244014]: 2026-02-25 12:40:16.421 244018 DEBUG nova.network.neutron [-] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:40:16 np0005629333 nova_compute[244014]: 2026-02-25 12:40:16.442 244018 DEBUG nova.network.neutron [-] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:40:16 np0005629333 nova_compute[244014]: 2026-02-25 12:40:16.456 244018 INFO nova.compute.manager [-] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Took 0.23 seconds to deallocate network for instance.#033[00m
Feb 25 07:40:16 np0005629333 nova_compute[244014]: 2026-02-25 12:40:16.506 244018 DEBUG oslo_concurrency.lockutils [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:40:16 np0005629333 nova_compute[244014]: 2026-02-25 12:40:16.507 244018 DEBUG oslo_concurrency.lockutils [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:40:16 np0005629333 nova_compute[244014]: 2026-02-25 12:40:16.621 244018 DEBUG oslo_concurrency.processutils [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:40:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:40:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3097539994' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:40:17 np0005629333 nova_compute[244014]: 2026-02-25 12:40:17.193 244018 DEBUG oslo_concurrency.processutils [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
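The resource tracker shells out to ceph to size the RBD-backed DISK_GB inventory; the 0.572 s round trip above is that subprocess call. A sketch using the same oslo.concurrency processutils API and command line (the JSON keys are the standard `ceph df --format=json` layout):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # Cluster-wide totals are reported in bytes under 'stats'.
    print(stats['stats']['total_bytes'] / 1024 ** 3, 'GiB total')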
Feb 25 07:40:17 np0005629333 nova_compute[244014]: 2026-02-25 12:40:17.198 244018 DEBUG nova.compute.provider_tree [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:40:17 np0005629333 nova_compute[244014]: 2026-02-25 12:40:17.217 244018 DEBUG nova.scheduler.client.report [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
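Placement turns each inventory record into usable capacity as (total - reserved) × allocation_ratio, so the unchanged inventory above advertises 32 VCPU, 7167 MB of RAM and about 52 GB of disk. A quick worked check on the logged numbers:

    # capacity = (total - reserved) * allocation_ratio, per resource class
    inventory = {'VCPU': (8, 0, 4.0),
                 'MEMORY_MB': (7679, 512, 1.0),
                 'DISK_GB': (59, 1, 0.9)}
    for rc, (total, reserved, ratio) in inventory.items():
        print(rc, (total - reserved) * ratio)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2 (rounded)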
Feb 25 07:40:17 np0005629333 nova_compute[244014]: 2026-02-25 12:40:17.238 244018 DEBUG oslo_concurrency.lockutils [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:40:17 np0005629333 nova_compute[244014]: 2026-02-25 12:40:17.264 244018 INFO nova.scheduler.client.report [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Deleted allocations for instance 01328724-b95f-4b36-809a-ddc156dd0dde#033[00m
Feb 25 07:40:17 np0005629333 nova_compute[244014]: 2026-02-25 12:40:17.339 244018 DEBUG oslo_concurrency.lockutils [None req-254cb029-4307-4861-bfe4-899aa04d90a2 d154463d39d44a229e754c4dd30f78d0 fc8ee4ee0f07455b8722a80a5e837c79 - - default default] Lock "01328724-b95f-4b36-809a-ddc156dd0dde" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:40:17 np0005629333 nova_compute[244014]: 2026-02-25 12:40:17.507 244018 DEBUG nova.compute.manager [req-ed7009ce-222f-4e78-840e-798869c4dd08 req-3c78ef5c-4d5d-4842-bdec-2b77ecf9107b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Received event network-vif-plugged-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:40:17 np0005629333 nova_compute[244014]: 2026-02-25 12:40:17.509 244018 DEBUG oslo_concurrency.lockutils [req-ed7009ce-222f-4e78-840e-798869c4dd08 req-3c78ef5c-4d5d-4842-bdec-2b77ecf9107b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:40:17 np0005629333 nova_compute[244014]: 2026-02-25 12:40:17.510 244018 DEBUG oslo_concurrency.lockutils [req-ed7009ce-222f-4e78-840e-798869c4dd08 req-3c78ef5c-4d5d-4842-bdec-2b77ecf9107b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:40:17 np0005629333 nova_compute[244014]: 2026-02-25 12:40:17.511 244018 DEBUG oslo_concurrency.lockutils [req-ed7009ce-222f-4e78-840e-798869c4dd08 req-3c78ef5c-4d5d-4842-bdec-2b77ecf9107b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:40:17 np0005629333 nova_compute[244014]: 2026-02-25 12:40:17.512 244018 DEBUG nova.compute.manager [req-ed7009ce-222f-4e78-840e-798869c4dd08 req-3c78ef5c-4d5d-4842-bdec-2b77ecf9107b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] No waiting events found dispatching network-vif-plugged-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:40:17 np0005629333 nova_compute[244014]: 2026-02-25 12:40:17.512 244018 WARNING nova.compute.manager [req-ed7009ce-222f-4e78-840e-798869c4dd08 req-3c78ef5c-4d5d-4842-bdec-2b77ecf9107b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Received unexpected event network-vif-plugged-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:40:17 np0005629333 podman[332420]: 2026-02-25 12:40:17.740436476 +0000 UTC m=+0.083049795 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true)
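podman emits records like the one above each time its healthcheck timer runs the container's configured test command (here /openstack/healthcheck) and stores the result. Assuming the container name from the log, the same check can be driven by hand; the inspect field name for the recorded state varies slightly across podman releases:

    import subprocess

    # Run the configured healthcheck once (exit 0 means healthy).
    subprocess.run(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'],
                   check=True)
    # Read back the recorded status ('.State.Healthcheck' on older podman).
    status = subprocess.run(
        ['podman', 'inspect', '--format', '{{.State.Health.Status}}',
         'ovn_metadata_agent'],
        capture_output=True, text=True).stdout.strip()
    print(status)  # expected: healthy, matching health_status=healthy above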
Feb 25 07:40:17 np0005629333 nova_compute[244014]: 2026-02-25 12:40:17.781 244018 DEBUG oslo_concurrency.lockutils [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:40:17 np0005629333 nova_compute[244014]: 2026-02-25 12:40:17.781 244018 DEBUG oslo_concurrency.lockutils [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:40:17 np0005629333 nova_compute[244014]: 2026-02-25 12:40:17.782 244018 DEBUG oslo_concurrency.lockutils [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:40:17 np0005629333 nova_compute[244014]: 2026-02-25 12:40:17.782 244018 DEBUG oslo_concurrency.lockutils [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:40:17 np0005629333 nova_compute[244014]: 2026-02-25 12:40:17.782 244018 DEBUG oslo_concurrency.lockutils [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:40:17 np0005629333 nova_compute[244014]: 2026-02-25 12:40:17.783 244018 INFO nova.compute.manager [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Terminating instance#033[00m
Feb 25 07:40:17 np0005629333 nova_compute[244014]: 2026-02-25 12:40:17.784 244018 DEBUG nova.compute.manager [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:40:17 np0005629333 kernel: tap34e36f4e-0c (unregistering): left promiscuous mode
Feb 25 07:40:17 np0005629333 NetworkManager[49836]: <info>  [1772023217.8334] device (tap34e36f4e-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:40:17 np0005629333 ovn_controller[147040]: 2026-02-25T12:40:17Z|00974|binding|INFO|Releasing lport 34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 from this chassis (sb_readonly=0)
Feb 25 07:40:17 np0005629333 ovn_controller[147040]: 2026-02-25T12:40:17Z|00975|binding|INFO|Setting lport 34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 down in Southbound
Feb 25 07:40:17 np0005629333 ovn_controller[147040]: 2026-02-25T12:40:17Z|00976|binding|INFO|Removing iface tap34e36f4e-0c ovn-installed in OVS
Feb 25 07:40:17 np0005629333 nova_compute[244014]: 2026-02-25 12:40:17.838 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:17 np0005629333 nova_compute[244014]: 2026-02-25 12:40:17.842 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:17 np0005629333 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000063.scope: Deactivated successfully.
Feb 25 07:40:17 np0005629333 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000063.scope: Consumed 2.792s CPU time.
Feb 25 07:40:17 np0005629333 systemd-machined[210048]: Machine qemu-126-instance-00000063 terminated.
Feb 25 07:40:18 np0005629333 nova_compute[244014]: 2026-02-25 12:40:18.003 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:18 np0005629333 nova_compute[244014]: 2026-02-25 12:40:18.009 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:18.015 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:ca:34 10.100.0.12'], port_security=['fa:16:3e:bd:ca:34 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '89f166ca-c0fc-48ec-baf9-eda20ff22f2b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec8bae53-fe6a-49d1-a733-f00c198be561', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503e879cd1f44a16b9baef106ceba949', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3bf34285-1a67-4c95-bb68-fd577a012f6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18f4e8da-4409-4095-9850-aaee82dd8fd1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=34e36f4e-0cdc-4b6b-bbbc-daa0d461be44) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:40:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:18.017 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 in datapath ec8bae53-fe6a-49d1-a733-f00c198be561 unbound from our chassis#033[00m
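The "Matched UPDATE" line shows the metadata agent's ovsdbapp event machinery reacting to the Port_Binding row losing its chassis. A minimal sketch of that event pattern, with constructor arguments mirroring the matched event in the log; attaching it to a live OVN southbound connection is omitted, and base-class details may differ between ovsdbapp releases:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None,
            # exactly as printed in the matched-event line above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Fires when a Port_Binding row changes, e.g. port unbound.
            print('Port_Binding %s updated' % row.logical_port)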
Feb 25 07:40:18 np0005629333 nova_compute[244014]: 2026-02-25 12:40:18.019 244018 INFO nova.virt.libvirt.driver [-] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Instance destroyed successfully.#033[00m
Feb 25 07:40:18 np0005629333 nova_compute[244014]: 2026-02-25 12:40:18.019 244018 DEBUG nova.objects.instance [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'resources' on Instance uuid 89f166ca-c0fc-48ec-baf9-eda20ff22f2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:40:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:18.019 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec8bae53-fe6a-49d1-a733-f00c198be561#033[00m
Feb 25 07:40:18 np0005629333 nova_compute[244014]: 2026-02-25 12:40:18.037 244018 DEBUG nova.virt.libvirt.vif [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:40:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1192653093',display_name='tempest-ServersTestJSON-server-1192653093',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1192653093',id=99,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:40:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-3ml2upp9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:40:16Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=89f166ca-c0fc-48ec-baf9-eda20ff22f2b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "address": "fa:16:3e:bd:ca:34", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34e36f4e-0c", "ovs_interfaceid": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:40:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:18.038 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[14233316-04cc-4dae-9e3b-dd0ad69caeb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:40:18 np0005629333 nova_compute[244014]: 2026-02-25 12:40:18.038 244018 DEBUG nova.network.os_vif_util [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "address": "fa:16:3e:bd:ca:34", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34e36f4e-0c", "ovs_interfaceid": "34e36f4e-0cdc-4b6b-bbbc-daa0d461be44", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:40:18 np0005629333 nova_compute[244014]: 2026-02-25 12:40:18.040 244018 DEBUG nova.network.os_vif_util [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:ca:34,bridge_name='br-int',has_traffic_filtering=True,id=34e36f4e-0cdc-4b6b-bbbc-daa0d461be44,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34e36f4e-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:40:18 np0005629333 nova_compute[244014]: 2026-02-25 12:40:18.040 244018 DEBUG os_vif [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:ca:34,bridge_name='br-int',has_traffic_filtering=True,id=34e36f4e-0cdc-4b6b-bbbc-daa0d461be44,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34e36f4e-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:40:18 np0005629333 nova_compute[244014]: 2026-02-25 12:40:18.043 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:18 np0005629333 nova_compute[244014]: 2026-02-25 12:40:18.044 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34e36f4e-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:40:18 np0005629333 nova_compute[244014]: 2026-02-25 12:40:18.047 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:18 np0005629333 nova_compute[244014]: 2026-02-25 12:40:18.048 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:18 np0005629333 nova_compute[244014]: 2026-02-25 12:40:18.051 244018 INFO os_vif [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:ca:34,bridge_name='br-int',has_traffic_filtering=True,id=34e36f4e-0cdc-4b6b-bbbc-daa0d461be44,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34e36f4e-0c')#033[00m
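Unplugging the VIF reduces to one OVSDB transaction: DelPortCommand removes the tap interface from br-int, with if_exists=True making it a no-op if the port is already gone. The CLI equivalent, wrapped in Python for illustration:

    import subprocess

    # Same effect as the DelPortCommand transaction logged above;
    # --if-exists keeps the call idempotent.
    subprocess.run(['ovs-vsctl', '--if-exists', 'del-port',
                    'br-int', 'tap34e36f4e-0c'], check=True)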
Feb 25 07:40:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:18.063 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3dbd9ba1-ba97-40f4-ac2d-1866ea068b4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:40:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:18.067 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a8dd30-dce0-4e40-a874-d9fc7f772711]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:40:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:18.089 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9fc7e81d-fe7b-4b7f-82a5-daf104271818]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:40:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:18.099 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4adf9052-31c2-401a-8e13-69925951c07a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec8bae53-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a5:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516532, 'reachable_time': 18880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332476, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:40:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:18.111 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[227dd281-3e7b-4ba8-be7a-f8410d07a129]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516546, 'tstamp': 516546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332480, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516549, 'tstamp': 516549}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332480, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:40:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:18.112 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8bae53-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:40:18 np0005629333 nova_compute[244014]: 2026-02-25 12:40:18.113 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:18 np0005629333 nova_compute[244014]: 2026-02-25 12:40:18.113 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:18.114 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec8bae53-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:40:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:18.114 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:40:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:18.114 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec8bae53-f0, col_values=(('external_ids', {'iface-id': 'e2d1eadf-baf7-4e5c-8052-6c64e8476a26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:40:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:18.114 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:40:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 521 KiB/s wr, 270 op/s
Feb 25 07:40:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:40:18 np0005629333 nova_compute[244014]: 2026-02-25 12:40:18.486 244018 INFO nova.virt.libvirt.driver [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Deleting instance files /var/lib/nova/instances/89f166ca-c0fc-48ec-baf9-eda20ff22f2b_del#033[00m
Feb 25 07:40:18 np0005629333 nova_compute[244014]: 2026-02-25 12:40:18.487 244018 INFO nova.virt.libvirt.driver [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Deletion of /var/lib/nova/instances/89f166ca-c0fc-48ec-baf9-eda20ff22f2b_del complete#033[00m
Feb 25 07:40:18 np0005629333 nova_compute[244014]: 2026-02-25 12:40:18.575 244018 INFO nova.compute.manager [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:40:18 np0005629333 nova_compute[244014]: 2026-02-25 12:40:18.576 244018 DEBUG oslo.service.loopingcall [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:40:18 np0005629333 nova_compute[244014]: 2026-02-25 12:40:18.576 244018 DEBUG nova.compute.manager [-] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:40:18 np0005629333 nova_compute[244014]: 2026-02-25 12:40:18.576 244018 DEBUG nova.network.neutron [-] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:40:18 np0005629333 nova_compute[244014]: 2026-02-25 12:40:18.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:40:18 np0005629333 nova_compute[244014]: 2026-02-25 12:40:18.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 07:40:18 np0005629333 nova_compute[244014]: 2026-02-25 12:40:18.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 07:40:18 np0005629333 nova_compute[244014]: 2026-02-25 12:40:18.907 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Feb 25 07:40:19 np0005629333 nova_compute[244014]: 2026-02-25 12:40:19.109 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-62d2fee1-f07f-44e3-a511-6b9bb341a3ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:40:19 np0005629333 nova_compute[244014]: 2026-02-25 12:40:19.110 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-62d2fee1-f07f-44e3-a511-6b9bb341a3ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:40:19 np0005629333 nova_compute[244014]: 2026-02-25 12:40:19.110 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 25 07:40:19 np0005629333 nova_compute[244014]: 2026-02-25 12:40:19.111 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid 62d2fee1-f07f-44e3-a511-6b9bb341a3ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:40:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 07:40:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.2 total, 600.0 interval#012Cumulative writes: 33K writes, 128K keys, 33K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.04 MB/s#012Cumulative WAL: 33K writes, 11K syncs, 2.83 writes per sync, written: 0.12 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7316 writes, 27K keys, 7316 commit groups, 1.0 writes per commit group, ingest: 26.03 MB, 0.04 MB/s#012Interval WAL: 7316 writes, 2886 syncs, 2.53 writes per sync, written: 0.03 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
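The RocksDB interval figures above are internally consistent and easy to spot-check: 7316 WAL writes over 2886 syncs gives the reported 2.53 writes per sync, and 26.03 MB ingested over the 600 s interval gives the reported 0.04 MB/s:

    writes, syncs = 7316, 2886
    print(round(writes / syncs, 2))   # 2.53 writes per sync
    mb, secs = 26.03, 600.0
    print(round(mb / secs, 2))        # 0.04 MB/s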
Feb 25 07:40:19 np0005629333 nova_compute[244014]: 2026-02-25 12:40:19.454 244018 DEBUG nova.network.neutron [-] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:40:19 np0005629333 nova_compute[244014]: 2026-02-25 12:40:19.474 244018 INFO nova.compute.manager [-] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Took 0.90 seconds to deallocate network for instance.#033[00m
Feb 25 07:40:19 np0005629333 nova_compute[244014]: 2026-02-25 12:40:19.549 244018 DEBUG oslo_concurrency.lockutils [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:40:19 np0005629333 nova_compute[244014]: 2026-02-25 12:40:19.551 244018 DEBUG oslo_concurrency.lockutils [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:40:19 np0005629333 nova_compute[244014]: 2026-02-25 12:40:19.588 244018 DEBUG nova.scheduler.client.report [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 25 07:40:19 np0005629333 nova_compute[244014]: 2026-02-25 12:40:19.596 244018 DEBUG nova.compute.manager [req-391a9a76-2761-4bec-9643-2f3a15c9c99a req-df53c50b-7bcb-49e6-b73d-5f9125675992 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Received event network-vif-deleted-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:40:19 np0005629333 nova_compute[244014]: 2026-02-25 12:40:19.607 244018 DEBUG nova.scheduler.client.report [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 25 07:40:19 np0005629333 nova_compute[244014]: 2026-02-25 12:40:19.608 244018 DEBUG nova.compute.provider_tree [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 25 07:40:19 np0005629333 nova_compute[244014]: 2026-02-25 12:40:19.634 244018 DEBUG nova.scheduler.client.report [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 25 07:40:19 np0005629333 nova_compute[244014]: 2026-02-25 12:40:19.642 244018 DEBUG nova.compute.manager [req-7b2c8841-17d3-4fb6-8f45-71b6b8459b27 req-38ff3a50-a0fe-4416-bb0f-1f109f137e7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Received event network-vif-unplugged-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:40:19 np0005629333 nova_compute[244014]: 2026-02-25 12:40:19.643 244018 DEBUG oslo_concurrency.lockutils [req-7b2c8841-17d3-4fb6-8f45-71b6b8459b27 req-38ff3a50-a0fe-4416-bb0f-1f109f137e7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:40:19 np0005629333 nova_compute[244014]: 2026-02-25 12:40:19.643 244018 DEBUG oslo_concurrency.lockutils [req-7b2c8841-17d3-4fb6-8f45-71b6b8459b27 req-38ff3a50-a0fe-4416-bb0f-1f109f137e7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:40:19 np0005629333 nova_compute[244014]: 2026-02-25 12:40:19.644 244018 DEBUG oslo_concurrency.lockutils [req-7b2c8841-17d3-4fb6-8f45-71b6b8459b27 req-38ff3a50-a0fe-4416-bb0f-1f109f137e7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:40:19 np0005629333 nova_compute[244014]: 2026-02-25 12:40:19.644 244018 DEBUG nova.compute.manager [req-7b2c8841-17d3-4fb6-8f45-71b6b8459b27 req-38ff3a50-a0fe-4416-bb0f-1f109f137e7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] No waiting events found dispatching network-vif-unplugged-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:40:19 np0005629333 nova_compute[244014]: 2026-02-25 12:40:19.644 244018 WARNING nova.compute.manager [req-7b2c8841-17d3-4fb6-8f45-71b6b8459b27 req-38ff3a50-a0fe-4416-bb0f-1f109f137e7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Received unexpected event network-vif-unplugged-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:40:19 np0005629333 nova_compute[244014]: 2026-02-25 12:40:19.645 244018 DEBUG nova.compute.manager [req-7b2c8841-17d3-4fb6-8f45-71b6b8459b27 req-38ff3a50-a0fe-4416-bb0f-1f109f137e7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Received event network-vif-plugged-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:40:19 np0005629333 nova_compute[244014]: 2026-02-25 12:40:19.645 244018 DEBUG oslo_concurrency.lockutils [req-7b2c8841-17d3-4fb6-8f45-71b6b8459b27 req-38ff3a50-a0fe-4416-bb0f-1f109f137e7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:40:19 np0005629333 nova_compute[244014]: 2026-02-25 12:40:19.646 244018 DEBUG oslo_concurrency.lockutils [req-7b2c8841-17d3-4fb6-8f45-71b6b8459b27 req-38ff3a50-a0fe-4416-bb0f-1f109f137e7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:40:19 np0005629333 nova_compute[244014]: 2026-02-25 12:40:19.646 244018 DEBUG oslo_concurrency.lockutils [req-7b2c8841-17d3-4fb6-8f45-71b6b8459b27 req-38ff3a50-a0fe-4416-bb0f-1f109f137e7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:40:19 np0005629333 nova_compute[244014]: 2026-02-25 12:40:19.647 244018 DEBUG nova.compute.manager [req-7b2c8841-17d3-4fb6-8f45-71b6b8459b27 req-38ff3a50-a0fe-4416-bb0f-1f109f137e7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] No waiting events found dispatching network-vif-plugged-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:40:19 np0005629333 nova_compute[244014]: 2026-02-25 12:40:19.647 244018 WARNING nova.compute.manager [req-7b2c8841-17d3-4fb6-8f45-71b6b8459b27 req-38ff3a50-a0fe-4416-bb0f-1f109f137e7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Received unexpected event network-vif-plugged-34e36f4e-0cdc-4b6b-bbbc-daa0d461be44 for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:40:19 np0005629333 nova_compute[244014]: 2026-02-25 12:40:19.662 244018 DEBUG nova.scheduler.client.report [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 25 07:40:19 np0005629333 nova_compute[244014]: 2026-02-25 12:40:19.714 244018 DEBUG oslo_concurrency.processutils [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:40:19 np0005629333 podman[332482]: 2026-02-25 12:40:19.76859035 +0000 UTC m=+0.101310300 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:40:20 np0005629333 nova_compute[244014]: 2026-02-25 12:40:20.030 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:40:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 44 KiB/s wr, 167 op/s
Feb 25 07:40:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:40:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2778755769' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:40:20 np0005629333 nova_compute[244014]: 2026-02-25 12:40:20.283 244018 DEBUG oslo_concurrency.processutils [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
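The 0.570 s spent here is a synchronous monitor round-trip that nova makes to size its RBD-backed DISK_GB inventory. A standalone reproduction of the same probe, assuming the same --id/--conf pair and that the JSON output carries the usual stats.total_* fields:

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    # Field names are an assumption about the ceph df JSON schema.
    stats = json.loads(subprocess.check_output(cmd))["stats"]

    gib = 1024 ** 3
    print(f"total: {stats['total_bytes'] / gib:.0f} GiB, "
          f"avail: {stats['total_avail_bytes'] / gib:.0f} GiB")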
Feb 25 07:40:20 np0005629333 nova_compute[244014]: 2026-02-25 12:40:20.291 244018 DEBUG nova.compute.provider_tree [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:40:20 np0005629333 nova_compute[244014]: 2026-02-25 12:40:20.314 244018 DEBUG nova.scheduler.client.report [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
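Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio. Worked out for the values logged above:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2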
Feb 25 07:40:20 np0005629333 nova_compute[244014]: 2026-02-25 12:40:20.348 244018 DEBUG oslo_concurrency.lockutils [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:40:20 np0005629333 nova_compute[244014]: 2026-02-25 12:40:20.378 244018 INFO nova.scheduler.client.report [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Deleted allocations for instance 89f166ca-c0fc-48ec-baf9-eda20ff22f2b
Feb 25 07:40:20 np0005629333 nova_compute[244014]: 2026-02-25 12:40:20.469 244018 DEBUG oslo_concurrency.lockutils [None req-67ed2497-1f3a-4fe1-a874-da78957ebd26 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "89f166ca-c0fc-48ec-baf9-eda20ff22f2b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:40:21 np0005629333 nova_compute[244014]: 2026-02-25 12:40:21.635 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Updating instance_info_cache with network_info: [{"id": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "address": "fa:16:3e:f6:f0:2e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521e9ad8-9e", "ovs_interfaceid": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
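The network_info blob above is the per-instance cache that _heal_instance_info_cache refreshes. A sketch pulling out the fields an operator usually wants (MAC, bridge, tap device, fixed IPs), assuming one VIF entry has been copied from the log line as JSON and trimmed to what is used below:

    import json

    vif = json.loads('''{
      "address": "fa:16:3e:f6:f0:2e",
      "network": {"bridge": "br-int", "subnets": [
        {"cidr": "10.100.0.0/28",
         "ips": [{"address": "10.100.0.3", "type": "fixed"}]}]},
      "devname": "tap521e9ad8-9e"
    }''')

    # Collect fixed IPs across all subnets of this VIF.
    ips = [ip["address"]
           for subnet in vif["network"]["subnets"]
           for ip in subnet["ips"] if ip["type"] == "fixed"]
    print(vif["address"], vif["network"]["bridge"], vif["devname"], ips)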
Feb 25 07:40:21 np0005629333 nova_compute[244014]: 2026-02-25 12:40:21.657 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-62d2fee1-f07f-44e3-a511-6b9bb341a3ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:40:21 np0005629333 nova_compute[244014]: 2026-02-25 12:40:21.658 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 07:40:21 np0005629333 nova_compute[244014]: 2026-02-25 12:40:21.659 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:40:21 np0005629333 nova_compute[244014]: 2026-02-25 12:40:21.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:40:21 np0005629333 nova_compute[244014]: 2026-02-25 12:40:21.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:40:21 np0005629333 nova_compute[244014]: 2026-02-25 12:40:21.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:40:21 np0005629333 nova_compute[244014]: 2026-02-25 12:40:21.918 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:40:21 np0005629333 nova_compute[244014]: 2026-02-25 12:40:21.919 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:40:21 np0005629333 nova_compute[244014]: 2026-02-25 12:40:21.920 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:40:21 np0005629333 nova_compute[244014]: 2026-02-25 12:40:21.920 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 07:40:21 np0005629333 nova_compute[244014]: 2026-02-25 12:40:21.921 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:40:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 252 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 37 KiB/s wr, 189 op/s
Feb 25 07:40:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:40:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/823477466' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:40:22 np0005629333 nova_compute[244014]: 2026-02-25 12:40:22.460 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:40:22 np0005629333 nova_compute[244014]: 2026-02-25 12:40:22.535 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:40:22 np0005629333 nova_compute[244014]: 2026-02-25 12:40:22.536 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:40:22 np0005629333 nova_compute[244014]: 2026-02-25 12:40:22.744 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:40:22 np0005629333 nova_compute[244014]: 2026-02-25 12:40:22.746 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3602MB free_disk=59.92121119052172GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
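The pci_devices list above enumerates eleven type-PCI functions; vendor 8086 is Intel and 1af4 is Red Hat (virtio). A short sketch tallying the list by vendor, using a copy of the logged JSON trimmed to three entries:

    import json
    from collections import Counter

    devices = json.loads('''[
      {"address": "0000:00:00.0", "vendor_id": "8086", "product_id": "1237"},
      {"address": "0000:00:07.0", "vendor_id": "1af4", "product_id": "1000"},
      {"address": "0000:00:02.0", "vendor_id": "1af4", "product_id": "1050"}
    ]''')

    vendors = {"8086": "Intel", "1af4": "Red Hat (virtio)"}
    counts = Counter(vendors.get(d["vendor_id"], d["vendor_id"]) for d in devices)
    print(dict(counts))  # {'Red Hat (virtio)': 2, 'Intel': 1} for this trimmed list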
Feb 25 07:40:22 np0005629333 nova_compute[244014]: 2026-02-25 12:40:22.747 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:40:22 np0005629333 nova_compute[244014]: 2026-02-25 12:40:22.747 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:40:22 np0005629333 nova_compute[244014]: 2026-02-25 12:40:22.875 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 62d2fee1-f07f-44e3-a511-6b9bb341a3ed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 07:40:22 np0005629333 nova_compute[244014]: 2026-02-25 12:40:22.876 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 07:40:22 np0005629333 nova_compute[244014]: 2026-02-25 12:40:22.876 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
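The final view is consistent with the single remaining instance and the reserved values from the inventory: used_ram is the 512 MB reservation plus the instance's 128 MB allocation, and one of eight vCPUs is allocated. As arithmetic:

    reserved_ram_mb = 512   # MEMORY_MB reserved in the inventory above
    instance_ram_mb = 128   # allocation for 62d2fee1-f07f-44e3-a511-6b9bb341a3ed
    assert reserved_ram_mb + instance_ram_mb == 640  # used_ram in the log

    total_vcpus, used_vcpus = 8, 1
    print(f"free vcpus: {total_vcpus - used_vcpus}")  # 7, matching free_vcpus above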
Feb 25 07:40:22 np0005629333 nova_compute[244014]: 2026-02-25 12:40:22.927 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:40:23 np0005629333 nova_compute[244014]: 2026-02-25 12:40:23.048 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:40:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:40:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Feb 25 07:40:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Feb 25 07:40:23 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Feb 25 07:40:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:40:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/563994600' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:40:23 np0005629333 nova_compute[244014]: 2026-02-25 12:40:23.567 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:40:23 np0005629333 nova_compute[244014]: 2026-02-25 12:40:23.575 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:40:23 np0005629333 nova_compute[244014]: 2026-02-25 12:40:23.591 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:40:23 np0005629333 nova_compute[244014]: 2026-02-25 12:40:23.619 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 07:40:23 np0005629333 nova_compute[244014]: 2026-02-25 12:40:23.620 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:40:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:40:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:40:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:40:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:40:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:40:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:40:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:40:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:40:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:40:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:40:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:40:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:40:24 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:40:24 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:40:24 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:40:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 305 active+clean; 233 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 41 KiB/s wr, 209 op/s
Feb 25 07:40:24 np0005629333 podman[332721]: 2026-02-25 12:40:24.300003115 +0000 UTC m=+0.055099446 container create 919b6299b7d828c281e4cde1f9cee2b0d0b60949822b19d9a974dcd3a007976a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 07:40:24 np0005629333 systemd[1]: Started libpod-conmon-919b6299b7d828c281e4cde1f9cee2b0d0b60949822b19d9a974dcd3a007976a.scope.
Feb 25 07:40:24 np0005629333 podman[332721]: 2026-02-25 12:40:24.27784481 +0000 UTC m=+0.032941141 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:40:24 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:40:24 np0005629333 podman[332721]: 2026-02-25 12:40:24.393960026 +0000 UTC m=+0.149056397 container init 919b6299b7d828c281e4cde1f9cee2b0d0b60949822b19d9a974dcd3a007976a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 07:40:24 np0005629333 podman[332721]: 2026-02-25 12:40:24.402647702 +0000 UTC m=+0.157744033 container start 919b6299b7d828c281e4cde1f9cee2b0d0b60949822b19d9a974dcd3a007976a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_dijkstra, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 07:40:24 np0005629333 exciting_dijkstra[332737]: 167 167
Feb 25 07:40:24 np0005629333 systemd[1]: libpod-919b6299b7d828c281e4cde1f9cee2b0d0b60949822b19d9a974dcd3a007976a.scope: Deactivated successfully.
Feb 25 07:40:24 np0005629333 conmon[332737]: conmon 919b6299b7d828c281e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-919b6299b7d828c281e4cde1f9cee2b0d0b60949822b19d9a974dcd3a007976a.scope/container/memory.events
Feb 25 07:40:24 np0005629333 podman[332721]: 2026-02-25 12:40:24.408802915 +0000 UTC m=+0.163899256 container attach 919b6299b7d828c281e4cde1f9cee2b0d0b60949822b19d9a974dcd3a007976a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_dijkstra, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:40:24 np0005629333 podman[332721]: 2026-02-25 12:40:24.409625939 +0000 UTC m=+0.164722270 container died 919b6299b7d828c281e4cde1f9cee2b0d0b60949822b19d9a974dcd3a007976a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_dijkstra, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:40:24 np0005629333 systemd[1]: var-lib-containers-storage-overlay-8f116f8591ecbeabef37b26bdc46c6052fdad5d613a2fe4d2bbe4c58cc6b8b92-merged.mount: Deactivated successfully.
Feb 25 07:40:24 np0005629333 podman[332721]: 2026-02-25 12:40:24.46780001 +0000 UTC m=+0.222896311 container remove 919b6299b7d828c281e4cde1f9cee2b0d0b60949822b19d9a974dcd3a007976a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_dijkstra, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:40:24 np0005629333 systemd[1]: libpod-conmon-919b6299b7d828c281e4cde1f9cee2b0d0b60949822b19d9a974dcd3a007976a.scope: Deactivated successfully.
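The exciting_dijkstra lines above trace one short-lived cephadm helper container through the full podman lifecycle, create, init, start, attach, died, remove, all within roughly 0.2 s. A sketch of watching for the same event sequence; the `podman events` subcommand and its --filter/--format flags are standard, though the exact JSON keys below are an assumption:

    import json
    import subprocess

    # Stream podman events for one container name as JSON lines
    # (runs until interrupted, since events is a live stream).
    proc = subprocess.Popen(
        ["podman", "events", "--filter", "container=exciting_dijkstra",
         "--format", "json"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        event = json.loads(line)
        # Assumed keys; podman's JSON event output exposes Status and Name.
        print(event.get("Status"), event.get("Name"))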
Feb 25 07:40:24 np0005629333 podman[332763]: 2026-02-25 12:40:24.655771555 +0000 UTC m=+0.060577471 container create 73a9fea86e792427fdb4e311d2b9b8c428fd8a4c2e2465a12c7c955a61121f29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 07:40:24 np0005629333 systemd[1]: Started libpod-conmon-73a9fea86e792427fdb4e311d2b9b8c428fd8a4c2e2465a12c7c955a61121f29.scope.
Feb 25 07:40:24 np0005629333 podman[332763]: 2026-02-25 12:40:24.629787152 +0000 UTC m=+0.034593108 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:40:24 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:40:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d4701cba3ce0e90fa8908bcd332d31caaa5b7043837a3c8d4760a5da4065997/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:40:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d4701cba3ce0e90fa8908bcd332d31caaa5b7043837a3c8d4760a5da4065997/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:40:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d4701cba3ce0e90fa8908bcd332d31caaa5b7043837a3c8d4760a5da4065997/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:40:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d4701cba3ce0e90fa8908bcd332d31caaa5b7043837a3c8d4760a5da4065997/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:40:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d4701cba3ce0e90fa8908bcd332d31caaa5b7043837a3c8d4760a5da4065997/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:40:24 np0005629333 podman[332763]: 2026-02-25 12:40:24.767067786 +0000 UTC m=+0.171873752 container init 73a9fea86e792427fdb4e311d2b9b8c428fd8a4c2e2465a12c7c955a61121f29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wing, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 07:40:24 np0005629333 podman[332763]: 2026-02-25 12:40:24.777583022 +0000 UTC m=+0.182388938 container start 73a9fea86e792427fdb4e311d2b9b8c428fd8a4c2e2465a12c7c955a61121f29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 07:40:24 np0005629333 podman[332763]: 2026-02-25 12:40:24.782196532 +0000 UTC m=+0.187002509 container attach 73a9fea86e792427fdb4e311d2b9b8c428fd8a4c2e2465a12c7c955a61121f29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 07:40:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 07:40:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3000.6 total, 600.0 interval
Cumulative writes: 25K writes, 105K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s
Cumulative WAL: 25K writes, 8766 syncs, 2.94 writes per sync, written: 0.10 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5316 writes, 21K keys, 5316 commit groups, 1.0 writes per commit group, ingest: 21.40 MB, 0.04 MB/s
Interval WAL: 5316 writes, 2101 syncs, 2.53 writes per sync, written: 0.02 GB, 0.04 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
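A quick consistency check on the WAL figures above, noting that "25K" is a rounded display value while the writes-per-sync ratio comes from the exact counters:

    interval_writes, interval_syncs = 5316, 2101
    print(round(interval_writes / interval_syncs, 2))  # 2.53, as logged

    # 2.94 writes/sync over 8766 syncs implies roughly this many
    # cumulative WAL writes behind the rounded "25K":
    print(round(2.94 * 8766))  # about 25772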
Feb 25 07:40:25 np0005629333 nova_compute[244014]: 2026-02-25 12:40:25.032 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:40:25 np0005629333 sad_wing[332780]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:40:25 np0005629333 sad_wing[332780]: --> All data devices are unavailable
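"0 physical, 3 LVM" followed by "All data devices are unavailable" is the expected outcome when ceph-volume is re-run against LVs that already belong to OSDs; the lvm list output later in the log shows each LV carrying a ceph.osd_id tag. A sketch of that tag check, parsing one lv_tags string of the form shown there (trimmed to a few tags):

    lv_tags = ("ceph.block_device=/dev/ceph_vg0/ceph_lv0,"
               "ceph.osd_id=0,ceph.objectstore=bluestore,ceph.type=block")

    # lv_tags is a comma-separated list of key=value pairs.
    tags = dict(item.split("=", 1) for item in lv_tags.split(","))
    already_an_osd = tags.get("ceph.osd_id", "") != ""
    print(already_an_osd, tags.get("ceph.osd_id"))  # True 0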
Feb 25 07:40:25 np0005629333 systemd[1]: libpod-73a9fea86e792427fdb4e311d2b9b8c428fd8a4c2e2465a12c7c955a61121f29.scope: Deactivated successfully.
Feb 25 07:40:25 np0005629333 podman[332763]: 2026-02-25 12:40:25.291111084 +0000 UTC m=+0.695917000 container died 73a9fea86e792427fdb4e311d2b9b8c428fd8a4c2e2465a12c7c955a61121f29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wing, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:40:25 np0005629333 systemd[1]: var-lib-containers-storage-overlay-9d4701cba3ce0e90fa8908bcd332d31caaa5b7043837a3c8d4760a5da4065997-merged.mount: Deactivated successfully.
Feb 25 07:40:25 np0005629333 podman[332763]: 2026-02-25 12:40:25.333645044 +0000 UTC m=+0.738450930 container remove 73a9fea86e792427fdb4e311d2b9b8c428fd8a4c2e2465a12c7c955a61121f29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:40:25 np0005629333 systemd[1]: libpod-conmon-73a9fea86e792427fdb4e311d2b9b8c428fd8a4c2e2465a12c7c955a61121f29.scope: Deactivated successfully.
Feb 25 07:40:25 np0005629333 nova_compute[244014]: 2026-02-25 12:40:25.621 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:40:25 np0005629333 nova_compute[244014]: 2026-02-25 12:40:25.621 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:40:25 np0005629333 podman[332873]: 2026-02-25 12:40:25.8409336 +0000 UTC m=+0.056874426 container create 92613676b2b7bd705852e3b0b68921b21a055f6c9cbe106ad72480186b72cee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ride, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 07:40:25 np0005629333 nova_compute[244014]: 2026-02-25 12:40:25.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:40:25 np0005629333 systemd[1]: Started libpod-conmon-92613676b2b7bd705852e3b0b68921b21a055f6c9cbe106ad72480186b72cee8.scope.
Feb 25 07:40:25 np0005629333 podman[332873]: 2026-02-25 12:40:25.811257262 +0000 UTC m=+0.027198108 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:40:25 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:40:25 np0005629333 podman[332873]: 2026-02-25 12:40:25.929629052 +0000 UTC m=+0.145569948 container init 92613676b2b7bd705852e3b0b68921b21a055f6c9cbe106ad72480186b72cee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ride, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 07:40:25 np0005629333 podman[332873]: 2026-02-25 12:40:25.937686399 +0000 UTC m=+0.153627225 container start 92613676b2b7bd705852e3b0b68921b21a055f6c9cbe106ad72480186b72cee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ride, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:40:25 np0005629333 podman[332873]: 2026-02-25 12:40:25.941328352 +0000 UTC m=+0.157269238 container attach 92613676b2b7bd705852e3b0b68921b21a055f6c9cbe106ad72480186b72cee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 07:40:25 np0005629333 reverent_ride[332889]: 167 167
Feb 25 07:40:25 np0005629333 systemd[1]: libpod-92613676b2b7bd705852e3b0b68921b21a055f6c9cbe106ad72480186b72cee8.scope: Deactivated successfully.
Feb 25 07:40:25 np0005629333 podman[332873]: 2026-02-25 12:40:25.944868282 +0000 UTC m=+0.160809108 container died 92613676b2b7bd705852e3b0b68921b21a055f6c9cbe106ad72480186b72cee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ride, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:40:25 np0005629333 systemd[1]: var-lib-containers-storage-overlay-76480631b04f531ace4b1e2871b2d8f7ff116373b49589d0490cab25ecfee652-merged.mount: Deactivated successfully.
Feb 25 07:40:25 np0005629333 podman[332873]: 2026-02-25 12:40:25.993291799 +0000 UTC m=+0.209232645 container remove 92613676b2b7bd705852e3b0b68921b21a055f6c9cbe106ad72480186b72cee8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ride, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 07:40:26 np0005629333 systemd[1]: libpod-conmon-92613676b2b7bd705852e3b0b68921b21a055f6c9cbe106ad72480186b72cee8.scope: Deactivated successfully.
Feb 25 07:40:26 np0005629333 podman[332912]: 2026-02-25 12:40:26.163309797 +0000 UTC m=+0.047474481 container create a1aa50acffc28e528bf4f7129468ffbc8409f1d251a5871801df46af7bf80401 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mendel, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:40:26 np0005629333 systemd[1]: Started libpod-conmon-a1aa50acffc28e528bf4f7129468ffbc8409f1d251a5871801df46af7bf80401.scope.
Feb 25 07:40:26 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:40:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 305 active+clean; 233 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 33 KiB/s wr, 172 op/s
Feb 25 07:40:26 np0005629333 podman[332912]: 2026-02-25 12:40:26.142651134 +0000 UTC m=+0.026815818 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:40:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/039881223f401031a2217200c7d84c21b3dcb108e8ff627176641ddb67dee5c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:40:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/039881223f401031a2217200c7d84c21b3dcb108e8ff627176641ddb67dee5c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:40:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/039881223f401031a2217200c7d84c21b3dcb108e8ff627176641ddb67dee5c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:40:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/039881223f401031a2217200c7d84c21b3dcb108e8ff627176641ddb67dee5c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:40:26 np0005629333 podman[332912]: 2026-02-25 12:40:26.259516282 +0000 UTC m=+0.143681016 container init a1aa50acffc28e528bf4f7129468ffbc8409f1d251a5871801df46af7bf80401 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mendel, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 07:40:26 np0005629333 podman[332912]: 2026-02-25 12:40:26.272507888 +0000 UTC m=+0.156672572 container start a1aa50acffc28e528bf4f7129468ffbc8409f1d251a5871801df46af7bf80401 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mendel, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:40:26 np0005629333 podman[332912]: 2026-02-25 12:40:26.276907052 +0000 UTC m=+0.161071786 container attach a1aa50acffc28e528bf4f7129468ffbc8409f1d251a5871801df46af7bf80401 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Feb 25 07:40:26 np0005629333 nova_compute[244014]: 2026-02-25 12:40:26.528 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "4d29081c-59fe-483f-b905-8c7349e84fc5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:40:26 np0005629333 nova_compute[244014]: 2026-02-25 12:40:26.529 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:40:26 np0005629333 nova_compute[244014]: 2026-02-25 12:40:26.549 244018 DEBUG nova.compute.manager [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:40:26 np0005629333 confident_mendel[332928]: {
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:    "0": [
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:        {
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "devices": [
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "/dev/loop3"
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            ],
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "lv_name": "ceph_lv0",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "lv_size": "21470642176",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "name": "ceph_lv0",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "tags": {
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.cluster_name": "ceph",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.crush_device_class": "",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.encrypted": "0",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.objectstore": "bluestore",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.osd_id": "0",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.type": "block",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.vdo": "0",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.with_tpm": "0"
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            },
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "type": "block",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "vg_name": "ceph_vg0"
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:        }
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:    ],
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:    "1": [
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:        {
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "devices": [
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "/dev/loop4"
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            ],
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "lv_name": "ceph_lv1",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "lv_size": "21470642176",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "name": "ceph_lv1",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "tags": {
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.cluster_name": "ceph",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.crush_device_class": "",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.encrypted": "0",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.objectstore": "bluestore",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.osd_id": "1",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.type": "block",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.vdo": "0",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.with_tpm": "0"
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            },
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "type": "block",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "vg_name": "ceph_vg1"
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:        }
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:    ],
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:    "2": [
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:        {
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "devices": [
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "/dev/loop5"
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            ],
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "lv_name": "ceph_lv2",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "lv_size": "21470642176",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "name": "ceph_lv2",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "tags": {
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.cluster_name": "ceph",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.crush_device_class": "",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.encrypted": "0",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.objectstore": "bluestore",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.osd_id": "2",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.type": "block",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.vdo": "0",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:                "ceph.with_tpm": "0"
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            },
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "type": "block",
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:            "vg_name": "ceph_vg2"
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:        }
Feb 25 07:40:26 np0005629333 confident_mendel[332928]:    ]
Feb 25 07:40:26 np0005629333 confident_mendel[332928]: }
Feb 25 07:40:26 np0005629333 systemd[1]: libpod-a1aa50acffc28e528bf4f7129468ffbc8409f1d251a5871801df46af7bf80401.scope: Deactivated successfully.
Feb 25 07:40:26 np0005629333 podman[332912]: 2026-02-25 12:40:26.595899704 +0000 UTC m=+0.480064408 container died a1aa50acffc28e528bf4f7129468ffbc8409f1d251a5871801df46af7bf80401 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mendel, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:40:26 np0005629333 systemd[1]: var-lib-containers-storage-overlay-039881223f401031a2217200c7d84c21b3dcb108e8ff627176641ddb67dee5c3-merged.mount: Deactivated successfully.
Feb 25 07:40:26 np0005629333 nova_compute[244014]: 2026-02-25 12:40:26.645 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:40:26 np0005629333 nova_compute[244014]: 2026-02-25 12:40:26.646 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:40:26 np0005629333 podman[332912]: 2026-02-25 12:40:26.654115936 +0000 UTC m=+0.538280620 container remove a1aa50acffc28e528bf4f7129468ffbc8409f1d251a5871801df46af7bf80401 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mendel, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 07:40:26 np0005629333 nova_compute[244014]: 2026-02-25 12:40:26.655 244018 DEBUG nova.virt.hardware [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:40:26 np0005629333 nova_compute[244014]: 2026-02-25 12:40:26.655 244018 INFO nova.compute.claims [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:40:26 np0005629333 systemd[1]: libpod-conmon-a1aa50acffc28e528bf4f7129468ffbc8409f1d251a5871801df46af7bf80401.scope: Deactivated successfully.
Feb 25 07:40:26 np0005629333 nova_compute[244014]: 2026-02-25 12:40:26.842 244018 DEBUG oslo_concurrency.processutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:40:27 np0005629333 ceph-mgr[76641]: [devicehealth INFO root] Check health
Feb 25 07:40:27 np0005629333 podman[333034]: 2026-02-25 12:40:27.198589821 +0000 UTC m=+0.046456172 container create 7d8a916c08eb29eed3c809b4a7e36ab574dd0971603d62eae620d13739e22a3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_greider, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 07:40:27 np0005629333 systemd[1]: Started libpod-conmon-7d8a916c08eb29eed3c809b4a7e36ab574dd0971603d62eae620d13739e22a3e.scope.
Feb 25 07:40:27 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:40:27 np0005629333 podman[333034]: 2026-02-25 12:40:27.179994276 +0000 UTC m=+0.027860667 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:40:27 np0005629333 podman[333034]: 2026-02-25 12:40:27.276553901 +0000 UTC m=+0.124420282 container init 7d8a916c08eb29eed3c809b4a7e36ab574dd0971603d62eae620d13739e22a3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_greider, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:40:27 np0005629333 podman[333034]: 2026-02-25 12:40:27.280957905 +0000 UTC m=+0.128824266 container start 7d8a916c08eb29eed3c809b4a7e36ab574dd0971603d62eae620d13739e22a3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 07:40:27 np0005629333 podman[333034]: 2026-02-25 12:40:27.284070713 +0000 UTC m=+0.131937054 container attach 7d8a916c08eb29eed3c809b4a7e36ab574dd0971603d62eae620d13739e22a3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 07:40:27 np0005629333 thirsty_greider[333051]: 167 167
Feb 25 07:40:27 np0005629333 systemd[1]: libpod-7d8a916c08eb29eed3c809b4a7e36ab574dd0971603d62eae620d13739e22a3e.scope: Deactivated successfully.
Feb 25 07:40:27 np0005629333 conmon[333051]: conmon 7d8a916c08eb29eed3c8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7d8a916c08eb29eed3c809b4a7e36ab574dd0971603d62eae620d13739e22a3e.scope/container/memory.events
Feb 25 07:40:27 np0005629333 podman[333034]: 2026-02-25 12:40:27.286383809 +0000 UTC m=+0.134250150 container died 7d8a916c08eb29eed3c809b4a7e36ab574dd0971603d62eae620d13739e22a3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:40:27 np0005629333 systemd[1]: var-lib-containers-storage-overlay-0029316227fdf6c41299d82668794f2f1d81adccabbcd36b289d8feba6cf837a-merged.mount: Deactivated successfully.
Feb 25 07:40:27 np0005629333 podman[333034]: 2026-02-25 12:40:27.326918472 +0000 UTC m=+0.174784823 container remove 7d8a916c08eb29eed3c809b4a7e36ab574dd0971603d62eae620d13739e22a3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_greider, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:40:27 np0005629333 systemd[1]: libpod-conmon-7d8a916c08eb29eed3c809b4a7e36ab574dd0971603d62eae620d13739e22a3e.scope: Deactivated successfully.
Feb 25 07:40:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:40:27 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1977409649' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:40:27 np0005629333 nova_compute[244014]: 2026-02-25 12:40:27.434 244018 DEBUG oslo_concurrency.processutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:40:27 np0005629333 nova_compute[244014]: 2026-02-25 12:40:27.441 244018 DEBUG nova.compute.provider_tree [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:40:27 np0005629333 nova_compute[244014]: 2026-02-25 12:40:27.471 244018 DEBUG nova.scheduler.client.report [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:40:27 np0005629333 podman[333077]: 2026-02-25 12:40:27.490822528 +0000 UTC m=+0.045927277 container create 66884c9ebb48687b5522b20026652f98a44fd1dd96cdddcd5f3bbcf6b6932f16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:40:27 np0005629333 nova_compute[244014]: 2026-02-25 12:40:27.499 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:40:27 np0005629333 nova_compute[244014]: 2026-02-25 12:40:27.500 244018 DEBUG nova.compute.manager [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:40:27 np0005629333 systemd[1]: Started libpod-conmon-66884c9ebb48687b5522b20026652f98a44fd1dd96cdddcd5f3bbcf6b6932f16.scope.
Feb 25 07:40:27 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:40:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93adbc3f66e8ba3bbbf9a89a6b1207ac5680319f61f12f19301475b1dde9fb8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:40:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93adbc3f66e8ba3bbbf9a89a6b1207ac5680319f61f12f19301475b1dde9fb8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:40:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93adbc3f66e8ba3bbbf9a89a6b1207ac5680319f61f12f19301475b1dde9fb8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:40:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93adbc3f66e8ba3bbbf9a89a6b1207ac5680319f61f12f19301475b1dde9fb8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:40:27 np0005629333 podman[333077]: 2026-02-25 12:40:27.46785364 +0000 UTC m=+0.022958359 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:40:27 np0005629333 nova_compute[244014]: 2026-02-25 12:40:27.578 244018 DEBUG nova.compute.manager [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:40:27 np0005629333 nova_compute[244014]: 2026-02-25 12:40:27.578 244018 DEBUG nova.network.neutron [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:40:27 np0005629333 podman[333077]: 2026-02-25 12:40:27.578979746 +0000 UTC m=+0.134084495 container init 66884c9ebb48687b5522b20026652f98a44fd1dd96cdddcd5f3bbcf6b6932f16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Feb 25 07:40:27 np0005629333 podman[333077]: 2026-02-25 12:40:27.587525397 +0000 UTC m=+0.142630106 container start 66884c9ebb48687b5522b20026652f98a44fd1dd96cdddcd5f3bbcf6b6932f16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 07:40:27 np0005629333 podman[333077]: 2026-02-25 12:40:27.590738037 +0000 UTC m=+0.145842796 container attach 66884c9ebb48687b5522b20026652f98a44fd1dd96cdddcd5f3bbcf6b6932f16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 07:40:27 np0005629333 nova_compute[244014]: 2026-02-25 12:40:27.614 244018 INFO nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:40:27 np0005629333 nova_compute[244014]: 2026-02-25 12:40:27.645 244018 DEBUG nova.compute.manager [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:40:27 np0005629333 nova_compute[244014]: 2026-02-25 12:40:27.730 244018 DEBUG nova.compute.manager [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:40:27 np0005629333 nova_compute[244014]: 2026-02-25 12:40:27.731 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:40:27 np0005629333 nova_compute[244014]: 2026-02-25 12:40:27.732 244018 INFO nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Creating image(s)
Feb 25 07:40:27 np0005629333 nova_compute[244014]: 2026-02-25 12:40:27.754 244018 DEBUG nova.storage.rbd_utils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 4d29081c-59fe-483f-b905-8c7349e84fc5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:40:27 np0005629333 nova_compute[244014]: 2026-02-25 12:40:27.776 244018 DEBUG nova.storage.rbd_utils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 4d29081c-59fe-483f-b905-8c7349e84fc5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:40:27 np0005629333 nova_compute[244014]: 2026-02-25 12:40:27.799 244018 DEBUG nova.storage.rbd_utils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 4d29081c-59fe-483f-b905-8c7349e84fc5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:40:27 np0005629333 nova_compute[244014]: 2026-02-25 12:40:27.803 244018 DEBUG oslo_concurrency.processutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:40:27 np0005629333 nova_compute[244014]: 2026-02-25 12:40:27.882 244018 DEBUG nova.policy [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c5bc24b5f5048469cf3f701ce511bfa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '503e879cd1f44a16b9baef106ceba949', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:40:27 np0005629333 nova_compute[244014]: 2026-02-25 12:40:27.886 244018 DEBUG oslo_concurrency.processutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:40:27 np0005629333 nova_compute[244014]: 2026-02-25 12:40:27.886 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:40:27 np0005629333 nova_compute[244014]: 2026-02-25 12:40:27.887 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:40:27 np0005629333 nova_compute[244014]: 2026-02-25 12:40:27.888 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:40:27 np0005629333 nova_compute[244014]: 2026-02-25 12:40:27.909 244018 DEBUG nova.storage.rbd_utils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 4d29081c-59fe-483f-b905-8c7349e84fc5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:40:27 np0005629333 nova_compute[244014]: 2026-02-25 12:40:27.918 244018 DEBUG oslo_concurrency.processutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 4d29081c-59fe-483f-b905-8c7349e84fc5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:40:28 np0005629333 nova_compute[244014]: 2026-02-25 12:40:28.050 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:40:28 np0005629333 nova_compute[244014]: 2026-02-25 12:40:28.170 244018 DEBUG oslo_concurrency.processutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 4d29081c-59fe-483f-b905-8c7349e84fc5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.253s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:40:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 233 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 654 KiB/s rd, 1.4 KiB/s wr, 51 op/s
Feb 25 07:40:28 np0005629333 lvm[333298]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:40:28 np0005629333 lvm[333298]: VG ceph_vg0 finished
Feb 25 07:40:28 np0005629333 lvm[333302]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:40:28 np0005629333 lvm[333302]: VG ceph_vg2 finished
Feb 25 07:40:28 np0005629333 nova_compute[244014]: 2026-02-25 12:40:28.254 244018 DEBUG nova.storage.rbd_utils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] resizing rbd image 4d29081c-59fe-483f-b905-8c7349e84fc5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 07:40:28 np0005629333 lvm[333301]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:40:28 np0005629333 lvm[333301]: VG ceph_vg1 finished
Feb 25 07:40:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:40:28 np0005629333 magical_jennings[333093]: {}
Feb 25 07:40:28 np0005629333 nova_compute[244014]: 2026-02-25 12:40:28.335 244018 DEBUG nova.objects.instance [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'migration_context' on Instance uuid 4d29081c-59fe-483f-b905-8c7349e84fc5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:40:28 np0005629333 systemd[1]: libpod-66884c9ebb48687b5522b20026652f98a44fd1dd96cdddcd5f3bbcf6b6932f16.scope: Deactivated successfully.
Feb 25 07:40:28 np0005629333 systemd[1]: libpod-66884c9ebb48687b5522b20026652f98a44fd1dd96cdddcd5f3bbcf6b6932f16.scope: Consumed 1.025s CPU time.
Feb 25 07:40:28 np0005629333 nova_compute[244014]: 2026-02-25 12:40:28.353 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:40:28 np0005629333 nova_compute[244014]: 2026-02-25 12:40:28.353 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Ensure instance console log exists: /var/lib/nova/instances/4d29081c-59fe-483f-b905-8c7349e84fc5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:40:28 np0005629333 nova_compute[244014]: 2026-02-25 12:40:28.354 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:40:28 np0005629333 nova_compute[244014]: 2026-02-25 12:40:28.354 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:40:28 np0005629333 nova_compute[244014]: 2026-02-25 12:40:28.354 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:40:28 np0005629333 podman[333343]: 2026-02-25 12:40:28.383703455 +0000 UTC m=+0.018452542 container died 66884c9ebb48687b5522b20026652f98a44fd1dd96cdddcd5f3bbcf6b6932f16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:40:28 np0005629333 systemd[1]: var-lib-containers-storage-overlay-93adbc3f66e8ba3bbbf9a89a6b1207ac5680319f61f12f19301475b1dde9fb8f-merged.mount: Deactivated successfully.
Feb 25 07:40:28 np0005629333 podman[333343]: 2026-02-25 12:40:28.410253074 +0000 UTC m=+0.045002131 container remove 66884c9ebb48687b5522b20026652f98a44fd1dd96cdddcd5f3bbcf6b6932f16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_jennings, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:40:28 np0005629333 systemd[1]: libpod-conmon-66884c9ebb48687b5522b20026652f98a44fd1dd96cdddcd5f3bbcf6b6932f16.scope: Deactivated successfully.
Feb 25 07:40:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:40:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:40:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:40:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:40:29 np0005629333 nova_compute[244014]: 2026-02-25 12:40:29.473 244018 DEBUG nova.network.neutron [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Successfully created port: 29c2436f-5a46-4a8e-9f40-9b993368661c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:40:29 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:40:29 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:40:30 np0005629333 nova_compute[244014]: 2026-02-25 12:40:30.034 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:40:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 305 active+clean; 233 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 654 KiB/s rd, 1.4 KiB/s wr, 51 op/s
Feb 25 07:40:30 np0005629333 nova_compute[244014]: 2026-02-25 12:40:30.276 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023215.274664, 01328724-b95f-4b36-809a-ddc156dd0dde => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:40:30 np0005629333 nova_compute[244014]: 2026-02-25 12:40:30.277 244018 INFO nova.compute.manager [-] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] VM Stopped (Lifecycle Event)
Feb 25 07:40:30 np0005629333 nova_compute[244014]: 2026-02-25 12:40:30.299 244018 DEBUG nova.compute.manager [None req-5866f534-21b9-435b-9deb-4b269b7cdf01 - - - - - -] [instance: 01328724-b95f-4b36-809a-ddc156dd0dde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:40:30 np0005629333 nova_compute[244014]: 2026-02-25 12:40:30.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:40:30 np0005629333 nova_compute[244014]: 2026-02-25 12:40:30.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 07:40:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:40:30
Feb 25 07:40:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:40:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:40:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'volumes', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'backups']
Feb 25 07:40:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:40:31 np0005629333 nova_compute[244014]: 2026-02-25 12:40:31.421 244018 DEBUG nova.network.neutron [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Successfully updated port: 29c2436f-5a46-4a8e-9f40-9b993368661c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:40:31 np0005629333 nova_compute[244014]: 2026-02-25 12:40:31.446 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "refresh_cache-4d29081c-59fe-483f-b905-8c7349e84fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:40:31 np0005629333 nova_compute[244014]: 2026-02-25 12:40:31.446 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquired lock "refresh_cache-4d29081c-59fe-483f-b905-8c7349e84fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:40:31 np0005629333 nova_compute[244014]: 2026-02-25 12:40:31.447 244018 DEBUG nova.network.neutron [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:40:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:40:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:40:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:40:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:40:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:40:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:40:31 np0005629333 nova_compute[244014]: 2026-02-25 12:40:31.627 244018 DEBUG nova.compute.manager [req-f1748503-3356-4fd3-99d5-f65bfb43ed8e req-6aaea6e6-41ca-4161-8a92-01e44c526328 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Received event network-changed-29c2436f-5a46-4a8e-9f40-9b993368661c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:40:31 np0005629333 nova_compute[244014]: 2026-02-25 12:40:31.627 244018 DEBUG nova.compute.manager [req-f1748503-3356-4fd3-99d5-f65bfb43ed8e req-6aaea6e6-41ca-4161-8a92-01e44c526328 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Refreshing instance network info cache due to event network-changed-29c2436f-5a46-4a8e-9f40-9b993368661c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:40:31 np0005629333 nova_compute[244014]: 2026-02-25 12:40:31.628 244018 DEBUG oslo_concurrency.lockutils [req-f1748503-3356-4fd3-99d5-f65bfb43ed8e req-6aaea6e6-41ca-4161-8a92-01e44c526328 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-4d29081c-59fe-483f-b905-8c7349e84fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:40:31 np0005629333 nova_compute[244014]: 2026-02-25 12:40:31.680 244018 DEBUG nova.network.neutron [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:40:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:40:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:40:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:40:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:40:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:40:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:40:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:40:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:40:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:40:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:40:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 258 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 955 KiB/s wr, 33 op/s
Feb 25 07:40:32 np0005629333 nova_compute[244014]: 2026-02-25 12:40:32.477 244018 DEBUG nova.network.neutron [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Updating instance_info_cache with network_info: [{"id": "29c2436f-5a46-4a8e-9f40-9b993368661c", "address": "fa:16:3e:48:85:8e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c2436f-5a", "ovs_interfaceid": "29c2436f-5a46-4a8e-9f40-9b993368661c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:40:32 np0005629333 nova_compute[244014]: 2026-02-25 12:40:32.497 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Releasing lock "refresh_cache-4d29081c-59fe-483f-b905-8c7349e84fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:40:32 np0005629333 nova_compute[244014]: 2026-02-25 12:40:32.497 244018 DEBUG nova.compute.manager [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Instance network_info: |[{"id": "29c2436f-5a46-4a8e-9f40-9b993368661c", "address": "fa:16:3e:48:85:8e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c2436f-5a", "ovs_interfaceid": "29c2436f-5a46-4a8e-9f40-9b993368661c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 07:40:32 np0005629333 nova_compute[244014]: 2026-02-25 12:40:32.498 244018 DEBUG oslo_concurrency.lockutils [req-f1748503-3356-4fd3-99d5-f65bfb43ed8e req-6aaea6e6-41ca-4161-8a92-01e44c526328 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-4d29081c-59fe-483f-b905-8c7349e84fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:40:32 np0005629333 nova_compute[244014]: 2026-02-25 12:40:32.498 244018 DEBUG nova.network.neutron [req-f1748503-3356-4fd3-99d5-f65bfb43ed8e req-6aaea6e6-41ca-4161-8a92-01e44c526328 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Refreshing network info cache for port 29c2436f-5a46-4a8e-9f40-9b993368661c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:40:32 np0005629333 nova_compute[244014]: 2026-02-25 12:40:32.500 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Start _get_guest_xml network_info=[{"id": "29c2436f-5a46-4a8e-9f40-9b993368661c", "address": "fa:16:3e:48:85:8e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c2436f-5a", "ovs_interfaceid": "29c2436f-5a46-4a8e-9f40-9b993368661c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 07:40:32 np0005629333 nova_compute[244014]: 2026-02-25 12:40:32.506 244018 WARNING nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:40:32 np0005629333 nova_compute[244014]: 2026-02-25 12:40:32.512 244018 DEBUG nova.virt.libvirt.host [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:40:32 np0005629333 nova_compute[244014]: 2026-02-25 12:40:32.512 244018 DEBUG nova.virt.libvirt.host [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:40:32 np0005629333 nova_compute[244014]: 2026-02-25 12:40:32.518 244018 DEBUG nova.virt.libvirt.host [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:40:32 np0005629333 nova_compute[244014]: 2026-02-25 12:40:32.519 244018 DEBUG nova.virt.libvirt.host [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
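The two probes above (cgroups v1 misses, v2 hits) can be approximated from the filesystem alone; a sketch assuming the standard mount-point layout, not the driver's actual implementation:

    import os

    def has_cgroupsv1_cpu() -> bool:
        # Simplification: a v1 "cpu" controller shows up as its own hierarchy.
        return os.path.isdir("/sys/fs/cgroup/cpu")

    def has_cgroupsv2_cpu() -> bool:
        # v2 lists every available controller in one file at the unified root.
        path = "/sys/fs/cgroup/cgroup.controllers"
        if not os.path.exists(path):
            return False
        with open(path) as f:
            return "cpu" in f.read().split()

    # On this host the v1 probe fails and the v2 probe succeeds, as logged.
    print(has_cgroupsv1_cpu(), has_cgroupsv2_cpu())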
Feb 25 07:40:32 np0005629333 nova_compute[244014]: 2026-02-25 12:40:32.519 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:40:32 np0005629333 nova_compute[244014]: 2026-02-25 12:40:32.519 244018 DEBUG nova.virt.hardware [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:40:32 np0005629333 nova_compute[244014]: 2026-02-25 12:40:32.520 244018 DEBUG nova.virt.hardware [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:40:32 np0005629333 nova_compute[244014]: 2026-02-25 12:40:32.520 244018 DEBUG nova.virt.hardware [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:40:32 np0005629333 nova_compute[244014]: 2026-02-25 12:40:32.520 244018 DEBUG nova.virt.hardware [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:40:32 np0005629333 nova_compute[244014]: 2026-02-25 12:40:32.520 244018 DEBUG nova.virt.hardware [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:40:32 np0005629333 nova_compute[244014]: 2026-02-25 12:40:32.521 244018 DEBUG nova.virt.hardware [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:40:32 np0005629333 nova_compute[244014]: 2026-02-25 12:40:32.521 244018 DEBUG nova.virt.hardware [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:40:32 np0005629333 nova_compute[244014]: 2026-02-25 12:40:32.521 244018 DEBUG nova.virt.hardware [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:40:32 np0005629333 nova_compute[244014]: 2026-02-25 12:40:32.521 244018 DEBUG nova.virt.hardware [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:40:32 np0005629333 nova_compute[244014]: 2026-02-25 12:40:32.522 244018 DEBUG nova.virt.hardware [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:40:32 np0005629333 nova_compute[244014]: 2026-02-25 12:40:32.522 244018 DEBUG nova.virt.hardware [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
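With no flavor or image constraints (preferred 0:0:0 against maxima of 65536 each), any factorization of the vCPU count is a candidate topology, which for 1 vCPU collapses to the single 1:1:1 result logged above; a brute-force sketch of that enumeration, not Nova's code, with smaller maxima than logged to keep the demo fast:

    from itertools import product

    def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
        # Yield every (sockets, cores, threads) whose product equals the vCPUs.
        for s, c, t in product(range(1, max_sockets + 1),
                               range(1, max_cores + 1),
                               range(1, max_threads + 1)):
            if s * c * t == vcpus:
                yield (s, c, t)

    print(list(possible_topologies(1, 8, 8, 2)))  # -> [(1, 1, 1)]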
Feb 25 07:40:32 np0005629333 nova_compute[244014]: 2026-02-25 12:40:32.525 244018 DEBUG oslo_concurrency.processutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.018 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023218.0168273, 89f166ca-c0fc-48ec-baf9-eda20ff22f2b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.019 244018 INFO nova.compute.manager [-] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:40:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:40:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3005480750' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.040 244018 DEBUG nova.compute.manager [None req-f16340c9-fc6c-4be3-bf30-24b746ed22f7 - - - - - -] [instance: 89f166ca-c0fc-48ec-baf9-eda20ff22f2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.054 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.055 244018 DEBUG oslo_concurrency.processutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
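The half-second `ceph mon dump` above is how the driver learns the monitor address that appears in the RBD disk XML further down; a sketch that runs the same command and reads its JSON (the "mons"/"addr" field names are an assumption about that output's usual shape):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout

    # Typical output carries a "mons" list; each entry names a monitor address.
    print([m.get("addr") for m in json.loads(out).get("mons", [])])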
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.076 244018 DEBUG nova.storage.rbd_utils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 4d29081c-59fe-483f-b905-8c7349e84fc5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.079 244018 DEBUG oslo_concurrency.processutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:40:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:40:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:40:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3523040187' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.588 244018 DEBUG oslo_concurrency.processutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.590 244018 DEBUG nova.virt.libvirt.vif [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:40:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1761451429',display_name='tempest-ServersTestJSON-server-1761451429',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1761451429',id=100,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD0D9Rc8lFbW4qL5T7jR3oexmsn8QI7vD/zBEdjljHhz+WbYLBt+vZCC/EcArywe2HQ7Qo+4kBD5ze0YkonOn3bgWm+BpeU3ibaQn/otIhthIErVKZalQ86qnW2bWscjqA==',key_name='tempest-key-57149867',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-2108qsoc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:40:27Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=4d29081c-59fe-483f-b905-8c7349e84fc5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29c2436f-5a46-4a8e-9f40-9b993368661c", "address": "fa:16:3e:48:85:8e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c2436f-5a", "ovs_interfaceid": "29c2436f-5a46-4a8e-9f40-9b993368661c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.590 244018 DEBUG nova.network.os_vif_util [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "29c2436f-5a46-4a8e-9f40-9b993368661c", "address": "fa:16:3e:48:85:8e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c2436f-5a", "ovs_interfaceid": "29c2436f-5a46-4a8e-9f40-9b993368661c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.591 244018 DEBUG nova.network.os_vif_util [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:85:8e,bridge_name='br-int',has_traffic_filtering=True,id=29c2436f-5a46-4a8e-9f40-9b993368661c,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29c2436f-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.592 244018 DEBUG nova.objects.instance [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4d29081c-59fe-483f-b905-8c7349e84fc5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.615 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:40:33 np0005629333 nova_compute[244014]:  <uuid>4d29081c-59fe-483f-b905-8c7349e84fc5</uuid>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:  <name>instance-00000064</name>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServersTestJSON-server-1761451429</nova:name>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:40:32</nova:creationTime>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:40:33 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:        <nova:user uuid="4c5bc24b5f5048469cf3f701ce511bfa">tempest-ServersTestJSON-1472039551-project-member</nova:user>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:        <nova:project uuid="503e879cd1f44a16b9baef106ceba949">tempest-ServersTestJSON-1472039551</nova:project>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:        <nova:port uuid="29c2436f-5a46-4a8e-9f40-9b993368661c">
Feb 25 07:40:33 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <entry name="serial">4d29081c-59fe-483f-b905-8c7349e84fc5</entry>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <entry name="uuid">4d29081c-59fe-483f-b905-8c7349e84fc5</entry>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/4d29081c-59fe-483f-b905-8c7349e84fc5_disk">
Feb 25 07:40:33 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:40:33 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/4d29081c-59fe-483f-b905-8c7349e84fc5_disk.config">
Feb 25 07:40:33 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:40:33 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:48:85:8e"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <target dev="tap29c2436f-5a"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/4d29081c-59fe-483f-b905-8c7349e84fc5/console.log" append="off"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:40:33 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:40:33 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:40:33 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:40:33 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
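The domain XML dumped above is ordinary libvirt XML; a standard-library sketch, abridged to the nodes of interest, of walking its devices:

    import xml.etree.ElementTree as ET

    # Abridged from the domain XML above: two RBD-backed disks, one interface.
    xml_text = """<domain type="kvm">
      <devices>
        <disk type="network" device="disk"><target dev="vda" bus="virtio"/></disk>
        <disk type="network" device="cdrom"><target dev="sda" bus="sata"/></disk>
        <interface type="ethernet"><target dev="tap29c2436f-5a"/></interface>
      </devices>
    </domain>"""

    root = ET.fromstring(xml_text)
    for disk in root.iter("disk"):
        target = disk.find("target")
        print(disk.get("device"), target.get("dev"), target.get("bus"))
    print("interface:", root.find("./devices/interface/target").get("dev"))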
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.617 244018 DEBUG nova.compute.manager [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Preparing to wait for external event network-vif-plugged-29c2436f-5a46-4a8e-9f40-9b993368661c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.617 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.618 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.618 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
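The acquiring/acquired/released triple above is the standard oslo.concurrency pattern; a sketch of the same per-instance event lock (the body is a placeholder):

    from oslo_concurrency import lockutils

    # Matches the lock name in the three lines above; held only long enough
    # to create-or-get the event record, hence the 0.000s hold time logged.
    with lockutils.lock("4d29081c-59fe-483f-b905-8c7349e84fc5-events"):
        pass  # placeholder for _create_or_get_event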
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.619 244018 DEBUG nova.virt.libvirt.vif [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:40:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1761451429',display_name='tempest-ServersTestJSON-server-1761451429',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1761451429',id=100,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD0D9Rc8lFbW4qL5T7jR3oexmsn8QI7vD/zBEdjljHhz+WbYLBt+vZCC/EcArywe2HQ7Qo+4kBD5ze0YkonOn3bgWm+BpeU3ibaQn/otIhthIErVKZalQ86qnW2bWscjqA==',key_name='tempest-key-57149867',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-2108qsoc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:40:27Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=4d29081c-59fe-483f-b905-8c7349e84fc5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29c2436f-5a46-4a8e-9f40-9b993368661c", "address": "fa:16:3e:48:85:8e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c2436f-5a", "ovs_interfaceid": "29c2436f-5a46-4a8e-9f40-9b993368661c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.619 244018 DEBUG nova.network.os_vif_util [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "29c2436f-5a46-4a8e-9f40-9b993368661c", "address": "fa:16:3e:48:85:8e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c2436f-5a", "ovs_interfaceid": "29c2436f-5a46-4a8e-9f40-9b993368661c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.620 244018 DEBUG nova.network.os_vif_util [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:85:8e,bridge_name='br-int',has_traffic_filtering=True,id=29c2436f-5a46-4a8e-9f40-9b993368661c,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29c2436f-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.620 244018 DEBUG os_vif [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:85:8e,bridge_name='br-int',has_traffic_filtering=True,id=29c2436f-5a46-4a8e-9f40-9b993368661c,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29c2436f-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.621 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.621 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.622 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.625 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.625 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap29c2436f-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.626 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap29c2436f-5a, col_values=(('external_ids', {'iface-id': '29c2436f-5a46-4a8e-9f40-9b993368661c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:85:8e', 'vm-uuid': '4d29081c-59fe-483f-b905-8c7349e84fc5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
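The two ovsdbapp transactions above (AddPortCommand, then DbSetCommand on the Interface row) have a well-known ovs-vsctl equivalent; a subprocess sketch of that equivalent, with every value taken from the log:

    import subprocess

    bridge, port = "br-int", "tap29c2436f-5a"
    subprocess.run(["ovs-vsctl", "--may-exist", "add-port", bridge, port],
                   check=True)
    subprocess.run(
        ["ovs-vsctl", "set", "Interface", port,
         "external_ids:iface-id=29c2436f-5a46-4a8e-9f40-9b993368661c",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:48:85:8e",
         "external_ids:vm-uuid=4d29081c-59fe-483f-b905-8c7349e84fc5"],
        check=True)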
Feb 25 07:40:33 np0005629333 NetworkManager[49836]: <info>  [1772023233.6286] manager: (tap29c2436f-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/405)
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.630 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.632 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.633 244018 INFO os_vif [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:85:8e,bridge_name='br-int',has_traffic_filtering=True,id=29c2436f-5a46-4a8e-9f40-9b993368661c,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29c2436f-5a')#033[00m
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.679 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.679 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.679 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No VIF found with MAC fa:16:3e:48:85:8e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.680 244018 INFO nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Using config drive#033[00m
Feb 25 07:40:33 np0005629333 nova_compute[244014]: 2026-02-25 12:40:33.708 244018 DEBUG nova.storage.rbd_utils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 4d29081c-59fe-483f-b905-8c7349e84fc5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:40:34 np0005629333 nova_compute[244014]: 2026-02-25 12:40:34.179 244018 INFO nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Creating config drive at /var/lib/nova/instances/4d29081c-59fe-483f-b905-8c7349e84fc5/disk.config#033[00m
Feb 25 07:40:34 np0005629333 nova_compute[244014]: 2026-02-25 12:40:34.186 244018 DEBUG oslo_concurrency.processutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4d29081c-59fe-483f-b905-8c7349e84fc5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6tczv0z2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:40:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.9 MiB/s wr, 29 op/s
Feb 25 07:40:34 np0005629333 nova_compute[244014]: 2026-02-25 12:40:34.326 244018 DEBUG oslo_concurrency.processutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4d29081c-59fe-483f-b905-8c7349e84fc5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6tczv0z2" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
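The mkisofs invocation above is logged as a flat string, which obscures that -publisher takes the whole version string as a single argument; the same call sketched as an argv list (output and staging paths are hypothetical stand-ins for the logged ones):

    import subprocess

    subprocess.run(
        ["/usr/bin/mkisofs", "-o", "/tmp/disk.config",  # hypothetical output path
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
         "-quiet", "-J", "-r", "-V", "config-2",
         "/tmp/metadata-staging"],                      # hypothetical input dir
        check=True)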
Feb 25 07:40:34 np0005629333 nova_compute[244014]: 2026-02-25 12:40:34.355 244018 DEBUG nova.storage.rbd_utils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 4d29081c-59fe-483f-b905-8c7349e84fc5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:40:34 np0005629333 nova_compute[244014]: 2026-02-25 12:40:34.359 244018 DEBUG oslo_concurrency.processutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4d29081c-59fe-483f-b905-8c7349e84fc5/disk.config 4d29081c-59fe-483f-b905-8c7349e84fc5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:40:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.458 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:40:34 np0005629333 nova_compute[244014]: 2026-02-25 12:40:34.458 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.461 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 25 07:40:34 np0005629333 nova_compute[244014]: 2026-02-25 12:40:34.500 244018 DEBUG oslo_concurrency.processutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4d29081c-59fe-483f-b905-8c7349e84fc5/disk.config 4d29081c-59fe-483f-b905-8c7349e84fc5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:40:34 np0005629333 nova_compute[244014]: 2026-02-25 12:40:34.501 244018 INFO nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Deleting local config drive /var/lib/nova/instances/4d29081c-59fe-483f-b905-8c7349e84fc5/disk.config because it was imported into RBD.#033[00m
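The sequence above imports the ISO into the vms pool and then deletes the local copy; a sketch with the same rbd arguments:

    import os
    import subprocess

    local = ("/var/lib/nova/instances/"
             "4d29081c-59fe-483f-b905-8c7349e84fc5/disk.config")
    subprocess.run(
        ["rbd", "import", "--pool", "vms", local,
         "4d29081c-59fe-483f-b905-8c7349e84fc5_disk.config",
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)
    os.unlink(local)  # the local ISO is redundant once it lives in RBD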
Feb 25 07:40:34 np0005629333 kernel: tap29c2436f-5a: entered promiscuous mode
Feb 25 07:40:34 np0005629333 NetworkManager[49836]: <info>  [1772023234.5660] manager: (tap29c2436f-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/406)
Feb 25 07:40:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:40:34Z|00977|binding|INFO|Claiming lport 29c2436f-5a46-4a8e-9f40-9b993368661c for this chassis.
Feb 25 07:40:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:40:34Z|00978|binding|INFO|29c2436f-5a46-4a8e-9f40-9b993368661c: Claiming fa:16:3e:48:85:8e 10.100.0.14
Feb 25 07:40:34 np0005629333 nova_compute[244014]: 2026-02-25 12:40:34.566 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.574 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:85:8e 10.100.0.14'], port_security=['fa:16:3e:48:85:8e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4d29081c-59fe-483f-b905-8c7349e84fc5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec8bae53-fe6a-49d1-a733-f00c198be561', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503e879cd1f44a16b9baef106ceba949', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3bf34285-1a67-4c95-bb68-fd577a012f6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18f4e8da-4409-4095-9850-aaee82dd8fd1, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=29c2436f-5a46-4a8e-9f40-9b993368661c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:40:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.575 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 29c2436f-5a46-4a8e-9f40-9b993368661c in datapath ec8bae53-fe6a-49d1-a733-f00c198be561 bound to our chassis#033[00m
Feb 25 07:40:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.576 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec8bae53-fe6a-49d1-a733-f00c198be561#033[00m
Feb 25 07:40:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:40:34Z|00979|binding|INFO|Setting lport 29c2436f-5a46-4a8e-9f40-9b993368661c ovn-installed in OVS
Feb 25 07:40:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:40:34Z|00980|binding|INFO|Setting lport 29c2436f-5a46-4a8e-9f40-9b993368661c up in Southbound
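The claim/up sequence above can be confirmed from the OVN southbound database; a sketch using ovn-sbctl's find command (treat the column selection as an assumption about the client's options):

    import subprocess

    print(subprocess.run(
        ["ovn-sbctl", "--columns=logical_port,chassis,up", "find",
         "Port_Binding", "logical_port=29c2436f-5a46-4a8e-9f40-9b993368661c"],
        check=True, capture_output=True, text=True).stdout)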
Feb 25 07:40:34 np0005629333 nova_compute[244014]: 2026-02-25 12:40:34.577 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:34 np0005629333 nova_compute[244014]: 2026-02-25 12:40:34.579 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.591 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[702a5c38-6e9f-4a19-9d0b-7935769e9db4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:40:34 np0005629333 systemd-udevd[333517]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:40:34 np0005629333 systemd-machined[210048]: New machine qemu-127-instance-00000064.
Feb 25 07:40:34 np0005629333 NetworkManager[49836]: <info>  [1772023234.6141] device (tap29c2436f-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:40:34 np0005629333 NetworkManager[49836]: <info>  [1772023234.6154] device (tap29c2436f-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:40:34 np0005629333 systemd[1]: Started Virtual Machine qemu-127-instance-00000064.
Feb 25 07:40:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.624 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d847f90c-0a41-4efd-8b53-009b648be4e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:40:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.628 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[316329a0-4194-407c-93e6-ebf79b49d12e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:40:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.660 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f54aa152-3fa2-4fff-868f-75efbf4c0fa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:40:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.682 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[39040a80-07f4-42e4-82e8-22af959ab357]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec8bae53-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a5:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516532, 'reachable_time': 18880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333528, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:40:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.700 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4a7b50ac-8c88-4118-ab48-85e387d7d136]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516546, 'tstamp': 516546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333531, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516549, 'tstamp': 516549}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333531, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
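
The two privsep replies above are netlink RTM_NEWLINK and RTM_NEWADDR dumps taken inside the ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561 namespace, showing the tapec8bae53-f1 veth with its tenant address (10.100.0.2/28) and the 169.254.169.254 metadata address. A minimal sketch of reading the same data, assuming the pyroute2 library and that the namespace from the log still exists on this host:

    from pyroute2 import NetNS

    with NetNS('ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561') as ns:
        # RTM_NEWLINK dump, like the first privsep reply above
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_ADDRESS'),
                  link.get_attr('IFLA_OPERSTATE'))
        # RTM_NEWADDR dump, like the second reply; expects 10.100.0.2/28
        # and 169.254.169.254/32 on tapec8bae53-f1
        for addr in ns.get_addr():
            print(addr.get_attr('IFA_LABEL'),
                  addr.get_attr('IFA_ADDRESS'),
                  addr['prefixlen'])
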
Feb 25 07:40:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.702 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8bae53-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:40:34 np0005629333 nova_compute[244014]: 2026-02-25 12:40:34.705 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.706 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec8bae53-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:40:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.707 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:40:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.707 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec8bae53-f0, col_values=(('external_ids', {'iface-id': 'e2d1eadf-baf7-4e5c-8052-6c64e8476a26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:40:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:34.708 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
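
The transaction lines above are ovsdbapp commands written to be idempotent: if_exists and may_exist turn re-runs into the logged "Transaction caused no change". A sketch of the same three commands, assuming the ovsdbapp library and a local Open vSwitch database socket (the socket path and timeout are placeholders; the port name and iface-id come from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # if_exists/may_exist make repeats no-ops ("caused no change")
        txn.add(api.del_port('tapec8bae53-f0', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapec8bae53-f0', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapec8bae53-f0',
            ('external_ids',
             {'iface-id': 'e2d1eadf-baf7-4e5c-8052-6c64e8476a26'})))
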
Feb 25 07:40:34 np0005629333 nova_compute[244014]: 2026-02-25 12:40:34.760 244018 DEBUG nova.network.neutron [req-f1748503-3356-4fd3-99d5-f65bfb43ed8e req-6aaea6e6-41ca-4161-8a92-01e44c526328 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Updated VIF entry in instance network info cache for port 29c2436f-5a46-4a8e-9f40-9b993368661c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:40:34 np0005629333 nova_compute[244014]: 2026-02-25 12:40:34.761 244018 DEBUG nova.network.neutron [req-f1748503-3356-4fd3-99d5-f65bfb43ed8e req-6aaea6e6-41ca-4161-8a92-01e44c526328 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Updating instance_info_cache with network_info: [{"id": "29c2436f-5a46-4a8e-9f40-9b993368661c", "address": "fa:16:3e:48:85:8e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c2436f-5a", "ovs_interfaceid": "29c2436f-5a46-4a8e-9f40-9b993368661c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:40:34 np0005629333 nova_compute[244014]: 2026-02-25 12:40:34.783 244018 DEBUG oslo_concurrency.lockutils [req-f1748503-3356-4fd3-99d5-f65bfb43ed8e req-6aaea6e6-41ca-4161-8a92-01e44c526328 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-4d29081c-59fe-483f-b905-8c7349e84fc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
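
The instance_info_cache update at 12:40:34.761 is plain JSON, one dict per VIF. A small sketch of pulling the fixed IPs and MTU back out of it; `raw` is assumed to hold the logged JSON text:

    import json

    vifs = json.loads(raw)  # `raw`: the network_info JSON logged above
    for v in vifs:
        fixed = [ip['address']
                 for subnet in v['network']['subnets']
                 for ip in subnet['ips']
                 if ip['type'] == 'fixed']
        print(v['id'], v['address'], fixed,
              f"mtu={v['network']['meta']['mtu']}")
    # expected for the entry above:
    # 29c2436f-5a46-4a8e-9f40-9b993368661c fa:16:3e:48:85:8e ['10.100.0.14'] mtu=1442
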
Feb 25 07:40:34 np0005629333 nova_compute[244014]: 2026-02-25 12:40:34.972 244018 DEBUG nova.compute.manager [req-2450b9d2-61a8-4635-ac54-5ca31c484edc req-7e8c2da5-446a-4d46-ad11-b983b11c3dd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Received event network-vif-plugged-29c2436f-5a46-4a8e-9f40-9b993368661c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:40:34 np0005629333 nova_compute[244014]: 2026-02-25 12:40:34.973 244018 DEBUG oslo_concurrency.lockutils [req-2450b9d2-61a8-4635-ac54-5ca31c484edc req-7e8c2da5-446a-4d46-ad11-b983b11c3dd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:40:34 np0005629333 nova_compute[244014]: 2026-02-25 12:40:34.973 244018 DEBUG oslo_concurrency.lockutils [req-2450b9d2-61a8-4635-ac54-5ca31c484edc req-7e8c2da5-446a-4d46-ad11-b983b11c3dd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:40:34 np0005629333 nova_compute[244014]: 2026-02-25 12:40:34.974 244018 DEBUG oslo_concurrency.lockutils [req-2450b9d2-61a8-4635-ac54-5ca31c484edc req-7e8c2da5-446a-4d46-ad11-b983b11c3dd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:40:34 np0005629333 nova_compute[244014]: 2026-02-25 12:40:34.974 244018 DEBUG nova.compute.manager [req-2450b9d2-61a8-4635-ac54-5ca31c484edc req-7e8c2da5-446a-4d46-ad11-b983b11c3dd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Processing event network-vif-plugged-29c2436f-5a46-4a8e-9f40-9b993368661c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
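
The Acquiring/acquired/released triplets around the event pop (with their waited/held timings) are oslo.concurrency's standard lock tracing. An illustrative use of the same primitives; the lock name mirrors the "<uuid>-events" lock in the log, and the function body is invented:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('4d29081c-59fe-483f-b905-8c7349e84fc5-events')
    def pop_event(waiting, name):
        # critical section: one thread at a time mutates the event map
        return waiting.pop(name, None)

    # the same lock as a context manager
    with lockutils.lock('4d29081c-59fe-483f-b905-8c7349e84fc5-events'):
        pass  # critical section
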
Feb 25 07:40:35 np0005629333 nova_compute[244014]: 2026-02-25 12:40:35.036 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:35 np0005629333 nova_compute[244014]: 2026-02-25 12:40:35.041 244018 DEBUG nova.compute.manager [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:40:35 np0005629333 nova_compute[244014]: 2026-02-25 12:40:35.043 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023235.0414193, 4d29081c-59fe-483f-b905-8c7349e84fc5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:40:35 np0005629333 nova_compute[244014]: 2026-02-25 12:40:35.043 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] VM Started (Lifecycle Event)#033[00m
Feb 25 07:40:35 np0005629333 nova_compute[244014]: 2026-02-25 12:40:35.047 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:40:35 np0005629333 nova_compute[244014]: 2026-02-25 12:40:35.051 244018 INFO nova.virt.libvirt.driver [-] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Instance spawned successfully.#033[00m
Feb 25 07:40:35 np0005629333 nova_compute[244014]: 2026-02-25 12:40:35.051 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:40:35 np0005629333 nova_compute[244014]: 2026-02-25 12:40:35.072 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:40:35 np0005629333 nova_compute[244014]: 2026-02-25 12:40:35.078 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
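
The integers in "current DB power_state: 0, VM power_state: 1" come from Nova's power-state enumeration. The values below are reproduced from memory of nova/compute/power_state.py rather than from this log, so verify them against your Nova tree:

    # nova/compute/power_state.py constants (from memory; verify):
    NOSTATE = 0    # the DB value before the guest has ever run
    RUNNING = 1    # what libvirt reports once the domain is up
    PAUSED = 3
    SHUTDOWN = 4
    CRASHED = 6
    SUSPENDED = 7

The "Paused" then "Resumed" lifecycle events a few lines below are typically the normal spawn sequence (the domain is started paused while Nova waits for network-vif-plugged, then resumed), not an operator action.
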
Feb 25 07:40:35 np0005629333 nova_compute[244014]: 2026-02-25 12:40:35.082 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:40:35 np0005629333 nova_compute[244014]: 2026-02-25 12:40:35.082 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:40:35 np0005629333 nova_compute[244014]: 2026-02-25 12:40:35.083 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:40:35 np0005629333 nova_compute[244014]: 2026-02-25 12:40:35.083 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:40:35 np0005629333 nova_compute[244014]: 2026-02-25 12:40:35.084 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:40:35 np0005629333 nova_compute[244014]: 2026-02-25 12:40:35.084 244018 DEBUG nova.virt.libvirt.driver [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
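
The six "Found default for ..." lines pin the chosen device models so the guest ABI stays stable across reboots and migrations; they resurface later in this log as image_hw_* keys in the instance's system_metadata. A sketch of the effect, not Nova's actual code:

    # Illustrative only: each discovered default is persisted under an
    # image_ prefix in instance.system_metadata (values from the log).
    defaults = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }
    system_metadata = {}
    for prop, value in defaults.items():
        system_metadata.setdefault(f'image_{prop}', value)
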
Feb 25 07:40:35 np0005629333 nova_compute[244014]: 2026-02-25 12:40:35.122 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:40:35 np0005629333 nova_compute[244014]: 2026-02-25 12:40:35.122 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023235.0420704, 4d29081c-59fe-483f-b905-8c7349e84fc5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:40:35 np0005629333 nova_compute[244014]: 2026-02-25 12:40:35.122 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:40:35 np0005629333 nova_compute[244014]: 2026-02-25 12:40:35.155 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:40:35 np0005629333 nova_compute[244014]: 2026-02-25 12:40:35.158 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023235.0460918, 4d29081c-59fe-483f-b905-8c7349e84fc5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:40:35 np0005629333 nova_compute[244014]: 2026-02-25 12:40:35.158 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:40:35 np0005629333 nova_compute[244014]: 2026-02-25 12:40:35.171 244018 INFO nova.compute.manager [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Took 7.44 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:40:35 np0005629333 nova_compute[244014]: 2026-02-25 12:40:35.171 244018 DEBUG nova.compute.manager [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:40:35 np0005629333 nova_compute[244014]: 2026-02-25 12:40:35.179 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:40:35 np0005629333 nova_compute[244014]: 2026-02-25 12:40:35.182 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:40:35 np0005629333 nova_compute[244014]: 2026-02-25 12:40:35.223 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:40:35 np0005629333 nova_compute[244014]: 2026-02-25 12:40:35.248 244018 INFO nova.compute.manager [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Took 8.63 seconds to build instance.#033[00m
Feb 25 07:40:35 np0005629333 nova_compute[244014]: 2026-02-25 12:40:35.319 244018 DEBUG oslo_concurrency.lockutils [None req-9dddcd8f-e852-476b-890d-4f0214ec8198 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:40:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.126 244018 DEBUG nova.compute.manager [req-5ba0342a-12b2-4290-b2f6-650495d632f0 req-6715339c-5207-4e7a-b11c-19f66436420f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Received event network-vif-plugged-29c2436f-5a46-4a8e-9f40-9b993368661c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.127 244018 DEBUG oslo_concurrency.lockutils [req-5ba0342a-12b2-4290-b2f6-650495d632f0 req-6715339c-5207-4e7a-b11c-19f66436420f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.127 244018 DEBUG oslo_concurrency.lockutils [req-5ba0342a-12b2-4290-b2f6-650495d632f0 req-6715339c-5207-4e7a-b11c-19f66436420f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.127 244018 DEBUG oslo_concurrency.lockutils [req-5ba0342a-12b2-4290-b2f6-650495d632f0 req-6715339c-5207-4e7a-b11c-19f66436420f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.128 244018 DEBUG nova.compute.manager [req-5ba0342a-12b2-4290-b2f6-650495d632f0 req-6715339c-5207-4e7a-b11c-19f66436420f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] No waiting events found dispatching network-vif-plugged-29c2436f-5a46-4a8e-9f40-9b993368661c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.128 244018 WARNING nova.compute.manager [req-5ba0342a-12b2-4290-b2f6-650495d632f0 req-6715339c-5207-4e7a-b11c-19f66436420f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Received unexpected event network-vif-plugged-29c2436f-5a46-4a8e-9f40-9b993368661c for instance with vm_state active and task_state None.#033[00m
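
The repeated network-vif-plugged deliveries in this span arrive through Nova's os-server-external-events API; Neutron posts one when the port goes ACTIVE, and once no waiter is registered Nova logs the event as unexpected, which is typically harmless. A hedged sketch of such a call; the endpoint and token are placeholders, and the payload shape follows the public os-server-external-events API:

    import requests

    requests.post(
        'http://nova-api:8774/v2.1/os-server-external-events',  # placeholder
        headers={'X-Auth-Token': '<admin token>'},               # placeholder
        json={'events': [{
            'name': 'network-vif-plugged',
            'server_uuid': '4d29081c-59fe-483f-b905-8c7349e84fc5',
            'tag': '29c2436f-5a46-4a8e-9f40-9b993368661c',
            'status': 'completed',
        }]})
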
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.356 244018 DEBUG oslo_concurrency.lockutils [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "4d29081c-59fe-483f-b905-8c7349e84fc5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.356 244018 DEBUG oslo_concurrency.lockutils [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.357 244018 DEBUG oslo_concurrency.lockutils [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.357 244018 DEBUG oslo_concurrency.lockutils [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.357 244018 DEBUG oslo_concurrency.lockutils [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.359 244018 INFO nova.compute.manager [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Terminating instance#033[00m
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.360 244018 DEBUG nova.compute.manager [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:40:37 np0005629333 kernel: tap29c2436f-5a (unregistering): left promiscuous mode
Feb 25 07:40:37 np0005629333 NetworkManager[49836]: <info>  [1772023237.4097] device (tap29c2436f-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:40:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:40:37Z|00981|binding|INFO|Releasing lport 29c2436f-5a46-4a8e-9f40-9b993368661c from this chassis (sb_readonly=0)
Feb 25 07:40:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:40:37Z|00982|binding|INFO|Setting lport 29c2436f-5a46-4a8e-9f40-9b993368661c down in Southbound
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.414 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:40:37Z|00983|binding|INFO|Removing iface tap29c2436f-5a ovn-installed in OVS
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.421 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:37.424 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:85:8e 10.100.0.14'], port_security=['fa:16:3e:48:85:8e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4d29081c-59fe-483f-b905-8c7349e84fc5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec8bae53-fe6a-49d1-a733-f00c198be561', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503e879cd1f44a16b9baef106ceba949', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3bf34285-1a67-4c95-bb68-fd577a012f6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18f4e8da-4409-4095-9850-aaee82dd8fd1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=29c2436f-5a46-4a8e-9f40-9b993368661c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:40:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:37.426 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 29c2436f-5a46-4a8e-9f40-9b993368661c in datapath ec8bae53-fe6a-49d1-a733-f00c198be561 unbound from our chassis#033[00m
Feb 25 07:40:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:37.429 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec8bae53-fe6a-49d1-a733-f00c198be561#033[00m
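
The "Matched UPDATE: PortBindingUpdatedEvent" line above is the agent's ovsdbapp row event firing as the Port_Binding chassis column is cleared, which drives the "unbound from our chassis" handling. A sketch of a comparable event class, assuming the ovsdbapp library; the unbind check stands in for the agent's real logic:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # `old` carries only the changed columns; losing `chassis`
            # is the unbind seen in the log above
            if hasattr(old, 'chassis') and not row.chassis:
                print(f'port {row.logical_port} unbound')
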
Feb 25 07:40:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:37.445 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b78588ca-d42a-48da-b0bb-bc5b593521b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:40:37 np0005629333 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000064.scope: Deactivated successfully.
Feb 25 07:40:37 np0005629333 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000064.scope: Consumed 2.697s CPU time.
Feb 25 07:40:37 np0005629333 systemd-machined[210048]: Machine qemu-127-instance-00000064 terminated.
Feb 25 07:40:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:37.474 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[12df5156-cdd7-4985-b381-a06525aa1bca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:40:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:37.479 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d7580926-eb01-48a7-8e50-06a89456498a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:40:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:37.505 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d69a00e0-529a-425a-9e7d-375f29707e19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:40:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:37.526 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ee8e3e4b-c6e4-46d3-8b82-9b52ed4b1db9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec8bae53-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a5:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516532, 'reachable_time': 18880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333584, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:40:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:37.544 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f86aa7-b561-4284-8026-7d0c63de6080]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516546, 'tstamp': 516546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333585, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516549, 'tstamp': 516549}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333585, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:40:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:37.546 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8bae53-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.548 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.553 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:37.554 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec8bae53-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:40:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:37.554 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:40:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:37.555 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec8bae53-f0, col_values=(('external_ids', {'iface-id': 'e2d1eadf-baf7-4e5c-8052-6c64e8476a26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:40:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:37.555 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.605 244018 INFO nova.virt.libvirt.driver [-] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Instance destroyed successfully.#033[00m
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.605 244018 DEBUG nova.objects.instance [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'resources' on Instance uuid 4d29081c-59fe-483f-b905-8c7349e84fc5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.624 244018 DEBUG nova.virt.libvirt.vif [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:40:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1761451429',display_name='tempest-ServersTestJSON-server-1761451429',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1761451429',id=100,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD0D9Rc8lFbW4qL5T7jR3oexmsn8QI7vD/zBEdjljHhz+WbYLBt+vZCC/EcArywe2HQ7Qo+4kBD5ze0YkonOn3bgWm+BpeU3ibaQn/otIhthIErVKZalQ86qnW2bWscjqA==',key_name='tempest-key-57149867',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:40:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-2108qsoc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:40:35Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=4d29081c-59fe-483f-b905-8c7349e84fc5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "29c2436f-5a46-4a8e-9f40-9b993368661c", "address": "fa:16:3e:48:85:8e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c2436f-5a", "ovs_interfaceid": "29c2436f-5a46-4a8e-9f40-9b993368661c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.625 244018 DEBUG nova.network.os_vif_util [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "29c2436f-5a46-4a8e-9f40-9b993368661c", "address": "fa:16:3e:48:85:8e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29c2436f-5a", "ovs_interfaceid": "29c2436f-5a46-4a8e-9f40-9b993368661c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.626 244018 DEBUG nova.network.os_vif_util [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:85:8e,bridge_name='br-int',has_traffic_filtering=True,id=29c2436f-5a46-4a8e-9f40-9b993368661c,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29c2436f-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.626 244018 DEBUG os_vif [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:85:8e,bridge_name='br-int',has_traffic_filtering=True,id=29c2436f-5a46-4a8e-9f40-9b993368661c,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29c2436f-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.630 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.630 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29c2436f-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.632 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.633 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.636 244018 INFO os_vif [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:85:8e,bridge_name='br-int',has_traffic_filtering=True,id=29c2436f-5a46-4a8e-9f40-9b993368661c,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29c2436f-5a')#033[00m
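
The unplug above goes through os-vif's ovs plugin, which issues the DelPortCommand on br-int seen a few lines earlier. A rough sketch of the equivalent direct call, not Nova's code path, assuming the os_vif library; the field values are copied from the VIFOpenVSwitch repr logged above, and anything not shown there is a best-effort guess:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()
    v = vif.VIFOpenVSwitch(
        id='29c2436f-5a46-4a8e-9f40-9b993368661c',
        address='fa:16:3e:48:85:8e',
        vif_name='tap29c2436f-5a',
        bridge_name='br-int',
        plugin='ovs',
        network=network.Network(id='ec8bae53-fe6a-49d1-a733-f00c198be561',
                                bridge='br-int'))
    inst = instance_info.InstanceInfo(
        uuid='4d29081c-59fe-483f-b905-8c7349e84fc5',
        name='tempest-ServersTestJSON-server-1761451429')
    # removes the port from br-int, as in the log's DelPortCommand
    os_vif.unplug(v, inst)
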
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.947 244018 INFO nova.virt.libvirt.driver [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Deleting instance files /var/lib/nova/instances/4d29081c-59fe-483f-b905-8c7349e84fc5_del#033[00m
Feb 25 07:40:37 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.948 244018 INFO nova.virt.libvirt.driver [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Deletion of /var/lib/nova/instances/4d29081c-59fe-483f-b905-8c7349e84fc5_del complete#033[00m
Feb 25 07:40:38 np0005629333 nova_compute[244014]: 2026-02-25 12:40:37.999 244018 INFO nova.compute.manager [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:40:38 np0005629333 nova_compute[244014]: 2026-02-25 12:40:38.000 244018 DEBUG oslo.service.loopingcall [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:40:38 np0005629333 nova_compute[244014]: 2026-02-25 12:40:38.000 244018 DEBUG nova.compute.manager [-] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:40:38 np0005629333 nova_compute[244014]: 2026-02-25 12:40:38.000 244018 DEBUG nova.network.neutron [-] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:40:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 25 07:40:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:40:38 np0005629333 nova_compute[244014]: 2026-02-25 12:40:38.968 244018 DEBUG nova.network.neutron [-] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:40:39 np0005629333 nova_compute[244014]: 2026-02-25 12:40:39.004 244018 INFO nova.compute.manager [-] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Took 1.00 seconds to deallocate network for instance.#033[00m
Feb 25 07:40:39 np0005629333 nova_compute[244014]: 2026-02-25 12:40:39.052 244018 DEBUG oslo_concurrency.lockutils [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:40:39 np0005629333 nova_compute[244014]: 2026-02-25 12:40:39.053 244018 DEBUG oslo_concurrency.lockutils [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:40:39 np0005629333 nova_compute[244014]: 2026-02-25 12:40:39.127 244018 DEBUG oslo_concurrency.processutils [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:40:39 np0005629333 nova_compute[244014]: 2026-02-25 12:40:39.166 244018 DEBUG nova.compute.manager [req-e39ab66c-411d-430c-87b2-4c2c05db3a99 req-cd787754-15fc-4f7c-976b-9c192d7401fa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Received event network-vif-deleted-29c2436f-5a46-4a8e-9f40-9b993368661c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:40:39 np0005629333 nova_compute[244014]: 2026-02-25 12:40:39.299 244018 DEBUG nova.compute.manager [req-1134d239-1a52-456f-9e61-eb1275813534 req-83019e88-6609-43d8-8972-53f7031a373c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Received event network-vif-unplugged-29c2436f-5a46-4a8e-9f40-9b993368661c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:40:39 np0005629333 nova_compute[244014]: 2026-02-25 12:40:39.300 244018 DEBUG oslo_concurrency.lockutils [req-1134d239-1a52-456f-9e61-eb1275813534 req-83019e88-6609-43d8-8972-53f7031a373c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:40:39 np0005629333 nova_compute[244014]: 2026-02-25 12:40:39.301 244018 DEBUG oslo_concurrency.lockutils [req-1134d239-1a52-456f-9e61-eb1275813534 req-83019e88-6609-43d8-8972-53f7031a373c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:40:39 np0005629333 nova_compute[244014]: 2026-02-25 12:40:39.302 244018 DEBUG oslo_concurrency.lockutils [req-1134d239-1a52-456f-9e61-eb1275813534 req-83019e88-6609-43d8-8972-53f7031a373c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:40:39 np0005629333 nova_compute[244014]: 2026-02-25 12:40:39.302 244018 DEBUG nova.compute.manager [req-1134d239-1a52-456f-9e61-eb1275813534 req-83019e88-6609-43d8-8972-53f7031a373c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] No waiting events found dispatching network-vif-unplugged-29c2436f-5a46-4a8e-9f40-9b993368661c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:40:39 np0005629333 nova_compute[244014]: 2026-02-25 12:40:39.303 244018 WARNING nova.compute.manager [req-1134d239-1a52-456f-9e61-eb1275813534 req-83019e88-6609-43d8-8972-53f7031a373c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Received unexpected event network-vif-unplugged-29c2436f-5a46-4a8e-9f40-9b993368661c for instance with vm_state deleted and task_state None.
Feb 25 07:40:39 np0005629333 nova_compute[244014]: 2026-02-25 12:40:39.303 244018 DEBUG nova.compute.manager [req-1134d239-1a52-456f-9e61-eb1275813534 req-83019e88-6609-43d8-8972-53f7031a373c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Received event network-vif-plugged-29c2436f-5a46-4a8e-9f40-9b993368661c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:40:39 np0005629333 nova_compute[244014]: 2026-02-25 12:40:39.304 244018 DEBUG oslo_concurrency.lockutils [req-1134d239-1a52-456f-9e61-eb1275813534 req-83019e88-6609-43d8-8972-53f7031a373c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:40:39 np0005629333 nova_compute[244014]: 2026-02-25 12:40:39.304 244018 DEBUG oslo_concurrency.lockutils [req-1134d239-1a52-456f-9e61-eb1275813534 req-83019e88-6609-43d8-8972-53f7031a373c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:40:39 np0005629333 nova_compute[244014]: 2026-02-25 12:40:39.305 244018 DEBUG oslo_concurrency.lockutils [req-1134d239-1a52-456f-9e61-eb1275813534 req-83019e88-6609-43d8-8972-53f7031a373c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:40:39 np0005629333 nova_compute[244014]: 2026-02-25 12:40:39.306 244018 DEBUG nova.compute.manager [req-1134d239-1a52-456f-9e61-eb1275813534 req-83019e88-6609-43d8-8972-53f7031a373c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] No waiting events found dispatching network-vif-plugged-29c2436f-5a46-4a8e-9f40-9b993368661c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:40:39 np0005629333 nova_compute[244014]: 2026-02-25 12:40:39.306 244018 WARNING nova.compute.manager [req-1134d239-1a52-456f-9e61-eb1275813534 req-83019e88-6609-43d8-8972-53f7031a373c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Received unexpected event network-vif-plugged-29c2436f-5a46-4a8e-9f40-9b993368661c for instance with vm_state deleted and task_state None.
Feb 25 07:40:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:40:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/100799852' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:40:39 np0005629333 nova_compute[244014]: 2026-02-25 12:40:39.679 244018 DEBUG oslo_concurrency.processutils [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:40:39 np0005629333 nova_compute[244014]: 2026-02-25 12:40:39.686 244018 DEBUG nova.compute.provider_tree [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:40:39 np0005629333 nova_compute[244014]: 2026-02-25 12:40:39.701 244018 DEBUG nova.scheduler.client.report [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:40:39 np0005629333 nova_compute[244014]: 2026-02-25 12:40:39.723 244018 DEBUG oslo_concurrency.lockutils [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:40:39 np0005629333 nova_compute[244014]: 2026-02-25 12:40:39.760 244018 INFO nova.scheduler.client.report [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Deleted allocations for instance 4d29081c-59fe-483f-b905-8c7349e84fc5
Feb 25 07:40:39 np0005629333 nova_compute[244014]: 2026-02-25 12:40:39.821 244018 DEBUG oslo_concurrency.lockutils [None req-73fb730e-3276-4c53-be24-18d76954b1a9 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4d29081c-59fe-483f-b905-8c7349e84fc5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.464s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:40:40 np0005629333 nova_compute[244014]: 2026-02-25 12:40:40.038 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:40:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 25 07:40:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 305 active+clean; 254 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Feb 25 07:40:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:40:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:40:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:40:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:40:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009670309111608878 of space, bias 1.0, pg target 0.2901092733482663 quantized to 32 (current 32)
Feb 25 07:40:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:40:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:40:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:40:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:40:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:40:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024937969202050445 of space, bias 1.0, pg target 0.7481390760615133 quantized to 32 (current 32)
Feb 25 07:40:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:40:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0215965201940558e-06 of space, bias 4.0, pg target 0.001225915824232867 quantized to 16 (current 16)
Feb 25 07:40:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:40:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:40:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:40:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:40:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:40:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:40:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:40:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:40:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:40:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 07:40:42 np0005629333 nova_compute[244014]: 2026-02-25 12:40:42.633 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:40:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:40:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 305 active+clean; 233 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 101 op/s
Feb 25 07:40:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:44.463 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:40:45 np0005629333 nova_compute[244014]: 2026-02-25 12:40:45.041 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:40:45 np0005629333 nova_compute[244014]: 2026-02-25 12:40:45.099 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "4717553a-9bb9-4278-b610-7c446db3f8d9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:40:45 np0005629333 nova_compute[244014]: 2026-02-25 12:40:45.100 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:40:45 np0005629333 nova_compute[244014]: 2026-02-25 12:40:45.122 244018 DEBUG nova.compute.manager [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:40:45 np0005629333 nova_compute[244014]: 2026-02-25 12:40:45.206 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:40:45 np0005629333 nova_compute[244014]: 2026-02-25 12:40:45.207 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:40:45 np0005629333 nova_compute[244014]: 2026-02-25 12:40:45.219 244018 DEBUG nova.virt.hardware [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:40:45 np0005629333 nova_compute[244014]: 2026-02-25 12:40:45.219 244018 INFO nova.compute.claims [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:40:45 np0005629333 nova_compute[244014]: 2026-02-25 12:40:45.328 244018 DEBUG oslo_concurrency.processutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:40:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:40:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2896996746' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:40:45 np0005629333 nova_compute[244014]: 2026-02-25 12:40:45.948 244018 DEBUG oslo_concurrency.processutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:40:45 np0005629333 nova_compute[244014]: 2026-02-25 12:40:45.959 244018 DEBUG nova.compute.provider_tree [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:40:45 np0005629333 nova_compute[244014]: 2026-02-25 12:40:45.978 244018 DEBUG nova.scheduler.client.report [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:40:45 np0005629333 nova_compute[244014]: 2026-02-25 12:40:45.997 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:40:45 np0005629333 nova_compute[244014]: 2026-02-25 12:40:45.998 244018 DEBUG nova.compute.manager [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:40:46 np0005629333 nova_compute[244014]: 2026-02-25 12:40:46.051 244018 DEBUG nova.compute.manager [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:40:46 np0005629333 nova_compute[244014]: 2026-02-25 12:40:46.052 244018 DEBUG nova.network.neutron [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:40:46 np0005629333 nova_compute[244014]: 2026-02-25 12:40:46.074 244018 INFO nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:40:46 np0005629333 nova_compute[244014]: 2026-02-25 12:40:46.090 244018 DEBUG nova.compute.manager [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:40:46 np0005629333 nova_compute[244014]: 2026-02-25 12:40:46.176 244018 DEBUG nova.compute.manager [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:40:46 np0005629333 nova_compute[244014]: 2026-02-25 12:40:46.177 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:40:46 np0005629333 nova_compute[244014]: 2026-02-25 12:40:46.177 244018 INFO nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Creating image(s)
Feb 25 07:40:46 np0005629333 nova_compute[244014]: 2026-02-25 12:40:46.198 244018 DEBUG nova.storage.rbd_utils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 4717553a-9bb9-4278-b610-7c446db3f8d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:40:46 np0005629333 nova_compute[244014]: 2026-02-25 12:40:46.220 244018 DEBUG nova.storage.rbd_utils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 4717553a-9bb9-4278-b610-7c446db3f8d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:40:46 np0005629333 nova_compute[244014]: 2026-02-25 12:40:46.240 244018 DEBUG nova.storage.rbd_utils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 4717553a-9bb9-4278-b610-7c446db3f8d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:40:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 305 active+clean; 233 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Feb 25 07:40:46 np0005629333 nova_compute[244014]: 2026-02-25 12:40:46.244 244018 DEBUG oslo_concurrency.processutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:40:46 np0005629333 nova_compute[244014]: 2026-02-25 12:40:46.305 244018 DEBUG oslo_concurrency.processutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:40:46 np0005629333 nova_compute[244014]: 2026-02-25 12:40:46.306 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:40:46 np0005629333 nova_compute[244014]: 2026-02-25 12:40:46.307 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:40:46 np0005629333 nova_compute[244014]: 2026-02-25 12:40:46.308 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:40:46 np0005629333 nova_compute[244014]: 2026-02-25 12:40:46.329 244018 DEBUG nova.storage.rbd_utils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 4717553a-9bb9-4278-b610-7c446db3f8d9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:40:46 np0005629333 nova_compute[244014]: 2026-02-25 12:40:46.332 244018 DEBUG oslo_concurrency.processutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 4717553a-9bb9-4278-b610-7c446db3f8d9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:40:46 np0005629333 nova_compute[244014]: 2026-02-25 12:40:46.463 244018 DEBUG nova.policy [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c5bc24b5f5048469cf3f701ce511bfa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '503e879cd1f44a16b9baef106ceba949', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:40:47 np0005629333 nova_compute[244014]: 2026-02-25 12:40:47.187 244018 DEBUG oslo_concurrency.processutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 4717553a-9bb9-4278-b610-7c446db3f8d9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.855s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:40:47 np0005629333 nova_compute[244014]: 2026-02-25 12:40:47.258 244018 DEBUG nova.storage.rbd_utils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] resizing rbd image 4717553a-9bb9-4278-b610-7c446db3f8d9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 07:40:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:40:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3233863198' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:40:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:40:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3233863198' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 07:40:47 np0005629333 nova_compute[244014]: 2026-02-25 12:40:47.634 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:40:47 np0005629333 nova_compute[244014]: 2026-02-25 12:40:47.697 244018 DEBUG nova.objects.instance [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'migration_context' on Instance uuid 4717553a-9bb9-4278-b610-7c446db3f8d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:40:47 np0005629333 nova_compute[244014]: 2026-02-25 12:40:47.791 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:40:47 np0005629333 nova_compute[244014]: 2026-02-25 12:40:47.792 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Ensure instance console log exists: /var/lib/nova/instances/4717553a-9bb9-4278-b610-7c446db3f8d9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:40:47 np0005629333 nova_compute[244014]: 2026-02-25 12:40:47.792 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:40:47 np0005629333 nova_compute[244014]: 2026-02-25 12:40:47.793 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:40:47 np0005629333 nova_compute[244014]: 2026-02-25 12:40:47.793 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:40:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 268 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 125 op/s
Feb 25 07:40:48 np0005629333 nova_compute[244014]: 2026-02-25 12:40:48.281 244018 DEBUG nova.network.neutron [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Successfully created port: 6150f46a-debb-49d6-b2e0-07b81f4cccea _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:40:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:40:48 np0005629333 podman[333829]: 2026-02-25 12:40:48.741938776 +0000 UTC m=+0.079150124 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 25 07:40:50 np0005629333 nova_compute[244014]: 2026-02-25 12:40:50.041 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:40:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 268 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 1.2 MiB/s wr, 50 op/s
Feb 25 07:40:50 np0005629333 podman[333849]: 2026-02-25 12:40:50.752501924 +0000 UTC m=+0.092545382 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 07:40:51 np0005629333 nova_compute[244014]: 2026-02-25 12:40:51.102 244018 DEBUG nova.network.neutron [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Successfully updated port: 6150f46a-debb-49d6-b2e0-07b81f4cccea _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:40:51 np0005629333 nova_compute[244014]: 2026-02-25 12:40:51.117 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "refresh_cache-4717553a-9bb9-4278-b610-7c446db3f8d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:40:51 np0005629333 nova_compute[244014]: 2026-02-25 12:40:51.117 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquired lock "refresh_cache-4717553a-9bb9-4278-b610-7c446db3f8d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:40:51 np0005629333 nova_compute[244014]: 2026-02-25 12:40:51.117 244018 DEBUG nova.network.neutron [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:40:51 np0005629333 nova_compute[244014]: 2026-02-25 12:40:51.261 244018 DEBUG nova.compute.manager [req-de1abd5f-eae6-4115-88a7-cc67803e401d req-31632092-100a-4b80-afc3-124f09b3e8b7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Received event network-changed-6150f46a-debb-49d6-b2e0-07b81f4cccea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:40:51 np0005629333 nova_compute[244014]: 2026-02-25 12:40:51.262 244018 DEBUG nova.compute.manager [req-de1abd5f-eae6-4115-88a7-cc67803e401d req-31632092-100a-4b80-afc3-124f09b3e8b7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Refreshing instance network info cache due to event network-changed-6150f46a-debb-49d6-b2e0-07b81f4cccea. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:40:51 np0005629333 nova_compute[244014]: 2026-02-25 12:40:51.262 244018 DEBUG oslo_concurrency.lockutils [req-de1abd5f-eae6-4115-88a7-cc67803e401d req-31632092-100a-4b80-afc3-124f09b3e8b7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-4717553a-9bb9-4278-b610-7c446db3f8d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:40:51 np0005629333 nova_compute[244014]: 2026-02-25 12:40:51.404 244018 DEBUG nova.network.neutron [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:40:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Feb 25 07:40:52 np0005629333 nova_compute[244014]: 2026-02-25 12:40:52.603 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023237.601733, 4d29081c-59fe-483f-b905-8c7349e84fc5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:40:52 np0005629333 nova_compute[244014]: 2026-02-25 12:40:52.603 244018 INFO nova.compute.manager [-] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:40:52 np0005629333 nova_compute[244014]: 2026-02-25 12:40:52.626 244018 DEBUG nova.compute.manager [None req-39f5a20b-da25-44a1-91c1-1eab3fa2496f - - - - - -] [instance: 4d29081c-59fe-483f-b905-8c7349e84fc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:40:52 np0005629333 nova_compute[244014]: 2026-02-25 12:40:52.637 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:52 np0005629333 nova_compute[244014]: 2026-02-25 12:40:52.881 244018 DEBUG nova.network.neutron [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Updating instance_info_cache with network_info: [{"id": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "address": "fa:16:3e:ee:a2:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6150f46a-de", "ovs_interfaceid": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:40:52 np0005629333 nova_compute[244014]: 2026-02-25 12:40:52.899 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Releasing lock "refresh_cache-4717553a-9bb9-4278-b610-7c446db3f8d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:40:52 np0005629333 nova_compute[244014]: 2026-02-25 12:40:52.900 244018 DEBUG nova.compute.manager [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Instance network_info: |[{"id": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "address": "fa:16:3e:ee:a2:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6150f46a-de", "ovs_interfaceid": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:40:52 np0005629333 nova_compute[244014]: 2026-02-25 12:40:52.901 244018 DEBUG oslo_concurrency.lockutils [req-de1abd5f-eae6-4115-88a7-cc67803e401d req-31632092-100a-4b80-afc3-124f09b3e8b7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-4717553a-9bb9-4278-b610-7c446db3f8d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:40:52 np0005629333 nova_compute[244014]: 2026-02-25 12:40:52.901 244018 DEBUG nova.network.neutron [req-de1abd5f-eae6-4115-88a7-cc67803e401d req-31632092-100a-4b80-afc3-124f09b3e8b7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Refreshing network info cache for port 6150f46a-debb-49d6-b2e0-07b81f4cccea _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:40:52 np0005629333 nova_compute[244014]: 2026-02-25 12:40:52.907 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Start _get_guest_xml network_info=[{"id": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "address": "fa:16:3e:ee:a2:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6150f46a-de", "ovs_interfaceid": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:40:52 np0005629333 nova_compute[244014]: 2026-02-25 12:40:52.914 244018 WARNING nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:40:52 np0005629333 nova_compute[244014]: 2026-02-25 12:40:52.924 244018 DEBUG nova.virt.libvirt.host [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:40:52 np0005629333 nova_compute[244014]: 2026-02-25 12:40:52.925 244018 DEBUG nova.virt.libvirt.host [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:40:52 np0005629333 nova_compute[244014]: 2026-02-25 12:40:52.938 244018 DEBUG nova.virt.libvirt.host [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:40:52 np0005629333 nova_compute[244014]: 2026-02-25 12:40:52.939 244018 DEBUG nova.virt.libvirt.host [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:40:52 np0005629333 nova_compute[244014]: 2026-02-25 12:40:52.939 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:40:52 np0005629333 nova_compute[244014]: 2026-02-25 12:40:52.939 244018 DEBUG nova.virt.hardware [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:40:52 np0005629333 nova_compute[244014]: 2026-02-25 12:40:52.940 244018 DEBUG nova.virt.hardware [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:40:52 np0005629333 nova_compute[244014]: 2026-02-25 12:40:52.941 244018 DEBUG nova.virt.hardware [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:40:52 np0005629333 nova_compute[244014]: 2026-02-25 12:40:52.941 244018 DEBUG nova.virt.hardware [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:40:52 np0005629333 nova_compute[244014]: 2026-02-25 12:40:52.941 244018 DEBUG nova.virt.hardware [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:40:52 np0005629333 nova_compute[244014]: 2026-02-25 12:40:52.942 244018 DEBUG nova.virt.hardware [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:40:52 np0005629333 nova_compute[244014]: 2026-02-25 12:40:52.942 244018 DEBUG nova.virt.hardware [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:40:52 np0005629333 nova_compute[244014]: 2026-02-25 12:40:52.943 244018 DEBUG nova.virt.hardware [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:40:52 np0005629333 nova_compute[244014]: 2026-02-25 12:40:52.943 244018 DEBUG nova.virt.hardware [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:40:52 np0005629333 nova_compute[244014]: 2026-02-25 12:40:52.943 244018 DEBUG nova.virt.hardware [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:40:52 np0005629333 nova_compute[244014]: 2026-02-25 12:40:52.944 244018 DEBUG nova.virt.hardware [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:40:52 np0005629333 nova_compute[244014]: 2026-02-25 12:40:52.949 244018 DEBUG oslo_concurrency.processutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:40:53 np0005629333 nova_compute[244014]: 2026-02-25 12:40:53.004 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "bb4e80d2-c200-4c24-8154-e1a86d06946b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:40:53 np0005629333 nova_compute[244014]: 2026-02-25 12:40:53.004 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:40:53 np0005629333 nova_compute[244014]: 2026-02-25 12:40:53.023 244018 DEBUG nova.compute.manager [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:40:53 np0005629333 nova_compute[244014]: 2026-02-25 12:40:53.106 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:40:53 np0005629333 nova_compute[244014]: 2026-02-25 12:40:53.106 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:40:53 np0005629333 nova_compute[244014]: 2026-02-25 12:40:53.113 244018 DEBUG nova.virt.hardware [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:40:53 np0005629333 nova_compute[244014]: 2026-02-25 12:40:53.113 244018 INFO nova.compute.claims [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:40:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:53.127 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:8a:86 10.100.0.2 2001:db8::f816:3eff:feaa:8a86'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feaa:8a86/64', 'neutron:device_id': 'ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5cb98e41-b63f-472f-91ce-2c5130dfa4a8, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=895bc3cc-c38d-425b-b005-1acb3139bbee) old=Port_Binding(mac=['fa:16:3e:aa:8a:86 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
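That very long "Matched UPDATE" line is ovsdbapp's event dispatcher at work: the metadata agent registers row-event classes against the OVN southbound Port_Binding table and receives a callback with the new and old row values. The declaration pattern looks roughly like this (simplified relative to neutron's real event class):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # events=('update',) and table='Port_Binding', exactly as
            # printed in the matched-event repr above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # row carries the new column values, old only the changed ones;
            # the agent decides here whether metadata needs (re)provisioning.
            pass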
Feb 25 07:40:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:53.129 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 895bc3cc-c38d-425b-b005-1acb3139bbee in datapath 9ec47b46-6b4d-4267-964b-6f16eef9a7b1 updated#033[00m
Feb 25 07:40:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:53.131 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9ec47b46-6b4d-4267-964b-6f16eef9a7b1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:40:53 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:53.132 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2340c8a6-af40-4621-985a-9ddc7e963175]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:40:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:40:53 np0005629333 nova_compute[244014]: 2026-02-25 12:40:53.298 244018 DEBUG oslo_concurrency.processutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:40:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:40:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1929090395' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:40:53 np0005629333 nova_compute[244014]: 2026-02-25 12:40:53.518 244018 DEBUG oslo_concurrency.processutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:40:53 np0005629333 nova_compute[244014]: 2026-02-25 12:40:53.547 244018 DEBUG nova.storage.rbd_utils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 4717553a-9bb9-4278-b610-7c446db3f8d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:40:53 np0005629333 nova_compute[244014]: 2026-02-25 12:40:53.552 244018 DEBUG oslo_concurrency.processutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:40:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:40:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3488990023' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:40:53 np0005629333 nova_compute[244014]: 2026-02-25 12:40:53.866 244018 DEBUG oslo_concurrency.processutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:40:53 np0005629333 nova_compute[244014]: 2026-02-25 12:40:53.873 244018 DEBUG nova.compute.provider_tree [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:40:53 np0005629333 nova_compute[244014]: 2026-02-25 12:40:53.891 244018 DEBUG nova.scheduler.client.report [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
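Placement derives usable capacity from that inventory as (total - reserved) * allocation_ratio, so the unchanged-inventory message above corresponds to these effective limits:

    # Worked from the inventory dict in the log line above.
    vcpu_capacity   = (8    - 0)   * 4.0   # 32 schedulable VCPUs
    memory_capacity = (7679 - 512) * 1.0   # 7167 MB of RAM
    disk_capacity   = (59   - 1)   * 0.9   # 52.2 GB of disk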
Feb 25 07:40:53 np0005629333 nova_compute[244014]: 2026-02-25 12:40:53.921 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:40:53 np0005629333 nova_compute[244014]: 2026-02-25 12:40:53.922 244018 DEBUG nova.compute.manager [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:40:53 np0005629333 nova_compute[244014]: 2026-02-25 12:40:53.969 244018 DEBUG nova.compute.manager [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:40:53 np0005629333 nova_compute[244014]: 2026-02-25 12:40:53.969 244018 DEBUG nova.network.neutron [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:40:53 np0005629333 nova_compute[244014]: 2026-02-25 12:40:53.987 244018 INFO nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.005 244018 DEBUG nova.compute.manager [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:40:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:40:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3892986754' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.096 244018 DEBUG oslo_concurrency.processutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.099 244018 DEBUG nova.virt.libvirt.vif [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:40:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1299470364',display_name='tempest-ServersTestJSON-server-1299470364',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1299470364',id=101,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-mh21m2o5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:40:46Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=4717553a-9bb9-4278-b610-7c446db3f8d9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "address": "fa:16:3e:ee:a2:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6150f46a-de", "ovs_interfaceid": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.099 244018 DEBUG nova.network.os_vif_util [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "address": "fa:16:3e:ee:a2:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6150f46a-de", "ovs_interfaceid": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.100 244018 DEBUG nova.network.os_vif_util [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:a2:d4,bridge_name='br-int',has_traffic_filtering=True,id=6150f46a-debb-49d6-b2e0-07b81f4cccea,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6150f46a-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
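nova_to_osvif_vif converts the untyped port dict into a versioned os-vif object so the plugin layer can dispatch on its class. Constructing the same object by hand would look roughly like this (field values copied from the log; the nested network object and port profile are omitted for brevity):

    from os_vif.objects import vif as vif_obj

    vif = vif_obj.VIFOpenVSwitch(
        id='6150f46a-debb-49d6-b2e0-07b81f4cccea',
        address='fa:16:3e:ee:a2:d4',
        bridge_name='br-int',
        vif_name='tap6150f46a-de',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False)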
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.102 244018 DEBUG nova.objects.instance [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4717553a-9bb9-4278-b610-7c446db3f8d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.125 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:40:54 np0005629333 nova_compute[244014]:  <uuid>4717553a-9bb9-4278-b610-7c446db3f8d9</uuid>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:  <name>instance-00000065</name>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServersTestJSON-server-1299470364</nova:name>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:40:52</nova:creationTime>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:40:54 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:        <nova:user uuid="4c5bc24b5f5048469cf3f701ce511bfa">tempest-ServersTestJSON-1472039551-project-member</nova:user>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:        <nova:project uuid="503e879cd1f44a16b9baef106ceba949">tempest-ServersTestJSON-1472039551</nova:project>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:        <nova:port uuid="6150f46a-debb-49d6-b2e0-07b81f4cccea">
Feb 25 07:40:54 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <entry name="serial">4717553a-9bb9-4278-b610-7c446db3f8d9</entry>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <entry name="uuid">4717553a-9bb9-4278-b610-7c446db3f8d9</entry>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/4717553a-9bb9-4278-b610-7c446db3f8d9_disk">
Feb 25 07:40:54 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:40:54 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/4717553a-9bb9-4278-b610-7c446db3f8d9_disk.config">
Feb 25 07:40:54 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:40:54 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:ee:a2:d4"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <target dev="tap6150f46a-de"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/4717553a-9bb9-4278-b610-7c446db3f8d9/console.log" append="off"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:40:54 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:40:54 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:40:54 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:40:54 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
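The block above is the complete domain definition nova hands to libvirt: RBD-backed root disk plus config-drive CD-ROM, one OVS-backed virtio NIC at MTU 1442, and a q35 machine with a stack of pcie-root-port controllers for hotplug. As a minimal illustration of launching a guest from such XML with the libvirt-python bindings; note nova actually defines the domain persistently and then starts it, rather than the one-shot call shown here:

    import libvirt

    conn = libvirt.open('qemu:///system')   # URI used on compute nodes
    dom = conn.createXML(domain_xml, 0)     # domain_xml: the <domain> doc above
    # Expected: instance-00000065 / 4717553a-9bb9-4278-b610-7c446db3f8d9
    print(dom.name(), dom.UUIDString())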
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.126 244018 DEBUG nova.compute.manager [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Preparing to wait for external event network-vif-plugged-6150f46a-debb-49d6-b2e0-07b81f4cccea prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.126 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.127 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.127 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
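prepare_for_instance_event registers a waiter for network-vif-plugged before the tap device is even created, which closes the race where neutron's notification could arrive while nova is still plugging. The shape of that pattern, sketched with plain threading primitives (nova's real implementation uses eventlet and the per-instance "-events" lock shown above):

    import threading

    waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare(instance_uuid, event_name):
        ev = threading.Event()
        waiters[(instance_uuid, event_name)] = ev
        return ev

    def deliver(instance_uuid, event_name):
        # Invoked when neutron posts the external event to the nova API.
        waiters.pop((instance_uuid, event_name)).set()

    ev = prepare('4717553a-9bb9-4278-b610-7c446db3f8d9', 'network-vif-plugged')
    # ... plug the VIF and start the guest ...
    ev.wait(timeout=300)  # give up after the vif plugging timeout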
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.128 244018 DEBUG nova.virt.libvirt.vif [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:40:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1299470364',display_name='tempest-ServersTestJSON-server-1299470364',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1299470364',id=101,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-mh21m2o5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:40:46Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=4717553a-9bb9-4278-b610-7c446db3f8d9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "address": "fa:16:3e:ee:a2:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6150f46a-de", "ovs_interfaceid": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.129 244018 DEBUG nova.network.os_vif_util [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "address": "fa:16:3e:ee:a2:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6150f46a-de", "ovs_interfaceid": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.130 244018 DEBUG nova.network.os_vif_util [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:a2:d4,bridge_name='br-int',has_traffic_filtering=True,id=6150f46a-debb-49d6-b2e0-07b81f4cccea,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6150f46a-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.131 244018 DEBUG os_vif [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:a2:d4,bridge_name='br-int',has_traffic_filtering=True,id=6150f46a-debb-49d6-b2e0-07b81f4cccea,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6150f46a-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.132 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.133 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.133 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.137 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.138 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6150f46a-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.138 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6150f46a-de, col_values=(('external_ids', {'iface-id': '6150f46a-debb-49d6-b2e0-07b81f4cccea', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ee:a2:d4', 'vm-uuid': '4717553a-9bb9-4278-b610-7c446db3f8d9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
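Those two transaction commands are the ovsdbapp equivalent of `ovs-vsctl add-port br-int tap6150f46a-de` plus setting the Interface external_ids that ovn-controller later matches against the logical port. Done directly, it looks roughly like this (socket path and timeout are assumptions):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap6150f46a-de', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap6150f46a-de',
            ('external_ids', {'iface-id': '6150f46a-debb-49d6-b2e0-07b81f4cccea',
                              'attached-mac': 'fa:16:3e:ee:a2:d4'})))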
Feb 25 07:40:54 np0005629333 NetworkManager[49836]: <info>  [1772023254.1420] manager: (tap6150f46a-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/407)
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.142 244018 DEBUG nova.compute.manager [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.143 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.144 244018 INFO nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Creating image(s)#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.177 244018 DEBUG nova.storage.rbd_utils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image bb4e80d2-c200-4c24-8154-e1a86d06946b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.215 244018 DEBUG nova.storage.rbd_utils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image bb4e80d2-c200-4c24-8154-e1a86d06946b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:40:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.249 244018 DEBUG nova.storage.rbd_utils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image bb4e80d2-c200-4c24-8154-e1a86d06946b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.252 244018 DEBUG oslo_concurrency.processutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.276 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.280 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.280 244018 INFO os_vif [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:a2:d4,bridge_name='br-int',has_traffic_filtering=True,id=6150f46a-debb-49d6-b2e0-07b81f4cccea,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6150f46a-de')#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.321 244018 DEBUG oslo_concurrency.processutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
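The qemu-img probe runs under oslo_concurrency.prlimit so that parsing a hostile or corrupt image header cannot consume unbounded memory or CPU. The same guard expressed through the processutils API, with the limits taken from the command line above (base_image_path stands in for the _base cache file):

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1073741824,  # --as: cap the address space at 1 GiB
        cpu_time=30)               # --cpu: kill it after 30s of CPU time

    out, _ = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info', base_image_path,
        '--force-share', '--output=json',
        prlimit=limits)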
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.321 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.322 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.322 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.346 244018 DEBUG nova.storage.rbd_utils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image bb4e80d2-c200-4c24-8154-e1a86d06946b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.350 244018 DEBUG oslo_concurrency.processutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 bb4e80d2-c200-4c24-8154-e1a86d06946b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.393 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.394 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.394 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No VIF found with MAC fa:16:3e:ee:a2:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.395 244018 INFO nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Using config drive#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.423 244018 DEBUG nova.storage.rbd_utils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 4717553a-9bb9-4278-b610-7c446db3f8d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.654 244018 DEBUG nova.policy [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb37a481eb114226822ed8b2ef4f9a89', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
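The policy "failure" above is routine rather than an error: nova probes network:attach_external_network on every boot, and a plain reader/member token is denied, which only means the port must live on a regular tenant network (as it does here). The check itself goes through oslo.policy; schematically, with the enforcer setup abbreviated and the rule name taken from the log:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    allowed = enforcer.enforce(
        'network:attach_external_network',
        target={'project_id': '6821a6e7edd54dbe97920b79aae8f54c'},
        creds={'roles': ['reader', 'member'], 'is_admin': False},
        do_raise=False)  # nova logs the denial and carries on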
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.727 244018 DEBUG oslo_concurrency.processutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 bb4e80d2-c200-4c24-8154-e1a86d06946b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.378s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.808 244018 DEBUG nova.storage.rbd_utils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] resizing rbd image bb4e80d2-c200-4c24-8154-e1a86d06946b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
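Because the flavor's 1 GB root disk is larger than the cached base image, nova imports the base file into the vms pool and then grows the RBD image to root_gb, which is the "resizing ... to 1073741824" line above. The equivalent steps as plain CLI calls (nova shells out for the import but resizes through the rbd Python bindings):

    import subprocess

    base = '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'
    image = 'bb4e80d2-c200-4c24-8154-e1a86d06946b_disk'
    creds = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

    subprocess.run(['rbd', 'import', '--pool', 'vms', base, image,
                    '--image-format=2'] + creds, check=True)
    subprocess.run(['rbd', 'resize', '--pool', 'vms', image,
                    '--size', '1G'] + creds, check=True)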
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.897 244018 DEBUG nova.objects.instance [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'migration_context' on Instance uuid bb4e80d2-c200-4c24-8154-e1a86d06946b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.918 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.919 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Ensure instance console log exists: /var/lib/nova/instances/bb4e80d2-c200-4c24-8154-e1a86d06946b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.919 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.919 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:40:54 np0005629333 nova_compute[244014]: 2026-02-25 12:40:54.919 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:40:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.020 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:40:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.020 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:40:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.021 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:40:55 np0005629333 nova_compute[244014]: 2026-02-25 12:40:55.043 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:55 np0005629333 nova_compute[244014]: 2026-02-25 12:40:55.058 244018 INFO nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Creating config drive at /var/lib/nova/instances/4717553a-9bb9-4278-b610-7c446db3f8d9/disk.config#033[00m
Feb 25 07:40:55 np0005629333 nova_compute[244014]: 2026-02-25 12:40:55.062 244018 DEBUG oslo_concurrency.processutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4717553a-9bb9-4278-b610-7c446db3f8d9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpfitiy1tt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:40:55 np0005629333 nova_compute[244014]: 2026-02-25 12:40:55.197 244018 DEBUG oslo_concurrency.processutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4717553a-9bb9-4278-b610-7c446db3f8d9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpfitiy1tt" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
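The -V config-2 volume label is the contract with the guest: cloud-init probes block devices for that label to find its config drive. Building one by hand follows the same layout, sketched here with the metadata reduced to the bare minimum (a real drive also carries network_data.json and vendor data):

    import json, os, subprocess, tempfile

    tmp = tempfile.mkdtemp()
    md_dir = os.path.join(tmp, 'openstack', 'latest')
    os.makedirs(md_dir)
    with open(os.path.join(md_dir, 'meta_data.json'), 'w') as f:
        json.dump({'uuid': '4717553a-9bb9-4278-b610-7c446db3f8d9'}, f)

    subprocess.run(['mkisofs', '-o', 'disk.config',
                    '-ldots', '-allow-lowercase', '-allow-multidot',
                    '-l', '-J', '-r', '-V', 'config-2', tmp], check=True)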
Feb 25 07:40:55 np0005629333 nova_compute[244014]: 2026-02-25 12:40:55.220 244018 DEBUG nova.storage.rbd_utils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image 4717553a-9bb9-4278-b610-7c446db3f8d9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:40:55 np0005629333 nova_compute[244014]: 2026-02-25 12:40:55.224 244018 DEBUG oslo_concurrency.processutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4717553a-9bb9-4278-b610-7c446db3f8d9/disk.config 4717553a-9bb9-4278-b610-7c446db3f8d9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:40:55 np0005629333 nova_compute[244014]: 2026-02-25 12:40:55.376 244018 DEBUG oslo_concurrency.processutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4717553a-9bb9-4278-b610-7c446db3f8d9/disk.config 4717553a-9bb9-4278-b610-7c446db3f8d9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:40:55 np0005629333 nova_compute[244014]: 2026-02-25 12:40:55.377 244018 INFO nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Deleting local config drive /var/lib/nova/instances/4717553a-9bb9-4278-b610-7c446db3f8d9/disk.config because it was imported into RBD.#033[00m
Feb 25 07:40:55 np0005629333 kernel: tap6150f46a-de: entered promiscuous mode
Feb 25 07:40:55 np0005629333 ovn_controller[147040]: 2026-02-25T12:40:55Z|00984|binding|INFO|Claiming lport 6150f46a-debb-49d6-b2e0-07b81f4cccea for this chassis.
Feb 25 07:40:55 np0005629333 NetworkManager[49836]: <info>  [1772023255.4415] manager: (tap6150f46a-de): new Tun device (/org/freedesktop/NetworkManager/Devices/408)
Feb 25 07:40:55 np0005629333 nova_compute[244014]: 2026-02-25 12:40:55.441 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:55 np0005629333 ovn_controller[147040]: 2026-02-25T12:40:55Z|00985|binding|INFO|6150f46a-debb-49d6-b2e0-07b81f4cccea: Claiming fa:16:3e:ee:a2:d4 10.100.0.9
Feb 25 07:40:55 np0005629333 nova_compute[244014]: 2026-02-25 12:40:55.444 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:55 np0005629333 ovn_controller[147040]: 2026-02-25T12:40:55Z|00986|binding|INFO|Setting lport 6150f46a-debb-49d6-b2e0-07b81f4cccea ovn-installed in OVS
Feb 25 07:40:55 np0005629333 nova_compute[244014]: 2026-02-25 12:40:55.452 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:55 np0005629333 ovn_controller[147040]: 2026-02-25T12:40:55Z|00987|binding|INFO|Setting lport 6150f46a-debb-49d6-b2e0-07b81f4cccea up in Southbound
Feb 25 07:40:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.469 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:a2:d4 10.100.0.9'], port_security=['fa:16:3e:ee:a2:d4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4717553a-9bb9-4278-b610-7c446db3f8d9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec8bae53-fe6a-49d1-a733-f00c198be561', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503e879cd1f44a16b9baef106ceba949', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3bf34285-1a67-4c95-bb68-fd577a012f6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18f4e8da-4409-4095-9850-aaee82dd8fd1, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6150f46a-debb-49d6-b2e0-07b81f4cccea) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:40:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.470 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6150f46a-debb-49d6-b2e0-07b81f4cccea in datapath ec8bae53-fe6a-49d1-a733-f00c198be561 bound to our chassis#033[00m
Feb 25 07:40:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.472 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec8bae53-fe6a-49d1-a733-f00c198be561#033[00m
Feb 25 07:40:55 np0005629333 systemd-udevd[334198]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:40:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.485 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dc681900-57ea-4418-93d1-969bc21cdec0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:40:55 np0005629333 NetworkManager[49836]: <info>  [1772023255.4902] device (tap6150f46a-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:40:55 np0005629333 NetworkManager[49836]: <info>  [1772023255.4910] device (tap6150f46a-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:40:55 np0005629333 systemd-machined[210048]: New machine qemu-128-instance-00000065.
Feb 25 07:40:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.510 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[acec9576-92d0-49e5-9da1-3c21af9bbd0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:40:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.514 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[393d46dc-d66d-4266-91df-7f65bbe0a39e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:40:55 np0005629333 systemd[1]: Started Virtual Machine qemu-128-instance-00000065.
Feb 25 07:40:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.538 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[389deeea-8182-4d72-945a-fa8af59de439]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:40:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.552 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2954fdc9-fdec-47da-9c76-d7e5704ac476]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec8bae53-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a5:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516532, 'reachable_time': 18880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334213, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:40:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.566 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[708e9e48-c763-4a01-907d-9a0c00162627]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516546, 'tstamp': 516546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334215, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516549, 'tstamp': 516549}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334215, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
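
The privsep replies above are pyroute2 netlink dumps (one RTM_NEWLINK, then two RTM_NEWADDR records) taken inside the ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561 namespace: the metadata tap carries both the subnet address 10.100.0.2/28 and the well-known 169.254.169.254/32. A sketch reading the same addresses back with pyroute2, assuming the namespace still exists on this host and pyroute2 is importable:

```python
from pyroute2 import NetNS

# Namespace name taken verbatim from the 'target' field in the dumps above.
with NetNS('ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561') as ns:
    for msg in ns.get_addr(family=2):        # AF_INET addresses only
        label = msg.get_attr('IFA_LABEL')    # e.g. tapec8bae53-f1
        addr = msg.get_attr('IFA_ADDRESS')   # e.g. 10.100.0.2 or 169.254.169.254
        print(f"{label}: {addr}/{msg['prefixlen']}")
```
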
Feb 25 07:40:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.568 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8bae53-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:40:55 np0005629333 nova_compute[244014]: 2026-02-25 12:40:55.569 244018 DEBUG nova.network.neutron [req-de1abd5f-eae6-4115-88a7-cc67803e401d req-31632092-100a-4b80-afc3-124f09b3e8b7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Updated VIF entry in instance network info cache for port 6150f46a-debb-49d6-b2e0-07b81f4cccea. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:40:55 np0005629333 nova_compute[244014]: 2026-02-25 12:40:55.570 244018 DEBUG nova.network.neutron [req-de1abd5f-eae6-4115-88a7-cc67803e401d req-31632092-100a-4b80-afc3-124f09b3e8b7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Updating instance_info_cache with network_info: [{"id": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "address": "fa:16:3e:ee:a2:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6150f46a-de", "ovs_interfaceid": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
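
The instance_info_cache payload in the previous line is plain JSON once the surrounding log text is stripped, so the port ID, MAC, and fixed IPs can be recovered mechanically. A small self-contained sketch over a trimmed copy of that payload (the full structure is as logged above):

```python
import json

# Trimmed subset of the network_info payload logged above.
raw = '''[{"id": "6150f46a-debb-49d6-b2e0-07b81f4cccea",
           "address": "fa:16:3e:ee:a2:d4",
           "network": {"subnets": [{"ips": [{"address": "10.100.0.9"}]}]}}]'''

for vif in json.loads(raw):
    ips = [ip["address"]
           for subnet in vif["network"]["subnets"]
           for ip in subnet["ips"]]
    print(vif["id"], vif["address"], ips)  # port id, MAC, fixed IPs
```
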
Feb 25 07:40:55 np0005629333 nova_compute[244014]: 2026-02-25 12:40:55.572 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.572 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec8bae53-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:40:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.572 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:40:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.573 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec8bae53-f0, col_values=(('external_ids', {'iface-id': 'e2d1eadf-baf7-4e5c-8052-6c64e8476a26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:40:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:40:55.573 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:40:55 np0005629333 nova_compute[244014]: 2026-02-25 12:40:55.611 244018 DEBUG oslo_concurrency.lockutils [req-de1abd5f-eae6-4115-88a7-cc67803e401d req-31632092-100a-4b80-afc3-124f09b3e8b7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-4717553a-9bb9-4278-b610-7c446db3f8d9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:40:55 np0005629333 nova_compute[244014]: 2026-02-25 12:40:55.893 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023255.893339, 4717553a-9bb9-4278-b610-7c446db3f8d9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:40:55 np0005629333 nova_compute[244014]: 2026-02-25 12:40:55.894 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] VM Started (Lifecycle Event)#033[00m
Feb 25 07:40:55 np0005629333 nova_compute[244014]: 2026-02-25 12:40:55.926 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:40:55 np0005629333 nova_compute[244014]: 2026-02-25 12:40:55.930 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023255.8935688, 4717553a-9bb9-4278-b610-7c446db3f8d9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:40:55 np0005629333 nova_compute[244014]: 2026-02-25 12:40:55.930 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:40:55 np0005629333 nova_compute[244014]: 2026-02-25 12:40:55.962 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:40:55 np0005629333 nova_compute[244014]: 2026-02-25 12:40:55.966 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:40:56 np0005629333 nova_compute[244014]: 2026-02-25 12:40:56.023 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:40:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1793: 305 pgs: 305 active+clean; 279 MiB data, 891 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:40:56 np0005629333 nova_compute[244014]: 2026-02-25 12:40:56.547 244018 DEBUG nova.network.neutron [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Successfully created port: ca235479-72b7-40ba-b8e5-d8d3227079fc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:40:56 np0005629333 nova_compute[244014]: 2026-02-25 12:40:56.599 244018 DEBUG nova.compute.manager [req-b64b10cb-4ff0-491c-b19b-2d08f0dbabce req-9ae7d88d-1cb0-47ca-8898-708229fa7456 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Received event network-vif-plugged-6150f46a-debb-49d6-b2e0-07b81f4cccea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:40:56 np0005629333 nova_compute[244014]: 2026-02-25 12:40:56.600 244018 DEBUG oslo_concurrency.lockutils [req-b64b10cb-4ff0-491c-b19b-2d08f0dbabce req-9ae7d88d-1cb0-47ca-8898-708229fa7456 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:40:56 np0005629333 nova_compute[244014]: 2026-02-25 12:40:56.600 244018 DEBUG oslo_concurrency.lockutils [req-b64b10cb-4ff0-491c-b19b-2d08f0dbabce req-9ae7d88d-1cb0-47ca-8898-708229fa7456 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:40:56 np0005629333 nova_compute[244014]: 2026-02-25 12:40:56.600 244018 DEBUG oslo_concurrency.lockutils [req-b64b10cb-4ff0-491c-b19b-2d08f0dbabce req-9ae7d88d-1cb0-47ca-8898-708229fa7456 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:40:56 np0005629333 nova_compute[244014]: 2026-02-25 12:40:56.600 244018 DEBUG nova.compute.manager [req-b64b10cb-4ff0-491c-b19b-2d08f0dbabce req-9ae7d88d-1cb0-47ca-8898-708229fa7456 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Processing event network-vif-plugged-6150f46a-debb-49d6-b2e0-07b81f4cccea _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:40:56 np0005629333 nova_compute[244014]: 2026-02-25 12:40:56.601 244018 DEBUG nova.compute.manager [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:40:56 np0005629333 nova_compute[244014]: 2026-02-25 12:40:56.606 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023256.6056755, 4717553a-9bb9-4278-b610-7c446db3f8d9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:40:56 np0005629333 nova_compute[244014]: 2026-02-25 12:40:56.606 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:40:56 np0005629333 nova_compute[244014]: 2026-02-25 12:40:56.610 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:40:56 np0005629333 nova_compute[244014]: 2026-02-25 12:40:56.614 244018 INFO nova.virt.libvirt.driver [-] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Instance spawned successfully.#033[00m
Feb 25 07:40:56 np0005629333 nova_compute[244014]: 2026-02-25 12:40:56.614 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:40:56 np0005629333 nova_compute[244014]: 2026-02-25 12:40:56.631 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:40:56 np0005629333 nova_compute[244014]: 2026-02-25 12:40:56.639 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:40:56 np0005629333 nova_compute[244014]: 2026-02-25 12:40:56.646 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:40:56 np0005629333 nova_compute[244014]: 2026-02-25 12:40:56.647 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:40:56 np0005629333 nova_compute[244014]: 2026-02-25 12:40:56.647 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:40:56 np0005629333 nova_compute[244014]: 2026-02-25 12:40:56.648 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:40:56 np0005629333 nova_compute[244014]: 2026-02-25 12:40:56.649 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:40:56 np0005629333 nova_compute[244014]: 2026-02-25 12:40:56.649 244018 DEBUG nova.virt.libvirt.driver [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:40:56 np0005629333 nova_compute[244014]: 2026-02-25 12:40:56.656 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:40:56 np0005629333 nova_compute[244014]: 2026-02-25 12:40:56.710 244018 INFO nova.compute.manager [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Took 10.53 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:40:56 np0005629333 nova_compute[244014]: 2026-02-25 12:40:56.711 244018 DEBUG nova.compute.manager [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:40:56 np0005629333 nova_compute[244014]: 2026-02-25 12:40:56.771 244018 INFO nova.compute.manager [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Took 11.60 seconds to build instance.#033[00m
Feb 25 07:40:56 np0005629333 nova_compute[244014]: 2026-02-25 12:40:56.792 244018 DEBUG oslo_concurrency.lockutils [None req-5e55e856-7d72-4cba-818f-678bc45bd55c 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
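
The Acquiring/acquired/"released" triplets that bracket _locked_do_build_and_run_instance (held 11.692s here) are oslo.concurrency's named internal locks; lockutils itself emits the wait and hold durations at DEBUG. A sketch of the same pattern, assuming oslo.concurrency is available; the decorated function is illustrative, not nova's actual code:

```python
from oslo_concurrency import lockutils

# Serialize all build attempts for one instance UUID, as nova does above;
# lockutils logs the same acquired/released DEBUG lines automatically.
@lockutils.synchronized("4717553a-9bb9-4278-b610-7c446db3f8d9")
def locked_do_build_and_run_instance():
    ...  # spawn the guest while holding the per-instance lock
```
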
Feb 25 07:40:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 325 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 747 KiB/s rd, 3.6 MiB/s wr, 88 op/s
Feb 25 07:40:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:40:58 np0005629333 nova_compute[244014]: 2026-02-25 12:40:58.748 244018 DEBUG nova.compute.manager [req-b86272f8-caf0-4618-9697-22456ef97720 req-0862e25a-add2-4440-9fad-069e6b9faf1f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Received event network-vif-plugged-6150f46a-debb-49d6-b2e0-07b81f4cccea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:40:58 np0005629333 nova_compute[244014]: 2026-02-25 12:40:58.748 244018 DEBUG oslo_concurrency.lockutils [req-b86272f8-caf0-4618-9697-22456ef97720 req-0862e25a-add2-4440-9fad-069e6b9faf1f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:40:58 np0005629333 nova_compute[244014]: 2026-02-25 12:40:58.749 244018 DEBUG oslo_concurrency.lockutils [req-b86272f8-caf0-4618-9697-22456ef97720 req-0862e25a-add2-4440-9fad-069e6b9faf1f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:40:58 np0005629333 nova_compute[244014]: 2026-02-25 12:40:58.749 244018 DEBUG oslo_concurrency.lockutils [req-b86272f8-caf0-4618-9697-22456ef97720 req-0862e25a-add2-4440-9fad-069e6b9faf1f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:40:58 np0005629333 nova_compute[244014]: 2026-02-25 12:40:58.749 244018 DEBUG nova.compute.manager [req-b86272f8-caf0-4618-9697-22456ef97720 req-0862e25a-add2-4440-9fad-069e6b9faf1f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] No waiting events found dispatching network-vif-plugged-6150f46a-debb-49d6-b2e0-07b81f4cccea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:40:58 np0005629333 nova_compute[244014]: 2026-02-25 12:40:58.750 244018 WARNING nova.compute.manager [req-b86272f8-caf0-4618-9697-22456ef97720 req-0862e25a-add2-4440-9fad-069e6b9faf1f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Received unexpected event network-vif-plugged-6150f46a-debb-49d6-b2e0-07b81f4cccea for instance with vm_state active and task_state None.#033[00m
Feb 25 07:40:59 np0005629333 nova_compute[244014]: 2026-02-25 12:40:59.141 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:40:59 np0005629333 nova_compute[244014]: 2026-02-25 12:40:59.316 244018 DEBUG nova.network.neutron [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Successfully updated port: ca235479-72b7-40ba-b8e5-d8d3227079fc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:40:59 np0005629333 nova_compute[244014]: 2026-02-25 12:40:59.333 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "refresh_cache-bb4e80d2-c200-4c24-8154-e1a86d06946b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:40:59 np0005629333 nova_compute[244014]: 2026-02-25 12:40:59.333 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquired lock "refresh_cache-bb4e80d2-c200-4c24-8154-e1a86d06946b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:40:59 np0005629333 nova_compute[244014]: 2026-02-25 12:40:59.333 244018 DEBUG nova.network.neutron [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:40:59 np0005629333 nova_compute[244014]: 2026-02-25 12:40:59.477 244018 DEBUG nova.network.neutron [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:41:00 np0005629333 nova_compute[244014]: 2026-02-25 12:41:00.046 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 305 active+clean; 325 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 731 KiB/s rd, 2.3 MiB/s wr, 63 op/s
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.277 244018 DEBUG nova.network.neutron [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Updating instance_info_cache with network_info: [{"id": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "address": "fa:16:3e:c8:65:4a", "network": {"id": "2ffae040-354a-4380-b0b9-a74c45661bf1", "bridge": "br-int", "label": "tempest-network-smoke--1173854570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca235479-72", "ovs_interfaceid": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.312 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Releasing lock "refresh_cache-bb4e80d2-c200-4c24-8154-e1a86d06946b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.312 244018 DEBUG nova.compute.manager [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Instance network_info: |[{"id": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "address": "fa:16:3e:c8:65:4a", "network": {"id": "2ffae040-354a-4380-b0b9-a74c45661bf1", "bridge": "br-int", "label": "tempest-network-smoke--1173854570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca235479-72", "ovs_interfaceid": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.317 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Start _get_guest_xml network_info=[{"id": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "address": "fa:16:3e:c8:65:4a", "network": {"id": "2ffae040-354a-4380-b0b9-a74c45661bf1", "bridge": "br-int", "label": "tempest-network-smoke--1173854570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca235479-72", "ovs_interfaceid": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.322 244018 WARNING nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.329 244018 DEBUG nova.virt.libvirt.host [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.330 244018 DEBUG nova.virt.libvirt.host [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.335 244018 DEBUG nova.virt.libvirt.host [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.336 244018 DEBUG nova.virt.libvirt.host [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.337 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.338 244018 DEBUG nova.virt.hardware [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.339 244018 DEBUG nova.virt.hardware [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.339 244018 DEBUG nova.virt.hardware [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.340 244018 DEBUG nova.virt.hardware [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.340 244018 DEBUG nova.virt.hardware [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.340 244018 DEBUG nova.virt.hardware [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.341 244018 DEBUG nova.virt.hardware [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.341 244018 DEBUG nova.virt.hardware [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.342 244018 DEBUG nova.virt.hardware [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.342 244018 DEBUG nova.virt.hardware [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.343 244018 DEBUG nova.virt.hardware [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
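
The topology walk above is an enumeration of sockets * cores * threads factorizations of the vcpu count, bounded by the 65536 limits, with all flavor and image preferences at 0 (unset); for 1 vcpu the only candidate is 1:1:1. A simplified stand-in for that search; nova's real version in nova/virt/hardware.py additionally weighs the preferences, which are all zero here:

```python
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    """Enumerate (sockets, cores, threads) triples with s*c*t == vcpus."""
    topos = []
    for s in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % s:
            continue
        rest = vcpus // s
        for c in range(1, min(rest, max_cores) + 1):
            if rest % c:
                continue
            t = rest // c
            if t <= max_threads:
                topos.append((s, c, t))
    return topos

print(possible_topologies(1))  # -> [(1, 1, 1)], matching the log
```
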
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.347 244018 DEBUG oslo_concurrency.processutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.484 244018 DEBUG nova.compute.manager [req-e6ee8e28-aecb-4413-9a56-8830a84a57b9 req-744e0735-d460-49ed-9225-4f2553e6580f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Received event network-changed-ca235479-72b7-40ba-b8e5-d8d3227079fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.484 244018 DEBUG nova.compute.manager [req-e6ee8e28-aecb-4413-9a56-8830a84a57b9 req-744e0735-d460-49ed-9225-4f2553e6580f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Refreshing instance network info cache due to event network-changed-ca235479-72b7-40ba-b8e5-d8d3227079fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.485 244018 DEBUG oslo_concurrency.lockutils [req-e6ee8e28-aecb-4413-9a56-8830a84a57b9 req-744e0735-d460-49ed-9225-4f2553e6580f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-bb4e80d2-c200-4c24-8154-e1a86d06946b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.485 244018 DEBUG oslo_concurrency.lockutils [req-e6ee8e28-aecb-4413-9a56-8830a84a57b9 req-744e0735-d460-49ed-9225-4f2553e6580f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-bb4e80d2-c200-4c24-8154-e1a86d06946b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.485 244018 DEBUG nova.network.neutron [req-e6ee8e28-aecb-4413-9a56-8830a84a57b9 req-744e0735-d460-49ed-9225-4f2553e6580f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Refreshing network info cache for port ca235479-72b7-40ba-b8e5-d8d3227079fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:41:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:41:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:41:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:41:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:41:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:41:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:41:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:41:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3615576775' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.910 244018 DEBUG oslo_concurrency.processutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
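
Before touching RBD, nova shells out to ceph mon dump (0.563s here, audited by ceph-mon above) to learn the monitor map. A sketch of the same query with the credentials from the log, parsing the JSON monitor list:

```python
import json
import subprocess

out = subprocess.run(
    ["ceph", "mon", "dump", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True).stdout
mons = json.loads(out)["mons"]      # monitor entries from the mon map
print([m["name"] for m in mons])    # e.g. ['compute-0', ...]
```
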
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.937 244018 DEBUG nova.storage.rbd_utils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image bb4e80d2-c200-4c24-8154-e1a86d06946b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:01 np0005629333 nova_compute[244014]: 2026-02-25 12:41:01.943 244018 DEBUG oslo_concurrency.processutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.248 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "bd882e8f-3bea-4fa6-a185-186bc9911267" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.249 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 325 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 104 op/s
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.260 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "d547b0db-242e-49a5-8a76-5682b0235b6d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.261 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.279 244018 DEBUG nova.compute.manager [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.284 244018 DEBUG nova.compute.manager [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.375 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.376 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.377 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.386 244018 DEBUG nova.virt.hardware [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.387 244018 INFO nova.compute.claims [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:41:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:41:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3677413552' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.502 244018 DEBUG oslo_concurrency.processutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
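
Nova gathers Ceph cluster facts by shelling out to the ceph CLI through oslo.concurrency's processutils rather than binding librados in-process; the "Running cmd" / "returned: 0 in 0.559s" pair above is that helper's standard logging. A minimal sketch of the same call, with the flags copied from the log:

    from oslo_concurrency import processutils

    # execute() runs the argv in a subprocess, returns (stdout, stderr)
    # as strings, and raises ProcessExecutionError on a non-zero exit
    # (check_exit_code=True is the default).
    out, err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
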
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.505 244018 DEBUG nova.virt.libvirt.vif [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:40:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-262633266',display_name='tempest-TestNetworkAdvancedServerOps-server-262633266',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-262633266',id=102,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHLLfY/ZYTp8zkjISi8dRav/XikNBwgTA0veZE/C8b/WfaOQQWjNxzsgI+O1rEE3S7jO5qPjb3pQG3PNH0tfGcXeLebTLFJlD+G8jxg8kTGDYVpT+gC8btxGaaq2vWYzmA==',key_name='tempest-TestNetworkAdvancedServerOps-1175251533',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-uqls6dt7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:40:54Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=bb4e80d2-c200-4c24-8154-e1a86d06946b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "address": "fa:16:3e:c8:65:4a", "network": {"id": "2ffae040-354a-4380-b0b9-a74c45661bf1", "bridge": "br-int", "label": "tempest-network-smoke--1173854570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca235479-72", "ovs_interfaceid": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.505 244018 DEBUG nova.network.os_vif_util [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "address": "fa:16:3e:c8:65:4a", "network": {"id": "2ffae040-354a-4380-b0b9-a74c45661bf1", "bridge": "br-int", "label": "tempest-network-smoke--1173854570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca235479-72", "ovs_interfaceid": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.506 244018 DEBUG nova.network.os_vif_util [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:65:4a,bridge_name='br-int',has_traffic_filtering=True,id=ca235479-72b7-40ba-b8e5-d8d3227079fc,network=Network(2ffae040-354a-4380-b0b9-a74c45661bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca235479-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.507 244018 DEBUG nova.objects.instance [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'pci_devices' on Instance uuid bb4e80d2-c200-4c24-8154-e1a86d06946b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.524 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:41:02 np0005629333 nova_compute[244014]:  <uuid>bb4e80d2-c200-4c24-8154-e1a86d06946b</uuid>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:  <name>instance-00000066</name>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-262633266</nova:name>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:41:01</nova:creationTime>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:41:02 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:        <nova:user uuid="fb37a481eb114226822ed8b2ef4f9a89">tempest-TestNetworkAdvancedServerOps-1424801157-project-member</nova:user>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:        <nova:project uuid="6821a6e7edd54dbe97920b79aae8f54c">tempest-TestNetworkAdvancedServerOps-1424801157</nova:project>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:        <nova:port uuid="ca235479-72b7-40ba-b8e5-d8d3227079fc">
Feb 25 07:41:02 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <entry name="serial">bb4e80d2-c200-4c24-8154-e1a86d06946b</entry>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <entry name="uuid">bb4e80d2-c200-4c24-8154-e1a86d06946b</entry>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/bb4e80d2-c200-4c24-8154-e1a86d06946b_disk">
Feb 25 07:41:02 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:41:02 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/bb4e80d2-c200-4c24-8154-e1a86d06946b_disk.config">
Feb 25 07:41:02 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:41:02 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:c8:65:4a"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <target dev="tapca235479-72"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/bb4e80d2-c200-4c24-8154-e1a86d06946b/console.log" append="off"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:41:02 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:41:02 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:41:02 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:41:02 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
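
_get_guest_xml only renders the domain definition; the spawn path then hands the XML to libvirt to define and boot the guest. A hedged sketch of that step with the libvirt-python bindings (the URI and file name are assumptions, not taken from the log):

    import libvirt

    xml = open('domain.xml').read()        # e.g. the <domain> dumped above
    conn = libvirt.open('qemu:///system')  # assumed local system URI
    dom = conn.defineXML(xml)              # persist the domain definition
    dom.create()                           # boot instance-00000066
    print(dom.name(), dom.UUIDString())
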
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.526 244018 DEBUG nova.compute.manager [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Preparing to wait for external event network-vif-plugged-ca235479-72b7-40ba-b8e5-d8d3227079fc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.526 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.526 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.526 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
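
prepare_for_instance_event registers a waiter for network-vif-plugged-ca235479-72b7-40ba-b8e5-d8d3227079fc before the port is actually plugged, so the later Neutron notification cannot arrive unobserved in between. Nova's InstanceEvents machinery is more elaborate; a toy sketch of the same register-then-wait pattern:

    import threading

    events = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_for_instance_event(instance_uuid, event_name):
        # Register the waiter *before* triggering the action that will
        # produce the event, closing the plug/notify race.
        return events.setdefault((instance_uuid, event_name),
                                 threading.Event())

    waiter = prepare_for_instance_event(
        'bb4e80d2-c200-4c24-8154-e1a86d06946b',
        'network-vif-plugged-ca235479-72b7-40ba-b8e5-d8d3227079fc')
    # ... plug the VIF; the event callback calls .set() on the same
    # Event, and the build thread blocks in waiter.wait(timeout).
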
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.527 244018 DEBUG nova.virt.libvirt.vif [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:40:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-262633266',display_name='tempest-TestNetworkAdvancedServerOps-server-262633266',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-262633266',id=102,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHLLfY/ZYTp8zkjISi8dRav/XikNBwgTA0veZE/C8b/WfaOQQWjNxzsgI+O1rEE3S7jO5qPjb3pQG3PNH0tfGcXeLebTLFJlD+G8jxg8kTGDYVpT+gC8btxGaaq2vWYzmA==',key_name='tempest-TestNetworkAdvancedServerOps-1175251533',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-uqls6dt7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:40:54Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=bb4e80d2-c200-4c24-8154-e1a86d06946b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "address": "fa:16:3e:c8:65:4a", "network": {"id": "2ffae040-354a-4380-b0b9-a74c45661bf1", "bridge": "br-int", "label": "tempest-network-smoke--1173854570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca235479-72", "ovs_interfaceid": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.527 244018 DEBUG nova.network.os_vif_util [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "address": "fa:16:3e:c8:65:4a", "network": {"id": "2ffae040-354a-4380-b0b9-a74c45661bf1", "bridge": "br-int", "label": "tempest-network-smoke--1173854570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca235479-72", "ovs_interfaceid": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.528 244018 DEBUG nova.network.os_vif_util [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:65:4a,bridge_name='br-int',has_traffic_filtering=True,id=ca235479-72b7-40ba-b8e5-d8d3227079fc,network=Network(2ffae040-354a-4380-b0b9-a74c45661bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca235479-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.528 244018 DEBUG os_vif [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:65:4a,bridge_name='br-int',has_traffic_filtering=True,id=ca235479-72b7-40ba-b8e5-d8d3227079fc,network=Network(2ffae040-354a-4380-b0b9-a74c45661bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca235479-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.529 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.529 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.530 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.535 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.535 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca235479-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.536 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapca235479-72, col_values=(('external_ids', {'iface-id': 'ca235479-72b7-40ba-b8e5-d8d3227079fc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c8:65:4a', 'vm-uuid': 'bb4e80d2-c200-4c24-8154-e1a86d06946b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:02 np0005629333 NetworkManager[49836]: <info>  [1772023262.5385] manager: (tapca235479-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/409)
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.540 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.543 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.544 244018 INFO os_vif [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:65:4a,bridge_name='br-int',has_traffic_filtering=True,id=ca235479-72b7-40ba-b8e5-d8d3227079fc,network=Network(2ffae040-354a-4380-b0b9-a74c45661bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca235479-72')#033[00m
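
The Converting VIF / Plugging vif / Successfully plugged triple is the os-vif library at work: Nova converts its network-info dict into a VIFOpenVSwitch object and dispatches to the ovs plugin, which issued the AddBridgeCommand/AddPortCommand/DbSetCommand transactions logged just above. A hedged sketch of the library-level call, with IDs taken from the log:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # load the ovs/linux_bridge/... plugins

    inst = instance_info.InstanceInfo(
        uuid='bb4e80d2-c200-4c24-8154-e1a86d06946b',
        name='instance-00000066')
    port = vif.VIFOpenVSwitch(
        id='ca235479-72b7-40ba-b8e5-d8d3227079fc',
        address='fa:16:3e:c8:65:4a',
        bridge_name='br-int',
        vif_name='tapca235479-72',
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='ca235479-72b7-40ba-b8e5-d8d3227079fc'),
        network=network.Network(id='2ffae040-354a-4380-b0b9-a74c45661bf1'))

    os_vif.plug(port, inst)  # creates tapca235479-72 on br-int
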
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.561 244018 DEBUG oslo_concurrency.processutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.621 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.621 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.622 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No VIF found with MAC fa:16:3e:c8:65:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.622 244018 INFO nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Using config drive#033[00m
Feb 25 07:41:02 np0005629333 nova_compute[244014]: 2026-02-25 12:41:02.644 244018 DEBUG nova.storage.rbd_utils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image bb4e80d2-c200-4c24-8154-e1a86d06946b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:41:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/301190081' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
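
On the monitor side, each of those CLI calls surfaces as a mon_command dispatch audited for entity client.openstack. The same query can be made without spawning a subprocess through the python rados bindings; an illustrative sketch (credentials as in the log):

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    # Equivalent of "ceph df --format=json"; the monitor would audit it
    # as mon_command({"prefix": "df", "format": "json"}).
    ret, out, errs = cluster.mon_command(
        json.dumps({'prefix': 'df', 'format': 'json'}), b'')
    stats = json.loads(out)
    cluster.shutdown()
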
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.138 244018 DEBUG oslo_concurrency.processutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.143 244018 DEBUG nova.compute.provider_tree [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.163 244018 DEBUG nova.scheduler.client.report [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
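
The inventory dict above is what placement prices the host at: usable capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the logged values:

    # Placement's capacity formula applied to the inventory in the log.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
        print(rc, capacity)   # VCPU 32, MEMORY_MB 7167, DISK_GB 52
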
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.187 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.188 244018 DEBUG nova.compute.manager [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.191 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.197 244018 DEBUG nova.virt.hardware [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.197 244018 INFO nova.compute.claims [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.274 244018 DEBUG nova.compute.manager [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.275 244018 DEBUG nova.network.neutron [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:41:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.292 244018 INFO nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.311 244018 DEBUG nova.compute.manager [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.392 244018 DEBUG nova.compute.manager [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.393 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.394 244018 INFO nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Creating image(s)#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.412 244018 DEBUG nova.storage.rbd_utils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image bd882e8f-3bea-4fa6-a185-186bc9911267_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.434 244018 DEBUG nova.storage.rbd_utils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image bd882e8f-3bea-4fa6-a185-186bc9911267_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.459 244018 DEBUG nova.storage.rbd_utils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image bd882e8f-3bea-4fa6-a185-186bc9911267_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.464 244018 DEBUG oslo_concurrency.processutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.491 244018 DEBUG oslo_concurrency.processutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.524 244018 DEBUG oslo_concurrency.processutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.524 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.525 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.525 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.547 244018 DEBUG nova.storage.rbd_utils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image bd882e8f-3bea-4fa6-a185-186bc9911267_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.550 244018 DEBUG oslo_concurrency.processutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 bd882e8f-3bea-4fa6-a185-186bc9911267_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.640 244018 DEBUG nova.network.neutron [req-e6ee8e28-aecb-4413-9a56-8830a84a57b9 req-744e0735-d460-49ed-9225-4f2553e6580f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Updated VIF entry in instance network info cache for port ca235479-72b7-40ba-b8e5-d8d3227079fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.641 244018 DEBUG nova.network.neutron [req-e6ee8e28-aecb-4413-9a56-8830a84a57b9 req-744e0735-d460-49ed-9225-4f2553e6580f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Updating instance_info_cache with network_info: [{"id": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "address": "fa:16:3e:c8:65:4a", "network": {"id": "2ffae040-354a-4380-b0b9-a74c45661bf1", "bridge": "br-int", "label": "tempest-network-smoke--1173854570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca235479-72", "ovs_interfaceid": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.651 244018 DEBUG nova.policy [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c5bc24b5f5048469cf3f701ce511bfa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '503e879cd1f44a16b9baef106ceba949', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.661 244018 DEBUG oslo_concurrency.lockutils [req-e6ee8e28-aecb-4413-9a56-8830a84a57b9 req-744e0735-d460-49ed-9225-4f2553e6580f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-bb4e80d2-c200-4c24-8154-e1a86d06946b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.785 244018 INFO nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Creating config drive at /var/lib/nova/instances/bb4e80d2-c200-4c24-8154-e1a86d06946b/disk.config#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.789 244018 DEBUG oslo_concurrency.processutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bb4e80d2-c200-4c24-8154-e1a86d06946b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1g3f49x3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.815 244018 DEBUG oslo_concurrency.processutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 bd882e8f-3bea-4fa6-a185-186bc9911267_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.264s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.871 244018 DEBUG nova.storage.rbd_utils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] resizing rbd image bd882e8f-3bea-4fa6-a185-186bc9911267_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
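
nova.storage.rbd_utils is a thin wrapper over the python rbd bindings: the CLI import above created vms/bd882e8f-3bea-4fa6-a185-186bc9911267_disk from the cached base image, and the resize grows it to the flavor's 1 GiB root disk. A hedged sketch of the resize as the bindings express it:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    # Matches "resizing rbd image ..._disk to 1073741824" in the log.
    with rbd.Image(ioctx, 'bd882e8f-3bea-4fa6-a185-186bc9911267_disk') as img:
        img.resize(1073741824)  # bytes: root_gb=1 -> 1 GiB
    ioctx.close()
    cluster.shutdown()
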
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.919 244018 DEBUG oslo_concurrency.processutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bb4e80d2-c200-4c24-8154-e1a86d06946b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1g3f49x3" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.940 244018 DEBUG nova.storage.rbd_utils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image bb4e80d2-c200-4c24-8154-e1a86d06946b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:03 np0005629333 nova_compute[244014]: 2026-02-25 12:41:03.946 244018 DEBUG oslo_concurrency.processutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bb4e80d2-c200-4c24-8154-e1a86d06946b/disk.config bb4e80d2-c200-4c24-8154-e1a86d06946b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.009 244018 DEBUG nova.objects.instance [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'migration_context' on Instance uuid bd882e8f-3bea-4fa6-a185-186bc9911267 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.023 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.024 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Ensure instance console log exists: /var/lib/nova/instances/bd882e8f-3bea-4fa6-a185-186bc9911267/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.024 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.024 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.025 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:41:04 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3249733971' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.081 244018 DEBUG oslo_concurrency.processutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.086 244018 DEBUG nova.compute.provider_tree [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.098 244018 DEBUG oslo_concurrency.processutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bb4e80d2-c200-4c24-8154-e1a86d06946b/disk.config bb4e80d2-c200-4c24-8154-e1a86d06946b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.098 244018 INFO nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Deleting local config drive /var/lib/nova/instances/bb4e80d2-c200-4c24-8154-e1a86d06946b/disk.config because it was imported into RBD.#033[00m
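
[editor's note] The two lines above are one step: import the locally built config drive into the "vms" pool as a format-2 RBD image, then drop the local file. A sketch using the same CLI arguments; the path and image name are placeholders, not values to reuse:

    import os
    import subprocess

    path = '/var/lib/nova/instances/<uuid>/disk.config'  # placeholder path
    image = '<uuid>_disk.config'                         # placeholder image name
    subprocess.check_call(
        ['rbd', 'import', '--pool', 'vms', path, image,
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'])
    os.remove(path)  # "Deleting local config drive ... imported into RBD"
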
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.104 244018 DEBUG nova.scheduler.client.report [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
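
[editor's note] Placement turns the inventory dict above into schedulable capacity as (total - reserved) * allocation_ratio, so those figures work out as follows:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
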
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.122 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.123 244018 DEBUG nova.compute.manager [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:41:04 np0005629333 NetworkManager[49836]: <info>  [1772023264.1475] manager: (tapca235479-72): new Tun device (/org/freedesktop/NetworkManager/Devices/410)
Feb 25 07:41:04 np0005629333 kernel: tapca235479-72: entered promiscuous mode
Feb 25 07:41:04 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:04Z|00988|binding|INFO|Claiming lport ca235479-72b7-40ba-b8e5-d8d3227079fc for this chassis.
Feb 25 07:41:04 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:04Z|00989|binding|INFO|ca235479-72b7-40ba-b8e5-d8d3227079fc: Claiming fa:16:3e:c8:65:4a 10.100.0.7
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.151 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.157 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.165 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:65:4a 10.100.0.7'], port_security=['fa:16:3e:c8:65:4a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bb4e80d2-c200-4c24-8154-e1a86d06946b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ffae040-354a-4380-b0b9-a74c45661bf1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf47c8ae-ac17-4dbe-9374-1ad52fb8ff7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=861eb98d-f02a-4401-a60e-4c578f807c6b, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ca235479-72b7-40ba-b8e5-d8d3227079fc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.167 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ca235479-72b7-40ba-b8e5-d8d3227079fc in datapath 2ffae040-354a-4380-b0b9-a74c45661bf1 bound to our chassis#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.167 244018 DEBUG nova.compute.manager [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.167 244018 DEBUG nova.network.neutron [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.168 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2ffae040-354a-4380-b0b9-a74c45661bf1#033[00m
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.177 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[316b6b71-22a0-4467-b8f2-1e047d12ff02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.178 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2ffae040-31 in ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.180 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2ffae040-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.180 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2821417d-12ee-400f-bf6c-d35f1dde7b68]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.182 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.181 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6f1178ea-372a-409d-918e-23d97b7e485a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:04 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:04Z|00990|binding|INFO|Setting lport ca235479-72b7-40ba-b8e5-d8d3227079fc ovn-installed in OVS
Feb 25 07:41:04 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:04Z|00991|binding|INFO|Setting lport ca235479-72b7-40ba-b8e5-d8d3227079fc up in Southbound
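
[editor's note] Once ovn-controller reports the lport up in the Southbound DB, the claim can be cross-checked against the same Port_Binding table; a sketch using ovn-sbctl's find syntax (assumes a reachable southbound connection on this host):

    import subprocess

    lport = 'ca235479-72b7-40ba-b8e5-d8d3227079fc'
    print(subprocess.check_output(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=' + lport]).decode())
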
Feb 25 07:41:04 np0005629333 systemd-udevd[334606]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.187 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.193 244018 INFO nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.194 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[13469c44-2f7e-4f35-a087-2541fa83af94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:04 np0005629333 systemd-machined[210048]: New machine qemu-129-instance-00000066.
Feb 25 07:41:04 np0005629333 NetworkManager[49836]: <info>  [1772023264.2041] device (tapca235479-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:41:04 np0005629333 NetworkManager[49836]: <info>  [1772023264.2047] device (tapca235479-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:41:04 np0005629333 systemd[1]: Started Virtual Machine qemu-129-instance-00000066.
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.210 244018 DEBUG nova.compute.manager [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.216 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6c03a7fc-5a54-4cd6-9db6-271459f81c04]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
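
[editor's note] That privsep reply carries the (stdout, stderr, exit code) triple of a sysctl call made while provisioning the namespace. The toggle it reports can be read back directly from /proc; run inside the ovnmeta namespace to see the namespaced copy rather than the host's:

    with open('/proc/sys/net/ipv4/conf/all/promote_secondaries') as f:
        print(f.read().strip())  # "1" once the agent has set it
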
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.243 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a39e6618-7471-4289-876b-1cf25ff83708]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:04 np0005629333 systemd-udevd[334610]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.247 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9240d99a-1448-4914-8279-beefd3f4f875]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:04 np0005629333 NetworkManager[49836]: <info>  [1772023264.2485] manager: (tap2ffae040-30): new Veth device (/org/freedesktop/NetworkManager/Devices/411)
Feb 25 07:41:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 305 active+clean; 325 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.275 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0d0e9628-751d-4e4d-b4ff-314eb41ca9f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.278 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[86866db2-a219-494f-8095-84be7c996b79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:04 np0005629333 NetworkManager[49836]: <info>  [1772023264.3026] device (tap2ffae040-30): carrier: link connected
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.305 244018 DEBUG nova.compute.manager [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.308 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.310 244018 INFO nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Creating image(s)#033[00m
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.310 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f67f5d45-566e-45c1-8eb0-ad48d91c1bc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.328 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[be0504b4-31e9-4924-b68e-8fd482e6e6df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ffae040-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:73:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 297], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 523371, 'reachable_time': 26475, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334638, 'error': None, 'target': 'ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.340 244018 DEBUG nova.storage.rbd_utils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image d547b0db-242e-49a5-8a76-5682b0235b6d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.347 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[67551647-3680-4627-ac37-35f589afb405]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe55:7355'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 523371, 'tstamp': 523371}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334655, 'error': None, 'target': 'ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
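
[editor's note] The fe80:: address in that RTM_NEWADDR reply is not arbitrary: it is the EUI-64 link-local form of the veth MAC fa:16:3e:55:73:55 seen in the RTM_NEWLINK dump above. The derivation:

    def mac_to_link_local(mac):
        # Flip the universal/local bit of the first octet (fa -> f8) and
        # splice ff:fe into the middle of the MAC, per RFC 4291 EUI-64.
        b = [int(x, 16) for x in mac.split(':')]
        b[0] ^= 0x02
        eui64 = b[:3] + [0xff, 0xfe] + b[3:]
        return 'fe80::' + ':'.join(
            '%02x%02x' % (eui64[i], eui64[i + 1]) for i in range(0, 8, 2))

    print(mac_to_link_local('fa:16:3e:55:73:55'))  # fe80::f816:3eff:fe55:7355
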
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.365 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7c3029e4-d628-4ca0-8c7b-0de2da448462]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2ffae040-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:73:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 297], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 523371, 'reachable_time': 26475, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 334658, 'error': None, 'target': 'ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.383 244018 DEBUG nova.storage.rbd_utils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image d547b0db-242e-49a5-8a76-5682b0235b6d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.389 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[56bd1098-b4ab-4830-a24e-2dc3a3345022]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.419 244018 DEBUG nova.storage.rbd_utils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image d547b0db-242e-49a5-8a76-5682b0235b6d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.427 244018 DEBUG oslo_concurrency.processutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.452 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8728462e-90aa-4d6f-bdb6-2269eeb18b57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.455 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ffae040-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.455 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.456 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ffae040-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:04 np0005629333 kernel: tap2ffae040-30: entered promiscuous mode
Feb 25 07:41:04 np0005629333 NetworkManager[49836]: <info>  [1772023264.4581] manager: (tap2ffae040-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/412)
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.461 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2ffae040-30, col_values=(('external_ids', {'iface-id': '866d334c-e04a-4c89-acaa-4482da6c6f0c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
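
[editor's note] The DbSetCommand above is the step that hands the tap device to OVN: setting external_ids:iface-id on the OVS interface is what lets ovn-controller match it to its logical port. The equivalent CLI one-liner (values copied from the log):

    import subprocess

    subprocess.check_call(
        ['ovs-vsctl', 'set', 'Interface', 'tap2ffae040-30',
         'external_ids:iface-id=866d334c-e04a-4c89-acaa-4482da6c6f0c'])
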
Feb 25 07:41:04 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:04Z|00992|binding|INFO|Releasing lport 866d334c-e04a-4c89-acaa-4482da6c6f0c from this chassis (sb_readonly=0)
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.463 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2ffae040-354a-4380-b0b9-a74c45661bf1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2ffae040-354a-4380-b0b9-a74c45661bf1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.464 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7f0d066f-1c54-45a2-9262-a83987f558dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.465 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-2ffae040-354a-4380-b0b9-a74c45661bf1
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/2ffae040-354a-4380-b0b9-a74c45661bf1.pid.haproxy
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 2ffae040-354a-4380-b0b9-a74c45661bf1
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
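
[editor's note] The agent renders that haproxy config per network: every path and tag embeds the datapath UUID. A trimmed sketch of the naming scheme only, as a stand-in for neutron's real template:

    network_id = '2ffae040-354a-4380-b0b9-a74c45661bf1'
    cfg_path = '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % network_id
    pidfile = '/var/lib/neutron/external/pids/%s.pid.haproxy' % network_id
    log_tag = 'haproxy-metadata-proxy-%s' % network_id
    print(cfg_path, pidfile, log_tag, sep='\n')
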
Feb 25 07:41:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:04.467 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1', 'env', 'PROCESS_TAG=haproxy-2ffae040-354a-4380-b0b9-a74c45661bf1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2ffae040-354a-4380-b0b9-a74c45661bf1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
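
[editor's note] Stripped of the rootwrap and PROCESS_TAG wrapping, the launch above is a namespace-scoped haproxy invocation: everything after "ip netns exec <ns>" runs inside that network namespace. A sketch (requires root):

    import subprocess

    ns = 'ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1'
    cfg = '/var/lib/neutron/ovn-metadata-proxy/2ffae040-354a-4380-b0b9-a74c45661bf1.conf'
    subprocess.check_call(['ip', 'netns', 'exec', ns, 'haproxy', '-f', cfg])
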
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.480 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.517 244018 DEBUG oslo_concurrency.processutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
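
[editor's note] Note the guard rails on that qemu-img call: oslo's prlimit wrapper caps the child at 1 GiB of address space and 30 CPU seconds before it parses a potentially untrusted image header. A sketch of the same guarded probe; the printed JSON keys are standard qemu-img output:

    import json
    import subprocess

    path = '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'
    out = subprocess.check_output(
        ['python3', '-m', 'oslo_concurrency.prlimit',
         '--as=1073741824', '--cpu=30', '--',
         'env', 'LC_ALL=C', 'LANG=C',
         'qemu-img', 'info', path, '--force-share', '--output=json'])
    info = json.loads(out)
    print(info['format'], info['virtual-size'])
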
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.518 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.519 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.519 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.547 244018 DEBUG nova.storage.rbd_utils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image d547b0db-242e-49a5-8a76-5682b0235b6d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.554 244018 DEBUG oslo_concurrency.processutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d547b0db-242e-49a5-8a76-5682b0235b6d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:04 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.793 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023264.792724, bb4e80d2-c200-4c24-8154-e1a86d06946b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.793 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] VM Started (Lifecycle Event)#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.799 244018 DEBUG oslo_concurrency.processutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d547b0db-242e-49a5-8a76-5682b0235b6d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.246s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.841 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.871 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023264.792899, bb4e80d2-c200-4c24-8154-e1a86d06946b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.871 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:41:04 np0005629333 podman[334809]: 2026-02-25 12:41:04.872331611 +0000 UTC m=+0.043652483 container create 73db30a6660878239719f63131246132729224544a65e2b2b08a344cd16cd5e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.882 244018 DEBUG nova.storage.rbd_utils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image d547b0db-242e-49a5-8a76-5682b0235b6d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
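
[editor's note] The resize target above is simply the flavor's root disk size converted to bytes:

    root_gb = 1                                # hypothetical flavor root_gb
    assert root_gb * 1024 ** 3 == 1073741824   # the size in the log line
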
Feb 25 07:41:04 np0005629333 systemd[1]: Started libpod-conmon-73db30a6660878239719f63131246132729224544a65e2b2b08a344cd16cd5e1.scope.
Feb 25 07:41:04 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:41:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/308c3469f7246fcc17ea83f23367abb6b8fd53a71b962f6f2bf6602aad8594fe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
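
[editor's note] The kernel's "(0x7fffffff)" aside is the signed 32-bit epoch ceiling; decoded, it is the familiar Y2038 cutoff:

    from datetime import datetime, timezone
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
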
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.928 244018 DEBUG nova.policy [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.931 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.937 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
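
[editor's note] The numeric states in that sync line follow nova's power-state enum, so "DB power_state: 0, VM power_state: 3" reads as NOSTATE in the database versus PAUSED on the hypervisor:

    # Values from nova.compute.power_state.
    POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                    4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}
    print(POWER_STATES[0], '->', POWER_STATES[3])  # NOSTATE -> PAUSED
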
Feb 25 07:41:04 np0005629333 podman[334809]: 2026-02-25 12:41:04.942594113 +0000 UTC m=+0.113914985 container init 73db30a6660878239719f63131246132729224544a65e2b2b08a344cd16cd5e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:41:04 np0005629333 podman[334809]: 2026-02-25 12:41:04.847991404 +0000 UTC m=+0.019312306 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:41:04 np0005629333 podman[334809]: 2026-02-25 12:41:04.949247421 +0000 UTC m=+0.120568293 container start 73db30a6660878239719f63131246132729224544a65e2b2b08a344cd16cd5e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
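
[editor's note] The create/init/start trio from podman can be cross-checked by inspecting the container named in those events; podman's inspect output is a JSON array:

    import json
    import subprocess

    name = 'neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1'
    state = json.loads(subprocess.check_output(
        ['podman', 'inspect', name]))[0]['State']['Status']
    print(state)  # "running" after the start event above
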
Feb 25 07:41:04 np0005629333 neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1[334874]: [NOTICE]   (334881) : New worker (334901) forked
Feb 25 07:41:04 np0005629333 neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1[334874]: [NOTICE]   (334881) : Loading success.
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.976 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.980 244018 DEBUG nova.objects.instance [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid d547b0db-242e-49a5-8a76-5682b0235b6d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.996 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.996 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Ensure instance console log exists: /var/lib/nova/instances/d547b0db-242e-49a5-8a76-5682b0235b6d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.997 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.997 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:04 np0005629333 nova_compute[244014]: 2026-02-25 12:41:04.997 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:05 np0005629333 nova_compute[244014]: 2026-02-25 12:41:05.046 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:05 np0005629333 nova_compute[244014]: 2026-02-25 12:41:05.129 244018 DEBUG nova.compute.manager [req-2c447212-360f-4beb-8040-23caba22e935 req-26168cf2-0e38-41ab-b215-abd27e5683b3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Received event network-vif-plugged-ca235479-72b7-40ba-b8e5-d8d3227079fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:41:05 np0005629333 nova_compute[244014]: 2026-02-25 12:41:05.129 244018 DEBUG oslo_concurrency.lockutils [req-2c447212-360f-4beb-8040-23caba22e935 req-26168cf2-0e38-41ab-b215-abd27e5683b3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:05 np0005629333 nova_compute[244014]: 2026-02-25 12:41:05.129 244018 DEBUG oslo_concurrency.lockutils [req-2c447212-360f-4beb-8040-23caba22e935 req-26168cf2-0e38-41ab-b215-abd27e5683b3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:05 np0005629333 nova_compute[244014]: 2026-02-25 12:41:05.129 244018 DEBUG oslo_concurrency.lockutils [req-2c447212-360f-4beb-8040-23caba22e935 req-26168cf2-0e38-41ab-b215-abd27e5683b3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:05 np0005629333 nova_compute[244014]: 2026-02-25 12:41:05.130 244018 DEBUG nova.compute.manager [req-2c447212-360f-4beb-8040-23caba22e935 req-26168cf2-0e38-41ab-b215-abd27e5683b3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Processing event network-vif-plugged-ca235479-72b7-40ba-b8e5-d8d3227079fc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:41:05 np0005629333 nova_compute[244014]: 2026-02-25 12:41:05.130 244018 DEBUG nova.compute.manager [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:41:05 np0005629333 nova_compute[244014]: 2026-02-25 12:41:05.142 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023265.1420126, bb4e80d2-c200-4c24-8154-e1a86d06946b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:41:05 np0005629333 nova_compute[244014]: 2026-02-25 12:41:05.142 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:41:05 np0005629333 nova_compute[244014]: 2026-02-25 12:41:05.149 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:41:05 np0005629333 nova_compute[244014]: 2026-02-25 12:41:05.152 244018 INFO nova.virt.libvirt.driver [-] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Instance spawned successfully.#033[00m
Feb 25 07:41:05 np0005629333 nova_compute[244014]: 2026-02-25 12:41:05.152 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:41:05 np0005629333 nova_compute[244014]: 2026-02-25 12:41:05.172 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:41:05 np0005629333 nova_compute[244014]: 2026-02-25 12:41:05.178 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:41:05 np0005629333 nova_compute[244014]: 2026-02-25 12:41:05.181 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:41:05 np0005629333 nova_compute[244014]: 2026-02-25 12:41:05.181 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:41:05 np0005629333 nova_compute[244014]: 2026-02-25 12:41:05.181 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:41:05 np0005629333 nova_compute[244014]: 2026-02-25 12:41:05.182 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:41:05 np0005629333 nova_compute[244014]: 2026-02-25 12:41:05.182 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:41:05 np0005629333 nova_compute[244014]: 2026-02-25 12:41:05.182 244018 DEBUG nova.virt.libvirt.driver [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:41:05 np0005629333 nova_compute[244014]: 2026-02-25 12:41:05.215 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:41:05 np0005629333 nova_compute[244014]: 2026-02-25 12:41:05.251 244018 INFO nova.compute.manager [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Took 11.11 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:41:05 np0005629333 nova_compute[244014]: 2026-02-25 12:41:05.252 244018 DEBUG nova.compute.manager [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:41:05 np0005629333 nova_compute[244014]: 2026-02-25 12:41:05.309 244018 INFO nova.compute.manager [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Took 12.23 seconds to build instance.#033[00m
Feb 25 07:41:05 np0005629333 nova_compute[244014]: 2026-02-25 12:41:05.326 244018 DEBUG oslo_concurrency.lockutils [None req-887499e5-dfbd-40a5-a7ae-1b4ce4a71d9b fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.322s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:05 np0005629333 nova_compute[244014]: 2026-02-25 12:41:05.338 244018 DEBUG nova.network.neutron [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Successfully created port: 12e30ba1-a42b-437b-b14c-ab41975d5f53 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:41:05 np0005629333 nova_compute[244014]: 2026-02-25 12:41:05.707 244018 DEBUG nova.network.neutron [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Successfully created port: 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:41:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1798: 305 pgs: 305 active+clean; 325 MiB data, 912 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Feb 25 07:41:06 np0005629333 nova_compute[244014]: 2026-02-25 12:41:06.464 244018 DEBUG nova.network.neutron [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Successfully updated port: 12e30ba1-a42b-437b-b14c-ab41975d5f53 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:41:06 np0005629333 nova_compute[244014]: 2026-02-25 12:41:06.541 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "refresh_cache-bd882e8f-3bea-4fa6-a185-186bc9911267" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:41:06 np0005629333 nova_compute[244014]: 2026-02-25 12:41:06.542 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquired lock "refresh_cache-bd882e8f-3bea-4fa6-a185-186bc9911267" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:41:06 np0005629333 nova_compute[244014]: 2026-02-25 12:41:06.542 244018 DEBUG nova.network.neutron [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:41:06 np0005629333 nova_compute[244014]: 2026-02-25 12:41:06.787 244018 DEBUG nova.network.neutron [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:41:07 np0005629333 nova_compute[244014]: 2026-02-25 12:41:07.269 244018 DEBUG nova.compute.manager [req-7951c04e-6335-4ea1-a5d5-8c022f2542e8 req-f7d32bf3-d100-4c03-b411-9de3fb9cfcdf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Received event network-vif-plugged-ca235479-72b7-40ba-b8e5-d8d3227079fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:41:07 np0005629333 nova_compute[244014]: 2026-02-25 12:41:07.269 244018 DEBUG oslo_concurrency.lockutils [req-7951c04e-6335-4ea1-a5d5-8c022f2542e8 req-f7d32bf3-d100-4c03-b411-9de3fb9cfcdf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:07 np0005629333 nova_compute[244014]: 2026-02-25 12:41:07.269 244018 DEBUG oslo_concurrency.lockutils [req-7951c04e-6335-4ea1-a5d5-8c022f2542e8 req-f7d32bf3-d100-4c03-b411-9de3fb9cfcdf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:07 np0005629333 nova_compute[244014]: 2026-02-25 12:41:07.270 244018 DEBUG oslo_concurrency.lockutils [req-7951c04e-6335-4ea1-a5d5-8c022f2542e8 req-f7d32bf3-d100-4c03-b411-9de3fb9cfcdf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:07 np0005629333 nova_compute[244014]: 2026-02-25 12:41:07.270 244018 DEBUG nova.compute.manager [req-7951c04e-6335-4ea1-a5d5-8c022f2542e8 req-f7d32bf3-d100-4c03-b411-9de3fb9cfcdf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] No waiting events found dispatching network-vif-plugged-ca235479-72b7-40ba-b8e5-d8d3227079fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:41:07 np0005629333 nova_compute[244014]: 2026-02-25 12:41:07.270 244018 WARNING nova.compute.manager [req-7951c04e-6335-4ea1-a5d5-8c022f2542e8 req-f7d32bf3-d100-4c03-b411-9de3fb9cfcdf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Received unexpected event network-vif-plugged-ca235479-72b7-40ba-b8e5-d8d3227079fc for instance with vm_state active and task_state None.#033[00m
Feb 25 07:41:07 np0005629333 nova_compute[244014]: 2026-02-25 12:41:07.270 244018 DEBUG nova.compute.manager [req-7951c04e-6335-4ea1-a5d5-8c022f2542e8 req-f7d32bf3-d100-4c03-b411-9de3fb9cfcdf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Received event network-changed-12e30ba1-a42b-437b-b14c-ab41975d5f53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:41:07 np0005629333 nova_compute[244014]: 2026-02-25 12:41:07.271 244018 DEBUG nova.compute.manager [req-7951c04e-6335-4ea1-a5d5-8c022f2542e8 req-f7d32bf3-d100-4c03-b411-9de3fb9cfcdf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Refreshing instance network info cache due to event network-changed-12e30ba1-a42b-437b-b14c-ab41975d5f53. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:41:07 np0005629333 nova_compute[244014]: 2026-02-25 12:41:07.271 244018 DEBUG oslo_concurrency.lockutils [req-7951c04e-6335-4ea1-a5d5-8c022f2542e8 req-f7d32bf3-d100-4c03-b411-9de3fb9cfcdf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-bd882e8f-3bea-4fa6-a185-186bc9911267" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:41:07 np0005629333 nova_compute[244014]: 2026-02-25 12:41:07.538 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 422 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.8 MiB/s wr, 251 op/s
Feb 25 07:41:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:41:08 np0005629333 nova_compute[244014]: 2026-02-25 12:41:08.627 244018 DEBUG nova.network.neutron [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Successfully updated port: 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:41:08 np0005629333 nova_compute[244014]: 2026-02-25 12:41:08.655 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:41:08 np0005629333 nova_compute[244014]: 2026-02-25 12:41:08.657 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:41:08 np0005629333 nova_compute[244014]: 2026-02-25 12:41:08.657 244018 DEBUG nova.network.neutron [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:41:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:08Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ee:a2:d4 10.100.0.9
Feb 25 07:41:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:08Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ee:a2:d4 10.100.0.9
Feb 25 07:41:08 np0005629333 nova_compute[244014]: 2026-02-25 12:41:08.958 244018 DEBUG nova.compute.manager [req-151e4f9b-ab40-4f06-b088-2d5208a621d0 req-dab74daf-226c-424a-b596-61e7601fd2e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Received event network-changed-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:41:08 np0005629333 nova_compute[244014]: 2026-02-25 12:41:08.959 244018 DEBUG nova.compute.manager [req-151e4f9b-ab40-4f06-b088-2d5208a621d0 req-dab74daf-226c-424a-b596-61e7601fd2e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Refreshing instance network info cache due to event network-changed-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:41:08 np0005629333 nova_compute[244014]: 2026-02-25 12:41:08.959 244018 DEBUG oslo_concurrency.lockutils [req-151e4f9b-ab40-4f06-b088-2d5208a621d0 req-dab74daf-226c-424a-b596-61e7601fd2e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:41:09 np0005629333 nova_compute[244014]: 2026-02-25 12:41:09.586 244018 DEBUG nova.network.neutron [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:41:09 np0005629333 nova_compute[244014]: 2026-02-25 12:41:09.655 244018 DEBUG nova.network.neutron [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Updating instance_info_cache with network_info: [{"id": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "address": "fa:16:3e:e6:70:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12e30ba1-a4", "ovs_interfaceid": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:41:09 np0005629333 nova_compute[244014]: 2026-02-25 12:41:09.679 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Releasing lock "refresh_cache-bd882e8f-3bea-4fa6-a185-186bc9911267" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:41:09 np0005629333 nova_compute[244014]: 2026-02-25 12:41:09.679 244018 DEBUG nova.compute.manager [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Instance network_info: |[{"id": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "address": "fa:16:3e:e6:70:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12e30ba1-a4", "ovs_interfaceid": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:41:09 np0005629333 nova_compute[244014]: 2026-02-25 12:41:09.680 244018 DEBUG oslo_concurrency.lockutils [req-7951c04e-6335-4ea1-a5d5-8c022f2542e8 req-f7d32bf3-d100-4c03-b411-9de3fb9cfcdf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-bd882e8f-3bea-4fa6-a185-186bc9911267" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:41:09 np0005629333 nova_compute[244014]: 2026-02-25 12:41:09.680 244018 DEBUG nova.network.neutron [req-7951c04e-6335-4ea1-a5d5-8c022f2542e8 req-f7d32bf3-d100-4c03-b411-9de3fb9cfcdf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Refreshing network info cache for port 12e30ba1-a42b-437b-b14c-ab41975d5f53 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:41:09 np0005629333 nova_compute[244014]: 2026-02-25 12:41:09.683 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Start _get_guest_xml network_info=[{"id": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "address": "fa:16:3e:e6:70:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12e30ba1-a4", "ovs_interfaceid": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:41:09 np0005629333 nova_compute[244014]: 2026-02-25 12:41:09.687 244018 WARNING nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:41:09 np0005629333 nova_compute[244014]: 2026-02-25 12:41:09.694 244018 DEBUG nova.virt.libvirt.host [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:41:09 np0005629333 nova_compute[244014]: 2026-02-25 12:41:09.695 244018 DEBUG nova.virt.libvirt.host [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:41:09 np0005629333 nova_compute[244014]: 2026-02-25 12:41:09.698 244018 DEBUG nova.virt.libvirt.host [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:41:09 np0005629333 nova_compute[244014]: 2026-02-25 12:41:09.699 244018 DEBUG nova.virt.libvirt.host [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:41:09 np0005629333 nova_compute[244014]: 2026-02-25 12:41:09.699 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:41:09 np0005629333 nova_compute[244014]: 2026-02-25 12:41:09.700 244018 DEBUG nova.virt.hardware [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:41:09 np0005629333 nova_compute[244014]: 2026-02-25 12:41:09.700 244018 DEBUG nova.virt.hardware [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:41:09 np0005629333 nova_compute[244014]: 2026-02-25 12:41:09.701 244018 DEBUG nova.virt.hardware [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:41:09 np0005629333 nova_compute[244014]: 2026-02-25 12:41:09.701 244018 DEBUG nova.virt.hardware [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:41:09 np0005629333 nova_compute[244014]: 2026-02-25 12:41:09.701 244018 DEBUG nova.virt.hardware [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:41:09 np0005629333 nova_compute[244014]: 2026-02-25 12:41:09.701 244018 DEBUG nova.virt.hardware [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:41:09 np0005629333 nova_compute[244014]: 2026-02-25 12:41:09.702 244018 DEBUG nova.virt.hardware [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:41:09 np0005629333 nova_compute[244014]: 2026-02-25 12:41:09.702 244018 DEBUG nova.virt.hardware [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:41:09 np0005629333 nova_compute[244014]: 2026-02-25 12:41:09.703 244018 DEBUG nova.virt.hardware [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:41:09 np0005629333 nova_compute[244014]: 2026-02-25 12:41:09.703 244018 DEBUG nova.virt.hardware [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:41:09 np0005629333 nova_compute[244014]: 2026-02-25 12:41:09.703 244018 DEBUG nova.virt.hardware [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:41:09 np0005629333 nova_compute[244014]: 2026-02-25 12:41:09.706 244018 DEBUG oslo_concurrency.processutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.048 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 422 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 4.0 MiB/s wr, 189 op/s
Feb 25 07:41:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:41:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2502032656' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.289 244018 DEBUG oslo_concurrency.processutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.317 244018 DEBUG nova.storage.rbd_utils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image bd882e8f-3bea-4fa6-a185-186bc9911267_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.322 244018 DEBUG oslo_concurrency.processutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:41:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3068695911' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.892 244018 DEBUG oslo_concurrency.processutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.895 244018 DEBUG nova.virt.libvirt.vif [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:41:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1299470364',display_name='tempest-ServersTestJSON-server-1299470364',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1299470364',id=103,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-itlntti8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:41:03Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=bd882e8f-3bea-4fa6-a185-186bc9911267,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "address": "fa:16:3e:e6:70:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12e30ba1-a4", "ovs_interfaceid": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.896 244018 DEBUG nova.network.os_vif_util [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "address": "fa:16:3e:e6:70:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12e30ba1-a4", "ovs_interfaceid": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.897 244018 DEBUG nova.network.os_vif_util [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:70:d4,bridge_name='br-int',has_traffic_filtering=True,id=12e30ba1-a42b-437b-b14c-ab41975d5f53,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12e30ba1-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.899 244018 DEBUG nova.objects.instance [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'pci_devices' on Instance uuid bd882e8f-3bea-4fa6-a185-186bc9911267 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.917 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:41:10 np0005629333 nova_compute[244014]:  <uuid>bd882e8f-3bea-4fa6-a185-186bc9911267</uuid>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:  <name>instance-00000067</name>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServersTestJSON-server-1299470364</nova:name>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:41:09</nova:creationTime>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:41:10 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:        <nova:user uuid="4c5bc24b5f5048469cf3f701ce511bfa">tempest-ServersTestJSON-1472039551-project-member</nova:user>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:        <nova:project uuid="503e879cd1f44a16b9baef106ceba949">tempest-ServersTestJSON-1472039551</nova:project>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:        <nova:port uuid="12e30ba1-a42b-437b-b14c-ab41975d5f53">
Feb 25 07:41:10 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <entry name="serial">bd882e8f-3bea-4fa6-a185-186bc9911267</entry>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <entry name="uuid">bd882e8f-3bea-4fa6-a185-186bc9911267</entry>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/bd882e8f-3bea-4fa6-a185-186bc9911267_disk">
Feb 25 07:41:10 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:41:10 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/bd882e8f-3bea-4fa6-a185-186bc9911267_disk.config">
Feb 25 07:41:10 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:41:10 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:e6:70:d4"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <target dev="tap12e30ba1-a4"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/bd882e8f-3bea-4fa6-a185-186bc9911267/console.log" append="off"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:41:10 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:41:10 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:41:10 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:41:10 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.922 244018 DEBUG nova.compute.manager [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Preparing to wait for external event network-vif-plugged-12e30ba1-a42b-437b-b14c-ab41975d5f53 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.923 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.923 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.923 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.924 244018 DEBUG nova.virt.libvirt.vif [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:41:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1299470364',display_name='tempest-ServersTestJSON-server-1299470364',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1299470364',id=103,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-itlntti8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:41:03Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=bd882e8f-3bea-4fa6-a185-186bc9911267,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "address": "fa:16:3e:e6:70:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12e30ba1-a4", "ovs_interfaceid": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.924 244018 DEBUG nova.network.os_vif_util [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "address": "fa:16:3e:e6:70:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12e30ba1-a4", "ovs_interfaceid": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.925 244018 DEBUG nova.network.os_vif_util [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:70:d4,bridge_name='br-int',has_traffic_filtering=True,id=12e30ba1-a42b-437b-b14c-ab41975d5f53,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12e30ba1-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.926 244018 DEBUG os_vif [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:70:d4,bridge_name='br-int',has_traffic_filtering=True,id=12e30ba1-a42b-437b-b14c-ab41975d5f53,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12e30ba1-a4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.926 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.927 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.927 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.929 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.929 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12e30ba1-a4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.930 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap12e30ba1-a4, col_values=(('external_ids', {'iface-id': '12e30ba1-a42b-437b-b14c-ab41975d5f53', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e6:70:d4', 'vm-uuid': 'bd882e8f-3bea-4fa6-a185-186bc9911267'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.932 244018 DEBUG nova.network.neutron [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Updating instance_info_cache with network_info: [{"id": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "address": "fa:16:3e:95:77:21", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7721", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96ad4438-9f", "ovs_interfaceid": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:41:10 np0005629333 NetworkManager[49836]: <info>  [1772023270.9328] manager: (tap12e30ba1-a4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/413)
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.934 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.936 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.936 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.937 244018 INFO os_vif [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:70:d4,bridge_name='br-int',has_traffic_filtering=True,id=12e30ba1-a42b-437b-b14c-ab41975d5f53,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12e30ba1-a4')#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.951 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.952 244018 DEBUG nova.compute.manager [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Instance network_info: |[{"id": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "address": "fa:16:3e:95:77:21", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7721", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96ad4438-9f", "ovs_interfaceid": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.952 244018 DEBUG oslo_concurrency.lockutils [req-151e4f9b-ab40-4f06-b088-2d5208a621d0 req-dab74daf-226c-424a-b596-61e7601fd2e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.952 244018 DEBUG nova.network.neutron [req-151e4f9b-ab40-4f06-b088-2d5208a621d0 req-dab74daf-226c-424a-b596-61e7601fd2e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Refreshing network info cache for port 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.955 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Start _get_guest_xml network_info=[{"id": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "address": "fa:16:3e:95:77:21", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7721", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96ad4438-9f", "ovs_interfaceid": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.959 244018 WARNING nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.964 244018 DEBUG nova.virt.libvirt.host [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.965 244018 DEBUG nova.virt.libvirt.host [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.972 244018 DEBUG nova.virt.libvirt.host [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.972 244018 DEBUG nova.virt.libvirt.host [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
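The driver probes cgroup v1 first and falls back to the unified (v2) hierarchy, which is where this EL9 host exposes its cpu controller. A rough equivalent of the v2 probe, assuming the default /sys/fs/cgroup mount (the helper name is hypothetical, not nova's):

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
        # On a unified (v2) hierarchy, cgroup.controllers at the mount root
        # lists every available controller, e.g. "cpuset cpu io memory ...".
        controllers = Path(root, 'cgroup.controllers')
        try:
            return 'cpu' in controllers.read_text().split()
        except FileNotFoundError:
            return False  # no v2 hierarchy mounted at this root

    print(has_cgroupsv2_cpu_controller())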
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.973 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.973 244018 DEBUG nova.virt.hardware [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.974 244018 DEBUG nova.virt.hardware [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.974 244018 DEBUG nova.virt.hardware [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.974 244018 DEBUG nova.virt.hardware [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.974 244018 DEBUG nova.virt.hardware [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.975 244018 DEBUG nova.virt.hardware [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.975 244018 DEBUG nova.virt.hardware [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.975 244018 DEBUG nova.virt.hardware [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.976 244018 DEBUG nova.virt.hardware [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.976 244018 DEBUG nova.virt.hardware [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.976 244018 DEBUG nova.virt.hardware [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
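For m1.nano the topology search is trivial: one vCPU with no flavor or image constraints factors only as 1 socket x 1 core x 1 thread, exactly the single topology logged. A toy enumeration in the same spirit, not nova.virt.hardware itself (the 65536 defaults mirror the limits logged above):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield every (sockets, cores, threads) whose product equals the
        # vCPU count and which respects the per-dimension maxima.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- the one topology logged
    print(list(possible_topologies(4)))  # (1, 1, 4), (1, 2, 2), (2, 2, 1), ...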
Feb 25 07:41:10 np0005629333 nova_compute[244014]: 2026-02-25 12:41:10.979 244018 DEBUG oslo_concurrency.processutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:11 np0005629333 nova_compute[244014]: 2026-02-25 12:41:11.028 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:41:11 np0005629333 nova_compute[244014]: 2026-02-25 12:41:11.029 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:41:11 np0005629333 nova_compute[244014]: 2026-02-25 12:41:11.029 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No VIF found with MAC fa:16:3e:e6:70:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:41:11 np0005629333 nova_compute[244014]: 2026-02-25 12:41:11.030 244018 INFO nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Using config drive#033[00m
Feb 25 07:41:11 np0005629333 nova_compute[244014]: 2026-02-25 12:41:11.051 244018 DEBUG nova.storage.rbd_utils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image bd882e8f-3bea-4fa6-a185-186bc9911267_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:41:11 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3331425324' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:41:11 np0005629333 nova_compute[244014]: 2026-02-25 12:41:11.508 244018 DEBUG oslo_concurrency.processutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
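The `ceph mon dump` round trip (issued at 12:41:10.979, answered by the co-located mon, back in 0.529s) is how the RBD backend learns the monitor addresses that end up in the <host> elements of the guest XML further down. Stripped of nova's wrappers, the call is roughly:

    import json
    from oslo_concurrency import processutils

    # Returns (stdout, stderr); raises ProcessExecutionError on non-zero exit.
    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

    monmap = json.loads(out)
    # Monitor names/addresses feed the <host name=... port=.../> elements of
    # the RBD <source> in the guest XML emitted below.
    print([m['name'] for m in monmap.get('mons', [])])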
Feb 25 07:41:11 np0005629333 nova_compute[244014]: 2026-02-25 12:41:11.529 244018 DEBUG nova.storage.rbd_utils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image d547b0db-242e-49a5-8a76-5682b0235b6d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:11 np0005629333 nova_compute[244014]: 2026-02-25 12:41:11.533 244018 DEBUG oslo_concurrency.processutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:11 np0005629333 nova_compute[244014]: 2026-02-25 12:41:11.611 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:11 np0005629333 NetworkManager[49836]: <info>  [1772023271.6119] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/414)
Feb 25 07:41:11 np0005629333 NetworkManager[49836]: <info>  [1772023271.6127] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/415)
Feb 25 07:41:11 np0005629333 nova_compute[244014]: 2026-02-25 12:41:11.638 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:11 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:11Z|00993|binding|INFO|Releasing lport e2d1eadf-baf7-4e5c-8052-6c64e8476a26 from this chassis (sb_readonly=0)
Feb 25 07:41:11 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:11Z|00994|binding|INFO|Releasing lport 866d334c-e04a-4c89-acaa-4482da6c6f0c from this chassis (sb_readonly=0)
Feb 25 07:41:11 np0005629333 nova_compute[244014]: 2026-02-25 12:41:11.657 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:11 np0005629333 nova_compute[244014]: 2026-02-25 12:41:11.871 244018 DEBUG nova.network.neutron [req-7951c04e-6335-4ea1-a5d5-8c022f2542e8 req-f7d32bf3-d100-4c03-b411-9de3fb9cfcdf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Updated VIF entry in instance network info cache for port 12e30ba1-a42b-437b-b14c-ab41975d5f53. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:41:11 np0005629333 nova_compute[244014]: 2026-02-25 12:41:11.872 244018 DEBUG nova.network.neutron [req-7951c04e-6335-4ea1-a5d5-8c022f2542e8 req-f7d32bf3-d100-4c03-b411-9de3fb9cfcdf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Updating instance_info_cache with network_info: [{"id": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "address": "fa:16:3e:e6:70:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12e30ba1-a4", "ovs_interfaceid": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:41:11 np0005629333 nova_compute[244014]: 2026-02-25 12:41:11.889 244018 DEBUG oslo_concurrency.lockutils [req-7951c04e-6335-4ea1-a5d5-8c022f2542e8 req-f7d32bf3-d100-4c03-b411-9de3fb9cfcdf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-bd882e8f-3bea-4fa6-a185-186bc9911267" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:41:11 np0005629333 nova_compute[244014]: 2026-02-25 12:41:11.934 244018 INFO nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Creating config drive at /var/lib/nova/instances/bd882e8f-3bea-4fa6-a185-186bc9911267/disk.config#033[00m
Feb 25 07:41:11 np0005629333 nova_compute[244014]: 2026-02-25 12:41:11.937 244018 DEBUG oslo_concurrency.processutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bd882e8f-3bea-4fa6-a185-186bc9911267/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpiet785_m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.063 244018 DEBUG oslo_concurrency.processutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bd882e8f-3bea-4fa6-a185-186bc9911267/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpiet785_m" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:41:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:41:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2244499993' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.118 244018 DEBUG nova.storage.rbd_utils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image bd882e8f-3bea-4fa6-a185-186bc9911267_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.121 244018 DEBUG oslo_concurrency.processutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bd882e8f-3bea-4fa6-a185-186bc9911267/disk.config bd882e8f-3bea-4fa6-a185-186bc9911267_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
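Config-drive creation is a two-step pipeline here: pack the staged metadata directory into an ISO9660 image labelled config-2 with mkisofs, then import that image into the vms pool so libvirt can attach it as an RBD-backed cdrom. A condensed sketch of the same steps (the temp directory is the one from the log and would normally be freshly generated):

    from oslo_concurrency import processutils

    uuid = 'bd882e8f-3bea-4fa6-a185-186bc9911267'  # instance UUID from the log
    iso = f'/var/lib/nova/instances/{uuid}/disk.config'

    # 1) Pack the staged metadata into an ISO the guest mounts as config-2.
    #    (oslo logs argv space-joined, which is why the -publisher argument
    #    appears unquoted in the CMD lines above.)
    processutils.execute(
        '/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
        '-allow-multidot', '-l', '-publisher',
        'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpiet785_m')

    # 2) Import the ISO into Ceph so libvirt can attach it as a network cdrom.
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', iso, f'{uuid}_disk.config',
        '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')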
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.148 244018 DEBUG oslo_concurrency.processutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.150 244018 DEBUG nova.virt.libvirt.vif [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:41:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-43297595',display_name='tempest-TestGettingAddress-server-43297595',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-43297595',id=104,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOF5qZPMlaCDABTzKIN8P76NPvIVQ2htm3U8AfLvXmjtQ5ATadIDI8WR25EzcFon8xGJAOKa64XoS6ByiRlqYYuZKqui9AsV1f+Y3cN0xRd0ljo9lo2zQ0wdsqjfZAW6NA==',key_name='tempest-TestGettingAddress-1477618447',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-k2no7ltx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:41:04Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=d547b0db-242e-49a5-8a76-5682b0235b6d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "address": "fa:16:3e:95:77:21", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7721", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96ad4438-9f", "ovs_interfaceid": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.151 244018 DEBUG nova.network.os_vif_util [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "address": "fa:16:3e:95:77:21", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7721", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96ad4438-9f", "ovs_interfaceid": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.152 244018 DEBUG nova.network.os_vif_util [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:77:21,bridge_name='br-int',has_traffic_filtering=True,id=96ad4438-9fbd-4fbe-ae7f-af0eecb763c1,network=Network(9ec47b46-6b4d-4267-964b-6f16eef9a7b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96ad4438-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
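The conversion above turns nova's JSON network-info blob into a typed os-vif object before the plugin is invoked. Constructing and plugging such an object directly looks roughly like the following; field values are copied from the log, the instance metadata is abbreviated, and this is a sketch of the os-vif API rather than nova's exact call path:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin (among others) via stevedore

    net = network.Network(id='9ec47b46-6b4d-4267-964b-6f16eef9a7b1',
                          bridge='br-int', mtu=1442)
    ovs_vif = vif.VIFOpenVSwitch(
        id='96ad4438-9fbd-4fbe-ae7f-af0eecb763c1',
        address='fa:16:3e:95:77:21',
        vif_name='tap96ad4438-9f',
        bridge_name='br-int',
        has_traffic_filtering=True,
        network=net,
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='96ad4438-9fbd-4fbe-ae7f-af0eecb763c1'))
    inst = instance_info.InstanceInfo(
        uuid='d547b0db-242e-49a5-8a76-5682b0235b6d',
        name='instance-00000068')

    os_vif.plug(ovs_vif, inst)  # drives the same OVSDB transaction shown earlier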
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.153 244018 DEBUG nova.objects.instance [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid d547b0db-242e-49a5-8a76-5682b0235b6d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.175 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:41:12 np0005629333 nova_compute[244014]:  <uuid>d547b0db-242e-49a5-8a76-5682b0235b6d</uuid>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:  <name>instance-00000068</name>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestGettingAddress-server-43297595</nova:name>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:41:10</nova:creationTime>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:41:12 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:        <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:        <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:        <nova:port uuid="96ad4438-9fbd-4fbe-ae7f-af0eecb763c1">
Feb 25 07:41:12 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe95:7721" ipVersion="6"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <entry name="serial">d547b0db-242e-49a5-8a76-5682b0235b6d</entry>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <entry name="uuid">d547b0db-242e-49a5-8a76-5682b0235b6d</entry>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/d547b0db-242e-49a5-8a76-5682b0235b6d_disk">
Feb 25 07:41:12 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:41:12 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/d547b0db-242e-49a5-8a76-5682b0235b6d_disk.config">
Feb 25 07:41:12 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:41:12 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:95:77:21"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <target dev="tap96ad4438-9f"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/d547b0db-242e-49a5-8a76-5682b0235b6d/console.log" append="off"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:41:12 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:41:12 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:41:12 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:41:12 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
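With _get_guest_xml complete, the driver hands this document to libvirt to define and boot the domain. Minus nova's error handling and event plumbing, the libvirt-python calls amount to the following sketch (reading the XML from a file is illustrative; assume it holds the <domain> document logged above):

    import libvirt

    xml = open('domain.xml').read()

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)  # persist the domain definition
        dom.create()               # power the guest on
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()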
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.180 244018 DEBUG nova.compute.manager [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Preparing to wait for external event network-vif-plugged-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.180 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.181 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.181 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
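The lock juggling above registers an event waiter keyed on the instance UUID before the tap device is created, so the network-vif-plugged notification from neutron cannot slip through the gap between plugging and waiting. The guard is oslo.concurrency's named lock; the pattern is essentially the sketch below (nova uses eventlet primitives, so threading.Event is a stand-in):

    import threading
    from oslo_concurrency import lockutils

    events = {}

    def prepare_for_instance_event(instance_uuid, event_name):
        # Same shape as nova's _create_or_get_event: a named lock keyed on
        # "<uuid>-events" serializes waiter registration against delivery.
        with lockutils.lock(f'{instance_uuid}-events'):
            return events.setdefault((instance_uuid, event_name),
                                     threading.Event())

    waiter = prepare_for_instance_event(
        'd547b0db-242e-49a5-8a76-5682b0235b6d',
        'network-vif-plugged-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1')
    # ... plug the VIF, spawn the guest, then block: waiter.wait(timeout=300)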
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.182 244018 DEBUG nova.virt.libvirt.vif [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:41:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-43297595',display_name='tempest-TestGettingAddress-server-43297595',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-43297595',id=104,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOF5qZPMlaCDABTzKIN8P76NPvIVQ2htm3U8AfLvXmjtQ5ATadIDI8WR25EzcFon8xGJAOKa64XoS6ByiRlqYYuZKqui9AsV1f+Y3cN0xRd0ljo9lo2zQ0wdsqjfZAW6NA==',key_name='tempest-TestGettingAddress-1477618447',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-k2no7ltx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:41:04Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=d547b0db-242e-49a5-8a76-5682b0235b6d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "address": "fa:16:3e:95:77:21", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7721", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96ad4438-9f", "ovs_interfaceid": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.182 244018 DEBUG nova.network.os_vif_util [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "address": "fa:16:3e:95:77:21", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7721", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96ad4438-9f", "ovs_interfaceid": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.183 244018 DEBUG nova.network.os_vif_util [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:77:21,bridge_name='br-int',has_traffic_filtering=True,id=96ad4438-9fbd-4fbe-ae7f-af0eecb763c1,network=Network(9ec47b46-6b4d-4267-964b-6f16eef9a7b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96ad4438-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.183 244018 DEBUG os_vif [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:77:21,bridge_name='br-int',has_traffic_filtering=True,id=96ad4438-9fbd-4fbe-ae7f-af0eecb763c1,network=Network(9ec47b46-6b4d-4267-964b-6f16eef9a7b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96ad4438-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.184 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.185 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.185 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.187 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.187 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap96ad4438-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.188 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap96ad4438-9f, col_values=(('external_ids', {'iface-id': '96ad4438-9fbd-4fbe-ae7f-af0eecb763c1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:95:77:21', 'vm-uuid': 'd547b0db-242e-49a5-8a76-5682b0235b6d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.189 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:12 np0005629333 NetworkManager[49836]: <info>  [1772023272.1901] manager: (tap96ad4438-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/416)
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.193 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.194 244018 INFO os_vif [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:77:21,bridge_name='br-int',has_traffic_filtering=True,id=96ad4438-9fbd-4fbe-ae7f-af0eecb763c1,network=Network(9ec47b46-6b4d-4267-964b-6f16eef9a7b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96ad4438-9f')#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.201 244018 DEBUG nova.compute.manager [req-3a716371-8b7c-45b8-8379-edb7cf25653e req-a48a1600-dfba-49b7-8a5c-11d1aa19a560 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Received event network-changed-ca235479-72b7-40ba-b8e5-d8d3227079fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.202 244018 DEBUG nova.compute.manager [req-3a716371-8b7c-45b8-8379-edb7cf25653e req-a48a1600-dfba-49b7-8a5c-11d1aa19a560 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Refreshing instance network info cache due to event network-changed-ca235479-72b7-40ba-b8e5-d8d3227079fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.202 244018 DEBUG oslo_concurrency.lockutils [req-3a716371-8b7c-45b8-8379-edb7cf25653e req-a48a1600-dfba-49b7-8a5c-11d1aa19a560 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-bb4e80d2-c200-4c24-8154-e1a86d06946b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.202 244018 DEBUG oslo_concurrency.lockutils [req-3a716371-8b7c-45b8-8379-edb7cf25653e req-a48a1600-dfba-49b7-8a5c-11d1aa19a560 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-bb4e80d2-c200-4c24-8154-e1a86d06946b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.203 244018 DEBUG nova.network.neutron [req-3a716371-8b7c-45b8-8379-edb7cf25653e req-a48a1600-dfba-49b7-8a5c-11d1aa19a560 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Refreshing network info cache for port ca235479-72b7-40ba-b8e5-d8d3227079fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
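
The Acquiring/Acquired pair around "refresh_cache-<instance uuid>" comes from oslo.concurrency's named locks: every path that rewrites an instance's network info cache takes the same lock name, so concurrent event handlers serialize instead of clobbering each other. The shape of that pattern as a sketch, with the body left as a placeholder for the Neutron re-query nova performs:

    from oslo_concurrency import lockutils

    instance_uuid = 'bb4e80d2-c200-4c24-8154-e1a86d06946b'  # from the log

    # lockutils.lock() emits the same "Acquiring"/"Acquired"/"Releasing"
    # debug lines that appear above.
    with lockutils.lock('refresh_cache-%s' % instance_uuid):
        pass  # placeholder: fetch port info from Neutron, store it in the cache
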
Feb 25 07:41:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 305 active+clean; 447 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.6 MiB/s wr, 219 op/s
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.262 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.263 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.263 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:95:77:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.264 244018 INFO nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Using config drive#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.284 244018 DEBUG nova.storage.rbd_utils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image d547b0db-242e-49a5-8a76-5682b0235b6d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.310 244018 DEBUG oslo_concurrency.processutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bd882e8f-3bea-4fa6-a185-186bc9911267/disk.config bd882e8f-3bea-4fa6-a185-186bc9911267_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.311 244018 INFO nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Deleting local config drive /var/lib/nova/instances/bd882e8f-3bea-4fa6-a185-186bc9911267/disk.config because it was imported into RBD.#033[00m
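
The two lines above are the config drive leaving local disk: nova pushes the ISO into the Ceph "vms" pool as <uuid>_disk.config and then deletes the local copy. A sketch of the same step using the oslo.concurrency helper nova itself calls; the pool, cephx user and paths are copied from the log lines:

    import os
    from oslo_concurrency import processutils

    uuid = 'bd882e8f-3bea-4fa6-a185-186bc9911267'
    local = '/var/lib/nova/instances/%s/disk.config' % uuid

    # Equivalent of the logged CMD; execute() raises ProcessExecutionError
    # on a non-zero exit code instead of returning it.
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', local, '%s_disk.config' % uuid,
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
    os.unlink(local)  # "Deleting local config drive ... imported into RBD"
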
Feb 25 07:41:12 np0005629333 NetworkManager[49836]: <info>  [1772023272.3482] manager: (tap12e30ba1-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/417)
Feb 25 07:41:12 np0005629333 kernel: tap12e30ba1-a4: entered promiscuous mode
Feb 25 07:41:12 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:12Z|00995|binding|INFO|Claiming lport 12e30ba1-a42b-437b-b14c-ab41975d5f53 for this chassis.
Feb 25 07:41:12 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:12Z|00996|binding|INFO|12e30ba1-a42b-437b-b14c-ab41975d5f53: Claiming fa:16:3e:e6:70:d4 10.100.0.6
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.364 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:12.371 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:70:d4 10.100.0.6'], port_security=['fa:16:3e:e6:70:d4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'bd882e8f-3bea-4fa6-a185-186bc9911267', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec8bae53-fe6a-49d1-a733-f00c198be561', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503e879cd1f44a16b9baef106ceba949', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3bf34285-1a67-4c95-bb68-fd577a012f6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18f4e8da-4409-4095-9850-aaee82dd8fd1, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=12e30ba1-a42b-437b-b14c-ab41975d5f53) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:41:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:12.373 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 12e30ba1-a42b-437b-b14c-ab41975d5f53 in datapath ec8bae53-fe6a-49d1-a733-f00c198be561 bound to our chassis#033[00m
Feb 25 07:41:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:12.374 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec8bae53-fe6a-49d1-a733-f00c198be561#033[00m
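
The metadata agent reaches this point through an ovsdbapp row event: the "Matched UPDATE: PortBindingUpdatedEvent" entry a few lines up is its matcher firing when Port_Binding.chassis flips from empty to this chassis (note old=Port_Binding(chassis=[])). A minimal sketch of such an event class; the chassis test is a simplified stand-in for the agent's real checks:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdated(row_event.RowEvent):
        def __init__(self, chassis_name):
            self.chassis_name = chassis_name
            # (events, table, conditions), as echoed in the matched event.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Fire only on the empty -> our-chassis transition.
            return (row.chassis and not getattr(old, 'chassis', None)
                    and row.chassis[0].name == self.chassis_name)

        def run(self, event, row, old):
            print('Port %s bound to our chassis' % row.logical_port)
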
Feb 25 07:41:12 np0005629333 systemd-machined[210048]: New machine qemu-130-instance-00000067.
Feb 25 07:41:12 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:12Z|00997|binding|INFO|Setting lport 12e30ba1-a42b-437b-b14c-ab41975d5f53 ovn-installed in OVS
Feb 25 07:41:12 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:12Z|00998|binding|INFO|Setting lport 12e30ba1-a42b-437b-b14c-ab41975d5f53 up in Southbound
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.385 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:12 np0005629333 systemd[1]: Started Virtual Machine qemu-130-instance-00000067.
Feb 25 07:41:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:12.389 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1296cf3c-ab73-4b18-a368-cc9250627016]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:12 np0005629333 systemd-udevd[335131]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:41:12 np0005629333 NetworkManager[49836]: <info>  [1772023272.4039] device (tap12e30ba1-a4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:41:12 np0005629333 NetworkManager[49836]: <info>  [1772023272.4045] device (tap12e30ba1-a4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:41:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:12.416 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[044dbc1f-66ac-4616-806c-ae45a8e95cb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:12.419 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[292c130a-50d8-4939-b7b2-1b40dc9cfed3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:12.443 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[92442e93-c4b1-4a2f-a3e2-6ebc0048deb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:12.457 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d10139d6-f8da-4d65-b402-9cd9397121cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec8bae53-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a5:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516532, 'reachable_time': 18880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335143, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:12.470 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1cb8ab0d-ed6d-4b8c-b9d3-4aa99561a7d0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516546, 'tstamp': 516546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335144, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516549, 'tstamp': 516549}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335144, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:12.472 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8bae53-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.473 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.477 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:12.477 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec8bae53-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:12.478 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:41:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:12.479 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec8bae53-f0, col_values=(('external_ids', {'iface-id': 'e2d1eadf-baf7-4e5c-8052-6c64e8476a26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:12.479 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.658 244018 INFO nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Creating config drive at /var/lib/nova/instances/d547b0db-242e-49a5-8a76-5682b0235b6d/disk.config#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.662 244018 DEBUG oslo_concurrency.processutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d547b0db-242e-49a5-8a76-5682b0235b6d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpdhtiua4h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.796 244018 DEBUG oslo_concurrency.processutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d547b0db-242e-49a5-8a76-5682b0235b6d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpdhtiua4h" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
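
A config drive is nothing more exotic than an ISO 9660 image: nova stages the metadata files in a temporary directory (the /tmp/tmpdhtiua4h above) and packs it with mkisofs under the volume label config-2, which is the label cloud-init probes for at boot. A sketch with the flags taken verbatim from the logged command; the output path and source directory are placeholders:

    import subprocess

    def make_config_drive(output_iso, source_dir):
        subprocess.run(
            ['/usr/bin/mkisofs', '-o', output_iso,
             '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
             '-publisher', 'OpenStack Compute', '-quiet', '-J', '-r',
             '-V', 'config-2',  # volume label cloud-init searches for
             source_dir],
            check=True)
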
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.820 244018 DEBUG nova.storage.rbd_utils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image d547b0db-242e-49a5-8a76-5682b0235b6d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.824 244018 DEBUG oslo_concurrency.processutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d547b0db-242e-49a5-8a76-5682b0235b6d/disk.config d547b0db-242e-49a5-8a76-5682b0235b6d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.984 244018 DEBUG oslo_concurrency.processutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d547b0db-242e-49a5-8a76-5682b0235b6d/disk.config d547b0db-242e-49a5-8a76-5682b0235b6d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:41:12 np0005629333 nova_compute[244014]: 2026-02-25 12:41:12.984 244018 INFO nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Deleting local config drive /var/lib/nova/instances/d547b0db-242e-49a5-8a76-5682b0235b6d/disk.config because it was imported into RBD.#033[00m
Feb 25 07:41:13 np0005629333 NetworkManager[49836]: <info>  [1772023273.0135] manager: (tap96ad4438-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/418)
Feb 25 07:41:13 np0005629333 kernel: tap96ad4438-9f: entered promiscuous mode
Feb 25 07:41:13 np0005629333 systemd-udevd[335134]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:41:13 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:13Z|00999|binding|INFO|Claiming lport 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 for this chassis.
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.017 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:13 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:13Z|01000|binding|INFO|96ad4438-9fbd-4fbe-ae7f-af0eecb763c1: Claiming fa:16:3e:95:77:21 10.100.0.12 2001:db8::f816:3eff:fe95:7721
Feb 25 07:41:13 np0005629333 NetworkManager[49836]: <info>  [1772023273.0248] device (tap96ad4438-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:41:13 np0005629333 NetworkManager[49836]: <info>  [1772023273.0257] device (tap96ad4438-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:41:13 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:13Z|01001|binding|INFO|Setting lport 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 ovn-installed in OVS
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.028 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.031 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:13 np0005629333 systemd-machined[210048]: New machine qemu-131-instance-00000068.
Feb 25 07:41:13 np0005629333 systemd[1]: Started Virtual Machine qemu-131-instance-00000068.
Feb 25 07:41:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.349 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023273.3485942, bd882e8f-3bea-4fa6-a185-186bc9911267 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.354 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] VM Started (Lifecycle Event)#033[00m
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.446 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:77:21 10.100.0.12 2001:db8::f816:3eff:fe95:7721'], port_security=['fa:16:3e:95:77:21 10.100.0.12 2001:db8::f816:3eff:fe95:7721'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8::f816:3eff:fe95:7721/64', 'neutron:device_id': 'd547b0db-242e-49a5-8a76-5682b0235b6d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '219ff989-2dc4-4de6-abcc-fec08f1c06f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5cb98e41-b63f-472f-91ce-2c5130dfa4a8, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=96ad4438-9fbd-4fbe-ae7f-af0eecb763c1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:41:13 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:13Z|01002|binding|INFO|Setting lport 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 up in Southbound
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.448 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 in datapath 9ec47b46-6b4d-4267-964b-6f16eef9a7b1 bound to our chassis#033[00m
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.450 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9ec47b46-6b4d-4267-964b-6f16eef9a7b1#033[00m
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.459 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fee07e01-eae9-4e77-9847-22f87265fa51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.460 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9ec47b46-61 in ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.462 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9ec47b46-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.462 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[24f3a279-7bf8-4b89-80c4-2922df8e19e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.463 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6f5208a7-013e-43ac-929d-48b83188dc36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
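
Before creating the VETH pair announced at 12:41:13.460, the agent probes whether either end already exists; "Interface tap9ec47b46-60 not found in namespace None" is that probe answering no for the root namespace. Neutron's privileged ip_lib sits on pyroute2, so the check reduces to roughly this sketch (interface and namespace names copied from the log):

    from pyroute2 import IPRoute, NetNS

    def link_exists(ifname, namespace=None):
        # flags=0 opens the namespace only if it already exists,
        # rather than creating it as NetNS does by default.
        ip = NetNS(namespace, flags=0) if namespace else IPRoute()
        try:
            return bool(ip.link_lookup(ifname=ifname))
        finally:
            ip.close()

    print(link_exists('tap9ec47b46-60'))  # root namespace, "namespace None"
    print(link_exists('tap9ec47b46-61',
                      'ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1'))
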
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.474 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[924e1289-3765-42b3-81c0-9f2fa79d5002]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.485 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[81cde8e3-a1f3-49f0-afec-12ffd036a67c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.488 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.492 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023273.352884, bd882e8f-3bea-4fa6-a185-186bc9911267 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.492 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.510 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[08ee2a92-9b72-4b5c-a176-a501aac03d6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.516 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b048d93e-3b49-4dd1-9e62-723cdb2fdb6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:13 np0005629333 NetworkManager[49836]: <info>  [1772023273.5175] manager: (tap9ec47b46-60): new Veth device (/org/freedesktop/NetworkManager/Devices/419)
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.524 244018 DEBUG nova.compute.manager [req-ea153940-1f3f-411d-b6d2-b3dff5cb331d req-b8a63a7f-7258-4393-9251-1df38382dc5a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Received event network-vif-plugged-12e30ba1-a42b-437b-b14c-ab41975d5f53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.525 244018 DEBUG oslo_concurrency.lockutils [req-ea153940-1f3f-411d-b6d2-b3dff5cb331d req-b8a63a7f-7258-4393-9251-1df38382dc5a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.525 244018 DEBUG oslo_concurrency.lockutils [req-ea153940-1f3f-411d-b6d2-b3dff5cb331d req-b8a63a7f-7258-4393-9251-1df38382dc5a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.525 244018 DEBUG oslo_concurrency.lockutils [req-ea153940-1f3f-411d-b6d2-b3dff5cb331d req-b8a63a7f-7258-4393-9251-1df38382dc5a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.525 244018 DEBUG nova.compute.manager [req-ea153940-1f3f-411d-b6d2-b3dff5cb331d req-b8a63a7f-7258-4393-9251-1df38382dc5a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Processing event network-vif-plugged-12e30ba1-a42b-437b-b14c-ab41975d5f53 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.526 244018 DEBUG nova.compute.manager [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
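
The lock churn on "bd882e8f-...-events" and the "Instance event wait completed in 0 seconds" line are two request contexts meeting: the spawn path parks on network-vif-plugged-<port> while the handler for Neutron's external event pops the waiter and signals it. Nova's implementation is eventlet-based; here is the handshake reduced to threading primitives, purely as an illustration of the shape:

    import threading

    _events = {}
    _lock = threading.Lock()  # cf. the "<uuid>-events" lock above

    def prepare_for(name):                 # spawn path, before plugging the VIF
        with _lock:
            _events[name] = threading.Event()

    def pop_instance_event(name):          # external-event handler
        with _lock:
            ev = _events.get(name)
        if ev:
            ev.set()

    def wait_for_instance_event(name, timeout=300):
        with _lock:
            ev = _events[name]
        if not ev.wait(timeout):           # analogue of vif_plugging_timeout
            raise TimeoutError(name)

    evt = 'network-vif-plugged-12e30ba1-a42b-437b-b14c-ab41975d5f53'
    prepare_for(evt)
    threading.Thread(target=pop_instance_event, args=(evt,)).start()
    wait_for_instance_event(evt, timeout=5)
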
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.527 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.533 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.537 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023273.416806, d547b0db-242e-49a5-8a76-5682b0235b6d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.538 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] VM Started (Lifecycle Event)#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.541 244018 INFO nova.virt.libvirt.driver [-] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Instance spawned successfully.#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.541 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.548 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[101d9b2a-cb43-415b-aac3-4708ad8b06bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.551 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[94a70406-ec4a-4533-b28f-d278bf3d56c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.570 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.574 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023273.4169364, d547b0db-242e-49a5-8a76-5682b0235b6d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.575 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:41:13 np0005629333 NetworkManager[49836]: <info>  [1772023273.5772] device (tap9ec47b46-60): carrier: link connected
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.580 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.580 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.580 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.581 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.581 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.581 244018 DEBUG nova.virt.libvirt.driver [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.582 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9ca0b1cd-6419-4ade-9d1d-2b41c469ed1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.593 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.596 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
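
The integers in "DB power_state: 0, VM power_state: 3" are nova's power-state constants: 0 is NOSTATE (the DB row has never been synced) and 3 is PAUSED, so the handler is looking at a freshly defined domain that libvirt keeps paused until spawn finishes. An abbreviated version of the libvirt-to-nova mapping the driver applies; written from memory of nova/compute/power_state.py, so treat the table in the source as canonical:

    # nova power-state constants (abbreviated)
    NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7

    LIBVIRT_POWER_STATE = {
        0: NOSTATE,    # VIR_DOMAIN_NOSTATE
        1: RUNNING,    # VIR_DOMAIN_RUNNING
        2: RUNNING,    # VIR_DOMAIN_BLOCKED
        3: PAUSED,     # VIR_DOMAIN_PAUSED, the "VM power_state: 3" above
        4: SHUTDOWN,   # VIR_DOMAIN_SHUTDOWN
        5: SHUTDOWN,   # VIR_DOMAIN_SHUTOFF
        6: CRASHED,    # VIR_DOMAIN_CRASHED
        7: SUSPENDED,  # VIR_DOMAIN_PMSUSPENDED
    }
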
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.595 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[251e2343-c5db-4239-8cbb-be6f7040048f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ec47b46-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:8a:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 300], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524298, 'reachable_time': 43888, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335316, 'error': None, 'target': 'ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.608 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d40a493d-9836-4b52-97a6-f52517004656]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaa:8a86'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 524298, 'tstamp': 524298}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335317, 'error': None, 'target': 'ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.618 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.618 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023273.5315669, bd882e8f-3bea-4fa6-a185-186bc9911267 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.618 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.620 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8c434778-d17a-4512-ad5e-5eb972a55864]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ec47b46-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:8a:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 300], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524298, 'reachable_time': 43888, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 335318, 'error': None, 'target': 'ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.638 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.639 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[92929090-9320-404c-bc2d-a01e9474514d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.641 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.651 244018 INFO nova.compute.manager [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Took 10.26 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.651 244018 DEBUG nova.compute.manager [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.659 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.676 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[246ea11a-3654-4cdb-98fd-53f4afac97b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.677 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ec47b46-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.677 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.678 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ec47b46-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:13 np0005629333 NetworkManager[49836]: <info>  [1772023273.6802] manager: (tap9ec47b46-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/420)
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.681 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:13 np0005629333 kernel: tap9ec47b46-60: entered promiscuous mode
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.683 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.683 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9ec47b46-60, col_values=(('external_ids', {'iface-id': '895bc3cc-c38d-425b-b005-1acb3139bbee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
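The three ovsdbapp transactions above (DelPortCommand on br-ex, AddPortCommand on br-int, DbSetCommand of external_ids:iface-id) map onto standard ovs-vsctl flags. A hedged shell-out equivalent, assuming ovs-vsctl is installed and the caller may talk to the local OVSDB; port and iface-id values are copied from the records:

    # Shell-out equivalent of the three OVSDB commands above, using
    # ovs-vsctl's documented --if-exists/--may-exist chaining.
    import subprocess

    port = "tap9ec47b46-60"
    iface_id = "895bc3cc-c38d-425b-b005-1acb3139bbee"
    subprocess.run(
        ["ovs-vsctl",
         "--", "--if-exists", "del-port", "br-ex", port,
         "--", "--may-exist", "add-port", "br-int", port,
         "--", "set", "Interface", port,
         "external_ids:iface-id=%s" % iface_id],
        check=True)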
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.684 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:13 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:13Z|01003|binding|INFO|Releasing lport 895bc3cc-c38d-425b-b005-1acb3139bbee from this chassis (sb_readonly=0)
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.694 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.695 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9ec47b46-6b4d-4267-964b-6f16eef9a7b1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9ec47b46-6b4d-4267-964b-6f16eef9a7b1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
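The ENOENT on the .pid.haproxy file above is how the agent concludes that no metadata proxy is running for this network yet, so one gets spawned below. A sketch of that probe, using hypothetical read_pid/alive helpers rather than Neutron's get_value_from_file:

    # FileNotFoundError (the [Errno 2] above) means "no proxy running yet".
    import os

    def read_pid(path):
        try:
            with open(path) as f:
                return int(f.read().strip())
        except FileNotFoundError:
            return None

    def alive(pid):
        if pid is None:
            return False
        try:
            os.kill(pid, 0)  # signal 0 only checks process existence
            return True
        except ProcessLookupError:
            return False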
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.695 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b6ecbecd-e85c-46a6-b941-ee87e659ee7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.696 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-9ec47b46-6b4d-4267-964b-6f16eef9a7b1
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/9ec47b46-6b4d-4267-964b-6f16eef9a7b1.pid.haproxy
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 9ec47b46-6b4d-4267-964b-6f16eef9a7b1
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:41:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:13.696 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'env', 'PROCESS_TAG=haproxy-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9ec47b46-6b4d-4267-964b-6f16eef9a7b1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
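The rootwrap command above boils down to running haproxy inside the ovnmeta- namespace with a PROCESS_TAG marker in its environment. A hedged equivalent without the neutron-rootwrap indirection (requires root and an already-created namespace); the network UUID and config path are taken from the record:

    import subprocess

    net_id = "9ec47b46-6b4d-4267-964b-6f16eef9a7b1"
    subprocess.run(
        ["ip", "netns", "exec", "ovnmeta-" + net_id,
         "env", "PROCESS_TAG=haproxy-" + net_id,
         "haproxy", "-f",
         "/var/lib/neutron/ovn-metadata-proxy/%s.conf" % net_id],
        check=True)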
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.709 244018 INFO nova.compute.manager [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Took 11.36 seconds to build instance.#033[00m
Feb 25 07:41:13 np0005629333 nova_compute[244014]: 2026-02-25 12:41:13.729 244018 DEBUG oslo_concurrency.lockutils [None req-2825eec9-e4f1-43be-a48f-7bb9ada1b8c4 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.480s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:14 np0005629333 podman[335351]: 2026-02-25 12:41:14.032311492 +0000 UTC m=+0.064149422 container create 5f09185511d4e215a85b7a2be2a3fdd458b7c1e274aa11f0c181571c358ee95b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:41:14 np0005629333 systemd[1]: Started libpod-conmon-5f09185511d4e215a85b7a2be2a3fdd458b7c1e274aa11f0c181571c358ee95b.scope.
Feb 25 07:41:14 np0005629333 podman[335351]: 2026-02-25 12:41:13.995543504 +0000 UTC m=+0.027381374 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:41:14 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:41:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02aa81a3db703ffc9f0b50010648091a42999b6e192082fa62a6c0558305e7bb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:41:14 np0005629333 podman[335351]: 2026-02-25 12:41:14.115078757 +0000 UTC m=+0.146916627 container init 5f09185511d4e215a85b7a2be2a3fdd458b7c1e274aa11f0c181571c358ee95b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:41:14 np0005629333 podman[335351]: 2026-02-25 12:41:14.119515722 +0000 UTC m=+0.151353572 container start 5f09185511d4e215a85b7a2be2a3fdd458b7c1e274aa11f0c181571c358ee95b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 07:41:14 np0005629333 neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1[335366]: [NOTICE]   (335370) : New worker (335372) forked
Feb 25 07:41:14 np0005629333 neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1[335366]: [NOTICE]   (335370) : Loading success.
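The podman records above walk the container through create, init, and start, after which haproxy forks its worker and reports "Loading success." One way to spot-check the resulting state is to query podman directly; a sketch assuming podman is on PATH and the container name (copied from the log) still exists:

    import subprocess

    name = "neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1"
    state = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Status}}", name],
        capture_output=True, text=True, check=True).stdout.strip()
    print(state)  # expected "running" after the start record above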
Feb 25 07:41:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 305 active+clean; 451 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 195 op/s
Feb 25 07:41:14 np0005629333 nova_compute[244014]: 2026-02-25 12:41:14.746 244018 DEBUG nova.network.neutron [req-151e4f9b-ab40-4f06-b088-2d5208a621d0 req-dab74daf-226c-424a-b596-61e7601fd2e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Updated VIF entry in instance network info cache for port 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:41:14 np0005629333 nova_compute[244014]: 2026-02-25 12:41:14.746 244018 DEBUG nova.network.neutron [req-151e4f9b-ab40-4f06-b088-2d5208a621d0 req-dab74daf-226c-424a-b596-61e7601fd2e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Updating instance_info_cache with network_info: [{"id": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "address": "fa:16:3e:95:77:21", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7721", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96ad4438-9f", "ovs_interfaceid": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:41:14 np0005629333 nova_compute[244014]: 2026-02-25 12:41:14.768 244018 DEBUG oslo_concurrency.lockutils [req-151e4f9b-ab40-4f06-b088-2d5208a621d0 req-dab74daf-226c-424a-b596-61e7601fd2e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:41:14 np0005629333 nova_compute[244014]: 2026-02-25 12:41:14.966 244018 DEBUG nova.network.neutron [req-3a716371-8b7c-45b8-8379-edb7cf25653e req-a48a1600-dfba-49b7-8a5c-11d1aa19a560 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Updated VIF entry in instance network info cache for port ca235479-72b7-40ba-b8e5-d8d3227079fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:41:14 np0005629333 nova_compute[244014]: 2026-02-25 12:41:14.967 244018 DEBUG nova.network.neutron [req-3a716371-8b7c-45b8-8379-edb7cf25653e req-a48a1600-dfba-49b7-8a5c-11d1aa19a560 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Updating instance_info_cache with network_info: [{"id": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "address": "fa:16:3e:c8:65:4a", "network": {"id": "2ffae040-354a-4380-b0b9-a74c45661bf1", "bridge": "br-int", "label": "tempest-network-smoke--1173854570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca235479-72", "ovs_interfaceid": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:41:14 np0005629333 nova_compute[244014]: 2026-02-25 12:41:14.998 244018 DEBUG oslo_concurrency.lockutils [req-3a716371-8b7c-45b8-8379-edb7cf25653e req-a48a1600-dfba-49b7-8a5c-11d1aa19a560 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-bb4e80d2-c200-4c24-8154-e1a86d06946b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
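The instance_info_cache payloads above are plain JSON; extracting the fixed and floating addresses from one VIF entry needs only a couple of nested loops. A sketch over a literal trimmed to the fields actually used (the addresses helper is hypothetical):

    import json

    entry = json.loads("""
    {"id": "ca235479-72b7-40ba-b8e5-d8d3227079fc",
     "network": {"subnets": [{"ips": [{"address": "10.100.0.7",
       "floating_ips": [{"address": "192.168.122.230"}]}]}]}}
    """)

    def addresses(vif):
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                yield ip["address"], [f["address"] for f in ip["floating_ips"]]

    print(list(addresses(entry)))  # [('10.100.0.7', ['192.168.122.230'])]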
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.050 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.748 244018 DEBUG nova.compute.manager [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Received event network-vif-plugged-12e30ba1-a42b-437b-b14c-ab41975d5f53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.748 244018 DEBUG oslo_concurrency.lockutils [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.748 244018 DEBUG oslo_concurrency.lockutils [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.749 244018 DEBUG oslo_concurrency.lockutils [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.749 244018 DEBUG nova.compute.manager [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] No waiting events found dispatching network-vif-plugged-12e30ba1-a42b-437b-b14c-ab41975d5f53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.749 244018 WARNING nova.compute.manager [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Received unexpected event network-vif-plugged-12e30ba1-a42b-437b-b14c-ab41975d5f53 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.749 244018 DEBUG nova.compute.manager [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Received event network-vif-plugged-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.749 244018 DEBUG oslo_concurrency.lockutils [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.750 244018 DEBUG oslo_concurrency.lockutils [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.756 244018 DEBUG oslo_concurrency.lockutils [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.756 244018 DEBUG nova.compute.manager [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Processing event network-vif-plugged-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.757 244018 DEBUG nova.compute.manager [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Received event network-vif-plugged-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.757 244018 DEBUG oslo_concurrency.lockutils [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.757 244018 DEBUG oslo_concurrency.lockutils [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.757 244018 DEBUG oslo_concurrency.lockutils [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.758 244018 DEBUG nova.compute.manager [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] No waiting events found dispatching network-vif-plugged-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.758 244018 WARNING nova.compute.manager [req-557f8e3a-7dec-4dab-a23c-195ecd03514a req-4bdfdddf-6db1-4c05-ae8d-3e8ce6a0a74d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Received unexpected event network-vif-plugged-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 for instance with vm_state building and task_state spawning.#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.759 244018 DEBUG nova.compute.manager [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
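The lock/pop/warning sequence above is the per-instance event pattern: a lock-guarded table of waiters is popped on each incoming network-vif event, an event with no registered waiter produces the "Received unexpected event" warning, and a matched one completes wait_for_instance_event (here after 2 seconds). A simplified threading model of that flow (expect/pop_event are stand-ins, not Nova's API):

    import threading

    _waiters, _lock = {}, threading.Lock()

    def expect(instance, name):
        ev = threading.Event()
        with _lock:
            _waiters[(instance, name)] = ev
        return ev  # the spawning thread waits on ev

    def pop_event(instance, name):
        with _lock:  # the "Acquiring lock ...-events" lines above
            ev = _waiters.pop((instance, name), None)
        if ev is None:
            print("unexpected event:", name)  # the WARNING branch above
        else:
            ev.set()  # completes wait_for_instance_event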
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.762 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023275.7619615, d547b0db-242e-49a5-8a76-5682b0235b6d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.762 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.764 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.769 244018 INFO nova.virt.libvirt.driver [-] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Instance spawned successfully.#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.769 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.801 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.806 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.810 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.811 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.812 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.812 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.813 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.813 244018 DEBUG nova.virt.libvirt.driver [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.853 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.890 244018 INFO nova.compute.manager [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Took 11.58 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.890 244018 DEBUG nova.compute.manager [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.961 244018 INFO nova.compute.manager [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Took 13.61 seconds to build instance.#033[00m
Feb 25 07:41:15 np0005629333 nova_compute[244014]: 2026-02-25 12:41:15.981 244018 DEBUG oslo_concurrency.lockutils [None req-75de41a5-0f3d-45aa-944b-a2e93c9ceab6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 451 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 195 op/s
Feb 25 07:41:16 np0005629333 nova_compute[244014]: 2026-02-25 12:41:16.432 244018 DEBUG oslo_concurrency.lockutils [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "bd882e8f-3bea-4fa6-a185-186bc9911267" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:16 np0005629333 nova_compute[244014]: 2026-02-25 12:41:16.432 244018 DEBUG oslo_concurrency.lockutils [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:16 np0005629333 nova_compute[244014]: 2026-02-25 12:41:16.433 244018 DEBUG oslo_concurrency.lockutils [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:16 np0005629333 nova_compute[244014]: 2026-02-25 12:41:16.433 244018 DEBUG oslo_concurrency.lockutils [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:16 np0005629333 nova_compute[244014]: 2026-02-25 12:41:16.433 244018 DEBUG oslo_concurrency.lockutils [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:16 np0005629333 nova_compute[244014]: 2026-02-25 12:41:16.434 244018 INFO nova.compute.manager [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Terminating instance#033[00m
Feb 25 07:41:16 np0005629333 nova_compute[244014]: 2026-02-25 12:41:16.436 244018 DEBUG nova.compute.manager [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:41:16 np0005629333 kernel: tap12e30ba1-a4 (unregistering): left promiscuous mode
Feb 25 07:41:16 np0005629333 NetworkManager[49836]: <info>  [1772023276.4829] device (tap12e30ba1-a4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:41:16 np0005629333 nova_compute[244014]: 2026-02-25 12:41:16.490 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:16Z|01004|binding|INFO|Releasing lport 12e30ba1-a42b-437b-b14c-ab41975d5f53 from this chassis (sb_readonly=0)
Feb 25 07:41:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:16Z|01005|binding|INFO|Setting lport 12e30ba1-a42b-437b-b14c-ab41975d5f53 down in Southbound
Feb 25 07:41:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:16Z|01006|binding|INFO|Removing iface tap12e30ba1-a4 ovn-installed in OVS
Feb 25 07:41:16 np0005629333 nova_compute[244014]: 2026-02-25 12:41:16.494 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:16.499 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:70:d4 10.100.0.6'], port_security=['fa:16:3e:e6:70:d4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'bd882e8f-3bea-4fa6-a185-186bc9911267', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec8bae53-fe6a-49d1-a733-f00c198be561', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503e879cd1f44a16b9baef106ceba949', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3bf34285-1a67-4c95-bb68-fd577a012f6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18f4e8da-4409-4095-9850-aaee82dd8fd1, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=12e30ba1-a42b-437b-b14c-ab41975d5f53) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:41:16 np0005629333 nova_compute[244014]: 2026-02-25 12:41:16.500 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:16.500 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 12e30ba1-a42b-437b-b14c-ab41975d5f53 in datapath ec8bae53-fe6a-49d1-a733-f00c198be561 unbound from our chassis#033[00m
Feb 25 07:41:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:16.502 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec8bae53-fe6a-49d1-a733-f00c198be561#033[00m
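The PortBindingUpdatedEvent match above fires because the port's chassis column emptied (old row: up=[True] with our chassis; new row: chassis=[]), which the agent reads as "unbound from our chassis" before re-provisioning metadata for the datapath. A toy matcher over dict stand-ins for the IDL rows; real code hangs this logic off an ovsdbapp row event, as the record shows:

    def chassis_transition(old, new, our_chassis):
        was_ours = our_chassis in old.get("chassis", [])
        is_ours = our_chassis in new.get("chassis", [])
        if was_ours and not is_ours:
            return "unbound"  # triggers the re-provisioning above
        if is_ours and not was_ours:
            return "bound"
        return None

    print(chassis_transition({"chassis": ["np0005629333"]},
                             {"chassis": []}, "np0005629333"))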
Feb 25 07:41:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:16.514 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8523bbce-f41d-484f-9345-96dd2b5457f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:16.536 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3605565f-3427-424a-85c1-3f2e7189d57c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:16 np0005629333 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000067.scope: Deactivated successfully.
Feb 25 07:41:16 np0005629333 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000067.scope: Consumed 3.824s CPU time.
Feb 25 07:41:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:16.539 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[26bfef85-241a-48b4-9f43-d55b79e29aab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:16 np0005629333 systemd-machined[210048]: Machine qemu-130-instance-00000067 terminated.
Feb 25 07:41:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:16.562 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[463c7d4f-373b-4a94-9798-b20e4274cc6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:16.576 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a50aac-af7d-4519-811b-9822ff2dde36]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec8bae53-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a5:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 17, 'rx_bytes': 700, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 17, 'rx_bytes': 700, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516532, 'reachable_time': 18880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335393, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:16.588 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[11b8e255-8445-4b46-8441-28161c026751]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516546, 'tstamp': 516546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335394, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516549, 'tstamp': 516549}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335394, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
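Both privsep replies above carry pyroute2-style netlink messages (an RTM_NEWLINK for the tap link, then RTM_NEWADDR records for 10.100.0.2 and 169.254.169.254) whose 'attrs' member is a list of [name, value] pairs. A tiny accessor in the same shape, over a literal trimmed from the logged link message:

    def get_attr(msg, name):
        for key, value in msg["attrs"]:
            if key == name:
                return value
        return None

    link = {"attrs": [["IFLA_IFNAME", "tapec8bae53-f1"],
                      ["IFLA_ADDRESS", "fa:16:3e:4b:a5:00"],
                      ["IFLA_OPERSTATE", "UP"]]}
    print(get_attr(link, "IFLA_IFNAME"))  # tapec8bae53-f1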
Feb 25 07:41:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:16.590 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8bae53-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:16 np0005629333 nova_compute[244014]: 2026-02-25 12:41:16.591 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:16 np0005629333 nova_compute[244014]: 2026-02-25 12:41:16.594 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:16.594 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec8bae53-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:16.594 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:41:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:16.595 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec8bae53-f0, col_values=(('external_ids', {'iface-id': 'e2d1eadf-baf7-4e5c-8052-6c64e8476a26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:16.595 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:41:16 np0005629333 nova_compute[244014]: 2026-02-25 12:41:16.664 244018 INFO nova.virt.libvirt.driver [-] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Instance destroyed successfully.#033[00m
Feb 25 07:41:16 np0005629333 nova_compute[244014]: 2026-02-25 12:41:16.664 244018 DEBUG nova.objects.instance [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'resources' on Instance uuid bd882e8f-3bea-4fa6-a185-186bc9911267 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:41:16 np0005629333 nova_compute[244014]: 2026-02-25 12:41:16.679 244018 DEBUG nova.virt.libvirt.vif [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:41:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1299470364',display_name='tempest-ServersTestJSON-server-1299470364',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1299470364',id=103,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:41:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-itlntti8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:41:13Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=bd882e8f-3bea-4fa6-a185-186bc9911267,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "address": "fa:16:3e:e6:70:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12e30ba1-a4", "ovs_interfaceid": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:41:16 np0005629333 nova_compute[244014]: 2026-02-25 12:41:16.680 244018 DEBUG nova.network.os_vif_util [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "address": "fa:16:3e:e6:70:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12e30ba1-a4", "ovs_interfaceid": "12e30ba1-a42b-437b-b14c-ab41975d5f53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:41:16 np0005629333 nova_compute[244014]: 2026-02-25 12:41:16.680 244018 DEBUG nova.network.os_vif_util [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:70:d4,bridge_name='br-int',has_traffic_filtering=True,id=12e30ba1-a42b-437b-b14c-ab41975d5f53,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12e30ba1-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:41:16 np0005629333 nova_compute[244014]: 2026-02-25 12:41:16.686 244018 DEBUG os_vif [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:70:d4,bridge_name='br-int',has_traffic_filtering=True,id=12e30ba1-a42b-437b-b14c-ab41975d5f53,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12e30ba1-a4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:41:16 np0005629333 nova_compute[244014]: 2026-02-25 12:41:16.688 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:16 np0005629333 nova_compute[244014]: 2026-02-25 12:41:16.688 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12e30ba1-a4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:16 np0005629333 nova_compute[244014]: 2026-02-25 12:41:16.691 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:16 np0005629333 nova_compute[244014]: 2026-02-25 12:41:16.693 244018 INFO os_vif [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:70:d4,bridge_name='br-int',has_traffic_filtering=True,id=12e30ba1-a42b-437b-b14c-ab41975d5f53,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12e30ba1-a4')#033[00m
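The unplug path above converts the Neutron VIF dict into a VIFOpenVSwitch object before os-vif deletes the port from br-int. An illustrative dataclass mirroring the printed repr; the class and helper here sketch the shape of the conversion, not os-vif's own model:

    from dataclasses import dataclass

    @dataclass
    class VIFOpenVSwitch:
        id: str
        address: str
        bridge_name: str
        vif_name: str
        active: bool

    def nova_to_osvif(vif):
        return VIFOpenVSwitch(id=vif["id"], address=vif["address"],
                              bridge_name=vif["details"]["bridge_name"],
                              vif_name=vif["devname"], active=vif["active"])

    print(nova_to_osvif({"id": "12e30ba1-a42b-437b-b14c-ab41975d5f53",
                         "address": "fa:16:3e:e6:70:d4",
                         "details": {"bridge_name": "br-int"},
                         "devname": "tap12e30ba1-a4", "active": False}))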
Feb 25 07:41:17 np0005629333 nova_compute[244014]: 2026-02-25 12:41:17.146 244018 INFO nova.virt.libvirt.driver [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Deleting instance files /var/lib/nova/instances/bd882e8f-3bea-4fa6-a185-186bc9911267_del#033[00m
Feb 25 07:41:17 np0005629333 nova_compute[244014]: 2026-02-25 12:41:17.147 244018 INFO nova.virt.libvirt.driver [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Deletion of /var/lib/nova/instances/bd882e8f-3bea-4fa6-a185-186bc9911267_del complete#033[00m
Feb 25 07:41:17 np0005629333 nova_compute[244014]: 2026-02-25 12:41:17.274 244018 INFO nova.compute.manager [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Took 0.84 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:41:17 np0005629333 nova_compute[244014]: 2026-02-25 12:41:17.274 244018 DEBUG oslo.service.loopingcall [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:41:17 np0005629333 nova_compute[244014]: 2026-02-25 12:41:17.275 244018 DEBUG nova.compute.manager [-] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:41:17 np0005629333 nova_compute[244014]: 2026-02-25 12:41:17.275 244018 DEBUG nova.network.neutron [-] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
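The loopingcall wait logged at 12:41:17.274 is nova parking on _deallocate_network_with_retries, i.e. the Neutron deallocation wrapped in a retry loop. A stdlib stand-in for that pattern, with made-up names and a fixed sleep instead of oslo.service's timer machinery:

```python
# Illustrative stand-in for the retry wrapper the loopingcall above waits
# on; run_with_retries and deallocate are hypothetical names, not nova's.
import time

def run_with_retries(fn, attempts=3, delay=1.0):
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise  # out of retries: let the failure propagate
            time.sleep(delay)  # back off before the next attempt

def deallocate():
    print("deallocating network for instance")

run_with_retries(deallocate)
```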
Feb 25 07:41:17 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:17Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c8:65:4a 10.100.0.7
Feb 25 07:41:17 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:17Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c8:65:4a 10.100.0.7
Feb 25 07:41:18 np0005629333 nova_compute[244014]: 2026-02-25 12:41:18.034 244018 DEBUG nova.compute.manager [req-10dfcddc-2c0a-4d6e-826a-10c9709f310b req-5fa1a64f-85fd-4672-9128-6e07ff69dc3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Received event network-vif-unplugged-12e30ba1-a42b-437b-b14c-ab41975d5f53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:41:18 np0005629333 nova_compute[244014]: 2026-02-25 12:41:18.035 244018 DEBUG oslo_concurrency.lockutils [req-10dfcddc-2c0a-4d6e-826a-10c9709f310b req-5fa1a64f-85fd-4672-9128-6e07ff69dc3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:18 np0005629333 nova_compute[244014]: 2026-02-25 12:41:18.036 244018 DEBUG oslo_concurrency.lockutils [req-10dfcddc-2c0a-4d6e-826a-10c9709f310b req-5fa1a64f-85fd-4672-9128-6e07ff69dc3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:18 np0005629333 nova_compute[244014]: 2026-02-25 12:41:18.036 244018 DEBUG oslo_concurrency.lockutils [req-10dfcddc-2c0a-4d6e-826a-10c9709f310b req-5fa1a64f-85fd-4672-9128-6e07ff69dc3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:18 np0005629333 nova_compute[244014]: 2026-02-25 12:41:18.037 244018 DEBUG nova.compute.manager [req-10dfcddc-2c0a-4d6e-826a-10c9709f310b req-5fa1a64f-85fd-4672-9128-6e07ff69dc3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] No waiting events found dispatching network-vif-unplugged-12e30ba1-a42b-437b-b14c-ab41975d5f53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:41:18 np0005629333 nova_compute[244014]: 2026-02-25 12:41:18.038 244018 DEBUG nova.compute.manager [req-10dfcddc-2c0a-4d6e-826a-10c9709f310b req-5fa1a64f-85fd-4672-9128-6e07ff69dc3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Received event network-vif-unplugged-12e30ba1-a42b-437b-b14c-ab41975d5f53 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:41:18 np0005629333 nova_compute[244014]: 2026-02-25 12:41:18.038 244018 DEBUG nova.compute.manager [req-10dfcddc-2c0a-4d6e-826a-10c9709f310b req-5fa1a64f-85fd-4672-9128-6e07ff69dc3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Received event network-vif-plugged-12e30ba1-a42b-437b-b14c-ab41975d5f53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:41:18 np0005629333 nova_compute[244014]: 2026-02-25 12:41:18.038 244018 DEBUG oslo_concurrency.lockutils [req-10dfcddc-2c0a-4d6e-826a-10c9709f310b req-5fa1a64f-85fd-4672-9128-6e07ff69dc3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:18 np0005629333 nova_compute[244014]: 2026-02-25 12:41:18.039 244018 DEBUG oslo_concurrency.lockutils [req-10dfcddc-2c0a-4d6e-826a-10c9709f310b req-5fa1a64f-85fd-4672-9128-6e07ff69dc3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:18 np0005629333 nova_compute[244014]: 2026-02-25 12:41:18.039 244018 DEBUG oslo_concurrency.lockutils [req-10dfcddc-2c0a-4d6e-826a-10c9709f310b req-5fa1a64f-85fd-4672-9128-6e07ff69dc3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:18 np0005629333 nova_compute[244014]: 2026-02-25 12:41:18.039 244018 DEBUG nova.compute.manager [req-10dfcddc-2c0a-4d6e-826a-10c9709f310b req-5fa1a64f-85fd-4672-9128-6e07ff69dc3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] No waiting events found dispatching network-vif-plugged-12e30ba1-a42b-437b-b14c-ab41975d5f53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:41:18 np0005629333 nova_compute[244014]: 2026-02-25 12:41:18.039 244018 WARNING nova.compute.manager [req-10dfcddc-2c0a-4d6e-826a-10c9709f310b req-5fa1a64f-85fd-4672-9128-6e07ff69dc3f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Received unexpected event network-vif-plugged-12e30ba1-a42b-437b-b14c-ab41975d5f53 for instance with vm_state active and task_state deleting.#033[00m
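The acquire/pop/release triples above are nova's instance-event table at work: an external Neutron notification either completes a registered waiter or, as with the late network-vif-plugged here, finds nothing waiting and is logged as unexpected. A sketch of that waiter table using stdlib threading (class and method names are illustrative, not nova's InstanceEvents API):

```python
# Hypothetical waiter table mirroring the pattern in the log: dispatch
# either wakes a registered waiter or reports the event as unexpected.
import threading

class EventTable:
    def __init__(self):
        self._lock = threading.Lock()          # plays the "-events" lock role
        self._waiters = {}                      # (instance, event) -> Event

    def prepare(self, instance, name):
        ev = threading.Event()
        with self._lock:
            self._waiters[(instance, name)] = ev
        return ev                               # caller blocks on ev.wait()

    def dispatch(self, instance, name):
        with self._lock:
            ev = self._waiters.pop((instance, name), None)
        if ev is None:
            print(f"no waiting events found dispatching {name}")
        else:
            ev.set()

table = EventTable()
table.dispatch("bd882e8f", "network-vif-plugged")  # the unexpected path
```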
Feb 25 07:41:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 440 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 7.8 MiB/s wr, 420 op/s
Feb 25 07:41:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:41:19 np0005629333 nova_compute[244014]: 2026-02-25 12:41:19.001 244018 DEBUG nova.network.neutron [-] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:41:19 np0005629333 nova_compute[244014]: 2026-02-25 12:41:19.023 244018 INFO nova.compute.manager [-] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Took 1.75 seconds to deallocate network for instance.#033[00m
Feb 25 07:41:19 np0005629333 nova_compute[244014]: 2026-02-25 12:41:19.084 244018 DEBUG oslo_concurrency.lockutils [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:19 np0005629333 nova_compute[244014]: 2026-02-25 12:41:19.084 244018 DEBUG oslo_concurrency.lockutils [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:19 np0005629333 nova_compute[244014]: 2026-02-25 12:41:19.207 244018 DEBUG oslo_concurrency.processutils [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:19 np0005629333 podman[335446]: 2026-02-25 12:41:19.700754064 +0000 UTC m=+0.048447529 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:41:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:41:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3063817213' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:41:19 np0005629333 nova_compute[244014]: 2026-02-25 12:41:19.765 244018 DEBUG oslo_concurrency.processutils [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
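The resource tracker sizes its disk view from that ceph df --format=json call. A hedged sketch of issuing the same command and reading the cluster totals; the stats.total_bytes / stats.total_avail_bytes field names match current Ceph JSON output but are an assumption here, not shown in the log:

```python
# Sketch: run the same "ceph df" the resource tracker runs above and
# derive GiB totals from it. JSON field names are assumed, not logged.
import json
import subprocess

def cluster_capacity_gib(conf="/etc/ceph/ceph.conf", user="openstack"):
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", user, "--conf", conf],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    gib = 1024 ** 3
    return stats["total_bytes"] // gib, stats["total_avail_bytes"] // gib

# total, avail = cluster_capacity_gib()  # e.g. (60, 59) on this node
```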
Feb 25 07:41:19 np0005629333 nova_compute[244014]: 2026-02-25 12:41:19.770 244018 DEBUG nova.compute.provider_tree [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:41:19 np0005629333 nova_compute[244014]: 2026-02-25 12:41:19.785 244018 DEBUG nova.scheduler.client.report [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
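The inventory above implies the schedulable capacity directly: placement exposes (total - reserved) * allocation_ratio per resource class, so this node offers 32 VCPU, 7167 MB of RAM, and 52.2 GB of disk. In brief:

```python
# Capacity math implied by the inventory record above: usable capacity
# per resource class is (total - reserved) * allocation_ratio.
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
```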
Feb 25 07:41:19 np0005629333 nova_compute[244014]: 2026-02-25 12:41:19.804 244018 DEBUG oslo_concurrency.lockutils [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:19 np0005629333 nova_compute[244014]: 2026-02-25 12:41:19.834 244018 INFO nova.scheduler.client.report [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Deleted allocations for instance bd882e8f-3bea-4fa6-a185-186bc9911267#033[00m
Feb 25 07:41:19 np0005629333 nova_compute[244014]: 2026-02-25 12:41:19.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:41:19 np0005629333 nova_compute[244014]: 2026-02-25 12:41:19.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 07:41:19 np0005629333 nova_compute[244014]: 2026-02-25 12:41:19.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 07:41:19 np0005629333 nova_compute[244014]: 2026-02-25 12:41:19.910 244018 DEBUG oslo_concurrency.lockutils [None req-51ddd589-4485-4f62-82f5-4b3f08e75290 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "bd882e8f-3bea-4fa6-a185-186bc9911267" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.478s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
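The Acquiring / acquired :: waited / released :: held triples that bracket do_terminate_instance come from a named-lock decorator. A stdlib approximation of that observable behavior (this is not oslo_concurrency's implementation, just the same pattern):

```python
# Hypothetical named-lock decorator reproducing the waited/held timing
# lines seen throughout this log; not oslo_concurrency.lockutils itself.
import threading
import time
from collections import defaultdict

_locks = defaultdict(threading.Lock)  # one shared lock per name

def synchronized(name):
    def wrap(fn):
        def inner(*args, **kwargs):
            t0 = time.monotonic()
            with _locks[name]:
                print(f'Lock "{name}" acquired :: waited '
                      f"{time.monotonic() - t0:.3f}s")
                t1 = time.monotonic()
                try:
                    return fn(*args, **kwargs)
                finally:
                    print(f'Lock "{name}" released :: held '
                          f"{time.monotonic() - t1:.3f}s")
        return inner
    return wrap

@synchronized("bd882e8f-3bea-4fa6-a185-186bc9911267")
def do_terminate_instance():
    pass  # terminate work happens while the lock is held

do_terminate_instance()
```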
Feb 25 07:41:20 np0005629333 nova_compute[244014]: 2026-02-25 12:41:20.053 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:20 np0005629333 nova_compute[244014]: 2026-02-25 12:41:20.085 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-62d2fee1-f07f-44e3-a511-6b9bb341a3ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:41:20 np0005629333 nova_compute[244014]: 2026-02-25 12:41:20.086 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-62d2fee1-f07f-44e3-a511-6b9bb341a3ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:41:20 np0005629333 nova_compute[244014]: 2026-02-25 12:41:20.086 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 25 07:41:20 np0005629333 nova_compute[244014]: 2026-02-25 12:41:20.086 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid 62d2fee1-f07f-44e3-a511-6b9bb341a3ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:41:20 np0005629333 nova_compute[244014]: 2026-02-25 12:41:20.132 244018 DEBUG nova.compute.manager [req-605bf9ae-9359-43e5-9c96-8f1c24d55066 req-d953a1f5-53de-4ea2-ae0f-06edab08e931 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Received event network-vif-deleted-12e30ba1-a42b-437b-b14c-ab41975d5f53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:41:20 np0005629333 nova_compute[244014]: 2026-02-25 12:41:20.137 244018 DEBUG oslo_concurrency.lockutils [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "4717553a-9bb9-4278-b610-7c446db3f8d9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:20 np0005629333 nova_compute[244014]: 2026-02-25 12:41:20.137 244018 DEBUG oslo_concurrency.lockutils [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:20 np0005629333 nova_compute[244014]: 2026-02-25 12:41:20.137 244018 DEBUG oslo_concurrency.lockutils [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:20 np0005629333 nova_compute[244014]: 2026-02-25 12:41:20.138 244018 DEBUG oslo_concurrency.lockutils [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:20 np0005629333 nova_compute[244014]: 2026-02-25 12:41:20.138 244018 DEBUG oslo_concurrency.lockutils [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:20 np0005629333 nova_compute[244014]: 2026-02-25 12:41:20.139 244018 INFO nova.compute.manager [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Terminating instance#033[00m
Feb 25 07:41:20 np0005629333 nova_compute[244014]: 2026-02-25 12:41:20.140 244018 DEBUG nova.compute.manager [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:41:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 305 active+clean; 440 MiB data, 1006 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 3.8 MiB/s wr, 271 op/s
Feb 25 07:41:20 np0005629333 kernel: tap6150f46a-de (unregistering): left promiscuous mode
Feb 25 07:41:20 np0005629333 NetworkManager[49836]: <info>  [1772023280.4797] device (tap6150f46a-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:41:20 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:20Z|01007|binding|INFO|Releasing lport 6150f46a-debb-49d6-b2e0-07b81f4cccea from this chassis (sb_readonly=0)
Feb 25 07:41:20 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:20Z|01008|binding|INFO|Setting lport 6150f46a-debb-49d6-b2e0-07b81f4cccea down in Southbound
Feb 25 07:41:20 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:20Z|01009|binding|INFO|Removing iface tap6150f46a-de ovn-installed in OVS
Feb 25 07:41:20 np0005629333 nova_compute[244014]: 2026-02-25 12:41:20.487 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:20 np0005629333 nova_compute[244014]: 2026-02-25 12:41:20.492 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:20 np0005629333 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000065.scope: Deactivated successfully.
Feb 25 07:41:20 np0005629333 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000065.scope: Consumed 13.394s CPU time.
Feb 25 07:41:20 np0005629333 systemd-machined[210048]: Machine qemu-128-instance-00000065 terminated.
Feb 25 07:41:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:20.604 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:a2:d4 10.100.0.9'], port_security=['fa:16:3e:ee:a2:d4 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4717553a-9bb9-4278-b610-7c446db3f8d9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec8bae53-fe6a-49d1-a733-f00c198be561', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503e879cd1f44a16b9baef106ceba949', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3bf34285-1a67-4c95-bb68-fd577a012f6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18f4e8da-4409-4095-9850-aaee82dd8fd1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6150f46a-debb-49d6-b2e0-07b81f4cccea) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:41:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:20.605 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6150f46a-debb-49d6-b2e0-07b81f4cccea in datapath ec8bae53-fe6a-49d1-a733-f00c198be561 unbound from our chassis#033[00m
Feb 25 07:41:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:20.607 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec8bae53-fe6a-49d1-a733-f00c198be561#033[00m
Feb 25 07:41:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:20.619 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2811d441-03ae-4134-97bb-e940d5b79ee4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:20.642 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1e60cd67-b2a1-498d-abc5-8f21d452c1f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:20.646 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ce7b69d4-20ed-4b40-b675-ad934235db62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:20.667 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[99fc63b2-0a22-4a99-acc5-2ecad05ef662]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:20.681 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[64dbdb60-a19d-4b0b-9412-49fea9b5d998]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec8bae53-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a5:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516532, 'reachable_time': 18880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335478, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:20.695 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[30a5f17a-7677-42da-b2f6-ed8c43daf774]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516546, 'tstamp': 516546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335479, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516549, 'tstamp': 516549}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335479, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
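The privsep replies above return pyroute2-style netlink messages in which every field of interest sits in an attrs list of [name, value] pairs. A small stdlib helper, analogous to what the agent relies on, to pull one attribute out:

```python
# The privsep replies above carry pyroute2-style netlink messages; each
# message keeps its fields in an "attrs" list of [name, value] pairs.
def get_attr(msg, name, default=None):
    for key, value in msg.get("attrs", []):
        if key == name:
            return value
    return default

# Trimmed-down RTM_NEWADDR record shaped like the reply above.
addr_msg = {"attrs": [["IFA_ADDRESS", "169.254.169.254"],
                      ["IFA_LABEL", "tapec8bae53-f1"]]}
print(get_attr(addr_msg, "IFA_ADDRESS"))  # 169.254.169.254
```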
Feb 25 07:41:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:20.697 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8bae53-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:20 np0005629333 nova_compute[244014]: 2026-02-25 12:41:20.698 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:20 np0005629333 nova_compute[244014]: 2026-02-25 12:41:20.702 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:20.702 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec8bae53-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:20.702 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:41:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:20.703 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec8bae53-f0, col_values=(('external_ids', {'iface-id': 'e2d1eadf-baf7-4e5c-8052-6c64e8476a26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:20.703 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
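Taken together, the three transactions ensure the metadata tap sits on br-int with external_ids:iface-id pointing at the Neutron port; both were already true, hence "Transaction caused no change". A rough CLI equivalent of the same idempotent sequence, with subprocess standing in for the agent's ovsdbapp connection:

```python
# Sketch: ovs-vsctl equivalents of DelPortCommand, AddPortCommand and
# DbSetCommand as logged above. Each step is idempotent, so re-running
# against matching state changes nothing (the "no change" case).
import subprocess

def ensure_metadata_port(port: str, iface_id: str) -> None:
    run = lambda *args: subprocess.run(["ovs-vsctl", *args], check=True)
    run("--if-exists", "del-port", "br-ex", port)    # DelPortCommand
    run("--may-exist", "add-port", "br-int", port)   # AddPortCommand
    run("set", "Interface", port,                    # DbSetCommand
        f"external_ids:iface-id={iface_id}")

ensure_metadata_port("tapec8bae53-f0",
                     "e2d1eadf-baf7-4e5c-8052-6c64e8476a26")
```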
Feb 25 07:41:20 np0005629333 nova_compute[244014]: 2026-02-25 12:41:20.799 244018 INFO nova.virt.libvirt.driver [-] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Instance destroyed successfully.#033[00m
Feb 25 07:41:20 np0005629333 nova_compute[244014]: 2026-02-25 12:41:20.799 244018 DEBUG nova.objects.instance [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'resources' on Instance uuid 4717553a-9bb9-4278-b610-7c446db3f8d9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:41:20 np0005629333 nova_compute[244014]: 2026-02-25 12:41:20.831 244018 DEBUG nova.virt.libvirt.vif [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:40:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1299470364',display_name='tempest-ServersTestJSON-server-1299470364',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1299470364',id=101,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:40:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-mh21m2o5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:40:56Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=4717553a-9bb9-4278-b610-7c446db3f8d9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "address": "fa:16:3e:ee:a2:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6150f46a-de", "ovs_interfaceid": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:41:20 np0005629333 nova_compute[244014]: 2026-02-25 12:41:20.831 244018 DEBUG nova.network.os_vif_util [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "address": "fa:16:3e:ee:a2:d4", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6150f46a-de", "ovs_interfaceid": "6150f46a-debb-49d6-b2e0-07b81f4cccea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:41:20 np0005629333 nova_compute[244014]: 2026-02-25 12:41:20.832 244018 DEBUG nova.network.os_vif_util [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:a2:d4,bridge_name='br-int',has_traffic_filtering=True,id=6150f46a-debb-49d6-b2e0-07b81f4cccea,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6150f46a-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:41:20 np0005629333 nova_compute[244014]: 2026-02-25 12:41:20.832 244018 DEBUG os_vif [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:a2:d4,bridge_name='br-int',has_traffic_filtering=True,id=6150f46a-debb-49d6-b2e0-07b81f4cccea,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6150f46a-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:41:20 np0005629333 nova_compute[244014]: 2026-02-25 12:41:20.833 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:20 np0005629333 nova_compute[244014]: 2026-02-25 12:41:20.834 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6150f46a-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:20 np0005629333 nova_compute[244014]: 2026-02-25 12:41:20.835 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:20 np0005629333 nova_compute[244014]: 2026-02-25 12:41:20.837 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:41:20 np0005629333 nova_compute[244014]: 2026-02-25 12:41:20.839 244018 INFO os_vif [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:a2:d4,bridge_name='br-int',has_traffic_filtering=True,id=6150f46a-debb-49d6-b2e0-07b81f4cccea,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6150f46a-de')#033[00m
Feb 25 07:41:20 np0005629333 podman[335492]: 2026-02-25 12:41:20.879390173 +0000 UTC m=+0.069779670 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 25 07:41:21 np0005629333 nova_compute[244014]: 2026-02-25 12:41:21.225 244018 DEBUG nova.compute.manager [req-a8086259-b5ba-4dbb-ac75-506c4d83120e req-f07ba6b7-63ee-4aa1-b4fa-9bfd3604f13d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Received event network-changed-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:41:21 np0005629333 nova_compute[244014]: 2026-02-25 12:41:21.225 244018 DEBUG nova.compute.manager [req-a8086259-b5ba-4dbb-ac75-506c4d83120e req-f07ba6b7-63ee-4aa1-b4fa-9bfd3604f13d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Refreshing instance network info cache due to event network-changed-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:41:21 np0005629333 nova_compute[244014]: 2026-02-25 12:41:21.225 244018 DEBUG oslo_concurrency.lockutils [req-a8086259-b5ba-4dbb-ac75-506c4d83120e req-f07ba6b7-63ee-4aa1-b4fa-9bfd3604f13d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:41:21 np0005629333 nova_compute[244014]: 2026-02-25 12:41:21.225 244018 DEBUG oslo_concurrency.lockutils [req-a8086259-b5ba-4dbb-ac75-506c4d83120e req-f07ba6b7-63ee-4aa1-b4fa-9bfd3604f13d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:41:21 np0005629333 nova_compute[244014]: 2026-02-25 12:41:21.226 244018 DEBUG nova.network.neutron [req-a8086259-b5ba-4dbb-ac75-506c4d83120e req-f07ba6b7-63ee-4aa1-b4fa-9bfd3604f13d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Refreshing network info cache for port 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:41:21 np0005629333 nova_compute[244014]: 2026-02-25 12:41:21.920 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Updating instance_info_cache with network_info: [{"id": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "address": "fa:16:3e:f6:f0:2e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521e9ad8-9e", "ovs_interfaceid": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:41:21 np0005629333 nova_compute[244014]: 2026-02-25 12:41:21.944 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-62d2fee1-f07f-44e3-a511-6b9bb341a3ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:41:21 np0005629333 nova_compute[244014]: 2026-02-25 12:41:21.945 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 25 07:41:21 np0005629333 nova_compute[244014]: 2026-02-25 12:41:21.945 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:41:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 438 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.9 MiB/s wr, 305 op/s
Feb 25 07:41:22 np0005629333 nova_compute[244014]: 2026-02-25 12:41:22.277 244018 DEBUG nova.compute.manager [req-4c065e5b-8650-4b5a-8d45-99abe464ca47 req-18abde24-a69b-4f55-a937-cf158cd36912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Received event network-vif-unplugged-6150f46a-debb-49d6-b2e0-07b81f4cccea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:41:22 np0005629333 nova_compute[244014]: 2026-02-25 12:41:22.277 244018 DEBUG oslo_concurrency.lockutils [req-4c065e5b-8650-4b5a-8d45-99abe464ca47 req-18abde24-a69b-4f55-a937-cf158cd36912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:22 np0005629333 nova_compute[244014]: 2026-02-25 12:41:22.277 244018 DEBUG oslo_concurrency.lockutils [req-4c065e5b-8650-4b5a-8d45-99abe464ca47 req-18abde24-a69b-4f55-a937-cf158cd36912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:22 np0005629333 nova_compute[244014]: 2026-02-25 12:41:22.278 244018 DEBUG oslo_concurrency.lockutils [req-4c065e5b-8650-4b5a-8d45-99abe464ca47 req-18abde24-a69b-4f55-a937-cf158cd36912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:22 np0005629333 nova_compute[244014]: 2026-02-25 12:41:22.278 244018 DEBUG nova.compute.manager [req-4c065e5b-8650-4b5a-8d45-99abe464ca47 req-18abde24-a69b-4f55-a937-cf158cd36912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] No waiting events found dispatching network-vif-unplugged-6150f46a-debb-49d6-b2e0-07b81f4cccea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:41:22 np0005629333 nova_compute[244014]: 2026-02-25 12:41:22.278 244018 DEBUG nova.compute.manager [req-4c065e5b-8650-4b5a-8d45-99abe464ca47 req-18abde24-a69b-4f55-a937-cf158cd36912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Received event network-vif-unplugged-6150f46a-debb-49d6-b2e0-07b81f4cccea for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 07:41:22 np0005629333 nova_compute[244014]: 2026-02-25 12:41:22.279 244018 DEBUG nova.compute.manager [req-4c065e5b-8650-4b5a-8d45-99abe464ca47 req-18abde24-a69b-4f55-a937-cf158cd36912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Received event network-vif-plugged-6150f46a-debb-49d6-b2e0-07b81f4cccea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:41:22 np0005629333 nova_compute[244014]: 2026-02-25 12:41:22.279 244018 DEBUG oslo_concurrency.lockutils [req-4c065e5b-8650-4b5a-8d45-99abe464ca47 req-18abde24-a69b-4f55-a937-cf158cd36912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:41:22 np0005629333 nova_compute[244014]: 2026-02-25 12:41:22.280 244018 DEBUG oslo_concurrency.lockutils [req-4c065e5b-8650-4b5a-8d45-99abe464ca47 req-18abde24-a69b-4f55-a937-cf158cd36912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:41:22 np0005629333 nova_compute[244014]: 2026-02-25 12:41:22.280 244018 DEBUG oslo_concurrency.lockutils [req-4c065e5b-8650-4b5a-8d45-99abe464ca47 req-18abde24-a69b-4f55-a937-cf158cd36912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:41:22 np0005629333 nova_compute[244014]: 2026-02-25 12:41:22.280 244018 DEBUG nova.compute.manager [req-4c065e5b-8650-4b5a-8d45-99abe464ca47 req-18abde24-a69b-4f55-a937-cf158cd36912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] No waiting events found dispatching network-vif-plugged-6150f46a-debb-49d6-b2e0-07b81f4cccea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:41:22 np0005629333 nova_compute[244014]: 2026-02-25 12:41:22.281 244018 WARNING nova.compute.manager [req-4c065e5b-8650-4b5a-8d45-99abe464ca47 req-18abde24-a69b-4f55-a937-cf158cd36912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Received unexpected event network-vif-plugged-6150f46a-debb-49d6-b2e0-07b81f4cccea for instance with vm_state active and task_state deleting.
Feb 25 07:41:22 np0005629333 nova_compute[244014]: 2026-02-25 12:41:22.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:41:22 np0005629333 nova_compute[244014]: 2026-02-25 12:41:22.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:41:22 np0005629333 nova_compute[244014]: 2026-02-25 12:41:22.900 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:41:22 np0005629333 nova_compute[244014]: 2026-02-25 12:41:22.901 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:41:22 np0005629333 nova_compute[244014]: 2026-02-25 12:41:22.901 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:41:22 np0005629333 nova_compute[244014]: 2026-02-25 12:41:22.901 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 07:41:22 np0005629333 nova_compute[244014]: 2026-02-25 12:41:22.902 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:41:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:41:23 np0005629333 nova_compute[244014]: 2026-02-25 12:41:23.588 244018 DEBUG nova.network.neutron [req-a8086259-b5ba-4dbb-ac75-506c4d83120e req-f07ba6b7-63ee-4aa1-b4fa-9bfd3604f13d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Updated VIF entry in instance network info cache for port 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 07:41:23 np0005629333 nova_compute[244014]: 2026-02-25 12:41:23.590 244018 DEBUG nova.network.neutron [req-a8086259-b5ba-4dbb-ac75-506c4d83120e req-f07ba6b7-63ee-4aa1-b4fa-9bfd3604f13d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Updating instance_info_cache with network_info: [{"id": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "address": "fa:16:3e:95:77:21", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7721", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96ad4438-9f", "ovs_interfaceid": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:41:23 np0005629333 nova_compute[244014]: 2026-02-25 12:41:23.612 244018 DEBUG oslo_concurrency.lockutils [req-a8086259-b5ba-4dbb-ac75-506c4d83120e req-f07ba6b7-63ee-4aa1-b4fa-9bfd3604f13d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:41:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:41:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1936890460' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:41:23 np0005629333 nova_compute[244014]: 2026-02-25 12:41:23.709 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.807s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:41:23 np0005629333 nova_compute[244014]: 2026-02-25 12:41:23.802 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:41:23 np0005629333 nova_compute[244014]: 2026-02-25 12:41:23.802 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:41:23 np0005629333 nova_compute[244014]: 2026-02-25 12:41:23.806 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:41:23 np0005629333 nova_compute[244014]: 2026-02-25 12:41:23.807 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:41:23 np0005629333 nova_compute[244014]: 2026-02-25 12:41:23.810 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:41:23 np0005629333 nova_compute[244014]: 2026-02-25 12:41:23.811 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:41:23 np0005629333 nova_compute[244014]: 2026-02-25 12:41:23.814 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:41:23 np0005629333 nova_compute[244014]: 2026-02-25 12:41:23.814 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:41:23 np0005629333 nova_compute[244014]: 2026-02-25 12:41:23.985 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:41:23 np0005629333 nova_compute[244014]: 2026-02-25 12:41:23.986 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3234MB free_disk=59.83032176736742GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 07:41:23 np0005629333 nova_compute[244014]: 2026-02-25 12:41:23.987 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:41:23 np0005629333 nova_compute[244014]: 2026-02-25 12:41:23.987 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:41:24 np0005629333 nova_compute[244014]: 2026-02-25 12:41:24.066 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 62d2fee1-f07f-44e3-a511-6b9bb341a3ed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 07:41:24 np0005629333 nova_compute[244014]: 2026-02-25 12:41:24.067 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 4717553a-9bb9-4278-b610-7c446db3f8d9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 07:41:24 np0005629333 nova_compute[244014]: 2026-02-25 12:41:24.067 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance bb4e80d2-c200-4c24-8154-e1a86d06946b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 07:41:24 np0005629333 nova_compute[244014]: 2026-02-25 12:41:24.068 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance d547b0db-242e-49a5-8a76-5682b0235b6d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 07:41:24 np0005629333 nova_compute[244014]: 2026-02-25 12:41:24.068 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 07:41:24 np0005629333 nova_compute[244014]: 2026-02-25 12:41:24.069 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 07:41:24 np0005629333 nova_compute[244014]: 2026-02-25 12:41:24.143 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:41:24 np0005629333 nova_compute[244014]: 2026-02-25 12:41:24.199 244018 INFO nova.virt.libvirt.driver [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Deleting instance files /var/lib/nova/instances/4717553a-9bb9-4278-b610-7c446db3f8d9_del
Feb 25 07:41:24 np0005629333 nova_compute[244014]: 2026-02-25 12:41:24.201 244018 INFO nova.virt.libvirt.driver [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Deletion of /var/lib/nova/instances/4717553a-9bb9-4278-b610-7c446db3f8d9_del complete
Feb 25 07:41:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 416 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 2.2 MiB/s wr, 295 op/s
Feb 25 07:41:24 np0005629333 nova_compute[244014]: 2026-02-25 12:41:24.302 244018 INFO nova.compute.manager [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Took 4.16 seconds to destroy the instance on the hypervisor.
Feb 25 07:41:24 np0005629333 nova_compute[244014]: 2026-02-25 12:41:24.304 244018 DEBUG oslo.service.loopingcall [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 07:41:24 np0005629333 nova_compute[244014]: 2026-02-25 12:41:24.304 244018 DEBUG nova.compute.manager [-] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 07:41:24 np0005629333 nova_compute[244014]: 2026-02-25 12:41:24.305 244018 DEBUG nova.network.neutron [-] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 07:41:24 np0005629333 nova_compute[244014]: 2026-02-25 12:41:24.633 244018 INFO nova.compute.manager [None req-df569dc2-06ac-4c35-a513-2950c7b81e50 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Get console output
Feb 25 07:41:24 np0005629333 nova_compute[244014]: 2026-02-25 12:41:24.638 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 07:41:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:41:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1388731596' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:41:24 np0005629333 nova_compute[244014]: 2026-02-25 12:41:24.691 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:41:24 np0005629333 nova_compute[244014]: 2026-02-25 12:41:24.695 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:41:24 np0005629333 nova_compute[244014]: 2026-02-25 12:41:24.708 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:41:24 np0005629333 nova_compute[244014]: 2026-02-25 12:41:24.737 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 07:41:24 np0005629333 nova_compute[244014]: 2026-02-25 12:41:24.738 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:41:24 np0005629333 nova_compute[244014]: 2026-02-25 12:41:24.900 244018 INFO nova.compute.manager [None req-bb1f7e25-c491-4c67-b272-5270cc202368 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Pausing
Feb 25 07:41:24 np0005629333 nova_compute[244014]: 2026-02-25 12:41:24.902 244018 DEBUG nova.objects.instance [None req-bb1f7e25-c491-4c67-b272-5270cc202368 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'flavor' on Instance uuid bb4e80d2-c200-4c24-8154-e1a86d06946b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:41:24 np0005629333 nova_compute[244014]: 2026-02-25 12:41:24.935 244018 DEBUG nova.network.neutron [-] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:41:24 np0005629333 nova_compute[244014]: 2026-02-25 12:41:24.938 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023284.9374084, bb4e80d2-c200-4c24-8154-e1a86d06946b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:41:24 np0005629333 nova_compute[244014]: 2026-02-25 12:41:24.938 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] VM Paused (Lifecycle Event)
Feb 25 07:41:24 np0005629333 nova_compute[244014]: 2026-02-25 12:41:24.940 244018 DEBUG nova.compute.manager [None req-bb1f7e25-c491-4c67-b272-5270cc202368 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:41:24 np0005629333 nova_compute[244014]: 2026-02-25 12:41:24.969 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:41:24 np0005629333 nova_compute[244014]: 2026-02-25 12:41:24.972 244018 INFO nova.compute.manager [-] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Took 0.67 seconds to deallocate network for instance.
Feb 25 07:41:24 np0005629333 nova_compute[244014]: 2026-02-25 12:41:24.976 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:41:25 np0005629333 nova_compute[244014]: 2026-02-25 12:41:25.007 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] During sync_power_state the instance has a pending task (pausing). Skip.
Feb 25 07:41:25 np0005629333 nova_compute[244014]: 2026-02-25 12:41:25.014 244018 DEBUG nova.compute.manager [req-b3f28807-c2da-4827-914f-e2e2f1a0c514 req-b6b3aa0e-cb33-4931-b11a-0a828e2ac378 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Received event network-vif-deleted-6150f46a-debb-49d6-b2e0-07b81f4cccea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:41:25 np0005629333 nova_compute[244014]: 2026-02-25 12:41:25.031 244018 DEBUG oslo_concurrency.lockutils [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:41:25 np0005629333 nova_compute[244014]: 2026-02-25 12:41:25.032 244018 DEBUG oslo_concurrency.lockutils [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:41:25 np0005629333 nova_compute[244014]: 2026-02-25 12:41:25.054 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:41:25 np0005629333 nova_compute[244014]: 2026-02-25 12:41:25.134 244018 DEBUG oslo_concurrency.processutils [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:41:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:41:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1310139123' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:41:25 np0005629333 nova_compute[244014]: 2026-02-25 12:41:25.653 244018 DEBUG oslo_concurrency.processutils [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:41:25 np0005629333 nova_compute[244014]: 2026-02-25 12:41:25.658 244018 DEBUG nova.compute.provider_tree [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:41:25 np0005629333 nova_compute[244014]: 2026-02-25 12:41:25.738 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:41:25 np0005629333 nova_compute[244014]: 2026-02-25 12:41:25.740 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:41:25 np0005629333 nova_compute[244014]: 2026-02-25 12:41:25.746 244018 DEBUG nova.scheduler.client.report [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:41:25 np0005629333 nova_compute[244014]: 2026-02-25 12:41:25.767 244018 DEBUG oslo_concurrency.lockutils [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:41:25 np0005629333 nova_compute[244014]: 2026-02-25 12:41:25.811 244018 INFO nova.scheduler.client.report [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Deleted allocations for instance 4717553a-9bb9-4278-b610-7c446db3f8d9
Feb 25 07:41:25 np0005629333 nova_compute[244014]: 2026-02-25 12:41:25.835 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:41:25 np0005629333 nova_compute[244014]: 2026-02-25 12:41:25.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:41:25 np0005629333 nova_compute[244014]: 2026-02-25 12:41:25.894 244018 DEBUG oslo_concurrency.lockutils [None req-a080c4d3-ad6b-4e30-8448-06c4d223d99f 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "4717553a-9bb9-4278-b610-7c446db3f8d9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:41:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1808: 305 pgs: 305 active+clean; 416 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 278 op/s
Feb 25 07:41:26 np0005629333 nova_compute[244014]: 2026-02-25 12:41:26.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:41:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 305 active+clean; 384 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.2 MiB/s wr, 373 op/s
Feb 25 07:41:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:41:28 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:28Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:95:77:21 10.100.0.12
Feb 25 07:41:28 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:28Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:95:77:21 10.100.0.12
Feb 25 07:41:28 np0005629333 nova_compute[244014]: 2026-02-25 12:41:28.696 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "aeced0b2-f2b3-4012-b740-eaa411f99631" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:41:28 np0005629333 nova_compute[244014]: 2026-02-25 12:41:28.697 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:41:28 np0005629333 nova_compute[244014]: 2026-02-25 12:41:28.714 244018 DEBUG nova.compute.manager [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:41:28 np0005629333 nova_compute[244014]: 2026-02-25 12:41:28.790 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:41:28 np0005629333 nova_compute[244014]: 2026-02-25 12:41:28.791 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:41:28 np0005629333 nova_compute[244014]: 2026-02-25 12:41:28.797 244018 DEBUG nova.virt.hardware [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:41:28 np0005629333 nova_compute[244014]: 2026-02-25 12:41:28.798 244018 INFO nova.compute.claims [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:41:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:41:28 np0005629333 nova_compute[244014]: 2026-02-25 12:41:28.973 244018 DEBUG oslo_concurrency.processutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:41:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:41:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:41:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:41:29 np0005629333 nova_compute[244014]: 2026-02-25 12:41:29.033 244018 INFO nova.compute.manager [None req-0d38717e-20bc-49ee-8544-a1c31a6e50d8 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Get console output
Feb 25 07:41:29 np0005629333 nova_compute[244014]: 2026-02-25 12:41:29.039 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 07:41:29 np0005629333 nova_compute[244014]: 2026-02-25 12:41:29.228 244018 INFO nova.compute.manager [None req-ac37be72-2481-4445-a40f-0ca67578947c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Unpausing
Feb 25 07:41:29 np0005629333 nova_compute[244014]: 2026-02-25 12:41:29.230 244018 DEBUG nova.objects.instance [None req-ac37be72-2481-4445-a40f-0ca67578947c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'flavor' on Instance uuid bb4e80d2-c200-4c24-8154-e1a86d06946b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:41:29 np0005629333 nova_compute[244014]: 2026-02-25 12:41:29.261 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023289.2611344, bb4e80d2-c200-4c24-8154-e1a86d06946b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:41:29 np0005629333 nova_compute[244014]: 2026-02-25 12:41:29.262 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] VM Resumed (Lifecycle Event)
Feb 25 07:41:29 np0005629333 virtqemud[243235]: argument unsupported: QEMU guest agent is not configured
Feb 25 07:41:29 np0005629333 nova_compute[244014]: 2026-02-25 12:41:29.270 244018 DEBUG nova.virt.libvirt.guest [None req-ac37be72-2481-4445-a40f-0ca67578947c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Feb 25 07:41:29 np0005629333 nova_compute[244014]: 2026-02-25 12:41:29.271 244018 DEBUG nova.compute.manager [None req-ac37be72-2481-4445-a40f-0ca67578947c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:41:29 np0005629333 nova_compute[244014]: 2026-02-25 12:41:29.282 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:41:29 np0005629333 nova_compute[244014]: 2026-02-25 12:41:29.288 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:41:29 np0005629333 nova_compute[244014]: 2026-02-25 12:41:29.332 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] During sync_power_state the instance has a pending task (unpausing). Skip.
Feb 25 07:41:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:41:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1795722616' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:41:29 np0005629333 nova_compute[244014]: 2026-02-25 12:41:29.517 244018 DEBUG oslo_concurrency.processutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:41:29 np0005629333 nova_compute[244014]: 2026-02-25 12:41:29.525 244018 DEBUG nova.compute.provider_tree [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:41:29 np0005629333 nova_compute[244014]: 2026-02-25 12:41:29.545 244018 DEBUG nova.scheduler.client.report [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:41:29 np0005629333 nova_compute[244014]: 2026-02-25 12:41:29.586 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:41:29 np0005629333 nova_compute[244014]: 2026-02-25 12:41:29.587 244018 DEBUG nova.compute.manager [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:41:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:41:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:41:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:41:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:41:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:41:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:41:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:41:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:41:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:41:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:41:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:41:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:41:29 np0005629333 nova_compute[244014]: 2026-02-25 12:41:29.646 244018 DEBUG nova.compute.manager [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:41:29 np0005629333 nova_compute[244014]: 2026-02-25 12:41:29.646 244018 DEBUG nova.network.neutron [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:41:29 np0005629333 nova_compute[244014]: 2026-02-25 12:41:29.670 244018 INFO nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:41:29 np0005629333 nova_compute[244014]: 2026-02-25 12:41:29.697 244018 DEBUG nova.compute.manager [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:41:29 np0005629333 nova_compute[244014]: 2026-02-25 12:41:29.930 244018 DEBUG nova.policy [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c5bc24b5f5048469cf3f701ce511bfa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '503e879cd1f44a16b9baef106ceba949', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:41:29 np0005629333 nova_compute[244014]: 2026-02-25 12:41:29.936 244018 DEBUG nova.compute.manager [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:41:29 np0005629333 nova_compute[244014]: 2026-02-25 12:41:29.938 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:41:29 np0005629333 nova_compute[244014]: 2026-02-25 12:41:29.938 244018 INFO nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Creating image(s)
Feb 25 07:41:29 np0005629333 nova_compute[244014]: 2026-02-25 12:41:29.961 244018 DEBUG nova.storage.rbd_utils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image aeced0b2-f2b3-4012-b740-eaa411f99631_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:41:30 np0005629333 nova_compute[244014]: 2026-02-25 12:41:30.093 244018 DEBUG nova.storage.rbd_utils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image aeced0b2-f2b3-4012-b740-eaa411f99631_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:41:30 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:41:30 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:41:30 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:41:30 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:41:30 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:41:30 np0005629333 podman[335862]: 2026-02-25 12:41:30.106174321 +0000 UTC m=+0.057844143 container create 2de3d8003e006a4e4bbef1d5d9fa595729527531685b6e41edd958dc8a57a6b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_goldstine, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:41:30 np0005629333 nova_compute[244014]: 2026-02-25 12:41:30.121 244018 DEBUG nova.storage.rbd_utils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image aeced0b2-f2b3-4012-b740-eaa411f99631_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:41:30 np0005629333 nova_compute[244014]: 2026-02-25 12:41:30.124 244018 DEBUG oslo_concurrency.processutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:41:30 np0005629333 nova_compute[244014]: 2026-02-25 12:41:30.149 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:41:30 np0005629333 podman[335862]: 2026-02-25 12:41:30.071984276 +0000 UTC m=+0.023654128 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:41:30 np0005629333 nova_compute[244014]: 2026-02-25 12:41:30.190 244018 DEBUG oslo_concurrency.processutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:41:30 np0005629333 nova_compute[244014]: 2026-02-25 12:41:30.194 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:41:30 np0005629333 nova_compute[244014]: 2026-02-25 12:41:30.196 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:41:30 np0005629333 nova_compute[244014]: 2026-02-25 12:41:30.196 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:41:30 np0005629333 systemd[1]: Started libpod-conmon-2de3d8003e006a4e4bbef1d5d9fa595729527531685b6e41edd958dc8a57a6b2.scope.
Feb 25 07:41:30 np0005629333 nova_compute[244014]: 2026-02-25 12:41:30.226 244018 DEBUG nova.storage.rbd_utils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image aeced0b2-f2b3-4012-b740-eaa411f99631_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:41:30 np0005629333 nova_compute[244014]: 2026-02-25 12:41:30.233 244018 DEBUG oslo_concurrency.processutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 aeced0b2-f2b3-4012-b740-eaa411f99631_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:41:30 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:41:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 305 active+clean; 384 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 148 op/s
Feb 25 07:41:30 np0005629333 podman[335862]: 2026-02-25 12:41:30.296252655 +0000 UTC m=+0.247922477 container init 2de3d8003e006a4e4bbef1d5d9fa595729527531685b6e41edd958dc8a57a6b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_goldstine, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:41:30 np0005629333 podman[335862]: 2026-02-25 12:41:30.304144088 +0000 UTC m=+0.255813910 container start 2de3d8003e006a4e4bbef1d5d9fa595729527531685b6e41edd958dc8a57a6b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_goldstine, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:41:30 np0005629333 podman[335862]: 2026-02-25 12:41:30.310492747 +0000 UTC m=+0.262162569 container attach 2de3d8003e006a4e4bbef1d5d9fa595729527531685b6e41edd958dc8a57a6b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:41:30 np0005629333 compassionate_goldstine[335920]: 167 167
Feb 25 07:41:30 np0005629333 systemd[1]: libpod-2de3d8003e006a4e4bbef1d5d9fa595729527531685b6e41edd958dc8a57a6b2.scope: Deactivated successfully.
Feb 25 07:41:30 np0005629333 conmon[335920]: conmon 2de3d8003e006a4e4bbe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2de3d8003e006a4e4bbef1d5d9fa595729527531685b6e41edd958dc8a57a6b2.scope/container/memory.events
Feb 25 07:41:30 np0005629333 podman[335862]: 2026-02-25 12:41:30.314543791 +0000 UTC m=+0.266213623 container died 2de3d8003e006a4e4bbef1d5d9fa595729527531685b6e41edd958dc8a57a6b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_goldstine, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:41:30 np0005629333 systemd[1]: var-lib-containers-storage-overlay-34b3d86b2f99e1222f8976eb870eaaf474af7411abe0f21a4cb6e99781ce5479-merged.mount: Deactivated successfully.
Feb 25 07:41:30 np0005629333 podman[335862]: 2026-02-25 12:41:30.386951255 +0000 UTC m=+0.338621077 container remove 2de3d8003e006a4e4bbef1d5d9fa595729527531685b6e41edd958dc8a57a6b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_goldstine, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 07:41:30 np0005629333 systemd[1]: libpod-conmon-2de3d8003e006a4e4bbef1d5d9fa595729527531685b6e41edd958dc8a57a6b2.scope: Deactivated successfully.
Feb 25 07:41:30 np0005629333 nova_compute[244014]: 2026-02-25 12:41:30.569 244018 DEBUG nova.network.neutron [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Successfully created port: 8627fb46-e938-4a67-9067-fc60794fca05 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:41:30 np0005629333 podman[335975]: 2026-02-25 12:41:30.570383901 +0000 UTC m=+0.072398584 container create 59529b2eba7ab9eeab6a85a76b253e5dd543e86925f912efc02363d8a4f595d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_bardeen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:41:30 np0005629333 nova_compute[244014]: 2026-02-25 12:41:30.613 244018 DEBUG oslo_concurrency.processutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 aeced0b2-f2b3-4012-b740-eaa411f99631_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.380s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:41:30 np0005629333 systemd[1]: Started libpod-conmon-59529b2eba7ab9eeab6a85a76b253e5dd543e86925f912efc02363d8a4f595d5.scope.
Feb 25 07:41:30 np0005629333 podman[335975]: 2026-02-25 12:41:30.526641297 +0000 UTC m=+0.028656010 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:41:30 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:41:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd2e77d69899408a345361707dbc63f14b1dfb1ad78c3c1fdc141a1c6a8a59d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:41:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd2e77d69899408a345361707dbc63f14b1dfb1ad78c3c1fdc141a1c6a8a59d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:41:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd2e77d69899408a345361707dbc63f14b1dfb1ad78c3c1fdc141a1c6a8a59d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:41:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd2e77d69899408a345361707dbc63f14b1dfb1ad78c3c1fdc141a1c6a8a59d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:41:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd2e77d69899408a345361707dbc63f14b1dfb1ad78c3c1fdc141a1c6a8a59d7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
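The 0x7fffffff in these xfs messages is the 32-bit signed time_t ceiling; decoding it shows the January 2038 cutoff the kernel is warning about:

    # 0x7fffffff (2**31 - 1) is the largest 32-bit signed timestamp; it
    # decodes to the 2038 limit named in the xfs messages above.
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00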
Feb 25 07:41:30 np0005629333 podman[335975]: 2026-02-25 12:41:30.660631618 +0000 UTC m=+0.162646321 container init 59529b2eba7ab9eeab6a85a76b253e5dd543e86925f912efc02363d8a4f595d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 07:41:30 np0005629333 podman[335975]: 2026-02-25 12:41:30.668814469 +0000 UTC m=+0.170829162 container start 59529b2eba7ab9eeab6a85a76b253e5dd543e86925f912efc02363d8a4f595d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_bardeen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 07:41:30 np0005629333 podman[335975]: 2026-02-25 12:41:30.675810846 +0000 UTC m=+0.177825529 container attach 59529b2eba7ab9eeab6a85a76b253e5dd543e86925f912efc02363d8a4f595d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_bardeen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 07:41:30 np0005629333 nova_compute[244014]: 2026-02-25 12:41:30.687 244018 DEBUG nova.storage.rbd_utils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] resizing rbd image aeced0b2-f2b3-4012-b740-eaa411f99631_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
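The resize to 1073741824 bytes matches the flavor's 1 GiB root disk. A hedged sketch of the same operation through the python-rbd binding, reusing the pool, image name, and conf path from this log (client.openstack follows the `--id openstack` used above):

    # Hedged sketch of the 1 GiB resize logged above via python-rbd; pool,
    # image name, conffile and client id come from this log.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, 'aeced0b2-f2b3-4012-b740-eaa411f99631_disk') as img:
                img.resize(1024 ** 3)  # 1073741824 bytes
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()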
Feb 25 07:41:30 np0005629333 nova_compute[244014]: 2026-02-25 12:41:30.781 244018 DEBUG nova.objects.instance [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'migration_context' on Instance uuid aeced0b2-f2b3-4012-b740-eaa411f99631 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:41:30 np0005629333 nova_compute[244014]: 2026-02-25 12:41:30.800 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:41:30 np0005629333 nova_compute[244014]: 2026-02-25 12:41:30.801 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Ensure instance console log exists: /var/lib/nova/instances/aeced0b2-f2b3-4012-b740-eaa411f99631/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:41:30 np0005629333 nova_compute[244014]: 2026-02-25 12:41:30.801 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:41:30 np0005629333 nova_compute[244014]: 2026-02-25 12:41:30.802 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:41:30 np0005629333 nova_compute[244014]: 2026-02-25 12:41:30.802 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:41:30 np0005629333 nova_compute[244014]: 2026-02-25 12:41:30.836 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:41:30 np0005629333 nova_compute[244014]: 2026-02-25 12:41:30.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:41:30 np0005629333 nova_compute[244014]: 2026-02-25 12:41:30.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 07:41:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:41:30
Feb 25 07:41:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:41:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:41:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'backups', '.mgr', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'vms']
Feb 25 07:41:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
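These mgr lines are one balancer pass in upmap mode that found nothing to move (`prepared 0/10 upmap changes`). The same state can be inspected from the CLI; a sketch, assuming an admin keyring is available on this host:

    # Sketch: querying the mgr balancer whose pass is logged above; assumes
    # an admin keyring on this host.
    import json
    import subprocess

    out = subprocess.run(['ceph', 'balancer', 'status', '--format', 'json'],
                         capture_output=True, text=True, check=True).stdout
    print(json.dumps(json.loads(out), indent=2))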
Feb 25 07:41:31 np0005629333 brave_bardeen[335994]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:41:31 np0005629333 brave_bardeen[335994]: --> All data devices are unavailable
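`brave_bardeen` is one of the short-lived cephadm helper containers seen throughout this window; its two output lines are ceph-volume reporting that all three LVM-backed data devices are already consumed by existing OSDs. A hedged reproduction of that kind of report (the exact subcommand the container ran is an assumption; the log shows only its output):

    # Hedged sketch of a disposable ceph-volume report like the container
    # above; the subcommand and device list are assumptions, only the image
    # digest and LV names come from this log.
    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86')
    subprocess.run(
        ['podman', 'run', '--rm', '--privileged', '-v', '/dev:/dev',
         IMAGE, 'ceph-volume', 'lvm', 'batch', '--report',
         '/dev/ceph_vg0/ceph_lv0', '/dev/ceph_vg1/ceph_lv1',
         '/dev/ceph_vg2/ceph_lv2'])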
Feb 25 07:41:31 np0005629333 systemd[1]: libpod-59529b2eba7ab9eeab6a85a76b253e5dd543e86925f912efc02363d8a4f595d5.scope: Deactivated successfully.
Feb 25 07:41:31 np0005629333 podman[335975]: 2026-02-25 12:41:31.107681463 +0000 UTC m=+0.609696146 container died 59529b2eba7ab9eeab6a85a76b253e5dd543e86925f912efc02363d8a4f595d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_bardeen, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:41:31 np0005629333 systemd[1]: var-lib-containers-storage-overlay-cd2e77d69899408a345361707dbc63f14b1dfb1ad78c3c1fdc141a1c6a8a59d7-merged.mount: Deactivated successfully.
Feb 25 07:41:31 np0005629333 podman[335975]: 2026-02-25 12:41:31.160961836 +0000 UTC m=+0.662976529 container remove 59529b2eba7ab9eeab6a85a76b253e5dd543e86925f912efc02363d8a4f595d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_bardeen, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3)
Feb 25 07:41:31 np0005629333 systemd[1]: libpod-conmon-59529b2eba7ab9eeab6a85a76b253e5dd543e86925f912efc02363d8a4f595d5.scope: Deactivated successfully.
Feb 25 07:41:31 np0005629333 nova_compute[244014]: 2026-02-25 12:41:31.420 244018 DEBUG nova.network.neutron [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Successfully updated port: 8627fb46-e938-4a67-9067-fc60794fca05 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:41:31 np0005629333 nova_compute[244014]: 2026-02-25 12:41:31.443 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "refresh_cache-aeced0b2-f2b3-4012-b740-eaa411f99631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:41:31 np0005629333 nova_compute[244014]: 2026-02-25 12:41:31.443 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquired lock "refresh_cache-aeced0b2-f2b3-4012-b740-eaa411f99631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:41:31 np0005629333 nova_compute[244014]: 2026-02-25 12:41:31.443 244018 DEBUG nova.network.neutron [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:41:31 np0005629333 podman[336159]: 2026-02-25 12:41:31.58881938 +0000 UTC m=+0.038134597 container create b7bcc283ab9713268d50b88747dc3010e0565bd3603fa06e10b2851d25faffde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_austin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 07:41:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:41:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:41:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:41:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:41:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:41:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:41:31 np0005629333 systemd[1]: Started libpod-conmon-b7bcc283ab9713268d50b88747dc3010e0565bd3603fa06e10b2851d25faffde.scope.
Feb 25 07:41:31 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:41:31 np0005629333 podman[336159]: 2026-02-25 12:41:31.662349685 +0000 UTC m=+0.111664942 container init b7bcc283ab9713268d50b88747dc3010e0565bd3603fa06e10b2851d25faffde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_austin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:41:31 np0005629333 nova_compute[244014]: 2026-02-25 12:41:31.664 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023276.6630564, bd882e8f-3bea-4fa6-a185-186bc9911267 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:41:31 np0005629333 nova_compute[244014]: 2026-02-25 12:41:31.664 244018 INFO nova.compute.manager [-] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] VM Stopped (Lifecycle Event)
Feb 25 07:41:31 np0005629333 podman[336159]: 2026-02-25 12:41:31.572050627 +0000 UTC m=+0.021365874 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:41:31 np0005629333 podman[336159]: 2026-02-25 12:41:31.668969652 +0000 UTC m=+0.118284879 container start b7bcc283ab9713268d50b88747dc3010e0565bd3603fa06e10b2851d25faffde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_austin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:41:31 np0005629333 systemd[1]: libpod-b7bcc283ab9713268d50b88747dc3010e0565bd3603fa06e10b2851d25faffde.scope: Deactivated successfully.
Feb 25 07:41:31 np0005629333 affectionate_austin[336176]: 167 167
Feb 25 07:41:31 np0005629333 conmon[336176]: conmon b7bcc283ab9713268d50 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b7bcc283ab9713268d50b88747dc3010e0565bd3603fa06e10b2851d25faffde.scope/container/memory.events
Feb 25 07:41:31 np0005629333 podman[336159]: 2026-02-25 12:41:31.679837859 +0000 UTC m=+0.129153106 container attach b7bcc283ab9713268d50b88747dc3010e0565bd3603fa06e10b2851d25faffde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_austin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 07:41:31 np0005629333 podman[336159]: 2026-02-25 12:41:31.680737244 +0000 UTC m=+0.130052471 container died b7bcc283ab9713268d50b88747dc3010e0565bd3603fa06e10b2851d25faffde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:41:31 np0005629333 systemd[1]: var-lib-containers-storage-overlay-275b4b28fedf72aee5d03dba8906423381562040981d0f40573811a8898e2619-merged.mount: Deactivated successfully.
Feb 25 07:41:31 np0005629333 podman[336159]: 2026-02-25 12:41:31.724678874 +0000 UTC m=+0.173994101 container remove b7bcc283ab9713268d50b88747dc3010e0565bd3603fa06e10b2851d25faffde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_austin, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 07:41:31 np0005629333 systemd[1]: libpod-conmon-b7bcc283ab9713268d50b88747dc3010e0565bd3603fa06e10b2851d25faffde.scope: Deactivated successfully.
Feb 25 07:41:31 np0005629333 nova_compute[244014]: 2026-02-25 12:41:31.743 244018 DEBUG nova.compute.manager [None req-4c28932b-889c-484e-8941-476d1e92da68 - - - - - -] [instance: bd882e8f-3bea-4fa6-a185-186bc9911267] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:41:31 np0005629333 nova_compute[244014]: 2026-02-25 12:41:31.791 244018 DEBUG nova.compute.manager [req-e3bc7dfa-6550-443c-a86a-541ee006201a req-f26ad408-d75f-40a2-b16e-684c94b009bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Received event network-changed-8627fb46-e938-4a67-9067-fc60794fca05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:41:31 np0005629333 nova_compute[244014]: 2026-02-25 12:41:31.791 244018 DEBUG nova.compute.manager [req-e3bc7dfa-6550-443c-a86a-541ee006201a req-f26ad408-d75f-40a2-b16e-684c94b009bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Refreshing instance network info cache due to event network-changed-8627fb46-e938-4a67-9067-fc60794fca05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:41:31 np0005629333 nova_compute[244014]: 2026-02-25 12:41:31.791 244018 DEBUG oslo_concurrency.lockutils [req-e3bc7dfa-6550-443c-a86a-541ee006201a req-f26ad408-d75f-40a2-b16e-684c94b009bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-aeced0b2-f2b3-4012-b740-eaa411f99631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:41:31 np0005629333 nova_compute[244014]: 2026-02-25 12:41:31.822 244018 DEBUG nova.network.neutron [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:41:31 np0005629333 podman[336200]: 2026-02-25 12:41:31.87299756 +0000 UTC m=+0.034739132 container create a6a72f4056ea366aaf0bf03db39cbff5cdd3ca4861bb2b3610a6baf8b0afd075 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:41:31 np0005629333 systemd[1]: Started libpod-conmon-a6a72f4056ea366aaf0bf03db39cbff5cdd3ca4861bb2b3610a6baf8b0afd075.scope.
Feb 25 07:41:31 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:41:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/212a3747a6360eb37192a3d45b8042ab3aa336922b2a06b27f88a292138c0a5f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:41:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/212a3747a6360eb37192a3d45b8042ab3aa336922b2a06b27f88a292138c0a5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:41:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/212a3747a6360eb37192a3d45b8042ab3aa336922b2a06b27f88a292138c0a5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:41:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/212a3747a6360eb37192a3d45b8042ab3aa336922b2a06b27f88a292138c0a5f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:41:31 np0005629333 podman[336200]: 2026-02-25 12:41:31.858571953 +0000 UTC m=+0.020313545 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:41:31 np0005629333 podman[336200]: 2026-02-25 12:41:31.955814577 +0000 UTC m=+0.117556189 container init a6a72f4056ea366aaf0bf03db39cbff5cdd3ca4861bb2b3610a6baf8b0afd075 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_newton, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 07:41:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:41:31 np0005629333 podman[336200]: 2026-02-25 12:41:31.960619602 +0000 UTC m=+0.122361194 container start a6a72f4056ea366aaf0bf03db39cbff5cdd3ca4861bb2b3610a6baf8b0afd075 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_newton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 07:41:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:41:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:41:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:41:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:41:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:41:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:41:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:41:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:41:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:41:32 np0005629333 podman[336200]: 2026-02-25 12:41:32.013805543 +0000 UTC m=+0.175547425 container attach a6a72f4056ea366aaf0bf03db39cbff5cdd3ca4861bb2b3610a6baf8b0afd075 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.052 244018 INFO nova.compute.manager [None req-e29bc8b6-b0d1-41b8-9782-144ac0cd2dd5 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Get console output
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.060 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
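The `can't concat NoneType to bytes` message is the classic symptom of appending a pty read that returned None instead of b''. An illustrative guard (not nova's actual code):

    # Illustrative guard for the warning above: treat a None read as an
    # empty chunk before concatenating. Not nova's actual implementation.
    def append_console_chunk(buf: bytes, chunk) -> bytes:
        return buf + (chunk or b'')

    assert append_console_chunk(b'console: ', None) == b'console: '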
Feb 25 07:41:32 np0005629333 serene_newton[336218]: {
Feb 25 07:41:32 np0005629333 serene_newton[336218]:    "0": [
Feb 25 07:41:32 np0005629333 serene_newton[336218]:        {
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "devices": [
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "/dev/loop3"
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            ],
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "lv_name": "ceph_lv0",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "lv_size": "21470642176",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "name": "ceph_lv0",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "tags": {
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.cluster_name": "ceph",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.crush_device_class": "",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.encrypted": "0",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.objectstore": "bluestore",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.osd_id": "0",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.type": "block",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.vdo": "0",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.with_tpm": "0"
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            },
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "type": "block",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "vg_name": "ceph_vg0"
Feb 25 07:41:32 np0005629333 serene_newton[336218]:        }
Feb 25 07:41:32 np0005629333 serene_newton[336218]:    ],
Feb 25 07:41:32 np0005629333 serene_newton[336218]:    "1": [
Feb 25 07:41:32 np0005629333 serene_newton[336218]:        {
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "devices": [
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "/dev/loop4"
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            ],
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "lv_name": "ceph_lv1",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "lv_size": "21470642176",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "name": "ceph_lv1",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "tags": {
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.cluster_name": "ceph",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.crush_device_class": "",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.encrypted": "0",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.objectstore": "bluestore",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.osd_id": "1",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.type": "block",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.vdo": "0",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.with_tpm": "0"
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            },
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "type": "block",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "vg_name": "ceph_vg1"
Feb 25 07:41:32 np0005629333 serene_newton[336218]:        }
Feb 25 07:41:32 np0005629333 serene_newton[336218]:    ],
Feb 25 07:41:32 np0005629333 serene_newton[336218]:    "2": [
Feb 25 07:41:32 np0005629333 serene_newton[336218]:        {
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "devices": [
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "/dev/loop5"
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            ],
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "lv_name": "ceph_lv2",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "lv_size": "21470642176",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "name": "ceph_lv2",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "tags": {
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.cluster_name": "ceph",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.crush_device_class": "",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.encrypted": "0",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.objectstore": "bluestore",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.osd_id": "2",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.type": "block",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.vdo": "0",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:                "ceph.with_tpm": "0"
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            },
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "type": "block",
Feb 25 07:41:32 np0005629333 serene_newton[336218]:            "vg_name": "ceph_vg2"
Feb 25 07:41:32 np0005629333 serene_newton[336218]:        }
Feb 25 07:41:32 np0005629333 serene_newton[336218]:    ]
Feb 25 07:41:32 np0005629333 serene_newton[336218]: }
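The JSON block printed by `serene_newton` has the shape of `ceph-volume lvm list --format json`: a map of OSD id to its logical volumes. A sketch that re-runs the listing and flattens it into an OSD table (calling ceph-volume directly on the host is an assumption; in the log it ran inside a disposable container):

    # Sketch: re-running the listing printed above and mapping OSD ids to
    # their LVs; assumes ceph-volume is callable on the host, whereas the
    # log ran it inside a throwaway container.
    import json
    import subprocess

    raw = subprocess.run(['ceph-volume', 'lvm', 'list', '--format', 'json'],
                         capture_output=True, text=True, check=True).stdout
    for osd_id, lvs in sorted(json.loads(raw).items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices'][0]} "
                  f"(osd_fsid {lv['tags']['ceph.osd_fsid']})")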
Feb 25 07:41:32 np0005629333 systemd[1]: libpod-a6a72f4056ea366aaf0bf03db39cbff5cdd3ca4861bb2b3610a6baf8b0afd075.scope: Deactivated successfully.
Feb 25 07:41:32 np0005629333 podman[336200]: 2026-02-25 12:41:32.229975374 +0000 UTC m=+0.391716956 container died a6a72f4056ea366aaf0bf03db39cbff5cdd3ca4861bb2b3610a6baf8b0afd075 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_newton, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 07:41:32 np0005629333 systemd[1]: var-lib-containers-storage-overlay-212a3747a6360eb37192a3d45b8042ab3aa336922b2a06b27f88a292138c0a5f-merged.mount: Deactivated successfully.
Feb 25 07:41:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 305 active+clean; 423 MiB data, 998 MiB used, 59 GiB / 60 GiB avail; 405 KiB/s rd, 3.7 MiB/s wr, 175 op/s
Feb 25 07:41:32 np0005629333 podman[336200]: 2026-02-25 12:41:32.282967589 +0000 UTC m=+0.444709161 container remove a6a72f4056ea366aaf0bf03db39cbff5cdd3ca4861bb2b3610a6baf8b0afd075 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_newton, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 07:41:32 np0005629333 systemd[1]: libpod-conmon-a6a72f4056ea366aaf0bf03db39cbff5cdd3ca4861bb2b3610a6baf8b0afd075.scope: Deactivated successfully.
Feb 25 07:41:32 np0005629333 podman[336298]: 2026-02-25 12:41:32.69973905 +0000 UTC m=+0.037255912 container create 38e79cf349071c38b04accfa1354b50c7992663e147d405652c0ff64649caccc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.710 244018 DEBUG nova.network.neutron [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Updating instance_info_cache with network_info: [{"id": "8627fb46-e938-4a67-9067-fc60794fca05", "address": "fa:16:3e:9d:4f:fa", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8627fb46-e9", "ovs_interfaceid": "8627fb46-e938-4a67-9067-fc60794fca05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
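The network_info blob being cached above is a list of VIF dicts. Pulling out the practically useful fields is a plain dict walk; a sketch against a trimmed-down literal with the same structure:

    # Sketch: extracting the useful fields from one network_info entry; the
    # `vif` literal below is a trimmed copy of the structure logged above.
    vif = {
        'id': '8627fb46-e938-4a67-9067-fc60794fca05',
        'address': 'fa:16:3e:9d:4f:fa',
        'network': {
            'bridge': 'br-int',
            'subnets': [{'cidr': '10.100.0.0/28',
                         'ips': [{'address': '10.100.0.13'}]}],
            'meta': {'mtu': 1442},
        },
    }
    ips = [ip['address'] for s in vif['network']['subnets'] for ip in s['ips']]
    print(f"port {vif['id']} mac {vif['address']} ips {ips} "
          f"mtu {vif['network']['meta']['mtu']} bridge {vif['network']['bridge']}")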
Feb 25 07:41:32 np0005629333 systemd[1]: Started libpod-conmon-38e79cf349071c38b04accfa1354b50c7992663e147d405652c0ff64649caccc.scope.
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.736 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Releasing lock "refresh_cache-aeced0b2-f2b3-4012-b740-eaa411f99631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.737 244018 DEBUG nova.compute.manager [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Instance network_info: |[{"id": "8627fb46-e938-4a67-9067-fc60794fca05", "address": "fa:16:3e:9d:4f:fa", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8627fb46-e9", "ovs_interfaceid": "8627fb46-e938-4a67-9067-fc60794fca05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.737 244018 DEBUG oslo_concurrency.lockutils [req-e3bc7dfa-6550-443c-a86a-541ee006201a req-f26ad408-d75f-40a2-b16e-684c94b009bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-aeced0b2-f2b3-4012-b740-eaa411f99631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.737 244018 DEBUG nova.network.neutron [req-e3bc7dfa-6550-443c-a86a-541ee006201a req-f26ad408-d75f-40a2-b16e-684c94b009bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Refreshing network info cache for port 8627fb46-e938-4a67-9067-fc60794fca05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.740 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Start _get_guest_xml network_info=[{"id": "8627fb46-e938-4a67-9067-fc60794fca05", "address": "fa:16:3e:9d:4f:fa", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8627fb46-e9", "ovs_interfaceid": "8627fb46-e938-4a67-9067-fc60794fca05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.746 244018 WARNING nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.754 244018 DEBUG nova.virt.libvirt.host [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.755 244018 DEBUG nova.virt.libvirt.host [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:41:32 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.758 244018 DEBUG nova.virt.libvirt.host [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.758 244018 DEBUG nova.virt.libvirt.host [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
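
Annotation: the two probes above (v1 miss, then v2 hit) indicate a host on the unified cgroup hierarchy. A rough equivalent of that check, assuming standard sysfs paths rather than nova's actual implementation in nova/virt/libvirt/host.py:

    import os

    CGROUP_V1_CPU = "/sys/fs/cgroup/cpu"                          # v1-only hierarchy
    CGROUP_V2_CONTROLLERS = "/sys/fs/cgroup/cgroup.controllers"   # v2 unified mount

    def has_cpu_controller() -> bool:
        # cgroups v1 mounts a dedicated 'cpu' hierarchy; absent on this host
        if os.path.isdir(CGROUP_V1_CPU):
            return True
        # cgroups v2 lists enabled controllers in a single file
        try:
            with open(CGROUP_V2_CONTROLLERS) as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False
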
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.758 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.759 244018 DEBUG nova.virt.hardware [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.759 244018 DEBUG nova.virt.hardware [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.759 244018 DEBUG nova.virt.hardware [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.759 244018 DEBUG nova.virt.hardware [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.759 244018 DEBUG nova.virt.hardware [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.760 244018 DEBUG nova.virt.hardware [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.760 244018 DEBUG nova.virt.hardware [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.760 244018 DEBUG nova.virt.hardware [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.760 244018 DEBUG nova.virt.hardware [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.761 244018 DEBUG nova.virt.hardware [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.761 244018 DEBUG nova.virt.hardware [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
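
Annotation: the topology walk above starts from flavor/image limits of 0 (unset), falls back to the 65536 maxima, and factors the vCPU count into sockets:cores:threads. An illustrative enumeration of the same candidate set (not nova's code, just the arithmetic it logs):

    import itertools

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Every (sockets, cores, threads) split whose product is the vCPU count
        for s, c, t in itertools.product(
                range(1, min(vcpus, max_sockets) + 1),
                range(1, min(vcpus, max_cores) + 1),
                range(1, min(vcpus, max_threads) + 1)):
            if s * c * t == vcpus:
                yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- matches the log
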
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.763 244018 DEBUG oslo_concurrency.processutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
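
Annotation: the subprocess logged above can be reproduced directly; the client id (openstack) and conf path are taken from the log line itself. A sketch (JSON field access hedged with .get since the monmap layout varies by Ceph release):

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    mon_map = json.loads(out)
    # The monmap lists each monitor's name and its messenger addresses.
    for mon in mon_map.get("mons", []):
        print(mon.get("name"), mon.get("public_addrs"))
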
Feb 25 07:41:32 np0005629333 podman[336298]: 2026-02-25 12:41:32.771204897 +0000 UTC m=+0.108721769 container init 38e79cf349071c38b04accfa1354b50c7992663e147d405652c0ff64649caccc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 25 07:41:32 np0005629333 podman[336298]: 2026-02-25 12:41:32.776544018 +0000 UTC m=+0.114060880 container start 38e79cf349071c38b04accfa1354b50c7992663e147d405652c0ff64649caccc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 07:41:32 np0005629333 podman[336298]: 2026-02-25 12:41:32.681784074 +0000 UTC m=+0.019300966 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:41:32 np0005629333 serene_thompson[336314]: 167 167
Feb 25 07:41:32 np0005629333 systemd[1]: libpod-38e79cf349071c38b04accfa1354b50c7992663e147d405652c0ff64649caccc.scope: Deactivated successfully.
Feb 25 07:41:32 np0005629333 podman[336298]: 2026-02-25 12:41:32.784024459 +0000 UTC m=+0.121541341 container attach 38e79cf349071c38b04accfa1354b50c7992663e147d405652c0ff64649caccc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:41:32 np0005629333 podman[336298]: 2026-02-25 12:41:32.784520773 +0000 UTC m=+0.122037645 container died 38e79cf349071c38b04accfa1354b50c7992663e147d405652c0ff64649caccc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_thompson, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.796 244018 DEBUG nova.compute.manager [req-5ed76d2a-09b8-4d18-b3fb-849841859212 req-3af544fc-74ff-4c60-bc57-8b9d0ad0fbff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Received event network-changed-ca235479-72b7-40ba-b8e5-d8d3227079fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.797 244018 DEBUG nova.compute.manager [req-5ed76d2a-09b8-4d18-b3fb-849841859212 req-3af544fc-74ff-4c60-bc57-8b9d0ad0fbff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Refreshing instance network info cache due to event network-changed-ca235479-72b7-40ba-b8e5-d8d3227079fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.797 244018 DEBUG oslo_concurrency.lockutils [req-5ed76d2a-09b8-4d18-b3fb-849841859212 req-3af544fc-74ff-4c60-bc57-8b9d0ad0fbff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-bb4e80d2-c200-4c24-8154-e1a86d06946b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.797 244018 DEBUG oslo_concurrency.lockutils [req-5ed76d2a-09b8-4d18-b3fb-849841859212 req-3af544fc-74ff-4c60-bc57-8b9d0ad0fbff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-bb4e80d2-c200-4c24-8154-e1a86d06946b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.797 244018 DEBUG nova.network.neutron [req-5ed76d2a-09b8-4d18-b3fb-849841859212 req-3af544fc-74ff-4c60-bc57-8b9d0ad0fbff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Refreshing network info cache for port ca235479-72b7-40ba-b8e5-d8d3227079fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:41:32 np0005629333 systemd[1]: var-lib-containers-storage-overlay-9154efe256526b705e9a04ebd83bdd23c810ae0e68e7dd559168601ee20caef6-merged.mount: Deactivated successfully.
Feb 25 07:41:32 np0005629333 podman[336298]: 2026-02-25 12:41:32.826097426 +0000 UTC m=+0.163614288 container remove 38e79cf349071c38b04accfa1354b50c7992663e147d405652c0ff64649caccc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_thompson, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
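
Annotation: the podman lines above trace one short-lived Ceph helper container (serene_thompson) through init, start, attach, died, and remove in under 150 ms. The same sequence can be replayed from podman's event log; a sketch assuming a recent podman, with JSON field names that may vary by version:

    import json
    import subprocess

    proc = subprocess.run(
        ["podman", "events", "--stream=false", "--since", "10m",
         "--filter", "container=serene_thompson",
         "--format", "{{json .}}"],
        capture_output=True, text=True, check=True)
    for line in proc.stdout.splitlines():
        ev = json.loads(line)
        # Expect the sequence journald recorded: init, start, attach, died, remove
        print(ev.get("Status"), ev.get("Name"), ev.get("Time"))
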
Feb 25 07:41:32 np0005629333 systemd[1]: libpod-conmon-38e79cf349071c38b04accfa1354b50c7992663e147d405652c0ff64649caccc.scope: Deactivated successfully.
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.841 244018 DEBUG oslo_concurrency.lockutils [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "bb4e80d2-c200-4c24-8154-e1a86d06946b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.841 244018 DEBUG oslo_concurrency.lockutils [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.841 244018 DEBUG oslo_concurrency.lockutils [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.842 244018 DEBUG oslo_concurrency.lockutils [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.842 244018 DEBUG oslo_concurrency.lockutils [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
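
Annotation: the Acquiring/acquired/released pairs above come from oslo.concurrency's named-lock helpers; the per-instance "-events" lock serializes event bookkeeping during teardown. The pattern in sketch form (the function body is illustrative, not nova's):

    from oslo_concurrency import lockutils

    # The decorator holds the named lock around the call, emitting the same
    # "Acquiring"/"acquired"/"released" DEBUG lines seen above.
    @lockutils.synchronized("bb4e80d2-c200-4c24-8154-e1a86d06946b-events")
    def _clear_events():
        pass  # runs with the lock held
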
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.843 244018 INFO nova.compute.manager [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Terminating instance#033[00m
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.844 244018 DEBUG nova.compute.manager [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:41:32 np0005629333 kernel: tapca235479-72 (unregistering): left promiscuous mode
Feb 25 07:41:32 np0005629333 NetworkManager[49836]: <info>  [1772023292.8954] device (tapca235479-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:41:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:32Z|01010|binding|INFO|Releasing lport ca235479-72b7-40ba-b8e5-d8d3227079fc from this chassis (sb_readonly=0)
Feb 25 07:41:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:32Z|01011|binding|INFO|Setting lport ca235479-72b7-40ba-b8e5-d8d3227079fc down in Southbound
Feb 25 07:41:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:32Z|01012|binding|INFO|Removing iface tapca235479-72 ovn-installed in OVS
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.903 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:32 np0005629333 nova_compute[244014]: 2026-02-25 12:41:32.908 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:32.911 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:65:4a 10.100.0.7'], port_security=['fa:16:3e:c8:65:4a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bb4e80d2-c200-4c24-8154-e1a86d06946b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2ffae040-354a-4380-b0b9-a74c45661bf1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf47c8ae-ac17-4dbe-9374-1ad52fb8ff7a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=861eb98d-f02a-4401-a60e-4c578f807c6b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ca235479-72b7-40ba-b8e5-d8d3227079fc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:41:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:32.913 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ca235479-72b7-40ba-b8e5-d8d3227079fc in datapath 2ffae040-354a-4380-b0b9-a74c45661bf1 unbound from our chassis#033[00m
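
Annotation: ovn-controller has just released the binding, and the agent matched the Southbound Port_Binding update (up=[False], chassis cleared). The same row can be inspected from the CLI; a sketch assuming ovn-sbctl on this host can reach the SB database and accepts the logical port name as the record:

    import subprocess

    lport = "ca235479-72b7-40ba-b8e5-d8d3227079fc"
    out = subprocess.check_output(
        ["ovn-sbctl", "--columns=up,chassis", "list", "Port_Binding", lport],
        text=True)
    # After the release logged above: up is false and chassis is empty.
    print(out)
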
Feb 25 07:41:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:32.915 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2ffae040-354a-4380-b0b9-a74c45661bf1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:41:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:32.917 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2188e67a-6109-498c-9327-42f350482c02]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:32.917 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1 namespace which is not needed anymore#033[00m
Feb 25 07:41:32 np0005629333 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000066.scope: Deactivated successfully.
Feb 25 07:41:32 np0005629333 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000066.scope: Consumed 12.207s CPU time.
Feb 25 07:41:32 np0005629333 systemd-machined[210048]: Machine qemu-129-instance-00000066 terminated.
Feb 25 07:41:32 np0005629333 podman[336360]: 2026-02-25 12:41:32.969783331 +0000 UTC m=+0.039441044 container create c0707c7ff97707f688760a5e431125252f96bb804c07394ce6f52fd0d708a2bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:41:33 np0005629333 systemd[1]: Started libpod-conmon-c0707c7ff97707f688760a5e431125252f96bb804c07394ce6f52fd0d708a2bc.scope.
Feb 25 07:41:33 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:41:33 np0005629333 neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1[334874]: [NOTICE]   (334881) : haproxy version is 2.8.14-c23fe91
Feb 25 07:41:33 np0005629333 neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1[334874]: [NOTICE]   (334881) : path to executable is /usr/sbin/haproxy
Feb 25 07:41:33 np0005629333 neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1[334874]: [WARNING]  (334881) : Exiting Master process...
Feb 25 07:41:33 np0005629333 neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1[334874]: [ALERT]    (334881) : Current worker (334901) exited with code 143 (Terminated)
Feb 25 07:41:33 np0005629333 neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1[334874]: [WARNING]  (334881) : All workers exited. Exiting... (0)
Feb 25 07:41:33 np0005629333 systemd[1]: libpod-73db30a6660878239719f63131246132729224544a65e2b2b08a344cd16cd5e1.scope: Deactivated successfully.
Feb 25 07:41:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e82a44d26d3e4e7146e87fa2df5c83ca244d4b02f26ffa1b2b628b362b0b59b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:41:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e82a44d26d3e4e7146e87fa2df5c83ca244d4b02f26ffa1b2b628b362b0b59b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:41:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e82a44d26d3e4e7146e87fa2df5c83ca244d4b02f26ffa1b2b628b362b0b59b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:41:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e82a44d26d3e4e7146e87fa2df5c83ca244d4b02f26ffa1b2b628b362b0b59b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:41:33 np0005629333 podman[336360]: 2026-02-25 12:41:32.953362168 +0000 UTC m=+0.023019901 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:41:33 np0005629333 podman[336395]: 2026-02-25 12:41:33.050580441 +0000 UTC m=+0.048850449 container died 73db30a6660878239719f63131246132729224544a65e2b2b08a344cd16cd5e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 07:41:33 np0005629333 nova_compute[244014]: 2026-02-25 12:41:33.061 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:33 np0005629333 nova_compute[244014]: 2026-02-25 12:41:33.066 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:33 np0005629333 podman[336360]: 2026-02-25 12:41:33.07853228 +0000 UTC m=+0.148190003 container init c0707c7ff97707f688760a5e431125252f96bb804c07394ce6f52fd0d708a2bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 07:41:33 np0005629333 podman[336360]: 2026-02-25 12:41:33.08667998 +0000 UTC m=+0.156337703 container start c0707c7ff97707f688760a5e431125252f96bb804c07394ce6f52fd0d708a2bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sammet, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:41:33 np0005629333 nova_compute[244014]: 2026-02-25 12:41:33.088 244018 INFO nova.virt.libvirt.driver [-] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Instance destroyed successfully.#033[00m
Feb 25 07:41:33 np0005629333 nova_compute[244014]: 2026-02-25 12:41:33.090 244018 DEBUG nova.objects.instance [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'resources' on Instance uuid bb4e80d2-c200-4c24-8154-e1a86d06946b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:41:33 np0005629333 podman[336360]: 2026-02-25 12:41:33.091705202 +0000 UTC m=+0.161362935 container attach c0707c7ff97707f688760a5e431125252f96bb804c07394ce6f52fd0d708a2bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sammet, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 07:41:33 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-73db30a6660878239719f63131246132729224544a65e2b2b08a344cd16cd5e1-userdata-shm.mount: Deactivated successfully.
Feb 25 07:41:33 np0005629333 nova_compute[244014]: 2026-02-25 12:41:33.105 244018 DEBUG nova.virt.libvirt.vif [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:40:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-262633266',display_name='tempest-TestNetworkAdvancedServerOps-server-262633266',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-262633266',id=102,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHLLfY/ZYTp8zkjISi8dRav/XikNBwgTA0veZE/C8b/WfaOQQWjNxzsgI+O1rEE3S7jO5qPjb3pQG3PNH0tfGcXeLebTLFJlD+G8jxg8kTGDYVpT+gC8btxGaaq2vWYzmA==',key_name='tempest-TestNetworkAdvancedServerOps-1175251533',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:41:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-uqls6dt7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:41:29Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=bb4e80d2-c200-4c24-8154-e1a86d06946b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "address": "fa:16:3e:c8:65:4a", "network": {"id": "2ffae040-354a-4380-b0b9-a74c45661bf1", "bridge": "br-int", "label": "tempest-network-smoke--1173854570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca235479-72", "ovs_interfaceid": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:41:33 np0005629333 nova_compute[244014]: 2026-02-25 12:41:33.105 244018 DEBUG nova.network.os_vif_util [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "address": "fa:16:3e:c8:65:4a", "network": {"id": "2ffae040-354a-4380-b0b9-a74c45661bf1", "bridge": "br-int", "label": "tempest-network-smoke--1173854570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca235479-72", "ovs_interfaceid": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:41:33 np0005629333 nova_compute[244014]: 2026-02-25 12:41:33.106 244018 DEBUG nova.network.os_vif_util [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c8:65:4a,bridge_name='br-int',has_traffic_filtering=True,id=ca235479-72b7-40ba-b8e5-d8d3227079fc,network=Network(2ffae040-354a-4380-b0b9-a74c45661bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca235479-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:41:33 np0005629333 nova_compute[244014]: 2026-02-25 12:41:33.106 244018 DEBUG os_vif [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c8:65:4a,bridge_name='br-int',has_traffic_filtering=True,id=ca235479-72b7-40ba-b8e5-d8d3227079fc,network=Network(2ffae040-354a-4380-b0b9-a74c45661bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca235479-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:41:33 np0005629333 nova_compute[244014]: 2026-02-25 12:41:33.108 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:33 np0005629333 nova_compute[244014]: 2026-02-25 12:41:33.108 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca235479-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:33 np0005629333 nova_compute[244014]: 2026-02-25 12:41:33.109 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:33 np0005629333 nova_compute[244014]: 2026-02-25 12:41:33.111 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:33 np0005629333 nova_compute[244014]: 2026-02-25 12:41:33.116 244018 INFO os_vif [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c8:65:4a,bridge_name='br-int',has_traffic_filtering=True,id=ca235479-72b7-40ba-b8e5-d8d3227079fc,network=Network(2ffae040-354a-4380-b0b9-a74c45661bf1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca235479-72')#033[00m
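
Annotation: the unplug above goes through ovsdbapp's DelPortCommand on nova's own OVSDB connection (the txn a few lines up). For illustration only, not nova's code path, the CLI equivalent is a single idempotent del-port:

    import subprocess

    # Same effect as the DelPortCommand transaction logged above;
    # --if-exists makes the delete a no-op if the port is already gone.
    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tapca235479-72"],
        check=True)
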
Feb 25 07:41:33 np0005629333 podman[336395]: 2026-02-25 12:41:33.123794567 +0000 UTC m=+0.122064575 container cleanup 73db30a6660878239719f63131246132729224544a65e2b2b08a344cd16cd5e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 25 07:41:33 np0005629333 systemd[1]: libpod-conmon-73db30a6660878239719f63131246132729224544a65e2b2b08a344cd16cd5e1.scope: Deactivated successfully.
Feb 25 07:41:33 np0005629333 podman[336455]: 2026-02-25 12:41:33.197744994 +0000 UTC m=+0.053581963 container remove 73db30a6660878239719f63131246132729224544a65e2b2b08a344cd16cd5e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 07:41:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:33.202 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f80911fd-07ed-44a5-9f76-5f5bf94f1aa7]: (4, ('Wed Feb 25 12:41:32 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1 (73db30a6660878239719f63131246132729224544a65e2b2b08a344cd16cd5e1)\n73db30a6660878239719f63131246132729224544a65e2b2b08a344cd16cd5e1\nWed Feb 25 12:41:33 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1 (73db30a6660878239719f63131246132729224544a65e2b2b08a344cd16cd5e1)\n73db30a6660878239719f63131246132729224544a65e2b2b08a344cd16cd5e1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:33.204 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[26499fac-ae85-4627-9bc1-931ad6f7c36e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:33.205 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ffae040-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:33 np0005629333 kernel: tap2ffae040-30: left promiscuous mode
Feb 25 07:41:33 np0005629333 nova_compute[244014]: 2026-02-25 12:41:33.209 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:33 np0005629333 nova_compute[244014]: 2026-02-25 12:41:33.215 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:33.215 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d25c6ed9-8115-4907-9c79-70c7931c274c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:33.231 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e751454d-bfce-4c94-aa9b-0ef5941eab06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:33.232 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b6da43-6cab-41e4-95d7-5b6680c1b97c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:33.250 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2c222a98-5b02-48c5-900d-5e0e603ee113]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 523364, 'reachable_time': 37856, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336471, 'error': None, 'target': 'ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
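
Annotation: the privsep reply above is a pyroute2-style RTM_NEWLINK message: attributes arrive as ('IFLA_*', value) pairs under 'attrs'. A small helper for pulling one out, matching the structure as logged:

    # Fetch a named attribute from a message shaped like the reply above.
    def get_attr(msg, name):
        for key, value in msg.get("attrs", []):
            if key == name:
                return value

    # e.g. get_attr(link_msg, "IFLA_IFNAME") == "lo" for the message above
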
Feb 25 07:41:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:33.253 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2ffae040-354a-4380-b0b9-a74c45661bf1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:41:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:33.253 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[227f6246-7fb9-4b7d-a44a-d0aa84996126]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:33 np0005629333 systemd[1]: var-lib-containers-storage-overlay-308c3469f7246fcc17ea83f23367abb6b8fd53a71b962f6f2bf6602aad8594fe-merged.mount: Deactivated successfully.
Feb 25 07:41:33 np0005629333 systemd[1]: run-netns-ovnmeta\x2d2ffae040\x2d354a\x2d4380\x2db0b9\x2da74c45661bf1.mount: Deactivated successfully.
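
Annotation: with the namespace deleted and its run-netns mount gone, no ovnmeta-* namespace should remain for this network. A quick host-side check:

    import subprocess

    # List any surviving OVN metadata namespaces after the cleanup above.
    out = subprocess.check_output(["ip", "netns", "list"], text=True)
    print([line.split()[0] for line in out.splitlines()
           if line.startswith("ovnmeta-")])
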
Feb 25 07:41:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:41:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:41:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1252519309' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:41:33 np0005629333 nova_compute[244014]: 2026-02-25 12:41:33.386 244018 DEBUG oslo_concurrency.processutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:41:33 np0005629333 nova_compute[244014]: 2026-02-25 12:41:33.412 244018 DEBUG nova.storage.rbd_utils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image aeced0b2-f2b3-4012-b740-eaa411f99631_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:33 np0005629333 nova_compute[244014]: 2026-02-25 12:41:33.426 244018 DEBUG oslo_concurrency.processutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:33 np0005629333 nova_compute[244014]: 2026-02-25 12:41:33.595 244018 INFO nova.virt.libvirt.driver [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Deleting instance files /var/lib/nova/instances/bb4e80d2-c200-4c24-8154-e1a86d06946b_del#033[00m
Feb 25 07:41:33 np0005629333 nova_compute[244014]: 2026-02-25 12:41:33.596 244018 INFO nova.virt.libvirt.driver [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Deletion of /var/lib/nova/instances/bb4e80d2-c200-4c24-8154-e1a86d06946b_del complete#033[00m
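
Annotation: the _del suffix in the two lines above reflects the libvirt driver's cleanup pattern: the instance directory is renamed aside first and then removed, so an interrupted delete leaves an unambiguous leftover rather than a half-removed live directory. In sketch form (path from the log; the actual implementation lives in nova/virt/libvirt/driver.py):

    import os
    import shutil

    inst_dir = "/var/lib/nova/instances/bb4e80d2-c200-4c24-8154-e1a86d06946b"
    target = inst_dir + "_del"
    os.rename(inst_dir, target)                 # atomic move aside on one filesystem
    shutil.rmtree(target, ignore_errors=True)   # then delete; safe to re-run
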
Feb 25 07:41:33 np0005629333 lvm[336583]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:41:33 np0005629333 lvm[336583]: VG ceph_vg0 finished
Feb 25 07:41:33 np0005629333 nova_compute[244014]: 2026-02-25 12:41:33.674 244018 INFO nova.compute.manager [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:41:33 np0005629333 nova_compute[244014]: 2026-02-25 12:41:33.674 244018 DEBUG oslo.service.loopingcall [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:41:33 np0005629333 nova_compute[244014]: 2026-02-25 12:41:33.675 244018 DEBUG nova.compute.manager [-] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:41:33 np0005629333 nova_compute[244014]: 2026-02-25 12:41:33.675 244018 DEBUG nova.network.neutron [-] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:41:33 np0005629333 lvm[336585]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:41:33 np0005629333 lvm[336585]: VG ceph_vg1 finished
Feb 25 07:41:33 np0005629333 lvm[336586]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:41:33 np0005629333 lvm[336586]: VG ceph_vg2 finished
Feb 25 07:41:33 np0005629333 competent_sammet[336408]: {}
Feb 25 07:41:33 np0005629333 systemd[1]: libpod-c0707c7ff97707f688760a5e431125252f96bb804c07394ce6f52fd0d708a2bc.scope: Deactivated successfully.
Feb 25 07:41:33 np0005629333 systemd[1]: libpod-c0707c7ff97707f688760a5e431125252f96bb804c07394ce6f52fd0d708a2bc.scope: Consumed 1.029s CPU time.
Feb 25 07:41:33 np0005629333 conmon[336408]: conmon c0707c7ff97707f68876 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c0707c7ff97707f688760a5e431125252f96bb804c07394ce6f52fd0d708a2bc.scope/container/memory.events
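
Annotation: the conmon <nwarn> above is a benign race: it tries to read the dead scope's cgroup v2 memory.events file (to report OOM kills) after systemd has already torn the scope down. What that read looks like, with a hypothetical scope path:

    # Read OOM-kill counters from a container scope's cgroup, as conmon does.
    path = ("/sys/fs/cgroup/machine.slice/"
            "libpod-<container-id>.scope/container/memory.events")  # hypothetical
    try:
        with open(path) as f:
            events = dict(line.split() for line in f)
        print("oom_kill:", events.get("oom_kill", "0"))
    except FileNotFoundError:
        print("scope already gone")  # the exact race logged above
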
Feb 25 07:41:33 np0005629333 podman[336360]: 2026-02-25 12:41:33.841584113 +0000 UTC m=+0.911241826 container died c0707c7ff97707f688760a5e431125252f96bb804c07394ce6f52fd0d708a2bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sammet, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:41:33 np0005629333 nova_compute[244014]: 2026-02-25 12:41:33.873 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:41:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:41:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2380833841' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.007 244018 DEBUG oslo_concurrency.processutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.008 244018 DEBUG nova.virt.libvirt.vif [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:41:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-131630496',display_name='tempest-ServersTestJSON-server-131630496',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-131630496',id=105,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-9s80l7z0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:41:29Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=aeced0b2-f2b3-4012-b740-eaa411f99631,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8627fb46-e938-4a67-9067-fc60794fca05", "address": "fa:16:3e:9d:4f:fa", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8627fb46-e9", "ovs_interfaceid": "8627fb46-e938-4a67-9067-fc60794fca05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.009 244018 DEBUG nova.network.os_vif_util [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "8627fb46-e938-4a67-9067-fc60794fca05", "address": "fa:16:3e:9d:4f:fa", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8627fb46-e9", "ovs_interfaceid": "8627fb46-e938-4a67-9067-fc60794fca05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.010 244018 DEBUG nova.network.os_vif_util [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:4f:fa,bridge_name='br-int',has_traffic_filtering=True,id=8627fb46-e938-4a67-9067-fc60794fca05,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8627fb46-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.011 244018 DEBUG nova.objects.instance [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'pci_devices' on Instance uuid aeced0b2-f2b3-4012-b740-eaa411f99631 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.028 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:41:34 np0005629333 nova_compute[244014]:  <uuid>aeced0b2-f2b3-4012-b740-eaa411f99631</uuid>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:  <name>instance-00000069</name>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServersTestJSON-server-131630496</nova:name>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:41:32</nova:creationTime>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:41:34 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:        <nova:user uuid="4c5bc24b5f5048469cf3f701ce511bfa">tempest-ServersTestJSON-1472039551-project-member</nova:user>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:        <nova:project uuid="503e879cd1f44a16b9baef106ceba949">tempest-ServersTestJSON-1472039551</nova:project>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:        <nova:port uuid="8627fb46-e938-4a67-9067-fc60794fca05">
Feb 25 07:41:34 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <entry name="serial">aeced0b2-f2b3-4012-b740-eaa411f99631</entry>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <entry name="uuid">aeced0b2-f2b3-4012-b740-eaa411f99631</entry>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/aeced0b2-f2b3-4012-b740-eaa411f99631_disk">
Feb 25 07:41:34 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:41:34 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/aeced0b2-f2b3-4012-b740-eaa411f99631_disk.config">
Feb 25 07:41:34 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:41:34 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:9d:4f:fa"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <target dev="tap8627fb46-e9"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/aeced0b2-f2b3-4012-b740-eaa411f99631/console.log" append="off"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:41:34 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:41:34 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:41:34 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:41:34 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.030 244018 DEBUG nova.compute.manager [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Preparing to wait for external event network-vif-plugged-8627fb46-e938-4a67-9067-fc60794fca05 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.030 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.030 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.030 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.031 244018 DEBUG nova.virt.libvirt.vif [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:41:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-131630496',display_name='tempest-ServersTestJSON-server-131630496',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-131630496',id=105,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-9s80l7z0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:41:29Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=aeced0b2-f2b3-4012-b740-eaa411f99631,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8627fb46-e938-4a67-9067-fc60794fca05", "address": "fa:16:3e:9d:4f:fa", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8627fb46-e9", "ovs_interfaceid": "8627fb46-e938-4a67-9067-fc60794fca05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.031 244018 DEBUG nova.network.os_vif_util [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "8627fb46-e938-4a67-9067-fc60794fca05", "address": "fa:16:3e:9d:4f:fa", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8627fb46-e9", "ovs_interfaceid": "8627fb46-e938-4a67-9067-fc60794fca05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.032 244018 DEBUG nova.network.os_vif_util [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:4f:fa,bridge_name='br-int',has_traffic_filtering=True,id=8627fb46-e938-4a67-9067-fc60794fca05,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8627fb46-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.032 244018 DEBUG os_vif [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:4f:fa,bridge_name='br-int',has_traffic_filtering=True,id=8627fb46-e938-4a67-9067-fc60794fca05,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8627fb46-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.033 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.033 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.034 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.037 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.037 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8627fb46-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.037 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8627fb46-e9, col_values=(('external_ids', {'iface-id': '8627fb46-e938-4a67-9067-fc60794fca05', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:4f:fa', 'vm-uuid': 'aeced0b2-f2b3-4012-b740-eaa411f99631'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.039 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:41:34 np0005629333 NetworkManager[49836]: <info>  [1772023294.0398] manager: (tap8627fb46-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/421)
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.041 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.044 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.045 244018 INFO os_vif [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:4f:fa,bridge_name='br-int',has_traffic_filtering=True,id=8627fb46-e938-4a67-9067-fc60794fca05,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8627fb46-e9')
Feb 25 07:41:34 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e82a44d26d3e4e7146e87fa2df5c83ca244d4b02f26ffa1b2b628b362b0b59b9-merged.mount: Deactivated successfully.
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.116 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.116 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.116 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No VIF found with MAC fa:16:3e:9d:4f:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.117 244018 INFO nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Using config drive
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.156 244018 DEBUG nova.storage.rbd_utils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image aeced0b2-f2b3-4012-b740-eaa411f99631_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:41:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 305 active+clean; 437 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 384 KiB/s rd, 3.9 MiB/s wr, 152 op/s
Feb 25 07:41:34 np0005629333 podman[336589]: 2026-02-25 12:41:34.315049185 +0000 UTC m=+0.461460474 container remove c0707c7ff97707f688760a5e431125252f96bb804c07394ce6f52fd0d708a2bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sammet, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:41:34 np0005629333 systemd[1]: libpod-conmon-c0707c7ff97707f688760a5e431125252f96bb804c07394ce6f52fd0d708a2bc.scope: Deactivated successfully.
Feb 25 07:41:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:41:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame'
Feb 25 07:41:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.523 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:41:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:34.525 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:41:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:34.526 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 07:41:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:34.527 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:41:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame'
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.565 244018 DEBUG nova.network.neutron [-] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.590 244018 INFO nova.compute.manager [-] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Took 0.91 seconds to deallocate network for instance.
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.663 244018 DEBUG oslo_concurrency.lockutils [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.664 244018 DEBUG oslo_concurrency.lockutils [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.832 244018 DEBUG oslo_concurrency.processutils [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.911 244018 DEBUG nova.compute.manager [req-50097e69-73d1-4b31-b61f-6ca97d283393 req-af608d68-0544-46e8-b902-dca532010558 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Received event network-vif-unplugged-ca235479-72b7-40ba-b8e5-d8d3227079fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.911 244018 DEBUG oslo_concurrency.lockutils [req-50097e69-73d1-4b31-b61f-6ca97d283393 req-af608d68-0544-46e8-b902-dca532010558 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.912 244018 DEBUG oslo_concurrency.lockutils [req-50097e69-73d1-4b31-b61f-6ca97d283393 req-af608d68-0544-46e8-b902-dca532010558 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.912 244018 DEBUG oslo_concurrency.lockutils [req-50097e69-73d1-4b31-b61f-6ca97d283393 req-af608d68-0544-46e8-b902-dca532010558 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.912 244018 DEBUG nova.compute.manager [req-50097e69-73d1-4b31-b61f-6ca97d283393 req-af608d68-0544-46e8-b902-dca532010558 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] No waiting events found dispatching network-vif-unplugged-ca235479-72b7-40ba-b8e5-d8d3227079fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.912 244018 WARNING nova.compute.manager [req-50097e69-73d1-4b31-b61f-6ca97d283393 req-af608d68-0544-46e8-b902-dca532010558 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Received unexpected event network-vif-unplugged-ca235479-72b7-40ba-b8e5-d8d3227079fc for instance with vm_state deleted and task_state None.
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.913 244018 DEBUG nova.compute.manager [req-50097e69-73d1-4b31-b61f-6ca97d283393 req-af608d68-0544-46e8-b902-dca532010558 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Received event network-vif-plugged-ca235479-72b7-40ba-b8e5-d8d3227079fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.913 244018 DEBUG oslo_concurrency.lockutils [req-50097e69-73d1-4b31-b61f-6ca97d283393 req-af608d68-0544-46e8-b902-dca532010558 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.913 244018 DEBUG oslo_concurrency.lockutils [req-50097e69-73d1-4b31-b61f-6ca97d283393 req-af608d68-0544-46e8-b902-dca532010558 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.913 244018 DEBUG oslo_concurrency.lockutils [req-50097e69-73d1-4b31-b61f-6ca97d283393 req-af608d68-0544-46e8-b902-dca532010558 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.914 244018 DEBUG nova.compute.manager [req-50097e69-73d1-4b31-b61f-6ca97d283393 req-af608d68-0544-46e8-b902-dca532010558 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] No waiting events found dispatching network-vif-plugged-ca235479-72b7-40ba-b8e5-d8d3227079fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.914 244018 WARNING nova.compute.manager [req-50097e69-73d1-4b31-b61f-6ca97d283393 req-af608d68-0544-46e8-b902-dca532010558 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Received unexpected event network-vif-plugged-ca235479-72b7-40ba-b8e5-d8d3227079fc for instance with vm_state deleted and task_state None.
Feb 25 07:41:34 np0005629333 nova_compute[244014]: 2026-02-25 12:41:34.914 244018 DEBUG nova.compute.manager [req-50097e69-73d1-4b31-b61f-6ca97d283393 req-af608d68-0544-46e8-b902-dca532010558 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Received event network-vif-deleted-ca235479-72b7-40ba-b8e5-d8d3227079fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:41:35 np0005629333 nova_compute[244014]: 2026-02-25 12:41:35.059 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:41:35 np0005629333 nova_compute[244014]: 2026-02-25 12:41:35.077 244018 INFO nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Creating config drive at /var/lib/nova/instances/aeced0b2-f2b3-4012-b740-eaa411f99631/disk.config
Feb 25 07:41:35 np0005629333 nova_compute[244014]: 2026-02-25 12:41:35.080 244018 DEBUG oslo_concurrency.processutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aeced0b2-f2b3-4012-b740-eaa411f99631/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpc_4e8tzp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:41:35 np0005629333 nova_compute[244014]: 2026-02-25 12:41:35.119 244018 DEBUG nova.network.neutron [req-5ed76d2a-09b8-4d18-b3fb-849841859212 req-3af544fc-74ff-4c60-bc57-8b9d0ad0fbff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Updated VIF entry in instance network info cache for port ca235479-72b7-40ba-b8e5-d8d3227079fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 07:41:35 np0005629333 nova_compute[244014]: 2026-02-25 12:41:35.120 244018 DEBUG nova.network.neutron [req-5ed76d2a-09b8-4d18-b3fb-849841859212 req-3af544fc-74ff-4c60-bc57-8b9d0ad0fbff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Updating instance_info_cache with network_info: [{"id": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "address": "fa:16:3e:c8:65:4a", "network": {"id": "2ffae040-354a-4380-b0b9-a74c45661bf1", "bridge": "br-int", "label": "tempest-network-smoke--1173854570", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca235479-72", "ovs_interfaceid": "ca235479-72b7-40ba-b8e5-d8d3227079fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:41:35 np0005629333 nova_compute[244014]: 2026-02-25 12:41:35.123 244018 DEBUG nova.network.neutron [req-e3bc7dfa-6550-443c-a86a-541ee006201a req-f26ad408-d75f-40a2-b16e-684c94b009bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Updated VIF entry in instance network info cache for port 8627fb46-e938-4a67-9067-fc60794fca05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 07:41:35 np0005629333 nova_compute[244014]: 2026-02-25 12:41:35.123 244018 DEBUG nova.network.neutron [req-e3bc7dfa-6550-443c-a86a-541ee006201a req-f26ad408-d75f-40a2-b16e-684c94b009bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Updating instance_info_cache with network_info: [{"id": "8627fb46-e938-4a67-9067-fc60794fca05", "address": "fa:16:3e:9d:4f:fa", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8627fb46-e9", "ovs_interfaceid": "8627fb46-e938-4a67-9067-fc60794fca05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:41:35 np0005629333 nova_compute[244014]: 2026-02-25 12:41:35.167 244018 DEBUG oslo_concurrency.lockutils [req-e3bc7dfa-6550-443c-a86a-541ee006201a req-f26ad408-d75f-40a2-b16e-684c94b009bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-aeced0b2-f2b3-4012-b740-eaa411f99631" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:41:35 np0005629333 nova_compute[244014]: 2026-02-25 12:41:35.169 244018 DEBUG oslo_concurrency.lockutils [req-5ed76d2a-09b8-4d18-b3fb-849841859212 req-3af544fc-74ff-4c60-bc57-8b9d0ad0fbff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-bb4e80d2-c200-4c24-8154-e1a86d06946b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:41:35 np0005629333 nova_compute[244014]: 2026-02-25 12:41:35.225 244018 DEBUG oslo_concurrency.processutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aeced0b2-f2b3-4012-b740-eaa411f99631/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpc_4e8tzp" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:41:35 np0005629333 nova_compute[244014]: 2026-02-25 12:41:35.252 244018 DEBUG nova.storage.rbd_utils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image aeced0b2-f2b3-4012-b740-eaa411f99631_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:41:35 np0005629333 nova_compute[244014]: 2026-02-25 12:41:35.258 244018 DEBUG oslo_concurrency.processutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aeced0b2-f2b3-4012-b740-eaa411f99631/disk.config aeced0b2-f2b3-4012-b740-eaa411f99631_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:41:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:41:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1529161144' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:41:35 np0005629333 nova_compute[244014]: 2026-02-25 12:41:35.406 244018 DEBUG oslo_concurrency.processutils [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:41:35 np0005629333 nova_compute[244014]: 2026-02-25 12:41:35.411 244018 DEBUG nova.compute.provider_tree [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:41:35 np0005629333 nova_compute[244014]: 2026-02-25 12:41:35.433 244018 DEBUG nova.scheduler.client.report [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:41:35 np0005629333 nova_compute[244014]: 2026-02-25 12:41:35.468 244018 DEBUG oslo_concurrency.lockutils [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:41:35 np0005629333 nova_compute[244014]: 2026-02-25 12:41:35.508 244018 INFO nova.scheduler.client.report [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Deleted allocations for instance bb4e80d2-c200-4c24-8154-e1a86d06946b
Feb 25 07:41:35 np0005629333 nova_compute[244014]: 2026-02-25 12:41:35.622 244018 DEBUG oslo_concurrency.lockutils [None req-3384db15-335d-4d62-b7d5-65cc9caca0ed fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "bb4e80d2-c200-4c24-8154-e1a86d06946b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:41:35 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame'
Feb 25 07:41:35 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame'
Feb 25 07:41:35 np0005629333 nova_compute[244014]: 2026-02-25 12:41:35.797 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023280.7955923, 4717553a-9bb9-4278-b610-7c446db3f8d9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:41:35 np0005629333 nova_compute[244014]: 2026-02-25 12:41:35.797 244018 INFO nova.compute.manager [-] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] VM Stopped (Lifecycle Event)
Feb 25 07:41:35 np0005629333 nova_compute[244014]: 2026-02-25 12:41:35.816 244018 DEBUG nova.compute.manager [None req-f70d302d-efb6-4c23-93ef-008f73191e0a - - - - - -] [instance: 4717553a-9bb9-4278-b610-7c446db3f8d9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:41:35 np0005629333 nova_compute[244014]: 2026-02-25 12:41:35.957 244018 DEBUG oslo_concurrency.processutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aeced0b2-f2b3-4012-b740-eaa411f99631/disk.config aeced0b2-f2b3-4012-b740-eaa411f99631_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.700s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:41:35 np0005629333 nova_compute[244014]: 2026-02-25 12:41:35.958 244018 INFO nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Deleting local config drive /var/lib/nova/instances/aeced0b2-f2b3-4012-b740-eaa411f99631/disk.config because it was imported into RBD.
Feb 25 07:41:35 np0005629333 kernel: tap8627fb46-e9: entered promiscuous mode
Feb 25 07:41:35 np0005629333 NetworkManager[49836]: <info>  [1772023295.9982] manager: (tap8627fb46-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/422)
Feb 25 07:41:36 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:35Z|01013|binding|INFO|Claiming lport 8627fb46-e938-4a67-9067-fc60794fca05 for this chassis.
Feb 25 07:41:36 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:35Z|01014|binding|INFO|8627fb46-e938-4a67-9067-fc60794fca05: Claiming fa:16:3e:9d:4f:fa 10.100.0.13
Feb 25 07:41:36 np0005629333 nova_compute[244014]: 2026-02-25 12:41:36.000 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:41:36 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:36Z|01015|binding|INFO|Setting lport 8627fb46-e938-4a67-9067-fc60794fca05 ovn-installed in OVS
Feb 25 07:41:36 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:36Z|01016|binding|INFO|Setting lport 8627fb46-e938-4a67-9067-fc60794fca05 up in Southbound
Feb 25 07:41:36 np0005629333 nova_compute[244014]: 2026-02-25 12:41:36.007 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:41:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:36.007 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:4f:fa 10.100.0.13'], port_security=['fa:16:3e:9d:4f:fa 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'aeced0b2-f2b3-4012-b740-eaa411f99631', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec8bae53-fe6a-49d1-a733-f00c198be561', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503e879cd1f44a16b9baef106ceba949', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3bf34285-1a67-4c95-bb68-fd577a012f6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18f4e8da-4409-4095-9850-aaee82dd8fd1, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=8627fb46-e938-4a67-9067-fc60794fca05) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:41:36 np0005629333 nova_compute[244014]: 2026-02-25 12:41:36.008 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:41:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:36.009 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 8627fb46-e938-4a67-9067-fc60794fca05 in datapath ec8bae53-fe6a-49d1-a733-f00c198be561 bound to our chassis
Feb 25 07:41:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:36.010 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec8bae53-fe6a-49d1-a733-f00c198be561
Feb 25 07:41:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:36.020 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f95cfd77-bdfa-4a00-a16d-753e5cbc784c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:41:36 np0005629333 systemd-machined[210048]: New machine qemu-132-instance-00000069.
Feb 25 07:41:36 np0005629333 systemd[1]: Started Virtual Machine qemu-132-instance-00000069.
Feb 25 07:41:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:36.041 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3d494c14-e53a-495b-a3c5-b887265cdf51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:41:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:36.043 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f0e683d9-ab14-4817-b49d-8fdc88fd99a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:41:36 np0005629333 systemd-udevd[336731]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:41:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:36.063 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9fdfbc2a-5624-4ea0-9370-1544c247fb30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:36 np0005629333 NetworkManager[49836]: <info>  [1772023296.0723] device (tap8627fb46-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:41:36 np0005629333 NetworkManager[49836]: <info>  [1772023296.0729] device (tap8627fb46-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:41:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:36.079 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[adb76595-12cf-4497-8c0d-47f04f046707]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec8bae53-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a5:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 21, 'rx_bytes': 700, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 21, 'rx_bytes': 700, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516532, 'reachable_time': 18880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336733, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:36.092 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[976c7dce-5115-4f41-99f6-f892885bf318]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516546, 'tstamp': 516546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336738, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516549, 'tstamp': 516549}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336738, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:36.094 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8bae53-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:36 np0005629333 nova_compute[244014]: 2026-02-25 12:41:36.095 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:36 np0005629333 nova_compute[244014]: 2026-02-25 12:41:36.097 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:36.097 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec8bae53-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:36.097 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:41:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:36.098 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec8bae53-f0, col_values=(('external_ids', {'iface-id': 'e2d1eadf-baf7-4e5c-8052-6c64e8476a26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:36.098 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:41:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 305 active+clean; 437 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 370 KiB/s rd, 3.9 MiB/s wr, 132 op/s
Feb 25 07:41:36 np0005629333 nova_compute[244014]: 2026-02-25 12:41:36.743 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023296.7430825, aeced0b2-f2b3-4012-b740-eaa411f99631 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:41:36 np0005629333 nova_compute[244014]: 2026-02-25 12:41:36.744 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] VM Started (Lifecycle Event)#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.136 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.140 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023296.7463038, aeced0b2-f2b3-4012-b740-eaa411f99631 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.141 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.195 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.200 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.228 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.240 244018 DEBUG nova.compute.manager [req-e7756298-cf22-4832-8c8d-1cd569bcb607 req-5f4e9ebb-2bd7-4284-a053-a2ccb815e963 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Received event network-vif-plugged-8627fb46-e938-4a67-9067-fc60794fca05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.240 244018 DEBUG oslo_concurrency.lockutils [req-e7756298-cf22-4832-8c8d-1cd569bcb607 req-5f4e9ebb-2bd7-4284-a053-a2ccb815e963 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.241 244018 DEBUG oslo_concurrency.lockutils [req-e7756298-cf22-4832-8c8d-1cd569bcb607 req-5f4e9ebb-2bd7-4284-a053-a2ccb815e963 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.241 244018 DEBUG oslo_concurrency.lockutils [req-e7756298-cf22-4832-8c8d-1cd569bcb607 req-5f4e9ebb-2bd7-4284-a053-a2ccb815e963 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.242 244018 DEBUG nova.compute.manager [req-e7756298-cf22-4832-8c8d-1cd569bcb607 req-5f4e9ebb-2bd7-4284-a053-a2ccb815e963 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Processing event network-vif-plugged-8627fb46-e938-4a67-9067-fc60794fca05 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.242 244018 DEBUG nova.compute.manager [req-e7756298-cf22-4832-8c8d-1cd569bcb607 req-5f4e9ebb-2bd7-4284-a053-a2ccb815e963 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Received event network-vif-plugged-8627fb46-e938-4a67-9067-fc60794fca05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.243 244018 DEBUG oslo_concurrency.lockutils [req-e7756298-cf22-4832-8c8d-1cd569bcb607 req-5f4e9ebb-2bd7-4284-a053-a2ccb815e963 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.243 244018 DEBUG oslo_concurrency.lockutils [req-e7756298-cf22-4832-8c8d-1cd569bcb607 req-5f4e9ebb-2bd7-4284-a053-a2ccb815e963 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.243 244018 DEBUG oslo_concurrency.lockutils [req-e7756298-cf22-4832-8c8d-1cd569bcb607 req-5f4e9ebb-2bd7-4284-a053-a2ccb815e963 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.244 244018 DEBUG nova.compute.manager [req-e7756298-cf22-4832-8c8d-1cd569bcb607 req-5f4e9ebb-2bd7-4284-a053-a2ccb815e963 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] No waiting events found dispatching network-vif-plugged-8627fb46-e938-4a67-9067-fc60794fca05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.245 244018 WARNING nova.compute.manager [req-e7756298-cf22-4832-8c8d-1cd569bcb607 req-5f4e9ebb-2bd7-4284-a053-a2ccb815e963 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Received unexpected event network-vif-plugged-8627fb46-e938-4a67-9067-fc60794fca05 for instance with vm_state building and task_state spawning.#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.246 244018 DEBUG nova.compute.manager [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.250 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.252 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023297.25038, aeced0b2-f2b3-4012-b740-eaa411f99631 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.252 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.257 244018 INFO nova.virt.libvirt.driver [-] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Instance spawned successfully.#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.258 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.280 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.288 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.289 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.291 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.291 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.292 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.293 244018 DEBUG nova.virt.libvirt.driver [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.301 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.334 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.361 244018 INFO nova.compute.manager [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Took 7.42 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.362 244018 DEBUG nova.compute.manager [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.481 244018 INFO nova.compute.manager [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Took 8.71 seconds to build instance.#033[00m
Feb 25 07:41:37 np0005629333 nova_compute[244014]: 2026-02-25 12:41:37.511 244018 DEBUG oslo_concurrency.lockutils [None req-1420dd74-4461-47e1-8a71-0d22290d9bcc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 397 KiB/s rd, 3.9 MiB/s wr, 171 op/s
Feb 25 07:41:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:41:39 np0005629333 nova_compute[244014]: 2026-02-25 12:41:39.040 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:40 np0005629333 nova_compute[244014]: 2026-02-25 12:41:40.061 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 1.9 MiB/s wr, 76 op/s
Feb 25 07:41:40 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:40Z|01017|binding|INFO|Releasing lport e2d1eadf-baf7-4e5c-8052-6c64e8476a26 from this chassis (sb_readonly=0)
Feb 25 07:41:40 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:40Z|01018|binding|INFO|Releasing lport 895bc3cc-c38d-425b-b005-1acb3139bbee from this chassis (sb_readonly=0)
Feb 25 07:41:41 np0005629333 nova_compute[244014]: 2026-02-25 12:41:41.008 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:41 np0005629333 nova_compute[244014]: 2026-02-25 12:41:41.757 244018 DEBUG oslo_concurrency.lockutils [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "aeced0b2-f2b3-4012-b740-eaa411f99631" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:41 np0005629333 nova_compute[244014]: 2026-02-25 12:41:41.758 244018 DEBUG oslo_concurrency.lockutils [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:41 np0005629333 nova_compute[244014]: 2026-02-25 12:41:41.759 244018 DEBUG oslo_concurrency.lockutils [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:41 np0005629333 nova_compute[244014]: 2026-02-25 12:41:41.759 244018 DEBUG oslo_concurrency.lockutils [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:41 np0005629333 nova_compute[244014]: 2026-02-25 12:41:41.759 244018 DEBUG oslo_concurrency.lockutils [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:41 np0005629333 nova_compute[244014]: 2026-02-25 12:41:41.760 244018 INFO nova.compute.manager [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Terminating instance#033[00m
Feb 25 07:41:41 np0005629333 nova_compute[244014]: 2026-02-25 12:41:41.762 244018 DEBUG nova.compute.manager [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:41:41 np0005629333 kernel: tap8627fb46-e9 (unregistering): left promiscuous mode
Feb 25 07:41:41 np0005629333 NetworkManager[49836]: <info>  [1772023301.9090] device (tap8627fb46-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:41:41 np0005629333 nova_compute[244014]: 2026-02-25 12:41:41.915 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:41 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:41Z|01019|binding|INFO|Releasing lport 8627fb46-e938-4a67-9067-fc60794fca05 from this chassis (sb_readonly=0)
Feb 25 07:41:41 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:41Z|01020|binding|INFO|Setting lport 8627fb46-e938-4a67-9067-fc60794fca05 down in Southbound
Feb 25 07:41:41 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:41Z|01021|binding|INFO|Removing iface tap8627fb46-e9 ovn-installed in OVS
Feb 25 07:41:41 np0005629333 nova_compute[244014]: 2026-02-25 12:41:41.918 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:41 np0005629333 nova_compute[244014]: 2026-02-25 12:41:41.923 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:41.925 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:4f:fa 10.100.0.13'], port_security=['fa:16:3e:9d:4f:fa 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'aeced0b2-f2b3-4012-b740-eaa411f99631', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec8bae53-fe6a-49d1-a733-f00c198be561', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503e879cd1f44a16b9baef106ceba949', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3bf34285-1a67-4c95-bb68-fd577a012f6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18f4e8da-4409-4095-9850-aaee82dd8fd1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=8627fb46-e938-4a67-9067-fc60794fca05) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:41:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:41.926 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 8627fb46-e938-4a67-9067-fc60794fca05 in datapath ec8bae53-fe6a-49d1-a733-f00c198be561 unbound from our chassis#033[00m
Feb 25 07:41:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:41.928 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec8bae53-fe6a-49d1-a733-f00c198be561#033[00m
Feb 25 07:41:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:41.939 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7c8aebae-715f-4d0a-b4aa-6b61866a36e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:41 np0005629333 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000069.scope: Deactivated successfully.
Feb 25 07:41:41 np0005629333 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000069.scope: Consumed 5.218s CPU time.
Feb 25 07:41:41 np0005629333 systemd-machined[210048]: Machine qemu-132-instance-00000069 terminated.
Feb 25 07:41:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:41.960 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[850a2c84-d829-4c86-8cf0-b2626cb55995]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:41.963 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[21625381-5649-4c93-b599-3a0a9eecfe24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:41 np0005629333 nova_compute[244014]: 2026-02-25 12:41:41.977 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:41 np0005629333 nova_compute[244014]: 2026-02-25 12:41:41.981 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:41.982 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e36afa62-63ca-47fa-944a-2c43df4ccd26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:41 np0005629333 nova_compute[244014]: 2026-02-25 12:41:41.993 244018 INFO nova.virt.libvirt.driver [-] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Instance destroyed successfully.#033[00m
Feb 25 07:41:41 np0005629333 nova_compute[244014]: 2026-02-25 12:41:41.993 244018 DEBUG nova.objects.instance [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'resources' on Instance uuid aeced0b2-f2b3-4012-b740-eaa411f99631 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:41:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:41.995 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9ce6b351-4802-4a99-be4b-ae7871e4da73]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec8bae53-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a5:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 23, 'rx_bytes': 700, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 23, 'rx_bytes': 700, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516532, 'reachable_time': 18880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336804, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:42.008 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[27b3f3e0-ad9d-4394-bc1e-fc50ea6b68b8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516546, 'tstamp': 516546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336807, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516549, 'tstamp': 516549}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336807, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:42.009 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8bae53-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:42 np0005629333 nova_compute[244014]: 2026-02-25 12:41:42.010 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:42 np0005629333 nova_compute[244014]: 2026-02-25 12:41:42.013 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:42.014 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec8bae53-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:42.014 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:41:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:42.014 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec8bae53-f0, col_values=(('external_ids', {'iface-id': 'e2d1eadf-baf7-4e5c-8052-6c64e8476a26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:42.015 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:41:42 np0005629333 nova_compute[244014]: 2026-02-25 12:41:42.017 244018 DEBUG nova.virt.libvirt.vif [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:202:202,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:41:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-131630496',display_name='tempest-ServersTestJSON-server-131630496',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-131630496',id=105,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:41:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-9s80l7z0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:41:40Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=aeced0b2-f2b3-4012-b740-eaa411f99631,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8627fb46-e938-4a67-9067-fc60794fca05", "address": "fa:16:3e:9d:4f:fa", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8627fb46-e9", "ovs_interfaceid": "8627fb46-e938-4a67-9067-fc60794fca05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:41:42 np0005629333 nova_compute[244014]: 2026-02-25 12:41:42.018 244018 DEBUG nova.network.os_vif_util [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "8627fb46-e938-4a67-9067-fc60794fca05", "address": "fa:16:3e:9d:4f:fa", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8627fb46-e9", "ovs_interfaceid": "8627fb46-e938-4a67-9067-fc60794fca05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:41:42 np0005629333 nova_compute[244014]: 2026-02-25 12:41:42.019 244018 DEBUG nova.network.os_vif_util [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:4f:fa,bridge_name='br-int',has_traffic_filtering=True,id=8627fb46-e938-4a67-9067-fc60794fca05,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8627fb46-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:41:42 np0005629333 nova_compute[244014]: 2026-02-25 12:41:42.019 244018 DEBUG os_vif [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:4f:fa,bridge_name='br-int',has_traffic_filtering=True,id=8627fb46-e938-4a67-9067-fc60794fca05,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8627fb46-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:41:42 np0005629333 nova_compute[244014]: 2026-02-25 12:41:42.020 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:42 np0005629333 nova_compute[244014]: 2026-02-25 12:41:42.020 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8627fb46-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:42 np0005629333 nova_compute[244014]: 2026-02-25 12:41:42.022 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:42 np0005629333 nova_compute[244014]: 2026-02-25 12:41:42.024 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:41:42 np0005629333 nova_compute[244014]: 2026-02-25 12:41:42.026 244018 INFO os_vif [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:4f:fa,bridge_name='br-int',has_traffic_filtering=True,id=8627fb46-e938-4a67-9067-fc60794fca05,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8627fb46-e9')#033[00m
Feb 25 07:41:42 np0005629333 nova_compute[244014]: 2026-02-25 12:41:42.111 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "227efbfe-da43-423a-8652-9636ecded4cd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:42 np0005629333 nova_compute[244014]: 2026-02-25 12:41:42.111 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:42 np0005629333 nova_compute[244014]: 2026-02-25 12:41:42.136 244018 DEBUG nova.compute.manager [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:41:42 np0005629333 nova_compute[244014]: 2026-02-25 12:41:42.149 244018 DEBUG nova.compute.manager [req-07abb67d-47f1-44ea-925c-724fd8e8314b req-729e782f-06a8-4d88-87e6-f2f42f13e456 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Received event network-vif-unplugged-8627fb46-e938-4a67-9067-fc60794fca05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:41:42 np0005629333 nova_compute[244014]: 2026-02-25 12:41:42.150 244018 DEBUG oslo_concurrency.lockutils [req-07abb67d-47f1-44ea-925c-724fd8e8314b req-729e782f-06a8-4d88-87e6-f2f42f13e456 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:42 np0005629333 nova_compute[244014]: 2026-02-25 12:41:42.150 244018 DEBUG oslo_concurrency.lockutils [req-07abb67d-47f1-44ea-925c-724fd8e8314b req-729e782f-06a8-4d88-87e6-f2f42f13e456 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:42 np0005629333 nova_compute[244014]: 2026-02-25 12:41:42.150 244018 DEBUG oslo_concurrency.lockutils [req-07abb67d-47f1-44ea-925c-724fd8e8314b req-729e782f-06a8-4d88-87e6-f2f42f13e456 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:42 np0005629333 nova_compute[244014]: 2026-02-25 12:41:42.150 244018 DEBUG nova.compute.manager [req-07abb67d-47f1-44ea-925c-724fd8e8314b req-729e782f-06a8-4d88-87e6-f2f42f13e456 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] No waiting events found dispatching network-vif-unplugged-8627fb46-e938-4a67-9067-fc60794fca05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:41:42 np0005629333 nova_compute[244014]: 2026-02-25 12:41:42.151 244018 DEBUG nova.compute.manager [req-07abb67d-47f1-44ea-925c-724fd8e8314b req-729e782f-06a8-4d88-87e6-f2f42f13e456 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Received event network-vif-unplugged-8627fb46-e938-4a67-9067-fc60794fca05 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:41:42 np0005629333 nova_compute[244014]: 2026-02-25 12:41:42.217 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:42 np0005629333 nova_compute[244014]: 2026-02-25 12:41:42.217 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:42 np0005629333 nova_compute[244014]: 2026-02-25 12:41:42.227 244018 DEBUG nova.virt.hardware [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:41:42 np0005629333 nova_compute[244014]: 2026-02-25 12:41:42.227 244018 INFO nova.compute.claims [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:41:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.9 MiB/s wr, 128 op/s
Feb 25 07:41:42 np0005629333 nova_compute[244014]: 2026-02-25 12:41:42.397 244018 DEBUG oslo_concurrency.processutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:41:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:41:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:41:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:41:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018777258889690822 of space, bias 1.0, pg target 0.5633177666907246 quantized to 32 (current 32)
Feb 25 07:41:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:41:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:41:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:41:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:41:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:41:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493855278962539 of space, bias 1.0, pg target 0.7481565836887617 quantized to 32 (current 32)
Feb 25 07:41:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:41:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0166595462958793e-06 of space, bias 4.0, pg target 0.0012199914555550552 quantized to 16 (current 16)
Feb 25 07:41:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:41:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:41:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:41:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:41:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:41:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:41:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:41:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:41:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:41:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 07:41:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:41:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4089410437' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:41:42 np0005629333 nova_compute[244014]: 2026-02-25 12:41:42.977 244018 DEBUG oslo_concurrency.processutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:41:42 np0005629333 nova_compute[244014]: 2026-02-25 12:41:42.982 244018 DEBUG nova.compute.provider_tree [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:41:42 np0005629333 nova_compute[244014]: 2026-02-25 12:41:42.998 244018 DEBUG nova.scheduler.client.report [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:41:43 np0005629333 nova_compute[244014]: 2026-02-25 12:41:43.019 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:43 np0005629333 nova_compute[244014]: 2026-02-25 12:41:43.020 244018 DEBUG nova.compute.manager [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:41:43 np0005629333 nova_compute[244014]: 2026-02-25 12:41:43.096 244018 DEBUG nova.compute.manager [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:41:43 np0005629333 nova_compute[244014]: 2026-02-25 12:41:43.096 244018 DEBUG nova.network.neutron [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:41:43 np0005629333 nova_compute[244014]: 2026-02-25 12:41:43.115 244018 INFO nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:41:43 np0005629333 nova_compute[244014]: 2026-02-25 12:41:43.133 244018 DEBUG nova.compute.manager [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:41:43 np0005629333 nova_compute[244014]: 2026-02-25 12:41:43.237 244018 DEBUG nova.compute.manager [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:41:43 np0005629333 nova_compute[244014]: 2026-02-25 12:41:43.239 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:41:43 np0005629333 nova_compute[244014]: 2026-02-25 12:41:43.239 244018 INFO nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Creating image(s)#033[00m
Feb 25 07:41:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:41:43 np0005629333 nova_compute[244014]: 2026-02-25 12:41:43.293 244018 DEBUG nova.storage.rbd_utils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 227efbfe-da43-423a-8652-9636ecded4cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:43 np0005629333 nova_compute[244014]: 2026-02-25 12:41:43.316 244018 DEBUG nova.storage.rbd_utils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 227efbfe-da43-423a-8652-9636ecded4cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:43 np0005629333 nova_compute[244014]: 2026-02-25 12:41:43.347 244018 DEBUG nova.storage.rbd_utils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 227efbfe-da43-423a-8652-9636ecded4cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:43 np0005629333 nova_compute[244014]: 2026-02-25 12:41:43.352 244018 DEBUG oslo_concurrency.processutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:43 np0005629333 nova_compute[244014]: 2026-02-25 12:41:43.433 244018 DEBUG oslo_concurrency.processutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:41:43 np0005629333 nova_compute[244014]: 2026-02-25 12:41:43.434 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:43 np0005629333 nova_compute[244014]: 2026-02-25 12:41:43.434 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:43 np0005629333 nova_compute[244014]: 2026-02-25 12:41:43.434 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:43 np0005629333 nova_compute[244014]: 2026-02-25 12:41:43.454 244018 DEBUG nova.storage.rbd_utils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 227efbfe-da43-423a-8652-9636ecded4cd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:43 np0005629333 nova_compute[244014]: 2026-02-25 12:41:43.457 244018 DEBUG oslo_concurrency.processutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 227efbfe-da43-423a-8652-9636ecded4cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:43 np0005629333 nova_compute[244014]: 2026-02-25 12:41:43.802 244018 DEBUG nova.policy [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:41:44 np0005629333 nova_compute[244014]: 2026-02-25 12:41:44.052 244018 DEBUG oslo_concurrency.processutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 227efbfe-da43-423a-8652-9636ecded4cd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:41:44 np0005629333 nova_compute[244014]: 2026-02-25 12:41:44.102 244018 DEBUG nova.storage.rbd_utils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image 227efbfe-da43-423a-8652-9636ecded4cd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 25 07:41:44 np0005629333 nova_compute[244014]: 2026-02-25 12:41:44.225 244018 DEBUG nova.compute.manager [req-e18c6724-234e-4901-ba58-7add0ad5861e req-71074e15-1c41-4529-8ea2-d5ec0ac214c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Received event network-vif-plugged-8627fb46-e938-4a67-9067-fc60794fca05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:41:44 np0005629333 nova_compute[244014]: 2026-02-25 12:41:44.226 244018 DEBUG oslo_concurrency.lockutils [req-e18c6724-234e-4901-ba58-7add0ad5861e req-71074e15-1c41-4529-8ea2-d5ec0ac214c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:44 np0005629333 nova_compute[244014]: 2026-02-25 12:41:44.226 244018 DEBUG oslo_concurrency.lockutils [req-e18c6724-234e-4901-ba58-7add0ad5861e req-71074e15-1c41-4529-8ea2-d5ec0ac214c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:44 np0005629333 nova_compute[244014]: 2026-02-25 12:41:44.226 244018 DEBUG oslo_concurrency.lockutils [req-e18c6724-234e-4901-ba58-7add0ad5861e req-71074e15-1c41-4529-8ea2-d5ec0ac214c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:44 np0005629333 nova_compute[244014]: 2026-02-25 12:41:44.226 244018 DEBUG nova.compute.manager [req-e18c6724-234e-4901-ba58-7add0ad5861e req-71074e15-1c41-4529-8ea2-d5ec0ac214c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] No waiting events found dispatching network-vif-plugged-8627fb46-e938-4a67-9067-fc60794fca05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:41:44 np0005629333 nova_compute[244014]: 2026-02-25 12:41:44.226 244018 WARNING nova.compute.manager [req-e18c6724-234e-4901-ba58-7add0ad5861e req-71074e15-1c41-4529-8ea2-d5ec0ac214c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Received unexpected event network-vif-plugged-8627fb46-e938-4a67-9067-fc60794fca05 for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:41:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 284 KiB/s wr, 114 op/s
Feb 25 07:41:44 np0005629333 nova_compute[244014]: 2026-02-25 12:41:44.570 244018 DEBUG nova.objects.instance [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid 227efbfe-da43-423a-8652-9636ecded4cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:41:44 np0005629333 nova_compute[244014]: 2026-02-25 12:41:44.584 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:41:44 np0005629333 nova_compute[244014]: 2026-02-25 12:41:44.584 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Ensure instance console log exists: /var/lib/nova/instances/227efbfe-da43-423a-8652-9636ecded4cd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:41:44 np0005629333 nova_compute[244014]: 2026-02-25 12:41:44.585 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:44 np0005629333 nova_compute[244014]: 2026-02-25 12:41:44.585 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:44 np0005629333 nova_compute[244014]: 2026-02-25 12:41:44.585 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:44 np0005629333 nova_compute[244014]: 2026-02-25 12:41:44.742 244018 INFO nova.virt.libvirt.driver [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Deleting instance files /var/lib/nova/instances/aeced0b2-f2b3-4012-b740-eaa411f99631_del#033[00m
Feb 25 07:41:44 np0005629333 nova_compute[244014]: 2026-02-25 12:41:44.744 244018 INFO nova.virt.libvirt.driver [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Deletion of /var/lib/nova/instances/aeced0b2-f2b3-4012-b740-eaa411f99631_del complete#033[00m
Feb 25 07:41:44 np0005629333 nova_compute[244014]: 2026-02-25 12:41:44.799 244018 INFO nova.compute.manager [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Took 3.04 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:41:44 np0005629333 nova_compute[244014]: 2026-02-25 12:41:44.800 244018 DEBUG oslo.service.loopingcall [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:41:44 np0005629333 nova_compute[244014]: 2026-02-25 12:41:44.801 244018 DEBUG nova.compute.manager [-] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:41:44 np0005629333 nova_compute[244014]: 2026-02-25 12:41:44.801 244018 DEBUG nova.network.neutron [-] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:41:45 np0005629333 nova_compute[244014]: 2026-02-25 12:41:45.064 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:45 np0005629333 nova_compute[244014]: 2026-02-25 12:41:45.274 244018 DEBUG nova.network.neutron [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Successfully created port: ee68d04f-36ba-4727-ab4b-c31e559353e0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:41:45 np0005629333 nova_compute[244014]: 2026-02-25 12:41:45.640 244018 DEBUG nova.network.neutron [-] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:41:45 np0005629333 nova_compute[244014]: 2026-02-25 12:41:45.670 244018 INFO nova.compute.manager [-] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Took 0.87 seconds to deallocate network for instance.#033[00m
Feb 25 07:41:45 np0005629333 nova_compute[244014]: 2026-02-25 12:41:45.764 244018 DEBUG oslo_concurrency.lockutils [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:45 np0005629333 nova_compute[244014]: 2026-02-25 12:41:45.765 244018 DEBUG oslo_concurrency.lockutils [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:45 np0005629333 nova_compute[244014]: 2026-02-25 12:41:45.869 244018 DEBUG oslo_concurrency.processutils [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:46 np0005629333 nova_compute[244014]: 2026-02-25 12:41:46.177 244018 DEBUG nova.network.neutron [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Successfully updated port: ee68d04f-36ba-4727-ab4b-c31e559353e0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:41:46 np0005629333 nova_compute[244014]: 2026-02-25 12:41:46.201 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-227efbfe-da43-423a-8652-9636ecded4cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:41:46 np0005629333 nova_compute[244014]: 2026-02-25 12:41:46.201 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-227efbfe-da43-423a-8652-9636ecded4cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:41:46 np0005629333 nova_compute[244014]: 2026-02-25 12:41:46.202 244018 DEBUG nova.network.neutron [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:41:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 103 op/s
Feb 25 07:41:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:41:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4076591774' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:41:46 np0005629333 nova_compute[244014]: 2026-02-25 12:41:46.407 244018 DEBUG oslo_concurrency.processutils [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:41:46 np0005629333 nova_compute[244014]: 2026-02-25 12:41:46.413 244018 DEBUG nova.compute.provider_tree [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:41:46 np0005629333 nova_compute[244014]: 2026-02-25 12:41:46.421 244018 DEBUG nova.compute.manager [req-df843bc5-b604-4a23-b79f-d13cb231dcd7 req-0425f375-142a-4ff8-9811-4c462d84e4a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Received event network-changed-ee68d04f-36ba-4727-ab4b-c31e559353e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:41:46 np0005629333 nova_compute[244014]: 2026-02-25 12:41:46.421 244018 DEBUG nova.compute.manager [req-df843bc5-b604-4a23-b79f-d13cb231dcd7 req-0425f375-142a-4ff8-9811-4c462d84e4a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Refreshing instance network info cache due to event network-changed-ee68d04f-36ba-4727-ab4b-c31e559353e0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:41:46 np0005629333 nova_compute[244014]: 2026-02-25 12:41:46.421 244018 DEBUG oslo_concurrency.lockutils [req-df843bc5-b604-4a23-b79f-d13cb231dcd7 req-0425f375-142a-4ff8-9811-4c462d84e4a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-227efbfe-da43-423a-8652-9636ecded4cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:41:46 np0005629333 nova_compute[244014]: 2026-02-25 12:41:46.425 244018 DEBUG nova.compute.manager [req-8b4b6247-825e-4794-a3ea-c3adff770e5a req-d819805a-6e3a-4e6a-8b0b-ae7162bd54da 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Received event network-vif-deleted-8627fb46-e938-4a67-9067-fc60794fca05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:41:46 np0005629333 nova_compute[244014]: 2026-02-25 12:41:46.434 244018 DEBUG nova.scheduler.client.report [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:41:46 np0005629333 nova_compute[244014]: 2026-02-25 12:41:46.461 244018 DEBUG oslo_concurrency.lockutils [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:46 np0005629333 nova_compute[244014]: 2026-02-25 12:41:46.494 244018 INFO nova.scheduler.client.report [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Deleted allocations for instance aeced0b2-f2b3-4012-b740-eaa411f99631#033[00m
Feb 25 07:41:46 np0005629333 nova_compute[244014]: 2026-02-25 12:41:46.562 244018 DEBUG nova.network.neutron [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:41:46 np0005629333 nova_compute[244014]: 2026-02-25 12:41:46.577 244018 DEBUG oslo_concurrency.lockutils [None req-49ebd8d5-1a1a-4af8-96e8-b69c7b316881 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "aeced0b2-f2b3-4012-b740-eaa411f99631" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:47 np0005629333 nova_compute[244014]: 2026-02-25 12:41:47.021 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:41:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1809902909' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:41:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:41:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1809902909' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 07:41:48 np0005629333 nova_compute[244014]: 2026-02-25 12:41:48.082 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023293.0807204, bb4e80d2-c200-4c24-8154-e1a86d06946b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:41:48 np0005629333 nova_compute[244014]: 2026-02-25 12:41:48.082 244018 INFO nova.compute.manager [-] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:41:48 np0005629333 nova_compute[244014]: 2026-02-25 12:41:48.108 244018 DEBUG nova.compute.manager [None req-097dafce-faf2-4e4d-bf16-9e163a894dd0 - - - - - -] [instance: bb4e80d2-c200-4c24-8154-e1a86d06946b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:41:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 155 op/s
Feb 25 07:41:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:41:49 np0005629333 nova_compute[244014]: 2026-02-25 12:41:49.197 244018 DEBUG nova.network.neutron [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Updating instance_info_cache with network_info: [{"id": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "address": "fa:16:3e:b7:74:9a", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:749a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee68d04f-36", "ovs_interfaceid": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:41:49 np0005629333 nova_compute[244014]: 2026-02-25 12:41:49.213 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-227efbfe-da43-423a-8652-9636ecded4cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:41:49 np0005629333 nova_compute[244014]: 2026-02-25 12:41:49.213 244018 DEBUG nova.compute.manager [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Instance network_info: |[{"id": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "address": "fa:16:3e:b7:74:9a", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:749a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee68d04f-36", "ovs_interfaceid": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:41:49 np0005629333 nova_compute[244014]: 2026-02-25 12:41:49.214 244018 DEBUG oslo_concurrency.lockutils [req-df843bc5-b604-4a23-b79f-d13cb231dcd7 req-0425f375-142a-4ff8-9811-4c462d84e4a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-227efbfe-da43-423a-8652-9636ecded4cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:41:49 np0005629333 nova_compute[244014]: 2026-02-25 12:41:49.214 244018 DEBUG nova.network.neutron [req-df843bc5-b604-4a23-b79f-d13cb231dcd7 req-0425f375-142a-4ff8-9811-4c462d84e4a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Refreshing network info cache for port ee68d04f-36ba-4727-ab4b-c31e559353e0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:41:49 np0005629333 nova_compute[244014]: 2026-02-25 12:41:49.216 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Start _get_guest_xml network_info=[{"id": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "address": "fa:16:3e:b7:74:9a", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:749a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee68d04f-36", "ovs_interfaceid": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:41:49 np0005629333 nova_compute[244014]: 2026-02-25 12:41:49.222 244018 WARNING nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:41:49 np0005629333 nova_compute[244014]: 2026-02-25 12:41:49.226 244018 DEBUG nova.virt.libvirt.host [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:41:49 np0005629333 nova_compute[244014]: 2026-02-25 12:41:49.226 244018 DEBUG nova.virt.libvirt.host [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:41:49 np0005629333 nova_compute[244014]: 2026-02-25 12:41:49.229 244018 DEBUG nova.virt.libvirt.host [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:41:49 np0005629333 nova_compute[244014]: 2026-02-25 12:41:49.230 244018 DEBUG nova.virt.libvirt.host [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:41:49 np0005629333 nova_compute[244014]: 2026-02-25 12:41:49.230 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:41:49 np0005629333 nova_compute[244014]: 2026-02-25 12:41:49.230 244018 DEBUG nova.virt.hardware [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:41:49 np0005629333 nova_compute[244014]: 2026-02-25 12:41:49.231 244018 DEBUG nova.virt.hardware [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:41:49 np0005629333 nova_compute[244014]: 2026-02-25 12:41:49.231 244018 DEBUG nova.virt.hardware [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:41:49 np0005629333 nova_compute[244014]: 2026-02-25 12:41:49.231 244018 DEBUG nova.virt.hardware [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:41:49 np0005629333 nova_compute[244014]: 2026-02-25 12:41:49.231 244018 DEBUG nova.virt.hardware [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:41:49 np0005629333 nova_compute[244014]: 2026-02-25 12:41:49.232 244018 DEBUG nova.virt.hardware [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:41:49 np0005629333 nova_compute[244014]: 2026-02-25 12:41:49.232 244018 DEBUG nova.virt.hardware [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:41:49 np0005629333 nova_compute[244014]: 2026-02-25 12:41:49.232 244018 DEBUG nova.virt.hardware [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:41:49 np0005629333 nova_compute[244014]: 2026-02-25 12:41:49.232 244018 DEBUG nova.virt.hardware [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:41:49 np0005629333 nova_compute[244014]: 2026-02-25 12:41:49.232 244018 DEBUG nova.virt.hardware [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:41:49 np0005629333 nova_compute[244014]: 2026-02-25 12:41:49.233 244018 DEBUG nova.virt.hardware [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:41:49 np0005629333 nova_compute[244014]: 2026-02-25 12:41:49.236 244018 DEBUG oslo_concurrency.processutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:41:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1694417328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:41:49 np0005629333 nova_compute[244014]: 2026-02-25 12:41:49.782 244018 DEBUG oslo_concurrency.processutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:41:49 np0005629333 nova_compute[244014]: 2026-02-25 12:41:49.803 244018 DEBUG nova.storage.rbd_utils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 227efbfe-da43-423a-8652-9636ecded4cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:49 np0005629333 nova_compute[244014]: 2026-02-25 12:41:49.807 244018 DEBUG oslo_concurrency.processutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.066 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Feb 25 07:41:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:41:50 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2062132063' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.327 244018 DEBUG oslo_concurrency.processutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.330 244018 DEBUG nova.virt.libvirt.vif [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:41:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1104420481',display_name='tempest-TestGettingAddress-server-1104420481',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1104420481',id=106,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOF5qZPMlaCDABTzKIN8P76NPvIVQ2htm3U8AfLvXmjtQ5ATadIDI8WR25EzcFon8xGJAOKa64XoS6ByiRlqYYuZKqui9AsV1f+Y3cN0xRd0ljo9lo2zQ0wdsqjfZAW6NA==',key_name='tempest-TestGettingAddress-1477618447',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-g8zs80ws',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:41:43Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=227efbfe-da43-423a-8652-9636ecded4cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "address": "fa:16:3e:b7:74:9a", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:749a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee68d04f-36", "ovs_interfaceid": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.331 244018 DEBUG nova.network.os_vif_util [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "address": "fa:16:3e:b7:74:9a", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:749a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee68d04f-36", "ovs_interfaceid": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.332 244018 DEBUG nova.network.os_vif_util [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:74:9a,bridge_name='br-int',has_traffic_filtering=True,id=ee68d04f-36ba-4727-ab4b-c31e559353e0,network=Network(9ec47b46-6b4d-4267-964b-6f16eef9a7b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee68d04f-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.334 244018 DEBUG nova.objects.instance [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 227efbfe-da43-423a-8652-9636ecded4cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.352 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:41:50 np0005629333 nova_compute[244014]:  <uuid>227efbfe-da43-423a-8652-9636ecded4cd</uuid>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:  <name>instance-0000006a</name>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestGettingAddress-server-1104420481</nova:name>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:41:49</nova:creationTime>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:41:50 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:        <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:        <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:        <nova:port uuid="ee68d04f-36ba-4727-ab4b-c31e559353e0">
Feb 25 07:41:50 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:feb7:749a" ipVersion="6"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <entry name="serial">227efbfe-da43-423a-8652-9636ecded4cd</entry>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <entry name="uuid">227efbfe-da43-423a-8652-9636ecded4cd</entry>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/227efbfe-da43-423a-8652-9636ecded4cd_disk">
Feb 25 07:41:50 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:41:50 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/227efbfe-da43-423a-8652-9636ecded4cd_disk.config">
Feb 25 07:41:50 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:41:50 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:b7:74:9a"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <target dev="tapee68d04f-36"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/227efbfe-da43-423a-8652-9636ecded4cd/console.log" append="off"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:41:50 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:41:50 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:41:50 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:41:50 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
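The block from <domain type="kvm"> down to </domain> is the complete guest definition nova hands to libvirt. A minimal sketch of the define-and-boot step using the libvirt-python bindings, assuming the XML above has been saved to a (hypothetical) file:

    # Sketch: define and start a domain from XML with libvirt-python,
    # roughly the step the driver performs after _get_guest_xml returns.
    import libvirt

    xml = open('/tmp/instance-0000006a.xml').read()  # hypothetical path

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)   # persist the domain definition
        dom.create()                # boot it (same as 'virsh start')
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()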
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.354 244018 DEBUG nova.compute.manager [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Preparing to wait for external event network-vif-plugged-ee68d04f-36ba-4727-ab4b-c31e559353e0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.354 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "227efbfe-da43-423a-8652-9636ecded4cd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.355 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.355 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
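The three lockutils records above are the compute manager registering a waiter for the network-vif-plugged external event before the VIF is actually plugged; registering first is what keeps the later wait from racing with Neutron's callback. An illustrative sketch of that prepare-then-wait pattern with plain threading primitives (EVENTS, prepare_for_event and event_received are made-up names, not nova's real internals):

    # Illustrative only: register the event before triggering the work
    # that completes it, then wait afterwards, so the completion
    # callback can never be lost to a race.
    import threading

    EVENTS = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_for_event(key):
        EVENTS[key] = threading.Event()      # step 1: register first

    def event_received(key):                 # invoked by the API callback
        EVENTS[key].set()

    key = ('227efbfe-da43-423a-8652-9636ecded4cd',
           'network-vif-plugged-ee68d04f-36ba-4727-ab4b-c31e559353e0')
    prepare_for_event(key)
    # Stand-in for neutron's asynchronous notification:
    threading.Timer(1.0, event_received, args=(key,)).start()
    if not EVENTS[key].wait(timeout=300):    # step 2: wait afterwards
        raise TimeoutError('network-vif-plugged never arrived')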
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.356 244018 DEBUG nova.virt.libvirt.vif [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:41:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1104420481',display_name='tempest-TestGettingAddress-server-1104420481',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1104420481',id=106,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOF5qZPMlaCDABTzKIN8P76NPvIVQ2htm3U8AfLvXmjtQ5ATadIDI8WR25EzcFon8xGJAOKa64XoS6ByiRlqYYuZKqui9AsV1f+Y3cN0xRd0ljo9lo2zQ0wdsqjfZAW6NA==',key_name='tempest-TestGettingAddress-1477618447',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-g8zs80ws',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:41:43Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=227efbfe-da43-423a-8652-9636ecded4cd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "address": "fa:16:3e:b7:74:9a", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:749a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee68d04f-36", "ovs_interfaceid": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.356 244018 DEBUG nova.network.os_vif_util [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "address": "fa:16:3e:b7:74:9a", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:749a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee68d04f-36", "ovs_interfaceid": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.357 244018 DEBUG nova.network.os_vif_util [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:74:9a,bridge_name='br-int',has_traffic_filtering=True,id=ee68d04f-36ba-4727-ab4b-c31e559353e0,network=Network(9ec47b46-6b4d-4267-964b-6f16eef9a7b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee68d04f-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.357 244018 DEBUG os_vif [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:74:9a,bridge_name='br-int',has_traffic_filtering=True,id=ee68d04f-36ba-4727-ab4b-c31e559353e0,network=Network(9ec47b46-6b4d-4267-964b-6f16eef9a7b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee68d04f-36') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.358 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.358 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.359 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.362 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.362 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee68d04f-36, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.362 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapee68d04f-36, col_values=(('external_ids', {'iface-id': 'ee68d04f-36ba-4727-ab4b-c31e559353e0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:74:9a', 'vm-uuid': '227efbfe-da43-423a-8652-9636ecded4cd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
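The two ovsdbapp transactions above idempotently ensure br-int exists and then add the tap device with the external_ids OVN uses to recognize the port. The same effect, expressed as the equivalent ovs-vsctl invocations driven from Python (this assumes ovs-vsctl is available on the host; it is a restatement, not what nova actually executes, since nova speaks OVSDB directly):

    # Sketch: ovs-vsctl equivalents of the AddBridgeCommand,
    # AddPortCommand and DbSetCommand logged above.
    import subprocess

    def vsctl(*args):
        subprocess.run(('ovs-vsctl',) + args, check=True)

    vsctl('--may-exist', 'add-br', 'br-int',
          '--', 'set', 'Bridge', 'br-int', 'datapath_type=system')
    vsctl('--may-exist', 'add-port', 'br-int', 'tapee68d04f-36',
          '--', 'set', 'Interface', 'tapee68d04f-36',
          'external_ids:iface-id=ee68d04f-36ba-4727-ab4b-c31e559353e0',
          'external_ids:iface-status=active',
          'external_ids:attached-mac="fa:16:3e:b7:74:9a"',
          'external_ids:vm-uuid=227efbfe-da43-423a-8652-9636ecded4cd')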
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.363 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:50 np0005629333 NetworkManager[49836]: <info>  [1772023310.3648] manager: (tapee68d04f-36): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/423)
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.367 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.368 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.369 244018 INFO os_vif [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:74:9a,bridge_name='br-int',has_traffic_filtering=True,id=ee68d04f-36ba-4727-ab4b-c31e559353e0,network=Network(9ec47b46-6b4d-4267-964b-6f16eef9a7b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee68d04f-36')#033[00m
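"Successfully plugged vif" closes the os-vif call chain that started at "Plugging vif". Per os-vif's documented public API, the entry point looks roughly like the sketch below; the VIF object mirrors the earlier conversion sketch and the InstanceInfo fields are taken from the log:

    # Sketch of the os-vif public API exercised above.
    import os_vif
    from os_vif import objects

    os_vif.initialize()
    vif = objects.vif.VIFOpenVSwitch(
        id='ee68d04f-36ba-4727-ab4b-c31e559353e0',
        address='fa:16:3e:b7:74:9a',
        bridge_name='br-int',
        vif_name='tapee68d04f-36',
        port_profile=objects.vif.VIFPortProfileOpenVSwitch(
            interface_id='ee68d04f-36ba-4727-ab4b-c31e559353e0'))
    instance_info = objects.instance_info.InstanceInfo(
        uuid='227efbfe-da43-423a-8652-9636ecded4cd',
        name='instance-0000006a')
    os_vif.plug(vif, instance_info)   # dispatched to the 'ovs' plugin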
Feb 25 07:41:50 np0005629333 podman[337102]: 2026-02-25 12:41:50.456393267 +0000 UTC m=+0.053819160 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.477 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "b8d9acc4-7912-4d26-bad6-1159f6993361" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.478 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.495 244018 DEBUG nova.compute.manager [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.568 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.569 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.570 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:b7:74:9a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.571 244018 INFO nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Using config drive#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.598 244018 DEBUG nova.storage.rbd_utils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 227efbfe-da43-423a-8652-9636ecded4cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.771 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.772 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.782 244018 DEBUG nova.virt.hardware [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.782 244018 INFO nova.compute.claims [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:41:50 np0005629333 nova_compute[244014]: 2026-02-25 12:41:50.955 244018 DEBUG oslo_concurrency.processutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.199 244018 INFO nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Creating config drive at /var/lib/nova/instances/227efbfe-da43-423a-8652-9636ecded4cd/disk.config#033[00m
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.208 244018 DEBUG oslo_concurrency.processutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/227efbfe-da43-423a-8652-9636ecded4cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpo27n9xj0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.355 244018 DEBUG oslo_concurrency.processutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/227efbfe-da43-423a-8652-9636ecded4cd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpo27n9xj0" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
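The mkisofs run above is nova building the config drive: an ISO9660 image with volume label config-2 (the label cloud-init probes for) built from a temporary metadata tree. A sketch reproducing the logged invocation, assuming tmpdir holds the openstack/... metadata layout:

    # Sketch: rebuild the logged config-drive command; all flags are
    # copied verbatim from the record above.
    import subprocess

    def make_config_drive(iso_path, tmpdir):
        subprocess.run([
            '/usr/bin/mkisofs', '-o', iso_path,
            '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
            '-publisher',
            'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
            '-quiet', '-J', '-r', '-V', 'config-2',
            tmpdir,
        ], check=True)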
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.382 244018 DEBUG nova.storage.rbd_utils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 227efbfe-da43-423a-8652-9636ecded4cd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.386 244018 DEBUG oslo_concurrency.processutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/227efbfe-da43-423a-8652-9636ecded4cd/disk.config 227efbfe-da43-423a-8652-9636ecded4cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:41:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2951979274' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.500 244018 DEBUG oslo_concurrency.processutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
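The resource tracker shells out to ceph df to size the RBD-backed disk inventory for the claim above. A sketch of the same probe, assuming the usual layout of ceph df --format=json (a cluster-wide "stats" object plus a "pools" array; key names are per the ceph CLI, not taken from this log):

    # Sketch: run the probe from the log and read per-pool usage out of
    # the JSON it returns.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    df = json.loads(out)
    for pool in df['pools']:
        print(pool['name'], pool['stats'])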
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.504 244018 DEBUG nova.compute.provider_tree [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.525 244018 DEBUG nova.scheduler.client.report [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
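For reference, placement derives schedulable capacity from an inventory as (total - reserved) * allocation_ratio, so the figures in the record above work out as follows (a quick check, not output from the log):

    # Worked example: capacity placement can schedule from the logged
    # inventory, using (total - reserved) * allocation_ratio.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2 (rounded)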
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.547 244018 DEBUG nova.network.neutron [req-df843bc5-b604-4a23-b79f-d13cb231dcd7 req-0425f375-142a-4ff8-9811-4c462d84e4a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Updated VIF entry in instance network info cache for port ee68d04f-36ba-4727-ab4b-c31e559353e0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.548 244018 DEBUG nova.network.neutron [req-df843bc5-b604-4a23-b79f-d13cb231dcd7 req-0425f375-142a-4ff8-9811-4c462d84e4a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Updating instance_info_cache with network_info: [{"id": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "address": "fa:16:3e:b7:74:9a", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:749a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee68d04f-36", "ovs_interfaceid": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.555 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.555 244018 DEBUG nova.compute.manager [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.577 244018 DEBUG oslo_concurrency.lockutils [req-df843bc5-b604-4a23-b79f-d13cb231dcd7 req-0425f375-142a-4ff8-9811-4c462d84e4a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-227efbfe-da43-423a-8652-9636ecded4cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.627 244018 DEBUG nova.compute.manager [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.627 244018 DEBUG nova.network.neutron [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.665 244018 INFO nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.685 244018 DEBUG oslo_concurrency.processutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/227efbfe-da43-423a-8652-9636ecded4cd/disk.config 227efbfe-da43-423a-8652-9636ecded4cd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.685 244018 INFO nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Deleting local config drive /var/lib/nova/instances/227efbfe-da43-423a-8652-9636ecded4cd/disk.config because it was imported into RBD.#033[00m
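The config-drive ISO is then pushed into the Ceph vms pool and the local copy removed, matching the two records above. Sketched with the same CLI arguments the log shows:

    # Sketch: import the local ISO into RBD, then delete the local file,
    # mirroring the 'rbd import' + 'Deleting local config drive' records.
    import os
    import subprocess

    local = ('/var/lib/nova/instances/'
             '227efbfe-da43-423a-8652-9636ecded4cd/disk.config')
    subprocess.run([
        'rbd', 'import', '--pool', 'vms', local,
        '227efbfe-da43-423a-8652-9636ecded4cd_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf',
    ], check=True)
    os.remove(local)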
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.687 244018 DEBUG nova.compute.manager [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.689 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:51 np0005629333 kernel: tapee68d04f-36: entered promiscuous mode
Feb 25 07:41:51 np0005629333 NetworkManager[49836]: <info>  [1772023311.7314] manager: (tapee68d04f-36): new Tun device (/org/freedesktop/NetworkManager/Devices/424)
Feb 25 07:41:51 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:51Z|01022|binding|INFO|Claiming lport ee68d04f-36ba-4727-ab4b-c31e559353e0 for this chassis.
Feb 25 07:41:51 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:51Z|01023|binding|INFO|ee68d04f-36ba-4727-ab4b-c31e559353e0: Claiming fa:16:3e:b7:74:9a 10.100.0.8 2001:db8::f816:3eff:feb7:749a
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.731 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:51.738 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:74:9a 10.100.0.8 2001:db8::f816:3eff:feb7:749a'], port_security=['fa:16:3e:b7:74:9a 10.100.0.8 2001:db8::f816:3eff:feb7:749a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28 2001:db8::f816:3eff:feb7:749a/64', 'neutron:device_id': '227efbfe-da43-423a-8652-9636ecded4cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '219ff989-2dc4-4de6-abcc-fec08f1c06f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5cb98e41-b63f-472f-91ce-2c5130dfa4a8, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ee68d04f-36ba-4727-ab4b-c31e559353e0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:41:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:51.739 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ee68d04f-36ba-4727-ab4b-c31e559353e0 in datapath 9ec47b46-6b4d-4267-964b-6f16eef9a7b1 bound to our chassis#033[00m
Feb 25 07:41:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:51.740 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9ec47b46-6b4d-4267-964b-6f16eef9a7b1#033[00m
Feb 25 07:41:51 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:51Z|01024|binding|INFO|Setting lport ee68d04f-36ba-4727-ab4b-c31e559353e0 ovn-installed in OVS
Feb 25 07:41:51 np0005629333 ovn_controller[147040]: 2026-02-25T12:41:51Z|01025|binding|INFO|Setting lport ee68d04f-36ba-4727-ab4b-c31e559353e0 up in Southbound
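ovn-controller has now claimed the logical port for this chassis and marked it up in the southbound database. One way to inspect the resulting binding (assuming ovn-sbctl is available on a node with southbound access; this is a verification aid, not something the log shows being run):

    # Sketch: read the Port_Binding row for the lport claimed above.
    import subprocess

    subprocess.run([
        'ovn-sbctl', 'find', 'Port_Binding',
        'logical_port=ee68d04f-36ba-4727-ab4b-c31e559353e0',
    ], check=True)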
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.742 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:51 np0005629333 podman[337202]: 2026-02-25 12:41:51.75417756 +0000 UTC m=+0.100679962 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Feb 25 07:41:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:51.754 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dbe71152-c237-45fc-8f7a-bae202405389]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:51 np0005629333 systemd-udevd[337241]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:41:51 np0005629333 systemd-machined[210048]: New machine qemu-133-instance-0000006a.
Feb 25 07:41:51 np0005629333 NetworkManager[49836]: <info>  [1772023311.7705] device (tapee68d04f-36): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:41:51 np0005629333 NetworkManager[49836]: <info>  [1772023311.7710] device (tapee68d04f-36): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:41:51 np0005629333 systemd[1]: Started Virtual Machine qemu-133-instance-0000006a.
Feb 25 07:41:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:51.781 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[86a1c770-04c3-41e0-a939-e60e82984cef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:51.785 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c6788317-7a87-41ff-b6c2-98a6b109bcd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.799 244018 DEBUG nova.compute.manager [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.800 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.800 244018 INFO nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Creating image(s)#033[00m
Feb 25 07:41:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:51.815 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7b655281-ed4c-460d-99ee-d5d8cb9aba6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.830 244018 DEBUG nova.storage.rbd_utils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image b8d9acc4-7912-4d26-bad6-1159f6993361_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:51.836 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[181ae194-1f03-4f6d-9cea-2fb9dc3fd0de]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ec47b46-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:8a:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 5, 'rx_bytes': 1586, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 5, 'rx_bytes': 1586, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 300], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524298, 'reachable_time': 43888, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 17, 'inoctets': 1264, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 17, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 17, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337269, 'error': None, 'target': 'ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:41:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:51.855 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ec3062fc-40d2-4666-a286-b5c15061b604]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9ec47b46-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 524306, 'tstamp': 524306}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337281, 'error': None, 'target': 'ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9ec47b46-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 524308, 'tstamp': 524308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337281, 'error': None, 'target': 'ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
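[editor note] The two privsep replies above are pyroute2 netlink messages serialized as nested lists: every attribute is an ['NAME', value] pair under 'attrs'. Fields such as the tap device's MAC or MTU can be pulled out with a one-line scan. A minimal sketch in pure Python; the helper name is ours and the dict is heavily abridged from the RTM_NEWLINK dump above:

    def get_attr(msg, name):
        """Return the first IFLA_*/IFA_* attribute value with the given name."""
        for key, value in msg['attrs']:
            if key == name:
                return value
        return None

    # Abridged shape of the RTM_NEWLINK reply logged above.
    link = {'index': 2, 'state': 'up',
            'attrs': [['IFLA_IFNAME', 'tap9ec47b46-61'],
                      ['IFLA_MTU', 1500],
                      ['IFLA_ADDRESS', 'fa:16:3e:aa:8a:86']]}

    assert get_attr(link, 'IFLA_IFNAME') == 'tap9ec47b46-61'
    assert get_attr(link, 'IFLA_ADDRESS') == 'fa:16:3e:aa:8a:86'

The RTM_NEWADDR replies follow the same convention with IFA_* keys, which is how the agent confirms both 10.100.0.2/28 and the 169.254.169.254/32 metadata address landed on the tap device.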
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.856 244018 DEBUG nova.storage.rbd_utils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image b8d9acc4-7912-4d26-bad6-1159f6993361_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:51.857 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ec47b46-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:51.860 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ec47b46-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:51.860 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:41:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:51.861 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9ec47b46-60, col_values=(('external_ids', {'iface-id': '895bc3cc-c38d-425b-b005-1acb3139bbee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:51.861 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
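[editor note] The three transactions above (delete the port from br-ex if present, add it to br-int if absent, then set external_ids:iface-id) are the usual idempotent metadata-port wiring; "Transaction caused no change" simply means the desired state already held. A sketch of the same three steps issued through ovsdbapp, assuming an OVSDB reachable at the default unix socket (endpoint and timeout are our choices, the command arguments mirror the logged DelPortCommand/AddPortCommand/DbSetCommand):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    port = 'tap9ec47b46-60'
    iface_id = '895bc3cc-c38d-425b-b005-1acb3139bbee'

    # All three commands are no-ops when the port is already wired,
    # hence the "Transaction caused no change" lines in the log.
    api.del_port(port, bridge='br-ex', if_exists=True).execute(check_error=True)
    api.add_port('br-int', port, may_exist=True).execute(check_error=True)
    api.db_set('Interface', port,
               ('external_ids', {'iface-id': iface_id})).execute(check_error=True)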
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.884 244018 DEBUG nova.storage.rbd_utils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image b8d9acc4-7912-4d26-bad6-1159f6993361_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.889 244018 DEBUG oslo_concurrency.processutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.917 244018 DEBUG nova.policy [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c5bc24b5f5048469cf3f701ce511bfa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '503e879cd1f44a16b9baef106ceba949', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.919 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.953 244018 DEBUG oslo_concurrency.processutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
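[editor note] The qemu-img probe above is wrapped in oslo.concurrency's prlimit helper, which re-execs the command under an address-space and CPU-time cap so a malformed image cannot wedge the compute host. A roughly equivalent call from Python, assuming the same 1 GiB / 30 s limits that appear on the logged command line:

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1 * 1024 * 1024 * 1024,  # matches --as=1073741824
        cpu_time=30)                           # matches --cpu=30

    # env_variables here replaces the whole environment; the logged
    # command achieves the same LC_ALL=C LANG=C effect via env(1).
    out, err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=limits, env_variables={'LC_ALL': 'C', 'LANG': 'C'})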
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.954 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.954 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.955 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
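[editor note] The acquire/release triple around fetch_func_sync is oslo.concurrency's synchronized-lock pattern; the lock name is the base-image cache key (the same name as the _base file probed above), so concurrent boots of the same image serialize on populating the cache. A minimal sketch of the pattern; the function body is ours:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('a63dc6dbb387022d47a8ca49bddcc4af2508a4d6')
    def fetch_func_sync():
        # Only one thread at a time may populate this cache entry.
        # In the log it returned immediately ("held 0.000s") because
        # the base image was already downloaded.
        pass

    fetch_func_sync()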
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.974 244018 DEBUG nova.storage.rbd_utils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image b8d9acc4-7912-4d26-bad6-1159f6993361_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:51 np0005629333 nova_compute[244014]: 2026-02-25 12:41:51.978 244018 DEBUG oslo_concurrency.processutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 b8d9acc4-7912-4d26-bad6-1159f6993361_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:52 np0005629333 nova_compute[244014]: 2026-02-25 12:41:52.249 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023312.2485483, 227efbfe-da43-423a-8652-9636ecded4cd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:41:52 np0005629333 nova_compute[244014]: 2026-02-25 12:41:52.249 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] VM Started (Lifecycle Event)#033[00m
Feb 25 07:41:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Feb 25 07:41:52 np0005629333 nova_compute[244014]: 2026-02-25 12:41:52.286 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:41:52 np0005629333 nova_compute[244014]: 2026-02-25 12:41:52.290 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023312.2517533, 227efbfe-da43-423a-8652-9636ecded4cd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:41:52 np0005629333 nova_compute[244014]: 2026-02-25 12:41:52.290 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:41:52 np0005629333 nova_compute[244014]: 2026-02-25 12:41:52.309 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:41:52 np0005629333 nova_compute[244014]: 2026-02-25 12:41:52.311 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:41:52 np0005629333 nova_compute[244014]: 2026-02-25 12:41:52.340 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
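[editor note] The sync message above compares numeric power states: the database still holds 0 while libvirt reports 3. Decoding them with the constants from nova.compute.power_state makes the "paused while spawning" situation explicit:

    # Values reproduced from nova.compute.power_state.
    POWER_STATES = {
        0x00: 'NOSTATE',    # DB power_state: 0 -> nothing recorded yet
        0x01: 'RUNNING',
        0x03: 'PAUSED',     # VM power_state: 3 -> libvirt paused the guest
        0x04: 'SHUTDOWN',
        0x06: 'CRASHED',
        0x07: 'SUSPENDED',
    }

    print(POWER_STATES[0], '->', POWER_STATES[3])  # NOSTATE -> PAUSED

Because the instance still has task_state 'spawning', the manager deliberately skips forcing the DB state to match, as the next log line states.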
Feb 25 07:41:52 np0005629333 nova_compute[244014]: 2026-02-25 12:41:52.499 244018 DEBUG nova.network.neutron [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Successfully created port: 6178078f-c3e6-4ed1-86fa-d99671618a75 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:41:52 np0005629333 nova_compute[244014]: 2026-02-25 12:41:52.580 244018 DEBUG oslo_concurrency.processutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 b8d9acc4-7912-4d26-bad6-1159f6993361_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:41:52 np0005629333 nova_compute[244014]: 2026-02-25 12:41:52.662 244018 DEBUG nova.storage.rbd_utils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] resizing rbd image b8d9acc4-7912-4d26-bad6-1159f6993361_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
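[editor note] The "does not exist" probes, the rbd import, and the resize to the 1 GiB root disk are nova.storage.rbd_utils talking to librbd. A sketch of the exists-check and resize using the python rbd bindings, assuming the same client id, conf file, and vms pool as the logged commands:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        name = 'b8d9acc4-7912-4d26-bad6-1159f6993361_disk'
        try:
            image = rbd.Image(ioctx, name)
        except rbd.ImageNotFound:
            # This is what produces the "rbd image ... does not exist"
            # DEBUG lines before the import completes.
            image = None
        if image is not None:
            image.resize(1073741824)  # flavor root_gb=1 -> 1 GiB
            image.close()
        ioctx.close()
    finally:
        cluster.shutdown()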
Feb 25 07:41:52 np0005629333 nova_compute[244014]: 2026-02-25 12:41:52.892 244018 DEBUG nova.objects.instance [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'migration_context' on Instance uuid b8d9acc4-7912-4d26-bad6-1159f6993361 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:41:52 np0005629333 nova_compute[244014]: 2026-02-25 12:41:52.911 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:41:52 np0005629333 nova_compute[244014]: 2026-02-25 12:41:52.911 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Ensure instance console log exists: /var/lib/nova/instances/b8d9acc4-7912-4d26-bad6-1159f6993361/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:41:52 np0005629333 nova_compute[244014]: 2026-02-25 12:41:52.911 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:52 np0005629333 nova_compute[244014]: 2026-02-25 12:41:52.912 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:52 np0005629333 nova_compute[244014]: 2026-02-25 12:41:52.912 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:41:53 np0005629333 nova_compute[244014]: 2026-02-25 12:41:53.761 244018 DEBUG nova.network.neutron [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Successfully updated port: 6178078f-c3e6-4ed1-86fa-d99671618a75 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:41:53 np0005629333 nova_compute[244014]: 2026-02-25 12:41:53.778 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "refresh_cache-b8d9acc4-7912-4d26-bad6-1159f6993361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:41:53 np0005629333 nova_compute[244014]: 2026-02-25 12:41:53.779 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquired lock "refresh_cache-b8d9acc4-7912-4d26-bad6-1159f6993361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:41:53 np0005629333 nova_compute[244014]: 2026-02-25 12:41:53.779 244018 DEBUG nova.network.neutron [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:41:53 np0005629333 nova_compute[244014]: 2026-02-25 12:41:53.847 244018 DEBUG nova.compute.manager [req-e55e24c8-28ed-4309-ab34-9a565ddda9dc req-33f3ee93-8b30-44e0-adda-7419c9e85e04 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Received event network-changed-6178078f-c3e6-4ed1-86fa-d99671618a75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:41:53 np0005629333 nova_compute[244014]: 2026-02-25 12:41:53.847 244018 DEBUG nova.compute.manager [req-e55e24c8-28ed-4309-ab34-9a565ddda9dc req-33f3ee93-8b30-44e0-adda-7419c9e85e04 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Refreshing instance network info cache due to event network-changed-6178078f-c3e6-4ed1-86fa-d99671618a75. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:41:53 np0005629333 nova_compute[244014]: 2026-02-25 12:41:53.847 244018 DEBUG oslo_concurrency.lockutils [req-e55e24c8-28ed-4309-ab34-9a565ddda9dc req-33f3ee93-8b30-44e0-adda-7419c9e85e04 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-b8d9acc4-7912-4d26-bad6-1159f6993361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:41:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 305 active+clean; 375 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 368 KiB/s rd, 2.4 MiB/s wr, 83 op/s
Feb 25 07:41:54 np0005629333 nova_compute[244014]: 2026-02-25 12:41:54.595 244018 DEBUG nova.network.neutron [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:41:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:55.020 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:55.021 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:41:55.021 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.068 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.255 244018 DEBUG nova.compute.manager [req-80c26e69-5868-4891-98bf-79c102783b08 req-bdc1e758-59b8-4e39-be72-3ea3f599f019 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Received event network-vif-plugged-ee68d04f-36ba-4727-ab4b-c31e559353e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.256 244018 DEBUG oslo_concurrency.lockutils [req-80c26e69-5868-4891-98bf-79c102783b08 req-bdc1e758-59b8-4e39-be72-3ea3f599f019 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "227efbfe-da43-423a-8652-9636ecded4cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.257 244018 DEBUG oslo_concurrency.lockutils [req-80c26e69-5868-4891-98bf-79c102783b08 req-bdc1e758-59b8-4e39-be72-3ea3f599f019 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.257 244018 DEBUG oslo_concurrency.lockutils [req-80c26e69-5868-4891-98bf-79c102783b08 req-bdc1e758-59b8-4e39-be72-3ea3f599f019 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.257 244018 DEBUG nova.compute.manager [req-80c26e69-5868-4891-98bf-79c102783b08 req-bdc1e758-59b8-4e39-be72-3ea3f599f019 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Processing event network-vif-plugged-ee68d04f-36ba-4727-ab4b-c31e559353e0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.258 244018 DEBUG nova.compute.manager [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.262 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023315.2619576, 227efbfe-da43-423a-8652-9636ecded4cd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.262 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.265 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.269 244018 INFO nova.virt.libvirt.driver [-] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Instance spawned successfully.#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.270 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.309 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.315 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.315 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.316 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.316 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.317 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.318 244018 DEBUG nova.virt.libvirt.driver [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
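[editor note] The six "Found default for ..." lines record the driver pinning the buses it actually chose onto the instance, so later migrations keep the same virtual hardware even if configuration defaults change. Collected as a dict, with an illustrative stand-in for the instance's system metadata (the image_* key prefix follows the convention visible in the system_metadata dump further below):

    defaults = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }

    system_metadata = {}  # stand-in for instance.system_metadata
    for prop, value in defaults.items():
        system_metadata['image_' + prop] = value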
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.323 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.364 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.366 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.385 244018 INFO nova.compute.manager [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Took 12.15 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.385 244018 DEBUG nova.compute.manager [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.468 244018 INFO nova.compute.manager [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Took 13.28 seconds to build instance.#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.471 244018 DEBUG nova.network.neutron [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Updating instance_info_cache with network_info: [{"id": "6178078f-c3e6-4ed1-86fa-d99671618a75", "address": "fa:16:3e:e2:bd:b3", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6178078f-c3", "ovs_interfaceid": "6178078f-c3e6-4ed1-86fa-d99671618a75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.495 244018 DEBUG oslo_concurrency.lockutils [None req-8ef272ee-3330-4a73-bcf9-984f3e6bbc3b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.383s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.496 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Releasing lock "refresh_cache-b8d9acc4-7912-4d26-bad6-1159f6993361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.496 244018 DEBUG nova.compute.manager [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Instance network_info: |[{"id": "6178078f-c3e6-4ed1-86fa-d99671618a75", "address": "fa:16:3e:e2:bd:b3", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6178078f-c3", "ovs_interfaceid": "6178078f-c3e6-4ed1-86fa-d99671618a75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.496 244018 DEBUG oslo_concurrency.lockutils [req-e55e24c8-28ed-4309-ab34-9a565ddda9dc req-33f3ee93-8b30-44e0-adda-7419c9e85e04 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-b8d9acc4-7912-4d26-bad6-1159f6993361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.497 244018 DEBUG nova.network.neutron [req-e55e24c8-28ed-4309-ab34-9a565ddda9dc req-33f3ee93-8b30-44e0-adda-7419c9e85e04 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Refreshing network info cache for port 6178078f-c3e6-4ed1-86fa-d99671618a75 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.499 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Start _get_guest_xml network_info=[{"id": "6178078f-c3e6-4ed1-86fa-d99671618a75", "address": "fa:16:3e:e2:bd:b3", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6178078f-c3", "ovs_interfaceid": "6178078f-c3e6-4ed1-86fa-d99671618a75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.503 244018 WARNING nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.508 244018 DEBUG nova.virt.libvirt.host [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.509 244018 DEBUG nova.virt.libvirt.host [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.515 244018 DEBUG nova.virt.libvirt.host [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.515 244018 DEBUG nova.virt.libvirt.host [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.516 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.516 244018 DEBUG nova.virt.hardware [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.516 244018 DEBUG nova.virt.hardware [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.516 244018 DEBUG nova.virt.hardware [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.517 244018 DEBUG nova.virt.hardware [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.517 244018 DEBUG nova.virt.hardware [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.517 244018 DEBUG nova.virt.hardware [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.517 244018 DEBUG nova.virt.hardware [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.517 244018 DEBUG nova.virt.hardware [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.517 244018 DEBUG nova.virt.hardware [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.518 244018 DEBUG nova.virt.hardware [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.518 244018 DEBUG nova.virt.hardware [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
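[editor note] With no flavor or image constraints (all preferences 0:0:0 and the caps at 65536), hardware.py enumerates every sockets x cores x threads factorization of the vCPU count; for one vCPU that is exactly one topology, 1:1:1, as logged. A simplified version of that enumeration (nova's real code also orders results by preference; this sketch only reproduces the factorization step):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield (sockets, cores, threads) triples whose product is vcpus."""
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            for cores in range(1, min(vcpus // sockets, max_cores) + 1):
                if (vcpus // sockets) % cores:
                    continue
                threads = vcpus // sockets // cores
                if threads <= max_threads:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]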
Feb 25 07:41:55 np0005629333 nova_compute[244014]: 2026-02-25 12:41:55.521 244018 DEBUG oslo_concurrency.processutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:41:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1720281530' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.210 244018 DEBUG oslo_concurrency.processutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.689s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.230 244018 DEBUG nova.storage.rbd_utils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image b8d9acc4-7912-4d26-bad6-1159f6993361_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.235 244018 DEBUG oslo_concurrency.processutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 375 MiB data, 963 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 2.4 MiB/s wr, 71 op/s
Feb 25 07:41:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:41:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2872760037' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.768 244018 DEBUG oslo_concurrency.processutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
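[editor note] Each forked "ceph mon dump --format=json" above costs 0.5-0.7 s. The ceph-mon audit lines show these arrive as mon_command dispatches, and the same monitor map can be fetched in-process through librados, which is one way to avoid the fork; a sketch, assuming the same client id and conf file:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        # Same JSON command the audit log shows being dispatched.
        cmd = json.dumps({'prefix': 'mon dump', 'format': 'json'})
        ret, outbuf, outs = cluster.mon_command(cmd, b'')
        monmap = json.loads(outbuf)
        print(ret, [m['name'] for m in monmap['mons']])
    finally:
        cluster.shutdown()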
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.771 244018 DEBUG nova.virt.libvirt.vif [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:41:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1506093231',display_name='tempest-ServersTestJSON-server-1506093231',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1506093231',id=107,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-p42ovzbe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:41:51Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=b8d9acc4-7912-4d26-bad6-1159f6993361,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6178078f-c3e6-4ed1-86fa-d99671618a75", "address": "fa:16:3e:e2:bd:b3", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6178078f-c3", "ovs_interfaceid": "6178078f-c3e6-4ed1-86fa-d99671618a75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.772 244018 DEBUG nova.network.os_vif_util [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "6178078f-c3e6-4ed1-86fa-d99671618a75", "address": "fa:16:3e:e2:bd:b3", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6178078f-c3", "ovs_interfaceid": "6178078f-c3e6-4ed1-86fa-d99671618a75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.773 244018 DEBUG nova.network.os_vif_util [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:bd:b3,bridge_name='br-int',has_traffic_filtering=True,id=6178078f-c3e6-4ed1-86fa-d99671618a75,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6178078f-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.774 244018 DEBUG nova.objects.instance [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'pci_devices' on Instance uuid b8d9acc4-7912-4d26-bad6-1159f6993361 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.794 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:41:56 np0005629333 nova_compute[244014]:  <uuid>b8d9acc4-7912-4d26-bad6-1159f6993361</uuid>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:  <name>instance-0000006b</name>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <nova:name>tempest-ServersTestJSON-server-1506093231</nova:name>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:41:55</nova:creationTime>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:41:56 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:        <nova:user uuid="4c5bc24b5f5048469cf3f701ce511bfa">tempest-ServersTestJSON-1472039551-project-member</nova:user>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:        <nova:project uuid="503e879cd1f44a16b9baef106ceba949">tempest-ServersTestJSON-1472039551</nova:project>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:        <nova:port uuid="6178078f-c3e6-4ed1-86fa-d99671618a75">
Feb 25 07:41:56 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <entry name="serial">b8d9acc4-7912-4d26-bad6-1159f6993361</entry>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <entry name="uuid">b8d9acc4-7912-4d26-bad6-1159f6993361</entry>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/b8d9acc4-7912-4d26-bad6-1159f6993361_disk">
Feb 25 07:41:56 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:41:56 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/b8d9acc4-7912-4d26-bad6-1159f6993361_disk.config">
Feb 25 07:41:56 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:41:56 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:e2:bd:b3"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <target dev="tap6178078f-c3"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/b8d9acc4-7912-4d26-bad6-1159f6993361/console.log" append="off"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:41:56 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:41:56 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:41:56 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:41:56 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
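The <domain> document above is exactly what _get_guest_xml hands to libvirt. A quick stdlib-only way to sanity-check such XML outside Nova (xml_text stands for the block logged above):

    import xml.etree.ElementTree as ET

    dom = ET.fromstring(xml_text)
    assert dom.get('type') == 'kvm'
    print(dom.findtext('name'))      # instance-0000006b
    print(dom.findtext('memory'))    # 131072; libvirt defaults to KiB, so 128 MiB
    for disk in dom.findall('./devices/disk'):
        src = disk.find('source')
        print(disk.get('device'), src.get('protocol'), src.get('name'))

Note that both RBD-backed devices (the virtio root disk and the sata config-drive cdrom) point at the same Ceph monitor and cephx secret.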
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.796 244018 DEBUG nova.compute.manager [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Preparing to wait for external event network-vif-plugged-6178078f-c3e6-4ed1-86fa-d99671618a75 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.796 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.796 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.797 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
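The Acquiring/acquired/released triple above is the standard oslo.concurrency trace for a per-instance lock around event bookkeeping; the "waited" and "held" durations come from lockutils itself. The usual way to get this behavior in your own code is the synchronized decorator (a generic sketch, not Nova's literal code):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('demo-instance-events')
    def create_or_get_event():
        # critical section; lockutils emits the acquire/release lines at DEBUG
        print('holding the lock')

    create_or_get_event()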
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.797 244018 DEBUG nova.virt.libvirt.vif [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:41:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1506093231',display_name='tempest-ServersTestJSON-server-1506093231',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1506093231',id=107,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-p42ovzbe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:41:51Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=b8d9acc4-7912-4d26-bad6-1159f6993361,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6178078f-c3e6-4ed1-86fa-d99671618a75", "address": "fa:16:3e:e2:bd:b3", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6178078f-c3", "ovs_interfaceid": "6178078f-c3e6-4ed1-86fa-d99671618a75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.798 244018 DEBUG nova.network.os_vif_util [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "6178078f-c3e6-4ed1-86fa-d99671618a75", "address": "fa:16:3e:e2:bd:b3", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6178078f-c3", "ovs_interfaceid": "6178078f-c3e6-4ed1-86fa-d99671618a75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.799 244018 DEBUG nova.network.os_vif_util [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:bd:b3,bridge_name='br-int',has_traffic_filtering=True,id=6178078f-c3e6-4ed1-86fa-d99671618a75,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6178078f-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.799 244018 DEBUG os_vif [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:bd:b3,bridge_name='br-int',has_traffic_filtering=True,id=6178078f-c3e6-4ed1-86fa-d99671618a75,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6178078f-c3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
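os_vif.plug() dispatches to the 'ovs' plugin named in the VIF object. Driving the same entry point outside Nova looks roughly like this (ovs_vif as in the earlier sketch; InstanceInfo fields from os_vif.objects):

    import os_vif
    from os_vif.objects import instance_info

    os_vif.initialize()                 # loads the plugin entry points
    info = instance_info.InstanceInfo(
        uuid='b8d9acc4-7912-4d26-bad6-1159f6993361',
        name='instance-0000006b')
    os_vif.plug(ovs_vif, info)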
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.800 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.800 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.801 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.804 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.804 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6178078f-c3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.804 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6178078f-c3, col_values=(('external_ids', {'iface-id': '6178078f-c3e6-4ed1-86fa-d99671618a75', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:bd:b3', 'vm-uuid': 'b8d9acc4-7912-4d26-bad6-1159f6993361'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
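The two ovsdbapp transactions above are the ovs plugin ensuring br-int exists (a no-op here) and then adding the tap port with the Neutron port UUID as external_ids:iface-id; that key is what ovn-controller matches to claim the port a few seconds later. An equivalent standalone ovsdbapp sketch (the socket path is an assumption):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    conn = connection.Connection(
        idl=connection.OvsdbIdl.from_server(
            'unix:/run/openvswitch/db.sock', 'Open_vSwitch'),
        timeout=10)
    api = impl_idl.OvsdbIdl(conn)
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap6178078f-c3', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap6178078f-c3',
            ('external_ids',
             {'iface-id': '6178078f-c3e6-4ed1-86fa-d99671618a75'})))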
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.806 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:56 np0005629333 NetworkManager[49836]: <info>  [1772023316.8078] manager: (tap6178078f-c3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/425)
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.811 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.813 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.814 244018 INFO os_vif [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:bd:b3,bridge_name='br-int',has_traffic_filtering=True,id=6178078f-c3e6-4ed1-86fa-d99671618a75,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6178078f-c3')#033[00m
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.894 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.895 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.895 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] No VIF found with MAC fa:16:3e:e2:bd:b3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:41:56 np0005629333 nova_compute[244014]: 2026-02-25 12:41:56.896 244018 INFO nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Using config drive#033[00m
Feb 25 07:41:57 np0005629333 nova_compute[244014]: 2026-02-25 12:41:57.013 244018 DEBUG nova.storage.rbd_utils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image b8d9acc4-7912-4d26-bad6-1159f6993361_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:57 np0005629333 nova_compute[244014]: 2026-02-25 12:41:57.032 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023301.991494, aeced0b2-f2b3-4012-b740-eaa411f99631 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:41:57 np0005629333 nova_compute[244014]: 2026-02-25 12:41:57.033 244018 INFO nova.compute.manager [-] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:41:57 np0005629333 nova_compute[244014]: 2026-02-25 12:41:57.058 244018 DEBUG nova.compute.manager [None req-d959d633-2ead-4c83-a701-ee9115081cb4 - - - - - -] [instance: aeced0b2-f2b3-4012-b740-eaa411f99631] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:41:57 np0005629333 nova_compute[244014]: 2026-02-25 12:41:57.330 244018 DEBUG nova.compute.manager [req-68688402-04e9-45da-9896-3dfe0d99f49e req-c53e1fc4-da29-4d92-b956-1ccf9c24f167 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Received event network-vif-plugged-ee68d04f-36ba-4727-ab4b-c31e559353e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:41:57 np0005629333 nova_compute[244014]: 2026-02-25 12:41:57.331 244018 DEBUG oslo_concurrency.lockutils [req-68688402-04e9-45da-9896-3dfe0d99f49e req-c53e1fc4-da29-4d92-b956-1ccf9c24f167 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "227efbfe-da43-423a-8652-9636ecded4cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:41:57 np0005629333 nova_compute[244014]: 2026-02-25 12:41:57.331 244018 DEBUG oslo_concurrency.lockutils [req-68688402-04e9-45da-9896-3dfe0d99f49e req-c53e1fc4-da29-4d92-b956-1ccf9c24f167 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:41:57 np0005629333 nova_compute[244014]: 2026-02-25 12:41:57.331 244018 DEBUG oslo_concurrency.lockutils [req-68688402-04e9-45da-9896-3dfe0d99f49e req-c53e1fc4-da29-4d92-b956-1ccf9c24f167 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:41:57 np0005629333 nova_compute[244014]: 2026-02-25 12:41:57.331 244018 DEBUG nova.compute.manager [req-68688402-04e9-45da-9896-3dfe0d99f49e req-c53e1fc4-da29-4d92-b956-1ccf9c24f167 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] No waiting events found dispatching network-vif-plugged-ee68d04f-36ba-4727-ab4b-c31e559353e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:41:57 np0005629333 nova_compute[244014]: 2026-02-25 12:41:57.332 244018 WARNING nova.compute.manager [req-68688402-04e9-45da-9896-3dfe0d99f49e req-c53e1fc4-da29-4d92-b956-1ccf9c24f167 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Received unexpected event network-vif-plugged-ee68d04f-36ba-4727-ab4b-c31e559353e0 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:41:57 np0005629333 nova_compute[244014]: 2026-02-25 12:41:57.639 244018 DEBUG nova.network.neutron [req-e55e24c8-28ed-4309-ab34-9a565ddda9dc req-33f3ee93-8b30-44e0-adda-7419c9e85e04 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Updated VIF entry in instance network info cache for port 6178078f-c3e6-4ed1-86fa-d99671618a75. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:41:57 np0005629333 nova_compute[244014]: 2026-02-25 12:41:57.639 244018 DEBUG nova.network.neutron [req-e55e24c8-28ed-4309-ab34-9a565ddda9dc req-33f3ee93-8b30-44e0-adda-7419c9e85e04 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Updating instance_info_cache with network_info: [{"id": "6178078f-c3e6-4ed1-86fa-d99671618a75", "address": "fa:16:3e:e2:bd:b3", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6178078f-c3", "ovs_interfaceid": "6178078f-c3e6-4ed1-86fa-d99671618a75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:41:57 np0005629333 nova_compute[244014]: 2026-02-25 12:41:57.656 244018 DEBUG oslo_concurrency.lockutils [req-e55e24c8-28ed-4309-ab34-9a565ddda9dc req-33f3ee93-8b30-44e0-adda-7419c9e85e04 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-b8d9acc4-7912-4d26-bad6-1159f6993361" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:41:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 305 active+clean; 405 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 153 op/s
Feb 25 07:41:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:41:59 np0005629333 nova_compute[244014]: 2026-02-25 12:41:59.651 244018 INFO nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Creating config drive at /var/lib/nova/instances/b8d9acc4-7912-4d26-bad6-1159f6993361/disk.config#033[00m
Feb 25 07:41:59 np0005629333 nova_compute[244014]: 2026-02-25 12:41:59.655 244018 DEBUG oslo_concurrency.processutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b8d9acc4-7912-4d26-bad6-1159f6993361/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpd_p86hp6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:41:59 np0005629333 nova_compute[244014]: 2026-02-25 12:41:59.764 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:41:59 np0005629333 nova_compute[244014]: 2026-02-25 12:41:59.788 244018 DEBUG oslo_concurrency.processutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b8d9acc4-7912-4d26-bad6-1159f6993361/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpd_p86hp6" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
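The config drive is built as an ISO9660/Joliet image with the well-known volume label config-2; the mkisofs run above completes in 0.132s. Reproducing that call with oslo.concurrency's processutils, flags as logged (paths here are placeholders):

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        '/usr/bin/mkisofs', '-o', '/tmp/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute', '-quiet', '-J', '-r',
        '-V', 'config-2', '/tmp/config_drive_contents')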
Feb 25 07:41:59 np0005629333 nova_compute[244014]: 2026-02-25 12:41:59.811 244018 DEBUG nova.storage.rbd_utils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] rbd image b8d9acc4-7912-4d26-bad6-1159f6993361_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:41:59 np0005629333 nova_compute[244014]: 2026-02-25 12:41:59.814 244018 DEBUG oslo_concurrency.processutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b8d9acc4-7912-4d26-bad6-1159f6993361/disk.config b8d9acc4-7912-4d26-bad6-1159f6993361_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:42:00 np0005629333 nova_compute[244014]: 2026-02-25 12:42:00.070 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:00 np0005629333 nova_compute[244014]: 2026-02-25 12:42:00.238 244018 DEBUG oslo_concurrency.processutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b8d9acc4-7912-4d26-bad6-1159f6993361/disk.config b8d9acc4-7912-4d26-bad6-1159f6993361_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:42:00 np0005629333 nova_compute[244014]: 2026-02-25 12:42:00.239 244018 INFO nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Deleting local config drive /var/lib/nova/instances/b8d9acc4-7912-4d26-bad6-1159f6993361/disk.config because it was imported into RBD.#033[00m
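Because this host uses the RBD image backend, the freshly built ISO is imported into the vms pool and the local copy deleted; the guest then reads it through the network cdrom device from the domain XML. The import step, mirrored as a sketch (local path is a placeholder):

    from oslo_concurrency import processutils

    processutils.execute(
        'rbd', 'import', '--pool', 'vms',
        '/tmp/disk.config',
        'b8d9acc4-7912-4d26-bad6-1159f6993361_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')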
Feb 25 07:42:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 305 active+clean; 405 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 07:42:00 np0005629333 kernel: tap6178078f-c3: entered promiscuous mode
Feb 25 07:42:00 np0005629333 NetworkManager[49836]: <info>  [1772023320.2795] manager: (tap6178078f-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/426)
Feb 25 07:42:00 np0005629333 nova_compute[244014]: 2026-02-25 12:42:00.282 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:00 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:00Z|01026|binding|INFO|Claiming lport 6178078f-c3e6-4ed1-86fa-d99671618a75 for this chassis.
Feb 25 07:42:00 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:00Z|01027|binding|INFO|6178078f-c3e6-4ed1-86fa-d99671618a75: Claiming fa:16:3e:e2:bd:b3 10.100.0.6
Feb 25 07:42:00 np0005629333 nova_compute[244014]: 2026-02-25 12:42:00.287 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:00 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:00Z|01028|binding|INFO|Setting lport 6178078f-c3e6-4ed1-86fa-d99671618a75 ovn-installed in OVS
Feb 25 07:42:00 np0005629333 nova_compute[244014]: 2026-02-25 12:42:00.293 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:00 np0005629333 systemd-udevd[337598]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:42:00 np0005629333 NetworkManager[49836]: <info>  [1772023320.3210] device (tap6178078f-c3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:42:00 np0005629333 NetworkManager[49836]: <info>  [1772023320.3214] device (tap6178078f-c3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:42:00 np0005629333 systemd-machined[210048]: New machine qemu-134-instance-0000006b.
Feb 25 07:42:00 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:00Z|01029|binding|INFO|Setting lport 6178078f-c3e6-4ed1-86fa-d99671618a75 up in Southbound
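At this point ovn-controller has matched the tap interface's external_ids:iface-id against the Southbound Port_Binding, claimed the lport for this chassis, and marked it up. A hypothetical operator-side check using ovn-sbctl's generic database commands:

    import subprocess

    print(subprocess.check_output(
        ['ovn-sbctl', '--bare', '--columns=chassis,up',
         'find', 'Port_Binding',
         'logical_port=6178078f-c3e6-4ed1-86fa-d99671618a75']).decode())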
Feb 25 07:42:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:00.346 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:bd:b3 10.100.0.6'], port_security=['fa:16:3e:e2:bd:b3 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b8d9acc4-7912-4d26-bad6-1159f6993361', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec8bae53-fe6a-49d1-a733-f00c198be561', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503e879cd1f44a16b9baef106ceba949', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3bf34285-1a67-4c95-bb68-fd577a012f6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18f4e8da-4409-4095-9850-aaee82dd8fd1, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6178078f-c3e6-4ed1-86fa-d99671618a75) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:42:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:00.347 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6178078f-c3e6-4ed1-86fa-d99671618a75 in datapath ec8bae53-fe6a-49d1-a733-f00c198be561 bound to our chassis#033[00m
Feb 25 07:42:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:00.349 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec8bae53-fe6a-49d1-a733-f00c198be561#033[00m
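"Provisioning metadata" here means the agent sets up the per-network ovnmeta- namespace and its proxy listener; the privsep replies that follow are that namespace and address plumbing (note the target 'ovnmeta-ec8bae53-...' and the 169.254.169.254 address below). Once the guest boots, the result is reachable from inside it with nothing more than the stdlib (guest-side sketch):

    import json
    import urllib.request

    with urllib.request.urlopen(
            'http://169.254.169.254/openstack/latest/meta_data.json',
            timeout=5) as resp:
        print(json.load(resp)['uuid'])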
Feb 25 07:42:00 np0005629333 systemd[1]: Started Virtual Machine qemu-134-instance-0000006b.
Feb 25 07:42:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:00.363 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d72b92-dad0-485f-adda-c527fa84e984]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:00.394 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c3e3d3f6-0edc-4e1c-b217-e9c9f805922c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:00.398 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[26bede8b-2971-4c4e-aac9-bedfa6206e17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:00.418 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3a519f31-7f7a-4d54-897d-e99aecce69d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:00.433 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f96975bb-3979-4cb6-adc3-1329bdc9bd80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec8bae53-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a5:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 25, 'rx_bytes': 700, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 25, 'rx_bytes': 700, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516532, 'reachable_time': 18880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337615, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
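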
Feb 25 07:42:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:00.447 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[25b7bb1e-13d2-478c-a462-f2b9e626edbf]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516546, 'tstamp': 516546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337616, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516549, 'tstamp': 516549}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337616, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:00.448 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8bae53-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:42:00 np0005629333 nova_compute[244014]: 2026-02-25 12:42:00.450 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:00 np0005629333 nova_compute[244014]: 2026-02-25 12:42:00.451 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:00.451 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec8bae53-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:42:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:00.451 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:42:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:00.452 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec8bae53-f0, col_values=(('external_ids', {'iface-id': 'e2d1eadf-baf7-4e5c-8052-6c64e8476a26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:42:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:00.452 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:42:00 np0005629333 nova_compute[244014]: 2026-02-25 12:42:00.951 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023320.9506202, b8d9acc4-7912-4d26-bad6-1159f6993361 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:42:00 np0005629333 nova_compute[244014]: 2026-02-25 12:42:00.951 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] VM Started (Lifecycle Event)#033[00m
Feb 25 07:42:00 np0005629333 nova_compute[244014]: 2026-02-25 12:42:00.971 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:42:00 np0005629333 nova_compute[244014]: 2026-02-25 12:42:00.975 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023320.953524, b8d9acc4-7912-4d26-bad6-1159f6993361 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:42:00 np0005629333 nova_compute[244014]: 2026-02-25 12:42:00.975 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:42:01 np0005629333 nova_compute[244014]: 2026-02-25 12:42:01.007 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:42:01 np0005629333 nova_compute[244014]: 2026-02-25 12:42:01.010 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:42:01 np0005629333 nova_compute[244014]: 2026-02-25 12:42:01.039 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
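The "DB power_state: 0, VM power_state: 3" pair above decodes via nova.compute.power_state; since the task_state is still 'spawning', the manager deliberately skips the sync rather than flag a paused guest. The relevant constants, for reference:

    # per nova/compute/power_state.py
    POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                    4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}
    print(POWER_STATES[3])   # PAUSED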
Feb 25 07:42:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:42:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:42:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:42:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:42:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:42:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:42:01 np0005629333 nova_compute[244014]: 2026-02-25 12:42:01.807 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 405 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Feb 25 07:42:02 np0005629333 nova_compute[244014]: 2026-02-25 12:42:02.989 244018 DEBUG nova.compute.manager [req-54ff6bd5-a911-47d0-a019-89b24e5b95f1 req-a9736538-adca-481e-ab68-9372923d8064 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Received event network-changed-ee68d04f-36ba-4727-ab4b-c31e559353e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:42:02 np0005629333 nova_compute[244014]: 2026-02-25 12:42:02.989 244018 DEBUG nova.compute.manager [req-54ff6bd5-a911-47d0-a019-89b24e5b95f1 req-a9736538-adca-481e-ab68-9372923d8064 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Refreshing instance network info cache due to event network-changed-ee68d04f-36ba-4727-ab4b-c31e559353e0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:42:02 np0005629333 nova_compute[244014]: 2026-02-25 12:42:02.990 244018 DEBUG oslo_concurrency.lockutils [req-54ff6bd5-a911-47d0-a019-89b24e5b95f1 req-a9736538-adca-481e-ab68-9372923d8064 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-227efbfe-da43-423a-8652-9636ecded4cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:42:02 np0005629333 nova_compute[244014]: 2026-02-25 12:42:02.990 244018 DEBUG oslo_concurrency.lockutils [req-54ff6bd5-a911-47d0-a019-89b24e5b95f1 req-a9736538-adca-481e-ab68-9372923d8064 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-227efbfe-da43-423a-8652-9636ecded4cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:42:02 np0005629333 nova_compute[244014]: 2026-02-25 12:42:02.990 244018 DEBUG nova.network.neutron [req-54ff6bd5-a911-47d0-a019-89b24e5b95f1 req-a9736538-adca-481e-ab68-9372923d8064 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Refreshing network info cache for port ee68d04f-36ba-4727-ab4b-c31e559353e0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:42:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:42:03 np0005629333 nova_compute[244014]: 2026-02-25 12:42:03.980 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "11f1a7e0-6001-4367-8491-5b5508f56bdb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:42:03 np0005629333 nova_compute[244014]: 2026-02-25 12:42:03.981 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:42:03 np0005629333 nova_compute[244014]: 2026-02-25 12:42:03.994 244018 DEBUG nova.compute.manager [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:42:04 np0005629333 nova_compute[244014]: 2026-02-25 12:42:04.065 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:42:04 np0005629333 nova_compute[244014]: 2026-02-25 12:42:04.065 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:42:04 np0005629333 nova_compute[244014]: 2026-02-25 12:42:04.073 244018 DEBUG nova.virt.hardware [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:42:04 np0005629333 nova_compute[244014]: 2026-02-25 12:42:04.073 244018 INFO nova.compute.claims [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:42:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 405 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Feb 25 07:42:04 np0005629333 nova_compute[244014]: 2026-02-25 12:42:04.406 244018 DEBUG oslo_concurrency.processutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:42:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:42:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2124750038' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.032 244018 DEBUG oslo_concurrency.processutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.039 244018 DEBUG nova.compute.provider_tree [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.067 244018 DEBUG nova.scheduler.client.report [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.072 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.095 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.029s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.096 244018 DEBUG nova.compute.manager [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.169 244018 DEBUG nova.compute.manager [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.169 244018 DEBUG nova.network.neutron [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.204 244018 INFO nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.238 244018 DEBUG nova.compute.manager [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.320 244018 DEBUG nova.compute.manager [req-bf3948cf-e52f-46db-ae7f-4c17c8fb3554 req-5db2cca5-f876-408f-9bd0-adc88439b614 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Received event network-vif-plugged-6178078f-c3e6-4ed1-86fa-d99671618a75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.321 244018 DEBUG oslo_concurrency.lockutils [req-bf3948cf-e52f-46db-ae7f-4c17c8fb3554 req-5db2cca5-f876-408f-9bd0-adc88439b614 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.321 244018 DEBUG oslo_concurrency.lockutils [req-bf3948cf-e52f-46db-ae7f-4c17c8fb3554 req-5db2cca5-f876-408f-9bd0-adc88439b614 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.322 244018 DEBUG oslo_concurrency.lockutils [req-bf3948cf-e52f-46db-ae7f-4c17c8fb3554 req-5db2cca5-f876-408f-9bd0-adc88439b614 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.322 244018 DEBUG nova.compute.manager [req-bf3948cf-e52f-46db-ae7f-4c17c8fb3554 req-5db2cca5-f876-408f-9bd0-adc88439b614 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Processing event network-vif-plugged-6178078f-c3e6-4ed1-86fa-d99671618a75 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.323 244018 DEBUG nova.compute.manager [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.326 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023325.3260763, b8d9acc4-7912-4d26-bad6-1159f6993361 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.327 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] VM Resumed (Lifecycle Event)
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.329 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.333 244018 INFO nova.virt.libvirt.driver [-] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Instance spawned successfully.
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.334 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.372 244018 DEBUG nova.compute.manager [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.374 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.374 244018 INFO nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Creating image(s)
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.396 244018 DEBUG nova.storage.rbd_utils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image 11f1a7e0-6001-4367-8491-5b5508f56bdb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.421 244018 DEBUG nova.storage.rbd_utils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image 11f1a7e0-6001-4367-8491-5b5508f56bdb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.441 244018 DEBUG nova.storage.rbd_utils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image 11f1a7e0-6001-4367-8491-5b5508f56bdb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.445 244018 DEBUG oslo_concurrency.processutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.475 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.481 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.481 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.482 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.482 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.482 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.483 244018 DEBUG nova.virt.libvirt.driver [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.486 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.511 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.517 244018 DEBUG oslo_concurrency.processutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.518 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.518 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.519 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.535 244018 DEBUG nova.storage.rbd_utils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image 11f1a7e0-6001-4367-8491-5b5508f56bdb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.537 244018 DEBUG oslo_concurrency.processutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 11f1a7e0-6001-4367-8491-5b5508f56bdb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.561 244018 DEBUG nova.policy [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb37a481eb114226822ed8b2ef4f9a89', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.568 244018 INFO nova.compute.manager [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Took 13.77 seconds to spawn the instance on the hypervisor.
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.568 244018 DEBUG nova.compute.manager [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.631 244018 INFO nova.compute.manager [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Took 15.02 seconds to build instance.
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.658 244018 DEBUG oslo_concurrency.lockutils [None req-b3fbe3da-05fc-46e3-9ca6-95ae75b5e61a 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.978 244018 DEBUG nova.network.neutron [req-54ff6bd5-a911-47d0-a019-89b24e5b95f1 req-a9736538-adca-481e-ab68-9372923d8064 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Updated VIF entry in instance network info cache for port ee68d04f-36ba-4727-ab4b-c31e559353e0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 07:42:05 np0005629333 nova_compute[244014]: 2026-02-25 12:42:05.978 244018 DEBUG nova.network.neutron [req-54ff6bd5-a911-47d0-a019-89b24e5b95f1 req-a9736538-adca-481e-ab68-9372923d8064 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Updating instance_info_cache with network_info: [{"id": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "address": "fa:16:3e:b7:74:9a", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:749a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee68d04f-36", "ovs_interfaceid": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:42:06 np0005629333 nova_compute[244014]: 2026-02-25 12:42:06.001 244018 DEBUG oslo_concurrency.lockutils [req-54ff6bd5-a911-47d0-a019-89b24e5b95f1 req-a9736538-adca-481e-ab68-9372923d8064 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-227efbfe-da43-423a-8652-9636ecded4cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:42:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 405 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 92 op/s
Feb 25 07:42:06 np0005629333 nova_compute[244014]: 2026-02-25 12:42:06.809 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:42:07 np0005629333 nova_compute[244014]: 2026-02-25 12:42:07.141 244018 DEBUG nova.network.neutron [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Successfully created port: 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:42:07 np0005629333 nova_compute[244014]: 2026-02-25 12:42:07.422 244018 DEBUG nova.compute.manager [req-e1b56c9c-3f72-4dd9-8fcf-e95dcbd5a785 req-7823f80a-30cc-4ab5-bd90-fdf7bbdf3f2b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Received event network-vif-plugged-6178078f-c3e6-4ed1-86fa-d99671618a75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:42:07 np0005629333 nova_compute[244014]: 2026-02-25 12:42:07.423 244018 DEBUG oslo_concurrency.lockutils [req-e1b56c9c-3f72-4dd9-8fcf-e95dcbd5a785 req-7823f80a-30cc-4ab5-bd90-fdf7bbdf3f2b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:42:07 np0005629333 nova_compute[244014]: 2026-02-25 12:42:07.423 244018 DEBUG oslo_concurrency.lockutils [req-e1b56c9c-3f72-4dd9-8fcf-e95dcbd5a785 req-7823f80a-30cc-4ab5-bd90-fdf7bbdf3f2b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:42:07 np0005629333 nova_compute[244014]: 2026-02-25 12:42:07.423 244018 DEBUG oslo_concurrency.lockutils [req-e1b56c9c-3f72-4dd9-8fcf-e95dcbd5a785 req-7823f80a-30cc-4ab5-bd90-fdf7bbdf3f2b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:42:07 np0005629333 nova_compute[244014]: 2026-02-25 12:42:07.424 244018 DEBUG nova.compute.manager [req-e1b56c9c-3f72-4dd9-8fcf-e95dcbd5a785 req-7823f80a-30cc-4ab5-bd90-fdf7bbdf3f2b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] No waiting events found dispatching network-vif-plugged-6178078f-c3e6-4ed1-86fa-d99671618a75 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:42:07 np0005629333 nova_compute[244014]: 2026-02-25 12:42:07.424 244018 WARNING nova.compute.manager [req-e1b56c9c-3f72-4dd9-8fcf-e95dcbd5a785 req-7823f80a-30cc-4ab5-bd90-fdf7bbdf3f2b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Received unexpected event network-vif-plugged-6178078f-c3e6-4ed1-86fa-d99671618a75 for instance with vm_state active and task_state None.
Feb 25 07:42:07 np0005629333 nova_compute[244014]: 2026-02-25 12:42:07.462 244018 DEBUG oslo_concurrency.processutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 11f1a7e0-6001-4367-8491-5b5508f56bdb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.925s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:42:07 np0005629333 nova_compute[244014]: 2026-02-25 12:42:07.509 244018 DEBUG nova.storage.rbd_utils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] resizing rbd image 11f1a7e0-6001-4367-8491-5b5508f56bdb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 07:42:07 np0005629333 nova_compute[244014]: 2026-02-25 12:42:07.948 244018 DEBUG nova.objects.instance [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'migration_context' on Instance uuid 11f1a7e0-6001-4367-8491-5b5508f56bdb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:42:07 np0005629333 nova_compute[244014]: 2026-02-25 12:42:07.976 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:42:07 np0005629333 nova_compute[244014]: 2026-02-25 12:42:07.977 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Ensure instance console log exists: /var/lib/nova/instances/11f1a7e0-6001-4367-8491-5b5508f56bdb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:42:07 np0005629333 nova_compute[244014]: 2026-02-25 12:42:07.977 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:42:07 np0005629333 nova_compute[244014]: 2026-02-25 12:42:07.978 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:42:07 np0005629333 nova_compute[244014]: 2026-02-25 12:42:07.978 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:42:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 460 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.2 MiB/s wr, 198 op/s
Feb 25 07:42:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:42:08 np0005629333 nova_compute[244014]: 2026-02-25 12:42:08.654 244018 DEBUG oslo_concurrency.lockutils [None req-5b47d73b-67aa-4c66-8a16-46e01e8f20e1 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "b8d9acc4-7912-4d26-bad6-1159f6993361" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:42:08 np0005629333 nova_compute[244014]: 2026-02-25 12:42:08.654 244018 DEBUG oslo_concurrency.lockutils [None req-5b47d73b-67aa-4c66-8a16-46e01e8f20e1 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:42:08 np0005629333 nova_compute[244014]: 2026-02-25 12:42:08.655 244018 DEBUG nova.compute.manager [None req-5b47d73b-67aa-4c66-8a16-46e01e8f20e1 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:42:08 np0005629333 nova_compute[244014]: 2026-02-25 12:42:08.660 244018 DEBUG nova.compute.manager [None req-5b47d73b-67aa-4c66-8a16-46e01e8f20e1 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Feb 25 07:42:08 np0005629333 nova_compute[244014]: 2026-02-25 12:42:08.661 244018 DEBUG nova.objects.instance [None req-5b47d73b-67aa-4c66-8a16-46e01e8f20e1 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'flavor' on Instance uuid b8d9acc4-7912-4d26-bad6-1159f6993361 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:42:08 np0005629333 nova_compute[244014]: 2026-02-25 12:42:08.686 244018 DEBUG nova.virt.libvirt.driver [None req-5b47d73b-67aa-4c66-8a16-46e01e8f20e1 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Feb 25 07:42:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:08Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b7:74:9a 10.100.0.8
Feb 25 07:42:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:08Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:74:9a 10.100.0.8
Feb 25 07:42:09 np0005629333 nova_compute[244014]: 2026-02-25 12:42:09.041 244018 DEBUG nova.network.neutron [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Successfully updated port: 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:42:09 np0005629333 nova_compute[244014]: 2026-02-25 12:42:09.231 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:42:09 np0005629333 nova_compute[244014]: 2026-02-25 12:42:09.231 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquired lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:42:09 np0005629333 nova_compute[244014]: 2026-02-25 12:42:09.232 244018 DEBUG nova.network.neutron [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:42:09 np0005629333 nova_compute[244014]: 2026-02-25 12:42:09.452 244018 DEBUG nova.network.neutron [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:42:09 np0005629333 nova_compute[244014]: 2026-02-25 12:42:09.519 244018 DEBUG nova.compute.manager [req-4a7940ee-7ced-4519-aeb0-c168968e75f0 req-92ae13c0-3f4e-4d5e-8ef5-6398a5cebe37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received event network-changed-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:42:09 np0005629333 nova_compute[244014]: 2026-02-25 12:42:09.519 244018 DEBUG nova.compute.manager [req-4a7940ee-7ced-4519-aeb0-c168968e75f0 req-92ae13c0-3f4e-4d5e-8ef5-6398a5cebe37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Refreshing instance network info cache due to event network-changed-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:42:09 np0005629333 nova_compute[244014]: 2026-02-25 12:42:09.520 244018 DEBUG oslo_concurrency.lockutils [req-4a7940ee-7ced-4519-aeb0-c168968e75f0 req-92ae13c0-3f4e-4d5e-8ef5-6398a5cebe37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:42:10 np0005629333 nova_compute[244014]: 2026-02-25 12:42:10.074 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:42:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 460 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 115 op/s
Feb 25 07:42:10 np0005629333 nova_compute[244014]: 2026-02-25 12:42:10.353 244018 DEBUG nova.network.neutron [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Updating instance_info_cache with network_info: [{"id": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "address": "fa:16:3e:3e:d0:2e", "network": {"id": "898394ed-19f4-4525-9c0d-05895748de8c", "bridge": "br-int", "label": "tempest-network-smoke--407161761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aff0d14-b6", "ovs_interfaceid": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:42:10 np0005629333 nova_compute[244014]: 2026-02-25 12:42:10.372 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Releasing lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:42:10 np0005629333 nova_compute[244014]: 2026-02-25 12:42:10.372 244018 DEBUG nova.compute.manager [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Instance network_info: |[{"id": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "address": "fa:16:3e:3e:d0:2e", "network": {"id": "898394ed-19f4-4525-9c0d-05895748de8c", "bridge": "br-int", "label": "tempest-network-smoke--407161761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aff0d14-b6", "ovs_interfaceid": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 07:42:10 np0005629333 nova_compute[244014]: 2026-02-25 12:42:10.373 244018 DEBUG oslo_concurrency.lockutils [req-4a7940ee-7ced-4519-aeb0-c168968e75f0 req-92ae13c0-3f4e-4d5e-8ef5-6398a5cebe37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:42:10 np0005629333 nova_compute[244014]: 2026-02-25 12:42:10.373 244018 DEBUG nova.network.neutron [req-4a7940ee-7ced-4519-aeb0-c168968e75f0 req-92ae13c0-3f4e-4d5e-8ef5-6398a5cebe37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Refreshing network info cache for port 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:42:10 np0005629333 nova_compute[244014]: 2026-02-25 12:42:10.375 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Start _get_guest_xml network_info=[{"id": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "address": "fa:16:3e:3e:d0:2e", "network": {"id": "898394ed-19f4-4525-9c0d-05895748de8c", "bridge": "br-int", "label": "tempest-network-smoke--407161761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aff0d14-b6", "ovs_interfaceid": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 07:42:10 np0005629333 nova_compute[244014]: 2026-02-25 12:42:10.380 244018 WARNING nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:42:10 np0005629333 nova_compute[244014]: 2026-02-25 12:42:10.386 244018 DEBUG nova.virt.libvirt.host [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:42:10 np0005629333 nova_compute[244014]: 2026-02-25 12:42:10.386 244018 DEBUG nova.virt.libvirt.host [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:42:10 np0005629333 nova_compute[244014]: 2026-02-25 12:42:10.391 244018 DEBUG nova.virt.libvirt.host [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 07:42:10 np0005629333 nova_compute[244014]: 2026-02-25 12:42:10.391 244018 DEBUG nova.virt.libvirt.host [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 07:42:10 np0005629333 nova_compute[244014]: 2026-02-25 12:42:10.392 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:42:10 np0005629333 nova_compute[244014]: 2026-02-25 12:42:10.392 244018 DEBUG nova.virt.hardware [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 07:42:10 np0005629333 nova_compute[244014]: 2026-02-25 12:42:10.392 244018 DEBUG nova.virt.hardware [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 07:42:10 np0005629333 nova_compute[244014]: 2026-02-25 12:42:10.392 244018 DEBUG nova.virt.hardware [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 07:42:10 np0005629333 nova_compute[244014]: 2026-02-25 12:42:10.393 244018 DEBUG nova.virt.hardware [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 07:42:10 np0005629333 nova_compute[244014]: 2026-02-25 12:42:10.393 244018 DEBUG nova.virt.hardware [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 07:42:10 np0005629333 nova_compute[244014]: 2026-02-25 12:42:10.393 244018 DEBUG nova.virt.hardware [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 07:42:10 np0005629333 nova_compute[244014]: 2026-02-25 12:42:10.393 244018 DEBUG nova.virt.hardware [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 07:42:10 np0005629333 nova_compute[244014]: 2026-02-25 12:42:10.393 244018 DEBUG nova.virt.hardware [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 07:42:10 np0005629333 nova_compute[244014]: 2026-02-25 12:42:10.394 244018 DEBUG nova.virt.hardware [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 07:42:10 np0005629333 nova_compute[244014]: 2026-02-25 12:42:10.394 244018 DEBUG nova.virt.hardware [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 07:42:10 np0005629333 nova_compute[244014]: 2026-02-25 12:42:10.395 244018 DEBUG nova.virt.hardware [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 07:42:10 np0005629333 nova_compute[244014]: 2026-02-25 12:42:10.399 244018 DEBUG oslo_concurrency.processutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:42:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:42:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3274460178' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.001 244018 DEBUG oslo_concurrency.processutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.034 244018 DEBUG nova.storage.rbd_utils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image 11f1a7e0-6001-4367-8491-5b5508f56bdb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.038 244018 DEBUG oslo_concurrency.processutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:42:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:42:11 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2144351085' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.574 244018 DEBUG oslo_concurrency.processutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.576 244018 DEBUG nova.virt.libvirt.vif [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:42:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-356243921',display_name='tempest-TestNetworkAdvancedServerOps-server-356243921',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-356243921',id=108,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOZBQUrVipYRBieQq1xDRMWSoKXwtqySIiZY2u0KWFIY87u59XRYFhhGNlsK64UxHDZ7UBqeTfQ7KU9w8um5F3hgrG+dYRSQSNJa4y14JlVwVfxefeeLtULB24TIYsOAUg==',key_name='tempest-TestNetworkAdvancedServerOps-1545448232',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-em0x8woo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:42:05Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=11f1a7e0-6001-4367-8491-5b5508f56bdb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "address": "fa:16:3e:3e:d0:2e", "network": {"id": "898394ed-19f4-4525-9c0d-05895748de8c", "bridge": "br-int", "label": "tempest-network-smoke--407161761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aff0d14-b6", "ovs_interfaceid": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.577 244018 DEBUG nova.network.os_vif_util [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "address": "fa:16:3e:3e:d0:2e", "network": {"id": "898394ed-19f4-4525-9c0d-05895748de8c", "bridge": "br-int", "label": "tempest-network-smoke--407161761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aff0d14-b6", "ovs_interfaceid": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.577 244018 DEBUG nova.network.os_vif_util [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:d0:2e,bridge_name='br-int',has_traffic_filtering=True,id=6aff0d14-b6b5-45d1-83da-b9f0946a2e7e,network=Network(898394ed-19f4-4525-9c0d-05895748de8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6aff0d14-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.578 244018 DEBUG nova.objects.instance [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'pci_devices' on Instance uuid 11f1a7e0-6001-4367-8491-5b5508f56bdb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.595 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:42:11 np0005629333 nova_compute[244014]:  <uuid>11f1a7e0-6001-4367-8491-5b5508f56bdb</uuid>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:  <name>instance-0000006c</name>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-356243921</nova:name>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:42:10</nova:creationTime>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:42:11 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:        <nova:user uuid="fb37a481eb114226822ed8b2ef4f9a89">tempest-TestNetworkAdvancedServerOps-1424801157-project-member</nova:user>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:        <nova:project uuid="6821a6e7edd54dbe97920b79aae8f54c">tempest-TestNetworkAdvancedServerOps-1424801157</nova:project>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:        <nova:port uuid="6aff0d14-b6b5-45d1-83da-b9f0946a2e7e">
Feb 25 07:42:11 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <entry name="serial">11f1a7e0-6001-4367-8491-5b5508f56bdb</entry>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <entry name="uuid">11f1a7e0-6001-4367-8491-5b5508f56bdb</entry>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/11f1a7e0-6001-4367-8491-5b5508f56bdb_disk">
Feb 25 07:42:11 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:42:11 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/11f1a7e0-6001-4367-8491-5b5508f56bdb_disk.config">
Feb 25 07:42:11 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:42:11 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:3e:d0:2e"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <target dev="tap6aff0d14-b6"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/11f1a7e0-6001-4367-8491-5b5508f56bdb/console.log" append="off"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:42:11 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:42:11 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:42:11 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:42:11 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
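The <domain> document above is what the libvirt driver ultimately hands to libvirtd. Nova goes through its own Guest wrapper, but the underlying libvirt-python calls amount to the following sketch (xml is a hypothetical variable holding the document printed above):

    import libvirt

    # Assumes the document above is in `xml`.
    conn = libvirt.open("qemu:///system")
    dom = conn.defineXML(xml)  # persist the definition of instance-0000006c
    dom.create()               # boot the guest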
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.600 244018 DEBUG nova.compute.manager [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Preparing to wait for external event network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.600 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.600 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.601 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
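The three lockutils lines above are the standard oslo.concurrency pattern: registration of the network-vif-plugged event is serialized under a per-instance named lock. A minimal sketch of the same pattern (body elided; the lock name is taken from the log):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("11f1a7e0-6001-4367-8491-5b5508f56bdb-events")
    def _create_or_get_event():
        # look up or create the event object for
        # network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e
        pass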
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.601 244018 DEBUG nova.virt.libvirt.vif [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:42:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-356243921',display_name='tempest-TestNetworkAdvancedServerOps-server-356243921',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-356243921',id=108,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOZBQUrVipYRBieQq1xDRMWSoKXwtqySIiZY2u0KWFIY87u59XRYFhhGNlsK64UxHDZ7UBqeTfQ7KU9w8um5F3hgrG+dYRSQSNJa4y14JlVwVfxefeeLtULB24TIYsOAUg==',key_name='tempest-TestNetworkAdvancedServerOps-1545448232',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-em0x8woo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:42:05Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=11f1a7e0-6001-4367-8491-5b5508f56bdb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "address": "fa:16:3e:3e:d0:2e", "network": {"id": "898394ed-19f4-4525-9c0d-05895748de8c", "bridge": "br-int", "label": "tempest-network-smoke--407161761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aff0d14-b6", "ovs_interfaceid": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.602 244018 DEBUG nova.network.os_vif_util [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "address": "fa:16:3e:3e:d0:2e", "network": {"id": "898394ed-19f4-4525-9c0d-05895748de8c", "bridge": "br-int", "label": "tempest-network-smoke--407161761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aff0d14-b6", "ovs_interfaceid": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.602 244018 DEBUG nova.network.os_vif_util [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:d0:2e,bridge_name='br-int',has_traffic_filtering=True,id=6aff0d14-b6b5-45d1-83da-b9f0946a2e7e,network=Network(898394ed-19f4-4525-9c0d-05895748de8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6aff0d14-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.603 244018 DEBUG os_vif [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:d0:2e,bridge_name='br-int',has_traffic_filtering=True,id=6aff0d14-b6b5-45d1-83da-b9f0946a2e7e,network=Network(898394ed-19f4-4525-9c0d-05895748de8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6aff0d14-b6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
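os-vif consumes typed, versioned objects rather than the JSON blob Nova logs, which is what the "Converting VIF"/"Converted object" pair above performs. A sketch of building and plugging the same VIFOpenVSwitch by hand, using only the field values visible in the log (other optional fields elided):

    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()
    my_vif = vif.VIFOpenVSwitch(
        id="6aff0d14-b6b5-45d1-83da-b9f0946a2e7e",
        address="fa:16:3e:3e:d0:2e",
        bridge_name="br-int",
        vif_name="tap6aff0d14-b6",
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id="6aff0d14-b6b5-45d1-83da-b9f0946a2e7e"))
    info = instance_info.InstanceInfo(
        uuid="11f1a7e0-6001-4367-8491-5b5508f56bdb",
        name="instance-0000006c")
    os_vif.plug(my_vif, info)  # the plug call logged above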
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.603 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.604 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.604 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.606 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.606 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6aff0d14-b6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.607 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6aff0d14-b6, col_values=(('external_ids', {'iface-id': '6aff0d14-b6b5-45d1-83da-b9f0946a2e7e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3e:d0:2e', 'vm-uuid': '11f1a7e0-6001-4367-8491-5b5508f56bdb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.609 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:11 np0005629333 NetworkManager[49836]: <info>  [1772023331.6102] manager: (tap6aff0d14-b6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/427)
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.612 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.614 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.614 244018 INFO os_vif [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:d0:2e,bridge_name='br-int',has_traffic_filtering=True,id=6aff0d14-b6b5-45d1-83da-b9f0946a2e7e,network=Network(898394ed-19f4-4525-9c0d-05895748de8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6aff0d14-b6')#033[00m
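Under the hood the ovs plugin issues exactly the two ovsdbapp transactions logged above: an AddPortCommand for br-int and a DbSetCommand writing the Neutron port identity into the Interface's external_ids. A sketch of the same calls against a local ovsdb-server (the socket path is an assumption, not taken from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed default ovsdb-server socket on this kind of host.
    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tap6aff0d14-b6", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap6aff0d14-b6",
            ("external_ids", {
                "iface-id": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:3e:d0:2e",
                "vm-uuid": "11f1a7e0-6001-4367-8491-5b5508f56bdb"})))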
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.671 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.671 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.671 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No VIF found with MAC fa:16:3e:3e:d0:2e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.672 244018 INFO nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Using config drive#033[00m
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.689 244018 DEBUG nova.storage.rbd_utils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image 11f1a7e0-6001-4367-8491-5b5508f56bdb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:42:11 np0005629333 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.940 244018 DEBUG nova.network.neutron [req-4a7940ee-7ced-4519-aeb0-c168968e75f0 req-92ae13c0-3f4e-4d5e-8ef5-6398a5cebe37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Updated VIF entry in instance network info cache for port 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.941 244018 DEBUG nova.network.neutron [req-4a7940ee-7ced-4519-aeb0-c168968e75f0 req-92ae13c0-3f4e-4d5e-8ef5-6398a5cebe37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Updating instance_info_cache with network_info: [{"id": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "address": "fa:16:3e:3e:d0:2e", "network": {"id": "898394ed-19f4-4525-9c0d-05895748de8c", "bridge": "br-int", "label": "tempest-network-smoke--407161761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aff0d14-b6", "ovs_interfaceid": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:42:11 np0005629333 nova_compute[244014]: 2026-02-25 12:42:11.961 244018 DEBUG oslo_concurrency.lockutils [req-4a7940ee-7ced-4519-aeb0-c168968e75f0 req-92ae13c0-3f4e-4d5e-8ef5-6398a5cebe37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.070 244018 INFO nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Creating config drive at /var/lib/nova/instances/11f1a7e0-6001-4367-8491-5b5508f56bdb/disk.config#033[00m
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.076 244018 DEBUG oslo_concurrency.processutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/11f1a7e0-6001-4367-8491-5b5508f56bdb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpstdau324 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.210 244018 DEBUG oslo_concurrency.processutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/11f1a7e0-6001-4367-8491-5b5508f56bdb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpstdau324" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.251 244018 DEBUG nova.storage.rbd_utils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image 11f1a7e0-6001-4367-8491-5b5508f56bdb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.256 244018 DEBUG oslo_concurrency.processutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/11f1a7e0-6001-4367-8491-5b5508f56bdb/disk.config 11f1a7e0-6001-4367-8491-5b5508f56bdb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:42:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 482 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 153 op/s
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.398 244018 DEBUG oslo_concurrency.processutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/11f1a7e0-6001-4367-8491-5b5508f56bdb/disk.config 11f1a7e0-6001-4367-8491-5b5508f56bdb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.399 244018 INFO nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Deleting local config drive /var/lib/nova/instances/11f1a7e0-6001-4367-8491-5b5508f56bdb/disk.config because it was imported into RBD.#033[00m
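The config drive is produced and stored in the two subprocess calls visible above: mkisofs packs a metadata directory into an ISO9660 image labelled config-2, then rbd import copies it into the vms pool so the cdrom <disk> in the guest XML can reference it, after which the local file is deleted. A condensed sketch (the <uuid> and metadata-directory paths are placeholders for the per-instance values in the log):

    import subprocess

    iso = "/var/lib/nova/instances/<uuid>/disk.config"   # placeholder
    subprocess.check_call(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-publisher", "OpenStack Compute",
         "-quiet", "-J", "-r", "-V", "config-2", "/tmp/<metadata-dir>"])
    subprocess.check_call(
        ["rbd", "import", "--pool", "vms", iso, "<uuid>_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])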
Feb 25 07:42:12 np0005629333 NetworkManager[49836]: <info>  [1772023332.4366] manager: (tap6aff0d14-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/428)
Feb 25 07:42:12 np0005629333 kernel: tap6aff0d14-b6: entered promiscuous mode
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.439 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:12 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:12Z|01030|binding|INFO|Claiming lport 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e for this chassis.
Feb 25 07:42:12 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:12Z|01031|binding|INFO|6aff0d14-b6b5-45d1-83da-b9f0946a2e7e: Claiming fa:16:3e:3e:d0:2e 10.100.0.4
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.443 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.453 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.452 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:d0:2e 10.100.0.4'], port_security=['fa:16:3e:3e:d0:2e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '11f1a7e0-6001-4367-8491-5b5508f56bdb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-898394ed-19f4-4525-9c0d-05895748de8c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '563c6e30-ca07-41f4-bf4e-27cec8015d08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f095686b-cedf-4f50-94ed-1d2cdaae5b7e, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6aff0d14-b6b5-45d1-83da-b9f0946a2e7e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.453 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e in datapath 898394ed-19f4-4525-9c0d-05895748de8c bound to our chassis#033[00m
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.455 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 898394ed-19f4-4525-9c0d-05895748de8c#033[00m
Feb 25 07:42:12 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:12Z|01032|binding|INFO|Setting lport 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e ovn-installed in OVS
Feb 25 07:42:12 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:12Z|01033|binding|INFO|Setting lport 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e up in Southbound
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.457 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.463 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c76d33c5-ab2d-4a3f-8cc8-9107e0183500]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.465 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap898394ed-11 in ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
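Provisioning the datapath means giving the ovnmeta- namespace a veth link toward br-int; the privsep replies that follow are those netlink operations. A hedged sketch of the equivalent pyroute2 calls (Neutron goes through its own privileged ip_lib wrappers; interface and namespace names are taken from the log):

    from pyroute2 import IPRoute, netns

    ns = "ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c"
    netns.create(ns)                          # the metadata namespace
    ipr = IPRoute()
    ipr.link("add", ifname="tap898394ed-10", kind="veth",
             peer="tap898394ed-11")           # outer/inner veth ends
    idx = ipr.link_lookup(ifname="tap898394ed-11")[0]
    ipr.link("set", index=idx, net_ns_fd=ns)  # move inner end into ns
    ipr.close()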
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.468 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap898394ed-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.468 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a525a782-09d8-4afe-bbdc-5c8bec9e7284]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.469 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d1a36740-82d7-4d70-9590-4844008659a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:12 np0005629333 systemd-machined[210048]: New machine qemu-135-instance-0000006c.
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.483 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[f80530ad-f12f-4515-99ff-e855b601bcd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:12 np0005629333 systemd[1]: Started Virtual Machine qemu-135-instance-0000006c.
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.494 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6a6c8aaa-fdb9-4cd7-b42f-28f734ff20ea]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:12 np0005629333 systemd-udevd[337993]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:42:12 np0005629333 NetworkManager[49836]: <info>  [1772023332.5170] device (tap6aff0d14-b6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:42:12 np0005629333 NetworkManager[49836]: <info>  [1772023332.5176] device (tap6aff0d14-b6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.518 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[63066a6e-83c1-42f0-9eff-c16b276a99d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:12 np0005629333 NetworkManager[49836]: <info>  [1772023332.5241] manager: (tap898394ed-10): new Veth device (/org/freedesktop/NetworkManager/Devices/429)
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.523 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1b66fad6-8966-4539-9db9-bf120e08faae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.553 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[96d3725b-2d5c-46bd-bbdf-10d8ed1ebd1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.556 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7e297306-2724-44c5-af99-b55160413e23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:12 np0005629333 NetworkManager[49836]: <info>  [1772023332.5718] device (tap898394ed-10): carrier: link connected
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.574 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d4168d10-8c9a-4ea6-a863-6ef3c06e7f1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.587 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1b8c91c0-eb36-451b-977d-93c33ee2ff9e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap898394ed-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:00:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 309], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530197, 'reachable_time': 21665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338021, 'error': None, 'target': 'ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.598 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7f142e3a-88b8-44ab-bffc-1e1c95997048]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:f1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530197, 'tstamp': 530197}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338022, 'error': None, 'target': 'ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.610 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cd1d3ae9-fb36-4d98-9c87-01e8b8677fe3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap898394ed-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:00:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 309], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530197, 'reachable_time': 21665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 338023, 'error': None, 'target': 'ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.633 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[376a0cf9-a54b-46a6-bc5c-378b574b9623]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.681 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ccdd5a20-34d5-4db3-b329-3692a32f4426]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.683 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap898394ed-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.683 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.684 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap898394ed-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.686 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:12 np0005629333 NetworkManager[49836]: <info>  [1772023332.6869] manager: (tap898394ed-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/430)
Feb 25 07:42:12 np0005629333 kernel: tap898394ed-10: entered promiscuous mode
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.687 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.688 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap898394ed-10, col_values=(('external_ids', {'iface-id': '90ed2c98-937c-42de-8c1d-0f2d90380e03'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
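The three ovsdbapp transactions above are the agent's standard plumbing for the metadata tap: remove any stale copy of the port from br-ex, add it to br-int, then point external_ids:iface-id at the neutron port UUID so ovn-controller can bind it. A sketch of the same sequence through the ovsdbapp API (connection setup is environment-specific and elided; treat this as an illustration, not the agent's exact code):

    # Sketch, assuming an established ovsdbapp OVS connection `ovs`
    # (e.g. ovsdbapp.schema.open_vswitch.impl_idl.OvsdbIdl over a
    # connection.Connection). The commands mirror the log lines above.
    def plug_metadata_port(ovs, port, iface_id):
        with ovs.transaction(check_error=True) as txn:
            txn.add(ovs.del_port(port, bridge='br-ex', if_exists=True))
            txn.add(ovs.add_port('br-int', port, may_exist=True))
            txn.add(ovs.db_set('Interface', port,
                               ('external_ids', {'iface-id': iface_id})))

    # e.g. plug_metadata_port(ovs, 'tap898394ed-10',
    #                         '90ed2c98-937c-42de-8c1d-0f2d90380e03')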
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.690 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:12 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:12Z|01034|binding|INFO|Releasing lport 90ed2c98-937c-42de-8c1d-0f2d90380e03 from this chassis (sb_readonly=0)
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.696 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.697 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/898394ed-19f4-4525-9c0d-05895748de8c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/898394ed-19f4-4525-9c0d-05895748de8c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.698 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5346283b-fd37-482c-94c0-fbf159993463]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.699 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-898394ed-19f4-4525-9c0d-05895748de8c
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/898394ed-19f4-4525-9c0d-05895748de8c.pid.haproxy
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 898394ed-19f4-4525-9c0d-05895748de8c
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:42:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:12.699 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c', 'env', 'PROCESS_TAG=haproxy-898394ed-19f4-4525-9c0d-05895748de8c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/898394ed-19f4-4525-9c0d-05895748de8c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
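Read together, the rendered config and the rootwrap command above give the whole metadata path for this network: haproxy runs inside the ovnmeta-898394ed-... namespace, binds 169.254.169.254:80, stamps each request with X-OVN-Network-ID, and hands it to the server at /var/lib/neutron/metadata_proxy (a UNIX socket, per HAProxy's slash-prefixed server address syntax) where the metadata service listens. One hedged way to probe the proxy once it is up; this check is illustrative, needs root, and is not something the agent itself runs:

    import subprocess

    # Namespace name as used in the rootwrap command above.
    ns = 'ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c'
    result = subprocess.run(
        ['ip', 'netns', 'exec', ns,
         'curl', '-s', '-o', '/dev/null', '-w', '%{http_code}',
         '--max-time', '5', 'http://169.254.169.254/'],
        capture_output=True, text=True)
    # Any HTTP status code back means haproxy answered on the bind address;
    # an empty result means the request timed out.
    print(result.stdout or 'no response')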
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.773 244018 DEBUG nova.compute.manager [req-b1972967-b3cb-4f76-bf7e-9c554898928f req-761d2d85-808c-4285-983d-ab1cb79924bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received event network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.773 244018 DEBUG oslo_concurrency.lockutils [req-b1972967-b3cb-4f76-bf7e-9c554898928f req-761d2d85-808c-4285-983d-ab1cb79924bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.774 244018 DEBUG oslo_concurrency.lockutils [req-b1972967-b3cb-4f76-bf7e-9c554898928f req-761d2d85-808c-4285-983d-ab1cb79924bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.774 244018 DEBUG oslo_concurrency.lockutils [req-b1972967-b3cb-4f76-bf7e-9c554898928f req-761d2d85-808c-4285-983d-ab1cb79924bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.774 244018 DEBUG nova.compute.manager [req-b1972967-b3cb-4f76-bf7e-9c554898928f req-761d2d85-808c-4285-983d-ab1cb79924bc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Processing event network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.893 244018 DEBUG nova.compute.manager [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.894 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023332.893629, 11f1a7e0-6001-4367-8491-5b5508f56bdb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.894 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] VM Started (Lifecycle Event)#033[00m
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.897 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.900 244018 INFO nova.virt.libvirt.driver [-] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Instance spawned successfully.#033[00m
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.900 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.929 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.933 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.933 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.934 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.934 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.935 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.935 244018 DEBUG nova.virt.libvirt.driver [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.943 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.973 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
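The two lines above show the guard in Nova's lifecycle handling: the hypervisor already reports the VM running (VM power_state 1) while the database still says 0, but because the instance has a pending task ('spawning') the sync is skipped rather than racing the in-flight build. A toy model of that decision (names are illustrative, not Nova's internals):

    def should_sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            return False              # an in-flight task owns the state: skip
        return db_power_state != vm_power_state

    # Matches the log: DB power_state=0, VM power_state=1, task 'spawning'.
    assert not should_sync_power_state(0, 1, 'spawning')
    assert should_sync_power_state(0, 1, None)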
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.973 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023332.8943381, 11f1a7e0-6001-4367-8491-5b5508f56bdb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:42:12 np0005629333 nova_compute[244014]: 2026-02-25 12:42:12.974 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:42:13 np0005629333 nova_compute[244014]: 2026-02-25 12:42:13.006 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:42:13 np0005629333 nova_compute[244014]: 2026-02-25 12:42:13.009 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023332.8965251, 11f1a7e0-6001-4367-8491-5b5508f56bdb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:42:13 np0005629333 nova_compute[244014]: 2026-02-25 12:42:13.010 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:42:13 np0005629333 nova_compute[244014]: 2026-02-25 12:42:13.013 244018 INFO nova.compute.manager [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Took 7.64 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:42:13 np0005629333 nova_compute[244014]: 2026-02-25 12:42:13.014 244018 DEBUG nova.compute.manager [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:42:13 np0005629333 nova_compute[244014]: 2026-02-25 12:42:13.077 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:42:13 np0005629333 nova_compute[244014]: 2026-02-25 12:42:13.081 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:42:13 np0005629333 nova_compute[244014]: 2026-02-25 12:42:13.119 244018 INFO nova.compute.manager [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Took 9.09 seconds to build instance.#033[00m
Feb 25 07:42:13 np0005629333 podman[338096]: 2026-02-25 12:42:13.120186369 +0000 UTC m=+0.056962358 container create 06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 07:42:13 np0005629333 nova_compute[244014]: 2026-02-25 12:42:13.144 244018 DEBUG oslo_concurrency.lockutils [None req-dc3c6f08-2d66-4438-8918-837e02e3c23d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:13 np0005629333 systemd[1]: Started libpod-conmon-06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839.scope.
Feb 25 07:42:13 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:42:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7657430fcd1ccf36e0ff1aaea267888d4bce6d3d745f3ebe88290757fb8da93/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:42:13 np0005629333 podman[338096]: 2026-02-25 12:42:13.089279717 +0000 UTC m=+0.026055696 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:42:13 np0005629333 podman[338096]: 2026-02-25 12:42:13.192941143 +0000 UTC m=+0.129717162 container init 06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 07:42:13 np0005629333 podman[338096]: 2026-02-25 12:42:13.198186021 +0000 UTC m=+0.134962000 container start 06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 25 07:42:13 np0005629333 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[338110]: [NOTICE]   (338114) : New worker (338116) forked
Feb 25 07:42:13 np0005629333 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[338110]: [NOTICE]   (338114) : Loading success.
Feb 25 07:42:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:42:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 484 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 156 op/s
Feb 25 07:42:14 np0005629333 nova_compute[244014]: 2026-02-25 12:42:14.876 244018 DEBUG nova.compute.manager [req-00e8c411-c669-4e70-88d2-2484df484711 req-fedeafc7-e8fb-449f-aa4c-0e00bba990e0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received event network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:42:14 np0005629333 nova_compute[244014]: 2026-02-25 12:42:14.876 244018 DEBUG oslo_concurrency.lockutils [req-00e8c411-c669-4e70-88d2-2484df484711 req-fedeafc7-e8fb-449f-aa4c-0e00bba990e0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:14 np0005629333 nova_compute[244014]: 2026-02-25 12:42:14.876 244018 DEBUG oslo_concurrency.lockutils [req-00e8c411-c669-4e70-88d2-2484df484711 req-fedeafc7-e8fb-449f-aa4c-0e00bba990e0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:14 np0005629333 nova_compute[244014]: 2026-02-25 12:42:14.876 244018 DEBUG oslo_concurrency.lockutils [req-00e8c411-c669-4e70-88d2-2484df484711 req-fedeafc7-e8fb-449f-aa4c-0e00bba990e0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:14 np0005629333 nova_compute[244014]: 2026-02-25 12:42:14.877 244018 DEBUG nova.compute.manager [req-00e8c411-c669-4e70-88d2-2484df484711 req-fedeafc7-e8fb-449f-aa4c-0e00bba990e0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] No waiting events found dispatching network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:42:14 np0005629333 nova_compute[244014]: 2026-02-25 12:42:14.877 244018 WARNING nova.compute.manager [req-00e8c411-c669-4e70-88d2-2484df484711 req-fedeafc7-e8fb-449f-aa4c-0e00bba990e0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received unexpected event network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e for instance with vm_state active and task_state None.#033[00m
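The WARNING above is the tail of a benign race rather than a fault: the first copy of the network-vif-plugged event at 12:42:12.773 satisfied the waiter registered for the build, so when the event is re-emitted at 12:42:14.876 no waiter remains to pop and the manager flags it as unexpected. A toy pop-or-warn model of that registry (not Nova's actual code):

    waiters = {}   # (instance_uuid, event_name) -> callback

    def deliver(instance_uuid, event_name):
        callback = waiters.pop((instance_uuid, event_name), None)
        if callback is None:
            print('WARNING: received unexpected event %s' % event_name)
        else:
            callback()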
Feb 25 07:42:15 np0005629333 nova_compute[244014]: 2026-02-25 12:42:15.076 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1833: 305 pgs: 305 active+clean; 484 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 156 op/s
Feb 25 07:42:16 np0005629333 nova_compute[244014]: 2026-02-25 12:42:16.609 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:17 np0005629333 nova_compute[244014]: 2026-02-25 12:42:17.263 244018 DEBUG nova.compute.manager [req-f94b50d3-774b-4bbb-b2cb-4641de0b647b req-d41ec173-917e-4466-a1a8-499ef4a2075a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received event network-changed-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:42:17 np0005629333 nova_compute[244014]: 2026-02-25 12:42:17.263 244018 DEBUG nova.compute.manager [req-f94b50d3-774b-4bbb-b2cb-4641de0b647b req-d41ec173-917e-4466-a1a8-499ef4a2075a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Refreshing instance network info cache due to event network-changed-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:42:17 np0005629333 nova_compute[244014]: 2026-02-25 12:42:17.263 244018 DEBUG oslo_concurrency.lockutils [req-f94b50d3-774b-4bbb-b2cb-4641de0b647b req-d41ec173-917e-4466-a1a8-499ef4a2075a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:42:17 np0005629333 nova_compute[244014]: 2026-02-25 12:42:17.264 244018 DEBUG oslo_concurrency.lockutils [req-f94b50d3-774b-4bbb-b2cb-4641de0b647b req-d41ec173-917e-4466-a1a8-499ef4a2075a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:42:17 np0005629333 nova_compute[244014]: 2026-02-25 12:42:17.264 244018 DEBUG nova.network.neutron [req-f94b50d3-774b-4bbb-b2cb-4641de0b647b req-d41ec173-917e-4466-a1a8-499ef4a2075a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Refreshing network info cache for port 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:42:17 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:17Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e2:bd:b3 10.100.0.6
Feb 25 07:42:17 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:17Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e2:bd:b3 10.100.0.6
Feb 25 07:42:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 508 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 6.0 MiB/s wr, 275 op/s
Feb 25 07:42:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:42:18 np0005629333 nova_compute[244014]: 2026-02-25 12:42:18.790 244018 DEBUG nova.virt.libvirt.driver [None req-5b47d73b-67aa-4c66-8a16-46e01e8f20e1 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Feb 25 07:42:18 np0005629333 nova_compute[244014]: 2026-02-25 12:42:18.846 244018 DEBUG nova.network.neutron [req-f94b50d3-774b-4bbb-b2cb-4641de0b647b req-d41ec173-917e-4466-a1a8-499ef4a2075a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Updated VIF entry in instance network info cache for port 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:42:18 np0005629333 nova_compute[244014]: 2026-02-25 12:42:18.847 244018 DEBUG nova.network.neutron [req-f94b50d3-774b-4bbb-b2cb-4641de0b647b req-d41ec173-917e-4466-a1a8-499ef4a2075a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Updating instance_info_cache with network_info: [{"id": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "address": "fa:16:3e:3e:d0:2e", "network": {"id": "898394ed-19f4-4525-9c0d-05895748de8c", "bridge": "br-int", "label": "tempest-network-smoke--407161761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aff0d14-b6", "ovs_interfaceid": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
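The cache payload above is a list of VIF dicts, each nesting network -> subnets -> ips -> floating_ips. A short illustrative walk over that structure (the function is hypothetical; the shape matches the JSON in the log line):

    def vif_addresses(network_info):
        for vif in network_info:
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    floating = [f['address'] for f in ip.get('floating_ips', [])]
                    yield vif['id'], ip['address'], floating

    # For the entry above this yields:
    # ('6aff0d14-b6b5-45d1-83da-b9f0946a2e7e', '10.100.0.4', ['192.168.122.206'])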
Feb 25 07:42:18 np0005629333 nova_compute[244014]: 2026-02-25 12:42:18.876 244018 DEBUG oslo_concurrency.lockutils [req-f94b50d3-774b-4bbb-b2cb-4641de0b647b req-d41ec173-917e-4466-a1a8-499ef4a2075a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.079 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.123 244018 DEBUG nova.compute.manager [req-0310f901-077d-44c2-a76e-10599499f115 req-58b59377-e8fa-4487-b10a-b42825bbebda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Received event network-changed-ee68d04f-36ba-4727-ab4b-c31e559353e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.124 244018 DEBUG nova.compute.manager [req-0310f901-077d-44c2-a76e-10599499f115 req-58b59377-e8fa-4487-b10a-b42825bbebda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Refreshing instance network info cache due to event network-changed-ee68d04f-36ba-4727-ab4b-c31e559353e0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.125 244018 DEBUG oslo_concurrency.lockutils [req-0310f901-077d-44c2-a76e-10599499f115 req-58b59377-e8fa-4487-b10a-b42825bbebda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-227efbfe-da43-423a-8652-9636ecded4cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.125 244018 DEBUG oslo_concurrency.lockutils [req-0310f901-077d-44c2-a76e-10599499f115 req-58b59377-e8fa-4487-b10a-b42825bbebda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-227efbfe-da43-423a-8652-9636ecded4cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.126 244018 DEBUG nova.network.neutron [req-0310f901-077d-44c2-a76e-10599499f115 req-58b59377-e8fa-4487-b10a-b42825bbebda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Refreshing network info cache for port ee68d04f-36ba-4727-ab4b-c31e559353e0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.168 244018 DEBUG oslo_concurrency.lockutils [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "227efbfe-da43-423a-8652-9636ecded4cd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.169 244018 DEBUG oslo_concurrency.lockutils [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.169 244018 DEBUG oslo_concurrency.lockutils [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "227efbfe-da43-423a-8652-9636ecded4cd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.169 244018 DEBUG oslo_concurrency.lockutils [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.169 244018 DEBUG oslo_concurrency.lockutils [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.170 244018 INFO nova.compute.manager [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Terminating instance#033[00m
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.172 244018 DEBUG nova.compute.manager [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:42:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1835: 305 pgs: 305 active+clean; 508 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.9 MiB/s wr, 169 op/s
Feb 25 07:42:20 np0005629333 kernel: tapee68d04f-36 (unregistering): left promiscuous mode
Feb 25 07:42:20 np0005629333 NetworkManager[49836]: <info>  [1772023340.2976] device (tapee68d04f-36): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:42:20 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:20Z|01035|binding|INFO|Releasing lport ee68d04f-36ba-4727-ab4b-c31e559353e0 from this chassis (sb_readonly=0)
Feb 25 07:42:20 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:20Z|01036|binding|INFO|Setting lport ee68d04f-36ba-4727-ab4b-c31e559353e0 down in Southbound
Feb 25 07:42:20 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:20Z|01037|binding|INFO|Removing iface tapee68d04f-36 ovn-installed in OVS
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.313 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:20.325 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:74:9a 10.100.0.8 2001:db8::f816:3eff:feb7:749a'], port_security=['fa:16:3e:b7:74:9a 10.100.0.8 2001:db8::f816:3eff:feb7:749a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28 2001:db8::f816:3eff:feb7:749a/64', 'neutron:device_id': '227efbfe-da43-423a-8652-9636ecded4cd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '219ff989-2dc4-4de6-abcc-fec08f1c06f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5cb98e41-b63f-472f-91ce-2c5130dfa4a8, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ee68d04f-36ba-4727-ab4b-c31e559353e0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:42:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:20.327 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ee68d04f-36ba-4727-ab4b-c31e559353e0 in datapath 9ec47b46-6b4d-4267-964b-6f16eef9a7b1 unbound from our chassis#033[00m
Feb 25 07:42:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:20.328 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9ec47b46-6b4d-4267-964b-6f16eef9a7b1#033[00m
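The matched event above is an ovsdbapp row-event subscription on the southbound Port_Binding table: the old row still carried up=[True] with our chassis, the new row carries up=[False], so the agent concludes the port left this chassis and re-evaluates metadata provisioning for the datapath. A sketch of that subscription pattern (handler body illustrative; the agent's real logic is richer):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # `old` holds the prior values (here up=[True]); up flipping
            # to [False] means the port was released from this chassis.
            if getattr(old, 'up', None) == [True] and row.up == [False]:
                print('Port %s unbound from our chassis' % row.logical_port)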
Feb 25 07:42:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:20.343 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc36e09-1708-41b5-b0b7-952be437885c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:20 np0005629333 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Feb 25 07:42:20 np0005629333 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006a.scope: Consumed 12.127s CPU time.
Feb 25 07:42:20 np0005629333 systemd-machined[210048]: Machine qemu-133-instance-0000006a terminated.
Feb 25 07:42:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:20.371 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8b3798d0-05b8-4c4c-81a3-bdc011f9cb41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:20.375 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9b7f9d0a-bfe0-4632-aa92-756d46aff305]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:20.395 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[6220131e-393b-4aa0-924e-bcadd1c84c08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.414 244018 INFO nova.virt.libvirt.driver [-] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Instance destroyed successfully.#033[00m
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.414 244018 DEBUG nova.objects.instance [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid 227efbfe-da43-423a-8652-9636ecded4cd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:42:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:20.413 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[28acd6c3-937b-4eab-9bcd-f2ffd7f528d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ec47b46-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:aa:8a:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 32, 'tx_packets': 7, 'rx_bytes': 2640, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 32, 'tx_packets': 7, 'rx_bytes': 2640, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 300], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524298, 'reachable_time': 40654, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 28, 'inoctets': 2080, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 28, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2080, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 28, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338143, 'error': None, 'target': 'ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.427 244018 DEBUG nova.virt.libvirt.vif [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:41:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1104420481',display_name='tempest-TestGettingAddress-server-1104420481',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1104420481',id=106,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOF5qZPMlaCDABTzKIN8P76NPvIVQ2htm3U8AfLvXmjtQ5ATadIDI8WR25EzcFon8xGJAOKa64XoS6ByiRlqYYuZKqui9AsV1f+Y3cN0xRd0ljo9lo2zQ0wdsqjfZAW6NA==',key_name='tempest-TestGettingAddress-1477618447',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:41:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-g8zs80ws',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:41:55Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=227efbfe-da43-423a-8652-9636ecded4cd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "address": "fa:16:3e:b7:74:9a", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:749a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee68d04f-36", "ovs_interfaceid": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.428 244018 DEBUG nova.network.os_vif_util [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "address": "fa:16:3e:b7:74:9a", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:749a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee68d04f-36", "ovs_interfaceid": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.429 244018 DEBUG nova.network.os_vif_util [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b7:74:9a,bridge_name='br-int',has_traffic_filtering=True,id=ee68d04f-36ba-4727-ab4b-c31e559353e0,network=Network(9ec47b46-6b4d-4267-964b-6f16eef9a7b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee68d04f-36') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.429 244018 DEBUG os_vif [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:74:9a,bridge_name='br-int',has_traffic_filtering=True,id=ee68d04f-36ba-4727-ab4b-c31e559353e0,network=Network(9ec47b46-6b4d-4267-964b-6f16eef9a7b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee68d04f-36') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:42:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:20.429 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[850d52cf-0c99-480e-bec6-674170e50d43]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9ec47b46-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 524306, 'tstamp': 524306}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338148, 'error': None, 'target': 'ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9ec47b46-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 524308, 'tstamp': 524308}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338148, 'error': None, 'target': 'ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:20.431 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ec47b46-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.432 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.433 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee68d04f-36, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:42:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:20.437 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ec47b46-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.437 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:20.437 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:42:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:20.438 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9ec47b46-60, col_values=(('external_ids', {'iface-id': '895bc3cc-c38d-425b-b005-1acb3139bbee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:42:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:20.438 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
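The DelPort/AddPort/DbSet trio above is the metadata agent (re)plugging its namespace tap via ovsdbapp commands; because the port already sits on br-int with the right external_ids, both transactions commit as "Transaction caused no change". A sketch of the same idempotent pattern, assuming ovsdbapp with the Open_vSwitch schema and a reachable local OVS database socket:

    # Idempotent ovsdbapp transaction pattern seen above (assumes ovsdbapp
    # and a local OVS socket; if_exists/may_exist make the commands no-ops
    # when the database is already in the desired state).
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVS_DB = 'unix:/run/openvswitch/db.sock'
    idl = connection.OvsdbIdl.from_server(OVS_DB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap9ec47b46-60', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap9ec47b46-60', may_exist=True))
        txn.add(api.db_set('Interface', 'tap9ec47b46-60',
                           ('external_ids',
                            {'iface-id': '895bc3cc-c38d-425b-b005-1acb3139bbee'})))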
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.440 244018 INFO os_vif [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:74:9a,bridge_name='br-int',has_traffic_filtering=True,id=ee68d04f-36ba-4727-ab4b-c31e559353e0,network=Network(9ec47b46-6b4d-4267-964b-6f16eef9a7b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapee68d04f-36')#033[00m
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.649 244018 DEBUG nova.compute.manager [req-38471a43-0bd2-44ae-9801-837f5a6d9841 req-cf502de3-d6df-442b-a21d-9bc3fe867b53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Received event network-vif-unplugged-ee68d04f-36ba-4727-ab4b-c31e559353e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.650 244018 DEBUG oslo_concurrency.lockutils [req-38471a43-0bd2-44ae-9801-837f5a6d9841 req-cf502de3-d6df-442b-a21d-9bc3fe867b53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "227efbfe-da43-423a-8652-9636ecded4cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.651 244018 DEBUG oslo_concurrency.lockutils [req-38471a43-0bd2-44ae-9801-837f5a6d9841 req-cf502de3-d6df-442b-a21d-9bc3fe867b53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.651 244018 DEBUG oslo_concurrency.lockutils [req-38471a43-0bd2-44ae-9801-837f5a6d9841 req-cf502de3-d6df-442b-a21d-9bc3fe867b53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.652 244018 DEBUG nova.compute.manager [req-38471a43-0bd2-44ae-9801-837f5a6d9841 req-cf502de3-d6df-442b-a21d-9bc3fe867b53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] No waiting events found dispatching network-vif-unplugged-ee68d04f-36ba-4727-ab4b-c31e559353e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.652 244018 DEBUG nova.compute.manager [req-38471a43-0bd2-44ae-9801-837f5a6d9841 req-cf502de3-d6df-442b-a21d-9bc3fe867b53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Received event network-vif-unplugged-ee68d04f-36ba-4727-ab4b-c31e559353e0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
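The Acquiring/acquired/released triplets above come from oslo.concurrency's lock decorator guarding nova's per-instance event queue; the lock name is simply "<instance uuid>-events". A toy reproduction of the pattern (the real code lives in nova.compute.manager.InstanceEvents):

    # Toy version of the per-instance event lock pattern logged above;
    # oslo.concurrency emits the Acquiring/acquired/released DEBUG lines.
    from oslo_concurrency import lockutils

    INSTANCE = '227efbfe-da43-423a-8652-9636ecded4cd'

    @lockutils.synchronized(INSTANCE + '-events')
    def pop_instance_event(events, name):
        # Returns a registered waiter if any, else None -- the None case is
        # what logs "No waiting events found dispatching ...".
        return events.pop(name, None)

    print(pop_instance_event(
        {}, 'network-vif-unplugged-ee68d04f-36ba-4727-ab4b-c31e559353e0'))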
Feb 25 07:42:20 np0005629333 podman[338168]: 2026-02-25 12:42:20.745482872 +0000 UTC m=+0.077263271 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image)
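The podman record above is the periodic health probe of the ovn_metadata_agent container; per its config_data, the test is simply /openstack/healthcheck executed inside the container. The same probe can be driven by hand with podman's healthcheck subcommand (root access to the host's podman assumed):

    # Trigger the same container health check manually; podman exits 0 for
    # healthy and non-zero for unhealthy (requires root on the host).
    import subprocess

    res = subprocess.run(
        ['podman', 'healthcheck', 'run', 'ovn_metadata_agent'],
        capture_output=True, text=True)
    if res.returncode == 0:
        print('healthy')
    else:
        print('unhealthy:', res.stdout or res.stderr)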
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:42:20 np0005629333 nova_compute[244014]: 2026-02-25 12:42:20.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 07:42:21 np0005629333 nova_compute[244014]: 2026-02-25 12:42:21.107 244018 INFO nova.virt.libvirt.driver [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Deleting instance files /var/lib/nova/instances/227efbfe-da43-423a-8652-9636ecded4cd_del#033[00m
Feb 25 07:42:21 np0005629333 nova_compute[244014]: 2026-02-25 12:42:21.108 244018 INFO nova.virt.libvirt.driver [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Deletion of /var/lib/nova/instances/227efbfe-da43-423a-8652-9636ecded4cd_del complete#033[00m
Feb 25 07:42:21 np0005629333 nova_compute[244014]: 2026-02-25 12:42:21.124 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:42:21 np0005629333 nova_compute[244014]: 2026-02-25 12:42:21.124 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:42:21 np0005629333 nova_compute[244014]: 2026-02-25 12:42:21.125 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 25 07:42:21 np0005629333 nova_compute[244014]: 2026-02-25 12:42:21.167 244018 INFO nova.compute.manager [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Took 0.99 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:42:21 np0005629333 nova_compute[244014]: 2026-02-25 12:42:21.168 244018 DEBUG oslo.service.loopingcall [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:42:21 np0005629333 nova_compute[244014]: 2026-02-25 12:42:21.168 244018 DEBUG nova.compute.manager [-] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:42:21 np0005629333 nova_compute[244014]: 2026-02-25 12:42:21.168 244018 DEBUG nova.network.neutron [-] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:42:21 np0005629333 kernel: tap6178078f-c3 (unregistering): left promiscuous mode
Feb 25 07:42:21 np0005629333 NetworkManager[49836]: <info>  [1772023341.3745] device (tap6178078f-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:42:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:21Z|01038|binding|INFO|Releasing lport 6178078f-c3e6-4ed1-86fa-d99671618a75 from this chassis (sb_readonly=0)
Feb 25 07:42:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:21Z|01039|binding|INFO|Setting lport 6178078f-c3e6-4ed1-86fa-d99671618a75 down in Southbound
Feb 25 07:42:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:21Z|01040|binding|INFO|Removing iface tap6178078f-c3 ovn-installed in OVS
Feb 25 07:42:21 np0005629333 nova_compute[244014]: 2026-02-25 12:42:21.383 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:21.389 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:bd:b3 10.100.0.6'], port_security=['fa:16:3e:e2:bd:b3 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b8d9acc4-7912-4d26-bad6-1159f6993361', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec8bae53-fe6a-49d1-a733-f00c198be561', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503e879cd1f44a16b9baef106ceba949', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3bf34285-1a67-4c95-bb68-fd577a012f6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18f4e8da-4409-4095-9850-aaee82dd8fd1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6178078f-c3e6-4ed1-86fa-d99671618a75) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:42:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:21.390 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6178078f-c3e6-4ed1-86fa-d99671618a75 in datapath ec8bae53-fe6a-49d1-a733-f00c198be561 unbound from our chassis#033[00m
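That "unbound from our chassis" line is the agent's ovsdbapp event watcher reacting to the Port_Binding update matched just above. A rough sketch of such a watcher, assuming ovsdbapp's RowEvent behaves as in the logged matches() path; wiring it into a live southbound connection is omitted:

    # Rough shape of a Port_Binding watcher like the one matched above;
    # run() fires only on bound -> unbound transitions (assumes ovsdbapp).
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortUnboundEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # The IDL only sets attributes on `old` for changed columns, so
            # a chassis change shows up as hasattr(old, 'chassis').
            return hasattr(old, 'chassis') and not row.chassis

        def run(self, event, row, old):
            print('Port %s unbound from our chassis' % row.logical_port)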
Feb 25 07:42:21 np0005629333 nova_compute[244014]: 2026-02-25 12:42:21.393 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:21.394 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ec8bae53-fe6a-49d1-a733-f00c198be561#033[00m
Feb 25 07:42:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:21.421 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[71ce079e-e7ba-413a-8b97-3e269688e08a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:21 np0005629333 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Feb 25 07:42:21 np0005629333 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006b.scope: Consumed 12.339s CPU time.
Feb 25 07:42:21 np0005629333 systemd-machined[210048]: Machine qemu-134-instance-0000006b terminated.
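systemd escapes "-" as "\x2d" in unit names, which is why libvirt's machine scope appears above as machine-qemu\x2d134\x2dinstance\x2d0000006b.scope. A small unescaper, equivalent to `systemd-escape --unescape` for ASCII escapes like this one:

    # Undo systemd's \xNN unit-name escaping; ASCII-only, which suffices here
    # (the full scheme decodes escaped bytes as UTF-8).
    import re

    def systemd_unescape(name):
        return re.sub(r'\\x([0-9a-fA-F]{2})',
                      lambda m: chr(int(m.group(1), 16)), name)

    print(systemd_unescape(r'machine-qemu\x2d134\x2dinstance\x2d0000006b.scope'))
    # -> machine-qemu-134-instance-0000006b.scope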
Feb 25 07:42:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:21.456 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[577756a6-a3f3-4afc-8c65-fdc6ae722cb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:21.460 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3967c6e4-c255-401f-8016-0833f2d5c511]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:21.502 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7a857fe9-b295-4df0-b03e-7f043a7a3748]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:21.522 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4bcdfad1-00d5-44ee-9989-76bf7daef021]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapec8bae53-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:a5:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 27, 'rx_bytes': 700, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 27, 'rx_bytes': 700, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 290], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516532, 'reachable_time': 43845, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338196, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:21.546 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b47eaab0-1d90-4a8b-9999-4ebf789d969b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516546, 'tstamp': 516546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338197, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapec8bae53-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516549, 'tstamp': 516549}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338197, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
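The two RTM_NEWADDR replies above are pyroute2 netlink messages the privsep daemon fetched from inside the ovnmeta- namespace: the tap carries the tenant-side 10.100.0.2/28 plus the 169.254.169.254/32 metadata address. A sketch reproducing that dump directly, assuming root and pyroute2:

    # Dump addresses inside the OVN metadata namespace named in the log
    # (requires root; mirrors the privsep RTM_NEWADDR replies above).
    from pyroute2 import NetNS

    NS = 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561'
    with NetNS(NS) as ns:
        for msg in ns.get_addr():
            print(msg.get_attr('IFA_LABEL'),
                  '%s/%s' % (msg.get_attr('IFA_ADDRESS'), msg['prefixlen']))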
Feb 25 07:42:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:21.549 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8bae53-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:42:21 np0005629333 nova_compute[244014]: 2026-02-25 12:42:21.552 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:21 np0005629333 nova_compute[244014]: 2026-02-25 12:42:21.608 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:21.611 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec8bae53-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:42:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:21.611 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:42:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:21.612 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapec8bae53-f0, col_values=(('external_ids', {'iface-id': 'e2d1eadf-baf7-4e5c-8052-6c64e8476a26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:42:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:21.613 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:42:21 np0005629333 nova_compute[244014]: 2026-02-25 12:42:21.805 244018 INFO nova.virt.libvirt.driver [None req-5b47d73b-67aa-4c66-8a16-46e01e8f20e1 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Instance shutdown successfully after 13 seconds.#033[00m
Feb 25 07:42:21 np0005629333 nova_compute[244014]: 2026-02-25 12:42:21.811 244018 INFO nova.virt.libvirt.driver [-] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Instance destroyed successfully.#033[00m
Feb 25 07:42:21 np0005629333 nova_compute[244014]: 2026-02-25 12:42:21.811 244018 DEBUG nova.objects.instance [None req-5b47d73b-67aa-4c66-8a16-46e01e8f20e1 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'numa_topology' on Instance uuid b8d9acc4-7912-4d26-bad6-1159f6993361 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:42:21 np0005629333 nova_compute[244014]: 2026-02-25 12:42:21.825 244018 DEBUG nova.compute.manager [None req-5b47d73b-67aa-4c66-8a16-46e01e8f20e1 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:42:21 np0005629333 nova_compute[244014]: 2026-02-25 12:42:21.866 244018 DEBUG oslo_concurrency.lockutils [None req-5b47d73b-67aa-4c66-8a16-46e01e8f20e1 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:22 np0005629333 nova_compute[244014]: 2026-02-25 12:42:22.060 244018 DEBUG nova.network.neutron [req-0310f901-077d-44c2-a76e-10599499f115 req-58b59377-e8fa-4487-b10a-b42825bbebda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Updated VIF entry in instance network info cache for port ee68d04f-36ba-4727-ab4b-c31e559353e0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:42:22 np0005629333 nova_compute[244014]: 2026-02-25 12:42:22.061 244018 DEBUG nova.network.neutron [req-0310f901-077d-44c2-a76e-10599499f115 req-58b59377-e8fa-4487-b10a-b42825bbebda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Updating instance_info_cache with network_info: [{"id": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "address": "fa:16:3e:b7:74:9a", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb7:749a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapee68d04f-36", "ovs_interfaceid": "ee68d04f-36ba-4727-ab4b-c31e559353e0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
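Worth noting: in this refreshed cache entry the port's floating_ips list is now empty, whereas the conversion at the top of this excerpt still carried 192.168.122.236. A small walker for pulling fixed and floating addresses out of such a network_info blob (the inline sample is trimmed to the relevant fields):

    # List fixed/floating IPs per port from a nova network_info entry as
    # logged above; the sample dict is trimmed from the logged JSON.
    import json

    network_info = json.loads('''[{
      "id": "ee68d04f-36ba-4727-ab4b-c31e559353e0",
      "network": {"subnets": [
        {"ips": [{"address": "2001:db8::f816:3eff:feb7:749a",
                  "floating_ips": []}]},
        {"ips": [{"address": "10.100.0.8", "floating_ips": []}]}]}}]''')

    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floats = [f['address'] for f in ip['floating_ips']]
                print(vif['id'], ip['address'], 'floating:', floats or '-')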
Feb 25 07:42:22 np0005629333 nova_compute[244014]: 2026-02-25 12:42:22.077 244018 DEBUG oslo_concurrency.lockutils [req-0310f901-077d-44c2-a76e-10599499f115 req-58b59377-e8fa-4487-b10a-b42825bbebda 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-227efbfe-da43-423a-8652-9636ecded4cd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:42:22 np0005629333 nova_compute[244014]: 2026-02-25 12:42:22.138 244018 DEBUG nova.network.neutron [-] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:42:22 np0005629333 nova_compute[244014]: 2026-02-25 12:42:22.155 244018 INFO nova.compute.manager [-] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Took 0.99 seconds to deallocate network for instance.#033[00m
Feb 25 07:42:22 np0005629333 nova_compute[244014]: 2026-02-25 12:42:22.202 244018 DEBUG oslo_concurrency.lockutils [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:22 np0005629333 nova_compute[244014]: 2026-02-25 12:42:22.202 244018 DEBUG oslo_concurrency.lockutils [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 305 active+clean; 464 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.0 MiB/s wr, 195 op/s
Feb 25 07:42:22 np0005629333 nova_compute[244014]: 2026-02-25 12:42:22.534 244018 DEBUG oslo_concurrency.processutils [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:42:22 np0005629333 nova_compute[244014]: 2026-02-25 12:42:22.747 244018 DEBUG nova.compute.manager [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Received event network-vif-plugged-ee68d04f-36ba-4727-ab4b-c31e559353e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:42:22 np0005629333 podman[338212]: 2026-02-25 12:42:22.748479855 +0000 UTC m=+0.084702771 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible)
Feb 25 07:42:22 np0005629333 nova_compute[244014]: 2026-02-25 12:42:22.748 244018 DEBUG oslo_concurrency.lockutils [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "227efbfe-da43-423a-8652-9636ecded4cd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:22 np0005629333 nova_compute[244014]: 2026-02-25 12:42:22.749 244018 DEBUG oslo_concurrency.lockutils [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:22 np0005629333 nova_compute[244014]: 2026-02-25 12:42:22.749 244018 DEBUG oslo_concurrency.lockutils [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:22 np0005629333 nova_compute[244014]: 2026-02-25 12:42:22.749 244018 DEBUG nova.compute.manager [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] No waiting events found dispatching network-vif-plugged-ee68d04f-36ba-4727-ab4b-c31e559353e0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:42:22 np0005629333 nova_compute[244014]: 2026-02-25 12:42:22.750 244018 WARNING nova.compute.manager [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Received unexpected event network-vif-plugged-ee68d04f-36ba-4727-ab4b-c31e559353e0 for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:42:22 np0005629333 nova_compute[244014]: 2026-02-25 12:42:22.750 244018 DEBUG nova.compute.manager [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Received event network-vif-unplugged-6178078f-c3e6-4ed1-86fa-d99671618a75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:42:22 np0005629333 nova_compute[244014]: 2026-02-25 12:42:22.750 244018 DEBUG oslo_concurrency.lockutils [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:22 np0005629333 nova_compute[244014]: 2026-02-25 12:42:22.750 244018 DEBUG oslo_concurrency.lockutils [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:22 np0005629333 nova_compute[244014]: 2026-02-25 12:42:22.751 244018 DEBUG oslo_concurrency.lockutils [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:22 np0005629333 nova_compute[244014]: 2026-02-25 12:42:22.751 244018 DEBUG nova.compute.manager [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] No waiting events found dispatching network-vif-unplugged-6178078f-c3e6-4ed1-86fa-d99671618a75 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:42:22 np0005629333 nova_compute[244014]: 2026-02-25 12:42:22.751 244018 WARNING nova.compute.manager [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Received unexpected event network-vif-unplugged-6178078f-c3e6-4ed1-86fa-d99671618a75 for instance with vm_state stopped and task_state None.#033[00m
Feb 25 07:42:22 np0005629333 nova_compute[244014]: 2026-02-25 12:42:22.751 244018 DEBUG nova.compute.manager [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Received event network-vif-deleted-ee68d04f-36ba-4727-ab4b-c31e559353e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:42:22 np0005629333 nova_compute[244014]: 2026-02-25 12:42:22.752 244018 DEBUG nova.compute.manager [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Received event network-vif-plugged-6178078f-c3e6-4ed1-86fa-d99671618a75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:42:22 np0005629333 nova_compute[244014]: 2026-02-25 12:42:22.752 244018 DEBUG oslo_concurrency.lockutils [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:22 np0005629333 nova_compute[244014]: 2026-02-25 12:42:22.752 244018 DEBUG oslo_concurrency.lockutils [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:22 np0005629333 nova_compute[244014]: 2026-02-25 12:42:22.752 244018 DEBUG oslo_concurrency.lockutils [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:22 np0005629333 nova_compute[244014]: 2026-02-25 12:42:22.753 244018 DEBUG nova.compute.manager [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] No waiting events found dispatching network-vif-plugged-6178078f-c3e6-4ed1-86fa-d99671618a75 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:42:22 np0005629333 nova_compute[244014]: 2026-02-25 12:42:22.753 244018 WARNING nova.compute.manager [req-ae5b1608-2088-40f5-9060-4a445f4fa6e5 req-55680e11-c3cb-4706-9d6c-7e0b9e2fbd41 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Received unexpected event network-vif-plugged-6178078f-c3e6-4ed1-86fa-d99671618a75 for instance with vm_state stopped and task_state None.#033[00m
Feb 25 07:42:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:42:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3671590202' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:42:23 np0005629333 nova_compute[244014]: 2026-02-25 12:42:23.089 244018 DEBUG oslo_concurrency.processutils [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
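With RBD-backed storage, the resource tracker sizes disk by shelling out to the real ceph CLI, as the 0.555s round trip above shows. The same call and the capacity fields it reads, assuming the ceph CLI, the client.openstack keyring, and /etc/ceph/ceph.conf are in place:

    # Same "ceph df" invocation the resource tracker logs above, parsed for
    # cluster capacity (field names per "ceph df --format=json").
    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    gib = 1024 ** 3
    print('total %.0f GiB, avail %.0f GiB' % (
        stats['total_bytes'] / gib, stats['total_avail_bytes'] / gib))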
Feb 25 07:42:23 np0005629333 nova_compute[244014]: 2026-02-25 12:42:23.097 244018 DEBUG nova.compute.provider_tree [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:42:23 np0005629333 nova_compute[244014]: 2026-02-25 12:42:23.111 244018 DEBUG nova.scheduler.client.report [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
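The inventory above fixes what placement will allow on this node as (total - reserved) * allocation_ratio per resource class. Recomputed from the logged values:

    # Effective placement capacity implied by the inventory logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print('%s: %g' % (rc, cap))   # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2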
Feb 25 07:42:23 np0005629333 nova_compute[244014]: 2026-02-25 12:42:23.131 244018 DEBUG oslo_concurrency.lockutils [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:23 np0005629333 nova_compute[244014]: 2026-02-25 12:42:23.179 244018 INFO nova.scheduler.client.report [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance 227efbfe-da43-423a-8652-9636ecded4cd#033[00m
Feb 25 07:42:23 np0005629333 nova_compute[244014]: 2026-02-25 12:42:23.269 244018 DEBUG oslo_concurrency.lockutils [None req-57a5acc5-d4c7-461a-8b8e-17f5740b5eb6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "227efbfe-da43-423a-8652-9636ecded4cd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:42:23 np0005629333 nova_compute[244014]: 2026-02-25 12:42:23.327 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Updating instance_info_cache with network_info: [{"id": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "address": "fa:16:3e:95:77:21", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7721", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96ad4438-9f", "ovs_interfaceid": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:42:23 np0005629333 nova_compute[244014]: 2026-02-25 12:42:23.344 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:42:23 np0005629333 nova_compute[244014]: 2026-02-25 12:42:23.344 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 25 07:42:23 np0005629333 nova_compute[244014]: 2026-02-25 12:42:23.344 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:42:23 np0005629333 nova_compute[244014]: 2026-02-25 12:42:23.855 244018 DEBUG nova.compute.manager [req-357a5708-9950-46f2-859d-82a93c518e71 req-0db0903c-2594-485c-bdc5-aef282da88e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Received event network-changed-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:42:23 np0005629333 nova_compute[244014]: 2026-02-25 12:42:23.855 244018 DEBUG nova.compute.manager [req-357a5708-9950-46f2-859d-82a93c518e71 req-0db0903c-2594-485c-bdc5-aef282da88e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Refreshing instance network info cache due to event network-changed-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:42:23 np0005629333 nova_compute[244014]: 2026-02-25 12:42:23.856 244018 DEBUG oslo_concurrency.lockutils [req-357a5708-9950-46f2-859d-82a93c518e71 req-0db0903c-2594-485c-bdc5-aef282da88e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:42:23 np0005629333 nova_compute[244014]: 2026-02-25 12:42:23.856 244018 DEBUG oslo_concurrency.lockutils [req-357a5708-9950-46f2-859d-82a93c518e71 req-0db0903c-2594-485c-bdc5-aef282da88e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:42:23 np0005629333 nova_compute[244014]: 2026-02-25 12:42:23.856 244018 DEBUG nova.network.neutron [req-357a5708-9950-46f2-859d-82a93c518e71 req-0db0903c-2594-485c-bdc5-aef282da88e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Refreshing network info cache for port 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:42:23 np0005629333 nova_compute[244014]: 2026-02-25 12:42:23.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:42:23 np0005629333 nova_compute[244014]: 2026-02-25 12:42:23.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:42:23 np0005629333 nova_compute[244014]: 2026-02-25 12:42:23.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:42:23 np0005629333 nova_compute[244014]: 2026-02-25 12:42:23.899 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:23 np0005629333 nova_compute[244014]: 2026-02-25 12:42:23.900 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:23 np0005629333 nova_compute[244014]: 2026-02-25 12:42:23.900 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:23 np0005629333 nova_compute[244014]: 2026-02-25 12:42:23.900 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:42:23 np0005629333 nova_compute[244014]: 2026-02-25 12:42:23.901 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.039 244018 DEBUG oslo_concurrency.lockutils [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "d547b0db-242e-49a5-8a76-5682b0235b6d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.039 244018 DEBUG oslo_concurrency.lockutils [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.040 244018 DEBUG oslo_concurrency.lockutils [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.040 244018 DEBUG oslo_concurrency.lockutils [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.041 244018 DEBUG oslo_concurrency.lockutils [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.043 244018 INFO nova.compute.manager [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Terminating instance#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.046 244018 DEBUG nova.compute.manager [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:42:24 np0005629333 kernel: tap96ad4438-9f (unregistering): left promiscuous mode
Feb 25 07:42:24 np0005629333 NetworkManager[49836]: <info>  [1772023344.0939] device (tap96ad4438-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:42:24 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:24Z|01041|binding|INFO|Releasing lport 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 from this chassis (sb_readonly=0)
Feb 25 07:42:24 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:24Z|01042|binding|INFO|Setting lport 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 down in Southbound
Feb 25 07:42:24 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:24Z|01043|binding|INFO|Removing iface tap96ad4438-9f ovn-installed in OVS
Feb 25 07:42:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:24.112 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:77:21 10.100.0.12 2001:db8::f816:3eff:fe95:7721'], port_security=['fa:16:3e:95:77:21 10.100.0.12 2001:db8::f816:3eff:fe95:7721'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28 2001:db8::f816:3eff:fe95:7721/64', 'neutron:device_id': 'd547b0db-242e-49a5-8a76-5682b0235b6d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '219ff989-2dc4-4de6-abcc-fec08f1c06f5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5cb98e41-b63f-472f-91ce-2c5130dfa4a8, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=96ad4438-9fbd-4fbe-ae7f-af0eecb763c1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:42:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:24.115 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 in datapath 9ec47b46-6b4d-4267-964b-6f16eef9a7b1 unbound from our chassis#033[00m
Feb 25 07:42:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:24.117 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9ec47b46-6b4d-4267-964b-6f16eef9a7b1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
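The agent reacted to the Port_Binding update above through ovsdbapp's IDL event machinery: an event object declares the table and event types it watches, and each matching row is handed to run(). A minimal sketch of such an event (class name and body are illustrative, not neutron's exact PortBindingUpdatedEvent):

    # Sketch of an ovsdbapp row event like the one matched above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingChanged(row_event.RowEvent):
        def __init__(self):
            # (events, table, conditions) -- as in the log:
            # events=('update',), table='Port_Binding', conditions=None
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' carries the previous values of the changed columns
            # (here: up=[True] and the old chassis).
            print('lport %s up=%s' % (row.logical_port, row.up))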
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.117 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:24.119 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d7597d36-86cc-4e52-9c40-1b229cdfc10a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:24.119 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1 namespace which is not needed anymore#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.126 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:24 np0005629333 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000068.scope: Deactivated successfully.
Feb 25 07:42:24 np0005629333 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000068.scope: Consumed 13.914s CPU time.
Feb 25 07:42:24 np0005629333 systemd-machined[210048]: Machine qemu-131-instance-00000068 terminated.
Feb 25 07:42:24 np0005629333 neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1[335366]: [NOTICE]   (335370) : haproxy version is 2.8.14-c23fe91
Feb 25 07:42:24 np0005629333 neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1[335366]: [NOTICE]   (335370) : path to executable is /usr/sbin/haproxy
Feb 25 07:42:24 np0005629333 neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1[335366]: [WARNING]  (335370) : Exiting Master process...
Feb 25 07:42:24 np0005629333 neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1[335366]: [ALERT]    (335370) : Current worker (335372) exited with code 143 (Terminated)
Feb 25 07:42:24 np0005629333 neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1[335366]: [WARNING]  (335370) : All workers exited. Exiting... (0)
Feb 25 07:42:24 np0005629333 systemd[1]: libpod-5f09185511d4e215a85b7a2be2a3fdd458b7c1e274aa11f0c181571c358ee95b.scope: Deactivated successfully.
Feb 25 07:42:24 np0005629333 podman[338304]: 2026-02-25 12:42:24.252035306 +0000 UTC m=+0.048194051 container died 5f09185511d4e215a85b7a2be2a3fdd458b7c1e274aa11f0c181571c358ee95b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.282 244018 INFO nova.virt.libvirt.driver [-] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Instance destroyed successfully.#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.283 244018 DEBUG nova.objects.instance [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid d547b0db-242e-49a5-8a76-5682b0235b6d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:42:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 305 active+clean; 438 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 172 op/s
Feb 25 07:42:24 np0005629333 systemd[1]: var-lib-containers-storage-overlay-02aa81a3db703ffc9f0b50010648091a42999b6e192082fa62a6c0558305e7bb-merged.mount: Deactivated successfully.
Feb 25 07:42:24 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5f09185511d4e215a85b7a2be2a3fdd458b7c1e274aa11f0c181571c358ee95b-userdata-shm.mount: Deactivated successfully.
Feb 25 07:42:24 np0005629333 podman[338304]: 2026-02-25 12:42:24.29897384 +0000 UTC m=+0.095132565 container cleanup 5f09185511d4e215a85b7a2be2a3fdd458b7c1e274aa11f0c181571c358ee95b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223)
Feb 25 07:42:24 np0005629333 systemd[1]: libpod-conmon-5f09185511d4e215a85b7a2be2a3fdd458b7c1e274aa11f0c181571c358ee95b.scope: Deactivated successfully.
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.317 244018 DEBUG nova.virt.libvirt.vif [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:41:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-43297595',display_name='tempest-TestGettingAddress-server-43297595',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-43297595',id=104,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOF5qZPMlaCDABTzKIN8P76NPvIVQ2htm3U8AfLvXmjtQ5ATadIDI8WR25EzcFon8xGJAOKa64XoS6ByiRlqYYuZKqui9AsV1f+Y3cN0xRd0ljo9lo2zQ0wdsqjfZAW6NA==',key_name='tempest-TestGettingAddress-1477618447',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:41:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-k2no7ltx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:41:15Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=d547b0db-242e-49a5-8a76-5682b0235b6d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "address": "fa:16:3e:95:77:21", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7721", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96ad4438-9f", "ovs_interfaceid": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.317 244018 DEBUG nova.network.os_vif_util [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "address": "fa:16:3e:95:77:21", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7721", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96ad4438-9f", "ovs_interfaceid": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.318 244018 DEBUG nova.network.os_vif_util [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:95:77:21,bridge_name='br-int',has_traffic_filtering=True,id=96ad4438-9fbd-4fbe-ae7f-af0eecb763c1,network=Network(9ec47b46-6b4d-4267-964b-6f16eef9a7b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96ad4438-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.319 244018 DEBUG os_vif [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:95:77:21,bridge_name='br-int',has_traffic_filtering=True,id=96ad4438-9fbd-4fbe-ae7f-af0eecb763c1,network=Network(9ec47b46-6b4d-4267-964b-6f16eef9a7b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96ad4438-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
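The three lines above show nova converting its internal VIF dict into an os-vif VIFOpenVSwitch object and handing it to the os-vif library, whose 'ovs' plugin performs the actual unplug. The library-level entry point, sketched with stub objects standing in for the full ones in the log:

    # Sketch of the os-vif call chain logged above. os_vif.unplug()
    # dispatches to the plugin named in the VIF object ('ovs' here).
    import os_vif
    from os_vif.objects import instance_info

    os_vif.initialize()  # load plugins once per process
    inst = instance_info.InstanceInfo(
        uuid='d547b0db-242e-49a5-8a76-5682b0235b6d',
        name='instance-00000068')
    # vif = the VIFOpenVSwitch object from the "Converted object" line
    # os_vif.unplug(vif, inst)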
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.320 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.320 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96ad4438-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
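The unplug itself reduces to the single ovsdbapp transaction above: a DelPortCommand with if_exists=True against the local Open vSwitch database. Roughly, using ovsdbapp's Open_vSwitch schema API (the socket path is the usual default and an assumption here):

    # Sketch of the DelPortCommand transaction logged above.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/var/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    ovs.del_port('tap96ad4438-9f', bridge='br-int',
                 if_exists=True).execute(check_error=True)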
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.322 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.324 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.331 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.337 244018 INFO os_vif [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:95:77:21,bridge_name='br-int',has_traffic_filtering=True,id=96ad4438-9fbd-4fbe-ae7f-af0eecb763c1,network=Network(9ec47b46-6b4d-4267-964b-6f16eef9a7b1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap96ad4438-9f')#033[00m
Feb 25 07:42:24 np0005629333 podman[338343]: 2026-02-25 12:42:24.373161114 +0000 UTC m=+0.053025468 container remove 5f09185511d4e215a85b7a2be2a3fdd458b7c1e274aa11f0c181571c358ee95b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 07:42:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:24.380 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[95a8716a-dad7-43ce-96f7-445a11e67188]: (4, ('Wed Feb 25 12:42:24 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1 (5f09185511d4e215a85b7a2be2a3fdd458b7c1e274aa11f0c181571c358ee95b)\n5f09185511d4e215a85b7a2be2a3fdd458b7c1e274aa11f0c181571c358ee95b\nWed Feb 25 12:42:24 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1 (5f09185511d4e215a85b7a2be2a3fdd458b7c1e274aa11f0c181571c358ee95b)\n5f09185511d4e215a85b7a2be2a3fdd458b7c1e274aa11f0c181571c358ee95b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
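That privsep reply carries the stdout of the helper that stops and removes the per-network haproxy container named in the podman lines above. Roughly the same sequence through the podman CLI, shown via subprocess purely as an illustration:

    # Sketch of the container stop/delete reported in the privsep
    # reply above (illustrative; the agent drives this through a
    # helper script inside privsep, not this exact code).
    import subprocess

    name = 'neutron-haproxy-ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1'
    subprocess.run(['podman', 'stop', name], check=True)
    subprocess.run(['podman', 'rm', name], check=True)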
Feb 25 07:42:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:24.383 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[00c8f851-b9ba-4c02-af22-69f9dab2d1a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:24.384 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ec47b46-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.386 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:24 np0005629333 kernel: tap9ec47b46-60: left promiscuous mode
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.402 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.403 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:24.406 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f0359870-14bf-4eba-b84d-18f8c96f8f27]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:24.418 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6afb2e5f-07d9-4a72-9a74-f7136567fe7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:24.420 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a692dfd4-5702-4b92-9a84-bfef86f3cf5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:24.440 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3a448496-2abf-4620-9a9c-1c66a65da238]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524291, 'reachable_time': 42330, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338381, 'error': None, 'target': 'ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:24 np0005629333 systemd[1]: run-netns-ovnmeta\x2d9ec47b46\x2d6b4d\x2d4267\x2d964b\x2d6f16eef9a7b1.mount: Deactivated successfully.
Feb 25 07:42:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:24.444 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:42:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:24.444 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[7822310b-f362-4ce0-9b5c-d3913fa0886d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
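The large netlink structure a few replies back is a pyroute2 dump of the namespace's remaining links (only 'lo' is left; note the 'target' field naming the namespace), taken just before the privileged daemon deletes the namespace itself. Approximately, with pyroute2, which is what neutron's privileged ip_lib wraps:

    # Sketch: inspect then delete the ovnmeta namespace the way the
    # privsep-side code does, via pyroute2 (needs CAP_SYS_ADMIN,
    # hence the privsep daemon).
    from pyroute2 import NetNS, netns

    ns_name = 'ovnmeta-9ec47b46-6b4d-4267-964b-6f16eef9a7b1'
    with NetNS(ns_name) as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'))  # -> 'lo'
    netns.remove(ns_name)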
Feb 25 07:42:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:42:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1306296653' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.567 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.665s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.620 244018 INFO nova.virt.libvirt.driver [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Deleting instance files /var/lib/nova/instances/d547b0db-242e-49a5-8a76-5682b0235b6d_del#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.622 244018 INFO nova.virt.libvirt.driver [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Deletion of /var/lib/nova/instances/d547b0db-242e-49a5-8a76-5682b0235b6d_del complete#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.682 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.683 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000061 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.686 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.686 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000006b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.693 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.693 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.697 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.697 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.714 244018 INFO nova.compute.manager [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Took 0.67 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.715 244018 DEBUG oslo.service.loopingcall [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.715 244018 DEBUG nova.compute.manager [-] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.716 244018 DEBUG nova.network.neutron [-] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
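The "Waiting for function ... to return" line above comes from oslo.service's looping-call helper, which nova wraps around network deallocation so it can be retried. The mechanism, sketched minimally (the retried function is a placeholder, not nova's exact call):

    # Sketch of an oslo.service looping call like the one used for
    # _deallocate_network_with_retries above; raising LoopingCallDone
    # ends the loop and hands back a value.
    from oslo_service import loopingcall

    def _deallocate_with_retries():
        # ...attempt the deallocation; on success stop the loop:
        raise loopingcall.LoopingCallDone(retvalue=True)

    call = loopingcall.FixedIntervalLoopingCall(_deallocate_with_retries)
    result = call.start(interval=1).wait()  # True once the call succeeds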
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.837 244018 DEBUG nova.compute.manager [req-c04b7db0-b96b-454a-b96d-5a6ebf1b3e5e req-0d742b5f-2231-40cb-88ed-940161450fbb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Received event network-vif-unplugged-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.837 244018 DEBUG oslo_concurrency.lockutils [req-c04b7db0-b96b-454a-b96d-5a6ebf1b3e5e req-0d742b5f-2231-40cb-88ed-940161450fbb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.837 244018 DEBUG oslo_concurrency.lockutils [req-c04b7db0-b96b-454a-b96d-5a6ebf1b3e5e req-0d742b5f-2231-40cb-88ed-940161450fbb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.838 244018 DEBUG oslo_concurrency.lockutils [req-c04b7db0-b96b-454a-b96d-5a6ebf1b3e5e req-0d742b5f-2231-40cb-88ed-940161450fbb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.838 244018 DEBUG nova.compute.manager [req-c04b7db0-b96b-454a-b96d-5a6ebf1b3e5e req-0d742b5f-2231-40cb-88ed-940161450fbb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] No waiting events found dispatching network-vif-unplugged-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.838 244018 DEBUG nova.compute.manager [req-c04b7db0-b96b-454a-b96d-5a6ebf1b3e5e req-0d742b5f-2231-40cb-88ed-940161450fbb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Received event network-vif-unplugged-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.856 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.857 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3351MB free_disk=59.83013467211276GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.857 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.857 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.952 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 62d2fee1-f07f-44e3-a511-6b9bb341a3ed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.953 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance d547b0db-242e-49a5-8a76-5682b0235b6d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.953 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance b8d9acc4-7912-4d26-bad6-1159f6993361 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.953 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 11f1a7e0-6001-4367-8491-5b5508f56bdb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.953 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 07:42:24 np0005629333 nova_compute[244014]: 2026-02-25 12:42:24.953 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
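The final view squares with the per-instance allocations listed just above: four instances, each holding {DISK_GB: 1, MEMORY_MB: 128, VCPU: 1}, give used_vcpus=4 and used_disk=4GB, and used_ram=1024MB is those 4x128MB plus the host's 512MB reserved memory (see the inventory line further down). As a quick check:

    # Cross-check of the "Final resource view" numbers against the
    # four placement allocations logged above (512 MB reserved memory
    # from the inventory reported later in this capture).
    allocs = [{'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}] * 4
    used_vcpus = sum(a['VCPU'] for a in allocs)              # 4
    used_disk = sum(a['DISK_GB'] for a in allocs)            # 4 GB
    used_ram = sum(a['MEMORY_MB'] for a in allocs) + 512     # 1024 MB
    assert (used_vcpus, used_disk, used_ram) == (4, 4, 1024)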
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.039 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.081 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.345 244018 DEBUG nova.network.neutron [-] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.368 244018 INFO nova.compute.manager [-] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Took 0.65 seconds to deallocate network for instance.#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.438 244018 DEBUG oslo_concurrency.lockutils [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.439 244018 DEBUG oslo_concurrency.lockutils [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "b8d9acc4-7912-4d26-bad6-1159f6993361" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.440 244018 DEBUG oslo_concurrency.lockutils [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.441 244018 DEBUG oslo_concurrency.lockutils [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.441 244018 DEBUG oslo_concurrency.lockutils [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.441 244018 DEBUG oslo_concurrency.lockutils [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.444 244018 INFO nova.compute.manager [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Terminating instance#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.445 244018 DEBUG nova.compute.manager [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.455 244018 INFO nova.virt.libvirt.driver [-] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Instance destroyed successfully.#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.456 244018 DEBUG nova.objects.instance [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'resources' on Instance uuid b8d9acc4-7912-4d26-bad6-1159f6993361 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.467 244018 DEBUG nova.virt.libvirt.vif [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:41:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1506093231',display_name='tempest-Íñstáñcé-59660951',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1506093231',id=107,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:42:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-p42ovzbe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:42:23Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=b8d9acc4-7912-4d26-bad6-1159f6993361,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "6178078f-c3e6-4ed1-86fa-d99671618a75", "address": "fa:16:3e:e2:bd:b3", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6178078f-c3", "ovs_interfaceid": "6178078f-c3e6-4ed1-86fa-d99671618a75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.468 244018 DEBUG nova.network.os_vif_util [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "6178078f-c3e6-4ed1-86fa-d99671618a75", "address": "fa:16:3e:e2:bd:b3", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6178078f-c3", "ovs_interfaceid": "6178078f-c3e6-4ed1-86fa-d99671618a75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.469 244018 DEBUG nova.network.os_vif_util [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:bd:b3,bridge_name='br-int',has_traffic_filtering=True,id=6178078f-c3e6-4ed1-86fa-d99671618a75,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6178078f-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.469 244018 DEBUG os_vif [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:bd:b3,bridge_name='br-int',has_traffic_filtering=True,id=6178078f-c3e6-4ed1-86fa-d99671618a75,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6178078f-c3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.472 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.472 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6178078f-c3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.474 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.477 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.480 244018 INFO os_vif [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:bd:b3,bridge_name='br-int',has_traffic_filtering=True,id=6178078f-c3e6-4ed1-86fa-d99671618a75,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6178078f-c3')#033[00m
Feb 25 07:42:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:42:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1526370141' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.680 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.687 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.706 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
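Placement derives effective capacity from this inventory as (total - reserved) x allocation_ratio per resource class; for the values reported above:

    # Effective placement capacity implied by the inventory line above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2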
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.726 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.726 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.727 244018 DEBUG oslo_concurrency.lockutils [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.289s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.809 244018 INFO nova.virt.libvirt.driver [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Deleting instance files /var/lib/nova/instances/b8d9acc4-7912-4d26-bad6-1159f6993361_del#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.810 244018 INFO nova.virt.libvirt.driver [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Deletion of /var/lib/nova/instances/b8d9acc4-7912-4d26-bad6-1159f6993361_del complete#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.829 244018 DEBUG oslo_concurrency.processutils [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.900 244018 INFO nova.compute.manager [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Took 0.45 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.901 244018 DEBUG oslo.service.loopingcall [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.901 244018 DEBUG nova.compute.manager [-] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.901 244018 DEBUG nova.network.neutron [-] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.906 244018 DEBUG nova.network.neutron [req-357a5708-9950-46f2-859d-82a93c518e71 req-0db0903c-2594-485c-bdc5-aef282da88e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Updated VIF entry in instance network info cache for port 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.906 244018 DEBUG nova.network.neutron [req-357a5708-9950-46f2-859d-82a93c518e71 req-0db0903c-2594-485c-bdc5-aef282da88e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Updating instance_info_cache with network_info: [{"id": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "address": "fa:16:3e:95:77:21", "network": {"id": "9ec47b46-6b4d-4267-964b-6f16eef9a7b1", "bridge": "br-int", "label": "tempest-network-smoke--1966940503", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe95:7721", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap96ad4438-9f", "ovs_interfaceid": "96ad4438-9fbd-4fbe-ae7f-af0eecb763c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.923 244018 DEBUG oslo_concurrency.lockutils [req-357a5708-9950-46f2-859d-82a93c518e71 req-0db0903c-2594-485c-bdc5-aef282da88e4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d547b0db-242e-49a5-8a76-5682b0235b6d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.949 244018 DEBUG nova.compute.manager [req-a90e0961-908b-439a-bcab-5f526d743fcd req-3a13d184-27b8-42e2-be46-9d6a6c838adf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Received event network-vif-deleted-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.949 244018 INFO nova.compute.manager [req-a90e0961-908b-439a-bcab-5f526d743fcd req-3a13d184-27b8-42e2-be46-9d6a6c838adf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Neutron deleted interface 96ad4438-9fbd-4fbe-ae7f-af0eecb763c1; detaching it from the instance and deleting it from the info cache#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.950 244018 DEBUG nova.network.neutron [req-a90e0961-908b-439a-bcab-5f526d743fcd req-3a13d184-27b8-42e2-be46-9d6a6c838adf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:42:25 np0005629333 nova_compute[244014]: 2026-02-25 12:42:25.971 244018 DEBUG nova.compute.manager [req-a90e0961-908b-439a-bcab-5f526d743fcd req-3a13d184-27b8-42e2-be46-9d6a6c838adf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Detach interface failed, port_id=96ad4438-9fbd-4fbe-ae7f-af0eecb763c1, reason: Instance d547b0db-242e-49a5-8a76-5682b0235b6d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 25 07:42:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1838: 305 pgs: 305 active+clean; 438 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 159 op/s
Feb 25 07:42:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:42:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3437893879' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:42:26 np0005629333 nova_compute[244014]: 2026-02-25 12:42:26.361 244018 DEBUG oslo_concurrency.processutils [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:42:26 np0005629333 nova_compute[244014]: 2026-02-25 12:42:26.364 244018 DEBUG nova.compute.provider_tree [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:42:26 np0005629333 nova_compute[244014]: 2026-02-25 12:42:26.379 244018 DEBUG nova.scheduler.client.report [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:42:26 np0005629333 nova_compute[244014]: 2026-02-25 12:42:26.404 244018 DEBUG oslo_concurrency.lockutils [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:26 np0005629333 nova_compute[244014]: 2026-02-25 12:42:26.431 244018 INFO nova.scheduler.client.report [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance d547b0db-242e-49a5-8a76-5682b0235b6d#033[00m
Feb 25 07:42:26 np0005629333 nova_compute[244014]: 2026-02-25 12:42:26.487 244018 DEBUG oslo_concurrency.lockutils [None req-def452d9-fc73-45bc-bca7-0e47fc069b6b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.448s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:26Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3e:d0:2e 10.100.0.4
Feb 25 07:42:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:26Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3e:d0:2e 10.100.0.4
Feb 25 07:42:26 np0005629333 nova_compute[244014]: 2026-02-25 12:42:26.691 244018 DEBUG nova.network.neutron [-] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:42:26 np0005629333 nova_compute[244014]: 2026-02-25 12:42:26.717 244018 INFO nova.compute.manager [-] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Took 0.82 seconds to deallocate network for instance.#033[00m
Feb 25 07:42:26 np0005629333 nova_compute[244014]: 2026-02-25 12:42:26.766 244018 DEBUG oslo_concurrency.lockutils [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:26 np0005629333 nova_compute[244014]: 2026-02-25 12:42:26.767 244018 DEBUG oslo_concurrency.lockutils [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:26 np0005629333 nova_compute[244014]: 2026-02-25 12:42:26.852 244018 DEBUG oslo_concurrency.processutils [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:42:26 np0005629333 nova_compute[244014]: 2026-02-25 12:42:26.950 244018 DEBUG nova.compute.manager [req-42a8efd8-0776-496e-b60a-526b1767cdf2 req-ee8dfc03-94d4-4ea8-b3c2-5409ab10fe68 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Received event network-vif-plugged-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:42:26 np0005629333 nova_compute[244014]: 2026-02-25 12:42:26.951 244018 DEBUG oslo_concurrency.lockutils [req-42a8efd8-0776-496e-b60a-526b1767cdf2 req-ee8dfc03-94d4-4ea8-b3c2-5409ab10fe68 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:26 np0005629333 nova_compute[244014]: 2026-02-25 12:42:26.952 244018 DEBUG oslo_concurrency.lockutils [req-42a8efd8-0776-496e-b60a-526b1767cdf2 req-ee8dfc03-94d4-4ea8-b3c2-5409ab10fe68 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:26 np0005629333 nova_compute[244014]: 2026-02-25 12:42:26.952 244018 DEBUG oslo_concurrency.lockutils [req-42a8efd8-0776-496e-b60a-526b1767cdf2 req-ee8dfc03-94d4-4ea8-b3c2-5409ab10fe68 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d547b0db-242e-49a5-8a76-5682b0235b6d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:26 np0005629333 nova_compute[244014]: 2026-02-25 12:42:26.952 244018 DEBUG nova.compute.manager [req-42a8efd8-0776-496e-b60a-526b1767cdf2 req-ee8dfc03-94d4-4ea8-b3c2-5409ab10fe68 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] No waiting events found dispatching network-vif-plugged-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:42:26 np0005629333 nova_compute[244014]: 2026-02-25 12:42:26.953 244018 WARNING nova.compute.manager [req-42a8efd8-0776-496e-b60a-526b1767cdf2 req-ee8dfc03-94d4-4ea8-b3c2-5409ab10fe68 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Received unexpected event network-vif-plugged-96ad4438-9fbd-4fbe-ae7f-af0eecb763c1 for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:42:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:42:27 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3715076582' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:42:27 np0005629333 nova_compute[244014]: 2026-02-25 12:42:27.421 244018 DEBUG oslo_concurrency.processutils [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:42:27 np0005629333 nova_compute[244014]: 2026-02-25 12:42:27.428 244018 DEBUG nova.compute.provider_tree [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:42:27 np0005629333 nova_compute[244014]: 2026-02-25 12:42:27.448 244018 DEBUG nova.scheduler.client.report [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:42:27 np0005629333 nova_compute[244014]: 2026-02-25 12:42:27.654 244018 DEBUG oslo_concurrency.lockutils [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:27 np0005629333 nova_compute[244014]: 2026-02-25 12:42:27.688 244018 INFO nova.scheduler.client.report [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Deleted allocations for instance b8d9acc4-7912-4d26-bad6-1159f6993361#033[00m
Feb 25 07:42:27 np0005629333 nova_compute[244014]: 2026-02-25 12:42:27.727 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:42:27 np0005629333 nova_compute[244014]: 2026-02-25 12:42:27.727 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:42:27 np0005629333 nova_compute[244014]: 2026-02-25 12:42:27.744 244018 DEBUG oslo_concurrency.lockutils [None req-4d806d08-3264-41d1-88a5-60a78766fccc 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "b8d9acc4-7912-4d26-bad6-1159f6993361" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:27 np0005629333 nova_compute[244014]: 2026-02-25 12:42:27.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:42:28 np0005629333 nova_compute[244014]: 2026-02-25 12:42:28.047 244018 DEBUG nova.compute.manager [req-2299ce07-f709-4b29-8d58-c633436af3a7 req-df7f2e5a-8262-431f-921a-21fa2fcbc5d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Received event network-vif-deleted-6178078f-c3e6-4ed1-86fa-d99671618a75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:42:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1839: 305 pgs: 305 active+clean; 312 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 274 op/s
Feb 25 07:42:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:42:29 np0005629333 nova_compute[244014]: 2026-02-25 12:42:29.920 244018 DEBUG oslo_concurrency.lockutils [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:29 np0005629333 nova_compute[244014]: 2026-02-25 12:42:29.920 244018 DEBUG oslo_concurrency.lockutils [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:29 np0005629333 nova_compute[244014]: 2026-02-25 12:42:29.920 244018 DEBUG oslo_concurrency.lockutils [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:29 np0005629333 nova_compute[244014]: 2026-02-25 12:42:29.921 244018 DEBUG oslo_concurrency.lockutils [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:29 np0005629333 nova_compute[244014]: 2026-02-25 12:42:29.921 244018 DEBUG oslo_concurrency.lockutils [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:29 np0005629333 nova_compute[244014]: 2026-02-25 12:42:29.922 244018 INFO nova.compute.manager [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Terminating instance#033[00m
Feb 25 07:42:29 np0005629333 nova_compute[244014]: 2026-02-25 12:42:29.923 244018 DEBUG nova.compute.manager [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:42:29 np0005629333 kernel: tap521e9ad8-9e (unregistering): left promiscuous mode
Feb 25 07:42:29 np0005629333 NetworkManager[49836]: <info>  [1772023349.9716] device (tap521e9ad8-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:42:29 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:29Z|01044|binding|INFO|Releasing lport 521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 from this chassis (sb_readonly=0)
Feb 25 07:42:29 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:29Z|01045|binding|INFO|Setting lport 521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 down in Southbound
Feb 25 07:42:29 np0005629333 nova_compute[244014]: 2026-02-25 12:42:29.977 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:29 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:29Z|01046|binding|INFO|Removing iface tap521e9ad8-9e ovn-installed in OVS
Feb 25 07:42:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:29.984 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:f0:2e 10.100.0.3'], port_security=['fa:16:3e:f6:f0:2e 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '62d2fee1-f07f-44e3-a511-6b9bb341a3ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ec8bae53-fe6a-49d1-a733-f00c198be561', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '503e879cd1f44a16b9baef106ceba949', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3bf34285-1a67-4c95-bb68-fd577a012f6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18f4e8da-4409-4095-9850-aaee82dd8fd1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=521e9ad8-9ef3-4823-9ddb-59c3c3fe0674) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:42:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:29.985 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 in datapath ec8bae53-fe6a-49d1-a733-f00c198be561 unbound from our chassis#033[00m
Feb 25 07:42:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:29.986 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ec8bae53-fe6a-49d1-a733-f00c198be561, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:42:29 np0005629333 nova_compute[244014]: 2026-02-25 12:42:29.986 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:29.988 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[36d0ec4a-6cf3-4761-8173-593a36d37528]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:29.989 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561 namespace which is not needed anymore#033[00m
Feb 25 07:42:30 np0005629333 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000061.scope: Deactivated successfully.
Feb 25 07:42:30 np0005629333 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000061.scope: Consumed 17.132s CPU time.
Feb 25 07:42:30 np0005629333 systemd-machined[210048]: Machine qemu-124-instance-00000061 terminated.
Feb 25 07:42:30 np0005629333 nova_compute[244014]: 2026-02-25 12:42:30.083 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:30 np0005629333 neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561[331464]: [NOTICE]   (331468) : haproxy version is 2.8.14-c23fe91
Feb 25 07:42:30 np0005629333 neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561[331464]: [NOTICE]   (331468) : path to executable is /usr/sbin/haproxy
Feb 25 07:42:30 np0005629333 neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561[331464]: [WARNING]  (331468) : Exiting Master process...
Feb 25 07:42:30 np0005629333 neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561[331464]: [ALERT]    (331468) : Current worker (331470) exited with code 143 (Terminated)
Feb 25 07:42:30 np0005629333 neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561[331464]: [WARNING]  (331468) : All workers exited. Exiting... (0)
Feb 25 07:42:30 np0005629333 systemd[1]: libpod-5d45eed5684205ccfa7798515ef0592efacd955a22d233c2141fca16aba6dd02.scope: Deactivated successfully.
Feb 25 07:42:30 np0005629333 podman[338499]: 2026-02-25 12:42:30.124421381 +0000 UTC m=+0.046345929 container died 5d45eed5684205ccfa7798515ef0592efacd955a22d233c2141fca16aba6dd02 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 25 07:42:30 np0005629333 systemd[1]: var-lib-containers-storage-overlay-4aeb1a3a8bf9efcf56fafcf5d930b23a7cd7c44dae6b62bfc44fc981eafd5921-merged.mount: Deactivated successfully.
Feb 25 07:42:30 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5d45eed5684205ccfa7798515ef0592efacd955a22d233c2141fca16aba6dd02-userdata-shm.mount: Deactivated successfully.
Feb 25 07:42:30 np0005629333 nova_compute[244014]: 2026-02-25 12:42:30.160 244018 INFO nova.virt.libvirt.driver [-] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Instance destroyed successfully.#033[00m
Feb 25 07:42:30 np0005629333 nova_compute[244014]: 2026-02-25 12:42:30.160 244018 DEBUG nova.objects.instance [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lazy-loading 'resources' on Instance uuid 62d2fee1-f07f-44e3-a511-6b9bb341a3ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:42:30 np0005629333 podman[338499]: 2026-02-25 12:42:30.164276156 +0000 UTC m=+0.086200694 container cleanup 5d45eed5684205ccfa7798515ef0592efacd955a22d233c2141fca16aba6dd02 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 25 07:42:30 np0005629333 systemd[1]: libpod-conmon-5d45eed5684205ccfa7798515ef0592efacd955a22d233c2141fca16aba6dd02.scope: Deactivated successfully.
Feb 25 07:42:30 np0005629333 nova_compute[244014]: 2026-02-25 12:42:30.175 244018 DEBUG nova.virt.libvirt.vif [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:39:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-₡-825477072',display_name='tempest-₡-825477072',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--825477072',id=97,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:39:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='503e879cd1f44a16b9baef106ceba949',ramdisk_id='',reservation_id='r-t20m600l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1472039551',owner_user_name='tempest-ServersTestJSON-1472039551-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:39:56Z,user_data=None,user_id='4c5bc24b5f5048469cf3f701ce511bfa',uuid=62d2fee1-f07f-44e3-a511-6b9bb341a3ed,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "address": "fa:16:3e:f6:f0:2e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521e9ad8-9e", "ovs_interfaceid": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:42:30 np0005629333 nova_compute[244014]: 2026-02-25 12:42:30.175 244018 DEBUG nova.network.os_vif_util [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converting VIF {"id": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "address": "fa:16:3e:f6:f0:2e", "network": {"id": "ec8bae53-fe6a-49d1-a733-f00c198be561", "bridge": "br-int", "label": "tempest-ServersTestJSON-1453052816-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "503e879cd1f44a16b9baef106ceba949", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap521e9ad8-9e", "ovs_interfaceid": "521e9ad8-9ef3-4823-9ddb-59c3c3fe0674", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:42:30 np0005629333 nova_compute[244014]: 2026-02-25 12:42:30.176 244018 DEBUG nova.network.os_vif_util [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:f0:2e,bridge_name='br-int',has_traffic_filtering=True,id=521e9ad8-9ef3-4823-9ddb-59c3c3fe0674,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521e9ad8-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:42:30 np0005629333 nova_compute[244014]: 2026-02-25 12:42:30.176 244018 DEBUG os_vif [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:f0:2e,bridge_name='br-int',has_traffic_filtering=True,id=521e9ad8-9ef3-4823-9ddb-59c3c3fe0674,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521e9ad8-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:42:30 np0005629333 nova_compute[244014]: 2026-02-25 12:42:30.177 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:30 np0005629333 nova_compute[244014]: 2026-02-25 12:42:30.177 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap521e9ad8-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:42:30 np0005629333 nova_compute[244014]: 2026-02-25 12:42:30.179 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:30 np0005629333 nova_compute[244014]: 2026-02-25 12:42:30.180 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:30 np0005629333 nova_compute[244014]: 2026-02-25 12:42:30.182 244018 INFO os_vif [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:f0:2e,bridge_name='br-int',has_traffic_filtering=True,id=521e9ad8-9ef3-4823-9ddb-59c3c3fe0674,network=Network(ec8bae53-fe6a-49d1-a733-f00c198be561),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap521e9ad8-9e')#033[00m
Feb 25 07:42:30 np0005629333 podman[338543]: 2026-02-25 12:42:30.228020184 +0000 UTC m=+0.041909123 container remove 5d45eed5684205ccfa7798515ef0592efacd955a22d233c2141fca16aba6dd02 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 07:42:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:30.232 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8adad288-b1f1-4352-a50a-9b664a64a15f]: (4, ('Wed Feb 25 12:42:30 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561 (5d45eed5684205ccfa7798515ef0592efacd955a22d233c2141fca16aba6dd02)\n5d45eed5684205ccfa7798515ef0592efacd955a22d233c2141fca16aba6dd02\nWed Feb 25 12:42:30 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561 (5d45eed5684205ccfa7798515ef0592efacd955a22d233c2141fca16aba6dd02)\n5d45eed5684205ccfa7798515ef0592efacd955a22d233c2141fca16aba6dd02\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:30.234 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[02a03ac2-ea35-46d7-b2ea-1bea2b20eee3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:30.235 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec8bae53-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:42:30 np0005629333 kernel: tapec8bae53-f0: left promiscuous mode
Feb 25 07:42:30 np0005629333 nova_compute[244014]: 2026-02-25 12:42:30.237 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:30.240 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7127f8ba-a491-480c-8618-9a924c1ddd30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:30 np0005629333 nova_compute[244014]: 2026-02-25 12:42:30.250 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:30.253 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[767778ea-bebb-4e3b-83c0-6ad25d8ef1b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:30.254 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e82dafcd-7162-43fa-97b0-0388a60ed4a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:30.267 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[34908ccb-bb82-4e9a-a738-ef287485cf03]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516525, 'reachable_time': 28996, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338575, 'error': None, 'target': 'ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:30 np0005629333 systemd[1]: run-netns-ovnmeta\x2dec8bae53\x2dfe6a\x2d49d1\x2da733\x2df00c198be561.mount: Deactivated successfully.
Feb 25 07:42:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:30.271 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ec8bae53-fe6a-49d1-a733-f00c198be561 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:42:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:30.271 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[0e2448bc-378a-4664-be92-1bec24ef0d89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1840: 305 pgs: 305 active+clean; 312 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 2.2 MiB/s wr, 155 op/s
Feb 25 07:42:30 np0005629333 nova_compute[244014]: 2026-02-25 12:42:30.401 244018 INFO nova.virt.libvirt.driver [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Deleting instance files /var/lib/nova/instances/62d2fee1-f07f-44e3-a511-6b9bb341a3ed_del#033[00m
Feb 25 07:42:30 np0005629333 nova_compute[244014]: 2026-02-25 12:42:30.402 244018 INFO nova.virt.libvirt.driver [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Deletion of /var/lib/nova/instances/62d2fee1-f07f-44e3-a511-6b9bb341a3ed_del complete#033[00m
Feb 25 07:42:30 np0005629333 nova_compute[244014]: 2026-02-25 12:42:30.445 244018 INFO nova.compute.manager [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Took 0.52 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:42:30 np0005629333 nova_compute[244014]: 2026-02-25 12:42:30.446 244018 DEBUG oslo.service.loopingcall [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:42:30 np0005629333 nova_compute[244014]: 2026-02-25 12:42:30.446 244018 DEBUG nova.compute.manager [-] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:42:30 np0005629333 nova_compute[244014]: 2026-02-25 12:42:30.446 244018 DEBUG nova.network.neutron [-] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:42:30 np0005629333 nova_compute[244014]: 2026-02-25 12:42:30.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:42:30 np0005629333 nova_compute[244014]: 2026-02-25 12:42:30.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 07:42:30 np0005629333 nova_compute[244014]: 2026-02-25 12:42:30.955 244018 DEBUG nova.compute.manager [req-dd0a730b-159b-4492-935c-7d8192d1ec01 req-fcd47848-fac4-4dfb-9c9e-762ba9407a1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Received event network-vif-unplugged-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:42:30 np0005629333 nova_compute[244014]: 2026-02-25 12:42:30.955 244018 DEBUG oslo_concurrency.lockutils [req-dd0a730b-159b-4492-935c-7d8192d1ec01 req-fcd47848-fac4-4dfb-9c9e-762ba9407a1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:30 np0005629333 nova_compute[244014]: 2026-02-25 12:42:30.956 244018 DEBUG oslo_concurrency.lockutils [req-dd0a730b-159b-4492-935c-7d8192d1ec01 req-fcd47848-fac4-4dfb-9c9e-762ba9407a1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:30 np0005629333 nova_compute[244014]: 2026-02-25 12:42:30.956 244018 DEBUG oslo_concurrency.lockutils [req-dd0a730b-159b-4492-935c-7d8192d1ec01 req-fcd47848-fac4-4dfb-9c9e-762ba9407a1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:30 np0005629333 nova_compute[244014]: 2026-02-25 12:42:30.956 244018 DEBUG nova.compute.manager [req-dd0a730b-159b-4492-935c-7d8192d1ec01 req-fcd47848-fac4-4dfb-9c9e-762ba9407a1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] No waiting events found dispatching network-vif-unplugged-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:42:30 np0005629333 nova_compute[244014]: 2026-02-25 12:42:30.956 244018 DEBUG nova.compute.manager [req-dd0a730b-159b-4492-935c-7d8192d1ec01 req-fcd47848-fac4-4dfb-9c9e-762ba9407a1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Received event network-vif-unplugged-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:42:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:42:30
Feb 25 07:42:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:42:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:42:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'backups', 'volumes', 'default.rgw.control', '.rgw.root']
Feb 25 07:42:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:42:31 np0005629333 nova_compute[244014]: 2026-02-25 12:42:31.173 244018 DEBUG nova.network.neutron [-] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:42:31 np0005629333 nova_compute[244014]: 2026-02-25 12:42:31.197 244018 INFO nova.compute.manager [-] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Took 0.75 seconds to deallocate network for instance.
Feb 25 07:42:31 np0005629333 nova_compute[244014]: 2026-02-25 12:42:31.247 244018 DEBUG nova.compute.manager [req-fba2be00-e56a-4422-a67c-81ace07c82e0 req-42749830-6398-499d-989a-c192bd244a82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Received event network-vif-deleted-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:42:31 np0005629333 nova_compute[244014]: 2026-02-25 12:42:31.254 244018 DEBUG oslo_concurrency.lockutils [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:42:31 np0005629333 nova_compute[244014]: 2026-02-25 12:42:31.254 244018 DEBUG oslo_concurrency.lockutils [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:42:31 np0005629333 nova_compute[244014]: 2026-02-25 12:42:31.342 244018 DEBUG oslo_concurrency.processutils [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:42:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:42:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:42:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:42:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:42:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:42:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:42:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:42:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3545476640' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:42:31 np0005629333 nova_compute[244014]: 2026-02-25 12:42:31.959 244018 DEBUG oslo_concurrency.processutils [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:42:31 np0005629333 nova_compute[244014]: 2026-02-25 12:42:31.963 244018 DEBUG nova.compute.provider_tree [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:42:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:42:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:42:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:42:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:42:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:42:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:42:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:42:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:42:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:42:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:42:31 np0005629333 nova_compute[244014]: 2026-02-25 12:42:31.981 244018 DEBUG nova.scheduler.client.report [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:42:32 np0005629333 nova_compute[244014]: 2026-02-25 12:42:32.002 244018 DEBUG oslo_concurrency.lockutils [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:42:32 np0005629333 nova_compute[244014]: 2026-02-25 12:42:32.039 244018 INFO nova.scheduler.client.report [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Deleted allocations for instance 62d2fee1-f07f-44e3-a511-6b9bb341a3ed
Feb 25 07:42:32 np0005629333 nova_compute[244014]: 2026-02-25 12:42:32.122 244018 DEBUG oslo_concurrency.lockutils [None req-a8e2f4b5-a6b1-4aec-8455-d634a1cf7d77 4c5bc24b5f5048469cf3f701ce511bfa 503e879cd1f44a16b9baef106ceba949 - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:42:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 262 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 396 KiB/s rd, 2.2 MiB/s wr, 174 op/s
Feb 25 07:42:32 np0005629333 nova_compute[244014]: 2026-02-25 12:42:32.866 244018 INFO nova.compute.manager [None req-04689145-144d-4a06-a756-a33730bf0d54 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Get console output
Feb 25 07:42:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:32Z|01047|binding|INFO|Releasing lport 90ed2c98-937c-42de-8c1d-0f2d90380e03 from this chassis (sb_readonly=0)
Feb 25 07:42:32 np0005629333 nova_compute[244014]: 2026-02-25 12:42:32.874 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 07:42:32 np0005629333 nova_compute[244014]: 2026-02-25 12:42:32.903 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:42:33 np0005629333 nova_compute[244014]: 2026-02-25 12:42:33.072 244018 DEBUG nova.compute.manager [req-4c316fb0-330e-48fa-8e91-f4f0fee508a0 req-01575bcb-32bd-48fd-9cb1-2faa9a82dd11 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Received event network-vif-plugged-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:42:33 np0005629333 nova_compute[244014]: 2026-02-25 12:42:33.073 244018 DEBUG oslo_concurrency.lockutils [req-4c316fb0-330e-48fa-8e91-f4f0fee508a0 req-01575bcb-32bd-48fd-9cb1-2faa9a82dd11 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:42:33 np0005629333 nova_compute[244014]: 2026-02-25 12:42:33.073 244018 DEBUG oslo_concurrency.lockutils [req-4c316fb0-330e-48fa-8e91-f4f0fee508a0 req-01575bcb-32bd-48fd-9cb1-2faa9a82dd11 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:42:33 np0005629333 nova_compute[244014]: 2026-02-25 12:42:33.073 244018 DEBUG oslo_concurrency.lockutils [req-4c316fb0-330e-48fa-8e91-f4f0fee508a0 req-01575bcb-32bd-48fd-9cb1-2faa9a82dd11 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "62d2fee1-f07f-44e3-a511-6b9bb341a3ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:42:33 np0005629333 nova_compute[244014]: 2026-02-25 12:42:33.074 244018 DEBUG nova.compute.manager [req-4c316fb0-330e-48fa-8e91-f4f0fee508a0 req-01575bcb-32bd-48fd-9cb1-2faa9a82dd11 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] No waiting events found dispatching network-vif-plugged-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:42:33 np0005629333 nova_compute[244014]: 2026-02-25 12:42:33.074 244018 WARNING nova.compute.manager [req-4c316fb0-330e-48fa-8e91-f4f0fee508a0 req-01575bcb-32bd-48fd-9cb1-2faa9a82dd11 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Received unexpected event network-vif-plugged-521e9ad8-9ef3-4823-9ddb-59c3c3fe0674 for instance with vm_state deleted and task_state None.
Feb 25 07:42:33 np0005629333 nova_compute[244014]: 2026-02-25 12:42:33.154 244018 DEBUG oslo_concurrency.lockutils [None req-185962ba-3606-4ca7-9b8a-ac0c1dab96ee fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "11f1a7e0-6001-4367-8491-5b5508f56bdb" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:42:33 np0005629333 nova_compute[244014]: 2026-02-25 12:42:33.155 244018 DEBUG oslo_concurrency.lockutils [None req-185962ba-3606-4ca7-9b8a-ac0c1dab96ee fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:42:33 np0005629333 nova_compute[244014]: 2026-02-25 12:42:33.155 244018 INFO nova.compute.manager [None req-185962ba-3606-4ca7-9b8a-ac0c1dab96ee fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Rebooting instance
Feb 25 07:42:33 np0005629333 nova_compute[244014]: 2026-02-25 12:42:33.177 244018 DEBUG oslo_concurrency.lockutils [None req-185962ba-3606-4ca7-9b8a-ac0c1dab96ee fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:42:33 np0005629333 nova_compute[244014]: 2026-02-25 12:42:33.178 244018 DEBUG oslo_concurrency.lockutils [None req-185962ba-3606-4ca7-9b8a-ac0c1dab96ee fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquired lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:42:33 np0005629333 nova_compute[244014]: 2026-02-25 12:42:33.178 244018 DEBUG nova.network.neutron [None req-185962ba-3606-4ca7-9b8a-ac0c1dab96ee fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:42:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:42:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 158 op/s
Feb 25 07:42:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:35.023 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:42:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:35.024 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 07:42:35 np0005629333 nova_compute[244014]: 2026-02-25 12:42:35.024 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:42:35 np0005629333 nova_compute[244014]: 2026-02-25 12:42:35.093 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:42:35 np0005629333 nova_compute[244014]: 2026-02-25 12:42:35.180 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:42:35 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:35Z|01048|binding|INFO|Releasing lport 90ed2c98-937c-42de-8c1d-0f2d90380e03 from this chassis (sb_readonly=0)
Feb 25 07:42:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 25 07:42:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 07:42:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:42:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:42:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:42:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:42:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:42:35 np0005629333 nova_compute[244014]: 2026-02-25 12:42:35.230 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:42:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:42:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:42:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:42:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:42:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:42:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:42:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:42:35 np0005629333 nova_compute[244014]: 2026-02-25 12:42:35.304 244018 DEBUG nova.network.neutron [None req-185962ba-3606-4ca7-9b8a-ac0c1dab96ee fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Updating instance_info_cache with network_info: [{"id": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "address": "fa:16:3e:3e:d0:2e", "network": {"id": "898394ed-19f4-4525-9c0d-05895748de8c", "bridge": "br-int", "label": "tempest-network-smoke--407161761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aff0d14-b6", "ovs_interfaceid": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:42:35 np0005629333 nova_compute[244014]: 2026-02-25 12:42:35.322 244018 DEBUG oslo_concurrency.lockutils [None req-185962ba-3606-4ca7-9b8a-ac0c1dab96ee fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Releasing lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:42:35 np0005629333 nova_compute[244014]: 2026-02-25 12:42:35.324 244018 DEBUG nova.compute.manager [None req-185962ba-3606-4ca7-9b8a-ac0c1dab96ee fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:42:35 np0005629333 nova_compute[244014]: 2026-02-25 12:42:35.411 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023340.4106336, 227efbfe-da43-423a-8652-9636ecded4cd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:42:35 np0005629333 nova_compute[244014]: 2026-02-25 12:42:35.413 244018 INFO nova.compute.manager [-] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] VM Stopped (Lifecycle Event)
Feb 25 07:42:35 np0005629333 nova_compute[244014]: 2026-02-25 12:42:35.435 244018 DEBUG nova.compute.manager [None req-9d12d821-e5b2-42ea-967d-a44822e16652 - - - - - -] [instance: 227efbfe-da43-423a-8652-9636ecded4cd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:42:35 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 07:42:35 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:42:35 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:42:35 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:42:35 np0005629333 podman[338742]: 2026-02-25 12:42:35.734820953 +0000 UTC m=+0.077633562 container create 8088284d25938ecec2b032ddb3ea8d7d9a7c74e2abe4ae2243c11c4a17949ae3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_meitner, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:42:35 np0005629333 podman[338742]: 2026-02-25 12:42:35.683391391 +0000 UTC m=+0.026204010 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:42:35 np0005629333 systemd[1]: Started libpod-conmon-8088284d25938ecec2b032ddb3ea8d7d9a7c74e2abe4ae2243c11c4a17949ae3.scope.
Feb 25 07:42:35 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:42:35 np0005629333 podman[338742]: 2026-02-25 12:42:35.850985481 +0000 UTC m=+0.193798060 container init 8088284d25938ecec2b032ddb3ea8d7d9a7c74e2abe4ae2243c11c4a17949ae3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_meitner, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 07:42:35 np0005629333 podman[338742]: 2026-02-25 12:42:35.855938461 +0000 UTC m=+0.198751040 container start 8088284d25938ecec2b032ddb3ea8d7d9a7c74e2abe4ae2243c11c4a17949ae3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:42:35 np0005629333 podman[338742]: 2026-02-25 12:42:35.858640127 +0000 UTC m=+0.201452746 container attach 8088284d25938ecec2b032ddb3ea8d7d9a7c74e2abe4ae2243c11c4a17949ae3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 07:42:35 np0005629333 affectionate_meitner[338758]: 167 167
Feb 25 07:42:35 np0005629333 systemd[1]: libpod-8088284d25938ecec2b032ddb3ea8d7d9a7c74e2abe4ae2243c11c4a17949ae3.scope: Deactivated successfully.
Feb 25 07:42:35 np0005629333 conmon[338758]: conmon 8088284d25938ecec2b0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8088284d25938ecec2b032ddb3ea8d7d9a7c74e2abe4ae2243c11c4a17949ae3.scope/container/memory.events
Feb 25 07:42:35 np0005629333 podman[338742]: 2026-02-25 12:42:35.861200439 +0000 UTC m=+0.204013058 container died 8088284d25938ecec2b032ddb3ea8d7d9a7c74e2abe4ae2243c11c4a17949ae3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_meitner, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 07:42:35 np0005629333 systemd[1]: var-lib-containers-storage-overlay-62a81bb82d03b2d46c7db29f6c40bf94f374b10bb4658dec0bde1a23ca286aa4-merged.mount: Deactivated successfully.
Feb 25 07:42:36 np0005629333 podman[338742]: 2026-02-25 12:42:36.001109128 +0000 UTC m=+0.343921707 container remove 8088284d25938ecec2b032ddb3ea8d7d9a7c74e2abe4ae2243c11c4a17949ae3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Feb 25 07:42:36 np0005629333 systemd[1]: libpod-conmon-8088284d25938ecec2b032ddb3ea8d7d9a7c74e2abe4ae2243c11c4a17949ae3.scope: Deactivated successfully.
Feb 25 07:42:36 np0005629333 podman[338782]: 2026-02-25 12:42:36.216261729 +0000 UTC m=+0.067050163 container create dac3b645cdc7d1cd349e3b22fa7cc2398ed1308a1a395f659992009a499884f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_kowalevski, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 07:42:36 np0005629333 podman[338782]: 2026-02-25 12:42:36.174356357 +0000 UTC m=+0.025144831 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:42:36 np0005629333 systemd[1]: Started libpod-conmon-dac3b645cdc7d1cd349e3b22fa7cc2398ed1308a1a395f659992009a499884f9.scope.
Feb 25 07:42:36 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:42:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd59dc419172d0653bd103c581b3c684c7a3757faa9a4e5362a10f8c8a5fa336/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:42:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd59dc419172d0653bd103c581b3c684c7a3757faa9a4e5362a10f8c8a5fa336/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:42:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd59dc419172d0653bd103c581b3c684c7a3757faa9a4e5362a10f8c8a5fa336/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:42:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd59dc419172d0653bd103c581b3c684c7a3757faa9a4e5362a10f8c8a5fa336/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:42:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd59dc419172d0653bd103c581b3c684c7a3757faa9a4e5362a10f8c8a5fa336/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:42:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 143 op/s
Feb 25 07:42:36 np0005629333 podman[338782]: 2026-02-25 12:42:36.315184081 +0000 UTC m=+0.165972565 container init dac3b645cdc7d1cd349e3b22fa7cc2398ed1308a1a395f659992009a499884f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_kowalevski, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 07:42:36 np0005629333 podman[338782]: 2026-02-25 12:42:36.327878239 +0000 UTC m=+0.178666673 container start dac3b645cdc7d1cd349e3b22fa7cc2398ed1308a1a395f659992009a499884f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_kowalevski, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:42:36 np0005629333 podman[338782]: 2026-02-25 12:42:36.333528518 +0000 UTC m=+0.184316962 container attach dac3b645cdc7d1cd349e3b22fa7cc2398ed1308a1a395f659992009a499884f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 07:42:36 np0005629333 nova_compute[244014]: 2026-02-25 12:42:36.635 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023341.6339054, b8d9acc4-7912-4d26-bad6-1159f6993361 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:42:36 np0005629333 nova_compute[244014]: 2026-02-25 12:42:36.638 244018 INFO nova.compute.manager [-] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] VM Stopped (Lifecycle Event)
Feb 25 07:42:36 np0005629333 nova_compute[244014]: 2026-02-25 12:42:36.658 244018 DEBUG nova.compute.manager [None req-23a7d195-8ed6-4728-aa60-8f8984f32cfe - - - - - -] [instance: b8d9acc4-7912-4d26-bad6-1159f6993361] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:42:36 np0005629333 friendly_kowalevski[338798]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:42:36 np0005629333 friendly_kowalevski[338798]: --> All data devices are unavailable
Feb 25 07:42:36 np0005629333 systemd[1]: libpod-dac3b645cdc7d1cd349e3b22fa7cc2398ed1308a1a395f659992009a499884f9.scope: Deactivated successfully.
Feb 25 07:42:36 np0005629333 conmon[338798]: conmon dac3b645cdc7d1cd349e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dac3b645cdc7d1cd349e3b22fa7cc2398ed1308a1a395f659992009a499884f9.scope/container/memory.events
Feb 25 07:42:36 np0005629333 podman[338818]: 2026-02-25 12:42:36.862378333 +0000 UTC m=+0.026256722 container died dac3b645cdc7d1cd349e3b22fa7cc2398ed1308a1a395f659992009a499884f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 07:42:36 np0005629333 systemd[1]: var-lib-containers-storage-overlay-fd59dc419172d0653bd103c581b3c684c7a3757faa9a4e5362a10f8c8a5fa336-merged.mount: Deactivated successfully.
Feb 25 07:42:36 np0005629333 podman[338818]: 2026-02-25 12:42:36.936220926 +0000 UTC m=+0.100099305 container remove dac3b645cdc7d1cd349e3b22fa7cc2398ed1308a1a395f659992009a499884f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_kowalevski, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 07:42:36 np0005629333 systemd[1]: libpod-conmon-dac3b645cdc7d1cd349e3b22fa7cc2398ed1308a1a395f659992009a499884f9.scope: Deactivated successfully.
Feb 25 07:42:37 np0005629333 podman[338894]: 2026-02-25 12:42:37.378916139 +0000 UTC m=+0.068283838 container create bd8d88a5de78efb71ec6dabf07372689c41d7e2ba901ad56f1bc3d32b803928d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 07:42:37 np0005629333 podman[338894]: 2026-02-25 12:42:37.336875573 +0000 UTC m=+0.026243262 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:42:37 np0005629333 systemd[1]: Started libpod-conmon-bd8d88a5de78efb71ec6dabf07372689c41d7e2ba901ad56f1bc3d32b803928d.scope.
Feb 25 07:42:37 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:42:37 np0005629333 podman[338894]: 2026-02-25 12:42:37.565598727 +0000 UTC m=+0.254966466 container init bd8d88a5de78efb71ec6dabf07372689c41d7e2ba901ad56f1bc3d32b803928d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_thompson, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:42:37 np0005629333 podman[338894]: 2026-02-25 12:42:37.575545848 +0000 UTC m=+0.264913557 container start bd8d88a5de78efb71ec6dabf07372689c41d7e2ba901ad56f1bc3d32b803928d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:42:37 np0005629333 podman[338894]: 2026-02-25 12:42:37.580020734 +0000 UTC m=+0.269388473 container attach bd8d88a5de78efb71ec6dabf07372689c41d7e2ba901ad56f1bc3d32b803928d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_thompson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 07:42:37 np0005629333 recursing_thompson[338910]: 167 167
Feb 25 07:42:37 np0005629333 systemd[1]: libpod-bd8d88a5de78efb71ec6dabf07372689c41d7e2ba901ad56f1bc3d32b803928d.scope: Deactivated successfully.
Feb 25 07:42:37 np0005629333 podman[338894]: 2026-02-25 12:42:37.581516166 +0000 UTC m=+0.270883865 container died bd8d88a5de78efb71ec6dabf07372689c41d7e2ba901ad56f1bc3d32b803928d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_thompson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 07:42:37 np0005629333 systemd[1]: var-lib-containers-storage-overlay-01efe4991c5ad508bf88f2d3af27223ad17b50c26d1e56b6c895ac032ddb928b-merged.mount: Deactivated successfully.
Feb 25 07:42:37 np0005629333 podman[338894]: 2026-02-25 12:42:37.759747716 +0000 UTC m=+0.449115395 container remove bd8d88a5de78efb71ec6dabf07372689c41d7e2ba901ad56f1bc3d32b803928d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_thompson, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:42:37 np0005629333 kernel: tap6aff0d14-b6 (unregistering): left promiscuous mode
Feb 25 07:42:37 np0005629333 NetworkManager[49836]: <info>  [1772023357.7956] device (tap6aff0d14-b6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:42:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:37Z|01049|binding|INFO|Releasing lport 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e from this chassis (sb_readonly=0)
Feb 25 07:42:37 np0005629333 nova_compute[244014]: 2026-02-25 12:42:37.804 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:42:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:37Z|01050|binding|INFO|Setting lport 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e down in Southbound
Feb 25 07:42:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:37Z|01051|binding|INFO|Removing iface tap6aff0d14-b6 ovn-installed in OVS
Feb 25 07:42:37 np0005629333 nova_compute[244014]: 2026-02-25 12:42:37.807 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:42:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:37.815 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:d0:2e 10.100.0.4'], port_security=['fa:16:3e:3e:d0:2e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '11f1a7e0-6001-4367-8491-5b5508f56bdb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-898394ed-19f4-4525-9c0d-05895748de8c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '563c6e30-ca07-41f4-bf4e-27cec8015d08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f095686b-cedf-4f50-94ed-1d2cdaae5b7e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6aff0d14-b6b5-45d1-83da-b9f0946a2e7e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:42:37 np0005629333 nova_compute[244014]: 2026-02-25 12:42:37.814 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:42:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:37.819 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e in datapath 898394ed-19f4-4525-9c0d-05895748de8c unbound from our chassis
Feb 25 07:42:37 np0005629333 systemd[1]: libpod-conmon-bd8d88a5de78efb71ec6dabf07372689c41d7e2ba901ad56f1bc3d32b803928d.scope: Deactivated successfully.
Feb 25 07:42:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:37.820 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 898394ed-19f4-4525-9c0d-05895748de8c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 07:42:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:37.821 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[74446245-3d36-4735-b1ad-e80282d7c470]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:42:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:37.822 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c namespace which is not needed anymore
Feb 25 07:42:37 np0005629333 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Feb 25 07:42:37 np0005629333 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006c.scope: Consumed 14.751s CPU time.
Feb 25 07:42:37 np0005629333 systemd-machined[210048]: Machine qemu-135-instance-0000006c terminated.
Feb 25 07:42:37 np0005629333 podman[338954]: 2026-02-25 12:42:37.973316583 +0000 UTC m=+0.103469781 container create 40e0343d458a39c63cd264d2ffa92cd34abf60eaed9b0f489c91e23ad89fee98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:42:37 np0005629333 podman[338954]: 2026-02-25 12:42:37.888772957 +0000 UTC m=+0.018926185 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:42:38 np0005629333 nova_compute[244014]: 2026-02-25 12:42:38.023 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:42:38 np0005629333 nova_compute[244014]: 2026-02-25 12:42:38.028 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:42:38 np0005629333 systemd[1]: Started libpod-conmon-40e0343d458a39c63cd264d2ffa92cd34abf60eaed9b0f489c91e23ad89fee98.scope.
Feb 25 07:42:38 np0005629333 nova_compute[244014]: 2026-02-25 12:42:38.038 244018 DEBUG nova.compute.manager [req-af0500ab-d39d-42a3-b132-6bb8332d9490 req-28b97bb4-302d-4460-ad51-d34aa30de357 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received event network-vif-unplugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:42:38 np0005629333 nova_compute[244014]: 2026-02-25 12:42:38.039 244018 DEBUG oslo_concurrency.lockutils [req-af0500ab-d39d-42a3-b132-6bb8332d9490 req-28b97bb4-302d-4460-ad51-d34aa30de357 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:42:38 np0005629333 nova_compute[244014]: 2026-02-25 12:42:38.039 244018 DEBUG oslo_concurrency.lockutils [req-af0500ab-d39d-42a3-b132-6bb8332d9490 req-28b97bb4-302d-4460-ad51-d34aa30de357 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:42:38 np0005629333 nova_compute[244014]: 2026-02-25 12:42:38.040 244018 DEBUG oslo_concurrency.lockutils [req-af0500ab-d39d-42a3-b132-6bb8332d9490 req-28b97bb4-302d-4460-ad51-d34aa30de357 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:42:38 np0005629333 nova_compute[244014]: 2026-02-25 12:42:38.040 244018 DEBUG nova.compute.manager [req-af0500ab-d39d-42a3-b132-6bb8332d9490 req-28b97bb4-302d-4460-ad51-d34aa30de357 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] No waiting events found dispatching network-vif-unplugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:42:38 np0005629333 nova_compute[244014]: 2026-02-25 12:42:38.040 244018 WARNING nova.compute.manager [req-af0500ab-d39d-42a3-b132-6bb8332d9490 req-28b97bb4-302d-4460-ad51-d34aa30de357 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received unexpected event network-vif-unplugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e for instance with vm_state active and task_state reboot_started.
Feb 25 07:42:38 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:42:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1039eb16db0a4bcbab2c6cb408fb6324a4f9619fc967a63ae8d8456cac7816/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:42:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1039eb16db0a4bcbab2c6cb408fb6324a4f9619fc967a63ae8d8456cac7816/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:42:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1039eb16db0a4bcbab2c6cb408fb6324a4f9619fc967a63ae8d8456cac7816/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:42:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd1039eb16db0a4bcbab2c6cb408fb6324a4f9619fc967a63ae8d8456cac7816/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:42:38 np0005629333 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[338110]: [NOTICE]   (338114) : haproxy version is 2.8.14-c23fe91
Feb 25 07:42:38 np0005629333 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[338110]: [NOTICE]   (338114) : path to executable is /usr/sbin/haproxy
Feb 25 07:42:38 np0005629333 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[338110]: [WARNING]  (338114) : Exiting Master process...
Feb 25 07:42:38 np0005629333 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[338110]: [WARNING]  (338114) : Exiting Master process...
Feb 25 07:42:38 np0005629333 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[338110]: [ALERT]    (338114) : Current worker (338116) exited with code 143 (Terminated)
Feb 25 07:42:38 np0005629333 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[338110]: [WARNING]  (338114) : All workers exited. Exiting... (0)
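
Exit code 143 above follows the usual 128 + signal-number convention: the haproxy worker was killed by SIGTERM (15) as part of the orderly container stop, not a crash. A quick sketch of the decoding:

    import signal

    def describe_exit(code):
        # wait-status convention: codes above 128 mean "killed by signal code - 128"
        if code > 128:
            return "terminated by signal " + signal.Signals(code - 128).name
        return "exited with status %d" % code

    print(describe_exit(143))  # -> terminated by signal SIGTERM
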
Feb 25 07:42:38 np0005629333 systemd[1]: libpod-06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839.scope: Deactivated successfully.
Feb 25 07:42:38 np0005629333 conmon[338110]: conmon 06a4d8f54adee94a8816 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839.scope/container/memory.events
Feb 25 07:42:38 np0005629333 podman[338954]: 2026-02-25 12:42:38.19043996 +0000 UTC m=+0.320593178 container init 40e0343d458a39c63cd264d2ffa92cd34abf60eaed9b0f489c91e23ad89fee98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_rubin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:42:38 np0005629333 podman[338954]: 2026-02-25 12:42:38.19858063 +0000 UTC m=+0.328733868 container start 40e0343d458a39c63cd264d2ffa92cd34abf60eaed9b0f489c91e23ad89fee98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 07:42:38 np0005629333 podman[338954]: 2026-02-25 12:42:38.277107556 +0000 UTC m=+0.407260834 container attach 40e0343d458a39c63cd264d2ffa92cd34abf60eaed9b0f489c91e23ad89fee98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 07:42:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:42:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.2 MiB/s wr, 147 op/s
Feb 25 07:42:38 np0005629333 podman[338964]: 2026-02-25 12:42:38.325957235 +0000 UTC m=+0.431424926 container died 06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 25 07:42:38 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839-userdata-shm.mount: Deactivated successfully.
Feb 25 07:42:38 np0005629333 systemd[1]: var-lib-containers-storage-overlay-b7657430fcd1ccf36e0ff1aaea267888d4bce6d3d745f3ebe88290757fb8da93-merged.mount: Deactivated successfully.
Feb 25 07:42:38 np0005629333 podman[338964]: 2026-02-25 12:42:38.386770921 +0000 UTC m=+0.492238602 container cleanup 06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 07:42:38 np0005629333 systemd[1]: libpod-conmon-06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839.scope: Deactivated successfully.
Feb 25 07:42:38 np0005629333 nova_compute[244014]: 2026-02-25 12:42:38.449 244018 INFO nova.virt.libvirt.driver [None req-185962ba-3606-4ca7-9b8a-ac0c1dab96ee fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Instance shutdown successfully.#033[00m
Feb 25 07:42:38 np0005629333 podman[339022]: 2026-02-25 12:42:38.454543813 +0000 UTC m=+0.041435500 container remove 06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.459 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f0a1ad71-618f-4f02-8872-24cc4edcf910]: (4, ('Wed Feb 25 12:42:37 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c (06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839)\n06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839\nWed Feb 25 12:42:38 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c (06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839)\n06a4d8f54adee94a88166f848b8eb43190438dbfca792ead9ec4745f5182e839\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.461 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[24da8cde-c8ef-40b2-b6a5-8faae9971479]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.461 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap898394ed-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:42:38 np0005629333 nova_compute[244014]: 2026-02-25 12:42:38.463 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:38 np0005629333 kernel: tap898394ed-10: left promiscuous mode
Feb 25 07:42:38 np0005629333 nova_compute[244014]: 2026-02-25 12:42:38.472 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:38 np0005629333 keen_rubin[338995]: {
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:    "0": [
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:        {
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "devices": [
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "/dev/loop3"
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            ],
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "lv_name": "ceph_lv0",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "lv_size": "21470642176",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "name": "ceph_lv0",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "tags": {
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.cluster_name": "ceph",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.crush_device_class": "",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.encrypted": "0",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.objectstore": "bluestore",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.osd_id": "0",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.type": "block",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.vdo": "0",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.with_tpm": "0"
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            },
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "type": "block",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "vg_name": "ceph_vg0"
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:        }
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:    ],
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:    "1": [
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:        {
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "devices": [
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "/dev/loop4"
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            ],
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "lv_name": "ceph_lv1",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "lv_size": "21470642176",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "name": "ceph_lv1",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "tags": {
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.cluster_name": "ceph",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.crush_device_class": "",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.encrypted": "0",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.objectstore": "bluestore",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.osd_id": "1",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.type": "block",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.vdo": "0",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.with_tpm": "0"
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            },
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "type": "block",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "vg_name": "ceph_vg1"
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:        }
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:    ],
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:    "2": [
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:        {
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "devices": [
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "/dev/loop5"
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            ],
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "lv_name": "ceph_lv2",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "lv_size": "21470642176",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "name": "ceph_lv2",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "tags": {
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.cluster_name": "ceph",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.crush_device_class": "",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.encrypted": "0",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.objectstore": "bluestore",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.osd_id": "2",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.type": "block",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.vdo": "0",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:                "ceph.with_tpm": "0"
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            },
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "type": "block",
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:            "vg_name": "ceph_vg2"
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:        }
Feb 25 07:42:38 np0005629333 keen_rubin[338995]:    ]
Feb 25 07:42:38 np0005629333 keen_rubin[338995]: }
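
The JSON block printed by the keen_rubin container has the shape of `ceph-volume lvm list --format json` output: a map from OSD id to the logical volumes backing it, with the LVM tags repeated in parsed form under "tags". A minimal sketch (not cephadm code) that flattens it into an OSD-to-device map:

    import json

    def osd_block_devices(report_text):
        # one entry per OSD id; keep only the bluestore "block" LV for each
        report = json.loads(report_text)
        return {
            int(osd_id): (lv["lv_path"], lv["devices"])
            for osd_id, lvs in report.items()
            for lv in lvs
            if lv["tags"].get("ceph.type") == "block"
        }

    # for the report above: {0: ('/dev/ceph_vg0/ceph_lv0', ['/dev/loop3']),
    #                        1: ('/dev/ceph_vg1/ceph_lv1', ['/dev/loop4']),
    #                        2: ('/dev/ceph_vg2/ceph_lv2', ['/dev/loop5'])}
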
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.477 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b95b0e0b-efd6-46f1-a6be-d09926ea66b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.491 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c11ed968-575b-4a76-a6cc-fdb9c1fa575d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.492 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b56f418f-9885-490f-8ccc-efed2c801c43]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:38 np0005629333 systemd[1]: libpod-40e0343d458a39c63cd264d2ffa92cd34abf60eaed9b0f489c91e23ad89fee98.scope: Deactivated successfully.
Feb 25 07:42:38 np0005629333 podman[338954]: 2026-02-25 12:42:38.498852204 +0000 UTC m=+0.629005412 container died 40e0343d458a39c63cd264d2ffa92cd34abf60eaed9b0f489c91e23ad89fee98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_rubin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:42:38 np0005629333 kernel: tap6aff0d14-b6: entered promiscuous mode
Feb 25 07:42:38 np0005629333 NetworkManager[49836]: <info>  [1772023358.5063] manager: (tap6aff0d14-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/431)
Feb 25 07:42:38 np0005629333 nova_compute[244014]: 2026-02-25 12:42:38.506 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:38 np0005629333 systemd-udevd[338932]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:42:38 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:38Z|01052|binding|INFO|Claiming lport 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e for this chassis.
Feb 25 07:42:38 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:38Z|01053|binding|INFO|6aff0d14-b6b5-45d1-83da-b9f0946a2e7e: Claiming fa:16:3e:3e:d0:2e 10.100.0.4
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.508 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8500c573-b78d-42f3-813d-4aef017a9463]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530192, 'reachable_time': 17439, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339051, 'error': None, 'target': 'ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
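
The privsep reply above is a pyroute2-style netlink RTM_NEWLINK message: attributes arrive as a list of [name, value] pairs under 'attrs', not as a dict, so lookups need a small helper. A sketch of one (mirroring, but not identical to, pyroute2's own get_attr):

    def get_attr(msg, name, default=None):
        # 'attrs' is a list of [attr_name, value] pairs; first match wins
        for attr_name, value in msg["attrs"]:
            if attr_name == name:
                return value
        return default

    # get_attr(message, "IFLA_IFNAME") -> 'lo' for the reply above
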
Feb 25 07:42:38 np0005629333 systemd[1]: run-netns-ovnmeta\x2d898394ed\x2d19f4\x2d4525\x2d9c0d\x2d05895748de8c.mount: Deactivated successfully.
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.516 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.516 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[38375d80-7be0-4f38-9d6d-c5f1f36627e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:38 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:38Z|01054|binding|INFO|Setting lport 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e ovn-installed in OVS
Feb 25 07:42:38 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:38Z|01055|binding|INFO|Setting lport 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e up in Southbound
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.520 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:d0:2e 10.100.0.4'], port_security=['fa:16:3e:3e:d0:2e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '11f1a7e0-6001-4367-8491-5b5508f56bdb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-898394ed-19f4-4525-9c0d-05895748de8c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '563c6e30-ca07-41f4-bf4e-27cec8015d08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f095686b-cedf-4f50-94ed-1d2cdaae5b7e, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6aff0d14-b6b5-45d1-83da-b9f0946a2e7e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:42:38 np0005629333 NetworkManager[49836]: <info>  [1772023358.5224] device (tap6aff0d14-b6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:42:38 np0005629333 NetworkManager[49836]: <info>  [1772023358.5230] device (tap6aff0d14-b6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:42:38 np0005629333 nova_compute[244014]: 2026-02-25 12:42:38.519 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:38 np0005629333 nova_compute[244014]: 2026-02-25 12:42:38.521 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.524 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e in datapath 898394ed-19f4-4525-9c0d-05895748de8c bound to our chassis#033[00m
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.525 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 898394ed-19f4-4525-9c0d-05895748de8c#033[00m
Feb 25 07:42:38 np0005629333 systemd[1]: var-lib-containers-storage-overlay-bd1039eb16db0a4bcbab2c6cb408fb6324a4f9619fc967a63ae8d8456cac7816-merged.mount: Deactivated successfully.
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.539 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c8d7e0f2-cb0a-4839-844c-d3273f5725c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.540 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap898394ed-11 in ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.541 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap898394ed-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.542 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[792e6e4f-8a09-4bf1-bac5-559a62541d96]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.543 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bfc98b9a-f296-49b5-8f63-f1a1ae82aab5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:38 np0005629333 systemd-machined[210048]: New machine qemu-136-instance-0000006c.
Feb 25 07:42:38 np0005629333 podman[338954]: 2026-02-25 12:42:38.555007838 +0000 UTC m=+0.685161036 container remove 40e0343d458a39c63cd264d2ffa92cd34abf60eaed9b0f489c91e23ad89fee98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.556 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[b419681b-6db4-4074-bdb5-cfc2332e2e61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:38 np0005629333 systemd[1]: Started Virtual Machine qemu-136-instance-0000006c.
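
In the machine name qemu-136-instance-0000006c, 136 is libvirt's own domain id, while the libvirt domain name encodes the nova database instance id in hex (0x6c = 108). A one-liner to recover it:

    # hex-encoded nova DB id inside the libvirt domain name
    print(int("instance-0000006c".removeprefix("instance-"), 16))  # -> 108
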
Feb 25 07:42:38 np0005629333 systemd[1]: libpod-conmon-40e0343d458a39c63cd264d2ffa92cd34abf60eaed9b0f489c91e23ad89fee98.scope: Deactivated successfully.
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.568 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e626fcb6-a1cb-48ee-b077-49dee04b3811]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.591 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[fb479f2e-9979-4a49-8257-346207e91406]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:38 np0005629333 NetworkManager[49836]: <info>  [1772023358.5965] manager: (tap898394ed-10): new Veth device (/org/freedesktop/NetworkManager/Devices/432)
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.596 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0500481f-1d6c-4c0b-b90d-d5fbcac0fd45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.616 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f9866625-6196-4f62-9688-4fa46958dbdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.619 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4ca312e4-73fd-49da-b0d2-a56ba3ee903b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:38 np0005629333 NetworkManager[49836]: <info>  [1772023358.6350] device (tap898394ed-10): carrier: link connected
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.637 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[92967756-2b31-47dc-b54f-dfee3b0d727a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.650 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6531fc56-c4fd-4591-9db9-b35e6a25182e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap898394ed-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:00:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 316], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532804, 'reachable_time': 21633, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339126, 'error': None, 'target': 'ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.661 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0fd44225-ffbe-4bf1-9fcf-e94622c273a0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:f1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 532804, 'tstamp': 532804}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339128, 'error': None, 'target': 'ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.674 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a9696a1f-01eb-410a-a48f-d1d8bf35f4c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap898394ed-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:00:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 316], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532804, 'reachable_time': 21633, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 339130, 'error': None, 'target': 'ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.695 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6c1ea9f3-47fe-4d66-ad23-9afe77eb5594]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.754 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[15e52752-02fc-41bf-970b-09c39507334a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.755 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap898394ed-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.755 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.756 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap898394ed-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:42:38 np0005629333 kernel: tap898394ed-10: entered promiscuous mode
Feb 25 07:42:38 np0005629333 NetworkManager[49836]: <info>  [1772023358.7588] manager: (tap898394ed-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/433)
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.762 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap898394ed-10, col_values=(('external_ids', {'iface-id': '90ed2c98-937c-42de-8c1d-0f2d90380e03'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:42:38 np0005629333 nova_compute[244014]: 2026-02-25 12:42:38.760 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:38 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:38Z|01056|binding|INFO|Releasing lport 90ed2c98-937c-42de-8c1d-0f2d90380e03 from this chassis (sb_readonly=0)
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.765 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/898394ed-19f4-4525-9c0d-05895748de8c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/898394ed-19f4-4525-9c0d-05895748de8c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.766 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b78963f8-3ef4-4d9d-9cda-efc9500e2800]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.767 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-898394ed-19f4-4525-9c0d-05895748de8c
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/898394ed-19f4-4525-9c0d-05895748de8c.pid.haproxy
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 898394ed-19f4-4525-9c0d-05895748de8c
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
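
In the rendered config above, the backend line `server metadata /var/lib/neutron/metadata_proxy` is a UNIX-socket address (haproxy treats a leading "/" as a socket path), so requests to 169.254.169.254:80 inside the ovnmeta namespace are forwarded to the metadata agent's socket. A hedged sketch of exercising that socket directly, assuming it is reachable at that path from the calling context:

    import socket

    def fetch_metadata(path="/openstack/latest/meta_data.json",
                       sock_path="/var/lib/neutron/metadata_proxy"):
        # plain HTTP/1.0 over the agent's UNIX socket
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(sock_path)
        s.sendall(("GET %s HTTP/1.0\r\nHost: 169.254.169.254\r\n\r\n" % path).encode())
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
        s.close()
        return b"".join(chunks).decode()
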
Feb 25 07:42:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:38.767 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c', 'env', 'PROCESS_TAG=haproxy-898394ed-19f4-4525-9c0d-05895748de8c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/898394ed-19f4-4525-9c0d-05895748de8c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 07:42:38 np0005629333 nova_compute[244014]: 2026-02-25 12:42:38.775 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:38 np0005629333 podman[339177]: 2026-02-25 12:42:38.949566033 +0000 UTC m=+0.034257778 container create 0f388cea1359a55c91f09cbef0f16491854fc054885693ae606ca97d84cfc7f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hertz, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:42:38 np0005629333 systemd[1]: Started libpod-conmon-0f388cea1359a55c91f09cbef0f16491854fc054885693ae606ca97d84cfc7f9.scope.
Feb 25 07:42:39 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:42:39 np0005629333 podman[339177]: 2026-02-25 12:42:39.020468634 +0000 UTC m=+0.105160399 container init 0f388cea1359a55c91f09cbef0f16491854fc054885693ae606ca97d84cfc7f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hertz, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:42:39 np0005629333 podman[339177]: 2026-02-25 12:42:39.027619095 +0000 UTC m=+0.112310850 container start 0f388cea1359a55c91f09cbef0f16491854fc054885693ae606ca97d84cfc7f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 07:42:39 np0005629333 podman[339177]: 2026-02-25 12:42:38.935247099 +0000 UTC m=+0.019938844 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:42:39 np0005629333 podman[339177]: 2026-02-25 12:42:39.031598448 +0000 UTC m=+0.116290233 container attach 0f388cea1359a55c91f09cbef0f16491854fc054885693ae606ca97d84cfc7f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 07:42:39 np0005629333 flamboyant_hertz[339198]: 167 167
Feb 25 07:42:39 np0005629333 systemd[1]: libpod-0f388cea1359a55c91f09cbef0f16491854fc054885693ae606ca97d84cfc7f9.scope: Deactivated successfully.
Feb 25 07:42:39 np0005629333 conmon[339198]: conmon 0f388cea1359a55c91f0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0f388cea1359a55c91f09cbef0f16491854fc054885693ae606ca97d84cfc7f9.scope/container/memory.events
Feb 25 07:42:39 np0005629333 podman[339177]: 2026-02-25 12:42:39.034468349 +0000 UTC m=+0.119160134 container died 0f388cea1359a55c91f09cbef0f16491854fc054885693ae606ca97d84cfc7f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hertz, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:42:39 np0005629333 systemd[1]: var-lib-containers-storage-overlay-976b663c346405524adea0bd2fb9f63a911101632e95d6e10d529cbdb7a2704a-merged.mount: Deactivated successfully.
Feb 25 07:42:39 np0005629333 podman[339177]: 2026-02-25 12:42:39.080114767 +0000 UTC m=+0.164806522 container remove 0f388cea1359a55c91f09cbef0f16491854fc054885693ae606ca97d84cfc7f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hertz, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 07:42:39 np0005629333 systemd[1]: libpod-conmon-0f388cea1359a55c91f09cbef0f16491854fc054885693ae606ca97d84cfc7f9.scope: Deactivated successfully.
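
The burst above is cephadm's usual probe pattern: a one-shot container (create -> init -> start -> attach -> one line of output -> died -> remove, all inside ~130 ms) run from the ceph image, here printing "167 167", the conventional ceph uid/gid pair. The conmon warning about memory.events is benign: the scope's cgroup was already torn down by the time conmon tried to read it. A minimal sketch of the same pattern, assuming podman is available (image and command are illustrative, not recovered from the log):

    import subprocess

    # A --rm one-shot container produces exactly the six podman events seen
    # above: create, init, start, attach, died, remove.
    out = subprocess.run(
        ["podman", "run", "--rm", "quay.io/centos/centos:stream9",
         "stat", "-c", "%u %g", "/etc/hosts"],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # prints an owner uid/gid pair; the log's container printed "167 167"

Running `podman events --since 5m` alongside shows the same lifecycle records that podman wrote to this journal.
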
Feb 25 07:42:39 np0005629333 podman[339231]: 2026-02-25 12:42:39.120536477 +0000 UTC m=+0.042107739 container create 68c78756e684bd54a529515ebf69648b5bd727e6829e847d0e8ec3b23fcd6f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 07:42:39 np0005629333 systemd[1]: Started libpod-conmon-68c78756e684bd54a529515ebf69648b5bd727e6829e847d0e8ec3b23fcd6f0f.scope.
Feb 25 07:42:39 np0005629333 podman[339231]: 2026-02-25 12:42:39.097314711 +0000 UTC m=+0.018885963 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:42:39 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:42:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85336dac33943960f28fe299da666ed4b00ba73c5530f114cd6d990127d9387f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:42:39 np0005629333 podman[339231]: 2026-02-25 12:42:39.22801375 +0000 UTC m=+0.149585082 container init 68c78756e684bd54a529515ebf69648b5bd727e6829e847d0e8ec3b23fcd6f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 07:42:39 np0005629333 podman[339231]: 2026-02-25 12:42:39.232897157 +0000 UTC m=+0.154468439 container start 68c78756e684bd54a529515ebf69648b5bd727e6829e847d0e8ec3b23fcd6f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 07:42:39 np0005629333 podman[339253]: 2026-02-25 12:42:39.23902397 +0000 UTC m=+0.047778819 container create 8f2bcd5b71564f82c194e9cb03ae57ca720b1aaef86afe9394ec10aed6abcb71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 07:42:39 np0005629333 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[339254]: [NOTICE]   (339268) : New worker (339270) forked
Feb 25 07:42:39 np0005629333 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[339254]: [NOTICE]   (339268) : Loading success.
Feb 25 07:42:39 np0005629333 nova_compute[244014]: 2026-02-25 12:42:39.279 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023344.278254, d547b0db-242e-49a5-8a76-5682b0235b6d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:42:39 np0005629333 nova_compute[244014]: 2026-02-25 12:42:39.280 244018 INFO nova.compute.manager [-] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:42:39 np0005629333 systemd[1]: Started libpod-conmon-8f2bcd5b71564f82c194e9cb03ae57ca720b1aaef86afe9394ec10aed6abcb71.scope.
Feb 25 07:42:39 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:42:39 np0005629333 nova_compute[244014]: 2026-02-25 12:42:39.307 244018 DEBUG nova.compute.manager [None req-1c80d8cb-d659-41ec-a264-afb622005127 - - - - - -] [instance: d547b0db-242e-49a5-8a76-5682b0235b6d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:42:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c2dce0cf89c192b535b364c8d13d0f61e05e053073438e024dd9f3d2a72033/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:42:39 np0005629333 podman[339253]: 2026-02-25 12:42:39.218604184 +0000 UTC m=+0.027359113 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:42:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c2dce0cf89c192b535b364c8d13d0f61e05e053073438e024dd9f3d2a72033/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:42:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c2dce0cf89c192b535b364c8d13d0f61e05e053073438e024dd9f3d2a72033/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:42:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c2dce0cf89c192b535b364c8d13d0f61e05e053073438e024dd9f3d2a72033/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
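
The xfs remount notices simply record that the overlay's backing XFS filesystem was formatted without the bigtime feature, so its inode timestamps are 32-bit and stop at 0x7fffffff (January 2038); nothing is wrong today. A quick check, assuming you substitute your own mount point:

    import subprocess

    # xfs_info reports "bigtime=1" for 64-bit timestamps; "bigtime=0"
    # corresponds to the "supports timestamps until 2038" notice above.
    info = subprocess.run(["xfs_info", "/var/lib/containers"],
                          capture_output=True, text=True, check=True).stdout
    print("bigtime=1" in info)
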
Feb 25 07:42:39 np0005629333 podman[339253]: 2026-02-25 12:42:39.339913967 +0000 UTC m=+0.148668826 container init 8f2bcd5b71564f82c194e9cb03ae57ca720b1aaef86afe9394ec10aed6abcb71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 07:42:39 np0005629333 podman[339253]: 2026-02-25 12:42:39.347139381 +0000 UTC m=+0.155894230 container start 8f2bcd5b71564f82c194e9cb03ae57ca720b1aaef86afe9394ec10aed6abcb71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dubinsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 07:42:39 np0005629333 podman[339253]: 2026-02-25 12:42:39.350785664 +0000 UTC m=+0.159540733 container attach 8f2bcd5b71564f82c194e9cb03ae57ca720b1aaef86afe9394ec10aed6abcb71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dubinsky, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:42:39 np0005629333 nova_compute[244014]: 2026-02-25 12:42:39.626 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for 11f1a7e0-6001-4367-8491-5b5508f56bdb due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 25 07:42:39 np0005629333 nova_compute[244014]: 2026-02-25 12:42:39.626 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023359.6251705, 11f1a7e0-6001-4367-8491-5b5508f56bdb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:42:39 np0005629333 nova_compute[244014]: 2026-02-25 12:42:39.627 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:42:39 np0005629333 nova_compute[244014]: 2026-02-25 12:42:39.633 244018 INFO nova.virt.libvirt.driver [-] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Instance running successfully.#033[00m
Feb 25 07:42:39 np0005629333 nova_compute[244014]: 2026-02-25 12:42:39.633 244018 INFO nova.virt.libvirt.driver [None req-185962ba-3606-4ca7-9b8a-ac0c1dab96ee fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Instance soft rebooted successfully.#033[00m
Feb 25 07:42:39 np0005629333 nova_compute[244014]: 2026-02-25 12:42:39.634 244018 DEBUG nova.compute.manager [None req-185962ba-3606-4ca7-9b8a-ac0c1dab96ee fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:42:39 np0005629333 nova_compute[244014]: 2026-02-25 12:42:39.663 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:42:39 np0005629333 nova_compute[244014]: 2026-02-25 12:42:39.667 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:42:39 np0005629333 nova_compute[244014]: 2026-02-25 12:42:39.694 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] During sync_power_state the instance has a pending task (reboot_started). Skip.#033[00m
Feb 25 07:42:39 np0005629333 nova_compute[244014]: 2026-02-25 12:42:39.695 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023359.6255298, 11f1a7e0-6001-4367-8491-5b5508f56bdb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:42:39 np0005629333 nova_compute[244014]: 2026-02-25 12:42:39.695 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] VM Started (Lifecycle Event)#033[00m
Feb 25 07:42:39 np0005629333 nova_compute[244014]: 2026-02-25 12:42:39.704 244018 DEBUG oslo_concurrency.lockutils [None req-185962ba-3606-4ca7-9b8a-ac0c1dab96ee fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:39 np0005629333 nova_compute[244014]: 2026-02-25 12:42:39.712 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:42:39 np0005629333 nova_compute[244014]: 2026-02-25 12:42:39.715 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
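
Lines 12:42:39.626-.715 are one soft reboot seen from two sides: libvirt emits Resumed and Started lifecycle events while the user-facing request (req-185962ba) finishes and releases its instance lock, and the event handler declines to sync power state while task_state is still reboot_started, which is what the "Skip." line records. A condensed sketch of that guard, simplified from the nova.compute.manager paths cited in the log (illustrative, not the actual nova source):

    # Sketch of the power-state sync guard behind the "Skip." line above.
    def sync_power_state(instance, vm_power_state):
        if instance["task_state"] is not None:
            # Another operation (e.g. reboot_started) owns the instance,
            # so the lifecycle event must not touch the DB record.
            print(f"pending task ({instance['task_state']}), skip")
            return
        if instance["db_power_state"] != vm_power_state:
            instance["db_power_state"] = vm_power_state  # reconcile with hypervisor

    sync_power_state({"task_state": "reboot_started", "db_power_state": 1}, 1)
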
Feb 25 07:42:39 np0005629333 lvm[339401]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:42:39 np0005629333 lvm[339401]: VG ceph_vg0 finished
Feb 25 07:42:39 np0005629333 lvm[339403]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:42:39 np0005629333 lvm[339403]: VG ceph_vg1 finished
Feb 25 07:42:39 np0005629333 lvm[339404]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:42:39 np0005629333 lvm[339404]: VG ceph_vg2 finished
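
The lvm lines are udev event-based activation: as each loop device appears, pvscan marks the PV online, and a VG is reported complete (and auto-activated) once all of its PVs are present; each ceph_vgN here is a single-PV VG backing one OSD. The same PV-to-VG view on demand, assuming lvm2 tools and root:

    import json, subprocess

    # JSON report of PVs and their owning VGs; a VG is "complete" once all
    # of its PVs (here one each) have come online.
    report = json.loads(subprocess.run(
        ["pvs", "--reportformat", "json", "-o", "pv_name,vg_name"],
        capture_output=True, text=True, check=True).stdout)
    for pv in report["report"][0]["pv"]:
        print(pv["pv_name"], "->", pv["vg_name"])  # e.g. /dev/loop3 -> ceph_vg0
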
Feb 25 07:42:40 np0005629333 pedantic_dubinsky[339284]: {}
Feb 25 07:42:40 np0005629333 systemd[1]: libpod-8f2bcd5b71564f82c194e9cb03ae57ca720b1aaef86afe9394ec10aed6abcb71.scope: Deactivated successfully.
Feb 25 07:42:40 np0005629333 systemd[1]: libpod-8f2bcd5b71564f82c194e9cb03ae57ca720b1aaef86afe9394ec10aed6abcb71.scope: Consumed 1.044s CPU time.
Feb 25 07:42:40 np0005629333 nova_compute[244014]: 2026-02-25 12:42:40.091 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:40 np0005629333 podman[339407]: 2026-02-25 12:42:40.134877901 +0000 UTC m=+0.037844519 container died 8f2bcd5b71564f82c194e9cb03ae57ca720b1aaef86afe9394ec10aed6abcb71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:42:40 np0005629333 systemd[1]: var-lib-containers-storage-overlay-14c2dce0cf89c192b535b364c8d13d0f61e05e053073438e024dd9f3d2a72033-merged.mount: Deactivated successfully.
Feb 25 07:42:40 np0005629333 podman[339407]: 2026-02-25 12:42:40.171514725 +0000 UTC m=+0.074481323 container remove 8f2bcd5b71564f82c194e9cb03ae57ca720b1aaef86afe9394ec10aed6abcb71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:42:40 np0005629333 systemd[1]: libpod-conmon-8f2bcd5b71564f82c194e9cb03ae57ca720b1aaef86afe9394ec10aed6abcb71.scope: Deactivated successfully.
Feb 25 07:42:40 np0005629333 nova_compute[244014]: 2026-02-25 12:42:40.181 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:42:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:42:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:42:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:42:40 np0005629333 nova_compute[244014]: 2026-02-25 12:42:40.279 244018 DEBUG nova.compute.manager [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received event network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:42:40 np0005629333 nova_compute[244014]: 2026-02-25 12:42:40.280 244018 DEBUG oslo_concurrency.lockutils [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:40 np0005629333 nova_compute[244014]: 2026-02-25 12:42:40.281 244018 DEBUG oslo_concurrency.lockutils [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:40 np0005629333 nova_compute[244014]: 2026-02-25 12:42:40.281 244018 DEBUG oslo_concurrency.lockutils [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:40 np0005629333 nova_compute[244014]: 2026-02-25 12:42:40.281 244018 DEBUG nova.compute.manager [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] No waiting events found dispatching network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:42:40 np0005629333 nova_compute[244014]: 2026-02-25 12:42:40.281 244018 WARNING nova.compute.manager [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received unexpected event network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e for instance with vm_state active and task_state None.#033[00m
Feb 25 07:42:40 np0005629333 nova_compute[244014]: 2026-02-25 12:42:40.281 244018 DEBUG nova.compute.manager [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received event network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:42:40 np0005629333 nova_compute[244014]: 2026-02-25 12:42:40.282 244018 DEBUG oslo_concurrency.lockutils [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:40 np0005629333 nova_compute[244014]: 2026-02-25 12:42:40.282 244018 DEBUG oslo_concurrency.lockutils [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:40 np0005629333 nova_compute[244014]: 2026-02-25 12:42:40.282 244018 DEBUG oslo_concurrency.lockutils [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:40 np0005629333 nova_compute[244014]: 2026-02-25 12:42:40.282 244018 DEBUG nova.compute.manager [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] No waiting events found dispatching network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:42:40 np0005629333 nova_compute[244014]: 2026-02-25 12:42:40.283 244018 WARNING nova.compute.manager [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received unexpected event network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e for instance with vm_state active and task_state None.#033[00m
Feb 25 07:42:40 np0005629333 nova_compute[244014]: 2026-02-25 12:42:40.283 244018 DEBUG nova.compute.manager [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received event network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:42:40 np0005629333 nova_compute[244014]: 2026-02-25 12:42:40.283 244018 DEBUG oslo_concurrency.lockutils [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:40 np0005629333 nova_compute[244014]: 2026-02-25 12:42:40.283 244018 DEBUG oslo_concurrency.lockutils [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:40 np0005629333 nova_compute[244014]: 2026-02-25 12:42:40.284 244018 DEBUG oslo_concurrency.lockutils [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:40 np0005629333 nova_compute[244014]: 2026-02-25 12:42:40.284 244018 DEBUG nova.compute.manager [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] No waiting events found dispatching network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:42:40 np0005629333 nova_compute[244014]: 2026-02-25 12:42:40.284 244018 WARNING nova.compute.manager [req-fd2a714e-8f48-4257-b6bc-258f5027d04e req-2729f57f-97c6-4dbc-ba14-3d494db32ff4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received unexpected event network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e for instance with vm_state active and task_state None.#033[00m
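
The thrice-repeated network-vif-plugged block is nova's external-event plumbing running idle: Neutron delivers the event, nova looks for an operation that registered a waiter for it, finds none (the reboot already finished and task_state is None), and logs it as unexpected. A stripped-down sketch of that waiter-registry idea (names illustrative, not nova's):

    import threading

    # Operations register a waiter per expected event; the external-event
    # handler pops and signals it, and unmatched events are dropped.
    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = {}  # event name -> threading.Event

        def prepare(self, name):
            with self._lock:
                return self._waiters.setdefault(name, threading.Event())

        def pop(self, name):
            with self._lock:
                ev = self._waiters.pop(name, None)
            if ev is None:
                print(f"unexpected event {name}, dropping")  # the WARNING above
            else:
                ev.set()

    InstanceEvents().pop("network-vif-plugged-6aff0d14")  # no waiter registered
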
Feb 25 07:42:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 31 KiB/s wr, 32 op/s
Feb 25 07:42:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:41.026 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:42:41 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:42:41 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:42:42 np0005629333 nova_compute[244014]: 2026-02-25 12:42:42.031 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 31 KiB/s wr, 86 op/s
Feb 25 07:42:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:42:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:42:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:42:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:42:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007717237114215445 of space, bias 1.0, pg target 0.23151711342646333 quantized to 32 (current 32)
Feb 25 07:42:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:42:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:42:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:42:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:42:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:42:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493797090980871 of space, bias 1.0, pg target 0.7481391272942614 quantized to 32 (current 32)
Feb 25 07:42:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:42:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0114275959698244e-06 of space, bias 4.0, pg target 0.0012137131151637893 quantized to 16 (current 16)
Feb 25 07:42:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:42:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:42:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:42:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:42:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:42:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:42:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:42:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:42:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:42:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
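
The pg_autoscaler arithmetic is visible in the numbers: pg target = usage ratio x bias x total target PGs, where the total here works out to 300 (plausibly mon_target_pg_per_osd=100 times 3 OSDs; an assumption, not logged). The target is then rounded to a power of two and only applied when it differs from the current pg_num by the autoscaler's change threshold (3x by default), which is why every pool stays at its current value. Checking the 'cephfs.cephfs.meta' line, whose bias is 4.0:

    # Reproduce the pg_autoscaler line for 'cephfs.cephfs.meta' above,
    # assuming 300 total target PGs (100 per OSD x 3 OSDs).
    usage, bias, total_pgs = 1.0114275959698244e-06, 4.0, 300
    print(usage * bias * total_pgs)  # ~0.0012137131151637893, matching the line above
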
Feb 25 07:42:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:42:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 82 op/s
Feb 25 07:42:44 np0005629333 nova_compute[244014]: 2026-02-25 12:42:44.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:42:44 np0005629333 nova_compute[244014]: 2026-02-25 12:42:44.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 25 07:42:44 np0005629333 nova_compute[244014]: 2026-02-25 12:42:44.893 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb 25 07:42:45 np0005629333 nova_compute[244014]: 2026-02-25 12:42:45.093 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:45 np0005629333 nova_compute[244014]: 2026-02-25 12:42:45.158 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023350.1581175, 62d2fee1-f07f-44e3-a511-6b9bb341a3ed => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:42:45 np0005629333 nova_compute[244014]: 2026-02-25 12:42:45.159 244018 INFO nova.compute.manager [-] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:42:45 np0005629333 nova_compute[244014]: 2026-02-25 12:42:45.183 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:45 np0005629333 nova_compute[244014]: 2026-02-25 12:42:45.193 244018 DEBUG nova.compute.manager [None req-3dbccde4-452a-4e30-9f9d-7fa10527494c - - - - - -] [instance: 62d2fee1-f07f-44e3-a511-6b9bb341a3ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:42:45 np0005629333 nova_compute[244014]: 2026-02-25 12:42:45.479 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 72 op/s
Feb 25 07:42:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:42:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3756345258' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:42:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:42:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3756345258' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
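
The df / get-quota pair from client.openstack is the periodic capacity poll a Cinder RBD backend issues; any librados client can send the same monitor commands. A sketch with python-rados, assuming a reachable cluster and credentials for client.openstack:

    import json
    import rados

    # Issue the same monitor commands the audit log shows being dispatched.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b"")
    print(json.loads(out)["stats"]["total_bytes"])
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "osd pool get-quota", "pool": "volumes",
                    "format": "json"}), b"")
    print(json.loads(out))
    cluster.shutdown()
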
Feb 25 07:42:47 np0005629333 nova_compute[244014]: 2026-02-25 12:42:47.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:42:47 np0005629333 nova_compute[244014]: 2026-02-25 12:42:47.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 25 07:42:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:42:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 72 op/s
Feb 25 07:42:49 np0005629333 nova_compute[244014]: 2026-02-25 12:42:49.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:42:49 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:49Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3e:d0:2e 10.100.0.4
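
The DHCPACK comes from ovn-controller's pinctrl thread: OVN answers DHCP for bound ports itself, from DHCP_Options rows in the northbound DB, so no per-network dnsmasq runs on this compute node. Configuring that by hand looks roughly like the following sketch (addresses mirror the log; the port name is illustrative):

    import subprocess

    def nbctl(*args):
        return subprocess.run(["ovn-nbctl", *args], capture_output=True,
                              text=True, check=True).stdout.strip()

    # Create a DHCP_Options row for the subnet, then attach it to a port;
    # ovn-controller then serves DHCP and logs DHCPACKs like the one above.
    opts = nbctl("create", "DHCP_Options", "cidr=10.100.0.0/28",
                 "options:server_id=10.100.0.1",
                 "options:server_mac=fa:16:3e:3e:d0:2e",
                 "options:router=10.100.0.1",
                 "options:lease_time=3600")
    nbctl("lsp-set-dhcpv4-options", "some-port", opts)
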
Feb 25 07:42:50 np0005629333 nova_compute[244014]: 2026-02-25 12:42:50.095 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:50 np0005629333 nova_compute[244014]: 2026-02-25 12:42:50.184 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 69 op/s
Feb 25 07:42:51 np0005629333 podman[339447]: 2026-02-25 12:42:51.714609805 +0000 UTC m=+0.055239890 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 25 07:42:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 12 KiB/s wr, 94 op/s
Feb 25 07:42:52 np0005629333 nova_compute[244014]: 2026-02-25 12:42:52.609 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:42:53 np0005629333 podman[339468]: 2026-02-25 12:42:53.753581323 +0000 UTC m=+0.100640521 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
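
The two health_status=healthy records are podman's timer-driven healthchecks: both containers were created with a healthcheck (the embedded config_data shows test '/openstack/healthcheck', a script bind-mounted from the host), and each run is written to the journal as an event. The equivalent flags on a hand-run container (image and command illustrative; 'true' stands in for the real healthcheck script):

    import subprocess

    # Create a container with a healthcheck; podman then emits
    # "container health_status ... health_status=healthy" events like those above.
    subprocess.run([
        "podman", "run", "-d", "--name", "demo",
        "--health-cmd", "true", "--health-interval", "30s",
        "quay.io/centos/centos:stream9", "sleep", "infinity"], check=True)
    subprocess.run(["podman", "healthcheck", "run", "demo"])  # exit 0 = healthy
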
Feb 25 07:42:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 944 KiB/s rd, 12 KiB/s wr, 59 op/s
Feb 25 07:42:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:55.022 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:55.023 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:55.023 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:55 np0005629333 nova_compute[244014]: 2026-02-25 12:42:55.098 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:55 np0005629333 nova_compute[244014]: 2026-02-25 12:42:55.185 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:55 np0005629333 nova_compute[244014]: 2026-02-25 12:42:55.913 244018 INFO nova.compute.manager [None req-26dd1456-d6be-45d5-be5b-a0f6d959fd45 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Get console output#033[00m
Feb 25 07:42:55 np0005629333 nova_compute[244014]: 2026-02-25 12:42:55.920 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
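
The privsep line records a swallowed TypeError while draining the serial console pty: some read path handed back None where bytes were expected, and concatenating it fails; nova logs and ignores the error rather than failing the whole get-console-output call. The failure mode in miniature (illustrative, not nova's code):

    def read_chunk():
        return None  # e.g. an EOF path returning None instead of b""

    buf = b""
    try:
        buf += read_chunk()  # TypeError: can't concat NoneType to bytes
    except TypeError as exc:
        print(f"Ignored error while reading from instance console pty: {exc}")
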
Feb 25 07:42:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 529 KiB/s rd, 12 KiB/s wr, 43 op/s
Feb 25 07:42:57 np0005629333 nova_compute[244014]: 2026-02-25 12:42:57.902 244018 DEBUG nova.compute.manager [req-c99f375c-153e-46c4-b1a0-dc963a3d039d req-eb9f776a-8aeb-4928-bd48-bd2b80909133 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received event network-changed-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:42:57 np0005629333 nova_compute[244014]: 2026-02-25 12:42:57.902 244018 DEBUG nova.compute.manager [req-c99f375c-153e-46c4-b1a0-dc963a3d039d req-eb9f776a-8aeb-4928-bd48-bd2b80909133 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Refreshing instance network info cache due to event network-changed-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:42:57 np0005629333 nova_compute[244014]: 2026-02-25 12:42:57.903 244018 DEBUG oslo_concurrency.lockutils [req-c99f375c-153e-46c4-b1a0-dc963a3d039d req-eb9f776a-8aeb-4928-bd48-bd2b80909133 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:42:57 np0005629333 nova_compute[244014]: 2026-02-25 12:42:57.903 244018 DEBUG oslo_concurrency.lockutils [req-c99f375c-153e-46c4-b1a0-dc963a3d039d req-eb9f776a-8aeb-4928-bd48-bd2b80909133 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:42:57 np0005629333 nova_compute[244014]: 2026-02-25 12:42:57.903 244018 DEBUG nova.network.neutron [req-c99f375c-153e-46c4-b1a0-dc963a3d039d req-eb9f776a-8aeb-4928-bd48-bd2b80909133 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Refreshing network info cache for port 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:42:57 np0005629333 nova_compute[244014]: 2026-02-25 12:42:57.996 244018 DEBUG oslo_concurrency.lockutils [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "11f1a7e0-6001-4367-8491-5b5508f56bdb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:57 np0005629333 nova_compute[244014]: 2026-02-25 12:42:57.997 244018 DEBUG oslo_concurrency.lockutils [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:57 np0005629333 nova_compute[244014]: 2026-02-25 12:42:57.997 244018 DEBUG oslo_concurrency.lockutils [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:57 np0005629333 nova_compute[244014]: 2026-02-25 12:42:57.997 244018 DEBUG oslo_concurrency.lockutils [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:57 np0005629333 nova_compute[244014]: 2026-02-25 12:42:57.998 244018 DEBUG oslo_concurrency.lockutils [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:57.999 244018 INFO nova.compute.manager [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Terminating instance#033[00m
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.001 244018 DEBUG nova.compute.manager [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:42:58 np0005629333 kernel: tap6aff0d14-b6 (unregistering): left promiscuous mode
Feb 25 07:42:58 np0005629333 NetworkManager[49836]: <info>  [1772023378.0645] device (tap6aff0d14-b6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:42:58 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:58Z|01057|binding|INFO|Releasing lport 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e from this chassis (sb_readonly=0)
Feb 25 07:42:58 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:58Z|01058|binding|INFO|Setting lport 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e down in Southbound
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.073 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:58 np0005629333 ovn_controller[147040]: 2026-02-25T12:42:58Z|01059|binding|INFO|Removing iface tap6aff0d14-b6 ovn-installed in OVS
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.088 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:58.092 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:d0:2e 10.100.0.4'], port_security=['fa:16:3e:3e:d0:2e 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '11f1a7e0-6001-4367-8491-5b5508f56bdb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-898394ed-19f4-4525-9c0d-05895748de8c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '563c6e30-ca07-41f4-bf4e-27cec8015d08', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f095686b-cedf-4f50-94ed-1d2cdaae5b7e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6aff0d14-b6b5-45d1-83da-b9f0946a2e7e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:42:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:58.094 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e in datapath 898394ed-19f4-4525-9c0d-05895748de8c unbound from our chassis#033[00m
Feb 25 07:42:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:58.095 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 898394ed-19f4-4525-9c0d-05895748de8c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:42:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:58.097 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[75efc87d-6667-45a7-8068-b1950fc45572]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:58.098 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c namespace which is not needed anymore#033[00m
Feb 25 07:42:58 np0005629333 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Feb 25 07:42:58 np0005629333 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006c.scope: Consumed 11.906s CPU time.
Feb 25 07:42:58 np0005629333 systemd-machined[210048]: Machine qemu-136-instance-0000006c terminated.
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.227 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.234 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.242 244018 INFO nova.virt.libvirt.driver [-] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Instance destroyed successfully.#033[00m
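On the libvirt side, "Instance destroyed successfully" corresponds to a hard power-off plus undefine of the qemu domain (machine qemu-136-instance-0000006c above). A rough equivalent with the libvirt Python bindings, as a sketch only; nova's driver adds retries and error handling:

    # Sketch: destroy and undefine a domain, approximating nova's teardown.
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-0000006c')  # domain name from the log
    dom.destroy()  # hard stop; raises if the domain is already gone
    dom.undefineFlags(libvirt.VIR_DOMAIN_UNDEFINE_NVRAM)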
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.243 244018 DEBUG nova.objects.instance [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'resources' on Instance uuid 11f1a7e0-6001-4367-8491-5b5508f56bdb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:42:58 np0005629333 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[339254]: [NOTICE]   (339268) : haproxy version is 2.8.14-c23fe91
Feb 25 07:42:58 np0005629333 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[339254]: [NOTICE]   (339268) : path to executable is /usr/sbin/haproxy
Feb 25 07:42:58 np0005629333 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[339254]: [WARNING]  (339268) : Exiting Master process...
Feb 25 07:42:58 np0005629333 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[339254]: [ALERT]    (339268) : Current worker (339270) exited with code 143 (Terminated)
Feb 25 07:42:58 np0005629333 neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c[339254]: [WARNING]  (339268) : All workers exited. Exiting... (0)
Feb 25 07:42:58 np0005629333 systemd[1]: libpod-68c78756e684bd54a529515ebf69648b5bd727e6829e847d0e8ec3b23fcd6f0f.scope: Deactivated successfully.
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.265 244018 DEBUG nova.virt.libvirt.vif [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:42:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-356243921',display_name='tempest-TestNetworkAdvancedServerOps-server-356243921',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-356243921',id=108,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOZBQUrVipYRBieQq1xDRMWSoKXwtqySIiZY2u0KWFIY87u59XRYFhhGNlsK64UxHDZ7UBqeTfQ7KU9w8um5F3hgrG+dYRSQSNJa4y14JlVwVfxefeeLtULB24TIYsOAUg==',key_name='tempest-TestNetworkAdvancedServerOps-1545448232',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:42:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-em0x8woo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:42:39Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=11f1a7e0-6001-4367-8491-5b5508f56bdb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "address": "fa:16:3e:3e:d0:2e", "network": {"id": "898394ed-19f4-4525-9c0d-05895748de8c", "bridge": "br-int", "label": "tempest-network-smoke--407161761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aff0d14-b6", "ovs_interfaceid": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.265 244018 DEBUG nova.network.os_vif_util [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "address": "fa:16:3e:3e:d0:2e", "network": {"id": "898394ed-19f4-4525-9c0d-05895748de8c", "bridge": "br-int", "label": "tempest-network-smoke--407161761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aff0d14-b6", "ovs_interfaceid": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:42:58 np0005629333 podman[339520]: 2026-02-25 12:42:58.266168707 +0000 UTC m=+0.059812958 container died 68c78756e684bd54a529515ebf69648b5bd727e6829e847d0e8ec3b23fcd6f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.266 244018 DEBUG nova.network.os_vif_util [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3e:d0:2e,bridge_name='br-int',has_traffic_filtering=True,id=6aff0d14-b6b5-45d1-83da-b9f0946a2e7e,network=Network(898394ed-19f4-4525-9c0d-05895748de8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6aff0d14-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.267 244018 DEBUG os_vif [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:d0:2e,bridge_name='br-int',has_traffic_filtering=True,id=6aff0d14-b6b5-45d1-83da-b9f0946a2e7e,network=Network(898394ed-19f4-4525-9c0d-05895748de8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6aff0d14-b6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.269 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.269 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6aff0d14-b6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
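The unplug is a single ovsdbapp transaction against the local Open_vSwitch database, exactly the DelPortCommand logged above. A minimal sketch of issuing the same command directly (the socket path and timeout are assumptions):

    # Sketch: delete the tap port from br-int via ovsdbapp, as os-vif does.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # if_exists=True makes the delete idempotent, matching the log line.
    api.del_port('tap6aff0d14-b6', bridge='br-int',
                 if_exists=True).execute(check_error=True)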
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.275 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.278 244018 INFO os_vif [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:d0:2e,bridge_name='br-int',has_traffic_filtering=True,id=6aff0d14-b6b5-45d1-83da-b9f0946a2e7e,network=Network(898394ed-19f4-4525-9c0d-05895748de8c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6aff0d14-b6')#033[00m
Feb 25 07:42:58 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-68c78756e684bd54a529515ebf69648b5bd727e6829e847d0e8ec3b23fcd6f0f-userdata-shm.mount: Deactivated successfully.
Feb 25 07:42:58 np0005629333 systemd[1]: var-lib-containers-storage-overlay-85336dac33943960f28fe299da666ed4b00ba73c5530f114cd6d990127d9387f-merged.mount: Deactivated successfully.
Feb 25 07:42:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:42:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 235 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 23 KiB/s wr, 44 op/s
Feb 25 07:42:58 np0005629333 podman[339520]: 2026-02-25 12:42:58.314711627 +0000 UTC m=+0.108355798 container cleanup 68c78756e684bd54a529515ebf69648b5bd727e6829e847d0e8ec3b23fcd6f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 25 07:42:58 np0005629333 systemd[1]: libpod-conmon-68c78756e684bd54a529515ebf69648b5bd727e6829e847d0e8ec3b23fcd6f0f.scope: Deactivated successfully.
Feb 25 07:42:58 np0005629333 podman[339574]: 2026-02-25 12:42:58.384721793 +0000 UTC m=+0.045577217 container remove 68c78756e684bd54a529515ebf69648b5bd727e6829e847d0e8ec3b23fcd6f0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:42:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:58.389 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[82ab6c6b-ff12-4553-a154-bb27b27bf1c3]: (4, ('Wed Feb 25 12:42:58 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c (68c78756e684bd54a529515ebf69648b5bd727e6829e847d0e8ec3b23fcd6f0f)\n68c78756e684bd54a529515ebf69648b5bd727e6829e847d0e8ec3b23fcd6f0f\nWed Feb 25 12:42:58 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c (68c78756e684bd54a529515ebf69648b5bd727e6829e847d0e8ec3b23fcd6f0f)\n68c78756e684bd54a529515ebf69648b5bd727e6829e847d0e8ec3b23fcd6f0f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:58.390 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bbeb39ce-d15a-4d59-8924-c0701ca531d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:58.391 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap898394ed-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.393 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:58 np0005629333 kernel: tap898394ed-10: left promiscuous mode
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.399 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:42:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:58.404 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2d666dbf-dc07-4c99-b0d9-d1a443c03c9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:58.414 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[650e6071-c8b5-44b7-9d9b-0ac70912db66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:58.415 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3fc9149f-31ba-423f-8f61-d54186687fd3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
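The reply[...] lines are oslo.privsep round-trips: the agent calls a decorated function, the call is forwarded over a channel to the privileged daemon (pid 253268 here), and the return value comes back as a numbered reply. The pattern, sketched with an illustrative entrypoint; the code must live in an importable module so the daemon can resolve pypath, and the function shown is not one of neutron's:

    # Sketch of the oslo.privsep pattern behind the reply[...] lines.
    from oslo_privsep import capabilities, priv_context

    default = priv_context.PrivContext(
        __name__, cfg_section='privsep',
        pypath=__name__ + '.default',
        capabilities=[capabilities.CAP_NET_ADMIN])

    @default.entrypoint
    def link_exists(ifname):
        # Runs inside the privsep daemon; the result is serialized back.
        import os
        return os.path.exists('/sys/class/net/%s' % ifname)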
Feb 25 07:42:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:58.430 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[83bf50f4-6583-4190-918c-dd8f31fe41da]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532799, 'reachable_time': 26842, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339595, 'error': None, 'target': 'ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:58.433 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:42:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:42:58.433 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[4f3a85cd-9595-4e19-bb9c-2c4047fb05ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:42:58 np0005629333 systemd[1]: run-netns-ovnmeta\x2d898394ed\x2d19f4\x2d4525\x2d9c0d\x2d05895748de8c.mount: Deactivated successfully.
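remove_netns in neutron's privileged ip_lib is a thin wrapper over pyroute2, and the systemd mount deactivation above is the kernel releasing the namespace's bind mount. A sketch of the same removal (requires CAP_SYS_ADMIN; the ENOENT guard mirrors the if_exists-style tolerance used elsewhere):

    # Sketch: remove the ovnmeta namespace the way remove_netns does.
    import errno
    from pyroute2 import netns

    try:
        netns.remove('ovnmeta-898394ed-19f4-4525-9c0d-05895748de8c')
    except OSError as e:
        if e.errno != errno.ENOENT:  # already gone is fine
            raise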
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.569 244018 INFO nova.virt.libvirt.driver [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Deleting instance files /var/lib/nova/instances/11f1a7e0-6001-4367-8491-5b5508f56bdb_del#033[00m
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.570 244018 INFO nova.virt.libvirt.driver [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Deletion of /var/lib/nova/instances/11f1a7e0-6001-4367-8491-5b5508f56bdb_del complete#033[00m
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.680 244018 INFO nova.compute.manager [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.681 244018 DEBUG oslo.service.loopingcall [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.683 244018 DEBUG nova.compute.manager [-] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.684 244018 DEBUG nova.network.neutron [-] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.783 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.784 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.802 244018 DEBUG nova.compute.manager [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.893 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.893 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.903 244018 DEBUG nova.virt.hardware [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:42:58 np0005629333 nova_compute[244014]: 2026-02-25 12:42:58.904 244018 INFO nova.compute.claims [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:42:59 np0005629333 nova_compute[244014]: 2026-02-25 12:42:59.011 244018 DEBUG nova.compute.manager [req-5c46cdc2-31e5-4bc3-b025-c81611660e2d req-1161b547-0783-43cf-94a0-2d5ac8ea3fc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received event network-vif-unplugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:42:59 np0005629333 nova_compute[244014]: 2026-02-25 12:42:59.012 244018 DEBUG oslo_concurrency.lockutils [req-5c46cdc2-31e5-4bc3-b025-c81611660e2d req-1161b547-0783-43cf-94a0-2d5ac8ea3fc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:42:59 np0005629333 nova_compute[244014]: 2026-02-25 12:42:59.012 244018 DEBUG oslo_concurrency.lockutils [req-5c46cdc2-31e5-4bc3-b025-c81611660e2d req-1161b547-0783-43cf-94a0-2d5ac8ea3fc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:42:59 np0005629333 nova_compute[244014]: 2026-02-25 12:42:59.013 244018 DEBUG oslo_concurrency.lockutils [req-5c46cdc2-31e5-4bc3-b025-c81611660e2d req-1161b547-0783-43cf-94a0-2d5ac8ea3fc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:42:59 np0005629333 nova_compute[244014]: 2026-02-25 12:42:59.013 244018 DEBUG nova.compute.manager [req-5c46cdc2-31e5-4bc3-b025-c81611660e2d req-1161b547-0783-43cf-94a0-2d5ac8ea3fc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] No waiting events found dispatching network-vif-unplugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:42:59 np0005629333 nova_compute[244014]: 2026-02-25 12:42:59.013 244018 DEBUG nova.compute.manager [req-5c46cdc2-31e5-4bc3-b025-c81611660e2d req-1161b547-0783-43cf-94a0-2d5ac8ea3fc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received event network-vif-unplugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:42:59 np0005629333 nova_compute[244014]: 2026-02-25 12:42:59.064 244018 DEBUG oslo_concurrency.processutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:42:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:42:59 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/954141345' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:42:59 np0005629333 nova_compute[244014]: 2026-02-25 12:42:59.619 244018 DEBUG oslo_concurrency.processutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
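nova shells out for the pool statistics rather than using librados here; the ceph df call above goes through oslo.concurrency and shows up on the mon as the client.openstack dispatch two lines earlier. A sketch of the same call and of reading the totals from its JSON (key names follow ceph's df output format):

    # Sketch: run "ceph df" as nova does and read the cluster totals.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])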
Feb 25 07:42:59 np0005629333 nova_compute[244014]: 2026-02-25 12:42:59.624 244018 DEBUG nova.compute.provider_tree [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:42:59 np0005629333 nova_compute[244014]: 2026-02-25 12:42:59.647 244018 DEBUG nova.scheduler.client.report [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
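The inventory dict maps directly onto placement capacity: per resource class, usable capacity is (total - reserved) * allocation_ratio. Worked through for the values logged above:

    # Worked example: capacity implied by the logged inventory.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
        print(rc, cap)  # VCPU 32, MEMORY_MB 7167, DISK_GB 52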
Feb 25 07:42:59 np0005629333 nova_compute[244014]: 2026-02-25 12:42:59.670 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
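Every Acquiring/acquired/released triplet in these lines is oslo.concurrency's lockutils at work; "compute_resources" is an in-process lock serializing the resource tracker. The same pattern, sketched:

    # Sketch of the lock pattern behind the compute_resources messages.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def instance_claim():
        pass  # resource-tracker bookkeeping runs under the lock

    # or taken explicitly:
    with lockutils.lock('compute_resources'):
        pass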
Feb 25 07:42:59 np0005629333 nova_compute[244014]: 2026-02-25 12:42:59.671 244018 DEBUG nova.compute.manager [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:42:59 np0005629333 nova_compute[244014]: 2026-02-25 12:42:59.749 244018 DEBUG nova.compute.manager [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:42:59 np0005629333 nova_compute[244014]: 2026-02-25 12:42:59.750 244018 DEBUG nova.network.neutron [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:42:59 np0005629333 nova_compute[244014]: 2026-02-25 12:42:59.779 244018 INFO nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:42:59 np0005629333 nova_compute[244014]: 2026-02-25 12:42:59.807 244018 DEBUG nova.compute.manager [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:42:59 np0005629333 nova_compute[244014]: 2026-02-25 12:42:59.904 244018 DEBUG nova.compute.manager [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:42:59 np0005629333 nova_compute[244014]: 2026-02-25 12:42:59.905 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:42:59 np0005629333 nova_compute[244014]: 2026-02-25 12:42:59.906 244018 INFO nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Creating image(s)#033[00m
Feb 25 07:42:59 np0005629333 nova_compute[244014]: 2026-02-25 12:42:59.937 244018 DEBUG nova.storage.rbd_utils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:42:59 np0005629333 nova_compute[244014]: 2026-02-25 12:42:59.968 244018 DEBUG nova.storage.rbd_utils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:42:59 np0005629333 nova_compute[244014]: 2026-02-25 12:42:59.999 244018 DEBUG nova.storage.rbd_utils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:43:00 np0005629333 nova_compute[244014]: 2026-02-25 12:43:00.002 244018 DEBUG oslo_concurrency.processutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:43:00 np0005629333 nova_compute[244014]: 2026-02-25 12:43:00.026 244018 DEBUG nova.policy [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
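The failed network:attach_external_network check is an ordinary oslo.policy authorize call made with the request's credentials; a reader/member token does not satisfy the rule, so nova proceeds without external-network privileges. A sketch (the rule default registered here is an assumption for illustration; nova registers its own defaults):

    # Sketch of an oslo.policy check like the one logged above.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'role:admin'))

    creds = {'roles': ['reader', 'member'],
             'project_id': '25fa1e8dd32c483686f869da2604f2b1'}
    # do_raise=False returns a boolean instead of raising, so the caller
    # can log the failure and continue, as the manager does here.
    print(enforcer.authorize('network:attach_external_network',
                             {}, creds, do_raise=False))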
Feb 25 07:43:00 np0005629333 nova_compute[244014]: 2026-02-25 12:43:00.068 244018 DEBUG oslo_concurrency.processutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
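The oslo_concurrency.prlimit wrapper in that command line caps the image probe at 1 GiB of address space (--as=1073741824) and 30 s of CPU, guarding against malformed images. The same call expressed through the processutils API rather than the module runner, as a sketch:

    # Sketch: the qemu-img probe above, with the same resource caps.
    import json
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30)
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json', prlimit=limits)
    info = json.loads(out)
    print(info['format'], info['virtual-size'])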
Feb 25 07:43:00 np0005629333 nova_compute[244014]: 2026-02-25 12:43:00.069 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:43:00 np0005629333 nova_compute[244014]: 2026-02-25 12:43:00.069 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:43:00 np0005629333 nova_compute[244014]: 2026-02-25 12:43:00.069 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:43:00 np0005629333 nova_compute[244014]: 2026-02-25 12:43:00.095 244018 DEBUG nova.storage.rbd_utils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:43:00 np0005629333 nova_compute[244014]: 2026-02-25 12:43:00.101 244018 DEBUG oslo_concurrency.processutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:43:00 np0005629333 nova_compute[244014]: 2026-02-25 12:43:00.140 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:00 np0005629333 nova_compute[244014]: 2026-02-25 12:43:00.145 244018 DEBUG nova.network.neutron [req-c99f375c-153e-46c4-b1a0-dc963a3d039d req-eb9f776a-8aeb-4928-bd48-bd2b80909133 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Updated VIF entry in instance network info cache for port 6aff0d14-b6b5-45d1-83da-b9f0946a2e7e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:43:00 np0005629333 nova_compute[244014]: 2026-02-25 12:43:00.145 244018 DEBUG nova.network.neutron [req-c99f375c-153e-46c4-b1a0-dc963a3d039d req-eb9f776a-8aeb-4928-bd48-bd2b80909133 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Updating instance_info_cache with network_info: [{"id": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "address": "fa:16:3e:3e:d0:2e", "network": {"id": "898394ed-19f4-4525-9c0d-05895748de8c", "bridge": "br-int", "label": "tempest-network-smoke--407161761", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6aff0d14-b6", "ovs_interfaceid": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:43:00 np0005629333 nova_compute[244014]: 2026-02-25 12:43:00.169 244018 DEBUG oslo_concurrency.lockutils [req-c99f375c-153e-46c4-b1a0-dc963a3d039d req-eb9f776a-8aeb-4928-bd48-bd2b80909133 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-11f1a7e0-6001-4367-8491-5b5508f56bdb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
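The network_info blob cached above is plain JSON; fixed and floating addresses sit under network.subnets[].ips[]. A self-contained sketch, with the structure abbreviated to the one VIF from the log (note the floating IP list is already empty post-disassociation):

    # Sketch: walking the cached network_info for addresses.
    import json

    nw_info = json.loads('''[{"id": "6aff0d14-b6b5-45d1-83da-b9f0946a2e7e",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.4", "floating_ips": []}]}]}}]''')

    for vif in nw_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                print('fixed', ip['address'], 'floating',
                      [f['address'] for f in ip.get('floating_ips', [])])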
Feb 25 07:43:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 305 active+clean; 235 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 531 KiB/s rd, 23 KiB/s wr, 44 op/s
Feb 25 07:43:00 np0005629333 nova_compute[244014]: 2026-02-25 12:43:00.691 244018 DEBUG nova.network.neutron [-] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:43:00 np0005629333 nova_compute[244014]: 2026-02-25 12:43:00.706 244018 INFO nova.compute.manager [-] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Took 2.02 seconds to deallocate network for instance.#033[00m
Feb 25 07:43:00 np0005629333 nova_compute[244014]: 2026-02-25 12:43:00.762 244018 DEBUG oslo_concurrency.lockutils [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:43:00 np0005629333 nova_compute[244014]: 2026-02-25 12:43:00.763 244018 DEBUG oslo_concurrency.lockutils [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:43:00 np0005629333 nova_compute[244014]: 2026-02-25 12:43:00.831 244018 DEBUG oslo_concurrency.processutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.730s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:43:00 np0005629333 nova_compute[244014]: 2026-02-25 12:43:00.865 244018 DEBUG oslo_concurrency.processutils [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:43:00 np0005629333 nova_compute[244014]: 2026-02-25 12:43:00.946 244018 DEBUG nova.storage.rbd_utils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
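After the rbd import two entries up, the image is grown to the flavor's 1 GiB root disk. The same resize step with the python rbd bindings instead of the rbd CLI, as a sketch; pool, image name, and size are taken from the log:

    # Sketch: resize the freshly imported RBD image to 1 GiB.
    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx,
                           'cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk') as img:
                img.resize(1 * 1024 ** 3)  # 1073741824 bytes, as logged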
Feb 25 07:43:01 np0005629333 nova_compute[244014]: 2026-02-25 12:43:01.208 244018 DEBUG nova.compute.manager [req-e3d04eaa-2dce-4012-ae88-08a284968b8a req-bdbde54b-f498-4395-9527-62dc733fa502 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received event network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:43:01 np0005629333 nova_compute[244014]: 2026-02-25 12:43:01.209 244018 DEBUG oslo_concurrency.lockutils [req-e3d04eaa-2dce-4012-ae88-08a284968b8a req-bdbde54b-f498-4395-9527-62dc733fa502 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:43:01 np0005629333 nova_compute[244014]: 2026-02-25 12:43:01.210 244018 DEBUG oslo_concurrency.lockutils [req-e3d04eaa-2dce-4012-ae88-08a284968b8a req-bdbde54b-f498-4395-9527-62dc733fa502 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:43:01 np0005629333 nova_compute[244014]: 2026-02-25 12:43:01.210 244018 DEBUG oslo_concurrency.lockutils [req-e3d04eaa-2dce-4012-ae88-08a284968b8a req-bdbde54b-f498-4395-9527-62dc733fa502 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:43:01 np0005629333 nova_compute[244014]: 2026-02-25 12:43:01.211 244018 DEBUG nova.compute.manager [req-e3d04eaa-2dce-4012-ae88-08a284968b8a req-bdbde54b-f498-4395-9527-62dc733fa502 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] No waiting events found dispatching network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:43:01 np0005629333 nova_compute[244014]: 2026-02-25 12:43:01.211 244018 WARNING nova.compute.manager [req-e3d04eaa-2dce-4012-ae88-08a284968b8a req-bdbde54b-f498-4395-9527-62dc733fa502 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received unexpected event network-vif-plugged-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:43:01 np0005629333 nova_compute[244014]: 2026-02-25 12:43:01.212 244018 DEBUG nova.compute.manager [req-e3d04eaa-2dce-4012-ae88-08a284968b8a req-bdbde54b-f498-4395-9527-62dc733fa502 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Received event network-vif-deleted-6aff0d14-b6b5-45d1-83da-b9f0946a2e7e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:43:01 np0005629333 nova_compute[244014]: 2026-02-25 12:43:01.260 244018 DEBUG nova.objects.instance [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid cf7f6093-44a3-4e8f-8970-db25cf0b4ab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:43:01 np0005629333 nova_compute[244014]: 2026-02-25 12:43:01.274 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:43:01 np0005629333 nova_compute[244014]: 2026-02-25 12:43:01.275 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Ensure instance console log exists: /var/lib/nova/instances/cf7f6093-44a3-4e8f-8970-db25cf0b4ab9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:43:01 np0005629333 nova_compute[244014]: 2026-02-25 12:43:01.276 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:43:01 np0005629333 nova_compute[244014]: 2026-02-25 12:43:01.276 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:43:01 np0005629333 nova_compute[244014]: 2026-02-25 12:43:01.277 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:43:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:43:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1304900099' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:43:01 np0005629333 nova_compute[244014]: 2026-02-25 12:43:01.415 244018 DEBUG oslo_concurrency.processutils [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:43:01 np0005629333 nova_compute[244014]: 2026-02-25 12:43:01.421 244018 DEBUG nova.compute.provider_tree [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:43:01 np0005629333 nova_compute[244014]: 2026-02-25 12:43:01.454 244018 DEBUG nova.scheduler.client.report [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:43:01 np0005629333 nova_compute[244014]: 2026-02-25 12:43:01.494 244018 DEBUG oslo_concurrency.lockutils [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:43:01 np0005629333 nova_compute[244014]: 2026-02-25 12:43:01.531 244018 INFO nova.scheduler.client.report [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Deleted allocations for instance 11f1a7e0-6001-4367-8491-5b5508f56bdb#033[00m
Feb 25 07:43:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:43:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:43:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:43:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:43:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:43:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:43:01 np0005629333 nova_compute[244014]: 2026-02-25 12:43:01.608 244018 DEBUG oslo_concurrency.lockutils [None req-4c5c05fa-44aa-4c70-9fb7-4d97c8ffa7a2 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "11f1a7e0-6001-4367-8491-5b5508f56bdb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:43:01 np0005629333 nova_compute[244014]: 2026-02-25 12:43:01.641 244018 DEBUG nova.network.neutron [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Successfully created port: 473bf89e-f488-42b9-b6d3-d736d2a61760 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:43:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 197 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 553 KiB/s rd, 1.4 MiB/s wr, 77 op/s
Feb 25 07:43:02 np0005629333 nova_compute[244014]: 2026-02-25 12:43:02.443 244018 DEBUG nova.network.neutron [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Successfully created port: 7126aff6-e4c8-45c9-a3f7-6c333946b022 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
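Both "Successfully created port" lines come from Nova's _create_port_minimal calling Neutron on the instance's behalf, one port per requested network. A rough outside-of-Nova equivalent using openstacksdk; the cloud name is a placeholder, while the two network IDs are the ones that appear in the network_info later in this log:

    import openstack

    conn = openstack.connect(cloud='mycloud')  # hypothetical clouds.yaml entry
    # One port per network, matching this two-NIC instance.
    for net_id in ('e0ff7905-af45-428a-b6a0-d6e1209fd009',
                   'e4445989-91e8-4869-98cb-32b4b81bb3da'):
        port = conn.network.create_port(network_id=net_id)
        print(port.id)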
Feb 25 07:43:03 np0005629333 nova_compute[244014]: 2026-02-25 12:43:03.175 244018 DEBUG nova.network.neutron [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Successfully updated port: 473bf89e-f488-42b9-b6d3-d736d2a61760 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:43:03 np0005629333 nova_compute[244014]: 2026-02-25 12:43:03.272 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:43:03 np0005629333 nova_compute[244014]: 2026-02-25 12:43:03.352 244018 DEBUG nova.compute.manager [req-e0c18802-64c0-4a85-98ae-28f997a54fbb req-7dfec9b6-d13e-4d24-94be-c8333460b2fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-changed-473bf89e-f488-42b9-b6d3-d736d2a61760 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:43:03 np0005629333 nova_compute[244014]: 2026-02-25 12:43:03.353 244018 DEBUG nova.compute.manager [req-e0c18802-64c0-4a85-98ae-28f997a54fbb req-7dfec9b6-d13e-4d24-94be-c8333460b2fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Refreshing instance network info cache due to event network-changed-473bf89e-f488-42b9-b6d3-d736d2a61760. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:43:03 np0005629333 nova_compute[244014]: 2026-02-25 12:43:03.353 244018 DEBUG oslo_concurrency.lockutils [req-e0c18802-64c0-4a85-98ae-28f997a54fbb req-7dfec9b6-d13e-4d24-94be-c8333460b2fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:43:03 np0005629333 nova_compute[244014]: 2026-02-25 12:43:03.354 244018 DEBUG oslo_concurrency.lockutils [req-e0c18802-64c0-4a85-98ae-28f997a54fbb req-7dfec9b6-d13e-4d24-94be-c8333460b2fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:43:03 np0005629333 nova_compute[244014]: 2026-02-25 12:43:03.354 244018 DEBUG nova.network.neutron [req-e0c18802-64c0-4a85-98ae-28f997a54fbb req-7dfec9b6-d13e-4d24-94be-c8333460b2fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Refreshing network info cache for port 473bf89e-f488-42b9-b6d3-d736d2a61760 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:43:03 np0005629333 nova_compute[244014]: 2026-02-25 12:43:03.592 244018 DEBUG nova.network.neutron [req-e0c18802-64c0-4a85-98ae-28f997a54fbb req-7dfec9b6-d13e-4d24-94be-c8333460b2fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:43:04 np0005629333 nova_compute[244014]: 2026-02-25 12:43:04.078 244018 DEBUG nova.network.neutron [req-e0c18802-64c0-4a85-98ae-28f997a54fbb req-7dfec9b6-d13e-4d24-94be-c8333460b2fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:43:04 np0005629333 nova_compute[244014]: 2026-02-25 12:43:04.103 244018 DEBUG oslo_concurrency.lockutils [req-e0c18802-64c0-4a85-98ae-28f997a54fbb req-7dfec9b6-d13e-4d24-94be-c8333460b2fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:43:04 np0005629333 nova_compute[244014]: 2026-02-25 12:43:04.233 244018 DEBUG nova.network.neutron [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Successfully updated port: 7126aff6-e4c8-45c9-a3f7-6c333946b022 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:43:04 np0005629333 nova_compute[244014]: 2026-02-25 12:43:04.257 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:43:04 np0005629333 nova_compute[244014]: 2026-02-25 12:43:04.258 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:43:04 np0005629333 nova_compute[244014]: 2026-02-25 12:43:04.258 244018 DEBUG nova.network.neutron [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:43:04 np0005629333 nova_compute[244014]: 2026-02-25 12:43:04.286 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1857: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 144 KiB/s rd, 1.8 MiB/s wr, 74 op/s
Feb 25 07:43:04 np0005629333 nova_compute[244014]: 2026-02-25 12:43:04.357 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:04 np0005629333 nova_compute[244014]: 2026-02-25 12:43:04.598 244018 DEBUG nova.network.neutron [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:43:05 np0005629333 nova_compute[244014]: 2026-02-25 12:43:05.102 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:05 np0005629333 nova_compute[244014]: 2026-02-25 12:43:05.530 244018 DEBUG nova.compute.manager [req-23a70b39-e85e-49fe-9533-f6cdcede7da4 req-05b1121d-10d2-4d4b-a354-15f943875363 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-changed-7126aff6-e4c8-45c9-a3f7-6c333946b022 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:43:05 np0005629333 nova_compute[244014]: 2026-02-25 12:43:05.530 244018 DEBUG nova.compute.manager [req-23a70b39-e85e-49fe-9533-f6cdcede7da4 req-05b1121d-10d2-4d4b-a354-15f943875363 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Refreshing instance network info cache due to event network-changed-7126aff6-e4c8-45c9-a3f7-6c333946b022. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:43:05 np0005629333 nova_compute[244014]: 2026-02-25 12:43:05.531 244018 DEBUG oslo_concurrency.lockutils [req-23a70b39-e85e-49fe-9533-f6cdcede7da4 req-05b1121d-10d2-4d4b-a354-15f943875363 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:43:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 25 07:43:06 np0005629333 nova_compute[244014]: 2026-02-25 12:43:06.746 244018 DEBUG nova.network.neutron [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Updating instance_info_cache with network_info: [{"id": "473bf89e-f488-42b9-b6d3-d736d2a61760", "address": "fa:16:3e:7a:8c:23", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap473bf89e-f4", "ovs_interfaceid": "473bf89e-f488-42b9-b6d3-d736d2a61760", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "address": "fa:16:3e:f5:63:85", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:6385", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7126aff6-e4", "ovs_interfaceid": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:43:06 np0005629333 nova_compute[244014]: 2026-02-25 12:43:06.767 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:43:06 np0005629333 nova_compute[244014]: 2026-02-25 12:43:06.768 244018 DEBUG nova.compute.manager [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Instance network_info: |[{"id": "473bf89e-f488-42b9-b6d3-d736d2a61760", "address": "fa:16:3e:7a:8c:23", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap473bf89e-f4", "ovs_interfaceid": "473bf89e-f488-42b9-b6d3-d736d2a61760", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "address": "fa:16:3e:f5:63:85", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:6385", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7126aff6-e4", "ovs_interfaceid": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:43:06 np0005629333 nova_compute[244014]: 2026-02-25 12:43:06.769 244018 DEBUG oslo_concurrency.lockutils [req-23a70b39-e85e-49fe-9533-f6cdcede7da4 req-05b1121d-10d2-4d4b-a354-15f943875363 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:43:06 np0005629333 nova_compute[244014]: 2026-02-25 12:43:06.769 244018 DEBUG nova.network.neutron [req-23a70b39-e85e-49fe-9533-f6cdcede7da4 req-05b1121d-10d2-4d4b-a354-15f943875363 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Refreshing network info cache for port 7126aff6-e4c8-45c9-a3f7-6c333946b022 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
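The network_info blob logged above at 12:43:06.746 is a JSON list of VIF dicts, each carrying network.subnets[].ips[]. A small sketch that pulls (port id, fixed address) pairs out of such a structure:

    import json

    def fixed_addresses(network_info_json):
        # network_info_json: the JSON text of the cache-update line above.
        for vif in json.loads(network_info_json):
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    yield vif['id'], ip['address']

    # For this instance it yields:
    #   473bf89e-f488-42b9-b6d3-d736d2a61760 -> 10.100.0.8
    #   7126aff6-e4c8-45c9-a3f7-6c333946b022 -> 2001:db8::f816:3eff:fef5:6385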
Feb 25 07:43:06 np0005629333 nova_compute[244014]: 2026-02-25 12:43:06.772 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Start _get_guest_xml network_info=[{"id": "473bf89e-f488-42b9-b6d3-d736d2a61760", "address": "fa:16:3e:7a:8c:23", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap473bf89e-f4", "ovs_interfaceid": "473bf89e-f488-42b9-b6d3-d736d2a61760", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "address": "fa:16:3e:f5:63:85", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:6385", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7126aff6-e4", "ovs_interfaceid": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m

Feb 25 07:43:06 np0005629333 nova_compute[244014]: 2026-02-25 12:43:06.777 244018 WARNING nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:43:06 np0005629333 nova_compute[244014]: 2026-02-25 12:43:06.782 244018 DEBUG nova.virt.libvirt.host [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:43:06 np0005629333 nova_compute[244014]: 2026-02-25 12:43:06.783 244018 DEBUG nova.virt.libvirt.host [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:43:06 np0005629333 nova_compute[244014]: 2026-02-25 12:43:06.789 244018 DEBUG nova.virt.libvirt.host [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:43:06 np0005629333 nova_compute[244014]: 2026-02-25 12:43:06.790 244018 DEBUG nova.virt.libvirt.host [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:43:06 np0005629333 nova_compute[244014]: 2026-02-25 12:43:06.790 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:43:06 np0005629333 nova_compute[244014]: 2026-02-25 12:43:06.791 244018 DEBUG nova.virt.hardware [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:43:06 np0005629333 nova_compute[244014]: 2026-02-25 12:43:06.791 244018 DEBUG nova.virt.hardware [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:43:06 np0005629333 nova_compute[244014]: 2026-02-25 12:43:06.792 244018 DEBUG nova.virt.hardware [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:43:06 np0005629333 nova_compute[244014]: 2026-02-25 12:43:06.792 244018 DEBUG nova.virt.hardware [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:43:06 np0005629333 nova_compute[244014]: 2026-02-25 12:43:06.792 244018 DEBUG nova.virt.hardware [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:43:06 np0005629333 nova_compute[244014]: 2026-02-25 12:43:06.793 244018 DEBUG nova.virt.hardware [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:43:06 np0005629333 nova_compute[244014]: 2026-02-25 12:43:06.793 244018 DEBUG nova.virt.hardware [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:43:06 np0005629333 nova_compute[244014]: 2026-02-25 12:43:06.793 244018 DEBUG nova.virt.hardware [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:43:06 np0005629333 nova_compute[244014]: 2026-02-25 12:43:06.793 244018 DEBUG nova.virt.hardware [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:43:06 np0005629333 nova_compute[244014]: 2026-02-25 12:43:06.794 244018 DEBUG nova.virt.hardware [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:43:06 np0005629333 nova_compute[244014]: 2026-02-25 12:43:06.794 244018 DEBUG nova.virt.hardware [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
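The hardware.py lines above walk nova.virt.hardware's topology search: with no flavor or image constraints (limits and preferences 0:0:0, maxima 65536 each), a 1-vCPU guest admits exactly one topology, 1:1:1. A simplified sketch of that enumeration — the real code also honors preferred orderings and threading hints:

    # Enumerate (sockets, cores, threads) triples whose product equals the
    # vCPU count, capped by the per-dimension maxima from the log above.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)] -- matches the log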
Feb 25 07:43:06 np0005629333 nova_compute[244014]: 2026-02-25 12:43:06.797 244018 DEBUG oslo_concurrency.processutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:43:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:43:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1687572344' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:43:07 np0005629333 nova_compute[244014]: 2026-02-25 12:43:07.416 244018 DEBUG oslo_concurrency.processutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:43:07 np0005629333 nova_compute[244014]: 2026-02-25 12:43:07.455 244018 DEBUG nova.storage.rbd_utils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:43:07 np0005629333 nova_compute[244014]: 2026-02-25 12:43:07.462 244018 DEBUG oslo_concurrency.processutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:43:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:43:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3344917758' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.063 244018 DEBUG oslo_concurrency.processutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
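Nova runs "ceph mon dump --format=json" (twice in this span) to learn monitor addresses for the RBD <host> elements that appear in the guest XML below. A sketch of extracting host/port pairs from that JSON; the exact key layout varies across Ceph releases, so treat the field names here as assumptions:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    dump = json.loads(out)
    for mon in dump['mons']:
        # Assumed: each mon entry exposes "addr" as "ip:port/nonce"; newer
        # releases additionally provide "public_addrs" address vectors.
        host, port_nonce = mon['addr'].rsplit(':', 1)
        print(host, port_nonce.split('/')[0])   # e.g. 192.168.122.100 6789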
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.069 244018 DEBUG nova.virt.libvirt.vif [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:42:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1568259628',display_name='tempest-TestGettingAddress-server-1568259628',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1568259628',id=109,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjwXf4BRPjcASt12gAye0nTLMsY81chM8AwzuzQAmadzcHhb8MVkuSUIiO1cL5+mTUrVlDTjaV+Wk+tczkCutLjKZYT8KokDtS1+FrxD3TeeYwMZXPHCENgnD6/bPtHPg==',key_name='tempest-TestGettingAddress-617797010',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-wpph0igs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:42:59Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=cf7f6093-44a3-4e8f-8970-db25cf0b4ab9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "473bf89e-f488-42b9-b6d3-d736d2a61760", "address": "fa:16:3e:7a:8c:23", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap473bf89e-f4", "ovs_interfaceid": "473bf89e-f488-42b9-b6d3-d736d2a61760", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.070 244018 DEBUG nova.network.os_vif_util [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "473bf89e-f488-42b9-b6d3-d736d2a61760", "address": "fa:16:3e:7a:8c:23", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap473bf89e-f4", "ovs_interfaceid": "473bf89e-f488-42b9-b6d3-d736d2a61760", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.072 244018 DEBUG nova.network.os_vif_util [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:8c:23,bridge_name='br-int',has_traffic_filtering=True,id=473bf89e-f488-42b9-b6d3-d736d2a61760,network=Network(e0ff7905-af45-428a-b6a0-d6e1209fd009),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap473bf89e-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.075 244018 DEBUG nova.virt.libvirt.vif [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:42:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1568259628',display_name='tempest-TestGettingAddress-server-1568259628',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1568259628',id=109,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjwXf4BRPjcASt12gAye0nTLMsY81chM8AwzuzQAmadzcHhb8MVkuSUIiO1cL5+mTUrVlDTjaV+Wk+tczkCutLjKZYT8KokDtS1+FrxD3TeeYwMZXPHCENgnD6/bPtHPg==',key_name='tempest-TestGettingAddress-617797010',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-wpph0igs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:42:59Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=cf7f6093-44a3-4e8f-8970-db25cf0b4ab9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "address": "fa:16:3e:f5:63:85", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:6385", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7126aff6-e4", "ovs_interfaceid": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.076 244018 DEBUG nova.network.os_vif_util [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "address": "fa:16:3e:f5:63:85", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:6385", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7126aff6-e4", "ovs_interfaceid": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.078 244018 DEBUG nova.network.os_vif_util [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:63:85,bridge_name='br-int',has_traffic_filtering=True,id=7126aff6-e4c8-45c9-a3f7-6c333946b022,network=Network(e4445989-91e8-4869-98cb-32b4b81bb3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7126aff6-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.079 244018 DEBUG nova.objects.instance [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid cf7f6093-44a3-4e8f-8970-db25cf0b4ab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.102 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:43:08 np0005629333 nova_compute[244014]:  <uuid>cf7f6093-44a3-4e8f-8970-db25cf0b4ab9</uuid>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:  <name>instance-0000006d</name>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestGettingAddress-server-1568259628</nova:name>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:43:06</nova:creationTime>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:43:08 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:        <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:        <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:        <nova:port uuid="473bf89e-f488-42b9-b6d3-d736d2a61760">
Feb 25 07:43:08 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:        <nova:port uuid="7126aff6-e4c8-45c9-a3f7-6c333946b022">
Feb 25 07:43:08 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fef5:6385" ipVersion="6"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <entry name="serial">cf7f6093-44a3-4e8f-8970-db25cf0b4ab9</entry>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <entry name="uuid">cf7f6093-44a3-4e8f-8970-db25cf0b4ab9</entry>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk">
Feb 25 07:43:08 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:43:08 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk.config">
Feb 25 07:43:08 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:43:08 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:7a:8c:23"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <target dev="tap473bf89e-f4"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:f5:63:85"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <target dev="tap7126aff6-e4"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/cf7f6093-44a3-4e8f-8970-db25cf0b4ab9/console.log" append="off"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:43:08 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:43:08 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:43:08 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:43:08 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
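The domain XML above is what Nova hands to libvirt for instance-0000006d: two RBD-backed disks (root and config drive), two ethernet interfaces, and a q35 PCIe controller set. A short sketch that pulls the disk sources and interface MACs back out of such a document with the standard library; it assumes the <domain> text has been saved to a file named domain.xml:

    import xml.etree.ElementTree as ET

    # domain.xml is assumed to contain the <domain> document printed above.
    dom = ET.parse('domain.xml').getroot()
    for disk in dom.findall('./devices/disk'):
        src = disk.find('source')
        print(disk.get('device'), src.get('protocol'), src.get('name'))
    for iface in dom.findall('./devices/interface'):
        print(iface.find('mac').get('address'),
              iface.find('target').get('dev'))
    # disk rbd vms/cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk
    # cdrom rbd vms/cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk.config
    # fa:16:3e:7a:8c:23 tap473bf89e-f4
    # fa:16:3e:f5:63:85 tap7126aff6-e4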
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.104 244018 DEBUG nova.compute.manager [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Preparing to wait for external event network-vif-plugged-473bf89e-f488-42b9-b6d3-d736d2a61760 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.104 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.105 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.105 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.105 244018 DEBUG nova.compute.manager [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Preparing to wait for external event network-vif-plugged-7126aff6-e4c8-45c9-a3f7-6c333946b022 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.105 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.105 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.106 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.106 244018 DEBUG nova.virt.libvirt.vif [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:42:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1568259628',display_name='tempest-TestGettingAddress-server-1568259628',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1568259628',id=109,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjwXf4BRPjcASt12gAye0nTLMsY81chM8AwzuzQAmadzcHhb8MVkuSUIiO1cL5+mTUrVlDTjaV+Wk+tczkCutLjKZYT8KokDtS1+FrxD3TeeYwMZXPHCENgnD6/bPtHPg==',key_name='tempest-TestGettingAddress-617797010',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-wpph0igs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:42:59Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=cf7f6093-44a3-4e8f-8970-db25cf0b4ab9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "473bf89e-f488-42b9-b6d3-d736d2a61760", "address": "fa:16:3e:7a:8c:23", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap473bf89e-f4", "ovs_interfaceid": "473bf89e-f488-42b9-b6d3-d736d2a61760", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.107 244018 DEBUG nova.network.os_vif_util [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "473bf89e-f488-42b9-b6d3-d736d2a61760", "address": "fa:16:3e:7a:8c:23", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap473bf89e-f4", "ovs_interfaceid": "473bf89e-f488-42b9-b6d3-d736d2a61760", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.107 244018 DEBUG nova.network.os_vif_util [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:8c:23,bridge_name='br-int',has_traffic_filtering=True,id=473bf89e-f488-42b9-b6d3-d736d2a61760,network=Network(e0ff7905-af45-428a-b6a0-d6e1209fd009),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap473bf89e-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.108 244018 DEBUG os_vif [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:8c:23,bridge_name='br-int',has_traffic_filtering=True,id=473bf89e-f488-42b9-b6d3-d736d2a61760,network=Network(e0ff7905-af45-428a-b6a0-d6e1209fd009),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap473bf89e-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.108 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.109 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.109 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.113 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.113 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap473bf89e-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.114 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap473bf89e-f4, col_values=(('external_ids', {'iface-id': '473bf89e-f488-42b9-b6d3-d736d2a61760', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:8c:23', 'vm-uuid': 'cf7f6093-44a3-4e8f-8970-db25cf0b4ab9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.116 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:08 np0005629333 NetworkManager[49836]: <info>  [1772023388.1177] manager: (tap473bf89e-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/434)
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.122 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.124 244018 INFO os_vif [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:8c:23,bridge_name='br-int',has_traffic_filtering=True,id=473bf89e-f488-42b9-b6d3-d736d2a61760,network=Network(e0ff7905-af45-428a-b6a0-d6e1209fd009),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap473bf89e-f4')#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.125 244018 DEBUG nova.virt.libvirt.vif [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:42:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1568259628',display_name='tempest-TestGettingAddress-server-1568259628',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1568259628',id=109,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjwXf4BRPjcASt12gAye0nTLMsY81chM8AwzuzQAmadzcHhb8MVkuSUIiO1cL5+mTUrVlDTjaV+Wk+tczkCutLjKZYT8KokDtS1+FrxD3TeeYwMZXPHCENgnD6/bPtHPg==',key_name='tempest-TestGettingAddress-617797010',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-wpph0igs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:42:59Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=cf7f6093-44a3-4e8f-8970-db25cf0b4ab9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "address": "fa:16:3e:f5:63:85", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:6385", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7126aff6-e4", "ovs_interfaceid": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.126 244018 DEBUG nova.network.os_vif_util [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "address": "fa:16:3e:f5:63:85", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:6385", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7126aff6-e4", "ovs_interfaceid": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.126 244018 DEBUG nova.network.os_vif_util [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:63:85,bridge_name='br-int',has_traffic_filtering=True,id=7126aff6-e4c8-45c9-a3f7-6c333946b022,network=Network(e4445989-91e8-4869-98cb-32b4b81bb3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7126aff6-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.127 244018 DEBUG os_vif [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:63:85,bridge_name='br-int',has_traffic_filtering=True,id=7126aff6-e4c8-45c9-a3f7-6c333946b022,network=Network(e4445989-91e8-4869-98cb-32b4b81bb3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7126aff6-e4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.127 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.128 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.128 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.131 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.132 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7126aff6-e4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.132 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7126aff6-e4, col_values=(('external_ids', {'iface-id': '7126aff6-e4c8-45c9-a3f7-6c333946b022', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f5:63:85', 'vm-uuid': 'cf7f6093-44a3-4e8f-8970-db25cf0b4ab9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.134 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:08 np0005629333 NetworkManager[49836]: <info>  [1772023388.1352] manager: (tap7126aff6-e4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/435)
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.137 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.140 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.141 244018 INFO os_vif [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:63:85,bridge_name='br-int',has_traffic_filtering=True,id=7126aff6-e4c8-45c9-a3f7-6c333946b022,network=Network(e4445989-91e8-4869-98cb-32b4b81bb3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7126aff6-e4')#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.225 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.225 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.226 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:7a:8c:23, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.226 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:f5:63:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.226 244018 INFO nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Using config drive#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.258 244018 DEBUG nova.storage.rbd_utils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:43:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:43:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.471 244018 DEBUG nova.network.neutron [req-23a70b39-e85e-49fe-9533-f6cdcede7da4 req-05b1121d-10d2-4d4b-a354-15f943875363 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Updated VIF entry in instance network info cache for port 7126aff6-e4c8-45c9-a3f7-6c333946b022. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.471 244018 DEBUG nova.network.neutron [req-23a70b39-e85e-49fe-9533-f6cdcede7da4 req-05b1121d-10d2-4d4b-a354-15f943875363 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Updating instance_info_cache with network_info: [{"id": "473bf89e-f488-42b9-b6d3-d736d2a61760", "address": "fa:16:3e:7a:8c:23", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap473bf89e-f4", "ovs_interfaceid": "473bf89e-f488-42b9-b6d3-d736d2a61760", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "address": "fa:16:3e:f5:63:85", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:6385", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7126aff6-e4", "ovs_interfaceid": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.494 244018 DEBUG oslo_concurrency.lockutils [req-23a70b39-e85e-49fe-9533-f6cdcede7da4 req-05b1121d-10d2-4d4b-a354-15f943875363 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.740 244018 INFO nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Creating config drive at /var/lib/nova/instances/cf7f6093-44a3-4e8f-8970-db25cf0b4ab9/disk.config#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.744 244018 DEBUG oslo_concurrency.processutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cf7f6093-44a3-4e8f-8970-db25cf0b4ab9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3hx6j_52 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.885 244018 DEBUG oslo_concurrency.processutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cf7f6093-44a3-4e8f-8970-db25cf0b4ab9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3hx6j_52" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.911 244018 DEBUG nova.storage.rbd_utils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:43:08 np0005629333 nova_compute[244014]: 2026-02-25 12:43:08.915 244018 DEBUG oslo_concurrency.processutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cf7f6093-44a3-4e8f-8970-db25cf0b4ab9/disk.config cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:43:09 np0005629333 nova_compute[244014]: 2026-02-25 12:43:09.247 244018 DEBUG oslo_concurrency.processutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cf7f6093-44a3-4e8f-8970-db25cf0b4ab9/disk.config cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:43:09 np0005629333 nova_compute[244014]: 2026-02-25 12:43:09.249 244018 INFO nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Deleting local config drive /var/lib/nova/instances/cf7f6093-44a3-4e8f-8970-db25cf0b4ab9/disk.config because it was imported into RBD.#033[00m
Feb 25 07:43:09 np0005629333 kernel: tap473bf89e-f4: entered promiscuous mode
Feb 25 07:43:09 np0005629333 NetworkManager[49836]: <info>  [1772023389.3227] manager: (tap473bf89e-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/436)
Feb 25 07:43:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:09Z|01060|binding|INFO|Claiming lport 473bf89e-f488-42b9-b6d3-d736d2a61760 for this chassis.
Feb 25 07:43:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:09Z|01061|binding|INFO|473bf89e-f488-42b9-b6d3-d736d2a61760: Claiming fa:16:3e:7a:8c:23 10.100.0.8
Feb 25 07:43:09 np0005629333 nova_compute[244014]: 2026-02-25 12:43:09.325 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:09 np0005629333 nova_compute[244014]: 2026-02-25 12:43:09.332 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:09 np0005629333 NetworkManager[49836]: <info>  [1772023389.3420] manager: (tap7126aff6-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/437)
Feb 25 07:43:09 np0005629333 systemd-udevd[339949]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:43:09 np0005629333 systemd-udevd[339948]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.346 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:8c:23 10.100.0.8'], port_security=['fa:16:3e:7a:8c:23 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'cf7f6093-44a3-4e8f-8970-db25cf0b4ab9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0cf44a3d-3002-496e-a2d4-b5607642ce07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4edb1eef-979c-4be1-8c88-b734bc363731, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=473bf89e-f488-42b9-b6d3-d736d2a61760) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.348 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 473bf89e-f488-42b9-b6d3-d736d2a61760 in datapath e0ff7905-af45-428a-b6a0-d6e1209fd009 bound to our chassis#033[00m
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.351 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e0ff7905-af45-428a-b6a0-d6e1209fd009#033[00m
Feb 25 07:43:09 np0005629333 NetworkManager[49836]: <info>  [1772023389.3612] device (tap473bf89e-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:43:09 np0005629333 NetworkManager[49836]: <info>  [1772023389.3618] device (tap473bf89e-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.361 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[35aad0c9-af9a-4878-849f-1e7e9d93e250]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.364 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape0ff7905-a1 in ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.368 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape0ff7905-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.369 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5df4d4bb-078d-45fc-ac3d-77ced49b37e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.370 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4b36e628-6e64-42c3-8835-1aa4ccffe32a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:09 np0005629333 nova_compute[244014]: 2026-02-25 12:43:09.374 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:09 np0005629333 kernel: tap7126aff6-e4: entered promiscuous mode
Feb 25 07:43:09 np0005629333 NetworkManager[49836]: <info>  [1772023389.3763] device (tap7126aff6-e4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:43:09 np0005629333 NetworkManager[49836]: <info>  [1772023389.3774] device (tap7126aff6-e4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:43:09 np0005629333 nova_compute[244014]: 2026-02-25 12:43:09.378 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:09Z|01062|binding|INFO|Claiming lport 7126aff6-e4c8-45c9-a3f7-6c333946b022 for this chassis.
Feb 25 07:43:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:09Z|01063|binding|INFO|7126aff6-e4c8-45c9-a3f7-6c333946b022: Claiming fa:16:3e:f5:63:85 2001:db8::f816:3eff:fef5:6385
Feb 25 07:43:09 np0005629333 systemd-machined[210048]: New machine qemu-137-instance-0000006d.
Feb 25 07:43:09 np0005629333 nova_compute[244014]: 2026-02-25 12:43:09.381 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.382 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[dca97360-20e9-4f5f-9731-af7e62d5b83c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.385 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:63:85 2001:db8::f816:3eff:fef5:6385'], port_security=['fa:16:3e:f5:63:85 2001:db8::f816:3eff:fef5:6385'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef5:6385/64', 'neutron:device_id': 'cf7f6093-44a3-4e8f-8970-db25cf0b4ab9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4445989-91e8-4869-98cb-32b4b81bb3da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0cf44a3d-3002-496e-a2d4-b5607642ce07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1f88174-704f-4cf1-b79c-73e2410029c4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=7126aff6-e4c8-45c9-a3f7-6c333946b022) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:43:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:09Z|01064|binding|INFO|Setting lport 473bf89e-f488-42b9-b6d3-d736d2a61760 ovn-installed in OVS
Feb 25 07:43:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:09Z|01065|binding|INFO|Setting lport 473bf89e-f488-42b9-b6d3-d736d2a61760 up in Southbound
Feb 25 07:43:09 np0005629333 nova_compute[244014]: 2026-02-25 12:43:09.389 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:09Z|01066|binding|INFO|Setting lport 7126aff6-e4c8-45c9-a3f7-6c333946b022 ovn-installed in OVS
Feb 25 07:43:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:09Z|01067|binding|INFO|Setting lport 7126aff6-e4c8-45c9-a3f7-6c333946b022 up in Southbound
Feb 25 07:43:09 np0005629333 systemd[1]: Started Virtual Machine qemu-137-instance-0000006d.
Feb 25 07:43:09 np0005629333 nova_compute[244014]: 2026-02-25 12:43:09.395 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.397 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3d761de4-0f12-4069-a7c8-e867d69cd7be]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.427 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8aef7770-c9c4-4408-8b9f-fed06bfb0957]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:09 np0005629333 NetworkManager[49836]: <info>  [1772023389.4338] manager: (tape0ff7905-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/438)
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.433 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4b94b528-4a43-4409-a359-604961d41a83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.466 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[40798cf9-7392-4833-88ed-bef2a806f419]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.470 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7d35f874-b046-49fa-938c-63616dfd39c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:09 np0005629333 NetworkManager[49836]: <info>  [1772023389.4920] device (tape0ff7905-a0): carrier: link connected
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.496 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[36927694-4da9-4e5d-a8dc-0a89c4d13bde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.512 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2e54fd4d-ce25-4051-aa05-696e493e6c76]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape0ff7905-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:d2:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 320], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535889, 'reachable_time': 44896, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339986, 'error': None, 'target': 'ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.523 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b5e0b6ee-fc49-4246-ab37-1e4730dc9d37]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe25:d2fa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535889, 'tstamp': 535889}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339987, 'error': None, 'target': 'ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.537 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8e2cbe7d-93f5-41f8-a00d-51c89ed47c2f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape0ff7905-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:d2:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 320], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535889, 'reachable_time': 44896, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 339988, 'error': None, 'target': 'ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.569 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ffe65bff-2903-4f23-b5e8-6ad7140a8a6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.623 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1dfc38d7-3140-494c-a7fa-7708cc27e738]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.625 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0ff7905-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.625 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.626 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape0ff7905-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:09 np0005629333 kernel: tape0ff7905-a0: entered promiscuous mode
Feb 25 07:43:09 np0005629333 NetworkManager[49836]: <info>  [1772023389.6286] manager: (tape0ff7905-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/439)
Feb 25 07:43:09 np0005629333 nova_compute[244014]: 2026-02-25 12:43:09.627 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.631 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape0ff7905-a0, col_values=(('external_ids', {'iface-id': 'a592354c-38c0-41be-9126-87d6dec6c687'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:09Z|01068|binding|INFO|Releasing lport a592354c-38c0-41be-9126-87d6dec6c687 from this chassis (sb_readonly=0)
Feb 25 07:43:09 np0005629333 nova_compute[244014]: 2026-02-25 12:43:09.633 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.635 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e0ff7905-af45-428a-b6a0-d6e1209fd009.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e0ff7905-af45-428a-b6a0-d6e1209fd009.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.636 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[df1751eb-0757-496e-995e-3057f2250c10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.636 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-e0ff7905-af45-428a-b6a0-d6e1209fd009
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/e0ff7905-af45-428a-b6a0-d6e1209fd009.pid.haproxy
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID e0ff7905-af45-428a-b6a0-d6e1209fd009
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
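[editor] In the rendered config above, the "server metadata /var/lib/neutron/metadata_proxy" line points haproxy at the metadata agent's UNIX socket (haproxy treats a server address beginning with '/' as a UNIX socket path), and everything that varies between datapaths is derived from the network UUID. A minimal sketch of the create_config_file step, assuming a plain string template; the template is abbreviated (the defaults section is omitted) and only the paths and fields come from this log:

    # Sketch: render and write the per-network haproxy config.
    import os

    TEMPLATE = """global
        log         /dev/log local0 debug
        log-tag     haproxy-metadata-proxy-{net}
        user        root
        group       root
        maxconn     1024
        pidfile     /var/lib/neutron/external/pids/{net}.pid.haproxy
        daemon

    listen listener
        bind 169.254.169.254:80
        server metadata /var/lib/neutron/metadata_proxy
        http-request add-header X-OVN-Network-ID {net}
    """

    def create_config_file(net: str) -> str:
        path = f'/var/lib/neutron/ovn-metadata-proxy/{net}.conf'
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, 'w') as f:
            f.write(TEMPLATE.format(net=net))
        return path

    # e.g. create_config_file('e0ff7905-af45-428a-b6a0-d6e1209fd009')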
Feb 25 07:43:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:09.637 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'env', 'PROCESS_TAG=haproxy-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e0ff7905-af45-428a-b6a0-d6e1209fd009.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
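[editor] Stripped of sudo and neutron-rootwrap, the command above simply runs haproxy against that config inside the datapath's ovnmeta- namespace. A bare-bones equivalent for illustration only; the deployment in this log actually wraps the process in a podman container, as the lines below show:

    # Sketch: launch the metadata proxy in the datapath's network namespace.
    import subprocess

    def spawn_metadata_proxy(net: str) -> None:
        ns = f'ovnmeta-{net}'
        cfg = f'/var/lib/neutron/ovn-metadata-proxy/{net}.conf'
        subprocess.run(
            ['ip', 'netns', 'exec', ns,
             'env', f'PROCESS_TAG=haproxy-{net}',
             'haproxy', '-f', cfg],
            check=True)  # returns quickly: "daemon" in the config forks haproxy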
Feb 25 07:43:09 np0005629333 nova_compute[244014]: 2026-02-25 12:43:09.639 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:09 np0005629333 nova_compute[244014]: 2026-02-25 12:43:09.700 244018 DEBUG nova.compute.manager [req-280955ad-0c21-4499-80b9-b0e5df223bcc req-7a4810c3-6d74-42bc-ab00-23aab947d0e5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-vif-plugged-7126aff6-e4c8-45c9-a3f7-6c333946b022 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:43:09 np0005629333 nova_compute[244014]: 2026-02-25 12:43:09.701 244018 DEBUG oslo_concurrency.lockutils [req-280955ad-0c21-4499-80b9-b0e5df223bcc req-7a4810c3-6d74-42bc-ab00-23aab947d0e5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:43:09 np0005629333 nova_compute[244014]: 2026-02-25 12:43:09.701 244018 DEBUG oslo_concurrency.lockutils [req-280955ad-0c21-4499-80b9-b0e5df223bcc req-7a4810c3-6d74-42bc-ab00-23aab947d0e5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:43:09 np0005629333 nova_compute[244014]: 2026-02-25 12:43:09.702 244018 DEBUG oslo_concurrency.lockutils [req-280955ad-0c21-4499-80b9-b0e5df223bcc req-7a4810c3-6d74-42bc-ab00-23aab947d0e5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:43:09 np0005629333 nova_compute[244014]: 2026-02-25 12:43:09.702 244018 DEBUG nova.compute.manager [req-280955ad-0c21-4499-80b9-b0e5df223bcc req-7a4810c3-6d74-42bc-ab00-23aab947d0e5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Processing event network-vif-plugged-7126aff6-e4c8-45c9-a3f7-6c333946b022 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
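[editor] The Acquiring/acquired/released triplet around pop_instance_event above is oslo.concurrency's named in-process lock pattern. A minimal sketch, with an illustrative registry dict standing in for nova's actual event bookkeeping:

    # Sketch: serialize access to a per-instance event registry by lock name.
    from oslo_concurrency import lockutils

    _events = {}  # instance_uuid -> {event_name: waiter}; illustrative only

    def pop_instance_event(instance_uuid: str, event_name: str):
        with lockutils.lock(f'{instance_uuid}-events'):
            waiters = _events.get(instance_uuid) or {}
            return waiters.pop(event_name, None)  # None -> "No waiting events"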
Feb 25 07:43:09 np0005629333 nova_compute[244014]: 2026-02-25 12:43:09.816 244018 DEBUG nova.compute.manager [req-71a48f81-b42c-4aaa-a7b0-ac5886c689a1 req-2d83045c-fd16-4120-943d-1d7ac4df9cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-vif-plugged-473bf89e-f488-42b9-b6d3-d736d2a61760 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:43:09 np0005629333 nova_compute[244014]: 2026-02-25 12:43:09.817 244018 DEBUG oslo_concurrency.lockutils [req-71a48f81-b42c-4aaa-a7b0-ac5886c689a1 req-2d83045c-fd16-4120-943d-1d7ac4df9cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:43:09 np0005629333 nova_compute[244014]: 2026-02-25 12:43:09.818 244018 DEBUG oslo_concurrency.lockutils [req-71a48f81-b42c-4aaa-a7b0-ac5886c689a1 req-2d83045c-fd16-4120-943d-1d7ac4df9cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:43:09 np0005629333 nova_compute[244014]: 2026-02-25 12:43:09.818 244018 DEBUG oslo_concurrency.lockutils [req-71a48f81-b42c-4aaa-a7b0-ac5886c689a1 req-2d83045c-fd16-4120-943d-1d7ac4df9cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:43:09 np0005629333 nova_compute[244014]: 2026-02-25 12:43:09.818 244018 DEBUG nova.compute.manager [req-71a48f81-b42c-4aaa-a7b0-ac5886c689a1 req-2d83045c-fd16-4120-943d-1d7ac4df9cb7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Processing event network-vif-plugged-473bf89e-f488-42b9-b6d3-d736d2a61760 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:43:10 np0005629333 podman[340020]: 2026-02-25 12:43:10.014763829 +0000 UTC m=+0.082445958 container create 4315067433dd06d0fb6867f0b9642846435e8a80f6b55ddc75fa62a5759dfb07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:43:10 np0005629333 systemd[1]: Started libpod-conmon-4315067433dd06d0fb6867f0b9642846435e8a80f6b55ddc75fa62a5759dfb07.scope.
Feb 25 07:43:10 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:43:10 np0005629333 podman[340020]: 2026-02-25 12:43:09.965060836 +0000 UTC m=+0.032742985 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:43:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26baa473b5cd94140eb2f5364901afbf20c0a34db5790bb729eb08e64d9f641f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.112 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:10 np0005629333 podman[340020]: 2026-02-25 12:43:10.149542532 +0000 UTC m=+0.217224691 container init 4315067433dd06d0fb6867f0b9642846435e8a80f6b55ddc75fa62a5759dfb07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 25 07:43:10 np0005629333 podman[340020]: 2026-02-25 12:43:10.160177072 +0000 UTC m=+0.227859201 container start 4315067433dd06d0fb6867f0b9642846435e8a80f6b55ddc75fa62a5759dfb07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.172 244018 DEBUG nova.compute.manager [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.173 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023390.172215, cf7f6093-44a3-4e8f-8970-db25cf0b4ab9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.174 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] VM Started (Lifecycle Event)#033[00m
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.180 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.184 244018 INFO nova.virt.libvirt.driver [-] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Instance spawned successfully.#033[00m
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.184 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:43:10 np0005629333 neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009[340073]: [NOTICE]   (340082) : New worker (340084) forked
Feb 25 07:43:10 np0005629333 neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009[340073]: [NOTICE]   (340082) : Loading success.
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.201 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.208 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
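[editor] In the line above, the power states are nova.compute.power_state integers (DB power_state 0 = NOSTATE, VM power_state 1 = RUNNING). A sketch of the synchronization decision, matching the "pending task (spawning). Skip." outcome logged a few lines below; the logic is illustrative, not nova's full handler:

    # Constants match nova.compute.power_state.
    NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7

    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:           # e.g. 'spawning'
            return 'skip'                     # instance has a pending task
        if db_power_state != vm_power_state:
            return 'update-db'                # record what the hypervisor reports
        return 'in-sync'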
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.211 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.212 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.212 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.213 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.213 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.214 244018 DEBUG nova.virt.libvirt.driver [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
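[editor] The six "Found default for ..." lines above record bus/model defaults only for image properties the image metadata left unset, so the guest keeps stable device choices across later rebuilds and attaches. A compact sketch of that defaulting loop; the values are the ones reported above, the loop itself is illustrative:

    # Sketch: register defaults only for properties absent from image metadata.
    DEFAULTS = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }

    def register_undefined_instance_details(image_props: dict) -> dict:
        return {k: v for k, v in DEFAULTS.items() if k not in image_props}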
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.215 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 7126aff6-e4c8-45c9-a3f7-6c333946b022 in datapath e4445989-91e8-4869-98cb-32b4b81bb3da unbound from our chassis#033[00m
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.217 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e4445989-91e8-4869-98cb-32b4b81bb3da#033[00m
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.225 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a6859fcf-57ce-4f89-b681-659cf23d5891]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.226 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape4445989-91 in ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.228 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape4445989-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.229 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0c491bad-757b-43fd-bca5-363697357af1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.230 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[51e3c271-b9cf-4945-a80b-bb770a0f2c22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.240 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0b7597-4898-4062-a9ae-1155344dc1b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.242 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.242 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023390.172342, cf7f6093-44a3-4e8f-8970-db25cf0b4ab9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.242 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.260 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4d99574a-9f04-4380-b975-160d5b709ffa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.281 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[67299b00-259e-4031-a563-1317cd405022]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:10 np0005629333 systemd-udevd[339970]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.287 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6c3f20a8-73b3-4de1-a89b-d793e4e156dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:10 np0005629333 NetworkManager[49836]: <info>  [1772023390.2884] manager: (tape4445989-90): new Veth device (/org/freedesktop/NetworkManager/Devices/440)
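[editor] Provisioning the datapath creates a veth pair: the host side (tape4445989-90, the device NetworkManager just saw) stays in the root namespace, and the peer (tape4445989-91) is moved into the ovnmeta- namespace. A minimal pyroute2 sketch, assuming the namespace already exists and omitting MAC/address setup and error handling:

    # Sketch: create the pair, move the peer into the namespace, bring host up.
    from pyroute2 import IPRoute

    ns = 'ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da'
    with IPRoute() as ipr:
        ipr.link('add', ifname='tape4445989-90', kind='veth',
                 peer='tape4445989-91')
        host = ipr.link_lookup(ifname='tape4445989-90')[0]
        peer = ipr.link_lookup(ifname='tape4445989-91')[0]
        ipr.link('set', index=peer, net_ns_fd=ns)   # peer disappears from root ns
        ipr.link('set', index=host, state='up')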
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.311 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.314 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023390.178914, cf7f6093-44a3-4e8f-8970-db25cf0b4ab9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.314 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] VM Resumed (Lifecycle Event)#033[00m
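[editor] The Started -> Paused -> Resumed burst during spawn is normal: the libvirt driver typically creates the guest paused and resumes it once the network plumbing is confirmed, and each transition arrives through libvirt's lifecycle event stream. A minimal libvirt-python sketch of receiving that stream; nova runs its own event loop, this is only the bare equivalent:

    # Sketch: subscribe to domain lifecycle events and print the transitions.
    import libvirt

    def lifecycle_cb(conn, dom, event, detail, opaque):
        names = {libvirt.VIR_DOMAIN_EVENT_STARTED: 'Started',
                 libvirt.VIR_DOMAIN_EVENT_SUSPENDED: 'Paused',
                 libvirt.VIR_DOMAIN_EVENT_RESUMED: 'Resumed',
                 libvirt.VIR_DOMAIN_EVENT_STOPPED: 'Stopped'}
        print(dom.UUIDString(), '=>', names.get(event, event))

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.open('qemu:///system')
    conn.domainEventRegisterAny(None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                                lifecycle_cb, None)
    while True:                       # drive the default event loop
        libvirt.virEventRunDefaultImpl()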
Feb 25 07:43:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.321 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d5b17a7f-9ae3-4ca1-b592-a65d42f27306]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.323 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7dafc382-7c61-49a5-9b4e-f59e034925eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.331 244018 INFO nova.compute.manager [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Took 10.43 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.331 244018 DEBUG nova.compute.manager [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.337 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.338 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:43:10 np0005629333 NetworkManager[49836]: <info>  [1772023390.3491] device (tape4445989-90): carrier: link connected
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.353 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[716f2907-e8b0-4e3e-a7c2-3906622597f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.364 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.364 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4f809fa1-a215-4ba7-86f9-701a145db21c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4445989-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:7d:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 321], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535975, 'reachable_time': 27894, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340103, 'error': None, 'target': 'ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.376 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b001c383-ae36-434e-9be0-655d9f7de120]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febd:7d54'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535975, 'tstamp': 535975}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340104, 'error': None, 'target': 'ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.390 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1180ae9d-6deb-4cd0-8199-580b174beee3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4445989-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:7d:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 321], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535975, 'reachable_time': 27894, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 340105, 'error': None, 'target': 'ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
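[editor] The two oversized privsep replies above are raw pyroute2 netlink messages dumped from inside the namespace: RTM_NEWLINK for the veth's link state and RTM_NEWADDR for its fe80:: link-local address. A sketch of reading the same state back directly:

    # Sketch: inspect link and address state inside the ovnmeta namespace.
    from pyroute2 import NetNS

    ns = 'ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da'
    with NetNS(ns) as ipr:
        for link in ipr.get_links():                    # RTM_NEWLINK messages
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_OPERSTATE'))
        for addr in ipr.get_addr():                     # RTM_NEWADDR messages
            print(addr.get_attr('IFA_ADDRESS'), '/', addr['prefixlen'])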
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.413 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3e12aaac-3c5d-450c-8ba3-16fa25549d30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.418 244018 INFO nova.compute.manager [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Took 11.56 seconds to build instance.#033[00m
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.432 244018 DEBUG oslo_concurrency.lockutils [None req-471d9ef9-af4f-4197-ac32-4d12f466ba62 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.440 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e4afe856-eaa5-445a-aa1e-35ffead04239]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.441 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4445989-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.441 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.442 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4445989-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.444 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:10 np0005629333 NetworkManager[49836]: <info>  [1772023390.4450] manager: (tape4445989-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/441)
Feb 25 07:43:10 np0005629333 kernel: tape4445989-90: entered promiscuous mode
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.450 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape4445989-90, col_values=(('external_ids', {'iface-id': '4a3f602e-68fb-46ca-b4ed-cec3ad6d3ec5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.451 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:10Z|01069|binding|INFO|Releasing lport 4a3f602e-68fb-46ca-b4ed-cec3ad6d3ec5 from this chassis (sb_readonly=0)
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.454 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e4445989-91e8-4869-98cb-32b4b81bb3da.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e4445989-91e8-4869-98cb-32b4b81bb3da.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.455 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f47a8611-0d2f-467e-81b2-dada723b7491]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.456 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-e4445989-91e8-4869-98cb-32b4b81bb3da
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/e4445989-91e8-4869-98cb-32b4b81bb3da.pid.haproxy
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID e4445989-91e8-4869-98cb-32b4b81bb3da
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:43:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:10.457 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da', 'env', 'PROCESS_TAG=haproxy-e4445989-91e8-4869-98cb-32b4b81bb3da', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e4445989-91e8-4869-98cb-32b4b81bb3da.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 07:43:10 np0005629333 nova_compute[244014]: 2026-02-25 12:43:10.458 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:10 np0005629333 podman[340135]: 2026-02-25 12:43:10.815960168 +0000 UTC m=+0.052615485 container create cc6e724a15c0d9441d72982a086e52dd5e08a297e068d7f150d312adffe8d4b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:43:10 np0005629333 systemd[1]: Started libpod-conmon-cc6e724a15c0d9441d72982a086e52dd5e08a297e068d7f150d312adffe8d4b8.scope.
Feb 25 07:43:10 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:43:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/053075e96790c2347754d133c524b37fb39668500f025fbd5384e1ede98d4099/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:43:10 np0005629333 podman[340135]: 2026-02-25 12:43:10.877815134 +0000 UTC m=+0.114470431 container init cc6e724a15c0d9441d72982a086e52dd5e08a297e068d7f150d312adffe8d4b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 07:43:10 np0005629333 podman[340135]: 2026-02-25 12:43:10.882712202 +0000 UTC m=+0.119367479 container start cc6e724a15c0d9441d72982a086e52dd5e08a297e068d7f150d312adffe8d4b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:43:10 np0005629333 podman[340135]: 2026-02-25 12:43:10.790368686 +0000 UTC m=+0.027023993 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:43:10 np0005629333 neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da[340150]: [NOTICE]   (340154) : New worker (340156) forked
Feb 25 07:43:10 np0005629333 neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da[340150]: [NOTICE]   (340154) : Loading success.
Feb 25 07:43:11 np0005629333 nova_compute[244014]: 2026-02-25 12:43:11.880 244018 DEBUG nova.compute.manager [req-422797fd-721a-4205-86c3-f99bbfed3613 req-aa88766f-718a-4a2b-b895-1e001fdc8997 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-vif-plugged-7126aff6-e4c8-45c9-a3f7-6c333946b022 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:43:11 np0005629333 nova_compute[244014]: 2026-02-25 12:43:11.880 244018 DEBUG oslo_concurrency.lockutils [req-422797fd-721a-4205-86c3-f99bbfed3613 req-aa88766f-718a-4a2b-b895-1e001fdc8997 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:43:11 np0005629333 nova_compute[244014]: 2026-02-25 12:43:11.880 244018 DEBUG oslo_concurrency.lockutils [req-422797fd-721a-4205-86c3-f99bbfed3613 req-aa88766f-718a-4a2b-b895-1e001fdc8997 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:43:11 np0005629333 nova_compute[244014]: 2026-02-25 12:43:11.881 244018 DEBUG oslo_concurrency.lockutils [req-422797fd-721a-4205-86c3-f99bbfed3613 req-aa88766f-718a-4a2b-b895-1e001fdc8997 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:43:11 np0005629333 nova_compute[244014]: 2026-02-25 12:43:11.881 244018 DEBUG nova.compute.manager [req-422797fd-721a-4205-86c3-f99bbfed3613 req-aa88766f-718a-4a2b-b895-1e001fdc8997 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] No waiting events found dispatching network-vif-plugged-7126aff6-e4c8-45c9-a3f7-6c333946b022 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:43:11 np0005629333 nova_compute[244014]: 2026-02-25 12:43:11.881 244018 WARNING nova.compute.manager [req-422797fd-721a-4205-86c3-f99bbfed3613 req-aa88766f-718a-4a2b-b895-1e001fdc8997 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received unexpected event network-vif-plugged-7126aff6-e4c8-45c9-a3f7-6c333946b022 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:43:11 np0005629333 nova_compute[244014]: 2026-02-25 12:43:11.901 244018 DEBUG nova.compute.manager [req-e01ee847-b096-4e58-bc41-6f28c0d3dd6f req-c61bee64-0fb9-48d1-b310-85dc6fd7a3dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-vif-plugged-473bf89e-f488-42b9-b6d3-d736d2a61760 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:43:11 np0005629333 nova_compute[244014]: 2026-02-25 12:43:11.902 244018 DEBUG oslo_concurrency.lockutils [req-e01ee847-b096-4e58-bc41-6f28c0d3dd6f req-c61bee64-0fb9-48d1-b310-85dc6fd7a3dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:43:11 np0005629333 nova_compute[244014]: 2026-02-25 12:43:11.902 244018 DEBUG oslo_concurrency.lockutils [req-e01ee847-b096-4e58-bc41-6f28c0d3dd6f req-c61bee64-0fb9-48d1-b310-85dc6fd7a3dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:43:11 np0005629333 nova_compute[244014]: 2026-02-25 12:43:11.902 244018 DEBUG oslo_concurrency.lockutils [req-e01ee847-b096-4e58-bc41-6f28c0d3dd6f req-c61bee64-0fb9-48d1-b310-85dc6fd7a3dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:43:11 np0005629333 nova_compute[244014]: 2026-02-25 12:43:11.903 244018 DEBUG nova.compute.manager [req-e01ee847-b096-4e58-bc41-6f28c0d3dd6f req-c61bee64-0fb9-48d1-b310-85dc6fd7a3dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] No waiting events found dispatching network-vif-plugged-473bf89e-f488-42b9-b6d3-d736d2a61760 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:43:11 np0005629333 nova_compute[244014]: 2026-02-25 12:43:11.903 244018 WARNING nova.compute.manager [req-e01ee847-b096-4e58-bc41-6f28c0d3dd6f req-c61bee64-0fb9-48d1-b310-85dc6fd7a3dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received unexpected event network-vif-plugged-473bf89e-f488-42b9-b6d3-d736d2a61760 for instance with vm_state active and task_state None.#033[00m
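[editor] These WARNINGs are benign here: the repeated network-vif-plugged notifications landed after the spawn waiter had already been satisfied and the instance went active, so there is no consumer left to dispatch to. A standalone sketch of that dispatch decision; the registry and instance dict are illustrative:

    # Sketch: a late event with no registered waiter only produces a warning.
    def handle_external_event(waiters: dict, instance: dict, event: str) -> None:
        if waiters.pop(event, None) is None:
            print(f"WARNING: Received unexpected event {event} for instance "
                  f"with vm_state {instance['vm_state']} and "
                  f"task_state {instance['task_state']}.")

    handle_external_event(
        {}, {'vm_state': 'active', 'task_state': None},
        'network-vif-plugged-7126aff6-e4c8-45c9-a3f7-6c333946b022')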
Feb 25 07:43:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Feb 25 07:43:13 np0005629333 nova_compute[244014]: 2026-02-25 12:43:13.135 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:13 np0005629333 nova_compute[244014]: 2026-02-25 12:43:13.241 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023378.2400763, 11f1a7e0-6001-4367-8491-5b5508f56bdb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:43:13 np0005629333 nova_compute[244014]: 2026-02-25 12:43:13.242 244018 INFO nova.compute.manager [-] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:43:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:43:13 np0005629333 nova_compute[244014]: 2026-02-25 12:43:13.722 244018 DEBUG nova.compute.manager [None req-5c95fc4a-77b5-47c6-b6b8-bf7ee5c49221 - - - - - -] [instance: 11f1a7e0-6001-4367-8491-5b5508f56bdb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:43:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 440 KiB/s wr, 95 op/s
Feb 25 07:43:15 np0005629333 nova_compute[244014]: 2026-02-25 12:43:15.106 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:16 np0005629333 nova_compute[244014]: 2026-02-25 12:43:16.152 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:16 np0005629333 NetworkManager[49836]: <info>  [1772023396.1526] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/442)
Feb 25 07:43:16 np0005629333 NetworkManager[49836]: <info>  [1772023396.1533] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/443)
Feb 25 07:43:16 np0005629333 nova_compute[244014]: 2026-02-25 12:43:16.201 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:16Z|01070|binding|INFO|Releasing lport a592354c-38c0-41be-9126-87d6dec6c687 from this chassis (sb_readonly=0)
Feb 25 07:43:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:16Z|01071|binding|INFO|Releasing lport 4a3f602e-68fb-46ca-b4ed-cec3ad6d3ec5 from this chassis (sb_readonly=0)
Feb 25 07:43:16 np0005629333 nova_compute[244014]: 2026-02-25 12:43:16.212 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:43:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 07:43:16 np0005629333 nova_compute[244014]: 2026-02-25 12:43:16.441 244018 DEBUG nova.compute.manager [req-9e982954-ce69-472c-bfd7-8444289b199f req-592fa26f-e893-4c47-882e-47f94cdc0508 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-changed-473bf89e-f488-42b9-b6d3-d736d2a61760 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:43:16 np0005629333 nova_compute[244014]: 2026-02-25 12:43:16.442 244018 DEBUG nova.compute.manager [req-9e982954-ce69-472c-bfd7-8444289b199f req-592fa26f-e893-4c47-882e-47f94cdc0508 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Refreshing instance network info cache due to event network-changed-473bf89e-f488-42b9-b6d3-d736d2a61760. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:43:16 np0005629333 nova_compute[244014]: 2026-02-25 12:43:16.442 244018 DEBUG oslo_concurrency.lockutils [req-9e982954-ce69-472c-bfd7-8444289b199f req-592fa26f-e893-4c47-882e-47f94cdc0508 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:43:16 np0005629333 nova_compute[244014]: 2026-02-25 12:43:16.443 244018 DEBUG oslo_concurrency.lockutils [req-9e982954-ce69-472c-bfd7-8444289b199f req-592fa26f-e893-4c47-882e-47f94cdc0508 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:43:16 np0005629333 nova_compute[244014]: 2026-02-25 12:43:16.443 244018 DEBUG nova.network.neutron [req-9e982954-ce69-472c-bfd7-8444289b199f req-592fa26f-e893-4c47-882e-47f94cdc0508 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Refreshing network info cache for port 473bf89e-f488-42b9-b6d3-d736d2a61760 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:43:17 np0005629333 nova_compute[244014]: 2026-02-25 12:43:17.951 244018 DEBUG nova.network.neutron [req-9e982954-ce69-472c-bfd7-8444289b199f req-592fa26f-e893-4c47-882e-47f94cdc0508 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Updated VIF entry in instance network info cache for port 473bf89e-f488-42b9-b6d3-d736d2a61760. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 07:43:17 np0005629333 nova_compute[244014]: 2026-02-25 12:43:17.952 244018 DEBUG nova.network.neutron [req-9e982954-ce69-472c-bfd7-8444289b199f req-592fa26f-e893-4c47-882e-47f94cdc0508 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Updating instance_info_cache with network_info: [{"id": "473bf89e-f488-42b9-b6d3-d736d2a61760", "address": "fa:16:3e:7a:8c:23", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap473bf89e-f4", "ovs_interfaceid": "473bf89e-f488-42b9-b6d3-d736d2a61760", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "address": "fa:16:3e:f5:63:85", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:6385", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7126aff6-e4", "ovs_interfaceid": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:43:17 np0005629333 nova_compute[244014]: 2026-02-25 12:43:17.982 244018 DEBUG oslo_concurrency.lockutils [req-9e982954-ce69-472c-bfd7-8444289b199f req-592fa26f-e893-4c47-882e-47f94cdc0508 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:43:18 np0005629333 nova_compute[244014]: 2026-02-25 12:43:18.138 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:43:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:43:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 07:43:18 np0005629333 nova_compute[244014]: 2026-02-25 12:43:18.570 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:43:18 np0005629333 nova_compute[244014]: 2026-02-25 12:43:18.571 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
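The Acquiring/acquired/released triples that recur throughout this log come from oslo.concurrency's lockutils, which nova uses to serialize build operations per instance UUID and to guard the resource tracker. A minimal sketch of the same API (the decorated function and claim_resources are hypothetical stand-ins):

    from oslo_concurrency import lockutils

    # Decorator form, as used for _locked_do_build_and_run_instance:
    @lockutils.synchronized('b5941b54-9cd2-465c-89c0-3cf87ebed83e')
    def do_build_and_run_instance():
        ...  # hypothetical stand-in for the build path

    def claim_resources():
        ...  # hypothetical stand-in for ResourceTracker.instance_claim

    # Context-manager form; passing external=True would add a file
    # lock for cross-process exclusion as well.
    with lockutils.lock('compute_resources'):
        claim_resources()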
Feb 25 07:43:18 np0005629333 nova_compute[244014]: 2026-02-25 12:43:18.593 244018 DEBUG nova.compute.manager [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:43:18 np0005629333 nova_compute[244014]: 2026-02-25 12:43:18.671 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:43:18 np0005629333 nova_compute[244014]: 2026-02-25 12:43:18.672 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:43:18 np0005629333 nova_compute[244014]: 2026-02-25 12:43:18.682 244018 DEBUG nova.virt.hardware [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:43:18 np0005629333 nova_compute[244014]: 2026-02-25 12:43:18.683 244018 INFO nova.compute.claims [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:43:18 np0005629333 nova_compute[244014]: 2026-02-25 12:43:18.866 244018 DEBUG oslo_concurrency.processutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:43:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:43:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4067404949' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:43:19 np0005629333 nova_compute[244014]: 2026-02-25 12:43:19.394 244018 DEBUG oslo_concurrency.processutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
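Each Running cmd / CMD ... returned pair in this log is oslo.concurrency's processutils.execute; here nova's rbd_utils shells out to ceph df to size the Ceph pools backing instance disks. A sketch of the equivalent call, assuming the client.openstack keyring referenced by /etc/ceph/ceph.conf is readable:

    import json
    from oslo_concurrency import processutils

    # Returns (stdout, stderr); raises ProcessExecutionError on a
    # non-zero exit code, which is why the log records "returned: 0".
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    pools = json.loads(out)['pools']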
Feb 25 07:43:19 np0005629333 nova_compute[244014]: 2026-02-25 12:43:19.402 244018 DEBUG nova.compute.provider_tree [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:43:19 np0005629333 nova_compute[244014]: 2026-02-25 12:43:19.420 244018 DEBUG nova.scheduler.client.report [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
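The inventory dict above is what the Placement service schedules against; usable capacity per resource class is (total - reserved) * allocation_ratio. Working that through for this host gives 32 schedulable VCPUs, 7167 MB of RAM, and 52.2 GB of disk:

    inv = {'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
           'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
           'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9}}

    for rc, i in inv.items():
        # Placement capacity check: used + requested must stay below this.
        print(rc, (i['total'] - i['reserved']) * i['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2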
Feb 25 07:43:19 np0005629333 nova_compute[244014]: 2026-02-25 12:43:19.448 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:43:19 np0005629333 nova_compute[244014]: 2026-02-25 12:43:19.450 244018 DEBUG nova.compute.manager [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:43:19 np0005629333 nova_compute[244014]: 2026-02-25 12:43:19.517 244018 DEBUG nova.compute.manager [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:43:19 np0005629333 nova_compute[244014]: 2026-02-25 12:43:19.518 244018 DEBUG nova.network.neutron [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:43:19 np0005629333 nova_compute[244014]: 2026-02-25 12:43:19.542 244018 INFO nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:43:19 np0005629333 nova_compute[244014]: 2026-02-25 12:43:19.563 244018 DEBUG nova.compute.manager [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:43:19 np0005629333 nova_compute[244014]: 2026-02-25 12:43:19.670 244018 DEBUG nova.compute.manager [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:43:19 np0005629333 nova_compute[244014]: 2026-02-25 12:43:19.671 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:43:19 np0005629333 nova_compute[244014]: 2026-02-25 12:43:19.672 244018 INFO nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Creating image(s)
Feb 25 07:43:19 np0005629333 nova_compute[244014]: 2026-02-25 12:43:19.703 244018 DEBUG nova.storage.rbd_utils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:43:19 np0005629333 nova_compute[244014]: 2026-02-25 12:43:19.730 244018 DEBUG nova.storage.rbd_utils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:43:19 np0005629333 nova_compute[244014]: 2026-02-25 12:43:19.754 244018 DEBUG nova.storage.rbd_utils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:43:19 np0005629333 nova_compute[244014]: 2026-02-25 12:43:19.759 244018 DEBUG oslo_concurrency.processutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:43:19 np0005629333 nova_compute[244014]: 2026-02-25 12:43:19.790 244018 DEBUG nova.policy [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb37a481eb114226822ed8b2ef4f9a89', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:43:19 np0005629333 nova_compute[244014]: 2026-02-25 12:43:19.824 244018 DEBUG oslo_concurrency.processutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
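The qemu-img probe above is wrapped by oslo_concurrency.prlimit so a corrupt or malicious image cannot hang or balloon the compute agent: --as=1073741824 caps the child's address space at 1 GiB and --cpu=30 caps its CPU time at 30 seconds. nova builds that call roughly as follows (a sketch of the processutils API; the limits match the logged flags):

    from oslo_concurrency import processutils

    # Corresponds to --as=1073741824 --cpu=30 in the logged command.
    QEMU_IMG_LIMITS = processutils.ProcessLimits(
        cpu_time=30, address_space=1024 * 1024 * 1024)

    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=QEMU_IMG_LIMITS)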
Feb 25 07:43:19 np0005629333 nova_compute[244014]: 2026-02-25 12:43:19.824 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:43:19 np0005629333 nova_compute[244014]: 2026-02-25 12:43:19.825 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:43:19 np0005629333 nova_compute[244014]: 2026-02-25 12:43:19.826 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:43:19 np0005629333 nova_compute[244014]: 2026-02-25 12:43:19.850 244018 DEBUG nova.storage.rbd_utils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:43:19 np0005629333 nova_compute[244014]: 2026-02-25 12:43:19.857 244018 DEBUG oslo_concurrency.processutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:43:20 np0005629333 nova_compute[244014]: 2026-02-25 12:43:20.108 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:43:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 07:43:20 np0005629333 nova_compute[244014]: 2026-02-25 12:43:20.651 244018 DEBUG oslo_concurrency.processutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.794s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:43:20 np0005629333 nova_compute[244014]: 2026-02-25 12:43:20.721 244018 DEBUG nova.storage.rbd_utils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] resizing rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
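After the rbd import succeeds, nova resizes the image up to the flavor's 1 GiB root disk (m1.nano, root_gb=1, per the flavor dump later in this log). The same resize through the python-rbd bindings would look roughly like this (a sketch; assumes the rados/rbd bindings and the client.openstack keyring are available on the host):

    import rados
    import rbd

    # Connect as the same Ceph identity the logged CLI calls use.
    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx,
                           'b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk') as img:
                img.resize(1 * 1024 ** 3)   # 1073741824 bytes, as logged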
Feb 25 07:43:20 np0005629333 nova_compute[244014]: 2026-02-25 12:43:20.893 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:43:20 np0005629333 nova_compute[244014]: 2026-02-25 12:43:20.894 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 07:43:20 np0005629333 nova_compute[244014]: 2026-02-25 12:43:20.894 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 07:43:20 np0005629333 nova_compute[244014]: 2026-02-25 12:43:20.917 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 25 07:43:20 np0005629333 nova_compute[244014]: 2026-02-25 12:43:20.981 244018 DEBUG nova.network.neutron [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Successfully created port: 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:43:21 np0005629333 nova_compute[244014]: 2026-02-25 12:43:21.169 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:43:21 np0005629333 nova_compute[244014]: 2026-02-25 12:43:21.169 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:43:21 np0005629333 nova_compute[244014]: 2026-02-25 12:43:21.170 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 07:43:21 np0005629333 nova_compute[244014]: 2026-02-25 12:43:21.170 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid cf7f6093-44a3-4e8f-8970-db25cf0b4ab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:43:21 np0005629333 nova_compute[244014]: 2026-02-25 12:43:21.287 244018 DEBUG nova.objects.instance [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'migration_context' on Instance uuid b5941b54-9cd2-465c-89c0-3cf87ebed83e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:43:21 np0005629333 nova_compute[244014]: 2026-02-25 12:43:21.301 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:43:21 np0005629333 nova_compute[244014]: 2026-02-25 12:43:21.302 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Ensure instance console log exists: /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:43:21 np0005629333 nova_compute[244014]: 2026-02-25 12:43:21.302 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:43:21 np0005629333 nova_compute[244014]: 2026-02-25 12:43:21.303 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:43:21 np0005629333 nova_compute[244014]: 2026-02-25 12:43:21.303 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:43:21 np0005629333 nova_compute[244014]: 2026-02-25 12:43:21.725 244018 DEBUG nova.network.neutron [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Successfully updated port: 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:43:21 np0005629333 nova_compute[244014]: 2026-02-25 12:43:21.743 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "refresh_cache-b5941b54-9cd2-465c-89c0-3cf87ebed83e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:43:21 np0005629333 nova_compute[244014]: 2026-02-25 12:43:21.743 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquired lock "refresh_cache-b5941b54-9cd2-465c-89c0-3cf87ebed83e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:43:21 np0005629333 nova_compute[244014]: 2026-02-25 12:43:21.744 244018 DEBUG nova.network.neutron [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:43:21 np0005629333 nova_compute[244014]: 2026-02-25 12:43:21.907 244018 DEBUG nova.compute.manager [req-eb6c005c-7c0d-490c-8918-874adb6f468e req-5a3c8d46-23d9-4841-a7b2-e62af6a3f5c8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received event network-changed-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:43:21 np0005629333 nova_compute[244014]: 2026-02-25 12:43:21.907 244018 DEBUG nova.compute.manager [req-eb6c005c-7c0d-490c-8918-874adb6f468e req-5a3c8d46-23d9-4841-a7b2-e62af6a3f5c8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Refreshing instance network info cache due to event network-changed-43ea1958-7fd9-47b6-be81-3eeb1b3801a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:43:21 np0005629333 nova_compute[244014]: 2026-02-25 12:43:21.907 244018 DEBUG oslo_concurrency.lockutils [req-eb6c005c-7c0d-490c-8918-874adb6f468e req-5a3c8d46-23d9-4841-a7b2-e62af6a3f5c8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-b5941b54-9cd2-465c-89c0-3cf87ebed83e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:43:22 np0005629333 nova_compute[244014]: 2026-02-25 12:43:22.139 244018 DEBUG nova.network.neutron [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:43:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:22Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7a:8c:23 10.100.0.8
Feb 25 07:43:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:22Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7a:8c:23 10.100.0.8
Feb 25 07:43:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 241 MiB data, 881 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 113 op/s
Feb 25 07:43:22 np0005629333 podman[340355]: 2026-02-25 12:43:22.767368662 +0000 UTC m=+0.103101821 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 07:43:23 np0005629333 nova_compute[244014]: 2026-02-25 12:43:23.142 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:43:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:43:23 np0005629333 nova_compute[244014]: 2026-02-25 12:43:23.335 244018 DEBUG nova.network.neutron [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Updating instance_info_cache with network_info: [{"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:43:23 np0005629333 nova_compute[244014]: 2026-02-25 12:43:23.354 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Releasing lock "refresh_cache-b5941b54-9cd2-465c-89c0-3cf87ebed83e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:43:23 np0005629333 nova_compute[244014]: 2026-02-25 12:43:23.355 244018 DEBUG nova.compute.manager [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Instance network_info: |[{"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 07:43:23 np0005629333 nova_compute[244014]: 2026-02-25 12:43:23.356 244018 DEBUG oslo_concurrency.lockutils [req-eb6c005c-7c0d-490c-8918-874adb6f468e req-5a3c8d46-23d9-4841-a7b2-e62af6a3f5c8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-b5941b54-9cd2-465c-89c0-3cf87ebed83e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:43:23 np0005629333 nova_compute[244014]: 2026-02-25 12:43:23.356 244018 DEBUG nova.network.neutron [req-eb6c005c-7c0d-490c-8918-874adb6f468e req-5a3c8d46-23d9-4841-a7b2-e62af6a3f5c8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Refreshing network info cache for port 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:43:23 np0005629333 nova_compute[244014]: 2026-02-25 12:43:23.361 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Start _get_guest_xml network_info=[{"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 07:43:23 np0005629333 nova_compute[244014]: 2026-02-25 12:43:23.368 244018 WARNING nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:43:23 np0005629333 nova_compute[244014]: 2026-02-25 12:43:23.374 244018 DEBUG nova.virt.libvirt.host [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:43:23 np0005629333 nova_compute[244014]: 2026-02-25 12:43:23.375 244018 DEBUG nova.virt.libvirt.host [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:43:23 np0005629333 nova_compute[244014]: 2026-02-25 12:43:23.388 244018 DEBUG nova.virt.libvirt.host [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 07:43:23 np0005629333 nova_compute[244014]: 2026-02-25 12:43:23.389 244018 DEBUG nova.virt.libvirt.host [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
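The two searches above are nova probing which cgroup hierarchy can enforce CPU weights for the guest: the v1 cpu controller is absent because this host runs the unified (v2) hierarchy, where available controllers are advertised in cgroup.controllers. The v2 probe amounts to something like this (a sketch; the path is the standard unified-hierarchy mount, assumed here):

    def has_cgroupsv2_cpu_controller(
            path='/sys/fs/cgroup/cgroup.controllers'):
        # On a cgroup-v2 host this file lists the delegatable
        # controllers, e.g. "cpuset cpu io memory pids".
        try:
            with open(path) as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            # No unified hierarchy mounted at all.
            return False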
Feb 25 07:43:23 np0005629333 nova_compute[244014]: 2026-02-25 12:43:23.390 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:43:23 np0005629333 nova_compute[244014]: 2026-02-25 12:43:23.391 244018 DEBUG nova.virt.hardware [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 07:43:23 np0005629333 nova_compute[244014]: 2026-02-25 12:43:23.392 244018 DEBUG nova.virt.hardware [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 07:43:23 np0005629333 nova_compute[244014]: 2026-02-25 12:43:23.393 244018 DEBUG nova.virt.hardware [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 07:43:23 np0005629333 nova_compute[244014]: 2026-02-25 12:43:23.393 244018 DEBUG nova.virt.hardware [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 07:43:23 np0005629333 nova_compute[244014]: 2026-02-25 12:43:23.394 244018 DEBUG nova.virt.hardware [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 07:43:23 np0005629333 nova_compute[244014]: 2026-02-25 12:43:23.394 244018 DEBUG nova.virt.hardware [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 07:43:23 np0005629333 nova_compute[244014]: 2026-02-25 12:43:23.395 244018 DEBUG nova.virt.hardware [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 07:43:23 np0005629333 nova_compute[244014]: 2026-02-25 12:43:23.395 244018 DEBUG nova.virt.hardware [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 07:43:23 np0005629333 nova_compute[244014]: 2026-02-25 12:43:23.396 244018 DEBUG nova.virt.hardware [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 07:43:23 np0005629333 nova_compute[244014]: 2026-02-25 12:43:23.397 244018 DEBUG nova.virt.hardware [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 07:43:23 np0005629333 nova_compute[244014]: 2026-02-25 12:43:23.397 244018 DEBUG nova.virt.hardware [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
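The topology walk above reduces to enumerating every (sockets, cores, threads) triple whose product equals the vCPU count, subject to the 65536-per-dimension limits; with a single vCPU the only solution is 1:1:1, which is why exactly one topology is reported. An equivalent enumeration (illustrative, not nova's exact algorithm):

    def possible_topologies(vcpus, max_each=65536):
        # No dimension can exceed the vCPU count, so bound the search.
        bound = min(vcpus, max_each)
        return [(s, c, t)
                for s in range(1, bound + 1)
                for c in range(1, bound + 1)
                for t in range(1, bound + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))   # [(1, 1, 1)]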
Feb 25 07:43:23 np0005629333 nova_compute[244014]: 2026-02-25 12:43:23.404 244018 DEBUG oslo_concurrency.processutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:43:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:43:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2005405089' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:43:23 np0005629333 nova_compute[244014]: 2026-02-25 12:43:23.942 244018 DEBUG oslo_concurrency.processutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:43:23 np0005629333 nova_compute[244014]: 2026-02-25 12:43:23.977 244018 DEBUG nova.storage.rbd_utils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:43:23 np0005629333 nova_compute[244014]: 2026-02-25 12:43:23.982 244018 DEBUG oslo_concurrency.processutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:43:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 269 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 832 KiB/s rd, 3.9 MiB/s wr, 99 op/s
Feb 25 07:43:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:43:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1958499659' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:43:24 np0005629333 nova_compute[244014]: 2026-02-25 12:43:24.514 244018 DEBUG oslo_concurrency.processutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:43:24 np0005629333 nova_compute[244014]: 2026-02-25 12:43:24.516 244018 DEBUG nova.virt.libvirt.vif [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:43:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1420097374',display_name='tempest-TestNetworkAdvancedServerOps-server-1420097374',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1420097374',id=110,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLB4lCGz708UDYBfDiKGSO/K1vvU3TZkSpUC/m/pNqcj9p06BKZCsaZ+HTq1tFiTei87P3smYtAHKXB341loC6n/SM62zSw05o3YeNjhjC3ZsqOrkJUnRdjhvIYhFIjYsQ==',key_name='tempest-TestNetworkAdvancedServerOps-1361605021',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-ivr9l6e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:43:19Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=b5941b54-9cd2-465c-89c0-3cf87ebed83e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:43:24 np0005629333 nova_compute[244014]: 2026-02-25 12:43:24.516 244018 DEBUG nova.network.os_vif_util [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:43:24 np0005629333 nova_compute[244014]: 2026-02-25 12:43:24.517 244018 DEBUG nova.network.os_vif_util [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:93:a9,bridge_name='br-int',has_traffic_filtering=True,id=43ea1958-7fd9-47b6-be81-3eeb1b3801a0,network=Network(f8edf066-3c6a-45fb-bc20-36dc74f8aee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ea1958-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
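
nova_to_osvif_vif translates the Neutron-style port dict shown above into the os-vif VIFOpenVSwitch object that the `ovs` plugin later plugs. An untested sketch of building the equivalent object with the os-vif library, reusing identifiers from these records; os_vif.plug() needs root/privsep to touch OVS, and the exact field set may vary by os-vif release:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # load the registered plugins (ovs, linux_bridge, ...)

    port = vif.VIFOpenVSwitch(
        id='43ea1958-7fd9-47b6-be81-3eeb1b3801a0',
        address='fa:16:3e:3e:93:a9',
        vif_name='tap43ea1958-7f',
        bridge_name='br-int',
        plugin='ovs',
        network=network.Network(id='f8edf066-3c6a-45fb-bc20-36dc74f8aee6'),
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='43ea1958-7fd9-47b6-be81-3eeb1b3801a0'))
    inst = instance_info.InstanceInfo(
        uuid='b5941b54-9cd2-465c-89c0-3cf87ebed83e',
        name='instance-0000006e')
    os_vif.plug(port, inst)  # the "Plugging vif ..." / "Successfully plugged" step below
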
Feb 25 07:43:24 np0005629333 nova_compute[244014]: 2026-02-25 12:43:24.518 244018 DEBUG nova.objects.instance [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'pci_devices' on Instance uuid b5941b54-9cd2-465c-89c0-3cf87ebed83e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:43:24 np0005629333 nova_compute[244014]: 2026-02-25 12:43:24.534 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:43:24 np0005629333 nova_compute[244014]:  <uuid>b5941b54-9cd2-465c-89c0-3cf87ebed83e</uuid>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:  <name>instance-0000006e</name>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1420097374</nova:name>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:43:23</nova:creationTime>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:43:24 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:        <nova:user uuid="fb37a481eb114226822ed8b2ef4f9a89">tempest-TestNetworkAdvancedServerOps-1424801157-project-member</nova:user>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:        <nova:project uuid="6821a6e7edd54dbe97920b79aae8f54c">tempest-TestNetworkAdvancedServerOps-1424801157</nova:project>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:        <nova:port uuid="43ea1958-7fd9-47b6-be81-3eeb1b3801a0">
Feb 25 07:43:24 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <entry name="serial">b5941b54-9cd2-465c-89c0-3cf87ebed83e</entry>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <entry name="uuid">b5941b54-9cd2-465c-89c0-3cf87ebed83e</entry>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk">
Feb 25 07:43:24 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:43:24 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk.config">
Feb 25 07:43:24 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:43:24 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:3e:93:a9"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <target dev="tap43ea1958-7f"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/console.log" append="off"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:43:24 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:43:24 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:43:24 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:43:24 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
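
The XML dumped between "End _get_guest_xml" and the trailer above is the domain definition nova hands to libvirt. Once the guest is defined it can be read back with libvirt-python; note that libvirt fills in defaults (PCI addresses, controller indexes) that the nova-generated document omits. A short sketch, assuming access to the system libvirt socket:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-0000006e')
    print(dom.XMLDesc(0))  # the live, default-expanded version of the XML logged above
    conn.close()
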
Feb 25 07:43:24 np0005629333 nova_compute[244014]: 2026-02-25 12:43:24.535 244018 DEBUG nova.compute.manager [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Preparing to wait for external event network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:43:24 np0005629333 nova_compute[244014]: 2026-02-25 12:43:24.536 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:43:24 np0005629333 nova_compute[244014]: 2026-02-25 12:43:24.536 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:43:24 np0005629333 nova_compute[244014]: 2026-02-25 12:43:24.537 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
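
The three "-events" lock lines are oslo.concurrency's lockutils reporting acquisition, wait time, and hold time around _create_or_get_event. A sketch of the equivalent shape with lockutils' context-manager API (nova itself uses the helper shown in the log; this is just the same pattern):

    from oslo_concurrency import lockutils

    # lockutils emits the same "Acquiring" / "acquired :: waited" /
    # "released :: held" trio seen above around the with-block body.
    with lockutils.lock('b5941b54-9cd2-465c-89c0-3cf87ebed83e-events'):
        pass  # the _create_or_get_event() critical section would run here
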
Feb 25 07:43:24 np0005629333 nova_compute[244014]: 2026-02-25 12:43:24.537 244018 DEBUG nova.virt.libvirt.vif [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:43:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1420097374',display_name='tempest-TestNetworkAdvancedServerOps-server-1420097374',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1420097374',id=110,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLB4lCGz708UDYBfDiKGSO/K1vvU3TZkSpUC/m/pNqcj9p06BKZCsaZ+HTq1tFiTei87P3smYtAHKXB341loC6n/SM62zSw05o3YeNjhjC3ZsqOrkJUnRdjhvIYhFIjYsQ==',key_name='tempest-TestNetworkAdvancedServerOps-1361605021',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-ivr9l6e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:43:19Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=b5941b54-9cd2-465c-89c0-3cf87ebed83e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:43:24 np0005629333 nova_compute[244014]: 2026-02-25 12:43:24.538 244018 DEBUG nova.network.os_vif_util [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:43:24 np0005629333 nova_compute[244014]: 2026-02-25 12:43:24.539 244018 DEBUG nova.network.os_vif_util [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:93:a9,bridge_name='br-int',has_traffic_filtering=True,id=43ea1958-7fd9-47b6-be81-3eeb1b3801a0,network=Network(f8edf066-3c6a-45fb-bc20-36dc74f8aee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ea1958-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:43:24 np0005629333 nova_compute[244014]: 2026-02-25 12:43:24.539 244018 DEBUG os_vif [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:93:a9,bridge_name='br-int',has_traffic_filtering=True,id=43ea1958-7fd9-47b6-be81-3eeb1b3801a0,network=Network(f8edf066-3c6a-45fb-bc20-36dc74f8aee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ea1958-7f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:43:24 np0005629333 nova_compute[244014]: 2026-02-25 12:43:24.539 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:24 np0005629333 nova_compute[244014]: 2026-02-25 12:43:24.540 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:24 np0005629333 nova_compute[244014]: 2026-02-25 12:43:24.540 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:43:24 np0005629333 nova_compute[244014]: 2026-02-25 12:43:24.545 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:24 np0005629333 nova_compute[244014]: 2026-02-25 12:43:24.546 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43ea1958-7f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:24 np0005629333 nova_compute[244014]: 2026-02-25 12:43:24.546 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap43ea1958-7f, col_values=(('external_ids', {'iface-id': '43ea1958-7fd9-47b6-be81-3eeb1b3801a0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3e:93:a9', 'vm-uuid': 'b5941b54-9cd2-465c-89c0-3cf87ebed83e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
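
AddBridgeCommand, AddPortCommand and DbSetCommand are ovsdbapp's typed wrappers over OVSDB transactions; "Transaction caused no change" above simply means br-int already existed with the requested datapath_type. The same end state can be reproduced from a shell through ovs-vsctl; a sketch via subprocess with names taken from the log (chaining commands after one ovs-vsctl keeps them in a single OVSDB transaction, like ovsdbapp does):

    import subprocess

    subprocess.run([
        'ovs-vsctl',
        '--', '--may-exist', 'add-br', 'br-int',                      # AddBridgeCommand
        '--', 'set', 'bridge', 'br-int', 'datapath_type=system',
        '--', '--may-exist', 'add-port', 'br-int', 'tap43ea1958-7f',  # AddPortCommand
        '--', 'set', 'Interface', 'tap43ea1958-7f',                   # DbSetCommand
        'external_ids:iface-id=43ea1958-7fd9-47b6-be81-3eeb1b3801a0',
        'external_ids:iface-status=active',
        'external_ids:attached-mac=fa:16:3e:3e:93:a9',
        'external_ids:vm-uuid=b5941b54-9cd2-465c-89c0-3cf87ebed83e',
    ], check=True)

The iface-id external_id is what lets ovn-controller match this OVS interface to the Neutron port and claim it a little further down.
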
Feb 25 07:43:24 np0005629333 nova_compute[244014]: 2026-02-25 12:43:24.548 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:24 np0005629333 NetworkManager[49836]: <info>  [1772023404.5494] manager: (tap43ea1958-7f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/444)
Feb 25 07:43:24 np0005629333 nova_compute[244014]: 2026-02-25 12:43:24.549 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:43:24 np0005629333 nova_compute[244014]: 2026-02-25 12:43:24.554 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:24 np0005629333 nova_compute[244014]: 2026-02-25 12:43:24.555 244018 INFO os_vif [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:93:a9,bridge_name='br-int',has_traffic_filtering=True,id=43ea1958-7fd9-47b6-be81-3eeb1b3801a0,network=Network(f8edf066-3c6a-45fb-bc20-36dc74f8aee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ea1958-7f')#033[00m
Feb 25 07:43:24 np0005629333 nova_compute[244014]: 2026-02-25 12:43:24.616 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:43:24 np0005629333 nova_compute[244014]: 2026-02-25 12:43:24.617 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:43:24 np0005629333 nova_compute[244014]: 2026-02-25 12:43:24.617 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No VIF found with MAC fa:16:3e:3e:93:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:43:24 np0005629333 nova_compute[244014]: 2026-02-25 12:43:24.619 244018 INFO nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Using config drive#033[00m
Feb 25 07:43:24 np0005629333 nova_compute[244014]: 2026-02-25 12:43:24.647 244018 DEBUG nova.storage.rbd_utils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:43:24 np0005629333 podman[340440]: 2026-02-25 12:43:24.663086959 +0000 UTC m=+0.075002858 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller)
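
The podman record above is a periodic healthcheck event for the ovn_controller container (health_status=healthy, failing streak 0); the configured test is the /openstack/healthcheck script mounted into the container. The same check can be run on demand; a sketch:

    import subprocess

    # Executes the container's configured healthcheck once;
    # exit status 0 means healthy, non-zero counts toward the failing streak.
    subprocess.run(['podman', 'healthcheck', 'run', 'ovn_controller'], check=True)
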
Feb 25 07:43:25 np0005629333 nova_compute[244014]: 2026-02-25 12:43:25.110 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:25 np0005629333 nova_compute[244014]: 2026-02-25 12:43:25.252 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Updating instance_info_cache with network_info: [{"id": "473bf89e-f488-42b9-b6d3-d736d2a61760", "address": "fa:16:3e:7a:8c:23", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap473bf89e-f4", "ovs_interfaceid": "473bf89e-f488-42b9-b6d3-d736d2a61760", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "address": "fa:16:3e:f5:63:85", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:6385", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7126aff6-e4", "ovs_interfaceid": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:43:25 np0005629333 nova_compute[244014]: 2026-02-25 12:43:25.278 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:43:25 np0005629333 nova_compute[244014]: 2026-02-25 12:43:25.278 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 25 07:43:25 np0005629333 nova_compute[244014]: 2026-02-25 12:43:25.278 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:43:25 np0005629333 nova_compute[244014]: 2026-02-25 12:43:25.279 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:43:25 np0005629333 nova_compute[244014]: 2026-02-25 12:43:25.706 244018 DEBUG nova.network.neutron [req-eb6c005c-7c0d-490c-8918-874adb6f468e req-5a3c8d46-23d9-4841-a7b2-e62af6a3f5c8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Updated VIF entry in instance network info cache for port 43ea1958-7fd9-47b6-be81-3eeb1b3801a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:43:25 np0005629333 nova_compute[244014]: 2026-02-25 12:43:25.707 244018 DEBUG nova.network.neutron [req-eb6c005c-7c0d-490c-8918-874adb6f468e req-5a3c8d46-23d9-4841-a7b2-e62af6a3f5c8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Updating instance_info_cache with network_info: [{"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:43:25 np0005629333 nova_compute[244014]: 2026-02-25 12:43:25.719 244018 INFO nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Creating config drive at /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/disk.config#033[00m
Feb 25 07:43:25 np0005629333 nova_compute[244014]: 2026-02-25 12:43:25.725 244018 DEBUG oslo_concurrency.processutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxrp36wmi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:43:25 np0005629333 nova_compute[244014]: 2026-02-25 12:43:25.762 244018 DEBUG oslo_concurrency.lockutils [req-eb6c005c-7c0d-490c-8918-874adb6f468e req-5a3c8d46-23d9-4841-a7b2-e62af6a3f5c8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-b5941b54-9cd2-465c-89c0-3cf87ebed83e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:43:25 np0005629333 nova_compute[244014]: 2026-02-25 12:43:25.873 244018 DEBUG oslo_concurrency.processutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxrp36wmi" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
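
The config drive is a plain ISO 9660 image with volume label config-2, built with mkisofs from a temporary staging directory (the /tmp/tmpxrp36wmi above) holding the metadata files. A sketch of the same invocation; ./cd-staging is a hypothetical stand-in for nova's tmpdir:

    import subprocess

    subprocess.run([
        'mkisofs', '-o', 'disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        './cd-staging',  # hypothetical dir laid out like openstack/latest/meta_data.json
    ], check=True)
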
Feb 25 07:43:25 np0005629333 nova_compute[244014]: 2026-02-25 12:43:25.922 244018 DEBUG nova.storage.rbd_utils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:43:25 np0005629333 nova_compute[244014]: 2026-02-25 12:43:25.929 244018 DEBUG oslo_concurrency.processutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/disk.config b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:43:25 np0005629333 nova_compute[244014]: 2026-02-25 12:43:25.960 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:43:25 np0005629333 nova_compute[244014]: 2026-02-25 12:43:25.962 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:43:25 np0005629333 nova_compute[244014]: 2026-02-25 12:43:25.984 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:43:25 np0005629333 nova_compute[244014]: 2026-02-25 12:43:25.985 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:43:25 np0005629333 nova_compute[244014]: 2026-02-25 12:43:25.985 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:43:25 np0005629333 nova_compute[244014]: 2026-02-25 12:43:25.986 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:43:25 np0005629333 nova_compute[244014]: 2026-02-25 12:43:25.986 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
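
The resource tracker sizes Ceph-backed storage by shelling out to `ceph df --format=json`. A sketch that extracts the totals, assuming the JSON layout of recent Ceph releases (a top-level "stats" object with byte counters):

    import json
    import subprocess

    raw = subprocess.run(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(raw)['stats']
    print('total GiB:', stats['total_bytes'] / 2**30)
    print('avail GiB:', stats['total_avail_bytes'] / 2**30)
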
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.085 244018 DEBUG oslo_concurrency.processutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/disk.config b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.085 244018 INFO nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Deleting local config drive /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/disk.config because it was imported into RBD.#033[00m
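
The freshly built ISO is imported into the vms pool as <instance uuid>_disk.config and the local copy removed, so the config drive is served from RBD like the instance's other disks. The equivalent CLI step, wrapped in subprocess with the names from the log:

    import os
    import subprocess

    src = '/var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/disk.config'
    subprocess.run([
        'rbd', 'import', '--pool', 'vms', src,
        'b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk.config',
        '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ], check=True)
    os.unlink(src)  # mirrors "Deleting local config drive ... imported into RBD"
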
Feb 25 07:43:26 np0005629333 kernel: tap43ea1958-7f: entered promiscuous mode
Feb 25 07:43:26 np0005629333 NetworkManager[49836]: <info>  [1772023406.1266] manager: (tap43ea1958-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/445)
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.128 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:26Z|01072|binding|INFO|Claiming lport 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 for this chassis.
Feb 25 07:43:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:26Z|01073|binding|INFO|43ea1958-7fd9-47b6-be81-3eeb1b3801a0: Claiming fa:16:3e:3e:93:a9 10.100.0.5
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.137 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:26Z|01074|binding|INFO|Setting lport 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 ovn-installed in OVS
Feb 25 07:43:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:26Z|01075|binding|INFO|Setting lport 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 up in Southbound
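
ovn-controller claims the logical port for this chassis, marks it ovn-installed in OVS, and sets it up in the Southbound database, which is what eventually produces the network-vif-plugged event nova registered for earlier. The binding can be verified with ovn-sbctl; a sketch assuming default connection options on the host:

    import subprocess

    # Shows which chassis holds the binding and whether the port is up.
    subprocess.run([
        'ovn-sbctl', '--columns=logical_port,chassis,up',
        'find', 'Port_Binding',
        'logical_port=43ea1958-7fd9-47b6-be81-3eeb1b3801a0',
    ], check=True)
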
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.138 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:93:a9 10.100.0.5'], port_security=['fa:16:3e:3e:93:a9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b5941b54-9cd2-465c-89c0-3cf87ebed83e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a5401511-20ab-4c2d-9471-9223f0b77f54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32cc7d41-346e-4e2e-a938-6389d190a22f, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=43ea1958-7fd9-47b6-be81-3eeb1b3801a0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.139 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.141 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 in datapath f8edf066-3c6a-45fb-bc20-36dc74f8aee6 bound to our chassis#033[00m
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.144 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f8edf066-3c6a-45fb-bc20-36dc74f8aee6#033[00m
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.152 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6995a66e-d6be-4890-a4b6-62e783077163]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.153 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf8edf066-31 in ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.156 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf8edf066-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.156 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8816d6c2-16a8-458b-8541-5a00917a4586]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.157 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[66f06533-929f-4276-a0be-6eeb607611de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
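
For each datapath with a port bound on this chassis, the metadata agent provisions an ovnmeta-<network uuid> namespace and a veth pair: tapf8edf066-31 inside the namespace and tapf8edf066-30 outside, plugged into br-int a few records below. Once provisioning finishes, the namespace side can be inspected; a sketch:

    import subprocess

    ns = 'ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6'
    # Show the in-namespace veth end created by the agent.
    subprocess.run(
        ['ip', 'netns', 'exec', ns, 'ip', 'addr', 'show', 'tapf8edf066-31'],
        check=True)
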
Feb 25 07:43:26 np0005629333 systemd-machined[210048]: New machine qemu-138-instance-0000006e.
Feb 25 07:43:26 np0005629333 systemd[1]: Started Virtual Machine qemu-138-instance-0000006e.
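
libvirt registers the new qemu process with systemd-machined, hence the qemu-138-instance-0000006e machine and the transient scope unit systemd reports started. It can be queried directly; a sketch:

    import subprocess

    # Shows the machine's leader PID and the machine-*.scope unit managing it.
    subprocess.run(['machinectl', 'status', 'qemu-138-instance-0000006e'], check=True)
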
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.171 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[5192eda7-5015-45ff-a5f1-e1033b2f817b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:26 np0005629333 systemd-udevd[340560]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:43:26 np0005629333 NetworkManager[49836]: <info>  [1772023406.1856] device (tap43ea1958-7f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:43:26 np0005629333 NetworkManager[49836]: <info>  [1772023406.1863] device (tap43ea1958-7f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.203 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[82b489b0-f72e-4ed9-a409-d55b3cc15a32]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.233 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9d648dff-22d4-4275-9128-2bcb4c75a7d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:26 np0005629333 systemd-udevd[340563]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:43:26 np0005629333 NetworkManager[49836]: <info>  [1772023406.2395] manager: (tapf8edf066-30): new Veth device (/org/freedesktop/NetworkManager/Devices/446)
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.239 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e0e2b5d4-2820-4378-9cb7-0798e23edb1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.268 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[db251ba8-d132-4f4c-8645-199cbf294473]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.271 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cd3f0727-872b-4cf1-a284-15ae167ed41b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:26 np0005629333 NetworkManager[49836]: <info>  [1772023406.2901] device (tapf8edf066-30): carrier: link connected
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.295 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ded17a2f-b181-4eae-8b23-4e6400a52b50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.307 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8535a312-0592-4b77-ab16-bfff5b758954]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf8edf066-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:6d:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 323], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537569, 'reachable_time': 16694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340591, 'error': None, 'target': 'ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.317 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[998db363-8cfd-400b-8425-d061a2bef22c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2f:6d18'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537569, 'tstamp': 537569}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340592, 'error': None, 'target': 'ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 269 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 3.8 MiB/s wr, 81 op/s
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.331 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a2695d9c-44be-4e0f-b751-9ceeb3a9aa20]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf8edf066-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:6d:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 323], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537569, 'reachable_time': 16694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 340593, 'error': None, 'target': 'ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
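[editor's note] The RTM_NEWLINK reply above is the raw netlink message the privsep daemon hands back when the metadata agent inspects the tap device inside its ovnmeta- namespace. A minimal sketch of reading the same IFLA_* attributes directly with pyroute2 (an illustration, not the agent's own code; namespace and interface names are taken from the reply's 'target' and IFLA_IFNAME fields):

    # Sketch: fetch the link attributes shown in the privsep reply above.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6') as ns:
        for link in ns.get_links():
            if link.get_attr('IFLA_IFNAME') == 'tapf8edf066-31':
                print(link.get_attr('IFLA_OPERSTATE'),  # 'UP' in the reply
                      link.get_attr('IFLA_ADDRESS'),    # fa:16:3e:2f:6d:18
                      link.get_attr('IFLA_MTU'))        # 1500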
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.352 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bca364d5-5f37-4107-9d92-e263a25776ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.384 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[78e60fb0-b005-490e-8306-5af220176a74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.385 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8edf066-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.385 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.385 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf8edf066-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.386 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:26 np0005629333 NetworkManager[49836]: <info>  [1772023406.3875] manager: (tapf8edf066-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/447)
Feb 25 07:43:26 np0005629333 kernel: tapf8edf066-30: entered promiscuous mode
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.391 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.392 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf8edf066-30, col_values=(('external_ids', {'iface-id': '537184a5-5d27-4b28-acba-8f254f6dc5ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
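[editor's note] The three ovsdbapp transactions above move the tap port off br-ex, plug it into br-int, and tag it with its OVN iface-id. A sketch of the same sequence driven through the ovs-vsctl CLI, whose --if-exists/--may-exist options correspond to the if_exists/may_exist flags in the logged commands (port, bridge, and UUID values come from the log):

    import subprocess

    port = 'tapf8edf066-30'
    iface_id = '537184a5-5d27-4b28-acba-8f254f6dc5ca'
    for cmd in (
        ['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', port],   # DelPortCommand
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port],  # AddPortCommand
        ['ovs-vsctl', 'set', 'Interface', port,
         'external_ids:iface-id=' + iface_id],                     # DbSetCommand
    ):
        subprocess.run(cmd, check=True)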
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.393 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:26Z|01076|binding|INFO|Releasing lport 537184a5-5d27-4b28-acba-8f254f6dc5ca from this chassis (sb_readonly=0)
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.401 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.402 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.403 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f8edf066-3c6a-45fb-bc20-36dc74f8aee6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f8edf066-3c6a-45fb-bc20-36dc74f8aee6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.404 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cea9d25c-21c8-4ee8-bf84-bfa1f3a488ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
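[editor's note] The ENOENT on the .pid.haproxy file two lines up is effectively the agent's liveness probe for the per-network proxy: a missing pidfile means no haproxy is running for this network yet, so a fresh config is rendered next. A sketch of that check (an assumed reading of the semantics, consistent with the lines that follow; the path is the one logged above):

    PIDFILE = ('/var/lib/neutron/external/pids/'
               'f8edf066-3c6a-45fb-bc20-36dc74f8aee6.pid.haproxy')
    try:
        with open(PIDFILE) as f:
            pid = int(f.read().strip())  # a proxy is already running
    except FileNotFoundError:
        pid = None  # Errno 2, as logged: go on to spawn a new haproxy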
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.404 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-f8edf066-3c6a-45fb-bc20-36dc74f8aee6
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/f8edf066-3c6a-45fb-bc20-36dc74f8aee6.pid.haproxy
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID f8edf066-3c6a-45fb-bc20-36dc74f8aee6
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
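[editor's note] The rendered proxy config above is written out and handed to haproxy by the rootwrap command on the next line. Before launching, such a file can be syntax-checked with haproxy's check-only mode; a sketch, assuming the config path that appears in the '-f' argument below:

    import subprocess

    CFG = ('/var/lib/neutron/ovn-metadata-proxy/'
           'f8edf066-3c6a-45fb-bc20-36dc74f8aee6.conf')
    # 'haproxy -c' parses the config and exits without starting the proxy;
    # a zero exit status indicates the configuration is valid.
    subprocess.run(['haproxy', '-c', '-f', CFG], check=True)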
Feb 25 07:43:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:26.405 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'env', 'PROCESS_TAG=haproxy-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f8edf066-3c6a-45fb-bc20-36dc74f8aee6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.469 244018 DEBUG nova.compute.manager [req-12e65119-586a-462c-aba5-694e0472e495 req-5d1d173a-b2b1-480d-a1bf-933f6b598b57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received event network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.469 244018 DEBUG oslo_concurrency.lockutils [req-12e65119-586a-462c-aba5-694e0472e495 req-5d1d173a-b2b1-480d-a1bf-933f6b598b57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.469 244018 DEBUG oslo_concurrency.lockutils [req-12e65119-586a-462c-aba5-694e0472e495 req-5d1d173a-b2b1-480d-a1bf-933f6b598b57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.470 244018 DEBUG oslo_concurrency.lockutils [req-12e65119-586a-462c-aba5-694e0472e495 req-5d1d173a-b2b1-480d-a1bf-933f6b598b57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.470 244018 DEBUG nova.compute.manager [req-12e65119-586a-462c-aba5-694e0472e495 req-5d1d173a-b2b1-480d-a1bf-933f6b598b57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Processing event network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2285557306' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.580 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
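[editor's note] The `ceph df --format=json` call above (0.594s here) is how the compute service sizes the RBD-backed storage that feeds the free_disk figure in the resource view further down. A sketch of issuing the same command and reading the cluster totals (key names per ceph's JSON output; this is not nova's exact code path):

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    print('avail GiB:', stats['total_avail_bytes'] / 2**30)
    print('used  GiB:', stats['total_used_bytes'] / 2**30)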
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.618881) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023406618916, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 2033, "num_deletes": 253, "total_data_size": 3198380, "memory_usage": 3239728, "flush_reason": "Manual Compaction"}
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023406633448, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 3130447, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38039, "largest_seqno": 40071, "table_properties": {"data_size": 3121300, "index_size": 5641, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19516, "raw_average_key_size": 20, "raw_value_size": 3102733, "raw_average_value_size": 3252, "num_data_blocks": 249, "num_entries": 954, "num_filter_entries": 954, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772023208, "oldest_key_time": 1772023208, "file_creation_time": 1772023406, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 14613 microseconds, and 4575 cpu microseconds.
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.633491) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 3130447 bytes OK
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.633509) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.635227) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.635239) EVENT_LOG_v1 {"time_micros": 1772023406635235, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.635259) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 3189752, prev total WAL file size 3189752, number of live WAL files 2.
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.639774) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(3057KB)], [86(7679KB)]
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023406639837, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 10994511, "oldest_snapshot_seqno": -1}
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.655 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.655 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.661 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.661 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6467 keys, 9273776 bytes, temperature: kUnknown
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023406694932, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 9273776, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9230328, "index_size": 26186, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16197, "raw_key_size": 164216, "raw_average_key_size": 25, "raw_value_size": 9114477, "raw_average_value_size": 1409, "num_data_blocks": 1051, "num_entries": 6467, "num_filter_entries": 6467, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772023406, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.695780) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 9273776 bytes
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.697360) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 197.1 rd, 166.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 7.5 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 6989, records dropped: 522 output_compression: NoCompression
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.697376) EVENT_LOG_v1 {"time_micros": 1772023406697368, "job": 50, "event": "compaction_finished", "compaction_time_micros": 55782, "compaction_time_cpu_micros": 16072, "output_level": 6, "num_output_files": 1, "total_output_size": 9273776, "num_input_records": 6989, "num_output_records": 6467, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023406697676, "job": 50, "event": "table_file_deletion", "file_number": 88}
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023406698860, "job": 50, "event": "table_file_deletion", "file_number": 86}
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.635741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.698905) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.698909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.698911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.698912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:43:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:43:26.698914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
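[editor's note] The rocksdb lines above bracket one memtable flush (job 49) and one manual compaction (job 50) on the mon store; each EVENT_LOG_v1 payload is a JSON object embedded in the log line. A small sketch that pulls those records out of a journal excerpt like this one and summarizes compaction timing (field names taken from the events above):

    import json
    import re
    import sys

    EVENT = re.compile(r'EVENT_LOG_v1 (\{.*\})')
    for line in sys.stdin:
        m = EVENT.search(line)
        if not m:
            continue
        ev = json.loads(m.group(1))
        if ev.get('event') == 'compaction_finished':
            # job 50 above: 55782 us, 6989 -> 6467 records
            print('job %(job)s: %(compaction_time_micros)d us, '
                  '%(num_input_records)d -> %(num_output_records)d records' % ev)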
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.775 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023406.7752013, b5941b54-9cd2-465c-89c0-3cf87ebed83e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.776 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] VM Started (Lifecycle Event)#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.778 244018 DEBUG nova.compute.manager [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.781 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.784 244018 INFO nova.virt.libvirt.driver [-] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Instance spawned successfully.#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.784 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.801 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.806 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
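[editor's note] The numeric states in the line above ('DB power_state: 0, VM power_state: 1') are nova.compute.power_state constants: the database still records NOSTATE while the hypervisor already reports RUNNING, which is why the sync is skipped a few lines below while task_state is still 'spawning'. For reference (values as defined in nova's power_state module):

    # Mapping of nova.compute.power_state constants to names.
    POWER_STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                   4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}
    print(POWER_STATE[0], '->', POWER_STATE[1])  # NOSTATE -> RUNNING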
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.810 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.810 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.811 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.811 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.811 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.812 244018 DEBUG nova.virt.libvirt.driver [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:43:26 np0005629333 podman[340669]: 2026-02-25 12:43:26.822197308 +0000 UTC m=+0.056658100 container create 314448028c0bb71621d0c09c9fb8e3e4e2e7a53175dc854d717d856b0c26a4bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.841 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.842 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023406.775345, b5941b54-9cd2-465c-89c0-3cf87ebed83e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.842 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:43:26 np0005629333 systemd[1]: Started libpod-conmon-314448028c0bb71621d0c09c9fb8e3e4e2e7a53175dc854d717d856b0c26a4bd.scope.
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.874 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.877 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023406.7813468, b5941b54-9cd2-465c-89c0-3cf87ebed83e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.877 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:43:26 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.886 244018 INFO nova.compute.manager [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Took 7.22 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.887 244018 DEBUG nova.compute.manager [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:43:26 np0005629333 podman[340669]: 2026-02-25 12:43:26.793399735 +0000 UTC m=+0.027860577 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:43:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbd30b2a51b99c22ab6484bb072cec6f6373098acd12eff7ecbd61ac2cb75c84/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.898 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.899 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3503MB free_disk=59.92203834373504GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.899 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.899 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:43:26 np0005629333 podman[340669]: 2026-02-25 12:43:26.901343441 +0000 UTC m=+0.135804263 container init 314448028c0bb71621d0c09c9fb8e3e4e2e7a53175dc854d717d856b0c26a4bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 25 07:43:26 np0005629333 podman[340669]: 2026-02-25 12:43:26.905676483 +0000 UTC m=+0.140137285 container start 314448028c0bb71621d0c09c9fb8e3e4e2e7a53175dc854d717d856b0c26a4bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.920 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.923 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:43:26 np0005629333 neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6[340681]: [NOTICE]   (340688) : New worker (340690) forked
Feb 25 07:43:26 np0005629333 neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6[340681]: [NOTICE]   (340688) : Loading success.
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.951 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.974 244018 INFO nova.compute.manager [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Took 8.34 seconds to build instance.#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.977 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance cf7f6093-44a3-4e8f-8970-db25cf0b4ab9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.977 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance b5941b54-9cd2-465c-89c0-3cf87ebed83e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.977 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.977 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 07:43:26 np0005629333 nova_compute[244014]: 2026-02-25 12:43:26.992 244018 DEBUG oslo_concurrency.lockutils [None req-0cc85e06-5dc2-4a92-a40b-9ab281c33215 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.421s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:43:27 np0005629333 nova_compute[244014]: 2026-02-25 12:43:27.033 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:43:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:43:27 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3241667786' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:43:27 np0005629333 nova_compute[244014]: 2026-02-25 12:43:27.563 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:43:27 np0005629333 nova_compute[244014]: 2026-02-25 12:43:27.577 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:43:27 np0005629333 nova_compute[244014]: 2026-02-25 12:43:27.599 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
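[editor's note] Placement derives schedulable capacity from the inventory above as (total - reserved) * allocation_ratio per resource class. Worked out for this host, as a check on the logged numbers rather than new data:

    # Capacity implied by the inventory line above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2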
Feb 25 07:43:27 np0005629333 nova_compute[244014]: 2026-02-25 12:43:27.638 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 07:43:27 np0005629333 nova_compute[244014]: 2026-02-25 12:43:27.638 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:43:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:43:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 646 KiB/s rd, 3.9 MiB/s wr, 108 op/s
Feb 25 07:43:28 np0005629333 nova_compute[244014]: 2026-02-25 12:43:28.553 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:43:28 np0005629333 nova_compute[244014]: 2026-02-25 12:43:28.554 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:43:28 np0005629333 nova_compute[244014]: 2026-02-25 12:43:28.555 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:43:28 np0005629333 nova_compute[244014]: 2026-02-25 12:43:28.644 244018 DEBUG nova.compute.manager [req-770c4ed4-27f7-4784-b19d-f07564fe6365 req-da4288cf-0f78-4b26-98c3-3ce1bc85e52b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received event network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:43:28 np0005629333 nova_compute[244014]: 2026-02-25 12:43:28.644 244018 DEBUG oslo_concurrency.lockutils [req-770c4ed4-27f7-4784-b19d-f07564fe6365 req-da4288cf-0f78-4b26-98c3-3ce1bc85e52b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:43:28 np0005629333 nova_compute[244014]: 2026-02-25 12:43:28.645 244018 DEBUG oslo_concurrency.lockutils [req-770c4ed4-27f7-4784-b19d-f07564fe6365 req-da4288cf-0f78-4b26-98c3-3ce1bc85e52b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:43:28 np0005629333 nova_compute[244014]: 2026-02-25 12:43:28.645 244018 DEBUG oslo_concurrency.lockutils [req-770c4ed4-27f7-4784-b19d-f07564fe6365 req-da4288cf-0f78-4b26-98c3-3ce1bc85e52b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:43:28 np0005629333 nova_compute[244014]: 2026-02-25 12:43:28.645 244018 DEBUG nova.compute.manager [req-770c4ed4-27f7-4784-b19d-f07564fe6365 req-da4288cf-0f78-4b26-98c3-3ce1bc85e52b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] No waiting events found dispatching network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:43:28 np0005629333 nova_compute[244014]: 2026-02-25 12:43:28.645 244018 WARNING nova.compute.manager [req-770c4ed4-27f7-4784-b19d-f07564fe6365 req-da4288cf-0f78-4b26-98c3-3ce1bc85e52b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received unexpected event network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:43:29 np0005629333 nova_compute[244014]: 2026-02-25 12:43:29.549 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:30 np0005629333 nova_compute[244014]: 2026-02-25 12:43:30.112 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 646 KiB/s rd, 3.9 MiB/s wr, 108 op/s
Feb 25 07:43:30 np0005629333 nova_compute[244014]: 2026-02-25 12:43:30.714 244018 DEBUG nova.compute.manager [req-d897cbfa-5421-423c-bec7-df9d38546241 req-9aa4d4e5-3d82-4816-a62e-1d4f985252a2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received event network-changed-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:43:30 np0005629333 nova_compute[244014]: 2026-02-25 12:43:30.714 244018 DEBUG nova.compute.manager [req-d897cbfa-5421-423c-bec7-df9d38546241 req-9aa4d4e5-3d82-4816-a62e-1d4f985252a2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Refreshing instance network info cache due to event network-changed-43ea1958-7fd9-47b6-be81-3eeb1b3801a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:43:30 np0005629333 nova_compute[244014]: 2026-02-25 12:43:30.715 244018 DEBUG oslo_concurrency.lockutils [req-d897cbfa-5421-423c-bec7-df9d38546241 req-9aa4d4e5-3d82-4816-a62e-1d4f985252a2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-b5941b54-9cd2-465c-89c0-3cf87ebed83e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:43:30 np0005629333 nova_compute[244014]: 2026-02-25 12:43:30.715 244018 DEBUG oslo_concurrency.lockutils [req-d897cbfa-5421-423c-bec7-df9d38546241 req-9aa4d4e5-3d82-4816-a62e-1d4f985252a2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-b5941b54-9cd2-465c-89c0-3cf87ebed83e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:43:30 np0005629333 nova_compute[244014]: 2026-02-25 12:43:30.715 244018 DEBUG nova.network.neutron [req-d897cbfa-5421-423c-bec7-df9d38546241 req-9aa4d4e5-3d82-4816-a62e-1d4f985252a2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Refreshing network info cache for port 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:43:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:43:30
Feb 25 07:43:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:43:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:43:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'backups', 'default.rgw.log', 'images', 'vms', 'volumes', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Feb 25 07:43:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:43:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:43:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:43:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:43:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:43:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:43:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:43:31 np0005629333 nova_compute[244014]: 2026-02-25 12:43:31.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:43:31 np0005629333 nova_compute[244014]: 2026-02-25 12:43:31.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 07:43:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:43:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:43:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:43:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:43:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:43:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:43:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:43:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:43:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:43:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:43:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 279 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 157 op/s
Feb 25 07:43:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:43:33 np0005629333 nova_compute[244014]: 2026-02-25 12:43:33.770 244018 DEBUG nova.network.neutron [req-d897cbfa-5421-423c-bec7-df9d38546241 req-9aa4d4e5-3d82-4816-a62e-1d4f985252a2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Updated VIF entry in instance network info cache for port 43ea1958-7fd9-47b6-be81-3eeb1b3801a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:43:33 np0005629333 nova_compute[244014]: 2026-02-25 12:43:33.770 244018 DEBUG nova.network.neutron [req-d897cbfa-5421-423c-bec7-df9d38546241 req-9aa4d4e5-3d82-4816-a62e-1d4f985252a2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Updating instance_info_cache with network_info: [{"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
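[editor's note] The instance_info_cache payload above is a plain JSON list of VIF entries; the addresses nova reports for the instance sit under network.subnets[].ips[]. A small helper sketch for walking that structure (shape inferred from the logged entry; an illustration, not nova's network model classes):

    def addresses(network_info):
        """Yield (fixed_ip, [floating_ips]) pairs from a network_info list."""
        for vif in network_info:
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    yield (ip['address'],
                           [f['address'] for f in ip.get('floating_ips', [])])

    # For the entry above this yields ('10.100.0.5', ['192.168.122.221']).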
Feb 25 07:43:33 np0005629333 nova_compute[244014]: 2026-02-25 12:43:33.798 244018 DEBUG oslo_concurrency.lockutils [req-d897cbfa-5421-423c-bec7-df9d38546241 req-9aa4d4e5-3d82-4816-a62e-1d4f985252a2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-b5941b54-9cd2-465c-89c0-3cf87ebed83e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:43:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 279 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Feb 25 07:43:34 np0005629333 nova_compute[244014]: 2026-02-25 12:43:34.551 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:43:35 np0005629333 nova_compute[244014]: 2026-02-25 12:43:35.114 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:43:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 279 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 86 KiB/s wr, 82 op/s
Feb 25 07:43:36 np0005629333 nova_compute[244014]: 2026-02-25 12:43:36.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:43:37 np0005629333 nova_compute[244014]: 2026-02-25 12:43:37.670 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:43:37 np0005629333 nova_compute[244014]: 2026-02-25 12:43:37.670 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:43:37 np0005629333 nova_compute[244014]: 2026-02-25 12:43:37.692 244018 DEBUG nova.compute.manager [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:43:37 np0005629333 nova_compute[244014]: 2026-02-25 12:43:37.797 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:43:37 np0005629333 nova_compute[244014]: 2026-02-25 12:43:37.798 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:43:37 np0005629333 nova_compute[244014]: 2026-02-25 12:43:37.805 244018 DEBUG nova.virt.hardware [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:43:37 np0005629333 nova_compute[244014]: 2026-02-25 12:43:37.806 244018 INFO nova.compute.claims [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:43:37 np0005629333 nova_compute[244014]: 2026-02-25 12:43:37.986 244018 DEBUG oslo_concurrency.processutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:43:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:43:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 289 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.0 MiB/s wr, 107 op/s
Feb 25 07:43:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:43:38 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2209978200' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:43:38 np0005629333 nova_compute[244014]: 2026-02-25 12:43:38.535 244018 DEBUG oslo_concurrency.processutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
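The disk-space probe that just returned is a plain CLI call, so it can be reproduced outside Nova. A minimal Python sketch of the same probe, assuming the client.openstack keyring this host uses; the exact JSON key names vary slightly across Ceph releases:

    import json
    import subprocess

    # Same flags as the logged command above.
    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)

    # Cluster totals plus per-pool usage; Nova feeds this into its DISK_GB
    # inventory for RBD-backed storage. Key names as of recent Ceph releases.
    print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])
    for pool in stats["pools"]:
        print(pool["name"], pool["stats"]["bytes_used"])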
Feb 25 07:43:38 np0005629333 nova_compute[244014]: 2026-02-25 12:43:38.540 244018 DEBUG nova.compute.provider_tree [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:43:38 np0005629333 nova_compute[244014]: 2026-02-25 12:43:38.559 244018 DEBUG nova.scheduler.client.report [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:43:38 np0005629333 nova_compute[244014]: 2026-02-25 12:43:38.583 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
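The inventory dict logged just above fixes what Placement will schedule against; the standard formula is capacity = (total - reserved) * allocation_ratio. A quick check of those numbers:

    # Capacity implied by the inventory in the log line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2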
Feb 25 07:43:38 np0005629333 nova_compute[244014]: 2026-02-25 12:43:38.584 244018 DEBUG nova.compute.manager [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:43:38 np0005629333 nova_compute[244014]: 2026-02-25 12:43:38.636 244018 DEBUG nova.compute.manager [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:43:38 np0005629333 nova_compute[244014]: 2026-02-25 12:43:38.637 244018 DEBUG nova.network.neutron [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:43:38 np0005629333 nova_compute[244014]: 2026-02-25 12:43:38.655 244018 INFO nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:43:38 np0005629333 nova_compute[244014]: 2026-02-25 12:43:38.676 244018 DEBUG nova.compute.manager [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:43:38 np0005629333 nova_compute[244014]: 2026-02-25 12:43:38.770 244018 DEBUG nova.compute.manager [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:43:38 np0005629333 nova_compute[244014]: 2026-02-25 12:43:38.771 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:43:38 np0005629333 nova_compute[244014]: 2026-02-25 12:43:38.771 244018 INFO nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Creating image(s)
Feb 25 07:43:38 np0005629333 nova_compute[244014]: 2026-02-25 12:43:38.788 244018 DEBUG nova.storage.rbd_utils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:43:38 np0005629333 nova_compute[244014]: 2026-02-25 12:43:38.810 244018 DEBUG nova.storage.rbd_utils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:43:38 np0005629333 nova_compute[244014]: 2026-02-25 12:43:38.835 244018 DEBUG nova.storage.rbd_utils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:43:38 np0005629333 nova_compute[244014]: 2026-02-25 12:43:38.846 244018 DEBUG oslo_concurrency.processutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:43:38 np0005629333 nova_compute[244014]: 2026-02-25 12:43:38.917 244018 DEBUG oslo_concurrency.processutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:43:38 np0005629333 nova_compute[244014]: 2026-02-25 12:43:38.918 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:43:38 np0005629333 nova_compute[244014]: 2026-02-25 12:43:38.919 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:43:38 np0005629333 nova_compute[244014]: 2026-02-25 12:43:38.919 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
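The Acquiring/acquired/released triples throughout this log are oslo.concurrency's named-lock helper; "held 0.000s" here just means the base image was already cached, so the guarded fetch body did nothing. A sketch of the two usual calling forms (lock names taken from the log, bodies illustrative):

    from oslo_concurrency import lockutils

    # Context-manager form, as used around the image-cache fetch above.
    with lockutils.lock("a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"):
        pass  # fetch the base image at most once per cache key

    # Decorator form; the "compute_resources" lock earlier in the log is
    # taken through an equivalent wrapper.
    @lockutils.synchronized("compute_resources")
    def instance_claim():
        pass  # resource-tracker work guarded by the named lock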
Feb 25 07:43:38 np0005629333 nova_compute[244014]: 2026-02-25 12:43:38.938 244018 DEBUG nova.storage.rbd_utils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:43:38 np0005629333 nova_compute[244014]: 2026-02-25 12:43:38.941 244018 DEBUG oslo_concurrency.processutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:43:39 np0005629333 nova_compute[244014]: 2026-02-25 12:43:39.057 244018 DEBUG nova.policy [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:43:39 np0005629333 nova_compute[244014]: 2026-02-25 12:43:39.440 244018 DEBUG oslo_concurrency.processutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:43:39 np0005629333 nova_compute[244014]: 2026-02-25 12:43:39.502 244018 DEBUG nova.storage.rbd_utils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
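The root disk was produced in two RBD steps: import the cached base file into the vms pool, then grow the image to the flavor's 1 GiB root size. A sketch reproducing both with the CLI; note that Nova did the resize through librbd (rbd_utils.resize), so the rbd resize call below is an equivalent, not what the service actually ran, and 1073741824 bytes is 1024 MiB, rbd's default size unit:

    import subprocess

    base = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
    image = "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk"
    auth = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    # Step 1: the logged import, same arguments.
    subprocess.run(["rbd", "import", "--pool", "vms", base, image,
                    "--image-format=2", *auth], check=True)
    # Step 2: CLI equivalent of the librbd resize to 1073741824 bytes.
    subprocess.run(["rbd", "resize", "--pool", "vms", image,
                    "--size", "1024", *auth], check=True)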
Feb 25 07:43:39 np0005629333 nova_compute[244014]: 2026-02-25 12:43:39.553 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:43:39 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:39Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3e:93:a9 10.100.0.5
Feb 25 07:43:39 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:39Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3e:93:a9 10.100.0.5
Feb 25 07:43:39 np0005629333 nova_compute[244014]: 2026-02-25 12:43:39.748 244018 DEBUG nova.objects.instance [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:43:39 np0005629333 nova_compute[244014]: 2026-02-25 12:43:39.822 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:43:39 np0005629333 nova_compute[244014]: 2026-02-25 12:43:39.823 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Ensure instance console log exists: /var/lib/nova/instances/c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:43:39 np0005629333 nova_compute[244014]: 2026-02-25 12:43:39.824 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:43:39 np0005629333 nova_compute[244014]: 2026-02-25 12:43:39.824 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:43:39 np0005629333 nova_compute[244014]: 2026-02-25 12:43:39.825 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:43:39 np0005629333 nova_compute[244014]: 2026-02-25 12:43:39.830 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:43:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:39.831 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:43:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:39.833 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 07:43:40 np0005629333 nova_compute[244014]: 2026-02-25 12:43:40.115 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:43:40 np0005629333 nova_compute[244014]: 2026-02-25 12:43:40.155 244018 DEBUG nova.network.neutron [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Successfully created port: d71dae92-b542-404e-b4cc-ecad408ed655 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:43:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 289 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 973 KiB/s wr, 79 op/s
Feb 25 07:43:40 np0005629333 podman[341004]: 2026-02-25 12:43:40.867536201 +0000 UTC m=+0.146148055 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 07:43:40 np0005629333 podman[341004]: 2026-02-25 12:43:40.958004084 +0000 UTC m=+0.236615938 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:43:41 np0005629333 nova_compute[244014]: 2026-02-25 12:43:41.108 244018 DEBUG nova.network.neutron [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Successfully created port: 18ca6c9a-c1c9-4a48-b124-25942ebef5df _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:43:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:43:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:43:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:43:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:43:42 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:43:42 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:43:42 np0005629333 nova_compute[244014]: 2026-02-25 12:43:42.308 244018 DEBUG nova.network.neutron [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Successfully updated port: d71dae92-b542-404e-b4cc-ecad408ed655 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:43:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 345 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.7 MiB/s wr, 126 op/s
Feb 25 07:43:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:43:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:43:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:43:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:43:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:43:42 np0005629333 nova_compute[244014]: 2026-02-25 12:43:42.426 244018 DEBUG nova.compute.manager [req-f09c0282-ec08-4697-91c6-cce6c5517b8d req-a8a8e67b-65a6-4607-bd15-ada5951e29e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-changed-d71dae92-b542-404e-b4cc-ecad408ed655 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:43:42 np0005629333 nova_compute[244014]: 2026-02-25 12:43:42.426 244018 DEBUG nova.compute.manager [req-f09c0282-ec08-4697-91c6-cce6c5517b8d req-a8a8e67b-65a6-4607-bd15-ada5951e29e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Refreshing instance network info cache due to event network-changed-d71dae92-b542-404e-b4cc-ecad408ed655. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:43:42 np0005629333 nova_compute[244014]: 2026-02-25 12:43:42.426 244018 DEBUG oslo_concurrency.lockutils [req-f09c0282-ec08-4697-91c6-cce6c5517b8d req-a8a8e67b-65a6-4607-bd15-ada5951e29e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:43:42 np0005629333 nova_compute[244014]: 2026-02-25 12:43:42.426 244018 DEBUG oslo_concurrency.lockutils [req-f09c0282-ec08-4697-91c6-cce6c5517b8d req-a8a8e67b-65a6-4607-bd15-ada5951e29e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:43:42 np0005629333 nova_compute[244014]: 2026-02-25 12:43:42.426 244018 DEBUG nova.network.neutron [req-f09c0282-ec08-4697-91c6-cce6c5517b8d req-a8a8e67b-65a6-4607-bd15-ada5951e29e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Refreshing network info cache for port d71dae92-b542-404e-b4cc-ecad408ed655 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:43:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:43:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:43:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:43:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:43:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:43:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:43:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:43:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:43:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:43:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:43:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:43:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018414802412164857 of space, bias 1.0, pg target 0.5524440723649457 quantized to 32 (current 32)
Feb 25 07:43:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:43:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:43:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:43:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:43:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:43:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493766335806996 of space, bias 1.0, pg target 0.7481299007420988 quantized to 32 (current 32)
Feb 25 07:43:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:43:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0089591090207362e-06 of space, bias 4.0, pg target 0.0012107509308248836 quantized to 16 (current 16)
Feb 25 07:43:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:43:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:43:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:43:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:43:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:43:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:43:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:43:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:43:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:43:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
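The pg_autoscaler lines above fit pg_target = usage_ratio * bias * raw_pg_budget, with the result then quantized against the pool's current pg_num. The logged values are consistent with a budget of 300, e.g. 3 OSDs at the default mon_target_pg_per_osd of 100; that budget is an inference from the numbers, not something the log states. Checking three of the lines:

    # Reproduce three of the autoscaler lines above; budget=300 is inferred.
    budget = 300
    for name, usage, bias, logged in [
        (".mgr",               7.185749983720779e-06,  1.0, 0.0021557249951162337),
        ("vms",                0.0018414802412164857,  1.0, 0.5524440723649457),
        ("cephfs.cephfs.meta", 1.0089591090207362e-06, 4.0, 0.0012107509308248836),
    ]:
        target = usage * bias * budget
        assert abs(target - logged) < 1e-9, name
        print(name, target)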
Feb 25 07:43:42 np0005629333 nova_compute[244014]: 2026-02-25 12:43:42.726 244018 DEBUG nova.network.neutron [req-f09c0282-ec08-4697-91c6-cce6c5517b8d req-a8a8e67b-65a6-4607-bd15-ada5951e29e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:43:42 np0005629333 podman[341338]: 2026-02-25 12:43:42.84126587 +0000 UTC m=+0.067074294 container create 7d13092960c6c10f58e27c7ee2113fe6a86326ffd392ec6c8b9cb42f7eb2d9b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:43:42 np0005629333 podman[341338]: 2026-02-25 12:43:42.79239021 +0000 UTC m=+0.018198534 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:43:42 np0005629333 systemd[1]: Started libpod-conmon-7d13092960c6c10f58e27c7ee2113fe6a86326ffd392ec6c8b9cb42f7eb2d9b4.scope.
Feb 25 07:43:42 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:43:42 np0005629333 podman[341338]: 2026-02-25 12:43:42.948087764 +0000 UTC m=+0.173896098 container init 7d13092960c6c10f58e27c7ee2113fe6a86326ffd392ec6c8b9cb42f7eb2d9b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 07:43:42 np0005629333 podman[341338]: 2026-02-25 12:43:42.958725025 +0000 UTC m=+0.184533329 container start 7d13092960c6c10f58e27c7ee2113fe6a86326ffd392ec6c8b9cb42f7eb2d9b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_zhukovsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 07:43:42 np0005629333 hopeful_zhukovsky[341354]: 167 167
Feb 25 07:43:42 np0005629333 systemd[1]: libpod-7d13092960c6c10f58e27c7ee2113fe6a86326ffd392ec6c8b9cb42f7eb2d9b4.scope: Deactivated successfully.
Feb 25 07:43:42 np0005629333 conmon[341354]: conmon 7d13092960c6c10f58e2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7d13092960c6c10f58e27c7ee2113fe6a86326ffd392ec6c8b9cb42f7eb2d9b4.scope/container/memory.events
Feb 25 07:43:43 np0005629333 podman[341338]: 2026-02-25 12:43:43.022379271 +0000 UTC m=+0.248187625 container attach 7d13092960c6c10f58e27c7ee2113fe6a86326ffd392ec6c8b9cb42f7eb2d9b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_zhukovsky, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:43:43 np0005629333 podman[341338]: 2026-02-25 12:43:43.023368809 +0000 UTC m=+0.249177123 container died 7d13092960c6c10f58e27c7ee2113fe6a86326ffd392ec6c8b9cb42f7eb2d9b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 07:43:43 np0005629333 nova_compute[244014]: 2026-02-25 12:43:43.079 244018 DEBUG nova.network.neutron [req-f09c0282-ec08-4697-91c6-cce6c5517b8d req-a8a8e67b-65a6-4607-bd15-ada5951e29e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:43:43 np0005629333 nova_compute[244014]: 2026-02-25 12:43:43.100 244018 DEBUG oslo_concurrency.lockutils [req-f09c0282-ec08-4697-91c6-cce6c5517b8d req-a8a8e67b-65a6-4607-bd15-ada5951e29e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:43:43 np0005629333 systemd[1]: var-lib-containers-storage-overlay-8cf0a9835b7b569ec5b9b1e0c2a1a62dd0f66cc068b5bbe5e7f66f2f73318b9a-merged.mount: Deactivated successfully.
Feb 25 07:43:43 np0005629333 podman[341338]: 2026-02-25 12:43:43.15242309 +0000 UTC m=+0.378231394 container remove 7d13092960c6c10f58e27c7ee2113fe6a86326ffd392ec6c8b9cb42f7eb2d9b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_zhukovsky, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 07:43:43 np0005629333 systemd[1]: libpod-conmon-7d13092960c6c10f58e27c7ee2113fe6a86326ffd392ec6c8b9cb42f7eb2d9b4.scope: Deactivated successfully.
Feb 25 07:43:43 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:43:43 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:43:43 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:43:43 np0005629333 podman[341378]: 2026-02-25 12:43:43.308954238 +0000 UTC m=+0.046750261 container create beb27933528d028ec031f4264ef03ccb77a501bfffa1a4abbe37e2a0c0ee62fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lamport, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 07:43:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:43:43 np0005629333 podman[341378]: 2026-02-25 12:43:43.282862522 +0000 UTC m=+0.020658575 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:43:43 np0005629333 systemd[1]: Started libpod-conmon-beb27933528d028ec031f4264ef03ccb77a501bfffa1a4abbe37e2a0c0ee62fe.scope.
Feb 25 07:43:43 np0005629333 nova_compute[244014]: 2026-02-25 12:43:43.404 244018 DEBUG nova.network.neutron [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Successfully updated port: 18ca6c9a-c1c9-4a48-b124-25942ebef5df _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:43:43 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:43:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/797099886ce7650185d2ea8619b6982b181a101e59b955b6f94d20d4930a928b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:43:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/797099886ce7650185d2ea8619b6982b181a101e59b955b6f94d20d4930a928b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:43:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/797099886ce7650185d2ea8619b6982b181a101e59b955b6f94d20d4930a928b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:43:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/797099886ce7650185d2ea8619b6982b181a101e59b955b6f94d20d4930a928b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:43:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/797099886ce7650185d2ea8619b6982b181a101e59b955b6f94d20d4930a928b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:43:43 np0005629333 nova_compute[244014]: 2026-02-25 12:43:43.425 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:43:43 np0005629333 nova_compute[244014]: 2026-02-25 12:43:43.426 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:43:43 np0005629333 nova_compute[244014]: 2026-02-25 12:43:43.426 244018 DEBUG nova.network.neutron [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:43:43 np0005629333 podman[341378]: 2026-02-25 12:43:43.486309773 +0000 UTC m=+0.224105816 container init beb27933528d028ec031f4264ef03ccb77a501bfffa1a4abbe37e2a0c0ee62fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lamport, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 07:43:43 np0005629333 podman[341378]: 2026-02-25 12:43:43.492681663 +0000 UTC m=+0.230477686 container start beb27933528d028ec031f4264ef03ccb77a501bfffa1a4abbe37e2a0c0ee62fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 07:43:43 np0005629333 podman[341378]: 2026-02-25 12:43:43.496627534 +0000 UTC m=+0.234423577 container attach beb27933528d028ec031f4264ef03ccb77a501bfffa1a4abbe37e2a0c0ee62fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lamport, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 07:43:43 np0005629333 nova_compute[244014]: 2026-02-25 12:43:43.620 244018 DEBUG nova.network.neutron [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:43:43 np0005629333 cool_lamport[341395]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:43:43 np0005629333 cool_lamport[341395]: --> All data devices are unavailable
Feb 25 07:43:43 np0005629333 systemd[1]: libpod-beb27933528d028ec031f4264ef03ccb77a501bfffa1a4abbe37e2a0c0ee62fe.scope: Deactivated successfully.
Feb 25 07:43:43 np0005629333 podman[341378]: 2026-02-25 12:43:43.968230281 +0000 UTC m=+0.706026304 container died beb27933528d028ec031f4264ef03ccb77a501bfffa1a4abbe37e2a0c0ee62fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lamport, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:43:44 np0005629333 systemd[1]: var-lib-containers-storage-overlay-797099886ce7650185d2ea8619b6982b181a101e59b955b6f94d20d4930a928b-merged.mount: Deactivated successfully.
Feb 25 07:43:44 np0005629333 podman[341378]: 2026-02-25 12:43:44.126182409 +0000 UTC m=+0.863978432 container remove beb27933528d028ec031f4264ef03ccb77a501bfffa1a4abbe37e2a0c0ee62fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_lamport, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 07:43:44 np0005629333 systemd[1]: libpod-conmon-beb27933528d028ec031f4264ef03ccb77a501bfffa1a4abbe37e2a0c0ee62fe.scope: Deactivated successfully.
Feb 25 07:43:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 563 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Feb 25 07:43:44 np0005629333 podman[341490]: 2026-02-25 12:43:44.543675711 +0000 UTC m=+0.042450579 container create e20b8a1f5b89668f9d4013d40f18e805d04d18404be94d833871150ceea3f62d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:43:44 np0005629333 nova_compute[244014]: 2026-02-25 12:43:44.555 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:43:44 np0005629333 systemd[1]: Started libpod-conmon-e20b8a1f5b89668f9d4013d40f18e805d04d18404be94d833871150ceea3f62d.scope.
Feb 25 07:43:44 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:43:44 np0005629333 podman[341490]: 2026-02-25 12:43:44.519670393 +0000 UTC m=+0.018445311 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:43:44 np0005629333 podman[341490]: 2026-02-25 12:43:44.629684938 +0000 UTC m=+0.128459826 container init e20b8a1f5b89668f9d4013d40f18e805d04d18404be94d833871150ceea3f62d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_curran, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 07:43:44 np0005629333 nova_compute[244014]: 2026-02-25 12:43:44.634 244018 DEBUG nova.compute.manager [req-c9d8c4a3-ae73-410b-bf97-2f3dc4970679 req-dabb61f2-f0ae-477a-93e3-f32438f5ecd0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-changed-18ca6c9a-c1c9-4a48-b124-25942ebef5df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:43:44 np0005629333 nova_compute[244014]: 2026-02-25 12:43:44.635 244018 DEBUG nova.compute.manager [req-c9d8c4a3-ae73-410b-bf97-2f3dc4970679 req-dabb61f2-f0ae-477a-93e3-f32438f5ecd0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Refreshing instance network info cache due to event network-changed-18ca6c9a-c1c9-4a48-b124-25942ebef5df. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:43:44 np0005629333 nova_compute[244014]: 2026-02-25 12:43:44.635 244018 DEBUG oslo_concurrency.lockutils [req-c9d8c4a3-ae73-410b-bf97-2f3dc4970679 req-dabb61f2-f0ae-477a-93e3-f32438f5ecd0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:43:44 np0005629333 podman[341490]: 2026-02-25 12:43:44.637397225 +0000 UTC m=+0.136172093 container start e20b8a1f5b89668f9d4013d40f18e805d04d18404be94d833871150ceea3f62d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_curran, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:43:44 np0005629333 competent_curran[341506]: 167 167
Feb 25 07:43:44 np0005629333 systemd[1]: libpod-e20b8a1f5b89668f9d4013d40f18e805d04d18404be94d833871150ceea3f62d.scope: Deactivated successfully.
Feb 25 07:43:44 np0005629333 conmon[341506]: conmon e20b8a1f5b89668f9d40 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e20b8a1f5b89668f9d4013d40f18e805d04d18404be94d833871150ceea3f62d.scope/container/memory.events
Feb 25 07:43:44 np0005629333 podman[341490]: 2026-02-25 12:43:44.644274399 +0000 UTC m=+0.143049267 container attach e20b8a1f5b89668f9d4013d40f18e805d04d18404be94d833871150ceea3f62d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 07:43:44 np0005629333 podman[341490]: 2026-02-25 12:43:44.645078902 +0000 UTC m=+0.143853770 container died e20b8a1f5b89668f9d4013d40f18e805d04d18404be94d833871150ceea3f62d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_curran, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Feb 25 07:43:44 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d2b8ccd0177ac1cb836290a29b3a9c626be74b5d3d077a4a81f025eabe460b5b-merged.mount: Deactivated successfully.
Feb 25 07:43:44 np0005629333 podman[341490]: 2026-02-25 12:43:44.764146602 +0000 UTC m=+0.262921480 container remove e20b8a1f5b89668f9d4013d40f18e805d04d18404be94d833871150ceea3f62d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_curran, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Feb 25 07:43:44 np0005629333 systemd[1]: libpod-conmon-e20b8a1f5b89668f9d4013d40f18e805d04d18404be94d833871150ceea3f62d.scope: Deactivated successfully.
Feb 25 07:43:44 np0005629333 podman[341530]: 2026-02-25 12:43:44.965468524 +0000 UTC m=+0.090798254 container create 04319a254d458ffd7c995dd98cafe436cc382f9fe596c5339fe8f64c9358a272 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Feb 25 07:43:44 np0005629333 podman[341530]: 2026-02-25 12:43:44.893002799 +0000 UTC m=+0.018332519 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:43:45 np0005629333 systemd[1]: Started libpod-conmon-04319a254d458ffd7c995dd98cafe436cc382f9fe596c5339fe8f64c9358a272.scope.
Feb 25 07:43:45 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:43:45 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de4e36d84329450239c4aeeae28b72de3ba5b331e9f961e07c5345d4e6f21bf3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:43:45 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de4e36d84329450239c4aeeae28b72de3ba5b331e9f961e07c5345d4e6f21bf3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:43:45 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de4e36d84329450239c4aeeae28b72de3ba5b331e9f961e07c5345d4e6f21bf3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:43:45 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de4e36d84329450239c4aeeae28b72de3ba5b331e9f961e07c5345d4e6f21bf3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:43:45 np0005629333 nova_compute[244014]: 2026-02-25 12:43:45.117 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:45 np0005629333 podman[341530]: 2026-02-25 12:43:45.191258865 +0000 UTC m=+0.316588595 container init 04319a254d458ffd7c995dd98cafe436cc382f9fe596c5339fe8f64c9358a272 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_nobel, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 07:43:45 np0005629333 podman[341530]: 2026-02-25 12:43:45.198033187 +0000 UTC m=+0.323362887 container start 04319a254d458ffd7c995dd98cafe436cc382f9fe596c5339fe8f64c9358a272 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 07:43:45 np0005629333 podman[341530]: 2026-02-25 12:43:45.202607496 +0000 UTC m=+0.327937266 container attach 04319a254d458ffd7c995dd98cafe436cc382f9fe596c5339fe8f64c9358a272 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_nobel, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]: {
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:    "0": [
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:        {
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "devices": [
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "/dev/loop3"
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            ],
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "lv_name": "ceph_lv0",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "lv_size": "21470642176",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "name": "ceph_lv0",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "tags": {
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.cluster_name": "ceph",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.crush_device_class": "",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.encrypted": "0",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.objectstore": "bluestore",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.osd_id": "0",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.type": "block",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.vdo": "0",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.with_tpm": "0"
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            },
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "type": "block",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "vg_name": "ceph_vg0"
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:        }
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:    ],
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:    "1": [
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:        {
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "devices": [
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "/dev/loop4"
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            ],
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "lv_name": "ceph_lv1",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "lv_size": "21470642176",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "name": "ceph_lv1",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "tags": {
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.cluster_name": "ceph",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.crush_device_class": "",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.encrypted": "0",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.objectstore": "bluestore",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.osd_id": "1",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.type": "block",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.vdo": "0",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.with_tpm": "0"
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            },
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "type": "block",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "vg_name": "ceph_vg1"
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:        }
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:    ],
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:    "2": [
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:        {
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "devices": [
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "/dev/loop5"
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            ],
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "lv_name": "ceph_lv2",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "lv_size": "21470642176",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "name": "ceph_lv2",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "tags": {
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.cluster_name": "ceph",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.crush_device_class": "",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.encrypted": "0",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.objectstore": "bluestore",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.osd_id": "2",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.type": "block",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.vdo": "0",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:                "ceph.with_tpm": "0"
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            },
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "type": "block",
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:            "vg_name": "ceph_vg2"
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:        }
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]:    ]
Feb 25 07:43:45 np0005629333 jovial_nobel[341548]: }
Feb 25 07:43:45 np0005629333 systemd[1]: libpod-04319a254d458ffd7c995dd98cafe436cc382f9fe596c5339fe8f64c9358a272.scope: Deactivated successfully.
Feb 25 07:43:45 np0005629333 podman[341530]: 2026-02-25 12:43:45.498273799 +0000 UTC m=+0.623603509 container died 04319a254d458ffd7c995dd98cafe436cc382f9fe596c5339fe8f64c9358a272 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_nobel, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 07:43:45 np0005629333 nova_compute[244014]: 2026-02-25 12:43:45.590 244018 DEBUG nova.network.neutron [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Updating instance_info_cache with network_info: [{"id": "d71dae92-b542-404e-b4cc-ecad408ed655", "address": "fa:16:3e:74:17:b4", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd71dae92-b5", "ovs_interfaceid": "d71dae92-b542-404e-b4cc-ecad408ed655", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "address": "fa:16:3e:5d:4f:81", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5d:4f81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ca6c9a-c1", "ovs_interfaceid": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:43:45 np0005629333 systemd[1]: var-lib-containers-storage-overlay-de4e36d84329450239c4aeeae28b72de3ba5b331e9f961e07c5345d4e6f21bf3-merged.mount: Deactivated successfully.
Feb 25 07:43:45 np0005629333 nova_compute[244014]: 2026-02-25 12:43:45.613 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:43:45 np0005629333 nova_compute[244014]: 2026-02-25 12:43:45.614 244018 DEBUG nova.compute.manager [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Instance network_info: |[{"id": "d71dae92-b542-404e-b4cc-ecad408ed655", "address": "fa:16:3e:74:17:b4", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd71dae92-b5", "ovs_interfaceid": "d71dae92-b542-404e-b4cc-ecad408ed655", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "address": "fa:16:3e:5d:4f:81", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5d:4f81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ca6c9a-c1", "ovs_interfaceid": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:43:45 np0005629333 nova_compute[244014]: 2026-02-25 12:43:45.614 244018 DEBUG oslo_concurrency.lockutils [req-c9d8c4a3-ae73-410b-bf97-2f3dc4970679 req-dabb61f2-f0ae-477a-93e3-f32438f5ecd0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:43:45 np0005629333 nova_compute[244014]: 2026-02-25 12:43:45.614 244018 DEBUG nova.network.neutron [req-c9d8c4a3-ae73-410b-bf97-2f3dc4970679 req-dabb61f2-f0ae-477a-93e3-f32438f5ecd0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Refreshing network info cache for port 18ca6c9a-c1c9-4a48-b124-25942ebef5df _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:43:45 np0005629333 nova_compute[244014]: 2026-02-25 12:43:45.619 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Start _get_guest_xml network_info=[{"id": "d71dae92-b542-404e-b4cc-ecad408ed655", "address": "fa:16:3e:74:17:b4", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd71dae92-b5", "ovs_interfaceid": "d71dae92-b542-404e-b4cc-ecad408ed655", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "address": "fa:16:3e:5d:4f:81", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5d:4f81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ca6c9a-c1", "ovs_interfaceid": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:43:45 np0005629333 nova_compute[244014]: 2026-02-25 12:43:45.625 244018 WARNING nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:43:45 np0005629333 nova_compute[244014]: 2026-02-25 12:43:45.631 244018 DEBUG nova.virt.libvirt.host [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:43:45 np0005629333 nova_compute[244014]: 2026-02-25 12:43:45.632 244018 DEBUG nova.virt.libvirt.host [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:43:45 np0005629333 nova_compute[244014]: 2026-02-25 12:43:45.640 244018 DEBUG nova.virt.libvirt.host [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:43:45 np0005629333 nova_compute[244014]: 2026-02-25 12:43:45.640 244018 DEBUG nova.virt.libvirt.host [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:43:45 np0005629333 nova_compute[244014]: 2026-02-25 12:43:45.641 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:43:45 np0005629333 nova_compute[244014]: 2026-02-25 12:43:45.641 244018 DEBUG nova.virt.hardware [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:43:45 np0005629333 nova_compute[244014]: 2026-02-25 12:43:45.641 244018 DEBUG nova.virt.hardware [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:43:45 np0005629333 nova_compute[244014]: 2026-02-25 12:43:45.642 244018 DEBUG nova.virt.hardware [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:43:45 np0005629333 nova_compute[244014]: 2026-02-25 12:43:45.642 244018 DEBUG nova.virt.hardware [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:43:45 np0005629333 nova_compute[244014]: 2026-02-25 12:43:45.642 244018 DEBUG nova.virt.hardware [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:43:45 np0005629333 nova_compute[244014]: 2026-02-25 12:43:45.642 244018 DEBUG nova.virt.hardware [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:43:45 np0005629333 nova_compute[244014]: 2026-02-25 12:43:45.642 244018 DEBUG nova.virt.hardware [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:43:45 np0005629333 nova_compute[244014]: 2026-02-25 12:43:45.643 244018 DEBUG nova.virt.hardware [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:43:45 np0005629333 nova_compute[244014]: 2026-02-25 12:43:45.643 244018 DEBUG nova.virt.hardware [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:43:45 np0005629333 nova_compute[244014]: 2026-02-25 12:43:45.643 244018 DEBUG nova.virt.hardware [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:43:45 np0005629333 nova_compute[244014]: 2026-02-25 12:43:45.643 244018 DEBUG nova.virt.hardware [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:43:45 np0005629333 nova_compute[244014]: 2026-02-25 12:43:45.647 244018 DEBUG oslo_concurrency.processutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:43:45 np0005629333 podman[341530]: 2026-02-25 12:43:45.648297903 +0000 UTC m=+0.773627603 container remove 04319a254d458ffd7c995dd98cafe436cc382f9fe596c5339fe8f64c9358a272 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:43:45 np0005629333 systemd[1]: libpod-conmon-04319a254d458ffd7c995dd98cafe436cc382f9fe596c5339fe8f64c9358a272.scope: Deactivated successfully.
Feb 25 07:43:46 np0005629333 podman[341655]: 2026-02-25 12:43:46.15316959 +0000 UTC m=+0.096519704 container create 3446a844164b95adfafe4212b9c732aa2a85f6ed30dc0792040b6b18fffef570 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bartik, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 07:43:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:43:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1959454357' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:43:46 np0005629333 podman[341655]: 2026-02-25 12:43:46.078639727 +0000 UTC m=+0.021989931 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.179 244018 DEBUG oslo_concurrency.processutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:43:46 np0005629333 systemd[1]: Started libpod-conmon-3446a844164b95adfafe4212b9c732aa2a85f6ed30dc0792040b6b18fffef570.scope.
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.203 244018 DEBUG nova.storage.rbd_utils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.208 244018 DEBUG oslo_concurrency.processutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:43:46 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:43:46 np0005629333 podman[341655]: 2026-02-25 12:43:46.259734998 +0000 UTC m=+0.203085112 container init 3446a844164b95adfafe4212b9c732aa2a85f6ed30dc0792040b6b18fffef570 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bartik, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:43:46 np0005629333 podman[341655]: 2026-02-25 12:43:46.2665433 +0000 UTC m=+0.209893394 container start 3446a844164b95adfafe4212b9c732aa2a85f6ed30dc0792040b6b18fffef570 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bartik, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:43:46 np0005629333 reverent_bartik[341688]: 167 167
Feb 25 07:43:46 np0005629333 systemd[1]: libpod-3446a844164b95adfafe4212b9c732aa2a85f6ed30dc0792040b6b18fffef570.scope: Deactivated successfully.
Feb 25 07:43:46 np0005629333 podman[341655]: 2026-02-25 12:43:46.317868258 +0000 UTC m=+0.261218352 container attach 3446a844164b95adfafe4212b9c732aa2a85f6ed30dc0792040b6b18fffef570 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 07:43:46 np0005629333 podman[341655]: 2026-02-25 12:43:46.318236799 +0000 UTC m=+0.261586913 container died 3446a844164b95adfafe4212b9c732aa2a85f6ed30dc0792040b6b18fffef570 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bartik, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:43:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Feb 25 07:43:46 np0005629333 systemd[1]: var-lib-containers-storage-overlay-768e573fa1406d971f67a538e712a25d28eed7a9cd22650de07fe490b49670f9-merged.mount: Deactivated successfully.
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.463 244018 INFO nova.compute.manager [None req-7ac9f207-9ab6-4eca-8ef7-61442abb41b3 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Get console output#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.477 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Feb 25 07:43:46 np0005629333 podman[341655]: 2026-02-25 12:43:46.617079302 +0000 UTC m=+0.560429436 container remove 3446a844164b95adfafe4212b9c732aa2a85f6ed30dc0792040b6b18fffef570 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bartik, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 07:43:46 np0005629333 systemd[1]: libpod-conmon-3446a844164b95adfafe4212b9c732aa2a85f6ed30dc0792040b6b18fffef570.scope: Deactivated successfully.
Feb 25 07:43:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:43:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1349605789' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.777 244018 DEBUG oslo_concurrency.processutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.780 244018 DEBUG nova.virt.libvirt.vif [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:43:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1190047841',display_name='tempest-TestGettingAddress-server-1190047841',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1190047841',id=111,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjwXf4BRPjcASt12gAye0nTLMsY81chM8AwzuzQAmadzcHhb8MVkuSUIiO1cL5+mTUrVlDTjaV+Wk+tczkCutLjKZYT8KokDtS1+FrxD3TeeYwMZXPHCENgnD6/bPtHPg==',key_name='tempest-TestGettingAddress-617797010',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-gjjevuzx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:43:38Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d71dae92-b542-404e-b4cc-ecad408ed655", "address": "fa:16:3e:74:17:b4", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd71dae92-b5", "ovs_interfaceid": "d71dae92-b542-404e-b4cc-ecad408ed655", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.782 244018 DEBUG nova.network.os_vif_util [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "d71dae92-b542-404e-b4cc-ecad408ed655", "address": "fa:16:3e:74:17:b4", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd71dae92-b5", "ovs_interfaceid": "d71dae92-b542-404e-b4cc-ecad408ed655", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.783 244018 DEBUG nova.network.os_vif_util [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:17:b4,bridge_name='br-int',has_traffic_filtering=True,id=d71dae92-b542-404e-b4cc-ecad408ed655,network=Network(e0ff7905-af45-428a-b6a0-d6e1209fd009),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd71dae92-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.784 244018 DEBUG nova.virt.libvirt.vif [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:43:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1190047841',display_name='tempest-TestGettingAddress-server-1190047841',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1190047841',id=111,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjwXf4BRPjcASt12gAye0nTLMsY81chM8AwzuzQAmadzcHhb8MVkuSUIiO1cL5+mTUrVlDTjaV+Wk+tczkCutLjKZYT8KokDtS1+FrxD3TeeYwMZXPHCENgnD6/bPtHPg==',key_name='tempest-TestGettingAddress-617797010',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-gjjevuzx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:43:38Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "address": "fa:16:3e:5d:4f:81", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5d:4f81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ca6c9a-c1", "ovs_interfaceid": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.784 244018 DEBUG nova.network.os_vif_util [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "address": "fa:16:3e:5d:4f:81", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5d:4f81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ca6c9a-c1", "ovs_interfaceid": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.785 244018 DEBUG nova.network.os_vif_util [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:4f:81,bridge_name='br-int',has_traffic_filtering=True,id=18ca6c9a-c1c9-4a48-b124-25942ebef5df,network=Network(e4445989-91e8-4869-98cb-32b4b81bb3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18ca6c9a-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.786 244018 DEBUG nova.objects.instance [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
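
The Converting VIF / Converted object pairs above are nova's os_vif_util translating the JSON port dict received from Neutron into a typed os-vif object (VIFOpenVSwitch) that a plugin can act on. A minimal sketch of the same conversion-and-plug flow driven by hand, assuming os-vif is installed and a local Open vSwitch database is reachable; the IDs and names are the ones logged above, nothing is pulled from a live cloud:

    # Sketch only: build the os-vif object logged above and plug it.
    # Requires the os-vif 'ovs' plugin and a reachable local ovsdb.
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads plugins ('ovs', ...) and registers object classes

    net = network.Network(id='e4445989-91e8-4869-98cb-32b4b81bb3da',
                          bridge='br-int')
    my_vif = vif.VIFOpenVSwitch(
        id='18ca6c9a-c1c9-4a48-b124-25942ebef5df',
        address='fa:16:3e:5d:4f:81',
        plugin='ovs',
        bridge_name='br-int',
        vif_name='tap18ca6c9a-c1',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        network=net)
    inst = instance_info.InstanceInfo(
        uuid='c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367',
        name='instance-0000006f')

    os_vif.plug(my_vif, inst)  # dispatches to the 'ovs' plugin, as in the log
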
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.803 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:43:46 np0005629333 nova_compute[244014]:  <uuid>c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367</uuid>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:  <name>instance-0000006f</name>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestGettingAddress-server-1190047841</nova:name>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:43:45</nova:creationTime>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:43:46 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:        <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:        <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:        <nova:port uuid="d71dae92-b542-404e-b4cc-ecad408ed655">
Feb 25 07:43:46 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:        <nova:port uuid="18ca6c9a-c1c9-4a48-b124-25942ebef5df">
Feb 25 07:43:46 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe5d:4f81" ipVersion="6"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <entry name="serial">c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367</entry>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <entry name="uuid">c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367</entry>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk">
Feb 25 07:43:46 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:43:46 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk.config">
Feb 25 07:43:46 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:43:46 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:74:17:b4"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <target dev="tapd71dae92-b5"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:5d:4f:81"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <target dev="tap18ca6c9a-c1"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367/console.log" append="off"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:43:46 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:43:46 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:43:46 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:43:46 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
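
End _get_guest_xml closes the record opened at 12:43:46.803: the complete libvirt domain nova will boot, with a q35 machine type, an RBD-backed virtio root disk plus a sata config-drive cdrom, two ethernet VIFs matching the taps plugged below, and a virtio rng fed from /dev/urandom. Handing such a document to libvirt can be reproduced with the libvirt-python bindings; a sketch, assuming libvirtd is reachable at qemu:///system and the XML is saved to domain.xml (a stand-in path, not from the log):

    # Sketch: define and start a guest from a domain XML document like the
    # one logged above.
    import libvirt

    with open('domain.xml') as f:
        xml = f.read()

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)   # persist the definition
        dom.create()                # boot it, like `virsh start`
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()
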
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.803 244018 DEBUG nova.compute.manager [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Preparing to wait for external event network-vif-plugged-d71dae92-b542-404e-b4cc-ecad408ed655 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.804 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.804 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.804 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.804 244018 DEBUG nova.compute.manager [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Preparing to wait for external event network-vif-plugged-18ca6c9a-c1c9-4a48-b124-25942ebef5df prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.804 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.805 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.805 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
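
The two prepare_for_instance_event entries register waiters for network-vif-plugged on each port before the taps are created; nova will pause the spawn until Neutron posts those events back through the compute API's os-server-external-events resource. A sketch of that callback, with the endpoint and token as placeholders (the UUIDs are the logged ones):

    # Sketch of the external-event POST Neutron issues once a VIF is up.
    # Endpoint and token are placeholders; the body fields mirror the log.
    import json
    import urllib.request

    body = {"events": [{
        "name": "network-vif-plugged",
        "server_uuid": "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367",
        "tag": "d71dae92-b542-404e-b4cc-ecad408ed655",  # Neutron port UUID
        "status": "completed",
    }]}
    req = urllib.request.Request(
        "http://nova-api.example.com:8774/v2.1/os-server-external-events",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json",
                 "X-Auth-Token": "<service-token>"},
        method="POST")
    urllib.request.urlopen(req)
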
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.806 244018 DEBUG nova.virt.libvirt.vif [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:43:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1190047841',display_name='tempest-TestGettingAddress-server-1190047841',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1190047841',id=111,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjwXf4BRPjcASt12gAye0nTLMsY81chM8AwzuzQAmadzcHhb8MVkuSUIiO1cL5+mTUrVlDTjaV+Wk+tczkCutLjKZYT8KokDtS1+FrxD3TeeYwMZXPHCENgnD6/bPtHPg==',key_name='tempest-TestGettingAddress-617797010',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-gjjevuzx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:43:38Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d71dae92-b542-404e-b4cc-ecad408ed655", "address": "fa:16:3e:74:17:b4", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd71dae92-b5", "ovs_interfaceid": "d71dae92-b542-404e-b4cc-ecad408ed655", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.806 244018 DEBUG nova.network.os_vif_util [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "d71dae92-b542-404e-b4cc-ecad408ed655", "address": "fa:16:3e:74:17:b4", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd71dae92-b5", "ovs_interfaceid": "d71dae92-b542-404e-b4cc-ecad408ed655", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.806 244018 DEBUG nova.network.os_vif_util [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:17:b4,bridge_name='br-int',has_traffic_filtering=True,id=d71dae92-b542-404e-b4cc-ecad408ed655,network=Network(e0ff7905-af45-428a-b6a0-d6e1209fd009),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd71dae92-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.807 244018 DEBUG os_vif [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:17:b4,bridge_name='br-int',has_traffic_filtering=True,id=d71dae92-b542-404e-b4cc-ecad408ed655,network=Network(e0ff7905-af45-428a-b6a0-d6e1209fd009),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd71dae92-b5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.807 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.808 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.808 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.812 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.812 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd71dae92-b5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.813 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd71dae92-b5, col_values=(('external_ids', {'iface-id': 'd71dae92-b542-404e-b4cc-ecad408ed655', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:74:17:b4', 'vm-uuid': 'c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.814 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:46 np0005629333 NetworkManager[49836]: <info>  [1772023426.8158] manager: (tapd71dae92-b5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/448)
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.818 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.821 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.822 244018 INFO os_vif [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:17:b4,bridge_name='br-int',has_traffic_filtering=True,id=d71dae92-b542-404e-b4cc-ecad408ed655,network=Network(e0ff7905-af45-428a-b6a0-d6e1209fd009),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd71dae92-b5')#033[00m
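
Each plug is one OVSDB transaction: an idempotent AddBridgeCommand (may_exist=True, hence "Transaction caused no change"), then an AddPortCommand plus a DbSetCommand stamping the Interface's external_ids with the Neutron iface-id, the MAC, and the instance UUID; those external_ids are what ovn-controller matches on below. The same transaction can be issued directly with ovsdbapp; the socket path is an assumption, the names and IDs are the logged ones:

    # Sketch: replay the AddPort/DbSet transaction above with ovsdbapp.
    # The OVSDB endpoint is an assumed local socket path.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True))   # no-op if it exists
        txn.add(api.add_port('br-int', 'tapd71dae92-b5', may_exist=True))
        txn.add(api.db_set('Interface', 'tapd71dae92-b5',
                           ('external_ids', {
                               'iface-id': 'd71dae92-b542-404e-b4cc-ecad408ed655',
                               'iface-status': 'active',
                               'attached-mac': 'fa:16:3e:74:17:b4',
                               'vm-uuid': 'c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367'})))
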
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.823 244018 DEBUG nova.virt.libvirt.vif [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:43:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1190047841',display_name='tempest-TestGettingAddress-server-1190047841',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1190047841',id=111,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjwXf4BRPjcASt12gAye0nTLMsY81chM8AwzuzQAmadzcHhb8MVkuSUIiO1cL5+mTUrVlDTjaV+Wk+tczkCutLjKZYT8KokDtS1+FrxD3TeeYwMZXPHCENgnD6/bPtHPg==',key_name='tempest-TestGettingAddress-617797010',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-gjjevuzx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:43:38Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "address": "fa:16:3e:5d:4f:81", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5d:4f81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ca6c9a-c1", "ovs_interfaceid": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.823 244018 DEBUG nova.network.os_vif_util [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "address": "fa:16:3e:5d:4f:81", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5d:4f81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ca6c9a-c1", "ovs_interfaceid": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.824 244018 DEBUG nova.network.os_vif_util [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:4f:81,bridge_name='br-int',has_traffic_filtering=True,id=18ca6c9a-c1c9-4a48-b124-25942ebef5df,network=Network(e4445989-91e8-4869-98cb-32b4b81bb3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18ca6c9a-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.824 244018 DEBUG os_vif [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:4f:81,bridge_name='br-int',has_traffic_filtering=True,id=18ca6c9a-c1c9-4a48-b124-25942ebef5df,network=Network(e4445989-91e8-4869-98cb-32b4b81bb3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18ca6c9a-c1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.824 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.825 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.825 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.827 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.828 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18ca6c9a-c1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.828 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap18ca6c9a-c1, col_values=(('external_ids', {'iface-id': '18ca6c9a-c1c9-4a48-b124-25942ebef5df', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:4f:81', 'vm-uuid': 'c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.830 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:46 np0005629333 NetworkManager[49836]: <info>  [1772023426.8315] manager: (tap18ca6c9a-c1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/449)
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.833 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.836 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.838 244018 INFO os_vif [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:4f:81,bridge_name='br-int',has_traffic_filtering=True,id=18ca6c9a-c1c9-4a48-b124-25942ebef5df,network=Network(e4445989-91e8-4869-98cb-32b4b81bb3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18ca6c9a-c1')#033[00m
Feb 25 07:43:46 np0005629333 podman[341738]: 2026-02-25 12:43:46.86683877 +0000 UTC m=+0.114122131 container create 4c457a02447dde8bb0e8001e64320f07f6018dd4e4b856e6f8383a66cb89bb62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 07:43:46 np0005629333 podman[341738]: 2026-02-25 12:43:46.778706043 +0000 UTC m=+0.025989434 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.902 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.903 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.903 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:74:17:b4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.903 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:5d:4f:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.903 244018 INFO nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Using config drive#033[00m
Feb 25 07:43:46 np0005629333 nova_compute[244014]: 2026-02-25 12:43:46.925 244018 DEBUG nova.storage.rbd_utils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:43:46 np0005629333 systemd[1]: Started libpod-conmon-4c457a02447dde8bb0e8001e64320f07f6018dd4e4b856e6f8383a66cb89bb62.scope.
Feb 25 07:43:46 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:43:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f51e1a90d4ef6eb50e5a8790431db754bc004fd5ccfb459f3a978b0906e98b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:43:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f51e1a90d4ef6eb50e5a8790431db754bc004fd5ccfb459f3a978b0906e98b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:43:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f51e1a90d4ef6eb50e5a8790431db754bc004fd5ccfb459f3a978b0906e98b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:43:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f51e1a90d4ef6eb50e5a8790431db754bc004fd5ccfb459f3a978b0906e98b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:43:47 np0005629333 podman[341738]: 2026-02-25 12:43:47.03516251 +0000 UTC m=+0.282445921 container init 4c457a02447dde8bb0e8001e64320f07f6018dd4e4b856e6f8383a66cb89bb62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 07:43:47 np0005629333 podman[341738]: 2026-02-25 12:43:47.041477159 +0000 UTC m=+0.288760530 container start 4c457a02447dde8bb0e8001e64320f07f6018dd4e4b856e6f8383a66cb89bb62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ptolemy, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:43:47 np0005629333 podman[341738]: 2026-02-25 12:43:47.055154575 +0000 UTC m=+0.302437946 container attach 4c457a02447dde8bb0e8001e64320f07f6018dd4e4b856e6f8383a66cb89bb62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:43:47 np0005629333 nova_compute[244014]: 2026-02-25 12:43:47.418 244018 INFO nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Creating config drive at /var/lib/nova/instances/c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367/disk.config#033[00m
Feb 25 07:43:47 np0005629333 nova_compute[244014]: 2026-02-25 12:43:47.422 244018 DEBUG oslo_concurrency.processutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpgeu50kf5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:43:47 np0005629333 nova_compute[244014]: 2026-02-25 12:43:47.558 244018 DEBUG oslo_concurrency.processutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpgeu50kf5" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:43:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:43:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1733278138' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:43:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:43:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1733278138' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
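
The two audit entries show a session authenticated as client.openstack (the Ceph user this deployment shares across nova and cinder) asking the monitor for cluster df statistics and the volumes pool quota. librados exposes the same mon_command interface; a sketch, assuming a readable /etc/ceph/ceph.conf and a keyring for client.openstack on the calling host:

    # Sketch: issue the same "df" mon command as client.openstack via librados.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    try:
        ret, outbuf, errs = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b'')
        stats = json.loads(outbuf)
        print(stats['stats']['total_avail_bytes'])
    finally:
        cluster.shutdown()
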
Feb 25 07:43:47 np0005629333 nova_compute[244014]: 2026-02-25 12:43:47.586 244018 DEBUG nova.storage.rbd_utils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:43:47 np0005629333 nova_compute[244014]: 2026-02-25 12:43:47.589 244018 DEBUG oslo_concurrency.processutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367/disk.config c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:43:47 np0005629333 lvm[341876]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:43:47 np0005629333 lvm[341876]: VG ceph_vg0 finished
Feb 25 07:43:47 np0005629333 lvm[341880]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:43:47 np0005629333 lvm[341880]: VG ceph_vg1 finished
Feb 25 07:43:47 np0005629333 lvm[341896]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:43:47 np0005629333 lvm[341896]: VG ceph_vg2 finished
Feb 25 07:43:47 np0005629333 lvm[341897]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:43:47 np0005629333 lvm[341897]: VG ceph_vg0 finished
Feb 25 07:43:47 np0005629333 confident_ptolemy[341778]: {}
Feb 25 07:43:47 np0005629333 systemd[1]: libpod-4c457a02447dde8bb0e8001e64320f07f6018dd4e4b856e6f8383a66cb89bb62.scope: Deactivated successfully.
Feb 25 07:43:47 np0005629333 systemd[1]: libpod-4c457a02447dde8bb0e8001e64320f07f6018dd4e4b856e6f8383a66cb89bb62.scope: Consumed 1.064s CPU time.
Feb 25 07:43:47 np0005629333 podman[341738]: 2026-02-25 12:43:47.851343912 +0000 UTC m=+1.098627313 container died 4c457a02447dde8bb0e8001e64320f07f6018dd4e4b856e6f8383a66cb89bb62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ptolemy, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:43:47 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5f51e1a90d4ef6eb50e5a8790431db754bc004fd5ccfb459f3a978b0906e98b7-merged.mount: Deactivated successfully.
Feb 25 07:43:47 np0005629333 nova_compute[244014]: 2026-02-25 12:43:47.972 244018 DEBUG oslo_concurrency.processutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367/disk.config c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.383s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:43:47 np0005629333 nova_compute[244014]: 2026-02-25 12:43:47.973 244018 INFO nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Deleting local config drive /var/lib/nova/instances/c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367/disk.config because it was imported into RBD.#033[00m
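
With instance disks stored in RBD, the config drive follows the same path: nova builds the ISO9660 image locally with mkisofs, imports it into the vms pool as <uuid>_disk.config (exactly the rbd source the domain XML's sata cdrom points at), then deletes the local copy. Both external commands run through oslo.concurrency; a sketch replaying the two steps with the paths and pool as logged (the /tmp directory is whatever tempdir held the staged metadata at the time):

    # Sketch: rerun the two external commands logged above via oslo.concurrency.
    from oslo_concurrency import processutils

    iso = ('/var/lib/nova/instances/'
           'c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367/disk.config')

    # 1. Build the config-drive ISO from the staged metadata tree.
    processutils.execute('/usr/bin/mkisofs', '-o', iso,
                         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
                         '-publisher', 'OpenStack Compute', '-quiet',
                         '-J', '-r', '-V', 'config-2', '/tmp/tmpgeu50kf5')

    # 2. Import it into RBD so the guest can attach it as the cdrom.
    processutils.execute('rbd', 'import', '--pool', 'vms', iso,
                         'c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_disk.config',
                         '--image-format=2', '--id', 'openstack',
                         '--conf', '/etc/ceph/ceph.conf')
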
Feb 25 07:43:48 np0005629333 NetworkManager[49836]: <info>  [1772023428.0309] manager: (tapd71dae92-b5): new Tun device (/org/freedesktop/NetworkManager/Devices/450)
Feb 25 07:43:48 np0005629333 systemd-udevd[341875]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:43:48 np0005629333 kernel: tapd71dae92-b5: entered promiscuous mode
Feb 25 07:43:48 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:48Z|01077|binding|INFO|Claiming lport d71dae92-b542-404e-b4cc-ecad408ed655 for this chassis.
Feb 25 07:43:48 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:48Z|01078|binding|INFO|d71dae92-b542-404e-b4cc-ecad408ed655: Claiming fa:16:3e:74:17:b4 10.100.0.4
Feb 25 07:43:48 np0005629333 nova_compute[244014]: 2026-02-25 12:43:48.038 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:48 np0005629333 kernel: tap18ca6c9a-c1: entered promiscuous mode
Feb 25 07:43:48 np0005629333 podman[341738]: 2026-02-25 12:43:48.045405818 +0000 UTC m=+1.292689179 container remove 4c457a02447dde8bb0e8001e64320f07f6018dd4e4b856e6f8383a66cb89bb62 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:43:48 np0005629333 NetworkManager[49836]: <info>  [1772023428.0465] device (tapd71dae92-b5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:43:48 np0005629333 NetworkManager[49836]: <info>  [1772023428.0498] manager: (tap18ca6c9a-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/451)
Feb 25 07:43:48 np0005629333 NetworkManager[49836]: <info>  [1772023428.0509] device (tapd71dae92-b5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:43:48 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:48Z|01079|binding|INFO|Claiming lport 18ca6c9a-c1c9-4a48-b124-25942ebef5df for this chassis.
Feb 25 07:43:48 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:48Z|01080|binding|INFO|18ca6c9a-c1c9-4a48-b124-25942ebef5df: Claiming fa:16:3e:5d:4f:81 2001:db8::f816:3eff:fe5d:4f81
Feb 25 07:43:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.051 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:17:b4 10.100.0.4'], port_security=['fa:16:3e:74:17:b4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0cf44a3d-3002-496e-a2d4-b5607642ce07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4edb1eef-979c-4be1-8c88-b734bc363731, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=d71dae92-b542-404e-b4cc-ecad408ed655) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:43:48 np0005629333 systemd-udevd[341895]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:43:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.053 157129 INFO neutron.agent.ovn.metadata.agent [-] Port d71dae92-b542-404e-b4cc-ecad408ed655 in datapath e0ff7905-af45-428a-b6a0-d6e1209fd009 bound to our chassis#033[00m
Feb 25 07:43:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.055 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e0ff7905-af45-428a-b6a0-d6e1209fd009#033[00m
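Annotation: "provisioning metadata" here means building an ovnmeta-<network> namespace with a veth pair whose outer end gets plugged into OVS; the privsep replies a few lines below show the result. A rough sketch of that plumbing with pyroute2, with interface names taken from the surrounding lines; this is an illustration, not Neutron's code.

from pyroute2 import IPRoute, netns

NS = 'ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009'  # namespace named after the network

def provision_namespace(ns_name, outer='tape0ff7905-a0', inner='tape0ff7905-a1'):
    netns.create(ns_name)                      # ip netns add <ns>
    with IPRoute() as ipr:
        # veth pair: the outer end stays in the root namespace for br-int,
        # the inner end is moved into the metadata namespace.
        ipr.link('add', ifname=outer, kind='veth', peer=inner)
        idx = ipr.link_lookup(ifname=inner)[0]
        ipr.link('set', index=idx, net_ns_fd=ns_name)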
Feb 25 07:43:48 np0005629333 nova_compute[244014]: 2026-02-25 12:43:48.057 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:48 np0005629333 systemd[1]: libpod-conmon-4c457a02447dde8bb0e8001e64320f07f6018dd4e4b856e6f8383a66cb89bb62.scope: Deactivated successfully.
Feb 25 07:43:48 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:48Z|01081|binding|INFO|Setting lport d71dae92-b542-404e-b4cc-ecad408ed655 ovn-installed in OVS
Feb 25 07:43:48 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:48Z|01082|binding|INFO|Setting lport d71dae92-b542-404e-b4cc-ecad408ed655 up in Southbound
Feb 25 07:43:48 np0005629333 nova_compute[244014]: 2026-02-25 12:43:48.060 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:48 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:48Z|01083|binding|INFO|Setting lport 18ca6c9a-c1c9-4a48-b124-25942ebef5df ovn-installed in OVS
Feb 25 07:43:48 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:48Z|01084|binding|INFO|Setting lport 18ca6c9a-c1c9-4a48-b124-25942ebef5df up in Southbound
Feb 25 07:43:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.063 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:4f:81 2001:db8::f816:3eff:fe5d:4f81'], port_security=['fa:16:3e:5d:4f:81 2001:db8::f816:3eff:fe5d:4f81'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe5d:4f81/64', 'neutron:device_id': 'c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4445989-91e8-4869-98cb-32b4b81bb3da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0cf44a3d-3002-496e-a2d4-b5607642ce07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1f88174-704f-4cf1-b79c-73e2410029c4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=18ca6c9a-c1c9-4a48-b124-25942ebef5df) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:43:48 np0005629333 nova_compute[244014]: 2026-02-25 12:43:48.065 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:48 np0005629333 NetworkManager[49836]: <info>  [1772023428.0665] device (tap18ca6c9a-c1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:43:48 np0005629333 NetworkManager[49836]: <info>  [1772023428.0670] device (tap18ca6c9a-c1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:43:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.072 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a2f5034a-ca77-4869-b7c1-90f05fbf7c49]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:48 np0005629333 systemd-machined[210048]: New machine qemu-139-instance-0000006f.
Feb 25 07:43:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.096 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b8679249-e7e0-40a4-9eae-d71dcb5261c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.099 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a2bdd450-8c74-4619-96d5-e4ca76d278a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:48 np0005629333 systemd[1]: Started Virtual Machine qemu-139-instance-0000006f.
Feb 25 07:43:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:43:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.120 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[717316fc-125c-417f-8920-56c34cf78d36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.137 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[986581ad-f3ee-4689-bbde-16d71da412db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape0ff7905-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:d2:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 320], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535889, 'reachable_time': 44896, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341940, 'error': None, 'target': 'ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.155 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b0aa6074-5019-48f0-a09a-065f12bab8ef]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape0ff7905-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535900, 'tstamp': 535900}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341943, 'error': None, 'target': 'ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape0ff7905-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535902, 'tstamp': 535902}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341943, 'error': None, 'target': 'ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
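Annotation: the two RTM_NEWADDR replies above confirm the namespace-side interface carries both a subnet address (10.100.0.2/28) and the well-known metadata address 169.254.169.254/32. The equivalent pyroute2 calls, as a sketch under the same illustrative assumptions as the earlier one:

from pyroute2 import NetNS

with NetNS('ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009') as ns:
    idx = ns.link_lookup(ifname='tape0ff7905-a1')[0]
    ns.link('set', index=idx, state='up')
    # Subnet address plus the fixed metadata service address.
    ns.addr('add', index=idx, address='10.100.0.2', prefixlen=28)
    ns.addr('add', index=idx, address='169.254.169.254', prefixlen=32)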
Feb 25 07:43:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.159 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0ff7905-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:43:48 np0005629333 nova_compute[244014]: 2026-02-25 12:43:48.161 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.163 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape0ff7905-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.163 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:43:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:43:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.164 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape0ff7905-a0, col_values=(('external_ids', {'iface-id': 'a592354c-38c0-41be-9126-87d6dec6c687'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.165 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
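Annotation: the DelPortCommand/AddPortCommand/DbSetCommand triplet above is how ovsdbapp logs its OVS transactions, and "Transaction caused no change" means the rows were already in the desired state, so the commit was a no-op. Roughly equivalent client code, with the connection endpoint being an assumption:

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

conn = connection.Connection(
    idl=connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                        'Open_vSwitch'),
    timeout=10)
ovs = impl_idl.OvsdbIdl(conn)

# One transaction carrying the three commands seen in the log.
with ovs.transaction(check_error=True) as txn:
    txn.add(ovs.del_port('tape0ff7905-a0', bridge='br-ex', if_exists=True))
    txn.add(ovs.add_port('br-int', 'tape0ff7905-a0', may_exist=True))
    txn.add(ovs.db_set('Interface', 'tape0ff7905-a0',
                       ('external_ids',
                        {'iface-id': 'a592354c-38c0-41be-9126-87d6dec6c687'})))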
Feb 25 07:43:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.166 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 18ca6c9a-c1c9-4a48-b124-25942ebef5df in datapath e4445989-91e8-4869-98cb-32b4b81bb3da unbound from our chassis#033[00m
Feb 25 07:43:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.168 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e4445989-91e8-4869-98cb-32b4b81bb3da#033[00m
Feb 25 07:43:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.181 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0e45d3ef-fa32-4267-82b1-7cfda5baf619]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:43:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.201 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7ce2d770-9ffc-42e0-890c-23af8799fd59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.205 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[595b22ea-1893-4980-a8e0-27f3110dc469]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.228 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[009cd81d-e04c-456e-bbc5-2073700d868e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.240 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b07b9ba8-2342-48d1-8789-22fc449d6bed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4445989-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:7d:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 17, 'tx_packets': 4, 'rx_bytes': 1502, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 17, 'tx_packets': 4, 'rx_bytes': 1502, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 321], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535975, 'reachable_time': 27894, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 17, 'inoctets': 1264, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 17, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 17, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341977, 'error': None, 'target': 'ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.252 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[32114282-3380-4bc8-99ae-36078314808e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape4445989-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535984, 'tstamp': 535984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341979, 'error': None, 'target': 'ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.255 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4445989-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:48 np0005629333 nova_compute[244014]: 2026-02-25 12:43:48.257 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:48 np0005629333 nova_compute[244014]: 2026-02-25 12:43:48.258 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.258 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4445989-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.258 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:43:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.259 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape4445989-90, col_values=(('external_ids', {'iface-id': '4a3f602e-68fb-46ca-b4ed-cec3ad6d3ec5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.259 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:43:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:43:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 384 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Feb 25 07:43:48 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:43:48 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:43:48 np0005629333 nova_compute[244014]: 2026-02-25 12:43:48.544 244018 DEBUG nova.network.neutron [req-c9d8c4a3-ae73-410b-bf97-2f3dc4970679 req-dabb61f2-f0ae-477a-93e3-f32438f5ecd0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Updated VIF entry in instance network info cache for port 18ca6c9a-c1c9-4a48-b124-25942ebef5df. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:43:48 np0005629333 nova_compute[244014]: 2026-02-25 12:43:48.544 244018 DEBUG nova.network.neutron [req-c9d8c4a3-ae73-410b-bf97-2f3dc4970679 req-dabb61f2-f0ae-477a-93e3-f32438f5ecd0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Updating instance_info_cache with network_info: [{"id": "d71dae92-b542-404e-b4cc-ecad408ed655", "address": "fa:16:3e:74:17:b4", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd71dae92-b5", "ovs_interfaceid": "d71dae92-b542-404e-b4cc-ecad408ed655", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "address": "fa:16:3e:5d:4f:81", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5d:4f81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ca6c9a-c1", "ovs_interfaceid": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
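Annotation: the network_info blob in the cache update above is plain JSON. Pulling the per-port fixed IPs out of it is a short comprehension; raw_network_info below is a stand-in for the string logged above:

import json

network_info = json.loads(raw_network_info)  # the JSON list from the log line
for vif in network_info:
    ips = [ip['address']
           for subnet in vif['network']['subnets']
           for ip in subnet['ips']]
    print(vif['id'], vif['address'], ips)
# -> d71dae92-... fa:16:3e:74:17:b4 ['10.100.0.4']
#    18ca6c9a-... fa:16:3e:5d:4f:81 ['2001:db8::f816:3eff:fe5d:4f81']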
Feb 25 07:43:48 np0005629333 nova_compute[244014]: 2026-02-25 12:43:48.557 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023428.557236, c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:43:48 np0005629333 nova_compute[244014]: 2026-02-25 12:43:48.557 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] VM Started (Lifecycle Event)#033[00m
Feb 25 07:43:48 np0005629333 nova_compute[244014]: 2026-02-25 12:43:48.568 244018 DEBUG oslo_concurrency.lockutils [req-c9d8c4a3-ae73-410b-bf97-2f3dc4970679 req-dabb61f2-f0ae-477a-93e3-f32438f5ecd0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:43:48 np0005629333 nova_compute[244014]: 2026-02-25 12:43:48.569 244018 INFO nova.compute.manager [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Rebuilding instance#033[00m
Feb 25 07:43:48 np0005629333 nova_compute[244014]: 2026-02-25 12:43:48.574 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:43:48 np0005629333 nova_compute[244014]: 2026-02-25 12:43:48.579 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023428.5588315, c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:43:48 np0005629333 nova_compute[244014]: 2026-02-25 12:43:48.579 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:43:48 np0005629333 nova_compute[244014]: 2026-02-25 12:43:48.608 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:43:48 np0005629333 nova_compute[244014]: 2026-02-25 12:43:48.612 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:43:48 np0005629333 nova_compute[244014]: 2026-02-25 12:43:48.642 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
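Annotation: the numbers in "DB power_state: 0, VM power_state: 3" are nova's power-state constants (0=NOSTATE, 1=RUNNING, 3=PAUSED, 4=SHUTDOWN), and the sync is skipped because task_state is still 'spawning'. A simplified sketch of that decision, not nova's exact code:

# Values from nova.compute.power_state.
NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0x00, 0x01, 0x03, 0x04

def should_sync(task_state, db_power_state, vm_power_state):
    # While a task such as 'spawning' is in flight, defer the sync;
    # this is the "has a pending task ... Skip" message above.
    if task_state is not None:
        return False
    return db_power_state != vm_power_state

assert should_sync('spawning', NOSTATE, PAUSED) is False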
Feb 25 07:43:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:48.834 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:49 np0005629333 nova_compute[244014]: 2026-02-25 12:43:49.143 244018 DEBUG nova.compute.manager [req-94cae8ba-bb2a-4c39-8155-25c1d68e5a65 req-0e1d4803-0f2b-4573-813c-bb7bdc73e33d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-vif-plugged-d71dae92-b542-404e-b4cc-ecad408ed655 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:43:49 np0005629333 nova_compute[244014]: 2026-02-25 12:43:49.143 244018 DEBUG oslo_concurrency.lockutils [req-94cae8ba-bb2a-4c39-8155-25c1d68e5a65 req-0e1d4803-0f2b-4573-813c-bb7bdc73e33d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:43:49 np0005629333 nova_compute[244014]: 2026-02-25 12:43:49.143 244018 DEBUG oslo_concurrency.lockutils [req-94cae8ba-bb2a-4c39-8155-25c1d68e5a65 req-0e1d4803-0f2b-4573-813c-bb7bdc73e33d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:43:49 np0005629333 nova_compute[244014]: 2026-02-25 12:43:49.144 244018 DEBUG oslo_concurrency.lockutils [req-94cae8ba-bb2a-4c39-8155-25c1d68e5a65 req-0e1d4803-0f2b-4573-813c-bb7bdc73e33d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
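Annotation: the Acquiring/acquired/released triplet above is oslo.concurrency's standard instrumentation around a named lock; in code it is just a context manager:

from oslo_concurrency import lockutils

# The lock name is the instance UUID plus an "-events" suffix, as logged.
with lockutils.lock('c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events'):
    pass  # pop or register the instance event while holding the lock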
Feb 25 07:43:49 np0005629333 nova_compute[244014]: 2026-02-25 12:43:49.144 244018 DEBUG nova.compute.manager [req-94cae8ba-bb2a-4c39-8155-25c1d68e5a65 req-0e1d4803-0f2b-4573-813c-bb7bdc73e33d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Processing event network-vif-plugged-d71dae92-b542-404e-b4cc-ecad408ed655 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:43:49 np0005629333 nova_compute[244014]: 2026-02-25 12:43:49.170 244018 DEBUG nova.objects.instance [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'trusted_certs' on Instance uuid b5941b54-9cd2-465c-89c0-3cf87ebed83e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:43:49 np0005629333 nova_compute[244014]: 2026-02-25 12:43:49.193 244018 DEBUG nova.compute.manager [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:43:49 np0005629333 nova_compute[244014]: 2026-02-25 12:43:49.255 244018 DEBUG nova.objects.instance [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'pci_requests' on Instance uuid b5941b54-9cd2-465c-89c0-3cf87ebed83e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:43:49 np0005629333 nova_compute[244014]: 2026-02-25 12:43:49.270 244018 DEBUG nova.objects.instance [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'pci_devices' on Instance uuid b5941b54-9cd2-465c-89c0-3cf87ebed83e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:43:49 np0005629333 nova_compute[244014]: 2026-02-25 12:43:49.290 244018 DEBUG nova.objects.instance [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'resources' on Instance uuid b5941b54-9cd2-465c-89c0-3cf87ebed83e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:43:49 np0005629333 nova_compute[244014]: 2026-02-25 12:43:49.305 244018 DEBUG nova.objects.instance [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'migration_context' on Instance uuid b5941b54-9cd2-465c-89c0-3cf87ebed83e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:43:49 np0005629333 nova_compute[244014]: 2026-02-25 12:43:49.316 244018 DEBUG nova.objects.instance [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 25 07:43:49 np0005629333 nova_compute[244014]: 2026-02-25 12:43:49.320 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 25 07:43:50 np0005629333 nova_compute[244014]: 2026-02-25 12:43:50.122 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 239 KiB/s rd, 3.0 MiB/s wr, 67 op/s
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.318 244018 DEBUG nova.compute.manager [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-vif-plugged-d71dae92-b542-404e-b4cc-ecad408ed655 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.318 244018 DEBUG oslo_concurrency.lockutils [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.319 244018 DEBUG oslo_concurrency.lockutils [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.319 244018 DEBUG oslo_concurrency.lockutils [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.320 244018 DEBUG nova.compute.manager [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] No event matching network-vif-plugged-d71dae92-b542-404e-b4cc-ecad408ed655 in dict_keys([('network-vif-plugged', '18ca6c9a-c1c9-4a48-b124-25942ebef5df')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.320 244018 WARNING nova.compute.manager [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received unexpected event network-vif-plugged-d71dae92-b542-404e-b4cc-ecad408ed655 for instance with vm_state building and task_state spawning.#033[00m
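Annotation: the dict_keys([...]) output two lines up explains the warning: event waiters are keyed by (event_name, tag) tuples, and only the 18ca6c9a... port has a waiter registered at this moment, so the d71dae92... event finds no match. In miniature:

waiters = {('network-vif-plugged',
            '18ca6c9a-c1c9-4a48-b124-25942ebef5df'): object()}

arrived = ('network-vif-plugged', 'd71dae92-b542-404e-b4cc-ecad408ed655')
if waiters.pop(arrived, None) is None:
    print('Received unexpected event %s-%s' % arrived)  # the WARNING above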
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.321 244018 DEBUG nova.compute.manager [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-vif-plugged-18ca6c9a-c1c9-4a48-b124-25942ebef5df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.321 244018 DEBUG oslo_concurrency.lockutils [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.322 244018 DEBUG oslo_concurrency.lockutils [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.323 244018 DEBUG oslo_concurrency.lockutils [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.323 244018 DEBUG nova.compute.manager [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Processing event network-vif-plugged-18ca6c9a-c1c9-4a48-b124-25942ebef5df _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.324 244018 DEBUG nova.compute.manager [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-vif-plugged-18ca6c9a-c1c9-4a48-b124-25942ebef5df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.324 244018 DEBUG oslo_concurrency.lockutils [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.325 244018 DEBUG oslo_concurrency.lockutils [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.326 244018 DEBUG oslo_concurrency.lockutils [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.327 244018 DEBUG nova.compute.manager [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] No waiting events found dispatching network-vif-plugged-18ca6c9a-c1c9-4a48-b124-25942ebef5df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.327 244018 WARNING nova.compute.manager [req-b8f93c7f-813e-4141-9ba5-9a1a78de1c68 req-2d0ecb32-7f99-46b8-8880-dd4b60e73f15 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received unexpected event network-vif-plugged-18ca6c9a-c1c9-4a48-b124-25942ebef5df for instance with vm_state building and task_state spawning.#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.328 244018 DEBUG nova.compute.manager [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
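Annotation: the "wait completed in 2 seconds" line is the other side of the event machinery: the spawn path arms waiters for both VIFs before plugging them, along the lines of nova's virtapi contract. A sketch only; virtapi, instance, and plug_vifs are placeholders:

events = [('network-vif-plugged', 'd71dae92-b542-404e-b4cc-ecad408ed655'),
          ('network-vif-plugged', '18ca6c9a-c1c9-4a48-b124-25942ebef5df')]
# Arm the waiters first, then do the work that triggers the events.
with virtapi.wait_for_instance_event(instance, events, deadline=300):
    plug_vifs(instance)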
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.332 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023431.3317747, c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.333 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.336 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.341 244018 INFO nova.virt.libvirt.driver [-] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Instance spawned successfully.#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.342 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.365 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.372 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.378 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.378 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.379 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.380 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.380 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.381 244018 DEBUG nova.virt.libvirt.driver [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
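Annotation: collected from the six "Found default for ..." lines above, the image-property defaults registered for this guest are:

image_property_defaults = {
    'hw_cdrom_bus': 'sata',
    'hw_disk_bus': 'virtio',
    'hw_input_bus': 'usb',
    'hw_pointer_model': 'usbtablet',
    'hw_video_model': 'virtio',
    'hw_vif_model': 'virtio',
}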
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.448 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.477 244018 INFO nova.compute.manager [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Took 12.71 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.477 244018 DEBUG nova.compute.manager [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.520 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.560 244018 INFO nova.compute.manager [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Took 13.80 seconds to build instance.#033[00m
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.587 244018 DEBUG oslo_concurrency.lockutils [None req-550443b0-e8ad-4f4d-9a13-903a53873d8d f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:43:51 np0005629333 kernel: tap43ea1958-7f (unregistering): left promiscuous mode
Feb 25 07:43:51 np0005629333 NetworkManager[49836]: <info>  [1772023431.7287] device (tap43ea1958-7f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:43:51 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:51Z|01085|binding|INFO|Releasing lport 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 from this chassis (sb_readonly=0)
Feb 25 07:43:51 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:51Z|01086|binding|INFO|Setting lport 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 down in Southbound
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.736 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:51 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:51Z|01087|binding|INFO|Removing iface tap43ea1958-7f ovn-installed in OVS
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.739 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.747 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:43:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:51.747 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:93:a9 10.100.0.5'], port_security=['fa:16:3e:3e:93:a9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b5941b54-9cd2-465c-89c0-3cf87ebed83e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a5401511-20ab-4c2d-9471-9223f0b77f54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.221'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32cc7d41-346e-4e2e-a938-6389d190a22f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=43ea1958-7fd9-47b6-be81-3eeb1b3801a0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:43:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:51.750 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 in datapath f8edf066-3c6a-45fb-bc20-36dc74f8aee6 unbound from our chassis
Feb 25 07:43:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:51.753 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f8edf066-3c6a-45fb-bc20-36dc74f8aee6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 07:43:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:51.754 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1192a1b6-d6b1-4d38-afbb-cd736f371f7b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:43:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:51.755 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6 namespace which is not needed anymore
Feb 25 07:43:51 np0005629333 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Feb 25 07:43:51 np0005629333 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006e.scope: Consumed 12.203s CPU time.
Feb 25 07:43:51 np0005629333 systemd-machined[210048]: Machine qemu-138-instance-0000006e terminated.
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.830 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:43:51 np0005629333 neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6[340681]: [NOTICE]   (340688) : haproxy version is 2.8.14-c23fe91
Feb 25 07:43:51 np0005629333 neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6[340681]: [NOTICE]   (340688) : path to executable is /usr/sbin/haproxy
Feb 25 07:43:51 np0005629333 neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6[340681]: [WARNING]  (340688) : Exiting Master process...
Feb 25 07:43:51 np0005629333 neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6[340681]: [ALERT]    (340688) : Current worker (340690) exited with code 143 (Terminated)
Feb 25 07:43:51 np0005629333 neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6[340681]: [WARNING]  (340688) : All workers exited. Exiting... (0)
Feb 25 07:43:51 np0005629333 systemd[1]: libpod-314448028c0bb71621d0c09c9fb8e3e4e2e7a53175dc854d717d856b0c26a4bd.scope: Deactivated successfully.
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.960 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:43:51 np0005629333 nova_compute[244014]: 2026-02-25 12:43:51.965 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:43:51 np0005629333 podman[342046]: 2026-02-25 12:43:51.964783062 +0000 UTC m=+0.095180847 container died 314448028c0bb71621d0c09c9fb8e3e4e2e7a53175dc854d717d856b0c26a4bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:43:52 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-314448028c0bb71621d0c09c9fb8e3e4e2e7a53175dc854d717d856b0c26a4bd-userdata-shm.mount: Deactivated successfully.
Feb 25 07:43:52 np0005629333 systemd[1]: var-lib-containers-storage-overlay-dbd30b2a51b99c22ab6484bb072cec6f6373098acd12eff7ecbd61ac2cb75c84-merged.mount: Deactivated successfully.
Feb 25 07:43:52 np0005629333 podman[342046]: 2026-02-25 12:43:52.031105444 +0000 UTC m=+0.161503209 container cleanup 314448028c0bb71621d0c09c9fb8e3e4e2e7a53175dc854d717d856b0c26a4bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 07:43:52 np0005629333 systemd[1]: libpod-conmon-314448028c0bb71621d0c09c9fb8e3e4e2e7a53175dc854d717d856b0c26a4bd.scope: Deactivated successfully.
Feb 25 07:43:52 np0005629333 podman[342089]: 2026-02-25 12:43:52.323780893 +0000 UTC m=+0.267745647 container remove 314448028c0bb71621d0c09c9fb8e3e4e2e7a53175dc854d717d856b0c26a4bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:43:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:52.328 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7b547d60-45c9-4d79-ba58-a2f9b35dfa43]: (4, ('Wed Feb 25 12:43:51 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6 (314448028c0bb71621d0c09c9fb8e3e4e2e7a53175dc854d717d856b0c26a4bd)\n314448028c0bb71621d0c09c9fb8e3e4e2e7a53175dc854d717d856b0c26a4bd\nWed Feb 25 12:43:52 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6 (314448028c0bb71621d0c09c9fb8e3e4e2e7a53175dc854d717d856b0c26a4bd)\n314448028c0bb71621d0c09c9fb8e3e4e2e7a53175dc854d717d856b0c26a4bd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:43:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:52.331 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[98b2e7e2-724e-4256-aa12-b042da3dcf52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:43:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:52.332 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8edf066-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:43:52 np0005629333 nova_compute[244014]: 2026-02-25 12:43:52.335 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:43:52 np0005629333 kernel: tapf8edf066-30: left promiscuous mode
Feb 25 07:43:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 3.0 MiB/s wr, 82 op/s
Feb 25 07:43:52 np0005629333 nova_compute[244014]: 2026-02-25 12:43:52.343 244018 INFO nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Instance shutdown successfully after 3 seconds.
Feb 25 07:43:52 np0005629333 nova_compute[244014]: 2026-02-25 12:43:52.345 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:43:52 np0005629333 nova_compute[244014]: 2026-02-25 12:43:52.350 244018 INFO nova.virt.libvirt.driver [-] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Instance destroyed successfully.
Feb 25 07:43:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:52.349 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0b78508d-2ac7-411f-8f91-07c624c96fc6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:43:52 np0005629333 nova_compute[244014]: 2026-02-25 12:43:52.357 244018 INFO nova.virt.libvirt.driver [-] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Instance destroyed successfully.
Feb 25 07:43:52 np0005629333 nova_compute[244014]: 2026-02-25 12:43:52.358 244018 DEBUG nova.virt.libvirt.vif [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:43:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1420097374',display_name='tempest-TestNetworkAdvancedServerOps-server-1420097374',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1420097374',id=110,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLB4lCGz708UDYBfDiKGSO/K1vvU3TZkSpUC/m/pNqcj9p06BKZCsaZ+HTq1tFiTei87P3smYtAHKXB341loC6n/SM62zSw05o3YeNjhjC3ZsqOrkJUnRdjhvIYhFIjYsQ==',key_name='tempest-TestNetworkAdvancedServerOps-1361605021',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:43:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-ivr9l6e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:43:47Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=b5941b54-9cd2-465c-89c0-3cf87ebed83e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 07:43:52 np0005629333 nova_compute[244014]: 2026-02-25 12:43:52.358 244018 DEBUG nova.network.os_vif_util [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:43:52 np0005629333 nova_compute[244014]: 2026-02-25 12:43:52.359 244018 DEBUG nova.network.os_vif_util [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3e:93:a9,bridge_name='br-int',has_traffic_filtering=True,id=43ea1958-7fd9-47b6-be81-3eeb1b3801a0,network=Network(f8edf066-3c6a-45fb-bc20-36dc74f8aee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ea1958-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:43:52 np0005629333 nova_compute[244014]: 2026-02-25 12:43:52.360 244018 DEBUG os_vif [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:93:a9,bridge_name='br-int',has_traffic_filtering=True,id=43ea1958-7fd9-47b6-be81-3eeb1b3801a0,network=Network(f8edf066-3c6a-45fb-bc20-36dc74f8aee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ea1958-7f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 07:43:52 np0005629333 nova_compute[244014]: 2026-02-25 12:43:52.363 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:43:52 np0005629333 nova_compute[244014]: 2026-02-25 12:43:52.364 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ea1958-7f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:43:52 np0005629333 nova_compute[244014]: 2026-02-25 12:43:52.366 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 07:43:52 np0005629333 nova_compute[244014]: 2026-02-25 12:43:52.370 244018 INFO os_vif [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:93:a9,bridge_name='br-int',has_traffic_filtering=True,id=43ea1958-7fd9-47b6-be81-3eeb1b3801a0,network=Network(f8edf066-3c6a-45fb-bc20-36dc74f8aee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ea1958-7f')
Feb 25 07:43:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:52.368 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[15759f5b-a34e-41e6-a3ab-6c6d78153f13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:43:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:52.372 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a490b228-bd86-4095-8360-1fb833e35f09]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:43:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:52.385 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[902186d3-2cc3-4478-8ac2-97fc32ee02e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537563, 'reachable_time': 32737, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342108, 'error': None, 'target': 'ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:43:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:52.390 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 07:43:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:52.390 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[172ff96e-c394-4e41-a40f-651267381172]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:43:52 np0005629333 systemd[1]: run-netns-ovnmeta\x2df8edf066\x2d3c6a\x2d45fb\x2dbc20\x2d36dc74f8aee6.mount: Deactivated successfully.
Feb 25 07:43:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:43:53 np0005629333 nova_compute[244014]: 2026-02-25 12:43:53.467 244018 DEBUG nova.compute.manager [req-ea299699-6e9c-417c-9502-e3cbc2f14438 req-48985056-8037-4000-a135-5f1d1b0e163c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received event network-vif-unplugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:43:53 np0005629333 nova_compute[244014]: 2026-02-25 12:43:53.467 244018 DEBUG oslo_concurrency.lockutils [req-ea299699-6e9c-417c-9502-e3cbc2f14438 req-48985056-8037-4000-a135-5f1d1b0e163c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:43:53 np0005629333 nova_compute[244014]: 2026-02-25 12:43:53.468 244018 DEBUG oslo_concurrency.lockutils [req-ea299699-6e9c-417c-9502-e3cbc2f14438 req-48985056-8037-4000-a135-5f1d1b0e163c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:43:53 np0005629333 nova_compute[244014]: 2026-02-25 12:43:53.468 244018 DEBUG oslo_concurrency.lockutils [req-ea299699-6e9c-417c-9502-e3cbc2f14438 req-48985056-8037-4000-a135-5f1d1b0e163c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:43:53 np0005629333 nova_compute[244014]: 2026-02-25 12:43:53.468 244018 DEBUG nova.compute.manager [req-ea299699-6e9c-417c-9502-e3cbc2f14438 req-48985056-8037-4000-a135-5f1d1b0e163c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] No waiting events found dispatching network-vif-unplugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:43:53 np0005629333 nova_compute[244014]: 2026-02-25 12:43:53.468 244018 WARNING nova.compute.manager [req-ea299699-6e9c-417c-9502-e3cbc2f14438 req-48985056-8037-4000-a135-5f1d1b0e163c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received unexpected event network-vif-unplugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 for instance with vm_state active and task_state rebuilding.
Feb 25 07:43:53 np0005629333 nova_compute[244014]: 2026-02-25 12:43:53.468 244018 DEBUG nova.compute.manager [req-ea299699-6e9c-417c-9502-e3cbc2f14438 req-48985056-8037-4000-a135-5f1d1b0e163c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received event network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:43:53 np0005629333 nova_compute[244014]: 2026-02-25 12:43:53.469 244018 DEBUG oslo_concurrency.lockutils [req-ea299699-6e9c-417c-9502-e3cbc2f14438 req-48985056-8037-4000-a135-5f1d1b0e163c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:43:53 np0005629333 nova_compute[244014]: 2026-02-25 12:43:53.469 244018 DEBUG oslo_concurrency.lockutils [req-ea299699-6e9c-417c-9502-e3cbc2f14438 req-48985056-8037-4000-a135-5f1d1b0e163c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:43:53 np0005629333 nova_compute[244014]: 2026-02-25 12:43:53.469 244018 DEBUG oslo_concurrency.lockutils [req-ea299699-6e9c-417c-9502-e3cbc2f14438 req-48985056-8037-4000-a135-5f1d1b0e163c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:43:53 np0005629333 nova_compute[244014]: 2026-02-25 12:43:53.469 244018 DEBUG nova.compute.manager [req-ea299699-6e9c-417c-9502-e3cbc2f14438 req-48985056-8037-4000-a135-5f1d1b0e163c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] No waiting events found dispatching network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:43:53 np0005629333 nova_compute[244014]: 2026-02-25 12:43:53.469 244018 WARNING nova.compute.manager [req-ea299699-6e9c-417c-9502-e3cbc2f14438 req-48985056-8037-4000-a135-5f1d1b0e163c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received unexpected event network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 for instance with vm_state active and task_state rebuilding.
Feb 25 07:43:53 np0005629333 podman[342128]: 2026-02-25 12:43:53.718877562 +0000 UTC m=+0.061027213 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:43:54 np0005629333 nova_compute[244014]: 2026-02-25 12:43:54.077 244018 INFO nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Deleting instance files /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e_del
Feb 25 07:43:54 np0005629333 nova_compute[244014]: 2026-02-25 12:43:54.078 244018 INFO nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Deletion of /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e_del complete
Feb 25 07:43:54 np0005629333 nova_compute[244014]: 2026-02-25 12:43:54.220 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:43:54 np0005629333 nova_compute[244014]: 2026-02-25 12:43:54.221 244018 INFO nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Creating image(s)
Feb 25 07:43:54 np0005629333 nova_compute[244014]: 2026-02-25 12:43:54.247 244018 DEBUG nova.storage.rbd_utils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:43:54 np0005629333 nova_compute[244014]: 2026-02-25 12:43:54.275 244018 DEBUG nova.storage.rbd_utils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:43:54 np0005629333 nova_compute[244014]: 2026-02-25 12:43:54.297 244018 DEBUG nova.storage.rbd_utils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:43:54 np0005629333 nova_compute[244014]: 2026-02-25 12:43:54.301 244018 DEBUG oslo_concurrency.processutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:43:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 492 KiB/s rd, 219 KiB/s wr, 50 op/s
Feb 25 07:43:54 np0005629333 nova_compute[244014]: 2026-02-25 12:43:54.376 244018 DEBUG oslo_concurrency.processutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:43:54 np0005629333 nova_compute[244014]: 2026-02-25 12:43:54.377 244018 DEBUG oslo_concurrency.lockutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:43:54 np0005629333 nova_compute[244014]: 2026-02-25 12:43:54.377 244018 DEBUG oslo_concurrency.lockutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:43:54 np0005629333 nova_compute[244014]: 2026-02-25 12:43:54.377 244018 DEBUG oslo_concurrency.lockutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "d54266c9ce37b98d8a911b5ac30e52735f3ff538" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:43:54 np0005629333 nova_compute[244014]: 2026-02-25 12:43:54.401 244018 DEBUG nova.storage.rbd_utils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:43:54 np0005629333 nova_compute[244014]: 2026-02-25 12:43:54.405 244018 DEBUG oslo_concurrency.processutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:43:54 np0005629333 nova_compute[244014]: 2026-02-25 12:43:54.485 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:43:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:55.022 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:43:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:55.023 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:43:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:55.023 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:43:55 np0005629333 nova_compute[244014]: 2026-02-25 12:43:55.125 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:43:55 np0005629333 podman[342242]: 2026-02-25 12:43:55.476323996 +0000 UTC m=+0.074514484 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 07:43:55 np0005629333 nova_compute[244014]: 2026-02-25 12:43:55.697 244018 DEBUG oslo_concurrency.processutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538 b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:43:55 np0005629333 nova_compute[244014]: 2026-02-25 12:43:55.864 244018 DEBUG nova.storage.rbd_utils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] resizing rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.072 244018 DEBUG nova.compute.manager [req-50fab266-5416-4bda-b24d-755f9a83108c req-8a57aea0-af9c-4307-bcbd-2290c365218d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-changed-d71dae92-b542-404e-b4cc-ecad408ed655 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.072 244018 DEBUG nova.compute.manager [req-50fab266-5416-4bda-b24d-755f9a83108c req-8a57aea0-af9c-4307-bcbd-2290c365218d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Refreshing instance network info cache due to event network-changed-d71dae92-b542-404e-b4cc-ecad408ed655. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.073 244018 DEBUG oslo_concurrency.lockutils [req-50fab266-5416-4bda-b24d-755f9a83108c req-8a57aea0-af9c-4307-bcbd-2290c365218d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.073 244018 DEBUG oslo_concurrency.lockutils [req-50fab266-5416-4bda-b24d-755f9a83108c req-8a57aea0-af9c-4307-bcbd-2290c365218d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.073 244018 DEBUG nova.network.neutron [req-50fab266-5416-4bda-b24d-755f9a83108c req-8a57aea0-af9c-4307-bcbd-2290c365218d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Refreshing network info cache for port d71dae92-b542-404e-b4cc-ecad408ed655 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.221 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.221 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Ensure instance console log exists: /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.222 244018 DEBUG oslo_concurrency.lockutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.222 244018 DEBUG oslo_concurrency.lockutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.222 244018 DEBUG oslo_concurrency.lockutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.225 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Start _get_guest_xml network_info=[{"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.231 244018 WARNING nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.238 244018 DEBUG nova.virt.libvirt.host [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.239 244018 DEBUG nova.virt.libvirt.host [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.243 244018 DEBUG nova.virt.libvirt.host [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.244 244018 DEBUG nova.virt.libvirt.host [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.244 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.245 244018 DEBUG nova.virt.hardware [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:25Z,direct_url=<?>,disk_format='qcow2',id=f0ef5a9a-23b8-4883-8e47-feb7403a11d8,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.246 244018 DEBUG nova.virt.hardware [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.246 244018 DEBUG nova.virt.hardware [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.247 244018 DEBUG nova.virt.hardware [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.247 244018 DEBUG nova.virt.hardware [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.247 244018 DEBUG nova.virt.hardware [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.247 244018 DEBUG nova.virt.hardware [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.248 244018 DEBUG nova.virt.hardware [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.248 244018 DEBUG nova.virt.hardware [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.248 244018 DEBUG nova.virt.hardware [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.248 244018 DEBUG nova.virt.hardware [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.249 244018 DEBUG nova.objects.instance [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'vcpu_model' on Instance uuid b5941b54-9cd2-465c-89c0-3cf87ebed83e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.267 244018 DEBUG oslo_concurrency.processutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:43:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 440 KiB/s rd, 42 KiB/s wr, 30 op/s
Feb 25 07:43:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:43:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1296203534' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.830 244018 DEBUG oslo_concurrency.processutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.864 244018 DEBUG nova.storage.rbd_utils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:43:56 np0005629333 nova_compute[244014]: 2026-02-25 12:43:56.871 244018 DEBUG oslo_concurrency.processutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:43:57 np0005629333 nova_compute[244014]: 2026-02-25 12:43:57.366 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:43:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3001768558' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:43:57 np0005629333 nova_compute[244014]: 2026-02-25 12:43:57.411 244018 DEBUG oslo_concurrency.processutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:43:57 np0005629333 nova_compute[244014]: 2026-02-25 12:43:57.412 244018 DEBUG nova.virt.libvirt.vif [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:43:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1420097374',display_name='tempest-TestNetworkAdvancedServerOps-server-1420097374',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1420097374',id=110,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLB4lCGz708UDYBfDiKGSO/K1vvU3TZkSpUC/m/pNqcj9p06BKZCsaZ+HTq1tFiTei87P3smYtAHKXB341loC6n/SM62zSw05o3YeNjhjC3ZsqOrkJUnRdjhvIYhFIjYsQ==',key_name='tempest-TestNetworkAdvancedServerOps-1361605021',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:43:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-ivr9l6e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:43:54Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=b5941b54-9cd2-465c-89c0-3cf87ebed83e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:43:57 np0005629333 nova_compute[244014]: 2026-02-25 12:43:57.412 244018 DEBUG nova.network.os_vif_util [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:43:57 np0005629333 nova_compute[244014]: 2026-02-25 12:43:57.413 244018 DEBUG nova.network.os_vif_util [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3e:93:a9,bridge_name='br-int',has_traffic_filtering=True,id=43ea1958-7fd9-47b6-be81-3eeb1b3801a0,network=Network(f8edf066-3c6a-45fb-bc20-36dc74f8aee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ea1958-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:43:57 np0005629333 nova_compute[244014]: 2026-02-25 12:43:57.416 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:43:57 np0005629333 nova_compute[244014]:  <uuid>b5941b54-9cd2-465c-89c0-3cf87ebed83e</uuid>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:  <name>instance-0000006e</name>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1420097374</nova:name>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:43:56</nova:creationTime>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:43:57 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:        <nova:user uuid="fb37a481eb114226822ed8b2ef4f9a89">tempest-TestNetworkAdvancedServerOps-1424801157-project-member</nova:user>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:        <nova:project uuid="6821a6e7edd54dbe97920b79aae8f54c">tempest-TestNetworkAdvancedServerOps-1424801157</nova:project>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="f0ef5a9a-23b8-4883-8e47-feb7403a11d8"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:        <nova:port uuid="43ea1958-7fd9-47b6-be81-3eeb1b3801a0">
Feb 25 07:43:57 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <entry name="serial">b5941b54-9cd2-465c-89c0-3cf87ebed83e</entry>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <entry name="uuid">b5941b54-9cd2-465c-89c0-3cf87ebed83e</entry>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk">
Feb 25 07:43:57 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:43:57 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk.config">
Feb 25 07:43:57 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:43:57 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:3e:93:a9"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <target dev="tap43ea1958-7f"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/console.log" append="off"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:43:57 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:43:57 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:43:57 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:43:57 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:43:57 np0005629333 nova_compute[244014]: 2026-02-25 12:43:57.417 244018 DEBUG nova.virt.libvirt.vif [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:43:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1420097374',display_name='tempest-TestNetworkAdvancedServerOps-server-1420097374',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1420097374',id=110,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLB4lCGz708UDYBfDiKGSO/K1vvU3TZkSpUC/m/pNqcj9p06BKZCsaZ+HTq1tFiTei87P3smYtAHKXB341loC6n/SM62zSw05o3YeNjhjC3ZsqOrkJUnRdjhvIYhFIjYsQ==',key_name='tempest-TestNetworkAdvancedServerOps-1361605021',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:43:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-ivr9l6e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:43:54Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=b5941b54-9cd2-465c-89c0-3cf87ebed83e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:43:57 np0005629333 nova_compute[244014]: 2026-02-25 12:43:57.417 244018 DEBUG nova.network.os_vif_util [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:43:57 np0005629333 nova_compute[244014]: 2026-02-25 12:43:57.418 244018 DEBUG nova.network.os_vif_util [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3e:93:a9,bridge_name='br-int',has_traffic_filtering=True,id=43ea1958-7fd9-47b6-be81-3eeb1b3801a0,network=Network(f8edf066-3c6a-45fb-bc20-36dc74f8aee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ea1958-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:43:57 np0005629333 nova_compute[244014]: 2026-02-25 12:43:57.418 244018 DEBUG os_vif [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:93:a9,bridge_name='br-int',has_traffic_filtering=True,id=43ea1958-7fd9-47b6-be81-3eeb1b3801a0,network=Network(f8edf066-3c6a-45fb-bc20-36dc74f8aee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ea1958-7f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:43:57 np0005629333 nova_compute[244014]: 2026-02-25 12:43:57.418 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:57 np0005629333 nova_compute[244014]: 2026-02-25 12:43:57.419 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:57 np0005629333 nova_compute[244014]: 2026-02-25 12:43:57.419 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:43:57 np0005629333 nova_compute[244014]: 2026-02-25 12:43:57.422 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:57 np0005629333 nova_compute[244014]: 2026-02-25 12:43:57.423 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43ea1958-7f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:57 np0005629333 nova_compute[244014]: 2026-02-25 12:43:57.423 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap43ea1958-7f, col_values=(('external_ids', {'iface-id': '43ea1958-7fd9-47b6-be81-3eeb1b3801a0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3e:93:a9', 'vm-uuid': 'b5941b54-9cd2-465c-89c0-3cf87ebed83e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:57 np0005629333 nova_compute[244014]: 2026-02-25 12:43:57.424 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:57 np0005629333 NetworkManager[49836]: <info>  [1772023437.4258] manager: (tap43ea1958-7f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/452)
Feb 25 07:43:57 np0005629333 nova_compute[244014]: 2026-02-25 12:43:57.426 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:43:57 np0005629333 nova_compute[244014]: 2026-02-25 12:43:57.428 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:57 np0005629333 nova_compute[244014]: 2026-02-25 12:43:57.428 244018 INFO os_vif [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:93:a9,bridge_name='br-int',has_traffic_filtering=True,id=43ea1958-7fd9-47b6-be81-3eeb1b3801a0,network=Network(f8edf066-3c6a-45fb-bc20-36dc74f8aee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ea1958-7f')#033[00m
Feb 25 07:43:57 np0005629333 nova_compute[244014]: 2026-02-25 12:43:57.544 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:43:57 np0005629333 nova_compute[244014]: 2026-02-25 12:43:57.545 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:43:57 np0005629333 nova_compute[244014]: 2026-02-25 12:43:57.545 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No VIF found with MAC fa:16:3e:3e:93:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:43:57 np0005629333 nova_compute[244014]: 2026-02-25 12:43:57.546 244018 INFO nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Using config drive#033[00m
Feb 25 07:43:57 np0005629333 nova_compute[244014]: 2026-02-25 12:43:57.574 244018 DEBUG nova.storage.rbd_utils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:43:57 np0005629333 nova_compute[244014]: 2026-02-25 12:43:57.599 244018 DEBUG nova.objects.instance [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'ec2_ids' on Instance uuid b5941b54-9cd2-465c-89c0-3cf87ebed83e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:43:57 np0005629333 nova_compute[244014]: 2026-02-25 12:43:57.632 244018 DEBUG nova.objects.instance [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'keypairs' on Instance uuid b5941b54-9cd2-465c-89c0-3cf87ebed83e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:43:58 np0005629333 nova_compute[244014]: 2026-02-25 12:43:58.047 244018 INFO nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Creating config drive at /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/disk.config#033[00m
Feb 25 07:43:58 np0005629333 nova_compute[244014]: 2026-02-25 12:43:58.054 244018 DEBUG oslo_concurrency.processutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpu2e7atri execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:43:58 np0005629333 nova_compute[244014]: 2026-02-25 12:43:58.193 244018 DEBUG oslo_concurrency.processutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpu2e7atri" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:43:58 np0005629333 nova_compute[244014]: 2026-02-25 12:43:58.223 244018 DEBUG nova.storage.rbd_utils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:43:58 np0005629333 nova_compute[244014]: 2026-02-25 12:43:58.229 244018 DEBUG oslo_concurrency.processutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/disk.config b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:43:58 np0005629333 nova_compute[244014]: 2026-02-25 12:43:58.305 244018 DEBUG nova.network.neutron [req-50fab266-5416-4bda-b24d-755f9a83108c req-8a57aea0-af9c-4307-bcbd-2290c365218d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Updated VIF entry in instance network info cache for port d71dae92-b542-404e-b4cc-ecad408ed655. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:43:58 np0005629333 nova_compute[244014]: 2026-02-25 12:43:58.306 244018 DEBUG nova.network.neutron [req-50fab266-5416-4bda-b24d-755f9a83108c req-8a57aea0-af9c-4307-bcbd-2290c365218d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Updating instance_info_cache with network_info: [{"id": "d71dae92-b542-404e-b4cc-ecad408ed655", "address": "fa:16:3e:74:17:b4", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd71dae92-b5", "ovs_interfaceid": "d71dae92-b542-404e-b4cc-ecad408ed655", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "address": "fa:16:3e:5d:4f:81", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5d:4f81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ca6c9a-c1", "ovs_interfaceid": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:43:58 np0005629333 nova_compute[244014]: 2026-02-25 12:43:58.332 244018 DEBUG oslo_concurrency.lockutils [req-50fab266-5416-4bda-b24d-755f9a83108c req-8a57aea0-af9c-4307-bcbd-2290c365218d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:43:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:43:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 325 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 132 op/s
Feb 25 07:43:58 np0005629333 nova_compute[244014]: 2026-02-25 12:43:58.693 244018 DEBUG oslo_concurrency.processutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/disk.config b5941b54-9cd2-465c-89c0-3cf87ebed83e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:43:58 np0005629333 nova_compute[244014]: 2026-02-25 12:43:58.694 244018 INFO nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Deleting local config drive /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e/disk.config because it was imported into RBD.#033[00m
Feb 25 07:43:58 np0005629333 kernel: tap43ea1958-7f: entered promiscuous mode
Feb 25 07:43:58 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:58Z|01088|binding|INFO|Claiming lport 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 for this chassis.
Feb 25 07:43:58 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:58Z|01089|binding|INFO|43ea1958-7fd9-47b6-be81-3eeb1b3801a0: Claiming fa:16:3e:3e:93:a9 10.100.0.5
Feb 25 07:43:58 np0005629333 nova_compute[244014]: 2026-02-25 12:43:58.770 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:58 np0005629333 NetworkManager[49836]: <info>  [1772023438.7726] manager: (tap43ea1958-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/453)
Feb 25 07:43:58 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:58Z|01090|binding|INFO|Setting lport 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 ovn-installed in OVS
Feb 25 07:43:58 np0005629333 nova_compute[244014]: 2026-02-25 12:43:58.780 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.780 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:93:a9 10.100.0.5'], port_security=['fa:16:3e:3e:93:a9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b5941b54-9cd2-465c-89c0-3cf87ebed83e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a5401511-20ab-4c2d-9471-9223f0b77f54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.221'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32cc7d41-346e-4e2e-a938-6389d190a22f, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=43ea1958-7fd9-47b6-be81-3eeb1b3801a0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:43:58 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:58Z|01091|binding|INFO|Setting lport 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 up in Southbound
Feb 25 07:43:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.784 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 in datapath f8edf066-3c6a-45fb-bc20-36dc74f8aee6 bound to our chassis#033[00m
Feb 25 07:43:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.788 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f8edf066-3c6a-45fb-bc20-36dc74f8aee6#033[00m
Feb 25 07:43:58 np0005629333 nova_compute[244014]: 2026-02-25 12:43:58.788 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.801 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[69133733-20b6-485e-b6d2-904806d25c61]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.802 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf8edf066-31 in ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:43:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.805 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf8edf066-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:43:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.805 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[71b4af7e-87fc-4a46-990c-747c9c9f4c66]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.806 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[09734e11-1698-43e8-83d4-56845cde8124]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:58 np0005629333 systemd-machined[210048]: New machine qemu-140-instance-0000006e.
Feb 25 07:43:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.814 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[f94eb4ea-ba24-4fc3-a571-da9d036130a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:58 np0005629333 systemd[1]: Started Virtual Machine qemu-140-instance-0000006e.
Feb 25 07:43:58 np0005629333 systemd-udevd[342479]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:43:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.836 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9c37f400-7934-41bb-8a83-6e6553ae0eb5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:58 np0005629333 NetworkManager[49836]: <info>  [1772023438.8448] device (tap43ea1958-7f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:43:58 np0005629333 NetworkManager[49836]: <info>  [1772023438.8461] device (tap43ea1958-7f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:43:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.863 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[124b9ad9-0c11-4ae9-aa6b-603b177d3261]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:58 np0005629333 systemd-udevd[342482]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:43:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.868 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7cb48662-18d0-4069-b0a2-99186f62de96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:58 np0005629333 NetworkManager[49836]: <info>  [1772023438.8697] manager: (tapf8edf066-30): new Veth device (/org/freedesktop/NetworkManager/Devices/454)
Feb 25 07:43:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.902 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ba2b1ef3-2eff-47bf-bde5-13738f60716c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.904 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[611caf7c-c3cb-4383-9a3b-8fc23b2598ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:58 np0005629333 NetworkManager[49836]: <info>  [1772023438.9232] device (tapf8edf066-30): carrier: link connected
Feb 25 07:43:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.928 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[414c90d3-b5bb-45b5-a3af-f040df0fafee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.943 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[72152f50-cbcc-4600-954c-9d65d015664b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf8edf066-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:6d:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 328], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540833, 'reachable_time': 42496, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342509, 'error': None, 'target': 'ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.960 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[36881004-7f86-4d3f-be20-5aaf42380992]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2f:6d18'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540833, 'tstamp': 540833}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342510, 'error': None, 'target': 'ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:58.976 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[212d94d9-6720-4704-9cae-03167a2d2b5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf8edf066-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:6d:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 328], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540833, 'reachable_time': 42496, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 342511, 'error': None, 'target': 'ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
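The two privsep replies above are pyroute2 netlink messages (an RTM_NEWADDR for the link-local address, then an RTM_NEWLINK dump for tapf8edf066-31) marshalled back from the agent's privileged helper; the 'target' field in each header names the network namespace the query ran in. A minimal sketch, assuming pyroute2 is available and the namespace still exists, of the same link query:

    from pyroute2 import NetNS

    # Namespace and interface names copied from the reply above.
    NS = 'ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6'
    with NetNS(NS) as ns:
        # Returns RTM_NEWLINK messages shaped like the dict logged above.
        for msg in ns.link('get', ifname='tapf8edf066-31'):
            print(msg.get_attr('IFLA_ADDRESS'))    # fa:16:3e:2f:6d:18
            print(msg.get_attr('IFLA_OPERSTATE'))  # UP
            print(msg['event'])                    # RTM_NEWLINK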
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:59.019 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ba960485-0cb5-46e0-a45d-1f7402ecc5a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:59.071 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[da8326e2-ba76-42b5-9f48-35e8f13b8b01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:59.073 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8edf066-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:59.073 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:59.074 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf8edf066-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.076 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:59 np0005629333 kernel: tapf8edf066-30: entered promiscuous mode
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.079 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:59.080 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf8edf066-30, col_values=(('external_ids', {'iface-id': '537184a5-5d27-4b28-acba-8f254f6dc5ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
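The three ovsdbapp commands above idempotently remove the metadata tap from br-ex (a no-op here, hence "Transaction caused no change"), add it to br-int, and set external_ids:iface-id so ovn-controller can match the OVS interface to its OVN logical port. A hedged sketch of the equivalent ovsdbapp calls; the socket path is an assumption, the names come from the log, and the agent actually commits each command in its own transaction rather than batching them:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # assumed socket path
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapf8edf066-30', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapf8edf066-30', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapf8edf066-30',
            ('external_ids',
             {'iface-id': '537184a5-5d27-4b28-acba-8f254f6dc5ca'})))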
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.082 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:43:59Z|01092|binding|INFO|Releasing lport 537184a5-5d27-4b28-acba-8f254f6dc5ca from this chassis (sb_readonly=0)
Feb 25 07:43:59 np0005629333 NetworkManager[49836]: <info>  [1772023439.0837] manager: (tapf8edf066-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/455)
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.088 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:59.090 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f8edf066-3c6a-45fb-bc20-36dc74f8aee6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f8edf066-3c6a-45fb-bc20-36dc74f8aee6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
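The ENOENT above is the agent probing for an existing haproxy pidfile before rendering a fresh config; a missing file simply means no proxy is running for this network yet, which is why it is logged at DEBUG. Roughly what the helper does (a simplified sketch, not neutron's exact code):

    def get_value_from_file(path, converter=None):
        """Return the (optionally converted) contents of path, or None."""
        try:
            with open(path) as f:
                value = f.read().strip()
            return converter(value) if converter else value
        except OSError as err:
            # Absence is not an error here; mirrors the DEBUG line above.
            print('Unable to access %s; Error: %s' % (path, err))
            return None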
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.091 244018 DEBUG nova.compute.manager [req-397f5f70-c2ef-4730-a8e1-fba2d57240c6 req-81e843fa-b3a1-4092-b606-37dde4493138 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received event network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.092 244018 DEBUG oslo_concurrency.lockutils [req-397f5f70-c2ef-4730-a8e1-fba2d57240c6 req-81e843fa-b3a1-4092-b606-37dde4493138 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.092 244018 DEBUG oslo_concurrency.lockutils [req-397f5f70-c2ef-4730-a8e1-fba2d57240c6 req-81e843fa-b3a1-4092-b606-37dde4493138 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.093 244018 DEBUG oslo_concurrency.lockutils [req-397f5f70-c2ef-4730-a8e1-fba2d57240c6 req-81e843fa-b3a1-4092-b606-37dde4493138 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.093 244018 DEBUG nova.compute.manager [req-397f5f70-c2ef-4730-a8e1-fba2d57240c6 req-81e843fa-b3a1-4092-b606-37dde4493138 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] No waiting events found dispatching network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.093 244018 WARNING nova.compute.manager [req-397f5f70-c2ef-4730-a8e1-fba2d57240c6 req-81e843fa-b3a1-4092-b606-37dde4493138 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received unexpected event network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 for instance with vm_state active and task_state rebuild_spawning.#033[00m
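The sequence above is nova's external-event plumbing: Neutron delivers network-vif-plugged, the compute manager takes the per-instance "-events" lock, pops any registered waiter, finds none, and logs the WARNING. A simplified, hypothetical sketch of that register/pop pattern (names are illustrative, not nova's actual classes):

    import threading

    class InstanceEvents:
        """Per-instance event registry: waiters register, arrivals pop."""
        def __init__(self):
            self._events = {}              # uuid -> {event_name: Event}
            self._lock = threading.Lock()  # the '...-events' lock above

        def prepare(self, uuid, name):
            with self._lock:
                ev = threading.Event()
                self._events.setdefault(uuid, {})[name] = ev
                return ev

        def pop(self, uuid, name):
            with self._lock:
                return self._events.get(uuid, {}).pop(name, None)

    registry = InstanceEvents()
    ev = registry.pop('b5941b54-9cd2-465c-89c0-3cf87ebed83e',
                      'network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0')
    if ev is None:
        print('Received unexpected event ...')  # matches the WARNING above
    else:
        ev.set()  # wakes the thread blocked in wait_for_instance_event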
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:59.094 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d48943dc-ed90-4bc6-9fdb-5b20007c170d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:59.095 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-f8edf066-3c6a-45fb-bc20-36dc74f8aee6
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/f8edf066-3c6a-45fb-bc20-36dc74f8aee6.pid.haproxy
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID f8edf066-3c6a-45fb-bc20-36dc74f8aee6
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:43:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:43:59.096 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'env', 'PROCESS_TAG=haproxy-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f8edf066-3c6a-45fb-bc20-36dc74f8aee6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
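The config dump above is the per-network haproxy the agent writes: it binds 169.254.169.254:80 inside the ovnmeta namespace, forwards requests to the metadata agent's unix socket at /var/lib/neutron/metadata_proxy, and stamps each request with X-OVN-Network-ID so the agent can resolve the requesting instance. The rootwrap invocation it then runs is, stripped of privilege separation, equivalent to this sketch (argv copied from the log; must run as root):

    import subprocess

    cmd = [
        'ip', 'netns', 'exec', 'ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6',
        'env', 'PROCESS_TAG=haproxy-f8edf066-3c6a-45fb-bc20-36dc74f8aee6',
        'haproxy', '-f',
        '/var/lib/neutron/ovn-metadata-proxy/'
        'f8edf066-3c6a-45fb-bc20-36dc74f8aee6.conf',
    ]
    # haproxy backgrounds itself ('daemon' in the config), so this returns
    # once the workers are forked -- the NOTICE lines further down.
    subprocess.run(cmd, check=True)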
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.491 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for b5941b54-9cd2-465c-89c0-3cf87ebed83e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.491 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023439.4866447, b5941b54-9cd2-465c-89c0-3cf87ebed83e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.491 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.493 244018 DEBUG nova.compute.manager [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.493 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.496 244018 INFO nova.virt.libvirt.driver [-] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Instance spawned successfully.#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.497 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.509 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.515 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.518 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.519 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.519 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.520 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.520 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.521 244018 DEBUG nova.virt.libvirt.driver [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:43:59 np0005629333 podman[342577]: 2026-02-25 12:43:59.426115957 +0000 UTC m=+0.019522631 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.529 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.530 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023439.4872682, b5941b54-9cd2-465c-89c0-3cf87ebed83e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.530 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] VM Started (Lifecycle Event)#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.552 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.556 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.573 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.579 244018 DEBUG nova.compute.manager [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:43:59 np0005629333 podman[342577]: 2026-02-25 12:43:59.634772586 +0000 UTC m=+0.228179270 container create 4af2c7d409d71fe65160f469f65703643059bf45beb37d2a55ef2d1cbc32c7da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_managed=true)
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.660 244018 DEBUG oslo_concurrency.lockutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.661 244018 DEBUG oslo_concurrency.lockutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.661 244018 DEBUG nova.objects.instance [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.725 244018 DEBUG oslo_concurrency.lockutils [None req-35553c4b-1010-474f-9c2b-b2fe3fe62535 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:43:59 np0005629333 nova_compute[244014]: 2026-02-25 12:43:59.750 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:43:59 np0005629333 systemd[1]: Started libpod-conmon-4af2c7d409d71fe65160f469f65703643059bf45beb37d2a55ef2d1cbc32c7da.scope.
Feb 25 07:43:59 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:43:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1caf97c9deaa2249bb8b02c7dbf21ee80762224b30ee568b1931de368be96fa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:43:59 np0005629333 podman[342577]: 2026-02-25 12:43:59.969497301 +0000 UTC m=+0.562903995 container init 4af2c7d409d71fe65160f469f65703643059bf45beb37d2a55ef2d1cbc32c7da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 07:43:59 np0005629333 podman[342577]: 2026-02-25 12:43:59.974724079 +0000 UTC m=+0.568130733 container start 4af2c7d409d71fe65160f469f65703643059bf45beb37d2a55ef2d1cbc32c7da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6, org.label-schema.build-date=20260223, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 25 07:43:59 np0005629333 neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6[342597]: [NOTICE]   (342601) : New worker (342603) forked
Feb 25 07:43:59 np0005629333 neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6[342597]: [NOTICE]   (342601) : Loading success.
Feb 25 07:44:00 np0005629333 nova_compute[244014]: 2026-02-25 12:44:00.127 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 305 active+clean; 325 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 132 op/s
Feb 25 07:44:01 np0005629333 nova_compute[244014]: 2026-02-25 12:44:01.354 244018 DEBUG nova.compute.manager [req-d0cb0b1b-5b65-493e-a86b-7679ee0d6477 req-b1e01f6f-b4ab-442c-8cd7-fe61bf24d82d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received event network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:44:01 np0005629333 nova_compute[244014]: 2026-02-25 12:44:01.354 244018 DEBUG oslo_concurrency.lockutils [req-d0cb0b1b-5b65-493e-a86b-7679ee0d6477 req-b1e01f6f-b4ab-442c-8cd7-fe61bf24d82d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:01 np0005629333 nova_compute[244014]: 2026-02-25 12:44:01.355 244018 DEBUG oslo_concurrency.lockutils [req-d0cb0b1b-5b65-493e-a86b-7679ee0d6477 req-b1e01f6f-b4ab-442c-8cd7-fe61bf24d82d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:01 np0005629333 nova_compute[244014]: 2026-02-25 12:44:01.355 244018 DEBUG oslo_concurrency.lockutils [req-d0cb0b1b-5b65-493e-a86b-7679ee0d6477 req-b1e01f6f-b4ab-442c-8cd7-fe61bf24d82d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:01 np0005629333 nova_compute[244014]: 2026-02-25 12:44:01.355 244018 DEBUG nova.compute.manager [req-d0cb0b1b-5b65-493e-a86b-7679ee0d6477 req-b1e01f6f-b4ab-442c-8cd7-fe61bf24d82d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] No waiting events found dispatching network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:44:01 np0005629333 nova_compute[244014]: 2026-02-25 12:44:01.356 244018 WARNING nova.compute.manager [req-d0cb0b1b-5b65-493e-a86b-7679ee0d6477 req-b1e01f6f-b4ab-442c-8cd7-fe61bf24d82d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received unexpected event network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:44:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:44:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:44:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:44:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:44:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:44:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:44:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 327 MiB data, 930 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.1 MiB/s wr, 202 op/s
Feb 25 07:44:02 np0005629333 nova_compute[244014]: 2026-02-25 12:44:02.424 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:03 np0005629333 nova_compute[244014]: 2026-02-25 12:44:03.132 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:44:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 305 active+clean; 334 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.5 MiB/s wr, 204 op/s
Feb 25 07:44:04 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:04Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:74:17:b4 10.100.0.4
Feb 25 07:44:04 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:04Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:74:17:b4 10.100.0.4
Feb 25 07:44:05 np0005629333 nova_compute[244014]: 2026-02-25 12:44:05.130 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 305 active+clean; 334 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.5 MiB/s wr, 189 op/s
Feb 25 07:44:07 np0005629333 nova_compute[244014]: 2026-02-25 12:44:07.425 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:44:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1889: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.9 MiB/s wr, 241 op/s
Feb 25 07:44:10 np0005629333 nova_compute[244014]: 2026-02-25 12:44:10.132 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 358 MiB data, 955 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Feb 25 07:44:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 379 MiB data, 974 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.8 MiB/s wr, 161 op/s
Feb 25 07:44:12 np0005629333 nova_compute[244014]: 2026-02-25 12:44:12.427 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:12 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:12Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3e:93:a9 10.100.0.5
Feb 25 07:44:12 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:12Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3e:93:a9 10.100.0.5
Feb 25 07:44:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:44:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 383 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 855 KiB/s rd, 3.9 MiB/s wr, 128 op/s
Feb 25 07:44:15 np0005629333 nova_compute[244014]: 2026-02-25 12:44:15.136 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.092 244018 DEBUG nova.compute.manager [req-e2cfcce1-1e9f-44f7-bff2-d5fcaaf68717 req-ebac838e-d220-4aac-8533-9e34d11bbdff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-changed-d71dae92-b542-404e-b4cc-ecad408ed655 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.093 244018 DEBUG nova.compute.manager [req-e2cfcce1-1e9f-44f7-bff2-d5fcaaf68717 req-ebac838e-d220-4aac-8533-9e34d11bbdff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Refreshing instance network info cache due to event network-changed-d71dae92-b542-404e-b4cc-ecad408ed655. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.093 244018 DEBUG oslo_concurrency.lockutils [req-e2cfcce1-1e9f-44f7-bff2-d5fcaaf68717 req-ebac838e-d220-4aac-8533-9e34d11bbdff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.093 244018 DEBUG oslo_concurrency.lockutils [req-e2cfcce1-1e9f-44f7-bff2-d5fcaaf68717 req-ebac838e-d220-4aac-8533-9e34d11bbdff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.094 244018 DEBUG nova.network.neutron [req-e2cfcce1-1e9f-44f7-bff2-d5fcaaf68717 req-ebac838e-d220-4aac-8533-9e34d11bbdff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Refreshing network info cache for port d71dae92-b542-404e-b4cc-ecad408ed655 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.184 244018 DEBUG oslo_concurrency.lockutils [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.185 244018 DEBUG oslo_concurrency.lockutils [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.185 244018 DEBUG oslo_concurrency.lockutils [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.186 244018 DEBUG oslo_concurrency.lockutils [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.186 244018 DEBUG oslo_concurrency.lockutils [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.187 244018 INFO nova.compute.manager [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Terminating instance#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.188 244018 DEBUG nova.compute.manager [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:44:16 np0005629333 kernel: tapd71dae92-b5 (unregistering): left promiscuous mode
Feb 25 07:44:16 np0005629333 NetworkManager[49836]: <info>  [1772023456.2994] device (tapd71dae92-b5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:44:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:16Z|01093|binding|INFO|Releasing lport d71dae92-b542-404e-b4cc-ecad408ed655 from this chassis (sb_readonly=0)
Feb 25 07:44:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:16Z|01094|binding|INFO|Setting lport d71dae92-b542-404e-b4cc-ecad408ed655 down in Southbound
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.304 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:16Z|01095|binding|INFO|Removing iface tapd71dae92-b5 ovn-installed in OVS
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.307 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.314 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:17:b4 10.100.0.4'], port_security=['fa:16:3e:74:17:b4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0cf44a3d-3002-496e-a2d4-b5607642ce07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4edb1eef-979c-4be1-8c88-b734bc363731, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=d71dae92-b542-404e-b4cc-ecad408ed655) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:44:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.317 157129 INFO neutron.agent.ovn.metadata.agent [-] Port d71dae92-b542-404e-b4cc-ecad408ed655 in datapath e0ff7905-af45-428a-b6a0-d6e1209fd009 unbound from our chassis#033[00m
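The "Matched UPDATE" line above is the agent's ovsdbapp RowEvent firing as the Port_Binding row's chassis column empties; the INFO that follows is its handler concluding the port left this chassis. A sketch of that event machinery with an illustrative handler (ovsdbapp's RowEvent base class is real; the body here is not neutron's):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Fire on any update to a Port_Binding row.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old=None):
            # Only rows whose 'chassis' column actually changed.
            return hasattr(old, 'chassis')

        def run(self, event, row, old):
            print('lport %s moved off chassis' % row.logical_port)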
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.318 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:16 np0005629333 kernel: tap18ca6c9a-c1 (unregistering): left promiscuous mode
Feb 25 07:44:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.321 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e0ff7905-af45-428a-b6a0-d6e1209fd009#033[00m
Feb 25 07:44:16 np0005629333 NetworkManager[49836]: <info>  [1772023456.3266] device (tap18ca6c9a-c1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:44:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.335 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d3fbb67c-361e-421a-962d-2e5140a6e121]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:16Z|01096|binding|INFO|Releasing lport 18ca6c9a-c1c9-4a48-b124-25942ebef5df from this chassis (sb_readonly=0)
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.336 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:16Z|01097|binding|INFO|Setting lport 18ca6c9a-c1c9-4a48-b124-25942ebef5df down in Southbound
Feb 25 07:44:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:16Z|01098|binding|INFO|Removing iface tap18ca6c9a-c1 ovn-installed in OVS
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.339 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.343 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 305 active+clean; 383 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 634 KiB/s rd, 3.5 MiB/s wr, 111 op/s
Feb 25 07:44:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.362 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:4f:81 2001:db8::f816:3eff:fe5d:4f81'], port_security=['fa:16:3e:5d:4f:81 2001:db8::f816:3eff:fe5d:4f81'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe5d:4f81/64', 'neutron:device_id': 'c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4445989-91e8-4869-98cb-32b4b81bb3da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0cf44a3d-3002-496e-a2d4-b5607642ce07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1f88174-704f-4cf1-b79c-73e2410029c4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=18ca6c9a-c1c9-4a48-b124-25942ebef5df) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:44:16 np0005629333 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Feb 25 07:44:16 np0005629333 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006f.scope: Consumed 12.506s CPU time.
Feb 25 07:44:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.371 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1d9892c5-2367-4564-9256-b6f3bc9e1cbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:16 np0005629333 systemd-machined[210048]: Machine qemu-139-instance-0000006f terminated.
Feb 25 07:44:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.376 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3c5dcdd8-c373-4d3f-b560-e0c550e448b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.404 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[702fb74e-9633-413d-81e1-86ff9928881c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:16 np0005629333 NetworkManager[49836]: <info>  [1772023456.4179] manager: (tap18ca6c9a-c1): new Tun device (/org/freedesktop/NetworkManager/Devices/456)
Feb 25 07:44:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.424 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c0776678-4aed-4291-9eb9-8c1883d42281]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape0ff7905-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:d2:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 320], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535889, 'reachable_time': 44896, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342638, 'error': None, 'target': 'ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.432 244018 INFO nova.virt.libvirt.driver [-] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Instance destroyed successfully.#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.433 244018 DEBUG nova.objects.instance [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:44:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.439 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[58c04bfa-83e9-45bf-8057-74b319fe9623]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape0ff7905-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535900, 'tstamp': 535900}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342654, 'error': None, 'target': 'ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape0ff7905-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535902, 'tstamp': 535902}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342654, 'error': None, 'target': 'ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
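The RTM_NEWADDR reply above confirms the namespace-side veth (tape0ff7905-a1) carries both the network-local address 10.100.0.2/28 and the metadata address 169.254.169.254/32. A companion to the earlier link-query sketch, again assuming pyroute2 and a live namespace:

    from pyroute2 import NetNS

    with NetNS('ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009') as ns:
        idx = ns.link_lookup(ifname='tape0ff7905-a1')[0]
        for msg in ns.get_addr(index=idx):  # RTM_NEWADDR messages as above
            print(msg.get_attr('IFA_ADDRESS'), '/', msg['prefixlen'])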
Feb 25 07:44:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.441 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0ff7905-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.442 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.444 244018 DEBUG nova.virt.libvirt.vif [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:43:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1190047841',display_name='tempest-TestGettingAddress-server-1190047841',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1190047841',id=111,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjwXf4BRPjcASt12gAye0nTLMsY81chM8AwzuzQAmadzcHhb8MVkuSUIiO1cL5+mTUrVlDTjaV+Wk+tczkCutLjKZYT8KokDtS1+FrxD3TeeYwMZXPHCENgnD6/bPtHPg==',key_name='tempest-TestGettingAddress-617797010',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:43:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-gjjevuzx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:43:51Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d71dae92-b542-404e-b4cc-ecad408ed655", "address": "fa:16:3e:74:17:b4", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd71dae92-b5", "ovs_interfaceid": "d71dae92-b542-404e-b4cc-ecad408ed655", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.445 244018 DEBUG nova.network.os_vif_util [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "d71dae92-b542-404e-b4cc-ecad408ed655", "address": "fa:16:3e:74:17:b4", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd71dae92-b5", "ovs_interfaceid": "d71dae92-b542-404e-b4cc-ecad408ed655", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.445 244018 DEBUG nova.network.os_vif_util [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:74:17:b4,bridge_name='br-int',has_traffic_filtering=True,id=d71dae92-b542-404e-b4cc-ecad408ed655,network=Network(e0ff7905-af45-428a-b6a0-d6e1209fd009),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd71dae92-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.445 244018 DEBUG os_vif [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:74:17:b4,bridge_name='br-int',has_traffic_filtering=True,id=d71dae92-b542-404e-b4cc-ecad408ed655,network=Network(e0ff7905-af45-428a-b6a0-d6e1209fd009),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd71dae92-b5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.447 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.447 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd71dae92-b5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.448 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.448 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape0ff7905-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:44:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.448 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:44:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.449 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape0ff7905-a0, col_values=(('external_ids', {'iface-id': 'a592354c-38c0-41be-9126-87d6dec6c687'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:44:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.449 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.450 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:44:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.451 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 18ca6c9a-c1c9-4a48-b124-25942ebef5df in datapath e4445989-91e8-4869-98cb-32b4b81bb3da unbound from our chassis#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.452 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.452 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e4445989-91e8-4869-98cb-32b4b81bb3da#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.453 244018 INFO os_vif [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:74:17:b4,bridge_name='br-int',has_traffic_filtering=True,id=d71dae92-b542-404e-b4cc-ecad408ed655,network=Network(e0ff7905-af45-428a-b6a0-d6e1209fd009),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd71dae92-b5')#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.454 244018 DEBUG nova.virt.libvirt.vif [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:43:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1190047841',display_name='tempest-TestGettingAddress-server-1190047841',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1190047841',id=111,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjwXf4BRPjcASt12gAye0nTLMsY81chM8AwzuzQAmadzcHhb8MVkuSUIiO1cL5+mTUrVlDTjaV+Wk+tczkCutLjKZYT8KokDtS1+FrxD3TeeYwMZXPHCENgnD6/bPtHPg==',key_name='tempest-TestGettingAddress-617797010',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:43:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-gjjevuzx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:43:51Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "address": "fa:16:3e:5d:4f:81", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5d:4f81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ca6c9a-c1", "ovs_interfaceid": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.454 244018 DEBUG nova.network.os_vif_util [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "address": "fa:16:3e:5d:4f:81", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5d:4f81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ca6c9a-c1", "ovs_interfaceid": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.454 244018 DEBUG nova.network.os_vif_util [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:4f:81,bridge_name='br-int',has_traffic_filtering=True,id=18ca6c9a-c1c9-4a48-b124-25942ebef5df,network=Network(e4445989-91e8-4869-98cb-32b4b81bb3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18ca6c9a-c1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.455 244018 DEBUG os_vif [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:4f:81,bridge_name='br-int',has_traffic_filtering=True,id=18ca6c9a-c1c9-4a48-b124-25942ebef5df,network=Network(e4445989-91e8-4869-98cb-32b4b81bb3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18ca6c9a-c1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.456 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.456 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18ca6c9a-c1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.458 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.460 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.461 244018 INFO os_vif [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:4f:81,bridge_name='br-int',has_traffic_filtering=True,id=18ca6c9a-c1c9-4a48-b124-25942ebef5df,network=Network(e4445989-91e8-4869-98cb-32b4b81bb3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18ca6c9a-c1')#033[00m
Feb 25 07:44:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.464 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec07bb4-75d7-4e34-802e-a5f03befec32]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.485 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[37e637bf-179a-48b6-8a44-3cca001678d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.487 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ac284767-2b64-41c6-a2fd-260bef9a46eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.506 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[775548b1-5963-41a9-b119-49be79a99349]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.521 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[94d4517c-2ad4-407e-a204-a9181203ee00]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4445989-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:7d:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 28, 'tx_packets': 5, 'rx_bytes': 2472, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 28, 'tx_packets': 5, 'rx_bytes': 2472, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 321], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535975, 'reachable_time': 27894, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 28, 'inoctets': 2080, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 28, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2080, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 28, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342681, 'error': None, 'target': 'ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.534 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c0f7f6ca-3cae-4b01-9eeb-91bee4447e0d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape4445989-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 535984, 'tstamp': 535984}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342682, 'error': None, 'target': 'ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.535 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4445989-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.537 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.538 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.538 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4445989-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:44:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.539 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:44:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.539 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape4445989-90, col_values=(('external_ids', {'iface-id': '4a3f602e-68fb-46ca-b4ed-cec3ad6d3ec5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:44:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:16.539 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.711 244018 DEBUG nova.compute.manager [req-105a3551-107c-467a-88ed-c7b7aaa897f4 req-ee9c13d6-2739-49f2-9364-4aefe3ddeb31 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-vif-unplugged-d71dae92-b542-404e-b4cc-ecad408ed655 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.712 244018 DEBUG oslo_concurrency.lockutils [req-105a3551-107c-467a-88ed-c7b7aaa897f4 req-ee9c13d6-2739-49f2-9364-4aefe3ddeb31 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.712 244018 DEBUG oslo_concurrency.lockutils [req-105a3551-107c-467a-88ed-c7b7aaa897f4 req-ee9c13d6-2739-49f2-9364-4aefe3ddeb31 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.712 244018 DEBUG oslo_concurrency.lockutils [req-105a3551-107c-467a-88ed-c7b7aaa897f4 req-ee9c13d6-2739-49f2-9364-4aefe3ddeb31 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.712 244018 DEBUG nova.compute.manager [req-105a3551-107c-467a-88ed-c7b7aaa897f4 req-ee9c13d6-2739-49f2-9364-4aefe3ddeb31 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] No waiting events found dispatching network-vif-unplugged-d71dae92-b542-404e-b4cc-ecad408ed655 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.712 244018 DEBUG nova.compute.manager [req-105a3551-107c-467a-88ed-c7b7aaa897f4 req-ee9c13d6-2739-49f2-9364-4aefe3ddeb31 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-vif-unplugged-d71dae92-b542-404e-b4cc-ecad408ed655 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.826 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:16 np0005629333 nova_compute[244014]: 2026-02-25 12:44:16.827 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:17 np0005629333 nova_compute[244014]: 2026-02-25 12:44:17.027 244018 DEBUG nova.compute.manager [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:44:17 np0005629333 nova_compute[244014]: 2026-02-25 12:44:17.132 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:17 np0005629333 nova_compute[244014]: 2026-02-25 12:44:17.132 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:17 np0005629333 nova_compute[244014]: 2026-02-25 12:44:17.142 244018 DEBUG nova.virt.hardware [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:44:17 np0005629333 nova_compute[244014]: 2026-02-25 12:44:17.142 244018 INFO nova.compute.claims [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:44:17 np0005629333 nova_compute[244014]: 2026-02-25 12:44:17.330 244018 DEBUG oslo_concurrency.processutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:44:17 np0005629333 nova_compute[244014]: 2026-02-25 12:44:17.500 244018 INFO nova.virt.libvirt.driver [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Deleting instance files /var/lib/nova/instances/c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_del#033[00m
Feb 25 07:44:17 np0005629333 nova_compute[244014]: 2026-02-25 12:44:17.501 244018 INFO nova.virt.libvirt.driver [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Deletion of /var/lib/nova/instances/c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367_del complete#033[00m
Feb 25 07:44:17 np0005629333 nova_compute[244014]: 2026-02-25 12:44:17.551 244018 INFO nova.compute.manager [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Took 1.36 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:44:17 np0005629333 nova_compute[244014]: 2026-02-25 12:44:17.552 244018 DEBUG oslo.service.loopingcall [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:44:17 np0005629333 nova_compute[244014]: 2026-02-25 12:44:17.553 244018 DEBUG nova.compute.manager [-] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:44:17 np0005629333 nova_compute[244014]: 2026-02-25 12:44:17.553 244018 DEBUG nova.network.neutron [-] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:44:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:44:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2911438101' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:44:17 np0005629333 nova_compute[244014]: 2026-02-25 12:44:17.873 244018 DEBUG oslo_concurrency.processutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:44:17 np0005629333 nova_compute[244014]: 2026-02-25 12:44:17.879 244018 DEBUG nova.compute.provider_tree [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:44:17 np0005629333 nova_compute[244014]: 2026-02-25 12:44:17.961 244018 DEBUG nova.scheduler.client.report [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:44:17 np0005629333 nova_compute[244014]: 2026-02-25 12:44:17.991 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:17 np0005629333 nova_compute[244014]: 2026-02-25 12:44:17.992 244018 DEBUG nova.compute.manager [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.047 244018 DEBUG nova.compute.manager [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.048 244018 DEBUG nova.network.neutron [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.067 244018 INFO nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.085 244018 DEBUG nova.compute.manager [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.140 244018 DEBUG nova.network.neutron [req-e2cfcce1-1e9f-44f7-bff2-d5fcaaf68717 req-ebac838e-d220-4aac-8533-9e34d11bbdff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Updated VIF entry in instance network info cache for port d71dae92-b542-404e-b4cc-ecad408ed655. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.141 244018 DEBUG nova.network.neutron [req-e2cfcce1-1e9f-44f7-bff2-d5fcaaf68717 req-ebac838e-d220-4aac-8533-9e34d11bbdff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Updating instance_info_cache with network_info: [{"id": "d71dae92-b542-404e-b4cc-ecad408ed655", "address": "fa:16:3e:74:17:b4", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd71dae92-b5", "ovs_interfaceid": "d71dae92-b542-404e-b4cc-ecad408ed655", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "address": "fa:16:3e:5d:4f:81", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5d:4f81", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18ca6c9a-c1", "ovs_interfaceid": "18ca6c9a-c1c9-4a48-b124-25942ebef5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.187 244018 DEBUG oslo_concurrency.lockutils [req-e2cfcce1-1e9f-44f7-bff2-d5fcaaf68717 req-ebac838e-d220-4aac-8533-9e34d11bbdff 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.196 244018 DEBUG nova.compute.manager [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.198 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.198 244018 INFO nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Creating image(s)#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.224 244018 DEBUG nova.storage.rbd_utils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] rbd image 874359d8-3251-4416-82dc-f6776853e384_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.251 244018 DEBUG nova.storage.rbd_utils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] rbd image 874359d8-3251-4416-82dc-f6776853e384_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.279 244018 DEBUG nova.storage.rbd_utils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] rbd image 874359d8-3251-4416-82dc-f6776853e384_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.284 244018 DEBUG oslo_concurrency.processutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.322 244018 DEBUG nova.policy [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '44c0b78107ea4f7381e82a02c5954e7c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6c0adb05683141e7a0b866f450e410e0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:44:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:44:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 305 active+clean; 312 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 679 KiB/s rd, 3.6 MiB/s wr, 139 op/s
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.393 244018 DEBUG oslo_concurrency.processutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.394 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.394 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.394 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.413 244018 DEBUG nova.storage.rbd_utils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] rbd image 874359d8-3251-4416-82dc-f6776853e384_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.417 244018 DEBUG oslo_concurrency.processutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 874359d8-3251-4416-82dc-f6776853e384_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.838 244018 DEBUG nova.compute.manager [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-vif-plugged-d71dae92-b542-404e-b4cc-ecad408ed655 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.838 244018 DEBUG oslo_concurrency.lockutils [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.839 244018 DEBUG oslo_concurrency.lockutils [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.839 244018 DEBUG oslo_concurrency.lockutils [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.839 244018 DEBUG nova.compute.manager [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] No waiting events found dispatching network-vif-plugged-d71dae92-b542-404e-b4cc-ecad408ed655 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.839 244018 WARNING nova.compute.manager [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received unexpected event network-vif-plugged-d71dae92-b542-404e-b4cc-ecad408ed655 for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.839 244018 DEBUG nova.compute.manager [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-vif-unplugged-18ca6c9a-c1c9-4a48-b124-25942ebef5df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.840 244018 DEBUG oslo_concurrency.lockutils [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.840 244018 DEBUG oslo_concurrency.lockutils [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.841 244018 DEBUG oslo_concurrency.lockutils [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.841 244018 DEBUG nova.compute.manager [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] No waiting events found dispatching network-vif-unplugged-18ca6c9a-c1c9-4a48-b124-25942ebef5df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.841 244018 DEBUG nova.compute.manager [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-vif-unplugged-18ca6c9a-c1c9-4a48-b124-25942ebef5df for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.841 244018 DEBUG nova.compute.manager [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-vif-plugged-18ca6c9a-c1c9-4a48-b124-25942ebef5df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.841 244018 DEBUG oslo_concurrency.lockutils [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.842 244018 DEBUG oslo_concurrency.lockutils [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.842 244018 DEBUG oslo_concurrency.lockutils [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.842 244018 DEBUG nova.compute.manager [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] No waiting events found dispatching network-vif-plugged-18ca6c9a-c1c9-4a48-b124-25942ebef5df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.842 244018 WARNING nova.compute.manager [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received unexpected event network-vif-plugged-18ca6c9a-c1c9-4a48-b124-25942ebef5df for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.842 244018 DEBUG nova.compute.manager [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-vif-deleted-18ca6c9a-c1c9-4a48-b124-25942ebef5df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.843 244018 INFO nova.compute.manager [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Neutron deleted interface 18ca6c9a-c1c9-4a48-b124-25942ebef5df; detaching it from the instance and deleting it from the info cache#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.843 244018 DEBUG nova.network.neutron [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Updating instance_info_cache with network_info: [{"id": "d71dae92-b542-404e-b4cc-ecad408ed655", "address": "fa:16:3e:74:17:b4", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd71dae92-b5", "ovs_interfaceid": "d71dae92-b542-404e-b4cc-ecad408ed655", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:44:18 np0005629333 nova_compute[244014]: 2026-02-25 12:44:18.873 244018 DEBUG nova.compute.manager [req-514fe318-a9ec-4a64-8960-dd28b11824f9 req-43767e3f-de49-4e12-894e-3d09c6f17735 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Detach interface failed, port_id=18ca6c9a-c1c9-4a48-b124-25942ebef5df, reason: Instance c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 25 07:44:19 np0005629333 nova_compute[244014]: 2026-02-25 12:44:19.225 244018 DEBUG nova.network.neutron [-] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:44:19 np0005629333 nova_compute[244014]: 2026-02-25 12:44:19.245 244018 INFO nova.compute.manager [-] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Took 1.69 seconds to deallocate network for instance.#033[00m
Feb 25 07:44:19 np0005629333 nova_compute[244014]: 2026-02-25 12:44:19.299 244018 DEBUG oslo_concurrency.lockutils [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:19 np0005629333 nova_compute[244014]: 2026-02-25 12:44:19.300 244018 DEBUG oslo_concurrency.lockutils [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:19 np0005629333 nova_compute[244014]: 2026-02-25 12:44:19.320 244018 DEBUG oslo_concurrency.processutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 874359d8-3251-4416-82dc-f6776853e384_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.903s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:44:19 np0005629333 nova_compute[244014]: 2026-02-25 12:44:19.368 244018 INFO nova.compute.manager [None req-86dbe725-5961-437a-b145-78245956a81f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Get console output#033[00m
Feb 25 07:44:19 np0005629333 nova_compute[244014]: 2026-02-25 12:44:19.410 244018 DEBUG nova.storage.rbd_utils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] resizing rbd image 874359d8-3251-4416-82dc-f6776853e384_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 25 07:44:19 np0005629333 nova_compute[244014]: 2026-02-25 12:44:19.452 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Feb 25 07:44:19 np0005629333 nova_compute[244014]: 2026-02-25 12:44:19.485 244018 DEBUG oslo_concurrency.processutils [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:44:19 np0005629333 nova_compute[244014]: 2026-02-25 12:44:19.568 244018 DEBUG nova.objects.instance [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lazy-loading 'migration_context' on Instance uuid 874359d8-3251-4416-82dc-f6776853e384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:44:19 np0005629333 nova_compute[244014]: 2026-02-25 12:44:19.595 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:44:19 np0005629333 nova_compute[244014]: 2026-02-25 12:44:19.595 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Ensure instance console log exists: /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:44:19 np0005629333 nova_compute[244014]: 2026-02-25 12:44:19.596 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:19 np0005629333 nova_compute[244014]: 2026-02-25 12:44:19.596 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:19 np0005629333 nova_compute[244014]: 2026-02-25 12:44:19.596 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:19 np0005629333 nova_compute[244014]: 2026-02-25 12:44:19.682 244018 DEBUG nova.network.neutron [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Successfully created port: 97fb9f99-cb59-4581-8866-375ea3e167d7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:44:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:44:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3026762265' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.086 244018 DEBUG oslo_concurrency.processutils [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.094 244018 DEBUG nova.compute.provider_tree [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.113 244018 DEBUG nova.scheduler.client.report [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.135 244018 DEBUG oslo_concurrency.lockutils [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.138 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.167 244018 INFO nova.scheduler.client.report [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367#033[00m
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.234 244018 DEBUG oslo_concurrency.lockutils [None req-87904a7a-2a50-4b60-9175-0b425b30c172 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 312 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.2 MiB/s wr, 87 op/s
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.578 244018 DEBUG oslo_concurrency.lockutils [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.578 244018 DEBUG oslo_concurrency.lockutils [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.579 244018 DEBUG oslo_concurrency.lockutils [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.579 244018 DEBUG oslo_concurrency.lockutils [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.580 244018 DEBUG oslo_concurrency.lockutils [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.581 244018 INFO nova.compute.manager [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Terminating instance#033[00m
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.583 244018 DEBUG nova.compute.manager [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:44:20 np0005629333 kernel: tap43ea1958-7f (unregistering): left promiscuous mode
Feb 25 07:44:20 np0005629333 NetworkManager[49836]: <info>  [1772023460.6281] device (tap43ea1958-7f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.634 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:20 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:20Z|01099|binding|INFO|Releasing lport 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 from this chassis (sb_readonly=0)
Feb 25 07:44:20 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:20Z|01100|binding|INFO|Setting lport 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 down in Southbound
Feb 25 07:44:20 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:20Z|01101|binding|INFO|Removing iface tap43ea1958-7f ovn-installed in OVS
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.640 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:20.645 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:93:a9 10.100.0.5'], port_security=['fa:16:3e:3e:93:a9 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b5941b54-9cd2-465c-89c0-3cf87ebed83e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a5401511-20ab-4c2d-9471-9223f0b77f54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32cc7d41-346e-4e2e-a938-6389d190a22f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=43ea1958-7fd9-47b6-be81-3eeb1b3801a0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:44:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:20.646 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 in datapath f8edf066-3c6a-45fb-bc20-36dc74f8aee6 unbound from our chassis#033[00m
Feb 25 07:44:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:20.647 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f8edf066-3c6a-45fb-bc20-36dc74f8aee6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:44:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:20.649 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7d5d0d23-9790-49c5-8931-67fff67c7d3c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:20.650 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6 namespace which is not needed anymore#033[00m
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.650 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:20 np0005629333 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Feb 25 07:44:20 np0005629333 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d0000006e.scope: Consumed 12.529s CPU time.
Feb 25 07:44:20 np0005629333 systemd-machined[210048]: Machine qemu-140-instance-0000006e terminated.
Feb 25 07:44:20 np0005629333 neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6[342597]: [NOTICE]   (342601) : haproxy version is 2.8.14-c23fe91
Feb 25 07:44:20 np0005629333 neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6[342597]: [NOTICE]   (342601) : path to executable is /usr/sbin/haproxy
Feb 25 07:44:20 np0005629333 neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6[342597]: [WARNING]  (342601) : Exiting Master process...
Feb 25 07:44:20 np0005629333 neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6[342597]: [ALERT]    (342601) : Current worker (342603) exited with code 143 (Terminated)
Feb 25 07:44:20 np0005629333 neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6[342597]: [WARNING]  (342601) : All workers exited. Exiting... (0)
Feb 25 07:44:20 np0005629333 systemd[1]: libpod-4af2c7d409d71fe65160f469f65703643059bf45beb37d2a55ef2d1cbc32c7da.scope: Deactivated successfully.
Feb 25 07:44:20 np0005629333 podman[342918]: 2026-02-25 12:44:20.784178334 +0000 UTC m=+0.047268785 container died 4af2c7d409d71fe65160f469f65703643059bf45beb37d2a55ef2d1cbc32c7da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.800 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.804 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:20 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4af2c7d409d71fe65160f469f65703643059bf45beb37d2a55ef2d1cbc32c7da-userdata-shm.mount: Deactivated successfully.
Feb 25 07:44:20 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e1caf97c9deaa2249bb8b02c7dbf21ee80762224b30ee568b1931de368be96fa-merged.mount: Deactivated successfully.
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.818 244018 INFO nova.virt.libvirt.driver [-] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Instance destroyed successfully.#033[00m
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.819 244018 DEBUG nova.objects.instance [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'resources' on Instance uuid b5941b54-9cd2-465c-89c0-3cf87ebed83e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:44:20 np0005629333 podman[342918]: 2026-02-25 12:44:20.827893338 +0000 UTC m=+0.090983809 container cleanup 4af2c7d409d71fe65160f469f65703643059bf45beb37d2a55ef2d1cbc32c7da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.831 244018 DEBUG nova.virt.libvirt.vif [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:43:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1420097374',display_name='tempest-TestNetworkAdvancedServerOps-server-1420097374',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1420097374',id=110,image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLB4lCGz708UDYBfDiKGSO/K1vvU3TZkSpUC/m/pNqcj9p06BKZCsaZ+HTq1tFiTei87P3smYtAHKXB341loC6n/SM62zSw05o3YeNjhjC3ZsqOrkJUnRdjhvIYhFIjYsQ==',key_name='tempest-TestNetworkAdvancedServerOps-1361605021',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:43:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-ivr9l6e8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='f0ef5a9a-23b8-4883-8e47-feb7403a11d8',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:43:59Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=b5941b54-9cd2-465c-89c0-3cf87ebed83e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.831 244018 DEBUG nova.network.os_vif_util [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.832 244018 DEBUG nova.network.os_vif_util [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3e:93:a9,bridge_name='br-int',has_traffic_filtering=True,id=43ea1958-7fd9-47b6-be81-3eeb1b3801a0,network=Network(f8edf066-3c6a-45fb-bc20-36dc74f8aee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ea1958-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.832 244018 DEBUG os_vif [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:93:a9,bridge_name='br-int',has_traffic_filtering=True,id=43ea1958-7fd9-47b6-be81-3eeb1b3801a0,network=Network(f8edf066-3c6a-45fb-bc20-36dc74f8aee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ea1958-7f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:44:20 np0005629333 systemd[1]: libpod-conmon-4af2c7d409d71fe65160f469f65703643059bf45beb37d2a55ef2d1cbc32c7da.scope: Deactivated successfully.
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.836 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.837 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43ea1958-7f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.838 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.840 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.841 244018 INFO os_vif [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3e:93:a9,bridge_name='br-int',has_traffic_filtering=True,id=43ea1958-7fd9-47b6-be81-3eeb1b3801a0,network=Network(f8edf066-3c6a-45fb-bc20-36dc74f8aee6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap43ea1958-7f')#033[00m
Feb 25 07:44:20 np0005629333 podman[342956]: 2026-02-25 12:44:20.880690728 +0000 UTC m=+0.037917661 container remove 4af2c7d409d71fe65160f469f65703643059bf45beb37d2a55ef2d1cbc32c7da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 07:44:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:20.885 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f4a9f8ff-99f3-467f-ae96-b2806a747f2f]: (4, ('Wed Feb 25 12:44:20 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6 (4af2c7d409d71fe65160f469f65703643059bf45beb37d2a55ef2d1cbc32c7da)\n4af2c7d409d71fe65160f469f65703643059bf45beb37d2a55ef2d1cbc32c7da\nWed Feb 25 12:44:20 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6 (4af2c7d409d71fe65160f469f65703643059bf45beb37d2a55ef2d1cbc32c7da)\n4af2c7d409d71fe65160f469f65703643059bf45beb37d2a55ef2d1cbc32c7da\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:20.886 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9085521d-f0a3-433e-be1a-5409c21464c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:20.888 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8edf066-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:44:20 np0005629333 kernel: tapf8edf066-30: left promiscuous mode
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.889 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.895 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.896 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:20.898 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c942d6ba-e63c-408a-83e4-b42ef9a27225]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:20.910 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d4d8396e-d0f3-49df-89de-037357f71811]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:20.911 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[537e6637-e77c-40f2-bf31-99313f6ff988]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:20.921 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2aaffe88-6af0-46ff-b7c9-4734a54d41d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540826, 'reachable_time': 22751, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342989, 'error': None, 'target': 'ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:20 np0005629333 systemd[1]: run-netns-ovnmeta\x2df8edf066\x2d3c6a\x2d45fb\x2dbc20\x2d36dc74f8aee6.mount: Deactivated successfully.
Feb 25 07:44:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:20.923 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f8edf066-3c6a-45fb-bc20-36dc74f8aee6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:44:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:20.924 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[bb19a566-bef4-45ff-9d4e-144c27163409]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.932 244018 DEBUG nova.network.neutron [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Successfully updated port: 97fb9f99-cb59-4581-8866-375ea3e167d7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.943 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.944 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquired lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:44:20 np0005629333 nova_compute[244014]: 2026-02-25 12:44:20.944 244018 DEBUG nova.network.neutron [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:44:21 np0005629333 nova_compute[244014]: 2026-02-25 12:44:21.125 244018 INFO nova.virt.libvirt.driver [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Deleting instance files /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e_del#033[00m
Feb 25 07:44:21 np0005629333 nova_compute[244014]: 2026-02-25 12:44:21.125 244018 INFO nova.virt.libvirt.driver [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Deletion of /var/lib/nova/instances/b5941b54-9cd2-465c-89c0-3cf87ebed83e_del complete#033[00m
Feb 25 07:44:21 np0005629333 nova_compute[244014]: 2026-02-25 12:44:21.176 244018 DEBUG nova.network.neutron [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:44:21 np0005629333 nova_compute[244014]: 2026-02-25 12:44:21.189 244018 INFO nova.compute.manager [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:44:21 np0005629333 nova_compute[244014]: 2026-02-25 12:44:21.189 244018 DEBUG oslo.service.loopingcall [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:44:21 np0005629333 nova_compute[244014]: 2026-02-25 12:44:21.190 244018 DEBUG nova.compute.manager [-] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:44:21 np0005629333 nova_compute[244014]: 2026-02-25 12:44:21.190 244018 DEBUG nova.network.neutron [-] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:44:21 np0005629333 nova_compute[244014]: 2026-02-25 12:44:21.350 244018 DEBUG nova.compute.manager [req-aa5e56ef-222a-4918-aacc-f28e240384c0 req-66f3c92c-0f16-4529-9c05-6611823d83b1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-changed-97fb9f99-cb59-4581-8866-375ea3e167d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:44:21 np0005629333 nova_compute[244014]: 2026-02-25 12:44:21.351 244018 DEBUG nova.compute.manager [req-aa5e56ef-222a-4918-aacc-f28e240384c0 req-66f3c92c-0f16-4529-9c05-6611823d83b1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Refreshing instance network info cache due to event network-changed-97fb9f99-cb59-4581-8866-375ea3e167d7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:44:21 np0005629333 nova_compute[244014]: 2026-02-25 12:44:21.351 244018 DEBUG oslo_concurrency.lockutils [req-aa5e56ef-222a-4918-aacc-f28e240384c0 req-66f3c92c-0f16-4529-9c05-6611823d83b1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:44:21 np0005629333 nova_compute[244014]: 2026-02-25 12:44:21.391 244018 DEBUG nova.compute.manager [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Received event network-vif-deleted-d71dae92-b542-404e-b4cc-ecad408ed655 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:44:21 np0005629333 nova_compute[244014]: 2026-02-25 12:44:21.391 244018 DEBUG nova.compute.manager [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received event network-changed-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:44:21 np0005629333 nova_compute[244014]: 2026-02-25 12:44:21.392 244018 DEBUG nova.compute.manager [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Refreshing instance network info cache due to event network-changed-43ea1958-7fd9-47b6-be81-3eeb1b3801a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:44:21 np0005629333 nova_compute[244014]: 2026-02-25 12:44:21.392 244018 DEBUG oslo_concurrency.lockutils [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-b5941b54-9cd2-465c-89c0-3cf87ebed83e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:44:21 np0005629333 nova_compute[244014]: 2026-02-25 12:44:21.393 244018 DEBUG oslo_concurrency.lockutils [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-b5941b54-9cd2-465c-89c0-3cf87ebed83e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:44:21 np0005629333 nova_compute[244014]: 2026-02-25 12:44:21.393 244018 DEBUG nova.network.neutron [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Refreshing network info cache for port 43ea1958-7fd9-47b6-be81-3eeb1b3801a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:44:21 np0005629333 nova_compute[244014]: 2026-02-25 12:44:21.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:44:21 np0005629333 nova_compute[244014]: 2026-02-25 12:44:21.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 07:44:21 np0005629333 nova_compute[244014]: 2026-02-25 12:44:21.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 07:44:21 np0005629333 nova_compute[244014]: 2026-02-25 12:44:21.894 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Feb 25 07:44:21 np0005629333 nova_compute[244014]: 2026-02-25 12:44:21.895 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.004 244018 DEBUG nova.network.neutron [-] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.030 244018 INFO nova.compute.manager [-] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Took 0.84 seconds to deallocate network for instance.#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.079 244018 DEBUG oslo_concurrency.lockutils [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.079 244018 DEBUG oslo_concurrency.lockutils [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.114 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.114 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.114 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.114 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid cf7f6093-44a3-4e8f-8970-db25cf0b4ab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.195 244018 DEBUG oslo_concurrency.processutils [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:44:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 305 active+clean; 287 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 385 KiB/s rd, 3.4 MiB/s wr, 133 op/s
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.588 244018 DEBUG nova.network.neutron [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updating instance_info_cache with network_info: [{"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.613 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Releasing lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.614 244018 DEBUG nova.compute.manager [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Instance network_info: |[{"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.614 244018 DEBUG oslo_concurrency.lockutils [req-aa5e56ef-222a-4918-aacc-f28e240384c0 req-66f3c92c-0f16-4529-9c05-6611823d83b1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.615 244018 DEBUG nova.network.neutron [req-aa5e56ef-222a-4918-aacc-f28e240384c0 req-66f3c92c-0f16-4529-9c05-6611823d83b1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Refreshing network info cache for port 97fb9f99-cb59-4581-8866-375ea3e167d7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.620 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Start _get_guest_xml network_info=[{"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.626 244018 WARNING nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.631 244018 DEBUG nova.virt.libvirt.host [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.632 244018 DEBUG nova.virt.libvirt.host [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.637 244018 DEBUG nova.virt.libvirt.host [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.638 244018 DEBUG nova.virt.libvirt.host [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.638 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.638 244018 DEBUG nova.virt.hardware [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.639 244018 DEBUG nova.virt.hardware [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.639 244018 DEBUG nova.virt.hardware [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.640 244018 DEBUG nova.virt.hardware [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.640 244018 DEBUG nova.virt.hardware [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.641 244018 DEBUG nova.virt.hardware [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.641 244018 DEBUG nova.virt.hardware [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.641 244018 DEBUG nova.virt.hardware [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.642 244018 DEBUG nova.virt.hardware [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.642 244018 DEBUG nova.virt.hardware [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.642 244018 DEBUG nova.virt.hardware [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.648 244018 DEBUG oslo_concurrency.processutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:44:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:44:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3087698293' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.718 244018 DEBUG oslo_concurrency.processutils [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.727 244018 DEBUG nova.compute.provider_tree [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.743 244018 DEBUG nova.scheduler.client.report [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.777 244018 DEBUG oslo_concurrency.lockutils [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.803 244018 INFO nova.scheduler.client.report [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Deleted allocations for instance b5941b54-9cd2-465c-89c0-3cf87ebed83e#033[00m
Feb 25 07:44:22 np0005629333 nova_compute[244014]: 2026-02-25 12:44:22.884 244018 DEBUG oslo_concurrency.lockutils [None req-920c6d5d-5a66-486a-a846-1f8226efef87 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.306s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.111 244018 DEBUG oslo_concurrency.lockutils [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.112 244018 DEBUG oslo_concurrency.lockutils [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.113 244018 DEBUG oslo_concurrency.lockutils [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.113 244018 DEBUG oslo_concurrency.lockutils [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.113 244018 DEBUG oslo_concurrency.lockutils [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.115 244018 INFO nova.compute.manager [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Terminating instance#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.116 244018 DEBUG nova.compute.manager [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:44:23 np0005629333 kernel: tap473bf89e-f4 (unregistering): left promiscuous mode
Feb 25 07:44:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:44:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/701796555' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:44:23 np0005629333 NetworkManager[49836]: <info>  [1772023463.1783] device (tap473bf89e-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:44:23 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:23Z|01102|binding|INFO|Releasing lport 473bf89e-f488-42b9-b6d3-d736d2a61760 from this chassis (sb_readonly=0)
Feb 25 07:44:23 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:23Z|01103|binding|INFO|Setting lport 473bf89e-f488-42b9-b6d3-d736d2a61760 down in Southbound
Feb 25 07:44:23 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:23Z|01104|binding|INFO|Removing iface tap473bf89e-f4 ovn-installed in OVS
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.185 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.193 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:8c:23 10.100.0.8'], port_security=['fa:16:3e:7a:8c:23 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'cf7f6093-44a3-4e8f-8970-db25cf0b4ab9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0cf44a3d-3002-496e-a2d4-b5607642ce07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4edb1eef-979c-4be1-8c88-b734bc363731, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=473bf89e-f488-42b9-b6d3-d736d2a61760) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:44:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.194 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 473bf89e-f488-42b9-b6d3-d736d2a61760 in datapath e0ff7905-af45-428a-b6a0-d6e1209fd009 unbound from our chassis#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.194 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.195 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e0ff7905-af45-428a-b6a0-d6e1209fd009, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:44:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.197 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[510c8356-7aae-4517-8356-1469879a7cfa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.197 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009 namespace which is not needed anymore#033[00m
Feb 25 07:44:23 np0005629333 kernel: tap7126aff6-e4 (unregistering): left promiscuous mode
Feb 25 07:44:23 np0005629333 NetworkManager[49836]: <info>  [1772023463.2131] device (tap7126aff6-e4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.213 244018 DEBUG oslo_concurrency.processutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:44:23 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:23Z|01105|binding|INFO|Releasing lport 7126aff6-e4c8-45c9-a3f7-6c333946b022 from this chassis (sb_readonly=0)
Feb 25 07:44:23 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:23Z|01106|binding|INFO|Setting lport 7126aff6-e4c8-45c9-a3f7-6c333946b022 down in Southbound
Feb 25 07:44:23 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:23Z|01107|binding|INFO|Removing iface tap7126aff6-e4 ovn-installed in OVS
Feb 25 07:44:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.236 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:63:85 2001:db8::f816:3eff:fef5:6385'], port_security=['fa:16:3e:f5:63:85 2001:db8::f816:3eff:fef5:6385'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef5:6385/64', 'neutron:device_id': 'cf7f6093-44a3-4e8f-8970-db25cf0b4ab9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4445989-91e8-4869-98cb-32b4b81bb3da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0cf44a3d-3002-496e-a2d4-b5607642ce07', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c1f88174-704f-4cf1-b79c-73e2410029c4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=7126aff6-e4c8-45c9-a3f7-6c333946b022) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.242 244018 DEBUG nova.storage.rbd_utils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] rbd image 874359d8-3251-4416-82dc-f6776853e384_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.246 244018 DEBUG oslo_concurrency.processutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:44:23 np0005629333 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Feb 25 07:44:23 np0005629333 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006d.scope: Consumed 14.818s CPU time.
Feb 25 07:44:23 np0005629333 systemd-machined[210048]: Machine qemu-137-instance-0000006d terminated.
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.270 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:44:23 np0005629333 NetworkManager[49836]: <info>  [1772023463.3518] manager: (tap7126aff6-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/457)
Feb 25 07:44:23 np0005629333 neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009[340073]: [NOTICE]   (340082) : haproxy version is 2.8.14-c23fe91
Feb 25 07:44:23 np0005629333 neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009[340073]: [NOTICE]   (340082) : path to executable is /usr/sbin/haproxy
Feb 25 07:44:23 np0005629333 neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009[340073]: [WARNING]  (340082) : Exiting Master process...
Feb 25 07:44:23 np0005629333 neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009[340073]: [WARNING]  (340082) : Exiting Master process...
Feb 25 07:44:23 np0005629333 neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009[340073]: [ALERT]    (340082) : Current worker (340084) exited with code 143 (Terminated)
Feb 25 07:44:23 np0005629333 neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009[340073]: [WARNING]  (340082) : All workers exited. Exiting... (0)
Feb 25 07:44:23 np0005629333 systemd[1]: libpod-4315067433dd06d0fb6867f0b9642846435e8a80f6b55ddc75fa62a5759dfb07.scope: Deactivated successfully.
Feb 25 07:44:23 np0005629333 podman[343081]: 2026-02-25 12:44:23.36663783 +0000 UTC m=+0.058189603 container died 4315067433dd06d0fb6867f0b9642846435e8a80f6b55ddc75fa62a5759dfb07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.369 244018 INFO nova.virt.libvirt.driver [-] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Instance destroyed successfully.#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.370 244018 DEBUG nova.objects.instance [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid cf7f6093-44a3-4e8f-8970-db25cf0b4ab9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.393 244018 DEBUG nova.virt.libvirt.vif [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:42:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1568259628',display_name='tempest-TestGettingAddress-server-1568259628',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1568259628',id=109,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjwXf4BRPjcASt12gAye0nTLMsY81chM8AwzuzQAmadzcHhb8MVkuSUIiO1cL5+mTUrVlDTjaV+Wk+tczkCutLjKZYT8KokDtS1+FrxD3TeeYwMZXPHCENgnD6/bPtHPg==',key_name='tempest-TestGettingAddress-617797010',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:43:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-wpph0igs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:43:10Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=cf7f6093-44a3-4e8f-8970-db25cf0b4ab9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "473bf89e-f488-42b9-b6d3-d736d2a61760", "address": "fa:16:3e:7a:8c:23", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap473bf89e-f4", "ovs_interfaceid": "473bf89e-f488-42b9-b6d3-d736d2a61760", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.393 244018 DEBUG nova.network.os_vif_util [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "473bf89e-f488-42b9-b6d3-d736d2a61760", "address": "fa:16:3e:7a:8c:23", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap473bf89e-f4", "ovs_interfaceid": "473bf89e-f488-42b9-b6d3-d736d2a61760", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.394 244018 DEBUG nova.network.os_vif_util [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7a:8c:23,bridge_name='br-int',has_traffic_filtering=True,id=473bf89e-f488-42b9-b6d3-d736d2a61760,network=Network(e0ff7905-af45-428a-b6a0-d6e1209fd009),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap473bf89e-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.395 244018 DEBUG os_vif [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:8c:23,bridge_name='br-int',has_traffic_filtering=True,id=473bf89e-f488-42b9-b6d3-d736d2a61760,network=Network(e0ff7905-af45-428a-b6a0-d6e1209fd009),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap473bf89e-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.396 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.396 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap473bf89e-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.398 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:23 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4315067433dd06d0fb6867f0b9642846435e8a80f6b55ddc75fa62a5759dfb07-userdata-shm.mount: Deactivated successfully.
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.400 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.401 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:23 np0005629333 systemd[1]: var-lib-containers-storage-overlay-26baa473b5cd94140eb2f5364901afbf20c0a34db5790bb729eb08e64d9f641f-merged.mount: Deactivated successfully.
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.404 244018 INFO os_vif [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:8c:23,bridge_name='br-int',has_traffic_filtering=True,id=473bf89e-f488-42b9-b6d3-d736d2a61760,network=Network(e0ff7905-af45-428a-b6a0-d6e1209fd009),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap473bf89e-f4')#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.405 244018 DEBUG nova.virt.libvirt.vif [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:42:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1568259628',display_name='tempest-TestGettingAddress-server-1568259628',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1568259628',id=109,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPjwXf4BRPjcASt12gAye0nTLMsY81chM8AwzuzQAmadzcHhb8MVkuSUIiO1cL5+mTUrVlDTjaV+Wk+tczkCutLjKZYT8KokDtS1+FrxD3TeeYwMZXPHCENgnD6/bPtHPg==',key_name='tempest-TestGettingAddress-617797010',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:43:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-wpph0igs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:43:10Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=cf7f6093-44a3-4e8f-8970-db25cf0b4ab9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "address": "fa:16:3e:f5:63:85", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:6385", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7126aff6-e4", "ovs_interfaceid": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.405 244018 DEBUG nova.network.os_vif_util [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "address": "fa:16:3e:f5:63:85", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:6385", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7126aff6-e4", "ovs_interfaceid": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.407 244018 DEBUG nova.network.os_vif_util [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f5:63:85,bridge_name='br-int',has_traffic_filtering=True,id=7126aff6-e4c8-45c9-a3f7-6c333946b022,network=Network(e4445989-91e8-4869-98cb-32b4b81bb3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7126aff6-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.407 244018 DEBUG os_vif [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f5:63:85,bridge_name='br-int',has_traffic_filtering=True,id=7126aff6-e4c8-45c9-a3f7-6c333946b022,network=Network(e4445989-91e8-4869-98cb-32b4b81bb3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7126aff6-e4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.408 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.409 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7126aff6-e4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.410 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.411 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.413 244018 INFO os_vif [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f5:63:85,bridge_name='br-int',has_traffic_filtering=True,id=7126aff6-e4c8-45c9-a3f7-6c333946b022,network=Network(e4445989-91e8-4869-98cb-32b4b81bb3da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7126aff6-e4')#033[00m
Feb 25 07:44:23 np0005629333 podman[343081]: 2026-02-25 12:44:23.41731686 +0000 UTC m=+0.108868623 container cleanup 4315067433dd06d0fb6867f0b9642846435e8a80f6b55ddc75fa62a5759dfb07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:44:23 np0005629333 systemd[1]: libpod-conmon-4315067433dd06d0fb6867f0b9642846435e8a80f6b55ddc75fa62a5759dfb07.scope: Deactivated successfully.
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.434 244018 DEBUG nova.network.neutron [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Updated VIF entry in instance network info cache for port 43ea1958-7fd9-47b6-be81-3eeb1b3801a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.434 244018 DEBUG nova.network.neutron [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Updating instance_info_cache with network_info: [{"id": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "address": "fa:16:3e:3e:93:a9", "network": {"id": "f8edf066-3c6a-45fb-bc20-36dc74f8aee6", "bridge": "br-int", "label": "tempest-network-smoke--1483868163", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap43ea1958-7f", "ovs_interfaceid": "43ea1958-7fd9-47b6-be81-3eeb1b3801a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.448 244018 DEBUG nova.compute.manager [req-9c4fcc53-399a-4784-96a9-e1c7abc511de req-23aed889-d7af-4e76-b7c6-572995787f6f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-changed-473bf89e-f488-42b9-b6d3-d736d2a61760 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.449 244018 DEBUG nova.compute.manager [req-9c4fcc53-399a-4784-96a9-e1c7abc511de req-23aed889-d7af-4e76-b7c6-572995787f6f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Refreshing instance network info cache due to event network-changed-473bf89e-f488-42b9-b6d3-d736d2a61760. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.450 244018 DEBUG oslo_concurrency.lockutils [req-9c4fcc53-399a-4784-96a9-e1c7abc511de req-23aed889-d7af-4e76-b7c6-572995787f6f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.465 244018 DEBUG oslo_concurrency.lockutils [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-b5941b54-9cd2-465c-89c0-3cf87ebed83e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.466 244018 DEBUG nova.compute.manager [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received event network-vif-unplugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.466 244018 DEBUG oslo_concurrency.lockutils [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.467 244018 DEBUG oslo_concurrency.lockutils [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.467 244018 DEBUG oslo_concurrency.lockutils [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.467 244018 DEBUG nova.compute.manager [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] No waiting events found dispatching network-vif-unplugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.468 244018 DEBUG nova.compute.manager [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received event network-vif-unplugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.468 244018 DEBUG nova.compute.manager [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received event network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.468 244018 DEBUG oslo_concurrency.lockutils [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.468 244018 DEBUG oslo_concurrency.lockutils [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.468 244018 DEBUG oslo_concurrency.lockutils [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b5941b54-9cd2-465c-89c0-3cf87ebed83e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.468 244018 DEBUG nova.compute.manager [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] No waiting events found dispatching network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.469 244018 WARNING nova.compute.manager [req-52f0cbca-398b-4d26-be5f-fe1065b08893 req-179ca667-b8c2-4d2a-af49-7bfba72fa1ef 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received unexpected event network-vif-plugged-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 for instance with vm_state active and task_state deleting.#033[00m
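
The nine nova_compute lines above are one full round-trip of Nova's external-event plumbing: Neutron delivers network-vif-unplugged / network-vif-plugged notifications, the compute manager pops any registered waiter under the per-instance "<uuid>-events" lock, logs "No waiting events" when nothing registered, and warns when an event arrives for an instance already in task_state deleting. A minimal sketch of that pop-a-waiter pattern; the names are hypothetical and this is not Nova's actual code:

    import threading

    class InstanceEvents:
        """Toy registry mirroring the lock/pop sequence in the log above."""

        def __init__(self):
            self._lock = threading.Lock()   # stands in for the "<uuid>-events" lock
            self._waiters = {}              # (instance_uuid, event_name) -> Event

        def prepare(self, instance_uuid, event_name):
            with self._lock:
                ev = threading.Event()
                self._waiters[(instance_uuid, event_name)] = ev
                return ev

        def dispatch(self, instance_uuid, event_name):
            with self._lock:                # the "acquired"/"released" lines above
                ev = self._waiters.pop((instance_uuid, event_name), None)
            if ev is None:
                print(f"No waiting events found dispatching {event_name}")
            else:
                ev.set()                    # unblocks whoever called prepare()

    registry = InstanceEvents()
    waiter = registry.prepare("b5941b54", "network-vif-plugged")
    registry.dispatch("b5941b54", "network-vif-plugged")
    assert waiter.wait(timeout=1)
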
Feb 25 07:44:23 np0005629333 podman[343162]: 2026-02-25 12:44:23.496130244 +0000 UTC m=+0.055735294 container remove 4315067433dd06d0fb6867f0b9642846435e8a80f6b55ddc75fa62a5759dfb07 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 25 07:44:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.500 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2de8bb9b-dd2a-4784-94b6-39809ad8218a]: (4, ('Wed Feb 25 12:44:23 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009 (4315067433dd06d0fb6867f0b9642846435e8a80f6b55ddc75fa62a5759dfb07)\n4315067433dd06d0fb6867f0b9642846435e8a80f6b55ddc75fa62a5759dfb07\nWed Feb 25 12:44:23 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009 (4315067433dd06d0fb6867f0b9642846435e8a80f6b55ddc75fa62a5759dfb07)\n4315067433dd06d0fb6867f0b9642846435e8a80f6b55ddc75fa62a5759dfb07\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.501 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[74395e14-a1b2-4085-9b7c-07549c349f0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.502 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0ff7905-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
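
The DelPortCommand transaction above is ovsdbapp removing the metadata tap port from Open vSwitch. A sketch of issuing the same command through ovsdbapp's public API; the database socket path is an assumption and the snippet needs access to a running ovsdb-server:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local switch database (socket path assumed).
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Same semantics as the logged txn: delete the port wherever it lives,
    # and do not fail if it is already gone (if_exists=True).
    api.del_port('tape0ff7905-a0', bridge=None, if_exists=True).execute(
        check_error=True)
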
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.504 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:23 np0005629333 kernel: tape0ff7905-a0: left promiscuous mode
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.509 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.515 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[30ddc3df-e4e4-4de8-9f53-4a98b032109a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.524 244018 DEBUG nova.compute.manager [req-e3dfccbb-2c86-498f-a547-e8a6a23b6389 req-ef790e7f-59c3-4fc8-ae41-599dd1431f79 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Received event network-vif-deleted-43ea1958-7fd9-47b6-be81-3eeb1b3801a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.524 244018 DEBUG nova.compute.manager [req-e3dfccbb-2c86-498f-a547-e8a6a23b6389 req-ef790e7f-59c3-4fc8-ae41-599dd1431f79 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-vif-unplugged-473bf89e-f488-42b9-b6d3-d736d2a61760 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.524 244018 DEBUG oslo_concurrency.lockutils [req-e3dfccbb-2c86-498f-a547-e8a6a23b6389 req-ef790e7f-59c3-4fc8-ae41-599dd1431f79 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.525 244018 DEBUG oslo_concurrency.lockutils [req-e3dfccbb-2c86-498f-a547-e8a6a23b6389 req-ef790e7f-59c3-4fc8-ae41-599dd1431f79 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.525 244018 DEBUG oslo_concurrency.lockutils [req-e3dfccbb-2c86-498f-a547-e8a6a23b6389 req-ef790e7f-59c3-4fc8-ae41-599dd1431f79 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.525 244018 DEBUG nova.compute.manager [req-e3dfccbb-2c86-498f-a547-e8a6a23b6389 req-ef790e7f-59c3-4fc8-ae41-599dd1431f79 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] No waiting events found dispatching network-vif-unplugged-473bf89e-f488-42b9-b6d3-d736d2a61760 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.526 244018 DEBUG nova.compute.manager [req-e3dfccbb-2c86-498f-a547-e8a6a23b6389 req-ef790e7f-59c3-4fc8-ae41-599dd1431f79 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-vif-unplugged-473bf89e-f488-42b9-b6d3-d736d2a61760 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:44:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.529 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7861eb50-9113-4582-a0fe-4bdc3209ef4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.530 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc6fc9d-4f33-44f9-af0b-0df104504fca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.545 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5224878c-483d-4cbf-bbda-3326f507e690]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535882, 'reachable_time': 25294, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343181, 'error': None, 'target': 'ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.548 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e0ff7905-af45-428a-b6a0-d6e1209fd009 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:44:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.548 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[93c7a1f2-0a79-465e-ba79-437af260706a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.548 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 7126aff6-e4c8-45c9-a3f7-6c333946b022 in datapath e4445989-91e8-4869-98cb-32b4b81bb3da unbound from our chassis#033[00m
Feb 25 07:44:23 np0005629333 systemd[1]: run-netns-ovnmeta\x2de0ff7905\x2daf45\x2d428a\x2db6a0\x2dd6e1209fd009.mount: Deactivated successfully.
Feb 25 07:44:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.551 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e4445989-91e8-4869-98cb-32b4b81bb3da, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:44:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.552 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4616ae75-b914-48e2-b289-13ffbe5d66fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.552 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da namespace which is not needed anymore#033[00m
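
"Cleaning up ovnmeta-... namespace" together with the earlier "Namespace ... deleted. remove_netns" line is the agent deleting a per-network namespace through neutron's privsep-wrapped ip_lib. The primitive underneath is an ordinary network-namespace delete; a sketch using pyroute2 directly (requires root, and neutron's real code routes this through the privsep daemon whose replies appear above):

    from pyroute2 import netns

    NS = 'ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da'  # name from the log

    # remove() unlinks /run/netns/<NS>; systemd then logs the matching
    # "run-netns-....mount: Deactivated successfully" line.
    if NS in netns.listnetns():
        netns.remove(NS)
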
Feb 25 07:44:23 np0005629333 neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da[340150]: [NOTICE]   (340154) : haproxy version is 2.8.14-c23fe91
Feb 25 07:44:23 np0005629333 neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da[340150]: [NOTICE]   (340154) : path to executable is /usr/sbin/haproxy
Feb 25 07:44:23 np0005629333 neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da[340150]: [WARNING]  (340154) : Exiting Master process...
Feb 25 07:44:23 np0005629333 neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da[340150]: [WARNING]  (340154) : Exiting Master process...
Feb 25 07:44:23 np0005629333 neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da[340150]: [ALERT]    (340154) : Current worker (340156) exited with code 143 (Terminated)
Feb 25 07:44:23 np0005629333 neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da[340150]: [WARNING]  (340154) : All workers exited. Exiting... (0)
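
Exit code 143 in the ALERT above is not a failure: runtimes report signal deaths as 128 plus the signal number, and 143 = 128 + 15 (SIGTERM), i.e. the haproxy worker was terminated deliberately by the container stop. A two-line check:

    import signal

    assert 128 + signal.SIGTERM == 143           # SIGTERM is signal 15
    print(signal.Signals(143 - 128).name)        # -> SIGTERM
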
Feb 25 07:44:23 np0005629333 systemd[1]: libpod-cc6e724a15c0d9441d72982a086e52dd5e08a297e068d7f150d312adffe8d4b8.scope: Deactivated successfully.
Feb 25 07:44:23 np0005629333 podman[343200]: 2026-02-25 12:44:23.676991738 +0000 UTC m=+0.049856838 container died cc6e724a15c0d9441d72982a086e52dd5e08a297e068d7f150d312adffe8d4b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 07:44:23 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cc6e724a15c0d9441d72982a086e52dd5e08a297e068d7f150d312adffe8d4b8-userdata-shm.mount: Deactivated successfully.
Feb 25 07:44:23 np0005629333 systemd[1]: var-lib-containers-storage-overlay-053075e96790c2347754d133c524b37fb39668500f025fbd5384e1ede98d4099-merged.mount: Deactivated successfully.
Feb 25 07:44:23 np0005629333 podman[343200]: 2026-02-25 12:44:23.722935945 +0000 UTC m=+0.095801045 container cleanup cc6e724a15c0d9441d72982a086e52dd5e08a297e068d7f150d312adffe8d4b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.742 244018 INFO nova.virt.libvirt.driver [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Deleting instance files /var/lib/nova/instances/cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_del#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.743 244018 INFO nova.virt.libvirt.driver [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Deletion of /var/lib/nova/instances/cf7f6093-44a3-4e8f-8970-db25cf0b4ab9_del complete#033[00m
Feb 25 07:44:23 np0005629333 systemd[1]: libpod-conmon-cc6e724a15c0d9441d72982a086e52dd5e08a297e068d7f150d312adffe8d4b8.scope: Deactivated successfully.
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.797 244018 INFO nova.compute.manager [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.797 244018 DEBUG oslo.service.loopingcall [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.798 244018 DEBUG nova.compute.manager [-] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.798 244018 DEBUG nova.network.neutron [-] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
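
The "Waiting for function ... _deallocate_network_with_retries to return" line above is oslo.service's looping-call machinery: Nova wraps network deallocation in a looping call that retries until the wrapped function signals completion. A minimal sketch of that stop-on-success pattern using FixedIntervalLoopingCall (the log shows a back-off variant; the interval and retry count here are assumptions):

    from oslo_service import loopingcall

    attempts = {'n': 0}

    def _deallocate_with_retries():
        attempts['n'] += 1
        if attempts['n'] >= 3:                        # pretend try 3 succeeds
            # Raising LoopingCallDone stops the loop and unblocks .wait().
            raise loopingcall.LoopingCallDone(retvalue=True)
        # A plain return schedules another attempt.

    timer = loopingcall.FixedIntervalLoopingCall(_deallocate_with_retries)
    print(timer.start(interval=0.1).wait())           # -> True on attempt 3
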
Feb 25 07:44:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:44:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/784260646' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.838 244018 DEBUG oslo_concurrency.processutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
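
The 0.592 s "CMD ceph mon dump" line is oslo.concurrency shelling out to the ceph CLI; execute() returns (stdout, stderr) and raises ProcessExecutionError on a non-zero exit. An equivalent call with the same arguments as logged:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

    mon_map = json.loads(out)
    # 'fsid' and 'mons' are standard keys in mon-dump JSON output.
    print(mon_map['fsid'], [m['name'] for m in mon_map['mons']])
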
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.839 244018 DEBUG nova.virt.libvirt.vif [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:44:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1245136630',display_name='tempest-TestShelveInstance-server-1245136630',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1245136630',id=112,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP9lQWzdSu90yaQLNul1MY74fQc98Cg9DYq6NKBJiDcNTT2XlppuxCcKfQtu58BOyBXAh+PrpYKR3FwTDUP4XAt9wnb5ixmiKo9lnvQqdB0jXAsnGExk5Uj48m62YSy+xg==',key_name='tempest-TestShelveInstance-571084469',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6c0adb05683141e7a0b866f450e410e0',ramdisk_id='',reservation_id='r-tfbrsni1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1925524092',owner_user_name='tempest-TestShelveInstance-1925524092-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:44:18Z,user_data=None,user_id='44c0b78107ea4f7381e82a02c5954e7c',uuid=874359d8-3251-4416-82dc-f6776853e384,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.839 244018 DEBUG nova.network.os_vif_util [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Converting VIF {"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.840 244018 DEBUG nova.network.os_vif_util [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:b7:f7,bridge_name='br-int',has_traffic_filtering=True,id=97fb9f99-cb59-4581-8866-375ea3e167d7,network=Network(e4d059ba-eacc-463b-a8a4-393a5a36dba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97fb9f99-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.841 244018 DEBUG nova.objects.instance [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 874359d8-3251-4416-82dc-f6776853e384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.856 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:44:23 np0005629333 nova_compute[244014]:  <uuid>874359d8-3251-4416-82dc-f6776853e384</uuid>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:  <name>instance-00000070</name>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestShelveInstance-server-1245136630</nova:name>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:44:22</nova:creationTime>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:44:23 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:        <nova:user uuid="44c0b78107ea4f7381e82a02c5954e7c">tempest-TestShelveInstance-1925524092-project-member</nova:user>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:        <nova:project uuid="6c0adb05683141e7a0b866f450e410e0">tempest-TestShelveInstance-1925524092</nova:project>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:        <nova:port uuid="97fb9f99-cb59-4581-8866-375ea3e167d7">
Feb 25 07:44:23 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <entry name="serial">874359d8-3251-4416-82dc-f6776853e384</entry>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <entry name="uuid">874359d8-3251-4416-82dc-f6776853e384</entry>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/874359d8-3251-4416-82dc-f6776853e384_disk">
Feb 25 07:44:23 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:44:23 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/874359d8-3251-4416-82dc-f6776853e384_disk.config">
Feb 25 07:44:23 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:44:23 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:61:b7:f7"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <target dev="tap97fb9f99-cb"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/console.log" append="off"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:44:23 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:44:23 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:44:23 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:44:23 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
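
Everything between "End _get_guest_xml xml=<domain type="kvm">" and the closing driver.py:7555 marker above is one multi-line log message holding the complete libvirt domain definition for the new guest: q35 machine, RBD-backed virtio root disk plus SATA config-drive, and an OVS-backed tap interface. A sketch of handing such XML to libvirt with the python bindings; the URI and the persistent define-then-start flow are assumptions, not necessarily Nova's exact calls:

    import libvirt

    DOMAIN_XML = "<domain type='kvm'>...</domain>"   # the XML logged above

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(DOMAIN_XML)   # register the domain persistently
        dom.create()                       # boot it; conn.createXML(xml, 0)
                                           # would start a transient guest
    finally:
        conn.close()
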
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.857 244018 DEBUG nova.compute.manager [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Preparing to wait for external event network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.857 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.857 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.858 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.859 244018 DEBUG nova.virt.libvirt.vif [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:44:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1245136630',display_name='tempest-TestShelveInstance-server-1245136630',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1245136630',id=112,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP9lQWzdSu90yaQLNul1MY74fQc98Cg9DYq6NKBJiDcNTT2XlppuxCcKfQtu58BOyBXAh+PrpYKR3FwTDUP4XAt9wnb5ixmiKo9lnvQqdB0jXAsnGExk5Uj48m62YSy+xg==',key_name='tempest-TestShelveInstance-571084469',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6c0adb05683141e7a0b866f450e410e0',ramdisk_id='',reservation_id='r-tfbrsni1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1925524092',owner_user_name='tempest-TestShelveInstance-1925524092-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:44:18Z,user_data=None,user_id='44c0b78107ea4f7381e82a02c5954e7c',uuid=874359d8-3251-4416-82dc-f6776853e384,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.859 244018 DEBUG nova.network.os_vif_util [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Converting VIF {"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.860 244018 DEBUG nova.network.os_vif_util [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:b7:f7,bridge_name='br-int',has_traffic_filtering=True,id=97fb9f99-cb59-4581-8866-375ea3e167d7,network=Network(e4d059ba-eacc-463b-a8a4-393a5a36dba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97fb9f99-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.861 244018 DEBUG os_vif [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:b7:f7,bridge_name='br-int',has_traffic_filtering=True,id=97fb9f99-cb59-4581-8866-375ea3e167d7,network=Network(e4d059ba-eacc-463b-a8a4-393a5a36dba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97fb9f99-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.861 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.862 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:44:23 np0005629333 podman[343228]: 2026-02-25 12:44:23.862111972 +0000 UTC m=+0.106593719 container remove cc6e724a15c0d9441d72982a086e52dd5e08a297e068d7f150d312adffe8d4b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.862 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.864 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.865 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97fb9f99-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.865 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap97fb9f99-cb, col_values=(('external_ids', {'iface-id': '97fb9f99-cb59-4581-8866-375ea3e167d7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:b7:f7', 'vm-uuid': '874359d8-3251-4416-82dc-f6776853e384'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
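
These two commands run in a single ovsdb transaction: add tap97fb9f99-cb to br-int, then stamp the Interface row with the external_ids Nova/OVN use to bind the port. A sketch of the same pairing via ovsdbapp's transaction context manager (socket path assumed, external_ids trimmed):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Both commands commit together; a failure rolls back the whole txn.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap97fb9f99-cb', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap97fb9f99-cb',
            ('external_ids',
             {'iface-id': '97fb9f99-cb59-4581-8866-375ea3e167d7',
              'attached-mac': 'fa:16:3e:61:b7:f7'})))
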
Feb 25 07:44:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.866 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5d2b4546-e84e-47e8-85ac-21ac08372f95]: (4, ('Wed Feb 25 12:44:23 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da (cc6e724a15c0d9441d72982a086e52dd5e08a297e068d7f150d312adffe8d4b8)\ncc6e724a15c0d9441d72982a086e52dd5e08a297e068d7f150d312adffe8d4b8\nWed Feb 25 12:44:23 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da (cc6e724a15c0d9441d72982a086e52dd5e08a297e068d7f150d312adffe8d4b8)\ncc6e724a15c0d9441d72982a086e52dd5e08a297e068d7f150d312adffe8d4b8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.868 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:23 np0005629333 NetworkManager[49836]: <info>  [1772023463.8688] manager: (tap97fb9f99-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/458)
Feb 25 07:44:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.867 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[041dbdfd-9fda-4a57-84b4-f8b8b9e25410]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.870 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4445989-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.874 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.875 244018 INFO os_vif [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:b7:f7,bridge_name='br-int',has_traffic_filtering=True,id=97fb9f99-cb59-4581-8866-375ea3e167d7,network=Network(e4d059ba-eacc-463b-a8a4-393a5a36dba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97fb9f99-cb')#033[00m
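
"Successfully plugged vif VIFOpenVSwitch(...)" completes the flow the Converting/Converted lines set up: Nova's VIF dict becomes a typed os_vif object, then the 'ovs' plugin wires it into br-int. A miniature, hedged reconstruction of that call path (field set trimmed to what the log shows; requires the ovs os-vif plugin and root, so illustrative only):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()                    # loads plugins, incl. plugin='ovs'

    my_vif = vif.VIFOpenVSwitch(
        id='97fb9f99-cb59-4581-8866-375ea3e167d7',
        address='fa:16:3e:61:b7:f7',
        vif_name='tap97fb9f99-cb',
        bridge_name='br-int',
        network=network.Network(id='e4d059ba-eacc-463b-a8a4-393a5a36dba3'))

    inst = instance_info.InstanceInfo(
        uuid='874359d8-3251-4416-82dc-f6776853e384',
        name='instance-00000070')

    os_vif.plug(my_vif, inst)              # emits the INFO line seen above
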
Feb 25 07:44:23 np0005629333 kernel: tape4445989-90: left promiscuous mode
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.876 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.880 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.882 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[319a416f-bfd6-4fdd-bd3a-bbf0173d2f90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.901 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9128f8ce-a8ed-4699-95fc-71af3628e85c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.902 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[85fef852-dca7-4f00-ad54-3537d089092a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.917 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b9d01555-d16b-464d-a72b-8a374efabc5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 535968, 'reachable_time': 37183, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343264, 'error': None, 'target': 'ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.919 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:44:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:23.919 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[a355a982-3970-4887-8b2d-84705c9efd05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
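
The (4, None) reply acknowledges the remove_netns call logged just before it; in oslo.privsep's wire protocol the leading 4 appears to be the RET message code (a successful return carrying the call's result), with errors reported under a separate code. A rough unprivileged-side equivalent of the teardown itself, assuming pyroute2's netns helpers:

    # Roughly what the privileged remove_netns call does (the real path
    # is neutron.privileged.agent.linux.ip_lib behind a privsep channel).
    from pyroute2 import netns

    netns.remove('ovnmeta-e4445989-91e8-4869-98cb-32b4b81bb3da')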
Feb 25 07:44:23 np0005629333 podman[343234]: 2026-02-25 12:44:23.922162837 +0000 UTC m=+0.138259843 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
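
The podman line above is one healthcheck cycle: the 'healthcheck' key in config_data mounts /var/lib/openstack/healthchecks/ovn_metadata_agent into the container and runs /openstack/healthcheck, and podman emits a health_status event with the result (healthy, failing streak 0). A hypothetical probe that exercises the same configured test on demand, assuming 'podman healthcheck run' semantics (exit 0 when healthy):

    # Trigger the container's configured healthcheck once and report.
    import subprocess

    rc = subprocess.run(['podman', 'healthcheck', 'run',
                         'ovn_metadata_agent']).returncode
    print('healthy' if rc == 0 else 'unhealthy')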
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.940 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.940 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.940 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] No VIF found with MAC fa:16:3e:61:b7:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.941 244018 INFO nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Using config drive#033[00m
Feb 25 07:44:23 np0005629333 nova_compute[244014]: 2026-02-25 12:44:23.957 244018 DEBUG nova.storage.rbd_utils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] rbd image 874359d8-3251-4416-82dc-f6776853e384_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:44:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 272 KiB/s rd, 2.3 MiB/s wr, 128 op/s
Feb 25 07:44:24 np0005629333 systemd[1]: run-netns-ovnmeta\x2de4445989\x2d91e8\x2d4869\x2d98cb\x2d32b4b81bb3da.mount: Deactivated successfully.
Feb 25 07:44:24 np0005629333 nova_compute[244014]: 2026-02-25 12:44:24.791 244018 DEBUG nova.network.neutron [req-aa5e56ef-222a-4918-aacc-f28e240384c0 req-66f3c92c-0f16-4529-9c05-6611823d83b1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updated VIF entry in instance network info cache for port 97fb9f99-cb59-4581-8866-375ea3e167d7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:44:24 np0005629333 nova_compute[244014]: 2026-02-25 12:44:24.792 244018 DEBUG nova.network.neutron [req-aa5e56ef-222a-4918-aacc-f28e240384c0 req-66f3c92c-0f16-4529-9c05-6611823d83b1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updating instance_info_cache with network_info: [{"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:44:24 np0005629333 nova_compute[244014]: 2026-02-25 12:44:24.813 244018 DEBUG oslo_concurrency.lockutils [req-aa5e56ef-222a-4918-aacc-f28e240384c0 req-66f3c92c-0f16-4529-9c05-6611823d83b1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
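
The instance_info_cache payload logged at 12:44:24.792 is nova's per-instance network model: a list of VIFs, each carrying its network, subnets, fixed IPs, and binding details, refreshed under the "refresh_cache-<uuid>" lock released above. A short sketch of walking such a payload (the vif -> network -> subnets -> ips structure is taken from the log line itself; cached_payload is a placeholder for the JSON string):

    # Extract fixed addresses from a network_info payload like the one
    # cached above; cached_payload stands in for the logged JSON string.
    import json

    network_info = json.loads(cached_payload)
    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                print(vif['id'], ip['address'],
                      'active' if vif['active'] else 'inactive')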
Feb 25 07:44:24 np0005629333 nova_compute[244014]: 2026-02-25 12:44:24.987 244018 INFO nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Creating config drive at /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/disk.config#033[00m
Feb 25 07:44:24 np0005629333 nova_compute[244014]: 2026-02-25 12:44:24.993 244018 DEBUG oslo_concurrency.processutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6lw6a38u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.024 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Updating instance_info_cache with network_info: [{"id": "473bf89e-f488-42b9-b6d3-d736d2a61760", "address": "fa:16:3e:7a:8c:23", "network": {"id": "e0ff7905-af45-428a-b6a0-d6e1209fd009", "bridge": "br-int", "label": "tempest-network-smoke--1128870534", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap473bf89e-f4", "ovs_interfaceid": "473bf89e-f488-42b9-b6d3-d736d2a61760", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "address": "fa:16:3e:f5:63:85", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:6385", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7126aff6-e4", "ovs_interfaceid": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.047 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.047 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.048 244018 DEBUG oslo_concurrency.lockutils [req-9c4fcc53-399a-4784-96a9-e1c7abc511de req-23aed889-d7af-4e76-b7c6-572995787f6f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.048 244018 DEBUG nova.network.neutron [req-9c4fcc53-399a-4784-96a9-e1c7abc511de req-23aed889-d7af-4e76-b7c6-572995787f6f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Refreshing network info cache for port 473bf89e-f488-42b9-b6d3-d736d2a61760 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.126 244018 DEBUG oslo_concurrency.processutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp6lw6a38u" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.164 244018 DEBUG nova.storage.rbd_utils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] rbd image 874359d8-3251-4416-82dc-f6776853e384_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.167 244018 DEBUG oslo_concurrency.processutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/disk.config 874359d8-3251-4416-82dc-f6776853e384_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.189 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.247 244018 INFO nova.network.neutron [req-9c4fcc53-399a-4784-96a9-e1c7abc511de req-23aed889-d7af-4e76-b7c6-572995787f6f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Port 473bf89e-f488-42b9-b6d3-d736d2a61760 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.247 244018 DEBUG nova.network.neutron [req-9c4fcc53-399a-4784-96a9-e1c7abc511de req-23aed889-d7af-4e76-b7c6-572995787f6f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Updating instance_info_cache with network_info: [{"id": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "address": "fa:16:3e:f5:63:85", "network": {"id": "e4445989-91e8-4869-98cb-32b4b81bb3da", "bridge": "br-int", "label": "tempest-network-smoke--397487013", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:6385", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7126aff6-e4", "ovs_interfaceid": "7126aff6-e4c8-45c9-a3f7-6c333946b022", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.281 244018 DEBUG oslo_concurrency.lockutils [req-9c4fcc53-399a-4784-96a9-e1c7abc511de req-23aed889-d7af-4e76-b7c6-572995787f6f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.298 244018 DEBUG oslo_concurrency.processutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/disk.config 874359d8-3251-4416-82dc-f6776853e384_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.299 244018 INFO nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Deleting local config drive /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/disk.config because it was imported into RBD.#033[00m
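
The lines from 12:44:24.993 through 12:44:25.299 are the complete config-drive round trip for instance 874359d8: build an ISO9660 image with mkisofs, import it into the vms RBD pool (after first confirming the image does not already exist), then delete the local copy. A condensed, purely illustrative replay of those three steps with the paths and names from the log (error handling and the -publisher flag omitted):

    # Config-drive flow as logged above: mkisofs build -> rbd import ->
    # local cleanup.
    import os
    import subprocess

    iso = '/var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/disk.config'
    subprocess.run(['mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
                    '-allow-multidot', '-l', '-quiet', '-J', '-r',
                    '-V', 'config-2', '/tmp/tmp6lw6a38u'], check=True)
    subprocess.run(['rbd', 'import', '--pool', 'vms', iso,
                    '874359d8-3251-4416-82dc-f6776853e384_disk.config',
                    '--image-format=2', '--id', 'openstack',
                    '--conf', '/etc/ceph/ceph.conf'], check=True)
    os.unlink(iso)  # the driver deletes the local copy once imported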
Feb 25 07:44:25 np0005629333 kernel: tap97fb9f99-cb: entered promiscuous mode
Feb 25 07:44:25 np0005629333 NetworkManager[49836]: <info>  [1772023465.3509] manager: (tap97fb9f99-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/459)
Feb 25 07:44:25 np0005629333 systemd-udevd[343267]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.353 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:25Z|01108|binding|INFO|Claiming lport 97fb9f99-cb59-4581-8866-375ea3e167d7 for this chassis.
Feb 25 07:44:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:25Z|01109|binding|INFO|97fb9f99-cb59-4581-8866-375ea3e167d7: Claiming fa:16:3e:61:b7:f7 10.100.0.11
Feb 25 07:44:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:25Z|01110|binding|INFO|Setting lport 97fb9f99-cb59-4581-8866-375ea3e167d7 ovn-installed in OVS
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.360 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:25 np0005629333 NetworkManager[49836]: <info>  [1772023465.3640] device (tap97fb9f99-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:44:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:25Z|01111|binding|INFO|Setting lport 97fb9f99-cb59-4581-8866-375ea3e167d7 up in Southbound
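
The four binding|INFO lines are ovn-controller's claim sequence: it spots an OVS interface whose external_ids:iface-id matches a southbound Port_Binding requested for this chassis, claims the lport with its MAC and IP, marks it ovn-installed in OVS, and sets it up in the Southbound DB. One way to verify the result afterwards (a sketch; assumes ovn-sbctl is on PATH and can reach the SB database):

    # Ask the Southbound DB which chassis now holds the logical port.
    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', '--columns=chassis,up', 'find', 'Port_Binding',
         'logical_port=97fb9f99-cb59-4581-8866-375ea3e167d7'],
        capture_output=True, text=True).stdout
    print(out)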
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.364 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:b7:f7 10.100.0.11'], port_security=['fa:16:3e:61:b7:f7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '874359d8-3251-4416-82dc-f6776853e384', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6c0adb05683141e7a0b866f450e410e0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c8e48a3c-4f73-4222-8cd8-b956480085c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88c1b411-2973-4db5-967d-0a2cebc1066f, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=97fb9f99-cb59-4581-8866-375ea3e167d7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:44:25 np0005629333 NetworkManager[49836]: <info>  [1772023465.3648] device (tap97fb9f99-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.365 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 97fb9f99-cb59-4581-8866-375ea3e167d7 in datapath e4d059ba-eacc-463b-a8a4-393a5a36dba3 bound to our chassis#033[00m
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.366 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e4d059ba-eacc-463b-a8a4-393a5a36dba3#033[00m
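
The UPDATE match at 12:44:25.364 is ovsdbapp's event system at work: the agent registers row events against the SB Port_Binding table, and when a port lands on its chassis (old row had chassis=[], new row has it set) the handler kicks off metadata provisioning, as logged above. A minimal sketch of such an event class, assuming ovsdbapp's RowEvent interface as used here (the handler body is ours):

    # Sketch of an ovsdbapp row event like the one matched above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # (events, table, conditions) -- mirrors the match logged above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Called with the new and old row; the real agent checks
            # whether the port is now bound to its own chassis before
            # provisioning the datapath.
            print('port %s updated' % row.logical_port)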
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.379 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b2935849-272b-4e32-a986-125739f9b590]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.380 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape4d059ba-e1 in ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
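
Provisioning a datapath means creating a veth pair: the -e0 end stays in the root namespace for OVS, and the -e1 end is moved into the ovnmeta- namespace to carry the metadata IP. A pyroute2 sketch of that plumbing, with the interface names from the log (the agent does this through privsep-wrapped ip_lib helpers, not directly):

    # Veth plumbing for the provisioning step logged above.
    from pyroute2 import IPRoute

    ipr = IPRoute()
    ipr.link('add', ifname='tape4d059ba-e0', kind='veth',
             peer='tape4d059ba-e1')
    idx = ipr.link_lookup(ifname='tape4d059ba-e1')[0]
    # Move the inner end into the metadata namespace; it gets its
    # address and is brought up from inside that namespace.
    ipr.link('set', index=idx,
             net_ns_fd='ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3')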
Feb 25 07:44:25 np0005629333 systemd-machined[210048]: New machine qemu-141-instance-00000070.
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.382 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape4d059ba-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.382 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2d85d1f5-7e0f-4f0b-b2b7-d6b0ea4bc2ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.383 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7906339c-ef44-4a29-b941-2ae308dde8c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.394 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4a3ee7-5337-4951-a581-fda8816f5e87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:25 np0005629333 systemd[1]: Started Virtual Machine qemu-141-instance-00000070.
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.414 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9625549d-ed91-49a6-bf4d-07d415f33630]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.439 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[572f4cfa-0f0e-4717-81eb-4bb2fa148c00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:25 np0005629333 NetworkManager[49836]: <info>  [1772023465.4454] manager: (tape4d059ba-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/460)
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.444 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d7338b69-4728-4db1-b8d2-7f69f130c079]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.445 244018 DEBUG nova.network.neutron [-] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.472 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5485cfe8-ba75-4d14-a883-94516f0cb78e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.475 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[46e64f79-2d68-4e0f-a84e-2ccb6f68a3ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.481 244018 INFO nova.compute.manager [-] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Took 1.68 seconds to deallocate network for instance.#033[00m
Feb 25 07:44:25 np0005629333 NetworkManager[49836]: <info>  [1772023465.4931] device (tape4d059ba-e0): carrier: link connected
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.496 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a7d57583-e85a-4e56-a93f-6bb30e2be85c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.509 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f26a8acc-cd33-4a38-9457-a843a9012b9c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4d059ba-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:42:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 335], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543490, 'reachable_time': 22180, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343376, 'error': None, 'target': 'ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.522 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c9e95ad1-59bf-42d3-a1a6-e8f1085e0dce]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:42a7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543490, 'tstamp': 543490}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343377, 'error': None, 'target': 'ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.536 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[95ff5321-c0eb-4273-b710-b49fbdaf5279]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4d059ba-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:42:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 335], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543490, 'reachable_time': 22180, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 343378, 'error': None, 'target': 'ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.549 244018 DEBUG nova.compute.manager [req-82348298-23d3-4f82-bbc0-f36109b2a021 req-86cec087-fbc7-459f-a4d4-dbdda47ef193 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-vif-unplugged-7126aff6-e4c8-45c9-a3f7-6c333946b022 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.549 244018 DEBUG oslo_concurrency.lockutils [req-82348298-23d3-4f82-bbc0-f36109b2a021 req-86cec087-fbc7-459f-a4d4-dbdda47ef193 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.550 244018 DEBUG oslo_concurrency.lockutils [req-82348298-23d3-4f82-bbc0-f36109b2a021 req-86cec087-fbc7-459f-a4d4-dbdda47ef193 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.550 244018 DEBUG oslo_concurrency.lockutils [req-82348298-23d3-4f82-bbc0-f36109b2a021 req-86cec087-fbc7-459f-a4d4-dbdda47ef193 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.550 244018 DEBUG nova.compute.manager [req-82348298-23d3-4f82-bbc0-f36109b2a021 req-86cec087-fbc7-459f-a4d4-dbdda47ef193 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] No waiting events found dispatching network-vif-unplugged-7126aff6-e4c8-45c9-a3f7-6c333946b022 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.551 244018 DEBUG nova.compute.manager [req-82348298-23d3-4f82-bbc0-f36109b2a021 req-86cec087-fbc7-459f-a4d4-dbdda47ef193 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-vif-unplugged-7126aff6-e4c8-45c9-a3f7-6c333946b022 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.551 244018 DEBUG nova.compute.manager [req-82348298-23d3-4f82-bbc0-f36109b2a021 req-86cec087-fbc7-459f-a4d4-dbdda47ef193 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-vif-plugged-7126aff6-e4c8-45c9-a3f7-6c333946b022 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.551 244018 DEBUG oslo_concurrency.lockutils [req-82348298-23d3-4f82-bbc0-f36109b2a021 req-86cec087-fbc7-459f-a4d4-dbdda47ef193 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.552 244018 DEBUG oslo_concurrency.lockutils [req-82348298-23d3-4f82-bbc0-f36109b2a021 req-86cec087-fbc7-459f-a4d4-dbdda47ef193 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.552 244018 DEBUG oslo_concurrency.lockutils [req-82348298-23d3-4f82-bbc0-f36109b2a021 req-86cec087-fbc7-459f-a4d4-dbdda47ef193 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.553 244018 DEBUG nova.compute.manager [req-82348298-23d3-4f82-bbc0-f36109b2a021 req-86cec087-fbc7-459f-a4d4-dbdda47ef193 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] No waiting events found dispatching network-vif-plugged-7126aff6-e4c8-45c9-a3f7-6c333946b022 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.553 244018 WARNING nova.compute.manager [req-82348298-23d3-4f82-bbc0-f36109b2a021 req-86cec087-fbc7-459f-a4d4-dbdda47ef193 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received unexpected event network-vif-plugged-7126aff6-e4c8-45c9-a3f7-6c333946b022 for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.553 244018 DEBUG nova.compute.manager [req-82348298-23d3-4f82-bbc0-f36109b2a021 req-86cec087-fbc7-459f-a4d4-dbdda47ef193 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-vif-deleted-473bf89e-f488-42b9-b6d3-d736d2a61760 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.553 244018 DEBUG nova.compute.manager [req-82348298-23d3-4f82-bbc0-f36109b2a021 req-86cec087-fbc7-459f-a4d4-dbdda47ef193 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-vif-deleted-7126aff6-e4c8-45c9-a3f7-6c333946b022 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
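
The acquire/release churn around "cf7f6093-...-events" above is nova's per-instance event lock: external events arriving from neutron (vif-unplugged, vif-plugged, vif-deleted) are popped under a named lock so concurrent deliveries and any waiters for the same instance serialize cleanly, which is why each event shows an acquire, a zero-wait hold, and a release. The same oslo.concurrency primitive, with the lock name from the log and a stand-in body:

    # The named-lock pattern visible in the lockutils lines above.
    from oslo_concurrency import lockutils

    with lockutils.lock('cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events'):
        # pop_instance_event does its event-dict bookkeeping under this
        # lock, so deliveries for one instance never interleave.
        pass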
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.556 244018 DEBUG oslo_concurrency.lockutils [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.556 244018 DEBUG oslo_concurrency.lockutils [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.561 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[700acde5-24f8-473b-8330-b4dffb884da9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.605 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[90433b16-04e7-40d2-93ee-23c482608da8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.607 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4d059ba-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.607 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.608 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4d059ba-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:44:25 np0005629333 kernel: tape4d059ba-e0: entered promiscuous mode
Feb 25 07:44:25 np0005629333 NetworkManager[49836]: <info>  [1772023465.6105] manager: (tape4d059ba-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/461)
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.612 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.613 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape4d059ba-e0, col_values=(('external_ids', {'iface-id': '751d6f2c-9451-4526-9618-730226a78a70'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
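
The three ovsdbapp transactions just logged relocate the -e0 veth end: drop it from br-ex if present (a no-op here, hence "Transaction caused no change"), add it to br-int, and stamp external_ids:iface-id so ovn-controller will bind it to the metadata port 751d6f2c-.... Equivalent calls against ovsdbapp's Open_vSwitch API, as a sketch (connection setup omitted; 'api' is assumed to be an already-connected impl_idl API object):

    # ovsdbapp equivalents of the three commands logged above.
    api.del_port('tape4d059ba-e0', bridge='br-ex', if_exists=True).execute()
    api.add_port('br-int', 'tape4d059ba-e0', may_exist=True).execute()
    api.db_set('Interface', 'tape4d059ba-e0',
               ('external_ids',
                {'iface-id': '751d6f2c-9451-4526-9618-730226a78a70'})
               ).execute()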
Feb 25 07:44:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:25Z|01112|binding|INFO|Releasing lport 751d6f2c-9451-4526-9618-730226a78a70 from this chassis (sb_readonly=0)
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.616 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e4d059ba-eacc-463b-a8a4-393a5a36dba3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e4d059ba-eacc-463b-a8a4-393a5a36dba3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.617 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6537c473-9d62-48a0-a130-fb9a93d056b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.619 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-e4d059ba-eacc-463b-a8a4-393a5a36dba3
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/e4d059ba-eacc-463b-a8a4-393a5a36dba3.pid.haproxy
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID e4d059ba-eacc-463b-a8a4-393a5a36dba3
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:44:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:25.619 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'env', 'PROCESS_TAG=haproxy-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e4d059ba-eacc-463b-a8a4-393a5a36dba3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
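
The config dumped above runs one haproxy per network: it binds the metadata address 169.254.169.254:80 inside the ovnmeta- namespace, tags every request with an X-OVN-Network-ID header so the metadata service can identify the network, and proxies to the agent's socket at /var/lib/neutron/metadata_proxy. Stripped of the sudo/neutron-rootwrap indirection, the spawn command logged at 12:44:25.619 reduces to (a sketch; assumes root):

    # Bare-bones version of the haproxy spawn logged above.
    import subprocess

    ns = 'ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3'
    cfg = '/var/lib/neutron/ovn-metadata-proxy/e4d059ba-eacc-463b-a8a4-393a5a36dba3.conf'
    subprocess.run(['ip', 'netns', 'exec', ns, 'haproxy', '-f', cfg],
                   check=True)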
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.623 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.667 244018 DEBUG nova.compute.manager [req-8208e997-180f-4d0e-b4a9-ce6b02a338b4 req-3d00c0f1-6d83-452e-b0db-bb72993e0605 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received event network-vif-plugged-473bf89e-f488-42b9-b6d3-d736d2a61760 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.669 244018 DEBUG oslo_concurrency.lockutils [req-8208e997-180f-4d0e-b4a9-ce6b02a338b4 req-3d00c0f1-6d83-452e-b0db-bb72993e0605 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.669 244018 DEBUG oslo_concurrency.lockutils [req-8208e997-180f-4d0e-b4a9-ce6b02a338b4 req-3d00c0f1-6d83-452e-b0db-bb72993e0605 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.670 244018 DEBUG oslo_concurrency.lockutils [req-8208e997-180f-4d0e-b4a9-ce6b02a338b4 req-3d00c0f1-6d83-452e-b0db-bb72993e0605 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.670 244018 DEBUG nova.compute.manager [req-8208e997-180f-4d0e-b4a9-ce6b02a338b4 req-3d00c0f1-6d83-452e-b0db-bb72993e0605 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] No waiting events found dispatching network-vif-plugged-473bf89e-f488-42b9-b6d3-d736d2a61760 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.670 244018 WARNING nova.compute.manager [req-8208e997-180f-4d0e-b4a9-ce6b02a338b4 req-3d00c0f1-6d83-452e-b0db-bb72993e0605 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Received unexpected event network-vif-plugged-473bf89e-f488-42b9-b6d3-d736d2a61760 for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.676 244018 DEBUG oslo_concurrency.processutils [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
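
The ceph df call above is part of update_available_resource: on an RBD-backed hypervisor, nova sizes its storage from the cluster's per-pool stats rather than the local filesystem. A sketch of consuming that output (key names follow current ceph JSON and may vary by release; pool and credentials taken from the log):

    # Parse 'ceph df --format=json' for the vms pool's usage figures.
    import json
    import subprocess

    raw = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout
    for pool in json.loads(raw)['pools']:
        if pool['name'] == 'vms':
            print(pool['stats'])  # bytes_used / max_avail etc.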
Feb 25 07:44:25 np0005629333 podman[343386]: 2026-02-25 12:44:25.770108516 +0000 UTC m=+0.106765324 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.887 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023465.8874667, 874359d8-3251-4416-82dc-f6776853e384 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.888 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] VM Started (Lifecycle Event)#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.906 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.911 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023465.887552, 874359d8-3251-4416-82dc-f6776853e384 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.911 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.926 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.930 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:44:25 np0005629333 nova_compute[244014]: 2026-02-25 12:44:25.954 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
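The Paused lifecycle handler above compares DB power_state 0 (NOSTATE) with VM power_state 3 (PAUSED) but skips the sync because task_state is still "spawning". The constants match nova.compute.power_state; the decision itself reduces to a guard like this illustrative function, not nova's actual code:

    NOSTATE, RUNNING, PAUSED = 0x00, 0x01, 0x03  # values from nova.compute.power_state

    def should_sync_power_state(task_state, db_power_state, vm_power_state):
        # Illustrative reduction of the handle_lifecycle_event skip rule.
        if task_state is not None:        # e.g. "spawning" above
            return False                  # -> "instance has a pending task (...). Skip."
        return db_power_state != vm_power_state

    print(should_sync_power_state("spawning", NOSTATE, PAUSED))  # False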
Feb 25 07:44:25 np0005629333 podman[343498]: 2026-02-25 12:44:25.993907441 +0000 UTC m=+0.058755959 container create d905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:44:26 np0005629333 systemd[1]: Started libpod-conmon-d905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31.scope.
Feb 25 07:44:26 np0005629333 podman[343498]: 2026-02-25 12:44:25.961733783 +0000 UTC m=+0.026582281 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:44:26 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:44:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5928287d34ac4333e0ae9c06ec6f097d3bd4e406cca623c2f40db94d0a23763/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:44:26 np0005629333 podman[343498]: 2026-02-25 12:44:26.091645389 +0000 UTC m=+0.156493897 container init d905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:44:26 np0005629333 podman[343498]: 2026-02-25 12:44:26.099212983 +0000 UTC m=+0.164061461 container start d905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:44:26 np0005629333 neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3[343513]: [NOTICE]   (343517) : New worker (343519) forked
Feb 25 07:44:26 np0005629333 neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3[343513]: [NOTICE]   (343517) : Loading success.
Feb 25 07:44:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:44:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3446958632' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:44:26 np0005629333 nova_compute[244014]: 2026-02-25 12:44:26.223 244018 DEBUG oslo_concurrency.processutils [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
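The pair of processutils lines (started at 12:44:25.676, returned here 0.547s later) is a complete round trip: nova shells out to ceph df --format=json with the openstack client id and parses the JSON to size its RBD-backed storage. A self-contained sketch of that call; the "stats" key names follow recent Ceph releases and are an assumption here:

    import json
    import subprocess

    def ceph_df(client_id="openstack", conf="/etc/ceph/ceph.conf"):
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", client_id, "--conf", conf])
        return json.loads(out)

    stats = ceph_df()["stats"]          # cluster-wide totals; per-pool data is under "pools"
    gib = 1024 ** 3
    print("total=%d GiB avail=%d GiB" % (stats["total_bytes"] // gib,
                                         stats["total_avail_bytes"] // gib))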
Feb 25 07:44:26 np0005629333 nova_compute[244014]: 2026-02-25 12:44:26.227 244018 DEBUG nova.compute.provider_tree [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:44:26 np0005629333 nova_compute[244014]: 2026-02-25 12:44:26.247 244018 DEBUG nova.scheduler.client.report [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:44:26 np0005629333 nova_compute[244014]: 2026-02-25 12:44:26.278 244018 DEBUG oslo_concurrency.lockutils [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
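The inventory dict logged just above is what placement uses to size this provider: for each resource class the usable capacity works out to (total - reserved) * allocation_ratio. Worked through for the values shown, as a plain arithmetic sketch:

    inventory = {   # copied from the set_inventory_for_provider line above
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2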
Feb 25 07:44:26 np0005629333 nova_compute[244014]: 2026-02-25 12:44:26.281 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.376s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:26 np0005629333 nova_compute[244014]: 2026-02-25 12:44:26.281 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:26 np0005629333 nova_compute[244014]: 2026-02-25 12:44:26.281 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:44:26 np0005629333 nova_compute[244014]: 2026-02-25 12:44:26.282 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:44:26 np0005629333 nova_compute[244014]: 2026-02-25 12:44:26.345 244018 INFO nova.scheduler.client.report [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance cf7f6093-44a3-4e8f-8970-db25cf0b4ab9#033[00m
Feb 25 07:44:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 85 KiB/s rd, 1.9 MiB/s wr, 90 op/s
Feb 25 07:44:26 np0005629333 nova_compute[244014]: 2026-02-25 12:44:26.414 244018 DEBUG oslo_concurrency.lockutils [None req-2fdbe1fb-8615-49e5-98cf-ad9a2dde019b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "cf7f6093-44a3-4e8f-8970-db25cf0b4ab9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.302s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:44:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3063588877' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:44:26 np0005629333 nova_compute[244014]: 2026-02-25 12:44:26.885 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:44:26 np0005629333 nova_compute[244014]: 2026-02-25 12:44:26.953 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:44:26 np0005629333 nova_compute[244014]: 2026-02-25 12:44:26.954 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:44:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:26Z|01113|binding|INFO|Releasing lport 751d6f2c-9451-4526-9618-730226a78a70 from this chassis (sb_readonly=0)
Feb 25 07:44:26 np0005629333 nova_compute[244014]: 2026-02-25 12:44:26.995 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:27Z|01114|binding|INFO|Releasing lport 751d6f2c-9451-4526-9618-730226a78a70 from this chassis (sb_readonly=0)
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.061 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.078 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.079 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3601MB free_disk=59.92112819105387GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.079 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.079 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.157 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 874359d8-3251-4416-82dc-f6776853e384 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.157 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.158 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.188 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:44:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:44:27 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1048043195' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.727 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.732 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.751 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.773 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.774 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.837 244018 DEBUG nova.compute.manager [req-3d6aa62d-bad4-45c6-b2a0-2ea84d7bd55b req-efaee553-b5f6-49d0-9e25-4e2b861c209f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.837 244018 DEBUG oslo_concurrency.lockutils [req-3d6aa62d-bad4-45c6-b2a0-2ea84d7bd55b req-efaee553-b5f6-49d0-9e25-4e2b861c209f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.837 244018 DEBUG oslo_concurrency.lockutils [req-3d6aa62d-bad4-45c6-b2a0-2ea84d7bd55b req-efaee553-b5f6-49d0-9e25-4e2b861c209f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.838 244018 DEBUG oslo_concurrency.lockutils [req-3d6aa62d-bad4-45c6-b2a0-2ea84d7bd55b req-efaee553-b5f6-49d0-9e25-4e2b861c209f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.838 244018 DEBUG nova.compute.manager [req-3d6aa62d-bad4-45c6-b2a0-2ea84d7bd55b req-efaee553-b5f6-49d0-9e25-4e2b861c209f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Processing event network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.838 244018 DEBUG nova.compute.manager [req-3d6aa62d-bad4-45c6-b2a0-2ea84d7bd55b req-efaee553-b5f6-49d0-9e25-4e2b861c209f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.838 244018 DEBUG oslo_concurrency.lockutils [req-3d6aa62d-bad4-45c6-b2a0-2ea84d7bd55b req-efaee553-b5f6-49d0-9e25-4e2b861c209f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.838 244018 DEBUG oslo_concurrency.lockutils [req-3d6aa62d-bad4-45c6-b2a0-2ea84d7bd55b req-efaee553-b5f6-49d0-9e25-4e2b861c209f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.839 244018 DEBUG oslo_concurrency.lockutils [req-3d6aa62d-bad4-45c6-b2a0-2ea84d7bd55b req-efaee553-b5f6-49d0-9e25-4e2b861c209f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.839 244018 DEBUG nova.compute.manager [req-3d6aa62d-bad4-45c6-b2a0-2ea84d7bd55b req-efaee553-b5f6-49d0-9e25-4e2b861c209f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] No waiting events found dispatching network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.839 244018 WARNING nova.compute.manager [req-3d6aa62d-bad4-45c6-b2a0-2ea84d7bd55b req-efaee553-b5f6-49d0-9e25-4e2b861c209f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received unexpected event network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 for instance with vm_state building and task_state spawning.#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.840 244018 DEBUG nova.compute.manager [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.843 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023467.8432598, 874359d8-3251-4416-82dc-f6776853e384 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.843 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.845 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.848 244018 INFO nova.virt.libvirt.driver [-] [instance: 874359d8-3251-4416-82dc-f6776853e384] Instance spawned successfully.#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.849 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.872 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.877 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.878 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.878 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.878 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.879 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.879 244018 DEBUG nova.virt.libvirt.driver [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.882 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.920 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.958 244018 INFO nova.compute.manager [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Took 9.76 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:44:27 np0005629333 nova_compute[244014]: 2026-02-25 12:44:27.959 244018 DEBUG nova.compute.manager [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:44:28 np0005629333 nova_compute[244014]: 2026-02-25 12:44:28.034 244018 INFO nova.compute.manager [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Took 10.95 seconds to build instance.#033[00m
Feb 25 07:44:28 np0005629333 nova_compute[244014]: 2026-02-25 12:44:28.054 244018 DEBUG oslo_concurrency.lockutils [None req-77d8df34-bbfd-4be7-9c3f-4cf45c6b414e 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.227s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
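The 11.227s hold reported above is the per-instance-UUID lock that serializes _locked_do_build_and_run_instance for the whole build. oslo.concurrency exposes the same primitive; a minimal sketch using the real lockutils.synchronized decorator (process-local by default; external=True would add a file lock):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("874359d8-3251-4416-82dc-f6776853e384")
    def locked_do_build_and_run_instance():
        # Concurrent callers using the same lock name serialize here, which is
        # why the log reports how long the lock was held (11.227s above).
        pass

    locked_do_build_and_run_instance()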
Feb 25 07:44:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:44:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 112 KiB/s rd, 1.9 MiB/s wr, 127 op/s
Feb 25 07:44:28 np0005629333 nova_compute[244014]: 2026-02-25 12:44:28.773 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:44:28 np0005629333 nova_compute[244014]: 2026-02-25 12:44:28.774 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:44:28 np0005629333 nova_compute[244014]: 2026-02-25 12:44:28.869 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:28 np0005629333 nova_compute[244014]: 2026-02-25 12:44:28.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:44:28 np0005629333 nova_compute[244014]: 2026-02-25 12:44:28.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:44:30 np0005629333 nova_compute[244014]: 2026-02-25 12:44:30.142 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 1.8 MiB/s wr, 99 op/s
Feb 25 07:44:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:44:30
Feb 25 07:44:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:44:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:44:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'vms', 'images', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', '.rgw.root']
Feb 25 07:44:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
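The balancer lines above are one pass of the mgr balancer module in upmap mode: it builds an optimization plan over the listed pools and, with all 305 PGs already active+clean and evenly placed, prepares 0 of a possible 10 upmap changes. The same state is visible from the CLI; a sketch, assuming a keyring with mgr access (output shape varies by Ceph release):

    import subprocess

    # "ceph balancer status" reports the mode (upmap here) and whether the
    # module is active; exact fields differ between releases.
    print(subprocess.check_output(["ceph", "balancer", "status"], text=True))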
Feb 25 07:44:31 np0005629333 nova_compute[244014]: 2026-02-25 12:44:31.431 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023456.4303594, c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:44:31 np0005629333 nova_compute[244014]: 2026-02-25 12:44:31.434 244018 INFO nova.compute.manager [-] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:44:31 np0005629333 nova_compute[244014]: 2026-02-25 12:44:31.465 244018 DEBUG nova.compute.manager [None req-d488c380-a838-4703-98f2-a866ef8c19f4 - - - - - -] [instance: c5c6e3f2-3aa3-4b5a-89ac-90bdd2118367] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:44:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:44:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:44:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:44:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:44:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:44:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:44:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:44:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:44:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:44:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:44:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:44:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:44:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:44:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:44:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:44:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:44:32 np0005629333 nova_compute[244014]: 2026-02-25 12:44:32.265 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:32 np0005629333 NetworkManager[49836]: <info>  [1772023472.2668] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/462)
Feb 25 07:44:32 np0005629333 NetworkManager[49836]: <info>  [1772023472.2683] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/463)
Feb 25 07:44:32 np0005629333 nova_compute[244014]: 2026-02-25 12:44:32.302 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:32Z|01115|binding|INFO|Releasing lport 751d6f2c-9451-4526-9618-730226a78a70 from this chassis (sb_readonly=0)
Feb 25 07:44:32 np0005629333 nova_compute[244014]: 2026-02-25 12:44:32.312 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Feb 25 07:44:33 np0005629333 nova_compute[244014]: 2026-02-25 12:44:33.017 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:33 np0005629333 nova_compute[244014]: 2026-02-25 12:44:33.076 244018 DEBUG nova.compute.manager [req-2daef223-2afe-4826-b406-9e7042fc1d09 req-136871a8-adf9-4f50-9003-1c34e6ef3240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-changed-97fb9f99-cb59-4581-8866-375ea3e167d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:44:33 np0005629333 nova_compute[244014]: 2026-02-25 12:44:33.077 244018 DEBUG nova.compute.manager [req-2daef223-2afe-4826-b406-9e7042fc1d09 req-136871a8-adf9-4f50-9003-1c34e6ef3240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Refreshing instance network info cache due to event network-changed-97fb9f99-cb59-4581-8866-375ea3e167d7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:44:33 np0005629333 nova_compute[244014]: 2026-02-25 12:44:33.077 244018 DEBUG oslo_concurrency.lockutils [req-2daef223-2afe-4826-b406-9e7042fc1d09 req-136871a8-adf9-4f50-9003-1c34e6ef3240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:44:33 np0005629333 nova_compute[244014]: 2026-02-25 12:44:33.077 244018 DEBUG oslo_concurrency.lockutils [req-2daef223-2afe-4826-b406-9e7042fc1d09 req-136871a8-adf9-4f50-9003-1c34e6ef3240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:44:33 np0005629333 nova_compute[244014]: 2026-02-25 12:44:33.078 244018 DEBUG nova.network.neutron [req-2daef223-2afe-4826-b406-9e7042fc1d09 req-136871a8-adf9-4f50-9003-1c34e6ef3240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Refreshing network info cache for port 97fb9f99-cb59-4581-8866-375ea3e167d7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:44:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:44:33 np0005629333 nova_compute[244014]: 2026-02-25 12:44:33.871 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:33 np0005629333 nova_compute[244014]: 2026-02-25 12:44:33.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:44:33 np0005629333 nova_compute[244014]: 2026-02-25 12:44:33.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 07:44:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1902: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 522 KiB/s wr, 118 op/s
Feb 25 07:44:34 np0005629333 nova_compute[244014]: 2026-02-25 12:44:34.793 244018 DEBUG nova.network.neutron [req-2daef223-2afe-4826-b406-9e7042fc1d09 req-136871a8-adf9-4f50-9003-1c34e6ef3240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updated VIF entry in instance network info cache for port 97fb9f99-cb59-4581-8866-375ea3e167d7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:44:34 np0005629333 nova_compute[244014]: 2026-02-25 12:44:34.794 244018 DEBUG nova.network.neutron [req-2daef223-2afe-4826-b406-9e7042fc1d09 req-136871a8-adf9-4f50-9003-1c34e6ef3240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updating instance_info_cache with network_info: [{"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:44:34 np0005629333 nova_compute[244014]: 2026-02-25 12:44:34.816 244018 DEBUG oslo_concurrency.lockutils [req-2daef223-2afe-4826-b406-9e7042fc1d09 req-136871a8-adf9-4f50-9003-1c34e6ef3240 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
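The instance_info_cache payload updated above is plain JSON, so pulling the fixed and floating addresses out of a VIF entry is a dictionary walk. A sketch over a trimmed copy of the entry from the log:

    vif = {  # trimmed from the network_info logged above
        "id": "97fb9f99-cb59-4581-8866-375ea3e167d7",
        "network": {"subnets": [{"ips": [{
            "address": "10.100.0.11",
            "floating_ips": [{"address": "192.168.122.235"}],
        }]}]},
    }
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            fips = [f["address"] for f in ip.get("floating_ips", [])]
            print(ip["address"], "->", fips)   # 10.100.0.11 -> ['192.168.122.235']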
Feb 25 07:44:35 np0005629333 nova_compute[244014]: 2026-02-25 12:44:35.143 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:35 np0005629333 nova_compute[244014]: 2026-02-25 12:44:35.811 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023460.8090527, b5941b54-9cd2-465c-89c0-3cf87ebed83e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:44:35 np0005629333 nova_compute[244014]: 2026-02-25 12:44:35.811 244018 INFO nova.compute.manager [-] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:44:35 np0005629333 nova_compute[244014]: 2026-02-25 12:44:35.833 244018 DEBUG nova.compute.manager [None req-ac637bbc-09ea-4ca8-8cf5-8ecabe5529bc - - - - - -] [instance: b5941b54-9cd2-465c-89c0-3cf87ebed83e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:44:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 25 07:44:37 np0005629333 nova_compute[244014]: 2026-02-25 12:44:37.882 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:44:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 25 07:44:38 np0005629333 nova_compute[244014]: 2026-02-25 12:44:38.365 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023463.3641844, cf7f6093-44a3-4e8f-8970-db25cf0b4ab9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:44:38 np0005629333 nova_compute[244014]: 2026-02-25 12:44:38.366 244018 INFO nova.compute.manager [-] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:44:38 np0005629333 nova_compute[244014]: 2026-02-25 12:44:38.383 244018 DEBUG nova.compute.manager [None req-ac0cb84d-a76a-4344-99b7-6d1bd56b312f - - - - - -] [instance: cf7f6093-44a3-4e8f-8970-db25cf0b4ab9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:44:38 np0005629333 nova_compute[244014]: 2026-02-25 12:44:38.874 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:39 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:39Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:61:b7:f7 10.100.0.11
Feb 25 07:44:39 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:39Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:61:b7:f7 10.100.0.11
Feb 25 07:44:40 np0005629333 nova_compute[244014]: 2026-02-25 12:44:40.145 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:40 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:40.252 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:44:40 np0005629333 nova_compute[244014]: 2026-02-25 12:44:40.252 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:40 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:40.253 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
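The metadata agent reacts to the SB_Global nb_cfg bump (31 -> 32) but waits a few seconds before touching the Chassis table, so a fleet of agents does not stampede the southbound DB at once. Whether neutron computes the delay exactly this way is an assumption; the general pattern is a jittered deferral:

    import random
    import time

    def delayed_chassis_update(update_fn, max_delay=10):
        # Hypothetical jitter helper, not neutron's actual code: spread agent
        # writes over [0, max_delay] seconds after a config bump.
        delay = random.randint(0, max_delay)
        print("Delaying updating chassis table for %d seconds" % delay)
        time.sleep(delay)
        update_fn()

    delayed_chassis_update(lambda: print("chassis row updated"))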
Feb 25 07:44:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 200 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Feb 25 07:44:41 np0005629333 nova_compute[244014]: 2026-02-25 12:44:41.303 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "aee87402-4b34-4083-888b-bb653e2beaa9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:41 np0005629333 nova_compute[244014]: 2026-02-25 12:44:41.304 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:41 np0005629333 nova_compute[244014]: 2026-02-25 12:44:41.325 244018 DEBUG nova.compute.manager [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:44:41 np0005629333 nova_compute[244014]: 2026-02-25 12:44:41.401 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:41 np0005629333 nova_compute[244014]: 2026-02-25 12:44:41.402 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:41 np0005629333 nova_compute[244014]: 2026-02-25 12:44:41.411 244018 DEBUG nova.virt.hardware [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:44:41 np0005629333 nova_compute[244014]: 2026-02-25 12:44:41.412 244018 INFO nova.compute.claims [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Claim successful on node compute-0.ctlplane.example.com#033[00m
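The hardware.py debug line two entries up is the early exit in NUMA fitting: with no instance NUMA topology requested there is nothing to fit, and the claim above succeeds without NUMA placement. An illustrative guard capturing that behavior, not nova's actual function:

    def numa_fit_instance_to_host(host_topology, instance_topology):
        # "Require both a host and instance NUMA topology to fit instance on host."
        if not (host_topology and instance_topology):
            return None        # no constraint to satisfy; claim continues without NUMA
        # ...real fitting logic would pack instance cells onto host cells here...

    print(numa_fit_instance_to_host(None, None))  # None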
Feb 25 07:44:41 np0005629333 nova_compute[244014]: 2026-02-25 12:44:41.556 244018 DEBUG oslo_concurrency.processutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:44:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:44:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2544809804' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:44:42 np0005629333 nova_compute[244014]: 2026-02-25 12:44:42.098 244018 DEBUG oslo_concurrency.processutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:44:42 np0005629333 nova_compute[244014]: 2026-02-25 12:44:42.106 244018 DEBUG nova.compute.provider_tree [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:44:42 np0005629333 nova_compute[244014]: 2026-02-25 12:44:42.126 244018 DEBUG nova.scheduler.client.report [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
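The inventory dict above fixes what placement will let the scheduler consume: usable capacity is roughly (total - reserved) * allocation_ratio per resource class. A quick worked check of the logged numbers (plain arithmetic, not placement code):

    # Worked example: usable capacity = (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2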
Feb 25 07:44:42 np0005629333 nova_compute[244014]: 2026-02-25 12:44:42.165 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:42 np0005629333 nova_compute[244014]: 2026-02-25 12:44:42.166 244018 DEBUG nova.compute.manager [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:44:42 np0005629333 nova_compute[244014]: 2026-02-25 12:44:42.221 244018 DEBUG nova.compute.manager [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:44:42 np0005629333 nova_compute[244014]: 2026-02-25 12:44:42.221 244018 DEBUG nova.network.neutron [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:44:42 np0005629333 nova_compute[244014]: 2026-02-25 12:44:42.257 244018 INFO nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:44:42 np0005629333 nova_compute[244014]: 2026-02-25 12:44:42.277 244018 DEBUG nova.compute.manager [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:44:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 305 active+clean; 217 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.2 MiB/s wr, 115 op/s
Feb 25 07:44:42 np0005629333 nova_compute[244014]: 2026-02-25 12:44:42.388 244018 DEBUG nova.compute.manager [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:44:42 np0005629333 nova_compute[244014]: 2026-02-25 12:44:42.390 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:44:42 np0005629333 nova_compute[244014]: 2026-02-25 12:44:42.391 244018 INFO nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Creating image(s)#033[00m
Feb 25 07:44:42 np0005629333 nova_compute[244014]: 2026-02-25 12:44:42.423 244018 DEBUG nova.storage.rbd_utils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image aee87402-4b34-4083-888b-bb653e2beaa9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:44:42 np0005629333 nova_compute[244014]: 2026-02-25 12:44:42.454 244018 DEBUG nova.storage.rbd_utils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image aee87402-4b34-4083-888b-bb653e2beaa9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:44:42 np0005629333 nova_compute[244014]: 2026-02-25 12:44:42.476 244018 DEBUG nova.storage.rbd_utils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image aee87402-4b34-4083-888b-bb653e2beaa9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:44:42 np0005629333 nova_compute[244014]: 2026-02-25 12:44:42.479 244018 DEBUG oslo_concurrency.processutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
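The qemu-img probe just launched is deliberately wrapped in oslo_concurrency.prlimit, capping address space at 1 GiB and CPU time at 30 s so a hostile or corrupt image cannot wedge the compute host during inspection. The same guard is available directly from processutils; a sketch using the base-image path from the log:

    # Sketch: run qemu-img info under the resource caps shown in the log.
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824,  # 1 GiB
                                        cpu_time=30)               # seconds
    out, _err = processutils.execute(
        "qemu-img", "info",
        "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6",
        "--force-share", "--output=json",
        env_variables={"LC_ALL": "C", "LANG": "C"},
        prlimit=limits)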
Feb 25 07:44:42 np0005629333 nova_compute[244014]: 2026-02-25 12:44:42.514 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:44:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:44:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:44:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:44:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005991630289651938 of space, bias 1.0, pg target 0.17974890868955815 quantized to 32 (current 32)
Feb 25 07:44:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:44:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:44:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:44:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:44:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:44:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493807710132275 of space, bias 1.0, pg target 0.7481423130396825 quantized to 32 (current 32)
Feb 25 07:44:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:44:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.006242220869224e-06 of space, bias 4.0, pg target 0.0012074906650430689 quantized to 16 (current 16)
Feb 25 07:44:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:44:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:44:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:44:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:44:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:44:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:44:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:44:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:44:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:44:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
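The pg_autoscaler block above decodes as: for each pool, pg_target = usage_ratio x bias x (target PGs per OSD x OSD count), then quantized to a power of two with per-pool floors (hence "quantized to 32 (current 32)"). The factor of 300 below is inferred from the logged numbers; it matches the default mon_target_pg_per_osd of 100 times this cluster's 3 OSDs:

    # Reproduce the autoscaler's logged "pg target" values (inferred formula).
    TARGET = 100 * 3  # assumed: mon_target_pg_per_osd (default 100) x 3 OSDs
    pools = {
        "vms":                (0.0005991630289651938, 1.0),
        "images":             (0.002493807710132275,  1.0),
        "cephfs.cephfs.meta": (1.006242220869224e-06, 4.0),
    }
    for name, (usage, bias) in pools.items():
        print(name, usage * bias * TARGET)  # matches the pg targets logged above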
Feb 25 07:44:42 np0005629333 nova_compute[244014]: 2026-02-25 12:44:42.569 244018 DEBUG oslo_concurrency.processutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:44:42 np0005629333 nova_compute[244014]: 2026-02-25 12:44:42.570 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:42 np0005629333 nova_compute[244014]: 2026-02-25 12:44:42.571 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:42 np0005629333 nova_compute[244014]: 2026-02-25 12:44:42.572 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
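The acquire/release pairs with "waited"/"held" timings are oslo.concurrency's named locks; here the lock name is the base image's hash, so concurrent builds on this host fetch the base file only once. The pattern, sketched (fetch_base_image is a hypothetical stand-in for nova's fetch_func_sync):

    # Sketch: the named-lock pattern behind the acquired/released lines above.
    from oslo_concurrency import lockutils

    def fetch_base_image(image_hash):
        # Lock name mirrors the base-image hash seen in the log.
        with lockutils.lock(image_hash):
            pass  # populate /var/lib/nova/instances/_base/<image_hash>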
Feb 25 07:44:42 np0005629333 nova_compute[244014]: 2026-02-25 12:44:42.602 244018 DEBUG nova.storage.rbd_utils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image aee87402-4b34-4083-888b-bb653e2beaa9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:44:42 np0005629333 nova_compute[244014]: 2026-02-25 12:44:42.607 244018 DEBUG oslo_concurrency.processutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 aee87402-4b34-4083-888b-bb653e2beaa9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:44:42 np0005629333 nova_compute[244014]: 2026-02-25 12:44:42.635 244018 DEBUG nova.policy [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb37a481eb114226822ed8b2ef4f9a89', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
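The failed policy check above is non-fatal: nova asks oslo.policy whether this user (roles reader,member) may attach to external networks, and on refusal simply builds the port without that privilege. Schematically, with the credentials from the log (the real enforcer is preloaded with nova's registered rules, which this sketch omits):

    # Sketch: an oslo.policy authorization check of the kind logged above.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    creds = {"roles": ["reader", "member"],
             "project_id": "6821a6e7edd54dbe97920b79aae8f54c"}
    allowed = enforcer.enforce("network:attach_external_network", {}, creds)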
Feb 25 07:44:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:44:43 np0005629333 nova_compute[244014]: 2026-02-25 12:44:43.878 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:44 np0005629333 nova_compute[244014]: 2026-02-25 12:44:44.107 244018 DEBUG oslo_concurrency.processutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 aee87402-4b34-4083-888b-bb653e2beaa9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:44:44 np0005629333 nova_compute[244014]: 2026-02-25 12:44:44.191 244018 DEBUG nova.storage.rbd_utils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] resizing rbd image aee87402-4b34-4083-888b-bb653e2beaa9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
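After three probes confirm the rbd image does not exist, nova imports the cached base file into the vms pool and grows it to the flavor's 1 GiB root disk (the resize to 1073741824 bytes above). Nova performs the resize through the rbd Python bindings; the equivalent CLI flow, sketched:

    # Sketch: the import-then-resize flow from the log, as plain CLI calls.
    import subprocess

    BASE = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
    IMG = "aee87402-4b34-4083-888b-bb653e2beaa9_disk"
    CEPH = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    subprocess.check_call(["rbd", "import", "--pool", "vms", BASE, IMG,
                           "--image-format=2"] + CEPH)
    subprocess.check_call(["rbd", "resize", "--pool", "vms", "--image", IMG,
                           "--size", "1024"] + CEPH)  # MiB, i.e. the 1 GiB above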
Feb 25 07:44:44 np0005629333 nova_compute[244014]: 2026-02-25 12:44:44.284 244018 DEBUG nova.network.neutron [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Successfully created port: 8d032336-9efd-4e76-9498-4dafee40640b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:44:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 863 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Feb 25 07:44:44 np0005629333 nova_compute[244014]: 2026-02-25 12:44:44.569 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:44 np0005629333 nova_compute[244014]: 2026-02-25 12:44:44.574 244018 DEBUG nova.objects.instance [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'migration_context' on Instance uuid aee87402-4b34-4083-888b-bb653e2beaa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:44:44 np0005629333 nova_compute[244014]: 2026-02-25 12:44:44.591 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:44:44 np0005629333 nova_compute[244014]: 2026-02-25 12:44:44.591 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Ensure instance console log exists: /var/lib/nova/instances/aee87402-4b34-4083-888b-bb653e2beaa9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:44:44 np0005629333 nova_compute[244014]: 2026-02-25 12:44:44.592 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:44 np0005629333 nova_compute[244014]: 2026-02-25 12:44:44.592 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:44 np0005629333 nova_compute[244014]: 2026-02-25 12:44:44.592 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:45 np0005629333 nova_compute[244014]: 2026-02-25 12:44:45.147 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:45 np0005629333 nova_compute[244014]: 2026-02-25 12:44:45.238 244018 DEBUG nova.network.neutron [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Successfully updated port: 8d032336-9efd-4e76-9498-4dafee40640b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:44:45 np0005629333 nova_compute[244014]: 2026-02-25 12:44:45.256 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:44:45 np0005629333 nova_compute[244014]: 2026-02-25 12:44:45.256 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquired lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:44:45 np0005629333 nova_compute[244014]: 2026-02-25 12:44:45.257 244018 DEBUG nova.network.neutron [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:44:45 np0005629333 nova_compute[244014]: 2026-02-25 12:44:45.380 244018 DEBUG nova.compute.manager [req-25086d65-ca0c-4766-bdfe-2b95bc27c997 req-c7df5bd0-fcdf-4754-b0c5-766c80f90f7a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received event network-changed-8d032336-9efd-4e76-9498-4dafee40640b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:44:45 np0005629333 nova_compute[244014]: 2026-02-25 12:44:45.380 244018 DEBUG nova.compute.manager [req-25086d65-ca0c-4766-bdfe-2b95bc27c997 req-c7df5bd0-fcdf-4754-b0c5-766c80f90f7a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Refreshing instance network info cache due to event network-changed-8d032336-9efd-4e76-9498-4dafee40640b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:44:45 np0005629333 nova_compute[244014]: 2026-02-25 12:44:45.380 244018 DEBUG oslo_concurrency.lockutils [req-25086d65-ca0c-4766-bdfe-2b95bc27c997 req-c7df5bd0-fcdf-4754-b0c5-766c80f90f7a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:44:45 np0005629333 nova_compute[244014]: 2026-02-25 12:44:45.740 244018 DEBUG nova.network.neutron [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:44:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Feb 25 07:44:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Feb 25 07:44:46 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Feb 25 07:44:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:46.255 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:44:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 305 active+clean; 233 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 2.6 MiB/s wr, 76 op/s
Feb 25 07:44:46 np0005629333 nova_compute[244014]: 2026-02-25 12:44:46.811 244018 DEBUG nova.network.neutron [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Updating instance_info_cache with network_info: [{"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:44:46 np0005629333 nova_compute[244014]: 2026-02-25 12:44:46.841 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Releasing lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:44:46 np0005629333 nova_compute[244014]: 2026-02-25 12:44:46.842 244018 DEBUG nova.compute.manager [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Instance network_info: |[{"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:44:46 np0005629333 nova_compute[244014]: 2026-02-25 12:44:46.843 244018 DEBUG oslo_concurrency.lockutils [req-25086d65-ca0c-4766-bdfe-2b95bc27c997 req-c7df5bd0-fcdf-4754-b0c5-766c80f90f7a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:44:46 np0005629333 nova_compute[244014]: 2026-02-25 12:44:46.843 244018 DEBUG nova.network.neutron [req-25086d65-ca0c-4766-bdfe-2b95bc27c997 req-c7df5bd0-fcdf-4754-b0c5-766c80f90f7a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Refreshing network info cache for port 8d032336-9efd-4e76-9498-4dafee40640b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:44:46 np0005629333 nova_compute[244014]: 2026-02-25 12:44:46.850 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Start _get_guest_xml network_info=[{"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:44:46 np0005629333 nova_compute[244014]: 2026-02-25 12:44:46.858 244018 WARNING nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:44:46 np0005629333 nova_compute[244014]: 2026-02-25 12:44:46.864 244018 DEBUG nova.virt.libvirt.host [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:44:46 np0005629333 nova_compute[244014]: 2026-02-25 12:44:46.865 244018 DEBUG nova.virt.libvirt.host [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:44:46 np0005629333 nova_compute[244014]: 2026-02-25 12:44:46.872 244018 DEBUG nova.virt.libvirt.host [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:44:46 np0005629333 nova_compute[244014]: 2026-02-25 12:44:46.872 244018 DEBUG nova.virt.libvirt.host [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
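The two probes above are nova deciding whether CPU shares/quota tuning is available: no cpu controller under cgroups v1, found under v2. On a unified (v2) host the test reduces to reading the root controllers file; a sketch assuming the standard /sys/fs/cgroup mount:

    # Sketch: detect a cgroup-v2 cpu controller as the probe above does.
    def has_cgroupsv2_cpu_controller(path="/sys/fs/cgroup/cgroup.controllers"):
        try:
            with open(path) as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False  # no unified cgroup hierarchy mounted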
Feb 25 07:44:46 np0005629333 nova_compute[244014]: 2026-02-25 12:44:46.873 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:44:46 np0005629333 nova_compute[244014]: 2026-02-25 12:44:46.873 244018 DEBUG nova.virt.hardware [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:44:46 np0005629333 nova_compute[244014]: 2026-02-25 12:44:46.874 244018 DEBUG nova.virt.hardware [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:44:46 np0005629333 nova_compute[244014]: 2026-02-25 12:44:46.874 244018 DEBUG nova.virt.hardware [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:44:46 np0005629333 nova_compute[244014]: 2026-02-25 12:44:46.875 244018 DEBUG nova.virt.hardware [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:44:46 np0005629333 nova_compute[244014]: 2026-02-25 12:44:46.875 244018 DEBUG nova.virt.hardware [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:44:46 np0005629333 nova_compute[244014]: 2026-02-25 12:44:46.875 244018 DEBUG nova.virt.hardware [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:44:46 np0005629333 nova_compute[244014]: 2026-02-25 12:44:46.876 244018 DEBUG nova.virt.hardware [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:44:46 np0005629333 nova_compute[244014]: 2026-02-25 12:44:46.876 244018 DEBUG nova.virt.hardware [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:44:46 np0005629333 nova_compute[244014]: 2026-02-25 12:44:46.876 244018 DEBUG nova.virt.hardware [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:44:46 np0005629333 nova_compute[244014]: 2026-02-25 12:44:46.876 244018 DEBUG nova.virt.hardware [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:44:46 np0005629333 nova_compute[244014]: 2026-02-25 12:44:46.877 244018 DEBUG nova.virt.hardware [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
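The topology walk above is simple arithmetic: with no flavor or image constraints the limits default to 65536 each, and nova enumerates every sockets x cores x threads factorization of the vCPU count before sorting by preference; for 1 vCPU the only factorization is 1:1:1. A toy version of the enumeration (not nova's exact ordering):

    # Sketch: enumerate sockets*cores*threads factorizations of a vCPU count.
    def possible_topologies(vcpus, max_each=65536):
        for sockets in range(1, min(vcpus, max_each) + 1):
            if vcpus % sockets:
                continue
            per_socket = vcpus // sockets
            for cores in range(1, min(per_socket, max_each) + 1):
                if per_socket % cores:
                    continue
                threads = per_socket // cores
                if threads <= max_each:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]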
Feb 25 07:44:46 np0005629333 nova_compute[244014]: 2026-02-25 12:44:46.881 244018 DEBUG oslo_concurrency.processutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:44:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Feb 25 07:44:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Feb 25 07:44:47 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Feb 25 07:44:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:44:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2782215990' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:44:47 np0005629333 nova_compute[244014]: 2026-02-25 12:44:47.417 244018 DEBUG oslo_concurrency.processutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:44:47 np0005629333 nova_compute[244014]: 2026-02-25 12:44:47.438 244018 DEBUG nova.storage.rbd_utils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image aee87402-4b34-4083-888b-bb653e2beaa9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:44:47 np0005629333 nova_compute[244014]: 2026-02-25 12:44:47.442 244018 DEBUG oslo_concurrency.processutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:44:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:44:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/399059919' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:44:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:44:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/399059919' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 07:44:47 np0005629333 nova_compute[244014]: 2026-02-25 12:44:47.895 244018 DEBUG nova.network.neutron [req-25086d65-ca0c-4766-bdfe-2b95bc27c997 req-c7df5bd0-fcdf-4754-b0c5-766c80f90f7a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Updated VIF entry in instance network info cache for port 8d032336-9efd-4e76-9498-4dafee40640b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:44:47 np0005629333 nova_compute[244014]: 2026-02-25 12:44:47.896 244018 DEBUG nova.network.neutron [req-25086d65-ca0c-4766-bdfe-2b95bc27c997 req-c7df5bd0-fcdf-4754-b0c5-766c80f90f7a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Updating instance_info_cache with network_info: [{"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:44:47 np0005629333 nova_compute[244014]: 2026-02-25 12:44:47.922 244018 DEBUG oslo_concurrency.lockutils [req-25086d65-ca0c-4766-bdfe-2b95bc27c997 req-c7df5bd0-fcdf-4754-b0c5-766c80f90f7a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:44:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:44:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1135161991' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.024 244018 DEBUG oslo_concurrency.processutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.025 244018 DEBUG nova.virt.libvirt.vif [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:44:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-204528973',display_name='tempest-TestNetworkAdvancedServerOps-server-204528973',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-204528973',id=113,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5R/voudn53mFn1fesyFCvR/uhV0Qe/z38Tv5jlE5Qy+mC9LN8VH4xJizPPQof/n3K4Uot5xWkFCPJAwQJsVcwssLNZ96buGra8QRtUpJKVli2/glgbebk2AU5Vop/oTA==',key_name='tempest-TestNetworkAdvancedServerOps-211055070',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-z6csfx6h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:44:42Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=aee87402-4b34-4083-888b-bb653e2beaa9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.026 244018 DEBUG nova.network.os_vif_util [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.026 244018 DEBUG nova.network.os_vif_util [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5f:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d032336-9efd-4e76-9498-4dafee40640b,network=Network(cd92597b-67bf-4492-9581-a9a7ec80f716),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d032336-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.027 244018 DEBUG nova.objects.instance [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'pci_devices' on Instance uuid aee87402-4b34-4083-888b-bb653e2beaa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
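Everything from "End _get_guest_xml" below is the domain definition nova hands to libvirt. Once the guest is defined, the same document can be pulled back for inspection with libvirt-python, using the instance UUID from the log:

    # Sketch: fetch the domain XML for the instance built in this log.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("aee87402-4b34-4083-888b-bb653e2beaa9")
    print(dom.XMLDesc(0))  # the same XML nova logs below
    conn.close()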
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.039 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:44:48 np0005629333 nova_compute[244014]:  <uuid>aee87402-4b34-4083-888b-bb653e2beaa9</uuid>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:  <name>instance-00000071</name>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-204528973</nova:name>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:44:46</nova:creationTime>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:44:48 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:        <nova:user uuid="fb37a481eb114226822ed8b2ef4f9a89">tempest-TestNetworkAdvancedServerOps-1424801157-project-member</nova:user>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:        <nova:project uuid="6821a6e7edd54dbe97920b79aae8f54c">tempest-TestNetworkAdvancedServerOps-1424801157</nova:project>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:        <nova:port uuid="8d032336-9efd-4e76-9498-4dafee40640b">
Feb 25 07:44:48 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <entry name="serial">aee87402-4b34-4083-888b-bb653e2beaa9</entry>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <entry name="uuid">aee87402-4b34-4083-888b-bb653e2beaa9</entry>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/aee87402-4b34-4083-888b-bb653e2beaa9_disk">
Feb 25 07:44:48 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:44:48 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/aee87402-4b34-4083-888b-bb653e2beaa9_disk.config">
Feb 25 07:44:48 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:44:48 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:b3:5f:c9"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <target dev="tap8d032336-9e"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/aee87402-4b34-4083-888b-bb653e2beaa9/console.log" append="off"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:44:48 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:44:48 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:44:48 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:44:48 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
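The </domain> close above ends the guest definition that nova's libvirt driver dumped from _get_guest_xml before defining the VM. For offline inspection, a minimal sketch (assuming the XML has been saved to a hypothetical local file named domain.xml) that lists the RBD-backed disks using only Python's standard library:

# Sketch: list the rbd disk sources from a libvirt domain XML like the one
# logged above. "domain.xml" is a hypothetical local copy of that dump.
import xml.etree.ElementTree as ET

tree = ET.parse("domain.xml")
for disk in tree.findall("./devices/disk"):
    source = disk.find("source")
    target = disk.find("target")
    if source is not None and source.get("protocol") == "rbd":
        print(source.get("name"), "->", target.get("dev"), target.get("bus"))

Against the dump above this would print the root disk (vms/aee87402-..._disk -> vda virtio) and the config drive (vms/aee87402-..._disk.config -> sda sata).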
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.040 244018 DEBUG nova.compute.manager [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Preparing to wait for external event network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.040 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.040 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.041 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
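The three lockutils lines above are the stock oslo.concurrency pattern: nova takes a named per-instance lock ("<uuid>-events") around its event registry before blocking on Neutron's network-vif-plugged callback. A sketch of the same pattern, not nova's actual code; the registry and helper names are illustrative:

# Sketch: register an event under a named lock, then wait for the external
# "network-vif-plugged" notification, mirroring prepare_for_instance_event.
import threading
from oslo_concurrency import lockutils

_events = {}  # (instance_uuid, event_name) -> threading.Event

@lockutils.synchronized('aee87402-4b34-4083-888b-bb653e2beaa9-events')
def create_or_get_event(instance_uuid, name):
    return _events.setdefault((instance_uuid, name), threading.Event())

ev = create_or_get_event(
    'aee87402-4b34-4083-888b-bb653e2beaa9',
    'network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b')
# ... plug the VIF, then block until the event arrives or times out:
ev.wait(timeout=300)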
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.041 244018 DEBUG nova.virt.libvirt.vif [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:44:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-204528973',display_name='tempest-TestNetworkAdvancedServerOps-server-204528973',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-204528973',id=113,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5R/voudn53mFn1fesyFCvR/uhV0Qe/z38Tv5jlE5Qy+mC9LN8VH4xJizPPQof/n3K4Uot5xWkFCPJAwQJsVcwssLNZ96buGra8QRtUpJKVli2/glgbebk2AU5Vop/oTA==',key_name='tempest-TestNetworkAdvancedServerOps-211055070',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-z6csfx6h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:44:42Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=aee87402-4b34-4083-888b-bb653e2beaa9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.041 244018 DEBUG nova.network.os_vif_util [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.042 244018 DEBUG nova.network.os_vif_util [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5f:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d032336-9efd-4e76-9498-4dafee40640b,network=Network(cd92597b-67bf-4492-9581-a9a7ec80f716),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d032336-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.042 244018 DEBUG os_vif [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5f:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d032336-9efd-4e76-9498-4dafee40640b,network=Network(cd92597b-67bf-4492-9581-a9a7ec80f716),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d032336-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.043 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.043 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.043 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.047 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.047 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d032336-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.047 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d032336-9e, col_values=(('external_ids', {'iface-id': '8d032336-9efd-4e76-9498-4dafee40640b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:5f:c9', 'vm-uuid': 'aee87402-4b34-4083-888b-bb653e2beaa9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.049 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:48 np0005629333 NetworkManager[49836]: <info>  [1772023488.0500] manager: (tap8d032336-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/464)
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.051 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.054 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.055 244018 INFO os_vif [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5f:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d032336-9efd-4e76-9498-4dafee40640b,network=Network(cd92597b-67bf-4492-9581-a9a7ec80f716),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d032336-9e')#033[00m
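The ovsdbapp transactions above (AddPortCommand plus a DbSetCommand on the Interface row) are what "Successfully plugged vif" amounts to. A rough CLI equivalent as a sketch only; os-vif speaks OVSDB directly through ovsdbapp rather than shelling out:

# Sketch: replay the logged OVSDB transaction with ovs-vsctl. The bridge,
# port name and external_ids values are copied from the log lines above.
import subprocess

port = "tap8d032336-9e"
subprocess.run(["ovs-vsctl", "--may-exist", "add-br", "br-int",
                "--", "set", "Bridge", "br-int", "datapath_type=system"],
               check=True)
subprocess.run(["ovs-vsctl", "--may-exist", "add-port", "br-int", port,
                "--", "set", "Interface", port,
                "external_ids:iface-id=8d032336-9efd-4e76-9498-4dafee40640b",
                "external_ids:iface-status=active",
                "external_ids:attached-mac=fa:16:3e:b3:5f:c9",
                "external_ids:vm-uuid=aee87402-4b34-4083-888b-bb653e2beaa9"],
               check=True)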
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.115 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.116 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.117 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No VIF found with MAC fa:16:3e:b3:5f:c9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.117 244018 INFO nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Using config drive#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.142 244018 DEBUG nova.storage.rbd_utils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image aee87402-4b34-4083-888b-bb653e2beaa9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:44:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Feb 25 07:44:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Feb 25 07:44:48 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Feb 25 07:44:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:44:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 201 KiB/s rd, 5.3 MiB/s wr, 122 op/s
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.398 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.598 244018 INFO nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Creating config drive at /var/lib/nova/instances/aee87402-4b34-4083-888b-bb653e2beaa9/disk.config#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.605 244018 DEBUG oslo_concurrency.processutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aee87402-4b34-4083-888b-bb653e2beaa9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp98i6gijm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.746 244018 DEBUG oslo_concurrency.processutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aee87402-4b34-4083-888b-bb653e2beaa9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp98i6gijm" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.768 244018 DEBUG nova.storage.rbd_utils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image aee87402-4b34-4083-888b-bb653e2beaa9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.771 244018 DEBUG oslo_concurrency.processutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aee87402-4b34-4083-888b-bb653e2beaa9/disk.config aee87402-4b34-4083-888b-bb653e2beaa9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.926 244018 DEBUG oslo_concurrency.processutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aee87402-4b34-4083-888b-bb653e2beaa9/disk.config aee87402-4b34-4083-888b-bb653e2beaa9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:44:48 np0005629333 nova_compute[244014]: 2026-02-25 12:44:48.928 244018 INFO nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Deleting local config drive /var/lib/nova/instances/aee87402-4b34-4083-888b-bb653e2beaa9/disk.config because it was imported into RBD.#033[00m
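The config-drive sequence above runs: verify the rbd image is absent, build an ISO9660 "config-2" volume with mkisofs, import it into the vms pool, then delete the local file. A condensed sketch of the same flow; /tmp/metadata_dir stands in for the temporary directory nova populated:

# Sketch of nova's config-drive build as logged above. The mkisofs flags and
# rbd arguments are copied from the log; the staging dir is hypothetical.
import os
import subprocess

uuid = "aee87402-4b34-4083-888b-bb653e2beaa9"
iso = f"/var/lib/nova/instances/{uuid}/disk.config"
subprocess.run(["mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                "-allow-multidot", "-l", "-J", "-r", "-V", "config-2",
                "/tmp/metadata_dir"], check=True)
subprocess.run(["rbd", "import", "--pool", "vms", iso,
                f"{uuid}_disk.config", "--image-format=2",
                "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
               check=True)
os.remove(iso)  # "Deleting local config drive ... imported into RBD"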
Feb 25 07:44:48 np0005629333 kernel: tap8d032336-9e: entered promiscuous mode
Feb 25 07:44:48 np0005629333 NetworkManager[49836]: <info>  [1772023488.9972] manager: (tap8d032336-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/465)
Feb 25 07:44:49 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:49Z|01116|binding|INFO|Claiming lport 8d032336-9efd-4e76-9498-4dafee40640b for this chassis.
Feb 25 07:44:49 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:49Z|01117|binding|INFO|8d032336-9efd-4e76-9498-4dafee40640b: Claiming fa:16:3e:b3:5f:c9 10.100.0.11
Feb 25 07:44:49 np0005629333 nova_compute[244014]: 2026-02-25 12:44:49.001 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.013 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:5f:c9 10.100.0.11'], port_security=['fa:16:3e:b3:5f:c9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'aee87402-4b34-4083-888b-bb653e2beaa9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd92597b-67bf-4492-9581-a9a7ec80f716', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '815102d4-6506-4957-9109-24ea4e91e4b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d5163c3-96b2-4512-ae02-51f6c9b4bef4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=8d032336-9efd-4e76-9498-4dafee40640b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:44:49 np0005629333 nova_compute[244014]: 2026-02-25 12:44:49.014 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:49 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:49Z|01118|binding|INFO|Setting lport 8d032336-9efd-4e76-9498-4dafee40640b ovn-installed in OVS
Feb 25 07:44:49 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:49Z|01119|binding|INFO|Setting lport 8d032336-9efd-4e76-9498-4dafee40640b up in Southbound
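Once ovn-controller has claimed the lport and set it up in the Southbound DB, the binding can be cross-checked from the chassis. A sketch assuming ovn-sbctl is installed and can reach the SB database:

# Sketch: query the Port_Binding row for the lport claimed above; expect a
# chassis UUID and up=true once the claim has completed.
import subprocess

out = subprocess.run(
    ["ovn-sbctl", "--bare", "--columns=chassis,up", "find", "Port_Binding",
     "logical_port=8d032336-9efd-4e76-9498-4dafee40640b"],
    check=True, capture_output=True, text=True).stdout
print(out)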
Feb 25 07:44:49 np0005629333 nova_compute[244014]: 2026-02-25 12:44:49.017 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.016 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 8d032336-9efd-4e76-9498-4dafee40640b in datapath cd92597b-67bf-4492-9581-a9a7ec80f716 bound to our chassis#033[00m
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.018 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cd92597b-67bf-4492-9581-a9a7ec80f716#033[00m
Feb 25 07:44:49 np0005629333 nova_compute[244014]: 2026-02-25 12:44:49.020 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.030 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[87f60eb9-c1fc-4988-83e9-afb7f476ab86]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.032 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcd92597b-61 in ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.034 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcd92597b-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.034 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9e77e3e0-c2f9-4e2e-9010-f98904b48e95]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.034 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d6258619-aec0-4b08-9364-b0b8577bd2a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
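"Creating VETH tapcd92597b-61 in ovnmeta-..." boils down to a veth pair: the inner end lives in the metadata namespace, while tapcd92597b-60 stays in the root namespace and is plugged into br-int a few lines below. A sketch assuming the pyroute2 library (which backs neutron's privileged ip_lib helpers):

# Sketch: create the veth pair and move one end into the metadata namespace.
from pyroute2 import IPRoute, netns

ns = "ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716"
try:
    netns.create(ns)
except OSError:
    pass  # namespace already provisioned

with IPRoute() as ipr:
    ipr.link("add", ifname="tapcd92597b-60", kind="veth",
             peer="tapcd92597b-61")
    idx = ipr.link_lookup(ifname="tapcd92597b-61")[0]
    ipr.link("set", index=idx, net_ns_fd=ns)  # inner end into the namespace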
Feb 25 07:44:49 np0005629333 systemd-udevd[343981]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:44:49 np0005629333 systemd-machined[210048]: New machine qemu-142-instance-00000071.
Feb 25 07:44:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:44:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.044 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[29a2e364-50ce-4e62-aff2-502da9305b40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:44:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:44:49 np0005629333 NetworkManager[49836]: <info>  [1772023489.0498] device (tap8d032336-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:44:49 np0005629333 NetworkManager[49836]: <info>  [1772023489.0503] device (tap8d032336-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:44:49 np0005629333 systemd[1]: Started Virtual Machine qemu-142-instance-00000071.
Feb 25 07:44:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:44:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:44:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:44:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:44:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:44:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.066 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7d1db437-565c-47ab-9082-5717392a1ec8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:44:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.091 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7c903665-9a20-42fb-9afc-5d64a190bf22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.096 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1b467182-ca74-48a5-8f2c-59c151afd40c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:49 np0005629333 NetworkManager[49836]: <info>  [1772023489.0977] manager: (tapcd92597b-60): new Veth device (/org/freedesktop/NetworkManager/Devices/466)
Feb 25 07:44:49 np0005629333 systemd-udevd[343985]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.123 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a47e83dd-35d6-4a67-85a9-15f62d7295a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.125 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d0a1398a-ae27-4600-af53-86e6b49928bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:49 np0005629333 NetworkManager[49836]: <info>  [1772023489.1399] device (tapcd92597b-60): carrier: link connected
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.146 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e82fce5c-036b-4092-80cb-a72e1b138cf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.157 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9b36980e-8be7-4448-beae-e9401a997db0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd92597b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:89:ac:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 337], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545854, 'reachable_time': 41882, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344043, 'error': None, 'target': 'ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.168 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[92e0d09e-1c94-4c0b-8eeb-21c6ab0d91dd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe89:ace3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545854, 'tstamp': 545854}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344053, 'error': None, 'target': 'ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:49 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:44:49 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:44:49 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.189 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[63d5d807-3411-4e5a-94c1-d650a8baf8ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd92597b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:89:ac:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 337], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545854, 'reachable_time': 41882, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 344063, 'error': None, 'target': 'ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
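The two large RTM_NEWLINK payloads above are netlink link dumps that the privsep daemon ran inside the metadata namespace while verifying the new interface. A sketch of an equivalent query with pyroute2:

# Sketch: dump link name, operstate and MAC inside the metadata namespace,
# roughly the information the privsep replies above carry back to the agent.
from pyroute2 import NetNS

with NetNS("ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716") as ns:
    for link in ns.get_links():
        print(link.get_attr("IFLA_IFNAME"),
              link.get_attr("IFLA_OPERSTATE"),
              link.get_attr("IFLA_ADDRESS"))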
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.213 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eae10f16-2599-4b25-85e1-681b0b405ea0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.255 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1ea203e2-50e2-443c-967b-7407c40329e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.257 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd92597b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.257 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.257 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd92597b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:44:49 np0005629333 nova_compute[244014]: 2026-02-25 12:44:49.259 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:49 np0005629333 kernel: tapcd92597b-60: entered promiscuous mode
Feb 25 07:44:49 np0005629333 NetworkManager[49836]: <info>  [1772023489.2600] manager: (tapcd92597b-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/467)
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.263 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcd92597b-60, col_values=(('external_ids', {'iface-id': '37c4424f-372d-4923-b009-7893a123c4d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:44:49 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:49Z|01120|binding|INFO|Releasing lport 37c4424f-372d-4923-b009-7893a123c4d8 from this chassis (sb_readonly=0)
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.266 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cd92597b-67bf-4492-9581-a9a7ec80f716.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cd92597b-67bf-4492-9581-a9a7ec80f716.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.271 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e6109620-6cde-4719-9d5c-9dd91e6414b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.272 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-cd92597b-67bf-4492-9581-a9a7ec80f716
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/cd92597b-67bf-4492-9581-a9a7ec80f716.pid.haproxy
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID cd92597b-67bf-4492-9581-a9a7ec80f716
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
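create_config_file wrote the haproxy configuration printed above: the proxy binds the link-local metadata address 169.254.169.254:80 inside the namespace, forwards requests to the UNIX socket at /var/lib/neutron/metadata_proxy, and tags each one with the network ID. A minimal stand-in for rendering such a config; the real template lives in neutron.agent.ovn.metadata.driver:

# Sketch: render a cut-down metadata-proxy haproxy config with
# string.Template. Only the fields that vary per network are parameterized.
from string import Template

HAPROXY_TMPL = Template("""\
global
    log /dev/log local0 debug
    log-tag haproxy-metadata-proxy-$network_id
    user root
    group root
    maxconn 1024
    pidfile $pidfile
    daemon

listen listener
    bind 169.254.169.254:80
    server metadata $socket_path
    http-request add-header X-OVN-Network-ID $network_id
""")

net = "cd92597b-67bf-4492-9581-a9a7ec80f716"
print(HAPROXY_TMPL.substitute(
    network_id=net,
    pidfile=f"/var/lib/neutron/external/pids/{net}.pid.haproxy",
    socket_path="/var/lib/neutron/metadata_proxy"))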
Feb 25 07:44:49 np0005629333 nova_compute[244014]: 2026-02-25 12:44:49.272 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:49.273 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716', 'env', 'PROCESS_TAG=haproxy-cd92597b-67bf-4492-9581-a9a7ec80f716', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cd92597b-67bf-4492-9581-a9a7ec80f716.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 07:44:49 np0005629333 nova_compute[244014]: 2026-02-25 12:44:49.309 244018 DEBUG oslo_concurrency.lockutils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:49 np0005629333 nova_compute[244014]: 2026-02-25 12:44:49.311 244018 DEBUG oslo_concurrency.lockutils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:49 np0005629333 nova_compute[244014]: 2026-02-25 12:44:49.311 244018 INFO nova.compute.manager [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Shelving#033[00m
Feb 25 07:44:49 np0005629333 nova_compute[244014]: 2026-02-25 12:44:49.338 244018 DEBUG nova.virt.libvirt.driver [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 25 07:44:49 np0005629333 podman[344086]: 2026-02-25 12:44:49.463623277 +0000 UTC m=+0.040214335 container create 182ab755e4c9a54b3c61c19151a1320c15ced7411a30a4a0d6b284b2298b38e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lamarr, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 07:44:49 np0005629333 systemd[1]: Started libpod-conmon-182ab755e4c9a54b3c61c19151a1320c15ced7411a30a4a0d6b284b2298b38e0.scope.
Feb 25 07:44:49 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:44:49 np0005629333 podman[344086]: 2026-02-25 12:44:49.529498236 +0000 UTC m=+0.106089294 container init 182ab755e4c9a54b3c61c19151a1320c15ced7411a30a4a0d6b284b2298b38e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:44:49 np0005629333 podman[344086]: 2026-02-25 12:44:49.537650936 +0000 UTC m=+0.114242004 container start 182ab755e4c9a54b3c61c19151a1320c15ced7411a30a4a0d6b284b2298b38e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lamarr, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 07:44:49 np0005629333 podman[344086]: 2026-02-25 12:44:49.540768444 +0000 UTC m=+0.117359532 container attach 182ab755e4c9a54b3c61c19151a1320c15ced7411a30a4a0d6b284b2298b38e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 07:44:49 np0005629333 pensive_lamarr[344143]: 167 167
Feb 25 07:44:49 np0005629333 podman[344086]: 2026-02-25 12:44:49.447653557 +0000 UTC m=+0.024244655 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:44:49 np0005629333 nova_compute[244014]: 2026-02-25 12:44:49.546 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023489.5463667, aee87402-4b34-4083-888b-bb653e2beaa9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:44:49 np0005629333 nova_compute[244014]: 2026-02-25 12:44:49.548 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] VM Started (Lifecycle Event)#033[00m
Feb 25 07:44:49 np0005629333 systemd[1]: libpod-182ab755e4c9a54b3c61c19151a1320c15ced7411a30a4a0d6b284b2298b38e0.scope: Deactivated successfully.
Feb 25 07:44:49 np0005629333 podman[344086]: 2026-02-25 12:44:49.556455617 +0000 UTC m=+0.133046685 container died 182ab755e4c9a54b3c61c19151a1320c15ced7411a30a4a0d6b284b2298b38e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lamarr, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:44:49 np0005629333 nova_compute[244014]: 2026-02-25 12:44:49.576 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:44:49 np0005629333 systemd[1]: var-lib-containers-storage-overlay-2f2646160838be3ad6a1ce69c5e96bf86f81a36120dbce0ebc6afa337762a0d2-merged.mount: Deactivated successfully.
Feb 25 07:44:49 np0005629333 nova_compute[244014]: 2026-02-25 12:44:49.584 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023489.546581, aee87402-4b34-4083-888b-bb653e2beaa9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:44:49 np0005629333 nova_compute[244014]: 2026-02-25 12:44:49.585 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:44:49 np0005629333 podman[344086]: 2026-02-25 12:44:49.589334105 +0000 UTC m=+0.165925163 container remove 182ab755e4c9a54b3c61c19151a1320c15ced7411a30a4a0d6b284b2298b38e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_lamarr, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 07:44:49 np0005629333 systemd[1]: libpod-conmon-182ab755e4c9a54b3c61c19151a1320c15ced7411a30a4a0d6b284b2298b38e0.scope: Deactivated successfully.
Feb 25 07:44:49 np0005629333 nova_compute[244014]: 2026-02-25 12:44:49.610 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:44:49 np0005629333 nova_compute[244014]: 2026-02-25 12:44:49.614 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:44:49 np0005629333 nova_compute[244014]: 2026-02-25 12:44:49.648 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
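
The numeric power states in the nova lines above ("from state 1", "DB power_state: 0, VM power_state: 3") come from nova's power_state constants (NOSTATE=0, RUNNING=1, PAUSED=3). A minimal sketch of that mapping and of the "pending task (spawning). Skip." decision; the helper is illustrative, not nova's actual handler:

    # Values match nova/compute/power_state.py; should_sync() below is an
    # illustrative reduction of the decision logged above, not nova's code.
    NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7

    def should_sync(db_power_state, vm_power_state, task_state):
        """Mirror the 'pending task (spawning). Skip.' decision."""
        if task_state is not None:   # e.g. 'spawning'
            return False             # skip the sync while a task is pending
        return db_power_state != vm_power_state

    # The log shows DB power_state 0 (NOSTATE) vs VM power_state 3 (PAUSED)
    # while task_state is 'spawning', so the sync is skipped.
    assert should_sync(NOSTATE, PAUSED, "spawning") is False
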
Feb 25 07:44:49 np0005629333 podman[344178]: 2026-02-25 12:44:49.657962012 +0000 UTC m=+0.056476015 container create 5d6cb16551b0af2601462ec24c2bc4d9d2bd3c8d57b4a0af6c62f63f738e6740 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 25 07:44:49 np0005629333 systemd[1]: Started libpod-conmon-5d6cb16551b0af2601462ec24c2bc4d9d2bd3c8d57b4a0af6c62f63f738e6740.scope.
Feb 25 07:44:49 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:44:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f152e8c42872b5c20ce576f93cd6d4f4e71b6bf0c572194dbd79d4f61bceb58/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:44:49 np0005629333 podman[344178]: 2026-02-25 12:44:49.629595351 +0000 UTC m=+0.028109434 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:44:49 np0005629333 podman[344178]: 2026-02-25 12:44:49.73089454 +0000 UTC m=+0.129408553 container init 5d6cb16551b0af2601462ec24c2bc4d9d2bd3c8d57b4a0af6c62f63f738e6740 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:44:49 np0005629333 podman[344178]: 2026-02-25 12:44:49.736113277 +0000 UTC m=+0.134627270 container start 5d6cb16551b0af2601462ec24c2bc4d9d2bd3c8d57b4a0af6c62f63f738e6740 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 25 07:44:49 np0005629333 podman[344204]: 2026-02-25 12:44:49.745093851 +0000 UTC m=+0.044588250 container create 1d85ad2d73fcdca23be95f749e4dea58a5344a53570366b598725299f4793a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 07:44:49 np0005629333 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[344206]: [NOTICE]   (344221) : New worker (344223) forked
Feb 25 07:44:49 np0005629333 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[344206]: [NOTICE]   (344221) : Loading success.
Feb 25 07:44:49 np0005629333 systemd[1]: Started libpod-conmon-1d85ad2d73fcdca23be95f749e4dea58a5344a53570366b598725299f4793a04.scope.
Feb 25 07:44:49 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:44:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9865e4b7f92829a78e12dca4dd0a6a32cf290cdd4f3e437f6621e5b0e2efa3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:44:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9865e4b7f92829a78e12dca4dd0a6a32cf290cdd4f3e437f6621e5b0e2efa3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:44:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9865e4b7f92829a78e12dca4dd0a6a32cf290cdd4f3e437f6621e5b0e2efa3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:44:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9865e4b7f92829a78e12dca4dd0a6a32cf290cdd4f3e437f6621e5b0e2efa3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:44:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9865e4b7f92829a78e12dca4dd0a6a32cf290cdd4f3e437f6621e5b0e2efa3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
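
The 0x7fffffff in the xfs messages above is the 32-bit signed time_t maximum, the classic "year 2038" limit reported for filesystems without the xfs bigtime feature. A quick check of the date it decodes to:

    import datetime

    # 0x7fffffff seconds after the Unix epoch: the limit the kernel reports.
    limit = datetime.datetime.fromtimestamp(0x7FFFFFFF, tz=datetime.timezone.utc)
    print(limit.isoformat())  # 2038-01-19T03:14:07+00:00
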
Feb 25 07:44:49 np0005629333 podman[344204]: 2026-02-25 12:44:49.725841447 +0000 UTC m=+0.025335856 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:44:49 np0005629333 podman[344204]: 2026-02-25 12:44:49.829436991 +0000 UTC m=+0.128931390 container init 1d85ad2d73fcdca23be95f749e4dea58a5344a53570366b598725299f4793a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:44:49 np0005629333 podman[344204]: 2026-02-25 12:44:49.835813691 +0000 UTC m=+0.135308110 container start 1d85ad2d73fcdca23be95f749e4dea58a5344a53570366b598725299f4793a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:44:49 np0005629333 podman[344204]: 2026-02-25 12:44:49.83967214 +0000 UTC m=+0.139166559 container attach 1d85ad2d73fcdca23be95f749e4dea58a5344a53570366b598725299f4793a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_payne, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 07:44:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Feb 25 07:44:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Feb 25 07:44:50 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Feb 25 07:44:50 np0005629333 nova_compute[244014]: 2026-02-25 12:44:50.150 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:50 np0005629333 cool_payne[344234]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:44:50 np0005629333 cool_payne[344234]: --> All data devices are unavailable
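
"All data devices are unavailable" here is the expected outcome of a drive-group evaluation when the candidate LVM devices already carry OSDs, so there is nothing new to deploy. One way to confirm per-device availability, assuming ceph-volume's JSON inventory output (run inside the ceph container in this setup):

    import json, subprocess

    # 'ceph-volume inventory --format json' reports an "available" flag and
    # "rejected_reasons" per device; field names as documented upstream.
    out = subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for dev in json.loads(out):
        print(dev["path"], dev["available"], dev.get("rejected_reasons", []))
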
Feb 25 07:44:50 np0005629333 systemd[1]: libpod-1d85ad2d73fcdca23be95f749e4dea58a5344a53570366b598725299f4793a04.scope: Deactivated successfully.
Feb 25 07:44:50 np0005629333 podman[344204]: 2026-02-25 12:44:50.291258803 +0000 UTC m=+0.590753272 container died 1d85ad2d73fcdca23be95f749e4dea58a5344a53570366b598725299f4793a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_payne, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:44:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:50.314 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:ad:86 2001:db8:0:1:f816:3eff:fec6:ad86 2001:db8::f816:3eff:fec6:ad86'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fec6:ad86/64 2001:db8::f816:3eff:fec6:ad86/64', 'neutron:device_id': 'ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5bfff87-5507-4fe1-aed9-1b014e8e7384, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=0b276f7a-90bb-4427-8f39-0e014732fd20) old=Port_Binding(mac=['fa:16:3e:c6:ad:86 2001:db8::f816:3eff:fec6:ad86'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fec6:ad86/64', 'neutron:device_id': 'ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:44:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:50.318 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 0b276f7a-90bb-4427-8f39-0e014732fd20 in datapath 6df13266-bcfe-4a5b-94c4-81b5f08a6c21 updated#033[00m
Feb 25 07:44:50 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ed9865e4b7f92829a78e12dca4dd0a6a32cf290cdd4f3e437f6621e5b0e2efa3-merged.mount: Deactivated successfully.
Feb 25 07:44:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:50.322 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6df13266-bcfe-4a5b-94c4-81b5f08a6c21, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:44:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:50.324 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ea4cc413-5e85-4ff0-9313-857c1440186f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:50 np0005629333 podman[344204]: 2026-02-25 12:44:50.338660711 +0000 UTC m=+0.638155100 container remove 1d85ad2d73fcdca23be95f749e4dea58a5344a53570366b598725299f4793a04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_payne, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:44:50 np0005629333 systemd[1]: libpod-conmon-1d85ad2d73fcdca23be95f749e4dea58a5344a53570366b598725299f4793a04.scope: Deactivated successfully.
Feb 25 07:44:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 279 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 91 KiB/s rd, 5.1 MiB/s wr, 138 op/s
Feb 25 07:44:50 np0005629333 podman[344329]: 2026-02-25 12:44:50.840789741 +0000 UTC m=+0.050405253 container create 46141dd898c2d520d267f3216756e5057c74309cae30c8232a626755ed4a36d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_tharp, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 07:44:50 np0005629333 systemd[1]: Started libpod-conmon-46141dd898c2d520d267f3216756e5057c74309cae30c8232a626755ed4a36d1.scope.
Feb 25 07:44:50 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:44:50 np0005629333 podman[344329]: 2026-02-25 12:44:50.822497895 +0000 UTC m=+0.032113367 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:44:50 np0005629333 podman[344329]: 2026-02-25 12:44:50.921624702 +0000 UTC m=+0.131240164 container init 46141dd898c2d520d267f3216756e5057c74309cae30c8232a626755ed4a36d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 07:44:50 np0005629333 podman[344329]: 2026-02-25 12:44:50.928336622 +0000 UTC m=+0.137952054 container start 46141dd898c2d520d267f3216756e5057c74309cae30c8232a626755ed4a36d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_tharp, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:44:50 np0005629333 distracted_tharp[344346]: 167 167
Feb 25 07:44:50 np0005629333 systemd[1]: libpod-46141dd898c2d520d267f3216756e5057c74309cae30c8232a626755ed4a36d1.scope: Deactivated successfully.
Feb 25 07:44:50 np0005629333 podman[344329]: 2026-02-25 12:44:50.93254522 +0000 UTC m=+0.142160682 container attach 46141dd898c2d520d267f3216756e5057c74309cae30c8232a626755ed4a36d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_tharp, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:44:50 np0005629333 podman[344329]: 2026-02-25 12:44:50.933288611 +0000 UTC m=+0.142904053 container died 46141dd898c2d520d267f3216756e5057c74309cae30c8232a626755ed4a36d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_tharp, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 07:44:50 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5e3375f5f97be8916b85e73160c6eea15c54d2dbd9a38bb599bc79fdb3432288-merged.mount: Deactivated successfully.
Feb 25 07:44:50 np0005629333 podman[344329]: 2026-02-25 12:44:50.975861603 +0000 UTC m=+0.185477025 container remove 46141dd898c2d520d267f3216756e5057c74309cae30c8232a626755ed4a36d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:44:50 np0005629333 systemd[1]: libpod-conmon-46141dd898c2d520d267f3216756e5057c74309cae30c8232a626755ed4a36d1.scope: Deactivated successfully.
Feb 25 07:44:51 np0005629333 podman[344370]: 2026-02-25 12:44:51.120319919 +0000 UTC m=+0.039855975 container create 2e82ee3d4170e1bbab0200c3dc61fd7dcbfec0463631ce6cd6a1635cf97ed2ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mclean, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 07:44:51 np0005629333 systemd[1]: Started libpod-conmon-2e82ee3d4170e1bbab0200c3dc61fd7dcbfec0463631ce6cd6a1635cf97ed2ab.scope.
Feb 25 07:44:51 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:44:51 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48aa87aa81a54d68fbb9a8b6c968f62be8491d0ba0a22612ada7a2d8c0a1f7ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:44:51 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48aa87aa81a54d68fbb9a8b6c968f62be8491d0ba0a22612ada7a2d8c0a1f7ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:44:51 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48aa87aa81a54d68fbb9a8b6c968f62be8491d0ba0a22612ada7a2d8c0a1f7ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:44:51 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48aa87aa81a54d68fbb9a8b6c968f62be8491d0ba0a22612ada7a2d8c0a1f7ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:44:51 np0005629333 podman[344370]: 2026-02-25 12:44:51.18376597 +0000 UTC m=+0.103302046 container init 2e82ee3d4170e1bbab0200c3dc61fd7dcbfec0463631ce6cd6a1635cf97ed2ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mclean, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 07:44:51 np0005629333 podman[344370]: 2026-02-25 12:44:51.191477917 +0000 UTC m=+0.111014003 container start 2e82ee3d4170e1bbab0200c3dc61fd7dcbfec0463631ce6cd6a1635cf97ed2ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mclean, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 07:44:51 np0005629333 podman[344370]: 2026-02-25 12:44:51.195081829 +0000 UTC m=+0.114617885 container attach 2e82ee3d4170e1bbab0200c3dc61fd7dcbfec0463631ce6cd6a1635cf97ed2ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mclean, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 07:44:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Feb 25 07:44:51 np0005629333 podman[344370]: 2026-02-25 12:44:51.102429214 +0000 UTC m=+0.021965280 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:44:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Feb 25 07:44:51 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]: {
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:    "0": [
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:        {
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "devices": [
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "/dev/loop3"
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            ],
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "lv_name": "ceph_lv0",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "lv_size": "21470642176",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "name": "ceph_lv0",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "tags": {
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.cluster_name": "ceph",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.crush_device_class": "",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.encrypted": "0",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.objectstore": "bluestore",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.osd_id": "0",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.type": "block",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.vdo": "0",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.with_tpm": "0"
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            },
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "type": "block",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "vg_name": "ceph_vg0"
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:        }
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:    ],
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:    "1": [
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:        {
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "devices": [
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "/dev/loop4"
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            ],
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "lv_name": "ceph_lv1",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "lv_size": "21470642176",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "name": "ceph_lv1",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "tags": {
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.cluster_name": "ceph",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.crush_device_class": "",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.encrypted": "0",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.objectstore": "bluestore",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.osd_id": "1",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.type": "block",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.vdo": "0",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.with_tpm": "0"
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            },
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "type": "block",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "vg_name": "ceph_vg1"
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:        }
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:    ],
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:    "2": [
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:        {
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "devices": [
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "/dev/loop5"
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            ],
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "lv_name": "ceph_lv2",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "lv_size": "21470642176",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "name": "ceph_lv2",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "tags": {
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.cluster_name": "ceph",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.crush_device_class": "",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.encrypted": "0",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.objectstore": "bluestore",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.osd_id": "2",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.type": "block",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.vdo": "0",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:                "ceph.with_tpm": "0"
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            },
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "type": "block",
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:            "vg_name": "ceph_vg2"
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:        }
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]:    ]
Feb 25 07:44:51 np0005629333 fervent_mclean[344386]: }
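
The JSON emitted above is `ceph-volume lvm list --format json` output, keyed by OSD id. A minimal sketch of reducing it to an OSD-to-device mapping; the function name and return shape are illustrative:

    import json

    def osd_devices(lvm_list_json: str) -> dict[int, dict]:
        """Map each OSD id to its backing LV path and physical devices."""
        data = json.loads(lvm_list_json)
        return {
            int(osd_id): {
                "lv_path": entry["lv_path"],
                "devices": entry["devices"],
                "osd_fsid": entry["tags"]["ceph.osd_fsid"],
            }
            for osd_id, entries in data.items()
            for entry in entries
            if entry["type"] == "block"
        }

    # Applied to the output above this yields e.g.
    # {0: {'lv_path': '/dev/ceph_vg0/ceph_lv0', 'devices': ['/dev/loop3'], ...},
    #  1: {...}, 2: {...}}
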
Feb 25 07:44:51 np0005629333 systemd[1]: libpod-2e82ee3d4170e1bbab0200c3dc61fd7dcbfec0463631ce6cd6a1635cf97ed2ab.scope: Deactivated successfully.
Feb 25 07:44:51 np0005629333 podman[344370]: 2026-02-25 12:44:51.480648248 +0000 UTC m=+0.400184314 container died 2e82ee3d4170e1bbab0200c3dc61fd7dcbfec0463631ce6cd6a1635cf97ed2ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:44:51 np0005629333 systemd[1]: var-lib-containers-storage-overlay-48aa87aa81a54d68fbb9a8b6c968f62be8491d0ba0a22612ada7a2d8c0a1f7ec-merged.mount: Deactivated successfully.
Feb 25 07:44:51 np0005629333 podman[344370]: 2026-02-25 12:44:51.51757358 +0000 UTC m=+0.437109646 container remove 2e82ee3d4170e1bbab0200c3dc61fd7dcbfec0463631ce6cd6a1635cf97ed2ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_mclean, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:44:51 np0005629333 systemd[1]: libpod-conmon-2e82ee3d4170e1bbab0200c3dc61fd7dcbfec0463631ce6cd6a1635cf97ed2ab.scope: Deactivated successfully.
Feb 25 07:44:51 np0005629333 kernel: tap97fb9f99-cb (unregistering): left promiscuous mode
Feb 25 07:44:51 np0005629333 NetworkManager[49836]: <info>  [1772023491.5736] device (tap97fb9f99-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:44:51 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:51Z|01121|binding|INFO|Releasing lport 97fb9f99-cb59-4581-8866-375ea3e167d7 from this chassis (sb_readonly=0)
Feb 25 07:44:51 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:51Z|01122|binding|INFO|Setting lport 97fb9f99-cb59-4581-8866-375ea3e167d7 down in Southbound
Feb 25 07:44:51 np0005629333 nova_compute[244014]: 2026-02-25 12:44:51.579 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:51 np0005629333 ovn_controller[147040]: 2026-02-25T12:44:51Z|01123|binding|INFO|Removing iface tap97fb9f99-cb ovn-installed in OVS
Feb 25 07:44:51 np0005629333 nova_compute[244014]: 2026-02-25 12:44:51.582 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:51 np0005629333 nova_compute[244014]: 2026-02-25 12:44:51.586 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:51.587 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:b7:f7 10.100.0.11'], port_security=['fa:16:3e:61:b7:f7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '874359d8-3251-4416-82dc-f6776853e384', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6c0adb05683141e7a0b866f450e410e0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c8e48a3c-4f73-4222-8cd8-b956480085c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.235'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88c1b411-2973-4db5-967d-0a2cebc1066f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=97fb9f99-cb59-4581-8866-375ea3e167d7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:44:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:51.588 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 97fb9f99-cb59-4581-8866-375ea3e167d7 in datapath e4d059ba-eacc-463b-a8a4-393a5a36dba3 unbound from our chassis#033[00m
Feb 25 07:44:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:51.590 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e4d059ba-eacc-463b-a8a4-393a5a36dba3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:44:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:51.590 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2d059191-c1e1-4a35-af71-21690a165bf0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:51.591 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3 namespace which is not needed anymore#033[00m
Feb 25 07:44:51 np0005629333 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000070.scope: Deactivated successfully.
Feb 25 07:44:51 np0005629333 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000070.scope: Consumed 12.809s CPU time.
Feb 25 07:44:51 np0005629333 systemd-machined[210048]: Machine qemu-141-instance-00000070 terminated.
Feb 25 07:44:51 np0005629333 neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3[343513]: [NOTICE]   (343517) : haproxy version is 2.8.14-c23fe91
Feb 25 07:44:51 np0005629333 neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3[343513]: [NOTICE]   (343517) : path to executable is /usr/sbin/haproxy
Feb 25 07:44:51 np0005629333 neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3[343513]: [WARNING]  (343517) : Exiting Master process...
Feb 25 07:44:51 np0005629333 neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3[343513]: [ALERT]    (343517) : Current worker (343519) exited with code 143 (Terminated)
Feb 25 07:44:51 np0005629333 neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3[343513]: [WARNING]  (343517) : All workers exited. Exiting... (0)
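
The haproxy worker's exit code 143 above follows the usual 128 + signal-number convention for a process terminated by a signal, here SIGTERM (15), consistent with the container being stopped deliberately:

    import signal

    code = 143
    sig = signal.Signals(code - 128)   # 143 - 128 = 15
    print(sig.name)                    # SIGTERM
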
Feb 25 07:44:51 np0005629333 systemd[1]: libpod-d905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31.scope: Deactivated successfully.
Feb 25 07:44:51 np0005629333 conmon[343513]: conmon d905eeb48e3eb29d0a00 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31.scope/container/memory.events
Feb 25 07:44:51 np0005629333 podman[344474]: 2026-02-25 12:44:51.702989411 +0000 UTC m=+0.038120307 container died d905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 25 07:44:51 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31-userdata-shm.mount: Deactivated successfully.
Feb 25 07:44:51 np0005629333 systemd[1]: var-lib-containers-storage-overlay-b5928287d34ac4333e0ae9c06ec6f097d3bd4e406cca623c2f40db94d0a23763-merged.mount: Deactivated successfully.
Feb 25 07:44:51 np0005629333 podman[344474]: 2026-02-25 12:44:51.738083512 +0000 UTC m=+0.073214408 container cleanup d905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:44:51 np0005629333 systemd[1]: libpod-conmon-d905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31.scope: Deactivated successfully.
Feb 25 07:44:51 np0005629333 podman[344507]: 2026-02-25 12:44:51.793063193 +0000 UTC m=+0.038944630 container remove d905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 07:44:51 np0005629333 nova_compute[244014]: 2026-02-25 12:44:51.800 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:51.802 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[55669442-9860-4dec-a12a-d564ca56e9e7]: (4, ('Wed Feb 25 12:44:51 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3 (d905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31)\nd905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31\nWed Feb 25 12:44:51 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3 (d905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31)\nd905eeb48e3eb29d0a002c06c66e690ff4ec9e474a3519a030dac86e4992bf31\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:51.805 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[29b153dd-f333-470e-9f33-c38710fbd6f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:51.806 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4d059ba-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:44:51 np0005629333 nova_compute[244014]: 2026-02-25 12:44:51.808 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:51 np0005629333 kernel: tape4d059ba-e0: left promiscuous mode
Feb 25 07:44:51 np0005629333 nova_compute[244014]: 2026-02-25 12:44:51.820 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:51 np0005629333 nova_compute[244014]: 2026-02-25 12:44:51.822 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:51.824 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[78547dc0-2f97-454c-91dc-0b448f1495c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:51.845 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6c6fc288-66b9-459c-8ed6-c4b53d2018b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:51.846 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f42a6bc6-3f96-47fe-9a83-f429b825712b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:51.864 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b6a85da5-f5bf-4195-9ccf-43f43cc64ec1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543484, 'reachable_time': 44463, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344538, 'error': None, 'target': 'ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:51.867 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:44:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:51.868 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[e25bc163-1a4c-43f7-8c39-34e27badaa81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:44:51 np0005629333 systemd[1]: run-netns-ovnmeta\x2de4d059ba\x2deacc\x2d463b\x2da8a4\x2d393a5a36dba3.mount: Deactivated successfully.
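The remove_netns call above is neutron's privsep-wrapped namespace deletion; once the bind mount under /run/netns is unlinked, systemd reports the matching run-netns mount unit as deactivated. A rough equivalent using pyroute2 (the library neutron's ip_lib builds on), shown here outside privsep for illustration only:

    from pyroute2 import netns

    ns = 'ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3'
    if ns in netns.listnetns():   # tolerate an already-deleted namespace
        netns.remove(ns)          # unlinks /run/netns/<ns>; systemd then
                                  # deactivates the run-netns mount unit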
Feb 25 07:44:51 np0005629333 podman[344547]: 2026-02-25 12:44:51.941017408 +0000 UTC m=+0.046365259 container create 165aa4e0d63a3112f2187965e62f49c6c7c31439e664f10a6d042868600682ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:44:51 np0005629333 systemd[1]: Started libpod-conmon-165aa4e0d63a3112f2187965e62f49c6c7c31439e664f10a6d042868600682ed.scope.
Feb 25 07:44:52 np0005629333 podman[344547]: 2026-02-25 12:44:51.915147778 +0000 UTC m=+0.020495669 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:44:52 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:44:52 np0005629333 podman[344547]: 2026-02-25 12:44:52.026862041 +0000 UTC m=+0.132209952 container init 165aa4e0d63a3112f2187965e62f49c6c7c31439e664f10a6d042868600682ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_engelbart, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 07:44:52 np0005629333 podman[344547]: 2026-02-25 12:44:52.035305749 +0000 UTC m=+0.140653550 container start 165aa4e0d63a3112f2187965e62f49c6c7c31439e664f10a6d042868600682ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_engelbart, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 07:44:52 np0005629333 podman[344547]: 2026-02-25 12:44:52.039102616 +0000 UTC m=+0.144450517 container attach 165aa4e0d63a3112f2187965e62f49c6c7c31439e664f10a6d042868600682ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:44:52 np0005629333 blissful_engelbart[344563]: 167 167
Feb 25 07:44:52 np0005629333 systemd[1]: libpod-165aa4e0d63a3112f2187965e62f49c6c7c31439e664f10a6d042868600682ed.scope: Deactivated successfully.
Feb 25 07:44:52 np0005629333 podman[344547]: 2026-02-25 12:44:52.043009356 +0000 UTC m=+0.148357207 container died 165aa4e0d63a3112f2187965e62f49c6c7c31439e664f10a6d042868600682ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 07:44:52 np0005629333 podman[344547]: 2026-02-25 12:44:52.088252243 +0000 UTC m=+0.193600094 container remove 165aa4e0d63a3112f2187965e62f49c6c7c31439e664f10a6d042868600682ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_engelbart, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:44:52 np0005629333 systemd[1]: libpod-conmon-165aa4e0d63a3112f2187965e62f49c6c7c31439e664f10a6d042868600682ed.scope: Deactivated successfully.
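The create/start/attach/died/remove burst above (and the identical one for heuristic_shirley just below) is cephadm running a short-lived probe container and discarding it. With --rm, a single podman invocation produces the same lifecycle; the image digest is the one from this log, while the ceph-volume subcommand is only an assumed example of such a probe:

    import subprocess

    image = ('quay.io/ceph/ceph@sha256:'
             '1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86')
    # --rm collapses create/start/attach/died/remove into one call;
    # 'ceph-volume inventory' is an assumed probe, not read from this log
    result = subprocess.run(
        ['podman', 'run', '--rm', image, 'ceph-volume', 'inventory',
         '--format', 'json'],
        capture_output=True, text=True, check=True)
    print(result.stdout)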
Feb 25 07:44:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Feb 25 07:44:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Feb 25 07:44:52 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Feb 25 07:44:52 np0005629333 podman[344587]: 2026-02-25 12:44:52.245392568 +0000 UTC m=+0.044746144 container create eaa038dcc1af3b3e4123c6fc2a2e5ac7047b7471f037ae31adb085d8157398bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_shirley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 07:44:52 np0005629333 systemd[1]: Started libpod-conmon-eaa038dcc1af3b3e4123c6fc2a2e5ac7047b7471f037ae31adb085d8157398bd.scope.
Feb 25 07:44:52 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:44:52 np0005629333 podman[344587]: 2026-02-25 12:44:52.226117114 +0000 UTC m=+0.025470740 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:44:52 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/945eaf6277a214eee789c113cacfd1d69335b1debd2d56dfaec43d1c299d29c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:44:52 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/945eaf6277a214eee789c113cacfd1d69335b1debd2d56dfaec43d1c299d29c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:44:52 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/945eaf6277a214eee789c113cacfd1d69335b1debd2d56dfaec43d1c299d29c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:44:52 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/945eaf6277a214eee789c113cacfd1d69335b1debd2d56dfaec43d1c299d29c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:44:52 np0005629333 podman[344587]: 2026-02-25 12:44:52.352417488 +0000 UTC m=+0.151771094 container init eaa038dcc1af3b3e4123c6fc2a2e5ac7047b7471f037ae31adb085d8157398bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 07:44:52 np0005629333 nova_compute[244014]: 2026-02-25 12:44:52.358 244018 INFO nova.virt.libvirt.driver [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Instance shutdown successfully after 3 seconds.#033[00m
Feb 25 07:44:52 np0005629333 podman[344587]: 2026-02-25 12:44:52.363883802 +0000 UTC m=+0.163237368 container start eaa038dcc1af3b3e4123c6fc2a2e5ac7047b7471f037ae31adb085d8157398bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_shirley, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 07:44:52 np0005629333 podman[344587]: 2026-02-25 12:44:52.368273495 +0000 UTC m=+0.167627121 container attach eaa038dcc1af3b3e4123c6fc2a2e5ac7047b7471f037ae31adb085d8157398bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_shirley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:44:52 np0005629333 nova_compute[244014]: 2026-02-25 12:44:52.368 244018 DEBUG nova.compute.manager [req-5f0f07bd-d98b-4924-9c79-ad3f72026d47 req-1179a6a6-8142-4305-8410-9f0c55efd9b5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received event network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:44:52 np0005629333 nova_compute[244014]: 2026-02-25 12:44:52.369 244018 DEBUG oslo_concurrency.lockutils [req-5f0f07bd-d98b-4924-9c79-ad3f72026d47 req-1179a6a6-8142-4305-8410-9f0c55efd9b5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:52 np0005629333 nova_compute[244014]: 2026-02-25 12:44:52.370 244018 DEBUG oslo_concurrency.lockutils [req-5f0f07bd-d98b-4924-9c79-ad3f72026d47 req-1179a6a6-8142-4305-8410-9f0c55efd9b5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 279 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 91 KiB/s wr, 156 op/s
Feb 25 07:44:52 np0005629333 nova_compute[244014]: 2026-02-25 12:44:52.370 244018 DEBUG oslo_concurrency.lockutils [req-5f0f07bd-d98b-4924-9c79-ad3f72026d47 req-1179a6a6-8142-4305-8410-9f0c55efd9b5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:52 np0005629333 nova_compute[244014]: 2026-02-25 12:44:52.371 244018 DEBUG nova.compute.manager [req-5f0f07bd-d98b-4924-9c79-ad3f72026d47 req-1179a6a6-8142-4305-8410-9f0c55efd9b5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Processing event network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:44:52 np0005629333 nova_compute[244014]: 2026-02-25 12:44:52.372 244018 DEBUG nova.compute.manager [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:44:52 np0005629333 nova_compute[244014]: 2026-02-25 12:44:52.376 244018 INFO nova.virt.libvirt.driver [-] [instance: 874359d8-3251-4416-82dc-f6776853e384] Instance destroyed successfully.#033[00m
Feb 25 07:44:52 np0005629333 nova_compute[244014]: 2026-02-25 12:44:52.377 244018 DEBUG nova.objects.instance [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lazy-loading 'numa_topology' on Instance uuid 874359d8-3251-4416-82dc-f6776853e384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:44:52 np0005629333 nova_compute[244014]: 2026-02-25 12:44:52.380 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023492.3781693, aee87402-4b34-4083-888b-bb653e2beaa9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:44:52 np0005629333 nova_compute[244014]: 2026-02-25 12:44:52.380 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:44:52 np0005629333 nova_compute[244014]: 2026-02-25 12:44:52.383 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:44:52 np0005629333 nova_compute[244014]: 2026-02-25 12:44:52.389 244018 INFO nova.virt.libvirt.driver [-] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Instance spawned successfully.#033[00m
Feb 25 07:44:52 np0005629333 nova_compute[244014]: 2026-02-25 12:44:52.390 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:44:52 np0005629333 nova_compute[244014]: 2026-02-25 12:44:52.414 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:44:52 np0005629333 nova_compute[244014]: 2026-02-25 12:44:52.423 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:44:52 np0005629333 nova_compute[244014]: 2026-02-25 12:44:52.441 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:44:52 np0005629333 nova_compute[244014]: 2026-02-25 12:44:52.441 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:44:52 np0005629333 nova_compute[244014]: 2026-02-25 12:44:52.442 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:44:52 np0005629333 nova_compute[244014]: 2026-02-25 12:44:52.443 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:44:52 np0005629333 nova_compute[244014]: 2026-02-25 12:44:52.444 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:44:52 np0005629333 nova_compute[244014]: 2026-02-25 12:44:52.445 244018 DEBUG nova.virt.libvirt.driver [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:44:52 np0005629333 nova_compute[244014]: 2026-02-25 12:44:52.457 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:44:52 np0005629333 nova_compute[244014]: 2026-02-25 12:44:52.533 244018 INFO nova.compute.manager [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Took 10.14 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:44:52 np0005629333 nova_compute[244014]: 2026-02-25 12:44:52.539 244018 DEBUG nova.compute.manager [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:44:52 np0005629333 nova_compute[244014]: 2026-02-25 12:44:52.617 244018 INFO nova.compute.manager [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Took 11.24 seconds to build instance.#033[00m
Feb 25 07:44:52 np0005629333 nova_compute[244014]: 2026-02-25 12:44:52.636 244018 DEBUG oslo_concurrency.lockutils [None req-a93e71f3-3176-4a26-bc88-a104d7ee799d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.332s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:52 np0005629333 nova_compute[244014]: 2026-02-25 12:44:52.783 244018 INFO nova.virt.libvirt.driver [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Beginning cold snapshot process#033[00m
Feb 25 07:44:52 np0005629333 nova_compute[244014]: 2026-02-25 12:44:52.967 244018 DEBUG nova.virt.libvirt.imagebackend [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Feb 25 07:44:53 np0005629333 lvm[344713]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:44:53 np0005629333 lvm[344713]: VG ceph_vg0 finished
Feb 25 07:44:53 np0005629333 lvm[344716]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:44:53 np0005629333 lvm[344716]: VG ceph_vg1 finished
Feb 25 07:44:53 np0005629333 lvm[344718]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:44:53 np0005629333 lvm[344718]: VG ceph_vg2 finished
Feb 25 07:44:53 np0005629333 nova_compute[244014]: 2026-02-25 12:44:53.050 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:53 np0005629333 lvm[344719]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:44:53 np0005629333 lvm[344719]: VG ceph_vg0 finished
Feb 25 07:44:53 np0005629333 heuristic_shirley[344604]: {}
Feb 25 07:44:53 np0005629333 systemd[1]: libpod-eaa038dcc1af3b3e4123c6fc2a2e5ac7047b7471f037ae31adb085d8157398bd.scope: Deactivated successfully.
Feb 25 07:44:53 np0005629333 systemd[1]: libpod-eaa038dcc1af3b3e4123c6fc2a2e5ac7047b7471f037ae31adb085d8157398bd.scope: Consumed 1.188s CPU time.
Feb 25 07:44:53 np0005629333 podman[344722]: 2026-02-25 12:44:53.18027128 +0000 UTC m=+0.026954962 container died eaa038dcc1af3b3e4123c6fc2a2e5ac7047b7471f037ae31adb085d8157398bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_shirley, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:44:53 np0005629333 systemd[1]: var-lib-containers-storage-overlay-945eaf6277a214eee789c113cacfd1d69335b1debd2d56dfaec43d1c299d29c0-merged.mount: Deactivated successfully.
Feb 25 07:44:53 np0005629333 podman[344722]: 2026-02-25 12:44:53.230714433 +0000 UTC m=+0.077398095 container remove eaa038dcc1af3b3e4123c6fc2a2e5ac7047b7471f037ae31adb085d8157398bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_shirley, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 07:44:53 np0005629333 systemd[1]: libpod-conmon-eaa038dcc1af3b3e4123c6fc2a2e5ac7047b7471f037ae31adb085d8157398bd.scope: Deactivated successfully.
Feb 25 07:44:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:44:53 np0005629333 nova_compute[244014]: 2026-02-25 12:44:53.277 244018 DEBUG nova.storage.rbd_utils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] creating snapshot(1638c28f135b4127b592203a14fda253) on rbd image(874359d8-3251-4416-82dc-f6776853e384_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 25 07:44:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:44:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:44:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:44:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:44:54 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:44:54 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:44:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Feb 25 07:44:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Feb 25 07:44:54 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Feb 25 07:44:54 np0005629333 nova_compute[244014]: 2026-02-25 12:44:54.328 244018 DEBUG nova.storage.rbd_utils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] cloning vms/874359d8-3251-4416-82dc-f6776853e384_disk@1638c28f135b4127b592203a14fda253 to images/ce5f0984-576d-4f20-8e86-b4b26cb5bb6e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 25 07:44:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 279 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 134 KiB/s rd, 127 KiB/s wr, 187 op/s
Feb 25 07:44:54 np0005629333 nova_compute[244014]: 2026-02-25 12:44:54.446 244018 DEBUG nova.storage.rbd_utils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] flattening images/ce5f0984-576d-4f20-8e86-b4b26cb5bb6e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb 25 07:44:54 np0005629333 podman[344833]: 2026-02-25 12:44:54.772002698 +0000 UTC m=+0.106816295 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Feb 25 07:44:54 np0005629333 nova_compute[244014]: 2026-02-25 12:44:54.991 244018 DEBUG nova.compute.manager [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received event network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:44:54 np0005629333 nova_compute[244014]: 2026-02-25 12:44:54.992 244018 DEBUG oslo_concurrency.lockutils [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:54 np0005629333 nova_compute[244014]: 2026-02-25 12:44:54.993 244018 DEBUG oslo_concurrency.lockutils [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:54 np0005629333 nova_compute[244014]: 2026-02-25 12:44:54.994 244018 DEBUG oslo_concurrency.lockutils [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:54 np0005629333 nova_compute[244014]: 2026-02-25 12:44:54.994 244018 DEBUG nova.compute.manager [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] No waiting events found dispatching network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:44:54 np0005629333 nova_compute[244014]: 2026-02-25 12:44:54.995 244018 WARNING nova.compute.manager [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received unexpected event network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b for instance with vm_state active and task_state None.#033[00m
Feb 25 07:44:54 np0005629333 nova_compute[244014]: 2026-02-25 12:44:54.995 244018 DEBUG nova.compute.manager [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-vif-unplugged-97fb9f99-cb59-4581-8866-375ea3e167d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:44:54 np0005629333 nova_compute[244014]: 2026-02-25 12:44:54.996 244018 DEBUG oslo_concurrency.lockutils [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:54 np0005629333 nova_compute[244014]: 2026-02-25 12:44:54.996 244018 DEBUG oslo_concurrency.lockutils [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:54 np0005629333 nova_compute[244014]: 2026-02-25 12:44:54.997 244018 DEBUG oslo_concurrency.lockutils [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:54 np0005629333 nova_compute[244014]: 2026-02-25 12:44:54.997 244018 DEBUG nova.compute.manager [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] No waiting events found dispatching network-vif-unplugged-97fb9f99-cb59-4581-8866-375ea3e167d7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:44:54 np0005629333 nova_compute[244014]: 2026-02-25 12:44:54.998 244018 WARNING nova.compute.manager [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received unexpected event network-vif-unplugged-97fb9f99-cb59-4581-8866-375ea3e167d7 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Feb 25 07:44:54 np0005629333 nova_compute[244014]: 2026-02-25 12:44:54.998 244018 DEBUG nova.compute.manager [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:44:54 np0005629333 nova_compute[244014]: 2026-02-25 12:44:54.999 244018 DEBUG oslo_concurrency.lockutils [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:55 np0005629333 nova_compute[244014]: 2026-02-25 12:44:54.999 244018 DEBUG oslo_concurrency.lockutils [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:55 np0005629333 nova_compute[244014]: 2026-02-25 12:44:55.000 244018 DEBUG oslo_concurrency.lockutils [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:55 np0005629333 nova_compute[244014]: 2026-02-25 12:44:55.000 244018 DEBUG nova.compute.manager [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] No waiting events found dispatching network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:44:55 np0005629333 nova_compute[244014]: 2026-02-25 12:44:55.001 244018 WARNING nova.compute.manager [req-df5f1fdf-d7a1-4a19-a779-3e56af5ac146 req-781b12d7-4991-44db-b7ff-8722fd0759d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received unexpected event network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
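The lock/pop/warn sequence above is nova's external-event rendezvous: a thread that expects a VIF plug registers a waiter keyed by (instance, event) before acting, the neutron-triggered handler pops and signals it under the "<uuid>-events" lock, and an event that finds no registered waiter is logged as "unexpected" (as for the shelving instance 874359d8). A simplified sketch of that pattern, not nova's actual classes:

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = {}   # (instance_uuid, event_name) -> Event

        def prepare(self, instance, name):
            with self._lock:
                ev = self._waiters[(instance, name)] = threading.Event()
            return ev

        def pop(self, instance, name):
            with self._lock:     # the "Acquiring lock ...-events" lines
                return self._waiters.pop((instance, name), None)

    events = InstanceEvents()
    waiter = events.prepare('aee87402', 'network-vif-plugged-8d032336')
    # ...neutron reports the plug; the external-event handler then runs:
    ev = events.pop('aee87402', 'network-vif-plugged-8d032336')
    if ev:
        ev.set()                               # the waiting thread unblocks
    else:
        print('Received unexpected event')     # nobody was waiting
    waiter.wait(timeout=300)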
Feb 25 07:44:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:55.023 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:55.023 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:44:55.024 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:55 np0005629333 nova_compute[244014]: 2026-02-25 12:44:55.073 244018 DEBUG nova.storage.rbd_utils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] removing snapshot(1638c28f135b4127b592203a14fda253) on rbd image(874359d8-3251-4416-82dc-f6776853e384_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 25 07:44:55 np0005629333 nova_compute[244014]: 2026-02-25 12:44:55.152 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Feb 25 07:44:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Feb 25 07:44:55 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Feb 25 07:44:55 np0005629333 nova_compute[244014]: 2026-02-25 12:44:55.590 244018 DEBUG nova.storage.rbd_utils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] creating snapshot(snap) on rbd image(ce5f0984-576d-4f20-8e86-b4b26cb5bb6e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
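The rbd_utils sequence above (create_snap at 12:44:53, clone and flatten at 12:44:54, remove_snap and the final 'snap' snapshot at 12:44:55) is the whole cold-snapshot data path: snapshot the source disk, clone it into the images pool, flatten the clone so it no longer depends on its parent, drop the working snapshot, then snapshot the clone for upload. A hedged reconstruction with the librbd Python bindings, reusing the pool and image names from this log (protect/unprotect added because v1 clones require a protected parent snapshot):

    import rados
    import rbd

    SRC = '874359d8-3251-4416-82dc-f6776853e384_disk'
    SNAP = '1638c28f135b4127b592203a14fda253'
    DST = 'ce5f0984-576d-4f20-8e86-b4b26cb5bb6e'

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as vms, \
                cluster.open_ioctx('images') as images:
            with rbd.Image(vms, SRC) as src:
                src.create_snap(SNAP)
                src.protect_snap(SNAP)          # required for v1 clones
                rbd.RBD().clone(vms, SRC, SNAP, images, DST)
                with rbd.Image(images, DST) as dst:
                    dst.flatten()               # detach clone from parent
                    dst.create_snap('snap')     # the 'snap' seen at 12:44:55
                src.unprotect_snap(SNAP)        # safe once the clone is flat
                src.remove_snap(SNAP)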
Feb 25 07:44:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 305 active+clean; 279 MiB data, 909 MiB used, 59 GiB / 60 GiB avail; 112 KiB/s rd, 106 KiB/s wr, 156 op/s
Feb 25 07:44:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Feb 25 07:44:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Feb 25 07:44:56 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Feb 25 07:44:56 np0005629333 podman[344890]: 2026-02-25 12:44:56.745599792 +0000 UTC m=+0.098335696 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
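The health_status=healthy entries for ovn_metadata_agent and ovn_controller come from podman's periodic healthcheck timer executing the configured test command ('/openstack/healthcheck', mounted read-only into each container per the config_data above). The same check can be invoked on demand; a small sketch:

    import subprocess

    # exit status 0 means the container's configured test command passed
    r = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_controller'])
    print('healthy' if r.returncode == 0 else 'unhealthy')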
Feb 25 07:44:57 np0005629333 nova_compute[244014]: 2026-02-25 12:44:57.273 244018 DEBUG nova.compute.manager [req-a7312af8-d973-4885-8bac-899bf45fd87c req-ce6d7732-e3ea-496a-b815-855e390fe229 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received event network-changed-8d032336-9efd-4e76-9498-4dafee40640b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:44:57 np0005629333 nova_compute[244014]: 2026-02-25 12:44:57.274 244018 DEBUG nova.compute.manager [req-a7312af8-d973-4885-8bac-899bf45fd87c req-ce6d7732-e3ea-496a-b815-855e390fe229 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Refreshing instance network info cache due to event network-changed-8d032336-9efd-4e76-9498-4dafee40640b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:44:57 np0005629333 nova_compute[244014]: 2026-02-25 12:44:57.274 244018 DEBUG oslo_concurrency.lockutils [req-a7312af8-d973-4885-8bac-899bf45fd87c req-ce6d7732-e3ea-496a-b815-855e390fe229 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:44:57 np0005629333 nova_compute[244014]: 2026-02-25 12:44:57.275 244018 DEBUG oslo_concurrency.lockutils [req-a7312af8-d973-4885-8bac-899bf45fd87c req-ce6d7732-e3ea-496a-b815-855e390fe229 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:44:57 np0005629333 nova_compute[244014]: 2026-02-25 12:44:57.275 244018 DEBUG nova.network.neutron [req-a7312af8-d973-4885-8bac-899bf45fd87c req-ce6d7732-e3ea-496a-b815-855e390fe229 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Refreshing network info cache for port 8d032336-9efd-4e76-9498-4dafee40640b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:44:57 np0005629333 nova_compute[244014]: 2026-02-25 12:44:57.416 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "dd8c9142-2607-4722-90eb-65233f258639" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:57 np0005629333 nova_compute[244014]: 2026-02-25 12:44:57.417 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:57 np0005629333 nova_compute[244014]: 2026-02-25 12:44:57.443 244018 DEBUG nova.compute.manager [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:44:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Feb 25 07:44:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Feb 25 07:44:57 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Feb 25 07:44:57 np0005629333 nova_compute[244014]: 2026-02-25 12:44:57.531 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:57 np0005629333 nova_compute[244014]: 2026-02-25 12:44:57.532 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:57 np0005629333 nova_compute[244014]: 2026-02-25 12:44:57.543 244018 DEBUG nova.virt.hardware [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:44:57 np0005629333 nova_compute[244014]: 2026-02-25 12:44:57.544 244018 INFO nova.compute.claims [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:44:57 np0005629333 nova_compute[244014]: 2026-02-25 12:44:57.703 244018 DEBUG oslo_concurrency.processutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:44:57 np0005629333 nova_compute[244014]: 2026-02-25 12:44:57.935 244018 INFO nova.virt.libvirt.driver [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Snapshot image upload complete#033[00m
Feb 25 07:44:57 np0005629333 nova_compute[244014]: 2026-02-25 12:44:57.938 244018 DEBUG nova.compute.manager [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.018 244018 INFO nova.compute.manager [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Shelve offloading#033[00m
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.028 244018 INFO nova.virt.libvirt.driver [-] [instance: 874359d8-3251-4416-82dc-f6776853e384] Instance destroyed successfully.#033[00m
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.028 244018 DEBUG nova.compute.manager [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.032 244018 DEBUG oslo_concurrency.lockutils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.033 244018 DEBUG oslo_concurrency.lockutils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquired lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.033 244018 DEBUG nova.network.neutron [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.054 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:44:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:44:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1182150988' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.263 244018 DEBUG oslo_concurrency.processutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
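
Nova's RBD backend sizes its storage by shelling out to ceph df, as the Running/returned pair above shows (a 0.559 s round trip). A hedged sketch of reproducing that probe directly; the "stats" field names are assumed from the usual ceph df JSON layout, not confirmed by this log:

    import json
    import subprocess

    # Same command the log records, run directly; parse the cluster totals.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]  # assumed top-level key
    print(stats.get("total_bytes"), stats.get("total_avail_bytes"))
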
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.273 244018 DEBUG nova.compute.provider_tree [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.289 244018 DEBUG nova.scheduler.client.report [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
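
The inventory dict above is what Placement uses to bound scheduling; the effective capacity per resource class is (total - reserved) * allocation_ratio. A quick check against the logged numbers:

    # Placement's effective capacity rule: (total - reserved) * ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2

So this host can overcommit to 32 schedulable vCPUs, keeps memory at 1:1 after a 512 MB reserve, and holds back roughly 10% of disk.
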
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.318 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.319 244018 DEBUG nova.compute.manager [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:44:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:44:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 358 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 17 MiB/s rd, 11 MiB/s wr, 479 op/s
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.374 244018 DEBUG nova.compute.manager [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.375 244018 DEBUG nova.network.neutron [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.402 244018 INFO nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.420 244018 DEBUG nova.compute.manager [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:44:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Feb 25 07:44:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Feb 25 07:44:58 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.549 244018 DEBUG nova.compute.manager [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.551 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.552 244018 INFO nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Creating image(s)#033[00m
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.588 244018 DEBUG nova.storage.rbd_utils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dd8c9142-2607-4722-90eb-65233f258639_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.614 244018 DEBUG nova.storage.rbd_utils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dd8c9142-2607-4722-90eb-65233f258639_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.639 244018 DEBUG nova.storage.rbd_utils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dd8c9142-2607-4722-90eb-65233f258639_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.643 244018 DEBUG oslo_concurrency.processutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.733 244018 DEBUG oslo_concurrency.processutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
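
Before reusing the cached base image, the driver probes it with qemu-img info under oslo's prlimit wrapper (1 GiB address space, 30 s CPU, per the flags above) so a malformed image cannot wedge the service. A stripped-down version of the same probe, without the prlimit wrapper:

    import json
    import subprocess

    base = ("/var/lib/nova/instances/_base/"
            "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6")
    # --force-share lets qemu-img read an image another process has open.
    info = json.loads(subprocess.run(
        ["qemu-img", "info", base, "--force-share", "--output=json"],
        capture_output=True, text=True, check=True,
    ).stdout)
    print(info["format"], info["virtual-size"])
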
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.735 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.736 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.736 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.766 244018 DEBUG nova.storage.rbd_utils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dd8c9142-2607-4722-90eb-65233f258639_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.774 244018 DEBUG oslo_concurrency.processutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 dd8c9142-2607-4722-90eb-65233f258639_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:44:58 np0005629333 nova_compute[244014]: 2026-02-25 12:44:58.820 244018 DEBUG nova.policy [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
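
The failed policy check above is expected for this token: network:attach_external_network is typically admin-only by default, and the logged credentials carry is_admin=False with only the reader and member roles, so the build simply proceeds without external-network rights. A toy re-evaluation of that decision (not oslo.policy itself):

    # Toy check, not oslo.policy: assume the default admin-only rule for
    # network:attach_external_network and apply the logged credentials.
    creds = {"is_admin": False, "roles": ["reader", "member"]}
    allowed = creds["is_admin"] or "admin" in creds["roles"]
    print(allowed)  # False, matching the "Policy check ... failed" line
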
Feb 25 07:44:59 np0005629333 nova_compute[244014]: 2026-02-25 12:44:59.034 244018 DEBUG oslo_concurrency.processutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 dd8c9142-2607-4722-90eb-65233f258639_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
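
With no existing RBD image (the three "does not exist" probes above), the flat base file is imported into the vms pool as the instance disk; the resize to the flavor's 1073741824-byte root disk is logged a few lines below. A sketch replaying those steps with the rbd CLI; the resize invocation is an assumed CLI equivalent of the librbd call Nova actually makes:

    import subprocess

    base = ("/var/lib/nova/instances/_base/"
            "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6")
    disk = "dd8c9142-2607-4722-90eb-65233f258639_disk"
    auth = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    # Import the cached base image as the instance disk (format 2), then
    # grow it to 1 GiB, matching the 1073741824-byte resize in the log.
    subprocess.run(["rbd", "import", "--pool", "vms", base, disk,
                    "--image-format=2", *auth], check=True)
    subprocess.run(["rbd", "resize", "--pool", "vms", disk,
                    "--size", "1G", *auth], check=True)
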
Feb 25 07:44:59 np0005629333 nova_compute[244014]: 2026-02-25 12:44:59.117 244018 DEBUG nova.network.neutron [req-a7312af8-d973-4885-8bac-899bf45fd87c req-ce6d7732-e3ea-496a-b815-855e390fe229 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Updated VIF entry in instance network info cache for port 8d032336-9efd-4e76-9498-4dafee40640b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:44:59 np0005629333 nova_compute[244014]: 2026-02-25 12:44:59.118 244018 DEBUG nova.network.neutron [req-a7312af8-d973-4885-8bac-899bf45fd87c req-ce6d7732-e3ea-496a-b815-855e390fe229 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Updating instance_info_cache with network_info: [{"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
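
The instance_info_cache payload above is a JSON list of VIFs, each nesting a network, its subnets, and per-IP floating addresses. A small walker over that shape (field names taken from the logged payload):

    def addresses(network_info):
        # network_info: parsed form of the list logged by
        # update_instance_cache_with_nw_info above.
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    yield (vif["id"], ip["address"],
                           [f["address"] for f in ip.get("floating_ips", [])])

    # Trimmed to the fields the walker touches, the cached VIF above gives:
    nw_info = [{"id": "8d032336-9efd-4e76-9498-4dafee40640b",
                "network": {"subnets": [{"ips": [
                    {"address": "10.100.0.11",
                     "floating_ips": [{"address": "192.168.122.190"}]}]}]}}]
    for vif_id, fixed, floating in addresses(nw_info):
        print(vif_id, fixed, floating)
    # 8d032336-9efd-4e76-9498-4dafee40640b 10.100.0.11 ['192.168.122.190']
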
Feb 25 07:44:59 np0005629333 nova_compute[244014]: 2026-02-25 12:44:59.132 244018 DEBUG nova.storage.rbd_utils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image dd8c9142-2607-4722-90eb-65233f258639_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 25 07:44:59 np0005629333 nova_compute[244014]: 2026-02-25 12:44:59.177 244018 DEBUG oslo_concurrency.lockutils [req-a7312af8-d973-4885-8bac-899bf45fd87c req-ce6d7732-e3ea-496a-b815-855e390fe229 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:44:59 np0005629333 nova_compute[244014]: 2026-02-25 12:44:59.214 244018 DEBUG nova.objects.instance [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid dd8c9142-2607-4722-90eb-65233f258639 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:44:59 np0005629333 nova_compute[244014]: 2026-02-25 12:44:59.239 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:44:59 np0005629333 nova_compute[244014]: 2026-02-25 12:44:59.239 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Ensure instance console log exists: /var/lib/nova/instances/dd8c9142-2607-4722-90eb-65233f258639/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:44:59 np0005629333 nova_compute[244014]: 2026-02-25 12:44:59.240 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:44:59 np0005629333 nova_compute[244014]: 2026-02-25 12:44:59.240 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:44:59 np0005629333 nova_compute[244014]: 2026-02-25 12:44:59.240 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:44:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Feb 25 07:44:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Feb 25 07:44:59 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Feb 25 07:45:00 np0005629333 nova_compute[244014]: 2026-02-25 12:45:00.154 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 358 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 18 MiB/s rd, 12 MiB/s wr, 488 op/s
Feb 25 07:45:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Feb 25 07:45:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Feb 25 07:45:00 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Feb 25 07:45:00 np0005629333 nova_compute[244014]: 2026-02-25 12:45:00.801 244018 DEBUG nova.network.neutron [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updating instance_info_cache with network_info: [{"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:45:00 np0005629333 nova_compute[244014]: 2026-02-25 12:45:00.825 244018 DEBUG oslo_concurrency.lockutils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Releasing lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:45:00 np0005629333 nova_compute[244014]: 2026-02-25 12:45:00.829 244018 DEBUG nova.network.neutron [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Successfully created port: e50c9f03-a8a5-48d1-a34b-4a8fd638d5df _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:45:01 np0005629333 nova_compute[244014]: 2026-02-25 12:45:01.456 244018 DEBUG nova.network.neutron [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Successfully created port: 68ffeedb-a9df-4fa8-9ae1-2535a1f1799d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:45:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Feb 25 07:45:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Feb 25 07:45:01 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Feb 25 07:45:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:45:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:45:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:45:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:45:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:45:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:45:01 np0005629333 nova_compute[244014]: 2026-02-25 12:45:01.858 244018 INFO nova.virt.libvirt.driver [-] [instance: 874359d8-3251-4416-82dc-f6776853e384] Instance destroyed successfully.#033[00m
Feb 25 07:45:01 np0005629333 nova_compute[244014]: 2026-02-25 12:45:01.859 244018 DEBUG nova.objects.instance [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lazy-loading 'resources' on Instance uuid 874359d8-3251-4416-82dc-f6776853e384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:45:01 np0005629333 nova_compute[244014]: 2026-02-25 12:45:01.875 244018 DEBUG nova.virt.libvirt.vif [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:44:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1245136630',display_name='tempest-TestShelveInstance-server-1245136630',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1245136630',id=112,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP9lQWzdSu90yaQLNul1MY74fQc98Cg9DYq6NKBJiDcNTT2XlppuxCcKfQtu58BOyBXAh+PrpYKR3FwTDUP4XAt9wnb5ixmiKo9lnvQqdB0jXAsnGExk5Uj48m62YSy+xg==',key_name='tempest-TestShelveInstance-571084469',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:44:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6c0adb05683141e7a0b866f450e410e0',ramdisk_id='',reservation_id='r-tfbrsni1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1925524092',owner_user_name='tempest-TestShelveInstance-1925524092-project-member',shelved_at='2026-02-25T12:44:57.937945',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='ce5f0984-576d-4f20-8e86-b4b26cb5bb6e'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:44:52Z,user_data=None,user_id='44c0b78107ea4f7381e82a02c5954e7c',uuid=874359d8-3251-4416-82dc-f6776853e384,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:45:01 np0005629333 nova_compute[244014]: 2026-02-25 12:45:01.875 244018 DEBUG nova.network.os_vif_util [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Converting VIF {"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:45:01 np0005629333 nova_compute[244014]: 2026-02-25 12:45:01.876 244018 DEBUG nova.network.os_vif_util [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:b7:f7,bridge_name='br-int',has_traffic_filtering=True,id=97fb9f99-cb59-4581-8866-375ea3e167d7,network=Network(e4d059ba-eacc-463b-a8a4-393a5a36dba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97fb9f99-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:45:01 np0005629333 nova_compute[244014]: 2026-02-25 12:45:01.876 244018 DEBUG os_vif [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:b7:f7,bridge_name='br-int',has_traffic_filtering=True,id=97fb9f99-cb59-4581-8866-375ea3e167d7,network=Network(e4d059ba-eacc-463b-a8a4-393a5a36dba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97fb9f99-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:45:01 np0005629333 nova_compute[244014]: 2026-02-25 12:45:01.878 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:01 np0005629333 nova_compute[244014]: 2026-02-25 12:45:01.879 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97fb9f99-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:01 np0005629333 nova_compute[244014]: 2026-02-25 12:45:01.880 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:01 np0005629333 nova_compute[244014]: 2026-02-25 12:45:01.884 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:45:01 np0005629333 nova_compute[244014]: 2026-02-25 12:45:01.887 244018 INFO os_vif [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:b7:f7,bridge_name='br-int',has_traffic_filtering=True,id=97fb9f99-cb59-4581-8866-375ea3e167d7,network=Network(e4d059ba-eacc-463b-a8a4-393a5a36dba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97fb9f99-cb')#033[00m
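
The unplug sequence above is ovsdbapp committing DelPortCommand(port=tap97fb9f99-cb, bridge=br-int, if_exists=True) against the local OVSDB. A sketch of the CLI equivalent, ovs-vsctl del-port with --if-exists so a missing port is not an error:

    import subprocess

    # CLI equivalent of the logged DelPortCommand: detach the tap device
    # from br-int, tolerating its absence.
    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tap97fb9f99-cb"],
        check=True,
    )
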
Feb 25 07:45:01 np0005629333 nova_compute[244014]: 2026-02-25 12:45:01.959 244018 DEBUG nova.compute.manager [req-656200b6-2163-4ff1-aa3f-bcc2dffa5a73 req-70b772f9-e22b-4416-9f7e-abc93c113acd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-changed-97fb9f99-cb59-4581-8866-375ea3e167d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:45:01 np0005629333 nova_compute[244014]: 2026-02-25 12:45:01.959 244018 DEBUG nova.compute.manager [req-656200b6-2163-4ff1-aa3f-bcc2dffa5a73 req-70b772f9-e22b-4416-9f7e-abc93c113acd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Refreshing instance network info cache due to event network-changed-97fb9f99-cb59-4581-8866-375ea3e167d7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:45:01 np0005629333 nova_compute[244014]: 2026-02-25 12:45:01.960 244018 DEBUG oslo_concurrency.lockutils [req-656200b6-2163-4ff1-aa3f-bcc2dffa5a73 req-70b772f9-e22b-4416-9f7e-abc93c113acd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:45:01 np0005629333 nova_compute[244014]: 2026-02-25 12:45:01.960 244018 DEBUG oslo_concurrency.lockutils [req-656200b6-2163-4ff1-aa3f-bcc2dffa5a73 req-70b772f9-e22b-4416-9f7e-abc93c113acd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:45:01 np0005629333 nova_compute[244014]: 2026-02-25 12:45:01.961 244018 DEBUG nova.network.neutron [req-656200b6-2163-4ff1-aa3f-bcc2dffa5a73 req-70b772f9-e22b-4416-9f7e-abc93c113acd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Refreshing network info cache for port 97fb9f99-cb59-4581-8866-375ea3e167d7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:45:02 np0005629333 nova_compute[244014]: 2026-02-25 12:45:02.059 244018 DEBUG nova.network.neutron [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Successfully updated port: e50c9f03-a8a5-48d1-a34b-4a8fd638d5df _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:45:02 np0005629333 nova_compute[244014]: 2026-02-25 12:45:02.187 244018 INFO nova.virt.libvirt.driver [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Deleting instance files /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384_del#033[00m
Feb 25 07:45:02 np0005629333 nova_compute[244014]: 2026-02-25 12:45:02.189 244018 INFO nova.virt.libvirt.driver [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Deletion of /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384_del complete#033[00m
Feb 25 07:45:02 np0005629333 nova_compute[244014]: 2026-02-25 12:45:02.245 244018 DEBUG nova.compute.manager [req-571eb01f-832e-4109-a214-b301717765c0 req-e3cac998-636d-42ec-8c59-595ff818a7f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received event network-changed-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:45:02 np0005629333 nova_compute[244014]: 2026-02-25 12:45:02.247 244018 DEBUG nova.compute.manager [req-571eb01f-832e-4109-a214-b301717765c0 req-e3cac998-636d-42ec-8c59-595ff818a7f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Refreshing instance network info cache due to event network-changed-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:45:02 np0005629333 nova_compute[244014]: 2026-02-25 12:45:02.247 244018 DEBUG oslo_concurrency.lockutils [req-571eb01f-832e-4109-a214-b301717765c0 req-e3cac998-636d-42ec-8c59-595ff818a7f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:45:02 np0005629333 nova_compute[244014]: 2026-02-25 12:45:02.248 244018 DEBUG oslo_concurrency.lockutils [req-571eb01f-832e-4109-a214-b301717765c0 req-e3cac998-636d-42ec-8c59-595ff818a7f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:45:02 np0005629333 nova_compute[244014]: 2026-02-25 12:45:02.248 244018 DEBUG nova.network.neutron [req-571eb01f-832e-4109-a214-b301717765c0 req-e3cac998-636d-42ec-8c59-595ff818a7f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Refreshing network info cache for port e50c9f03-a8a5-48d1-a34b-4a8fd638d5df _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:45:02 np0005629333 nova_compute[244014]: 2026-02-25 12:45:02.282 244018 INFO nova.scheduler.client.report [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Deleted allocations for instance 874359d8-3251-4416-82dc-f6776853e384#033[00m
Feb 25 07:45:02 np0005629333 nova_compute[244014]: 2026-02-25 12:45:02.330 244018 DEBUG oslo_concurrency.lockutils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:02 np0005629333 nova_compute[244014]: 2026-02-25 12:45:02.330 244018 DEBUG oslo_concurrency.lockutils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1930: 305 pgs: 305 active+clean; 347 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 292 KiB/s rd, 4.4 MiB/s wr, 391 op/s
Feb 25 07:45:02 np0005629333 nova_compute[244014]: 2026-02-25 12:45:02.405 244018 DEBUG oslo_concurrency.processutils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:45:02 np0005629333 nova_compute[244014]: 2026-02-25 12:45:02.508 244018 DEBUG nova.network.neutron [req-571eb01f-832e-4109-a214-b301717765c0 req-e3cac998-636d-42ec-8c59-595ff818a7f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:45:02 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:02Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b3:5f:c9 10.100.0.11
Feb 25 07:45:02 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:02Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b3:5f:c9 10.100.0.11
Feb 25 07:45:02 np0005629333 nova_compute[244014]: 2026-02-25 12:45:02.949 244018 DEBUG nova.network.neutron [req-571eb01f-832e-4109-a214-b301717765c0 req-e3cac998-636d-42ec-8c59-595ff818a7f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:45:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:45:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1713606025' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:45:02 np0005629333 nova_compute[244014]: 2026-02-25 12:45:02.964 244018 DEBUG oslo_concurrency.lockutils [req-571eb01f-832e-4109-a214-b301717765c0 req-e3cac998-636d-42ec-8c59-595ff818a7f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:45:02 np0005629333 nova_compute[244014]: 2026-02-25 12:45:02.974 244018 DEBUG oslo_concurrency.processutils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:45:02 np0005629333 nova_compute[244014]: 2026-02-25 12:45:02.978 244018 DEBUG nova.compute.provider_tree [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:45:02 np0005629333 nova_compute[244014]: 2026-02-25 12:45:02.996 244018 DEBUG nova.scheduler.client.report [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:45:03 np0005629333 nova_compute[244014]: 2026-02-25 12:45:03.012 244018 DEBUG oslo_concurrency.lockutils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:03 np0005629333 nova_compute[244014]: 2026-02-25 12:45:03.081 244018 DEBUG oslo_concurrency.lockutils [None req-1444b083-d770-4ac6-90c1-fd30cad56bbf 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 13.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:03 np0005629333 nova_compute[244014]: 2026-02-25 12:45:03.108 244018 DEBUG nova.network.neutron [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Successfully updated port: 68ffeedb-a9df-4fa8-9ae1-2535a1f1799d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:45:03 np0005629333 nova_compute[244014]: 2026-02-25 12:45:03.130 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:45:03 np0005629333 nova_compute[244014]: 2026-02-25 12:45:03.130 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:45:03 np0005629333 nova_compute[244014]: 2026-02-25 12:45:03.131 244018 DEBUG nova.network.neutron [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:45:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:45:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Feb 25 07:45:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Feb 25 07:45:03 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Feb 25 07:45:03 np0005629333 nova_compute[244014]: 2026-02-25 12:45:03.790 244018 DEBUG nova.network.neutron [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:45:04 np0005629333 nova_compute[244014]: 2026-02-25 12:45:04.103 244018 DEBUG nova.network.neutron [req-656200b6-2163-4ff1-aa3f-bcc2dffa5a73 req-70b772f9-e22b-4416-9f7e-abc93c113acd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updated VIF entry in instance network info cache for port 97fb9f99-cb59-4581-8866-375ea3e167d7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:45:04 np0005629333 nova_compute[244014]: 2026-02-25 12:45:04.104 244018 DEBUG nova.network.neutron [req-656200b6-2163-4ff1-aa3f-bcc2dffa5a73 req-70b772f9-e22b-4416-9f7e-abc93c113acd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updating instance_info_cache with network_info: [{"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": null, "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:45:04 np0005629333 nova_compute[244014]: 2026-02-25 12:45:04.124 244018 DEBUG oslo_concurrency.lockutils [req-656200b6-2163-4ff1-aa3f-bcc2dffa5a73 req-70b772f9-e22b-4416-9f7e-abc93c113acd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:45:04 np0005629333 nova_compute[244014]: 2026-02-25 12:45:04.323 244018 DEBUG nova.compute.manager [req-809de0db-4f1d-4865-a3c8-b46acf6f52d6 req-9e14986b-bcfa-4195-80e7-6debee05afe6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received event network-changed-68ffeedb-a9df-4fa8-9ae1-2535a1f1799d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:45:04 np0005629333 nova_compute[244014]: 2026-02-25 12:45:04.323 244018 DEBUG nova.compute.manager [req-809de0db-4f1d-4865-a3c8-b46acf6f52d6 req-9e14986b-bcfa-4195-80e7-6debee05afe6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Refreshing instance network info cache due to event network-changed-68ffeedb-a9df-4fa8-9ae1-2535a1f1799d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:45:04 np0005629333 nova_compute[244014]: 2026-02-25 12:45:04.324 244018 DEBUG oslo_concurrency.lockutils [req-809de0db-4f1d-4865-a3c8-b46acf6f52d6 req-9e14986b-bcfa-4195-80e7-6debee05afe6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:45:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 305 active+clean; 341 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 495 KiB/s rd, 6.3 MiB/s wr, 454 op/s
Feb 25 07:45:05 np0005629333 nova_compute[244014]: 2026-02-25 12:45:05.158 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 305 active+clean; 341 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 401 KiB/s rd, 5.1 MiB/s wr, 368 op/s
Feb 25 07:45:06 np0005629333 nova_compute[244014]: 2026-02-25 12:45:06.813 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023491.81187, 874359d8-3251-4416-82dc-f6776853e384 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:45:06 np0005629333 nova_compute[244014]: 2026-02-25 12:45:06.814 244018 INFO nova.compute.manager [-] [instance: 874359d8-3251-4416-82dc-f6776853e384] VM Stopped (Lifecycle Event)#033[00m
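The Stopped lifecycle event comes from libvirt's domain-event stream, which the driver translates into Nova LifecycleEvent objects. A minimal standalone sketch of subscribing to the same stream with libvirt-python (the URI and the handler body are illustrative, not Nova's actual wiring):

    import libvirt

    # An event-loop implementation must be registered before callbacks fire.
    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.openReadOnly("qemu:///system")

    def on_lifecycle(conn, dom, event, detail, opaque):
        # 'event' is an integer such as libvirt.VIR_DOMAIN_EVENT_STOPPED.
        print(dom.UUIDString(), "lifecycle event", event, "detail", detail)

    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, on_lifecycle, None)

    while True:
        libvirt.virEventRunDefaultImpl()  # dispatch pending events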
Feb 25 07:45:06 np0005629333 nova_compute[244014]: 2026-02-25 12:45:06.829 244018 DEBUG nova.network.neutron [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Updating instance_info_cache with network_info: [{"id": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "address": "fa:16:3e:47:4a:25", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50c9f03-a8", "ovs_interfaceid": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "address": "fa:16:3e:60:df:22", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68ffeedb-a9", "ovs_interfaceid": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:45:06 np0005629333 nova_compute[244014]: 2026-02-25 12:45:06.850 244018 DEBUG nova.compute.manager [None req-6ca805a9-88d8-429a-86dd-3858063e5319 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:45:06 np0005629333 nova_compute[244014]: 2026-02-25 12:45:06.857 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:45:06 np0005629333 nova_compute[244014]: 2026-02-25 12:45:06.858 244018 DEBUG nova.compute.manager [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Instance network_info: |[{"id": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "address": "fa:16:3e:47:4a:25", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50c9f03-a8", "ovs_interfaceid": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "address": "fa:16:3e:60:df:22", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68ffeedb-a9", "ovs_interfaceid": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:45:06 np0005629333 nova_compute[244014]: 2026-02-25 12:45:06.859 244018 DEBUG oslo_concurrency.lockutils [req-809de0db-4f1d-4865-a3c8-b46acf6f52d6 req-9e14986b-bcfa-4195-80e7-6debee05afe6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:45:06 np0005629333 nova_compute[244014]: 2026-02-25 12:45:06.860 244018 DEBUG nova.network.neutron [req-809de0db-4f1d-4865-a3c8-b46acf6f52d6 req-9e14986b-bcfa-4195-80e7-6debee05afe6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Refreshing network info cache for port 68ffeedb-a9df-4fa8-9ae1-2535a1f1799d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:45:06 np0005629333 nova_compute[244014]: 2026-02-25 12:45:06.864 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Start _get_guest_xml network_info=[{"id": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "address": "fa:16:3e:47:4a:25", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50c9f03-a8", "ovs_interfaceid": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "address": "fa:16:3e:60:df:22", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68ffeedb-a9", "ovs_interfaceid": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:45:06 np0005629333 nova_compute[244014]: 2026-02-25 12:45:06.871 244018 WARNING nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:45:06 np0005629333 nova_compute[244014]: 2026-02-25 12:45:06.876 244018 DEBUG nova.virt.libvirt.host [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:45:06 np0005629333 nova_compute[244014]: 2026-02-25 12:45:06.877 244018 DEBUG nova.virt.libvirt.host [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:45:06 np0005629333 nova_compute[244014]: 2026-02-25 12:45:06.884 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:06 np0005629333 nova_compute[244014]: 2026-02-25 12:45:06.885 244018 DEBUG nova.virt.libvirt.host [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:45:06 np0005629333 nova_compute[244014]: 2026-02-25 12:45:06.886 244018 DEBUG nova.virt.libvirt.host [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
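The two probes above first look for a cgroup v1 cpu controller (absent here) and then fall back to the unified cgroup v2 hierarchy (present). A minimal sketch of the v2 half of that check, assuming the standard mount point from the kernel cgroup-v2 ABI (the function name is ours):

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        # The unified hierarchy lists its enabled controllers in a single
        # space-separated file; 'cpu' in that list is what the probe wants.
        try:
            with open(f"{root}/cgroup.controllers") as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False  # no unified hierarchy mounted on this host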
Feb 25 07:45:06 np0005629333 nova_compute[244014]: 2026-02-25 12:45:06.886 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:45:06 np0005629333 nova_compute[244014]: 2026-02-25 12:45:06.887 244018 DEBUG nova.virt.hardware [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:45:06 np0005629333 nova_compute[244014]: 2026-02-25 12:45:06.887 244018 DEBUG nova.virt.hardware [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:45:06 np0005629333 nova_compute[244014]: 2026-02-25 12:45:06.887 244018 DEBUG nova.virt.hardware [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:45:06 np0005629333 nova_compute[244014]: 2026-02-25 12:45:06.888 244018 DEBUG nova.virt.hardware [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:45:06 np0005629333 nova_compute[244014]: 2026-02-25 12:45:06.888 244018 DEBUG nova.virt.hardware [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:45:06 np0005629333 nova_compute[244014]: 2026-02-25 12:45:06.888 244018 DEBUG nova.virt.hardware [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:45:06 np0005629333 nova_compute[244014]: 2026-02-25 12:45:06.889 244018 DEBUG nova.virt.hardware [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:45:06 np0005629333 nova_compute[244014]: 2026-02-25 12:45:06.889 244018 DEBUG nova.virt.hardware [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:45:06 np0005629333 nova_compute[244014]: 2026-02-25 12:45:06.889 244018 DEBUG nova.virt.hardware [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:45:06 np0005629333 nova_compute[244014]: 2026-02-25 12:45:06.889 244018 DEBUG nova.virt.hardware [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:45:06 np0005629333 nova_compute[244014]: 2026-02-25 12:45:06.890 244018 DEBUG nova.virt.hardware [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
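With no topology preference from flavor or image, the only valid layout for one vCPU is 1 socket x 1 core x 1 thread, which is exactly what the log reports. A simplified sketch of the enumeration step, not Nova's actual implementation, that reproduces the result (the function name is ours):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Enumerate (sockets, cores, threads) triples whose product equals
        # the vCPU count, subject to the per-dimension limits.
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    topos.append((s, c, t))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)], matching the log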
Feb 25 07:45:06 np0005629333 nova_compute[244014]: 2026-02-25 12:45:06.894 244018 DEBUG oslo_concurrency.processutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:45:07 np0005629333 nova_compute[244014]: 2026-02-25 12:45:07.107 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:07 np0005629333 nova_compute[244014]: 2026-02-25 12:45:07.108 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:07 np0005629333 nova_compute[244014]: 2026-02-25 12:45:07.108 244018 INFO nova.compute.manager [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Unshelving#033[00m
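The lock serializing the unshelve is oslo.concurrency's named-lock primitive, keyed by the instance UUID seen in the log. A minimal sketch of the same pattern (the decorated function is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("874359d8-3251-4416-82dc-f6776853e384")
    def do_unshelve_instance():
        # Runs with the named lock held, so concurrent operations on the
        # same instance queue up behind it.
        pass

    # The context-manager form used for coarser locks is equivalent:
    with lockutils.lock("compute_resources"):
        pass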
Feb 25 07:45:07 np0005629333 nova_compute[244014]: 2026-02-25 12:45:07.214 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:07 np0005629333 nova_compute[244014]: 2026-02-25 12:45:07.215 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:07 np0005629333 nova_compute[244014]: 2026-02-25 12:45:07.228 244018 DEBUG nova.objects.instance [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lazy-loading 'pci_requests' on Instance uuid 874359d8-3251-4416-82dc-f6776853e384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:45:07 np0005629333 nova_compute[244014]: 2026-02-25 12:45:07.271 244018 DEBUG nova.objects.instance [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lazy-loading 'numa_topology' on Instance uuid 874359d8-3251-4416-82dc-f6776853e384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:45:07 np0005629333 nova_compute[244014]: 2026-02-25 12:45:07.283 244018 DEBUG nova.virt.hardware [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:45:07 np0005629333 nova_compute[244014]: 2026-02-25 12:45:07.283 244018 INFO nova.compute.claims [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:45:07 np0005629333 nova_compute[244014]: 2026-02-25 12:45:07.398 244018 DEBUG oslo_concurrency.processutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:45:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:45:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2136277098' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:45:07 np0005629333 nova_compute[244014]: 2026-02-25 12:45:07.472 244018 DEBUG oslo_concurrency.processutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
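The mon dump that the ceph-mon audit lines record on the other side is an ordinary subprocess invocation whose JSON output Nova parses for monitor addresses. A standalone sketch with the same command line (client id and conf path copied from the log):

    import json
    import subprocess

    cmd = ["ceph", "mon", "dump", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    proc = subprocess.run(cmd, capture_output=True, check=True, text=True)
    mon_map = json.loads(proc.stdout)
    # e.g. the monitor endpoints that end up in the disk <source> elements
    print([m.get("addr") for m in mon_map.get("mons", [])])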
Feb 25 07:45:07 np0005629333 nova_compute[244014]: 2026-02-25 12:45:07.500 244018 DEBUG nova.storage.rbd_utils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dd8c9142-2607-4722-90eb-65233f258639_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:45:07 np0005629333 nova_compute[244014]: 2026-02-25 12:45:07.506 244018 DEBUG oslo_concurrency.processutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:45:07 np0005629333 nova_compute[244014]: 2026-02-25 12:45:07.952 244018 INFO nova.compute.manager [None req-42e46de6-9a97-47ff-82ab-d3df2084fda9 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Get console output#033[00m
Feb 25 07:45:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:45:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1771756237' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:45:07 np0005629333 nova_compute[244014]: 2026-02-25 12:45:07.963 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Feb 25 07:45:07 np0005629333 nova_compute[244014]: 2026-02-25 12:45:07.988 244018 DEBUG oslo_concurrency.processutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:45:07 np0005629333 nova_compute[244014]: 2026-02-25 12:45:07.994 244018 DEBUG nova.compute.provider_tree [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.008 244018 DEBUG nova.scheduler.client.report [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.029 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
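Placement turns that inventory into usable capacity as (total - reserved) x allocation_ratio per resource class. A worked check against the values logged above (computed here, not output from the system):

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2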
Feb 25 07:45:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:45:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3067648468' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.090 244018 DEBUG oslo_concurrency.processutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.091 244018 DEBUG nova.virt.libvirt.vif [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:44:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1795695092',display_name='tempest-TestGettingAddress-server-1795695092',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1795695092',id=114,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9kunFNn82YP0ulUqe0tIhu80P4MbOBi6POOa2CvjUcw/O3cOMXLzAIoTeZySuOv8M/4O47QFj9wA4a/asArTJADOO8TDCsWVVXiUd+J4BzXpwIPt1WHbwTYgvUR5L7vw==',key_name='tempest-TestGettingAddress-931608106',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-e52fw19u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:44:58Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=dd8c9142-2607-4722-90eb-65233f258639,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "address": "fa:16:3e:47:4a:25", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50c9f03-a8", "ovs_interfaceid": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.092 244018 DEBUG nova.network.os_vif_util [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "address": "fa:16:3e:47:4a:25", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50c9f03-a8", "ovs_interfaceid": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.092 244018 DEBUG nova.network.os_vif_util [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:4a:25,bridge_name='br-int',has_traffic_filtering=True,id=e50c9f03-a8a5-48d1-a34b-4a8fd638d5df,network=Network(bb79e0fd-2a4d-4a70-9c80-4853297401ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50c9f03-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.093 244018 DEBUG nova.virt.libvirt.vif [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:44:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1795695092',display_name='tempest-TestGettingAddress-server-1795695092',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1795695092',id=114,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9kunFNn82YP0ulUqe0tIhu80P4MbOBi6POOa2CvjUcw/O3cOMXLzAIoTeZySuOv8M/4O47QFj9wA4a/asArTJADOO8TDCsWVVXiUd+J4BzXpwIPt1WHbwTYgvUR5L7vw==',key_name='tempest-TestGettingAddress-931608106',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-e52fw19u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:44:58Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=dd8c9142-2607-4722-90eb-65233f258639,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "address": "fa:16:3e:60:df:22", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68ffeedb-a9", "ovs_interfaceid": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.094 244018 DEBUG nova.network.os_vif_util [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "address": "fa:16:3e:60:df:22", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68ffeedb-a9", "ovs_interfaceid": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.095 244018 DEBUG nova.network.os_vif_util [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:df:22,bridge_name='br-int',has_traffic_filtering=True,id=68ffeedb-a9df-4fa8-9ae1-2535a1f1799d,network=Network(6df13266-bcfe-4a5b-94c4-81b5f08a6c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68ffeedb-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
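The converted objects are os-vif's typed VIF model, which the ovs plugin later consumes when plugging the port. A minimal sketch constructing the first of the two objects directly (field values copied from the log; Nova normally arrives here via nova_to_osvif_vif rather than by hand):

    from os_vif.objects import vif as vif_obj

    vif = vif_obj.VIFOpenVSwitch(
        id="e50c9f03-a8a5-48d1-a34b-4a8fd638d5df",
        address="fa:16:3e:47:4a:25",
        bridge_name="br-int",
        has_traffic_filtering=True,
        plugin="ovs",
        vif_name="tape50c9f03-a8",
        preserve_on_delete=False,
        active=False,
    )
    # network= and port_profile= are omitted here for brevity; the real
    # converter fills both in, as the logged repr shows.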
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.096 244018 DEBUG nova.objects.instance [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid dd8c9142-2607-4722-90eb-65233f258639 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.113 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:45:08 np0005629333 nova_compute[244014]:  <uuid>dd8c9142-2607-4722-90eb-65233f258639</uuid>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:  <name>instance-00000072</name>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestGettingAddress-server-1795695092</nova:name>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:45:06</nova:creationTime>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:45:08 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:        <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:        <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:        <nova:port uuid="e50c9f03-a8a5-48d1-a34b-4a8fd638d5df">
Feb 25 07:45:08 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:        <nova:port uuid="68ffeedb-a9df-4fa8-9ae1-2535a1f1799d">
Feb 25 07:45:08 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe60:df22" ipVersion="6"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe60:df22" ipVersion="6"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <entry name="serial">dd8c9142-2607-4722-90eb-65233f258639</entry>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <entry name="uuid">dd8c9142-2607-4722-90eb-65233f258639</entry>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/dd8c9142-2607-4722-90eb-65233f258639_disk">
Feb 25 07:45:08 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:45:08 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/dd8c9142-2607-4722-90eb-65233f258639_disk.config">
Feb 25 07:45:08 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:45:08 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:47:4a:25"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <target dev="tape50c9f03-a8"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:60:df:22"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <target dev="tap68ffeedb-a9"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/dd8c9142-2607-4722-90eb-65233f258639/console.log" append="off"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:45:08 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:45:08 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:45:08 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:45:08 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
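Once _get_guest_xml returns, the libvirt driver hands this document to libvirtd to define and boot the guest. A minimal sketch of that step with libvirt-python (the URI and reading the XML from a file are illustrative; Nova's own flow plugs VIFs and volumes around it):

    import libvirt

    with open("domain.xml") as f:
        xml = f.read()  # the <domain> document logged above

    conn = libvirt.open("qemu:///system")
    dom = conn.defineXML(xml)  # persist the domain definition
    dom.create()               # boot instance-00000072
    print(dom.name(), dom.UUIDString())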
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.114 244018 DEBUG nova.compute.manager [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Preparing to wait for external event network-vif-plugged-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.115 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "dd8c9142-2607-4722-90eb-65233f258639-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.115 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.115 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.115 244018 DEBUG nova.compute.manager [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Preparing to wait for external event network-vif-plugged-68ffeedb-a9df-4fa8-9ae1-2535a1f1799d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.116 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "dd8c9142-2607-4722-90eb-65233f258639-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.116 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.116 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.117 244018 DEBUG nova.virt.libvirt.vif [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:44:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1795695092',display_name='tempest-TestGettingAddress-server-1795695092',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1795695092',id=114,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9kunFNn82YP0ulUqe0tIhu80P4MbOBi6POOa2CvjUcw/O3cOMXLzAIoTeZySuOv8M/4O47QFj9wA4a/asArTJADOO8TDCsWVVXiUd+J4BzXpwIPt1WHbwTYgvUR5L7vw==',key_name='tempest-TestGettingAddress-931608106',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-e52fw19u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:44:58Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=dd8c9142-2607-4722-90eb-65233f258639,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "address": "fa:16:3e:47:4a:25", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50c9f03-a8", "ovs_interfaceid": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.117 244018 DEBUG nova.network.os_vif_util [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "address": "fa:16:3e:47:4a:25", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50c9f03-a8", "ovs_interfaceid": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.118 244018 DEBUG nova.network.os_vif_util [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:4a:25,bridge_name='br-int',has_traffic_filtering=True,id=e50c9f03-a8a5-48d1-a34b-4a8fd638d5df,network=Network(bb79e0fd-2a4d-4a70-9c80-4853297401ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50c9f03-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.118 244018 DEBUG os_vif [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:4a:25,bridge_name='br-int',has_traffic_filtering=True,id=e50c9f03-a8a5-48d1-a34b-4a8fd638d5df,network=Network(bb79e0fd-2a4d-4a70-9c80-4853297401ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50c9f03-a8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.119 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.119 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.120 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.122 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.123 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape50c9f03-a8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.123 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape50c9f03-a8, col_values=(('external_ids', {'iface-id': 'e50c9f03-a8a5-48d1-a34b-4a8fd638d5df', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:4a:25', 'vm-uuid': 'dd8c9142-2607-4722-90eb-65233f258639'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.125 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:08 np0005629333 NetworkManager[49836]: <info>  [1772023508.1264] manager: (tape50c9f03-a8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/468)
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.130 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.133 244018 INFO os_vif [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:4a:25,bridge_name='br-int',has_traffic_filtering=True,id=e50c9f03-a8a5-48d1-a34b-4a8fd638d5df,network=Network(bb79e0fd-2a4d-4a70-9c80-4853297401ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50c9f03-a8')#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.134 244018 DEBUG nova.virt.libvirt.vif [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:44:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1795695092',display_name='tempest-TestGettingAddress-server-1795695092',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1795695092',id=114,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9kunFNn82YP0ulUqe0tIhu80P4MbOBi6POOa2CvjUcw/O3cOMXLzAIoTeZySuOv8M/4O47QFj9wA4a/asArTJADOO8TDCsWVVXiUd+J4BzXpwIPt1WHbwTYgvUR5L7vw==',key_name='tempest-TestGettingAddress-931608106',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-e52fw19u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:44:58Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=dd8c9142-2607-4722-90eb-65233f258639,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "address": "fa:16:3e:60:df:22", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68ffeedb-a9", "ovs_interfaceid": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.134 244018 DEBUG nova.network.os_vif_util [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "address": "fa:16:3e:60:df:22", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68ffeedb-a9", "ovs_interfaceid": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.136 244018 DEBUG nova.network.os_vif_util [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:df:22,bridge_name='br-int',has_traffic_filtering=True,id=68ffeedb-a9df-4fa8-9ae1-2535a1f1799d,network=Network(6df13266-bcfe-4a5b-94c4-81b5f08a6c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68ffeedb-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.136 244018 DEBUG os_vif [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:df:22,bridge_name='br-int',has_traffic_filtering=True,id=68ffeedb-a9df-4fa8-9ae1-2535a1f1799d,network=Network(6df13266-bcfe-4a5b-94c4-81b5f08a6c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68ffeedb-a9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.139 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.139 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.140 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.143 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.143 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap68ffeedb-a9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.143 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap68ffeedb-a9, col_values=(('external_ids', {'iface-id': '68ffeedb-a9df-4fa8-9ae1-2535a1f1799d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:60:df:22', 'vm-uuid': 'dd8c9142-2607-4722-90eb-65233f258639'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.145 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:08 np0005629333 NetworkManager[49836]: <info>  [1772023508.1460] manager: (tap68ffeedb-a9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/469)
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.148 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.152 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.153 244018 INFO os_vif [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:df:22,bridge_name='br-int',has_traffic_filtering=True,id=68ffeedb-a9df-4fa8-9ae1-2535a1f1799d,network=Network(6df13266-bcfe-4a5b-94c4-81b5f08a6c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68ffeedb-a9')#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.215 244018 INFO nova.network.neutron [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updating port 97fb9f99-cb59-4581-8866-375ea3e167d7 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.245 244018 DEBUG oslo_concurrency.lockutils [None req-9c06cec7-32a3-4c54-9d63-0c778c3d1ae4 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "aee87402-4b34-4083-888b-bb653e2beaa9" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.246 244018 DEBUG oslo_concurrency.lockutils [None req-9c06cec7-32a3-4c54-9d63-0c778c3d1ae4 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.246 244018 DEBUG nova.compute.manager [None req-9c06cec7-32a3-4c54-9d63-0c778c3d1ae4 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.253 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.254 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.254 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:47:4a:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.254 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:60:df:22, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.255 244018 INFO nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Using config drive#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.286 244018 DEBUG nova.storage.rbd_utils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dd8c9142-2607-4722-90eb-65233f258639_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.300 244018 DEBUG nova.compute.manager [None req-9c06cec7-32a3-4c54-9d63-0c778c3d1ae4 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.301 244018 DEBUG nova.objects.instance [None req-9c06cec7-32a3-4c54-9d63-0c778c3d1ae4 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'flavor' on Instance uuid aee87402-4b34-4083-888b-bb653e2beaa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.334 244018 DEBUG nova.virt.libvirt.driver [None req-9c06cec7-32a3-4c54-9d63-0c778c3d1ae4 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Feb 25 07:45:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:45:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Feb 25 07:45:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 358 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 679 KiB/s rd, 6.0 MiB/s wr, 352 op/s
Feb 25 07:45:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Feb 25 07:45:08 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.786 244018 DEBUG nova.network.neutron [req-809de0db-4f1d-4865-a3c8-b46acf6f52d6 req-9e14986b-bcfa-4195-80e7-6debee05afe6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Updated VIF entry in instance network info cache for port 68ffeedb-a9df-4fa8-9ae1-2535a1f1799d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.786 244018 DEBUG nova.network.neutron [req-809de0db-4f1d-4865-a3c8-b46acf6f52d6 req-9e14986b-bcfa-4195-80e7-6debee05afe6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Updating instance_info_cache with network_info: [{"id": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "address": "fa:16:3e:47:4a:25", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50c9f03-a8", "ovs_interfaceid": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "address": "fa:16:3e:60:df:22", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68ffeedb-a9", "ovs_interfaceid": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.808 244018 DEBUG oslo_concurrency.lockutils [req-809de0db-4f1d-4865-a3c8-b46acf6f52d6 req-9e14986b-bcfa-4195-80e7-6debee05afe6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.840 244018 INFO nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Creating config drive at /var/lib/nova/instances/dd8c9142-2607-4722-90eb-65233f258639/disk.config#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.844 244018 DEBUG oslo_concurrency.processutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dd8c9142-2607-4722-90eb-65233f258639/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpimtvacjf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.883 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.884 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquired lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.884 244018 DEBUG nova.network.neutron [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:45:08 np0005629333 nova_compute[244014]: 2026-02-25 12:45:08.986 244018 DEBUG oslo_concurrency.processutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dd8c9142-2607-4722-90eb-65233f258639/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpimtvacjf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:45:09 np0005629333 nova_compute[244014]: 2026-02-25 12:45:09.026 244018 DEBUG nova.storage.rbd_utils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dd8c9142-2607-4722-90eb-65233f258639_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:45:09 np0005629333 nova_compute[244014]: 2026-02-25 12:45:09.031 244018 DEBUG oslo_concurrency.processutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dd8c9142-2607-4722-90eb-65233f258639/disk.config dd8c9142-2607-4722-90eb-65233f258639_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:45:09 np0005629333 nova_compute[244014]: 2026-02-25 12:45:09.179 244018 DEBUG oslo_concurrency.processutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dd8c9142-2607-4722-90eb-65233f258639/disk.config dd8c9142-2607-4722-90eb-65233f258639_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:45:09 np0005629333 nova_compute[244014]: 2026-02-25 12:45:09.180 244018 INFO nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Deleting local config drive /var/lib/nova/instances/dd8c9142-2607-4722-90eb-65233f258639/disk.config because it was imported into RBD.#033[00m
Feb 25 07:45:09 np0005629333 NetworkManager[49836]: <info>  [1772023509.2303] manager: (tape50c9f03-a8): new Tun device (/org/freedesktop/NetworkManager/Devices/470)
Feb 25 07:45:09 np0005629333 kernel: tape50c9f03-a8: entered promiscuous mode
Feb 25 07:45:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:09Z|01124|binding|INFO|Claiming lport e50c9f03-a8a5-48d1-a34b-4a8fd638d5df for this chassis.
Feb 25 07:45:09 np0005629333 nova_compute[244014]: 2026-02-25 12:45:09.235 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:09Z|01125|binding|INFO|e50c9f03-a8a5-48d1-a34b-4a8fd638d5df: Claiming fa:16:3e:47:4a:25 10.100.0.14
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.243 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:4a:25 10.100.0.14'], port_security=['fa:16:3e:47:4a:25 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'dd8c9142-2607-4722-90eb-65233f258639', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '09e05c4f-db6b-40c9-84a5-79dc857b9d0a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1aa114a0-206b-4aa1-b770-18f248344fa4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=e50c9f03-a8a5-48d1-a34b-4a8fd638d5df) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.244 157129 INFO neutron.agent.ovn.metadata.agent [-] Port e50c9f03-a8a5-48d1-a34b-4a8fd638d5df in datapath bb79e0fd-2a4d-4a70-9c80-4853297401ff bound to our chassis#033[00m
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.245 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bb79e0fd-2a4d-4a70-9c80-4853297401ff#033[00m
Feb 25 07:45:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:09Z|01126|binding|INFO|Setting lport e50c9f03-a8a5-48d1-a34b-4a8fd638d5df ovn-installed in OVS
Feb 25 07:45:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:09Z|01127|binding|INFO|Setting lport e50c9f03-a8a5-48d1-a34b-4a8fd638d5df up in Southbound
Feb 25 07:45:09 np0005629333 kernel: tap68ffeedb-a9: entered promiscuous mode
Feb 25 07:45:09 np0005629333 nova_compute[244014]: 2026-02-25 12:45:09.250 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:09 np0005629333 NetworkManager[49836]: <info>  [1772023509.2511] manager: (tap68ffeedb-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/471)
Feb 25 07:45:09 np0005629333 nova_compute[244014]: 2026-02-25 12:45:09.252 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:09Z|01128|binding|INFO|Claiming lport 68ffeedb-a9df-4fa8-9ae1-2535a1f1799d for this chassis.
Feb 25 07:45:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:09Z|01129|binding|INFO|68ffeedb-a9df-4fa8-9ae1-2535a1f1799d: Claiming fa:16:3e:60:df:22 2001:db8:0:1:f816:3eff:fe60:df22 2001:db8::f816:3eff:fe60:df22
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.257 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[72d06ebf-6fcb-4d3b-a0a6-8854330e1c69]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.258 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbb79e0fd-21 in ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.262 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbb79e0fd-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.262 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2fa8b78a-3424-4f1e-b066-3ea075c436f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:09 np0005629333 nova_compute[244014]: 2026-02-25 12:45:09.264 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.263 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[428f3833-40c2-4461-9afe-1d51c369ccef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:09 np0005629333 systemd-udevd[345313]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:45:09 np0005629333 systemd-udevd[345314]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:45:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:09Z|01130|binding|INFO|Setting lport 68ffeedb-a9df-4fa8-9ae1-2535a1f1799d up in Southbound
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.267 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:df:22 2001:db8:0:1:f816:3eff:fe60:df22 2001:db8::f816:3eff:fe60:df22'], port_security=['fa:16:3e:60:df:22 2001:db8:0:1:f816:3eff:fe60:df22 2001:db8::f816:3eff:fe60:df22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe60:df22/64 2001:db8::f816:3eff:fe60:df22/64', 'neutron:device_id': 'dd8c9142-2607-4722-90eb-65233f258639', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '09e05c4f-db6b-40c9-84a5-79dc857b9d0a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5bfff87-5507-4fe1-aed9-1b014e8e7384, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=68ffeedb-a9df-4fa8-9ae1-2535a1f1799d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:45:09 np0005629333 nova_compute[244014]: 2026-02-25 12:45:09.268 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:09Z|01131|binding|INFO|Setting lport 68ffeedb-a9df-4fa8-9ae1-2535a1f1799d ovn-installed in OVS
Feb 25 07:45:09 np0005629333 NetworkManager[49836]: <info>  [1772023509.2789] device (tap68ffeedb-a9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:45:09 np0005629333 NetworkManager[49836]: <info>  [1772023509.2799] device (tap68ffeedb-a9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.277 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[2d813831-4109-4ae3-a031-6e3a8777a925]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:09 np0005629333 NetworkManager[49836]: <info>  [1772023509.2805] device (tape50c9f03-a8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:45:09 np0005629333 NetworkManager[49836]: <info>  [1772023509.2812] device (tape50c9f03-a8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.291 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[adf341a4-fc02-4195-a4d7-ddac75d41752]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:09 np0005629333 systemd-machined[210048]: New machine qemu-143-instance-00000072.
Feb 25 07:45:09 np0005629333 systemd[1]: Started Virtual Machine qemu-143-instance-00000072.
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.324 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[903ed677-7e54-44bd-8134-f9af0a3aebd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.329 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b148b462-332d-432f-b025-641d5bd73133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:09 np0005629333 NetworkManager[49836]: <info>  [1772023509.3302] manager: (tapbb79e0fd-20): new Veth device (/org/freedesktop/NetworkManager/Devices/472)
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.369 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0f34975f-3dcc-43a1-9772-e13faab88f9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.376 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9279cef2-bc77-4992-9dfa-3dd26b35f06a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:09 np0005629333 NetworkManager[49836]: <info>  [1772023509.6499] device (tapbb79e0fd-20): carrier: link connected
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.660 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[77494bf6-8006-4dc8-9bfb-4e5bed9c73de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:09 np0005629333 nova_compute[244014]: 2026-02-25 12:45:09.662 244018 DEBUG nova.compute.manager [req-a78d283b-6546-418e-b33e-98b47357694b req-f948bdad-c1dc-49bf-8440-64c5800abbdb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-changed-97fb9f99-cb59-4581-8866-375ea3e167d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:45:09 np0005629333 nova_compute[244014]: 2026-02-25 12:45:09.662 244018 DEBUG nova.compute.manager [req-a78d283b-6546-418e-b33e-98b47357694b req-f948bdad-c1dc-49bf-8440-64c5800abbdb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Refreshing instance network info cache due to event network-changed-97fb9f99-cb59-4581-8866-375ea3e167d7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:45:09 np0005629333 nova_compute[244014]: 2026-02-25 12:45:09.662 244018 DEBUG oslo_concurrency.lockutils [req-a78d283b-6546-418e-b33e-98b47357694b req-f948bdad-c1dc-49bf-8440-64c5800abbdb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.683 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b9aeaaea-a343-42cc-bb6a-d462e8092fc3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbb79e0fd-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:0b:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547905, 'reachable_time': 39147, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345349, 'error': None, 'target': 'ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.701 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[24a425be-fe64-45ec-bb93-1020c029ee2b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4b:b00'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547905, 'tstamp': 547905}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345350, 'error': None, 'target': 'ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:09 np0005629333 nova_compute[244014]: 2026-02-25 12:45:09.706 244018 DEBUG nova.compute.manager [req-e69fce93-db9b-48d2-94da-3c66fd268474 req-514ecdf8-8eb4-4187-946f-f074cae1fac8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received event network-vif-plugged-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:45:09 np0005629333 nova_compute[244014]: 2026-02-25 12:45:09.706 244018 DEBUG oslo_concurrency.lockutils [req-e69fce93-db9b-48d2-94da-3c66fd268474 req-514ecdf8-8eb4-4187-946f-f074cae1fac8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dd8c9142-2607-4722-90eb-65233f258639-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:09 np0005629333 nova_compute[244014]: 2026-02-25 12:45:09.707 244018 DEBUG oslo_concurrency.lockutils [req-e69fce93-db9b-48d2-94da-3c66fd268474 req-514ecdf8-8eb4-4187-946f-f074cae1fac8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:09 np0005629333 nova_compute[244014]: 2026-02-25 12:45:09.707 244018 DEBUG oslo_concurrency.lockutils [req-e69fce93-db9b-48d2-94da-3c66fd268474 req-514ecdf8-8eb4-4187-946f-f074cae1fac8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:09 np0005629333 nova_compute[244014]: 2026-02-25 12:45:09.707 244018 DEBUG nova.compute.manager [req-e69fce93-db9b-48d2-94da-3c66fd268474 req-514ecdf8-8eb4-4187-946f-f074cae1fac8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Processing event network-vif-plugged-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.728 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[94a5369b-beae-47c4-afad-7b4c14408fed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbb79e0fd-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:0b:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547905, 'reachable_time': 39147, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 345351, 'error': None, 'target': 'ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.762 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e1b8e8c6-82fa-4d26-a3a5-39c14a2929c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.833 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9222479b-01a9-4fd2-a6eb-c31fbcef1b61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.835 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb79e0fd-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.836 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.836 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbb79e0fd-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:09 np0005629333 nova_compute[244014]: 2026-02-25 12:45:09.838 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:09 np0005629333 NetworkManager[49836]: <info>  [1772023509.8398] manager: (tapbb79e0fd-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/473)
Feb 25 07:45:09 np0005629333 kernel: tapbb79e0fd-20: entered promiscuous mode
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.843 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbb79e0fd-20, col_values=(('external_ids', {'iface-id': '091afd0d-cbaa-4d88-8f55-1965b8ffcb56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
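The three ovsdbapp transactions just logged (DelPortCommand on br-ex, AddPortCommand on br-int, DbSetCommand on the Interface row) map one-to-one onto ovsdbapp's public Open vSwitch API. A sketch of the same sequence, assuming the conventional local OVSDB socket (connection details are deployment-specific); the log ran three single-command transactions, batched into one here for brevity:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'  # assumed default socket path
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    port = 'tapbb79e0fd-20'
    with api.transaction(check_error=True) as txn:
        # Same three commands as in the log above:
        txn.add(api.del_port(port, bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', port, may_exist=True))
        txn.add(api.db_set(
            'Interface', port,
            ('external_ids',
             {'iface-id': '091afd0d-cbaa-4d88-8f55-1965b8ffcb56'})))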
Feb 25 07:45:09 np0005629333 nova_compute[244014]: 2026-02-25 12:45:09.845 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:09Z|01132|binding|INFO|Releasing lport 091afd0d-cbaa-4d88-8f55-1965b8ffcb56 from this chassis (sb_readonly=0)
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.846 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bb79e0fd-2a4d-4a70-9c80-4853297401ff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bb79e0fd-2a4d-4a70-9c80-4853297401ff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.847 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[81a2c382-5cdc-45d4-81d9-5c82ffac77ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.849 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-bb79e0fd-2a4d-4a70-9c80-4853297401ff
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/bb79e0fd-2a4d-4a70-9c80-4853297401ff.pid.haproxy
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID bb79e0fd-2a4d-4a70-9c80-4853297401ff
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:45:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:09.851 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'env', 'PROCESS_TAG=haproxy-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bb79e0fd-2a4d-4a70-9c80-4853297401ff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
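Taken together, the config dump and the 'Running command' line show the provisioning step: render the haproxy config to /var/lib/neutron/ovn-metadata-proxy/<network>.conf, then exec haproxy inside the ovnmeta namespace. A simplified stand-in using plain subprocess instead of neutron's sudo/rootwrap wrapper; paths and arguments mirror the log:

    import subprocess

    network_id = 'bb79e0fd-2a4d-4a70-9c80-4853297401ff'
    cfg_path = '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % network_id

    # cfg_text would be the rendered template dumped in the log above:
    # with open(cfg_path, 'w') as f:
    #     f.write(cfg_text)

    subprocess.run(
        ['ip', 'netns', 'exec', 'ovnmeta-%s' % network_id,
         'env', 'PROCESS_TAG=haproxy-%s' % network_id,
         'haproxy', '-f', cfg_path],
        check=True)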
Feb 25 07:45:09 np0005629333 nova_compute[244014]: 2026-02-25 12:45:09.852 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.038 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023510.0375986, dd8c9142-2607-4722-90eb-65233f258639 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.038 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd8c9142-2607-4722-90eb-65233f258639] VM Started (Lifecycle Event)#033[00m
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.061 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd8c9142-2607-4722-90eb-65233f258639] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.067 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023510.0378032, dd8c9142-2607-4722-90eb-65233f258639 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.067 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd8c9142-2607-4722-90eb-65233f258639] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.090 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd8c9142-2607-4722-90eb-65233f258639] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.096 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd8c9142-2607-4722-90eb-65233f258639] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.121 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd8c9142-2607-4722-90eb-65233f258639] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
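The 'Synchronizing instance power state' / 'pending task ... Skip' pair above compresses nova's lifecycle-event rule: translate the hypervisor event into a power state, but refuse to sync while the instance has a task in flight. A hypothetical condensation of that decision, not nova's actual code (numeric states follow nova's convention — 0 NOSTATE, 1 RUNNING, 3 PAUSED, 4 SHUTDOWN — matching 'DB power_state: 0, VM power_state: 3' in the log):

    NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0, 1, 3, 4
    EVENT_TO_POWER_STATE = {'Started': RUNNING, 'Paused': PAUSED,
                            'Stopped': SHUTDOWN}

    def handle_lifecycle_event(instance, event_name):
        vm_power_state = EVENT_TO_POWER_STATE[event_name]
        if instance['task_state'] is not None:
            # "During sync_power_state the instance has a pending task
            # (spawning). Skip."
            return 'skip'
        instance['power_state'] = vm_power_state
        return 'synced'

    inst = {'vm_state': 'building', 'task_state': 'spawning',
            'power_state': NOSTATE}
    assert handle_lifecycle_event(inst, 'Paused') == 'skip'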
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.158 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:10 np0005629333 podman[345426]: 2026-02-25 12:45:10.23856777 +0000 UTC m=+0.060575400 container create d9c72c0a2b0012d1a231683380c38f5ece558e7f5e61fce7df9eb24b49e1b633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.272 244018 DEBUG nova.network.neutron [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updating instance_info_cache with network_info: [{"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
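The network_info blob just cached nests fixed IPs under network.subnets[].ips[], with floating IPs hanging off each fixed address. A small walk over that structure, shaped by the log entry above:

    def list_addresses(network_info):
        """Yield (fixed_ip, [floating_ips]) from a nova network_info list."""
        for vif in network_info:
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    yield ip['address'], [f['address']
                                          for f in ip.get('floating_ips', [])]

    nw_info = [{'network': {'subnets': [{'ips': [
        {'address': '10.100.0.11',
         'floating_ips': [{'address': '192.168.122.235'}]}]}]}}]
    assert list(list_addresses(nw_info)) == [('10.100.0.11',
                                              ['192.168.122.235'])]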
Feb 25 07:45:10 np0005629333 systemd[1]: Started libpod-conmon-d9c72c0a2b0012d1a231683380c38f5ece558e7f5e61fce7df9eb24b49e1b633.scope.
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.299 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Releasing lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:45:10 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.302 244018 DEBUG nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.303 244018 INFO nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Creating image(s)#033[00m
Feb 25 07:45:10 np0005629333 podman[345426]: 2026-02-25 12:45:10.208735118 +0000 UTC m=+0.030742768 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:45:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86c13903373d387e7781319c35f624b48a598e4d3ad7505fd786416ab8b5fa73/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:45:10 np0005629333 podman[345426]: 2026-02-25 12:45:10.319340599 +0000 UTC m=+0.141348239 container init d9c72c0a2b0012d1a231683380c38f5ece558e7f5e61fce7df9eb24b49e1b633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 25 07:45:10 np0005629333 podman[345426]: 2026-02-25 12:45:10.329174387 +0000 UTC m=+0.151182017 container start d9c72c0a2b0012d1a231683380c38f5ece558e7f5e61fce7df9eb24b49e1b633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
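Each podman line above records one lifecycle step (image pull, container create, init, start) with a wall-clock timestamp, a monotonic offset, the verb, the 64-hex container ID, and the label set. A regex sketch for extracting the verb and ID from lines like these; the pattern is fitted to this capture, not to podman's formal event spec:

    import re

    PODMAN_EVENT = re.compile(
        r'container (?P<verb>create|init|start|died) (?P<cid>[0-9a-f]{64})')

    line = ('2026-02-25 12:45:10.329174387 +0000 UTC m=+0.151182017 '
            'container start d9c72c0a2b0012d1a231683380c38f5ece558e7f5e'
            '61fce7df9eb24b49e1b633 (image=..., name=neutron-haproxy-...)')
    m = PODMAN_EVENT.search(line)
    assert m and m.group('verb') == 'start'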
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.339 244018 DEBUG nova.storage.rbd_utils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] rbd image 874359d8-3251-4416-82dc-f6776853e384_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.346 244018 DEBUG nova.objects.instance [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 874359d8-3251-4416-82dc-f6776853e384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.349 244018 DEBUG oslo_concurrency.lockutils [req-a78d283b-6546-418e-b33e-98b47357694b req-f948bdad-c1dc-49bf-8440-64c5800abbdb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.350 244018 DEBUG nova.network.neutron [req-a78d283b-6546-418e-b33e-98b47357694b req-f948bdad-c1dc-49bf-8440-64c5800abbdb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Refreshing network info cache for port 97fb9f99-cb59-4581-8866-375ea3e167d7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:45:10 np0005629333 neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff[345441]: [NOTICE]   (345461) : New worker (345466) forked
Feb 25 07:45:10 np0005629333 neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff[345441]: [NOTICE]   (345461) : Loading success.
Feb 25 07:45:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 305 active+clean; 358 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 519 KiB/s rd, 3.6 MiB/s wr, 150 op/s
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.401 244018 DEBUG nova.storage.rbd_utils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] rbd image 874359d8-3251-4416-82dc-f6776853e384_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.410 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 68ffeedb-a9df-4fa8-9ae1-2535a1f1799d in datapath 6df13266-bcfe-4a5b-94c4-81b5f08a6c21 unbound from our chassis#033[00m
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.413 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6df13266-bcfe-4a5b-94c4-81b5f08a6c21#033[00m
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.421 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[644c9ae0-2e4a-4398-97de-22f729a4a193]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.421 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6df13266-b1 in ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.421 244018 DEBUG nova.storage.rbd_utils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] rbd image 874359d8-3251-4416-82dc-f6776853e384_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.424 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6df13266-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.424 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d6afa467-6145-4591-bb40-832ae70bccf9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.425 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "fe4b7a9f067b869997678002ce25dd04ec0bdad0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.425 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ed763b1a-d053-4088-b101-5c857820c277]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.426 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "fe4b7a9f067b869997678002ce25dd04ec0bdad0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.433 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[9996edaa-adf1-4125-a8c0-43cf86ebce51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.447 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8bff900e-da0c-4bb5-b37f-3e8fa403eca3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.475 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e32b61dc-420f-43d3-a886-6576b1075d3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:10 np0005629333 NetworkManager[49836]: <info>  [1772023510.4819] manager: (tap6df13266-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/474)
Feb 25 07:45:10 np0005629333 systemd-udevd[345329]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.482 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0b7b32d7-093c-49e8-98c8-885b03c71c4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.517 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f84d094b-e7e8-42e4-b70c-d202a1fb0f1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.522 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[93160036-1b67-42f0-8812-2e78353584f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:10 np0005629333 NetworkManager[49836]: <info>  [1772023510.5471] device (tap6df13266-b0): carrier: link connected
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.552 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9ed2e96c-ec74-4c1c-a9ed-39b899a7114b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.571 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7d4db3b1-3d8a-4c7c-9327-689f49862e92]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6df13266-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:ad:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547995, 'reachable_time': 15036, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345521, 'error': None, 'target': 'ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.584 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[aa89da31-31e4-42df-bf29-5efb0ad76078]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec6:ad86'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547995, 'tstamp': 547995}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345522, 'error': None, 'target': 'ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.599 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5780f304-1e67-4deb-85dc-71a0d2b846f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6df13266-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:ad:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547995, 'reachable_time': 15036, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 345523, 'error': None, 'target': 'ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:10 np0005629333 kernel: tap8d032336-9e (unregistering): left promiscuous mode
Feb 25 07:45:10 np0005629333 NetworkManager[49836]: <info>  [1772023510.6215] device (tap8d032336-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:45:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:10Z|01133|binding|INFO|Releasing lport 8d032336-9efd-4e76-9498-4dafee40640b from this chassis (sb_readonly=0)
Feb 25 07:45:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:10Z|01134|binding|INFO|Setting lport 8d032336-9efd-4e76-9498-4dafee40640b down in Southbound
Feb 25 07:45:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:10Z|01135|binding|INFO|Removing iface tap8d032336-9e ovn-installed in OVS
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.632 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.635 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.632 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d32b0c25-7567-4b9e-9d98-f0c9cc9e0f4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.665 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ef2b1e2c-4e31-4d98-85c2-ba1a166b734f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.666 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6df13266-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.667 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.667 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6df13266-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.669 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:10 np0005629333 NetworkManager[49836]: <info>  [1772023510.6702] manager: (tap6df13266-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/475)
Feb 25 07:45:10 np0005629333 kernel: tap6df13266-b0: entered promiscuous mode
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.675 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.675 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6df13266-b0, col_values=(('external_ids', {'iface-id': '0b276f7a-90bb-4427-8f39-0e014732fd20'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:10Z|01136|binding|INFO|Releasing lport 0b276f7a-90bb-4427-8f39-0e014732fd20 from this chassis (sb_readonly=1)
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.683 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:10 np0005629333 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000071.scope: Deactivated successfully.
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.687 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:10 np0005629333 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000071.scope: Consumed 12.122s CPU time.
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.688 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6df13266-bcfe-4a5b-94c4-81b5f08a6c21.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6df13266-bcfe-4a5b-94c4-81b5f08a6c21.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
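The 'Unable to access ... .pid.haproxy' debug above is the agent probing for an existing proxy before spawning a new one: get_value_from_file returns nothing when the pidfile is absent. A stand-in with the same contract (the helper name is borrowed from the log; this implementation is assumed):

    def get_value_from_file(path, converter=int):
        """Return converted file contents, or None if missing/empty."""
        try:
            with open(path) as f:
                raw = f.read().strip()
        except FileNotFoundError as e:
            # Matches the log: "[Errno 2] No such file or directory: '...'"
            print('Unable to access %s; Error: %s' % (path, e))
            return None
        return converter(raw) if raw else None

    pid = get_value_from_file(
        '/var/lib/neutron/external/pids/'
        '6df13266-bcfe-4a5b-94c4-81b5f08a6c21.pid.haproxy')
    # pid is None here, so the agent goes on to start a fresh haproxy.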
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.689 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[320bc6e9-bdd9-43d0-b76d-75093dd06c72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.691 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-6df13266-bcfe-4a5b-94c4-81b5f08a6c21
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/6df13266-bcfe-4a5b-94c4-81b5f08a6c21.pid.haproxy
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 6df13266-bcfe-4a5b-94c4-81b5f08a6c21
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.691 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'env', 'PROCESS_TAG=haproxy-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6df13266-bcfe-4a5b-94c4-81b5f08a6c21.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 07:45:10 np0005629333 systemd-machined[210048]: Machine qemu-142-instance-00000071 terminated.
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.734 244018 DEBUG nova.virt.libvirt.imagebackend [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Image locations are: [{'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/ce5f0984-576d-4f20-8e86-b4b26cb5bb6e/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/ce5f0984-576d-4f20-8e86-b4b26cb5bb6e/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.790 244018 DEBUG nova.virt.libvirt.imagebackend [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Selected location: {'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/ce5f0984-576d-4f20-8e86-b4b26cb5bb6e/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.790 244018 DEBUG nova.storage.rbd_utils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] cloning images/ce5f0984-576d-4f20-8e86-b4b26cb5bb6e@snap to None/874359d8-3251-4416-82dc-f6776853e384_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 25 07:45:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:10.802 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:5f:c9 10.100.0.11'], port_security=['fa:16:3e:b3:5f:c9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'aee87402-4b34-4083-888b-bb653e2beaa9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd92597b-67bf-4492-9581-a9a7ec80f716', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '815102d4-6506-4957-9109-24ea4e91e4b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d5163c3-96b2-4512-ae02-51f6c9b4bef4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=8d032336-9efd-4e76-9498-4dafee40640b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:45:10 np0005629333 NetworkManager[49836]: <info>  [1772023510.8453] manager: (tap8d032336-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/476)
Feb 25 07:45:10 np0005629333 nova_compute[244014]: 2026-02-25 12:45:10.913 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "fe4b7a9f067b869997678002ce25dd04ec0bdad0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.487s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:11 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.045 244018 DEBUG nova.objects.instance [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lazy-loading 'migration_context' on Instance uuid 874359d8-3251-4416-82dc-f6776853e384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:45:11 np0005629333 podman[345675]: 2026-02-25 12:45:11.057520301 +0000 UTC m=+0.049676413 container create 2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:45:11 np0005629333 systemd[1]: Started libpod-conmon-2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989.scope.
Feb 25 07:45:11 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.109 244018 DEBUG nova.storage.rbd_utils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] flattening vms/874359d8-3251-4416-82dc-f6776853e384_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
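The clone-then-flatten pair ('cloning images/<image>@snap ...' followed by 'flattening vms/..._disk') corresponds to librbd's Python bindings: COW-clone the glance snapshot into the vms pool, then flatten so the disk stops depending on its parent. A sketch assuming a default /etc/ceph/ceph.conf and the pool names from the log; error handling is omitted:

    import rados
    import rbd

    image_id = 'ce5f0984-576d-4f20-8e86-b4b26cb5bb6e'
    disk_name = '874359d8-3251-4416-82dc-f6776853e384_disk'

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    src = cluster.open_ioctx('images')   # pool holding the glance image
    dst = cluster.open_ioctx('vms')      # pool for instance disks
    try:
        # The 'snap' snapshot must already be protected for cloning to work.
        rbd.RBD().clone(src, image_id, 'snap', dst, disk_name)
        img = rbd.Image(dst, disk_name)
        try:
            img.flatten()
        finally:
            img.close()
    finally:
        src.close()
        dst.close()
        cluster.shutdown()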
Feb 25 07:45:11 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:45:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3db6813ebd7b49f3ead32cb60564520a6c07da0da03dd8df369d119009fd878/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:45:11 np0005629333 podman[345675]: 2026-02-25 12:45:11.035056837 +0000 UTC m=+0.027212959 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:45:11 np0005629333 podman[345675]: 2026-02-25 12:45:11.138312371 +0000 UTC m=+0.130468483 container init 2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 25 07:45:11 np0005629333 podman[345675]: 2026-02-25 12:45:11.143826886 +0000 UTC m=+0.135982978 container start 2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 07:45:11 np0005629333 neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21[345726]: [NOTICE]   (345748) : New worker (345750) forked
Feb 25 07:45:11 np0005629333 neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21[345726]: [NOTICE]   (345748) : Loading success.
Feb 25 07:45:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:11.202 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 8d032336-9efd-4e76-9498-4dafee40640b in datapath cd92597b-67bf-4492-9581-a9a7ec80f716 unbound from our chassis#033[00m
Feb 25 07:45:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:11.208 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cd92597b-67bf-4492-9581-a9a7ec80f716, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:45:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:11.215 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[00cbe21c-3861-4303-8500-dc730b0c35b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:11.216 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716 namespace which is not needed anymore#033[00m
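'Cleaning up ovnmeta-... namespace which is not needed anymore' is the inverse of the provisioning steps earlier: once a datapath has no VIF ports left, the agent tears the namespace down. A minimal sketch with pyroute2, assuming the naming scheme visible in the log:

    from pyroute2 import netns

    network_id = 'cd92597b-67bf-4492-9581-a9a7ec80f716'
    ns_name = 'ovnmeta-%s' % network_id

    if ns_name in netns.listnetns():
        # Deleting the namespace also destroys the veth end that lived in it.
        netns.remove(ns_name)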
Feb 25 07:45:11 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Feb 25 07:45:11 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.363 244018 INFO nova.virt.libvirt.driver [None req-9c06cec7-32a3-4c54-9d63-0c778c3d1ae4 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Instance shutdown successfully after 3 seconds.#033[00m
Feb 25 07:45:11 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.373 244018 INFO nova.virt.libvirt.driver [-] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Instance destroyed successfully.#033[00m
Feb 25 07:45:11 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.373 244018 DEBUG nova.objects.instance [None req-9c06cec7-32a3-4c54-9d63-0c778c3d1ae4 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'numa_topology' on Instance uuid aee87402-4b34-4083-888b-bb653e2beaa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:45:11 np0005629333 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[344206]: [NOTICE]   (344221) : haproxy version is 2.8.14-c23fe91
Feb 25 07:45:11 np0005629333 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[344206]: [NOTICE]   (344221) : path to executable is /usr/sbin/haproxy
Feb 25 07:45:11 np0005629333 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[344206]: [WARNING]  (344221) : Exiting Master process...
Feb 25 07:45:11 np0005629333 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[344206]: [WARNING]  (344221) : Exiting Master process...
Feb 25 07:45:11 np0005629333 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[344206]: [ALERT]    (344221) : Current worker (344223) exited with code 143 (Terminated)
Feb 25 07:45:11 np0005629333 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[344206]: [WARNING]  (344221) : All workers exited. Exiting... (0)
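The ALERT above is benign during a managed teardown: 143 is the shell-style 128+signal encoding, i.e. 128 + 15 (SIGTERM), consistent with the "(Terminated)" annotation and the container stop that follows. A minimal stdlib sketch of that decoding (illustrative only; the function name is made up for this note):

    import signal

    def describe_exit_code(code: int) -> str:
        # Shell convention: codes above 128 mean "killed by signal code-128".
        if code > 128:
            sig = signal.Signals(code - 128)
            return f'killed by {sig.name} (signal {sig.value})'
        return f'exited normally with status {code}'

    print(describe_exit_code(143))  # killed by SIGTERM (signal 15)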
Feb 25 07:45:11 np0005629333 systemd[1]: libpod-5d6cb16551b0af2601462ec24c2bc4d9d2bd3c8d57b4a0af6c62f63f738e6740.scope: Deactivated successfully.
Feb 25 07:45:11 np0005629333 podman[345776]: 2026-02-25 12:45:11.387948235 +0000 UTC m=+0.077386124 container died 5d6cb16551b0af2601462ec24c2bc4d9d2bd3c8d57b4a0af6c62f63f738e6740 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 25 07:45:11 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.391 244018 DEBUG nova.compute.manager [None req-9c06cec7-32a3-4c54-9d63-0c778c3d1ae4 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:45:11 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.449 244018 DEBUG oslo_concurrency.lockutils [None req-9c06cec7-32a3-4c54-9d63-0c778c3d1ae4 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
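The Acquiring/acquired/released triplets logged here are oslo.concurrency's standard instrumentation around its named locks; "held 3.203s" is the time do_stop_instance spent inside the critical section. A minimal sketch of the same pattern, assuming oslo.concurrency is installed (lock name copied from the line above, body hypothetical):

    from oslo_concurrency import lockutils

    # Same pattern as the log: lockutils emits the "acquired"/"released"
    # DEBUG lines (with waited/held timings) around the critical section.
    with lockutils.lock('aee87402-4b34-4083-888b-bb653e2beaa9'):
        pass  # e.g. the body of do_stop_instance (hypothetical here)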
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.830 244018 DEBUG nova.compute.manager [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received event network-vif-plugged-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.831 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dd8c9142-2607-4722-90eb-65233f258639-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.831 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.831 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.832 244018 DEBUG nova.compute.manager [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] No event matching network-vif-plugged-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df in dict_keys([('network-vif-plugged', '68ffeedb-a9df-4fa8-9ae1-2535a1f1799d')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.832 244018 WARNING nova.compute.manager [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received unexpected event network-vif-plugged-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df for instance with vm_state building and task_state spawning.#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.832 244018 DEBUG nova.compute.manager [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received event network-vif-plugged-68ffeedb-a9df-4fa8-9ae1-2535a1f1799d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.833 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dd8c9142-2607-4722-90eb-65233f258639-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.833 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.834 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.834 244018 DEBUG nova.compute.manager [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Processing event network-vif-plugged-68ffeedb-a9df-4fa8-9ae1-2535a1f1799d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.834 244018 DEBUG nova.compute.manager [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received event network-vif-plugged-68ffeedb-a9df-4fa8-9ae1-2535a1f1799d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.834 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dd8c9142-2607-4722-90eb-65233f258639-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.835 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.835 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.835 244018 DEBUG nova.compute.manager [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] No waiting events found dispatching network-vif-plugged-68ffeedb-a9df-4fa8-9ae1-2535a1f1799d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.836 244018 WARNING nova.compute.manager [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received unexpected event network-vif-plugged-68ffeedb-a9df-4fa8-9ae1-2535a1f1799d for instance with vm_state building and task_state spawning.#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.836 244018 DEBUG nova.compute.manager [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received event network-vif-unplugged-8d032336-9efd-4e76-9498-4dafee40640b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.836 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.836 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.837 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.837 244018 DEBUG nova.compute.manager [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] No waiting events found dispatching network-vif-unplugged-8d032336-9efd-4e76-9498-4dafee40640b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.837 244018 WARNING nova.compute.manager [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received unexpected event network-vif-unplugged-8d032336-9efd-4e76-9498-4dafee40640b for instance with vm_state stopped and task_state None.#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.837 244018 DEBUG nova.compute.manager [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received event network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.837 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.838 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.838 244018 DEBUG oslo_concurrency.lockutils [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.838 244018 DEBUG nova.compute.manager [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] No waiting events found dispatching network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.838 244018 WARNING nova.compute.manager [req-345a7c58-d490-40fa-89ae-a6ed992378e7 req-fcef8289-ad68-4207-9ff1-c0df3ade9f64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received unexpected event network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b for instance with vm_state stopped and task_state None.#033[00m
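The run of WARNINGs above is the usual benign pattern while OVN rebinds ports: network-vif-plugged/unplugged notifications arrive with no registered waiter, so nova logs them as unexpected and moves on. A small stdlib sketch for pulling those warnings out of a journal dump like this one (the regex is fitted to the exact message format above and is illustrative only):

    import re

    # Matches the WARNING lines emitted by nova.compute.manager above.
    PAT = re.compile(
        r"Received unexpected event (?P<event>network-vif-\S+) "
        r"for instance with vm_state (?P<vm_state>\S+) "
        r"and task_state (?P<task_state>\S+)\."
    )

    def unexpected_events(journal_lines):
        for line in journal_lines:
            m = PAT.search(line)
            if m:
                yield m.group('event'), m.group('vm_state'), m.group('task_state')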
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.840 244018 DEBUG nova.network.neutron [req-a78d283b-6546-418e-b33e-98b47357694b req-f948bdad-c1dc-49bf-8440-64c5800abbdb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updated VIF entry in instance network info cache for port 97fb9f99-cb59-4581-8866-375ea3e167d7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.840 244018 DEBUG nova.network.neutron [req-a78d283b-6546-418e-b33e-98b47357694b req-f948bdad-c1dc-49bf-8440-64c5800abbdb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updating instance_info_cache with network_info: [{"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.842 244018 DEBUG nova.compute.manager [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:11.865 244018 DEBUG oslo_concurrency.lockutils [req-a78d283b-6546-418e-b33e-98b47357694b req-f948bdad-c1dc-49bf-8440-64c5800abbdb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.007 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023512.006942, dd8c9142-2607-4722-90eb-65233f258639 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.008 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd8c9142-2607-4722-90eb-65233f258639] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.011 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.023 244018 INFO nova.virt.libvirt.driver [-] [instance: dd8c9142-2607-4722-90eb-65233f258639] Instance spawned successfully.#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.023 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.029 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd8c9142-2607-4722-90eb-65233f258639] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.033 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd8c9142-2607-4722-90eb-65233f258639] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
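The power-state comparison above uses nova's integer constants; as far as I recall, nova.compute.power_state defines 0 as NOSTATE and 1 as RUNNING, so the sync sees "DB unknown, hypervisor running" and correctly skips while the spawn task is pending. An illustrative lookup (the mapping is reproduced from memory of nova's constants; treat it as an assumption):

    # Illustrative copy of nova.compute.power_state's integer constants.
    POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                    4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    db_power_state, vm_power_state = 0, 1  # values from the journal line above
    print(POWER_STATES[db_power_state], '->', POWER_STATES[vm_power_state])
    # NOSTATE -> RUNNING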
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.044 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.045 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.045 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.046 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:45:12 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5d6cb16551b0af2601462ec24c2bc4d9d2bd3c8d57b4a0af6c62f63f738e6740-userdata-shm.mount: Deactivated successfully.
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.046 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.046 244018 DEBUG nova.virt.libvirt.driver [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:45:12 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1f152e8c42872b5c20ce576f93cd6d4f4e71b6bf0c572194dbd79d4f61bceb58-merged.mount: Deactivated successfully.
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.051 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd8c9142-2607-4722-90eb-65233f258639] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:45:12 np0005629333 podman[345776]: 2026-02-25 12:45:12.067339928 +0000 UTC m=+0.756777807 container cleanup 5d6cb16551b0af2601462ec24c2bc4d9d2bd3c8d57b4a0af6c62f63f738e6740 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 25 07:45:12 np0005629333 systemd[1]: libpod-conmon-5d6cb16551b0af2601462ec24c2bc4d9d2bd3c8d57b4a0af6c62f63f738e6740.scope: Deactivated successfully.
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.080 244018 DEBUG nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Image rbd:vms/874359d8-3251-4416-82dc-f6776853e384_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.081 244018 DEBUG nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.082 244018 DEBUG nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Ensure instance console log exists: /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.082 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.083 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.083 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.085 244018 DEBUG nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Start _get_guest_xml network_info=[{"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-02-25T12:44:49Z,direct_url=<?>,disk_format='raw',id=ce5f0984-576d-4f20-8e86-b4b26cb5bb6e,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-1245136630-shelved',owner='6c0adb05683141e7a0b866f450e410e0',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-25T12:44:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.090 244018 WARNING nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.094 244018 INFO nova.compute.manager [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Took 13.54 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.094 244018 DEBUG nova.compute.manager [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.099 244018 DEBUG nova.virt.libvirt.host [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.100 244018 DEBUG nova.virt.libvirt.host [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.107 244018 DEBUG nova.virt.libvirt.host [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.108 244018 DEBUG nova.virt.libvirt.host [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.108 244018 DEBUG nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.108 244018 DEBUG nova.virt.hardware [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-02-25T12:44:49Z,direct_url=<?>,disk_format='raw',id=ce5f0984-576d-4f20-8e86-b4b26cb5bb6e,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-1245136630-shelved',owner='6c0adb05683141e7a0b866f450e410e0',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-25T12:44:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.109 244018 DEBUG nova.virt.hardware [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.109 244018 DEBUG nova.virt.hardware [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.109 244018 DEBUG nova.virt.hardware [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.109 244018 DEBUG nova.virt.hardware [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.109 244018 DEBUG nova.virt.hardware [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.109 244018 DEBUG nova.virt.hardware [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.110 244018 DEBUG nova.virt.hardware [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.110 244018 DEBUG nova.virt.hardware [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.110 244018 DEBUG nova.virt.hardware [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.110 244018 DEBUG nova.virt.hardware [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
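The topology walk above goes from unset flavor/image preferences (0:0:0) and the 65536 ceilings to the single valid layout for one vCPU. The arithmetic behind "Got 1 possible topologies" is just enumerating factorizations of the vCPU count; a simplified reimplementation for illustration (not nova's actual _get_possible_cpu_topologies):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """Enumerate (sockets, cores, threads) triples whose product equals vcpus."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- matches the log above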
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.110 244018 DEBUG nova.objects.instance [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 874359d8-3251-4416-82dc-f6776853e384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.127 244018 DEBUG oslo_concurrency.processutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:45:12 np0005629333 podman[345808]: 2026-02-25 12:45:12.140033219 +0000 UTC m=+0.049153878 container remove 5d6cb16551b0af2601462ec24c2bc4d9d2bd3c8d57b4a0af6c62f63f738e6740 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 25 07:45:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:12.160 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[00b47a9c-e828-40dd-9351-806c6b3d7865]: (4, ('Wed Feb 25 12:45:11 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716 (5d6cb16551b0af2601462ec24c2bc4d9d2bd3c8d57b4a0af6c62f63f738e6740)\n5d6cb16551b0af2601462ec24c2bc4d9d2bd3c8d57b4a0af6c62f63f738e6740\nWed Feb 25 12:45:12 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716 (5d6cb16551b0af2601462ec24c2bc4d9d2bd3c8d57b4a0af6c62f63f738e6740)\n5d6cb16551b0af2601462ec24c2bc4d9d2bd3c8d57b4a0af6c62f63f738e6740\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.162 244018 INFO nova.compute.manager [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Took 14.66 seconds to build instance.#033[00m
Feb 25 07:45:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:12.162 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5386f1c5-2374-4a7f-bcb6-d075238f67ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:12.163 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd92597b-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.165 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:12 np0005629333 kernel: tapcd92597b-60: left promiscuous mode
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.178 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.179 244018 DEBUG oslo_concurrency.lockutils [None req-e1b081e0-a358-47bc-bc35-089f22469802 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.179 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:12.182 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[84d72aac-1ab8-4473-9542-02c222d061f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:12.201 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[35d41262-8c05-4644-882b-8b2a9a35bfa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:12.203 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4ae1e51d-b686-435a-a6b8-61d521d67653]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:12.217 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[db76dc2f-06c0-44f7-a0b2-ebd55327c267]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545849, 'reachable_time': 29158, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345829, 'error': None, 'target': 'ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:12.220 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:45:12 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:12.220 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[ac7ed0f9-ba3a-47a0-a1a6-25535fc0c0fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:12 np0005629333 systemd[1]: run-netns-ovnmeta\x2dcd92597b\x2d67bf\x2d4492\x2d9581\x2da9a7ec80f716.mount: Deactivated successfully.
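Lines like the three above complete the teardown started earlier: the tap port leaves br-int, neutron's privsep helper deletes the ovnmeta-* namespace via remove_netns, and systemd reaps the leftover /run/netns bind mount. The equivalent check-then-delete at CLI level, as a stdlib sketch (namespace name taken from the log; running this requires root):

    import subprocess

    NS = 'ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716'  # from the log above

    # List namespaces, then delete ours if present (mirrors the agent's
    # "tearing the namespace down if needed" behaviour at CLI level).
    existing = subprocess.run(['ip', 'netns', 'list'],
                              capture_output=True, text=True, check=True).stdout
    if NS in existing:
        subprocess.run(['ip', 'netns', 'delete', NS], check=True)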
Feb 25 07:45:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 305 active+clean; 399 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 4.8 MiB/s wr, 147 op/s
Feb 25 07:45:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:45:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3150052644' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.728 244018 DEBUG oslo_concurrency.processutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.757 244018 DEBUG nova.storage.rbd_utils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] rbd image 874359d8-3251-4416-82dc-f6776853e384_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:45:12 np0005629333 nova_compute[244014]: 2026-02-25 12:45:12.764 244018 DEBUG oslo_concurrency.processutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:45:13 np0005629333 nova_compute[244014]: 2026-02-25 12:45:13.147 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:45:13 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1920090227' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:45:13 np0005629333 nova_compute[244014]: 2026-02-25 12:45:13.318 244018 DEBUG oslo_concurrency.processutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
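Both ceph mon dump invocations above are nova's RBD utils discovering monitor addresses before touching the image, each taking roughly 0.6 s. A minimal reproduction of that call with stdlib JSON parsing (the --id/--conf values are copied from the logged command; the 'mons' key layout follows Ceph's documented JSON mon map output, hedged accordingly):

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True,
    ).stdout

    mon_map = json.loads(out)
    # 'mons' is the monitor list in Ceph's JSON mon map output.
    for mon in mon_map.get('mons', []):
        print(mon.get('name'), mon.get('addr'))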
Feb 25 07:45:13 np0005629333 nova_compute[244014]: 2026-02-25 12:45:13.320 244018 DEBUG nova.virt.libvirt.vif [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:44:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1245136630',display_name='tempest-TestShelveInstance-server-1245136630',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1245136630',id=112,image_ref='ce5f0984-576d-4f20-8e86-b4b26cb5bb6e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-571084469',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:44:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='6c0adb05683141e7a0b866f450e410e0',ramdisk_id='',reservation_id='r-tfbrsni1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1925524092',owner_user_name='tempest-TestShelveInstance-1925524092-project-member',shelved_at='2026-02-25T12:44:57.937945',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='ce5f0984-576d-4f20-8e86-b4b26cb5bb6e'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:45:07Z,user_data=None,user_id='44c0b78107ea4f7381e82a02c5954e7c',uuid=874359d8-3251-4416-82dc-f6776853e384,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:45:13 np0005629333 nova_compute[244014]: 2026-02-25 12:45:13.320 244018 DEBUG nova.network.os_vif_util [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Converting VIF {"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:45:13 np0005629333 nova_compute[244014]: 2026-02-25 12:45:13.321 244018 DEBUG nova.network.os_vif_util [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:b7:f7,bridge_name='br-int',has_traffic_filtering=True,id=97fb9f99-cb59-4581-8866-375ea3e167d7,network=Network(e4d059ba-eacc-463b-a8a4-393a5a36dba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97fb9f99-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
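nova_to_osvif_vif converts the port's JSON blob into the typed os-vif object shown in the line above before handing it to the plugin. A hedged sketch of building that object directly, assuming os-vif is installed and that its versioned objects accept field keyword arguments (values copied from the log; the nested Network object is elided):

    from os_vif.objects import vif as vif_obj

    # Mirror of the converted VIFOpenVSwitch logged above; in nova the
    # network field would be populated from the Neutron port as well.
    vif = vif_obj.VIFOpenVSwitch(
        id="97fb9f99-cb59-4581-8866-375ea3e167d7",
        address="fa:16:3e:61:b7:f7",
        bridge_name="br-int",
        vif_name="tap97fb9f99-cb",
        plugin="ovs",
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False,
    )
    print(vif.vif_name, vif.address)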
Feb 25 07:45:13 np0005629333 nova_compute[244014]: 2026-02-25 12:45:13.322 244018 DEBUG nova.objects.instance [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 874359d8-3251-4416-82dc-f6776853e384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:45:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:45:14 np0005629333 nova_compute[244014]: 2026-02-25 12:45:14.148 244018 DEBUG nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:45:14 np0005629333 nova_compute[244014]:  <uuid>874359d8-3251-4416-82dc-f6776853e384</uuid>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:  <name>instance-00000070</name>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestShelveInstance-server-1245136630</nova:name>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:45:12</nova:creationTime>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:45:14 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:        <nova:user uuid="44c0b78107ea4f7381e82a02c5954e7c">tempest-TestShelveInstance-1925524092-project-member</nova:user>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:        <nova:project uuid="6c0adb05683141e7a0b866f450e410e0">tempest-TestShelveInstance-1925524092</nova:project>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="ce5f0984-576d-4f20-8e86-b4b26cb5bb6e"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:        <nova:port uuid="97fb9f99-cb59-4581-8866-375ea3e167d7">
Feb 25 07:45:14 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <entry name="serial">874359d8-3251-4416-82dc-f6776853e384</entry>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <entry name="uuid">874359d8-3251-4416-82dc-f6776853e384</entry>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/874359d8-3251-4416-82dc-f6776853e384_disk">
Feb 25 07:45:14 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:45:14 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/874359d8-3251-4416-82dc-f6776853e384_disk.config">
Feb 25 07:45:14 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:45:14 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:61:b7:f7"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <target dev="tap97fb9f99-cb"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/console.log" append="off"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <input type="keyboard" bus="usb"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:45:14 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:45:14 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:45:14 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:45:14 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
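The dump above is plain libvirt domain XML, so the interesting bits are easy to pull out programmatically when debugging a spawn. A short sketch, assuming the XML has been saved to domain.xml:

    import xml.etree.ElementTree as ET

    root = ET.parse("domain.xml").getroot()

    # Both disks are RBD-backed: protocol and image name live on <source>,
    # the guest device node on <target>.
    for disk in root.findall("./devices/disk"):
        src, tgt = disk.find("source"), disk.find("target")
        print(disk.get("device"), src.get("protocol"), src.get("name"),
              "->", tgt.get("dev"))

    # The single vhost-backed tap interface, with the 1442-byte MTU that
    # the tunneled tenant network imposes.
    for nic in root.findall("./devices/interface"):
        print(nic.find("mac").get("address"),
              nic.find("target").get("dev"),
              "mtu", nic.find("mtu").get("size"))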
Feb 25 07:45:14 np0005629333 nova_compute[244014]: 2026-02-25 12:45:14.149 244018 DEBUG nova.compute.manager [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Preparing to wait for external event network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:45:14 np0005629333 nova_compute[244014]: 2026-02-25 12:45:14.149 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:14 np0005629333 nova_compute[244014]: 2026-02-25 12:45:14.149 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:14 np0005629333 nova_compute[244014]: 2026-02-25 12:45:14.150 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
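The prepare/acquire/release sequence above registers a waiter for network-vif-plugged before the VIF is actually plugged, so Neutron's notification cannot arrive ahead of the listener. A minimal sketch of that pattern, using plain threading rather than nova's eventlet machinery:

    import threading

    events = {}        # (instance_uuid, event_name) -> threading.Event
    events_lock = threading.Lock()

    def prepare_for_event(instance_uuid, name):
        # Register interest first, as prepare_for_instance_event does.
        with events_lock:
            return events.setdefault((instance_uuid, name), threading.Event())

    def external_event_arrived(instance_uuid, name):
        # Invoked when the network-vif-plugged notification lands.
        with events_lock:
            ev = events.get((instance_uuid, name))
        if ev is not None:
            ev.set()

    waiter = prepare_for_event("874359d8-3251-4416-82dc-f6776853e384",
                               "network-vif-plugged")
    # ... plug the VIF, define and launch the domain ...
    # waiter.wait(timeout=300)  # then block until Neutron confirms the plug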
Feb 25 07:45:14 np0005629333 nova_compute[244014]: 2026-02-25 12:45:14.150 244018 DEBUG nova.virt.libvirt.vif [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:44:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1245136630',display_name='tempest-TestShelveInstance-server-1245136630',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1245136630',id=112,image_ref='ce5f0984-576d-4f20-8e86-b4b26cb5bb6e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-571084469',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:44:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='6c0adb05683141e7a0b866f450e410e0',ramdisk_id='',reservation_id='r-tfbrsni1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1925524092',owner_user_name='tempest-TestShelveInstance-1925524092-project-member',shelved_at='2026-02-25T12:44:57.937945',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='ce5f0984-576d-4f20-8e86-b4b26cb5bb6e'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:45:07Z,user_data=None,user_id='44c0b78107ea4f7381e82a02c5954e7c',uuid=874359d8-3251-4416-82dc-f6776853e384,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:45:14 np0005629333 nova_compute[244014]: 2026-02-25 12:45:14.151 244018 DEBUG nova.network.os_vif_util [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Converting VIF {"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:45:14 np0005629333 nova_compute[244014]: 2026-02-25 12:45:14.152 244018 DEBUG nova.network.os_vif_util [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:b7:f7,bridge_name='br-int',has_traffic_filtering=True,id=97fb9f99-cb59-4581-8866-375ea3e167d7,network=Network(e4d059ba-eacc-463b-a8a4-393a5a36dba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97fb9f99-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:45:14 np0005629333 nova_compute[244014]: 2026-02-25 12:45:14.152 244018 DEBUG os_vif [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:b7:f7,bridge_name='br-int',has_traffic_filtering=True,id=97fb9f99-cb59-4581-8866-375ea3e167d7,network=Network(e4d059ba-eacc-463b-a8a4-393a5a36dba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97fb9f99-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:45:14 np0005629333 nova_compute[244014]: 2026-02-25 12:45:14.153 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:14 np0005629333 nova_compute[244014]: 2026-02-25 12:45:14.154 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:14 np0005629333 nova_compute[244014]: 2026-02-25 12:45:14.156 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:45:14 np0005629333 nova_compute[244014]: 2026-02-25 12:45:14.161 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:14 np0005629333 nova_compute[244014]: 2026-02-25 12:45:14.161 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97fb9f99-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:14 np0005629333 nova_compute[244014]: 2026-02-25 12:45:14.164 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap97fb9f99-cb, col_values=(('external_ids', {'iface-id': '97fb9f99-cb59-4581-8866-375ea3e167d7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:b7:f7', 'vm-uuid': '874359d8-3251-4416-82dc-f6776853e384'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
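The pair of ovsdbapp commands above is the whole binding handshake from nova's side: add the tap port to br-int, then stamp the Interface row with external-ids, of which iface-id is what ovn-controller matches against the logical port. A sketch of the equivalent operation through the ovs-vsctl CLI (this mirrors, not reuses, nova's ovsdbapp transaction; values copied from the log):

    import subprocess

    port = "tap97fb9f99-cb"
    subprocess.run(
        ["ovs-vsctl",
         "--may-exist", "add-port", "br-int", port, "--",
         "set", "Interface", port,
         "external_ids:iface-id=97fb9f99-cb59-4581-8866-375ea3e167d7",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:61:b7:f7",
         "external_ids:vm-uuid=874359d8-3251-4416-82dc-f6776853e384"],
        check=True)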
Feb 25 07:45:14 np0005629333 NetworkManager[49836]: <info>  [1772023514.1666] manager: (tap97fb9f99-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/477)
Feb 25 07:45:14 np0005629333 nova_compute[244014]: 2026-02-25 12:45:14.165 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:14 np0005629333 nova_compute[244014]: 2026-02-25 12:45:14.170 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:45:14 np0005629333 nova_compute[244014]: 2026-02-25 12:45:14.174 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:14 np0005629333 nova_compute[244014]: 2026-02-25 12:45:14.175 244018 INFO os_vif [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:b7:f7,bridge_name='br-int',has_traffic_filtering=True,id=97fb9f99-cb59-4581-8866-375ea3e167d7,network=Network(e4d059ba-eacc-463b-a8a4-393a5a36dba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97fb9f99-cb')#033[00m
Feb 25 07:45:14 np0005629333 nova_compute[244014]: 2026-02-25 12:45:14.230 244018 DEBUG nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:45:14 np0005629333 nova_compute[244014]: 2026-02-25 12:45:14.231 244018 DEBUG nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:45:14 np0005629333 nova_compute[244014]: 2026-02-25 12:45:14.231 244018 DEBUG nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] No VIF found with MAC fa:16:3e:61:b7:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:45:14 np0005629333 nova_compute[244014]: 2026-02-25 12:45:14.232 244018 INFO nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Using config drive#033[00m
Feb 25 07:45:14 np0005629333 nova_compute[244014]: 2026-02-25 12:45:14.262 244018 DEBUG nova.storage.rbd_utils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] rbd image 874359d8-3251-4416-82dc-f6776853e384_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:45:14 np0005629333 nova_compute[244014]: 2026-02-25 12:45:14.287 244018 DEBUG nova.objects.instance [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 874359d8-3251-4416-82dc-f6776853e384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:45:14 np0005629333 nova_compute[244014]: 2026-02-25 12:45:14.346 244018 DEBUG nova.objects.instance [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lazy-loading 'keypairs' on Instance uuid 874359d8-3251-4416-82dc-f6776853e384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:45:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1938: 305 pgs: 305 active+clean; 436 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 6.3 MiB/s wr, 168 op/s
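The mgr's pgmap line is the quick capacity check during the spawn: all 305 PGs active+clean and roughly 59 GiB of 60 GiB free, so the config-drive import below has headroom. The same figures can be pulled on demand, assuming the client keyring from this log:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout

    stats = json.loads(out)["stats"]
    print(stats["total_avail_bytes"], "of", stats["total_bytes"], "bytes free")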
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.139 244018 INFO nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Creating config drive at /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/disk.config#033[00m
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.144 244018 DEBUG oslo_concurrency.processutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxybfkycw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.177 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.288 244018 DEBUG oslo_concurrency.processutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpxybfkycw" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.325 244018 DEBUG nova.storage.rbd_utils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] rbd image 874359d8-3251-4416-82dc-f6776853e384_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.330 244018 DEBUG oslo_concurrency.processutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/disk.config 874359d8-3251-4416-82dc-f6776853e384_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.496 244018 DEBUG oslo_concurrency.processutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/disk.config 874359d8-3251-4416-82dc-f6776853e384_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.498 244018 INFO nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Deleting local config drive /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384/disk.config because it was imported into RBD.#033[00m
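The three records above describe the entire config-drive path on an RBD-backed deployment: build the ISO locally with mkisofs, import it into the vms pool, then drop the local copy. A condensed sketch of the same sequence, assuming the paths from the log (/tmp/tmpxybfkycw was the transient directory where nova staged the metadata files):

    import os
    import subprocess

    inst = "874359d8-3251-4416-82dc-f6776853e384"
    iso = f"/var/lib/nova/instances/{inst}/disk.config"

    # 1. Build the ISO9660 config drive with the config-2 volume label.
    subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                    "-allow-multidot", "-l", "-J", "-r", "-V", "config-2",
                    "/tmp/tmpxybfkycw"], check=True)

    # 2. Import it as an RBD image alongside the root disk.
    subprocess.run(["rbd", "import", "--pool", "vms", iso,
                    f"{inst}_disk.config", "--image-format=2",
                    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
                   check=True)

    # 3. Once it lives in RBD the local file is redundant.
    os.remove(iso)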
Feb 25 07:45:15 np0005629333 kernel: tap97fb9f99-cb: entered promiscuous mode
Feb 25 07:45:15 np0005629333 NetworkManager[49836]: <info>  [1772023515.5498] manager: (tap97fb9f99-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/478)
Feb 25 07:45:15 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:15Z|01137|binding|INFO|Claiming lport 97fb9f99-cb59-4581-8866-375ea3e167d7 for this chassis.
Feb 25 07:45:15 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:15Z|01138|binding|INFO|97fb9f99-cb59-4581-8866-375ea3e167d7: Claiming fa:16:3e:61:b7:f7 10.100.0.11
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.552 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.561 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:b7:f7 10.100.0.11'], port_security=['fa:16:3e:61:b7:f7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '874359d8-3251-4416-82dc-f6776853e384', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6c0adb05683141e7a0b866f450e410e0', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'c8e48a3c-4f73-4222-8cd8-b956480085c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.235'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88c1b411-2973-4db5-967d-0a2cebc1066f, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=97fb9f99-cb59-4581-8866-375ea3e167d7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.563 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 97fb9f99-cb59-4581-8866-375ea3e167d7 in datapath e4d059ba-eacc-463b-a8a4-393a5a36dba3 bound to our chassis#033[00m
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.566 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e4d059ba-eacc-463b-a8a4-393a5a36dba3#033[00m
Feb 25 07:45:15 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:15Z|01139|binding|INFO|Setting lport 97fb9f99-cb59-4581-8866-375ea3e167d7 ovn-installed in OVS
Feb 25 07:45:15 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:15Z|01140|binding|INFO|Setting lport 97fb9f99-cb59-4581-8866-375ea3e167d7 up in Southbound
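ovn-controller has now claimed the logical port for this chassis, flagged it ovn-installed in the local OVS database, and set it up in the Southbound DB; that Southbound transition is what ultimately triggers Neutron's network-vif-plugged notification to nova. One way to confirm the claim from the compute node, assuming ovn-sbctl can reach the Southbound database:

    import subprocess

    # The chassis column should now reference this chassis and up=true.
    subprocess.run(["ovn-sbctl", "find", "Port_Binding",
                    "logical_port=97fb9f99-cb59-4581-8866-375ea3e167d7"],
                   check=True)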
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.572 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.577 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe257f7-56ae-4ac2-b3a5-fe1dbe305274]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.578 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape4d059ba-e1 in ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.580 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape4d059ba-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.580 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8699fc84-7ff8-4047-b0a5-f8daa70c2a07]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.582 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4219a5c6-2df7-419b-a4ca-fc0962776bc9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.582 244018 INFO nova.compute.manager [None req-87463989-7183-4875-ab5c-e24a53d41990 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Get console output#033[00m
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.583 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:15 np0005629333 systemd-machined[210048]: New machine qemu-144-instance-00000070.
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.596 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[4fc7b92f-0bb9-401c-883f-24c0eac83778]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:15 np0005629333 systemd[1]: Started Virtual Machine qemu-144-instance-00000070.
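systemd-machined registering qemu-144-instance-00000070 means qemu is running and the libvirt domain name is instance-00000070, matching the <name> element in the XML above. A quick sanity check against libvirt, assuming virsh is available on the host:

    import subprocess

    # Prints state, vCPU count and memory for the freshly started guest.
    subprocess.run(["virsh", "dominfo", "instance-00000070"], check=True)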
Feb 25 07:45:15 np0005629333 systemd-udevd[345972]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.611 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4b4ca83c-f845-4bd6-a411-87f7a54b7f5d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:15 np0005629333 NetworkManager[49836]: <info>  [1772023515.6243] device (tap97fb9f99-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:45:15 np0005629333 NetworkManager[49836]: <info>  [1772023515.6264] device (tap97fb9f99-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.639 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[454abaa7-4be2-4efa-9cbb-634f518e5c8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:15 np0005629333 NetworkManager[49836]: <info>  [1772023515.6484] manager: (tape4d059ba-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/479)
Feb 25 07:45:15 np0005629333 systemd-udevd[345976]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.649 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cb443366-344c-481c-b377-9746b46130f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.672 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0a90dfc3-232d-4aa6-8c72-3c2d51ec3d59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.675 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1252ef-ce7d-4972-9070-efbc3df9748e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:15 np0005629333 NetworkManager[49836]: <info>  [1772023515.6928] device (tape4d059ba-e0): carrier: link connected
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.696 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f17ba4e0-0d1d-464e-ad30-db2459229259]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.709 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[202e2010-7172-42e5-a1a7-7f477dbc750e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4d059ba-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:42:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 345], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548510, 'reachable_time': 31711, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346005, 'error': None, 'target': 'ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.724 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bfbb3d20-3aec-434b-92b9-202f8ad3616e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:42a7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548510, 'tstamp': 548510}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346006, 'error': None, 'target': 'ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.739 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2f95f3d1-4253-4e22-9159-afde24eb8b3f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape4d059ba-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:42:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 345], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548510, 'reachable_time': 31711, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 346007, 'error': None, 'target': 'ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.770 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0d5e6326-02da-4f7e-b709-5a66f3dc3ae1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.831 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bbae288f-895c-4c36-ada9-0ecd862911bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.833 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4d059ba-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.834 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.834 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4d059ba-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:15 np0005629333 kernel: tape4d059ba-e0: entered promiscuous mode
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.836 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:15 np0005629333 NetworkManager[49836]: <info>  [1772023515.8404] manager: (tape4d059ba-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/480)
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.841 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.844 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape4d059ba-e0, col_values=(('external_ids', {'iface-id': '751d6f2c-9451-4526-9618-730226a78a70'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.846 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:15 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:15Z|01141|binding|INFO|Releasing lport 751d6f2c-9451-4526-9618-730226a78a70 from this chassis (sb_readonly=0)
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.855 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.857 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.859 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e4d059ba-eacc-463b-a8a4-393a5a36dba3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e4d059ba-eacc-463b-a8a4-393a5a36dba3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
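The ENOENT on the pid file is benign: the agent probes for an existing haproxy instance before spawning one, and a missing pid file simply means no proxy has served this network yet. The probe reduces to something like this sketch (not neutron's actual helper):

    def read_haproxy_pid(pid_path):
        # A missing pid file means "not running yet"; anything else
        # is a genuine error and should propagate.
        try:
            with open(pid_path) as f:
                return int(f.read().strip())
        except FileNotFoundError:
            return None

    pid = read_haproxy_pid(
        "/var/lib/neutron/external/pids/"
        "e4d059ba-eacc-463b-a8a4-393a5a36dba3.pid.haproxy")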
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.860 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ef29f248-a410-4a0d-92cb-0476ba5a4afb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.861 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-e4d059ba-eacc-463b-a8a4-393a5a36dba3
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/e4d059ba-eacc-463b-a8a4-393a5a36dba3.pid.haproxy
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID e4d059ba-eacc-463b-a8a4-393a5a36dba3
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:45:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:15.862 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'env', 'PROCESS_TAG=haproxy-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e4d059ba-eacc-463b-a8a4-393a5a36dba3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
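The ENOENT on the .pid.haproxy file a few lines up is just the agent checking whether a proxy already serves this network; since none does, it renders the config above and launches haproxy inside the ovnmeta- namespace via rootwrap. A sketch of what that invocation amounts to, assuming root and the paths from this log:

    import subprocess

    NETNS = "ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3"
    CONF = "/var/lib/neutron/ovn-metadata-proxy/e4d059ba-eacc-463b-a8a4-393a5a36dba3.conf"

    # haproxy backgrounds itself ('daemon' in the generated cfg) and writes the
    # pidfile the agent looked for above.
    subprocess.run(
        ["ip", "netns", "exec", NETNS,
         "env", "PROCESS_TAG=haproxy-e4d059ba-eacc-463b-a8a4-393a5a36dba3",
         "haproxy", "-f", CONF],
        check=True,
    )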
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.879 244018 DEBUG nova.compute.manager [req-05a0cf0c-1e8d-4fbf-994b-0b94d6d08b66 req-57963243-3ae5-49c3-8db9-276af0125ddb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.880 244018 DEBUG oslo_concurrency.lockutils [req-05a0cf0c-1e8d-4fbf-994b-0b94d6d08b66 req-57963243-3ae5-49c3-8db9-276af0125ddb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.880 244018 DEBUG oslo_concurrency.lockutils [req-05a0cf0c-1e8d-4fbf-994b-0b94d6d08b66 req-57963243-3ae5-49c3-8db9-276af0125ddb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.881 244018 DEBUG oslo_concurrency.lockutils [req-05a0cf0c-1e8d-4fbf-994b-0b94d6d08b66 req-57963243-3ae5-49c3-8db9-276af0125ddb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.882 244018 DEBUG nova.compute.manager [req-05a0cf0c-1e8d-4fbf-994b-0b94d6d08b66 req-57963243-3ae5-49c3-8db9-276af0125ddb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Processing event network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.908 244018 DEBUG nova.objects.instance [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'flavor' on Instance uuid aee87402-4b34-4083-888b-bb653e2beaa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.933 244018 DEBUG oslo_concurrency.lockutils [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.934 244018 DEBUG oslo_concurrency.lockutils [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquired lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.934 244018 DEBUG nova.network.neutron [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.935 244018 DEBUG nova.objects.instance [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'info_cache' on Instance uuid aee87402-4b34-4083-888b-bb653e2beaa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.961 244018 DEBUG nova.compute.manager [req-abbac5ce-7a0c-4b30-9db1-2f6330a161a0 req-1388ecf1-513e-46ff-9a9f-0b973e1b1a5c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received event network-changed-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.962 244018 DEBUG nova.compute.manager [req-abbac5ce-7a0c-4b30-9db1-2f6330a161a0 req-1388ecf1-513e-46ff-9a9f-0b973e1b1a5c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Refreshing instance network info cache due to event network-changed-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.962 244018 DEBUG oslo_concurrency.lockutils [req-abbac5ce-7a0c-4b30-9db1-2f6330a161a0 req-1388ecf1-513e-46ff-9a9f-0b973e1b1a5c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.962 244018 DEBUG oslo_concurrency.lockutils [req-abbac5ce-7a0c-4b30-9db1-2f6330a161a0 req-1388ecf1-513e-46ff-9a9f-0b973e1b1a5c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:45:15 np0005629333 nova_compute[244014]: 2026-02-25 12:45:15.962 244018 DEBUG nova.network.neutron [req-abbac5ce-7a0c-4b30-9db1-2f6330a161a0 req-1388ecf1-513e-46ff-9a9f-0b973e1b1a5c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Refreshing network info cache for port e50c9f03-a8a5-48d1-a34b-4a8fd638d5df _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:45:16 np0005629333 podman[346041]: 2026-02-25 12:45:16.236449648 +0000 UTC m=+0.066154467 container create f6917395b968e4f6241e2780401ad4854e911d6bc3c72f025de08a94f631bb80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 07:45:16 np0005629333 systemd[1]: Started libpod-conmon-f6917395b968e4f6241e2780401ad4854e911d6bc3c72f025de08a94f631bb80.scope.
Feb 25 07:45:16 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:45:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dff6ea0a8a911ca82af7fb30e8506525301753b046b20808291f024818c24148/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:45:16 np0005629333 podman[346041]: 2026-02-25 12:45:16.295379212 +0000 UTC m=+0.125084021 container init f6917395b968e4f6241e2780401ad4854e911d6bc3c72f025de08a94f631bb80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 07:45:16 np0005629333 podman[346041]: 2026-02-25 12:45:16.299533719 +0000 UTC m=+0.129238498 container start f6917395b968e4f6241e2780401ad4854e911d6bc3c72f025de08a94f631bb80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 25 07:45:16 np0005629333 podman[346041]: 2026-02-25 12:45:16.204886178 +0000 UTC m=+0.034591017 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
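The four podman records above (image pull, create, init, start; the pull carries the earliest monotonic timestamp even though journald emitted it last) all concern one container, neutron-haproxy-ovnmeta-<network>. A hedged way to confirm it is running, using podman's JSON listing:

    import json
    import subprocess

    out = subprocess.run(
        ["podman", "ps", "--format", "json",
         "--filter", "name=neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3"],
        capture_output=True, text=True, check=True)
    for ctr in json.loads(out.stdout):
        # Id/Image/State are standard fields of podman's JSON output.
        print(ctr["Id"][:12], ctr["Image"], ctr["State"])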
Feb 25 07:45:16 np0005629333 neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3[346098]: [NOTICE]   (346103) : New worker (346106) forked
Feb 25 07:45:16 np0005629333 neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3[346098]: [NOTICE]   (346103) : Loading success.
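With haproxy reporting "Loading success", the listener from the generated config is bound on 169.254.169.254:80 inside the namespace. A quick probe sketch, assuming root and curl; any HTTP status code at all proves the proxy is answering:

    import subprocess

    NETNS = "ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3"
    out = subprocess.run(
        ["ip", "netns", "exec", NETNS,
         "curl", "-s", "-o", "/dev/null", "-w", "%{http_code}",
         "http://169.254.169.254/"],
        capture_output=True, text=True, check=True)
    print("metadata proxy answered with HTTP", out.stdout)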
Feb 25 07:45:16 np0005629333 nova_compute[244014]: 2026-02-25 12:45:16.346 244018 DEBUG nova.compute.manager [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:45:16 np0005629333 nova_compute[244014]: 2026-02-25 12:45:16.348 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023516.3456142, 874359d8-3251-4416-82dc-f6776853e384 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:45:16 np0005629333 nova_compute[244014]: 2026-02-25 12:45:16.348 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] VM Started (Lifecycle Event)#033[00m
Feb 25 07:45:16 np0005629333 nova_compute[244014]: 2026-02-25 12:45:16.351 244018 DEBUG nova.virt.libvirt.driver [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:45:16 np0005629333 nova_compute[244014]: 2026-02-25 12:45:16.355 244018 INFO nova.virt.libvirt.driver [-] [instance: 874359d8-3251-4416-82dc-f6776853e384] Instance spawned successfully.#033[00m
Feb 25 07:45:16 np0005629333 nova_compute[244014]: 2026-02-25 12:45:16.368 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:45:16 np0005629333 nova_compute[244014]: 2026-02-25 12:45:16.372 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:45:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 436 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 6.3 MiB/s wr, 168 op/s
Feb 25 07:45:16 np0005629333 nova_compute[244014]: 2026-02-25 12:45:16.396 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:45:16 np0005629333 nova_compute[244014]: 2026-02-25 12:45:16.397 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023516.3458734, 874359d8-3251-4416-82dc-f6776853e384 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:45:16 np0005629333 nova_compute[244014]: 2026-02-25 12:45:16.397 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:45:16 np0005629333 nova_compute[244014]: 2026-02-25 12:45:16.417 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:45:16 np0005629333 nova_compute[244014]: 2026-02-25 12:45:16.421 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023516.3506489, 874359d8-3251-4416-82dc-f6776853e384 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:45:16 np0005629333 nova_compute[244014]: 2026-02-25 12:45:16.421 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:45:16 np0005629333 nova_compute[244014]: 2026-02-25 12:45:16.447 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:45:16 np0005629333 nova_compute[244014]: 2026-02-25 12:45:16.451 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:45:16 np0005629333 nova_compute[244014]: 2026-02-25 12:45:16.469 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
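The Started/Paused/Resumed burst above is libvirt lifecycle events racing the unshelve; both sync attempts are skipped because a task is still in flight. An illustrative reduction of that rule (function and parameter names are mine, not Nova's internals):

    def should_sync_power_state(task_state, db_power_state, vm_power_state):
        # "During sync_power_state the instance has a pending task (spawning).
        # Skip." -- a pending task always wins over the hypervisor's view.
        if task_state is not None:
            return False
        return db_power_state != vm_power_state

    # Values from the log: DB power_state 4 (shutdown), VM power_state 1 (running)
    assert should_sync_power_state("spawning", 4, 1) is False
    assert should_sync_power_state(None, 4, 1) is True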
Feb 25 07:45:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Feb 25 07:45:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Feb 25 07:45:17 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.218 244018 DEBUG nova.network.neutron [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Updating instance_info_cache with network_info: [{"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.239 244018 DEBUG oslo_concurrency.lockutils [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Releasing lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
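The network_info blob cached above is a list of VIF dicts. A sketch of walking it for fixed and floating addresses, run against a trimmed copy of the entry from this log (the real blob carries many more keys):

    import json

    network_info = json.loads("""
    [{"id": "8d032336-9efd-4e76-9498-4dafee40640b",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.11", "type": "fixed",
                 "floating_ips": [{"address": "192.168.122.190",
                                   "type": "floating"}]}]}]}}]
    """)

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                fips = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", fips)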
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.275 244018 INFO nova.virt.libvirt.driver [-] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Instance destroyed successfully.#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.276 244018 DEBUG nova.objects.instance [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'numa_topology' on Instance uuid aee87402-4b34-4083-888b-bb653e2beaa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.294 244018 DEBUG nova.objects.instance [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'resources' on Instance uuid aee87402-4b34-4083-888b-bb653e2beaa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.307 244018 DEBUG nova.virt.libvirt.vif [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:44:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-204528973',display_name='tempest-TestNetworkAdvancedServerOps-server-204528973',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-204528973',id=113,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5R/voudn53mFn1fesyFCvR/uhV0Qe/z38Tv5jlE5Qy+mC9LN8VH4xJizPPQof/n3K4Uot5xWkFCPJAwQJsVcwssLNZ96buGra8QRtUpJKVli2/glgbebk2AU5Vop/oTA==',key_name='tempest-TestNetworkAdvancedServerOps-211055070',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:44:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-z6csfx6h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:45:11Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=aee87402-4b34-4083-888b-bb653e2beaa9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.308 244018 DEBUG nova.network.os_vif_util [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.309 244018 DEBUG nova.network.os_vif_util [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5f:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d032336-9efd-4e76-9498-4dafee40640b,network=Network(cd92597b-67bf-4492-9581-a9a7ec80f716),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d032336-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.309 244018 DEBUG os_vif [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5f:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d032336-9efd-4e76-9498-4dafee40640b,network=Network(cd92597b-67bf-4492-9581-a9a7ec80f716),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d032336-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.312 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.313 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d032336-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.317 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.318 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.321 244018 INFO os_vif [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5f:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d032336-9efd-4e76-9498-4dafee40640b,network=Network(cd92597b-67bf-4492-9581-a9a7ec80f716),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d032336-9e')#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.328 244018 DEBUG nova.virt.libvirt.driver [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Start _get_guest_xml network_info=[{"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.333 244018 WARNING nova.virt.libvirt.driver [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.341 244018 DEBUG nova.virt.libvirt.host [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.342 244018 DEBUG nova.virt.libvirt.host [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.345 244018 DEBUG nova.virt.libvirt.host [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.346 244018 DEBUG nova.virt.libvirt.host [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.346 244018 DEBUG nova.virt.libvirt.driver [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.346 244018 DEBUG nova.virt.hardware [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.347 244018 DEBUG nova.virt.hardware [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.347 244018 DEBUG nova.virt.hardware [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.347 244018 DEBUG nova.virt.hardware [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.347 244018 DEBUG nova.virt.hardware [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.347 244018 DEBUG nova.virt.hardware [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.347 244018 DEBUG nova.virt.hardware [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.348 244018 DEBUG nova.virt.hardware [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.348 244018 DEBUG nova.virt.hardware [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.348 244018 DEBUG nova.virt.hardware [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.348 244018 DEBUG nova.virt.hardware [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
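The topology search above boils down to factoring the vCPU count under per-axis limits; with one vCPU and no flavor or image preference, 1:1:1 is the only candidate. An illustrative reimplementation, not Nova's actual code:

    def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
        # Yield (sockets, cores, threads) triples whose product is exactly vcpus.
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    yield (s, c, t)

    # Limits from the log: sockets=65536, cores=65536, threads=65536, 1 vCPU
    print(list(possible_topologies(1, 65536, 65536, 65536)))  # [(1, 1, 1)]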
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.349 244018 DEBUG nova.objects.instance [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'vcpu_model' on Instance uuid aee87402-4b34-4083-888b-bb653e2beaa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.367 244018 DEBUG oslo_concurrency.processutils [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.397 244018 DEBUG nova.compute.manager [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.400 244018 DEBUG nova.network.neutron [req-abbac5ce-7a0c-4b30-9db1-2f6330a161a0 req-1388ecf1-513e-46ff-9a9f-0b973e1b1a5c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Updated VIF entry in instance network info cache for port e50c9f03-a8a5-48d1-a34b-4a8fd638d5df. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.401 244018 DEBUG nova.network.neutron [req-abbac5ce-7a0c-4b30-9db1-2f6330a161a0 req-1388ecf1-513e-46ff-9a9f-0b973e1b1a5c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Updating instance_info_cache with network_info: [{"id": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "address": "fa:16:3e:47:4a:25", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50c9f03-a8", "ovs_interfaceid": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "address": "fa:16:3e:60:df:22", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68ffeedb-a9", "ovs_interfaceid": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.449 244018 DEBUG oslo_concurrency.lockutils [req-abbac5ce-7a0c-4b30-9db1-2f6330a161a0 req-1388ecf1-513e-46ff-9a9f-0b973e1b1a5c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.469 244018 DEBUG oslo_concurrency.lockutils [None req-d52ff18a-aaef-4345-8a3d-33df71ca7de2 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 10.361s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:45:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/485664098' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.905 244018 DEBUG oslo_concurrency.processutils [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
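Nova shells out to ceph between the Start and End of _get_guest_xml (the Running cmd / returned pairs above), presumably to learn the monitor addresses for the instance's RBD disk definitions. A sketch of the same probe, parsed, assuming the client.openstack keyring and conf from this deployment:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True)
    monmap = json.loads(out.stdout)
    for mon in monmap.get("mons", []):
        # 'ceph mon dump' JSON lists each monitor with name and address fields.
        print(mon["name"], mon.get("public_addr") or mon.get("addr"))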
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.948 244018 DEBUG oslo_concurrency.processutils [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.989 244018 DEBUG nova.compute.manager [req-0d0ef4a9-b21c-4aa3-978a-423cbec42e8f req-2681189a-aa11-4524-8b2e-ec14abc2322b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.989 244018 DEBUG oslo_concurrency.lockutils [req-0d0ef4a9-b21c-4aa3-978a-423cbec42e8f req-2681189a-aa11-4524-8b2e-ec14abc2322b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.990 244018 DEBUG oslo_concurrency.lockutils [req-0d0ef4a9-b21c-4aa3-978a-423cbec42e8f req-2681189a-aa11-4524-8b2e-ec14abc2322b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.990 244018 DEBUG oslo_concurrency.lockutils [req-0d0ef4a9-b21c-4aa3-978a-423cbec42e8f req-2681189a-aa11-4524-8b2e-ec14abc2322b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.990 244018 DEBUG nova.compute.manager [req-0d0ef4a9-b21c-4aa3-978a-423cbec42e8f req-2681189a-aa11-4524-8b2e-ec14abc2322b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] No waiting events found dispatching network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:45:17 np0005629333 nova_compute[244014]: 2026-02-25 12:45:17.991 244018 WARNING nova.compute.manager [req-0d0ef4a9-b21c-4aa3-978a-423cbec42e8f req-2681189a-aa11-4524-8b2e-ec14abc2322b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received unexpected event network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:45:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:45:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 425 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 4.7 MiB/s wr, 245 op/s
Feb 25 07:45:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:45:18 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/8230030' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:45:18 np0005629333 nova_compute[244014]: 2026-02-25 12:45:18.569 244018 DEBUG oslo_concurrency.processutils [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:45:18 np0005629333 nova_compute[244014]: 2026-02-25 12:45:18.570 244018 DEBUG nova.virt.libvirt.vif [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:44:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-204528973',display_name='tempest-TestNetworkAdvancedServerOps-server-204528973',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-204528973',id=113,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5R/voudn53mFn1fesyFCvR/uhV0Qe/z38Tv5jlE5Qy+mC9LN8VH4xJizPPQof/n3K4Uot5xWkFCPJAwQJsVcwssLNZ96buGra8QRtUpJKVli2/glgbebk2AU5Vop/oTA==',key_name='tempest-TestNetworkAdvancedServerOps-211055070',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:44:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-z6csfx6h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:45:11Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=aee87402-4b34-4083-888b-bb653e2beaa9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:45:18 np0005629333 nova_compute[244014]: 2026-02-25 12:45:18.570 244018 DEBUG nova.network.os_vif_util [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:45:18 np0005629333 nova_compute[244014]: 2026-02-25 12:45:18.571 244018 DEBUG nova.network.os_vif_util [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5f:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d032336-9efd-4e76-9498-4dafee40640b,network=Network(cd92597b-67bf-4492-9581-a9a7ec80f716),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d032336-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:45:18 np0005629333 nova_compute[244014]: 2026-02-25 12:45:18.572 244018 DEBUG nova.objects.instance [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'pci_devices' on Instance uuid aee87402-4b34-4083-888b-bb653e2beaa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:45:18 np0005629333 nova_compute[244014]: 2026-02-25 12:45:18.602 244018 DEBUG nova.virt.libvirt.driver [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:45:18 np0005629333 nova_compute[244014]:  <uuid>aee87402-4b34-4083-888b-bb653e2beaa9</uuid>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:  <name>instance-00000071</name>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-204528973</nova:name>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:45:17</nova:creationTime>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:45:18 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:        <nova:user uuid="fb37a481eb114226822ed8b2ef4f9a89">tempest-TestNetworkAdvancedServerOps-1424801157-project-member</nova:user>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:        <nova:project uuid="6821a6e7edd54dbe97920b79aae8f54c">tempest-TestNetworkAdvancedServerOps-1424801157</nova:project>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:        <nova:port uuid="8d032336-9efd-4e76-9498-4dafee40640b">
Feb 25 07:45:18 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <entry name="serial">aee87402-4b34-4083-888b-bb653e2beaa9</entry>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <entry name="uuid">aee87402-4b34-4083-888b-bb653e2beaa9</entry>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/aee87402-4b34-4083-888b-bb653e2beaa9_disk">
Feb 25 07:45:18 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:45:18 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/aee87402-4b34-4083-888b-bb653e2beaa9_disk.config">
Feb 25 07:45:18 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:45:18 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:b3:5f:c9"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <target dev="tap8d032336-9e"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/aee87402-4b34-4083-888b-bb653e2beaa9/console.log" append="off"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <input type="keyboard" bus="usb"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:45:18 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:45:18 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:45:18 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:45:18 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
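The XML block above is the complete guest definition Nova generated for instance-00000071. Outside of Nova, the same document can be fed to libvirt directly through the libvirt-python bindings; a minimal sketch, assuming the XML has been saved to domain.xml and that you have access to qemu:///system (Nova itself goes through its own Host/Guest wrapper classes rather than these raw calls):

    import libvirt

    with open('domain.xml') as f:   # e.g. the XML dumped above
        xml_str = f.read()

    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml_str)   # persist the definition
    dom.create()                    # start it; systemd-machined then registers
                                    # the machine, as in the log further down
    print(dom.name(), dom.state())
    conn.close()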
Feb 25 07:45:18 np0005629333 nova_compute[244014]: 2026-02-25 12:45:18.602 244018 DEBUG nova.virt.libvirt.driver [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:45:18 np0005629333 nova_compute[244014]: 2026-02-25 12:45:18.603 244018 DEBUG nova.virt.libvirt.driver [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:45:18 np0005629333 nova_compute[244014]: 2026-02-25 12:45:18.603 244018 DEBUG nova.virt.libvirt.vif [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:44:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-204528973',display_name='tempest-TestNetworkAdvancedServerOps-server-204528973',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-204528973',id=113,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5R/voudn53mFn1fesyFCvR/uhV0Qe/z38Tv5jlE5Qy+mC9LN8VH4xJizPPQof/n3K4Uot5xWkFCPJAwQJsVcwssLNZ96buGra8QRtUpJKVli2/glgbebk2AU5Vop/oTA==',key_name='tempest-TestNetworkAdvancedServerOps-211055070',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:44:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-z6csfx6h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:45:11Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=aee87402-4b34-4083-888b-bb653e2beaa9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", 
"bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:45:18 np0005629333 nova_compute[244014]: 2026-02-25 12:45:18.603 244018 DEBUG nova.network.os_vif_util [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:45:18 np0005629333 nova_compute[244014]: 2026-02-25 12:45:18.604 244018 DEBUG nova.network.os_vif_util [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5f:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d032336-9efd-4e76-9498-4dafee40640b,network=Network(cd92597b-67bf-4492-9581-a9a7ec80f716),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d032336-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:45:18 np0005629333 nova_compute[244014]: 2026-02-25 12:45:18.604 244018 DEBUG os_vif [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5f:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d032336-9efd-4e76-9498-4dafee40640b,network=Network(cd92597b-67bf-4492-9581-a9a7ec80f716),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d032336-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:45:18 np0005629333 nova_compute[244014]: 2026-02-25 12:45:18.605 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:18 np0005629333 nova_compute[244014]: 2026-02-25 12:45:18.605 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:18 np0005629333 nova_compute[244014]: 2026-02-25 12:45:18.605 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:45:18 np0005629333 nova_compute[244014]: 2026-02-25 12:45:18.607 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:18 np0005629333 nova_compute[244014]: 2026-02-25 12:45:18.607 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d032336-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:18 np0005629333 nova_compute[244014]: 2026-02-25 12:45:18.608 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d032336-9e, col_values=(('external_ids', {'iface-id': '8d032336-9efd-4e76-9498-4dafee40640b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b3:5f:c9', 'vm-uuid': 'aee87402-4b34-4083-888b-bb653e2beaa9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
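The two ovsdbapp transactions above (AddPortCommand, then DbSetCommand on the Interface row) are how the os-vif ovs plugin attaches the tap device to br-int and tags it with the Neutron port ID. The same transaction can be reproduced standalone with ovsdbapp; in this sketch the socket path is an assumption, while the command names and arguments mirror the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap8d032336-9e', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap8d032336-9e',
            ('external_ids',
             {'iface-id': '8d032336-9efd-4e76-9498-4dafee40640b',
              'attached-mac': 'fa:16:3e:b3:5f:c9'})))

The iface-id written into external_ids is what ovn-controller matches against Port_Binding.logical_port, which is what triggers the "Claiming lport" lines that follow.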
Feb 25 07:45:18 np0005629333 NetworkManager[49836]: <info>  [1772023518.6098] manager: (tap8d032336-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/481)
Feb 25 07:45:18 np0005629333 nova_compute[244014]: 2026-02-25 12:45:18.611 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:45:18 np0005629333 nova_compute[244014]: 2026-02-25 12:45:18.613 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:18 np0005629333 nova_compute[244014]: 2026-02-25 12:45:18.614 244018 INFO os_vif [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5f:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d032336-9efd-4e76-9498-4dafee40640b,network=Network(cd92597b-67bf-4492-9581-a9a7ec80f716),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d032336-9e')#033[00m
Feb 25 07:45:18 np0005629333 kernel: tap8d032336-9e: entered promiscuous mode
Feb 25 07:45:18 np0005629333 NetworkManager[49836]: <info>  [1772023518.7871] manager: (tap8d032336-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/482)
Feb 25 07:45:18 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:18Z|01142|binding|INFO|Claiming lport 8d032336-9efd-4e76-9498-4dafee40640b for this chassis.
Feb 25 07:45:18 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:18Z|01143|binding|INFO|8d032336-9efd-4e76-9498-4dafee40640b: Claiming fa:16:3e:b3:5f:c9 10.100.0.11
Feb 25 07:45:18 np0005629333 nova_compute[244014]: 2026-02-25 12:45:18.790 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.797 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:5f:c9 10.100.0.11'], port_security=['fa:16:3e:b3:5f:c9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'aee87402-4b34-4083-888b-bb653e2beaa9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd92597b-67bf-4492-9581-a9a7ec80f716', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '815102d4-6506-4957-9109-24ea4e91e4b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d5163c3-96b2-4512-ae02-51f6c9b4bef4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=8d032336-9efd-4e76-9498-4dafee40640b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
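The "Matched UPDATE" line above is ovsdbapp's event machinery at work: the metadata agent registers row events against the southbound Port_Binding table and reacts when a port it cares about gains a chassis (note old=Port_Binding(chassis=[]) in the match). A stripped-down sketch of such a watcher; the class name and callback body are hypothetical, and the agent's real PortBindingUpdatedEvent applies more filtering:

    from ovsdbapp.backend.ovs_idl import event

    class PortBoundEvent(event.RowEvent):
        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None,
            # exactly as printed in the log line above
            super().__init__(('update',), 'Port_Binding', None)

        def match_fn(self, event_type, row, old=None):
            # 'old' carries only the changed columns; fire when chassis
            # goes from empty to set, i.e. the port was just bound
            return bool(row.chassis) and not getattr(old, 'chassis', True)

        def run(self, event_type, row, old):
            print('port bound to our chassis:', row.logical_port)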
Feb 25 07:45:18 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:18Z|01144|binding|INFO|Setting lport 8d032336-9efd-4e76-9498-4dafee40640b ovn-installed in OVS
Feb 25 07:45:18 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:18Z|01145|binding|INFO|Setting lport 8d032336-9efd-4e76-9498-4dafee40640b up in Southbound
Feb 25 07:45:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.800 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 8d032336-9efd-4e76-9498-4dafee40640b in datapath cd92597b-67bf-4492-9581-a9a7ec80f716 bound to our chassis#033[00m
Feb 25 07:45:18 np0005629333 nova_compute[244014]: 2026-02-25 12:45:18.803 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.805 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cd92597b-67bf-4492-9581-a9a7ec80f716#033[00m
Feb 25 07:45:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.817 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4aa02bdd-fc37-46b6-a791-f2a33b61c0d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.818 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcd92597b-61 in ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:45:18 np0005629333 systemd-machined[210048]: New machine qemu-145-instance-00000071.
Feb 25 07:45:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.821 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcd92597b-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:45:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.821 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[04939727-bc66-410e-9d73-5f86d7fdbc44]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.823 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f08c990f-8ec3-48c1-a2b6-486321f41c17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:18 np0005629333 systemd[1]: Started Virtual Machine qemu-145-instance-00000071.
Feb 25 07:45:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.835 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[a98490f7-2dad-4b41-bcd7-bbc1824987cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:18 np0005629333 systemd-udevd[346194]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:45:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.869 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe663c8-50f1-4dff-9451-df64c8f4ae2c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:18 np0005629333 NetworkManager[49836]: <info>  [1772023518.8710] device (tap8d032336-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:45:18 np0005629333 NetworkManager[49836]: <info>  [1772023518.8720] device (tap8d032336-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:45:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.894 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ad653b2a-516d-4f72-a4a0-700463db0e60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.898 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[317c69c5-e6b9-497c-9097-24ccf3cfd668]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:18 np0005629333 NetworkManager[49836]: <info>  [1772023518.8995] manager: (tapcd92597b-60): new Veth device (/org/freedesktop/NetworkManager/Devices/483)
Feb 25 07:45:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.929 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ee40b0d3-f2c4-48ec-93e9-6e4abf255df4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.932 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[012c25d3-2c85-4c2a-8d3c-3bb3c964ee59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:18 np0005629333 NetworkManager[49836]: <info>  [1772023518.9649] device (tapcd92597b-60): carrier: link connected
Feb 25 07:45:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.973 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[031d362a-fb81-4cb5-bc7e-e525301c6c31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:18.988 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ccecbbf9-9d08-4061-b9df-33d0f6d6a4d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd92597b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:89:ac:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 347], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548837, 'reachable_time': 33065, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346225, 'error': None, 'target': 'ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:19.004 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[46c3003e-fb30-4f2d-a748-700edfa6bbdf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe89:ace3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548837, 'tstamp': 548837}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346226, 'error': None, 'target': 'ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:19.021 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1a50355e-e6fd-4aaa-bfac-04c7f82d08a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcd92597b-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:89:ac:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 347], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548837, 'reachable_time': 33065, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 346227, 'error': None, 'target': 'ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:19.046 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0b7ba602-9c1f-46c1-8205-ed5fec4677aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:19.102 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0bde864d-a850-48ae-bc9b-b99125dc5187]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:19.103 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd92597b-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:19.104 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:19.104 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcd92597b-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:19 np0005629333 nova_compute[244014]: 2026-02-25 12:45:19.105 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:19 np0005629333 kernel: tapcd92597b-60: entered promiscuous mode
Feb 25 07:45:19 np0005629333 NetworkManager[49836]: <info>  [1772023519.1066] manager: (tapcd92597b-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/484)
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:19.107 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcd92597b-60, col_values=(('external_ids', {'iface-id': '37c4424f-372d-4923-b009-7893a123c4d8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:19 np0005629333 nova_compute[244014]: 2026-02-25 12:45:19.108 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:19 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:19Z|01146|binding|INFO|Releasing lport 37c4424f-372d-4923-b009-7893a123c4d8 from this chassis (sb_readonly=0)
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:19.116 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cd92597b-67bf-4492-9581-a9a7ec80f716.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cd92597b-67bf-4492-9581-a9a7ec80f716.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:45:19 np0005629333 nova_compute[244014]: 2026-02-25 12:45:19.116 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:19.117 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[08b82419-eafc-481f-92e3-1e6fff0f9888]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:19.117 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-cd92597b-67bf-4492-9581-a9a7ec80f716
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/cd92597b-67bf-4492-9581-a9a7ec80f716.pid.haproxy
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID cd92597b-67bf-4492-9581-a9a7ec80f716
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:45:19 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:19.118 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716', 'env', 'PROCESS_TAG=haproxy-cd92597b-67bf-4492-9581-a9a7ec80f716', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cd92597b-67bf-4492-9581-a9a7ec80f716.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
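The config dumped above defines the per-network haproxy: bind the metadata link-local address inside the ovnmeta- namespace, forward to the agent's unix socket at /var/lib/neutron/metadata_proxy, and stamp each request with X-OVN-Network-ID so the agent knows which network it came from. Once the "Loading success." notice appears below, a guest on this network can reach its metadata; a minimal check from inside the guest (the URL path is the standard OpenStack metadata endpoint, not something shown in this log):

    import urllib.request

    # 169.254.169.254:80 is the bind address from the haproxy config above
    url = 'http://169.254.169.254/openstack/latest/meta_data.json'
    with urllib.request.urlopen(url, timeout=5) as resp:
        print(resp.read().decode())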
Feb 25 07:45:19 np0005629333 podman[346259]: 2026-02-25 12:45:19.443661625 +0000 UTC m=+0.026819118 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:45:19 np0005629333 podman[346259]: 2026-02-25 12:45:19.585378904 +0000 UTC m=+0.168536407 container create eb0d35100acc8c7d045c75b2d824237b2e0396e2ecddd9c41701bc72f9f03b75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 07:45:19 np0005629333 systemd[1]: Started libpod-conmon-eb0d35100acc8c7d045c75b2d824237b2e0396e2ecddd9c41701bc72f9f03b75.scope.
Feb 25 07:45:19 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:45:19 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4abce4945cdbbec3adb7c76e2b958b55b61f3b27db7344184a7c0c756e9956d5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:45:19 np0005629333 nova_compute[244014]: 2026-02-25 12:45:19.675 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for aee87402-4b34-4083-888b-bb653e2beaa9 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 25 07:45:19 np0005629333 nova_compute[244014]: 2026-02-25 12:45:19.676 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023519.675293, aee87402-4b34-4083-888b-bb653e2beaa9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:45:19 np0005629333 nova_compute[244014]: 2026-02-25 12:45:19.676 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:45:19 np0005629333 nova_compute[244014]: 2026-02-25 12:45:19.679 244018 DEBUG nova.compute.manager [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:45:19 np0005629333 nova_compute[244014]: 2026-02-25 12:45:19.682 244018 INFO nova.virt.libvirt.driver [-] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Instance rebooted successfully.#033[00m
Feb 25 07:45:19 np0005629333 nova_compute[244014]: 2026-02-25 12:45:19.682 244018 DEBUG nova.compute.manager [None req-3fc14473-6f00-434b-b693-423bb1a1605c fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:45:19 np0005629333 nova_compute[244014]: 2026-02-25 12:45:19.723 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:45:19 np0005629333 nova_compute[244014]: 2026-02-25 12:45:19.726 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:45:19 np0005629333 podman[346259]: 2026-02-25 12:45:19.761092753 +0000 UTC m=+0.344250226 container init eb0d35100acc8c7d045c75b2d824237b2e0396e2ecddd9c41701bc72f9f03b75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 07:45:19 np0005629333 podman[346259]: 2026-02-25 12:45:19.765416365 +0000 UTC m=+0.348573838 container start eb0d35100acc8c7d045c75b2d824237b2e0396e2ecddd9c41701bc72f9f03b75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 25 07:45:19 np0005629333 nova_compute[244014]: 2026-02-25 12:45:19.770 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Feb 25 07:45:19 np0005629333 nova_compute[244014]: 2026-02-25 12:45:19.771 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023519.6784546, aee87402-4b34-4083-888b-bb653e2beaa9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:45:19 np0005629333 nova_compute[244014]: 2026-02-25 12:45:19.771 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] VM Started (Lifecycle Event)#033[00m
Feb 25 07:45:19 np0005629333 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[346317]: [NOTICE]   (346322) : New worker (346324) forked
Feb 25 07:45:19 np0005629333 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[346317]: [NOTICE]   (346322) : Loading success.
Feb 25 07:45:19 np0005629333 nova_compute[244014]: 2026-02-25 12:45:19.802 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:45:19 np0005629333 nova_compute[244014]: 2026-02-25 12:45:19.809 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
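The two "Synchronizing instance power state" lines decode via the constants in nova.compute.power_state: the first sync sees a stale DB value of 4 (SHUTDOWN) against a live VM value of 1 (RUNNING) while task_state is still powering-on, so the manager skips it; by the second, both sides agree on RUNNING. The mapping, for reference:

    # Constants as defined in nova.compute.power_state
    STATE_MAP = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                 4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    assert STATE_MAP[4] == 'SHUTDOWN'  # "current DB power_state: 4"
    assert STATE_MAP[1] == 'RUNNING'   # "VM power_state: 1"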
Feb 25 07:45:20 np0005629333 nova_compute[244014]: 2026-02-25 12:45:20.087 244018 DEBUG nova.compute.manager [req-a20f8510-92d8-4d43-b6b6-4972971e1495 req-0b0ccdc8-ab6e-4caf-9fb0-358a9bee7ca5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received event network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:45:20 np0005629333 nova_compute[244014]: 2026-02-25 12:45:20.087 244018 DEBUG oslo_concurrency.lockutils [req-a20f8510-92d8-4d43-b6b6-4972971e1495 req-0b0ccdc8-ab6e-4caf-9fb0-358a9bee7ca5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:20 np0005629333 nova_compute[244014]: 2026-02-25 12:45:20.088 244018 DEBUG oslo_concurrency.lockutils [req-a20f8510-92d8-4d43-b6b6-4972971e1495 req-0b0ccdc8-ab6e-4caf-9fb0-358a9bee7ca5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:20 np0005629333 nova_compute[244014]: 2026-02-25 12:45:20.088 244018 DEBUG oslo_concurrency.lockutils [req-a20f8510-92d8-4d43-b6b6-4972971e1495 req-0b0ccdc8-ab6e-4caf-9fb0-358a9bee7ca5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:20 np0005629333 nova_compute[244014]: 2026-02-25 12:45:20.088 244018 DEBUG nova.compute.manager [req-a20f8510-92d8-4d43-b6b6-4972971e1495 req-0b0ccdc8-ab6e-4caf-9fb0-358a9bee7ca5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] No waiting events found dispatching network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:45:20 np0005629333 nova_compute[244014]: 2026-02-25 12:45:20.088 244018 WARNING nova.compute.manager [req-a20f8510-92d8-4d43-b6b6-4972971e1495 req-0b0ccdc8-ab6e-4caf-9fb0-358a9bee7ca5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received unexpected event network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b for instance with vm_state active and task_state None.#033[00m
Feb 25 07:45:20 np0005629333 nova_compute[244014]: 2026-02-25 12:45:20.089 244018 DEBUG nova.compute.manager [req-a20f8510-92d8-4d43-b6b6-4972971e1495 req-0b0ccdc8-ab6e-4caf-9fb0-358a9bee7ca5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received event network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:45:20 np0005629333 nova_compute[244014]: 2026-02-25 12:45:20.089 244018 DEBUG oslo_concurrency.lockutils [req-a20f8510-92d8-4d43-b6b6-4972971e1495 req-0b0ccdc8-ab6e-4caf-9fb0-358a9bee7ca5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:20 np0005629333 nova_compute[244014]: 2026-02-25 12:45:20.089 244018 DEBUG oslo_concurrency.lockutils [req-a20f8510-92d8-4d43-b6b6-4972971e1495 req-0b0ccdc8-ab6e-4caf-9fb0-358a9bee7ca5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:20 np0005629333 nova_compute[244014]: 2026-02-25 12:45:20.089 244018 DEBUG oslo_concurrency.lockutils [req-a20f8510-92d8-4d43-b6b6-4972971e1495 req-0b0ccdc8-ab6e-4caf-9fb0-358a9bee7ca5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:20 np0005629333 nova_compute[244014]: 2026-02-25 12:45:20.090 244018 DEBUG nova.compute.manager [req-a20f8510-92d8-4d43-b6b6-4972971e1495 req-0b0ccdc8-ab6e-4caf-9fb0-358a9bee7ca5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] No waiting events found dispatching network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:45:20 np0005629333 nova_compute[244014]: 2026-02-25 12:45:20.090 244018 WARNING nova.compute.manager [req-a20f8510-92d8-4d43-b6b6-4972971e1495 req-0b0ccdc8-ab6e-4caf-9fb0-358a9bee7ca5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received unexpected event network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b for instance with vm_state active and task_state None.#033[00m
Feb 25 07:45:20 np0005629333 nova_compute[244014]: 2026-02-25 12:45:20.187 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 425 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 4.7 MiB/s wr, 245 op/s
Feb 25 07:45:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 2.1 MiB/s wr, 294 op/s
Feb 25 07:45:22 np0005629333 nova_compute[244014]: 2026-02-25 12:45:22.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:45:22 np0005629333 nova_compute[244014]: 2026-02-25 12:45:22.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:45:22 np0005629333 nova_compute[244014]: 2026-02-25 12:45:22.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 07:45:22 np0005629333 nova_compute[244014]: 2026-02-25 12:45:22.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 07:45:23 np0005629333 nova_compute[244014]: 2026-02-25 12:45:23.066 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:45:23 np0005629333 nova_compute[244014]: 2026-02-25 12:45:23.067 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:45:23 np0005629333 nova_compute[244014]: 2026-02-25 12:45:23.068 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 25 07:45:23 np0005629333 nova_compute[244014]: 2026-02-25 12:45:23.068 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid 874359d8-3251-4416-82dc-f6776853e384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:45:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:45:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Feb 25 07:45:23 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Feb 25 07:45:23 np0005629333 nova_compute[244014]: 2026-02-25 12:45:23.611 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Feb 25 07:45:23 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Feb 25 07:45:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1945: 305 pgs: 305 active+clean; 361 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.5 MiB/s rd, 183 KiB/s wr, 347 op/s
Feb 25 07:45:24 np0005629333 nova_compute[244014]: 2026-02-25 12:45:24.799 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updating instance_info_cache with network_info: [{"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:45:24 np0005629333 nova_compute[244014]: 2026-02-25 12:45:24.817 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:45:24 np0005629333 nova_compute[244014]: 2026-02-25 12:45:24.818 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 25 07:45:24 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:24Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:47:4a:25 10.100.0.14
Feb 25 07:45:24 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:24Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:47:4a:25 10.100.0.14
Feb 25 07:45:25 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Feb 25 07:45:25 np0005629333 nova_compute[244014]: 2026-02-25 12:45:25.189 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:25 np0005629333 podman[346333]: 2026-02-25 12:45:25.773970004 +0000 UTC m=+0.106339432 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223)
Feb 25 07:45:25 np0005629333 nova_compute[244014]: 2026-02-25 12:45:25.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:45:25 np0005629333 nova_compute[244014]: 2026-02-25 12:45:25.898 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:25 np0005629333 nova_compute[244014]: 2026-02-25 12:45:25.898 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:25 np0005629333 nova_compute[244014]: 2026-02-25 12:45:25.899 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:25 np0005629333 nova_compute[244014]: 2026-02-25 12:45:25.899 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:45:25 np0005629333 nova_compute[244014]: 2026-02-25 12:45:25.901 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:45:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 361 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 139 KiB/s wr, 285 op/s
Feb 25 07:45:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:45:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3352791063' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:45:26 np0005629333 nova_compute[244014]: 2026-02-25 12:45:26.441 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:45:26 np0005629333 nova_compute[244014]: 2026-02-25 12:45:26.550 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:45:26 np0005629333 nova_compute[244014]: 2026-02-25 12:45:26.551 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:45:26 np0005629333 nova_compute[244014]: 2026-02-25 12:45:26.557 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:45:26 np0005629333 nova_compute[244014]: 2026-02-25 12:45:26.557 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:45:26 np0005629333 nova_compute[244014]: 2026-02-25 12:45:26.563 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:45:26 np0005629333 nova_compute[244014]: 2026-02-25 12:45:26.564 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:45:26 np0005629333 nova_compute[244014]: 2026-02-25 12:45:26.814 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:45:26 np0005629333 nova_compute[244014]: 2026-02-25 12:45:26.815 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3008MB free_disk=59.8733590496704GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 07:45:26 np0005629333 nova_compute[244014]: 2026-02-25 12:45:26.815 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:26 np0005629333 nova_compute[244014]: 2026-02-25 12:45:26.815 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:26 np0005629333 nova_compute[244014]: 2026-02-25 12:45:26.884 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance aee87402-4b34-4083-888b-bb653e2beaa9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:45:26 np0005629333 nova_compute[244014]: 2026-02-25 12:45:26.885 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance dd8c9142-2607-4722-90eb-65233f258639 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:45:26 np0005629333 nova_compute[244014]: 2026-02-25 12:45:26.885 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 874359d8-3251-4416-82dc-f6776853e384 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:45:26 np0005629333 nova_compute[244014]: 2026-02-25 12:45:26.885 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 07:45:26 np0005629333 nova_compute[244014]: 2026-02-25 12:45:26.885 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 07:45:26 np0005629333 nova_compute[244014]: 2026-02-25 12:45:26.906 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 25 07:45:26 np0005629333 nova_compute[244014]: 2026-02-25 12:45:26.922 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 25 07:45:26 np0005629333 nova_compute[244014]: 2026-02-25 12:45:26.922 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 25 07:45:26 np0005629333 nova_compute[244014]: 2026-02-25 12:45:26.935 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 25 07:45:26 np0005629333 nova_compute[244014]: 2026-02-25 12:45:26.961 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 25 07:45:27 np0005629333 nova_compute[244014]: 2026-02-25 12:45:27.046 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:45:27 np0005629333 podman[346394]: 2026-02-25 12:45:27.784022876 +0000 UTC m=+0.110353925 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 25 07:45:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:45:27 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2268235967' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:45:27 np0005629333 nova_compute[244014]: 2026-02-25 12:45:27.924 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.877s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:45:27 np0005629333 nova_compute[244014]: 2026-02-25 12:45:27.932 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:45:27 np0005629333 nova_compute[244014]: 2026-02-25 12:45:27.950 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:45:27 np0005629333 nova_compute[244014]: 2026-02-25 12:45:27.984 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 07:45:27 np0005629333 nova_compute[244014]: 2026-02-25 12:45:27.985 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.6 MiB/s wr, 228 op/s
Feb 25 07:45:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:45:28 np0005629333 nova_compute[244014]: 2026-02-25 12:45:28.613 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:29 np0005629333 nova_compute[244014]: 2026-02-25 12:45:29.986 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:45:29 np0005629333 nova_compute[244014]: 2026-02-25 12:45:29.987 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:45:29 np0005629333 nova_compute[244014]: 2026-02-25 12:45:29.988 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:45:29 np0005629333 nova_compute[244014]: 2026-02-25 12:45:29.988 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:45:30 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:30Z|00133|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:61:b7:f7 10.100.0.11
Feb 25 07:45:30 np0005629333 nova_compute[244014]: 2026-02-25 12:45:30.193 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.6 MiB/s wr, 228 op/s
Feb 25 07:45:30 np0005629333 nova_compute[244014]: 2026-02-25 12:45:30.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:45:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:45:30
Feb 25 07:45:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:45:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:45:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', '.rgw.root', 'default.rgw.log', 'images', '.mgr']
Feb 25 07:45:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:45:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:45:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:45:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:45:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:45:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:45:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:45:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:45:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:45:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:45:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:45:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:45:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:45:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:45:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:45:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:45:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:45:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.6 MiB/s wr, 154 op/s
Feb 25 07:45:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:32Z|00134|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b3:5f:c9 10.100.0.11
Feb 25 07:45:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:45:33 np0005629333 nova_compute[244014]: 2026-02-25 12:45:33.617 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.3 MiB/s wr, 166 op/s
Feb 25 07:45:34 np0005629333 nova_compute[244014]: 2026-02-25 12:45:34.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:45:34 np0005629333 nova_compute[244014]: 2026-02-25 12:45:34.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 07:45:35 np0005629333 nova_compute[244014]: 2026-02-25 12:45:35.197 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 148 op/s
Feb 25 07:45:36 np0005629333 nova_compute[244014]: 2026-02-25 12:45:36.796 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:36 np0005629333 nova_compute[244014]: 2026-02-25 12:45:36.797 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:36 np0005629333 nova_compute[244014]: 2026-02-25 12:45:36.821 244018 DEBUG nova.compute.manager [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:45:36 np0005629333 nova_compute[244014]: 2026-02-25 12:45:36.919 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:36 np0005629333 nova_compute[244014]: 2026-02-25 12:45:36.919 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:36 np0005629333 nova_compute[244014]: 2026-02-25 12:45:36.927 244018 DEBUG nova.virt.hardware [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:45:36 np0005629333 nova_compute[244014]: 2026-02-25 12:45:36.927 244018 INFO nova.compute.claims [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:45:37 np0005629333 nova_compute[244014]: 2026-02-25 12:45:37.074 244018 DEBUG oslo_concurrency.processutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:45:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:45:37 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3574570873' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:45:37 np0005629333 nova_compute[244014]: 2026-02-25 12:45:37.632 244018 DEBUG oslo_concurrency.processutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:45:37 np0005629333 nova_compute[244014]: 2026-02-25 12:45:37.639 244018 DEBUG nova.compute.provider_tree [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:45:37 np0005629333 nova_compute[244014]: 2026-02-25 12:45:37.653 244018 DEBUG nova.scheduler.client.report [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:45:37 np0005629333 nova_compute[244014]: 2026-02-25 12:45:37.672 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:37 np0005629333 nova_compute[244014]: 2026-02-25 12:45:37.672 244018 DEBUG nova.compute.manager [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:45:37 np0005629333 nova_compute[244014]: 2026-02-25 12:45:37.720 244018 DEBUG nova.compute.manager [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:45:37 np0005629333 nova_compute[244014]: 2026-02-25 12:45:37.721 244018 DEBUG nova.network.neutron [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:45:37 np0005629333 nova_compute[244014]: 2026-02-25 12:45:37.748 244018 INFO nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:45:37 np0005629333 nova_compute[244014]: 2026-02-25 12:45:37.775 244018 DEBUG nova.compute.manager [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:45:37 np0005629333 nova_compute[244014]: 2026-02-25 12:45:37.864 244018 DEBUG nova.compute.manager [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:45:37 np0005629333 nova_compute[244014]: 2026-02-25 12:45:37.867 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:45:37 np0005629333 nova_compute[244014]: 2026-02-25 12:45:37.867 244018 INFO nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Creating image(s)#033[00m
Feb 25 07:45:37 np0005629333 nova_compute[244014]: 2026-02-25 12:45:37.941 244018 DEBUG nova.storage.rbd_utils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:45:37 np0005629333 nova_compute[244014]: 2026-02-25 12:45:37.976 244018 DEBUG nova.storage.rbd_utils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:45:38 np0005629333 nova_compute[244014]: 2026-02-25 12:45:38.011 244018 DEBUG nova.storage.rbd_utils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:45:38 np0005629333 nova_compute[244014]: 2026-02-25 12:45:38.017 244018 DEBUG oslo_concurrency.processutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:45:38 np0005629333 nova_compute[244014]: 2026-02-25 12:45:38.060 244018 DEBUG nova.policy [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:45:38 np0005629333 nova_compute[244014]: 2026-02-25 12:45:38.064 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:45:38 np0005629333 nova_compute[244014]: 2026-02-25 12:45:38.070 244018 INFO nova.compute.manager [None req-0938bf09-ee14-4e7d-8e0e-d400621b5511 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Get console output#033[00m
Feb 25 07:45:38 np0005629333 nova_compute[244014]: 2026-02-25 12:45:38.079 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Feb 25 07:45:38 np0005629333 nova_compute[244014]: 2026-02-25 12:45:38.106 244018 DEBUG oslo_concurrency.processutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:45:38 np0005629333 nova_compute[244014]: 2026-02-25 12:45:38.107 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:38 np0005629333 nova_compute[244014]: 2026-02-25 12:45:38.108 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:38 np0005629333 nova_compute[244014]: 2026-02-25 12:45:38.108 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:38 np0005629333 nova_compute[244014]: 2026-02-25 12:45:38.203 244018 DEBUG nova.storage.rbd_utils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:45:38 np0005629333 nova_compute[244014]: 2026-02-25 12:45:38.208 244018 DEBUG oslo_concurrency.processutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:45:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 305 active+clean; 395 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 150 op/s
Feb 25 07:45:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:45:38 np0005629333 nova_compute[244014]: 2026-02-25 12:45:38.620 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:39 np0005629333 nova_compute[244014]: 2026-02-25 12:45:39.796 244018 DEBUG nova.compute.manager [req-aeda064a-abcb-4ef4-9010-3c826e7815c9 req-eaaae7de-75d9-4aa4-a563-6124c25e29db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received event network-changed-8d032336-9efd-4e76-9498-4dafee40640b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:45:39 np0005629333 nova_compute[244014]: 2026-02-25 12:45:39.797 244018 DEBUG nova.compute.manager [req-aeda064a-abcb-4ef4-9010-3c826e7815c9 req-eaaae7de-75d9-4aa4-a563-6124c25e29db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Refreshing instance network info cache due to event network-changed-8d032336-9efd-4e76-9498-4dafee40640b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:45:39 np0005629333 nova_compute[244014]: 2026-02-25 12:45:39.797 244018 DEBUG oslo_concurrency.lockutils [req-aeda064a-abcb-4ef4-9010-3c826e7815c9 req-eaaae7de-75d9-4aa4-a563-6124c25e29db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:45:39 np0005629333 nova_compute[244014]: 2026-02-25 12:45:39.797 244018 DEBUG oslo_concurrency.lockutils [req-aeda064a-abcb-4ef4-9010-3c826e7815c9 req-eaaae7de-75d9-4aa4-a563-6124c25e29db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:45:39 np0005629333 nova_compute[244014]: 2026-02-25 12:45:39.798 244018 DEBUG nova.network.neutron [req-aeda064a-abcb-4ef4-9010-3c826e7815c9 req-eaaae7de-75d9-4aa4-a563-6124c25e29db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Refreshing network info cache for port 8d032336-9efd-4e76-9498-4dafee40640b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:45:39 np0005629333 nova_compute[244014]: 2026-02-25 12:45:39.873 244018 DEBUG oslo_concurrency.lockutils [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "aee87402-4b34-4083-888b-bb653e2beaa9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:39 np0005629333 nova_compute[244014]: 2026-02-25 12:45:39.874 244018 DEBUG oslo_concurrency.lockutils [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:39 np0005629333 nova_compute[244014]: 2026-02-25 12:45:39.875 244018 DEBUG oslo_concurrency.lockutils [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:39 np0005629333 nova_compute[244014]: 2026-02-25 12:45:39.875 244018 DEBUG oslo_concurrency.lockutils [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:39 np0005629333 nova_compute[244014]: 2026-02-25 12:45:39.876 244018 DEBUG oslo_concurrency.lockutils [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:39 np0005629333 nova_compute[244014]: 2026-02-25 12:45:39.880 244018 INFO nova.compute.manager [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Terminating instance#033[00m
Feb 25 07:45:39 np0005629333 nova_compute[244014]: 2026-02-25 12:45:39.882 244018 DEBUG nova.compute.manager [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:45:40 np0005629333 nova_compute[244014]: 2026-02-25 12:45:40.199 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 305 active+clean; 395 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 983 KiB/s rd, 58 KiB/s wr, 82 op/s
Feb 25 07:45:40 np0005629333 nova_compute[244014]: 2026-02-25 12:45:40.998 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:40 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:40.998 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:45:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:41.001 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 25 07:45:41 np0005629333 nova_compute[244014]: 2026-02-25 12:45:41.118 244018 DEBUG nova.network.neutron [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Successfully created port: 443e71ca-9c38-40e8-b799-1026ef47c35d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:45:41 np0005629333 kernel: tap8d032336-9e (unregistering): left promiscuous mode
Feb 25 07:45:41 np0005629333 NetworkManager[49836]: <info>  [1772023541.7201] device (tap8d032336-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:45:41 np0005629333 nova_compute[244014]: 2026-02-25 12:45:41.728 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:41 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:41Z|01147|binding|INFO|Releasing lport 8d032336-9efd-4e76-9498-4dafee40640b from this chassis (sb_readonly=0)
Feb 25 07:45:41 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:41Z|01148|binding|INFO|Setting lport 8d032336-9efd-4e76-9498-4dafee40640b down in Southbound
Feb 25 07:45:41 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:41Z|01149|binding|INFO|Removing iface tap8d032336-9e ovn-installed in OVS
Feb 25 07:45:41 np0005629333 nova_compute[244014]: 2026-02-25 12:45:41.732 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:41.738 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b3:5f:c9 10.100.0.11'], port_security=['fa:16:3e:b3:5f:c9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'aee87402-4b34-4083-888b-bb653e2beaa9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cd92597b-67bf-4492-9581-a9a7ec80f716', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '815102d4-6506-4957-9109-24ea4e91e4b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d5163c3-96b2-4512-ae02-51f6c9b4bef4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=8d032336-9efd-4e76-9498-4dafee40640b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:45:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:41.740 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 8d032336-9efd-4e76-9498-4dafee40640b in datapath cd92597b-67bf-4492-9581-a9a7ec80f716 unbound from our chassis#033[00m
Feb 25 07:45:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:41.743 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cd92597b-67bf-4492-9581-a9a7ec80f716, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:45:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:41.744 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c367060f-2299-4382-89e9-c3813fe22683]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:41.745 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716 namespace which is not needed anymore#033[00m
Feb 25 07:45:41 np0005629333 nova_compute[244014]: 2026-02-25 12:45:41.746 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:41 np0005629333 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000071.scope: Deactivated successfully.
Feb 25 07:45:41 np0005629333 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000071.scope: Consumed 12.583s CPU time.
Feb 25 07:45:41 np0005629333 systemd-machined[210048]: Machine qemu-145-instance-00000071 terminated.
Feb 25 07:45:41 np0005629333 nova_compute[244014]: 2026-02-25 12:45:41.952 244018 INFO nova.virt.libvirt.driver [-] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Instance destroyed successfully.#033[00m
Feb 25 07:45:41 np0005629333 nova_compute[244014]: 2026-02-25 12:45:41.952 244018 DEBUG nova.objects.instance [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'resources' on Instance uuid aee87402-4b34-4083-888b-bb653e2beaa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:45:41 np0005629333 nova_compute[244014]: 2026-02-25 12:45:41.971 244018 DEBUG nova.virt.libvirt.vif [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:44:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-204528973',display_name='tempest-TestNetworkAdvancedServerOps-server-204528973',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-204528973',id=113,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBE5R/voudn53mFn1fesyFCvR/uhV0Qe/z38Tv5jlE5Qy+mC9LN8VH4xJizPPQof/n3K4Uot5xWkFCPJAwQJsVcwssLNZ96buGra8QRtUpJKVli2/glgbebk2AU5Vop/oTA==',key_name='tempest-TestNetworkAdvancedServerOps-211055070',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:44:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-z6csfx6h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:45:19Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=aee87402-4b34-4083-888b-bb653e2beaa9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:45:41 np0005629333 nova_compute[244014]: 2026-02-25 12:45:41.972 244018 DEBUG nova.network.os_vif_util [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:45:41 np0005629333 nova_compute[244014]: 2026-02-25 12:45:41.973 244018 DEBUG nova.network.os_vif_util [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5f:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d032336-9efd-4e76-9498-4dafee40640b,network=Network(cd92597b-67bf-4492-9581-a9a7ec80f716),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d032336-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:45:41 np0005629333 nova_compute[244014]: 2026-02-25 12:45:41.974 244018 DEBUG os_vif [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5f:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d032336-9efd-4e76-9498-4dafee40640b,network=Network(cd92597b-67bf-4492-9581-a9a7ec80f716),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d032336-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:45:41 np0005629333 nova_compute[244014]: 2026-02-25 12:45:41.976 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:41 np0005629333 nova_compute[244014]: 2026-02-25 12:45:41.977 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d032336-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
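The DelPortCommand transaction above is os-vif removing the instance's tap port from br-int through ovsdbapp. A hedged sketch of issuing the same command with ovsdbapp's public API; the OVSDB socket path and timeout are assumptions for a typical host, and this is not an os-vif excerpt:

    # Delete a port from br-int via ovsdbapp, mirroring the logged txn.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    conn = connection.Connection(idl=idl, timeout=10)
    ovs = impl_idl.OvsdbIdl(conn)
    # if_exists=True makes the delete idempotent, as in the log
    ovs.del_port('tap8d032336-9e', bridge='br-int', if_exists=True).execute(
        check_error=True)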
Feb 25 07:45:41 np0005629333 nova_compute[244014]: 2026-02-25 12:45:41.985 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:41 np0005629333 nova_compute[244014]: 2026-02-25 12:45:41.988 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:45:41 np0005629333 nova_compute[244014]: 2026-02-25 12:45:41.991 244018 INFO os_vif [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b3:5f:c9,bridge_name='br-int',has_traffic_filtering=True,id=8d032336-9efd-4e76-9498-4dafee40640b,network=Network(cd92597b-67bf-4492-9581-a9a7ec80f716),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d032336-9e')#033[00m
Feb 25 07:45:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:42.002 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:42 np0005629333 nova_compute[244014]: 2026-02-25 12:45:42.071 244018 DEBUG nova.compute.manager [req-bff44d47-ffa5-4a0a-a448-338ec8206dcd req-9fff18b5-ee40-4292-82bd-4fb78d07b428 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received event network-vif-unplugged-8d032336-9efd-4e76-9498-4dafee40640b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:45:42 np0005629333 nova_compute[244014]: 2026-02-25 12:45:42.072 244018 DEBUG oslo_concurrency.lockutils [req-bff44d47-ffa5-4a0a-a448-338ec8206dcd req-9fff18b5-ee40-4292-82bd-4fb78d07b428 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:42 np0005629333 nova_compute[244014]: 2026-02-25 12:45:42.072 244018 DEBUG oslo_concurrency.lockutils [req-bff44d47-ffa5-4a0a-a448-338ec8206dcd req-9fff18b5-ee40-4292-82bd-4fb78d07b428 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:42 np0005629333 nova_compute[244014]: 2026-02-25 12:45:42.073 244018 DEBUG oslo_concurrency.lockutils [req-bff44d47-ffa5-4a0a-a448-338ec8206dcd req-9fff18b5-ee40-4292-82bd-4fb78d07b428 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:42 np0005629333 nova_compute[244014]: 2026-02-25 12:45:42.073 244018 DEBUG nova.compute.manager [req-bff44d47-ffa5-4a0a-a448-338ec8206dcd req-9fff18b5-ee40-4292-82bd-4fb78d07b428 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] No waiting events found dispatching network-vif-unplugged-8d032336-9efd-4e76-9498-4dafee40640b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:45:42 np0005629333 nova_compute[244014]: 2026-02-25 12:45:42.073 244018 DEBUG nova.compute.manager [req-bff44d47-ffa5-4a0a-a448-338ec8206dcd req-9fff18b5-ee40-4292-82bd-4fb78d07b428 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received event network-vif-unplugged-8d032336-9efd-4e76-9498-4dafee40640b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:45:42 np0005629333 nova_compute[244014]: 2026-02-25 12:45:42.087 244018 DEBUG oslo_concurrency.processutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.879s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:45:42 np0005629333 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[346317]: [NOTICE]   (346322) : haproxy version is 2.8.14-c23fe91
Feb 25 07:45:42 np0005629333 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[346317]: [NOTICE]   (346322) : path to executable is /usr/sbin/haproxy
Feb 25 07:45:42 np0005629333 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[346317]: [WARNING]  (346322) : Exiting Master process...
Feb 25 07:45:42 np0005629333 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[346317]: [WARNING]  (346322) : Exiting Master process...
Feb 25 07:45:42 np0005629333 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[346317]: [ALERT]    (346322) : Current worker (346324) exited with code 143 (Terminated)
Feb 25 07:45:42 np0005629333 neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716[346317]: [WARNING]  (346322) : All workers exited. Exiting... (0)
Feb 25 07:45:42 np0005629333 systemd[1]: libpod-eb0d35100acc8c7d045c75b2d824237b2e0396e2ecddd9c41701bc72f9f03b75.scope: Deactivated successfully.
Feb 25 07:45:42 np0005629333 podman[346563]: 2026-02-25 12:45:42.107288042 +0000 UTC m=+0.225035706 container died eb0d35100acc8c7d045c75b2d824237b2e0396e2ecddd9c41701bc72f9f03b75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 07:45:42 np0005629333 nova_compute[244014]: 2026-02-25 12:45:42.236 244018 DEBUG nova.storage.rbd_utils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image 2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
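After the rbd import completes, rbd_utils grows the image to the flavor's root-disk size (1073741824 bytes = 1 GiB). A rough equivalent using the Ceph Python bindings, with pool, image name, size, and client id taken from the log; treat it as a sketch, not nova's rbd_utils:

    # Resize an RBD image in the 'vms' pool to 1 GiB, as the log line above does.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, '2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk') as image:
                image.resize(1073741824)  # grow to the flavor's root_gb
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()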
Feb 25 07:45:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 428 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 986 KiB/s rd, 1.1 MiB/s wr, 87 op/s
Feb 25 07:45:42 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eb0d35100acc8c7d045c75b2d824237b2e0396e2ecddd9c41701bc72f9f03b75-userdata-shm.mount: Deactivated successfully.
Feb 25 07:45:42 np0005629333 systemd[1]: var-lib-containers-storage-overlay-4abce4945cdbbec3adb7c76e2b958b55b61f3b27db7344184a7c0c756e9956d5-merged.mount: Deactivated successfully.
Feb 25 07:45:42 np0005629333 nova_compute[244014]: 2026-02-25 12:45:42.530 244018 DEBUG nova.network.neutron [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Successfully created port: 39f87175-7c4f-4092-bce6-29b413c731cd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:45:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:45:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:45:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:45:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:45:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0024997657061228648 of space, bias 1.0, pg target 0.7499297118368594 quantized to 32 (current 32)
Feb 25 07:45:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:45:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:45:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:45:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:45:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:45:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024939818704253244 of space, bias 1.0, pg target 0.7481945611275973 quantized to 32 (current 32)
Feb 25 07:45:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:45:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4167562580251474e-06 of space, bias 4.0, pg target 0.001700107509630177 quantized to 16 (current 16)
Feb 25 07:45:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:45:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:45:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:45:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:45:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:45:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:45:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:45:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:45:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:45:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
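The pg_autoscaler arithmetic in the block above is reproducible from the logged inputs: each pool's pg target is its share of raw space times its bias times a cluster-wide PG budget, then quantized to a power of two with a per-pool floor. The numbers are consistent with the default budget of mon_target_pg_per_osd=100 on a 3-OSD cluster (an assumption inferred from the values, not stated in the log):

    # Reproduce two pg_autoscaler targets from the logged inputs.
    budget = 100 * 3  # assumed mon_target_pg_per_osd=100, 3 OSDs

    vms = 0.0024997657061228648 * 1.0 * budget   # 'vms' pool, bias 1.0
    meta = 1.4167562580251474e-06 * 4.0 * budget  # 'cephfs.cephfs.meta', bias 4.0
    print(vms)   # 0.7499297118368594 -> matches the log; quantized up to 32
    print(meta)  # 0.001700107509630177 -> matches the log; quantized up to 16

Because every computed target falls below each pool's current pg_num, the quantized values equal the current ones and no pools are adjusted.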
Feb 25 07:45:42 np0005629333 podman[346563]: 2026-02-25 12:45:42.57684493 +0000 UTC m=+0.694592594 container cleanup eb0d35100acc8c7d045c75b2d824237b2e0396e2ecddd9c41701bc72f9f03b75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 25 07:45:42 np0005629333 nova_compute[244014]: 2026-02-25 12:45:42.583 244018 DEBUG nova.network.neutron [req-aeda064a-abcb-4ef4-9010-3c826e7815c9 req-eaaae7de-75d9-4aa4-a563-6124c25e29db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Updated VIF entry in instance network info cache for port 8d032336-9efd-4e76-9498-4dafee40640b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:45:42 np0005629333 nova_compute[244014]: 2026-02-25 12:45:42.584 244018 DEBUG nova.network.neutron [req-aeda064a-abcb-4ef4-9010-3c826e7815c9 req-eaaae7de-75d9-4aa4-a563-6124c25e29db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Updating instance_info_cache with network_info: [{"id": "8d032336-9efd-4e76-9498-4dafee40640b", "address": "fa:16:3e:b3:5f:c9", "network": {"id": "cd92597b-67bf-4492-9581-a9a7ec80f716", "bridge": "br-int", "label": "tempest-network-smoke--585731863", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d032336-9e", "ovs_interfaceid": "8d032336-9efd-4e76-9498-4dafee40640b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:45:42 np0005629333 systemd[1]: libpod-conmon-eb0d35100acc8c7d045c75b2d824237b2e0396e2ecddd9c41701bc72f9f03b75.scope: Deactivated successfully.
Feb 25 07:45:42 np0005629333 nova_compute[244014]: 2026-02-25 12:45:42.606 244018 DEBUG oslo_concurrency.lockutils [req-aeda064a-abcb-4ef4-9010-3c826e7815c9 req-eaaae7de-75d9-4aa4-a563-6124c25e29db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-aee87402-4b34-4083-888b-bb653e2beaa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:45:42 np0005629333 nova_compute[244014]: 2026-02-25 12:45:42.748 244018 DEBUG nova.objects.instance [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid 2b9ea98f-9b1a-495b-8669-e79da967b0ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:45:42 np0005629333 nova_compute[244014]: 2026-02-25 12:45:42.763 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:45:42 np0005629333 nova_compute[244014]: 2026-02-25 12:45:42.764 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Ensure instance console log exists: /var/lib/nova/instances/2b9ea98f-9b1a-495b-8669-e79da967b0ab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:45:42 np0005629333 nova_compute[244014]: 2026-02-25 12:45:42.765 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:42 np0005629333 nova_compute[244014]: 2026-02-25 12:45:42.765 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:42 np0005629333 nova_compute[244014]: 2026-02-25 12:45:42.766 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:43 np0005629333 podman[346677]: 2026-02-25 12:45:43.269169379 +0000 UTC m=+0.660763398 container remove eb0d35100acc8c7d045c75b2d824237b2e0396e2ecddd9c41701bc72f9f03b75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 25 07:45:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:43.275 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4987efb3-54c3-489b-aa6b-d516452407b0]: (4, ('Wed Feb 25 12:45:41 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716 (eb0d35100acc8c7d045c75b2d824237b2e0396e2ecddd9c41701bc72f9f03b75)\neb0d35100acc8c7d045c75b2d824237b2e0396e2ecddd9c41701bc72f9f03b75\nWed Feb 25 12:45:42 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716 (eb0d35100acc8c7d045c75b2d824237b2e0396e2ecddd9c41701bc72f9f03b75)\neb0d35100acc8c7d045c75b2d824237b2e0396e2ecddd9c41701bc72f9f03b75\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:43.277 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f0412724-54b0-425c-9af1-816cef80ce98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:43.279 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcd92597b-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:43 np0005629333 nova_compute[244014]: 2026-02-25 12:45:43.281 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:43 np0005629333 kernel: tapcd92597b-60: left promiscuous mode
Feb 25 07:45:43 np0005629333 nova_compute[244014]: 2026-02-25 12:45:43.292 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:43.297 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a7c3dd86-f151-4ea7-85b0-5a67000b41e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:43.318 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8bdf65e2-fd97-40d2-8435-1e5bd280f1d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:43.320 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4828b62b-cd6c-48fa-8658-b28784e550df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:43.337 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[89c70a0a-3cb5-4b43-ad55-abf877cb40e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548829, 'reachable_time': 42288, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346711, 'error': None, 'target': 'ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
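The privsep reply above is a netlink RTM_NEWLINK dump of the links left in the ovnmeta namespace (only 'lo'), which is what lets the agent conclude the namespace is safe to delete. A hedged sketch of producing such a dump with pyroute2 (assumed available, as neutron uses it; namespace name from the log):

    # List the links inside the ovnmeta namespace before tearing it down.
    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716')
    try:
        for link in ns.get_links():
            # each message carries the same attrs seen in the reply above
            print(link.get_attr('IFLA_IFNAME'), link['state'])
    finally:
        ns.close()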
Feb 25 07:45:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:43.340 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cd92597b-67bf-4492-9581-a9a7ec80f716 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:45:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:43.340 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[ac46d547-346f-40bb-89b3-d47508e58431]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:43 np0005629333 systemd[1]: run-netns-ovnmeta\x2dcd92597b\x2d67bf\x2d4492\x2d9581\x2da9a7ec80f716.mount: Deactivated successfully.
Feb 25 07:45:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:45:43 np0005629333 nova_compute[244014]: 2026-02-25 12:45:43.884 244018 DEBUG nova.network.neutron [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Successfully updated port: 443e71ca-9c38-40e8-b799-1026ef47c35d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:45:44 np0005629333 nova_compute[244014]: 2026-02-25 12:45:44.008 244018 DEBUG nova.compute.manager [req-16c4cf69-f6bc-40d5-a7f9-33e2dd705d9d req-224819f0-d97c-4752-bdf5-b3e81c0f201d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received event network-changed-443e71ca-9c38-40e8-b799-1026ef47c35d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:45:44 np0005629333 nova_compute[244014]: 2026-02-25 12:45:44.008 244018 DEBUG nova.compute.manager [req-16c4cf69-f6bc-40d5-a7f9-33e2dd705d9d req-224819f0-d97c-4752-bdf5-b3e81c0f201d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Refreshing instance network info cache due to event network-changed-443e71ca-9c38-40e8-b799-1026ef47c35d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:45:44 np0005629333 nova_compute[244014]: 2026-02-25 12:45:44.009 244018 DEBUG oslo_concurrency.lockutils [req-16c4cf69-f6bc-40d5-a7f9-33e2dd705d9d req-224819f0-d97c-4752-bdf5-b3e81c0f201d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:45:44 np0005629333 nova_compute[244014]: 2026-02-25 12:45:44.009 244018 DEBUG oslo_concurrency.lockutils [req-16c4cf69-f6bc-40d5-a7f9-33e2dd705d9d req-224819f0-d97c-4752-bdf5-b3e81c0f201d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:45:44 np0005629333 nova_compute[244014]: 2026-02-25 12:45:44.010 244018 DEBUG nova.network.neutron [req-16c4cf69-f6bc-40d5-a7f9-33e2dd705d9d req-224819f0-d97c-4752-bdf5-b3e81c0f201d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Refreshing network info cache for port 443e71ca-9c38-40e8-b799-1026ef47c35d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:45:44 np0005629333 nova_compute[244014]: 2026-02-25 12:45:44.243 244018 DEBUG nova.compute.manager [req-b034baf2-bef2-46c6-80e5-e4a39a523456 req-4fb4ae33-4583-4043-b9c0-1d1b83ea5620 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received event network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:45:44 np0005629333 nova_compute[244014]: 2026-02-25 12:45:44.244 244018 DEBUG oslo_concurrency.lockutils [req-b034baf2-bef2-46c6-80e5-e4a39a523456 req-4fb4ae33-4583-4043-b9c0-1d1b83ea5620 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:44 np0005629333 nova_compute[244014]: 2026-02-25 12:45:44.244 244018 DEBUG oslo_concurrency.lockutils [req-b034baf2-bef2-46c6-80e5-e4a39a523456 req-4fb4ae33-4583-4043-b9c0-1d1b83ea5620 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:44 np0005629333 nova_compute[244014]: 2026-02-25 12:45:44.245 244018 DEBUG oslo_concurrency.lockutils [req-b034baf2-bef2-46c6-80e5-e4a39a523456 req-4fb4ae33-4583-4043-b9c0-1d1b83ea5620 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:44 np0005629333 nova_compute[244014]: 2026-02-25 12:45:44.245 244018 DEBUG nova.compute.manager [req-b034baf2-bef2-46c6-80e5-e4a39a523456 req-4fb4ae33-4583-4043-b9c0-1d1b83ea5620 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] No waiting events found dispatching network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:45:44 np0005629333 nova_compute[244014]: 2026-02-25 12:45:44.246 244018 WARNING nova.compute.manager [req-b034baf2-bef2-46c6-80e5-e4a39a523456 req-4fb4ae33-4583-4043-b9c0-1d1b83ea5620 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received unexpected event network-vif-plugged-8d032336-9efd-4e76-9498-4dafee40640b for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:45:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 441 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 496 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Feb 25 07:45:44 np0005629333 nova_compute[244014]: 2026-02-25 12:45:44.536 244018 DEBUG nova.network.neutron [req-16c4cf69-f6bc-40d5-a7f9-33e2dd705d9d req-224819f0-d97c-4752-bdf5-b3e81c0f201d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:45:45 np0005629333 nova_compute[244014]: 2026-02-25 12:45:45.202 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:45 np0005629333 nova_compute[244014]: 2026-02-25 12:45:45.272 244018 DEBUG nova.network.neutron [req-16c4cf69-f6bc-40d5-a7f9-33e2dd705d9d req-224819f0-d97c-4752-bdf5-b3e81c0f201d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:45:45 np0005629333 nova_compute[244014]: 2026-02-25 12:45:45.289 244018 DEBUG oslo_concurrency.lockutils [req-16c4cf69-f6bc-40d5-a7f9-33e2dd705d9d req-224819f0-d97c-4752-bdf5-b3e81c0f201d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:45:45 np0005629333 nova_compute[244014]: 2026-02-25 12:45:45.367 244018 DEBUG nova.network.neutron [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Successfully updated port: 39f87175-7c4f-4092-bce6-29b413c731cd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:45:45 np0005629333 nova_compute[244014]: 2026-02-25 12:45:45.403 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:45:45 np0005629333 nova_compute[244014]: 2026-02-25 12:45:45.404 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:45:45 np0005629333 nova_compute[244014]: 2026-02-25 12:45:45.404 244018 DEBUG nova.network.neutron [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:45:45 np0005629333 nova_compute[244014]: 2026-02-25 12:45:45.571 244018 DEBUG nova.network.neutron [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:45:46 np0005629333 nova_compute[244014]: 2026-02-25 12:45:46.097 244018 DEBUG nova.compute.manager [req-135385ba-4c34-4bee-95a7-b7b3b29dd8a3 req-d62b66db-e012-47be-882f-59f233095869 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received event network-changed-39f87175-7c4f-4092-bce6-29b413c731cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:45:46 np0005629333 nova_compute[244014]: 2026-02-25 12:45:46.098 244018 DEBUG nova.compute.manager [req-135385ba-4c34-4bee-95a7-b7b3b29dd8a3 req-d62b66db-e012-47be-882f-59f233095869 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Refreshing instance network info cache due to event network-changed-39f87175-7c4f-4092-bce6-29b413c731cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:45:46 np0005629333 nova_compute[244014]: 2026-02-25 12:45:46.099 244018 DEBUG oslo_concurrency.lockutils [req-135385ba-4c34-4bee-95a7-b7b3b29dd8a3 req-d62b66db-e012-47be-882f-59f233095869 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:45:46 np0005629333 nova_compute[244014]: 2026-02-25 12:45:46.139 244018 INFO nova.virt.libvirt.driver [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Deleting instance files /var/lib/nova/instances/aee87402-4b34-4083-888b-bb653e2beaa9_del#033[00m
Feb 25 07:45:46 np0005629333 nova_compute[244014]: 2026-02-25 12:45:46.140 244018 INFO nova.virt.libvirt.driver [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Deletion of /var/lib/nova/instances/aee87402-4b34-4083-888b-bb653e2beaa9_del complete#033[00m
Feb 25 07:45:46 np0005629333 nova_compute[244014]: 2026-02-25 12:45:46.210 244018 INFO nova.compute.manager [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Took 6.33 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:45:46 np0005629333 nova_compute[244014]: 2026-02-25 12:45:46.211 244018 DEBUG oslo.service.loopingcall [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:45:46 np0005629333 nova_compute[244014]: 2026-02-25 12:45:46.212 244018 DEBUG nova.compute.manager [-] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:45:46 np0005629333 nova_compute[244014]: 2026-02-25 12:45:46.212 244018 DEBUG nova.network.neutron [-] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:45:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1956: 305 pgs: 305 active+clean; 441 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.8 MiB/s wr, 21 op/s
Feb 25 07:45:46 np0005629333 nova_compute[244014]: 2026-02-25 12:45:46.980 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.178 244018 DEBUG nova.network.neutron [-] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.204 244018 INFO nova.compute.manager [-] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Took 0.99 seconds to deallocate network for instance.#033[00m
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.265 244018 DEBUG oslo_concurrency.lockutils [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.266 244018 DEBUG oslo_concurrency.lockutils [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.391 244018 DEBUG oslo_concurrency.processutils [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.772 244018 DEBUG nova.network.neutron [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Updating instance_info_cache with network_info: [{"id": "443e71ca-9c38-40e8-b799-1026ef47c35d", "address": "fa:16:3e:98:e2:13", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap443e71ca-9c", "ovs_interfaceid": "443e71ca-9c38-40e8-b799-1026ef47c35d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "39f87175-7c4f-4092-bce6-29b413c731cd", "address": "fa:16:3e:b2:de:d3", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39f87175-7c", "ovs_interfaceid": "39f87175-7c4f-4092-bce6-29b413c731cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.800 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.800 244018 DEBUG nova.compute.manager [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Instance network_info: |[{"id": "443e71ca-9c38-40e8-b799-1026ef47c35d", "address": "fa:16:3e:98:e2:13", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap443e71ca-9c", "ovs_interfaceid": "443e71ca-9c38-40e8-b799-1026ef47c35d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "39f87175-7c4f-4092-bce6-29b413c731cd", "address": "fa:16:3e:b2:de:d3", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39f87175-7c", "ovs_interfaceid": "39f87175-7c4f-4092-bce6-29b413c731cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.802 244018 DEBUG oslo_concurrency.lockutils [req-135385ba-4c34-4bee-95a7-b7b3b29dd8a3 req-d62b66db-e012-47be-882f-59f233095869 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.802 244018 DEBUG nova.network.neutron [req-135385ba-4c34-4bee-95a7-b7b3b29dd8a3 req-d62b66db-e012-47be-882f-59f233095869 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Refreshing network info cache for port 39f87175-7c4f-4092-bce6-29b413c731cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.809 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Start _get_guest_xml network_info=[{"id": "443e71ca-9c38-40e8-b799-1026ef47c35d", "address": "fa:16:3e:98:e2:13", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap443e71ca-9c", "ovs_interfaceid": "443e71ca-9c38-40e8-b799-1026ef47c35d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "39f87175-7c4f-4092-bce6-29b413c731cd", "address": "fa:16:3e:b2:de:d3", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39f87175-7c", "ovs_interfaceid": "39f87175-7c4f-4092-bce6-29b413c731cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.815 244018 WARNING nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.821 244018 DEBUG nova.virt.libvirt.host [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.822 244018 DEBUG nova.virt.libvirt.host [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.826 244018 DEBUG nova.virt.libvirt.host [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.827 244018 DEBUG nova.virt.libvirt.host [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.828 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.828 244018 DEBUG nova.virt.hardware [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.829 244018 DEBUG nova.virt.hardware [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.830 244018 DEBUG nova.virt.hardware [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.830 244018 DEBUG nova.virt.hardware [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.831 244018 DEBUG nova.virt.hardware [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.831 244018 DEBUG nova.virt.hardware [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.832 244018 DEBUG nova.virt.hardware [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.832 244018 DEBUG nova.virt.hardware [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.833 244018 DEBUG nova.virt.hardware [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.833 244018 DEBUG nova.virt.hardware [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.834 244018 DEBUG nova.virt.hardware [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
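
The eight nova.virt.hardware lines above record CPU topology selection for the 1-vCPU m1.nano flavor: with no flavor or image constraints, the limits default to 65536 sockets/cores/threads, and the only factorization of 1 vCPU is sockets=1, cores=1, threads=1. A minimal sketch of that enumeration step (my own illustration of the idea, not nova's actual code):

    from itertools import product

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # every (sockets, cores, threads) triple whose product is exactly vcpus
        span = lambda m: range(1, min(vcpus, m) + 1)
        return [(s, c, t)
                for s, c, t in product(span(max_sockets), span(max_cores), span(max_threads))
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)] -- matches "Got 1 possible topologies"
    print(possible_topologies(4))  # (1, 1, 4), (1, 2, 2), (2, 2, 1), (4, 1, 1), ...
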
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.839 244018 DEBUG oslo_concurrency.processutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:45:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:45:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/146916130' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:45:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:45:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/146916130' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 07:45:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:45:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3897932473' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.962 244018 DEBUG oslo_concurrency.processutils [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
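
The "Running cmd" / "returned: 0 in 0.571s" pair for req-c528b69b shows how nova's RBD image backend sizes its DISK_GB inventory: it shells out to the ceph CLI and parses JSON. A hedged reproduction using only the stdlib; the `--id openstack` client and conf path are taken verbatim from the log line, and the "stats" field names assume recent Ceph JSON output:

    import json
    import subprocess

    def ceph_df(client_id="openstack", conf="/etc/ceph/ceph.conf"):
        # same invocation the resource tracker logs above
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", client_id, "--conf", conf]
        )
        return json.loads(out)

    stats = ceph_df()
    # "stats" carries cluster totals; "pools" lists per-pool usage
    print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])
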
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.970 244018 DEBUG nova.compute.provider_tree [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:45:47 np0005629333 nova_compute[244014]: 2026-02-25 12:45:47.996 244018 DEBUG nova.scheduler.client.report [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:45:48 np0005629333 nova_compute[244014]: 2026-02-25 12:45:48.033 244018 DEBUG oslo_concurrency.lockutils [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:48 np0005629333 nova_compute[244014]: 2026-02-25 12:45:48.076 244018 INFO nova.scheduler.client.report [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Deleted allocations for instance aee87402-4b34-4083-888b-bb653e2beaa9#033[00m
Feb 25 07:45:48 np0005629333 nova_compute[244014]: 2026-02-25 12:45:48.164 244018 DEBUG oslo_concurrency.lockutils [None req-c528b69b-be42-4cba-80c9-1106c246ef48 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "aee87402-4b34-4083-888b-bb653e2beaa9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.290s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
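
The Acquiring/acquired/released triplets throughout this window come from oslo.concurrency's synchronized wrapper (the `inner` function at lockutils.py:404/409/423 in the traces); the 8.290s "held" figure above is the full do_terminate_instance critical section. A minimal sketch of the pattern, reusing the "compute_resources" lock name from the log (the function body is hypothetical):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_usage():
        # runs with the in-process semaphore held; the wrapper emits the
        # "acquired ... :: waited" / "released ... :: held" DEBUG pairs seen above
        pass
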
Feb 25 07:45:48 np0005629333 nova_compute[244014]: 2026-02-25 12:45:48.237 244018 DEBUG nova.compute.manager [req-ef5ec5e0-88e6-44f4-9d11-8a9da9b1b7ae req-2e8eb9ea-d7b5-4843-acb0-6702bdb62b1a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Received event network-vif-deleted-8d032336-9efd-4e76-9498-4dafee40640b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:45:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:45:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1168713874' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:45:48 np0005629333 nova_compute[244014]: 2026-02-25 12:45:48.378 244018 DEBUG oslo_concurrency.processutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:45:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 305 active+clean; 360 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Feb 25 07:45:48 np0005629333 nova_compute[244014]: 2026-02-25 12:45:48.414 244018 DEBUG nova.storage.rbd_utils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
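
Here rbd_utils probes for a per-instance config-drive image (`..._disk.config` in the "vms" pool, per the domain XML further below) and logs the miss when the open raises ImageNotFound. A sketch of that probe with the python-rados/python-rbd bindings, assuming the same client id and conf path the log shows:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")
    try:
        # opening the image is the existence test
        rbd.Image(ioctx, "2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk.config").close()
        print("image exists")
    except rbd.ImageNotFound:
        print("image does not exist")  # the case the DEBUG line reports
    finally:
        ioctx.close()
        cluster.shutdown()
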
Feb 25 07:45:48 np0005629333 nova_compute[244014]: 2026-02-25 12:45:48.418 244018 DEBUG oslo_concurrency.processutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:45:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:45:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:45:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/516825819' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:45:48 np0005629333 nova_compute[244014]: 2026-02-25 12:45:48.968 244018 DEBUG oslo_concurrency.processutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:45:48 np0005629333 nova_compute[244014]: 2026-02-25 12:45:48.971 244018 DEBUG nova.virt.libvirt.vif [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:45:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1283323036',display_name='tempest-TestGettingAddress-server-1283323036',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1283323036',id=115,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9kunFNn82YP0ulUqe0tIhu80P4MbOBi6POOa2CvjUcw/O3cOMXLzAIoTeZySuOv8M/4O47QFj9wA4a/asArTJADOO8TDCsWVVXiUd+J4BzXpwIPt1WHbwTYgvUR5L7vw==',key_name='tempest-TestGettingAddress-931608106',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-bo0inwqe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:45:37Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=2b9ea98f-9b1a-495b-8669-e79da967b0ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "443e71ca-9c38-40e8-b799-1026ef47c35d", "address": "fa:16:3e:98:e2:13", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap443e71ca-9c", "ovs_interfaceid": "443e71ca-9c38-40e8-b799-1026ef47c35d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, 
"meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:45:48 np0005629333 nova_compute[244014]: 2026-02-25 12:45:48.971 244018 DEBUG nova.network.os_vif_util [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "443e71ca-9c38-40e8-b799-1026ef47c35d", "address": "fa:16:3e:98:e2:13", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap443e71ca-9c", "ovs_interfaceid": "443e71ca-9c38-40e8-b799-1026ef47c35d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:45:48 np0005629333 nova_compute[244014]: 2026-02-25 12:45:48.973 244018 DEBUG nova.network.os_vif_util [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:e2:13,bridge_name='br-int',has_traffic_filtering=True,id=443e71ca-9c38-40e8-b799-1026ef47c35d,network=Network(bb79e0fd-2a4d-4a70-9c80-4853297401ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap443e71ca-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:45:48 np0005629333 nova_compute[244014]: 2026-02-25 12:45:48.975 244018 DEBUG nova.virt.libvirt.vif [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:45:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1283323036',display_name='tempest-TestGettingAddress-server-1283323036',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1283323036',id=115,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9kunFNn82YP0ulUqe0tIhu80P4MbOBi6POOa2CvjUcw/O3cOMXLzAIoTeZySuOv8M/4O47QFj9wA4a/asArTJADOO8TDCsWVVXiUd+J4BzXpwIPt1WHbwTYgvUR5L7vw==',key_name='tempest-TestGettingAddress-931608106',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-bo0inwqe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:45:37Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=2b9ea98f-9b1a-495b-8669-e79da967b0ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "39f87175-7c4f-4092-bce6-29b413c731cd", "address": "fa:16:3e:b2:de:d3", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39f87175-7c", "ovs_interfaceid": "39f87175-7c4f-4092-bce6-29b413c731cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:45:48 np0005629333 nova_compute[244014]: 2026-02-25 12:45:48.976 244018 DEBUG nova.network.os_vif_util [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "39f87175-7c4f-4092-bce6-29b413c731cd", "address": "fa:16:3e:b2:de:d3", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39f87175-7c", "ovs_interfaceid": "39f87175-7c4f-4092-bce6-29b413c731cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:45:48 np0005629333 nova_compute[244014]: 2026-02-25 12:45:48.977 244018 DEBUG nova.network.os_vif_util [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:de:d3,bridge_name='br-int',has_traffic_filtering=True,id=39f87175-7c4f-4092-bce6-29b413c731cd,network=Network(6df13266-bcfe-4a5b-94c4-81b5f08a6c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39f87175-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:45:48 np0005629333 nova_compute[244014]: 2026-02-25 12:45:48.979 244018 DEBUG nova.objects.instance [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b9ea98f-9b1a-495b-8669-e79da967b0ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.011 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:45:49 np0005629333 nova_compute[244014]:  <uuid>2b9ea98f-9b1a-495b-8669-e79da967b0ab</uuid>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:  <name>instance-00000073</name>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestGettingAddress-server-1283323036</nova:name>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:45:47</nova:creationTime>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:45:49 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:        <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:        <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:        <nova:port uuid="443e71ca-9c38-40e8-b799-1026ef47c35d">
Feb 25 07:45:49 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:        <nova:port uuid="39f87175-7c4f-4092-bce6-29b413c731cd">
Feb 25 07:45:49 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:feb2:ded3" ipVersion="6"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:feb2:ded3" ipVersion="6"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <entry name="serial">2b9ea98f-9b1a-495b-8669-e79da967b0ab</entry>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <entry name="uuid">2b9ea98f-9b1a-495b-8669-e79da967b0ab</entry>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk">
Feb 25 07:45:49 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:45:49 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk.config">
Feb 25 07:45:49 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:45:49 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:98:e2:13"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <target dev="tap443e71ca-9c"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:b2:de:d3"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <target dev="tap39f87175-7c"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/2b9ea98f-9b1a-495b-8669-e79da967b0ab/console.log" append="off"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:45:49 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:45:49 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:45:49 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:45:49 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
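The XML dump above is the guest definition Nova's libvirt driver generated for the instance being spawned. A minimal sketch of retrieving the same document for a running guest with libvirt-python, roughly what `virsh dumpxml` does; the connection URI and domain name here are illustrative assumptions, not values from the log:

    # Sketch only: fetch a running guest's XML as `virsh dumpxml` would.
    import libvirt

    conn = libvirt.open("qemu:///system")        # local libvirt daemon (assumed URI)
    try:
        dom = conn.lookupByName("instance-00000073")  # assumed libvirt domain name
        print(dom.XMLDesc(0))                    # same shape as the dump above
    finally:
        conn.close()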
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.012 244018 DEBUG nova.compute.manager [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Preparing to wait for external event network-vif-plugged-443e71ca-9c38-40e8-b799-1026ef47c35d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.013 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.013 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.014 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.014 244018 DEBUG nova.compute.manager [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Preparing to wait for external event network-vif-plugged-39f87175-7c4f-4092-bce6-29b413c731cd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.014 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.015 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.015 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
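The acquire/release pairs above are oslo.concurrency in-process locks; the per-instance "<uuid>-events" lock serializes event registration for that instance. A minimal sketch of the same pattern (function and variable names are illustrative, not Nova's actual code):

    # Sketch of the lock pattern logged above, using oslo.concurrency directly.
    from oslo_concurrency import lockutils

    def _create_or_get_event(events, instance_uuid, event_name):
        # Mirrors the "<uuid>-events" acquire/release pairs in the log.
        with lockutils.lock(f"{instance_uuid}-events"):
            return events.setdefault((instance_uuid, event_name), object())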
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.016 244018 DEBUG nova.virt.libvirt.vif [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:45:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1283323036',display_name='tempest-TestGettingAddress-server-1283323036',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1283323036',id=115,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9kunFNn82YP0ulUqe0tIhu80P4MbOBi6POOa2CvjUcw/O3cOMXLzAIoTeZySuOv8M/4O47QFj9wA4a/asArTJADOO8TDCsWVVXiUd+J4BzXpwIPt1WHbwTYgvUR5L7vw==',key_name='tempest-TestGettingAddress-931608106',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-bo0inwqe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:45:37Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=2b9ea98f-9b1a-495b-8669-e79da967b0ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "443e71ca-9c38-40e8-b799-1026ef47c35d", "address": "fa:16:3e:98:e2:13", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap443e71ca-9c", "ovs_interfaceid": "443e71ca-9c38-40e8-b799-1026ef47c35d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.017 244018 DEBUG nova.network.os_vif_util [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "443e71ca-9c38-40e8-b799-1026ef47c35d", "address": "fa:16:3e:98:e2:13", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap443e71ca-9c", "ovs_interfaceid": "443e71ca-9c38-40e8-b799-1026ef47c35d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.018 244018 DEBUG nova.network.os_vif_util [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:e2:13,bridge_name='br-int',has_traffic_filtering=True,id=443e71ca-9c38-40e8-b799-1026ef47c35d,network=Network(bb79e0fd-2a4d-4a70-9c80-4853297401ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap443e71ca-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.018 244018 DEBUG os_vif [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:e2:13,bridge_name='br-int',has_traffic_filtering=True,id=443e71ca-9c38-40e8-b799-1026ef47c35d,network=Network(bb79e0fd-2a4d-4a70-9c80-4853297401ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap443e71ca-9c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.019 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.020 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.021 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.024 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.025 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap443e71ca-9c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.026 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap443e71ca-9c, col_values=(('external_ids', {'iface-id': '443e71ca-9c38-40e8-b799-1026ef47c35d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:98:e2:13', 'vm-uuid': '2b9ea98f-9b1a-495b-8669-e79da967b0ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:49 np0005629333 NetworkManager[49836]: <info>  [1772023549.0299] manager: (tap443e71ca-9c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/485)
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.032 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.034 244018 INFO os_vif [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:e2:13,bridge_name='br-int',has_traffic_filtering=True,id=443e71ca-9c38-40e8-b799-1026ef47c35d,network=Network(bb79e0fd-2a4d-4a70-9c80-4853297401ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap443e71ca-9c')#033[00m
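The AddBridgeCommand/AddPortCommand/DbSetCommand sequence above is what os-vif's OVS plugin issues through ovsdbapp. A standalone sketch of the same plug transaction, with the port name, external_ids and socket path taken from or assumed for this log (a sketch, not os-vif's actual code path):

    # Sketch: replay the plug transaction with ovsdbapp against the local OVSDB.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")   # assumed socket path
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=5))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tap443e71ca-9c", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap443e71ca-9c",
            ("external_ids", {
                "iface-id": "443e71ca-9c38-40e8-b799-1026ef47c35d",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:98:e2:13",
                "vm-uuid": "2b9ea98f-9b1a-495b-8669-e79da967b0ab"})))

Setting external_ids:iface-id is what lets ovn-controller match the OVS interface to its logical port and claim it, as seen further down.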
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.035 244018 DEBUG nova.virt.libvirt.vif [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:45:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1283323036',display_name='tempest-TestGettingAddress-server-1283323036',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1283323036',id=115,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9kunFNn82YP0ulUqe0tIhu80P4MbOBi6POOa2CvjUcw/O3cOMXLzAIoTeZySuOv8M/4O47QFj9wA4a/asArTJADOO8TDCsWVVXiUd+J4BzXpwIPt1WHbwTYgvUR5L7vw==',key_name='tempest-TestGettingAddress-931608106',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-bo0inwqe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:45:37Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=2b9ea98f-9b1a-495b-8669-e79da967b0ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "39f87175-7c4f-4092-bce6-29b413c731cd", "address": "fa:16:3e:b2:de:d3", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39f87175-7c", "ovs_interfaceid": "39f87175-7c4f-4092-bce6-29b413c731cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.036 244018 DEBUG nova.network.os_vif_util [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "39f87175-7c4f-4092-bce6-29b413c731cd", "address": "fa:16:3e:b2:de:d3", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39f87175-7c", "ovs_interfaceid": "39f87175-7c4f-4092-bce6-29b413c731cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.037 244018 DEBUG nova.network.os_vif_util [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:de:d3,bridge_name='br-int',has_traffic_filtering=True,id=39f87175-7c4f-4092-bce6-29b413c731cd,network=Network(6df13266-bcfe-4a5b-94c4-81b5f08a6c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39f87175-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.038 244018 DEBUG os_vif [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:de:d3,bridge_name='br-int',has_traffic_filtering=True,id=39f87175-7c4f-4092-bce6-29b413c731cd,network=Network(6df13266-bcfe-4a5b-94c4-81b5f08a6c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39f87175-7c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.039 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.039 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.040 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.044 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.044 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39f87175-7c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.045 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap39f87175-7c, col_values=(('external_ids', {'iface-id': '39f87175-7c4f-4092-bce6-29b413c731cd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b2:de:d3', 'vm-uuid': '2b9ea98f-9b1a-495b-8669-e79da967b0ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.046 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:49 np0005629333 NetworkManager[49836]: <info>  [1772023549.0478] manager: (tap39f87175-7c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/486)
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.049 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.055 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.056 244018 INFO os_vif [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:de:d3,bridge_name='br-int',has_traffic_filtering=True,id=39f87175-7c4f-4092-bce6-29b413c731cd,network=Network(6df13266-bcfe-4a5b-94c4-81b5f08a6c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39f87175-7c')#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.229 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.230 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.230 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:98:e2:13, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.230 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:b2:de:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.231 244018 INFO nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Using config drive#033[00m
Feb 25 07:45:49 np0005629333 nova_compute[244014]: 2026-02-25 12:45:49.264 244018 DEBUG nova.storage.rbd_utils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
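The "rbd image ... does not exist" check above goes through the Ceph Python bindings, where opening a nonexistent image raises rbd.ImageNotFound. A minimal sketch of a comparable existence probe (pool, client id and conf path taken from the log; not nova's actual rbd_utils code):

    # Sketch: probe for an RBD image with the Ceph Python bindings.
    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            rbd.Image(ioctx, "2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk.config").close()
            print("image exists")
        except rbd.ImageNotFound:
            print("image does not exist")   # the case logged above
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()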
Feb 25 07:45:50 np0005629333 nova_compute[244014]: 2026-02-25 12:45:50.203 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:50 np0005629333 nova_compute[244014]: 2026-02-25 12:45:50.305 244018 INFO nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Creating config drive at /var/lib/nova/instances/2b9ea98f-9b1a-495b-8669-e79da967b0ab/disk.config#033[00m
Feb 25 07:45:50 np0005629333 nova_compute[244014]: 2026-02-25 12:45:50.310 244018 DEBUG oslo_concurrency.processutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2b9ea98f-9b1a-495b-8669-e79da967b0ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqs3b8jyh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:45:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 360 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 25 07:45:50 np0005629333 nova_compute[244014]: 2026-02-25 12:45:50.446 244018 DEBUG oslo_concurrency.processutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2b9ea98f-9b1a-495b-8669-e79da967b0ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqs3b8jyh" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
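The config drive is a plain ISO9660/Joliet image built with the mkisofs flags shown in the CMD line above. A sketch of issuing the same command through oslo.concurrency; note that processutils logs commands joined with spaces, so the multi-word -publisher value appears unquoted in the log but is a single argument:

    # Sketch: rebuild the config-drive ISO with the flags from the log.
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        "/usr/bin/mkisofs",
        "-o", "/var/lib/nova/instances/2b9ea98f-9b1a-495b-8669-e79da967b0ab/disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
        "-quiet", "-J", "-r", "-V", "config-2",
        "/tmp/tmpqs3b8jyh")   # the staged metadata directory from the log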
Feb 25 07:45:50 np0005629333 nova_compute[244014]: 2026-02-25 12:45:50.487 244018 DEBUG nova.storage.rbd_utils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:45:50 np0005629333 nova_compute[244014]: 2026-02-25 12:45:50.493 244018 DEBUG oslo_concurrency.processutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2b9ea98f-9b1a-495b-8669-e79da967b0ab/disk.config 2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:45:50 np0005629333 nova_compute[244014]: 2026-02-25 12:45:50.561 244018 DEBUG nova.network.neutron [req-135385ba-4c34-4bee-95a7-b7b3b29dd8a3 req-d62b66db-e012-47be-882f-59f233095869 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Updated VIF entry in instance network info cache for port 39f87175-7c4f-4092-bce6-29b413c731cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:45:50 np0005629333 nova_compute[244014]: 2026-02-25 12:45:50.562 244018 DEBUG nova.network.neutron [req-135385ba-4c34-4bee-95a7-b7b3b29dd8a3 req-d62b66db-e012-47be-882f-59f233095869 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Updating instance_info_cache with network_info: [{"id": "443e71ca-9c38-40e8-b799-1026ef47c35d", "address": "fa:16:3e:98:e2:13", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap443e71ca-9c", "ovs_interfaceid": "443e71ca-9c38-40e8-b799-1026ef47c35d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "39f87175-7c4f-4092-bce6-29b413c731cd", "address": "fa:16:3e:b2:de:d3", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39f87175-7c", "ovs_interfaceid": "39f87175-7c4f-4092-bce6-29b413c731cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:45:50 np0005629333 nova_compute[244014]: 2026-02-25 12:45:50.580 244018 DEBUG oslo_concurrency.lockutils [req-135385ba-4c34-4bee-95a7-b7b3b29dd8a3 req-d62b66db-e012-47be-882f-59f233095869 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:45:50 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:50Z|01150|binding|INFO|Releasing lport 0b276f7a-90bb-4427-8f39-0e014732fd20 from this chassis (sb_readonly=0)
Feb 25 07:45:50 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:50Z|01151|binding|INFO|Releasing lport 091afd0d-cbaa-4d88-8f55-1965b8ffcb56 from this chassis (sb_readonly=0)
Feb 25 07:45:50 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:50Z|01152|binding|INFO|Releasing lport 751d6f2c-9451-4526-9618-730226a78a70 from this chassis (sb_readonly=0)
Feb 25 07:45:51 np0005629333 nova_compute[244014]: 2026-02-25 12:45:51.012 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:51 np0005629333 nova_compute[244014]: 2026-02-25 12:45:51.405 244018 DEBUG nova.compute.manager [req-4784e8c7-36a7-4114-9b5a-67ca98e1f4dc req-e9d52444-f7c9-46c5-be00-fd6f8c60d7f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-changed-97fb9f99-cb59-4581-8866-375ea3e167d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:45:51 np0005629333 nova_compute[244014]: 2026-02-25 12:45:51.406 244018 DEBUG nova.compute.manager [req-4784e8c7-36a7-4114-9b5a-67ca98e1f4dc req-e9d52444-f7c9-46c5-be00-fd6f8c60d7f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Refreshing instance network info cache due to event network-changed-97fb9f99-cb59-4581-8866-375ea3e167d7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:45:51 np0005629333 nova_compute[244014]: 2026-02-25 12:45:51.407 244018 DEBUG oslo_concurrency.lockutils [req-4784e8c7-36a7-4114-9b5a-67ca98e1f4dc req-e9d52444-f7c9-46c5-be00-fd6f8c60d7f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:45:51 np0005629333 nova_compute[244014]: 2026-02-25 12:45:51.407 244018 DEBUG oslo_concurrency.lockutils [req-4784e8c7-36a7-4114-9b5a-67ca98e1f4dc req-e9d52444-f7c9-46c5-be00-fd6f8c60d7f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:45:51 np0005629333 nova_compute[244014]: 2026-02-25 12:45:51.407 244018 DEBUG nova.network.neutron [req-4784e8c7-36a7-4114-9b5a-67ca98e1f4dc req-e9d52444-f7c9-46c5-be00-fd6f8c60d7f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Refreshing network info cache for port 97fb9f99-cb59-4581-8866-375ea3e167d7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:45:51 np0005629333 nova_compute[244014]: 2026-02-25 12:45:51.499 244018 DEBUG oslo_concurrency.lockutils [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:51 np0005629333 nova_compute[244014]: 2026-02-25 12:45:51.500 244018 DEBUG oslo_concurrency.lockutils [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:51 np0005629333 nova_compute[244014]: 2026-02-25 12:45:51.500 244018 DEBUG oslo_concurrency.lockutils [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:51 np0005629333 nova_compute[244014]: 2026-02-25 12:45:51.501 244018 DEBUG oslo_concurrency.lockutils [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:51 np0005629333 nova_compute[244014]: 2026-02-25 12:45:51.501 244018 DEBUG oslo_concurrency.lockutils [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:51 np0005629333 nova_compute[244014]: 2026-02-25 12:45:51.503 244018 INFO nova.compute.manager [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Terminating instance#033[00m
Feb 25 07:45:51 np0005629333 nova_compute[244014]: 2026-02-25 12:45:51.505 244018 DEBUG nova.compute.manager [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:45:52 np0005629333 kernel: tap97fb9f99-cb (unregistering): left promiscuous mode
Feb 25 07:45:52 np0005629333 NetworkManager[49836]: <info>  [1772023552.0902] device (tap97fb9f99-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:45:52 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:52Z|01153|binding|INFO|Releasing lport 97fb9f99-cb59-4581-8866-375ea3e167d7 from this chassis (sb_readonly=0)
Feb 25 07:45:52 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:52Z|01154|binding|INFO|Setting lport 97fb9f99-cb59-4581-8866-375ea3e167d7 down in Southbound
Feb 25 07:45:52 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:52Z|01155|binding|INFO|Removing iface tap97fb9f99-cb ovn-installed in OVS
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.099 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:52.108 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:b7:f7 10.100.0.11'], port_security=['fa:16:3e:61:b7:f7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '874359d8-3251-4416-82dc-f6776853e384', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6c0adb05683141e7a0b866f450e410e0', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'c8e48a3c-4f73-4222-8cd8-b956480085c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88c1b411-2973-4db5-967d-0a2cebc1066f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=97fb9f99-cb59-4581-8866-375ea3e167d7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:45:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:52.110 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 97fb9f99-cb59-4581-8866-375ea3e167d7 in datapath e4d059ba-eacc-463b-a8a4-393a5a36dba3 unbound from our chassis#033[00m
Feb 25 07:45:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:52.113 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e4d059ba-eacc-463b-a8a4-393a5a36dba3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:45:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:52.114 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[81f51818-2a00-45b5-8b5e-abc370b10487]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:52.115 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3 namespace which is not needed anymore#033[00m
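The PortBindingUpdatedEvent match above is an ovsdbapp row event fired on Port_Binding updates (the matches() hit is logged from ovsdbapp/backend/ovs_idl/event.py). A rough sketch of the general shape of such an event class, not the neutron metadata agent's actual implementation:

    # Sketch: an ovsdbapp row event watching Port_Binding updates.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Watch 'update' events on the Port_Binding table, no row filter.
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)
            self.event_name = "PortBindingUpdatedEvent"

        def run(self, event, row, old):
            # Invoked once matches() accepts the update; the agent reacts
            # here, e.g. tearing down the metadata namespace as logged above.
            print("binding changed for lport", row.logical_port)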
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.123 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:52 np0005629333 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000070.scope: Deactivated successfully.
Feb 25 07:45:52 np0005629333 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000070.scope: Consumed 13.233s CPU time.
Feb 25 07:45:52 np0005629333 systemd-machined[210048]: Machine qemu-144-instance-00000070 terminated.
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.339 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.351 244018 INFO nova.virt.libvirt.driver [-] [instance: 874359d8-3251-4416-82dc-f6776853e384] Instance destroyed successfully.#033[00m
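"Instance destroyed successfully" follows a hard libvirt destroy of the guest whose systemd machine scope (qemu-144-instance-00000070) deactivated just above. A minimal libvirt-python sketch of that step (URI assumed):

    # Sketch only: hard power-off of the guest, as nova's destroy path does.
    import libvirt

    conn = libvirt.open("qemu:///system")
    try:
        conn.lookupByName("instance-00000070").destroy()
    finally:
        conn.close()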
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.351 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.353 244018 DEBUG nova.objects.instance [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lazy-loading 'resources' on Instance uuid 874359d8-3251-4416-82dc-f6776853e384 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.368 244018 DEBUG nova.virt.libvirt.vif [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-02-25T12:44:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1245136630',display_name='tempest-TestShelveInstance-server-1245136630',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1245136630',id=112,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP9lQWzdSu90yaQLNul1MY74fQc98Cg9DYq6NKBJiDcNTT2XlppuxCcKfQtu58BOyBXAh+PrpYKR3FwTDUP4XAt9wnb5ixmiKo9lnvQqdB0jXAsnGExk5Uj48m62YSy+xg==',key_name='tempest-TestShelveInstance-571084469',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:45:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6c0adb05683141e7a0b866f450e410e0',ramdisk_id='',reservation_id='r-tfbrsni1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1925524092',owner_user_name='tempest-TestShelveInstance-1925524092-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:45:17Z,user_data=None,user_id='44c0b78107ea4f7381e82a02c5954e7c',uuid=874359d8-3251-4416-82dc-f6776853e384,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.369 244018 DEBUG nova.network.os_vif_util [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Converting VIF {"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.370 244018 DEBUG nova.network.os_vif_util [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:61:b7:f7,bridge_name='br-int',has_traffic_filtering=True,id=97fb9f99-cb59-4581-8866-375ea3e167d7,network=Network(e4d059ba-eacc-463b-a8a4-393a5a36dba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97fb9f99-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.371 244018 DEBUG os_vif [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:61:b7:f7,bridge_name='br-int',has_traffic_filtering=True,id=97fb9f99-cb59-4581-8866-375ea3e167d7,network=Network(e4d059ba-eacc-463b-a8a4-393a5a36dba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97fb9f99-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.374 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.374 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97fb9f99-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.376 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.380 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.385 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.388 244018 INFO os_vif [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:61:b7:f7,bridge_name='br-int',has_traffic_filtering=True,id=97fb9f99-cb59-4581-8866-375ea3e167d7,network=Network(e4d059ba-eacc-463b-a8a4-393a5a36dba3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97fb9f99-cb')#033[00m
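The unplug side is a single DelPortCommand. Its ovsdbapp equivalent, sketched with the same assumed OVSDB socket as the plug example earlier:

    # Sketch: the unplug counterpart, one DelPortCommand against br-int.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")   # assumed socket path
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=5))
    api.del_port("tap97fb9f99-cb", "br-int", if_exists=True).execute(check_error=True)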
Feb 25 07:45:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 305 active+clean; 360 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 237 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.433 244018 DEBUG oslo_concurrency.processutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2b9ea98f-9b1a-495b-8669-e79da967b0ab/disk.config 2b9ea98f-9b1a-495b-8669-e79da967b0ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.940s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.434 244018 INFO nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Deleting local config drive /var/lib/nova/instances/2b9ea98f-9b1a-495b-8669-e79da967b0ab/disk.config because it was imported into RBD.#033[00m
Feb 25 07:45:52 np0005629333 systemd-udevd[346869]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:45:52 np0005629333 kernel: tap443e71ca-9c: entered promiscuous mode
Feb 25 07:45:52 np0005629333 NetworkManager[49836]: <info>  [1772023552.4911] manager: (tap443e71ca-9c): new Tun device (/org/freedesktop/NetworkManager/Devices/487)
Feb 25 07:45:52 np0005629333 NetworkManager[49836]: <info>  [1772023552.5036] device (tap443e71ca-9c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:45:52 np0005629333 NetworkManager[49836]: <info>  [1772023552.5054] device (tap443e71ca-9c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.548 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:52 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:52Z|01156|binding|INFO|Claiming lport 443e71ca-9c38-40e8-b799-1026ef47c35d for this chassis.
Feb 25 07:45:52 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:52Z|01157|binding|INFO|443e71ca-9c38-40e8-b799-1026ef47c35d: Claiming fa:16:3e:98:e2:13 10.100.0.4
Feb 25 07:45:52 np0005629333 NetworkManager[49836]: <info>  [1772023552.5567] manager: (tap39f87175-7c): new Tun device (/org/freedesktop/NetworkManager/Devices/488)
Feb 25 07:45:52 np0005629333 kernel: tap39f87175-7c: entered promiscuous mode
Feb 25 07:45:52 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:52Z|01158|binding|INFO|Claiming lport 39f87175-7c4f-4092-bce6-29b413c731cd for this chassis.
Feb 25 07:45:52 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:52Z|01159|binding|INFO|39f87175-7c4f-4092-bce6-29b413c731cd: Claiming fa:16:3e:b2:de:d3 2001:db8:0:1:f816:3eff:feb2:ded3 2001:db8::f816:3eff:feb2:ded3
Feb 25 07:45:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:52.560 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:e2:13 10.100.0.4'], port_security=['fa:16:3e:98:e2:13 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2b9ea98f-9b1a-495b-8669-e79da967b0ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '09e05c4f-db6b-40c9-84a5-79dc857b9d0a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1aa114a0-206b-4aa1-b770-18f248344fa4, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=443e71ca-9c38-40e8-b799-1026ef47c35d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.561 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.563 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.565 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:52 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:52Z|01160|binding|INFO|Setting lport 443e71ca-9c38-40e8-b799-1026ef47c35d ovn-installed in OVS
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.566 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:52 np0005629333 NetworkManager[49836]: <info>  [1772023552.5687] device (tap39f87175-7c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:45:52 np0005629333 NetworkManager[49836]: <info>  [1772023552.5695] device (tap39f87175-7c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:45:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:52.569 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:de:d3 2001:db8:0:1:f816:3eff:feb2:ded3 2001:db8::f816:3eff:feb2:ded3'], port_security=['fa:16:3e:b2:de:d3 2001:db8:0:1:f816:3eff:feb2:ded3 2001:db8::f816:3eff:feb2:ded3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feb2:ded3/64 2001:db8::f816:3eff:feb2:ded3/64', 'neutron:device_id': '2b9ea98f-9b1a-495b-8669-e79da967b0ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '09e05c4f-db6b-40c9-84a5-79dc857b9d0a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5bfff87-5507-4fe1-aed9-1b014e8e7384, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=39f87175-7c4f-4092-bce6-29b413c731cd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:45:52 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:52Z|01161|binding|INFO|Setting lport 443e71ca-9c38-40e8-b799-1026ef47c35d up in Southbound
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.574 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.575 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:52 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:52Z|01162|binding|INFO|Setting lport 39f87175-7c4f-4092-bce6-29b413c731cd ovn-installed in OVS
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.577 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:52 np0005629333 ovn_controller[147040]: 2026-02-25T12:45:52Z|01163|binding|INFO|Setting lport 39f87175-7c4f-4092-bce6-29b413c731cd up in Southbound
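The ovn_controller lines above show the full port-binding handshake for both VIFs: claim the lport for this chassis, mark it ovn-installed in the local OVS database, then set it up in the Southbound database (the "up" flip is what lets neutron emit the network-vif-plugged events nova receives below). A minimal sketch of checking that state by hand, assuming ovn-sbctl on this host can reach the Southbound DB:

    # Sketch only: read the Port_Binding row ovn-controller just updated.
    import subprocess

    LPORT = "443e71ca-9c38-40e8-b799-1026ef47c35d"
    out = subprocess.run(
        ["ovn-sbctl", "--columns=logical_port,chassis,up",
         "find", "Port_Binding", f"logical_port={LPORT}"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)  # up should read [true] once the handshake completes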
Feb 25 07:45:52 np0005629333 systemd-machined[210048]: New machine qemu-146-instance-00000073.
Feb 25 07:45:52 np0005629333 systemd[1]: Started Virtual Machine qemu-146-instance-00000073.
Feb 25 07:45:52 np0005629333 neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3[346098]: [NOTICE]   (346103) : haproxy version is 2.8.14-c23fe91
Feb 25 07:45:52 np0005629333 neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3[346098]: [NOTICE]   (346103) : path to executable is /usr/sbin/haproxy
Feb 25 07:45:52 np0005629333 neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3[346098]: [WARNING]  (346103) : Exiting Master process...
Feb 25 07:45:52 np0005629333 neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3[346098]: [ALERT]    (346103) : Current worker (346106) exited with code 143 (Terminated)
Feb 25 07:45:52 np0005629333 neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3[346098]: [WARNING]  (346103) : All workers exited. Exiting... (0)
Feb 25 07:45:52 np0005629333 systemd[1]: libpod-f6917395b968e4f6241e2780401ad4854e911d6bc3c72f025de08a94f631bb80.scope: Deactivated successfully.
Feb 25 07:45:52 np0005629333 podman[346890]: 2026-02-25 12:45:52.835543628 +0000 UTC m=+0.613360690 container died f6917395b968e4f6241e2780401ad4854e911d6bc3c72f025de08a94f631bb80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.884 244018 DEBUG nova.compute.manager [req-9989124e-e8dc-4fcb-a8e4-98f2800633b8 req-b1054707-fda9-4223-8fae-48bc3e13d9b1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received event network-vif-plugged-39f87175-7c4f-4092-bce6-29b413c731cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.885 244018 DEBUG oslo_concurrency.lockutils [req-9989124e-e8dc-4fcb-a8e4-98f2800633b8 req-b1054707-fda9-4223-8fae-48bc3e13d9b1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.886 244018 DEBUG oslo_concurrency.lockutils [req-9989124e-e8dc-4fcb-a8e4-98f2800633b8 req-b1054707-fda9-4223-8fae-48bc3e13d9b1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.887 244018 DEBUG oslo_concurrency.lockutils [req-9989124e-e8dc-4fcb-a8e4-98f2800633b8 req-b1054707-fda9-4223-8fae-48bc3e13d9b1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:52 np0005629333 nova_compute[244014]: 2026-02-25 12:45:52.887 244018 DEBUG nova.compute.manager [req-9989124e-e8dc-4fcb-a8e4-98f2800633b8 req-b1054707-fda9-4223-8fae-48bc3e13d9b1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Processing event network-vif-plugged-39f87175-7c4f-4092-bce6-29b413c731cd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:45:53 np0005629333 nova_compute[244014]: 2026-02-25 12:45:53.282 244018 DEBUG nova.network.neutron [req-4784e8c7-36a7-4114-9b5a-67ca98e1f4dc req-e9d52444-f7c9-46c5-be00-fd6f8c60d7f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updated VIF entry in instance network info cache for port 97fb9f99-cb59-4581-8866-375ea3e167d7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:45:53 np0005629333 nova_compute[244014]: 2026-02-25 12:45:53.283 244018 DEBUG nova.network.neutron [req-4784e8c7-36a7-4114-9b5a-67ca98e1f4dc req-e9d52444-f7c9-46c5-be00-fd6f8c60d7f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updating instance_info_cache with network_info: [{"id": "97fb9f99-cb59-4581-8866-375ea3e167d7", "address": "fa:16:3e:61:b7:f7", "network": {"id": "e4d059ba-eacc-463b-a8a4-393a5a36dba3", "bridge": "br-int", "label": "tempest-TestShelveInstance-284265908-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c0adb05683141e7a0b866f450e410e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97fb9f99-cb", "ovs_interfaceid": "97fb9f99-cb59-4581-8866-375ea3e167d7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:45:53 np0005629333 nova_compute[244014]: 2026-02-25 12:45:53.301 244018 DEBUG oslo_concurrency.lockutils [req-4784e8c7-36a7-4114-9b5a-67ca98e1f4dc req-e9d52444-f7c9-46c5-be00-fd6f8c60d7f9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-874359d8-3251-4416-82dc-f6776853e384" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:45:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:45:53 np0005629333 nova_compute[244014]: 2026-02-25 12:45:53.550 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023553.5500739, 2b9ea98f-9b1a-495b-8669-e79da967b0ab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:45:53 np0005629333 nova_compute[244014]: 2026-02-25 12:45:53.550 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] VM Started (Lifecycle Event)#033[00m
Feb 25 07:45:53 np0005629333 nova_compute[244014]: 2026-02-25 12:45:53.570 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:45:53 np0005629333 nova_compute[244014]: 2026-02-25 12:45:53.576 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023553.5526402, 2b9ea98f-9b1a-495b-8669-e79da967b0ab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:45:53 np0005629333 nova_compute[244014]: 2026-02-25 12:45:53.577 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:45:53 np0005629333 nova_compute[244014]: 2026-02-25 12:45:53.594 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:45:53 np0005629333 nova_compute[244014]: 2026-02-25 12:45:53.597 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:45:53 np0005629333 nova_compute[244014]: 2026-02-25 12:45:53.743 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:45:53 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f6917395b968e4f6241e2780401ad4854e911d6bc3c72f025de08a94f631bb80-userdata-shm.mount: Deactivated successfully.
Feb 25 07:45:53 np0005629333 systemd[1]: var-lib-containers-storage-overlay-dff6ea0a8a911ca82af7fb30e8506525301753b046b20808291f024818c24148-merged.mount: Deactivated successfully.
Feb 25 07:45:54 np0005629333 podman[346890]: 2026-02-25 12:45:54.343521298 +0000 UTC m=+2.121338340 container cleanup f6917395b968e4f6241e2780401ad4854e911d6bc3c72f025de08a94f631bb80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 07:45:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:45:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:45:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:45:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:45:54 np0005629333 systemd[1]: libpod-conmon-f6917395b968e4f6241e2780401ad4854e911d6bc3c72f025de08a94f631bb80.scope: Deactivated successfully.
Feb 25 07:45:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:45:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 305 active+clean; 360 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 252 KiB/s rd, 763 KiB/s wr, 64 op/s
Feb 25 07:45:54 np0005629333 nova_compute[244014]: 2026-02-25 12:45:54.971 244018 DEBUG nova.compute.manager [req-4a64d203-0c3e-49d5-b37b-76d338e12ff6 req-7b7a5cdc-7877-41df-be2a-c978c11b970c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received event network-vif-plugged-39f87175-7c4f-4092-bce6-29b413c731cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:45:54 np0005629333 nova_compute[244014]: 2026-02-25 12:45:54.973 244018 DEBUG oslo_concurrency.lockutils [req-4a64d203-0c3e-49d5-b37b-76d338e12ff6 req-7b7a5cdc-7877-41df-be2a-c978c11b970c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:54 np0005629333 nova_compute[244014]: 2026-02-25 12:45:54.973 244018 DEBUG oslo_concurrency.lockutils [req-4a64d203-0c3e-49d5-b37b-76d338e12ff6 req-7b7a5cdc-7877-41df-be2a-c978c11b970c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:54 np0005629333 nova_compute[244014]: 2026-02-25 12:45:54.974 244018 DEBUG oslo_concurrency.lockutils [req-4a64d203-0c3e-49d5-b37b-76d338e12ff6 req-7b7a5cdc-7877-41df-be2a-c978c11b970c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:54 np0005629333 nova_compute[244014]: 2026-02-25 12:45:54.974 244018 DEBUG nova.compute.manager [req-4a64d203-0c3e-49d5-b37b-76d338e12ff6 req-7b7a5cdc-7877-41df-be2a-c978c11b970c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] No event matching network-vif-plugged-39f87175-7c4f-4092-bce6-29b413c731cd in dict_keys([('network-vif-plugged', '443e71ca-9c38-40e8-b799-1026ef47c35d')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Feb 25 07:45:54 np0005629333 nova_compute[244014]: 2026-02-25 12:45:54.975 244018 WARNING nova.compute.manager [req-4a64d203-0c3e-49d5-b37b-76d338e12ff6 req-7b7a5cdc-7877-41df-be2a-c978c11b970c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received unexpected event network-vif-plugged-39f87175-7c4f-4092-bce6-29b413c731cd for instance with vm_state building and task_state spawning.#033[00m
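The WARNING above is benign and follows directly from how nova dispatches external events: waiters are registered per instance under a "<uuid>-events" lock and keyed by (event name, tag), and the dict_keys output two lines up shows only the first VIF (443e71ca-...) was registered when the event for the second VIF (39f87175-...) arrived. A minimal sketch of that pop pattern, using illustrative stand-ins rather than nova's real structures:

    # Sketch only: per-instance event waiters keyed by (name, tag), guarded
    # by the same "<uuid>-events" lock name seen in the log above.
    from oslo_concurrency import lockutils

    _waiters = {
        ("network-vif-plugged", "443e71ca-9c38-40e8-b799-1026ef47c35d"): object(),
    }

    def pop_event(instance_uuid, name, tag):
        with lockutils.lock(f"{instance_uuid}-events"):
            return _waiters.pop((name, tag), None)  # None -> "unexpected event"

    # The second VIF was never registered, so this returns None:
    print(pop_event("2b9ea98f-9b1a-495b-8669-e79da967b0ab",
                    "network-vif-plugged",
                    "39f87175-7c4f-4092-bce6-29b413c731cd"))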
Feb 25 07:45:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.024 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.024 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.026 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:45:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:45:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:45:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:45:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:45:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
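The audited mon_commands above are cephadm's periodic reconciliation traffic from the mgr: regenerate a minimal ceph.conf and fetch keyrings. Both are ordinary ceph commands and can be reproduced from a shell with admin credentials, as in this minimal sketch:

    # Sketch only: the same commands the mgr dispatches above.
    import subprocess

    conf = subprocess.run(["ceph", "config", "generate-minimal-conf"],
                          capture_output=True, text=True, check=True).stdout
    key = subprocess.run(["ceph", "auth", "get", "client.bootstrap-osd"],
                         capture_output=True, text=True, check=True).stdout
    print(conf)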
Feb 25 07:45:55 np0005629333 nova_compute[244014]: 2026-02-25 12:45:55.207 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:55 np0005629333 podman[347107]: 2026-02-25 12:45:55.297541176 +0000 UTC m=+0.929376463 container remove f6917395b968e4f6241e2780401ad4854e911d6bc3c72f025de08a94f631bb80 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.303 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[869ed914-0540-4870-9a98-9a6dae188be7]: (4, ('Wed Feb 25 12:45:52 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3 (f6917395b968e4f6241e2780401ad4854e911d6bc3c72f025de08a94f631bb80)\nf6917395b968e4f6241e2780401ad4854e911d6bc3c72f025de08a94f631bb80\nWed Feb 25 12:45:54 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3 (f6917395b968e4f6241e2780401ad4854e911d6bc3c72f025de08a94f631bb80)\nf6917395b968e4f6241e2780401ad4854e911d6bc3c72f025de08a94f631bb80\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.306 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4d7793f8-6942-4d5d-8023-0b2892baab76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.307 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4d059ba-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:55 np0005629333 nova_compute[244014]: 2026-02-25 12:45:55.310 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:55 np0005629333 kernel: tape4d059ba-e0: left promiscuous mode
Feb 25 07:45:55 np0005629333 nova_compute[244014]: 2026-02-25 12:45:55.313 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.316 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[95402bf0-b575-4629-bb8a-6e73607623cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:55 np0005629333 nova_compute[244014]: 2026-02-25 12:45:55.322 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.329 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d74ae32c-41a5-48cb-afc1-77a9c57d2f99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.330 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[28a6bbe8-6baa-4a0d-b89b-8bce6d966b10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.344 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cd74df3e-926a-49bf-97d6-81b16a7226b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548504, 'reachable_time': 28536, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347171, 'error': None, 'target': 'ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:55 np0005629333 systemd[1]: run-netns-ovnmeta\x2de4d059ba\x2deacc\x2d463b\x2da8a4\x2d393a5a36dba3.mount: Deactivated successfully.
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.348 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e4d059ba-eacc-463b-a8a4-393a5a36dba3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.349 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[35db6755-9762-4eb9-bdb2-916067a7ea54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.350 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 443e71ca-9c38-40e8-b799-1026ef47c35d in datapath bb79e0fd-2a4d-4a70-9c80-4853297401ff unbound from our chassis#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.356 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bb79e0fd-2a4d-4a70-9c80-4853297401ff#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.416 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0ff08fec-b4ae-41a7-a96a-93843d6da97d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.455 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b9b32a5e-b771-4c90-a561-e16db3753238]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.460 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b659e4dd-5c05-4830-8fe6-6f1053387fa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.495 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0daddea2-1c4f-4e3f-9bfb-9eb72155961f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.519 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fb794ab5-8361-4ca0-bb64-b7c9ecf72a2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbb79e0fd-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:0b:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547905, 'reachable_time': 39147, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347184, 'error': None, 'target': 'ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:55 np0005629333 nova_compute[244014]: 2026-02-25 12:45:55.533 244018 DEBUG nova.compute.manager [req-68dc920e-00f3-437b-8d05-16f6b2c17a84 req-2befc313-cb54-4ed1-8f8f-cec140f6c33e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-vif-unplugged-97fb9f99-cb59-4581-8866-375ea3e167d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:45:55 np0005629333 nova_compute[244014]: 2026-02-25 12:45:55.534 244018 DEBUG oslo_concurrency.lockutils [req-68dc920e-00f3-437b-8d05-16f6b2c17a84 req-2befc313-cb54-4ed1-8f8f-cec140f6c33e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:45:55 np0005629333 nova_compute[244014]: 2026-02-25 12:45:55.535 244018 DEBUG oslo_concurrency.lockutils [req-68dc920e-00f3-437b-8d05-16f6b2c17a84 req-2befc313-cb54-4ed1-8f8f-cec140f6c33e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:45:55 np0005629333 nova_compute[244014]: 2026-02-25 12:45:55.535 244018 DEBUG oslo_concurrency.lockutils [req-68dc920e-00f3-437b-8d05-16f6b2c17a84 req-2befc313-cb54-4ed1-8f8f-cec140f6c33e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:45:55 np0005629333 nova_compute[244014]: 2026-02-25 12:45:55.535 244018 DEBUG nova.compute.manager [req-68dc920e-00f3-437b-8d05-16f6b2c17a84 req-2befc313-cb54-4ed1-8f8f-cec140f6c33e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] No waiting events found dispatching network-vif-unplugged-97fb9f99-cb59-4581-8866-375ea3e167d7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:45:55 np0005629333 nova_compute[244014]: 2026-02-25 12:45:55.536 244018 DEBUG nova.compute.manager [req-68dc920e-00f3-437b-8d05-16f6b2c17a84 req-2befc313-cb54-4ed1-8f8f-cec140f6c33e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-vif-unplugged-97fb9f99-cb59-4581-8866-375ea3e167d7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.542 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1adea4b3-d07d-419f-94de-d7e9e9f2a1d8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbb79e0fd-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547920, 'tstamp': 547920}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347186, 'error': None, 'target': 'ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbb79e0fd-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547923, 'tstamp': 547923}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347186, 'error': None, 'target': 'ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
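The privsep reply above shows the metadata tap (tapbb79e0fd-21) inside the ovnmeta-bb79e0fd-... namespace carrying the subnet address 10.100.0.2/28 plus the well-known metadata address 169.254.169.254/32. A minimal sketch of the same lookup with pyroute2 (the library the agent drives through privsep), assuming root on this host:

    # Sketch only: list IPv4 addresses on the metadata tap in its namespace.
    from pyroute2 import NetNS

    with NetNS("ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff") as ns:
        for addr in ns.get_addr(label="tapbb79e0fd-21"):
            print(addr.get_attr("IFA_ADDRESS"), addr["prefixlen"])
    # Per the reply above: 10.100.0.2 28 and 169.254.169.254 32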
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.543 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb79e0fd-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:55 np0005629333 nova_compute[244014]: 2026-02-25 12:45:55.545 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.554 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbb79e0fd-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.554 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.555 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbb79e0fd-20, col_values=(('external_ids', {'iface-id': '091afd0d-cbaa-4d88-8f55-1965b8ffcb56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:55 np0005629333 nova_compute[244014]: 2026-02-25 12:45:55.553 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.555 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.557 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 39f87175-7c4f-4092-bce6-29b413c731cd in datapath 6df13266-bcfe-4a5b-94c4-81b5f08a6c21 unbound from our chassis#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.558 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6df13266-bcfe-4a5b-94c4-81b5f08a6c21#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.604 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a6f7796b-43e9-499c-89c4-8799c0716926]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.634 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cd2257ba-c324-454e-87d6-192096ceb53a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.637 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2424b682-c29b-4098-8741-a1b9a5679a82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.668 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[64f6acd9-5d41-470e-910b-7669ddc4e50c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.693 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[45df2b47-de97-4014-b90a-34c69c32f7e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6df13266-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:ad:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547995, 'reachable_time': 15036, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 22, 'inoctets': 1608, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 22, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1608, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 22, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347216, 'error': None, 'target': 'ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.710 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2dbc54ff-967c-4b94-becd-6ed4d6af1304]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6df13266-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548006, 'tstamp': 548006}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347217, 'error': None, 'target': 'ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.712 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6df13266-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:55 np0005629333 nova_compute[244014]: 2026-02-25 12:45:55.714 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.715 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6df13266-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.716 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.717 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6df13266-b0, col_values=(('external_ids', {'iface-id': '0b276f7a-90bb-4427-8f39-0e014732fd20'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:45:55.717 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
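The DelPortCommand/AddPortCommand/DbSetCommand sequence above is the agent (re)plumbing the metadata port: drop any stale copy from br-ex, ensure the port sits on br-int, and stamp the Interface with the iface-id that lets ovn-controller bind it. Both transactions report "caused no change" because the port was already in place. A minimal sketch of the same transaction through ovsdbapp's Open_vSwitch API, assuming the usual local OVSDB socket path:

    # Sketch only: the same three commands in one ovsdbapp transaction.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port("tap6df13266-b0", bridge="br-ex", if_exists=True))
        txn.add(api.add_port("br-int", "tap6df13266-b0", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap6df13266-b0",
            ("external_ids", {"iface-id": "0b276f7a-90bb-4427-8f39-0e014732fd20"})))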
Feb 25 07:45:55 np0005629333 podman[347198]: 2026-02-25 12:45:55.633427891 +0000 UTC m=+0.028170237 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:45:55 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:45:55 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:45:55 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:45:56 np0005629333 podman[347198]: 2026-02-25 12:45:56.158953329 +0000 UTC m=+0.553695655 container create 5561020dfde9f87e904105e6c7f331e4de4ca47c7fc08d81139d8df6b0f878ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_nightingale, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:45:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 360 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 243 KiB/s rd, 23 KiB/s wr, 50 op/s
Feb 25 07:45:56 np0005629333 systemd[1]: Started libpod-conmon-5561020dfde9f87e904105e6c7f331e4de4ca47c7fc08d81139d8df6b0f878ff.scope.
Feb 25 07:45:56 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:45:56 np0005629333 nova_compute[244014]: 2026-02-25 12:45:56.509 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:45:56 np0005629333 podman[347198]: 2026-02-25 12:45:56.616408895 +0000 UTC m=+1.011151251 container init 5561020dfde9f87e904105e6c7f331e4de4ca47c7fc08d81139d8df6b0f878ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 07:45:56 np0005629333 podman[347218]: 2026-02-25 12:45:56.625886853 +0000 UTC m=+0.426579096 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Feb 25 07:45:56 np0005629333 podman[347198]: 2026-02-25 12:45:56.626996714 +0000 UTC m=+1.021739050 container start 5561020dfde9f87e904105e6c7f331e4de4ca47c7fc08d81139d8df6b0f878ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 07:45:56 np0005629333 epic_nightingale[347229]: 167 167
Feb 25 07:45:56 np0005629333 systemd[1]: libpod-5561020dfde9f87e904105e6c7f331e4de4ca47c7fc08d81139d8df6b0f878ff.scope: Deactivated successfully.
Feb 25 07:45:56 np0005629333 podman[347198]: 2026-02-25 12:45:56.692736141 +0000 UTC m=+1.087478527 container attach 5561020dfde9f87e904105e6c7f331e4de4ca47c7fc08d81139d8df6b0f878ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_nightingale, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 07:45:56 np0005629333 podman[347198]: 2026-02-25 12:45:56.693799051 +0000 UTC m=+1.088541407 container died 5561020dfde9f87e904105e6c7f331e4de4ca47c7fc08d81139d8df6b0f878ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_nightingale, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:45:56 np0005629333 nova_compute[244014]: 2026-02-25 12:45:56.950 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023541.948884, aee87402-4b34-4083-888b-bb653e2beaa9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:45:56 np0005629333 nova_compute[244014]: 2026-02-25 12:45:56.951 244018 INFO nova.compute.manager [-] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] VM Stopped (Lifecycle Event)
Feb 25 07:45:56 np0005629333 nova_compute[244014]: 2026-02-25 12:45:56.972 244018 DEBUG nova.compute.manager [None req-fd431afe-d29e-4dd1-af4d-9057cc23ea04 - - - - - -] [instance: aee87402-4b34-4083-888b-bb653e2beaa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:45:57 np0005629333 systemd[1]: var-lib-containers-storage-overlay-02d942c076a211e304ec5498370777b5f180cbaf75ee7aaca5010f648ee60007-merged.mount: Deactivated successfully.
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.377 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.507 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.613 244018 DEBUG nova.compute.manager [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.614 244018 DEBUG oslo_concurrency.lockutils [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "874359d8-3251-4416-82dc-f6776853e384-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.614 244018 DEBUG oslo_concurrency.lockutils [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.614 244018 DEBUG oslo_concurrency.lockutils [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.615 244018 DEBUG nova.compute.manager [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] No waiting events found dispatching network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.615 244018 WARNING nova.compute.manager [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received unexpected event network-vif-plugged-97fb9f99-cb59-4581-8866-375ea3e167d7 for instance with vm_state active and task_state deleting.
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.615 244018 DEBUG nova.compute.manager [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received event network-vif-plugged-443e71ca-9c38-40e8-b799-1026ef47c35d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.615 244018 DEBUG oslo_concurrency.lockutils [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.616 244018 DEBUG oslo_concurrency.lockutils [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.616 244018 DEBUG oslo_concurrency.lockutils [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.616 244018 DEBUG nova.compute.manager [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Processing event network-vif-plugged-443e71ca-9c38-40e8-b799-1026ef47c35d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.616 244018 DEBUG nova.compute.manager [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received event network-vif-plugged-443e71ca-9c38-40e8-b799-1026ef47c35d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.617 244018 DEBUG oslo_concurrency.lockutils [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.617 244018 DEBUG oslo_concurrency.lockutils [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.617 244018 DEBUG oslo_concurrency.lockutils [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.617 244018 DEBUG nova.compute.manager [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] No waiting events found dispatching network-vif-plugged-443e71ca-9c38-40e8-b799-1026ef47c35d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.618 244018 WARNING nova.compute.manager [req-01afd029-b7a3-41c8-811e-fc6c85138a19 req-daec2a7d-5243-47d4-aae8-da776b4d0e9c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received unexpected event network-vif-plugged-443e71ca-9c38-40e8-b799-1026ef47c35d for instance with vm_state building and task_state spawning.
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.618 244018 DEBUG nova.compute.manager [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Instance event wait completed in 4 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.623 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023557.623453, 2b9ea98f-9b1a-495b-8669-e79da967b0ab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.624 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] VM Resumed (Lifecycle Event)
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.627 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.632 244018 INFO nova.virt.libvirt.driver [-] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Instance spawned successfully.
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.633 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.648 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.655 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.659 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.659 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.660 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.660 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.661 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.661 244018 DEBUG nova.virt.libvirt.driver [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:45:57 np0005629333 podman[347198]: 2026-02-25 12:45:57.685500643 +0000 UTC m=+2.080242939 container remove 5561020dfde9f87e904105e6c7f331e4de4ca47c7fc08d81139d8df6b0f878ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.693 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:45:57 np0005629333 systemd[1]: libpod-conmon-5561020dfde9f87e904105e6c7f331e4de4ca47c7fc08d81139d8df6b0f878ff.scope: Deactivated successfully.
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.742 244018 INFO nova.compute.manager [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Took 19.88 seconds to spawn the instance on the hypervisor.
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.742 244018 DEBUG nova.compute.manager [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:45:57 np0005629333 podman[347265]: 2026-02-25 12:45:57.850249685 +0000 UTC m=+0.033204968 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.968 244018 INFO nova.compute.manager [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Took 21.07 seconds to build instance.
Feb 25 07:45:57 np0005629333 nova_compute[244014]: 2026-02-25 12:45:57.983 244018 DEBUG oslo_concurrency.lockutils [None req-f53437e8-0b41-44f6-b6fd-60c851905cea f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:45:58 np0005629333 podman[347265]: 2026-02-25 12:45:58.000884479 +0000 UTC m=+0.183839752 container create 6213ef0680632629d413e8d93bc7773987d74d6fcf433ccd7de8492c5ea1d969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shannon, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:45:58 np0005629333 systemd[1]: Started libpod-conmon-6213ef0680632629d413e8d93bc7773987d74d6fcf433ccd7de8492c5ea1d969.scope.
Feb 25 07:45:58 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:45:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc3f2e04d38a36ee0364ad51550433f967a4df7e138417b87f819acccfe93fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:45:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc3f2e04d38a36ee0364ad51550433f967a4df7e138417b87f819acccfe93fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:45:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc3f2e04d38a36ee0364ad51550433f967a4df7e138417b87f819acccfe93fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:45:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc3f2e04d38a36ee0364ad51550433f967a4df7e138417b87f819acccfe93fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:45:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc3f2e04d38a36ee0364ad51550433f967a4df7e138417b87f819acccfe93fb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:45:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1962: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 260 KiB/s rd, 26 KiB/s wr, 73 op/s
Feb 25 07:45:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:45:58 np0005629333 podman[347265]: 2026-02-25 12:45:58.557962689 +0000 UTC m=+0.740918032 container init 6213ef0680632629d413e8d93bc7773987d74d6fcf433ccd7de8492c5ea1d969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shannon, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:45:58 np0005629333 podman[347279]: 2026-02-25 12:45:58.563916337 +0000 UTC m=+0.523949056 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 07:45:58 np0005629333 podman[347265]: 2026-02-25 12:45:58.568686242 +0000 UTC m=+0.751641515 container start 6213ef0680632629d413e8d93bc7773987d74d6fcf433ccd7de8492c5ea1d969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shannon, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 07:45:58 np0005629333 podman[347265]: 2026-02-25 12:45:58.776049287 +0000 UTC m=+0.959004600 container attach 6213ef0680632629d413e8d93bc7773987d74d6fcf433ccd7de8492c5ea1d969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shannon, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 07:45:59 np0005629333 trusting_shannon[347292]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:45:59 np0005629333 trusting_shannon[347292]: --> All data devices are unavailable
Feb 25 07:45:59 np0005629333 systemd[1]: libpod-6213ef0680632629d413e8d93bc7773987d74d6fcf433ccd7de8492c5ea1d969.scope: Deactivated successfully.
Feb 25 07:45:59 np0005629333 podman[347265]: 2026-02-25 12:45:59.062234728 +0000 UTC m=+1.245189991 container died 6213ef0680632629d413e8d93bc7773987d74d6fcf433ccd7de8492c5ea1d969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:45:59 np0005629333 nova_compute[244014]: 2026-02-25 12:45:59.381 244018 INFO nova.virt.libvirt.driver [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Deleting instance files /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384_del
Feb 25 07:45:59 np0005629333 nova_compute[244014]: 2026-02-25 12:45:59.384 244018 INFO nova.virt.libvirt.driver [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Deletion of /var/lib/nova/instances/874359d8-3251-4416-82dc-f6776853e384_del complete
Feb 25 07:45:59 np0005629333 nova_compute[244014]: 2026-02-25 12:45:59.463 244018 INFO nova.compute.manager [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Took 7.96 seconds to destroy the instance on the hypervisor.
Feb 25 07:45:59 np0005629333 nova_compute[244014]: 2026-02-25 12:45:59.464 244018 DEBUG oslo.service.loopingcall [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 07:45:59 np0005629333 nova_compute[244014]: 2026-02-25 12:45:59.464 244018 DEBUG nova.compute.manager [-] [instance: 874359d8-3251-4416-82dc-f6776853e384] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 07:45:59 np0005629333 nova_compute[244014]: 2026-02-25 12:45:59.464 244018 DEBUG nova.network.neutron [-] [instance: 874359d8-3251-4416-82dc-f6776853e384] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 07:45:59 np0005629333 nova_compute[244014]: 2026-02-25 12:45:59.532 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:45:59 np0005629333 systemd[1]: var-lib-containers-storage-overlay-4cc3f2e04d38a36ee0364ad51550433f967a4df7e138417b87f819acccfe93fb-merged.mount: Deactivated successfully.
Feb 25 07:45:59 np0005629333 podman[347265]: 2026-02-25 12:45:59.782070943 +0000 UTC m=+1.965026166 container remove 6213ef0680632629d413e8d93bc7773987d74d6fcf433ccd7de8492c5ea1d969 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shannon, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Feb 25 07:45:59 np0005629333 systemd[1]: libpod-conmon-6213ef0680632629d413e8d93bc7773987d74d6fcf433ccd7de8492c5ea1d969.scope: Deactivated successfully.
Feb 25 07:46:00 np0005629333 nova_compute[244014]: 2026-02-25 12:46:00.209 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:46:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 233 KiB/s rd, 21 KiB/s wr, 37 op/s
Feb 25 07:46:00 np0005629333 podman[347402]: 2026-02-25 12:46:00.319405396 +0000 UTC m=+0.034051593 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:46:00 np0005629333 podman[347402]: 2026-02-25 12:46:00.454340286 +0000 UTC m=+0.168986463 container create b7490ba93a48be0accd25411babb69ba40422e3228af7db5ff8562df4ed60b93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:46:00 np0005629333 systemd[1]: Started libpod-conmon-b7490ba93a48be0accd25411babb69ba40422e3228af7db5ff8562df4ed60b93.scope.
Feb 25 07:46:00 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:46:00 np0005629333 podman[347402]: 2026-02-25 12:46:00.78668541 +0000 UTC m=+0.501331607 container init b7490ba93a48be0accd25411babb69ba40422e3228af7db5ff8562df4ed60b93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:46:00 np0005629333 podman[347402]: 2026-02-25 12:46:00.799339827 +0000 UTC m=+0.513986014 container start b7490ba93a48be0accd25411babb69ba40422e3228af7db5ff8562df4ed60b93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:46:00 np0005629333 thirsty_zhukovsky[347418]: 167 167
Feb 25 07:46:00 np0005629333 systemd[1]: libpod-b7490ba93a48be0accd25411babb69ba40422e3228af7db5ff8562df4ed60b93.scope: Deactivated successfully.
Feb 25 07:46:00 np0005629333 conmon[347418]: conmon b7490ba93a48be0accd2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b7490ba93a48be0accd25411babb69ba40422e3228af7db5ff8562df4ed60b93.scope/container/memory.events
Feb 25 07:46:00 np0005629333 podman[347402]: 2026-02-25 12:46:00.939923567 +0000 UTC m=+0.654569764 container attach b7490ba93a48be0accd25411babb69ba40422e3228af7db5ff8562df4ed60b93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_zhukovsky, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 07:46:00 np0005629333 podman[347402]: 2026-02-25 12:46:00.941009128 +0000 UTC m=+0.655655315 container died b7490ba93a48be0accd25411babb69ba40422e3228af7db5ff8562df4ed60b93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_zhukovsky, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:46:01 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d66ae0feb6f75bf57bede6efe89060e54a3567e7dd7bbbccb0edf126cbcf5d3e-merged.mount: Deactivated successfully.
Feb 25 07:46:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:46:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:46:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:46:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:46:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:46:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:46:01 np0005629333 nova_compute[244014]: 2026-02-25 12:46:01.658 244018 DEBUG nova.network.neutron [-] [instance: 874359d8-3251-4416-82dc-f6776853e384] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:46:01 np0005629333 nova_compute[244014]: 2026-02-25 12:46:01.699 244018 INFO nova.compute.manager [-] [instance: 874359d8-3251-4416-82dc-f6776853e384] Took 2.23 seconds to deallocate network for instance.
Feb 25 07:46:01 np0005629333 nova_compute[244014]: 2026-02-25 12:46:01.733 244018 DEBUG nova.compute.manager [req-4e9e6215-d79f-475c-9285-6006eaf1b4be req-9a827067-ddd2-4402-8994-6dd4ac676672 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 874359d8-3251-4416-82dc-f6776853e384] Received event network-vif-deleted-97fb9f99-cb59-4581-8866-375ea3e167d7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:46:01 np0005629333 nova_compute[244014]: 2026-02-25 12:46:01.746 244018 DEBUG oslo_concurrency.lockutils [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:46:01 np0005629333 nova_compute[244014]: 2026-02-25 12:46:01.747 244018 DEBUG oslo_concurrency.lockutils [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:46:01 np0005629333 podman[347402]: 2026-02-25 12:46:01.825372369 +0000 UTC m=+1.540018556 container remove b7490ba93a48be0accd25411babb69ba40422e3228af7db5ff8562df4ed60b93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 07:46:01 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:01Z|01164|binding|INFO|Releasing lport 0b276f7a-90bb-4427-8f39-0e014732fd20 from this chassis (sb_readonly=0)
Feb 25 07:46:01 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:01Z|01165|binding|INFO|Releasing lport 091afd0d-cbaa-4d88-8f55-1965b8ffcb56 from this chassis (sb_readonly=0)
Feb 25 07:46:01 np0005629333 nova_compute[244014]: 2026-02-25 12:46:01.862 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:46:01 np0005629333 systemd[1]: libpod-conmon-b7490ba93a48be0accd25411babb69ba40422e3228af7db5ff8562df4ed60b93.scope: Deactivated successfully.
Feb 25 07:46:01 np0005629333 nova_compute[244014]: 2026-02-25 12:46:01.901 244018 DEBUG oslo_concurrency.processutils [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:46:02 np0005629333 podman[347444]: 2026-02-25 12:46:01.979661926 +0000 UTC m=+0.025379998 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:46:02 np0005629333 podman[347444]: 2026-02-25 12:46:02.053547252 +0000 UTC m=+0.099265274 container create 763d926dda808ec4ce5669015491c981192ae3ff9c2cdd44bd2837c6d7e8ae7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 07:46:02 np0005629333 systemd[1]: Started libpod-conmon-763d926dda808ec4ce5669015491c981192ae3ff9c2cdd44bd2837c6d7e8ae7f.scope.
Feb 25 07:46:02 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:46:02 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbcfeb3a823762698e4221689bb5cc88a62a2d868dd00ffa3a0bb7cb90fdd4b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:46:02 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbcfeb3a823762698e4221689bb5cc88a62a2d868dd00ffa3a0bb7cb90fdd4b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:46:02 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbcfeb3a823762698e4221689bb5cc88a62a2d868dd00ffa3a0bb7cb90fdd4b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:46:02 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbcfeb3a823762698e4221689bb5cc88a62a2d868dd00ffa3a0bb7cb90fdd4b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:46:02 np0005629333 podman[347444]: 2026-02-25 12:46:02.379372903 +0000 UTC m=+0.425090925 container init 763d926dda808ec4ce5669015491c981192ae3ff9c2cdd44bd2837c6d7e8ae7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:46:02 np0005629333 nova_compute[244014]: 2026-02-25 12:46:02.380 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:46:02 np0005629333 podman[347444]: 2026-02-25 12:46:02.38673383 +0000 UTC m=+0.432451822 container start 763d926dda808ec4ce5669015491c981192ae3ff9c2cdd44bd2837c6d7e8ae7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_kilby, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 07:46:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 22 KiB/s wr, 78 op/s
Feb 25 07:46:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:46:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/412297614' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:46:02 np0005629333 nova_compute[244014]: 2026-02-25 12:46:02.434 244018 DEBUG oslo_concurrency.processutils [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:46:02 np0005629333 nova_compute[244014]: 2026-02-25 12:46:02.440 244018 DEBUG nova.compute.provider_tree [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:46:02 np0005629333 nova_compute[244014]: 2026-02-25 12:46:02.455 244018 DEBUG nova.scheduler.client.report [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:46:02 np0005629333 nova_compute[244014]: 2026-02-25 12:46:02.474 244018 DEBUG oslo_concurrency.lockutils [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:46:02 np0005629333 podman[347444]: 2026-02-25 12:46:02.484343917 +0000 UTC m=+0.530061899 container attach 763d926dda808ec4ce5669015491c981192ae3ff9c2cdd44bd2837c6d7e8ae7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_kilby, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 07:46:02 np0005629333 nova_compute[244014]: 2026-02-25 12:46:02.496 244018 INFO nova.scheduler.client.report [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Deleted allocations for instance 874359d8-3251-4416-82dc-f6776853e384
Feb 25 07:46:02 np0005629333 nova_compute[244014]: 2026-02-25 12:46:02.577 244018 DEBUG oslo_concurrency.lockutils [None req-1b694d67-cfaf-4ddf-82e0-754ce92dec85 44c0b78107ea4f7381e82a02c5954e7c 6c0adb05683141e7a0b866f450e410e0 - - default default] Lock "874359d8-3251-4416-82dc-f6776853e384" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]: {
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:    "0": [
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:        {
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "devices": [
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "/dev/loop3"
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            ],
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "lv_name": "ceph_lv0",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "lv_size": "21470642176",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "name": "ceph_lv0",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "tags": {
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.cluster_name": "ceph",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.crush_device_class": "",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.encrypted": "0",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.objectstore": "bluestore",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.osd_id": "0",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.type": "block",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.vdo": "0",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.with_tpm": "0"
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            },
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "type": "block",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "vg_name": "ceph_vg0"
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:        }
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:    ],
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:    "1": [
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:        {
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "devices": [
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "/dev/loop4"
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            ],
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "lv_name": "ceph_lv1",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "lv_size": "21470642176",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "name": "ceph_lv1",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "tags": {
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.cluster_name": "ceph",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.crush_device_class": "",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.encrypted": "0",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.objectstore": "bluestore",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.osd_id": "1",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.type": "block",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.vdo": "0",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.with_tpm": "0"
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            },
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "type": "block",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "vg_name": "ceph_vg1"
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:        }
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:    ],
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:    "2": [
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:        {
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "devices": [
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "/dev/loop5"
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            ],
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "lv_name": "ceph_lv2",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "lv_size": "21470642176",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "name": "ceph_lv2",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "tags": {
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.cluster_name": "ceph",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.crush_device_class": "",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.encrypted": "0",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.objectstore": "bluestore",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.osd_id": "2",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.type": "block",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.vdo": "0",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:                "ceph.with_tpm": "0"
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            },
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "type": "block",
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:            "vg_name": "ceph_vg2"
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:        }
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]:    ]
Feb 25 07:46:02 np0005629333 stupefied_kilby[347477]: }
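[Note: the JSON printed by the stupefied_kilby container has the shape of ceph-volume lvm list --format json output (likely what cephadm is gathering here): a map from OSD id to the logical volumes backing it, with the ceph.* metadata both flattened into lv_tags and parsed under tags. A minimal sketch, with an illustrative helper name, for extracting each OSD's backing device:]

import json

def osd_block_devices(lvm_list_output: str) -> dict:
    """Map OSD id -> backing devices, LV path and osd_fsid from JSON like the above."""
    result = {}
    for osd_id, lvs in json.loads(lvm_list_output).items():
        for lv in lvs:
            if lv.get("type") == "block":  # bluestore block LV, as in this log
                result[osd_id] = {
                    "devices": lv["devices"],            # e.g. ["/dev/loop3"]
                    "lv_path": lv["lv_path"],            # e.g. "/dev/ceph_vg0/ceph_lv0"
                    "osd_fsid": lv["tags"]["ceph.osd_fsid"],
                }
    return result

[For OSD "0" above this yields /dev/loop3 and osd_fsid d19afe3c-7923-4776-bcc2-88886150b441.]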
Feb 25 07:46:02 np0005629333 systemd[1]: libpod-763d926dda808ec4ce5669015491c981192ae3ff9c2cdd44bd2837c6d7e8ae7f.scope: Deactivated successfully.
Feb 25 07:46:02 np0005629333 conmon[347477]: conmon 763d926dda808ec4ce56 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-763d926dda808ec4ce5669015491c981192ae3ff9c2cdd44bd2837c6d7e8ae7f.scope/container/memory.events
Feb 25 07:46:02 np0005629333 podman[347444]: 2026-02-25 12:46:02.748181376 +0000 UTC m=+0.793899398 container died 763d926dda808ec4ce5669015491c981192ae3ff9c2cdd44bd2837c6d7e8ae7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_kilby, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:46:03 np0005629333 systemd[1]: var-lib-containers-storage-overlay-bbcfeb3a823762698e4221689bb5cc88a62a2d868dd00ffa3a0bb7cb90fdd4b1-merged.mount: Deactivated successfully.
Feb 25 07:46:03 np0005629333 podman[347444]: 2026-02-25 12:46:03.426782407 +0000 UTC m=+1.472500409 container remove 763d926dda808ec4ce5669015491c981192ae3ff9c2cdd44bd2837c6d7e8ae7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_kilby, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:46:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:46:03 np0005629333 systemd[1]: libpod-conmon-763d926dda808ec4ce5669015491c981192ae3ff9c2cdd44bd2837c6d7e8ae7f.scope: Deactivated successfully.
Feb 25 07:46:03 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:03Z|01166|binding|INFO|Releasing lport 0b276f7a-90bb-4427-8f39-0e014732fd20 from this chassis (sb_readonly=0)
Feb 25 07:46:03 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:03Z|01167|binding|INFO|Releasing lport 091afd0d-cbaa-4d88-8f55-1965b8ffcb56 from this chassis (sb_readonly=0)
Feb 25 07:46:03 np0005629333 nova_compute[244014]: 2026-02-25 12:46:03.890 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:46:04 np0005629333 podman[347558]: 2026-02-25 12:46:04.01690851 +0000 UTC m=+0.113564727 container create 47a4f4e1478a85bf794f9d65a04f379fb12bcb2b7cdd7db485f26336b8b6ea20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 07:46:04 np0005629333 podman[347558]: 2026-02-25 12:46:03.926807006 +0000 UTC m=+0.023463233 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:46:04 np0005629333 nova_compute[244014]: 2026-02-25 12:46:04.247 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:46:04 np0005629333 systemd[1]: Started libpod-conmon-47a4f4e1478a85bf794f9d65a04f379fb12bcb2b7cdd7db485f26336b8b6ea20.scope.
Feb 25 07:46:04 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:46:04 np0005629333 podman[347558]: 2026-02-25 12:46:04.389214923 +0000 UTC m=+0.485871150 container init 47a4f4e1478a85bf794f9d65a04f379fb12bcb2b7cdd7db485f26336b8b6ea20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_booth, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 07:46:04 np0005629333 podman[347558]: 2026-02-25 12:46:04.398945858 +0000 UTC m=+0.495602025 container start 47a4f4e1478a85bf794f9d65a04f379fb12bcb2b7cdd7db485f26336b8b6ea20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:46:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1965: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.7 KiB/s wr, 99 op/s
Feb 25 07:46:04 np0005629333 boring_booth[347574]: 167 167
Feb 25 07:46:04 np0005629333 systemd[1]: libpod-47a4f4e1478a85bf794f9d65a04f379fb12bcb2b7cdd7db485f26336b8b6ea20.scope: Deactivated successfully.
Feb 25 07:46:04 np0005629333 conmon[347574]: conmon 47a4f4e1478a85bf794f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-47a4f4e1478a85bf794f9d65a04f379fb12bcb2b7cdd7db485f26336b8b6ea20.scope/container/memory.events
Feb 25 07:46:04 np0005629333 nova_compute[244014]: 2026-02-25 12:46:04.511 244018 DEBUG nova.compute.manager [req-2c67b827-d4da-4ce2-8eaf-375876e742e4 req-0d638cb2-18cb-4858-8fba-6c3467a98f4f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received event network-changed-443e71ca-9c38-40e8-b799-1026ef47c35d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:46:04 np0005629333 nova_compute[244014]: 2026-02-25 12:46:04.513 244018 DEBUG nova.compute.manager [req-2c67b827-d4da-4ce2-8eaf-375876e742e4 req-0d638cb2-18cb-4858-8fba-6c3467a98f4f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Refreshing instance network info cache due to event network-changed-443e71ca-9c38-40e8-b799-1026ef47c35d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:46:04 np0005629333 nova_compute[244014]: 2026-02-25 12:46:04.514 244018 DEBUG oslo_concurrency.lockutils [req-2c67b827-d4da-4ce2-8eaf-375876e742e4 req-0d638cb2-18cb-4858-8fba-6c3467a98f4f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:46:04 np0005629333 nova_compute[244014]: 2026-02-25 12:46:04.515 244018 DEBUG oslo_concurrency.lockutils [req-2c67b827-d4da-4ce2-8eaf-375876e742e4 req-0d638cb2-18cb-4858-8fba-6c3467a98f4f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:46:04 np0005629333 nova_compute[244014]: 2026-02-25 12:46:04.516 244018 DEBUG nova.network.neutron [req-2c67b827-d4da-4ce2-8eaf-375876e742e4 req-0d638cb2-18cb-4858-8fba-6c3467a98f4f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Refreshing network info cache for port 443e71ca-9c38-40e8-b799-1026ef47c35d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:46:04 np0005629333 podman[347558]: 2026-02-25 12:46:04.611562332 +0000 UTC m=+0.708218609 container attach 47a4f4e1478a85bf794f9d65a04f379fb12bcb2b7cdd7db485f26336b8b6ea20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_booth, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:46:04 np0005629333 podman[347558]: 2026-02-25 12:46:04.612169889 +0000 UTC m=+0.708826096 container died 47a4f4e1478a85bf794f9d65a04f379fb12bcb2b7cdd7db485f26336b8b6ea20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_booth, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 07:46:04 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5288a3c901499fd791c740e79b0028beb422068ca0568a9c3eb32b96a4f96f71-merged.mount: Deactivated successfully.
Feb 25 07:46:05 np0005629333 podman[347558]: 2026-02-25 12:46:05.104609984 +0000 UTC m=+1.201266191 container remove 47a4f4e1478a85bf794f9d65a04f379fb12bcb2b7cdd7db485f26336b8b6ea20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_booth, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 07:46:05 np0005629333 systemd[1]: libpod-conmon-47a4f4e1478a85bf794f9d65a04f379fb12bcb2b7cdd7db485f26336b8b6ea20.scope: Deactivated successfully.
Feb 25 07:46:05 np0005629333 nova_compute[244014]: 2026-02-25 12:46:05.210 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:46:05 np0005629333 podman[347599]: 2026-02-25 12:46:05.299501217 +0000 UTC m=+0.024098631 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:46:05 np0005629333 podman[347599]: 2026-02-25 12:46:05.44160803 +0000 UTC m=+0.166205484 container create c1b6d2fb609f201e0b3991c0ee2092f9d50647c0b3ebc3e9cd5ade9f6ef93852 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_noether, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:46:05 np0005629333 systemd[1]: Started libpod-conmon-c1b6d2fb609f201e0b3991c0ee2092f9d50647c0b3ebc3e9cd5ade9f6ef93852.scope.
Feb 25 07:46:05 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:46:05 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2216db5e4a45918da92d2a8d89ed409556aff20b748b72e19efbb53152967980/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:46:05 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2216db5e4a45918da92d2a8d89ed409556aff20b748b72e19efbb53152967980/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:46:05 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2216db5e4a45918da92d2a8d89ed409556aff20b748b72e19efbb53152967980/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:46:05 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2216db5e4a45918da92d2a8d89ed409556aff20b748b72e19efbb53152967980/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:46:05 np0005629333 podman[347599]: 2026-02-25 12:46:05.871425226 +0000 UTC m=+0.596022710 container init c1b6d2fb609f201e0b3991c0ee2092f9d50647c0b3ebc3e9cd5ade9f6ef93852 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_noether, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 07:46:05 np0005629333 podman[347599]: 2026-02-25 12:46:05.879663459 +0000 UTC m=+0.604260903 container start c1b6d2fb609f201e0b3991c0ee2092f9d50647c0b3ebc3e9cd5ade9f6ef93852 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_noether, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 07:46:06 np0005629333 podman[347599]: 2026-02-25 12:46:06.046286174 +0000 UTC m=+0.770883608 container attach c1b6d2fb609f201e0b3991c0ee2092f9d50647c0b3ebc3e9cd5ade9f6ef93852 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_noether, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 07:46:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.5 KiB/s wr, 94 op/s
Feb 25 07:46:06 np0005629333 lvm[347694]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:46:06 np0005629333 lvm[347694]: VG ceph_vg1 finished
Feb 25 07:46:06 np0005629333 lvm[347693]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:46:06 np0005629333 lvm[347693]: VG ceph_vg0 finished
Feb 25 07:46:06 np0005629333 lvm[347696]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:46:06 np0005629333 lvm[347696]: VG ceph_vg2 finished
Feb 25 07:46:06 np0005629333 charming_noether[347615]: {}
Feb 25 07:46:06 np0005629333 systemd[1]: libpod-c1b6d2fb609f201e0b3991c0ee2092f9d50647c0b3ebc3e9cd5ade9f6ef93852.scope: Deactivated successfully.
Feb 25 07:46:06 np0005629333 systemd[1]: libpod-c1b6d2fb609f201e0b3991c0ee2092f9d50647c0b3ebc3e9cd5ade9f6ef93852.scope: Consumed 1.030s CPU time.
Feb 25 07:46:06 np0005629333 podman[347699]: 2026-02-25 12:46:06.738935341 +0000 UTC m=+0.048267104 container died c1b6d2fb609f201e0b3991c0ee2092f9d50647c0b3ebc3e9cd5ade9f6ef93852 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_noether, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 07:46:07 np0005629333 systemd[1]: var-lib-containers-storage-overlay-2216db5e4a45918da92d2a8d89ed409556aff20b748b72e19efbb53152967980-merged.mount: Deactivated successfully.
Feb 25 07:46:07 np0005629333 podman[347699]: 2026-02-25 12:46:07.289918879 +0000 UTC m=+0.599250662 container remove c1b6d2fb609f201e0b3991c0ee2092f9d50647c0b3ebc3e9cd5ade9f6ef93852 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:46:07 np0005629333 systemd[1]: libpod-conmon-c1b6d2fb609f201e0b3991c0ee2092f9d50647c0b3ebc3e9cd5ade9f6ef93852.scope: Deactivated successfully.
Feb 25 07:46:07 np0005629333 nova_compute[244014]: 2026-02-25 12:46:07.347 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023552.3462524, 874359d8-3251-4416-82dc-f6776853e384 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:46:07 np0005629333 nova_compute[244014]: 2026-02-25 12:46:07.351 244018 INFO nova.compute.manager [-] [instance: 874359d8-3251-4416-82dc-f6776853e384] VM Stopped (Lifecycle Event)
Feb 25 07:46:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:46:07 np0005629333 nova_compute[244014]: 2026-02-25 12:46:07.371 244018 DEBUG nova.compute.manager [None req-df8ebcd7-25b7-46d5-8ea5-80facc20641f - - - - - -] [instance: 874359d8-3251-4416-82dc-f6776853e384] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:46:07 np0005629333 nova_compute[244014]: 2026-02-25 12:46:07.384 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:46:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:46:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:46:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:46:07 np0005629333 nova_compute[244014]: 2026-02-25 12:46:07.855 244018 DEBUG nova.network.neutron [req-2c67b827-d4da-4ce2-8eaf-375876e742e4 req-0d638cb2-18cb-4858-8fba-6c3467a98f4f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Updated VIF entry in instance network info cache for port 443e71ca-9c38-40e8-b799-1026ef47c35d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 07:46:07 np0005629333 nova_compute[244014]: 2026-02-25 12:46:07.856 244018 DEBUG nova.network.neutron [req-2c67b827-d4da-4ce2-8eaf-375876e742e4 req-0d638cb2-18cb-4858-8fba-6c3467a98f4f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Updating instance_info_cache with network_info: [{"id": "443e71ca-9c38-40e8-b799-1026ef47c35d", "address": "fa:16:3e:98:e2:13", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap443e71ca-9c", "ovs_interfaceid": "443e71ca-9c38-40e8-b799-1026ef47c35d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "39f87175-7c4f-4092-bce6-29b413c731cd", "address": "fa:16:3e:b2:de:d3", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39f87175-7c", "ovs_interfaceid": "39f87175-7c4f-4092-bce6-29b413c731cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:46:07 np0005629333 nova_compute[244014]: 2026-02-25 12:46:07.903 244018 DEBUG oslo_concurrency.lockutils [req-2c67b827-d4da-4ce2-8eaf-375876e742e4 req-0d638cb2-18cb-4858-8fba-6c3467a98f4f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
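[Note: the cache entry just written is nova's full network_info model, a list of VIFs, each carrying nested subnets, fixed IPs and any attached floating IPs. A small sketch (helper name illustrative) that flattens it into per-port address tuples:]

def port_addresses(network_info):
    """Yield (port_id, fixed_ip, floating_ips) for every address in the cache."""
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                yield (vif["id"], ip["address"],
                       [f["address"] for f in ip.get("floating_ips", [])])

[Applied to the entry above, the first tuple is ('443e71ca-9c38-40e8-b799-1026ef47c35d', '10.100.0.4', ['192.168.122.246']); the IPv6 port 39f87175-7c4f-4092-bce6-29b413c731cd has no floating IPs.]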
Feb 25 07:46:07 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:46:07 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:46:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.5 KiB/s wr, 94 op/s
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:46:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:08Z|01168|binding|INFO|Releasing lport 0b276f7a-90bb-4427-8f39-0e014732fd20 from this chassis (sb_readonly=0)
Feb 25 07:46:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:08Z|01169|binding|INFO|Releasing lport 091afd0d-cbaa-4d88-8f55-1965b8ffcb56 from this chassis (sb_readonly=0)
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.526559) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023568526607, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 1867, "num_deletes": 254, "total_data_size": 2828935, "memory_usage": 2872464, "flush_reason": "Manual Compaction"}
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023568545393, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 2763803, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40072, "largest_seqno": 41938, "table_properties": {"data_size": 2755173, "index_size": 5316, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17013, "raw_average_key_size": 19, "raw_value_size": 2737875, "raw_average_value_size": 3093, "num_data_blocks": 234, "num_entries": 885, "num_filter_entries": 885, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772023407, "oldest_key_time": 1772023407, "file_creation_time": 1772023568, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 18896 microseconds, and 6704 cpu microseconds.
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.545452) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 2763803 bytes OK
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.545476) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.548501) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.548517) EVENT_LOG_v1 {"time_micros": 1772023568548512, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.548543) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 2820892, prev total WAL file size 2820892, number of live WAL files 2.
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.549220) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(2699KB)], [89(9056KB)]
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023568549305, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 12037579, "oldest_snapshot_seqno": -1}
Feb 25 07:46:08 np0005629333 nova_compute[244014]: 2026-02-25 12:46:08.553 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6829 keys, 11279146 bytes, temperature: kUnknown
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023568653432, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 11279146, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11230739, "index_size": 30247, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 173686, "raw_average_key_size": 25, "raw_value_size": 11105943, "raw_average_value_size": 1626, "num_data_blocks": 1206, "num_entries": 6829, "num_filter_entries": 6829, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772023568, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.654106) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 11279146 bytes
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.662070) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 115.1 rd, 107.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 8.8 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(8.4) write-amplify(4.1) OK, records in: 7352, records dropped: 523 output_compression: NoCompression
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.662125) EVENT_LOG_v1 {"time_micros": 1772023568662103, "job": 52, "event": "compaction_finished", "compaction_time_micros": 104578, "compaction_time_cpu_micros": 40004, "output_level": 6, "num_output_files": 1, "total_output_size": 11279146, "num_input_records": 7352, "num_output_records": 6829, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023568662861, "job": 52, "event": "table_file_deletion", "file_number": 91}
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023568664506, "job": 52, "event": "table_file_deletion", "file_number": 89}
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.549057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.664636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.664647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.664650) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.664654) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:46:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:08.664657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
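[Note: the amplification figures RocksDB printed for JOB 52 can be re-derived from the surrounding EVENT_LOG_v1 records: the compaction read one 2763803-byte L0 file (#91) plus one L6 file logged as 9056KB (#89) and wrote one 11279146-byte L6 file (#92). In Python:]

# Re-deriving JOB 52's amplification from the event records above.
l0_in = 2_763_803        # table #91, the L0 input
l6_in = 9056 * 1024      # table #89, logged as 9056KB (rounded)
out = 11_279_146         # table #92, the compaction output

print(round(out / l0_in, 1))                    # 4.1, the logged write-amplify
print(round((l0_in + l6_in + out) / l0_in, 1))  # 8.4, the logged read-write-amplify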
Feb 25 07:46:08 np0005629333 nova_compute[244014]: 2026-02-25 12:46:08.801 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "709a8b15-83eb-45f4-b681-c150ad270e01" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:46:08 np0005629333 nova_compute[244014]: 2026-02-25 12:46:08.805 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:46:08 np0005629333 nova_compute[244014]: 2026-02-25 12:46:08.826 244018 DEBUG nova.compute.manager [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:46:08 np0005629333 nova_compute[244014]: 2026-02-25 12:46:08.900 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:46:08 np0005629333 nova_compute[244014]: 2026-02-25 12:46:08.901 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:46:08 np0005629333 nova_compute[244014]: 2026-02-25 12:46:08.909 244018 DEBUG nova.virt.hardware [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:46:08 np0005629333 nova_compute[244014]: 2026-02-25 12:46:08.910 244018 INFO nova.compute.claims [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:46:09 np0005629333 nova_compute[244014]: 2026-02-25 12:46:09.067 244018 DEBUG oslo_concurrency.processutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:46:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:46:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2161363190' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:46:09 np0005629333 nova_compute[244014]: 2026-02-25 12:46:09.674 244018 DEBUG oslo_concurrency.processutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:46:09 np0005629333 nova_compute[244014]: 2026-02-25 12:46:09.680 244018 DEBUG nova.compute.provider_tree [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:46:09 np0005629333 nova_compute[244014]: 2026-02-25 12:46:09.702 244018 DEBUG nova.scheduler.client.report [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:46:09 np0005629333 nova_compute[244014]: 2026-02-25 12:46:09.735 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:46:09 np0005629333 nova_compute[244014]: 2026-02-25 12:46:09.736 244018 DEBUG nova.compute.manager [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:46:09 np0005629333 nova_compute[244014]: 2026-02-25 12:46:09.781 244018 DEBUG nova.compute.manager [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:46:09 np0005629333 nova_compute[244014]: 2026-02-25 12:46:09.781 244018 DEBUG nova.network.neutron [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:46:09 np0005629333 nova_compute[244014]: 2026-02-25 12:46:09.800 244018 INFO nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:46:09 np0005629333 nova_compute[244014]: 2026-02-25 12:46:09.814 244018 DEBUG nova.compute.manager [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:46:09 np0005629333 nova_compute[244014]: 2026-02-25 12:46:09.895 244018 DEBUG nova.compute.manager [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:46:09 np0005629333 nova_compute[244014]: 2026-02-25 12:46:09.897 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:46:09 np0005629333 nova_compute[244014]: 2026-02-25 12:46:09.897 244018 INFO nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Creating image(s)#033[00m
Feb 25 07:46:09 np0005629333 nova_compute[244014]: 2026-02-25 12:46:09.926 244018 DEBUG nova.storage.rbd_utils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image 709a8b15-83eb-45f4-b681-c150ad270e01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:46:09 np0005629333 nova_compute[244014]: 2026-02-25 12:46:09.960 244018 DEBUG nova.storage.rbd_utils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image 709a8b15-83eb-45f4-b681-c150ad270e01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:46:09 np0005629333 nova_compute[244014]: 2026-02-25 12:46:09.998 244018 DEBUG nova.storage.rbd_utils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image 709a8b15-83eb-45f4-b681-c150ad270e01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
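Note: the three identical "does not exist" probes come from Nova's Rbd image backend checking for the instance disk before deciding to import it; an open that raises rbd.ImageNotFound is treated as absence. A minimal sketch of that probe, assuming the python-rbd bindings and an already-open ioctx (the helper name is hypothetical, not Nova's):

    import rbd

    # Hypothetical helper mirroring the existence check logged above:
    # open the image read-only and treat ImageNotFound as "absent".
    def rbd_image_exists(ioctx, name):
        try:
            with rbd.Image(ioctx, name, read_only=True):
                return True
        except rbd.ImageNotFound:
            return False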
Feb 25 07:46:10 np0005629333 nova_compute[244014]: 2026-02-25 12:46:10.003 244018 DEBUG oslo_concurrency.processutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:46:10 np0005629333 nova_compute[244014]: 2026-02-25 12:46:10.105 244018 DEBUG oslo_concurrency.processutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
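Note: the qemu-img probe above runs under oslo_concurrency.prlimit, which caps the child at a 1 GiB address space (--as=1073741824) and 30 s of CPU (--cpu=30) so a malformed image cannot balloon or spin the compute service; --force-share lets it inspect a base image that may be open elsewhere. A sketch of issuing the same guarded probe from Python, assuming only oslo.concurrency (the path is the one from the log):

    from oslo_concurrency import processutils

    # Guarded image probe, as in the log above: the prlimit caps are
    # applied in the child process before qemu-img executes.
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30))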
Feb 25 07:46:10 np0005629333 nova_compute[244014]: 2026-02-25 12:46:10.106 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:46:10 np0005629333 nova_compute[244014]: 2026-02-25 12:46:10.106 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:46:10 np0005629333 nova_compute[244014]: 2026-02-25 12:46:10.106 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
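Note: the acquire/release pair around fetch_func_sync is the image-cache serialization. The lock name is the base-image hash, so concurrent boots of the same image fetch it only once; here the lock is held for 0.000s because the cached base already exists. A minimal sketch of the pattern with lockutils (the fetch body and lock_path are illustrative, not Nova's exact code):

    import os
    from oslo_concurrency import lockutils

    BASE = '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'

    # Hypothetical stand-in for Image.cache()'s fetch_func_sync: one fetch
    # per base-image hash, skipped when the file is already cached.
    @lockutils.synchronized(os.path.basename(BASE), external=True,
                            lock_path='/var/lib/nova/locks')
    def fetch_base_image():
        if not os.path.exists(BASE):
            pass  # download from Glance here; elided in this sketch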
Feb 25 07:46:10 np0005629333 nova_compute[244014]: 2026-02-25 12:46:10.137 244018 DEBUG nova.storage.rbd_utils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image 709a8b15-83eb-45f4-b681-c150ad270e01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:46:10 np0005629333 nova_compute[244014]: 2026-02-25 12:46:10.141 244018 DEBUG oslo_concurrency.processutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 709a8b15-83eb-45f4-b681-c150ad270e01_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:46:10 np0005629333 nova_compute[244014]: 2026-02-25 12:46:10.212 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 597 B/s wr, 70 op/s
Feb 25 07:46:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:10Z|00135|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:98:e2:13 10.100.0.4
Feb 25 07:46:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:10Z|00136|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:98:e2:13 10.100.0.4
Feb 25 07:46:10 np0005629333 nova_compute[244014]: 2026-02-25 12:46:10.781 244018 DEBUG oslo_concurrency.processutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 709a8b15-83eb-45f4-b681-c150ad270e01_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:46:10 np0005629333 nova_compute[244014]: 2026-02-25 12:46:10.866 244018 DEBUG nova.storage.rbd_utils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] resizing rbd image 709a8b15-83eb-45f4-b681-c150ad270e01_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
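Note: the import/resize pair above is how an image-backed root disk lands in Ceph: rbd import copies the cached base into the vms pool as <uuid>_disk, then the image is grown to 1073741824 bytes, which is exactly the flavor's root_gb=1 expressed in bytes (1 * 1024^3). A sketch of the resize step with the python-rbd bindings, assuming the cluster credentials shown in the log:

    import rados
    import rbd

    # Resize the freshly imported instance disk to the flavor root size,
    # mirroring the rbd_utils.resize call logged above.
    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx, '709a8b15-83eb-45f4-b681-c150ad270e01_disk') as img:
                img.resize(1 * 1024 ** 3)  # root_gb=1 -> 1073741824 bytes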
Feb 25 07:46:11 np0005629333 nova_compute[244014]: 2026-02-25 12:46:11.012 244018 DEBUG nova.policy [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb37a481eb114226822ed8b2ef4f9a89', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
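Note: the failed policy check above is benign. Nova probes network:attach_external_network on every boot to decide whether the requesting user may plug into external networks; with a plain reader/member token the rule denies it, and the build simply continues without that privilege. A minimal oslo.policy sketch of how a deny rule yields that log line (the rule string here is illustrative, not Nova's exact default):

    from oslo_config import cfg
    from oslo_policy import policy

    conf = cfg.ConfigOpts()
    conf([], project='demo')            # no config files needed for the sketch
    enforcer = policy.Enforcer(conf)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'role:admin'))  # illustrative rule

    creds = {'roles': ['reader', 'member'],
             'project_id': '6821a6e7edd54dbe97920b79aae8f54c'}
    print(enforcer.enforce('network:attach_external_network', {}, creds))
    # False -> logged as "Policy check ... failed", then execution continues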
Feb 25 07:46:11 np0005629333 nova_compute[244014]: 2026-02-25 12:46:11.100 244018 DEBUG nova.objects.instance [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'migration_context' on Instance uuid 709a8b15-83eb-45f4-b681-c150ad270e01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:46:11 np0005629333 nova_compute[244014]: 2026-02-25 12:46:11.118 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:46:11 np0005629333 nova_compute[244014]: 2026-02-25 12:46:11.119 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Ensure instance console log exists: /var/lib/nova/instances/709a8b15-83eb-45f4-b681-c150ad270e01/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:46:11 np0005629333 nova_compute[244014]: 2026-02-25 12:46:11.120 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:46:11 np0005629333 nova_compute[244014]: 2026-02-25 12:46:11.121 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:46:11 np0005629333 nova_compute[244014]: 2026-02-25 12:46:11.121 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:46:12 np0005629333 nova_compute[244014]: 2026-02-25 12:46:12.385 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1969: 305 pgs: 305 active+clean; 325 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 127 op/s
Feb 25 07:46:12 np0005629333 nova_compute[244014]: 2026-02-25 12:46:12.853 244018 DEBUG nova.network.neutron [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Successfully created port: a0f62675-5535-4e75-98f6-dc4fbadbc4c9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:46:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:46:13 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:13Z|01170|binding|INFO|Releasing lport 0b276f7a-90bb-4427-8f39-0e014732fd20 from this chassis (sb_readonly=0)
Feb 25 07:46:13 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:13Z|01171|binding|INFO|Releasing lport 091afd0d-cbaa-4d88-8f55-1965b8ffcb56 from this chassis (sb_readonly=0)
Feb 25 07:46:13 np0005629333 nova_compute[244014]: 2026-02-25 12:46:13.847 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 118 op/s
Feb 25 07:46:15 np0005629333 nova_compute[244014]: 2026-02-25 12:46:15.215 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:16 np0005629333 nova_compute[244014]: 2026-02-25 12:46:16.029 244018 DEBUG nova.network.neutron [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Successfully updated port: a0f62675-5535-4e75-98f6-dc4fbadbc4c9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:46:16 np0005629333 nova_compute[244014]: 2026-02-25 12:46:16.052 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:46:16 np0005629333 nova_compute[244014]: 2026-02-25 12:46:16.053 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquired lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:46:16 np0005629333 nova_compute[244014]: 2026-02-25 12:46:16.053 244018 DEBUG nova.network.neutron [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:46:16 np0005629333 nova_compute[244014]: 2026-02-25 12:46:16.275 244018 DEBUG nova.compute.manager [req-c5727e92-14b2-44be-b130-66229074fdcb req-24b27656-6d41-44d6-b723-92332d33d6a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received event network-changed-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:46:16 np0005629333 nova_compute[244014]: 2026-02-25 12:46:16.276 244018 DEBUG nova.compute.manager [req-c5727e92-14b2-44be-b130-66229074fdcb req-24b27656-6d41-44d6-b723-92332d33d6a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Refreshing instance network info cache due to event network-changed-a0f62675-5535-4e75-98f6-dc4fbadbc4c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:46:16 np0005629333 nova_compute[244014]: 2026-02-25 12:46:16.276 244018 DEBUG oslo_concurrency.lockutils [req-c5727e92-14b2-44be-b130-66229074fdcb req-24b27656-6d41-44d6-b723-92332d33d6a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:46:16 np0005629333 nova_compute[244014]: 2026-02-25 12:46:16.354 244018 DEBUG nova.network.neutron [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:46:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Feb 25 07:46:17 np0005629333 nova_compute[244014]: 2026-02-25 12:46:17.388 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Feb 25 07:46:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:46:19 np0005629333 nova_compute[244014]: 2026-02-25 12:46:19.837 244018 DEBUG nova.network.neutron [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Updating instance_info_cache with network_info: [{"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:46:19 np0005629333 nova_compute[244014]: 2026-02-25 12:46:19.877 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Releasing lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:46:19 np0005629333 nova_compute[244014]: 2026-02-25 12:46:19.877 244018 DEBUG nova.compute.manager [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Instance network_info: |[{"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
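Note: the network_info blob cached above carries everything the virt driver needs for the VIF; operationally the interesting fields are the port UUID, MAC fa:16:3e:56:b0:61, fixed IP 10.100.0.7 on 10.100.0.0/28, and MTU 1442 ("tunneled": true; 1442 is consistent with a 1500-byte underlay minus Geneve overlay overhead). A sketch of pulling those out, using a trimmed copy of the logged structure (values copied verbatim from the log):

    vif = {
        "id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9",
        "address": "fa:16:3e:56:b0:61",
        "devname": "tapa0f62675-55",
        "network": {
            "subnets": [{"cidr": "10.100.0.0/28",
                         "ips": [{"address": "10.100.0.7", "type": "fixed"}]}],
            "meta": {"mtu": 1442, "tunneled": True},
        },
    }

    fixed_ips = [ip["address"]
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"]]
    print(vif["address"], fixed_ips, vif["network"]["meta"]["mtu"])
    # fa:16:3e:56:b0:61 ['10.100.0.7'] 1442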
Feb 25 07:46:19 np0005629333 nova_compute[244014]: 2026-02-25 12:46:19.878 244018 DEBUG oslo_concurrency.lockutils [req-c5727e92-14b2-44be-b130-66229074fdcb req-24b27656-6d41-44d6-b723-92332d33d6a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:46:19 np0005629333 nova_compute[244014]: 2026-02-25 12:46:19.878 244018 DEBUG nova.network.neutron [req-c5727e92-14b2-44be-b130-66229074fdcb req-24b27656-6d41-44d6-b723-92332d33d6a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Refreshing network info cache for port a0f62675-5535-4e75-98f6-dc4fbadbc4c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:46:19 np0005629333 nova_compute[244014]: 2026-02-25 12:46:19.882 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Start _get_guest_xml network_info=[{"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:46:19 np0005629333 nova_compute[244014]: 2026-02-25 12:46:19.888 244018 WARNING nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:46:19 np0005629333 nova_compute[244014]: 2026-02-25 12:46:19.906 244018 DEBUG nova.virt.libvirt.host [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:46:19 np0005629333 nova_compute[244014]: 2026-02-25 12:46:19.907 244018 DEBUG nova.virt.libvirt.host [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:46:19 np0005629333 nova_compute[244014]: 2026-02-25 12:46:19.912 244018 DEBUG nova.virt.libvirt.host [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:46:19 np0005629333 nova_compute[244014]: 2026-02-25 12:46:19.912 244018 DEBUG nova.virt.libvirt.host [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:46:19 np0005629333 nova_compute[244014]: 2026-02-25 12:46:19.913 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:46:19 np0005629333 nova_compute[244014]: 2026-02-25 12:46:19.914 244018 DEBUG nova.virt.hardware [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:46:19 np0005629333 nova_compute[244014]: 2026-02-25 12:46:19.914 244018 DEBUG nova.virt.hardware [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:46:19 np0005629333 nova_compute[244014]: 2026-02-25 12:46:19.915 244018 DEBUG nova.virt.hardware [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:46:19 np0005629333 nova_compute[244014]: 2026-02-25 12:46:19.915 244018 DEBUG nova.virt.hardware [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:46:19 np0005629333 nova_compute[244014]: 2026-02-25 12:46:19.915 244018 DEBUG nova.virt.hardware [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:46:19 np0005629333 nova_compute[244014]: 2026-02-25 12:46:19.915 244018 DEBUG nova.virt.hardware [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:46:19 np0005629333 nova_compute[244014]: 2026-02-25 12:46:19.916 244018 DEBUG nova.virt.hardware [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:46:19 np0005629333 nova_compute[244014]: 2026-02-25 12:46:19.916 244018 DEBUG nova.virt.hardware [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:46:19 np0005629333 nova_compute[244014]: 2026-02-25 12:46:19.916 244018 DEBUG nova.virt.hardware [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:46:19 np0005629333 nova_compute[244014]: 2026-02-25 12:46:19.917 244018 DEBUG nova.virt.hardware [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:46:19 np0005629333 nova_compute[244014]: 2026-02-25 12:46:19.917 244018 DEBUG nova.virt.hardware [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:46:19 np0005629333 nova_compute[244014]: 2026-02-25 12:46:19.920 244018 DEBUG oslo_concurrency.processutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:46:20 np0005629333 nova_compute[244014]: 2026-02-25 12:46:20.217 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Feb 25 07:46:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:46:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4012586038' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:46:20 np0005629333 nova_compute[244014]: 2026-02-25 12:46:20.442 244018 DEBUG oslo_concurrency.processutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:46:20 np0005629333 nova_compute[244014]: 2026-02-25 12:46:20.472 244018 DEBUG nova.storage.rbd_utils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image 709a8b15-83eb-45f4-b681-c150ad270e01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:46:20 np0005629333 nova_compute[244014]: 2026-02-25 12:46:20.478 244018 DEBUG oslo_concurrency.processutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:46:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:46:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3469782281' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.056 244018 DEBUG oslo_concurrency.processutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
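Note: before emitting the disk XML, Nova runs ceph mon dump --format=json (twice here: once for the root disk, once for the config drive) to learn the monitor addresses that become the <host> elements; the ceph-mon audit lines show the same command arriving as client.openstack. A sketch of that discovery step, assuming the standard mon-dump JSON schema:

    import json
    import subprocess

    # Monitor discovery as logged above; public_addr looks like
    # "192.168.122.100:6789/0", so strip the trailing "/nonce".
    raw = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    mons = json.loads(raw)['mons']
    hosts = [m['public_addr'].rsplit('/', 1)[0] for m in mons]
    print(hosts)  # e.g. ['192.168.122.100:6789']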
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.058 244018 DEBUG nova.virt.libvirt.vif [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:46:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-742082713',display_name='tempest-TestNetworkAdvancedServerOps-server-742082713',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-742082713',id=116,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB+I5tZfRy0ovwNPnrrYTneQbLw5hVnQnH/IanoLiHVduiItxLS5mgbnypXYZj3B9Q1TA9okLBB9xUXwMoUUkuZoJAUYr2FIjfbOIXstrZLpgrCV3aVFIh2fnMujkQ9rZw==',key_name='tempest-TestNetworkAdvancedServerOps-1976579647',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-m0ykjdnm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:46:09Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=709a8b15-83eb-45f4-b681-c150ad270e01,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.058 244018 DEBUG nova.network.os_vif_util [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.059 244018 DEBUG nova.network.os_vif_util [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:b0:61,bridge_name='br-int',has_traffic_filtering=True,id=a0f62675-5535-4e75-98f6-dc4fbadbc4c9,network=Network(bf745dc4-78d4-4d88-8794-da49c21b9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f62675-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.060 244018 DEBUG nova.objects.instance [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'pci_devices' on Instance uuid 709a8b15-83eb-45f4-b681-c150ad270e01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.074 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:46:21 np0005629333 nova_compute[244014]:  <uuid>709a8b15-83eb-45f4-b681-c150ad270e01</uuid>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:  <name>instance-00000074</name>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-742082713</nova:name>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:46:19</nova:creationTime>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:46:21 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:        <nova:user uuid="fb37a481eb114226822ed8b2ef4f9a89">tempest-TestNetworkAdvancedServerOps-1424801157-project-member</nova:user>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:        <nova:project uuid="6821a6e7edd54dbe97920b79aae8f54c">tempest-TestNetworkAdvancedServerOps-1424801157</nova:project>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:        <nova:port uuid="a0f62675-5535-4e75-98f6-dc4fbadbc4c9">
Feb 25 07:46:21 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <entry name="serial">709a8b15-83eb-45f4-b681-c150ad270e01</entry>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <entry name="uuid">709a8b15-83eb-45f4-b681-c150ad270e01</entry>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/709a8b15-83eb-45f4-b681-c150ad270e01_disk">
Feb 25 07:46:21 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:46:21 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/709a8b15-83eb-45f4-b681-c150ad270e01_disk.config">
Feb 25 07:46:21 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:46:21 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:56:b0:61"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <target dev="tapa0f62675-55"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/709a8b15-83eb-45f4-b681-c150ad270e01/console.log" append="off"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:46:21 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:46:21 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:46:21 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:46:21 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
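Note: a few things in the generated domain XML above are easy to miss. <memory>131072</memory> is in KiB (128 MiB, matching m1.nano); both disks are network disks served straight from RBD via the monitor at 192.168.122.100:6789 with the libvirt ceph secret; the .config CD-ROM is the config drive; and the interface carries <mtu size="1442"/> to match the tunneled tenant network. A sketch of programmatically extracting the RBD sources from such a document (trimmed XML inlined here; the full version is above):

    import xml.etree.ElementTree as ET

    xml_text = """
    <domain type="kvm">
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/709a8b15-83eb-45f4-b681-c150ad270e01_disk"/>
        </disk>
        <disk type="network" device="cdrom">
          <source protocol="rbd" name="vms/709a8b15-83eb-45f4-b681-c150ad270e01_disk.config"/>
        </disk>
      </devices>
    </domain>"""

    for disk in ET.fromstring(xml_text).findall('./devices/disk'):
        src = disk.find('source')
        if src is not None and src.get('protocol') == 'rbd':
            print(disk.get('device'), '->', src.get('name'))
    # disk -> vms/709a8b15-83eb-45f4-b681-c150ad270e01_disk
    # cdrom -> vms/709a8b15-83eb-45f4-b681-c150ad270e01_disk.config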
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.074 244018 DEBUG nova.compute.manager [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Preparing to wait for external event network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.074 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.075 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.075 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
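Note: the lock lines above arm a waiter for network-vif-plugged-a0f62675-... before the guest is ever defined; Nova later blocks on that event instead of polling Neutron, and arming before acting is what prevents losing an event that fires early. The pattern, reduced to a sketch (Nova's real implementation uses eventlet events keyed by (name, tag); this stand-in uses plain threading):

    import threading

    _events = {}

    def prepare_for_event(tag):
        # Arm the waiter *before* triggering the action that causes the event.
        _events[tag] = threading.Event()

    def receive_event(tag):
        _events[tag].set()        # called when Neutron reports the vif plugged

    def wait_for_event(tag, timeout=300):
        if not _events[tag].wait(timeout):
            raise TimeoutError(f'network-vif-plugged-{tag} never arrived')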
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.075 244018 DEBUG nova.virt.libvirt.vif [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:46:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-742082713',display_name='tempest-TestNetworkAdvancedServerOps-server-742082713',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-742082713',id=116,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB+I5tZfRy0ovwNPnrrYTneQbLw5hVnQnH/IanoLiHVduiItxLS5mgbnypXYZj3B9Q1TA9okLBB9xUXwMoUUkuZoJAUYr2FIjfbOIXstrZLpgrCV3aVFIh2fnMujkQ9rZw==',key_name='tempest-TestNetworkAdvancedServerOps-1976579647',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-m0ykjdnm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:46:09Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=709a8b15-83eb-45f4-b681-c150ad270e01,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.075 244018 DEBUG nova.network.os_vif_util [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.076 244018 DEBUG nova.network.os_vif_util [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:b0:61,bridge_name='br-int',has_traffic_filtering=True,id=a0f62675-5535-4e75-98f6-dc4fbadbc4c9,network=Network(bf745dc4-78d4-4d88-8794-da49c21b9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f62675-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.077 244018 DEBUG os_vif [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:b0:61,bridge_name='br-int',has_traffic_filtering=True,id=a0f62675-5535-4e75-98f6-dc4fbadbc4c9,network=Network(bf745dc4-78d4-4d88-8794-da49c21b9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f62675-55') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.077 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.078 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.078 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.080 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.080 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0f62675-55, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.081 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa0f62675-55, col_values=(('external_ids', {'iface-id': 'a0f62675-5535-4e75-98f6-dc4fbadbc4c9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:56:b0:61', 'vm-uuid': '709a8b15-83eb-45f4-b681-c150ad270e01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.082 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:21 np0005629333 NetworkManager[49836]: <info>  [1772023581.0837] manager: (tapa0f62675-55): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/489)
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.084 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.091 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.092 244018 INFO os_vif [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:b0:61,bridge_name='br-int',has_traffic_filtering=True,id=a0f62675-5535-4e75-98f6-dc4fbadbc4c9,network=Network(bf745dc4-78d4-4d88-8794-da49c21b9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f62675-55')#033[00m
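
The plug sequence recorded above (AddBridgeCommand, AddPortCommand, DbSetCommand, all idempotent thanks to may_exist=True, hence the "Transaction caused no change" record) maps onto a single ovs-vsctl transaction. A minimal Python sketch of the equivalent CLI call, not the ovsdbapp code path Nova actually uses; it assumes root privileges and ovs-vsctl on PATH, with names and UUIDs copied from the records above:

    # Replay the logged os-vif OVS plug as one ovs-vsctl transaction.
    import subprocess

    BRIDGE = "br-int"
    PORT = "tapa0f62675-55"                            # devname from the VIF record
    IFACE_ID = "a0f62675-5535-4e75-98f6-dc4fbadbc4c9"  # Neutron port UUID
    MAC = "fa:16:3e:56:b0:61"
    VM_UUID = "709a8b15-83eb-45f4-b681-c150ad270e01"

    subprocess.run(
        ["ovs-vsctl",
         "--may-exist", "add-br", BRIDGE,                # AddBridgeCommand(may_exist=True)
         "--", "--may-exist", "add-port", BRIDGE, PORT,  # AddPortCommand(may_exist=True)
         "--", "set", "Interface", PORT,                 # DbSetCommand on external_ids
         f"external_ids:iface-id={IFACE_ID}",
         "external_ids:iface-status=active",
         f"external_ids:attached-mac={MAC}",
         f"external_ids:vm-uuid={VM_UUID}"],
        check=True,
    )

The iface-id external_id is the hook OVN uses: ovn-controller watches the local OVS database for it and claims the matching logical port, which is exactly what happens a few records below.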
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.148 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.149 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.149 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] No VIF found with MAC fa:16:3e:56:b0:61, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.149 244018 INFO nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Using config drive#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.169 244018 DEBUG nova.storage.rbd_utils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image 709a8b15-83eb-45f4-b681-c150ad270e01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.203 244018 DEBUG nova.network.neutron [req-c5727e92-14b2-44be-b130-66229074fdcb req-24b27656-6d41-44d6-b723-92332d33d6a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Updated VIF entry in instance network info cache for port a0f62675-5535-4e75-98f6-dc4fbadbc4c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.204 244018 DEBUG nova.network.neutron [req-c5727e92-14b2-44be-b130-66229074fdcb req-24b27656-6d41-44d6-b723-92332d33d6a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Updating instance_info_cache with network_info: [{"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.218 244018 DEBUG oslo_concurrency.lockutils [req-c5727e92-14b2-44be-b130-66229074fdcb req-24b27656-6d41-44d6-b723-92332d33d6a1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.583 244018 INFO nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Creating config drive at /var/lib/nova/instances/709a8b15-83eb-45f4-b681-c150ad270e01/disk.config#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.587 244018 DEBUG oslo_concurrency.processutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/709a8b15-83eb-45f4-b681-c150ad270e01/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpsq3isf8f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.719 244018 DEBUG oslo_concurrency.processutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/709a8b15-83eb-45f4-b681-c150ad270e01/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpsq3isf8f" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
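
The config drive built above is a plain ISO9660 image whose "config-2" volume label is what cloud-init probes for. A minimal sketch reproducing the logged invocation, assuming mkisofs (or genisoimage) is installed; the staging directory here is a hypothetical stand-in for the temporary directory Nova populates with the metadata files (the logged run used /tmp/tmpsq3isf8f):

    # Build a "config-2" ISO the way the logged mkisofs invocation does.
    import subprocess

    staging_dir = "/tmp/configdrive-staging"   # hypothetical metadata tree
    output_iso = "/tmp/disk.config"

    subprocess.run(
        ["mkisofs",
         "-o", output_iso,
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute",
         "-quiet", "-J", "-r",
         "-V", "config-2",                     # the label cloud-init looks for
         staging_dir],
        check=True,
    )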
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.750 244018 DEBUG nova.storage.rbd_utils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] rbd image 709a8b15-83eb-45f4-b681-c150ad270e01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.754 244018 DEBUG oslo_concurrency.processutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/709a8b15-83eb-45f4-b681-c150ad270e01/disk.config 709a8b15-83eb-45f4-b681-c150ad270e01_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.869 244018 DEBUG oslo_concurrency.processutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/709a8b15-83eb-45f4-b681-c150ad270e01/disk.config 709a8b15-83eb-45f4-b681-c150ad270e01_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.869 244018 INFO nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Deleting local config drive /var/lib/nova/instances/709a8b15-83eb-45f4-b681-c150ad270e01/disk.config because it was imported into RBD.#033[00m
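
Because this deployment stores instance disks in Ceph (images_type=rbd in nova.conf's [libvirt] section), the config drive does not stay on local disk: it is imported into the vms pool and the local file removed, as the two records above show. A minimal sketch of the same import, assuming the rbd CLI, /etc/ceph/ceph.conf, and the "openstack" cephx user from the logged command:

    # Import the local config drive into RBD, then drop the local copy.
    import os
    import subprocess

    local_path = ("/var/lib/nova/instances/"
                  "709a8b15-83eb-45f4-b681-c150ad270e01/disk.config")
    rbd_image = "709a8b15-83eb-45f4-b681-c150ad270e01_disk.config"

    subprocess.run(
        ["rbd", "import", "--pool", "vms", local_path, rbd_image,
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )
    os.unlink(local_path)   # "Deleting local config drive ... imported into RBD"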
Feb 25 07:46:21 np0005629333 kernel: tapa0f62675-55: entered promiscuous mode
Feb 25 07:46:21 np0005629333 NetworkManager[49836]: <info>  [1772023581.9119] manager: (tapa0f62675-55): new Tun device (/org/freedesktop/NetworkManager/Devices/490)
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.913 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:21Z|01172|binding|INFO|Claiming lport a0f62675-5535-4e75-98f6-dc4fbadbc4c9 for this chassis.
Feb 25 07:46:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:21Z|01173|binding|INFO|a0f62675-5535-4e75-98f6-dc4fbadbc4c9: Claiming fa:16:3e:56:b0:61 10.100.0.7
Feb 25 07:46:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:21.920 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:b0:61 10.100.0.7'], port_security=['fa:16:3e:56:b0:61 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '709a8b15-83eb-45f4-b681-c150ad270e01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6cab933e-ec3e-4cce-9e9e-8098a6cc3a83', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68434b74-2de5-46be-a5a4-5cead97a1427, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=a0f62675-5535-4e75-98f6-dc4fbadbc4c9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.922 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:21Z|01174|binding|INFO|Setting lport a0f62675-5535-4e75-98f6-dc4fbadbc4c9 ovn-installed in OVS
Feb 25 07:46:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:21Z|01175|binding|INFO|Setting lport a0f62675-5535-4e75-98f6-dc4fbadbc4c9 up in Southbound
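
In the three records above, ovn-controller claims the logical port for this chassis, marks it ovn-installed in the local OVS database, and flips it up in the OVN Southbound database. A minimal sketch for checking that binding after the fact, assuming ovn-sbctl on this node can reach the Southbound DB:

    # Query the Southbound Port_Binding row ovn-controller just claimed.
    import subprocess

    LPORT = "a0f62675-5535-4e75-98f6-dc4fbadbc4c9"

    out = subprocess.run(
        ["ovn-sbctl", "--bare", "--columns=chassis,up",
         "find", "Port_Binding", f"logical_port={LPORT}"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out)   # chassis row UUID, then "true" once the lport is up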
Feb 25 07:46:21 np0005629333 nova_compute[244014]: 2026-02-25 12:46:21.924 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:21.922 157129 INFO neutron.agent.ovn.metadata.agent [-] Port a0f62675-5535-4e75-98f6-dc4fbadbc4c9 in datapath bf745dc4-78d4-4d88-8794-da49c21b9f38 bound to our chassis#033[00m
Feb 25 07:46:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:21.924 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bf745dc4-78d4-4d88-8794-da49c21b9f38#033[00m
Feb 25 07:46:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:21.938 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc29cce-0043-4ad0-981f-d7f79893f2f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:21.939 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbf745dc4-71 in ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:46:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:21.942 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbf745dc4-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:46:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:21.942 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[56bbdd9d-ae69-4fe8-9a21-a9412fe62d17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:21.943 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[715b1fd4-3cfa-41e1-8fba-e474ecf15b82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:21 np0005629333 systemd-udevd[348063]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:46:21 np0005629333 systemd-machined[210048]: New machine qemu-147-instance-00000074.
Feb 25 07:46:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:21.951 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[8314d7ef-2589-426b-b3f1-7c8c80255192]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:21 np0005629333 NetworkManager[49836]: <info>  [1772023581.9584] device (tapa0f62675-55): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:46:21 np0005629333 NetworkManager[49836]: <info>  [1772023581.9591] device (tapa0f62675-55): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:46:21 np0005629333 systemd[1]: Started Virtual Machine qemu-147-instance-00000074.
Feb 25 07:46:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:21.966 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0f89a12e-d93a-4f30-82a4-56d3b98a422e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:21.995 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[910bd9f9-52e4-46b7-ae2f-a5ae1bcae0aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:22 np0005629333 systemd-udevd[348067]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:46:22 np0005629333 NetworkManager[49836]: <info>  [1772023582.0024] manager: (tapbf745dc4-70): new Veth device (/org/freedesktop/NetworkManager/Devices/491)
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.000 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[81793156-5891-431b-a59b-ee339913b240]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.028 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[935cd98c-41ec-4baa-8245-1414347a5557]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.032 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d10cebf6-dd77-408a-8e64-43cc64c34369]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:22 np0005629333 NetworkManager[49836]: <info>  [1772023582.0555] device (tapbf745dc4-70): carrier: link connected
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.061 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[11c1ab93-3484-46f8-8c88-9707203f0135]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.072 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a93b54-5f77-4d4d-b1ab-46a33d401c15]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf745dc4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:7d:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 353], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 555146, 'reachable_time': 25803, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348095, 'error': None, 'target': 'ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.084 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a298d7c1-6dcb-484c-9672-dac6a994ba40]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe91:7daf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 555146, 'tstamp': 555146}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348096, 'error': None, 'target': 'ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.098 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6cb44c58-d3ec-4003-a631-feb818781a64]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf745dc4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:7d:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 353], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 555146, 'reachable_time': 25803, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 348097, 'error': None, 'target': 'ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.128 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[38ef4646-e58a-458f-b155-42944f4866f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.188 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[705697c9-b286-43e2-a2fd-10372f1b6eb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
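
The namespace plumbing logged since the "Creating VETH" record boils down to: create the ovnmeta-<network> namespace, create a veth pair, keep tapbf745dc4-70 in the root namespace (it is plugged into br-int in the next records), move tapbf745dc4-71 inside, and bring both ends up. A minimal iproute2 sketch of those steps, assuming root; the 169.254.169.254/32 assignment inside the namespace is an assumption inferred from the haproxy bind address further down, not shown in these records:

    # Recreate the metadata namespace plumbing with the ip(8) CLI.
    import subprocess

    NS = "ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38"
    VETH_ROOT, VETH_NS = "tapbf745dc4-70", "tapbf745dc4-71"

    def sh(*args):
        subprocess.run(args, check=True)

    sh("ip", "netns", "add", NS)
    sh("ip", "link", "add", VETH_ROOT, "type", "veth", "peer", "name", VETH_NS)
    sh("ip", "link", "set", VETH_NS, "netns", NS)       # inner end into the namespace
    sh("ip", "link", "set", VETH_ROOT, "up")
    sh("ip", "netns", "exec", NS, "ip", "link", "set", VETH_NS, "up")
    sh("ip", "netns", "exec", NS, "ip", "addr", "add",  # assumed from the haproxy bind
       "169.254.169.254/32", "dev", VETH_NS)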
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.189 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf745dc4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.190 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.190 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf745dc4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:46:22 np0005629333 nova_compute[244014]: 2026-02-25 12:46:22.192 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:22 np0005629333 kernel: tapbf745dc4-70: entered promiscuous mode
Feb 25 07:46:22 np0005629333 NetworkManager[49836]: <info>  [1772023582.1960] manager: (tapbf745dc4-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/492)
Feb 25 07:46:22 np0005629333 nova_compute[244014]: 2026-02-25 12:46:22.197 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.198 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbf745dc4-70, col_values=(('external_ids', {'iface-id': '92ae33df-1d64-498f-b132-6ef0663f81e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:46:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:22Z|01176|binding|INFO|Releasing lport 92ae33df-1d64-498f-b132-6ef0663f81e9 from this chassis (sb_readonly=0)
Feb 25 07:46:22 np0005629333 nova_compute[244014]: 2026-02-25 12:46:22.201 244018 DEBUG nova.compute.manager [req-d09e0eec-0542-4f83-87ae-2403ebeb8f6f req-b96f22ee-bab8-4e99-a263-25aa1e0e8cc7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received event network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:46:22 np0005629333 nova_compute[244014]: 2026-02-25 12:46:22.201 244018 DEBUG oslo_concurrency.lockutils [req-d09e0eec-0542-4f83-87ae-2403ebeb8f6f req-b96f22ee-bab8-4e99-a263-25aa1e0e8cc7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.201 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bf745dc4-78d4-4d88-8794-da49c21b9f38.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bf745dc4-78d4-4d88-8794-da49c21b9f38.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:46:22 np0005629333 nova_compute[244014]: 2026-02-25 12:46:22.202 244018 DEBUG oslo_concurrency.lockutils [req-d09e0eec-0542-4f83-87ae-2403ebeb8f6f req-b96f22ee-bab8-4e99-a263-25aa1e0e8cc7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:46:22 np0005629333 nova_compute[244014]: 2026-02-25 12:46:22.202 244018 DEBUG oslo_concurrency.lockutils [req-d09e0eec-0542-4f83-87ae-2403ebeb8f6f req-b96f22ee-bab8-4e99-a263-25aa1e0e8cc7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:46:22 np0005629333 nova_compute[244014]: 2026-02-25 12:46:22.202 244018 DEBUG nova.compute.manager [req-d09e0eec-0542-4f83-87ae-2403ebeb8f6f req-b96f22ee-bab8-4e99-a263-25aa1e0e8cc7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Processing event network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:46:22 np0005629333 nova_compute[244014]: 2026-02-25 12:46:22.202 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.202 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[101acfea-fe24-4a0a-b627-61f9c01bc464]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.203 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-bf745dc4-78d4-4d88-8794-da49c21b9f38
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/bf745dc4-78d4-4d88-8794-da49c21b9f38.pid.haproxy
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID bf745dc4-78d4-4d88-8794-da49c21b9f38
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:46:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:22.204 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'env', 'PROCESS_TAG=haproxy-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bf745dc4-78d4-4d88-8794-da49c21b9f38.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
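
Stripped of rootwrap, the command just logged starts haproxy inside the metadata namespace against the generated config; the "daemon" directive in that config makes haproxy fork, which is why the call returns immediately and the worker NOTICE lines appear below. A minimal sketch of the same launch, assuming root and the namespace and config file from the preceding records:

    # Start the metadata proxy the way the logged rootwrap command does.
    import subprocess

    NS = "ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38"
    CFG = ("/var/lib/neutron/ovn-metadata-proxy/"
           "bf745dc4-78d4-4d88-8794-da49c21b9f38.conf")

    subprocess.run(
        ["ip", "netns", "exec", NS,
         "env", "PROCESS_TAG=haproxy-bf745dc4-78d4-4d88-8794-da49c21b9f38",
         "haproxy", "-f", CFG],
        check=True,   # returns once haproxy daemonizes
    )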
Feb 25 07:46:22 np0005629333 nova_compute[244014]: 2026-02-25 12:46:22.206 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 333 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Feb 25 07:46:22 np0005629333 podman[348129]: 2026-02-25 12:46:22.523651588 +0000 UTC m=+0.040556856 container create fde4953fed28e007b5b69e80e6f3c28902ff149574cb654c3cd08315acb4de87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 25 07:46:22 np0005629333 systemd[1]: Started libpod-conmon-fde4953fed28e007b5b69e80e6f3c28902ff149574cb654c3cd08315acb4de87.scope.
Feb 25 07:46:22 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:46:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f539fbb4b8a2e0d559dbacdf783585633e2c940d6c86cd49a3a52f168fc29a4e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:46:22 np0005629333 podman[348129]: 2026-02-25 12:46:22.501461472 +0000 UTC m=+0.018366770 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:46:22 np0005629333 podman[348129]: 2026-02-25 12:46:22.604020648 +0000 UTC m=+0.120925916 container init fde4953fed28e007b5b69e80e6f3c28902ff149574cb654c3cd08315acb4de87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 25 07:46:22 np0005629333 podman[348129]: 2026-02-25 12:46:22.609557364 +0000 UTC m=+0.126462632 container start fde4953fed28e007b5b69e80e6f3c28902ff149574cb654c3cd08315acb4de87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 25 07:46:22 np0005629333 neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38[348144]: [NOTICE]   (348148) : New worker (348150) forked
Feb 25 07:46:22 np0005629333 neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38[348144]: [NOTICE]   (348148) : Loading success.
Feb 25 07:46:22 np0005629333 nova_compute[244014]: 2026-02-25 12:46:22.847 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023582.8474016, 709a8b15-83eb-45f4-b681-c150ad270e01 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:46:22 np0005629333 nova_compute[244014]: 2026-02-25 12:46:22.848 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] VM Started (Lifecycle Event)#033[00m
Feb 25 07:46:22 np0005629333 nova_compute[244014]: 2026-02-25 12:46:22.850 244018 DEBUG nova.compute.manager [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:46:22 np0005629333 nova_compute[244014]: 2026-02-25 12:46:22.855 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:46:22 np0005629333 nova_compute[244014]: 2026-02-25 12:46:22.859 244018 INFO nova.virt.libvirt.driver [-] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Instance spawned successfully.#033[00m
Feb 25 07:46:22 np0005629333 nova_compute[244014]: 2026-02-25 12:46:22.860 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:46:23 np0005629333 nova_compute[244014]: 2026-02-25 12:46:23.005 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:46:23 np0005629333 nova_compute[244014]: 2026-02-25 12:46:23.012 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:46:23 np0005629333 nova_compute[244014]: 2026-02-25 12:46:23.017 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:46:23 np0005629333 nova_compute[244014]: 2026-02-25 12:46:23.017 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:46:23 np0005629333 nova_compute[244014]: 2026-02-25 12:46:23.018 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:46:23 np0005629333 nova_compute[244014]: 2026-02-25 12:46:23.018 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:46:23 np0005629333 nova_compute[244014]: 2026-02-25 12:46:23.019 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:46:23 np0005629333 nova_compute[244014]: 2026-02-25 12:46:23.019 244018 DEBUG nova.virt.libvirt.driver [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:46:23 np0005629333 nova_compute[244014]: 2026-02-25 12:46:23.042 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:46:23 np0005629333 nova_compute[244014]: 2026-02-25 12:46:23.043 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023582.8475673, 709a8b15-83eb-45f4-b681-c150ad270e01 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:46:23 np0005629333 nova_compute[244014]: 2026-02-25 12:46:23.043 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:46:23 np0005629333 nova_compute[244014]: 2026-02-25 12:46:23.075 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:46:23 np0005629333 nova_compute[244014]: 2026-02-25 12:46:23.080 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023582.854272, 709a8b15-83eb-45f4-b681-c150ad270e01 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:46:23 np0005629333 nova_compute[244014]: 2026-02-25 12:46:23.080 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] VM Resumed (Lifecycle Event)#033[00m
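
The Started/Paused/Resumed records come from libvirt domain lifecycle callbacks that Nova's driver re-emits as LifecycleEvent objects; a brief pause/resume is normal while libvirt finishes device setup during spawn. A minimal standalone listener for the same events, assuming libvirt-python and a local qemu:///system socket:

    # Listen for the libvirt lifecycle events Nova is reacting to above.
    import libvirt

    def on_lifecycle(conn, dom, event, detail, opaque):
        # event indexes virDomainEventType: 2=Started, 3=Suspended, 4=Resumed, 5=Stopped
        print(f"{dom.name()}: lifecycle event {event} detail {detail}")

    libvirt.virEventRegisterDefaultImpl()          # must precede open()
    conn = libvirt.open("qemu:///system")
    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, on_lifecycle, None)
    while True:                                    # Ctrl-C to stop
        libvirt.virEventRunDefaultImpl()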
Feb 25 07:46:23 np0005629333 nova_compute[244014]: 2026-02-25 12:46:23.085 244018 INFO nova.compute.manager [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Took 13.19 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:46:23 np0005629333 nova_compute[244014]: 2026-02-25 12:46:23.085 244018 DEBUG nova.compute.manager [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:46:23 np0005629333 nova_compute[244014]: 2026-02-25 12:46:23.094 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:46:23 np0005629333 nova_compute[244014]: 2026-02-25 12:46:23.099 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:46:23 np0005629333 nova_compute[244014]: 2026-02-25 12:46:23.123 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:46:23 np0005629333 nova_compute[244014]: 2026-02-25 12:46:23.143 244018 INFO nova.compute.manager [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Took 14.27 seconds to build instance.#033[00m
Feb 25 07:46:23 np0005629333 nova_compute[244014]: 2026-02-25 12:46:23.158 244018 DEBUG oslo_concurrency.lockutils [None req-d7d6140f-360d-48c4-bb3f-77a63db6c740 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.353s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:46:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:46:24 np0005629333 nova_compute[244014]: 2026-02-25 12:46:24.347 244018 DEBUG nova.compute.manager [req-52982717-48dc-4ecf-aac5-008a6addcb86 req-9ed93ebc-5781-48b1-901e-ecd00585b55b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received event network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:46:24 np0005629333 nova_compute[244014]: 2026-02-25 12:46:24.347 244018 DEBUG oslo_concurrency.lockutils [req-52982717-48dc-4ecf-aac5-008a6addcb86 req-9ed93ebc-5781-48b1-901e-ecd00585b55b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:46:24 np0005629333 nova_compute[244014]: 2026-02-25 12:46:24.348 244018 DEBUG oslo_concurrency.lockutils [req-52982717-48dc-4ecf-aac5-008a6addcb86 req-9ed93ebc-5781-48b1-901e-ecd00585b55b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:46:24 np0005629333 nova_compute[244014]: 2026-02-25 12:46:24.348 244018 DEBUG oslo_concurrency.lockutils [req-52982717-48dc-4ecf-aac5-008a6addcb86 req-9ed93ebc-5781-48b1-901e-ecd00585b55b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:46:24 np0005629333 nova_compute[244014]: 2026-02-25 12:46:24.349 244018 DEBUG nova.compute.manager [req-52982717-48dc-4ecf-aac5-008a6addcb86 req-9ed93ebc-5781-48b1-901e-ecd00585b55b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] No waiting events found dispatching network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:46:24 np0005629333 nova_compute[244014]: 2026-02-25 12:46:24.349 244018 WARNING nova.compute.manager [req-52982717-48dc-4ecf-aac5-008a6addcb86 req-9ed93ebc-5781-48b1-901e-ecd00585b55b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received unexpected event network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 for instance with vm_state active and task_state None.#033[00m
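The sequence above is nova's external-event handshake ending in a benign warning: nothing was waiting for network-vif-plugged because the build already finished (vm_state active, task_state None), so popping the event finds no waiter. A toy registry showing the prepare/pop shape (purely illustrative, not nova's implementation):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}  # (instance_uuid, event_name) -> Event

        def prepare(self, uuid, name):
            # A waiter registers interest before starting the operation.
            with self._lock:
                ev = self._events[(uuid, name)] = threading.Event()
            return ev

        def pop(self, uuid, name):
            # Neutron's notification arrives and wakes the waiter, if any.
            with self._lock:
                ev = self._events.pop((uuid, name), None)
            if ev is None:
                return None  # "No waiting events found" -> warning above
            ev.set()
            return ev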
Feb 25 07:46:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Feb 25 07:46:24 np0005629333 nova_compute[244014]: 2026-02-25 12:46:24.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:46:24 np0005629333 nova_compute[244014]: 2026-02-25 12:46:24.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:46:24 np0005629333 nova_compute[244014]: 2026-02-25 12:46:24.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
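_check_instance_build_time and _heal_instance_info_cache are oslo.service periodic tasks: methods decorated on a PeriodicTasks subclass and driven by a timer that repeatedly calls run_periodic_tasks(). A minimal sketch of the mechanism (the spacing value is illustrative):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            pass  # refresh one instance's network info cache per run

    mgr = Manager(cfg.CONF)
    # Normally invoked again and again from a looping timer, as logged.
    mgr.run_periodic_tasks(context=None)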
Feb 25 07:46:24 np0005629333 nova_compute[244014]: 2026-02-25 12:46:24.919 244018 DEBUG nova.compute.manager [req-86ec9e67-6abc-4139-afd8-6f0f436630b9 req-9c792013-47f4-4d76-80d3-a87a05dab795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received event network-changed-443e71ca-9c38-40e8-b799-1026ef47c35d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:46:24 np0005629333 nova_compute[244014]: 2026-02-25 12:46:24.920 244018 DEBUG nova.compute.manager [req-86ec9e67-6abc-4139-afd8-6f0f436630b9 req-9c792013-47f4-4d76-80d3-a87a05dab795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Refreshing instance network info cache due to event network-changed-443e71ca-9c38-40e8-b799-1026ef47c35d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:46:24 np0005629333 nova_compute[244014]: 2026-02-25 12:46:24.920 244018 DEBUG oslo_concurrency.lockutils [req-86ec9e67-6abc-4139-afd8-6f0f436630b9 req-9c792013-47f4-4d76-80d3-a87a05dab795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:46:24 np0005629333 nova_compute[244014]: 2026-02-25 12:46:24.920 244018 DEBUG oslo_concurrency.lockutils [req-86ec9e67-6abc-4139-afd8-6f0f436630b9 req-9c792013-47f4-4d76-80d3-a87a05dab795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:46:24 np0005629333 nova_compute[244014]: 2026-02-25 12:46:24.920 244018 DEBUG nova.network.neutron [req-86ec9e67-6abc-4139-afd8-6f0f436630b9 req-9c792013-47f4-4d76-80d3-a87a05dab795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Refreshing network info cache for port 443e71ca-9c38-40e8-b799-1026ef47c35d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.010 244018 DEBUG oslo_concurrency.lockutils [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.010 244018 DEBUG oslo_concurrency.lockutils [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.010 244018 DEBUG oslo_concurrency.lockutils [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.011 244018 DEBUG oslo_concurrency.lockutils [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.011 244018 DEBUG oslo_concurrency.lockutils [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.012 244018 INFO nova.compute.manager [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Terminating instance#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.012 244018 DEBUG nova.compute.manager [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:46:25 np0005629333 kernel: tap443e71ca-9c (unregistering): left promiscuous mode
Feb 25 07:46:25 np0005629333 NetworkManager[49836]: <info>  [1772023585.0790] device (tap443e71ca-9c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.097 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:25Z|01177|binding|INFO|Releasing lport 443e71ca-9c38-40e8-b799-1026ef47c35d from this chassis (sb_readonly=0)
Feb 25 07:46:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:25Z|01178|binding|INFO|Setting lport 443e71ca-9c38-40e8-b799-1026ef47c35d down in Southbound
Feb 25 07:46:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:25Z|01179|binding|INFO|Removing iface tap443e71ca-9c ovn-installed in OVS
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.100 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:25 np0005629333 kernel: tap39f87175-7c (unregistering): left promiscuous mode
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.107 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:25 np0005629333 NetworkManager[49836]: <info>  [1772023585.1104] device (tap39f87175-7c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:46:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:25Z|01180|binding|INFO|Releasing lport 0b276f7a-90bb-4427-8f39-0e014732fd20 from this chassis (sb_readonly=0)
Feb 25 07:46:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:25Z|01181|binding|INFO|Releasing lport 091afd0d-cbaa-4d88-8f55-1965b8ffcb56 from this chassis (sb_readonly=0)
Feb 25 07:46:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:25Z|01182|binding|INFO|Releasing lport 92ae33df-1d64-498f-b132-6ef0663f81e9 from this chassis (sb_readonly=0)
Feb 25 07:46:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.110 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:e2:13 10.100.0.4'], port_security=['fa:16:3e:98:e2:13 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2b9ea98f-9b1a-495b-8669-e79da967b0ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '09e05c4f-db6b-40c9-84a5-79dc857b9d0a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1aa114a0-206b-4aa1-b770-18f248344fa4, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=443e71ca-9c38-40e8-b799-1026ef47c35d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:46:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.112 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 443e71ca-9c38-40e8-b799-1026ef47c35d in datapath bb79e0fd-2a4d-4a70-9c80-4853297401ff unbound from our chassis#033[00m
Feb 25 07:46:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.116 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bb79e0fd-2a4d-4a70-9c80-4853297401ff#033[00m
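The metadata agent reacts to the Port_Binding change through an ovsdbapp row event: the matcher at event.py:43 compares the new row against the old one and fires when the port drops off this chassis. A sketch of that event shape, simplified from the repr the log prints (not neutron's exact class):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class ChassisUnboundEvent(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None,
            # exactly the tuple shown in the matched event above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Fire when a port that was bound here loses its chassis.
            return bool(getattr(old, 'chassis', [])) and not row.chassis

        def run(self, event, row, old):
            print('Port %s unbound from our chassis' % row.logical_port)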
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.122 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:25Z|01183|binding|INFO|Releasing lport 39f87175-7c4f-4092-bce6-29b413c731cd from this chassis (sb_readonly=0)
Feb 25 07:46:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:25Z|01184|binding|INFO|Setting lport 39f87175-7c4f-4092-bce6-29b413c731cd down in Southbound
Feb 25 07:46:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:25Z|01185|binding|INFO|Removing iface tap39f87175-7c ovn-installed in OVS
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.124 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.131 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b74e9752-eda9-4fb0-931a-936db6a01f8e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.133 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.151 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:de:d3 2001:db8:0:1:f816:3eff:feb2:ded3 2001:db8::f816:3eff:feb2:ded3'], port_security=['fa:16:3e:b2:de:d3 2001:db8:0:1:f816:3eff:feb2:ded3 2001:db8::f816:3eff:feb2:ded3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feb2:ded3/64 2001:db8::f816:3eff:feb2:ded3/64', 'neutron:device_id': '2b9ea98f-9b1a-495b-8669-e79da967b0ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '09e05c4f-db6b-40c9-84a5-79dc857b9d0a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5bfff87-5507-4fe1-aed9-1b014e8e7384, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=39f87175-7c4f-4092-bce6-29b413c731cd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:46:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.156 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a55a244f-dabc-4100-8b6a-35c4dab9e33f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:25 np0005629333 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000073.scope: Deactivated successfully.
Feb 25 07:46:25 np0005629333 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000073.scope: Consumed 13.752s CPU time.
Feb 25 07:46:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.158 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f816f4d7-96a5-4a74-9d00-45d3de0a9d7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:25 np0005629333 systemd-machined[210048]: Machine qemu-146-instance-00000073 terminated.
Feb 25 07:46:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.177 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d92bc3d1-83c5-401d-9bc7-11ee2d48252e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.189 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c3eb00ac-c01f-44f6-8c9b-6a6aa5ff2a68]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbb79e0fd-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4b:0b:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547905, 'reachable_time': 39147, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348217, 'error': None, 'target': 'ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.200 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9c2c24ff-03cd-4578-be6f-a0b6947ef142]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbb79e0fd-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547920, 'tstamp': 547920}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348218, 'error': None, 'target': 'ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbb79e0fd-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 547923, 'tstamp': 547923}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348218, 'error': None, 'target': 'ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.204 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb79e0fd-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.206 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.211 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.211 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbb79e0fd-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:46:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.211 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:46:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.211 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbb79e0fd-20, col_values=(('external_ids', {'iface-id': '091afd0d-cbaa-4d88-8f55-1965b8ffcb56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:46:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.212 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
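The DelPortCommand/AddPortCommand/DbSetCommand entries above are ovsdbapp OVS-DB operations re-homing the metadata tap: delete it from br-ex if present, add it to br-int, and point its external_ids:iface-id at the neutron port (two of the three caused no change because the state was already correct). A sketch of issuing the same commands; the endpoint is illustrative, the agent keeps a long-lived connection, and it ran each command as its own single-command transaction rather than batched as here:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapbb79e0fd-20', bridge='br-ex',
                             if_exists=True))
        txn.add(api.add_port('br-int', 'tapbb79e0fd-20', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapbb79e0fd-20',
            ('external_ids',
             {'iface-id': '091afd0d-cbaa-4d88-8f55-1965b8ffcb56'})))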
Feb 25 07:46:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.213 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 39f87175-7c4f-4092-bce6-29b413c731cd in datapath 6df13266-bcfe-4a5b-94c4-81b5f08a6c21 unbound from our chassis#033[00m
Feb 25 07:46:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.214 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6df13266-bcfe-4a5b-94c4-81b5f08a6c21#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.222 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.223 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[76e342ca-5ceb-4ca1-8e9f-19a7c8155638]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:25 np0005629333 NetworkManager[49836]: <info>  [1772023585.2379] manager: (tap39f87175-7c): new Tun device (/org/freedesktop/NetworkManager/Devices/493)
Feb 25 07:46:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.247 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[52fdd905-0560-47f1-a67c-36da88f4cf69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.252 244018 INFO nova.virt.libvirt.driver [-] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Instance destroyed successfully.#033[00m
Feb 25 07:46:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.252 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[44d64846-37a9-4f75-8327-517434d031b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.252 244018 DEBUG nova.objects.instance [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid 2b9ea98f-9b1a-495b-8669-e79da967b0ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.275 244018 DEBUG nova.virt.libvirt.vif [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:45:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1283323036',display_name='tempest-TestGettingAddress-server-1283323036',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1283323036',id=115,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9kunFNn82YP0ulUqe0tIhu80P4MbOBi6POOa2CvjUcw/O3cOMXLzAIoTeZySuOv8M/4O47QFj9wA4a/asArTJADOO8TDCsWVVXiUd+J4BzXpwIPt1WHbwTYgvUR5L7vw==',key_name='tempest-TestGettingAddress-931608106',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:45:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-bo0inwqe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:45:57Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=2b9ea98f-9b1a-495b-8669-e79da967b0ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "443e71ca-9c38-40e8-b799-1026ef47c35d", "address": "fa:16:3e:98:e2:13", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap443e71ca-9c", "ovs_interfaceid": "443e71ca-9c38-40e8-b799-1026ef47c35d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.276 244018 DEBUG nova.network.os_vif_util [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "443e71ca-9c38-40e8-b799-1026ef47c35d", "address": "fa:16:3e:98:e2:13", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap443e71ca-9c", "ovs_interfaceid": "443e71ca-9c38-40e8-b799-1026ef47c35d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.276 244018 DEBUG nova.network.os_vif_util [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:98:e2:13,bridge_name='br-int',has_traffic_filtering=True,id=443e71ca-9c38-40e8-b799-1026ef47c35d,network=Network(bb79e0fd-2a4d-4a70-9c80-4853297401ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap443e71ca-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.277 244018 DEBUG os_vif [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:98:e2:13,bridge_name='br-int',has_traffic_filtering=True,id=443e71ca-9c38-40e8-b799-1026ef47c35d,network=Network(bb79e0fd-2a4d-4a70-9c80-4853297401ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap443e71ca-9c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.278 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.278 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap443e71ca-9c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.280 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.282 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.283 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.283 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[839e73a6-0847-4d75-8ecd-0e8214da0061]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.285 244018 INFO os_vif [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:98:e2:13,bridge_name='br-int',has_traffic_filtering=True,id=443e71ca-9c38-40e8-b799-1026ef47c35d,network=Network(bb79e0fd-2a4d-4a70-9c80-4853297401ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap443e71ca-9c')#033[00m
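The unplug path above converts nova's VIF model into an os-vif VIFOpenVSwitch object and hands it to the os-vif library, whose 'ovs' plugin removes the tap from br-int through ovsdbapp. A sketch of driving the same call directly (field values copied from the log; this sets enough fields for illustration, not necessarily the full set nova populates):

    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()  # loads the 'ovs' plugin, among others

    my_vif = vif.VIFOpenVSwitch(
        id='443e71ca-9c38-40e8-b799-1026ef47c35d',
        address='fa:16:3e:98:e2:13',
        plugin='ovs',
        vif_name='tap443e71ca-9c',
        bridge_name='br-int',
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='443e71ca-9c38-40e8-b799-1026ef47c35d'))
    inst = instance_info.InstanceInfo(
        uuid='2b9ea98f-9b1a-495b-8669-e79da967b0ab',
        name='instance-00000073')

    os_vif.unplug(my_vif, inst)  # issues the DelPortCommand seen above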
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.286 244018 DEBUG nova.virt.libvirt.vif [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:45:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1283323036',display_name='tempest-TestGettingAddress-server-1283323036',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1283323036',id=115,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9kunFNn82YP0ulUqe0tIhu80P4MbOBi6POOa2CvjUcw/O3cOMXLzAIoTeZySuOv8M/4O47QFj9wA4a/asArTJADOO8TDCsWVVXiUd+J4BzXpwIPt1WHbwTYgvUR5L7vw==',key_name='tempest-TestGettingAddress-931608106',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:45:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-bo0inwqe',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:45:57Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=2b9ea98f-9b1a-495b-8669-e79da967b0ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "39f87175-7c4f-4092-bce6-29b413c731cd", "address": "fa:16:3e:b2:de:d3", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39f87175-7c", "ovs_interfaceid": "39f87175-7c4f-4092-bce6-29b413c731cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.286 244018 DEBUG nova.network.os_vif_util [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "39f87175-7c4f-4092-bce6-29b413c731cd", "address": "fa:16:3e:b2:de:d3", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39f87175-7c", "ovs_interfaceid": "39f87175-7c4f-4092-bce6-29b413c731cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.287 244018 DEBUG nova.network.os_vif_util [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:de:d3,bridge_name='br-int',has_traffic_filtering=True,id=39f87175-7c4f-4092-bce6-29b413c731cd,network=Network(6df13266-bcfe-4a5b-94c4-81b5f08a6c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39f87175-7c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.287 244018 DEBUG os_vif [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:de:d3,bridge_name='br-int',has_traffic_filtering=True,id=39f87175-7c4f-4092-bce6-29b413c731cd,network=Network(6df13266-bcfe-4a5b-94c4-81b5f08a6c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39f87175-7c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.288 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.289 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39f87175-7c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.290 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.292 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.292 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.294 244018 INFO os_vif [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:de:d3,bridge_name='br-int',has_traffic_filtering=True,id=39f87175-7c4f-4092-bce6-29b413c731cd,network=Network(6df13266-bcfe-4a5b-94c4-81b5f08a6c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39f87175-7c')#033[00m
Feb 25 07:46:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.299 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0aece9f6-32df-4949-b39b-c687c1a8d59c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6df13266-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:ad:86'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547995, 'reachable_time': 15036, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2768, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2768, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348247, 'error': None, 'target': 'ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.311 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[905a9fc9-10be-401b-944d-d40d4edc0d8e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6df13266-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548006, 'tstamp': 548006}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348262, 'error': None, 'target': 'ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.313 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6df13266-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.315 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.317 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6df13266-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:46:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.318 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:46:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.318 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6df13266-b0, col_values=(('external_ids', {'iface-id': '0b276f7a-90bb-4427-8f39-0e014732fd20'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:46:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:25.319 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.688 244018 INFO nova.virt.libvirt.driver [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Deleting instance files /var/lib/nova/instances/2b9ea98f-9b1a-495b-8669-e79da967b0ab_del#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.690 244018 INFO nova.virt.libvirt.driver [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Deletion of /var/lib/nova/instances/2b9ea98f-9b1a-495b-8669-e79da967b0ab_del complete#033[00m
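The two driver entries show the cleanup convention for instance files: the directory is first renamed with a _del suffix and then removed, so a crash mid-delete leaves an obviously dead directory rather than a half-deleted live one. A sketch of the pattern (hypothetical helper, not nova's code):

    import os
    import shutil

    def delete_instance_files(instances_path, uuid):
        src = os.path.join(instances_path, uuid)
        dst = src + '_del'          # move aside first ...
        if os.path.exists(src):
            os.rename(src, dst)
        shutil.rmtree(dst, ignore_errors=True)  # ... then remove

    delete_instance_files('/var/lib/nova/instances',
                          '2b9ea98f-9b1a-495b-8669-e79da967b0ab')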
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.767 244018 INFO nova.compute.manager [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.767 244018 DEBUG oslo.service.loopingcall [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.768 244018 DEBUG nova.compute.manager [-] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.768 244018 DEBUG nova.network.neutron [-] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
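The "Waiting for function ... to return" entry above is oslo.service's loopingcall harness: network deallocation is wrapped in a function that the loop re-invokes on an interval until it signals completion by raising LoopingCallDone. A minimal sketch of that retry shape (illustrative; nova's real retry policy differs):

    from oslo_service import loopingcall

    state = {'attempts': 0}

    def _deallocate_network_with_retries():
        state['attempts'] += 1
        try:
            pass  # call neutron deallocate_for_instance() here
        except Exception:
            if state['attempts'] < 3:
                return  # loop again after the interval
            raise
        raise loopingcall.LoopingCallDone(True)  # stop, return True

    timer = loopingcall.FixedIntervalLoopingCall(
        _deallocate_network_with_retries)
    result = timer.start(interval=2).wait()  # blocks, as "Waiting" logs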
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.840 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.840 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:46:25 np0005629333 nova_compute[244014]: 2026-02-25 12:46:25.841 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: dd8c9142-2607-4722-90eb-65233f258639] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 25 07:46:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 25 KiB/s wr, 9 op/s
Feb 25 07:46:26 np0005629333 nova_compute[244014]: 2026-02-25 12:46:26.742 244018 DEBUG nova.compute.manager [req-787c2626-a279-40fa-85b8-1253ee8a0e52 req-75385cf2-efbf-4a31-a139-591f9d7e0068 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received event network-vif-deleted-39f87175-7c4f-4092-bce6-29b413c731cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:46:26 np0005629333 nova_compute[244014]: 2026-02-25 12:46:26.742 244018 INFO nova.compute.manager [req-787c2626-a279-40fa-85b8-1253ee8a0e52 req-75385cf2-efbf-4a31-a139-591f9d7e0068 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Neutron deleted interface 39f87175-7c4f-4092-bce6-29b413c731cd; detaching it from the instance and deleting it from the info cache#033[00m
Feb 25 07:46:26 np0005629333 nova_compute[244014]: 2026-02-25 12:46:26.743 244018 DEBUG nova.network.neutron [req-787c2626-a279-40fa-85b8-1253ee8a0e52 req-75385cf2-efbf-4a31-a139-591f9d7e0068 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Updating instance_info_cache with network_info: [{"id": "443e71ca-9c38-40e8-b799-1026ef47c35d", "address": "fa:16:3e:98:e2:13", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap443e71ca-9c", "ovs_interfaceid": "443e71ca-9c38-40e8-b799-1026ef47c35d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:46:26 np0005629333 nova_compute[244014]: 2026-02-25 12:46:26.767 244018 DEBUG nova.compute.manager [req-787c2626-a279-40fa-85b8-1253ee8a0e52 req-75385cf2-efbf-4a31-a139-591f9d7e0068 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Detach interface failed, port_id=39f87175-7c4f-4092-bce6-29b413c731cd, reason: Instance 2b9ea98f-9b1a-495b-8669-e79da967b0ab could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 25 07:46:27 np0005629333 nova_compute[244014]: 2026-02-25 12:46:27.035 244018 DEBUG nova.compute.manager [req-db63ac8d-34a7-4130-94e3-3e88a50afdc8 req-bcddf3ad-be08-4e1d-80e1-f3680980b83f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received event network-vif-unplugged-443e71ca-9c38-40e8-b799-1026ef47c35d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:46:27 np0005629333 nova_compute[244014]: 2026-02-25 12:46:27.035 244018 DEBUG oslo_concurrency.lockutils [req-db63ac8d-34a7-4130-94e3-3e88a50afdc8 req-bcddf3ad-be08-4e1d-80e1-f3680980b83f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:46:27 np0005629333 nova_compute[244014]: 2026-02-25 12:46:27.037 244018 DEBUG oslo_concurrency.lockutils [req-db63ac8d-34a7-4130-94e3-3e88a50afdc8 req-bcddf3ad-be08-4e1d-80e1-f3680980b83f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:46:27 np0005629333 nova_compute[244014]: 2026-02-25 12:46:27.037 244018 DEBUG oslo_concurrency.lockutils [req-db63ac8d-34a7-4130-94e3-3e88a50afdc8 req-bcddf3ad-be08-4e1d-80e1-f3680980b83f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:46:27 np0005629333 nova_compute[244014]: 2026-02-25 12:46:27.038 244018 DEBUG nova.compute.manager [req-db63ac8d-34a7-4130-94e3-3e88a50afdc8 req-bcddf3ad-be08-4e1d-80e1-f3680980b83f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] No waiting events found dispatching network-vif-unplugged-443e71ca-9c38-40e8-b799-1026ef47c35d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:46:27 np0005629333 nova_compute[244014]: 2026-02-25 12:46:27.039 244018 DEBUG nova.compute.manager [req-db63ac8d-34a7-4130-94e3-3e88a50afdc8 req-bcddf3ad-be08-4e1d-80e1-f3680980b83f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received event network-vif-unplugged-443e71ca-9c38-40e8-b799-1026ef47c35d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:46:27 np0005629333 podman[348268]: 2026-02-25 12:46:27.741679348 +0000 UTC m=+0.074579647 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:46:27 np0005629333 nova_compute[244014]: 2026-02-25 12:46:27.798 244018 DEBUG nova.network.neutron [-] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:46:27 np0005629333 nova_compute[244014]: 2026-02-25 12:46:27.826 244018 INFO nova.compute.manager [-] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Took 2.06 seconds to deallocate network for instance.#033[00m
Feb 25 07:46:27 np0005629333 nova_compute[244014]: 2026-02-25 12:46:27.881 244018 DEBUG oslo_concurrency.lockutils [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:46:27 np0005629333 nova_compute[244014]: 2026-02-25 12:46:27.882 244018 DEBUG oslo_concurrency.lockutils [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:46:27 np0005629333 nova_compute[244014]: 2026-02-25 12:46:27.886 244018 DEBUG nova.network.neutron [req-86ec9e67-6abc-4139-afd8-6f0f436630b9 req-9c792013-47f4-4d76-80d3-a87a05dab795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Updated VIF entry in instance network info cache for port 443e71ca-9c38-40e8-b799-1026ef47c35d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:46:27 np0005629333 nova_compute[244014]: 2026-02-25 12:46:27.887 244018 DEBUG nova.network.neutron [req-86ec9e67-6abc-4139-afd8-6f0f436630b9 req-9c792013-47f4-4d76-80d3-a87a05dab795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Updating instance_info_cache with network_info: [{"id": "443e71ca-9c38-40e8-b799-1026ef47c35d", "address": "fa:16:3e:98:e2:13", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap443e71ca-9c", "ovs_interfaceid": "443e71ca-9c38-40e8-b799-1026ef47c35d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "39f87175-7c4f-4092-bce6-29b413c731cd", "address": "fa:16:3e:b2:de:d3", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb2:ded3", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39f87175-7c", "ovs_interfaceid": "39f87175-7c4f-4092-bce6-29b413c731cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:46:27 np0005629333 nova_compute[244014]: 2026-02-25 12:46:27.917 244018 DEBUG oslo_concurrency.lockutils [req-86ec9e67-6abc-4139-afd8-6f0f436630b9 req-9c792013-47f4-4d76-80d3-a87a05dab795 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-2b9ea98f-9b1a-495b-8669-e79da967b0ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:46:28 np0005629333 nova_compute[244014]: 2026-02-25 12:46:28.056 244018 DEBUG oslo_concurrency.processutils [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:46:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 40 KiB/s wr, 105 op/s
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.529060) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023588529089, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 410, "num_deletes": 256, "total_data_size": 298596, "memory_usage": 307752, "flush_reason": "Manual Compaction"}
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023588532176, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 296163, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41939, "largest_seqno": 42348, "table_properties": {"data_size": 293729, "index_size": 534, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5713, "raw_average_key_size": 17, "raw_value_size": 288948, "raw_average_value_size": 905, "num_data_blocks": 24, "num_entries": 319, "num_filter_entries": 319, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772023568, "oldest_key_time": 1772023568, "file_creation_time": 1772023588, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 3143 microseconds, and 1017 cpu microseconds.
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.532204) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 296163 bytes OK
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.532215) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.533644) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.533655) EVENT_LOG_v1 {"time_micros": 1772023588533652, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.533667) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 295998, prev total WAL file size 295998, number of live WAL files 2.
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.533978) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353132' seq:72057594037927935, type:22 .. '6C6F676D0031373634' seq:0, type:0; will stop at (end)
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(289KB)], [92(10MB)]
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023588534039, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 11575309, "oldest_snapshot_seqno": -1}
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1565904403' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:46:28 np0005629333 nova_compute[244014]: 2026-02-25 12:46:28.591 244018 DEBUG oslo_concurrency.processutils [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:46:28 np0005629333 nova_compute[244014]: 2026-02-25 12:46:28.596 244018 DEBUG nova.compute.provider_tree [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 6629 keys, 11459092 bytes, temperature: kUnknown
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023588602344, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 11459092, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11411215, "index_size": 30229, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16581, "raw_key_size": 170463, "raw_average_key_size": 25, "raw_value_size": 11289081, "raw_average_value_size": 1702, "num_data_blocks": 1202, "num_entries": 6629, "num_filter_entries": 6629, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772023588, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.602611) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 11459092 bytes
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.604403) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.3 rd, 167.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.8 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(77.8) write-amplify(38.7) OK, records in: 7148, records dropped: 519 output_compression: NoCompression
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.604431) EVENT_LOG_v1 {"time_micros": 1772023588604419, "job": 54, "event": "compaction_finished", "compaction_time_micros": 68382, "compaction_time_cpu_micros": 29094, "output_level": 6, "num_output_files": 1, "total_output_size": 11459092, "num_input_records": 7148, "num_output_records": 6629, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023588604595, "job": 54, "event": "table_file_deletion", "file_number": 94}
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023588606055, "job": 54, "event": "table_file_deletion", "file_number": 92}
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.533900) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.606141) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.606146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.606148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.606149) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:46:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:28.606151) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:46:28 np0005629333 nova_compute[244014]: 2026-02-25 12:46:28.613 244018 DEBUG nova.scheduler.client.report [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:46:28 np0005629333 nova_compute[244014]: 2026-02-25 12:46:28.635 244018 DEBUG oslo_concurrency.lockutils [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:46:28 np0005629333 nova_compute[244014]: 2026-02-25 12:46:28.663 244018 INFO nova.scheduler.client.report [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance 2b9ea98f-9b1a-495b-8669-e79da967b0ab#033[00m
Feb 25 07:46:28 np0005629333 podman[348309]: 2026-02-25 12:46:28.733316678 +0000 UTC m=+0.085908077 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 07:46:28 np0005629333 nova_compute[244014]: 2026-02-25 12:46:28.742 244018 DEBUG oslo_concurrency.lockutils [None req-fbca7b1a-6b0c-411c-8165-a69aa9fadebe f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:46:28 np0005629333 nova_compute[244014]: 2026-02-25 12:46:28.844 244018 DEBUG nova.compute.manager [req-f034d967-8dd2-4ba0-b5dd-50d9f48d0b6d req-0c3ac0fd-1813-4db2-b771-58e05cc692bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received event network-vif-deleted-443e71ca-9c38-40e8-b799-1026ef47c35d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:46:28 np0005629333 nova_compute[244014]: 2026-02-25 12:46:28.845 244018 INFO nova.compute.manager [req-f034d967-8dd2-4ba0-b5dd-50d9f48d0b6d req-0c3ac0fd-1813-4db2-b771-58e05cc692bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Neutron deleted interface 443e71ca-9c38-40e8-b799-1026ef47c35d; detaching it from the instance and deleting it from the info cache#033[00m
Feb 25 07:46:28 np0005629333 nova_compute[244014]: 2026-02-25 12:46:28.846 244018 DEBUG nova.network.neutron [req-f034d967-8dd2-4ba0-b5dd-50d9f48d0b6d req-0c3ac0fd-1813-4db2-b771-58e05cc692bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Feb 25 07:46:28 np0005629333 nova_compute[244014]: 2026-02-25 12:46:28.848 244018 DEBUG nova.compute.manager [req-f034d967-8dd2-4ba0-b5dd-50d9f48d0b6d req-0c3ac0fd-1813-4db2-b771-58e05cc692bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Detach interface failed, port_id=443e71ca-9c38-40e8-b799-1026ef47c35d, reason: Instance 2b9ea98f-9b1a-495b-8669-e79da967b0ab could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 25 07:46:29 np0005629333 nova_compute[244014]: 2026-02-25 12:46:29.144 244018 DEBUG nova.compute.manager [req-2533f2e8-6779-41f3-bec4-5caa24621c1b req-b66baff7-0919-4a63-be35-98766284f3e0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received event network-vif-plugged-443e71ca-9c38-40e8-b799-1026ef47c35d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:46:29 np0005629333 nova_compute[244014]: 2026-02-25 12:46:29.145 244018 DEBUG oslo_concurrency.lockutils [req-2533f2e8-6779-41f3-bec4-5caa24621c1b req-b66baff7-0919-4a63-be35-98766284f3e0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:46:29 np0005629333 nova_compute[244014]: 2026-02-25 12:46:29.145 244018 DEBUG oslo_concurrency.lockutils [req-2533f2e8-6779-41f3-bec4-5caa24621c1b req-b66baff7-0919-4a63-be35-98766284f3e0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:46:29 np0005629333 nova_compute[244014]: 2026-02-25 12:46:29.146 244018 DEBUG oslo_concurrency.lockutils [req-2533f2e8-6779-41f3-bec4-5caa24621c1b req-b66baff7-0919-4a63-be35-98766284f3e0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "2b9ea98f-9b1a-495b-8669-e79da967b0ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:46:29 np0005629333 nova_compute[244014]: 2026-02-25 12:46:29.146 244018 DEBUG nova.compute.manager [req-2533f2e8-6779-41f3-bec4-5caa24621c1b req-b66baff7-0919-4a63-be35-98766284f3e0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] No waiting events found dispatching network-vif-plugged-443e71ca-9c38-40e8-b799-1026ef47c35d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:46:29 np0005629333 nova_compute[244014]: 2026-02-25 12:46:29.146 244018 WARNING nova.compute.manager [req-2533f2e8-6779-41f3-bec4-5caa24621c1b req-b66baff7-0919-4a63-be35-98766284f3e0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Received unexpected event network-vif-plugged-443e71ca-9c38-40e8-b799-1026ef47c35d for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.167 244018 DEBUG oslo_concurrency.lockutils [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "dd8c9142-2607-4722-90eb-65233f258639" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.168 244018 DEBUG oslo_concurrency.lockutils [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.169 244018 DEBUG oslo_concurrency.lockutils [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "dd8c9142-2607-4722-90eb-65233f258639-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.169 244018 DEBUG oslo_concurrency.lockutils [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.170 244018 DEBUG oslo_concurrency.lockutils [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.171 244018 INFO nova.compute.manager [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Terminating instance#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.173 244018 DEBUG nova.compute.manager [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:46:30 np0005629333 kernel: tape50c9f03-a8 (unregistering): left promiscuous mode
Feb 25 07:46:30 np0005629333 NetworkManager[49836]: <info>  [1772023590.2158] device (tape50c9f03-a8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.221 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:30 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:30Z|01186|binding|INFO|Releasing lport e50c9f03-a8a5-48d1-a34b-4a8fd638d5df from this chassis (sb_readonly=0)
Feb 25 07:46:30 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:30Z|01187|binding|INFO|Setting lport e50c9f03-a8a5-48d1-a34b-4a8fd638d5df down in Southbound
Feb 25 07:46:30 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:30Z|01188|binding|INFO|Removing iface tape50c9f03-a8 ovn-installed in OVS
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.225 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.229 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.232 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:4a:25 10.100.0.14'], port_security=['fa:16:3e:47:4a:25 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'dd8c9142-2607-4722-90eb-65233f258639', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '09e05c4f-db6b-40c9-84a5-79dc857b9d0a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1aa114a0-206b-4aa1-b770-18f248344fa4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=e50c9f03-a8a5-48d1-a34b-4a8fd638d5df) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:46:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.233 157129 INFO neutron.agent.ovn.metadata.agent [-] Port e50c9f03-a8a5-48d1-a34b-4a8fd638d5df in datapath bb79e0fd-2a4d-4a70-9c80-4853297401ff unbound from our chassis#033[00m
Feb 25 07:46:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.235 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bb79e0fd-2a4d-4a70-9c80-4853297401ff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:46:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.236 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ee8fece5-a5ce-4c7c-a7fc-384d38d02b72]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:30 np0005629333 kernel: tap68ffeedb-a9 (unregistering): left promiscuous mode
Feb 25 07:46:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.237 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff namespace which is not needed anymore#033[00m
Feb 25 07:46:30 np0005629333 NetworkManager[49836]: <info>  [1772023590.2411] device (tap68ffeedb-a9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:46:30 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:30Z|01189|binding|INFO|Releasing lport 68ffeedb-a9df-4fa8-9ae1-2535a1f1799d from this chassis (sb_readonly=0)
Feb 25 07:46:30 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:30Z|01190|binding|INFO|Setting lport 68ffeedb-a9df-4fa8-9ae1-2535a1f1799d down in Southbound
Feb 25 07:46:30 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:30Z|01191|binding|INFO|Removing iface tap68ffeedb-a9 ovn-installed in OVS
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.259 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.263 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.267 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:df:22 2001:db8:0:1:f816:3eff:fe60:df22 2001:db8::f816:3eff:fe60:df22'], port_security=['fa:16:3e:60:df:22 2001:db8:0:1:f816:3eff:fe60:df22 2001:db8::f816:3eff:fe60:df22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe60:df22/64 2001:db8::f816:3eff:fe60:df22/64', 'neutron:device_id': 'dd8c9142-2607-4722-90eb-65233f258639', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '09e05c4f-db6b-40c9-84a5-79dc857b9d0a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5bfff87-5507-4fe1-aed9-1b014e8e7384, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=68ffeedb-a9df-4fa8-9ae1-2535a1f1799d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:46:30 np0005629333 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000072.scope: Deactivated successfully.
Feb 25 07:46:30 np0005629333 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000072.scope: Consumed 15.517s CPU time.
Feb 25 07:46:30 np0005629333 systemd-machined[210048]: Machine qemu-143-instance-00000072 terminated.
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.290 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:30 np0005629333 neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff[345441]: [NOTICE]   (345461) : haproxy version is 2.8.14-c23fe91
Feb 25 07:46:30 np0005629333 neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff[345441]: [NOTICE]   (345461) : path to executable is /usr/sbin/haproxy
Feb 25 07:46:30 np0005629333 neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff[345441]: [WARNING]  (345461) : Exiting Master process...
Feb 25 07:46:30 np0005629333 neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff[345441]: [ALERT]    (345461) : Current worker (345466) exited with code 143 (Terminated)
Feb 25 07:46:30 np0005629333 neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff[345441]: [WARNING]  (345461) : All workers exited. Exiting... (0)
Feb 25 07:46:30 np0005629333 systemd[1]: libpod-d9c72c0a2b0012d1a231683380c38f5ece558e7f5e61fce7df9eb24b49e1b633.scope: Deactivated successfully.
Feb 25 07:46:30 np0005629333 podman[348365]: 2026-02-25 12:46:30.361837512 +0000 UTC m=+0.043978053 container died d9c72c0a2b0012d1a231683380c38f5ece558e7f5e61fce7df9eb24b49e1b633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:46:30 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d9c72c0a2b0012d1a231683380c38f5ece558e7f5e61fce7df9eb24b49e1b633-userdata-shm.mount: Deactivated successfully.
Feb 25 07:46:30 np0005629333 systemd[1]: var-lib-containers-storage-overlay-86c13903373d387e7781319c35f624b48a598e4d3ad7505fd786416ab8b5fa73-merged.mount: Deactivated successfully.
Feb 25 07:46:30 np0005629333 podman[348365]: 2026-02-25 12:46:30.403115968 +0000 UTC m=+0.085256519 container cleanup d9c72c0a2b0012d1a231683380c38f5ece558e7f5e61fce7df9eb24b49e1b633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:46:30 np0005629333 NetworkManager[49836]: <info>  [1772023590.4060] manager: (tap68ffeedb-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/494)
Feb 25 07:46:30 np0005629333 systemd[1]: libpod-conmon-d9c72c0a2b0012d1a231683380c38f5ece558e7f5e61fce7df9eb24b49e1b633.scope: Deactivated successfully.
Feb 25 07:46:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 104 op/s
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.422 244018 INFO nova.virt.libvirt.driver [-] [instance: dd8c9142-2607-4722-90eb-65233f258639] Instance destroyed successfully.#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.423 244018 DEBUG nova.objects.instance [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid dd8c9142-2607-4722-90eb-65233f258639 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.451 244018 DEBUG nova.virt.libvirt.vif [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:44:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1795695092',display_name='tempest-TestGettingAddress-server-1795695092',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1795695092',id=114,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9kunFNn82YP0ulUqe0tIhu80P4MbOBi6POOa2CvjUcw/O3cOMXLzAIoTeZySuOv8M/4O47QFj9wA4a/asArTJADOO8TDCsWVVXiUd+J4BzXpwIPt1WHbwTYgvUR5L7vw==',key_name='tempest-TestGettingAddress-931608106',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:45:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-e52fw19u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:45:12Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=dd8c9142-2607-4722-90eb-65233f258639,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "address": "fa:16:3e:47:4a:25", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50c9f03-a8", "ovs_interfaceid": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.452 244018 DEBUG nova.network.os_vif_util [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "address": "fa:16:3e:47:4a:25", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50c9f03-a8", "ovs_interfaceid": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.452 244018 DEBUG nova.network.os_vif_util [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:47:4a:25,bridge_name='br-int',has_traffic_filtering=True,id=e50c9f03-a8a5-48d1-a34b-4a8fd638d5df,network=Network(bb79e0fd-2a4d-4a70-9c80-4853297401ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50c9f03-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.453 244018 DEBUG os_vif [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:4a:25,bridge_name='br-int',has_traffic_filtering=True,id=e50c9f03-a8a5-48d1-a34b-4a8fd638d5df,network=Network(bb79e0fd-2a4d-4a70-9c80-4853297401ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50c9f03-a8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.455 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.455 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape50c9f03-a8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.456 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.458 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.460 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.462 244018 INFO os_vif [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:4a:25,bridge_name='br-int',has_traffic_filtering=True,id=e50c9f03-a8a5-48d1-a34b-4a8fd638d5df,network=Network(bb79e0fd-2a4d-4a70-9c80-4853297401ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape50c9f03-a8')#033[00m
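
The lines above are the standard os-vif unplug flow: Nova converts its own VIF dict into an os-vif VIFOpenVSwitch object (nova_to_osvif_vif) and hands it to the os_vif public API. A minimal standalone sketch of that call, using only field values that appear in the log; running it outside nova-compute is the assumption here:

    # Sketch of the os-vif unplug call behind the "Unplugging vif ..." /
    # "Successfully unplugged vif ..." lines. Field values come from the log;
    # running this standalone (outside nova-compute) is the assumption.
    import os_vif
    from os_vif.objects import instance_info, vif as vif_obj

    os_vif.initialize()  # load the os-vif plugins, including 'ovs'

    my_vif = vif_obj.VIFOpenVSwitch(
        id='e50c9f03-a8a5-48d1-a34b-4a8fd638d5df',
        address='fa:16:3e:47:4a:25',
        bridge_name='br-int',
        vif_name='tape50c9f03-a8',
        plugin='ovs',
    )
    inst = instance_info.InstanceInfo(
        uuid='dd8c9142-2607-4722-90eb-65233f258639',
        name='tempest-TestGettingAddress-server-1795695092',
    )

    # Internally this runs the DelPortCommand transaction seen at 12:46:30.455.
    os_vif.unplug(my_vif, inst)
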
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.463 244018 DEBUG nova.virt.libvirt.vif [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:44:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1795695092',display_name='tempest-TestGettingAddress-server-1795695092',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1795695092',id=114,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD9kunFNn82YP0ulUqe0tIhu80P4MbOBi6POOa2CvjUcw/O3cOMXLzAIoTeZySuOv8M/4O47QFj9wA4a/asArTJADOO8TDCsWVVXiUd+J4BzXpwIPt1WHbwTYgvUR5L7vw==',key_name='tempest-TestGettingAddress-931608106',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:45:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-e52fw19u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:45:12Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=dd8c9142-2607-4722-90eb-65233f258639,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "address": "fa:16:3e:60:df:22", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68ffeedb-a9", "ovs_interfaceid": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.463 244018 DEBUG nova.network.os_vif_util [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "address": "fa:16:3e:60:df:22", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68ffeedb-a9", "ovs_interfaceid": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.464 244018 DEBUG nova.network.os_vif_util [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:df:22,bridge_name='br-int',has_traffic_filtering=True,id=68ffeedb-a9df-4fa8-9ae1-2535a1f1799d,network=Network(6df13266-bcfe-4a5b-94c4-81b5f08a6c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68ffeedb-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.465 244018 DEBUG os_vif [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:df:22,bridge_name='br-int',has_traffic_filtering=True,id=68ffeedb-a9df-4fa8-9ae1-2535a1f1799d,network=Network(6df13266-bcfe-4a5b-94c4-81b5f08a6c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68ffeedb-a9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:46:30 np0005629333 podman[348409]: 2026-02-25 12:46:30.465487039 +0000 UTC m=+0.041215145 container remove d9c72c0a2b0012d1a231683380c38f5ece558e7f5e61fce7df9eb24b49e1b633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true)
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.466 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.466 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68ffeedb-a9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.467 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.469 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.471 244018 INFO os_vif [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:df:22,bridge_name='br-int',has_traffic_filtering=True,id=68ffeedb-a9df-4fa8-9ae1-2535a1f1799d,network=Network(6df13266-bcfe-4a5b-94c4-81b5f08a6c21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68ffeedb-a9')#033[00m
Feb 25 07:46:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.472 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fca30e9b-fd80-4230-b2ea-cecd574d51de]: (4, ('Wed Feb 25 12:46:30 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff (d9c72c0a2b0012d1a231683380c38f5ece558e7f5e61fce7df9eb24b49e1b633)\nd9c72c0a2b0012d1a231683380c38f5ece558e7f5e61fce7df9eb24b49e1b633\nWed Feb 25 12:46:30 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff (d9c72c0a2b0012d1a231683380c38f5ece558e7f5e61fce7df9eb24b49e1b633)\nd9c72c0a2b0012d1a231683380c38f5ece558e7f5e61fce7df9eb24b49e1b633\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.474 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a8348202-7833-4155-8b7c-913bf16fd034]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.475 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb79e0fd-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
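
Each DelPortCommand line is an ovsdbapp transaction against the local Open_vSwitch database: nova-compute removes the instance tap ports, and here the metadata agent removes its own tap. A minimal sketch of the same transaction via ovsdbapp's Open_vSwitch schema API; the ovsdb-server connection string is an assumption, the port and bridge names come from the log:

    # Sketch of the DelPortCommand transactions logged above.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    conn_str = 'unix:/run/openvswitch/db.sock'  # assumed local socket path
    idl = connection.OvsdbIdl.from_server(conn_str, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Logged as: Running txn n=1 command(idx=0): DelPortCommand(
    #     _result=None, port=tape50c9f03-a8, bridge=br-int, if_exists=True)
    api.del_port('tape50c9f03-a8', bridge='br-int', if_exists=True).execute(
        check_error=True)
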
Feb 25 07:46:30 np0005629333 kernel: tapbb79e0fd-20: left promiscuous mode
Feb 25 07:46:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.482 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[42cbb241-83e0-4280-b513-f4e7fafd789e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.497 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.497 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9f537ef4-fecb-47e8-a8bc-b3798b94eb47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.499 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fabeb5d9-4829-4570-bca0-bb3218095880]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.518 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[85169c15-b7ad-4a37-9782-1b79f6d1b95b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547872, 'reachable_time': 20125, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348445, 'error': None, 'target': 'ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:30 np0005629333 systemd[1]: run-netns-ovnmeta\x2dbb79e0fd\x2d2a4d\x2d4a70\x2d9c80\x2d4853297401ff.mount: Deactivated successfully.
Feb 25 07:46:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.526 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:46:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.526 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[0ea0e68a-c388-49e7-aa60-9e2cadddb75d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
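
The remove_netns call and the surrounding "privsep: reply[...]" lines are oslo.privsep round-trips: the unprivileged agent invokes a decorated function, the privileged daemon executes it and sends the result back. A rough sketch of the pattern; the PrivContext settings here are assumptions (neutron's real context lives in neutron.privileged), and the namespace removal uses pyroute2:

    # Sketch of a privsep-wrapped netns removal like the one logged above.
    import errno

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context
    from pyroute2 import netns

    default = priv_context.PrivContext(
        __name__,
        cfg_section='privsep',           # assumed config section
        pypath=__name__ + '.default',
        capabilities=[caps.CAP_SYS_ADMIN, caps.CAP_NET_ADMIN],
    )

    @default.entrypoint
    def remove_netns(name):
        """Delete a named network namespace, tolerating 'already gone'."""
        try:
            netns.remove(name)
        except OSError as e:
            if e.errno != errno.ENOENT:
                raise

    # The call crosses into the privsep daemon; the result comes back as one
    # of the "privsep: reply[...]" lines seen in this log.
    # remove_netns('ovnmeta-bb79e0fd-2a4d-4a70-9c80-4853297401ff')
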
Feb 25 07:46:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.528 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 68ffeedb-a9df-4fa8-9ae1-2535a1f1799d in datapath 6df13266-bcfe-4a5b-94c4-81b5f08a6c21 unbound from our chassis#033[00m
Feb 25 07:46:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.530 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6df13266-bcfe-4a5b-94c4-81b5f08a6c21, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:46:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.532 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6deeeeaf-68a8-4533-ae7e-e3672d1eecf4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.533 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21 namespace which is not needed anymore#033[00m
Feb 25 07:46:30 np0005629333 neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21[345726]: [NOTICE]   (345748) : haproxy version is 2.8.14-c23fe91
Feb 25 07:46:30 np0005629333 neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21[345726]: [NOTICE]   (345748) : path to executable is /usr/sbin/haproxy
Feb 25 07:46:30 np0005629333 neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21[345726]: [WARNING]  (345748) : Exiting Master process...
Feb 25 07:46:30 np0005629333 neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21[345726]: [WARNING]  (345748) : Exiting Master process...
Feb 25 07:46:30 np0005629333 neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21[345726]: [ALERT]    (345748) : Current worker (345750) exited with code 143 (Terminated)
Feb 25 07:46:30 np0005629333 neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21[345726]: [WARNING]  (345748) : All workers exited. Exiting... (0)
Feb 25 07:46:30 np0005629333 systemd[1]: libpod-2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989.scope: Deactivated successfully.
Feb 25 07:46:30 np0005629333 conmon[345726]: conmon 2258344ca27eda115bfa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989.scope/container/memory.events
Feb 25 07:46:30 np0005629333 podman[348466]: 2026-02-25 12:46:30.706099733 +0000 UTC m=+0.077362505 container died 2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 07:46:30 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989-userdata-shm.mount: Deactivated successfully.
Feb 25 07:46:30 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d3db6813ebd7b49f3ead32cb60564520a6c07da0da03dd8df369d119009fd878-merged.mount: Deactivated successfully.
Feb 25 07:46:30 np0005629333 podman[348466]: 2026-02-25 12:46:30.745414613 +0000 UTC m=+0.116677375 container cleanup 2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.757 244018 INFO nova.virt.libvirt.driver [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Deleting instance files /var/lib/nova/instances/dd8c9142-2607-4722-90eb-65233f258639_del#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.758 244018 INFO nova.virt.libvirt.driver [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Deletion of /var/lib/nova/instances/dd8c9142-2607-4722-90eb-65233f258639_del complete#033[00m
Feb 25 07:46:30 np0005629333 systemd[1]: libpod-conmon-2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989.scope: Deactivated successfully.
Feb 25 07:46:30 np0005629333 podman[348496]: 2026-02-25 12:46:30.806287272 +0000 UTC m=+0.043481509 container remove 2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 25 07:46:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.810 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cb83b63a-ac03-433e-b39b-c58f8b913134]: (4, ('Wed Feb 25 12:46:30 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21 (2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989)\n2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989\nWed Feb 25 12:46:30 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21 (2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989)\n2258344ca27eda115bfae6739f31cb76dbd827b96bc2c197ec7a6c0c2acbc989\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.812 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[22c0b904-27ae-417f-9f37-dca45f5c95df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.813 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6df13266-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.815 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:30 np0005629333 kernel: tap6df13266-b0: left promiscuous mode
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.821 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.826 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a88f42e7-6c21-4164-abba-0c590355fbf6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.840 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5e941538-14f0-4875-8792-41b11d806e4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.841 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f93bf1a3-8d66-41b7-b923-414317024b9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.851 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[080bfa0e-121e-49a1-ad38-2140b97e7281]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 547987, 'reachable_time': 20751, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348511, 'error': None, 'target': 'ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.858 244018 INFO nova.compute.manager [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.858 244018 DEBUG oslo.service.loopingcall [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.859 244018 DEBUG nova.compute.manager [-] [instance: dd8c9142-2607-4722-90eb-65233f258639] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.859 244018 DEBUG nova.network.neutron [-] [instance: dd8c9142-2607-4722-90eb-65233f258639] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:46:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.862 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6df13266-bcfe-4a5b-94c4-81b5f08a6c21 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:46:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:30.862 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[2e43cc41-a231-4bec-815b-b7943eb75610]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.930 244018 DEBUG nova.compute.manager [req-29e3b37f-2334-4490-bb45-ff2bc16d291c req-021620c6-2604-4020-8082-75d046a54a00 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received event network-changed-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.931 244018 DEBUG nova.compute.manager [req-29e3b37f-2334-4490-bb45-ff2bc16d291c req-021620c6-2604-4020-8082-75d046a54a00 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Refreshing instance network info cache due to event network-changed-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:46:30 np0005629333 nova_compute[244014]: 2026-02-25 12:46:30.931 244018 DEBUG oslo_concurrency.lockutils [req-29e3b37f-2334-4490-bb45-ff2bc16d291c req-021620c6-2604-4020-8082-75d046a54a00 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:46:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:46:30
Feb 25 07:46:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:46:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:46:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['images', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'backups', '.mgr', 'volumes']
Feb 25 07:46:30 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:46:31 np0005629333 nova_compute[244014]: 2026-02-25 12:46:31.240 244018 DEBUG nova.compute.manager [req-88f46c2d-5098-4a9e-aefb-53ced0f81bc3 req-0d8789ea-b1c0-47eb-b4c0-e42afd9fbd29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received event network-vif-unplugged-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:46:31 np0005629333 nova_compute[244014]: 2026-02-25 12:46:31.242 244018 DEBUG oslo_concurrency.lockutils [req-88f46c2d-5098-4a9e-aefb-53ced0f81bc3 req-0d8789ea-b1c0-47eb-b4c0-e42afd9fbd29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dd8c9142-2607-4722-90eb-65233f258639-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:46:31 np0005629333 nova_compute[244014]: 2026-02-25 12:46:31.242 244018 DEBUG oslo_concurrency.lockutils [req-88f46c2d-5098-4a9e-aefb-53ced0f81bc3 req-0d8789ea-b1c0-47eb-b4c0-e42afd9fbd29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:46:31 np0005629333 nova_compute[244014]: 2026-02-25 12:46:31.243 244018 DEBUG oslo_concurrency.lockutils [req-88f46c2d-5098-4a9e-aefb-53ced0f81bc3 req-0d8789ea-b1c0-47eb-b4c0-e42afd9fbd29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
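
The Acquiring / acquired / released triad above is oslo.concurrency's lockutils decorator at work: Nova serializes external event handling under a per-instance "<uuid>-events" lock so concurrent Neutron notifications cannot race. A minimal sketch of the same pattern; the lock name comes from the log, the guarded body is illustrative:

    # Sketch of the lock pattern behind the "-events" lock lines above.
    from oslo_concurrency import lockutils

    # Nova builds its synchronized decorator with a 'nova-' prefix.
    synchronized = lockutils.synchronized_with_prefix('nova-')

    @synchronized('dd8c9142-2607-4722-90eb-65233f258639-events')
    def _pop_event():
        # pop the waiter for network-vif-unplugged-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df
        return None

    _pop_event()  # logs acquire/release with wait and hold times at DEBUG
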
Feb 25 07:46:31 np0005629333 nova_compute[244014]: 2026-02-25 12:46:31.243 244018 DEBUG nova.compute.manager [req-88f46c2d-5098-4a9e-aefb-53ced0f81bc3 req-0d8789ea-b1c0-47eb-b4c0-e42afd9fbd29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] No waiting events found dispatching network-vif-unplugged-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:46:31 np0005629333 nova_compute[244014]: 2026-02-25 12:46:31.244 244018 DEBUG nova.compute.manager [req-88f46c2d-5098-4a9e-aefb-53ced0f81bc3 req-0d8789ea-b1c0-47eb-b4c0-e42afd9fbd29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received event network-vif-unplugged-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:46:31 np0005629333 nova_compute[244014]: 2026-02-25 12:46:31.383 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: dd8c9142-2607-4722-90eb-65233f258639] Updating instance_info_cache with network_info: [{"id": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "address": "fa:16:3e:47:4a:25", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50c9f03-a8", "ovs_interfaceid": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "address": "fa:16:3e:60:df:22", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68ffeedb-a9", "ovs_interfaceid": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:46:31 np0005629333 systemd[1]: run-netns-ovnmeta\x2d6df13266\x2dbcfe\x2d4a5b\x2d94c4\x2d81b5f08a6c21.mount: Deactivated successfully.
Feb 25 07:46:31 np0005629333 nova_compute[244014]: 2026-02-25 12:46:31.409 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:46:31 np0005629333 nova_compute[244014]: 2026-02-25 12:46:31.410 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: dd8c9142-2607-4722-90eb-65233f258639] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 25 07:46:31 np0005629333 nova_compute[244014]: 2026-02-25 12:46:31.411 244018 DEBUG oslo_concurrency.lockutils [req-29e3b37f-2334-4490-bb45-ff2bc16d291c req-021620c6-2604-4020-8082-75d046a54a00 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:46:31 np0005629333 nova_compute[244014]: 2026-02-25 12:46:31.412 244018 DEBUG nova.network.neutron [req-29e3b37f-2334-4490-bb45-ff2bc16d291c req-021620c6-2604-4020-8082-75d046a54a00 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Refreshing network info cache for port e50c9f03-a8a5-48d1-a34b-4a8fd638d5df _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:46:31 np0005629333 nova_compute[244014]: 2026-02-25 12:46:31.415 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:46:31 np0005629333 nova_compute[244014]: 2026-02-25 12:46:31.416 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:46:31 np0005629333 nova_compute[244014]: 2026-02-25 12:46:31.416 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:46:31 np0005629333 nova_compute[244014]: 2026-02-25 12:46:31.418 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:46:31 np0005629333 nova_compute[244014]: 2026-02-25 12:46:31.419 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:46:31 np0005629333 nova_compute[244014]: 2026-02-25 12:46:31.443 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:46:31 np0005629333 nova_compute[244014]: 2026-02-25 12:46:31.444 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:46:31 np0005629333 nova_compute[244014]: 2026-02-25 12:46:31.444 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:46:31 np0005629333 nova_compute[244014]: 2026-02-25 12:46:31.445 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:46:31 np0005629333 nova_compute[244014]: 2026-02-25 12:46:31.445 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:46:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:46:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:46:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:46:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:46:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:46:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:46:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:46:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:46:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:46:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:46:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:46:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:46:31 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:46:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:46:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:46:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:46:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:46:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3019240042' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:46:32 np0005629333 nova_compute[244014]: 2026-02-25 12:46:32.023 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
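
The Running cmd / CMD ... returned: 0 pair is oslo.concurrency's processutils.execute wrapper, which the resource tracker uses here to poll Ceph capacity. A standalone sketch of the same probe; the command line is copied from the log, and the parsed fields follow the standard `ceph df --format=json` schema:

    # Sketch of the "ceph df" capacity probe logged above.
    import json

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

    stats = json.loads(out)
    total = stats['stats']['total_bytes']
    avail = stats['stats']['total_avail_bytes']
    print(f'{avail / total:.1%} free of {total / 2**30:.0f} GiB')
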
Feb 25 07:46:32 np0005629333 nova_compute[244014]: 2026-02-25 12:46:32.100 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:46:32 np0005629333 nova_compute[244014]: 2026-02-25 12:46:32.101 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:46:32 np0005629333 nova_compute[244014]: 2026-02-25 12:46:32.322 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:46:32 np0005629333 nova_compute[244014]: 2026-02-25 12:46:32.327 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3402MB free_disk=59.920832998119295GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 07:46:32 np0005629333 nova_compute[244014]: 2026-02-25 12:46:32.327 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:46:32 np0005629333 nova_compute[244014]: 2026-02-25 12:46:32.328 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:46:32 np0005629333 nova_compute[244014]: 2026-02-25 12:46:32.398 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance dd8c9142-2607-4722-90eb-65233f258639 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:46:32 np0005629333 nova_compute[244014]: 2026-02-25 12:46:32.400 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 709a8b15-83eb-45f4-b681-c150ad270e01 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:46:32 np0005629333 nova_compute[244014]: 2026-02-25 12:46:32.400 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 07:46:32 np0005629333 nova_compute[244014]: 2026-02-25 12:46:32.401 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 07:46:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 231 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 29 KiB/s wr, 122 op/s
Feb 25 07:46:32 np0005629333 nova_compute[244014]: 2026-02-25 12:46:32.468 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:46:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:46:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/733207259' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:46:32 np0005629333 nova_compute[244014]: 2026-02-25 12:46:32.990 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:46:32 np0005629333 nova_compute[244014]: 2026-02-25 12:46:32.996 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:46:33 np0005629333 nova_compute[244014]: 2026-02-25 12:46:33.015 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:46:33 np0005629333 nova_compute[244014]: 2026-02-25 12:46:33.038 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 07:46:33 np0005629333 nova_compute[244014]: 2026-02-25 12:46:33.038 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:46:33 np0005629333 nova_compute[244014]: 2026-02-25 12:46:33.327 244018 DEBUG nova.compute.manager [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received event network-changed-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:46:33 np0005629333 nova_compute[244014]: 2026-02-25 12:46:33.327 244018 DEBUG nova.compute.manager [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Refreshing instance network info cache due to event network-changed-a0f62675-5535-4e75-98f6-dc4fbadbc4c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:46:33 np0005629333 nova_compute[244014]: 2026-02-25 12:46:33.328 244018 DEBUG oslo_concurrency.lockutils [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:46:33 np0005629333 nova_compute[244014]: 2026-02-25 12:46:33.329 244018 DEBUG oslo_concurrency.lockutils [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:46:33 np0005629333 nova_compute[244014]: 2026-02-25 12:46:33.329 244018 DEBUG nova.network.neutron [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Refreshing network info cache for port a0f62675-5535-4e75-98f6-dc4fbadbc4c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:46:33 np0005629333 nova_compute[244014]: 2026-02-25 12:46:33.478 244018 DEBUG nova.network.neutron [-] [instance: dd8c9142-2607-4722-90eb-65233f258639] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:46:33 np0005629333 nova_compute[244014]: 2026-02-25 12:46:33.498 244018 INFO nova.compute.manager [-] [instance: dd8c9142-2607-4722-90eb-65233f258639] Took 2.64 seconds to deallocate network for instance.#033[00m
Feb 25 07:46:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:46:33 np0005629333 nova_compute[244014]: 2026-02-25 12:46:33.541 244018 DEBUG oslo_concurrency.lockutils [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:46:33 np0005629333 nova_compute[244014]: 2026-02-25 12:46:33.542 244018 DEBUG oslo_concurrency.lockutils [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:46:33 np0005629333 nova_compute[244014]: 2026-02-25 12:46:33.597 244018 DEBUG oslo_concurrency.processutils [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:46:33 np0005629333 nova_compute[244014]: 2026-02-25 12:46:33.658 244018 DEBUG nova.network.neutron [req-29e3b37f-2334-4490-bb45-ff2bc16d291c req-021620c6-2604-4020-8082-75d046a54a00 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Updated VIF entry in instance network info cache for port e50c9f03-a8a5-48d1-a34b-4a8fd638d5df. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:46:33 np0005629333 nova_compute[244014]: 2026-02-25 12:46:33.659 244018 DEBUG nova.network.neutron [req-29e3b37f-2334-4490-bb45-ff2bc16d291c req-021620c6-2604-4020-8082-75d046a54a00 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Updating instance_info_cache with network_info: [{"id": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "address": "fa:16:3e:47:4a:25", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50c9f03-a8", "ovs_interfaceid": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "address": "fa:16:3e:60:df:22", "network": {"id": "6df13266-bcfe-4a5b-94c4-81b5f08a6c21", "bridge": "br-int", "label": "tempest-network-smoke--1467420495", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe60:df22", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68ffeedb-a9", "ovs_interfaceid": "68ffeedb-a9df-4fa8-9ae1-2535a1f1799d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:46:33 np0005629333 nova_compute[244014]: 2026-02-25 12:46:33.682 244018 DEBUG oslo_concurrency.lockutils [req-29e3b37f-2334-4490-bb45-ff2bc16d291c req-021620c6-2604-4020-8082-75d046a54a00 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-dd8c9142-2607-4722-90eb-65233f258639" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:46:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:46:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3907871664' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:46:34 np0005629333 nova_compute[244014]: 2026-02-25 12:46:34.144 244018 DEBUG oslo_concurrency.processutils [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:46:34 np0005629333 nova_compute[244014]: 2026-02-25 12:46:34.149 244018 DEBUG nova.compute.provider_tree [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:46:34 np0005629333 nova_compute[244014]: 2026-02-25 12:46:34.172 244018 DEBUG nova.scheduler.client.report [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:46:34 np0005629333 nova_compute[244014]: 2026-02-25 12:46:34.206 244018 DEBUG oslo_concurrency.lockutils [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:46:34 np0005629333 nova_compute[244014]: 2026-02-25 12:46:34.227 244018 INFO nova.scheduler.client.report [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance dd8c9142-2607-4722-90eb-65233f258639#033[00m
Feb 25 07:46:34 np0005629333 nova_compute[244014]: 2026-02-25 12:46:34.313 244018 DEBUG oslo_concurrency.lockutils [None req-f038fb33-3440-4992-a7db-6e7ea6a02ece f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:46:34 np0005629333 nova_compute[244014]: 2026-02-25 12:46:34.369 244018 DEBUG nova.network.neutron [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Updated VIF entry in instance network info cache for port a0f62675-5535-4e75-98f6-dc4fbadbc4c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:46:34 np0005629333 nova_compute[244014]: 2026-02-25 12:46:34.369 244018 DEBUG nova.network.neutron [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Updating instance_info_cache with network_info: [{"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:46:34 np0005629333 nova_compute[244014]: 2026-02-25 12:46:34.392 244018 DEBUG oslo_concurrency.lockutils [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:46:34 np0005629333 nova_compute[244014]: 2026-02-25 12:46:34.393 244018 DEBUG nova.compute.manager [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received event network-vif-plugged-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:46:34 np0005629333 nova_compute[244014]: 2026-02-25 12:46:34.394 244018 DEBUG oslo_concurrency.lockutils [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dd8c9142-2607-4722-90eb-65233f258639-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:46:34 np0005629333 nova_compute[244014]: 2026-02-25 12:46:34.394 244018 DEBUG oslo_concurrency.lockutils [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:46:34 np0005629333 nova_compute[244014]: 2026-02-25 12:46:34.395 244018 DEBUG oslo_concurrency.lockutils [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:46:34 np0005629333 nova_compute[244014]: 2026-02-25 12:46:34.395 244018 DEBUG nova.compute.manager [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] No waiting events found dispatching network-vif-plugged-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:46:34 np0005629333 nova_compute[244014]: 2026-02-25 12:46:34.395 244018 WARNING nova.compute.manager [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received unexpected event network-vif-plugged-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:46:34 np0005629333 nova_compute[244014]: 2026-02-25 12:46:34.396 244018 DEBUG nova.compute.manager [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received event network-vif-plugged-68ffeedb-a9df-4fa8-9ae1-2535a1f1799d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:46:34 np0005629333 nova_compute[244014]: 2026-02-25 12:46:34.396 244018 DEBUG oslo_concurrency.lockutils [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dd8c9142-2607-4722-90eb-65233f258639-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:46:34 np0005629333 nova_compute[244014]: 2026-02-25 12:46:34.397 244018 DEBUG oslo_concurrency.lockutils [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:46:34 np0005629333 nova_compute[244014]: 2026-02-25 12:46:34.397 244018 DEBUG oslo_concurrency.lockutils [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd8c9142-2607-4722-90eb-65233f258639-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:46:34 np0005629333 nova_compute[244014]: 2026-02-25 12:46:34.397 244018 DEBUG nova.compute.manager [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] No waiting events found dispatching network-vif-plugged-68ffeedb-a9df-4fa8-9ae1-2535a1f1799d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:46:34 np0005629333 nova_compute[244014]: 2026-02-25 12:46:34.397 244018 WARNING nova.compute.manager [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received unexpected event network-vif-plugged-68ffeedb-a9df-4fa8-9ae1-2535a1f1799d for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:46:34 np0005629333 nova_compute[244014]: 2026-02-25 12:46:34.398 244018 DEBUG nova.compute.manager [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received event network-vif-deleted-68ffeedb-a9df-4fa8-9ae1-2535a1f1799d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:46:34 np0005629333 nova_compute[244014]: 2026-02-25 12:46:34.398 244018 INFO nova.compute.manager [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Neutron deleted interface 68ffeedb-a9df-4fa8-9ae1-2535a1f1799d; detaching it from the instance and deleting it from the info cache#033[00m
Feb 25 07:46:34 np0005629333 nova_compute[244014]: 2026-02-25 12:46:34.398 244018 DEBUG nova.network.neutron [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Updating instance_info_cache with network_info: [{"id": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "address": "fa:16:3e:47:4a:25", "network": {"id": "bb79e0fd-2a4d-4a70-9c80-4853297401ff", "bridge": "br-int", "label": "tempest-network-smoke--1671675840", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.191", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape50c9f03-a8", "ovs_interfaceid": "e50c9f03-a8a5-48d1-a34b-4a8fd638d5df", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:46:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 131 op/s
Feb 25 07:46:34 np0005629333 nova_compute[244014]: 2026-02-25 12:46:34.423 244018 DEBUG nova.compute.manager [req-52423f54-925b-4cfe-a75c-5900cbafb33d req-9f832f40-4323-4846-9150-e1b260c3eb87 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Detach interface failed, port_id=68ffeedb-a9df-4fa8-9ae1-2535a1f1799d, reason: Instance dd8c9142-2607-4722-90eb-65233f258639 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 25 07:46:34 np0005629333 nova_compute[244014]: 2026-02-25 12:46:34.495 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:46:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:34Z|00137|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:56:b0:61 10.100.0.7
Feb 25 07:46:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:34Z|00138|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:56:b0:61 10.100.0.7
Feb 25 07:46:35 np0005629333 nova_compute[244014]: 2026-02-25 12:46:35.231 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:35 np0005629333 nova_compute[244014]: 2026-02-25 12:46:35.428 244018 DEBUG nova.compute.manager [req-73ebc221-ccf8-4b1a-8f29-573afe97c7e2 req-0e799139-7e1c-40fd-bdc1-87ac8b6381ca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Received event network-vif-deleted-e50c9f03-a8a5-48d1-a34b-4a8fd638d5df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:46:35 np0005629333 nova_compute[244014]: 2026-02-25 12:46:35.429 244018 INFO nova.compute.manager [req-73ebc221-ccf8-4b1a-8f29-573afe97c7e2 req-0e799139-7e1c-40fd-bdc1-87ac8b6381ca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Neutron deleted interface e50c9f03-a8a5-48d1-a34b-4a8fd638d5df; detaching it from the instance and deleting it from the info cache#033[00m
Feb 25 07:46:35 np0005629333 nova_compute[244014]: 2026-02-25 12:46:35.429 244018 DEBUG nova.network.neutron [req-73ebc221-ccf8-4b1a-8f29-573afe97c7e2 req-0e799139-7e1c-40fd-bdc1-87ac8b6381ca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Feb 25 07:46:35 np0005629333 nova_compute[244014]: 2026-02-25 12:46:35.432 244018 DEBUG nova.compute.manager [req-73ebc221-ccf8-4b1a-8f29-573afe97c7e2 req-0e799139-7e1c-40fd-bdc1-87ac8b6381ca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd8c9142-2607-4722-90eb-65233f258639] Detach interface failed, port_id=e50c9f03-a8a5-48d1-a34b-4a8fd638d5df, reason: Instance dd8c9142-2607-4722-90eb-65233f258639 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 25 07:46:35 np0005629333 nova_compute[244014]: 2026-02-25 12:46:35.468 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 123 op/s
Feb 25 07:46:36 np0005629333 nova_compute[244014]: 2026-02-25 12:46:36.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:46:36 np0005629333 nova_compute[244014]: 2026-02-25 12:46:36.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 07:46:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 187 op/s
Feb 25 07:46:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:46:40 np0005629333 nova_compute[244014]: 2026-02-25 12:46:40.234 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:40 np0005629333 nova_compute[244014]: 2026-02-25 12:46:40.249 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023585.2485282, 2b9ea98f-9b1a-495b-8669-e79da967b0ab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:46:40 np0005629333 nova_compute[244014]: 2026-02-25 12:46:40.250 244018 INFO nova.compute.manager [-] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:46:40 np0005629333 nova_compute[244014]: 2026-02-25 12:46:40.266 244018 DEBUG nova.compute.manager [None req-a3603cee-7a94-4a45-86bb-98893f68a113 - - - - - -] [instance: 2b9ea98f-9b1a-495b-8669-e79da967b0ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:46:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Feb 25 07:46:40 np0005629333 nova_compute[244014]: 2026-02-25 12:46:40.470 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:40 np0005629333 nova_compute[244014]: 2026-02-25 12:46:40.535 244018 INFO nova.compute.manager [None req-ff6ea604-fffb-4001-9a66-7e4704310772 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Get console output#033[00m
Feb 25 07:46:40 np0005629333 nova_compute[244014]: 2026-02-25 12:46:40.545 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Feb 25 07:46:40 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:40Z|01192|binding|INFO|Releasing lport 92ae33df-1d64-498f-b132-6ef0663f81e9 from this chassis (sb_readonly=0)
Feb 25 07:46:40 np0005629333 nova_compute[244014]: 2026-02-25 12:46:40.920 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:41 np0005629333 nova_compute[244014]: 2026-02-25 12:46:41.015 244018 DEBUG nova.objects.instance [None req-1c5c5436-4c19-4fcf-8592-efe6056f8235 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'pci_devices' on Instance uuid 709a8b15-83eb-45f4-b681-c150ad270e01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:46:41 np0005629333 nova_compute[244014]: 2026-02-25 12:46:41.047 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023601.0466552, 709a8b15-83eb-45f4-b681-c150ad270e01 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:46:41 np0005629333 nova_compute[244014]: 2026-02-25 12:46:41.047 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:46:41 np0005629333 nova_compute[244014]: 2026-02-25 12:46:41.065 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:46:41 np0005629333 nova_compute[244014]: 2026-02-25 12:46:41.071 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:46:41 np0005629333 nova_compute[244014]: 2026-02-25 12:46:41.089 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Feb 25 07:46:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:41.481 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:46:41 np0005629333 nova_compute[244014]: 2026-02-25 12:46:41.482 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:41.483 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 25 07:46:41 np0005629333 kernel: tapa0f62675-55 (unregistering): left promiscuous mode
Feb 25 07:46:41 np0005629333 NetworkManager[49836]: <info>  [1772023601.7308] device (tapa0f62675-55): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:46:41 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:41Z|01193|binding|INFO|Releasing lport a0f62675-5535-4e75-98f6-dc4fbadbc4c9 from this chassis (sb_readonly=0)
Feb 25 07:46:41 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:41Z|01194|binding|INFO|Setting lport a0f62675-5535-4e75-98f6-dc4fbadbc4c9 down in Southbound
Feb 25 07:46:41 np0005629333 nova_compute[244014]: 2026-02-25 12:46:41.735 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:41 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:41Z|01195|binding|INFO|Removing iface tapa0f62675-55 ovn-installed in OVS
Feb 25 07:46:41 np0005629333 nova_compute[244014]: 2026-02-25 12:46:41.740 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:41.744 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:b0:61 10.100.0.7'], port_security=['fa:16:3e:56:b0:61 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '709a8b15-83eb-45f4-b681-c150ad270e01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6cab933e-ec3e-4cce-9e9e-8098a6cc3a83', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.220'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68434b74-2de5-46be-a5a4-5cead97a1427, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=a0f62675-5535-4e75-98f6-dc4fbadbc4c9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:46:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:41.746 157129 INFO neutron.agent.ovn.metadata.agent [-] Port a0f62675-5535-4e75-98f6-dc4fbadbc4c9 in datapath bf745dc4-78d4-4d88-8794-da49c21b9f38 unbound from our chassis#033[00m
Feb 25 07:46:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:41.747 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bf745dc4-78d4-4d88-8794-da49c21b9f38, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:46:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:41.749 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dd643433-ef82-48e6-a922-0a1b37072025]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:41.750 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38 namespace which is not needed anymore#033[00m
Feb 25 07:46:41 np0005629333 nova_compute[244014]: 2026-02-25 12:46:41.757 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:41 np0005629333 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000074.scope: Deactivated successfully.
Feb 25 07:46:41 np0005629333 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000074.scope: Consumed 12.636s CPU time.
Feb 25 07:46:41 np0005629333 systemd-machined[210048]: Machine qemu-147-instance-00000074 terminated.
Feb 25 07:46:41 np0005629333 neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38[348144]: [NOTICE]   (348148) : haproxy version is 2.8.14-c23fe91
Feb 25 07:46:41 np0005629333 neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38[348144]: [NOTICE]   (348148) : path to executable is /usr/sbin/haproxy
Feb 25 07:46:41 np0005629333 neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38[348144]: [WARNING]  (348148) : Exiting Master process...
Feb 25 07:46:41 np0005629333 neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38[348144]: [ALERT]    (348148) : Current worker (348150) exited with code 143 (Terminated)
Feb 25 07:46:41 np0005629333 neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38[348144]: [WARNING]  (348148) : All workers exited. Exiting... (0)
Feb 25 07:46:41 np0005629333 systemd[1]: libpod-fde4953fed28e007b5b69e80e6f3c28902ff149574cb654c3cd08315acb4de87.scope: Deactivated successfully.
Feb 25 07:46:41 np0005629333 podman[348607]: 2026-02-25 12:46:41.903681414 +0000 UTC m=+0.047891653 container died fde4953fed28e007b5b69e80e6f3c28902ff149574cb654c3cd08315acb4de87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 25 07:46:41 np0005629333 nova_compute[244014]: 2026-02-25 12:46:41.903 244018 DEBUG nova.compute.manager [None req-1c5c5436-4c19-4fcf-8592-efe6056f8235 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:46:41 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fde4953fed28e007b5b69e80e6f3c28902ff149574cb654c3cd08315acb4de87-userdata-shm.mount: Deactivated successfully.
Feb 25 07:46:41 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f539fbb4b8a2e0d559dbacdf783585633e2c940d6c86cd49a3a52f168fc29a4e-merged.mount: Deactivated successfully.
Feb 25 07:46:41 np0005629333 podman[348607]: 2026-02-25 12:46:41.95770931 +0000 UTC m=+0.101919529 container cleanup fde4953fed28e007b5b69e80e6f3c28902ff149574cb654c3cd08315acb4de87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 25 07:46:41 np0005629333 systemd[1]: libpod-conmon-fde4953fed28e007b5b69e80e6f3c28902ff149574cb654c3cd08315acb4de87.scope: Deactivated successfully.
Feb 25 07:46:42 np0005629333 podman[348647]: 2026-02-25 12:46:42.019192216 +0000 UTC m=+0.042389938 container remove fde4953fed28e007b5b69e80e6f3c28902ff149574cb654c3cd08315acb4de87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 25 07:46:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:42.027 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2b00fc29-a4c0-4f73-8f41-13ab8bebe099]: (4, ('Wed Feb 25 12:46:41 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38 (fde4953fed28e007b5b69e80e6f3c28902ff149574cb654c3cd08315acb4de87)\nfde4953fed28e007b5b69e80e6f3c28902ff149574cb654c3cd08315acb4de87\nWed Feb 25 12:46:41 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38 (fde4953fed28e007b5b69e80e6f3c28902ff149574cb654c3cd08315acb4de87)\nfde4953fed28e007b5b69e80e6f3c28902ff149574cb654c3cd08315acb4de87\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:42 np0005629333 nova_compute[244014]: 2026-02-25 12:46:42.027 244018 DEBUG nova.compute.manager [req-c785064d-83c1-4f04-9bc3-18211353712f req-6eb7043d-3a70-4649-823f-65c778c815b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received event network-vif-unplugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:46:42 np0005629333 nova_compute[244014]: 2026-02-25 12:46:42.028 244018 DEBUG oslo_concurrency.lockutils [req-c785064d-83c1-4f04-9bc3-18211353712f req-6eb7043d-3a70-4649-823f-65c778c815b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:46:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:42.029 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[be061554-744d-4e91-8610-f94bca556497]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:42 np0005629333 nova_compute[244014]: 2026-02-25 12:46:42.029 244018 DEBUG oslo_concurrency.lockutils [req-c785064d-83c1-4f04-9bc3-18211353712f req-6eb7043d-3a70-4649-823f-65c778c815b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:46:42 np0005629333 nova_compute[244014]: 2026-02-25 12:46:42.029 244018 DEBUG oslo_concurrency.lockutils [req-c785064d-83c1-4f04-9bc3-18211353712f req-6eb7043d-3a70-4649-823f-65c778c815b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:46:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:42.030 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf745dc4-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:46:42 np0005629333 nova_compute[244014]: 2026-02-25 12:46:42.030 244018 DEBUG nova.compute.manager [req-c785064d-83c1-4f04-9bc3-18211353712f req-6eb7043d-3a70-4649-823f-65c778c815b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] No waiting events found dispatching network-vif-unplugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:46:42 np0005629333 nova_compute[244014]: 2026-02-25 12:46:42.030 244018 WARNING nova.compute.manager [req-c785064d-83c1-4f04-9bc3-18211353712f req-6eb7043d-3a70-4649-823f-65c778c815b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received unexpected event network-vif-unplugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 for instance with vm_state suspended and task_state None.#033[00m
Feb 25 07:46:42 np0005629333 nova_compute[244014]: 2026-02-25 12:46:42.032 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:42 np0005629333 kernel: tapbf745dc4-70: left promiscuous mode
Feb 25 07:46:42 np0005629333 nova_compute[244014]: 2026-02-25 12:46:42.044 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:42.052 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[04fb9f37-cf43-4512-8b36-fcc67b9954ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:42.067 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bca4ed68-aa20-45c1-aa4a-f60d49424ffd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:42.069 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[80b082c9-7c20-4813-9715-50c5b5b5129c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:42.087 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[26c87a1e-c35f-465d-8664-8d452893eae5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 555139, 'reachable_time': 25620, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348667, 'error': None, 'target': 'ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:42.092 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:46:42 np0005629333 systemd[1]: run-netns-ovnmeta\x2dbf745dc4\x2d78d4\x2d4d88\x2d8794\x2dda49c21b9f38.mount: Deactivated successfully.
Feb 25 07:46:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:42.092 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[23a1bc6b-1fb8-4853-91de-df7b20a22fcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Feb 25 07:46:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:46:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:46:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:46:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:46:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007744472287801465 of space, bias 1.0, pg target 0.23233416863404394 quantized to 32 (current 32)
Feb 25 07:46:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:46:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:46:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:46:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:46:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:46:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002493995175414729 of space, bias 1.0, pg target 0.7481985526244187 quantized to 32 (current 32)
Feb 25 07:46:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:46:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4068357350033398e-06 of space, bias 4.0, pg target 0.0016882028820040078 quantized to 16 (current 16)
Feb 25 07:46:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:46:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:46:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:46:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:46:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:46:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:46:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:46:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:46:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:46:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
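
[annotation] The pg_autoscaler lines above follow one arithmetic pattern: each pool's "pg target" is its share of raw capacity times its bias times a cluster-wide PG budget, then quantized to a power of two with a per-pool floor. The printed values are consistent with a budget of 300 PGs (e.g. 'images': 0.002493995175414729 x 1.0 x 300 = 0.7481985526244187, exactly as logged), which would match the default mon_target_pg_per_osd=100 on 3 OSDs; that budget is an inference from the ratios, not something the log states. A minimal sketch reproducing the printed numbers:

    # Hedged sketch of the pg_autoscaler arithmetic visible above. The
    # budget of 300 (100 target PGs/OSD x 3 OSDs) is an assumption inferred
    # from the ratios; usage, bias, and the printed targets are from the log.
    PG_BUDGET = 300

    def pg_target(usage_ratio, bias, budget=PG_BUDGET):
        return usage_ratio * bias * budget

    def quantize(target, floor):
        """Simplified: round up to the next power of two, never below floor."""
        n = floor
        while n < target:
            n *= 2
        return n

    # 'vms' pool from the log: using 0.0007744472..., bias 1.0
    raw = pg_target(0.0007744472287801465, 1.0)
    print(raw)                # 0.23233416863404395 -> "pg target 0.2323341686..."
    print(quantize(raw, 32))  # 32 -> "quantized to 32 (current 32)"

The per-pool floors differ (1 for '.mgr', 16 for 'cephfs.cephfs.meta', 32 for the rest), which is why identical near-zero targets quantize to different values above.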
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.699504) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023602699534, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 377, "num_deletes": 251, "total_data_size": 242446, "memory_usage": 250888, "flush_reason": "Manual Compaction"}
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023602704054, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 240265, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42349, "largest_seqno": 42725, "table_properties": {"data_size": 238012, "index_size": 417, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5649, "raw_average_key_size": 18, "raw_value_size": 233544, "raw_average_value_size": 768, "num_data_blocks": 19, "num_entries": 304, "num_filter_entries": 304, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772023589, "oldest_key_time": 1772023589, "file_creation_time": 1772023602, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 4617 microseconds, and 1926 cpu microseconds.
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.704113) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 240265 bytes OK
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.704139) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.706598) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.706628) EVENT_LOG_v1 {"time_micros": 1772023602706618, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.706654) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 239988, prev total WAL file size 239988, number of live WAL files 2.
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.707092) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
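
[annotation] The from/to keys in the manual-compaction line above are hex-encoded user keys. Decoding them shows what the monitor is actually compacting: its paxos transaction range (a 'paxos' prefix, a NUL separator, then the transaction number). A quick check:

    # Decode the key range from the "Manual compaction from level-0 to
    # level-6" line; both strings are copied verbatim from the log.
    print(bytes.fromhex('7061786F730033373635'))  # b'paxos\x003765'
    print(bytes.fromhex('7061786F730034303137'))  # b'paxos\x004017'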
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(234KB)], [95(10MB)]
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023602707232, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 11699357, "oldest_snapshot_seqno": -1}
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 6424 keys, 10047448 bytes, temperature: kUnknown
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023602767541, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 10047448, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10002243, "index_size": 28071, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16069, "raw_key_size": 166893, "raw_average_key_size": 25, "raw_value_size": 9884955, "raw_average_value_size": 1538, "num_data_blocks": 1103, "num_entries": 6424, "num_filter_entries": 6424, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772023602, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.767896) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 10047448 bytes
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.769097) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 193.8 rd, 166.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.9 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(90.5) write-amplify(41.8) OK, records in: 6933, records dropped: 509 output_compression: NoCompression
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.769127) EVENT_LOG_v1 {"time_micros": 1772023602769113, "job": 56, "event": "compaction_finished", "compaction_time_micros": 60367, "compaction_time_cpu_micros": 22099, "output_level": 6, "num_output_files": 1, "total_output_size": 10047448, "num_input_records": 6933, "num_output_records": 6424, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023602769307, "job": 56, "event": "table_file_deletion", "file_number": 97}
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023602771102, "job": 56, "event": "table_file_deletion", "file_number": 95}
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.707023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.771198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.771206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.771210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.771214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:46:42 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:46:42.771218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
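
[annotation] The compaction summary at 12:46:42.769 is internally consistent and worth decoding: job 56 read one 240,265-byte L0 file (table #97, just flushed) plus one ~10.9 MB L6 file (input_data_size 11,699,357 in total) and wrote a single 10,047,448-byte L6 file in 60,367 microseconds. The printed rates and amplification factors all fall out of those four numbers:

    # Worked check of the figures in the rocksdb compaction summary above.
    # All inputs are taken verbatim from the EVENT_LOG_v1 lines.
    l0_in = 240_265            # table #97 (the freshly flushed L0 file)
    total_in = 11_699_357      # input_data_size (L0 file #97 + L6 file #95)
    out = 10_047_448           # table #98 written to L6
    micros = 60_367            # compaction_time_micros

    print(total_in / micros)         # ~193.8 -> "MB/sec: 193.8 rd" (bytes/us == MB/s)
    print(out / micros)              # ~166.4 -> "166.4 wr"
    print(out / l0_in)               # ~41.8  -> "write-amplify(41.8)"
    print((total_in + out) / l0_in)  # ~90.5  -> "read-write-amplify(90.5)"

The high amplification is expected here: a tiny L0 flush is being merged into a ~10 MB L6 file, so nearly the whole level is rewritten to absorb 234 KB of new data.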
Feb 25 07:46:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:46:44 np0005629333 nova_compute[244014]: 2026-02-25 12:46:44.097 244018 DEBUG nova.compute.manager [req-058eefb1-a824-4293-84b6-a8bdac6fb732 req-54770e74-b7b6-47e0-b508-ec63d8721058 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received event network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:46:44 np0005629333 nova_compute[244014]: 2026-02-25 12:46:44.098 244018 DEBUG oslo_concurrency.lockutils [req-058eefb1-a824-4293-84b6-a8bdac6fb732 req-54770e74-b7b6-47e0-b508-ec63d8721058 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:46:44 np0005629333 nova_compute[244014]: 2026-02-25 12:46:44.098 244018 DEBUG oslo_concurrency.lockutils [req-058eefb1-a824-4293-84b6-a8bdac6fb732 req-54770e74-b7b6-47e0-b508-ec63d8721058 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:46:44 np0005629333 nova_compute[244014]: 2026-02-25 12:46:44.099 244018 DEBUG oslo_concurrency.lockutils [req-058eefb1-a824-4293-84b6-a8bdac6fb732 req-54770e74-b7b6-47e0-b508-ec63d8721058 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:46:44 np0005629333 nova_compute[244014]: 2026-02-25 12:46:44.099 244018 DEBUG nova.compute.manager [req-058eefb1-a824-4293-84b6-a8bdac6fb732 req-54770e74-b7b6-47e0-b508-ec63d8721058 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] No waiting events found dispatching network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:46:44 np0005629333 nova_compute[244014]: 2026-02-25 12:46:44.099 244018 WARNING nova.compute.manager [req-058eefb1-a824-4293-84b6-a8bdac6fb732 req-54770e74-b7b6-47e0-b508-ec63d8721058 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received unexpected event network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 for instance with vm_state suspended and task_state None.#033[00m
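
[annotation] The three lockutils lines plus the warning above show nova's external-event dispatch pattern: a per-instance lock guards a table of registered waiters, and if nothing registered interest in network-vif-plugged before the event arrived, the pop finds no waiter and the event is logged as unexpected (harmless here, since the instance is still suspended). A schematic reduction of that pattern, with hypothetical names, not nova's actual code:

    # Hedged sketch of the pop_instance_event pattern suggested by the log;
    # nova's real implementation differs, this is just the shape of it.
    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}  # {instance_uuid: {event_name: threading.Event}}

        def prepare(self, uuid, name):
            """A waiter registers interest before triggering the operation."""
            with self._lock:
                ev = threading.Event()
                self._events.setdefault(uuid, {})[name] = ev
            return ev

        def pop(self, uuid, name):
            with self._lock:  # "Acquiring lock '<uuid>-events' ..."
                return self._events.get(uuid, {}).pop(name, None)

    def receive(events, uuid, name, vm_state):
        ev = events.pop(uuid, name)
        if ev is None:
            # "No waiting events found dispatching ..." -> unexpected event
            print(f"unexpected {name} for instance with vm_state {vm_state}")
        else:
            ev.set()  # wake whoever called prepare() and is waiting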
Feb 25 07:46:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Feb 25 07:46:44 np0005629333 nova_compute[244014]: 2026-02-25 12:46:44.791 244018 INFO nova.compute.manager [None req-5cf55615-aefe-462a-88a6-e8e407639019 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Get console output#033[00m
Feb 25 07:46:45 np0005629333 nova_compute[244014]: 2026-02-25 12:46:45.001 244018 INFO nova.compute.manager [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Resuming#033[00m
Feb 25 07:46:45 np0005629333 nova_compute[244014]: 2026-02-25 12:46:45.002 244018 DEBUG nova.objects.instance [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'flavor' on Instance uuid 709a8b15-83eb-45f4-b681-c150ad270e01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:46:45 np0005629333 nova_compute[244014]: 2026-02-25 12:46:45.041 244018 DEBUG oslo_concurrency.lockutils [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:46:45 np0005629333 nova_compute[244014]: 2026-02-25 12:46:45.041 244018 DEBUG oslo_concurrency.lockutils [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquired lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:46:45 np0005629333 nova_compute[244014]: 2026-02-25 12:46:45.042 244018 DEBUG nova.network.neutron [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:46:45 np0005629333 nova_compute[244014]: 2026-02-25 12:46:45.238 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:45 np0005629333 nova_compute[244014]: 2026-02-25 12:46:45.421 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023590.4197423, dd8c9142-2607-4722-90eb-65233f258639 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:46:45 np0005629333 nova_compute[244014]: 2026-02-25 12:46:45.422 244018 INFO nova.compute.manager [-] [instance: dd8c9142-2607-4722-90eb-65233f258639] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:46:45 np0005629333 nova_compute[244014]: 2026-02-25 12:46:45.467 244018 DEBUG nova.compute.manager [None req-81d7a25e-c4b7-455c-8188-e3c7e9c8df41 - - - - - -] [instance: dd8c9142-2607-4722-90eb-65233f258639] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:46:45 np0005629333 nova_compute[244014]: 2026-02-25 12:46:45.471 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:45.487 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:46:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 07:46:46 np0005629333 nova_compute[244014]: 2026-02-25 12:46:46.817 244018 DEBUG nova.network.neutron [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Updating instance_info_cache with network_info: [{"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:46:46 np0005629333 nova_compute[244014]: 2026-02-25 12:46:46.837 244018 DEBUG oslo_concurrency.lockutils [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Releasing lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:46:46 np0005629333 nova_compute[244014]: 2026-02-25 12:46:46.844 244018 DEBUG nova.virt.libvirt.vif [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:46:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-742082713',display_name='tempest-TestNetworkAdvancedServerOps-server-742082713',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-742082713',id=116,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB+I5tZfRy0ovwNPnrrYTneQbLw5hVnQnH/IanoLiHVduiItxLS5mgbnypXYZj3B9Q1TA9okLBB9xUXwMoUUkuZoJAUYr2FIjfbOIXstrZLpgrCV3aVFIh2fnMujkQ9rZw==',key_name='tempest-TestNetworkAdvancedServerOps-1976579647',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:46:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-m0ykjdnm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:46:41Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=709a8b15-83eb-45f4-b681-c150ad270e01,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": 
"system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:46:46 np0005629333 nova_compute[244014]: 2026-02-25 12:46:46.844 244018 DEBUG nova.network.os_vif_util [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:46:46 np0005629333 nova_compute[244014]: 2026-02-25 12:46:46.846 244018 DEBUG nova.network.os_vif_util [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:b0:61,bridge_name='br-int',has_traffic_filtering=True,id=a0f62675-5535-4e75-98f6-dc4fbadbc4c9,network=Network(bf745dc4-78d4-4d88-8794-da49c21b9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f62675-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:46:46 np0005629333 nova_compute[244014]: 2026-02-25 12:46:46.847 244018 DEBUG os_vif [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:b0:61,bridge_name='br-int',has_traffic_filtering=True,id=a0f62675-5535-4e75-98f6-dc4fbadbc4c9,network=Network(bf745dc4-78d4-4d88-8794-da49c21b9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f62675-55') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:46:46 np0005629333 nova_compute[244014]: 2026-02-25 12:46:46.848 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:46 np0005629333 nova_compute[244014]: 2026-02-25 12:46:46.848 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:46:46 np0005629333 nova_compute[244014]: 2026-02-25 12:46:46.849 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:46:46 np0005629333 nova_compute[244014]: 2026-02-25 12:46:46.852 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:46 np0005629333 nova_compute[244014]: 2026-02-25 12:46:46.853 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0f62675-55, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:46:46 np0005629333 nova_compute[244014]: 2026-02-25 12:46:46.853 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa0f62675-55, col_values=(('external_ids', {'iface-id': 'a0f62675-5535-4e75-98f6-dc4fbadbc4c9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:56:b0:61', 'vm-uuid': '709a8b15-83eb-45f4-b681-c150ad270e01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:46:46 np0005629333 nova_compute[244014]: 2026-02-25 12:46:46.854 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:46:46 np0005629333 nova_compute[244014]: 2026-02-25 12:46:46.855 244018 INFO os_vif [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:b0:61,bridge_name='br-int',has_traffic_filtering=True,id=a0f62675-5535-4e75-98f6-dc4fbadbc4c9,network=Network(bf745dc4-78d4-4d88-8794-da49c21b9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f62675-55')#033[00m
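
[annotation] The ovsdbapp lines leading up to "Successfully plugged vif" are one idempotent plug: AddBridgeCommand and AddPortCommand both run with may_exist=True, so replaying them against an already-configured switch yields "Transaction caused no change" instead of an error, and the DbSetCommand stamps the Interface row with the iface-id that OVN matches against. A minimal sketch of the same transaction through ovsdbapp's OVS schema API, assuming `api` is an already-connected ovsdbapp OpenVSwitch API object (connection setup and error handling omitted):

    # Hedged sketch of the idempotent plug transaction seen above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapa0f62675-55', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapa0f62675-55',
            ('external_ids', {
                'iface-id': 'a0f62675-5535-4e75-98f6-dc4fbadbc4c9',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:56:b0:61',
                'vm-uuid': '709a8b15-83eb-45f4-b681-c150ad270e01'})))
    # With may_exist=True a replay is a no-op: "Transaction caused no change".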
Feb 25 07:46:46 np0005629333 nova_compute[244014]: 2026-02-25 12:46:46.882 244018 DEBUG nova.objects.instance [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'numa_topology' on Instance uuid 709a8b15-83eb-45f4-b681-c150ad270e01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:46:46 np0005629333 kernel: tapa0f62675-55: entered promiscuous mode
Feb 25 07:46:46 np0005629333 nova_compute[244014]: 2026-02-25 12:46:46.961 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:46Z|01196|binding|INFO|Claiming lport a0f62675-5535-4e75-98f6-dc4fbadbc4c9 for this chassis.
Feb 25 07:46:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:46Z|01197|binding|INFO|a0f62675-5535-4e75-98f6-dc4fbadbc4c9: Claiming fa:16:3e:56:b0:61 10.100.0.7
Feb 25 07:46:46 np0005629333 NetworkManager[49836]: <info>  [1772023606.9625] manager: (tapa0f62675-55): new Tun device (/org/freedesktop/NetworkManager/Devices/495)
Feb 25 07:46:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:46.970 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:b0:61 10.100.0.7'], port_security=['fa:16:3e:56:b0:61 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '709a8b15-83eb-45f4-b681-c150ad270e01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '6cab933e-ec3e-4cce-9e9e-8098a6cc3a83', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.220'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68434b74-2de5-46be-a5a4-5cead97a1427, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=a0f62675-5535-4e75-98f6-dc4fbadbc4c9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:46:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:46.972 157129 INFO neutron.agent.ovn.metadata.agent [-] Port a0f62675-5535-4e75-98f6-dc4fbadbc4c9 in datapath bf745dc4-78d4-4d88-8794-da49c21b9f38 bound to our chassis#033[00m
Feb 25 07:46:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:46.974 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bf745dc4-78d4-4d88-8794-da49c21b9f38#033[00m
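
[annotation] The "Matched UPDATE: PortBindingUpdatedEvent(...)" line above shows the agent's trigger mechanism: an ovsdbapp row event registered against the southbound Port_Binding table. When a port's chassis column flips from empty to this chassis (note old=Port_Binding(chassis=[]) in the match), the handler provisions the metadata namespace for that datapath. A hedged sketch of such an event against ovsdbapp's public event API (the handler body and agent hook are illustrative, not neutron's actual code):

    # RowEvent(events, table, conditions) and run(event, row, old) are
    # ovsdbapp's documented interface; the module path appears in the log.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, agent):
            self.agent = agent
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Fire only when the port was just bound to a chassis; `old`
            # carries only the changed columns, hence the getattr guard.
            return bool(row.chassis) and not getattr(old, 'chassis', None)

        def run(self, event, row, old):
            self.agent.provision_datapath(row)  # hypothetical handler name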
Feb 25 07:46:46 np0005629333 nova_compute[244014]: 2026-02-25 12:46:46.976 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:46Z|01198|binding|INFO|Setting lport a0f62675-5535-4e75-98f6-dc4fbadbc4c9 ovn-installed in OVS
Feb 25 07:46:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:46Z|01199|binding|INFO|Setting lport a0f62675-5535-4e75-98f6-dc4fbadbc4c9 up in Southbound
Feb 25 07:46:46 np0005629333 nova_compute[244014]: 2026-02-25 12:46:46.979 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:46 np0005629333 nova_compute[244014]: 2026-02-25 12:46:46.983 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:46.990 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bff9b017-551f-4b4d-8f46-f0e39ed9cc70]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:46.992 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbf745dc4-71 in ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:46:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:46.995 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbf745dc4-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:46:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:46.995 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[61e9a55c-e356-4bf7-8282-6ba9bd746c0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:46.997 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[46a94d44-fc07-45f8-aae4-343a6f9670a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
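
[annotation] "Creating VETH tapbf745dc4-71 in ovnmeta-... namespace" at 12:46:46.992 is the classic split-pair pattern: create a veth pair in the root namespace, keep one end (tapbf745dc4-70, plugged into br-int a few lines below) and push the peer (tapbf745dc4-71) into the ovnmeta namespace where haproxy will listen on 169.254.169.254. Neutron does this through its privsep-wrapped ip_lib; the same steps with bare pyroute2 would look roughly like this (names reused from the log, error handling omitted):

    # Hedged sketch of the veth-into-namespace step, using pyroute2 directly
    # rather than neutron's privileged ip_lib wrapper.
    from pyroute2 import IPRoute

    ns = 'ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38'
    with IPRoute() as ip:
        # Create the pair in the root namespace...
        ip.link('add', ifname='tapbf745dc4-70', kind='veth',
                peer='tapbf745dc4-71')
        # ...then move the peer end into the metadata namespace
        # (the name must exist under /var/run/netns).
        idx = ip.link_lookup(ifname='tapbf745dc4-71')[0]
        ip.link('set', index=idx, net_ns_fd=ns)
        # Bring the root-side end up; it gets attached to br-int separately.
        ip.link('set', index=ip.link_lookup(ifname='tapbf745dc4-70')[0],
                state='up')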
Feb 25 07:46:47 np0005629333 systemd-machined[210048]: New machine qemu-148-instance-00000074.
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.013 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[1668d6c5-c5bf-4701-8afc-a4d75859a69c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:47 np0005629333 systemd[1]: Started Virtual Machine qemu-148-instance-00000074.
Feb 25 07:46:47 np0005629333 systemd-udevd[348685]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.040 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8ed6b96b-6b29-4ab3-8906-ddc643dfc8b0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:47 np0005629333 NetworkManager[49836]: <info>  [1772023607.0446] device (tapa0f62675-55): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:46:47 np0005629333 NetworkManager[49836]: <info>  [1772023607.0466] device (tapa0f62675-55): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.076 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[95ec7651-8b21-4ea5-888f-2cbd5750bde7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.082 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8f9b556a-3a2c-4ffc-827d-b4eeda635de7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:47 np0005629333 NetworkManager[49836]: <info>  [1772023607.0849] manager: (tapbf745dc4-70): new Veth device (/org/freedesktop/NetworkManager/Devices/496)
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.118 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b980ea43-6f1e-4f2b-b365-fca436ffba10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.121 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cdc40d63-36e6-49ec-b4c4-233a3d34eeb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:47 np0005629333 NetworkManager[49836]: <info>  [1772023607.1391] device (tapbf745dc4-70): carrier: link connected
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.146 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d7f1d9fa-9d81-493e-bab7-1eb097369caf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.161 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb00bad-a2c5-42b1-a185-3b49c53b2007]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf745dc4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:7d:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 360], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557654, 'reachable_time': 25679, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348715, 'error': None, 'target': 'ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.175 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1d189375-4a43-4e41-8c0c-70a69b71f9da]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe91:7daf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 557654, 'tstamp': 557654}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348716, 'error': None, 'target': 'ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.189 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e08c4cfa-0040-469e-8d58-2bdd8f78ba1c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbf745dc4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:7d:af'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 360], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557654, 'reachable_time': 25679, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 348717, 'error': None, 'target': 'ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.218 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d0f328f1-fb85-4839-947d-d407358168f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:47 np0005629333 nova_compute[244014]: 2026-02-25 12:46:47.229 244018 DEBUG nova.compute.manager [req-3210b748-c2ea-4a58-86d7-7ebf3cfb5c20 req-8f32f598-82cd-4907-9b9f-2c9664127868 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received event network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:46:47 np0005629333 nova_compute[244014]: 2026-02-25 12:46:47.230 244018 DEBUG oslo_concurrency.lockutils [req-3210b748-c2ea-4a58-86d7-7ebf3cfb5c20 req-8f32f598-82cd-4907-9b9f-2c9664127868 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:46:47 np0005629333 nova_compute[244014]: 2026-02-25 12:46:47.230 244018 DEBUG oslo_concurrency.lockutils [req-3210b748-c2ea-4a58-86d7-7ebf3cfb5c20 req-8f32f598-82cd-4907-9b9f-2c9664127868 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:46:47 np0005629333 nova_compute[244014]: 2026-02-25 12:46:47.231 244018 DEBUG oslo_concurrency.lockutils [req-3210b748-c2ea-4a58-86d7-7ebf3cfb5c20 req-8f32f598-82cd-4907-9b9f-2c9664127868 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:46:47 np0005629333 nova_compute[244014]: 2026-02-25 12:46:47.231 244018 DEBUG nova.compute.manager [req-3210b748-c2ea-4a58-86d7-7ebf3cfb5c20 req-8f32f598-82cd-4907-9b9f-2c9664127868 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] No waiting events found dispatching network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:46:47 np0005629333 nova_compute[244014]: 2026-02-25 12:46:47.231 244018 WARNING nova.compute.manager [req-3210b748-c2ea-4a58-86d7-7ebf3cfb5c20 req-8f32f598-82cd-4907-9b9f-2c9664127868 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received unexpected event network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 for instance with vm_state suspended and task_state resuming.#033[00m
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.269 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[902f6c46-ad0e-436e-a164-1f73ea9bcdce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.271 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf745dc4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.271 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.272 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf745dc4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:46:47 np0005629333 nova_compute[244014]: 2026-02-25 12:46:47.274 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:47 np0005629333 NetworkManager[49836]: <info>  [1772023607.2753] manager: (tapbf745dc4-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/497)
Feb 25 07:46:47 np0005629333 kernel: tapbf745dc4-70: entered promiscuous mode
Feb 25 07:46:47 np0005629333 nova_compute[244014]: 2026-02-25 12:46:47.276 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.277 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbf745dc4-70, col_values=(('external_ids', {'iface-id': '92ae33df-1d64-498f-b132-6ef0663f81e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:46:47 np0005629333 nova_compute[244014]: 2026-02-25 12:46:47.278 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:47 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:47Z|01200|binding|INFO|Releasing lport 92ae33df-1d64-498f-b132-6ef0663f81e9 from this chassis (sb_readonly=0)
Feb 25 07:46:47 np0005629333 nova_compute[244014]: 2026-02-25 12:46:47.279 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.280 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bf745dc4-78d4-4d88-8794-da49c21b9f38.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bf745dc4-78d4-4d88-8794-da49c21b9f38.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.280 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[417d4412-2419-4c4b-bcf5-1380aa5032aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.281 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-bf745dc4-78d4-4d88-8794-da49c21b9f38
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/bf745dc4-78d4-4d88-8794-da49c21b9f38.pid.haproxy
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID bf745dc4-78d4-4d88-8794-da49c21b9f38
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:46:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:47.282 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'env', 'PROCESS_TAG=haproxy-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bf745dc4-78d4-4d88-8794-da49c21b9f38.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 07:46:47 np0005629333 nova_compute[244014]: 2026-02-25 12:46:47.285 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:46:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2585637404' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:46:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:46:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2585637404' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 07:46:47 np0005629333 nova_compute[244014]: 2026-02-25 12:46:47.550 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for 709a8b15-83eb-45f4-b681-c150ad270e01 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 25 07:46:47 np0005629333 nova_compute[244014]: 2026-02-25 12:46:47.551 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023607.549498, 709a8b15-83eb-45f4-b681-c150ad270e01 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:46:47 np0005629333 nova_compute[244014]: 2026-02-25 12:46:47.551 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] VM Started (Lifecycle Event)#033[00m
Feb 25 07:46:47 np0005629333 nova_compute[244014]: 2026-02-25 12:46:47.573 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:46:47 np0005629333 nova_compute[244014]: 2026-02-25 12:46:47.579 244018 DEBUG nova.compute.manager [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:46:47 np0005629333 nova_compute[244014]: 2026-02-25 12:46:47.580 244018 DEBUG nova.objects.instance [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'pci_devices' on Instance uuid 709a8b15-83eb-45f4-b681-c150ad270e01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:46:47 np0005629333 nova_compute[244014]: 2026-02-25 12:46:47.585 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:46:47 np0005629333 nova_compute[244014]: 2026-02-25 12:46:47.601 244018 INFO nova.virt.libvirt.driver [-] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Instance running successfully.#033[00m
Feb 25 07:46:47 np0005629333 nova_compute[244014]: 2026-02-25 12:46:47.603 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Feb 25 07:46:47 np0005629333 virtqemud[243235]: argument unsupported: QEMU guest agent is not configured
Feb 25 07:46:47 np0005629333 nova_compute[244014]: 2026-02-25 12:46:47.604 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023607.5547223, 709a8b15-83eb-45f4-b681-c150ad270e01 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:46:47 np0005629333 nova_compute[244014]: 2026-02-25 12:46:47.605 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:46:47 np0005629333 nova_compute[244014]: 2026-02-25 12:46:47.610 244018 DEBUG nova.virt.libvirt.guest [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Feb 25 07:46:47 np0005629333 nova_compute[244014]: 2026-02-25 12:46:47.610 244018 DEBUG nova.compute.manager [None req-7522a877-1595-4f04-acf7-6edd8bec641f fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:46:47 np0005629333 nova_compute[244014]: 2026-02-25 12:46:47.645 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:46:47 np0005629333 nova_compute[244014]: 2026-02-25 12:46:47.648 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:46:47 np0005629333 podman[348790]: 2026-02-25 12:46:47.650320901 +0000 UTC m=+0.063089353 container create 930373a23354dc6d682075b9cddf8a0658967699adde4861174bb9a454683329 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:46:47 np0005629333 systemd[1]: Started libpod-conmon-930373a23354dc6d682075b9cddf8a0658967699adde4861174bb9a454683329.scope.
Feb 25 07:46:47 np0005629333 nova_compute[244014]: 2026-02-25 12:46:47.688 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Feb 25 07:46:47 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:46:47 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e45e31c374f7f8967ca524d1b85ad5a63827aeadab1c7beb276d1f90b01fde07/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:46:47 np0005629333 podman[348790]: 2026-02-25 12:46:47.626894349 +0000 UTC m=+0.039662841 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:46:47 np0005629333 podman[348790]: 2026-02-25 12:46:47.733205081 +0000 UTC m=+0.145973613 container init 930373a23354dc6d682075b9cddf8a0658967699adde4861174bb9a454683329 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:46:47 np0005629333 podman[348790]: 2026-02-25 12:46:47.740564479 +0000 UTC m=+0.153332961 container start 930373a23354dc6d682075b9cddf8a0658967699adde4861174bb9a454683329 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0)
Feb 25 07:46:47 np0005629333 neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38[348805]: [NOTICE]   (348809) : New worker (348811) forked
Feb 25 07:46:47 np0005629333 neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38[348805]: [NOTICE]   (348809) : Loading success.
Feb 25 07:46:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 25 07:46:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:46:49 np0005629333 nova_compute[244014]: 2026-02-25 12:46:49.303 244018 DEBUG nova.compute.manager [req-480441d1-d99b-4bf3-a785-164a60538364 req-93853a36-7306-44b0-af82-4d45299ecfc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received event network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:46:49 np0005629333 nova_compute[244014]: 2026-02-25 12:46:49.304 244018 DEBUG oslo_concurrency.lockutils [req-480441d1-d99b-4bf3-a785-164a60538364 req-93853a36-7306-44b0-af82-4d45299ecfc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:46:49 np0005629333 nova_compute[244014]: 2026-02-25 12:46:49.304 244018 DEBUG oslo_concurrency.lockutils [req-480441d1-d99b-4bf3-a785-164a60538364 req-93853a36-7306-44b0-af82-4d45299ecfc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:46:49 np0005629333 nova_compute[244014]: 2026-02-25 12:46:49.305 244018 DEBUG oslo_concurrency.lockutils [req-480441d1-d99b-4bf3-a785-164a60538364 req-93853a36-7306-44b0-af82-4d45299ecfc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:46:49 np0005629333 nova_compute[244014]: 2026-02-25 12:46:49.305 244018 DEBUG nova.compute.manager [req-480441d1-d99b-4bf3-a785-164a60538364 req-93853a36-7306-44b0-af82-4d45299ecfc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] No waiting events found dispatching network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:46:49 np0005629333 nova_compute[244014]: 2026-02-25 12:46:49.306 244018 WARNING nova.compute.manager [req-480441d1-d99b-4bf3-a785-164a60538364 req-93853a36-7306-44b0-af82-4d45299ecfc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received unexpected event network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:46:50 np0005629333 nova_compute[244014]: 2026-02-25 12:46:50.241 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:50 np0005629333 nova_compute[244014]: 2026-02-25 12:46:50.269 244018 INFO nova.compute.manager [None req-616fa0f6-5322-435f-958a-54738a92ad05 fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Get console output#033[00m
Feb 25 07:46:50 np0005629333 nova_compute[244014]: 2026-02-25 12:46:50.276 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Feb 25 07:46:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 12 KiB/s wr, 2 op/s
Feb 25 07:46:50 np0005629333 nova_compute[244014]: 2026-02-25 12:46:50.474 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:50 np0005629333 nova_compute[244014]: 2026-02-25 12:46:50.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:46:50 np0005629333 nova_compute[244014]: 2026-02-25 12:46:50.876 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:46:50 np0005629333 nova_compute[244014]: 2026-02-25 12:46:50.877 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:46:50 np0005629333 nova_compute[244014]: 2026-02-25 12:46:50.878 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:46:50 np0005629333 nova_compute[244014]: 2026-02-25 12:46:50.878 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:46:50 np0005629333 nova_compute[244014]: 2026-02-25 12:46:50.878 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:46:50 np0005629333 nova_compute[244014]: 2026-02-25 12:46:50.878 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:46:50 np0005629333 nova_compute[244014]: 2026-02-25 12:46:50.909 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Feb 25 07:46:50 np0005629333 nova_compute[244014]: 2026-02-25 12:46:50.937 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Feb 25 07:46:50 np0005629333 nova_compute[244014]: 2026-02-25 12:46:50.938 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Image id c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6 yields fingerprint a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Feb 25 07:46:50 np0005629333 nova_compute[244014]: 2026-02-25 12:46:50.938 244018 INFO nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] image c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6 at (/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6): checking#033[00m
Feb 25 07:46:50 np0005629333 nova_compute[244014]: 2026-02-25 12:46:50.938 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] image c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6 at (/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279#033[00m
Feb 25 07:46:50 np0005629333 nova_compute[244014]: 2026-02-25 12:46:50.940 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Feb 25 07:46:50 np0005629333 nova_compute[244014]: 2026-02-25 12:46:50.941 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] 709a8b15-83eb-45f4-b681-c150ad270e01 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Feb 25 07:46:50 np0005629333 nova_compute[244014]: 2026-02-25 12:46:50.941 244018 WARNING nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Unknown base file: /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538#033[00m
Feb 25 07:46:50 np0005629333 nova_compute[244014]: 2026-02-25 12:46:50.941 244018 INFO nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Active base files: /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6#033[00m
Feb 25 07:46:50 np0005629333 nova_compute[244014]: 2026-02-25 12:46:50.941 244018 INFO nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Removable base files: /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538#033[00m
Feb 25 07:46:50 np0005629333 nova_compute[244014]: 2026-02-25 12:46:50.942 244018 INFO nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538#033[00m
Feb 25 07:46:50 np0005629333 nova_compute[244014]: 2026-02-25 12:46:50.942 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Feb 25 07:46:50 np0005629333 nova_compute[244014]: 2026-02-25 12:46:50.942 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Feb 25 07:46:50 np0005629333 nova_compute[244014]: 2026-02-25 12:46:50.943 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Feb 25 07:46:50 np0005629333 nova_compute[244014]: 2026-02-25 12:46:50.943 244018 INFO nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.494 244018 DEBUG nova.compute.manager [req-559ad663-1a02-4838-aa56-c77b0a15021a req-260de867-0e07-42be-afd5-3e2f0cb7e5ca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received event network-changed-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.495 244018 DEBUG nova.compute.manager [req-559ad663-1a02-4838-aa56-c77b0a15021a req-260de867-0e07-42be-afd5-3e2f0cb7e5ca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Refreshing instance network info cache due to event network-changed-a0f62675-5535-4e75-98f6-dc4fbadbc4c9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.495 244018 DEBUG oslo_concurrency.lockutils [req-559ad663-1a02-4838-aa56-c77b0a15021a req-260de867-0e07-42be-afd5-3e2f0cb7e5ca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.496 244018 DEBUG oslo_concurrency.lockutils [req-559ad663-1a02-4838-aa56-c77b0a15021a req-260de867-0e07-42be-afd5-3e2f0cb7e5ca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.496 244018 DEBUG nova.network.neutron [req-559ad663-1a02-4838-aa56-c77b0a15021a req-260de867-0e07-42be-afd5-3e2f0cb7e5ca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Refreshing network info cache for port a0f62675-5535-4e75-98f6-dc4fbadbc4c9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.536 244018 DEBUG oslo_concurrency.lockutils [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "709a8b15-83eb-45f4-b681-c150ad270e01" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.537 244018 DEBUG oslo_concurrency.lockutils [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.537 244018 DEBUG oslo_concurrency.lockutils [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.538 244018 DEBUG oslo_concurrency.lockutils [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.538 244018 DEBUG oslo_concurrency.lockutils [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.540 244018 INFO nova.compute.manager [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Terminating instance#033[00m
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.541 244018 DEBUG nova.compute.manager [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:46:51 np0005629333 kernel: tapa0f62675-55 (unregistering): left promiscuous mode
Feb 25 07:46:51 np0005629333 NetworkManager[49836]: <info>  [1772023611.5898] device (tapa0f62675-55): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.590 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.597 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:51 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:51Z|01201|binding|INFO|Releasing lport a0f62675-5535-4e75-98f6-dc4fbadbc4c9 from this chassis (sb_readonly=0)
Feb 25 07:46:51 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:51Z|01202|binding|INFO|Setting lport a0f62675-5535-4e75-98f6-dc4fbadbc4c9 down in Southbound
Feb 25 07:46:51 np0005629333 ovn_controller[147040]: 2026-02-25T12:46:51Z|01203|binding|INFO|Removing iface tapa0f62675-55 ovn-installed in OVS
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.599 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:51.605 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:b0:61 10.100.0.7'], port_security=['fa:16:3e:56:b0:61 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '709a8b15-83eb-45f4-b681-c150ad270e01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6821a6e7edd54dbe97920b79aae8f54c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6cab933e-ec3e-4cce-9e9e-8098a6cc3a83', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=68434b74-2de5-46be-a5a4-5cead97a1427, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=a0f62675-5535-4e75-98f6-dc4fbadbc4c9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:46:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:51.606 157129 INFO neutron.agent.ovn.metadata.agent [-] Port a0f62675-5535-4e75-98f6-dc4fbadbc4c9 in datapath bf745dc4-78d4-4d88-8794-da49c21b9f38 unbound from our chassis#033[00m
Feb 25 07:46:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:51.607 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bf745dc4-78d4-4d88-8794-da49c21b9f38, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.608 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:51.608 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1bdf4988-4bd6-47f0-812a-cb044947014b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:51.609 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38 namespace which is not needed anymore#033[00m
Feb 25 07:46:51 np0005629333 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000074.scope: Deactivated successfully.
Feb 25 07:46:51 np0005629333 systemd-machined[210048]: Machine qemu-148-instance-00000074 terminated.
Feb 25 07:46:51 np0005629333 neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38[348805]: [NOTICE]   (348809) : haproxy version is 2.8.14-c23fe91
Feb 25 07:46:51 np0005629333 neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38[348805]: [NOTICE]   (348809) : path to executable is /usr/sbin/haproxy
Feb 25 07:46:51 np0005629333 neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38[348805]: [WARNING]  (348809) : Exiting Master process...
Feb 25 07:46:51 np0005629333 neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38[348805]: [ALERT]    (348809) : Current worker (348811) exited with code 143 (Terminated)
Feb 25 07:46:51 np0005629333 neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38[348805]: [WARNING]  (348809) : All workers exited. Exiting... (0)
Feb 25 07:46:51 np0005629333 systemd[1]: libpod-930373a23354dc6d682075b9cddf8a0658967699adde4861174bb9a454683329.scope: Deactivated successfully.
Feb 25 07:46:51 np0005629333 podman[348844]: 2026-02-25 12:46:51.740525694 +0000 UTC m=+0.050356003 container died 930373a23354dc6d682075b9cddf8a0658967699adde4861174bb9a454683329 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 07:46:51 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e45e31c374f7f8967ca524d1b85ad5a63827aeadab1c7beb276d1f90b01fde07-merged.mount: Deactivated successfully.
Feb 25 07:46:51 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-930373a23354dc6d682075b9cddf8a0658967699adde4861174bb9a454683329-userdata-shm.mount: Deactivated successfully.
Feb 25 07:46:51 np0005629333 podman[348844]: 2026-02-25 12:46:51.774457772 +0000 UTC m=+0.084288081 container cleanup 930373a23354dc6d682075b9cddf8a0658967699adde4861174bb9a454683329 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.781 244018 INFO nova.virt.libvirt.driver [-] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Instance destroyed successfully.#033[00m
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.781 244018 DEBUG nova.objects.instance [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lazy-loading 'resources' on Instance uuid 709a8b15-83eb-45f4-b681-c150ad270e01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:46:51 np0005629333 systemd[1]: libpod-conmon-930373a23354dc6d682075b9cddf8a0658967699adde4861174bb9a454683329.scope: Deactivated successfully.
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.815 244018 DEBUG nova.virt.libvirt.vif [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:46:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-742082713',display_name='tempest-TestNetworkAdvancedServerOps-server-742082713',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-742082713',id=116,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB+I5tZfRy0ovwNPnrrYTneQbLw5hVnQnH/IanoLiHVduiItxLS5mgbnypXYZj3B9Q1TA9okLBB9xUXwMoUUkuZoJAUYr2FIjfbOIXstrZLpgrCV3aVFIh2fnMujkQ9rZw==',key_name='tempest-TestNetworkAdvancedServerOps-1976579647',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:46:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6821a6e7edd54dbe97920b79aae8f54c',ramdisk_id='',reservation_id='r-m0ykjdnm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1424801157',owner_user_name='tempest-TestNetworkAdvancedServerOps-1424801157-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:46:47Z,user_data=None,user_id='fb37a481eb114226822ed8b2ef4f9a89',uuid=709a8b15-83eb-45f4-b681-c150ad270e01,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.816 244018 DEBUG nova.network.os_vif_util [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converting VIF {"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.816 244018 DEBUG nova.network.os_vif_util [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:b0:61,bridge_name='br-int',has_traffic_filtering=True,id=a0f62675-5535-4e75-98f6-dc4fbadbc4c9,network=Network(bf745dc4-78d4-4d88-8794-da49c21b9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f62675-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.816 244018 DEBUG os_vif [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:b0:61,bridge_name='br-int',has_traffic_filtering=True,id=a0f62675-5535-4e75-98f6-dc4fbadbc4c9,network=Network(bf745dc4-78d4-4d88-8794-da49c21b9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f62675-55') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.818 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.818 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0f62675-55, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.820 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.822 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.822 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.824 244018 INFO os_vif [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:b0:61,bridge_name='br-int',has_traffic_filtering=True,id=a0f62675-5535-4e75-98f6-dc4fbadbc4c9,network=Network(bf745dc4-78d4-4d88-8794-da49c21b9f38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0f62675-55')#033[00m
Feb 25 07:46:51 np0005629333 podman[348884]: 2026-02-25 12:46:51.848588525 +0000 UTC m=+0.053623625 container remove 930373a23354dc6d682075b9cddf8a0658967699adde4861174bb9a454683329 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:46:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:51.856 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7bfe1fd0-7b57-44da-bf33-c4555e1a8bf6]: (4, ('Wed Feb 25 12:46:51 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38 (930373a23354dc6d682075b9cddf8a0658967699adde4861174bb9a454683329)\n930373a23354dc6d682075b9cddf8a0658967699adde4861174bb9a454683329\nWed Feb 25 12:46:51 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38 (930373a23354dc6d682075b9cddf8a0658967699adde4861174bb9a454683329)\n930373a23354dc6d682075b9cddf8a0658967699adde4861174bb9a454683329\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:51.858 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0381fc3f-742f-4874-987f-52731057a707]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:51.859 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf745dc4-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:46:51 np0005629333 kernel: tapbf745dc4-70: left promiscuous mode
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.861 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:51 np0005629333 nova_compute[244014]: 2026-02-25 12:46:51.869 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:46:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:51.872 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[73fcaae2-4ac5-4a82-9037-3306e1c7f514]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:51.886 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[efed83ec-18f1-4be8-a25b-38f42df37edf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:51.887 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6951cb9b-4fd7-4041-9488-f977e01b7a0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:51.903 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0a77d852-b25e-46fc-bd9a-ac9b1c8907cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557647, 'reachable_time': 32557, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348918, 'error': None, 'target': 'ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:51 np0005629333 systemd[1]: run-netns-ovnmeta\x2dbf745dc4\x2d78d4\x2d4d88\x2d8794\x2dda49c21b9f38.mount: Deactivated successfully.
Feb 25 07:46:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:51.907 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bf745dc4-78d4-4d88-8794-da49c21b9f38 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:46:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:51.907 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[18bfc7fa-0fbd-4acd-bc1e-892a78790af9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:46:52 np0005629333 nova_compute[244014]: 2026-02-25 12:46:52.115 244018 INFO nova.virt.libvirt.driver [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Deleting instance files /var/lib/nova/instances/709a8b15-83eb-45f4-b681-c150ad270e01_del#033[00m
Feb 25 07:46:52 np0005629333 nova_compute[244014]: 2026-02-25 12:46:52.116 244018 INFO nova.virt.libvirt.driver [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Deletion of /var/lib/nova/instances/709a8b15-83eb-45f4-b681-c150ad270e01_del complete#033[00m
Feb 25 07:46:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 305 active+clean; 191 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 13 KiB/s wr, 17 op/s
Feb 25 07:46:52 np0005629333 nova_compute[244014]: 2026-02-25 12:46:52.606 244018 INFO nova.compute.manager [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Took 1.06 seconds to destroy the instance on the hypervisor.
Feb 25 07:46:52 np0005629333 nova_compute[244014]: 2026-02-25 12:46:52.606 244018 DEBUG oslo.service.loopingcall [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 07:46:52 np0005629333 nova_compute[244014]: 2026-02-25 12:46:52.607 244018 DEBUG nova.compute.manager [-] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 07:46:52 np0005629333 nova_compute[244014]: 2026-02-25 12:46:52.607 244018 DEBUG nova.network.neutron [-] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 07:46:53 np0005629333 nova_compute[244014]: 2026-02-25 12:46:53.193 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:46:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:46:53 np0005629333 nova_compute[244014]: 2026-02-25 12:46:53.650 244018 DEBUG nova.compute.manager [req-de5fc72d-c19c-4684-ad11-b469bae6546e req-2a3ce887-b162-4872-a3bc-32c7a29aebd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received event network-vif-unplugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:46:53 np0005629333 nova_compute[244014]: 2026-02-25 12:46:53.651 244018 DEBUG oslo_concurrency.lockutils [req-de5fc72d-c19c-4684-ad11-b469bae6546e req-2a3ce887-b162-4872-a3bc-32c7a29aebd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:46:53 np0005629333 nova_compute[244014]: 2026-02-25 12:46:53.652 244018 DEBUG oslo_concurrency.lockutils [req-de5fc72d-c19c-4684-ad11-b469bae6546e req-2a3ce887-b162-4872-a3bc-32c7a29aebd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:46:53 np0005629333 nova_compute[244014]: 2026-02-25 12:46:53.652 244018 DEBUG oslo_concurrency.lockutils [req-de5fc72d-c19c-4684-ad11-b469bae6546e req-2a3ce887-b162-4872-a3bc-32c7a29aebd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:46:53 np0005629333 nova_compute[244014]: 2026-02-25 12:46:53.653 244018 DEBUG nova.compute.manager [req-de5fc72d-c19c-4684-ad11-b469bae6546e req-2a3ce887-b162-4872-a3bc-32c7a29aebd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] No waiting events found dispatching network-vif-unplugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:46:53 np0005629333 nova_compute[244014]: 2026-02-25 12:46:53.653 244018 DEBUG nova.compute.manager [req-de5fc72d-c19c-4684-ad11-b469bae6546e req-2a3ce887-b162-4872-a3bc-32c7a29aebd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received event network-vif-unplugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 07:46:53 np0005629333 nova_compute[244014]: 2026-02-25 12:46:53.654 244018 DEBUG nova.compute.manager [req-de5fc72d-c19c-4684-ad11-b469bae6546e req-2a3ce887-b162-4872-a3bc-32c7a29aebd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received event network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:46:53 np0005629333 nova_compute[244014]: 2026-02-25 12:46:53.654 244018 DEBUG oslo_concurrency.lockutils [req-de5fc72d-c19c-4684-ad11-b469bae6546e req-2a3ce887-b162-4872-a3bc-32c7a29aebd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:46:53 np0005629333 nova_compute[244014]: 2026-02-25 12:46:53.655 244018 DEBUG oslo_concurrency.lockutils [req-de5fc72d-c19c-4684-ad11-b469bae6546e req-2a3ce887-b162-4872-a3bc-32c7a29aebd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:46:53 np0005629333 nova_compute[244014]: 2026-02-25 12:46:53.655 244018 DEBUG oslo_concurrency.lockutils [req-de5fc72d-c19c-4684-ad11-b469bae6546e req-2a3ce887-b162-4872-a3bc-32c7a29aebd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:46:53 np0005629333 nova_compute[244014]: 2026-02-25 12:46:53.656 244018 DEBUG nova.compute.manager [req-de5fc72d-c19c-4684-ad11-b469bae6546e req-2a3ce887-b162-4872-a3bc-32c7a29aebd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] No waiting events found dispatching network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:46:53 np0005629333 nova_compute[244014]: 2026-02-25 12:46:53.656 244018 WARNING nova.compute.manager [req-de5fc72d-c19c-4684-ad11-b469bae6546e req-2a3ce887-b162-4872-a3bc-32c7a29aebd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received unexpected event network-vif-plugged-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 for instance with vm_state active and task_state deleting.
Feb 25 07:46:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 153 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 2.5 KiB/s wr, 32 op/s
Feb 25 07:46:54 np0005629333 nova_compute[244014]: 2026-02-25 12:46:54.478 244018 DEBUG nova.network.neutron [-] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:46:54 np0005629333 nova_compute[244014]: 2026-02-25 12:46:54.505 244018 INFO nova.compute.manager [-] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Took 1.90 seconds to deallocate network for instance.
Feb 25 07:46:54 np0005629333 nova_compute[244014]: 2026-02-25 12:46:54.568 244018 DEBUG oslo_concurrency.lockutils [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:46:54 np0005629333 nova_compute[244014]: 2026-02-25 12:46:54.569 244018 DEBUG oslo_concurrency.lockutils [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:46:54 np0005629333 nova_compute[244014]: 2026-02-25 12:46:54.591 244018 DEBUG nova.compute.manager [req-1563e37e-6a6a-4e75-ad68-76d7922bde04 req-75874591-ef59-4401-8c11-494bd9d2f0bd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Received event network-vif-deleted-a0f62675-5535-4e75-98f6-dc4fbadbc4c9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:46:54 np0005629333 nova_compute[244014]: 2026-02-25 12:46:54.643 244018 DEBUG oslo_concurrency.processutils [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:46:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:55.025 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:46:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:55.025 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:46:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:46:55.026 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:46:55 np0005629333 nova_compute[244014]: 2026-02-25 12:46:55.114 244018 DEBUG nova.network.neutron [req-559ad663-1a02-4838-aa56-c77b0a15021a req-260de867-0e07-42be-afd5-3e2f0cb7e5ca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Updated VIF entry in instance network info cache for port a0f62675-5535-4e75-98f6-dc4fbadbc4c9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 07:46:55 np0005629333 nova_compute[244014]: 2026-02-25 12:46:55.115 244018 DEBUG nova.network.neutron [req-559ad663-1a02-4838-aa56-c77b0a15021a req-260de867-0e07-42be-afd5-3e2f0cb7e5ca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Updating instance_info_cache with network_info: [{"id": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "address": "fa:16:3e:56:b0:61", "network": {"id": "bf745dc4-78d4-4d88-8794-da49c21b9f38", "bridge": "br-int", "label": "tempest-network-smoke--1075619721", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6821a6e7edd54dbe97920b79aae8f54c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0f62675-55", "ovs_interfaceid": "a0f62675-5535-4e75-98f6-dc4fbadbc4c9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:46:55 np0005629333 nova_compute[244014]: 2026-02-25 12:46:55.163 244018 DEBUG oslo_concurrency.lockutils [req-559ad663-1a02-4838-aa56-c77b0a15021a req-260de867-0e07-42be-afd5-3e2f0cb7e5ca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-709a8b15-83eb-45f4-b681-c150ad270e01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:46:55 np0005629333 nova_compute[244014]: 2026-02-25 12:46:55.243 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:46:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:46:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2331357453' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:46:55 np0005629333 nova_compute[244014]: 2026-02-25 12:46:55.271 244018 DEBUG oslo_concurrency.processutils [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:46:55 np0005629333 nova_compute[244014]: 2026-02-25 12:46:55.279 244018 DEBUG nova.compute.provider_tree [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:46:55 np0005629333 nova_compute[244014]: 2026-02-25 12:46:55.307 244018 DEBUG nova.scheduler.client.report [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:46:55 np0005629333 nova_compute[244014]: 2026-02-25 12:46:55.329 244018 DEBUG oslo_concurrency.lockutils [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:46:55 np0005629333 nova_compute[244014]: 2026-02-25 12:46:55.374 244018 INFO nova.scheduler.client.report [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Deleted allocations for instance 709a8b15-83eb-45f4-b681-c150ad270e01
Feb 25 07:46:55 np0005629333 nova_compute[244014]: 2026-02-25 12:46:55.465 244018 DEBUG oslo_concurrency.lockutils [None req-a5c97af4-7945-4885-b4a4-d059d874fc9d fb37a481eb114226822ed8b2ef4f9a89 6821a6e7edd54dbe97920b79aae8f54c - - default default] Lock "709a8b15-83eb-45f4-b681-c150ad270e01" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.928s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:46:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 153 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Feb 25 07:46:56 np0005629333 nova_compute[244014]: 2026-02-25 12:46:56.716 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:46:56 np0005629333 nova_compute[244014]: 2026-02-25 12:46:56.819 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:46:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Feb 25 07:46:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:46:58 np0005629333 podman[348942]: 2026-02-25 12:46:58.735303102 +0000 UTC m=+0.064689807 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:46:58 np0005629333 podman[348961]: 2026-02-25 12:46:58.859370716 +0000 UTC m=+0.089707944 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0)
Feb 25 07:46:59 np0005629333 nova_compute[244014]: 2026-02-25 12:46:59.388 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:46:59 np0005629333 nova_compute[244014]: 2026-02-25 12:46:59.455 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:47:00 np0005629333 nova_compute[244014]: 2026-02-25 12:47:00.245 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:47:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Feb 25 07:47:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:47:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:47:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:47:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:47:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:47:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:47:01 np0005629333 nova_compute[244014]: 2026-02-25 12:47:01.822 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:47:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Feb 25 07:47:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:47:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 682 B/s wr, 16 op/s
Feb 25 07:47:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:04.997 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:af:c3 2001:db8:0:1:f816:3eff:fe02:afc3 2001:db8::f816:3eff:fe02:afc3'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe02:afc3/64 2001:db8::f816:3eff:fe02:afc3/64', 'neutron:device_id': 'ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c482202-8994-4033-a0a9-167d92a9e301', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06544f4e-1579-4f9d-8faa-1248478e7fbc, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=8372e6c6-fbbc-48a7-be95-d95e1d2ad95a) old=Port_Binding(mac=['fa:16:3e:02:af:c3 2001:db8::f816:3eff:fe02:afc3'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe02:afc3/64', 'neutron:device_id': 'ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c482202-8994-4033-a0a9-167d92a9e301', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:47:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:04.999 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 8372e6c6-fbbc-48a7-be95-d95e1d2ad95a in datapath 5c482202-8994-4033-a0a9-167d92a9e301 updated
Feb 25 07:47:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:05.000 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c482202-8994-4033-a0a9-167d92a9e301, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 07:47:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:05.001 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4bdd29a4-4bda-42ff-a235-30466cfefc41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:47:05 np0005629333 nova_compute[244014]: 2026-02-25 12:47:05.246 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:47:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Feb 25 07:47:06 np0005629333 nova_compute[244014]: 2026-02-25 12:47:06.775 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023611.7733135, 709a8b15-83eb-45f4-b681-c150ad270e01 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:47:06 np0005629333 nova_compute[244014]: 2026-02-25 12:47:06.775 244018 INFO nova.compute.manager [-] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] VM Stopped (Lifecycle Event)
Feb 25 07:47:06 np0005629333 nova_compute[244014]: 2026-02-25 12:47:06.798 244018 DEBUG nova.compute.manager [None req-48b42e87-fc83-4362-a13b-e46986ce3a5c - - - - - -] [instance: 709a8b15-83eb-45f4-b681-c150ad270e01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:47:06 np0005629333 nova_compute[244014]: 2026-02-25 12:47:06.825 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:47:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:47:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:47:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:47:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:47:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:47:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:47:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:47:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:47:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:47:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:47:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:47:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:47:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Feb 25 07:47:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:47:08 np0005629333 podman[349133]: 2026-02-25 12:47:08.694555217 +0000 UTC m=+0.049798307 container create 36655a96217cf4a90e5f77c24ac5e0535305e870624d4097c3ba6056670e4ddd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_spence, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 07:47:08 np0005629333 systemd[1]: Started libpod-conmon-36655a96217cf4a90e5f77c24ac5e0535305e870624d4097c3ba6056670e4ddd.scope.
Feb 25 07:47:08 np0005629333 podman[349133]: 2026-02-25 12:47:08.675206811 +0000 UTC m=+0.030449941 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:47:08 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:47:08 np0005629333 podman[349133]: 2026-02-25 12:47:08.789973862 +0000 UTC m=+0.145216982 container init 36655a96217cf4a90e5f77c24ac5e0535305e870624d4097c3ba6056670e4ddd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_spence, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:47:08 np0005629333 podman[349133]: 2026-02-25 12:47:08.798838352 +0000 UTC m=+0.154081402 container start 36655a96217cf4a90e5f77c24ac5e0535305e870624d4097c3ba6056670e4ddd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_spence, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:47:08 np0005629333 podman[349133]: 2026-02-25 12:47:08.80267822 +0000 UTC m=+0.157921350 container attach 36655a96217cf4a90e5f77c24ac5e0535305e870624d4097c3ba6056670e4ddd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_spence, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3)
Feb 25 07:47:08 np0005629333 busy_spence[349149]: 167 167
Feb 25 07:47:08 np0005629333 systemd[1]: libpod-36655a96217cf4a90e5f77c24ac5e0535305e870624d4097c3ba6056670e4ddd.scope: Deactivated successfully.
Feb 25 07:47:08 np0005629333 podman[349133]: 2026-02-25 12:47:08.807240159 +0000 UTC m=+0.162483249 container died 36655a96217cf4a90e5f77c24ac5e0535305e870624d4097c3ba6056670e4ddd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_spence, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:47:08 np0005629333 systemd[1]: var-lib-containers-storage-overlay-25893d5e3c5de31391c381596dd8f03ffc64ce915373dca1c2bbf771f0e8f8b7-merged.mount: Deactivated successfully.
Feb 25 07:47:08 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:47:08 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:47:08 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:47:08 np0005629333 podman[349133]: 2026-02-25 12:47:08.85860875 +0000 UTC m=+0.213851810 container remove 36655a96217cf4a90e5f77c24ac5e0535305e870624d4097c3ba6056670e4ddd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_spence, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:47:08 np0005629333 systemd[1]: libpod-conmon-36655a96217cf4a90e5f77c24ac5e0535305e870624d4097c3ba6056670e4ddd.scope: Deactivated successfully.
Feb 25 07:47:09 np0005629333 podman[349173]: 2026-02-25 12:47:09.012549487 +0000 UTC m=+0.060686275 container create 5f8f8302afcd0b6af52ffe2cf492f47593dc6c22ae72028fb50203c55bd1648b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:47:09 np0005629333 systemd[1]: Started libpod-conmon-5f8f8302afcd0b6af52ffe2cf492f47593dc6c22ae72028fb50203c55bd1648b.scope.
Feb 25 07:47:09 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:47:09 np0005629333 podman[349173]: 2026-02-25 12:47:08.988724474 +0000 UTC m=+0.036861312 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:47:09 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86a4bc497ff16825ab182494acbce1f2d31bbd0d5639f9cacd8bd618053d7844/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:47:09 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86a4bc497ff16825ab182494acbce1f2d31bbd0d5639f9cacd8bd618053d7844/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:47:09 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86a4bc497ff16825ab182494acbce1f2d31bbd0d5639f9cacd8bd618053d7844/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:47:09 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86a4bc497ff16825ab182494acbce1f2d31bbd0d5639f9cacd8bd618053d7844/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:47:09 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86a4bc497ff16825ab182494acbce1f2d31bbd0d5639f9cacd8bd618053d7844/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:47:09 np0005629333 podman[349173]: 2026-02-25 12:47:09.113316942 +0000 UTC m=+0.161453790 container init 5f8f8302afcd0b6af52ffe2cf492f47593dc6c22ae72028fb50203c55bd1648b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kowalevski, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:47:09 np0005629333 podman[349173]: 2026-02-25 12:47:09.120775063 +0000 UTC m=+0.168911871 container start 5f8f8302afcd0b6af52ffe2cf492f47593dc6c22ae72028fb50203c55bd1648b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kowalevski, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:47:09 np0005629333 podman[349173]: 2026-02-25 12:47:09.132410931 +0000 UTC m=+0.180547709 container attach 5f8f8302afcd0b6af52ffe2cf492f47593dc6c22ae72028fb50203c55bd1648b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 07:47:09 np0005629333 funny_kowalevski[349190]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:47:09 np0005629333 funny_kowalevski[349190]: --> All data devices are unavailable
Feb 25 07:47:09 np0005629333 systemd[1]: libpod-5f8f8302afcd0b6af52ffe2cf492f47593dc6c22ae72028fb50203c55bd1648b.scope: Deactivated successfully.
Feb 25 07:47:09 np0005629333 podman[349173]: 2026-02-25 12:47:09.575105771 +0000 UTC m=+0.623242599 container died 5f8f8302afcd0b6af52ffe2cf492f47593dc6c22ae72028fb50203c55bd1648b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kowalevski, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:47:09 np0005629333 systemd[1]: var-lib-containers-storage-overlay-86a4bc497ff16825ab182494acbce1f2d31bbd0d5639f9cacd8bd618053d7844-merged.mount: Deactivated successfully.
Feb 25 07:47:09 np0005629333 podman[349173]: 2026-02-25 12:47:09.616380627 +0000 UTC m=+0.664517405 container remove 5f8f8302afcd0b6af52ffe2cf492f47593dc6c22ae72028fb50203c55bd1648b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 07:47:09 np0005629333 systemd[1]: libpod-conmon-5f8f8302afcd0b6af52ffe2cf492f47593dc6c22ae72028fb50203c55bd1648b.scope: Deactivated successfully.
Feb 25 07:47:10 np0005629333 podman[349282]: 2026-02-25 12:47:10.048710555 +0000 UTC m=+0.059109940 container create 193743b9394d04f5dd3750b5ab8898308ca1235d6f3164a01885792bd733b806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 07:47:10 np0005629333 systemd[1]: Started libpod-conmon-193743b9394d04f5dd3750b5ab8898308ca1235d6f3164a01885792bd733b806.scope.
Feb 25 07:47:10 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:47:10 np0005629333 podman[349282]: 2026-02-25 12:47:10.025200511 +0000 UTC m=+0.035599946 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:47:10 np0005629333 podman[349282]: 2026-02-25 12:47:10.122757805 +0000 UTC m=+0.133157190 container init 193743b9394d04f5dd3750b5ab8898308ca1235d6f3164a01885792bd733b806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_chebyshev, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:47:10 np0005629333 podman[349282]: 2026-02-25 12:47:10.129701071 +0000 UTC m=+0.140100446 container start 193743b9394d04f5dd3750b5ab8898308ca1235d6f3164a01885792bd733b806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_chebyshev, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 07:47:10 np0005629333 podman[349282]: 2026-02-25 12:47:10.133672984 +0000 UTC m=+0.144072389 container attach 193743b9394d04f5dd3750b5ab8898308ca1235d6f3164a01885792bd733b806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_chebyshev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:47:10 np0005629333 practical_chebyshev[349298]: 167 167
Feb 25 07:47:10 np0005629333 systemd[1]: libpod-193743b9394d04f5dd3750b5ab8898308ca1235d6f3164a01885792bd733b806.scope: Deactivated successfully.
Feb 25 07:47:10 np0005629333 podman[349282]: 2026-02-25 12:47:10.135439323 +0000 UTC m=+0.145838708 container died 193743b9394d04f5dd3750b5ab8898308ca1235d6f3164a01885792bd733b806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_chebyshev, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 07:47:10 np0005629333 systemd[1]: var-lib-containers-storage-overlay-44cc39a9e288d7bb7961ce1333a461e57e179402e881480add0b6a8fb58295d4-merged.mount: Deactivated successfully.
Feb 25 07:47:10 np0005629333 podman[349282]: 2026-02-25 12:47:10.178110268 +0000 UTC m=+0.188509663 container remove 193743b9394d04f5dd3750b5ab8898308ca1235d6f3164a01885792bd733b806 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:47:10 np0005629333 systemd[1]: libpod-conmon-193743b9394d04f5dd3750b5ab8898308ca1235d6f3164a01885792bd733b806.scope: Deactivated successfully.
Feb 25 07:47:10 np0005629333 nova_compute[244014]: 2026-02-25 12:47:10.248 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:47:10 np0005629333 podman[349321]: 2026-02-25 12:47:10.3442629 +0000 UTC m=+0.042401488 container create 960f0038eb94d6188622d7ef25fff7bd9e38a6c4ce14395dfb69f54342cd4974 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_blackwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 07:47:10 np0005629333 systemd[1]: Started libpod-conmon-960f0038eb94d6188622d7ef25fff7bd9e38a6c4ce14395dfb69f54342cd4974.scope.
Feb 25 07:47:10 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:47:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52b70f7e7e1423533accac71a11060e8d645cd0131c858ef9db8d57f837fb120/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:47:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52b70f7e7e1423533accac71a11060e8d645cd0131c858ef9db8d57f837fb120/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:47:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52b70f7e7e1423533accac71a11060e8d645cd0131c858ef9db8d57f837fb120/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:47:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52b70f7e7e1423533accac71a11060e8d645cd0131c858ef9db8d57f837fb120/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:47:10 np0005629333 podman[349321]: 2026-02-25 12:47:10.323175545 +0000 UTC m=+0.021314183 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:47:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:47:10 np0005629333 podman[349321]: 2026-02-25 12:47:10.437780711 +0000 UTC m=+0.135919309 container init 960f0038eb94d6188622d7ef25fff7bd9e38a6c4ce14395dfb69f54342cd4974 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Feb 25 07:47:10 np0005629333 podman[349321]: 2026-02-25 12:47:10.442480383 +0000 UTC m=+0.140618981 container start 960f0038eb94d6188622d7ef25fff7bd9e38a6c4ce14395dfb69f54342cd4974 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_blackwell, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:47:10 np0005629333 podman[349321]: 2026-02-25 12:47:10.446087535 +0000 UTC m=+0.144226113 container attach 960f0038eb94d6188622d7ef25fff7bd9e38a6c4ce14395dfb69f54342cd4974 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]: {
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:    "0": [
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:        {
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "devices": [
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "/dev/loop3"
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            ],
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "lv_name": "ceph_lv0",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "lv_size": "21470642176",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "name": "ceph_lv0",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "tags": {
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.cluster_name": "ceph",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.crush_device_class": "",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.encrypted": "0",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.objectstore": "bluestore",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.osd_id": "0",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.type": "block",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.vdo": "0",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.with_tpm": "0"
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            },
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "type": "block",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "vg_name": "ceph_vg0"
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:        }
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:    ],
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:    "1": [
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:        {
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "devices": [
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "/dev/loop4"
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            ],
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "lv_name": "ceph_lv1",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "lv_size": "21470642176",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "name": "ceph_lv1",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "tags": {
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.cluster_name": "ceph",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.crush_device_class": "",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.encrypted": "0",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.objectstore": "bluestore",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.osd_id": "1",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.type": "block",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.vdo": "0",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.with_tpm": "0"
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            },
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "type": "block",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "vg_name": "ceph_vg1"
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:        }
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:    ],
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:    "2": [
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:        {
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "devices": [
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "/dev/loop5"
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            ],
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "lv_name": "ceph_lv2",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "lv_size": "21470642176",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "name": "ceph_lv2",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "tags": {
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.cluster_name": "ceph",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.crush_device_class": "",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.encrypted": "0",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.objectstore": "bluestore",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.osd_id": "2",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.type": "block",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.vdo": "0",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:                "ceph.with_tpm": "0"
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            },
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "type": "block",
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:            "vg_name": "ceph_vg2"
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:        }
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]:    ]
Feb 25 07:47:10 np0005629333 boring_blackwell[349337]: }
Feb 25 07:47:10 np0005629333 systemd[1]: libpod-960f0038eb94d6188622d7ef25fff7bd9e38a6c4ce14395dfb69f54342cd4974.scope: Deactivated successfully.
Feb 25 07:47:10 np0005629333 podman[349321]: 2026-02-25 12:47:10.692147473 +0000 UTC m=+0.390286091 container died 960f0038eb94d6188622d7ef25fff7bd9e38a6c4ce14395dfb69f54342cd4974 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_blackwell, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:47:10 np0005629333 systemd[1]: var-lib-containers-storage-overlay-52b70f7e7e1423533accac71a11060e8d645cd0131c858ef9db8d57f837fb120-merged.mount: Deactivated successfully.
Feb 25 07:47:10 np0005629333 podman[349321]: 2026-02-25 12:47:10.744236414 +0000 UTC m=+0.442374992 container remove 960f0038eb94d6188622d7ef25fff7bd9e38a6c4ce14395dfb69f54342cd4974 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 07:47:10 np0005629333 systemd[1]: libpod-conmon-960f0038eb94d6188622d7ef25fff7bd9e38a6c4ce14395dfb69f54342cd4974.scope: Deactivated successfully.
Feb 25 07:47:11 np0005629333 podman[349420]: 2026-02-25 12:47:11.166772264 +0000 UTC m=+0.051834695 container create d31d3051f8ff1895d9131726e97b6537d80041e87508edbfc8f42c15ef14859c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_keldysh, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 07:47:11 np0005629333 systemd[1]: Started libpod-conmon-d31d3051f8ff1895d9131726e97b6537d80041e87508edbfc8f42c15ef14859c.scope.
Feb 25 07:47:11 np0005629333 podman[349420]: 2026-02-25 12:47:11.138899787 +0000 UTC m=+0.023962298 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:47:11 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:47:11 np0005629333 podman[349420]: 2026-02-25 12:47:11.255756207 +0000 UTC m=+0.140818708 container init d31d3051f8ff1895d9131726e97b6537d80041e87508edbfc8f42c15ef14859c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_keldysh, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 07:47:11 np0005629333 podman[349420]: 2026-02-25 12:47:11.266735907 +0000 UTC m=+0.151798358 container start d31d3051f8ff1895d9131726e97b6537d80041e87508edbfc8f42c15ef14859c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:47:11 np0005629333 crazy_keldysh[349437]: 167 167
Feb 25 07:47:11 np0005629333 podman[349420]: 2026-02-25 12:47:11.271516192 +0000 UTC m=+0.156578703 container attach d31d3051f8ff1895d9131726e97b6537d80041e87508edbfc8f42c15ef14859c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_keldysh, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:47:11 np0005629333 systemd[1]: libpod-d31d3051f8ff1895d9131726e97b6537d80041e87508edbfc8f42c15ef14859c.scope: Deactivated successfully.
Feb 25 07:47:11 np0005629333 podman[349420]: 2026-02-25 12:47:11.273047445 +0000 UTC m=+0.158109896 container died d31d3051f8ff1895d9131726e97b6537d80041e87508edbfc8f42c15ef14859c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:47:11 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7abcd0643aba3dc235464b0e316dac9e32ae06a28a2da594dd1a3abd66b2ab68-merged.mount: Deactivated successfully.
Feb 25 07:47:11 np0005629333 podman[349420]: 2026-02-25 12:47:11.3128966 +0000 UTC m=+0.197959021 container remove d31d3051f8ff1895d9131726e97b6537d80041e87508edbfc8f42c15ef14859c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_keldysh, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 07:47:11 np0005629333 systemd[1]: libpod-conmon-d31d3051f8ff1895d9131726e97b6537d80041e87508edbfc8f42c15ef14859c.scope: Deactivated successfully.
Feb 25 07:47:11 np0005629333 nova_compute[244014]: 2026-02-25 12:47:11.392 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "848fd033-0ebb-460a-a8a0-56583fa5f481" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:47:11 np0005629333 nova_compute[244014]: 2026-02-25 12:47:11.394 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:47:11 np0005629333 nova_compute[244014]: 2026-02-25 12:47:11.419 244018 DEBUG nova.compute.manager [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:47:11 np0005629333 nova_compute[244014]: 2026-02-25 12:47:11.502 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:47:11 np0005629333 nova_compute[244014]: 2026-02-25 12:47:11.503 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:47:11 np0005629333 nova_compute[244014]: 2026-02-25 12:47:11.512 244018 DEBUG nova.virt.hardware [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:47:11 np0005629333 nova_compute[244014]: 2026-02-25 12:47:11.512 244018 INFO nova.compute.claims [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:47:11 np0005629333 podman[349461]: 2026-02-25 12:47:11.473854495 +0000 UTC m=+0.039237919 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:47:11 np0005629333 nova_compute[244014]: 2026-02-25 12:47:11.627 244018 DEBUG oslo_concurrency.processutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:47:12 np0005629333 nova_compute[244014]: 2026-02-25 12:47:12.304 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:47:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:47:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/176965018' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:47:12 np0005629333 nova_compute[244014]: 2026-02-25 12:47:12.327 244018 DEBUG oslo_concurrency.processutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.699s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:47:12 np0005629333 nova_compute[244014]: 2026-02-25 12:47:12.333 244018 DEBUG nova.compute.provider_tree [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:47:12 np0005629333 podman[349461]: 2026-02-25 12:47:12.338922342 +0000 UTC m=+0.904305806 container create f30424b75af9cd909521153735f5c5bcc11d72663e72de3e5fedcc3bd5c6d5cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_neumann, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:47:12 np0005629333 nova_compute[244014]: 2026-02-25 12:47:12.353 244018 DEBUG nova.scheduler.client.report [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:47:12 np0005629333 nova_compute[244014]: 2026-02-25 12:47:12.401 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:47:12 np0005629333 nova_compute[244014]: 2026-02-25 12:47:12.405 244018 DEBUG nova.compute.manager [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:47:12 np0005629333 systemd[1]: Started libpod-conmon-f30424b75af9cd909521153735f5c5bcc11d72663e72de3e5fedcc3bd5c6d5cf.scope.
Feb 25 07:47:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:47:12 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:47:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068c619ffb0499fad911446ab8b97662cb2cb26e782674bf7558d4704f824621/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:47:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068c619ffb0499fad911446ab8b97662cb2cb26e782674bf7558d4704f824621/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:47:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068c619ffb0499fad911446ab8b97662cb2cb26e782674bf7558d4704f824621/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:47:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068c619ffb0499fad911446ab8b97662cb2cb26e782674bf7558d4704f824621/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:47:12 np0005629333 nova_compute[244014]: 2026-02-25 12:47:12.493 244018 DEBUG nova.compute.manager [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:47:12 np0005629333 nova_compute[244014]: 2026-02-25 12:47:12.494 244018 DEBUG nova.network.neutron [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:47:12 np0005629333 nova_compute[244014]: 2026-02-25 12:47:12.520 244018 INFO nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:47:12 np0005629333 podman[349461]: 2026-02-25 12:47:12.532046175 +0000 UTC m=+1.097429669 container init f30424b75af9cd909521153735f5c5bcc11d72663e72de3e5fedcc3bd5c6d5cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 07:47:12 np0005629333 podman[349461]: 2026-02-25 12:47:12.54143748 +0000 UTC m=+1.106820924 container start f30424b75af9cd909521153735f5c5bcc11d72663e72de3e5fedcc3bd5c6d5cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:47:12 np0005629333 podman[349461]: 2026-02-25 12:47:12.545909077 +0000 UTC m=+1.111292621 container attach f30424b75af9cd909521153735f5c5bcc11d72663e72de3e5fedcc3bd5c6d5cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_neumann, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:47:12 np0005629333 nova_compute[244014]: 2026-02-25 12:47:12.562 244018 DEBUG nova.compute.manager [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:47:12 np0005629333 nova_compute[244014]: 2026-02-25 12:47:12.684 244018 DEBUG nova.compute.manager [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:47:12 np0005629333 nova_compute[244014]: 2026-02-25 12:47:12.686 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:47:12 np0005629333 nova_compute[244014]: 2026-02-25 12:47:12.687 244018 INFO nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Creating image(s)
Feb 25 07:47:12 np0005629333 nova_compute[244014]: 2026-02-25 12:47:12.716 244018 DEBUG nova.storage.rbd_utils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 848fd033-0ebb-460a-a8a0-56583fa5f481_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:47:12 np0005629333 nova_compute[244014]: 2026-02-25 12:47:12.738 244018 DEBUG nova.storage.rbd_utils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 848fd033-0ebb-460a-a8a0-56583fa5f481_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:47:12 np0005629333 nova_compute[244014]: 2026-02-25 12:47:12.773 244018 DEBUG nova.storage.rbd_utils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 848fd033-0ebb-460a-a8a0-56583fa5f481_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:47:12 np0005629333 nova_compute[244014]: 2026-02-25 12:47:12.776 244018 DEBUG oslo_concurrency.processutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:47:12 np0005629333 nova_compute[244014]: 2026-02-25 12:47:12.849 244018 DEBUG oslo_concurrency.processutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:47:12 np0005629333 nova_compute[244014]: 2026-02-25 12:47:12.850 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:47:12 np0005629333 nova_compute[244014]: 2026-02-25 12:47:12.851 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:47:12 np0005629333 nova_compute[244014]: 2026-02-25 12:47:12.851 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:47:12 np0005629333 nova_compute[244014]: 2026-02-25 12:47:12.875 244018 DEBUG nova.storage.rbd_utils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 848fd033-0ebb-460a-a8a0-56583fa5f481_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:47:12 np0005629333 nova_compute[244014]: 2026-02-25 12:47:12.884 244018 DEBUG oslo_concurrency.processutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 848fd033-0ebb-460a-a8a0-56583fa5f481_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:47:12 np0005629333 nova_compute[244014]: 2026-02-25 12:47:12.913 244018 DEBUG nova.policy [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:47:13 np0005629333 lvm[349668]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:47:13 np0005629333 lvm[349668]: VG ceph_vg0 finished
Feb 25 07:47:13 np0005629333 lvm[349669]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:47:13 np0005629333 lvm[349669]: VG ceph_vg1 finished
Feb 25 07:47:13 np0005629333 lvm[349672]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:47:13 np0005629333 lvm[349672]: VG ceph_vg2 finished
Feb 25 07:47:13 np0005629333 lvm[349676]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:47:13 np0005629333 lvm[349676]: VG ceph_vg2 finished
Feb 25 07:47:13 np0005629333 jovial_neumann[349499]: {}
Feb 25 07:47:13 np0005629333 systemd[1]: libpod-f30424b75af9cd909521153735f5c5bcc11d72663e72de3e5fedcc3bd5c6d5cf.scope: Deactivated successfully.
Feb 25 07:47:13 np0005629333 podman[349461]: 2026-02-25 12:47:13.307119221 +0000 UTC m=+1.872502655 container died f30424b75af9cd909521153735f5c5bcc11d72663e72de3e5fedcc3bd5c6d5cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_neumann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 07:47:13 np0005629333 systemd[1]: libpod-f30424b75af9cd909521153735f5c5bcc11d72663e72de3e5fedcc3bd5c6d5cf.scope: Consumed 1.070s CPU time.
Feb 25 07:47:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:47:13 np0005629333 systemd[1]: var-lib-containers-storage-overlay-068c619ffb0499fad911446ab8b97662cb2cb26e782674bf7558d4704f824621-merged.mount: Deactivated successfully.
Feb 25 07:47:13 np0005629333 podman[349461]: 2026-02-25 12:47:13.600860635 +0000 UTC m=+2.166244079 container remove f30424b75af9cd909521153735f5c5bcc11d72663e72de3e5fedcc3bd5c6d5cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 25 07:47:13 np0005629333 nova_compute[244014]: 2026-02-25 12:47:13.608 244018 DEBUG oslo_concurrency.processutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 848fd033-0ebb-460a-a8a0-56583fa5f481_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.723s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:47:13 np0005629333 systemd[1]: libpod-conmon-f30424b75af9cd909521153735f5c5bcc11d72663e72de3e5fedcc3bd5c6d5cf.scope: Deactivated successfully.
Feb 25 07:47:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:47:13 np0005629333 nova_compute[244014]: 2026-02-25 12:47:13.753 244018 DEBUG nova.storage.rbd_utils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image 848fd033-0ebb-460a-a8a0-56583fa5f481_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 07:47:13 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:47:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:47:13 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:47:13 np0005629333 nova_compute[244014]: 2026-02-25 12:47:13.911 244018 DEBUG nova.objects.instance [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid 848fd033-0ebb-460a-a8a0-56583fa5f481 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:47:13 np0005629333 nova_compute[244014]: 2026-02-25 12:47:13.933 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:47:13 np0005629333 nova_compute[244014]: 2026-02-25 12:47:13.934 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Ensure instance console log exists: /var/lib/nova/instances/848fd033-0ebb-460a-a8a0-56583fa5f481/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:47:13 np0005629333 nova_compute[244014]: 2026-02-25 12:47:13.935 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:47:13 np0005629333 nova_compute[244014]: 2026-02-25 12:47:13.936 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:47:13 np0005629333 nova_compute[244014]: 2026-02-25 12:47:13.936 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:47:14 np0005629333 nova_compute[244014]: 2026-02-25 12:47:14.110 244018 DEBUG nova.network.neutron [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Successfully created port: 0358e18d-8ce8-43a7-a8b2-11193708891a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:47:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:47:14 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:47:14 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:47:14 np0005629333 nova_compute[244014]: 2026-02-25 12:47:14.912 244018 DEBUG nova.network.neutron [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Successfully created port: ebd67787-8af9-4a7f-8b38-6d18daff8ff3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:47:15 np0005629333 nova_compute[244014]: 2026-02-25 12:47:15.249 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:47:16 np0005629333 nova_compute[244014]: 2026-02-25 12:47:16.203 244018 DEBUG nova.network.neutron [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Successfully updated port: 0358e18d-8ce8-43a7-a8b2-11193708891a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:47:16 np0005629333 nova_compute[244014]: 2026-02-25 12:47:16.320 244018 DEBUG nova.compute.manager [req-e31c424f-c74b-4a7c-ba23-b004bdc97a02 req-f79e1289-6e0f-4025-9a08-446470691127 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-changed-0358e18d-8ce8-43a7-a8b2-11193708891a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:47:16 np0005629333 nova_compute[244014]: 2026-02-25 12:47:16.321 244018 DEBUG nova.compute.manager [req-e31c424f-c74b-4a7c-ba23-b004bdc97a02 req-f79e1289-6e0f-4025-9a08-446470691127 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Refreshing instance network info cache due to event network-changed-0358e18d-8ce8-43a7-a8b2-11193708891a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:47:16 np0005629333 nova_compute[244014]: 2026-02-25 12:47:16.322 244018 DEBUG oslo_concurrency.lockutils [req-e31c424f-c74b-4a7c-ba23-b004bdc97a02 req-f79e1289-6e0f-4025-9a08-446470691127 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:47:16 np0005629333 nova_compute[244014]: 2026-02-25 12:47:16.322 244018 DEBUG oslo_concurrency.lockutils [req-e31c424f-c74b-4a7c-ba23-b004bdc97a02 req-f79e1289-6e0f-4025-9a08-446470691127 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:47:16 np0005629333 nova_compute[244014]: 2026-02-25 12:47:16.322 244018 DEBUG nova.network.neutron [req-e31c424f-c74b-4a7c-ba23-b004bdc97a02 req-f79e1289-6e0f-4025-9a08-446470691127 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Refreshing network info cache for port 0358e18d-8ce8-43a7-a8b2-11193708891a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:47:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:47:16 np0005629333 nova_compute[244014]: 2026-02-25 12:47:16.585 244018 DEBUG nova.network.neutron [req-e31c424f-c74b-4a7c-ba23-b004bdc97a02 req-f79e1289-6e0f-4025-9a08-446470691127 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:47:17 np0005629333 nova_compute[244014]: 2026-02-25 12:47:17.157 244018 DEBUG nova.network.neutron [req-e31c424f-c74b-4a7c-ba23-b004bdc97a02 req-f79e1289-6e0f-4025-9a08-446470691127 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:47:17 np0005629333 nova_compute[244014]: 2026-02-25 12:47:17.176 244018 DEBUG oslo_concurrency.lockutils [req-e31c424f-c74b-4a7c-ba23-b004bdc97a02 req-f79e1289-6e0f-4025-9a08-446470691127 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:47:17 np0005629333 nova_compute[244014]: 2026-02-25 12:47:17.305 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:47:17 np0005629333 nova_compute[244014]: 2026-02-25 12:47:17.324 244018 DEBUG nova.network.neutron [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Successfully updated port: ebd67787-8af9-4a7f-8b38-6d18daff8ff3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:47:17 np0005629333 nova_compute[244014]: 2026-02-25 12:47:17.340 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:47:17 np0005629333 nova_compute[244014]: 2026-02-25 12:47:17.340 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:47:17 np0005629333 nova_compute[244014]: 2026-02-25 12:47:17.341 244018 DEBUG nova.network.neutron [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:47:17 np0005629333 nova_compute[244014]: 2026-02-25 12:47:17.485 244018 DEBUG nova.network.neutron [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:47:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:47:18 np0005629333 nova_compute[244014]: 2026-02-25 12:47:18.475 244018 DEBUG nova.compute.manager [req-ebf8b40e-5f83-4b19-9d7a-f8da08e16af0 req-d35f7335-851c-4c2e-90c7-20471f8cb512 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-changed-ebd67787-8af9-4a7f-8b38-6d18daff8ff3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:47:18 np0005629333 nova_compute[244014]: 2026-02-25 12:47:18.476 244018 DEBUG nova.compute.manager [req-ebf8b40e-5f83-4b19-9d7a-f8da08e16af0 req-d35f7335-851c-4c2e-90c7-20471f8cb512 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Refreshing instance network info cache due to event network-changed-ebd67787-8af9-4a7f-8b38-6d18daff8ff3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:47:18 np0005629333 nova_compute[244014]: 2026-02-25 12:47:18.476 244018 DEBUG oslo_concurrency.lockutils [req-ebf8b40e-5f83-4b19-9d7a-f8da08e16af0 req-d35f7335-851c-4c2e-90c7-20471f8cb512 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:47:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:47:19 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:19.359 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:05:ae 10.100.0.2 2001:db8::f816:3eff:fe99:5ae'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe99:5ae/64', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=718d24fb-493f-42ce-a4ab-2f64b57cb3ac, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=13855d81-6f9e-45ad-9b2f-8fbca36efa4a) old=Port_Binding(mac=['fa:16:3e:99:05:ae 2001:db8::f816:3eff:fe99:5ae'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe99:5ae/64', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:47:19 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:19.361 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 13855d81-6f9e-45ad-9b2f-8fbca36efa4a in datapath 1002c177-97f1-4286-9217-c8db43f28825 updated#033[00m
Feb 25 07:47:19 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:19.364 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1002c177-97f1-4286-9217-c8db43f28825, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:47:19 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:19.365 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4aa15935-d0d7-4559-86cb-5cee03ef70a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
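The PortBindingUpdatedEvent match above comes from ovsdbapp's row-event machinery; a hedged sketch of how such an event class is declared (simplified from the metadata agent's pattern, constructor details assumed):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Watch for updates to Southbound Port_Binding rows; on a match
            # the agent decides whether to (re)provision or tear down the
            # metadata namespace for the affected datapath.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.event_name = self.__class__.__name__

        def run(self, event, row, old):
            pass  # e.g. reprovision the ovnmeta-<network> namespace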
Feb 25 07:47:19 np0005629333 nova_compute[244014]: 2026-02-25 12:47:19.884 244018 DEBUG nova.network.neutron [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Updating instance_info_cache with network_info: [{"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "address": "fa:16:3e:8e:ea:66", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebd67787-8a", "ovs_interfaceid": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:47:19 np0005629333 nova_compute[244014]: 2026-02-25 12:47:19.910 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:47:19 np0005629333 nova_compute[244014]: 2026-02-25 12:47:19.911 244018 DEBUG nova.compute.manager [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Instance network_info: |[{"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "address": "fa:16:3e:8e:ea:66", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebd67787-8a", "ovs_interfaceid": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:47:19 np0005629333 nova_compute[244014]: 2026-02-25 12:47:19.912 244018 DEBUG oslo_concurrency.lockutils [req-ebf8b40e-5f83-4b19-9d7a-f8da08e16af0 req-d35f7335-851c-4c2e-90c7-20471f8cb512 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:47:19 np0005629333 nova_compute[244014]: 2026-02-25 12:47:19.912 244018 DEBUG nova.network.neutron [req-ebf8b40e-5f83-4b19-9d7a-f8da08e16af0 req-d35f7335-851c-4c2e-90c7-20471f8cb512 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Refreshing network info cache for port ebd67787-8af9-4a7f-8b38-6d18daff8ff3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:47:19 np0005629333 nova_compute[244014]: 2026-02-25 12:47:19.921 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Start _get_guest_xml network_info=[{"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "address": "fa:16:3e:8e:ea:66", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebd67787-8a", "ovs_interfaceid": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None 
Feb 25 07:47:19 np0005629333 nova_compute[244014]: block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:47:19 np0005629333 nova_compute[244014]: 2026-02-25 12:47:19.926 244018 WARNING nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:47:19 np0005629333 nova_compute[244014]: 2026-02-25 12:47:19.933 244018 DEBUG nova.virt.libvirt.host [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:47:19 np0005629333 nova_compute[244014]: 2026-02-25 12:47:19.934 244018 DEBUG nova.virt.libvirt.host [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:47:19 np0005629333 nova_compute[244014]: 2026-02-25 12:47:19.942 244018 DEBUG nova.virt.libvirt.host [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:47:19 np0005629333 nova_compute[244014]: 2026-02-25 12:47:19.943 244018 DEBUG nova.virt.libvirt.host [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:47:19 np0005629333 nova_compute[244014]: 2026-02-25 12:47:19.943 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:47:19 np0005629333 nova_compute[244014]: 2026-02-25 12:47:19.944 244018 DEBUG nova.virt.hardware [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:47:19 np0005629333 nova_compute[244014]: 2026-02-25 12:47:19.945 244018 DEBUG nova.virt.hardware [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:47:19 np0005629333 nova_compute[244014]: 2026-02-25 12:47:19.945 244018 DEBUG nova.virt.hardware [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:47:19 np0005629333 nova_compute[244014]: 2026-02-25 12:47:19.946 244018 DEBUG nova.virt.hardware [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:47:19 np0005629333 nova_compute[244014]: 2026-02-25 12:47:19.946 244018 DEBUG nova.virt.hardware [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:47:19 np0005629333 nova_compute[244014]: 2026-02-25 12:47:19.947 244018 DEBUG nova.virt.hardware [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:47:19 np0005629333 nova_compute[244014]: 2026-02-25 12:47:19.947 244018 DEBUG nova.virt.hardware [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:47:19 np0005629333 nova_compute[244014]: 2026-02-25 12:47:19.948 244018 DEBUG nova.virt.hardware [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:47:19 np0005629333 nova_compute[244014]: 2026-02-25 12:47:19.948 244018 DEBUG nova.virt.hardware [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:47:19 np0005629333 nova_compute[244014]: 2026-02-25 12:47:19.948 244018 DEBUG nova.virt.hardware [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:47:19 np0005629333 nova_compute[244014]: 2026-02-25 12:47:19.949 244018 DEBUG nova.virt.hardware [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:47:19 np0005629333 nova_compute[244014]: 2026-02-25 12:47:19.953 244018 DEBUG oslo_concurrency.processutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:47:20 np0005629333 nova_compute[244014]: 2026-02-25 12:47:20.251 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:47:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1203664412' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:47:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:47:20 np0005629333 nova_compute[244014]: 2026-02-25 12:47:20.456 244018 DEBUG oslo_concurrency.processutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:47:20 np0005629333 nova_compute[244014]: 2026-02-25 12:47:20.476 244018 DEBUG nova.storage.rbd_utils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 848fd033-0ebb-460a-a8a0-56583fa5f481_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:47:20 np0005629333 nova_compute[244014]: 2026-02-25 12:47:20.481 244018 DEBUG oslo_concurrency.processutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:47:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:47:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2838295176' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:47:20 np0005629333 nova_compute[244014]: 2026-02-25 12:47:20.997 244018 DEBUG oslo_concurrency.processutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
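Both `ceph mon dump` invocations above go through oslo.concurrency's processutils wrapper, which logs the command before exec and its exit code and runtime after; an illustrative equivalent call (arguments copied from the log lines):

    from oslo_concurrency import processutils

    # Logs 'Running cmd (subprocess): ...' before the exec and
    # 'CMD "..." returned: 0 in N.NNNs' afterwards, as seen above.
    out, err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')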
Feb 25 07:47:20 np0005629333 nova_compute[244014]: 2026-02-25 12:47:20.999 244018 DEBUG nova.virt.libvirt.vif [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:47:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-289696819',display_name='tempest-TestGettingAddress-server-289696819',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-289696819',id=117,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB69Pr57xlpfPAIOfVFDA83wUDqkiKKz/q2j8EaAao/3/InA7y2axmKr5B1TGfyn/pk4XdqMYpcKTTAq9q/wSlLakw5QjJZuZ8Zodvba1czxOZQjigR2CJ2ZqyN+ZHKUOA==',key_name='tempest-TestGettingAddress-545180732',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-03p0oxty',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:47:12Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=848fd033-0ebb-460a-a8a0-56583fa5f481,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": 
Feb 25 07:47:20 np0005629333 nova_compute[244014]: {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:47:20 np0005629333 nova_compute[244014]: 2026-02-25 12:47:20.999 244018 DEBUG nova.network.os_vif_util [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.000 244018 DEBUG nova.network.os_vif_util [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:1f:00,bridge_name='br-int',has_traffic_filtering=True,id=0358e18d-8ce8-43a7-a8b2-11193708891a,network=Network(d0b6d114-fcb4-4a25-988c-1ee301ef0419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0358e18d-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.001 244018 DEBUG nova.virt.libvirt.vif [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:47:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-289696819',display_name='tempest-TestGettingAddress-server-289696819',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-289696819',id=117,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB69Pr57xlpfPAIOfVFDA83wUDqkiKKz/q2j8EaAao/3/InA7y2axmKr5B1TGfyn/pk4XdqMYpcKTTAq9q/wSlLakw5QjJZuZ8Zodvba1czxOZQjigR2CJ2ZqyN+ZHKUOA==',key_name='tempest-TestGettingAddress-545180732',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-03p0oxty',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:47:12Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=848fd033-0ebb-460a-a8a0-56583fa5f481,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "address": "fa:16:3e:8e:ea:66", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebd67787-8a", "ovs_interfaceid": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.001 244018 DEBUG nova.network.os_vif_util [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "address": "fa:16:3e:8e:ea:66", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebd67787-8a", "ovs_interfaceid": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.001 244018 DEBUG nova.network.os_vif_util [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:ea:66,bridge_name='br-int',has_traffic_filtering=True,id=ebd67787-8af9-4a7f-8b38-6d18daff8ff3,network=Network(5c482202-8994-4033-a0a9-167d92a9e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebd67787-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
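The two "Converted object VIFOpenVSwitch(...)" lines are os-vif versioned objects built from the Neutron port data; a rough sketch of constructing one by hand (field values copied from the log, construction code assumed rather than Nova's converter):

    import os_vif
    from os_vif.objects import vif as vif_obj

    os_vif.initialize()  # load os-vif plugins / register object classes
    vif = vif_obj.VIFOpenVSwitch(
        id='ebd67787-8af9-4a7f-8b38-6d18daff8ff3',
        address='fa:16:3e:8e:ea:66',
        bridge_name='br-int',
        vif_name='tapebd67787-8a',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False)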
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.002 244018 DEBUG nova.objects.instance [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 848fd033-0ebb-460a-a8a0-56583fa5f481 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.022 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:47:21 np0005629333 nova_compute[244014]:  <uuid>848fd033-0ebb-460a-a8a0-56583fa5f481</uuid>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:  <name>instance-00000075</name>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestGettingAddress-server-289696819</nova:name>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:47:19</nova:creationTime>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:47:21 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:        <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:        <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:        <nova:port uuid="0358e18d-8ce8-43a7-a8b2-11193708891a">
Feb 25 07:47:21 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:        <nova:port uuid="ebd67787-8af9-4a7f-8b38-6d18daff8ff3">
Feb 25 07:47:21 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe8e:ea66" ipVersion="6"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe8e:ea66" ipVersion="6"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <entry name="serial">848fd033-0ebb-460a-a8a0-56583fa5f481</entry>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <entry name="uuid">848fd033-0ebb-460a-a8a0-56583fa5f481</entry>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/848fd033-0ebb-460a-a8a0-56583fa5f481_disk">
Feb 25 07:47:21 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:47:21 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/848fd033-0ebb-460a-a8a0-56583fa5f481_disk.config">
Feb 25 07:47:21 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:47:21 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:fd:1f:00"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <target dev="tap0358e18d-8c"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:8e:ea:66"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <target dev="tapebd67787-8a"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/848fd033-0ebb-460a-a8a0-56583fa5f481/console.log" append="off"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:47:21 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:47:21 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:47:21 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:47:21 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
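Once _get_guest_xml returns, the driver hands this XML to libvirt; a minimal hand-rolled equivalent with the libvirt Python bindings (Nova's own spawn path additionally wires up event callbacks and rollback):

    import libvirt

    def define_and_boot(xml):
        conn = libvirt.open('qemu:///system')
        try:
            dom = conn.defineXML(xml)  # persist the domain definition
            dom.create()               # power on (here: instance-00000075)
        finally:
            conn.close()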
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.023 244018 DEBUG nova.compute.manager [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Preparing to wait for external event network-vif-plugged-0358e18d-8ce8-43a7-a8b2-11193708891a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.024 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.024 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.024 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.024 244018 DEBUG nova.compute.manager [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Preparing to wait for external event network-vif-plugged-ebd67787-8af9-4a7f-8b38-6d18daff8ff3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.024 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.025 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.025 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.025 244018 DEBUG nova.virt.libvirt.vif [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:47:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-289696819',display_name='tempest-TestGettingAddress-server-289696819',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-289696819',id=117,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB69Pr57xlpfPAIOfVFDA83wUDqkiKKz/q2j8EaAao/3/InA7y2axmKr5B1TGfyn/pk4XdqMYpcKTTAq9q/wSlLakw5QjJZuZ8Zodvba1czxOZQjigR2CJ2ZqyN+ZHKUOA==',key_name='tempest-TestGettingAddress-545180732',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-03p0oxty',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:47:12Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=848fd033-0ebb-460a-a8a0-56583fa5f481,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.025 244018 DEBUG nova.network.os_vif_util [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.026 244018 DEBUG nova.network.os_vif_util [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:1f:00,bridge_name='br-int',has_traffic_filtering=True,id=0358e18d-8ce8-43a7-a8b2-11193708891a,network=Network(d0b6d114-fcb4-4a25-988c-1ee301ef0419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0358e18d-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.026 244018 DEBUG os_vif [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:1f:00,bridge_name='br-int',has_traffic_filtering=True,id=0358e18d-8ce8-43a7-a8b2-11193708891a,network=Network(d0b6d114-fcb4-4a25-988c-1ee301ef0419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0358e18d-8c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.027 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.027 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.027 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.030 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.030 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0358e18d-8c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.030 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0358e18d-8c, col_values=(('external_ids', {'iface-id': '0358e18d-8ce8-43a7-a8b2-11193708891a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fd:1f:00', 'vm-uuid': '848fd033-0ebb-460a-a8a0-56583fa5f481'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.032 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:21 np0005629333 NetworkManager[49836]: <info>  [1772023641.0338] manager: (tap0358e18d-8c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/498)
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.034 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.038 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.038 244018 INFO os_vif [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:1f:00,bridge_name='br-int',has_traffic_filtering=True,id=0358e18d-8ce8-43a7-a8b2-11193708891a,network=Network(d0b6d114-fcb4-4a25-988c-1ee301ef0419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0358e18d-8c')#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.039 244018 DEBUG nova.virt.libvirt.vif [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:47:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-289696819',display_name='tempest-TestGettingAddress-server-289696819',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-289696819',id=117,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB69Pr57xlpfPAIOfVFDA83wUDqkiKKz/q2j8EaAao/3/InA7y2axmKr5B1TGfyn/pk4XdqMYpcKTTAq9q/wSlLakw5QjJZuZ8Zodvba1czxOZQjigR2CJ2ZqyN+ZHKUOA==',key_name='tempest-TestGettingAddress-545180732',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-03p0oxty',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:47:12Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=848fd033-0ebb-460a-a8a0-56583fa5f481,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "address": "fa:16:3e:8e:ea:66", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebd67787-8a", "ovs_interfaceid": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.040 244018 DEBUG nova.network.os_vif_util [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "address": "fa:16:3e:8e:ea:66", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebd67787-8a", "ovs_interfaceid": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.041 244018 DEBUG nova.network.os_vif_util [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:ea:66,bridge_name='br-int',has_traffic_filtering=True,id=ebd67787-8af9-4a7f-8b38-6d18daff8ff3,network=Network(5c482202-8994-4033-a0a9-167d92a9e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebd67787-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.041 244018 DEBUG os_vif [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:ea:66,bridge_name='br-int',has_traffic_filtering=True,id=ebd67787-8af9-4a7f-8b38-6d18daff8ff3,network=Network(5c482202-8994-4033-a0a9-167d92a9e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebd67787-8a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.042 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.042 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.042 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.045 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.045 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebd67787-8a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.047 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapebd67787-8a, col_values=(('external_ids', {'iface-id': 'ebd67787-8af9-4a7f-8b38-6d18daff8ff3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8e:ea:66', 'vm-uuid': '848fd033-0ebb-460a-a8a0-56583fa5f481'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.048 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:21 np0005629333 NetworkManager[49836]: <info>  [1772023641.0497] manager: (tapebd67787-8a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/499)
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.050 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.056 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.057 244018 INFO os_vif [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:ea:66,bridge_name='br-int',has_traffic_filtering=True,id=ebd67787-8af9-4a7f-8b38-6d18daff8ff3,network=Network(5c482202-8994-4033-a0a9-167d92a9e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebd67787-8a')#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.121 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.122 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.122 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:fd:1f:00, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.122 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:8e:ea:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.123 244018 INFO nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Using config drive#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.143 244018 DEBUG nova.storage.rbd_utils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 848fd033-0ebb-460a-a8a0-56583fa5f481_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.698 244018 DEBUG nova.network.neutron [req-ebf8b40e-5f83-4b19-9d7a-f8da08e16af0 req-d35f7335-851c-4c2e-90c7-20471f8cb512 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Updated VIF entry in instance network info cache for port ebd67787-8af9-4a7f-8b38-6d18daff8ff3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.699 244018 DEBUG nova.network.neutron [req-ebf8b40e-5f83-4b19-9d7a-f8da08e16af0 req-d35f7335-851c-4c2e-90c7-20471f8cb512 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Updating instance_info_cache with network_info: [{"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "address": "fa:16:3e:8e:ea:66", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebd67787-8a", "ovs_interfaceid": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.716 244018 INFO nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Creating config drive at /var/lib/nova/instances/848fd033-0ebb-460a-a8a0-56583fa5f481/disk.config#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.720 244018 DEBUG oslo_concurrency.processutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/848fd033-0ebb-460a-a8a0-56583fa5f481/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpeh83pveg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.754 244018 DEBUG oslo_concurrency.lockutils [req-ebf8b40e-5f83-4b19-9d7a-f8da08e16af0 req-d35f7335-851c-4c2e-90c7-20471f8cb512 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.860 244018 DEBUG oslo_concurrency.processutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/848fd033-0ebb-460a-a8a0-56583fa5f481/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpeh83pveg" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.886 244018 DEBUG nova.storage.rbd_utils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 848fd033-0ebb-460a-a8a0-56583fa5f481_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.890 244018 DEBUG oslo_concurrency.processutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/848fd033-0ebb-460a-a8a0-56583fa5f481/disk.config 848fd033-0ebb-460a-a8a0-56583fa5f481_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.939 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "95730650-36ac-4eee-8b22-9ea3f01d82d1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.940 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:47:21 np0005629333 nova_compute[244014]: 2026-02-25 12:47:21.955 244018 DEBUG nova.compute.manager [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.049 244018 DEBUG oslo_concurrency.processutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/848fd033-0ebb-460a-a8a0-56583fa5f481/disk.config 848fd033-0ebb-460a-a8a0-56583fa5f481_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.050 244018 INFO nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Deleting local config drive /var/lib/nova/instances/848fd033-0ebb-460a-a8a0-56583fa5f481/disk.config because it was imported into RBD.#033[00m
Feb 25 07:47:22 np0005629333 kernel: tap0358e18d-8c: entered promiscuous mode
Feb 25 07:47:22 np0005629333 NetworkManager[49836]: <info>  [1772023642.1134] manager: (tap0358e18d-8c): new Tun device (/org/freedesktop/NetworkManager/Devices/500)
Feb 25 07:47:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:22Z|01204|binding|INFO|Claiming lport 0358e18d-8ce8-43a7-a8b2-11193708891a for this chassis.
Feb 25 07:47:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:22Z|01205|binding|INFO|0358e18d-8ce8-43a7-a8b2-11193708891a: Claiming fa:16:3e:fd:1f:00 10.100.0.11
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.118 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:22 np0005629333 NetworkManager[49836]: <info>  [1772023642.1269] manager: (tapebd67787-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/501)
Feb 25 07:47:22 np0005629333 kernel: tapebd67787-8a: entered promiscuous mode
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.130 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.131 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.132 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:22Z|01206|if_status|INFO|Dropped 2 log messages in last 1195 seconds (most recently, 1195 seconds ago) due to excessive rate
Feb 25 07:47:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:22Z|01207|if_status|INFO|Not updating pb chassis for ebd67787-8af9-4a7f-8b38-6d18daff8ff3 now as sb is readonly
Feb 25 07:47:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:22Z|01208|binding|INFO|Claiming lport ebd67787-8af9-4a7f-8b38-6d18daff8ff3 for this chassis.
Feb 25 07:47:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:22Z|01209|binding|INFO|ebd67787-8af9-4a7f-8b38-6d18daff8ff3: Claiming fa:16:3e:8e:ea:66 2001:db8:0:1:f816:3eff:fe8e:ea66 2001:db8::f816:3eff:fe8e:ea66
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.138 244018 DEBUG nova.virt.hardware [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.138 244018 INFO nova.compute.claims [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.141 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:1f:00 10.100.0.11'], port_security=['fa:16:3e:fd:1f:00 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '848fd033-0ebb-460a-a8a0-56583fa5f481', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e663e6c0-5fd2-4899-a0e9-500029428c03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=560bd45a-3180-4994-88ce-74a6fe57d885, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=0358e18d-8ce8-43a7-a8b2-11193708891a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:47:22 np0005629333 systemd-udevd[349930]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:47:22 np0005629333 systemd-udevd[349931]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.143 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 0358e18d-8ce8-43a7-a8b2-11193708891a in datapath d0b6d114-fcb4-4a25-988c-1ee301ef0419 bound to our chassis#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.145 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d0b6d114-fcb4-4a25-988c-1ee301ef0419#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.148 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:ea:66 2001:db8:0:1:f816:3eff:fe8e:ea66 2001:db8::f816:3eff:fe8e:ea66'], port_security=['fa:16:3e:8e:ea:66 2001:db8:0:1:f816:3eff:fe8e:ea66 2001:db8::f816:3eff:fe8e:ea66'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe8e:ea66/64 2001:db8::f816:3eff:fe8e:ea66/64', 'neutron:device_id': '848fd033-0ebb-460a-a8a0-56583fa5f481', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c482202-8994-4033-a0a9-167d92a9e301', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e663e6c0-5fd2-4899-a0e9-500029428c03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06544f4e-1579-4f9d-8faa-1248478e7fbc, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ebd67787-8af9-4a7f-8b38-6d18daff8ff3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.155 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[20184612-45e6-4410-80cf-10f54dc057b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.156 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd0b6d114-f1 in ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:47:22 np0005629333 NetworkManager[49836]: <info>  [1772023642.1575] device (tap0358e18d-8c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:47:22 np0005629333 NetworkManager[49836]: <info>  [1772023642.1590] device (tap0358e18d-8c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:47:22 np0005629333 NetworkManager[49836]: <info>  [1772023642.1598] device (tapebd67787-8a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:47:22 np0005629333 NetworkManager[49836]: <info>  [1772023642.1608] device (tapebd67787-8a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.159 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd0b6d114-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.159 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ee28af28-40c2-437e-ada7-d9eef8ca6d38]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.161 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[afcb8c52-76d9-47ee-a44c-e2a67b135b6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.169 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[527ccde7-b1e1-4a49-9920-b231e383b558]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:22 np0005629333 systemd-machined[210048]: New machine qemu-149-instance-00000075.
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.178 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:22Z|01210|binding|INFO|Setting lport 0358e18d-8ce8-43a7-a8b2-11193708891a ovn-installed in OVS
Feb 25 07:47:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:22Z|01211|binding|INFO|Setting lport 0358e18d-8ce8-43a7-a8b2-11193708891a up in Southbound
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.185 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:22Z|01212|binding|INFO|Setting lport ebd67787-8af9-4a7f-8b38-6d18daff8ff3 ovn-installed in OVS
Feb 25 07:47:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:22Z|01213|binding|INFO|Setting lport ebd67787-8af9-4a7f-8b38-6d18daff8ff3 up in Southbound
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.190 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.192 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[765361fd-d586-40a6-9094-732fceb27612]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:22 np0005629333 systemd[1]: Started Virtual Machine qemu-149-instance-00000075.
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.218 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ae644c35-a1bc-49ae-876a-f8e9314b834a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.225 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6f6c81f9-eabc-496e-8626-b7978d5f99a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:22 np0005629333 NetworkManager[49836]: <info>  [1772023642.2289] manager: (tapd0b6d114-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/502)
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.252 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f84f6c34-20fb-490c-8771-b744c5c8439f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.255 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[21c6de20-83a2-461b-852e-ca0a4dd52ca7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:22 np0005629333 NetworkManager[49836]: <info>  [1772023642.2722] device (tapd0b6d114-f0): carrier: link connected
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.274 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4fc6ba3a-35b4-4d26-ab78-9c594ffcf398]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.286 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0de490e6-8ae5-47d4-8c0f-97ac069bf800]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0b6d114-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:c1:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 364], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561168, 'reachable_time': 40851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349967, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.295 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[11be16ef-f294-473d-8c86-9fec263401b6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe97:c1ce'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561168, 'tstamp': 561168}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349968, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.304 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7bb8122f-cfd5-4157-8faf-33e3a78f598d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0b6d114-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:c1:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 364], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561168, 'reachable_time': 40851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 349969, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.322 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d616ec4b-ef9e-4e75-8987-b26c5665cccf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.362 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a88218c8-fcfb-46f3-8acd-ee0a61be0dcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.363 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0b6d114-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.364 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.364 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0b6d114-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.366 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:22 np0005629333 NetworkManager[49836]: <info>  [1772023642.3671] manager: (tapd0b6d114-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/503)
Feb 25 07:47:22 np0005629333 kernel: tapd0b6d114-f0: entered promiscuous mode
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.369 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.370 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd0b6d114-f0, col_values=(('external_ids', {'iface-id': '4e1af2d0-780b-4ffd-93a0-445de7a22322'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
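[Annotation] The three transactions above are the standard OVN metadata plug sequence: an idempotent delete of the tap from br-ex (if_exists=True), an idempotent add to br-int (may_exist=True), and a DbSetCommand writing external_ids:iface-id so ovn-controller can bind the port on this chassis. A hedged equivalent using the public ovsdbapp Open_vSwitch API; the socket path and timeout are assumptions, the port, bridge and iface-id values come from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapd0b6d114-f0', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapd0b6d114-f0', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapd0b6d114-f0',
            ('external_ids',
             {'iface-id': '4e1af2d0-780b-4ffd-93a0-445de7a22322'})))

Note the agent issues these as three separate one-command transactions (txn n=1 each), which is why the no-op br-ex delete is reported as "Transaction caused no change".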
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.371 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.372 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:22Z|01214|binding|INFO|Releasing lport 4e1af2d0-780b-4ffd-93a0-445de7a22322 from this chassis (sb_readonly=0)
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.373 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d0b6d114-fcb4-4a25-988c-1ee301ef0419.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d0b6d114-fcb4-4a25-988c-1ee301ef0419.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.376 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a132a9a9-bd67-4474-b3d0-073f4086b135]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.377 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-d0b6d114-fcb4-4a25-988c-1ee301ef0419
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/d0b6d114-fcb4-4a25-988c-1ee301ef0419.pid.haproxy
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID d0b6d114-fcb4-4a25-988c-1ee301ef0419
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.378 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.379 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'env', 'PROCESS_TAG=haproxy-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d0b6d114-fcb4-4a25-988c-1ee301ef0419.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
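[Annotation] The rendered haproxy_cfg above binds 169.254.169.254:80 inside the namespace and forwards requests to the agent over the UNIX socket /var/lib/neutron/metadata_proxy (haproxy treats a bare absolute path in a server line as a UNIX socket address), stamping each request with X-OVN-Network-ID so the metadata agent can map it back to the network. The spawn command is fully deterministic from the network UUID; a sketch using only names present in the log:

    # Pieces of the spawn command logged above, reassembled.
    network_id = 'd0b6d114-fcb4-4a25-988c-1ee301ef0419'
    cmd = ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
           'ip', 'netns', 'exec', 'ovnmeta-%s' % network_id,
           'env', 'PROCESS_TAG=haproxy-%s' % network_id,
           'haproxy', '-f',
           '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % network_id]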
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.397 244018 DEBUG oslo_concurrency.processutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.429 244018 DEBUG nova.compute.manager [req-5484353f-e7a0-4774-85c8-70985a7e502f req-facc1d7b-96b0-4b77-8d2f-f2ff8b5bb93a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-vif-plugged-0358e18d-8ce8-43a7-a8b2-11193708891a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.430 244018 DEBUG oslo_concurrency.lockutils [req-5484353f-e7a0-4774-85c8-70985a7e502f req-facc1d7b-96b0-4b77-8d2f-f2ff8b5bb93a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.430 244018 DEBUG oslo_concurrency.lockutils [req-5484353f-e7a0-4774-85c8-70985a7e502f req-facc1d7b-96b0-4b77-8d2f-f2ff8b5bb93a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.430 244018 DEBUG oslo_concurrency.lockutils [req-5484353f-e7a0-4774-85c8-70985a7e502f req-facc1d7b-96b0-4b77-8d2f-f2ff8b5bb93a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.430 244018 DEBUG nova.compute.manager [req-5484353f-e7a0-4774-85c8-70985a7e502f req-facc1d7b-96b0-4b77-8d2f-f2ff8b5bb93a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Processing event network-vif-plugged-0358e18d-8ce8-43a7-a8b2-11193708891a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
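[Annotation] The acquire/release bracket around _pop_event is oslo.concurrency's per-name locking: nova serializes all event handling for an instance under a lock named "<instance-uuid>-events" so that popping and dispatching an event stay atomic. A minimal sketch of the pattern; the lock name comes from the log, the function body is illustrative:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('848fd033-0ebb-460a-a8a0-56583fa5f481-events')
    def _pop_event():
        # Runs with the per-instance event lock held, matching the
        # Acquiring/acquired/released trio logged above.
        ...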
Feb 25 07:47:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.464 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:05:ae 2001:db8::f816:3eff:fe99:5ae'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe99:5ae/64', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=718d24fb-493f-42ce-a4ab-2f64b57cb3ac, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=13855d81-6f9e-45ad-9b2f-8fbca36efa4a) old=Port_Binding(mac=['fa:16:3e:99:05:ae 10.100.0.2 2001:db8::f816:3eff:fe99:5ae'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe99:5ae/64', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
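[Annotation] PortBindingUpdatedEvent is an ovsdbapp RowEvent matched against the southbound Port_Binding table; here it fires because a revision bump rewrote the localport's mac and cidr lists. A rough sketch of the shape such an event class takes; the handler name is hypothetical, the real class lives in neutron's OVN metadata agent:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, agent):
            self.agent = agent
            # events=('update',), table='Port_Binding', conditions=None,
            # exactly as printed in the "Matched UPDATE" log line above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            self.agent.handle_port_binding(row, old)  # hypothetical handler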
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.703 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023642.7032177, 848fd033-0ebb-460a-a8a0-56583fa5f481 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.703 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] VM Started (Lifecycle Event)#033[00m
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.730 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.734 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023642.7032783, 848fd033-0ebb-460a-a8a0-56583fa5f481 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.734 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:47:22 np0005629333 podman[350064]: 2026-02-25 12:47:22.749612064 +0000 UTC m=+0.055258451 container create a7aee791e988a0c55e6f0af6c7e94838ecc75f9d23ad6b6c79c31b227fe683a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.752 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.774 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:47:22 np0005629333 systemd[1]: Started libpod-conmon-a7aee791e988a0c55e6f0af6c7e94838ecc75f9d23ad6b6c79c31b227fe683a9.scope.
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.793 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
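[Annotation] To read the sync line above: DB power_state 0 versus VM power_state 3 are nova.compute.power_state constants, i.e. the database still says NOSTATE while libvirt already reports PAUSED; because task_state is still "spawning", the sync is deliberately skipped rather than force-correcting a half-built instance.

    # nova.compute.power_state values, useful when reading these logs:
    NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7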
Feb 25 07:47:22 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:47:22 np0005629333 podman[350064]: 2026-02-25 12:47:22.719223416 +0000 UTC m=+0.024869873 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:47:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a06c297626a192e338cd1e2dd123eba9e16164173bf46ec6839ab1a455ed742/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:47:22 np0005629333 podman[350064]: 2026-02-25 12:47:22.833238886 +0000 UTC m=+0.138885273 container init a7aee791e988a0c55e6f0af6c7e94838ecc75f9d23ad6b6c79c31b227fe683a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 07:47:22 np0005629333 podman[350064]: 2026-02-25 12:47:22.837250469 +0000 UTC m=+0.142896856 container start a7aee791e988a0c55e6f0af6c7e94838ecc75f9d23ad6b6c79c31b227fe683a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:47:22 np0005629333 neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419[350077]: [NOTICE]   (350081) : New worker (350083) forked
Feb 25 07:47:22 np0005629333 neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419[350077]: [NOTICE]   (350081) : Loading success.
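[Annotation] The haproxy NOTICE lines come from a podman wrapper container whose name encodes the network UUID (neutron-haproxy-ovnmeta-<uuid>), created, initialized and started in the podman[350064] lines above. An illustrative way to correlate the two, not something the agent itself runs:

    import subprocess

    # List wrapper containers by their deterministic name prefix.
    out = subprocess.run(
        ['podman', 'ps', '--filter', 'name=neutron-haproxy-ovnmeta-',
         '--format', '{{.ID}} {{.Names}}'],
        capture_output=True, text=True, check=True)
    print(out.stdout)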
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.910 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ebd67787-8af9-4a7f-8b38-6d18daff8ff3 in datapath 5c482202-8994-4033-a0a9-167d92a9e301 unbound from our chassis#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.913 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5c482202-8994-4033-a0a9-167d92a9e301#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.922 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[51f1d402-83a2-4010-84d2-fe2724faf4c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.923 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5c482202-81 in ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.926 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5c482202-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.926 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[07124876-1581-4c07-8ba5-87a3be1b38f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.928 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a1da0a89-ffeb-49e4-b87c-b79d33ddd502]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.939 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[07ef7971-6888-40d1-8806-034bc7329ea6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.949 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7305ec96-fcca-4899-b1d9-87a2f4ab6602]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
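[Annotation] Before plumbing addresses into the new namespace the agent confirms net.ipv4.conf.all.promote_secondaries is 1 (so deleting a primary IPv4 address promotes its secondaries instead of flushing them); the privsep reply carries the usual (stdout, stderr, returncode) triple from the sysctl call. The same knob can be read directly:

    # Equivalent read of the sysctl shown in the privsep reply above.
    with open('/proc/sys/net/ipv4/conf/all/promote_secondaries') as f:
        print(f.read().strip())   # "1" on this host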
Feb 25 07:47:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:47:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1294403115' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.973 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2e0e1898-f96b-48a2-95d9-0600c5f274d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:22.980 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[67b27c46-0446-4c5c-9af3-51112fc990a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:22 np0005629333 NetworkManager[49836]: <info>  [1772023642.9819] manager: (tap5c482202-80): new Veth device (/org/freedesktop/NetworkManager/Devices/504)
Feb 25 07:47:22 np0005629333 systemd-udevd[349958]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.987 244018 DEBUG oslo_concurrency.processutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:47:22 np0005629333 nova_compute[244014]: 2026-02-25 12:47:22.992 244018 DEBUG nova.compute.provider_tree [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.007 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[44310bfe-44d7-4117-a0a0-2532589ed728]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.011 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[fa0278bf-e9b7-45c0-bda4-bf14f8c1ead7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.023 244018 DEBUG nova.scheduler.client.report [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
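[Annotation] The inventory that placement considers unchanged encodes usable capacity as (total - reserved) * allocation_ratio per resource class, so this provider offers the scheduler 32 VCPU, 7167 MB of RAM and 52.2 GB of disk. Worked directly from the logged dict:

    inv = {'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
           'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
           'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9}}
    for rc, v in inv.items():
        print(rc, round((v['total'] - v['reserved']) * v['allocation_ratio'], 1))
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2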
Feb 25 07:47:23 np0005629333 NetworkManager[49836]: <info>  [1772023643.0334] device (tap5c482202-80): carrier: link connected
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.038 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[477320fd-5993-4b1c-9d6f-2d03b4cc3f39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.048 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.049 244018 DEBUG nova.compute.manager [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.053 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[18386e0e-00ec-4edf-b138-c28a404547ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c482202-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:af:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 365], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561244, 'reachable_time': 22558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350104, 'error': None, 'target': 'ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.069 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a1358bb4-92d0-4018-b167-ea875f76a4ce]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe02:afc3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561244, 'tstamp': 561244}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350105, 'error': None, 'target': 'ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.088 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb9186d-40ab-4fcb-b9de-6582f0f1ae15]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c482202-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:af:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 365], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561244, 'reachable_time': 22558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 350106, 'error': None, 'target': 'ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.117 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[35e27d57-4925-4b89-8467-e3ebe4ea9912]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.118 244018 DEBUG nova.compute.manager [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.118 244018 DEBUG nova.network.neutron [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.142 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[179700c6-4d67-44b2-aec0-171cbac34d2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.143 244018 INFO nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.145 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c482202-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.145 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.146 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c482202-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:47:23 np0005629333 NetworkManager[49836]: <info>  [1772023643.1489] manager: (tap5c482202-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/505)
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.148 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:23 np0005629333 kernel: tap5c482202-80: entered promiscuous mode
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.151 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5c482202-80, col_values=(('external_ids', {'iface-id': '8372e6c6-fbbc-48a7-be95-d95e1d2ad95a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:47:23 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:23Z|01215|binding|INFO|Releasing lport 8372e6c6-fbbc-48a7-be95-d95e1d2ad95a from this chassis (sb_readonly=0)
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.153 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.154 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5c482202-8994-4033-a0a9-167d92a9e301.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5c482202-8994-4033-a0a9-167d92a9e301.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.154 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1d55ef3a-7888-436b-a0a1-f6511dfa82dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.156 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-5c482202-8994-4033-a0a9-167d92a9e301
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/5c482202-8994-4033-a0a9-167d92a9e301.pid.haproxy
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 5c482202-8994-4033-a0a9-167d92a9e301
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.157 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301', 'env', 'PROCESS_TAG=haproxy-5c482202-8994-4033-a0a9-167d92a9e301', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5c482202-8994-4033-a0a9-167d92a9e301.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.158 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.180 244018 DEBUG nova.compute.manager [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.275 244018 DEBUG nova.compute.manager [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.276 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.277 244018 INFO nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Creating image(s)#033[00m
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.299 244018 DEBUG nova.storage.rbd_utils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 95730650-36ac-4eee-8b22-9ea3f01d82d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.322 244018 DEBUG nova.storage.rbd_utils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 95730650-36ac-4eee-8b22-9ea3f01d82d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.342 244018 DEBUG nova.storage.rbd_utils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 95730650-36ac-4eee-8b22-9ea3f01d82d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.346 244018 DEBUG oslo_concurrency.processutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.381 244018 DEBUG nova.policy [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.426 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:05:ae 10.100.0.2 2001:db8::f816:3eff:fe99:5ae'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe99:5ae/64', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=718d24fb-493f-42ce-a4ab-2f64b57cb3ac, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=13855d81-6f9e-45ad-9b2f-8fbca36efa4a) old=Port_Binding(mac=['fa:16:3e:99:05:ae 2001:db8::f816:3eff:fe99:5ae'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe99:5ae/64', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.433 244018 DEBUG oslo_concurrency.processutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
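[Annotation] The qemu-img probe above is wrapped in python -m oslo_concurrency.prlimit to cap the child at 1 GiB of address space and 30 s of CPU time, a guard against malformed or hostile images. In code this corresponds to the prlimit= argument of processutils.execute; a sketch of the call shape implied by the logged command line:

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30))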
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.434 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.435 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.435 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.459 244018 DEBUG nova.storage.rbd_utils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 95730650-36ac-4eee-8b22-9ea3f01d82d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.463 244018 DEBUG oslo_concurrency.processutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 95730650-36ac-4eee-8b22-9ea3f01d82d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:47:23 np0005629333 podman[350190]: 2026-02-25 12:47:23.48607037 +0000 UTC m=+0.047272816 container create 20aff2b420c99f48402d14ea550cc6f53615620d8268dcc470c0d042dfb5ae45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:47:23 np0005629333 systemd[1]: Started libpod-conmon-20aff2b420c99f48402d14ea550cc6f53615620d8268dcc470c0d042dfb5ae45.scope.
Feb 25 07:47:23 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:47:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:47:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83335b7bb0badbab01818a01b3ea3886d2d30cde665b02056377fe1391514531/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:47:23 np0005629333 podman[350190]: 2026-02-25 12:47:23.461972099 +0000 UTC m=+0.023174575 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:47:23 np0005629333 podman[350190]: 2026-02-25 12:47:23.567109268 +0000 UTC m=+0.128311814 container init 20aff2b420c99f48402d14ea550cc6f53615620d8268dcc470c0d042dfb5ae45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 25 07:47:23 np0005629333 podman[350190]: 2026-02-25 12:47:23.572910092 +0000 UTC m=+0.134112588 container start 20aff2b420c99f48402d14ea550cc6f53615620d8268dcc470c0d042dfb5ae45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:47:23 np0005629333 neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301[350226]: [NOTICE]   (350245) : New worker (350250) forked
Feb 25 07:47:23 np0005629333 neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301[350226]: [NOTICE]   (350245) : Loading success.
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.634 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 13855d81-6f9e-45ad-9b2f-8fbca36efa4a in datapath 1002c177-97f1-4286-9217-c8db43f28825 updated#033[00m
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.636 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1002c177-97f1-4286-9217-c8db43f28825, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.637 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c48f27bb-dce5-4e1c-bfbe-2510db368d62]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.637 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 13855d81-6f9e-45ad-9b2f-8fbca36efa4a in datapath 1002c177-97f1-4286-9217-c8db43f28825 updated#033[00m
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.638 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1002c177-97f1-4286-9217-c8db43f28825, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:47:23 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:23.639 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[11da65b3-26c8-4a93-a0d1-c6920d8894ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.741 244018 DEBUG oslo_concurrency.processutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 95730650-36ac-4eee-8b22-9ea3f01d82d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.798 244018 DEBUG nova.storage.rbd_utils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] resizing rbd image 95730650-36ac-4eee-8b22-9ea3f01d82d1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.873 244018 DEBUG nova.objects.instance [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'migration_context' on Instance uuid 95730650-36ac-4eee-8b22-9ea3f01d82d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.886 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.887 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Ensure instance console log exists: /var/lib/nova/instances/95730650-36ac-4eee-8b22-9ea3f01d82d1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.887 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.887 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:47:23 np0005629333 nova_compute[244014]: 2026-02-25 12:47:23.888 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.143 244018 DEBUG nova.network.neutron [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Successfully created port: 0660ccb7-a986-45dd-8aa8-e10ddd71144a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:47:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.548 244018 DEBUG nova.compute.manager [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-vif-plugged-0358e18d-8ce8-43a7-a8b2-11193708891a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.548 244018 DEBUG oslo_concurrency.lockutils [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.549 244018 DEBUG oslo_concurrency.lockutils [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.549 244018 DEBUG oslo_concurrency.lockutils [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.550 244018 DEBUG nova.compute.manager [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] No event matching network-vif-plugged-0358e18d-8ce8-43a7-a8b2-11193708891a in dict_keys([('network-vif-plugged', 'ebd67787-8af9-4a7f-8b38-6d18daff8ff3')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.551 244018 WARNING nova.compute.manager [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received unexpected event network-vif-plugged-0358e18d-8ce8-43a7-a8b2-11193708891a for instance with vm_state building and task_state spawning.#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.551 244018 DEBUG nova.compute.manager [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-vif-plugged-ebd67787-8af9-4a7f-8b38-6d18daff8ff3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.551 244018 DEBUG oslo_concurrency.lockutils [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.552 244018 DEBUG oslo_concurrency.lockutils [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.552 244018 DEBUG oslo_concurrency.lockutils [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.553 244018 DEBUG nova.compute.manager [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Processing event network-vif-plugged-ebd67787-8af9-4a7f-8b38-6d18daff8ff3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.553 244018 DEBUG nova.compute.manager [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-vif-plugged-ebd67787-8af9-4a7f-8b38-6d18daff8ff3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.554 244018 DEBUG oslo_concurrency.lockutils [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.554 244018 DEBUG oslo_concurrency.lockutils [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.555 244018 DEBUG oslo_concurrency.lockutils [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.555 244018 DEBUG nova.compute.manager [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] No waiting events found dispatching network-vif-plugged-ebd67787-8af9-4a7f-8b38-6d18daff8ff3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.556 244018 WARNING nova.compute.manager [req-4dfc541d-4553-49d3-acd0-7e6fd2c39279 req-ffe98c51-0ad6-4981-ba84-6115f0e29b32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received unexpected event network-vif-plugged-ebd67787-8af9-4a7f-8b38-6d18daff8ff3 for instance with vm_state building and task_state spawning.#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.557 244018 DEBUG nova.compute.manager [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.562 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023644.5616028, 848fd033-0ebb-460a-a8a0-56583fa5f481 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.562 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.565 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.569 244018 INFO nova.virt.libvirt.driver [-] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Instance spawned successfully.#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.570 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.586 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.594 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.600 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.601 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.601 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.602 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.603 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.604 244018 DEBUG nova.virt.libvirt.driver [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.630 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.670 244018 INFO nova.compute.manager [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Took 11.99 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.671 244018 DEBUG nova.compute.manager [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.745 244018 INFO nova.compute.manager [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Took 13.27 seconds to build instance.#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.762 244018 DEBUG oslo_concurrency.lockutils [None req-453c827e-bfce-44b7-9b74-9658e1c24e4b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:47:24 np0005629333 nova_compute[244014]: 2026-02-25 12:47:24.940 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:47:25 np0005629333 nova_compute[244014]: 2026-02-25 12:47:25.058 244018 DEBUG nova.network.neutron [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Successfully updated port: 0660ccb7-a986-45dd-8aa8-e10ddd71144a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:47:25 np0005629333 nova_compute[244014]: 2026-02-25 12:47:25.167 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-95730650-36ac-4eee-8b22-9ea3f01d82d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:47:25 np0005629333 nova_compute[244014]: 2026-02-25 12:47:25.167 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-95730650-36ac-4eee-8b22-9ea3f01d82d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:47:25 np0005629333 nova_compute[244014]: 2026-02-25 12:47:25.168 244018 DEBUG nova.network.neutron [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:47:25 np0005629333 nova_compute[244014]: 2026-02-25 12:47:25.176 244018 DEBUG nova.compute.manager [req-c1f4b90f-7c7c-4445-858e-051c103e901c req-e55d102b-2c21-47be-8037-cdb28e77fea8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Received event network-changed-0660ccb7-a986-45dd-8aa8-e10ddd71144a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:47:25 np0005629333 nova_compute[244014]: 2026-02-25 12:47:25.176 244018 DEBUG nova.compute.manager [req-c1f4b90f-7c7c-4445-858e-051c103e901c req-e55d102b-2c21-47be-8037-cdb28e77fea8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Refreshing instance network info cache due to event network-changed-0660ccb7-a986-45dd-8aa8-e10ddd71144a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:47:25 np0005629333 nova_compute[244014]: 2026-02-25 12:47:25.177 244018 DEBUG oslo_concurrency.lockutils [req-c1f4b90f-7c7c-4445-858e-051c103e901c req-e55d102b-2c21-47be-8037-cdb28e77fea8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-95730650-36ac-4eee-8b22-9ea3f01d82d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:47:25 np0005629333 nova_compute[244014]: 2026-02-25 12:47:25.252 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:25 np0005629333 nova_compute[244014]: 2026-02-25 12:47:25.369 244018 DEBUG nova.network.neutron [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:47:25 np0005629333 nova_compute[244014]: 2026-02-25 12:47:25.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:47:25 np0005629333 nova_compute[244014]: 2026-02-25 12:47:25.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 07:47:25 np0005629333 nova_compute[244014]: 2026-02-25 12:47:25.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 07:47:25 np0005629333 nova_compute[244014]: 2026-02-25 12:47:25.903 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.050 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.081 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.082 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.082 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.082 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid 848fd033-0ebb-460a-a8a0-56583fa5f481 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.124 244018 DEBUG nova.network.neutron [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Updating instance_info_cache with network_info: [{"id": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "address": "fa:16:3e:cb:b0:7a", "network": {"id": "b261df4d-3b33-4344-9c3c-a73feb8773db", "bridge": "br-int", "label": "tempest-network-smoke--1044834333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0660ccb7-a9", "ovs_interfaceid": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.186 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-95730650-36ac-4eee-8b22-9ea3f01d82d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.186 244018 DEBUG nova.compute.manager [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Instance network_info: |[{"id": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "address": "fa:16:3e:cb:b0:7a", "network": {"id": "b261df4d-3b33-4344-9c3c-a73feb8773db", "bridge": "br-int", "label": "tempest-network-smoke--1044834333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0660ccb7-a9", "ovs_interfaceid": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.187 244018 DEBUG oslo_concurrency.lockutils [req-c1f4b90f-7c7c-4445-858e-051c103e901c req-e55d102b-2c21-47be-8037-cdb28e77fea8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-95730650-36ac-4eee-8b22-9ea3f01d82d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.187 244018 DEBUG nova.network.neutron [req-c1f4b90f-7c7c-4445-858e-051c103e901c req-e55d102b-2c21-47be-8037-cdb28e77fea8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Refreshing network info cache for port 0660ccb7-a986-45dd-8aa8-e10ddd71144a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.189 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Start _get_guest_xml network_info=[{"id": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "address": "fa:16:3e:cb:b0:7a", "network": {"id": "b261df4d-3b33-4344-9c3c-a73feb8773db", "bridge": "br-int", "label": "tempest-network-smoke--1044834333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0660ccb7-a9", "ovs_interfaceid": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.193 244018 WARNING nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.198 244018 DEBUG nova.virt.libvirt.host [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.198 244018 DEBUG nova.virt.libvirt.host [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.201 244018 DEBUG nova.virt.libvirt.host [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.201 244018 DEBUG nova.virt.libvirt.host [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.201 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.201 244018 DEBUG nova.virt.hardware [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.202 244018 DEBUG nova.virt.hardware [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.202 244018 DEBUG nova.virt.hardware [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.202 244018 DEBUG nova.virt.hardware [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.202 244018 DEBUG nova.virt.hardware [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.202 244018 DEBUG nova.virt.hardware [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.202 244018 DEBUG nova.virt.hardware [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.203 244018 DEBUG nova.virt.hardware [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.203 244018 DEBUG nova.virt.hardware [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.203 244018 DEBUG nova.virt.hardware [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.203 244018 DEBUG nova.virt.hardware [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.205 244018 DEBUG oslo_concurrency.processutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:47:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Feb 25 07:47:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:47:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1062346787' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.760 244018 DEBUG oslo_concurrency.processutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.781 244018 DEBUG nova.storage.rbd_utils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 95730650-36ac-4eee-8b22-9ea3f01d82d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:47:26 np0005629333 nova_compute[244014]: 2026-02-25 12:47:26.785 244018 DEBUG oslo_concurrency.processutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:47:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:26.948 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:05:ae 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=718d24fb-493f-42ce-a4ab-2f64b57cb3ac, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=13855d81-6f9e-45ad-9b2f-8fbca36efa4a) old=Port_Binding(mac=['fa:16:3e:99:05:ae 10.100.0.2 2001:db8::f816:3eff:fe99:5ae'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe99:5ae/64', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:47:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:26.950 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 13855d81-6f9e-45ad-9b2f-8fbca36efa4a in datapath 1002c177-97f1-4286-9217-c8db43f28825 updated#033[00m
Feb 25 07:47:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:26.951 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1002c177-97f1-4286-9217-c8db43f28825, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:47:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:26.951 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[064896a2-c26e-40cf-9c36-c952c89cc386]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:47:27 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1616965702' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:47:27 np0005629333 nova_compute[244014]: 2026-02-25 12:47:27.326 244018 DEBUG oslo_concurrency.processutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:47:27 np0005629333 nova_compute[244014]: 2026-02-25 12:47:27.328 244018 DEBUG nova.virt.libvirt.vif [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:47:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-404307455',display_name='tempest-TestNetworkBasicOps-server-404307455',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-404307455',id=118,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFFlaz+1JiPozqOMtDFDKtIjDHy+pR4Dk92wrSmsbZfFxI7F561H9M9jMhpR4UoZcnqUowU/hewm+5foBFGhY3SM2rprh5R3QDm+X6unzTrRIJHcQVeiyxVtyOmfm3aC3A==',key_name='tempest-TestNetworkBasicOps-1001785432',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-x2oa88n1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:47:23Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=95730650-36ac-4eee-8b22-9ea3f01d82d1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "address": "fa:16:3e:cb:b0:7a", "network": {"id": "b261df4d-3b33-4344-9c3c-a73feb8773db", "bridge": "br-int", "label": "tempest-network-smoke--1044834333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0660ccb7-a9", "ovs_interfaceid": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:47:27 np0005629333 nova_compute[244014]: 2026-02-25 12:47:27.328 244018 DEBUG nova.network.os_vif_util [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "address": "fa:16:3e:cb:b0:7a", "network": {"id": "b261df4d-3b33-4344-9c3c-a73feb8773db", "bridge": "br-int", "label": "tempest-network-smoke--1044834333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0660ccb7-a9", "ovs_interfaceid": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:47:27 np0005629333 nova_compute[244014]: 2026-02-25 12:47:27.329 244018 DEBUG nova.network.os_vif_util [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:b0:7a,bridge_name='br-int',has_traffic_filtering=True,id=0660ccb7-a986-45dd-8aa8-e10ddd71144a,network=Network(b261df4d-3b33-4344-9c3c-a73feb8773db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0660ccb7-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:47:27 np0005629333 nova_compute[244014]: 2026-02-25 12:47:27.330 244018 DEBUG nova.objects.instance [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_devices' on Instance uuid 95730650-36ac-4eee-8b22-9ea3f01d82d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:47:27 np0005629333 nova_compute[244014]: 2026-02-25 12:47:27.344 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:47:27 np0005629333 nova_compute[244014]:  <uuid>95730650-36ac-4eee-8b22-9ea3f01d82d1</uuid>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:  <name>instance-00000076</name>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestNetworkBasicOps-server-404307455</nova:name>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:47:26</nova:creationTime>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:47:27 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:        <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:        <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:        <nova:port uuid="0660ccb7-a986-45dd-8aa8-e10ddd71144a">
Feb 25 07:47:27 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <entry name="serial">95730650-36ac-4eee-8b22-9ea3f01d82d1</entry>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <entry name="uuid">95730650-36ac-4eee-8b22-9ea3f01d82d1</entry>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/95730650-36ac-4eee-8b22-9ea3f01d82d1_disk">
Feb 25 07:47:27 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:47:27 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/95730650-36ac-4eee-8b22-9ea3f01d82d1_disk.config">
Feb 25 07:47:27 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:47:27 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:cb:b0:7a"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <target dev="tap0660ccb7-a9"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/95730650-36ac-4eee-8b22-9ea3f01d82d1/console.log" append="off"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:47:27 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:47:27 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:47:27 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:47:27 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
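The block above is the complete libvirt domain XML that Nova generated for instance-00000076: 131072 KiB (128 MiB) of memory and 1 vCPU matching the m1.nano flavor, an RBD-backed root disk plus a config-drive CDROM, and a virtio interface destined for br-int. A minimal sketch (not Nova code) of pulling the key fields out of such a dump with the Python standard library, assuming the XML has been saved to guest.xml:

    import xml.etree.ElementTree as ET

    NOVA_NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

    root = ET.parse("guest.xml").getroot()      # <domain type="kvm">
    print(root.findtext("uuid"))                # 95730650-36ac-4eee-8b22-9ea3f01d82d1
    print(root.findtext("name"))                # instance-00000076
    print(root.findtext("memory"))              # 131072 (libvirt default unit is KiB)
    # Flavor details live in the Nova metadata namespace:
    flavor = root.find(".//nova:flavor", NOVA_NS)
    print(flavor.get("name"),                   # m1.nano
          flavor.findtext("nova:vcpus", namespaces=NOVA_NS))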
Feb 25 07:47:27 np0005629333 nova_compute[244014]: 2026-02-25 12:47:27.345 244018 DEBUG nova.compute.manager [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Preparing to wait for external event network-vif-plugged-0660ccb7-a986-45dd-8aa8-e10ddd71144a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:47:27 np0005629333 nova_compute[244014]: 2026-02-25 12:47:27.345 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:47:27 np0005629333 nova_compute[244014]: 2026-02-25 12:47:27.345 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:47:27 np0005629333 nova_compute[244014]: 2026-02-25 12:47:27.346 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
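Note the ordering here: the waiter for network-vif-plugged-0660ccb7-a986-45dd-8aa8-e10ddd71144a is registered before the VIF is actually plugged, so the Neutron notification cannot race past the compute manager. A minimal sketch of that register-before-trigger pattern (illustrative only; Nova's InstanceEvents bookkeeping is more involved):

    import threading

    class EventWaiter:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}                  # event name -> threading.Event

        def prepare(self, name):
            # Register the waiter *before* starting the action that will
            # trigger the event (cf. "Preparing to wait" above).
            with self._lock:
                return self._events.setdefault(name, threading.Event())

        def signal(self, name):
            # Called when the external notification arrives.
            with self._lock:
                ev = self._events.pop(name, None)
            if ev:
                ev.set()

    waiter = EventWaiter()
    ev = waiter.prepare("network-vif-plugged-0660ccb7-a986-45dd-8aa8-e10ddd71144a")
    # ... plug the VIF and start the guest ...
    waiter.signal("network-vif-plugged-0660ccb7-a986-45dd-8aa8-e10ddd71144a")
    ev.wait(timeout=300)                       # returns True once signalled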
Feb 25 07:47:27 np0005629333 nova_compute[244014]: 2026-02-25 12:47:27.346 244018 DEBUG nova.virt.libvirt.vif [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:47:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-404307455',display_name='tempest-TestNetworkBasicOps-server-404307455',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-404307455',id=118,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFFlaz+1JiPozqOMtDFDKtIjDHy+pR4Dk92wrSmsbZfFxI7F561H9M9jMhpR4UoZcnqUowU/hewm+5foBFGhY3SM2rprh5R3QDm+X6unzTrRIJHcQVeiyxVtyOmfm3aC3A==',key_name='tempest-TestNetworkBasicOps-1001785432',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-x2oa88n1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:47:23Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=95730650-36ac-4eee-8b22-9ea3f01d82d1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "address": "fa:16:3e:cb:b0:7a", "network": {"id": "b261df4d-3b33-4344-9c3c-a73feb8773db", "bridge": "br-int", "label": "tempest-network-smoke--1044834333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0660ccb7-a9", "ovs_interfaceid": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:47:27 np0005629333 nova_compute[244014]: 2026-02-25 12:47:27.346 244018 DEBUG nova.network.os_vif_util [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "address": "fa:16:3e:cb:b0:7a", "network": {"id": "b261df4d-3b33-4344-9c3c-a73feb8773db", "bridge": "br-int", "label": "tempest-network-smoke--1044834333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0660ccb7-a9", "ovs_interfaceid": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:47:27 np0005629333 nova_compute[244014]: 2026-02-25 12:47:27.347 244018 DEBUG nova.network.os_vif_util [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:b0:7a,bridge_name='br-int',has_traffic_filtering=True,id=0660ccb7-a986-45dd-8aa8-e10ddd71144a,network=Network(b261df4d-3b33-4344-9c3c-a73feb8773db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0660ccb7-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:47:27 np0005629333 nova_compute[244014]: 2026-02-25 12:47:27.347 244018 DEBUG os_vif [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:b0:7a,bridge_name='br-int',has_traffic_filtering=True,id=0660ccb7-a986-45dd-8aa8-e10ddd71144a,network=Network(b261df4d-3b33-4344-9c3c-a73feb8773db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0660ccb7-a9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
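os_vif.plug() dispatches to the 'ovs' plugin named in the VIF object; everything the plugin needs to wire the port (bridge, device name, MAC, port UUID) travels inside that object. A rough sketch of the call, with field names taken from the VIFOpenVSwitch repr above (hedged: real callers build these objects via nova.network.os_vif_util, and the ovs plugin may require additional fields such as port_profile in practice):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()
    inst = instance_info.InstanceInfo(
        uuid="95730650-36ac-4eee-8b22-9ea3f01d82d1",
        name="instance-00000076")
    v = vif.VIFOpenVSwitch(
        id="0660ccb7-a986-45dd-8aa8-e10ddd71144a",
        address="fa:16:3e:cb:b0:7a",
        bridge_name="br-int",
        vif_name="tap0660ccb7-a9",
        network=network.Network(id="b261df4d-3b33-4344-9c3c-a73feb8773db"))
    os_vif.plug(v, inst)   # on success os_vif logs "Successfully plugged vif ..."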
Feb 25 07:47:27 np0005629333 nova_compute[244014]: 2026-02-25 12:47:27.348 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:27 np0005629333 nova_compute[244014]: 2026-02-25 12:47:27.348 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:47:27 np0005629333 nova_compute[244014]: 2026-02-25 12:47:27.348 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:47:27 np0005629333 nova_compute[244014]: 2026-02-25 12:47:27.350 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:27 np0005629333 nova_compute[244014]: 2026-02-25 12:47:27.350 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0660ccb7-a9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:47:27 np0005629333 nova_compute[244014]: 2026-02-25 12:47:27.351 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0660ccb7-a9, col_values=(('external_ids', {'iface-id': '0660ccb7-a986-45dd-8aa8-e10ddd71144a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cb:b0:7a', 'vm-uuid': '95730650-36ac-4eee-8b22-9ea3f01d82d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
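These two commands are the whole plug operation at the OVS level: idempotently add the tap port to br-int, then stamp the Interface row with external_ids so ovn-controller can match iface-id against a logical port. A hedged equivalent using ovsdbapp's public API (the socket path is an assumption; Nova reuses its own IDL connection internally):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                          "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    with api.transaction(check_error=True) as txn:
        # AddPortCommand / DbSetCommand, as in the log lines above
        txn.add(api.add_port("br-int", "tap0660ccb7-a9", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap0660ccb7-a9",
            ("external_ids",
             {"iface-id": "0660ccb7-a986-45dd-8aa8-e10ddd71144a",
              "attached-mac": "fa:16:3e:cb:b0:7a"})))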
Feb 25 07:47:27 np0005629333 nova_compute[244014]: 2026-02-25 12:47:27.352 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:27 np0005629333 NetworkManager[49836]: <info>  [1772023647.3530] manager: (tap0660ccb7-a9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/506)
Feb 25 07:47:27 np0005629333 nova_compute[244014]: 2026-02-25 12:47:27.354 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:47:27 np0005629333 nova_compute[244014]: 2026-02-25 12:47:27.357 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:27 np0005629333 nova_compute[244014]: 2026-02-25 12:47:27.358 244018 INFO os_vif [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:b0:7a,bridge_name='br-int',has_traffic_filtering=True,id=0660ccb7-a986-45dd-8aa8-e10ddd71144a,network=Network(b261df4d-3b33-4344-9c3c-a73feb8773db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0660ccb7-a9')#033[00m
Feb 25 07:47:27 np0005629333 nova_compute[244014]: 2026-02-25 12:47:27.407 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:47:27 np0005629333 nova_compute[244014]: 2026-02-25 12:47:27.407 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:47:27 np0005629333 nova_compute[244014]: 2026-02-25 12:47:27.407 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:cb:b0:7a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:47:27 np0005629333 nova_compute[244014]: 2026-02-25 12:47:27.407 244018 INFO nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Using config drive#033[00m
Feb 25 07:47:27 np0005629333 nova_compute[244014]: 2026-02-25 12:47:27.429 244018 DEBUG nova.storage.rbd_utils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 95730650-36ac-4eee-8b22-9ea3f01d82d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.088 244018 INFO nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Creating config drive at /var/lib/nova/instances/95730650-36ac-4eee-8b22-9ea3f01d82d1/disk.config#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.094 244018 DEBUG oslo_concurrency.processutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/95730650-36ac-4eee-8b22-9ea3f01d82d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp69z2zc0d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.232 244018 DEBUG oslo_concurrency.processutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/95730650-36ac-4eee-8b22-9ea3f01d82d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp69z2zc0d" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
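The config drive is a plain ISO9660 image; the -V config-2 volume label is what cloud-init and similar guest agents probe for. A hedged sketch of the same invocation through oslo.concurrency, with arguments copied from the log line above (the publisher string is one argument; processutils logs the command space-joined):

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        "/usr/bin/mkisofs", "-o", "disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
        "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmp69z2zc0d")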
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.272 244018 DEBUG nova.storage.rbd_utils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 95730650-36ac-4eee-8b22-9ea3f01d82d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.277 244018 DEBUG oslo_concurrency.processutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/95730650-36ac-4eee-8b22-9ea3f01d82d1/disk.config 95730650-36ac-4eee-8b22-9ea3f01d82d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.306 244018 DEBUG nova.network.neutron [req-c1f4b90f-7c7c-4445-858e-051c103e901c req-e55d102b-2c21-47be-8037-cdb28e77fea8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Updated VIF entry in instance network info cache for port 0660ccb7-a986-45dd-8aa8-e10ddd71144a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.308 244018 DEBUG nova.network.neutron [req-c1f4b90f-7c7c-4445-858e-051c103e901c req-e55d102b-2c21-47be-8037-cdb28e77fea8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Updating instance_info_cache with network_info: [{"id": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "address": "fa:16:3e:cb:b0:7a", "network": {"id": "b261df4d-3b33-4344-9c3c-a73feb8773db", "bridge": "br-int", "label": "tempest-network-smoke--1044834333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0660ccb7-a9", "ovs_interfaceid": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.345 244018 DEBUG oslo_concurrency.lockutils [req-c1f4b90f-7c7c-4445-858e-051c103e901c req-e55d102b-2c21-47be-8037-cdb28e77fea8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-95730650-36ac-4eee-8b22-9ea3f01d82d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.416 244018 DEBUG oslo_concurrency.processutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/95730650-36ac-4eee-8b22-9ea3f01d82d1/disk.config 95730650-36ac-4eee-8b22-9ea3f01d82d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.417 244018 INFO nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Deleting local config drive /var/lib/nova/instances/95730650-36ac-4eee-8b22-9ea3f01d82d1/disk.config because it was imported into RBD.#033[00m
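After the rbd import, the config drive lives in the vms pool next to the root disk, so the local file can be deleted. A hedged verification sketch with the Ceph Python bindings (client name and conf path taken from the command above):

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")
    # Expect both the root disk and the freshly imported .config image:
    print(rbd.RBD().list(ioctx))
    ioctx.close()
    cluster.shutdown()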
Feb 25 07:47:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 305 active+clean; 246 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Feb 25 07:47:28 np0005629333 kernel: tap0660ccb7-a9: entered promiscuous mode
Feb 25 07:47:28 np0005629333 NetworkManager[49836]: <info>  [1772023648.4638] manager: (tap0660ccb7-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/507)
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.466 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:28 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:28Z|01216|binding|INFO|Claiming lport 0660ccb7-a986-45dd-8aa8-e10ddd71144a for this chassis.
Feb 25 07:47:28 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:28Z|01217|binding|INFO|0660ccb7-a986-45dd-8aa8-e10ddd71144a: Claiming fa:16:3e:cb:b0:7a 10.100.0.12
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.480 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:b0:7a 10.100.0.12'], port_security=['fa:16:3e:cb:b0:7a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '95730650-36ac-4eee-8b22-9ea3f01d82d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b261df4d-3b33-4344-9c3c-a73feb8773db', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6d838fa7-2ae5-457e-b70a-ef0e132d7e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af3ba9a4-232f-4585-95fc-f83215abb671, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=0660ccb7-a986-45dd-8aa8-e10ddd71144a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.481 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 0660ccb7-a986-45dd-8aa8-e10ddd71144a in datapath b261df4d-3b33-4344-9c3c-a73feb8773db bound to our chassis#033[00m
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.483 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b261df4d-3b33-4344-9c3c-a73feb8773db#033[00m
Feb 25 07:47:28 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:28Z|01218|binding|INFO|Setting lport 0660ccb7-a986-45dd-8aa8-e10ddd71144a ovn-installed in OVS
Feb 25 07:47:28 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:28Z|01219|binding|INFO|Setting lport 0660ccb7-a986-45dd-8aa8-e10ddd71144a up in Southbound
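ovn-controller has now matched the external_ids:iface-id written at 12:47:27 against the logical port, claimed it for this chassis, and flipped it up in the southbound database; that state change is what ultimately fires the network-vif-plugged event back through Neutron. A hedged spot check against the southbound DB (assumes ovn-sbctl is on PATH with a local SB connection):

    import json
    import subprocess

    out = subprocess.check_output(
        ["ovn-sbctl", "--format=json", "find", "Port_Binding",
         "logical_port=0660ccb7-a986-45dd-8aa8-e10ddd71144a"])
    # The chassis column should now reference this host's chassis record.
    print(json.loads(out)["data"])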
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.494 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.494 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fbf72d91-1c4d-4ae7-a3cb-fd0040303fc7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.495 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb261df4d-31 in ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
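The agent provisions metadata per network: a dedicated ovnmeta-<network-uuid> namespace plus a veth pair, one end in the namespace, the other plugged into br-int. A hedged sketch of that plumbing (the agent does this via neutron.privileged and pyroute2; device names from the log lines above):

    from pyroute2 import IPRoute, netns

    netns.create("ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db")
    ipr = IPRoute()
    # veth pair: tapb261df4d-30 stays in the root namespace (added to
    # br-int below), tapb261df4d-31 moves into the ovnmeta namespace.
    ipr.link("add", ifname="tapb261df4d-30", kind="veth",
             peer="tapb261df4d-31")
    idx = ipr.link_lookup(ifname="tapb261df4d-31")[0]
    ipr.link("set", index=idx,
             net_ns_fd="ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db")
    ipr.close()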
Feb 25 07:47:28 np0005629333 systemd-machined[210048]: New machine qemu-150-instance-00000076.
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.500 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb261df4d-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.501 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5a8837c7-886e-4c3c-ba20-4c087ff0ffcd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.502 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[95e070bd-fd79-4b85-8d89-b2f0f21159a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:28 np0005629333 systemd[1]: Started Virtual Machine qemu-150-instance-00000076.
Feb 25 07:47:28 np0005629333 systemd-udevd[350469]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.514 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[8ed8623d-6b36-476e-8b5a-ec7d02134a9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:28 np0005629333 NetworkManager[49836]: <info>  [1772023648.5178] device (tap0660ccb7-a9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:47:28 np0005629333 NetworkManager[49836]: <info>  [1772023648.5183] device (tap0660ccb7-a9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.530 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f847c1-2599-4d0a-9f64-60535a52dc1b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.546 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e13ad397-3509-4d5f-9df2-40c0c9ce7937]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:28 np0005629333 NetworkManager[49836]: <info>  [1772023648.5512] manager: (tapb261df4d-30): new Veth device (/org/freedesktop/NetworkManager/Devices/508)
Feb 25 07:47:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:47:28 np0005629333 systemd-udevd[350471]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.550 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5a647d1a-ed4a-4df4-a821-cfda9b22a53d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.580 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3e342e28-6e01-4bee-973c-d3d02b9d5bb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.582 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc82d95-59aa-40b0-867e-94ba5dbead64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:28 np0005629333 NetworkManager[49836]: <info>  [1772023648.6001] device (tapb261df4d-30): carrier: link connected
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.604 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0eaa3830-7f9f-4c24-b7ab-e26154763fca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.616 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[692464a7-2c95-4067-b8f1-3cac7fd1a0f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb261df4d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d9:6f:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 367], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561800, 'reachable_time': 30020, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350500, 'error': None, 'target': 'ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.627 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f30e8bff-9b21-4889-b083-679bc7772122]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed9:6f1c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561800, 'tstamp': 561800}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350501, 'error': None, 'target': 'ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.640 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5e6a191b-e70c-4fca-b372-f593f5b4f1e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb261df4d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d9:6f:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 367], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561800, 'reachable_time': 30020, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 350502, 'error': None, 'target': 'ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.665 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[56d4b11a-269f-4052-b210-6720797a39cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.705 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[11bdabf5-3ece-49fb-aaa4-6b11d3f8ecc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.706 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb261df4d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.707 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.707 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb261df4d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.709 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:28 np0005629333 NetworkManager[49836]: <info>  [1772023648.7098] manager: (tapb261df4d-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/509)
Feb 25 07:47:28 np0005629333 kernel: tapb261df4d-30: entered promiscuous mode
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.710 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.714 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb261df4d-30, col_values=(('external_ids', {'iface-id': 'f0fc68ea-8ea9-4b0b-9e36-f54d3bc2f6ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.716 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:28 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:28Z|01220|binding|INFO|Releasing lport f0fc68ea-8ea9-4b0b-9e36-f54d3bc2f6ca from this chassis (sb_readonly=0)
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.716 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.717 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b261df4d-3b33-4344-9c3c-a73feb8773db.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b261df4d-3b33-4344-9c3c-a73feb8773db.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.718 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1e6db6cd-53c1-4c8d-b7af-3ee57a0ec7b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.719 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-b261df4d-3b33-4344-9c3c-a73feb8773db
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/b261df4d-3b33-4344-9c3c-a73feb8773db.pid.haproxy
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID b261df4d-3b33-4344-9c3c-a73feb8773db
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:47:28 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:28.720 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db', 'env', 'PROCESS_TAG=haproxy-b261df4d-3b33-4344-9c3c-a73feb8773db', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b261df4d-3b33-4344-9c3c-a73feb8773db.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
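The rendered haproxy config binds 169.254.169.254:80 inside the ovnmeta-<network-uuid> namespace and forwards requests to the agent's unix socket at /var/lib/neutron/metadata_proxy, adding the X-OVN-Network-ID header so the agent can resolve which network each request came from. Once the proxy is running, a hedged smoke test from the hypervisor (needs root; assumes curl is installed):

    import subprocess

    subprocess.run(
        ["ip", "netns", "exec",
         "ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db",
         "curl", "-s", "http://169.254.169.254/openstack"],
        check=True)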
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.724 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.734 244018 DEBUG nova.compute.manager [req-9a8b8013-94db-4a6e-b4ec-46522c8da0ef req-cab7232f-ac7b-413f-b544-63de2cdb70dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Received event network-vif-plugged-0660ccb7-a986-45dd-8aa8-e10ddd71144a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.734 244018 DEBUG oslo_concurrency.lockutils [req-9a8b8013-94db-4a6e-b4ec-46522c8da0ef req-cab7232f-ac7b-413f-b544-63de2cdb70dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.734 244018 DEBUG oslo_concurrency.lockutils [req-9a8b8013-94db-4a6e-b4ec-46522c8da0ef req-cab7232f-ac7b-413f-b544-63de2cdb70dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.734 244018 DEBUG oslo_concurrency.lockutils [req-9a8b8013-94db-4a6e-b4ec-46522c8da0ef req-cab7232f-ac7b-413f-b544-63de2cdb70dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.735 244018 DEBUG nova.compute.manager [req-9a8b8013-94db-4a6e-b4ec-46522c8da0ef req-cab7232f-ac7b-413f-b544-63de2cdb70dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Processing event network-vif-plugged-0660ccb7-a986-45dd-8aa8-e10ddd71144a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
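
[annotation] The Acquiring/acquired/"released" triple around "95730650-...-events" is oslo.concurrency's standard trace for a named in-process lock: pop_instance_event holds it only long enough to match the queued network-vif-plugged event, hence the 0.000s waited/held figures. A minimal sketch of the pattern (lockutils.lock is real oslo.concurrency API; the surrounding function is illustrative):

    from oslo_concurrency import lockutils

    def pop_event(instance_uuid, event_key, pending):
        # lockutils emits 'Acquiring lock "..."', 'Lock "..." acquired ...
        # waited Ns' and 'Lock "..." "released" ... held Ns' exactly as in
        # the lines above.
        with lockutils.lock(f"{instance_uuid}-events"):
            return pending.pop(event_key, None)
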
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.849 244018 DEBUG nova.compute.manager [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.850 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023648.8495708, 95730650-36ac-4eee-8b22-9ea3f01d82d1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.851 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] VM Started (Lifecycle Event)#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.854 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.857 244018 INFO nova.virt.libvirt.driver [-] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Instance spawned successfully.#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.858 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.877 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.884 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.888 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.888 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.889 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.890 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.890 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.891 244018 DEBUG nova.virt.libvirt.driver [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.908 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.908 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023648.8503447, 95730650-36ac-4eee-8b22-9ea3f01d82d1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.908 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.946 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.950 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023648.8541086, 95730650-36ac-4eee-8b22-9ea3f01d82d1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.951 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.994 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:47:28 np0005629333 nova_compute[244014]: 2026-02-25 12:47:28.996 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
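
[annotation] "current DB power_state: 0, VM power_state: 1" compares Nova's stored value with what libvirt reports: in nova.compute.power_state, 0 is NOSTATE and 1 is RUNNING, so the Started/Paused/Resumed burst above just reflects the hypervisor briefly pausing the guest during spawn while the DB still holds the pre-spawn value. A small sketch of that mapping (constants as defined in nova/compute/power_state.py at the time of writing; treat the exact set as an assumption):

    # Mirrors nova.compute.power_state constants:
    STATE_MAP = {0x00: "NOSTATE", 0x01: "RUNNING", 0x03: "PAUSED",
                 0x04: "SHUTDOWN", 0x06: "CRASHED", 0x07: "SUSPENDED"}

    def describe_sync(db_state, vm_state):
        # e.g. describe_sync(0, 1) -> "DB=NOSTATE, hypervisor=RUNNING"
        return f"DB={STATE_MAP[db_state]}, hypervisor={STATE_MAP[vm_state]}"
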
Feb 25 07:47:29 np0005629333 nova_compute[244014]: 2026-02-25 12:47:29.000 244018 INFO nova.compute.manager [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Took 5.72 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:47:29 np0005629333 nova_compute[244014]: 2026-02-25 12:47:29.000 244018 DEBUG nova.compute.manager [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:47:29 np0005629333 podman[350575]: 2026-02-25 12:47:29.052241129 +0000 UTC m=+0.054742676 container create c4bec2971f5a5d44bfda5fbe5b0886aea5c0ed8770d17a0dd446c42b323b07bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 25 07:47:29 np0005629333 nova_compute[244014]: 2026-02-25 12:47:29.062 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:47:29 np0005629333 systemd[1]: Started libpod-conmon-c4bec2971f5a5d44bfda5fbe5b0886aea5c0ed8770d17a0dd446c42b323b07bc.scope.
Feb 25 07:47:29 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:47:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5086818508e5fa7e94bd5cba9a5524707dcb64bf7ba443e8eab59b7dad15f1a4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:47:29 np0005629333 podman[350575]: 2026-02-25 12:47:29.119175179 +0000 UTC m=+0.121676746 container init c4bec2971f5a5d44bfda5fbe5b0886aea5c0ed8770d17a0dd446c42b323b07bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:47:29 np0005629333 podman[350575]: 2026-02-25 12:47:29.024560717 +0000 UTC m=+0.027062304 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:47:29 np0005629333 podman[350575]: 2026-02-25 12:47:29.126042162 +0000 UTC m=+0.128543719 container start c4bec2971f5a5d44bfda5fbe5b0886aea5c0ed8770d17a0dd446c42b323b07bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 25 07:47:29 np0005629333 nova_compute[244014]: 2026-02-25 12:47:29.136 244018 INFO nova.compute.manager [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Took 7.13 seconds to build instance.#033[00m
Feb 25 07:47:29 np0005629333 neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db[350592]: [NOTICE]   (350617) : New worker (350633) forked
Feb 25 07:47:29 np0005629333 neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db[350592]: [NOTICE]   (350617) : Loading success.
Feb 25 07:47:29 np0005629333 podman[350588]: 2026-02-25 12:47:29.157955344 +0000 UTC m=+0.066516680 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
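
[annotation] The health_status=healthy records come from podman periodically executing the test declared in config_data ('test': '/openstack/healthcheck') inside the ovn_metadata_agent container. The same check can be driven by hand; a sketch under the assumption that the standard podman CLI is available (container name taken from the log):

    import subprocess

    def is_healthy(container="ovn_metadata_agent"):
        # "podman healthcheck run" executes the container's configured
        # healthcheck command and exits 0 for healthy, non-zero otherwise.
        r = subprocess.run(["podman", "healthcheck", "run", container])
        return r.returncode == 0
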
Feb 25 07:47:29 np0005629333 nova_compute[244014]: 2026-02-25 12:47:29.168 244018 DEBUG oslo_concurrency.lockutils [None req-5302a1dd-a189-420a-8048-ef1bf8777fb4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:47:29 np0005629333 podman[350591]: 2026-02-25 12:47:29.218976447 +0000 UTC m=+0.131680470 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:47:29 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:29Z|01221|binding|INFO|Releasing lport 4e1af2d0-780b-4ffd-93a0-445de7a22322 from this chassis (sb_readonly=0)
Feb 25 07:47:29 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:29Z|01222|binding|INFO|Releasing lport 8372e6c6-fbbc-48a7-be95-d95e1d2ad95a from this chassis (sb_readonly=0)
Feb 25 07:47:29 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:29Z|01223|binding|INFO|Releasing lport f0fc68ea-8ea9-4b0b-9e36-f54d3bc2f6ca from this chassis (sb_readonly=0)
Feb 25 07:47:29 np0005629333 NetworkManager[49836]: <info>  [1772023649.8417] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/510)
Feb 25 07:47:29 np0005629333 NetworkManager[49836]: <info>  [1772023649.8428] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/511)
Feb 25 07:47:29 np0005629333 nova_compute[244014]: 2026-02-25 12:47:29.844 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:29 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:29Z|01224|binding|INFO|Releasing lport 4e1af2d0-780b-4ffd-93a0-445de7a22322 from this chassis (sb_readonly=0)
Feb 25 07:47:29 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:29Z|01225|binding|INFO|Releasing lport 8372e6c6-fbbc-48a7-be95-d95e1d2ad95a from this chassis (sb_readonly=0)
Feb 25 07:47:29 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:29Z|01226|binding|INFO|Releasing lport f0fc68ea-8ea9-4b0b-9e36-f54d3bc2f6ca from this chassis (sb_readonly=0)
Feb 25 07:47:29 np0005629333 nova_compute[244014]: 2026-02-25 12:47:29.863 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:29 np0005629333 nova_compute[244014]: 2026-02-25 12:47:29.870 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:29.901 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:05:ae 10.100.0.2 2001:db8::f816:3eff:fe99:5ae'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe99:5ae/64', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=718d24fb-493f-42ce-a4ab-2f64b57cb3ac, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=13855d81-6f9e-45ad-9b2f-8fbca36efa4a) old=Port_Binding(mac=['fa:16:3e:99:05:ae 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:47:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:29.903 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 13855d81-6f9e-45ad-9b2f-8fbca36efa4a in datapath 1002c177-97f1-4286-9217-c8db43f28825 updated#033[00m
Feb 25 07:47:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:29.906 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1002c177-97f1-4286-9217-c8db43f28825, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:47:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:29.907 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bbfdad46-c5c5-4455-8313-a1ae295133ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
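
[annotation] The "Matched UPDATE: PortBindingUpdatedEvent(...)" lines are ovsdbapp's row-event machinery: the agent registers handlers against the southbound Port_Binding table and is called with the new and old rows, which is how it notices the metadata port's address set changing and decides whether to tear the namespace down. A minimal sketch of such a handler (RowEvent and its (events, table, conditions) constructor are real ovsdbapp API; the class body here is illustrative, not Neutron's actual handler):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdated(row_event.RowEvent):
        def __init__(self):
            # (events, table, conditions) -- matches the repr logged above:
            # events=('update',), table='Port_Binding', conditions=None.
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def run(self, event, row, old):
            # Invoked after the IDL logs "Matched UPDATE ..." for this row.
            print("port", row.logical_port, "updated")
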
Feb 25 07:47:30 np0005629333 nova_compute[244014]: 2026-02-25 12:47:30.258 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:30 np0005629333 nova_compute[244014]: 2026-02-25 12:47:30.290 244018 DEBUG nova.compute.manager [req-0d573559-55f6-44ab-a15e-4b3c4ec1654e req-665fb788-9c7b-4bec-a645-703bf0a3da40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-changed-0358e18d-8ce8-43a7-a8b2-11193708891a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:47:30 np0005629333 nova_compute[244014]: 2026-02-25 12:47:30.291 244018 DEBUG nova.compute.manager [req-0d573559-55f6-44ab-a15e-4b3c4ec1654e req-665fb788-9c7b-4bec-a645-703bf0a3da40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Refreshing instance network info cache due to event network-changed-0358e18d-8ce8-43a7-a8b2-11193708891a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:47:30 np0005629333 nova_compute[244014]: 2026-02-25 12:47:30.291 244018 DEBUG oslo_concurrency.lockutils [req-0d573559-55f6-44ab-a15e-4b3c4ec1654e req-665fb788-9c7b-4bec-a645-703bf0a3da40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:47:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 246 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 07:47:30 np0005629333 nova_compute[244014]: 2026-02-25 12:47:30.620 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Updating instance_info_cache with network_info: [{"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "address": "fa:16:3e:8e:ea:66", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebd67787-8a", "ovs_interfaceid": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:47:30 np0005629333 nova_compute[244014]: 2026-02-25 12:47:30.643 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:47:30 np0005629333 nova_compute[244014]: 2026-02-25 12:47:30.644 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 25 07:47:30 np0005629333 nova_compute[244014]: 2026-02-25 12:47:30.645 244018 DEBUG oslo_concurrency.lockutils [req-0d573559-55f6-44ab-a15e-4b3c4ec1654e req-665fb788-9c7b-4bec-a645-703bf0a3da40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:47:30 np0005629333 nova_compute[244014]: 2026-02-25 12:47:30.646 244018 DEBUG nova.network.neutron [req-0d573559-55f6-44ab-a15e-4b3c4ec1654e req-665fb788-9c7b-4bec-a645-703bf0a3da40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Refreshing network info cache for port 0358e18d-8ce8-43a7-a8b2-11193708891a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:47:30 np0005629333 nova_compute[244014]: 2026-02-25 12:47:30.648 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:47:30 np0005629333 nova_compute[244014]: 2026-02-25 12:47:30.649 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:47:30 np0005629333 nova_compute[244014]: 2026-02-25 12:47:30.671 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:47:30 np0005629333 nova_compute[244014]: 2026-02-25 12:47:30.672 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:47:30 np0005629333 nova_compute[244014]: 2026-02-25 12:47:30.672 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:47:30 np0005629333 nova_compute[244014]: 2026-02-25 12:47:30.672 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:47:30 np0005629333 nova_compute[244014]: 2026-02-25 12:47:30.673 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
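
[annotation] update_available_resource shells out to ceph df because this node's instance disks live on RBD, so "local" disk capacity is really cluster capacity. A sketch of the same query (command line taken from the log; the JSON "stats" keys are what recent Ceph releases emit and should be treated as an assumption):

    import json
    import subprocess

    def rbd_capacity_gib():
        out = subprocess.check_output(
            ["ceph", "df", "--format=json",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
        stats = json.loads(out)["stats"]
        # Cluster-wide figures, matching the "59 GiB / 60 GiB avail"
        # numbers in the surrounding pgmap and resource tracker lines.
        return (stats["total_avail_bytes"] / 2**30,
                stats["total_bytes"] / 2**30)
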
Feb 25 07:47:30 np0005629333 nova_compute[244014]: 2026-02-25 12:47:30.827 244018 DEBUG nova.compute.manager [req-e19dea95-a9f3-477b-ab4f-38a1d55fc6c3 req-f50cefd5-2c96-47e5-98b5-308db9f1cc47 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Received event network-vif-plugged-0660ccb7-a986-45dd-8aa8-e10ddd71144a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:47:30 np0005629333 nova_compute[244014]: 2026-02-25 12:47:30.830 244018 DEBUG oslo_concurrency.lockutils [req-e19dea95-a9f3-477b-ab4f-38a1d55fc6c3 req-f50cefd5-2c96-47e5-98b5-308db9f1cc47 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:47:30 np0005629333 nova_compute[244014]: 2026-02-25 12:47:30.830 244018 DEBUG oslo_concurrency.lockutils [req-e19dea95-a9f3-477b-ab4f-38a1d55fc6c3 req-f50cefd5-2c96-47e5-98b5-308db9f1cc47 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:47:30 np0005629333 nova_compute[244014]: 2026-02-25 12:47:30.830 244018 DEBUG oslo_concurrency.lockutils [req-e19dea95-a9f3-477b-ab4f-38a1d55fc6c3 req-f50cefd5-2c96-47e5-98b5-308db9f1cc47 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:47:30 np0005629333 nova_compute[244014]: 2026-02-25 12:47:30.830 244018 DEBUG nova.compute.manager [req-e19dea95-a9f3-477b-ab4f-38a1d55fc6c3 req-f50cefd5-2c96-47e5-98b5-308db9f1cc47 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] No waiting events found dispatching network-vif-plugged-0660ccb7-a986-45dd-8aa8-e10ddd71144a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:47:30 np0005629333 nova_compute[244014]: 2026-02-25 12:47:30.831 244018 WARNING nova.compute.manager [req-e19dea95-a9f3-477b-ab4f-38a1d55fc6c3 req-f50cefd5-2c96-47e5-98b5-308db9f1cc47 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Received unexpected event network-vif-plugged-0660ccb7-a986-45dd-8aa8-e10ddd71144a for instance with vm_state active and task_state None.#033[00m
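
[annotation] This WARNING appears benign in this trace: Neutron re-sent network-vif-plugged for port 0660ccb7-... after the instance had already gone active, so no waiter was left to pop ("No waiting events found dispatching ..." just above). The identity Nova matches external events on is simply the event name plus the port tag; a sketch of that naming convention (helper name is invented):

    def external_event_key(name, tag=None):
        # Nova logs and dispatches external events as "<name>-<tag>", e.g.
        # "network-vif-plugged-0660ccb7-a986-45dd-8aa8-e10ddd71144a".
        return f"{name}-{tag}" if tag else name
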
Feb 25 07:47:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:47:30
Feb 25 07:47:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:47:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:47:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', '.mgr', 'volumes', 'images', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta']
Feb 25 07:47:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:47:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:47:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3858140156' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:47:31 np0005629333 nova_compute[244014]: 2026-02-25 12:47:31.253 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:47:31 np0005629333 nova_compute[244014]: 2026-02-25 12:47:31.476 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:47:31 np0005629333 nova_compute[244014]: 2026-02-25 12:47:31.476 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:47:31 np0005629333 nova_compute[244014]: 2026-02-25 12:47:31.480 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:47:31 np0005629333 nova_compute[244014]: 2026-02-25 12:47:31.480 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:47:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:47:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:47:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:47:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:47:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:47:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:47:31 np0005629333 nova_compute[244014]: 2026-02-25 12:47:31.632 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:47:31 np0005629333 nova_compute[244014]: 2026-02-25 12:47:31.633 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3287MB free_disk=59.945687803439796GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 07:47:31 np0005629333 nova_compute[244014]: 2026-02-25 12:47:31.633 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:47:31 np0005629333 nova_compute[244014]: 2026-02-25 12:47:31.633 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:47:31 np0005629333 nova_compute[244014]: 2026-02-25 12:47:31.756 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 848fd033-0ebb-460a-a8a0-56583fa5f481 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:47:31 np0005629333 nova_compute[244014]: 2026-02-25 12:47:31.756 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 95730650-36ac-4eee-8b22-9ea3f01d82d1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:47:31 np0005629333 nova_compute[244014]: 2026-02-25 12:47:31.756 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 07:47:31 np0005629333 nova_compute[244014]: 2026-02-25 12:47:31.757 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 07:47:31 np0005629333 nova_compute[244014]: 2026-02-25 12:47:31.845 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:47:31 np0005629333 nova_compute[244014]: 2026-02-25 12:47:31.920 244018 DEBUG nova.network.neutron [req-0d573559-55f6-44ab-a15e-4b3c4ec1654e req-665fb788-9c7b-4bec-a645-703bf0a3da40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Updated VIF entry in instance network info cache for port 0358e18d-8ce8-43a7-a8b2-11193708891a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:47:31 np0005629333 nova_compute[244014]: 2026-02-25 12:47:31.921 244018 DEBUG nova.network.neutron [req-0d573559-55f6-44ab-a15e-4b3c4ec1654e req-665fb788-9c7b-4bec-a645-703bf0a3da40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Updating instance_info_cache with network_info: [{"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "address": "fa:16:3e:8e:ea:66", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebd67787-8a", "ovs_interfaceid": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:47:31 np0005629333 nova_compute[244014]: 2026-02-25 12:47:31.942 244018 DEBUG oslo_concurrency.lockutils [req-0d573559-55f6-44ab-a15e-4b3c4ec1654e req-665fb788-9c7b-4bec-a645-703bf0a3da40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:47:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:47:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:47:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:47:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:47:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:47:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:47:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:47:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:47:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:47:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:47:32 np0005629333 nova_compute[244014]: 2026-02-25 12:47:32.353 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:47:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2705992989' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:47:32 np0005629333 nova_compute[244014]: 2026-02-25 12:47:32.411 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:47:32 np0005629333 nova_compute[244014]: 2026-02-25 12:47:32.417 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:47:32 np0005629333 nova_compute[244014]: 2026-02-25 12:47:32.431 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
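
[annotation] The inventory line makes the capacity arithmetic explicit: placement advertises (total - reserved) * allocation_ratio schedulable units per resource class, while the tracker's free_vcpus is the raw count. A worked check with the figures above (this is arithmetic on the logged inventory, not code from Nova):

    VCPU = {"total": 8, "reserved": 0, "allocation_ratio": 4.0}
    MEMORY_MB = {"total": 7679, "reserved": 512, "allocation_ratio": 1.0}

    def schedulable(inv):
        return (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]

    assert schedulable(VCPU) == 32.0        # 2 VCPUs allocated so far
    assert schedulable(MEMORY_MB) == 7167.0  # MiB visible to placement
    # free_vcpus=6 in the hypervisor view is simply total (8) minus
    # allocated (2), before the 4.0 overcommit ratio is applied.
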
Feb 25 07:47:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 246 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Feb 25 07:47:32 np0005629333 nova_compute[244014]: 2026-02-25 12:47:32.448 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 07:47:32 np0005629333 nova_compute[244014]: 2026-02-25 12:47:32.448 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:47:32 np0005629333 nova_compute[244014]: 2026-02-25 12:47:32.675 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:47:32 np0005629333 nova_compute[244014]: 2026-02-25 12:47:32.676 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:47:32 np0005629333 nova_compute[244014]: 2026-02-25 12:47:32.676 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:47:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:47:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 246 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 172 op/s
Feb 25 07:47:34 np0005629333 nova_compute[244014]: 2026-02-25 12:47:34.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:47:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:34.979 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:05:ae 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=718d24fb-493f-42ce-a4ab-2f64b57cb3ac, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=13855d81-6f9e-45ad-9b2f-8fbca36efa4a) old=Port_Binding(mac=['fa:16:3e:99:05:ae 10.100.0.2 2001:db8::f816:3eff:fe99:5ae'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe99:5ae/64', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:47:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:34.980 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 13855d81-6f9e-45ad-9b2f-8fbca36efa4a in datapath 1002c177-97f1-4286-9217-c8db43f28825 updated#033[00m
Feb 25 07:47:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:34.981 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1002c177-97f1-4286-9217-c8db43f28825, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:47:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:34.982 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[002ff129-e920-44a9-8b63-83ab537a167c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:35 np0005629333 nova_compute[244014]: 2026-02-25 12:47:35.051 244018 DEBUG nova.compute.manager [req-9b6b142f-8917-4e74-a38a-c35951f56dbd req-0c2b02fd-1d7d-4f4a-8b9a-3f6754c24d37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Received event network-changed-0660ccb7-a986-45dd-8aa8-e10ddd71144a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:47:35 np0005629333 nova_compute[244014]: 2026-02-25 12:47:35.051 244018 DEBUG nova.compute.manager [req-9b6b142f-8917-4e74-a38a-c35951f56dbd req-0c2b02fd-1d7d-4f4a-8b9a-3f6754c24d37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Refreshing instance network info cache due to event network-changed-0660ccb7-a986-45dd-8aa8-e10ddd71144a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:47:35 np0005629333 nova_compute[244014]: 2026-02-25 12:47:35.052 244018 DEBUG oslo_concurrency.lockutils [req-9b6b142f-8917-4e74-a38a-c35951f56dbd req-0c2b02fd-1d7d-4f4a-8b9a-3f6754c24d37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-95730650-36ac-4eee-8b22-9ea3f01d82d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:47:35 np0005629333 nova_compute[244014]: 2026-02-25 12:47:35.052 244018 DEBUG oslo_concurrency.lockutils [req-9b6b142f-8917-4e74-a38a-c35951f56dbd req-0c2b02fd-1d7d-4f4a-8b9a-3f6754c24d37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-95730650-36ac-4eee-8b22-9ea3f01d82d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:47:35 np0005629333 nova_compute[244014]: 2026-02-25 12:47:35.052 244018 DEBUG nova.network.neutron [req-9b6b142f-8917-4e74-a38a-c35951f56dbd req-0c2b02fd-1d7d-4f4a-8b9a-3f6754c24d37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Refreshing network info cache for port 0660ccb7-a986-45dd-8aa8-e10ddd71144a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:47:35 np0005629333 nova_compute[244014]: 2026-02-25 12:47:35.259 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 246 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 167 op/s
Feb 25 07:47:36 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:36Z|00139|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fd:1f:00 10.100.0.11
Feb 25 07:47:36 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:36Z|00140|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fd:1f:00 10.100.0.11
Feb 25 07:47:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:36.666 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:05:ae 10.100.0.2 2001:db8::f816:3eff:fe99:5ae'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe99:5ae/64', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=718d24fb-493f-42ce-a4ab-2f64b57cb3ac, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=13855d81-6f9e-45ad-9b2f-8fbca36efa4a) old=Port_Binding(mac=['fa:16:3e:99:05:ae 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:47:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:36.667 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 13855d81-6f9e-45ad-9b2f-8fbca36efa4a in datapath 1002c177-97f1-4286-9217-c8db43f28825 updated#033[00m
Feb 25 07:47:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:36.668 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1002c177-97f1-4286-9217-c8db43f28825, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:47:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:36.669 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ce687954-e767-4681-9c0e-1aced68aede9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:36 np0005629333 nova_compute[244014]: 2026-02-25 12:47:36.692 244018 DEBUG nova.network.neutron [req-9b6b142f-8917-4e74-a38a-c35951f56dbd req-0c2b02fd-1d7d-4f4a-8b9a-3f6754c24d37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Updated VIF entry in instance network info cache for port 0660ccb7-a986-45dd-8aa8-e10ddd71144a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:47:36 np0005629333 nova_compute[244014]: 2026-02-25 12:47:36.693 244018 DEBUG nova.network.neutron [req-9b6b142f-8917-4e74-a38a-c35951f56dbd req-0c2b02fd-1d7d-4f4a-8b9a-3f6754c24d37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Updating instance_info_cache with network_info: [{"id": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "address": "fa:16:3e:cb:b0:7a", "network": {"id": "b261df4d-3b33-4344-9c3c-a73feb8773db", "bridge": "br-int", "label": "tempest-network-smoke--1044834333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0660ccb7-a9", "ovs_interfaceid": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:47:36 np0005629333 nova_compute[244014]: 2026-02-25 12:47:36.714 244018 DEBUG oslo_concurrency.lockutils [req-9b6b142f-8917-4e74-a38a-c35951f56dbd req-0c2b02fd-1d7d-4f4a-8b9a-3f6754c24d37 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-95730650-36ac-4eee-8b22-9ea3f01d82d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:47:36 np0005629333 nova_compute[244014]: 2026-02-25 12:47:36.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:47:36 np0005629333 nova_compute[244014]: 2026-02-25 12:47:36.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
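[Editor's note] The "skipping" line is a plain config gate: _reclaim_queued_deletes only reclaims soft-deleted instances when reclaim_instance_interval is positive, and it defaults to 0. A minimal sketch of that guard, inferred from the two log lines rather than copied from nova's source:

    # Guard behind "CONF.reclaim_instance_interval <= 0, skipping...".
    def _reclaim_queued_deletes(self, context):
        interval = CONF.reclaim_instance_interval  # oslo.config option, default 0
        if interval <= 0:
            LOG.debug("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # ...otherwise look up SOFT_DELETED instances older than
        # `interval` seconds and delete them for real.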
Feb 25 07:47:37 np0005629333 nova_compute[244014]: 2026-02-25 12:47:37.359 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 279 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 234 op/s
Feb 25 07:47:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:47:40 np0005629333 nova_compute[244014]: 2026-02-25 12:47:40.260 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 305 active+clean; 279 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 140 op/s
Feb 25 07:47:40 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:40Z|00141|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cb:b0:7a 10.100.0.12
Feb 25 07:47:40 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:40Z|00142|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cb:b0:7a 10.100.0.12
Feb 25 07:47:40 np0005629333 nova_compute[244014]: 2026-02-25 12:47:40.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:47:42 np0005629333 nova_compute[244014]: 2026-02-25 12:47:42.360 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:42.403 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:47:42 np0005629333 nova_compute[244014]: 2026-02-25 12:47:42.404 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:42.405 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 25 07:47:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 307 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.9 MiB/s wr, 185 op/s
Feb 25 07:47:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:42.520 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:05:ae 2001:db8::f816:3eff:fe99:5ae'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe99:5ae/64', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=718d24fb-493f-42ce-a4ab-2f64b57cb3ac, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=13855d81-6f9e-45ad-9b2f-8fbca36efa4a) old=Port_Binding(mac=['fa:16:3e:99:05:ae 10.100.0.2 2001:db8::f816:3eff:fe99:5ae'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe99:5ae/64', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:47:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:42.523 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 13855d81-6f9e-45ad-9b2f-8fbca36efa4a in datapath 1002c177-97f1-4286-9217-c8db43f28825 updated#033[00m
Feb 25 07:47:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:42.526 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1002c177-97f1-4286-9217-c8db43f28825, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:47:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:42.527 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2c642377-822b-4212-bfa8-93e3f2af7d7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:47:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:47:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:47:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:47:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014518223260928079 of space, bias 1.0, pg target 0.4355466978278424 quantized to 32 (current 32)
Feb 25 07:47:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:47:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:47:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:47:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:47:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:47:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002494016506867987 of space, bias 1.0, pg target 0.7482049520603962 quantized to 32 (current 32)
Feb 25 07:47:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:47:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4046466994069785e-06 of space, bias 4.0, pg target 0.0016855760392883742 quantized to 16 (current 16)
Feb 25 07:47:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:47:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:47:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:47:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:47:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:47:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:47:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:47:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:47:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:47:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
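[Editor's note] Each pg_autoscaler pair above computes pool pg target = capacity_ratio * bias * per-root PG budget. The logged numbers reproduce exactly with a budget of 300, consistent with 3 OSDs at the default mon_target_pg_per_osd = 100 (an assumption about this cluster, not stated in the log). The ideal is then quantized to a power of two, and because the autoscaler only acts when ideal and current pg_num differ by more than its threshold (3x by default), these near-empty pools stay at their current values. A quick check:

    # Reproducing the pg_autoscaler arithmetic from the lines above.
    # ROOT_PG_BUDGET = 300 is an assumption (e.g. 3 OSDs * 100 PGs/OSD).
    ROOT_PG_BUDGET = 300

    pools = [('.mgr',               7.185749983720779e-06,  1.0),
             ('vms',                0.0014518223260928079,  1.0),
             ('images',             0.002494016506867987,   1.0),
             ('cephfs.cephfs.meta', 1.4046466994069785e-06, 4.0)]
    for name, ratio, bias in pools:
        print(name, ratio * bias * ROOT_PG_BUDGET)
    # Output matches the logged pg targets (up to the last float digit):
    # .mgr 0.00216 -> 1, vms 0.4355 -> 32, images 0.7482 -> 32, meta 0.00169 -> 16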
Feb 25 07:47:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:47:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 4.3 MiB/s wr, 148 op/s
Feb 25 07:47:45 np0005629333 nova_compute[244014]: 2026-02-25 12:47:45.263 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2016: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 716 KiB/s rd, 4.3 MiB/s wr, 130 op/s
Feb 25 07:47:47 np0005629333 nova_compute[244014]: 2026-02-25 12:47:47.362 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:47 np0005629333 nova_compute[244014]: 2026-02-25 12:47:47.691 244018 INFO nova.compute.manager [None req-70b4cf28-6122-4eed-8752-18ace07472d8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Get console output#033[00m
Feb 25 07:47:47 np0005629333 nova_compute[244014]: 2026-02-25 12:47:47.698 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
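[Editor's note] The INFO line above is nova's privsep helper draining the guest's console pty for the "Get console output" call one line earlier; "can't concat NoneType to bytes" is a TypeError from appending a read that returned None onto a bytes buffer, which the helper deliberately logs and ignores rather than failing the API request. A defensive sketch of that pattern (names and structure are illustrative, not nova's exact source):

    # Draining a console pty while tolerating bad reads, in the spirit of
    # the "Ignored error while reading..." line above.
    import os

    def drain_console(fd):
        data = b''
        try:
            while True:
                chunk = os.read(fd, 4096)   # may raise OSError on a dead pty
                if not chunk:               # EOF (b'') or None from a wrapper
                    break
                data += chunk
        except (OSError, TypeError) as exc:
            print("Ignored error while reading from instance console pty: "
                  f"{exc}")
        return data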
Feb 25 07:47:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 717 KiB/s rd, 4.3 MiB/s wr, 131 op/s
Feb 25 07:47:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:47:48 np0005629333 nova_compute[244014]: 2026-02-25 12:47:48.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:47:48 np0005629333 nova_compute[244014]: 2026-02-25 12:47:48.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 25 07:47:48 np0005629333 nova_compute[244014]: 2026-02-25 12:47:48.926 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb 25 07:47:49 np0005629333 nova_compute[244014]: 2026-02-25 12:47:49.126 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "271f6569-a8f6-43a3-ac98-511eff77c426" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:47:49 np0005629333 nova_compute[244014]: 2026-02-25 12:47:49.126 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
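[Editor's note] The acquire/acquired pair above is oslo.concurrency's synchronized decorator serializing the whole build on the instance UUID, so a concurrent delete or retry of 271f6569-a8f6-43a3-ac98-511eff77c426 cannot interleave with this build. The shape of that pattern, simplified to the public lockutils API implied by the lock name in the log:

    # Per-instance build serialization, as logged above.
    from oslo_concurrency import lockutils

    def build_and_run_instance(self, context, instance, *args, **kwargs):
        @lockutils.synchronized(instance.uuid)
        def _locked_do_build_and_run_instance(*a, **kw):
            # Emits the "acquired by ... waited" / "released by ... held"
            # DEBUG lines seen here when debug logging is enabled.
            return self._do_build_and_run_instance(*a, **kw)
        return _locked_do_build_and_run_instance(context, instance,
                                                 *args, **kwargs)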
Feb 25 07:47:49 np0005629333 nova_compute[244014]: 2026-02-25 12:47:49.243 244018 DEBUG nova.compute.manager [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:47:49 np0005629333 nova_compute[244014]: 2026-02-25 12:47:49.393 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:47:49 np0005629333 nova_compute[244014]: 2026-02-25 12:47:49.394 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:47:49 np0005629333 nova_compute[244014]: 2026-02-25 12:47:49.403 244018 DEBUG nova.virt.hardware [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:47:49 np0005629333 nova_compute[244014]: 2026-02-25 12:47:49.403 244018 INFO nova.compute.claims [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:47:49 np0005629333 nova_compute[244014]: 2026-02-25 12:47:49.616 244018 DEBUG oslo_concurrency.processutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:47:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:47:50 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/341875472' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:47:50 np0005629333 nova_compute[244014]: 2026-02-25 12:47:50.191 244018 DEBUG oslo_concurrency.processutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
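[Editor's note] The Running/returned pair at 12:47:49.616 and 12:47:50.191 is nova's RBD driver measuring pool capacity by shelling out to `ceph df`; the mon audit lines in between show the monitor dispatching that same command for client.openstack. The equivalent call with the real processutils API, arguments copied from the log (the JSON key at the end is an example of what a caller reads, not a quote from this log):

    # `ceph df --format=json` exactly as the compute service ran it above.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    total_bytes = stats['stats']['total_bytes']   # cluster-wide capacity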
Feb 25 07:47:50 np0005629333 nova_compute[244014]: 2026-02-25 12:47:50.196 244018 DEBUG nova.compute.provider_tree [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:47:50 np0005629333 nova_compute[244014]: 2026-02-25 12:47:50.213 244018 DEBUG nova.scheduler.client.report [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
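[Editor's note] The inventory dict above decodes to the node's schedulable capacity under placement's usual formula, capacity = (total - reserved) * allocation_ratio: 32 VCPUs, 7167 MB of RAM, and 52.2 GB of disk. Worked out:

    # Effective capacity implied by the inventory logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0 / MEMORY_MB 7167.0 / DISK_GB 52.2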
Feb 25 07:47:50 np0005629333 nova_compute[244014]: 2026-02-25 12:47:50.240 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:47:50 np0005629333 nova_compute[244014]: 2026-02-25 12:47:50.240 244018 DEBUG nova.compute.manager [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:47:50 np0005629333 nova_compute[244014]: 2026-02-25 12:47:50.265 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:50 np0005629333 nova_compute[244014]: 2026-02-25 12:47:50.291 244018 DEBUG nova.compute.manager [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:47:50 np0005629333 nova_compute[244014]: 2026-02-25 12:47:50.291 244018 DEBUG nova.network.neutron [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:47:50 np0005629333 nova_compute[244014]: 2026-02-25 12:47:50.308 244018 INFO nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:47:50 np0005629333 nova_compute[244014]: 2026-02-25 12:47:50.323 244018 DEBUG nova.compute.manager [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:47:50 np0005629333 nova_compute[244014]: 2026-02-25 12:47:50.396 244018 DEBUG nova.compute.manager [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:47:50 np0005629333 nova_compute[244014]: 2026-02-25 12:47:50.398 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:47:50 np0005629333 nova_compute[244014]: 2026-02-25 12:47:50.399 244018 INFO nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Creating image(s)#033[00m
Feb 25 07:47:50 np0005629333 nova_compute[244014]: 2026-02-25 12:47:50.431 244018 DEBUG nova.storage.rbd_utils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 271f6569-a8f6-43a3-ac98-511eff77c426_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:47:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Feb 25 07:47:50 np0005629333 nova_compute[244014]: 2026-02-25 12:47:50.469 244018 DEBUG nova.storage.rbd_utils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 271f6569-a8f6-43a3-ac98-511eff77c426_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:47:50 np0005629333 nova_compute[244014]: 2026-02-25 12:47:50.506 244018 DEBUG nova.storage.rbd_utils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 271f6569-a8f6-43a3-ac98-511eff77c426_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:47:50 np0005629333 nova_compute[244014]: 2026-02-25 12:47:50.510 244018 DEBUG oslo_concurrency.processutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:47:50 np0005629333 nova_compute[244014]: 2026-02-25 12:47:50.550 244018 DEBUG nova.policy [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
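[Editor's note] The policy DEBUG line above is benign: nova probes the network:attach_external_network rule for a caller whose roles are only reader and member, gets a deny, and simply excludes external networks from the port search instead of raising. A sketch of such a soft probe with oslo.policy (the 'role:admin' default is an assumption for illustration):

    # Soft policy probe in the spirit of the "Policy check ... failed" line.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'role:admin'))  # assumed default

    creds = {'roles': ['reader', 'member'],
             'project_id': '25fa1e8dd32c483686f869da2604f2b1'}
    if not enforcer.enforce('network:attach_external_network', {}, creds):
        pass  # deny: filter external networks out rather than error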
Feb 25 07:47:50 np0005629333 nova_compute[244014]: 2026-02-25 12:47:50.591 244018 DEBUG oslo_concurrency.processutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
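[Editor's note] The qemu-img probe above runs under oslo_concurrency.prlimit, capping address space at 1 GiB (--as=1073741824) and CPU time at 30 s (--cpu=30), so a malformed or hostile qcow2 header cannot wedge the host while being inspected. The same guard expressed programmatically, using only the documented processutils API:

    # `qemu-img info` under resource limits, as in the command logged above.
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1024 ** 3,  # --as
                                        cpu_time=30)              # --cpu
    out, _err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        env_variables={'LC_ALL': 'C', 'LANG': 'C'},
        prlimit=limits)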
Feb 25 07:47:50 np0005629333 nova_compute[244014]: 2026-02-25 12:47:50.592 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:47:50 np0005629333 nova_compute[244014]: 2026-02-25 12:47:50.594 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:47:50 np0005629333 nova_compute[244014]: 2026-02-25 12:47:50.594 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:47:50 np0005629333 nova_compute[244014]: 2026-02-25 12:47:50.628 244018 DEBUG nova.storage.rbd_utils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 271f6569-a8f6-43a3-ac98-511eff77c426_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:47:50 np0005629333 nova_compute[244014]: 2026-02-25 12:47:50.633 244018 DEBUG oslo_concurrency.processutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 271f6569-a8f6-43a3-ac98-511eff77c426_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:47:50 np0005629333 nova_compute[244014]: 2026-02-25 12:47:50.963 244018 DEBUG oslo_concurrency.processutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 271f6569-a8f6-43a3-ac98-511eff77c426_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.330s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:47:51 np0005629333 nova_compute[244014]: 2026-02-25 12:47:51.042 244018 DEBUG nova.storage.rbd_utils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image 271f6569-a8f6-43a3-ac98-511eff77c426_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
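[Editor's note] The import at 12:47:50.633-50.963 and the resize at 12:47:51.042 are nova's RBD image backend copying the cached base image into the vms pool and growing it to the flavor's 1 GiB root disk. Roughly the same two steps expressed with processutils (nova performs the resize in-process through librbd; the `rbd resize` CLI below is an equivalent, not a transcript of what nova ran):

    # Equivalent of the rbd import + resize steps logged above.
    from oslo_concurrency import processutils

    base = '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'
    image = '271f6569-a8f6-43a3-ac98-511eff77c426_disk'
    ceph_args = ('--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

    processutils.execute('rbd', 'import', '--pool', 'vms', base, image,
                         '--image-format=2', *ceph_args)
    processutils.execute('rbd', 'resize', '--pool', 'vms', '--image', image,
                         '--size', '1G', *ceph_args)  # 1073741824 bytes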
Feb 25 07:47:51 np0005629333 nova_compute[244014]: 2026-02-25 12:47:51.144 244018 DEBUG nova.objects.instance [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid 271f6569-a8f6-43a3-ac98-511eff77c426 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:47:51 np0005629333 nova_compute[244014]: 2026-02-25 12:47:51.159 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:47:51 np0005629333 nova_compute[244014]: 2026-02-25 12:47:51.160 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Ensure instance console log exists: /var/lib/nova/instances/271f6569-a8f6-43a3-ac98-511eff77c426/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:47:51 np0005629333 nova_compute[244014]: 2026-02-25 12:47:51.160 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:47:51 np0005629333 nova_compute[244014]: 2026-02-25 12:47:51.161 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:47:51 np0005629333 nova_compute[244014]: 2026-02-25 12:47:51.161 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:47:51 np0005629333 nova_compute[244014]: 2026-02-25 12:47:51.583 244018 DEBUG nova.network.neutron [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Successfully created port: 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:47:52 np0005629333 nova_compute[244014]: 2026-02-25 12:47:52.364 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:52.407 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
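[Editor's note] This transaction closes the loop opened at 12:47:42.405 ("Delaying updating chassis table for 10 seconds"): ten seconds after seeing SB_Global.nb_cfg move from 34 to 35, the agent acknowledges the new value by writing it into its Chassis_Private external_ids. With ovsdbapp's public API that write looks roughly like this (sb_idl stands for the agent's southbound connection, an assumed name):

    # Acking nb_cfg=35 in Chassis_Private, as the DbSetCommand above does.
    sb_idl.db_set(
        'Chassis_Private',
        'a594384c-d614-4492-9e0a-4d6ec095920c',   # record UUID from the log
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),
    ).execute(check_error=True)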
Feb 25 07:47:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2019: 305 pgs: 305 active+clean; 345 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 3.5 MiB/s wr, 80 op/s
Feb 25 07:47:52 np0005629333 nova_compute[244014]: 2026-02-25 12:47:52.938 244018 DEBUG nova.network.neutron [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Successfully created port: 0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:47:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:47:54 np0005629333 nova_compute[244014]: 2026-02-25 12:47:54.179 244018 DEBUG nova.network.neutron [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Successfully updated port: 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:47:54 np0005629333 nova_compute[244014]: 2026-02-25 12:47:54.278 244018 DEBUG nova.compute.manager [req-26959e06-026a-4cf3-8791-05e5263f7cc3 req-fddc6246-9d85-48ad-8a0b-5e31b644f427 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-changed-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:47:54 np0005629333 nova_compute[244014]: 2026-02-25 12:47:54.279 244018 DEBUG nova.compute.manager [req-26959e06-026a-4cf3-8791-05e5263f7cc3 req-fddc6246-9d85-48ad-8a0b-5e31b644f427 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Refreshing instance network info cache due to event network-changed-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:47:54 np0005629333 nova_compute[244014]: 2026-02-25 12:47:54.279 244018 DEBUG oslo_concurrency.lockutils [req-26959e06-026a-4cf3-8791-05e5263f7cc3 req-fddc6246-9d85-48ad-8a0b-5e31b644f427 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:47:54 np0005629333 nova_compute[244014]: 2026-02-25 12:47:54.279 244018 DEBUG oslo_concurrency.lockutils [req-26959e06-026a-4cf3-8791-05e5263f7cc3 req-fddc6246-9d85-48ad-8a0b-5e31b644f427 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:47:54 np0005629333 nova_compute[244014]: 2026-02-25 12:47:54.280 244018 DEBUG nova.network.neutron [req-26959e06-026a-4cf3-8791-05e5263f7cc3 req-fddc6246-9d85-48ad-8a0b-5e31b644f427 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Refreshing network info cache for port 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:47:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 2.2 MiB/s wr, 47 op/s
Feb 25 07:47:54 np0005629333 nova_compute[244014]: 2026-02-25 12:47:54.492 244018 DEBUG nova.network.neutron [req-26959e06-026a-4cf3-8791-05e5263f7cc3 req-fddc6246-9d85-48ad-8a0b-5e31b644f427 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:47:54 np0005629333 nova_compute[244014]: 2026-02-25 12:47:54.801 244018 DEBUG nova.network.neutron [req-26959e06-026a-4cf3-8791-05e5263f7cc3 req-fddc6246-9d85-48ad-8a0b-5e31b644f427 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:47:54 np0005629333 nova_compute[244014]: 2026-02-25 12:47:54.820 244018 DEBUG oslo_concurrency.lockutils [req-26959e06-026a-4cf3-8791-05e5263f7cc3 req-fddc6246-9d85-48ad-8a0b-5e31b644f427 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:47:54 np0005629333 nova_compute[244014]: 2026-02-25 12:47:54.851 244018 DEBUG nova.network.neutron [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Successfully updated port: 0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:47:54 np0005629333 nova_compute[244014]: 2026-02-25 12:47:54.867 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:47:54 np0005629333 nova_compute[244014]: 2026-02-25 12:47:54.867 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:47:54 np0005629333 nova_compute[244014]: 2026-02-25 12:47:54.867 244018 DEBUG nova.network.neutron [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:47:54 np0005629333 nova_compute[244014]: 2026-02-25 12:47:54.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:47:55 np0005629333 nova_compute[244014]: 2026-02-25 12:47:55.011 244018 DEBUG nova.network.neutron [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:47:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:55.025 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:47:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:55.026 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:47:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:55.027 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:47:55 np0005629333 nova_compute[244014]: 2026-02-25 12:47:55.268 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:56 np0005629333 nova_compute[244014]: 2026-02-25 12:47:56.354 244018 DEBUG nova.compute.manager [req-1d3565ae-1fd9-4d09-b365-edf8aeb1c0ad req-92e10afe-6b12-488c-b646-4babbda1edd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-changed-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:47:56 np0005629333 nova_compute[244014]: 2026-02-25 12:47:56.355 244018 DEBUG nova.compute.manager [req-1d3565ae-1fd9-4d09-b365-edf8aeb1c0ad req-92e10afe-6b12-488c-b646-4babbda1edd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Refreshing instance network info cache due to event network-changed-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:47:56 np0005629333 nova_compute[244014]: 2026-02-25 12:47:56.356 244018 DEBUG oslo_concurrency.lockutils [req-1d3565ae-1fd9-4d09-b365-edf8aeb1c0ad req-92e10afe-6b12-488c-b646-4babbda1edd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:47:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 07:47:56 np0005629333 nova_compute[244014]: 2026-02-25 12:47:56.893 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:47:56 np0005629333 nova_compute[244014]: 2026-02-25 12:47:56.894 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.056 244018 DEBUG nova.network.neutron [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Updating instance_info_cache with network_info: [{"id": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "address": "fa:16:3e:c8:6b:63", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de24fa3-5d", "ovs_interfaceid": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "address": "fa:16:3e:46:c2:9f", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dbf7aaf-2d", "ovs_interfaceid": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.075 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.075 244018 DEBUG nova.compute.manager [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Instance network_info: |[{"id": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "address": "fa:16:3e:c8:6b:63", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de24fa3-5d", "ovs_interfaceid": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "address": "fa:16:3e:46:c2:9f", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dbf7aaf-2d", "ovs_interfaceid": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
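[Editor's note] The network_info blob repeated in the two lines above is plain JSON and decodes to two OVN ports: an IPv4 port on tempest-network-smoke--23842255 and a dual-stack SLAAC port on tempest-network-smoke--1313035639, both on tunneled networks, which is why the MTU is 1442 (1500 minus Geneve encapsulation overhead). A quick decode, assuming the payload has been captured into network_info_json:

    # Summarizing the network_info payload logged above.
    import json

    vifs = json.loads(network_info_json)  # the [...] blob from the log
    for vif in vifs:
        ips = [ip['address']
               for subnet in vif['network']['subnets']
               for ip in subnet['ips']]
        print(vif['devname'], vif['address'],
              vif['network']['meta']['mtu'], ips)
    # tap3de24fa3-5d fa:16:3e:c8:6b:63 1442 ['10.100.0.8']
    # tap0dbf7aaf-2d fa:16:3e:46:c2:9f 1442 ['2001:db8::f816:3eff:fe46:c29f',
    #                                        '2001:db8:0:1:f816:3eff:fe46:c29f']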
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.076 244018 DEBUG oslo_concurrency.lockutils [req-1d3565ae-1fd9-4d09-b365-edf8aeb1c0ad req-92e10afe-6b12-488c-b646-4babbda1edd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.076 244018 DEBUG nova.network.neutron [req-1d3565ae-1fd9-4d09-b365-edf8aeb1c0ad req-92e10afe-6b12-488c-b646-4babbda1edd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Refreshing network info cache for port 0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.079 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Start _get_guest_xml network_info=[{"id": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "address": "fa:16:3e:c8:6b:63", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de24fa3-5d", "ovs_interfaceid": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "address": "fa:16:3e:46:c2:9f", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dbf7aaf-2d", "ovs_interfaceid": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.083 244018 WARNING nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.088 244018 DEBUG nova.virt.libvirt.host [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.088 244018 DEBUG nova.virt.libvirt.host [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.095 244018 DEBUG nova.virt.libvirt.host [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.095 244018 DEBUG nova.virt.libvirt.host [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
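The two probes above first look for a CPU controller on cgroups v1 (missing) and then find one on the v2 unified hierarchy. A rough equivalent of the v2 check, reading the kernel's standard cgroup.controllers file rather than nova's internal helper:

from pathlib import Path

def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
    """Return True if the unified (v2) hierarchy exposes the 'cpu' controller."""
    controllers = Path(root, "cgroup.controllers")
    if not controllers.exists():
        return False  # not a cgroups-v2 (unified) mount
    # The file is a single space-separated list, e.g. "cpuset cpu io memory ...".
    return "cpu" in controllers.read_text().split()

print(has_cgroupsv2_cpu_controller())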
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.096 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.096 244018 DEBUG nova.virt.hardware [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.096 244018 DEBUG nova.virt.hardware [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.097 244018 DEBUG nova.virt.hardware [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.097 244018 DEBUG nova.virt.hardware [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.097 244018 DEBUG nova.virt.hardware [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.097 244018 DEBUG nova.virt.hardware [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.097 244018 DEBUG nova.virt.hardware [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.098 244018 DEBUG nova.virt.hardware [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.098 244018 DEBUG nova.virt.hardware [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.098 244018 DEBUG nova.virt.hardware [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.098 244018 DEBUG nova.virt.hardware [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
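The nova.virt.hardware entries above walk the topology search: with no flavor or image constraints (limits and preferences all 0:0:0, caps of 65536 each), the only topology that fits 1 vCPU is 1 socket x 1 core x 1 thread. An illustrative re-creation of that enumeration, not nova's actual code:

def possible_cpu_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    """Yield (sockets, cores, threads) triples whose product equals vcpus."""
    for s in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % s:
            continue
        for c in range(1, min(vcpus // s, max_cores) + 1):
            if (vcpus // s) % c:
                continue
            t = vcpus // (s * c)
            if t <= max_threads:
                yield (s, c, t)

# For 1 vCPU this yields only (1, 1, 1), matching the "Possible topologies" line.
print(list(possible_cpu_topologies(1)))

nova additionally orders the candidates against the preferred topology before picking one; with everything at 0:0:0 there is nothing to prefer, so the single candidate wins.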
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.103 244018 DEBUG oslo_concurrency.processutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.215 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "9c11f17d-5dba-4ece-8340-1c4ff0939294" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.221 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
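The "Acquiring lock ... by ... inner" / "acquired ... waited 0.006s" pairs are emitted by oslo.concurrency's lockutils wrapper, which nova uses to serialize builds per instance UUID. A sketch of the same pattern with the documented lockutils.synchronized decorator; the function body here is a stand-in:

from oslo_concurrency import lockutils

@lockutils.synchronized("9c11f17d-5dba-4ece-8340-1c4ff0939294")
def _locked_do_build_and_run_instance():
    # While this runs, a second build request for the same instance UUID
    # blocks at the decorator; the wait time shows up as "waited N s".
    pass

_locked_do_build_and_run_instance()

Without external=True the lock is an in-process semaphore, which matches the log: both the acquiring and releasing frames come from the same nova_compute worker, PID 244018.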
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.239 244018 DEBUG nova.compute.manager [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.313 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.314 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.321 244018 DEBUG nova.virt.hardware [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.322 244018 INFO nova.compute.claims [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.366 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.482 244018 DEBUG oslo_concurrency.processutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:47:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:47:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2826206493' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.680 244018 DEBUG oslo_concurrency.processutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
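nova's RBD backend discovers the Ceph monitor addresses by shelling out to the ceph CLI, which is the 0.577 s command above (the matching ceph-mon audit entries show it arriving as client.openstack). A hedged sketch of running the same command and reading the mon map; the mons/name/addr keys follow ceph's JSON output, and the --id/--conf values are copied from the log:

import json
import subprocess

# Same command the log shows nova running through oslo.concurrency.
out = subprocess.check_output(
    ["ceph", "mon", "dump", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
)
mon_map = json.loads(out)
for mon in mon_map.get("mons", []):
    print(mon["name"], mon["addr"])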
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.714 244018 DEBUG nova.storage.rbd_utils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 271f6569-a8f6-43a3-ac98-511eff77c426_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:47:57 np0005629333 nova_compute[244014]: 2026-02-25 12:47:57.721 244018 DEBUG oslo_concurrency.processutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:47:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:47:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1701022679' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.035 244018 DEBUG oslo_concurrency.processutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.040 244018 DEBUG nova.compute.provider_tree [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.096 244018 DEBUG nova.scheduler.client.report [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
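The inventory record above is what the resource tracker reports to Placement. Schedulable capacity per resource class is conventionally derived as (total - reserved) * allocation_ratio, so this host offers the scheduler 32 VCPU, 7167 MB of RAM, and about 52 GB of disk. Worked out in a few lines with the values from the log:

inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: schedulable capacity = {capacity:g}")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2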
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.256 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.942s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.257 244018 DEBUG nova.compute.manager [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:47:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:47:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2295620193' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.285 244018 DEBUG oslo_concurrency.processutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.287 244018 DEBUG nova.virt.libvirt.vif [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:47:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1288096549',display_name='tempest-TestGettingAddress-server-1288096549',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1288096549',id=119,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB69Pr57xlpfPAIOfVFDA83wUDqkiKKz/q2j8EaAao/3/InA7y2axmKr5B1TGfyn/pk4XdqMYpcKTTAq9q/wSlLakw5QjJZuZ8Zodvba1czxOZQjigR2CJ2ZqyN+ZHKUOA==',key_name='tempest-TestGettingAddress-545180732',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-w8etrtwi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:47:50Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=271f6569-a8f6-43a3-ac98-511eff77c426,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "address": "fa:16:3e:c8:6b:63", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de24fa3-5d", "ovs_interfaceid": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.287 244018 DEBUG nova.network.os_vif_util [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "address": "fa:16:3e:c8:6b:63", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de24fa3-5d", "ovs_interfaceid": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.288 244018 DEBUG nova.network.os_vif_util [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:6b:63,bridge_name='br-int',has_traffic_filtering=True,id=3de24fa3-5df5-470c-98d2-1a60e0ebfc0c,network=Network(d0b6d114-fcb4-4a25-988c-1ee301ef0419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3de24fa3-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.289 244018 DEBUG nova.virt.libvirt.vif [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:47:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1288096549',display_name='tempest-TestGettingAddress-server-1288096549',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1288096549',id=119,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB69Pr57xlpfPAIOfVFDA83wUDqkiKKz/q2j8EaAao/3/InA7y2axmKr5B1TGfyn/pk4XdqMYpcKTTAq9q/wSlLakw5QjJZuZ8Zodvba1czxOZQjigR2CJ2ZqyN+ZHKUOA==',key_name='tempest-TestGettingAddress-545180732',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-w8etrtwi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:47:50Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=271f6569-a8f6-43a3-ac98-511eff77c426,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "address": "fa:16:3e:46:c2:9f", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dbf7aaf-2d", "ovs_interfaceid": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.289 244018 DEBUG nova.network.os_vif_util [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "address": "fa:16:3e:46:c2:9f", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dbf7aaf-2d", "ovs_interfaceid": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.290 244018 DEBUG nova.network.os_vif_util [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:c2:9f,bridge_name='br-int',has_traffic_filtering=True,id=0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934,network=Network(5c482202-8994-4033-a0a9-167d92a9e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0dbf7aaf-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.291 244018 DEBUG nova.objects.instance [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 271f6569-a8f6-43a3-ac98-511eff77c426 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
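"Lazy-loading 'pci_devices'" means the Instance object was fetched without that field and nova's obj_load_attr pulls it from the database on first access; the later vif log for the same instance shows pci_devices=PciDeviceList where earlier dumps showed <?>. A toy illustration of the pattern with a hypothetical class, not nova's versioned-object machinery:

class LazyInstance:
    """Toy sketch of load-on-first-access, as logged by obj_load_attr."""
    def __init__(self, uuid):
        self.uuid = uuid

    def __getattr__(self, name):
        # Only called when 'name' is not already set on the instance.
        print(f"Lazy-loading {name!r} on Instance {self.uuid}")
        value = self._fetch_from_db(name)
        setattr(self, name, value)  # cache it; no second fetch
        return value

    def _fetch_from_db(self, name):
        return []  # stand-in for a database round-trip

inst = LazyInstance("271f6569-a8f6-43a3-ac98-511eff77c426")
inst.pci_devices  # triggers the lazy load once
inst.pci_devices  # now a plain attribute; __getattr__ is not called again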
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.455 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:47:58 np0005629333 nova_compute[244014]:  <uuid>271f6569-a8f6-43a3-ac98-511eff77c426</uuid>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:  <name>instance-00000077</name>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestGettingAddress-server-1288096549</nova:name>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:47:57</nova:creationTime>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:47:58 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:        <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:        <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:        <nova:port uuid="3de24fa3-5df5-470c-98d2-1a60e0ebfc0c">
Feb 25 07:47:58 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:        <nova:port uuid="0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934">
Feb 25 07:47:58 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe46:c29f" ipVersion="6"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe46:c29f" ipVersion="6"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <entry name="serial">271f6569-a8f6-43a3-ac98-511eff77c426</entry>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <entry name="uuid">271f6569-a8f6-43a3-ac98-511eff77c426</entry>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/271f6569-a8f6-43a3-ac98-511eff77c426_disk">
Feb 25 07:47:58 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:47:58 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/271f6569-a8f6-43a3-ac98-511eff77c426_disk.config">
Feb 25 07:47:58 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:47:58 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:c8:6b:63"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <target dev="tap3de24fa3-5d"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:46:c2:9f"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <target dev="tap0dbf7aaf-2d"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/271f6569-a8f6-43a3-ac98-511eff77c426/console.log" append="off"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:47:58 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:47:58 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:47:58 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:47:58 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
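That closes the generated domain definition: a host-model CPU at 1:1:1 topology, two RBD-backed disks authenticated as client.openstack, two vhost tap interfaces at MTU 1442, and a q35 machine with a stack of pcie-root-port controllers for hotplug headroom. A short sketch for pulling the interesting devices back out of such a dump, assuming the XML was saved to a hypothetical domain.xml:

import xml.etree.ElementTree as ET

# Hypothetical file holding the <domain> document dumped above.
root = ET.parse("domain.xml").getroot()

for disk in root.findall("./devices/disk"):
    src, tgt = disk.find("source"), disk.find("target")
    print("disk:", disk.get("device"), src.get("protocol"),
          src.get("name"), "->", tgt.get("dev"))

for iface in root.findall("./devices/interface"):
    print("vif:", iface.find("mac").get("address"),
          "->", iface.find("target").get("dev"))

For the domain above this lists the vms/271f6569-..._disk image on vda, the _disk.config config-drive CD-ROM on sda, and the two tap devices keyed by MAC.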
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.457 244018 DEBUG nova.compute.manager [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Preparing to wait for external event network-vif-plugged-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:47:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.457 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.458 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.458 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.458 244018 DEBUG nova.compute.manager [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Preparing to wait for external event network-vif-plugged-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.459 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.459 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.459 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
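The two "Preparing to wait for external event network-vif-plugged-..." entries register waiters before the VIFs are actually plugged, so the vif-plugged notification Neutron sends back cannot race past the waiter. A toy sketch of that prepare-then-notify pattern, illustrative only and not nova's implementation:

import threading

class InstanceEvents:
    """Toy version of the prepare/wait pattern shown in the log."""
    def __init__(self):
        self._lock = threading.Lock()
        self._events = {}

    def prepare(self, name):
        # Register the event *before* the action that triggers it,
        # so a fast notification cannot be lost.
        with self._lock:
            return self._events.setdefault(name, threading.Event())

    def notify(self, name):
        with self._lock:
            ev = self._events.get(name)
        if ev:
            ev.set()

events = InstanceEvents()
waiter = events.prepare("network-vif-plugged-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c")
# ... plug the VIF; Neutron's callback would then fire:
events.notify("network-vif-plugged-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c")
print(waiter.wait(timeout=300))  # True: the event arrived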
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.460 244018 DEBUG nova.virt.libvirt.vif [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:47:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1288096549',display_name='tempest-TestGettingAddress-server-1288096549',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1288096549',id=119,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB69Pr57xlpfPAIOfVFDA83wUDqkiKKz/q2j8EaAao/3/InA7y2axmKr5B1TGfyn/pk4XdqMYpcKTTAq9q/wSlLakw5QjJZuZ8Zodvba1czxOZQjigR2CJ2ZqyN+ZHKUOA==',key_name='tempest-TestGettingAddress-545180732',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-w8etrtwi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:47:50Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=271f6569-a8f6-43a3-ac98-511eff77c426,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "address": "fa:16:3e:c8:6b:63", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de24fa3-5d", "ovs_interfaceid": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.460 244018 DEBUG nova.network.os_vif_util [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "address": "fa:16:3e:c8:6b:63", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de24fa3-5d", "ovs_interfaceid": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.461 244018 DEBUG nova.network.os_vif_util [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:6b:63,bridge_name='br-int',has_traffic_filtering=True,id=3de24fa3-5df5-470c-98d2-1a60e0ebfc0c,network=Network(d0b6d114-fcb4-4a25-988c-1ee301ef0419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3de24fa3-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.461 244018 DEBUG os_vif [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:6b:63,bridge_name='br-int',has_traffic_filtering=True,id=3de24fa3-5df5-470c-98d2-1a60e0ebfc0c,network=Network(d0b6d114-fcb4-4a25-988c-1ee301ef0419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3de24fa3-5d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
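The three entries above show the plug path: nova's own VIF dict is converted by nova.network.os_vif_util into an os-vif VIFOpenVSwitch object, which is then handed to os_vif.plug(). A minimal sketch of that call follows, using only field values visible in the log; the instance name instance-00000077 is inferred from the systemd machine name later in this log (instance id 119 == 0x77) and is an assumption.

    # Sketch of the os_vif.plug() call logged above (not nova's full code path).
    import os_vif
    from os_vif.objects.instance_info import InstanceInfo
    from os_vif.objects.network import Network
    from os_vif.objects.vif import VIFOpenVSwitch, VIFPortProfileOpenVSwitch

    os_vif.initialize()  # load the 'ovs' plugin entry point

    vif = VIFOpenVSwitch(
        id='3de24fa3-5df5-470c-98d2-1a60e0ebfc0c',
        address='fa:16:3e:c8:6b:63',
        bridge_name='br-int',
        vif_name='tap3de24fa3-5d',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False,
        port_profile=VIFPortProfileOpenVSwitch(
            interface_id='3de24fa3-5df5-470c-98d2-1a60e0ebfc0c'),
        network=Network(id='d0b6d114-fcb4-4a25-988c-1ee301ef0419'),
    )
    instance = InstanceInfo(uuid='271f6569-a8f6-43a3-ac98-511eff77c426',
                            name='instance-00000077')  # name is an assumption
    os_vif.plug(vif, instance)  # success yields "Successfully plugged vif ..."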
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.462 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.463 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.463 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.466 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.466 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3de24fa3-5d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.466 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3de24fa3-5d, col_values=(('external_ids', {'iface-id': '3de24fa3-5df5-470c-98d2-1a60e0ebfc0c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c8:6b:63', 'vm-uuid': '271f6569-a8f6-43a3-ac98-511eff77c426'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
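The AddBridgeCommand/AddPortCommand/DbSetCommand entries above are ovsdbapp transactions against the local Open_vSwitch database: ensure br-int exists (a no-op here), add the tap port, and set the external_ids that ovn-controller later matches when claiming the port. A combined sketch of the same commands; the socket path is the conventional default and an assumption here.

    # Sketch of the logged ovsdbapp commands (socket path assumed).
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap3de24fa3-5d', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap3de24fa3-5d',
            ('external_ids', {'iface-id': '3de24fa3-5df5-470c-98d2-1a60e0ebfc0c',
                              'iface-status': 'active',
                              'attached-mac': 'fa:16:3e:c8:6b:63',
                              'vm-uuid': '271f6569-a8f6-43a3-ac98-511eff77c426'})))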
Feb 25 07:47:58 np0005629333 NetworkManager[49836]: <info>  [1772023678.4687] manager: (tap3de24fa3-5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/512)
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.470 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.476 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.477 244018 DEBUG nova.compute.manager [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.478 244018 DEBUG nova.network.neutron [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.479 244018 INFO os_vif [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:6b:63,bridge_name='br-int',has_traffic_filtering=True,id=3de24fa3-5df5-470c-98d2-1a60e0ebfc0c,network=Network(d0b6d114-fcb4-4a25-988c-1ee301ef0419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3de24fa3-5d')#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.480 244018 DEBUG nova.virt.libvirt.vif [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:47:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1288096549',display_name='tempest-TestGettingAddress-server-1288096549',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1288096549',id=119,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB69Pr57xlpfPAIOfVFDA83wUDqkiKKz/q2j8EaAao/3/InA7y2axmKr5B1TGfyn/pk4XdqMYpcKTTAq9q/wSlLakw5QjJZuZ8Zodvba1czxOZQjigR2CJ2ZqyN+ZHKUOA==',key_name='tempest-TestGettingAddress-545180732',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-w8etrtwi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:47:50Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=271f6569-a8f6-43a3-ac98-511eff77c426,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "address": "fa:16:3e:46:c2:9f", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dbf7aaf-2d", "ovs_interfaceid": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.480 244018 DEBUG nova.network.os_vif_util [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "address": "fa:16:3e:46:c2:9f", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dbf7aaf-2d", "ovs_interfaceid": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.481 244018 DEBUG nova.network.os_vif_util [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:c2:9f,bridge_name='br-int',has_traffic_filtering=True,id=0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934,network=Network(5c482202-8994-4033-a0a9-167d92a9e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0dbf7aaf-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.481 244018 DEBUG os_vif [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:c2:9f,bridge_name='br-int',has_traffic_filtering=True,id=0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934,network=Network(5c482202-8994-4033-a0a9-167d92a9e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0dbf7aaf-2d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.482 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.482 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.482 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.484 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.484 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0dbf7aaf-2d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.485 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0dbf7aaf-2d, col_values=(('external_ids', {'iface-id': '0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:46:c2:9f', 'vm-uuid': '271f6569-a8f6-43a3-ac98-511eff77c426'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.486 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:58 np0005629333 NetworkManager[49836]: <info>  [1772023678.4868] manager: (tap0dbf7aaf-2d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/513)
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.488 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.490 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.491 244018 INFO os_vif [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:c2:9f,bridge_name='br-int',has_traffic_filtering=True,id=0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934,network=Network(5c482202-8994-4033-a0a9-167d92a9e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0dbf7aaf-2d')#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.521 244018 INFO nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:47:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.572 244018 DEBUG nova.compute.manager [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.594 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.594 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.594 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:c8:6b:63, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.595 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:46:c2:9f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.595 244018 INFO nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Using config drive#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.629 244018 DEBUG nova.storage.rbd_utils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 271f6569-a8f6-43a3-ac98-511eff77c426_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.717 244018 DEBUG nova.policy [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
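The policy failure above is expected rather than an error: nova's network:attach_external_network rule is admin-only by default, and these credentials carry only the reader and member roles, so the request is simply treated as non-admin during network allocation. A minimal oslo.policy reproduction of the check; the registered check strings mirror the usual admin-only default and are assumptions here.

    # Sketch of the logged policy check with oslo.policy (defaults assumed).
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault('context_is_admin', 'role:admin'))
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'rule:context_is_admin'))

    creds = {'roles': ['reader', 'member'],
             'project_id': 'e227b91c24404ab5aed600e2fe792d32',
             'user_id': '31d013eaf26a447394d93c83ab8def60'}
    print(enforcer.enforce('network:attach_external_network', {}, creds))  # False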
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.906 244018 DEBUG nova.compute.manager [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.907 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.907 244018 INFO nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Creating image(s)#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.929 244018 DEBUG nova.storage.rbd_utils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 9c11f17d-5dba-4ece-8340-1c4ff0939294_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.954 244018 DEBUG nova.storage.rbd_utils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 9c11f17d-5dba-4ece-8340-1c4ff0939294_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.982 244018 DEBUG nova.storage.rbd_utils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 9c11f17d-5dba-4ece-8340-1c4ff0939294_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:47:58 np0005629333 nova_compute[244014]: 2026-02-25 12:47:58.986 244018 DEBUG oslo_concurrency.processutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.044 244018 INFO nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Creating config drive at /var/lib/nova/instances/271f6569-a8f6-43a3-ac98-511eff77c426/disk.config#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.048 244018 DEBUG oslo_concurrency.processutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/271f6569-a8f6-43a3-ac98-511eff77c426/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpzb37u9ux execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.088 244018 DEBUG oslo_concurrency.processutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
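The qemu-img probe above runs under oslo.concurrency's prlimit wrapper, which re-executes the command with its address space capped at 1 GiB (--as=1073741824) and CPU time at 30 s, so a malformed image cannot wedge the compute host. The equivalent call, sketched:

    # Sketch of the logged qemu-img probe via oslo.concurrency.
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
    out, err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        env_variables={'LC_ALL': 'C', 'LANG': 'C'},
        prlimit=limits)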
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.089 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.090 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.090 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
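The acquire/release pair above serializes concurrent fetches of the same cached base image: the lock name is the image's hashed cache key, so two instances booting from the same image cannot download it twice. A minimal lockutils sketch; the lock prefix, lock path and external=True mirror typical nova usage and are assumptions.

    # Sketch of the image-cache lock logged above (prefix/path/external assumed).
    from oslo_concurrency import lockutils

    @lockutils.synchronized('a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
                            lock_file_prefix='nova-', external=True,
                            lock_path='/var/lib/nova/tmp')
    def fetch_func_sync():
        # download/verify the base image only if it is still missing
        pass

    fetch_func_sync()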
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.109 244018 DEBUG nova.storage.rbd_utils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 9c11f17d-5dba-4ece-8340-1c4ff0939294_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.112 244018 DEBUG oslo_concurrency.processutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 9c11f17d-5dba-4ece-8340-1c4ff0939294_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.190 244018 DEBUG oslo_concurrency.processutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/271f6569-a8f6-43a3-ac98-511eff77c426/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpzb37u9ux" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.231 244018 DEBUG nova.storage.rbd_utils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 271f6569-a8f6-43a3-ac98-511eff77c426_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.239 244018 DEBUG oslo_concurrency.processutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/271f6569-a8f6-43a3-ac98-511eff77c426/disk.config 271f6569-a8f6-43a3-ac98-511eff77c426_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.277 244018 DEBUG nova.network.neutron [req-1d3565ae-1fd9-4d09-b365-edf8aeb1c0ad req-92e10afe-6b12-488c-b646-4babbda1edd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Updated VIF entry in instance network info cache for port 0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.278 244018 DEBUG nova.network.neutron [req-1d3565ae-1fd9-4d09-b365-edf8aeb1c0ad req-92e10afe-6b12-488c-b646-4babbda1edd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Updating instance_info_cache with network_info: [{"id": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "address": "fa:16:3e:c8:6b:63", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de24fa3-5d", "ovs_interfaceid": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "address": "fa:16:3e:46:c2:9f", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dbf7aaf-2d", "ovs_interfaceid": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.337 244018 DEBUG oslo_concurrency.processutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 9c11f17d-5dba-4ece-8340-1c4ff0939294_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.379 244018 DEBUG oslo_concurrency.lockutils [req-1d3565ae-1fd9-4d09-b365-edf8aeb1c0ad req-92e10afe-6b12-488c-b646-4babbda1edd6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.407 244018 DEBUG oslo_concurrency.processutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/271f6569-a8f6-43a3-ac98-511eff77c426/disk.config 271f6569-a8f6-43a3-ac98-511eff77c426_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.408 244018 INFO nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Deleting local config drive /var/lib/nova/instances/271f6569-a8f6-43a3-ac98-511eff77c426/disk.config because it was imported into RBD.#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.411 244018 DEBUG nova.storage.rbd_utils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] resizing rbd image 9c11f17d-5dba-4ece-8340-1c4ff0939294_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
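The "does not exist" probes and the resize above come from nova.storage.rbd_utils, which drives Ceph through the librados/librbd Python bindings: opening a missing image raises rbd.ImageNotFound (logged as "rbd image ... does not exist"), and after the import the root disk is grown to the flavor's 1 GiB. A direct sketch of the same calls, with pool, client id and conf path copied from the log:

    # Sketch of the rbd_utils existence check + resize using librbd directly.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        image = rbd.Image(ioctx, '9c11f17d-5dba-4ece-8340-1c4ff0939294_disk')
        try:
            image.resize(1073741824)  # 1 GiB, matching the logged resize
        finally:
            image.close()
    except rbd.ImageNotFound:
        pass  # -> "rbd image ... does not exist"
    finally:
        ioctx.close()
        cluster.shutdown()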
Feb 25 07:47:59 np0005629333 kernel: tap3de24fa3-5d: entered promiscuous mode
Feb 25 07:47:59 np0005629333 NetworkManager[49836]: <info>  [1772023679.4471] manager: (tap3de24fa3-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/514)
Feb 25 07:47:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:59Z|01227|binding|INFO|Claiming lport 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c for this chassis.
Feb 25 07:47:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:59Z|01228|binding|INFO|3de24fa3-5df5-470c-98d2-1a60e0ebfc0c: Claiming fa:16:3e:c8:6b:63 10.100.0.8
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.450 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:59 np0005629333 NetworkManager[49836]: <info>  [1772023679.4590] manager: (tap0dbf7aaf-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/515)
Feb 25 07:47:59 np0005629333 kernel: tap0dbf7aaf-2d: entered promiscuous mode
Feb 25 07:47:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:59Z|01229|binding|INFO|Setting lport 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c ovn-installed in OVS
Feb 25 07:47:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:59Z|01230|if_status|INFO|Not updating pb chassis for 0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 now as sb is readonly
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.461 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.468 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.471 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:59Z|01231|binding|INFO|Claiming lport 0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 for this chassis.
Feb 25 07:47:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:59Z|01232|binding|INFO|0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934: Claiming fa:16:3e:46:c2:9f 2001:db8:0:1:f816:3eff:fe46:c29f 2001:db8::f816:3eff:fe46:c29f
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.473 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:59Z|01233|binding|INFO|Setting lport 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c up in Southbound
Feb 25 07:47:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:59Z|01234|binding|INFO|Setting lport 0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 ovn-installed in OVS
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.475 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.474 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:6b:63 10.100.0.8'], port_security=['fa:16:3e:c8:6b:63 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '271f6569-a8f6-43a3-ac98-511eff77c426', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e663e6c0-5fd2-4899-a0e9-500029428c03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=560bd45a-3180-4994-88ce-74a6fe57d885, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=3de24fa3-5df5-470c-98d2-1a60e0ebfc0c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:47:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.475 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c in datapath d0b6d114-fcb4-4a25-988c-1ee301ef0419 bound to our chassis#033[00m
Feb 25 07:47:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.477 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d0b6d114-fcb4-4a25-988c-1ee301ef0419#033[00m
Feb 25 07:47:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:47:59Z|01235|binding|INFO|Setting lport 0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 up in Southbound
Feb 25 07:47:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.489 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:c2:9f 2001:db8:0:1:f816:3eff:fe46:c29f 2001:db8::f816:3eff:fe46:c29f'], port_security=['fa:16:3e:46:c2:9f 2001:db8:0:1:f816:3eff:fe46:c29f 2001:db8::f816:3eff:fe46:c29f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe46:c29f/64 2001:db8::f816:3eff:fe46:c29f/64', 'neutron:device_id': '271f6569-a8f6-43a3-ac98-511eff77c426', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c482202-8994-4033-a0a9-167d92a9e301', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e663e6c0-5fd2-4899-a0e9-500029428c03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06544f4e-1579-4f9d-8faa-1248478e7fbc, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
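At this point ovn-controller has claimed both logical ports for this chassis and marked them up in the Southbound database, and the metadata agent is reacting to the resulting Port_Binding updates. The claim can be double-checked against the Southbound DB; a sketch using ovn-sbctl, run wherever the Southbound database is reachable (the exact invocation is an assumption):

    # Sketch: confirm the Port_Binding claim that ovn-controller just logged.
    import json
    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', '--format=json', '--columns=chassis,up', 'find',
         'Port_Binding', 'logical_port=0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934'],
        capture_output=True, text=True, check=True).stdout
    print(json.loads(out))  # chassis set and up once the claim is committed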
Feb 25 07:47:59 np0005629333 systemd-udevd[351217]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:47:59 np0005629333 systemd-udevd[351218]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:47:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.491 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c41c82fb-651c-4cd0-8d30-59519985d3c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:59 np0005629333 systemd-machined[210048]: New machine qemu-151-instance-00000077.
Feb 25 07:47:59 np0005629333 NetworkManager[49836]: <info>  [1772023679.5067] device (tap3de24fa3-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:47:59 np0005629333 NetworkManager[49836]: <info>  [1772023679.5075] device (tap3de24fa3-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:47:59 np0005629333 NetworkManager[49836]: <info>  [1772023679.5084] device (tap0dbf7aaf-2d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:47:59 np0005629333 systemd[1]: Started Virtual Machine qemu-151-instance-00000077.
Feb 25 07:47:59 np0005629333 NetworkManager[49836]: <info>  [1772023679.5109] device (tap0dbf7aaf-2d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:47:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.516 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f7d7253f-d6a2-4c33-9a89-098862217bbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.521 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[bbe0a436-56fb-4d3f-902b-5f651fa87dee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.543 244018 DEBUG nova.objects.instance [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'migration_context' on Instance uuid 9c11f17d-5dba-4ece-8340-1c4ff0939294 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:47:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.546 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0e28de5f-8f9a-46a6-9142-c13728dc82b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.558 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3a1a060d-3259-4848-86ab-a6de57452367]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0b6d114-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:c1:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 364], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561168, 'reachable_time': 40851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351269, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.566 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.566 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Ensure instance console log exists: /var/lib/nova/instances/9c11f17d-5dba-4ece-8340-1c4ff0939294/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.566 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.567 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.567 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:47:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.567 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2e85700d-1543-42d2-b020-1590d326fdd6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd0b6d114-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561174, 'tstamp': 561174}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351273, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd0b6d114-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561176, 'tstamp': 561176}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351273, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
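The privsep replies above are netlink dumps taken inside the ovnmeta- namespace: the agent confirms that the metadata interface tapd0b6d114-f1 is up and carries both its subnet address 10.100.0.2/28 and the well-known metadata address 169.254.169.254/32. The same check, sketched with pyroute2 (the library behind these replies):

    # Sketch: inspect the metadata namespace the agent just provisioned.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419') as ns:
        idx = ns.link_lookup(ifname='tapd0b6d114-f1')[0]
        for addr in ns.get_addr(index=idx):
            # expect 10.100.0.2 and 169.254.169.254
            print(dict(addr['attrs'])['IFA_ADDRESS'])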
Feb 25 07:47:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.568 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0b6d114-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:47:59 np0005629333 podman[351189]: 2026-02-25 12:47:59.568818953 +0000 UTC m=+0.103888174 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.569 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.570 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0b6d114-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:47:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.570 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:47:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.571 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd0b6d114-f0, col_values=(('external_ids', {'iface-id': '4e1af2d0-780b-4ffd-93a0-445de7a22322'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:47:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.571 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:47:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.572 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 in datapath 5c482202-8994-4033-a0a9-167d92a9e301 unbound from our chassis#033[00m
Feb 25 07:47:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.573 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5c482202-8994-4033-a0a9-167d92a9e301#033[00m
Feb 25 07:47:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.585 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[05fc4223-ea30-41d0-985d-def68c53d3aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:59 np0005629333 podman[351193]: 2026-02-25 12:47:59.589504857 +0000 UTC m=+0.122231142 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 07:47:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.608 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2d39ec1a-6ebc-4d92-9556-5c60e2fbdabd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.610 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c90119b4-4216-4c53-a84a-1480c09fdc9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.627 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[660595bd-a11f-4b7a-8508-17d49fdfd9c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.639 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c65e96f6-db29-4ef9-8bb2-6b6958801209]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c482202-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:af:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 4, 'rx_bytes': 1846, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 4, 'rx_bytes': 1846, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 365], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561244, 'reachable_time': 22558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351280, 'error': None, 'target': 'ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.653 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[918eae70-339d-4bc6-a900-1845833a8184]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5c482202-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561254, 'tstamp': 561254}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351281, 'error': None, 'target': 'ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:47:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.655 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c482202-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.656 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.657 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:47:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.658 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c482202-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:47:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.658 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:47:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.658 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5c482202-80, col_values=(('external_ids', {'iface-id': '8372e6c6-fbbc-48a7-be95-d95e1d2ad95a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:47:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:47:59.659 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.793 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023679.7924416, 271f6569-a8f6-43a3-ac98-511eff77c426 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.793 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] VM Started (Lifecycle Event)#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.817 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.822 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023679.7952223, 271f6569-a8f6-43a3-ac98-511eff77c426 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.822 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.850 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.855 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:47:59 np0005629333 nova_compute[244014]: 2026-02-25 12:47:59.886 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:48:00 np0005629333 nova_compute[244014]: 2026-02-25 12:48:00.135 244018 DEBUG nova.network.neutron [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Successfully created port: cbf23880-21db-4752-be67-0abdb0153c2b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:48:00 np0005629333 nova_compute[244014]: 2026-02-25 12:48:00.271 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:00 np0005629333 nova_compute[244014]: 2026-02-25 12:48:00.436 244018 DEBUG nova.compute.manager [req-9eaf9c38-a85c-4c70-9a38-95f8ee498a96 req-ade40265-07d1-4f09-8943-8659296418cc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-vif-plugged-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:48:00 np0005629333 nova_compute[244014]: 2026-02-25 12:48:00.437 244018 DEBUG oslo_concurrency.lockutils [req-9eaf9c38-a85c-4c70-9a38-95f8ee498a96 req-ade40265-07d1-4f09-8943-8659296418cc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:00 np0005629333 nova_compute[244014]: 2026-02-25 12:48:00.437 244018 DEBUG oslo_concurrency.lockutils [req-9eaf9c38-a85c-4c70-9a38-95f8ee498a96 req-ade40265-07d1-4f09-8943-8659296418cc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:00 np0005629333 nova_compute[244014]: 2026-02-25 12:48:00.437 244018 DEBUG oslo_concurrency.lockutils [req-9eaf9c38-a85c-4c70-9a38-95f8ee498a96 req-ade40265-07d1-4f09-8943-8659296418cc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:00 np0005629333 nova_compute[244014]: 2026-02-25 12:48:00.438 244018 DEBUG nova.compute.manager [req-9eaf9c38-a85c-4c70-9a38-95f8ee498a96 req-ade40265-07d1-4f09-8943-8659296418cc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Processing event network-vif-plugged-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:48:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2023: 305 pgs: 305 active+clean; 358 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:48:00 np0005629333 nova_compute[244014]: 2026-02-25 12:48:00.864 244018 DEBUG nova.network.neutron [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Successfully updated port: cbf23880-21db-4752-be67-0abdb0153c2b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:48:00 np0005629333 nova_compute[244014]: 2026-02-25 12:48:00.883 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-9c11f17d-5dba-4ece-8340-1c4ff0939294" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:48:00 np0005629333 nova_compute[244014]: 2026-02-25 12:48:00.884 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-9c11f17d-5dba-4ece-8340-1c4ff0939294" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:48:00 np0005629333 nova_compute[244014]: 2026-02-25 12:48:00.884 244018 DEBUG nova.network.neutron [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:48:00 np0005629333 nova_compute[244014]: 2026-02-25 12:48:00.958 244018 DEBUG nova.compute.manager [req-30fe56e4-213a-4ed4-80bc-03ca0dea2653 req-087dad7e-6565-49e8-bc7b-ad2dcfdcd0f7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Received event network-changed-cbf23880-21db-4752-be67-0abdb0153c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:48:00 np0005629333 nova_compute[244014]: 2026-02-25 12:48:00.958 244018 DEBUG nova.compute.manager [req-30fe56e4-213a-4ed4-80bc-03ca0dea2653 req-087dad7e-6565-49e8-bc7b-ad2dcfdcd0f7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Refreshing instance network info cache due to event network-changed-cbf23880-21db-4752-be67-0abdb0153c2b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:48:00 np0005629333 nova_compute[244014]: 2026-02-25 12:48:00.959 244018 DEBUG oslo_concurrency.lockutils [req-30fe56e4-213a-4ed4-80bc-03ca0dea2653 req-087dad7e-6565-49e8-bc7b-ad2dcfdcd0f7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-9c11f17d-5dba-4ece-8340-1c4ff0939294" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:48:01 np0005629333 nova_compute[244014]: 2026-02-25 12:48:01.081 244018 DEBUG nova.network.neutron [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:48:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:48:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:48:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:48:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:48:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:48:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.230 244018 DEBUG nova.network.neutron [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Updating instance_info_cache with network_info: [{"id": "cbf23880-21db-4752-be67-0abdb0153c2b", "address": "fa:16:3e:72:a7:94", "network": {"id": "f3635704-b98f-49df-8dc5-9052a34d8970", "bridge": "br-int", "label": "tempest-network-smoke--1605130746", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf23880-21", "ovs_interfaceid": "cbf23880-21db-4752-be67-0abdb0153c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.272 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-9c11f17d-5dba-4ece-8340-1c4ff0939294" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.272 244018 DEBUG nova.compute.manager [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Instance network_info: |[{"id": "cbf23880-21db-4752-be67-0abdb0153c2b", "address": "fa:16:3e:72:a7:94", "network": {"id": "f3635704-b98f-49df-8dc5-9052a34d8970", "bridge": "br-int", "label": "tempest-network-smoke--1605130746", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf23880-21", "ovs_interfaceid": "cbf23880-21db-4752-be67-0abdb0153c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.273 244018 DEBUG oslo_concurrency.lockutils [req-30fe56e4-213a-4ed4-80bc-03ca0dea2653 req-087dad7e-6565-49e8-bc7b-ad2dcfdcd0f7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-9c11f17d-5dba-4ece-8340-1c4ff0939294" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.273 244018 DEBUG nova.network.neutron [req-30fe56e4-213a-4ed4-80bc-03ca0dea2653 req-087dad7e-6565-49e8-bc7b-ad2dcfdcd0f7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Refreshing network info cache for port cbf23880-21db-4752-be67-0abdb0153c2b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.277 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Start _get_guest_xml network_info=[{"id": "cbf23880-21db-4752-be67-0abdb0153c2b", "address": "fa:16:3e:72:a7:94", "network": {"id": "f3635704-b98f-49df-8dc5-9052a34d8970", "bridge": "br-int", "label": "tempest-network-smoke--1605130746", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf23880-21", "ovs_interfaceid": "cbf23880-21db-4752-be67-0abdb0153c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.281 244018 WARNING nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.287 244018 DEBUG nova.virt.libvirt.host [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.287 244018 DEBUG nova.virt.libvirt.host [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.291 244018 DEBUG nova.virt.libvirt.host [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.291 244018 DEBUG nova.virt.libvirt.host [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.292 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.292 244018 DEBUG nova.virt.hardware [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.293 244018 DEBUG nova.virt.hardware [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.293 244018 DEBUG nova.virt.hardware [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.294 244018 DEBUG nova.virt.hardware [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.294 244018 DEBUG nova.virt.hardware [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.294 244018 DEBUG nova.virt.hardware [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.295 244018 DEBUG nova.virt.hardware [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.295 244018 DEBUG nova.virt.hardware [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.295 244018 DEBUG nova.virt.hardware [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.295 244018 DEBUG nova.virt.hardware [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.296 244018 DEBUG nova.virt.hardware [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.301 244018 DEBUG oslo_concurrency.processutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:48:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 384 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 2.7 MiB/s wr, 58 op/s
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.544 244018 DEBUG nova.compute.manager [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-vif-plugged-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.545 244018 DEBUG oslo_concurrency.lockutils [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.545 244018 DEBUG oslo_concurrency.lockutils [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.546 244018 DEBUG oslo_concurrency.lockutils [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.546 244018 DEBUG nova.compute.manager [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] No event matching network-vif-plugged-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c in dict_keys([('network-vif-plugged', '0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.547 244018 WARNING nova.compute.manager [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received unexpected event network-vif-plugged-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c for instance with vm_state building and task_state spawning.#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.547 244018 DEBUG nova.compute.manager [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-vif-plugged-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.547 244018 DEBUG oslo_concurrency.lockutils [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.548 244018 DEBUG oslo_concurrency.lockutils [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.548 244018 DEBUG oslo_concurrency.lockutils [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.549 244018 DEBUG nova.compute.manager [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Processing event network-vif-plugged-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.549 244018 DEBUG nova.compute.manager [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-vif-plugged-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.550 244018 DEBUG oslo_concurrency.lockutils [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.550 244018 DEBUG oslo_concurrency.lockutils [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.550 244018 DEBUG oslo_concurrency.lockutils [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.551 244018 DEBUG nova.compute.manager [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] No waiting events found dispatching network-vif-plugged-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.551 244018 WARNING nova.compute.manager [req-22591323-c5e6-44d1-b87e-2b98f3f44eb5 req-436c13be-5e50-4fd0-ae64-13ea02f8f6fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received unexpected event network-vif-plugged-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 for instance with vm_state building and task_state spawning.#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.552 244018 DEBUG nova.compute.manager [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.558 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.559 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023682.557369, 271f6569-a8f6-43a3-ac98-511eff77c426 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.559 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.567 244018 INFO nova.virt.libvirt.driver [-] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Instance spawned successfully.#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.567 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.576 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.581 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.594 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.595 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.596 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.597 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.597 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.598 244018 DEBUG nova.virt.libvirt.driver [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.608 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.660 244018 INFO nova.compute.manager [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Took 12.26 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.661 244018 DEBUG nova.compute.manager [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.766 244018 INFO nova.compute.manager [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Took 13.41 seconds to build instance.#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.790 244018 DEBUG oslo_concurrency.lockutils [None req-e1f2b172-6982-4765-ac02-8415386152b4 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:48:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2557137040' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.849 244018 DEBUG oslo_concurrency.processutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.870 244018 DEBUG nova.storage.rbd_utils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 9c11f17d-5dba-4ece-8340-1c4ff0939294_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:48:02 np0005629333 nova_compute[244014]: 2026-02-25 12:48:02.873 244018 DEBUG oslo_concurrency.processutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:48:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:48:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3889701299' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:48:03 np0005629333 nova_compute[244014]: 2026-02-25 12:48:03.445 244018 DEBUG oslo_concurrency.processutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:48:03 np0005629333 nova_compute[244014]: 2026-02-25 12:48:03.447 244018 DEBUG nova.virt.libvirt.vif [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:47:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-83970638',display_name='tempest-TestNetworkBasicOps-server-83970638',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-83970638',id=120,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMxYaDCaF+dhprfU7p2GFAe30UCzLc9MF80n83kHh1iIsnjcJHnQ6Q/VzjgOn2GIc4A+5MBe7VFnhNTJz7AlxywFHXtPZOo+vT/4VV+kNxRXdCo8I4zsXHElpgbrS0cbTQ==',key_name='tempest-TestNetworkBasicOps-1724785857',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-0o0rgl3q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:47:58Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=9c11f17d-5dba-4ece-8340-1c4ff0939294,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cbf23880-21db-4752-be67-0abdb0153c2b", "address": "fa:16:3e:72:a7:94", "network": {"id": "f3635704-b98f-49df-8dc5-9052a34d8970", "bridge": "br-int", "label": "tempest-network-smoke--1605130746", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf23880-21", "ovs_interfaceid": "cbf23880-21db-4752-be67-0abdb0153c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:48:03 np0005629333 nova_compute[244014]: 2026-02-25 12:48:03.447 244018 DEBUG nova.network.os_vif_util [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "cbf23880-21db-4752-be67-0abdb0153c2b", "address": "fa:16:3e:72:a7:94", "network": {"id": "f3635704-b98f-49df-8dc5-9052a34d8970", "bridge": "br-int", "label": "tempest-network-smoke--1605130746", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf23880-21", "ovs_interfaceid": "cbf23880-21db-4752-be67-0abdb0153c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:48:03 np0005629333 nova_compute[244014]: 2026-02-25 12:48:03.448 244018 DEBUG nova.network.os_vif_util [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:a7:94,bridge_name='br-int',has_traffic_filtering=True,id=cbf23880-21db-4752-be67-0abdb0153c2b,network=Network(f3635704-b98f-49df-8dc5-9052a34d8970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbf23880-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
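
The two records above show nova_to_osvif_vif flattening the legacy VIF dict into a typed VIFOpenVSwitch object. A minimal stand-alone sketch of that mapping, using a plain dataclass rather than the real os_vif versioned objects (field names follow the converted repr logged above; the helper name is hypothetical):

    import json
    from dataclasses import dataclass

    @dataclass
    class VIFOpenVSwitch:
        # Mirrors the fields visible in the "Converted object" record above.
        id: str
        address: str
        bridge_name: str
        vif_name: str
        has_traffic_filtering: bool
        active: bool
        preserve_on_delete: bool
        plugin: str = 'ovs'

    def nova_vif_to_osvif(vif: dict) -> VIFOpenVSwitch:
        details = vif.get('details', {})
        return VIFOpenVSwitch(
            id=vif['id'],
            address=vif['address'],
            # details.bridge_name wins; the network-level bridge is the fallback.
            bridge_name=details.get('bridge_name', vif['network']['bridge']),
            vif_name=vif['devname'],
            has_traffic_filtering=details.get('port_filter', False),
            active=vif['active'],
            preserve_on_delete=vif['preserve_on_delete'],
        )

    vif = json.loads("""{"id": "cbf23880-21db-4752-be67-0abdb0153c2b",
      "address": "fa:16:3e:72:a7:94",
      "network": {"bridge": "br-int"},
      "details": {"port_filter": true, "bridge_name": "br-int"},
      "devname": "tapcbf23880-21", "active": false,
      "preserve_on_delete": false}""")
    print(nova_vif_to_osvif(vif))
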
Feb 25 07:48:03 np0005629333 nova_compute[244014]: 2026-02-25 12:48:03.450 244018 DEBUG nova.objects.instance [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9c11f17d-5dba-4ece-8340-1c4ff0939294 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:48:03 np0005629333 nova_compute[244014]: 2026-02-25 12:48:03.487 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:48:03 np0005629333 nova_compute[244014]: 2026-02-25 12:48:03.564 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:48:03 np0005629333 nova_compute[244014]:  <uuid>9c11f17d-5dba-4ece-8340-1c4ff0939294</uuid>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:  <name>instance-00000078</name>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestNetworkBasicOps-server-83970638</nova:name>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:48:02</nova:creationTime>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:48:03 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:        <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:        <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:        <nova:port uuid="cbf23880-21db-4752-be67-0abdb0153c2b">
Feb 25 07:48:03 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.30" ipVersion="4"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <entry name="serial">9c11f17d-5dba-4ece-8340-1c4ff0939294</entry>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <entry name="uuid">9c11f17d-5dba-4ece-8340-1c4ff0939294</entry>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/9c11f17d-5dba-4ece-8340-1c4ff0939294_disk">
Feb 25 07:48:03 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:48:03 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/9c11f17d-5dba-4ece-8340-1c4ff0939294_disk.config">
Feb 25 07:48:03 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:48:03 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:72:a7:94"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <target dev="tapcbf23880-21"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/9c11f17d-5dba-4ece-8340-1c4ff0939294/console.log" append="off"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:48:03 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:48:03 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:48:03 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:48:03 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
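
The XML dump above is what Nova hands to libvirt when defining the guest. A rough sketch of emitting a fragment of such a domain document with only the standard library follows; this is not Nova's actual builder (which lives in nova.virt.libvirt.config). Note that libvirt's <memory> element is in KiB, so the flavor's memory_mb=128 becomes the 131072 seen above:

    import xml.etree.ElementTree as ET

    domain = ET.Element('domain', type='kvm')
    ET.SubElement(domain, 'uuid').text = '9c11f17d-5dba-4ece-8340-1c4ff0939294'
    ET.SubElement(domain, 'name').text = 'instance-00000078'
    # <memory> is in KiB: 128 MiB * 1024 = 131072, matching the dump above.
    ET.SubElement(domain, 'memory').text = str(128 * 1024)
    ET.SubElement(domain, 'vcpu').text = '1'
    os_elem = ET.SubElement(domain, 'os')
    ET.SubElement(os_elem, 'type', arch='x86_64', machine='q35').text = 'hvm'
    ET.SubElement(os_elem, 'boot', dev='hd')
    ET.SubElement(os_elem, 'smbios', mode='sysinfo')
    print(ET.tostring(domain, encoding='unicode'))
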
Feb 25 07:48:03 np0005629333 nova_compute[244014]: 2026-02-25 12:48:03.565 244018 DEBUG nova.compute.manager [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Preparing to wait for external event network-vif-plugged-cbf23880-21db-4752-be67-0abdb0153c2b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:48:03 np0005629333 nova_compute[244014]: 2026-02-25 12:48:03.565 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:03 np0005629333 nova_compute[244014]: 2026-02-25 12:48:03.565 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:03 np0005629333 nova_compute[244014]: 2026-02-25 12:48:03.566 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
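
The three lockutils records above are oslo.concurrency's standard pattern: log the acquire attempt, log how long the caller waited once the lock is held, and log the hold time on release. A stdlib-only sketch of the same instrumentation (the decorator name is ours, not oslo's):

    import functools
    import threading
    import time

    _locks: dict = {}
    _registry_guard = threading.Lock()

    def synchronized(name: str):
        """Serialize callers on a named lock and report wait/hold times."""
        def wrap(fn):
            @functools.wraps(fn)
            def inner(*args, **kwargs):
                with _registry_guard:
                    lock = _locks.setdefault(name, threading.Lock())
                t0 = time.monotonic()
                with lock:
                    waited = time.monotonic() - t0
                    print(f'Lock "{name}" acquired by "{fn.__name__}" :: waited {waited:.3f}s')
                    t1 = time.monotonic()
                    try:
                        return fn(*args, **kwargs)
                    finally:
                        held = time.monotonic() - t1
                        print(f'Lock "{name}" "released" by "{fn.__name__}" :: held {held:.3f}s')
            return inner
        return wrap

    @synchronized('9c11f17d-5dba-4ece-8340-1c4ff0939294-events')
    def _create_or_get_event():
        pass

    _create_or_get_event()
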
Feb 25 07:48:03 np0005629333 nova_compute[244014]: 2026-02-25 12:48:03.566 244018 DEBUG nova.virt.libvirt.vif [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:47:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-83970638',display_name='tempest-TestNetworkBasicOps-server-83970638',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-83970638',id=120,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMxYaDCaF+dhprfU7p2GFAe30UCzLc9MF80n83kHh1iIsnjcJHnQ6Q/VzjgOn2GIc4A+5MBe7VFnhNTJz7AlxywFHXtPZOo+vT/4VV+kNxRXdCo8I4zsXHElpgbrS0cbTQ==',key_name='tempest-TestNetworkBasicOps-1724785857',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-0o0rgl3q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:47:58Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=9c11f17d-5dba-4ece-8340-1c4ff0939294,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cbf23880-21db-4752-be67-0abdb0153c2b", "address": "fa:16:3e:72:a7:94", "network": {"id": "f3635704-b98f-49df-8dc5-9052a34d8970", "bridge": "br-int", "label": "tempest-network-smoke--1605130746", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf23880-21", "ovs_interfaceid": "cbf23880-21db-4752-be67-0abdb0153c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:48:03 np0005629333 nova_compute[244014]: 2026-02-25 12:48:03.567 244018 DEBUG nova.network.os_vif_util [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "cbf23880-21db-4752-be67-0abdb0153c2b", "address": "fa:16:3e:72:a7:94", "network": {"id": "f3635704-b98f-49df-8dc5-9052a34d8970", "bridge": "br-int", "label": "tempest-network-smoke--1605130746", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf23880-21", "ovs_interfaceid": "cbf23880-21db-4752-be67-0abdb0153c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:48:03 np0005629333 nova_compute[244014]: 2026-02-25 12:48:03.567 244018 DEBUG nova.network.os_vif_util [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:a7:94,bridge_name='br-int',has_traffic_filtering=True,id=cbf23880-21db-4752-be67-0abdb0153c2b,network=Network(f3635704-b98f-49df-8dc5-9052a34d8970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbf23880-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:48:03 np0005629333 nova_compute[244014]: 2026-02-25 12:48:03.567 244018 DEBUG os_vif [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:a7:94,bridge_name='br-int',has_traffic_filtering=True,id=cbf23880-21db-4752-be67-0abdb0153c2b,network=Network(f3635704-b98f-49df-8dc5-9052a34d8970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbf23880-21') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:48:03 np0005629333 nova_compute[244014]: 2026-02-25 12:48:03.568 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:03 np0005629333 nova_compute[244014]: 2026-02-25 12:48:03.568 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:48:03 np0005629333 nova_compute[244014]: 2026-02-25 12:48:03.568 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:48:03 np0005629333 nova_compute[244014]: 2026-02-25 12:48:03.570 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:03 np0005629333 nova_compute[244014]: 2026-02-25 12:48:03.571 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcbf23880-21, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:48:03 np0005629333 nova_compute[244014]: 2026-02-25 12:48:03.571 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcbf23880-21, col_values=(('external_ids', {'iface-id': 'cbf23880-21db-4752-be67-0abdb0153c2b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:72:a7:94', 'vm-uuid': '9c11f17d-5dba-4ece-8340-1c4ff0939294'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
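
The AddPortCommand/DbSetCommand transaction above is equivalent to a single ovs-vsctl invocation. A sketch issuing that CLI form instead of going through the OVSDB IDL, with the values copied from the log (assumes ovs-vsctl is installed and the caller may talk to ovsdb-server):

    import subprocess

    port = 'tapcbf23880-21'
    external_ids = {
        'iface-id': 'cbf23880-21db-4752-be67-0abdb0153c2b',
        'iface-status': 'active',
        'attached-mac': 'fa:16:3e:72:a7:94',
        'vm-uuid': '9c11f17d-5dba-4ece-8340-1c4ff0939294',
    }
    cmd = ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port,
           '--', 'set', 'Interface', port]
    cmd += [f'external_ids:{k}={v}' for k, v in external_ids.items()]
    subprocess.run(cmd, check=True)

The --may-exist flag is what makes the call idempotent, matching the "Transaction caused no change" record logged for the AddBridgeCommand a few lines earlier.
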
Feb 25 07:48:03 np0005629333 nova_compute[244014]: 2026-02-25 12:48:03.572 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:03 np0005629333 NetworkManager[49836]: <info>  [1772023683.5734] manager: (tapcbf23880-21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/516)
Feb 25 07:48:03 np0005629333 nova_compute[244014]: 2026-02-25 12:48:03.574 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:48:03 np0005629333 nova_compute[244014]: 2026-02-25 12:48:03.579 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:03 np0005629333 nova_compute[244014]: 2026-02-25 12:48:03.580 244018 INFO os_vif [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:a7:94,bridge_name='br-int',has_traffic_filtering=True,id=cbf23880-21db-4752-be67-0abdb0153c2b,network=Network(f3635704-b98f-49df-8dc5-9052a34d8970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbf23880-21')#033[00m
Feb 25 07:48:04 np0005629333 nova_compute[244014]: 2026-02-25 12:48:04.406 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:48:04 np0005629333 nova_compute[244014]: 2026-02-25 12:48:04.407 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:48:04 np0005629333 nova_compute[244014]: 2026-02-25 12:48:04.407 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:72:a7:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:48:04 np0005629333 nova_compute[244014]: 2026-02-25 12:48:04.407 244018 INFO nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Using config drive#033[00m
Feb 25 07:48:04 np0005629333 nova_compute[244014]: 2026-02-25 12:48:04.422 244018 DEBUG nova.storage.rbd_utils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 9c11f17d-5dba-4ece-8340-1c4ff0939294_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:48:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 2.2 MiB/s wr, 49 op/s
Feb 25 07:48:04 np0005629333 nova_compute[244014]: 2026-02-25 12:48:04.872 244018 DEBUG nova.network.neutron [req-30fe56e4-213a-4ed4-80bc-03ca0dea2653 req-087dad7e-6565-49e8-bc7b-ad2dcfdcd0f7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Updated VIF entry in instance network info cache for port cbf23880-21db-4752-be67-0abdb0153c2b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:48:04 np0005629333 nova_compute[244014]: 2026-02-25 12:48:04.873 244018 DEBUG nova.network.neutron [req-30fe56e4-213a-4ed4-80bc-03ca0dea2653 req-087dad7e-6565-49e8-bc7b-ad2dcfdcd0f7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Updating instance_info_cache with network_info: [{"id": "cbf23880-21db-4752-be67-0abdb0153c2b", "address": "fa:16:3e:72:a7:94", "network": {"id": "f3635704-b98f-49df-8dc5-9052a34d8970", "bridge": "br-int", "label": "tempest-network-smoke--1605130746", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf23880-21", "ovs_interfaceid": "cbf23880-21db-4752-be67-0abdb0153c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:48:04 np0005629333 nova_compute[244014]: 2026-02-25 12:48:04.887 244018 DEBUG oslo_concurrency.lockutils [req-30fe56e4-213a-4ed4-80bc-03ca0dea2653 req-087dad7e-6565-49e8-bc7b-ad2dcfdcd0f7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-9c11f17d-5dba-4ece-8340-1c4ff0939294" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
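
The network_info blob cached above is plain JSON, so pulling details back out of it is ordinary dict walking. A sketch that extracts the fixed IPs from a structure shaped like the one in this log (trimmed to the relevant keys):

    import json

    network_info = json.loads("""[{"id": "cbf23880-21db-4752-be67-0abdb0153c2b",
      "network": {"subnets": [{"cidr": "10.100.0.16/28",
        "ips": [{"address": "10.100.0.30", "type": "fixed", "floating_ips": []}]}]}}]""")

    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                if ip['type'] == 'fixed':
                    print(vif['id'], ip['address'])   # cbf23880-... 10.100.0.30
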
Feb 25 07:48:05 np0005629333 nova_compute[244014]: 2026-02-25 12:48:05.098 244018 INFO nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Creating config drive at /var/lib/nova/instances/9c11f17d-5dba-4ece-8340-1c4ff0939294/disk.config#033[00m
Feb 25 07:48:05 np0005629333 nova_compute[244014]: 2026-02-25 12:48:05.105 244018 DEBUG oslo_concurrency.processutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9c11f17d-5dba-4ece-8340-1c4ff0939294/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpeod7h4ha execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:48:05 np0005629333 nova_compute[244014]: 2026-02-25 12:48:05.243 244018 DEBUG oslo_concurrency.processutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9c11f17d-5dba-4ece-8340-1c4ff0939294/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpeod7h4ha" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
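
The config drive is produced by shelling out to mkisofs with exactly the flags logged above. A subprocess reproduction of that command, timed the way oslo.concurrency's processutils reports it (paths copied from the log; /tmp/tmpeod7h4ha was the transient metadata tree):

    import subprocess
    import time

    out = '/var/lib/nova/instances/9c11f17d-5dba-4ece-8340-1c4ff0939294/disk.config'
    staging = '/tmp/tmpeod7h4ha'   # temporary openstack/ and ec2/ metadata tree
    cmd = ['/usr/bin/mkisofs', '-o', out,
           '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
           '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
           '-quiet', '-J', '-r', '-V', 'config-2', staging]
    t0 = time.monotonic()
    proc = subprocess.run(cmd)
    print(f'returned: {proc.returncode} in {time.monotonic() - t0:.3f}s')

The -V config-2 volume label is the part that matters downstream: it is the filesystem label the guest's cloud-init probes for when looking for a config drive.
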
Feb 25 07:48:05 np0005629333 nova_compute[244014]: 2026-02-25 12:48:05.278 244018 DEBUG nova.storage.rbd_utils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 9c11f17d-5dba-4ece-8340-1c4ff0939294_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:48:05 np0005629333 nova_compute[244014]: 2026-02-25 12:48:05.283 244018 DEBUG oslo_concurrency.processutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9c11f17d-5dba-4ece-8340-1c4ff0939294/disk.config 9c11f17d-5dba-4ece-8340-1c4ff0939294_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:48:05 np0005629333 nova_compute[244014]: 2026-02-25 12:48:05.312 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:05 np0005629333 nova_compute[244014]: 2026-02-25 12:48:05.412 244018 DEBUG oslo_concurrency.processutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9c11f17d-5dba-4ece-8340-1c4ff0939294/disk.config 9c11f17d-5dba-4ece-8340-1c4ff0939294_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:48:05 np0005629333 nova_compute[244014]: 2026-02-25 12:48:05.413 244018 INFO nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Deleting local config drive /var/lib/nova/instances/9c11f17d-5dba-4ece-8340-1c4ff0939294/disk.config because it was imported into RBD.#033[00m
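
The follow-up step mirrors the logged rbd CLI call and then removes the local ISO, matching the "Deleting local config drive ... because it was imported into RBD" record above. A sketch under the same assumptions (rbd CLI installed, cephx id openstack, /etc/ceph/ceph.conf present):

    import os
    import subprocess

    local = '/var/lib/nova/instances/9c11f17d-5dba-4ece-8340-1c4ff0939294/disk.config'
    subprocess.run(['rbd', 'import', '--pool', 'vms', local,
                    '9c11f17d-5dba-4ece-8340-1c4ff0939294_disk.config',
                    '--image-format=2', '--id', 'openstack',
                    '--conf', '/etc/ceph/ceph.conf'], check=True)
    # The local copy is redundant once the RBD image exists.
    os.unlink(local)
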
Feb 25 07:48:05 np0005629333 kernel: tapcbf23880-21: entered promiscuous mode
Feb 25 07:48:05 np0005629333 NetworkManager[49836]: <info>  [1772023685.4835] manager: (tapcbf23880-21): new Tun device (/org/freedesktop/NetworkManager/Devices/517)
Feb 25 07:48:05 np0005629333 nova_compute[244014]: 2026-02-25 12:48:05.485 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:05 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:05Z|01236|binding|INFO|Claiming lport cbf23880-21db-4752-be67-0abdb0153c2b for this chassis.
Feb 25 07:48:05 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:05Z|01237|binding|INFO|cbf23880-21db-4752-be67-0abdb0153c2b: Claiming fa:16:3e:72:a7:94 10.100.0.30
Feb 25 07:48:05 np0005629333 nova_compute[244014]: 2026-02-25 12:48:05.488 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.500 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:a7:94 10.100.0.30'], port_security=['fa:16:3e:72:a7:94 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': '9c11f17d-5dba-4ece-8340-1c4ff0939294', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3635704-b98f-49df-8dc5-9052a34d8970', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '2', 'neutron:security_group_ids': '98aaa3fd-2372-44ea-9de4-f4c0cf776cf4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08599aa3-75c1-4d0f-bdce-36e8497cd876, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=cbf23880-21db-4752-be67-0abdb0153c2b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.501 157129 INFO neutron.agent.ovn.metadata.agent [-] Port cbf23880-21db-4752-be67-0abdb0153c2b in datapath f3635704-b98f-49df-8dc5-9052a34d8970 bound to our chassis#033[00m
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.503 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f3635704-b98f-49df-8dc5-9052a34d8970#033[00m
Feb 25 07:48:05 np0005629333 nova_compute[244014]: 2026-02-25 12:48:05.515 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.516 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5e729dd3-a5af-48f5-99f7-534f2ccc319e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:05 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:05Z|01238|binding|INFO|Setting lport cbf23880-21db-4752-be67-0abdb0153c2b ovn-installed in OVS
Feb 25 07:48:05 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:05Z|01239|binding|INFO|Setting lport cbf23880-21db-4752-be67-0abdb0153c2b up in Southbound
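
ovn-controller claims the logical port because the Port_Binding's requested-chassis option (visible in the metadata agent's event dump above) names this chassis. A toy reduction of that decision to a string check; the real logic in ovn-controller additionally requires the matching iface-id to be plugged into the local integration bridge:

    def should_claim(port_binding: dict, local_chassis: str) -> bool:
        """Claim a logical port when it is requested for this chassis."""
        requested = port_binding.get('options', {}).get('requested-chassis')
        # With no requested-chassis restriction, any chassis that sees the
        # iface-id on its local br-int may bind the port.
        return requested in (None, '', local_chassis)

    pb = {'logical_port': 'cbf23880-21db-4752-be67-0abdb0153c2b',
          'options': {'requested-chassis': 'compute-0.ctlplane.example.com'}}
    print(should_claim(pb, 'compute-0.ctlplane.example.com'))   # True
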
Feb 25 07:48:05 np0005629333 nova_compute[244014]: 2026-02-25 12:48:05.518 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.518 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf3635704-b1 in ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.520 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf3635704-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.521 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[38b6d0f7-3dc8-4452-890c-d0a677afe925]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.523 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6d475cef-b256-4b5e-acdc-4de2376e5460]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:05 np0005629333 systemd-udevd[351462]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:48:05 np0005629333 systemd-machined[210048]: New machine qemu-152-instance-00000078.
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.537 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[206cfc65-86a5-4fbb-9b6c-6642adf2183f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:05 np0005629333 NetworkManager[49836]: <info>  [1772023685.5457] device (tapcbf23880-21): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:48:05 np0005629333 NetworkManager[49836]: <info>  [1772023685.5468] device (tapcbf23880-21): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:48:05 np0005629333 systemd[1]: Started Virtual Machine qemu-152-instance-00000078.
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.568 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[94021274-fa3f-493f-b01b-d91b97b52206]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.597 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7b5cbd44-a2d7-4b55-8786-1329348ada3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:05 np0005629333 NetworkManager[49836]: <info>  [1772023685.6044] manager: (tapf3635704-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/518)
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.603 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4036167c-51c6-4d1e-9a41-dbf94e7afe27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.636 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d8ef6056-f264-4854-bfc7-6d66b0457884]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.641 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[67fdf63b-2c0b-4423-a65f-1007c80c9983]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:05 np0005629333 NetworkManager[49836]: <info>  [1772023685.6636] device (tapf3635704-b0): carrier: link connected
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.669 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4024c6ef-d8ea-4aaa-9068-789b9d196ccf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.690 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[64e88a62-3f5a-46f5-9171-48561a054ecc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3635704-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:cb:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 371], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565507, 'reachable_time': 18528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351494, 'error': None, 'target': 'ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.708 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[49c34c94-be51-42ba-be54-f80cd3237bf8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe35:cb09'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565507, 'tstamp': 565507}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351495, 'error': None, 'target': 'ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.727 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b50f16fa-14ba-43b8-9dc7-b4d6da55e265]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf3635704-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:cb:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 371], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565507, 'reachable_time': 18528, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351496, 'error': None, 'target': 'ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
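
The two large privsep replies above are pyroute2 netlink dumps: RTM_NEWLINK messages for tapf3635704-b1 read inside the ovnmeta- namespace. Reading the same attributes directly with pyroute2 might look like the following (assumes pyroute2 is installed, the namespace exists, and the caller is root):

    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970')
    try:
        idx = ns.link_lookup(ifname='tapf3635704-b1')[0]
        link = ns.get_links(idx)[0]
        print(link.get_attr('IFLA_ADDRESS'),    # fa:16:3e:35:cb:09 in this log
              link.get_attr('IFLA_OPERSTATE'),  # UP
              link.get_attr('IFLA_MTU'))        # 1500
    finally:
        ns.close()
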
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.756 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[19827ac5-947e-46dd-9bec-d846bbd23be0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.829 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a282e615-a29d-42d4-ac32-4630f9d03895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.830 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3635704-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.831 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.831 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3635704-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:48:05 np0005629333 NetworkManager[49836]: <info>  [1772023685.8349] manager: (tapf3635704-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/519)
Feb 25 07:48:05 np0005629333 kernel: tapf3635704-b0: entered promiscuous mode
Feb 25 07:48:05 np0005629333 nova_compute[244014]: 2026-02-25 12:48:05.834 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:05 np0005629333 nova_compute[244014]: 2026-02-25 12:48:05.837 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.841 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf3635704-b0, col_values=(('external_ids', {'iface-id': 'ed1dee53-c556-4395-a0e4-d8aae80a256a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:48:05 np0005629333 nova_compute[244014]: 2026-02-25 12:48:05.843 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:05 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:05Z|01240|binding|INFO|Releasing lport ed1dee53-c556-4395-a0e4-d8aae80a256a from this chassis (sb_readonly=0)
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.854 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f3635704-b98f-49df-8dc5-9052a34d8970.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f3635704-b98f-49df-8dc5-9052a34d8970.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
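
get_value_from_file treats the missing pidfile as "no proxy running for this network yet" rather than as an error, which is why the ENOENT above is logged at DEBUG only. A minimal version of such a helper:

    def get_value_from_file(path: str, converter=None):
        """Return the (optionally converted) file contents, or None if unreadable."""
        try:
            with open(path) as f:
                value = f.read().strip()
        except OSError as e:
            print(f'Unable to access {path}; Error: {e}')
            return None
        return converter(value) if converter else value

    pid = get_value_from_file(
        '/var/lib/neutron/external/pids/'
        'f3635704-b98f-49df-8dc5-9052a34d8970.pid.haproxy', int)
    print(pid)   # None on the first provisioning pass, as in this log
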
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.856 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e3c1cd6f-209a-4521-8047-c1f0ae52fd84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.857 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-f3635704-b98f-49df-8dc5-9052a34d8970
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/f3635704-b98f-49df-8dc5-9052a34d8970.pid.haproxy
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID f3635704-b98f-49df-8dc5-9052a34d8970
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 07:48:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:05.857 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970', 'env', 'PROCESS_TAG=haproxy-f3635704-b98f-49df-8dc5-9052a34d8970', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f3635704-b98f-49df-8dc5-9052a34d8970.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
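[editor's note] The config dump above is written to /var/lib/neutron/ovn-metadata-proxy/f3635704-b98f-49df-8dc5-9052a34d8970.conf and haproxy is launched inside the ovnmeta- namespace, listening on 169.254.169.254:80 and relaying to the agent's unix socket at /var/lib/neutron/metadata_proxy. A quick way to exercise the proxy from the hypervisor; a sketch assuming root access and the standard OpenStack metadata path:

    import subprocess

    NS = 'ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970'

    # Issue the same request a guest would make; haproxy itself adds the
    # X-OVN-Network-ID header before relaying to the agent socket.
    result = subprocess.run(
        ['ip', 'netns', 'exec', NS, 'curl', '-s',
         'http://169.254.169.254/openstack/latest/meta_data.json'],
        capture_output=True, text=True, check=True)
    print(result.stdout)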
Feb 25 07:48:05 np0005629333 nova_compute[244014]: 2026-02-25 12:48:05.860 244018 DEBUG nova.compute.manager [req-1a904070-4eac-4c96-8807-4e81aa86fd35 req-7eedc969-b456-4762-a0bf-1a0abb701039 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Received event network-vif-plugged-cbf23880-21db-4752-be67-0abdb0153c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:48:05 np0005629333 nova_compute[244014]: 2026-02-25 12:48:05.861 244018 DEBUG oslo_concurrency.lockutils [req-1a904070-4eac-4c96-8807-4e81aa86fd35 req-7eedc969-b456-4762-a0bf-1a0abb701039 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:48:05 np0005629333 nova_compute[244014]: 2026-02-25 12:48:05.861 244018 DEBUG oslo_concurrency.lockutils [req-1a904070-4eac-4c96-8807-4e81aa86fd35 req-7eedc969-b456-4762-a0bf-1a0abb701039 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:48:05 np0005629333 nova_compute[244014]: 2026-02-25 12:48:05.862 244018 DEBUG oslo_concurrency.lockutils [req-1a904070-4eac-4c96-8807-4e81aa86fd35 req-7eedc969-b456-4762-a0bf-1a0abb701039 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:48:05 np0005629333 nova_compute[244014]: 2026-02-25 12:48:05.862 244018 DEBUG nova.compute.manager [req-1a904070-4eac-4c96-8807-4e81aa86fd35 req-7eedc969-b456-4762-a0bf-1a0abb701039 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Processing event network-vif-plugged-cbf23880-21db-4752-be67-0abdb0153c2b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 07:48:05 np0005629333 nova_compute[244014]: 2026-02-25 12:48:05.863 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:48:06 np0005629333 nova_compute[244014]: 2026-02-25 12:48:06.073 244018 DEBUG nova.compute.manager [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
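[editor's note] The network-vif-plugged event that completes this wait is pushed by Neutron through Nova's os-server-external-events API (the paired req- IDs above are Neutron's originating request plus the forwarded one). A hedged sketch of that call, with endpoint and token as placeholders:

    import requests

    COMPUTE = 'http://nova-api.example:8774/v2.1'  # placeholder endpoint
    TOKEN = '...'                                  # placeholder Keystone token

    payload = {'events': [{
        'name': 'network-vif-plugged',
        'server_uuid': '9c11f17d-5dba-4ece-8340-1c4ff0939294',
        'tag': 'cbf23880-21db-4752-be67-0abdb0153c2b',  # the Neutron port ID
        'status': 'completed',
    }]}
    resp = requests.post(COMPUTE + '/os-server-external-events',
                         json=payload, headers={'X-Auth-Token': TOKEN})
    resp.raise_for_status()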
Feb 25 07:48:06 np0005629333 nova_compute[244014]: 2026-02-25 12:48:06.075 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023686.0726354, 9c11f17d-5dba-4ece-8340-1c4ff0939294 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:48:06 np0005629333 nova_compute[244014]: 2026-02-25 12:48:06.075 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] VM Started (Lifecycle Event)
Feb 25 07:48:06 np0005629333 nova_compute[244014]: 2026-02-25 12:48:06.082 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 07:48:06 np0005629333 nova_compute[244014]: 2026-02-25 12:48:06.088 244018 INFO nova.virt.libvirt.driver [-] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Instance spawned successfully.
Feb 25 07:48:06 np0005629333 nova_compute[244014]: 2026-02-25 12:48:06.089 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 07:48:06 np0005629333 nova_compute[244014]: 2026-02-25 12:48:06.117 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:48:06 np0005629333 nova_compute[244014]: 2026-02-25 12:48:06.122 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:48:06 np0005629333 nova_compute[244014]: 2026-02-25 12:48:06.128 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:48:06 np0005629333 nova_compute[244014]: 2026-02-25 12:48:06.128 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:48:06 np0005629333 nova_compute[244014]: 2026-02-25 12:48:06.129 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:48:06 np0005629333 nova_compute[244014]: 2026-02-25 12:48:06.130 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:48:06 np0005629333 nova_compute[244014]: 2026-02-25 12:48:06.130 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:48:06 np0005629333 nova_compute[244014]: 2026-02-25 12:48:06.131 244018 DEBUG nova.virt.libvirt.driver [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:48:06 np0005629333 nova_compute[244014]: 2026-02-25 12:48:06.162 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:48:06 np0005629333 nova_compute[244014]: 2026-02-25 12:48:06.162 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023686.0878763, 9c11f17d-5dba-4ece-8340-1c4ff0939294 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:48:06 np0005629333 nova_compute[244014]: 2026-02-25 12:48:06.163 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] VM Paused (Lifecycle Event)
Feb 25 07:48:06 np0005629333 nova_compute[244014]: 2026-02-25 12:48:06.192 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:48:06 np0005629333 nova_compute[244014]: 2026-02-25 12:48:06.196 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023686.0889897, 9c11f17d-5dba-4ece-8340-1c4ff0939294 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:48:06 np0005629333 nova_compute[244014]: 2026-02-25 12:48:06.196 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] VM Resumed (Lifecycle Event)
Feb 25 07:48:06 np0005629333 nova_compute[244014]: 2026-02-25 12:48:06.207 244018 INFO nova.compute.manager [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Took 7.30 seconds to spawn the instance on the hypervisor.
Feb 25 07:48:06 np0005629333 nova_compute[244014]: 2026-02-25 12:48:06.208 244018 DEBUG nova.compute.manager [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:48:06 np0005629333 nova_compute[244014]: 2026-02-25 12:48:06.220 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:48:06 np0005629333 nova_compute[244014]: 2026-02-25 12:48:06.223 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:48:06 np0005629333 nova_compute[244014]: 2026-02-25 12:48:06.249 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] During sync_power_state the instance has a pending task (spawning). Skip.
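[editor's note] The Started/Paused/Resumed burst is normal during spawn: libvirt briefly pauses the guest while setup completes, and each lifecycle event triggers a power-state sync that is skipped while task_state is still spawning. The numeric states in the sync messages correspond to nova.compute.power_state:

    # Mirrors the nova.compute.power_state constants used in the sync lines.
    POWER_STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                   4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    # "DB power_state: 0, VM power_state: 1" therefore just means the
    # database (NOSTATE) has not yet caught up with libvirt (RUNNING).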
Feb 25 07:48:06 np0005629333 podman[351567]: 2026-02-25 12:48:06.29970965 +0000 UTC m=+0.069092212 container create 7cde34016f96915e524d46b661b532662a38979cd901aa6b48cc21a32f0d613f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 25 07:48:06 np0005629333 nova_compute[244014]: 2026-02-25 12:48:06.307 244018 INFO nova.compute.manager [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Took 9.02 seconds to build instance.
Feb 25 07:48:06 np0005629333 nova_compute[244014]: 2026-02-25 12:48:06.334 244018 DEBUG oslo_concurrency.lockutils [None req-e6609ff7-33df-4fbd-85f7-35e0b63a0dfb 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:48:06 np0005629333 systemd[1]: Started libpod-conmon-7cde34016f96915e524d46b661b532662a38979cd901aa6b48cc21a32f0d613f.scope.
Feb 25 07:48:06 np0005629333 podman[351567]: 2026-02-25 12:48:06.2671242 +0000 UTC m=+0.036506852 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:48:06 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:48:06 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97b72957ff8d950551abb65b325b6b882f88ff85666c7edcb84f856ea8d68c66/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:48:06 np0005629333 podman[351567]: 2026-02-25 12:48:06.382392695 +0000 UTC m=+0.151775267 container init 7cde34016f96915e524d46b661b532662a38979cd901aa6b48cc21a32f0d613f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 07:48:06 np0005629333 podman[351567]: 2026-02-25 12:48:06.38788585 +0000 UTC m=+0.157268422 container start 7cde34016f96915e524d46b661b532662a38979cd901aa6b48cc21a32f0d613f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 25 07:48:06 np0005629333 neutron-haproxy-ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970[351582]: [NOTICE]   (351586) : New worker (351588) forked
Feb 25 07:48:06 np0005629333 neutron-haproxy-ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970[351582]: [NOTICE]   (351586) : Loading success.
Feb 25 07:48:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 25 07:48:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:07.351 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:05:ae 2001:db8:0:1:f816:3eff:fe99:5ae'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe99:5ae/64', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=718d24fb-493f-42ce-a4ab-2f64b57cb3ac, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=13855d81-6f9e-45ad-9b2f-8fbca36efa4a) old=Port_Binding(mac=['fa:16:3e:99:05:ae 2001:db8::f816:3eff:fe99:5ae'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe99:5ae/64', 'neutron:device_id': 'ovnmeta-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1002c177-97f1-4286-9217-c8db43f28825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5baa466cf1094324a8e911d4abaf07b5', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:48:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:07.353 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 13855d81-6f9e-45ad-9b2f-8fbca36efa4a in datapath 1002c177-97f1-4286-9217-c8db43f28825 updated
Feb 25 07:48:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:07.355 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1002c177-97f1-4286-9217-c8db43f28825, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 07:48:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:07.356 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e871e335-d627-4ee7-8a3e-b8526220bf19]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:48:08 np0005629333 nova_compute[244014]: 2026-02-25 12:48:08.129 244018 DEBUG nova.compute.manager [req-3eebb263-d958-4861-8921-b467e8b2cec9 req-ebdeeb9f-01af-45c0-94db-d8223abdec25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Received event network-vif-plugged-cbf23880-21db-4752-be67-0abdb0153c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:48:08 np0005629333 nova_compute[244014]: 2026-02-25 12:48:08.130 244018 DEBUG oslo_concurrency.lockutils [req-3eebb263-d958-4861-8921-b467e8b2cec9 req-ebdeeb9f-01af-45c0-94db-d8223abdec25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:48:08 np0005629333 nova_compute[244014]: 2026-02-25 12:48:08.130 244018 DEBUG oslo_concurrency.lockutils [req-3eebb263-d958-4861-8921-b467e8b2cec9 req-ebdeeb9f-01af-45c0-94db-d8223abdec25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:48:08 np0005629333 nova_compute[244014]: 2026-02-25 12:48:08.130 244018 DEBUG oslo_concurrency.lockutils [req-3eebb263-d958-4861-8921-b467e8b2cec9 req-ebdeeb9f-01af-45c0-94db-d8223abdec25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:48:08 np0005629333 nova_compute[244014]: 2026-02-25 12:48:08.130 244018 DEBUG nova.compute.manager [req-3eebb263-d958-4861-8921-b467e8b2cec9 req-ebdeeb9f-01af-45c0-94db-d8223abdec25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] No waiting events found dispatching network-vif-plugged-cbf23880-21db-4752-be67-0abdb0153c2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:48:08 np0005629333 nova_compute[244014]: 2026-02-25 12:48:08.130 244018 WARNING nova.compute.manager [req-3eebb263-d958-4861-8921-b467e8b2cec9 req-ebdeeb9f-01af-45c0-94db-d8223abdec25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Received unexpected event network-vif-plugged-cbf23880-21db-4752-be67-0abdb0153c2b for instance with vm_state active and task_state None.
Feb 25 07:48:08 np0005629333 nova_compute[244014]: 2026-02-25 12:48:08.131 244018 DEBUG nova.compute.manager [req-3eebb263-d958-4861-8921-b467e8b2cec9 req-ebdeeb9f-01af-45c0-94db-d8223abdec25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-changed-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:48:08 np0005629333 nova_compute[244014]: 2026-02-25 12:48:08.131 244018 DEBUG nova.compute.manager [req-3eebb263-d958-4861-8921-b467e8b2cec9 req-ebdeeb9f-01af-45c0-94db-d8223abdec25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Refreshing instance network info cache due to event network-changed-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:48:08 np0005629333 nova_compute[244014]: 2026-02-25 12:48:08.131 244018 DEBUG oslo_concurrency.lockutils [req-3eebb263-d958-4861-8921-b467e8b2cec9 req-ebdeeb9f-01af-45c0-94db-d8223abdec25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:48:08 np0005629333 nova_compute[244014]: 2026-02-25 12:48:08.131 244018 DEBUG oslo_concurrency.lockutils [req-3eebb263-d958-4861-8921-b467e8b2cec9 req-ebdeeb9f-01af-45c0-94db-d8223abdec25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:48:08 np0005629333 nova_compute[244014]: 2026-02-25 12:48:08.131 244018 DEBUG nova.network.neutron [req-3eebb263-d958-4861-8921-b467e8b2cec9 req-ebdeeb9f-01af-45c0-94db-d8223abdec25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Refreshing network info cache for port 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:48:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2027: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 164 op/s
Feb 25 07:48:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:48:08 np0005629333 nova_compute[244014]: 2026-02-25 12:48:08.572 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:48:10 np0005629333 nova_compute[244014]: 2026-02-25 12:48:10.275 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:48:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2028: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 163 op/s
Feb 25 07:48:12 np0005629333 nova_compute[244014]: 2026-02-25 12:48:12.084 244018 DEBUG nova.network.neutron [req-3eebb263-d958-4861-8921-b467e8b2cec9 req-ebdeeb9f-01af-45c0-94db-d8223abdec25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Updated VIF entry in instance network info cache for port 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 07:48:12 np0005629333 nova_compute[244014]: 2026-02-25 12:48:12.085 244018 DEBUG nova.network.neutron [req-3eebb263-d958-4861-8921-b467e8b2cec9 req-ebdeeb9f-01af-45c0-94db-d8223abdec25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Updating instance_info_cache with network_info: [{"id": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "address": "fa:16:3e:c8:6b:63", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de24fa3-5d", "ovs_interfaceid": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "address": "fa:16:3e:46:c2:9f", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dbf7aaf-2d", "ovs_interfaceid": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:48:12 np0005629333 nova_compute[244014]: 2026-02-25 12:48:12.110 244018 DEBUG oslo_concurrency.lockutils [req-3eebb263-d958-4861-8921-b467e8b2cec9 req-ebdeeb9f-01af-45c0-94db-d8223abdec25 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:48:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2029: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 175 op/s
Feb 25 07:48:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:48:13 np0005629333 nova_compute[244014]: 2026-02-25 12:48:13.576 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:48:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2030: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 911 KiB/s wr, 145 op/s
Feb 25 07:48:14 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:14Z|00143|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c8:6b:63 10.100.0.8
Feb 25 07:48:14 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:14Z|00144|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c8:6b:63 10.100.0.8
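[editor's note] The DHCPOFFER/DHCPACK pair comes from ovn-controller's pinctrl thread: OVN answers DHCP natively from DHCP_Options rows in its northbound database instead of running a dnsmasq per network. A sketch of how such options are created through ovsdbapp's OVN NB API, assuming a typical NB connection string; the addresses are this subnet's values from the log and the server MAC is hypothetical:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_northbound import impl_idl

    # Assumed NB endpoint; deployments often use a unix socket instead.
    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6641',
                                          'OVN_Northbound')
    nb = impl_idl.OvnNbApiIdlImpl(connection.Connection(idl, timeout=10))

    # One DHCP_Options row per subnet; ovn-controller synthesizes the
    # DHCPOFFER/DHCPACK replies from it, entirely in the pinctrl thread.
    opts = nb.dhcp_options_add('10.100.0.0/28').execute(check_error=True)
    nb.dhcp_options_set_options(
        opts.uuid,
        server_id='10.100.0.1',
        server_mac='fa:16:3e:00:00:01',  # hypothetical server MAC
        router='10.100.0.1',
        lease_time='43200',
    ).execute(check_error=True)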
Feb 25 07:48:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:48:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:48:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:48:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:48:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:48:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:48:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:48:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:48:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:48:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:48:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:48:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:48:14 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:48:14 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:48:14 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:48:15 np0005629333 podman[351740]: 2026-02-25 12:48:15.110508768 +0000 UTC m=+0.057730262 container create b47863c2fed86d3e4a231f80890d594e0f236b76b7a5d2f23722d8a5aa44c7fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_easley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 07:48:15 np0005629333 podman[351740]: 2026-02-25 12:48:15.080541851 +0000 UTC m=+0.027763425 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:48:15 np0005629333 systemd[1]: Started libpod-conmon-b47863c2fed86d3e4a231f80890d594e0f236b76b7a5d2f23722d8a5aa44c7fc.scope.
Feb 25 07:48:15 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:48:15 np0005629333 podman[351740]: 2026-02-25 12:48:15.235465176 +0000 UTC m=+0.182686680 container init b47863c2fed86d3e4a231f80890d594e0f236b76b7a5d2f23722d8a5aa44c7fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:48:15 np0005629333 podman[351740]: 2026-02-25 12:48:15.242206136 +0000 UTC m=+0.189427620 container start b47863c2fed86d3e4a231f80890d594e0f236b76b7a5d2f23722d8a5aa44c7fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_easley, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 07:48:15 np0005629333 podman[351740]: 2026-02-25 12:48:15.245362785 +0000 UTC m=+0.192584279 container attach b47863c2fed86d3e4a231f80890d594e0f236b76b7a5d2f23722d8a5aa44c7fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_easley, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:48:15 np0005629333 cool_easley[351756]: 167 167
Feb 25 07:48:15 np0005629333 podman[351740]: 2026-02-25 12:48:15.249168313 +0000 UTC m=+0.196389797 container died b47863c2fed86d3e4a231f80890d594e0f236b76b7a5d2f23722d8a5aa44c7fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_easley, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:48:15 np0005629333 systemd[1]: libpod-b47863c2fed86d3e4a231f80890d594e0f236b76b7a5d2f23722d8a5aa44c7fc.scope: Deactivated successfully.
Feb 25 07:48:15 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7742335af27db4c06b3a3866e7935cc7e624cd68a84da4f9f6bad4939c606bdc-merged.mount: Deactivated successfully.
Feb 25 07:48:15 np0005629333 nova_compute[244014]: 2026-02-25 12:48:15.278 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:48:15 np0005629333 podman[351740]: 2026-02-25 12:48:15.297265821 +0000 UTC m=+0.244487305 container remove b47863c2fed86d3e4a231f80890d594e0f236b76b7a5d2f23722d8a5aa44c7fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_easley, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:48:15 np0005629333 systemd[1]: libpod-conmon-b47863c2fed86d3e4a231f80890d594e0f236b76b7a5d2f23722d8a5aa44c7fc.scope: Deactivated successfully.
Feb 25 07:48:15 np0005629333 podman[351780]: 2026-02-25 12:48:15.465751558 +0000 UTC m=+0.040643298 container create ba8652c9fdc0eed3516f12175353d2df15418a248de431f4607bdd13b1435f87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_yalow, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 07:48:15 np0005629333 systemd[1]: Started libpod-conmon-ba8652c9fdc0eed3516f12175353d2df15418a248de431f4607bdd13b1435f87.scope.
Feb 25 07:48:15 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:48:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec3944f3e91f3b586411119359c2535f1a084d3f232af7e3e5cfc641fb3ac2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:48:15 np0005629333 podman[351780]: 2026-02-25 12:48:15.44561533 +0000 UTC m=+0.020507100 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:48:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec3944f3e91f3b586411119359c2535f1a084d3f232af7e3e5cfc641fb3ac2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:48:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec3944f3e91f3b586411119359c2535f1a084d3f232af7e3e5cfc641fb3ac2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:48:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec3944f3e91f3b586411119359c2535f1a084d3f232af7e3e5cfc641fb3ac2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:48:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec3944f3e91f3b586411119359c2535f1a084d3f232af7e3e5cfc641fb3ac2f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:48:15 np0005629333 podman[351780]: 2026-02-25 12:48:15.554268777 +0000 UTC m=+0.129160597 container init ba8652c9fdc0eed3516f12175353d2df15418a248de431f4607bdd13b1435f87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 07:48:15 np0005629333 podman[351780]: 2026-02-25 12:48:15.563361064 +0000 UTC m=+0.138252814 container start ba8652c9fdc0eed3516f12175353d2df15418a248de431f4607bdd13b1435f87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_yalow, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:48:15 np0005629333 podman[351780]: 2026-02-25 12:48:15.566500102 +0000 UTC m=+0.141391922 container attach ba8652c9fdc0eed3516f12175353d2df15418a248de431f4607bdd13b1435f87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:48:16 np0005629333 epic_yalow[351796]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:48:16 np0005629333 epic_yalow[351796]: --> All data devices are unavailable
Feb 25 07:48:16 np0005629333 systemd[1]: libpod-ba8652c9fdc0eed3516f12175353d2df15418a248de431f4607bdd13b1435f87.scope: Deactivated successfully.
Feb 25 07:48:16 np0005629333 conmon[351796]: conmon ba8652c9fdc0eed3516f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ba8652c9fdc0eed3516f12175353d2df15418a248de431f4607bdd13b1435f87.scope/container/memory.events
Feb 25 07:48:16 np0005629333 podman[351780]: 2026-02-25 12:48:16.104843733 +0000 UTC m=+0.679735473 container died ba8652c9fdc0eed3516f12175353d2df15418a248de431f4607bdd13b1435f87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_yalow, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:48:16 np0005629333 systemd[1]: var-lib-containers-storage-overlay-eec3944f3e91f3b586411119359c2535f1a084d3f232af7e3e5cfc641fb3ac2f-merged.mount: Deactivated successfully.
Feb 25 07:48:16 np0005629333 podman[351780]: 2026-02-25 12:48:16.152929741 +0000 UTC m=+0.727821481 container remove ba8652c9fdc0eed3516f12175353d2df15418a248de431f4607bdd13b1435f87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:48:16 np0005629333 systemd[1]: libpod-conmon-ba8652c9fdc0eed3516f12175353d2df15418a248de431f4607bdd13b1435f87.scope: Deactivated successfully.
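[editor's note] These short-lived, randomly named containers (cool_easley, epic_yalow, angry_hugle) are cephadm probes run from the ceph image: the bare `167 167` output looks like the uid/gid check for the ceph user inside the image, and epic_yalow is a ceph-volume device scan concluding that all three LVM-backed data devices are already consumed, so no new OSDs get created. Roughly the same scan can be run by hand; a sketch, noting that cephadm's exact invocation may differ:

    import json
    import subprocess

    # cephadm wraps ceph-volume from the ceph container like this.
    out = subprocess.run(
        ['cephadm', 'ceph-volume', '--', 'inventory', '--format', 'json'],
        capture_output=True, text=True, check=True)
    for dev in json.loads(out.stdout):
        state = 'available' if dev.get('available') else 'unavailable'
        print(dev.get('path'), state)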
Feb 25 07:48:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 17 KiB/s wr, 138 op/s
Feb 25 07:48:16 np0005629333 podman[351893]: 2026-02-25 12:48:16.647317201 +0000 UTC m=+0.035151033 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:48:16 np0005629333 podman[351893]: 2026-02-25 12:48:16.77475961 +0000 UTC m=+0.162593432 container create 8448081713d03a7d7159c7730d207bee2da6aaddf5dee447c678290c45e6f81b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:48:16 np0005629333 systemd[1]: Started libpod-conmon-8448081713d03a7d7159c7730d207bee2da6aaddf5dee447c678290c45e6f81b.scope.
Feb 25 07:48:16 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:48:16 np0005629333 podman[351893]: 2026-02-25 12:48:16.962017727 +0000 UTC m=+0.349851609 container init 8448081713d03a7d7159c7730d207bee2da6aaddf5dee447c678290c45e6f81b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hugle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:48:16 np0005629333 podman[351893]: 2026-02-25 12:48:16.968359466 +0000 UTC m=+0.356193298 container start 8448081713d03a7d7159c7730d207bee2da6aaddf5dee447c678290c45e6f81b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hugle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 07:48:16 np0005629333 angry_hugle[351911]: 167 167
Feb 25 07:48:16 np0005629333 systemd[1]: libpod-8448081713d03a7d7159c7730d207bee2da6aaddf5dee447c678290c45e6f81b.scope: Deactivated successfully.
Feb 25 07:48:16 np0005629333 podman[351893]: 2026-02-25 12:48:16.993003082 +0000 UTC m=+0.380836884 container attach 8448081713d03a7d7159c7730d207bee2da6aaddf5dee447c678290c45e6f81b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hugle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:48:16 np0005629333 podman[351893]: 2026-02-25 12:48:16.994107693 +0000 UTC m=+0.381941505 container died 8448081713d03a7d7159c7730d207bee2da6aaddf5dee447c678290c45e6f81b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hugle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:48:17 np0005629333 systemd[1]: var-lib-containers-storage-overlay-641b2d2e6066891ee27df8a6842fb96822d8082fc044cf68036ca46d2c71d628-merged.mount: Deactivated successfully.
Feb 25 07:48:17 np0005629333 podman[351893]: 2026-02-25 12:48:17.260278259 +0000 UTC m=+0.648112071 container remove 8448081713d03a7d7159c7730d207bee2da6aaddf5dee447c678290c45e6f81b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hugle, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:48:17 np0005629333 systemd[1]: libpod-conmon-8448081713d03a7d7159c7730d207bee2da6aaddf5dee447c678290c45e6f81b.scope: Deactivated successfully.
Feb 25 07:48:17 np0005629333 podman[351937]: 2026-02-25 12:48:17.474822847 +0000 UTC m=+0.058671047 container create 2c4afd431e71d6ff6fd9332dccbe3093e450c557dd9d29de3ae8667e39504e9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_allen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:48:17 np0005629333 systemd[1]: Started libpod-conmon-2c4afd431e71d6ff6fd9332dccbe3093e450c557dd9d29de3ae8667e39504e9e.scope.
Feb 25 07:48:17 np0005629333 podman[351937]: 2026-02-25 12:48:17.446776765 +0000 UTC m=+0.030625005 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:48:17 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:48:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d38a9f12bdad6efa115b3145c468428871041b1e5eb8380502c7ff9497380721/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:48:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d38a9f12bdad6efa115b3145c468428871041b1e5eb8380502c7ff9497380721/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:48:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d38a9f12bdad6efa115b3145c468428871041b1e5eb8380502c7ff9497380721/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:48:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d38a9f12bdad6efa115b3145c468428871041b1e5eb8380502c7ff9497380721/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:48:17 np0005629333 podman[351937]: 2026-02-25 12:48:17.576805767 +0000 UTC m=+0.160653987 container init 2c4afd431e71d6ff6fd9332dccbe3093e450c557dd9d29de3ae8667e39504e9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:48:17 np0005629333 podman[351937]: 2026-02-25 12:48:17.582205719 +0000 UTC m=+0.166053919 container start 2c4afd431e71d6ff6fd9332dccbe3093e450c557dd9d29de3ae8667e39504e9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_allen, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 07:48:17 np0005629333 podman[351937]: 2026-02-25 12:48:17.587829648 +0000 UTC m=+0.171677888 container attach 2c4afd431e71d6ff6fd9332dccbe3093e450c557dd9d29de3ae8667e39504e9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_allen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 07:48:17 np0005629333 nova_compute[244014]: 2026-02-25 12:48:17.668 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:48:17 np0005629333 nova_compute[244014]: 2026-02-25 12:48:17.693 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Triggering sync for uuid 848fd033-0ebb-460a-a8a0-56583fa5f481 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Feb 25 07:48:17 np0005629333 nova_compute[244014]: 2026-02-25 12:48:17.694 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Triggering sync for uuid 95730650-36ac-4eee-8b22-9ea3f01d82d1 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Feb 25 07:48:17 np0005629333 nova_compute[244014]: 2026-02-25 12:48:17.695 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Triggering sync for uuid 271f6569-a8f6-43a3-ac98-511eff77c426 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Feb 25 07:48:17 np0005629333 nova_compute[244014]: 2026-02-25 12:48:17.696 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Triggering sync for uuid 9c11f17d-5dba-4ece-8340-1c4ff0939294 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Feb 25 07:48:17 np0005629333 nova_compute[244014]: 2026-02-25 12:48:17.696 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "848fd033-0ebb-460a-a8a0-56583fa5f481" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:17 np0005629333 nova_compute[244014]: 2026-02-25 12:48:17.697 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:17 np0005629333 nova_compute[244014]: 2026-02-25 12:48:17.698 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "95730650-36ac-4eee-8b22-9ea3f01d82d1" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:17 np0005629333 nova_compute[244014]: 2026-02-25 12:48:17.699 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:17 np0005629333 nova_compute[244014]: 2026-02-25 12:48:17.699 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "271f6569-a8f6-43a3-ac98-511eff77c426" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:17 np0005629333 nova_compute[244014]: 2026-02-25 12:48:17.700 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "271f6569-a8f6-43a3-ac98-511eff77c426" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:17 np0005629333 nova_compute[244014]: 2026-02-25 12:48:17.700 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "9c11f17d-5dba-4ece-8340-1c4ff0939294" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:17 np0005629333 nova_compute[244014]: 2026-02-25 12:48:17.701 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:17 np0005629333 nova_compute[244014]: 2026-02-25 12:48:17.738 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:17 np0005629333 nova_compute[244014]: 2026-02-25 12:48:17.741 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:17 np0005629333 nova_compute[244014]: 2026-02-25 12:48:17.743 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:17 np0005629333 nova_compute[244014]: 2026-02-25 12:48:17.745 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "271f6569-a8f6-43a3-ac98-511eff77c426" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:17 np0005629333 quirky_allen[351954]: {
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:    "0": [
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:        {
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "devices": [
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "/dev/loop3"
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            ],
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "lv_name": "ceph_lv0",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "lv_size": "21470642176",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "name": "ceph_lv0",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "tags": {
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.cluster_name": "ceph",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.crush_device_class": "",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.encrypted": "0",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.objectstore": "bluestore",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.osd_id": "0",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.type": "block",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.vdo": "0",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.with_tpm": "0"
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            },
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "type": "block",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "vg_name": "ceph_vg0"
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:        }
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:    ],
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:    "1": [
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:        {
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "devices": [
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "/dev/loop4"
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            ],
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "lv_name": "ceph_lv1",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "lv_size": "21470642176",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "name": "ceph_lv1",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "tags": {
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.cluster_name": "ceph",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.crush_device_class": "",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.encrypted": "0",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.objectstore": "bluestore",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.osd_id": "1",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.type": "block",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.vdo": "0",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.with_tpm": "0"
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            },
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "type": "block",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "vg_name": "ceph_vg1"
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:        }
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:    ],
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:    "2": [
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:        {
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "devices": [
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "/dev/loop5"
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            ],
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "lv_name": "ceph_lv2",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "lv_size": "21470642176",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "name": "ceph_lv2",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "tags": {
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.cluster_name": "ceph",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.crush_device_class": "",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.encrypted": "0",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.objectstore": "bluestore",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.osd_id": "2",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.type": "block",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.vdo": "0",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:                "ceph.with_tpm": "0"
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            },
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "type": "block",
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:            "vg_name": "ceph_vg2"
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:        }
Feb 25 07:48:17 np0005629333 quirky_allen[351954]:    ]
Feb 25 07:48:17 np0005629333 quirky_allen[351954]: }
Feb 25 07:48:17 np0005629333 systemd[1]: libpod-2c4afd431e71d6ff6fd9332dccbe3093e450c557dd9d29de3ae8667e39504e9e.scope: Deactivated successfully.
Feb 25 07:48:18 np0005629333 podman[351963]: 2026-02-25 12:48:18.018936551 +0000 UTC m=+0.025645435 container died 2c4afd431e71d6ff6fd9332dccbe3093e450c557dd9d29de3ae8667e39504e9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Feb 25 07:48:18 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d38a9f12bdad6efa115b3145c468428871041b1e5eb8380502c7ff9497380721-merged.mount: Deactivated successfully.
Feb 25 07:48:18 np0005629333 podman[351963]: 2026-02-25 12:48:18.055552275 +0000 UTC m=+0.062261129 container remove 2c4afd431e71d6ff6fd9332dccbe3093e450c557dd9d29de3ae8667e39504e9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_allen, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 07:48:18 np0005629333 systemd[1]: libpod-conmon-2c4afd431e71d6ff6fd9332dccbe3093e450c557dd9d29de3ae8667e39504e9e.scope: Deactivated successfully.
Feb 25 07:48:18 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:18Z|00145|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:72:a7:94 10.100.0.30
Feb 25 07:48:18 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:18Z|00146|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:72:a7:94 10.100.0.30
Feb 25 07:48:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2032: 305 pgs: 305 active+clean; 448 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.2 MiB/s wr, 217 op/s
Feb 25 07:48:18 np0005629333 podman[352040]: 2026-02-25 12:48:18.555651897 +0000 UTC m=+0.046377291 container create 18a0c883a8d5c626f74b914214aaae8572e91a9b296b0d85ba5399ed86bb7ad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_beaver, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:48:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:48:18 np0005629333 nova_compute[244014]: 2026-02-25 12:48:18.578 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:18 np0005629333 systemd[1]: Started libpod-conmon-18a0c883a8d5c626f74b914214aaae8572e91a9b296b0d85ba5399ed86bb7ad0.scope.
Feb 25 07:48:18 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:48:18 np0005629333 podman[352040]: 2026-02-25 12:48:18.53664927 +0000 UTC m=+0.027374714 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:48:18 np0005629333 podman[352040]: 2026-02-25 12:48:18.639554056 +0000 UTC m=+0.130279450 container init 18a0c883a8d5c626f74b914214aaae8572e91a9b296b0d85ba5399ed86bb7ad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_beaver, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 07:48:18 np0005629333 podman[352040]: 2026-02-25 12:48:18.649012313 +0000 UTC m=+0.139737717 container start 18a0c883a8d5c626f74b914214aaae8572e91a9b296b0d85ba5399ed86bb7ad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:48:18 np0005629333 systemd[1]: libpod-18a0c883a8d5c626f74b914214aaae8572e91a9b296b0d85ba5399ed86bb7ad0.scope: Deactivated successfully.
Feb 25 07:48:18 np0005629333 podman[352040]: 2026-02-25 12:48:18.652303346 +0000 UTC m=+0.143028740 container attach 18a0c883a8d5c626f74b914214aaae8572e91a9b296b0d85ba5399ed86bb7ad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Feb 25 07:48:18 np0005629333 nostalgic_beaver[352056]: 167 167
Feb 25 07:48:18 np0005629333 conmon[352056]: conmon 18a0c883a8d5c626f74b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-18a0c883a8d5c626f74b914214aaae8572e91a9b296b0d85ba5399ed86bb7ad0.scope/container/memory.events
Feb 25 07:48:18 np0005629333 podman[352040]: 2026-02-25 12:48:18.657850872 +0000 UTC m=+0.148576306 container died 18a0c883a8d5c626f74b914214aaae8572e91a9b296b0d85ba5399ed86bb7ad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 07:48:18 np0005629333 systemd[1]: var-lib-containers-storage-overlay-44edb3642857b5e51a8e9bb1a982c975a0e094d742b5a7736e1a84e13de23853-merged.mount: Deactivated successfully.
Feb 25 07:48:18 np0005629333 podman[352040]: 2026-02-25 12:48:18.699341564 +0000 UTC m=+0.190066998 container remove 18a0c883a8d5c626f74b914214aaae8572e91a9b296b0d85ba5399ed86bb7ad0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_beaver, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:48:18 np0005629333 systemd[1]: libpod-conmon-18a0c883a8d5c626f74b914214aaae8572e91a9b296b0d85ba5399ed86bb7ad0.scope: Deactivated successfully.
Feb 25 07:48:18 np0005629333 podman[352080]: 2026-02-25 12:48:18.894004861 +0000 UTC m=+0.044138398 container create 13a01251c79adab22bc8ed37de0d601fe411ada86e0c5271952ff1d441a345ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 07:48:18 np0005629333 systemd[1]: Started libpod-conmon-13a01251c79adab22bc8ed37de0d601fe411ada86e0c5271952ff1d441a345ec.scope.
Feb 25 07:48:18 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:48:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1762dc3ed452471c8aee009aa1bc4227ce16f02c31a4d6301d6f0e979fae0d81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:48:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1762dc3ed452471c8aee009aa1bc4227ce16f02c31a4d6301d6f0e979fae0d81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:48:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1762dc3ed452471c8aee009aa1bc4227ce16f02c31a4d6301d6f0e979fae0d81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:48:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1762dc3ed452471c8aee009aa1bc4227ce16f02c31a4d6301d6f0e979fae0d81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:48:18 np0005629333 podman[352080]: 2026-02-25 12:48:18.875444986 +0000 UTC m=+0.025578583 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:48:18 np0005629333 podman[352080]: 2026-02-25 12:48:18.973340481 +0000 UTC m=+0.123474038 container init 13a01251c79adab22bc8ed37de0d601fe411ada86e0c5271952ff1d441a345ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:48:18 np0005629333 podman[352080]: 2026-02-25 12:48:18.978042994 +0000 UTC m=+0.128176531 container start 13a01251c79adab22bc8ed37de0d601fe411ada86e0c5271952ff1d441a345ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ramanujan, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:48:18 np0005629333 podman[352080]: 2026-02-25 12:48:18.982106948 +0000 UTC m=+0.132240485 container attach 13a01251c79adab22bc8ed37de0d601fe411ada86e0c5271952ff1d441a345ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 07:48:19 np0005629333 lvm[352173]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:48:19 np0005629333 lvm[352176]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:48:19 np0005629333 lvm[352176]: VG ceph_vg1 finished
Feb 25 07:48:19 np0005629333 lvm[352173]: VG ceph_vg0 finished
Feb 25 07:48:19 np0005629333 lvm[352178]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:48:19 np0005629333 lvm[352178]: VG ceph_vg2 finished
Feb 25 07:48:19 np0005629333 lvm[352179]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:48:19 np0005629333 lvm[352179]: VG ceph_vg1 finished
Feb 25 07:48:19 np0005629333 lvm[352181]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:48:19 np0005629333 lvm[352181]: VG ceph_vg1 finished
Feb 25 07:48:19 np0005629333 nervous_ramanujan[352097]: {}
Feb 25 07:48:19 np0005629333 systemd[1]: libpod-13a01251c79adab22bc8ed37de0d601fe411ada86e0c5271952ff1d441a345ec.scope: Deactivated successfully.
Feb 25 07:48:19 np0005629333 podman[352080]: 2026-02-25 12:48:19.678951884 +0000 UTC m=+0.829085421 container died 13a01251c79adab22bc8ed37de0d601fe411ada86e0c5271952ff1d441a345ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ramanujan, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 07:48:19 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1762dc3ed452471c8aee009aa1bc4227ce16f02c31a4d6301d6f0e979fae0d81-merged.mount: Deactivated successfully.
Feb 25 07:48:19 np0005629333 podman[352080]: 2026-02-25 12:48:19.714989402 +0000 UTC m=+0.865122979 container remove 13a01251c79adab22bc8ed37de0d601fe411ada86e0c5271952ff1d441a345ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_ramanujan, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 07:48:19 np0005629333 systemd[1]: libpod-conmon-13a01251c79adab22bc8ed37de0d601fe411ada86e0c5271952ff1d441a345ec.scope: Deactivated successfully.
Feb 25 07:48:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:48:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:48:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:48:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:48:19 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:48:19 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:48:20 np0005629333 nova_compute[244014]: 2026-02-25 12:48:20.283 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 305 active+clean; 448 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 777 KiB/s rd, 3.2 MiB/s wr, 90 op/s
Feb 25 07:48:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2034: 305 pgs: 305 active+clean; 468 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 951 KiB/s rd, 4.3 MiB/s wr, 133 op/s
Feb 25 07:48:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:48:23 np0005629333 nova_compute[244014]: 2026-02-25 12:48:23.580 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 471 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 651 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.286 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.359 244018 DEBUG oslo_concurrency.lockutils [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "9c11f17d-5dba-4ece-8340-1c4ff0939294" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.360 244018 DEBUG oslo_concurrency.lockutils [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.360 244018 DEBUG oslo_concurrency.lockutils [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.361 244018 DEBUG oslo_concurrency.lockutils [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.361 244018 DEBUG oslo_concurrency.lockutils [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.363 244018 INFO nova.compute.manager [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Terminating instance#033[00m
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.365 244018 DEBUG nova.compute.manager [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:48:25 np0005629333 kernel: tapcbf23880-21 (unregistering): left promiscuous mode
Feb 25 07:48:25 np0005629333 NetworkManager[49836]: <info>  [1772023705.4145] device (tapcbf23880-21): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:48:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:25Z|01241|binding|INFO|Releasing lport cbf23880-21db-4752-be67-0abdb0153c2b from this chassis (sb_readonly=0)
Feb 25 07:48:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:25Z|01242|binding|INFO|Setting lport cbf23880-21db-4752-be67-0abdb0153c2b down in Southbound
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.425 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:25Z|01243|binding|INFO|Removing iface tapcbf23880-21 ovn-installed in OVS
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.430 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.441 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:25.447 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:a7:94 10.100.0.30'], port_security=['fa:16:3e:72:a7:94 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': '9c11f17d-5dba-4ece-8340-1c4ff0939294', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f3635704-b98f-49df-8dc5-9052a34d8970', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '4', 'neutron:security_group_ids': '98aaa3fd-2372-44ea-9de4-f4c0cf776cf4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08599aa3-75c1-4d0f-bdce-36e8497cd876, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=cbf23880-21db-4752-be67-0abdb0153c2b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:48:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:25.449 157129 INFO neutron.agent.ovn.metadata.agent [-] Port cbf23880-21db-4752-be67-0abdb0153c2b in datapath f3635704-b98f-49df-8dc5-9052a34d8970 unbound from our chassis#033[00m
Feb 25 07:48:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:25.450 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f3635704-b98f-49df-8dc5-9052a34d8970, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:48:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:25.452 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dacd2751-eaa0-4fba-b125-a275dba1b8ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:25.452 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970 namespace which is not needed anymore#033[00m
Feb 25 07:48:25 np0005629333 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000078.scope: Deactivated successfully.
Feb 25 07:48:25 np0005629333 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000078.scope: Consumed 13.511s CPU time.
Feb 25 07:48:25 np0005629333 systemd-machined[210048]: Machine qemu-152-instance-00000078 terminated.
Feb 25 07:48:25 np0005629333 neutron-haproxy-ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970[351582]: [NOTICE]   (351586) : haproxy version is 2.8.14-c23fe91
Feb 25 07:48:25 np0005629333 neutron-haproxy-ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970[351582]: [NOTICE]   (351586) : path to executable is /usr/sbin/haproxy
Feb 25 07:48:25 np0005629333 neutron-haproxy-ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970[351582]: [ALERT]    (351586) : Current worker (351588) exited with code 143 (Terminated)
Feb 25 07:48:25 np0005629333 neutron-haproxy-ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970[351582]: [WARNING]  (351586) : All workers exited. Exiting... (0)
Feb 25 07:48:25 np0005629333 systemd[1]: libpod-7cde34016f96915e524d46b661b532662a38979cd901aa6b48cc21a32f0d613f.scope: Deactivated successfully.
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.603 244018 INFO nova.virt.libvirt.driver [-] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Instance destroyed successfully.#033[00m
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.603 244018 DEBUG nova.objects.instance [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'resources' on Instance uuid 9c11f17d-5dba-4ece-8340-1c4ff0939294 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:48:25 np0005629333 podman[352243]: 2026-02-25 12:48:25.605327045 +0000 UTC m=+0.050507467 container died 7cde34016f96915e524d46b661b532662a38979cd901aa6b48cc21a32f0d613f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.619 244018 DEBUG nova.virt.libvirt.vif [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:47:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-83970638',display_name='tempest-TestNetworkBasicOps-server-83970638',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-83970638',id=120,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMxYaDCaF+dhprfU7p2GFAe30UCzLc9MF80n83kHh1iIsnjcJHnQ6Q/VzjgOn2GIc4A+5MBe7VFnhNTJz7AlxywFHXtPZOo+vT/4VV+kNxRXdCo8I4zsXHElpgbrS0cbTQ==',key_name='tempest-TestNetworkBasicOps-1724785857',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:48:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-0o0rgl3q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:48:06Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=9c11f17d-5dba-4ece-8340-1c4ff0939294,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cbf23880-21db-4752-be67-0abdb0153c2b", "address": "fa:16:3e:72:a7:94", "network": {"id": "f3635704-b98f-49df-8dc5-9052a34d8970", "bridge": "br-int", "label": "tempest-network-smoke--1605130746", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf23880-21", "ovs_interfaceid": "cbf23880-21db-4752-be67-0abdb0153c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.620 244018 DEBUG nova.network.os_vif_util [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "cbf23880-21db-4752-be67-0abdb0153c2b", "address": "fa:16:3e:72:a7:94", "network": {"id": "f3635704-b98f-49df-8dc5-9052a34d8970", "bridge": "br-int", "label": "tempest-network-smoke--1605130746", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbf23880-21", "ovs_interfaceid": "cbf23880-21db-4752-be67-0abdb0153c2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.621 244018 DEBUG nova.network.os_vif_util [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:a7:94,bridge_name='br-int',has_traffic_filtering=True,id=cbf23880-21db-4752-be67-0abdb0153c2b,network=Network(f3635704-b98f-49df-8dc5-9052a34d8970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbf23880-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.621 244018 DEBUG os_vif [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:a7:94,bridge_name='br-int',has_traffic_filtering=True,id=cbf23880-21db-4752-be67-0abdb0153c2b,network=Network(f3635704-b98f-49df-8dc5-9052a34d8970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbf23880-21') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.623 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.623 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcbf23880-21, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.626 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.627 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.630 244018 INFO os_vif [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:a7:94,bridge_name='br-int',has_traffic_filtering=True,id=cbf23880-21db-4752-be67-0abdb0153c2b,network=Network(f3635704-b98f-49df-8dc5-9052a34d8970),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbf23880-21')#033[00m
Feb 25 07:48:25 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7cde34016f96915e524d46b661b532662a38979cd901aa6b48cc21a32f0d613f-userdata-shm.mount: Deactivated successfully.
Feb 25 07:48:25 np0005629333 systemd[1]: var-lib-containers-storage-overlay-97b72957ff8d950551abb65b325b6b882f88ff85666c7edcb84f856ea8d68c66-merged.mount: Deactivated successfully.
Feb 25 07:48:25 np0005629333 podman[352243]: 2026-02-25 12:48:25.647085304 +0000 UTC m=+0.092265726 container cleanup 7cde34016f96915e524d46b661b532662a38979cd901aa6b48cc21a32f0d613f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 07:48:25 np0005629333 systemd[1]: libpod-conmon-7cde34016f96915e524d46b661b532662a38979cd901aa6b48cc21a32f0d613f.scope: Deactivated successfully.
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.701 244018 DEBUG nova.compute.manager [req-ed89f19b-e000-40da-b766-49d67685f221 req-91376bde-f0d6-45a4-b78e-82071af2c005 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Received event network-vif-unplugged-cbf23880-21db-4752-be67-0abdb0153c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.702 244018 DEBUG oslo_concurrency.lockutils [req-ed89f19b-e000-40da-b766-49d67685f221 req-91376bde-f0d6-45a4-b78e-82071af2c005 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.702 244018 DEBUG oslo_concurrency.lockutils [req-ed89f19b-e000-40da-b766-49d67685f221 req-91376bde-f0d6-45a4-b78e-82071af2c005 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.702 244018 DEBUG oslo_concurrency.lockutils [req-ed89f19b-e000-40da-b766-49d67685f221 req-91376bde-f0d6-45a4-b78e-82071af2c005 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.703 244018 DEBUG nova.compute.manager [req-ed89f19b-e000-40da-b766-49d67685f221 req-91376bde-f0d6-45a4-b78e-82071af2c005 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] No waiting events found dispatching network-vif-unplugged-cbf23880-21db-4752-be67-0abdb0153c2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.703 244018 DEBUG nova.compute.manager [req-ed89f19b-e000-40da-b766-49d67685f221 req-91376bde-f0d6-45a4-b78e-82071af2c005 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Received event network-vif-unplugged-cbf23880-21db-4752-be67-0abdb0153c2b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:48:25 np0005629333 podman[352298]: 2026-02-25 12:48:25.721446754 +0000 UTC m=+0.051842155 container remove 7cde34016f96915e524d46b661b532662a38979cd901aa6b48cc21a32f0d613f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:48:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:25.727 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5e361888-d00f-4337-9dd7-7ea8015056dd]: (4, ('Wed Feb 25 12:48:25 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970 (7cde34016f96915e524d46b661b532662a38979cd901aa6b48cc21a32f0d613f)\n7cde34016f96915e524d46b661b532662a38979cd901aa6b48cc21a32f0d613f\nWed Feb 25 12:48:25 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970 (7cde34016f96915e524d46b661b532662a38979cd901aa6b48cc21a32f0d613f)\n7cde34016f96915e524d46b661b532662a38979cd901aa6b48cc21a32f0d613f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:25.729 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a61003f7-63e6-4c42-90b4-dbb9e102c302]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:25.731 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3635704-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.733 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:25 np0005629333 kernel: tapf3635704-b0: left promiscuous mode
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.740 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:25.743 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8bc0f11f-5b8b-4418-ad3f-2cda0a2e1e71]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:25.763 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[699fb1e5-4d28-446e-95d0-78c991ed53f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:25.765 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cae5b83c-8e83-40be-8040-ce2ec3e10e05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:25.778 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3981ec8e-beaa-4cb2-bce6-31561a0ad479]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565499, 'reachable_time': 18596, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352317, 'error': None, 'target': 'ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:25.780 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f3635704-b98f-49df-8dc5-9052a34d8970 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:48:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:25.780 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[b15ec24e-74f4-403a-a7f8-9e9a3722b41d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:25 np0005629333 systemd[1]: run-netns-ovnmeta\x2df3635704\x2db98f\x2d49df\x2d8dc5\x2d9052a34d8970.mount: Deactivated successfully.
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.877 244018 INFO nova.virt.libvirt.driver [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Deleting instance files /var/lib/nova/instances/9c11f17d-5dba-4ece-8340-1c4ff0939294_del#033[00m
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.878 244018 INFO nova.virt.libvirt.driver [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Deletion of /var/lib/nova/instances/9c11f17d-5dba-4ece-8340-1c4ff0939294_del complete#033[00m
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.953 244018 INFO nova.compute.manager [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Took 0.59 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.954 244018 DEBUG oslo.service.loopingcall [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.954 244018 DEBUG nova.compute.manager [-] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:48:25 np0005629333 nova_compute[244014]: 2026-02-25 12:48:25.954 244018 DEBUG nova.network.neutron [-] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:48:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 305 active+clean; 471 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 650 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Feb 25 07:48:26 np0005629333 nova_compute[244014]: 2026-02-25 12:48:26.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:48:26 np0005629333 nova_compute[244014]: 2026-02-25 12:48:26.878 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:48:26 np0005629333 nova_compute[244014]: 2026-02-25 12:48:26.922 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:26 np0005629333 nova_compute[244014]: 2026-02-25 12:48:26.923 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:26 np0005629333 nova_compute[244014]: 2026-02-25 12:48:26.923 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:26 np0005629333 nova_compute[244014]: 2026-02-25 12:48:26.924 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:48:26 np0005629333 nova_compute[244014]: 2026-02-25 12:48:26.924 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.223 244018 DEBUG nova.network.neutron [-] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.251 244018 INFO nova.compute.manager [-] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Took 1.30 seconds to deallocate network for instance.#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.292 244018 DEBUG oslo_concurrency.lockutils [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "271f6569-a8f6-43a3-ac98-511eff77c426" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.292 244018 DEBUG oslo_concurrency.lockutils [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.293 244018 DEBUG oslo_concurrency.lockutils [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.293 244018 DEBUG oslo_concurrency.lockutils [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.294 244018 DEBUG oslo_concurrency.lockutils [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.295 244018 INFO nova.compute.manager [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Terminating instance#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.296 244018 DEBUG nova.compute.manager [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.304 244018 DEBUG oslo_concurrency.lockutils [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.304 244018 DEBUG oslo_concurrency.lockutils [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:27 np0005629333 kernel: tap3de24fa3-5d (unregistering): left promiscuous mode
Feb 25 07:48:27 np0005629333 NetworkManager[49836]: <info>  [1772023707.3418] device (tap3de24fa3-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:48:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:27Z|01244|binding|INFO|Releasing lport 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c from this chassis (sb_readonly=0)
Feb 25 07:48:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:27Z|01245|binding|INFO|Setting lport 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c down in Southbound
Feb 25 07:48:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:27Z|01246|binding|INFO|Removing iface tap3de24fa3-5d ovn-installed in OVS
Feb 25 07:48:27 np0005629333 kernel: tap0dbf7aaf-2d (unregistering): left promiscuous mode
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.355 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:27 np0005629333 NetworkManager[49836]: <info>  [1772023707.3597] device (tap0dbf7aaf-2d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.359 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:6b:63 10.100.0.8'], port_security=['fa:16:3e:c8:6b:63 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '271f6569-a8f6-43a3-ac98-511eff77c426', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e663e6c0-5fd2-4899-a0e9-500029428c03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=560bd45a-3180-4994-88ce-74a6fe57d885, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=3de24fa3-5df5-470c-98d2-1a60e0ebfc0c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.361 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c in datapath d0b6d114-fcb4-4a25-988c-1ee301ef0419 unbound from our chassis#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.364 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d0b6d114-fcb4-4a25-988c-1ee301ef0419#033[00m
Feb 25 07:48:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:27Z|01247|binding|INFO|Releasing lport 0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 from this chassis (sb_readonly=0)
Feb 25 07:48:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:27Z|01248|binding|INFO|Setting lport 0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 down in Southbound
Feb 25 07:48:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:27Z|01249|binding|INFO|Removing iface tap0dbf7aaf-2d ovn-installed in OVS
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.377 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.380 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:c2:9f 2001:db8:0:1:f816:3eff:fe46:c29f 2001:db8::f816:3eff:fe46:c29f'], port_security=['fa:16:3e:46:c2:9f 2001:db8:0:1:f816:3eff:fe46:c29f 2001:db8::f816:3eff:fe46:c29f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe46:c29f/64 2001:db8::f816:3eff:fe46:c29f/64', 'neutron:device_id': '271f6569-a8f6-43a3-ac98-511eff77c426', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c482202-8994-4033-a0a9-167d92a9e301', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e663e6c0-5fd2-4899-a0e9-500029428c03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06544f4e-1579-4f9d-8faa-1248478e7fbc, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.381 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b4db1f08-7af7-454e-ac3b-5807d7202e36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.383 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:27 np0005629333 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000077.scope: Deactivated successfully.
Feb 25 07:48:27 np0005629333 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000077.scope: Consumed 13.402s CPU time.
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.413 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3ebd54ba-0592-4f88-9478-878b5ef58089]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:27 np0005629333 systemd-machined[210048]: Machine qemu-151-instance-00000077 terminated.
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.416 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3bd4bdbe-654b-4931-acb8-c7f3810e9c21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.447 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[12e10e40-24ef-45c9-a890-6aae73ff1a64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.454 244018 DEBUG oslo_concurrency.processutils [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.468 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[21b04bb0-cc15-4e4f-8cad-073e60fdedf3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0b6d114-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:c1:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 364], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561168, 'reachable_time': 40851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352352, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.483 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c870ad70-921b-4dcd-8417-e1f5b4d75836]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd0b6d114-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561174, 'tstamp': 561174}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352354, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd0b6d114-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561176, 'tstamp': 561176}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352354, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.485 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0b6d114-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.496 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0b6d114-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.494 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.496 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.496 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd0b6d114-f0, col_values=(('external_ids', {'iface-id': '4e1af2d0-780b-4ffd-93a0-445de7a22322'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.497 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.498 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 in datapath 5c482202-8994-4033-a0a9-167d92a9e301 unbound from our chassis#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.499 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5c482202-8994-4033-a0a9-167d92a9e301#033[00m
Feb 25 07:48:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:48:27 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2047658061' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:48:27 np0005629333 kernel: tap3de24fa3-5d: entered promiscuous mode
Feb 25 07:48:27 np0005629333 kernel: tap3de24fa3-5d (unregistering): left promiscuous mode
Feb 25 07:48:27 np0005629333 NetworkManager[49836]: <info>  [1772023707.5170] manager: (tap3de24fa3-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/520)
Feb 25 07:48:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:27Z|01250|binding|INFO|Claiming lport 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c for this chassis.
Feb 25 07:48:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:27Z|01251|binding|INFO|3de24fa3-5df5-470c-98d2-1a60e0ebfc0c: Claiming fa:16:3e:c8:6b:63 10.100.0.8
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.520 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.522 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2f53eb1a-9b26-4ac5-b6a6-57dcc2d2b4bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.527 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:6b:63 10.100.0.8'], port_security=['fa:16:3e:c8:6b:63 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '271f6569-a8f6-43a3-ac98-511eff77c426', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e663e6c0-5fd2-4899-a0e9-500029428c03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=560bd45a-3180-4994-88ce-74a6fe57d885, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=3de24fa3-5df5-470c-98d2-1a60e0ebfc0c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:48:27 np0005629333 NetworkManager[49836]: <info>  [1772023707.5283] manager: (tap0dbf7aaf-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/521)
Feb 25 07:48:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:27Z|01252|binding|INFO|Releasing lport 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c from this chassis (sb_readonly=0)
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.538 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.545 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:6b:63 10.100.0.8'], port_security=['fa:16:3e:c8:6b:63 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '271f6569-a8f6-43a3-ac98-511eff77c426', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e663e6c0-5fd2-4899-a0e9-500029428c03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=560bd45a-3180-4994-88ce-74a6fe57d885, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=3de24fa3-5df5-470c-98d2-1a60e0ebfc0c) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.548 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.553 244018 INFO nova.virt.libvirt.driver [-] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Instance destroyed successfully.#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.554 244018 DEBUG nova.objects.instance [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid 271f6569-a8f6-43a3-ac98-511eff77c426 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.555 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[07458673-4107-43db-934b-8935478b1894]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.558 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f31fce2b-2f64-460f-9e3b-f31649e0076b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.569 244018 DEBUG nova.virt.libvirt.vif [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:47:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1288096549',display_name='tempest-TestGettingAddress-server-1288096549',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1288096549',id=119,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB69Pr57xlpfPAIOfVFDA83wUDqkiKKz/q2j8EaAao/3/InA7y2axmKr5B1TGfyn/pk4XdqMYpcKTTAq9q/wSlLakw5QjJZuZ8Zodvba1czxOZQjigR2CJ2ZqyN+ZHKUOA==',key_name='tempest-TestGettingAddress-545180732',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:48:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-w8etrtwi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:48:02Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=271f6569-a8f6-43a3-ac98-511eff77c426,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "address": "fa:16:3e:c8:6b:63", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de24fa3-5d", "ovs_interfaceid": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.570 244018 DEBUG nova.network.os_vif_util [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "address": "fa:16:3e:c8:6b:63", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de24fa3-5d", "ovs_interfaceid": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.571 244018 DEBUG nova.network.os_vif_util [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c8:6b:63,bridge_name='br-int',has_traffic_filtering=True,id=3de24fa3-5df5-470c-98d2-1a60e0ebfc0c,network=Network(d0b6d114-fcb4-4a25-988c-1ee301ef0419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3de24fa3-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.571 244018 DEBUG os_vif [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c8:6b:63,bridge_name='br-int',has_traffic_filtering=True,id=3de24fa3-5df5-470c-98d2-1a60e0ebfc0c,network=Network(d0b6d114-fcb4-4a25-988c-1ee301ef0419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3de24fa3-5d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.573 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.573 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3de24fa3-5d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.575 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.577 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.580 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.582 244018 INFO os_vif [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c8:6b:63,bridge_name='br-int',has_traffic_filtering=True,id=3de24fa3-5df5-470c-98d2-1a60e0ebfc0c,network=Network(d0b6d114-fcb4-4a25-988c-1ee301ef0419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3de24fa3-5d')#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.583 244018 DEBUG nova.virt.libvirt.vif [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:47:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1288096549',display_name='tempest-TestGettingAddress-server-1288096549',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1288096549',id=119,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB69Pr57xlpfPAIOfVFDA83wUDqkiKKz/q2j8EaAao/3/InA7y2axmKr5B1TGfyn/pk4XdqMYpcKTTAq9q/wSlLakw5QjJZuZ8Zodvba1czxOZQjigR2CJ2ZqyN+ZHKUOA==',key_name='tempest-TestGettingAddress-545180732',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:48:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-w8etrtwi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:48:02Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=271f6569-a8f6-43a3-ac98-511eff77c426,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "address": "fa:16:3e:46:c2:9f", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dbf7aaf-2d", "ovs_interfaceid": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.584 244018 DEBUG nova.network.os_vif_util [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "address": "fa:16:3e:46:c2:9f", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dbf7aaf-2d", "ovs_interfaceid": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.584 244018 DEBUG nova.network.os_vif_util [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:c2:9f,bridge_name='br-int',has_traffic_filtering=True,id=0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934,network=Network(5c482202-8994-4033-a0a9-167d92a9e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0dbf7aaf-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.585 244018 DEBUG os_vif [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:c2:9f,bridge_name='br-int',has_traffic_filtering=True,id=0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934,network=Network(5c482202-8994-4033-a0a9-167d92a9e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0dbf7aaf-2d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.587 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.587 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0dbf7aaf-2d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.589 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.590 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.592 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5bedc655-2f09-426d-bbc1-ef7c1383d8e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.592 244018 INFO os_vif [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:c2:9f,bridge_name='br-int',has_traffic_filtering=True,id=0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934,network=Network(5c482202-8994-4033-a0a9-167d92a9e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0dbf7aaf-2d')#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.606 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[283273ae-8b78-4573-bc2d-5c233dff9379]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5c482202-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:af:c3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 36, 'tx_packets': 5, 'rx_bytes': 3160, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 36, 'tx_packets': 5, 'rx_bytes': 3160, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 365], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561244, 'reachable_time': 22558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352399, 'error': None, 'target': 'ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.618 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[755ab6dd-5f6d-42f2-aa95-950fe1ec3eb2]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5c482202-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561254, 'tstamp': 561254}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352416, 'error': None, 'target': 'ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
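[editor's note] The two privsep replies above are netlink dumps in pyroute2's message format, taken inside the namespace named by the header's target field: an RTM_NEWLINK showing the veth end tap5c482202-81 UP, then an RTM_NEWADDR showing the metadata address 169.254.169.254/32 on it. A read-back sketch, under the assumption that the namespace still exists on the host:

    from pyroute2 import NetNS

    # flags=0 stops NetNS from creating the namespace if it is missing.
    ns = NetNS('ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301', flags=0)
    try:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_OPERSTATE'))
        for addr in ns.get_addr(label='tap5c482202-81'):
            print(addr.get_attr('IFA_ADDRESS'))  # expect 169.254.169.254
    finally:
        ns.close()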
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.619 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c482202-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.621 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.622 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.622 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c482202-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.622 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.623 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5c482202-80, col_values=(('external_ids', {'iface-id': '8372e6c6-fbbc-48a7-be95-d95e1d2ad95a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.623 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
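[editor's note] The three single-command transactions above rewire the metadata port: drop tap5c482202-80 from br-ex if present, add it to br-int, and point external_ids:iface-id at the OVN port it should bind to. Both "Transaction caused no change" results are expected, since if_exists/may_exist make the commands idempotent when the port is already in the desired state. A sketch batching the same steps into one ovsdbapp transaction; the socket path is again an assumption:

    from ovsdbapp.backend.ovs_idl import connection as ovsdb_conn
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = ovsdb_conn.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(ovsdb_conn.Connection(idl, timeout=10))

    port = 'tap5c482202-80'
    iface_id = '8372e6c6-fbbc-48a7-be95-d95e1d2ad95a'
    # One commit instead of three; each command no-ops if already applied.
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port(port, bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', port, may_exist=True))
        txn.add(api.db_set('Interface', port,
                           ('external_ids', {'iface-id': iface_id})))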
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.624 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c in datapath d0b6d114-fcb4-4a25-988c-1ee301ef0419 unbound from our chassis#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.625 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d0b6d114-fcb4-4a25-988c-1ee301ef0419#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.637 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c4c61403-a7de-4975-a662-469e4dd8d311]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.660 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb2658b-36b2-4b2b-96ff-bd077c387e12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.664 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b79788d6-0ed7-4f50-a01f-6cabcb9ad263]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.670 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.670 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.675 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.675 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000076 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.681 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.681 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.692 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1e0ba077-5d0d-4dda-a234-67d8f88052c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.716 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c2412249-1fc1-4bcf-a45d-85c21a571b5d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0b6d114-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:c1:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 364], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561168, 'reachable_time': 40851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352430, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.734 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[adf553b1-e9df-4ca9-913a-34e0378b08af]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd0b6d114-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561174, 'tstamp': 561174}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352431, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd0b6d114-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561176, 'tstamp': 561176}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352431, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.736 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0b6d114-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.737 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.744 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.745 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0b6d114-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.745 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.745 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd0b6d114-f0, col_values=(('external_ids', {'iface-id': '4e1af2d0-780b-4ffd-93a0-445de7a22322'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.745 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.746 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c in datapath d0b6d114-fcb4-4a25-988c-1ee301ef0419 unbound from our chassis#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.748 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d0b6d114-fcb4-4a25-988c-1ee301ef0419#033[00m
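[editor's note] "Provisioning metadata for network ..." means the agent re-asserts the ovnmeta-<datapath-uuid> namespace, its tap device, and the proxy listening on 169.254.169.254:80 inside it. A liveness-probe sketch, assuming pyroute2 and curl are available on the host; a bare request like this only proves the listener answers, since the proxy identifies real clients by their source address:

    import subprocess
    from pyroute2 import NSPopen

    ns = 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419'
    proc = NSPopen(ns, ['curl', '-s', '-o', '/dev/null', '-w', '%{http_code}',
                        'http://169.254.169.254/'], stdout=subprocess.PIPE)
    print(proc.communicate()[0])  # any HTTP status means the proxy is up
    proc.release()                # NSPopen requires an explicit release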
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.760 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[545e318c-11cc-475b-b27a-3b605ccd3162]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.784 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[79c5fcb0-bb78-4ba9-9630-ea0dd53627e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.788 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee9cdf0-db9b-43a3-aef8-57be6e0accd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.813 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[85c4b549-b831-4266-bb9c-a4f23de1098a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.828 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[49608d58-4d5c-4434-844e-e53e48f97127]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0b6d114-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:c1:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 364], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561168, 'reachable_time': 40851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352438, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.842 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[22cd27b3-ca78-4840-b02d-10e2681e3bf5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapd0b6d114-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561174, 'tstamp': 561174}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352439, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd0b6d114-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 561176, 'tstamp': 561176}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352439, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.844 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0b6d114-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.845 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.846 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0b6d114-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.846 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.847 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd0b6d114-f0, col_values=(('external_ids', {'iface-id': '4e1af2d0-780b-4ffd-93a0-445de7a22322'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:48:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:27.847 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.855 244018 INFO nova.virt.libvirt.driver [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Deleting instance files /var/lib/nova/instances/271f6569-a8f6-43a3-ac98-511eff77c426_del#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.856 244018 INFO nova.virt.libvirt.driver [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Deletion of /var/lib/nova/instances/271f6569-a8f6-43a3-ac98-511eff77c426_del complete#033[00m
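[editor's note] The _del suffix in these paths comes from the instance directory being renamed before removal: the rename happens first (logged elsewhere), then the renamed copy is deleted, so an interrupted cleanup leaves an obviously stale directory rather than a half-deleted live one. A sketch of that rename-then-remove pattern, with the path taken from the entries above:

    import os
    import shutil

    inst_dir = '/var/lib/nova/instances/271f6569-a8f6-43a3-ac98-511eff77c426'
    target = inst_dir + '_del'

    if os.path.exists(inst_dir):
        os.rename(inst_dir, target)  # atomic within one filesystem
    if os.path.exists(target):
        shutil.rmtree(target, ignore_errors=True)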
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.894 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.895 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3215MB free_disk=59.80528958886862GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.895 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.945 244018 INFO nova.compute.manager [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Took 0.65 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.945 244018 DEBUG oslo.service.loopingcall [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
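[editor's note] The loopingcall entry shows network deallocation wrapped in oslo.service's RetryDecorator, which re-runs the function on the listed exception types with growing sleeps and re-raises once the retry budget is spent. A generic sketch of that decorator; the parameter values and the exception type here are illustrative, not nova's actual ones:

    from oslo_service import loopingcall

    @loopingcall.RetryDecorator(max_retry_count=3, inc_sleep_time=2,
                                max_sleep_time=12, exceptions=(IOError,))
    def deallocate_network_with_retries():
        # Stand-in body: each IOError triggers a retry; after
        # max_retry_count attempts the exception propagates.
        raise IOError('transient failure')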
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.946 244018 DEBUG nova.compute.manager [-] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.946 244018 DEBUG nova.network.neutron [-] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.985 244018 DEBUG nova.compute.manager [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Received event network-vif-plugged-cbf23880-21db-4752-be67-0abdb0153c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.985 244018 DEBUG oslo_concurrency.lockutils [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.985 244018 DEBUG oslo_concurrency.lockutils [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.985 244018 DEBUG oslo_concurrency.lockutils [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
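[editor's note] The Acquiring/acquired/"released" triplets here, and around the "compute_resources" lock above, are oslo.concurrency's named-semaphore decorator at work; the waited/held durations in the messages are emitted by the decorator itself. A minimal sketch of the pattern; fair=True is an assumption (it gives FIFO handover between waiters):

    import time
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources', fair=True)
    def update_available_resource():
        time.sleep(0.01)  # stand-in for the critical section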
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.985 244018 DEBUG nova.compute.manager [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] No waiting events found dispatching network-vif-plugged-cbf23880-21db-4752-be67-0abdb0153c2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.985 244018 WARNING nova.compute.manager [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Received unexpected event network-vif-plugged-cbf23880-21db-4752-be67-0abdb0153c2b for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.986 244018 DEBUG nova.compute.manager [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-changed-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.986 244018 DEBUG nova.compute.manager [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Refreshing instance network info cache due to event network-changed-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.986 244018 DEBUG oslo_concurrency.lockutils [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.986 244018 DEBUG oslo_concurrency.lockutils [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:48:27 np0005629333 nova_compute[244014]: 2026-02-25 12:48:27.986 244018 DEBUG nova.network.neutron [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Refreshing network info cache for port 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1155424539' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:48:28 np0005629333 nova_compute[244014]: 2026-02-25 12:48:28.067 244018 DEBUG oslo_concurrency.processutils [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
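[editor's note] The 0.613 s `ceph df` call, which the monitor's audit entries above show arriving as client.openstack, is how the driver samples RBD pool capacity. A sketch of the same call plus pulling the cluster-wide free space out of its JSON, assuming the stats key layout of current Ceph releases:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    free_gb = stats['stats']['total_avail_bytes'] / 1024 ** 3
    print(round(free_gb, 2))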
Feb 25 07:48:28 np0005629333 nova_compute[244014]: 2026-02-25 12:48:28.072 244018 DEBUG nova.compute.provider_tree [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:48:28 np0005629333 nova_compute[244014]: 2026-02-25 12:48:28.085 244018 DEBUG nova.scheduler.client.report [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
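[editor's note] Placement turns that inventory into schedulable capacity as (total - reserved) * allocation_ratio per resource class, which is why this 8-vCPU host advertises 32 schedulable VCPUs:

    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2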
Feb 25 07:48:28 np0005629333 nova_compute[244014]: 2026-02-25 12:48:28.105 244018 DEBUG oslo_concurrency.lockutils [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.801s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:28 np0005629333 nova_compute[244014]: 2026-02-25 12:48:28.107 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.211s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:28 np0005629333 nova_compute[244014]: 2026-02-25 12:48:28.129 244018 INFO nova.scheduler.client.report [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Deleted allocations for instance 9c11f17d-5dba-4ece-8340-1c4ff0939294#033[00m
Feb 25 07:48:28 np0005629333 nova_compute[244014]: 2026-02-25 12:48:28.185 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 848fd033-0ebb-460a-a8a0-56583fa5f481 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:48:28 np0005629333 nova_compute[244014]: 2026-02-25 12:48:28.186 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 95730650-36ac-4eee-8b22-9ea3f01d82d1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:48:28 np0005629333 nova_compute[244014]: 2026-02-25 12:48:28.186 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 271f6569-a8f6-43a3-ac98-511eff77c426 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:48:28 np0005629333 nova_compute[244014]: 2026-02-25 12:48:28.187 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 07:48:28 np0005629333 nova_compute[244014]: 2026-02-25 12:48:28.187 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
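[editor's note] The final view reconciles exactly with the three per-instance allocations logged just above plus the 512 MB reservation from the inventory entry:

    instances = 3                        # 848fd033…, 95730650…, 271f6569…
    used_vcpus = instances * 1           # 3, matches used_vcpus=3
    used_disk_gb = instances * 1         # 3, matches used_disk=3GB
    used_ram_mb = 512 + instances * 128  # 896, matches used_ram=896MB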
Feb 25 07:48:28 np0005629333 nova_compute[244014]: 2026-02-25 12:48:28.197 244018 DEBUG oslo_concurrency.lockutils [None req-cf18e1e8-2066-46b4-aa22-5bcf6a55c385 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "9c11f17d-5dba-4ece-8340-1c4ff0939294" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:28 np0005629333 nova_compute[244014]: 2026-02-25 12:48:28.280 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:48:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2037: 305 pgs: 305 active+clean; 366 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 693 KiB/s rd, 4.3 MiB/s wr, 172 op/s
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.570466) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023708570520, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 1128, "num_deletes": 250, "total_data_size": 1619031, "memory_usage": 1645000, "flush_reason": "Manual Compaction"}
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023708577734, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 979369, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42726, "largest_seqno": 43853, "table_properties": {"data_size": 975144, "index_size": 1749, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11341, "raw_average_key_size": 20, "raw_value_size": 965968, "raw_average_value_size": 1765, "num_data_blocks": 79, "num_entries": 547, "num_filter_entries": 547, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772023603, "oldest_key_time": 1772023603, "file_creation_time": 1772023708, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 7317 microseconds, and 3830 cpu microseconds.
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.577782) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 979369 bytes OK
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.577805) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.579346) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.579367) EVENT_LOG_v1 {"time_micros": 1772023708579361, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.579385) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 1613813, prev total WAL file size 1613813, number of live WAL files 2.
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.580480) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353033' seq:72057594037927935, type:22 .. '6D6772737461740031373534' seq:0, type:0; will stop at (end)
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(956KB)], [98(9811KB)]
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023708580566, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 11026817, "oldest_snapshot_seqno": -1}
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 6505 keys, 8294781 bytes, temperature: kUnknown
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023708633352, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 8294781, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8252458, "index_size": 24933, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 168761, "raw_average_key_size": 25, "raw_value_size": 8137247, "raw_average_value_size": 1250, "num_data_blocks": 975, "num_entries": 6505, "num_filter_entries": 6505, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772023708, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.633600) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 8294781 bytes
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.635119) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 208.6 rd, 156.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 9.6 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(19.7) write-amplify(8.5) OK, records in: 6971, records dropped: 466 output_compression: NoCompression
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.635140) EVENT_LOG_v1 {"time_micros": 1772023708635130, "job": 58, "event": "compaction_finished", "compaction_time_micros": 52869, "compaction_time_cpu_micros": 29782, "output_level": 6, "num_output_files": 1, "total_output_size": 8294781, "num_input_records": 6971, "num_output_records": 6505, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
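The amplification figures in the compaction summary above follow directly from the two table_file_creation events: RocksDB reports them relative to the freshly flushed L0 bytes. A quick check of the arithmetic:

    # JOB 58: L0 input file #100 is 979,369 bytes, total compaction input is
    # 11,026,817 bytes, and output file #101 is 8,294,781 bytes.
    l0_in, total_in, out = 979_369, 11_026_817, 8_294_781
    print(out / l0_in)               # ~8.47  -> write-amplify(8.5)
    print((total_in + out) / l0_in)  # ~19.73 -> read-write-amplify(19.7)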
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023708635456, "job": 58, "event": "table_file_deletion", "file_number": 100}
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023708636638, "job": 58, "event": "table_file_deletion", "file_number": 98}
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.580343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.636807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.636814) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.636818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.636821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:48:28.636824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
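The EVENT_LOG_v1 records above are plain JSON appended to the journal line, which makes the flush/compaction history easy to mine. A minimal sketch; the file name is a placeholder for any saved journal excerpt:

    # Yield each rocksdb EVENT_LOG_v1 payload as a dict.
    import json
    import re

    EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})$")

    def iter_rocksdb_events(lines):
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                yield json.loads(m.group(1))

    for ev in iter_rocksdb_events(open("ceph-mon.log")):
        if ev.get("event") == "compaction_finished":
            print(ev["job"], ev["compaction_time_micros"], ev["lsm_state"])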
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:48:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3337761892' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:48:28 np0005629333 nova_compute[244014]: 2026-02-25 12:48:28.854 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:48:28 np0005629333 nova_compute[244014]: 2026-02-25 12:48:28.864 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:48:28 np0005629333 nova_compute[244014]: 2026-02-25 12:48:28.884 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:48:28 np0005629333 nova_compute[244014]: 2026-02-25 12:48:28.928 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 07:48:28 np0005629333 nova_compute[244014]: 2026-02-25 12:48:28.929 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
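Placement turns the inventory nova reports above into schedulable capacity as roughly (total - reserved) * allocation_ratio, which is why 3 vCPUs allocated out of a "total" of 8 still leaves ample headroom. A simplified sketch with the numbers from this report:

    # Effective capacity per resource class (simplified placement math).
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2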
Feb 25 07:48:29 np0005629333 podman[352464]: 2026-02-25 12:48:29.723527439 +0000 UTC m=+0.059337857 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent)
Feb 25 07:48:29 np0005629333 podman[352465]: 2026-02-25 12:48:29.759918466 +0000 UTC m=+0.096438864 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
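The two health_status=healthy records come from podman periodically running each container's configured test ('/openstack/healthcheck' per the config_data above). The same check can be triggered by hand; a sketch assuming rootful podman on this host:

    # 'podman healthcheck run' exits 0 when the container's check passes.
    import subprocess

    def healthy(container):
        return subprocess.run(
            ["podman", "healthcheck", "run", container]).returncode == 0

    print(healthy("ovn_metadata_agent"), healthy("ovn_controller"))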
Feb 25 07:48:29 np0005629333 nova_compute[244014]: 2026-02-25 12:48:29.926 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:48:29 np0005629333 nova_compute[244014]: 2026-02-25 12:48:29.927 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 07:48:29 np0005629333 nova_compute[244014]: 2026-02-25 12:48:29.927 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 07:48:29 np0005629333 nova_compute[244014]: 2026-02-25 12:48:29.957 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.248 244018 DEBUG nova.compute.manager [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-vif-plugged-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.249 244018 DEBUG oslo_concurrency.lockutils [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.249 244018 DEBUG oslo_concurrency.lockutils [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.250 244018 DEBUG oslo_concurrency.lockutils [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.250 244018 DEBUG nova.compute.manager [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] No waiting events found dispatching network-vif-plugged-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.251 244018 WARNING nova.compute.manager [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received unexpected event network-vif-plugged-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.251 244018 DEBUG nova.compute.manager [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-vif-unplugged-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.252 244018 DEBUG oslo_concurrency.lockutils [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.252 244018 DEBUG oslo_concurrency.lockutils [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.252 244018 DEBUG oslo_concurrency.lockutils [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.253 244018 DEBUG nova.compute.manager [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] No waiting events found dispatching network-vif-unplugged-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.253 244018 DEBUG nova.compute.manager [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-vif-unplugged-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.254 244018 DEBUG nova.compute.manager [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-vif-plugged-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.254 244018 DEBUG oslo_concurrency.lockutils [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.255 244018 DEBUG oslo_concurrency.lockutils [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.255 244018 DEBUG oslo_concurrency.lockutils [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.256 244018 DEBUG nova.compute.manager [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] No waiting events found dispatching network-vif-plugged-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.256 244018 WARNING nova.compute.manager [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received unexpected event network-vif-plugged-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.256 244018 DEBUG nova.compute.manager [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-vif-deleted-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.257 244018 INFO nova.compute.manager [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Neutron deleted interface 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c; detaching it from the instance and deleting it from the info cache#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.257 244018 DEBUG nova.network.neutron [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Updating instance_info_cache with network_info: [{"id": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "address": "fa:16:3e:46:c2:9f", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dbf7aaf-2d", "ovs_interfaceid": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.287 244018 DEBUG nova.compute.manager [req-50c4a1df-b477-4aaa-8851-46d423d8eb5e req-f3b5c94c-663f-4305-9189-61c06a388abc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Detach interface failed, port_id=3de24fa3-5df5-470c-98d2-1a60e0ebfc0c, reason: Instance 271f6569-a8f6-43a3-ac98-511eff77c426 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
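The acquire/pop/release churn above is nova's external-event plumbing: a waiter registered under (instance, event name) is completed when neutron's notification arrives, while an event with no registered waiter falls through as "No waiting events found" (and, for a deleting instance, the "unexpected event" warning). A simplified model of the pattern, not nova's actual code:

    import threading
    from collections import defaultdict

    class InstanceEvents:
        def __init__(self):
            self._events = defaultdict(dict)  # uuid -> {event name: Event}
            self._lock = threading.Lock()     # cf. the "...-events" lock above

        def prepare(self, uuid, name):
            # A task expecting an event registers a waiter first.
            with self._lock:
                ev = self._events[uuid][name] = threading.Event()
            return ev

        def pop(self, uuid, name):
            # Arriving events pop the waiter, or None if nobody registered.
            with self._lock:
                return self._events[uuid].pop(name, None)

    events = InstanceEvents()
    print(events.pop("271f6569-a8f6-43a3-ac98-511eff77c426",
                     "network-vif-plugged-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c"))
    # None -> "No waiting events found"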
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.288 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.300 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.300 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.300 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.301 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid 848fd033-0ebb-460a-a8a0-56583fa5f481 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:48:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 305 active+clean; 366 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 258 KiB/s rd, 1.1 MiB/s wr, 94 op/s
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.646 244018 DEBUG nova.network.neutron [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Updated VIF entry in instance network info cache for port 3de24fa3-5df5-470c-98d2-1a60e0ebfc0c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.647 244018 DEBUG nova.network.neutron [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Updating instance_info_cache with network_info: [{"id": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "address": "fa:16:3e:c8:6b:63", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3de24fa3-5d", "ovs_interfaceid": "3de24fa3-5df5-470c-98d2-1a60e0ebfc0c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "address": "fa:16:3e:46:c2:9f", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe46:c29f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0dbf7aaf-2d", "ovs_interfaceid": "0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.680 244018 DEBUG nova.network.neutron [-] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
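The network_info payloads logged above are valid JSON, so port IDs and fixed IPs can be recovered straight from the journal text. A minimal sketch:

    # Extract (port id, [fixed ips]) pairs from an "Updating
    # instance_info_cache with network_info: [...]" line.
    import json
    import re

    PAYLOAD_RE = re.compile(
        r"network_info: (\[.*\]) update_instance_cache_with_nw_info")

    def ports_from_log_line(line):
        m = PAYLOAD_RE.search(line)
        if not m:
            return []
        return [(vif["id"],
                 [ip["address"]
                  for subnet in vif["network"]["subnets"]
                  for ip in subnet["ips"]])
                for vif in json.loads(m.group(1))]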
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.683 244018 DEBUG oslo_concurrency.lockutils [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-271f6569-a8f6-43a3-ac98-511eff77c426" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.684 244018 DEBUG nova.compute.manager [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Received event network-vif-deleted-cbf23880-21db-4752-be67-0abdb0153c2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.684 244018 DEBUG nova.compute.manager [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-vif-unplugged-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.684 244018 DEBUG oslo_concurrency.lockutils [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.684 244018 DEBUG oslo_concurrency.lockutils [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.685 244018 DEBUG oslo_concurrency.lockutils [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.685 244018 DEBUG nova.compute.manager [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] No waiting events found dispatching network-vif-unplugged-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.685 244018 DEBUG nova.compute.manager [req-aa2767f8-8f5e-4ebd-886f-a297ecc1196e req-ce64e5cb-7ce7-4c4f-95be-2d53d5d68860 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-vif-unplugged-3de24fa3-5df5-470c-98d2-1a60e0ebfc0c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.700 244018 INFO nova.compute.manager [-] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Took 2.75 seconds to deallocate network for instance.#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.773 244018 DEBUG oslo_concurrency.lockutils [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.774 244018 DEBUG oslo_concurrency.lockutils [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:30 np0005629333 nova_compute[244014]: 2026-02-25 12:48:30.895 244018 DEBUG oslo_concurrency.processutils [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:48:30 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:30Z|01253|binding|INFO|Releasing lport 4e1af2d0-780b-4ffd-93a0-445de7a22322 from this chassis (sb_readonly=0)
Feb 25 07:48:30 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:30Z|01254|binding|INFO|Releasing lport 8372e6c6-fbbc-48a7-be95-d95e1d2ad95a from this chassis (sb_readonly=0)
Feb 25 07:48:30 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:30Z|01255|binding|INFO|Releasing lport f0fc68ea-8ea9-4b0b-9e36-f54d3bc2f6ca from this chassis (sb_readonly=0)
Feb 25 07:48:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:48:31
Feb 25 07:48:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:48:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:48:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'volumes', 'images', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'backups']
Feb 25 07:48:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
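"prepared 0/10 upmap changes" means the balancer evaluated up to 10 upmap moves across the listed pools and found the PGs already even, so no plan was applied this round. The same state can be queried from the CLI; a sketch assuming admin access to the cluster (key names in the JSON may vary by Ceph release):

    import json
    import subprocess

    status = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "--format", "json"]))
    print(status.get("active"), status.get("mode"))  # e.g. True upmap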
Feb 25 07:48:31 np0005629333 nova_compute[244014]: 2026-02-25 12:48:31.008 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:48:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2638342429' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:48:31 np0005629333 nova_compute[244014]: 2026-02-25 12:48:31.479 244018 DEBUG oslo_concurrency.processutils [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:48:31 np0005629333 nova_compute[244014]: 2026-02-25 12:48:31.486 244018 DEBUG nova.compute.provider_tree [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:48:31 np0005629333 nova_compute[244014]: 2026-02-25 12:48:31.504 244018 DEBUG nova.scheduler.client.report [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:48:31 np0005629333 nova_compute[244014]: 2026-02-25 12:48:31.529 244018 DEBUG oslo_concurrency.lockutils [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:31 np0005629333 nova_compute[244014]: 2026-02-25 12:48:31.585 244018 INFO nova.scheduler.client.report [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance 271f6569-a8f6-43a3-ac98-511eff77c426#033[00m
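"Deleted allocations for instance" corresponds to a single call against the placement REST API, DELETE /allocations/{consumer_uuid}, which returns 204 on success. A hedged sketch; the endpoint URL and token handling here are illustrative, as nova uses its own keystone session:

    import requests

    def delete_allocations(placement_url, token, consumer_uuid):
        resp = requests.delete(
            f"{placement_url}/allocations/{consumer_uuid}",
            headers={"X-Auth-Token": token,
                     "OpenStack-API-Version": "placement 1.28"})
        resp.raise_for_status()  # expect 204 No Content

    # delete_allocations("http://placement.example.com:8778", TOKEN,
    #                    "271f6569-a8f6-43a3-ac98-511eff77c426")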
Feb 25 07:48:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:48:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:48:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:48:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:48:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:48:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:48:31 np0005629333 nova_compute[244014]: 2026-02-25 12:48:31.687 244018 DEBUG oslo_concurrency.lockutils [None req-5c48acb5-68e6-4b30-8ba9-8b19ffc6c33b f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "271f6569-a8f6-43a3-ac98-511eff77c426" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:48:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:48:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:48:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:48:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:48:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:48:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:48:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:48:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:48:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
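The rbd_support lines above are the mgr module reloading its per-pool trash-purge and mirror-snapshot schedules (one load_schedules pass per pool: vms, volumes, backups, images). Assuming a recent Ceph with the schedule CLI available, the loaded schedules can be listed; a sketch:

    # List trash-purge and mirror-snapshot schedules recursively.
    import subprocess

    for cmd in (["rbd", "trash", "purge", "schedule", "ls", "-R"],
                ["rbd", "mirror", "snapshot", "schedule", "ls", "-R"]):
        print(subprocess.run(cmd, capture_output=True, text=True).stdout)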
Feb 25 07:48:32 np0005629333 nova_compute[244014]: 2026-02-25 12:48:32.382 244018 DEBUG nova.compute.manager [req-d2182a6b-24e2-40d6-8115-b53c410ae474 req-93be17be-6be2-45ed-8093-c442898b135d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Received event network-vif-deleted-0dbf7aaf-2d70-42bc-b1fa-ceb87d1e5934 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:48:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2039: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 268 KiB/s rd, 1.1 MiB/s wr, 109 op/s
Feb 25 07:48:32 np0005629333 nova_compute[244014]: 2026-02-25 12:48:32.589 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:32 np0005629333 nova_compute[244014]: 2026-02-25 12:48:32.615 244018 DEBUG nova.compute.manager [req-83f9d7bc-3c1b-4b15-b7a7-8102e8283bf6 req-19d4d245-c4b5-4f8c-b9b4-e2a2e455c6ee 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Received event network-changed-0660ccb7-a986-45dd-8aa8-e10ddd71144a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:48:32 np0005629333 nova_compute[244014]: 2026-02-25 12:48:32.615 244018 DEBUG nova.compute.manager [req-83f9d7bc-3c1b-4b15-b7a7-8102e8283bf6 req-19d4d245-c4b5-4f8c-b9b4-e2a2e455c6ee 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Refreshing instance network info cache due to event network-changed-0660ccb7-a986-45dd-8aa8-e10ddd71144a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:48:32 np0005629333 nova_compute[244014]: 2026-02-25 12:48:32.616 244018 DEBUG oslo_concurrency.lockutils [req-83f9d7bc-3c1b-4b15-b7a7-8102e8283bf6 req-19d4d245-c4b5-4f8c-b9b4-e2a2e455c6ee 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-95730650-36ac-4eee-8b22-9ea3f01d82d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:48:32 np0005629333 nova_compute[244014]: 2026-02-25 12:48:32.616 244018 DEBUG oslo_concurrency.lockutils [req-83f9d7bc-3c1b-4b15-b7a7-8102e8283bf6 req-19d4d245-c4b5-4f8c-b9b4-e2a2e455c6ee 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-95730650-36ac-4eee-8b22-9ea3f01d82d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:48:32 np0005629333 nova_compute[244014]: 2026-02-25 12:48:32.616 244018 DEBUG nova.network.neutron [req-83f9d7bc-3c1b-4b15-b7a7-8102e8283bf6 req-19d4d245-c4b5-4f8c-b9b4-e2a2e455c6ee 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Refreshing network info cache for port 0660ccb7-a986-45dd-8aa8-e10ddd71144a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:48:32 np0005629333 nova_compute[244014]: 2026-02-25 12:48:32.702 244018 DEBUG oslo_concurrency.lockutils [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "95730650-36ac-4eee-8b22-9ea3f01d82d1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:32 np0005629333 nova_compute[244014]: 2026-02-25 12:48:32.703 244018 DEBUG oslo_concurrency.lockutils [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:32 np0005629333 nova_compute[244014]: 2026-02-25 12:48:32.704 244018 DEBUG oslo_concurrency.lockutils [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:32 np0005629333 nova_compute[244014]: 2026-02-25 12:48:32.704 244018 DEBUG oslo_concurrency.lockutils [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:32 np0005629333 nova_compute[244014]: 2026-02-25 12:48:32.705 244018 DEBUG oslo_concurrency.lockutils [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:32 np0005629333 nova_compute[244014]: 2026-02-25 12:48:32.707 244018 INFO nova.compute.manager [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Terminating instance#033[00m
Feb 25 07:48:32 np0005629333 nova_compute[244014]: 2026-02-25 12:48:32.709 244018 DEBUG nova.compute.manager [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:48:32 np0005629333 kernel: tap0660ccb7-a9 (unregistering): left promiscuous mode
Feb 25 07:48:32 np0005629333 NetworkManager[49836]: <info>  [1772023712.7564] device (tap0660ccb7-a9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:48:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:32Z|01256|binding|INFO|Releasing lport 0660ccb7-a986-45dd-8aa8-e10ddd71144a from this chassis (sb_readonly=0)
Feb 25 07:48:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:32Z|01257|binding|INFO|Setting lport 0660ccb7-a986-45dd-8aa8-e10ddd71144a down in Southbound
Feb 25 07:48:32 np0005629333 nova_compute[244014]: 2026-02-25 12:48:32.765 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:32Z|01258|binding|INFO|Removing iface tap0660ccb7-a9 ovn-installed in OVS
Feb 25 07:48:32 np0005629333 nova_compute[244014]: 2026-02-25 12:48:32.769 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:32.778 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:b0:7a 10.100.0.12'], port_security=['fa:16:3e:cb:b0:7a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '95730650-36ac-4eee-8b22-9ea3f01d82d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b261df4d-3b33-4344-9c3c-a73feb8773db', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6d838fa7-2ae5-457e-b70a-ef0e132d7e89', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af3ba9a4-232f-4585-95fc-f83215abb671, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=0660ccb7-a986-45dd-8aa8-e10ddd71144a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:48:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:32.780 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 0660ccb7-a986-45dd-8aa8-e10ddd71144a in datapath b261df4d-3b33-4344-9c3c-a73feb8773db unbound from our chassis#033[00m
Feb 25 07:48:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:32.781 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b261df4d-3b33-4344-9c3c-a73feb8773db, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:48:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:32.783 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7a641000-62d9-49f2-8a16-796fadfb2c36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:32 np0005629333 nova_compute[244014]: 2026-02-25 12:48:32.784 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:32.785 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db namespace which is not needed anymore#033[00m
Feb 25 07:48:32 np0005629333 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000076.scope: Deactivated successfully.
Feb 25 07:48:32 np0005629333 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000076.scope: Consumed 14.218s CPU time.
Feb 25 07:48:32 np0005629333 systemd-machined[210048]: Machine qemu-150-instance-00000076 terminated.
Feb 25 07:48:32 np0005629333 nova_compute[244014]: 2026-02-25 12:48:32.946 244018 INFO nova.virt.libvirt.driver [-] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Instance destroyed successfully.#033[00m
Feb 25 07:48:32 np0005629333 nova_compute[244014]: 2026-02-25 12:48:32.946 244018 DEBUG nova.objects.instance [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'resources' on Instance uuid 95730650-36ac-4eee-8b22-9ea3f01d82d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:48:32 np0005629333 neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db[350592]: [NOTICE]   (350617) : haproxy version is 2.8.14-c23fe91
Feb 25 07:48:32 np0005629333 neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db[350592]: [NOTICE]   (350617) : path to executable is /usr/sbin/haproxy
Feb 25 07:48:32 np0005629333 neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db[350592]: [WARNING]  (350617) : Exiting Master process...
Feb 25 07:48:32 np0005629333 neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db[350592]: [WARNING]  (350617) : Exiting Master process...
Feb 25 07:48:32 np0005629333 neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db[350592]: [ALERT]    (350617) : Current worker (350633) exited with code 143 (Terminated)
Feb 25 07:48:32 np0005629333 neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db[350592]: [WARNING]  (350617) : All workers exited. Exiting... (0)
Feb 25 07:48:32 np0005629333 systemd[1]: libpod-c4bec2971f5a5d44bfda5fbe5b0886aea5c0ed8770d17a0dd446c42b323b07bc.scope: Deactivated successfully.
Feb 25 07:48:32 np0005629333 nova_compute[244014]: 2026-02-25 12:48:32.963 244018 DEBUG nova.virt.libvirt.vif [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:47:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-404307455',display_name='tempest-TestNetworkBasicOps-server-404307455',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-404307455',id=118,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFFlaz+1JiPozqOMtDFDKtIjDHy+pR4Dk92wrSmsbZfFxI7F561H9M9jMhpR4UoZcnqUowU/hewm+5foBFGhY3SM2rprh5R3QDm+X6unzTrRIJHcQVeiyxVtyOmfm3aC3A==',key_name='tempest-TestNetworkBasicOps-1001785432',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:47:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-x2oa88n1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:47:29Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=95730650-36ac-4eee-8b22-9ea3f01d82d1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "address": "fa:16:3e:cb:b0:7a", "network": {"id": "b261df4d-3b33-4344-9c3c-a73feb8773db", "bridge": "br-int", "label": "tempest-network-smoke--1044834333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0660ccb7-a9", "ovs_interfaceid": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:48:32 np0005629333 podman[352556]: 2026-02-25 12:48:32.966990793 +0000 UTC m=+0.053833391 container died c4bec2971f5a5d44bfda5fbe5b0886aea5c0ed8770d17a0dd446c42b323b07bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:48:32 np0005629333 nova_compute[244014]: 2026-02-25 12:48:32.964 244018 DEBUG nova.network.os_vif_util [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "address": "fa:16:3e:cb:b0:7a", "network": {"id": "b261df4d-3b33-4344-9c3c-a73feb8773db", "bridge": "br-int", "label": "tempest-network-smoke--1044834333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0660ccb7-a9", "ovs_interfaceid": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:48:32 np0005629333 nova_compute[244014]: 2026-02-25 12:48:32.967 244018 DEBUG nova.network.os_vif_util [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cb:b0:7a,bridge_name='br-int',has_traffic_filtering=True,id=0660ccb7-a986-45dd-8aa8-e10ddd71144a,network=Network(b261df4d-3b33-4344-9c3c-a73feb8773db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0660ccb7-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:48:32 np0005629333 nova_compute[244014]: 2026-02-25 12:48:32.968 244018 DEBUG os_vif [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:b0:7a,bridge_name='br-int',has_traffic_filtering=True,id=0660ccb7-a986-45dd-8aa8-e10ddd71144a,network=Network(b261df4d-3b33-4344-9c3c-a73feb8773db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0660ccb7-a9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:48:32 np0005629333 nova_compute[244014]: 2026-02-25 12:48:32.971 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:32 np0005629333 nova_compute[244014]: 2026-02-25 12:48:32.971 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0660ccb7-a9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:48:32 np0005629333 nova_compute[244014]: 2026-02-25 12:48:32.974 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:32 np0005629333 nova_compute[244014]: 2026-02-25 12:48:32.975 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:32 np0005629333 nova_compute[244014]: 2026-02-25 12:48:32.977 244018 INFO os_vif [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:b0:7a,bridge_name='br-int',has_traffic_filtering=True,id=0660ccb7-a986-45dd-8aa8-e10ddd71144a,network=Network(b261df4d-3b33-4344-9c3c-a73feb8773db),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0660ccb7-a9')#033[00m
Feb 25 07:48:32 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c4bec2971f5a5d44bfda5fbe5b0886aea5c0ed8770d17a0dd446c42b323b07bc-userdata-shm.mount: Deactivated successfully.
Feb 25 07:48:32 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5086818508e5fa7e94bd5cba9a5524707dcb64bf7ba443e8eab59b7dad15f1a4-merged.mount: Deactivated successfully.
Feb 25 07:48:33 np0005629333 podman[352556]: 2026-02-25 12:48:33.00830168 +0000 UTC m=+0.095144278 container cleanup c4bec2971f5a5d44bfda5fbe5b0886aea5c0ed8770d17a0dd446c42b323b07bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:48:33 np0005629333 systemd[1]: libpod-conmon-c4bec2971f5a5d44bfda5fbe5b0886aea5c0ed8770d17a0dd446c42b323b07bc.scope: Deactivated successfully.
Feb 25 07:48:33 np0005629333 podman[352614]: 2026-02-25 12:48:33.100170824 +0000 UTC m=+0.058607606 container remove c4bec2971f5a5d44bfda5fbe5b0886aea5c0ed8770d17a0dd446c42b323b07bc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 07:48:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:33.105 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[333f4730-8b1b-429e-960b-f5b982895dbd]: (4, ('Wed Feb 25 12:48:32 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db (c4bec2971f5a5d44bfda5fbe5b0886aea5c0ed8770d17a0dd446c42b323b07bc)\nc4bec2971f5a5d44bfda5fbe5b0886aea5c0ed8770d17a0dd446c42b323b07bc\nWed Feb 25 12:48:33 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db (c4bec2971f5a5d44bfda5fbe5b0886aea5c0ed8770d17a0dd446c42b323b07bc)\nc4bec2971f5a5d44bfda5fbe5b0886aea5c0ed8770d17a0dd446c42b323b07bc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:33.107 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[58ce5851-70b1-49b4-908d-0c2cb9446105]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:33.109 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb261df4d-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:48:33 np0005629333 nova_compute[244014]: 2026-02-25 12:48:33.110 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:33 np0005629333 kernel: tapb261df4d-30: left promiscuous mode
Feb 25 07:48:33 np0005629333 nova_compute[244014]: 2026-02-25 12:48:33.118 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:33 np0005629333 nova_compute[244014]: 2026-02-25 12:48:33.120 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:33.124 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e35b0d54-517b-4653-b359-9cd8c5deda17]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:33.141 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[15696970-b5a7-4309-9088-2be343ac09f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:33.144 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d4d19c27-5f8c-429d-a18a-9c4b069eae05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:33.160 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[571f6d10-302f-416b-be98-96ce44ef4488]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561795, 'reachable_time': 44386, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352630, 'error': None, 'target': 'ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:33.164 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b261df4d-3b33-4344-9c3c-a73feb8773db deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:48:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:33.164 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[35b005d0-9da0-4acb-b002-5b27e568336d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:33 np0005629333 systemd[1]: run-netns-ovnmeta\x2db261df4d\x2d3b33\x2d4344\x2d9c3c\x2da73feb8773db.mount: Deactivated successfully.
Feb 25 07:48:33 np0005629333 nova_compute[244014]: 2026-02-25 12:48:33.260 244018 INFO nova.virt.libvirt.driver [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Deleting instance files /var/lib/nova/instances/95730650-36ac-4eee-8b22-9ea3f01d82d1_del#033[00m
Feb 25 07:48:33 np0005629333 nova_compute[244014]: 2026-02-25 12:48:33.261 244018 INFO nova.virt.libvirt.driver [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Deletion of /var/lib/nova/instances/95730650-36ac-4eee-8b22-9ea3f01d82d1_del complete#033[00m
Feb 25 07:48:33 np0005629333 nova_compute[244014]: 2026-02-25 12:48:33.326 244018 INFO nova.compute.manager [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Took 0.62 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:48:33 np0005629333 nova_compute[244014]: 2026-02-25 12:48:33.327 244018 DEBUG oslo.service.loopingcall [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:48:33 np0005629333 nova_compute[244014]: 2026-02-25 12:48:33.327 244018 DEBUG nova.compute.manager [-] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:48:33 np0005629333 nova_compute[244014]: 2026-02-25 12:48:33.327 244018 DEBUG nova.network.neutron [-] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:48:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.025 244018 DEBUG nova.network.neutron [req-83f9d7bc-3c1b-4b15-b7a7-8102e8283bf6 req-19d4d245-c4b5-4f8c-b9b4-e2a2e455c6ee 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Updated VIF entry in instance network info cache for port 0660ccb7-a986-45dd-8aa8-e10ddd71144a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.026 244018 DEBUG nova.network.neutron [req-83f9d7bc-3c1b-4b15-b7a7-8102e8283bf6 req-19d4d245-c4b5-4f8c-b9b4-e2a2e455c6ee 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Updating instance_info_cache with network_info: [{"id": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "address": "fa:16:3e:cb:b0:7a", "network": {"id": "b261df4d-3b33-4344-9c3c-a73feb8773db", "bridge": "br-int", "label": "tempest-network-smoke--1044834333", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0660ccb7-a9", "ovs_interfaceid": "0660ccb7-a986-45dd-8aa8-e10ddd71144a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.056 244018 DEBUG oslo_concurrency.lockutils [req-83f9d7bc-3c1b-4b15-b7a7-8102e8283bf6 req-19d4d245-c4b5-4f8c-b9b4-e2a2e455c6ee 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-95730650-36ac-4eee-8b22-9ea3f01d82d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.152 244018 DEBUG oslo_concurrency.lockutils [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "848fd033-0ebb-460a-a8a0-56583fa5f481" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.153 244018 DEBUG oslo_concurrency.lockutils [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.153 244018 DEBUG oslo_concurrency.lockutils [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.154 244018 DEBUG oslo_concurrency.lockutils [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.154 244018 DEBUG oslo_concurrency.lockutils [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.155 244018 INFO nova.compute.manager [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Terminating instance#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.157 244018 DEBUG nova.compute.manager [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:48:34 np0005629333 kernel: tap0358e18d-8c (unregistering): left promiscuous mode
Feb 25 07:48:34 np0005629333 NetworkManager[49836]: <info>  [1772023714.2205] device (tap0358e18d-8c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.226 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:34Z|01259|binding|INFO|Releasing lport 0358e18d-8ce8-43a7-a8b2-11193708891a from this chassis (sb_readonly=0)
Feb 25 07:48:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:34Z|01260|binding|INFO|Setting lport 0358e18d-8ce8-43a7-a8b2-11193708891a down in Southbound
Feb 25 07:48:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:34Z|01261|binding|INFO|Removing iface tap0358e18d-8c ovn-installed in OVS
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.230 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.236 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:1f:00 10.100.0.11'], port_security=['fa:16:3e:fd:1f:00 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '848fd033-0ebb-460a-a8a0-56583fa5f481', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e663e6c0-5fd2-4899-a0e9-500029428c03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=560bd45a-3180-4994-88ce-74a6fe57d885, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=0358e18d-8ce8-43a7-a8b2-11193708891a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.236 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.238 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 0358e18d-8ce8-43a7-a8b2-11193708891a in datapath d0b6d114-fcb4-4a25-988c-1ee301ef0419 unbound from our chassis#033[00m
Feb 25 07:48:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.239 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d0b6d114-fcb4-4a25-988c-1ee301ef0419, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:48:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.240 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f831975a-eee3-4afc-b827-492d8243d3f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.240 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419 namespace which is not needed anymore#033[00m
Feb 25 07:48:34 np0005629333 kernel: tapebd67787-8a (unregistering): left promiscuous mode
Feb 25 07:48:34 np0005629333 NetworkManager[49836]: <info>  [1772023714.2489] device (tapebd67787-8a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.251 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:34Z|01262|binding|INFO|Releasing lport ebd67787-8af9-4a7f-8b38-6d18daff8ff3 from this chassis (sb_readonly=0)
Feb 25 07:48:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:34Z|01263|binding|INFO|Setting lport ebd67787-8af9-4a7f-8b38-6d18daff8ff3 down in Southbound
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.261 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:48:34Z|01264|binding|INFO|Removing iface tapebd67787-8a ovn-installed in OVS
Feb 25 07:48:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.268 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:ea:66 2001:db8:0:1:f816:3eff:fe8e:ea66 2001:db8::f816:3eff:fe8e:ea66'], port_security=['fa:16:3e:8e:ea:66 2001:db8:0:1:f816:3eff:fe8e:ea66 2001:db8::f816:3eff:fe8e:ea66'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe8e:ea66/64 2001:db8::f816:3eff:fe8e:ea66/64', 'neutron:device_id': '848fd033-0ebb-460a-a8a0-56583fa5f481', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5c482202-8994-4033-a0a9-167d92a9e301', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e663e6c0-5fd2-4899-a0e9-500029428c03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06544f4e-1579-4f9d-8faa-1248478e7fbc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=ebd67787-8af9-4a7f-8b38-6d18daff8ff3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.273 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:34 np0005629333 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000075.scope: Deactivated successfully.
Feb 25 07:48:34 np0005629333 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000075.scope: Consumed 15.227s CPU time.
Feb 25 07:48:34 np0005629333 systemd-machined[210048]: Machine qemu-149-instance-00000075 terminated.
Feb 25 07:48:34 np0005629333 NetworkManager[49836]: <info>  [1772023714.3817] manager: (tap0358e18d-8c): new Tun device (/org/freedesktop/NetworkManager/Devices/522)
Feb 25 07:48:34 np0005629333 neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419[350077]: [NOTICE]   (350081) : haproxy version is 2.8.14-c23fe91
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.388 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:34 np0005629333 neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419[350077]: [NOTICE]   (350081) : path to executable is /usr/sbin/haproxy
Feb 25 07:48:34 np0005629333 neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419[350077]: [WARNING]  (350081) : Exiting Master process...
Feb 25 07:48:34 np0005629333 neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419[350077]: [WARNING]  (350081) : Exiting Master process...
Feb 25 07:48:34 np0005629333 neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419[350077]: [ALERT]    (350081) : Current worker (350083) exited with code 143 (Terminated)
Feb 25 07:48:34 np0005629333 neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419[350077]: [WARNING]  (350081) : All workers exited. Exiting... (0)
Feb 25 07:48:34 np0005629333 systemd[1]: libpod-a7aee791e988a0c55e6f0af6c7e94838ecc75f9d23ad6b6c79c31b227fe683a9.scope: Deactivated successfully.
Feb 25 07:48:34 np0005629333 NetworkManager[49836]: <info>  [1772023714.3961] manager: (tapebd67787-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/523)
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.400 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:34 np0005629333 podman[352657]: 2026-02-25 12:48:34.400998594 +0000 UTC m=+0.069430392 container died a7aee791e988a0c55e6f0af6c7e94838ecc75f9d23ad6b6c79c31b227fe683a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.412 244018 INFO nova.virt.libvirt.driver [-] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Instance destroyed successfully.#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.414 244018 DEBUG nova.objects.instance [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid 848fd033-0ebb-460a-a8a0-56583fa5f481 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:48:34 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a7aee791e988a0c55e6f0af6c7e94838ecc75f9d23ad6b6c79c31b227fe683a9-userdata-shm.mount: Deactivated successfully.
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.430 244018 DEBUG nova.virt.libvirt.vif [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:47:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-289696819',display_name='tempest-TestGettingAddress-server-289696819',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-289696819',id=117,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB69Pr57xlpfPAIOfVFDA83wUDqkiKKz/q2j8EaAao/3/InA7y2axmKr5B1TGfyn/pk4XdqMYpcKTTAq9q/wSlLakw5QjJZuZ8Zodvba1czxOZQjigR2CJ2ZqyN+ZHKUOA==',key_name='tempest-TestGettingAddress-545180732',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:47:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-03p0oxty',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:47:24Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=848fd033-0ebb-460a-a8a0-56583fa5f481,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.432 244018 DEBUG nova.network.os_vif_util [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.433 244018 DEBUG nova.network.os_vif_util [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fd:1f:00,bridge_name='br-int',has_traffic_filtering=True,id=0358e18d-8ce8-43a7-a8b2-11193708891a,network=Network(d0b6d114-fcb4-4a25-988c-1ee301ef0419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0358e18d-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.434 244018 DEBUG os_vif [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fd:1f:00,bridge_name='br-int',has_traffic_filtering=True,id=0358e18d-8ce8-43a7-a8b2-11193708891a,network=Network(d0b6d114-fcb4-4a25-988c-1ee301ef0419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0358e18d-8c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:48:34 np0005629333 systemd[1]: var-lib-containers-storage-overlay-9a06c297626a192e338cd1e2dd123eba9e16164173bf46ec6839ab1a455ed742-merged.mount: Deactivated successfully.
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.436 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.437 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0358e18d-8c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:48:34 np0005629333 podman[352657]: 2026-02-25 12:48:34.437556086 +0000 UTC m=+0.105987884 container cleanup a7aee791e988a0c55e6f0af6c7e94838ecc75f9d23ad6b6c79c31b227fe683a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 25 07:48:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2040: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 94 KiB/s rd, 55 KiB/s wr, 66 op/s
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.513 244018 DEBUG nova.compute.manager [req-1dc85682-0a62-414c-ba87-d0b27c314d3c req-eef9c2e4-6c2d-400a-b50e-10b6329bedb8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-changed-0358e18d-8ce8-43a7-a8b2-11193708891a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.514 244018 DEBUG nova.compute.manager [req-1dc85682-0a62-414c-ba87-d0b27c314d3c req-eef9c2e4-6c2d-400a-b50e-10b6329bedb8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Refreshing instance network info cache due to event network-changed-0358e18d-8ce8-43a7-a8b2-11193708891a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.514 244018 DEBUG oslo_concurrency.lockutils [req-1dc85682-0a62-414c-ba87-d0b27c314d3c req-eef9c2e4-6c2d-400a-b50e-10b6329bedb8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.514 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.517 244018 INFO os_vif [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fd:1f:00,bridge_name='br-int',has_traffic_filtering=True,id=0358e18d-8ce8-43a7-a8b2-11193708891a,network=Network(d0b6d114-fcb4-4a25-988c-1ee301ef0419),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0358e18d-8c')#033[00m
Feb 25 07:48:34 np0005629333 systemd[1]: libpod-conmon-a7aee791e988a0c55e6f0af6c7e94838ecc75f9d23ad6b6c79c31b227fe683a9.scope: Deactivated successfully.
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.518 244018 DEBUG nova.virt.libvirt.vif [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:47:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-289696819',display_name='tempest-TestGettingAddress-server-289696819',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-289696819',id=117,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB69Pr57xlpfPAIOfVFDA83wUDqkiKKz/q2j8EaAao/3/InA7y2axmKr5B1TGfyn/pk4XdqMYpcKTTAq9q/wSlLakw5QjJZuZ8Zodvba1czxOZQjigR2CJ2ZqyN+ZHKUOA==',key_name='tempest-TestGettingAddress-545180732',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:47:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-03p0oxty',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:47:24Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=848fd033-0ebb-460a-a8a0-56583fa5f481,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "address": "fa:16:3e:8e:ea:66", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebd67787-8a", "ovs_interfaceid": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.519 244018 DEBUG nova.network.os_vif_util [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "address": "fa:16:3e:8e:ea:66", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebd67787-8a", "ovs_interfaceid": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.520 244018 DEBUG nova.network.os_vif_util [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8e:ea:66,bridge_name='br-int',has_traffic_filtering=True,id=ebd67787-8af9-4a7f-8b38-6d18daff8ff3,network=Network(5c482202-8994-4033-a0a9-167d92a9e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebd67787-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.520 244018 DEBUG os_vif [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:ea:66,bridge_name='br-int',has_traffic_filtering=True,id=ebd67787-8af9-4a7f-8b38-6d18daff8ff3,network=Network(5c482202-8994-4033-a0a9-167d92a9e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebd67787-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
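
The 'Unplugging vif' call above is the os-vif library's public entry point, fed the VIFOpenVSwitch object produced by nova_to_osvif_vif two lines earlier. A reduced sketch of that library usage, with field values copied from the log (not nova's actual code path):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # load the registered plugins (ovs, linux_bridge, ...)

    ovs_vif = vif.VIFOpenVSwitch(
        id='ebd67787-8af9-4a7f-8b38-6d18daff8ff3',
        address='fa:16:3e:8e:ea:66',
        vif_name='tapebd67787-8a',
        bridge_name='br-int',
        plugin='ovs',
        network=network.Network(id='5c482202-8994-4033-a0a9-167d92a9e301'))
    info = instance_info.InstanceInfo(
        uuid='848fd033-0ebb-460a-a8a0-56583fa5f481',
        name='tempest-TestGettingAddress-server-289696819')

    os_vif.unplug(ovs_vif, info)  # dispatches to the plugin named in the VIF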
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.521 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.521 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebd67787-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.523 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.524 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.525 244018 INFO os_vif [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:ea:66,bridge_name='br-int',has_traffic_filtering=True,id=ebd67787-8af9-4a7f-8b38-6d18daff8ff3,network=Network(5c482202-8994-4033-a0a9-167d92a9e301),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebd67787-8a')#033[00m
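
The DelPortCommand in the transaction a few lines up is what ovsdbapp queues for a del_port() call; the ovs plugin drives it through the local ovsdb-server. Roughly the same operation issued standalone (the unix socket path is the conventional default, adjust for the deployment):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Same semantics as the logged txn: remove the port, tolerate absence.
    api.del_port('tapebd67787-8a', bridge='br-int',
                 if_exists=True).execute(check_error=True)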
Feb 25 07:48:34 np0005629333 podman[352703]: 2026-02-25 12:48:34.56803336 +0000 UTC m=+0.040499824 container remove a7aee791e988a0c55e6f0af6c7e94838ecc75f9d23ad6b6c79c31b227fe683a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 07:48:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.573 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dcb4d531-eb6c-4c7c-b512-d398f86cae30]: (4, ('Wed Feb 25 12:48:34 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419 (a7aee791e988a0c55e6f0af6c7e94838ecc75f9d23ad6b6c79c31b227fe683a9)\na7aee791e988a0c55e6f0af6c7e94838ecc75f9d23ad6b6c79c31b227fe683a9\nWed Feb 25 12:48:34 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419 (a7aee791e988a0c55e6f0af6c7e94838ecc75f9d23ad6b6c79c31b227fe683a9)\na7aee791e988a0c55e6f0af6c7e94838ecc75f9d23ad6b6c79c31b227fe683a9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.575 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bc8f969d-2349-4d37-833d-509850160d89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.576 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0b6d114-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.577 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:34 np0005629333 kernel: tapd0b6d114-f0: left promiscuous mode
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.579 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.581 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7be3dad5-a852-488b-b003-51c70e4dcca5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.585 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.593 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[86e9d39b-bc76-4f5b-963a-4bd9db834b34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.594 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[642264b7-6865-4a4f-b306-1d98076acf13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.609 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d8cc4eca-2136-48fa-a49c-6d9a9f48dd11]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561162, 'reachable_time': 16301, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352734, 'error': None, 'target': 'ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
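
The (4, [...]) privsep reply in the record above is a pyroute2-style RTM_NEWLINK dump of 'lo', taken inside the ovnmeta namespace just before teardown. A comparable dump can be produced directly, assuming the namespace still exists (flags=0 attaches without creating it):

    from pyroute2 import NetNS

    with NetNS('ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419', flags=0) as ns:
        for link in ns.get_links():  # one RTM_NEWLINK message per interface
            print(link['index'],
                  link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_OPERSTATE'),
                  link.get_attr('IFLA_MTU'))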
Feb 25 07:48:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.611 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:48:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.611 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[d308adc8-76f3-41a9-b2f2-4faa74d01120]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.612 157129 INFO neutron.agent.ovn.metadata.agent [-] Port ebd67787-8af9-4a7f-8b38-6d18daff8ff3 in datapath 5c482202-8994-4033-a0a9-167d92a9e301 unbound from our chassis#033[00m
Feb 25 07:48:34 np0005629333 systemd[1]: run-netns-ovnmeta\x2dd0b6d114\x2dfcb4\x2d4a25\x2d988c\x2d1ee301ef0419.mount: Deactivated successfully.
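
In the mount unit name above, systemd encodes '/' as '-' and escapes literal '-' as \x2d, so the unit maps back to /run/netns/ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419. The same decoding in a few lines (equivalent to systemd-escape --unescape --path on the unit body):

    import re

    unit = r'run-netns-ovnmeta\x2dd0b6d114\x2dfcb4\x2d4a25\x2d988c\x2d1ee301ef0419.mount'
    body = unit[:-len('.mount')]
    path = '/' + body.replace('-', '/')    # unit separators -> path slashes
    path = re.sub(r'\\x([0-9a-fA-F]{2})',  # \xNN escapes -> literal bytes
                  lambda m: chr(int(m.group(1), 16)), path)
    print(path)  # /run/netns/ovnmeta-d0b6d114-fcb4-4a25-988c-1ee301ef0419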
Feb 25 07:48:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.614 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5c482202-8994-4033-a0a9-167d92a9e301, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:48:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.615 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[16def604-5ee2-4c71-aa89-58b7956286b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.616 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301 namespace which is not needed anymore#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.732 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Updating instance_info_cache with network_info: [{"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "address": "fa:16:3e:8e:ea:66", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebd67787-8a", "ovs_interfaceid": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:48:34 np0005629333 neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301[350226]: [NOTICE]   (350245) : haproxy version is 2.8.14-c23fe91
Feb 25 07:48:34 np0005629333 neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301[350226]: [NOTICE]   (350245) : path to executable is /usr/sbin/haproxy
Feb 25 07:48:34 np0005629333 neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301[350226]: [WARNING]  (350245) : Exiting Master process...
Feb 25 07:48:34 np0005629333 neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301[350226]: [ALERT]    (350245) : Current worker (350250) exited with code 143 (Terminated)
Feb 25 07:48:34 np0005629333 neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301[350226]: [WARNING]  (350245) : All workers exited. Exiting... (0)
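
Exit code 143 in the ALERT above is not a crash: it is the POSIX 128 + signal-number convention, meaning the haproxy worker was killed by SIGTERM (15) as part of the container stop:

    import signal

    # A process terminated by signal N conventionally exits with 128 + N.
    print(128 + signal.SIGTERM)  # 143, i.e. SIGTERM during the container stop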
Feb 25 07:48:34 np0005629333 systemd[1]: libpod-20aff2b420c99f48402d14ea550cc6f53615620d8268dcc470c0d042dfb5ae45.scope: Deactivated successfully.
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.749 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:48:34 np0005629333 podman[352753]: 2026-02-25 12:48:34.749329739 +0000 UTC m=+0.046610507 container died 20aff2b420c99f48402d14ea550cc6f53615620d8268dcc470c0d042dfb5ae45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.750 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.751 244018 DEBUG oslo_concurrency.lockutils [req-1dc85682-0a62-414c-ba87-d0b27c314d3c req-eef9c2e4-6c2d-400a-b50e-10b6329bedb8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.751 244018 DEBUG nova.network.neutron [req-1dc85682-0a62-414c-ba87-d0b27c314d3c req-eef9c2e4-6c2d-400a-b50e-10b6329bedb8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Refreshing network info cache for port 0358e18d-8ce8-43a7-a8b2-11193708891a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.752 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.753 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.754 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.754 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.767 244018 INFO nova.virt.libvirt.driver [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Deleting instance files /var/lib/nova/instances/848fd033-0ebb-460a-a8a0-56583fa5f481_del#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.768 244018 INFO nova.virt.libvirt.driver [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Deletion of /var/lib/nova/instances/848fd033-0ebb-460a-a8a0-56583fa5f481_del complete#033[00m
Feb 25 07:48:34 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-20aff2b420c99f48402d14ea550cc6f53615620d8268dcc470c0d042dfb5ae45-userdata-shm.mount: Deactivated successfully.
Feb 25 07:48:34 np0005629333 systemd[1]: var-lib-containers-storage-overlay-83335b7bb0badbab01818a01b3ea3886d2d30cde665b02056377fe1391514531-merged.mount: Deactivated successfully.
Feb 25 07:48:34 np0005629333 podman[352753]: 2026-02-25 12:48:34.796201623 +0000 UTC m=+0.093482431 container cleanup 20aff2b420c99f48402d14ea550cc6f53615620d8268dcc470c0d042dfb5ae45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:48:34 np0005629333 systemd[1]: libpod-conmon-20aff2b420c99f48402d14ea550cc6f53615620d8268dcc470c0d042dfb5ae45.scope: Deactivated successfully.
Feb 25 07:48:34 np0005629333 podman[352782]: 2026-02-25 12:48:34.857471363 +0000 UTC m=+0.043409157 container remove 20aff2b420c99f48402d14ea550cc6f53615620d8268dcc470c0d042dfb5ae45 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
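
The podman 'died', 'cleanup' and 'remove' records above are the tail of the standard container lifecycle, and the same stream can be watched live. A sketch via the CLI (the JSON field names are an assumption about current podman event output):

    import json
    import subprocess

    name = 'neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301'
    proc = subprocess.Popen(
        ['podman', 'events', '--filter', f'container={name}',
         '--format', '{{json .}}'],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:           # one JSON object per lifecycle event
        event = json.loads(line)
        print(event.get('Status'), event.get('ID', '')[:12])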
Feb 25 07:48:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.863 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6319e025-3596-4491-915f-b3207c3e955a]: (4, ('Wed Feb 25 12:48:34 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301 (20aff2b420c99f48402d14ea550cc6f53615620d8268dcc470c0d042dfb5ae45)\n20aff2b420c99f48402d14ea550cc6f53615620d8268dcc470c0d042dfb5ae45\nWed Feb 25 12:48:34 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301 (20aff2b420c99f48402d14ea550cc6f53615620d8268dcc470c0d042dfb5ae45)\n20aff2b420c99f48402d14ea550cc6f53615620d8268dcc470c0d042dfb5ae45\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.865 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e849ef6e-0aa4-453f-bb42-706159ca2661]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.866 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c482202-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:48:34 np0005629333 kernel: tap5c482202-80: left promiscuous mode
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.870 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.874 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.876 244018 INFO nova.compute.manager [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.877 244018 DEBUG oslo.service.loopingcall [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.877 244018 DEBUG nova.compute.manager [-] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:48:34 np0005629333 nova_compute[244014]: 2026-02-25 12:48:34.877 244018 DEBUG nova.network.neutron [-] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
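
The 'Waiting for function ... _deallocate_network_with_retries to return' line two records up is oslo.service's looping-call machinery: the neutron deallocation is wrapped in a retrying loop and the caller blocks on its completion. A reduced sketch of that pattern (deallocate_for_instance is a hypothetical stand-in for the real neutron call):

    from oslo_service import loopingcall

    state = {'attempt': 0}

    def _deallocate_network_with_retries():
        state['attempt'] += 1
        try:
            deallocate_for_instance()        # hypothetical neutron API call
        except Exception:
            if state['attempt'] < 3:
                return                       # loop runs us again next interval
            raise                            # out of retries: propagate
        raise loopingcall.LoopingCallDone()  # success: stop the loop

    timer = loopingcall.FixedIntervalLoopingCall(_deallocate_network_with_retries)
    timer.start(interval=2).wait()           # the "Waiting for function" step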
Feb 25 07:48:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.878 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3a1062dc-05d9-43b7-8918-1587889a5254]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.898 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5778d4e8-8e4f-4f1e-8771-197bd186f8bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.899 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[db1e5407-d8aa-4d4e-95ff-348bbed7bc6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.915 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7918bff6-d224-47aa-8b7a-609a4c69fef1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 561237, 'reachable_time': 29426, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352799, 'error': None, 'target': 'ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.916 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5c482202-8994-4033-a0a9-167d92a9e301 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:48:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:34.916 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[b3e82092-6b80-45d4-bff3-953b6bf5031e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.035 244018 DEBUG nova.compute.manager [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Received event network-vif-unplugged-0660ccb7-a986-45dd-8aa8-e10ddd71144a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.035 244018 DEBUG oslo_concurrency.lockutils [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.036 244018 DEBUG oslo_concurrency.lockutils [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.036 244018 DEBUG oslo_concurrency.lockutils [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.036 244018 DEBUG nova.compute.manager [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] No waiting events found dispatching network-vif-unplugged-0660ccb7-a986-45dd-8aa8-e10ddd71144a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.037 244018 DEBUG nova.compute.manager [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Received event network-vif-unplugged-0660ccb7-a986-45dd-8aa8-e10ddd71144a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.037 244018 DEBUG nova.compute.manager [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Received event network-vif-plugged-0660ccb7-a986-45dd-8aa8-e10ddd71144a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.037 244018 DEBUG oslo_concurrency.lockutils [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.038 244018 DEBUG oslo_concurrency.lockutils [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.038 244018 DEBUG oslo_concurrency.lockutils [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.038 244018 DEBUG nova.compute.manager [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] No waiting events found dispatching network-vif-plugged-0660ccb7-a986-45dd-8aa8-e10ddd71144a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.038 244018 WARNING nova.compute.manager [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Received unexpected event network-vif-plugged-0660ccb7-a986-45dd-8aa8-e10ddd71144a for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.039 244018 DEBUG nova.compute.manager [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-vif-unplugged-0358e18d-8ce8-43a7-a8b2-11193708891a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.039 244018 DEBUG oslo_concurrency.lockutils [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.039 244018 DEBUG oslo_concurrency.lockutils [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.040 244018 DEBUG oslo_concurrency.lockutils [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.040 244018 DEBUG nova.compute.manager [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] No waiting events found dispatching network-vif-unplugged-0358e18d-8ce8-43a7-a8b2-11193708891a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.040 244018 DEBUG nova.compute.manager [req-31558f55-bc19-44f0-8cc7-7156e5b9f4ff req-a85f89e2-0560-408d-9d33-6a4b3437d72e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-vif-unplugged-0358e18d-8ce8-43a7-a8b2-11193708891a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
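
The Acquiring/acquired/released triplets around pop_instance_event in the records above show how incoming external events are serialized per instance: the lock name is simply the instance UUID with an '-events' suffix. The same pattern with oslo.concurrency, as a sketch:

    from oslo_concurrency import lockutils

    instance_uuid = '848fd033-0ebb-460a-a8a0-56583fa5f481'

    @lockutils.synchronized(f'{instance_uuid}-events')
    def _pop_event(name):
        # Stand-in body: nova looks up (and removes) any waiter registered
        # for this event name while holding the same per-instance lock.
        print('dispatching', name)

    _pop_event('network-vif-unplugged-0358e18d-8ce8-43a7-a8b2-11193708891a')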
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.053 244018 DEBUG nova.network.neutron [-] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.072 244018 INFO nova.compute.manager [-] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Took 1.74 seconds to deallocate network for instance.#033[00m
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.126 244018 DEBUG oslo_concurrency.lockutils [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.127 244018 DEBUG oslo_concurrency.lockutils [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.207 244018 DEBUG oslo_concurrency.processutils [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.290 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:35 np0005629333 systemd[1]: run-netns-ovnmeta\x2d5c482202\x2d8994\x2d4033\x2da0a9\x2d167d92a9e301.mount: Deactivated successfully.
Feb 25 07:48:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:48:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1663251523' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.758 244018 DEBUG oslo_concurrency.processutils [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
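
The 0.551 s 'ceph df' run above is how the rbd image backend samples cluster capacity. Reproducing it outside nova is a plain subprocess plus a JSON parse ('stats'/'total_bytes' per the documented ceph df JSON layout; credentials taken from the log):

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)
    total = stats['stats']['total_bytes']
    avail = stats['stats']['total_avail_bytes']
    print(f'{avail / total:.1%} of {total / 2**30:.0f} GiB available')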
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.765 244018 DEBUG nova.compute.provider_tree [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.787 244018 DEBUG nova.scheduler.client.report [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
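
Placement treats each inventory's usable capacity as (total - reserved) * allocation_ratio, so the figures in the line above work out to 32 VCPU, 7167 MB of RAM and 52.2 GB of disk schedulable on this node:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2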
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.822 244018 DEBUG oslo_concurrency.lockutils [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.864 244018 INFO nova.scheduler.client.report [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Deleted allocations for instance 95730650-36ac-4eee-8b22-9ea3f01d82d1#033[00m
Feb 25 07:48:35 np0005629333 nova_compute[244014]: 2026-02-25 12:48:35.935 244018 DEBUG oslo_concurrency.lockutils [None req-306db358-e9b2-4cac-a83a-e3c0e4489fee 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "95730650-36ac-4eee-8b22-9ea3f01d82d1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 35 KiB/s wr, 60 op/s
Feb 25 07:48:36 np0005629333 nova_compute[244014]: 2026-02-25 12:48:36.653 244018 DEBUG nova.compute.manager [req-b614b30f-940b-4aaa-a45e-a11507e5e764 req-6d5c8ed8-6488-4580-93b4-1eb22542172e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Received event network-vif-deleted-0660ccb7-a986-45dd-8aa8-e10ddd71144a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:48:36 np0005629333 nova_compute[244014]: 2026-02-25 12:48:36.654 244018 DEBUG nova.compute.manager [req-b614b30f-940b-4aaa-a45e-a11507e5e764 req-6d5c8ed8-6488-4580-93b4-1eb22542172e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-vif-deleted-ebd67787-8af9-4a7f-8b38-6d18daff8ff3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:48:36 np0005629333 nova_compute[244014]: 2026-02-25 12:48:36.654 244018 INFO nova.compute.manager [req-b614b30f-940b-4aaa-a45e-a11507e5e764 req-6d5c8ed8-6488-4580-93b4-1eb22542172e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Neutron deleted interface ebd67787-8af9-4a7f-8b38-6d18daff8ff3; detaching it from the instance and deleting it from the info cache#033[00m
Feb 25 07:48:36 np0005629333 nova_compute[244014]: 2026-02-25 12:48:36.655 244018 DEBUG nova.network.neutron [req-b614b30f-940b-4aaa-a45e-a11507e5e764 req-6d5c8ed8-6488-4580-93b4-1eb22542172e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Updating instance_info_cache with network_info: [{"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:48:36 np0005629333 nova_compute[244014]: 2026-02-25 12:48:36.702 244018 DEBUG nova.compute.manager [req-b614b30f-940b-4aaa-a45e-a11507e5e764 req-6d5c8ed8-6488-4580-93b4-1eb22542172e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Detach interface failed, port_id=ebd67787-8af9-4a7f-8b38-6d18daff8ff3, reason: Instance 848fd033-0ebb-460a-a8a0-56583fa5f481 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 25 07:48:36 np0005629333 nova_compute[244014]: 2026-02-25 12:48:36.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:48:36 np0005629333 nova_compute[244014]: 2026-02-25 12:48:36.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 07:48:37 np0005629333 nova_compute[244014]: 2026-02-25 12:48:37.133 244018 DEBUG nova.compute.manager [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-vif-plugged-0358e18d-8ce8-43a7-a8b2-11193708891a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:48:37 np0005629333 nova_compute[244014]: 2026-02-25 12:48:37.134 244018 DEBUG oslo_concurrency.lockutils [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:37 np0005629333 nova_compute[244014]: 2026-02-25 12:48:37.135 244018 DEBUG oslo_concurrency.lockutils [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:37 np0005629333 nova_compute[244014]: 2026-02-25 12:48:37.135 244018 DEBUG oslo_concurrency.lockutils [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:37 np0005629333 nova_compute[244014]: 2026-02-25 12:48:37.136 244018 DEBUG nova.compute.manager [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] No waiting events found dispatching network-vif-plugged-0358e18d-8ce8-43a7-a8b2-11193708891a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:48:37 np0005629333 nova_compute[244014]: 2026-02-25 12:48:37.136 244018 WARNING nova.compute.manager [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received unexpected event network-vif-plugged-0358e18d-8ce8-43a7-a8b2-11193708891a for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:48:37 np0005629333 nova_compute[244014]: 2026-02-25 12:48:37.137 244018 DEBUG nova.compute.manager [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-vif-unplugged-ebd67787-8af9-4a7f-8b38-6d18daff8ff3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:48:37 np0005629333 nova_compute[244014]: 2026-02-25 12:48:37.137 244018 DEBUG oslo_concurrency.lockutils [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:37 np0005629333 nova_compute[244014]: 2026-02-25 12:48:37.138 244018 DEBUG oslo_concurrency.lockutils [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:37 np0005629333 nova_compute[244014]: 2026-02-25 12:48:37.138 244018 DEBUG oslo_concurrency.lockutils [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:37 np0005629333 nova_compute[244014]: 2026-02-25 12:48:37.139 244018 DEBUG nova.compute.manager [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] No waiting events found dispatching network-vif-unplugged-ebd67787-8af9-4a7f-8b38-6d18daff8ff3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:48:37 np0005629333 nova_compute[244014]: 2026-02-25 12:48:37.139 244018 DEBUG nova.compute.manager [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-vif-unplugged-ebd67787-8af9-4a7f-8b38-6d18daff8ff3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:48:37 np0005629333 nova_compute[244014]: 2026-02-25 12:48:37.140 244018 DEBUG nova.compute.manager [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-vif-plugged-ebd67787-8af9-4a7f-8b38-6d18daff8ff3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:48:37 np0005629333 nova_compute[244014]: 2026-02-25 12:48:37.140 244018 DEBUG oslo_concurrency.lockutils [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:37 np0005629333 nova_compute[244014]: 2026-02-25 12:48:37.141 244018 DEBUG oslo_concurrency.lockutils [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:37 np0005629333 nova_compute[244014]: 2026-02-25 12:48:37.141 244018 DEBUG oslo_concurrency.lockutils [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:37 np0005629333 nova_compute[244014]: 2026-02-25 12:48:37.141 244018 DEBUG nova.compute.manager [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] No waiting events found dispatching network-vif-plugged-ebd67787-8af9-4a7f-8b38-6d18daff8ff3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:48:37 np0005629333 nova_compute[244014]: 2026-02-25 12:48:37.142 244018 WARNING nova.compute.manager [req-63a2255b-9bdf-4bc6-bc87-563cf3684375 req-06163bf8-ddf6-4f31-8231-144eae5208dc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received unexpected event network-vif-plugged-ebd67787-8af9-4a7f-8b38-6d18daff8ff3 for instance with vm_state active and task_state deleting.#033[00m
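
Annotation: the lines above are Nova's external-event plumbing in miniature. Neutron posts network-vif-plugged/unplugged events to the compute API, and the compute manager pops each one against a per-instance registry while holding the oslo.concurrency lock named "<instance_uuid>-events". Because this instance is already in task_state deleting, no waiter is registered, so dispatch logs "No waiting events found" and the plug events are flagged as unexpected; during teardown these warnings are benign. A simplified sketch of that registry, under those assumptions (a toy analogue, not the actual Nova source):

    import threading

    class InstanceEvents:
        """Toy analogue of nova.compute.manager.InstanceEvents."""

        def __init__(self):
            self._events = {}              # instance_uuid -> {event_name: waiter}
            self._lock = threading.Lock()  # stands in for lockutils "<uuid>-events"

        def pop_instance_event(self, instance_uuid, event_name):
            with self._lock:
                waiters = self._events.get(instance_uuid) or {}
                # None here is the "No waiting events found" case in the log.
                return waiters.pop(event_name, None)
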
Feb 25 07:48:37 np0005629333 nova_compute[244014]: 2026-02-25 12:48:37.424 244018 DEBUG nova.network.neutron [req-1dc85682-0a62-414c-ba87-d0b27c314d3c req-eef9c2e4-6c2d-400a-b50e-10b6329bedb8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Updated VIF entry in instance network info cache for port 0358e18d-8ce8-43a7-a8b2-11193708891a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:48:37 np0005629333 nova_compute[244014]: 2026-02-25 12:48:37.425 244018 DEBUG nova.network.neutron [req-1dc85682-0a62-414c-ba87-d0b27c314d3c req-eef9c2e4-6c2d-400a-b50e-10b6329bedb8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Updating instance_info_cache with network_info: [{"id": "0358e18d-8ce8-43a7-a8b2-11193708891a", "address": "fa:16:3e:fd:1f:00", "network": {"id": "d0b6d114-fcb4-4a25-988c-1ee301ef0419", "bridge": "br-int", "label": "tempest-network-smoke--23842255", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0358e18d-8c", "ovs_interfaceid": "0358e18d-8ce8-43a7-a8b2-11193708891a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "address": "fa:16:3e:8e:ea:66", "network": {"id": "5c482202-8994-4033-a0a9-167d92a9e301", "bridge": "br-int", "label": "tempest-network-smoke--1313035639", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe8e:ea66", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebd67787-8a", "ovs_interfaceid": "ebd67787-8af9-4a7f-8b38-6d18daff8ff3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
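
Annotation: the "Updating instance_info_cache" entry above is Nova's cached copy of the Neutron view of each VIF: port UUID, MAC address, OVS/OVN binding details, and per-subnet addresses (here one IPv4 /28 port plus a second port carrying two SLAAC IPv6 subnets). A minimal sketch of walking that structure, using only field names visible in the log entry:

    import json

    def summarize_vifs(network_info):
        """Yield (port_id, mac, [addresses]) from a cached network_info list."""
        for vif in network_info:
            ips = [ip["address"]
                   for subnet in vif["network"]["subnets"]
                   for ip in subnet["ips"]]
            yield vif["id"], vif["address"], ips

    # e.g. json.loads(cache_json) ->
    #   ('0358e18d-...', 'fa:16:3e:fd:1f:00', ['10.100.0.11']), ...
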
Feb 25 07:48:37 np0005629333 nova_compute[244014]: 2026-02-25 12:48:37.452 244018 DEBUG oslo_concurrency.lockutils [req-1dc85682-0a62-414c-ba87-d0b27c314d3c req-eef9c2e4-6c2d-400a-b50e-10b6329bedb8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-848fd033-0ebb-460a-a8a0-56583fa5f481" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:48:38 np0005629333 nova_compute[244014]: 2026-02-25 12:48:38.270 244018 DEBUG nova.network.neutron [-] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:48:38 np0005629333 nova_compute[244014]: 2026-02-25 12:48:38.312 244018 INFO nova.compute.manager [-] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Took 3.43 seconds to deallocate network for instance.#033[00m
Feb 25 07:48:38 np0005629333 nova_compute[244014]: 2026-02-25 12:48:38.379 244018 DEBUG oslo_concurrency.lockutils [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:38 np0005629333 nova_compute[244014]: 2026-02-25 12:48:38.379 244018 DEBUG oslo_concurrency.lockutils [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:38 np0005629333 nova_compute[244014]: 2026-02-25 12:48:38.420 244018 DEBUG oslo_concurrency.processutils [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:48:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 37 KiB/s wr, 115 op/s
Feb 25 07:48:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:48:38 np0005629333 nova_compute[244014]: 2026-02-25 12:48:38.787 244018 DEBUG nova.compute.manager [req-bbf106c9-2a1c-4d8b-925e-db38292697e8 req-37fd7718-4045-4a7f-9f08-3de35016a6db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Received event network-vif-deleted-0358e18d-8ce8-43a7-a8b2-11193708891a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:48:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:48:38 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/794443488' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:48:39 np0005629333 nova_compute[244014]: 2026-02-25 12:48:39.016 244018 DEBUG oslo_concurrency.processutils [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
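
Annotation: the resource tracker sizes its DISK_GB inventory by shelling out to exactly the command logged above, and the ceph-mon audit lines show the same request arriving as a mon_command from client.openstack. A sketch of that call and of reading the totals back (command string copied from the log; the JSON keys follow the usual "ceph df" layout, which can vary slightly across Ceph releases):

    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    total_gb = stats["total_bytes"] / (1 << 30)        # ~60 GiB in this cluster
    avail_gb = stats["total_avail_bytes"] / (1 << 30)  # ~59 GiB per the pgmap lines
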
Feb 25 07:48:39 np0005629333 nova_compute[244014]: 2026-02-25 12:48:39.023 244018 DEBUG nova.compute.provider_tree [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:48:39 np0005629333 nova_compute[244014]: 2026-02-25 12:48:39.042 244018 DEBUG nova.scheduler.client.report [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
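
Annotation: the inventory dict above is what placement admission control works from; usable capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the logged values:

    inventory = {  # values copied from the log line above
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0 (8 pCPUs oversubscribed 4x), MEMORY_MB 7167.0, DISK_GB 52.2
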
Feb 25 07:48:39 np0005629333 nova_compute[244014]: 2026-02-25 12:48:39.070 244018 DEBUG oslo_concurrency.lockutils [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:39 np0005629333 nova_compute[244014]: 2026-02-25 12:48:39.111 244018 INFO nova.scheduler.client.report [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance 848fd033-0ebb-460a-a8a0-56583fa5f481#033[00m
Feb 25 07:48:39 np0005629333 nova_compute[244014]: 2026-02-25 12:48:39.220 244018 DEBUG oslo_concurrency.lockutils [None req-3af7087f-a8db-4d32-b32c-329b82c795c6 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "848fd033-0ebb-460a-a8a0-56583fa5f481" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:39 np0005629333 nova_compute[244014]: 2026-02-25 12:48:39.526 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:40 np0005629333 nova_compute[244014]: 2026-02-25 12:48:40.291 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 8.9 KiB/s wr, 70 op/s
Feb 25 07:48:40 np0005629333 nova_compute[244014]: 2026-02-25 12:48:40.600 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023705.5991154, 9c11f17d-5dba-4ece-8340-1c4ff0939294 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:48:40 np0005629333 nova_compute[244014]: 2026-02-25 12:48:40.601 244018 INFO nova.compute.manager [-] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:48:40 np0005629333 nova_compute[244014]: 2026-02-25 12:48:40.621 244018 DEBUG nova.compute.manager [None req-de036231-73d5-480c-8346-7062bf3b3abd - - - - - -] [instance: 9c11f17d-5dba-4ece-8340-1c4ff0939294] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:48:41 np0005629333 nova_compute[244014]: 2026-02-25 12:48:41.786 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:41 np0005629333 nova_compute[244014]: 2026-02-25 12:48:41.856 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 8.9 KiB/s wr, 70 op/s
Feb 25 07:48:42 np0005629333 nova_compute[244014]: 2026-02-25 12:48:42.551 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023707.548173, 271f6569-a8f6-43a3-ac98-511eff77c426 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:48:42 np0005629333 nova_compute[244014]: 2026-02-25 12:48:42.552 244018 INFO nova.compute.manager [-] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:48:42 np0005629333 nova_compute[244014]: 2026-02-25 12:48:42.577 244018 DEBUG nova.compute.manager [None req-f32260c3-2aa0-49bd-81e9-5893d12ffd7d - - - - - -] [instance: 271f6569-a8f6-43a3-ac98-511eff77c426] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:48:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:48:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:48:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:48:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:48:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.566761086646441e-05 of space, bias 1.0, pg target 0.004700283259939323 quantized to 32 (current 32)
Feb 25 07:48:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:48:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:48:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:48:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:48:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:48:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024940534720719228 of space, bias 1.0, pg target 0.7482160416215768 quantized to 32 (current 32)
Feb 25 07:48:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:48:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3979243418663797e-06 of space, bias 4.0, pg target 0.0016775092102396555 quantized to 16 (current 16)
Feb 25 07:48:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:48:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:48:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:48:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:48:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:48:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:48:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:48:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:48:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:48:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
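
Annotation: the pg_autoscaler lines above follow a simple formula: the raw "pg target" is usage_ratio * bias * cluster_pg_target, and the logged figures are consistent with a cluster_pg_target of 300 (e.g. 3 OSDs at the default mon_target_pg_per_osd of 100; an assumption, not stated in the log). The "quantized" figure is the would-be new pg_num after rounding and applying the autoscaler's change threshold, which here always lands on each pool's current value, so nothing is resized. Reproducing two of the rows:

    cluster_pg_target = 300  # assumed: 3 OSDs * mon_target_pg_per_osd (default 100)

    for pool, ratio, bias in [
        ("images",              0.0024940534720719228, 1.0),
        ("cephfs.cephfs.meta",  1.3979243418663797e-06, 4.0),
    ]:
        raw = ratio * bias * cluster_pg_target
        print(pool, raw)
    # images 0.74821604... and cephfs.cephfs.meta 0.00167750921...,
    # matching the "pg target" fields logged above.
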
Feb 25 07:48:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:42.882 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:48:42 np0005629333 nova_compute[244014]: 2026-02-25 12:48:42.883 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:42.885 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 25 07:48:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:48:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 2.4 KiB/s wr, 55 op/s
Feb 25 07:48:44 np0005629333 nova_compute[244014]: 2026-02-25 12:48:44.528 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:45 np0005629333 nova_compute[244014]: 2026-02-25 12:48:45.293 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2046: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Feb 25 07:48:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:48:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/333512940' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:48:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:48:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/333512940' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
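
Annotation: these two audit entries show a second client.openstack consumer, from 192.168.122.10 rather than the 192.168.122.100 source of the nova-compute df calls, pairing "df" with "osd pool get-quota" on the volumes pool. That pairing is consistent with the Cinder RBD driver's capacity reporting, though the log itself does not identify the caller. A sketch of issuing the same mon_command through the standard python-rados API (connection parameters assumed from the log):

    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf",
                     name="client.openstack") as cluster:
        cmd = json.dumps({"prefix": "osd pool get-quota",
                          "pool": "volumes", "format": "json"})
        ret, out, errs = cluster.mon_command(cmd, b"")
        quota = json.loads(out)  # quota_max_bytes / quota_max_objects; 0 = unlimited
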
Feb 25 07:48:47 np0005629333 nova_compute[244014]: 2026-02-25 12:48:47.944 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023712.9430044, 95730650-36ac-4eee-8b22-9ea3f01d82d1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:48:47 np0005629333 nova_compute[244014]: 2026-02-25 12:48:47.944 244018 INFO nova.compute.manager [-] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:48:47 np0005629333 nova_compute[244014]: 2026-02-25 12:48:47.970 244018 DEBUG nova.compute.manager [None req-dde8d55b-0c06-4c3b-832f-86a893457556 - - - - - -] [instance: 95730650-36ac-4eee-8b22-9ea3f01d82d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:48:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Feb 25 07:48:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:48:49 np0005629333 nova_compute[244014]: 2026-02-25 12:48:49.412 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023714.4112868, 848fd033-0ebb-460a-a8a0-56583fa5f481 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:48:49 np0005629333 nova_compute[244014]: 2026-02-25 12:48:49.413 244018 INFO nova.compute.manager [-] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:48:49 np0005629333 nova_compute[244014]: 2026-02-25 12:48:49.448 244018 DEBUG nova.compute.manager [None req-fb8eaa51-af69-410c-9d0e-dbc43380ea00 - - - - - -] [instance: 848fd033-0ebb-460a-a8a0-56583fa5f481] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:48:49 np0005629333 nova_compute[244014]: 2026-02-25 12:48:49.530 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:50 np0005629333 nova_compute[244014]: 2026-02-25 12:48:50.296 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:48:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2049: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:48:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:52.887 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
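
Annotation: this transaction closes the loop opened at 12:48:42.885, when the agent matched the SB_Global update (nb_cfg 35 -> 36) and announced a 10-second delay before touching the chassis table. Ten seconds later it records the new nb_cfg as neutron:ovn-metadata-sb-cfg in external_ids on its own Chassis_Private row; that value is what the Neutron server reads to decide the metadata agent is alive and in sync, and the delay spreads the acks from many chassis over time rather than stampeding the southbound DB. Expressed through ovsdbapp's generic db_set command (idl_api stands in for an already-connected ovsdbapp backend, an assumption):

    txn = idl_api.db_set(
        "Chassis_Private", "a594384c-d614-4492-9e0a-4d6ec095920c",
        ("external_ids", {"neutron:ovn-metadata-sb-cfg": "36"}))
    txn.execute(check_error=True)  # equivalent of the DbSetCommand in the log
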
Feb 25 07:48:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:48:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:48:54 np0005629333 nova_compute[244014]: 2026-02-25 12:48:54.534 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:55.026 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:55.027 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:48:55.027 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:55 np0005629333 nova_compute[244014]: 2026-02-25 12:48:55.298 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2051: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:48:58 np0005629333 nova_compute[244014]: 2026-02-25 12:48:58.504 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:58 np0005629333 nova_compute[244014]: 2026-02-25 12:48:58.505 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:48:58 np0005629333 nova_compute[244014]: 2026-02-25 12:48:58.528 244018 DEBUG nova.compute.manager [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:48:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:48:58 np0005629333 nova_compute[244014]: 2026-02-25 12:48:58.663 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:58 np0005629333 nova_compute[244014]: 2026-02-25 12:48:58.664 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:58 np0005629333 nova_compute[244014]: 2026-02-25 12:48:58.675 244018 DEBUG nova.virt.hardware [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:48:58 np0005629333 nova_compute[244014]: 2026-02-25 12:48:58.675 244018 INFO nova.compute.claims [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:48:58 np0005629333 nova_compute[244014]: 2026-02-25 12:48:58.881 244018 DEBUG oslo_concurrency.processutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:48:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:48:59 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3490951547' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:48:59 np0005629333 nova_compute[244014]: 2026-02-25 12:48:59.424 244018 DEBUG oslo_concurrency.processutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:48:59 np0005629333 nova_compute[244014]: 2026-02-25 12:48:59.431 244018 DEBUG nova.compute.provider_tree [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:48:59 np0005629333 nova_compute[244014]: 2026-02-25 12:48:59.455 244018 DEBUG nova.scheduler.client.report [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:48:59 np0005629333 nova_compute[244014]: 2026-02-25 12:48:59.488 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:59 np0005629333 nova_compute[244014]: 2026-02-25 12:48:59.489 244018 DEBUG nova.compute.manager [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:48:59 np0005629333 nova_compute[244014]: 2026-02-25 12:48:59.539 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:48:59 np0005629333 nova_compute[244014]: 2026-02-25 12:48:59.579 244018 DEBUG nova.compute.manager [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:48:59 np0005629333 nova_compute[244014]: 2026-02-25 12:48:59.579 244018 DEBUG nova.network.neutron [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:48:59 np0005629333 nova_compute[244014]: 2026-02-25 12:48:59.612 244018 INFO nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:48:59 np0005629333 nova_compute[244014]: 2026-02-25 12:48:59.642 244018 DEBUG nova.compute.manager [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:48:59 np0005629333 nova_compute[244014]: 2026-02-25 12:48:59.735 244018 DEBUG nova.compute.manager [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:48:59 np0005629333 nova_compute[244014]: 2026-02-25 12:48:59.737 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:48:59 np0005629333 nova_compute[244014]: 2026-02-25 12:48:59.738 244018 INFO nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Creating image(s)#033[00m
Feb 25 07:48:59 np0005629333 nova_compute[244014]: 2026-02-25 12:48:59.769 244018 DEBUG nova.storage.rbd_utils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:48:59 np0005629333 nova_compute[244014]: 2026-02-25 12:48:59.795 244018 DEBUG nova.storage.rbd_utils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:48:59 np0005629333 nova_compute[244014]: 2026-02-25 12:48:59.818 244018 DEBUG nova.storage.rbd_utils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:48:59 np0005629333 nova_compute[244014]: 2026-02-25 12:48:59.821 244018 DEBUG oslo_concurrency.processutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:48:59 np0005629333 nova_compute[244014]: 2026-02-25 12:48:59.872 244018 DEBUG nova.policy [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:48:59 np0005629333 nova_compute[244014]: 2026-02-25 12:48:59.894 244018 DEBUG oslo_concurrency.processutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
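
Annotation: note the wrapper around qemu-img in the two lines above: oslo.concurrency's prlimit helper re-executes the command with an address-space cap of 1 GiB (--as=1073741824) and a CPU-time cap of 30 s, so probing a corrupt or hostile image cannot exhaust the compute host. The same guard via the documented processutils interface, with the limit values copied from the logged command line:

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1 << 30, cpu_time=30)
    out, err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6",
        "--force-share", "--output=json",
        prlimit=limits)
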
Feb 25 07:48:59 np0005629333 nova_compute[244014]: 2026-02-25 12:48:59.895 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:48:59 np0005629333 nova_compute[244014]: 2026-02-25 12:48:59.895 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:48:59 np0005629333 nova_compute[244014]: 2026-02-25 12:48:59.896 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:48:59 np0005629333 nova_compute[244014]: 2026-02-25 12:48:59.918 244018 DEBUG nova.storage.rbd_utils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:48:59 np0005629333 nova_compute[244014]: 2026-02-25 12:48:59.921 244018 DEBUG oslo_concurrency.processutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:49:00 np0005629333 nova_compute[244014]: 2026-02-25 12:49:00.189 244018 DEBUG oslo_concurrency.processutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.267s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:49:00 np0005629333 nova_compute[244014]: 2026-02-25 12:49:00.247 244018 DEBUG nova.storage.rbd_utils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] resizing rbd image 37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
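
Annotation: with the base image already cached under /var/lib/nova/instances/_base, the RBD backend materializes the root disk by importing the cached file into the vms pool and then growing it to the flavor's 1073741824-byte (1 GiB) disk, exactly as the two entries above record. A sketch of the same two steps; the import mirrors the logged CLI (rbd import has no Python binding, which is why Nova shells out), while the resize uses the standard python-rbd API:

    import subprocess
    import rados, rbd

    subprocess.check_call([
        "rbd", "import", "--pool", "vms",
        "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6",
        "37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk", "--image-format=2",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])

    with rados.Rados(conffile="/etc/ceph/ceph.conf",
                     name="client.openstack") as cluster:
        with cluster.open_ioctx("vms") as ioctx:
            with rbd.Image(ioctx, "37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk") as img:
                img.resize(1073741824)  # the resize logged by nova.storage.rbd_utils
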
Feb 25 07:49:00 np0005629333 nova_compute[244014]: 2026-02-25 12:49:00.312 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:00 np0005629333 nova_compute[244014]: 2026-02-25 12:49:00.318 244018 DEBUG nova.objects.instance [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'migration_context' on Instance uuid 37ce1876-2b57-4f84-800c-2a1b6eaa6943 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:49:00 np0005629333 nova_compute[244014]: 2026-02-25 12:49:00.333 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:49:00 np0005629333 nova_compute[244014]: 2026-02-25 12:49:00.333 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Ensure instance console log exists: /var/lib/nova/instances/37ce1876-2b57-4f84-800c-2a1b6eaa6943/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:49:00 np0005629333 nova_compute[244014]: 2026-02-25 12:49:00.333 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:00 np0005629333 nova_compute[244014]: 2026-02-25 12:49:00.334 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:00 np0005629333 nova_compute[244014]: 2026-02-25 12:49:00.334 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 305 active+clean; 153 MiB data, 947 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:49:00 np0005629333 podman[353033]: 2026-02-25 12:49:00.716822292 +0000 UTC m=+0.058874453 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:49:00 np0005629333 podman[353034]: 2026-02-25 12:49:00.763342336 +0000 UTC m=+0.096458375 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
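
Annotation: the two podman events above are periodic container healthchecks. Each container's config_data mounts a host-side script directory and declares the test '/openstack/healthcheck'; podman runs that command inside the container and records health_status=healthy while health_failing_streak stays at zero. The same check can be driven on demand (exit status 0 means healthy):

    import subprocess

    # "podman healthcheck run" executes the container's configured test once.
    for name in ("ovn_metadata_agent", "ovn_controller"):
        subprocess.check_call(["podman", "healthcheck", "run", name])
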
Feb 25 07:49:00 np0005629333 nova_compute[244014]: 2026-02-25 12:49:00.782 244018 DEBUG nova.network.neutron [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Successfully created port: cda27370-858e-4443-819e-696576515c52 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:49:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:49:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:49:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:49:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:49:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:49:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:49:01 np0005629333 nova_compute[244014]: 2026-02-25 12:49:01.928 244018 DEBUG nova.network.neutron [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Successfully updated port: cda27370-858e-4443-819e-696576515c52 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:49:01 np0005629333 nova_compute[244014]: 2026-02-25 12:49:01.962 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:49:01 np0005629333 nova_compute[244014]: 2026-02-25 12:49:01.963 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:49:01 np0005629333 nova_compute[244014]: 2026-02-25 12:49:01.963 244018 DEBUG nova.network.neutron [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:49:02 np0005629333 nova_compute[244014]: 2026-02-25 12:49:02.066 244018 DEBUG nova.compute.manager [req-55768304-40d9-4ee7-81dc-b5a13ae5f30a req-2f60c6d1-f69d-4cda-aa2c-7542a1035393 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received event network-changed-cda27370-858e-4443-819e-696576515c52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:49:02 np0005629333 nova_compute[244014]: 2026-02-25 12:49:02.067 244018 DEBUG nova.compute.manager [req-55768304-40d9-4ee7-81dc-b5a13ae5f30a req-2f60c6d1-f69d-4cda-aa2c-7542a1035393 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Refreshing instance network info cache due to event network-changed-cda27370-858e-4443-819e-696576515c52. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:49:02 np0005629333 nova_compute[244014]: 2026-02-25 12:49:02.067 244018 DEBUG oslo_concurrency.lockutils [req-55768304-40d9-4ee7-81dc-b5a13ae5f30a req-2f60c6d1-f69d-4cda-aa2c-7542a1035393 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:49:02 np0005629333 nova_compute[244014]: 2026-02-25 12:49:02.203 244018 DEBUG nova.network.neutron [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:49:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 176 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 917 KiB/s wr, 14 op/s
Feb 25 07:49:02 np0005629333 nova_compute[244014]: 2026-02-25 12:49:02.976 244018 DEBUG nova.network.neutron [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Updating instance_info_cache with network_info: [{"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:49:03 np0005629333 nova_compute[244014]: 2026-02-25 12:49:03.012 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:49:03 np0005629333 nova_compute[244014]: 2026-02-25 12:49:03.013 244018 DEBUG nova.compute.manager [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Instance network_info: |[{"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:49:03 np0005629333 nova_compute[244014]: 2026-02-25 12:49:03.013 244018 DEBUG oslo_concurrency.lockutils [req-55768304-40d9-4ee7-81dc-b5a13ae5f30a req-2f60c6d1-f69d-4cda-aa2c-7542a1035393 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:49:03 np0005629333 nova_compute[244014]: 2026-02-25 12:49:03.014 244018 DEBUG nova.network.neutron [req-55768304-40d9-4ee7-81dc-b5a13ae5f30a req-2f60c6d1-f69d-4cda-aa2c-7542a1035393 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Refreshing network info cache for port cda27370-858e-4443-819e-696576515c52 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:49:03 np0005629333 nova_compute[244014]: 2026-02-25 12:49:03.020 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Start _get_guest_xml network_info=[{"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:49:03 np0005629333 nova_compute[244014]: 2026-02-25 12:49:03.026 244018 WARNING nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:49:03 np0005629333 nova_compute[244014]: 2026-02-25 12:49:03.037 244018 DEBUG nova.virt.libvirt.host [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:49:03 np0005629333 nova_compute[244014]: 2026-02-25 12:49:03.038 244018 DEBUG nova.virt.libvirt.host [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:49:03 np0005629333 nova_compute[244014]: 2026-02-25 12:49:03.047 244018 DEBUG nova.virt.libvirt.host [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:49:03 np0005629333 nova_compute[244014]: 2026-02-25 12:49:03.047 244018 DEBUG nova.virt.libvirt.host [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:49:03 np0005629333 nova_compute[244014]: 2026-02-25 12:49:03.048 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:49:03 np0005629333 nova_compute[244014]: 2026-02-25 12:49:03.048 244018 DEBUG nova.virt.hardware [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:49:03 np0005629333 nova_compute[244014]: 2026-02-25 12:49:03.049 244018 DEBUG nova.virt.hardware [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:49:03 np0005629333 nova_compute[244014]: 2026-02-25 12:49:03.050 244018 DEBUG nova.virt.hardware [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:49:03 np0005629333 nova_compute[244014]: 2026-02-25 12:49:03.050 244018 DEBUG nova.virt.hardware [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:49:03 np0005629333 nova_compute[244014]: 2026-02-25 12:49:03.051 244018 DEBUG nova.virt.hardware [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:49:03 np0005629333 nova_compute[244014]: 2026-02-25 12:49:03.051 244018 DEBUG nova.virt.hardware [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:49:03 np0005629333 nova_compute[244014]: 2026-02-25 12:49:03.052 244018 DEBUG nova.virt.hardware [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:49:03 np0005629333 nova_compute[244014]: 2026-02-25 12:49:03.052 244018 DEBUG nova.virt.hardware [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:49:03 np0005629333 nova_compute[244014]: 2026-02-25 12:49:03.053 244018 DEBUG nova.virt.hardware [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:49:03 np0005629333 nova_compute[244014]: 2026-02-25 12:49:03.053 244018 DEBUG nova.virt.hardware [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:49:03 np0005629333 nova_compute[244014]: 2026-02-25 12:49:03.054 244018 DEBUG nova.virt.hardware [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:49:03 np0005629333 nova_compute[244014]: 2026-02-25 12:49:03.058 244018 DEBUG oslo_concurrency.processutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:49:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:49:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:49:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1959613151' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:49:03 np0005629333 nova_compute[244014]: 2026-02-25 12:49:03.638 244018 DEBUG oslo_concurrency.processutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:49:03 np0005629333 nova_compute[244014]: 2026-02-25 12:49:03.663 244018 DEBUG nova.storage.rbd_utils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:49:03 np0005629333 nova_compute[244014]: 2026-02-25 12:49:03.669 244018 DEBUG oslo_concurrency.processutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:49:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:49:04 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/757917452' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.270 244018 DEBUG oslo_concurrency.processutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.273 244018 DEBUG nova.virt.libvirt.vif [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:48:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1787082415',display_name='tempest-TestNetworkBasicOps-server-1787082415',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1787082415',id=121,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKssTuj33zFMa3evVOk6flWSjlUceMgeiXlovCojJrinnNv1phI+OVsHaPYu7D22Bc6n/wlCJmAYv3X7OJM/21EB1B1U1udfCABsCK7Pj9XBc8dCdeNdko4o08iW1C9pCg==',key_name='tempest-TestNetworkBasicOps-1218860627',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-h0ic083v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:48:59Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=37ce1876-2b57-4f84-800c-2a1b6eaa6943,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.273 244018 DEBUG nova.network.os_vif_util [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.274 244018 DEBUG nova.network.os_vif_util [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:74:9c,bridge_name='br-int',has_traffic_filtering=True,id=cda27370-858e-4443-819e-696576515c52,network=Network(b302251f-a239-4374-92d3-7686a49e9d67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcda27370-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.276 244018 DEBUG nova.objects.instance [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_devices' on Instance uuid 37ce1876-2b57-4f84-800c-2a1b6eaa6943 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.293 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:49:04 np0005629333 nova_compute[244014]:  <uuid>37ce1876-2b57-4f84-800c-2a1b6eaa6943</uuid>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:  <name>instance-00000079</name>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestNetworkBasicOps-server-1787082415</nova:name>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:49:03</nova:creationTime>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:49:04 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:        <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:        <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:        <nova:port uuid="cda27370-858e-4443-819e-696576515c52">
Feb 25 07:49:04 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <entry name="serial">37ce1876-2b57-4f84-800c-2a1b6eaa6943</entry>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <entry name="uuid">37ce1876-2b57-4f84-800c-2a1b6eaa6943</entry>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk">
Feb 25 07:49:04 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:49:04 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk.config">
Feb 25 07:49:04 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:49:04 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:50:74:9c"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <target dev="tapcda27370-85"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/37ce1876-2b57-4f84-800c-2a1b6eaa6943/console.log" append="off"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:49:04 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:49:04 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:49:04 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:49:04 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.295 244018 DEBUG nova.compute.manager [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Preparing to wait for external event network-vif-plugged-cda27370-858e-4443-819e-696576515c52 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.296 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.296 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.297 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.298 244018 DEBUG nova.virt.libvirt.vif [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:48:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1787082415',display_name='tempest-TestNetworkBasicOps-server-1787082415',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1787082415',id=121,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKssTuj33zFMa3evVOk6flWSjlUceMgeiXlovCojJrinnNv1phI+OVsHaPYu7D22Bc6n/wlCJmAYv3X7OJM/21EB1B1U1udfCABsCK7Pj9XBc8dCdeNdko4o08iW1C9pCg==',key_name='tempest-TestNetworkBasicOps-1218860627',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-h0ic083v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:48:59Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=37ce1876-2b57-4f84-800c-2a1b6eaa6943,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.298 244018 DEBUG nova.network.os_vif_util [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.299 244018 DEBUG nova.network.os_vif_util [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:74:9c,bridge_name='br-int',has_traffic_filtering=True,id=cda27370-858e-4443-819e-696576515c52,network=Network(b302251f-a239-4374-92d3-7686a49e9d67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcda27370-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.299 244018 DEBUG os_vif [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:74:9c,bridge_name='br-int',has_traffic_filtering=True,id=cda27370-858e-4443-819e-696576515c52,network=Network(b302251f-a239-4374-92d3-7686a49e9d67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcda27370-85') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.300 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.301 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.302 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.306 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.307 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcda27370-85, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.308 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcda27370-85, col_values=(('external_ids', {'iface-id': 'cda27370-858e-4443-819e-696576515c52', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:50:74:9c', 'vm-uuid': '37ce1876-2b57-4f84-800c-2a1b6eaa6943'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.311 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:04 np0005629333 NetworkManager[49836]: <info>  [1772023744.3125] manager: (tapcda27370-85): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/524)
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.315 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.318 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.319 244018 INFO os_vif [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:74:9c,bridge_name='br-int',has_traffic_filtering=True,id=cda27370-858e-4443-819e-696576515c52,network=Network(b302251f-a239-4374-92d3-7686a49e9d67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcda27370-85')#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.366 244018 DEBUG nova.network.neutron [req-55768304-40d9-4ee7-81dc-b5a13ae5f30a req-2f60c6d1-f69d-4cda-aa2c-7542a1035393 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Updated VIF entry in instance network info cache for port cda27370-858e-4443-819e-696576515c52. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.367 244018 DEBUG nova.network.neutron [req-55768304-40d9-4ee7-81dc-b5a13ae5f30a req-2f60c6d1-f69d-4cda-aa2c-7542a1035393 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Updating instance_info_cache with network_info: [{"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.395 244018 DEBUG oslo_concurrency.lockutils [req-55768304-40d9-4ee7-81dc-b5a13ae5f30a req-2f60c6d1-f69d-4cda-aa2c-7542a1035393 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.397 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.397 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.398 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:50:74:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.398 244018 INFO nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Using config drive#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.423 244018 DEBUG nova.storage.rbd_utils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:49:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.879 244018 INFO nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Creating config drive at /var/lib/nova/instances/37ce1876-2b57-4f84-800c-2a1b6eaa6943/disk.config#033[00m
Feb 25 07:49:04 np0005629333 nova_compute[244014]: 2026-02-25 12:49:04.886 244018 DEBUG oslo_concurrency.processutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/37ce1876-2b57-4f84-800c-2a1b6eaa6943/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpa83ldsxx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.023 244018 DEBUG oslo_concurrency.processutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/37ce1876-2b57-4f84-800c-2a1b6eaa6943/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpa83ldsxx" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.051 244018 DEBUG nova.storage.rbd_utils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.056 244018 DEBUG oslo_concurrency.processutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/37ce1876-2b57-4f84-800c-2a1b6eaa6943/disk.config 37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.208 244018 DEBUG oslo_concurrency.processutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/37ce1876-2b57-4f84-800c-2a1b6eaa6943/disk.config 37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.209 244018 INFO nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Deleting local config drive /var/lib/nova/instances/37ce1876-2b57-4f84-800c-2a1b6eaa6943/disk.config because it was imported into RBD.#033[00m
Feb 25 07:49:05 np0005629333 kernel: tapcda27370-85: entered promiscuous mode
Feb 25 07:49:05 np0005629333 NetworkManager[49836]: <info>  [1772023745.2673] manager: (tapcda27370-85): new Tun device (/org/freedesktop/NetworkManager/Devices/525)
Feb 25 07:49:05 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:05Z|01265|binding|INFO|Claiming lport cda27370-858e-4443-819e-696576515c52 for this chassis.
Feb 25 07:49:05 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:05Z|01266|binding|INFO|cda27370-858e-4443-819e-696576515c52: Claiming fa:16:3e:50:74:9c 10.100.0.7
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.267 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.276 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.278 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:05 np0005629333 systemd-udevd[353211]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.298 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:74:9c 10.100.0.7'], port_security=['fa:16:3e:50:74:9c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '37ce1876-2b57-4f84-800c-2a1b6eaa6943', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b302251f-a239-4374-92d3-7686a49e9d67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9170f76a-1c5e-4379-83f2-194a69c3afae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa6e9e5e-6b6d-43d1-bea6-b8dba8200d28, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=cda27370-858e-4443-819e-696576515c52) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:49:05 np0005629333 NetworkManager[49836]: <info>  [1772023745.2994] device (tapcda27370-85): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:49:05 np0005629333 NetworkManager[49836]: <info>  [1772023745.2999] device (tapcda27370-85): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:49:05 np0005629333 systemd-machined[210048]: New machine qemu-153-instance-00000079.
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.299 157129 INFO neutron.agent.ovn.metadata.agent [-] Port cda27370-858e-4443-819e-696576515c52 in datapath b302251f-a239-4374-92d3-7686a49e9d67 bound to our chassis#033[00m
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.300 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b302251f-a239-4374-92d3-7686a49e9d67#033[00m
Feb 25 07:49:05 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:05Z|01267|binding|INFO|Setting lport cda27370-858e-4443-819e-696576515c52 ovn-installed in OVS
Feb 25 07:49:05 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:05Z|01268|binding|INFO|Setting lport cda27370-858e-4443-819e-696576515c52 up in Southbound
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.300 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.303 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.310 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[045a5f61-b414-4f69-b1fa-c1c165fc967d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.310 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb302251f-a1 in ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.313 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb302251f-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.313 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d82f0faf-4684-4294-bfed-ed750d1c4869]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.313 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[79996e52-0f4a-41d4-b4c5-1430439a6403]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
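
The privsep round-trips above carry out the datapath provisioning announced a few lines earlier: a veth pair whose -a1 end lives inside the ovnmeta- namespace (for haproxy) while the -a0 peer stays in the root namespace to be plugged into br-int. A rough equivalent of that plumbing with pyroute2, assuming the namespace already exists and using the interface names from the log (the address/prefix at the end is illustrative):

    from pyroute2 import IPRoute, NetNS

    NS = "ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67"

    with IPRoute() as ipr:
        # veth pair: -a0 stays in the root namespace for br-int,
        # -a1 will serve the metadata proxy inside the namespace.
        ipr.link("add", ifname="tapb302251f-a0", kind="veth",
                 peer="tapb302251f-a1")
        idx = ipr.link_lookup(ifname="tapb302251f-a1")[0]
        ipr.link("set", index=idx, net_ns_fd=NS)

    with NetNS(NS) as ns:
        idx = ns.link_lookup(ifname="tapb302251f-a1")[0]
        ns.link("set", index=idx, state="up")
        # The metadata IP that haproxy binds a few lines below.
        ns.addr("add", index=idx, address="169.254.169.254", prefixlen=32)
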
Feb 25 07:49:05 np0005629333 systemd[1]: Started Virtual Machine qemu-153-instance-00000079.
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.324 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[3c9b0a52-c509-4049-baef-f93960fb6394]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.338 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eddbd7a1-c74f-4291-8325-920cb1cbcbad]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.365 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[063d4145-fd52-4731-a42b-02e5a2499505]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:05 np0005629333 NetworkManager[49836]: <info>  [1772023745.3707] manager: (tapb302251f-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/526)
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.370 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[35536431-c627-43bf-85bb-018b4326af09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.398 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7a78e5a5-8a82-4264-9055-f6f1309989de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.402 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[63afdd50-911d-41e4-9129-5ec9217cf492]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:05 np0005629333 NetworkManager[49836]: <info>  [1772023745.4171] device (tapb302251f-a0): carrier: link connected
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.419 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c4ed1aaa-2581-4e3b-8372-e11734645c7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.430 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6be21d90-acc9-4d28-8eb3-404da71fd66c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb302251f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:a5:14'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571482, 'reachable_time': 15721, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353245, 'error': None, 'target': 'ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.445 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[36058356-abdd-4c5e-941b-e6051d1786b1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3c:a514'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 571482, 'tstamp': 571482}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353246, 'error': None, 'target': 'ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.458 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7c2599c3-4f99-470d-a0b3-5daab67df511]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb302251f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:a5:14'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571482, 'reachable_time': 15721, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 353247, 'error': None, 'target': 'ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.484 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[51881249-22ec-40e2-97ac-96a62a052dfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.521 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d997af6c-efbe-4670-a616-3ab0b7fc21f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.523 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb302251f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.523 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.524 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb302251f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.526 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:05 np0005629333 kernel: tapb302251f-a0: entered promiscuous mode
Feb 25 07:49:05 np0005629333 NetworkManager[49836]: <info>  [1772023745.5267] manager: (tapb302251f-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/527)
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.530 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb302251f-a0, col_values=(('external_ids', {'iface-id': '3be3f9aa-5947-437f-9936-240be99a8dd3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.531 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:05 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:05Z|01269|binding|INFO|Releasing lport 3be3f9aa-5947-437f-9936-240be99a8dd3 from this chassis (sb_readonly=0)
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.532 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.534 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b302251f-a239-4374-92d3-7686a49e9d67.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b302251f-a239-4374-92d3-7686a49e9d67.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.535 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[83706e33-3c1a-4f0e-9bc0-8a3088bde0b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.536 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-b302251f-a239-4374-92d3-7686a49e9d67
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/b302251f-a239-4374-92d3-7686a49e9d67.pid.haproxy
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID b302251f-a239-4374-92d3-7686a49e9d67
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
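
The haproxy_cfg dump above makes the proxy listen on 169.254.169.254:80 inside the namespace; the server address /var/lib/neutron/metadata_proxy begins with a slash, which haproxy interprets as a UNIX socket, and each request gets an X-OVN-Network-ID header so the metadata service can resolve the requesting network. A toy rendering of the instance-specific stanza with string.Template (neutron renders a fuller template in driver.py; this sketch only shows the substitution idea):

    from string import Template

    LISTEN = Template("""\
    listen listener
        bind ${bind_ip}:80
        server metadata ${socket_path}
        http-request add-header X-OVN-Network-ID ${network_id}
    """)

    print(LISTEN.substitute(
        bind_ip="169.254.169.254",
        socket_path="/var/lib/neutron/metadata_proxy",
        network_id="b302251f-a239-4374-92d3-7686a49e9d67"))
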
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.536 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:05.537 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67', 'env', 'PROCESS_TAG=haproxy-b302251f-a239-4374-92d3-7686a49e9d67', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b302251f-a239-4374-92d3-7686a49e9d67.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
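
The agent then launches haproxy inside the namespace through neutron-rootwrap. Stripped of the wrapper, the same effect is "ip netns exec <ns> haproxy -f <conf>"; a hedged sketch (requires root; PROCESS_TAG only exists so the agent can find the process again later):

    import subprocess

    def spawn_metadata_haproxy(namespace, conf_path, tag):
        """Run haproxy inside the ovnmeta- namespace, mirroring the
        rootwrapped command in the log above."""
        cmd = ["ip", "netns", "exec", namespace,
               "env", "PROCESS_TAG=" + tag,
               "haproxy", "-f", conf_path]
        # 'daemon' in the generated config makes haproxy fork, so a
        # zero exit only confirms the config parsed and the master
        # started (the "Loading success." NOTICE further down).
        subprocess.run(cmd, check=True)
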
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.714 244018 DEBUG nova.compute.manager [req-59f372fe-e7e0-47dc-8110-ac314632b4d3 req-ab375902-e050-4fd1-b8dd-c82bbd09c73d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received event network-vif-plugged-cda27370-858e-4443-819e-696576515c52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.715 244018 DEBUG oslo_concurrency.lockutils [req-59f372fe-e7e0-47dc-8110-ac314632b4d3 req-ab375902-e050-4fd1-b8dd-c82bbd09c73d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.716 244018 DEBUG oslo_concurrency.lockutils [req-59f372fe-e7e0-47dc-8110-ac314632b4d3 req-ab375902-e050-4fd1-b8dd-c82bbd09c73d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.716 244018 DEBUG oslo_concurrency.lockutils [req-59f372fe-e7e0-47dc-8110-ac314632b4d3 req-ab375902-e050-4fd1-b8dd-c82bbd09c73d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.716 244018 DEBUG nova.compute.manager [req-59f372fe-e7e0-47dc-8110-ac314632b4d3 req-ab375902-e050-4fd1-b8dd-c82bbd09c73d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Processing event network-vif-plugged-cda27370-858e-4443-819e-696576515c52 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.788 244018 DEBUG nova.compute.manager [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
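
The lines above are the os-vif plug handshake: the builder registers an expectation for network-vif-plugged, neutron reports the plug through the external-events API, and the wait completes in 0 seconds because the event arrived before the wait started. A stripped-down sketch of that expectation/pop pattern with a plain threading.Event (nova's real implementation is wait_for_instance_event / pop_instance_event and is eventlet-based; names below are stand-ins):

    import threading

    class InstanceEvents:
        """(instance_uuid, event_name) -> waiter; a stand-in for the
        structure guarded by the '...-events' lock in the log."""

        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = {}

        def prepare(self, instance_uuid, name):
            waiter = threading.Event()
            with self._lock:
                self._waiters[(instance_uuid, name)] = waiter
            return waiter

        def pop(self, instance_uuid, name):
            with self._lock:
                return self._waiters.pop((instance_uuid, name), None)

    events = InstanceEvents()
    w = events.prepare("37ce1876", "network-vif-plugged-cda27370")
    # neutron's callback path: pop the waiter and wake the builder.
    waiter = events.pop("37ce1876", "network-vif-plugged-cda27370")
    if waiter:
        waiter.set()
    w.wait(timeout=300)   # returns immediately: the event already fired
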
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.789 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023745.7873554, 37ce1876-2b57-4f84-800c-2a1b6eaa6943 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.789 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] VM Started (Lifecycle Event)#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.796 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.800 244018 INFO nova.virt.libvirt.driver [-] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Instance spawned successfully.#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.801 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.834 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.841 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.842 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.843 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.844 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.845 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.845 244018 DEBUG nova.virt.libvirt.driver [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
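
After the spawn, the driver pins the bus/model defaults it actually used (sata cdrom, virtio disk/video/vif, usb input, usbtablet pointer) so later hard reboots and migrations keep the same guest ABI. A sketch of the idea, under the assumption that the resolved defaults are stored in the instance's system metadata under image_* keys (that storage convention is an assumption here):

    # Defaults the driver resolved for this guest, per the lines above.
    resolved = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }

    def register_undefined_defaults(system_metadata, resolved):
        """Record a default only if the image never defined the
        property, so an explicit image setting always wins."""
        for prop, value in resolved.items():
            key = "image_" + prop          # assumed storage convention
            system_metadata.setdefault(key, value)
        return system_metadata

    meta = register_undefined_defaults({}, resolved)
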
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.854 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.888 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
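
The Started/Paused/Resumed burst around these lines is normal for a libvirt spawn: the guest is created paused, then resumed. Each lifecycle event triggers a power-state sync that deliberately skips while a task owns the instance, which is why the "pending task (spawning). Skip." line is informational rather than a problem. The gist of that guard as a sketch (the constants are stand-ins for nova's power_state values):

    # Stand-ins for nova.compute.power_state values.
    NOSTATE, RUNNING = 0, 1

    def sync_power_state(db_power_state, vm_power_state, task_state):
        """Return the action to take after a lifecycle event."""
        if task_state is not None:
            # A pending task (here: 'spawning') owns the instance;
            # syncing now would race with it, so skip.
            return "skip"
        if db_power_state != vm_power_state:
            return "update-db"
        return "noop"

    # The logged situation: DB says 0, VM says 1, task is 'spawning'.
    assert sync_power_state(NOSTATE, RUNNING, "spawning") == "skip"
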
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.889 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023745.7927842, 37ce1876-2b57-4f84-800c-2a1b6eaa6943 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.889 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:49:05 np0005629333 podman[353321]: 2026-02-25 12:49:05.916840102 +0000 UTC m=+0.047561844 container create 5556953bd7563a9eda464e8979f7b2d6d66d7f50ad788b8b64ed9efc2d9af704 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.933 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.937 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023745.796021, 37ce1876-2b57-4f84-800c-2a1b6eaa6943 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.937 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.949 244018 INFO nova.compute.manager [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Took 6.21 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.950 244018 DEBUG nova.compute.manager [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:49:05 np0005629333 systemd[1]: Started libpod-conmon-5556953bd7563a9eda464e8979f7b2d6d66d7f50ad788b8b64ed9efc2d9af704.scope.
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.971 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:49:05 np0005629333 nova_compute[244014]: 2026-02-25 12:49:05.977 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:49:05 np0005629333 podman[353321]: 2026-02-25 12:49:05.888230925 +0000 UTC m=+0.018952687 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:49:05 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:49:05 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51fe52d468e2be311accba8895564be948c5e431928c89066ff8445171f1806f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:49:06 np0005629333 podman[353321]: 2026-02-25 12:49:06.006189375 +0000 UTC m=+0.136911137 container init 5556953bd7563a9eda464e8979f7b2d6d66d7f50ad788b8b64ed9efc2d9af704 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 07:49:06 np0005629333 podman[353321]: 2026-02-25 12:49:06.010441065 +0000 UTC m=+0.141162817 container start 5556953bd7563a9eda464e8979f7b2d6d66d7f50ad788b8b64ed9efc2d9af704 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 07:49:06 np0005629333 neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67[353337]: [NOTICE]   (353342) : New worker (353344) forked
Feb 25 07:49:06 np0005629333 neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67[353337]: [NOTICE]   (353342) : Loading success.
Feb 25 07:49:06 np0005629333 nova_compute[244014]: 2026-02-25 12:49:06.036 244018 INFO nova.compute.manager [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Took 7.41 seconds to build instance.#033[00m
Feb 25 07:49:06 np0005629333 nova_compute[244014]: 2026-02-25 12:49:06.067 244018 DEBUG oslo_concurrency.lockutils [None req-482b0cee-f28c-4a51-84f0-30a1d4d8038f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:49:07 np0005629333 nova_compute[244014]: 2026-02-25 12:49:07.815 244018 DEBUG nova.compute.manager [req-ffe70cd6-bbfd-4d78-970d-d8ee43192fde req-50bb8f2c-2a14-4dfa-a3e7-672fb54addc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received event network-vif-plugged-cda27370-858e-4443-819e-696576515c52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:49:07 np0005629333 nova_compute[244014]: 2026-02-25 12:49:07.816 244018 DEBUG oslo_concurrency.lockutils [req-ffe70cd6-bbfd-4d78-970d-d8ee43192fde req-50bb8f2c-2a14-4dfa-a3e7-672fb54addc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:07 np0005629333 nova_compute[244014]: 2026-02-25 12:49:07.816 244018 DEBUG oslo_concurrency.lockutils [req-ffe70cd6-bbfd-4d78-970d-d8ee43192fde req-50bb8f2c-2a14-4dfa-a3e7-672fb54addc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:07 np0005629333 nova_compute[244014]: 2026-02-25 12:49:07.816 244018 DEBUG oslo_concurrency.lockutils [req-ffe70cd6-bbfd-4d78-970d-d8ee43192fde req-50bb8f2c-2a14-4dfa-a3e7-672fb54addc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:07 np0005629333 nova_compute[244014]: 2026-02-25 12:49:07.816 244018 DEBUG nova.compute.manager [req-ffe70cd6-bbfd-4d78-970d-d8ee43192fde req-50bb8f2c-2a14-4dfa-a3e7-672fb54addc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] No waiting events found dispatching network-vif-plugged-cda27370-858e-4443-819e-696576515c52 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:49:07 np0005629333 nova_compute[244014]: 2026-02-25 12:49:07.817 244018 WARNING nova.compute.manager [req-ffe70cd6-bbfd-4d78-970d-d8ee43192fde req-50bb8f2c-2a14-4dfa-a3e7-672fb54addc3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received unexpected event network-vif-plugged-cda27370-858e-4443-819e-696576515c52 for instance with vm_state active and task_state None.#033[00m
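
The second network-vif-plugged delivery at 12:49:07 lands after the build finished, so no waiter is registered any more; the manager finds nothing to pop, logs the WARNING above, and drops the event. Reusing the InstanceEvents sketch from earlier, the dispatch side looks roughly like this:

    import logging

    log = logging.getLogger(__name__)

    def dispatch_external_event(events, instance_uuid, name):
        """Deliver an external event to a waiter if one is registered,
        otherwise warn and drop it, which is what produced the
        WARNING line above."""
        waiter = events.pop(instance_uuid, name)
        if waiter is None:
            log.warning("Received unexpected event %s for instance %s",
                        name, instance_uuid)
            return False
        waiter.set()
        return True
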
Feb 25 07:49:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 07:49:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:49:09 np0005629333 nova_compute[244014]: 2026-02-25 12:49:09.312 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:09 np0005629333 NetworkManager[49836]: <info>  [1772023749.6037] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/528)
Feb 25 07:49:09 np0005629333 NetworkManager[49836]: <info>  [1772023749.6046] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/529)
Feb 25 07:49:09 np0005629333 nova_compute[244014]: 2026-02-25 12:49:09.604 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:09 np0005629333 nova_compute[244014]: 2026-02-25 12:49:09.642 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:09Z|01270|binding|INFO|Releasing lport 3be3f9aa-5947-437f-9936-240be99a8dd3 from this chassis (sb_readonly=0)
Feb 25 07:49:09 np0005629333 nova_compute[244014]: 2026-02-25 12:49:09.651 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:10 np0005629333 nova_compute[244014]: 2026-02-25 12:49:10.150 244018 DEBUG nova.compute.manager [req-146016df-fc66-43d4-88fc-4ce3a3a360d6 req-2707e6bb-9d32-46f0-9de7-27b2c5afaf59 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received event network-changed-cda27370-858e-4443-819e-696576515c52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:49:10 np0005629333 nova_compute[244014]: 2026-02-25 12:49:10.151 244018 DEBUG nova.compute.manager [req-146016df-fc66-43d4-88fc-4ce3a3a360d6 req-2707e6bb-9d32-46f0-9de7-27b2c5afaf59 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Refreshing instance network info cache due to event network-changed-cda27370-858e-4443-819e-696576515c52. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:49:10 np0005629333 nova_compute[244014]: 2026-02-25 12:49:10.151 244018 DEBUG oslo_concurrency.lockutils [req-146016df-fc66-43d4-88fc-4ce3a3a360d6 req-2707e6bb-9d32-46f0-9de7-27b2c5afaf59 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:49:10 np0005629333 nova_compute[244014]: 2026-02-25 12:49:10.152 244018 DEBUG oslo_concurrency.lockutils [req-146016df-fc66-43d4-88fc-4ce3a3a360d6 req-2707e6bb-9d32-46f0-9de7-27b2c5afaf59 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:49:10 np0005629333 nova_compute[244014]: 2026-02-25 12:49:10.152 244018 DEBUG nova.network.neutron [req-146016df-fc66-43d4-88fc-4ce3a3a360d6 req-2707e6bb-9d32-46f0-9de7-27b2c5afaf59 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Refreshing network info cache for port cda27370-858e-4443-819e-696576515c52 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:49:10 np0005629333 nova_compute[244014]: 2026-02-25 12:49:10.303 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 07:49:11 np0005629333 nova_compute[244014]: 2026-02-25 12:49:11.959 244018 DEBUG nova.network.neutron [req-146016df-fc66-43d4-88fc-4ce3a3a360d6 req-2707e6bb-9d32-46f0-9de7-27b2c5afaf59 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Updated VIF entry in instance network info cache for port cda27370-858e-4443-819e-696576515c52. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:49:11 np0005629333 nova_compute[244014]: 2026-02-25 12:49:11.960 244018 DEBUG nova.network.neutron [req-146016df-fc66-43d4-88fc-4ce3a3a360d6 req-2707e6bb-9d32-46f0-9de7-27b2c5afaf59 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Updating instance_info_cache with network_info: [{"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:49:11 np0005629333 nova_compute[244014]: 2026-02-25 12:49:11.987 244018 DEBUG oslo_concurrency.lockutils [req-146016df-fc66-43d4-88fc-4ce3a3a360d6 req-2707e6bb-9d32-46f0-9de7-27b2c5afaf59 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
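
The refreshed cache entry above is the full network_info model for the port: fixed IP 10.100.0.7 on 10.100.0.0/28, floating IP 192.168.122.216, MTU 1442 over a tunneled network bound by the ovn driver. A small helper to pull the address pairs out of such a structure (the dict layout is exactly the one logged above, trimmed here to the fields the helper touches):

    def extract_addresses(network_info):
        """Yield (fixed_ip, [floating_ips]) from a network_info cache
        entry shaped like the one in the log."""
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    yield (ip["address"],
                           [f["address"] for f in ip.get("floating_ips", [])])

    cached = [{"network": {"subnets": [{"ips": [
        {"address": "10.100.0.7",
         "floating_ips": [{"address": "192.168.122.216"}]}]}]}}]
    print(list(extract_addresses(cached)))
    # [('10.100.0.7', ['192.168.122.216'])]
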
Feb 25 07:49:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2059: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 07:49:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 07:49:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3600.0 total, 600.0 interval
Cumulative writes: 9637 writes, 44K keys, 9637 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
Cumulative WAL: 9637 writes, 9637 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1511 writes, 7510 keys, 1511 commit groups, 1.0 writes per commit group, ingest: 9.35 MB, 0.02 MB/s
Interval WAL: 1511 writes, 1511 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     52.3      0.98              0.14        29    0.034       0      0       0.0       0.0
  L6      1/0    7.91 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.5    105.1     88.3      2.59              0.64        28    0.093    158K    15K       0.0       0.0
 Sum      1/0    7.91 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.5     76.2     78.4      3.58              0.78        57    0.063    158K    15K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.6     99.1     97.7      0.90              0.24        16    0.057     55K   4095       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    105.1     88.3      2.59              0.64        28    0.093    158K    15K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     52.5      0.98              0.14        28    0.035       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 3600.0 total, 600.0 interval
Flush(GB): cumulative 0.050, interval 0.010
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.27 GB write, 0.08 MB/s write, 0.27 GB read, 0.08 MB/s read, 3.6 seconds
Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.09 GB read, 0.15 MB/s read, 0.9 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561a1af858d0#2 capacity: 304.00 MB usage: 30.33 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000275 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(1917,29.15 MB,9.58745%) FilterBlock(58,441.98 KB,0.141982%) IndexBlock(58,771.70 KB,0.2479%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Feb 25 07:49:12 np0005629333 nova_compute[244014]: 2026-02-25 12:49:12.969 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:12 np0005629333 nova_compute[244014]: 2026-02-25 12:49:12.970 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:12 np0005629333 nova_compute[244014]: 2026-02-25 12:49:12.988 244018 DEBUG nova.compute.manager [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:49:13 np0005629333 nova_compute[244014]: 2026-02-25 12:49:13.058 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:13 np0005629333 nova_compute[244014]: 2026-02-25 12:49:13.059 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:13 np0005629333 nova_compute[244014]: 2026-02-25 12:49:13.067 244018 DEBUG nova.virt.hardware [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:49:13 np0005629333 nova_compute[244014]: 2026-02-25 12:49:13.068 244018 INFO nova.compute.claims [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:49:13 np0005629333 nova_compute[244014]: 2026-02-25 12:49:13.199 244018 DEBUG oslo_concurrency.processutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:49:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:49:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:49:13 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1446107335' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:49:13 np0005629333 nova_compute[244014]: 2026-02-25 12:49:13.725 244018 DEBUG oslo_concurrency.processutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
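Nova's RBD image backend gathers pool capacity by shelling out to the ceph CLI (the "Running cmd" / "returned: 0" pair above) rather than holding a librados connection open. A stand-alone sketch of the same probe, assuming the ceph CLI and the client.openstack keyring from the log, with JSON keys as in recent Ceph releases:

    import json
    import subprocess

    # The same command the driver logs above: cluster usage as JSON,
    # authenticated as the client.openstack user.
    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)
    print(stats["stats"]["total_bytes"], "bytes total in the cluster")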
Feb 25 07:49:13 np0005629333 nova_compute[244014]: 2026-02-25 12:49:13.732 244018 DEBUG nova.compute.provider_tree [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:49:13 np0005629333 nova_compute[244014]: 2026-02-25 12:49:13.978 244018 DEBUG nova.scheduler.client.report [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
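The inventory dict in the line above is what the Placement service uses for admission control: for each resource class the allocatable amount is (total - reserved) * allocation_ratio. Worked through for this host (figures copied from the log line):

    # Effective capacity Placement derives from the logged inventory:
    # allocatable = (total - reserved) * allocation_ratio
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {usable:g} allocatable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2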
Feb 25 07:49:14 np0005629333 nova_compute[244014]: 2026-02-25 12:49:14.009 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.950s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:14 np0005629333 nova_compute[244014]: 2026-02-25 12:49:14.010 244018 DEBUG nova.compute.manager [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:49:14 np0005629333 nova_compute[244014]: 2026-02-25 12:49:14.116 244018 DEBUG nova.compute.manager [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:49:14 np0005629333 nova_compute[244014]: 2026-02-25 12:49:14.116 244018 DEBUG nova.network.neutron [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:49:14 np0005629333 nova_compute[244014]: 2026-02-25 12:49:14.135 244018 INFO nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:49:14 np0005629333 nova_compute[244014]: 2026-02-25 12:49:14.152 244018 DEBUG nova.compute.manager [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:49:14 np0005629333 nova_compute[244014]: 2026-02-25 12:49:14.272 244018 DEBUG nova.compute.manager [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:49:14 np0005629333 nova_compute[244014]: 2026-02-25 12:49:14.274 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:49:14 np0005629333 nova_compute[244014]: 2026-02-25 12:49:14.275 244018 INFO nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Creating image(s)#033[00m
Feb 25 07:49:14 np0005629333 nova_compute[244014]: 2026-02-25 12:49:14.309 244018 DEBUG nova.storage.rbd_utils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image b6501baa-8bc9-4724-b4c1-8ff43faf517c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:49:14 np0005629333 nova_compute[244014]: 2026-02-25 12:49:14.343 244018 DEBUG nova.storage.rbd_utils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image b6501baa-8bc9-4724-b4c1-8ff43faf517c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:49:14 np0005629333 nova_compute[244014]: 2026-02-25 12:49:14.372 244018 DEBUG nova.storage.rbd_utils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image b6501baa-8bc9-4724-b4c1-8ff43faf517c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:49:14 np0005629333 nova_compute[244014]: 2026-02-25 12:49:14.377 244018 DEBUG oslo_concurrency.processutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:49:14 np0005629333 nova_compute[244014]: 2026-02-25 12:49:14.405 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:14 np0005629333 nova_compute[244014]: 2026-02-25 12:49:14.440 244018 DEBUG oslo_concurrency.processutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
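Before reusing the cached base image, nova probes it with qemu-img wrapped in oslo's prlimit helper (--as=1073741824 caps the address space at 1 GiB, --cpu=30 caps CPU seconds) so a malformed image cannot wedge the agent. Stripped of the limits, the probe is just the following (path taken from the log; any raw or qcow2 file works):

    import json
    import subprocess

    path = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
    info = json.loads(subprocess.run(
        ["qemu-img", "info", path, "--force-share", "--output=json"],
        check=True, capture_output=True, text=True,
    ).stdout)
    # "format" and "virtual-size" are standard qemu-img JSON fields.
    print(info["format"], info["virtual-size"])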
Feb 25 07:49:14 np0005629333 nova_compute[244014]: 2026-02-25 12:49:14.441 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:14 np0005629333 nova_compute[244014]: 2026-02-25 12:49:14.442 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:14 np0005629333 nova_compute[244014]: 2026-02-25 12:49:14.443 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
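The acquire/release pair around fetch_func_sync is oslo.concurrency's named-lock idiom: every thread fetching the same cached base image serializes on a lock named after the image hash, so the download happens at most once. A minimal sketch of the same pattern (lock name taken from the log; external=True would additionally take a file lock shared across processes):

    from oslo_concurrency import lockutils

    # All callers touching the same base-image cache entry serialize here.
    @lockutils.synchronized("a63dc6dbb387022d47a8ca49bddcc4af2508a4d6")
    def fetch_base_image():
        # download/verify the image once; later callers find it cached
        pass

    fetch_base_image()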
Feb 25 07:49:14 np0005629333 nova_compute[244014]: 2026-02-25 12:49:14.474 244018 DEBUG nova.storage.rbd_utils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image b6501baa-8bc9-4724-b4c1-8ff43faf517c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:49:14 np0005629333 nova_compute[244014]: 2026-02-25 12:49:14.479 244018 DEBUG oslo_concurrency.processutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 b6501baa-8bc9-4724-b4c1-8ff43faf517c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:49:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 910 KiB/s wr, 86 op/s
Feb 25 07:49:14 np0005629333 nova_compute[244014]: 2026-02-25 12:49:14.739 244018 DEBUG oslo_concurrency.processutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 b6501baa-8bc9-4724-b4c1-8ff43faf517c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.260s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:49:14 np0005629333 nova_compute[244014]: 2026-02-25 12:49:14.795 244018 DEBUG nova.storage.rbd_utils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image b6501baa-8bc9-4724-b4c1-8ff43faf517c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
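With RBD-backed ephemeral disks the root disk never materializes locally: the cached base file is imported straight into the vms pool, then the image is grown to the flavor's root size (1073741824 bytes = 1 GiB here; nova itself does the resize through the librbd binding in rbd_utils). A sketch of the two steps, with the import command copied from the log and the resize shown via its CLI equivalent:

    import subprocess

    base = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
    disk = "b6501baa-8bc9-4724-b4c1-8ff43faf517c_disk"
    auth = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    # Step 1: import the flat base file as a format-2 RBD image (as logged).
    subprocess.run(["rbd", "import", "--pool", "vms", base, disk,
                    "--image-format=2", *auth], check=True)
    # Step 2: grow it to 1 GiB; 'rbd resize' takes MiB by default.
    subprocess.run(["rbd", "resize", "--pool", "vms", "--size", "1024",
                    disk, *auth], check=True)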
Feb 25 07:49:14 np0005629333 nova_compute[244014]: 2026-02-25 12:49:14.886 244018 DEBUG nova.objects.instance [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid b6501baa-8bc9-4724-b4c1-8ff43faf517c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:49:14 np0005629333 nova_compute[244014]: 2026-02-25 12:49:14.913 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:49:14 np0005629333 nova_compute[244014]: 2026-02-25 12:49:14.914 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Ensure instance console log exists: /var/lib/nova/instances/b6501baa-8bc9-4724-b4c1-8ff43faf517c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:49:14 np0005629333 nova_compute[244014]: 2026-02-25 12:49:14.914 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:14 np0005629333 nova_compute[244014]: 2026-02-25 12:49:14.915 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:14 np0005629333 nova_compute[244014]: 2026-02-25 12:49:14.915 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:14 np0005629333 nova_compute[244014]: 2026-02-25 12:49:14.971 244018 DEBUG nova.policy [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
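The failed policy check is the expected outcome for an unprivileged user: network:attach_external_network is evaluated by oslo.policy against exactly the credential dict shown, and these credentials carry only the reader and member roles. A toy reproduction of that decision (the role:admin rule string is a stand-in for illustration, not nova's actual default):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    # Hypothetical default: only admins may attach to external networks.
    enforcer.register_default(policy.RuleDefault(
        "network:attach_external_network", "role:admin"))

    creds = {"roles": ["reader", "member"],
             "project_id": "25fa1e8dd32c483686f869da2604f2b1"}
    print(enforcer.enforce("network:attach_external_network", {}, creds))
    # -> False, mirroring the "Policy check ... failed" line above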
Feb 25 07:49:15 np0005629333 nova_compute[244014]: 2026-02-25 12:49:15.303 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:16 np0005629333 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 07:49:16 np0005629333 nova_compute[244014]: 2026-02-25 12:49:16.330 244018 DEBUG nova.network.neutron [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Successfully created port: dee62982-d46c-4a81-b4ad-8154d7cfc7af _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:49:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 07:49:17 np0005629333 nova_compute[244014]: 2026-02-25 12:49:17.313 244018 DEBUG nova.network.neutron [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Successfully created port: 344b59b1-93f2-4e23-8c28-835e5d954630 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:49:17 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:17Z|00147|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:50:74:9c 10.100.0.7
Feb 25 07:49:17 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:17Z|00148|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:50:74:9c 10.100.0.7
Feb 25 07:49:18 np0005629333 nova_compute[244014]: 2026-02-25 12:49:18.294 244018 DEBUG nova.network.neutron [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Successfully updated port: dee62982-d46c-4a81-b4ad-8154d7cfc7af _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:49:18 np0005629333 nova_compute[244014]: 2026-02-25 12:49:18.500 244018 DEBUG nova.compute.manager [req-c6567ce0-bda7-4389-809f-c1361ee7e18e req-dadb75a5-465c-4633-bdb0-c1e29648ee74 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-changed-dee62982-d46c-4a81-b4ad-8154d7cfc7af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:49:18 np0005629333 nova_compute[244014]: 2026-02-25 12:49:18.501 244018 DEBUG nova.compute.manager [req-c6567ce0-bda7-4389-809f-c1361ee7e18e req-dadb75a5-465c-4633-bdb0-c1e29648ee74 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Refreshing instance network info cache due to event network-changed-dee62982-d46c-4a81-b4ad-8154d7cfc7af. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:49:18 np0005629333 nova_compute[244014]: 2026-02-25 12:49:18.502 244018 DEBUG oslo_concurrency.lockutils [req-c6567ce0-bda7-4389-809f-c1361ee7e18e req-dadb75a5-465c-4633-bdb0-c1e29648ee74 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:49:18 np0005629333 nova_compute[244014]: 2026-02-25 12:49:18.502 244018 DEBUG oslo_concurrency.lockutils [req-c6567ce0-bda7-4389-809f-c1361ee7e18e req-dadb75a5-465c-4633-bdb0-c1e29648ee74 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:49:18 np0005629333 nova_compute[244014]: 2026-02-25 12:49:18.502 244018 DEBUG nova.network.neutron [req-c6567ce0-bda7-4389-809f-c1361ee7e18e req-dadb75a5-465c-4633-bdb0-c1e29648ee74 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Refreshing network info cache for port dee62982-d46c-4a81-b4ad-8154d7cfc7af _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:49:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 305 active+clean; 272 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 159 op/s
Feb 25 07:49:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
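The monitor's recurring _set_new_cache_sizes lines are its memory autotuner splitting one cache budget across what appear to be the incremental-osdmap, full-osdmap, and RocksDB (kv) caches; the three allocations should account for nearly all of cache_size. Checking the logged figures:

    # The mon cache tuner's three allocations vs. its overall budget
    # (values copied from the log line).
    cache_size = 1020054731
    parts = {"inc": 343932928, "full": 348127232, "kv": 318767104}
    total = sum(parts.values())
    print(total, f"= {100 * total / cache_size:.1f}% of cache_size")
    # -> 1010827264 = 99.1% of cache_size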
Feb 25 07:49:18 np0005629333 nova_compute[244014]: 2026-02-25 12:49:18.751 244018 DEBUG nova.network.neutron [req-c6567ce0-bda7-4389-809f-c1361ee7e18e req-dadb75a5-465c-4633-bdb0-c1e29648ee74 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:49:19 np0005629333 nova_compute[244014]: 2026-02-25 12:49:19.407 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:19 np0005629333 nova_compute[244014]: 2026-02-25 12:49:19.568 244018 DEBUG nova.network.neutron [req-c6567ce0-bda7-4389-809f-c1361ee7e18e req-dadb75a5-465c-4633-bdb0-c1e29648ee74 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:49:19 np0005629333 nova_compute[244014]: 2026-02-25 12:49:19.586 244018 DEBUG oslo_concurrency.lockutils [req-c6567ce0-bda7-4389-809f-c1361ee7e18e req-dadb75a5-465c-4633-bdb0-c1e29648ee74 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:49:19 np0005629333 nova_compute[244014]: 2026-02-25 12:49:19.588 244018 DEBUG nova.network.neutron [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Successfully updated port: 344b59b1-93f2-4e23-8c28-835e5d954630 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:49:19 np0005629333 nova_compute[244014]: 2026-02-25 12:49:19.601 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:49:19 np0005629333 nova_compute[244014]: 2026-02-25 12:49:19.601 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:49:19 np0005629333 nova_compute[244014]: 2026-02-25 12:49:19.602 244018 DEBUG nova.network.neutron [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:49:19 np0005629333 nova_compute[244014]: 2026-02-25 12:49:19.834 244018 DEBUG nova.network.neutron [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:49:19 np0005629333 systemd[1]: Starting dnf makecache...
Feb 25 07:49:20 np0005629333 nova_compute[244014]: 2026-02-25 12:49:20.304 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:20 np0005629333 dnf[353569]: Metadata cache refreshed recently.
Feb 25 07:49:20 np0005629333 systemd[1]: dnf-makecache.service: Deactivated successfully.
Feb 25 07:49:20 np0005629333 systemd[1]: Finished dnf makecache.
Feb 25 07:49:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:49:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:49:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:49:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:49:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:49:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:49:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:49:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:49:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2063: 305 pgs: 305 active+clean; 272 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 3.9 MiB/s wr, 86 op/s
Feb 25 07:49:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:49:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:49:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:49:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:49:20 np0005629333 nova_compute[244014]: 2026-02-25 12:49:20.599 244018 DEBUG nova.compute.manager [req-b78e4e41-55b7-45f9-aaea-268c290f0314 req-15de43a3-1df5-4732-9e98-eb86fe94bdeb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-changed-344b59b1-93f2-4e23-8c28-835e5d954630 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:49:20 np0005629333 nova_compute[244014]: 2026-02-25 12:49:20.600 244018 DEBUG nova.compute.manager [req-b78e4e41-55b7-45f9-aaea-268c290f0314 req-15de43a3-1df5-4732-9e98-eb86fe94bdeb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Refreshing instance network info cache due to event network-changed-344b59b1-93f2-4e23-8c28-835e5d954630. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:49:20 np0005629333 nova_compute[244014]: 2026-02-25 12:49:20.601 244018 DEBUG oslo_concurrency.lockutils [req-b78e4e41-55b7-45f9-aaea-268c290f0314 req-15de43a3-1df5-4732-9e98-eb86fe94bdeb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:49:20 np0005629333 podman[353688]: 2026-02-25 12:49:20.969519138 +0000 UTC m=+0.058194154 container create 8841414be1dbcb763c48ebc62ffbd8fff2f6ef129b25bb67a414f65fc7b97759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_clarke, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 07:49:21 np0005629333 systemd[1]: Started libpod-conmon-8841414be1dbcb763c48ebc62ffbd8fff2f6ef129b25bb67a414f65fc7b97759.scope.
Feb 25 07:49:21 np0005629333 podman[353688]: 2026-02-25 12:49:20.947120326 +0000 UTC m=+0.035795422 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:49:21 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:49:21 np0005629333 podman[353688]: 2026-02-25 12:49:21.073105843 +0000 UTC m=+0.161780889 container init 8841414be1dbcb763c48ebc62ffbd8fff2f6ef129b25bb67a414f65fc7b97759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 07:49:21 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:49:21 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:49:21 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:49:21 np0005629333 podman[353688]: 2026-02-25 12:49:21.08149183 +0000 UTC m=+0.170166846 container start 8841414be1dbcb763c48ebc62ffbd8fff2f6ef129b25bb67a414f65fc7b97759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:49:21 np0005629333 podman[353688]: 2026-02-25 12:49:21.084956248 +0000 UTC m=+0.173631264 container attach 8841414be1dbcb763c48ebc62ffbd8fff2f6ef129b25bb67a414f65fc7b97759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_clarke, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:49:21 np0005629333 competent_clarke[353704]: 167 167
Feb 25 07:49:21 np0005629333 systemd[1]: libpod-8841414be1dbcb763c48ebc62ffbd8fff2f6ef129b25bb67a414f65fc7b97759.scope: Deactivated successfully.
Feb 25 07:49:21 np0005629333 podman[353688]: 2026-02-25 12:49:21.088155878 +0000 UTC m=+0.176830884 container died 8841414be1dbcb763c48ebc62ffbd8fff2f6ef129b25bb67a414f65fc7b97759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True)
Feb 25 07:49:21 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7ca1571c8437861220340b29e14cd90a2cf279110985ca7d90b8e7baa09995b8-merged.mount: Deactivated successfully.
Feb 25 07:49:21 np0005629333 podman[353688]: 2026-02-25 12:49:21.132604133 +0000 UTC m=+0.221279139 container remove 8841414be1dbcb763c48ebc62ffbd8fff2f6ef129b25bb67a414f65fc7b97759 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 07:49:21 np0005629333 systemd[1]: libpod-conmon-8841414be1dbcb763c48ebc62ffbd8fff2f6ef129b25bb67a414f65fc7b97759.scope: Deactivated successfully.
Feb 25 07:49:21 np0005629333 podman[353728]: 2026-02-25 12:49:21.280413217 +0000 UTC m=+0.048969944 container create 310da7678af840f3aae0a791a904fc437b13e1cf6cbbe6f25c1992038415f823 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 07:49:21 np0005629333 systemd[1]: Started libpod-conmon-310da7678af840f3aae0a791a904fc437b13e1cf6cbbe6f25c1992038415f823.scope.
Feb 25 07:49:21 np0005629333 podman[353728]: 2026-02-25 12:49:21.258712124 +0000 UTC m=+0.027268861 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:49:21 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:49:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3e22b877e5f480e04792d18fc6e2ed7407b82574717911793e40168dc3a2d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:49:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3e22b877e5f480e04792d18fc6e2ed7407b82574717911793e40168dc3a2d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:49:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3e22b877e5f480e04792d18fc6e2ed7407b82574717911793e40168dc3a2d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:49:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3e22b877e5f480e04792d18fc6e2ed7407b82574717911793e40168dc3a2d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:49:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3e22b877e5f480e04792d18fc6e2ed7407b82574717911793e40168dc3a2d8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:49:21 np0005629333 podman[353728]: 2026-02-25 12:49:21.390804274 +0000 UTC m=+0.159360981 container init 310da7678af840f3aae0a791a904fc437b13e1cf6cbbe6f25c1992038415f823 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wing, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:49:21 np0005629333 podman[353728]: 2026-02-25 12:49:21.406842637 +0000 UTC m=+0.175399364 container start 310da7678af840f3aae0a791a904fc437b13e1cf6cbbe6f25c1992038415f823 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wing, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Feb 25 07:49:21 np0005629333 podman[353728]: 2026-02-25 12:49:21.412580789 +0000 UTC m=+0.181137516 container attach 310da7678af840f3aae0a791a904fc437b13e1cf6cbbe6f25c1992038415f823 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 07:49:21 np0005629333 inspiring_wing[353745]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:49:21 np0005629333 inspiring_wing[353745]: --> All data devices are unavailable
Feb 25 07:49:21 np0005629333 systemd[1]: libpod-310da7678af840f3aae0a791a904fc437b13e1cf6cbbe6f25c1992038415f823.scope: Deactivated successfully.
Feb 25 07:49:21 np0005629333 podman[353765]: 2026-02-25 12:49:21.893448067 +0000 UTC m=+0.029376181 container died 310da7678af840f3aae0a791a904fc437b13e1cf6cbbe6f25c1992038415f823 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Feb 25 07:49:21 np0005629333 systemd[1]: var-lib-containers-storage-overlay-3c3e22b877e5f480e04792d18fc6e2ed7407b82574717911793e40168dc3a2d8-merged.mount: Deactivated successfully.
Feb 25 07:49:21 np0005629333 podman[353765]: 2026-02-25 12:49:21.941628817 +0000 UTC m=+0.077556921 container remove 310da7678af840f3aae0a791a904fc437b13e1cf6cbbe6f25c1992038415f823 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wing, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:49:21 np0005629333 systemd[1]: libpod-conmon-310da7678af840f3aae0a791a904fc437b13e1cf6cbbe6f25c1992038415f823.scope: Deactivated successfully.
Feb 25 07:49:22 np0005629333 podman[353842]: 2026-02-25 12:49:22.423517314 +0000 UTC m=+0.046396561 container create 509a74230eba3fac9260c00f04838884526f96ba164d63fb07c9368474600a85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 07:49:22 np0005629333 systemd[1]: Started libpod-conmon-509a74230eba3fac9260c00f04838884526f96ba164d63fb07c9368474600a85.scope.
Feb 25 07:49:22 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:49:22 np0005629333 podman[353842]: 2026-02-25 12:49:22.407524303 +0000 UTC m=+0.030403560 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:49:22 np0005629333 podman[353842]: 2026-02-25 12:49:22.513177646 +0000 UTC m=+0.136056953 container init 509a74230eba3fac9260c00f04838884526f96ba164d63fb07c9368474600a85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 07:49:22 np0005629333 podman[353842]: 2026-02-25 12:49:22.518409124 +0000 UTC m=+0.141288361 container start 509a74230eba3fac9260c00f04838884526f96ba164d63fb07c9368474600a85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chaum, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 07:49:22 np0005629333 podman[353842]: 2026-02-25 12:49:22.522139179 +0000 UTC m=+0.145018416 container attach 509a74230eba3fac9260c00f04838884526f96ba164d63fb07c9368474600a85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chaum, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:49:22 np0005629333 adoring_chaum[353858]: 167 167
Feb 25 07:49:22 np0005629333 systemd[1]: libpod-509a74230eba3fac9260c00f04838884526f96ba164d63fb07c9368474600a85.scope: Deactivated successfully.
Feb 25 07:49:22 np0005629333 podman[353842]: 2026-02-25 12:49:22.523464617 +0000 UTC m=+0.146343844 container died 509a74230eba3fac9260c00f04838884526f96ba164d63fb07c9368474600a85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 07:49:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Feb 25 07:49:22 np0005629333 systemd[1]: var-lib-containers-storage-overlay-619c316696d0c687b52a390d8172a0469116e0a1d8e45e3ab53f847f3904ae93-merged.mount: Deactivated successfully.
Feb 25 07:49:22 np0005629333 podman[353842]: 2026-02-25 12:49:22.559787782 +0000 UTC m=+0.182667009 container remove 509a74230eba3fac9260c00f04838884526f96ba164d63fb07c9368474600a85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 07:49:22 np0005629333 systemd[1]: libpod-conmon-509a74230eba3fac9260c00f04838884526f96ba164d63fb07c9368474600a85.scope: Deactivated successfully.
Feb 25 07:49:22 np0005629333 podman[353882]: 2026-02-25 12:49:22.71092097 +0000 UTC m=+0.042794479 container create 8e031ee93dbeffae9055a67795fb35bdac42c2520c65d99b9c7f3750514f166f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 07:49:22 np0005629333 systemd[1]: Started libpod-conmon-8e031ee93dbeffae9055a67795fb35bdac42c2520c65d99b9c7f3750514f166f.scope.
Feb 25 07:49:22 np0005629333 podman[353882]: 2026-02-25 12:49:22.692307224 +0000 UTC m=+0.024180823 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:49:22 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:49:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7756a809f24b9b1861f720fc93e27e4178bc62c268e843c2fa3e6d730b686b90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:49:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7756a809f24b9b1861f720fc93e27e4178bc62c268e843c2fa3e6d730b686b90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:49:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7756a809f24b9b1861f720fc93e27e4178bc62c268e843c2fa3e6d730b686b90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:49:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7756a809f24b9b1861f720fc93e27e4178bc62c268e843c2fa3e6d730b686b90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:49:22 np0005629333 podman[353882]: 2026-02-25 12:49:22.809793402 +0000 UTC m=+0.141666911 container init 8e031ee93dbeffae9055a67795fb35bdac42c2520c65d99b9c7f3750514f166f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_moore, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 07:49:22 np0005629333 podman[353882]: 2026-02-25 12:49:22.814727641 +0000 UTC m=+0.146601180 container start 8e031ee93dbeffae9055a67795fb35bdac42c2520c65d99b9c7f3750514f166f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_moore, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:49:22 np0005629333 podman[353882]: 2026-02-25 12:49:22.818884348 +0000 UTC m=+0.150757877 container attach 8e031ee93dbeffae9055a67795fb35bdac42c2520c65d99b9c7f3750514f166f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
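The JSON that follows is ceph-volume lvm list output from the thirsty_moore helper; it carries each OSD's metadata twice, once as the flat comma-separated lv_tags string that LVM actually stores, and once pre-parsed under "tags". The flat form decodes mechanically, as in this sketch (tag string abbreviated from the listing below; the naive split assumes no commas inside values, which holds for these tags):

    # Decode an LVM tag string "k1=v1,k2=v2,..." into the same mapping
    # ceph-volume prints under "tags".
    lv_tags = ("ceph.block_device=/dev/ceph_vg0/ceph_lv0,"
               "ceph.cluster_name=ceph,ceph.osd_id=0,ceph.type=block")
    tags = dict(item.split("=", 1) for item in lv_tags.split(","))
    print(tags["ceph.osd_id"], tags["ceph.type"])  # -> 0 block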
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]: {
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:    "0": [
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:        {
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "devices": [
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "/dev/loop3"
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            ],
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "lv_name": "ceph_lv0",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "lv_size": "21470642176",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "name": "ceph_lv0",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "tags": {
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.cluster_name": "ceph",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.crush_device_class": "",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.encrypted": "0",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.objectstore": "bluestore",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.osd_id": "0",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.type": "block",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.vdo": "0",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.with_tpm": "0"
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            },
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "type": "block",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "vg_name": "ceph_vg0"
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:        }
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:    ],
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:    "1": [
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:        {
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "devices": [
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "/dev/loop4"
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            ],
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "lv_name": "ceph_lv1",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "lv_size": "21470642176",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "name": "ceph_lv1",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "tags": {
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.cluster_name": "ceph",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.crush_device_class": "",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.encrypted": "0",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.objectstore": "bluestore",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.osd_id": "1",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.type": "block",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.vdo": "0",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.with_tpm": "0"
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            },
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "type": "block",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "vg_name": "ceph_vg1"
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:        }
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:    ],
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:    "2": [
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:        {
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "devices": [
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "/dev/loop5"
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            ],
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "lv_name": "ceph_lv2",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "lv_size": "21470642176",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "name": "ceph_lv2",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "tags": {
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.cluster_name": "ceph",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.crush_device_class": "",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.encrypted": "0",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.objectstore": "bluestore",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.osd_id": "2",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.type": "block",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.vdo": "0",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:                "ceph.with_tpm": "0"
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            },
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "type": "block",
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:            "vg_name": "ceph_vg2"
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:        }
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]:    ]
Feb 25 07:49:23 np0005629333 thirsty_moore[353898]: }
Feb 25 07:49:23 np0005629333 systemd[1]: libpod-8e031ee93dbeffae9055a67795fb35bdac42c2520c65d99b9c7f3750514f166f.scope: Deactivated successfully.
Feb 25 07:49:23 np0005629333 podman[353882]: 2026-02-25 12:49:23.12066403 +0000 UTC m=+0.452537559 container died 8e031ee93dbeffae9055a67795fb35bdac42c2520c65d99b9c7f3750514f166f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_moore, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:49:23 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7756a809f24b9b1861f720fc93e27e4178bc62c268e843c2fa3e6d730b686b90-merged.mount: Deactivated successfully.
Feb 25 07:49:23 np0005629333 podman[353882]: 2026-02-25 12:49:23.158843818 +0000 UTC m=+0.490717327 container remove 8e031ee93dbeffae9055a67795fb35bdac42c2520c65d99b9c7f3750514f166f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_moore, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:49:23 np0005629333 systemd[1]: libpod-conmon-8e031ee93dbeffae9055a67795fb35bdac42c2520c65d99b9c7f3750514f166f.scope: Deactivated successfully.
Feb 25 07:49:23 np0005629333 nova_compute[244014]: 2026-02-25 12:49:23.186 244018 DEBUG nova.network.neutron [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Updating instance_info_cache with network_info: [{"id": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "address": "fa:16:3e:93:cc:74", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdee62982-d4", "ovs_interfaceid": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "344b59b1-93f2-4e23-8c28-835e5d954630", "address": "fa:16:3e:8d:10:50", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:1050", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap344b59b1-93", "ovs_interfaceid": "344b59b1-93f2-4e23-8c28-835e5d954630", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:49:23 np0005629333 nova_compute[244014]: 2026-02-25 12:49:23.207 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:49:23 np0005629333 nova_compute[244014]: 2026-02-25 12:49:23.208 244018 DEBUG nova.compute.manager [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Instance network_info: |[{"id": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "address": "fa:16:3e:93:cc:74", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdee62982-d4", "ovs_interfaceid": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "344b59b1-93f2-4e23-8c28-835e5d954630", "address": "fa:16:3e:8d:10:50", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:1050", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap344b59b1-93", "ovs_interfaceid": "344b59b1-93f2-4e23-8c28-835e5d954630", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:49:23 np0005629333 nova_compute[244014]: 2026-02-25 12:49:23.208 244018 DEBUG oslo_concurrency.lockutils [req-b78e4e41-55b7-45f9-aaea-268c290f0314 req-15de43a3-1df5-4732-9e98-eb86fe94bdeb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:49:23 np0005629333 nova_compute[244014]: 2026-02-25 12:49:23.208 244018 DEBUG nova.network.neutron [req-b78e4e41-55b7-45f9-aaea-268c290f0314 req-15de43a3-1df5-4732-9e98-eb86fe94bdeb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Refreshing network info cache for port 344b59b1-93f2-4e23-8c28-835e5d954630 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:49:23 np0005629333 nova_compute[244014]: 2026-02-25 12:49:23.213 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Start _get_guest_xml network_info=[{"id": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "address": "fa:16:3e:93:cc:74", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdee62982-d4", "ovs_interfaceid": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "344b59b1-93f2-4e23-8c28-835e5d954630", "address": "fa:16:3e:8d:10:50", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:1050", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap344b59b1-93", "ovs_interfaceid": "344b59b1-93f2-4e23-8c28-835e5d954630", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:49:23 np0005629333 nova_compute[244014]: 2026-02-25 12:49:23.219 244018 WARNING nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:49:23 np0005629333 nova_compute[244014]: 2026-02-25 12:49:23.228 244018 DEBUG nova.virt.libvirt.host [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:49:23 np0005629333 nova_compute[244014]: 2026-02-25 12:49:23.229 244018 DEBUG nova.virt.libvirt.host [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:49:23 np0005629333 nova_compute[244014]: 2026-02-25 12:49:23.233 244018 DEBUG nova.virt.libvirt.host [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:49:23 np0005629333 nova_compute[244014]: 2026-02-25 12:49:23.234 244018 DEBUG nova.virt.libvirt.host [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:49:23 np0005629333 nova_compute[244014]: 2026-02-25 12:49:23.234 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:49:23 np0005629333 nova_compute[244014]: 2026-02-25 12:49:23.234 244018 DEBUG nova.virt.hardware [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:49:23 np0005629333 nova_compute[244014]: 2026-02-25 12:49:23.235 244018 DEBUG nova.virt.hardware [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:49:23 np0005629333 nova_compute[244014]: 2026-02-25 12:49:23.235 244018 DEBUG nova.virt.hardware [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:49:23 np0005629333 nova_compute[244014]: 2026-02-25 12:49:23.235 244018 DEBUG nova.virt.hardware [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:49:23 np0005629333 nova_compute[244014]: 2026-02-25 12:49:23.236 244018 DEBUG nova.virt.hardware [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:49:23 np0005629333 nova_compute[244014]: 2026-02-25 12:49:23.236 244018 DEBUG nova.virt.hardware [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:49:23 np0005629333 nova_compute[244014]: 2026-02-25 12:49:23.236 244018 DEBUG nova.virt.hardware [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:49:23 np0005629333 nova_compute[244014]: 2026-02-25 12:49:23.237 244018 DEBUG nova.virt.hardware [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:49:23 np0005629333 nova_compute[244014]: 2026-02-25 12:49:23.237 244018 DEBUG nova.virt.hardware [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:49:23 np0005629333 nova_compute[244014]: 2026-02-25 12:49:23.237 244018 DEBUG nova.virt.hardware [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:49:23 np0005629333 nova_compute[244014]: 2026-02-25 12:49:23.238 244018 DEBUG nova.virt.hardware [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:49:23 np0005629333 nova_compute[244014]: 2026-02-25 12:49:23.241 244018 DEBUG oslo_concurrency.processutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:49:23 np0005629333 nova_compute[244014]: 2026-02-25 12:49:23.470 244018 INFO nova.compute.manager [None req-3fcac6e8-50e0-4815-ba31-4c0f4efbf563 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Get console output#033[00m
Feb 25 07:49:23 np0005629333 nova_compute[244014]: 2026-02-25 12:49:23.479 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Feb 25 07:49:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:49:23 np0005629333 podman[354001]: 2026-02-25 12:49:23.620237385 +0000 UTC m=+0.044712663 container create 74a8da66738a32fef979c22761ea619d61f8e2088d8ee24f2201ce4cc4286a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 07:49:23 np0005629333 systemd[1]: Started libpod-conmon-74a8da66738a32fef979c22761ea619d61f8e2088d8ee24f2201ce4cc4286a12.scope.
Feb 25 07:49:23 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:49:23 np0005629333 podman[354001]: 2026-02-25 12:49:23.600862398 +0000 UTC m=+0.025337706 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:49:23 np0005629333 podman[354001]: 2026-02-25 12:49:23.698585977 +0000 UTC m=+0.123061275 container init 74a8da66738a32fef979c22761ea619d61f8e2088d8ee24f2201ce4cc4286a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:49:23 np0005629333 podman[354001]: 2026-02-25 12:49:23.704606118 +0000 UTC m=+0.129081396 container start 74a8da66738a32fef979c22761ea619d61f8e2088d8ee24f2201ce4cc4286a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 07:49:23 np0005629333 relaxed_jones[354018]: 167 167
Feb 25 07:49:23 np0005629333 systemd[1]: libpod-74a8da66738a32fef979c22761ea619d61f8e2088d8ee24f2201ce4cc4286a12.scope: Deactivated successfully.
Feb 25 07:49:23 np0005629333 podman[354001]: 2026-02-25 12:49:23.710406261 +0000 UTC m=+0.134881539 container attach 74a8da66738a32fef979c22761ea619d61f8e2088d8ee24f2201ce4cc4286a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jones, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 07:49:23 np0005629333 podman[354001]: 2026-02-25 12:49:23.710868894 +0000 UTC m=+0.135344162 container died 74a8da66738a32fef979c22761ea619d61f8e2088d8ee24f2201ce4cc4286a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jones, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 07:49:23 np0005629333 systemd[1]: var-lib-containers-storage-overlay-2948901fdea99e55a805067cc9fa3d12e7dae76f768d0d1023b1b3a3cd17d4bd-merged.mount: Deactivated successfully.
Feb 25 07:49:23 np0005629333 podman[354001]: 2026-02-25 12:49:23.755806963 +0000 UTC m=+0.180282281 container remove 74a8da66738a32fef979c22761ea619d61f8e2088d8ee24f2201ce4cc4286a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_jones, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:49:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:49:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1756431689' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:49:23 np0005629333 systemd[1]: libpod-conmon-74a8da66738a32fef979c22761ea619d61f8e2088d8ee24f2201ce4cc4286a12.scope: Deactivated successfully.
Feb 25 07:49:23 np0005629333 nova_compute[244014]: 2026-02-25 12:49:23.780 244018 DEBUG oslo_concurrency.processutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:49:23 np0005629333 nova_compute[244014]: 2026-02-25 12:49:23.800 244018 DEBUG nova.storage.rbd_utils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image b6501baa-8bc9-4724-b4c1-8ff43faf517c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:49:23 np0005629333 nova_compute[244014]: 2026-02-25 12:49:23.804 244018 DEBUG oslo_concurrency.processutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:49:23 np0005629333 podman[354063]: 2026-02-25 12:49:23.912095686 +0000 UTC m=+0.046097712 container create 1c4b2e4b2a982b981affd321dab0ab389d5cb3f1e02cf22b7157b396bae6a91e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_feynman, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:49:23 np0005629333 systemd[1]: Started libpod-conmon-1c4b2e4b2a982b981affd321dab0ab389d5cb3f1e02cf22b7157b396bae6a91e.scope.
Feb 25 07:49:23 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:49:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935f0f6ea4cc0791112f4b61da2c886e8f8c060ec9cdb488fa735e1cbfcf33f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:49:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935f0f6ea4cc0791112f4b61da2c886e8f8c060ec9cdb488fa735e1cbfcf33f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:49:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935f0f6ea4cc0791112f4b61da2c886e8f8c060ec9cdb488fa735e1cbfcf33f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:49:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935f0f6ea4cc0791112f4b61da2c886e8f8c060ec9cdb488fa735e1cbfcf33f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:49:23 np0005629333 podman[354063]: 2026-02-25 12:49:23.889090107 +0000 UTC m=+0.023092133 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:49:23 np0005629333 podman[354063]: 2026-02-25 12:49:23.999562696 +0000 UTC m=+0.133564792 container init 1c4b2e4b2a982b981affd321dab0ab389d5cb3f1e02cf22b7157b396bae6a91e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:49:24 np0005629333 podman[354063]: 2026-02-25 12:49:24.014161638 +0000 UTC m=+0.148163694 container start 1c4b2e4b2a982b981affd321dab0ab389d5cb3f1e02cf22b7157b396bae6a91e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 07:49:24 np0005629333 podman[354063]: 2026-02-25 12:49:24.018168442 +0000 UTC m=+0.152170498 container attach 1c4b2e4b2a982b981affd321dab0ab389d5cb3f1e02cf22b7157b396bae6a91e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:49:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:49:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4258743471' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.360 244018 DEBUG oslo_concurrency.processutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.363 244018 DEBUG nova.virt.libvirt.vif [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:49:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1173463848',display_name='tempest-TestGettingAddress-server-1173463848',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1173463848',id=122,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNlLCvqIkUYh/9ZHtxnKBN7YBsKpfQ8TjO19iCX554GJUCo/N1T5J3/ZTJ7NHwET0eZFR6/wdUxNMyoCAZQSh5tzxHfA6vjgYnp2UPefpqRUyjRlBnYX2yGGsA8ccwHtsg==',key_name='tempest-TestGettingAddress-238420135',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-s4cszaw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:49:14Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=b6501baa-8bc9-4724-b4c1-8ff43faf517c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "address": "fa:16:3e:93:cc:74", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdee62982-d4", "ovs_interfaceid": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.364 244018 DEBUG nova.network.os_vif_util [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "address": "fa:16:3e:93:cc:74", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdee62982-d4", "ovs_interfaceid": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.365 244018 DEBUG nova.network.os_vif_util [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:cc:74,bridge_name='br-int',has_traffic_filtering=True,id=dee62982-d46c-4a81-b4ad-8154d7cfc7af,network=Network(481feaf1-7ff4-47be-9159-a1dd19ceebcc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdee62982-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.366 244018 DEBUG nova.virt.libvirt.vif [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:49:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1173463848',display_name='tempest-TestGettingAddress-server-1173463848',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1173463848',id=122,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNlLCvqIkUYh/9ZHtxnKBN7YBsKpfQ8TjO19iCX554GJUCo/N1T5J3/ZTJ7NHwET0eZFR6/wdUxNMyoCAZQSh5tzxHfA6vjgYnp2UPefpqRUyjRlBnYX2yGGsA8ccwHtsg==',key_name='tempest-TestGettingAddress-238420135',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-s4cszaw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:49:14Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=b6501baa-8bc9-4724-b4c1-8ff43faf517c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "344b59b1-93f2-4e23-8c28-835e5d954630", "address": "fa:16:3e:8d:10:50", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:1050", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap344b59b1-93", "ovs_interfaceid": "344b59b1-93f2-4e23-8c28-835e5d954630", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.367 244018 DEBUG nova.network.os_vif_util [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "344b59b1-93f2-4e23-8c28-835e5d954630", "address": "fa:16:3e:8d:10:50", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:1050", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap344b59b1-93", "ovs_interfaceid": "344b59b1-93f2-4e23-8c28-835e5d954630", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.368 244018 DEBUG nova.network.os_vif_util [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:10:50,bridge_name='br-int',has_traffic_filtering=True,id=344b59b1-93f2-4e23-8c28-835e5d954630,network=Network(eb832dde-9848-40c5-9505-cc643b1bd0fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap344b59b1-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.369 244018 DEBUG nova.objects.instance [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid b6501baa-8bc9-4724-b4c1-8ff43faf517c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.389 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:49:24 np0005629333 nova_compute[244014]:  <uuid>b6501baa-8bc9-4724-b4c1-8ff43faf517c</uuid>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:  <name>instance-0000007a</name>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestGettingAddress-server-1173463848</nova:name>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:49:23</nova:creationTime>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:49:24 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:        <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:        <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:        <nova:port uuid="dee62982-d46c-4a81-b4ad-8154d7cfc7af">
Feb 25 07:49:24 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:        <nova:port uuid="344b59b1-93f2-4e23-8c28-835e5d954630">
Feb 25 07:49:24 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe8d:1050" ipVersion="6"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <entry name="serial">b6501baa-8bc9-4724-b4c1-8ff43faf517c</entry>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <entry name="uuid">b6501baa-8bc9-4724-b4c1-8ff43faf517c</entry>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/b6501baa-8bc9-4724-b4c1-8ff43faf517c_disk">
Feb 25 07:49:24 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:49:24 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/b6501baa-8bc9-4724-b4c1-8ff43faf517c_disk.config">
Feb 25 07:49:24 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:49:24 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:93:cc:74"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <target dev="tapdee62982-d4"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:8d:10:50"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <target dev="tap344b59b1-93"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/b6501baa-8bc9-4724-b4c1-8ff43faf517c/console.log" append="off"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:49:24 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:49:24 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:49:24 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:49:24 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
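
[editor's note] The block between "End _get_guest_xml xml=<domain ...>" and the line above is the complete libvirt domain definition nova generated: two RBD-backed disks (root on vda, config drive on sda), two virtio/vhost interfaces targeting the OVS tap devices, and 24 pcie-root-port controllers for a q35 machine. A quick way to pull the device summary back out of such a dump is stdlib ElementTree; a minimal sketch, assuming the <domain> element has been saved to domain.xml:

    # Sketch: extracting the device info from the domain XML dumped above.
    # Stdlib only; assumes the XML was saved to domain.xml.
    import xml.etree.ElementTree as ET

    root = ET.parse("domain.xml").getroot()

    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        tgt = disk.find("target")
        # e.g. disk rbd vms/b6501baa-..._disk -> vda (virtio)
        print(disk.get("device"), src.get("protocol"), src.get("name"),
              "->", tgt.get("dev"), f"({tgt.get('bus')})")

    for iface in root.findall("./devices/interface"):
        mac = iface.find("mac").get("address")
        tap = iface.find("target").get("dev")
        mtu = iface.find("mtu").get("size")
        print("interface", mac, "on", tap, "mtu", mtu)
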
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.390 244018 DEBUG nova.compute.manager [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Preparing to wait for external event network-vif-plugged-dee62982-d46c-4a81-b4ad-8154d7cfc7af prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.390 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.390 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.390 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.390 244018 DEBUG nova.compute.manager [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Preparing to wait for external event network-vif-plugged-344b59b1-93f2-4e23-8c28-835e5d954630 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.391 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.391 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.391 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
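
[editor's note] The lockutils lines above show nova registering two external events (network-vif-plugged-<port-uuid>, one per port) under the per-instance "<uuid>-events" lock before the VIFs are actually plugged, so a neutron callback cannot race the registration. The create-or-get-then-wait shape is roughly the following; an illustrative sketch using stdlib threading, not nova's code (nova uses eventlet primitives):

    # Sketch of the prepare-then-wait pattern behind prepare_for_instance_event.
    import threading

    _events_lock = threading.Lock()      # plays the role of "<uuid>-events"
    _events = {}                         # (instance_uuid, event_name) -> Event

    def prepare_for_instance_event(instance_uuid, event_name):
        # mirrors _create_or_get_event: create-or-get under the lock
        with _events_lock:
            return _events.setdefault((instance_uuid, event_name),
                                      threading.Event())

    def emit_event(instance_uuid, event_name):
        # called when neutron reports e.g. network-vif-plugged
        with _events_lock:
            ev = _events.get((instance_uuid, event_name))
        if ev is not None:
            ev.set()

    ev = prepare_for_instance_event(
        "b6501baa-8bc9-4724-b4c1-8ff43faf517c",
        "network-vif-plugged-dee62982-d46c-4a81-b4ad-8154d7cfc7af")
    # ... plug VIFs, define and start the domain ...
    ev.wait(timeout=300)  # spawn blocks here until the plug event arrives
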
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.392 244018 DEBUG nova.virt.libvirt.vif [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:49:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1173463848',display_name='tempest-TestGettingAddress-server-1173463848',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1173463848',id=122,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNlLCvqIkUYh/9ZHtxnKBN7YBsKpfQ8TjO19iCX554GJUCo/N1T5J3/ZTJ7NHwET0eZFR6/wdUxNMyoCAZQSh5tzxHfA6vjgYnp2UPefpqRUyjRlBnYX2yGGsA8ccwHtsg==',key_name='tempest-TestGettingAddress-238420135',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-s4cszaw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:49:14Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=b6501baa-8bc9-4724-b4c1-8ff43faf517c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "address": "fa:16:3e:93:cc:74", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdee62982-d4", "ovs_interfaceid": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.392 244018 DEBUG nova.network.os_vif_util [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "address": "fa:16:3e:93:cc:74", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdee62982-d4", "ovs_interfaceid": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.393 244018 DEBUG nova.network.os_vif_util [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:cc:74,bridge_name='br-int',has_traffic_filtering=True,id=dee62982-d46c-4a81-b4ad-8154d7cfc7af,network=Network(481feaf1-7ff4-47be-9159-a1dd19ceebcc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdee62982-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.393 244018 DEBUG os_vif [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:cc:74,bridge_name='br-int',has_traffic_filtering=True,id=dee62982-d46c-4a81-b4ad-8154d7cfc7af,network=Network(481feaf1-7ff4-47be-9159-a1dd19ceebcc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdee62982-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.394 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.394 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.395 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.399 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.399 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdee62982-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.400 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdee62982-d4, col_values=(('external_ids', {'iface-id': 'dee62982-d46c-4a81-b4ad-8154d7cfc7af', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:93:cc:74', 'vm-uuid': 'b6501baa-8bc9-4724-b4c1-8ff43faf517c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:49:24 np0005629333 NetworkManager[49836]: <info>  [1772023764.4022] manager: (tapdee62982-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/530)
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.401 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.404 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.406 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.406 244018 INFO os_vif [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:cc:74,bridge_name='br-int',has_traffic_filtering=True,id=dee62982-d46c-4a81-b4ad-8154d7cfc7af,network=Network(481feaf1-7ff4-47be-9159-a1dd19ceebcc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdee62982-d4')#033[00m
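
[editor's note] The "Successfully plugged vif" line above is emitted by os-vif's public plug() entry point. A minimal sketch of that call path follows, assuming the os-vif package is importable; the constructor plumbing (field names are taken from the object repr in the log, InstanceInfo arguments are an assumption) may differ from the installed version:

    # Sketch of nova's call into os-vif for the port plugged above.
    # Constructor details are assumptions based on the logged object repr.
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' (and other) os-vif plugins

    port_id = "dee62982-d46c-4a81-b4ad-8154d7cfc7af"
    my_vif = vif.VIFOpenVSwitch(
        id=port_id, address="fa:16:3e:93:cc:74",
        vif_name="tapdee62982-d4", bridge_name="br-int",
        has_traffic_filtering=True, preserve_on_delete=False, active=False,
        plugin="ovs",
        network=network.Network(id="481feaf1-7ff4-47be-9159-a1dd19ceebcc"),
        port_profile=vif.VIFPortProfileOpenVSwitch(interface_id=port_id))

    inst = instance_info.InstanceInfo(
        uuid="b6501baa-8bc9-4724-b4c1-8ff43faf517c", name="instance-0000007a")

    os_vif.plug(my_vif, inst)  # -> "Successfully plugged vif VIFOpenVSwitch(...)"
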
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.407 244018 DEBUG nova.virt.libvirt.vif [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:49:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1173463848',display_name='tempest-TestGettingAddress-server-1173463848',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1173463848',id=122,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNlLCvqIkUYh/9ZHtxnKBN7YBsKpfQ8TjO19iCX554GJUCo/N1T5J3/ZTJ7NHwET0eZFR6/wdUxNMyoCAZQSh5tzxHfA6vjgYnp2UPefpqRUyjRlBnYX2yGGsA8ccwHtsg==',key_name='tempest-TestGettingAddress-238420135',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-s4cszaw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:49:14Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=b6501baa-8bc9-4724-b4c1-8ff43faf517c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "344b59b1-93f2-4e23-8c28-835e5d954630", "address": "fa:16:3e:8d:10:50", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:1050", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap344b59b1-93", "ovs_interfaceid": "344b59b1-93f2-4e23-8c28-835e5d954630", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.407 244018 DEBUG nova.network.os_vif_util [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "344b59b1-93f2-4e23-8c28-835e5d954630", "address": "fa:16:3e:8d:10:50", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:1050", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap344b59b1-93", "ovs_interfaceid": "344b59b1-93f2-4e23-8c28-835e5d954630", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.408 244018 DEBUG nova.network.os_vif_util [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:10:50,bridge_name='br-int',has_traffic_filtering=True,id=344b59b1-93f2-4e23-8c28-835e5d954630,network=Network(eb832dde-9848-40c5-9505-cc643b1bd0fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap344b59b1-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.408 244018 DEBUG os_vif [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:10:50,bridge_name='br-int',has_traffic_filtering=True,id=344b59b1-93f2-4e23-8c28-835e5d954630,network=Network(eb832dde-9848-40c5-9505-cc643b1bd0fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap344b59b1-93') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.409 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.409 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.409 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.411 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.412 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap344b59b1-93, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.412 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap344b59b1-93, col_values=(('external_ids', {'iface-id': '344b59b1-93f2-4e23-8c28-835e5d954630', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8d:10:50', 'vm-uuid': 'b6501baa-8bc9-4724-b4c1-8ff43faf517c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.413 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:24 np0005629333 NetworkManager[49836]: <info>  [1772023764.4144] manager: (tap344b59b1-93): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/531)
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.416 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.420 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.420 244018 INFO os_vif [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:10:50,bridge_name='br-int',has_traffic_filtering=True,id=344b59b1-93f2-4e23-8c28-835e5d954630,network=Network(eb832dde-9848-40c5-9505-cc643b1bd0fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap344b59b1-93')#033[00m
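
[editor's note] Each plug above became one ovsdbapp transaction: AddBridgeCommand (a no-op here, br-int already exists), AddPortCommand, and a DbSetCommand stamping the Neutron port UUID, MAC, and instance UUID into the interface's external_ids so ovn-controller can match and claim the port. The CLI equivalent of that transaction is a single ovs-vsctl invocation; a sketch via subprocess (run as root on the compute node), with external_ids keys mirroring the logged DbSetCommand:

    # Sketch: ovs-vsctl equivalent of the AddPort/DbSet transaction logged above.
    import subprocess

    port = "tap344b59b1-93"
    subprocess.run(
        ["ovs-vsctl",
         "--may-exist", "add-port", "br-int", port,
         "--", "set", "Interface", port,
         "external_ids:iface-id=344b59b1-93f2-4e23-8c28-835e5d954630",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:8d:10:50",
         "external_ids:vm-uuid=b6501baa-8bc9-4724-b4c1-8ff43faf517c"],
        check=True)
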
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.490 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.491 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.491 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:93:cc:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.492 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:8d:10:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.492 244018 INFO nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Using config drive#033[00m
Feb 25 07:49:24 np0005629333 nova_compute[244014]: 2026-02-25 12:49:24.516 244018 DEBUG nova.storage.rbd_utils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image b6501baa-8bc9-4724-b4c1-8ff43faf517c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:49:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2065: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Feb 25 07:49:24 np0005629333 lvm[354200]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:49:24 np0005629333 lvm[354200]: VG ceph_vg0 finished
Feb 25 07:49:24 np0005629333 lvm[354203]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:49:24 np0005629333 lvm[354203]: VG ceph_vg1 finished
Feb 25 07:49:24 np0005629333 lvm[354205]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:49:24 np0005629333 lvm[354205]: VG ceph_vg2 finished
Feb 25 07:49:24 np0005629333 cranky_feynman[354098]: {}
Feb 25 07:49:24 np0005629333 systemd[1]: libpod-1c4b2e4b2a982b981affd321dab0ab389d5cb3f1e02cf22b7157b396bae6a91e.scope: Deactivated successfully.
Feb 25 07:49:24 np0005629333 systemd[1]: libpod-1c4b2e4b2a982b981affd321dab0ab389d5cb3f1e02cf22b7157b396bae6a91e.scope: Consumed 1.055s CPU time.
Feb 25 07:49:24 np0005629333 podman[354063]: 2026-02-25 12:49:24.779414967 +0000 UTC m=+0.913417023 container died 1c4b2e4b2a982b981affd321dab0ab389d5cb3f1e02cf22b7157b396bae6a91e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_feynman, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 07:49:24 np0005629333 systemd[1]: var-lib-containers-storage-overlay-935f0f6ea4cc0791112f4b61da2c886e8f8c060ec9cdb488fa735e1cbfcf33f5-merged.mount: Deactivated successfully.
Feb 25 07:49:24 np0005629333 podman[354063]: 2026-02-25 12:49:24.817133942 +0000 UTC m=+0.951135958 container remove 1c4b2e4b2a982b981affd321dab0ab389d5cb3f1e02cf22b7157b396bae6a91e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 07:49:24 np0005629333 systemd[1]: libpod-conmon-1c4b2e4b2a982b981affd321dab0ab389d5cb3f1e02cf22b7157b396bae6a91e.scope: Deactivated successfully.
Feb 25 07:49:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:49:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:49:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:49:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:49:25 np0005629333 nova_compute[244014]: 2026-02-25 12:49:25.010 244018 INFO nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Creating config drive at /var/lib/nova/instances/b6501baa-8bc9-4724-b4c1-8ff43faf517c/disk.config#033[00m
Feb 25 07:49:25 np0005629333 nova_compute[244014]: 2026-02-25 12:49:25.017 244018 DEBUG oslo_concurrency.processutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b6501baa-8bc9-4724-b4c1-8ff43faf517c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqj5qfh21 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:49:25 np0005629333 nova_compute[244014]: 2026-02-25 12:49:25.056 244018 DEBUG nova.network.neutron [req-b78e4e41-55b7-45f9-aaea-268c290f0314 req-15de43a3-1df5-4732-9e98-eb86fe94bdeb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Updated VIF entry in instance network info cache for port 344b59b1-93f2-4e23-8c28-835e5d954630. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:49:25 np0005629333 nova_compute[244014]: 2026-02-25 12:49:25.057 244018 DEBUG nova.network.neutron [req-b78e4e41-55b7-45f9-aaea-268c290f0314 req-15de43a3-1df5-4732-9e98-eb86fe94bdeb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Updating instance_info_cache with network_info: [{"id": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "address": "fa:16:3e:93:cc:74", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdee62982-d4", "ovs_interfaceid": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "344b59b1-93f2-4e23-8c28-835e5d954630", "address": "fa:16:3e:8d:10:50", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:1050", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap344b59b1-93", "ovs_interfaceid": "344b59b1-93f2-4e23-8c28-835e5d954630", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:49:25 np0005629333 nova_compute[244014]: 2026-02-25 12:49:25.087 244018 DEBUG oslo_concurrency.lockutils [req-b78e4e41-55b7-45f9-aaea-268c290f0314 req-15de43a3-1df5-4732-9e98-eb86fe94bdeb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:49:25 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:49:25 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:49:25 np0005629333 nova_compute[244014]: 2026-02-25 12:49:25.162 244018 DEBUG oslo_concurrency.processutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b6501baa-8bc9-4724-b4c1-8ff43faf517c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpqj5qfh21" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:49:25 np0005629333 nova_compute[244014]: 2026-02-25 12:49:25.201 244018 DEBUG nova.storage.rbd_utils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image b6501baa-8bc9-4724-b4c1-8ff43faf517c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:49:25 np0005629333 nova_compute[244014]: 2026-02-25 12:49:25.204 244018 DEBUG oslo_concurrency.processutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b6501baa-8bc9-4724-b4c1-8ff43faf517c/disk.config b6501baa-8bc9-4724-b4c1-8ff43faf517c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:49:25 np0005629333 nova_compute[244014]: 2026-02-25 12:49:25.306 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:25 np0005629333 nova_compute[244014]: 2026-02-25 12:49:25.329 244018 DEBUG oslo_concurrency.processutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b6501baa-8bc9-4724-b4c1-8ff43faf517c/disk.config b6501baa-8bc9-4724-b4c1-8ff43faf517c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:49:25 np0005629333 nova_compute[244014]: 2026-02-25 12:49:25.330 244018 INFO nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Deleting local config drive /var/lib/nova/instances/b6501baa-8bc9-4724-b4c1-8ff43faf517c/disk.config because it was imported into RBD.#033[00m
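
[editor's note] The config-drive sequence above is: build an ISO9660 image with mkisofs from a temporary directory of metadata, import it into the Ceph vms pool as <uuid>_disk.config with rbd import, then delete the local file since the RBD copy is authoritative. Replayed as a subprocess sketch with the same flags as the logged commands (metadata_dir is a hypothetical stand-in for the /tmp/tmpqj5qfh21 directory nova populated):

    # Sketch: the config-drive build + RBD import sequence from the log above.
    import os
    import subprocess

    uuid = "b6501baa-8bc9-4724-b4c1-8ff43faf517c"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"
    metadata_dir = "/tmp/config-drive-contents"  # hypothetical path

    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso,
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
         "-quiet", "-J", "-r", "-V", "config-2", metadata_dir],
        check=True)

    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)

    os.unlink(iso)  # as logged: delete the local copy once imported into RBD
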
Feb 25 07:49:25 np0005629333 NetworkManager[49836]: <info>  [1772023765.3735] manager: (tapdee62982-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/532)
Feb 25 07:49:25 np0005629333 systemd-udevd[354202]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:49:25 np0005629333 kernel: tapdee62982-d4: entered promiscuous mode
Feb 25 07:49:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:25Z|01271|binding|INFO|Claiming lport dee62982-d46c-4a81-b4ad-8154d7cfc7af for this chassis.
Feb 25 07:49:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:25Z|01272|binding|INFO|dee62982-d46c-4a81-b4ad-8154d7cfc7af: Claiming fa:16:3e:93:cc:74 10.100.0.9
Feb 25 07:49:25 np0005629333 nova_compute[244014]: 2026-02-25 12:49:25.377 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:25 np0005629333 NetworkManager[49836]: <info>  [1772023765.3827] manager: (tap344b59b1-93): new Tun device (/org/freedesktop/NetworkManager/Devices/533)
Feb 25 07:49:25 np0005629333 systemd-udevd[354199]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:49:25 np0005629333 kernel: tap344b59b1-93: entered promiscuous mode
Feb 25 07:49:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:25Z|01273|binding|INFO|Claiming lport 344b59b1-93f2-4e23-8c28-835e5d954630 for this chassis.
Feb 25 07:49:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:25Z|01274|binding|INFO|344b59b1-93f2-4e23-8c28-835e5d954630: Claiming fa:16:3e:8d:10:50 2001:db8::f816:3eff:fe8d:1050
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.386 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:cc:74 10.100.0.9'], port_security=['fa:16:3e:93:cc:74 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b6501baa-8bc9-4724-b4c1-8ff43faf517c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f3102e48-4ea3-4c65-a010-ac507aeeeba5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7b776d7-fb2a-404e-b423-f79885837022, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=dee62982-d46c-4a81-b4ad-8154d7cfc7af) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:49:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:25Z|01275|binding|INFO|Setting lport dee62982-d46c-4a81-b4ad-8154d7cfc7af ovn-installed in OVS
Feb 25 07:49:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:25Z|01276|binding|INFO|Setting lport dee62982-d46c-4a81-b4ad-8154d7cfc7af up in Southbound
Feb 25 07:49:25 np0005629333 nova_compute[244014]: 2026-02-25 12:49:25.386 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:25 np0005629333 nova_compute[244014]: 2026-02-25 12:49:25.390 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:25 np0005629333 NetworkManager[49836]: <info>  [1772023765.3911] device (tapdee62982-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.391 157129 INFO neutron.agent.ovn.metadata.agent [-] Port dee62982-d46c-4a81-b4ad-8154d7cfc7af in datapath 481feaf1-7ff4-47be-9159-a1dd19ceebcc bound to our chassis#033[00m
Feb 25 07:49:25 np0005629333 NetworkManager[49836]: <info>  [1772023765.3932] device (tapdee62982-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:49:25 np0005629333 NetworkManager[49836]: <info>  [1772023765.3942] device (tap344b59b1-93): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:49:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:25Z|01277|binding|INFO|Setting lport 344b59b1-93f2-4e23-8c28-835e5d954630 ovn-installed in OVS
Feb 25 07:49:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:25Z|01278|binding|INFO|Setting lport 344b59b1-93f2-4e23-8c28-835e5d954630 up in Southbound
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.393 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 481feaf1-7ff4-47be-9159-a1dd19ceebcc#033[00m
Feb 25 07:49:25 np0005629333 nova_compute[244014]: 2026-02-25 12:49:25.395 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:25 np0005629333 NetworkManager[49836]: <info>  [1772023765.3962] device (tap344b59b1-93): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.397 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:10:50 2001:db8::f816:3eff:fe8d:1050'], port_security=['fa:16:3e:8d:10:50 2001:db8::f816:3eff:fe8d:1050'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe8d:1050/64', 'neutron:device_id': 'b6501baa-8bc9-4724-b4c1-8ff43faf517c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb832dde-9848-40c5-9505-cc643b1bd0fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f3102e48-4ea3-4c65-a010-ac507aeeeba5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f83b91c5-311c-45de-aa70-06fb6ec6fece, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=344b59b1-93f2-4e23-8c28-835e5d954630) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
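
[editor's note] At this point ovn-controller has claimed both logical ports for this chassis and is flipping them up in the Southbound DB; the Port_Binding rows in the events above show chassis going from [] to this host. A quick way to verify the binding from the compute node is ovn-sbctl; a sketch, assuming ovn-sbctl on this host can reach the Southbound DB:

    # Sketch: verifying the Port_Binding rows shown in the events above.
    import subprocess

    for lport in ("dee62982-d46c-4a81-b4ad-8154d7cfc7af",
                  "344b59b1-93f2-4e23-8c28-835e5d954630"):
        out = subprocess.run(
            ["ovn-sbctl", "--columns=logical_port,chassis,up",
             "find", "Port_Binding", f"logical_port={lport}"],
            check=True, capture_output=True, text=True)
        print(out.stdout)
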
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.404 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fd4efa9b-f3b2-495d-804a-0bc1172ad26d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.404 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap481feaf1-71 in ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.411 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap481feaf1-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.411 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b9bbc69d-ecc3-41f6-a908-c7e990d9e2b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.412 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[355a5b19-1bf4-4231-93e9-552cf5515d62]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:25 np0005629333 systemd-machined[210048]: New machine qemu-154-instance-0000007a.
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.421 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[e64758fc-21cc-4fc5-a3ab-a5584c90f867]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.435 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1f6cb856-9e79-43de-9521-34a182d00922]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:25 np0005629333 systemd[1]: Started Virtual Machine qemu-154-instance-0000007a.
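
Libvirt registers each qemu process with systemd-machined, so the new guest appears as a machine (the "Started Virtual Machine" line is its scope unit). If the systemd-container tools are installed, the registration can be inspected from the host; a small check, with the machine name taken from the log:

    # Sketch: query systemd-machined about the VM it just registered.
    # Assumes machinectl is available on the host.
    import subprocess

    out = subprocess.run(
        ['machinectl', 'show', 'qemu-154-instance-0000007a',
         '--property=Class', '--property=Leader', '--property=State'],
        capture_output=True, text=True, check=True)
    print(out.stdout)
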
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.458 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8a95f0b6-6717-4c84-980b-d676726b981a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.465 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7ec0ad6e-5f95-415d-a28a-d26397d9ddef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:25 np0005629333 NetworkManager[49836]: <info>  [1772023765.4660] manager: (tap481feaf1-70): new Veth device (/org/freedesktop/NetworkManager/Devices/534)
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.491 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[06688183-86ab-44a4-8fef-d7199f449a88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.494 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1adfc4d4-0b1f-4254-b742-9a5ecee7a4a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:25 np0005629333 NetworkManager[49836]: <info>  [1772023765.5122] device (tap481feaf1-70): carrier: link connected
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.516 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b6727c93-51cf-4fc4-aff9-b1595045c645]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.529 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fb24fc19-1370-417b-b6cf-7467930a3583]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap481feaf1-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:24:6f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 382], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573491, 'reachable_time': 20258, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354330, 'error': None, 'target': 'ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.544 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[142a477d-cdc6-42f1-88e1-59572382a2c0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe24:6f82'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573491, 'tstamp': 573491}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354331, 'error': None, 'target': 'ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.555 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c83132f4-3457-472c-b4cf-4b15631f55a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap481feaf1-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:24:6f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 382], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573491, 'reachable_time': 20258, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 354332, 'error': None, 'target': 'ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
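
The three privsep replies above are raw pyroute2 netlink messages (two RTM_NEWLINK dumps and one RTM_NEWADDR) fetched inside the freshly created ovnmeta namespace; neutron routes them through the privsep daemon because they require CAP_NET_ADMIN in the target namespace. As root, roughly the same query can be reproduced directly with pyroute2, assuming the namespace from the log still exists:

    # Sketch: inspect the veth inside the ovnmeta namespace; this is the
    # same information the RTM_NEWLINK/RTM_NEWADDR replies above carry.
    from pyroute2 import NetNS

    NS = 'ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc'  # from the log

    with NetNS(NS) as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_ADDRESS'),
                  link.get_attr('IFLA_OPERSTATE'))
        for addr in ns.get_addr():
            print(addr.get_attr('IFA_ADDRESS'), '/', addr['prefixlen'])

Against the log above this would show tap481feaf1-71 with MAC fa:16:3e:24:6f:82 up, plus its fe80:: link-local address.
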
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.576 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e50c72c3-e5e9-4d6a-ba76-1adf2e717459]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.626 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[abd154d3-9d6f-439a-af84-e72fb141eb00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.628 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap481feaf1-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.628 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.628 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap481feaf1-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:49:25 np0005629333 kernel: tap481feaf1-70: entered promiscuous mode
Feb 25 07:49:25 np0005629333 nova_compute[244014]: 2026-02-25 12:49:25.630 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:25 np0005629333 NetworkManager[49836]: <info>  [1772023765.6312] manager: (tap481feaf1-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/535)
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.633 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap481feaf1-70, col_values=(('external_ids', {'iface-id': '8234c876-6930-42e9-b642-2d1f82e23ee0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
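
The three transactions just logged (DelPortCommand against br-ex, AddPortCommand against br-int, DbSetCommand on the Interface row) are plain ovsdbapp Open_vSwitch API calls; the delete that "caused no change" is an idempotent guard ensuring the tap is not attached to another bridge. A rough standalone equivalent, assuming the usual local ovsdb-server socket path:

    # Sketch of the same three-step plug via ovsdbapp's Open_vSwitch API.
    # Socket path is an assumed default; port and iface-id come from the log.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVS_DB = 'unix:/run/openvswitch/db.sock'  # assumption
    idl = connection.OvsdbIdl.from_server(OVS_DB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    port = 'tap481feaf1-70'
    api.del_port(port, bridge='br-ex', if_exists=True).execute(check_error=True)
    api.add_port('br-int', port, may_exist=True).execute(check_error=True)
    api.db_set(
        'Interface', port,
        ('external_ids',
         {'iface-id': '8234c876-6930-42e9-b642-2d1f82e23ee0'})
    ).execute(check_error=True)

Writing external_ids:iface-id is what lets ovn-controller tie the OVS interface to the OVN logical port, which is why the binding claim/release messages bracket these transactions.
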
Feb 25 07:49:25 np0005629333 nova_compute[244014]: 2026-02-25 12:49:25.634 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:25Z|01279|binding|INFO|Releasing lport 8234c876-6930-42e9-b642-2d1f82e23ee0 from this chassis (sb_readonly=0)
Feb 25 07:49:25 np0005629333 nova_compute[244014]: 2026-02-25 12:49:25.635 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.635 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/481feaf1-7ff4-47be-9159-a1dd19ceebcc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/481feaf1-7ff4-47be-9159-a1dd19ceebcc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.636 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dfd712ec-e2fe-4c9f-a1fc-9ffc5d3e0e24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.637 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-481feaf1-7ff4-47be-9159-a1dd19ceebcc
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/481feaf1-7ff4-47be-9159-a1dd19ceebcc.pid.haproxy
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 481feaf1-7ff4-47be-9159-a1dd19ceebcc
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:49:25 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:25.638 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'env', 'PROCESS_TAG=haproxy-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/481feaf1-7ff4-47be-9159-a1dd19ceebcc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 07:49:25 np0005629333 nova_compute[244014]: 2026-02-25 12:49:25.639 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:25 np0005629333 nova_compute[244014]: 2026-02-25 12:49:25.811 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023765.8112133, b6501baa-8bc9-4724-b4c1-8ff43faf517c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:49:25 np0005629333 nova_compute[244014]: 2026-02-25 12:49:25.812 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] VM Started (Lifecycle Event)#033[00m
Feb 25 07:49:25 np0005629333 nova_compute[244014]: 2026-02-25 12:49:25.834 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:49:25 np0005629333 nova_compute[244014]: 2026-02-25 12:49:25.839 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023765.8115282, b6501baa-8bc9-4724-b4c1-8ff43faf517c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:49:25 np0005629333 nova_compute[244014]: 2026-02-25 12:49:25.840 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:49:25 np0005629333 nova_compute[244014]: 2026-02-25 12:49:25.856 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:49:25 np0005629333 nova_compute[244014]: 2026-02-25 12:49:25.859 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:49:25 np0005629333 nova_compute[244014]: 2026-02-25 12:49:25.878 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
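
The sync message above compares nova's stored power state (0) with what the hypervisor reports (3) while the instance is still building; the numeric codes come from nova.compute.power_state, and the "Skip" branch exists because any pending task (here spawning) makes a transient hypervisor state expected. A condensed sketch of that decision, using the constant values nova defines (the function itself is a simplified stand-in, not nova's code):

    # Condensed sketch of the handle_lifecycle_event decision logged above.
    # Constant values match nova.compute.power_state.
    NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0x00, 0x01, 0x03, 0x04

    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            # e.g. 'spawning': hypervisor state is transient; do nothing.
            return 'skip'
        if db_power_state != vm_power_state:
            return 'update-db'
        return 'in-sync'

    # From the log: DB power_state 0, VM power_state 3, task 'spawning'.
    assert sync_power_state(NOSTATE, PAUSED, 'spawning') == 'skip'
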
Feb 25 07:49:25 np0005629333 podman[354407]: 2026-02-25 12:49:25.989329021 +0000 UTC m=+0.060528200 container create 7b53b721630132d693daa620ddd634af4092aee8eab1fae2dcc83c54626328f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 07:49:26 np0005629333 systemd[1]: Started libpod-conmon-7b53b721630132d693daa620ddd634af4092aee8eab1fae2dcc83c54626328f6.scope.
Feb 25 07:49:26 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:49:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6fa4fcab328f5a267bf20836bf596abfcd5ef7623497a8450d10dfaa2251880/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:49:26 np0005629333 podman[354407]: 2026-02-25 12:49:25.960631071 +0000 UTC m=+0.031830330 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:49:26 np0005629333 podman[354407]: 2026-02-25 12:49:26.055432407 +0000 UTC m=+0.126631676 container init 7b53b721630132d693daa620ddd634af4092aee8eab1fae2dcc83c54626328f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 07:49:26 np0005629333 podman[354407]: 2026-02-25 12:49:26.059731699 +0000 UTC m=+0.130930908 container start 7b53b721630132d693daa620ddd634af4092aee8eab1fae2dcc83c54626328f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 25 07:49:26 np0005629333 neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc[354422]: [NOTICE]   (354426) : New worker (354428) forked
Feb 25 07:49:26 np0005629333 neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc[354422]: [NOTICE]   (354426) : Loading success.
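
Per the config dumped earlier, this haproxy binds 169.254.169.254:80 inside the ovnmeta namespace and forwards to the agent's UNIX socket (haproxy treats a server address given as an absolute path as a UNIX socket), stamping each request with X-OVN-Network-ID so the metadata service can resolve which network the caller sits on. Once the worker reports "Loading success", the listener can be probed from the namespace; a hedged check, assuming curl is installed on the host and run as root:

    # Sketch: probe the per-network metadata proxy from inside its namespace.
    import subprocess

    NS = 'ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc'  # from the log
    out = subprocess.run(
        ['ip', 'netns', 'exec', NS,
         'curl', '-s', '-o', '/dev/null', '-w', '%{http_code}',
         'http://169.254.169.254/'],
        capture_output=True, text=True, check=True)
    print('metadata proxy answered HTTP', out.stdout)
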
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.105 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 344b59b1-93f2-4e23-8c28-835e5d954630 in datapath eb832dde-9848-40c5-9505-cc643b1bd0fa unbound from our chassis#033[00m
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.108 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eb832dde-9848-40c5-9505-cc643b1bd0fa#033[00m
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.115 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[35591b7b-4bbb-4702-85ba-f852b1a173ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.116 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeb832dde-91 in ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.119 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeb832dde-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.119 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eb1a0cd9-4269-4769-98ca-37bd28f20e22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.120 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[29caf8fd-3097-4d38-b999-c6e37fb28320]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.131 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[3fe10efd-ac8e-439b-b75f-f003c876a76b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.155 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3d99de92-0f1b-40fd-947c-0362f372b395]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.179 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[6831c47b-9741-44e3-88fd-ba12e6d77ce4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:26 np0005629333 NetworkManager[49836]: <info>  [1772023766.1864] manager: (tapeb832dde-90): new Veth device (/org/freedesktop/NetworkManager/Devices/536)
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.185 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a9251794-64bc-4d66-a088-4b5f4ce42c1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:26 np0005629333 systemd-udevd[354324]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.211 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c3762ba3-61ed-4afd-872c-2b74f57d8007]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.214 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e82ef732-8863-48da-b98d-5b44a0f1cec9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:26 np0005629333 NetworkManager[49836]: <info>  [1772023766.2368] device (tapeb832dde-90): carrier: link connected
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.245 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1b60d522-eb2d-4595-a9f1-b4fb071941cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:26 np0005629333 nova_compute[244014]: 2026-02-25 12:49:26.247 244018 DEBUG nova.compute.manager [req-23a5580f-44f5-45c8-8232-5c59a220d786 req-b39b9514-a044-47b2-b189-b89badc09af4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-vif-plugged-dee62982-d46c-4a81-b4ad-8154d7cfc7af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:49:26 np0005629333 nova_compute[244014]: 2026-02-25 12:49:26.248 244018 DEBUG oslo_concurrency.lockutils [req-23a5580f-44f5-45c8-8232-5c59a220d786 req-b39b9514-a044-47b2-b189-b89badc09af4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:26 np0005629333 nova_compute[244014]: 2026-02-25 12:49:26.248 244018 DEBUG oslo_concurrency.lockutils [req-23a5580f-44f5-45c8-8232-5c59a220d786 req-b39b9514-a044-47b2-b189-b89badc09af4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:26 np0005629333 nova_compute[244014]: 2026-02-25 12:49:26.248 244018 DEBUG oslo_concurrency.lockutils [req-23a5580f-44f5-45c8-8232-5c59a220d786 req-b39b9514-a044-47b2-b189-b89badc09af4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:26 np0005629333 nova_compute[244014]: 2026-02-25 12:49:26.248 244018 DEBUG nova.compute.manager [req-23a5580f-44f5-45c8-8232-5c59a220d786 req-b39b9514-a044-47b2-b189-b89badc09af4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Processing event network-vif-plugged-dee62982-d46c-4a81-b4ad-8154d7cfc7af _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.261 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c110a3d3-d24f-4277-a0f3-31840d246579]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb832dde-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:a5:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573564, 'reachable_time': 17677, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354447, 'error': None, 'target': 'ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.272 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[665f69d3-55d7-42fd-8986-2df926350080]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8e:a537'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573564, 'tstamp': 573564}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354448, 'error': None, 'target': 'ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.290 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d45f364a-f45b-4e34-8a5f-56b32086a06f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb832dde-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:a5:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573564, 'reachable_time': 17677, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 354449, 'error': None, 'target': 'ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.311 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[01f3fd79-d3f7-47ee-b81f-5a1aaa701e3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.332 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[284b242e-8e2b-40d8-9f5c-59d9546bc580]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.333 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb832dde-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.333 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.334 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb832dde-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:49:26 np0005629333 nova_compute[244014]: 2026-02-25 12:49:26.336 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:26 np0005629333 NetworkManager[49836]: <info>  [1772023766.3371] manager: (tapeb832dde-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/537)
Feb 25 07:49:26 np0005629333 kernel: tapeb832dde-90: entered promiscuous mode
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.338 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeb832dde-90, col_values=(('external_ids', {'iface-id': 'b3234524-e109-417d-a204-a0ab750c983e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:49:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:26Z|01280|binding|INFO|Releasing lport b3234524-e109-417d-a204-a0ab750c983e from this chassis (sb_readonly=0)
Feb 25 07:49:26 np0005629333 nova_compute[244014]: 2026-02-25 12:49:26.339 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:26 np0005629333 nova_compute[244014]: 2026-02-25 12:49:26.345 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:26 np0005629333 nova_compute[244014]: 2026-02-25 12:49:26.346 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.346 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eb832dde-9848-40c5-9505-cc643b1bd0fa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eb832dde-9848-40c5-9505-cc643b1bd0fa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.347 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f489c13f-7c59-481a-a998-c6a0380584f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.348 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-eb832dde-9848-40c5-9505-cc643b1bd0fa
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/eb832dde-9848-40c5-9505-cc643b1bd0fa.pid.haproxy
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID eb832dde-9848-40c5-9505-cc643b1bd0fa
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:49:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:26.349 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa', 'env', 'PROCESS_TAG=haproxy-eb832dde-9848-40c5-9505-cc643b1bd0fa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/eb832dde-9848-40c5-9505-cc643b1bd0fa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 07:49:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Feb 25 07:49:26 np0005629333 podman[354480]: 2026-02-25 12:49:26.662801868 +0000 UTC m=+0.038638302 container create af9a11f6a910945d2cf71f6bc4e628f38eee74c652461113531378a36272d49c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:49:26 np0005629333 systemd[1]: Started libpod-conmon-af9a11f6a910945d2cf71f6bc4e628f38eee74c652461113531378a36272d49c.scope.
Feb 25 07:49:26 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:49:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1abdf7d75bc10bb95826887c814df1a0ab66d19dd1ae55e1c4beadda3a359345/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:49:26 np0005629333 podman[354480]: 2026-02-25 12:49:26.711913394 +0000 UTC m=+0.087749848 container init af9a11f6a910945d2cf71f6bc4e628f38eee74c652461113531378a36272d49c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 07:49:26 np0005629333 podman[354480]: 2026-02-25 12:49:26.716211116 +0000 UTC m=+0.092047550 container start af9a11f6a910945d2cf71f6bc4e628f38eee74c652461113531378a36272d49c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
Feb 25 07:49:26 np0005629333 podman[354480]: 2026-02-25 12:49:26.643627946 +0000 UTC m=+0.019464400 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:49:26 np0005629333 neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa[354495]: [NOTICE]   (354499) : New worker (354501) forked
Feb 25 07:49:26 np0005629333 neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa[354495]: [NOTICE]   (354499) : Loading success.
Feb 25 07:49:27 np0005629333 nova_compute[244014]: 2026-02-25 12:49:27.873 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:49:27 np0005629333 nova_compute[244014]: 2026-02-25 12:49:27.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:49:27 np0005629333 nova_compute[244014]: 2026-02-25 12:49:27.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 07:49:27 np0005629333 nova_compute[244014]: 2026-02-25 12:49:27.905 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.347 244018 DEBUG nova.compute.manager [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-vif-plugged-dee62982-d46c-4a81-b4ad-8154d7cfc7af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.348 244018 DEBUG oslo_concurrency.lockutils [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.348 244018 DEBUG oslo_concurrency.lockutils [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.349 244018 DEBUG oslo_concurrency.lockutils [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.349 244018 DEBUG nova.compute.manager [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] No event matching network-vif-plugged-dee62982-d46c-4a81-b4ad-8154d7cfc7af in dict_keys([('network-vif-plugged', '344b59b1-93f2-4e23-8c28-835e5d954630')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.349 244018 WARNING nova.compute.manager [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received unexpected event network-vif-plugged-dee62982-d46c-4a81-b4ad-8154d7cfc7af for instance with vm_state building and task_state spawning.#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.350 244018 DEBUG nova.compute.manager [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-vif-plugged-344b59b1-93f2-4e23-8c28-835e5d954630 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.350 244018 DEBUG oslo_concurrency.lockutils [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.350 244018 DEBUG oslo_concurrency.lockutils [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.351 244018 DEBUG oslo_concurrency.lockutils [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.352 244018 DEBUG nova.compute.manager [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Processing event network-vif-plugged-344b59b1-93f2-4e23-8c28-835e5d954630 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.352 244018 DEBUG nova.compute.manager [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-vif-plugged-344b59b1-93f2-4e23-8c28-835e5d954630 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.352 244018 DEBUG oslo_concurrency.lockutils [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.353 244018 DEBUG oslo_concurrency.lockutils [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.353 244018 DEBUG oslo_concurrency.lockutils [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.353 244018 DEBUG nova.compute.manager [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] No waiting events found dispatching network-vif-plugged-344b59b1-93f2-4e23-8c28-835e5d954630 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.353 244018 WARNING nova.compute.manager [req-5647ed8e-6ce8-4987-97c3-d9bbd2f06a39 req-b659d8d7-a4ea-4e64-a420-e19311d60c45 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received unexpected event network-vif-plugged-344b59b1-93f2-4e23-8c28-835e5d954630 for instance with vm_state building and task_state spawning.#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.354 244018 DEBUG nova.compute.manager [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.359 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023768.3588724, b6501baa-8bc9-4724-b4c1-8ff43faf517c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.359 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.361 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.366 244018 INFO nova.virt.libvirt.driver [-] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Instance spawned successfully.#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.367 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.387 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.394 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.397 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.398 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.398 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.399 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.399 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.400 244018 DEBUG nova.virt.libvirt.driver [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.428 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.466 244018 INFO nova.compute.manager [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Took 14.19 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.467 244018 DEBUG nova.compute.manager [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.511 244018 DEBUG oslo_concurrency.lockutils [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "interface-37ce1876-2b57-4f84-800c-2a1b6eaa6943-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.511 244018 DEBUG oslo_concurrency.lockutils [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "interface-37ce1876-2b57-4f84-800c-2a1b6eaa6943-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.512 244018 DEBUG nova.objects.instance [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'flavor' on Instance uuid 37ce1876-2b57-4f84-800c-2a1b6eaa6943 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:49:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2067: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 350 KiB/s rd, 3.9 MiB/s wr, 100 op/s
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.545 244018 INFO nova.compute.manager [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Took 15.51 seconds to build instance.#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.577 244018 DEBUG oslo_concurrency.lockutils [None req-66fddd09-0a67-449b-8d75-d303fb605f90 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.895 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.896 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.896 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.897 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.897 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.986 244018 DEBUG nova.objects.instance [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_requests' on Instance uuid 37ce1876-2b57-4f84-800c-2a1b6eaa6943 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:49:28 np0005629333 nova_compute[244014]: 2026-02-25 12:49:28.997 244018 DEBUG nova.network.neutron [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:49:29 np0005629333 nova_compute[244014]: 2026-02-25 12:49:29.120 244018 DEBUG nova.policy [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:49:29 np0005629333 nova_compute[244014]: 2026-02-25 12:49:29.414 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:49:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1950405128' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:49:29 np0005629333 nova_compute[244014]: 2026-02-25 12:49:29.485 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:49:29 np0005629333 nova_compute[244014]: 2026-02-25 12:49:29.561 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:49:29 np0005629333 nova_compute[244014]: 2026-02-25 12:49:29.562 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:49:29 np0005629333 nova_compute[244014]: 2026-02-25 12:49:29.565 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:49:29 np0005629333 nova_compute[244014]: 2026-02-25 12:49:29.565 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:49:29 np0005629333 nova_compute[244014]: 2026-02-25 12:49:29.619 244018 DEBUG nova.network.neutron [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Successfully created port: 887af58f-85b4-49c2-93b7-855c843c0cd5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:49:29 np0005629333 nova_compute[244014]: 2026-02-25 12:49:29.723 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:49:29 np0005629333 nova_compute[244014]: 2026-02-25 12:49:29.724 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3345MB free_disk=59.920919011346996GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 07:49:29 np0005629333 nova_compute[244014]: 2026-02-25 12:49:29.724 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:29 np0005629333 nova_compute[244014]: 2026-02-25 12:49:29.725 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:29 np0005629333 nova_compute[244014]: 2026-02-25 12:49:29.818 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 37ce1876-2b57-4f84-800c-2a1b6eaa6943 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:49:29 np0005629333 nova_compute[244014]: 2026-02-25 12:49:29.819 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance b6501baa-8bc9-4724-b4c1-8ff43faf517c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:49:29 np0005629333 nova_compute[244014]: 2026-02-25 12:49:29.819 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 07:49:29 np0005629333 nova_compute[244014]: 2026-02-25 12:49:29.819 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 07:49:29 np0005629333 nova_compute[244014]: 2026-02-25 12:49:29.873 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:49:30 np0005629333 nova_compute[244014]: 2026-02-25 12:49:30.308 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:30 np0005629333 nova_compute[244014]: 2026-02-25 12:49:30.333 244018 DEBUG nova.network.neutron [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Successfully updated port: 887af58f-85b4-49c2-93b7-855c843c0cd5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:49:30 np0005629333 nova_compute[244014]: 2026-02-25 12:49:30.347 244018 DEBUG oslo_concurrency.lockutils [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:49:30 np0005629333 nova_compute[244014]: 2026-02-25 12:49:30.347 244018 DEBUG oslo_concurrency.lockutils [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:49:30 np0005629333 nova_compute[244014]: 2026-02-25 12:49:30.348 244018 DEBUG nova.network.neutron [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:49:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:49:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2470822927' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:49:30 np0005629333 nova_compute[244014]: 2026-02-25 12:49:30.376 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:49:30 np0005629333 nova_compute[244014]: 2026-02-25 12:49:30.381 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:49:30 np0005629333 nova_compute[244014]: 2026-02-25 12:49:30.397 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:49:30 np0005629333 nova_compute[244014]: 2026-02-25 12:49:30.418 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 07:49:30 np0005629333 nova_compute[244014]: 2026-02-25 12:49:30.419 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:30 np0005629333 nova_compute[244014]: 2026-02-25 12:49:30.491 244018 DEBUG nova.compute.manager [req-9fd9b4ca-6a0d-4770-a80f-6950bd34dff7 req-a66dae6d-05e6-4637-926a-4d2b426eb704 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received event network-changed-887af58f-85b4-49c2-93b7-855c843c0cd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:49:30 np0005629333 nova_compute[244014]: 2026-02-25 12:49:30.492 244018 DEBUG nova.compute.manager [req-9fd9b4ca-6a0d-4770-a80f-6950bd34dff7 req-a66dae6d-05e6-4637-926a-4d2b426eb704 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Refreshing instance network info cache due to event network-changed-887af58f-85b4-49c2-93b7-855c843c0cd5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:49:30 np0005629333 nova_compute[244014]: 2026-02-25 12:49:30.493 244018 DEBUG oslo_concurrency.lockutils [req-9fd9b4ca-6a0d-4770-a80f-6950bd34dff7 req-a66dae6d-05e6-4637-926a-4d2b426eb704 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:49:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 63 KiB/s wr, 14 op/s
Feb 25 07:49:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:49:31
Feb 25 07:49:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:49:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:49:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'images', 'vms']
Feb 25 07:49:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:49:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:49:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:49:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:49:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:49:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:49:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:49:31 np0005629333 podman[354556]: 2026-02-25 12:49:31.72619827 +0000 UTC m=+0.064920904 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:49:31 np0005629333 podman[354555]: 2026-02-25 12:49:31.731655914 +0000 UTC m=+0.073049054 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true)
Feb 25 07:49:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:49:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:49:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:49:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:49:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:49:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:49:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:49:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:49:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:49:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.305 244018 DEBUG nova.network.neutron [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Updating instance_info_cache with network_info: [{"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "887af58f-85b4-49c2-93b7-855c843c0cd5", "address": "fa:16:3e:08:00:77", "network": {"id": "f8617723-856d-4eb3-ab8d-62fd3cacab7e", "bridge": "br-int", "label": "tempest-network-smoke--1849940990", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap887af58f-85", "ovs_interfaceid": "887af58f-85b4-49c2-93b7-855c843c0cd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.330 244018 DEBUG oslo_concurrency.lockutils [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.331 244018 DEBUG oslo_concurrency.lockutils [req-9fd9b4ca-6a0d-4770-a80f-6950bd34dff7 req-a66dae6d-05e6-4637-926a-4d2b426eb704 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.332 244018 DEBUG nova.network.neutron [req-9fd9b4ca-6a0d-4770-a80f-6950bd34dff7 req-a66dae6d-05e6-4637-926a-4d2b426eb704 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Refreshing network info cache for port 887af58f-85b4-49c2-93b7-855c843c0cd5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.335 244018 DEBUG nova.virt.libvirt.vif [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:48:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1787082415',display_name='tempest-TestNetworkBasicOps-server-1787082415',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1787082415',id=121,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKssTuj33zFMa3evVOk6flWSjlUceMgeiXlovCojJrinnNv1phI+OVsHaPYu7D22Bc6n/wlCJmAYv3X7OJM/21EB1B1U1udfCABsCK7Pj9XBc8dCdeNdko4o08iW1C9pCg==',key_name='tempest-TestNetworkBasicOps-1218860627',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:49:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-h0ic083v',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:49:06Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=37ce1876-2b57-4f84-800c-2a1b6eaa6943,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "887af58f-85b4-49c2-93b7-855c843c0cd5", "address": "fa:16:3e:08:00:77", "network": {"id": "f8617723-856d-4eb3-ab8d-62fd3cacab7e", "bridge": "br-int", "label": "tempest-network-smoke--1849940990", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap887af58f-85", "ovs_interfaceid": "887af58f-85b4-49c2-93b7-855c843c0cd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.336 244018 DEBUG nova.network.os_vif_util [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "887af58f-85b4-49c2-93b7-855c843c0cd5", "address": "fa:16:3e:08:00:77", "network": {"id": "f8617723-856d-4eb3-ab8d-62fd3cacab7e", "bridge": "br-int", "label": "tempest-network-smoke--1849940990", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap887af58f-85", "ovs_interfaceid": "887af58f-85b4-49c2-93b7-855c843c0cd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.336 244018 DEBUG nova.network.os_vif_util [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:00:77,bridge_name='br-int',has_traffic_filtering=True,id=887af58f-85b4-49c2-93b7-855c843c0cd5,network=Network(f8617723-856d-4eb3-ab8d-62fd3cacab7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap887af58f-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.337 244018 DEBUG os_vif [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:00:77,bridge_name='br-int',has_traffic_filtering=True,id=887af58f-85b4-49c2-93b7-855c843c0cd5,network=Network(f8617723-856d-4eb3-ab8d-62fd3cacab7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap887af58f-85') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.338 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.338 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.339 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.341 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.341 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap887af58f-85, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.342 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap887af58f-85, col_values=(('external_ids', {'iface-id': '887af58f-85b4-49c2-93b7-855c843c0cd5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:08:00:77', 'vm-uuid': '37ce1876-2b57-4f84-800c-2a1b6eaa6943'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:49:32 np0005629333 NetworkManager[49836]: <info>  [1772023772.3451] manager: (tap887af58f-85): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/538)
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.348 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.351 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.352 244018 INFO os_vif [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:00:77,bridge_name='br-int',has_traffic_filtering=True,id=887af58f-85b4-49c2-93b7-855c843c0cd5,network=Network(f8617723-856d-4eb3-ab8d-62fd3cacab7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap887af58f-85')#033[00m
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.354 244018 DEBUG nova.virt.libvirt.vif [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:48:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1787082415',display_name='tempest-TestNetworkBasicOps-server-1787082415',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1787082415',id=121,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKssTuj33zFMa3evVOk6flWSjlUceMgeiXlovCojJrinnNv1phI+OVsHaPYu7D22Bc6n/wlCJmAYv3X7OJM/21EB1B1U1udfCABsCK7Pj9XBc8dCdeNdko4o08iW1C9pCg==',key_name='tempest-TestNetworkBasicOps-1218860627',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:49:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-h0ic083v',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:49:06Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=37ce1876-2b57-4f84-800c-2a1b6eaa6943,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "887af58f-85b4-49c2-93b7-855c843c0cd5", "address": "fa:16:3e:08:00:77", "network": {"id": "f8617723-856d-4eb3-ab8d-62fd3cacab7e", "bridge": "br-int", "label": "tempest-network-smoke--1849940990", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap887af58f-85", "ovs_interfaceid": "887af58f-85b4-49c2-93b7-855c843c0cd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.354 244018 DEBUG nova.network.os_vif_util [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "887af58f-85b4-49c2-93b7-855c843c0cd5", "address": "fa:16:3e:08:00:77", "network": {"id": "f8617723-856d-4eb3-ab8d-62fd3cacab7e", "bridge": "br-int", "label": "tempest-network-smoke--1849940990", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap887af58f-85", "ovs_interfaceid": "887af58f-85b4-49c2-93b7-855c843c0cd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.355 244018 DEBUG nova.network.os_vif_util [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:00:77,bridge_name='br-int',has_traffic_filtering=True,id=887af58f-85b4-49c2-93b7-855c843c0cd5,network=Network(f8617723-856d-4eb3-ab8d-62fd3cacab7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap887af58f-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
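The records above show the full os-vif round trip for port 887af58f: nova converts its legacy VIF dict into a VIFOpenVSwitch object and plugs it into br-int. A minimal sketch of that construction and plug call, assuming the public os-vif API (field values are copied from the log; everything else is illustrative):

    # Sketch only: rebuild the plug call behind the records above using the
    # public os-vif API. All field values are copied from the log.
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin among others

    net = network.Network(id='f8617723-856d-4eb3-ab8d-62fd3cacab7e',
                          bridge='br-int', mtu=1442)
    profile = vif.VIFPortProfileOpenVSwitch(
        interface_id='887af58f-85b4-49c2-93b7-855c843c0cd5')
    osvif_vif = vif.VIFOpenVSwitch(
        id='887af58f-85b4-49c2-93b7-855c843c0cd5',
        address='fa:16:3e:08:00:77',
        network=net,
        has_traffic_filtering=True,
        preserve_on_delete=False,
        plugin='ovs',
        port_profile=profile,
        vif_name='tap887af58f-85')
    inst = instance_info.InstanceInfo(
        uuid='37ce1876-2b57-4f84-800c-2a1b6eaa6943',
        name='tempest-TestNetworkBasicOps-server-1787082415')

    os_vif.plug(osvif_vif, inst)  # -> the "Successfully plugged vif" INFO record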
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.358 244018 DEBUG nova.virt.libvirt.guest [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] attach device xml: <interface type="ethernet">
Feb 25 07:49:32 np0005629333 nova_compute[244014]:  <mac address="fa:16:3e:08:00:77"/>
Feb 25 07:49:32 np0005629333 nova_compute[244014]:  <model type="virtio"/>
Feb 25 07:49:32 np0005629333 nova_compute[244014]:  <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:49:32 np0005629333 nova_compute[244014]:  <mtu size="1442"/>
Feb 25 07:49:32 np0005629333 nova_compute[244014]:  <target dev="tap887af58f-85"/>
Feb 25 07:49:32 np0005629333 nova_compute[244014]: </interface>
Feb 25 07:49:32 np0005629333 nova_compute[244014]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
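Nova then hands that interface snippet to libvirt. A minimal sketch of the direct libvirt-python call that Guest.attach_device wraps (connection URI and flag choice are assumptions based on nova's usual hot-plug behaviour):

    # Sketch: the direct libvirt call corresponding to the attach above.
    import libvirt

    ATTACH_XML = """<interface type="ethernet">
      <mac address="fa:16:3e:08:00:77"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tap887af58f-85"/>
    </interface>"""

    conn = libvirt.open('qemu:///system')  # assumed URI
    dom = conn.lookupByUUIDString('37ce1876-2b57-4f84-800c-2a1b6eaa6943')
    # Hot-plug into the running guest and persist in the domain definition.
    dom.attachDeviceFlags(ATTACH_XML,
                          libvirt.VIR_DOMAIN_AFFECT_LIVE |
                          libvirt.VIR_DOMAIN_AFFECT_CONFIG)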
Feb 25 07:49:32 np0005629333 NetworkManager[49836]: <info>  [1772023772.3681] manager: (tap887af58f-85): new Tun device (/org/freedesktop/NetworkManager/Devices/539)
Feb 25 07:49:32 np0005629333 kernel: tap887af58f-85: entered promiscuous mode
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.381 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:32Z|01281|binding|INFO|Claiming lport 887af58f-85b4-49c2-93b7-855c843c0cd5 for this chassis.
Feb 25 07:49:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:32Z|01282|binding|INFO|887af58f-85b4-49c2-93b7-855c843c0cd5: Claiming fa:16:3e:08:00:77 10.100.0.29
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.393 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:00:77 10.100.0.29'], port_security=['fa:16:3e:08:00:77 10.100.0.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': '37ce1876-2b57-4f84-800c-2a1b6eaa6943', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f8617723-856d-4eb3-ab8d-62fd3cacab7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4f6e829b-663a-471b-98d6-d0d40f869440', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd570f87-4a07-4ca5-8faf-e2384fdeb2a2, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=887af58f-85b4-49c2-93b7-855c843c0cd5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.395 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 887af58f-85b4-49c2-93b7-855c843c0cd5 in datapath f8617723-856d-4eb3-ab8d-62fd3cacab7e bound to our chassis#033[00m
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.397 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f8617723-856d-4eb3-ab8d-62fd3cacab7e#033[00m
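The metadata agent learns about the chassis claim through an ovsdbapp row event on the southbound Port_Binding table, as the "Matched UPDATE: PortBindingUpdatedEvent" record shows. A minimal sketch of such a handler, assuming ovsdbapp's RowEvent interface (the match logic here is illustrative, not neutron's exact code):

    # Sketch: an ovsdbapp row event that fires when a Port_Binding row is
    # claimed by a chassis, in the spirit of the event matched above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Only 'update' events on the southbound Port_Binding table.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Newly bound: chassis set now, empty in the old copy of the row.
            return bool(row.chassis) and not getattr(old, 'chassis', None)

        def run(self, event, row, old):
            print('Port %s bound to our chassis' % row.logical_port)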
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.400 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:32Z|01283|binding|INFO|Setting lport 887af58f-85b4-49c2-93b7-855c843c0cd5 ovn-installed in OVS
Feb 25 07:49:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:32Z|01284|binding|INFO|Setting lport 887af58f-85b4-49c2-93b7-855c843c0cd5 up in Southbound
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.404 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:32 np0005629333 systemd-udevd[354606]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.405 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[73754554-30f1-42a7-9113-290c61a51210]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.407 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf8617723-81 in ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.409 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf8617723-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.409 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dd21a12f-2ad0-4ce7-969d-a6c04e14990f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.412 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cbccb4d5-269a-46ce-bfcf-61e5710a3043]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:32 np0005629333 NetworkManager[49836]: <info>  [1772023772.4205] device (tap887af58f-85): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:49:32 np0005629333 NetworkManager[49836]: <info>  [1772023772.4211] device (tap887af58f-85): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.424 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[75060272-bd2d-42df-bef6-39facf5d34ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.448 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f811d8d4-8096-42bf-8542-506605673dc1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.455 244018 DEBUG nova.virt.libvirt.driver [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.456 244018 DEBUG nova.virt.libvirt.driver [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.456 244018 DEBUG nova.virt.libvirt.driver [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:50:74:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.457 244018 DEBUG nova.virt.libvirt.driver [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:08:00:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.472 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c0221642-ecc8-4b39-bb99-00e995582765]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.478 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2c257432-80ea-4c19-9243-43079863834c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:32 np0005629333 NetworkManager[49836]: <info>  [1772023772.4794] manager: (tapf8617723-80): new Veth device (/org/freedesktop/NetworkManager/Devices/540)
Feb 25 07:49:32 np0005629333 systemd-udevd[354609]: Network interface NamePolicy= disabled on kernel command line.
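"Creating VETH tapf8617723-81 in ovnmeta-f8617723-… namespace", together with the tapf8617723-80 peer surfacing on the host side, corresponds to a veth pair with one end pushed into the per-network metadata namespace. A minimal pyroute2 sketch of that operation (neutron actually performs it through privsep'd ip_lib helpers; interface and namespace names are copied from the log):

    # Sketch: veth pair with one end inside the ovnmeta namespace, roughly
    # what neutron's ip_lib does here under privsep.
    from pyroute2 import IPRoute, netns

    NS = 'ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e'
    try:
        netns.create(NS)          # the agent normally created this already
    except OSError:
        pass                      # namespace exists

    ipr = IPRoute()
    ipr.link('add', ifname='tapf8617723-80', kind='veth',
             peer={'ifname': 'tapf8617723-81', 'net_ns_fd': NS})
    idx = ipr.link_lookup(ifname='tapf8617723-80')[0]
    ipr.link('set', index=idx, state='up')
    ipr.close()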
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.497 244018 DEBUG nova.virt.libvirt.guest [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:49:32 np0005629333 nova_compute[244014]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:49:32 np0005629333 nova_compute[244014]:  <nova:name>tempest-TestNetworkBasicOps-server-1787082415</nova:name>
Feb 25 07:49:32 np0005629333 nova_compute[244014]:  <nova:creationTime>2026-02-25 12:49:32</nova:creationTime>
Feb 25 07:49:32 np0005629333 nova_compute[244014]:  <nova:flavor name="m1.nano">
Feb 25 07:49:32 np0005629333 nova_compute[244014]:    <nova:memory>128</nova:memory>
Feb 25 07:49:32 np0005629333 nova_compute[244014]:    <nova:disk>1</nova:disk>
Feb 25 07:49:32 np0005629333 nova_compute[244014]:    <nova:swap>0</nova:swap>
Feb 25 07:49:32 np0005629333 nova_compute[244014]:    <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:49:32 np0005629333 nova_compute[244014]:    <nova:vcpus>1</nova:vcpus>
Feb 25 07:49:32 np0005629333 nova_compute[244014]:  </nova:flavor>
Feb 25 07:49:32 np0005629333 nova_compute[244014]:  <nova:owner>
Feb 25 07:49:32 np0005629333 nova_compute[244014]:    <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 07:49:32 np0005629333 nova_compute[244014]:    <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 07:49:32 np0005629333 nova_compute[244014]:  </nova:owner>
Feb 25 07:49:32 np0005629333 nova_compute[244014]:  <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:49:32 np0005629333 nova_compute[244014]:  <nova:ports>
Feb 25 07:49:32 np0005629333 nova_compute[244014]:    <nova:port uuid="cda27370-858e-4443-819e-696576515c52">
Feb 25 07:49:32 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 07:49:32 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:49:32 np0005629333 nova_compute[244014]:    <nova:port uuid="887af58f-85b4-49c2-93b7-855c843c0cd5">
Feb 25 07:49:32 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Feb 25 07:49:32 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:49:32 np0005629333 nova_compute[244014]:  </nova:ports>
Feb 25 07:49:32 np0005629333 nova_compute[244014]: </nova:instance>
Feb 25 07:49:32 np0005629333 nova_compute[244014]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
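The nova:instance document above is stored as libvirt domain metadata under nova's XML namespace. A minimal sketch of the call that Guest.set_metadata wraps (namespace URI per the logged document; the metadata body is abbreviated here):

    # Sketch: store the nova metadata element on the domain, live + config.
    import libvirt

    NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.1'
    # Body abbreviated; the full document is the nova:instance dump above.
    metadata_xml = ('<instance xmlns="%s">'
                    '<name>tempest-TestNetworkBasicOps-server-1787082415</name>'
                    '</instance>' % NOVA_NS)

    conn = libvirt.open('qemu:///system')  # assumed URI
    dom = conn.lookupByUUIDString('37ce1876-2b57-4f84-800c-2a1b6eaa6943')
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT,
                    metadata_xml, 'instance', NOVA_NS,
                    libvirt.VIR_DOMAIN_AFFECT_LIVE |
                    libvirt.VIR_DOMAIN_AFFECT_CONFIG)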
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.503 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[57c19cc0-f1d0-484d-9a01-7937417f51d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.507 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[24cdd947-affc-4206-88b7-f9f80379f2ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:32 np0005629333 NetworkManager[49836]: <info>  [1772023772.5231] device (tapf8617723-80): carrier: link connected
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.524 244018 DEBUG oslo_concurrency.lockutils [None req-f5d2545f-f279-4baf-a78c-3c8bbd39d2f7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "interface-37ce1876-2b57-4f84-800c-2a1b6eaa6943-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 4.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.528 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[284ae3b9-0fe2-4e8d-a501-46383a38ce2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 65 KiB/s wr, 61 op/s
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.542 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[25cc46f3-0370-45de-a11c-b8598c3308d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf8617723-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:4b:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 385], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 574193, 'reachable_time': 16327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354634, 'error': None, 'target': 'ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.553 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[aff99c6b-5dde-4b4d-b2cf-5c52770ae957]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3a:4b55'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 574193, 'tstamp': 574193}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354635, 'error': None, 'target': 'ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.567 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fdb07871-32c0-4a07-a795-1bd1ec25cf04]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf8617723-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:4b:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 385], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 574193, 'reachable_time': 16327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 354636, 'error': None, 'target': 'ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.591 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f92e32c5-c5f7-4580-9216-5606f1201c39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.638 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[716c8b34-b05b-480a-b24d-88975d24da05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.639 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8617723-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.639 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.639 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf8617723-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.641 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:32 np0005629333 NetworkManager[49836]: <info>  [1772023772.6432] manager: (tapf8617723-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/541)
Feb 25 07:49:32 np0005629333 kernel: tapf8617723-80: entered promiscuous mode
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.649 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.650 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf8617723-80, col_values=(('external_ids', {'iface-id': '256a466a-4cb8-48fa-a60d-9db8ac74f95b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
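The agent then rewires the host end of the veth in OVS: a DelPortCommand against br-ex (a no-op here), an AddPortCommand onto br-int, and a DbSetCommand stamping external_ids:iface-id so ovn-controller can bind the metadata port. A minimal sketch of the same sequence through ovsdbapp's Open_vSwitch schema API, assuming a local OVSDB socket (the socket path is an assumption):

    # Sketch: replay the three logged OVSDB commands in one ovsdbapp txn.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/var/run/openvswitch/db.sock'  # assumed socket path
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapf8617723-80', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapf8617723-80', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapf8617723-80',
            ('external_ids',
             {'iface-id': '256a466a-4cb8-48fa-a60d-9db8ac74f95b'})))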
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.651 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:32Z|01285|binding|INFO|Releasing lport 256a466a-4cb8-48fa-a60d-9db8ac74f95b from this chassis (sb_readonly=0)
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.665 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f8617723-856d-4eb3-ab8d-62fd3cacab7e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f8617723-856d-4eb3-ab8d-62fd3cacab7e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.665 244018 DEBUG nova.compute.manager [req-8a9aa50f-b845-41ff-ad1c-95e06f70c7b7 req-ee7ced94-9984-4117-91c0-9fab7ca90d55 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received event network-vif-plugged-887af58f-85b4-49c2-93b7-855c843c0cd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.666 244018 DEBUG oslo_concurrency.lockutils [req-8a9aa50f-b845-41ff-ad1c-95e06f70c7b7 req-ee7ced94-9984-4117-91c0-9fab7ca90d55 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.666 244018 DEBUG oslo_concurrency.lockutils [req-8a9aa50f-b845-41ff-ad1c-95e06f70c7b7 req-ee7ced94-9984-4117-91c0-9fab7ca90d55 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.666 244018 DEBUG oslo_concurrency.lockutils [req-8a9aa50f-b845-41ff-ad1c-95e06f70c7b7 req-ee7ced94-9984-4117-91c0-9fab7ca90d55 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.666 244018 DEBUG nova.compute.manager [req-8a9aa50f-b845-41ff-ad1c-95e06f70c7b7 req-ee7ced94-9984-4117-91c0-9fab7ca90d55 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] No waiting events found dispatching network-vif-plugged-887af58f-85b4-49c2-93b7-855c843c0cd5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.667 244018 WARNING nova.compute.manager [req-8a9aa50f-b845-41ff-ad1c-95e06f70c7b7 req-ee7ced94-9984-4117-91c0-9fab7ca90d55 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received unexpected event network-vif-plugged-887af58f-85b4-49c2-93b7-855c843c0cd5 for instance with vm_state active and task_state None.#033[00m
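The network-vif-plugged event reaching nova here comes from neutron via nova's os-server-external-events API; since the instance is already active with no task in flight, nova logs it as unexpected and drops it, which is harmless. A hedged sketch of the REST call behind the event (endpoint and token are placeholders; field names follow the nova API):

    # Sketch: the os-server-external-events POST behind this record.
    import requests

    NOVA = 'http://nova-api.example.com:8774/v2.1'  # assumed endpoint
    TOKEN = 'REPLACE_WITH_KEYSTONE_TOKEN'           # assumed auth token

    resp = requests.post(
        NOVA + '/os-server-external-events',
        headers={'X-Auth-Token': TOKEN},
        json={'events': [{
            'name': 'network-vif-plugged',
            'server_uuid': '37ce1876-2b57-4f84-800c-2a1b6eaa6943',
            'tag': '887af58f-85b4-49c2-93b7-855c843c0cd5',
        }]})
    resp.raise_for_status()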
Feb 25 07:49:32 np0005629333 nova_compute[244014]: 2026-02-25 12:49:32.667 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.668 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8f2eb0b3-470c-4484-ba64-8708fe5911df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.669 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-f8617723-856d-4eb3-ab8d-62fd3cacab7e
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/f8617723-856d-4eb3-ab8d-62fd3cacab7e.pid.haproxy
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID f8617723-856d-4eb3-ab8d-62fd3cacab7e
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:49:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:32.671 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e', 'env', 'PROCESS_TAG=haproxy-f8617723-856d-4eb3-ab8d-62fd3cacab7e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f8617723-856d-4eb3-ab8d-62fd3cacab7e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
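With the config rendered, the agent spawns haproxy inside the metadata namespace through rootwrap, exactly as the "Running command" record shows. A minimal sketch of the same spawn without the rootwrap indirection (requires root; paths are copied from the log, and haproxy self-daemonizes per the 'daemon' directive above, so the call returns promptly):

    # Sketch: spawn the metadata haproxy in the ovnmeta namespace directly.
    import subprocess

    ns = 'ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e'
    cfg = ('/var/lib/neutron/ovn-metadata-proxy/'
           'f8617723-856d-4eb3-ab8d-62fd3cacab7e.conf')
    # haproxy daemonizes itself (the config sets 'daemon'), so this returns.
    subprocess.run(['ip', 'netns', 'exec', ns, 'haproxy', '-f', cfg],
                   check=True)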
Feb 25 07:49:33 np0005629333 podman[354668]: 2026-02-25 12:49:33.000332647 +0000 UTC m=+0.042262674 container create cac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:49:33 np0005629333 systemd[1]: Started libpod-conmon-cac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52.scope.
Feb 25 07:49:33 np0005629333 nova_compute[244014]: 2026-02-25 12:49:33.055 244018 DEBUG nova.compute.manager [req-1b14ac6b-4366-4fba-ab88-e31e063a7351 req-17c4573e-6015-427f-95d1-6807716b1d3e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-changed-dee62982-d46c-4a81-b4ad-8154d7cfc7af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:49:33 np0005629333 nova_compute[244014]: 2026-02-25 12:49:33.056 244018 DEBUG nova.compute.manager [req-1b14ac6b-4366-4fba-ab88-e31e063a7351 req-17c4573e-6015-427f-95d1-6807716b1d3e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Refreshing instance network info cache due to event network-changed-dee62982-d46c-4a81-b4ad-8154d7cfc7af. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:49:33 np0005629333 nova_compute[244014]: 2026-02-25 12:49:33.056 244018 DEBUG oslo_concurrency.lockutils [req-1b14ac6b-4366-4fba-ab88-e31e063a7351 req-17c4573e-6015-427f-95d1-6807716b1d3e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:49:33 np0005629333 nova_compute[244014]: 2026-02-25 12:49:33.056 244018 DEBUG oslo_concurrency.lockutils [req-1b14ac6b-4366-4fba-ab88-e31e063a7351 req-17c4573e-6015-427f-95d1-6807716b1d3e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:49:33 np0005629333 nova_compute[244014]: 2026-02-25 12:49:33.056 244018 DEBUG nova.network.neutron [req-1b14ac6b-4366-4fba-ab88-e31e063a7351 req-17c4573e-6015-427f-95d1-6807716b1d3e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Refreshing network info cache for port dee62982-d46c-4a81-b4ad-8154d7cfc7af _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:49:33 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:49:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5ae28e2e538a9b1e2fa1fd3ba97baafc40136904659ba2baf74eab449664c35/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:49:33 np0005629333 podman[354668]: 2026-02-25 12:49:33.071569269 +0000 UTC m=+0.113499306 container init cac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 07:49:33 np0005629333 podman[354668]: 2026-02-25 12:49:33.075408517 +0000 UTC m=+0.117338534 container start cac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:49:33 np0005629333 podman[354668]: 2026-02-25 12:49:32.97952147 +0000 UTC m=+0.021451517 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:49:33 np0005629333 neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e[354684]: [NOTICE]   (354688) : New worker (354690) forked
Feb 25 07:49:33 np0005629333 neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e[354684]: [NOTICE]   (354688) : Loading success.
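On this podified deployment the haproxy does not run on the bare host: the podman records above show the rootwrap call being serviced by a wrapper that starts haproxy inside a neutron-metadata-agent-ovn container joined to the ovnmeta namespace. A hedged sketch of a hand-rolled equivalent (image, container name, and config path come from the log; every flag here is an assumption about the wrapper's behaviour):

    # Sketch: a manual equivalent of the containerized haproxy start.
    import subprocess

    NS_PATH = '/var/run/netns/ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e'
    subprocess.run([
        'podman', 'run', '--detach',
        '--name', 'neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e',
        '--network', 'ns:' + NS_PATH,       # join the existing namespace
        '--volume', '/var/lib/neutron:/var/lib/neutron:z',
        'quay.io/podified-antelope-centos9/'
        'openstack-neutron-metadata-agent-ovn:current-podified',
        'haproxy', '-f',
        '/var/lib/neutron/ovn-metadata-proxy/'
        'f8617723-856d-4eb3-ab8d-62fd3cacab7e.conf',
    ], check=True)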
Feb 25 07:49:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:49:33 np0005629333 nova_compute[244014]: 2026-02-25 12:49:33.929 244018 DEBUG nova.network.neutron [req-9fd9b4ca-6a0d-4770-a80f-6950bd34dff7 req-a66dae6d-05e6-4637-926a-4d2b426eb704 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Updated VIF entry in instance network info cache for port 887af58f-85b4-49c2-93b7-855c843c0cd5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:49:33 np0005629333 nova_compute[244014]: 2026-02-25 12:49:33.930 244018 DEBUG nova.network.neutron [req-9fd9b4ca-6a0d-4770-a80f-6950bd34dff7 req-a66dae6d-05e6-4637-926a-4d2b426eb704 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Updating instance_info_cache with network_info: [{"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "887af58f-85b4-49c2-93b7-855c843c0cd5", "address": "fa:16:3e:08:00:77", "network": {"id": "f8617723-856d-4eb3-ab8d-62fd3cacab7e", "bridge": "br-int", "label": "tempest-network-smoke--1849940990", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap887af58f-85", "ovs_interfaceid": "887af58f-85b4-49c2-93b7-855c843c0cd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:49:33 np0005629333 nova_compute[244014]: 2026-02-25 12:49:33.943 244018 DEBUG oslo_concurrency.lockutils [req-9fd9b4ca-6a0d-4770-a80f-6950bd34dff7 req-a66dae6d-05e6-4637-926a-4d2b426eb704 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:49:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:34Z|00149|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:08:00:77 10.100.0.29
Feb 25 07:49:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:34Z|00150|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:08:00:77 10.100.0.29
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.172 244018 DEBUG oslo_concurrency.lockutils [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "interface-37ce1876-2b57-4f84-800c-2a1b6eaa6943-887af58f-85b4-49c2-93b7-855c843c0cd5" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.173 244018 DEBUG oslo_concurrency.lockutils [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "interface-37ce1876-2b57-4f84-800c-2a1b6eaa6943-887af58f-85b4-49c2-93b7-855c843c0cd5" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.188 244018 DEBUG nova.objects.instance [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'flavor' on Instance uuid 37ce1876-2b57-4f84-800c-2a1b6eaa6943 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.213 244018 DEBUG nova.virt.libvirt.vif [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:48:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1787082415',display_name='tempest-TestNetworkBasicOps-server-1787082415',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1787082415',id=121,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKssTuj33zFMa3evVOk6flWSjlUceMgeiXlovCojJrinnNv1phI+OVsHaPYu7D22Bc6n/wlCJmAYv3X7OJM/21EB1B1U1udfCABsCK7Pj9XBc8dCdeNdko4o08iW1C9pCg==',key_name='tempest-TestNetworkBasicOps-1218860627',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:49:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-h0ic083v',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:49:06Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=37ce1876-2b57-4f84-800c-2a1b6eaa6943,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "887af58f-85b4-49c2-93b7-855c843c0cd5", "address": "fa:16:3e:08:00:77", "network": {"id": "f8617723-856d-4eb3-ab8d-62fd3cacab7e", "bridge": "br-int", "label": "tempest-network-smoke--1849940990", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap887af58f-85", "ovs_interfaceid": "887af58f-85b4-49c2-93b7-855c843c0cd5", "qbh_params": null, "qbg_params": null, 
"active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.215 244018 DEBUG nova.network.os_vif_util [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "887af58f-85b4-49c2-93b7-855c843c0cd5", "address": "fa:16:3e:08:00:77", "network": {"id": "f8617723-856d-4eb3-ab8d-62fd3cacab7e", "bridge": "br-int", "label": "tempest-network-smoke--1849940990", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap887af58f-85", "ovs_interfaceid": "887af58f-85b4-49c2-93b7-855c843c0cd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.216 244018 DEBUG nova.network.os_vif_util [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:00:77,bridge_name='br-int',has_traffic_filtering=True,id=887af58f-85b4-49c2-93b7-855c843c0cd5,network=Network(f8617723-856d-4eb3-ab8d-62fd3cacab7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap887af58f-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.221 244018 DEBUG nova.virt.libvirt.guest [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:08:00:77"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap887af58f-85"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.223 244018 DEBUG nova.virt.libvirt.guest [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:08:00:77"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap887af58f-85"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.226 244018 DEBUG nova.virt.libvirt.driver [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Attempting to detach device tap887af58f-85 from instance 37ce1876-2b57-4f84-800c-2a1b6eaa6943 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.226 244018 DEBUG nova.virt.libvirt.guest [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] detach device xml: <interface type="ethernet">
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <mac address="fa:16:3e:08:00:77"/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <model type="virtio"/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <mtu size="1442"/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <target dev="tap887af58f-85"/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]: </interface>
Feb 25 07:49:34 np0005629333 nova_compute[244014]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
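About two seconds after the attach, tempest detaches the interface again: nova rebuilds the identical interface config, locates it in the domain, and first removes it from the persistent definition, as the _detach_from_persistent record above states. A minimal sketch of that persistent-config detach (the live detach that follows in nova is event-driven and retried; the URI is an assumption):

    # Sketch: drop the logged interface XML from the persistent domain config.
    import libvirt

    DETACH_XML = """<interface type="ethernet">
      <mac address="fa:16:3e:08:00:77"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tap887af58f-85"/>
    </interface>"""

    conn = libvirt.open('qemu:///system')  # assumed URI
    dom = conn.lookupByUUIDString('37ce1876-2b57-4f84-800c-2a1b6eaa6943')
    dom.detachDeviceFlags(DETACH_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)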
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.232 244018 DEBUG nova.virt.libvirt.guest [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:08:00:77"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap887af58f-85"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.236 244018 DEBUG nova.virt.libvirt.guest [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:08:00:77"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap887af58f-85"/></interface>not found in domain: <domain type='kvm' id='153'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <name>instance-00000079</name>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <uuid>37ce1876-2b57-4f84-800c-2a1b6eaa6943</uuid>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <nova:name>tempest-TestNetworkBasicOps-server-1787082415</nova:name>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <nova:creationTime>2026-02-25 12:49:32</nova:creationTime>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <nova:flavor name="m1.nano">
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <nova:memory>128</nova:memory>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <nova:disk>1</nova:disk>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <nova:swap>0</nova:swap>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <nova:vcpus>1</nova:vcpus>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  </nova:flavor>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <nova:owner>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  </nova:owner>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <nova:ports>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <nova:port uuid="cda27370-858e-4443-819e-696576515c52">
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <nova:port uuid="887af58f-85b4-49c2-93b7-855c843c0cd5">
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  </nova:ports>
Feb 25 07:49:34 np0005629333 nova_compute[244014]: </nova:instance>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <memory unit='KiB'>131072</memory>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <vcpu placement='static'>1</vcpu>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <resource>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <partition>/machine</partition>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  </resource>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <sysinfo type='smbios'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <entry name='manufacturer'>RDO</entry>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <entry name='product'>OpenStack Compute</entry>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <entry name='serial'>37ce1876-2b57-4f84-800c-2a1b6eaa6943</entry>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <entry name='uuid'>37ce1876-2b57-4f84-800c-2a1b6eaa6943</entry>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <entry name='family'>Virtual Machine</entry>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <boot dev='hd'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <smbios mode='sysinfo'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <vmcoreinfo state='on'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <cpu mode='custom' match='exact' check='full'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <model fallback='forbid'>EPYC-Rome</model>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <vendor>AMD</vendor>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='x2apic'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='tsc-deadline'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='hypervisor'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='tsc_adjust'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='spec-ctrl'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='stibp'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='ssbd'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='cmp_legacy'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='overflow-recov'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='succor'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='ibrs'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='amd-ssbd'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='virt-ssbd'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='disable' name='lbrv'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='disable' name='tsc-scale'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='disable' name='vmcb-clean'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='disable' name='flushbyasid'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='disable' name='pause-filter'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='disable' name='pfthreshold'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='disable' name='svme-addr-chk'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='lfence-always-serializing'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='disable' name='xsaves'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='disable' name='svm'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='topoext'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='disable' name='npt'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='disable' name='nrip-save'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <clock offset='utc'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <timer name='pit' tickpolicy='delay'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <timer name='rtc' tickpolicy='catchup'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <timer name='hpet' present='no'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <on_poweroff>destroy</on_poweroff>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <on_reboot>restart</on_reboot>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <on_crash>destroy</on_crash>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <disk type='network' device='disk'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <driver name='qemu' type='raw' cache='none'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <auth username='openstack'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:        <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <source protocol='rbd' name='vms/37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk' index='2'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:        <host name='192.168.122.100' port='6789'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target dev='vda' bus='virtio'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='virtio-disk0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <disk type='network' device='cdrom'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <driver name='qemu' type='raw' cache='none'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <auth username='openstack'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:        <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <source protocol='rbd' name='vms/37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk.config' index='1'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:        <host name='192.168.122.100' port='6789'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target dev='sda' bus='sata'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <readonly/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='sata0-0-0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='0' model='pcie-root'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pcie.0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='1' port='0x10'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.1'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='2' port='0x11'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.2'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='3' port='0x12'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.3'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='4' port='0x13'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.4'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='5' port='0x14'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.5'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='6' port='0x15'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.6'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='7' port='0x16'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.7'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='8' port='0x17'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.8'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='9' port='0x18'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.9'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='10' port='0x19'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.10'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='11' port='0x1a'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.11'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='12' port='0x1b'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.12'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='13' port='0x1c'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.13'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='14' port='0x1d'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.14'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='15' port='0x1e'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.15'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='16' port='0x1f'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.16'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='17' port='0x20'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.17'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='18' port='0x21'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.18'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='19' port='0x22'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.19'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='20' port='0x23'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.20'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='21' port='0x24'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.21'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='22' port='0x25'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.22'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='23' port='0x26'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.23'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='24' port='0x27'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.24'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='25' port='0x28'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.25'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-pci-bridge'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.26'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='usb'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='sata' index='0'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='ide'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <interface type='ethernet'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <mac address='fa:16:3e:50:74:9c'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target dev='tapcda27370-85'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model type='virtio'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <driver name='vhost' rx_queue_size='512'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <mtu size='1442'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='net0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <interface type='ethernet'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <mac address='fa:16:3e:08:00:77'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target dev='tap887af58f-85'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model type='virtio'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <driver name='vhost' rx_queue_size='512'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <mtu size='1442'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='net1'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <serial type='pty'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <source path='/dev/pts/0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <log file='/var/lib/nova/instances/37ce1876-2b57-4f84-800c-2a1b6eaa6943/console.log' append='off'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target type='isa-serial' port='0'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:        <model name='isa-serial'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      </target>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='serial0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <console type='pty' tty='/dev/pts/0'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <source path='/dev/pts/0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <log file='/var/lib/nova/instances/37ce1876-2b57-4f84-800c-2a1b6eaa6943/console.log' append='off'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target type='serial' port='0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='serial0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </console>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <input type='tablet' bus='usb'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='input0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='usb' bus='0' port='1'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <input type='mouse' bus='ps2'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='input1'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <input type='keyboard' bus='ps2'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='input2'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <listen type='address' address='::0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </graphics>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <audio id='1' type='none'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model type='virtio' heads='1' primary='yes'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='video0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <watchdog model='itco' action='reset'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='watchdog0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </watchdog>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <memballoon model='virtio'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <stats period='10'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='balloon0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <rng model='virtio'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <backend model='random'>/dev/urandom</backend>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='rng0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <label>system_u:system_r:svirt_t:s0:c59,c138</label>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c59,c138</imagelabel>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  </seclabel>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <label>+107:+107</label>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <imagelabel>+107:+107</imagelabel>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  </seclabel>
Feb 25 07:49:34 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:49:34 np0005629333 nova_compute[244014]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
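
The "not found in domain" branch above means get_interface_by_cfg rendered the wanted interface config and failed to match it against any <interface> in the dumped domain XML. A rough approximation of that lookup (nova compares parsed config objects; matching on MAC plus target dev here is a simplification):

    import xml.etree.ElementTree as ET

    def find_interface(domain_xml, mac, target_dev):
        root = ET.fromstring(domain_xml)
        for iface in root.findall('./devices/interface'):
            got_mac = iface.find('mac')
            got_tgt = iface.find('target')
            if (got_mac is not None and got_mac.get('address') == mac
                    and got_tgt is not None
                    and got_tgt.get('dev') == target_dev):
                return iface
        return None  # mirrors the "not found in domain" log line above

    # After the persistent detach, tap887af58f-85 is absent from the
    # dumped XML, so this returns None and nova moves on to the live
    # domain config.
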
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.238 244018 INFO nova.virt.libvirt.driver [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully detached device tap887af58f-85 from instance 37ce1876-2b57-4f84-800c-2a1b6eaa6943 from the persistent domain config.#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.238 244018 DEBUG nova.virt.libvirt.driver [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] (1/8): Attempting to detach device tap887af58f-85 with device alias net1 from instance 37ce1876-2b57-4f84-800c-2a1b6eaa6943 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.238 244018 DEBUG nova.virt.libvirt.guest [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] detach device xml: <interface type="ethernet">
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <mac address="fa:16:3e:08:00:77"/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <model type="virtio"/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <mtu size="1442"/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <target dev="tap887af58f-85"/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]: </interface>
Feb 25 07:49:34 np0005629333 nova_compute[244014]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
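
The "(1/8)" counter above is the live-detach retry loop: nova issues the detach against the running domain and then waits for libvirt's DeviceRemoved event before declaring success. A hedged sketch of that pattern (function names and the timeout are illustrative, not nova's values):

    import threading
    import libvirt

    MAX_ATTEMPTS = 8    # matches the "(1/8)" counter in the log
    EVENT_TIMEOUT = 20  # seconds; an assumed value, not nova's

    removed = threading.Event()

    def on_device_removed(conn, dom, alias, opaque):
        # registered with conn.domainEventRegisterAny(dom,
        #     libvirt.VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED, ...)
        if alias == 'net1':
            removed.set()

    def detach_live(dom, iface_xml):
        for attempt in range(1, MAX_ATTEMPTS + 1):
            removed.clear()
            dom.detachDeviceFlags(iface_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
            if removed.wait(EVENT_TIMEOUT):
                return True  # DeviceRemovedEvent arrived, as at 12:49:34.370
        return False
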
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.295 244018 DEBUG nova.network.neutron [req-1b14ac6b-4366-4fba-ab88-e31e063a7351 req-17c4573e-6015-427f-95d1-6807716b1d3e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Updated VIF entry in instance network info cache for port dee62982-d46c-4a81-b4ad-8154d7cfc7af. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.296 244018 DEBUG nova.network.neutron [req-1b14ac6b-4366-4fba-ab88-e31e063a7351 req-17c4573e-6015-427f-95d1-6807716b1d3e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Updating instance_info_cache with network_info: [{"id": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "address": "fa:16:3e:93:cc:74", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdee62982-d4", "ovs_interfaceid": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "344b59b1-93f2-4e23-8c28-835e5d954630", "address": "fa:16:3e:8d:10:50", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:1050", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap344b59b1-93", "ovs_interfaceid": "344b59b1-93f2-4e23-8c28-835e5d954630", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
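
The network_info payload logged above is plain JSON, so it can be inspected directly; for example, listing each VIF's fixed and floating addresses (raw_blob here stands in for the "[{...}]" payload as logged):

    import json

    network_info = json.loads(raw_blob)  # raw_blob: the payload above
    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floats = [f['address'] for f in ip['floating_ips']]
                print(vif['id'], ip['address'], floats)
    # -> dee62982-... 10.100.0.9 ['192.168.122.204']
    # -> 344b59b1-... 2001:db8::f816:3eff:fe8d:1050 []
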
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.314 244018 DEBUG oslo_concurrency.lockutils [req-1b14ac6b-4366-4fba-ab88-e31e063a7351 req-17c4573e-6015-427f-95d1-6807716b1d3e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
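
The lock named in the release line is an oslo.concurrency lock keyed on the instance UUID, so concurrent cache refreshes for the same instance serialize. In outline (the body is illustrative):

    from oslo_concurrency import lockutils

    with lockutils.lock('refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c'):
        # update_instance_cache_with_nw_info(...) runs here; leaving the
        # block produces the "Releasing lock" debug line seen above.
        pass
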
Feb 25 07:49:34 np0005629333 kernel: tap887af58f-85 (unregistering): left promiscuous mode
Feb 25 07:49:34 np0005629333 NetworkManager[49836]: <info>  [1772023774.3451] device (tap887af58f-85): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.351 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:34Z|01286|binding|INFO|Releasing lport 887af58f-85b4-49c2-93b7-855c843c0cd5 from this chassis (sb_readonly=0)
Feb 25 07:49:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:34Z|01287|binding|INFO|Setting lport 887af58f-85b4-49c2-93b7-855c843c0cd5 down in Southbound
Feb 25 07:49:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:34Z|01288|binding|INFO|Removing iface tap887af58f-85 ovn-installed in OVS
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.354 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.360 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:34.361 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:00:77 10.100.0.29'], port_security=['fa:16:3e:08:00:77 10.100.0.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': '37ce1876-2b57-4f84-800c-2a1b6eaa6943', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f8617723-856d-4eb3-ab8d-62fd3cacab7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4f6e829b-663a-471b-98d6-d0d40f869440', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd570f87-4a07-4ca5-8faf-e2384fdeb2a2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=887af58f-85b4-49c2-93b7-855c843c0cd5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:49:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:34.363 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 887af58f-85b4-49c2-93b7-855c843c0cd5 in datapath f8617723-856d-4eb3-ab8d-62fd3cacab7e unbound from our chassis#033[00m
Feb 25 07:49:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:34.364 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f8617723-856d-4eb3-ab8d-62fd3cacab7e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:49:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:34.365 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b0bb231d-c552-4091-9bfb-616ef41b9d58]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:34.366 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e namespace which is not needed anymore#033[00m
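
The "Matched UPDATE: PortBindingUpdatedEvent(...)" line at 12:49:34.361 is ovsdbapp's event machinery firing on the southbound Port_Binding table. A hedged reconstruction of such an event class (the real one lives in neutron.agent.ovn.metadata.agent; only the shape of the hooks is shown):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None,
            # as printed in the matched-event debug line above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.event_name = 'PortBindingUpdatedEvent'

        def match_fn(self, event, row, old):
            # Fire when 'up' flips [True] -> [False], i.e. the port was
            # just unbound (optional OVSDB booleans arrive as lists).
            return getattr(old, 'up', None) == [True] and row.up == [False]

        def run(self, event, row, old):
            print('Port %s unbound from our chassis' % row.logical_port)
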
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.370 244018 DEBUG nova.virt.libvirt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Received event <DeviceRemovedEvent: 1772023774.3705173, 37ce1876-2b57-4f84-800c-2a1b6eaa6943 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.372 244018 DEBUG nova.virt.libvirt.driver [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Start waiting for the detach event from libvirt for device tap887af58f-85 with device alias net1 for instance 37ce1876-2b57-4f84-800c-2a1b6eaa6943 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.372 244018 DEBUG nova.virt.libvirt.guest [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:08:00:77"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap887af58f-85"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.377 244018 DEBUG nova.virt.libvirt.guest [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:08:00:77"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap887af58f-85"/></interface> not found in domain: <domain type='kvm' id='153'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <name>instance-00000079</name>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <uuid>37ce1876-2b57-4f84-800c-2a1b6eaa6943</uuid>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <nova:name>tempest-TestNetworkBasicOps-server-1787082415</nova:name>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <nova:creationTime>2026-02-25 12:49:32</nova:creationTime>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <nova:flavor name="m1.nano">
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <nova:memory>128</nova:memory>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <nova:disk>1</nova:disk>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <nova:swap>0</nova:swap>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <nova:vcpus>1</nova:vcpus>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  </nova:flavor>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <nova:owner>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  </nova:owner>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <nova:ports>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <nova:port uuid="cda27370-858e-4443-819e-696576515c52">
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <nova:port uuid="887af58f-85b4-49c2-93b7-855c843c0cd5">
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  </nova:ports>
Feb 25 07:49:34 np0005629333 nova_compute[244014]: </nova:instance>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <memory unit='KiB'>131072</memory>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <vcpu placement='static'>1</vcpu>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <resource>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <partition>/machine</partition>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  </resource>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <sysinfo type='smbios'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <entry name='manufacturer'>RDO</entry>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <entry name='product'>OpenStack Compute</entry>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <entry name='serial'>37ce1876-2b57-4f84-800c-2a1b6eaa6943</entry>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <entry name='uuid'>37ce1876-2b57-4f84-800c-2a1b6eaa6943</entry>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <entry name='family'>Virtual Machine</entry>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <boot dev='hd'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <smbios mode='sysinfo'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <vmcoreinfo state='on'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <cpu mode='custom' match='exact' check='full'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <model fallback='forbid'>EPYC-Rome</model>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <vendor>AMD</vendor>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='x2apic'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='tsc-deadline'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='hypervisor'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='tsc_adjust'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='spec-ctrl'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='stibp'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='ssbd'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='cmp_legacy'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='overflow-recov'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='succor'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='ibrs'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='amd-ssbd'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='virt-ssbd'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='disable' name='lbrv'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='disable' name='tsc-scale'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='disable' name='vmcb-clean'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='disable' name='flushbyasid'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='disable' name='pause-filter'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='disable' name='pfthreshold'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='disable' name='svme-addr-chk'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='lfence-always-serializing'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='disable' name='xsaves'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='disable' name='svm'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='require' name='topoext'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='disable' name='npt'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <feature policy='disable' name='nrip-save'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <clock offset='utc'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <timer name='pit' tickpolicy='delay'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <timer name='rtc' tickpolicy='catchup'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <timer name='hpet' present='no'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <on_poweroff>destroy</on_poweroff>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <on_reboot>restart</on_reboot>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <on_crash>destroy</on_crash>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <disk type='network' device='disk'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <driver name='qemu' type='raw' cache='none'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <auth username='openstack'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:        <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <source protocol='rbd' name='vms/37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk' index='2'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:        <host name='192.168.122.100' port='6789'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target dev='vda' bus='virtio'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='virtio-disk0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <disk type='network' device='cdrom'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <driver name='qemu' type='raw' cache='none'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <auth username='openstack'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:        <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <source protocol='rbd' name='vms/37ce1876-2b57-4f84-800c-2a1b6eaa6943_disk.config' index='1'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:        <host name='192.168.122.100' port='6789'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target dev='sda' bus='sata'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <readonly/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='sata0-0-0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='0' model='pcie-root'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pcie.0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='1' port='0x10'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.1'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='2' port='0x11'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.2'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='3' port='0x12'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.3'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='4' port='0x13'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.4'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='5' port='0x14'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.5'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='6' port='0x15'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.6'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='7' port='0x16'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.7'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='8' port='0x17'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.8'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='9' port='0x18'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.9'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='10' port='0x19'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.10'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='11' port='0x1a'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.11'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='12' port='0x1b'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.12'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='13' port='0x1c'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.13'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='14' port='0x1d'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.14'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='15' port='0x1e'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.15'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='16' port='0x1f'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.16'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='17' port='0x20'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.17'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='18' port='0x21'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.18'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='19' port='0x22'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.19'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='20' port='0x23'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.20'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='21' port='0x24'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.21'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='22' port='0x25'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.22'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='23' port='0x26'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.23'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='24' port='0x27'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.24'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target chassis='25' port='0x28'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.25'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model name='pcie-pci-bridge'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='pci.26'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='usb'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <controller type='sata' index='0'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='ide'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <interface type='ethernet'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <mac address='fa:16:3e:50:74:9c'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target dev='tapcda27370-85'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model type='virtio'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <driver name='vhost' rx_queue_size='512'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <mtu size='1442'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='net0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <serial type='pty'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <source path='/dev/pts/0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <log file='/var/lib/nova/instances/37ce1876-2b57-4f84-800c-2a1b6eaa6943/console.log' append='off'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target type='isa-serial' port='0'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:        <model name='isa-serial'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      </target>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='serial0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <console type='pty' tty='/dev/pts/0'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <source path='/dev/pts/0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <log file='/var/lib/nova/instances/37ce1876-2b57-4f84-800c-2a1b6eaa6943/console.log' append='off'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <target type='serial' port='0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='serial0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </console>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <input type='tablet' bus='usb'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='input0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='usb' bus='0' port='1'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <input type='mouse' bus='ps2'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='input1'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <input type='keyboard' bus='ps2'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='input2'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <listen type='address' address='::0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </graphics>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <audio id='1' type='none'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <model type='virtio' heads='1' primary='yes'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='video0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <watchdog model='itco' action='reset'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='watchdog0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </watchdog>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <memballoon model='virtio'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <stats period='10'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='balloon0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <rng model='virtio'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <backend model='random'>/dev/urandom</backend>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <alias name='rng0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <label>system_u:system_r:svirt_t:s0:c59,c138</label>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c59,c138</imagelabel>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  </seclabel>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <label>+107:+107</label>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <imagelabel>+107:+107</imagelabel>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  </seclabel>
Feb 25 07:49:34 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:49:34 np0005629333 nova_compute[244014]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.378 244018 INFO nova.virt.libvirt.driver [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully detached device tap887af58f-85 from instance 37ce1876-2b57-4f84-800c-2a1b6eaa6943 from the live domain config.#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.380 244018 DEBUG nova.virt.libvirt.vif [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:48:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1787082415',display_name='tempest-TestNetworkBasicOps-server-1787082415',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1787082415',id=121,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKssTuj33zFMa3evVOk6flWSjlUceMgeiXlovCojJrinnNv1phI+OVsHaPYu7D22Bc6n/wlCJmAYv3X7OJM/21EB1B1U1udfCABsCK7Pj9XBc8dCdeNdko4o08iW1C9pCg==',key_name='tempest-TestNetworkBasicOps-1218860627',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:49:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-h0ic083v',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:49:06Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=37ce1876-2b57-4f84-800c-2a1b6eaa6943,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "887af58f-85b4-49c2-93b7-855c843c0cd5", "address": "fa:16:3e:08:00:77", "network": {"id": "f8617723-856d-4eb3-ab8d-62fd3cacab7e", "bridge": "br-int", "label": "tempest-network-smoke--1849940990", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap887af58f-85", "ovs_interfaceid": "887af58f-85b4-49c2-93b7-855c843c0cd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.380 244018 DEBUG nova.network.os_vif_util [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "887af58f-85b4-49c2-93b7-855c843c0cd5", "address": "fa:16:3e:08:00:77", "network": {"id": "f8617723-856d-4eb3-ab8d-62fd3cacab7e", "bridge": "br-int", "label": "tempest-network-smoke--1849940990", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap887af58f-85", "ovs_interfaceid": "887af58f-85b4-49c2-93b7-855c843c0cd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.381 244018 DEBUG nova.network.os_vif_util [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:00:77,bridge_name='br-int',has_traffic_filtering=True,id=887af58f-85b4-49c2-93b7-855c843c0cd5,network=Network(f8617723-856d-4eb3-ab8d-62fd3cacab7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap887af58f-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.381 244018 DEBUG os_vif [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:00:77,bridge_name='br-int',has_traffic_filtering=True,id=887af58f-85b4-49c2-93b7-855c843c0cd5,network=Network(f8617723-856d-4eb3-ab8d-62fd3cacab7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap887af58f-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.383 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.383 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap887af58f-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.385 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.385 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.387 244018 INFO os_vif [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:00:77,bridge_name='br-int',has_traffic_filtering=True,id=887af58f-85b4-49c2-93b7-855c843c0cd5,network=Network(f8617723-856d-4eb3-ab8d-62fd3cacab7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap887af58f-85')#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.388 244018 DEBUG nova.virt.libvirt.guest [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <nova:name>tempest-TestNetworkBasicOps-server-1787082415</nova:name>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <nova:creationTime>2026-02-25 12:49:34</nova:creationTime>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <nova:flavor name="m1.nano">
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <nova:memory>128</nova:memory>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <nova:disk>1</nova:disk>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <nova:swap>0</nova:swap>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <nova:vcpus>1</nova:vcpus>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  </nova:flavor>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <nova:owner>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  </nova:owner>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  <nova:ports>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    <nova:port uuid="cda27370-858e-4443-819e-696576515c52">
Feb 25 07:49:34 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:49:34 np0005629333 nova_compute[244014]:  </nova:ports>
Feb 25 07:49:34 np0005629333 nova_compute[244014]: </nova:instance>
Feb 25 07:49:34 np0005629333 nova_compute[244014]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.421 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.421 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:49:34 np0005629333 neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e[354684]: [NOTICE]   (354688) : haproxy version is 2.8.14-c23fe91
Feb 25 07:49:34 np0005629333 neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e[354684]: [NOTICE]   (354688) : path to executable is /usr/sbin/haproxy
Feb 25 07:49:34 np0005629333 neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e[354684]: [WARNING]  (354688) : Exiting Master process...
Feb 25 07:49:34 np0005629333 neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e[354684]: [ALERT]    (354688) : Current worker (354690) exited with code 143 (Terminated)
Feb 25 07:49:34 np0005629333 neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e[354684]: [WARNING]  (354688) : All workers exited. Exiting... (0)
Feb 25 07:49:34 np0005629333 systemd[1]: libpod-cac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52.scope: Deactivated successfully.
Feb 25 07:49:34 np0005629333 conmon[354684]: conmon cac817ebfb80f8215941 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52.scope/container/memory.events
Feb 25 07:49:34 np0005629333 podman[354721]: 2026-02-25 12:49:34.502140773 +0000 UTC m=+0.050266190 container died cac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 25 07:49:34 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52-userdata-shm.mount: Deactivated successfully.
Feb 25 07:49:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 74 op/s
Feb 25 07:49:34 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f5ae28e2e538a9b1e2fa1fd3ba97baafc40136904659ba2baf74eab449664c35-merged.mount: Deactivated successfully.
Feb 25 07:49:34 np0005629333 podman[354721]: 2026-02-25 12:49:34.548561514 +0000 UTC m=+0.096686891 container cleanup cac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 25 07:49:34 np0005629333 systemd[1]: libpod-conmon-cac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52.scope: Deactivated successfully.
Feb 25 07:49:34 np0005629333 podman[354752]: 2026-02-25 12:49:34.61571417 +0000 UTC m=+0.046412892 container remove cac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:49:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:34.620 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[82a3780b-296a-43a1-8504-662bf67b7f36]: (4, ('Wed Feb 25 12:49:34 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e (cac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52)\ncac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52\nWed Feb 25 12:49:34 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e (cac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52)\ncac817ebfb80f8215941f3df5eeeead31732d9703a77675479f4f56d4e69ca52\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:34.622 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[45c9f911-6cb6-43ec-8454-8a290ab2cd28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:34.623 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf8617723-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.625 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:34 np0005629333 kernel: tapf8617723-80: left promiscuous mode
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.634 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:34.637 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[290186bf-c020-457f-bfc3-0220e1dd54cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:34.652 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[55faf56c-c783-410f-9f50-ed60208fa5c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:34.653 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ed8d57e5-33ab-4af8-bde3-eb733ebb4801]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:34.672 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ec846bd4-6140-4261-ab01-a578c5704fff]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 574187, 'reachable_time': 30851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354767, 'error': None, 'target': 'ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:34.674 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f8617723-856d-4eb3-ab8d-62fd3cacab7e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:49:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:34.675 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[8e1a69a6-bca2-4cf4-9cd3-7b211bdf66df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:34 np0005629333 systemd[1]: run-netns-ovnmeta\x2df8617723\x2d856d\x2d4eb3\x2dab8d\x2d62fd3cacab7e.mount: Deactivated successfully.
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.750 244018 DEBUG nova.compute.manager [req-031e8bb2-e92b-4bf8-9ee8-76525edb11b1 req-be5257c1-ee30-48ad-9240-0fd938b9c31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received event network-vif-plugged-887af58f-85b4-49c2-93b7-855c843c0cd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.750 244018 DEBUG oslo_concurrency.lockutils [req-031e8bb2-e92b-4bf8-9ee8-76525edb11b1 req-be5257c1-ee30-48ad-9240-0fd938b9c31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.751 244018 DEBUG oslo_concurrency.lockutils [req-031e8bb2-e92b-4bf8-9ee8-76525edb11b1 req-be5257c1-ee30-48ad-9240-0fd938b9c31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.751 244018 DEBUG oslo_concurrency.lockutils [req-031e8bb2-e92b-4bf8-9ee8-76525edb11b1 req-be5257c1-ee30-48ad-9240-0fd938b9c31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.751 244018 DEBUG nova.compute.manager [req-031e8bb2-e92b-4bf8-9ee8-76525edb11b1 req-be5257c1-ee30-48ad-9240-0fd938b9c31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] No waiting events found dispatching network-vif-plugged-887af58f-85b4-49c2-93b7-855c843c0cd5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.751 244018 WARNING nova.compute.manager [req-031e8bb2-e92b-4bf8-9ee8-76525edb11b1 req-be5257c1-ee30-48ad-9240-0fd938b9c31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received unexpected event network-vif-plugged-887af58f-85b4-49c2-93b7-855c843c0cd5 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.751 244018 DEBUG nova.compute.manager [req-031e8bb2-e92b-4bf8-9ee8-76525edb11b1 req-be5257c1-ee30-48ad-9240-0fd938b9c31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received event network-vif-unplugged-887af58f-85b4-49c2-93b7-855c843c0cd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.752 244018 DEBUG oslo_concurrency.lockutils [req-031e8bb2-e92b-4bf8-9ee8-76525edb11b1 req-be5257c1-ee30-48ad-9240-0fd938b9c31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.752 244018 DEBUG oslo_concurrency.lockutils [req-031e8bb2-e92b-4bf8-9ee8-76525edb11b1 req-be5257c1-ee30-48ad-9240-0fd938b9c31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.752 244018 DEBUG oslo_concurrency.lockutils [req-031e8bb2-e92b-4bf8-9ee8-76525edb11b1 req-be5257c1-ee30-48ad-9240-0fd938b9c31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.752 244018 DEBUG nova.compute.manager [req-031e8bb2-e92b-4bf8-9ee8-76525edb11b1 req-be5257c1-ee30-48ad-9240-0fd938b9c31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] No waiting events found dispatching network-vif-unplugged-887af58f-85b4-49c2-93b7-855c843c0cd5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.752 244018 WARNING nova.compute.manager [req-031e8bb2-e92b-4bf8-9ee8-76525edb11b1 req-be5257c1-ee30-48ad-9240-0fd938b9c31c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received unexpected event network-vif-unplugged-887af58f-85b4-49c2-93b7-855c843c0cd5 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:49:34 np0005629333 nova_compute[244014]: 2026-02-25 12:49:34.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:49:35 np0005629333 nova_compute[244014]: 2026-02-25 12:49:35.051 244018 DEBUG oslo_concurrency.lockutils [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:49:35 np0005629333 nova_compute[244014]: 2026-02-25 12:49:35.052 244018 DEBUG oslo_concurrency.lockutils [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:49:35 np0005629333 nova_compute[244014]: 2026-02-25 12:49:35.052 244018 DEBUG nova.network.neutron [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:49:35 np0005629333 nova_compute[244014]: 2026-02-25 12:49:35.311 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:36 np0005629333 nova_compute[244014]: 2026-02-25 12:49:36.243 244018 INFO nova.network.neutron [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Port 887af58f-85b4-49c2-93b7-855c843c0cd5 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Feb 25 07:49:36 np0005629333 nova_compute[244014]: 2026-02-25 12:49:36.243 244018 DEBUG nova.network.neutron [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Updating instance_info_cache with network_info: [{"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:49:36 np0005629333 nova_compute[244014]: 2026-02-25 12:49:36.285 244018 DEBUG oslo_concurrency.lockutils [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:49:36 np0005629333 nova_compute[244014]: 2026-02-25 12:49:36.335 244018 DEBUG oslo_concurrency.lockutils [None req-09d7acc0-8269-448a-973b-d932cbf6a8b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "interface-37ce1876-2b57-4f84-800c-2a1b6eaa6943-887af58f-85b4-49c2-93b7-855c843c0cd5" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:36 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:36Z|01289|binding|INFO|Releasing lport 3be3f9aa-5947-437f-9936-240be99a8dd3 from this chassis (sb_readonly=0)
Feb 25 07:49:36 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:36Z|01290|binding|INFO|Releasing lport 8234c876-6930-42e9-b642-2d1f82e23ee0 from this chassis (sb_readonly=0)
Feb 25 07:49:36 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:36Z|01291|binding|INFO|Releasing lport b3234524-e109-417d-a204-a0ab750c983e from this chassis (sb_readonly=0)
Feb 25 07:49:36 np0005629333 nova_compute[244014]: 2026-02-25 12:49:36.444 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2071: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 74 op/s
Feb 25 07:49:36 np0005629333 nova_compute[244014]: 2026-02-25 12:49:36.818 244018 DEBUG nova.compute.manager [req-35c71e6c-1eb2-452e-86ec-516bb6620518 req-f17ace63-ec43-41a7-800c-7cbe20bf58b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received event network-vif-plugged-887af58f-85b4-49c2-93b7-855c843c0cd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:49:36 np0005629333 nova_compute[244014]: 2026-02-25 12:49:36.819 244018 DEBUG oslo_concurrency.lockutils [req-35c71e6c-1eb2-452e-86ec-516bb6620518 req-f17ace63-ec43-41a7-800c-7cbe20bf58b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:36 np0005629333 nova_compute[244014]: 2026-02-25 12:49:36.819 244018 DEBUG oslo_concurrency.lockutils [req-35c71e6c-1eb2-452e-86ec-516bb6620518 req-f17ace63-ec43-41a7-800c-7cbe20bf58b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:36 np0005629333 nova_compute[244014]: 2026-02-25 12:49:36.819 244018 DEBUG oslo_concurrency.lockutils [req-35c71e6c-1eb2-452e-86ec-516bb6620518 req-f17ace63-ec43-41a7-800c-7cbe20bf58b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:36 np0005629333 nova_compute[244014]: 2026-02-25 12:49:36.820 244018 DEBUG nova.compute.manager [req-35c71e6c-1eb2-452e-86ec-516bb6620518 req-f17ace63-ec43-41a7-800c-7cbe20bf58b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] No waiting events found dispatching network-vif-plugged-887af58f-85b4-49c2-93b7-855c843c0cd5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:49:36 np0005629333 nova_compute[244014]: 2026-02-25 12:49:36.820 244018 WARNING nova.compute.manager [req-35c71e6c-1eb2-452e-86ec-516bb6620518 req-f17ace63-ec43-41a7-800c-7cbe20bf58b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received unexpected event network-vif-plugged-887af58f-85b4-49c2-93b7-855c843c0cd5 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:49:36 np0005629333 nova_compute[244014]: 2026-02-25 12:49:36.820 244018 DEBUG nova.compute.manager [req-35c71e6c-1eb2-452e-86ec-516bb6620518 req-f17ace63-ec43-41a7-800c-7cbe20bf58b9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received event network-vif-deleted-887af58f-85b4-49c2-93b7-855c843c0cd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
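
The "Received unexpected event" warning above is the benign miss path of the external-event machinery: an event only finds a consumer when some in-flight operation registered a waiter for that exact event name first; otherwise the pop comes back empty and the event is logged and dropped. A minimal sketch of that pop-if-waiting pattern (illustrative shape only, not Nova's actual code; the "<uuid>-events" lock in the surrounding lines plays the role of _lock here):

    import threading

    _lock = threading.Lock()   # stands in for the "<uuid>-events" lock above
    _waiters = {}              # (instance_uuid, event_name) -> callback

    def prepare_for_event(uuid, event, callback):
        with _lock:
            _waiters[(uuid, event)] = callback

    def pop_instance_event(uuid, event):
        with _lock:
            waiter = _waiters.pop((uuid, event), None)
        if waiter is None:
            print(f"Received unexpected event {event} for instance {uuid}")
        else:
            waiter()

    prepare_for_event("37ce1876-...", "network-vif-unplugged-cda27370-...",
                      lambda: print("waiter woken"))
    pop_instance_event("37ce1876-...", "network-vif-unplugged-cda27370-...")  # hit
    pop_instance_event("37ce1876-...", "network-vif-plugged-887af58f-...")    # miss -> warning
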
Feb 25 07:49:36 np0005629333 nova_compute[244014]: 2026-02-25 12:49:36.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.193 244018 DEBUG nova.compute.manager [req-ab30028d-f4a9-4271-8bda-8463adf1686f req-d959d1cb-8b23-4028-9ca2-eb889da995f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received event network-changed-cda27370-858e-4443-819e-696576515c52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.193 244018 DEBUG nova.compute.manager [req-ab30028d-f4a9-4271-8bda-8463adf1686f req-d959d1cb-8b23-4028-9ca2-eb889da995f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Refreshing instance network info cache due to event network-changed-cda27370-858e-4443-819e-696576515c52. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.193 244018 DEBUG oslo_concurrency.lockutils [req-ab30028d-f4a9-4271-8bda-8463adf1686f req-d959d1cb-8b23-4028-9ca2-eb889da995f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.193 244018 DEBUG oslo_concurrency.lockutils [req-ab30028d-f4a9-4271-8bda-8463adf1686f req-d959d1cb-8b23-4028-9ca2-eb889da995f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.193 244018 DEBUG nova.network.neutron [req-ab30028d-f4a9-4271-8bda-8463adf1686f req-d959d1cb-8b23-4028-9ca2-eb889da995f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Refreshing network info cache for port cda27370-858e-4443-819e-696576515c52 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.342 244018 DEBUG oslo_concurrency.lockutils [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.343 244018 DEBUG oslo_concurrency.lockutils [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.343 244018 DEBUG oslo_concurrency.lockutils [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.344 244018 DEBUG oslo_concurrency.lockutils [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.344 244018 DEBUG oslo_concurrency.lockutils [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.347 244018 INFO nova.compute.manager [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Terminating instance#033[00m
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.349 244018 DEBUG nova.compute.manager [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:49:37 np0005629333 kernel: tapcda27370-85 (unregistering): left promiscuous mode
Feb 25 07:49:37 np0005629333 NetworkManager[49836]: <info>  [1772023777.4061] device (tapcda27370-85): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:49:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:37Z|01292|binding|INFO|Releasing lport cda27370-858e-4443-819e-696576515c52 from this chassis (sb_readonly=0)
Feb 25 07:49:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:37Z|01293|binding|INFO|Setting lport cda27370-858e-4443-819e-696576515c52 down in Southbound
Feb 25 07:49:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:37Z|01294|binding|INFO|Removing iface tapcda27370-85 ovn-installed in OVS
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.411 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.420 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:37.425 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:74:9c 10.100.0.7'], port_security=['fa:16:3e:50:74:9c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '37ce1876-2b57-4f84-800c-2a1b6eaa6943', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b302251f-a239-4374-92d3-7686a49e9d67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9170f76a-1c5e-4379-83f2-194a69c3afae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fa6e9e5e-6b6d-43d1-bea6-b8dba8200d28, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=cda27370-858e-4443-819e-696576515c52) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:49:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:37.427 157129 INFO neutron.agent.ovn.metadata.agent [-] Port cda27370-858e-4443-819e-696576515c52 in datapath b302251f-a239-4374-92d3-7686a49e9d67 unbound from our chassis#033[00m
Feb 25 07:49:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:37.428 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b302251f-a239-4374-92d3-7686a49e9d67, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:49:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:37.429 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e1cf9abf-e113-4927-80d3-5e488f6fa2d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:37.430 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67 namespace which is not needed anymore#033[00m
Feb 25 07:49:37 np0005629333 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d00000079.scope: Deactivated successfully.
Feb 25 07:49:37 np0005629333 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d00000079.scope: Consumed 12.614s CPU time.
Feb 25 07:49:37 np0005629333 systemd-machined[210048]: Machine qemu-153-instance-00000079 terminated.
Feb 25 07:49:37 np0005629333 neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67[353337]: [NOTICE]   (353342) : haproxy version is 2.8.14-c23fe91
Feb 25 07:49:37 np0005629333 neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67[353337]: [NOTICE]   (353342) : path to executable is /usr/sbin/haproxy
Feb 25 07:49:37 np0005629333 neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67[353337]: [WARNING]  (353342) : Exiting Master process...
Feb 25 07:49:37 np0005629333 neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67[353337]: [ALERT]    (353342) : Current worker (353344) exited with code 143 (Terminated)
Feb 25 07:49:37 np0005629333 neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67[353337]: [WARNING]  (353342) : All workers exited. Exiting... (0)
Feb 25 07:49:37 np0005629333 systemd[1]: libpod-5556953bd7563a9eda464e8979f7b2d6d66d7f50ad788b8b64ed9efc2d9af704.scope: Deactivated successfully.
Feb 25 07:49:37 np0005629333 podman[354793]: 2026-02-25 12:49:37.555588613 +0000 UTC m=+0.041132093 container died 5556953bd7563a9eda464e8979f7b2d6d66d7f50ad788b8b64ed9efc2d9af704 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 25 07:49:37 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5556953bd7563a9eda464e8979f7b2d6d66d7f50ad788b8b64ed9efc2d9af704-userdata-shm.mount: Deactivated successfully.
Feb 25 07:49:37 np0005629333 systemd[1]: var-lib-containers-storage-overlay-51fe52d468e2be311accba8895564be948c5e431928c89066ff8445171f1806f-merged.mount: Deactivated successfully.
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.586 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.590 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:37 np0005629333 podman[354793]: 2026-02-25 12:49:37.599018389 +0000 UTC m=+0.084561859 container cleanup 5556953bd7563a9eda464e8979f7b2d6d66d7f50ad788b8b64ed9efc2d9af704 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.598 244018 INFO nova.virt.libvirt.driver [-] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Instance destroyed successfully.#033[00m
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.599 244018 DEBUG nova.objects.instance [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'resources' on Instance uuid 37ce1876-2b57-4f84-800c-2a1b6eaa6943 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:49:37 np0005629333 systemd[1]: libpod-conmon-5556953bd7563a9eda464e8979f7b2d6d66d7f50ad788b8b64ed9efc2d9af704.scope: Deactivated successfully.
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.614 244018 DEBUG nova.virt.libvirt.vif [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:48:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1787082415',display_name='tempest-TestNetworkBasicOps-server-1787082415',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1787082415',id=121,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKssTuj33zFMa3evVOk6flWSjlUceMgeiXlovCojJrinnNv1phI+OVsHaPYu7D22Bc6n/wlCJmAYv3X7OJM/21EB1B1U1udfCABsCK7Pj9XBc8dCdeNdko4o08iW1C9pCg==',key_name='tempest-TestNetworkBasicOps-1218860627',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:49:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-h0ic083v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:49:06Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=37ce1876-2b57-4f84-800c-2a1b6eaa6943,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.615 244018 DEBUG nova.network.os_vif_util [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.216", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.615 244018 DEBUG nova.network.os_vif_util [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:50:74:9c,bridge_name='br-int',has_traffic_filtering=True,id=cda27370-858e-4443-819e-696576515c52,network=Network(b302251f-a239-4374-92d3-7686a49e9d67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcda27370-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.617 244018 DEBUG os_vif [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:50:74:9c,bridge_name='br-int',has_traffic_filtering=True,id=cda27370-858e-4443-819e-696576515c52,network=Network(b302251f-a239-4374-92d3-7686a49e9d67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcda27370-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.618 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.620 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcda27370-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.621 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.624 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.626 244018 INFO os_vif [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:50:74:9c,bridge_name='br-int',has_traffic_filtering=True,id=cda27370-858e-4443-819e-696576515c52,network=Network(b302251f-a239-4374-92d3-7686a49e9d67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcda27370-85')#033[00m
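
The unplug itself is one idempotent OVSDB transaction, DelPortCommand(..., if_exists=True) against br-int, i.e. the programmatic equivalent of `ovs-vsctl --if-exists del-port br-int tapcda27370-85`. A sketch of issuing the same delete through the CLI from Python (assumes ovs-vsctl is on PATH; port and bridge names are the ones in the log):

    import subprocess

    # --if-exists makes the delete a no-op when the port is already gone,
    # which is why a repeated unplug cannot fail here.
    subprocess.run(["ovs-vsctl", "--if-exists", "del-port",
                    "br-int", "tapcda27370-85"], check=True)
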
Feb 25 07:49:37 np0005629333 podman[354833]: 2026-02-25 12:49:37.653909059 +0000 UTC m=+0.036381798 container remove 5556953bd7563a9eda464e8979f7b2d6d66d7f50ad788b8b64ed9efc2d9af704 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 25 07:49:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:37.657 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4301166d-3518-432c-8554-0609133ff0a4]: (4, ('Wed Feb 25 12:49:37 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67 (5556953bd7563a9eda464e8979f7b2d6d66d7f50ad788b8b64ed9efc2d9af704)\n5556953bd7563a9eda464e8979f7b2d6d66d7f50ad788b8b64ed9efc2d9af704\nWed Feb 25 12:49:37 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67 (5556953bd7563a9eda464e8979f7b2d6d66d7f50ad788b8b64ed9efc2d9af704)\n5556953bd7563a9eda464e8979f7b2d6d66d7f50ad788b8b64ed9efc2d9af704\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:37.659 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8aa4edee-0c60-4609-8b7e-f0e82007d551]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:37.660 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb302251f-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.662 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:37 np0005629333 kernel: tapb302251f-a0: left promiscuous mode
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.666 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:37.669 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[41ac133b-723f-4f59-a99e-f049a0c8d88b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:37.685 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[20465ee8-3ef2-4663-88a3-d5358842b775]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:37.686 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[10b7123e-2ce1-4e0f-8c36-f00daf7ff6e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:37.697 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[13633d11-0f51-44b5-9571-98be906cdc07]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 571476, 'reachable_time': 20180, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354866, 'error': None, 'target': 'ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:49:37 np0005629333 systemd[1]: run-netns-ovnmeta\x2db302251f\x2da239\x2d4374\x2d92d3\x2d7686a49e9d67.mount: Deactivated successfully.
Feb 25 07:49:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:37.700 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b302251f-a239-4374-92d3-7686a49e9d67 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:49:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:37.700 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[2e1e3f64-668a-4e19-af0b-4d90ceb59d6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
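
With the last VIF on datapath b302251f-a239-4374-92d3-7686a49e9d67 gone from this chassis, the agent has now removed both the haproxy sidecar container and the ovnmeta- network namespace it served the metadata proxy from. A quick way to confirm no stale metadata namespaces remain on the host (assumes iproute2 is installed; ovnmeta-<network-uuid> is the naming pattern visible above):

    import subprocess

    # Any leftover OVN metadata namespaces would be listed here; after the
    # teardown above, b302251f-... should no longer appear.
    out = subprocess.run(["ip", "netns", "list"],
                         capture_output=True, text=True, check=True)
    leftover = [line.split()[0] for line in out.stdout.splitlines()
                if line.startswith("ovnmeta-")]
    print(leftover or "no ovnmeta namespaces left")
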
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.851 244018 INFO nova.virt.libvirt.driver [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Deleting instance files /var/lib/nova/instances/37ce1876-2b57-4f84-800c-2a1b6eaa6943_del#033[00m
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.851 244018 INFO nova.virt.libvirt.driver [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Deletion of /var/lib/nova/instances/37ce1876-2b57-4f84-800c-2a1b6eaa6943_del complete#033[00m
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.922 244018 INFO nova.compute.manager [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Took 0.57 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.923 244018 DEBUG oslo.service.loopingcall [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.923 244018 DEBUG nova.compute.manager [-] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:49:37 np0005629333 nova_compute[244014]: 2026-02-25 12:49:37.924 244018 DEBUG nova.network.neutron [-] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:49:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2072: 305 pgs: 305 active+clean; 249 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 33 KiB/s wr, 80 op/s
Feb 25 07:49:38 np0005629333 nova_compute[244014]: 2026-02-25 12:49:38.577 244018 DEBUG nova.network.neutron [req-ab30028d-f4a9-4271-8bda-8463adf1686f req-d959d1cb-8b23-4028-9ca2-eb889da995f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Updated VIF entry in instance network info cache for port cda27370-858e-4443-819e-696576515c52. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:49:38 np0005629333 nova_compute[244014]: 2026-02-25 12:49:38.577 244018 DEBUG nova.network.neutron [req-ab30028d-f4a9-4271-8bda-8463adf1686f req-d959d1cb-8b23-4028-9ca2-eb889da995f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Updating instance_info_cache with network_info: [{"id": "cda27370-858e-4443-819e-696576515c52", "address": "fa:16:3e:50:74:9c", "network": {"id": "b302251f-a239-4374-92d3-7686a49e9d67", "bridge": "br-int", "label": "tempest-network-smoke--1675307748", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcda27370-85", "ovs_interfaceid": "cda27370-858e-4443-819e-696576515c52", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:49:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:49:38 np0005629333 nova_compute[244014]: 2026-02-25 12:49:38.610 244018 DEBUG nova.network.neutron [-] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:49:38 np0005629333 nova_compute[244014]: 2026-02-25 12:49:38.612 244018 DEBUG oslo_concurrency.lockutils [req-ab30028d-f4a9-4271-8bda-8463adf1686f req-d959d1cb-8b23-4028-9ca2-eb889da995f1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-37ce1876-2b57-4f84-800c-2a1b6eaa6943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:49:38 np0005629333 nova_compute[244014]: 2026-02-25 12:49:38.626 244018 INFO nova.compute.manager [-] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Took 0.70 seconds to deallocate network for instance.#033[00m
Feb 25 07:49:38 np0005629333 nova_compute[244014]: 2026-02-25 12:49:38.665 244018 DEBUG oslo_concurrency.lockutils [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:38 np0005629333 nova_compute[244014]: 2026-02-25 12:49:38.665 244018 DEBUG oslo_concurrency.lockutils [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:38 np0005629333 nova_compute[244014]: 2026-02-25 12:49:38.721 244018 DEBUG oslo_concurrency.processutils [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:49:38 np0005629333 nova_compute[244014]: 2026-02-25 12:49:38.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:49:38 np0005629333 nova_compute[244014]: 2026-02-25 12:49:38.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 07:49:38 np0005629333 nova_compute[244014]: 2026-02-25 12:49:38.903 244018 DEBUG nova.compute.manager [req-afcdba3e-07c6-430e-846c-cf6242493980 req-c10f8a35-0742-4a89-9a0d-55d0ae136965 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received event network-vif-unplugged-cda27370-858e-4443-819e-696576515c52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:49:38 np0005629333 nova_compute[244014]: 2026-02-25 12:49:38.904 244018 DEBUG oslo_concurrency.lockutils [req-afcdba3e-07c6-430e-846c-cf6242493980 req-c10f8a35-0742-4a89-9a0d-55d0ae136965 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:38 np0005629333 nova_compute[244014]: 2026-02-25 12:49:38.904 244018 DEBUG oslo_concurrency.lockutils [req-afcdba3e-07c6-430e-846c-cf6242493980 req-c10f8a35-0742-4a89-9a0d-55d0ae136965 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:38 np0005629333 nova_compute[244014]: 2026-02-25 12:49:38.905 244018 DEBUG oslo_concurrency.lockutils [req-afcdba3e-07c6-430e-846c-cf6242493980 req-c10f8a35-0742-4a89-9a0d-55d0ae136965 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:38 np0005629333 nova_compute[244014]: 2026-02-25 12:49:38.905 244018 DEBUG nova.compute.manager [req-afcdba3e-07c6-430e-846c-cf6242493980 req-c10f8a35-0742-4a89-9a0d-55d0ae136965 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] No waiting events found dispatching network-vif-unplugged-cda27370-858e-4443-819e-696576515c52 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:49:38 np0005629333 nova_compute[244014]: 2026-02-25 12:49:38.905 244018 WARNING nova.compute.manager [req-afcdba3e-07c6-430e-846c-cf6242493980 req-c10f8a35-0742-4a89-9a0d-55d0ae136965 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received unexpected event network-vif-unplugged-cda27370-858e-4443-819e-696576515c52 for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:49:38 np0005629333 nova_compute[244014]: 2026-02-25 12:49:38.906 244018 DEBUG nova.compute.manager [req-afcdba3e-07c6-430e-846c-cf6242493980 req-c10f8a35-0742-4a89-9a0d-55d0ae136965 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received event network-vif-plugged-cda27370-858e-4443-819e-696576515c52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:49:38 np0005629333 nova_compute[244014]: 2026-02-25 12:49:38.906 244018 DEBUG oslo_concurrency.lockutils [req-afcdba3e-07c6-430e-846c-cf6242493980 req-c10f8a35-0742-4a89-9a0d-55d0ae136965 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:38 np0005629333 nova_compute[244014]: 2026-02-25 12:49:38.907 244018 DEBUG oslo_concurrency.lockutils [req-afcdba3e-07c6-430e-846c-cf6242493980 req-c10f8a35-0742-4a89-9a0d-55d0ae136965 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:38 np0005629333 nova_compute[244014]: 2026-02-25 12:49:38.907 244018 DEBUG oslo_concurrency.lockutils [req-afcdba3e-07c6-430e-846c-cf6242493980 req-c10f8a35-0742-4a89-9a0d-55d0ae136965 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:38 np0005629333 nova_compute[244014]: 2026-02-25 12:49:38.908 244018 DEBUG nova.compute.manager [req-afcdba3e-07c6-430e-846c-cf6242493980 req-c10f8a35-0742-4a89-9a0d-55d0ae136965 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] No waiting events found dispatching network-vif-plugged-cda27370-858e-4443-819e-696576515c52 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:49:38 np0005629333 nova_compute[244014]: 2026-02-25 12:49:38.908 244018 WARNING nova.compute.manager [req-afcdba3e-07c6-430e-846c-cf6242493980 req-c10f8a35-0742-4a89-9a0d-55d0ae136965 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received unexpected event network-vif-plugged-cda27370-858e-4443-819e-696576515c52 for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:49:38 np0005629333 nova_compute[244014]: 2026-02-25 12:49:38.908 244018 DEBUG nova.compute.manager [req-afcdba3e-07c6-430e-846c-cf6242493980 req-c10f8a35-0742-4a89-9a0d-55d0ae136965 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Received event network-vif-deleted-cda27370-858e-4443-819e-696576515c52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:49:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:49:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/844278572' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:49:39 np0005629333 nova_compute[244014]: 2026-02-25 12:49:39.422 244018 DEBUG oslo_concurrency.processutils [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.701s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
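
That 0.70 s round trip is the resource tracker sampling RBD pool capacity: `ceph df --format=json` returns cluster-wide totals under a "stats" key, which the driver converts to the GiB figures reported to placement. A minimal parsing sketch (field names as in recent Ceph releases; treat them as an assumption if your version differs):

    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(raw)["stats"]
    gib = 1024 ** 3
    print(f"{stats['total_avail_bytes'] / gib:.0f} GiB free of "
          f"{stats['total_bytes'] / gib:.0f} GiB raw")
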
Feb 25 07:49:39 np0005629333 nova_compute[244014]: 2026-02-25 12:49:39.428 244018 DEBUG nova.compute.provider_tree [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:49:39 np0005629333 nova_compute[244014]: 2026-02-25 12:49:39.446 244018 DEBUG nova.scheduler.client.report [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
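
The inventory record maps to schedulable capacity as (total - reserved) × allocation_ratio per resource class, so this host offers 32 vCPUs, 7167 MB of RAM and 52.2 GB of disk to the scheduler:

    # Effective capacity per resource class, matching the inventory above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
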
Feb 25 07:49:39 np0005629333 nova_compute[244014]: 2026-02-25 12:49:39.469 244018 DEBUG oslo_concurrency.lockutils [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:39 np0005629333 nova_compute[244014]: 2026-02-25 12:49:39.496 244018 INFO nova.scheduler.client.report [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Deleted allocations for instance 37ce1876-2b57-4f84-800c-2a1b6eaa6943#033[00m
Feb 25 07:49:39 np0005629333 nova_compute[244014]: 2026-02-25 12:49:39.585 244018 DEBUG oslo_concurrency.lockutils [None req-ce52c4e3-aaa4-4172-9f10-1e0cc11cf8ac 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "37ce1876-2b57-4f84-800c-2a1b6eaa6943" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:40 np0005629333 nova_compute[244014]: 2026-02-25 12:49:40.314 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:40 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:40Z|00151|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:93:cc:74 10.100.0.9
Feb 25 07:49:40 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:40Z|00152|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:93:cc:74 10.100.0.9
Feb 25 07:49:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 305 active+clean; 249 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 8.0 KiB/s wr, 70 op/s
Feb 25 07:49:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2074: 305 pgs: 305 active+clean; 223 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 138 op/s
Feb 25 07:49:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:49:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:49:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:49:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:49:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000697193756197908 of space, bias 1.0, pg target 0.2091581268593724 quantized to 32 (current 32)
Feb 25 07:49:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:49:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:49:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:49:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:49:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:49:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002494054403576432 of space, bias 1.0, pg target 0.7482163210729297 quantized to 32 (current 32)
Feb 25 07:49:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:49:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3979243418663797e-06 of space, bias 4.0, pg target 0.0016775092102396555 quantized to 16 (current 16)
Feb 25 07:49:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:49:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:49:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:49:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:49:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:49:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:49:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:49:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:49:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:49:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
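
Every autoscaler row above follows the same arithmetic: pg_target = usage_fraction × bias × pg_budget, with a budget that works out to 300 for this cluster (inferred from the rows themselves, e.g. 0.000697 × 1.0 × 300 ≈ 0.209 for 'vms'); the tiny targets are then quantized and left at each pool's current pg_num because the skew is far below the threshold that would trigger a change. A check of that reading against three of the logged rows:

    # Reproduce the logged pg targets from usage_fraction x bias x budget.
    PG_BUDGET = 300  # inferred from the log rows, not queried from the cluster
    rows = [("vms",                0.000697193756197908,   1.0),
            ("images",             0.002494054403576432,   1.0),
            ("cephfs.cephfs.meta", 1.3979243418663797e-06, 4.0)]
    for pool, usage, bias in rows:
        print(pool, usage * bias * PG_BUDGET)   # ~0.2092, ~0.7482, ~0.001678
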
Feb 25 07:49:42 np0005629333 nova_compute[244014]: 2026-02-25 12:49:42.622 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:49:43 np0005629333 nova_compute[244014]: 2026-02-25 12:49:43.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:49:44 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:44Z|01295|binding|INFO|Releasing lport 8234c876-6930-42e9-b642-2d1f82e23ee0 from this chassis (sb_readonly=0)
Feb 25 07:49:44 np0005629333 ovn_controller[147040]: 2026-02-25T12:49:44Z|01296|binding|INFO|Releasing lport b3234524-e109-417d-a204-a0ab750c983e from this chassis (sb_readonly=0)
Feb 25 07:49:44 np0005629333 nova_compute[244014]: 2026-02-25 12:49:44.115 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 910 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Feb 25 07:49:45 np0005629333 nova_compute[244014]: 2026-02-25 12:49:45.239 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:45.240 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:49:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:45.241 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 25 07:49:45 np0005629333 nova_compute[244014]: 2026-02-25 12:49:45.317 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2076: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Feb 25 07:49:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:49:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1245765082' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:49:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:49:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1245765082' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 07:49:47 np0005629333 nova_compute[244014]: 2026-02-25 12:49:47.626 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:48 np0005629333 nova_compute[244014]: 2026-02-25 12:49:48.063 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Feb 25 07:49:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:49:50 np0005629333 nova_compute[244014]: 2026-02-25 12:49:50.319 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2078: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Feb 25 07:49:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:52.243 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
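The DbSetCommand above is the metadata agent acknowledging the new nb_cfg (36 to 37 in the SB_Global update it matched seven seconds earlier) by stamping it into its Chassis_Private external_ids; the randomized delay spreads these writes out across chassis. At the ovsdbapp API level the transaction is roughly the following sketch, where sb_api stands in for an already-connected southbound API object that the log does not show:

    # Sketch; `sb_api` is a hypothetical connected ovsdbapp SB API object.
    sb_api.db_set(
        'Chassis_Private',
        'a594384c-d614-4492-9e0a-4d6ec095920c',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),
    ).execute(check_error=True)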
Feb 25 07:49:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2079: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Feb 25 07:49:52 np0005629333 nova_compute[244014]: 2026-02-25 12:49:52.595 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023777.5943918, 37ce1876-2b57-4f84-800c-2a1b6eaa6943 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:49:52 np0005629333 nova_compute[244014]: 2026-02-25 12:49:52.596 244018 INFO nova.compute.manager [-] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:49:52 np0005629333 nova_compute[244014]: 2026-02-25 12:49:52.618 244018 DEBUG nova.compute.manager [None req-c8527e5e-5089-4cbd-8e49-f0673f82ef61 - - - - - -] [instance: 37ce1876-2b57-4f84-800c-2a1b6eaa6943] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:49:52 np0005629333 nova_compute[244014]: 2026-02-25 12:49:52.628 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:53 np0005629333 nova_compute[244014]: 2026-02-25 12:49:53.366 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "828def8a-01dd-4845-98b3-1516060251a0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:53 np0005629333 nova_compute[244014]: 2026-02-25 12:49:53.366 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:53 np0005629333 nova_compute[244014]: 2026-02-25 12:49:53.390 244018 DEBUG nova.compute.manager [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:49:53 np0005629333 nova_compute[244014]: 2026-02-25 12:49:53.468 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:53 np0005629333 nova_compute[244014]: 2026-02-25 12:49:53.469 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
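The Acquiring/acquired/held-for logging around "compute_resources" (and the instance-UUID lock before it) is oslo.concurrency's decorator-based locking; the "inner ... lockutils.py" suffix on each line is the decorator's wrapper emitting the timing. In sketch form, with names chosen for illustration rather than taken from nova's source:

    from oslo_concurrency import lockutils

    # Sketch of the pattern, not nova's actual resource tracker code.
    @lockutils.synchronized('compute_resources', fair=True)
    def instance_claim(context, instance, nodename):
        # Claim CPU/RAM/disk on the node while holding the lock; the
        # "waited"/"held" durations in the log are emitted automatically.
        ...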
Feb 25 07:49:53 np0005629333 nova_compute[244014]: 2026-02-25 12:49:53.477 244018 DEBUG nova.virt.hardware [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:49:53 np0005629333 nova_compute[244014]: 2026-02-25 12:49:53.477 244018 INFO nova.compute.claims [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:49:53 np0005629333 nova_compute[244014]: 2026-02-25 12:49:53.556 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:49:53 np0005629333 nova_compute[244014]: 2026-02-25 12:49:53.609 244018 DEBUG oslo_concurrency.processutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:49:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:49:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2760296266' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:49:54 np0005629333 nova_compute[244014]: 2026-02-25 12:49:54.205 244018 DEBUG oslo_concurrency.processutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
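nova's RBD image backend shells out to `ceph df` (the 0.596 s call above) to learn cluster capacity. A minimal stand-alone version of that probe, assuming Ceph's usual `ceph df --format=json` output schema:

    import json
    import subprocess

    # Same command the log shows nova running; parsing side is a sketch.
    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    free_gib = stats['total_avail_bytes'] / 1024 ** 3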
Feb 25 07:49:54 np0005629333 nova_compute[244014]: 2026-02-25 12:49:54.212 244018 DEBUG nova.compute.provider_tree [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:49:54 np0005629333 nova_compute[244014]: 2026-02-25 12:49:54.239 244018 DEBUG nova.scheduler.client.report [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
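The inventory that "has not changed" encodes the schedulable capacity via placement's usual formula, usable = (total - reserved) * allocation_ratio. Worked out for the values above:

    # Capacity implied by the logged inventory.
    vcpus  = (8 - 0) * 4.0        # 32 schedulable vCPUs
    ram_mb = (7679 - 512) * 1.0   # 7167 MB of allocatable RAM
    disk_g = (59 - 1) * 0.9       # ~52.2 GB of allocatable disk
    assert vcpus == 32.0 and ram_mb == 7167.0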
Feb 25 07:49:54 np0005629333 nova_compute[244014]: 2026-02-25 12:49:54.283 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:54 np0005629333 nova_compute[244014]: 2026-02-25 12:49:54.284 244018 DEBUG nova.compute.manager [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:49:54 np0005629333 nova_compute[244014]: 2026-02-25 12:49:54.385 244018 DEBUG nova.compute.manager [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:49:54 np0005629333 nova_compute[244014]: 2026-02-25 12:49:54.386 244018 DEBUG nova.network.neutron [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:49:54 np0005629333 nova_compute[244014]: 2026-02-25 12:49:54.437 244018 INFO nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:49:54 np0005629333 nova_compute[244014]: 2026-02-25 12:49:54.521 244018 DEBUG nova.compute.manager [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:49:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 409 KiB/s wr, 20 op/s
Feb 25 07:49:54 np0005629333 nova_compute[244014]: 2026-02-25 12:49:54.742 244018 DEBUG nova.compute.manager [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:49:54 np0005629333 nova_compute[244014]: 2026-02-25 12:49:54.744 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:49:54 np0005629333 nova_compute[244014]: 2026-02-25 12:49:54.744 244018 INFO nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Creating image(s)#033[00m
Feb 25 07:49:54 np0005629333 nova_compute[244014]: 2026-02-25 12:49:54.775 244018 DEBUG nova.storage.rbd_utils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 828def8a-01dd-4845-98b3-1516060251a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:49:54 np0005629333 nova_compute[244014]: 2026-02-25 12:49:54.810 244018 DEBUG nova.storage.rbd_utils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 828def8a-01dd-4845-98b3-1516060251a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:49:54 np0005629333 nova_compute[244014]: 2026-02-25 12:49:54.840 244018 DEBUG nova.storage.rbd_utils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 828def8a-01dd-4845-98b3-1516060251a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:49:54 np0005629333 nova_compute[244014]: 2026-02-25 12:49:54.843 244018 DEBUG oslo_concurrency.processutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:49:54 np0005629333 nova_compute[244014]: 2026-02-25 12:49:54.939 244018 DEBUG oslo_concurrency.processutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
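The qemu-img probe above is deliberately wrapped in `python3 -m oslo_concurrency.prlimit` so that a malformed or hostile image cannot blow up memory or spin the CPU during inspection. The wrapper's effect is essentially the following stdlib sketch (not the module's actual source):

    import os
    import resource

    # Cap address space at 1 GiB and CPU time at 30 s, then exec qemu-img.
    path = ('/var/lib/nova/instances/_base/'
            'a63dc6dbb387022d47a8ca49bddcc4af2508a4d6')
    resource.setrlimit(resource.RLIMIT_AS, (1073741824, 1073741824))
    resource.setrlimit(resource.RLIMIT_CPU, (30, 30))
    os.execvp('qemu-img', ['qemu-img', 'info', path,
                           '--force-share', '--output=json'])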
Feb 25 07:49:54 np0005629333 nova_compute[244014]: 2026-02-25 12:49:54.940 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:54 np0005629333 nova_compute[244014]: 2026-02-25 12:49:54.941 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:54 np0005629333 nova_compute[244014]: 2026-02-25 12:49:54.942 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:54 np0005629333 nova_compute[244014]: 2026-02-25 12:49:54.973 244018 DEBUG nova.storage.rbd_utils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 828def8a-01dd-4845-98b3-1516060251a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:49:54 np0005629333 nova_compute[244014]: 2026-02-25 12:49:54.978 244018 DEBUG oslo_concurrency.processutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 828def8a-01dd-4845-98b3-1516060251a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:49:55 np0005629333 nova_compute[244014]: 2026-02-25 12:49:55.024 244018 DEBUG nova.policy [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
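The failed policy check above is expected rather than an error: `network:attach_external_network` defaults to admin-only, and this token only carries the reader/member roles, so nova simply does not consider external networks for the instance. In oslo.policy terms the check is roughly this sketch (default rules assumed):

    from oslo_config import cfg
    from oslo_policy import policy

    # Sketch: with the default admin-only rule, member/reader fails.
    enforcer = policy.Enforcer(cfg.CONF)
    allowed = enforcer.enforce('network:attach_external_network',
                               {}, {'roles': ['reader', 'member']})
    # allowed -> False for this credential set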
Feb 25 07:49:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:55.026 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:55.027 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:49:55.028 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:55 np0005629333 nova_compute[244014]: 2026-02-25 12:49:55.270 244018 DEBUG oslo_concurrency.processutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 828def8a-01dd-4845-98b3-1516060251a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:49:55 np0005629333 nova_compute[244014]: 2026-02-25 12:49:55.328 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:55 np0005629333 nova_compute[244014]: 2026-02-25 12:49:55.335 244018 DEBUG nova.storage.rbd_utils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image 828def8a-01dd-4845-98b3-1516060251a0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
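The import/resize pair is nova's Rbd backend materialising the root disk: the cached base image (a ~21 MB qcow2, per the image_meta later in the log) is imported into the `vms` pool and then grown to the flavor's root disk. The 1073741824-byte target is exactly the 1 GiB root_gb of the m1.nano flavor logged below:

    # The resize target above is the flavor's root disk size in bytes.
    root_gb = 1                      # m1.nano, from the flavor in the log
    assert root_gb * 1024 ** 3 == 1073741824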
Feb 25 07:49:55 np0005629333 nova_compute[244014]: 2026-02-25 12:49:55.426 244018 DEBUG nova.objects.instance [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid 828def8a-01dd-4845-98b3-1516060251a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:49:55 np0005629333 nova_compute[244014]: 2026-02-25 12:49:55.447 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:49:55 np0005629333 nova_compute[244014]: 2026-02-25 12:49:55.448 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Ensure instance console log exists: /var/lib/nova/instances/828def8a-01dd-4845-98b3-1516060251a0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:49:55 np0005629333 nova_compute[244014]: 2026-02-25 12:49:55.448 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:49:55 np0005629333 nova_compute[244014]: 2026-02-25 12:49:55.449 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:49:55 np0005629333 nova_compute[244014]: 2026-02-25 12:49:55.449 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:49:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2081: 305 pgs: 305 active+clean; 233 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Feb 25 07:49:56 np0005629333 nova_compute[244014]: 2026-02-25 12:49:56.575 244018 DEBUG nova.network.neutron [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Successfully created port: 99fbecbd-1815-4bf9-8e7f-6f114847f46f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:49:57 np0005629333 nova_compute[244014]: 2026-02-25 12:49:57.190 244018 DEBUG nova.network.neutron [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Successfully created port: 94cae4f8-cc4b-443a-b22b-36ef77438ede _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:49:57 np0005629333 nova_compute[244014]: 2026-02-25 12:49:57.630 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:49:58 np0005629333 nova_compute[244014]: 2026-02-25 12:49:58.094 244018 DEBUG nova.network.neutron [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Successfully updated port: 99fbecbd-1815-4bf9-8e7f-6f114847f46f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:49:58 np0005629333 nova_compute[244014]: 2026-02-25 12:49:58.221 244018 DEBUG nova.compute.manager [req-2ab21ab8-7570-4600-9eb4-b0ae72398b8d req-2b70f917-687d-45b6-9e1f-58d4621732a5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-changed-99fbecbd-1815-4bf9-8e7f-6f114847f46f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:49:58 np0005629333 nova_compute[244014]: 2026-02-25 12:49:58.221 244018 DEBUG nova.compute.manager [req-2ab21ab8-7570-4600-9eb4-b0ae72398b8d req-2b70f917-687d-45b6-9e1f-58d4621732a5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Refreshing instance network info cache due to event network-changed-99fbecbd-1815-4bf9-8e7f-6f114847f46f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:49:58 np0005629333 nova_compute[244014]: 2026-02-25 12:49:58.221 244018 DEBUG oslo_concurrency.lockutils [req-2ab21ab8-7570-4600-9eb4-b0ae72398b8d req-2b70f917-687d-45b6-9e1f-58d4621732a5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:49:58 np0005629333 nova_compute[244014]: 2026-02-25 12:49:58.222 244018 DEBUG oslo_concurrency.lockutils [req-2ab21ab8-7570-4600-9eb4-b0ae72398b8d req-2b70f917-687d-45b6-9e1f-58d4621732a5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:49:58 np0005629333 nova_compute[244014]: 2026-02-25 12:49:58.222 244018 DEBUG nova.network.neutron [req-2ab21ab8-7570-4600-9eb4-b0ae72398b8d req-2b70f917-687d-45b6-9e1f-58d4621732a5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Refreshing network info cache for port 99fbecbd-1815-4bf9-8e7f-6f114847f46f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:49:58 np0005629333 nova_compute[244014]: 2026-02-25 12:49:58.417 244018 DEBUG nova.network.neutron [req-2ab21ab8-7570-4600-9eb4-b0ae72398b8d req-2b70f917-687d-45b6-9e1f-58d4621732a5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:49:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2082: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:49:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:49:59 np0005629333 nova_compute[244014]: 2026-02-25 12:49:59.079 244018 DEBUG nova.network.neutron [req-2ab21ab8-7570-4600-9eb4-b0ae72398b8d req-2b70f917-687d-45b6-9e1f-58d4621732a5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:49:59 np0005629333 nova_compute[244014]: 2026-02-25 12:49:59.100 244018 DEBUG oslo_concurrency.lockutils [req-2ab21ab8-7570-4600-9eb4-b0ae72398b8d req-2b70f917-687d-45b6-9e1f-58d4621732a5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:50:00 np0005629333 nova_compute[244014]: 2026-02-25 12:50:00.324 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:00 np0005629333 nova_compute[244014]: 2026-02-25 12:50:00.372 244018 DEBUG nova.network.neutron [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Successfully updated port: 94cae4f8-cc4b-443a-b22b-36ef77438ede _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:50:00 np0005629333 nova_compute[244014]: 2026-02-25 12:50:00.406 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:50:00 np0005629333 nova_compute[244014]: 2026-02-25 12:50:00.407 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:50:00 np0005629333 nova_compute[244014]: 2026-02-25 12:50:00.407 244018 DEBUG nova.network.neutron [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:50:00 np0005629333 nova_compute[244014]: 2026-02-25 12:50:00.470 244018 DEBUG nova.compute.manager [req-46551c20-e378-4344-b5aa-706552095b90 req-eb51a01e-f17d-45b4-b2c3-512c5151233e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-changed-94cae4f8-cc4b-443a-b22b-36ef77438ede external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:50:00 np0005629333 nova_compute[244014]: 2026-02-25 12:50:00.471 244018 DEBUG nova.compute.manager [req-46551c20-e378-4344-b5aa-706552095b90 req-eb51a01e-f17d-45b4-b2c3-512c5151233e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Refreshing instance network info cache due to event network-changed-94cae4f8-cc4b-443a-b22b-36ef77438ede. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:50:00 np0005629333 nova_compute[244014]: 2026-02-25 12:50:00.472 244018 DEBUG oslo_concurrency.lockutils [req-46551c20-e378-4344-b5aa-706552095b90 req-eb51a01e-f17d-45b4-b2c3-512c5151233e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:50:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:50:00 np0005629333 nova_compute[244014]: 2026-02-25 12:50:00.607 244018 DEBUG nova.network.neutron [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:50:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:50:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:50:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:50:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:50:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:50:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:50:02 np0005629333 nova_compute[244014]: 2026-02-25 12:50:02.350 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "d6cf21ec-717e-41f7-9351-2214b43ce275" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:02 np0005629333 nova_compute[244014]: 2026-02-25 12:50:02.351 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "d6cf21ec-717e-41f7-9351-2214b43ce275" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:02 np0005629333 nova_compute[244014]: 2026-02-25 12:50:02.384 244018 DEBUG nova.compute.manager [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:50:02 np0005629333 nova_compute[244014]: 2026-02-25 12:50:02.463 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:02 np0005629333 nova_compute[244014]: 2026-02-25 12:50:02.464 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:02 np0005629333 nova_compute[244014]: 2026-02-25 12:50:02.469 244018 DEBUG nova.virt.hardware [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:50:02 np0005629333 nova_compute[244014]: 2026-02-25 12:50:02.469 244018 INFO nova.compute.claims [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:50:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2084: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:50:02 np0005629333 nova_compute[244014]: 2026-02-25 12:50:02.632 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:02 np0005629333 nova_compute[244014]: 2026-02-25 12:50:02.636 244018 DEBUG oslo_concurrency.processutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:50:02 np0005629333 podman[355078]: 2026-02-25 12:50:02.746732613 +0000 UTC m=+0.078299821 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Feb 25 07:50:02 np0005629333 podman[355080]: 2026-02-25 12:50:02.786750903 +0000 UTC m=+0.115649376 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Feb 25 07:50:02 np0005629333 nova_compute[244014]: 2026-02-25 12:50:02.930 244018 DEBUG nova.network.neutron [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Updating instance_info_cache with network_info: [{"id": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "address": "fa:16:3e:7e:94:3d", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbecbd-18", "ovs_interfaceid": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "address": "fa:16:3e:ec:bf:12", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feec:bf12", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94cae4f8-cc", "ovs_interfaceid": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:50:02 np0005629333 nova_compute[244014]: 2026-02-25 12:50:02.969 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:50:02 np0005629333 nova_compute[244014]: 2026-02-25 12:50:02.970 244018 DEBUG nova.compute.manager [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Instance network_info: |[{"id": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "address": "fa:16:3e:7e:94:3d", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbecbd-18", "ovs_interfaceid": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "address": "fa:16:3e:ec:bf:12", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feec:bf12", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94cae4f8-cc", "ovs_interfaceid": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
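One detail worth noticing in the network_info payloads above: both tenant networks carry mtu 1442, which is consistent with a 1500-byte underlay minus OVN's 58-byte Geneve encapsulation overhead (an inference; the underlay MTU itself is not shown in this log):

    # Tenant MTU arithmetic (assumes 1500-byte underlay, Geneve overhead 58).
    assert 1500 - 58 == 1442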
Feb 25 07:50:02 np0005629333 nova_compute[244014]: 2026-02-25 12:50:02.972 244018 DEBUG oslo_concurrency.lockutils [req-46551c20-e378-4344-b5aa-706552095b90 req-eb51a01e-f17d-45b4-b2c3-512c5151233e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:50:02 np0005629333 nova_compute[244014]: 2026-02-25 12:50:02.973 244018 DEBUG nova.network.neutron [req-46551c20-e378-4344-b5aa-706552095b90 req-eb51a01e-f17d-45b4-b2c3-512c5151233e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Refreshing network info cache for port 94cae4f8-cc4b-443a-b22b-36ef77438ede _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:50:02 np0005629333 nova_compute[244014]: 2026-02-25 12:50:02.980 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Start _get_guest_xml network_info=[{"id": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "address": "fa:16:3e:7e:94:3d", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbecbd-18", "ovs_interfaceid": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "address": "fa:16:3e:ec:bf:12", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feec:bf12", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94cae4f8-cc", "ovs_interfaceid": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:50:02 np0005629333 nova_compute[244014]: 2026-02-25 12:50:02.989 244018 WARNING nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:50:02 np0005629333 nova_compute[244014]: 2026-02-25 12:50:02.995 244018 DEBUG nova.virt.libvirt.host [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:50:02 np0005629333 nova_compute[244014]: 2026-02-25 12:50:02.996 244018 DEBUG nova.virt.libvirt.host [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.006 244018 DEBUG nova.virt.libvirt.host [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.007 244018 DEBUG nova.virt.libvirt.host [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.008 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.009 244018 DEBUG nova.virt.hardware [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.010 244018 DEBUG nova.virt.hardware [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.010 244018 DEBUG nova.virt.hardware [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.011 244018 DEBUG nova.virt.hardware [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.012 244018 DEBUG nova.virt.hardware [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.012 244018 DEBUG nova.virt.hardware [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.013 244018 DEBUG nova.virt.hardware [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.013 244018 DEBUG nova.virt.hardware [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.014 244018 DEBUG nova.virt.hardware [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.014 244018 DEBUG nova.virt.hardware [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.015 244018 DEBUG nova.virt.hardware [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.021 244018 DEBUG oslo_concurrency.processutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:50:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:50:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1622933079' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.186 244018 DEBUG oslo_concurrency.processutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.205 244018 DEBUG nova.compute.provider_tree [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.232 244018 DEBUG nova.scheduler.client.report [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.279 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.280 244018 DEBUG nova.compute.manager [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.344 244018 DEBUG nova.compute.manager [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.345 244018 DEBUG nova.network.neutron [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.368 244018 INFO nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.389 244018 DEBUG nova.compute.manager [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.480 244018 DEBUG nova.compute.manager [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.482 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.483 244018 INFO nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Creating image(s)#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.514 244018 DEBUG nova.storage.rbd_utils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image d6cf21ec-717e-41f7-9351-2214b43ce275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.547 244018 DEBUG nova.storage.rbd_utils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image d6cf21ec-717e-41f7-9351-2214b43ce275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:50:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:50:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2827062943' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.582 244018 DEBUG nova.storage.rbd_utils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image d6cf21ec-717e-41f7-9351-2214b43ce275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:50:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.587 244018 DEBUG oslo_concurrency.processutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.612 244018 DEBUG oslo_concurrency.processutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.645 244018 DEBUG nova.storage.rbd_utils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 828def8a-01dd-4845-98b3-1516060251a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.650 244018 DEBUG oslo_concurrency.processutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.700 244018 DEBUG oslo_concurrency.processutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.701 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.702 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.703 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.735 244018 DEBUG nova.storage.rbd_utils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image d6cf21ec-717e-41f7-9351-2214b43ce275_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.741 244018 DEBUG oslo_concurrency.processutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d6cf21ec-717e-41f7-9351-2214b43ce275_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:50:03 np0005629333 nova_compute[244014]: 2026-02-25 12:50:03.913 244018 DEBUG nova.policy [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.069 244018 DEBUG oslo_concurrency.processutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d6cf21ec-717e-41f7-9351-2214b43ce275_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.327s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.133 244018 DEBUG nova.storage.rbd_utils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] resizing rbd image d6cf21ec-717e-41f7-9351-2214b43ce275_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.218 244018 DEBUG nova.objects.instance [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'migration_context' on Instance uuid d6cf21ec-717e-41f7-9351-2214b43ce275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:50:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:50:04 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3173344239' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.231 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.231 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Ensure instance console log exists: /var/lib/nova/instances/d6cf21ec-717e-41f7-9351-2214b43ce275/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.232 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.232 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.232 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.240 244018 DEBUG oslo_concurrency.processutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.241 244018 DEBUG nova.virt.libvirt.vif [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:49:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1153433119',display_name='tempest-TestGettingAddress-server-1153433119',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1153433119',id=123,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNlLCvqIkUYh/9ZHtxnKBN7YBsKpfQ8TjO19iCX554GJUCo/N1T5J3/ZTJ7NHwET0eZFR6/wdUxNMyoCAZQSh5tzxHfA6vjgYnp2UPefpqRUyjRlBnYX2yGGsA8ccwHtsg==',key_name='tempest-TestGettingAddress-238420135',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-ee56zhhv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:49:54Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=828def8a-01dd-4845-98b3-1516060251a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "address": "fa:16:3e:7e:94:3d", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbecbd-18", "ovs_interfaceid": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.242 244018 DEBUG nova.network.os_vif_util [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "address": "fa:16:3e:7e:94:3d", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbecbd-18", "ovs_interfaceid": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.243 244018 DEBUG nova.network.os_vif_util [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:94:3d,bridge_name='br-int',has_traffic_filtering=True,id=99fbecbd-1815-4bf9-8e7f-6f114847f46f,network=Network(481feaf1-7ff4-47be-9159-a1dd19ceebcc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99fbecbd-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.244 244018 DEBUG nova.virt.libvirt.vif [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:49:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1153433119',display_name='tempest-TestGettingAddress-server-1153433119',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1153433119',id=123,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNlLCvqIkUYh/9ZHtxnKBN7YBsKpfQ8TjO19iCX554GJUCo/N1T5J3/ZTJ7NHwET0eZFR6/wdUxNMyoCAZQSh5tzxHfA6vjgYnp2UPefpqRUyjRlBnYX2yGGsA8ccwHtsg==',key_name='tempest-TestGettingAddress-238420135',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-ee56zhhv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:49:54Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=828def8a-01dd-4845-98b3-1516060251a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "address": "fa:16:3e:ec:bf:12", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feec:bf12", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94cae4f8-cc", "ovs_interfaceid": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.244 244018 DEBUG nova.network.os_vif_util [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "address": "fa:16:3e:ec:bf:12", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feec:bf12", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94cae4f8-cc", "ovs_interfaceid": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.244 244018 DEBUG nova.network.os_vif_util [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:bf:12,bridge_name='br-int',has_traffic_filtering=True,id=94cae4f8-cc4b-443a-b22b-36ef77438ede,network=Network(eb832dde-9848-40c5-9505-cc643b1bd0fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94cae4f8-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.245 244018 DEBUG nova.objects.instance [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 828def8a-01dd-4845-98b3-1516060251a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.258 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:50:04 np0005629333 nova_compute[244014]:  <uuid>828def8a-01dd-4845-98b3-1516060251a0</uuid>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:  <name>instance-0000007b</name>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestGettingAddress-server-1153433119</nova:name>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:50:02</nova:creationTime>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:50:04 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:        <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:        <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:        <nova:port uuid="99fbecbd-1815-4bf9-8e7f-6f114847f46f">
Feb 25 07:50:04 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:        <nova:port uuid="94cae4f8-cc4b-443a-b22b-36ef77438ede">
Feb 25 07:50:04 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:feec:bf12" ipVersion="6"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <entry name="serial">828def8a-01dd-4845-98b3-1516060251a0</entry>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <entry name="uuid">828def8a-01dd-4845-98b3-1516060251a0</entry>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/828def8a-01dd-4845-98b3-1516060251a0_disk">
Feb 25 07:50:04 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:50:04 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/828def8a-01dd-4845-98b3-1516060251a0_disk.config">
Feb 25 07:50:04 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:50:04 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:7e:94:3d"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <target dev="tap99fbecbd-18"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:ec:bf:12"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <target dev="tap94cae4f8-cc"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/828def8a-01dd-4845-98b3-1516060251a0/console.log" append="off"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:50:04 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:50:04 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:50:04 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:50:04 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.259 244018 DEBUG nova.compute.manager [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Preparing to wait for external event network-vif-plugged-99fbecbd-1815-4bf9-8e7f-6f114847f46f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.259 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "828def8a-01dd-4845-98b3-1516060251a0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.259 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.259 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.260 244018 DEBUG nova.compute.manager [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Preparing to wait for external event network-vif-plugged-94cae4f8-cc4b-443a-b22b-36ef77438ede prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.260 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "828def8a-01dd-4845-98b3-1516060251a0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.260 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.260 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.261 244018 DEBUG nova.virt.libvirt.vif [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:49:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1153433119',display_name='tempest-TestGettingAddress-server-1153433119',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1153433119',id=123,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNlLCvqIkUYh/9ZHtxnKBN7YBsKpfQ8TjO19iCX554GJUCo/N1T5J3/ZTJ7NHwET0eZFR6/wdUxNMyoCAZQSh5tzxHfA6vjgYnp2UPefpqRUyjRlBnYX2yGGsA8ccwHtsg==',key_name='tempest-TestGettingAddress-238420135',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-ee56zhhv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:49:54Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=828def8a-01dd-4845-98b3-1516060251a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "address": "fa:16:3e:7e:94:3d", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbecbd-18", "ovs_interfaceid": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.261 244018 DEBUG nova.network.os_vif_util [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "address": "fa:16:3e:7e:94:3d", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbecbd-18", "ovs_interfaceid": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.261 244018 DEBUG nova.network.os_vif_util [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:94:3d,bridge_name='br-int',has_traffic_filtering=True,id=99fbecbd-1815-4bf9-8e7f-6f114847f46f,network=Network(481feaf1-7ff4-47be-9159-a1dd19ceebcc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99fbecbd-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.262 244018 DEBUG os_vif [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:94:3d,bridge_name='br-int',has_traffic_filtering=True,id=99fbecbd-1815-4bf9-8e7f-6f114847f46f,network=Network(481feaf1-7ff4-47be-9159-a1dd19ceebcc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99fbecbd-18') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.262 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.263 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.263 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.266 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.266 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99fbecbd-18, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.266 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap99fbecbd-18, col_values=(('external_ids', {'iface-id': '99fbecbd-1815-4bf9-8e7f-6f114847f46f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7e:94:3d', 'vm-uuid': '828def8a-01dd-4845-98b3-1516060251a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:04 np0005629333 NetworkManager[49836]: <info>  [1772023804.2688] manager: (tap99fbecbd-18): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/542)
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.270 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.272 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.273 244018 INFO os_vif [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:94:3d,bridge_name='br-int',has_traffic_filtering=True,id=99fbecbd-1815-4bf9-8e7f-6f114847f46f,network=Network(481feaf1-7ff4-47be-9159-a1dd19ceebcc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99fbecbd-18')#033[00m
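
Under the hood the 'ovs' plugin issued exactly the OVSDB transactions logged above through ovsdbapp. A rough equivalent follows; the ovsdb-server socket path and timeout are illustrative assumptions, not taken from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # may_exist=True makes both commands idempotent, which is why the
        # AddBridgeCommand above reports "Transaction caused no change".
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap99fbecbd-18', may_exist=True))
        # external_ids:iface-id is the key ovn-controller later matches
        # against the southbound Port_Binding table to claim the port.
        txn.add(api.db_set('Interface', 'tap99fbecbd-18',
                           ('external_ids',
                            {'iface-id': '99fbecbd-1815-4bf9-8e7f-6f114847f46f',
                             'iface-status': 'active',
                             'attached-mac': 'fa:16:3e:7e:94:3d',
                             'vm-uuid': '828def8a-01dd-4845-98b3-1516060251a0'})))
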
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.274 244018 DEBUG nova.virt.libvirt.vif [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:49:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1153433119',display_name='tempest-TestGettingAddress-server-1153433119',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1153433119',id=123,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNlLCvqIkUYh/9ZHtxnKBN7YBsKpfQ8TjO19iCX554GJUCo/N1T5J3/ZTJ7NHwET0eZFR6/wdUxNMyoCAZQSh5tzxHfA6vjgYnp2UPefpqRUyjRlBnYX2yGGsA8ccwHtsg==',key_name='tempest-TestGettingAddress-238420135',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-ee56zhhv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:49:54Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=828def8a-01dd-4845-98b3-1516060251a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "address": "fa:16:3e:ec:bf:12", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feec:bf12", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94cae4f8-cc", "ovs_interfaceid": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.274 244018 DEBUG nova.network.os_vif_util [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "address": "fa:16:3e:ec:bf:12", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feec:bf12", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94cae4f8-cc", "ovs_interfaceid": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.274 244018 DEBUG nova.network.os_vif_util [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:bf:12,bridge_name='br-int',has_traffic_filtering=True,id=94cae4f8-cc4b-443a-b22b-36ef77438ede,network=Network(eb832dde-9848-40c5-9505-cc643b1bd0fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94cae4f8-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.275 244018 DEBUG os_vif [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:bf:12,bridge_name='br-int',has_traffic_filtering=True,id=94cae4f8-cc4b-443a-b22b-36ef77438ede,network=Network(eb832dde-9848-40c5-9505-cc643b1bd0fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94cae4f8-cc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.275 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.275 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.275 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.278 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.278 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap94cae4f8-cc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.279 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap94cae4f8-cc, col_values=(('external_ids', {'iface-id': '94cae4f8-cc4b-443a-b22b-36ef77438ede', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ec:bf:12', 'vm-uuid': '828def8a-01dd-4845-98b3-1516060251a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:04 np0005629333 NetworkManager[49836]: <info>  [1772023804.2815] manager: (tap94cae4f8-cc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/543)
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.282 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.285 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.286 244018 INFO os_vif [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:bf:12,bridge_name='br-int',has_traffic_filtering=True,id=94cae4f8-cc4b-443a-b22b-36ef77438ede,network=Network(eb832dde-9848-40c5-9505-cc643b1bd0fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94cae4f8-cc')#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.347 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.348 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.348 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:7e:94:3d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.349 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:ec:bf:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.349 244018 INFO nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Using config drive#033[00m
Feb 25 07:50:04 np0005629333 nova_compute[244014]: 2026-02-25 12:50:04.370 244018 DEBUG nova.storage.rbd_utils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 828def8a-01dd-4845-98b3-1516060251a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:50:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.084 244018 INFO nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Creating config drive at /var/lib/nova/instances/828def8a-01dd-4845-98b3-1516060251a0/disk.config#033[00m
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.091 244018 DEBUG oslo_concurrency.processutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/828def8a-01dd-4845-98b3-1516060251a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpknv5lugx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.234 244018 DEBUG oslo_concurrency.processutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/828def8a-01dd-4845-98b3-1516060251a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpknv5lugx" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.266 244018 DEBUG nova.storage.rbd_utils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 828def8a-01dd-4845-98b3-1516060251a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.271 244018 DEBUG oslo_concurrency.processutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/828def8a-01dd-4845-98b3-1516060251a0/disk.config 828def8a-01dd-4845-98b3-1516060251a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.326 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.402 244018 DEBUG oslo_concurrency.processutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/828def8a-01dd-4845-98b3-1516060251a0/disk.config 828def8a-01dd-4845-98b3-1516060251a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.403 244018 INFO nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Deleting local config drive /var/lib/nova/instances/828def8a-01dd-4845-98b3-1516060251a0/disk.config because it was imported into RBD.#033[00m
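
The config-drive sequence above amounts to two subprocess calls: mkisofs packs the staged metadata directory into an ISO9660 image labelled config-2 (the volume label cloud-init looks for), and rbd import moves it into the Ceph vms pool so the local file can be deleted. A sketch replaying the same commands, with paths and arguments copied from the log (the -publisher and -quiet flags are dropped for brevity; oslo.concurrency's processutils.execute is essentially this):

    import subprocess

    base = '/var/lib/nova/instances/828def8a-01dd-4845-98b3-1516060251a0'
    iso = f'{base}/disk.config'

    # Build the config drive from the staging directory nova populated.
    subprocess.run(['/usr/bin/mkisofs', '-o', iso, '-ldots',
                    '-allow-lowercase', '-allow-multidot', '-l',
                    '-J', '-r', '-V', 'config-2', '/tmp/tmpknv5lugx'],
                   check=True)

    # Import it into RBD; afterwards the local copy is removed, matching
    # the "Deleting local config drive ..." line above.
    subprocess.run(['rbd', 'import', '--pool', 'vms', iso,
                    '828def8a-01dd-4845-98b3-1516060251a0_disk.config',
                    '--image-format=2', '--id', 'openstack',
                    '--conf', '/etc/ceph/ceph.conf'],
                   check=True)
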
Feb 25 07:50:05 np0005629333 NetworkManager[49836]: <info>  [1772023805.4568] manager: (tap99fbecbd-18): new Tun device (/org/freedesktop/NetworkManager/Devices/544)
Feb 25 07:50:05 np0005629333 kernel: tap99fbecbd-18: entered promiscuous mode
Feb 25 07:50:05 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:05Z|01297|binding|INFO|Claiming lport 99fbecbd-1815-4bf9-8e7f-6f114847f46f for this chassis.
Feb 25 07:50:05 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:05Z|01298|binding|INFO|99fbecbd-1815-4bf9-8e7f-6f114847f46f: Claiming fa:16:3e:7e:94:3d 10.100.0.3
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.461 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.470 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:94:3d 10.100.0.3'], port_security=['fa:16:3e:7e:94:3d 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '828def8a-01dd-4845-98b3-1516060251a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f3102e48-4ea3-4c65-a010-ac507aeeeba5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7b776d7-fb2a-404e-b423-f79885837022, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=99fbecbd-1815-4bf9-8e7f-6f114847f46f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:50:05 np0005629333 NetworkManager[49836]: <info>  [1772023805.4718] manager: (tap94cae4f8-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/545)
Feb 25 07:50:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.472 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 99fbecbd-1815-4bf9-8e7f-6f114847f46f in datapath 481feaf1-7ff4-47be-9159-a1dd19ceebcc bound to our chassis#033[00m
Feb 25 07:50:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.473 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 481feaf1-7ff4-47be-9159-a1dd19ceebcc#033[00m
Feb 25 07:50:05 np0005629333 kernel: tap94cae4f8-cc: entered promiscuous mode
Feb 25 07:50:05 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:05Z|01299|binding|INFO|Setting lport 99fbecbd-1815-4bf9-8e7f-6f114847f46f ovn-installed in OVS
Feb 25 07:50:05 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:05Z|01300|binding|INFO|Setting lport 99fbecbd-1815-4bf9-8e7f-6f114847f46f up in Southbound
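
The claim sequence above is ovn-controller reacting to the OVSDB write from os-vif: it sees a new Interface whose external_ids:iface-id matches a southbound Port_Binding row requesting this chassis, claims the port, marks it ovn-installed in OVS, and sets it up in the southbound DB. A hypothetical cross-check helper for a claim like this one (on a compute node ovn-sbctl may need --db pointing at the southbound endpoint):

    import subprocess

    def check_binding(tap: str) -> None:
        # The iface-id written by os-vif must equal the logical_port name.
        iface_id = subprocess.check_output(
            ['ovs-vsctl', 'get', 'Interface', tap, 'external_ids:iface-id'],
            text=True).strip().strip('"')
        # The matching Port_Binding row should list this host's chassis.
        print(subprocess.check_output(
            ['ovn-sbctl', 'find', 'Port_Binding', f'logical_port={iface_id}'],
            text=True))

    check_binding('tap99fbecbd-18')
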
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.479 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:05 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:05Z|01301|if_status|INFO|Dropped 5 log messages in last 126 seconds (most recently, 126 seconds ago) due to excessive rate
Feb 25 07:50:05 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:05Z|01302|if_status|INFO|Not updating pb chassis for 94cae4f8-cc4b-443a-b22b-36ef77438ede now as sb is readonly
Feb 25 07:50:05 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:05Z|01303|binding|INFO|Claiming lport 94cae4f8-cc4b-443a-b22b-36ef77438ede for this chassis.
Feb 25 07:50:05 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:05Z|01304|binding|INFO|94cae4f8-cc4b-443a-b22b-36ef77438ede: Claiming fa:16:3e:ec:bf:12 2001:db8::f816:3eff:feec:bf12
Feb 25 07:50:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.488 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:bf:12 2001:db8::f816:3eff:feec:bf12'], port_security=['fa:16:3e:ec:bf:12 2001:db8::f816:3eff:feec:bf12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feec:bf12/64', 'neutron:device_id': '828def8a-01dd-4845-98b3-1516060251a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb832dde-9848-40c5-9505-cc643b1bd0fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f3102e48-4ea3-4c65-a010-ac507aeeeba5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f83b91c5-311c-45de-aa70-06fb6ec6fece, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=94cae4f8-cc4b-443a-b22b-36ef77438ede) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
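
The "Matched UPDATE: PortBindingUpdatedEvent" lines come from the OVN metadata agent registering row-event hooks with ovsdbapp's IDL, which fire when a watched table changes. A simplified sketch of that pattern, assuming ovsdbapp's RowEvent interface as neutron uses it (the handle_port_binding callback is hypothetical):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, agent):
            self.agent = agent
            # Fire on updates to Port_Binding rows, matching the log line.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Only interesting when the chassis column changed, e.g. the
            # old=Port_Binding(chassis=[]) -> bound transition seen above.
            return hasattr(old, 'chassis')

        def run(self, event, row, old):
            self.agent.handle_port_binding(row)  # hypothetical callback
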
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.489 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.491 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:05 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:05Z|01305|binding|INFO|Setting lport 94cae4f8-cc4b-443a-b22b-36ef77438ede up in Southbound
Feb 25 07:50:05 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:05Z|01306|binding|INFO|Setting lport 94cae4f8-cc4b-443a-b22b-36ef77438ede ovn-installed in OVS
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.492 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.494 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6c0185f2-8c52-4e4f-afb7-d711652db354]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:05 np0005629333 systemd-udevd[355452]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:50:05 np0005629333 systemd-udevd[355451]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:50:05 np0005629333 systemd-machined[210048]: New machine qemu-155-instance-0000007b.
Feb 25 07:50:05 np0005629333 NetworkManager[49836]: <info>  [1772023805.5085] device (tap99fbecbd-18): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:50:05 np0005629333 NetworkManager[49836]: <info>  [1772023805.5099] device (tap99fbecbd-18): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:50:05 np0005629333 NetworkManager[49836]: <info>  [1772023805.5112] device (tap94cae4f8-cc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:50:05 np0005629333 NetworkManager[49836]: <info>  [1772023805.5123] device (tap94cae4f8-cc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:50:05 np0005629333 systemd[1]: Started Virtual Machine qemu-155-instance-0000007b.
Feb 25 07:50:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.525 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[262ae0af-ae0d-4e83-909e-ee16d476faf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.529 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[07b26c99-65cb-481a-a598-4896a6e2555e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.555 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a20e398c-6639-4d3b-9159-822b4509754e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.572 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d1c83fc8-8c9d-4dea-8e26-6cd8361bc330]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap481feaf1-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:24:6f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 382], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573491, 'reachable_time': 20258, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355465, 'error': None, 'target': 'ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.584 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c5bebb6b-f1a5-4f60-abf2-1e4c0922e34d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap481feaf1-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573500, 'tstamp': 573500}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355467, 'error': None, 'target': 'ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap481feaf1-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573502, 'tstamp': 573502}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355467, 'error': None, 'target': 'ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
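
"Provisioning metadata for network ..." builds (or reuses) an ovnmeta-<network-uuid> namespace holding one end of a veth pair: the privsep replies above show the inner end tap481feaf1-71 (IFLA_INFO_KIND veth, target ovnmeta-481feaf1-...) up with 10.100.0.2/28 and the well-known metadata address 169.254.169.254/32. A rough pyroute2 sketch of that wiring; the agent actually performs it through oslo.privsep, and idempotency and error handling are omitted here:

    from pyroute2 import IPRoute, NetNS, netns

    ns_name = 'ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc'
    outer, inner = 'tap481feaf1-70', 'tap481feaf1-71'

    netns.create(ns_name)  # raises if it already exists; the agent reuses it

    ipr = IPRoute()
    ipr.link('add', ifname=outer, kind='veth', peer=inner)
    idx = ipr.link_lookup(ifname=inner)[0]
    ipr.link('set', index=idx, net_ns_fd=ns_name)  # move inner end into the netns
    ipr.close()

    ns = NetNS(ns_name)
    idx = ns.link_lookup(ifname=inner)[0]
    ns.link('set', index=idx, state='up')
    ns.addr('add', index=idx, address='10.100.0.2', prefixlen=28)
    ns.addr('add', index=idx, address='169.254.169.254', prefixlen=32)
    ns.close()

The outer end is then attached to br-int with external_ids:iface-id set to the network's localport, which is exactly the DelPortCommand/AddPortCommand/DbSetCommand sequence in the lines that follow.
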
Feb 25 07:50:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.585 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap481feaf1-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.587 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.588 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.588 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap481feaf1-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.588 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:50:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.588 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap481feaf1-70, col_values=(('external_ids', {'iface-id': '8234c876-6930-42e9-b642-2d1f82e23ee0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.589 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:50:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.590 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 94cae4f8-cc4b-443a-b22b-36ef77438ede in datapath eb832dde-9848-40c5-9505-cc643b1bd0fa unbound from our chassis#033[00m
Feb 25 07:50:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.591 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eb832dde-9848-40c5-9505-cc643b1bd0fa#033[00m
Feb 25 07:50:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.601 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1a0fb45f-3cbb-4387-bbfa-c76fedc86f30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.619 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[6d2e7a98-74f4-4695-ba36-41b2a824df1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.622 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1ce95209-39c0-49e7-8899-7caf69d4b8d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.642 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e9adfd05-cc90-4990-860a-64d9420743fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.652 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[57b396d4-a789-4a6c-aa07-27d8dbb23cfe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb832dde-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:a5:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 17, 'tx_packets': 4, 'rx_bytes': 1502, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 17, 'tx_packets': 4, 'rx_bytes': 1502, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573564, 'reachable_time': 17677, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 17, 'inoctets': 1264, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 17, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 17, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355474, 'error': None, 'target': 'ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.664 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a2fc671d-3c0c-4b98-87fc-cbe61df84358]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapeb832dde-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573573, 'tstamp': 573573}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355475, 'error': None, 'target': 'ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.665 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb832dde-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.666 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.667 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.668 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb832dde-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.668 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:50:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.668 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeb832dde-90, col_values=(('external_ids', {'iface-id': 'b3234524-e109-417d-a204-a0ab750c983e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:05.669 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.676 244018 DEBUG nova.compute.manager [req-058c4a5d-a447-4942-9e49-a9b9d4d07625 req-e2aa301a-e093-4c64-a662-b41aaa2d372a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-vif-plugged-94cae4f8-cc4b-443a-b22b-36ef77438ede external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.676 244018 DEBUG oslo_concurrency.lockutils [req-058c4a5d-a447-4942-9e49-a9b9d4d07625 req-e2aa301a-e093-4c64-a662-b41aaa2d372a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "828def8a-01dd-4845-98b3-1516060251a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.676 244018 DEBUG oslo_concurrency.lockutils [req-058c4a5d-a447-4942-9e49-a9b9d4d07625 req-e2aa301a-e093-4c64-a662-b41aaa2d372a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.677 244018 DEBUG oslo_concurrency.lockutils [req-058c4a5d-a447-4942-9e49-a9b9d4d07625 req-e2aa301a-e093-4c64-a662-b41aaa2d372a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.677 244018 DEBUG nova.compute.manager [req-058c4a5d-a447-4942-9e49-a9b9d4d07625 req-e2aa301a-e093-4c64-a662-b41aaa2d372a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Processing event network-vif-plugged-94cae4f8-cc4b-443a-b22b-36ef77438ede _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
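
The lock/pop choreography above is nova's external-event handshake: the spawning thread registers the network-vif-plugged events it expects, and the handler thread pops and signals each one under a per-instance "<uuid>-events" lock. A stdlib-only toy model of that handshake (the real implementation is nova.compute.manager.InstanceEvents and runs on eventlet, not plain threads):

    import threading
    from collections import defaultdict

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = defaultdict(dict)  # instance uuid -> {event name: Event}

        def prepare(self, uuid, name):
            with self._lock:
                ev = self._events[uuid][name] = threading.Event()
            return ev

        def pop(self, uuid, name):
            with self._lock:  # the Acquiring/acquired/"released" lines above
                return self._events[uuid].pop(name, None)

    events = InstanceEvents()
    uuid = '828def8a-01dd-4845-98b3-1516060251a0'
    waiter = events.prepare(uuid, 'network-vif-plugged-94cae4f8-cc4b-443a-b22b-36ef77438ede')

    # Delivered via nova-api -> compute RPC once neutron reports the plug:
    ev = events.pop(uuid, 'network-vif-plugged-94cae4f8-cc4b-443a-b22b-36ef77438ede')
    if ev:
        ev.set()              # wakes the spawn thread blocked below
    waiter.wait(timeout=300)  # cf. nova's vif_plugging_timeout option
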
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.727 244018 DEBUG nova.network.neutron [req-46551c20-e378-4344-b5aa-706552095b90 req-eb51a01e-f17d-45b4-b2c3-512c5151233e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Updated VIF entry in instance network info cache for port 94cae4f8-cc4b-443a-b22b-36ef77438ede. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.727 244018 DEBUG nova.network.neutron [req-46551c20-e378-4344-b5aa-706552095b90 req-eb51a01e-f17d-45b4-b2c3-512c5151233e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Updating instance_info_cache with network_info: [{"id": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "address": "fa:16:3e:7e:94:3d", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbecbd-18", "ovs_interfaceid": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "address": "fa:16:3e:ec:bf:12", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feec:bf12", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94cae4f8-cc", "ovs_interfaceid": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.747 244018 DEBUG oslo_concurrency.lockutils [req-46551c20-e378-4344-b5aa-706552095b90 req-eb51a01e-f17d-45b4-b2c3-512c5151233e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.837 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023805.8363304, 828def8a-01dd-4845-98b3-1516060251a0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.837 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 828def8a-01dd-4845-98b3-1516060251a0] VM Started (Lifecycle Event)#033[00m
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.862 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.867 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023805.836876, 828def8a-01dd-4845-98b3-1516060251a0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.868 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 828def8a-01dd-4845-98b3-1516060251a0] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.891 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.894 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.913 244018 DEBUG nova.network.neutron [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Successfully created port: 33f0e898-9477-416a-9ae6-268ef8e71ee3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:50:05 np0005629333 nova_compute[244014]: 2026-02-25 12:50:05.919 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 828def8a-01dd-4845-98b3-1516060251a0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
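
The Started/Paused pair is expected during a spawn: nova's libvirt driver typically launches the guest paused while it waits for the network-vif-plugged events, and the power-state sync declines to act while a task is in flight, hence "pending task (spawning). Skip." A toy version of that guard, using the power-state codes visible in the log (DB power_state 0 = NOSTATE, VM power_state 3 = PAUSED):

    NOSTATE, RUNNING, PAUSED = 0, 1, 3  # nova power-state codes as logged

    def sync_power_state(db_state, vm_state, task_state):
        if task_state is not None:
            # "During sync_power_state the instance has a pending task. Skip."
            return 'skip'
        return 'sync' if db_state != vm_state else 'noop'

    print(sync_power_state(NOSTATE, PAUSED, 'spawning'))  # -> skip
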
Feb 25 07:50:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2086: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:50:07 np0005629333 nova_compute[244014]: 2026-02-25 12:50:07.129 244018 DEBUG nova.network.neutron [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Successfully updated port: 33f0e898-9477-416a-9ae6-268ef8e71ee3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:50:07 np0005629333 nova_compute[244014]: 2026-02-25 12:50:07.147 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-d6cf21ec-717e-41f7-9351-2214b43ce275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:50:07 np0005629333 nova_compute[244014]: 2026-02-25 12:50:07.148 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-d6cf21ec-717e-41f7-9351-2214b43ce275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:50:07 np0005629333 nova_compute[244014]: 2026-02-25 12:50:07.148 244018 DEBUG nova.network.neutron [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:50:07 np0005629333 nova_compute[244014]: 2026-02-25 12:50:07.233 244018 DEBUG nova.compute.manager [req-9fffdcd1-b9ff-4591-a307-cc8c635845e6 req-ff338ac6-2620-4f64-88e8-459ba4e12593 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Received event network-changed-33f0e898-9477-416a-9ae6-268ef8e71ee3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:50:07 np0005629333 nova_compute[244014]: 2026-02-25 12:50:07.234 244018 DEBUG nova.compute.manager [req-9fffdcd1-b9ff-4591-a307-cc8c635845e6 req-ff338ac6-2620-4f64-88e8-459ba4e12593 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Refreshing instance network info cache due to event network-changed-33f0e898-9477-416a-9ae6-268ef8e71ee3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:50:07 np0005629333 nova_compute[244014]: 2026-02-25 12:50:07.234 244018 DEBUG oslo_concurrency.lockutils [req-9fffdcd1-b9ff-4591-a307-cc8c635845e6 req-ff338ac6-2620-4f64-88e8-459ba4e12593 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d6cf21ec-717e-41f7-9351-2214b43ce275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:50:07 np0005629333 nova_compute[244014]: 2026-02-25 12:50:07.317 244018 DEBUG nova.network.neutron [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:50:07 np0005629333 nova_compute[244014]: 2026-02-25 12:50:07.781 244018 DEBUG nova.compute.manager [req-c1f80bd6-d1ab-4070-a882-4c5f72a746ba req-a69af818-c9b3-4c89-869f-93490b7f95c1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-vif-plugged-94cae4f8-cc4b-443a-b22b-36ef77438ede external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:50:07 np0005629333 nova_compute[244014]: 2026-02-25 12:50:07.782 244018 DEBUG oslo_concurrency.lockutils [req-c1f80bd6-d1ab-4070-a882-4c5f72a746ba req-a69af818-c9b3-4c89-869f-93490b7f95c1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "828def8a-01dd-4845-98b3-1516060251a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:07 np0005629333 nova_compute[244014]: 2026-02-25 12:50:07.783 244018 DEBUG oslo_concurrency.lockutils [req-c1f80bd6-d1ab-4070-a882-4c5f72a746ba req-a69af818-c9b3-4c89-869f-93490b7f95c1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:07 np0005629333 nova_compute[244014]: 2026-02-25 12:50:07.783 244018 DEBUG oslo_concurrency.lockutils [req-c1f80bd6-d1ab-4070-a882-4c5f72a746ba req-a69af818-c9b3-4c89-869f-93490b7f95c1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:50:07 np0005629333 nova_compute[244014]: 2026-02-25 12:50:07.783 244018 DEBUG nova.compute.manager [req-c1f80bd6-d1ab-4070-a882-4c5f72a746ba req-a69af818-c9b3-4c89-869f-93490b7f95c1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] No event matching network-vif-plugged-94cae4f8-cc4b-443a-b22b-36ef77438ede in dict_keys([('network-vif-plugged', '99fbecbd-1815-4bf9-8e7f-6f114847f46f')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Feb 25 07:50:07 np0005629333 nova_compute[244014]: 2026-02-25 12:50:07.784 244018 WARNING nova.compute.manager [req-c1f80bd6-d1ab-4070-a882-4c5f72a746ba req-a69af818-c9b3-4c89-869f-93490b7f95c1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received unexpected event network-vif-plugged-94cae4f8-cc4b-443a-b22b-36ef77438ede for instance with vm_state building and task_state spawning.#033[00m
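The lock/pop sequence above is nova matching an incoming external event against the events this instance was prepared to wait for, keyed by (event_name, tag). The dict only holds ('network-vif-plugged', '99fbecbd-1815-4bf9-8e7f-6f114847f46f'), so the event for port 94cae4f8 pops nothing and is logged as unexpected. A toy reproduction of that lookup (hypothetical helper, not nova's code):

    # The prepared-events dict as shown in the "No event matching" line.
    prepared = {
        ("network-vif-plugged", "99fbecbd-1815-4bf9-8e7f-6f114847f46f"): object(),
    }

    def pop_instance_event(name, tag):
        event = prepared.pop((name, tag), None)
        if event is None:
            print(f"Received unexpected event {name}-{tag}")
        return event

    pop_instance_event("network-vif-plugged",
                       "94cae4f8-cc4b-443a-b22b-36ef77438ede")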
Feb 25 07:50:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2087: 305 pgs: 305 active+clean; 325 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Feb 25 07:50:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:50:08 np0005629333 nova_compute[244014]: 2026-02-25 12:50:08.931 244018 DEBUG nova.network.neutron [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Updating instance_info_cache with network_info: [{"id": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "address": "fa:16:3e:8d:16:fb", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33f0e898-94", "ovs_interfaceid": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
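The network_info blob cached above is plain JSON, so the interesting fields (device name, fixed IP, MTU) can be pulled out directly. A short sketch over an abbreviated copy of the logged structure:

    import json

    # Abbreviated to the fields read below; full structure as logged above.
    network_info = json.loads("""
    [{"id": "33f0e898-9477-416a-9ae6-268ef8e71ee3",
      "devname": "tap33f0e898-94",
      "network": {"subnets": [{"ips": [{"address": "10.100.0.8"}]}],
                  "meta": {"mtu": 1442}}}]
    """)

    vif = network_info[0]
    print(vif["devname"],
          vif["network"]["subnets"][0]["ips"][0]["address"],
          vif["network"]["meta"]["mtu"])  # tap33f0e898-94 10.100.0.8 1442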
Feb 25 07:50:08 np0005629333 nova_compute[244014]: 2026-02-25 12:50:08.968 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-d6cf21ec-717e-41f7-9351-2214b43ce275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:50:08 np0005629333 nova_compute[244014]: 2026-02-25 12:50:08.969 244018 DEBUG nova.compute.manager [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Instance network_info: |[{"id": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "address": "fa:16:3e:8d:16:fb", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33f0e898-94", "ovs_interfaceid": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:50:08 np0005629333 nova_compute[244014]: 2026-02-25 12:50:08.970 244018 DEBUG oslo_concurrency.lockutils [req-9fffdcd1-b9ff-4591-a307-cc8c635845e6 req-ff338ac6-2620-4f64-88e8-459ba4e12593 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d6cf21ec-717e-41f7-9351-2214b43ce275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:50:08 np0005629333 nova_compute[244014]: 2026-02-25 12:50:08.971 244018 DEBUG nova.network.neutron [req-9fffdcd1-b9ff-4591-a307-cc8c635845e6 req-ff338ac6-2620-4f64-88e8-459ba4e12593 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Refreshing network info cache for port 33f0e898-9477-416a-9ae6-268ef8e71ee3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:50:08 np0005629333 nova_compute[244014]: 2026-02-25 12:50:08.977 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Start _get_guest_xml network_info=[{"id": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "address": "fa:16:3e:8d:16:fb", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33f0e898-94", "ovs_interfaceid": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:50:08 np0005629333 nova_compute[244014]: 2026-02-25 12:50:08.983 244018 WARNING nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:50:08 np0005629333 nova_compute[244014]: 2026-02-25 12:50:08.989 244018 DEBUG nova.virt.libvirt.host [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:50:08 np0005629333 nova_compute[244014]: 2026-02-25 12:50:08.990 244018 DEBUG nova.virt.libvirt.host [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:50:08 np0005629333 nova_compute[244014]: 2026-02-25 12:50:08.993 244018 DEBUG nova.virt.libvirt.host [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:50:08 np0005629333 nova_compute[244014]: 2026-02-25 12:50:08.994 244018 DEBUG nova.virt.libvirt.host [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:50:08 np0005629333 nova_compute[244014]: 2026-02-25 12:50:08.995 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:50:08 np0005629333 nova_compute[244014]: 2026-02-25 12:50:08.995 244018 DEBUG nova.virt.hardware [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:50:08 np0005629333 nova_compute[244014]: 2026-02-25 12:50:08.996 244018 DEBUG nova.virt.hardware [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:50:08 np0005629333 nova_compute[244014]: 2026-02-25 12:50:08.996 244018 DEBUG nova.virt.hardware [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:50:08 np0005629333 nova_compute[244014]: 2026-02-25 12:50:08.996 244018 DEBUG nova.virt.hardware [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:50:08 np0005629333 nova_compute[244014]: 2026-02-25 12:50:08.997 244018 DEBUG nova.virt.hardware [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:50:08 np0005629333 nova_compute[244014]: 2026-02-25 12:50:08.997 244018 DEBUG nova.virt.hardware [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:50:08 np0005629333 nova_compute[244014]: 2026-02-25 12:50:08.997 244018 DEBUG nova.virt.hardware [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:50:08 np0005629333 nova_compute[244014]: 2026-02-25 12:50:08.998 244018 DEBUG nova.virt.hardware [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:50:08 np0005629333 nova_compute[244014]: 2026-02-25 12:50:08.998 244018 DEBUG nova.virt.hardware [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:50:08 np0005629333 nova_compute[244014]: 2026-02-25 12:50:08.999 244018 DEBUG nova.virt.hardware [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:50:08 np0005629333 nova_compute[244014]: 2026-02-25 12:50:08.999 244018 DEBUG nova.virt.hardware [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
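The nova.virt.hardware lines above walk the standard topology search: with no flavor or image preferences (0:0:0) and the default limits (65536 sockets/cores/threads), the only factorization of 1 vCPU is 1:1:1, which becomes the <topology sockets="1" cores="1" threads="1"/> element in the guest XML further down. A compact sketch of that enumeration, the same idea as the logged _get_possible_cpu_topologies but not nova's code:

    # Enumerate sockets*cores*threads factorizations of the vCPU count
    # within the given limits.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        return [(s, c, t)
                for s in range(1, min(vcpus, max_sockets) + 1)
                for c in range(1, min(vcpus, max_cores) + 1)
                for t in range(1, min(vcpus, max_threads) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)] -- the single topology logged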
Feb 25 07:50:09 np0005629333 nova_compute[244014]: 2026-02-25 12:50:09.004 244018 DEBUG oslo_concurrency.processutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:50:09 np0005629333 nova_compute[244014]: 2026-02-25 12:50:09.283 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:50:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1861968286' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:50:09 np0005629333 nova_compute[244014]: 2026-02-25 12:50:09.565 244018 DEBUG oslo_concurrency.processutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
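Before it can build the RBD disk definitions, the driver shells out to the ceph CLI for the monitor map; those are the mon dump calls audited by ceph-mon above. The same call can be reproduced standalone with the flags from the log, assuming a reachable cluster, /etc/ceph/ceph.conf, and a keyring for client.openstack:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    for mon in json.loads(out)["mons"]:
        print(mon["name"], mon.get("addr"))  # monitor name and address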
Feb 25 07:50:09 np0005629333 nova_compute[244014]: 2026-02-25 12:50:09.596 244018 DEBUG nova.storage.rbd_utils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image d6cf21ec-717e-41f7-9351-2214b43ce275_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:50:09 np0005629333 nova_compute[244014]: 2026-02-25 12:50:09.601 244018 DEBUG oslo_concurrency.processutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.329 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.365 244018 DEBUG nova.network.neutron [req-9fffdcd1-b9ff-4591-a307-cc8c635845e6 req-ff338ac6-2620-4f64-88e8-459ba4e12593 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Updated VIF entry in instance network info cache for port 33f0e898-9477-416a-9ae6-268ef8e71ee3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.366 244018 DEBUG nova.network.neutron [req-9fffdcd1-b9ff-4591-a307-cc8c635845e6 req-ff338ac6-2620-4f64-88e8-459ba4e12593 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Updating instance_info_cache with network_info: [{"id": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "address": "fa:16:3e:8d:16:fb", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33f0e898-94", "ovs_interfaceid": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.385 244018 DEBUG oslo_concurrency.lockutils [req-9fffdcd1-b9ff-4591-a307-cc8c635845e6 req-ff338ac6-2620-4f64-88e8-459ba4e12593 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d6cf21ec-717e-41f7-9351-2214b43ce275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:50:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:50:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3992282661' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.507 244018 DEBUG oslo_concurrency.processutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.906s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.508 244018 DEBUG nova.virt.libvirt.vif [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:50:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-563456365',display_name='tempest-TestNetworkBasicOps-server-563456365',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-563456365',id=124,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLeMiVeu1dOqPpqcWXNCLagoBwiGcWNicZxrpOmZtGijG3qf3SaUB0NXPfDvHb9oPVwp3SJzJeTa27pr/BNkR9Koy7DJ2jBsI3Xx4qBnFg17X6/5BsC47NShmWdZikYbsA==',key_name='tempest-TestNetworkBasicOps-2016084182',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-8x0n80rf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:50:03Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=d6cf21ec-717e-41f7-9351-2214b43ce275,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "address": "fa:16:3e:8d:16:fb", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33f0e898-94", "ovs_interfaceid": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.508 244018 DEBUG nova.network.os_vif_util [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "address": "fa:16:3e:8d:16:fb", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33f0e898-94", "ovs_interfaceid": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.509 244018 DEBUG nova.network.os_vif_util [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:16:fb,bridge_name='br-int',has_traffic_filtering=True,id=33f0e898-9477-416a-9ae6-268ef8e71ee3,network=Network(06dcd48c-d26b-4718-b4c7-9c2416698bed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33f0e898-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.510 244018 DEBUG nova.objects.instance [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_devices' on Instance uuid d6cf21ec-717e-41f7-9351-2214b43ce275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:50:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2088: 305 pgs: 305 active+clean; 325 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.647 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:50:10 np0005629333 nova_compute[244014]:  <uuid>d6cf21ec-717e-41f7-9351-2214b43ce275</uuid>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:  <name>instance-0000007c</name>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestNetworkBasicOps-server-563456365</nova:name>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:50:08</nova:creationTime>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:50:10 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:        <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:        <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:        <nova:port uuid="33f0e898-9477-416a-9ae6-268ef8e71ee3">
Feb 25 07:50:10 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <entry name="serial">d6cf21ec-717e-41f7-9351-2214b43ce275</entry>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <entry name="uuid">d6cf21ec-717e-41f7-9351-2214b43ce275</entry>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/d6cf21ec-717e-41f7-9351-2214b43ce275_disk">
Feb 25 07:50:10 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:50:10 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/d6cf21ec-717e-41f7-9351-2214b43ce275_disk.config">
Feb 25 07:50:10 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:50:10 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:8d:16:fb"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <target dev="tap33f0e898-94"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/d6cf21ec-717e-41f7-9351-2214b43ce275/console.log" append="off"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:50:10 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:50:10 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:50:10 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:50:10 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
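The domain XML dumped above is what libvirt receives: a q35 KVM guest with host-model CPU, two RBD-backed disks (root on vda, config drive on sda), an OVS tap interface with MTU 1442, and memory in KiB (131072 KiB = the flavor's 128 MiB). Saved to a file, the standard library is enough for a quick sanity check; a sketch assuming the XML was written to domain.xml:

    import xml.etree.ElementTree as ET

    root = ET.parse("domain.xml").getroot()
    print(root.get("type"), root.findtext("name"))      # kvm instance-0000007c
    print(int(root.findtext("memory")) // 1024, "MiB")  # 128 MiB
    for disk in root.iter("disk"):
        src = disk.find("source")
        # Prints device kind, protocol and RBD image name for both disks.
        print(disk.get("device"), src.get("protocol"), src.get("name"))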
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.648 244018 DEBUG nova.compute.manager [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Preparing to wait for external event network-vif-plugged-33f0e898-9477-416a-9ae6-268ef8e71ee3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.648 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "d6cf21ec-717e-41f7-9351-2214b43ce275-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.649 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "d6cf21ec-717e-41f7-9351-2214b43ce275-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.649 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "d6cf21ec-717e-41f7-9351-2214b43ce275-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.650 244018 DEBUG nova.virt.libvirt.vif [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:50:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-563456365',display_name='tempest-TestNetworkBasicOps-server-563456365',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-563456365',id=124,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLeMiVeu1dOqPpqcWXNCLagoBwiGcWNicZxrpOmZtGijG3qf3SaUB0NXPfDvHb9oPVwp3SJzJeTa27pr/BNkR9Koy7DJ2jBsI3Xx4qBnFg17X6/5BsC47NShmWdZikYbsA==',key_name='tempest-TestNetworkBasicOps-2016084182',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-8x0n80rf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:50:03Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=d6cf21ec-717e-41f7-9351-2214b43ce275,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "address": "fa:16:3e:8d:16:fb", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33f0e898-94", "ovs_interfaceid": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.650 244018 DEBUG nova.network.os_vif_util [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "address": "fa:16:3e:8d:16:fb", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33f0e898-94", "ovs_interfaceid": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.651 244018 DEBUG nova.network.os_vif_util [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:16:fb,bridge_name='br-int',has_traffic_filtering=True,id=33f0e898-9477-416a-9ae6-268ef8e71ee3,network=Network(06dcd48c-d26b-4718-b4c7-9c2416698bed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33f0e898-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.652 244018 DEBUG os_vif [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:16:fb,bridge_name='br-int',has_traffic_filtering=True,id=33f0e898-9477-416a-9ae6-268ef8e71ee3,network=Network(06dcd48c-d26b-4718-b4c7-9c2416698bed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33f0e898-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.653 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.653 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.654 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.656 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.656 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33f0e898-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.657 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap33f0e898-94, col_values=(('external_ids', {'iface-id': '33f0e898-9477-416a-9ae6-268ef8e71ee3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8d:16:fb', 'vm-uuid': 'd6cf21ec-717e-41f7-9351-2214b43ce275'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.659 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:10 np0005629333 NetworkManager[49836]: <info>  [1772023810.6610] manager: (tap33f0e898-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/546)
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.661 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.667 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.667 244018 INFO os_vif [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:16:fb,bridge_name='br-int',has_traffic_filtering=True,id=33f0e898-9477-416a-9ae6-268ef8e71ee3,network=Network(06dcd48c-d26b-4718-b4c7-9c2416698bed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33f0e898-94')#033[00m
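The two ovsdbapp transactions above (AddBridgeCommand, then AddPortCommand plus DbSetCommand on the Interface row) are how os-vif's ovs plugin wires the tap into br-int; the iface-id written into external_ids is what lets ovn-controller recognize and claim the port a moment later. A rough ovs-vsctl equivalent, with the values from the log, assuming a local ovsdb socket and sufficient privileges:

    import subprocess

    subprocess.check_call(
        ["ovs-vsctl", "--may-exist", "add-br", "br-int",
         "--", "set", "Bridge", "br-int", "datapath_type=system"])
    subprocess.check_call(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", "tap33f0e898-94",
         "--", "set", "Interface", "tap33f0e898-94",
         "external_ids:iface-id=33f0e898-9477-416a-9ae6-268ef8e71ee3",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:8d:16:fb",
         "external_ids:vm-uuid=d6cf21ec-717e-41f7-9351-2214b43ce275"])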
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.728 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.728 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.728 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:8d:16:fb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.729 244018 INFO nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Using config drive#033[00m
Feb 25 07:50:10 np0005629333 nova_compute[244014]: 2026-02-25 12:50:10.758 244018 DEBUG nova.storage.rbd_utils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image d6cf21ec-717e-41f7-9351-2214b43ce275_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.229 244018 INFO nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Creating config drive at /var/lib/nova/instances/d6cf21ec-717e-41f7-9351-2214b43ce275/disk.config#033[00m
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.236 244018 DEBUG oslo_concurrency.processutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d6cf21ec-717e-41f7-9351-2214b43ce275/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1vioq7t4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.380 244018 DEBUG oslo_concurrency.processutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d6cf21ec-717e-41f7-9351-2214b43ce275/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1vioq7t4" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.418 244018 DEBUG nova.storage.rbd_utils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image d6cf21ec-717e-41f7-9351-2214b43ce275_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.422 244018 DEBUG oslo_concurrency.processutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d6cf21ec-717e-41f7-9351-2214b43ce275/disk.config d6cf21ec-717e-41f7-9351-2214b43ce275_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.564 244018 DEBUG oslo_concurrency.processutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d6cf21ec-717e-41f7-9351-2214b43ce275/disk.config d6cf21ec-717e-41f7-9351-2214b43ce275_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.566 244018 INFO nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Deleting local config drive /var/lib/nova/instances/d6cf21ec-717e-41f7-9351-2214b43ce275/disk.config because it was imported into RBD.#033[00m
Feb 25 07:50:11 np0005629333 kernel: tap33f0e898-94: entered promiscuous mode
Feb 25 07:50:11 np0005629333 NetworkManager[49836]: <info>  [1772023811.6285] manager: (tap33f0e898-94): new Tun device (/org/freedesktop/NetworkManager/Devices/547)
Feb 25 07:50:11 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:11Z|01307|binding|INFO|Claiming lport 33f0e898-9477-416a-9ae6-268ef8e71ee3 for this chassis.
Feb 25 07:50:11 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:11Z|01308|binding|INFO|33f0e898-9477-416a-9ae6-268ef8e71ee3: Claiming fa:16:3e:8d:16:fb 10.100.0.8
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.631 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:50:11 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:11Z|01309|binding|INFO|Setting lport 33f0e898-9477-416a-9ae6-268ef8e71ee3 ovn-installed in OVS
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.641 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:50:11 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:11Z|01310|binding|INFO|Setting lport 33f0e898-9477-416a-9ae6-268ef8e71ee3 up in Southbound
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.640 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:16:fb 10.100.0.8'], port_security=['fa:16:3e:8d:16:fb 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd6cf21ec-717e-41f7-9351-2214b43ce275', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7113265a-1d97-4a63-a30a-7677c464f652', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d3a1f8d-2c5f-4ea3-9518-03dbef27606a, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=33f0e898-9477-416a-9ae6-268ef8e71ee3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.643 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 33f0e898-9477-416a-9ae6-268ef8e71ee3 in datapath 06dcd48c-d26b-4718-b4c7-9c2416698bed bound to our chassis
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.646 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 06dcd48c-d26b-4718-b4c7-9c2416698bed
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.659 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[561015e8-fd6e-4b99-a6ed-e840a0340f08]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.659 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap06dcd48c-d1 in ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.662 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap06dcd48c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.662 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a1aa0abe-f26e-433a-959c-686a5d697ef0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.663 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7a2f2acc-8bef-47fa-9f57-1c9dbdadfc5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:50:11 np0005629333 systemd-udevd[355655]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:50:11 np0005629333 systemd-machined[210048]: New machine qemu-156-instance-0000007c.
Feb 25 07:50:11 np0005629333 NetworkManager[49836]: <info>  [1772023811.6793] device (tap33f0e898-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:50:11 np0005629333 NetworkManager[49836]: <info>  [1772023811.6810] device (tap33f0e898-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.683 244018 DEBUG nova.compute.manager [req-cdfef91e-581c-4719-b7e2-752c6c4237a7 req-9f1881e6-29a8-4a08-a9e3-14be59209362 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-vif-plugged-99fbecbd-1815-4bf9-8e7f-6f114847f46f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.683 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[a87702c7-025e-40fa-bf67-6c7a62e65e4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.684 244018 DEBUG oslo_concurrency.lockutils [req-cdfef91e-581c-4719-b7e2-752c6c4237a7 req-9f1881e6-29a8-4a08-a9e3-14be59209362 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "828def8a-01dd-4845-98b3-1516060251a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.685 244018 DEBUG oslo_concurrency.lockutils [req-cdfef91e-581c-4719-b7e2-752c6c4237a7 req-9f1881e6-29a8-4a08-a9e3-14be59209362 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.685 244018 DEBUG oslo_concurrency.lockutils [req-cdfef91e-581c-4719-b7e2-752c6c4237a7 req-9f1881e6-29a8-4a08-a9e3-14be59209362 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.686 244018 DEBUG nova.compute.manager [req-cdfef91e-581c-4719-b7e2-752c6c4237a7 req-9f1881e6-29a8-4a08-a9e3-14be59209362 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Processing event network-vif-plugged-99fbecbd-1815-4bf9-8e7f-6f114847f46f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.687 244018 DEBUG nova.compute.manager [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Instance event wait completed in 5 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 07:50:11 np0005629333 systemd[1]: Started Virtual Machine qemu-156-instance-0000007c.
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.692 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023811.692569, 828def8a-01dd-4845-98b3-1516060251a0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.693 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 828def8a-01dd-4845-98b3-1516060251a0] VM Resumed (Lifecycle Event)
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.696 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.696 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[34b8dab4-9d29-41ff-8e6d-7b3c73a12152]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.702 244018 INFO nova.virt.libvirt.driver [-] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Instance spawned successfully.
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.703 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.716 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.722 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a02b2d-ddda-4e18-8e6f-b775239c792e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.729 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:50:11 np0005629333 NetworkManager[49836]: <info>  [1772023811.7309] manager: (tap06dcd48c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/548)
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.730 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8ebe9faa-3247-4e29-b275-c9b394a1ec2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.733 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.734 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.735 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.735 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.735 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.736 244018 DEBUG nova.virt.libvirt.driver [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:50:11 np0005629333 systemd-udevd[355658]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.745 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 828def8a-01dd-4845-98b3-1516060251a0] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.764 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5a0c2f4d-b561-4dcb-899e-cbf897752f5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.767 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4276d6da-d610-43db-99aa-2ff1c30fea5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.789 244018 INFO nova.compute.manager [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Took 17.05 seconds to spawn the instance on the hypervisor.
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.789 244018 DEBUG nova.compute.manager [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:50:11 np0005629333 NetworkManager[49836]: <info>  [1772023811.7932] device (tap06dcd48c-d0): carrier: link connected
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.797 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0517a302-b02d-4e20-9ca0-a15bf912c0f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.812 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[69783a18-2442-4084-9cdf-918fc5ba6bc5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap06dcd48c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:35:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 390], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578120, 'reachable_time': 38011, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355689, 'error': None, 'target': 'ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.827 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[077d4418-225c-4d67-bd70-e0dd8912e1eb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8b:35ed'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 578120, 'tstamp': 578120}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355690, 'error': None, 'target': 'ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.840 244018 INFO nova.compute.manager [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Took 18.40 seconds to build instance.
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.842 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[35657669-e886-4b69-827a-6ad7a3184da1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap06dcd48c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:35:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 390], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578120, 'reachable_time': 38011, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 355691, 'error': None, 'target': 'ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.863 244018 DEBUG oslo_concurrency.lockutils [None req-5f7afbcb-42b2-4bf9-8445-01c6a74f034a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.864 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d45c6d0d-a038-4b5e-98e1-71dafccd6fac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.928 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2dead495-577d-4af4-8c4e-288404e91e92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.930 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06dcd48c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.931 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.931 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap06dcd48c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.933 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:50:11 np0005629333 NetworkManager[49836]: <info>  [1772023811.9345] manager: (tap06dcd48c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/549)
Feb 25 07:50:11 np0005629333 kernel: tap06dcd48c-d0: entered promiscuous mode
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.935 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.937 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap06dcd48c-d0, col_values=(('external_ids', {'iface-id': '06afb3ba-d963-4735-ae78-91cfa95e52ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.938 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:50:11 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:11Z|01311|binding|INFO|Releasing lport 06afb3ba-d963-4735-ae78-91cfa95e52ff from this chassis (sb_readonly=0)
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.939 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.940 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/06dcd48c-d26b-4718-b4c7-9c2416698bed.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/06dcd48c-d26b-4718-b4c7-9c2416698bed.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.941 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[29560145-a240-4bac-8e57-ad3426de35e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.942 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-06dcd48c-d26b-4718-b4c7-9c2416698bed
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/06dcd48c-d26b-4718-b4c7-9c2416698bed.pid.haproxy
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 06dcd48c-d26b-4718-b4c7-9c2416698bed
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 07:50:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:11.943 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'env', 'PROCESS_TAG=haproxy-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/06dcd48c-d26b-4718-b4c7-9c2416698bed.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 07:50:11 np0005629333 nova_compute[244014]: 2026-02-25 12:50:11.949 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.122 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023812.1222494, d6cf21ec-717e-41f7-9351-2214b43ce275 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.122 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] VM Started (Lifecycle Event)
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.171 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.176 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023812.12639, d6cf21ec-717e-41f7-9351-2214b43ce275 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.176 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] VM Paused (Lifecycle Event)
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.224 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.230 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.277 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:50:12 np0005629333 podman[355765]: 2026-02-25 12:50:12.319749321 +0000 UTC m=+0.058821702 container create a0d971f8098c852696847b3611a5eaaa47f0f2fbc87dc08c1f4644483bac5bed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:50:12 np0005629333 systemd[1]: Started libpod-conmon-a0d971f8098c852696847b3611a5eaaa47f0f2fbc87dc08c1f4644483bac5bed.scope.
Feb 25 07:50:12 np0005629333 podman[355765]: 2026-02-25 12:50:12.287661165 +0000 UTC m=+0.026733606 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:50:12 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:50:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66261fae152732234a66a0a4cefd1a9db46debf2297b95beda372509622fae68/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:50:12 np0005629333 podman[355765]: 2026-02-25 12:50:12.41637513 +0000 UTC m=+0.155447551 container init a0d971f8098c852696847b3611a5eaaa47f0f2fbc87dc08c1f4644483bac5bed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 07:50:12 np0005629333 podman[355765]: 2026-02-25 12:50:12.423657375 +0000 UTC m=+0.162729766 container start a0d971f8098c852696847b3611a5eaaa47f0f2fbc87dc08c1f4644483bac5bed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:50:12 np0005629333 neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed[355780]: [NOTICE]   (355784) : New worker (355786) forked
Feb 25 07:50:12 np0005629333 neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed[355780]: [NOTICE]   (355784) : Loading success.
Feb 25 07:50:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2089: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.567 244018 DEBUG nova.compute.manager [req-073b3e0b-9c79-456d-964f-7febab8f8732 req-51964750-eabe-4f13-ab9d-c9db00d40ed8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Received event network-vif-plugged-33f0e898-9477-416a-9ae6-268ef8e71ee3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.568 244018 DEBUG oslo_concurrency.lockutils [req-073b3e0b-9c79-456d-964f-7febab8f8732 req-51964750-eabe-4f13-ab9d-c9db00d40ed8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d6cf21ec-717e-41f7-9351-2214b43ce275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.570 244018 DEBUG oslo_concurrency.lockutils [req-073b3e0b-9c79-456d-964f-7febab8f8732 req-51964750-eabe-4f13-ab9d-c9db00d40ed8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d6cf21ec-717e-41f7-9351-2214b43ce275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.570 244018 DEBUG oslo_concurrency.lockutils [req-073b3e0b-9c79-456d-964f-7febab8f8732 req-51964750-eabe-4f13-ab9d-c9db00d40ed8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d6cf21ec-717e-41f7-9351-2214b43ce275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.571 244018 DEBUG nova.compute.manager [req-073b3e0b-9c79-456d-964f-7febab8f8732 req-51964750-eabe-4f13-ab9d-c9db00d40ed8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Processing event network-vif-plugged-33f0e898-9477-416a-9ae6-268ef8e71ee3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.572 244018 DEBUG nova.compute.manager [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.576 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023812.5764813, d6cf21ec-717e-41f7-9351-2214b43ce275 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.577 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] VM Resumed (Lifecycle Event)
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.580 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.583 244018 INFO nova.virt.libvirt.driver [-] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Instance spawned successfully.
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.584 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.634 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.640 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.651 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.652 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.653 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.654 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.654 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.655 244018 DEBUG nova.virt.libvirt.driver [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.779 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.848 244018 INFO nova.compute.manager [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Took 9.37 seconds to spawn the instance on the hypervisor.
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.848 244018 DEBUG nova.compute.manager [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.943 244018 INFO nova.compute.manager [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Took 10.52 seconds to build instance.
Feb 25 07:50:12 np0005629333 nova_compute[244014]: 2026-02-25 12:50:12.974 244018 DEBUG oslo_concurrency.lockutils [None req-b115f330-1eaf-41b1-9f0b-64b453d9738f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "d6cf21ec-717e-41f7-9351-2214b43ce275" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:50:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:50:13 np0005629333 nova_compute[244014]: 2026-02-25 12:50:13.873 244018 DEBUG nova.compute.manager [req-02587f89-5cd5-4193-a8e6-92560ef47743 req-4c40e874-0a20-4754-aa5b-bf8c8462f5d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-vif-plugged-99fbecbd-1815-4bf9-8e7f-6f114847f46f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:50:13 np0005629333 nova_compute[244014]: 2026-02-25 12:50:13.873 244018 DEBUG oslo_concurrency.lockutils [req-02587f89-5cd5-4193-a8e6-92560ef47743 req-4c40e874-0a20-4754-aa5b-bf8c8462f5d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "828def8a-01dd-4845-98b3-1516060251a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:50:13 np0005629333 nova_compute[244014]: 2026-02-25 12:50:13.873 244018 DEBUG oslo_concurrency.lockutils [req-02587f89-5cd5-4193-a8e6-92560ef47743 req-4c40e874-0a20-4754-aa5b-bf8c8462f5d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:50:13 np0005629333 nova_compute[244014]: 2026-02-25 12:50:13.874 244018 DEBUG oslo_concurrency.lockutils [req-02587f89-5cd5-4193-a8e6-92560ef47743 req-4c40e874-0a20-4754-aa5b-bf8c8462f5d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:50:13 np0005629333 nova_compute[244014]: 2026-02-25 12:50:13.874 244018 DEBUG nova.compute.manager [req-02587f89-5cd5-4193-a8e6-92560ef47743 req-4c40e874-0a20-4754-aa5b-bf8c8462f5d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] No waiting events found dispatching network-vif-plugged-99fbecbd-1815-4bf9-8e7f-6f114847f46f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:50:13 np0005629333 nova_compute[244014]: 2026-02-25 12:50:13.875 244018 WARNING nova.compute.manager [req-02587f89-5cd5-4193-a8e6-92560ef47743 req-4c40e874-0a20-4754-aa5b-bf8c8462f5d1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received unexpected event network-vif-plugged-99fbecbd-1815-4bf9-8e7f-6f114847f46f for instance with vm_state active and task_state None.
Feb 25 07:50:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 454 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Feb 25 07:50:14 np0005629333 nova_compute[244014]: 2026-02-25 12:50:14.689 244018 DEBUG nova.compute.manager [req-f490ab53-33bb-4122-9eb3-6c7ad7cecec2 req-f11df207-e7be-4719-80a6-40ca9f5b6515 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Received event network-vif-plugged-33f0e898-9477-416a-9ae6-268ef8e71ee3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:50:14 np0005629333 nova_compute[244014]: 2026-02-25 12:50:14.690 244018 DEBUG oslo_concurrency.lockutils [req-f490ab53-33bb-4122-9eb3-6c7ad7cecec2 req-f11df207-e7be-4719-80a6-40ca9f5b6515 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d6cf21ec-717e-41f7-9351-2214b43ce275-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:50:14 np0005629333 nova_compute[244014]: 2026-02-25 12:50:14.691 244018 DEBUG oslo_concurrency.lockutils [req-f490ab53-33bb-4122-9eb3-6c7ad7cecec2 req-f11df207-e7be-4719-80a6-40ca9f5b6515 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d6cf21ec-717e-41f7-9351-2214b43ce275-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:50:14 np0005629333 nova_compute[244014]: 2026-02-25 12:50:14.691 244018 DEBUG oslo_concurrency.lockutils [req-f490ab53-33bb-4122-9eb3-6c7ad7cecec2 req-f11df207-e7be-4719-80a6-40ca9f5b6515 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d6cf21ec-717e-41f7-9351-2214b43ce275-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:50:14 np0005629333 nova_compute[244014]: 2026-02-25 12:50:14.692 244018 DEBUG nova.compute.manager [req-f490ab53-33bb-4122-9eb3-6c7ad7cecec2 req-f11df207-e7be-4719-80a6-40ca9f5b6515 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] No waiting events found dispatching network-vif-plugged-33f0e898-9477-416a-9ae6-268ef8e71ee3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:50:14 np0005629333 nova_compute[244014]: 2026-02-25 12:50:14.693 244018 WARNING nova.compute.manager [req-f490ab53-33bb-4122-9eb3-6c7ad7cecec2 req-f11df207-e7be-4719-80a6-40ca9f5b6515 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Received unexpected event network-vif-plugged-33f0e898-9477-416a-9ae6-268ef8e71ee3 for instance with vm_state active and task_state None.
Feb 25 07:50:15 np0005629333 nova_compute[244014]: 2026-02-25 12:50:15.331 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 07:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 3600.1 total, 600.0 interval
    Cumulative writes: 38K writes, 150K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.04 MB/s
    Cumulative WAL: 38K writes, 13K syncs, 2.79 writes per sync, written: 0.15 GB, 0.04 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 6246 writes, 25K keys, 6246 commit groups, 1.0 writes per commit group, ingest: 29.52 MB, 0.05 MB/s
    Interval WAL: 6246 writes, 2436 syncs, 2.56 writes per sync, written: 0.03 GB, 0.05 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 07:50:15 np0005629333 nova_compute[244014]: 2026-02-25 12:50:15.659 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:50:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2091: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 454 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Feb 25 07:50:16 np0005629333 nova_compute[244014]: 2026-02-25 12:50:16.813 244018 DEBUG nova.compute.manager [req-50f828f4-4644-462a-b892-f99e5d0496d0 req-f9a14a70-5637-4b3f-aa57-67880a4a92bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-changed-99fbecbd-1815-4bf9-8e7f-6f114847f46f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:50:16 np0005629333 nova_compute[244014]: 2026-02-25 12:50:16.815 244018 DEBUG nova.compute.manager [req-50f828f4-4644-462a-b892-f99e5d0496d0 req-f9a14a70-5637-4b3f-aa57-67880a4a92bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Refreshing instance network info cache due to event network-changed-99fbecbd-1815-4bf9-8e7f-6f114847f46f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:50:16 np0005629333 nova_compute[244014]: 2026-02-25 12:50:16.816 244018 DEBUG oslo_concurrency.lockutils [req-50f828f4-4644-462a-b892-f99e5d0496d0 req-f9a14a70-5637-4b3f-aa57-67880a4a92bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:50:16 np0005629333 nova_compute[244014]: 2026-02-25 12:50:16.817 244018 DEBUG oslo_concurrency.lockutils [req-50f828f4-4644-462a-b892-f99e5d0496d0 req-f9a14a70-5637-4b3f-aa57-67880a4a92bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:50:16 np0005629333 nova_compute[244014]: 2026-02-25 12:50:16.817 244018 DEBUG nova.network.neutron [req-50f828f4-4644-462a-b892-f99e5d0496d0 req-f9a14a70-5637-4b3f-aa57-67880a4a92bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Refreshing network info cache for port 99fbecbd-1815-4bf9-8e7f-6f114847f46f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:50:18 np0005629333 nova_compute[244014]: 2026-02-25 12:50:18.135 244018 DEBUG nova.network.neutron [req-50f828f4-4644-462a-b892-f99e5d0496d0 req-f9a14a70-5637-4b3f-aa57-67880a4a92bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Updated VIF entry in instance network info cache for port 99fbecbd-1815-4bf9-8e7f-6f114847f46f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 07:50:18 np0005629333 nova_compute[244014]: 2026-02-25 12:50:18.136 244018 DEBUG nova.network.neutron [req-50f828f4-4644-462a-b892-f99e5d0496d0 req-f9a14a70-5637-4b3f-aa57-67880a4a92bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Updating instance_info_cache with network_info: [{"id": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "address": "fa:16:3e:7e:94:3d", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbecbd-18", "ovs_interfaceid": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "address": "fa:16:3e:ec:bf:12", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feec:bf12", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94cae4f8-cc", "ovs_interfaceid": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:50:18 np0005629333 nova_compute[244014]: 2026-02-25 12:50:18.169 244018 DEBUG oslo_concurrency.lockutils [req-50f828f4-4644-462a-b892-f99e5d0496d0 req-f9a14a70-5637-4b3f-aa57-67880a4a92bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:50:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Feb 25 07:50:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:50:18 np0005629333 nova_compute[244014]: 2026-02-25 12:50:18.943 244018 DEBUG nova.compute.manager [req-e5916643-349f-4304-a6ba-1c05aa26f29a req-5c05db17-377d-485d-968f-5c0ce1460970 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Received event network-changed-33f0e898-9477-416a-9ae6-268ef8e71ee3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:50:18 np0005629333 nova_compute[244014]: 2026-02-25 12:50:18.944 244018 DEBUG nova.compute.manager [req-e5916643-349f-4304-a6ba-1c05aa26f29a req-5c05db17-377d-485d-968f-5c0ce1460970 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Refreshing instance network info cache due to event network-changed-33f0e898-9477-416a-9ae6-268ef8e71ee3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:50:18 np0005629333 nova_compute[244014]: 2026-02-25 12:50:18.945 244018 DEBUG oslo_concurrency.lockutils [req-e5916643-349f-4304-a6ba-1c05aa26f29a req-5c05db17-377d-485d-968f-5c0ce1460970 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d6cf21ec-717e-41f7-9351-2214b43ce275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:50:18 np0005629333 nova_compute[244014]: 2026-02-25 12:50:18.945 244018 DEBUG oslo_concurrency.lockutils [req-e5916643-349f-4304-a6ba-1c05aa26f29a req-5c05db17-377d-485d-968f-5c0ce1460970 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d6cf21ec-717e-41f7-9351-2214b43ce275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:50:18 np0005629333 nova_compute[244014]: 2026-02-25 12:50:18.946 244018 DEBUG nova.network.neutron [req-e5916643-349f-4304-a6ba-1c05aa26f29a req-5c05db17-377d-485d-968f-5c0ce1460970 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Refreshing network info cache for port 33f0e898-9477-416a-9ae6-268ef8e71ee3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 07:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 3600.2 total, 600.0 interval
    Cumulative writes: 40K writes, 157K keys, 40K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.04 MB/s
    Cumulative WAL: 40K writes, 14K syncs, 2.80 writes per sync, written: 0.15 GB, 0.04 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 6865 writes, 28K keys, 6865 commit groups, 1.0 writes per commit group, ingest: 33.19 MB, 0.06 MB/s
    Interval WAL: 6865 writes, 2589 syncs, 2.65 writes per sync, written: 0.03 GB, 0.06 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 07:50:20 np0005629333 nova_compute[244014]: 2026-02-25 12:50:20.332 244018 DEBUG nova.network.neutron [req-e5916643-349f-4304-a6ba-1c05aa26f29a req-5c05db17-377d-485d-968f-5c0ce1460970 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Updated VIF entry in instance network info cache for port 33f0e898-9477-416a-9ae6-268ef8e71ee3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 07:50:20 np0005629333 nova_compute[244014]: 2026-02-25 12:50:20.334 244018 DEBUG nova.network.neutron [req-e5916643-349f-4304-a6ba-1c05aa26f29a req-5c05db17-377d-485d-968f-5c0ce1460970 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Updating instance_info_cache with network_info: [{"id": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "address": "fa:16:3e:8d:16:fb", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33f0e898-94", "ovs_interfaceid": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:50:20 np0005629333 nova_compute[244014]: 2026-02-25 12:50:20.336 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:50:20 np0005629333 nova_compute[244014]: 2026-02-25 12:50:20.358 244018 DEBUG oslo_concurrency.lockutils [req-e5916643-349f-4304-a6ba-1c05aa26f29a req-5c05db17-377d-485d-968f-5c0ce1460970 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d6cf21ec-717e-41f7-9351-2214b43ce275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:50:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 15 KiB/s wr, 138 op/s
Feb 25 07:50:20 np0005629333 nova_compute[244014]: 2026-02-25 12:50:20.661 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:50:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 305 active+clean; 335 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.2 MiB/s wr, 157 op/s
Feb 25 07:50:23 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:23Z|00153|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8d:16:fb 10.100.0.8
Feb 25 07:50:23 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:23Z|00154|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8d:16:fb 10.100.0.8
Feb 25 07:50:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:50:24 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:24Z|00155|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7e:94:3d 10.100.0.3
Feb 25 07:50:24 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:24Z|00156|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7e:94:3d 10.100.0.3
Feb 25 07:50:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2095: 305 pgs: 305 active+clean; 335 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.2 MiB/s wr, 147 op/s
Feb 25 07:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 07:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 3600.6 total, 600.0 interval
    Cumulative writes: 31K writes, 127K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.04 MB/s
    Cumulative WAL: 31K writes, 11K syncs, 2.85 writes per sync, written: 0.13 GB, 0.04 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 5791 writes, 22K keys, 5791 commit groups, 1.0 writes per commit group, ingest: 24.60 MB, 0.04 MB/s
    Interval WAL: 5791 writes, 2335 syncs, 2.48 writes per sync, written: 0.02 GB, 0.04 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 07:50:25 np0005629333 nova_compute[244014]: 2026-02-25 12:50:25.337 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:50:25 np0005629333 nova_compute[244014]: 2026-02-25 12:50:25.663 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.183146) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023826183189, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1182, "num_deletes": 251, "total_data_size": 1769908, "memory_usage": 1796352, "flush_reason": "Manual Compaction"}
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023826222800, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 1752055, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43854, "largest_seqno": 45035, "table_properties": {"data_size": 1746378, "index_size": 3072, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12191, "raw_average_key_size": 19, "raw_value_size": 1735000, "raw_average_value_size": 2830, "num_data_blocks": 137, "num_entries": 613, "num_filter_entries": 613, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772023708, "oldest_key_time": 1772023708, "file_creation_time": 1772023826, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 39729 microseconds, and 6780 cpu microseconds.
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.222870) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 1752055 bytes OK
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.222897) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.225049) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.225080) EVENT_LOG_v1 {"time_micros": 1772023826225068, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.225100) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 1764479, prev total WAL file size 1807731, number of live WAL files 2.
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.225880) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(1710KB)], [101(8100KB)]
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023826225930, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 10046836, "oldest_snapshot_seqno": -1}
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 6604 keys, 8365998 bytes, temperature: kUnknown
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023826363830, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 8365998, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8322986, "index_size": 25428, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16517, "raw_key_size": 171525, "raw_average_key_size": 25, "raw_value_size": 8205944, "raw_average_value_size": 1242, "num_data_blocks": 991, "num_entries": 6604, "num_filter_entries": 6604, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772023826, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.364071) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 8365998 bytes
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.365520) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 72.8 rd, 60.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.9 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(10.5) write-amplify(4.8) OK, records in: 7118, records dropped: 514 output_compression: NoCompression
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.365541) EVENT_LOG_v1 {"time_micros": 1772023826365531, "job": 60, "event": "compaction_finished", "compaction_time_micros": 137971, "compaction_time_cpu_micros": 29257, "output_level": 6, "num_output_files": 1, "total_output_size": 8365998, "num_input_records": 7118, "num_output_records": 6604, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023826365906, "job": 60, "event": "table_file_deletion", "file_number": 103}
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772023826366987, "job": 60, "event": "table_file_deletion", "file_number": 101}
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.225799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.367107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.367113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.367114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.367116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:50:26 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:50:26.367118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:50:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 305 active+clean; 356 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.6 MiB/s wr, 157 op/s
Feb 25 07:50:26 np0005629333 podman[356007]: 2026-02-25 12:50:26.674309515 +0000 UTC m=+0.092257366 container create 9e2701e5753b823c31d9cfd89dd793cfe63933b846891cc116eadec9dbf35b78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle)
Feb 25 07:50:26 np0005629333 podman[356007]: 2026-02-25 12:50:26.616111482 +0000 UTC m=+0.034059333 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:50:26 np0005629333 systemd[1]: Started libpod-conmon-9e2701e5753b823c31d9cfd89dd793cfe63933b846891cc116eadec9dbf35b78.scope.
Feb 25 07:50:26 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:50:26 np0005629333 podman[356007]: 2026-02-25 12:50:26.758601045 +0000 UTC m=+0.176548876 container init 9e2701e5753b823c31d9cfd89dd793cfe63933b846891cc116eadec9dbf35b78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dewdney, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:50:26 np0005629333 podman[356007]: 2026-02-25 12:50:26.764516272 +0000 UTC m=+0.182464083 container start 9e2701e5753b823c31d9cfd89dd793cfe63933b846891cc116eadec9dbf35b78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dewdney, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:50:26 np0005629333 wonderful_dewdney[356023]: 167 167
Feb 25 07:50:26 np0005629333 systemd[1]: libpod-9e2701e5753b823c31d9cfd89dd793cfe63933b846891cc116eadec9dbf35b78.scope: Deactivated successfully.
Feb 25 07:50:26 np0005629333 podman[356007]: 2026-02-25 12:50:26.768830494 +0000 UTC m=+0.186778305 container attach 9e2701e5753b823c31d9cfd89dd793cfe63933b846891cc116eadec9dbf35b78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dewdney, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:50:26 np0005629333 podman[356007]: 2026-02-25 12:50:26.769790601 +0000 UTC m=+0.187738432 container died 9e2701e5753b823c31d9cfd89dd793cfe63933b846891cc116eadec9dbf35b78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dewdney, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 07:50:26 np0005629333 systemd[1]: var-lib-containers-storage-overlay-629f68d10826f564bf274a950859b155b0faab3fbb130a72593551d8a4b85aaa-merged.mount: Deactivated successfully.
Feb 25 07:50:26 np0005629333 podman[356007]: 2026-02-25 12:50:26.83595177 +0000 UTC m=+0.253899581 container remove 9e2701e5753b823c31d9cfd89dd793cfe63933b846891cc116eadec9dbf35b78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_dewdney, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:50:26 np0005629333 systemd[1]: libpod-conmon-9e2701e5753b823c31d9cfd89dd793cfe63933b846891cc116eadec9dbf35b78.scope: Deactivated successfully.
Feb 25 07:50:27 np0005629333 podman[356047]: 2026-02-25 12:50:27.017118705 +0000 UTC m=+0.037200481 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:50:27 np0005629333 ceph-mgr[76641]: [devicehealth INFO root] Check health
Feb 25 07:50:27 np0005629333 podman[356047]: 2026-02-25 12:50:27.216490475 +0000 UTC m=+0.236572201 container create fa98803bf6bf21bacf06791afa4b2f6e4ef3c02f42cd0f63fc2e2c19fc1621c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mirzakhani, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:50:27 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:50:27 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:50:27 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:50:27 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:50:27 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:50:27 np0005629333 systemd[1]: Started libpod-conmon-fa98803bf6bf21bacf06791afa4b2f6e4ef3c02f42cd0f63fc2e2c19fc1621c9.scope.
Feb 25 07:50:27 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:50:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c27adb0972d3c59e94a39f474e670a923bfe15f7b8b22be1da6d694a33db91/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:50:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c27adb0972d3c59e94a39f474e670a923bfe15f7b8b22be1da6d694a33db91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:50:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c27adb0972d3c59e94a39f474e670a923bfe15f7b8b22be1da6d694a33db91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:50:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c27adb0972d3c59e94a39f474e670a923bfe15f7b8b22be1da6d694a33db91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:50:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c27adb0972d3c59e94a39f474e670a923bfe15f7b8b22be1da6d694a33db91/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:50:27 np0005629333 podman[356047]: 2026-02-25 12:50:27.456987735 +0000 UTC m=+0.477069461 container init fa98803bf6bf21bacf06791afa4b2f6e4ef3c02f42cd0f63fc2e2c19fc1621c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mirzakhani, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 07:50:27 np0005629333 podman[356047]: 2026-02-25 12:50:27.46882568 +0000 UTC m=+0.488907396 container start fa98803bf6bf21bacf06791afa4b2f6e4ef3c02f42cd0f63fc2e2c19fc1621c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:50:27 np0005629333 podman[356047]: 2026-02-25 12:50:27.546115482 +0000 UTC m=+0.566197248 container attach fa98803bf6bf21bacf06791afa4b2f6e4ef3c02f42cd0f63fc2e2c19fc1621c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:50:27 np0005629333 nova_compute[244014]: 2026-02-25 12:50:27.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:50:27 np0005629333 nova_compute[244014]: 2026-02-25 12:50:27.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 07:50:27 np0005629333 nova_compute[244014]: 2026-02-25 12:50:27.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 07:50:27 np0005629333 strange_mirzakhani[356064]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:50:27 np0005629333 strange_mirzakhani[356064]: --> All data devices are unavailable
Feb 25 07:50:27 np0005629333 systemd[1]: libpod-fa98803bf6bf21bacf06791afa4b2f6e4ef3c02f42cd0f63fc2e2c19fc1621c9.scope: Deactivated successfully.
Feb 25 07:50:27 np0005629333 podman[356047]: 2026-02-25 12:50:27.950204452 +0000 UTC m=+0.970286168 container died fa98803bf6bf21bacf06791afa4b2f6e4ef3c02f42cd0f63fc2e2c19fc1621c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 07:50:28 np0005629333 nova_compute[244014]: 2026-02-25 12:50:28.069 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:50:28 np0005629333 nova_compute[244014]: 2026-02-25 12:50:28.070 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:50:28 np0005629333 nova_compute[244014]: 2026-02-25 12:50:28.070 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 07:50:28 np0005629333 nova_compute[244014]: 2026-02-25 12:50:28.070 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid b6501baa-8bc9-4724-b4c1-8ff43faf517c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:50:28 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d5c27adb0972d3c59e94a39f474e670a923bfe15f7b8b22be1da6d694a33db91-merged.mount: Deactivated successfully.
Feb 25 07:50:28 np0005629333 podman[356047]: 2026-02-25 12:50:28.105168207 +0000 UTC m=+1.125249903 container remove fa98803bf6bf21bacf06791afa4b2f6e4ef3c02f42cd0f63fc2e2c19fc1621c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_mirzakhani, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 07:50:28 np0005629333 systemd[1]: libpod-conmon-fa98803bf6bf21bacf06791afa4b2f6e4ef3c02f42cd0f63fc2e2c19fc1621c9.scope: Deactivated successfully.
Feb 25 07:50:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2097: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.3 MiB/s wr, 240 op/s
Feb 25 07:50:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:50:28 np0005629333 podman[356161]: 2026-02-25 12:50:28.647797689 +0000 UTC m=+0.087468731 container create b80d88f2ab36a821c71d950b3385be0aab60d6bfe00fa98b26796d07777f79c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 07:50:28 np0005629333 podman[356161]: 2026-02-25 12:50:28.593170056 +0000 UTC m=+0.032841148 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:50:28 np0005629333 systemd[1]: Started libpod-conmon-b80d88f2ab36a821c71d950b3385be0aab60d6bfe00fa98b26796d07777f79c6.scope.
Feb 25 07:50:28 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:50:28 np0005629333 podman[356161]: 2026-02-25 12:50:28.748168803 +0000 UTC m=+0.187839925 container init b80d88f2ab36a821c71d950b3385be0aab60d6bfe00fa98b26796d07777f79c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Feb 25 07:50:28 np0005629333 podman[356161]: 2026-02-25 12:50:28.755803869 +0000 UTC m=+0.195474881 container start b80d88f2ab36a821c71d950b3385be0aab60d6bfe00fa98b26796d07777f79c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_gauss, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 07:50:28 np0005629333 podman[356161]: 2026-02-25 12:50:28.759265946 +0000 UTC m=+0.198937058 container attach b80d88f2ab36a821c71d950b3385be0aab60d6bfe00fa98b26796d07777f79c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_gauss, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Feb 25 07:50:28 np0005629333 recursing_gauss[356178]: 167 167
Feb 25 07:50:28 np0005629333 systemd[1]: libpod-b80d88f2ab36a821c71d950b3385be0aab60d6bfe00fa98b26796d07777f79c6.scope: Deactivated successfully.
Feb 25 07:50:28 np0005629333 podman[356161]: 2026-02-25 12:50:28.762484677 +0000 UTC m=+0.202155719 container died b80d88f2ab36a821c71d950b3385be0aab60d6bfe00fa98b26796d07777f79c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_gauss, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:50:28 np0005629333 systemd[1]: var-lib-containers-storage-overlay-49b01d286bf8dc91af89b4fbdbc81db13d2d232336fbd618055ce2bd401afb3a-merged.mount: Deactivated successfully.
Feb 25 07:50:28 np0005629333 podman[356161]: 2026-02-25 12:50:28.866577667 +0000 UTC m=+0.306248699 container remove b80d88f2ab36a821c71d950b3385be0aab60d6bfe00fa98b26796d07777f79c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 07:50:28 np0005629333 systemd[1]: libpod-conmon-b80d88f2ab36a821c71d950b3385be0aab60d6bfe00fa98b26796d07777f79c6.scope: Deactivated successfully.
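The recursing_gauss container above is created, prints "167 167", and is torn down within a few hundred milliseconds. That pattern matches cephadm's uid/gid probe, which, on my reading of cephadm, stats /var/lib/ceph inside a throwaway container to learn the ceph user and group ids (167/167 in these images). A minimal sketch of such a probe, assuming it reduces to a one-shot stat (pass the full image reference from the log as `image`):

    import subprocess

    def extract_uid_gid(image: str, path: str = "/var/lib/ceph") -> tuple[int, int]:
        # Run a disposable container whose only job is to print "uid gid" for `path`.
        out = subprocess.check_output(
            ["podman", "run", "--rm", "--entrypoint", "stat",
             image, "-c", "%u %g", path],
            text=True,
        )
        uid, gid = out.split()  # e.g. "167 167"
        return int(uid), int(gid)

The same short-lived probe recurs below as clever_lederberg, printing the same "167 167".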
Feb 25 07:50:29 np0005629333 podman[356204]: 2026-02-25 12:50:29.032725838 +0000 UTC m=+0.025313686 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:50:29 np0005629333 podman[356204]: 2026-02-25 12:50:29.135871031 +0000 UTC m=+0.128458859 container create 6653bb110b02e0f94bbb445e0a65d3c276ead19a3619d27219cf072e9f162086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 07:50:29 np0005629333 systemd[1]: Started libpod-conmon-6653bb110b02e0f94bbb445e0a65d3c276ead19a3619d27219cf072e9f162086.scope.
Feb 25 07:50:29 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:50:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67e914ddb02a0f2f753961b75a504ba63d4f50e7909ff963db6f3e20d6273a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:50:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67e914ddb02a0f2f753961b75a504ba63d4f50e7909ff963db6f3e20d6273a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:50:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67e914ddb02a0f2f753961b75a504ba63d4f50e7909ff963db6f3e20d6273a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:50:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f67e914ddb02a0f2f753961b75a504ba63d4f50e7909ff963db6f3e20d6273a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
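These xfs messages are informational: the filesystem uses 32-bit timestamps, so the latest representable time is 0x7fffffff seconds after the Unix epoch. Decoding that limit:

    from datetime import datetime, timezone

    # 0x7fffffff = 2**31 - 1 seconds since 1970-01-01, the classic "year 2038" limit.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00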
Feb 25 07:50:29 np0005629333 podman[356204]: 2026-02-25 12:50:29.269082062 +0000 UTC m=+0.261669900 container init 6653bb110b02e0f94bbb445e0a65d3c276ead19a3619d27219cf072e9f162086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_meninsky, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:50:29 np0005629333 podman[356204]: 2026-02-25 12:50:29.276734068 +0000 UTC m=+0.269321896 container start 6653bb110b02e0f94bbb445e0a65d3c276ead19a3619d27219cf072e9f162086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_meninsky, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 07:50:29 np0005629333 podman[356204]: 2026-02-25 12:50:29.287655177 +0000 UTC m=+0.280243015 container attach 6653bb110b02e0f94bbb445e0a65d3c276ead19a3619d27219cf072e9f162086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_meninsky, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]: {
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:    "0": [
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:        {
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "devices": [
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "/dev/loop3"
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            ],
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "lv_name": "ceph_lv0",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "lv_size": "21470642176",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "name": "ceph_lv0",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "tags": {
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.cluster_name": "ceph",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.crush_device_class": "",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.encrypted": "0",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.objectstore": "bluestore",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.osd_id": "0",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.type": "block",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.vdo": "0",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.with_tpm": "0"
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            },
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "type": "block",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "vg_name": "ceph_vg0"
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:        }
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:    ],
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:    "1": [
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:        {
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "devices": [
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "/dev/loop4"
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            ],
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "lv_name": "ceph_lv1",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "lv_size": "21470642176",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "name": "ceph_lv1",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "tags": {
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.cluster_name": "ceph",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.crush_device_class": "",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.encrypted": "0",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.objectstore": "bluestore",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.osd_id": "1",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.type": "block",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.vdo": "0",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.with_tpm": "0"
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            },
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "type": "block",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "vg_name": "ceph_vg1"
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:        }
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:    ],
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:    "2": [
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:        {
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "devices": [
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "/dev/loop5"
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            ],
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "lv_name": "ceph_lv2",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "lv_size": "21470642176",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "name": "ceph_lv2",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "tags": {
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.cluster_name": "ceph",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.crush_device_class": "",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.encrypted": "0",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.objectstore": "bluestore",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.osd_id": "2",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.type": "block",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.vdo": "0",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:                "ceph.with_tpm": "0"
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            },
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "type": "block",
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:            "vg_name": "ceph_vg2"
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:        }
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]:    ]
Feb 25 07:50:29 np0005629333 priceless_meninsky[356220]: }
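The JSON block emitted by priceless_meninsky has the shape of ceph-volume lvm list --format json output: a map from OSD id to a list of LV records, with the ceph.* metadata present both as the flat lv_tags string and as the parsed tags object. A minimal sketch of consuming it, assuming that layout, to recover each OSD's block LV and backing device:

    import json

    def osd_block_devices(lvm_list_json: str) -> dict[int, dict]:
        # Map OSD id -> block LV info from `ceph-volume lvm list --format json` output.
        data = json.loads(lvm_list_json)
        result = {}
        for osd_id, lvs in data.items():
            for lv in lvs:
                if lv.get("type") == "block":
                    result[int(osd_id)] = {
                        "lv_path": lv["lv_path"],    # e.g. /dev/ceph_vg0/ceph_lv0
                        "devices": lv["devices"],    # backing PVs, e.g. ["/dev/loop3"]
                        "osd_fsid": lv["tags"]["ceph.osd_fsid"],
                    }
        return result

Applied to the output above, this yields OSD 0 on /dev/ceph_vg0/ceph_lv0 (backed by /dev/loop3), OSD 1 on /dev/ceph_vg1/ceph_lv1 (/dev/loop4), and OSD 2 on /dev/ceph_vg2/ceph_lv2 (/dev/loop5).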
Feb 25 07:50:29 np0005629333 systemd[1]: libpod-6653bb110b02e0f94bbb445e0a65d3c276ead19a3619d27219cf072e9f162086.scope: Deactivated successfully.
Feb 25 07:50:29 np0005629333 podman[356204]: 2026-02-25 12:50:29.57640103 +0000 UTC m=+0.568988888 container died 6653bb110b02e0f94bbb445e0a65d3c276ead19a3619d27219cf072e9f162086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_meninsky, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:50:29 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f67e914ddb02a0f2f753961b75a504ba63d4f50e7909ff963db6f3e20d6273a2-merged.mount: Deactivated successfully.
Feb 25 07:50:29 np0005629333 podman[356204]: 2026-02-25 12:50:29.6174755 +0000 UTC m=+0.610063308 container remove 6653bb110b02e0f94bbb445e0a65d3c276ead19a3619d27219cf072e9f162086 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_meninsky, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:50:29 np0005629333 systemd[1]: libpod-conmon-6653bb110b02e0f94bbb445e0a65d3c276ead19a3619d27219cf072e9f162086.scope: Deactivated successfully.
Feb 25 07:50:29 np0005629333 nova_compute[244014]: 2026-02-25 12:50:29.883 244018 INFO nova.compute.manager [None req-8dc1755a-1790-4238-8bef-85fc9f0c30ff 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Get console output#033[00m
Feb 25 07:50:29 np0005629333 nova_compute[244014]: 2026-02-25 12:50:29.890 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Feb 25 07:50:30 np0005629333 podman[356303]: 2026-02-25 12:50:30.067667252 +0000 UTC m=+0.070133042 container create 499671c135758fa72ae93b6ae83cbe1171883427405117bc033b536c8de7c17f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lederberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:50:30 np0005629333 podman[356303]: 2026-02-25 12:50:30.021114917 +0000 UTC m=+0.023580677 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:50:30 np0005629333 systemd[1]: Started libpod-conmon-499671c135758fa72ae93b6ae83cbe1171883427405117bc033b536c8de7c17f.scope.
Feb 25 07:50:30 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:50:30 np0005629333 podman[356303]: 2026-02-25 12:50:30.179268763 +0000 UTC m=+0.181734603 container init 499671c135758fa72ae93b6ae83cbe1171883427405117bc033b536c8de7c17f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lederberg, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:50:30 np0005629333 podman[356303]: 2026-02-25 12:50:30.187206317 +0000 UTC m=+0.189672087 container start 499671c135758fa72ae93b6ae83cbe1171883427405117bc033b536c8de7c17f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lederberg, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 07:50:30 np0005629333 clever_lederberg[356320]: 167 167
Feb 25 07:50:30 np0005629333 podman[356303]: 2026-02-25 12:50:30.191011184 +0000 UTC m=+0.193477014 container attach 499671c135758fa72ae93b6ae83cbe1171883427405117bc033b536c8de7c17f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lederberg, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:50:30 np0005629333 systemd[1]: libpod-499671c135758fa72ae93b6ae83cbe1171883427405117bc033b536c8de7c17f.scope: Deactivated successfully.
Feb 25 07:50:30 np0005629333 podman[356303]: 2026-02-25 12:50:30.192119586 +0000 UTC m=+0.194585386 container died 499671c135758fa72ae93b6ae83cbe1171883427405117bc033b536c8de7c17f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 07:50:30 np0005629333 systemd[1]: var-lib-containers-storage-overlay-0041eebafbf00d6964450c9f9f73828fea016b2ab9c0c7518c2707ee435da7e3-merged.mount: Deactivated successfully.
Feb 25 07:50:30 np0005629333 podman[356303]: 2026-02-25 12:50:30.239966197 +0000 UTC m=+0.242431957 container remove 499671c135758fa72ae93b6ae83cbe1171883427405117bc033b536c8de7c17f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:50:30 np0005629333 systemd[1]: libpod-conmon-499671c135758fa72ae93b6ae83cbe1171883427405117bc033b536c8de7c17f.scope: Deactivated successfully.
Feb 25 07:50:30 np0005629333 nova_compute[244014]: 2026-02-25 12:50:30.340 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:30 np0005629333 podman[356344]: 2026-02-25 12:50:30.430933449 +0000 UTC m=+0.054570552 container create d601e62937a5d49adc6f0d92781cbed988c5bb0127950da5e514fc13a217ce6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_chandrasekhar, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:50:30 np0005629333 systemd[1]: Started libpod-conmon-d601e62937a5d49adc6f0d92781cbed988c5bb0127950da5e514fc13a217ce6e.scope.
Feb 25 07:50:30 np0005629333 podman[356344]: 2026-02-25 12:50:30.40686856 +0000 UTC m=+0.030505733 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:50:30 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:50:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf040c63894f878c02e4f07b53577ba5ba2d7e7852fc76913613c5a13e879dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:50:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf040c63894f878c02e4f07b53577ba5ba2d7e7852fc76913613c5a13e879dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:50:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf040c63894f878c02e4f07b53577ba5ba2d7e7852fc76913613c5a13e879dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:50:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf040c63894f878c02e4f07b53577ba5ba2d7e7852fc76913613c5a13e879dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:50:30 np0005629333 podman[356344]: 2026-02-25 12:50:30.530635724 +0000 UTC m=+0.154272847 container init d601e62937a5d49adc6f0d92781cbed988c5bb0127950da5e514fc13a217ce6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_chandrasekhar, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 07:50:30 np0005629333 podman[356344]: 2026-02-25 12:50:30.547374857 +0000 UTC m=+0.171011970 container start d601e62937a5d49adc6f0d92781cbed988c5bb0127950da5e514fc13a217ce6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_chandrasekhar, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 07:50:30 np0005629333 podman[356344]: 2026-02-25 12:50:30.550938198 +0000 UTC m=+0.174575321 container attach d601e62937a5d49adc6f0d92781cbed988c5bb0127950da5e514fc13a217ce6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:50:30 np0005629333 nova_compute[244014]: 2026-02-25 12:50:30.552 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Updating instance_info_cache with network_info: [{"id": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "address": "fa:16:3e:93:cc:74", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdee62982-d4", "ovs_interfaceid": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "344b59b1-93f2-4e23-8c28-835e5d954630", "address": "fa:16:3e:8d:10:50", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:1050", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap344b59b1-93", "ovs_interfaceid": "344b59b1-93f2-4e23-8c28-835e5d954630", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
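The network_info blob nova caches above is a list of VIFs, each nesting network -> subnets -> ips -> floating_ips. A minimal sketch, assuming that layout, for flattening the addresses out of one cache entry:

    def addresses(network_info: list[dict]) -> list[str]:
        # Collect fixed and floating IPs from a nova network_info cache entry.
        out = []
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    out.append(ip["address"])
                    out.extend(f["address"] for f in ip.get("floating_ips", []))
        return out

For the entry above this returns the fixed 10.100.0.9 with its floating 192.168.122.204, plus the SLAAC address 2001:db8::f816:3eff:fe8d:1050.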
Feb 25 07:50:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 630 KiB/s rd, 4.3 MiB/s wr, 126 op/s
Feb 25 07:50:30 np0005629333 nova_compute[244014]: 2026-02-25 12:50:30.574 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:50:30 np0005629333 nova_compute[244014]: 2026-02-25 12:50:30.575 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 25 07:50:30 np0005629333 nova_compute[244014]: 2026-02-25 12:50:30.575 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:50:30 np0005629333 nova_compute[244014]: 2026-02-25 12:50:30.576 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:50:30 np0005629333 nova_compute[244014]: 2026-02-25 12:50:30.599 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:30 np0005629333 nova_compute[244014]: 2026-02-25 12:50:30.600 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:30 np0005629333 nova_compute[244014]: 2026-02-25 12:50:30.600 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:50:30 np0005629333 nova_compute[244014]: 2026-02-25 12:50:30.601 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:50:30 np0005629333 nova_compute[244014]: 2026-02-25 12:50:30.601 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:50:30 np0005629333 nova_compute[244014]: 2026-02-25 12:50:30.666 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:50:31
Feb 25 07:50:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:50:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:50:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'default.rgw.control', 'images', 'backups', '.mgr', '.rgw.root']
Feb 25 07:50:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:50:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:50:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/650485319' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:50:31 np0005629333 nova_compute[244014]: 2026-02-25 12:50:31.218 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
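The resource tracker sizes its RBD-backed storage by shelling out to ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf (the call above completed in 0.617s; the matching mon-side audit entries appear just before it). A minimal sketch of that call, assuming the usual ceph df JSON layout with a top-level stats object:

    import json
    import subprocess

    def cluster_capacity(conf: str = "/etc/ceph/ceph.conf", user: str = "openstack") -> tuple[int, int]:
        # Returns (total_bytes, avail_bytes) as reported by `ceph df`.
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf],
            text=True,
        )
        stats = json.loads(out)["stats"]
        return stats["total_bytes"], stats["total_avail_bytes"]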
Feb 25 07:50:31 np0005629333 lvm[356461]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:50:31 np0005629333 lvm[356462]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:50:31 np0005629333 lvm[356461]: VG ceph_vg0 finished
Feb 25 07:50:31 np0005629333 lvm[356462]: VG ceph_vg1 finished
Feb 25 07:50:31 np0005629333 lvm[356464]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:50:31 np0005629333 lvm[356464]: VG ceph_vg2 finished
Feb 25 07:50:31 np0005629333 gallant_chandrasekhar[356360]: {}
Feb 25 07:50:31 np0005629333 systemd[1]: libpod-d601e62937a5d49adc6f0d92781cbed988c5bb0127950da5e514fc13a217ce6e.scope: Deactivated successfully.
Feb 25 07:50:31 np0005629333 systemd[1]: libpod-d601e62937a5d49adc6f0d92781cbed988c5bb0127950da5e514fc13a217ce6e.scope: Consumed 1.248s CPU time.
Feb 25 07:50:31 np0005629333 podman[356344]: 2026-02-25 12:50:31.413298818 +0000 UTC m=+1.036935901 container died d601e62937a5d49adc6f0d92781cbed988c5bb0127950da5e514fc13a217ce6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_chandrasekhar, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 07:50:31 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7bf040c63894f878c02e4f07b53577ba5ba2d7e7852fc76913613c5a13e879dc-merged.mount: Deactivated successfully.
Feb 25 07:50:31 np0005629333 podman[356344]: 2026-02-25 12:50:31.455655904 +0000 UTC m=+1.079293017 container remove d601e62937a5d49adc6f0d92781cbed988c5bb0127950da5e514fc13a217ce6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 07:50:31 np0005629333 systemd[1]: libpod-conmon-d601e62937a5d49adc6f0d92781cbed988c5bb0127950da5e514fc13a217ce6e.scope: Deactivated successfully.
Feb 25 07:50:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:50:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:50:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:50:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:50:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:50:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:50:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:50:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:50:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:50:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:50:31 np0005629333 nova_compute[244014]: 2026-02-25 12:50:31.751 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:50:31 np0005629333 nova_compute[244014]: 2026-02-25 12:50:31.752 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000007b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:50:31 np0005629333 nova_compute[244014]: 2026-02-25 12:50:31.759 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:50:31 np0005629333 nova_compute[244014]: 2026-02-25 12:50:31.759 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:50:31 np0005629333 nova_compute[244014]: 2026-02-25 12:50:31.764 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:50:31 np0005629333 nova_compute[244014]: 2026-02-25 12:50:31.764 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:50:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:50:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:50:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:50:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:50:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:50:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:50:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:50:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:50:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:50:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:50:32 np0005629333 nova_compute[244014]: 2026-02-25 12:50:32.043 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:50:32 np0005629333 nova_compute[244014]: 2026-02-25 12:50:32.044 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2905MB free_disk=59.85101382341236GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 07:50:32 np0005629333 nova_compute[244014]: 2026-02-25 12:50:32.045 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:32 np0005629333 nova_compute[244014]: 2026-02-25 12:50:32.045 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:32 np0005629333 nova_compute[244014]: 2026-02-25 12:50:32.068 244018 DEBUG nova.compute.manager [req-1442bc48-890b-4252-9bc3-cfe4dea2dc76 req-920aa452-0b30-4903-98ef-55d6bda3e9c3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Received event network-changed-33f0e898-9477-416a-9ae6-268ef8e71ee3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:50:32 np0005629333 nova_compute[244014]: 2026-02-25 12:50:32.068 244018 DEBUG nova.compute.manager [req-1442bc48-890b-4252-9bc3-cfe4dea2dc76 req-920aa452-0b30-4903-98ef-55d6bda3e9c3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Refreshing instance network info cache due to event network-changed-33f0e898-9477-416a-9ae6-268ef8e71ee3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:50:32 np0005629333 nova_compute[244014]: 2026-02-25 12:50:32.069 244018 DEBUG oslo_concurrency.lockutils [req-1442bc48-890b-4252-9bc3-cfe4dea2dc76 req-920aa452-0b30-4903-98ef-55d6bda3e9c3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d6cf21ec-717e-41f7-9351-2214b43ce275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:50:32 np0005629333 nova_compute[244014]: 2026-02-25 12:50:32.069 244018 DEBUG oslo_concurrency.lockutils [req-1442bc48-890b-4252-9bc3-cfe4dea2dc76 req-920aa452-0b30-4903-98ef-55d6bda3e9c3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d6cf21ec-717e-41f7-9351-2214b43ce275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:50:32 np0005629333 nova_compute[244014]: 2026-02-25 12:50:32.069 244018 DEBUG nova.network.neutron [req-1442bc48-890b-4252-9bc3-cfe4dea2dc76 req-920aa452-0b30-4903-98ef-55d6bda3e9c3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Refreshing network info cache for port 33f0e898-9477-416a-9ae6-268ef8e71ee3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:50:32 np0005629333 nova_compute[244014]: 2026-02-25 12:50:32.210 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance b6501baa-8bc9-4724-b4c1-8ff43faf517c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:50:32 np0005629333 nova_compute[244014]: 2026-02-25 12:50:32.210 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 828def8a-01dd-4845-98b3-1516060251a0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:50:32 np0005629333 nova_compute[244014]: 2026-02-25 12:50:32.210 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance d6cf21ec-717e-41f7-9351-2214b43ce275 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:50:32 np0005629333 nova_compute[244014]: 2026-02-25 12:50:32.211 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 07:50:32 np0005629333 nova_compute[244014]: 2026-02-25 12:50:32.211 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 07:50:32 np0005629333 nova_compute[244014]: 2026-02-25 12:50:32.228 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 25 07:50:32 np0005629333 nova_compute[244014]: 2026-02-25 12:50:32.247 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 25 07:50:32 np0005629333 nova_compute[244014]: 2026-02-25 12:50:32.248 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
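
Annotation: Placement derives schedulable capacity per resource class as (total - reserved) * allocation_ratio. A minimal Python sketch of that arithmetic using the values from the inventory logged above (an illustration, not Nova/Placement source):

    # Capacity math implied by the inventory logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
        print(rc, 'schedulable =', capacity)
    # -> VCPU schedulable = 32, MEMORY_MB schedulable = 7167, DISK_GB schedulable = 52
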
Feb 25 07:50:32 np0005629333 nova_compute[244014]: 2026-02-25 12:50:32.266 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 25 07:50:32 np0005629333 nova_compute[244014]: 2026-02-25 12:50:32.289 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
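
Annotation: the trait list above is what the scheduler's required-traits filter runs against; at its core that filter is set containment. A toy sketch with a hypothetical required set (the advertised set is abbreviated from the log line):

    # Trait filtering reduces to set containment; `required` here is hypothetical.
    advertised = {'COMPUTE_IMAGE_TYPE_QCOW2', 'HW_CPU_X86_AVX2',
                  'COMPUTE_VOLUME_MULTI_ATTACH', 'COMPUTE_NODE'}  # abbreviated
    required = {'COMPUTE_IMAGE_TYPE_QCOW2', 'HW_CPU_X86_AVX2'}
    print('provider eligible:', required <= advertised)  # True
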
Feb 25 07:50:32 np0005629333 nova_compute[244014]: 2026-02-25 12:50:32.353 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:50:32 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:50:32 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:50:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 636 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Feb 25 07:50:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:50:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/213670914' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:50:32 np0005629333 nova_compute[244014]: 2026-02-25 12:50:32.945 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
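
Annotation: the ceph df probe above (0.592s round trip) can be reproduced by hand. The command and flags are verbatim from the two log lines above; the JSON keys are standard `ceph df --format=json` output but should be treated as an assumption here:

    import json
    import subprocess

    # Same probe the resource tracker just ran via oslo.concurrency.
    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])
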
Feb 25 07:50:32 np0005629333 nova_compute[244014]: 2026-02-25 12:50:32.951 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:50:32 np0005629333 nova_compute[244014]: 2026-02-25 12:50:32.990 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:50:33 np0005629333 nova_compute[244014]: 2026-02-25 12:50:33.134 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 07:50:33 np0005629333 nova_compute[244014]: 2026-02-25 12:50:33.135 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
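
Annotation: the "compute_resources" acquire/release pair above (held 1.090s) is oslo.concurrency's synchronized-lock idiom. A minimal sketch of the same pattern, with a hypothetical function standing in for the resource-tracker update:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        # Critical section: only one resource-view update runs at a time,
        # which is what the held/waited timings above measure.
        pass

    update_available_resource()
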
Feb 25 07:50:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:50:33 np0005629333 podman[356525]: 2026-02-25 12:50:33.748074093 +0000 UTC m=+0.077676244 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 07:50:33 np0005629333 podman[356526]: 2026-02-25 12:50:33.810592348 +0000 UTC m=+0.140809307 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible)
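
Annotation: the two health_status=healthy records above are podman's periodic healthcheck results. A sketch of querying the same state directly, assuming the container names from the log and podman's standard inspect schema:

    import json
    import subprocess

    for name in ('ovn_metadata_agent', 'ovn_controller'):
        out = subprocess.run(['podman', 'inspect', name],
                             check=True, capture_output=True, text=True).stdout
        health = json.loads(out)[0]['State'].get('Health', {})
        print(name, health.get('Status'), 'failing streak:',
              health.get('FailingStreak'))
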
Feb 25 07:50:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 3.1 MiB/s wr, 108 op/s
Feb 25 07:50:34 np0005629333 nova_compute[244014]: 2026-02-25 12:50:34.847 244018 DEBUG nova.network.neutron [req-1442bc48-890b-4252-9bc3-cfe4dea2dc76 req-920aa452-0b30-4903-98ef-55d6bda3e9c3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Updated VIF entry in instance network info cache for port 33f0e898-9477-416a-9ae6-268ef8e71ee3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:50:34 np0005629333 nova_compute[244014]: 2026-02-25 12:50:34.848 244018 DEBUG nova.network.neutron [req-1442bc48-890b-4252-9bc3-cfe4dea2dc76 req-920aa452-0b30-4903-98ef-55d6bda3e9c3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Updating instance_info_cache with network_info: [{"id": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "address": "fa:16:3e:8d:16:fb", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33f0e898-94", "ovs_interfaceid": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:50:34 np0005629333 nova_compute[244014]: 2026-02-25 12:50:34.915 244018 DEBUG oslo_concurrency.lockutils [req-1442bc48-890b-4252-9bc3-cfe4dea2dc76 req-920aa452-0b30-4903-98ef-55d6bda3e9c3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d6cf21ec-717e-41f7-9351-2214b43ce275" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
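
Annotation: the network_info blob cached above is plain JSON, so the fixed IPs can be pulled back out with nothing but the standard library; the literal below is an abbreviated copy of the logged entry:

    import json

    nw_info = json.loads('[{"id": "33f0e898-9477-416a-9ae6-268ef8e71ee3", '
                         '"network": {"subnets": [{"cidr": "10.100.0.0/28", '
                         '"ips": [{"address": "10.100.0.8", "type": "fixed"}]}]}}]')
    for vif in nw_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                print(vif['id'], ip['type'], ip['address'])
    # -> 33f0e898-9477-416a-9ae6-268ef8e71ee3 fixed 10.100.0.8
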
Feb 25 07:50:35 np0005629333 nova_compute[244014]: 2026-02-25 12:50:35.130 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:50:35 np0005629333 nova_compute[244014]: 2026-02-25 12:50:35.131 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:50:35 np0005629333 nova_compute[244014]: 2026-02-25 12:50:35.131 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:50:35 np0005629333 nova_compute[244014]: 2026-02-25 12:50:35.344 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:35 np0005629333 nova_compute[244014]: 2026-02-25 12:50:35.668 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:35 np0005629333 nova_compute[244014]: 2026-02-25 12:50:35.680 244018 DEBUG nova.compute.manager [req-e3d6a3dd-819d-49fb-a15c-72b5de1c8946 req-77a7e98a-a389-43c3-8263-850ec4267dea 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-changed-99fbecbd-1815-4bf9-8e7f-6f114847f46f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:50:35 np0005629333 nova_compute[244014]: 2026-02-25 12:50:35.680 244018 DEBUG nova.compute.manager [req-e3d6a3dd-819d-49fb-a15c-72b5de1c8946 req-77a7e98a-a389-43c3-8263-850ec4267dea 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Refreshing instance network info cache due to event network-changed-99fbecbd-1815-4bf9-8e7f-6f114847f46f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:50:35 np0005629333 nova_compute[244014]: 2026-02-25 12:50:35.681 244018 DEBUG oslo_concurrency.lockutils [req-e3d6a3dd-819d-49fb-a15c-72b5de1c8946 req-77a7e98a-a389-43c3-8263-850ec4267dea 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:50:35 np0005629333 nova_compute[244014]: 2026-02-25 12:50:35.681 244018 DEBUG oslo_concurrency.lockutils [req-e3d6a3dd-819d-49fb-a15c-72b5de1c8946 req-77a7e98a-a389-43c3-8263-850ec4267dea 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:50:35 np0005629333 nova_compute[244014]: 2026-02-25 12:50:35.681 244018 DEBUG nova.network.neutron [req-e3d6a3dd-819d-49fb-a15c-72b5de1c8946 req-77a7e98a-a389-43c3-8263-850ec4267dea 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Refreshing network info cache for port 99fbecbd-1815-4bf9-8e7f-6f114847f46f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:50:35 np0005629333 nova_compute[244014]: 2026-02-25 12:50:35.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:50:35 np0005629333 nova_compute[244014]: 2026-02-25 12:50:35.912 244018 DEBUG oslo_concurrency.lockutils [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "828def8a-01dd-4845-98b3-1516060251a0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:35 np0005629333 nova_compute[244014]: 2026-02-25 12:50:35.912 244018 DEBUG oslo_concurrency.lockutils [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:35 np0005629333 nova_compute[244014]: 2026-02-25 12:50:35.912 244018 DEBUG oslo_concurrency.lockutils [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "828def8a-01dd-4845-98b3-1516060251a0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:35 np0005629333 nova_compute[244014]: 2026-02-25 12:50:35.913 244018 DEBUG oslo_concurrency.lockutils [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:35 np0005629333 nova_compute[244014]: 2026-02-25 12:50:35.913 244018 DEBUG oslo_concurrency.lockutils [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:50:35 np0005629333 nova_compute[244014]: 2026-02-25 12:50:35.914 244018 INFO nova.compute.manager [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Terminating instance#033[00m
Feb 25 07:50:35 np0005629333 nova_compute[244014]: 2026-02-25 12:50:35.914 244018 DEBUG nova.compute.manager [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:50:35 np0005629333 kernel: tap99fbecbd-18 (unregistering): left promiscuous mode
Feb 25 07:50:35 np0005629333 NetworkManager[49836]: <info>  [1772023835.9529] device (tap99fbecbd-18): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:50:35 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:35Z|01312|binding|INFO|Releasing lport 99fbecbd-1815-4bf9-8e7f-6f114847f46f from this chassis (sb_readonly=0)
Feb 25 07:50:35 np0005629333 nova_compute[244014]: 2026-02-25 12:50:35.959 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:35 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:35Z|01313|binding|INFO|Setting lport 99fbecbd-1815-4bf9-8e7f-6f114847f46f down in Southbound
Feb 25 07:50:35 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:35Z|01314|binding|INFO|Removing iface tap99fbecbd-18 ovn-installed in OVS
Feb 25 07:50:35 np0005629333 nova_compute[244014]: 2026-02-25 12:50:35.966 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:35 np0005629333 nova_compute[244014]: 2026-02-25 12:50:35.969 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:35 np0005629333 kernel: tap94cae4f8-cc (unregistering): left promiscuous mode
Feb 25 07:50:35 np0005629333 NetworkManager[49836]: <info>  [1772023835.9767] device (tap94cae4f8-cc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:50:35 np0005629333 nova_compute[244014]: 2026-02-25 12:50:35.984 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:35 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:35Z|01315|binding|INFO|Releasing lport 94cae4f8-cc4b-443a-b22b-36ef77438ede from this chassis (sb_readonly=1)
Feb 25 07:50:35 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:35Z|01316|binding|INFO|Removing iface tap94cae4f8-cc ovn-installed in OVS
Feb 25 07:50:35 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:35Z|01317|if_status|INFO|Dropped 1 log messages in last 710 seconds (most recently, 710 seconds ago) due to excessive rate
Feb 25 07:50:35 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:35Z|01318|if_status|INFO|Not setting lport 94cae4f8-cc4b-443a-b22b-36ef77438ede down as sb is readonly
Feb 25 07:50:35 np0005629333 nova_compute[244014]: 2026-02-25 12:50:35.986 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:35 np0005629333 nova_compute[244014]: 2026-02-25 12:50:35.989 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:35 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:35Z|01319|binding|INFO|Setting lport 94cae4f8-cc4b-443a-b22b-36ef77438ede down in Southbound
Feb 25 07:50:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.001 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:94:3d 10.100.0.3'], port_security=['fa:16:3e:7e:94:3d 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '828def8a-01dd-4845-98b3-1516060251a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f3102e48-4ea3-4c65-a010-ac507aeeeba5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7b776d7-fb2a-404e-b423-f79885837022, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=99fbecbd-1815-4bf9-8e7f-6f114847f46f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:50:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.004 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 99fbecbd-1815-4bf9-8e7f-6f114847f46f in datapath 481feaf1-7ff4-47be-9159-a1dd19ceebcc unbound from our chassis#033[00m
Feb 25 07:50:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.006 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 481feaf1-7ff4-47be-9159-a1dd19ceebcc#033[00m
Feb 25 07:50:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.020 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ec:bf:12 2001:db8::f816:3eff:feec:bf12'], port_security=['fa:16:3e:ec:bf:12 2001:db8::f816:3eff:feec:bf12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feec:bf12/64', 'neutron:device_id': '828def8a-01dd-4845-98b3-1516060251a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb832dde-9848-40c5-9505-cc643b1bd0fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f3102e48-4ea3-4c65-a010-ac507aeeeba5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f83b91c5-311c-45de-aa70-06fb6ec6fece, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=94cae4f8-cc4b-443a-b22b-36ef77438ede) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
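
Annotation: the "Matched UPDATE: PortBindingUpdatedEvent(...)" records above come from ovsdbapp's row-event machinery. A sketch of a watcher in that style (this is not Neutron's actual PortBindingUpdatedEvent, whose match conditions are richer):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingWatcher(row_event.RowEvent):
        """Fires on Port_Binding updates, like the matches logged above."""

        def __init__(self):
            # (events, table, conditions) mirror the repr in the log lines.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            print('port', row.logical_port, 'up:', row.up)

    # A watcher like this is registered with an OVSDB IDL connection's
    # notify handler; it only prints here for illustration.
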
Feb 25 07:50:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.021 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eff86ce5-4249-4895-8da9-8063591a42ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:36 np0005629333 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Feb 25 07:50:36 np0005629333 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007b.scope: Consumed 13.008s CPU time.
Feb 25 07:50:36 np0005629333 systemd-machined[210048]: Machine qemu-155-instance-0000007b terminated.
Feb 25 07:50:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.042 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5f35dba4-0168-4cf4-9112-9bdebc861dac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.047 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[71a5cdd5-c90c-4c40-a295-331ffdc1cd23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.079 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8c548bd7-afa0-4477-8e7c-0fd1c2f61311]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.096 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[da02ecab-e037-44b6-ace5-679ecbe724d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap481feaf1-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:24:6f:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 382], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573491, 'reachable_time': 20258, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356588, 'error': None, 'target': 'ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.111 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb01a8a-ca38-4bc4-b602-21ef28456ee7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap481feaf1-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573500, 'tstamp': 573500}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356589, 'error': None, 'target': 'ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap481feaf1-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573502, 'tstamp': 573502}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356589, 'error': None, 'target': 'ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
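
Annotation: the privsep replies above are netlink dumps (RTM_NEWLINK / RTM_NEWADDR) taken inside the ovnmeta- namespace. A sketch of the same query with pyroute2 (the library sitting under these privsep calls), assuming the namespace and interface names shown in the log and root privileges:

    from pyroute2 import NetNS

    # Namespace and ifname taken from the RTM_* messages above; needs root.
    with NetNS('ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc') as ns:
        idx = ns.link_lookup(ifname='tap481feaf1-71')[0]
        for msg in ns.get_addr(index=idx):
            print(dict(msg['attrs'])['IFA_ADDRESS'])
    # Expected per the RTM_NEWADDR replies above: 10.100.0.2 and 169.254.169.254
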
Feb 25 07:50:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.113 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap481feaf1-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.115 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.124 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.125 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap481feaf1-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.126 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:50:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.126 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap481feaf1-70, col_values=(('external_ids', {'iface-id': '8234c876-6930-42e9-b642-2d1f82e23ee0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.127 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:50:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.128 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 94cae4f8-cc4b-443a-b22b-36ef77438ede in datapath eb832dde-9848-40c5-9505-cc643b1bd0fa unbound from our chassis#033[00m
Feb 25 07:50:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.129 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eb832dde-9848-40c5-9505-cc643b1bd0fa#033[00m
Feb 25 07:50:36 np0005629333 NetworkManager[49836]: <info>  [1772023836.1422] manager: (tap94cae4f8-cc): new Tun device (/org/freedesktop/NetworkManager/Devices/550)
Feb 25 07:50:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.142 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bf7626cd-a6dd-4e6e-aa0f-c2b5af79cc9a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.154 244018 INFO nova.virt.libvirt.driver [-] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Instance destroyed successfully.#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.154 244018 DEBUG nova.objects.instance [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid 828def8a-01dd-4845-98b3-1516060251a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:50:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.168 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[46657d19-b1de-410e-a74f-ebf36866caa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.171 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[045280a9-ce3c-45e8-a70a-bd7eb42d8c99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.175 244018 DEBUG nova.virt.libvirt.vif [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:49:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1153433119',display_name='tempest-TestGettingAddress-server-1153433119',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1153433119',id=123,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNlLCvqIkUYh/9ZHtxnKBN7YBsKpfQ8TjO19iCX554GJUCo/N1T5J3/ZTJ7NHwET0eZFR6/wdUxNMyoCAZQSh5tzxHfA6vjgYnp2UPefpqRUyjRlBnYX2yGGsA8ccwHtsg==',key_name='tempest-TestGettingAddress-238420135',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:50:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-ee56zhhv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:50:11Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=828def8a-01dd-4845-98b3-1516060251a0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "address": "fa:16:3e:7e:94:3d", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbecbd-18", "ovs_interfaceid": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.175 244018 DEBUG nova.network.os_vif_util [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "address": "fa:16:3e:7e:94:3d", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbecbd-18", "ovs_interfaceid": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.176 244018 DEBUG nova.network.os_vif_util [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7e:94:3d,bridge_name='br-int',has_traffic_filtering=True,id=99fbecbd-1815-4bf9-8e7f-6f114847f46f,network=Network(481feaf1-7ff4-47be-9159-a1dd19ceebcc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99fbecbd-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.176 244018 DEBUG os_vif [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:94:3d,bridge_name='br-int',has_traffic_filtering=True,id=99fbecbd-1815-4bf9-8e7f-6f114847f46f,network=Network(481feaf1-7ff4-47be-9159-a1dd19ceebcc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99fbecbd-18') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.177 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.177 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99fbecbd-18, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.179 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.181 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.184 244018 INFO os_vif [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:94:3d,bridge_name='br-int',has_traffic_filtering=True,id=99fbecbd-1815-4bf9-8e7f-6f114847f46f,network=Network(481feaf1-7ff4-47be-9159-a1dd19ceebcc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap99fbecbd-18')#033[00m
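
Annotation: the DelPortCommand transaction above has a direct CLI equivalent; a sketch, where --if-exists plays the role of if_exists=True so a missing port is not an error:

    import subprocess

    # CLI twin of DelPortCommand(port=tap99fbecbd-18, bridge=br-int, if_exists=True).
    subprocess.run(
        ['ovs-vsctl', '--if-exists', 'del-port', 'br-int', 'tap99fbecbd-18'],
        check=True)
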
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.185 244018 DEBUG nova.virt.libvirt.vif [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:49:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1153433119',display_name='tempest-TestGettingAddress-server-1153433119',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1153433119',id=123,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNlLCvqIkUYh/9ZHtxnKBN7YBsKpfQ8TjO19iCX554GJUCo/N1T5J3/ZTJ7NHwET0eZFR6/wdUxNMyoCAZQSh5tzxHfA6vjgYnp2UPefpqRUyjRlBnYX2yGGsA8ccwHtsg==',key_name='tempest-TestGettingAddress-238420135',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:50:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-ee56zhhv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:50:11Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=828def8a-01dd-4845-98b3-1516060251a0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "address": "fa:16:3e:ec:bf:12", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feec:bf12", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94cae4f8-cc", "ovs_interfaceid": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.185 244018 DEBUG nova.network.os_vif_util [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "address": "fa:16:3e:ec:bf:12", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feec:bf12", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94cae4f8-cc", "ovs_interfaceid": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.186 244018 DEBUG nova.network.os_vif_util [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ec:bf:12,bridge_name='br-int',has_traffic_filtering=True,id=94cae4f8-cc4b-443a-b22b-36ef77438ede,network=Network(eb832dde-9848-40c5-9505-cc643b1bd0fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94cae4f8-cc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.186 244018 DEBUG os_vif [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:bf:12,bridge_name='br-int',has_traffic_filtering=True,id=94cae4f8-cc4b-443a-b22b-36ef77438ede,network=Network(eb832dde-9848-40c5-9505-cc643b1bd0fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94cae4f8-cc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.187 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.187 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94cae4f8-cc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.188 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.190 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.191 244018 INFO os_vif [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ec:bf:12,bridge_name='br-int',has_traffic_filtering=True,id=94cae4f8-cc4b-443a-b22b-36ef77438ede,network=Network(eb832dde-9848-40c5-9505-cc643b1bd0fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94cae4f8-cc')#033[00m
Feb 25 07:50:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.195 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4099ad90-bb07-47d7-b1cd-e92443b74b4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.210 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e8c3e133-ca3c-4212-a2f1-3d826cb1a9c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb832dde-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:a5:37'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 28, 'tx_packets': 5, 'rx_bytes': 2472, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 28, 'tx_packets': 5, 'rx_bytes': 2472, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 383], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573564, 'reachable_time': 17677, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 28, 'inoctets': 2080, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 28, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2080, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 28, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356627, 'error': None, 'target': 'ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.224 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c050f04b-6d90-48e2-99d8-3bf59cd13d97]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapeb832dde-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 573573, 'tstamp': 573573}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356636, 'error': None, 'target': 'ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
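
The two privsep replies above are pyroute2 netlink messages relayed back to the unprivileged agent: an RTM_NEWLINK for the veth device tapeb832dde-91 and an RTM_NEWADDR confirming the metadata IP 169.254.169.254/32 inside the ovnmeta namespace. A minimal pyroute2 sketch of reading the same data (namespace and device names are taken from the log; running this would require root on the same host):

    # Sketch: list addresses on a device inside a network namespace,
    # roughly what the privsep daemon did to produce the RTM_NEWADDR
    # reply above.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa') as ns:
        idx = ns.link_lookup(ifname='tapeb832dde-91')[0]
        for msg in ns.get_addr(index=idx):
            # IFA_ADDRESS carries the configured IP, e.g. 169.254.169.254
            print(msg.get_attr('IFA_ADDRESS'), msg['prefixlen'])
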
Feb 25 07:50:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.226 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb832dde-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.228 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.228 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.229 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb832dde-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.229 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:50:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.230 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeb832dde-90, col_values=(('external_ids', {'iface-id': 'b3234524-e109-417d-a204-a0ab750c983e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:36.230 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
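
The three one-command transactions above move the metadata tap port off br-ex, ensure it exists on br-int, and stamp its Interface row with the Neutron port's iface-id. A hedged sketch of the same calls through ovsdbapp's Open_vSwitch API (the ovsdb-server endpoint is an assumption, and the commands are batched into one transaction here for brevity, where the agent ran one per transaction):

    # Sketch of the equivalent ovsdbapp commands; 'tcp:127.0.0.1:6640'
    # is an assumed ovsdb-server endpoint, not taken from this log.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapeb832dde-90', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapeb832dde-90', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapeb832dde-90',
            ('external_ids', {'iface-id': 'b3234524-e109-417d-a204-a0ab750c983e'})))

"Transaction caused no change" in the log means the rows already matched the desired state, so the commit was a no-op.
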
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.429 244018 INFO nova.virt.libvirt.driver [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Deleting instance files /var/lib/nova/instances/828def8a-01dd-4845-98b3-1516060251a0_del#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.430 244018 INFO nova.virt.libvirt.driver [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Deletion of /var/lib/nova/instances/828def8a-01dd-4845-98b3-1516060251a0_del complete#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.466 244018 DEBUG nova.compute.manager [req-be65fb29-70f8-468b-9cc7-6e0bc893aa89 req-ab90abc1-8991-4c99-9924-afcdb7c6bed1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-vif-unplugged-99fbecbd-1815-4bf9-8e7f-6f114847f46f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.466 244018 DEBUG oslo_concurrency.lockutils [req-be65fb29-70f8-468b-9cc7-6e0bc893aa89 req-ab90abc1-8991-4c99-9924-afcdb7c6bed1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "828def8a-01dd-4845-98b3-1516060251a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.467 244018 DEBUG oslo_concurrency.lockutils [req-be65fb29-70f8-468b-9cc7-6e0bc893aa89 req-ab90abc1-8991-4c99-9924-afcdb7c6bed1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.467 244018 DEBUG oslo_concurrency.lockutils [req-be65fb29-70f8-468b-9cc7-6e0bc893aa89 req-ab90abc1-8991-4c99-9924-afcdb7c6bed1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.467 244018 DEBUG nova.compute.manager [req-be65fb29-70f8-468b-9cc7-6e0bc893aa89 req-ab90abc1-8991-4c99-9924-afcdb7c6bed1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] No waiting events found dispatching network-vif-unplugged-99fbecbd-1815-4bf9-8e7f-6f114847f46f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.468 244018 DEBUG nova.compute.manager [req-be65fb29-70f8-468b-9cc7-6e0bc893aa89 req-ab90abc1-8991-4c99-9924-afcdb7c6bed1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-vif-unplugged-99fbecbd-1815-4bf9-8e7f-6f114847f46f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
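
Each Acquiring/acquired/released triplet above is oslo.concurrency's lockutils guarding the per-instance event table while pop_instance_event runs. A sketch of the pattern (lock name taken from the log):

    # Sketch of the lockutils pattern behind the triplets above.
    from oslo_concurrency import lockutils

    with lockutils.lock('828def8a-01dd-4845-98b3-1516060251a0-events'):
        # critical section: pop or dispatch the instance's waiting
        # events, as InstanceEvents.pop_instance_event does
        pass
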
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.535 244018 INFO nova.compute.manager [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Took 0.62 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.536 244018 DEBUG oslo.service.loopingcall [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.536 244018 DEBUG nova.compute.manager [-] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:50:36 np0005629333 nova_compute[244014]: 2026-02-25 12:50:36.536 244018 DEBUG nova.network.neutron [-] [instance: 828def8a-01dd-4845-98b3-1516060251a0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:50:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 612 KiB/s rd, 3.1 MiB/s wr, 109 op/s
Feb 25 07:50:37 np0005629333 nova_compute[244014]: 2026-02-25 12:50:37.284 244018 DEBUG nova.network.neutron [req-e3d6a3dd-819d-49fb-a15c-72b5de1c8946 req-77a7e98a-a389-43c3-8263-850ec4267dea 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Updated VIF entry in instance network info cache for port 99fbecbd-1815-4bf9-8e7f-6f114847f46f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:50:37 np0005629333 nova_compute[244014]: 2026-02-25 12:50:37.285 244018 DEBUG nova.network.neutron [req-e3d6a3dd-819d-49fb-a15c-72b5de1c8946 req-77a7e98a-a389-43c3-8263-850ec4267dea 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Updating instance_info_cache with network_info: [{"id": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "address": "fa:16:3e:7e:94:3d", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap99fbecbd-18", "ovs_interfaceid": "99fbecbd-1815-4bf9-8e7f-6f114847f46f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "address": "fa:16:3e:ec:bf:12", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feec:bf12", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94cae4f8-cc", "ovs_interfaceid": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:50:37 np0005629333 nova_compute[244014]: 2026-02-25 12:50:37.481 244018 DEBUG oslo_concurrency.lockutils [req-e3d6a3dd-819d-49fb-a15c-72b5de1c8946 req-77a7e98a-a389-43c3-8263-850ec4267dea 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-828def8a-01dd-4845-98b3-1516060251a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:50:37 np0005629333 nova_compute[244014]: 2026-02-25 12:50:37.812 244018 DEBUG nova.compute.manager [req-27a8cc2c-489f-471e-a4ff-e560a859ab80 req-2b3a4c6b-9676-4019-9bcc-a6309dafd68e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-vif-unplugged-94cae4f8-cc4b-443a-b22b-36ef77438ede external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:50:37 np0005629333 nova_compute[244014]: 2026-02-25 12:50:37.813 244018 DEBUG oslo_concurrency.lockutils [req-27a8cc2c-489f-471e-a4ff-e560a859ab80 req-2b3a4c6b-9676-4019-9bcc-a6309dafd68e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "828def8a-01dd-4845-98b3-1516060251a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:37 np0005629333 nova_compute[244014]: 2026-02-25 12:50:37.813 244018 DEBUG oslo_concurrency.lockutils [req-27a8cc2c-489f-471e-a4ff-e560a859ab80 req-2b3a4c6b-9676-4019-9bcc-a6309dafd68e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:37 np0005629333 nova_compute[244014]: 2026-02-25 12:50:37.814 244018 DEBUG oslo_concurrency.lockutils [req-27a8cc2c-489f-471e-a4ff-e560a859ab80 req-2b3a4c6b-9676-4019-9bcc-a6309dafd68e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:50:37 np0005629333 nova_compute[244014]: 2026-02-25 12:50:37.814 244018 DEBUG nova.compute.manager [req-27a8cc2c-489f-471e-a4ff-e560a859ab80 req-2b3a4c6b-9676-4019-9bcc-a6309dafd68e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] No waiting events found dispatching network-vif-unplugged-94cae4f8-cc4b-443a-b22b-36ef77438ede pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:50:37 np0005629333 nova_compute[244014]: 2026-02-25 12:50:37.814 244018 DEBUG nova.compute.manager [req-27a8cc2c-489f-471e-a4ff-e560a859ab80 req-2b3a4c6b-9676-4019-9bcc-a6309dafd68e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-vif-unplugged-94cae4f8-cc4b-443a-b22b-36ef77438ede for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:50:37 np0005629333 nova_compute[244014]: 2026-02-25 12:50:37.815 244018 DEBUG nova.compute.manager [req-27a8cc2c-489f-471e-a4ff-e560a859ab80 req-2b3a4c6b-9676-4019-9bcc-a6309dafd68e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-vif-plugged-94cae4f8-cc4b-443a-b22b-36ef77438ede external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:50:37 np0005629333 nova_compute[244014]: 2026-02-25 12:50:37.815 244018 DEBUG oslo_concurrency.lockutils [req-27a8cc2c-489f-471e-a4ff-e560a859ab80 req-2b3a4c6b-9676-4019-9bcc-a6309dafd68e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "828def8a-01dd-4845-98b3-1516060251a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:37 np0005629333 nova_compute[244014]: 2026-02-25 12:50:37.815 244018 DEBUG oslo_concurrency.lockutils [req-27a8cc2c-489f-471e-a4ff-e560a859ab80 req-2b3a4c6b-9676-4019-9bcc-a6309dafd68e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:37 np0005629333 nova_compute[244014]: 2026-02-25 12:50:37.815 244018 DEBUG oslo_concurrency.lockutils [req-27a8cc2c-489f-471e-a4ff-e560a859ab80 req-2b3a4c6b-9676-4019-9bcc-a6309dafd68e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:50:37 np0005629333 nova_compute[244014]: 2026-02-25 12:50:37.816 244018 DEBUG nova.compute.manager [req-27a8cc2c-489f-471e-a4ff-e560a859ab80 req-2b3a4c6b-9676-4019-9bcc-a6309dafd68e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] No waiting events found dispatching network-vif-plugged-94cae4f8-cc4b-443a-b22b-36ef77438ede pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:50:37 np0005629333 nova_compute[244014]: 2026-02-25 12:50:37.816 244018 WARNING nova.compute.manager [req-27a8cc2c-489f-471e-a4ff-e560a859ab80 req-2b3a4c6b-9676-4019-9bcc-a6309dafd68e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received unexpected event network-vif-plugged-94cae4f8-cc4b-443a-b22b-36ef77438ede for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:50:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2102: 305 pgs: 305 active+clean; 343 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 534 KiB/s rd, 1.7 MiB/s wr, 111 op/s
Feb 25 07:50:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:50:38 np0005629333 nova_compute[244014]: 2026-02-25 12:50:38.675 244018 DEBUG nova.compute.manager [req-6bef563b-37ec-4772-924d-d363e13b4c7b req-14cca97d-2536-46dc-b0bd-fd22e5ce55fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-vif-plugged-99fbecbd-1815-4bf9-8e7f-6f114847f46f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:50:38 np0005629333 nova_compute[244014]: 2026-02-25 12:50:38.676 244018 DEBUG oslo_concurrency.lockutils [req-6bef563b-37ec-4772-924d-d363e13b4c7b req-14cca97d-2536-46dc-b0bd-fd22e5ce55fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "828def8a-01dd-4845-98b3-1516060251a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:38 np0005629333 nova_compute[244014]: 2026-02-25 12:50:38.676 244018 DEBUG oslo_concurrency.lockutils [req-6bef563b-37ec-4772-924d-d363e13b4c7b req-14cca97d-2536-46dc-b0bd-fd22e5ce55fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:38 np0005629333 nova_compute[244014]: 2026-02-25 12:50:38.677 244018 DEBUG oslo_concurrency.lockutils [req-6bef563b-37ec-4772-924d-d363e13b4c7b req-14cca97d-2536-46dc-b0bd-fd22e5ce55fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:50:38 np0005629333 nova_compute[244014]: 2026-02-25 12:50:38.677 244018 DEBUG nova.compute.manager [req-6bef563b-37ec-4772-924d-d363e13b4c7b req-14cca97d-2536-46dc-b0bd-fd22e5ce55fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] No waiting events found dispatching network-vif-plugged-99fbecbd-1815-4bf9-8e7f-6f114847f46f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:50:38 np0005629333 nova_compute[244014]: 2026-02-25 12:50:38.678 244018 WARNING nova.compute.manager [req-6bef563b-37ec-4772-924d-d363e13b4c7b req-14cca97d-2536-46dc-b0bd-fd22e5ce55fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received unexpected event network-vif-plugged-99fbecbd-1815-4bf9-8e7f-6f114847f46f for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:50:38 np0005629333 nova_compute[244014]: 2026-02-25 12:50:38.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:50:40 np0005629333 nova_compute[244014]: 2026-02-25 12:50:40.008 244018 DEBUG nova.compute.manager [req-5b49402c-4a83-48b7-92bd-8881be13907f req-32446b4e-8621-47db-b530-5c4054435925 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-vif-deleted-99fbecbd-1815-4bf9-8e7f-6f114847f46f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:50:40 np0005629333 nova_compute[244014]: 2026-02-25 12:50:40.009 244018 INFO nova.compute.manager [req-5b49402c-4a83-48b7-92bd-8881be13907f req-32446b4e-8621-47db-b530-5c4054435925 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Neutron deleted interface 99fbecbd-1815-4bf9-8e7f-6f114847f46f; detaching it from the instance and deleting it from the info cache#033[00m
Feb 25 07:50:40 np0005629333 nova_compute[244014]: 2026-02-25 12:50:40.009 244018 DEBUG nova.network.neutron [req-5b49402c-4a83-48b7-92bd-8881be13907f req-32446b4e-8621-47db-b530-5c4054435925 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Updating instance_info_cache with network_info: [{"id": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "address": "fa:16:3e:ec:bf:12", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feec:bf12", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94cae4f8-cc", "ovs_interfaceid": "94cae4f8-cc4b-443a-b22b-36ef77438ede", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:50:40 np0005629333 nova_compute[244014]: 2026-02-25 12:50:40.332 244018 DEBUG nova.network.neutron [-] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:50:40 np0005629333 nova_compute[244014]: 2026-02-25 12:50:40.347 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:40 np0005629333 nova_compute[244014]: 2026-02-25 12:50:40.481 244018 DEBUG nova.compute.manager [req-5b49402c-4a83-48b7-92bd-8881be13907f req-32446b4e-8621-47db-b530-5c4054435925 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Detach interface failed, port_id=99fbecbd-1815-4bf9-8e7f-6f114847f46f, reason: Instance 828def8a-01dd-4845-98b3-1516060251a0 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 25 07:50:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2103: 305 pgs: 305 active+clean; 343 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 34 KiB/s wr, 28 op/s
Feb 25 07:50:40 np0005629333 nova_compute[244014]: 2026-02-25 12:50:40.847 244018 INFO nova.compute.manager [-] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Took 4.31 seconds to deallocate network for instance.#033[00m
Feb 25 07:50:40 np0005629333 nova_compute[244014]: 2026-02-25 12:50:40.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:50:40 np0005629333 nova_compute[244014]: 2026-02-25 12:50:40.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 07:50:41 np0005629333 nova_compute[244014]: 2026-02-25 12:50:41.107 244018 DEBUG oslo_concurrency.lockutils [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:41 np0005629333 nova_compute[244014]: 2026-02-25 12:50:41.107 244018 DEBUG oslo_concurrency.lockutils [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:41 np0005629333 nova_compute[244014]: 2026-02-25 12:50:41.189 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:41 np0005629333 nova_compute[244014]: 2026-02-25 12:50:41.203 244018 DEBUG oslo_concurrency.processutils [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:50:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:50:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3143914118' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:50:41 np0005629333 nova_compute[244014]: 2026-02-25 12:50:41.804 244018 DEBUG oslo_concurrency.processutils [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
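
The paired "Running cmd" / "returned: 0 in 0.602s" lines come from oslo.concurrency's processutils, which the libvirt driver's RBD backend uses to ask Ceph for pool capacity. A sketch of the same call, using the command line exactly as logged:

    # Sketch of the processutils call behind the paired lines above.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # e.g. stats['stats']['total_bytes'] holds the raw cluster capacity
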
Feb 25 07:50:41 np0005629333 nova_compute[244014]: 2026-02-25 12:50:41.812 244018 DEBUG nova.compute.provider_tree [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:50:41 np0005629333 nova_compute[244014]: 2026-02-25 12:50:41.898 244018 DEBUG nova.scheduler.client.report [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:50:42 np0005629333 nova_compute[244014]: 2026-02-25 12:50:42.069 244018 DEBUG oslo_concurrency.lockutils [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.962s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:50:42 np0005629333 nova_compute[244014]: 2026-02-25 12:50:42.211 244018 DEBUG nova.compute.manager [req-039bbcff-414b-4d18-b96a-74d6330f1d77 req-edd8bacb-cfc5-49d7-bf1d-992f5c4a6cea 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Received event network-vif-deleted-94cae4f8-cc4b-443a-b22b-36ef77438ede external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:50:42 np0005629333 nova_compute[244014]: 2026-02-25 12:50:42.245 244018 INFO nova.scheduler.client.report [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance 828def8a-01dd-4845-98b3-1516060251a0#033[00m
Feb 25 07:50:42 np0005629333 nova_compute[244014]: 2026-02-25 12:50:42.435 244018 DEBUG oslo_concurrency.lockutils [None req-90db4a50-afcb-4c0c-b976-84d5ce9d93de f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "828def8a-01dd-4845-98b3-1516060251a0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.523s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:50:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 39 KiB/s wr, 31 op/s
Feb 25 07:50:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:50:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:50:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:50:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:50:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015353869746002576 of space, bias 1.0, pg target 0.4606160923800773 quantized to 32 (current 32)
Feb 25 07:50:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:50:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:50:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:50:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:50:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:50:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002494070409928913 of space, bias 1.0, pg target 0.7482211229786739 quantized to 32 (current 32)
Feb 25 07:50:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:50:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3976293654385012e-06 of space, bias 4.0, pg target 0.0016771552385262014 quantized to 16 (current 16)
Feb 25 07:50:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:50:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:50:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:50:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:50:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:50:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:50:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:50:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:50:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:50:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
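
In each pg_autoscaler line above, the pg target is the pool's share of raw space times its bias times the cluster-wide PG budget. A budget of 300 (3 OSDs at the default mon_target_pg_per_osd of 100, both assumptions consistent with the 60 GiB cluster) reproduces the logged numbers: 'vms' gives 0.0015353869746 * 1.0 * 300 ≈ 0.4606, and 'cephfs.cephfs.meta' gives 1.3976e-06 * 4.0 * 300 ≈ 0.0016772. The quantized value rounds to a power of two and never drops below the pool's floor, which is why these near-zero targets stay at 32 (16 for the metadata pool). A rough reconstruction under those assumptions:

    # Rough reconstruction of the pg_autoscaler arithmetic above;
    # osds=3 and target_pg_per_osd=100 are assumptions that reproduce
    # the logged targets, not values read from this cluster.
    def pg_target(usage_ratio, bias, osds=3, target_pg_per_osd=100):
        return usage_ratio * bias * osds * target_pg_per_osd

    def quantize(target, floor_pgs=32):
        pgs = 1
        while pgs < target:     # round up to a power of two
            pgs *= 2
        return max(pgs, floor_pgs)

    t = pg_target(0.0015353869746002576, 1.0)  # the 'vms' pool
    print(round(t, 4), quantize(t))            # -> 0.4606 32
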
Feb 25 07:50:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:50:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 14 KiB/s wr, 30 op/s
Feb 25 07:50:45 np0005629333 nova_compute[244014]: 2026-02-25 12:50:45.322 244018 DEBUG oslo_concurrency.lockutils [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:45 np0005629333 nova_compute[244014]: 2026-02-25 12:50:45.322 244018 DEBUG oslo_concurrency.lockutils [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:45 np0005629333 nova_compute[244014]: 2026-02-25 12:50:45.323 244018 DEBUG oslo_concurrency.lockutils [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:45 np0005629333 nova_compute[244014]: 2026-02-25 12:50:45.323 244018 DEBUG oslo_concurrency.lockutils [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:45 np0005629333 nova_compute[244014]: 2026-02-25 12:50:45.323 244018 DEBUG oslo_concurrency.lockutils [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:50:45 np0005629333 nova_compute[244014]: 2026-02-25 12:50:45.324 244018 INFO nova.compute.manager [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Terminating instance#033[00m
Feb 25 07:50:45 np0005629333 nova_compute[244014]: 2026-02-25 12:50:45.325 244018 DEBUG nova.compute.manager [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:50:45 np0005629333 nova_compute[244014]: 2026-02-25 12:50:45.350 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:45 np0005629333 kernel: tapdee62982-d4 (unregistering): left promiscuous mode
Feb 25 07:50:45 np0005629333 NetworkManager[49836]: <info>  [1772023845.8028] device (tapdee62982-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:50:45 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:45Z|01320|binding|INFO|Releasing lport dee62982-d46c-4a81-b4ad-8154d7cfc7af from this chassis (sb_readonly=0)
Feb 25 07:50:45 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:45Z|01321|binding|INFO|Setting lport dee62982-d46c-4a81-b4ad-8154d7cfc7af down in Southbound
Feb 25 07:50:45 np0005629333 nova_compute[244014]: 2026-02-25 12:50:45.810 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:45 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:45Z|01322|binding|INFO|Removing iface tapdee62982-d4 ovn-installed in OVS
Feb 25 07:50:45 np0005629333 nova_compute[244014]: 2026-02-25 12:50:45.814 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:45 np0005629333 kernel: tap344b59b1-93 (unregistering): left promiscuous mode
Feb 25 07:50:45 np0005629333 NetworkManager[49836]: <info>  [1772023845.8231] device (tap344b59b1-93): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:50:45 np0005629333 nova_compute[244014]: 2026-02-25 12:50:45.824 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:45 np0005629333 nova_compute[244014]: 2026-02-25 12:50:45.832 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:45 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:45Z|01323|binding|INFO|Releasing lport 344b59b1-93f2-4e23-8c28-835e5d954630 from this chassis (sb_readonly=1)
Feb 25 07:50:45 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:45Z|01324|binding|INFO|Removing iface tap344b59b1-93 ovn-installed in OVS
Feb 25 07:50:45 np0005629333 nova_compute[244014]: 2026-02-25 12:50:45.836 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:45 np0005629333 nova_compute[244014]: 2026-02-25 12:50:45.843 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:45 np0005629333 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Feb 25 07:50:45 np0005629333 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007a.scope: Consumed 15.271s CPU time.
Feb 25 07:50:45 np0005629333 systemd-machined[210048]: Machine qemu-154-instance-0000007a terminated.
Feb 25 07:50:45 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:45Z|01325|binding|INFO|Setting lport 344b59b1-93f2-4e23-8c28-835e5d954630 down in Southbound
Feb 25 07:50:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:45.913 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:cc:74 10.100.0.9'], port_security=['fa:16:3e:93:cc:74 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'b6501baa-8bc9-4724-b4c1-8ff43faf517c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f3102e48-4ea3-4c65-a010-ac507aeeeba5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7b776d7-fb2a-404e-b423-f79885837022, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=dee62982-d46c-4a81-b4ad-8154d7cfc7af) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:50:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:45.915 157129 INFO neutron.agent.ovn.metadata.agent [-] Port dee62982-d46c-4a81-b4ad-8154d7cfc7af in datapath 481feaf1-7ff4-47be-9159-a1dd19ceebcc unbound from our chassis#033[00m
Feb 25 07:50:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:45.918 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 481feaf1-7ff4-47be-9159-a1dd19ceebcc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:50:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:45.919 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[00066dd3-fedd-451c-857b-c87ce2922b5c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:45.920 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc namespace which is not needed anymore#033[00m
Feb 25 07:50:45 np0005629333 nova_compute[244014]: 2026-02-25 12:50:45.972 244018 INFO nova.virt.libvirt.driver [-] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Instance destroyed successfully.#033[00m
Feb 25 07:50:45 np0005629333 nova_compute[244014]: 2026-02-25 12:50:45.972 244018 DEBUG nova.objects.instance [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid b6501baa-8bc9-4724-b4c1-8ff43faf517c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:50:45 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:45.978 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:10:50 2001:db8::f816:3eff:fe8d:1050'], port_security=['fa:16:3e:8d:10:50 2001:db8::f816:3eff:fe8d:1050'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe8d:1050/64', 'neutron:device_id': 'b6501baa-8bc9-4724-b4c1-8ff43faf517c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb832dde-9848-40c5-9505-cc643b1bd0fa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f3102e48-4ea3-4c65-a010-ac507aeeeba5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f83b91c5-311c-45de-aa70-06fb6ec6fece, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=344b59b1-93f2-4e23-8c28-835e5d954630) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
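
The "Matched UPDATE" lines show ovsdbapp's IDL event matcher firing as the Port_Binding rows flip to up=[False] and lose their chassis. Neutron subscribes by subclassing ovsdbapp's RowEvent; a minimal sketch of that pattern (the handler body is illustrative, not neutron's actual code):

    # Minimal RowEvent sketch for the "Matched UPDATE:
    # PortBindingUpdatedEvent" lines above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None,
            # matching the repr printed in the log
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # invoked after matches() accepts the update, e.g. when the
            # chassis column was cleared as the port went down
            if hasattr(old, 'chassis') and not row.chassis:
                print('port %s unbound' % row.logical_port)
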
Feb 25 07:50:46 np0005629333 nova_compute[244014]: 2026-02-25 12:50:46.113 244018 DEBUG nova.virt.libvirt.vif [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:49:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1173463848',display_name='tempest-TestGettingAddress-server-1173463848',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1173463848',id=122,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNlLCvqIkUYh/9ZHtxnKBN7YBsKpfQ8TjO19iCX554GJUCo/N1T5J3/ZTJ7NHwET0eZFR6/wdUxNMyoCAZQSh5tzxHfA6vjgYnp2UPefpqRUyjRlBnYX2yGGsA8ccwHtsg==',key_name='tempest-TestGettingAddress-238420135',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:49:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-s4cszaw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:49:28Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=b6501baa-8bc9-4724-b4c1-8ff43faf517c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "address": "fa:16:3e:93:cc:74", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdee62982-d4", "ovs_interfaceid": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:50:46 np0005629333 nova_compute[244014]: 2026-02-25 12:50:46.114 244018 DEBUG nova.network.os_vif_util [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "address": "fa:16:3e:93:cc:74", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdee62982-d4", "ovs_interfaceid": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:50:46 np0005629333 nova_compute[244014]: 2026-02-25 12:50:46.115 244018 DEBUG nova.network.os_vif_util [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:93:cc:74,bridge_name='br-int',has_traffic_filtering=True,id=dee62982-d46c-4a81-b4ad-8154d7cfc7af,network=Network(481feaf1-7ff4-47be-9159-a1dd19ceebcc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdee62982-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:50:46 np0005629333 nova_compute[244014]: 2026-02-25 12:50:46.116 244018 DEBUG os_vif [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:cc:74,bridge_name='br-int',has_traffic_filtering=True,id=dee62982-d46c-4a81-b4ad-8154d7cfc7af,network=Network(481feaf1-7ff4-47be-9159-a1dd19ceebcc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdee62982-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:50:46 np0005629333 nova_compute[244014]: 2026-02-25 12:50:46.117 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:46 np0005629333 nova_compute[244014]: 2026-02-25 12:50:46.118 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdee62982-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:46 np0005629333 nova_compute[244014]: 2026-02-25 12:50:46.119 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:46 np0005629333 nova_compute[244014]: 2026-02-25 12:50:46.122 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:50:46 np0005629333 nova_compute[244014]: 2026-02-25 12:50:46.125 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:46 np0005629333 nova_compute[244014]: 2026-02-25 12:50:46.127 244018 INFO os_vif [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:cc:74,bridge_name='br-int',has_traffic_filtering=True,id=dee62982-d46c-4a81-b4ad-8154d7cfc7af,network=Network(481feaf1-7ff4-47be-9159-a1dd19ceebcc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdee62982-d4')#033[00m
Feb 25 07:50:46 np0005629333 nova_compute[244014]: 2026-02-25 12:50:46.128 244018 DEBUG nova.virt.libvirt.vif [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:49:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1173463848',display_name='tempest-TestGettingAddress-server-1173463848',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1173463848',id=122,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNlLCvqIkUYh/9ZHtxnKBN7YBsKpfQ8TjO19iCX554GJUCo/N1T5J3/ZTJ7NHwET0eZFR6/wdUxNMyoCAZQSh5tzxHfA6vjgYnp2UPefpqRUyjRlBnYX2yGGsA8ccwHtsg==',key_name='tempest-TestGettingAddress-238420135',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:49:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-s4cszaw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:49:28Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=b6501baa-8bc9-4724-b4c1-8ff43faf517c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "344b59b1-93f2-4e23-8c28-835e5d954630", "address": "fa:16:3e:8d:10:50", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:1050", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap344b59b1-93", "ovs_interfaceid": "344b59b1-93f2-4e23-8c28-835e5d954630", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:50:46 np0005629333 nova_compute[244014]: 2026-02-25 12:50:46.129 244018 DEBUG nova.network.os_vif_util [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "344b59b1-93f2-4e23-8c28-835e5d954630", "address": "fa:16:3e:8d:10:50", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:1050", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap344b59b1-93", "ovs_interfaceid": "344b59b1-93f2-4e23-8c28-835e5d954630", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:50:46 np0005629333 nova_compute[244014]: 2026-02-25 12:50:46.130 244018 DEBUG nova.network.os_vif_util [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8d:10:50,bridge_name='br-int',has_traffic_filtering=True,id=344b59b1-93f2-4e23-8c28-835e5d954630,network=Network(eb832dde-9848-40c5-9505-cc643b1bd0fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap344b59b1-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:50:46 np0005629333 nova_compute[244014]: 2026-02-25 12:50:46.130 244018 DEBUG os_vif [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:10:50,bridge_name='br-int',has_traffic_filtering=True,id=344b59b1-93f2-4e23-8c28-835e5d954630,network=Network(eb832dde-9848-40c5-9505-cc643b1bd0fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap344b59b1-93') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:50:46 np0005629333 nova_compute[244014]: 2026-02-25 12:50:46.132 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:46 np0005629333 nova_compute[244014]: 2026-02-25 12:50:46.133 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap344b59b1-93, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:46 np0005629333 nova_compute[244014]: 2026-02-25 12:50:46.134 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:46 np0005629333 nova_compute[244014]: 2026-02-25 12:50:46.135 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:50:46 np0005629333 nova_compute[244014]: 2026-02-25 12:50:46.136 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:46 np0005629333 nova_compute[244014]: 2026-02-25 12:50:46.139 244018 INFO os_vif [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:10:50,bridge_name='br-int',has_traffic_filtering=True,id=344b59b1-93f2-4e23-8c28-835e5d954630,network=Network(eb832dde-9848-40c5-9505-cc643b1bd0fa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap344b59b1-93')#033[00m
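[annotation] Each DelPortCommand above is a single ovsdbapp IDL transaction against the local Open vSwitch database; the surrounding [POLLIN]/0-ms timeout vlog lines are just the poller servicing that connection. Roughly equivalent standalone usage (a sketch; the OVSDB socket path and timeout are assumptions, not taken from this log):

    # Sketch of the ovsdbapp call behind "Running txn ... DelPortCommand".
    # The OVSDB endpoint and timeout below are assumptions.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # if_exists=True makes the delete a no-op when the port is already gone,
    # matching DelPortCommand(..., if_exists=True) in the log.
    api.del_port('tapdee62982-d4', bridge='br-int',
                 if_exists=True).execute(check_error=True)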
Feb 25 07:50:46 np0005629333 nova_compute[244014]: 2026-02-25 12:50:46.300 244018 DEBUG nova.compute.manager [req-6a61418e-1ced-469a-bfea-347ddd392f56 req-4b9d4530-4561-4e1c-9aa4-71f2ac961d94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-changed-dee62982-d46c-4a81-b4ad-8154d7cfc7af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:50:46 np0005629333 nova_compute[244014]: 2026-02-25 12:50:46.301 244018 DEBUG nova.compute.manager [req-6a61418e-1ced-469a-bfea-347ddd392f56 req-4b9d4530-4561-4e1c-9aa4-71f2ac961d94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Refreshing instance network info cache due to event network-changed-dee62982-d46c-4a81-b4ad-8154d7cfc7af. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:50:46 np0005629333 nova_compute[244014]: 2026-02-25 12:50:46.301 244018 DEBUG oslo_concurrency.lockutils [req-6a61418e-1ced-469a-bfea-347ddd392f56 req-4b9d4530-4561-4e1c-9aa4-71f2ac961d94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:50:46 np0005629333 nova_compute[244014]: 2026-02-25 12:50:46.301 244018 DEBUG oslo_concurrency.lockutils [req-6a61418e-1ced-469a-bfea-347ddd392f56 req-4b9d4530-4561-4e1c-9aa4-71f2ac961d94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
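[annotation] The Acquiring/Acquired pair above (matched by the "Releasing lock" line at 12:50:48.728) is the usual oslo.concurrency fair-lock idiom guarding the network info cache refresh; the "-events" lock lines later in this log use the decorator form. A sketch of both (lock names are from the log, bodies are stubs):

    # Sketch of the oslo.concurrency idiom behind the Acquiring/Acquired/
    # Releasing lock lines.
    from oslo_concurrency import lockutils

    with lockutils.lock('refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c'):
        pass  # refresh the instance network info cache here

    @lockutils.synchronized('b6501baa-8bc9-4724-b4c1-8ff43faf517c-events')
    def _pop_event():
        pass  # pop and dispatch one waiting instance event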
Feb 25 07:50:46 np0005629333 nova_compute[244014]: 2026-02-25 12:50:46.302 244018 DEBUG nova.network.neutron [req-6a61418e-1ced-469a-bfea-347ddd392f56 req-4b9d4530-4561-4e1c-9aa4-71f2ac961d94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Refreshing network info cache for port dee62982-d46c-4a81-b4ad-8154d7cfc7af _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:50:46 np0005629333 neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc[354422]: [NOTICE]   (354426) : haproxy version is 2.8.14-c23fe91
Feb 25 07:50:46 np0005629333 neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc[354422]: [NOTICE]   (354426) : path to executable is /usr/sbin/haproxy
Feb 25 07:50:46 np0005629333 neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc[354422]: [WARNING]  (354426) : Exiting Master process...
Feb 25 07:50:46 np0005629333 neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc[354422]: [WARNING]  (354426) : Exiting Master process...
Feb 25 07:50:46 np0005629333 neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc[354422]: [ALERT]    (354426) : Current worker (354428) exited with code 143 (Terminated)
Feb 25 07:50:46 np0005629333 neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc[354422]: [WARNING]  (354426) : All workers exited. Exiting... (0)
Feb 25 07:50:46 np0005629333 systemd[1]: libpod-7b53b721630132d693daa620ddd634af4092aee8eab1fae2dcc83c54626328f6.scope: Deactivated successfully.
Feb 25 07:50:46 np0005629333 nova_compute[244014]: 2026-02-25 12:50:46.332 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:46 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:46.333 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
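[annotation] "Matched UPDATE: SbGlobalUpdateEvent" is ovsdbapp's row-event machinery inside the OVN metadata agent: a RowEvent subclass declares the table and event types it watches, and its run() fires when the SB_Global row changes (here nb_cfg went from 37 to 38). A minimal sketch of the pattern (simplified; not neutron's actual handler body):

    # Minimal ovsdbapp RowEvent sketch matching the "Matched UPDATE:
    # SbGlobalUpdateEvent" line; neutron's real run() does nb_cfg bookkeeping.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        def __init__(self):
            # Watch only updates to the single-row SB_Global table.
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

        def run(self, event, row, old):
            # e.g. old.nb_cfg == 37, row.nb_cfg == 38 in the log above
            print('SB_Global nb_cfg is now', row.nb_cfg)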
Feb 25 07:50:46 np0005629333 podman[356714]: 2026-02-25 12:50:46.335009741 +0000 UTC m=+0.315285743 container died 7b53b721630132d693daa620ddd634af4092aee8eab1fae2dcc83c54626328f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:50:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 14 KiB/s wr, 30 op/s
Feb 25 07:50:47 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7b53b721630132d693daa620ddd634af4092aee8eab1fae2dcc83c54626328f6-userdata-shm.mount: Deactivated successfully.
Feb 25 07:50:47 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d6fa4fcab328f5a267bf20836bf596abfcd5ef7623497a8450d10dfaa2251880-merged.mount: Deactivated successfully.
Feb 25 07:50:47 np0005629333 nova_compute[244014]: 2026-02-25 12:50:47.442 244018 DEBUG nova.compute.manager [req-d401b4fe-823a-4415-856c-082d5f8ac912 req-f2441223-5813-47b6-a13f-1bc18b45fc40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-vif-unplugged-dee62982-d46c-4a81-b4ad-8154d7cfc7af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:50:47 np0005629333 nova_compute[244014]: 2026-02-25 12:50:47.443 244018 DEBUG oslo_concurrency.lockutils [req-d401b4fe-823a-4415-856c-082d5f8ac912 req-f2441223-5813-47b6-a13f-1bc18b45fc40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:47 np0005629333 nova_compute[244014]: 2026-02-25 12:50:47.443 244018 DEBUG oslo_concurrency.lockutils [req-d401b4fe-823a-4415-856c-082d5f8ac912 req-f2441223-5813-47b6-a13f-1bc18b45fc40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:47 np0005629333 nova_compute[244014]: 2026-02-25 12:50:47.444 244018 DEBUG oslo_concurrency.lockutils [req-d401b4fe-823a-4415-856c-082d5f8ac912 req-f2441223-5813-47b6-a13f-1bc18b45fc40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:50:47 np0005629333 nova_compute[244014]: 2026-02-25 12:50:47.444 244018 DEBUG nova.compute.manager [req-d401b4fe-823a-4415-856c-082d5f8ac912 req-f2441223-5813-47b6-a13f-1bc18b45fc40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] No waiting events found dispatching network-vif-unplugged-dee62982-d46c-4a81-b4ad-8154d7cfc7af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:50:47 np0005629333 nova_compute[244014]: 2026-02-25 12:50:47.445 244018 DEBUG nova.compute.manager [req-d401b4fe-823a-4415-856c-082d5f8ac912 req-f2441223-5813-47b6-a13f-1bc18b45fc40 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-vif-unplugged-dee62982-d46c-4a81-b4ad-8154d7cfc7af for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:50:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:50:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1598363634' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:50:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:50:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1598363634' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
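[annotation] These audit lines show client.openstack issuing "df" and "osd pool get-quota" as JSON mon commands (Nova's Ceph usage probe, visible as the "ceph df" subprocess further down). The same queries can be made through the librados Python binding; a sketch, assuming the client.openstack keyring and /etc/ceph/ceph.conf are in place:

    # Sketch of the JSON mon commands from the audit log above, via the
    # librados Python binding ("rados").
    import json
    import rados

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     name='client.openstack') as cluster:
        ret, out, errs = cluster.mon_command(
            json.dumps({'prefix': 'df', 'format': 'json'}), b'')
        df = json.loads(out)
        ret, out, errs = cluster.mon_command(
            json.dumps({'prefix': 'osd pool get-quota',
                        'pool': 'volumes', 'format': 'json'}), b'')
        quota = json.loads(out)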
Feb 25 07:50:47 np0005629333 podman[356714]: 2026-02-25 12:50:47.558291843 +0000 UTC m=+1.538567805 container cleanup 7b53b721630132d693daa620ddd634af4092aee8eab1fae2dcc83c54626328f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 25 07:50:47 np0005629333 systemd[1]: libpod-conmon-7b53b721630132d693daa620ddd634af4092aee8eab1fae2dcc83c54626328f6.scope: Deactivated successfully.
Feb 25 07:50:47 np0005629333 nova_compute[244014]: 2026-02-25 12:50:47.882 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "968294d4-db1f-4cdc-822b-f7d4e382ac90" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:47 np0005629333 nova_compute[244014]: 2026-02-25 12:50:47.883 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "968294d4-db1f-4cdc-822b-f7d4e382ac90" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:47 np0005629333 nova_compute[244014]: 2026-02-25 12:50:47.937 244018 DEBUG nova.compute.manager [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:50:48 np0005629333 nova_compute[244014]: 2026-02-25 12:50:48.057 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:48 np0005629333 nova_compute[244014]: 2026-02-25 12:50:48.058 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:48 np0005629333 nova_compute[244014]: 2026-02-25 12:50:48.067 244018 DEBUG nova.virt.hardware [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:50:48 np0005629333 nova_compute[244014]: 2026-02-25 12:50:48.067 244018 INFO nova.compute.claims [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:50:48 np0005629333 podman[356764]: 2026-02-25 12:50:48.226991226 +0000 UTC m=+0.639046166 container remove 7b53b721630132d693daa620ddd634af4092aee8eab1fae2dcc83c54626328f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 07:50:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:48.236 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cc9c1659-95a2-4852-ae1a-ae19b4796b23]: (4, ('Wed Feb 25 12:50:46 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc (7b53b721630132d693daa620ddd634af4092aee8eab1fae2dcc83c54626328f6)\n7b53b721630132d693daa620ddd634af4092aee8eab1fae2dcc83c54626328f6\nWed Feb 25 12:50:47 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc (7b53b721630132d693daa620ddd634af4092aee8eab1fae2dcc83c54626328f6)\n7b53b721630132d693daa620ddd634af4092aee8eab1fae2dcc83c54626328f6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:48.237 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8caf3a4b-efd9-4750-b7a9-12a51495ebcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:48.238 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap481feaf1-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:48 np0005629333 nova_compute[244014]: 2026-02-25 12:50:48.240 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:48 np0005629333 kernel: tap481feaf1-70: left promiscuous mode
Feb 25 07:50:48 np0005629333 nova_compute[244014]: 2026-02-25 12:50:48.245 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:48.250 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[49487980-357d-4778-826d-480efcd30fb6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:48 np0005629333 nova_compute[244014]: 2026-02-25 12:50:48.259 244018 DEBUG oslo_concurrency.processutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:50:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:48.278 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[211c1569-55d5-4d4c-afef-2ce0fe5e0a68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:48.279 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c5fce779-2873-47fe-8e33-8e95f718407e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:48.294 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[302c3c3a-793a-49f6-a04e-3e04315e03aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573486, 'reachable_time': 40955, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356780, 'error': None, 'target': 'ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:48.312 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:50:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:48.312 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[e2207473-3c44-455b-8b81-f4434cc05bd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
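[annotation] remove_netns above is neutron's privsep-wrapped namespace deletion; the reply lines are the privsep daemon shipping results back to the agent, after which systemd reports the run-netns mount unit deactivating. Under the hood this is a pyroute2 call; a direct, root-only sketch (namespace name from the log):

    # Sketch of what neutron.privileged.agent.linux.ip_lib.remove_netns
    # does via pyroute2 (requires root).
    from pyroute2 import netns

    ns = 'ovnmeta-481feaf1-7ff4-47be-9159-a1dd19ceebcc'
    if ns in netns.listnetns():
        netns.remove(ns)  # unlinks /run/netns/<ns>; systemd then drops the mount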
Feb 25 07:50:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:48.313 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 344b59b1-93f2-4e23-8c28-835e5d954630 in datapath eb832dde-9848-40c5-9505-cc643b1bd0fa unbound from our chassis#033[00m
Feb 25 07:50:48 np0005629333 systemd[1]: run-netns-ovnmeta\x2d481feaf1\x2d7ff4\x2d47be\x2d9159\x2da1dd19ceebcc.mount: Deactivated successfully.
Feb 25 07:50:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:48.314 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eb832dde-9848-40c5-9505-cc643b1bd0fa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:50:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:48.315 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4683f05e-1de2-41fe-8dfa-3fe354471fad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:48.316 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa namespace which is not needed anymore#033[00m
Feb 25 07:50:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2107: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 9.9 KiB/s wr, 35 op/s
Feb 25 07:50:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:50:48 np0005629333 nova_compute[244014]: 2026-02-25 12:50:48.663 244018 DEBUG nova.network.neutron [req-6a61418e-1ced-469a-bfea-347ddd392f56 req-4b9d4530-4561-4e1c-9aa4-71f2ac961d94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Updated VIF entry in instance network info cache for port dee62982-d46c-4a81-b4ad-8154d7cfc7af. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:50:48 np0005629333 nova_compute[244014]: 2026-02-25 12:50:48.664 244018 DEBUG nova.network.neutron [req-6a61418e-1ced-469a-bfea-347ddd392f56 req-4b9d4530-4561-4e1c-9aa4-71f2ac961d94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Updating instance_info_cache with network_info: [{"id": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "address": "fa:16:3e:93:cc:74", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdee62982-d4", "ovs_interfaceid": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "344b59b1-93f2-4e23-8c28-835e5d954630", "address": "fa:16:3e:8d:10:50", "network": {"id": "eb832dde-9848-40c5-9505-cc643b1bd0fa", "bridge": "br-int", "label": "tempest-network-smoke--578810748", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe8d:1050", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap344b59b1-93", "ovs_interfaceid": "344b59b1-93f2-4e23-8c28-835e5d954630", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:50:48 np0005629333 nova_compute[244014]: 2026-02-25 12:50:48.728 244018 DEBUG oslo_concurrency.lockutils [req-6a61418e-1ced-469a-bfea-347ddd392f56 req-4b9d4530-4561-4e1c-9aa4-71f2ac961d94 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-b6501baa-8bc9-4724-b4c1-8ff43faf517c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:50:48 np0005629333 neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa[354495]: [NOTICE]   (354499) : haproxy version is 2.8.14-c23fe91
Feb 25 07:50:48 np0005629333 neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa[354495]: [NOTICE]   (354499) : path to executable is /usr/sbin/haproxy
Feb 25 07:50:48 np0005629333 neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa[354495]: [WARNING]  (354499) : Exiting Master process...
Feb 25 07:50:48 np0005629333 neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa[354495]: [WARNING]  (354499) : Exiting Master process...
Feb 25 07:50:48 np0005629333 neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa[354495]: [ALERT]    (354499) : Current worker (354501) exited with code 143 (Terminated)
Feb 25 07:50:48 np0005629333 neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa[354495]: [WARNING]  (354499) : All workers exited. Exiting... (0)
Feb 25 07:50:48 np0005629333 systemd[1]: libpod-af9a11f6a910945d2cf71f6bc4e628f38eee74c652461113531378a36272d49c.scope: Deactivated successfully.
Feb 25 07:50:48 np0005629333 podman[356798]: 2026-02-25 12:50:48.789496939 +0000 UTC m=+0.387959806 container died af9a11f6a910945d2cf71f6bc4e628f38eee74c652461113531378a36272d49c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 07:50:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:50:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1975364052' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.072 244018 DEBUG oslo_concurrency.processutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.813s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.078 244018 DEBUG nova.compute.provider_tree [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.094 244018 DEBUG nova.scheduler.client.report [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
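[annotation] The inventory dict above is what the resource tracker reports to Placement; schedulable capacity per resource class is (total - reserved) * allocation_ratio, so this host advertises 32 VCPUs, 7167 MB of RAM, and about 52 GB of disk. Worked out:

    # Placement capacity implied by the inventory logged above:
    # capacity = (total - reserved) * allocation_ratio
    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2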
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.175 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.176 244018 DEBUG nova.compute.manager [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:50:49 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-af9a11f6a910945d2cf71f6bc4e628f38eee74c652461113531378a36272d49c-userdata-shm.mount: Deactivated successfully.
Feb 25 07:50:49 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1abdf7d75bc10bb95826887c814df1a0ab66d19dd1ae55e1c4beadda3a359345-merged.mount: Deactivated successfully.
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.298 244018 DEBUG nova.compute.manager [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.299 244018 DEBUG nova.network.neutron [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.326 244018 INFO nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.420 244018 DEBUG nova.compute.manager [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:50:49 np0005629333 podman[356798]: 2026-02-25 12:50:49.538681993 +0000 UTC m=+1.137144910 container cleanup af9a11f6a910945d2cf71f6bc4e628f38eee74c652461113531378a36272d49c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.541 244018 DEBUG nova.compute.manager [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.543 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.544 244018 INFO nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Creating image(s)#033[00m
Feb 25 07:50:49 np0005629333 systemd[1]: libpod-conmon-af9a11f6a910945d2cf71f6bc4e628f38eee74c652461113531378a36272d49c.scope: Deactivated successfully.
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.579 244018 DEBUG nova.storage.rbd_utils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 968294d4-db1f-4cdc-822b-f7d4e382ac90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.607 244018 DEBUG nova.storage.rbd_utils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 968294d4-db1f-4cdc-822b-f7d4e382ac90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.632 244018 DEBUG nova.storage.rbd_utils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 968294d4-db1f-4cdc-822b-f7d4e382ac90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.636 244018 DEBUG oslo_concurrency.processutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.662 244018 DEBUG nova.policy [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.696 244018 DEBUG oslo_concurrency.processutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
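[annotation] The qemu-img probe above runs under oslo.concurrency's prlimit wrapper, capping address space at 1 GiB (--as=1073741824) and CPU time at 30 s so a malformed or hostile image cannot wedge the compute service. A sketch of the same call (image path from the log):

    # Sketch of the prlimit-wrapped qemu-img probe logged above; the
    # address-space/CPU limits mirror the --as/--cpu flags in the command.
    import json
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1073741824,  # --as=1073741824 (1 GiB)
        cpu_time=30)               # --cpu=30

    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json', prlimit=limits)
    info = json.loads(out)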
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.696 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.697 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.697 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.715 244018 DEBUG nova.storage.rbd_utils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 968294d4-db1f-4cdc-822b-f7d4e382ac90_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.717 244018 DEBUG oslo_concurrency.processutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 968294d4-db1f-4cdc-822b-f7d4e382ac90_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.745 244018 DEBUG nova.compute.manager [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-vif-plugged-dee62982-d46c-4a81-b4ad-8154d7cfc7af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.745 244018 DEBUG oslo_concurrency.lockutils [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.745 244018 DEBUG oslo_concurrency.lockutils [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.745 244018 DEBUG oslo_concurrency.lockutils [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.745 244018 DEBUG nova.compute.manager [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] No waiting events found dispatching network-vif-plugged-dee62982-d46c-4a81-b4ad-8154d7cfc7af pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.746 244018 WARNING nova.compute.manager [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received unexpected event network-vif-plugged-dee62982-d46c-4a81-b4ad-8154d7cfc7af for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.746 244018 DEBUG nova.compute.manager [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-vif-unplugged-344b59b1-93f2-4e23-8c28-835e5d954630 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.746 244018 DEBUG oslo_concurrency.lockutils [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.746 244018 DEBUG oslo_concurrency.lockutils [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.746 244018 DEBUG oslo_concurrency.lockutils [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.746 244018 DEBUG nova.compute.manager [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] No waiting events found dispatching network-vif-unplugged-344b59b1-93f2-4e23-8c28-835e5d954630 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.746 244018 DEBUG nova.compute.manager [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-vif-unplugged-344b59b1-93f2-4e23-8c28-835e5d954630 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.746 244018 DEBUG nova.compute.manager [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-vif-plugged-344b59b1-93f2-4e23-8c28-835e5d954630 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.747 244018 DEBUG oslo_concurrency.lockutils [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.747 244018 DEBUG oslo_concurrency.lockutils [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.747 244018 DEBUG oslo_concurrency.lockutils [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.747 244018 DEBUG nova.compute.manager [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] No waiting events found dispatching network-vif-plugged-344b59b1-93f2-4e23-8c28-835e5d954630 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:50:49 np0005629333 nova_compute[244014]: 2026-02-25 12:50:49.747 244018 WARNING nova.compute.manager [req-8603b852-7dea-4e7f-98e1-24b5bae61b18 req-793cf1e3-4a52-441d-8fd5-3d64a159a0d7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received unexpected event network-vif-plugged-344b59b1-93f2-4e23-8c28-835e5d954630 for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:50:50 np0005629333 podman[356859]: 2026-02-25 12:50:50.060489307 +0000 UTC m=+0.487426834 container remove af9a11f6a910945d2cf71f6bc4e628f38eee74c652461113531378a36272d49c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 07:50:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:50.066 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bdb1ea26-2966-48d9-8dc2-e151213baa5d]: (4, ('Wed Feb 25 12:50:48 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa (af9a11f6a910945d2cf71f6bc4e628f38eee74c652461113531378a36272d49c)\naf9a11f6a910945d2cf71f6bc4e628f38eee74c652461113531378a36272d49c\nWed Feb 25 12:50:49 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa (af9a11f6a910945d2cf71f6bc4e628f38eee74c652461113531378a36272d49c)\naf9a11f6a910945d2cf71f6bc4e628f38eee74c652461113531378a36272d49c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:50.068 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[36dc6550-c9b3-41f6-bc22-b395309d0ef0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:50.069 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb832dde-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:50 np0005629333 nova_compute[244014]: 2026-02-25 12:50:50.071 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:50 np0005629333 kernel: tapeb832dde-90: left promiscuous mode
Feb 25 07:50:50 np0005629333 nova_compute[244014]: 2026-02-25 12:50:50.076 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:50.078 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e6124a29-6b3b-40a4-8ca3-c3bb8b536a1a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:50.088 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9b062b9a-dec0-431d-b0e1-d522ea484f31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:50.091 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5beb1803-7130-4d42-85b0-fccb1303ea01]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:50.106 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5d093758-3559-4d3b-9ae9-100ee1ada4fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 573558, 'reachable_time': 36906, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356960, 'error': None, 'target': 'ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:50.108 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eb832dde-9848-40c5-9505-cc643b1bd0fa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:50:50 np0005629333 systemd[1]: run-netns-ovnmeta\x2deb832dde\x2d9848\x2d40c5\x2d9505\x2dcc643b1bd0fa.mount: Deactivated successfully.
Feb 25 07:50:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:50.108 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[2c68ac8c-a5c9-4e43-a87b-89003f991765]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:50.109 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 25 07:50:50 np0005629333 nova_compute[244014]: 2026-02-25 12:50:50.352 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 5.8 KiB/s wr, 9 op/s
Feb 25 07:50:51 np0005629333 nova_compute[244014]: 2026-02-25 12:50:51.136 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:51 np0005629333 nova_compute[244014]: 2026-02-25 12:50:51.146 244018 DEBUG nova.network.neutron [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Successfully created port: a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:50:51 np0005629333 nova_compute[244014]: 2026-02-25 12:50:51.153 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023836.1516545, 828def8a-01dd-4845-98b3-1516060251a0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:50:51 np0005629333 nova_compute[244014]: 2026-02-25 12:50:51.153 244018 INFO nova.compute.manager [-] [instance: 828def8a-01dd-4845-98b3-1516060251a0] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:50:51 np0005629333 nova_compute[244014]: 2026-02-25 12:50:51.188 244018 DEBUG nova.compute.manager [None req-bccee992-167b-47a8-97f7-014e88b50b4e - - - - - -] [instance: 828def8a-01dd-4845-98b3-1516060251a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:50:51 np0005629333 nova_compute[244014]: 2026-02-25 12:50:51.897 244018 DEBUG nova.network.neutron [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Successfully updated port: a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:50:51 np0005629333 nova_compute[244014]: 2026-02-25 12:50:51.916 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-968294d4-db1f-4cdc-822b-f7d4e382ac90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:50:51 np0005629333 nova_compute[244014]: 2026-02-25 12:50:51.917 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-968294d4-db1f-4cdc-822b-f7d4e382ac90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:50:51 np0005629333 nova_compute[244014]: 2026-02-25 12:50:51.918 244018 DEBUG nova.network.neutron [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:50:51 np0005629333 nova_compute[244014]: 2026-02-25 12:50:51.986 244018 DEBUG nova.compute.manager [req-90f49a4e-c3cd-487e-971e-22156e5ae088 req-63dcbf07-c919-4517-8874-dfa187991692 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Received event network-changed-a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:50:51 np0005629333 nova_compute[244014]: 2026-02-25 12:50:51.986 244018 DEBUG nova.compute.manager [req-90f49a4e-c3cd-487e-971e-22156e5ae088 req-63dcbf07-c919-4517-8874-dfa187991692 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Refreshing instance network info cache due to event network-changed-a4ebf65d-00dd-4cf9-88b7-af09e1ff5738. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:50:51 np0005629333 nova_compute[244014]: 2026-02-25 12:50:51.987 244018 DEBUG oslo_concurrency.lockutils [req-90f49a4e-c3cd-487e-971e-22156e5ae088 req-63dcbf07-c919-4517-8874-dfa187991692 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-968294d4-db1f-4cdc-822b-f7d4e382ac90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:50:52 np0005629333 nova_compute[244014]: 2026-02-25 12:50:52.040 244018 DEBUG oslo_concurrency.processutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 968294d4-db1f-4cdc-822b-f7d4e382ac90_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:50:52 np0005629333 nova_compute[244014]: 2026-02-25 12:50:52.106 244018 DEBUG nova.storage.rbd_utils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] resizing rbd image 968294d4-db1f-4cdc-822b-f7d4e382ac90_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 25 07:50:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:52.110 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:52 np0005629333 nova_compute[244014]: 2026-02-25 12:50:52.257 244018 DEBUG nova.network.neutron [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:50:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2109: 305 pgs: 305 active+clean; 271 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.5 MiB/s wr, 40 op/s
Feb 25 07:50:52 np0005629333 nova_compute[244014]: 2026-02-25 12:50:52.911 244018 DEBUG nova.objects.instance [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'migration_context' on Instance uuid 968294d4-db1f-4cdc-822b-f7d4e382ac90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:50:52 np0005629333 nova_compute[244014]: 2026-02-25 12:50:52.928 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:50:52 np0005629333 nova_compute[244014]: 2026-02-25 12:50:52.929 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Ensure instance console log exists: /var/lib/nova/instances/968294d4-db1f-4cdc-822b-f7d4e382ac90/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:50:52 np0005629333 nova_compute[244014]: 2026-02-25 12:50:52.930 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:52 np0005629333 nova_compute[244014]: 2026-02-25 12:50:52.931 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:52 np0005629333 nova_compute[244014]: 2026-02-25 12:50:52.931 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.124 244018 INFO nova.virt.libvirt.driver [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Deleting instance files /var/lib/nova/instances/b6501baa-8bc9-4724-b4c1-8ff43faf517c_del#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.125 244018 INFO nova.virt.libvirt.driver [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Deletion of /var/lib/nova/instances/b6501baa-8bc9-4724-b4c1-8ff43faf517c_del complete#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.202 244018 INFO nova.compute.manager [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Took 7.88 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.203 244018 DEBUG oslo.service.loopingcall [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.203 244018 DEBUG nova.compute.manager [-] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.203 244018 DEBUG nova.network.neutron [-] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.259 244018 DEBUG nova.network.neutron [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Updating instance_info_cache with network_info: [{"id": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "address": "fa:16:3e:9f:b9:1b", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ebf65d-00", "ovs_interfaceid": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.301 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-968294d4-db1f-4cdc-822b-f7d4e382ac90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.301 244018 DEBUG nova.compute.manager [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Instance network_info: |[{"id": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "address": "fa:16:3e:9f:b9:1b", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ebf65d-00", "ovs_interfaceid": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.302 244018 DEBUG oslo_concurrency.lockutils [req-90f49a4e-c3cd-487e-971e-22156e5ae088 req-63dcbf07-c919-4517-8874-dfa187991692 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-968294d4-db1f-4cdc-822b-f7d4e382ac90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.303 244018 DEBUG nova.network.neutron [req-90f49a4e-c3cd-487e-971e-22156e5ae088 req-63dcbf07-c919-4517-8874-dfa187991692 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Refreshing network info cache for port a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.309 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Start _get_guest_xml network_info=[{"id": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "address": "fa:16:3e:9f:b9:1b", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ebf65d-00", "ovs_interfaceid": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.316 244018 WARNING nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.321 244018 DEBUG nova.virt.libvirt.host [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.322 244018 DEBUG nova.virt.libvirt.host [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.328 244018 DEBUG nova.virt.libvirt.host [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.329 244018 DEBUG nova.virt.libvirt.host [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.329 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.329 244018 DEBUG nova.virt.hardware [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.330 244018 DEBUG nova.virt.hardware [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.330 244018 DEBUG nova.virt.hardware [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.330 244018 DEBUG nova.virt.hardware [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.331 244018 DEBUG nova.virt.hardware [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.331 244018 DEBUG nova.virt.hardware [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.331 244018 DEBUG nova.virt.hardware [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.331 244018 DEBUG nova.virt.hardware [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.331 244018 DEBUG nova.virt.hardware [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.332 244018 DEBUG nova.virt.hardware [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.332 244018 DEBUG nova.virt.hardware [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.335 244018 DEBUG oslo_concurrency.processutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:50:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:50:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:50:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2407116719' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.916 244018 DEBUG oslo_concurrency.processutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.954 244018 DEBUG nova.storage.rbd_utils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 968294d4-db1f-4cdc-822b-f7d4e382ac90_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:50:53 np0005629333 nova_compute[244014]: 2026-02-25 12:50:53.961 244018 DEBUG oslo_concurrency.processutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.029 244018 DEBUG nova.compute.manager [req-77718d30-ce5e-43dd-bf14-cd2b80668813 req-eb0a2aa3-4d31-4cf1-8267-8f25a38a2743 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-vif-deleted-344b59b1-93f2-4e23-8c28-835e5d954630 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.030 244018 INFO nova.compute.manager [req-77718d30-ce5e-43dd-bf14-cd2b80668813 req-eb0a2aa3-4d31-4cf1-8267-8f25a38a2743 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Neutron deleted interface 344b59b1-93f2-4e23-8c28-835e5d954630; detaching it from the instance and deleting it from the info cache#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.031 244018 DEBUG nova.network.neutron [req-77718d30-ce5e-43dd-bf14-cd2b80668813 req-eb0a2aa3-4d31-4cf1-8267-8f25a38a2743 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Updating instance_info_cache with network_info: [{"id": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "address": "fa:16:3e:93:cc:74", "network": {"id": "481feaf1-7ff4-47be-9159-a1dd19ceebcc", "bridge": "br-int", "label": "tempest-network-smoke--524248332", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdee62982-d4", "ovs_interfaceid": "dee62982-d46c-4a81-b4ad-8154d7cfc7af", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.322 244018 DEBUG nova.network.neutron [-] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.326 244018 DEBUG nova.compute.manager [req-77718d30-ce5e-43dd-bf14-cd2b80668813 req-eb0a2aa3-4d31-4cf1-8267-8f25a38a2743 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Detach interface failed, port_id=344b59b1-93f2-4e23-8c28-835e5d954630, reason: Instance b6501baa-8bc9-4724-b4c1-8ff43faf517c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.337 244018 INFO nova.compute.manager [-] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Took 1.13 seconds to deallocate network for instance.#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.378 244018 DEBUG oslo_concurrency.lockutils [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.379 244018 DEBUG oslo_concurrency.lockutils [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.453 244018 DEBUG oslo_concurrency.processutils [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:50:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:50:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3020492380' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.510 244018 DEBUG oslo_concurrency.processutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.512 244018 DEBUG nova.virt.libvirt.vif [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:50:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-653885904',display_name='tempest-TestNetworkBasicOps-server-653885904',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-653885904',id=125,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOy92l8/vPN7vFNQNIFwAyPWnh0ZWRvqGkx7SK83ofkqUIVSoYlWbbBqq+DpRGXPAAbaPssNxxLMrtn4SsTj7FK0oeeoWePcAOUtLz/kEkMZ4dQRPKUcIrOPQSmGB7w+fw==',key_name='tempest-TestNetworkBasicOps-27091948',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-tqixi3n6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:50:49Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=968294d4-db1f-4cdc-822b-f7d4e382ac90,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "address": "fa:16:3e:9f:b9:1b", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ebf65d-00", "ovs_interfaceid": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.512 244018 DEBUG nova.network.os_vif_util [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "address": "fa:16:3e:9f:b9:1b", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ebf65d-00", "ovs_interfaceid": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.513 244018 DEBUG nova.network.os_vif_util [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:b9:1b,bridge_name='br-int',has_traffic_filtering=True,id=a4ebf65d-00dd-4cf9-88b7-af09e1ff5738,network=Network(06dcd48c-d26b-4718-b4c7-9c2416698bed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4ebf65d-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.514 244018 DEBUG nova.objects.instance [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_devices' on Instance uuid 968294d4-db1f-4cdc-822b-f7d4e382ac90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.531 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:50:54 np0005629333 nova_compute[244014]:  <uuid>968294d4-db1f-4cdc-822b-f7d4e382ac90</uuid>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:  <name>instance-0000007d</name>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestNetworkBasicOps-server-653885904</nova:name>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:50:53</nova:creationTime>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:50:54 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:        <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:        <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:        <nova:port uuid="a4ebf65d-00dd-4cf9-88b7-af09e1ff5738">
Feb 25 07:50:54 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <entry name="serial">968294d4-db1f-4cdc-822b-f7d4e382ac90</entry>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <entry name="uuid">968294d4-db1f-4cdc-822b-f7d4e382ac90</entry>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/968294d4-db1f-4cdc-822b-f7d4e382ac90_disk">
Feb 25 07:50:54 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:50:54 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/968294d4-db1f-4cdc-822b-f7d4e382ac90_disk.config">
Feb 25 07:50:54 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:50:54 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:9f:b9:1b"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <target dev="tapa4ebf65d-00"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/968294d4-db1f-4cdc-822b-f7d4e382ac90/console.log" append="off"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:50:54 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:50:54 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:50:54 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:50:54 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.532 244018 DEBUG nova.compute.manager [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Preparing to wait for external event network-vif-plugged-a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.532 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "968294d4-db1f-4cdc-822b-f7d4e382ac90-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.532 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "968294d4-db1f-4cdc-822b-f7d4e382ac90-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.533 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "968294d4-db1f-4cdc-822b-f7d4e382ac90-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.533 244018 DEBUG nova.virt.libvirt.vif [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:50:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-653885904',display_name='tempest-TestNetworkBasicOps-server-653885904',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-653885904',id=125,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOy92l8/vPN7vFNQNIFwAyPWnh0ZWRvqGkx7SK83ofkqUIVSoYlWbbBqq+DpRGXPAAbaPssNxxLMrtn4SsTj7FK0oeeoWePcAOUtLz/kEkMZ4dQRPKUcIrOPQSmGB7w+fw==',key_name='tempest-TestNetworkBasicOps-27091948',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-tqixi3n6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:50:49Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=968294d4-db1f-4cdc-822b-f7d4e382ac90,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "address": "fa:16:3e:9f:b9:1b", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ebf65d-00", "ovs_interfaceid": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.534 244018 DEBUG nova.network.os_vif_util [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "address": "fa:16:3e:9f:b9:1b", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ebf65d-00", "ovs_interfaceid": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.534 244018 DEBUG nova.network.os_vif_util [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:b9:1b,bridge_name='br-int',has_traffic_filtering=True,id=a4ebf65d-00dd-4cf9-88b7-af09e1ff5738,network=Network(06dcd48c-d26b-4718-b4c7-9c2416698bed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4ebf65d-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.535 244018 DEBUG os_vif [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:b9:1b,bridge_name='br-int',has_traffic_filtering=True,id=a4ebf65d-00dd-4cf9-88b7-af09e1ff5738,network=Network(06dcd48c-d26b-4718-b4c7-9c2416698bed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4ebf65d-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.535 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.536 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.536 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.539 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.540 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa4ebf65d-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.540 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa4ebf65d-00, col_values=(('external_ids', {'iface-id': 'a4ebf65d-00dd-4cf9-88b7-af09e1ff5738', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9f:b9:1b', 'vm-uuid': '968294d4-db1f-4cdc-822b-f7d4e382ac90'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.542 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:54 np0005629333 NetworkManager[49836]: <info>  [1772023854.5429] manager: (tapa4ebf65d-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/551)
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.544 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.546 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.547 244018 INFO os_vif [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:b9:1b,bridge_name='br-int',has_traffic_filtering=True,id=a4ebf65d-00dd-4cf9-88b7-af09e1ff5738,network=Network(06dcd48c-d26b-4718-b4c7-9c2416698bed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4ebf65d-00')#033[00m
Feb 25 07:50:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 305 active+clean; 271 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.5 MiB/s wr, 37 op/s
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.603 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.603 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.604 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:9f:b9:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.604 244018 INFO nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Using config drive#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.634 244018 DEBUG nova.storage.rbd_utils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 968294d4-db1f-4cdc-822b-f7d4e382ac90_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.933 244018 DEBUG nova.network.neutron [req-90f49a4e-c3cd-487e-971e-22156e5ae088 req-63dcbf07-c919-4517-8874-dfa187991692 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Updated VIF entry in instance network info cache for port a4ebf65d-00dd-4cf9-88b7-af09e1ff5738. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.934 244018 DEBUG nova.network.neutron [req-90f49a4e-c3cd-487e-971e-22156e5ae088 req-63dcbf07-c919-4517-8874-dfa187991692 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Updating instance_info_cache with network_info: [{"id": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "address": "fa:16:3e:9f:b9:1b", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ebf65d-00", "ovs_interfaceid": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:50:54 np0005629333 nova_compute[244014]: 2026-02-25 12:50:54.951 244018 DEBUG oslo_concurrency.lockutils [req-90f49a4e-c3cd-487e-971e-22156e5ae088 req-63dcbf07-c919-4517-8874-dfa187991692 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-968294d4-db1f-4cdc-822b-f7d4e382ac90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:50:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:50:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3739611012' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:50:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.027 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.028 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.028 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.039 244018 DEBUG oslo_concurrency.processutils [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.044 244018 DEBUG nova.compute.provider_tree [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.061 244018 DEBUG nova.scheduler.client.report [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.088 244018 DEBUG oslo_concurrency.lockutils [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.098 244018 INFO nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Creating config drive at /var/lib/nova/instances/968294d4-db1f-4cdc-822b-f7d4e382ac90/disk.config#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.102 244018 DEBUG oslo_concurrency.processutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/968294d4-db1f-4cdc-822b-f7d4e382ac90/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpnuc3toiq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.145 244018 INFO nova.scheduler.client.report [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance b6501baa-8bc9-4724-b4c1-8ff43faf517c#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.208 244018 DEBUG oslo_concurrency.lockutils [None req-e69c2f94-2c08-4865-8477-a2bbda95c16e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "b6501baa-8bc9-4724-b4c1-8ff43faf517c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.257 244018 DEBUG oslo_concurrency.processutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/968294d4-db1f-4cdc-822b-f7d4e382ac90/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpnuc3toiq" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.283 244018 DEBUG nova.storage.rbd_utils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 968294d4-db1f-4cdc-822b-f7d4e382ac90_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.286 244018 DEBUG oslo_concurrency.processutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/968294d4-db1f-4cdc-822b-f7d4e382ac90/disk.config 968294d4-db1f-4cdc-822b-f7d4e382ac90_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.355 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.413 244018 DEBUG oslo_concurrency.processutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/968294d4-db1f-4cdc-822b-f7d4e382ac90/disk.config 968294d4-db1f-4cdc-822b-f7d4e382ac90_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.414 244018 INFO nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Deleting local config drive /var/lib/nova/instances/968294d4-db1f-4cdc-822b-f7d4e382ac90/disk.config because it was imported into RBD.#033[00m
Feb 25 07:50:55 np0005629333 kernel: tapa4ebf65d-00: entered promiscuous mode
Feb 25 07:50:55 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:55Z|01326|binding|INFO|Claiming lport a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 for this chassis.
Feb 25 07:50:55 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:55Z|01327|binding|INFO|a4ebf65d-00dd-4cf9-88b7-af09e1ff5738: Claiming fa:16:3e:9f:b9:1b 10.100.0.5
Feb 25 07:50:55 np0005629333 NetworkManager[49836]: <info>  [1772023855.4622] manager: (tapa4ebf65d-00): new Tun device (/org/freedesktop/NetworkManager/Devices/552)
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.461 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:55 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:55Z|01328|binding|INFO|Setting lport a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 ovn-installed in OVS
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.468 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.469 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:55 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:55Z|01329|binding|INFO|Setting lport a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 up in Southbound
Feb 25 07:50:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.471 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:b9:1b 10.100.0.5'], port_security=['fa:16:3e:9f:b9:1b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '968294d4-db1f-4cdc-822b-f7d4e382ac90', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '2', 'neutron:security_group_ids': '46c21bd2-9635-4433-9b19-dbda2570ebd9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d3a1f8d-2c5f-4ea3-9518-03dbef27606a, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=a4ebf65d-00dd-4cf9-88b7-af09e1ff5738) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.474 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.474 157129 INFO neutron.agent.ovn.metadata.agent [-] Port a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 in datapath 06dcd48c-d26b-4718-b4c7-9c2416698bed bound to our chassis#033[00m
Feb 25 07:50:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.478 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 06dcd48c-d26b-4718-b4c7-9c2416698bed#033[00m
Feb 25 07:50:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.493 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9d2e8c16-c949-4d60-8a27-1654c1532aa5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:55 np0005629333 systemd-machined[210048]: New machine qemu-157-instance-0000007d.
Feb 25 07:50:55 np0005629333 systemd-udevd[357196]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:50:55 np0005629333 systemd[1]: Started Virtual Machine qemu-157-instance-0000007d.
Feb 25 07:50:55 np0005629333 NetworkManager[49836]: <info>  [1772023855.5208] device (tapa4ebf65d-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:50:55 np0005629333 NetworkManager[49836]: <info>  [1772023855.5222] device (tapa4ebf65d-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:50:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.527 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3c0414a5-c993-4125-af99-03425189dba2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.531 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[fe0aa7ed-a9ce-4d54-99be-5767ee9606d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.556 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4e706d-babe-4f89-9ea6-56e3867122f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.571 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1e79a2-4bce-4643-b9e8-47d51da5d327]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap06dcd48c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:35:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 390], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578120, 'reachable_time': 38011, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357208, 'error': None, 'target': 'ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.583 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8b73d627-d7de-4836-8894-b3da78af0243]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap06dcd48c-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 578130, 'tstamp': 578130}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357209, 'error': None, 'target': 'ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap06dcd48c-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 578133, 'tstamp': 578133}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357209, 'error': None, 'target': 'ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:50:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.585 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06dcd48c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.587 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.588 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:50:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.589 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap06dcd48c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.589 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:50:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.590 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap06dcd48c-d0, col_values=(('external_ids', {'iface-id': '06afb3ba-d963-4735-ae78-91cfa95e52ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:50:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:50:55.591 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.756 244018 DEBUG nova.compute.manager [req-699759c8-e5a5-4634-8ed2-c4561c419b94 req-d16b4aa8-3bcd-4571-8ad9-f536dcffc0ad 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Received event network-vif-plugged-a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.756 244018 DEBUG oslo_concurrency.lockutils [req-699759c8-e5a5-4634-8ed2-c4561c419b94 req-d16b4aa8-3bcd-4571-8ad9-f536dcffc0ad 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "968294d4-db1f-4cdc-822b-f7d4e382ac90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.756 244018 DEBUG oslo_concurrency.lockutils [req-699759c8-e5a5-4634-8ed2-c4561c419b94 req-d16b4aa8-3bcd-4571-8ad9-f536dcffc0ad 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "968294d4-db1f-4cdc-822b-f7d4e382ac90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.757 244018 DEBUG oslo_concurrency.lockutils [req-699759c8-e5a5-4634-8ed2-c4561c419b94 req-d16b4aa8-3bcd-4571-8ad9-f536dcffc0ad 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "968294d4-db1f-4cdc-822b-f7d4e382ac90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.757 244018 DEBUG nova.compute.manager [req-699759c8-e5a5-4634-8ed2-c4561c419b94 req-d16b4aa8-3bcd-4571-8ad9-f536dcffc0ad 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Processing event network-vif-plugged-a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.823 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023855.82343, 968294d4-db1f-4cdc-822b-f7d4e382ac90 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.824 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] VM Started (Lifecycle Event)#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.826 244018 DEBUG nova.compute.manager [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.829 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.835 244018 INFO nova.virt.libvirt.driver [-] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Instance spawned successfully.#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.835 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.848 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.851 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.860 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.860 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.861 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.861 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.862 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.862 244018 DEBUG nova.virt.libvirt.driver [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.871 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.871 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023855.8263345, 968294d4-db1f-4cdc-822b-f7d4e382ac90 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.871 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.896 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.900 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023855.8303695, 968294d4-db1f-4cdc-822b-f7d4e382ac90 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.900 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] VM Resumed (Lifecycle Event)
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.923 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.926 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.934 244018 INFO nova.compute.manager [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Took 6.39 seconds to spawn the instance on the hypervisor.
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.935 244018 DEBUG nova.compute.manager [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:50:55 np0005629333 nova_compute[244014]: 2026-02-25 12:50:55.965 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:50:56 np0005629333 nova_compute[244014]: 2026-02-25 12:50:56.011 244018 INFO nova.compute.manager [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Took 7.98 seconds to build instance.
Feb 25 07:50:56 np0005629333 nova_compute[244014]: 2026-02-25 12:50:56.025 244018 DEBUG oslo_concurrency.lockutils [None req-d28f2b74-c5cd-4d30-a437-676933549981 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "968294d4-db1f-4cdc-822b-f7d4e382ac90" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:50:56 np0005629333 nova_compute[244014]: 2026-02-25 12:50:56.148 244018 DEBUG nova.compute.manager [req-502d7000-2748-45b8-a77d-e3394b48ce72 req-3f0214fa-fde4-4a2b-b2ff-f54b466e5dcd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Received event network-vif-deleted-dee62982-d46c-4a81-b4ad-8154d7cfc7af external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:50:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Feb 25 07:50:57 np0005629333 nova_compute[244014]: 2026-02-25 12:50:57.902 244018 DEBUG nova.compute.manager [req-37127379-7137-4004-8333-4ac779d9d19f req-6a6e9bfb-1e02-48ad-bf7c-6446f88dc9a7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Received event network-vif-plugged-a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:50:57 np0005629333 nova_compute[244014]: 2026-02-25 12:50:57.902 244018 DEBUG oslo_concurrency.lockutils [req-37127379-7137-4004-8333-4ac779d9d19f req-6a6e9bfb-1e02-48ad-bf7c-6446f88dc9a7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "968294d4-db1f-4cdc-822b-f7d4e382ac90-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:50:57 np0005629333 nova_compute[244014]: 2026-02-25 12:50:57.903 244018 DEBUG oslo_concurrency.lockutils [req-37127379-7137-4004-8333-4ac779d9d19f req-6a6e9bfb-1e02-48ad-bf7c-6446f88dc9a7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "968294d4-db1f-4cdc-822b-f7d4e382ac90-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:50:57 np0005629333 nova_compute[244014]: 2026-02-25 12:50:57.904 244018 DEBUG oslo_concurrency.lockutils [req-37127379-7137-4004-8333-4ac779d9d19f req-6a6e9bfb-1e02-48ad-bf7c-6446f88dc9a7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "968294d4-db1f-4cdc-822b-f7d4e382ac90-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:50:57 np0005629333 nova_compute[244014]: 2026-02-25 12:50:57.904 244018 DEBUG nova.compute.manager [req-37127379-7137-4004-8333-4ac779d9d19f req-6a6e9bfb-1e02-48ad-bf7c-6446f88dc9a7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] No waiting events found dispatching network-vif-plugged-a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:50:57 np0005629333 nova_compute[244014]: 2026-02-25 12:50:57.905 244018 WARNING nova.compute.manager [req-37127379-7137-4004-8333-4ac779d9d19f req-6a6e9bfb-1e02-48ad-bf7c-6446f88dc9a7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Received unexpected event network-vif-plugged-a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 for instance with vm_state active and task_state None.
Feb 25 07:50:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2112: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1002 KiB/s rd, 1.8 MiB/s wr, 96 op/s
Feb 25 07:50:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:50:59 np0005629333 nova_compute[244014]: 2026-02-25 12:50:59.544 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:50:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:50:59Z|01330|binding|INFO|Releasing lport 06afb3ba-d963-4735-ae78-91cfa95e52ff from this chassis (sb_readonly=0)
Feb 25 07:50:59 np0005629333 nova_compute[244014]: 2026-02-25 12:50:59.652 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:50:59 np0005629333 nova_compute[244014]: 2026-02-25 12:50:59.711 244018 DEBUG nova.compute.manager [req-6d75b847-aa42-43e1-ab79-0a7bd5be47bd req-0ab59913-9d29-43b0-aafa-64c050894b93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Received event network-changed-a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:50:59 np0005629333 nova_compute[244014]: 2026-02-25 12:50:59.711 244018 DEBUG nova.compute.manager [req-6d75b847-aa42-43e1-ab79-0a7bd5be47bd req-0ab59913-9d29-43b0-aafa-64c050894b93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Refreshing instance network info cache due to event network-changed-a4ebf65d-00dd-4cf9-88b7-af09e1ff5738. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:50:59 np0005629333 nova_compute[244014]: 2026-02-25 12:50:59.711 244018 DEBUG oslo_concurrency.lockutils [req-6d75b847-aa42-43e1-ab79-0a7bd5be47bd req-0ab59913-9d29-43b0-aafa-64c050894b93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-968294d4-db1f-4cdc-822b-f7d4e382ac90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:50:59 np0005629333 nova_compute[244014]: 2026-02-25 12:50:59.711 244018 DEBUG oslo_concurrency.lockutils [req-6d75b847-aa42-43e1-ab79-0a7bd5be47bd req-0ab59913-9d29-43b0-aafa-64c050894b93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-968294d4-db1f-4cdc-822b-f7d4e382ac90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:50:59 np0005629333 nova_compute[244014]: 2026-02-25 12:50:59.711 244018 DEBUG nova.network.neutron [req-6d75b847-aa42-43e1-ab79-0a7bd5be47bd req-0ab59913-9d29-43b0-aafa-64c050894b93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Refreshing network info cache for port a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:51:00 np0005629333 nova_compute[244014]: 2026-02-25 12:51:00.357 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:51:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2113: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 997 KiB/s rd, 1.8 MiB/s wr, 90 op/s
Feb 25 07:51:00 np0005629333 nova_compute[244014]: 2026-02-25 12:51:00.971 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023845.970157, b6501baa-8bc9-4724-b4c1-8ff43faf517c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:51:00 np0005629333 nova_compute[244014]: 2026-02-25 12:51:00.972 244018 INFO nova.compute.manager [-] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] VM Stopped (Lifecycle Event)
Feb 25 07:51:00 np0005629333 nova_compute[244014]: 2026-02-25 12:51:00.995 244018 DEBUG nova.compute.manager [None req-e876b3d4-4ad1-4af6-9230-33b2c405d67b - - - - - -] [instance: b6501baa-8bc9-4724-b4c1-8ff43faf517c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:51:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:51:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:51:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:51:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:51:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:51:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:51:02 np0005629333 nova_compute[244014]: 2026-02-25 12:51:02.214 244018 DEBUG nova.network.neutron [req-6d75b847-aa42-43e1-ab79-0a7bd5be47bd req-0ab59913-9d29-43b0-aafa-64c050894b93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Updated VIF entry in instance network info cache for port a4ebf65d-00dd-4cf9-88b7-af09e1ff5738. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 07:51:02 np0005629333 nova_compute[244014]: 2026-02-25 12:51:02.216 244018 DEBUG nova.network.neutron [req-6d75b847-aa42-43e1-ab79-0a7bd5be47bd req-0ab59913-9d29-43b0-aafa-64c050894b93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Updating instance_info_cache with network_info: [{"id": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "address": "fa:16:3e:9f:b9:1b", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ebf65d-00", "ovs_interfaceid": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:51:02 np0005629333 nova_compute[244014]: 2026-02-25 12:51:02.253 244018 DEBUG oslo_concurrency.lockutils [req-6d75b847-aa42-43e1-ab79-0a7bd5be47bd req-0ab59913-9d29-43b0-aafa-64c050894b93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-968294d4-db1f-4cdc-822b-f7d4e382ac90" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:51:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Feb 25 07:51:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:51:04 np0005629333 nova_compute[244014]: 2026-02-25 12:51:04.547 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:51:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2115: 305 pgs: 305 active+clean; 279 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 315 KiB/s wr, 91 op/s
Feb 25 07:51:04 np0005629333 podman[357252]: 2026-02-25 12:51:04.748414144 +0000 UTC m=+0.083175590 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:51:04 np0005629333 podman[357253]: 2026-02-25 12:51:04.771088894 +0000 UTC m=+0.104092990 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 25 07:51:05 np0005629333 nova_compute[244014]: 2026-02-25 12:51:05.359 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:51:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 305 active+clean; 282 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 731 KiB/s wr, 95 op/s
Feb 25 07:51:07 np0005629333 ovn_controller[147040]: 2026-02-25T12:51:07Z|00157|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9f:b9:1b 10.100.0.5
Feb 25 07:51:07 np0005629333 ovn_controller[147040]: 2026-02-25T12:51:07Z|00158|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9f:b9:1b 10.100.0.5
Feb 25 07:51:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2117: 305 pgs: 305 active+clean; 295 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 123 op/s
Feb 25 07:51:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:51:09 np0005629333 nova_compute[244014]: 2026-02-25 12:51:09.550 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:51:10 np0005629333 nova_compute[244014]: 2026-02-25 12:51:10.361 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:51:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2118: 305 pgs: 305 active+clean; 295 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.6 MiB/s wr, 68 op/s
Feb 25 07:51:11 np0005629333 nova_compute[244014]: 2026-02-25 12:51:11.305 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:51:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2119: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 96 op/s
Feb 25 07:51:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:51:14 np0005629333 nova_compute[244014]: 2026-02-25 12:51:14.420 244018 INFO nova.compute.manager [None req-7b830aa4-ae46-4c5f-9df0-40341875ddcd 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Get console output
Feb 25 07:51:14 np0005629333 nova_compute[244014]: 2026-02-25 12:51:14.427 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 07:51:14 np0005629333 nova_compute[244014]: 2026-02-25 12:51:14.553 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:51:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 305 active+clean; 312 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 07:51:14 np0005629333 nova_compute[244014]: 2026-02-25 12:51:14.748 244018 DEBUG oslo_concurrency.lockutils [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "968294d4-db1f-4cdc-822b-f7d4e382ac90" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:51:14 np0005629333 nova_compute[244014]: 2026-02-25 12:51:14.749 244018 DEBUG oslo_concurrency.lockutils [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "968294d4-db1f-4cdc-822b-f7d4e382ac90" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:51:14 np0005629333 nova_compute[244014]: 2026-02-25 12:51:14.749 244018 DEBUG oslo_concurrency.lockutils [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "968294d4-db1f-4cdc-822b-f7d4e382ac90-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:51:14 np0005629333 nova_compute[244014]: 2026-02-25 12:51:14.750 244018 DEBUG oslo_concurrency.lockutils [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "968294d4-db1f-4cdc-822b-f7d4e382ac90-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:51:14 np0005629333 nova_compute[244014]: 2026-02-25 12:51:14.750 244018 DEBUG oslo_concurrency.lockutils [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "968294d4-db1f-4cdc-822b-f7d4e382ac90-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:51:14 np0005629333 nova_compute[244014]: 2026-02-25 12:51:14.752 244018 INFO nova.compute.manager [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Terminating instance
Feb 25 07:51:14 np0005629333 nova_compute[244014]: 2026-02-25 12:51:14.754 244018 DEBUG nova.compute.manager [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 07:51:14 np0005629333 kernel: tapa4ebf65d-00 (unregistering): left promiscuous mode
Feb 25 07:51:14 np0005629333 NetworkManager[49836]: <info>  [1772023874.8551] device (tapa4ebf65d-00): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:51:14 np0005629333 nova_compute[244014]: 2026-02-25 12:51:14.863 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:51:14 np0005629333 ovn_controller[147040]: 2026-02-25T12:51:14Z|01331|binding|INFO|Releasing lport a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 from this chassis (sb_readonly=0)
Feb 25 07:51:14 np0005629333 ovn_controller[147040]: 2026-02-25T12:51:14Z|01332|binding|INFO|Setting lport a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 down in Southbound
Feb 25 07:51:14 np0005629333 ovn_controller[147040]: 2026-02-25T12:51:14Z|01333|binding|INFO|Removing iface tapa4ebf65d-00 ovn-installed in OVS
Feb 25 07:51:14 np0005629333 nova_compute[244014]: 2026-02-25 12:51:14.869 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:51:14 np0005629333 nova_compute[244014]: 2026-02-25 12:51:14.874 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:51:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:14.883 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:b9:1b 10.100.0.5'], port_security=['fa:16:3e:9f:b9:1b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '968294d4-db1f-4cdc-822b-f7d4e382ac90', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '4', 'neutron:security_group_ids': '46c21bd2-9635-4433-9b19-dbda2570ebd9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.172'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d3a1f8d-2c5f-4ea3-9518-03dbef27606a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=a4ebf65d-00dd-4cf9-88b7-af09e1ff5738) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:51:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:14.885 157129 INFO neutron.agent.ovn.metadata.agent [-] Port a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 in datapath 06dcd48c-d26b-4718-b4c7-9c2416698bed unbound from our chassis
Feb 25 07:51:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:14.887 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 06dcd48c-d26b-4718-b4c7-9c2416698bed
Feb 25 07:51:14 np0005629333 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Feb 25 07:51:14 np0005629333 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007d.scope: Consumed 13.002s CPU time.
Feb 25 07:51:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:14.899 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5c7c63d7-af4a-4bec-8864-edda7af23d03]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:51:14 np0005629333 systemd-machined[210048]: Machine qemu-157-instance-0000007d terminated.
Feb 25 07:51:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:14.925 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[73452ede-6a9c-4760-89ce-4b4ef5e0a526]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:51:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:14.929 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a20664f0-a66b-4b6b-8771-24edc426038c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:51:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:14.949 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f6c0e95d-e9f2-44bb-b374-bc63bfc7707c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:51:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:14.964 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[aa423813-991a-412c-95a2-d86a64d8bd4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap06dcd48c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8b:35:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 390], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578120, 'reachable_time': 38011, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357305, 'error': None, 'target': 'ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:51:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:14.980 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dc3ee3af-b693-4c9a-9760-284e87598c3b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap06dcd48c-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 578130, 'tstamp': 578130}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357308, 'error': None, 'target': 'ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap06dcd48c-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 578133, 'tstamp': 578133}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357308, 'error': None, 'target': 'ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:51:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:14.982 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06dcd48c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:51:14 np0005629333 nova_compute[244014]: 2026-02-25 12:51:14.983 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:51:14 np0005629333 nova_compute[244014]: 2026-02-25 12:51:14.988 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:51:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:14.989 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap06dcd48c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:51:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:14.989 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 07:51:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:14.989 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap06dcd48c-d0, col_values=(('external_ids', {'iface-id': '06afb3ba-d963-4735-ae78-91cfa95e52ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:51:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:14.990 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 07:51:14 np0005629333 nova_compute[244014]: 2026-02-25 12:51:14.990 244018 INFO nova.virt.libvirt.driver [-] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Instance destroyed successfully.
Feb 25 07:51:14 np0005629333 nova_compute[244014]: 2026-02-25 12:51:14.991 244018 DEBUG nova.objects.instance [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'resources' on Instance uuid 968294d4-db1f-4cdc-822b-f7d4e382ac90 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:51:15 np0005629333 nova_compute[244014]: 2026-02-25 12:51:15.008 244018 DEBUG nova.virt.libvirt.vif [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:50:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-653885904',display_name='tempest-TestNetworkBasicOps-server-653885904',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-653885904',id=125,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOy92l8/vPN7vFNQNIFwAyPWnh0ZWRvqGkx7SK83ofkqUIVSoYlWbbBqq+DpRGXPAAbaPssNxxLMrtn4SsTj7FK0oeeoWePcAOUtLz/kEkMZ4dQRPKUcIrOPQSmGB7w+fw==',key_name='tempest-TestNetworkBasicOps-27091948',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:50:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-tqixi3n6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:50:55Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=968294d4-db1f-4cdc-822b-f7d4e382ac90,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "address": "fa:16:3e:9f:b9:1b", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ebf65d-00", "ovs_interfaceid": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 07:51:15 np0005629333 nova_compute[244014]: 2026-02-25 12:51:15.008 244018 DEBUG nova.network.os_vif_util [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "address": "fa:16:3e:9f:b9:1b", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4ebf65d-00", "ovs_interfaceid": "a4ebf65d-00dd-4cf9-88b7-af09e1ff5738", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:51:15 np0005629333 nova_compute[244014]: 2026-02-25 12:51:15.009 244018 DEBUG nova.network.os_vif_util [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9f:b9:1b,bridge_name='br-int',has_traffic_filtering=True,id=a4ebf65d-00dd-4cf9-88b7-af09e1ff5738,network=Network(06dcd48c-d26b-4718-b4c7-9c2416698bed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4ebf65d-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:51:15 np0005629333 nova_compute[244014]: 2026-02-25 12:51:15.009 244018 DEBUG os_vif [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9f:b9:1b,bridge_name='br-int',has_traffic_filtering=True,id=a4ebf65d-00dd-4cf9-88b7-af09e1ff5738,network=Network(06dcd48c-d26b-4718-b4c7-9c2416698bed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4ebf65d-00') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 07:51:15 np0005629333 nova_compute[244014]: 2026-02-25 12:51:15.011 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:51:15 np0005629333 nova_compute[244014]: 2026-02-25 12:51:15.011 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa4ebf65d-00, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:51:15 np0005629333 nova_compute[244014]: 2026-02-25 12:51:15.012 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:51:15 np0005629333 nova_compute[244014]: 2026-02-25 12:51:15.014 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:51:15 np0005629333 nova_compute[244014]: 2026-02-25 12:51:15.017 244018 INFO os_vif [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9f:b9:1b,bridge_name='br-int',has_traffic_filtering=True,id=a4ebf65d-00dd-4cf9-88b7-af09e1ff5738,network=Network(06dcd48c-d26b-4718-b4c7-9c2416698bed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4ebf65d-00')
Feb 25 07:51:15 np0005629333 nova_compute[244014]: 2026-02-25 12:51:15.330 244018 INFO nova.virt.libvirt.driver [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Deleting instance files /var/lib/nova/instances/968294d4-db1f-4cdc-822b-f7d4e382ac90_del
Feb 25 07:51:15 np0005629333 nova_compute[244014]: 2026-02-25 12:51:15.331 244018 INFO nova.virt.libvirt.driver [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Deletion of /var/lib/nova/instances/968294d4-db1f-4cdc-822b-f7d4e382ac90_del complete
Feb 25 07:51:15 np0005629333 nova_compute[244014]: 2026-02-25 12:51:15.365 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:51:15 np0005629333 nova_compute[244014]: 2026-02-25 12:51:15.392 244018 INFO nova.compute.manager [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Took 0.64 seconds to destroy the instance on the hypervisor.
Feb 25 07:51:15 np0005629333 nova_compute[244014]: 2026-02-25 12:51:15.392 244018 DEBUG oslo.service.loopingcall [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 07:51:15 np0005629333 nova_compute[244014]: 2026-02-25 12:51:15.393 244018 DEBUG nova.compute.manager [-] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 07:51:15 np0005629333 nova_compute[244014]: 2026-02-25 12:51:15.393 244018 DEBUG nova.network.neutron [-] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 07:51:15 np0005629333 nova_compute[244014]: 2026-02-25 12:51:15.404 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:51:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 305 active+clean; 286 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Feb 25 07:51:17 np0005629333 nova_compute[244014]: 2026-02-25 12:51:17.731 244018 DEBUG nova.network.neutron [-] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:51:17 np0005629333 nova_compute[244014]: 2026-02-25 12:51:17.757 244018 INFO nova.compute.manager [-] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Took 2.36 seconds to deallocate network for instance.
Feb 25 07:51:17 np0005629333 nova_compute[244014]: 2026-02-25 12:51:17.832 244018 DEBUG oslo_concurrency.lockutils [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:51:17 np0005629333 nova_compute[244014]: 2026-02-25 12:51:17.833 244018 DEBUG oslo_concurrency.lockutils [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:51:17 np0005629333 nova_compute[244014]: 2026-02-25 12:51:17.859 244018 DEBUG nova.compute.manager [req-ddc65e11-3e54-4b0f-a0b2-aa98cfbcf027 req-f50286e8-46ba-45f2-89a7-8682a2311e51 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Received event network-vif-deleted-a4ebf65d-00dd-4cf9-88b7-af09e1ff5738 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:51:17 np0005629333 nova_compute[244014]: 2026-02-25 12:51:17.911 244018 DEBUG oslo_concurrency.processutils [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:51:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:18.268 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:4b:a0 10.100.0.2 2001:db8::f816:3eff:fe6f:4ba0'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6f:4ba0/64', 'neutron:device_id': 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e19ed85e-54ee-4274-951c-ade412625983', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a56192da-444f-498f-a789-c0e4cafb114e, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=591ce053-3764-4ce0-841f-6728c8fd9491) old=Port_Binding(mac=['fa:16:3e:6f:4b:a0 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e19ed85e-54ee-4274-951c-ade412625983', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:51:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:18.270 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 591ce053-3764-4ce0-841f-6728c8fd9491 in datapath e19ed85e-54ee-4274-951c-ade412625983 updated
Feb 25 07:51:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:18.272 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e19ed85e-54ee-4274-951c-ade412625983, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 07:51:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:18.273 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c8bb10-d597-42a7-a956-ffb399a8835f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:51:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:51:18 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2729816460' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:51:18 np0005629333 nova_compute[244014]: 2026-02-25 12:51:18.474 244018 DEBUG oslo_concurrency.processutils [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:51:18 np0005629333 nova_compute[244014]: 2026-02-25 12:51:18.481 244018 DEBUG nova.compute.provider_tree [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:51:18 np0005629333 nova_compute[244014]: 2026-02-25 12:51:18.504 244018 DEBUG nova.scheduler.client.report [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:51:18 np0005629333 nova_compute[244014]: 2026-02-25 12:51:18.531 244018 DEBUG oslo_concurrency.lockutils [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:51:18 np0005629333 nova_compute[244014]: 2026-02-25 12:51:18.583 244018 INFO nova.scheduler.client.report [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Deleted allocations for instance 968294d4-db1f-4cdc-822b-f7d4e382ac90
Feb 25 07:51:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2122: 305 pgs: 305 active+clean; 233 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 1.7 MiB/s wr, 87 op/s
Feb 25 07:51:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:51:18 np0005629333 nova_compute[244014]: 2026-02-25 12:51:18.879 244018 DEBUG oslo_concurrency.lockutils [None req-5014e92c-c1d1-4c86-bfc6-4bb24ca746d9 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "968294d4-db1f-4cdc-822b-f7d4e382ac90" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:51:20 np0005629333 nova_compute[244014]: 2026-02-25 12:51:20.013 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:51:20 np0005629333 nova_compute[244014]: 2026-02-25 12:51:20.366 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:51:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2123: 305 pgs: 305 active+clean; 233 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 152 KiB/s rd, 578 KiB/s wr, 56 op/s
Feb 25 07:51:21 np0005629333 nova_compute[244014]: 2026-02-25 12:51:21.601 244018 DEBUG oslo_concurrency.lockutils [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "d6cf21ec-717e-41f7-9351-2214b43ce275" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:51:21 np0005629333 nova_compute[244014]: 2026-02-25 12:51:21.601 244018 DEBUG oslo_concurrency.lockutils [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "d6cf21ec-717e-41f7-9351-2214b43ce275" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:51:21 np0005629333 nova_compute[244014]: 2026-02-25 12:51:21.602 244018 DEBUG oslo_concurrency.lockutils [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "d6cf21ec-717e-41f7-9351-2214b43ce275-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:51:21 np0005629333 nova_compute[244014]: 2026-02-25 12:51:21.602 244018 DEBUG oslo_concurrency.lockutils [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "d6cf21ec-717e-41f7-9351-2214b43ce275-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:51:21 np0005629333 nova_compute[244014]: 2026-02-25 12:51:21.602 244018 DEBUG oslo_concurrency.lockutils [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "d6cf21ec-717e-41f7-9351-2214b43ce275-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
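
The "Acquiring lock ... / Lock ... released" pairs logged by oslo_concurrency.lockutils correspond to its context-manager API. A minimal sketch of the nesting seen above, with the lock names taken from the log (the bodies are illustrative only):

    from oslo_concurrency import lockutils

    instance_uuid = 'd6cf21ec-717e-41f7-9351-2214b43ce275'
    # nova serializes terminate_instance per instance UUID ...
    with lockutils.lock(instance_uuid):
        # ... then takes a nested per-instance "-events" lock while
        # clearing any pending external events for that instance.
        with lockutils.lock(instance_uuid + '-events'):
            pass  # clear pending events here
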
Feb 25 07:51:21 np0005629333 nova_compute[244014]: 2026-02-25 12:51:21.604 244018 INFO nova.compute.manager [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Terminating instance#033[00m
Feb 25 07:51:21 np0005629333 nova_compute[244014]: 2026-02-25 12:51:21.605 244018 DEBUG nova.compute.manager [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:51:21 np0005629333 kernel: tap33f0e898-94 (unregistering): left promiscuous mode
Feb 25 07:51:21 np0005629333 nova_compute[244014]: 2026-02-25 12:51:21.788 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:21 np0005629333 NetworkManager[49836]: <info>  [1772023881.7896] device (tap33f0e898-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:51:21 np0005629333 nova_compute[244014]: 2026-02-25 12:51:21.793 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:51:21Z|01334|binding|INFO|Releasing lport 33f0e898-9477-416a-9ae6-268ef8e71ee3 from this chassis (sb_readonly=0)
Feb 25 07:51:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:51:21Z|01335|binding|INFO|Setting lport 33f0e898-9477-416a-9ae6-268ef8e71ee3 down in Southbound
Feb 25 07:51:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:51:21Z|01336|binding|INFO|Removing iface tap33f0e898-94 ovn-installed in OVS
Feb 25 07:51:21 np0005629333 nova_compute[244014]: 2026-02-25 12:51:21.797 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:21 np0005629333 nova_compute[244014]: 2026-02-25 12:51:21.801 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:21.814 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:16:fb 10.100.0.8'], port_security=['fa:16:3e:8d:16:fb 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd6cf21ec-717e-41f7-9351-2214b43ce275', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7113265a-1d97-4a63-a30a-7677c464f652', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d3a1f8d-2c5f-4ea3-9518-03dbef27606a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=33f0e898-9477-416a-9ae6-268ef8e71ee3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:51:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:21.816 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 33f0e898-9477-416a-9ae6-268ef8e71ee3 in datapath 06dcd48c-d26b-4718-b4c7-9c2416698bed unbound from our chassis#033[00m
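
The "Matched UPDATE: PortBindingUpdatedEvent" line above comes from ovsdbapp's row-event machinery: the agent registers a RowEvent against the Port_Binding table and reacts when a row's chassis changes. A hedged sketch in the same style (the class name and the reaction are illustrative, not neutron's exact code):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingChassisEvent(row_event.RowEvent):
        def __init__(self):
            # Watch only updates to the Port_Binding table.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old=None):
            # "old" carries only the changed columns, so this fires
            # only when the binding's chassis actually changed.
            return hasattr(old, 'chassis')

        def run(self, event, row, old):
            print('port %s rebound/unbound' % row.logical_port)
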
Feb 25 07:51:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:21.818 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 06dcd48c-d26b-4718-b4c7-9c2416698bed, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:51:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:21.819 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8308171b-d57f-4a40-988b-d21dbd6e719f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:21.820 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed namespace which is not needed anymore#033[00m
Feb 25 07:51:21 np0005629333 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Feb 25 07:51:21 np0005629333 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007c.scope: Consumed 14.427s CPU time.
Feb 25 07:51:21 np0005629333 systemd-machined[210048]: Machine qemu-156-instance-0000007c terminated.
Feb 25 07:51:22 np0005629333 nova_compute[244014]: 2026-02-25 12:51:22.040 244018 INFO nova.virt.libvirt.driver [-] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Instance destroyed successfully.#033[00m
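
The machine scope ending followed by "Instance destroyed successfully" is the libvirt side of the teardown. Nova's driver does considerably more (graceful shutdown options, retries), but the bare libvirt-python equivalent looks like this; the domain name comes from the systemd machine scope in the log:

    # Minimal sketch of a hard domain teardown with libvirt-python.
    import libvirt

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.lookupByName('instance-0000007c')
        if dom.isActive():
            dom.destroy()   # hard power-off; the qemu machine scope exits
        # Drop the persistent definition (and its NVRAM file, if any).
        dom.undefineFlags(libvirt.VIR_DOMAIN_UNDEFINE_NVRAM)
    finally:
        conn.close()
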
Feb 25 07:51:22 np0005629333 nova_compute[244014]: 2026-02-25 12:51:22.040 244018 DEBUG nova.objects.instance [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'resources' on Instance uuid d6cf21ec-717e-41f7-9351-2214b43ce275 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:51:22 np0005629333 neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed[355780]: [NOTICE]   (355784) : haproxy version is 2.8.14-c23fe91
Feb 25 07:51:22 np0005629333 neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed[355780]: [NOTICE]   (355784) : path to executable is /usr/sbin/haproxy
Feb 25 07:51:22 np0005629333 neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed[355780]: [WARNING]  (355784) : Exiting Master process...
Feb 25 07:51:22 np0005629333 neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed[355780]: [WARNING]  (355784) : Exiting Master process...
Feb 25 07:51:22 np0005629333 neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed[355780]: [ALERT]    (355784) : Current worker (355786) exited with code 143 (Terminated)
Feb 25 07:51:22 np0005629333 neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed[355780]: [WARNING]  (355784) : All workers exited. Exiting... (0)
Feb 25 07:51:22 np0005629333 systemd[1]: libpod-a0d971f8098c852696847b3611a5eaaa47f0f2fbc87dc08c1f4644483bac5bed.scope: Deactivated successfully.
Feb 25 07:51:22 np0005629333 podman[357384]: 2026-02-25 12:51:22.057322275 +0000 UTC m=+0.128473828 container died a0d971f8098c852696847b3611a5eaaa47f0f2fbc87dc08c1f4644483bac5bed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 07:51:22 np0005629333 nova_compute[244014]: 2026-02-25 12:51:22.074 244018 DEBUG nova.virt.libvirt.vif [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:50:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-563456365',display_name='tempest-TestNetworkBasicOps-server-563456365',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-563456365',id=124,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLeMiVeu1dOqPpqcWXNCLagoBwiGcWNicZxrpOmZtGijG3qf3SaUB0NXPfDvHb9oPVwp3SJzJeTa27pr/BNkR9Koy7DJ2jBsI3Xx4qBnFg17X6/5BsC47NShmWdZikYbsA==',key_name='tempest-TestNetworkBasicOps-2016084182',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:50:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-8x0n80rf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:50:12Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=d6cf21ec-717e-41f7-9351-2214b43ce275,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "address": "fa:16:3e:8d:16:fb", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33f0e898-94", "ovs_interfaceid": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:51:22 np0005629333 nova_compute[244014]: 2026-02-25 12:51:22.075 244018 DEBUG nova.network.os_vif_util [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "address": "fa:16:3e:8d:16:fb", "network": {"id": "06dcd48c-d26b-4718-b4c7-9c2416698bed", "bridge": "br-int", "label": "tempest-network-smoke--1092141211", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap33f0e898-94", "ovs_interfaceid": "33f0e898-9477-416a-9ae6-268ef8e71ee3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:51:22 np0005629333 nova_compute[244014]: 2026-02-25 12:51:22.077 244018 DEBUG nova.network.os_vif_util [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8d:16:fb,bridge_name='br-int',has_traffic_filtering=True,id=33f0e898-9477-416a-9ae6-268ef8e71ee3,network=Network(06dcd48c-d26b-4718-b4c7-9c2416698bed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33f0e898-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:51:22 np0005629333 nova_compute[244014]: 2026-02-25 12:51:22.077 244018 DEBUG os_vif [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:16:fb,bridge_name='br-int',has_traffic_filtering=True,id=33f0e898-9477-416a-9ae6-268ef8e71ee3,network=Network(06dcd48c-d26b-4718-b4c7-9c2416698bed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33f0e898-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:51:22 np0005629333 nova_compute[244014]: 2026-02-25 12:51:22.080 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:22 np0005629333 nova_compute[244014]: 2026-02-25 12:51:22.081 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33f0e898-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:51:22 np0005629333 nova_compute[244014]: 2026-02-25 12:51:22.083 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:22 np0005629333 nova_compute[244014]: 2026-02-25 12:51:22.085 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:22 np0005629333 nova_compute[244014]: 2026-02-25 12:51:22.089 244018 INFO os_vif [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:16:fb,bridge_name='br-int',has_traffic_filtering=True,id=33f0e898-9477-416a-9ae6-268ef8e71ee3,network=Network(06dcd48c-d26b-4718-b4c7-9c2416698bed),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap33f0e898-94')#033[00m
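
The DelPortCommand transaction at 12:51:22.081 is ovsdbapp's standard port-delete API. A sketch of issuing the same command directly (the database socket path is an assumption; the port and bridge names are from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    # if_exists=True keeps the delete idempotent, matching the logged
    # DelPortCommand(..., if_exists=True).
    api.del_port('tap33f0e898-94', bridge='br-int', if_exists=True)
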
Feb 25 07:51:22 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a0d971f8098c852696847b3611a5eaaa47f0f2fbc87dc08c1f4644483bac5bed-userdata-shm.mount: Deactivated successfully.
Feb 25 07:51:22 np0005629333 systemd[1]: var-lib-containers-storage-overlay-66261fae152732234a66a0a4cefd1a9db46debf2297b95beda372509622fae68-merged.mount: Deactivated successfully.
Feb 25 07:51:22 np0005629333 podman[357384]: 2026-02-25 12:51:22.280185998 +0000 UTC m=+0.351337551 container cleanup a0d971f8098c852696847b3611a5eaaa47f0f2fbc87dc08c1f4644483bac5bed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 25 07:51:22 np0005629333 systemd[1]: libpod-conmon-a0d971f8098c852696847b3611a5eaaa47f0f2fbc87dc08c1f4644483bac5bed.scope: Deactivated successfully.
Feb 25 07:51:22 np0005629333 podman[357441]: 2026-02-25 12:51:22.415220671 +0000 UTC m=+0.109127812 container remove a0d971f8098c852696847b3611a5eaaa47f0f2fbc87dc08c1f4644483bac5bed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 07:51:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:22.422 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e06e821f-51ea-4b36-a8a7-0356456ca892]: (4, ('Wed Feb 25 12:51:21 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed (a0d971f8098c852696847b3611a5eaaa47f0f2fbc87dc08c1f4644483bac5bed)\na0d971f8098c852696847b3611a5eaaa47f0f2fbc87dc08c1f4644483bac5bed\nWed Feb 25 12:51:22 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed (a0d971f8098c852696847b3611a5eaaa47f0f2fbc87dc08c1f4644483bac5bed)\na0d971f8098c852696847b3611a5eaaa47f0f2fbc87dc08c1f4644483bac5bed\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:22.424 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4c46cf-c5ed-4f87-b291-e9c3a541ca76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:22.425 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06dcd48c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:51:22 np0005629333 kernel: tap06dcd48c-d0: left promiscuous mode
Feb 25 07:51:22 np0005629333 nova_compute[244014]: 2026-02-25 12:51:22.427 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:22 np0005629333 nova_compute[244014]: 2026-02-25 12:51:22.437 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:22 np0005629333 nova_compute[244014]: 2026-02-25 12:51:22.439 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:22.441 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4da7e76f-cb69-4ee9-ae68-1793b7fd940e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:22.461 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d7d63bd2-949f-4ecf-8ed2-e302e15c1cbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:22.463 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5c045220-8c25-4d2c-9d1c-ae2f8e9bfb47]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:22.481 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[aa1c5342-2436-41a3-a2ca-db4a78551e0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 578112, 'reachable_time': 27766, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357459, 'error': None, 'target': 'ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:22.485 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:51:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:22.485 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[da963c7d-fd62-4d82-be7c-880c1378778e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
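
The remove_netns call above runs inside the privsep daemon; neutron's privileged ip_lib wraps pyroute2 for this. A hedged sketch of the same namespace removal done directly with pyroute2 (requires root; the namespace name is from the log):

    import errno
    from pyroute2 import netns

    ns_name = 'ovnmeta-06dcd48c-d26b-4718-b4c7-9c2416698bed'
    try:
        netns.remove(ns_name)
    except OSError as e:
        if e.errno != errno.ENOENT:   # already gone is fine
            raise
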
Feb 25 07:51:22 np0005629333 systemd[1]: run-netns-ovnmeta\x2d06dcd48c\x2dd26b\x2d4718\x2db4c7\x2d9c2416698bed.mount: Deactivated successfully.
Feb 25 07:51:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 203 KiB/s rd, 578 KiB/s wr, 144 op/s
Feb 25 07:51:22 np0005629333 nova_compute[244014]: 2026-02-25 12:51:22.649 244018 INFO nova.virt.libvirt.driver [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Deleting instance files /var/lib/nova/instances/d6cf21ec-717e-41f7-9351-2214b43ce275_del#033[00m
Feb 25 07:51:22 np0005629333 nova_compute[244014]: 2026-02-25 12:51:22.650 244018 INFO nova.virt.libvirt.driver [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Deletion of /var/lib/nova/instances/d6cf21ec-717e-41f7-9351-2214b43ce275_del complete#033[00m
Feb 25 07:51:22 np0005629333 nova_compute[244014]: 2026-02-25 12:51:22.720 244018 INFO nova.compute.manager [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Took 1.11 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:51:22 np0005629333 nova_compute[244014]: 2026-02-25 12:51:22.721 244018 DEBUG oslo.service.loopingcall [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:51:22 np0005629333 nova_compute[244014]: 2026-02-25 12:51:22.722 244018 DEBUG nova.compute.manager [-] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:51:22 np0005629333 nova_compute[244014]: 2026-02-25 12:51:22.722 244018 DEBUG nova.network.neutron [-] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:51:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:51:23 np0005629333 nova_compute[244014]: 2026-02-25 12:51:23.859 244018 DEBUG nova.network.neutron [-] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:51:23 np0005629333 nova_compute[244014]: 2026-02-25 12:51:23.875 244018 INFO nova.compute.manager [-] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Took 1.15 seconds to deallocate network for instance.#033[00m
Feb 25 07:51:23 np0005629333 nova_compute[244014]: 2026-02-25 12:51:23.920 244018 DEBUG oslo_concurrency.lockutils [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:51:23 np0005629333 nova_compute[244014]: 2026-02-25 12:51:23.921 244018 DEBUG oslo_concurrency.lockutils [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:51:23 np0005629333 nova_compute[244014]: 2026-02-25 12:51:23.956 244018 DEBUG nova.compute.manager [req-6c992cc8-881f-4985-8153-c57d2d418265 req-89d57ae2-8ff2-4f97-87b6-c84f2f1ef9e9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Received event network-vif-deleted-33f0e898-9477-416a-9ae6-268ef8e71ee3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
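
The "Received event network-vif-deleted-..." line is the tail end of neutron notifying nova through the os-server-external-events API. A rough sketch of that notification as a plain REST call; the endpoint URL and token are placeholders, while the server UUID and port tag come from the log:

    import requests

    body = {'events': [{
        'server_uuid': 'd6cf21ec-717e-41f7-9351-2214b43ce275',
        'name': 'network-vif-deleted',
        'tag': '33f0e898-9477-416a-9ae6-268ef8e71ee3',
    }]}
    # Placeholder endpoint/token; neutron uses its configured nova client.
    requests.post(
        'http://nova-api.example.com/v2.1/os-server-external-events',
        json=body,
        headers={'X-Auth-Token': '<keystone-token>'})
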
Feb 25 07:51:23 np0005629333 nova_compute[244014]: 2026-02-25 12:51:23.988 244018 DEBUG oslo_concurrency.processutils [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:51:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:24.018 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:4b:a0 10.100.0.2 2001:db8:0:1:f816:3eff:fe6f:4ba0 2001:db8::f816:3eff:fe6f:4ba0'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe6f:4ba0/64 2001:db8::f816:3eff:fe6f:4ba0/64', 'neutron:device_id': 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e19ed85e-54ee-4274-951c-ade412625983', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a56192da-444f-498f-a789-c0e4cafb114e, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=591ce053-3764-4ce0-841f-6728c8fd9491) old=Port_Binding(mac=['fa:16:3e:6f:4b:a0 10.100.0.2 2001:db8::f816:3eff:fe6f:4ba0'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe6f:4ba0/64', 'neutron:device_id': 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e19ed85e-54ee-4274-951c-ade412625983', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:51:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:24.021 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 591ce053-3764-4ce0-841f-6728c8fd9491 in datapath e19ed85e-54ee-4274-951c-ade412625983 updated#033[00m
Feb 25 07:51:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:24.023 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e19ed85e-54ee-4274-951c-ade412625983, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:51:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:24.025 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[862d66ae-5108-4c7a-b26c-8b0ea25b8e7a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:51:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1042474607' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:51:24 np0005629333 nova_compute[244014]: 2026-02-25 12:51:24.532 244018 DEBUG oslo_concurrency.processutils [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:51:24 np0005629333 nova_compute[244014]: 2026-02-25 12:51:24.539 244018 DEBUG nova.compute.provider_tree [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:51:24 np0005629333 nova_compute[244014]: 2026-02-25 12:51:24.558 244018 DEBUG nova.scheduler.client.report [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:51:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2125: 305 pgs: 305 active+clean; 233 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 13 KiB/s wr, 117 op/s
Feb 25 07:51:24 np0005629333 nova_compute[244014]: 2026-02-25 12:51:24.591 244018 DEBUG oslo_concurrency.lockutils [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:51:24 np0005629333 nova_compute[244014]: 2026-02-25 12:51:24.618 244018 INFO nova.scheduler.client.report [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Deleted allocations for instance d6cf21ec-717e-41f7-9351-2214b43ce275#033[00m
Feb 25 07:51:24 np0005629333 nova_compute[244014]: 2026-02-25 12:51:24.699 244018 DEBUG oslo_concurrency.lockutils [None req-a8e1513f-031c-409f-8cb5-273d22c657e4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "d6cf21ec-717e-41f7-9351-2214b43ce275" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:51:25 np0005629333 nova_compute[244014]: 2026-02-25 12:51:25.369 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2126: 305 pgs: 305 active+clean; 210 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 80 KiB/s rd, 14 KiB/s wr, 130 op/s
Feb 25 07:51:27 np0005629333 nova_compute[244014]: 2026-02-25 12:51:27.085 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:27 np0005629333 nova_compute[244014]: 2026-02-25 12:51:27.698 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:27 np0005629333 nova_compute[244014]: 2026-02-25 12:51:27.762 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2127: 305 pgs: 305 active+clean; 153 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 87 KiB/s rd, 2.8 KiB/s wr, 139 op/s
Feb 25 07:51:28 np0005629333 nova_compute[244014]: 2026-02-25 12:51:28.617 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:51:28 np0005629333 nova_compute[244014]: 2026-02-25 12:51:28.618 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:51:28 np0005629333 nova_compute[244014]: 2026-02-25 12:51:28.634 244018 DEBUG nova.compute.manager [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:51:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:51:28 np0005629333 nova_compute[244014]: 2026-02-25 12:51:28.721 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:51:28 np0005629333 nova_compute[244014]: 2026-02-25 12:51:28.722 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:51:28 np0005629333 nova_compute[244014]: 2026-02-25 12:51:28.730 244018 DEBUG nova.virt.hardware [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:51:28 np0005629333 nova_compute[244014]: 2026-02-25 12:51:28.730 244018 INFO nova.compute.claims [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:51:28 np0005629333 nova_compute[244014]: 2026-02-25 12:51:28.824 244018 DEBUG oslo_concurrency.processutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:51:28 np0005629333 nova_compute[244014]: 2026-02-25 12:51:28.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:51:28 np0005629333 nova_compute[244014]: 2026-02-25 12:51:28.898 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:51:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:51:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3001207695' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.377 244018 DEBUG oslo_concurrency.processutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.383 244018 DEBUG nova.compute.provider_tree [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.396 244018 DEBUG nova.scheduler.client.report [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.415 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.415 244018 DEBUG nova.compute.manager [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.418 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.418 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.419 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.419 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.509 244018 DEBUG nova.compute.manager [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.510 244018 DEBUG nova.network.neutron [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.530 244018 INFO nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.550 244018 DEBUG nova.compute.manager [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.663 244018 DEBUG nova.compute.manager [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.665 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.666 244018 INFO nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Creating image(s)#033[00m
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.700 244018 DEBUG nova.storage.rbd_utils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.735 244018 DEBUG nova.storage.rbd_utils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.773 244018 DEBUG nova.storage.rbd_utils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
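
The repeated "rbd image ... does not exist" lines come from nova's rbd_utils probing for the disk before creating it: it tries to open the image read-only and treats ImageNotFound as "does not exist". A sketch of that check with the python rbd bindings (nova's version has more plumbing; names are from the log):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        try:
            with rbd.Image(ioctx,
                           'dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk',
                           read_only=True):
                exists = True
        except rbd.ImageNotFound:
            exists = False
    finally:
        ioctx.close()
        cluster.shutdown()
    print('image exists:', exists)
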
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.779 244018 DEBUG oslo_concurrency.processutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.819 244018 DEBUG nova.policy [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.847 244018 DEBUG oslo_concurrency.processutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
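
The "python3 -m oslo_concurrency.prlimit --as=... --cpu=30" wrapper in that command line is what processutils.execute emits when given a ProcessLimits object, capping the child's address space and CPU time. An equivalent call, with the limits and arguments from the log:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        # --as=1073741824 (1 GiB address space), --cpu=30 (seconds)
        prlimit=processutils.ProcessLimits(address_space=1024 ** 3,
                                           cpu_time=30))
    print(json.loads(out)['format'])
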
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.848 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.849 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.850 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.884 244018 DEBUG nova.storage.rbd_utils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.888 244018 DEBUG oslo_concurrency.processutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
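The two oslo_concurrency.processutils entries above show nova bounding qemu-img with the prlimit wrapper (--as caps address space in bytes, --cpu caps CPU seconds) so that probing a malformed image cannot wedge the compute service. A minimal sketch of the same pattern, assuming oslo.concurrency is installed and using an illustrative image path rather than the cached base image from this log:

    # Sketch: bounding an external command the way the log lines above do.
    # ProcessLimits maps to the prlimit flags in the log:
    #   address_space -> --as (bytes), cpu_time -> --cpu (seconds).
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824,
                                        cpu_time=30)
    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info', '/tmp/some-image.qcow2',  # illustrative path
        '--force-share', '--output=json',
        prlimit=limits)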
Feb 25 07:51:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:51:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1097842149' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
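The mon audit entry records a df command dispatched for entity client.openstack; this is the `ceph df --format=json` call whose result nova's RBD driver uses to compute pool capacity. A minimal sketch of consuming that output, assuming the same cephx user and conf path as the log and that a `vms` pool exists:

    # Sketch: reading pool usage the way the "ceph df --format=json" calls
    # in this log are consumed.
    import json
    import subprocess

    raw = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(raw)
    vms = next(p for p in stats['pools'] if p['name'] == 'vms')
    print(vms['stats']['bytes_used'], vms['stats']['max_avail'])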
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.965 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.987 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023874.9873362, 968294d4-db1f-4cdc-822b-f7d4e382ac90 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:51:29 np0005629333 nova_compute[244014]: 2026-02-25 12:51:29.989 244018 INFO nova.compute.manager [-] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] VM Stopped (Lifecycle Event)
Feb 25 07:51:30 np0005629333 nova_compute[244014]: 2026-02-25 12:51:30.026 244018 DEBUG nova.compute.manager [None req-b89ab770-f981-4c63-a899-0c7203dd2a8b - - - - - -] [instance: 968294d4-db1f-4cdc-822b-f7d4e382ac90] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:51:30 np0005629333 nova_compute[244014]: 2026-02-25 12:51:30.171 244018 DEBUG oslo_concurrency.processutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:51:30 np0005629333 nova_compute[244014]: 2026-02-25 12:51:30.259 244018 DEBUG nova.storage.rbd_utils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 07:51:30 np0005629333 nova_compute[244014]: 2026-02-25 12:51:30.373 244018 DEBUG nova.objects.instance [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid dd7feae9-9d2a-41b6-9277-cbf51a2c8f23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:51:30 np0005629333 nova_compute[244014]: 2026-02-25 12:51:30.376 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:51:30 np0005629333 nova_compute[244014]: 2026-02-25 12:51:30.390 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:51:30 np0005629333 nova_compute[244014]: 2026-02-25 12:51:30.391 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Ensure instance console log exists: /var/lib/nova/instances/dd7feae9-9d2a-41b6-9277-cbf51a2c8f23/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:51:30 np0005629333 nova_compute[244014]: 2026-02-25 12:51:30.392 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:51:30 np0005629333 nova_compute[244014]: 2026-02-25 12:51:30.392 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:51:30 np0005629333 nova_compute[244014]: 2026-02-25 12:51:30.393 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:51:30 np0005629333 nova_compute[244014]: 2026-02-25 12:51:30.430 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:51:30 np0005629333 nova_compute[244014]: 2026-02-25 12:51:30.431 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3634MB free_disk=59.9873388139531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 07:51:30 np0005629333 nova_compute[244014]: 2026-02-25 12:51:30.432 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:51:30 np0005629333 nova_compute[244014]: 2026-02-25 12:51:30.432 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:51:30 np0005629333 nova_compute[244014]: 2026-02-25 12:51:30.516 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance dd7feae9-9d2a-41b6-9277-cbf51a2c8f23 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 07:51:30 np0005629333 nova_compute[244014]: 2026-02-25 12:51:30.517 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 07:51:30 np0005629333 nova_compute[244014]: 2026-02-25 12:51:30.517 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 07:51:30 np0005629333 nova_compute[244014]: 2026-02-25 12:51:30.567 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
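The paired Acquiring/acquired/"released" lines around _update_available_resource come from oslo.concurrency's lock decorator: the resource tracker serializes every inventory update under the "compute_resources" lock, and the waited/held timings in the log are emitted by the decorator's inner wrapper. A sketch of the pattern, with illustrative class and method names rather than nova's actual code:

    # Sketch: the "compute_resources" lock pattern seen in the lockutils
    # lines above, using oslo.concurrency's synchronized decorator.
    from oslo_concurrency import lockutils

    class ResourceTrackerLike(object):
        @lockutils.synchronized('compute_resources')
        def update_available_resource(self):
            # Body runs with the named lock held; acquire/release and the
            # waited/held durations are logged by the decorator.
            pass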
Feb 25 07:51:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 305 active+clean; 153 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 71 KiB/s rd, 1.2 KiB/s wr, 116 op/s
Feb 25 07:51:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:51:31
Feb 25 07:51:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:51:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:51:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['volumes', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'vms', 'backups', 'default.rgw.log', '.mgr', 'default.rgw.control', 'default.rgw.meta']
Feb 25 07:51:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
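The mgr balancer evaluated upmap optimization across the listed pools and prepared 0/10 changes, i.e. PG placement is already even. A sketch of checking that state from a client node, assuming the standard ceph CLI and that `balancer status` returns JSON with `active` and `mode` fields:

    # Sketch: inspecting the balancer state evaluated in the mgr lines above.
    import json
    import subprocess

    status = json.loads(subprocess.check_output(
        ['ceph', 'balancer', 'status', '--format=json']))
    print(status['active'], status['mode'])   # expect: True, 'upmap'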
Feb 25 07:51:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:51:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/598244499' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:51:31 np0005629333 nova_compute[244014]: 2026-02-25 12:51:31.135 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:51:31 np0005629333 nova_compute[244014]: 2026-02-25 12:51:31.142 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:51:31 np0005629333 nova_compute[244014]: 2026-02-25 12:51:31.163 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:51:31 np0005629333 nova_compute[244014]: 2026-02-25 12:51:31.189 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 07:51:31 np0005629333 nova_compute[244014]: 2026-02-25 12:51:31.190 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
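The inventory dict logged at 12:51:31.163 is what placement uses to size this provider: schedulable capacity per resource class is (total - reserved) * allocation_ratio. A worked check with the logged numbers:

    # Worked example: effective capacity from the inventory logged above,
    # using placement's formula (total - reserved) * allocation_ratio.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2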
Feb 25 07:51:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:51:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:51:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:51:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:51:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:51:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:51:31 np0005629333 nova_compute[244014]: 2026-02-25 12:51:31.741 244018 DEBUG nova.network.neutron [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Successfully created port: 9981394d-e733-404e-85a5-e2e51877881a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:51:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:51:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:51:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:51:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:51:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:51:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:51:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:51:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:51:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:51:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
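The rbd_support handlers above reload their per-pool trash-purge and mirror-snapshot schedules; with no schedules defined the loads are no-ops. A sketch of listing what is configured, assuming the standard rbd CLI is available on the host:

    # Sketch: listing the schedules the rbd_support handlers reload.
    # Pool names match the load_schedules lines above.
    import subprocess

    for pool in ('vms', 'volumes', 'backups', 'images'):
        subprocess.run(['rbd', 'trash', 'purge', 'schedule', 'ls',
                        '--pool', pool, '--recursive'], check=False)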
Feb 25 07:51:32 np0005629333 nova_compute[244014]: 2026-02-25 12:51:32.087 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:51:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:51:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:51:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:51:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:51:32 np0005629333 nova_compute[244014]: 2026-02-25 12:51:32.190 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:51:32 np0005629333 nova_compute[244014]: 2026-02-25 12:51:32.190 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:51:32 np0005629333 nova_compute[244014]: 2026-02-25 12:51:32.191 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 07:51:32 np0005629333 nova_compute[244014]: 2026-02-25 12:51:32.258 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 07:51:32 np0005629333 nova_compute[244014]: 2026-02-25 12:51:32.259 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
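The run_periodic_tasks entries come from oslo.service, which scans a manager class for decorated methods and runs each on its configured interval. A minimal sketch of declaring such a task, with illustrative names (a real subclass also passes its ConfigOpts to PeriodicTasks.__init__):

    # Sketch: how periodic tasks like the ComputeManager ones above are
    # declared via oslo.service. Names here are illustrative.
    from oslo_service import periodic_task

    class ManagerLike(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _heal_info_cache(self, context):
            # Invoked by run_periodic_tasks every ~60s.
            pass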
Feb 25 07:51:32 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:51:32 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:51:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 1.8 MiB/s wr, 143 op/s
Feb 25 07:51:32 np0005629333 nova_compute[244014]: 2026-02-25 12:51:32.891 244018 DEBUG nova.network.neutron [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Successfully updated port: 9981394d-e733-404e-85a5-e2e51877881a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:51:32 np0005629333 nova_compute[244014]: 2026-02-25 12:51:32.949 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:51:32 np0005629333 nova_compute[244014]: 2026-02-25 12:51:32.949 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:51:32 np0005629333 nova_compute[244014]: 2026-02-25 12:51:32.950 244018 DEBUG nova.network.neutron [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:51:33 np0005629333 nova_compute[244014]: 2026-02-25 12:51:33.030 244018 DEBUG nova.compute.manager [req-929674b7-671c-439d-b4ca-db1fec0279a0 req-d032811b-bc72-4806-8bd0-5dba65b00221 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Received event network-changed-9981394d-e733-404e-85a5-e2e51877881a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:51:33 np0005629333 nova_compute[244014]: 2026-02-25 12:51:33.031 244018 DEBUG nova.compute.manager [req-929674b7-671c-439d-b4ca-db1fec0279a0 req-d032811b-bc72-4806-8bd0-5dba65b00221 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Refreshing instance network info cache due to event network-changed-9981394d-e733-404e-85a5-e2e51877881a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:51:33 np0005629333 nova_compute[244014]: 2026-02-25 12:51:33.031 244018 DEBUG oslo_concurrency.lockutils [req-929674b7-671c-439d-b4ca-db1fec0279a0 req-d032811b-bc72-4806-8bd0-5dba65b00221 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:51:33 np0005629333 nova_compute[244014]: 2026-02-25 12:51:33.191 244018 DEBUG nova.network.neutron [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
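Both the build request (req-1f6d81c7) and the external-event handler (req-929674b7) serialize on the same "refresh_cache-<uuid>" lock, which is why the second Acquiring at 12:51:33.031 does not log Acquired until the first holder releases the lock at 12:51:34.568 further below. A sketch of that pattern using oslo.concurrency's context manager; the refresh helper is hypothetical:

    # Sketch: the "refresh_cache-<uuid>" serialization seen above. The UUID
    # is the instance from this log; refresh_instance_network_cache() is a
    # hypothetical stand-in for the cache rebuild.
    from oslo_concurrency import lockutils

    uuid = 'dd7feae9-9d2a-41b6-9277-cbf51a2c8f23'
    with lockutils.lock('refresh_cache-%s' % uuid):
        refresh_instance_network_cache(uuid)  # hypothetical helper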
Feb 25 07:51:33 np0005629333 podman[357931]: 2026-02-25 12:51:33.217894682 +0000 UTC m=+0.057835134 container create 1e84d70d4aa57e7cf73ea4bb0c1a5d52cda2ee2ef1ee879c24b83831286e9df9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_dewdney, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 07:51:33 np0005629333 systemd[1]: Started libpod-conmon-1e84d70d4aa57e7cf73ea4bb0c1a5d52cda2ee2ef1ee879c24b83831286e9df9.scope.
Feb 25 07:51:33 np0005629333 podman[357931]: 2026-02-25 12:51:33.19305844 +0000 UTC m=+0.032998952 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:51:33 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:51:33 np0005629333 podman[357931]: 2026-02-25 12:51:33.306976437 +0000 UTC m=+0.146916949 container init 1e84d70d4aa57e7cf73ea4bb0c1a5d52cda2ee2ef1ee879c24b83831286e9df9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_dewdney, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 07:51:33 np0005629333 podman[357931]: 2026-02-25 12:51:33.31629055 +0000 UTC m=+0.156230972 container start 1e84d70d4aa57e7cf73ea4bb0c1a5d52cda2ee2ef1ee879c24b83831286e9df9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_dewdney, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 07:51:33 np0005629333 podman[357931]: 2026-02-25 12:51:33.319865211 +0000 UTC m=+0.159805723 container attach 1e84d70d4aa57e7cf73ea4bb0c1a5d52cda2ee2ef1ee879c24b83831286e9df9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 07:51:33 np0005629333 eager_dewdney[357947]: 167 167
Feb 25 07:51:33 np0005629333 systemd[1]: libpod-1e84d70d4aa57e7cf73ea4bb0c1a5d52cda2ee2ef1ee879c24b83831286e9df9.scope: Deactivated successfully.
Feb 25 07:51:33 np0005629333 podman[357931]: 2026-02-25 12:51:33.320658323 +0000 UTC m=+0.160598785 container died 1e84d70d4aa57e7cf73ea4bb0c1a5d52cda2ee2ef1ee879c24b83831286e9df9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 25 07:51:33 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7a629305b72bdccd98a16f5220eddbdb67af4c8565cf8c6e347fd33f6304c6f5-merged.mount: Deactivated successfully.
Feb 25 07:51:33 np0005629333 podman[357931]: 2026-02-25 12:51:33.364839341 +0000 UTC m=+0.204779803 container remove 1e84d70d4aa57e7cf73ea4bb0c1a5d52cda2ee2ef1ee879c24b83831286e9df9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:51:33 np0005629333 systemd[1]: libpod-conmon-1e84d70d4aa57e7cf73ea4bb0c1a5d52cda2ee2ef1ee879c24b83831286e9df9.scope: Deactivated successfully.
Feb 25 07:51:33 np0005629333 podman[357971]: 2026-02-25 12:51:33.547214011 +0000 UTC m=+0.060628443 container create 346c5c06366a6a8f2377ed4230412d41d1b52a4b51f681d16c97d73bae6989c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:51:33 np0005629333 systemd[1]: Started libpod-conmon-346c5c06366a6a8f2377ed4230412d41d1b52a4b51f681d16c97d73bae6989c7.scope.
Feb 25 07:51:33 np0005629333 podman[357971]: 2026-02-25 12:51:33.520668001 +0000 UTC m=+0.034082453 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:51:33 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:51:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41cd9a8fce12fe30854b8389d46ab8e703dfdf6919bd42986ca327d0194fc242/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:51:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41cd9a8fce12fe30854b8389d46ab8e703dfdf6919bd42986ca327d0194fc242/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:51:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41cd9a8fce12fe30854b8389d46ab8e703dfdf6919bd42986ca327d0194fc242/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:51:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41cd9a8fce12fe30854b8389d46ab8e703dfdf6919bd42986ca327d0194fc242/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:51:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:51:33 np0005629333 podman[357971]: 2026-02-25 12:51:33.657843834 +0000 UTC m=+0.171258316 container init 346c5c06366a6a8f2377ed4230412d41d1b52a4b51f681d16c97d73bae6989c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:51:33 np0005629333 podman[357971]: 2026-02-25 12:51:33.670325937 +0000 UTC m=+0.183740379 container start 346c5c06366a6a8f2377ed4230412d41d1b52a4b51f681d16c97d73bae6989c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_brown, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:51:33 np0005629333 podman[357971]: 2026-02-25 12:51:33.674067963 +0000 UTC m=+0.187482405 container attach 346c5c06366a6a8f2377ed4230412d41d1b52a4b51f681d16c97d73bae6989c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 07:51:34 np0005629333 confident_brown[357988]: [
Feb 25 07:51:34 np0005629333 confident_brown[357988]:    {
Feb 25 07:51:34 np0005629333 confident_brown[357988]:        "available": false,
Feb 25 07:51:34 np0005629333 confident_brown[357988]:        "being_replaced": false,
Feb 25 07:51:34 np0005629333 confident_brown[357988]:        "ceph_device_lvm": false,
Feb 25 07:51:34 np0005629333 confident_brown[357988]:        "device_id": "QEMU_DVD-ROM_QM00001",
Feb 25 07:51:34 np0005629333 confident_brown[357988]:        "lsm_data": {},
Feb 25 07:51:34 np0005629333 confident_brown[357988]:        "lvs": [],
Feb 25 07:51:34 np0005629333 confident_brown[357988]:        "path": "/dev/sr0",
Feb 25 07:51:34 np0005629333 confident_brown[357988]:        "rejected_reasons": [
Feb 25 07:51:34 np0005629333 confident_brown[357988]:            "Has a FileSystem",
Feb 25 07:51:34 np0005629333 confident_brown[357988]:            "Insufficient space (<5GB)"
Feb 25 07:51:34 np0005629333 confident_brown[357988]:        ],
Feb 25 07:51:34 np0005629333 confident_brown[357988]:        "sys_api": {
Feb 25 07:51:34 np0005629333 confident_brown[357988]:            "actuators": null,
Feb 25 07:51:34 np0005629333 confident_brown[357988]:            "device_nodes": [
Feb 25 07:51:34 np0005629333 confident_brown[357988]:                "sr0"
Feb 25 07:51:34 np0005629333 confident_brown[357988]:            ],
Feb 25 07:51:34 np0005629333 confident_brown[357988]:            "devname": "sr0",
Feb 25 07:51:34 np0005629333 confident_brown[357988]:            "human_readable_size": "482.00 KB",
Feb 25 07:51:34 np0005629333 confident_brown[357988]:            "id_bus": "ata",
Feb 25 07:51:34 np0005629333 confident_brown[357988]:            "model": "QEMU DVD-ROM",
Feb 25 07:51:34 np0005629333 confident_brown[357988]:            "nr_requests": "2",
Feb 25 07:51:34 np0005629333 confident_brown[357988]:            "parent": "/dev/sr0",
Feb 25 07:51:34 np0005629333 confident_brown[357988]:            "partitions": {},
Feb 25 07:51:34 np0005629333 confident_brown[357988]:            "path": "/dev/sr0",
Feb 25 07:51:34 np0005629333 confident_brown[357988]:            "removable": "1",
Feb 25 07:51:34 np0005629333 confident_brown[357988]:            "rev": "2.5+",
Feb 25 07:51:34 np0005629333 confident_brown[357988]:            "ro": "0",
Feb 25 07:51:34 np0005629333 confident_brown[357988]:            "rotational": "1",
Feb 25 07:51:34 np0005629333 confident_brown[357988]:            "sas_address": "",
Feb 25 07:51:34 np0005629333 confident_brown[357988]:            "sas_device_handle": "",
Feb 25 07:51:34 np0005629333 confident_brown[357988]:            "scheduler_mode": "mq-deadline",
Feb 25 07:51:34 np0005629333 confident_brown[357988]:            "sectors": 0,
Feb 25 07:51:34 np0005629333 confident_brown[357988]:            "sectorsize": "2048",
Feb 25 07:51:34 np0005629333 confident_brown[357988]:            "size": 493568.0,
Feb 25 07:51:34 np0005629333 confident_brown[357988]:            "support_discard": "2048",
Feb 25 07:51:34 np0005629333 confident_brown[357988]:            "type": "disk",
Feb 25 07:51:34 np0005629333 confident_brown[357988]:            "vendor": "QEMU"
Feb 25 07:51:34 np0005629333 confident_brown[357988]:        }
Feb 25 07:51:34 np0005629333 confident_brown[357988]:    }
Feb 25 07:51:34 np0005629333 confident_brown[357988]: ]
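The JSON array printed by the one-shot container above is a ceph-volume-style device inventory, which cephadm then stores via the config-key set commands that follow. Filtering it for usable disks is straightforward; a sketch, with 'inventory.json' standing in for the captured output:

    # Sketch: filtering the device inventory printed above. On this host
    # nothing is usable: sr0 has a filesystem and is under 5GB.
    import json

    with open('inventory.json') as f:
        devices = json.load(f)
    usable = [d['path'] for d in devices
              if d['available'] and not d['rejected_reasons']]
    print(usable)   # [] for the inventory shown in this log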
Feb 25 07:51:34 np0005629333 systemd[1]: libpod-346c5c06366a6a8f2377ed4230412d41d1b52a4b51f681d16c97d73bae6989c7.scope: Deactivated successfully.
Feb 25 07:51:34 np0005629333 podman[357971]: 2026-02-25 12:51:34.273334684 +0000 UTC m=+0.786749136 container died 346c5c06366a6a8f2377ed4230412d41d1b52a4b51f681d16c97d73bae6989c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 07:51:34 np0005629333 systemd[1]: var-lib-containers-storage-overlay-41cd9a8fce12fe30854b8389d46ab8e703dfdf6919bd42986ca327d0194fc242-merged.mount: Deactivated successfully.
Feb 25 07:51:34 np0005629333 podman[357971]: 2026-02-25 12:51:34.323009017 +0000 UTC m=+0.836423459 container remove 346c5c06366a6a8f2377ed4230412d41d1b52a4b51f681d16c97d73bae6989c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_brown, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Feb 25 07:51:34 np0005629333 systemd[1]: libpod-conmon-346c5c06366a6a8f2377ed4230412d41d1b52a4b51f681d16c97d73bae6989c7.scope: Deactivated successfully.
Feb 25 07:51:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:51:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:51:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:51:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:51:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:51:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:51:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:51:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:51:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:51:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:51:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:51:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:51:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:51:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:51:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:51:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
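Each handle_command/audit pair above is a monitor command arriving over librados, the same path the ceph CLI uses. A sketch of issuing one directly from python-rados, reusing the log's client.openstack identity (the 'stats' key is ceph df's top-level cluster summary):

    # Sketch: sending the same kind of mon command seen in the audit trail,
    # via python-rados.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    ret, out, errs = cluster.mon_command(
        json.dumps({'prefix': 'df', 'format': 'json'}), b'')
    print(ret, json.loads(out)['stats']['total_bytes'])
    cluster.shutdown()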
Feb 25 07:51:34 np0005629333 nova_compute[244014]: 2026-02-25 12:51:34.511 244018 DEBUG nova.network.neutron [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Updating instance_info_cache with network_info: [{"id": "9981394d-e733-404e-85a5-e2e51877881a", "address": "fa:16:3e:2b:92:99", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9981394d-e7", "ovs_interfaceid": "9981394d-e733-404e-85a5-e2e51877881a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:51:34 np0005629333 nova_compute[244014]: 2026-02-25 12:51:34.568 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:51:34 np0005629333 nova_compute[244014]: 2026-02-25 12:51:34.568 244018 DEBUG nova.compute.manager [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Instance network_info: |[{"id": "9981394d-e733-404e-85a5-e2e51877881a", "address": "fa:16:3e:2b:92:99", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9981394d-e7", "ovs_interfaceid": "9981394d-e733-404e-85a5-e2e51877881a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 07:51:34 np0005629333 nova_compute[244014]: 2026-02-25 12:51:34.569 244018 DEBUG oslo_concurrency.lockutils [req-929674b7-671c-439d-b4ca-db1fec0279a0 req-d032811b-bc72-4806-8bd0-5dba65b00221 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:51:34 np0005629333 nova_compute[244014]: 2026-02-25 12:51:34.569 244018 DEBUG nova.network.neutron [req-929674b7-671c-439d-b4ca-db1fec0279a0 req-d032811b-bc72-4806-8bd0-5dba65b00221 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Refreshing network info cache for port 9981394d-e733-404e-85a5-e2e51877881a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:51:34 np0005629333 nova_compute[244014]: 2026-02-25 12:51:34.575 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Start _get_guest_xml network_info=[{"id": "9981394d-e733-404e-85a5-e2e51877881a", "address": "fa:16:3e:2b:92:99", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9981394d-e7", "ovs_interfaceid": "9981394d-e733-404e-85a5-e2e51877881a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 07:51:34 np0005629333 nova_compute[244014]: 2026-02-25 12:51:34.582 244018 WARNING nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:51:34 np0005629333 nova_compute[244014]: 2026-02-25 12:51:34.588 244018 DEBUG nova.virt.libvirt.host [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:51:34 np0005629333 nova_compute[244014]: 2026-02-25 12:51:34.589 244018 DEBUG nova.virt.libvirt.host [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
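At 12:51:34.588-594 the driver probes for a CPU controller first through cgroups v1 (absent here) and then v2 (present), which decides whether CPU shares/quota can be applied to guests. The probe reduces to two filesystem checks; a sketch, assuming a systemd-style /sys/fs/cgroup layout:

    # Sketch: the cgroup CPU-controller probe logged above, reduced to its
    # essentials: v1 exposes a mounted 'cpu' hierarchy, v2 lists controllers
    # in a single file (absent on pure-v1 hosts, hence the guard).
    import os

    v1 = os.path.isdir('/sys/fs/cgroup/cpu')
    v2 = False
    path = '/sys/fs/cgroup/cgroup.controllers'
    if os.path.exists(path):
        with open(path) as f:
            v2 = 'cpu' in f.read().split()
    print('cgroups v1 cpu:', v1, '| cgroups v2 cpu:', v2)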
Feb 25 07:51:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 25 07:51:34 np0005629333 nova_compute[244014]: 2026-02-25 12:51:34.593 244018 DEBUG nova.virt.libvirt.host [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:51:34 np0005629333 nova_compute[244014]: 2026-02-25 12:51:34.594 244018 DEBUG nova.virt.libvirt.host [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:51:34 np0005629333 nova_compute[244014]: 2026-02-25 12:51:34.595 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:51:34 np0005629333 nova_compute[244014]: 2026-02-25 12:51:34.595 244018 DEBUG nova.virt.hardware [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 07:51:34 np0005629333 nova_compute[244014]: 2026-02-25 12:51:34.596 244018 DEBUG nova.virt.hardware [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 07:51:34 np0005629333 nova_compute[244014]: 2026-02-25 12:51:34.597 244018 DEBUG nova.virt.hardware [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 07:51:34 np0005629333 nova_compute[244014]: 2026-02-25 12:51:34.597 244018 DEBUG nova.virt.hardware [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 07:51:34 np0005629333 nova_compute[244014]: 2026-02-25 12:51:34.597 244018 DEBUG nova.virt.hardware [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 07:51:34 np0005629333 nova_compute[244014]: 2026-02-25 12:51:34.598 244018 DEBUG nova.virt.hardware [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 07:51:34 np0005629333 nova_compute[244014]: 2026-02-25 12:51:34.598 244018 DEBUG nova.virt.hardware [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 07:51:34 np0005629333 nova_compute[244014]: 2026-02-25 12:51:34.599 244018 DEBUG nova.virt.hardware [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 07:51:34 np0005629333 nova_compute[244014]: 2026-02-25 12:51:34.599 244018 DEBUG nova.virt.hardware [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 07:51:34 np0005629333 nova_compute[244014]: 2026-02-25 12:51:34.599 244018 DEBUG nova.virt.hardware [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 07:51:34 np0005629333 nova_compute[244014]: 2026-02-25 12:51:34.600 244018 DEBUG nova.virt.hardware [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
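The nova.virt.hardware lines above are one topology search: neither flavor nor image set limits or preferences (all 0:0:0), so the caps default to 65536 per dimension, and for a single vCPU the only valid factorization is sockets:cores:threads = 1:1:1. A rough sketch of that enumeration, as a plain divisor search rather than nova's actual _get_possible_cpu_topologies:

    def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
        # Enumerate (sockets, cores, threads) triples whose product is exactly
        # the vCPU count, capped by the per-dimension limits from the log.
        for sockets in range(1, min(max_sockets, vcpus) + 1):
            if vcpus % sockets:
                continue
            per_socket = vcpus // sockets
            for cores in range(1, min(max_cores, per_socket) + 1):
                if per_socket % cores:
                    continue
                threads = per_socket // cores
                if threads <= max_threads:
                    yield (sockets, cores, threads)

    # 1 vCPU under the default 65536 caps -> the single topology (1, 1, 1)
    print(list(possible_topologies(1, 65536, 65536, 65536)))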
Feb 25 07:51:34 np0005629333 nova_compute[244014]: 2026-02-25 12:51:34.605 244018 DEBUG oslo_concurrency.processutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:51:34 np0005629333 nova_compute[244014]: 2026-02-25 12:51:34.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:51:34 np0005629333 podman[358800]: 2026-02-25 12:51:34.94962981 +0000 UTC m=+0.063160384 container create 9e547131f7ab0043a60fc00abc23e2414dc283e2c9c87a29dda15b8c6cb36503 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brown, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 07:51:35 np0005629333 podman[358800]: 2026-02-25 12:51:34.926368694 +0000 UTC m=+0.039899338 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:51:35 np0005629333 systemd[1]: Started libpod-conmon-9e547131f7ab0043a60fc00abc23e2414dc283e2c9c87a29dda15b8c6cb36503.scope.
Feb 25 07:51:35 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:51:35 np0005629333 podman[358800]: 2026-02-25 12:51:35.076516403 +0000 UTC m=+0.190047007 container init 9e547131f7ab0043a60fc00abc23e2414dc283e2c9c87a29dda15b8c6cb36503 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brown, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Feb 25 07:51:35 np0005629333 podman[358814]: 2026-02-25 12:51:35.080558598 +0000 UTC m=+0.079992090 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 07:51:35 np0005629333 podman[358800]: 2026-02-25 12:51:35.084131418 +0000 UTC m=+0.197661972 container start 9e547131f7ab0043a60fc00abc23e2414dc283e2c9c87a29dda15b8c6cb36503 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brown, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 07:51:35 np0005629333 bold_brown[358832]: 167 167
Feb 25 07:51:35 np0005629333 systemd[1]: libpod-9e547131f7ab0043a60fc00abc23e2414dc283e2c9c87a29dda15b8c6cb36503.scope: Deactivated successfully.
Feb 25 07:51:35 np0005629333 podman[358800]: 2026-02-25 12:51:35.092039882 +0000 UTC m=+0.205570506 container attach 9e547131f7ab0043a60fc00abc23e2414dc283e2c9c87a29dda15b8c6cb36503 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brown, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:51:35 np0005629333 podman[358800]: 2026-02-25 12:51:35.092782533 +0000 UTC m=+0.206313097 container died 9e547131f7ab0043a60fc00abc23e2414dc283e2c9c87a29dda15b8c6cb36503 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brown, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:51:35 np0005629333 podman[358817]: 2026-02-25 12:51:35.111041028 +0000 UTC m=+0.107985160 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 07:51:35 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7cb9f0771bda27cd0e290ca7fdc6fe5c32d01ed42287f57da7ab65de40f09e00-merged.mount: Deactivated successfully.
Feb 25 07:51:35 np0005629333 podman[358800]: 2026-02-25 12:51:35.136792405 +0000 UTC m=+0.250322969 container remove 9e547131f7ab0043a60fc00abc23e2414dc283e2c9c87a29dda15b8c6cb36503 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:51:35 np0005629333 systemd[1]: libpod-conmon-9e547131f7ab0043a60fc00abc23e2414dc283e2c9c87a29dda15b8c6cb36503.scope: Deactivated successfully.
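The bold_brown container above is one short-lived cephadm helper: create through remove land within roughly 200 ms, and its only output is "167 167", the uid/gid of the ceph user baked into Ceph container images. A hedged reconstruction of that probe (cephadm's real invocation may differ in detail):

    import subprocess

    # Run a throwaway container from the same image and print the owner of
    # /var/lib/ceph; 167:167 is the fixed ceph uid/gid in these images.
    image = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    out = subprocess.check_output(
        ["podman", "run", "--rm", image, "stat", "-c", "%u %g", "/var/lib/ceph"])
    print(out.decode().strip())  # -> "167 167"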
Feb 25 07:51:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:51:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3982017398' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.240 244018 DEBUG oslo_concurrency.processutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
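That 0.6 s subprocess is nova's rbd backend refreshing the monitor map before it touches any RBD image, and it corresponds to the handle_command/audit lines the mon logged just above. Reproducing it outside nova is straightforward; field names here follow the monmap JSON as current Ceph releases emit it:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    monmap = json.loads(out)
    # On this cluster: a single mon at 192.168.122.100:6789
    for mon in monmap["mons"]:
        print(mon["name"], mon["public_addr"])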
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.262 244018 DEBUG nova.storage.rbd_utils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.266 244018 DEBUG oslo_concurrency.processutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:51:35 np0005629333 podman[358887]: 2026-02-25 12:51:35.322148679 +0000 UTC m=+0.073876967 container create ace8c7f8326a29a5dbf5734e6941e365c7d175c2887ccd4359d9d65802232cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_northcutt, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:51:35 np0005629333 systemd[1]: Started libpod-conmon-ace8c7f8326a29a5dbf5734e6941e365c7d175c2887ccd4359d9d65802232cfa.scope.
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.372 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:51:35 np0005629333 podman[358887]: 2026-02-25 12:51:35.287957644 +0000 UTC m=+0.039685992 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:51:35 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:51:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12487860bfaab00c97082ab1626c4919aea90af305ae2e42619ec465782cc8b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:51:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12487860bfaab00c97082ab1626c4919aea90af305ae2e42619ec465782cc8b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:51:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12487860bfaab00c97082ab1626c4919aea90af305ae2e42619ec465782cc8b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:51:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12487860bfaab00c97082ab1626c4919aea90af305ae2e42619ec465782cc8b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:51:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12487860bfaab00c97082ab1626c4919aea90af305ae2e42619ec465782cc8b9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:51:35 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:51:35 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:51:35 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:51:35 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:51:35 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:51:35 np0005629333 podman[358887]: 2026-02-25 12:51:35.429767288 +0000 UTC m=+0.181495576 container init ace8c7f8326a29a5dbf5734e6941e365c7d175c2887ccd4359d9d65802232cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_northcutt, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:51:35 np0005629333 podman[358887]: 2026-02-25 12:51:35.447276042 +0000 UTC m=+0.199004330 container start ace8c7f8326a29a5dbf5734e6941e365c7d175c2887ccd4359d9d65802232cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:51:35 np0005629333 podman[358887]: 2026-02-25 12:51:35.4553196 +0000 UTC m=+0.207047888 container attach ace8c7f8326a29a5dbf5734e6941e365c7d175c2887ccd4359d9d65802232cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:51:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:51:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4287189214' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:51:35 np0005629333 practical_northcutt[358922]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:51:35 np0005629333 practical_northcutt[358922]: --> All data devices are unavailable
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.887 244018 DEBUG oslo_concurrency.processutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.889 244018 DEBUG nova.virt.libvirt.vif [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:51:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1033485245',display_name='tempest-TestGettingAddress-server-1033485245',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1033485245',id=126,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJC+upPNElBVnEP9sRAofuJ3WgjrN7NUHIjnb2xNcja3SXT0AJcyip72U8J6Hd3gQBhmqSCIrT6CHz7iNR5W3wt+U6axciPl3ik+0GTg6pCZIfse39ZA4oXcSHjtD+Yg8A==',key_name='tempest-TestGettingAddress-1847809733',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-iqq9ar0s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:51:29Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=dd7feae9-9d2a-41b6-9277-cbf51a2c8f23,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9981394d-e733-404e-85a5-e2e51877881a", "address": "fa:16:3e:2b:92:99", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9981394d-e7", "ovs_interfaceid": "9981394d-e733-404e-85a5-e2e51877881a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.889 244018 DEBUG nova.network.os_vif_util [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "9981394d-e733-404e-85a5-e2e51877881a", "address": "fa:16:3e:2b:92:99", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9981394d-e7", "ovs_interfaceid": "9981394d-e733-404e-85a5-e2e51877881a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.890 244018 DEBUG nova.network.os_vif_util [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:92:99,bridge_name='br-int',has_traffic_filtering=True,id=9981394d-e733-404e-85a5-e2e51877881a,network=Network(e19ed85e-54ee-4274-951c-ade412625983),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9981394d-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.891 244018 DEBUG nova.objects.instance [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid dd7feae9-9d2a-41b6-9277-cbf51a2c8f23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:51:35 np0005629333 systemd[1]: libpod-ace8c7f8326a29a5dbf5734e6941e365c7d175c2887ccd4359d9d65802232cfa.scope: Deactivated successfully.
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.906 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:51:35 np0005629333 nova_compute[244014]:  <uuid>dd7feae9-9d2a-41b6-9277-cbf51a2c8f23</uuid>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:  <name>instance-0000007e</name>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestGettingAddress-server-1033485245</nova:name>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:51:34</nova:creationTime>
Feb 25 07:51:35 np0005629333 podman[358887]: 2026-02-25 12:51:35.906932562 +0000 UTC m=+0.658660820 container died ace8c7f8326a29a5dbf5734e6941e365c7d175c2887ccd4359d9d65802232cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:51:35 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:        <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:        <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:        <nova:port uuid="9981394d-e733-404e-85a5-e2e51877881a">
Feb 25 07:51:35 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe2b:9299" ipVersion="6"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe2b:9299" ipVersion="6"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <entry name="serial">dd7feae9-9d2a-41b6-9277-cbf51a2c8f23</entry>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <entry name="uuid">dd7feae9-9d2a-41b6-9277-cbf51a2c8f23</entry>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk">
Feb 25 07:51:35 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:51:35 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk.config">
Feb 25 07:51:35 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:51:35 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:2b:92:99"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <target dev="tap9981394d-e7"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/dd7feae9-9d2a-41b6-9277-cbf51a2c8f23/console.log" append="off"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:51:35 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:51:35 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:51:35 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:51:35 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
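The multi-line <domain> document above is exactly what nova hands to libvirt next. To replay it by hand against the same hypervisor, the libvirt-python bindings suffice; the file name below is hypothetical, and nova itself drives this through its Guest wrapper rather than a raw defineXML call:

    import libvirt

    conn = libvirt.open("qemu:///system")
    with open("instance-0000007e.xml") as f:   # the <domain> XML above, saved to disk
        dom = conn.defineXML(f.read())         # persist the definition
    dom.create()                               # and boot it
    print(dom.name(), dom.ID())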
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.906 244018 DEBUG nova.compute.manager [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Preparing to wait for external event network-vif-plugged-9981394d-e733-404e-85a5-e2e51877881a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.907 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.907 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.907 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
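The acquire/release pair above (waited 0.000s, held 0.000s) is oslo.concurrency's standard named-lock pattern around the per-instance event registry. The same guard in miniature, in decorator form (nova uses the inner-function variant the log names, but the locking machinery is identical):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events")
    def _create_or_get_event():
        # Body runs with the named lock held; oslo emits the same
        # "Acquiring lock" / "acquired" / "released" trio seen above.
        pass

    _create_or_get_event()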
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.907 244018 DEBUG nova.virt.libvirt.vif [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:51:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1033485245',display_name='tempest-TestGettingAddress-server-1033485245',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1033485245',id=126,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJC+upPNElBVnEP9sRAofuJ3WgjrN7NUHIjnb2xNcja3SXT0AJcyip72U8J6Hd3gQBhmqSCIrT6CHz7iNR5W3wt+U6axciPl3ik+0GTg6pCZIfse39ZA4oXcSHjtD+Yg8A==',key_name='tempest-TestGettingAddress-1847809733',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-iqq9ar0s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:51:29Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=dd7feae9-9d2a-41b6-9277-cbf51a2c8f23,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9981394d-e733-404e-85a5-e2e51877881a", "address": "fa:16:3e:2b:92:99", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9981394d-e7", "ovs_interfaceid": "9981394d-e733-404e-85a5-e2e51877881a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.908 244018 DEBUG nova.network.os_vif_util [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "9981394d-e733-404e-85a5-e2e51877881a", "address": "fa:16:3e:2b:92:99", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9981394d-e7", "ovs_interfaceid": "9981394d-e733-404e-85a5-e2e51877881a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.908 244018 DEBUG nova.network.os_vif_util [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:92:99,bridge_name='br-int',has_traffic_filtering=True,id=9981394d-e733-404e-85a5-e2e51877881a,network=Network(e19ed85e-54ee-4274-951c-ade412625983),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9981394d-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.909 244018 DEBUG os_vif [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:92:99,bridge_name='br-int',has_traffic_filtering=True,id=9981394d-e733-404e-85a5-e2e51877881a,network=Network(e19ed85e-54ee-4274-951c-ade412625983),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9981394d-e7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.909 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.909 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.910 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.912 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.912 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9981394d-e7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.912 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9981394d-e7, col_values=(('external_ids', {'iface-id': '9981394d-e733-404e-85a5-e2e51877881a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2b:92:99', 'vm-uuid': 'dd7feae9-9d2a-41b6-9277-cbf51a2c8f23'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.913 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:51:35 np0005629333 NetworkManager[49836]: <info>  [1772023895.9148] manager: (tap9981394d-e7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/553)
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.915 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.920 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.921 244018 INFO os_vif [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:92:99,bridge_name='br-int',has_traffic_filtering=True,id=9981394d-e733-404e-85a5-e2e51877881a,network=Network(e19ed85e-54ee-4274-951c-ade412625983),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9981394d-e7')#033[00m
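
[annotation] The records from 12:51:35.908 to 12:51:35.921 are one complete os-vif plug: nova converts its network-info dict into a VIFOpenVSwitch object, and the os-vif ovs plugin then issues idempotent OVSDB commands: create br-int if missing, add the tap port, and stamp the Interface row's external_ids with the Neutron port id (iface-id) and instance uuid so that ovn-controller can bind it. A minimal sketch of the same operations through ovsdbapp, using the names from the records above; the unix-socket endpoint and the grouping into a single transaction are assumptions (the log shows two separate transactions):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # assumed endpoint; the agent on this host talks over its own IDL fd
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap9981394d-e7', may_exist=True))
        txn.add(api.db_set('Interface', 'tap9981394d-e7',
                           ('external_ids',
                            {'iface-id': '9981394d-e733-404e-85a5-e2e51877881a',
                             'attached-mac': 'fa:16:3e:2b:92:99'})))
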
Feb 25 07:51:35 np0005629333 systemd[1]: var-lib-containers-storage-overlay-12487860bfaab00c97082ab1626c4919aea90af305ae2e42619ec465782cc8b9-merged.mount: Deactivated successfully.
Feb 25 07:51:35 np0005629333 podman[358887]: 2026-02-25 12:51:35.954877455 +0000 UTC m=+0.706605753 container remove ace8c7f8326a29a5dbf5734e6941e365c7d175c2887ccd4359d9d65802232cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:51:35 np0005629333 systemd[1]: libpod-conmon-ace8c7f8326a29a5dbf5734e6941e365c7d175c2887ccd4359d9d65802232cfa.scope: Deactivated successfully.
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.967 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.967 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.968 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:2b:92:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.968 244018 INFO nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Using config drive#033[00m
Feb 25 07:51:35 np0005629333 nova_compute[244014]: 2026-02-25 12:51:35.988 244018 DEBUG nova.storage.rbd_utils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:51:36 np0005629333 podman[359058]: 2026-02-25 12:51:36.43138397 +0000 UTC m=+0.054259244 container create 978601e3bc791a6167e348bd9c9ac09adccf0ad1f9c744b967ff37743cc1bc94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_yalow, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 07:51:36 np0005629333 systemd[1]: Started libpod-conmon-978601e3bc791a6167e348bd9c9ac09adccf0ad1f9c744b967ff37743cc1bc94.scope.
Feb 25 07:51:36 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:51:36 np0005629333 podman[359058]: 2026-02-25 12:51:36.410351646 +0000 UTC m=+0.033226810 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:51:36 np0005629333 podman[359058]: 2026-02-25 12:51:36.511549743 +0000 UTC m=+0.134424897 container init 978601e3bc791a6167e348bd9c9ac09adccf0ad1f9c744b967ff37743cc1bc94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_yalow, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:51:36 np0005629333 podman[359058]: 2026-02-25 12:51:36.519726474 +0000 UTC m=+0.142601608 container start 978601e3bc791a6167e348bd9c9ac09adccf0ad1f9c744b967ff37743cc1bc94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_yalow, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:51:36 np0005629333 focused_yalow[359075]: 167 167
Feb 25 07:51:36 np0005629333 podman[359058]: 2026-02-25 12:51:36.52490261 +0000 UTC m=+0.147777744 container attach 978601e3bc791a6167e348bd9c9ac09adccf0ad1f9c744b967ff37743cc1bc94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 07:51:36 np0005629333 systemd[1]: libpod-978601e3bc791a6167e348bd9c9ac09adccf0ad1f9c744b967ff37743cc1bc94.scope: Deactivated successfully.
Feb 25 07:51:36 np0005629333 podman[359080]: 2026-02-25 12:51:36.580037307 +0000 UTC m=+0.040822064 container died 978601e3bc791a6167e348bd9c9ac09adccf0ad1f9c744b967ff37743cc1bc94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:51:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 25 07:51:36 np0005629333 systemd[1]: var-lib-containers-storage-overlay-b8d7c7d0ce80ecdf82e2e7671a0ed943ab803a09482b98006c3eb61018cd07f8-merged.mount: Deactivated successfully.
Feb 25 07:51:36 np0005629333 podman[359080]: 2026-02-25 12:51:36.636191243 +0000 UTC m=+0.096975960 container remove 978601e3bc791a6167e348bd9c9ac09adccf0ad1f9c744b967ff37743cc1bc94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 07:51:36 np0005629333 systemd[1]: libpod-conmon-978601e3bc791a6167e348bd9c9ac09adccf0ad1f9c744b967ff37743cc1bc94.scope: Deactivated successfully.
Feb 25 07:51:36 np0005629333 nova_compute[244014]: 2026-02-25 12:51:36.773 244018 INFO nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Creating config drive at /var/lib/nova/instances/dd7feae9-9d2a-41b6-9277-cbf51a2c8f23/disk.config#033[00m
Feb 25 07:51:36 np0005629333 nova_compute[244014]: 2026-02-25 12:51:36.780 244018 DEBUG oslo_concurrency.processutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dd7feae9-9d2a-41b6-9277-cbf51a2c8f23/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpj6sm1g_7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:51:36 np0005629333 podman[359102]: 2026-02-25 12:51:36.86309572 +0000 UTC m=+0.090010793 container create 1a7353b498cf0fad125656e67d4b5ef11605390cc7c9ff0ee67792a662b3e623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_panini, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 07:51:36 np0005629333 nova_compute[244014]: 2026-02-25 12:51:36.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:51:36 np0005629333 podman[359102]: 2026-02-25 12:51:36.8025427 +0000 UTC m=+0.029457863 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:51:36 np0005629333 systemd[1]: Started libpod-conmon-1a7353b498cf0fad125656e67d4b5ef11605390cc7c9ff0ee67792a662b3e623.scope.
Feb 25 07:51:36 np0005629333 nova_compute[244014]: 2026-02-25 12:51:36.927 244018 DEBUG oslo_concurrency.processutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dd7feae9-9d2a-41b6-9277-cbf51a2c8f23/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpj6sm1g_7" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:51:36 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:51:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae5596fd29a0d9542c52a36b25ac79037728a9ae27431a10764d260c364ef23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:51:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae5596fd29a0d9542c52a36b25ac79037728a9ae27431a10764d260c364ef23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:51:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae5596fd29a0d9542c52a36b25ac79037728a9ae27431a10764d260c364ef23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:51:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae5596fd29a0d9542c52a36b25ac79037728a9ae27431a10764d260c364ef23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:51:36 np0005629333 nova_compute[244014]: 2026-02-25 12:51:36.955 244018 DEBUG nova.storage.rbd_utils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:51:36 np0005629333 nova_compute[244014]: 2026-02-25 12:51:36.959 244018 DEBUG oslo_concurrency.processutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dd7feae9-9d2a-41b6-9277-cbf51a2c8f23/disk.config dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:51:36 np0005629333 podman[359102]: 2026-02-25 12:51:36.971226553 +0000 UTC m=+0.198141636 container init 1a7353b498cf0fad125656e67d4b5ef11605390cc7c9ff0ee67792a662b3e623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 07:51:36 np0005629333 podman[359102]: 2026-02-25 12:51:36.981461892 +0000 UTC m=+0.208376995 container start 1a7353b498cf0fad125656e67d4b5ef11605390cc7c9ff0ee67792a662b3e623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_panini, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:51:36 np0005629333 podman[359102]: 2026-02-25 12:51:36.991125555 +0000 UTC m=+0.218040708 container attach 1a7353b498cf0fad125656e67d4b5ef11605390cc7c9ff0ee67792a662b3e623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.039 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023882.037794, d6cf21ec-717e-41f7-9351-2214b43ce275 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.040 244018 INFO nova.compute.manager [-] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.070 244018 DEBUG nova.compute.manager [None req-3e2077a3-43ad-4595-820f-2caff9af09eb - - - - - -] [instance: d6cf21ec-717e-41f7-9351-2214b43ce275] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.166 244018 DEBUG nova.network.neutron [req-929674b7-671c-439d-b4ca-db1fec0279a0 req-d032811b-bc72-4806-8bd0-5dba65b00221 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Updated VIF entry in instance network info cache for port 9981394d-e733-404e-85a5-e2e51877881a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.166 244018 DEBUG nova.network.neutron [req-929674b7-671c-439d-b4ca-db1fec0279a0 req-d032811b-bc72-4806-8bd0-5dba65b00221 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Updating instance_info_cache with network_info: [{"id": "9981394d-e733-404e-85a5-e2e51877881a", "address": "fa:16:3e:2b:92:99", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9981394d-e7", "ovs_interfaceid": "9981394d-e733-404e-85a5-e2e51877881a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.183 244018 DEBUG oslo_concurrency.processutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dd7feae9-9d2a-41b6-9277-cbf51a2c8f23/disk.config dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.183 244018 INFO nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Deleting local config drive /var/lib/nova/instances/dd7feae9-9d2a-41b6-9277-cbf51a2c8f23/disk.config because it was imported into RBD.#033[00m
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.186 244018 DEBUG oslo_concurrency.lockutils [req-929674b7-671c-439d-b4ca-db1fec0279a0 req-d032811b-bc72-4806-8bd0-5dba65b00221 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
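
[annotation] Between 12:51:36.773 and 12:51:37.183 nova builds the config drive for instance dd7feae9-9d2a-41b6-9277-cbf51a2c8f23 on the RBD image backend: mkisofs writes a config-2 ISO under /var/lib/nova/instances, rbd import copies it into the vms pool as <uuid>_disk.config, and the local file is then deleted. The two subprocess calls, reconstructed from the CMD lines above as an oslo.concurrency sketch (the /tmp metadata directory is the transient one from the log):

    from oslo_concurrency import processutils

    inst = 'dd7feae9-9d2a-41b6-9277-cbf51a2c8f23'
    iso = f'/var/lib/nova/instances/{inst}/disk.config'

    # build the ISO9660 config drive from the staged metadata tree
    processutils.execute(
        '/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
        '-allow-multidot', '-l', '-publisher',
        'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpj6sm1g_7')

    # import it into Ceph; nova unlinks the local copy afterwards
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', iso, f'{inst}_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
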
Feb 25 07:51:37 np0005629333 loving_panini[359122]: {
Feb 25 07:51:37 np0005629333 loving_panini[359122]:    "0": [
Feb 25 07:51:37 np0005629333 loving_panini[359122]:        {
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "devices": [
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "/dev/loop3"
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            ],
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "lv_name": "ceph_lv0",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "lv_size": "21470642176",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "name": "ceph_lv0",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "tags": {
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.cluster_name": "ceph",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.crush_device_class": "",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.encrypted": "0",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.objectstore": "bluestore",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.osd_id": "0",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.type": "block",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.vdo": "0",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.with_tpm": "0"
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            },
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "type": "block",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "vg_name": "ceph_vg0"
Feb 25 07:51:37 np0005629333 loving_panini[359122]:        }
Feb 25 07:51:37 np0005629333 loving_panini[359122]:    ],
Feb 25 07:51:37 np0005629333 loving_panini[359122]:    "1": [
Feb 25 07:51:37 np0005629333 loving_panini[359122]:        {
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "devices": [
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "/dev/loop4"
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            ],
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "lv_name": "ceph_lv1",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "lv_size": "21470642176",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "name": "ceph_lv1",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "tags": {
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.cluster_name": "ceph",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.crush_device_class": "",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.encrypted": "0",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.objectstore": "bluestore",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.osd_id": "1",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.type": "block",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.vdo": "0",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.with_tpm": "0"
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            },
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "type": "block",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "vg_name": "ceph_vg1"
Feb 25 07:51:37 np0005629333 loving_panini[359122]:        }
Feb 25 07:51:37 np0005629333 loving_panini[359122]:    ],
Feb 25 07:51:37 np0005629333 loving_panini[359122]:    "2": [
Feb 25 07:51:37 np0005629333 loving_panini[359122]:        {
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "devices": [
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "/dev/loop5"
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            ],
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "lv_name": "ceph_lv2",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "lv_size": "21470642176",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "name": "ceph_lv2",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "tags": {
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.cluster_name": "ceph",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.crush_device_class": "",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.encrypted": "0",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.objectstore": "bluestore",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.osd_id": "2",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.type": "block",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.vdo": "0",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:                "ceph.with_tpm": "0"
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            },
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "type": "block",
Feb 25 07:51:37 np0005629333 loving_panini[359122]:            "vg_name": "ceph_vg2"
Feb 25 07:51:37 np0005629333 loving_panini[359122]:        }
Feb 25 07:51:37 np0005629333 loving_panini[359122]:    ]
Feb 25 07:51:37 np0005629333 loving_panini[359122]: }
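
[annotation] The JSON block printed by the loving_panini container is ceph-volume lvm list output in JSON form, run by cephadm in a short-lived ceph container: one key per OSD id (0, 1, 2), each listing the backing LV with its ceph.* LV tags, from which cephadm reconstructs OSD placement (cluster fsid 8ac33163-6221-5d58-9a39-8b6933fe7762, bluestore, loop-device-backed VGs). A small consumer of that structure (sketch; invoking it through cephadm is an assumption that matches the surrounding podman records):

    import json
    import subprocess

    # cephadm wraps `ceph-volume lvm list` in the ceph container, as above
    raw = subprocess.check_output(
        ['cephadm', 'ceph-volume', 'lvm', 'list', '--format', 'json'])

    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            tags = lv['tags']
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"fsid={tags['ceph.osd_fsid']} "
                  f"store={tags['ceph.objectstore']}")
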
Feb 25 07:51:37 np0005629333 kernel: tap9981394d-e7: entered promiscuous mode
Feb 25 07:51:37 np0005629333 NetworkManager[49836]: <info>  [1772023897.2401] manager: (tap9981394d-e7): new Tun device (/org/freedesktop/NetworkManager/Devices/554)
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.242 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.248 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:51:37Z|01337|binding|INFO|Claiming lport 9981394d-e733-404e-85a5-e2e51877881a for this chassis.
Feb 25 07:51:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:51:37Z|01338|binding|INFO|9981394d-e733-404e-85a5-e2e51877881a: Claiming fa:16:3e:2b:92:99 10.100.0.7 2001:db8:0:1:f816:3eff:fe2b:9299 2001:db8::f816:3eff:fe2b:9299
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.262 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:92:99 10.100.0.7 2001:db8:0:1:f816:3eff:fe2b:9299 2001:db8::f816:3eff:fe2b:9299'], port_security=['fa:16:3e:2b:92:99 10.100.0.7 2001:db8:0:1:f816:3eff:fe2b:9299 2001:db8::f816:3eff:fe2b:9299'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28 2001:db8:0:1:f816:3eff:fe2b:9299/64 2001:db8::f816:3eff:fe2b:9299/64', 'neutron:device_id': 'dd7feae9-9d2a-41b6-9277-cbf51a2c8f23', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e19ed85e-54ee-4274-951c-ade412625983', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a1c36bad-4548-4f88-8bee-0f028af7a076', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a56192da-444f-498f-a789-c0e4cafb114e, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=9981394d-e733-404e-85a5-e2e51877881a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.264 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 9981394d-e733-404e-85a5-e2e51877881a in datapath e19ed85e-54ee-4274-951c-ade412625983 bound to our chassis#033[00m
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.265 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e19ed85e-54ee-4274-951c-ade412625983#033[00m
Feb 25 07:51:37 np0005629333 systemd[1]: libpod-1a7353b498cf0fad125656e67d4b5ef11605390cc7c9ff0ee67792a662b3e623.scope: Deactivated successfully.
Feb 25 07:51:37 np0005629333 systemd-machined[210048]: New machine qemu-158-instance-0000007e.
Feb 25 07:51:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:51:37Z|01339|binding|INFO|Setting lport 9981394d-e733-404e-85a5-e2e51877881a ovn-installed in OVS
Feb 25 07:51:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:51:37Z|01340|binding|INFO|Setting lport 9981394d-e733-404e-85a5-e2e51877881a up in Southbound
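
[annotation] Records 01337 through 01340 are ovn-controller reacting to the external_ids write above: it matches iface-id against the Southbound Port_Binding, claims the lport (with its MAC and all three fixed IPs) for this chassis, marks the Interface ovn-installed, and flips the Port_Binding up, which feeds Neutron's port-status update and, from there, the network-vif-plugged event back to nova. One way to observe the marker from the host (sketch; the key is maintained by ovn-controller on the OVS Interface row):

    import subprocess

    def ovn_installed(ifname: str) -> bool:
        # ovn-controller sets external_ids:ovn-installed="true" once it
        # has finished wiring the port
        try:
            out = subprocess.check_output(
                ['ovs-vsctl', 'get', 'Interface', ifname,
                 'external_ids:ovn-installed'])
        except subprocess.CalledProcessError:
            return False  # key not set yet
        return out.strip() == b'"true"'

    print(ovn_installed('tap9981394d-e7'))
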
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.279 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.279 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[92192337-dcd5-4c1b-8b73-fe8ec1cf59e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.280 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape19ed85e-51 in ovnmeta-e19ed85e-54ee-4274-951c-ade412625983 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
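
[annotation] From 12:51:37.265 the metadata agent provisions datapath e19ed85e-54ee-4274-951c-ade412625983: it creates the namespace ovnmeta-<network-uuid> and a veth pair tape19ed85e-50/-51 with the -51 end moved inside, then (12:51:37.536) plugs the -50 end into br-int with iface-id 591ce053-3764-4ce0-841f-6728c8fd9491 so OVN can steer 169.254.169.254 traffic to the haproxy it is about to spawn. The namespace-and-veth half, approximately, via pyroute2 (sketch only; neutron actually does this through its privsep'd ip_lib helpers, visible as the privsep replies below):

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983'
    netns.create(ns)  # per-datapath metadata namespace

    ipr = IPRoute()
    # the outer end stays in the root namespace and is plugged into br-int
    ipr.link('add', ifname='tape19ed85e-50', kind='veth',
             peer='tape19ed85e-51')
    idx = ipr.link_lookup(ifname='tape19ed85e-51')[0]
    ipr.link('set', index=idx, net_ns_fd=ns)  # move inner end into the ns
    ipr.close()
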
Feb 25 07:51:37 np0005629333 systemd-udevd[359183]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:51:37 np0005629333 systemd[1]: Started Virtual Machine qemu-158-instance-0000007e.
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.284 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape19ed85e-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.284 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ab47d5bf-f2bd-445b-bc04-081db138e0d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.285 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eff50e4b-62f2-498b-8af0-a594f386d79a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:37 np0005629333 NetworkManager[49836]: <info>  [1772023897.2981] device (tap9981394d-e7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:51:37 np0005629333 NetworkManager[49836]: <info>  [1772023897.2990] device (tap9981394d-e7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.297 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[d69e979e-b3e8-4bab-855c-2bfd0c40f4ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.319 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[960713b7-f467-420a-8b2d-200d549d4ad9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:37 np0005629333 podman[359181]: 2026-02-25 12:51:37.320993689 +0000 UTC m=+0.036048099 container died 1a7353b498cf0fad125656e67d4b5ef11605390cc7c9ff0ee67792a662b3e623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_panini, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.349 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[27b7bbc6-9b07-4c21-82c0-143cb4ac7ddb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:37 np0005629333 systemd-udevd[359192]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:51:37 np0005629333 NetworkManager[49836]: <info>  [1772023897.3542] manager: (tape19ed85e-50): new Veth device (/org/freedesktop/NetworkManager/Devices/555)
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.353 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d6ea0d93-feba-4bd9-a25d-8ece8bc76273]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.384 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[af91d82d-4358-4fc3-9a13-92b34e6650e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.387 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e3b357-7598-4ad5-b88a-d06efc4923a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:37 np0005629333 NetworkManager[49836]: <info>  [1772023897.4088] device (tape19ed85e-50): carrier: link connected
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.414 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9a015788-6052-4ece-8028-533dd74e54e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.429 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cb45745d-44da-49cf-a212-8a0f9e71367f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape19ed85e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:4b:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 399], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586681, 'reachable_time': 39508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359228, 'error': None, 'target': 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.444 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[34ffd3e8-e789-4a0c-818e-82778f352e20]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6f:4ba0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586681, 'tstamp': 586681}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 359229, 'error': None, 'target': 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:37 np0005629333 systemd[1]: var-lib-containers-storage-overlay-9ae5596fd29a0d9542c52a36b25ac79037728a9ae27431a10764d260c364ef23-merged.mount: Deactivated successfully.
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.463 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cf4791bc-4ffc-4059-8c04-675b0b6d3598]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape19ed85e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:4b:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 399], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586681, 'reachable_time': 39508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 359230, 'error': None, 'target': 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.487 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[84d17b6e-1b26-42e5-90e5-96641380bfb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:37 np0005629333 podman[359181]: 2026-02-25 12:51:37.53278898 +0000 UTC m=+0.247843390 container remove 1a7353b498cf0fad125656e67d4b5ef11605390cc7c9ff0ee67792a662b3e623 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_panini, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.533 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cc0e2f37-2ca8-49e2-9f03-f4dfae368c05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.535 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape19ed85e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.535 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.536 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape19ed85e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.537 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:37 np0005629333 systemd[1]: libpod-conmon-1a7353b498cf0fad125656e67d4b5ef11605390cc7c9ff0ee67792a662b3e623.scope: Deactivated successfully.
Feb 25 07:51:37 np0005629333 kernel: tape19ed85e-50: entered promiscuous mode
Feb 25 07:51:37 np0005629333 NetworkManager[49836]: <info>  [1772023897.5403] manager: (tape19ed85e-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/556)
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.540 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.542 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape19ed85e-50, col_values=(('external_ids', {'iface-id': '591ce053-3764-4ce0-841f-6728c8fd9491'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
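The three transactions above re-plug the metadata tap: drop tape19ed85e-50 from br-ex if present (a no-op here, hence "Transaction caused no change"), add it to br-int, and set external_ids:iface-id so ovn-controller can bind the OVN port. The same sequence through ovsdbapp's Open vSwitch API looks roughly like this; the db.sock path and the single-transaction batching are assumptions, since the agent issues three single-command commits above:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'  # assumed local socket path

    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tape19ed85e-50', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tape19ed85e-50', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tape19ed85e-50',
            ('external_ids', {'iface-id': '591ce053-3764-4ce0-841f-6728c8fd9491'})))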
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.543 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:51:37Z|01341|binding|INFO|Releasing lport 591ce053-3764-4ce0-841f-6728c8fd9491 from this chassis (sb_readonly=0)
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.573 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.574 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e19ed85e-54ee-4274-951c-ade412625983.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e19ed85e-54ee-4274-951c-ade412625983.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
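That "Unable to access ... .pid.haproxy" debug line is the expected first-spawn case: the agent probes the haproxy pidfile before any proxy exists and treats a missing file as "not running" rather than an error. The pattern is roughly the following sketch, not neutron's exact helper:

    import errno

    def get_value_from_file(path, converter=None):
        # A missing file means "no process yet" (the debug line above);
        # any other OSError is a real failure and propagates.
        try:
            with open(path) as f:
                value = f.read().strip()
        except OSError as e:
            if e.errno == errno.ENOENT:
                return None
            raise
        return converter(value) if converter is not None else value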
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.575 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[897288d6-99f3-4c85-ba0e-2c87fd82454f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.576 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-e19ed85e-54ee-4274-951c-ade412625983
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/e19ed85e-54ee-4274-951c-ade412625983.pid.haproxy
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID e19ed85e-54ee-4274-951c-ade412625983
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:51:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:37.578 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'env', 'PROCESS_TAG=haproxy-e19ed85e-54ee-4274-951c-ade412625983', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e19ed85e-54ee-4274-951c-ade412625983.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
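The generated config above binds the proxy to 169.254.169.254:80 and forwards requests to the /var/lib/neutron/metadata_proxy UNIX socket, tagging each one with X-OVN-Network-ID; the rootwrap command then launches haproxy inside the ovnmeta- namespace, where that link-local address lives. Stripped of the rootwrap indirection and the PROCESS_TAG bookkeeping, the launch is equivalent to this sketch, assuming it already runs as root:

    import subprocess

    NETNS = 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983'
    CFG = '/var/lib/neutron/ovn-metadata-proxy/e19ed85e-54ee-4274-951c-ade412625983.conf'

    # haproxy daemonizes itself (the "daemon" directive in the config
    # above), so this returns once the master has forked its worker.
    subprocess.run(['ip', 'netns', 'exec', NETNS, 'haproxy', '-f', CFG],
                   check=True)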
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.605 244018 DEBUG nova.compute.manager [req-10d8ef44-1e7e-43ff-a0a7-6a4cdcbf3bc0 req-7e7af82d-28c4-4601-be4d-9ce4b3d8597b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Received event network-vif-plugged-9981394d-e733-404e-85a5-e2e51877881a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.606 244018 DEBUG oslo_concurrency.lockutils [req-10d8ef44-1e7e-43ff-a0a7-6a4cdcbf3bc0 req-7e7af82d-28c4-4601-be4d-9ce4b3d8597b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.607 244018 DEBUG oslo_concurrency.lockutils [req-10d8ef44-1e7e-43ff-a0a7-6a4cdcbf3bc0 req-7e7af82d-28c4-4601-be4d-9ce4b3d8597b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.607 244018 DEBUG oslo_concurrency.lockutils [req-10d8ef44-1e7e-43ff-a0a7-6a4cdcbf3bc0 req-7e7af82d-28c4-4601-be4d-9ce4b3d8597b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
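The acquire/acquired/released triplet above is oslo.concurrency's standard lock instrumentation: locks are named strings (here "<instance-uuid>-events"), and the library logs how long each caller waited for and then held the lock. The same pattern in miniature, as a sketch; nova wraps this in its own synchronized helpers:

    from oslo_concurrency import lockutils

    def pop_instance_event(instance_uuid, events):
        # In-process lock keyed on "<uuid>-events", as in the lines above.
        # external=True would add a file lock shared across processes.
        with lockutils.lock('%s-events' % instance_uuid):
            return events.pop() if events else None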
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.608 244018 DEBUG nova.compute.manager [req-10d8ef44-1e7e-43ff-a0a7-6a4cdcbf3bc0 req-7e7af82d-28c4-4601-be4d-9ce4b3d8597b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Processing event network-vif-plugged-9981394d-e733-404e-85a5-e2e51877881a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.787 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023897.786589, dd7feae9-9d2a-41b6-9277-cbf51a2c8f23 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.787 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] VM Started (Lifecycle Event)#033[00m
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.789 244018 DEBUG nova.compute.manager [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.792 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.794 244018 INFO nova.virt.libvirt.driver [-] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Instance spawned successfully.#033[00m
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.795 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.828 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.834 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.838 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.838 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.839 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.839 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.840 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.840 244018 DEBUG nova.virt.libvirt.driver [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.870 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
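The "pending task (spawning). Skip." line is nova declining to trust hypervisor lifecycle events while a task is in flight: the database still records power_state 0 (NOSTATE) with vm_state building, the VM already reports 1 (RUNNING), and reconciling mid-spawn would race the normal build path. The guard reduces to this sketch of the decision, not nova's full handler:

    NOSTATE, RUNNING = 0, 1

    def sync_power_state(db_power_state, vm_power_state, task_state):
        # While a task (e.g. 'spawning') is pending, keep the DB value
        # and let the task's completion path record the final state.
        if task_state is not None:
            return db_power_state   # "... has a pending task. Skip."
        return vm_power_state       # otherwise trust the hypervisor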
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.871 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023897.7868466, dd7feae9-9d2a-41b6-9277-cbf51a2c8f23 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:51:37 np0005629333 nova_compute[244014]: 2026-02-25 12:51:37.871 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:51:37 np0005629333 podman[359355]: 2026-02-25 12:51:37.940263636 +0000 UTC m=+0.066641443 container create f6693c06ff36140c6e384f51011936b906ddbca089bb7c89e73186677d382987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:51:37 np0005629333 podman[359379]: 2026-02-25 12:51:37.986642735 +0000 UTC m=+0.054438998 container create e435224e711a797bac01534e8861fbc3bf6178f36d173ac99ad2b57ef4167cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 07:51:37 np0005629333 podman[359355]: 2026-02-25 12:51:37.893892236 +0000 UTC m=+0.020270043 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:51:38 np0005629333 systemd[1]: Started libpod-conmon-f6693c06ff36140c6e384f51011936b906ddbca089bb7c89e73186677d382987.scope.
Feb 25 07:51:38 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:51:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b16cd2a2c94cf38fc347d4884920d0f50de2f7df3adcfcac47023bdece528de3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:51:38 np0005629333 podman[359355]: 2026-02-25 12:51:38.054248704 +0000 UTC m=+0.180626501 container init f6693c06ff36140c6e384f51011936b906ddbca089bb7c89e73186677d382987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 25 07:51:38 np0005629333 podman[359355]: 2026-02-25 12:51:38.058557376 +0000 UTC m=+0.184935163 container start f6693c06ff36140c6e384f51011936b906ddbca089bb7c89e73186677d382987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 25 07:51:38 np0005629333 podman[359379]: 2026-02-25 12:51:37.97159728 +0000 UTC m=+0.039393563 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:51:38 np0005629333 neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983[359394]: [NOTICE]   (359402) : New worker (359404) forked
Feb 25 07:51:38 np0005629333 neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983[359394]: [NOTICE]   (359402) : Loading success.
Feb 25 07:51:38 np0005629333 systemd[1]: Started libpod-conmon-e435224e711a797bac01534e8861fbc3bf6178f36d173ac99ad2b57ef4167cd8.scope.
Feb 25 07:51:38 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:51:38 np0005629333 nova_compute[244014]: 2026-02-25 12:51:38.102 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:51:38 np0005629333 nova_compute[244014]: 2026-02-25 12:51:38.107 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023897.7913318, dd7feae9-9d2a-41b6-9277-cbf51a2c8f23 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:51:38 np0005629333 nova_compute[244014]: 2026-02-25 12:51:38.108 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:51:38 np0005629333 podman[359379]: 2026-02-25 12:51:38.110344988 +0000 UTC m=+0.178141261 container init e435224e711a797bac01534e8861fbc3bf6178f36d173ac99ad2b57ef4167cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:51:38 np0005629333 nova_compute[244014]: 2026-02-25 12:51:38.114 244018 INFO nova.compute.manager [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Took 8.45 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:51:38 np0005629333 nova_compute[244014]: 2026-02-25 12:51:38.115 244018 DEBUG nova.compute.manager [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:51:38 np0005629333 podman[359379]: 2026-02-25 12:51:38.117380936 +0000 UTC m=+0.185177199 container start e435224e711a797bac01534e8861fbc3bf6178f36d173ac99ad2b57ef4167cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cartwright, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 07:51:38 np0005629333 fervent_cartwright[359413]: 167 167
Feb 25 07:51:38 np0005629333 systemd[1]: libpod-e435224e711a797bac01534e8861fbc3bf6178f36d173ac99ad2b57ef4167cd8.scope: Deactivated successfully.
Feb 25 07:51:38 np0005629333 podman[359379]: 2026-02-25 12:51:38.123126379 +0000 UTC m=+0.190922662 container attach e435224e711a797bac01534e8861fbc3bf6178f36d173ac99ad2b57ef4167cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:51:38 np0005629333 podman[359379]: 2026-02-25 12:51:38.123727656 +0000 UTC m=+0.191523919 container died e435224e711a797bac01534e8861fbc3bf6178f36d173ac99ad2b57ef4167cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cartwright, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:51:38 np0005629333 nova_compute[244014]: 2026-02-25 12:51:38.130 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:51:38 np0005629333 nova_compute[244014]: 2026-02-25 12:51:38.136 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:51:38 np0005629333 nova_compute[244014]: 2026-02-25 12:51:38.162 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:51:38 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6c7f5e0a4569f1c0a4a8374dc215249e71b87684b5bda6c4616b5a2c3849fb9e-merged.mount: Deactivated successfully.
Feb 25 07:51:38 np0005629333 nova_compute[244014]: 2026-02-25 12:51:38.186 244018 INFO nova.compute.manager [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Took 9.48 seconds to build instance.#033[00m
Feb 25 07:51:38 np0005629333 nova_compute[244014]: 2026-02-25 12:51:38.205 244018 DEBUG oslo_concurrency.lockutils [None req-1f6d81c7-4051-481e-9e39-6e65f590f619 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:51:38 np0005629333 podman[359379]: 2026-02-25 12:51:38.219364326 +0000 UTC m=+0.287160599 container remove e435224e711a797bac01534e8861fbc3bf6178f36d173ac99ad2b57ef4167cd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:51:38 np0005629333 systemd[1]: libpod-conmon-e435224e711a797bac01534e8861fbc3bf6178f36d173ac99ad2b57ef4167cd8.scope: Deactivated successfully.
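The create/init/start/attach/died/remove chain for fervent_cartwright (and cool_swanson just below) is the journal's view of a single one-shot `podman run --rm`: cephadm periodically execs short-lived containers from the ceph image, likely for host/device scans given the config-key set for host.compute-0.devices.0 that follows, and the container exits and is removed as soon as its command prints a result (the "167 167" and "{}" stdout lines). The equivalent invocation is roughly this sketch; only the image digest is from the log, the inner command is an assumption:

    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86')

    # One-shot helper container: --rm makes podman emit the same
    # create/init/start/attach/died/remove event chain seen above.
    out = subprocess.run(['podman', 'run', '--rm', IMAGE, 'ceph', '--version'],
                         check=True, capture_output=True, text=True).stdout
    print(out.strip())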
Feb 25 07:51:38 np0005629333 podman[359436]: 2026-02-25 12:51:38.417143041 +0000 UTC m=+0.074985748 container create 6bd24c36ebc30b0480b77886c0a4f00b6beddf55b6a55018aad081adcf4e8502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_swanson, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:51:38 np0005629333 systemd[1]: Started libpod-conmon-6bd24c36ebc30b0480b77886c0a4f00b6beddf55b6a55018aad081adcf4e8502.scope.
Feb 25 07:51:38 np0005629333 podman[359436]: 2026-02-25 12:51:38.382497783 +0000 UTC m=+0.040340540 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:51:38 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:51:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0928661ff8ca5d456265d50d65d0fcb14f0bd5b32a9fe531a8ab4979ee7cac66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:51:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0928661ff8ca5d456265d50d65d0fcb14f0bd5b32a9fe531a8ab4979ee7cac66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:51:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0928661ff8ca5d456265d50d65d0fcb14f0bd5b32a9fe531a8ab4979ee7cac66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:51:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0928661ff8ca5d456265d50d65d0fcb14f0bd5b32a9fe531a8ab4979ee7cac66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:51:38 np0005629333 podman[359436]: 2026-02-25 12:51:38.563195635 +0000 UTC m=+0.221038382 container init 6bd24c36ebc30b0480b77886c0a4f00b6beddf55b6a55018aad081adcf4e8502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_swanson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:51:38 np0005629333 podman[359436]: 2026-02-25 12:51:38.57046775 +0000 UTC m=+0.228310417 container start 6bd24c36ebc30b0480b77886c0a4f00b6beddf55b6a55018aad081adcf4e8502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_swanson, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Feb 25 07:51:38 np0005629333 podman[359436]: 2026-02-25 12:51:38.574524975 +0000 UTC m=+0.232367742 container attach 6bd24c36ebc30b0480b77886c0a4f00b6beddf55b6a55018aad081adcf4e8502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 07:51:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2132: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Feb 25 07:51:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:51:38 np0005629333 nova_compute[244014]: 2026-02-25 12:51:38.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:51:39 np0005629333 lvm[359527]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:51:39 np0005629333 lvm[359527]: VG ceph_vg0 finished
Feb 25 07:51:39 np0005629333 lvm[359529]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:51:39 np0005629333 lvm[359529]: VG ceph_vg1 finished
Feb 25 07:51:39 np0005629333 lvm[359530]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:51:39 np0005629333 lvm[359530]: VG ceph_vg2 finished
Feb 25 07:51:39 np0005629333 lvm[359531]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:51:39 np0005629333 lvm[359531]: VG ceph_vg2 finished
Feb 25 07:51:39 np0005629333 lvm[359534]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:51:39 np0005629333 lvm[359534]: VG ceph_vg2 finished
Feb 25 07:51:39 np0005629333 cool_swanson[359452]: {}
Feb 25 07:51:39 np0005629333 systemd[1]: libpod-6bd24c36ebc30b0480b77886c0a4f00b6beddf55b6a55018aad081adcf4e8502.scope: Deactivated successfully.
Feb 25 07:51:39 np0005629333 systemd[1]: libpod-6bd24c36ebc30b0480b77886c0a4f00b6beddf55b6a55018aad081adcf4e8502.scope: Consumed 1.030s CPU time.
Feb 25 07:51:39 np0005629333 podman[359436]: 2026-02-25 12:51:39.397448422 +0000 UTC m=+1.055291109 container died 6bd24c36ebc30b0480b77886c0a4f00b6beddf55b6a55018aad081adcf4e8502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_swanson, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:51:39 np0005629333 systemd[1]: var-lib-containers-storage-overlay-0928661ff8ca5d456265d50d65d0fcb14f0bd5b32a9fe531a8ab4979ee7cac66-merged.mount: Deactivated successfully.
Feb 25 07:51:39 np0005629333 podman[359436]: 2026-02-25 12:51:39.464294689 +0000 UTC m=+1.122137366 container remove 6bd24c36ebc30b0480b77886c0a4f00b6beddf55b6a55018aad081adcf4e8502 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_swanson, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 07:51:39 np0005629333 systemd[1]: libpod-conmon-6bd24c36ebc30b0480b77886c0a4f00b6beddf55b6a55018aad081adcf4e8502.scope: Deactivated successfully.
Feb 25 07:51:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:51:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:51:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:51:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:51:39 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:51:39 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:51:39 np0005629333 nova_compute[244014]: 2026-02-25 12:51:39.715 244018 DEBUG nova.compute.manager [req-b00bd408-c994-4c30-9704-6dd1ff8ee0a2 req-b7fd5fba-628a-47b4-be8e-8c6a061254e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Received event network-vif-plugged-9981394d-e733-404e-85a5-e2e51877881a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:51:39 np0005629333 nova_compute[244014]: 2026-02-25 12:51:39.715 244018 DEBUG oslo_concurrency.lockutils [req-b00bd408-c994-4c30-9704-6dd1ff8ee0a2 req-b7fd5fba-628a-47b4-be8e-8c6a061254e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:51:39 np0005629333 nova_compute[244014]: 2026-02-25 12:51:39.715 244018 DEBUG oslo_concurrency.lockutils [req-b00bd408-c994-4c30-9704-6dd1ff8ee0a2 req-b7fd5fba-628a-47b4-be8e-8c6a061254e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:51:39 np0005629333 nova_compute[244014]: 2026-02-25 12:51:39.715 244018 DEBUG oslo_concurrency.lockutils [req-b00bd408-c994-4c30-9704-6dd1ff8ee0a2 req-b7fd5fba-628a-47b4-be8e-8c6a061254e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:51:39 np0005629333 nova_compute[244014]: 2026-02-25 12:51:39.715 244018 DEBUG nova.compute.manager [req-b00bd408-c994-4c30-9704-6dd1ff8ee0a2 req-b7fd5fba-628a-47b4-be8e-8c6a061254e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] No waiting events found dispatching network-vif-plugged-9981394d-e733-404e-85a5-e2e51877881a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:51:39 np0005629333 nova_compute[244014]: 2026-02-25 12:51:39.715 244018 WARNING nova.compute.manager [req-b00bd408-c994-4c30-9704-6dd1ff8ee0a2 req-b7fd5fba-628a-47b4-be8e-8c6a061254e8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Received unexpected event network-vif-plugged-9981394d-e733-404e-85a5-e2e51877881a for instance with vm_state active and task_state None.#033[00m
Feb 25 07:51:40 np0005629333 nova_compute[244014]: 2026-02-25 12:51:40.374 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:51:40 np0005629333 nova_compute[244014]: 2026-02-25 12:51:40.914 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:41 np0005629333 nova_compute[244014]: 2026-02-25 12:51:41.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:51:41 np0005629333 nova_compute[244014]: 2026-02-25 12:51:41.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
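Tasks like _poll_volume_usage and _reclaim_queued_deletes come from oslo.service's PeriodicTasks machinery, and the "CONF.reclaim_instance_interval <= 0, skipping" line shows the common idiom of a task that fires on schedule but no-ops until its option is enabled. A minimal sketch of the pattern; the class and the registered option here are illustrative:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt('reclaim_instance_interval', default=0)])

    class DemoManager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            if CONF.reclaim_instance_interval <= 0:
                return  # matches the "skipping..." debug line above
            # ... reclaim soft-deleted instances here ...

The service then invokes run_periodic_tasks(context) from a timer loop, which produces the "Running periodic task ..." lines above.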
Feb 25 07:51:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 07:51:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:51:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:51:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:51:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:51:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00036411289436910166 of space, bias 1.0, pg target 0.1092338683107305 quantized to 32 (current 32)
Feb 25 07:51:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:51:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:51:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:51:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:51:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:51:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002494097439084752 of space, bias 1.0, pg target 0.7482292317254255 quantized to 32 (current 32)
Feb 25 07:51:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:51:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3976293654385012e-06 of space, bias 4.0, pg target 0.0016771552385262014 quantized to 16 (current 16)
Feb 25 07:51:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:51:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:51:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:51:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:51:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:51:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:51:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:51:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:51:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:51:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
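Those pg_autoscaler targets are reproducible from the logged ratios: the raw target is capacity ratio x bias x mon_target_pg_per_osd x OSD count, then quantized to a power of two and floored at the pool minimum before comparison with the current pg_num. With this cluster's 3 OSDs (the three ceph_vg* loop devices) and the default mon_target_pg_per_osd of 100, both assumptions here, the arithmetic matches the log:

    MON_TARGET_PG_PER_OSD = 100  # ceph default; assumed, not read from the cluster
    NUM_OSDS = 3                 # assumption: one OSD per ceph_vg0/1/2 loop device

    def raw_pg_target(capacity_ratio, bias=1.0):
        return capacity_ratio * bias * MON_TARGET_PG_PER_OSD * NUM_OSDS

    # Pool 'images': 0.002494... * 1.0 * 300 = 0.748229..., the logged pg target
    print(raw_pg_target(0.002494097439084752))
    # Pool 'cephfs.cephfs.meta': bias 4.0 gives 0.0016771552385262014, as logged
    print(raw_pg_target(1.3976293654385012e-06, bias=4.0))

Both raw targets quantize below the pools' floors, so pg_num stays at the current 32 and 16 respectively.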
Feb 25 07:51:43 np0005629333 NetworkManager[49836]: <info>  [1772023903.1099] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/557)
Feb 25 07:51:43 np0005629333 NetworkManager[49836]: <info>  [1772023903.1108] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/558)
Feb 25 07:51:43 np0005629333 nova_compute[244014]: 2026-02-25 12:51:43.109 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:43 np0005629333 nova_compute[244014]: 2026-02-25 12:51:43.161 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:43 np0005629333 ovn_controller[147040]: 2026-02-25T12:51:43Z|01342|binding|INFO|Releasing lport 591ce053-3764-4ce0-841f-6728c8fd9491 from this chassis (sb_readonly=0)
Feb 25 07:51:43 np0005629333 nova_compute[244014]: 2026-02-25 12:51:43.181 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:51:43 np0005629333 nova_compute[244014]: 2026-02-25 12:51:43.667 244018 DEBUG nova.compute.manager [req-8d6c1200-80d6-4443-94e6-2dd1fc4a14f5 req-15cc9dd3-e246-448c-a877-6a709adb928c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Received event network-changed-9981394d-e733-404e-85a5-e2e51877881a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:51:43 np0005629333 nova_compute[244014]: 2026-02-25 12:51:43.668 244018 DEBUG nova.compute.manager [req-8d6c1200-80d6-4443-94e6-2dd1fc4a14f5 req-15cc9dd3-e246-448c-a877-6a709adb928c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Refreshing instance network info cache due to event network-changed-9981394d-e733-404e-85a5-e2e51877881a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:51:43 np0005629333 nova_compute[244014]: 2026-02-25 12:51:43.669 244018 DEBUG oslo_concurrency.lockutils [req-8d6c1200-80d6-4443-94e6-2dd1fc4a14f5 req-15cc9dd3-e246-448c-a877-6a709adb928c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:51:43 np0005629333 nova_compute[244014]: 2026-02-25 12:51:43.669 244018 DEBUG oslo_concurrency.lockutils [req-8d6c1200-80d6-4443-94e6-2dd1fc4a14f5 req-15cc9dd3-e246-448c-a877-6a709adb928c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:51:43 np0005629333 nova_compute[244014]: 2026-02-25 12:51:43.669 244018 DEBUG nova.network.neutron [req-8d6c1200-80d6-4443-94e6-2dd1fc4a14f5 req-15cc9dd3-e246-448c-a877-6a709adb928c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Refreshing network info cache for port 9981394d-e733-404e-85a5-e2e51877881a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:51:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 07:51:45 np0005629333 nova_compute[244014]: 2026-02-25 12:51:45.376 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:45 np0005629333 nova_compute[244014]: 2026-02-25 12:51:45.387 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "809da994-7551-4f52-8920-b0dfaa2ef73e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:51:45 np0005629333 nova_compute[244014]: 2026-02-25 12:51:45.388 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:51:45 np0005629333 nova_compute[244014]: 2026-02-25 12:51:45.417 244018 DEBUG nova.compute.manager [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:51:45 np0005629333 nova_compute[244014]: 2026-02-25 12:51:45.538 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:51:45 np0005629333 nova_compute[244014]: 2026-02-25 12:51:45.539 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:51:45 np0005629333 nova_compute[244014]: 2026-02-25 12:51:45.548 244018 DEBUG nova.virt.hardware [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:51:45 np0005629333 nova_compute[244014]: 2026-02-25 12:51:45.549 244018 INFO nova.compute.claims [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:51:45 np0005629333 nova_compute[244014]: 2026-02-25 12:51:45.668 244018 DEBUG oslo_concurrency.processutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:51:45 np0005629333 nova_compute[244014]: 2026-02-25 12:51:45.917 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:51:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/869653802' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:51:46 np0005629333 nova_compute[244014]: 2026-02-25 12:51:46.182 244018 DEBUG oslo_concurrency.processutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
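Nova's RBD image backend sizes its DISK_GB inventory by shelling out to ceph df, exactly the CMD logged above. Reproducing that probe and pulling the totals out of the JSON, as a sketch that assumes the same client id and conf path are readable:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout

    stats = json.loads(out)['stats']
    print('total: %.1f GiB, avail: %.1f GiB' % (
        stats['total_bytes'] / 2**30, stats['total_avail_bytes'] / 2**30))

Against the pgmap figures above this prints roughly 60 GiB total, 59 GiB available.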
Feb 25 07:51:46 np0005629333 nova_compute[244014]: 2026-02-25 12:51:46.188 244018 DEBUG nova.compute.provider_tree [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:51:46 np0005629333 nova_compute[244014]: 2026-02-25 12:51:46.279 244018 DEBUG nova.scheduler.client.report [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:51:46 np0005629333 nova_compute[244014]: 2026-02-25 12:51:46.316 244018 DEBUG nova.network.neutron [req-8d6c1200-80d6-4443-94e6-2dd1fc4a14f5 req-15cc9dd3-e246-448c-a877-6a709adb928c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Updated VIF entry in instance network info cache for port 9981394d-e733-404e-85a5-e2e51877881a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:51:46 np0005629333 nova_compute[244014]: 2026-02-25 12:51:46.317 244018 DEBUG nova.network.neutron [req-8d6c1200-80d6-4443-94e6-2dd1fc4a14f5 req-15cc9dd3-e246-448c-a877-6a709adb928c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Updating instance_info_cache with network_info: [{"id": "9981394d-e733-404e-85a5-e2e51877881a", "address": "fa:16:3e:2b:92:99", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9981394d-e7", "ovs_interfaceid": "9981394d-e733-404e-85a5-e2e51877881a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:51:46 np0005629333 nova_compute[244014]: 2026-02-25 12:51:46.349 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:51:46 np0005629333 nova_compute[244014]: 2026-02-25 12:51:46.350 244018 DEBUG nova.compute.manager [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:51:46 np0005629333 nova_compute[244014]: 2026-02-25 12:51:46.356 244018 DEBUG oslo_concurrency.lockutils [req-8d6c1200-80d6-4443-94e6-2dd1fc4a14f5 req-15cc9dd3-e246-448c-a877-6a709adb928c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:51:46 np0005629333 nova_compute[244014]: 2026-02-25 12:51:46.417 244018 DEBUG nova.compute.manager [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:51:46 np0005629333 nova_compute[244014]: 2026-02-25 12:51:46.418 244018 DEBUG nova.network.neutron [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:51:46 np0005629333 nova_compute[244014]: 2026-02-25 12:51:46.477 244018 INFO nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:51:46 np0005629333 nova_compute[244014]: 2026-02-25 12:51:46.560 244018 DEBUG nova.compute.manager [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:51:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2136: 305 pgs: 305 active+clean; 200 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 07:51:46 np0005629333 nova_compute[244014]: 2026-02-25 12:51:46.619 244018 DEBUG nova.policy [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
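
Note: the failed network:attach_external_network check above is expected for a non-admin tempest user; nova.policy delegates the decision to oslo.policy. A hypothetical minimal reproduction (the 'role:admin' default is an illustrative assumption, not nova's exact policy default):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'role:admin'))

    # Credentials mirror the log: roles reader/member, is_admin False.
    creds = {'roles': ['reader', 'member'], 'is_admin': False}
    print(enforcer.enforce('network:attach_external_network', {}, creds))
    # -> False, matching the "Policy check ... failed" line above
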
Feb 25 07:51:46 np0005629333 nova_compute[244014]: 2026-02-25 12:51:46.660 244018 DEBUG nova.compute.manager [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:51:46 np0005629333 nova_compute[244014]: 2026-02-25 12:51:46.662 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:51:46 np0005629333 nova_compute[244014]: 2026-02-25 12:51:46.663 244018 INFO nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Creating image(s)#033[00m
Feb 25 07:51:46 np0005629333 nova_compute[244014]: 2026-02-25 12:51:46.696 244018 DEBUG nova.storage.rbd_utils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 809da994-7551-4f52-8920-b0dfaa2ef73e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:51:46 np0005629333 nova_compute[244014]: 2026-02-25 12:51:46.733 244018 DEBUG nova.storage.rbd_utils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 809da994-7551-4f52-8920-b0dfaa2ef73e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:51:46 np0005629333 nova_compute[244014]: 2026-02-25 12:51:46.763 244018 DEBUG nova.storage.rbd_utils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 809da994-7551-4f52-8920-b0dfaa2ef73e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:51:46 np0005629333 nova_compute[244014]: 2026-02-25 12:51:46.767 244018 DEBUG oslo_concurrency.processutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:51:46 np0005629333 nova_compute[244014]: 2026-02-25 12:51:46.868 244018 DEBUG oslo_concurrency.processutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
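
Note: the prlimit wrapper logged above (--as=1073741824 --cpu=30) is oslo.concurrency's guard against qemu-img hanging or ballooning on a malformed image. A sketch of the same guarded invocation, with the path taken from this log:

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1024 * 1024 * 1024,  # --as=1073741824 in the log
        cpu_time=30)                       # --cpu=30
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=limits)
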
Feb 25 07:51:46 np0005629333 nova_compute[244014]: 2026-02-25 12:51:46.869 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:51:46 np0005629333 nova_compute[244014]: 2026-02-25 12:51:46.870 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:51:46 np0005629333 nova_compute[244014]: 2026-02-25 12:51:46.870 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
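
Note: the acquire/release pairs logged here (and for "compute_resources" earlier) come from oslo.concurrency's lock helpers. A minimal sketch of both spellings, assuming nothing beyond oslo.concurrency itself:

    from oslo_concurrency import lockutils

    # Decorator form: produces the "acquired by ... inner" DEBUG lines above,
    # serializing base-image fetches per content hash.
    @lockutils.synchronized('a63dc6dbb387022d47a8ca49bddcc4af2508a4d6')
    def fetch_base_image():
        pass  # fetch/convert the base image exactly once per hash

    # Context-manager form: produces the Acquiring/Acquired/Releasing lines
    # seen for the "refresh_cache-<uuid>" locks elsewhere in this log.
    with lockutils.lock('compute_resources'):
        pass
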
Feb 25 07:51:46 np0005629333 nova_compute[244014]: 2026-02-25 12:51:46.894 244018 DEBUG nova.storage.rbd_utils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 809da994-7551-4f52-8920-b0dfaa2ef73e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:51:46 np0005629333 nova_compute[244014]: 2026-02-25 12:51:46.897 244018 DEBUG oslo_concurrency.processutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 809da994-7551-4f52-8920-b0dfaa2ef73e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:51:47 np0005629333 nova_compute[244014]: 2026-02-25 12:51:47.186 244018 DEBUG oslo_concurrency.processutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 809da994-7551-4f52-8920-b0dfaa2ef73e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:51:47 np0005629333 nova_compute[244014]: 2026-02-25 12:51:47.234 244018 DEBUG nova.storage.rbd_utils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] resizing rbd image 809da994-7551-4f52-8920-b0dfaa2ef73e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
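
Note: after importing the base image into the vms pool via the rbd CLI (previous lines), nova resizes the new disk to the flavor's root_gb (1 GiB for m1.nano, hence 1073741824 bytes). An equivalent standalone resize through the rbd python bindings, assuming python3-rados/python3-rbd and the same client.openstack credentials:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')  # pool from the rbd import above
    try:
        image = rbd.Image(ioctx, '809da994-7551-4f52-8920-b0dfaa2ef73e_disk')
        try:
            image.resize(1 * 1024 ** 3)  # 1073741824 bytes, matching the log
        finally:
            image.close()
    finally:
        ioctx.close()
        cluster.shutdown()
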
Feb 25 07:51:47 np0005629333 nova_compute[244014]: 2026-02-25 12:51:47.297 244018 DEBUG nova.objects.instance [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'migration_context' on Instance uuid 809da994-7551-4f52-8920-b0dfaa2ef73e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:51:47 np0005629333 nova_compute[244014]: 2026-02-25 12:51:47.310 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:51:47 np0005629333 nova_compute[244014]: 2026-02-25 12:51:47.310 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Ensure instance console log exists: /var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:51:47 np0005629333 nova_compute[244014]: 2026-02-25 12:51:47.310 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:51:47 np0005629333 nova_compute[244014]: 2026-02-25 12:51:47.310 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:51:47 np0005629333 nova_compute[244014]: 2026-02-25 12:51:47.311 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:51:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:51:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1069909818' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:51:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:51:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1069909818' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 07:51:47 np0005629333 nova_compute[244014]: 2026-02-25 12:51:47.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
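
Note: _sync_scheduler_instance_info is one of ComputeManager's oslo.service periodic tasks, interleaving with the spawn in this log. The registration pattern, sketched (the spacing value is an assumption; nova's real interval is config-driven):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)
        def _sync_scheduler_instance_info(self, context):
            pass  # push this host's instance list back to the scheduler
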
Feb 25 07:51:48 np0005629333 nova_compute[244014]: 2026-02-25 12:51:48.274 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:48.273 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:51:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:48.276 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 25 07:51:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 305 active+clean; 209 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 294 KiB/s wr, 74 op/s
Feb 25 07:51:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:51:48 np0005629333 ovn_controller[147040]: 2026-02-25T12:51:48Z|00159|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2b:92:99 10.100.0.7
Feb 25 07:51:48 np0005629333 ovn_controller[147040]: 2026-02-25T12:51:48Z|00160|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2b:92:99 10.100.0.7
Feb 25 07:51:49 np0005629333 nova_compute[244014]: 2026-02-25 12:51:49.089 244018 DEBUG nova.network.neutron [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Successfully created port: 4f59e1f7-f07c-48a1-82b4-b6a563a7130a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:51:50 np0005629333 nova_compute[244014]: 2026-02-25 12:51:50.379 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:50 np0005629333 nova_compute[244014]: 2026-02-25 12:51:50.439 244018 DEBUG nova.network.neutron [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Successfully updated port: 4f59e1f7-f07c-48a1-82b4-b6a563a7130a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:51:50 np0005629333 nova_compute[244014]: 2026-02-25 12:51:50.475 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:51:50 np0005629333 nova_compute[244014]: 2026-02-25 12:51:50.476 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:51:50 np0005629333 nova_compute[244014]: 2026-02-25 12:51:50.477 244018 DEBUG nova.network.neutron [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:51:50 np0005629333 nova_compute[244014]: 2026-02-25 12:51:50.537 244018 DEBUG nova.compute.manager [req-d8e01e7e-abb7-40d4-ad75-745fba325e40 req-6c5c0ceb-b67d-41e9-8a74-5b5ad3abfd4c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-changed-4f59e1f7-f07c-48a1-82b4-b6a563a7130a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:51:50 np0005629333 nova_compute[244014]: 2026-02-25 12:51:50.538 244018 DEBUG nova.compute.manager [req-d8e01e7e-abb7-40d4-ad75-745fba325e40 req-6c5c0ceb-b67d-41e9-8a74-5b5ad3abfd4c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Refreshing instance network info cache due to event network-changed-4f59e1f7-f07c-48a1-82b4-b6a563a7130a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:51:50 np0005629333 nova_compute[244014]: 2026-02-25 12:51:50.538 244018 DEBUG oslo_concurrency.lockutils [req-d8e01e7e-abb7-40d4-ad75-745fba325e40 req-6c5c0ceb-b67d-41e9-8a74-5b5ad3abfd4c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:51:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2138: 305 pgs: 305 active+clean; 209 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 294 KiB/s wr, 74 op/s
Feb 25 07:51:50 np0005629333 nova_compute[244014]: 2026-02-25 12:51:50.631 244018 DEBUG nova.network.neutron [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:51:50 np0005629333 nova_compute[244014]: 2026-02-25 12:51:50.920 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:51 np0005629333 nova_compute[244014]: 2026-02-25 12:51:51.937 244018 DEBUG nova.network.neutron [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updating instance_info_cache with network_info: [{"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:51:52 np0005629333 nova_compute[244014]: 2026-02-25 12:51:52.081 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:51:52 np0005629333 nova_compute[244014]: 2026-02-25 12:51:52.082 244018 DEBUG nova.compute.manager [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Instance network_info: |[{"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:51:52 np0005629333 nova_compute[244014]: 2026-02-25 12:51:52.082 244018 DEBUG oslo_concurrency.lockutils [req-d8e01e7e-abb7-40d4-ad75-745fba325e40 req-6c5c0ceb-b67d-41e9-8a74-5b5ad3abfd4c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:51:52 np0005629333 nova_compute[244014]: 2026-02-25 12:51:52.083 244018 DEBUG nova.network.neutron [req-d8e01e7e-abb7-40d4-ad75-745fba325e40 req-6c5c0ceb-b67d-41e9-8a74-5b5ad3abfd4c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Refreshing network info cache for port 4f59e1f7-f07c-48a1-82b4-b6a563a7130a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:51:52 np0005629333 nova_compute[244014]: 2026-02-25 12:51:52.088 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Start _get_guest_xml network_info=[{"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:51:52 np0005629333 nova_compute[244014]: 2026-02-25 12:51:52.095 244018 WARNING nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:51:52 np0005629333 nova_compute[244014]: 2026-02-25 12:51:52.101 244018 DEBUG nova.virt.libvirt.host [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:51:52 np0005629333 nova_compute[244014]: 2026-02-25 12:51:52.102 244018 DEBUG nova.virt.libvirt.host [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:51:52 np0005629333 nova_compute[244014]: 2026-02-25 12:51:52.107 244018 DEBUG nova.virt.libvirt.host [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:51:52 np0005629333 nova_compute[244014]: 2026-02-25 12:51:52.108 244018 DEBUG nova.virt.libvirt.host [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:51:52 np0005629333 nova_compute[244014]: 2026-02-25 12:51:52.108 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:51:52 np0005629333 nova_compute[244014]: 2026-02-25 12:51:52.109 244018 DEBUG nova.virt.hardware [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:51:52 np0005629333 nova_compute[244014]: 2026-02-25 12:51:52.109 244018 DEBUG nova.virt.hardware [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:51:52 np0005629333 nova_compute[244014]: 2026-02-25 12:51:52.110 244018 DEBUG nova.virt.hardware [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:51:52 np0005629333 nova_compute[244014]: 2026-02-25 12:51:52.110 244018 DEBUG nova.virt.hardware [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:51:52 np0005629333 nova_compute[244014]: 2026-02-25 12:51:52.111 244018 DEBUG nova.virt.hardware [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:51:52 np0005629333 nova_compute[244014]: 2026-02-25 12:51:52.111 244018 DEBUG nova.virt.hardware [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:51:52 np0005629333 nova_compute[244014]: 2026-02-25 12:51:52.112 244018 DEBUG nova.virt.hardware [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:51:52 np0005629333 nova_compute[244014]: 2026-02-25 12:51:52.112 244018 DEBUG nova.virt.hardware [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:51:52 np0005629333 nova_compute[244014]: 2026-02-25 12:51:52.113 244018 DEBUG nova.virt.hardware [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:51:52 np0005629333 nova_compute[244014]: 2026-02-25 12:51:52.113 244018 DEBUG nova.virt.hardware [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:51:52 np0005629333 nova_compute[244014]: 2026-02-25 12:51:52.113 244018 DEBUG nova.virt.hardware [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
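
Note: with no flavor or image topology constraints, the search above degenerates to the single factorization of 1 vCPU, hence sockets=1, cores=1, threads=1. A hypothetical simplification of the enumeration (nova's real logic in nova/virt/hardware.py also applies preferences and NUMA fitting):

    # Enumerate (sockets, cores, threads) triples whose product equals the
    # flavor's vcpus, within the 65536 per-dimension limits from the log.
    def possible_topologies(vcpus, max_each=65536):
        for sockets in range(1, min(vcpus, max_each) + 1):
            for cores in range(1, min(vcpus, max_each) + 1):
                for threads in range(1, min(vcpus, max_each) + 1):
                    if sockets * cores * threads == vcpus:
                        yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], as logged
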
Feb 25 07:51:52 np0005629333 nova_compute[244014]: 2026-02-25 12:51:52.118 244018 DEBUG oslo_concurrency.processutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:51:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 163 op/s
Feb 25 07:51:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:51:52 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/188310535' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:51:52 np0005629333 nova_compute[244014]: 2026-02-25 12:51:52.662 244018 DEBUG oslo_concurrency.processutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:51:52 np0005629333 nova_compute[244014]: 2026-02-25 12:51:52.686 244018 DEBUG nova.storage.rbd_utils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 809da994-7551-4f52-8920-b0dfaa2ef73e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:51:52 np0005629333 nova_compute[244014]: 2026-02-25 12:51:52.689 244018 DEBUG oslo_concurrency.processutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:51:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:51:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4225123064' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.219 244018 DEBUG oslo_concurrency.processutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.222 244018 DEBUG nova.virt.libvirt.vif [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1151259173',display_name='tempest-TestNetworkBasicOps-server-1151259173',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1151259173',id=127,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuhKTGrJ6xoqtXbC9i6shxmOzzCAmEiPLvEhcBT9lMLvpNHET3NrmwNha38Zzx8OOcER4UhJ6EWWvnBqNIlR5/VZu+vuQ6n1q9c1LTSLn17vfclbttzgZoR8PFOeBob+Q==',key_name='tempest-TestNetworkBasicOps-1930796261',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ijtjj0g2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:51:46Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=809da994-7551-4f52-8920-b0dfaa2ef73e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.223 244018 DEBUG nova.network.os_vif_util [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.224 244018 DEBUG nova.network.os_vif_util [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:63:14,bridge_name='br-int',has_traffic_filtering=True,id=4f59e1f7-f07c-48a1-82b4-b6a563a7130a,network=Network(526ae63c-3640-4e70-a308-56e7a67e4cf2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f59e1f7-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
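
Note: the converter emits an os-vif versioned object; the fields printed above can be reconstructed directly. A sketch assuming only the os-vif library, with the network field omitted for brevity:

    from os_vif.objects import vif as osv_vif

    # Values copied from the Converted object line above.
    v = osv_vif.VIFOpenVSwitch(
        id='4f59e1f7-f07c-48a1-82b4-b6a563a7130a',
        address='fa:16:3e:c3:63:14',
        bridge_name='br-int',
        vif_name='tap4f59e1f7-f0',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False)
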
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.226 244018 DEBUG nova.objects.instance [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_devices' on Instance uuid 809da994-7551-4f52-8920-b0dfaa2ef73e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.247 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:51:53 np0005629333 nova_compute[244014]:  <uuid>809da994-7551-4f52-8920-b0dfaa2ef73e</uuid>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:  <name>instance-0000007f</name>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestNetworkBasicOps-server-1151259173</nova:name>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:51:52</nova:creationTime>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:51:53 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:        <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:        <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:        <nova:port uuid="4f59e1f7-f07c-48a1-82b4-b6a563a7130a">
Feb 25 07:51:53 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <entry name="serial">809da994-7551-4f52-8920-b0dfaa2ef73e</entry>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <entry name="uuid">809da994-7551-4f52-8920-b0dfaa2ef73e</entry>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/809da994-7551-4f52-8920-b0dfaa2ef73e_disk">
Feb 25 07:51:53 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:51:53 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/809da994-7551-4f52-8920-b0dfaa2ef73e_disk.config">
Feb 25 07:51:53 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:51:53 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:c3:63:14"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <target dev="tap4f59e1f7-f0"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/console.log" append="off"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:51:53 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:51:53 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:51:53 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:51:53 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
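Annotation: the block above is the full guest XML that nova-compute's _get_guest_xml built for instance 809da994. As a minimal sketch of how such a definition reaches the hypervisor via libvirt-python (the stub XML and connection URI here are illustrative; Nova's driver goes through its own Guest wrapper rather than these raw calls):

import libvirt  # libvirt-python bindings

# Stub standing in for the full <domain> document logged above.
DOMAIN_XML = """<domain type='kvm'>
  <name>instance-0000007f</name>
  <memory unit='MiB'>128</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64' machine='q35'>hvm</type></os>
</domain>"""

conn = libvirt.open("qemu:///system")   # local system hypervisor
try:
    dom = conn.defineXML(DOMAIN_XML)    # persist the definition
    # Start paused; the log below shows the same Paused -> Resumed
    # sequence once the VIF is plugged and the guest is ready to run.
    dom.createWithFlags(libvirt.VIR_DOMAIN_START_PAUSED)
finally:
    conn.close()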
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.249 244018 DEBUG nova.compute.manager [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Preparing to wait for external event network-vif-plugged-4f59e1f7-f07c-48a1-82b4-b6a563a7130a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.250 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.250 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.251 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
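Annotation: the Acquiring/acquired/"released" triple above is oslo.concurrency's lockutils guarding the per-instance event registry. A minimal sketch of the same pattern (the dictionary contents are illustrative):

from oslo_concurrency import lockutils

instance_events = {}

# Context-manager form; this is what emits the DEBUG triples in the log.
with lockutils.lock("809da994-7551-4f52-8920-b0dfaa2ef73e-events"):
    instance_events.setdefault("network-vif-plugged", [])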
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.252 244018 DEBUG nova.virt.libvirt.vif [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1151259173',display_name='tempest-TestNetworkBasicOps-server-1151259173',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1151259173',id=127,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuhKTGrJ6xoqtXbC9i6shxmOzzCAmEiPLvEhcBT9lMLvpNHET3NrmwNha38Zzx8OOcER4UhJ6EWWvnBqNIlR5/VZu+vuQ6n1q9c1LTSLn17vfclbttzgZoR8PFOeBob+Q==',key_name='tempest-TestNetworkBasicOps-1930796261',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ijtjj0g2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:51:46Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=809da994-7551-4f52-8920-b0dfaa2ef73e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.253 244018 DEBUG nova.network.os_vif_util [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.254 244018 DEBUG nova.network.os_vif_util [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:63:14,bridge_name='br-int',has_traffic_filtering=True,id=4f59e1f7-f07c-48a1-82b4-b6a563a7130a,network=Network(526ae63c-3640-4e70-a308-56e7a67e4cf2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f59e1f7-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.255 244018 DEBUG os_vif [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:63:14,bridge_name='br-int',has_traffic_filtering=True,id=4f59e1f7-f07c-48a1-82b4-b6a563a7130a,network=Network(526ae63c-3640-4e70-a308-56e7a67e4cf2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f59e1f7-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.256 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.257 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.258 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.262 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.263 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f59e1f7-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.264 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4f59e1f7-f0, col_values=(('external_ids', {'iface-id': '4f59e1f7-f07c-48a1-82b4-b6a563a7130a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c3:63:14', 'vm-uuid': '809da994-7551-4f52-8920-b0dfaa2ef73e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
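Annotation: AddBridgeCommand, AddPortCommand and DbSetCommand above are ovsdbapp transaction commands. A rough sketch of issuing the same transaction directly against the local OVSDB (the socket path is assumed; in this deployment the calls are made by os-vif, not hand-rolled code):

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                      "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

# One transaction, two commands, mirroring txn n=1 idx=0/idx=1 above.
with api.transaction(check_error=True) as txn:
    txn.add(api.add_port("br-int", "tap4f59e1f7-f0", may_exist=True))
    txn.add(api.db_set(
        "Interface", "tap4f59e1f7-f0",
        ("external_ids",
         {"iface-id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a",
          "attached-mac": "fa:16:3e:c3:63:14"})))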
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.266 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:53 np0005629333 NetworkManager[49836]: <info>  [1772023913.2675] manager: (tap4f59e1f7-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/559)
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.268 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.275 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.276 244018 INFO os_vif [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:63:14,bridge_name='br-int',has_traffic_filtering=True,id=4f59e1f7-f07c-48a1-82b4-b6a563a7130a,network=Network(526ae63c-3640-4e70-a308-56e7a67e4cf2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f59e1f7-f0')#033[00m
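Annotation: os_vif.plug() at os_vif/__init__.py:76 is the whole public surface exercised here. A hedged sketch of the same call (field values copied from the VIFOpenVSwitch dump above; the InstanceInfo name is illustrative):

import os_vif
from os_vif.objects import instance_info, network, vif as vif_obj

os_vif.initialize()   # loads the ovs plugin, among others

net = network.Network(id="526ae63c-3640-4e70-a308-56e7a67e4cf2",
                      bridge="br-int")
vif = vif_obj.VIFOpenVSwitch(id="4f59e1f7-f07c-48a1-82b4-b6a563a7130a",
                             address="fa:16:3e:c3:63:14",
                             bridge_name="br-int",
                             vif_name="tap4f59e1f7-f0",
                             network=net)
inst = instance_info.InstanceInfo(
    uuid="809da994-7551-4f52-8920-b0dfaa2ef73e",
    name="instance-0000007f")

os_vif.plug(vif, inst)   # produces the "Successfully plugged vif" INFO line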
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.333 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.334 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.334 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:c3:63:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.334 244018 INFO nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Using config drive#033[00m
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.354 244018 DEBUG nova.storage.rbd_utils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 809da994-7551-4f52-8920-b0dfaa2ef73e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:51:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.685 244018 INFO nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Creating config drive at /var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/disk.config#033[00m
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.688 244018 DEBUG oslo_concurrency.processutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpuvt8xfhc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.829 244018 DEBUG oslo_concurrency.processutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpuvt8xfhc" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.867 244018 DEBUG nova.storage.rbd_utils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 809da994-7551-4f52-8920-b0dfaa2ef73e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:51:53 np0005629333 nova_compute[244014]: 2026-02-25 12:51:53.872 244018 DEBUG oslo_concurrency.processutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/disk.config 809da994-7551-4f52-8920-b0dfaa2ef73e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:51:54 np0005629333 nova_compute[244014]: 2026-02-25 12:51:54.026 244018 DEBUG oslo_concurrency.processutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/disk.config 809da994-7551-4f52-8920-b0dfaa2ef73e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:51:54 np0005629333 nova_compute[244014]: 2026-02-25 12:51:54.026 244018 INFO nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Deleting local config drive /var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/disk.config because it was imported into RBD.#033[00m
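Annotation: the config-drive sequence above is two external commands: mkisofs builds an ISO9660 image labelled config-2, then rbd import moves it into the vms pool so the local copy can be deleted. Sketched with oslo.concurrency's processutils (the flags mirror the CMD lines above; the staging directory is hypothetical, the log used a tmpdir):

from oslo_concurrency import processutils

iso = ("/var/lib/nova/instances/"
       "809da994-7551-4f52-8920-b0dfaa2ef73e/disk.config")

processutils.execute("mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                     "-allow-multidot", "-l", "-quiet", "-J", "-r",
                     "-V", "config-2", "/tmp/metadata_staging")
processutils.execute("rbd", "import", "--pool", "vms", iso,
                     "809da994-7551-4f52-8920-b0dfaa2ef73e_disk.config",
                     "--image-format=2", "--id", "openstack",
                     "--conf", "/etc/ceph/ceph.conf")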
Feb 25 07:51:54 np0005629333 kernel: tap4f59e1f7-f0: entered promiscuous mode
Feb 25 07:51:54 np0005629333 NetworkManager[49836]: <info>  [1772023914.0793] manager: (tap4f59e1f7-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/560)
Feb 25 07:51:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:51:54Z|01343|binding|INFO|Claiming lport 4f59e1f7-f07c-48a1-82b4-b6a563a7130a for this chassis.
Feb 25 07:51:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:51:54Z|01344|binding|INFO|4f59e1f7-f07c-48a1-82b4-b6a563a7130a: Claiming fa:16:3e:c3:63:14 10.100.0.12
Feb 25 07:51:54 np0005629333 nova_compute[244014]: 2026-02-25 12:51:54.084 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:51:54Z|01345|binding|INFO|Setting lport 4f59e1f7-f07c-48a1-82b4-b6a563a7130a ovn-installed in OVS
Feb 25 07:51:54 np0005629333 nova_compute[244014]: 2026-02-25 12:51:54.092 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:54 np0005629333 nova_compute[244014]: 2026-02-25 12:51:54.095 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:54 np0005629333 systemd-machined[210048]: New machine qemu-159-instance-0000007f.
Feb 25 07:51:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:51:54Z|01346|binding|INFO|Setting lport 4f59e1f7-f07c-48a1-82b4-b6a563a7130a up in Southbound
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.131 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:63:14 10.100.0.12'], port_security=['fa:16:3e:c3:63:14 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '809da994-7551-4f52-8920-b0dfaa2ef73e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-526ae63c-3640-4e70-a308-56e7a67e4cf2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3e3f0007-e379-4ef5-bc52-5669e937e826', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9099fb32-6839-4ebe-bdc3-ec67cc06d180, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=4f59e1f7-f07c-48a1-82b4-b6a563a7130a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.134 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 4f59e1f7-f07c-48a1-82b4-b6a563a7130a in datapath 526ae63c-3640-4e70-a308-56e7a67e4cf2 bound to our chassis#033[00m
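Annotation: the "Matched UPDATE: PortBindingUpdatedEvent" line above is ovsdbapp's row-event machinery firing when the Port_Binding row gains a chassis. A minimal sketch of such an event class (the class body is illustrative, not Neutron's exact code; a real agent registers it with the IDL's notify handler via watch_event()):

from ovsdbapp.backend.ovs_idl import event as idl_event

class PortBoundEvent(idl_event.RowEvent):
    """Fires on Port_Binding row updates, cf. the Matched UPDATE above."""

    def __init__(self):
        super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

    def run(self, event, row, old):
        print("lport %s bound, chassis=%s" % (row.logical_port, row.chassis))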
Feb 25 07:51:54 np0005629333 systemd[1]: Started Virtual Machine qemu-159-instance-0000007f.
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.139 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 526ae63c-3640-4e70-a308-56e7a67e4cf2#033[00m
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.150 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[302073b1-c067-472d-98c3-60d2e9a1cd1e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.152 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap526ae63c-31 in ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.155 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap526ae63c-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.155 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1455a863-1660-4656-8f19-1de7f6403898]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.156 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c59de4e7-db14-48a3-aea2-43b4722b1c85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:54 np0005629333 systemd-udevd[359897]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.171 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[e369750d-34f3-490d-8f67-42a1bd3924ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:54 np0005629333 NetworkManager[49836]: <info>  [1772023914.1801] device (tap4f59e1f7-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:51:54 np0005629333 NetworkManager[49836]: <info>  [1772023914.1808] device (tap4f59e1f7-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.185 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b5364a42-73f9-4aa6-a09f-7ab70ec6a10d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.219 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5bd05c65-d051-4fed-a64b-9919584f59db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:54 np0005629333 NetworkManager[49836]: <info>  [1772023914.2272] manager: (tap526ae63c-30): new Veth device (/org/freedesktop/NetworkManager/Devices/561)
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.226 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f4193afe-4cd6-49ab-b50c-92073cdcd9dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.266 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[61995f98-d8fc-4e6c-8b58-84af7d2e33e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.270 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[098db765-0630-4d16-bc5c-04559da5b97e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:54 np0005629333 NetworkManager[49836]: <info>  [1772023914.2974] device (tap526ae63c-30): carrier: link connected
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.306 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d2ab3389-7d1d-4777-a12d-12a341507ad7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.323 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fbd522ff-a0ce-4015-8efd-dcb53e635f56]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap526ae63c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:f2:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 401], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588370, 'reachable_time': 33616, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359929, 'error': None, 'target': 'ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.340 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7b7d2e04-90f6-4a59-ba4e-5c1f22a1f317]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4d:f27b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588370, 'tstamp': 588370}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 359930, 'error': None, 'target': 'ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.356 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e9dadb63-e84f-472d-8c8a-f3e72d325e1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap526ae63c-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4d:f2:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 401], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588370, 'reachable_time': 33616, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 359931, 'error': None, 'target': 'ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.379 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4b392db4-caa9-462b-990b-cb281452eff7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.443 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b2120c06-204b-4ba8-9c49-acc51a5c5664]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.447 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap526ae63c-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.448 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.448 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap526ae63c-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:51:54 np0005629333 nova_compute[244014]: 2026-02-25 12:51:54.450 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:54 np0005629333 kernel: tap526ae63c-30: entered promiscuous mode
Feb 25 07:51:54 np0005629333 NetworkManager[49836]: <info>  [1772023914.4537] manager: (tap526ae63c-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/562)
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.464 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap526ae63c-30, col_values=(('external_ids', {'iface-id': '1599c73d-07eb-42e9-83bd-2cf546347a5b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:51:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:51:54Z|01347|binding|INFO|Releasing lport 1599c73d-07eb-42e9-83bd-2cf546347a5b from this chassis (sb_readonly=0)
Feb 25 07:51:54 np0005629333 nova_compute[244014]: 2026-02-25 12:51:54.465 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:54 np0005629333 nova_compute[244014]: 2026-02-25 12:51:54.466 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.470 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/526ae63c-3640-4e70-a308-56e7a67e4cf2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/526ae63c-3640-4e70-a308-56e7a67e4cf2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.471 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b050fae9-3b15-4852-9e27-beae8c1415e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.472 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-526ae63c-3640-4e70-a308-56e7a67e4cf2
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/526ae63c-3640-4e70-a308-56e7a67e4cf2.pid.haproxy
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 526ae63c-3640-4e70-a308-56e7a67e4cf2
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:51:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:54.473 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2', 'env', 'PROCESS_TAG=haproxy-526ae63c-3640-4e70-a308-56e7a67e4cf2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/526ae63c-3640-4e70-a308-56e7a67e4cf2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
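Annotation: the rootwrapped command above boils down to running haproxy inside the ovnmeta- namespace with the config just rendered. Stripped of rootwrap, an equivalent launch is roughly (requires root; paths follow the log):

import subprocess

netns = "ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2"
cfg = ("/var/lib/neutron/ovn-metadata-proxy/"
       "526ae63c-3640-4e70-a308-56e7a67e4cf2.conf")

subprocess.run(["ip", "netns", "exec", netns, "haproxy", "-f", cfg],
               check=True)   # haproxy daemonizes per the pidfile/daemon stanza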
Feb 25 07:51:54 np0005629333 nova_compute[244014]: 2026-02-25 12:51:54.476 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2140: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Feb 25 07:51:54 np0005629333 nova_compute[244014]: 2026-02-25 12:51:54.803 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023914.8030102, 809da994-7551-4f52-8920-b0dfaa2ef73e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:51:54 np0005629333 nova_compute[244014]: 2026-02-25 12:51:54.805 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] VM Started (Lifecycle Event)#033[00m
Feb 25 07:51:54 np0005629333 nova_compute[244014]: 2026-02-25 12:51:54.825 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:51:54 np0005629333 nova_compute[244014]: 2026-02-25 12:51:54.831 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023914.8044746, 809da994-7551-4f52-8920-b0dfaa2ef73e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:51:54 np0005629333 nova_compute[244014]: 2026-02-25 12:51:54.832 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:51:54 np0005629333 nova_compute[244014]: 2026-02-25 12:51:54.849 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:51:54 np0005629333 nova_compute[244014]: 2026-02-25 12:51:54.853 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:51:54 np0005629333 nova_compute[244014]: 2026-02-25 12:51:54.870 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
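Annotation: the "Skip" above is Nova declining to reconcile power state while the instance still has a task in flight (task_state 'spawning' during build). The guard, paraphrased as a sketch rather than copied from Nova's source:

import logging

LOG = logging.getLogger(__name__)

def maybe_sync_power_state(instance, vm_power_state):
    # A pending task means another code path owns the instance right now,
    # so acting on the lifecycle event could race with the spawn.
    if instance.task_state is not None:   # e.g. 'spawning' while building
        LOG.info("Pending task (%s); skipping power-state sync.",
                 instance.task_state)
        return
    # ...otherwise reconcile the DB power_state with vm_power_state...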
Feb 25 07:51:54 np0005629333 podman[360004]: 2026-02-25 12:51:54.941320586 +0000 UTC m=+0.070453680 container create ecfea8e647a5e0a1222cbb98f4b3892ab54ecda1af4c00fe810aec476144a552 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 25 07:51:54 np0005629333 nova_compute[244014]: 2026-02-25 12:51:54.979 244018 DEBUG nova.network.neutron [req-d8e01e7e-abb7-40d4-ad75-745fba325e40 req-6c5c0ceb-b67d-41e9-8a74-5b5ad3abfd4c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updated VIF entry in instance network info cache for port 4f59e1f7-f07c-48a1-82b4-b6a563a7130a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:51:54 np0005629333 nova_compute[244014]: 2026-02-25 12:51:54.980 244018 DEBUG nova.network.neutron [req-d8e01e7e-abb7-40d4-ad75-745fba325e40 req-6c5c0ceb-b67d-41e9-8a74-5b5ad3abfd4c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updating instance_info_cache with network_info: [{"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:51:54 np0005629333 systemd[1]: Started libpod-conmon-ecfea8e647a5e0a1222cbb98f4b3892ab54ecda1af4c00fe810aec476144a552.scope.
Feb 25 07:51:54 np0005629333 podman[360004]: 2026-02-25 12:51:54.905776922 +0000 UTC m=+0.034910066 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:51:55 np0005629333 nova_compute[244014]: 2026-02-25 12:51:55.002 244018 DEBUG oslo_concurrency.lockutils [req-d8e01e7e-abb7-40d4-ad75-745fba325e40 req-6c5c0ceb-b67d-41e9-8a74-5b5ad3abfd4c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:51:55 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:51:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:55.029 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:51:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:55.029 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:51:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:55.030 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:51:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f39925cd20db3978a9556b803bb9da1be3764b7cb84297a2c2d11b4e6c0059d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:51:55 np0005629333 podman[360004]: 2026-02-25 12:51:55.046577028 +0000 UTC m=+0.175710132 container init ecfea8e647a5e0a1222cbb98f4b3892ab54ecda1af4c00fe810aec476144a552 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 07:51:55 np0005629333 podman[360004]: 2026-02-25 12:51:55.054537823 +0000 UTC m=+0.183670907 container start ecfea8e647a5e0a1222cbb98f4b3892ab54ecda1af4c00fe810aec476144a552 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:51:55 np0005629333 neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2[360019]: [NOTICE]   (360023) : New worker (360025) forked
Feb 25 07:51:55 np0005629333 neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2[360019]: [NOTICE]   (360023) : Loading success.
Feb 25 07:51:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:51:55.278 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:51:55 np0005629333 nova_compute[244014]: 2026-02-25 12:51:55.382 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 350 KiB/s rd, 3.9 MiB/s wr, 98 op/s
Feb 25 07:51:58 np0005629333 nova_compute[244014]: 2026-02-25 12:51:58.268 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:51:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 3.9 MiB/s wr, 100 op/s
Feb 25 07:51:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:52:00 np0005629333 nova_compute[244014]: 2026-02-25 12:52:00.386 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2143: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 3.7 MiB/s wr, 99 op/s
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.258 244018 DEBUG nova.compute.manager [req-f4c1d7c0-d424-4330-8251-faf2ae93b40d req-b300ba2b-516a-4648-9b57-c0480790dcf1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-vif-plugged-4f59e1f7-f07c-48a1-82b4-b6a563a7130a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.258 244018 DEBUG oslo_concurrency.lockutils [req-f4c1d7c0-d424-4330-8251-faf2ae93b40d req-b300ba2b-516a-4648-9b57-c0480790dcf1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.259 244018 DEBUG oslo_concurrency.lockutils [req-f4c1d7c0-d424-4330-8251-faf2ae93b40d req-b300ba2b-516a-4648-9b57-c0480790dcf1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.259 244018 DEBUG oslo_concurrency.lockutils [req-f4c1d7c0-d424-4330-8251-faf2ae93b40d req-b300ba2b-516a-4648-9b57-c0480790dcf1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.260 244018 DEBUG nova.compute.manager [req-f4c1d7c0-d424-4330-8251-faf2ae93b40d req-b300ba2b-516a-4648-9b57-c0480790dcf1 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Processing event network-vif-plugged-4f59e1f7-f07c-48a1-82b4-b6a563a7130a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.261 244018 DEBUG nova.compute.manager [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
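The Acquiring/acquired/released triplet around _pop_event is oslo.concurrency's standard lock tracing. A minimal sketch of the pattern with a hypothetical body (the real lock name, as the log shows, is the instance UUID plus "-events"):

    from oslo_concurrency import lockutils

    # Each "Acquiring lock ..." / "Lock ... acquired ... waited Ns" /
    # "Lock ... 'released' ... held Ns" trio in the log maps to one pass
    # through this decorator.
    @lockutils.synchronized('809da994-7551-4f52-8920-b0dfaa2ef73e-events')
    def _pop_event(events, key):
        return events.pop(key, None)  # hypothetical body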
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.271 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023921.2709608, 809da994-7551-4f52-8920-b0dfaa2ef73e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.271 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.274 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.279 244018 INFO nova.virt.libvirt.driver [-] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Instance spawned successfully.#033[00m
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.280 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.299 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.307 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
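The numeric states in that sync message are nova.compute.power_state constants; DB 0 versus VM 1 means the database still records NOSTATE while libvirt already reports the guest RUNNING:

    # nova.compute.power_state values, for reading the line above:
    NOSTATE = 0    # DB power_state in the log
    RUNNING = 1    # VM power_state reported by libvirt
    PAUSED = 3
    SHUTDOWN = 4
    CRASHED = 6
    SUSPENDED = 7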
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.314 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.315 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.316 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.316 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.317 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.318 244018 DEBUG nova.virt.libvirt.driver [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
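Collected from the six "Found default for ..." lines, nova pinned these image properties on the instance; setting the same keys explicitly on the image would make them stable across rebuilds:

    # Defaults registered for instance 809da994-... (values from the log):
    image_property_defaults = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }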
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.351 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.436 244018 INFO nova.compute.manager [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Took 14.78 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.437 244018 DEBUG nova.compute.manager [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.495 244018 INFO nova.compute.manager [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Took 16.00 seconds to build instance.#033[00m
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.509 244018 DEBUG oslo_concurrency.lockutils [None req-c5e86bd3-4d90-40e2-8f02-4d66f75373c7 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
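The three durations reported for this boot are consistent with each other: the hypervisor spawn dominates the build, and the build lock is held only marginally longer than the build itself:

    spawn = 14.78       # "Took 14.78 seconds to spawn the instance ..."
    build = 16.00       # "Took 16.00 seconds to build instance."
    lock_held = 16.122  # _locked_do_build_and_run_instance hold time
    pre_spawn_overhead = build - spawn   # ~1.22 s of claim/network/image prep
    lock_overhead = lock_held - build    # ~0.12 s of bookkeeping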
Feb 25 07:52:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:52:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:52:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:52:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:52:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:52:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.652 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "739026fc-9c96-4212-9fa3-e6731e7f61f9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.653 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.676 244018 DEBUG nova.compute.manager [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.756 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.757 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.766 244018 DEBUG nova.virt.hardware [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.767 244018 INFO nova.compute.claims [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:52:01 np0005629333 nova_compute[244014]: 2026-02-25 12:52:01.891 244018 DEBUG oslo_concurrency.processutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:52:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:52:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3239587367' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:52:02 np0005629333 nova_compute[244014]: 2026-02-25 12:52:02.459 244018 DEBUG oslo_concurrency.processutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
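Nova gathers Ceph pool capacity by shelling out. A minimal oslo.concurrency equivalent of the call logged above (same flags, same client id; it returned rc 0 in 0.568 s and its stdout feeds the resource tracker's disk accounting):

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')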
Feb 25 07:52:02 np0005629333 nova_compute[244014]: 2026-02-25 12:52:02.467 244018 DEBUG nova.compute.provider_tree [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:52:02 np0005629333 nova_compute[244014]: 2026-02-25 12:52:02.490 244018 DEBUG nova.scheduler.client.report [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
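Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio, so the provider above advertises:

    # Capacity implied by the inventory dict in the log line above:
    vcpus   = (8    - 0)   * 4.0   # 32 schedulable vCPUs
    ram_mb  = (7679 - 512) * 1.0   # 7167 MB of RAM
    disk_gb = (59   - 1)   * 0.9   # 52.2 GB of disk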
Feb 25 07:52:02 np0005629333 nova_compute[244014]: 2026-02-25 12:52:02.522 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:52:02 np0005629333 nova_compute[244014]: 2026-02-25 12:52:02.523 244018 DEBUG nova.compute.manager [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:52:02 np0005629333 nova_compute[244014]: 2026-02-25 12:52:02.592 244018 DEBUG nova.compute.manager [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:52:02 np0005629333 nova_compute[244014]: 2026-02-25 12:52:02.593 244018 DEBUG nova.network.neutron [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:52:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 661 KiB/s rd, 3.7 MiB/s wr, 110 op/s
Feb 25 07:52:02 np0005629333 nova_compute[244014]: 2026-02-25 12:52:02.613 244018 INFO nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:52:02 np0005629333 nova_compute[244014]: 2026-02-25 12:52:02.645 244018 DEBUG nova.compute.manager [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:52:02 np0005629333 nova_compute[244014]: 2026-02-25 12:52:02.749 244018 DEBUG nova.compute.manager [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:52:02 np0005629333 nova_compute[244014]: 2026-02-25 12:52:02.750 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:52:02 np0005629333 nova_compute[244014]: 2026-02-25 12:52:02.751 244018 INFO nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Creating image(s)#033[00m
Feb 25 07:52:02 np0005629333 nova_compute[244014]: 2026-02-25 12:52:02.775 244018 DEBUG nova.storage.rbd_utils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 739026fc-9c96-4212-9fa3-e6731e7f61f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:52:02 np0005629333 nova_compute[244014]: 2026-02-25 12:52:02.803 244018 DEBUG nova.storage.rbd_utils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 739026fc-9c96-4212-9fa3-e6731e7f61f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:52:02 np0005629333 nova_compute[244014]: 2026-02-25 12:52:02.834 244018 DEBUG nova.storage.rbd_utils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 739026fc-9c96-4212-9fa3-e6731e7f61f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:52:02 np0005629333 nova_compute[244014]: 2026-02-25 12:52:02.839 244018 DEBUG oslo_concurrency.processutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:52:02 np0005629333 nova_compute[244014]: 2026-02-25 12:52:02.925 244018 DEBUG oslo_concurrency.processutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
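The qemu-img probe runs under oslo's prlimit wrapper so a malformed image cannot exhaust the host. A sketch of the same guarded call expressed directly against the processutils API, using the base-image path from the log:

    from oslo_concurrency import processutils

    # --as=1073741824 caps address space at 1 GiB, --cpu=30 caps CPU seconds,
    # matching the logged command line.
    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
    out, _ = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json', prlimit=limits)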
Feb 25 07:52:02 np0005629333 nova_compute[244014]: 2026-02-25 12:52:02.926 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:02 np0005629333 nova_compute[244014]: 2026-02-25 12:52:02.927 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:02 np0005629333 nova_compute[244014]: 2026-02-25 12:52:02.928 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:52:02 np0005629333 nova_compute[244014]: 2026-02-25 12:52:02.961 244018 DEBUG nova.storage.rbd_utils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 739026fc-9c96-4212-9fa3-e6731e7f61f9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:52:02 np0005629333 nova_compute[244014]: 2026-02-25 12:52:02.966 244018 DEBUG oslo_concurrency.processutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 739026fc-9c96-4212-9fa3-e6731e7f61f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:52:03 np0005629333 nova_compute[244014]: 2026-02-25 12:52:03.032 244018 DEBUG nova.policy [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:52:03 np0005629333 nova_compute[244014]: 2026-02-25 12:52:03.261 244018 DEBUG oslo_concurrency.processutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 739026fc-9c96-4212-9fa3-e6731e7f61f9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:52:03 np0005629333 nova_compute[244014]: 2026-02-25 12:52:03.302 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:03 np0005629333 nova_compute[244014]: 2026-02-25 12:52:03.341 244018 DEBUG nova.storage.rbd_utils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image 739026fc-9c96-4212-9fa3-e6731e7f61f9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
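After the CLI import, nova's rbd_utils does the resize in-process through the python rbd bindings. A minimal sketch of that resize under the same pool, client id, and target size as the log (assuming the python-rados/python-rbd packages present on this host):

    import rados
    import rbd

    # Grow vms/739026fc-..._disk to the flavor's 1 GiB root disk, as logged.
    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx,
                           '739026fc-9c96-4212-9fa3-e6731e7f61f9_disk') as image:
                image.resize(1073741824)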
Feb 25 07:52:03 np0005629333 nova_compute[244014]: 2026-02-25 12:52:03.378 244018 DEBUG nova.compute.manager [req-6c525655-3dea-4eb6-a282-e638921d9030 req-43d0bccc-8ebf-47f4-bf11-4903aa99a3bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-vif-plugged-4f59e1f7-f07c-48a1-82b4-b6a563a7130a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:52:03 np0005629333 nova_compute[244014]: 2026-02-25 12:52:03.379 244018 DEBUG oslo_concurrency.lockutils [req-6c525655-3dea-4eb6-a282-e638921d9030 req-43d0bccc-8ebf-47f4-bf11-4903aa99a3bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:03 np0005629333 nova_compute[244014]: 2026-02-25 12:52:03.379 244018 DEBUG oslo_concurrency.lockutils [req-6c525655-3dea-4eb6-a282-e638921d9030 req-43d0bccc-8ebf-47f4-bf11-4903aa99a3bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:03 np0005629333 nova_compute[244014]: 2026-02-25 12:52:03.379 244018 DEBUG oslo_concurrency.lockutils [req-6c525655-3dea-4eb6-a282-e638921d9030 req-43d0bccc-8ebf-47f4-bf11-4903aa99a3bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:52:03 np0005629333 nova_compute[244014]: 2026-02-25 12:52:03.380 244018 DEBUG nova.compute.manager [req-6c525655-3dea-4eb6-a282-e638921d9030 req-43d0bccc-8ebf-47f4-bf11-4903aa99a3bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] No waiting events found dispatching network-vif-plugged-4f59e1f7-f07c-48a1-82b4-b6a563a7130a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:52:03 np0005629333 nova_compute[244014]: 2026-02-25 12:52:03.380 244018 WARNING nova.compute.manager [req-6c525655-3dea-4eb6-a282-e638921d9030 req-43d0bccc-8ebf-47f4-bf11-4903aa99a3bf 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received unexpected event network-vif-plugged-4f59e1f7-f07c-48a1-82b4-b6a563a7130a for instance with vm_state active and task_state None.#033[00m
Feb 25 07:52:03 np0005629333 nova_compute[244014]: 2026-02-25 12:52:03.432 244018 DEBUG nova.objects.instance [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid 739026fc-9c96-4212-9fa3-e6731e7f61f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:52:03 np0005629333 nova_compute[244014]: 2026-02-25 12:52:03.447 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:52:03 np0005629333 nova_compute[244014]: 2026-02-25 12:52:03.448 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Ensure instance console log exists: /var/lib/nova/instances/739026fc-9c96-4212-9fa3-e6731e7f61f9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:52:03 np0005629333 nova_compute[244014]: 2026-02-25 12:52:03.448 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:03 np0005629333 nova_compute[244014]: 2026-02-25 12:52:03.448 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:03 np0005629333 nova_compute[244014]: 2026-02-25 12:52:03.449 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:52:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:52:04 np0005629333 nova_compute[244014]: 2026-02-25 12:52:04.502 244018 DEBUG nova.network.neutron [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Successfully created port: 61c3ba1d-0d4b-426e-8d04-ce56efd650ae _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:52:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2145: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 317 KiB/s rd, 25 KiB/s wr, 21 op/s
Feb 25 07:52:05 np0005629333 nova_compute[244014]: 2026-02-25 12:52:05.388 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:05 np0005629333 nova_compute[244014]: 2026-02-25 12:52:05.630 244018 DEBUG nova.compute.manager [req-d0634062-9620-48a8-a6ab-8c0d0c875de0 req-901559fe-fee1-4bf0-ba04-7d09bab6f72b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-changed-4f59e1f7-f07c-48a1-82b4-b6a563a7130a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:52:05 np0005629333 nova_compute[244014]: 2026-02-25 12:52:05.631 244018 DEBUG nova.compute.manager [req-d0634062-9620-48a8-a6ab-8c0d0c875de0 req-901559fe-fee1-4bf0-ba04-7d09bab6f72b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Refreshing instance network info cache due to event network-changed-4f59e1f7-f07c-48a1-82b4-b6a563a7130a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:52:05 np0005629333 nova_compute[244014]: 2026-02-25 12:52:05.631 244018 DEBUG oslo_concurrency.lockutils [req-d0634062-9620-48a8-a6ab-8c0d0c875de0 req-901559fe-fee1-4bf0-ba04-7d09bab6f72b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:52:05 np0005629333 nova_compute[244014]: 2026-02-25 12:52:05.632 244018 DEBUG oslo_concurrency.lockutils [req-d0634062-9620-48a8-a6ab-8c0d0c875de0 req-901559fe-fee1-4bf0-ba04-7d09bab6f72b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:52:05 np0005629333 nova_compute[244014]: 2026-02-25 12:52:05.632 244018 DEBUG nova.network.neutron [req-d0634062-9620-48a8-a6ab-8c0d0c875de0 req-901559fe-fee1-4bf0-ba04-7d09bab6f72b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Refreshing network info cache for port 4f59e1f7-f07c-48a1-82b4-b6a563a7130a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:52:05 np0005629333 podman[360222]: 2026-02-25 12:52:05.760562964 +0000 UTC m=+0.094010535 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 07:52:05 np0005629333 podman[360223]: 2026-02-25 12:52:05.773672355 +0000 UTC m=+0.097235907 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.43.0)
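The two health_status=healthy events above come from podman periodically running each container's configured test ('/openstack/healthcheck' per the config_data shown). The same check can be triggered on demand; a small sketch from the host:

    import subprocess

    # 'podman healthcheck run' executes the container's configured test and
    # exits 0 when healthy; container name taken from the log.
    rc = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'],
                        check=False).returncode
    print('healthy' if rc == 0 else 'unhealthy')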
Feb 25 07:52:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 305 active+clean; 306 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 564 KiB/s rd, 1.0 MiB/s wr, 32 op/s
Feb 25 07:52:07 np0005629333 nova_compute[244014]: 2026-02-25 12:52:07.048 244018 DEBUG nova.network.neutron [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Successfully updated port: 61c3ba1d-0d4b-426e-8d04-ce56efd650ae _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:52:07 np0005629333 nova_compute[244014]: 2026-02-25 12:52:07.065 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-739026fc-9c96-4212-9fa3-e6731e7f61f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:52:07 np0005629333 nova_compute[244014]: 2026-02-25 12:52:07.066 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-739026fc-9c96-4212-9fa3-e6731e7f61f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:52:07 np0005629333 nova_compute[244014]: 2026-02-25 12:52:07.066 244018 DEBUG nova.network.neutron [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:52:07 np0005629333 nova_compute[244014]: 2026-02-25 12:52:07.264 244018 DEBUG nova.network.neutron [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:52:07 np0005629333 nova_compute[244014]: 2026-02-25 12:52:07.451 244018 DEBUG nova.network.neutron [req-d0634062-9620-48a8-a6ab-8c0d0c875de0 req-901559fe-fee1-4bf0-ba04-7d09bab6f72b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updated VIF entry in instance network info cache for port 4f59e1f7-f07c-48a1-82b4-b6a563a7130a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:52:07 np0005629333 nova_compute[244014]: 2026-02-25 12:52:07.451 244018 DEBUG nova.network.neutron [req-d0634062-9620-48a8-a6ab-8c0d0c875de0 req-901559fe-fee1-4bf0-ba04-7d09bab6f72b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updating instance_info_cache with network_info: [{"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
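The cache payload is plain JSON, so pulling fixed and floating addresses out of a blob shaped like the one above takes only a few lines (network_info_json is a hypothetical variable holding that list as a string):

    import json

    vifs = json.loads(network_info_json)
    for vif in vifs:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floats = [f['address'] for f in ip.get('floating_ips', [])]
                print(vif['id'], ip['address'], floats)
    # For the cache above: 4f59e1f7-... 10.100.0.12 ['192.168.122.186']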
Feb 25 07:52:07 np0005629333 nova_compute[244014]: 2026-02-25 12:52:07.470 244018 DEBUG oslo_concurrency.lockutils [req-d0634062-9620-48a8-a6ab-8c0d0c875de0 req-901559fe-fee1-4bf0-ba04-7d09bab6f72b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:52:07 np0005629333 nova_compute[244014]: 2026-02-25 12:52:07.710 244018 DEBUG nova.compute.manager [req-b358a9d4-f33f-4c58-b686-f22456cbd997 req-a1d78515-a459-4a06-9bb3-6929aa32b1d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Received event network-changed-61c3ba1d-0d4b-426e-8d04-ce56efd650ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:52:07 np0005629333 nova_compute[244014]: 2026-02-25 12:52:07.711 244018 DEBUG nova.compute.manager [req-b358a9d4-f33f-4c58-b686-f22456cbd997 req-a1d78515-a459-4a06-9bb3-6929aa32b1d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Refreshing instance network info cache due to event network-changed-61c3ba1d-0d4b-426e-8d04-ce56efd650ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:52:07 np0005629333 nova_compute[244014]: 2026-02-25 12:52:07.711 244018 DEBUG oslo_concurrency.lockutils [req-b358a9d4-f33f-4c58-b686-f22456cbd997 req-a1d78515-a459-4a06-9bb3-6929aa32b1d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-739026fc-9c96-4212-9fa3-e6731e7f61f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:52:08 np0005629333 nova_compute[244014]: 2026-02-25 12:52:08.305 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2147: 305 pgs: 305 active+clean; 325 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Feb 25 07:52:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:52:08 np0005629333 nova_compute[244014]: 2026-02-25 12:52:08.774 244018 DEBUG nova.network.neutron [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Updating instance_info_cache with network_info: [{"id": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "address": "fa:16:3e:93:52:d2", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61c3ba1d-0d", "ovs_interfaceid": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:52:08 np0005629333 nova_compute[244014]: 2026-02-25 12:52:08.792 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-739026fc-9c96-4212-9fa3-e6731e7f61f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:52:08 np0005629333 nova_compute[244014]: 2026-02-25 12:52:08.792 244018 DEBUG nova.compute.manager [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Instance network_info: |[{"id": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "address": "fa:16:3e:93:52:d2", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61c3ba1d-0d", "ovs_interfaceid": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:52:08 np0005629333 nova_compute[244014]: 2026-02-25 12:52:08.793 244018 DEBUG oslo_concurrency.lockutils [req-b358a9d4-f33f-4c58-b686-f22456cbd997 req-a1d78515-a459-4a06-9bb3-6929aa32b1d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-739026fc-9c96-4212-9fa3-e6731e7f61f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:52:08 np0005629333 nova_compute[244014]: 2026-02-25 12:52:08.793 244018 DEBUG nova.network.neutron [req-b358a9d4-f33f-4c58-b686-f22456cbd997 req-a1d78515-a459-4a06-9bb3-6929aa32b1d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Refreshing network info cache for port 61c3ba1d-0d4b-426e-8d04-ce56efd650ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:52:08 np0005629333 nova_compute[244014]: 2026-02-25 12:52:08.800 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Start _get_guest_xml network_info=[{"id": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "address": "fa:16:3e:93:52:d2", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61c3ba1d-0d", "ovs_interfaceid": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:52:08 np0005629333 nova_compute[244014]: 2026-02-25 12:52:08.806 244018 WARNING nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:52:08 np0005629333 nova_compute[244014]: 2026-02-25 12:52:08.811 244018 DEBUG nova.virt.libvirt.host [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:52:08 np0005629333 nova_compute[244014]: 2026-02-25 12:52:08.812 244018 DEBUG nova.virt.libvirt.host [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:52:08 np0005629333 nova_compute[244014]: 2026-02-25 12:52:08.816 244018 DEBUG nova.virt.libvirt.host [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 07:52:08 np0005629333 nova_compute[244014]: 2026-02-25 12:52:08.816 244018 DEBUG nova.virt.libvirt.host [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 07:52:08 np0005629333 nova_compute[244014]: 2026-02-25 12:52:08.817 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:52:08 np0005629333 nova_compute[244014]: 2026-02-25 12:52:08.817 244018 DEBUG nova.virt.hardware [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:52:08 np0005629333 nova_compute[244014]: 2026-02-25 12:52:08.817 244018 DEBUG nova.virt.hardware [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 07:52:08 np0005629333 nova_compute[244014]: 2026-02-25 12:52:08.818 244018 DEBUG nova.virt.hardware [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 07:52:08 np0005629333 nova_compute[244014]: 2026-02-25 12:52:08.818 244018 DEBUG nova.virt.hardware [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 07:52:08 np0005629333 nova_compute[244014]: 2026-02-25 12:52:08.818 244018 DEBUG nova.virt.hardware [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 07:52:08 np0005629333 nova_compute[244014]: 2026-02-25 12:52:08.819 244018 DEBUG nova.virt.hardware [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 07:52:08 np0005629333 nova_compute[244014]: 2026-02-25 12:52:08.819 244018 DEBUG nova.virt.hardware [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 07:52:08 np0005629333 nova_compute[244014]: 2026-02-25 12:52:08.819 244018 DEBUG nova.virt.hardware [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 07:52:08 np0005629333 nova_compute[244014]: 2026-02-25 12:52:08.819 244018 DEBUG nova.virt.hardware [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 07:52:08 np0005629333 nova_compute[244014]: 2026-02-25 12:52:08.820 244018 DEBUG nova.virt.hardware [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 07:52:08 np0005629333 nova_compute[244014]: 2026-02-25 12:52:08.820 244018 DEBUG nova.virt.hardware [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
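Note: the hardware.py lines above show the topology search. Flavor and image impose no constraints (the 0:0:0 limits/prefs), so each dimension defaults to the 65536 cap, and for 1 vCPU the only factorization is sockets=1, cores=1, threads=1. A loose reimplementation of that search space — a hypothetical helper, not the actual nova.virt.hardware code:

    from collections import namedtuple

    VirtCPUTopology = namedtuple("VirtCPUTopology", "sockets cores threads")

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Enumerate every (sockets, cores, threads) split whose product
        # equals vcpus, capped per dimension -- the search space the
        # 'Build topologies for 1 vcpu(s)' line describes.
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    yield VirtCPUTopology(s, c, t)

    print(list(possible_topologies(1)))
    # [VirtCPUTopology(sockets=1, cores=1, threads=1)]
    print(len(list(possible_topologies(8))))
    # 10 -- the ordered factorizations of 8 into three factors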
Feb 25 07:52:08 np0005629333 nova_compute[244014]: 2026-02-25 12:52:08.823 244018 DEBUG oslo_concurrency.processutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:52:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:52:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2882774330' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:52:09 np0005629333 nova_compute[244014]: 2026-02-25 12:52:09.336 244018 DEBUG oslo_concurrency.processutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:52:09 np0005629333 nova_compute[244014]: 2026-02-25 12:52:09.377 244018 DEBUG nova.storage.rbd_utils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 739026fc-9c96-4212-9fa3-e6731e7f61f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:52:09 np0005629333 nova_compute[244014]: 2026-02-25 12:52:09.383 244018 DEBUG oslo_concurrency.processutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:52:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:52:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1968571132' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:52:09 np0005629333 nova_compute[244014]: 2026-02-25 12:52:09.945 244018 DEBUG oslo_concurrency.processutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
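Note: both `ceph mon dump` invocations above are plain subprocess calls made while building the RBD disk definitions. A sketch of issuing the same command and reading the monitor addresses out of its JSON — ceph_mon_addresses is a hypothetical helper, and the mons/public_addrs/addrvec shape is the usual one on current Ceph releases (verify against your cluster):

    import json
    import subprocess

    def ceph_mon_addresses(conf="/etc/ceph/ceph.conf", client="openstack"):
        # Same command the driver logs above.
        out = subprocess.check_output(
            ["ceph", "mon", "dump", "--format=json", "--id", client,
             "--conf", conf])
        dump = json.loads(out)
        # Each mon entry carries public_addrs.addrvec:
        # [{"type": "v2", "addr": "...:3300", ...}, {"type": "v1", ...}]
        return [a["addr"]
                for mon in dump.get("mons", [])
                for a in mon.get("public_addrs", {}).get("addrvec", [])]

    print(ceph_mon_addresses())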
Feb 25 07:52:09 np0005629333 nova_compute[244014]: 2026-02-25 12:52:09.947 244018 DEBUG nova.virt.libvirt.vif [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:52:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1844602195',display_name='tempest-TestGettingAddress-server-1844602195',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1844602195',id=128,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJC+upPNElBVnEP9sRAofuJ3WgjrN7NUHIjnb2xNcja3SXT0AJcyip72U8J6Hd3gQBhmqSCIrT6CHz7iNR5W3wt+U6axciPl3ik+0GTg6pCZIfse39ZA4oXcSHjtD+Yg8A==',key_name='tempest-TestGettingAddress-1847809733',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-3sho0d1g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:52:02Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=739026fc-9c96-4212-9fa3-e6731e7f61f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "address": "fa:16:3e:93:52:d2", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe93:52d2", "type": "fixed", 
"version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61c3ba1d-0d", "ovs_interfaceid": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:52:09 np0005629333 nova_compute[244014]: 2026-02-25 12:52:09.948 244018 DEBUG nova.network.os_vif_util [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "address": "fa:16:3e:93:52:d2", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61c3ba1d-0d", "ovs_interfaceid": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:52:09 np0005629333 nova_compute[244014]: 2026-02-25 12:52:09.950 244018 DEBUG nova.network.os_vif_util [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:52:d2,bridge_name='br-int',has_traffic_filtering=True,id=61c3ba1d-0d4b-426e-8d04-ce56efd650ae,network=Network(e19ed85e-54ee-4274-951c-ade412625983),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61c3ba1d-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:52:09 np0005629333 nova_compute[244014]: 2026-02-25 12:52:09.951 244018 DEBUG nova.objects.instance [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 739026fc-9c96-4212-9fa3-e6731e7f61f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:52:09 np0005629333 nova_compute[244014]: 2026-02-25 12:52:09.966 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:52:09 np0005629333 nova_compute[244014]:  <uuid>739026fc-9c96-4212-9fa3-e6731e7f61f9</uuid>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:  <name>instance-00000080</name>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestGettingAddress-server-1844602195</nova:name>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:52:08</nova:creationTime>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:52:09 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:        <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:        <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:        <nova:port uuid="61c3ba1d-0d4b-426e-8d04-ce56efd650ae">
Feb 25 07:52:09 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe93:52d2" ipVersion="6"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe93:52d2" ipVersion="6"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <entry name="serial">739026fc-9c96-4212-9fa3-e6731e7f61f9</entry>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <entry name="uuid">739026fc-9c96-4212-9fa3-e6731e7f61f9</entry>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/739026fc-9c96-4212-9fa3-e6731e7f61f9_disk">
Feb 25 07:52:09 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:52:09 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/739026fc-9c96-4212-9fa3-e6731e7f61f9_disk.config">
Feb 25 07:52:09 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:52:09 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:93:52:d2"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <target dev="tap61c3ba1d-0d"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/739026fc-9c96-4212-9fa3-e6731e7f61f9/console.log" append="off"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:52:09 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:52:09 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:52:09 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:52:09 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
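Note: the domain document between "End _get_guest_xml" and this line is the complete definition handed to libvirt. A short sketch of pulling the essentials back out of such a document with the standard library — summarize_domain is a hypothetical helper; element names match the XML above:

    import xml.etree.ElementTree as ET

    def summarize_domain(xml_text):
        # Return the domain name, its disks, and the RBD monitor endpoints.
        dom = ET.fromstring(xml_text)
        disks = [(d.get("device"),
                  d.find("source").get("name"),
                  d.find("target").get("dev"))
                 for d in dom.findall("./devices/disk")]
        mons = [h.get("name") + ":" + h.get("port")
                for h in dom.findall("./devices/disk/source/host")]
        return dom.findtext("name"), disks, mons

    # For the XML above this returns:
    # ('instance-00000080',
    #  [('disk', 'vms/739026fc-9c96-4212-9fa3-e6731e7f61f9_disk', 'vda'),
    #   ('cdrom', 'vms/739026fc-9c96-4212-9fa3-e6731e7f61f9_disk.config', 'sda')],
    #  ['192.168.122.100:6789', '192.168.122.100:6789'])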
Feb 25 07:52:09 np0005629333 nova_compute[244014]: 2026-02-25 12:52:09.972 244018 DEBUG nova.compute.manager [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Preparing to wait for external event network-vif-plugged-61c3ba1d-0d4b-426e-8d04-ce56efd650ae prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 07:52:09 np0005629333 nova_compute[244014]: 2026-02-25 12:52:09.973 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:52:09 np0005629333 nova_compute[244014]: 2026-02-25 12:52:09.973 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:52:09 np0005629333 nova_compute[244014]: 2026-02-25 12:52:09.973 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
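Note: the Acquiring/acquired/released triple above is oslo.concurrency's in-process lock guarding the external-event registry: compute registers interest in network-vif-plugged-<port> now, and the spawn path later blocks until Neutron delivers the event. A toy version of that pattern, assuming threading in place of Nova's eventlet and a hypothetical helper name:

    import threading
    from oslo_concurrency import lockutils

    _events = {}

    def prepare_for_instance_event(instance_uuid, name):
        # Mirrors the '<uuid>-events' lock seen in the log; Nova itself
        # uses eventlet events, threading.Event stands in here.
        with lockutils.lock(instance_uuid + "-events"):
            return _events.setdefault((instance_uuid, name),
                                      threading.Event())

    waiter = prepare_for_instance_event(
        "739026fc-9c96-4212-9fa3-e6731e7f61f9",
        "network-vif-plugged-61c3ba1d-0d4b-426e-8d04-ce56efd650ae")
    # The spawn path now blocks on waiter.wait(timeout) until the
    # Neutron-driven external event handler calls waiter.set().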
Feb 25 07:52:09 np0005629333 nova_compute[244014]: 2026-02-25 12:52:09.974 244018 DEBUG nova.virt.libvirt.vif [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:52:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1844602195',display_name='tempest-TestGettingAddress-server-1844602195',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1844602195',id=128,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJC+upPNElBVnEP9sRAofuJ3WgjrN7NUHIjnb2xNcja3SXT0AJcyip72U8J6Hd3gQBhmqSCIrT6CHz7iNR5W3wt+U6axciPl3ik+0GTg6pCZIfse39ZA4oXcSHjtD+Yg8A==',key_name='tempest-TestGettingAddress-1847809733',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-3sho0d1g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:52:02Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=739026fc-9c96-4212-9fa3-e6731e7f61f9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "address": "fa:16:3e:93:52:d2", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe93:52d2", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61c3ba1d-0d", "ovs_interfaceid": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:52:09 np0005629333 nova_compute[244014]: 2026-02-25 12:52:09.975 244018 DEBUG nova.network.os_vif_util [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "address": "fa:16:3e:93:52:d2", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61c3ba1d-0d", "ovs_interfaceid": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:52:09 np0005629333 nova_compute[244014]: 2026-02-25 12:52:09.976 244018 DEBUG nova.network.os_vif_util [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:52:d2,bridge_name='br-int',has_traffic_filtering=True,id=61c3ba1d-0d4b-426e-8d04-ce56efd650ae,network=Network(e19ed85e-54ee-4274-951c-ade412625983),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61c3ba1d-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:52:09 np0005629333 nova_compute[244014]: 2026-02-25 12:52:09.976 244018 DEBUG os_vif [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:52:d2,bridge_name='br-int',has_traffic_filtering=True,id=61c3ba1d-0d4b-426e-8d04-ce56efd650ae,network=Network(e19ed85e-54ee-4274-951c-ade412625983),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61c3ba1d-0d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 07:52:09 np0005629333 nova_compute[244014]: 2026-02-25 12:52:09.977 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:52:09 np0005629333 nova_compute[244014]: 2026-02-25 12:52:09.978 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:52:09 np0005629333 nova_compute[244014]: 2026-02-25 12:52:09.978 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 07:52:09 np0005629333 nova_compute[244014]: 2026-02-25 12:52:09.982 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:52:09 np0005629333 nova_compute[244014]: 2026-02-25 12:52:09.982 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap61c3ba1d-0d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:52:09 np0005629333 nova_compute[244014]: 2026-02-25 12:52:09.983 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap61c3ba1d-0d, col_values=(('external_ids', {'iface-id': '61c3ba1d-0d4b-426e-8d04-ce56efd650ae', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:93:52:d2', 'vm-uuid': '739026fc-9c96-4212-9fa3-e6731e7f61f9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:52:09 np0005629333 nova_compute[244014]: 2026-02-25 12:52:09.984 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:52:09 np0005629333 NetworkManager[49836]: <info>  [1772023929.9855] manager: (tap61c3ba1d-0d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/563)
Feb 25 07:52:09 np0005629333 nova_compute[244014]: 2026-02-25 12:52:09.986 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 07:52:09 np0005629333 nova_compute[244014]: 2026-02-25 12:52:09.990 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:52:09 np0005629333 nova_compute[244014]: 2026-02-25 12:52:09.991 244018 INFO os_vif [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:52:d2,bridge_name='br-int',has_traffic_filtering=True,id=61c3ba1d-0d4b-426e-8d04-ce56efd650ae,network=Network(e19ed85e-54ee-4274-951c-ade412625983),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61c3ba1d-0d')
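Note: the two ovsdbapp transactions above (AddPortCommand, then DbSetCommand on the Interface row's external_ids) are what "Successfully plugged vif" summarizes. The same effect from the CLI, wrapped in Python — plug_ovs_port is a hypothetical helper; values are copied from the log and ovs-vsctl must be available on the host:

    import subprocess

    def plug_ovs_port(bridge, dev, iface_id, mac, vm_uuid):
        # CLI equivalent of AddPortCommand + DbSetCommand as logged.
        subprocess.check_call([
            "ovs-vsctl", "--may-exist", "add-port", bridge, dev, "--",
            "set", "Interface", dev,
            "external_ids:iface-id=" + iface_id,
            "external_ids:iface-status=active",
            "external_ids:attached-mac=" + mac,
            "external_ids:vm-uuid=" + vm_uuid,
        ])

    plug_ovs_port("br-int", "tap61c3ba1d-0d",
                  "61c3ba1d-0d4b-426e-8d04-ce56efd650ae",
                  "fa:16:3e:93:52:d2",
                  "739026fc-9c96-4212-9fa3-e6731e7f61f9")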
Feb 25 07:52:10 np0005629333 nova_compute[244014]: 2026-02-25 12:52:10.045 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:52:10 np0005629333 nova_compute[244014]: 2026-02-25 12:52:10.045 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:52:10 np0005629333 nova_compute[244014]: 2026-02-25 12:52:10.046 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:93:52:d2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 07:52:10 np0005629333 nova_compute[244014]: 2026-02-25 12:52:10.046 244018 INFO nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Using config drive
Feb 25 07:52:10 np0005629333 nova_compute[244014]: 2026-02-25 12:52:10.069 244018 DEBUG nova.storage.rbd_utils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 739026fc-9c96-4212-9fa3-e6731e7f61f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:52:10 np0005629333 nova_compute[244014]: 2026-02-25 12:52:10.390 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:52:10 np0005629333 nova_compute[244014]: 2026-02-25 12:52:10.472 244018 DEBUG nova.network.neutron [req-b358a9d4-f33f-4c58-b686-f22456cbd997 req-a1d78515-a459-4a06-9bb3-6929aa32b1d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Updated VIF entry in instance network info cache for port 61c3ba1d-0d4b-426e-8d04-ce56efd650ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 07:52:10 np0005629333 nova_compute[244014]: 2026-02-25 12:52:10.473 244018 DEBUG nova.network.neutron [req-b358a9d4-f33f-4c58-b686-f22456cbd997 req-a1d78515-a459-4a06-9bb3-6929aa32b1d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Updating instance_info_cache with network_info: [{"id": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "address": "fa:16:3e:93:52:d2", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61c3ba1d-0d", "ovs_interfaceid": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:52:10 np0005629333 nova_compute[244014]: 2026-02-25 12:52:10.492 244018 DEBUG oslo_concurrency.lockutils [req-b358a9d4-f33f-4c58-b686-f22456cbd997 req-a1d78515-a459-4a06-9bb3-6929aa32b1d4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-739026fc-9c96-4212-9fa3-e6731e7f61f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:52:10 np0005629333 nova_compute[244014]: 2026-02-25 12:52:10.498 244018 INFO nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Creating config drive at /var/lib/nova/instances/739026fc-9c96-4212-9fa3-e6731e7f61f9/disk.config
Feb 25 07:52:10 np0005629333 nova_compute[244014]: 2026-02-25 12:52:10.502 244018 DEBUG oslo_concurrency.processutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/739026fc-9c96-4212-9fa3-e6731e7f61f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpryghxuhx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:52:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2148: 305 pgs: 305 active+clean; 325 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Feb 25 07:52:10 np0005629333 nova_compute[244014]: 2026-02-25 12:52:10.635 244018 DEBUG oslo_concurrency.processutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/739026fc-9c96-4212-9fa3-e6731e7f61f9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpryghxuhx" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
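Note: `-publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9` in the logged command line is a single argv element; oslo simply prints it unquoted. A sketch reproducing the invocation with build_config_drive as a hypothetical wrapper — the volume label config-2 is what config-drive consumers such as cloud-init probe for:

    import subprocess

    def build_config_drive(iso_path, content_dir, publisher):
        # Same flags as the logged invocation: Joliet (-J) plus
        # Rock Ridge (-r), volume label config-2.
        subprocess.check_call([
            "/usr/bin/mkisofs", "-o", iso_path,
            "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
            "-publisher", publisher, "-quiet", "-J", "-r",
            "-V", "config-2", content_dir,
        ])

    build_config_drive(
        "/var/lib/nova/instances/739026fc-9c96-4212-9fa3-e6731e7f61f9/disk.config",
        "/tmp/tmpryghxuhx",
        "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9")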
Feb 25 07:52:10 np0005629333 nova_compute[244014]: 2026-02-25 12:52:10.661 244018 DEBUG nova.storage.rbd_utils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 739026fc-9c96-4212-9fa3-e6731e7f61f9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:52:10 np0005629333 nova_compute[244014]: 2026-02-25 12:52:10.667 244018 DEBUG oslo_concurrency.processutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/739026fc-9c96-4212-9fa3-e6731e7f61f9/disk.config 739026fc-9c96-4212-9fa3-e6731e7f61f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:52:10 np0005629333 nova_compute[244014]: 2026-02-25 12:52:10.841 244018 DEBUG oslo_concurrency.processutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/739026fc-9c96-4212-9fa3-e6731e7f61f9/disk.config 739026fc-9c96-4212-9fa3-e6731e7f61f9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:52:10 np0005629333 nova_compute[244014]: 2026-02-25 12:52:10.841 244018 INFO nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Deleting local config drive /var/lib/nova/instances/739026fc-9c96-4212-9fa3-e6731e7f61f9/disk.config because it was imported into RBD.
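Note: the import/delete pair above moves the freshly built ISO into the vms pool and drops the local copy. As a sketch, with the same flags as logged and import_config_drive as a hypothetical helper:

    import os
    import subprocess

    def import_config_drive(local_iso, image_name, pool="vms",
                            client="openstack", conf="/etc/ceph/ceph.conf"):
        # rbd import pushes the ISO into the pool as a format-2 image;
        # the local file is then redundant, exactly as the INFO line says.
        subprocess.check_call([
            "rbd", "import", "--pool", pool, local_iso, image_name,
            "--image-format=2", "--id", client, "--conf", conf,
        ])
        os.unlink(local_iso)

    import_config_drive(
        "/var/lib/nova/instances/739026fc-9c96-4212-9fa3-e6731e7f61f9/disk.config",
        "739026fc-9c96-4212-9fa3-e6731e7f61f9_disk.config")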
Feb 25 07:52:10 np0005629333 NetworkManager[49836]: <info>  [1772023930.8744] manager: (tap61c3ba1d-0d): new Tun device (/org/freedesktop/NetworkManager/Devices/564)
Feb 25 07:52:10 np0005629333 kernel: tap61c3ba1d-0d: entered promiscuous mode
Feb 25 07:52:10 np0005629333 nova_compute[244014]: 2026-02-25 12:52:10.878 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:52:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:10Z|01348|binding|INFO|Claiming lport 61c3ba1d-0d4b-426e-8d04-ce56efd650ae for this chassis.
Feb 25 07:52:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:10Z|01349|binding|INFO|61c3ba1d-0d4b-426e-8d04-ce56efd650ae: Claiming fa:16:3e:93:52:d2 10.100.0.14 2001:db8:0:1:f816:3eff:fe93:52d2 2001:db8::f816:3eff:fe93:52d2
Feb 25 07:52:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:10.889 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:52:d2 10.100.0.14 2001:db8:0:1:f816:3eff:fe93:52d2 2001:db8::f816:3eff:fe93:52d2'], port_security=['fa:16:3e:93:52:d2 10.100.0.14 2001:db8:0:1:f816:3eff:fe93:52d2 2001:db8::f816:3eff:fe93:52d2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8:0:1:f816:3eff:fe93:52d2/64 2001:db8::f816:3eff:fe93:52d2/64', 'neutron:device_id': '739026fc-9c96-4212-9fa3-e6731e7f61f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e19ed85e-54ee-4274-951c-ade412625983', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a1c36bad-4548-4f88-8bee-0f028af7a076', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a56192da-444f-498f-a789-c0e4cafb114e, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=61c3ba1d-0d4b-426e-8d04-ce56efd650ae) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:52:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:10.891 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 61c3ba1d-0d4b-426e-8d04-ce56efd650ae in datapath e19ed85e-54ee-4274-951c-ade412625983 bound to our chassis
Feb 25 07:52:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:10.892 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e19ed85e-54ee-4274-951c-ade412625983
Feb 25 07:52:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:10Z|01350|binding|INFO|Setting lport 61c3ba1d-0d4b-426e-8d04-ce56efd650ae ovn-installed in OVS
Feb 25 07:52:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:10Z|01351|binding|INFO|Setting lport 61c3ba1d-0d4b-426e-8d04-ce56efd650ae up in Southbound
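Note: ovn-controller has now claimed the logical port and flipped it up in the southbound DB. One way to confirm the binding from the chassis, wrapped in Python — port_binding_chassis is a hypothetical helper using standard ovn-sbctl syntax, and it needs access to the SB database:

    import subprocess

    def port_binding_chassis(logical_port):
        # Returns the UUID of the Chassis row that claimed the lport;
        # resolve it to a hostname with 'ovn-sbctl get Chassis <uuid> hostname'.
        return subprocess.check_output([
            "ovn-sbctl", "--bare", "--columns=chassis",
            "find", "Port_Binding",
            "logical_port=" + logical_port,
        ]).decode().strip()

    print(port_binding_chassis("61c3ba1d-0d4b-426e-8d04-ce56efd650ae"))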
Feb 25 07:52:10 np0005629333 nova_compute[244014]: 2026-02-25 12:52:10.897 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:52:10 np0005629333 nova_compute[244014]: 2026-02-25 12:52:10.899 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:52:10 np0005629333 systemd-udevd[360404]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:52:10 np0005629333 systemd-machined[210048]: New machine qemu-160-instance-00000080.
Feb 25 07:52:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:10.904 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1f2e5a07-0e69-4b47-a3fa-3656541bc7aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:52:10 np0005629333 NetworkManager[49836]: <info>  [1772023930.9128] device (tap61c3ba1d-0d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:52:10 np0005629333 NetworkManager[49836]: <info>  [1772023930.9135] device (tap61c3ba1d-0d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:52:10 np0005629333 systemd[1]: Started Virtual Machine qemu-160-instance-00000080.
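Note: systemd-machined registering qemu-160-instance-00000080 means the qemu process is up; the same domain is reachable over the libvirt API under the <name> from the XML above. A minimal check with the libvirt Python binding, assuming qemu:///system is accessible from where this runs:

    import libvirt  # python3-libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000080")
    print(dom.UUIDString(), dom.isActive())
    # prints: 739026fc-9c96-4212-9fa3-e6731e7f61f9 1
    conn.close()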
Feb 25 07:52:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:10.934 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[46c1c55f-3d13-40ff-8260-e688ea29e990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:52:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:10.937 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[19611ef2-e4db-4089-abfc-eb44a15685dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:52:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:10.956 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f118893d-e6b2-4fd9-acd8-28e8bbf17c88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:52:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:10.966 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b84927c5-f2c6-4b4b-99fe-351dac7b3dc6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape19ed85e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:4b:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 24, 'tx_packets': 5, 'rx_bytes': 2000, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 24, 'tx_packets': 5, 'rx_bytes': 2000, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 399], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586681, 'reachable_time': 39508, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 22, 'inoctets': 1608, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 22, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1608, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 22, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360419, 'error': None, 'target': 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:10.973 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[789a28e3-d492-4570-a7d9-fe50c5a4e702]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape19ed85e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586691, 'tstamp': 586691}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360420, 'error': None, 'target': 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape19ed85e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586693, 'tstamp': 586693}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360420, 'error': None, 'target': 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
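The two privsep replies above are pyroute2 netlink messages relayed out of the ovnmeta-e19ed85e-54ee-4274-951c-ade412625983 namespace: an RTM_NEWLINK dump of the veth tape19ed85e-51, then one RTM_NEWADDR record per address (the tenant side 10.100.0.2/28 and the metadata address 169.254.169.254/32). A minimal sketch of the same query made directly with pyroute2; the namespace and interface names are taken from the log, and running it needs root on this host:

    from pyroute2 import NetNS

    # open the namespace the metadata agent created for this network
    with NetNS('ovnmeta-e19ed85e-54ee-4274-951c-ade412625983') as ns:
        idx = ns.link_lookup(ifname='tape19ed85e-51')[0]
        link = ns.get_links(idx)[0]            # RTM_NEWLINK, as in the first reply
        print(link.get_attr('IFLA_IFNAME'), link.get_attr('IFLA_ADDRESS'))
        for addr in ns.get_addr(index=idx):    # RTM_NEWADDR, one per address
            print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])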
Feb 25 07:52:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:10.975 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape19ed85e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:52:10 np0005629333 nova_compute[244014]: 2026-02-25 12:52:10.976 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:10 np0005629333 nova_compute[244014]: 2026-02-25 12:52:10.977 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:10.977 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape19ed85e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:52:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:10.977 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:52:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:10.978 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape19ed85e-50, col_values=(('external_ids', {'iface-id': '591ce053-3764-4ce0-841f-6728c8fd9491'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:52:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:10.978 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
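The transaction lines above are the agent (re)wiring the metadata veth and showing ovsdbapp's idempotency: DelPortCommand drops any stale tape19ed85e-50 from br-ex, AddPortCommand re-attaches it to br-int, DbSetCommand points external_ids:iface-id at the Neutron port, and "Transaction caused no change" after the Add and DbSet means the IDL found the database already in the desired state, so those commits were no-ops. A sketch of issuing the same commands through ovsdbapp; the socket path is illustrative:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # if_exists/may_exist make each command a no-op when already applied,
        # which is exactly what logs "Transaction caused no change" above
        txn.add(api.del_port('tape19ed85e-50', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tape19ed85e-50', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tape19ed85e-50',
            ('external_ids',
             {'iface-id': '591ce053-3764-4ce0-841f-6728c8fd9491'})))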
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.361 244018 DEBUG nova.compute.manager [req-38e12b8f-8f3c-4334-a836-b6fad576e6c7 req-b40e0451-cc5f-4619-86a2-1f47613edeab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Received event network-vif-plugged-61c3ba1d-0d4b-426e-8d04-ce56efd650ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.361 244018 DEBUG oslo_concurrency.lockutils [req-38e12b8f-8f3c-4334-a836-b6fad576e6c7 req-b40e0451-cc5f-4619-86a2-1f47613edeab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.361 244018 DEBUG oslo_concurrency.lockutils [req-38e12b8f-8f3c-4334-a836-b6fad576e6c7 req-b40e0451-cc5f-4619-86a2-1f47613edeab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.361 244018 DEBUG oslo_concurrency.lockutils [req-38e12b8f-8f3c-4334-a836-b6fad576e6c7 req-b40e0451-cc5f-4619-86a2-1f47613edeab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.362 244018 DEBUG nova.compute.manager [req-38e12b8f-8f3c-4334-a836-b6fad576e6c7 req-b40e0451-cc5f-4619-86a2-1f47613edeab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Processing event network-vif-plugged-61c3ba1d-0d4b-426e-8d04-ce56efd650ae _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
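The Acquiring/acquired/released triple around pop_instance_event is nova's per-instance event lock: external events from Neutron are matched to waiters under a "<uuid>-events" semaphore, so dispatching a network-vif-plugged event cannot race the thread waiting on it. The shape of that pattern with oslo.concurrency, reduced to a sketch (the function name mirrors the log but the _events store and body are hypothetical, not nova's verbatim code):

    from oslo_concurrency import lockutils

    _events = {}  # instance uuid -> {event name: waiter}

    def pop_instance_event(instance_uuid, event_name):
        # one critical section guards lookup + removal, mirroring the
        # Lock "<uuid>-events" acquire/release lines above
        with lockutils.lock('%s-events' % instance_uuid):
            waiters = _events.get(instance_uuid) or {}
            return waiters.pop(event_name, None)

When the event arrives after the waiter is gone, the pop returns nothing and nova logs the "No waiting events found dispatching" / "Received unexpected event" pair instead of dispatching, which is what happens at 12:52:13 below once this instance is already active.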
Feb 25 07:52:11 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.727 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023931.7268791, 739026fc-9c96-4212-9fa3-e6731e7f61f9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.728 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] VM Started (Lifecycle Event)#033[00m
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.732 244018 DEBUG nova.compute.manager [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.737 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.740 244018 INFO nova.virt.libvirt.driver [-] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Instance spawned successfully.#033[00m
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.741 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.750 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.756 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.762 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.763 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.763 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.764 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.764 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.764 244018 DEBUG nova.virt.libvirt.driver [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.773 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.773 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023931.727957, 739026fc-9c96-4212-9fa3-e6731e7f61f9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.773 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.803 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.806 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023931.7361777, 739026fc-9c96-4212-9fa3-e6731e7f61f9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.806 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.813 244018 INFO nova.compute.manager [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Took 9.06 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.813 244018 DEBUG nova.compute.manager [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.824 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.827 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.847 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
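Both "pending task (spawning). Skip." lines are the same guard: the lifecycle handler saw the hypervisor report the domain running (VM power_state 1) while the database still says building/spawning (DB power_state 0), and the power-state sync refuses to "correct" anything while a task is in flight, leaving the spawning thread to persist the final state itself. Condensed to its decision (a hedged paraphrase, not nova's verbatim code):

    import logging

    LOG = logging.getLogger(__name__)

    def maybe_sync_power_state(instance, vm_power_state):
        # the case above: DB power_state 0, VM power_state 1, task 'spawning'
        if instance.task_state is not None:
            LOG.info("During sync_power_state the instance has a pending "
                     "task (%s). Skip.", instance.task_state)
            return
        if instance.power_state != vm_power_state:
            instance.power_state = vm_power_state
            instance.save()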
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.878 244018 INFO nova.compute.manager [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Took 10.15 seconds to build instance.#033[00m
Feb 25 07:52:11 np0005629333 nova_compute[244014]: 2026-02-25 12:52:11.892 244018 DEBUG oslo_concurrency.lockutils [None req-d49df2c9-cbe8-4acc-aa69-97c97b345af2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.239s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:52:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 305 active+clean; 341 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.2 MiB/s wr, 127 op/s
Feb 25 07:52:13 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:13Z|00161|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c3:63:14 10.100.0.12
Feb 25 07:52:13 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:13Z|00162|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c3:63:14 10.100.0.12
Feb 25 07:52:13 np0005629333 nova_compute[244014]: 2026-02-25 12:52:13.448 244018 DEBUG nova.compute.manager [req-b6abd9da-e6c7-46ac-a330-53c528a676c6 req-78bb2179-eb0c-473d-ac76-1916481e4133 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Received event network-vif-plugged-61c3ba1d-0d4b-426e-8d04-ce56efd650ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:52:13 np0005629333 nova_compute[244014]: 2026-02-25 12:52:13.448 244018 DEBUG oslo_concurrency.lockutils [req-b6abd9da-e6c7-46ac-a330-53c528a676c6 req-78bb2179-eb0c-473d-ac76-1916481e4133 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:13 np0005629333 nova_compute[244014]: 2026-02-25 12:52:13.449 244018 DEBUG oslo_concurrency.lockutils [req-b6abd9da-e6c7-46ac-a330-53c528a676c6 req-78bb2179-eb0c-473d-ac76-1916481e4133 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:13 np0005629333 nova_compute[244014]: 2026-02-25 12:52:13.449 244018 DEBUG oslo_concurrency.lockutils [req-b6abd9da-e6c7-46ac-a330-53c528a676c6 req-78bb2179-eb0c-473d-ac76-1916481e4133 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:52:13 np0005629333 nova_compute[244014]: 2026-02-25 12:52:13.449 244018 DEBUG nova.compute.manager [req-b6abd9da-e6c7-46ac-a330-53c528a676c6 req-78bb2179-eb0c-473d-ac76-1916481e4133 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] No waiting events found dispatching network-vif-plugged-61c3ba1d-0d4b-426e-8d04-ce56efd650ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:52:13 np0005629333 nova_compute[244014]: 2026-02-25 12:52:13.449 244018 WARNING nova.compute.manager [req-b6abd9da-e6c7-46ac-a330-53c528a676c6 req-78bb2179-eb0c-473d-ac76-1916481e4133 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Received unexpected event network-vif-plugged-61c3ba1d-0d4b-426e-8d04-ce56efd650ae for instance with vm_state active and task_state None.#033[00m
Feb 25 07:52:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:52:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2150: 305 pgs: 305 active+clean; 341 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.2 MiB/s wr, 116 op/s
Feb 25 07:52:14 np0005629333 nova_compute[244014]: 2026-02-25 12:52:14.985 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:15 np0005629333 nova_compute[244014]: 2026-02-25 12:52:15.392 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:16 np0005629333 nova_compute[244014]: 2026-02-25 12:52:16.445 244018 DEBUG nova.compute.manager [req-11b6af75-32ad-4e40-a751-8a6a86df5565 req-94693877-133e-49cc-b7d9-e8806e64a16c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Received event network-changed-61c3ba1d-0d4b-426e-8d04-ce56efd650ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:52:16 np0005629333 nova_compute[244014]: 2026-02-25 12:52:16.446 244018 DEBUG nova.compute.manager [req-11b6af75-32ad-4e40-a751-8a6a86df5565 req-94693877-133e-49cc-b7d9-e8806e64a16c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Refreshing instance network info cache due to event network-changed-61c3ba1d-0d4b-426e-8d04-ce56efd650ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:52:16 np0005629333 nova_compute[244014]: 2026-02-25 12:52:16.446 244018 DEBUG oslo_concurrency.lockutils [req-11b6af75-32ad-4e40-a751-8a6a86df5565 req-94693877-133e-49cc-b7d9-e8806e64a16c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-739026fc-9c96-4212-9fa3-e6731e7f61f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:52:16 np0005629333 nova_compute[244014]: 2026-02-25 12:52:16.446 244018 DEBUG oslo_concurrency.lockutils [req-11b6af75-32ad-4e40-a751-8a6a86df5565 req-94693877-133e-49cc-b7d9-e8806e64a16c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-739026fc-9c96-4212-9fa3-e6731e7f61f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:52:16 np0005629333 nova_compute[244014]: 2026-02-25 12:52:16.446 244018 DEBUG nova.network.neutron [req-11b6af75-32ad-4e40-a751-8a6a86df5565 req-94693877-133e-49cc-b7d9-e8806e64a16c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Refreshing network info cache for port 61c3ba1d-0d4b-426e-8d04-ce56efd650ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:52:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 305 active+clean; 354 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.8 MiB/s wr, 167 op/s
Feb 25 07:52:18 np0005629333 nova_compute[244014]: 2026-02-25 12:52:18.112 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2152: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.9 MiB/s wr, 206 op/s
Feb 25 07:52:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:52:19 np0005629333 nova_compute[244014]: 2026-02-25 12:52:19.986 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:20 np0005629333 nova_compute[244014]: 2026-02-25 12:52:20.395 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:20 np0005629333 nova_compute[244014]: 2026-02-25 12:52:20.464 244018 INFO nova.compute.manager [None req-458f70d0-6a76-433b-b06e-8cda6e0b96ab 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Get console output#033[00m
Feb 25 07:52:20 np0005629333 nova_compute[244014]: 2026-02-25 12:52:20.472 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
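The ignored error above is a plain TypeError swallowed while draining the serial console pty for the preceding "Get console output" request: a non-blocking read handed back None and it was concatenated onto a bytes buffer. The failure mode in isolation (illustrative only, not nova's actual code):

    buf = b''
    chunk = None             # what a drained non-blocking pty read can yield
    try:
        buf += chunk         # TypeError: can't concat NoneType to bytes
    except TypeError:
        pass                 # the caller logs "Ignored error while reading ..."

    # the usual guard is simply:
    if chunk:
        buf += chunk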
Feb 25 07:52:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2153: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Feb 25 07:52:21 np0005629333 nova_compute[244014]: 2026-02-25 12:52:21.024 244018 DEBUG nova.network.neutron [req-11b6af75-32ad-4e40-a751-8a6a86df5565 req-94693877-133e-49cc-b7d9-e8806e64a16c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Updated VIF entry in instance network info cache for port 61c3ba1d-0d4b-426e-8d04-ce56efd650ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:52:21 np0005629333 nova_compute[244014]: 2026-02-25 12:52:21.025 244018 DEBUG nova.network.neutron [req-11b6af75-32ad-4e40-a751-8a6a86df5565 req-94693877-133e-49cc-b7d9-e8806e64a16c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Updating instance_info_cache with network_info: [{"id": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "address": "fa:16:3e:93:52:d2", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61c3ba1d-0d", "ovs_interfaceid": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
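The cache update above persists the full network_info model for port 61c3ba1d-0d4b-426e-8d04-ce56efd650ae: one OVS VIF on br-int carrying fixed 10.100.0.14 with floating 192.168.122.196 plus two SLAAC IPv6 addresses, MTU 1442 because the network is tunneled. The payload is plain JSON, so pulling the addresses out is a dict walk; the literal below is trimmed from the logged record to just the keys the walk touches:

    import json

    raw = '''[{"id": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae",
               "network": {"subnets": [{"ips": [{"address": "10.100.0.14",
               "floating_ips": [{"address": "192.168.122.196"}]}]}]}}]'''

    for vif in json.loads(raw):
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                print(vif['id'], ip['address'],
                      [f['address'] for f in ip['floating_ips']])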
Feb 25 07:52:21 np0005629333 nova_compute[244014]: 2026-02-25 12:52:21.069 244018 DEBUG oslo_concurrency.lockutils [req-11b6af75-32ad-4e40-a751-8a6a86df5565 req-94693877-133e-49cc-b7d9-e8806e64a16c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-739026fc-9c96-4212-9fa3-e6731e7f61f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:52:21 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Feb 25 07:52:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:22Z|00163|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:93:52:d2 10.100.0.14
Feb 25 07:52:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:22Z|00164|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:93:52:d2 10.100.0.14
Feb 25 07:52:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 305 active+clean; 384 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.1 MiB/s wr, 170 op/s
Feb 25 07:52:23 np0005629333 nova_compute[244014]: 2026-02-25 12:52:23.559 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:52:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2155: 305 pgs: 305 active+clean; 384 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.7 MiB/s wr, 134 op/s
Feb 25 07:52:24 np0005629333 nova_compute[244014]: 2026-02-25 12:52:24.775 244018 DEBUG oslo_concurrency.lockutils [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "interface-809da994-7551-4f52-8920-b0dfaa2ef73e-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:24 np0005629333 nova_compute[244014]: 2026-02-25 12:52:24.776 244018 DEBUG oslo_concurrency.lockutils [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "interface-809da994-7551-4f52-8920-b0dfaa2ef73e-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:24 np0005629333 nova_compute[244014]: 2026-02-25 12:52:24.776 244018 DEBUG nova.objects.instance [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'flavor' on Instance uuid 809da994-7551-4f52-8920-b0dfaa2ef73e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:52:24 np0005629333 nova_compute[244014]: 2026-02-25 12:52:24.990 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:25 np0005629333 nova_compute[244014]: 2026-02-25 12:52:25.140 244018 DEBUG nova.objects.instance [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_requests' on Instance uuid 809da994-7551-4f52-8920-b0dfaa2ef73e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:52:25 np0005629333 nova_compute[244014]: 2026-02-25 12:52:25.154 244018 DEBUG nova.network.neutron [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:52:25 np0005629333 nova_compute[244014]: 2026-02-25 12:52:25.370 244018 DEBUG nova.policy [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
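The failed policy check is expected for a plain project member: network:attach_external_network defaults to an admin-only rule, so with roles ['reader', 'member'] the evaluation returns False and nova simply does not treat the requested network as external; the attach continues on the tenant network, as the successful port creation below shows. The same kind of evaluation with oslo.policy, sketched; the 'is_admin:True' rule string is an illustrative admin-only default, not quoted from nova:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(
        policy.RuleDefault('network:attach_external_network', 'is_admin:True'))

    creds = {'roles': ['reader', 'member'], 'is_admin': False,
             'project_id': 'e227b91c24404ab5aed600e2fe792d32'}
    # returns False rather than raising; do_raise defaults to False
    print(enforcer.enforce('network:attach_external_network', {}, creds))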
Feb 25 07:52:25 np0005629333 nova_compute[244014]: 2026-02-25 12:52:25.398 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:26 np0005629333 nova_compute[244014]: 2026-02-25 12:52:26.359 244018 DEBUG nova.network.neutron [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Successfully created port: 77034e66-a3e3-47d0-b467-29a045343530 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:52:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 305 active+clean; 386 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.8 MiB/s wr, 157 op/s
Feb 25 07:52:27 np0005629333 nova_compute[244014]: 2026-02-25 12:52:27.252 244018 DEBUG nova.network.neutron [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Successfully updated port: 77034e66-a3e3-47d0-b467-29a045343530 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:52:27 np0005629333 nova_compute[244014]: 2026-02-25 12:52:27.271 244018 DEBUG oslo_concurrency.lockutils [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:52:27 np0005629333 nova_compute[244014]: 2026-02-25 12:52:27.271 244018 DEBUG oslo_concurrency.lockutils [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:52:27 np0005629333 nova_compute[244014]: 2026-02-25 12:52:27.271 244018 DEBUG nova.network.neutron [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:52:27 np0005629333 nova_compute[244014]: 2026-02-25 12:52:27.367 244018 DEBUG nova.compute.manager [req-b75cd115-2e37-4db6-a74c-1ffbc2ec1fe3 req-8834ac7f-ab66-4faf-b914-e72bd254ad5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-changed-77034e66-a3e3-47d0-b467-29a045343530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:52:27 np0005629333 nova_compute[244014]: 2026-02-25 12:52:27.367 244018 DEBUG nova.compute.manager [req-b75cd115-2e37-4db6-a74c-1ffbc2ec1fe3 req-8834ac7f-ab66-4faf-b914-e72bd254ad5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Refreshing instance network info cache due to event network-changed-77034e66-a3e3-47d0-b467-29a045343530. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:52:27 np0005629333 nova_compute[244014]: 2026-02-25 12:52:27.368 244018 DEBUG oslo_concurrency.lockutils [req-b75cd115-2e37-4db6-a74c-1ffbc2ec1fe3 req-8834ac7f-ab66-4faf-b914-e72bd254ad5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:52:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.2 MiB/s wr, 116 op/s
Feb 25 07:52:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:52:29 np0005629333 nova_compute[244014]: 2026-02-25 12:52:29.663 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:29 np0005629333 nova_compute[244014]: 2026-02-25 12:52:29.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:52:29 np0005629333 nova_compute[244014]: 2026-02-25 12:52:29.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 07:52:29 np0005629333 nova_compute[244014]: 2026-02-25 12:52:29.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 07:52:29 np0005629333 nova_compute[244014]: 2026-02-25 12:52:29.992 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:30 np0005629333 nova_compute[244014]: 2026-02-25 12:52:30.150 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:52:30 np0005629333 nova_compute[244014]: 2026-02-25 12:52:30.151 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:52:30 np0005629333 nova_compute[244014]: 2026-02-25 12:52:30.151 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 25 07:52:30 np0005629333 nova_compute[244014]: 2026-02-25 12:52:30.152 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid dd7feae9-9d2a-41b6-9277-cbf51a2c8f23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
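_heal_instance_info_cache is one of nova-compute's oslo.service periodic tasks: each pass rebuilds the list of local instances and force-refreshes the network info cache for one of them (dd7feae9-9d2a-41b6-9277-cbf51a2c8f23 this round), which is what drives the refresh_cache lock and "Forcefully refreshing" lines above. The decorator wiring in miniature; the spacing value is illustrative:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            # pick one instance per pass and refresh its info cache,
            # as the log shows for dd7feae9-9d2a-41b6-9277-cbf51a2c8f23
            pass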
Feb 25 07:52:30 np0005629333 nova_compute[244014]: 2026-02-25 12:52:30.400 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 07:52:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:52:31
Feb 25 07:52:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:52:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:52:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'images', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'default.rgw.meta', '.mgr']
Feb 25 07:52:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.586 244018 DEBUG nova.network.neutron [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updating instance_info_cache with network_info: [{"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:52:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:52:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:52:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:52:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:52:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:52:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.621 244018 DEBUG oslo_concurrency.lockutils [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.623 244018 DEBUG oslo_concurrency.lockutils [req-b75cd115-2e37-4db6-a74c-1ffbc2ec1fe3 req-8834ac7f-ab66-4faf-b914-e72bd254ad5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.624 244018 DEBUG nova.network.neutron [req-b75cd115-2e37-4db6-a74c-1ffbc2ec1fe3 req-8834ac7f-ab66-4faf-b914-e72bd254ad5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Refreshing network info cache for port 77034e66-a3e3-47d0-b467-29a045343530 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.628 244018 DEBUG nova.virt.libvirt.vif [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1151259173',display_name='tempest-TestNetworkBasicOps-server-1151259173',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1151259173',id=127,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuhKTGrJ6xoqtXbC9i6shxmOzzCAmEiPLvEhcBT9lMLvpNHET3NrmwNha38Zzx8OOcER4UhJ6EWWvnBqNIlR5/VZu+vuQ6n1q9c1LTSLn17vfclbttzgZoR8PFOeBob+Q==',key_name='tempest-TestNetworkBasicOps-1930796261',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:52:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ijtjj0g2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:52:01Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=809da994-7551-4f52-8920-b0dfaa2ef73e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.629 244018 DEBUG nova.network.os_vif_util [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.631 244018 DEBUG nova.network.os_vif_util [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:eb:19,bridge_name='br-int',has_traffic_filtering=True,id=77034e66-a3e3-47d0-b467-29a045343530,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77034e66-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.631 244018 DEBUG os_vif [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:eb:19,bridge_name='br-int',has_traffic_filtering=True,id=77034e66-a3e3-47d0-b467-29a045343530,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77034e66-a3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
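attach_interface has now reached os-vif: the Neutron port dict is converted to a versioned VIFOpenVSwitch object and handed to the 'ovs' plugin, whose plug() drives the ovsdb transactions that follow. A sketch of that hand-off with field values copied from the log (InstanceInfo trimmed to two fields; actually plugging requires access to the local ovsdb):

    import os_vif
    from os_vif.objects.instance_info import InstanceInfo
    from os_vif.objects.network import Network
    from os_vif.objects.vif import VIFOpenVSwitch

    os_vif.initialize()  # loads the 'ovs' plugin, among others

    vif = VIFOpenVSwitch(
        id='77034e66-a3e3-47d0-b467-29a045343530',
        address='fa:16:3e:63:eb:19',
        network=Network(id='e68b4f5a-a28d-4155-93af-2997c1302403',
                        bridge='br-int'),
        vif_name='tap77034e66-a3',
        bridge_name='br-int',
        plugin='ovs',
        has_traffic_filtering=True,
        preserve_on_delete=False)

    instance = InstanceInfo(uuid='809da994-7551-4f52-8920-b0dfaa2ef73e',
                            name='tempest-TestNetworkBasicOps-server-1151259173')
    os_vif.plug(vif, instance)  # -> "Successfully plugged vif ..." on success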
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.632 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.633 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.633 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.637 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.638 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77034e66-a3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.639 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap77034e66-a3, col_values=(('external_ids', {'iface-id': '77034e66-a3e3-47d0-b467-29a045343530', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:63:eb:19', 'vm-uuid': '809da994-7551-4f52-8920-b0dfaa2ef73e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:52:31 np0005629333 NetworkManager[49836]: <info>  [1772023951.6425] manager: (tap77034e66-a3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/565)
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.647 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.651 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.652 244018 INFO os_vif [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:eb:19,bridge_name='br-int',has_traffic_filtering=True,id=77034e66-a3e3-47d0-b467-29a045343530,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77034e66-a3')#033[00m
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.653 244018 DEBUG nova.virt.libvirt.vif [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1151259173',display_name='tempest-TestNetworkBasicOps-server-1151259173',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1151259173',id=127,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuhKTGrJ6xoqtXbC9i6shxmOzzCAmEiPLvEhcBT9lMLvpNHET3NrmwNha38Zzx8OOcER4UhJ6EWWvnBqNIlR5/VZu+vuQ6n1q9c1LTSLn17vfclbttzgZoR8PFOeBob+Q==',key_name='tempest-TestNetworkBasicOps-1930796261',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:52:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ijtjj0g2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:52:01Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=809da994-7551-4f52-8920-b0dfaa2ef73e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.654 244018 DEBUG nova.network.os_vif_util [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.655 244018 DEBUG nova.network.os_vif_util [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:eb:19,bridge_name='br-int',has_traffic_filtering=True,id=77034e66-a3e3-47d0-b467-29a045343530,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77034e66-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
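[editor's note] The three nova_compute lines above show the usual os-vif round trip: nova's JSON VIF dict is converted into a typed VIFOpenVSwitch object, which the 'ovs' plugin then plugs. A minimal sketch of driving os-vif's public API directly, using the values from this port; the InstanceInfo/Network setup here is an assumption for illustration, and nova reaches the same plug() call through its libvirt vif driver rather than like this:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # registers the installed plugins, including 'ovs'

    inst = instance_info.InstanceInfo(
        uuid='809da994-7551-4f52-8920-b0dfaa2ef73e',
        name='tempest-TestNetworkBasicOps-server-1151259173')
    net = network.Network(id='e68b4f5a-a28d-4155-93af-2997c1302403',
                          bridge='br-int', mtu=1442)
    port = vif.VIFOpenVSwitch(
        id='77034e66-a3e3-47d0-b467-29a045343530',
        address='fa:16:3e:63:eb:19',
        bridge_name='br-int',
        vif_name='tap77034e66-a3',
        network=net)

    os_vif.plug(port, inst)  # yields the "Successfully plugged vif" message above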
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.658 244018 DEBUG nova.virt.libvirt.guest [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] attach device xml: <interface type="ethernet">
Feb 25 07:52:31 np0005629333 nova_compute[244014]:  <mac address="fa:16:3e:63:eb:19"/>
Feb 25 07:52:31 np0005629333 nova_compute[244014]:  <model type="virtio"/>
Feb 25 07:52:31 np0005629333 nova_compute[244014]:  <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:52:31 np0005629333 nova_compute[244014]:  <mtu size="1442"/>
Feb 25 07:52:31 np0005629333 nova_compute[244014]:  <target dev="tap77034e66-a3"/>
Feb 25 07:52:31 np0005629333 nova_compute[244014]: </interface>
Feb 25 07:52:31 np0005629333 nova_compute[244014]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
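[editor's note] The attach itself is a libvirt hot-plug: nova's Guest.attach_device hands the XML above to virDomainAttachDeviceFlags with both the live and persistent flags so the interface survives a guest restart. A sketch of the equivalent direct libvirt-python call, assuming the usual instance-id-to-domain-name mapping (id 127 -> instance-0000007f, the domain the disk audit later in this log also mentions); retries and error handling omitted:

    import libvirt

    IFACE_XML = """<interface type="ethernet">
      <mac address="fa:16:3e:63:eb:19"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tap77034e66-a3"/>
    </interface>"""

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-0000007f')  # assumed domain name
    dom.attachDeviceFlags(
        IFACE_XML,
        libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)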
Feb 25 07:52:31 np0005629333 kernel: tap77034e66-a3: entered promiscuous mode
Feb 25 07:52:31 np0005629333 NetworkManager[49836]: <info>  [1772023951.6734] manager: (tap77034e66-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/566)
Feb 25 07:52:31 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:31Z|01352|binding|INFO|Claiming lport 77034e66-a3e3-47d0-b467-29a045343530 for this chassis.
Feb 25 07:52:31 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:31Z|01353|binding|INFO|77034e66-a3e3-47d0-b467-29a045343530: Claiming fa:16:3e:63:eb:19 10.100.0.29
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.675 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.688 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:eb:19 10.100.0.29'], port_security=['fa:16:3e:63:eb:19 10.100.0.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': '809da994-7551-4f52-8920-b0dfaa2ef73e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e68b4f5a-a28d-4155-93af-2997c1302403', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4f6e829b-663a-471b-98d6-d0d40f869440', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c6b15ef5-ec0a-4008-9d34-97a6f1968263, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=77034e66-a3e3-47d0-b467-29a045343530) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:52:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.690 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 77034e66-a3e3-47d0-b467-29a045343530 in datapath e68b4f5a-a28d-4155-93af-2997c1302403 bound to our chassis#033[00m
Feb 25 07:52:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.694 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e68b4f5a-a28d-4155-93af-2997c1302403#033[00m
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.702 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:31 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:31Z|01354|binding|INFO|Setting lport 77034e66-a3e3-47d0-b467-29a045343530 ovn-installed in OVS
Feb 25 07:52:31 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:31Z|01355|binding|INFO|Setting lport 77034e66-a3e3-47d0-b467-29a045343530 up in Southbound
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.709 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.710 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe3b404-cb54-4d11-8b57-501c499d25f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.711 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape68b4f5a-a1 in ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:52:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.712 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape68b4f5a-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:52:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.712 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0432e7fc-2250-44cb-9c27-3df216c991f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.713 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2a43bf88-2cce-48a3-be49-600fc21b7882]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:31 np0005629333 systemd-udevd[360472]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:52:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.724 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[2213ec93-8032-465d-877e-af58505f61c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.736 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[61c04112-e998-499f-a24d-9c0847d81740]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:31 np0005629333 NetworkManager[49836]: <info>  [1772023951.7383] device (tap77034e66-a3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:52:31 np0005629333 NetworkManager[49836]: <info>  [1772023951.7397] device (tap77034e66-a3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:52:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.768 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[810fe5ca-9fbd-4521-adb0-20ab3e32ca38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.774 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6428ff34-7f84-4b45-b827-d4ca096a92a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:31 np0005629333 systemd-udevd[360476]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:52:31 np0005629333 NetworkManager[49836]: <info>  [1772023951.7761] manager: (tape68b4f5a-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/567)
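[editor's note] "Creating VETH tape68b4f5a-a1 in ovnmeta-..." is plain netlink work: a veth pair whose outer end (tape68b4f5a-a0, the Veth device NetworkManager just noticed) stays in the root namespace to be plugged into br-int, while the peer moves into the per-network namespace where the metadata proxy will listen. A sketch with pyroute2, the library behind neutron's privileged ip_lib helpers; names are taken from the log, the exact sequence neutron uses differs in detail:

    from pyroute2 import IPRoute, netns

    NS = 'ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403'
    if NS not in netns.listnetns():
        netns.create(NS)

    ip = IPRoute()
    ip.link('add', ifname='tape68b4f5a-a0', kind='veth', peer='tape68b4f5a-a1')
    peer = ip.link_lookup(ifname='tape68b4f5a-a1')[0]
    ip.link('set', index=peer, net_ns_fd=NS)  # move inner end into the namespace
    outer = ip.link_lookup(ifname='tape68b4f5a-a0')[0]
    ip.link('set', index=outer, state='up')
    ip.close()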
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.780 244018 DEBUG nova.virt.libvirt.driver [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.780 244018 DEBUG nova.virt.libvirt.driver [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.781 244018 DEBUG nova.virt.libvirt.driver [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:c3:63:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.781 244018 DEBUG nova.virt.libvirt.driver [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:63:eb:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:52:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.806 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[99ea369f-3b92-4352-a9b0-80b3cb148125]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.806 244018 DEBUG nova.virt.libvirt.guest [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:52:31 np0005629333 nova_compute[244014]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:52:31 np0005629333 nova_compute[244014]:  <nova:name>tempest-TestNetworkBasicOps-server-1151259173</nova:name>
Feb 25 07:52:31 np0005629333 nova_compute[244014]:  <nova:creationTime>2026-02-25 12:52:31</nova:creationTime>
Feb 25 07:52:31 np0005629333 nova_compute[244014]:  <nova:flavor name="m1.nano">
Feb 25 07:52:31 np0005629333 nova_compute[244014]:    <nova:memory>128</nova:memory>
Feb 25 07:52:31 np0005629333 nova_compute[244014]:    <nova:disk>1</nova:disk>
Feb 25 07:52:31 np0005629333 nova_compute[244014]:    <nova:swap>0</nova:swap>
Feb 25 07:52:31 np0005629333 nova_compute[244014]:    <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:52:31 np0005629333 nova_compute[244014]:    <nova:vcpus>1</nova:vcpus>
Feb 25 07:52:31 np0005629333 nova_compute[244014]:  </nova:flavor>
Feb 25 07:52:31 np0005629333 nova_compute[244014]:  <nova:owner>
Feb 25 07:52:31 np0005629333 nova_compute[244014]:    <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 07:52:31 np0005629333 nova_compute[244014]:    <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 07:52:31 np0005629333 nova_compute[244014]:  </nova:owner>
Feb 25 07:52:31 np0005629333 nova_compute[244014]:  <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:52:31 np0005629333 nova_compute[244014]:  <nova:ports>
Feb 25 07:52:31 np0005629333 nova_compute[244014]:    <nova:port uuid="4f59e1f7-f07c-48a1-82b4-b6a563a7130a">
Feb 25 07:52:31 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 07:52:31 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:52:31 np0005629333 nova_compute[244014]:    <nova:port uuid="77034e66-a3e3-47d0-b467-29a045343530">
Feb 25 07:52:31 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Feb 25 07:52:31 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:52:31 np0005629333 nova_compute[244014]:  </nova:ports>
Feb 25 07:52:31 np0005629333 nova_compute[244014]: </nova:instance>
Feb 25 07:52:31 np0005629333 nova_compute[244014]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
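[editor's note] set_metadata stores that <nova:instance> document in the domain definition via virDomainSetMetadata, which is why virsh dumpxml on the hypervisor shows the owning user, project, flavor and ports. The equivalent call, sketched; a trimmed document stands in for the full one logged above, and the domain name is assumed as in the attach step:

    import libvirt

    NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.1'
    METADATA = ('<instance xmlns="%s">'
                '<name>tempest-TestNetworkBasicOps-server-1151259173</name>'
                '</instance>' % NOVA_NS)

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-0000007f')
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, METADATA,
                    'instance', NOVA_NS,
                    libvirt.VIR_DOMAIN_AFFECT_LIVE |
                    libvirt.VIR_DOMAIN_AFFECT_CONFIG)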
Feb 25 07:52:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.810 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3060dd74-c423-4c3b-98d5-9d8b1804dbd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:31 np0005629333 NetworkManager[49836]: <info>  [1772023951.8297] device (tape68b4f5a-a0): carrier: link connected
Feb 25 07:52:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.837 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ff5d18bd-f7c8-4b58-876d-7a1ece3f9359]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.849 244018 DEBUG oslo_concurrency.lockutils [None req-b9bc2e73-a747-4670-bce1-79a3e941168f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "interface-809da994-7551-4f52-8920-b0dfaa2ef73e-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:52:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.857 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[234435eb-b4ca-4744-856e-83bb8d4085c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape68b4f5a-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:ac:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 404], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592123, 'reachable_time': 41665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360498, 'error': None, 'target': 'ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.872 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[937d43ca-9568-41b7-83df-c5e5a64cfab5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe03:ace6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592123, 'tstamp': 592123}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360499, 'error': None, 'target': 'ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.889 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5d06c2ee-b0ce-4056-8f48-437958bfb92d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape68b4f5a-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:ac:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 404], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592123, 'reachable_time': 41665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 360500, 'error': None, 'target': 'ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.920 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0f409d3c-5c9c-4dd3-8ff0-8837ea5826ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.979 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b5117192-4aac-415a-9139-fe2843f95d55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.981 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape68b4f5a-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:52:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.981 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:52:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.982 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape68b4f5a-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.982 244018 DEBUG nova.compute.manager [req-c7b8eb70-7141-4e8d-9e3a-af134930535a req-ac568642-f8a6-43a4-80db-cb8e8870b403 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-vif-plugged-77034e66-a3e3-47d0-b467-29a045343530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.982 244018 DEBUG oslo_concurrency.lockutils [req-c7b8eb70-7141-4e8d-9e3a-af134930535a req-ac568642-f8a6-43a4-80db-cb8e8870b403 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.984 244018 DEBUG oslo_concurrency.lockutils [req-c7b8eb70-7141-4e8d-9e3a-af134930535a req-ac568642-f8a6-43a4-80db-cb8e8870b403 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.984 244018 DEBUG oslo_concurrency.lockutils [req-c7b8eb70-7141-4e8d-9e3a-af134930535a req-ac568642-f8a6-43a4-80db-cb8e8870b403 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.984 244018 DEBUG nova.compute.manager [req-c7b8eb70-7141-4e8d-9e3a-af134930535a req-ac568642-f8a6-43a4-80db-cb8e8870b403 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] No waiting events found dispatching network-vif-plugged-77034e66-a3e3-47d0-b467-29a045343530 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.985 244018 WARNING nova.compute.manager [req-c7b8eb70-7141-4e8d-9e3a-af134930535a req-ac568642-f8a6-43a4-80db-cb8e8870b403 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received unexpected event network-vif-plugged-77034e66-a3e3-47d0-b467-29a045343530 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.985 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:31 np0005629333 NetworkManager[49836]: <info>  [1772023951.9860] manager: (tape68b4f5a-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/568)
Feb 25 07:52:31 np0005629333 kernel: tape68b4f5a-a0: entered promiscuous mode
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.988 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:31.990 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape68b4f5a-a0, col_values=(('external_ids', {'iface-id': 'd3cef94a-f2c7-4a41-beb1-fd29a623854a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:52:31 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:31Z|01356|binding|INFO|Releasing lport d3cef94a-f2c7-4a41-beb1-fd29a623854a from this chassis (sb_readonly=0)
Feb 25 07:52:31 np0005629333 nova_compute[244014]: 2026-02-25 12:52:31.991 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:32 np0005629333 nova_compute[244014]: 2026-02-25 12:52:32.000 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:32.002 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e68b4f5a-a28d-4155-93af-2997c1302403.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e68b4f5a-a28d-4155-93af-2997c1302403.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:32.003 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b9455aeb-f426-4094-8250-78cd18f2a78a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:32.004 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-e68b4f5a-a28d-4155-93af-2997c1302403
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/e68b4f5a-a28d-4155-93af-2997c1302403.pid.haproxy
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID e68b4f5a-a28d-4155-93af-2997c1302403
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
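[editor's note] The rendered config binds haproxy to 169.254.169.254:80 inside the ovnmeta namespace and forwards every request to the UNIX socket served by the metadata agent, adding the X-OVN-Network-ID header so the backend can resolve which network the client IP belongs to. From inside a guest on that network, the standard metadata URL therefore resolves through this proxy; a quick check (to be run in the instance, not on the compute host):

    import requests

    r = requests.get('http://169.254.169.254/openstack/latest/meta_data.json',
                     timeout=5)
    print(r.json()['uuid'])  # expect 809da994-7551-4f52-8920-b0dfaa2ef73e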
Feb 25 07:52:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:32.004 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403', 'env', 'PROCESS_TAG=haproxy-e68b4f5a-a28d-4155-93af-2997c1302403', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e68b4f5a-a28d-4155-93af-2997c1302403.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 07:52:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:52:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:52:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:52:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:52:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:52:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:52:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:52:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:52:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:52:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:52:32 np0005629333 podman[360532]: 2026-02-25 12:52:32.379625003 +0000 UTC m=+0.062789764 container create 62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 07:52:32 np0005629333 systemd[1]: Started libpod-conmon-62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e.scope.
Feb 25 07:52:32 np0005629333 podman[360532]: 2026-02-25 12:52:32.341325452 +0000 UTC m=+0.024490273 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:52:32 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:52:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce54b1a0b6462e52effccd4126e7ec55dceac4038acb86f1df2038cf5f156ad6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:52:32 np0005629333 podman[360532]: 2026-02-25 12:52:32.470854 +0000 UTC m=+0.154018821 container init 62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:52:32 np0005629333 podman[360532]: 2026-02-25 12:52:32.478747692 +0000 UTC m=+0.161912453 container start 62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 25 07:52:32 np0005629333 neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403[360547]: [NOTICE]   (360551) : New worker (360553) forked
Feb 25 07:52:32 np0005629333 neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403[360547]: [NOTICE]   (360551) : Loading success.
Feb 25 07:52:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2159: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Feb 25 07:52:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:32Z|00165|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:63:eb:19 10.100.0.29
Feb 25 07:52:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:32Z|00166|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:63:eb:19 10.100.0.29
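[editor's note] The DHCPOFFER/DHCPACK pair comes from ovn-controller's pinctrl thread: with OVN, DHCP for 10.100.0.29 is answered in-hypervisor from the logical flows, with no dnsmasq process involved. One hedged way to inspect the binding that was just served, assuming ovn-sbctl on this chassis can reach the southbound DB:

    import json
    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', '--format=json', 'list', 'Port_Binding',
         '77034e66-a3e3-47d0-b467-29a045343530'],
        capture_output=True, text=True, check=True)
    table = json.loads(out.stdout)
    print(table['headings'])  # column names, including 'mac' and 'chassis'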
Feb 25 07:52:33 np0005629333 nova_compute[244014]: 2026-02-25 12:52:33.475 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Updating instance_info_cache with network_info: [{"id": "9981394d-e733-404e-85a5-e2e51877881a", "address": "fa:16:3e:2b:92:99", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9981394d-e7", "ovs_interfaceid": "9981394d-e733-404e-85a5-e2e51877881a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:52:33 np0005629333 nova_compute[244014]: 2026-02-25 12:52:33.495 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:52:33 np0005629333 nova_compute[244014]: 2026-02-25 12:52:33.495 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 25 07:52:33 np0005629333 nova_compute[244014]: 2026-02-25 12:52:33.496 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:52:33 np0005629333 nova_compute[244014]: 2026-02-25 12:52:33.496 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:52:33 np0005629333 nova_compute[244014]: 2026-02-25 12:52:33.521 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:33 np0005629333 nova_compute[244014]: 2026-02-25 12:52:33.522 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:33 np0005629333 nova_compute[244014]: 2026-02-25 12:52:33.522 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:52:33 np0005629333 nova_compute[244014]: 2026-02-25 12:52:33.523 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:52:33 np0005629333 nova_compute[244014]: 2026-02-25 12:52:33.523 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:52:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:52:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:52:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/913115532' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:52:34 np0005629333 nova_compute[244014]: 2026-02-25 12:52:34.078 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
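[editor's note] The resource audit shells out to ceph df (the dispatch is visible in the ceph-mon audit line just above) and reads the JSON stats to derive the RBD-backed disk capacity. A sketch of that parse, assuming the same CLI invocation and the cluster-level 'stats' block of the JSON output:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True)
    stats = json.loads(out.stdout)['stats']
    free_gb = stats['total_avail_bytes'] / 1024**3
    print(round(free_gb, 2))  # ~59.85, matching free_disk reported below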
Feb 25 07:52:34 np0005629333 nova_compute[244014]: 2026-02-25 12:52:34.149 244018 DEBUG nova.compute.manager [req-fef323b9-2fd3-46bb-9c21-6dbc595fa32c req-a7c02cce-4b16-4ebb-a26d-294ae89d21db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-vif-plugged-77034e66-a3e3-47d0-b467-29a045343530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:52:34 np0005629333 nova_compute[244014]: 2026-02-25 12:52:34.150 244018 DEBUG oslo_concurrency.lockutils [req-fef323b9-2fd3-46bb-9c21-6dbc595fa32c req-a7c02cce-4b16-4ebb-a26d-294ae89d21db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:34 np0005629333 nova_compute[244014]: 2026-02-25 12:52:34.150 244018 DEBUG oslo_concurrency.lockutils [req-fef323b9-2fd3-46bb-9c21-6dbc595fa32c req-a7c02cce-4b16-4ebb-a26d-294ae89d21db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:34 np0005629333 nova_compute[244014]: 2026-02-25 12:52:34.151 244018 DEBUG oslo_concurrency.lockutils [req-fef323b9-2fd3-46bb-9c21-6dbc595fa32c req-a7c02cce-4b16-4ebb-a26d-294ae89d21db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:52:34 np0005629333 nova_compute[244014]: 2026-02-25 12:52:34.152 244018 DEBUG nova.compute.manager [req-fef323b9-2fd3-46bb-9c21-6dbc595fa32c req-a7c02cce-4b16-4ebb-a26d-294ae89d21db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] No waiting events found dispatching network-vif-plugged-77034e66-a3e3-47d0-b467-29a045343530 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:52:34 np0005629333 nova_compute[244014]: 2026-02-25 12:52:34.152 244018 WARNING nova.compute.manager [req-fef323b9-2fd3-46bb-9c21-6dbc595fa32c req-a7c02cce-4b16-4ebb-a26d-294ae89d21db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received unexpected event network-vif-plugged-77034e66-a3e3-47d0-b467-29a045343530 for instance with vm_state active and task_state None.#033[00m
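[editor's note] Both "unexpected event" warnings are benign: nova only consumes external events that an in-flight operation has registered a waiter for, and do_attach_interface released its lock at 12:52:31.849, before neutron's network-vif-plugged notifications arrived. With no waiter registered, pop_instance_event finds nothing and the event is logged and dropped. The shape of that mechanism, sketched and heavily simplified (the real code is InstanceEvents plus wait_for_instance_event in nova.compute.manager; method names here are abbreviated):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._waiters = {}  # (instance_uuid, event_name) -> threading.Event

        def prepare(self, key):
            # An operation (e.g. attach_interface) expects this event.
            self._waiters[key] = threading.Event()

        def pop(self, key):
            waiter = self._waiters.pop(key, None)
            if waiter is None:
                # Nothing is waiting: caller logs the "unexpected event" warning.
                print('No waiting events found dispatching', key[1])
                return
            waiter.set()  # wakes whoever is blocked in wait()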
Feb 25 07:52:34 np0005629333 nova_compute[244014]: 2026-02-25 12:52:34.189 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:52:34 np0005629333 nova_compute[244014]: 2026-02-25 12:52:34.189 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:52:34 np0005629333 nova_compute[244014]: 2026-02-25 12:52:34.195 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:52:34 np0005629333 nova_compute[244014]: 2026-02-25 12:52:34.196 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:52:34 np0005629333 nova_compute[244014]: 2026-02-25 12:52:34.201 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:52:34 np0005629333 nova_compute[244014]: 2026-02-25 12:52:34.201 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:52:34 np0005629333 nova_compute[244014]: 2026-02-25 12:52:34.223 244018 DEBUG nova.network.neutron [req-b75cd115-2e37-4db6-a74c-1ffbc2ec1fe3 req-8834ac7f-ab66-4faf-b914-e72bd254ad5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updated VIF entry in instance network info cache for port 77034e66-a3e3-47d0-b467-29a045343530. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:52:34 np0005629333 nova_compute[244014]: 2026-02-25 12:52:34.224 244018 DEBUG nova.network.neutron [req-b75cd115-2e37-4db6-a74c-1ffbc2ec1fe3 req-8834ac7f-ab66-4faf-b914-e72bd254ad5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updating instance_info_cache with network_info: [{"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:52:34 np0005629333 nova_compute[244014]: 2026-02-25 12:52:34.247 244018 DEBUG oslo_concurrency.lockutils [req-b75cd115-2e37-4db6-a74c-1ffbc2ec1fe3 req-8834ac7f-ab66-4faf-b914-e72bd254ad5f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:52:34 np0005629333 nova_compute[244014]: 2026-02-25 12:52:34.449 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:52:34 np0005629333 nova_compute[244014]: 2026-02-25 12:52:34.451 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2955MB free_disk=59.85068342462182GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 07:52:34 np0005629333 nova_compute[244014]: 2026-02-25 12:52:34.451 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:34 np0005629333 nova_compute[244014]: 2026-02-25 12:52:34.451 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
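
[Annotation] The lockutils lines above are the standard oslo.concurrency pattern: acquire, run the critical section, release, with wait/held times logged. A minimal standalone sketch of the same pattern, reusing the "compute_resources" lock name from the log (the function body is hypothetical, not nova's actual resource tracker):

    from oslo_concurrency import lockutils

    # Decorator form: serializes callers on an in-process lock named
    # "compute_resources", producing the same "acquired ... waited" /
    # "released ... held" DEBUG lines seen above.
    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        pass  # hypothetical critical section

    # Context-manager form, equivalent to the Acquiring/Acquired/Releasing trio:
    with lockutils.lock('compute_resources'):
        pass  # critical section
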
Feb 25 07:52:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 305 active+clean; 391 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 277 KiB/s rd, 173 KiB/s wr, 33 op/s
Feb 25 07:52:34 np0005629333 nova_compute[244014]: 2026-02-25 12:52:34.730 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance dd7feae9-9d2a-41b6-9277-cbf51a2c8f23 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:52:34 np0005629333 nova_compute[244014]: 2026-02-25 12:52:34.731 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 809da994-7551-4f52-8920-b0dfaa2ef73e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:52:34 np0005629333 nova_compute[244014]: 2026-02-25 12:52:34.731 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 739026fc-9c96-4212-9fa3-e6731e7f61f9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:52:34 np0005629333 nova_compute[244014]: 2026-02-25 12:52:34.731 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 07:52:34 np0005629333 nova_compute[244014]: 2026-02-25 12:52:34.732 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.182 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
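
[Annotation] The resource tracker shells out to the Ceph CLI here to size the RBD-backed disk pool. A hedged way to reproduce the same probe outside nova, using the exact command line from the log; the JSON field names below are those emitted by recent Ceph releases and should be treated as an assumption:

    import json
    import subprocess

    # Same command nova logs above.
    out = subprocess.check_output([
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    stats = json.loads(out)
    # Cluster-wide totals are reported in bytes under the "stats" key.
    total_gb = stats['stats']['total_bytes'] / 1024 ** 3
    avail_gb = stats['stats']['total_avail_bytes'] / 1024 ** 3
    print(f'{avail_gb:.1f} GiB free of {total_gb:.1f} GiB')
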
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.404 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.503 244018 DEBUG nova.compute.manager [req-78b7c122-abbf-4c63-bb5c-1c3b99609ca4 req-d1a0f099-5fa7-4220-a085-b9049978bf3c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Received event network-changed-61c3ba1d-0d4b-426e-8d04-ce56efd650ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.504 244018 DEBUG nova.compute.manager [req-78b7c122-abbf-4c63-bb5c-1c3b99609ca4 req-d1a0f099-5fa7-4220-a085-b9049978bf3c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Refreshing instance network info cache due to event network-changed-61c3ba1d-0d4b-426e-8d04-ce56efd650ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.505 244018 DEBUG oslo_concurrency.lockutils [req-78b7c122-abbf-4c63-bb5c-1c3b99609ca4 req-d1a0f099-5fa7-4220-a085-b9049978bf3c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-739026fc-9c96-4212-9fa3-e6731e7f61f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.506 244018 DEBUG oslo_concurrency.lockutils [req-78b7c122-abbf-4c63-bb5c-1c3b99609ca4 req-d1a0f099-5fa7-4220-a085-b9049978bf3c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-739026fc-9c96-4212-9fa3-e6731e7f61f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.506 244018 DEBUG nova.network.neutron [req-78b7c122-abbf-4c63-bb5c-1c3b99609ca4 req-d1a0f099-5fa7-4220-a085-b9049978bf3c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Refreshing network info cache for port 61c3ba1d-0d4b-426e-8d04-ce56efd650ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.555 244018 DEBUG oslo_concurrency.lockutils [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "739026fc-9c96-4212-9fa3-e6731e7f61f9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.556 244018 DEBUG oslo_concurrency.lockutils [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.557 244018 DEBUG oslo_concurrency.lockutils [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.558 244018 DEBUG oslo_concurrency.lockutils [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.559 244018 DEBUG oslo_concurrency.lockutils [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.561 244018 INFO nova.compute.manager [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Terminating instance#033[00m
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.563 244018 DEBUG nova.compute.manager [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:52:35 np0005629333 kernel: tap61c3ba1d-0d (unregistering): left promiscuous mode
Feb 25 07:52:35 np0005629333 NetworkManager[49836]: <info>  [1772023955.6103] device (tap61c3ba1d-0d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:52:35 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:35Z|01357|binding|INFO|Releasing lport 61c3ba1d-0d4b-426e-8d04-ce56efd650ae from this chassis (sb_readonly=0)
Feb 25 07:52:35 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:35Z|01358|binding|INFO|Setting lport 61c3ba1d-0d4b-426e-8d04-ce56efd650ae down in Southbound
Feb 25 07:52:35 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:35Z|01359|binding|INFO|Removing iface tap61c3ba1d-0d ovn-installed in OVS
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.636 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:35.639 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:52:d2 10.100.0.14 2001:db8:0:1:f816:3eff:fe93:52d2 2001:db8::f816:3eff:fe93:52d2'], port_security=['fa:16:3e:93:52:d2 10.100.0.14 2001:db8:0:1:f816:3eff:fe93:52d2 2001:db8::f816:3eff:fe93:52d2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8:0:1:f816:3eff:fe93:52d2/64 2001:db8::f816:3eff:fe93:52d2/64', 'neutron:device_id': '739026fc-9c96-4212-9fa3-e6731e7f61f9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e19ed85e-54ee-4274-951c-ade412625983', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a1c36bad-4548-4f88-8bee-0f028af7a076', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a56192da-444f-498f-a789-c0e4cafb114e, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=61c3ba1d-0d4b-426e-8d04-ce56efd650ae) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:52:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:35.642 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 61c3ba1d-0d4b-426e-8d04-ce56efd650ae in datapath e19ed85e-54ee-4274-951c-ade412625983 unbound from our chassis#033[00m
Feb 25 07:52:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:35.645 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e19ed85e-54ee-4274-951c-ade412625983#033[00m
Feb 25 07:52:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:35.662 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e324be3d-4e24-46b9-8f98-3b319f7a5232]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:35 np0005629333 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d00000080.scope: Deactivated successfully.
Feb 25 07:52:35 np0005629333 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d00000080.scope: Consumed 12.282s CPU time.
Feb 25 07:52:35 np0005629333 systemd-machined[210048]: Machine qemu-160-instance-00000080 terminated.
Feb 25 07:52:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:35.691 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[eecb4ea6-f4ea-43e6-a67a-55f9e4a53bb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:35.695 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4cacdae9-86a7-4a8c-bfb4-cf07120f1454]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:35.720 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7f5c790e-761b-4f51-8aa7-9f02014ba49a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:35.741 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[05747e0d-f072-409e-ae05-642f8513b41a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape19ed85e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6f:4b:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 3468, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 3468, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 399], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586681, 'reachable_time': 17563, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2768, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2768, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360617, 'error': None, 'target': 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:35.757 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dea07f12-4267-4b3a-b49c-0ce917613cc8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape19ed85e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586691, 'tstamp': 586691}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360618, 'error': None, 'target': 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape19ed85e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586693, 'tstamp': 586693}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360618, 'error': None, 'target': 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
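
[Annotation] The two privsep replies above are pyroute2 netlink dumps (RTM_NEWLINK, then RTM_NEWADDR) taken inside the ovnmeta-e19ed85e-... network namespace; note the 169.254.169.254 metadata address on tape19ed85e-51. A hedged standalone equivalent (requires root and assumes the namespace still exists):

    from pyroute2 import NetNS

    # Dump addresses in the OVN metadata namespace named in the log's
    # 'target' field; each message mirrors the RTM_NEWADDR payloads above.
    with NetNS('ovnmeta-e19ed85e-54ee-4274-951c-ade412625983') as ns:
        for msg in ns.get_addr():
            print(msg.get_attr('IFA_LABEL'), msg.get_attr('IFA_ADDRESS'))
    # Expected per the log: tape19ed85e-51 10.100.0.2
    #                       tape19ed85e-51 169.254.169.254
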
Feb 25 07:52:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:35.760 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape19ed85e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.762 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.767 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:35.768 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape19ed85e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:52:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:35.769 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:52:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:35.771 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape19ed85e-50, col_values=(('external_ids', {'iface-id': '591ce053-3764-4ce0-841f-6728c8fd9491'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:52:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:35.772 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
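
[Annotation] DelPortCommand, AddPortCommand, and DbSetCommand are ovsdbapp transaction commands; "Transaction caused no change" means the desired state already matched the database. A sketch of how such commands are composed with ovsdbapp, assuming the conventional OVSDB socket path; port names and the iface-id value are taken from the log. The agent above runs each command as its own single-command transaction (each "txn n=1"); they are batched here only for brevity:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local Open_vSwitch database (socket path assumed).
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Mirrors the "Running txn n=1 command(idx=0)" lines above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tape19ed85e-50', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tape19ed85e-50', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tape19ed85e-50',
            ('external_ids', {'iface-id': '591ce053-3764-4ce0-841f-6728c8fd9491'})))
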
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.798 244018 INFO nova.virt.libvirt.driver [-] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Instance destroyed successfully.#033[00m
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.799 244018 DEBUG nova.objects.instance [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid 739026fc-9c96-4212-9fa3-e6731e7f61f9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:52:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:52:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/399743122' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.816 244018 DEBUG nova.virt.libvirt.vif [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:52:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1844602195',display_name='tempest-TestGettingAddress-server-1844602195',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1844602195',id=128,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJC+upPNElBVnEP9sRAofuJ3WgjrN7NUHIjnb2xNcja3SXT0AJcyip72U8J6Hd3gQBhmqSCIrT6CHz7iNR5W3wt+U6axciPl3ik+0GTg6pCZIfse39ZA4oXcSHjtD+Yg8A==',key_name='tempest-TestGettingAddress-1847809733',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:52:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-3sho0d1g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:52:11Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=739026fc-9c96-4212-9fa3-e6731e7f61f9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "address": "fa:16:3e:93:52:d2", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61c3ba1d-0d", "ovs_interfaceid": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.816 244018 DEBUG nova.network.os_vif_util [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "address": "fa:16:3e:93:52:d2", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61c3ba1d-0d", "ovs_interfaceid": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.817 244018 DEBUG nova.network.os_vif_util [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:93:52:d2,bridge_name='br-int',has_traffic_filtering=True,id=61c3ba1d-0d4b-426e-8d04-ce56efd650ae,network=Network(e19ed85e-54ee-4274-951c-ade412625983),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61c3ba1d-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.817 244018 DEBUG os_vif [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:52:d2,bridge_name='br-int',has_traffic_filtering=True,id=61c3ba1d-0d4b-426e-8d04-ce56efd650ae,network=Network(e19ed85e-54ee-4274-951c-ade412625983),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61c3ba1d-0d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.819 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.819 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61c3ba1d-0d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.820 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.823 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.825 244018 INFO os_vif [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:52:d2,bridge_name='br-int',has_traffic_filtering=True,id=61c3ba1d-0d4b-426e-8d04-ce56efd650ae,network=Network(e19ed85e-54ee-4274-951c-ade412625983),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61c3ba1d-0d')#033[00m
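
[Annotation] os_vif's unplug takes the VIFOpenVSwitch object printed above plus an InstanceInfo and removes the tap port from br-int through OVSDB. A heavily abbreviated sketch, under the assumption that the fields shown suffice for the ovs plugin's generic unplug path (a real VIF carries a fuller Network object and port profile):

    import os_vif
    from os_vif.objects.instance_info import InstanceInfo
    from os_vif.objects.network import Network
    from os_vif.objects.vif import VIFOpenVSwitch

    os_vif.initialize()  # loads the 'ovs' plugin, among others

    # Field values copied from the VIFOpenVSwitch repr in the log.
    vif = VIFOpenVSwitch(
        id='61c3ba1d-0d4b-426e-8d04-ce56efd650ae',
        address='fa:16:3e:93:52:d2',
        bridge_name='br-int',
        vif_name='tap61c3ba1d-0d',
        network=Network(id='e19ed85e-54ee-4274-951c-ade412625983',
                        bridge='br-int'),
    )
    instance = InstanceInfo(uuid='739026fc-9c96-4212-9fa3-e6731e7f61f9',
                            name='instance-00000080')
    os_vif.unplug(vif, instance)  # analogous to the unplug logged above
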
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.841 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.659s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.846 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.870 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
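
[Annotation] Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio. Worked through with the values in the log line above:

    # capacity = (total - reserved) * allocation_ratio, per resource class.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f"{rc}: {cap:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
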
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.888 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 07:52:35 np0005629333 nova_compute[244014]: 2026-02-25 12:52:35.888 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:52:35 np0005629333 podman[360633]: 2026-02-25 12:52:35.954143325 +0000 UTC m=+0.109027829 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 07:52:35 np0005629333 podman[360634]: 2026-02-25 12:52:35.952732296 +0000 UTC m=+0.107488287 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
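
[Annotation] The two podman records above are periodic healthcheck results for the ovn_metadata_agent and ovn_controller containers (health_status=healthy, failing streak 0, test command /openstack/healthcheck). The same check can be driven on demand; a small hedged wrapper around the podman CLI:

    import subprocess

    # 'podman healthcheck run NAME' exits 0 when the container's configured
    # healthcheck passes, non-zero otherwise.
    for name in ('ovn_metadata_agent', 'ovn_controller'):
        rc = subprocess.call(['podman', 'healthcheck', 'run', name])
        print(name, 'healthy' if rc == 0 else f'unhealthy (rc={rc})')
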
Feb 25 07:52:36 np0005629333 nova_compute[244014]: 2026-02-25 12:52:36.131 244018 INFO nova.virt.libvirt.driver [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Deleting instance files /var/lib/nova/instances/739026fc-9c96-4212-9fa3-e6731e7f61f9_del#033[00m
Feb 25 07:52:36 np0005629333 nova_compute[244014]: 2026-02-25 12:52:36.132 244018 INFO nova.virt.libvirt.driver [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Deletion of /var/lib/nova/instances/739026fc-9c96-4212-9fa3-e6731e7f61f9_del complete#033[00m
Feb 25 07:52:36 np0005629333 nova_compute[244014]: 2026-02-25 12:52:36.165 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:36 np0005629333 nova_compute[244014]: 2026-02-25 12:52:36.195 244018 INFO nova.compute.manager [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Took 0.63 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:52:36 np0005629333 nova_compute[244014]: 2026-02-25 12:52:36.195 244018 DEBUG oslo.service.loopingcall [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:52:36 np0005629333 nova_compute[244014]: 2026-02-25 12:52:36.196 244018 DEBUG nova.compute.manager [-] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:52:36 np0005629333 nova_compute[244014]: 2026-02-25 12:52:36.196 244018 DEBUG nova.network.neutron [-] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:52:36 np0005629333 nova_compute[244014]: 2026-02-25 12:52:36.274 244018 DEBUG nova.compute.manager [req-f91c18e2-b2c4-4d26-9ad8-cc04e5d92295 req-f2beb3db-09af-4b61-b1ea-0f856ac55bf0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Received event network-vif-unplugged-61c3ba1d-0d4b-426e-8d04-ce56efd650ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:52:36 np0005629333 nova_compute[244014]: 2026-02-25 12:52:36.275 244018 DEBUG oslo_concurrency.lockutils [req-f91c18e2-b2c4-4d26-9ad8-cc04e5d92295 req-f2beb3db-09af-4b61-b1ea-0f856ac55bf0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:36 np0005629333 nova_compute[244014]: 2026-02-25 12:52:36.275 244018 DEBUG oslo_concurrency.lockutils [req-f91c18e2-b2c4-4d26-9ad8-cc04e5d92295 req-f2beb3db-09af-4b61-b1ea-0f856ac55bf0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:36 np0005629333 nova_compute[244014]: 2026-02-25 12:52:36.275 244018 DEBUG oslo_concurrency.lockutils [req-f91c18e2-b2c4-4d26-9ad8-cc04e5d92295 req-f2beb3db-09af-4b61-b1ea-0f856ac55bf0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:52:36 np0005629333 nova_compute[244014]: 2026-02-25 12:52:36.275 244018 DEBUG nova.compute.manager [req-f91c18e2-b2c4-4d26-9ad8-cc04e5d92295 req-f2beb3db-09af-4b61-b1ea-0f856ac55bf0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] No waiting events found dispatching network-vif-unplugged-61c3ba1d-0d4b-426e-8d04-ce56efd650ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:52:36 np0005629333 nova_compute[244014]: 2026-02-25 12:52:36.275 244018 DEBUG nova.compute.manager [req-f91c18e2-b2c4-4d26-9ad8-cc04e5d92295 req-f2beb3db-09af-4b61-b1ea-0f856ac55bf0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Received event network-vif-unplugged-61c3ba1d-0d4b-426e-8d04-ce56efd650ae for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:52:36 np0005629333 nova_compute[244014]: 2026-02-25 12:52:36.276 244018 DEBUG nova.compute.manager [req-f91c18e2-b2c4-4d26-9ad8-cc04e5d92295 req-f2beb3db-09af-4b61-b1ea-0f856ac55bf0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Received event network-vif-plugged-61c3ba1d-0d4b-426e-8d04-ce56efd650ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:52:36 np0005629333 nova_compute[244014]: 2026-02-25 12:52:36.276 244018 DEBUG oslo_concurrency.lockutils [req-f91c18e2-b2c4-4d26-9ad8-cc04e5d92295 req-f2beb3db-09af-4b61-b1ea-0f856ac55bf0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:36 np0005629333 nova_compute[244014]: 2026-02-25 12:52:36.276 244018 DEBUG oslo_concurrency.lockutils [req-f91c18e2-b2c4-4d26-9ad8-cc04e5d92295 req-f2beb3db-09af-4b61-b1ea-0f856ac55bf0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:36 np0005629333 nova_compute[244014]: 2026-02-25 12:52:36.276 244018 DEBUG oslo_concurrency.lockutils [req-f91c18e2-b2c4-4d26-9ad8-cc04e5d92295 req-f2beb3db-09af-4b61-b1ea-0f856ac55bf0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:52:36 np0005629333 nova_compute[244014]: 2026-02-25 12:52:36.276 244018 DEBUG nova.compute.manager [req-f91c18e2-b2c4-4d26-9ad8-cc04e5d92295 req-f2beb3db-09af-4b61-b1ea-0f856ac55bf0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] No waiting events found dispatching network-vif-plugged-61c3ba1d-0d4b-426e-8d04-ce56efd650ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:52:36 np0005629333 nova_compute[244014]: 2026-02-25 12:52:36.277 244018 WARNING nova.compute.manager [req-f91c18e2-b2c4-4d26-9ad8-cc04e5d92295 req-f2beb3db-09af-4b61-b1ea-0f856ac55bf0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Received unexpected event network-vif-plugged-61c3ba1d-0d4b-426e-8d04-ce56efd650ae for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:52:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2161: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 180 KiB/s wr, 40 op/s
Feb 25 07:52:36 np0005629333 nova_compute[244014]: 2026-02-25 12:52:36.883 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:52:36 np0005629333 nova_compute[244014]: 2026-02-25 12:52:36.884 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:52:37 np0005629333 nova_compute[244014]: 2026-02-25 12:52:37.239 244018 DEBUG nova.network.neutron [-] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:52:37 np0005629333 nova_compute[244014]: 2026-02-25 12:52:37.271 244018 INFO nova.compute.manager [-] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Took 1.08 seconds to deallocate network for instance.#033[00m
Feb 25 07:52:37 np0005629333 nova_compute[244014]: 2026-02-25 12:52:37.340 244018 DEBUG oslo_concurrency.lockutils [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:37 np0005629333 nova_compute[244014]: 2026-02-25 12:52:37.340 244018 DEBUG oslo_concurrency.lockutils [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:37 np0005629333 nova_compute[244014]: 2026-02-25 12:52:37.696 244018 DEBUG oslo_concurrency.processutils [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:52:37 np0005629333 nova_compute[244014]: 2026-02-25 12:52:37.742 244018 DEBUG nova.network.neutron [req-78b7c122-abbf-4c63-bb5c-1c3b99609ca4 req-d1a0f099-5fa7-4220-a085-b9049978bf3c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Updated VIF entry in instance network info cache for port 61c3ba1d-0d4b-426e-8d04-ce56efd650ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:52:37 np0005629333 nova_compute[244014]: 2026-02-25 12:52:37.743 244018 DEBUG nova.network.neutron [req-78b7c122-abbf-4c63-bb5c-1c3b99609ca4 req-d1a0f099-5fa7-4220-a085-b9049978bf3c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Updating instance_info_cache with network_info: [{"id": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "address": "fa:16:3e:93:52:d2", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe93:52d2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61c3ba1d-0d", "ovs_interfaceid": "61c3ba1d-0d4b-426e-8d04-ce56efd650ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:52:37 np0005629333 nova_compute[244014]: 2026-02-25 12:52:37.766 244018 DEBUG oslo_concurrency.lockutils [req-78b7c122-abbf-4c63-bb5c-1c3b99609ca4 req-d1a0f099-5fa7-4220-a085-b9049978bf3c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-739026fc-9c96-4212-9fa3-e6731e7f61f9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:52:37 np0005629333 nova_compute[244014]: 2026-02-25 12:52:37.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:52:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:52:38 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1652785066' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:52:38 np0005629333 nova_compute[244014]: 2026-02-25 12:52:38.231 244018 DEBUG oslo_concurrency.processutils [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:52:38 np0005629333 nova_compute[244014]: 2026-02-25 12:52:38.238 244018 DEBUG nova.compute.provider_tree [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:52:38 np0005629333 nova_compute[244014]: 2026-02-25 12:52:38.264 244018 DEBUG nova.scheduler.client.report [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:52:38 np0005629333 nova_compute[244014]: 2026-02-25 12:52:38.292 244018 DEBUG oslo_concurrency.lockutils [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:52:38 np0005629333 nova_compute[244014]: 2026-02-25 12:52:38.336 244018 INFO nova.scheduler.client.report [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance 739026fc-9c96-4212-9fa3-e6731e7f61f9#033[00m
Feb 25 07:52:38 np0005629333 nova_compute[244014]: 2026-02-25 12:52:38.385 244018 DEBUG nova.compute.manager [req-d04870c7-5da3-418a-ac2c-0c4417960add req-fcaa921c-bfc9-4a2f-9735-a5b76da293f5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Received event network-vif-deleted-61c3ba1d-0d4b-426e-8d04-ce56efd650ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:52:38 np0005629333 nova_compute[244014]: 2026-02-25 12:52:38.385 244018 INFO nova.compute.manager [req-d04870c7-5da3-418a-ac2c-0c4417960add req-fcaa921c-bfc9-4a2f-9735-a5b76da293f5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Neutron deleted interface 61c3ba1d-0d4b-426e-8d04-ce56efd650ae; detaching it from the instance and deleting it from the info cache#033[00m
Feb 25 07:52:38 np0005629333 nova_compute[244014]: 2026-02-25 12:52:38.385 244018 DEBUG nova.network.neutron [req-d04870c7-5da3-418a-ac2c-0c4417960add req-fcaa921c-bfc9-4a2f-9735-a5b76da293f5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:52:38 np0005629333 nova_compute[244014]: 2026-02-25 12:52:38.426 244018 DEBUG oslo_concurrency.lockutils [None req-866bf25e-c2de-4d03-8ac8-9db9e964d6d2 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "739026fc-9c96-4212-9fa3-e6731e7f61f9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:52:38 np0005629333 nova_compute[244014]: 2026-02-25 12:52:38.444 244018 DEBUG nova.compute.manager [req-d04870c7-5da3-418a-ac2c-0c4417960add req-fcaa921c-bfc9-4a2f-9735-a5b76da293f5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Detach interface failed, port_id=61c3ba1d-0d4b-426e-8d04-ce56efd650ae, reason: Instance 739026fc-9c96-4212-9fa3-e6731e7f61f9 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 25 07:52:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 91 KiB/s wr, 41 op/s
Feb 25 07:52:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:52:38 np0005629333 nova_compute[244014]: 2026-02-25 12:52:38.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:52:39 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:39Z|01360|binding|INFO|Releasing lport 591ce053-3764-4ce0-841f-6728c8fd9491 from this chassis (sb_readonly=0)
Feb 25 07:52:39 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:39Z|01361|binding|INFO|Releasing lport 1599c73d-07eb-42e9-83bd-2cf546347a5b from this chassis (sb_readonly=0)
Feb 25 07:52:39 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:39Z|01362|binding|INFO|Releasing lport d3cef94a-f2c7-4a41-beb1-fd29a623854a from this chassis (sb_readonly=0)
Feb 25 07:52:39 np0005629333 nova_compute[244014]: 2026-02-25 12:52:39.663 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:39 np0005629333 nova_compute[244014]: 2026-02-25 12:52:39.700 244018 DEBUG nova.compute.manager [req-cdb18cd7-1794-47c2-a93d-e64ba65032bf req-a38a4a3b-f06f-4aa4-90a8-16611cafa49d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Received event network-changed-9981394d-e733-404e-85a5-e2e51877881a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:52:39 np0005629333 nova_compute[244014]: 2026-02-25 12:52:39.700 244018 DEBUG nova.compute.manager [req-cdb18cd7-1794-47c2-a93d-e64ba65032bf req-a38a4a3b-f06f-4aa4-90a8-16611cafa49d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Refreshing instance network info cache due to event network-changed-9981394d-e733-404e-85a5-e2e51877881a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:52:39 np0005629333 nova_compute[244014]: 2026-02-25 12:52:39.701 244018 DEBUG oslo_concurrency.lockutils [req-cdb18cd7-1794-47c2-a93d-e64ba65032bf req-a38a4a3b-f06f-4aa4-90a8-16611cafa49d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:52:39 np0005629333 nova_compute[244014]: 2026-02-25 12:52:39.701 244018 DEBUG oslo_concurrency.lockutils [req-cdb18cd7-1794-47c2-a93d-e64ba65032bf req-a38a4a3b-f06f-4aa4-90a8-16611cafa49d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:52:39 np0005629333 nova_compute[244014]: 2026-02-25 12:52:39.702 244018 DEBUG nova.network.neutron [req-cdb18cd7-1794-47c2-a93d-e64ba65032bf req-a38a4a3b-f06f-4aa4-90a8-16611cafa49d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Refreshing network info cache for port 9981394d-e733-404e-85a5-e2e51877881a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:52:39 np0005629333 nova_compute[244014]: 2026-02-25 12:52:39.870 244018 DEBUG oslo_concurrency.lockutils [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:39 np0005629333 nova_compute[244014]: 2026-02-25 12:52:39.870 244018 DEBUG oslo_concurrency.lockutils [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:39 np0005629333 nova_compute[244014]: 2026-02-25 12:52:39.871 244018 DEBUG oslo_concurrency.lockutils [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:39 np0005629333 nova_compute[244014]: 2026-02-25 12:52:39.871 244018 DEBUG oslo_concurrency.lockutils [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:39 np0005629333 nova_compute[244014]: 2026-02-25 12:52:39.872 244018 DEBUG oslo_concurrency.lockutils [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:52:39 np0005629333 nova_compute[244014]: 2026-02-25 12:52:39.874 244018 INFO nova.compute.manager [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Terminating instance#033[00m
Feb 25 07:52:39 np0005629333 nova_compute[244014]: 2026-02-25 12:52:39.876 244018 DEBUG nova.compute.manager [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:52:39 np0005629333 nova_compute[244014]: 2026-02-25 12:52:39.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:52:39 np0005629333 kernel: tap9981394d-e7 (unregistering): left promiscuous mode
Feb 25 07:52:39 np0005629333 NetworkManager[49836]: <info>  [1772023959.9277] device (tap9981394d-e7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:52:39 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:39Z|01363|binding|INFO|Releasing lport 9981394d-e733-404e-85a5-e2e51877881a from this chassis (sb_readonly=0)
Feb 25 07:52:39 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:39Z|01364|binding|INFO|Setting lport 9981394d-e733-404e-85a5-e2e51877881a down in Southbound
Feb 25 07:52:39 np0005629333 nova_compute[244014]: 2026-02-25 12:52:39.946 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:39 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:39Z|01365|binding|INFO|Removing iface tap9981394d-e7 ovn-installed in OVS
Feb 25 07:52:39 np0005629333 nova_compute[244014]: 2026-02-25 12:52:39.951 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:39.955 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:92:99 10.100.0.7 2001:db8:0:1:f816:3eff:fe2b:9299 2001:db8::f816:3eff:fe2b:9299'], port_security=['fa:16:3e:2b:92:99 10.100.0.7 2001:db8:0:1:f816:3eff:fe2b:9299 2001:db8::f816:3eff:fe2b:9299'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28 2001:db8:0:1:f816:3eff:fe2b:9299/64 2001:db8::f816:3eff:fe2b:9299/64', 'neutron:device_id': 'dd7feae9-9d2a-41b6-9277-cbf51a2c8f23', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e19ed85e-54ee-4274-951c-ade412625983', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a1c36bad-4548-4f88-8bee-0f028af7a076', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a56192da-444f-498f-a789-c0e4cafb114e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=9981394d-e733-404e-85a5-e2e51877881a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:52:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:39.956 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 9981394d-e733-404e-85a5-e2e51877881a in datapath e19ed85e-54ee-4274-951c-ade412625983 unbound from our chassis#033[00m
Feb 25 07:52:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:39.958 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e19ed85e-54ee-4274-951c-ade412625983, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:52:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:39.959 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e339e908-55d7-46a2-8a08-bac5e18ff50e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:39 np0005629333 nova_compute[244014]: 2026-02-25 12:52:39.960 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:39 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:39.961 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e19ed85e-54ee-4274-951c-ade412625983 namespace which is not needed anymore#033[00m
Feb 25 07:52:39 np0005629333 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Feb 25 07:52:39 np0005629333 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007e.scope: Consumed 14.221s CPU time.
Feb 25 07:52:39 np0005629333 systemd-machined[210048]: Machine qemu-158-instance-0000007e terminated.
Feb 25 07:52:40 np0005629333 neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983[359394]: [NOTICE]   (359402) : haproxy version is 2.8.14-c23fe91
Feb 25 07:52:40 np0005629333 neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983[359394]: [NOTICE]   (359402) : path to executable is /usr/sbin/haproxy
Feb 25 07:52:40 np0005629333 neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983[359394]: [WARNING]  (359402) : Exiting Master process...
Feb 25 07:52:40 np0005629333 neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983[359394]: [ALERT]    (359402) : Current worker (359404) exited with code 143 (Terminated)
Feb 25 07:52:40 np0005629333 neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983[359394]: [WARNING]  (359402) : All workers exited. Exiting... (0)
Feb 25 07:52:40 np0005629333 systemd[1]: libpod-f6693c06ff36140c6e384f51011936b906ddbca089bb7c89e73186677d382987.scope: Deactivated successfully.
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.112 244018 INFO nova.virt.libvirt.driver [-] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Instance destroyed successfully.#033[00m
Feb 25 07:52:40 np0005629333 podman[360805]: 2026-02-25 12:52:40.113284526 +0000 UTC m=+0.060256252 container died f6693c06ff36140c6e384f51011936b906ddbca089bb7c89e73186677d382987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.113 244018 DEBUG nova.objects.instance [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid dd7feae9-9d2a-41b6-9277-cbf51a2c8f23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.124 244018 DEBUG nova.virt.libvirt.vif [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:51:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1033485245',display_name='tempest-TestGettingAddress-server-1033485245',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1033485245',id=126,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJC+upPNElBVnEP9sRAofuJ3WgjrN7NUHIjnb2xNcja3SXT0AJcyip72U8J6Hd3gQBhmqSCIrT6CHz7iNR5W3wt+U6axciPl3ik+0GTg6pCZIfse39ZA4oXcSHjtD+Yg8A==',key_name='tempest-TestGettingAddress-1847809733',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:51:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-iqq9ar0s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:51:38Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=dd7feae9-9d2a-41b6-9277-cbf51a2c8f23,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9981394d-e733-404e-85a5-e2e51877881a", "address": "fa:16:3e:2b:92:99", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9981394d-e7", "ovs_interfaceid": "9981394d-e733-404e-85a5-e2e51877881a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.125 244018 DEBUG nova.network.os_vif_util [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "9981394d-e733-404e-85a5-e2e51877881a", "address": "fa:16:3e:2b:92:99", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9981394d-e7", "ovs_interfaceid": "9981394d-e733-404e-85a5-e2e51877881a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.126 244018 DEBUG nova.network.os_vif_util [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2b:92:99,bridge_name='br-int',has_traffic_filtering=True,id=9981394d-e733-404e-85a5-e2e51877881a,network=Network(e19ed85e-54ee-4274-951c-ade412625983),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9981394d-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.127 244018 DEBUG os_vif [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:92:99,bridge_name='br-int',has_traffic_filtering=True,id=9981394d-e733-404e-85a5-e2e51877881a,network=Network(e19ed85e-54ee-4274-951c-ade412625983),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9981394d-e7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.129 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.129 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9981394d-e7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.131 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.132 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.134 244018 INFO os_vif [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:92:99,bridge_name='br-int',has_traffic_filtering=True,id=9981394d-e733-404e-85a5-e2e51877881a,network=Network(e19ed85e-54ee-4274-951c-ade412625983),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9981394d-e7')#033[00m
Feb 25 07:52:40 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f6693c06ff36140c6e384f51011936b906ddbca089bb7c89e73186677d382987-userdata-shm.mount: Deactivated successfully.
Feb 25 07:52:40 np0005629333 systemd[1]: var-lib-containers-storage-overlay-b16cd2a2c94cf38fc347d4884920d0f50de2f7df3adcfcac47023bdece528de3-merged.mount: Deactivated successfully.
Feb 25 07:52:40 np0005629333 podman[360805]: 2026-02-25 12:52:40.166631063 +0000 UTC m=+0.113602779 container cleanup f6693c06ff36140c6e384f51011936b906ddbca089bb7c89e73186677d382987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 07:52:40 np0005629333 systemd[1]: libpod-conmon-f6693c06ff36140c6e384f51011936b906ddbca089bb7c89e73186677d382987.scope: Deactivated successfully.
Feb 25 07:52:40 np0005629333 podman[360869]: 2026-02-25 12:52:40.221331167 +0000 UTC m=+0.035130633 container remove f6693c06ff36140c6e384f51011936b906ddbca089bb7c89e73186677d382987 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 07:52:40 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:40.226 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[698f3ff9-d96e-4768-8a0f-0338090c218a]: (4, ('Wed Feb 25 12:52:40 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983 (f6693c06ff36140c6e384f51011936b906ddbca089bb7c89e73186677d382987)\nf6693c06ff36140c6e384f51011936b906ddbca089bb7c89e73186677d382987\nWed Feb 25 12:52:40 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e19ed85e-54ee-4274-951c-ade412625983 (f6693c06ff36140c6e384f51011936b906ddbca089bb7c89e73186677d382987)\nf6693c06ff36140c6e384f51011936b906ddbca089bb7c89e73186677d382987\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:40 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:40.228 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2dae55c2-2173-44f4-aad3-35f5450dcbd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:40 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:40.230 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape19ed85e-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.233 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:40 np0005629333 kernel: tape19ed85e-50: left promiscuous mode
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.238 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:40 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:40.246 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[aef75533-2f59-4095-a311-eae6736b2598]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:40 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:40.268 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bc643520-e8bd-443b-946e-4af0476f79fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:40 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:40.270 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc592e5-2054-4093-a7ed-77beaac1407a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:40 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:40.285 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c64953d4-e87c-4a3f-a028-86972400a968]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586675, 'reachable_time': 40052, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360898, 'error': None, 'target': 'ovnmeta-e19ed85e-54ee-4274-951c-ade412625983', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:40 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:40.287 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e19ed85e-54ee-4274-951c-ade412625983 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:52:40 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:40.287 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[abec4e70-e3c6-4c13-adca-c147abeb38a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:40 np0005629333 systemd[1]: run-netns-ovnmeta\x2de19ed85e\x2d54ee\x2d4274\x2d951c\x2dade412625983.mount: Deactivated successfully.
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.325 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.325 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 25 07:52:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 07:52:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:52:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:52:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:52:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:52:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:52:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:52:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:52:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:52:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:52:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:52:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:52:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.344 244018 DEBUG nova.compute.manager [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.365 244018 INFO nova.virt.libvirt.driver [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Deleting instance files /var/lib/nova/instances/dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_del#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.366 244018 INFO nova.virt.libvirt.driver [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Deletion of /var/lib/nova/instances/dd7feae9-9d2a-41b6-9277-cbf51a2c8f23_del complete#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.405 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.443 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.444 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.444 244018 INFO nova.compute.manager [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Took 0.57 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.445 244018 DEBUG oslo.service.loopingcall [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.445 244018 DEBUG nova.compute.manager [-] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.445 244018 DEBUG nova.network.neutron [-] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.451 244018 DEBUG nova.virt.hardware [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.452 244018 INFO nova.compute.claims [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.519 244018 DEBUG nova.compute.manager [req-4aff7aa2-da5e-431a-9a93-f913dbcbf836 req-97e8f57b-7b19-416a-a8e3-41d8bf8f1b0b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Received event network-vif-unplugged-9981394d-e733-404e-85a5-e2e51877881a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.520 244018 DEBUG oslo_concurrency.lockutils [req-4aff7aa2-da5e-431a-9a93-f913dbcbf836 req-97e8f57b-7b19-416a-a8e3-41d8bf8f1b0b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.520 244018 DEBUG oslo_concurrency.lockutils [req-4aff7aa2-da5e-431a-9a93-f913dbcbf836 req-97e8f57b-7b19-416a-a8e3-41d8bf8f1b0b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.520 244018 DEBUG oslo_concurrency.lockutils [req-4aff7aa2-da5e-431a-9a93-f913dbcbf836 req-97e8f57b-7b19-416a-a8e3-41d8bf8f1b0b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.520 244018 DEBUG nova.compute.manager [req-4aff7aa2-da5e-431a-9a93-f913dbcbf836 req-97e8f57b-7b19-416a-a8e3-41d8bf8f1b0b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] No waiting events found dispatching network-vif-unplugged-9981394d-e733-404e-85a5-e2e51877881a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.520 244018 DEBUG nova.compute.manager [req-4aff7aa2-da5e-431a-9a93-f913dbcbf836 req-97e8f57b-7b19-416a-a8e3-41d8bf8f1b0b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Received event network-vif-unplugged-9981394d-e733-404e-85a5-e2e51877881a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.520 244018 DEBUG nova.compute.manager [req-4aff7aa2-da5e-431a-9a93-f913dbcbf836 req-97e8f57b-7b19-416a-a8e3-41d8bf8f1b0b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Received event network-vif-plugged-9981394d-e733-404e-85a5-e2e51877881a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.520 244018 DEBUG oslo_concurrency.lockutils [req-4aff7aa2-da5e-431a-9a93-f913dbcbf836 req-97e8f57b-7b19-416a-a8e3-41d8bf8f1b0b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.521 244018 DEBUG oslo_concurrency.lockutils [req-4aff7aa2-da5e-431a-9a93-f913dbcbf836 req-97e8f57b-7b19-416a-a8e3-41d8bf8f1b0b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.521 244018 DEBUG oslo_concurrency.lockutils [req-4aff7aa2-da5e-431a-9a93-f913dbcbf836 req-97e8f57b-7b19-416a-a8e3-41d8bf8f1b0b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.521 244018 DEBUG nova.compute.manager [req-4aff7aa2-da5e-431a-9a93-f913dbcbf836 req-97e8f57b-7b19-416a-a8e3-41d8bf8f1b0b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] No waiting events found dispatching network-vif-plugged-9981394d-e733-404e-85a5-e2e51877881a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.521 244018 WARNING nova.compute.manager [req-4aff7aa2-da5e-431a-9a93-f913dbcbf836 req-97e8f57b-7b19-416a-a8e3-41d8bf8f1b0b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Received unexpected event network-vif-plugged-9981394d-e733-404e-85a5-e2e51877881a for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:52:40 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 07:52:40 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:52:40 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:52:40 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:52:40 np0005629333 nova_compute[244014]: 2026-02-25 12:52:40.615 244018 DEBUG oslo_concurrency.processutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:52:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2163: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 28 KiB/s wr, 31 op/s
Feb 25 07:52:40 np0005629333 podman[360963]: 2026-02-25 12:52:40.681200641 +0000 UTC m=+0.034578377 container create d2ba9f23f4fe6ad89c9ac7ef05662faaa2bb7be78f80c1041ab4f65e77ca936a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_merkle, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 07:52:40 np0005629333 systemd[1]: Started libpod-conmon-d2ba9f23f4fe6ad89c9ac7ef05662faaa2bb7be78f80c1041ab4f65e77ca936a.scope.
Feb 25 07:52:40 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:52:40 np0005629333 podman[360963]: 2026-02-25 12:52:40.757199677 +0000 UTC m=+0.110577433 container init d2ba9f23f4fe6ad89c9ac7ef05662faaa2bb7be78f80c1041ab4f65e77ca936a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 07:52:40 np0005629333 podman[360963]: 2026-02-25 12:52:40.664904411 +0000 UTC m=+0.018282167 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:52:40 np0005629333 podman[360963]: 2026-02-25 12:52:40.763492905 +0000 UTC m=+0.116870631 container start d2ba9f23f4fe6ad89c9ac7ef05662faaa2bb7be78f80c1041ab4f65e77ca936a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:52:40 np0005629333 trusting_merkle[360980]: 167 167
Feb 25 07:52:40 np0005629333 systemd[1]: libpod-d2ba9f23f4fe6ad89c9ac7ef05662faaa2bb7be78f80c1041ab4f65e77ca936a.scope: Deactivated successfully.
Feb 25 07:52:40 np0005629333 conmon[360980]: conmon d2ba9f23f4fe6ad89c9a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d2ba9f23f4fe6ad89c9ac7ef05662faaa2bb7be78f80c1041ab4f65e77ca936a.scope/container/memory.events
Feb 25 07:52:40 np0005629333 podman[360963]: 2026-02-25 12:52:40.767735645 +0000 UTC m=+0.121113371 container attach d2ba9f23f4fe6ad89c9ac7ef05662faaa2bb7be78f80c1041ab4f65e77ca936a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_merkle, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 07:52:40 np0005629333 podman[360963]: 2026-02-25 12:52:40.77041637 +0000 UTC m=+0.123794106 container died d2ba9f23f4fe6ad89c9ac7ef05662faaa2bb7be78f80c1041ab4f65e77ca936a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 07:52:40 np0005629333 systemd[1]: var-lib-containers-storage-overlay-42c289b9884857a252bca9bf525e90c1df6a29ecde71b35f5be8bca9d5358210-merged.mount: Deactivated successfully.
Feb 25 07:52:40 np0005629333 podman[360963]: 2026-02-25 12:52:40.859885817 +0000 UTC m=+0.213263533 container remove d2ba9f23f4fe6ad89c9ac7ef05662faaa2bb7be78f80c1041ab4f65e77ca936a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_merkle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 07:52:40 np0005629333 systemd[1]: libpod-conmon-d2ba9f23f4fe6ad89c9ac7ef05662faaa2bb7be78f80c1041ab4f65e77ca936a.scope: Deactivated successfully.
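The create → start → attach → died → remove sequence for trusting_merkle above, bracketed by the libpod and libpod-conmon scope units, is the normal lifecycle of a short-lived container (cephadm launches these check containers with the effect of podman run --rm). A minimal sketch for replaying the same lifecycle events from the host, assuming a podman recent enough that `events` accepts --format json and --stream=false, and the usual podman event keys:

    import json
    import subprocess

    # Replay recent container lifecycle events like the create/start/died/
    # remove lines journald recorded above. --stream=false makes `podman
    # events` print what it has and exit instead of following.
    proc = subprocess.run(
        ["podman", "events", "--since", "5m",
         "--format", "json", "--stream=false"],
        capture_output=True, text=True, check=True)
    for line in proc.stdout.splitlines():
        ev = json.loads(line)                     # one JSON object per line
        if ev.get("Type") == "container":
            print(ev.get("Status"), ev.get("Name"), ev.get("ID", "")[:12])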
Feb 25 07:52:41 np0005629333 podman[361023]: 2026-02-25 12:52:41.003466401 +0000 UTC m=+0.036099940 container create ca3065a6790525b64506e5c3b948364fc123c0d48f9f68a299f3b183608561db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lumiere, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:52:41 np0005629333 systemd[1]: Started libpod-conmon-ca3065a6790525b64506e5c3b948364fc123c0d48f9f68a299f3b183608561db.scope.
Feb 25 07:52:41 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:52:41 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5365aa6de57c83217261e7f745b8d5466115101ea78cc0fdb0ddfa3d77bf2647/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:52:41 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5365aa6de57c83217261e7f745b8d5466115101ea78cc0fdb0ddfa3d77bf2647/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:52:41 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5365aa6de57c83217261e7f745b8d5466115101ea78cc0fdb0ddfa3d77bf2647/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:52:41 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5365aa6de57c83217261e7f745b8d5466115101ea78cc0fdb0ddfa3d77bf2647/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:52:41 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5365aa6de57c83217261e7f745b8d5466115101ea78cc0fdb0ddfa3d77bf2647/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:52:41 np0005629333 podman[361023]: 2026-02-25 12:52:40.988856989 +0000 UTC m=+0.021490548 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:52:41 np0005629333 podman[361023]: 2026-02-25 12:52:41.090085457 +0000 UTC m=+0.122719046 container init ca3065a6790525b64506e5c3b948364fc123c0d48f9f68a299f3b183608561db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:52:41 np0005629333 podman[361023]: 2026-02-25 12:52:41.098433423 +0000 UTC m=+0.131066962 container start ca3065a6790525b64506e5c3b948364fc123c0d48f9f68a299f3b183608561db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lumiere, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 07:52:41 np0005629333 podman[361023]: 2026-02-25 12:52:41.101241072 +0000 UTC m=+0.133874661 container attach ca3065a6790525b64506e5c3b948364fc123c0d48f9f68a299f3b183608561db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.127 244018 DEBUG nova.network.neutron [-] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:52:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:52:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1069586792' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
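This df dispatch is the monitor-side trace of the `ceph df --format=json --id openstack` subprocess that nova_compute logs just below. The same command can be issued in-process through the librados Python binding; a hedged sketch, reusing the client name and conf path from the log:

    import json
    import rados

    # Connect as the same Ceph client the audit log shows.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    try:
        # Equivalent of the mon_command({"prefix": "df", "format": "json"})
        # dispatch recorded by ceph-mon above.
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b"")
        assert ret == 0, outs
        print(json.loads(outbuf))
    finally:
        cluster.shutdown()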
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.150 244018 INFO nova.compute.manager [-] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Took 0.70 seconds to deallocate network for instance.#033[00m
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.164 244018 DEBUG oslo_concurrency.processutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.172 244018 DEBUG nova.compute.provider_tree [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.249 244018 DEBUG nova.scheduler.client.report [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.258 244018 DEBUG oslo_concurrency.lockutils [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.289 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.291 244018 DEBUG nova.compute.manager [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.296 244018 DEBUG oslo_concurrency.lockutils [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.387 244018 DEBUG nova.compute.manager [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.389 244018 DEBUG nova.network.neutron [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.412 244018 INFO nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.447 244018 DEBUG nova.compute.manager [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.473 244018 DEBUG oslo_concurrency.processutils [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.558 244018 DEBUG nova.compute.manager [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.561 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.561 244018 INFO nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Creating image(s)#033[00m
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.594 244018 DEBUG nova.storage.rbd_utils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:52:41 np0005629333 bold_lumiere[361041]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:52:41 np0005629333 bold_lumiere[361041]: --> All data devices are unavailable
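bold_lumiere is another short-lived cephadm check container; its two lines are a ceph-volume device report saying the three LVM-backed data devices already carry OSDs, so none are available for new deployments. A hedged sketch of how to list per-device availability and rejection reasons with ceph-volume's inventory subcommand (field names as ceph-volume emits them):

    import json
    import subprocess

    # `ceph-volume inventory --format json` returns a list of device records
    # with an "available" flag and the reasons a device was rejected.
    out = subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    for dev in json.loads(out):
        print(dev["path"], dev["available"], dev.get("rejected_reasons"))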
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.642 244018 DEBUG nova.storage.rbd_utils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:52:41 np0005629333 systemd[1]: libpod-ca3065a6790525b64506e5c3b948364fc123c0d48f9f68a299f3b183608561db.scope: Deactivated successfully.
Feb 25 07:52:41 np0005629333 podman[361023]: 2026-02-25 12:52:41.660525284 +0000 UTC m=+0.693158863 container died ca3065a6790525b64506e5c3b948364fc123c0d48f9f68a299f3b183608561db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.684 244018 DEBUG nova.storage.rbd_utils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:52:41 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5365aa6de57c83217261e7f745b8d5466115101ea78cc0fdb0ddfa3d77bf2647-merged.mount: Deactivated successfully.
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.695 244018 DEBUG oslo_concurrency.processutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:52:41 np0005629333 podman[361023]: 2026-02-25 12:52:41.710236108 +0000 UTC m=+0.742869687 container remove ca3065a6790525b64506e5c3b948364fc123c0d48f9f68a299f3b183608561db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lumiere, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 07:52:41 np0005629333 systemd[1]: libpod-conmon-ca3065a6790525b64506e5c3b948364fc123c0d48f9f68a299f3b183608561db.scope: Deactivated successfully.
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.736 244018 DEBUG nova.policy [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.777 244018 DEBUG oslo_concurrency.processutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.777 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.778 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.778 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
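The Acquiring/acquired/released triplet around the base-image hash a63dc6db… is the trace emitted by oslo_concurrency's lock helpers, which nova uses to serialize fetch_func_sync per image-cache entry (note the zero held time here: the cached base image already existed). A minimal sketch of the same pattern; the function body is hypothetical:

    from oslo_concurrency import lockutils

    # Serialize work on one image-cache entry, keyed by the base-image hash
    # from the log; the decorator's wrapper emits the same
    # Acquiring/acquired/released DEBUG lines.
    @lockutils.synchronized("a63dc6dbb387022d47a8ca49bddcc4af2508a4d6")
    def fetch_func_sync():
        pass  # hypothetical body: fetch/verify the cached base image

    fetch_func_sync()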
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.810 244018 DEBUG nova.storage.rbd_utils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.820 244018 DEBUG oslo_concurrency.processutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:52:41 np0005629333 nova_compute[244014]: 2026-02-25 12:52:41.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 07:52:42 np0005629333 nova_compute[244014]: 2026-02-25 12:52:42.044 244018 DEBUG oslo_concurrency.processutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.224s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:52:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:52:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2572988115' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:52:42 np0005629333 nova_compute[244014]: 2026-02-25 12:52:42.087 244018 DEBUG oslo_concurrency.processutils [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:52:42 np0005629333 nova_compute[244014]: 2026-02-25 12:52:42.093 244018 DEBUG nova.storage.rbd_utils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] resizing rbd image f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
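Between them, the rbd import at 12:52:41.820 and this resize are nova's Rbd image backend materializing the instance's root disk: the flat base image is uploaded into the vms pool, then grown to the flavor's 1 GiB root-disk size. A hedged sketch of the equivalent flow, reusing the exact CLI invocation from the log for the import and the librbd binding for the resize:

    import subprocess
    import rados
    import rbd

    BASE = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
    IMAGE = "f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk"

    # 1. Import the flat base file into RBD (command copied from the log).
    subprocess.run(
        ["rbd", "import", "--pool", "vms", BASE, IMAGE, "--image-format=2",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)

    # 2. Resize to 1073741824 bytes, as nova.storage.rbd_utils.resize logs.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")
    try:
        with rbd.Image(ioctx, IMAGE) as image:
            image.resize(1073741824)
    finally:
        ioctx.close()
        cluster.shutdown()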
Feb 25 07:52:42 np0005629333 nova_compute[244014]: 2026-02-25 12:52:42.120 244018 DEBUG nova.compute.provider_tree [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:52:42 np0005629333 nova_compute[244014]: 2026-02-25 12:52:42.135 244018 DEBUG nova.scheduler.client.report [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:52:42 np0005629333 podman[361304]: 2026-02-25 12:52:42.145636192 +0000 UTC m=+0.032477488 container create 90f3768bef10e2463340c8fc667787c51c2dfd39b7d1ec79cb95962a835a17e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 07:52:42 np0005629333 systemd[1]: Started libpod-conmon-90f3768bef10e2463340c8fc667787c51c2dfd39b7d1ec79cb95962a835a17e7.scope.
Feb 25 07:52:42 np0005629333 nova_compute[244014]: 2026-02-25 12:52:42.173 244018 DEBUG oslo_concurrency.lockutils [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:52:42 np0005629333 nova_compute[244014]: 2026-02-25 12:52:42.182 244018 DEBUG nova.objects.instance [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'migration_context' on Instance uuid f3af9615-94aa-4498-ab5c-3fadcab4a4e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:52:42 np0005629333 nova_compute[244014]: 2026-02-25 12:52:42.194 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:52:42 np0005629333 nova_compute[244014]: 2026-02-25 12:52:42.195 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Ensure instance console log exists: /var/lib/nova/instances/f3af9615-94aa-4498-ab5c-3fadcab4a4e6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:52:42 np0005629333 nova_compute[244014]: 2026-02-25 12:52:42.195 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:42 np0005629333 nova_compute[244014]: 2026-02-25 12:52:42.195 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:42 np0005629333 nova_compute[244014]: 2026-02-25 12:52:42.196 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:52:42 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:52:42 np0005629333 nova_compute[244014]: 2026-02-25 12:52:42.209 244018 INFO nova.scheduler.client.report [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance dd7feae9-9d2a-41b6-9277-cbf51a2c8f23#033[00m
Feb 25 07:52:42 np0005629333 podman[361304]: 2026-02-25 12:52:42.213261412 +0000 UTC m=+0.100102728 container init 90f3768bef10e2463340c8fc667787c51c2dfd39b7d1ec79cb95962a835a17e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:52:42 np0005629333 podman[361304]: 2026-02-25 12:52:42.218646464 +0000 UTC m=+0.105487770 container start 90f3768bef10e2463340c8fc667787c51c2dfd39b7d1ec79cb95962a835a17e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_khayyam, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:52:42 np0005629333 podman[361304]: 2026-02-25 12:52:42.221408112 +0000 UTC m=+0.108249418 container attach 90f3768bef10e2463340c8fc667787c51c2dfd39b7d1ec79cb95962a835a17e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 07:52:42 np0005629333 gifted_khayyam[361341]: 167 167
Feb 25 07:52:42 np0005629333 systemd[1]: libpod-90f3768bef10e2463340c8fc667787c51c2dfd39b7d1ec79cb95962a835a17e7.scope: Deactivated successfully.
Feb 25 07:52:42 np0005629333 podman[361304]: 2026-02-25 12:52:42.22274228 +0000 UTC m=+0.109583596 container died 90f3768bef10e2463340c8fc667787c51c2dfd39b7d1ec79cb95962a835a17e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 07:52:42 np0005629333 podman[361304]: 2026-02-25 12:52:42.131293817 +0000 UTC m=+0.018135143 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:52:42 np0005629333 systemd[1]: var-lib-containers-storage-overlay-319505d2ff08281de9b8ba7091d18733ded986c0bb27e5d8616863039bed69d4-merged.mount: Deactivated successfully.
Feb 25 07:52:42 np0005629333 podman[361304]: 2026-02-25 12:52:42.262264256 +0000 UTC m=+0.149105562 container remove 90f3768bef10e2463340c8fc667787c51c2dfd39b7d1ec79cb95962a835a17e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_khayyam, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 07:52:42 np0005629333 systemd[1]: libpod-conmon-90f3768bef10e2463340c8fc667787c51c2dfd39b7d1ec79cb95962a835a17e7.scope: Deactivated successfully.
Feb 25 07:52:42 np0005629333 nova_compute[244014]: 2026-02-25 12:52:42.289 244018 DEBUG oslo_concurrency.lockutils [None req-2efd2d32-b958-4fb5-8ee9-668858fef0cc f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.418s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:52:42 np0005629333 podman[361365]: 2026-02-25 12:52:42.457452577 +0000 UTC m=+0.056283660 container create 79203ec1c32360934b10b18078746435e05a54e93ecc387e9fbed5d477a0f4ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_lehmann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 07:52:42 np0005629333 nova_compute[244014]: 2026-02-25 12:52:42.461 244018 DEBUG nova.network.neutron [req-cdb18cd7-1794-47c2-a93d-e64ba65032bf req-a38a4a3b-f06f-4aa4-90a8-16611cafa49d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Updated VIF entry in instance network info cache for port 9981394d-e733-404e-85a5-e2e51877881a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:52:42 np0005629333 nova_compute[244014]: 2026-02-25 12:52:42.461 244018 DEBUG nova.network.neutron [req-cdb18cd7-1794-47c2-a93d-e64ba65032bf req-a38a4a3b-f06f-4aa4-90a8-16611cafa49d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Updating instance_info_cache with network_info: [{"id": "9981394d-e733-404e-85a5-e2e51877881a", "address": "fa:16:3e:2b:92:99", "network": {"id": "e19ed85e-54ee-4274-951c-ade412625983", "bridge": "br-int", "label": "tempest-network-smoke--180281338", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2b:9299", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9981394d-e7", "ovs_interfaceid": "9981394d-e733-404e-85a5-e2e51877881a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:52:42 np0005629333 nova_compute[244014]: 2026-02-25 12:52:42.476 244018 DEBUG oslo_concurrency.lockutils [req-cdb18cd7-1794-47c2-a93d-e64ba65032bf req-a38a4a3b-f06f-4aa4-90a8-16611cafa49d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-dd7feae9-9d2a-41b6-9277-cbf51a2c8f23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:52:42 np0005629333 systemd[1]: Started libpod-conmon-79203ec1c32360934b10b18078746435e05a54e93ecc387e9fbed5d477a0f4ae.scope.
Feb 25 07:52:42 np0005629333 podman[361365]: 2026-02-25 12:52:42.436831925 +0000 UTC m=+0.035663098 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:52:42 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:52:42 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed34c6abe7cf1fa8f0f4e5af64e706044c16ff66937f160068362ce41fa9dab5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:52:42 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed34c6abe7cf1fa8f0f4e5af64e706044c16ff66937f160068362ce41fa9dab5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:52:42 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed34c6abe7cf1fa8f0f4e5af64e706044c16ff66937f160068362ce41fa9dab5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:52:42 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed34c6abe7cf1fa8f0f4e5af64e706044c16ff66937f160068362ce41fa9dab5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:52:42 np0005629333 podman[361365]: 2026-02-25 12:52:42.557187073 +0000 UTC m=+0.156018226 container init 79203ec1c32360934b10b18078746435e05a54e93ecc387e9fbed5d477a0f4ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_lehmann, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 07:52:42 np0005629333 podman[361365]: 2026-02-25 12:52:42.571905339 +0000 UTC m=+0.170736442 container start 79203ec1c32360934b10b18078746435e05a54e93ecc387e9fbed5d477a0f4ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:52:42 np0005629333 podman[361365]: 2026-02-25 12:52:42.576255232 +0000 UTC m=+0.175086415 container attach 79203ec1c32360934b10b18078746435e05a54e93ecc387e9fbed5d477a0f4ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 07:52:42 np0005629333 nova_compute[244014]: 2026-02-25 12:52:42.616 244018 DEBUG nova.compute.manager [req-aaca354e-b539-4f1d-bc82-b35161663d43 req-ee9a58cc-9839-455c-99d8-f176749989b4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Received event network-vif-deleted-9981394d-e733-404e-85a5-e2e51877881a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:52:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2164: 305 pgs: 305 active+clean; 241 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 468 KiB/s wr, 70 op/s
Feb 25 07:52:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:52:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:52:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:52:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:52:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008594589074421668 of space, bias 1.0, pg target 0.25783767223265003 quantized to 32 (current 32)
Feb 25 07:52:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:52:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:52:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:52:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:52:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:52:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024941313303238075 of space, bias 1.0, pg target 0.7482393990971422 quantized to 32 (current 32)
Feb 25 07:52:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:52:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3944001498069894e-06 of space, bias 4.0, pg target 0.0016732801797683873 quantized to 16 (current 16)
Feb 25 07:52:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:52:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:52:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:52:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:52:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:52:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:52:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:52:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:52:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:52:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
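The pg_autoscaler lines above are reconstructible arithmetic: each pool's raw pg target is its usage ratio times its bias times the cluster PG budget, then quantized to a power of two and left at the current pg_num unless it is far off. Assuming the default mon_target_pg_per_osd of 100 and the 3 OSDs on this node, the budget is 300, which reproduces the logged numbers exactly; the real logic lives in the mgr's pg_autoscaler module:

    # Reproduce the "pg target" figures from the pg_autoscaler log lines.
    # Assumption: budget = mon_target_pg_per_osd (default 100) * 3 OSDs = 300.
    def pg_target(usage_ratio: float, bias: float, budget: int = 300) -> float:
        return usage_ratio * bias * budget

    print(pg_target(7.185749983720779e-06, 1.0))   # 0.0021557..., '.mgr'
    print(pg_target(0.0008594589074421668, 1.0))   # 0.2578376..., 'vms'
    print(pg_target(1.3944001498069894e-06, 4.0))  # 0.0016732..., 'cephfs.cephfs.meta'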
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]: {
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:    "0": [
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:        {
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "devices": [
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "/dev/loop3"
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            ],
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "lv_name": "ceph_lv0",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "lv_size": "21470642176",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "name": "ceph_lv0",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "tags": {
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.cluster_name": "ceph",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.crush_device_class": "",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.encrypted": "0",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.objectstore": "bluestore",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.osd_id": "0",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.type": "block",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.vdo": "0",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.with_tpm": "0"
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            },
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "type": "block",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "vg_name": "ceph_vg0"
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:        }
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:    ],
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:    "1": [
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:        {
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "devices": [
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "/dev/loop4"
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            ],
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "lv_name": "ceph_lv1",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "lv_size": "21470642176",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "name": "ceph_lv1",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "tags": {
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.cluster_name": "ceph",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.crush_device_class": "",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.encrypted": "0",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.objectstore": "bluestore",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.osd_id": "1",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.type": "block",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.vdo": "0",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.with_tpm": "0"
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            },
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "type": "block",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "vg_name": "ceph_vg1"
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:        }
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:    ],
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:    "2": [
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:        {
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "devices": [
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "/dev/loop5"
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            ],
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "lv_name": "ceph_lv2",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "lv_size": "21470642176",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "name": "ceph_lv2",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "tags": {
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.cluster_name": "ceph",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.crush_device_class": "",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.encrypted": "0",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.objectstore": "bluestore",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.osd_id": "2",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.type": "block",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.vdo": "0",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:                "ceph.with_tpm": "0"
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            },
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "type": "block",
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:            "vg_name": "ceph_vg2"
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:        }
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]:    ]
Feb 25 07:52:42 np0005629333 objective_lehmann[361381]: }
Feb 25 07:52:42 np0005629333 systemd[1]: libpod-79203ec1c32360934b10b18078746435e05a54e93ecc387e9fbed5d477a0f4ae.scope: Deactivated successfully.
Feb 25 07:52:42 np0005629333 podman[361365]: 2026-02-25 12:52:42.89809791 +0000 UTC m=+0.496929013 container died 79203ec1c32360934b10b18078746435e05a54e93ecc387e9fbed5d477a0f4ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:52:42 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ed34c6abe7cf1fa8f0f4e5af64e706044c16ff66937f160068362ce41fa9dab5-merged.mount: Deactivated successfully.
Feb 25 07:52:42 np0005629333 podman[361365]: 2026-02-25 12:52:42.949740738 +0000 UTC m=+0.548571841 container remove 79203ec1c32360934b10b18078746435e05a54e93ecc387e9fbed5d477a0f4ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_lehmann, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:52:42 np0005629333 systemd[1]: libpod-conmon-79203ec1c32360934b10b18078746435e05a54e93ecc387e9fbed5d477a0f4ae.scope: Deactivated successfully.
Feb 25 07:52:43 np0005629333 podman[361463]: 2026-02-25 12:52:43.444633912 +0000 UTC m=+0.043478689 container create 5ad9bded7549ebd8573bfd4a9c2f71d739cba547994364c1946e413f6952a638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 07:52:43 np0005629333 systemd[1]: Started libpod-conmon-5ad9bded7549ebd8573bfd4a9c2f71d739cba547994364c1946e413f6952a638.scope.
Feb 25 07:52:43 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:52:43 np0005629333 podman[361463]: 2026-02-25 12:52:43.421563411 +0000 UTC m=+0.020408168 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:52:43 np0005629333 podman[361463]: 2026-02-25 12:52:43.525083724 +0000 UTC m=+0.123928451 container init 5ad9bded7549ebd8573bfd4a9c2f71d739cba547994364c1946e413f6952a638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:52:43 np0005629333 podman[361463]: 2026-02-25 12:52:43.532500413 +0000 UTC m=+0.131345170 container start 5ad9bded7549ebd8573bfd4a9c2f71d739cba547994364c1946e413f6952a638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:52:43 np0005629333 podman[361463]: 2026-02-25 12:52:43.535800336 +0000 UTC m=+0.134645093 container attach 5ad9bded7549ebd8573bfd4a9c2f71d739cba547994364c1946e413f6952a638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:52:43 np0005629333 amazing_hofstadter[361480]: 167 167
Feb 25 07:52:43 np0005629333 systemd[1]: libpod-5ad9bded7549ebd8573bfd4a9c2f71d739cba547994364c1946e413f6952a638.scope: Deactivated successfully.
Feb 25 07:52:43 np0005629333 podman[361463]: 2026-02-25 12:52:43.538683548 +0000 UTC m=+0.137528315 container died 5ad9bded7549ebd8573bfd4a9c2f71d739cba547994364c1946e413f6952a638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:52:43 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5b150f6e384d5013aa7fc94e51ab4568eb208965cc2bef13ec5213768ae11185-merged.mount: Deactivated successfully.
Feb 25 07:52:43 np0005629333 podman[361463]: 2026-02-25 12:52:43.577067622 +0000 UTC m=+0.175912399 container remove 5ad9bded7549ebd8573bfd4a9c2f71d739cba547994364c1946e413f6952a638 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hofstadter, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 07:52:43 np0005629333 systemd[1]: libpod-conmon-5ad9bded7549ebd8573bfd4a9c2f71d739cba547994364c1946e413f6952a638.scope: Deactivated successfully.
Feb 25 07:52:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:52:43 np0005629333 podman[361505]: 2026-02-25 12:52:43.704286684 +0000 UTC m=+0.043750847 container create 77a64390da2f07c1d6f9570dc0bd83dbeaad699bfdc73a77547100149b3f48d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 07:52:43 np0005629333 systemd[1]: Started libpod-conmon-77a64390da2f07c1d6f9570dc0bd83dbeaad699bfdc73a77547100149b3f48d8.scope.
Feb 25 07:52:43 np0005629333 podman[361505]: 2026-02-25 12:52:43.682961762 +0000 UTC m=+0.022426005 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:52:43 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:52:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32de881e62c0b265d0861481a37b7f04bc804ffc03e66c7bd5fddfab6d864369/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:52:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32de881e62c0b265d0861481a37b7f04bc804ffc03e66c7bd5fddfab6d864369/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:52:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32de881e62c0b265d0861481a37b7f04bc804ffc03e66c7bd5fddfab6d864369/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:52:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32de881e62c0b265d0861481a37b7f04bc804ffc03e66c7bd5fddfab6d864369/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:52:43 np0005629333 podman[361505]: 2026-02-25 12:52:43.808032543 +0000 UTC m=+0.147496736 container init 77a64390da2f07c1d6f9570dc0bd83dbeaad699bfdc73a77547100149b3f48d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 07:52:43 np0005629333 podman[361505]: 2026-02-25 12:52:43.821496463 +0000 UTC m=+0.160960656 container start 77a64390da2f07c1d6f9570dc0bd83dbeaad699bfdc73a77547100149b3f48d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hellman, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:52:43 np0005629333 podman[361505]: 2026-02-25 12:52:43.824886779 +0000 UTC m=+0.164350962 container attach 77a64390da2f07c1d6f9570dc0bd83dbeaad699bfdc73a77547100149b3f48d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 07:52:44 np0005629333 nova_compute[244014]: 2026-02-25 12:52:44.018 244018 DEBUG nova.network.neutron [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Successfully created port: aadc9e87-c7c5-4cb0-8906-64017d5aa14b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:52:44 np0005629333 lvm[361601]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:52:44 np0005629333 lvm[361601]: VG ceph_vg1 finished
Feb 25 07:52:44 np0005629333 lvm[361600]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:52:44 np0005629333 lvm[361600]: VG ceph_vg0 finished
Feb 25 07:52:44 np0005629333 lvm[361603]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:52:44 np0005629333 lvm[361603]: VG ceph_vg2 finished
Feb 25 07:52:44 np0005629333 beautiful_hellman[361522]: {}
Feb 25 07:52:44 np0005629333 podman[361505]: 2026-02-25 12:52:44.5757365 +0000 UTC m=+0.915200683 container died 77a64390da2f07c1d6f9570dc0bd83dbeaad699bfdc73a77547100149b3f48d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:52:44 np0005629333 systemd[1]: libpod-77a64390da2f07c1d6f9570dc0bd83dbeaad699bfdc73a77547100149b3f48d8.scope: Deactivated successfully.
Feb 25 07:52:44 np0005629333 systemd[1]: libpod-77a64390da2f07c1d6f9570dc0bd83dbeaad699bfdc73a77547100149b3f48d8.scope: Consumed 1.096s CPU time.
Feb 25 07:52:44 np0005629333 systemd[1]: var-lib-containers-storage-overlay-32de881e62c0b265d0861481a37b7f04bc804ffc03e66c7bd5fddfab6d864369-merged.mount: Deactivated successfully.
Feb 25 07:52:44 np0005629333 podman[361505]: 2026-02-25 12:52:44.613816445 +0000 UTC m=+0.953280618 container remove 77a64390da2f07c1d6f9570dc0bd83dbeaad699bfdc73a77547100149b3f48d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_hellman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Feb 25 07:52:44 np0005629333 systemd[1]: libpod-conmon-77a64390da2f07c1d6f9570dc0bd83dbeaad699bfdc73a77547100149b3f48d8.scope: Deactivated successfully.
Feb 25 07:52:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2165: 305 pgs: 305 active+clean; 241 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 454 KiB/s wr, 70 op/s
Feb 25 07:52:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:52:44 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:52:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:52:44 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:52:45 np0005629333 nova_compute[244014]: 2026-02-25 12:52:45.132 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:45 np0005629333 nova_compute[244014]: 2026-02-25 12:52:45.407 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:45 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:52:45 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:52:45 np0005629333 nova_compute[244014]: 2026-02-25 12:52:45.824 244018 DEBUG nova.network.neutron [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Successfully updated port: aadc9e87-c7c5-4cb0-8906-64017d5aa14b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:52:45 np0005629333 nova_compute[244014]: 2026-02-25 12:52:45.845 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-f3af9615-94aa-4498-ab5c-3fadcab4a4e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:52:45 np0005629333 nova_compute[244014]: 2026-02-25 12:52:45.845 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-f3af9615-94aa-4498-ab5c-3fadcab4a4e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:52:45 np0005629333 nova_compute[244014]: 2026-02-25 12:52:45.845 244018 DEBUG nova.network.neutron [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:52:45 np0005629333 nova_compute[244014]: 2026-02-25 12:52:45.940 244018 DEBUG nova.compute.manager [req-cedfe7ec-38b7-4ab2-8f2f-db138b93c700 req-5a0570b9-c09d-41de-89e3-aa258b884556 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Received event network-changed-aadc9e87-c7c5-4cb0-8906-64017d5aa14b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:52:45 np0005629333 nova_compute[244014]: 2026-02-25 12:52:45.940 244018 DEBUG nova.compute.manager [req-cedfe7ec-38b7-4ab2-8f2f-db138b93c700 req-5a0570b9-c09d-41de-89e3-aa258b884556 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Refreshing instance network info cache due to event network-changed-aadc9e87-c7c5-4cb0-8906-64017d5aa14b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:52:45 np0005629333 nova_compute[244014]: 2026-02-25 12:52:45.941 244018 DEBUG oslo_concurrency.lockutils [req-cedfe7ec-38b7-4ab2-8f2f-db138b93c700 req-5a0570b9-c09d-41de-89e3-aa258b884556 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-f3af9615-94aa-4498-ab5c-3fadcab4a4e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:52:45 np0005629333 nova_compute[244014]: 2026-02-25 12:52:45.995 244018 DEBUG nova.network.neutron [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:52:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:46Z|01366|binding|INFO|Releasing lport 1599c73d-07eb-42e9-83bd-2cf546347a5b from this chassis (sb_readonly=0)
Feb 25 07:52:46 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:46Z|01367|binding|INFO|Releasing lport d3cef94a-f2c7-4a41-beb1-fd29a623854a from this chassis (sb_readonly=0)
Feb 25 07:52:46 np0005629333 nova_compute[244014]: 2026-02-25 12:52:46.539 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 305 active+clean; 270 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 1.6 MiB/s wr, 72 op/s
Feb 25 07:52:47 np0005629333 nova_compute[244014]: 2026-02-25 12:52:47.118 244018 DEBUG nova.network.neutron [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Updating instance_info_cache with network_info: [{"id": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "address": "fa:16:3e:c5:22:67", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaadc9e87-c7", "ovs_interfaceid": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:52:47 np0005629333 nova_compute[244014]: 2026-02-25 12:52:47.139 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-f3af9615-94aa-4498-ab5c-3fadcab4a4e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:52:47 np0005629333 nova_compute[244014]: 2026-02-25 12:52:47.139 244018 DEBUG nova.compute.manager [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Instance network_info: |[{"id": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "address": "fa:16:3e:c5:22:67", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaadc9e87-c7", "ovs_interfaceid": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:52:47 np0005629333 nova_compute[244014]: 2026-02-25 12:52:47.140 244018 DEBUG oslo_concurrency.lockutils [req-cedfe7ec-38b7-4ab2-8f2f-db138b93c700 req-5a0570b9-c09d-41de-89e3-aa258b884556 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-f3af9615-94aa-4498-ab5c-3fadcab4a4e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:52:47 np0005629333 nova_compute[244014]: 2026-02-25 12:52:47.140 244018 DEBUG nova.network.neutron [req-cedfe7ec-38b7-4ab2-8f2f-db138b93c700 req-5a0570b9-c09d-41de-89e3-aa258b884556 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Refreshing network info cache for port aadc9e87-c7c5-4cb0-8906-64017d5aa14b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:52:47 np0005629333 nova_compute[244014]: 2026-02-25 12:52:47.146 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Start _get_guest_xml network_info=[{"id": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "address": "fa:16:3e:c5:22:67", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaadc9e87-c7", "ovs_interfaceid": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:52:47 np0005629333 nova_compute[244014]: 2026-02-25 12:52:47.152 244018 WARNING nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:52:47 np0005629333 nova_compute[244014]: 2026-02-25 12:52:47.165 244018 DEBUG nova.virt.libvirt.host [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:52:47 np0005629333 nova_compute[244014]: 2026-02-25 12:52:47.166 244018 DEBUG nova.virt.libvirt.host [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:52:47 np0005629333 nova_compute[244014]: 2026-02-25 12:52:47.170 244018 DEBUG nova.virt.libvirt.host [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:52:47 np0005629333 nova_compute[244014]: 2026-02-25 12:52:47.171 244018 DEBUG nova.virt.libvirt.host [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:52:47 np0005629333 nova_compute[244014]: 2026-02-25 12:52:47.172 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:52:47 np0005629333 nova_compute[244014]: 2026-02-25 12:52:47.172 244018 DEBUG nova.virt.hardware [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:52:47 np0005629333 nova_compute[244014]: 2026-02-25 12:52:47.173 244018 DEBUG nova.virt.hardware [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:52:47 np0005629333 nova_compute[244014]: 2026-02-25 12:52:47.174 244018 DEBUG nova.virt.hardware [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:52:47 np0005629333 nova_compute[244014]: 2026-02-25 12:52:47.175 244018 DEBUG nova.virt.hardware [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:52:47 np0005629333 nova_compute[244014]: 2026-02-25 12:52:47.175 244018 DEBUG nova.virt.hardware [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:52:47 np0005629333 nova_compute[244014]: 2026-02-25 12:52:47.175 244018 DEBUG nova.virt.hardware [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:52:47 np0005629333 nova_compute[244014]: 2026-02-25 12:52:47.176 244018 DEBUG nova.virt.hardware [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:52:47 np0005629333 nova_compute[244014]: 2026-02-25 12:52:47.176 244018 DEBUG nova.virt.hardware [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:52:47 np0005629333 nova_compute[244014]: 2026-02-25 12:52:47.177 244018 DEBUG nova.virt.hardware [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:52:47 np0005629333 nova_compute[244014]: 2026-02-25 12:52:47.177 244018 DEBUG nova.virt.hardware [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:52:47 np0005629333 nova_compute[244014]: 2026-02-25 12:52:47.178 244018 DEBUG nova.virt.hardware [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:52:47 np0005629333 nova_compute[244014]: 2026-02-25 12:52:47.182 244018 DEBUG oslo_concurrency.processutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:52:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:52:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/574639651' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:52:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:52:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/574639651' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 07:52:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:52:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3814262031' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:52:47 np0005629333 nova_compute[244014]: 2026-02-25 12:52:47.834 244018 DEBUG oslo_concurrency.processutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:52:47 np0005629333 nova_compute[244014]: 2026-02-25 12:52:47.867 244018 DEBUG nova.storage.rbd_utils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:52:47 np0005629333 nova_compute[244014]: 2026-02-25 12:52:47.872 244018 DEBUG oslo_concurrency.processutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:52:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:52:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/650191885' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:52:48 np0005629333 nova_compute[244014]: 2026-02-25 12:52:48.432 244018 DEBUG oslo_concurrency.processutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:52:48 np0005629333 nova_compute[244014]: 2026-02-25 12:52:48.435 244018 DEBUG nova.virt.libvirt.vif [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:52:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1698593086',display_name='tempest-TestNetworkBasicOps-server-1698593086',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1698593086',id=129,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOfH5NIL0pxdeRt38xy0B1++ndyNZfkmy0xiJ4hQ2Mr9IzyNFCD9Sw7rhVhyk4a+PAujrQtax8jE403LZBlZi7aC4qHiQhot3Wwoiye+WhezVUjaIGjAv8ZbAegoYf9KmA==',key_name='tempest-TestNetworkBasicOps-1673629868',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ukf4s38n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:52:41Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=f3af9615-94aa-4498-ab5c-3fadcab4a4e6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "address": "fa:16:3e:c5:22:67", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaadc9e87-c7", "ovs_interfaceid": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, 
"meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:52:48 np0005629333 nova_compute[244014]: 2026-02-25 12:52:48.436 244018 DEBUG nova.network.os_vif_util [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "address": "fa:16:3e:c5:22:67", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaadc9e87-c7", "ovs_interfaceid": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:52:48 np0005629333 nova_compute[244014]: 2026-02-25 12:52:48.438 244018 DEBUG nova.network.os_vif_util [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:22:67,bridge_name='br-int',has_traffic_filtering=True,id=aadc9e87-c7c5-4cb0-8906-64017d5aa14b,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaadc9e87-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:52:48 np0005629333 nova_compute[244014]: 2026-02-25 12:52:48.440 244018 DEBUG nova.objects.instance [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_devices' on Instance uuid f3af9615-94aa-4498-ab5c-3fadcab4a4e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:52:48 np0005629333 nova_compute[244014]: 2026-02-25 12:52:48.464 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:52:48 np0005629333 nova_compute[244014]:  <uuid>f3af9615-94aa-4498-ab5c-3fadcab4a4e6</uuid>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:  <name>instance-00000081</name>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestNetworkBasicOps-server-1698593086</nova:name>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:52:47</nova:creationTime>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:52:48 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:        <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:        <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:        <nova:port uuid="aadc9e87-c7c5-4cb0-8906-64017d5aa14b">
Feb 25 07:52:48 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.30" ipVersion="4"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <entry name="serial">f3af9615-94aa-4498-ab5c-3fadcab4a4e6</entry>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <entry name="uuid">f3af9615-94aa-4498-ab5c-3fadcab4a4e6</entry>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk">
Feb 25 07:52:48 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:52:48 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk.config">
Feb 25 07:52:48 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:52:48 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:c5:22:67"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <target dev="tapaadc9e87-c7"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/f3af9615-94aa-4498-ab5c-3fadcab4a4e6/console.log" append="off"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:52:48 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:52:48 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:52:48 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:52:48 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
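The block above is the complete libvirt domain XML that Nova renders in _get_guest_xml before asking libvirt to create the guest. As a minimal sketch of what consuming such a definition looks like with libvirt-python, assuming the XML were saved to guest.xml (a hypothetical path; Nova itself drives this through nova.virt.libvirt.guest rather than raw calls like these):

    # Minimal sketch: start a transient guest from a saved domain XML.
    # guest.xml is a hypothetical path holding the XML dumped above.
    import libvirt

    with open("guest.xml") as f:
        xml = f.read()

    conn = libvirt.open("qemu:///system")  # the URI Nova's libvirt driver uses by default
    try:
        dom = conn.createXML(xml, 0)       # defines and starts a transient domain in one call
        print("started:", dom.name(), dom.UUIDString())
    finally:
        conn.close()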
Feb 25 07:52:48 np0005629333 nova_compute[244014]: 2026-02-25 12:52:48.466 244018 DEBUG nova.compute.manager [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Preparing to wait for external event network-vif-plugged-aadc9e87-c7c5-4cb0-8906-64017d5aa14b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:52:48 np0005629333 nova_compute[244014]: 2026-02-25 12:52:48.466 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:48 np0005629333 nova_compute[244014]: 2026-02-25 12:52:48.467 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:48 np0005629333 nova_compute[244014]: 2026-02-25 12:52:48.467 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
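The prepare/acquire/release sequence above registers a waiter for network-vif-plugged before the VIF is actually plugged, so the Neutron callback cannot race past the waiter and be lost. A sketch of that prepare-then-wait pattern with standard threading primitives (class and tag names here are illustrative, not Nova's internals):

    import threading

    class InstanceEvents:
        # Register a waiter *before* triggering the action
        # (cf. prepare_for_instance_event), deliver it from the callback path.
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}  # event tag -> threading.Event

        def prepare(self, tag):
            with self._lock:
                return self._events.setdefault(tag, threading.Event())

        def deliver(self, tag):
            with self._lock:
                ev = self._events.pop(tag, None)
            if ev:
                ev.set()

    events = InstanceEvents()
    waiter = events.prepare("network-vif-plugged-aadc9e87")  # register first
    # ... plug the VIF; Neutron reports the event asynchronously ...
    events.deliver("network-vif-plugged-aadc9e87")
    assert waiter.wait(timeout=300)  # the timeout plays the role of vif_plugging_timeout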
Feb 25 07:52:48 np0005629333 nova_compute[244014]: 2026-02-25 12:52:48.469 244018 DEBUG nova.virt.libvirt.vif [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:52:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1698593086',display_name='tempest-TestNetworkBasicOps-server-1698593086',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1698593086',id=129,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOfH5NIL0pxdeRt38xy0B1++ndyNZfkmy0xiJ4hQ2Mr9IzyNFCD9Sw7rhVhyk4a+PAujrQtax8jE403LZBlZi7aC4qHiQhot3Wwoiye+WhezVUjaIGjAv8ZbAegoYf9KmA==',key_name='tempest-TestNetworkBasicOps-1673629868',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ukf4s38n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:52:41Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=f3af9615-94aa-4498-ab5c-3fadcab4a4e6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "address": "fa:16:3e:c5:22:67", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaadc9e87-c7", "ovs_interfaceid": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:52:48 np0005629333 nova_compute[244014]: 2026-02-25 12:52:48.469 244018 DEBUG nova.network.os_vif_util [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "address": "fa:16:3e:c5:22:67", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaadc9e87-c7", "ovs_interfaceid": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:52:48 np0005629333 nova_compute[244014]: 2026-02-25 12:52:48.470 244018 DEBUG nova.network.os_vif_util [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:22:67,bridge_name='br-int',has_traffic_filtering=True,id=aadc9e87-c7c5-4cb0-8906-64017d5aa14b,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaadc9e87-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:52:48 np0005629333 nova_compute[244014]: 2026-02-25 12:52:48.471 244018 DEBUG os_vif [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:22:67,bridge_name='br-int',has_traffic_filtering=True,id=aadc9e87-c7c5-4cb0-8906-64017d5aa14b,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaadc9e87-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:52:48 np0005629333 nova_compute[244014]: 2026-02-25 12:52:48.472 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:48 np0005629333 nova_compute[244014]: 2026-02-25 12:52:48.473 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:52:48 np0005629333 nova_compute[244014]: 2026-02-25 12:52:48.473 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:52:48 np0005629333 nova_compute[244014]: 2026-02-25 12:52:48.477 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:48 np0005629333 nova_compute[244014]: 2026-02-25 12:52:48.478 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaadc9e87-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:52:48 np0005629333 nova_compute[244014]: 2026-02-25 12:52:48.479 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaadc9e87-c7, col_values=(('external_ids', {'iface-id': 'aadc9e87-c7c5-4cb0-8906-64017d5aa14b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c5:22:67', 'vm-uuid': 'f3af9615-94aa-4498-ab5c-3fadcab4a4e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
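Taken together, the two commands in this transaction attach the tap device to br-int and stamp the Interface row with the Neutron port ID (iface-id), which is what lets ovn-controller claim the port a moment later. A rough equivalent using ovsdbapp's Open_vSwitch schema API, assuming a local ovsdb-server socket (the exact API surface can vary between ovsdbapp releases):

    # Sketch of the AddPortCommand + DbSetCommand transaction from the log.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    external_ids = {
        "iface-id": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b",  # Neutron port UUID
        "iface-status": "active",
        "attached-mac": "fa:16:3e:c5:22:67",
        "vm-uuid": "f3af9615-94aa-4498-ab5c-3fadcab4a4e6",
    }
    # Both commands commit as one transaction, mirroring txn n=1 idx=0/idx=1 above
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tapaadc9e87-c7", may_exist=True))
        txn.add(api.db_set("Interface", "tapaadc9e87-c7",
                           ("external_ids", external_ids)))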
Feb 25 07:52:48 np0005629333 nova_compute[244014]: 2026-02-25 12:52:48.481 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:48 np0005629333 NetworkManager[49836]: <info>  [1772023968.4823] manager: (tapaadc9e87-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/569)
Feb 25 07:52:48 np0005629333 nova_compute[244014]: 2026-02-25 12:52:48.484 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:52:48 np0005629333 nova_compute[244014]: 2026-02-25 12:52:48.489 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:48 np0005629333 nova_compute[244014]: 2026-02-25 12:52:48.490 244018 INFO os_vif [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:22:67,bridge_name='br-int',has_traffic_filtering=True,id=aadc9e87-c7c5-4cb0-8906-64017d5aa14b,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaadc9e87-c7')#033[00m
Feb 25 07:52:48 np0005629333 nova_compute[244014]: 2026-02-25 12:52:48.553 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:52:48 np0005629333 nova_compute[244014]: 2026-02-25 12:52:48.554 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:52:48 np0005629333 nova_compute[244014]: 2026-02-25 12:52:48.554 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:c5:22:67, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:52:48 np0005629333 nova_compute[244014]: 2026-02-25 12:52:48.554 244018 INFO nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Using config drive#033[00m
Feb 25 07:52:48 np0005629333 nova_compute[244014]: 2026-02-25 12:52:48.577 244018 DEBUG nova.storage.rbd_utils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:52:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2167: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 1.8 MiB/s wr, 79 op/s
Feb 25 07:52:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:52:50 np0005629333 nova_compute[244014]: 2026-02-25 12:52:50.031 244018 INFO nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Creating config drive at /var/lib/nova/instances/f3af9615-94aa-4498-ab5c-3fadcab4a4e6/disk.config#033[00m
Feb 25 07:52:50 np0005629333 nova_compute[244014]: 2026-02-25 12:52:50.037 244018 DEBUG oslo_concurrency.processutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f3af9615-94aa-4498-ab5c-3fadcab4a4e6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp4hco5kjk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:52:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.124 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:52:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.126 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 25 07:52:50 np0005629333 nova_compute[244014]: 2026-02-25 12:52:50.125 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:50 np0005629333 nova_compute[244014]: 2026-02-25 12:52:50.177 244018 DEBUG oslo_concurrency.processutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f3af9615-94aa-4498-ab5c-3fadcab4a4e6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp4hco5kjk" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:52:50 np0005629333 nova_compute[244014]: 2026-02-25 12:52:50.214 244018 DEBUG nova.storage.rbd_utils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:52:50 np0005629333 nova_compute[244014]: 2026-02-25 12:52:50.218 244018 DEBUG oslo_concurrency.processutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f3af9615-94aa-4498-ab5c-3fadcab4a4e6/disk.config f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:52:50 np0005629333 nova_compute[244014]: 2026-02-25 12:52:50.355 244018 DEBUG oslo_concurrency.processutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f3af9615-94aa-4498-ab5c-3fadcab4a4e6/disk.config f3af9615-94aa-4498-ab5c-3fadcab4a4e6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
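The config-drive round trip is visible in the two commands above: mkisofs builds the ISO9660 image locally (volume label config-2, Joliet plus Rock Ridge), rbd import pushes it into the vms pool, and the local copy is deleted right after. Condensed into the oslo.concurrency calls the log shows (the metadata source directory is a tempdir in the real flow; the one below is hypothetical):

    from oslo_concurrency import processutils

    uuid = "f3af9615-94aa-4498-ab5c-3fadcab4a4e6"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"

    # 1) Build the ISO9660 config drive from the staged metadata files
    processutils.execute(
        "/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
        "-allow-multidot", "-l", "-publisher", "OpenStack Compute",
        "-quiet", "-J", "-r", "-V", "config-2", "/tmp/metadata-src")  # hypothetical dir

    # 2) Import it into Ceph as vms/<uuid>_disk.config; the local ISO can then go
    processutils.execute(
        "rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
        "--image-format=2", "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf")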
Feb 25 07:52:50 np0005629333 nova_compute[244014]: 2026-02-25 12:52:50.356 244018 INFO nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Deleting local config drive /var/lib/nova/instances/f3af9615-94aa-4498-ab5c-3fadcab4a4e6/disk.config because it was imported into RBD.#033[00m
Feb 25 07:52:50 np0005629333 nova_compute[244014]: 2026-02-25 12:52:50.410 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:50 np0005629333 kernel: tapaadc9e87-c7: entered promiscuous mode
Feb 25 07:52:50 np0005629333 NetworkManager[49836]: <info>  [1772023970.4188] manager: (tapaadc9e87-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/570)
Feb 25 07:52:50 np0005629333 nova_compute[244014]: 2026-02-25 12:52:50.419 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:50 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:50Z|01368|binding|INFO|Claiming lport aadc9e87-c7c5-4cb0-8906-64017d5aa14b for this chassis.
Feb 25 07:52:50 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:50Z|01369|binding|INFO|aadc9e87-c7c5-4cb0-8906-64017d5aa14b: Claiming fa:16:3e:c5:22:67 10.100.0.30
Feb 25 07:52:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.429 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:22:67 10.100.0.30'], port_security=['fa:16:3e:c5:22:67 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': 'f3af9615-94aa-4498-ab5c-3fadcab4a4e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e68b4f5a-a28d-4155-93af-2997c1302403', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b63d7e28-c418-4eb5-bd68-2c67c6b10e04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c6b15ef5-ec0a-4008-9d34-97a6f1968263, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=aadc9e87-c7c5-4cb0-8906-64017d5aa14b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:52:50 np0005629333 nova_compute[244014]: 2026-02-25 12:52:50.431 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.431 157129 INFO neutron.agent.ovn.metadata.agent [-] Port aadc9e87-c7c5-4cb0-8906-64017d5aa14b in datapath e68b4f5a-a28d-4155-93af-2997c1302403 bound to our chassis#033[00m
Feb 25 07:52:50 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:50Z|01370|binding|INFO|Setting lport aadc9e87-c7c5-4cb0-8906-64017d5aa14b ovn-installed in OVS
Feb 25 07:52:50 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:50Z|01371|binding|INFO|Setting lport aadc9e87-c7c5-4cb0-8906-64017d5aa14b up in Southbound
Feb 25 07:52:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.433 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e68b4f5a-a28d-4155-93af-2997c1302403#033[00m
Feb 25 07:52:50 np0005629333 nova_compute[244014]: 2026-02-25 12:52:50.434 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.454 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f1894df7-488d-492a-9e8d-293fe18fba70]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:50 np0005629333 systemd-machined[210048]: New machine qemu-161-instance-00000081.
Feb 25 07:52:50 np0005629333 systemd-udevd[361780]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:52:50 np0005629333 systemd[1]: Started Virtual Machine qemu-161-instance-00000081.
Feb 25 07:52:50 np0005629333 NetworkManager[49836]: <info>  [1772023970.4820] device (tapaadc9e87-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:52:50 np0005629333 NetworkManager[49836]: <info>  [1772023970.4833] device (tapaadc9e87-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:52:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.485 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f71b434e-a89f-490d-91bb-f3e6a71b1212]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.490 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[76848a39-43fe-4170-aed9-101798bd039b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.522 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[670f51ab-1afe-41c7-8293-3254298e9bdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.543 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[13576499-bd43-4d43-8c29-a63e14a91a64]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape68b4f5a-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:ac:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 404], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592123, 'reachable_time': 41665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361790, 'error': None, 'target': 'ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:52:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.563 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4f00855c-1a58-4e52-81b7-3702f5110f5d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tape68b4f5a-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592135, 'tstamp': 592135}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361792, 'error': None, 'target': 'ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape68b4f5a-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592138, 'tstamp': 592138}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361792, 'error': None, 'target': 'ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
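The two privsep replies above are pyroute2 netlink dumps taken inside the ovnmeta-e68b4f5a-... namespace: RTM_NEWLINK shows the veth tape68b4f5a-a1 up, and the RTM_NEWADDR pair shows it holding both the subnet address 10.100.0.17/28 and the metadata address 169.254.169.254/32. A hedged illustration of the underlying calls with pyroute2 (the agent goes through neutron's privsep helpers, not this exact code):

    # Sketch: bring up and address the metadata interface inside the
    # OVN metadata namespace, mirroring the state dumped above.
    from pyroute2 import NetNS

    ns_name = "ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403"
    with NetNS(ns_name) as ns:
        idx = ns.link_lookup(ifname="tape68b4f5a-a1")[0]
        ns.link("set", index=idx, state="up")
        # Subnet-local address plus the well-known metadata address
        ns.addr("add", index=idx, address="10.100.0.17", prefixlen=28)
        ns.addr("add", index=idx, address="169.254.169.254", prefixlen=32)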
Feb 25 07:52:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.565 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape68b4f5a-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:52:50 np0005629333 nova_compute[244014]: 2026-02-25 12:52:50.567 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:50 np0005629333 nova_compute[244014]: 2026-02-25 12:52:50.569 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.569 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape68b4f5a-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:52:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.570 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:52:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.570 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape68b4f5a-a0, col_values=(('external_ids', {'iface-id': 'd3cef94a-f2c7-4a41-beb1-fd29a623854a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:52:50 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:50.570 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:52:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 25 07:52:50 np0005629333 nova_compute[244014]: 2026-02-25 12:52:50.680 244018 DEBUG nova.compute.manager [req-3bd22f25-6eeb-464c-a3f1-d7b6ec40398f req-2abcdf7c-17cd-4aae-b9b1-12734fed63e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Received event network-vif-plugged-aadc9e87-c7c5-4cb0-8906-64017d5aa14b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:52:50 np0005629333 nova_compute[244014]: 2026-02-25 12:52:50.680 244018 DEBUG oslo_concurrency.lockutils [req-3bd22f25-6eeb-464c-a3f1-d7b6ec40398f req-2abcdf7c-17cd-4aae-b9b1-12734fed63e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:50 np0005629333 nova_compute[244014]: 2026-02-25 12:52:50.681 244018 DEBUG oslo_concurrency.lockutils [req-3bd22f25-6eeb-464c-a3f1-d7b6ec40398f req-2abcdf7c-17cd-4aae-b9b1-12734fed63e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:50 np0005629333 nova_compute[244014]: 2026-02-25 12:52:50.681 244018 DEBUG oslo_concurrency.lockutils [req-3bd22f25-6eeb-464c-a3f1-d7b6ec40398f req-2abcdf7c-17cd-4aae-b9b1-12734fed63e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:52:50 np0005629333 nova_compute[244014]: 2026-02-25 12:52:50.681 244018 DEBUG nova.compute.manager [req-3bd22f25-6eeb-464c-a3f1-d7b6ec40398f req-2abcdf7c-17cd-4aae-b9b1-12734fed63e2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Processing event network-vif-plugged-aadc9e87-c7c5-4cb0-8906-64017d5aa14b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:52:50 np0005629333 nova_compute[244014]: 2026-02-25 12:52:50.758 244018 DEBUG nova.network.neutron [req-cedfe7ec-38b7-4ab2-8f2f-db138b93c700 req-5a0570b9-c09d-41de-89e3-aa258b884556 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Updated VIF entry in instance network info cache for port aadc9e87-c7c5-4cb0-8906-64017d5aa14b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:52:50 np0005629333 nova_compute[244014]: 2026-02-25 12:52:50.759 244018 DEBUG nova.network.neutron [req-cedfe7ec-38b7-4ab2-8f2f-db138b93c700 req-5a0570b9-c09d-41de-89e3-aa258b884556 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Updating instance_info_cache with network_info: [{"id": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "address": "fa:16:3e:c5:22:67", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaadc9e87-c7", "ovs_interfaceid": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:52:50 np0005629333 nova_compute[244014]: 2026-02-25 12:52:50.781 244018 DEBUG oslo_concurrency.lockutils [req-cedfe7ec-38b7-4ab2-8f2f-db138b93c700 req-5a0570b9-c09d-41de-89e3-aa258b884556 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-f3af9615-94aa-4498-ab5c-3fadcab4a4e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:52:50 np0005629333 nova_compute[244014]: 2026-02-25 12:52:50.796 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023955.7955625, 739026fc-9c96-4212-9fa3-e6731e7f61f9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:52:50 np0005629333 nova_compute[244014]: 2026-02-25 12:52:50.796 244018 INFO nova.compute.manager [-] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:52:50 np0005629333 nova_compute[244014]: 2026-02-25 12:52:50.815 244018 DEBUG nova.compute.manager [None req-b27aab78-2568-4047-9f44-d8c34f49e484 - - - - - -] [instance: 739026fc-9c96-4212-9fa3-e6731e7f61f9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:52:51 np0005629333 nova_compute[244014]: 2026-02-25 12:52:51.154 244018 DEBUG nova.compute.manager [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:52:51 np0005629333 nova_compute[244014]: 2026-02-25 12:52:51.155 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023971.1537647, f3af9615-94aa-4498-ab5c-3fadcab4a4e6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:52:51 np0005629333 nova_compute[244014]: 2026-02-25 12:52:51.155 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] VM Started (Lifecycle Event)#033[00m
Feb 25 07:52:51 np0005629333 nova_compute[244014]: 2026-02-25 12:52:51.159 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:52:51 np0005629333 nova_compute[244014]: 2026-02-25 12:52:51.163 244018 INFO nova.virt.libvirt.driver [-] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Instance spawned successfully.#033[00m
Feb 25 07:52:51 np0005629333 nova_compute[244014]: 2026-02-25 12:52:51.163 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:52:51 np0005629333 nova_compute[244014]: 2026-02-25 12:52:51.179 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:52:51 np0005629333 nova_compute[244014]: 2026-02-25 12:52:51.184 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:52:51 np0005629333 nova_compute[244014]: 2026-02-25 12:52:51.187 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:52:51 np0005629333 nova_compute[244014]: 2026-02-25 12:52:51.188 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:52:51 np0005629333 nova_compute[244014]: 2026-02-25 12:52:51.188 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:52:51 np0005629333 nova_compute[244014]: 2026-02-25 12:52:51.189 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:52:51 np0005629333 nova_compute[244014]: 2026-02-25 12:52:51.189 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:52:51 np0005629333 nova_compute[244014]: 2026-02-25 12:52:51.189 244018 DEBUG nova.virt.libvirt.driver [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
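The six "Found default for ..." lines record the implicit hardware choices the driver made for this guest; persisting them keeps the virtual hardware stable across rebuilds even if global defaults later change. Collected from the lines above (the dict is just a restatement of the logged values; the exact system_metadata key naming is not shown in this log):

    # Implicit device defaults registered for instance f3af9615-..., per the log
    registered_defaults = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }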
Feb 25 07:52:51 np0005629333 nova_compute[244014]: 2026-02-25 12:52:51.211 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:52:51 np0005629333 nova_compute[244014]: 2026-02-25 12:52:51.212 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023971.153989, f3af9615-94aa-4498-ab5c-3fadcab4a4e6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:52:51 np0005629333 nova_compute[244014]: 2026-02-25 12:52:51.212 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:52:51 np0005629333 nova_compute[244014]: 2026-02-25 12:52:51.236 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:52:51 np0005629333 nova_compute[244014]: 2026-02-25 12:52:51.240 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772023971.1581452, f3af9615-94aa-4498-ab5c-3fadcab4a4e6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:52:51 np0005629333 nova_compute[244014]: 2026-02-25 12:52:51.240 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:52:51 np0005629333 nova_compute[244014]: 2026-02-25 12:52:51.245 244018 INFO nova.compute.manager [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Took 9.69 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:52:51 np0005629333 nova_compute[244014]: 2026-02-25 12:52:51.245 244018 DEBUG nova.compute.manager [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:52:51 np0005629333 nova_compute[244014]: 2026-02-25 12:52:51.269 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:52:51 np0005629333 nova_compute[244014]: 2026-02-25 12:52:51.272 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:52:51 np0005629333 nova_compute[244014]: 2026-02-25 12:52:51.292 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:52:51 np0005629333 nova_compute[244014]: 2026-02-25 12:52:51.303 244018 INFO nova.compute.manager [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Took 10.88 seconds to build instance.#033[00m
Feb 25 07:52:51 np0005629333 nova_compute[244014]: 2026-02-25 12:52:51.319 244018 DEBUG oslo_concurrency.lockutils [None req-4a797a74-867b-4dba-ab1b-932a337ad2b8 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.994s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:52:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:52.128 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:52:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2169: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 429 KiB/s rd, 1.8 MiB/s wr, 78 op/s
Feb 25 07:52:52 np0005629333 nova_compute[244014]: 2026-02-25 12:52:52.808 244018 DEBUG nova.compute.manager [req-61cc61fb-db1d-4d30-b01a-6869e66af71c req-d41a6bca-2323-4962-b029-48ce39366df0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Received event network-vif-plugged-aadc9e87-c7c5-4cb0-8906-64017d5aa14b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:52:52 np0005629333 nova_compute[244014]: 2026-02-25 12:52:52.809 244018 DEBUG oslo_concurrency.lockutils [req-61cc61fb-db1d-4d30-b01a-6869e66af71c req-d41a6bca-2323-4962-b029-48ce39366df0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:52 np0005629333 nova_compute[244014]: 2026-02-25 12:52:52.810 244018 DEBUG oslo_concurrency.lockutils [req-61cc61fb-db1d-4d30-b01a-6869e66af71c req-d41a6bca-2323-4962-b029-48ce39366df0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:52 np0005629333 nova_compute[244014]: 2026-02-25 12:52:52.810 244018 DEBUG oslo_concurrency.lockutils [req-61cc61fb-db1d-4d30-b01a-6869e66af71c req-d41a6bca-2323-4962-b029-48ce39366df0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:52:52 np0005629333 nova_compute[244014]: 2026-02-25 12:52:52.810 244018 DEBUG nova.compute.manager [req-61cc61fb-db1d-4d30-b01a-6869e66af71c req-d41a6bca-2323-4962-b029-48ce39366df0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] No waiting events found dispatching network-vif-plugged-aadc9e87-c7c5-4cb0-8906-64017d5aa14b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:52:52 np0005629333 nova_compute[244014]: 2026-02-25 12:52:52.810 244018 WARNING nova.compute.manager [req-61cc61fb-db1d-4d30-b01a-6869e66af71c req-d41a6bca-2323-4962-b029-48ce39366df0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Received unexpected event network-vif-plugged-aadc9e87-c7c5-4cb0-8906-64017d5aa14b for instance with vm_state active and task_state None.#033[00m
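[Annotation] The acquire/release pairs above are oslo.concurrency's named-lock pattern serializing per-instance event handling; the "<uuid>-events" lock is taken, the pending event popped, and the lock released in under a millisecond. A minimal sketch of the same pattern, assuming a simple in-memory event store (names here are hypothetical, not Nova's actual code):

    from oslo_concurrency import lockutils

    pending_events = {}  # event name -> payload, per instance (hypothetical store)

    def pop_event(instance_uuid, event_name):
        # Serialize access using the same "<uuid>-events" lock naming
        # that appears in the log lines above.
        with lockutils.lock('%s-events' % instance_uuid):
            return pending_events.pop(event_name, None)

When nothing is waiting on the popped event, Nova logs the "No waiting events found" / "Received unexpected event" lines seen above.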
Feb 25 07:52:53 np0005629333 nova_compute[244014]: 2026-02-25 12:52:53.481 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:52:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:54Z|01372|binding|INFO|Releasing lport 1599c73d-07eb-42e9-83bd-2cf546347a5b from this chassis (sb_readonly=0)
Feb 25 07:52:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:52:54Z|01373|binding|INFO|Releasing lport d3cef94a-f2c7-4a41-beb1-fd29a623854a from this chassis (sb_readonly=0)
Feb 25 07:52:54 np0005629333 nova_compute[244014]: 2026-02-25 12:52:54.171 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2170: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 404 KiB/s rd, 1.4 MiB/s wr, 39 op/s
Feb 25 07:52:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:55.029 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:52:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:55.030 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:52:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:52:55.031 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
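[Annotation] The ProcessMonitor pass above guards its child-process check with a single named lock. oslo.concurrency also provides a decorator form for this; a sketch under the assumption of placeholder respawn logic (the loop body is illustrative, not neutron's implementation):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes(monitored):
        # One guarded pass over subprocess.Popen handles: anything that
        # exited would be respawned here (placeholder logic).
        for name, proc in monitored.items():
            if proc.poll() is not None:
                print('child %s exited; would respawn' % name)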
Feb 25 07:52:55 np0005629333 nova_compute[244014]: 2026-02-25 12:52:55.110 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023960.108923, dd7feae9-9d2a-41b6-9277-cbf51a2c8f23 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:52:55 np0005629333 nova_compute[244014]: 2026-02-25 12:52:55.110 244018 INFO nova.compute.manager [-] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:52:55 np0005629333 nova_compute[244014]: 2026-02-25 12:52:55.153 244018 DEBUG nova.compute.manager [None req-2bdfcbab-26be-485b-8fdf-8ea52490f415 - - - - - -] [instance: dd7feae9-9d2a-41b6-9277-cbf51a2c8f23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:52:55 np0005629333 nova_compute[244014]: 2026-02-25 12:52:55.413 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.4 MiB/s wr, 76 op/s
Feb 25 07:52:58 np0005629333 nova_compute[244014]: 2026-02-25 12:52:58.485 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:52:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 214 KiB/s wr, 88 op/s
Feb 25 07:52:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:52:59 np0005629333 nova_compute[244014]: 2026-02-25 12:52:59.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:52:59 np0005629333 nova_compute[244014]: 2026-02-25 12:52:59.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 25 07:52:59 np0005629333 nova_compute[244014]: 2026-02-25 12:52:59.896 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
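[Annotation] Tasks like _run_pending_deletes and _cleanup_incomplete_migrations below are driven by oslo.service's periodic-task machinery. A minimal sketch of how such a task is declared (the spacing value is an assumption for illustration):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=600)  # run roughly every 10 minutes
        def _run_pending_deletes(self, context):
            print('Cleaning up deleted instances')

    # The service framework invokes manager.run_periodic_tasks(context)
    # on a timer, which is what emits the "Running periodic task ..." lines.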
Feb 25 07:53:00 np0005629333 nova_compute[244014]: 2026-02-25 12:53:00.415 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Feb 25 07:53:00 np0005629333 nova_compute[244014]: 2026-02-25 12:53:00.801 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:00 np0005629333 nova_compute[244014]: 2026-02-25 12:53:00.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:53:00 np0005629333 nova_compute[244014]: 2026-02-25 12:53:00.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 25 07:53:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:53:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:53:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:53:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:53:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:53:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:53:02 np0005629333 ovn_controller[147040]: 2026-02-25T12:53:02Z|00167|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c5:22:67 10.100.0.30
Feb 25 07:53:02 np0005629333 ovn_controller[147040]: 2026-02-25T12:53:02Z|00168|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c5:22:67 10.100.0.30
Feb 25 07:53:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 305 active+clean; 295 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 101 op/s
Feb 25 07:53:03 np0005629333 nova_compute[244014]: 2026-02-25 12:53:03.488 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:53:03 np0005629333 nova_compute[244014]: 2026-02-25 12:53:03.747 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2175: 305 pgs: 305 active+clean; 295 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.6 MiB/s wr, 78 op/s
Feb 25 07:53:04 np0005629333 nova_compute[244014]: 2026-02-25 12:53:04.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:53:05 np0005629333 nova_compute[244014]: 2026-02-25 12:53:05.418 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:05.954 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:5e:1e 10.100.0.2 2001:db8::f816:3eff:fe1c:5e1e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe1c:5e1e/64', 'neutron:device_id': 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88562c34-222a-439a-b444-9e6f8a6d70cd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1dcb5c12-7c78-49d6-b14f-b2c48c93839d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=09662fcb-392d-469a-981c-54d31225748b) old=Port_Binding(mac=['fa:16:3e:1c:5e:1e 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88562c34-222a-439a-b444-9e6f8a6d70cd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:53:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:05.956 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 09662fcb-392d-469a-981c-54d31225748b in datapath 88562c34-222a-439a-b444-9e6f8a6d70cd updated#033[00m
Feb 25 07:53:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:05.958 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 88562c34-222a-439a-b444-9e6f8a6d70cd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:53:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:05.960 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3c3b0f15-ecf4-49de-83cc-8095128e61a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2176: 305 pgs: 305 active+clean; 301 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 100 op/s
Feb 25 07:53:06 np0005629333 podman[361838]: 2026-02-25 12:53:06.754653685 +0000 UTC m=+0.100178220 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Feb 25 07:53:06 np0005629333 podman[361839]: 2026-02-25 12:53:06.770081261 +0000 UTC m=+0.110302706 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
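[Annotation] The two podman health_status records above come from each container's configured healthcheck ('test': '/openstack/healthcheck' in config_data). The same check can be exercised by hand; a small wrapper, assuming the podman CLI is on PATH:

    import subprocess

    def container_healthy(name: str) -> bool:
        # `podman healthcheck run` executes the container's configured
        # healthcheck test and exits 0 when it passes.
        result = subprocess.run(
            ['podman', 'healthcheck', 'run', name],
            capture_output=True, text=True,
        )
        return result.returncode == 0

    if __name__ == '__main__':
        for c in ('ovn_metadata_agent', 'ovn_controller'):
            print(c, 'healthy' if container_healthy(c) else 'unhealthy')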
Feb 25 07:53:08 np0005629333 nova_compute[244014]: 2026-02-25 12:53:08.492 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2177: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 733 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Feb 25 07:53:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:53:10 np0005629333 nova_compute[244014]: 2026-02-25 12:53:10.420 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2178: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 07:53:11 np0005629333 nova_compute[244014]: 2026-02-25 12:53:11.348 244018 DEBUG nova.compute.manager [req-8db93cb4-02e4-43b1-a507-4f275a08dbba req-9b0b84e8-5424-4080-a1b5-aba8f8258112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-changed-77034e66-a3e3-47d0-b467-29a045343530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:53:11 np0005629333 nova_compute[244014]: 2026-02-25 12:53:11.349 244018 DEBUG nova.compute.manager [req-8db93cb4-02e4-43b1-a507-4f275a08dbba req-9b0b84e8-5424-4080-a1b5-aba8f8258112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Refreshing instance network info cache due to event network-changed-77034e66-a3e3-47d0-b467-29a045343530. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:53:11 np0005629333 nova_compute[244014]: 2026-02-25 12:53:11.350 244018 DEBUG oslo_concurrency.lockutils [req-8db93cb4-02e4-43b1-a507-4f275a08dbba req-9b0b84e8-5424-4080-a1b5-aba8f8258112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:53:11 np0005629333 nova_compute[244014]: 2026-02-25 12:53:11.350 244018 DEBUG oslo_concurrency.lockutils [req-8db93cb4-02e4-43b1-a507-4f275a08dbba req-9b0b84e8-5424-4080-a1b5-aba8f8258112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:53:11 np0005629333 nova_compute[244014]: 2026-02-25 12:53:11.351 244018 DEBUG nova.network.neutron [req-8db93cb4-02e4-43b1-a507-4f275a08dbba req-9b0b84e8-5424-4080-a1b5-aba8f8258112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Refreshing network info cache for port 77034e66-a3e3-47d0-b467-29a045343530 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:53:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2179: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 07:53:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:13.296 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1c:5e:1e 10.100.0.2 2001:db8:0:1:f816:3eff:fe1c:5e1e 2001:db8::f816:3eff:fe1c:5e1e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe1c:5e1e/64 2001:db8::f816:3eff:fe1c:5e1e/64', 'neutron:device_id': 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88562c34-222a-439a-b444-9e6f8a6d70cd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1dcb5c12-7c78-49d6-b14f-b2c48c93839d, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=09662fcb-392d-469a-981c-54d31225748b) old=Port_Binding(mac=['fa:16:3e:1c:5e:1e 10.100.0.2 2001:db8::f816:3eff:fe1c:5e1e'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe1c:5e1e/64', 'neutron:device_id': 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88562c34-222a-439a-b444-9e6f8a6d70cd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:53:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:13.299 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 09662fcb-392d-469a-981c-54d31225748b in datapath 88562c34-222a-439a-b444-9e6f8a6d70cd updated#033[00m
Feb 25 07:53:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:13.302 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 88562c34-222a-439a-b444-9e6f8a6d70cd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:53:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:13.303 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[49279396-c2fb-4b81-97cb-5bf4d5b9bc04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:13 np0005629333 nova_compute[244014]: 2026-02-25 12:53:13.496 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:53:14 np0005629333 nova_compute[244014]: 2026-02-25 12:53:14.041 244018 DEBUG nova.network.neutron [req-8db93cb4-02e4-43b1-a507-4f275a08dbba req-9b0b84e8-5424-4080-a1b5-aba8f8258112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updated VIF entry in instance network info cache for port 77034e66-a3e3-47d0-b467-29a045343530. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:53:14 np0005629333 nova_compute[244014]: 2026-02-25 12:53:14.042 244018 DEBUG nova.network.neutron [req-8db93cb4-02e4-43b1-a507-4f275a08dbba req-9b0b84e8-5424-4080-a1b5-aba8f8258112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updating instance_info_cache with network_info: [{"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:53:14 np0005629333 nova_compute[244014]: 2026-02-25 12:53:14.083 244018 DEBUG oslo_concurrency.lockutils [req-8db93cb4-02e4-43b1-a507-4f275a08dbba req-9b0b84e8-5424-4080-a1b5-aba8f8258112 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:53:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2180: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 228 KiB/s rd, 554 KiB/s wr, 38 op/s
Feb 25 07:53:15 np0005629333 nova_compute[244014]: 2026-02-25 12:53:15.423 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:15 np0005629333 nova_compute[244014]: 2026-02-25 12:53:15.716 244018 DEBUG oslo_concurrency.lockutils [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:53:15 np0005629333 nova_compute[244014]: 2026-02-25 12:53:15.717 244018 DEBUG oslo_concurrency.lockutils [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:53:15 np0005629333 nova_compute[244014]: 2026-02-25 12:53:15.718 244018 DEBUG oslo_concurrency.lockutils [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:53:15 np0005629333 nova_compute[244014]: 2026-02-25 12:53:15.718 244018 DEBUG oslo_concurrency.lockutils [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:53:15 np0005629333 nova_compute[244014]: 2026-02-25 12:53:15.719 244018 DEBUG oslo_concurrency.lockutils [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:53:15 np0005629333 nova_compute[244014]: 2026-02-25 12:53:15.720 244018 INFO nova.compute.manager [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Terminating instance#033[00m
Feb 25 07:53:15 np0005629333 nova_compute[244014]: 2026-02-25 12:53:15.722 244018 DEBUG nova.compute.manager [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:53:15 np0005629333 kernel: tapaadc9e87-c7 (unregistering): left promiscuous mode
Feb 25 07:53:15 np0005629333 NetworkManager[49836]: <info>  [1772023995.9365] device (tapaadc9e87-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:53:15 np0005629333 ovn_controller[147040]: 2026-02-25T12:53:15Z|01374|binding|INFO|Releasing lport aadc9e87-c7c5-4cb0-8906-64017d5aa14b from this chassis (sb_readonly=0)
Feb 25 07:53:15 np0005629333 ovn_controller[147040]: 2026-02-25T12:53:15Z|01375|binding|INFO|Setting lport aadc9e87-c7c5-4cb0-8906-64017d5aa14b down in Southbound
Feb 25 07:53:15 np0005629333 ovn_controller[147040]: 2026-02-25T12:53:15Z|01376|binding|INFO|Removing iface tapaadc9e87-c7 ovn-installed in OVS
Feb 25 07:53:15 np0005629333 nova_compute[244014]: 2026-02-25 12:53:15.947 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:15 np0005629333 nova_compute[244014]: 2026-02-25 12:53:15.957 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:15.961 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:22:67 10.100.0.30'], port_security=['fa:16:3e:c5:22:67 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/28', 'neutron:device_id': 'f3af9615-94aa-4498-ab5c-3fadcab4a4e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e68b4f5a-a28d-4155-93af-2997c1302403', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b63d7e28-c418-4eb5-bd68-2c67c6b10e04', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c6b15ef5-ec0a-4008-9d34-97a6f1968263, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=aadc9e87-c7c5-4cb0-8906-64017d5aa14b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:53:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:15.963 157129 INFO neutron.agent.ovn.metadata.agent [-] Port aadc9e87-c7c5-4cb0-8906-64017d5aa14b in datapath e68b4f5a-a28d-4155-93af-2997c1302403 unbound from our chassis#033[00m
Feb 25 07:53:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:15.964 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e68b4f5a-a28d-4155-93af-2997c1302403#033[00m
Feb 25 07:53:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:15.983 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8d39222d-4162-4850-ab9c-0ec0132e75eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:15 np0005629333 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000081.scope: Deactivated successfully.
Feb 25 07:53:15 np0005629333 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000081.scope: Consumed 12.650s CPU time.
Feb 25 07:53:15 np0005629333 systemd-machined[210048]: Machine qemu-161-instance-00000081 terminated.
Feb 25 07:53:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:16.021 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[84fe199c-4bfd-447e-b325-b2913c4f77bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:16.025 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e7a336a2-57a8-4244-b2b6-d5ac37510d60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:16.056 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[25dc83a0-fde6-45dc-9cf9-7692f7dced29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:16.075 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[528665e4-1588-4a1c-bc61-24242d66e0b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape68b4f5a-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:ac:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 404], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592123, 'reachable_time': 41665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361894, 'error': None, 'target': 'ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:16.102 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d2c44262-3746-4fe5-a454-f2f79d7d5cd7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tape68b4f5a-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592135, 'tstamp': 592135}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361895, 'error': None, 'target': 'ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape68b4f5a-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592138, 'tstamp': 592138}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361895, 'error': None, 'target': 'ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
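[Annotation] The two privsep replies above are the unprivileged metadata agent receiving pyroute2 netlink results (RTM_NEWLINK, RTM_NEWADDR) back from its privileged helper daemon. The general shape of an oslo.privsep entrypoint, sketched under the assumption of a simple link-state query (context name and function are illustrative):

    from oslo_privsep import capabilities, priv_context

    # Calls decorated with @net_admin.entrypoint run in the forked root
    # helper; their return values travel back over the channel, producing
    # the "privsep: reply[...]" DEBUG lines seen in this log.
    net_admin = priv_context.PrivContext(
        'example',
        cfg_section='privsep',
        pypath=__name__ + '.net_admin',
        capabilities=[capabilities.CAP_NET_ADMIN],
    )

    @net_admin.entrypoint
    def get_link_state(ifname):
        from pyroute2 import IPRoute  # imported inside the privileged process
        with IPRoute() as ipr:
            idx = ipr.link_lookup(ifname=ifname)[0]
            return ipr.get_links(idx)[0].get_attr('IFLA_OPERSTATE')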
Feb 25 07:53:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:16.106 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape68b4f5a-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:53:16 np0005629333 nova_compute[244014]: 2026-02-25 12:53:16.108 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:16 np0005629333 nova_compute[244014]: 2026-02-25 12:53:16.113 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:16.114 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape68b4f5a-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:53:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:16.115 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:53:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:16.116 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape68b4f5a-a0, col_values=(('external_ids', {'iface-id': 'd3cef94a-f2c7-4a41-beb1-fd29a623854a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:53:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:16.117 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
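[Annotation] The DelPortCommand/AddPortCommand/DbSetCommand transactions above are ovsdbapp commands the metadata agent issues against the local Open_vSwitch database. The same sequence expressed directly with ovsdbapp's API, with port names and the iface-id taken from the log (connection details are an assumption; the agent actually runs these as three single-command transactions, batched here for brevity):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Move the metadata tap to br-int and point its external_ids at the
    # right Neutron port, mirroring the transactions in the log.
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tape68b4f5a-a0', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tape68b4f5a-a0', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tape68b4f5a-a0',
            ('external_ids', {'iface-id': 'd3cef94a-f2c7-4a41-beb1-fd29a623854a'})))

When the database already holds the desired state, ovsdbapp logs "Transaction caused no change", as on the lines above.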
Feb 25 07:53:16 np0005629333 nova_compute[244014]: 2026-02-25 12:53:16.152 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:16 np0005629333 nova_compute[244014]: 2026-02-25 12:53:16.158 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:16 np0005629333 nova_compute[244014]: 2026-02-25 12:53:16.168 244018 INFO nova.virt.libvirt.driver [-] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Instance destroyed successfully.#033[00m
Feb 25 07:53:16 np0005629333 nova_compute[244014]: 2026-02-25 12:53:16.169 244018 DEBUG nova.objects.instance [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'resources' on Instance uuid f3af9615-94aa-4498-ab5c-3fadcab4a4e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:53:16 np0005629333 nova_compute[244014]: 2026-02-25 12:53:16.186 244018 DEBUG nova.virt.libvirt.vif [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:52:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1698593086',display_name='tempest-TestNetworkBasicOps-server-1698593086',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1698593086',id=129,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOfH5NIL0pxdeRt38xy0B1++ndyNZfkmy0xiJ4hQ2Mr9IzyNFCD9Sw7rhVhyk4a+PAujrQtax8jE403LZBlZi7aC4qHiQhot3Wwoiye+WhezVUjaIGjAv8ZbAegoYf9KmA==',key_name='tempest-TestNetworkBasicOps-1673629868',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:52:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ukf4s38n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:52:51Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=f3af9615-94aa-4498-ab5c-3fadcab4a4e6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "address": "fa:16:3e:c5:22:67", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaadc9e87-c7", "ovs_interfaceid": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:53:16 np0005629333 nova_compute[244014]: 2026-02-25 12:53:16.186 244018 DEBUG nova.network.os_vif_util [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "address": "fa:16:3e:c5:22:67", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaadc9e87-c7", "ovs_interfaceid": "aadc9e87-c7c5-4cb0-8906-64017d5aa14b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:53:16 np0005629333 nova_compute[244014]: 2026-02-25 12:53:16.187 244018 DEBUG nova.network.os_vif_util [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c5:22:67,bridge_name='br-int',has_traffic_filtering=True,id=aadc9e87-c7c5-4cb0-8906-64017d5aa14b,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaadc9e87-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:53:16 np0005629333 nova_compute[244014]: 2026-02-25 12:53:16.188 244018 DEBUG os_vif [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:22:67,bridge_name='br-int',has_traffic_filtering=True,id=aadc9e87-c7c5-4cb0-8906-64017d5aa14b,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaadc9e87-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:53:16 np0005629333 nova_compute[244014]: 2026-02-25 12:53:16.190 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:16 np0005629333 nova_compute[244014]: 2026-02-25 12:53:16.190 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaadc9e87-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:53:16 np0005629333 nova_compute[244014]: 2026-02-25 12:53:16.192 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:16 np0005629333 nova_compute[244014]: 2026-02-25 12:53:16.195 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:16 np0005629333 nova_compute[244014]: 2026-02-25 12:53:16.199 244018 INFO os_vif [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c5:22:67,bridge_name='br-int',has_traffic_filtering=True,id=aadc9e87-c7c5-4cb0-8906-64017d5aa14b,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaadc9e87-c7')#033[00m
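[Annotation] The "Converting VIF" / "Unplugging vif" / "Successfully unplugged vif" sequence above is Nova handing the OVS port teardown to the os-vif library, whose 'ovs' plugin issues the DelPortCommand seen a few lines earlier. The library-level shape of that call, sketched with abbreviated values from the log (object construction is simplified; real callers pass fully populated network and port-profile objects):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the os-vif plugins (ovs, linux_bridge, ...)

    inst = instance_info.InstanceInfo(
        uuid='f3af9615-94aa-4498-ab5c-3fadcab4a4e6',
        name='instance-00000081')
    port = vif.VIFOpenVSwitch(
        id='aadc9e87-c7c5-4cb0-8906-64017d5aa14b',
        address='fa:16:3e:c5:22:67',
        vif_name='tapaadc9e87-c7',
        bridge_name='br-int',
        network=network.Network(id='e68b4f5a-a28d-4155-93af-2997c1302403'))

    # Removes the tap from br-int; on success Nova logs
    # "Successfully unplugged vif ..." as above.
    os_vif.unplug(port, inst)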
Feb 25 07:53:16 np0005629333 nova_compute[244014]: 2026-02-25 12:53:16.276 244018 DEBUG nova.compute.manager [req-126311d1-1948-4c30-a400-b11e16158e34 req-51b1355e-bf8f-4721-8fff-bf82d5c44cb3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Received event network-vif-unplugged-aadc9e87-c7c5-4cb0-8906-64017d5aa14b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:53:16 np0005629333 nova_compute[244014]: 2026-02-25 12:53:16.278 244018 DEBUG oslo_concurrency.lockutils [req-126311d1-1948-4c30-a400-b11e16158e34 req-51b1355e-bf8f-4721-8fff-bf82d5c44cb3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:53:16 np0005629333 nova_compute[244014]: 2026-02-25 12:53:16.278 244018 DEBUG oslo_concurrency.lockutils [req-126311d1-1948-4c30-a400-b11e16158e34 req-51b1355e-bf8f-4721-8fff-bf82d5c44cb3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:53:16 np0005629333 nova_compute[244014]: 2026-02-25 12:53:16.279 244018 DEBUG oslo_concurrency.lockutils [req-126311d1-1948-4c30-a400-b11e16158e34 req-51b1355e-bf8f-4721-8fff-bf82d5c44cb3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:53:16 np0005629333 nova_compute[244014]: 2026-02-25 12:53:16.279 244018 DEBUG nova.compute.manager [req-126311d1-1948-4c30-a400-b11e16158e34 req-51b1355e-bf8f-4721-8fff-bf82d5c44cb3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] No waiting events found dispatching network-vif-unplugged-aadc9e87-c7c5-4cb0-8906-64017d5aa14b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:53:16 np0005629333 nova_compute[244014]: 2026-02-25 12:53:16.279 244018 DEBUG nova.compute.manager [req-126311d1-1948-4c30-a400-b11e16158e34 req-51b1355e-bf8f-4721-8fff-bf82d5c44cb3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Received event network-vif-unplugged-aadc9e87-c7c5-4cb0-8906-64017d5aa14b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:53:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 228 KiB/s rd, 554 KiB/s wr, 38 op/s
Feb 25 07:53:17 np0005629333 nova_compute[244014]: 2026-02-25 12:53:17.885 244018 INFO nova.virt.libvirt.driver [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Deleting instance files /var/lib/nova/instances/f3af9615-94aa-4498-ab5c-3fadcab4a4e6_del#033[00m
Feb 25 07:53:17 np0005629333 nova_compute[244014]: 2026-02-25 12:53:17.888 244018 INFO nova.virt.libvirt.driver [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Deletion of /var/lib/nova/instances/f3af9615-94aa-4498-ab5c-3fadcab4a4e6_del complete#033[00m
Feb 25 07:53:17 np0005629333 nova_compute[244014]: 2026-02-25 12:53:17.946 244018 INFO nova.compute.manager [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Took 2.22 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:53:17 np0005629333 nova_compute[244014]: 2026-02-25 12:53:17.947 244018 DEBUG oslo.service.loopingcall [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:53:17 np0005629333 nova_compute[244014]: 2026-02-25 12:53:17.947 244018 DEBUG nova.compute.manager [-] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:53:17 np0005629333 nova_compute[244014]: 2026-02-25 12:53:17.948 244018 DEBUG nova.network.neutron [-] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:53:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2182: 305 pgs: 305 active+clean; 303 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 102 KiB/s wr, 25 op/s
Feb 25 07:53:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:53:19 np0005629333 nova_compute[244014]: 2026-02-25 12:53:19.100 244018 DEBUG nova.compute.manager [req-ecc18ce3-a72e-4db5-8542-281b412c7afa req-9a23e1dd-9eb3-40ff-aff8-a0e8934d46ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Received event network-vif-plugged-aadc9e87-c7c5-4cb0-8906-64017d5aa14b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:53:19 np0005629333 nova_compute[244014]: 2026-02-25 12:53:19.101 244018 DEBUG oslo_concurrency.lockutils [req-ecc18ce3-a72e-4db5-8542-281b412c7afa req-9a23e1dd-9eb3-40ff-aff8-a0e8934d46ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:53:19 np0005629333 nova_compute[244014]: 2026-02-25 12:53:19.102 244018 DEBUG oslo_concurrency.lockutils [req-ecc18ce3-a72e-4db5-8542-281b412c7afa req-9a23e1dd-9eb3-40ff-aff8-a0e8934d46ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:53:19 np0005629333 nova_compute[244014]: 2026-02-25 12:53:19.102 244018 DEBUG oslo_concurrency.lockutils [req-ecc18ce3-a72e-4db5-8542-281b412c7afa req-9a23e1dd-9eb3-40ff-aff8-a0e8934d46ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:53:19 np0005629333 nova_compute[244014]: 2026-02-25 12:53:19.103 244018 DEBUG nova.compute.manager [req-ecc18ce3-a72e-4db5-8542-281b412c7afa req-9a23e1dd-9eb3-40ff-aff8-a0e8934d46ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] No waiting events found dispatching network-vif-plugged-aadc9e87-c7c5-4cb0-8906-64017d5aa14b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:53:19 np0005629333 nova_compute[244014]: 2026-02-25 12:53:19.103 244018 WARNING nova.compute.manager [req-ecc18ce3-a72e-4db5-8542-281b412c7afa req-9a23e1dd-9eb3-40ff-aff8-a0e8934d46ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Received unexpected event network-vif-plugged-aadc9e87-c7c5-4cb0-8906-64017d5aa14b for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:53:20 np0005629333 nova_compute[244014]: 2026-02-25 12:53:20.297 244018 DEBUG nova.network.neutron [-] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:53:20 np0005629333 nova_compute[244014]: 2026-02-25 12:53:20.314 244018 INFO nova.compute.manager [-] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Took 2.37 seconds to deallocate network for instance.#033[00m
Feb 25 07:53:20 np0005629333 nova_compute[244014]: 2026-02-25 12:53:20.359 244018 DEBUG oslo_concurrency.lockutils [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:53:20 np0005629333 nova_compute[244014]: 2026-02-25 12:53:20.360 244018 DEBUG oslo_concurrency.lockutils [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:53:20 np0005629333 nova_compute[244014]: 2026-02-25 12:53:20.368 244018 DEBUG nova.compute.manager [req-c232c262-cc28-46c5-a877-d41a9a5e2796 req-2bc9ce8a-7c74-40d0-9fca-ee47ad1b6d32 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Received event network-vif-deleted-aadc9e87-c7c5-4cb0-8906-64017d5aa14b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:53:20 np0005629333 nova_compute[244014]: 2026-02-25 12:53:20.426 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:20 np0005629333 nova_compute[244014]: 2026-02-25 12:53:20.431 244018 DEBUG oslo_concurrency.processutils [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:53:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2183: 305 pgs: 305 active+clean; 303 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.2 KiB/s rd, 23 KiB/s wr, 10 op/s
Feb 25 07:53:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:53:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2080829712' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:53:21 np0005629333 nova_compute[244014]: 2026-02-25 12:53:21.044 244018 DEBUG oslo_concurrency.processutils [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:53:21 np0005629333 nova_compute[244014]: 2026-02-25 12:53:21.050 244018 DEBUG nova.compute.provider_tree [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:53:21 np0005629333 nova_compute[244014]: 2026-02-25 12:53:21.066 244018 DEBUG nova.scheduler.client.report [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:53:21 np0005629333 nova_compute[244014]: 2026-02-25 12:53:21.096 244018 DEBUG oslo_concurrency.lockutils [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:53:21 np0005629333 nova_compute[244014]: 2026-02-25 12:53:21.134 244018 INFO nova.scheduler.client.report [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Deleted allocations for instance f3af9615-94aa-4498-ab5c-3fadcab4a4e6#033[00m
Feb 25 07:53:21 np0005629333 nova_compute[244014]: 2026-02-25 12:53:21.193 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:21 np0005629333 nova_compute[244014]: 2026-02-25 12:53:21.215 244018 DEBUG oslo_concurrency.lockutils [None req-8ff2961e-84c3-45ed-bbda-211ce5f8c877 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "f3af9615-94aa-4498-ab5c-3fadcab4a4e6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:53:21 np0005629333 nova_compute[244014]: 2026-02-25 12:53:21.809 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:53:21 np0005629333 nova_compute[244014]: 2026-02-25 12:53:21.810 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:53:21 np0005629333 nova_compute[244014]: 2026-02-25 12:53:21.837 244018 DEBUG nova.compute.manager [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:53:21 np0005629333 nova_compute[244014]: 2026-02-25 12:53:21.936 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:53:21 np0005629333 nova_compute[244014]: 2026-02-25 12:53:21.936 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:53:21 np0005629333 nova_compute[244014]: 2026-02-25 12:53:21.945 244018 DEBUG nova.virt.hardware [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:53:21 np0005629333 nova_compute[244014]: 2026-02-25 12:53:21.946 244018 INFO nova.compute.claims [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:53:21 np0005629333 nova_compute[244014]: 2026-02-25 12:53:21.996 244018 DEBUG oslo_concurrency.lockutils [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "interface-809da994-7551-4f52-8920-b0dfaa2ef73e-77034e66-a3e3-47d0-b467-29a045343530" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:53:21 np0005629333 nova_compute[244014]: 2026-02-25 12:53:21.997 244018 DEBUG oslo_concurrency.lockutils [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "interface-809da994-7551-4f52-8920-b0dfaa2ef73e-77034e66-a3e3-47d0-b467-29a045343530" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.026 244018 DEBUG nova.objects.instance [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'flavor' on Instance uuid 809da994-7551-4f52-8920-b0dfaa2ef73e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.051 244018 DEBUG nova.virt.libvirt.vif [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1151259173',display_name='tempest-TestNetworkBasicOps-server-1151259173',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1151259173',id=127,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuhKTGrJ6xoqtXbC9i6shxmOzzCAmEiPLvEhcBT9lMLvpNHET3NrmwNha38Zzx8OOcER4UhJ6EWWvnBqNIlR5/VZu+vuQ6n1q9c1LTSLn17vfclbttzgZoR8PFOeBob+Q==',key_name='tempest-TestNetworkBasicOps-1930796261',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:52:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ijtjj0g2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:52:01Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=809da994-7551-4f52-8920-b0dfaa2ef73e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.052 244018 DEBUG nova.network.os_vif_util [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.053 244018 DEBUG nova.network.os_vif_util [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:63:eb:19,bridge_name='br-int',has_traffic_filtering=True,id=77034e66-a3e3-47d0-b467-29a045343530,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77034e66-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.063 244018 DEBUG nova.virt.libvirt.guest [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:63:eb:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77034e66-a3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.067 244018 DEBUG nova.virt.libvirt.guest [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:63:eb:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77034e66-a3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.071 244018 DEBUG nova.virt.libvirt.driver [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Attempting to detach device tap77034e66-a3 from instance 809da994-7551-4f52-8920-b0dfaa2ef73e from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.072 244018 DEBUG nova.virt.libvirt.guest [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] detach device xml: <interface type="ethernet">
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <mac address="fa:16:3e:63:eb:19"/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <model type="virtio"/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <mtu size="1442"/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <target dev="tap77034e66-a3"/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]: </interface>
Feb 25 07:53:22 np0005629333 nova_compute[244014]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.081 244018 DEBUG nova.virt.libvirt.guest [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:63:eb:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77034e66-a3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.086 244018 DEBUG nova.virt.libvirt.guest [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:63:eb:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77034e66-a3"/></interface> not found in domain: <domain type='kvm' id='159'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <name>instance-0000007f</name>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <uuid>809da994-7551-4f52-8920-b0dfaa2ef73e</uuid>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <nova:name>tempest-TestNetworkBasicOps-server-1151259173</nova:name>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <nova:creationTime>2026-02-25 12:52:31</nova:creationTime>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <nova:flavor name="m1.nano">
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <nova:memory>128</nova:memory>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <nova:disk>1</nova:disk>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <nova:swap>0</nova:swap>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <nova:vcpus>1</nova:vcpus>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  </nova:flavor>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <nova:owner>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  </nova:owner>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <nova:ports>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <nova:port uuid="4f59e1f7-f07c-48a1-82b4-b6a563a7130a">
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <nova:port uuid="77034e66-a3e3-47d0-b467-29a045343530">
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  </nova:ports>
Feb 25 07:53:22 np0005629333 nova_compute[244014]: </nova:instance>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <memory unit='KiB'>131072</memory>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <vcpu placement='static'>1</vcpu>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <resource>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <partition>/machine</partition>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  </resource>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <sysinfo type='smbios'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <entry name='manufacturer'>RDO</entry>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <entry name='product'>OpenStack Compute</entry>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <entry name='serial'>809da994-7551-4f52-8920-b0dfaa2ef73e</entry>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <entry name='uuid'>809da994-7551-4f52-8920-b0dfaa2ef73e</entry>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <entry name='family'>Virtual Machine</entry>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <boot dev='hd'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <smbios mode='sysinfo'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <vmcoreinfo state='on'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <cpu mode='custom' match='exact' check='full'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <model fallback='forbid'>EPYC-Rome</model>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <vendor>AMD</vendor>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='x2apic'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='tsc-deadline'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='hypervisor'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='tsc_adjust'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='spec-ctrl'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='stibp'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='ssbd'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='cmp_legacy'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='overflow-recov'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='succor'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='ibrs'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='amd-ssbd'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='virt-ssbd'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='disable' name='lbrv'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='disable' name='tsc-scale'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='disable' name='vmcb-clean'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='disable' name='flushbyasid'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='disable' name='pause-filter'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='disable' name='pfthreshold'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='disable' name='svme-addr-chk'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='lfence-always-serializing'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='disable' name='xsaves'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='disable' name='svm'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='topoext'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='disable' name='npt'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='disable' name='nrip-save'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <clock offset='utc'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <timer name='pit' tickpolicy='delay'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <timer name='rtc' tickpolicy='catchup'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <timer name='hpet' present='no'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <on_poweroff>destroy</on_poweroff>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <on_reboot>restart</on_reboot>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <on_crash>destroy</on_crash>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <disk type='network' device='disk'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <driver name='qemu' type='raw' cache='none'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <auth username='openstack'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:        <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <source protocol='rbd' name='vms/809da994-7551-4f52-8920-b0dfaa2ef73e_disk' index='2'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:        <host name='192.168.122.100' port='6789'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target dev='vda' bus='virtio'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='virtio-disk0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <disk type='network' device='cdrom'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <driver name='qemu' type='raw' cache='none'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <auth username='openstack'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:        <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <source protocol='rbd' name='vms/809da994-7551-4f52-8920-b0dfaa2ef73e_disk.config' index='1'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:        <host name='192.168.122.100' port='6789'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target dev='sda' bus='sata'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <readonly/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='sata0-0-0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='0' model='pcie-root'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pcie.0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='1' port='0x10'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.1'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='2' port='0x11'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.2'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='3' port='0x12'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.3'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='4' port='0x13'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.4'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='5' port='0x14'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.5'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='6' port='0x15'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.6'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='7' port='0x16'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.7'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='8' port='0x17'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.8'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='9' port='0x18'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.9'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='10' port='0x19'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.10'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='11' port='0x1a'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.11'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='12' port='0x1b'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.12'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='13' port='0x1c'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.13'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='14' port='0x1d'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.14'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='15' port='0x1e'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.15'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='16' port='0x1f'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.16'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='17' port='0x20'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.17'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='18' port='0x21'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.18'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='19' port='0x22'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.19'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='20' port='0x23'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.20'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='21' port='0x24'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.21'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='22' port='0x25'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.22'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='23' port='0x26'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.23'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='24' port='0x27'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.24'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='25' port='0x28'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.25'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-pci-bridge'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.26'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='usb'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='sata' index='0'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='ide'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <interface type='ethernet'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <mac address='fa:16:3e:c3:63:14'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target dev='tap4f59e1f7-f0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model type='virtio'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <driver name='vhost' rx_queue_size='512'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <mtu size='1442'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='net0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <interface type='ethernet'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <mac address='fa:16:3e:63:eb:19'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target dev='tap77034e66-a3'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model type='virtio'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <driver name='vhost' rx_queue_size='512'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <mtu size='1442'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='net1'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <serial type='pty'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <source path='/dev/pts/1'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <log file='/var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/console.log' append='off'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target type='isa-serial' port='0'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:        <model name='isa-serial'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      </target>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='serial0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <console type='pty' tty='/dev/pts/1'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <source path='/dev/pts/1'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <log file='/var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/console.log' append='off'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target type='serial' port='0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='serial0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </console>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <input type='tablet' bus='usb'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='input0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='usb' bus='0' port='1'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <input type='mouse' bus='ps2'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='input1'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <input type='keyboard' bus='ps2'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='input2'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <listen type='address' address='::0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </graphics>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <audio id='1' type='none'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model type='virtio' heads='1' primary='yes'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='video0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <watchdog model='itco' action='reset'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='watchdog0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </watchdog>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <memballoon model='virtio'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <stats period='10'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='balloon0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <rng model='virtio'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <backend model='random'>/dev/urandom</backend>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='rng0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <label>system_u:system_r:svirt_t:s0:c232,c554</label>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c232,c554</imagelabel>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  </seclabel>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <label>+107:+107</label>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <imagelabel>+107:+107</imagelabel>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  </seclabel>
Feb 25 07:53:22 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:53:22 np0005629333 nova_compute[244014]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.087 244018 INFO nova.virt.libvirt.driver [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully detached device tap77034e66-a3 from instance 809da994-7551-4f52-8920-b0dfaa2ef73e from the persistent domain config.#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.088 244018 DEBUG nova.virt.libvirt.driver [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] (1/8): Attempting to detach device tap77034e66-a3 with device alias net1 from instance 809da994-7551-4f52-8920-b0dfaa2ef73e from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.089 244018 DEBUG nova.virt.libvirt.guest [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] detach device xml: <interface type="ethernet">
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <mac address="fa:16:3e:63:eb:19"/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <model type="virtio"/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <mtu size="1442"/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <target dev="tap77034e66-a3"/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]: </interface>
Feb 25 07:53:22 np0005629333 nova_compute[244014]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.110 244018 DEBUG oslo_concurrency.processutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:53:22 np0005629333 kernel: tap77034e66-a3 (unregistering): left promiscuous mode
Feb 25 07:53:22 np0005629333 NetworkManager[49836]: <info>  [1772024002.2019] device (tap77034e66-a3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.213 244018 DEBUG nova.virt.libvirt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Received event <DeviceRemovedEvent: 1772024002.2125895, 809da994-7551-4f52-8920-b0dfaa2ef73e => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.216 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:53:22Z|01377|binding|INFO|Releasing lport 77034e66-a3e3-47d0-b467-29a045343530 from this chassis (sb_readonly=0)
Feb 25 07:53:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:53:22Z|01378|binding|INFO|Setting lport 77034e66-a3e3-47d0-b467-29a045343530 down in Southbound
Feb 25 07:53:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:53:22Z|01379|binding|INFO|Removing iface tap77034e66-a3 ovn-installed in OVS
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.220 244018 DEBUG nova.virt.libvirt.driver [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Start waiting for the detach event from libvirt for device tap77034e66-a3 with device alias net1 for instance 809da994-7551-4f52-8920-b0dfaa2ef73e _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.221 244018 DEBUG nova.virt.libvirt.guest [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:63:eb:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77034e66-a3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.223 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:22.223 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:eb:19 10.100.0.29', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': '809da994-7551-4f52-8920-b0dfaa2ef73e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e68b4f5a-a28d-4155-93af-2997c1302403', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c6b15ef5-ec0a-4008-9d34-97a6f1968263, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=77034e66-a3e3-47d0-b467-29a045343530) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:53:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:22.225 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 77034e66-a3e3-47d0-b467-29a045343530 in datapath e68b4f5a-a28d-4155-93af-2997c1302403 unbound from our chassis#033[00m
Feb 25 07:53:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:22.227 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e68b4f5a-a28d-4155-93af-2997c1302403, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:53:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:22.228 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c1277e46-e0e5-4b4a-a4c8-c86b10fdf3cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:22.228 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403 namespace which is not needed anymore#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.229 244018 DEBUG nova.virt.libvirt.guest [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:63:eb:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77034e66-a3"/></interface> not found in domain: <domain type='kvm' id='159'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <name>instance-0000007f</name>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <uuid>809da994-7551-4f52-8920-b0dfaa2ef73e</uuid>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <nova:name>tempest-TestNetworkBasicOps-server-1151259173</nova:name>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <nova:creationTime>2026-02-25 12:52:31</nova:creationTime>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <nova:flavor name="m1.nano">
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <nova:memory>128</nova:memory>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <nova:disk>1</nova:disk>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <nova:swap>0</nova:swap>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <nova:vcpus>1</nova:vcpus>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  </nova:flavor>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <nova:owner>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  </nova:owner>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <nova:ports>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <nova:port uuid="4f59e1f7-f07c-48a1-82b4-b6a563a7130a">
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <nova:port uuid="77034e66-a3e3-47d0-b467-29a045343530">
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  </nova:ports>
Feb 25 07:53:22 np0005629333 nova_compute[244014]: </nova:instance>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <memory unit='KiB'>131072</memory>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <vcpu placement='static'>1</vcpu>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <resource>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <partition>/machine</partition>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  </resource>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <sysinfo type='smbios'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <entry name='manufacturer'>RDO</entry>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <entry name='product'>OpenStack Compute</entry>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <entry name='serial'>809da994-7551-4f52-8920-b0dfaa2ef73e</entry>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <entry name='uuid'>809da994-7551-4f52-8920-b0dfaa2ef73e</entry>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <entry name='family'>Virtual Machine</entry>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <boot dev='hd'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <smbios mode='sysinfo'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <vmcoreinfo state='on'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <cpu mode='custom' match='exact' check='full'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <model fallback='forbid'>EPYC-Rome</model>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <vendor>AMD</vendor>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='x2apic'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='tsc-deadline'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='hypervisor'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='tsc_adjust'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='spec-ctrl'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='stibp'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='ssbd'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='cmp_legacy'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='overflow-recov'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='succor'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='ibrs'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='amd-ssbd'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='virt-ssbd'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='disable' name='lbrv'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='disable' name='tsc-scale'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='disable' name='vmcb-clean'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='disable' name='flushbyasid'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='disable' name='pause-filter'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='disable' name='pfthreshold'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='disable' name='svme-addr-chk'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='lfence-always-serializing'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='disable' name='xsaves'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='disable' name='svm'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='require' name='topoext'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='disable' name='npt'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <feature policy='disable' name='nrip-save'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <clock offset='utc'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <timer name='pit' tickpolicy='delay'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <timer name='rtc' tickpolicy='catchup'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <timer name='hpet' present='no'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <on_poweroff>destroy</on_poweroff>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <on_reboot>restart</on_reboot>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <on_crash>destroy</on_crash>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <disk type='network' device='disk'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <driver name='qemu' type='raw' cache='none'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <auth username='openstack'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:        <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <source protocol='rbd' name='vms/809da994-7551-4f52-8920-b0dfaa2ef73e_disk' index='2'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:        <host name='192.168.122.100' port='6789'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target dev='vda' bus='virtio'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='virtio-disk0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <disk type='network' device='cdrom'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <driver name='qemu' type='raw' cache='none'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <auth username='openstack'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:        <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <source protocol='rbd' name='vms/809da994-7551-4f52-8920-b0dfaa2ef73e_disk.config' index='1'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:        <host name='192.168.122.100' port='6789'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target dev='sda' bus='sata'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <readonly/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='sata0-0-0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='0' model='pcie-root'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pcie.0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='1' port='0x10'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.1'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='2' port='0x11'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.2'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='3' port='0x12'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.3'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='4' port='0x13'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.4'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='5' port='0x14'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.5'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='6' port='0x15'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.6'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='7' port='0x16'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.7'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='8' port='0x17'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.8'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='9' port='0x18'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.9'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='10' port='0x19'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.10'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='11' port='0x1a'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.11'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='12' port='0x1b'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.12'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='13' port='0x1c'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.13'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='14' port='0x1d'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.14'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='15' port='0x1e'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.15'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='16' port='0x1f'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.16'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='17' port='0x20'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.17'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='18' port='0x21'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.18'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='19' port='0x22'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.19'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='20' port='0x23'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.20'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='21' port='0x24'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.21'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='22' port='0x25'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.22'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='23' port='0x26'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.23'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='24' port='0x27'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.24'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target chassis='25' port='0x28'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.25'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model name='pcie-pci-bridge'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='pci.26'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='usb'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <controller type='sata' index='0'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='ide'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <interface type='ethernet'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <mac address='fa:16:3e:c3:63:14'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target dev='tap4f59e1f7-f0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model type='virtio'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <driver name='vhost' rx_queue_size='512'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <mtu size='1442'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='net0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <serial type='pty'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <source path='/dev/pts/1'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <log file='/var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/console.log' append='off'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target type='isa-serial' port='0'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:        <model name='isa-serial'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      </target>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='serial0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <console type='pty' tty='/dev/pts/1'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <source path='/dev/pts/1'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <log file='/var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/console.log' append='off'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <target type='serial' port='0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='serial0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </console>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <input type='tablet' bus='usb'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='input0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='usb' bus='0' port='1'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <input type='mouse' bus='ps2'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='input1'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <input type='keyboard' bus='ps2'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='input2'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <listen type='address' address='::0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </graphics>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <audio id='1' type='none'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <model type='virtio' heads='1' primary='yes'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='video0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <watchdog model='itco' action='reset'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='watchdog0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </watchdog>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <memballoon model='virtio'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <stats period='10'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='balloon0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <rng model='virtio'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <backend model='random'>/dev/urandom</backend>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <alias name='rng0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <label>system_u:system_r:svirt_t:s0:c232,c554</label>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c232,c554</imagelabel>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  </seclabel>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <label>+107:+107</label>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <imagelabel>+107:+107</imagelabel>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  </seclabel>
Feb 25 07:53:22 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:53:22 np0005629333 nova_compute[244014]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.229 244018 INFO nova.virt.libvirt.driver [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully detached device tap77034e66-a3 from instance 809da994-7551-4f52-8920-b0dfaa2ef73e from the live domain config.#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.230 244018 DEBUG nova.virt.libvirt.vif [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1151259173',display_name='tempest-TestNetworkBasicOps-server-1151259173',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1151259173',id=127,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuhKTGrJ6xoqtXbC9i6shxmOzzCAmEiPLvEhcBT9lMLvpNHET3NrmwNha38Zzx8OOcER4UhJ6EWWvnBqNIlR5/VZu+vuQ6n1q9c1LTSLn17vfclbttzgZoR8PFOeBob+Q==',key_name='tempest-TestNetworkBasicOps-1930796261',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:52:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ijtjj0g2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:52:01Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=809da994-7551-4f52-8920-b0dfaa2ef73e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.231 244018 DEBUG nova.network.os_vif_util [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.231 244018 DEBUG nova.network.os_vif_util [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:63:eb:19,bridge_name='br-int',has_traffic_filtering=True,id=77034e66-a3e3-47d0-b467-29a045343530,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77034e66-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.232 244018 DEBUG os_vif [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:63:eb:19,bridge_name='br-int',has_traffic_filtering=True,id=77034e66-a3e3-47d0-b467-29a045343530,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77034e66-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.233 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.234 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77034e66-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.235 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.236 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.239 244018 INFO os_vif [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:63:eb:19,bridge_name='br-int',has_traffic_filtering=True,id=77034e66-a3e3-47d0-b467-29a045343530,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77034e66-a3')#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.240 244018 DEBUG nova.virt.libvirt.guest [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <nova:name>tempest-TestNetworkBasicOps-server-1151259173</nova:name>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <nova:creationTime>2026-02-25 12:53:22</nova:creationTime>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <nova:flavor name="m1.nano">
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <nova:memory>128</nova:memory>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <nova:disk>1</nova:disk>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <nova:swap>0</nova:swap>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <nova:vcpus>1</nova:vcpus>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  </nova:flavor>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <nova:owner>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  </nova:owner>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  <nova:ports>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    <nova:port uuid="4f59e1f7-f07c-48a1-82b4-b6a563a7130a">
Feb 25 07:53:22 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:53:22 np0005629333 nova_compute[244014]:  </nova:ports>
Feb 25 07:53:22 np0005629333 nova_compute[244014]: </nova:instance>
Feb 25 07:53:22 np0005629333 nova_compute[244014]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Feb 25 07:53:22 np0005629333 neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403[360547]: [NOTICE]   (360551) : haproxy version is 2.8.14-c23fe91
Feb 25 07:53:22 np0005629333 neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403[360547]: [NOTICE]   (360551) : path to executable is /usr/sbin/haproxy
Feb 25 07:53:22 np0005629333 neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403[360547]: [WARNING]  (360551) : Exiting Master process...
Feb 25 07:53:22 np0005629333 neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403[360547]: [ALERT]    (360551) : Current worker (360553) exited with code 143 (Terminated)
Feb 25 07:53:22 np0005629333 neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403[360547]: [WARNING]  (360551) : All workers exited. Exiting... (0)
Feb 25 07:53:22 np0005629333 systemd[1]: libpod-62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e.scope: Deactivated successfully.
Feb 25 07:53:22 np0005629333 conmon[360547]: conmon 62a431ef2588502cb52e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e.scope/container/memory.events
Feb 25 07:53:22 np0005629333 podman[361990]: 2026-02-25 12:53:22.375829521 +0000 UTC m=+0.047802431 container died 62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 25 07:53:22 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e-userdata-shm.mount: Deactivated successfully.
Feb 25 07:53:22 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ce54b1a0b6462e52effccd4126e7ec55dceac4038acb86f1df2038cf5f156ad6-merged.mount: Deactivated successfully.
Feb 25 07:53:22 np0005629333 podman[361990]: 2026-02-25 12:53:22.433926131 +0000 UTC m=+0.105899051 container cleanup 62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 07:53:22 np0005629333 systemd[1]: libpod-conmon-62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e.scope: Deactivated successfully.
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.491 244018 DEBUG nova.compute.manager [req-aee6c06e-8fac-435c-a317-4b9db8b19729 req-8e02825b-0e41-4bbd-8bfd-f5361b494535 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-vif-unplugged-77034e66-a3e3-47d0-b467-29a045343530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.491 244018 DEBUG oslo_concurrency.lockutils [req-aee6c06e-8fac-435c-a317-4b9db8b19729 req-8e02825b-0e41-4bbd-8bfd-f5361b494535 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.491 244018 DEBUG oslo_concurrency.lockutils [req-aee6c06e-8fac-435c-a317-4b9db8b19729 req-8e02825b-0e41-4bbd-8bfd-f5361b494535 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.492 244018 DEBUG oslo_concurrency.lockutils [req-aee6c06e-8fac-435c-a317-4b9db8b19729 req-8e02825b-0e41-4bbd-8bfd-f5361b494535 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.492 244018 DEBUG nova.compute.manager [req-aee6c06e-8fac-435c-a317-4b9db8b19729 req-8e02825b-0e41-4bbd-8bfd-f5361b494535 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] No waiting events found dispatching network-vif-unplugged-77034e66-a3e3-47d0-b467-29a045343530 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.493 244018 WARNING nova.compute.manager [req-aee6c06e-8fac-435c-a317-4b9db8b19729 req-8e02825b-0e41-4bbd-8bfd-f5361b494535 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received unexpected event network-vif-unplugged-77034e66-a3e3-47d0-b467-29a045343530 for instance with vm_state active and task_state None.#033[00m
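
The five nova_compute records above show the instance-event dispatch pattern: a per-instance lock named "<uuid>-events" is acquired, the waiter table is popped, and because nothing registered a waiter for network-vif-unplugged while the instance is active with no task_state, the event is logged as unexpected (a benign WARNING during port deletion). A minimal sketch of that pop-under-lock pattern, using a plain threading.Lock rather than nova's InstanceEvents class:

    import threading

    _lock = threading.Lock()
    _waiters = {}  # {(instance_uuid, event_name): callback}

    def pop_instance_event(instance_uuid, event_name):
        # Take the lock, pop the waiter if one was registered, and return
        # None otherwise -- the caller then logs the event as "unexpected",
        # exactly as at 12:53:22.493 above.
        with _lock:
            return _waiters.pop((instance_uuid, event_name), None)

    cb = pop_instance_event(
        "809da994-7551-4f52-8920-b0dfaa2ef73e",
        "network-vif-unplugged-77034e66-a3e3-47d0-b467-29a045343530")
    if cb is None:
        print("no waiting events found; treating event as unexpected")
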
Feb 25 07:53:22 np0005629333 podman[362018]: 2026-02-25 12:53:22.502442346 +0000 UTC m=+0.049679714 container remove 62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:53:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:22.508 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[57ccbdb9-f0b1-4832-8c29-b03023480764]: (4, ('Wed Feb 25 12:53:22 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403 (62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e)\n62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e\nWed Feb 25 12:53:22 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403 (62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e)\n62a431ef2588502cb52e896538a38fe0b2b55f1d522f5872eb0314ef660af17e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:22.510 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a87ed3df-1413-4a0c-b6f4-e60298612f05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:22.512 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape68b4f5a-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
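
DelPortCommand(port=tape68b4f5a-a0, bridge=None, if_exists=True) is ovsdbapp's transactional wrapper around an OVS port removal; with if_exists set, a missing port is not an error, and with bridge=None the database locates the port itself. A rough equivalent of that single transaction via the ovs-vsctl CLI (an illustration, not the agent's actual call):

    import subprocess

    # --if-exists corresponds to DelPortCommand(if_exists=True); omitting
    # the bridge name matches bridge=None in the logged command.
    subprocess.run(["ovs-vsctl", "--if-exists", "del-port", "tape68b4f5a-a0"],
                   check=True)
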
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.514 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:22 np0005629333 kernel: tape68b4f5a-a0: left promiscuous mode
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.523 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:22.526 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3dce6776-9039-4678-aeaf-f853fcb22adc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:22.547 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f1edd0c4-5fa1-4338-b748-b14be5e03767]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:22.548 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3767ecfe-4407-4d44-aaf3-6a2076f5fc07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:22.566 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[76862e2b-2194-4273-8601-5bd0fca78805]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592117, 'reachable_time': 20926, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362033, 'error': None, 'target': 'ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
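
The large privsep reply above is one netlink RTM_NEWLINK message for the namespace's loopback device, as serialized by pyroute2 (which neutron's privileged ip_lib uses): each ['IFLA_*', value] pair is a link attribute. A minimal sketch of producing and reading such a dump, assuming pyroute2 is installed:

    from pyroute2 import IPRoute

    ipr = IPRoute()
    try:
        # get_links() returns one message per interface; get_attr() reads a
        # single ['IFLA_*', value] pair from the attribute list shown in the
        # reply, e.g. IFLA_IFNAME -> 'lo', IFLA_MTU -> 65536.
        for msg in ipr.get_links():
            print(msg['index'], msg.get_attr('IFLA_IFNAME'),
                  msg.get_attr('IFLA_OPERSTATE'), msg.get_attr('IFLA_MTU'))
    finally:
        ipr.close()
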
Feb 25 07:53:22 np0005629333 systemd[1]: run-netns-ovnmeta\x2de68b4f5a\x2da28d\x2d4155\x2d93af\x2d2997c1302403.mount: Deactivated successfully.
Feb 25 07:53:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:22.569 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:53:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:22.570 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[e4f27c2a-68da-49f2-bf92-b11e2936855c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
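
remove_netns is neutron's privsep-wrapped namespace deletion: once the proxy container and the OVS port are gone, the ovnmeta-<network-uuid> namespace is unlinked, which is why systemd reports the run-netns-*.mount unit deactivated just above. A sketch of the same operation with pyroute2 (assumed available; requires root):

    from pyroute2 import netns

    NS = "ovnmeta-e68b4f5a-a28d-4155-93af-2997c1302403"

    # Equivalent to `ip netns delete <NS>`: unlinks /var/run/netns/<NS>,
    # after which the corresponding bind mount goes away.
    if NS in netns.listnetns():
        netns.remove(NS)
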
Feb 25 07:53:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 305 active+clean; 233 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 26 KiB/s wr, 29 op/s
Feb 25 07:53:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:53:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/636794886' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.743 244018 DEBUG oslo_concurrency.processutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
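
The 0.633 s `ceph df` call is nova's pool-capacity probe for the RBD image backend, matched by the mon audit record above. A minimal sketch of running it and reading the cluster totals back, with the client id and conf path copied from the logged command (field names follow the standard `ceph df --format=json` schema):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout

    stats = json.loads(out)["stats"]
    # total_bytes / total_avail_bytes feed the DISK_GB inventory reported
    # to placement a few lines below.
    print(stats["total_bytes"], stats["total_avail_bytes"])
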
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.749 244018 DEBUG nova.compute.provider_tree [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.765 244018 DEBUG nova.scheduler.client.report [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
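
Placement turns each inventory entry into allocatable capacity as (total - reserved) x allocation_ratio. Worked through for the values above: VCPU (8 - 0) x 4.0 = 32, MEMORY_MB (7679 - 512) x 1.0 = 7167, DISK_GB (59 - 1) x 0.9 = 52.2. Reproduced with the dict exactly as logged:

    inv = {'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
           'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
           'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9}}

    for rc, v in inv.items():
        # Placement admits allocations while
        # used + requested <= (total - reserved) * allocation_ratio.
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
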
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.788 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.788 244018 DEBUG nova.compute.manager [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.843 244018 DEBUG nova.compute.manager [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.843 244018 DEBUG nova.network.neutron [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.862 244018 INFO nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.888 244018 DEBUG nova.compute.manager [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.973 244018 DEBUG nova.compute.manager [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.974 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:53:22 np0005629333 nova_compute[244014]: 2026-02-25 12:53:22.974 244018 INFO nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Creating image(s)#033[00m
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.002 244018 DEBUG nova.storage.rbd_utils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.028 244018 DEBUG nova.storage.rbd_utils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.053 244018 DEBUG nova.storage.rbd_utils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.056 244018 DEBUG oslo_concurrency.processutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.138 244018 DEBUG oslo_concurrency.processutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
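
Before importing the cached base image, nova probes it with qemu-img info wrapped in oslo_concurrency.prlimit, capping address space (1 GiB) and CPU time (30 s) as a defense against malicious image headers. A sketch of the same probe, with the flags copied from the logged command line:

    import json
    import subprocess

    BASE = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"

    out = subprocess.run(
        ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
         "--as=1073741824", "--cpu=30", "--",
         "env", "LC_ALL=C", "LANG=C",
         "qemu-img", "info", BASE, "--force-share", "--output=json"],
        check=True, capture_output=True, text=True).stdout

    info = json.loads(out)
    print(info["format"], info["virtual-size"])  # image format and size in bytes
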
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.140 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.140 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.141 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.196 244018 DEBUG nova.storage.rbd_utils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.200 244018 DEBUG oslo_concurrency.processutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
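
With no existing RBD image (the repeated "does not exist" probes above), the flat base file is imported into the vms pool as <instance-uuid>_disk. The logged command, wrapped for reuse with its arguments verbatim:

    import subprocess

    subprocess.run(
        ["rbd", "import", "--pool", "vms",
         "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6",
         "03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)
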
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.239 244018 DEBUG nova.policy [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
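
The policy check failure above is expected rather than an error: the credentials carry only the reader and member roles, so the admin-gated network:attach_external_network rule denies, and nova simply excludes external networks for this boot. A minimal oslo.policy sketch of that kind of check (the rule string "role:admin" is an assumption for illustration, not nova's exact default):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(
        policy.RuleDefault("network:attach_external_network",
                           "role:admin"))  # assumed rule, for illustration

    creds = {"roles": ["reader", "member"], "is_admin": False,
             "project_id": "25fa1e8dd32c483686f869da2604f2b1"}

    # do_raise=False returns a boolean instead of raising
    # PolicyNotAuthorized, matching the DEBUG-level "failed" line above.
    print(enforcer.authorize("network:attach_external_network", {}, creds,
                             do_raise=False))
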
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.383 244018 DEBUG oslo_concurrency.lockutils [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.384 244018 DEBUG oslo_concurrency.lockutils [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.384 244018 DEBUG nova.network.neutron [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.432 244018 DEBUG nova.compute.manager [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-vif-deleted-77034e66-a3e3-47d0-b467-29a045343530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.433 244018 INFO nova.compute.manager [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Neutron deleted interface 77034e66-a3e3-47d0-b467-29a045343530; detaching it from the instance and deleting it from the info cache#033[00m
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.433 244018 DEBUG nova.network.neutron [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updating instance_info_cache with network_info: [{"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
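
The refreshed instance_info_cache above is a JSON list of VIF dicts; each subnet carries its fixed IPs, with any floating IPs nested under them. A small helper to flatten that structure into address pairs, operating on the cache entry exactly as logged:

    import json

    def addresses(network_info_json):
        # Yields (fixed_ip, [floating_ips]) for every VIF in the cache,
        # e.g. ('10.100.0.12', ['192.168.122.186']) for the entry above.
        for vif in json.loads(network_info_json):
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    yield (ip["address"],
                           [f["address"] for f in ip.get("floating_ips", [])])
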
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.454 244018 DEBUG nova.objects.instance [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lazy-loading 'system_metadata' on Instance uuid 809da994-7551-4f52-8920-b0dfaa2ef73e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.487 244018 DEBUG nova.objects.instance [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lazy-loading 'flavor' on Instance uuid 809da994-7551-4f52-8920-b0dfaa2ef73e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.514 244018 DEBUG nova.virt.libvirt.vif [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1151259173',display_name='tempest-TestNetworkBasicOps-server-1151259173',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1151259173',id=127,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuhKTGrJ6xoqtXbC9i6shxmOzzCAmEiPLvEhcBT9lMLvpNHET3NrmwNha38Zzx8OOcER4UhJ6EWWvnBqNIlR5/VZu+vuQ6n1q9c1LTSLn17vfclbttzgZoR8PFOeBob+Q==',key_name='tempest-TestNetworkBasicOps-1930796261',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:52:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ijtjj0g2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:52:01Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=809da994-7551-4f52-8920-b0dfaa2ef73e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": 
null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.515 244018 DEBUG nova.network.os_vif_util [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converting VIF {"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.516 244018 DEBUG nova.network.os_vif_util [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:63:eb:19,bridge_name='br-int',has_traffic_filtering=True,id=77034e66-a3e3-47d0-b467-29a045343530,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77034e66-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.520 244018 DEBUG nova.virt.libvirt.guest [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:63:eb:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77034e66-a3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
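
get_interface_by_cfg compares the generated <interface> config against every interface in the live domain XML; here the tap77034e66-a3 device has already been unplugged, so the lookup comes back empty, producing the "not found in domain" record that follows. A sketch of that match on MAC address plus target dev using the standard library (simplified; nova compares full libvirt config objects):

    import xml.etree.ElementTree as ET

    def find_interface(domain_xml, mac, dev):
        # Returns the first <interface> whose <mac address> and <target dev>
        # both match, or None -- the condition behind "not found in domain".
        for iface in ET.fromstring(domain_xml).findall("./devices/interface"):
            mac_el, tgt_el = iface.find("mac"), iface.find("target")
            if (mac_el is not None and mac_el.get("address") == mac
                    and tgt_el is not None and tgt_el.get("dev") == dev):
                return iface
        return None
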
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.523 244018 DEBUG nova.virt.libvirt.guest [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:63:eb:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77034e66-a3"/></interface> not found in domain: <domain type='kvm' id='159'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <name>instance-0000007f</name>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <uuid>809da994-7551-4f52-8920-b0dfaa2ef73e</uuid>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <nova:name>tempest-TestNetworkBasicOps-server-1151259173</nova:name>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <nova:creationTime>2026-02-25 12:53:22</nova:creationTime>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <nova:flavor name="m1.nano">
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <nova:memory>128</nova:memory>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <nova:disk>1</nova:disk>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <nova:swap>0</nova:swap>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <nova:vcpus>1</nova:vcpus>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  </nova:flavor>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <nova:owner>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  </nova:owner>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <nova:ports>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <nova:port uuid="4f59e1f7-f07c-48a1-82b4-b6a563a7130a">
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  </nova:ports>
Feb 25 07:53:23 np0005629333 nova_compute[244014]: </nova:instance>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <memory unit='KiB'>131072</memory>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <vcpu placement='static'>1</vcpu>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <resource>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <partition>/machine</partition>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  </resource>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <sysinfo type='smbios'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <entry name='manufacturer'>RDO</entry>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <entry name='product'>OpenStack Compute</entry>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <entry name='serial'>809da994-7551-4f52-8920-b0dfaa2ef73e</entry>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <entry name='uuid'>809da994-7551-4f52-8920-b0dfaa2ef73e</entry>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <entry name='family'>Virtual Machine</entry>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <boot dev='hd'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <smbios mode='sysinfo'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <vmcoreinfo state='on'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <cpu mode='custom' match='exact' check='full'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <model fallback='forbid'>EPYC-Rome</model>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <vendor>AMD</vendor>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='x2apic'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='tsc-deadline'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='hypervisor'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='tsc_adjust'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='spec-ctrl'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='stibp'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='ssbd'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='cmp_legacy'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='overflow-recov'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='succor'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='ibrs'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='amd-ssbd'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='virt-ssbd'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='disable' name='lbrv'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='disable' name='tsc-scale'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='disable' name='vmcb-clean'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='disable' name='flushbyasid'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='disable' name='pause-filter'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='disable' name='pfthreshold'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='disable' name='svme-addr-chk'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='lfence-always-serializing'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='disable' name='xsaves'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='disable' name='svm'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='topoext'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='disable' name='npt'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='disable' name='nrip-save'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <clock offset='utc'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <timer name='pit' tickpolicy='delay'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <timer name='rtc' tickpolicy='catchup'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <timer name='hpet' present='no'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <on_poweroff>destroy</on_poweroff>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <on_reboot>restart</on_reboot>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <on_crash>destroy</on_crash>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <disk type='network' device='disk'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <driver name='qemu' type='raw' cache='none'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <auth username='openstack'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:        <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <source protocol='rbd' name='vms/809da994-7551-4f52-8920-b0dfaa2ef73e_disk' index='2'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:        <host name='192.168.122.100' port='6789'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target dev='vda' bus='virtio'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='virtio-disk0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <disk type='network' device='cdrom'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <driver name='qemu' type='raw' cache='none'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <auth username='openstack'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:        <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <source protocol='rbd' name='vms/809da994-7551-4f52-8920-b0dfaa2ef73e_disk.config' index='1'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:        <host name='192.168.122.100' port='6789'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target dev='sda' bus='sata'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <readonly/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='sata0-0-0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='0' model='pcie-root'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pcie.0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='1' port='0x10'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.1'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='2' port='0x11'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.2'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='3' port='0x12'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.3'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='4' port='0x13'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.4'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='5' port='0x14'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.5'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='6' port='0x15'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.6'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='7' port='0x16'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.7'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='8' port='0x17'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.8'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='9' port='0x18'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.9'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='10' port='0x19'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.10'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='11' port='0x1a'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.11'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='12' port='0x1b'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.12'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='13' port='0x1c'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.13'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='14' port='0x1d'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.14'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='15' port='0x1e'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.15'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='16' port='0x1f'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.16'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='17' port='0x20'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.17'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='18' port='0x21'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.18'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='19' port='0x22'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.19'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='20' port='0x23'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.20'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='21' port='0x24'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.21'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='22' port='0x25'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.22'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='23' port='0x26'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.23'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='24' port='0x27'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.24'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='25' port='0x28'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.25'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-pci-bridge'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.26'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='usb'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='sata' index='0'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='ide'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <interface type='ethernet'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <mac address='fa:16:3e:c3:63:14'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target dev='tap4f59e1f7-f0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model type='virtio'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <driver name='vhost' rx_queue_size='512'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <mtu size='1442'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='net0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <serial type='pty'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <source path='/dev/pts/1'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <log file='/var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/console.log' append='off'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target type='isa-serial' port='0'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:        <model name='isa-serial'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      </target>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='serial0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <console type='pty' tty='/dev/pts/1'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <source path='/dev/pts/1'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <log file='/var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/console.log' append='off'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target type='serial' port='0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='serial0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </console>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <input type='tablet' bus='usb'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='input0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='usb' bus='0' port='1'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <input type='mouse' bus='ps2'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='input1'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <input type='keyboard' bus='ps2'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='input2'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <listen type='address' address='::0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </graphics>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <audio id='1' type='none'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model type='virtio' heads='1' primary='yes'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='video0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <watchdog model='itco' action='reset'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='watchdog0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </watchdog>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <memballoon model='virtio'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <stats period='10'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='balloon0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <rng model='virtio'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <backend model='random'>/dev/urandom</backend>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='rng0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <label>system_u:system_r:svirt_t:s0:c232,c554</label>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c232,c554</imagelabel>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  </seclabel>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <label>+107:+107</label>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <imagelabel>+107:+107</imagelabel>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  </seclabel>
Feb 25 07:53:23 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:53:23 np0005629333 nova_compute[244014]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
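[editor's note] The record ending above at guest.py:282 is Guest.get_interface_by_cfg dumping the live domain XML it searched for a matching <interface>. A minimal sketch of that lookup, assuming only the MAC address and target dev are compared (Nova's real code compares full LibvirtConfigGuestInterface objects; the function name here is illustrative):

    import xml.etree.ElementTree as ET

    def find_interface(domain_xml, mac, dev):
        """Return the matching <interface> element, or None if absent."""
        root = ET.fromstring(domain_xml)
        for iface in root.findall("./devices/interface"):
            mac_el = iface.find("mac")
            tgt_el = iface.find("target")
            if (mac_el is not None and tgt_el is not None
                    and mac_el.get("address") == mac
                    and tgt_el.get("dev") == dev):
                return iface
        return None

    # In the records that follow, fa:16:3e:63:eb:19 / tap77034e66-a3 has no
    # match in the dumped domain, so the lookup comes back empty and the
    # driver raises DeviceNotFound (see the WARNING further down).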
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.524 244018 DEBUG nova.virt.libvirt.guest [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:63:eb:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77034e66-a3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.528 244018 DEBUG nova.virt.libvirt.guest [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:63:eb:19"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap77034e66-a3"/></interface> not found in domain: <domain type='kvm' id='159'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <name>instance-0000007f</name>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <uuid>809da994-7551-4f52-8920-b0dfaa2ef73e</uuid>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <nova:name>tempest-TestNetworkBasicOps-server-1151259173</nova:name>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <nova:creationTime>2026-02-25 12:53:22</nova:creationTime>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <nova:flavor name="m1.nano">
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <nova:memory>128</nova:memory>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <nova:disk>1</nova:disk>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <nova:swap>0</nova:swap>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <nova:vcpus>1</nova:vcpus>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  </nova:flavor>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <nova:owner>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  </nova:owner>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <nova:ports>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <nova:port uuid="4f59e1f7-f07c-48a1-82b4-b6a563a7130a">
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  </nova:ports>
Feb 25 07:53:23 np0005629333 nova_compute[244014]: </nova:instance>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <memory unit='KiB'>131072</memory>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <currentMemory unit='KiB'>131072</currentMemory>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <vcpu placement='static'>1</vcpu>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <resource>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <partition>/machine</partition>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  </resource>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <sysinfo type='smbios'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <entry name='manufacturer'>RDO</entry>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <entry name='product'>OpenStack Compute</entry>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <entry name='version'>27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <entry name='serial'>809da994-7551-4f52-8920-b0dfaa2ef73e</entry>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <entry name='uuid'>809da994-7551-4f52-8920-b0dfaa2ef73e</entry>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <entry name='family'>Virtual Machine</entry>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <boot dev='hd'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <smbios mode='sysinfo'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <vmcoreinfo state='on'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <cpu mode='custom' match='exact' check='full'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <model fallback='forbid'>EPYC-Rome</model>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <vendor>AMD</vendor>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='x2apic'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='tsc-deadline'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='hypervisor'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='tsc_adjust'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='spec-ctrl'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='stibp'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='ssbd'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='cmp_legacy'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='overflow-recov'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='succor'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='ibrs'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='amd-ssbd'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='virt-ssbd'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='disable' name='lbrv'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='disable' name='tsc-scale'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='disable' name='vmcb-clean'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='disable' name='flushbyasid'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='disable' name='pause-filter'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='disable' name='pfthreshold'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='disable' name='svme-addr-chk'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='lfence-always-serializing'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='disable' name='xsaves'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='disable' name='svm'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='require' name='topoext'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='disable' name='npt'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <feature policy='disable' name='nrip-save'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <clock offset='utc'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <timer name='pit' tickpolicy='delay'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <timer name='rtc' tickpolicy='catchup'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <timer name='hpet' present='no'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <on_poweroff>destroy</on_poweroff>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <on_reboot>restart</on_reboot>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <on_crash>destroy</on_crash>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <disk type='network' device='disk'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <driver name='qemu' type='raw' cache='none'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <auth username='openstack'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:        <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <source protocol='rbd' name='vms/809da994-7551-4f52-8920-b0dfaa2ef73e_disk' index='2'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:        <host name='192.168.122.100' port='6789'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target dev='vda' bus='virtio'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='virtio-disk0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <disk type='network' device='cdrom'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <driver name='qemu' type='raw' cache='none'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <auth username='openstack'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:        <secret type='ceph' uuid='8ac33163-6221-5d58-9a39-8b6933fe7762'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <source protocol='rbd' name='vms/809da994-7551-4f52-8920-b0dfaa2ef73e_disk.config' index='1'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:        <host name='192.168.122.100' port='6789'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target dev='sda' bus='sata'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <readonly/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='sata0-0-0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='0' model='pcie-root'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pcie.0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='1' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='1' port='0x10'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.1'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='2' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='2' port='0x11'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.2'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='3' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='3' port='0x12'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.3'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='4' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='4' port='0x13'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.4'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='5' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='5' port='0x14'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.5'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='6' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='6' port='0x15'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.6'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='7' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='7' port='0x16'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.7'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='8' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='8' port='0x17'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.8'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='9' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='9' port='0x18'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.9'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='10' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='10' port='0x19'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.10'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='11' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='11' port='0x1a'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.11'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='12' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='12' port='0x1b'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.12'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='13' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='13' port='0x1c'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.13'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='14' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='14' port='0x1d'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.14'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='15' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='15' port='0x1e'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.15'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='16' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='16' port='0x1f'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.16'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='17' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='17' port='0x20'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.17'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='18' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='18' port='0x21'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.18'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='19' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='19' port='0x22'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.19'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='20' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='20' port='0x23'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.20'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='21' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='21' port='0x24'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.21'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='22' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='22' port='0x25'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.22'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='23' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='23' port='0x26'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.23'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='24' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='24' port='0x27'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.24'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='25' model='pcie-root-port'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-root-port'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target chassis='25' port='0x28'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.25'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model name='pcie-pci-bridge'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='pci.26'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='usb' index='0' model='piix3-uhci'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='usb'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <controller type='sata' index='0'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='ide'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </controller>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <interface type='ethernet'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <mac address='fa:16:3e:c3:63:14'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target dev='tap4f59e1f7-f0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model type='virtio'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <driver name='vhost' rx_queue_size='512'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <mtu size='1442'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='net0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <serial type='pty'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <source path='/dev/pts/1'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <log file='/var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/console.log' append='off'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target type='isa-serial' port='0'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:        <model name='isa-serial'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      </target>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='serial0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <console type='pty' tty='/dev/pts/1'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <source path='/dev/pts/1'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <log file='/var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e/console.log' append='off'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <target type='serial' port='0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='serial0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </console>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <input type='tablet' bus='usb'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='input0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='usb' bus='0' port='1'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <input type='mouse' bus='ps2'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='input1'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <input type='keyboard' bus='ps2'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='input2'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </input>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <listen type='address' address='::0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </graphics>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <audio id='1' type='none'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <model type='virtio' heads='1' primary='yes'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='video0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <watchdog model='itco' action='reset'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='watchdog0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </watchdog>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <memballoon model='virtio'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <stats period='10'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='balloon0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <rng model='virtio'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <backend model='random'>/dev/urandom</backend>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <alias name='rng0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <label>system_u:system_r:svirt_t:s0:c232,c554</label>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c232,c554</imagelabel>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  </seclabel>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <label>+107:+107</label>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <imagelabel>+107:+107</imagelabel>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  </seclabel>
Feb 25 07:53:23 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:53:23 np0005629333 nova_compute[244014]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.529 244018 WARNING nova.virt.libvirt.driver [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Detaching interface fa:16:3e:63:eb:19 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap77034e66-a3' not found.
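[editor's note] The WARNING above shows the detach path treating DeviceNotFound as a benign race: the tap device is already gone from the guest, so Nova warns and proceeds with host-side cleanup instead of failing the request. A rough sketch of that control flow, with illustrative names (guest and vif stand in for Nova's own objects):

    import logging

    log = logging.getLogger(__name__)

    class DeviceNotFound(Exception):
        """Stand-in for nova.exception.DeviceNotFound."""

    def detach_interface(guest, vif):
        cfg = guest.get_interface_by_cfg(vif)   # returns None when absent
        if cfg is None:
            # The port vanished under us (e.g. a concurrent delete); warn and
            # fall through to unplugging the host side instead of erroring.
            log.warning("Detaching interface %s failed because the device is "
                        "no longer found on the guest.", vif["address"])
            return
        guest.detach_device(cfg)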
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.530 244018 DEBUG nova.virt.libvirt.vif [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1151259173',display_name='tempest-TestNetworkBasicOps-server-1151259173',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1151259173',id=127,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuhKTGrJ6xoqtXbC9i6shxmOzzCAmEiPLvEhcBT9lMLvpNHET3NrmwNha38Zzx8OOcER4UhJ6EWWvnBqNIlR5/VZu+vuQ6n1q9c1LTSLn17vfclbttzgZoR8PFOeBob+Q==',key_name='tempest-TestNetworkBasicOps-1930796261',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:52:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ijtjj0g2',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:52:01Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=809da994-7551-4f52-8920-b0dfaa2ef73e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.530 244018 DEBUG nova.network.os_vif_util [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converting VIF {"id": "77034e66-a3e3-47d0-b467-29a045343530", "address": "fa:16:3e:63:eb:19", "network": {"id": "e68b4f5a-a28d-4155-93af-2997c1302403", "bridge": "br-int", "label": "tempest-network-smoke--2003604864", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77034e66-a3", "ovs_interfaceid": "77034e66-a3e3-47d0-b467-29a045343530", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.531 244018 DEBUG nova.network.os_vif_util [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:63:eb:19,bridge_name='br-int',has_traffic_filtering=True,id=77034e66-a3e3-47d0-b467-29a045343530,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77034e66-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
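[editor's note] The Converting/Converted pair above is nova_to_osvif_vif mapping the legacy VIF dict onto an os-vif VIFOpenVSwitch object. A trimmed sketch of that mapping, assuming the os_vif library is importable and keeping only the fields visible in the logged repr (the real converter also builds the Network and port-profile objects):

    from os_vif.objects import vif as vif_obj

    def nova_vif_to_osvif(d):
        # Field names mirror the VIFOpenVSwitch(...) repr logged above.
        return vif_obj.VIFOpenVSwitch(
            id=d["id"],
            address=d["address"],
            vif_name=d["devname"],
            bridge_name=d["details"]["bridge_name"],
            has_traffic_filtering=d["details"]["port_filter"],
            preserve_on_delete=d["preserve_on_delete"],
            active=d["active"],
        )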
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.531 244018 DEBUG os_vif [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:63:eb:19,bridge_name='br-int',has_traffic_filtering=True,id=77034e66-a3e3-47d0-b467-29a045343530,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77034e66-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.534 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.534 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77034e66-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.535 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.537 244018 INFO os_vif [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:63:eb:19,bridge_name='br-int',has_traffic_filtering=True,id=77034e66-a3e3-47d0-b467-29a045343530,network=Network(e68b4f5a-a28d-4155-93af-2997c1302403),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77034e66-a3')
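[editor's note] The ovsdbapp transaction above (DelPortCommand with if_exists=True) is deliberately idempotent, which is why deleting the already-removed port reports "Transaction caused no change" and the unplug still succeeds. The same semantics are available from the command line; a sketch using the ovs-vsctl --if-exists flag:

    import subprocess

    def unplug_ovs_port(bridge, port):
        # --if-exists makes the delete a no-op when the port is already gone,
        # mirroring DelPortCommand(if_exists=True) in the log above.
        subprocess.run(
            ["ovs-vsctl", "--if-exists", "del-port", bridge, port],
            check=True,
        )

    unplug_ovs_port("br-int", "tap77034e66-a3")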
Feb 25 07:53:23 np0005629333 nova_compute[244014]: 2026-02-25 12:53:23.539 244018 DEBUG nova.virt.libvirt.guest [req-78080cb4-6e52-4ae0-ad0c-1a86d032503f req-9a3d4348-a6ef-47b9-989c-c5fc35c1c359 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <nova:name>tempest-TestNetworkBasicOps-server-1151259173</nova:name>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <nova:creationTime>2026-02-25 12:53:23</nova:creationTime>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <nova:flavor name="m1.nano">
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <nova:memory>128</nova:memory>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <nova:disk>1</nova:disk>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <nova:swap>0</nova:swap>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <nova:vcpus>1</nova:vcpus>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  </nova:flavor>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <nova:owner>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  </nova:owner>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  <nova:ports>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    <nova:port uuid="4f59e1f7-f07c-48a1-82b4-b6a563a7130a">
Feb 25 07:53:23 np0005629333 nova_compute[244014]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:    </nova:port>
Feb 25 07:53:23 np0005629333 nova_compute[244014]:  </nova:ports>
Feb 25 07:53:23 np0005629333 nova_compute[244014]: </nova:instance>
Feb 25 07:53:23 np0005629333 nova_compute[244014]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
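[editor's note] set_metadata (guest.py:359) pushes the refreshed <nova:instance> document above, which now lists only the remaining port 4f59e1f7-..., into the domain's metadata. A sketch with the libvirt-python binding; the prefix key and the flag combination are assumptions, not taken from the log:

    import libvirt

    NOVA_NS = "http://openstack.org/xmlns/libvirt/nova/1.1"
    metadata_xml = "<instance>...</instance>"  # the nova:instance body above

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("809da994-7551-4f52-8920-b0dfaa2ef73e")
    # Replace the element registered under NOVA_NS on both the live and
    # persisted definitions ("nova" as the prefix key is an assumption).
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, metadata_xml,
                    "nova", NOVA_NS,
                    libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)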
Feb 25 07:53:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:53:24 np0005629333 nova_compute[244014]: 2026-02-25 12:53:24.328 244018 DEBUG oslo_concurrency.processutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:53:24 np0005629333 nova_compute[244014]: 2026-02-25 12:53:24.441 244018 DEBUG nova.storage.rbd_utils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image 03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
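[editor's note] The rbd import above seeds the new instance's root disk from the cached base image, and rbd_utils.resize then grows it to the flavor's 1 GiB. A sketch of the resize step with the Ceph python bindings, reusing the pool, client id, and byte size from the log:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            with rbd.Image(ioctx, "03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk") as image:
                image.resize(1073741824)  # bytes; 1 GiB, as logged
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()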
Feb 25 07:53:24 np0005629333 nova_compute[244014]: 2026-02-25 12:53:24.597 244018 DEBUG nova.compute.manager [req-45ea00d0-aafa-4e97-af68-bc21efef9439 req-0bbcbf11-6419-4952-b378-3c2a47b89dd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-vif-plugged-77034e66-a3e3-47d0-b467-29a045343530 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:53:24 np0005629333 nova_compute[244014]: 2026-02-25 12:53:24.598 244018 DEBUG oslo_concurrency.lockutils [req-45ea00d0-aafa-4e97-af68-bc21efef9439 req-0bbcbf11-6419-4952-b378-3c2a47b89dd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:53:24 np0005629333 nova_compute[244014]: 2026-02-25 12:53:24.598 244018 DEBUG oslo_concurrency.lockutils [req-45ea00d0-aafa-4e97-af68-bc21efef9439 req-0bbcbf11-6419-4952-b378-3c2a47b89dd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:53:24 np0005629333 nova_compute[244014]: 2026-02-25 12:53:24.599 244018 DEBUG oslo_concurrency.lockutils [req-45ea00d0-aafa-4e97-af68-bc21efef9439 req-0bbcbf11-6419-4952-b378-3c2a47b89dd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:53:24 np0005629333 nova_compute[244014]: 2026-02-25 12:53:24.599 244018 DEBUG nova.compute.manager [req-45ea00d0-aafa-4e97-af68-bc21efef9439 req-0bbcbf11-6419-4952-b378-3c2a47b89dd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] No waiting events found dispatching network-vif-plugged-77034e66-a3e3-47d0-b467-29a045343530 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:53:24 np0005629333 nova_compute[244014]: 2026-02-25 12:53:24.599 244018 WARNING nova.compute.manager [req-45ea00d0-aafa-4e97-af68-bc21efef9439 req-0bbcbf11-6419-4952-b378-3c2a47b89dd3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received unexpected event network-vif-plugged-77034e66-a3e3-47d0-b467-29a045343530 for instance with vm_state active and task_state None.#033[00m
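Note: the acquire/release pairs around pop_instance_event above come from oslo.concurrency's named in-process locks: event dispatch for one instance is serialized under a "<uuid>-events" lock, and each wait/hold duration is logged. A rough sketch of the pattern, assuming a plain list as the event store (the real InstanceEvents tracks registered waiters, which is why an event with no waiter produces the "No waiting events found" line and the WARNING above):

    from oslo_concurrency import lockutils

    INSTANCE_UUID = '809da994-7551-4f52-8920-b0dfaa2ef73e'

    def pop_instance_event(pending):
        # Serialize per-instance event handling under a named lock, mirroring
        # the "<uuid>-events" acquire/release lines in the log.
        with lockutils.lock(f'{INSTANCE_UUID}-events'):
            return pending.pop(0) if pending else None  # None -> no waiting events

    print(pop_instance_event([]))  # no waiter registered: prints None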
Feb 25 07:53:24 np0005629333 nova_compute[244014]: 2026-02-25 12:53:24.601 244018 DEBUG nova.network.neutron [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Successfully created port: f66ec19f-119c-4e9b-ba57-c8e2d850f5dc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:53:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2185: 305 pgs: 305 active+clean; 233 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 29 op/s
Feb 25 07:53:24 np0005629333 nova_compute[244014]: 2026-02-25 12:53:24.740 244018 DEBUG nova.objects.instance [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid 03d948e4-e7cb-45ea-bf63-7ab363b4d46e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:53:24 np0005629333 nova_compute[244014]: 2026-02-25 12:53:24.753 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:53:24 np0005629333 nova_compute[244014]: 2026-02-25 12:53:24.753 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Ensure instance console log exists: /var/lib/nova/instances/03d948e4-e7cb-45ea-bf63-7ab363b4d46e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:53:24 np0005629333 nova_compute[244014]: 2026-02-25 12:53:24.754 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:53:24 np0005629333 nova_compute[244014]: 2026-02-25 12:53:24.754 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:53:24 np0005629333 nova_compute[244014]: 2026-02-25 12:53:24.755 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:53:25 np0005629333 ovn_controller[147040]: 2026-02-25T12:53:25Z|01380|binding|INFO|Releasing lport 1599c73d-07eb-42e9-83bd-2cf546347a5b from this chassis (sb_readonly=0)
Feb 25 07:53:25 np0005629333 nova_compute[244014]: 2026-02-25 12:53:25.275 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:25 np0005629333 nova_compute[244014]: 2026-02-25 12:53:25.394 244018 INFO nova.network.neutron [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Port 77034e66-a3e3-47d0-b467-29a045343530 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Feb 25 07:53:25 np0005629333 nova_compute[244014]: 2026-02-25 12:53:25.394 244018 DEBUG nova.network.neutron [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updating instance_info_cache with network_info: [{"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
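Note: the network_info payload logged above is plain JSON, so the addressing it records can be read back by direct traversal: network -> subnets -> ips, with floating IPs nested under their fixed address. A small self-contained sketch over a trimmed copy of the logged entry:

    # Trimmed slice of the network_info entry logged at 12:53:25.394.
    vif = {"network": {"subnets": [{"cidr": "10.100.0.0/28",
           "ips": [{"address": "10.100.0.12",
                    "floating_ips": [{"address": "192.168.122.186"}]}]}]}}

    def addresses(vif):
        # Yield (fixed address, [floating addresses]) per IP allocation.
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                yield ip["address"], [f["address"] for f in ip.get("floating_ips", [])]

    print(list(addresses(vif)))  # [('10.100.0.12', ['192.168.122.186'])]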
Feb 25 07:53:25 np0005629333 nova_compute[244014]: 2026-02-25 12:53:25.429 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:25 np0005629333 nova_compute[244014]: 2026-02-25 12:53:25.452 244018 DEBUG oslo_concurrency.lockutils [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:53:25 np0005629333 nova_compute[244014]: 2026-02-25 12:53:25.481 244018 DEBUG oslo_concurrency.lockutils [None req-0042f329-6a00-40f4-9244-35f680f8bc00 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "interface-809da994-7551-4f52-8920-b0dfaa2ef73e-77034e66-a3e3-47d0-b467-29a045343530" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.484s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:53:25 np0005629333 nova_compute[244014]: 2026-02-25 12:53:25.795 244018 DEBUG nova.network.neutron [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Successfully updated port: f66ec19f-119c-4e9b-ba57-c8e2d850f5dc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:53:25 np0005629333 nova_compute[244014]: 2026-02-25 12:53:25.810 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:53:25 np0005629333 nova_compute[244014]: 2026-02-25 12:53:25.810 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:53:25 np0005629333 nova_compute[244014]: 2026-02-25 12:53:25.810 244018 DEBUG nova.network.neutron [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:53:25 np0005629333 nova_compute[244014]: 2026-02-25 12:53:25.882 244018 DEBUG nova.compute.manager [req-33ac74ac-fec9-4a38-a04d-b76759d279b4 req-53610533-030a-48d6-9e5c-8f1c4e6a316e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Received event network-changed-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:53:25 np0005629333 nova_compute[244014]: 2026-02-25 12:53:25.883 244018 DEBUG nova.compute.manager [req-33ac74ac-fec9-4a38-a04d-b76759d279b4 req-53610533-030a-48d6-9e5c-8f1c4e6a316e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Refreshing instance network info cache due to event network-changed-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:53:25 np0005629333 nova_compute[244014]: 2026-02-25 12:53:25.883 244018 DEBUG oslo_concurrency.lockutils [req-33ac74ac-fec9-4a38-a04d-b76759d279b4 req-53610533-030a-48d6-9e5c-8f1c4e6a316e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.006 244018 DEBUG nova.network.neutron [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.157 244018 DEBUG oslo_concurrency.lockutils [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "809da994-7551-4f52-8920-b0dfaa2ef73e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.158 244018 DEBUG oslo_concurrency.lockutils [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.158 244018 DEBUG oslo_concurrency.lockutils [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.159 244018 DEBUG oslo_concurrency.lockutils [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.159 244018 DEBUG oslo_concurrency.lockutils [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.161 244018 INFO nova.compute.manager [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Terminating instance#033[00m
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.163 244018 DEBUG nova.compute.manager [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:53:26 np0005629333 kernel: tap4f59e1f7-f0 (unregistering): left promiscuous mode
Feb 25 07:53:26 np0005629333 NetworkManager[49836]: <info>  [1772024006.2204] device (tap4f59e1f7-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.228 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:53:26Z|01381|binding|INFO|Releasing lport 4f59e1f7-f07c-48a1-82b4-b6a563a7130a from this chassis (sb_readonly=0)
Feb 25 07:53:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:53:26Z|01382|binding|INFO|Setting lport 4f59e1f7-f07c-48a1-82b4-b6a563a7130a down in Southbound
Feb 25 07:53:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:53:26Z|01383|binding|INFO|Removing iface tap4f59e1f7-f0 ovn-installed in OVS
Feb 25 07:53:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:26.239 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:63:14 10.100.0.12'], port_security=['fa:16:3e:c3:63:14 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '809da994-7551-4f52-8920-b0dfaa2ef73e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-526ae63c-3640-4e70-a308-56e7a67e4cf2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3e3f0007-e379-4ef5-bc52-5669e937e826', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9099fb32-6839-4ebe-bdc3-ec67cc06d180, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=4f59e1f7-f07c-48a1-82b4-b6a563a7130a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:53:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:26.241 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 4f59e1f7-f07c-48a1-82b4-b6a563a7130a in datapath 526ae63c-3640-4e70-a308-56e7a67e4cf2 unbound from our chassis#033[00m
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.242 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:26.243 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 526ae63c-3640-4e70-a308-56e7a67e4cf2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:53:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:26.244 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[50415736-f589-4fb8-a320-bc1d6daf85f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:26.244 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2 namespace which is not needed anymore#033[00m
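Note: the "Matched UPDATE: PortBindingUpdatedEvent(...)" line shows the mechanism behind this teardown: the metadata agent watches Port_Binding rows in the OVN southbound DB, and a row whose chassis empties out means the port left this host; here that leaves datapath 526ae63c with no VIFs, so the ovnmeta namespace is removed. A hedged sketch of such a watcher using ovsdbapp's row-event machinery (the class name and handler are illustrative, not neutron's actual implementation):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortUnboundEvent(row_event.RowEvent):
        """Fire when a Port_Binding row loses its chassis (illustrative)."""

        def __init__(self, on_unbound):
            # Same shape as the matched event in the log:
            # events=('update',), table='Port_Binding', no column conditions.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.on_unbound = on_unbound

        def run(self, event, row, old):
            # 'old' carries previous values of the changed columns; a chassis
            # that was set and is now empty means the port unbound from us,
            # matching old=Port_Binding(up=[True], chassis=[...]) above.
            if getattr(old, 'chassis', None) and not row.chassis:
                self.on_unbound(row.logical_port)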
Feb 25 07:53:26 np0005629333 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Feb 25 07:53:26 np0005629333 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d0000007f.scope: Consumed 15.819s CPU time.
Feb 25 07:53:26 np0005629333 systemd-machined[210048]: Machine qemu-159-instance-0000007f terminated.
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.381 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.386 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:26 np0005629333 neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2[360019]: [NOTICE]   (360023) : haproxy version is 2.8.14-c23fe91
Feb 25 07:53:26 np0005629333 neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2[360019]: [NOTICE]   (360023) : path to executable is /usr/sbin/haproxy
Feb 25 07:53:26 np0005629333 neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2[360019]: [WARNING]  (360023) : Exiting Master process...
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.398 244018 INFO nova.virt.libvirt.driver [-] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Instance destroyed successfully.#033[00m
Feb 25 07:53:26 np0005629333 neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2[360019]: [ALERT]    (360023) : Current worker (360025) exited with code 143 (Terminated)
Feb 25 07:53:26 np0005629333 neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2[360019]: [WARNING]  (360023) : All workers exited. Exiting... (0)
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.399 244018 DEBUG nova.objects.instance [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'resources' on Instance uuid 809da994-7551-4f52-8920-b0dfaa2ef73e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:53:26 np0005629333 systemd[1]: libpod-ecfea8e647a5e0a1222cbb98f4b3892ab54ecda1af4c00fe810aec476144a552.scope: Deactivated successfully.
Feb 25 07:53:26 np0005629333 podman[362227]: 2026-02-25 12:53:26.40820487 +0000 UTC m=+0.058551574 container died ecfea8e647a5e0a1222cbb98f4b3892ab54ecda1af4c00fe810aec476144a552 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.417 244018 DEBUG nova.virt.libvirt.vif [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1151259173',display_name='tempest-TestNetworkBasicOps-server-1151259173',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1151259173',id=127,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCuhKTGrJ6xoqtXbC9i6shxmOzzCAmEiPLvEhcBT9lMLvpNHET3NrmwNha38Zzx8OOcER4UhJ6EWWvnBqNIlR5/VZu+vuQ6n1q9c1LTSLn17vfclbttzgZoR8PFOeBob+Q==',key_name='tempest-TestNetworkBasicOps-1930796261',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:52:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ijtjj0g2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:52:01Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=809da994-7551-4f52-8920-b0dfaa2ef73e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.417 244018 DEBUG nova.network.os_vif_util [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.418 244018 DEBUG nova.network.os_vif_util [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c3:63:14,bridge_name='br-int',has_traffic_filtering=True,id=4f59e1f7-f07c-48a1-82b4-b6a563a7130a,network=Network(526ae63c-3640-4e70-a308-56e7a67e4cf2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f59e1f7-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.418 244018 DEBUG os_vif [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:63:14,bridge_name='br-int',has_traffic_filtering=True,id=4f59e1f7-f07c-48a1-82b4-b6a563a7130a,network=Network(526ae63c-3640-4e70-a308-56e7a67e4cf2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f59e1f7-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.420 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.421 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f59e1f7-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.425 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.428 244018 INFO os_vif [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:63:14,bridge_name='br-int',has_traffic_filtering=True,id=4f59e1f7-f07c-48a1-82b4-b6a563a7130a,network=Network(526ae63c-3640-4e70-a308-56e7a67e4cf2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f59e1f7-f0')#033[00m
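Note: the unplug above is a single OVSDB transaction: os-vif queues a DelPortCommand (port tap4f59e1f7-f0, bridge br-int, if_exists=True) and commits it. Roughly the same call through ovsdbapp's Open_vSwitch API looks like this; the manager socket address is an assumption (os-vif reads its ovsdb connection string from config):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local ovsdb-server (address is an assumption).
    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Equivalent of the logged DelPortCommand: idempotent port removal.
    api.del_port('tap4f59e1f7-f0', bridge='br-int',
                 if_exists=True).execute(check_error=True)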
Feb 25 07:53:26 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ecfea8e647a5e0a1222cbb98f4b3892ab54ecda1af4c00fe810aec476144a552-userdata-shm.mount: Deactivated successfully.
Feb 25 07:53:26 np0005629333 systemd[1]: var-lib-containers-storage-overlay-8f39925cd20db3978a9556b803bb9da1be3764b7cb84297a2c2d11b4e6c0059d-merged.mount: Deactivated successfully.
Feb 25 07:53:26 np0005629333 podman[362227]: 2026-02-25 12:53:26.465629792 +0000 UTC m=+0.115976486 container cleanup ecfea8e647a5e0a1222cbb98f4b3892ab54ecda1af4c00fe810aec476144a552 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 25 07:53:26 np0005629333 systemd[1]: libpod-conmon-ecfea8e647a5e0a1222cbb98f4b3892ab54ecda1af4c00fe810aec476144a552.scope: Deactivated successfully.
Feb 25 07:53:26 np0005629333 podman[362283]: 2026-02-25 12:53:26.552243877 +0000 UTC m=+0.056353932 container remove ecfea8e647a5e0a1222cbb98f4b3892ab54ecda1af4c00fe810aec476144a552 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223)
Feb 25 07:53:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:26.559 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[878c39f5-a03b-4284-b262-890e2992a8b3]: (4, ('Wed Feb 25 12:53:26 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2 (ecfea8e647a5e0a1222cbb98f4b3892ab54ecda1af4c00fe810aec476144a552)\necfea8e647a5e0a1222cbb98f4b3892ab54ecda1af4c00fe810aec476144a552\nWed Feb 25 12:53:26 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2 (ecfea8e647a5e0a1222cbb98f4b3892ab54ecda1af4c00fe810aec476144a552)\necfea8e647a5e0a1222cbb98f4b3892ab54ecda1af4c00fe810aec476144a552\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
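Note: the privsep reply above carries the output of the container kill script: the per-network haproxy that proxies instance metadata requests is stopped and then deleted. Functionally that amounts to the following two podman invocations, here by container name rather than the hash the script logs; podman stop sends SIGTERM, which is why haproxy reports "exited with code 143" above:

    import subprocess

    NAME = 'neutron-haproxy-ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2'

    # Stop (SIGTERM, then SIGKILL after a grace period), then remove.
    subprocess.run(['podman', 'stop', NAME], check=True)
    subprocess.run(['podman', 'rm', NAME], check=True)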
Feb 25 07:53:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:26.562 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3e6d9b48-3e15-4e73-8cbf-16756de1229c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:26.563 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap526ae63c-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:53:26 np0005629333 kernel: tap526ae63c-30: left promiscuous mode
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.567 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.572 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:26.576 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[47b5f27b-39d9-4203-a8c8-f765c8d1c753]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:26.588 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c0b7e61e-8faa-4b2a-b1c4-9a5f2f112afc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:26.589 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9a8e1bb8-401a-4d09-a076-92f39ccd3c68]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:26.602 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b96d3336-412e-4682-9276-6a310b613b3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588362, 'reachable_time': 24275, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362301, 'error': None, 'target': 'ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:26 np0005629333 systemd[1]: run-netns-ovnmeta\x2d526ae63c\x2d3640\x2d4e70\x2da308\x2d56e7a67e4cf2.mount: Deactivated successfully.
Feb 25 07:53:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:26.605 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:53:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:26.605 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[68fa93cd-a89b-4d6f-bed3-0bdb23fbb130]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
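Note: remove_netns runs inside the privsep daemon because deleting a network namespace needs elevated capabilities; the unprivileged agent only sees the replies logged above. Neutron's privileged ip_lib is built on pyroute2, so the operation reduces to approximately the following sketch (idempotence mirroring the if_exists-style behavior the agent relies on):

    from pyroute2 import netns

    NS = 'ovnmeta-526ae63c-3640-4e70-a308-56e7a67e4cf2'

    # Only unlink the namespace if it still exists on this host.
    if NS in netns.listnetns():
        netns.remove(NS)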
Feb 25 07:53:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 305 active+clean; 253 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.0 MiB/s wr, 43 op/s
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.696 244018 INFO nova.virt.libvirt.driver [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Deleting instance files /var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e_del#033[00m
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.697 244018 INFO nova.virt.libvirt.driver [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Deletion of /var/lib/nova/instances/809da994-7551-4f52-8920-b0dfaa2ef73e_del complete#033[00m
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.750 244018 DEBUG nova.compute.manager [req-32987a75-04b4-43a1-a78c-2deb5e6abe0b req-77a9b897-4575-411e-9931-dff32f027142 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-changed-4f59e1f7-f07c-48a1-82b4-b6a563a7130a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.751 244018 DEBUG nova.compute.manager [req-32987a75-04b4-43a1-a78c-2deb5e6abe0b req-77a9b897-4575-411e-9931-dff32f027142 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Refreshing instance network info cache due to event network-changed-4f59e1f7-f07c-48a1-82b4-b6a563a7130a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.752 244018 DEBUG oslo_concurrency.lockutils [req-32987a75-04b4-43a1-a78c-2deb5e6abe0b req-77a9b897-4575-411e-9931-dff32f027142 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.752 244018 DEBUG oslo_concurrency.lockutils [req-32987a75-04b4-43a1-a78c-2deb5e6abe0b req-77a9b897-4575-411e-9931-dff32f027142 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.753 244018 DEBUG nova.network.neutron [req-32987a75-04b4-43a1-a78c-2deb5e6abe0b req-77a9b897-4575-411e-9931-dff32f027142 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Refreshing network info cache for port 4f59e1f7-f07c-48a1-82b4-b6a563a7130a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.773 244018 INFO nova.compute.manager [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Took 0.61 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.773 244018 DEBUG oslo.service.loopingcall [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.774 244018 DEBUG nova.compute.manager [-] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:53:26 np0005629333 nova_compute[244014]: 2026-02-25 12:53:26.775 244018 DEBUG nova.network.neutron [-] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:53:27 np0005629333 nova_compute[244014]: 2026-02-25 12:53:27.923 244018 DEBUG nova.network.neutron [-] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:53:27 np0005629333 nova_compute[244014]: 2026-02-25 12:53:27.947 244018 INFO nova.compute.manager [-] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Took 1.17 seconds to deallocate network for instance.#033[00m
Feb 25 07:53:27 np0005629333 nova_compute[244014]: 2026-02-25 12:53:27.992 244018 DEBUG oslo_concurrency.lockutils [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:53:27 np0005629333 nova_compute[244014]: 2026-02-25 12:53:27.993 244018 DEBUG oslo_concurrency.lockutils [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.001 244018 DEBUG nova.compute.manager [req-5955bd45-c84d-4d0e-aa67-79a90d170dd1 req-8e073b55-079f-40b2-bf7a-01c91ca90912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-vif-unplugged-4f59e1f7-f07c-48a1-82b4-b6a563a7130a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.001 244018 DEBUG oslo_concurrency.lockutils [req-5955bd45-c84d-4d0e-aa67-79a90d170dd1 req-8e073b55-079f-40b2-bf7a-01c91ca90912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.002 244018 DEBUG oslo_concurrency.lockutils [req-5955bd45-c84d-4d0e-aa67-79a90d170dd1 req-8e073b55-079f-40b2-bf7a-01c91ca90912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.003 244018 DEBUG oslo_concurrency.lockutils [req-5955bd45-c84d-4d0e-aa67-79a90d170dd1 req-8e073b55-079f-40b2-bf7a-01c91ca90912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.003 244018 DEBUG nova.compute.manager [req-5955bd45-c84d-4d0e-aa67-79a90d170dd1 req-8e073b55-079f-40b2-bf7a-01c91ca90912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] No waiting events found dispatching network-vif-unplugged-4f59e1f7-f07c-48a1-82b4-b6a563a7130a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.004 244018 WARNING nova.compute.manager [req-5955bd45-c84d-4d0e-aa67-79a90d170dd1 req-8e073b55-079f-40b2-bf7a-01c91ca90912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received unexpected event network-vif-unplugged-4f59e1f7-f07c-48a1-82b4-b6a563a7130a for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.004 244018 DEBUG nova.compute.manager [req-5955bd45-c84d-4d0e-aa67-79a90d170dd1 req-8e073b55-079f-40b2-bf7a-01c91ca90912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-vif-plugged-4f59e1f7-f07c-48a1-82b4-b6a563a7130a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.005 244018 DEBUG oslo_concurrency.lockutils [req-5955bd45-c84d-4d0e-aa67-79a90d170dd1 req-8e073b55-079f-40b2-bf7a-01c91ca90912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.006 244018 DEBUG oslo_concurrency.lockutils [req-5955bd45-c84d-4d0e-aa67-79a90d170dd1 req-8e073b55-079f-40b2-bf7a-01c91ca90912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.006 244018 DEBUG oslo_concurrency.lockutils [req-5955bd45-c84d-4d0e-aa67-79a90d170dd1 req-8e073b55-079f-40b2-bf7a-01c91ca90912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.006 244018 DEBUG nova.compute.manager [req-5955bd45-c84d-4d0e-aa67-79a90d170dd1 req-8e073b55-079f-40b2-bf7a-01c91ca90912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] No waiting events found dispatching network-vif-plugged-4f59e1f7-f07c-48a1-82b4-b6a563a7130a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.007 244018 WARNING nova.compute.manager [req-5955bd45-c84d-4d0e-aa67-79a90d170dd1 req-8e073b55-079f-40b2-bf7a-01c91ca90912 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received unexpected event network-vif-plugged-4f59e1f7-f07c-48a1-82b4-b6a563a7130a for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.074 244018 DEBUG oslo_concurrency.processutils [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
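Note: as part of updating resource usage after the delete, Nova shells out to `ceph df --format=json` with the same credentials as the earlier rbd commands and reads cluster and per-pool usage from the JSON. Reproducing that check directly, using the top-level keys the ceph df JSON report exposes:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    report = json.loads(out)

    # 'stats' holds cluster totals; 'pools' the per-pool breakdown (e.g. vms).
    print(report['stats']['total_avail_bytes'])
    print([pool['name'] for pool in report['pools']])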
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.350 244018 DEBUG nova.network.neutron [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Updating instance_info_cache with network_info: [{"id": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "address": "fa:16:3e:d9:be:ae", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66ec19f-11", "ovs_interfaceid": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.369 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.370 244018 DEBUG nova.compute.manager [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Instance network_info: |[{"id": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "address": "fa:16:3e:d9:be:ae", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66ec19f-11", "ovs_interfaceid": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.371 244018 DEBUG oslo_concurrency.lockutils [req-33ac74ac-fec9-4a38-a04d-b76759d279b4 req-53610533-030a-48d6-9e5c-8f1c4e6a316e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.371 244018 DEBUG nova.network.neutron [req-33ac74ac-fec9-4a38-a04d-b76759d279b4 req-53610533-030a-48d6-9e5c-8f1c4e6a316e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Refreshing network info cache for port f66ec19f-119c-4e9b-ba57-c8e2d850f5dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.379 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Start _get_guest_xml network_info=[{"id": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "address": "fa:16:3e:d9:be:ae", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66ec19f-11", "ovs_interfaceid": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.386 244018 WARNING nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.397 244018 DEBUG nova.virt.libvirt.host [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.398 244018 DEBUG nova.virt.libvirt.host [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.402 244018 DEBUG nova.virt.libvirt.host [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.403 244018 DEBUG nova.virt.libvirt.host [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.404 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.404 244018 DEBUG nova.virt.hardware [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.405 244018 DEBUG nova.virt.hardware [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.406 244018 DEBUG nova.virt.hardware [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.407 244018 DEBUG nova.virt.hardware [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.407 244018 DEBUG nova.virt.hardware [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.408 244018 DEBUG nova.virt.hardware [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.408 244018 DEBUG nova.virt.hardware [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.409 244018 DEBUG nova.virt.hardware [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.410 244018 DEBUG nova.virt.hardware [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.410 244018 DEBUG nova.virt.hardware [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.411 244018 DEBUG nova.virt.hardware [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.417 244018 DEBUG oslo_concurrency.processutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.458 244018 DEBUG nova.network.neutron [req-32987a75-04b4-43a1-a78c-2deb5e6abe0b req-77a9b897-4575-411e-9931-dff32f027142 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updated VIF entry in instance network info cache for port 4f59e1f7-f07c-48a1-82b4-b6a563a7130a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.460 244018 DEBUG nova.network.neutron [req-32987a75-04b4-43a1-a78c-2deb5e6abe0b req-77a9b897-4575-411e-9931-dff32f027142 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Updating instance_info_cache with network_info: [{"id": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "address": "fa:16:3e:c3:63:14", "network": {"id": "526ae63c-3640-4e70-a308-56e7a67e4cf2", "bridge": "br-int", "label": "tempest-network-smoke--2065954569", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f59e1f7-f0", "ovs_interfaceid": "4f59e1f7-f07c-48a1-82b4-b6a563a7130a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.487 244018 DEBUG oslo_concurrency.lockutils [req-32987a75-04b4-43a1-a78c-2deb5e6abe0b req-77a9b897-4575-411e-9931-dff32f027142 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-809da994-7551-4f52-8920-b0dfaa2ef73e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:53:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:53:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2940924392' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:53:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 305 active+clean; 223 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.655 244018 DEBUG oslo_concurrency.processutils [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.663 244018 DEBUG nova.compute.provider_tree [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:53:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.681 244018 DEBUG nova.scheduler.client.report [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.709 244018 DEBUG oslo_concurrency.lockutils [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.734 244018 INFO nova.scheduler.client.report [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Deleted allocations for instance 809da994-7551-4f52-8920-b0dfaa2ef73e#033[00m
Feb 25 07:53:28 np0005629333 nova_compute[244014]: 2026-02-25 12:53:28.793 244018 DEBUG oslo_concurrency.lockutils [None req-0ccab973-a553-4f64-9a10-11d7e7f51d56 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "809da994-7551-4f52-8920-b0dfaa2ef73e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:53:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:53:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2461866324' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.022 244018 DEBUG oslo_concurrency.processutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.053 244018 DEBUG nova.storage.rbd_utils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.057 244018 DEBUG oslo_concurrency.processutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:53:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:53:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3696651289' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.610 244018 DEBUG oslo_concurrency.processutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.612 244018 DEBUG nova.virt.libvirt.vif [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:53:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1355608326',display_name='tempest-TestGettingAddress-server-1355608326',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1355608326',id=130,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL/SVWuEVaf+Py7xASyHmNYfi29LBzumkZB6ldUzxX0Dxybzo1LoSeFAK9YF8BeK/8cHma4lQxEX6N+N2g4I6aGP/1FtZuhCE2GUlMamVs4LR8ITh1z/8qYyUh+iaduJ8A==',key_name='tempest-TestGettingAddress-2128145461',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-0ah75or2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:53:22Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=03d948e4-e7cb-45ea-bf63-7ab363b4d46e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "address": "fa:16:3e:d9:be:ae", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66ec19f-11", "ovs_interfaceid": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.613 244018 DEBUG nova.network.os_vif_util [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "address": "fa:16:3e:d9:be:ae", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66ec19f-11", "ovs_interfaceid": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.615 244018 DEBUG nova.network.os_vif_util [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:be:ae,bridge_name='br-int',has_traffic_filtering=True,id=f66ec19f-119c-4e9b-ba57-c8e2d850f5dc,network=Network(88562c34-222a-439a-b444-9e6f8a6d70cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66ec19f-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.617 244018 DEBUG nova.objects.instance [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 03d948e4-e7cb-45ea-bf63-7ab363b4d46e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.655 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:53:29 np0005629333 nova_compute[244014]:  <uuid>03d948e4-e7cb-45ea-bf63-7ab363b4d46e</uuid>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:  <name>instance-00000082</name>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestGettingAddress-server-1355608326</nova:name>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:53:28</nova:creationTime>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:53:29 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:        <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:        <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:        <nova:port uuid="f66ec19f-119c-4e9b-ba57-c8e2d850f5dc">
Feb 25 07:53:29 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fed9:beae" ipVersion="6"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fed9:beae" ipVersion="6"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <entry name="serial">03d948e4-e7cb-45ea-bf63-7ab363b4d46e</entry>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <entry name="uuid">03d948e4-e7cb-45ea-bf63-7ab363b4d46e</entry>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk">
Feb 25 07:53:29 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:53:29 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk.config">
Feb 25 07:53:29 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:53:29 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:d9:be:ae"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <target dev="tapf66ec19f-11"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/03d948e4-e7cb-45ea-bf63-7ab363b4d46e/console.log" append="off"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:53:29 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:53:29 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:53:29 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:53:29 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.656 244018 DEBUG nova.compute.manager [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Preparing to wait for external event network-vif-plugged-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.656 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.657 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.657 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.658 244018 DEBUG nova.virt.libvirt.vif [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:53:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1355608326',display_name='tempest-TestGettingAddress-server-1355608326',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1355608326',id=130,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL/SVWuEVaf+Py7xASyHmNYfi29LBzumkZB6ldUzxX0Dxybzo1LoSeFAK9YF8BeK/8cHma4lQxEX6N+N2g4I6aGP/1FtZuhCE2GUlMamVs4LR8ITh1z/8qYyUh+iaduJ8A==',key_name='tempest-TestGettingAddress-2128145461',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-0ah75or2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:53:22Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=03d948e4-e7cb-45ea-bf63-7ab363b4d46e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "address": "fa:16:3e:d9:be:ae", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66ec19f-11", "ovs_interfaceid": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.659 244018 DEBUG nova.network.os_vif_util [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "address": "fa:16:3e:d9:be:ae", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66ec19f-11", "ovs_interfaceid": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.660 244018 DEBUG nova.network.os_vif_util [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:be:ae,bridge_name='br-int',has_traffic_filtering=True,id=f66ec19f-119c-4e9b-ba57-c8e2d850f5dc,network=Network(88562c34-222a-439a-b444-9e6f8a6d70cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66ec19f-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.661 244018 DEBUG os_vif [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:be:ae,bridge_name='br-int',has_traffic_filtering=True,id=f66ec19f-119c-4e9b-ba57-c8e2d850f5dc,network=Network(88562c34-222a-439a-b444-9e6f8a6d70cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66ec19f-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.661 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.662 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.663 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.666 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.667 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf66ec19f-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.667 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf66ec19f-11, col_values=(('external_ids', {'iface-id': 'f66ec19f-119c-4e9b-ba57-c8e2d850f5dc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:be:ae', 'vm-uuid': '03d948e4-e7cb-45ea-bf63-7ab363b4d46e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.669 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:29 np0005629333 NetworkManager[49836]: <info>  [1772024009.6708] manager: (tapf66ec19f-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/571)
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.673 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.676 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.678 244018 INFO os_vif [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:be:ae,bridge_name='br-int',has_traffic_filtering=True,id=f66ec19f-119c-4e9b-ba57-c8e2d850f5dc,network=Network(88562c34-222a-439a-b444-9e6f8a6d70cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66ec19f-11')#033[00m
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.765 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.766 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.766 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:d9:be:ae, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.767 244018 INFO nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Using config drive#033[00m
Feb 25 07:53:29 np0005629333 nova_compute[244014]: 2026-02-25 12:53:29.794 244018 DEBUG nova.storage.rbd_utils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:53:30 np0005629333 nova_compute[244014]: 2026-02-25 12:53:30.080 244018 DEBUG nova.compute.manager [req-a97a8de4-2009-441d-9995-e54ee8bc8c98 req-97c0e964-a46d-4edc-b98a-d03f0ca0d28c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Received event network-vif-deleted-4f59e1f7-f07c-48a1-82b4-b6a563a7130a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:53:30 np0005629333 nova_compute[244014]: 2026-02-25 12:53:30.081 244018 INFO nova.compute.manager [req-a97a8de4-2009-441d-9995-e54ee8bc8c98 req-97c0e964-a46d-4edc-b98a-d03f0ca0d28c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Neutron deleted interface 4f59e1f7-f07c-48a1-82b4-b6a563a7130a; detaching it from the instance and deleting it from the info cache#033[00m
Feb 25 07:53:30 np0005629333 nova_compute[244014]: 2026-02-25 12:53:30.081 244018 DEBUG nova.network.neutron [req-a97a8de4-2009-441d-9995-e54ee8bc8c98 req-97c0e964-a46d-4edc-b98a-d03f0ca0d28c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Feb 25 07:53:30 np0005629333 nova_compute[244014]: 2026-02-25 12:53:30.086 244018 DEBUG nova.compute.manager [req-a97a8de4-2009-441d-9995-e54ee8bc8c98 req-97c0e964-a46d-4edc-b98a-d03f0ca0d28c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Detach interface failed, port_id=4f59e1f7-f07c-48a1-82b4-b6a563a7130a, reason: Instance 809da994-7551-4f52-8920-b0dfaa2ef73e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 25 07:53:30 np0005629333 nova_compute[244014]: 2026-02-25 12:53:30.256 244018 INFO nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Creating config drive at /var/lib/nova/instances/03d948e4-e7cb-45ea-bf63-7ab363b4d46e/disk.config#033[00m
Feb 25 07:53:30 np0005629333 nova_compute[244014]: 2026-02-25 12:53:30.262 244018 DEBUG oslo_concurrency.processutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/03d948e4-e7cb-45ea-bf63-7ab363b4d46e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpv4jek6l8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:53:30 np0005629333 nova_compute[244014]: 2026-02-25 12:53:30.409 244018 DEBUG oslo_concurrency.processutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/03d948e4-e7cb-45ea-bf63-7ab363b4d46e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpv4jek6l8" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
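The config drive itself is built by shelling out to mkisofs through oslo.concurrency, exactly as the two CMD lines show. A sketch reproducing the logged call with processutils; the output path and /tmp/tmpv4jek6l8 (the staged metadata tree) are the values from this log and would differ per instance. Note the publisher string is a single argv element, which is why it appears unquoted when oslo joins the argv with spaces for logging:

    from oslo_concurrency import processutils

    # The logged mkisofs invocation, reconstructed argument by argument.
    out, err = processutils.execute(
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/03d948e4-e7cb-45ea-bf63-7ab363b4d46e/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmpv4jek6l8')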
Feb 25 07:53:30 np0005629333 nova_compute[244014]: 2026-02-25 12:53:30.448 244018 DEBUG nova.storage.rbd_utils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:53:30 np0005629333 nova_compute[244014]: 2026-02-25 12:53:30.453 244018 DEBUG oslo_concurrency.processutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/03d948e4-e7cb-45ea-bf63-7ab363b4d46e/disk.config 03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:53:30 np0005629333 nova_compute[244014]: 2026-02-25 12:53:30.490 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:30 np0005629333 nova_compute[244014]: 2026-02-25 12:53:30.612 244018 DEBUG nova.network.neutron [req-33ac74ac-fec9-4a38-a04d-b76759d279b4 req-53610533-030a-48d6-9e5c-8f1c4e6a316e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Updated VIF entry in instance network info cache for port f66ec19f-119c-4e9b-ba57-c8e2d850f5dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:53:30 np0005629333 nova_compute[244014]: 2026-02-25 12:53:30.613 244018 DEBUG nova.network.neutron [req-33ac74ac-fec9-4a38-a04d-b76759d279b4 req-53610533-030a-48d6-9e5c-8f1c4e6a316e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Updating instance_info_cache with network_info: [{"id": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "address": "fa:16:3e:d9:be:ae", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66ec19f-11", "ovs_interfaceid": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
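The network_info blob logged above is plain JSON: everything nova caches about the VIF (MAC fa:16:3e:d9:be:ae, two SLAAC v6 subnets plus one v4 subnet, and the OVN binding details) is in it. A small runnable sketch of pulling the fixed addresses out of a mirror of that structure, trimmed to just the fields used here:

    # Minimal mirror of the logged network_info entry; the addresses are
    # the ones in the cache update above.
    vif = {
        'id': 'f66ec19f-119c-4e9b-ba57-c8e2d850f5dc',
        'network': {'subnets': [
            {'ips': [{'address': '2001:db8::f816:3eff:fed9:beae'}]},
            {'ips': [{'address': '2001:db8:0:1:f816:3eff:fed9:beae'}]},
            {'ips': [{'address': '10.100.0.3'}]},
        ]},
    }
    ips = [ip['address']
           for subnet in vif['network']['subnets']
           for ip in subnet['ips']]
    print(ips)  # one address per v6 subnet plus the fixed v4 address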
Feb 25 07:53:30 np0005629333 nova_compute[244014]: 2026-02-25 12:53:30.637 244018 DEBUG oslo_concurrency.lockutils [req-33ac74ac-fec9-4a38-a04d-b76759d279b4 req-53610533-030a-48d6-9e5c-8f1c4e6a316e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:53:30 np0005629333 nova_compute[244014]: 2026-02-25 12:53:30.644 244018 DEBUG oslo_concurrency.processutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/03d948e4-e7cb-45ea-bf63-7ab363b4d46e/disk.config 03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:53:30 np0005629333 nova_compute[244014]: 2026-02-25 12:53:30.645 244018 INFO nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Deleting local config drive /var/lib/nova/instances/03d948e4-e7cb-45ea-bf63-7ab363b4d46e/disk.config because it was imported into RBD.#033[00m
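Because this deployment keeps instance disks in the Ceph 'vms' pool, the freshly built ISO is immediately imported into RBD and the local copy removed, per the two lines above. The same two steps as a sketch, with the command line taken verbatim from the log:

    import os
    from oslo_concurrency import processutils

    local = ('/var/lib/nova/instances/'
             '03d948e4-e7cb-45ea-bf63-7ab363b4d46e/disk.config')

    # Import the ISO into the 'vms' pool, then drop the local copy;
    # the same two steps the log records at 12:53:30.453 and 12:53:30.645.
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', local,
        '03d948e4-e7cb-45ea-bf63-7ab363b4d46e_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
    os.unlink(local)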
Feb 25 07:53:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2188: 305 pgs: 305 active+clean; 223 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Feb 25 07:53:30 np0005629333 kernel: tapf66ec19f-11: entered promiscuous mode
Feb 25 07:53:30 np0005629333 NetworkManager[49836]: <info>  [1772024010.7067] manager: (tapf66ec19f-11): new Tun device (/org/freedesktop/NetworkManager/Devices/572)
Feb 25 07:53:30 np0005629333 ovn_controller[147040]: 2026-02-25T12:53:30Z|01384|binding|INFO|Claiming lport f66ec19f-119c-4e9b-ba57-c8e2d850f5dc for this chassis.
Feb 25 07:53:30 np0005629333 nova_compute[244014]: 2026-02-25 12:53:30.708 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:30 np0005629333 ovn_controller[147040]: 2026-02-25T12:53:30Z|01385|binding|INFO|f66ec19f-119c-4e9b-ba57-c8e2d850f5dc: Claiming fa:16:3e:d9:be:ae 10.100.0.3 2001:db8:0:1:f816:3eff:fed9:beae 2001:db8::f816:3eff:fed9:beae
Feb 25 07:53:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.718 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:be:ae 10.100.0.3 2001:db8:0:1:f816:3eff:fed9:beae 2001:db8::f816:3eff:fed9:beae'], port_security=['fa:16:3e:d9:be:ae 10.100.0.3 2001:db8:0:1:f816:3eff:fed9:beae 2001:db8::f816:3eff:fed9:beae'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8:0:1:f816:3eff:fed9:beae/64 2001:db8::f816:3eff:fed9:beae/64', 'neutron:device_id': '03d948e4-e7cb-45ea-bf63-7ab363b4d46e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88562c34-222a-439a-b444-9e6f8a6d70cd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cd4482ae-906b-42b8-8dd6-2eff099d2f11', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1dcb5c12-7c78-49d6-b14f-b2c48c93839d, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=f66ec19f-119c-4e9b-ba57-c8e2d850f5dc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:53:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.720 157129 INFO neutron.agent.ovn.metadata.agent [-] Port f66ec19f-119c-4e9b-ba57-c8e2d850f5dc in datapath 88562c34-222a-439a-b444-9e6f8a6d70cd bound to our chassis#033[00m
Feb 25 07:53:30 np0005629333 ovn_controller[147040]: 2026-02-25T12:53:30Z|01386|binding|INFO|Setting lport f66ec19f-119c-4e9b-ba57-c8e2d850f5dc ovn-installed in OVS
Feb 25 07:53:30 np0005629333 ovn_controller[147040]: 2026-02-25T12:53:30Z|01387|binding|INFO|Setting lport f66ec19f-119c-4e9b-ba57-c8e2d850f5dc up in Southbound
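The four ovn_controller messages above are the chassis claiming the logical port, programming it in OVS, and marking it up in the southbound database. One way to verify the claim from the host, sketched as a subprocess call around ovn-sbctl (assumes the CLI is installed and can reach the SB database):

    import json
    import subprocess

    # Ask the southbound DB which chassis now owns the port; 'up' should
    # read true after the "Setting lport ... up in Southbound" message.
    out = subprocess.check_output(
        ['ovn-sbctl', '--format=json', '--columns=logical_port,chassis,up',
         'find', 'Port_Binding',
         'logical_port=f66ec19f-119c-4e9b-ba57-c8e2d850f5dc'])
    print(json.loads(out))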
Feb 25 07:53:30 np0005629333 nova_compute[244014]: 2026-02-25 12:53:30.723 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.723 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 88562c34-222a-439a-b444-9e6f8a6d70cd#033[00m
Feb 25 07:53:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.735 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c115ef29-cca5-40bb-a517-f55b416b149c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.736 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap88562c34-21 in ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:53:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.739 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap88562c34-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:53:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.740 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6babd599-0f28-4be9-a6b3-66a9c02ba0ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.741 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cfbaa2a2-e735-4f7b-9f6c-f9f502d2840e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
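Provisioning the datapath means building a veth pair with one end (tap88562c34-21) inside the ovnmeta- namespace; the privsep replies above are the netlink round-trips for that work. A rough pyroute2 sketch of the same operation, not neutron's actual code path (assumes root and that the namespace already exists under /var/run/netns):

    from pyroute2 import IPRoute

    NS = 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd'

    ipr = IPRoute()
    try:
        # Create the pair named as in the log, then push the inner end
        # into the metadata namespace (pyroute2 resolves the namespace
        # name via /var/run/netns).
        ipr.link('add', ifname='tap88562c34-20', kind='veth',
                 peer='tap88562c34-21')
        idx = ipr.link_lookup(ifname='tap88562c34-21')[0]
        ipr.link('set', index=idx, net_ns_fd=NS)
    finally:
        ipr.close()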
Feb 25 07:53:30 np0005629333 systemd-machined[210048]: New machine qemu-162-instance-00000082.
Feb 25 07:53:30 np0005629333 systemd[1]: Started Virtual Machine qemu-162-instance-00000082.
Feb 25 07:53:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.757 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[ad148f8c-4968-49d5-b76f-79bc483fe068]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:30 np0005629333 systemd-udevd[362463]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:53:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.776 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[74863abf-c67a-47fa-9fed-dabba18d96de]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:30 np0005629333 NetworkManager[49836]: <info>  [1772024010.7867] device (tapf66ec19f-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:53:30 np0005629333 NetworkManager[49836]: <info>  [1772024010.7876] device (tapf66ec19f-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:53:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.817 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d0817f5b-7e1d-49f1-92cf-5abb0dfb73f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.824 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d385ffab-a571-41fa-a357-76698d5ba219]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:30 np0005629333 NetworkManager[49836]: <info>  [1772024010.8265] manager: (tap88562c34-20): new Veth device (/org/freedesktop/NetworkManager/Devices/573)
Feb 25 07:53:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.858 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[00abc008-0e88-48a9-8220-51dc836e94ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.863 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[346bb885-c0af-4e4b-9684-a0d6fea94f2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:30 np0005629333 nova_compute[244014]: 2026-02-25 12:53:30.883 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:53:30 np0005629333 nova_compute[244014]: 2026-02-25 12:53:30.884 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:53:30 np0005629333 NetworkManager[49836]: <info>  [1772024010.8957] device (tap88562c34-20): carrier: link connected
Feb 25 07:53:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.903 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba8c267-b190-42bd-9136-0f61097faab9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:30 np0005629333 nova_compute[244014]: 2026-02-25 12:53:30.911 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:53:30 np0005629333 nova_compute[244014]: 2026-02-25 12:53:30.912 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:53:30 np0005629333 nova_compute[244014]: 2026-02-25 12:53:30.912 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
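The Acquiring/acquired/"released" triplet above is oslo.concurrency's standard lock logging. The pattern behind it, as a minimal sketch of a named, process-local synchronized section:

    from oslo_concurrency import lockutils

    # The same named-lock pattern that produced the three lock lines
    # above; entry and exit are logged with the wait and hold times.
    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        pass  # critical section; held for 0.000s in the log

    clean_compute_node_cache()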
Feb 25 07:53:30 np0005629333 nova_compute[244014]: 2026-02-25 12:53:30.912 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:53:30 np0005629333 nova_compute[244014]: 2026-02-25 12:53:30.912 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
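For RBD-backed storage, the resource audit shells out to 'ceph df' instead of statting a local filesystem. A sketch of the same call and the cluster totals it parses (byte-valued fields under the 'stats' key of the ceph df JSON):

    import json
    from oslo_concurrency import processutils

    out, _ = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    # Cluster-wide totals, in bytes; nova derives free_disk for
    # RBD-backed instances from these.
    print(stats['total_bytes'], stats['total_avail_bytes'])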
Feb 25 07:53:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.920 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[93e65cd8-5eb6-4ab7-b348-0df2b776334f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88562c34-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:5e:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 411], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598029, 'reachable_time': 38748, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362494, 'error': None, 'target': 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.937 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[af22e56c-ef48-4b95-8381-bad17cedaadc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:5e1e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 598029, 'tstamp': 598029}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362496, 'error': None, 'target': 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.951 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3218c053-a9fd-4561-834e-870f813f97a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88562c34-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:5e:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 411], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598029, 'reachable_time': 38748, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 362497, 'error': None, 'target': 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:30 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:30.980 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3f76e60a-4687-4ae7-9faf-140fff7071e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:53:31
Feb 25 07:53:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:53:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:53:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['backups', '.mgr', 'default.rgw.log', 'volumes', 'images', 'vms', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Feb 25 07:53:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
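The mgr balancer ran an upmap optimization pass across all eleven pools and found nothing to move: "prepared 0/10" means zero of the up-to-ten allowed upmap changes were needed, i.e. the PG distribution is already balanced within the 0.05 misplaced threshold. Checking the balancer from a script, as a sketch (assumes the ceph CLI and a usable keyring on the host):

    import json
    import subprocess

    status = json.loads(subprocess.check_output(
        ['ceph', 'balancer', 'status', '--format=json']))
    # 'mode' should read 'upmap' and 'active' true, matching the
    # "Mode upmap" line above.
    print(status['active'], status['mode'])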
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:31.029 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2fba0f98-dbf4-4229-80dd-8eb668319222]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:31.030 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88562c34-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:31.030 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:31.031 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88562c34-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:53:31 np0005629333 NetworkManager[49836]: <info>  [1772024011.0331] manager: (tap88562c34-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/574)
Feb 25 07:53:31 np0005629333 nova_compute[244014]: 2026-02-25 12:53:31.032 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:31 np0005629333 kernel: tap88562c34-20: entered promiscuous mode
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:31.035 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap88562c34-20, col_values=(('external_ids', {'iface-id': '09662fcb-392d-469a-981c-54d31225748b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:53:31 np0005629333 nova_compute[244014]: 2026-02-25 12:53:31.036 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:31 np0005629333 nova_compute[244014]: 2026-02-25 12:53:31.037 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:31.037 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/88562c34-222a-439a-b444-9e6f8a6d70cd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/88562c34-222a-439a-b444-9e6f8a6d70cd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:53:31 np0005629333 ovn_controller[147040]: 2026-02-25T12:53:31Z|01388|binding|INFO|Releasing lport 09662fcb-392d-469a-981c-54d31225748b from this chassis (sb_readonly=0)
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:31.041 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eb22e7ef-b923-4ec8-9567-58b87b19651b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:31.041 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-88562c34-222a-439a-b444-9e6f8a6d70cd
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/88562c34-222a-439a-b444-9e6f8a6d70cd.pid.haproxy
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 88562c34-222a-439a-b444-9e6f8a6d70cd
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:53:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:31.042 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'env', 'PROCESS_TAG=haproxy-88562c34-222a-439a-b444-9e6f8a6d70cd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/88562c34-222a-439a-b444-9e6f8a6d70cd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
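The rendered configuration above binds haproxy to 169.254.169.254:80 inside the metadata namespace, tags each request with X-OVN-Network-ID, and forwards it to the 'metadata' backend at /var/lib/neutron/metadata_proxy (the agent's socket). Stripped of rootwrap and the PROCESS_TAG bookkeeping, the launch reduces to roughly this (root assumed):

    import subprocess

    # What the rootwrap command above boils down to: haproxy, daemonized
    # by its own config, started inside the metadata namespace.
    subprocess.check_call([
        'ip', 'netns', 'exec',
        'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd',
        'haproxy', '-f',
        '/var/lib/neutron/ovn-metadata-proxy/'
        '88562c34-222a-439a-b444-9e6f8a6d70cd.conf'])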
Feb 25 07:53:31 np0005629333 nova_compute[244014]: 2026-02-25 12:53:31.045 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:31 np0005629333 nova_compute[244014]: 2026-02-25 12:53:31.166 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772023996.165597, f3af9615-94aa-4498-ab5c-3fadcab4a4e6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:53:31 np0005629333 nova_compute[244014]: 2026-02-25 12:53:31.167 244018 INFO nova.compute.manager [-] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:53:31 np0005629333 nova_compute[244014]: 2026-02-25 12:53:31.191 244018 DEBUG nova.compute.manager [None req-78ed1a19-63a7-4f08-b0c7-bdcd0ceb2869 - - - - - -] [instance: f3af9615-94aa-4498-ab5c-3fadcab4a4e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:53:31 np0005629333 nova_compute[244014]: 2026-02-25 12:53:31.346 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024011.3457973, 03d948e4-e7cb-45ea-bf63-7ab363b4d46e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:53:31 np0005629333 nova_compute[244014]: 2026-02-25 12:53:31.347 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] VM Started (Lifecycle Event)#033[00m
Feb 25 07:53:31 np0005629333 nova_compute[244014]: 2026-02-25 12:53:31.368 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:53:31 np0005629333 nova_compute[244014]: 2026-02-25 12:53:31.373 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024011.3461757, 03d948e4-e7cb-45ea-bf63-7ab363b4d46e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:53:31 np0005629333 nova_compute[244014]: 2026-02-25 12:53:31.374 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:53:31 np0005629333 nova_compute[244014]: 2026-02-25 12:53:31.389 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:53:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:53:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1366131965' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:53:31 np0005629333 nova_compute[244014]: 2026-02-25 12:53:31.395 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:53:31 np0005629333 nova_compute[244014]: 2026-02-25 12:53:31.408 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:53:31 np0005629333 nova_compute[244014]: 2026-02-25 12:53:31.422 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
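In the sync line above, "DB power_state: 0, VM power_state: 3" are nova's numeric power states: 0 is NOSTATE (the database row has not been updated yet, since the instance is still spawning) and 3 is PAUSED, matching the Paused lifecycle event just emitted. A sketch, assuming nova's power_state module is importable on the compute host:

    from nova.compute import power_state

    # The numeric states in the sync message, by name.
    assert power_state.NOSTATE == 0   # DB row not yet updated
    assert power_state.PAUSED == 3    # what libvirt reported
    print(power_state.STATE_MAP[power_state.PAUSED])  # 'paused'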
Feb 25 07:53:31 np0005629333 nova_compute[244014]: 2026-02-25 12:53:31.481 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:53:31 np0005629333 nova_compute[244014]: 2026-02-25 12:53:31.481 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:53:31 np0005629333 podman[362593]: 2026-02-25 12:53:31.498191623 +0000 UTC m=+0.054468019 container create fea1bab1975a17824e0567deea72d3032b1182111d7bdd8fa9e701667e461d57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 25 07:53:31 np0005629333 systemd[1]: Started libpod-conmon-fea1bab1975a17824e0567deea72d3032b1182111d7bdd8fa9e701667e461d57.scope.
Feb 25 07:53:31 np0005629333 podman[362593]: 2026-02-25 12:53:31.465406407 +0000 UTC m=+0.021682843 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:53:31 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:53:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10a865f5da51854f8f958183be1ede03192965b68e57f97054deb9abb51f58a8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:53:31 np0005629333 podman[362593]: 2026-02-25 12:53:31.606589683 +0000 UTC m=+0.162866059 container init fea1bab1975a17824e0567deea72d3032b1182111d7bdd8fa9e701667e461d57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 07:53:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:53:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:53:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:53:31 np0005629333 podman[362593]: 2026-02-25 12:53:31.612774898 +0000 UTC m=+0.169051284 container start fea1bab1975a17824e0567deea72d3032b1182111d7bdd8fa9e701667e461d57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 07:53:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:53:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:53:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:53:31 np0005629333 neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd[362609]: [NOTICE]   (362613) : New worker (362615) forked
Feb 25 07:53:31 np0005629333 neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd[362609]: [NOTICE]   (362613) : Loading success.
Feb 25 07:53:31 np0005629333 ovn_controller[147040]: 2026-02-25T12:53:31Z|01389|binding|INFO|Releasing lport 09662fcb-392d-469a-981c-54d31225748b from this chassis (sb_readonly=0)
Feb 25 07:53:31 np0005629333 nova_compute[244014]: 2026-02-25 12:53:31.664 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:31 np0005629333 ovn_controller[147040]: 2026-02-25T12:53:31Z|01390|binding|INFO|Releasing lport 09662fcb-392d-469a-981c-54d31225748b from this chassis (sb_readonly=0)
Feb 25 07:53:31 np0005629333 nova_compute[244014]: 2026-02-25 12:53:31.703 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:31 np0005629333 nova_compute[244014]: 2026-02-25 12:53:31.736 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:53:31 np0005629333 nova_compute[244014]: 2026-02-25 12:53:31.738 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3481MB free_disk=59.95623039081693GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 07:53:31 np0005629333 nova_compute[244014]: 2026-02-25 12:53:31.739 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:53:31 np0005629333 nova_compute[244014]: 2026-02-25 12:53:31.739 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:53:31 np0005629333 nova_compute[244014]: 2026-02-25 12:53:31.808 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 03d948e4-e7cb-45ea-bf63-7ab363b4d46e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:53:31 np0005629333 nova_compute[244014]: 2026-02-25 12:53:31.809 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 07:53:31 np0005629333 nova_compute[244014]: 2026-02-25 12:53:31.809 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
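The final view is simple arithmetic over the hypervisor numbers reported at 12:53:31.738: the single active instance's 128 MB plus the host memory reservation gives used_ram=640MB. The 512 MB figure is nova's default reserved_host_memory_mb and is an assumption here, not something the log states:

    # Reproducing used_ram from the "Final resource view" line above.
    phys_ram = 7679                # MB, from the log
    instance_ram = 128             # MB, the single active instance
    reserved_host_memory_mb = 512  # nova default; assumed, not logged
    used_ram = reserved_host_memory_mb + instance_ram
    assert used_ram == 640         # matches used_ram=640MB above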
Feb 25 07:53:31 np0005629333 nova_compute[244014]: 2026-02-25 12:53:31.848 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:53:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:53:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:53:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:53:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:53:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:53:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:53:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:53:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:53:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:53:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.156 244018 DEBUG nova.compute.manager [req-9d973cb6-96c4-4fbf-91a9-7706c6807b30 req-a2cd64a4-a5c5-4ee6-9267-c53e1c2285ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Received event network-vif-plugged-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.156 244018 DEBUG oslo_concurrency.lockutils [req-9d973cb6-96c4-4fbf-91a9-7706c6807b30 req-a2cd64a4-a5c5-4ee6-9267-c53e1c2285ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.158 244018 DEBUG oslo_concurrency.lockutils [req-9d973cb6-96c4-4fbf-91a9-7706c6807b30 req-a2cd64a4-a5c5-4ee6-9267-c53e1c2285ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.158 244018 DEBUG oslo_concurrency.lockutils [req-9d973cb6-96c4-4fbf-91a9-7706c6807b30 req-a2cd64a4-a5c5-4ee6-9267-c53e1c2285ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.159 244018 DEBUG nova.compute.manager [req-9d973cb6-96c4-4fbf-91a9-7706c6807b30 req-a2cd64a4-a5c5-4ee6-9267-c53e1c2285ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Processing event network-vif-plugged-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.159 244018 DEBUG nova.compute.manager [req-9d973cb6-96c4-4fbf-91a9-7706c6807b30 req-a2cd64a4-a5c5-4ee6-9267-c53e1c2285ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Received event network-vif-plugged-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.160 244018 DEBUG oslo_concurrency.lockutils [req-9d973cb6-96c4-4fbf-91a9-7706c6807b30 req-a2cd64a4-a5c5-4ee6-9267-c53e1c2285ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.160 244018 DEBUG oslo_concurrency.lockutils [req-9d973cb6-96c4-4fbf-91a9-7706c6807b30 req-a2cd64a4-a5c5-4ee6-9267-c53e1c2285ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.160 244018 DEBUG oslo_concurrency.lockutils [req-9d973cb6-96c4-4fbf-91a9-7706c6807b30 req-a2cd64a4-a5c5-4ee6-9267-c53e1c2285ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.161 244018 DEBUG nova.compute.manager [req-9d973cb6-96c4-4fbf-91a9-7706c6807b30 req-a2cd64a4-a5c5-4ee6-9267-c53e1c2285ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] No waiting events found dispatching network-vif-plugged-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.161 244018 WARNING nova.compute.manager [req-9d973cb6-96c4-4fbf-91a9-7706c6807b30 req-a2cd64a4-a5c5-4ee6-9267-c53e1c2285ba 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Received unexpected event network-vif-plugged-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc for instance with vm_state building and task_state spawning.
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.162 244018 DEBUG nova.compute.manager [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
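[editor note] The lock/pop sequence above is Nova's external-event rendezvous: the thread building the instance registers a waiter for network-vif-plugged, and the thread handling Neutron's callback pops it under a per-instance "<uuid>-events" lock. The WARNING only means the event arrived after (or without) a registered waiter, which is benign during spawn. A minimal sketch of the locking pattern with oslo.concurrency, using illustrative names rather than Nova's actual internals:

    from oslo_concurrency import lockutils

    # Hypothetical event store; Nova keeps a richer structure, but the
    # locking discipline matches the "<uuid>-events" lock in the log.
    _waiters = {}

    def pop_instance_event(instance_uuid, event_name):
        with lockutils.lock(instance_uuid + '-events'):
            events = _waiters.get(instance_uuid) or {}
            # None here corresponds to "No waiting events found dispatching ..."
            return events.pop(event_name, None)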
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.166 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024012.1663218, 03d948e4-e7cb-45ea-bf63-7ab363b4d46e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.167 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] VM Resumed (Lifecycle Event)
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.171 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.174 244018 INFO nova.virt.libvirt.driver [-] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Instance spawned successfully.
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.175 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.193 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.198 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.201 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.201 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.201 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.201 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.202 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.202 244018 DEBUG nova.virt.libvirt.driver [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
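[editor note] For reference, the defaults registered in the six lines above, collected as the hw_* image properties they correspond to (values read straight from the log):

    registered_defaults = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }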
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.224 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] During sync_power_state the instance has a pending task (spawning). Skip.
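[editor note] The sync line further up compares "DB power_state: 0" against "VM power_state: 1". Those integers are Nova's power-state constants; assuming the usual values from nova.compute.power_state in this release line:

    # 0 = NOSTATE, 1 = RUNNING, 3 = PAUSED, 4 = SHUTDOWN, 6 = CRASHED, 7 = SUSPENDED.
    # Here the DB still said NOSTATE while libvirt reported RUNNING, and the
    # sync is skipped because the spawning task owns the state transition.
    NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7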
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.268 244018 INFO nova.compute.manager [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Took 9.29 seconds to spawn the instance on the hypervisor.
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.268 244018 DEBUG nova.compute.manager [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.332 244018 INFO nova.compute.manager [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Took 10.42 seconds to build instance.
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.348 244018 DEBUG oslo_concurrency.lockutils [None req-b2e753ba-bab0-402f-af8e-0ba5194d7251 f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.538s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
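[editor note] The three timings above nest cleanly: 9.29 s inside the libvirt spawn, 10.42 s for the whole build, 10.538 s holding the per-instance build lock. The differences locate the overhead:

    spawn, build, lock_held = 9.29, 10.42, 10.538   # seconds, from the log
    print(round(build - spawn, 2))      # 1.13 s of pre-spawn work (network, block devices, claims)
    print(round(lock_held - build, 3))  # 0.118 s of bookkeeping around the build, under the lock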
Feb 25 07:53:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:53:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2944912620' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.400 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
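[editor note] The CMD line above is oslo.concurrency's processutils wrapper timing a shell-out; the exit code and duration in the log come from that wrapper. A minimal sketch of the same call (real API, illustrative usage):

    from oslo_concurrency import processutils

    # execute() returns (stdout, stderr) and raises on non-zero exit unless
    # told otherwise; the RBD driver parses the JSON on stdout for pool stats.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')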
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.405 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.423 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
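[editor note] Placement treats each inventory as (total - reserved) * allocation_ratio of schedulable capacity; applying that to the unchanged inventory logged above:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2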
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.450 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 07:53:32 np0005629333 nova_compute[244014]: 2026-02-25 12:53:32.451 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:53:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2189: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Feb 25 07:53:33 np0005629333 nova_compute[244014]: 2026-02-25 12:53:33.442 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:53:33 np0005629333 nova_compute[244014]: 2026-02-25 12:53:33.443 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 07:53:33 np0005629333 nova_compute[244014]: 2026-02-25 12:53:33.490 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
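[editor note] _heal_instance_info_cache and the other ComputeManager entries scattered through these lines are oslo.service periodic tasks driven by run_periodic_tasks. A minimal sketch of how such a task is declared (the spacing here is illustrative, not Nova's configured interval):

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            # Refresh the cached network_info of at most one instance per
            # pass; an idle host therefore logs "Didn't find any instances".
            pass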
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.681339) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024013681361, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 1869, "num_deletes": 256, "total_data_size": 3052952, "memory_usage": 3093928, "flush_reason": "Manual Compaction"}
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024013694852, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 2966165, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45036, "largest_seqno": 46904, "table_properties": {"data_size": 2957724, "index_size": 5127, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17453, "raw_average_key_size": 19, "raw_value_size": 2940788, "raw_average_value_size": 3345, "num_data_blocks": 228, "num_entries": 879, "num_filter_entries": 879, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772023826, "oldest_key_time": 1772023826, "file_creation_time": 1772024013, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 13543 microseconds, and 4297 cpu microseconds.
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.694881) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 2966165 bytes OK
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.694894) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.696366) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.696376) EVENT_LOG_v1 {"time_micros": 1772024013696373, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.696389) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 3044965, prev total WAL file size 3044965, number of live WAL files 2.
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
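[editor note] RocksDB's EVENT_LOG_v1 records in the flush above are plain JSON after the marker (flush_started, table_file_creation, flush_finished, and so on), which makes flush and compaction history easy to mine from a journal dump. A small parser sketch:

    import json

    def parse_event_log(line):
        # Everything after "EVENT_LOG_v1 " is one JSON object; returns None
        # for lines that are not event-log records.
        _, sep, payload = line.partition('EVENT_LOG_v1 ')
        return json.loads(payload) if sep else None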
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.696874) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373633' seq:72057594037927935, type:22 .. '6C6F676D0032303135' seq:0, type:0; will stop at (end)
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(2896KB)], [104(8169KB)]
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024013696946, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 11332163, "oldest_snapshot_seqno": -1}
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 6959 keys, 11211013 bytes, temperature: kUnknown
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024013822051, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 11211013, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11162410, "index_size": 30123, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17413, "raw_key_size": 179834, "raw_average_key_size": 25, "raw_value_size": 11036082, "raw_average_value_size": 1585, "num_data_blocks": 1192, "num_entries": 6959, "num_filter_entries": 6959, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772024013, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.822299) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 11211013 bytes
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.823755) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 90.5 rd, 89.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 8.0 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(7.6) write-amplify(3.8) OK, records in: 7483, records dropped: 524 output_compression: NoCompression
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.823774) EVENT_LOG_v1 {"time_micros": 1772024013823764, "job": 62, "event": "compaction_finished", "compaction_time_micros": 125175, "compaction_time_cpu_micros": 51333, "output_level": 6, "num_output_files": 1, "total_output_size": 11211013, "num_input_records": 7483, "num_output_records": 6959, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
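[editor note] The amplification figures in the compaction summary above are plain ratios of bytes moved to new bytes ingested. Redoing the arithmetic from the byte counts in the surrounding event-log records (file 106 in, 11332163 total input, file 107 out):

    bytes_in_l0 = 2966165                   # file 106, the just-flushed L0 table
    bytes_in_l6 = 11332163 - bytes_in_l0    # input_data_size minus the L0 file
    bytes_out = 11211013                    # file 107, the new L6 table
    print(round((bytes_in_l0 + bytes_in_l6 + bytes_out) / bytes_in_l0, 1))  # 7.6 -> read-write-amplify
    print(round(bytes_out / bytes_in_l0, 1))                                # 3.8 -> write-amplify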
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024013824175, "job": 62, "event": "table_file_deletion", "file_number": 106}
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024013824831, "job": 62, "event": "table_file_deletion", "file_number": 104}
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.696749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.824933) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.824943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.824948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.824952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:53:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:33.824957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:53:33 np0005629333 nova_compute[244014]: 2026-02-25 12:53:33.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:53:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Feb 25 07:53:34 np0005629333 nova_compute[244014]: 2026-02-25 12:53:34.672 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:53:35 np0005629333 nova_compute[244014]: 2026-02-25 12:53:35.436 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:53:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Feb 25 07:53:36 np0005629333 nova_compute[244014]: 2026-02-25 12:53:36.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:53:37 np0005629333 nova_compute[244014]: 2026-02-25 12:53:37.275 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:53:37 np0005629333 NetworkManager[49836]: <info>  [1772024017.2830] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/575)
Feb 25 07:53:37 np0005629333 NetworkManager[49836]: <info>  [1772024017.2844] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/576)
Feb 25 07:53:37 np0005629333 ovn_controller[147040]: 2026-02-25T12:53:37Z|01391|binding|INFO|Releasing lport 09662fcb-392d-469a-981c-54d31225748b from this chassis (sb_readonly=0)
Feb 25 07:53:37 np0005629333 nova_compute[244014]: 2026-02-25 12:53:37.311 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:53:37 np0005629333 nova_compute[244014]: 2026-02-25 12:53:37.326 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:53:37 np0005629333 podman[362647]: 2026-02-25 12:53:37.733551938 +0000 UTC m=+0.074561266 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223)
Feb 25 07:53:37 np0005629333 podman[362648]: 2026-02-25 12:53:37.761577149 +0000 UTC m=+0.099418787 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
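[editor note] The two health_status=healthy events above come from podman running each container's configured test (per the logged config_data, the mounted /openstack/healthcheck script). The same check can be triggered by hand; a sketch with subprocess:

    import subprocess

    # "podman healthcheck run" executes the container's configured test and
    # exits 0 when healthy; the name matches the log's container_name field.
    subprocess.run(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'], check=True)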
Feb 25 07:53:37 np0005629333 nova_compute[244014]: 2026-02-25 12:53:37.842 244018 DEBUG nova.compute.manager [req-4a4059f7-c7ef-4bc0-b4f7-ba11af3c36e6 req-58df6b37-bd88-4ae9-80b1-173dad410a93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Received event network-changed-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:53:37 np0005629333 nova_compute[244014]: 2026-02-25 12:53:37.843 244018 DEBUG nova.compute.manager [req-4a4059f7-c7ef-4bc0-b4f7-ba11af3c36e6 req-58df6b37-bd88-4ae9-80b1-173dad410a93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Refreshing instance network info cache due to event network-changed-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:53:37 np0005629333 nova_compute[244014]: 2026-02-25 12:53:37.843 244018 DEBUG oslo_concurrency.lockutils [req-4a4059f7-c7ef-4bc0-b4f7-ba11af3c36e6 req-58df6b37-bd88-4ae9-80b1-173dad410a93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:53:37 np0005629333 nova_compute[244014]: 2026-02-25 12:53:37.843 244018 DEBUG oslo_concurrency.lockutils [req-4a4059f7-c7ef-4bc0-b4f7-ba11af3c36e6 req-58df6b37-bd88-4ae9-80b1-173dad410a93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:53:37 np0005629333 nova_compute[244014]: 2026-02-25 12:53:37.844 244018 DEBUG nova.network.neutron [req-4a4059f7-c7ef-4bc0-b4f7-ba11af3c36e6 req-58df6b37-bd88-4ae9-80b1-173dad410a93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Refreshing network info cache for port f66ec19f-119c-4e9b-ba57-c8e2d850f5dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:53:37 np0005629333 nova_compute[244014]: 2026-02-25 12:53:37.885 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:53:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2192: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 782 KiB/s wr, 114 op/s
Feb 25 07:53:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:53:39 np0005629333 nova_compute[244014]: 2026-02-25 12:53:39.675 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:53:39 np0005629333 nova_compute[244014]: 2026-02-25 12:53:39.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:53:39 np0005629333 nova_compute[244014]: 2026-02-25 12:53:39.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:53:40 np0005629333 nova_compute[244014]: 2026-02-25 12:53:40.285 244018 DEBUG nova.network.neutron [req-4a4059f7-c7ef-4bc0-b4f7-ba11af3c36e6 req-58df6b37-bd88-4ae9-80b1-173dad410a93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Updated VIF entry in instance network info cache for port f66ec19f-119c-4e9b-ba57-c8e2d850f5dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 07:53:40 np0005629333 nova_compute[244014]: 2026-02-25 12:53:40.285 244018 DEBUG nova.network.neutron [req-4a4059f7-c7ef-4bc0-b4f7-ba11af3c36e6 req-58df6b37-bd88-4ae9-80b1-173dad410a93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Updating instance_info_cache with network_info: [{"id": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "address": "fa:16:3e:d9:be:ae", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66ec19f-11", "ovs_interfaceid": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:53:40 np0005629333 nova_compute[244014]: 2026-02-25 12:53:40.307 244018 DEBUG oslo_concurrency.lockutils [req-4a4059f7-c7ef-4bc0-b4f7-ba11af3c36e6 req-58df6b37-bd88-4ae9-80b1-173dad410a93 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
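[editor note] The instance_info_cache payload logged just above is ordinary JSON. Assuming the blob is bound to a string, the fixed and floating addresses can be pulled out like this (structure as in the logged entry):

    import json

    vifs = json.loads(network_info_json)  # the logged [{"id": "f66ec19f-...", ...}] blob
    for vif in vifs:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floats = [f['address'] for f in ip.get('floating_ips', [])]
                print(ip['address'], floats)
    # 2001:db8::f816:3eff:fed9:beae [], 2001:db8:0:1:f816:3eff:fed9:beae [],
    # 10.100.0.3 ['192.168.122.198']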
Feb 25 07:53:40 np0005629333 nova_compute[244014]: 2026-02-25 12:53:40.436 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:53:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2193: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 97 op/s
Feb 25 07:53:40 np0005629333 nova_compute[244014]: 2026-02-25 12:53:40.840 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:53:41 np0005629333 nova_compute[244014]: 2026-02-25 12:53:41.396 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024006.394624, 809da994-7551-4f52-8920-b0dfaa2ef73e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:53:41 np0005629333 nova_compute[244014]: 2026-02-25 12:53:41.397 244018 INFO nova.compute.manager [-] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] VM Stopped (Lifecycle Event)
Feb 25 07:53:41 np0005629333 nova_compute[244014]: 2026-02-25 12:53:41.422 244018 DEBUG nova.compute.manager [None req-6b742b2f-5571-4647-9ed0-7fb8696efe2e - - - - - -] [instance: 809da994-7551-4f52-8920-b0dfaa2ef73e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:53:42 np0005629333 ovn_controller[147040]: 2026-02-25T12:53:42Z|00169|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d9:be:ae 10.100.0.3
Feb 25 07:53:42 np0005629333 ovn_controller[147040]: 2026-02-25T12:53:42Z|00170|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d9:be:ae 10.100.0.3
Feb 25 07:53:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2194: 305 pgs: 305 active+clean; 216 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 137 op/s
Feb 25 07:53:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:53:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:53:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:53:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:53:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007082478581069775 of space, bias 1.0, pg target 0.21247435743209325 quantized to 32 (current 32)
Feb 25 07:53:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:53:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:53:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:53:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:53:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:53:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002494152801502742 of space, bias 1.0, pg target 0.7482458404508227 quantized to 32 (current 32)
Feb 25 07:53:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:53:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3911709341754778e-06 of space, bias 4.0, pg target 0.0016694051210105734 quantized to 16 (current 16)
Feb 25 07:53:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:53:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:53:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:53:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:53:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:53:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:53:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:53:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:53:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:53:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
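[editor note] Each pg target in the autoscaler lines above is usage_ratio * bias * a budget of roughly 300, which here plausibly equals mon_target_pg_per_osd (100) times 3 OSDs; that factor is inferred from the numbers, not stated in the log. The raw target is then quantized to a power of two, subject to per-pool minimums and change thresholds, which the sketch below does not reproduce:

    def raw_pg_target(usage_ratio, bias, budget=300):
        # budget = assumed mon_target_pg_per_osd (100) x 3 OSDs
        return usage_ratio * bias * budget

    raw_pg_target(7.185749983720779e-06, 1.0)   # ~0.0021557, as logged for '.mgr'
    raw_pg_target(1.3911709341754778e-06, 4.0)  # ~0.0016694, as logged for 'cephfs.cephfs.meta'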
Feb 25 07:53:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:53:43 np0005629333 nova_compute[244014]: 2026-02-25 12:53:43.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:53:43 np0005629333 nova_compute[244014]: 2026-02-25 12:53:43.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 07:53:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2195: 305 pgs: 305 active+clean; 216 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Feb 25 07:53:44 np0005629333 nova_compute[244014]: 2026-02-25 12:53:44.678 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:53:45 np0005629333 podman[362783]: 2026-02-25 12:53:45.338818075 +0000 UTC m=+0.074495945 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:53:45 np0005629333 nova_compute[244014]: 2026-02-25 12:53:45.438 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:53:45 np0005629333 podman[362783]: 2026-02-25 12:53:45.451140746 +0000 UTC m=+0.186818626 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 07:53:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:53:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:53:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:53:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:53:46 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:53:46 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:53:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2196: 305 pgs: 305 active+clean; 221 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 114 op/s
Feb 25 07:53:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:53:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:53:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:53:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:53:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:53:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:53:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:53:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:53:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:53:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:53:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:53:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:53:47 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:53:47 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:53:47 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:53:47 np0005629333 podman[363116]: 2026-02-25 12:53:47.52161728 +0000 UTC m=+0.065514151 container create fbe80fa4290d21af9e242a173b49155a14bfc206ae72c86f91c666b69ab92c07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 07:53:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:53:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2315838714' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:53:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:53:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2315838714' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 07:53:47 np0005629333 systemd[1]: Started libpod-conmon-fbe80fa4290d21af9e242a173b49155a14bfc206ae72c86f91c666b69ab92c07.scope.
Feb 25 07:53:47 np0005629333 podman[363116]: 2026-02-25 12:53:47.490952604 +0000 UTC m=+0.034849535 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:53:47 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:53:47 np0005629333 podman[363116]: 2026-02-25 12:53:47.607190687 +0000 UTC m=+0.151087598 container init fbe80fa4290d21af9e242a173b49155a14bfc206ae72c86f91c666b69ab92c07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_morse, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:53:47 np0005629333 podman[363116]: 2026-02-25 12:53:47.615676596 +0000 UTC m=+0.159573467 container start fbe80fa4290d21af9e242a173b49155a14bfc206ae72c86f91c666b69ab92c07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_morse, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 07:53:47 np0005629333 nice_morse[363133]: 167 167
Feb 25 07:53:47 np0005629333 podman[363116]: 2026-02-25 12:53:47.619950987 +0000 UTC m=+0.163847858 container attach fbe80fa4290d21af9e242a173b49155a14bfc206ae72c86f91c666b69ab92c07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_morse, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:53:47 np0005629333 systemd[1]: libpod-fbe80fa4290d21af9e242a173b49155a14bfc206ae72c86f91c666b69ab92c07.scope: Deactivated successfully.
Feb 25 07:53:47 np0005629333 conmon[363133]: conmon fbe80fa4290d21af9e24 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fbe80fa4290d21af9e242a173b49155a14bfc206ae72c86f91c666b69ab92c07.scope/container/memory.events
Feb 25 07:53:47 np0005629333 podman[363116]: 2026-02-25 12:53:47.624631019 +0000 UTC m=+0.168527900 container died fbe80fa4290d21af9e242a173b49155a14bfc206ae72c86f91c666b69ab92c07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_morse, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 07:53:47 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ce7cf2ceb861dbc3c7788437e2e298f0d62aefa913b557b6f5d486727fbb2c20-merged.mount: Deactivated successfully.
Feb 25 07:53:47 np0005629333 podman[363116]: 2026-02-25 12:53:47.673663154 +0000 UTC m=+0.217560025 container remove fbe80fa4290d21af9e242a173b49155a14bfc206ae72c86f91c666b69ab92c07 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 07:53:47 np0005629333 systemd[1]: libpod-conmon-fbe80fa4290d21af9e242a173b49155a14bfc206ae72c86f91c666b69ab92c07.scope: Deactivated successfully.
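[editor note] The create/init/start/attach/died/remove sequence above (container nice_morse, gone within a quarter second) is the footprint of a one-shot cephadm exec, the same shape a "podman run --rm" produces. A sketch of the pattern; the command itself is an illustrative stand-in, not what cephadm actually ran:

    import subprocess

    # One-shot container: created, runs a single command, removed on exit.
    subprocess.run(
        ['podman', 'run', '--rm',
         'quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86',
         'ceph', '--version'],          # stand-in command, not taken from the log
        check=True)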
Feb 25 07:53:47 np0005629333 podman[363156]: 2026-02-25 12:53:47.869840473 +0000 UTC m=+0.060887430 container create baf9ba5d71079d31cfbc71a74c74579a19429b1bb8a2c6d5e840295c119f095d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hopper, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:53:47 np0005629333 nova_compute[244014]: 2026-02-25 12:53:47.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:53:47 np0005629333 systemd[1]: Started libpod-conmon-baf9ba5d71079d31cfbc71a74c74579a19429b1bb8a2c6d5e840295c119f095d.scope.
Feb 25 07:53:47 np0005629333 podman[363156]: 2026-02-25 12:53:47.845010552 +0000 UTC m=+0.036057579 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:53:47 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:53:47 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23d36cf2561cf809f5a7546d792f698dd58cf235f2dab8972b5b55fb8ebbd2ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:53:47 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23d36cf2561cf809f5a7546d792f698dd58cf235f2dab8972b5b55fb8ebbd2ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:53:47 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23d36cf2561cf809f5a7546d792f698dd58cf235f2dab8972b5b55fb8ebbd2ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:53:47 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23d36cf2561cf809f5a7546d792f698dd58cf235f2dab8972b5b55fb8ebbd2ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:53:47 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23d36cf2561cf809f5a7546d792f698dd58cf235f2dab8972b5b55fb8ebbd2ad/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:53:47 np0005629333 podman[363156]: 2026-02-25 12:53:47.989940114 +0000 UTC m=+0.180987051 container init baf9ba5d71079d31cfbc71a74c74579a19429b1bb8a2c6d5e840295c119f095d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hopper, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 07:53:48 np0005629333 podman[363156]: 2026-02-25 12:53:48.003627061 +0000 UTC m=+0.194674028 container start baf9ba5d71079d31cfbc71a74c74579a19429b1bb8a2c6d5e840295c119f095d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:53:48 np0005629333 podman[363156]: 2026-02-25 12:53:48.007944273 +0000 UTC m=+0.198991240 container attach baf9ba5d71079d31cfbc71a74c74579a19429b1bb8a2c6d5e840295c119f095d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hopper, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 07:53:48 np0005629333 nifty_hopper[363172]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:53:48 np0005629333 nifty_hopper[363172]: --> All data devices are unavailable
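
The two nifty_hopper lines above are ceph-volume output from a short-lived check container: three LVM data devices were passed in and none is available for a new OSD, which here simply means the LVs are already consumed (the lvm list dump further down shows OSDs 0-2 living on them). A minimal sketch of the same availability check, assuming upstream ceph-volume's "ceph-volume inventory --format json" interface rather than anything recorded in this log:

    import json
    import subprocess

    # List devices ceph-volume would accept for a new OSD; each inventory
    # entry carries an "available" flag plus a "rejected_reasons" list.
    def available_devices():
        out = subprocess.run(
            ["ceph-volume", "inventory", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return [d["path"] for d in json.loads(out) if d.get("available")]

    print(available_devices())  # expected [] on this node: all LVs hold OSDs
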
Feb 25 07:53:48 np0005629333 systemd[1]: libpod-baf9ba5d71079d31cfbc71a74c74579a19429b1bb8a2c6d5e840295c119f095d.scope: Deactivated successfully.
Feb 25 07:53:48 np0005629333 podman[363194]: 2026-02-25 12:53:48.563120838 +0000 UTC m=+0.041677248 container died baf9ba5d71079d31cfbc71a74c74579a19429b1bb8a2c6d5e840295c119f095d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hopper, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:53:48 np0005629333 systemd[1]: var-lib-containers-storage-overlay-23d36cf2561cf809f5a7546d792f698dd58cf235f2dab8972b5b55fb8ebbd2ad-merged.mount: Deactivated successfully.
Feb 25 07:53:48 np0005629333 podman[363194]: 2026-02-25 12:53:48.653402627 +0000 UTC m=+0.131958997 container remove baf9ba5d71079d31cfbc71a74c74579a19429b1bb8a2c6d5e840295c119f095d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hopper, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:53:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2197: 305 pgs: 305 active+clean; 233 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 94 op/s
Feb 25 07:53:48 np0005629333 systemd[1]: libpod-conmon-baf9ba5d71079d31cfbc71a74c74579a19429b1bb8a2c6d5e840295c119f095d.scope: Deactivated successfully.
Feb 25 07:53:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:53:49 np0005629333 podman[363274]: 2026-02-25 12:53:49.157851511 +0000 UTC m=+0.056754493 container create 95ac1abd8ce2d6dc22c438af6dd8bd51b00de188b2403de01319c2ac6b0e05ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 25 07:53:49 np0005629333 systemd[1]: Started libpod-conmon-95ac1abd8ce2d6dc22c438af6dd8bd51b00de188b2403de01319c2ac6b0e05ef.scope.
Feb 25 07:53:49 np0005629333 podman[363274]: 2026-02-25 12:53:49.133907545 +0000 UTC m=+0.032810577 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:53:49 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:53:49 np0005629333 podman[363274]: 2026-02-25 12:53:49.245712482 +0000 UTC m=+0.144615504 container init 95ac1abd8ce2d6dc22c438af6dd8bd51b00de188b2403de01319c2ac6b0e05ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 07:53:49 np0005629333 podman[363274]: 2026-02-25 12:53:49.254747447 +0000 UTC m=+0.153650439 container start 95ac1abd8ce2d6dc22c438af6dd8bd51b00de188b2403de01319c2ac6b0e05ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_wright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 07:53:49 np0005629333 podman[363274]: 2026-02-25 12:53:49.258777511 +0000 UTC m=+0.157680493 container attach 95ac1abd8ce2d6dc22c438af6dd8bd51b00de188b2403de01319c2ac6b0e05ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_wright, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 07:53:49 np0005629333 systemd[1]: libpod-95ac1abd8ce2d6dc22c438af6dd8bd51b00de188b2403de01319c2ac6b0e05ef.scope: Deactivated successfully.
Feb 25 07:53:49 np0005629333 competent_wright[363291]: 167 167
Feb 25 07:53:49 np0005629333 podman[363274]: 2026-02-25 12:53:49.260943632 +0000 UTC m=+0.159846614 container died 95ac1abd8ce2d6dc22c438af6dd8bd51b00de188b2403de01319c2ac6b0e05ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 07:53:49 np0005629333 systemd[1]: var-lib-containers-storage-overlay-cd5da45afddc3415a25f7c8926ba9ba0a2b3f03a320414d48a7b91325d44858c-merged.mount: Deactivated successfully.
Feb 25 07:53:49 np0005629333 podman[363274]: 2026-02-25 12:53:49.308936808 +0000 UTC m=+0.207839790 container remove 95ac1abd8ce2d6dc22c438af6dd8bd51b00de188b2403de01319c2ac6b0e05ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_wright, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 07:53:49 np0005629333 systemd[1]: libpod-conmon-95ac1abd8ce2d6dc22c438af6dd8bd51b00de188b2403de01319c2ac6b0e05ef.scope: Deactivated successfully.
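
The competent_wright container above lives for a fraction of a second and prints only "167 167" (lucid_dirac below repeats the same probe). This is consistent with cephadm launching throwaway containers to read the uid/gid of the ceph user inside the image; 167 is the ceph uid/gid in the official images. A hedged reproduction of that kind of probe, with the stat entrypoint and the path as assumptions, not anything read from this log:

    import subprocess

    # Hypothetical cephadm-style probe: stat a ceph-owned path inside a
    # throwaway container from the same image used above.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out.strip())  # expect "167 167" for the ceph user and group
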
Feb 25 07:53:49 np0005629333 podman[363313]: 2026-02-25 12:53:49.51689708 +0000 UTC m=+0.061318643 container create 06d9f44c47c7f4a3bc7d712b0926bc3ecd468cf18c44af8899370c396026d2a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 07:53:49 np0005629333 nova_compute[244014]: 2026-02-25 12:53:49.531 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:53:49 np0005629333 nova_compute[244014]: 2026-02-25 12:53:49.531 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:53:49 np0005629333 systemd[1]: Started libpod-conmon-06d9f44c47c7f4a3bc7d712b0926bc3ecd468cf18c44af8899370c396026d2a7.scope.
Feb 25 07:53:49 np0005629333 podman[363313]: 2026-02-25 12:53:49.492368187 +0000 UTC m=+0.036789810 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:53:49 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:53:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45d1c6e0b4c299d45939bbb684e9ba3c579fd34026cfb6ae29ee384f91961d42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:53:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45d1c6e0b4c299d45939bbb684e9ba3c579fd34026cfb6ae29ee384f91961d42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:53:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45d1c6e0b4c299d45939bbb684e9ba3c579fd34026cfb6ae29ee384f91961d42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:53:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45d1c6e0b4c299d45939bbb684e9ba3c579fd34026cfb6ae29ee384f91961d42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:53:49 np0005629333 podman[363313]: 2026-02-25 12:53:49.630118817 +0000 UTC m=+0.174540400 container init 06d9f44c47c7f4a3bc7d712b0926bc3ecd468cf18c44af8899370c396026d2a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 07:53:49 np0005629333 podman[363313]: 2026-02-25 12:53:49.639994905 +0000 UTC m=+0.184416468 container start 06d9f44c47c7f4a3bc7d712b0926bc3ecd468cf18c44af8899370c396026d2a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:53:49 np0005629333 podman[363313]: 2026-02-25 12:53:49.643772172 +0000 UTC m=+0.188193725 container attach 06d9f44c47c7f4a3bc7d712b0926bc3ecd468cf18c44af8899370c396026d2a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 07:53:49 np0005629333 nova_compute[244014]: 2026-02-25 12:53:49.642 244018 DEBUG nova.compute.manager [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:53:49 np0005629333 nova_compute[244014]: 2026-02-25 12:53:49.682 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:53:49 np0005629333 nova_compute[244014]: 2026-02-25 12:53:49.746 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:53:49 np0005629333 nova_compute[244014]: 2026-02-25 12:53:49.747 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:53:49 np0005629333 nova_compute[244014]: 2026-02-25 12:53:49.758 244018 DEBUG nova.virt.hardware [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:53:49 np0005629333 nova_compute[244014]: 2026-02-25 12:53:49.758 244018 INFO nova.compute.claims [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:53:49 np0005629333 nova_compute[244014]: 2026-02-25 12:53:49.896 244018 DEBUG oslo_concurrency.processutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
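
Note that the JSON document immediately below comes from the distracted_sutherland container, not from this ceph df call; the df result only returns at 12:53:50.491 further down. For reference, a sketch of consuming the ceph df output nova requests here; the "stats"/"total_bytes" field names follow upstream ceph and are an assumption, not quoted from this log:

    import json
    import subprocess

    # Same command nova runs above; pull cluster-wide capacity out of the JSON.
    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(raw)["stats"]  # assumed upstream field layout
    gib = 1024 ** 3
    print(f"{stats['total_avail_bytes'] / gib:.1f} GiB free of "
          f"{stats['total_bytes'] / gib:.1f} GiB")
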
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]: {
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:    "0": [
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:        {
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "devices": [
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "/dev/loop3"
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            ],
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "lv_name": "ceph_lv0",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "lv_size": "21470642176",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "name": "ceph_lv0",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "tags": {
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.cluster_name": "ceph",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.crush_device_class": "",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.encrypted": "0",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.objectstore": "bluestore",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.osd_id": "0",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.type": "block",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.vdo": "0",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.with_tpm": "0"
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            },
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "type": "block",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "vg_name": "ceph_vg0"
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:        }
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:    ],
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:    "1": [
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:        {
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "devices": [
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "/dev/loop4"
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            ],
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "lv_name": "ceph_lv1",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "lv_size": "21470642176",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "name": "ceph_lv1",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "tags": {
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.cluster_name": "ceph",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.crush_device_class": "",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.encrypted": "0",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.objectstore": "bluestore",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.osd_id": "1",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.type": "block",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.vdo": "0",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.with_tpm": "0"
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            },
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "type": "block",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "vg_name": "ceph_vg1"
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:        }
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:    ],
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:    "2": [
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:        {
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "devices": [
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "/dev/loop5"
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            ],
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "lv_name": "ceph_lv2",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "lv_size": "21470642176",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "name": "ceph_lv2",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "tags": {
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.cluster_name": "ceph",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.crush_device_class": "",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.encrypted": "0",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.objectstore": "bluestore",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.osd_id": "2",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.type": "block",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.vdo": "0",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:                "ceph.with_tpm": "0"
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            },
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "type": "block",
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:            "vg_name": "ceph_vg2"
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:        }
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]:    ]
Feb 25 07:53:49 np0005629333 distracted_sutherland[363329]: }
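
The JSON above has the shape of ceph-volume lvm list --format json output (the command itself is inferred from the output, not logged): a map of OSD id to the logical volumes backing it, with the ceph.* LV tags present both as a raw string and parsed. A small sketch reducing it to an OSD-to-device table, assuming the document has been captured to a file:

    import json

    # Summarize ceph-volume lvm list JSON: one line per OSD.
    with open("lvm_list.json") as fh:  # hypothetical capture of the JSON above
        osds = json.load(fh)

    for osd_id, lvs in sorted(osds.items()):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(fsid {lv['tags']['ceph.osd_fsid']})")

    # For the document above this prints:
    # osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (fsid d19afe3c-...)
    # osd.1: /dev/ceph_vg1/ceph_lv1 on /dev/loop4 (fsid a25b4fc6-...)
    # osd.2: /dev/ceph_vg2/ceph_lv2 on /dev/loop5 (fsid f84d59d3-...)
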
Feb 25 07:53:50 np0005629333 systemd[1]: libpod-06d9f44c47c7f4a3bc7d712b0926bc3ecd468cf18c44af8899370c396026d2a7.scope: Deactivated successfully.
Feb 25 07:53:50 np0005629333 podman[363313]: 2026-02-25 12:53:50.007431391 +0000 UTC m=+0.551852914 container died 06d9f44c47c7f4a3bc7d712b0926bc3ecd468cf18c44af8899370c396026d2a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 07:53:50 np0005629333 systemd[1]: var-lib-containers-storage-overlay-45d1c6e0b4c299d45939bbb684e9ba3c579fd34026cfb6ae29ee384f91961d42-merged.mount: Deactivated successfully.
Feb 25 07:53:50 np0005629333 podman[363313]: 2026-02-25 12:53:50.047843282 +0000 UTC m=+0.592264795 container remove 06d9f44c47c7f4a3bc7d712b0926bc3ecd468cf18c44af8899370c396026d2a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 07:53:50 np0005629333 systemd[1]: libpod-conmon-06d9f44c47c7f4a3bc7d712b0926bc3ecd468cf18c44af8899370c396026d2a7.scope: Deactivated successfully.
Feb 25 07:53:50 np0005629333 nova_compute[244014]: 2026-02-25 12:53:50.439 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:53:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:53:50 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4114764317' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:53:50 np0005629333 nova_compute[244014]: 2026-02-25 12:53:50.491 244018 DEBUG oslo_concurrency.processutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:53:50 np0005629333 nova_compute[244014]: 2026-02-25 12:53:50.499 244018 DEBUG nova.compute.provider_tree [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:53:50 np0005629333 nova_compute[244014]: 2026-02-25 12:53:50.527 244018 DEBUG nova.scheduler.client.report [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
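
The inventory line decodes as follows: placement treats (total - reserved) * allocation_ratio as the schedulable capacity per resource class, so this node advertises 8 * 4.0 = 32 VCPUs, (7679 - 512) * 1.0 = 7167 MB of RAM, and (59 - 1) * 0.9 = 52.2 GB of disk. A throwaway check of that arithmetic:

    # Effective capacity as placement computes it from the inventory above:
    # usable = (total - reserved) * allocation_ratio, per resource class.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0 / MEMORY_MB 7167.0 / DISK_GB 52.2
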
Feb 25 07:53:50 np0005629333 podman[363431]: 2026-02-25 12:53:50.540321838 +0000 UTC m=+0.043225272 container create 18ca15f35c0c6b3390a1db18ace96c352c1b863ed3ec1cc0cedc41a1908bcb7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_dirac, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Feb 25 07:53:50 np0005629333 nova_compute[244014]: 2026-02-25 12:53:50.556 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:53:50 np0005629333 nova_compute[244014]: 2026-02-25 12:53:50.556 244018 DEBUG nova.compute.manager [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:53:50 np0005629333 systemd[1]: Started libpod-conmon-18ca15f35c0c6b3390a1db18ace96c352c1b863ed3ec1cc0cedc41a1908bcb7e.scope.
Feb 25 07:53:50 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:53:50 np0005629333 nova_compute[244014]: 2026-02-25 12:53:50.602 244018 DEBUG nova.compute.manager [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:53:50 np0005629333 podman[363431]: 2026-02-25 12:53:50.602807312 +0000 UTC m=+0.105710786 container init 18ca15f35c0c6b3390a1db18ace96c352c1b863ed3ec1cc0cedc41a1908bcb7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_dirac, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:53:50 np0005629333 nova_compute[244014]: 2026-02-25 12:53:50.602 244018 DEBUG nova.network.neutron [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:53:50 np0005629333 podman[363431]: 2026-02-25 12:53:50.609628135 +0000 UTC m=+0.112531599 container start 18ca15f35c0c6b3390a1db18ace96c352c1b863ed3ec1cc0cedc41a1908bcb7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:53:50 np0005629333 lucid_dirac[363447]: 167 167
Feb 25 07:53:50 np0005629333 podman[363431]: 2026-02-25 12:53:50.613948817 +0000 UTC m=+0.116852291 container attach 18ca15f35c0c6b3390a1db18ace96c352c1b863ed3ec1cc0cedc41a1908bcb7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_dirac, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 07:53:50 np0005629333 systemd[1]: libpod-18ca15f35c0c6b3390a1db18ace96c352c1b863ed3ec1cc0cedc41a1908bcb7e.scope: Deactivated successfully.
Feb 25 07:53:50 np0005629333 podman[363431]: 2026-02-25 12:53:50.614380219 +0000 UTC m=+0.117283693 container died 18ca15f35c0c6b3390a1db18ace96c352c1b863ed3ec1cc0cedc41a1908bcb7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 07:53:50 np0005629333 podman[363431]: 2026-02-25 12:53:50.528092033 +0000 UTC m=+0.030995487 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:53:50 np0005629333 nova_compute[244014]: 2026-02-25 12:53:50.627 244018 INFO nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:53:50 np0005629333 systemd[1]: var-lib-containers-storage-overlay-fbdf629afc534b1d0eb7a6af03a8a0089fcc8f5df688ebcbb496d8151ee3db20-merged.mount: Deactivated successfully.
Feb 25 07:53:50 np0005629333 nova_compute[244014]: 2026-02-25 12:53:50.649 244018 DEBUG nova.compute.manager [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:53:50 np0005629333 podman[363431]: 2026-02-25 12:53:50.657356753 +0000 UTC m=+0.160260227 container remove 18ca15f35c0c6b3390a1db18ace96c352c1b863ed3ec1cc0cedc41a1908bcb7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:53:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 305 active+clean; 233 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 07:53:50 np0005629333 systemd[1]: libpod-conmon-18ca15f35c0c6b3390a1db18ace96c352c1b863ed3ec1cc0cedc41a1908bcb7e.scope: Deactivated successfully.
Feb 25 07:53:50 np0005629333 podman[363471]: 2026-02-25 12:53:50.808368617 +0000 UTC m=+0.046908196 container create ebd3c390718ac22da87f5706b02a1278e663545a5c16e1575537294291ac48fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_chatelet, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle)
Feb 25 07:53:50 np0005629333 systemd[1]: Started libpod-conmon-ebd3c390718ac22da87f5706b02a1278e663545a5c16e1575537294291ac48fe.scope.
Feb 25 07:53:50 np0005629333 nova_compute[244014]: 2026-02-25 12:53:50.871 244018 DEBUG nova.compute.manager [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:53:50 np0005629333 nova_compute[244014]: 2026-02-25 12:53:50.875 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:53:50 np0005629333 nova_compute[244014]: 2026-02-25 12:53:50.876 244018 INFO nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Creating image(s)
Feb 25 07:53:50 np0005629333 podman[363471]: 2026-02-25 12:53:50.782531547 +0000 UTC m=+0.021071176 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:53:50 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:53:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9102adddcc0f464a24e9fdbcc3ee3fa2121fe32349bf8fc291e52bd5803ac97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:53:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9102adddcc0f464a24e9fdbcc3ee3fa2121fe32349bf8fc291e52bd5803ac97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:53:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9102adddcc0f464a24e9fdbcc3ee3fa2121fe32349bf8fc291e52bd5803ac97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:53:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9102adddcc0f464a24e9fdbcc3ee3fa2121fe32349bf8fc291e52bd5803ac97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:53:50 np0005629333 podman[363471]: 2026-02-25 12:53:50.913614438 +0000 UTC m=+0.152154017 container init ebd3c390718ac22da87f5706b02a1278e663545a5c16e1575537294291ac48fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 07:53:50 np0005629333 nova_compute[244014]: 2026-02-25 12:53:50.923 244018 DEBUG nova.storage.rbd_utils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:53:50 np0005629333 podman[363471]: 2026-02-25 12:53:50.928033216 +0000 UTC m=+0.166572765 container start ebd3c390718ac22da87f5706b02a1278e663545a5c16e1575537294291ac48fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_chatelet, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 07:53:50 np0005629333 podman[363471]: 2026-02-25 12:53:50.931085482 +0000 UTC m=+0.169625031 container attach ebd3c390718ac22da87f5706b02a1278e663545a5c16e1575537294291ac48fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_chatelet, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 07:53:50 np0005629333 nova_compute[244014]: 2026-02-25 12:53:50.958 244018 DEBUG nova.storage.rbd_utils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:53:50 np0005629333 nova_compute[244014]: 2026-02-25 12:53:50.986 244018 DEBUG nova.storage.rbd_utils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:53:50 np0005629333 nova_compute[244014]: 2026-02-25 12:53:50.990 244018 DEBUG oslo_concurrency.processutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:53:51 np0005629333 nova_compute[244014]: 2026-02-25 12:53:51.053 244018 DEBUG oslo_concurrency.processutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:53:51 np0005629333 nova_compute[244014]: 2026-02-25 12:53:51.054 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:53:51 np0005629333 nova_compute[244014]: 2026-02-25 12:53:51.055 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:53:51 np0005629333 nova_compute[244014]: 2026-02-25 12:53:51.056 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
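The Acquiring/acquired/released triple around "fetch_func_sync" is oslo.concurrency's named-lock pattern serializing downloads into the image cache, keyed on the image hash so two boots of the same image cannot fetch it concurrently. A sketch of the pattern (lock name from the log; the function body is a placeholder for nova's real Glance fetch):

```python
from oslo_concurrency import lockutils

@lockutils.synchronized('a63dc6dbb387022d47a8ca49bddcc4af2508a4d6')
def fetch_func_sync():
    # Placeholder: nova's real fetch_func_sync downloads the image into
    # /var/lib/nova/instances/_base while this lock is held.
    pass

fetch_func_sync()  # emits acquire/release DEBUG pairs like those above
```

Here the lock was held for only 0.001s, which indicates the base image was already cached and no fetch was needed.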
Feb 25 07:53:51 np0005629333 nova_compute[244014]: 2026-02-25 12:53:51.084 244018 DEBUG nova.storage.rbd_utils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:53:51 np0005629333 nova_compute[244014]: 2026-02-25 12:53:51.091 244018 DEBUG oslo_concurrency.processutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:53:51 np0005629333 nova_compute[244014]: 2026-02-25 12:53:51.190 244018 DEBUG nova.policy [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
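The failed network:attach_external_network check above is expected for a plain tenant user: the rule defaults to admin-only and the request credentials carry just the reader/member roles. A hedged reconstruction with oslo.policy (the credential fields are a subset of the dict printed in the log; the admin-only default and the enforcer wiring are illustrative, nova registers its real defaults in nova.policies):

```python
from oslo_config import cfg
from oslo_policy import policy

enforcer = policy.Enforcer(cfg.CONF)
# Assumed admin-only default for this rule.
enforcer.register_default(
    policy.RuleDefault('network:attach_external_network', 'is_admin:True'))

creds = {  # subset of the credential dict printed in the log line
    'is_admin': False,
    'roles': ['reader', 'member'],
    'project_id': 'e227b91c24404ab5aed600e2fe792d32',
}
print(enforcer.enforce('network:attach_external_network', {}, creds))  # False
```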
Feb 25 07:53:51 np0005629333 nova_compute[244014]: 2026-02-25 12:53:51.364 244018 DEBUG oslo_concurrency.processutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.274s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:53:51 np0005629333 nova_compute[244014]: 2026-02-25 12:53:51.437 244018 DEBUG nova.storage.rbd_utils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] resizing rbd image 31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
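After the cached base image is imported into the vms pool (the `rbd import ... --image-format=2` command above), nova grows the image to the flavor's root disk size. A minimal sketch of the logged resize with the python-rbd bindings (pool, image name, and the 1073741824-byte target are taken from the log):

```python
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('vms')
    try:
        with rbd.Image(ioctx, '31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk') as image:
            image.resize(1073741824)  # flavor m1.nano: root_gb=1 -> 1 GiB
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```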
Feb 25 07:53:51 np0005629333 nova_compute[244014]: 2026-02-25 12:53:51.506 244018 DEBUG nova.objects.instance [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'migration_context' on Instance uuid 31c909bc-0d05-4a67-83e9-b45fb2eb35a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:53:51 np0005629333 nova_compute[244014]: 2026-02-25 12:53:51.522 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:53:51 np0005629333 nova_compute[244014]: 2026-02-25 12:53:51.523 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Ensure instance console log exists: /var/lib/nova/instances/31c909bc-0d05-4a67-83e9-b45fb2eb35a9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:53:51 np0005629333 nova_compute[244014]: 2026-02-25 12:53:51.523 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:53:51 np0005629333 nova_compute[244014]: 2026-02-25 12:53:51.524 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:53:51 np0005629333 nova_compute[244014]: 2026-02-25 12:53:51.524 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:53:51 np0005629333 lvm[363735]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:53:51 np0005629333 lvm[363735]: VG ceph_vg1 finished
Feb 25 07:53:51 np0005629333 lvm[363732]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:53:51 np0005629333 lvm[363732]: VG ceph_vg0 finished
Feb 25 07:53:51 np0005629333 lvm[363737]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:53:51 np0005629333 lvm[363737]: VG ceph_vg2 finished
Feb 25 07:53:51 np0005629333 nervous_chatelet[363489]: {}
Feb 25 07:53:51 np0005629333 systemd[1]: libpod-ebd3c390718ac22da87f5706b02a1278e663545a5c16e1575537294291ac48fe.scope: Deactivated successfully.
Feb 25 07:53:51 np0005629333 systemd[1]: libpod-ebd3c390718ac22da87f5706b02a1278e663545a5c16e1575537294291ac48fe.scope: Consumed 1.073s CPU time.
Feb 25 07:53:51 np0005629333 podman[363471]: 2026-02-25 12:53:51.698159772 +0000 UTC m=+0.936699361 container died ebd3c390718ac22da87f5706b02a1278e663545a5c16e1575537294291ac48fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_chatelet, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 07:53:51 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c9102adddcc0f464a24e9fdbcc3ee3fa2121fe32349bf8fc291e52bd5803ac97-merged.mount: Deactivated successfully.
Feb 25 07:53:51 np0005629333 podman[363471]: 2026-02-25 12:53:51.920579252 +0000 UTC m=+1.159118841 container remove ebd3c390718ac22da87f5706b02a1278e663545a5c16e1575537294291ac48fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_chatelet, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Feb 25 07:53:51 np0005629333 systemd[1]: libpod-conmon-ebd3c390718ac22da87f5706b02a1278e663545a5c16e1575537294291ac48fe.scope: Deactivated successfully.
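The start/attach/died/remove sequence for nervous_chatelet is a one-shot cephadm helper container: it ran for about a second, printed `{}` (the bare JSON on the nervous_chatelet line above), and was removed, after which the mgr stores a refreshed host/device inventory via config-key set below. A sketch of the same one-shot pattern (the exact command cephadm ran inside the image is not in the log; ceph-volume inventory is an assumption):

```python
import subprocess

IMAGE = ('quay.io/ceph/ceph@sha256:'
         '1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86')

# Hypothetical payload; only the image digest and the '{}' output are
# attested by the log lines above.
result = subprocess.run(
    ['podman', 'run', '--rm', IMAGE,
     'ceph-volume', 'inventory', '--format', 'json'],
    capture_output=True, text=True, check=True)
print(result.stdout.strip())  # the logged container printed '{}'
```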
Feb 25 07:53:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:53:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:53:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:53:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:53:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2199: 305 pgs: 305 active+clean; 273 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.7 MiB/s wr, 90 op/s
Feb 25 07:53:52 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:53:52 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:53:52 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Feb 25 07:53:52 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:52.999809) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024032999848, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 456, "num_deletes": 251, "total_data_size": 404517, "memory_usage": 413032, "flush_reason": "Manual Compaction"}
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024033004100, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 389601, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46905, "largest_seqno": 47360, "table_properties": {"data_size": 386901, "index_size": 736, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6504, "raw_average_key_size": 19, "raw_value_size": 381548, "raw_average_value_size": 1122, "num_data_blocks": 32, "num_entries": 340, "num_filter_entries": 340, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772024014, "oldest_key_time": 1772024014, "file_creation_time": 1772024032, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 4339 microseconds, and 1962 cpu microseconds.
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.004147) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 389601 bytes OK
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.004165) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.005791) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.005810) EVENT_LOG_v1 {"time_micros": 1772024033005804, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.005828) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 401746, prev total WAL file size 401746, number of live WAL files 2.
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.006209) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(380KB)], [107(10MB)]
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024033006247, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 11600614, "oldest_snapshot_seqno": -1}
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6786 keys, 9892285 bytes, temperature: kUnknown
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024033056989, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 9892285, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9846196, "index_size": 27993, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17029, "raw_key_size": 176935, "raw_average_key_size": 26, "raw_value_size": 9724260, "raw_average_value_size": 1432, "num_data_blocks": 1094, "num_entries": 6786, "num_filter_entries": 6786, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772024033, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.057294) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 9892285 bytes
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.058319) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 228.1 rd, 194.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.7 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(55.2) write-amplify(25.4) OK, records in: 7299, records dropped: 513 output_compression: NoCompression
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.058348) EVENT_LOG_v1 {"time_micros": 1772024033058335, "job": 64, "event": "compaction_finished", "compaction_time_micros": 50852, "compaction_time_cpu_micros": 16908, "output_level": 6, "num_output_files": 1, "total_output_size": 9892285, "num_input_records": 7299, "num_output_records": 6786, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024033058561, "job": 64, "event": "table_file_deletion", "file_number": 109}
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024033060118, "job": 64, "event": "table_file_deletion", "file_number": 107}
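The amplification figures in JOB 64's summary follow directly from the byte counts logged for the flush and compaction: a ~380 KB L0 file was merged with a ~10 MB L6 file into a new ~9.4 MB L6 file. Reproducing the reported 25.4 / 55.2 from those numbers:

```python
# Byte counts taken from the JOB 63/64 lines above.
l0_in = 389_601                # table #109 (the flushed memtable)
total_in = 11_600_614          # "input_data_size" for the compaction
l6_in = total_in - l0_in       # table #107
out = 9_892_285                # table #110 written back to L6

write_amp = out / l0_in                  # bytes written per new L0 byte
rw_amp = (l0_in + l6_in + out) / l0_in   # bytes read+written per new L0 byte
print(f'{write_amp:.1f} {rw_amp:.1f}')   # 25.4 55.2, matching the log
```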
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.006134) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.060259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.060269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.060274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.060278) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:53:53.060283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:53:53 np0005629333 nova_compute[244014]: 2026-02-25 12:53:53.150 244018 DEBUG nova.network.neutron [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Successfully updated port: 205ead5a-797e-421e-87e3-ec5dac2037d3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:53:53 np0005629333 nova_compute[244014]: 2026-02-25 12:53:53.165 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-31c909bc-0d05-4a67-83e9-b45fb2eb35a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:53:53 np0005629333 nova_compute[244014]: 2026-02-25 12:53:53.165 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-31c909bc-0d05-4a67-83e9-b45fb2eb35a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:53:53 np0005629333 nova_compute[244014]: 2026-02-25 12:53:53.166 244018 DEBUG nova.network.neutron [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:53:53 np0005629333 nova_compute[244014]: 2026-02-25 12:53:53.257 244018 DEBUG nova.compute.manager [req-929aaba2-5418-49e2-b3db-8a23fb3c7f7c req-301defeb-5bca-4690-adf2-22717a3565c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Received event network-changed-205ead5a-797e-421e-87e3-ec5dac2037d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:53:53 np0005629333 nova_compute[244014]: 2026-02-25 12:53:53.257 244018 DEBUG nova.compute.manager [req-929aaba2-5418-49e2-b3db-8a23fb3c7f7c req-301defeb-5bca-4690-adf2-22717a3565c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Refreshing instance network info cache due to event network-changed-205ead5a-797e-421e-87e3-ec5dac2037d3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:53:53 np0005629333 nova_compute[244014]: 2026-02-25 12:53:53.257 244018 DEBUG oslo_concurrency.lockutils [req-929aaba2-5418-49e2-b3db-8a23fb3c7f7c req-301defeb-5bca-4690-adf2-22717a3565c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-31c909bc-0d05-4a67-83e9-b45fb2eb35a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:53:53 np0005629333 nova_compute[244014]: 2026-02-25 12:53:53.344 244018 DEBUG nova.network.neutron [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:53:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:53:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 305 active+clean; 273 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 192 KiB/s rd, 2.0 MiB/s wr, 50 op/s
Feb 25 07:53:54 np0005629333 nova_compute[244014]: 2026-02-25 12:53:54.685 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:55.031 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:53:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:55.032 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:53:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:55.032 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:53:55 np0005629333 nova_compute[244014]: 2026-02-25 12:53:55.146 244018 DEBUG nova.network.neutron [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Updating instance_info_cache with network_info: [{"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:53:55 np0005629333 nova_compute[244014]: 2026-02-25 12:53:55.254 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-31c909bc-0d05-4a67-83e9-b45fb2eb35a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:53:55 np0005629333 nova_compute[244014]: 2026-02-25 12:53:55.255 244018 DEBUG nova.compute.manager [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Instance network_info: |[{"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:53:55 np0005629333 nova_compute[244014]: 2026-02-25 12:53:55.255 244018 DEBUG oslo_concurrency.lockutils [req-929aaba2-5418-49e2-b3db-8a23fb3c7f7c req-301defeb-5bca-4690-adf2-22717a3565c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-31c909bc-0d05-4a67-83e9-b45fb2eb35a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:53:55 np0005629333 nova_compute[244014]: 2026-02-25 12:53:55.255 244018 DEBUG nova.network.neutron [req-929aaba2-5418-49e2-b3db-8a23fb3c7f7c req-301defeb-5bca-4690-adf2-22717a3565c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Refreshing network info cache for port 205ead5a-797e-421e-87e3-ec5dac2037d3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:53:55 np0005629333 nova_compute[244014]: 2026-02-25 12:53:55.260 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Start _get_guest_xml network_info=[{"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:53:55 np0005629333 nova_compute[244014]: 2026-02-25 12:53:55.266 244018 WARNING nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:53:55 np0005629333 nova_compute[244014]: 2026-02-25 12:53:55.273 244018 DEBUG nova.virt.libvirt.host [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:53:55 np0005629333 nova_compute[244014]: 2026-02-25 12:53:55.274 244018 DEBUG nova.virt.libvirt.host [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:53:55 np0005629333 nova_compute[244014]: 2026-02-25 12:53:55.278 244018 DEBUG nova.virt.libvirt.host [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:53:55 np0005629333 nova_compute[244014]: 2026-02-25 12:53:55.279 244018 DEBUG nova.virt.libvirt.host [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:53:55 np0005629333 nova_compute[244014]: 2026-02-25 12:53:55.279 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:53:55 np0005629333 nova_compute[244014]: 2026-02-25 12:53:55.279 244018 DEBUG nova.virt.hardware [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:53:55 np0005629333 nova_compute[244014]: 2026-02-25 12:53:55.280 244018 DEBUG nova.virt.hardware [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:53:55 np0005629333 nova_compute[244014]: 2026-02-25 12:53:55.280 244018 DEBUG nova.virt.hardware [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:53:55 np0005629333 nova_compute[244014]: 2026-02-25 12:53:55.280 244018 DEBUG nova.virt.hardware [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:53:55 np0005629333 nova_compute[244014]: 2026-02-25 12:53:55.281 244018 DEBUG nova.virt.hardware [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:53:55 np0005629333 nova_compute[244014]: 2026-02-25 12:53:55.281 244018 DEBUG nova.virt.hardware [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:53:55 np0005629333 nova_compute[244014]: 2026-02-25 12:53:55.281 244018 DEBUG nova.virt.hardware [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:53:55 np0005629333 nova_compute[244014]: 2026-02-25 12:53:55.282 244018 DEBUG nova.virt.hardware [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:53:55 np0005629333 nova_compute[244014]: 2026-02-25 12:53:55.282 244018 DEBUG nova.virt.hardware [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:53:55 np0005629333 nova_compute[244014]: 2026-02-25 12:53:55.282 244018 DEBUG nova.virt.hardware [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:53:55 np0005629333 nova_compute[244014]: 2026-02-25 12:53:55.282 244018 DEBUG nova.virt.hardware [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
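The topology walk above is fully determined by its inputs: the m1.nano flavor has vcpus=1, and neither flavor nor image sets limits or preferences, so the only sockets x cores x threads factorization of 1 is 1:1:1, which is what lands in the <topology> element of the domain XML below. A toy reconstruction of the search (not nova's exact algorithm; the 65536 caps come from the "limits were" line):

```python
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    # Enumerate factorizations sockets * cores * threads == vcpus.
    return [(s, c, t)
            for s in range(1, min(vcpus, max_sockets) + 1)
            for c in range(1, min(vcpus, max_cores) + 1)
            for t in range(1, min(vcpus, max_threads) + 1)
            if s * c * t == vcpus]

print(possible_topologies(1))  # [(1, 1, 1)] -- "Got 1 possible topologies"
```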
Feb 25 07:53:55 np0005629333 nova_compute[244014]: 2026-02-25 12:53:55.286 244018 DEBUG oslo_concurrency.processutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:53:55 np0005629333 nova_compute[244014]: 2026-02-25 12:53:55.441 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:53:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/26886895' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:53:55 np0005629333 nova_compute[244014]: 2026-02-25 12:53:55.857 244018 DEBUG oslo_concurrency.processutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
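The mon dump calls are how nova discovers the monitor endpoints that later appear as <host name="192.168.122.100" port="6789"/> in the guest XML below. A minimal sketch of that discovery, assuming legacy v1 addr strings of the form ip:6789/0 in the JSON (consistent with the port in the XML):

```python
import json
import subprocess

raw = subprocess.check_output(
    ['ceph', 'mon', 'dump', '--format=json',
     '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
monmap = json.loads(raw)

# e.g. public_addr == "192.168.122.100:6789/0" -> ("192.168.122.100", "6789")
endpoints = []
for mon in monmap['mons']:
    host, rest = mon['public_addr'].rsplit(':', 1)
    endpoints.append((host, rest.split('/')[0]))
print(endpoints)
```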
Feb 25 07:53:55 np0005629333 nova_compute[244014]: 2026-02-25 12:53:55.887 244018 DEBUG nova.storage.rbd_utils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:53:55 np0005629333 nova_compute[244014]: 2026-02-25 12:53:55.892 244018 DEBUG oslo_concurrency.processutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.301 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "945e5549-40d1-4eae-8179-84ad1d751957" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.302 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:53:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:53:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2338384354' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.428 244018 DEBUG nova.compute.manager [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.442 244018 DEBUG oslo_concurrency.processutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.443 244018 DEBUG nova.virt.libvirt.vif [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:53:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-726997753',display_name='tempest-TestNetworkBasicOps-server-726997753',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-726997753',id=131,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFbOW6FmwA5xCX/nfm45JqiS9iGNwncp9v5d8H3G3ZA8NRo+s1Asg0gZAqq62mcFCBdPP+KeNg7r6EvgTxZQYa2mEaBkl1lAwClqNYTOE5oGijuBkGH8PPKePy7SM1EfdQ==',key_name='tempest-TestNetworkBasicOps-533048180',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-p8om99l7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:53:50Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=31c909bc-0d05-4a67-83e9-b45fb2eb35a9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.444 244018 DEBUG nova.network.os_vif_util [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.445 244018 DEBUG nova.network.os_vif_util [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:de:be,bridge_name='br-int',has_traffic_filtering=True,id=205ead5a-797e-421e-87e3-ec5dac2037d3,network=Network(02748d96-83c0-45be-acd6-081ad673e4bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap205ead5a-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.447 244018 DEBUG nova.objects.instance [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_devices' on Instance uuid 31c909bc-0d05-4a67-83e9-b45fb2eb35a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.450 244018 DEBUG nova.network.neutron [req-929aaba2-5418-49e2-b3db-8a23fb3c7f7c req-301defeb-5bca-4690-adf2-22717a3565c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Updated VIF entry in instance network info cache for port 205ead5a-797e-421e-87e3-ec5dac2037d3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.451 244018 DEBUG nova.network.neutron [req-929aaba2-5418-49e2-b3db-8a23fb3c7f7c req-301defeb-5bca-4690-adf2-22717a3565c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Updating instance_info_cache with network_info: [{"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.525 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:53:56 np0005629333 nova_compute[244014]:  <uuid>31c909bc-0d05-4a67-83e9-b45fb2eb35a9</uuid>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:  <name>instance-00000083</name>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestNetworkBasicOps-server-726997753</nova:name>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:53:55</nova:creationTime>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:53:56 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:        <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:        <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:        <nova:port uuid="205ead5a-797e-421e-87e3-ec5dac2037d3">
Feb 25 07:53:56 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <entry name="serial">31c909bc-0d05-4a67-83e9-b45fb2eb35a9</entry>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <entry name="uuid">31c909bc-0d05-4a67-83e9-b45fb2eb35a9</entry>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk">
Feb 25 07:53:56 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:53:56 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk.config">
Feb 25 07:53:56 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:53:56 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:5f:de:be"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <target dev="tap205ead5a-79"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/31c909bc-0d05-4a67-83e9-b45fb2eb35a9/console.log" append="off"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:53:56 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:53:56 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:53:56 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:53:56 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.527 244018 DEBUG nova.compute.manager [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Preparing to wait for external event network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.527 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.528 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.528 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.529 244018 DEBUG nova.virt.libvirt.vif [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:53:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-726997753',display_name='tempest-TestNetworkBasicOps-server-726997753',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-726997753',id=131,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFbOW6FmwA5xCX/nfm45JqiS9iGNwncp9v5d8H3G3ZA8NRo+s1Asg0gZAqq62mcFCBdPP+KeNg7r6EvgTxZQYa2mEaBkl1lAwClqNYTOE5oGijuBkGH8PPKePy7SM1EfdQ==',key_name='tempest-TestNetworkBasicOps-533048180',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-p8om99l7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:53:50Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=31c909bc-0d05-4a67-83e9-b45fb2eb35a9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.529 244018 DEBUG nova.network.os_vif_util [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.530 244018 DEBUG nova.network.os_vif_util [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:de:be,bridge_name='br-int',has_traffic_filtering=True,id=205ead5a-797e-421e-87e3-ec5dac2037d3,network=Network(02748d96-83c0-45be-acd6-081ad673e4bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap205ead5a-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.530 244018 DEBUG os_vif [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:de:be,bridge_name='br-int',has_traffic_filtering=True,id=205ead5a-797e-421e-87e3-ec5dac2037d3,network=Network(02748d96-83c0-45be-acd6-081ad673e4bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap205ead5a-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.531 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.532 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.532 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.534 244018 DEBUG oslo_concurrency.lockutils [req-929aaba2-5418-49e2-b3db-8a23fb3c7f7c req-301defeb-5bca-4690-adf2-22717a3565c4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-31c909bc-0d05-4a67-83e9-b45fb2eb35a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.538 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.538 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap205ead5a-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.539 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap205ead5a-79, col_values=(('external_ids', {'iface-id': '205ead5a-797e-421e-87e3-ec5dac2037d3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5f:de:be', 'vm-uuid': '31c909bc-0d05-4a67-83e9-b45fb2eb35a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.540 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:56 np0005629333 NetworkManager[49836]: <info>  [1772024036.5420] manager: (tap205ead5a-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/577)
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.546 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.549 244018 INFO os_vif [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:de:be,bridge_name='br-int',has_traffic_filtering=True,id=205ead5a-797e-421e-87e3-ec5dac2037d3,network=Network(02748d96-83c0-45be-acd6-081ad673e4bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap205ead5a-79')#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.568 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.569 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.576 244018 DEBUG nova.virt.hardware [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.576 244018 INFO nova.compute.claims [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:53:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2201: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 192 KiB/s rd, 2.1 MiB/s wr, 51 op/s
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.732 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.733 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.734 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:5f:de:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.735 244018 INFO nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Using config drive#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.773 244018 DEBUG nova.storage.rbd_utils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:53:56 np0005629333 nova_compute[244014]: 2026-02-25 12:53:56.921 244018 DEBUG oslo_concurrency.processutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:53:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:53:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3403396074' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:53:57 np0005629333 nova_compute[244014]: 2026-02-25 12:53:57.485 244018 DEBUG oslo_concurrency.processutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:53:57 np0005629333 nova_compute[244014]: 2026-02-25 12:53:57.493 244018 DEBUG nova.compute.provider_tree [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:53:57 np0005629333 nova_compute[244014]: 2026-02-25 12:53:57.658 244018 DEBUG nova.scheduler.client.report [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:53:57 np0005629333 nova_compute[244014]: 2026-02-25 12:53:57.765 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.196s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:53:57 np0005629333 nova_compute[244014]: 2026-02-25 12:53:57.766 244018 DEBUG nova.compute.manager [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:53:57 np0005629333 nova_compute[244014]: 2026-02-25 12:53:57.855 244018 DEBUG nova.compute.manager [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:53:57 np0005629333 nova_compute[244014]: 2026-02-25 12:53:57.856 244018 DEBUG nova.network.neutron [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:53:57 np0005629333 nova_compute[244014]: 2026-02-25 12:53:57.904 244018 INFO nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:53:57 np0005629333 nova_compute[244014]: 2026-02-25 12:53:57.937 244018 DEBUG nova.compute.manager [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:53:58 np0005629333 nova_compute[244014]: 2026-02-25 12:53:58.078 244018 DEBUG nova.compute.manager [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:53:58 np0005629333 nova_compute[244014]: 2026-02-25 12:53:58.080 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:53:58 np0005629333 nova_compute[244014]: 2026-02-25 12:53:58.080 244018 INFO nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Creating image(s)#033[00m
Feb 25 07:53:58 np0005629333 nova_compute[244014]: 2026-02-25 12:53:58.115 244018 DEBUG nova.storage.rbd_utils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 945e5549-40d1-4eae-8179-84ad1d751957_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:53:58 np0005629333 nova_compute[244014]: 2026-02-25 12:53:58.153 244018 DEBUG nova.storage.rbd_utils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 945e5549-40d1-4eae-8179-84ad1d751957_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:53:58 np0005629333 nova_compute[244014]: 2026-02-25 12:53:58.188 244018 DEBUG nova.storage.rbd_utils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 945e5549-40d1-4eae-8179-84ad1d751957_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:53:58 np0005629333 nova_compute[244014]: 2026-02-25 12:53:58.192 244018 DEBUG oslo_concurrency.processutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:53:58 np0005629333 nova_compute[244014]: 2026-02-25 12:53:58.274 244018 DEBUG oslo_concurrency.processutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:53:58 np0005629333 nova_compute[244014]: 2026-02-25 12:53:58.275 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:53:58 np0005629333 nova_compute[244014]: 2026-02-25 12:53:58.276 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:53:58 np0005629333 nova_compute[244014]: 2026-02-25 12:53:58.277 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:53:58 np0005629333 nova_compute[244014]: 2026-02-25 12:53:58.313 244018 DEBUG nova.storage.rbd_utils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 945e5549-40d1-4eae-8179-84ad1d751957_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:53:58 np0005629333 nova_compute[244014]: 2026-02-25 12:53:58.318 244018 DEBUG oslo_concurrency.processutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 945e5549-40d1-4eae-8179-84ad1d751957_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:53:58 np0005629333 nova_compute[244014]: 2026-02-25 12:53:58.558 244018 DEBUG oslo_concurrency.processutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 945e5549-40d1-4eae-8179-84ad1d751957_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:53:58 np0005629333 nova_compute[244014]: 2026-02-25 12:53:58.646 244018 DEBUG nova.storage.rbd_utils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image 945e5549-40d1-4eae-8179-84ad1d751957_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 25 07:53:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2202: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.9 MiB/s wr, 40 op/s
Feb 25 07:53:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:53:58 np0005629333 nova_compute[244014]: 2026-02-25 12:53:58.755 244018 DEBUG nova.objects.instance [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid 945e5549-40d1-4eae-8179-84ad1d751957 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:53:58 np0005629333 nova_compute[244014]: 2026-02-25 12:53:58.773 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:53:58 np0005629333 nova_compute[244014]: 2026-02-25 12:53:58.774 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Ensure instance console log exists: /var/lib/nova/instances/945e5549-40d1-4eae-8179-84ad1d751957/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:53:58 np0005629333 nova_compute[244014]: 2026-02-25 12:53:58.775 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:53:58 np0005629333 nova_compute[244014]: 2026-02-25 12:53:58.775 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:53:58 np0005629333 nova_compute[244014]: 2026-02-25 12:53:58.776 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:53:59 np0005629333 nova_compute[244014]: 2026-02-25 12:53:59.087 244018 INFO nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Creating config drive at /var/lib/nova/instances/31c909bc-0d05-4a67-83e9-b45fb2eb35a9/disk.config#033[00m
Feb 25 07:53:59 np0005629333 nova_compute[244014]: 2026-02-25 12:53:59.094 244018 DEBUG oslo_concurrency.processutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/31c909bc-0d05-4a67-83e9-b45fb2eb35a9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpui_wleo6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:53:59 np0005629333 nova_compute[244014]: 2026-02-25 12:53:59.129 244018 DEBUG nova.policy [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:53:59 np0005629333 nova_compute[244014]: 2026-02-25 12:53:59.230 244018 DEBUG oslo_concurrency.processutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/31c909bc-0d05-4a67-83e9-b45fb2eb35a9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpui_wleo6" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:53:59 np0005629333 nova_compute[244014]: 2026-02-25 12:53:59.266 244018 DEBUG nova.storage.rbd_utils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:53:59 np0005629333 nova_compute[244014]: 2026-02-25 12:53:59.271 244018 DEBUG oslo_concurrency.processutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/31c909bc-0d05-4a67-83e9-b45fb2eb35a9/disk.config 31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:53:59 np0005629333 nova_compute[244014]: 2026-02-25 12:53:59.411 244018 DEBUG oslo_concurrency.processutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/31c909bc-0d05-4a67-83e9-b45fb2eb35a9/disk.config 31c909bc-0d05-4a67-83e9-b45fb2eb35a9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:53:59 np0005629333 nova_compute[244014]: 2026-02-25 12:53:59.413 244018 INFO nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Deleting local config drive /var/lib/nova/instances/31c909bc-0d05-4a67-83e9-b45fb2eb35a9/disk.config because it was imported into RBD.#033[00m
Feb 25 07:53:59 np0005629333 kernel: tap205ead5a-79: entered promiscuous mode
Feb 25 07:53:59 np0005629333 NetworkManager[49836]: <info>  [1772024039.4795] manager: (tap205ead5a-79): new Tun device (/org/freedesktop/NetworkManager/Devices/578)
Feb 25 07:53:59 np0005629333 nova_compute[244014]: 2026-02-25 12:53:59.480 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:53:59Z|01392|binding|INFO|Claiming lport 205ead5a-797e-421e-87e3-ec5dac2037d3 for this chassis.
Feb 25 07:53:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:53:59Z|01393|binding|INFO|205ead5a-797e-421e-87e3-ec5dac2037d3: Claiming fa:16:3e:5f:de:be 10.100.0.13
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.492 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:de:be 10.100.0.13'], port_security=['fa:16:3e:5f:de:be 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-564267725', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '31c909bc-0d05-4a67-83e9-b45fb2eb35a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02748d96-83c0-45be-acd6-081ad673e4bc', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-564267725', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4f6e829b-663a-471b-98d6-d0d40f869440', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=988642bf-bd28-4e43-bffa-6e16e232d40d, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=205ead5a-797e-421e-87e3-ec5dac2037d3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:53:59 np0005629333 nova_compute[244014]: 2026-02-25 12:53:59.492 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:53:59Z|01394|binding|INFO|Setting lport 205ead5a-797e-421e-87e3-ec5dac2037d3 ovn-installed in OVS
Feb 25 07:53:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:53:59Z|01395|binding|INFO|Setting lport 205ead5a-797e-421e-87e3-ec5dac2037d3 up in Southbound
Feb 25 07:53:59 np0005629333 nova_compute[244014]: 2026-02-25 12:53:59.496 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.494 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 205ead5a-797e-421e-87e3-ec5dac2037d3 in datapath 02748d96-83c0-45be-acd6-081ad673e4bc bound to our chassis#033[00m
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.496 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 02748d96-83c0-45be-acd6-081ad673e4bc#033[00m
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.508 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b512e82a-13d2-49e3-885b-3cdce915b494]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.510 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap02748d96-81 in ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.512 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap02748d96-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.512 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ba6774f4-eac7-480c-9d3a-22ba10581de2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.513 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2e6a298c-9a0d-420a-b5ab-6445fd40cb41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:59 np0005629333 systemd-machined[210048]: New machine qemu-163-instance-00000083.
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.526 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[b7097cd3-5e26-4f0d-8e05-3e902733d6ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:59 np0005629333 systemd[1]: Started Virtual Machine qemu-163-instance-00000083.
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.538 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[57782821-b4c6-49dd-bee3-3fdf645846a6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:59 np0005629333 systemd-udevd[364103]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:53:59 np0005629333 NetworkManager[49836]: <info>  [1772024039.5573] device (tap205ead5a-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:53:59 np0005629333 NetworkManager[49836]: <info>  [1772024039.5585] device (tap205ead5a-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.566 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[02c8290b-6169-4992-91f9-1bbff01de279]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.570 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[669a6b03-2d0b-42e4-99d0-c7906c291334]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:59 np0005629333 NetworkManager[49836]: <info>  [1772024039.5723] manager: (tap02748d96-80): new Veth device (/org/freedesktop/NetworkManager/Devices/579)
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.594 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[360bff30-e5bb-4842-868a-62824cd11dc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.598 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4e73e7b2-0a7c-41f5-93d5-bfd27694f9f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:59 np0005629333 NetworkManager[49836]: <info>  [1772024039.6169] device (tap02748d96-80): carrier: link connected
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.622 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2290689f-27ce-4676-b33b-9d34fd63d6ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.635 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb5d84d-5050-44b1-81dd-a086a05199b9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02748d96-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:5d:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 413], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600902, 'reachable_time': 31482, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364133, 'error': None, 'target': 'ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.645 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3d41fa17-d9b3-47f2-b0b9-cad99506321c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe33:5d94'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 600902, 'tstamp': 600902}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364134, 'error': None, 'target': 'ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.661 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[44289261-5588-4df5-9112-9dcedb2bac4c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02748d96-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:5d:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 413], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600902, 'reachable_time': 31482, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364135, 'error': None, 'target': 'ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.686 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f9030c-92ff-436c-bf5b-f6ba84ca0ad4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.751 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ab2eb025-41e7-4bee-aa57-0ff376b7ba40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.752 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02748d96-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.753 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.754 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02748d96-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:53:59 np0005629333 nova_compute[244014]: 2026-02-25 12:53:59.756 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:59 np0005629333 kernel: tap02748d96-80: entered promiscuous mode
Feb 25 07:53:59 np0005629333 NetworkManager[49836]: <info>  [1772024039.7579] manager: (tap02748d96-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/580)
Feb 25 07:53:59 np0005629333 nova_compute[244014]: 2026-02-25 12:53:59.761 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.762 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap02748d96-80, col_values=(('external_ids', {'iface-id': 'd05bda2c-299c-4a37-881d-1ed81c75bb47'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:53:59 np0005629333 nova_compute[244014]: 2026-02-25 12:53:59.764 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:53:59Z|01396|binding|INFO|Releasing lport d05bda2c-299c-4a37-881d-1ed81c75bb47 from this chassis (sb_readonly=0)
Feb 25 07:53:59 np0005629333 nova_compute[244014]: 2026-02-25 12:53:59.773 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.774 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/02748d96-83c0-45be-acd6-081ad673e4bc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/02748d96-83c0-45be-acd6-081ad673e4bc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.775 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[710167a8-2afd-4f20-a319-d9aa1b3a4df1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.776 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-02748d96-83c0-45be-acd6-081ad673e4bc
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/02748d96-83c0-45be-acd6-081ad673e4bc.pid.haproxy
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 02748d96-83c0-45be-acd6-081ad673e4bc
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:53:59 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:53:59.777 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc', 'env', 'PROCESS_TAG=haproxy-02748d96-83c0-45be-acd6-081ad673e4bc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/02748d96-83c0-45be-acd6-081ad673e4bc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.052 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024040.05061, 31c909bc-0d05-4a67-83e9-b45fb2eb35a9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.053 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] VM Started (Lifecycle Event)#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.080 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.085 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024040.0512712, 31c909bc-0d05-4a67-83e9-b45fb2eb35a9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.085 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.130 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.134 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:54:00 np0005629333 podman[364209]: 2026-02-25 12:54:00.188518999 +0000 UTC m=+0.069737800 container create 44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.214 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:54:00 np0005629333 systemd[1]: Started libpod-conmon-44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c.scope.
Feb 25 07:54:00 np0005629333 podman[364209]: 2026-02-25 12:54:00.156920347 +0000 UTC m=+0.038139208 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:54:00 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:54:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c900f458115b85a5678678f33d3ef16ca7c94ef55bdacac047617c1bfeaf0708/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:54:00 np0005629333 podman[364209]: 2026-02-25 12:54:00.281296419 +0000 UTC m=+0.162515270 container init 44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:54:00 np0005629333 podman[364209]: 2026-02-25 12:54:00.288460961 +0000 UTC m=+0.169679752 container start 44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.303 244018 DEBUG nova.compute.manager [req-6d2f226a-4f9a-4be0-91d8-4368a49ba7fd req-70201471-86c8-4d34-894e-60834bb7808d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Received event network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.304 244018 DEBUG oslo_concurrency.lockutils [req-6d2f226a-4f9a-4be0-91d8-4368a49ba7fd req-70201471-86c8-4d34-894e-60834bb7808d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.304 244018 DEBUG oslo_concurrency.lockutils [req-6d2f226a-4f9a-4be0-91d8-4368a49ba7fd req-70201471-86c8-4d34-894e-60834bb7808d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.305 244018 DEBUG oslo_concurrency.lockutils [req-6d2f226a-4f9a-4be0-91d8-4368a49ba7fd req-70201471-86c8-4d34-894e-60834bb7808d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.305 244018 DEBUG nova.compute.manager [req-6d2f226a-4f9a-4be0-91d8-4368a49ba7fd req-70201471-86c8-4d34-894e-60834bb7808d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Processing event network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.307 244018 DEBUG nova.compute.manager [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.327 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024040.3261776, 31c909bc-0d05-4a67-83e9-b45fb2eb35a9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:54:00 np0005629333 neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc[364224]: [NOTICE]   (364228) : New worker (364230) forked
Feb 25 07:54:00 np0005629333 neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc[364224]: [NOTICE]   (364228) : Loading success.
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.329 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.331 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.349 244018 INFO nova.virt.libvirt.driver [-] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Instance spawned successfully.#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.350 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.355 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.361 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.376 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.377 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.378 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.378 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.379 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.380 244018 DEBUG nova.virt.libvirt.driver [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.388 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.433 244018 INFO nova.compute.manager [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Took 9.56 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.434 244018 DEBUG nova.compute.manager [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.446 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.510 244018 INFO nova.compute.manager [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Took 10.81 seconds to build instance.#033[00m
Feb 25 07:54:00 np0005629333 nova_compute[244014]: 2026-02-25 12:54:00.532 244018 DEBUG oslo_concurrency.lockutils [None req-aa556eaf-d958-45db-bf93-f056e04321ae 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:54:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:54:01 np0005629333 nova_compute[244014]: 2026-02-25 12:54:01.191 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:01.192 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:54:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:01.194 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 25 07:54:01 np0005629333 nova_compute[244014]: 2026-02-25 12:54:01.541 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:54:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:54:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:54:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:54:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:54:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:54:01 np0005629333 nova_compute[244014]: 2026-02-25 12:54:01.868 244018 DEBUG nova.network.neutron [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Successfully created port: edede35b-327b-4dc5-8432-cc64bb4a290d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:54:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:02.196 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:54:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2204: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 124 op/s
Feb 25 07:54:02 np0005629333 nova_compute[244014]: 2026-02-25 12:54:02.903 244018 DEBUG nova.compute.manager [req-9f93c530-9b75-463b-bee7-098956a3ff22 req-06ed874a-b676-4701-93a0-999c6b69e21c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Received event network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:54:02 np0005629333 nova_compute[244014]: 2026-02-25 12:54:02.904 244018 DEBUG oslo_concurrency.lockutils [req-9f93c530-9b75-463b-bee7-098956a3ff22 req-06ed874a-b676-4701-93a0-999c6b69e21c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:02 np0005629333 nova_compute[244014]: 2026-02-25 12:54:02.904 244018 DEBUG oslo_concurrency.lockutils [req-9f93c530-9b75-463b-bee7-098956a3ff22 req-06ed874a-b676-4701-93a0-999c6b69e21c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:02 np0005629333 nova_compute[244014]: 2026-02-25 12:54:02.904 244018 DEBUG oslo_concurrency.lockutils [req-9f93c530-9b75-463b-bee7-098956a3ff22 req-06ed874a-b676-4701-93a0-999c6b69e21c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:54:02 np0005629333 nova_compute[244014]: 2026-02-25 12:54:02.904 244018 DEBUG nova.compute.manager [req-9f93c530-9b75-463b-bee7-098956a3ff22 req-06ed874a-b676-4701-93a0-999c6b69e21c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] No waiting events found dispatching network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:54:02 np0005629333 nova_compute[244014]: 2026-02-25 12:54:02.905 244018 WARNING nova.compute.manager [req-9f93c530-9b75-463b-bee7-098956a3ff22 req-06ed874a-b676-4701-93a0-999c6b69e21c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Received unexpected event network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:54:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:54:04 np0005629333 nova_compute[244014]: 2026-02-25 12:54:04.106 244018 DEBUG nova.network.neutron [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Successfully updated port: edede35b-327b-4dc5-8432-cc64bb4a290d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:54:04 np0005629333 nova_compute[244014]: 2026-02-25 12:54:04.133 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-945e5549-40d1-4eae-8179-84ad1d751957" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:54:04 np0005629333 nova_compute[244014]: 2026-02-25 12:54:04.134 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-945e5549-40d1-4eae-8179-84ad1d751957" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:54:04 np0005629333 nova_compute[244014]: 2026-02-25 12:54:04.134 244018 DEBUG nova.network.neutron [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:54:04 np0005629333 nova_compute[244014]: 2026-02-25 12:54:04.227 244018 DEBUG nova.compute.manager [req-5b271d2d-bc3e-4e67-a5e1-fa84c2f502a9 req-6bf2391c-fc4e-48dd-831b-52b8081d9e23 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Received event network-changed-edede35b-327b-4dc5-8432-cc64bb4a290d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:54:04 np0005629333 nova_compute[244014]: 2026-02-25 12:54:04.228 244018 DEBUG nova.compute.manager [req-5b271d2d-bc3e-4e67-a5e1-fa84c2f502a9 req-6bf2391c-fc4e-48dd-831b-52b8081d9e23 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Refreshing instance network info cache due to event network-changed-edede35b-327b-4dc5-8432-cc64bb4a290d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:54:04 np0005629333 nova_compute[244014]: 2026-02-25 12:54:04.228 244018 DEBUG oslo_concurrency.lockutils [req-5b271d2d-bc3e-4e67-a5e1-fa84c2f502a9 req-6bf2391c-fc4e-48dd-831b-52b8081d9e23 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-945e5549-40d1-4eae-8179-84ad1d751957" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:54:04 np0005629333 nova_compute[244014]: 2026-02-25 12:54:04.348 244018 DEBUG nova.network.neutron [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:54:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2205: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.0 MiB/s wr, 96 op/s
Feb 25 07:54:05 np0005629333 nova_compute[244014]: 2026-02-25 12:54:05.445 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:06 np0005629333 nova_compute[244014]: 2026-02-25 12:54:06.544 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2206: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 101 op/s
Feb 25 07:54:06 np0005629333 nova_compute[244014]: 2026-02-25 12:54:06.869 244018 DEBUG nova.network.neutron [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Updating instance_info_cache with network_info: [{"id": "edede35b-327b-4dc5-8432-cc64bb4a290d", "address": "fa:16:3e:06:b5:86", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedede35b-32", "ovs_interfaceid": "edede35b-327b-4dc5-8432-cc64bb4a290d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:54:06 np0005629333 nova_compute[244014]: 2026-02-25 12:54:06.899 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-945e5549-40d1-4eae-8179-84ad1d751957" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:54:06 np0005629333 nova_compute[244014]: 2026-02-25 12:54:06.899 244018 DEBUG nova.compute.manager [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Instance network_info: |[{"id": "edede35b-327b-4dc5-8432-cc64bb4a290d", "address": "fa:16:3e:06:b5:86", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedede35b-32", "ovs_interfaceid": "edede35b-327b-4dc5-8432-cc64bb4a290d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:54:06 np0005629333 nova_compute[244014]: 2026-02-25 12:54:06.901 244018 DEBUG oslo_concurrency.lockutils [req-5b271d2d-bc3e-4e67-a5e1-fa84c2f502a9 req-6bf2391c-fc4e-48dd-831b-52b8081d9e23 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-945e5549-40d1-4eae-8179-84ad1d751957" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:54:06 np0005629333 nova_compute[244014]: 2026-02-25 12:54:06.901 244018 DEBUG nova.network.neutron [req-5b271d2d-bc3e-4e67-a5e1-fa84c2f502a9 req-6bf2391c-fc4e-48dd-831b-52b8081d9e23 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Refreshing network info cache for port edede35b-327b-4dc5-8432-cc64bb4a290d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:54:06 np0005629333 nova_compute[244014]: 2026-02-25 12:54:06.904 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Start _get_guest_xml network_info=[{"id": "edede35b-327b-4dc5-8432-cc64bb4a290d", "address": "fa:16:3e:06:b5:86", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedede35b-32", "ovs_interfaceid": "edede35b-327b-4dc5-8432-cc64bb4a290d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:54:06 np0005629333 nova_compute[244014]: 2026-02-25 12:54:06.909 244018 WARNING nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:54:06 np0005629333 nova_compute[244014]: 2026-02-25 12:54:06.919 244018 DEBUG nova.virt.libvirt.host [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:54:06 np0005629333 nova_compute[244014]: 2026-02-25 12:54:06.920 244018 DEBUG nova.virt.libvirt.host [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:54:06 np0005629333 nova_compute[244014]: 2026-02-25 12:54:06.924 244018 DEBUG nova.virt.libvirt.host [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:54:06 np0005629333 nova_compute[244014]: 2026-02-25 12:54:06.925 244018 DEBUG nova.virt.libvirt.host [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:54:06 np0005629333 nova_compute[244014]: 2026-02-25 12:54:06.925 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:54:06 np0005629333 nova_compute[244014]: 2026-02-25 12:54:06.925 244018 DEBUG nova.virt.hardware [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:54:06 np0005629333 nova_compute[244014]: 2026-02-25 12:54:06.926 244018 DEBUG nova.virt.hardware [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:54:06 np0005629333 nova_compute[244014]: 2026-02-25 12:54:06.926 244018 DEBUG nova.virt.hardware [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:54:06 np0005629333 nova_compute[244014]: 2026-02-25 12:54:06.927 244018 DEBUG nova.virt.hardware [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:54:06 np0005629333 nova_compute[244014]: 2026-02-25 12:54:06.927 244018 DEBUG nova.virt.hardware [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:54:06 np0005629333 nova_compute[244014]: 2026-02-25 12:54:06.927 244018 DEBUG nova.virt.hardware [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:54:06 np0005629333 nova_compute[244014]: 2026-02-25 12:54:06.928 244018 DEBUG nova.virt.hardware [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:54:06 np0005629333 nova_compute[244014]: 2026-02-25 12:54:06.928 244018 DEBUG nova.virt.hardware [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:54:06 np0005629333 nova_compute[244014]: 2026-02-25 12:54:06.928 244018 DEBUG nova.virt.hardware [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:54:06 np0005629333 nova_compute[244014]: 2026-02-25 12:54:06.928 244018 DEBUG nova.virt.hardware [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:54:06 np0005629333 nova_compute[244014]: 2026-02-25 12:54:06.929 244018 DEBUG nova.virt.hardware [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:54:06 np0005629333 nova_compute[244014]: 2026-02-25 12:54:06.931 244018 DEBUG oslo_concurrency.processutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:54:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:54:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1394111518' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:54:07 np0005629333 nova_compute[244014]: 2026-02-25 12:54:07.482 244018 DEBUG oslo_concurrency.processutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:54:07 np0005629333 nova_compute[244014]: 2026-02-25 12:54:07.523 244018 DEBUG nova.storage.rbd_utils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 945e5549-40d1-4eae-8179-84ad1d751957_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:54:07 np0005629333 nova_compute[244014]: 2026-02-25 12:54:07.530 244018 DEBUG oslo_concurrency.processutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:54:07 np0005629333 nova_compute[244014]: 2026-02-25 12:54:07.651 244018 DEBUG nova.compute.manager [req-1a480677-cc26-43c2-8f6a-2ef8d644c085 req-22630e39-5f45-4212-a735-2e80030d7dcc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Received event network-changed-205ead5a-797e-421e-87e3-ec5dac2037d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:54:07 np0005629333 nova_compute[244014]: 2026-02-25 12:54:07.652 244018 DEBUG nova.compute.manager [req-1a480677-cc26-43c2-8f6a-2ef8d644c085 req-22630e39-5f45-4212-a735-2e80030d7dcc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Refreshing instance network info cache due to event network-changed-205ead5a-797e-421e-87e3-ec5dac2037d3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:54:07 np0005629333 nova_compute[244014]: 2026-02-25 12:54:07.652 244018 DEBUG oslo_concurrency.lockutils [req-1a480677-cc26-43c2-8f6a-2ef8d644c085 req-22630e39-5f45-4212-a735-2e80030d7dcc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-31c909bc-0d05-4a67-83e9-b45fb2eb35a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:54:07 np0005629333 nova_compute[244014]: 2026-02-25 12:54:07.653 244018 DEBUG oslo_concurrency.lockutils [req-1a480677-cc26-43c2-8f6a-2ef8d644c085 req-22630e39-5f45-4212-a735-2e80030d7dcc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-31c909bc-0d05-4a67-83e9-b45fb2eb35a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:54:07 np0005629333 nova_compute[244014]: 2026-02-25 12:54:07.653 244018 DEBUG nova.network.neutron [req-1a480677-cc26-43c2-8f6a-2ef8d644c085 req-22630e39-5f45-4212-a735-2e80030d7dcc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Refreshing network info cache for port 205ead5a-797e-421e-87e3-ec5dac2037d3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:54:07 np0005629333 nova_compute[244014]: 2026-02-25 12:54:07.807 244018 DEBUG oslo_concurrency.lockutils [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:07 np0005629333 nova_compute[244014]: 2026-02-25 12:54:07.807 244018 DEBUG oslo_concurrency.lockutils [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:07 np0005629333 nova_compute[244014]: 2026-02-25 12:54:07.808 244018 DEBUG oslo_concurrency.lockutils [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:07 np0005629333 nova_compute[244014]: 2026-02-25 12:54:07.809 244018 DEBUG oslo_concurrency.lockutils [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:07 np0005629333 nova_compute[244014]: 2026-02-25 12:54:07.810 244018 DEBUG oslo_concurrency.lockutils [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
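
The Acquiring/acquired/released triplets above are oslo.concurrency's lockutils at work: terminate_instance serializes on the instance UUID, and the event bookkeeping serializes on a separate "<uuid>-events" lock. A minimal sketch of the same pattern, assuming the public lockutils API (the function bodies are illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events')
    def _clear_events():
        # Hypothetical stand-in for InstanceEvents.clear_events_for_instance;
        # the decorator emits the acquired/released DEBUG lines seen above.
        print('events cleared under lock')

    # Context-manager form, equivalent to the do_terminate_instance lock.
    with lockutils.lock('31c909bc-0d05-4a67-83e9-b45fb2eb35a9'):
        _clear_events()
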
Feb 25 07:54:07 np0005629333 nova_compute[244014]: 2026-02-25 12:54:07.812 244018 INFO nova.compute.manager [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Terminating instance#033[00m
Feb 25 07:54:07 np0005629333 nova_compute[244014]: 2026-02-25 12:54:07.814 244018 DEBUG nova.compute.manager [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:54:07 np0005629333 kernel: tap205ead5a-79 (unregistering): left promiscuous mode
Feb 25 07:54:07 np0005629333 NetworkManager[49836]: <info>  [1772024047.8704] device (tap205ead5a-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:54:07 np0005629333 ovn_controller[147040]: 2026-02-25T12:54:07Z|01397|binding|INFO|Releasing lport 205ead5a-797e-421e-87e3-ec5dac2037d3 from this chassis (sb_readonly=0)
Feb 25 07:54:07 np0005629333 ovn_controller[147040]: 2026-02-25T12:54:07Z|01398|binding|INFO|Setting lport 205ead5a-797e-421e-87e3-ec5dac2037d3 down in Southbound
Feb 25 07:54:07 np0005629333 ovn_controller[147040]: 2026-02-25T12:54:07Z|01399|binding|INFO|Removing iface tap205ead5a-79 ovn-installed in OVS
Feb 25 07:54:07 np0005629333 nova_compute[244014]: 2026-02-25 12:54:07.882 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:07.887 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:de:be 10.100.0.13'], port_security=['fa:16:3e:5f:de:be 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-564267725', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '31c909bc-0d05-4a67-83e9-b45fb2eb35a9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02748d96-83c0-45be-acd6-081ad673e4bc', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-564267725', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4f6e829b-663a-471b-98d6-d0d40f869440', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=988642bf-bd28-4e43-bffa-6e16e232d40d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=205ead5a-797e-421e-87e3-ec5dac2037d3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:54:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:07.888 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 205ead5a-797e-421e-87e3-ec5dac2037d3 in datapath 02748d96-83c0-45be-acd6-081ad673e4bc unbound from our chassis#033[00m
Feb 25 07:54:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:07.889 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 02748d96-83c0-45be-acd6-081ad673e4bc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:54:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:07.890 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d9330ac2-2f9f-4019-ae06-078edaf97555]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:07.890 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc namespace which is not needed anymore#033[00m
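
The teardown above is driven by ovsdbapp's row-event machinery: the metadata agent registers event classes against the Southbound Port_Binding table and reacts when a matching UPDATE arrives, here "up" flipping from True to False. A minimal sketch of such a watcher, assuming ovsdbapp's public RowEvent interface; the class and matching logic are illustrative, not neutron's actual implementation:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortUnboundWatcher(row_event.RowEvent):
        """Hypothetical watcher that fires when a Port_Binding goes down."""

        def __init__(self):
            # Subscribe to UPDATE events on Port_Binding, as in the log.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Mirror the old=Port_Binding(up=[True], ...) comparison above.
            return getattr(old, 'up', None) == [True] and row.up == [False]

        def run(self, event, row, old):
            print('port %s unbound, tearing down metadata' % row.logical_port)
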
Feb 25 07:54:07 np0005629333 nova_compute[244014]: 2026-02-25 12:54:07.900 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:07 np0005629333 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000083.scope: Deactivated successfully.
Feb 25 07:54:07 np0005629333 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000083.scope: Consumed 8.254s CPU time.
Feb 25 07:54:07 np0005629333 systemd-machined[210048]: Machine qemu-163-instance-00000083 terminated.
Feb 25 07:54:08 np0005629333 podman[364299]: 2026-02-25 12:54:08.005953766 +0000 UTC m=+0.106567830 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.035 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:08 np0005629333 podman[364302]: 2026-02-25 12:54:08.03759954 +0000 UTC m=+0.133672076 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.039 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:08 np0005629333 neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc[364224]: [NOTICE]   (364228) : haproxy version is 2.8.14-c23fe91
Feb 25 07:54:08 np0005629333 neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc[364224]: [NOTICE]   (364228) : path to executable is /usr/sbin/haproxy
Feb 25 07:54:08 np0005629333 neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc[364224]: [WARNING]  (364228) : Exiting Master process...
Feb 25 07:54:08 np0005629333 neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc[364224]: [ALERT]    (364228) : Current worker (364230) exited with code 143 (Terminated)
Feb 25 07:54:08 np0005629333 neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc[364224]: [WARNING]  (364228) : All workers exited. Exiting... (0)
Feb 25 07:54:08 np0005629333 systemd[1]: libpod-44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c.scope: Deactivated successfully.
Feb 25 07:54:08 np0005629333 conmon[364224]: conmon 44e3df8efac0d089b0c6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c.scope/container/memory.events
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.050 244018 INFO nova.virt.libvirt.driver [-] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Instance destroyed successfully.#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.051 244018 DEBUG nova.objects.instance [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'resources' on Instance uuid 31c909bc-0d05-4a67-83e9-b45fb2eb35a9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:54:08 np0005629333 podman[364356]: 2026-02-25 12:54:08.055049853 +0000 UTC m=+0.065162181 container died 44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:54:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:54:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/548444361' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.064 244018 DEBUG nova.virt.libvirt.vif [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:53:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-726997753',display_name='tempest-TestNetworkBasicOps-server-726997753',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-726997753',id=131,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFbOW6FmwA5xCX/nfm45JqiS9iGNwncp9v5d8H3G3ZA8NRo+s1Asg0gZAqq62mcFCBdPP+KeNg7r6EvgTxZQYa2mEaBkl1lAwClqNYTOE5oGijuBkGH8PPKePy7SM1EfdQ==',key_name='tempest-TestNetworkBasicOps-533048180',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:54:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-p8om99l7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:54:00Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=31c909bc-0d05-4a67-83e9-b45fb2eb35a9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.065 244018 DEBUG nova.network.os_vif_util [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.066 244018 DEBUG nova.network.os_vif_util [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:de:be,bridge_name='br-int',has_traffic_filtering=True,id=205ead5a-797e-421e-87e3-ec5dac2037d3,network=Network(02748d96-83c0-45be-acd6-081ad673e4bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap205ead5a-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.066 244018 DEBUG os_vif [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:de:be,bridge_name='br-int',has_traffic_filtering=True,id=205ead5a-797e-421e-87e3-ec5dac2037d3,network=Network(02748d96-83c0-45be-acd6-081ad673e4bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap205ead5a-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.068 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.069 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap205ead5a-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.074 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.077 244018 INFO os_vif [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:de:be,bridge_name='br-int',has_traffic_filtering=True,id=205ead5a-797e-421e-87e3-ec5dac2037d3,network=Network(02748d96-83c0-45be-acd6-081ad673e4bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap205ead5a-79')#033[00m
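
The DelPortCommand transaction above is os-vif removing the tap interface from br-int through ovsdbapp. A rough equivalent using ovsdbapp's Open vSwitch schema API directly; the connection string is an assumption, while the port and bridge names come from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local OVS database, roughly as os-vif does internally.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Same operation as the logged DelPortCommand: idempotent removal of
    # the tap port from the integration bridge.
    api.del_port('tap205ead5a-79', bridge='br-int',
                 if_exists=True).execute(check_error=True)
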
Feb 25 07:54:08 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c-userdata-shm.mount: Deactivated successfully.
Feb 25 07:54:08 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c900f458115b85a5678678f33d3ef16ca7c94ef55bdacac047617c1bfeaf0708-merged.mount: Deactivated successfully.
Feb 25 07:54:08 np0005629333 podman[364356]: 2026-02-25 12:54:08.101051962 +0000 UTC m=+0.111164290 container cleanup 44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.106 244018 DEBUG oslo_concurrency.processutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.107 244018 DEBUG nova.compute.manager [req-891f0215-e947-400f-9520-afb67aab843f req-cb0dfcb6-d31f-434a-aa46-80a2be82a327 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Received event network-vif-unplugged-205ead5a-797e-421e-87e3-ec5dac2037d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.108 244018 DEBUG oslo_concurrency.lockutils [req-891f0215-e947-400f-9520-afb67aab843f req-cb0dfcb6-d31f-434a-aa46-80a2be82a327 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.108 244018 DEBUG oslo_concurrency.lockutils [req-891f0215-e947-400f-9520-afb67aab843f req-cb0dfcb6-d31f-434a-aa46-80a2be82a327 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.108 244018 DEBUG oslo_concurrency.lockutils [req-891f0215-e947-400f-9520-afb67aab843f req-cb0dfcb6-d31f-434a-aa46-80a2be82a327 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.108 244018 DEBUG nova.compute.manager [req-891f0215-e947-400f-9520-afb67aab843f req-cb0dfcb6-d31f-434a-aa46-80a2be82a327 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] No waiting events found dispatching network-vif-unplugged-205ead5a-797e-421e-87e3-ec5dac2037d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.109 244018 DEBUG nova.compute.manager [req-891f0215-e947-400f-9520-afb67aab843f req-cb0dfcb6-d31f-434a-aa46-80a2be82a327 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Received event network-vif-unplugged-205ead5a-797e-421e-87e3-ec5dac2037d3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.110 244018 DEBUG nova.virt.libvirt.vif [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:53:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-487150499',display_name='tempest-TestGettingAddress-server-487150499',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-487150499',id=132,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL/SVWuEVaf+Py7xASyHmNYfi29LBzumkZB6ldUzxX0Dxybzo1LoSeFAK9YF8BeK/8cHma4lQxEX6N+N2g4I6aGP/1FtZuhCE2GUlMamVs4LR8ITh1z/8qYyUh+iaduJ8A==',key_name='tempest-TestGettingAddress-2128145461',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-9mlkqqq2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:53:57Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=945e5549-40d1-4eae-8179-84ad1d751957,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "edede35b-327b-4dc5-8432-cc64bb4a290d", "address": "fa:16:3e:06:b5:86", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedede35b-32", "ovs_interfaceid": "edede35b-327b-4dc5-8432-cc64bb4a290d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.110 244018 DEBUG nova.network.os_vif_util [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "edede35b-327b-4dc5-8432-cc64bb4a290d", "address": "fa:16:3e:06:b5:86", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedede35b-32", "ovs_interfaceid": "edede35b-327b-4dc5-8432-cc64bb4a290d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.111 244018 DEBUG nova.network.os_vif_util [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:b5:86,bridge_name='br-int',has_traffic_filtering=True,id=edede35b-327b-4dc5-8432-cc64bb4a290d,network=Network(88562c34-222a-439a-b444-9e6f8a6d70cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedede35b-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.112 244018 DEBUG nova.objects.instance [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 945e5549-40d1-4eae-8179-84ad1d751957 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:54:08 np0005629333 systemd[1]: libpod-conmon-44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c.scope: Deactivated successfully.
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.128 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:54:08 np0005629333 nova_compute[244014]:  <uuid>945e5549-40d1-4eae-8179-84ad1d751957</uuid>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:  <name>instance-00000084</name>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestGettingAddress-server-487150499</nova:name>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:54:06</nova:creationTime>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:54:08 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:        <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:        <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:        <nova:port uuid="edede35b-327b-4dc5-8432-cc64bb4a290d">
Feb 25 07:54:08 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe06:b586" ipVersion="6"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe06:b586" ipVersion="6"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <entry name="serial">945e5549-40d1-4eae-8179-84ad1d751957</entry>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <entry name="uuid">945e5549-40d1-4eae-8179-84ad1d751957</entry>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/945e5549-40d1-4eae-8179-84ad1d751957_disk">
Feb 25 07:54:08 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:54:08 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/945e5549-40d1-4eae-8179-84ad1d751957_disk.config">
Feb 25 07:54:08 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:54:08 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:06:b5:86"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <target dev="tapedede35b-32"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/945e5549-40d1-4eae-8179-84ad1d751957/console.log" append="off"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:54:08 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:54:08 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:54:08 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:54:08 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
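
With _get_guest_xml finished, the libvirt driver hands the domain document above to libvirtd to define and boot the guest. A minimal sketch of that step with the libvirt-python bindings; it assumes the XML has been saved to domain.xml and shows the general mechanism, not nova's exact call path:

    import libvirt

    # Assumes domain.xml holds the <domain type="kvm"> document logged above.
    with open('domain.xml') as f:
        xml = f.read()

    conn = libvirt.open('qemu:///system')  # system libvirtd, as nova-compute uses
    dom = conn.defineXML(xml)              # persist the domain definition
    dom.createWithFlags(0)                 # boot instance-00000084
    print(dom.name(), dom.UUIDString())
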
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.128 244018 DEBUG nova.compute.manager [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Preparing to wait for external event network-vif-plugged-edede35b-327b-4dc5-8432-cc64bb4a290d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.128 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "945e5549-40d1-4eae-8179-84ad1d751957-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.129 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.129 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
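
prepare_for_instance_event above registers a waiter before the VIF is plugged, so the network-vif-plugged notification that neutron sends later can unblock the spawn. A stripped-down sketch of that pattern with a plain threading.Event; nova's real implementation sits on eventlet, and all names here are illustrative:

    import threading

    _events = {}                      # (instance_uuid, event_name) -> Event
    _events_lock = threading.Lock()   # mirrors the "<uuid>-events" lock above

    def prepare_for_instance_event(instance_uuid, name):
        # Register the waiter first, before starting the plug.
        with _events_lock:
            return _events.setdefault((instance_uuid, name), threading.Event())

    def pop_instance_event(instance_uuid, name):
        # Called when the external event arrives from neutron.
        with _events_lock:
            ev = _events.pop((instance_uuid, name), None)
        if ev:
            ev.set()

    waiter = prepare_for_instance_event(
        '945e5549-40d1-4eae-8179-84ad1d751957',
        'network-vif-plugged-edede35b-327b-4dc5-8432-cc64bb4a290d')
    # ... plug the VIF; once neutron reports success, another thread calls
    # pop_instance_event(...) and waiter.wait(timeout=300) returns.
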
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.129 244018 DEBUG nova.virt.libvirt.vif [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:53:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-487150499',display_name='tempest-TestGettingAddress-server-487150499',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-487150499',id=132,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL/SVWuEVaf+Py7xASyHmNYfi29LBzumkZB6ldUzxX0Dxybzo1LoSeFAK9YF8BeK/8cHma4lQxEX6N+N2g4I6aGP/1FtZuhCE2GUlMamVs4LR8ITh1z/8qYyUh+iaduJ8A==',key_name='tempest-TestGettingAddress-2128145461',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-9mlkqqq2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:53:57Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=945e5549-40d1-4eae-8179-84ad1d751957,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "edede35b-327b-4dc5-8432-cc64bb4a290d", "address": "fa:16:3e:06:b5:86", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedede35b-32", "ovs_interfaceid": "edede35b-327b-4dc5-8432-cc64bb4a290d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.130 244018 DEBUG nova.network.os_vif_util [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "edede35b-327b-4dc5-8432-cc64bb4a290d", "address": "fa:16:3e:06:b5:86", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedede35b-32", "ovs_interfaceid": "edede35b-327b-4dc5-8432-cc64bb4a290d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.131 244018 DEBUG nova.network.os_vif_util [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:b5:86,bridge_name='br-int',has_traffic_filtering=True,id=edede35b-327b-4dc5-8432-cc64bb4a290d,network=Network(88562c34-222a-439a-b444-9e6f8a6d70cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedede35b-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.131 244018 DEBUG os_vif [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:b5:86,bridge_name='br-int',has_traffic_filtering=True,id=edede35b-327b-4dc5-8432-cc64bb4a290d,network=Network(88562c34-222a-439a-b444-9e6f8a6d70cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedede35b-32') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.131 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.132 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.132 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.134 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.134 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapedede35b-32, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.135 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapedede35b-32, col_values=(('external_ids', {'iface-id': 'edede35b-327b-4dc5-8432-cc64bb4a290d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:06:b5:86', 'vm-uuid': '945e5549-40d1-4eae-8179-84ad1d751957'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
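The three ovsdbapp commands above (AddBridgeCommand, AddPortCommand, DbSetCommand) are the whole OVS side of the VIF plug: make sure br-int exists, add the tap port, and stamp the Interface row with the iface-id/attached-mac external_ids that ovn-controller later matches against. A minimal sketch of the same sequence through ovsdbapp's native-IDL API follows; the ovsdb-server socket path and timeout are assumptions, the column values are copied from the log lines, and note the agent actually commits these as small single-command transactions rather than one batch.

    # Sketch: the AddBridge/AddPort/DbSet sequence via ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'   # assumed local socket
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapedede35b-32', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapedede35b-32',
            ('external_ids', {
                'iface-id': 'edede35b-327b-4dc5-8432-cc64bb4a290d',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:06:b5:86',
                'vm-uuid': '945e5549-40d1-4eae-8179-84ad1d751957'})))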
Feb 25 07:54:08 np0005629333 NetworkManager[49836]: <info>  [1772024048.1373] manager: (tapedede35b-32): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/581)
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.140 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.141 244018 INFO os_vif [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:b5:86,bridge_name='br-int',has_traffic_filtering=True,id=edede35b-327b-4dc5-8432-cc64bb4a290d,network=Network(88562c34-222a-439a-b444-9e6f8a6d70cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedede35b-32')#033[00m
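The "Converting VIF ... / Converted object ... / Plugging vif ..." entries, capped by this "Successfully plugged" INFO, are the full os-vif round trip: nova's network_info dict is turned into a typed VIFOpenVSwitch object and handed to the 'ovs' plugin. A minimal sketch of that call, rebuilt from the fields printed in the Converted-object repr; the InstanceInfo name is an assumption (the libvirt domain only appears later as instance-00000084):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()   # loads the 'ovs' plugin among others

    port = vif.VIFOpenVSwitch(
        id='edede35b-327b-4dc5-8432-cc64bb4a290d',
        address='fa:16:3e:06:b5:86',
        network=network.Network(id='88562c34-222a-439a-b444-9e6f8a6d70cd',
                                bridge='br-int'),
        plugin='ovs',
        vif_name='tapedede35b-32',
        bridge_name='br-int',
        has_traffic_filtering=True,
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='edede35b-327b-4dc5-8432-cc64bb4a290d'))
    inst = instance_info.InstanceInfo(
        uuid='945e5549-40d1-4eae-8179-84ad1d751957',
        name='instance-00000084')   # assumed name
    os_vif.plug(port, inst)   # emits the Plugging/Successfully plugged pair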
Feb 25 07:54:08 np0005629333 podman[364417]: 2026-02-25 12:54:08.172447368 +0000 UTC m=+0.052111903 container remove 44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 07:54:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.177 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[73a2ae6a-2156-4c29-9e27-5b19c1fabcec]: (4, ('Wed Feb 25 12:54:07 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc (44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c)\n44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c\nWed Feb 25 12:54:08 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc (44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c)\n44e3df8efac0d089b0c6a41ca8f60e9d44e5b824d82494e7176ec9a9bf2b0d0c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
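The privsep reply above carries the stdout of the agent's wrapper that tears down the per-network haproxy sidecar, matching the podman "container remove" event one line earlier. A rough CLI equivalent, with the container name taken from the log:

    import subprocess

    name = 'neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc'
    subprocess.run(['podman', 'stop', name], check=True)
    subprocess.run(['podman', 'rm', name], check=True)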
Feb 25 07:54:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.179 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4ae48a71-0584-4fcf-84d9-131fd8768a23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.181 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02748d96-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:54:08 np0005629333 kernel: tap02748d96-80: left promiscuous mode
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.184 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.195 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.197 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.197 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.197 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:06:b5:86, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.198 244018 INFO nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Using config drive#033[00m
Feb 25 07:54:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.198 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[228991ae-5d3b-47e0-8386-3119e5ded323]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.210 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7f699d88-cde1-461d-ba95-280d619d5180]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.213 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d1d27c54-e6ba-429d-bd08-de7275a958b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.223 244018 DEBUG nova.storage.rbd_utils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 945e5549-40d1-4eae-8179-84ad1d751957_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:54:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.226 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7b69118d-15b9-4512-b7d6-47e793b68187]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 600896, 'reachable_time': 16521, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364453, 'error': None, 'target': 'ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.229 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:54:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.229 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[18a6b085-904b-44fc-bb93-89227a7c1582]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:08 np0005629333 systemd[1]: run-netns-ovnmeta\x2d02748d96\x2d83c0\x2d45be\x2dacd6\x2d081ad673e4bc.mount: Deactivated successfully.
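Teardown of the old ovnmeta namespace finishes here: the tap port is dropped from OVS (DelPortCommand, and the kernel's "left promiscuous mode"), the namespace is removed, and systemd reaps the run-netns bind mount. Neutron's privileged remove_netns reduces to a pyroute2 operation along these lines (a sketch, not the agent's exact code):

    from pyroute2 import netns

    name = 'ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc'
    if name in netns.listnetns():
        netns.remove(name)   # unmounts /run/netns/<name>, as systemd logs above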
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.491 244018 INFO nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Creating config drive at /var/lib/nova/instances/945e5549-40d1-4eae-8179-84ad1d751957/disk.config#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.497 244018 DEBUG oslo_concurrency.processutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/945e5549-40d1-4eae-8179-84ad1d751957/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_3xf1xn5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.641 244018 DEBUG oslo_concurrency.processutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/945e5549-40d1-4eae-8179-84ad1d751957/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_3xf1xn5" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
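The config drive is a plain ISO9660/Joliet image built from a staging directory (the openstack/ and ec2/ metadata trees), and the exact command is spelled out in the two lines above. Reproduced with oslo.concurrency's processutils, the same helper these "Running cmd"/"CMD ... returned" lines come from:

    from oslo_concurrency import processutils

    iso = ('/var/lib/nova/instances/'
           '945e5549-40d1-4eae-8179-84ad1d751957/disk.config')
    staging = '/tmp/tmp_3xf1xn5'   # temp dir from the log
    out, err = processutils.execute(
        '/usr/bin/mkisofs', '-o', iso,
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', staging)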
Feb 25 07:54:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2207: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.679 244018 DEBUG nova.storage.rbd_utils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 945e5549-40d1-4eae-8179-84ad1d751957_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.684 244018 DEBUG oslo_concurrency.processutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/945e5549-40d1-4eae-8179-84ad1d751957/disk.config 945e5549-40d1-4eae-8179-84ad1d751957_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:54:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.852 244018 DEBUG oslo_concurrency.processutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/945e5549-40d1-4eae-8179-84ad1d751957/disk.config 945e5549-40d1-4eae-8179-84ad1d751957_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.853 244018 INFO nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Deleting local config drive /var/lib/nova/instances/945e5549-40d1-4eae-8179-84ad1d751957/disk.config because it was imported into RBD.#033[00m
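With RBD-backed instances, the freshly built ISO is then pushed into the vms pool and the local copy deleted, exactly the import/delete pair logged above. As plain subprocess calls:

    import os
    import subprocess

    local = ('/var/lib/nova/instances/'
             '945e5549-40d1-4eae-8179-84ad1d751957/disk.config')
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', local,
         '945e5549-40d1-4eae-8179-84ad1d751957_disk.config',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)
    os.unlink(local)   # "Deleting local config drive ... imported into RBD"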
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.886 244018 INFO nova.virt.libvirt.driver [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Deleting instance files /var/lib/nova/instances/31c909bc-0d05-4a67-83e9-b45fb2eb35a9_del#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.887 244018 INFO nova.virt.libvirt.driver [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Deletion of /var/lib/nova/instances/31c909bc-0d05-4a67-83e9-b45fb2eb35a9_del complete#033[00m
Feb 25 07:54:08 np0005629333 kernel: tapedede35b-32: entered promiscuous mode
Feb 25 07:54:08 np0005629333 NetworkManager[49836]: <info>  [1772024048.9152] manager: (tapedede35b-32): new Tun device (/org/freedesktop/NetworkManager/Devices/582)
Feb 25 07:54:08 np0005629333 systemd-udevd[364321]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.916 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:54:08Z|01400|binding|INFO|Claiming lport edede35b-327b-4dc5-8432-cc64bb4a290d for this chassis.
Feb 25 07:54:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:54:08Z|01401|binding|INFO|edede35b-327b-4dc5-8432-cc64bb4a290d: Claiming fa:16:3e:06:b5:86 10.100.0.13 2001:db8:0:1:f816:3eff:fe06:b586 2001:db8::f816:3eff:fe06:b586
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.919 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.928 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:b5:86 10.100.0.13 2001:db8:0:1:f816:3eff:fe06:b586 2001:db8::f816:3eff:fe06:b586'], port_security=['fa:16:3e:06:b5:86 10.100.0.13 2001:db8:0:1:f816:3eff:fe06:b586 2001:db8::f816:3eff:fe06:b586'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8:0:1:f816:3eff:fe06:b586/64 2001:db8::f816:3eff:fe06:b586/64', 'neutron:device_id': '945e5549-40d1-4eae-8179-84ad1d751957', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88562c34-222a-439a-b444-9e6f8a6d70cd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cd4482ae-906b-42b8-8dd6-2eff099d2f11', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1dcb5c12-7c78-49d6-b14f-b2c48c93839d, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=edede35b-327b-4dc5-8432-cc64bb4a290d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
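The "Matched UPDATE: PortBindingUpdatedEvent(...)" line is ovsdbapp's event layer at work: the agent registered a row event on the southbound Port_Binding table, and this row matched because chassis went from [] to a bound chassis. A sketch of such an event; the watch_event wiring at the end is an assumption about how the agent's notify handler is attached:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None,
            # the same triple printed in the log line.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old=None):
            # 'old' carries only the changed columns; chassis [] -> set
            # means the port was just claimed, as in old=Port_Binding(chassis=[]).
            return getattr(old, 'chassis', None) == [] and bool(row.chassis)

        def run(self, event, row, old):
            print('port %s bound to our chassis' % row.logical_port)

    # idl.notify_handler.watch_event(PortBindingUpdatedEvent())  # assumed wiring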
Feb 25 07:54:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:54:08Z|01402|binding|INFO|Setting lport edede35b-327b-4dc5-8432-cc64bb4a290d ovn-installed in OVS
Feb 25 07:54:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:54:08Z|01403|binding|INFO|Setting lport edede35b-327b-4dc5-8432-cc64bb4a290d up in Southbound
Feb 25 07:54:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.930 157129 INFO neutron.agent.ovn.metadata.agent [-] Port edede35b-327b-4dc5-8432-cc64bb4a290d in datapath 88562c34-222a-439a-b444-9e6f8a6d70cd bound to our chassis#033[00m
Feb 25 07:54:08 np0005629333 NetworkManager[49836]: <info>  [1772024048.9330] device (tapedede35b-32): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:54:08 np0005629333 NetworkManager[49836]: <info>  [1772024048.9338] device (tapedede35b-32): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:54:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.933 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 88562c34-222a-439a-b444-9e6f8a6d70cd#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.931 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:08 np0005629333 systemd-machined[210048]: New machine qemu-164-instance-00000084.
Feb 25 07:54:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.952 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a2199e03-4b3b-42a9-b26f-14b5e9271b20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.970 244018 INFO nova.compute.manager [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Took 1.16 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.971 244018 DEBUG oslo.service.loopingcall [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.971 244018 DEBUG nova.compute.manager [-] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:54:08 np0005629333 nova_compute[244014]: 2026-02-25 12:54:08.971 244018 DEBUG nova.network.neutron [-] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:54:08 np0005629333 systemd[1]: Started Virtual Machine qemu-164-instance-00000084.
Feb 25 07:54:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.974 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8fd07a80-a3e5-45e2-a103-fc21427fd3f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:08.980 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ed8cec34-516a-47c0-8cf1-b83320e56b11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:09.003 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[b7f26254-1dd2-4fb9-b744-27b14eb476cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:09.017 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d6a262f6-14b3-4b84-a251-942219ef2edb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88562c34-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:5e:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 5, 'rx_bytes': 1930, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 411], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598029, 'reachable_time': 38748, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364518, 'error': None, 'target': 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:09.033 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9d79d99d-c743-4736-9be0-26307a8c6f66]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap88562c34-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 598041, 'tstamp': 598041}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364522, 'error': None, 'target': 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap88562c34-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 598043, 'tstamp': 598043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364522, 'error': None, 'target': 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
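The two privsep replies above describe the metadata namespace for the new network: the namespace-side veth tap88562c34-21 carries the subnet address 10.100.0.2/28 plus the well-known 169.254.169.254/32 that the metadata proxy will serve. A pyroute2 sketch of that addressing step, assuming the namespace and veth from the log already exist:

    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd')
    try:
        idx = ns.link_lookup(ifname='tap88562c34-21')[0]
        ns.addr('add', index=idx, address='10.100.0.2', prefixlen=28)
        ns.addr('add', index=idx, address='169.254.169.254', prefixlen=32)
        ns.link('set', index=idx, state='up')
    finally:
        ns.close()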
Feb 25 07:54:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:09.035 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88562c34-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.038 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:09.039 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88562c34-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:54:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:09.039 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:54:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:09.040 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap88562c34-20, col_values=(('external_ids', {'iface-id': '09662fcb-392d-469a-981c-54d31225748b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:54:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:09.040 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.415 244018 DEBUG nova.network.neutron [req-1a480677-cc26-43c2-8f6a-2ef8d644c085 req-22630e39-5f45-4212-a735-2e80030d7dcc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Updated VIF entry in instance network info cache for port 205ead5a-797e-421e-87e3-ec5dac2037d3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.416 244018 DEBUG nova.network.neutron [req-1a480677-cc26-43c2-8f6a-2ef8d644c085 req-22630e39-5f45-4212-a735-2e80030d7dcc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Updating instance_info_cache with network_info: [{"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.422 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024049.421914, 945e5549-40d1-4eae-8179-84ad1d751957 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.422 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] VM Started (Lifecycle Event)#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.445 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.448 244018 DEBUG oslo_concurrency.lockutils [req-1a480677-cc26-43c2-8f6a-2ef8d644c085 req-22630e39-5f45-4212-a735-2e80030d7dcc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-31c909bc-0d05-4a67-83e9-b45fb2eb35a9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.452 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024049.4220247, 945e5549-40d1-4eae-8179-84ad1d751957 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.453 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.471 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.473 244018 DEBUG nova.network.neutron [req-5b271d2d-bc3e-4e67-a5e1-fa84c2f502a9 req-6bf2391c-fc4e-48dd-831b-52b8081d9e23 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Updated VIF entry in instance network info cache for port edede35b-327b-4dc5-8432-cc64bb4a290d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.474 244018 DEBUG nova.network.neutron [req-5b271d2d-bc3e-4e67-a5e1-fa84c2f502a9 req-6bf2391c-fc4e-48dd-831b-52b8081d9e23 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Updating instance_info_cache with network_info: [{"id": "edede35b-327b-4dc5-8432-cc64bb4a290d", "address": "fa:16:3e:06:b5:86", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedede35b-32", "ovs_interfaceid": "edede35b-327b-4dc5-8432-cc64bb4a290d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.479 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
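The state numbers in this line decode as nova power_state codes: the DB still says 0 (NOSTATE) while libvirt reports 3 (PAUSED), which is expected mid-spawn since the guest is created paused and resumed once VIF plugging completes; the same check at 12:54:09.766 returns 1 (RUNNING). A simplified sketch of that check follows; the mapping is abridged, not nova's full table:

    import libvirt

    # nova.compute.power_state codes: 0=NOSTATE 1=RUNNING 3=PAUSED 4=SHUTDOWN
    NOVA_CODE = {
        libvirt.VIR_DOMAIN_RUNNING: 1,
        libvirt.VIR_DOMAIN_BLOCKED: 1,
        libvirt.VIR_DOMAIN_PAUSED: 3,
        libvirt.VIR_DOMAIN_SHUTOFF: 4,
    }

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000084')
    state, _reason = dom.state()
    print(NOVA_CODE.get(state, 0))   # 3 while paused, 1 after the resume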
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.499 244018 DEBUG oslo_concurrency.lockutils [req-5b271d2d-bc3e-4e67-a5e1-fa84c2f502a9 req-6bf2391c-fc4e-48dd-831b-52b8081d9e23 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-945e5549-40d1-4eae-8179-84ad1d751957" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.500 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.729 244018 DEBUG nova.compute.manager [req-0881cfbc-e394-4298-a09c-f474753621f0 req-29ec2177-6e7d-4f20-89cb-7a98f86be221 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Received event network-vif-plugged-edede35b-327b-4dc5-8432-cc64bb4a290d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.730 244018 DEBUG oslo_concurrency.lockutils [req-0881cfbc-e394-4298-a09c-f474753621f0 req-29ec2177-6e7d-4f20-89cb-7a98f86be221 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "945e5549-40d1-4eae-8179-84ad1d751957-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.730 244018 DEBUG oslo_concurrency.lockutils [req-0881cfbc-e394-4298-a09c-f474753621f0 req-29ec2177-6e7d-4f20-89cb-7a98f86be221 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.730 244018 DEBUG oslo_concurrency.lockutils [req-0881cfbc-e394-4298-a09c-f474753621f0 req-29ec2177-6e7d-4f20-89cb-7a98f86be221 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
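The acquire/release trio above is oslo.concurrency's lockutils serializing access to the per-instance event queue; the lock name is simply the instance UUID with an "-events" suffix. The pattern, sketched:

    from oslo_concurrency import lockutils

    uuid = '945e5549-40d1-4eae-8179-84ad1d751957'
    with lockutils.lock(uuid + '-events'):
        # pop or dispatch the pending network-vif-plugged event here
        pass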
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.731 244018 DEBUG nova.compute.manager [req-0881cfbc-e394-4298-a09c-f474753621f0 req-29ec2177-6e7d-4f20-89cb-7a98f86be221 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Processing event network-vif-plugged-edede35b-327b-4dc5-8432-cc64bb4a290d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.732 244018 DEBUG nova.compute.manager [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.737 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024049.7373264, 945e5549-40d1-4eae-8179-84ad1d751957 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.737 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.740 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.744 244018 INFO nova.virt.libvirt.driver [-] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Instance spawned successfully.#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.744 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.761 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.766 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.802 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.805 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.805 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.806 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.807 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.807 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.808 244018 DEBUG nova.virt.libvirt.driver [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.888 244018 INFO nova.compute.manager [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Took 11.81 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.888 244018 DEBUG nova.compute.manager [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.968 244018 INFO nova.compute.manager [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Took 13.43 seconds to build instance.#033[00m
Feb 25 07:54:09 np0005629333 nova_compute[244014]: 2026-02-25 12:54:09.991 244018 DEBUG oslo_concurrency.lockutils [None req-5a6aa38f-6bfb-4d94-aaeb-519c829c906c f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:54:10 np0005629333 nova_compute[244014]: 2026-02-25 12:54:10.183 244018 DEBUG nova.compute.manager [req-a6cdb6d7-8846-49d2-8513-b721d394ec7b req-87362572-db83-4e4f-b177-04d423e96a62 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Received event network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:54:10 np0005629333 nova_compute[244014]: 2026-02-25 12:54:10.183 244018 DEBUG oslo_concurrency.lockutils [req-a6cdb6d7-8846-49d2-8513-b721d394ec7b req-87362572-db83-4e4f-b177-04d423e96a62 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:10 np0005629333 nova_compute[244014]: 2026-02-25 12:54:10.184 244018 DEBUG oslo_concurrency.lockutils [req-a6cdb6d7-8846-49d2-8513-b721d394ec7b req-87362572-db83-4e4f-b177-04d423e96a62 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:10 np0005629333 nova_compute[244014]: 2026-02-25 12:54:10.184 244018 DEBUG oslo_concurrency.lockutils [req-a6cdb6d7-8846-49d2-8513-b721d394ec7b req-87362572-db83-4e4f-b177-04d423e96a62 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:54:10 np0005629333 nova_compute[244014]: 2026-02-25 12:54:10.184 244018 DEBUG nova.compute.manager [req-a6cdb6d7-8846-49d2-8513-b721d394ec7b req-87362572-db83-4e4f-b177-04d423e96a62 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] No waiting events found dispatching network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:54:10 np0005629333 nova_compute[244014]: 2026-02-25 12:54:10.185 244018 WARNING nova.compute.manager [req-a6cdb6d7-8846-49d2-8513-b721d394ec7b req-87362572-db83-4e4f-b177-04d423e96a62 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Received unexpected event network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:54:10 np0005629333 nova_compute[244014]: 2026-02-25 12:54:10.440 244018 DEBUG nova.network.neutron [-] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:54:10 np0005629333 nova_compute[244014]: 2026-02-25 12:54:10.447 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:10 np0005629333 nova_compute[244014]: 2026-02-25 12:54:10.466 244018 INFO nova.compute.manager [-] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Took 1.50 seconds to deallocate network for instance.#033[00m
Feb 25 07:54:10 np0005629333 nova_compute[244014]: 2026-02-25 12:54:10.560 244018 DEBUG oslo_concurrency.lockutils [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:10 np0005629333 nova_compute[244014]: 2026-02-25 12:54:10.560 244018 DEBUG oslo_concurrency.lockutils [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:10 np0005629333 nova_compute[244014]: 2026-02-25 12:54:10.665 244018 DEBUG oslo_concurrency.processutils [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:54:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2208: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 07:54:11 np0005629333 nova_compute[244014]: 2026-02-25 12:54:11.882 244018 DEBUG nova.compute.manager [req-8d7918d3-c792-49a9-b9a6-8c21a9e8ce15 req-bb8102a6-fabd-4d68-a9ef-c10f421279f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Received event network-vif-plugged-edede35b-327b-4dc5-8432-cc64bb4a290d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:54:11 np0005629333 nova_compute[244014]: 2026-02-25 12:54:11.883 244018 DEBUG oslo_concurrency.lockutils [req-8d7918d3-c792-49a9-b9a6-8c21a9e8ce15 req-bb8102a6-fabd-4d68-a9ef-c10f421279f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "945e5549-40d1-4eae-8179-84ad1d751957-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:54:11 np0005629333 nova_compute[244014]: 2026-02-25 12:54:11.883 244018 DEBUG oslo_concurrency.lockutils [req-8d7918d3-c792-49a9-b9a6-8c21a9e8ce15 req-bb8102a6-fabd-4d68-a9ef-c10f421279f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:54:11 np0005629333 nova_compute[244014]: 2026-02-25 12:54:11.883 244018 DEBUG oslo_concurrency.lockutils [req-8d7918d3-c792-49a9-b9a6-8c21a9e8ce15 req-bb8102a6-fabd-4d68-a9ef-c10f421279f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:54:11 np0005629333 nova_compute[244014]: 2026-02-25 12:54:11.883 244018 DEBUG nova.compute.manager [req-8d7918d3-c792-49a9-b9a6-8c21a9e8ce15 req-bb8102a6-fabd-4d68-a9ef-c10f421279f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] No waiting events found dispatching network-vif-plugged-edede35b-327b-4dc5-8432-cc64bb4a290d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:54:11 np0005629333 nova_compute[244014]: 2026-02-25 12:54:11.883 244018 WARNING nova.compute.manager [req-8d7918d3-c792-49a9-b9a6-8c21a9e8ce15 req-bb8102a6-fabd-4d68-a9ef-c10f421279f3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Received unexpected event network-vif-plugged-edede35b-327b-4dc5-8432-cc64bb4a290d for instance with vm_state active and task_state None.
Feb 25 07:54:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:54:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3648435004' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:54:12 np0005629333 nova_compute[244014]: 2026-02-25 12:54:12.561 244018 DEBUG oslo_concurrency.processutils [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.896s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:54:12 np0005629333 nova_compute[244014]: 2026-02-25 12:54:12.567 244018 DEBUG nova.compute.provider_tree [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:54:12 np0005629333 nova_compute[244014]: 2026-02-25 12:54:12.596 244018 DEBUG nova.scheduler.client.report [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:54:12 np0005629333 nova_compute[244014]: 2026-02-25 12:54:12.659 244018 DEBUG oslo_concurrency.lockutils [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:54:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2209: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 187 op/s
Feb 25 07:54:12 np0005629333 nova_compute[244014]: 2026-02-25 12:54:12.752 244018 INFO nova.scheduler.client.report [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Deleted allocations for instance 31c909bc-0d05-4a67-83e9-b45fb2eb35a9
Feb 25 07:54:12 np0005629333 nova_compute[244014]: 2026-02-25 12:54:12.857 244018 DEBUG oslo_concurrency.lockutils [None req-f09c59da-9e49-4d43-8bea-4484bb9a9e99 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "31c909bc-0d05-4a67-83e9-b45fb2eb35a9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:54:13 np0005629333 nova_compute[244014]: 2026-02-25 12:54:13.137 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:54:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:54:14 np0005629333 nova_compute[244014]: 2026-02-25 12:54:14.004 244018 DEBUG nova.compute.manager [req-0b71598b-4d09-41bb-afcd-8b5fec80d7eb req-b9470761-544c-4947-a857-f99b85bc01a7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Received event network-changed-edede35b-327b-4dc5-8432-cc64bb4a290d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:54:14 np0005629333 nova_compute[244014]: 2026-02-25 12:54:14.004 244018 DEBUG nova.compute.manager [req-0b71598b-4d09-41bb-afcd-8b5fec80d7eb req-b9470761-544c-4947-a857-f99b85bc01a7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Refreshing instance network info cache due to event network-changed-edede35b-327b-4dc5-8432-cc64bb4a290d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:54:14 np0005629333 nova_compute[244014]: 2026-02-25 12:54:14.004 244018 DEBUG oslo_concurrency.lockutils [req-0b71598b-4d09-41bb-afcd-8b5fec80d7eb req-b9470761-544c-4947-a857-f99b85bc01a7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-945e5549-40d1-4eae-8179-84ad1d751957" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:54:14 np0005629333 nova_compute[244014]: 2026-02-25 12:54:14.005 244018 DEBUG oslo_concurrency.lockutils [req-0b71598b-4d09-41bb-afcd-8b5fec80d7eb req-b9470761-544c-4947-a857-f99b85bc01a7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-945e5549-40d1-4eae-8179-84ad1d751957" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:54:14 np0005629333 nova_compute[244014]: 2026-02-25 12:54:14.005 244018 DEBUG nova.network.neutron [req-0b71598b-4d09-41bb-afcd-8b5fec80d7eb req-b9470761-544c-4947-a857-f99b85bc01a7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Refreshing network info cache for port edede35b-327b-4dc5-8432-cc64bb4a290d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:54:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 14 KiB/s wr, 90 op/s
Feb 25 07:54:15 np0005629333 nova_compute[244014]: 2026-02-25 12:54:15.451 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:54:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2211: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 14 KiB/s wr, 90 op/s
Feb 25 07:54:18 np0005629333 nova_compute[244014]: 2026-02-25 12:54:18.087 244018 DEBUG nova.network.neutron [req-0b71598b-4d09-41bb-afcd-8b5fec80d7eb req-b9470761-544c-4947-a857-f99b85bc01a7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Updated VIF entry in instance network info cache for port edede35b-327b-4dc5-8432-cc64bb4a290d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 07:54:18 np0005629333 nova_compute[244014]: 2026-02-25 12:54:18.088 244018 DEBUG nova.network.neutron [req-0b71598b-4d09-41bb-afcd-8b5fec80d7eb req-b9470761-544c-4947-a857-f99b85bc01a7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Updating instance_info_cache with network_info: [{"id": "edede35b-327b-4dc5-8432-cc64bb4a290d", "address": "fa:16:3e:06:b5:86", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedede35b-32", "ovs_interfaceid": "edede35b-327b-4dc5-8432-cc64bb4a290d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:54:18 np0005629333 nova_compute[244014]: 2026-02-25 12:54:18.111 244018 DEBUG oslo_concurrency.lockutils [req-0b71598b-4d09-41bb-afcd-8b5fec80d7eb req-b9470761-544c-4947-a857-f99b85bc01a7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-945e5549-40d1-4eae-8179-84ad1d751957" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:54:18 np0005629333 nova_compute[244014]: 2026-02-25 12:54:18.140 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:54:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2212: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Feb 25 07:54:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:54:20 np0005629333 nova_compute[244014]: 2026-02-25 12:54:20.453 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:54:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2213: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Feb 25 07:54:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:54:22Z|00171|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:06:b5:86 10.100.0.13
Feb 25 07:54:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:54:22Z|00172|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:06:b5:86 10.100.0.13
Feb 25 07:54:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2214: 305 pgs: 305 active+clean; 304 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 146 op/s
Feb 25 07:54:23 np0005629333 nova_compute[244014]: 2026-02-25 12:54:23.033 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:54:23 np0005629333 nova_compute[244014]: 2026-02-25 12:54:23.035 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:54:23 np0005629333 nova_compute[244014]: 2026-02-25 12:54:23.049 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024048.0479279, 31c909bc-0d05-4a67-83e9-b45fb2eb35a9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:54:23 np0005629333 nova_compute[244014]: 2026-02-25 12:54:23.049 244018 INFO nova.compute.manager [-] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] VM Stopped (Lifecycle Event)
Feb 25 07:54:23 np0005629333 nova_compute[244014]: 2026-02-25 12:54:23.054 244018 DEBUG nova.compute.manager [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:54:23 np0005629333 nova_compute[244014]: 2026-02-25 12:54:23.090 244018 DEBUG nova.compute.manager [None req-8af727cb-a2b3-4027-ad75-d058f0e5b86e - - - - - -] [instance: 31c909bc-0d05-4a67-83e9-b45fb2eb35a9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:54:23 np0005629333 nova_compute[244014]: 2026-02-25 12:54:23.142 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:54:23 np0005629333 nova_compute[244014]: 2026-02-25 12:54:23.165 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:54:23 np0005629333 nova_compute[244014]: 2026-02-25 12:54:23.166 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:54:23 np0005629333 nova_compute[244014]: 2026-02-25 12:54:23.178 244018 DEBUG nova.virt.hardware [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:54:23 np0005629333 nova_compute[244014]: 2026-02-25 12:54:23.179 244018 INFO nova.compute.claims [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:54:23 np0005629333 nova_compute[244014]: 2026-02-25 12:54:23.401 244018 DEBUG oslo_concurrency.processutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:54:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:54:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:54:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/990708143' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.016 244018 DEBUG oslo_concurrency.processutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.024 244018 DEBUG nova.compute.provider_tree [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.043 244018 DEBUG nova.scheduler.client.report [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.068 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.069 244018 DEBUG nova.compute.manager [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.119 244018 DEBUG nova.compute.manager [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.120 244018 DEBUG nova.network.neutron [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.144 244018 INFO nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.161 244018 DEBUG nova.compute.manager [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.314 244018 DEBUG nova.compute.manager [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.315 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.316 244018 INFO nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Creating image(s)
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.343 244018 DEBUG nova.storage.rbd_utils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 3da53ea4-e42d-4bfb-b917-d9c81fd3652f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.372 244018 DEBUG nova.storage.rbd_utils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 3da53ea4-e42d-4bfb-b917-d9c81fd3652f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.398 244018 DEBUG nova.storage.rbd_utils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 3da53ea4-e42d-4bfb-b917-d9c81fd3652f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.403 244018 DEBUG oslo_concurrency.processutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.490 244018 DEBUG oslo_concurrency.processutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.492 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.493 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.494 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.525 244018 DEBUG nova.storage.rbd_utils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 3da53ea4-e42d-4bfb-b917-d9c81fd3652f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.530 244018 DEBUG oslo_concurrency.processutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 3da53ea4-e42d-4bfb-b917-d9c81fd3652f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:54:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2215: 305 pgs: 305 active+clean; 304 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 686 KiB/s rd, 2.0 MiB/s wr, 60 op/s
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.763 244018 DEBUG oslo_concurrency.processutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 3da53ea4-e42d-4bfb-b917-d9c81fd3652f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.842 244018 DEBUG nova.storage.rbd_utils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] resizing rbd image 3da53ea4-e42d-4bfb-b917-d9c81fd3652f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.943 244018 DEBUG nova.objects.instance [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'migration_context' on Instance uuid 3da53ea4-e42d-4bfb-b917-d9c81fd3652f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.960 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.961 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Ensure instance console log exists: /var/lib/nova/instances/3da53ea4-e42d-4bfb-b917-d9c81fd3652f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.962 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.962 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:54:24 np0005629333 nova_compute[244014]: 2026-02-25 12:54:24.963 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:54:25 np0005629333 nova_compute[244014]: 2026-02-25 12:54:25.061 244018 DEBUG nova.policy [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:54:25 np0005629333 nova_compute[244014]: 2026-02-25 12:54:25.454 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:54:26 np0005629333 nova_compute[244014]: 2026-02-25 12:54:26.531 244018 DEBUG nova.network.neutron [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Successfully updated port: 205ead5a-797e-421e-87e3-ec5dac2037d3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:54:26 np0005629333 nova_compute[244014]: 2026-02-25 12:54:26.548 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-3da53ea4-e42d-4bfb-b917-d9c81fd3652f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:54:26 np0005629333 nova_compute[244014]: 2026-02-25 12:54:26.549 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-3da53ea4-e42d-4bfb-b917-d9c81fd3652f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:54:26 np0005629333 nova_compute[244014]: 2026-02-25 12:54:26.549 244018 DEBUG nova.network.neutron [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:54:26 np0005629333 nova_compute[244014]: 2026-02-25 12:54:26.629 244018 DEBUG nova.compute.manager [req-6592dbea-75b4-42d1-850d-9063012084a7 req-605f1eaf-cb16-4d2d-bd21-fdebfc0c94fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Received event network-changed-205ead5a-797e-421e-87e3-ec5dac2037d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:54:26 np0005629333 nova_compute[244014]: 2026-02-25 12:54:26.630 244018 DEBUG nova.compute.manager [req-6592dbea-75b4-42d1-850d-9063012084a7 req-605f1eaf-cb16-4d2d-bd21-fdebfc0c94fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Refreshing instance network info cache due to event network-changed-205ead5a-797e-421e-87e3-ec5dac2037d3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:54:26 np0005629333 nova_compute[244014]: 2026-02-25 12:54:26.630 244018 DEBUG oslo_concurrency.lockutils [req-6592dbea-75b4-42d1-850d-9063012084a7 req-605f1eaf-cb16-4d2d-bd21-fdebfc0c94fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-3da53ea4-e42d-4bfb-b917-d9c81fd3652f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:54:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 305 active+clean; 304 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 686 KiB/s rd, 2.0 MiB/s wr, 60 op/s
Feb 25 07:54:26 np0005629333 nova_compute[244014]: 2026-02-25 12:54:26.684 244018 DEBUG nova.network.neutron [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:54:27 np0005629333 nova_compute[244014]: 2026-02-25 12:54:27.386 244018 DEBUG nova.network.neutron [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Updating instance_info_cache with network_info: [{"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:54:27 np0005629333 nova_compute[244014]: 2026-02-25 12:54:27.409 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-3da53ea4-e42d-4bfb-b917-d9c81fd3652f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:54:27 np0005629333 nova_compute[244014]: 2026-02-25 12:54:27.409 244018 DEBUG nova.compute.manager [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Instance network_info: |[{"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 07:54:27 np0005629333 nova_compute[244014]: 2026-02-25 12:54:27.410 244018 DEBUG oslo_concurrency.lockutils [req-6592dbea-75b4-42d1-850d-9063012084a7 req-605f1eaf-cb16-4d2d-bd21-fdebfc0c94fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-3da53ea4-e42d-4bfb-b917-d9c81fd3652f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:54:27 np0005629333 nova_compute[244014]: 2026-02-25 12:54:27.411 244018 DEBUG nova.network.neutron [req-6592dbea-75b4-42d1-850d-9063012084a7 req-605f1eaf-cb16-4d2d-bd21-fdebfc0c94fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Refreshing network info cache for port 205ead5a-797e-421e-87e3-ec5dac2037d3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:54:27 np0005629333 nova_compute[244014]: 2026-02-25 12:54:27.418 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Start _get_guest_xml network_info=[{"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 07:54:27 np0005629333 nova_compute[244014]: 2026-02-25 12:54:27.426 244018 WARNING nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:54:27 np0005629333 nova_compute[244014]: 2026-02-25 12:54:27.432 244018 DEBUG nova.virt.libvirt.host [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:54:27 np0005629333 nova_compute[244014]: 2026-02-25 12:54:27.433 244018 DEBUG nova.virt.libvirt.host [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:54:27 np0005629333 nova_compute[244014]: 2026-02-25 12:54:27.453 244018 DEBUG nova.virt.libvirt.host [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 07:54:27 np0005629333 nova_compute[244014]: 2026-02-25 12:54:27.454 244018 DEBUG nova.virt.libvirt.host [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 07:54:27 np0005629333 nova_compute[244014]: 2026-02-25 12:54:27.455 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 07:54:27 np0005629333 nova_compute[244014]: 2026-02-25 12:54:27.455 244018 DEBUG nova.virt.hardware [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 07:54:27 np0005629333 nova_compute[244014]: 2026-02-25 12:54:27.456 244018 DEBUG nova.virt.hardware [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 07:54:27 np0005629333 nova_compute[244014]: 2026-02-25 12:54:27.456 244018 DEBUG nova.virt.hardware [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 07:54:27 np0005629333 nova_compute[244014]: 2026-02-25 12:54:27.457 244018 DEBUG nova.virt.hardware [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 07:54:27 np0005629333 nova_compute[244014]: 2026-02-25 12:54:27.457 244018 DEBUG nova.virt.hardware [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 07:54:27 np0005629333 nova_compute[244014]: 2026-02-25 12:54:27.458 244018 DEBUG nova.virt.hardware [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 07:54:27 np0005629333 nova_compute[244014]: 2026-02-25 12:54:27.458 244018 DEBUG nova.virt.hardware [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 07:54:27 np0005629333 nova_compute[244014]: 2026-02-25 12:54:27.459 244018 DEBUG nova.virt.hardware [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 25 07:54:27 np0005629333 nova_compute[244014]: 2026-02-25 12:54:27.459 244018 DEBUG nova.virt.hardware [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 25 07:54:27 np0005629333 nova_compute[244014]: 2026-02-25 12:54:27.459 244018 DEBUG nova.virt.hardware [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 25 07:54:27 np0005629333 nova_compute[244014]: 2026-02-25 12:54:27.460 244018 DEBUG nova.virt.hardware [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 25 07:54:27 np0005629333 nova_compute[244014]: 2026-02-25 12:54:27.464 244018 DEBUG oslo_concurrency.processutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:54:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:54:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4135331109' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.058 244018 DEBUG oslo_concurrency.processutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.078 244018 DEBUG nova.storage.rbd_utils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 3da53ea4-e42d-4bfb-b917-d9c81fd3652f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.082 244018 DEBUG oslo_concurrency.processutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.146 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:54:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:54:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2655311904' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:54:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2217: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 775 KiB/s rd, 3.9 MiB/s wr, 105 op/s
Feb 25 07:54:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.693 244018 DEBUG oslo_concurrency.processutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.696 244018 DEBUG nova.virt.libvirt.vif [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:54:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1213997257',display_name='tempest-TestNetworkBasicOps-server-1213997257',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1213997257',id=133,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLNG26zhuNSlF1TfE4sDBmDnvGmo5/ze2p13GCwWfDn/0tZUccugkaXigVz2IREshfwlZeDWvbX4lpuIp+1iD+9XZNIkC3KjSMU6+yZzbD4HnlUsSc5LJ/0HlGl49tXOmw==',key_name='tempest-TestNetworkBasicOps-1599989750',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ywrihctv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:54:24Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=3da53ea4-e42d-4bfb-b917-d9c81fd3652f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.696 244018 DEBUG nova.network.os_vif_util [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.698 244018 DEBUG nova.network.os_vif_util [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:de:be,bridge_name='br-int',has_traffic_filtering=True,id=205ead5a-797e-421e-87e3-ec5dac2037d3,network=Network(02748d96-83c0-45be-acd6-081ad673e4bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap205ead5a-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
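The `nova_to_osvif_vif` lines above boil the Neutron VIF dict down to the handful of fields on the converted `VIFOpenVSwitch` object. A plain-Python sketch of that mapping, with a dataclass standing in for the real oslo.versionedobjects class (field choice follows the converted object printed in the log):

```python
from dataclasses import dataclass

@dataclass
class VIFOpenVSwitchSketch:
    # Stand-in for os_vif.objects.vif.VIFOpenVSwitch.
    id: str
    address: str
    bridge_name: str
    vif_name: str
    active: bool
    has_traffic_filtering: bool
    preserve_on_delete: bool

def nova_to_osvif_sketch(vif: dict) -> VIFOpenVSwitchSketch:
    """Reduce a Neutron VIF dict to the fields the OVS plugin needs."""
    details = vif.get("details", {})
    return VIFOpenVSwitchSketch(
        id=vif["id"],
        address=vif["address"],
        bridge_name=details.get("bridge_name", vif["network"]["bridge"]),
        vif_name=vif["devname"],
        active=vif.get("active", False),
        has_traffic_filtering=details.get("port_filter", False),
        preserve_on_delete=vif.get("preserve_on_delete", False),
    )
```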
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.699 244018 DEBUG nova.objects.instance [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3da53ea4-e42d-4bfb-b917-d9c81fd3652f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.720 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:54:28 np0005629333 nova_compute[244014]:  <uuid>3da53ea4-e42d-4bfb-b917-d9c81fd3652f</uuid>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:  <name>instance-00000085</name>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestNetworkBasicOps-server-1213997257</nova:name>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:54:27</nova:creationTime>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:54:28 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:        <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:        <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:        <nova:port uuid="205ead5a-797e-421e-87e3-ec5dac2037d3">
Feb 25 07:54:28 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <entry name="serial">3da53ea4-e42d-4bfb-b917-d9c81fd3652f</entry>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <entry name="uuid">3da53ea4-e42d-4bfb-b917-d9c81fd3652f</entry>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/3da53ea4-e42d-4bfb-b917-d9c81fd3652f_disk">
Feb 25 07:54:28 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:54:28 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/3da53ea4-e42d-4bfb-b917-d9c81fd3652f_disk.config">
Feb 25 07:54:28 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:54:28 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:5f:de:be"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <target dev="tap205ead5a-79"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/3da53ea4-e42d-4bfb-b917-d9c81fd3652f/console.log" append="off"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:54:28 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:54:28 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:54:28 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:54:28 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
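The domain XML dumped above is what gets handed to libvirt next. A small sketch that pulls the device summary back out of such a document with only the standard library:

```python
import xml.etree.ElementTree as ET

def summarize_domain(xml_text: str) -> dict:
    """Summarize a libvirt domain XML like the one logged above."""
    root = ET.fromstring(xml_text)
    return {
        "name": root.findtext("name"),
        "memory_kib": int(root.findtext("memory")),
        "vcpus": int(root.findtext("vcpu")),
        # (device kind, target dev) pairs: here ('disk', 'vda'), ('cdrom', 'sda')
        "disks": [(d.get("device"), d.find("target").get("dev"))
                  for d in root.findall("./devices/disk")],
        # MAC addresses of the guest interfaces: here fa:16:3e:5f:de:be
        "macs": [i.find("mac").get("address")
                 for i in root.findall("./devices/interface")],
    }
```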
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.721 244018 DEBUG nova.compute.manager [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Preparing to wait for external event network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.722 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.722 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.722 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
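The acquire/release pair around `_create_or_get_event` is oslo.concurrency's named-lock pattern: every reader and writer of the per-instance event table synchronizes on the same `<uuid>-events` name. A minimal sketch (lock name from the log; the dict body is a hypothetical stand-in for the event table):

```python
from oslo_concurrency import lockutils

EVENT_LOCK = "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events"  # from the log

def create_or_get_event_sketch(events: dict, name: str):
    # Serialize access to the per-instance event table, as the
    # Acquiring/acquired/released triple above shows.
    with lockutils.lock(EVENT_LOCK):
        return events.setdefault(name, object())
```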
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.723 244018 DEBUG nova.virt.libvirt.vif [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:54:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1213997257',display_name='tempest-TestNetworkBasicOps-server-1213997257',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1213997257',id=133,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLNG26zhuNSlF1TfE4sDBmDnvGmo5/ze2p13GCwWfDn/0tZUccugkaXigVz2IREshfwlZeDWvbX4lpuIp+1iD+9XZNIkC3KjSMU6+yZzbD4HnlUsSc5LJ/0HlGl49tXOmw==',key_name='tempest-TestNetworkBasicOps-1599989750',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ywrihctv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:54:24Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=3da53ea4-e42d-4bfb-b917-d9c81fd3652f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.723 244018 DEBUG nova.network.os_vif_util [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.724 244018 DEBUG nova.network.os_vif_util [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:de:be,bridge_name='br-int',has_traffic_filtering=True,id=205ead5a-797e-421e-87e3-ec5dac2037d3,network=Network(02748d96-83c0-45be-acd6-081ad673e4bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap205ead5a-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.724 244018 DEBUG os_vif [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:de:be,bridge_name='br-int',has_traffic_filtering=True,id=205ead5a-797e-421e-87e3-ec5dac2037d3,network=Network(02748d96-83c0-45be-acd6-081ad673e4bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap205ead5a-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
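`os_vif` dispatches the object above to its 'ovs' plugin. A sketch of the library-level call, assuming os_vif's documented entry points (`initialize`/`plug`) and an `InstanceInfo` filled from the instance fields in the log:

```python
import os_vif
from os_vif.objects.instance_info import InstanceInfo

def plug_sketch(vif):
    """Hand a VIFOpenVSwitch object to the loaded 'ovs' os-vif plugin."""
    os_vif.initialize()  # load the os-vif plugins once per process
    instance = InstanceInfo(
        uuid="3da53ea4-e42d-4bfb-b917-d9c81fd3652f",          # from the log
        name="tempest-TestNetworkBasicOps-server-1213997257",  # from the log
    )
    os_vif.plug(vif, instance)
```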
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.724 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.725 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.725 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.729 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.729 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap205ead5a-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.729 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap205ead5a-79, col_values=(('external_ids', {'iface-id': '205ead5a-797e-421e-87e3-ec5dac2037d3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5f:de:be', 'vm-uuid': '3da53ea4-e42d-4bfb-b917-d9c81fd3652f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
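The two ovsdbapp commands above amount to: idempotently add the tap port to br-int, then stamp its Interface row's external_ids so ovn-controller can claim the port. A sketch against the ovsdbapp Open_vSwitch schema API, assuming `ovs` is an already-connected `impl_idl.OvsdbIdl` instance:

```python
def plug_port_sketch(ovs, bridge="br-int", port="tap205ead5a-79"):
    # `ovs` is assumed to be a connected
    # ovsdbapp.schema.open_vswitch.impl_idl.OvsdbIdl instance.
    external_ids = {
        "iface-id": "205ead5a-797e-421e-87e3-ec5dac2037d3",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:5f:de:be",
        "vm-uuid": "3da53ea4-e42d-4bfb-b917-d9c81fd3652f",
    }
    # One transaction, two commands, exactly as the do_commit lines show.
    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.add_port(bridge, port, may_exist=True))
        txn.add(ovs.db_set("Interface", port, ("external_ids", external_ids)))
```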
Feb 25 07:54:28 np0005629333 NetworkManager[49836]: <info>  [1772024068.7317] manager: (tap205ead5a-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/583)
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.732 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.737 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.738 244018 INFO os_vif [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:de:be,bridge_name='br-int',has_traffic_filtering=True,id=205ead5a-797e-421e-87e3-ec5dac2037d3,network=Network(02748d96-83c0-45be-acd6-081ad673e4bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap205ead5a-79')#033[00m
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.788 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.788 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.788 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:5f:de:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.788 244018 INFO nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Using config drive#033[00m
Feb 25 07:54:28 np0005629333 nova_compute[244014]: 2026-02-25 12:54:28.821 244018 DEBUG nova.storage.rbd_utils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 3da53ea4-e42d-4bfb-b917-d9c81fd3652f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:54:29 np0005629333 nova_compute[244014]: 2026-02-25 12:54:29.179 244018 INFO nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Creating config drive at /var/lib/nova/instances/3da53ea4-e42d-4bfb-b917-d9c81fd3652f/disk.config#033[00m
Feb 25 07:54:29 np0005629333 nova_compute[244014]: 2026-02-25 12:54:29.186 244018 DEBUG oslo_concurrency.processutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3da53ea4-e42d-4bfb-b917-d9c81fd3652f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8en4w6n2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:54:29 np0005629333 nova_compute[244014]: 2026-02-25 12:54:29.326 244018 DEBUG oslo_concurrency.processutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3da53ea4-e42d-4bfb-b917-d9c81fd3652f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp8en4w6n2" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:54:29 np0005629333 nova_compute[244014]: 2026-02-25 12:54:29.363 244018 DEBUG nova.storage.rbd_utils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 3da53ea4-e42d-4bfb-b917-d9c81fd3652f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:54:29 np0005629333 nova_compute[244014]: 2026-02-25 12:54:29.368 244018 DEBUG oslo_concurrency.processutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3da53ea4-e42d-4bfb-b917-d9c81fd3652f/disk.config 3da53ea4-e42d-4bfb-b917-d9c81fd3652f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:54:29 np0005629333 nova_compute[244014]: 2026-02-25 12:54:29.406 244018 DEBUG nova.network.neutron [req-6592dbea-75b4-42d1-850d-9063012084a7 req-605f1eaf-cb16-4d2d-bd21-fdebfc0c94fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Updated VIF entry in instance network info cache for port 205ead5a-797e-421e-87e3-ec5dac2037d3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:54:29 np0005629333 nova_compute[244014]: 2026-02-25 12:54:29.407 244018 DEBUG nova.network.neutron [req-6592dbea-75b4-42d1-850d-9063012084a7 req-605f1eaf-cb16-4d2d-bd21-fdebfc0c94fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Updating instance_info_cache with network_info: [{"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:54:29 np0005629333 nova_compute[244014]: 2026-02-25 12:54:29.426 244018 DEBUG oslo_concurrency.lockutils [req-6592dbea-75b4-42d1-850d-9063012084a7 req-605f1eaf-cb16-4d2d-bd21-fdebfc0c94fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-3da53ea4-e42d-4bfb-b917-d9c81fd3652f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:54:29 np0005629333 nova_compute[244014]: 2026-02-25 12:54:29.535 244018 DEBUG oslo_concurrency.processutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3da53ea4-e42d-4bfb-b917-d9c81fd3652f/disk.config 3da53ea4-e42d-4bfb-b917-d9c81fd3652f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:54:29 np0005629333 nova_compute[244014]: 2026-02-25 12:54:29.537 244018 INFO nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Deleting local config drive /var/lib/nova/instances/3da53ea4-e42d-4bfb-b917-d9c81fd3652f/disk.config because it was imported into RBD.#033[00m
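Once built locally, the ISO is pushed into the `vms` pool as a format-2 RBD image and the local copy dropped, as the two lines above show. A sketch of both steps, mirroring the logged command:

```python
import os
import subprocess

def import_config_drive(local_iso: str, image_name: str) -> None:
    """rbd-import the local ISO into the vms pool, then drop the local copy."""
    subprocess.run(
        ["rbd", "import", "--pool", "vms", local_iso, image_name,
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )
    os.unlink(local_iso)  # the RBD copy is now the authoritative one
```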
Feb 25 07:54:29 np0005629333 kernel: tap205ead5a-79: entered promiscuous mode
Feb 25 07:54:29 np0005629333 NetworkManager[49836]: <info>  [1772024069.5935] manager: (tap205ead5a-79): new Tun device (/org/freedesktop/NetworkManager/Devices/584)
Feb 25 07:54:29 np0005629333 nova_compute[244014]: 2026-02-25 12:54:29.596 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:29 np0005629333 ovn_controller[147040]: 2026-02-25T12:54:29Z|01404|binding|INFO|Claiming lport 205ead5a-797e-421e-87e3-ec5dac2037d3 for this chassis.
Feb 25 07:54:29 np0005629333 ovn_controller[147040]: 2026-02-25T12:54:29Z|01405|binding|INFO|205ead5a-797e-421e-87e3-ec5dac2037d3: Claiming fa:16:3e:5f:de:be 10.100.0.13
Feb 25 07:54:29 np0005629333 nova_compute[244014]: 2026-02-25 12:54:29.608 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.608 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:de:be 10.100.0.13'], port_security=['fa:16:3e:5f:de:be 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-564267725', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3da53ea4-e42d-4bfb-b917-d9c81fd3652f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02748d96-83c0-45be-acd6-081ad673e4bc', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-564267725', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '7', 'neutron:security_group_ids': '4f6e829b-663a-471b-98d6-d0d40f869440', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=988642bf-bd28-4e43-bffa-6e16e232d40d, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=205ead5a-797e-421e-87e3-ec5dac2037d3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
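The metadata agent reacts to that Port_Binding update through ovsdbapp's row-event machinery. A minimal sketch of such an event class (constructor arguments follow the `events=('update',), table='Port_Binding', conditions=None` repr above; the chassis check is a simplification of what the agent actually matches):

```python
from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedSketch(row_event.RowEvent):
    """React when a Port_Binding row gets claimed by our chassis."""

    def __init__(self, chassis_name):
        self.chassis_name = chassis_name
        # Same (events, table, conditions) triple as the repr in the log.
        super().__init__(("update",), "Port_Binding", None)
        self.event_name = "PortBindingUpdatedSketch"

    def run(self, event, row, old):
        # Simplification: the real agent also filters on datapath and port
        # type before provisioning metadata for the network.
        if row.chassis and row.chassis[0].name == self.chassis_name:
            print(f"Port {row.logical_port} bound to our chassis")
```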
Feb 25 07:54:29 np0005629333 ovn_controller[147040]: 2026-02-25T12:54:29Z|01406|binding|INFO|Setting lport 205ead5a-797e-421e-87e3-ec5dac2037d3 ovn-installed in OVS
Feb 25 07:54:29 np0005629333 ovn_controller[147040]: 2026-02-25T12:54:29Z|01407|binding|INFO|Setting lport 205ead5a-797e-421e-87e3-ec5dac2037d3 up in Southbound
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.611 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 205ead5a-797e-421e-87e3-ec5dac2037d3 in datapath 02748d96-83c0-45be-acd6-081ad673e4bc bound to our chassis#033[00m
Feb 25 07:54:29 np0005629333 nova_compute[244014]: 2026-02-25 12:54:29.612 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.613 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 02748d96-83c0-45be-acd6-081ad673e4bc#033[00m
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.623 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e855fed1-a291-4275-85a5-8fc4f4ad3ffc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.624 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap02748d96-81 in ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
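Provisioning the datapath means dropping one end of a veth pair into the ovnmeta- namespace (interface and namespace names taken from the log). The agent itself does this through pyroute2 under privsep; a sketch of the equivalent iproute2 calls:

```python
import subprocess

NS = "ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc"  # namespace from the log
OUTER, INNER = "tap02748d96-80", "tap02748d96-81"    # veth ends from the log

def provision_veth_sketch():
    """Create the veth pair and move the inner end into the namespace."""
    for cmd in (
        ["ip", "netns", "add", NS],
        ["ip", "link", "add", OUTER, "type", "veth", "peer", "name", INNER],
        ["ip", "link", "set", INNER, "netns", NS],
        ["ip", "netns", "exec", NS, "ip", "link", "set", INNER, "up"],
        ["ip", "link", "set", OUTER, "up"],
    ):
        subprocess.run(cmd, check=True)
```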
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.627 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap02748d96-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.628 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8b8f98a9-4cb5-41cf-835e-4442fe05b6bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.630 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[597caa3d-0de6-4421-b247-aa9fed6b9ca5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:29 np0005629333 systemd-udevd[364916]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:54:29 np0005629333 systemd-machined[210048]: New machine qemu-165-instance-00000085.
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.644 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[4e031721-519e-4f8a-8019-6cd88a0ce106]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:29 np0005629333 NetworkManager[49836]: <info>  [1772024069.6573] device (tap205ead5a-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:54:29 np0005629333 NetworkManager[49836]: <info>  [1772024069.6585] device (tap205ead5a-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:54:29 np0005629333 systemd[1]: Started Virtual Machine qemu-165-instance-00000085.
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.671 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c5a58ec3-a539-4e4a-bbcd-1b937cabf96b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.702 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[40591f7b-46bf-4d6c-8e73-4ec923335776]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:29 np0005629333 NetworkManager[49836]: <info>  [1772024069.7100] manager: (tap02748d96-80): new Veth device (/org/freedesktop/NetworkManager/Devices/585)
Feb 25 07:54:29 np0005629333 systemd-udevd[364919]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.710 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f07a93b9-a7a1-46db-9469-2b36feaff7b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.740 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7d6b24e7-0b33-447e-8e9e-46257061a365]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.745 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5a48c221-b3ed-4952-849e-631c94d70aac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:29 np0005629333 NetworkManager[49836]: <info>  [1772024069.7747] device (tap02748d96-80): carrier: link connected
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.783 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[03d5c8f9-838a-40e7-b81e-ffa31cff7780]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.803 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d1eec0e7-e933-437b-8a13-cc392c2f2633]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02748d96-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:5d:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603918, 'reachable_time': 36344, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364947, 'error': None, 'target': 'ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.824 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8447c8b1-b210-4617-a581-445921791310]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe33:5d94'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603918, 'tstamp': 603918}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364948, 'error': None, 'target': 'ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.845 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3033fd69-b9b4-4025-b0da-643d485878b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap02748d96-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:33:5d:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603918, 'reachable_time': 36344, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364949, 'error': None, 'target': 'ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.887 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bed870f7-a731-41ea-a5a5-7a71e2a26de9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.951 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b2d5b4fa-c7c9-4715-a8e8-85517083e85e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.955 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02748d96-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.956 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.956 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap02748d96-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:54:29 np0005629333 nova_compute[244014]: 2026-02-25 12:54:29.960 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:29 np0005629333 NetworkManager[49836]: <info>  [1772024069.9613] manager: (tap02748d96-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/586)
Feb 25 07:54:29 np0005629333 kernel: tap02748d96-80: entered promiscuous mode
Feb 25 07:54:29 np0005629333 nova_compute[244014]: 2026-02-25 12:54:29.964 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.965 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap02748d96-80, col_values=(('external_ids', {'iface-id': 'd05bda2c-299c-4a37-881d-1ed81c75bb47'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:54:29 np0005629333 nova_compute[244014]: 2026-02-25 12:54:29.967 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:29 np0005629333 ovn_controller[147040]: 2026-02-25T12:54:29Z|01408|binding|INFO|Releasing lport d05bda2c-299c-4a37-881d-1ed81c75bb47 from this chassis (sb_readonly=0)
Feb 25 07:54:29 np0005629333 nova_compute[244014]: 2026-02-25 12:54:29.968 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.968 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/02748d96-83c0-45be-acd6-081ad673e4bc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/02748d96-83c0-45be-acd6-081ad673e4bc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
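The "Unable to access ... .pid.haproxy" line above is a benign first-run probe: read the pidfile if it exists, otherwise treat the proxy as not yet spawned. A sketch of the same helper:

```python
def get_pid_from_file(path: str):
    """Return the stored PID, or None when the proxy was never started."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        # Matches the log: a missing pidfile on first provision is expected.
        return None
```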
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.970 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2cb52bb3-03f2-4a81-8650-3aff3c80917d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.971 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-02748d96-83c0-45be-acd6-081ad673e4bc
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/02748d96-83c0-45be-acd6-081ad673e4bc.pid.haproxy
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 02748d96-83c0-45be-acd6-081ad673e4bc
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
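[editor's note] The haproxy_cfg dump above is a plain string template filled in with the network UUID, pidfile path, and metadata bind address. A minimal sketch, abbreviated to the global and listen sections and using a hypothetical render step rather than neutron's actual driver code, of how such a config can be produced; `haproxy -c -f <file>` can then syntax-check the result without starting the proxy:

    from string import Template

    HAPROXY_TMPL = Template("""\
    global
        log         /dev/log local0 debug
        log-tag     haproxy-metadata-proxy-$network_id
        user        root
        group       root
        maxconn     1024
        pidfile     $pidfile
        daemon

    listen listener
        bind $bind_address:80
        server metadata $socket_path
        http-request add-header X-OVN-Network-ID $network_id
    """)

    cfg = HAPROXY_TMPL.substitute(
        network_id="02748d96-83c0-45be-acd6-081ad673e4bc",
        pidfile="/var/lib/neutron/external/pids/"
                "02748d96-83c0-45be-acd6-081ad673e4bc.pid.haproxy",
        bind_address="169.254.169.254",
        socket_path="/var/lib/neutron/metadata_proxy",
    )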
Feb 25 07:54:29 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:29.972 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc', 'env', 'PROCESS_TAG=haproxy-02748d96-83c0-45be-acd6-081ad673e4bc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/02748d96-83c0-45be-acd6-081ad673e4bc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
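[editor's note] haproxy is launched inside the ovnmeta- network namespace so it can bind 169.254.169.254:80 without colliding with anything on the host. A sketch of the same launch pattern with plain subprocess, omitting the neutron-rootwrap privilege-separation wrapper shown in the logged command:

    import subprocess

    netns = "ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc"
    conf = ("/var/lib/neutron/ovn-metadata-proxy/"
            "02748d96-83c0-45be-acd6-081ad673e4bc.conf")
    # PROCESS_TAG matches what the agent sets so the process is findable later.
    subprocess.run(["ip", "netns", "exec", netns,
                    "env", "PROCESS_TAG=haproxy-" + netns.removeprefix("ovnmeta-"),
                    "haproxy", "-f", conf], check=True)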
Feb 25 07:54:29 np0005629333 nova_compute[244014]: 2026-02-25 12:54:29.973 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.046 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024070.0462513, 3da53ea4-e42d-4bfb-b917-d9c81fd3652f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.047 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] VM Started (Lifecycle Event)#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.072 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.077 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024070.0463872, 3da53ea4-e42d-4bfb-b917-d9c81fd3652f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.077 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.103 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.108 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.146 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
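[editor's note] The numeric states in the sync message above come from nova.compute.power_state: 0 is NOSTATE, 1 is RUNNING, 3 is PAUSED (4 SHUTDOWN, 6 CRASHED, 7 SUSPENDED). So "current DB power_state: 0, VM power_state: 3" means the database has not yet recorded a state while libvirt reports the briefly paused guest; the later Resumed event shows VM power_state: 1. A one-line decode of the values seen here:

    # Constants as defined in nova.compute.power_state.
    POWER_STATE = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                   4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}
    print(POWER_STATE[0], "->", POWER_STATE[3])  # DB vs. VM state at 12:54:30.108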
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.178 244018 DEBUG nova.compute.manager [req-71c21a7d-3001-4def-8e67-84f962255f76 req-febe5ce4-2b76-46bd-9ec3-645a6ede5d91 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Received event network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.179 244018 DEBUG oslo_concurrency.lockutils [req-71c21a7d-3001-4def-8e67-84f962255f76 req-febe5ce4-2b76-46bd-9ec3-645a6ede5d91 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.180 244018 DEBUG oslo_concurrency.lockutils [req-71c21a7d-3001-4def-8e67-84f962255f76 req-febe5ce4-2b76-46bd-9ec3-645a6ede5d91 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.181 244018 DEBUG oslo_concurrency.lockutils [req-71c21a7d-3001-4def-8e67-84f962255f76 req-febe5ce4-2b76-46bd-9ec3-645a6ede5d91 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.182 244018 DEBUG nova.compute.manager [req-71c21a7d-3001-4def-8e67-84f962255f76 req-febe5ce4-2b76-46bd-9ec3-645a6ede5d91 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Processing event network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.184 244018 DEBUG nova.compute.manager [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.205 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024070.2051318, 3da53ea4-e42d-4bfb-b917-d9c81fd3652f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.208 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.210 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.213 244018 INFO nova.virt.libvirt.driver [-] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Instance spawned successfully.#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.213 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.274 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.279 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.284 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.284 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.285 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.285 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.285 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.286 244018 DEBUG nova.virt.libvirt.driver [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.335 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.365 244018 INFO nova.compute.manager [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Took 6.05 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.366 244018 DEBUG nova.compute.manager [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.413 244018 INFO nova.compute.manager [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Took 7.28 seconds to build instance.#033[00m
Feb 25 07:54:30 np0005629333 podman[365023]: 2026-02-25 12:54:30.419899423 +0000 UTC m=+0.089224068 container create 0f02a67e5a7f861ff1a21a7f6506dd83c2517b9075acb95b64461c07ae7316b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.43.0)
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.425 244018 DEBUG oslo_concurrency.lockutils [None req-cb8a506a-65e9-44db-ae53-acfad8ef1a8f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.391s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.457 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:30 np0005629333 podman[365023]: 2026-02-25 12:54:30.378642289 +0000 UTC m=+0.047966924 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:54:30 np0005629333 systemd[1]: Started libpod-conmon-0f02a67e5a7f861ff1a21a7f6506dd83c2517b9075acb95b64461c07ae7316b3.scope.
Feb 25 07:54:30 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:54:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/094a23f1f3da701437aaad813d898e692f00c7eb9efe6f71e247ac3422f36f17/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:54:30 np0005629333 podman[365023]: 2026-02-25 12:54:30.530014009 +0000 UTC m=+0.199338694 container init 0f02a67e5a7f861ff1a21a7f6506dd83c2517b9075acb95b64461c07ae7316b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 25 07:54:30 np0005629333 podman[365023]: 2026-02-25 12:54:30.537643774 +0000 UTC m=+0.206968419 container start 0f02a67e5a7f861ff1a21a7f6506dd83c2517b9075acb95b64461c07ae7316b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:54:30 np0005629333 neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc[365038]: [NOTICE]   (365042) : New worker (365044) forked
Feb 25 07:54:30 np0005629333 neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc[365038]: [NOTICE]   (365042) : Loading success.
Feb 25 07:54:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2218: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 345 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.895 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.896 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.896 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.896 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:54:30 np0005629333 nova_compute[244014]: 2026-02-25 12:54:30.897 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:54:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:54:31
Feb 25 07:54:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:54:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:54:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'backups', 'vms', 'images', 'default.rgw.log', 'volumes', '.mgr']
Feb 25 07:54:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:54:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:54:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1112003846' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:54:31 np0005629333 nova_compute[244014]: 2026-02-25 12:54:31.489 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
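[editor's note] During the resource audit the libvirt driver shells out to ceph df (visible to the mon as the audited "df" command above) to size the RBD-backed disk pool. A minimal sketch of parsing that output; the JSON field names ("stats", "total_avail_bytes") are assumptions about the ceph df schema for illustration, and nova's RBD driver actually works from per-pool figures rather than this cluster total:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)["stats"]  # assumed schema
    free_gib = stats["total_avail_bytes"] / 1024 ** 3
    print(f"{free_gib:.1f} GiB available")  # compare the pgmap line: 59 GiB avail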
Feb 25 07:54:31 np0005629333 nova_compute[244014]: 2026-02-25 12:54:31.582 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:54:31 np0005629333 nova_compute[244014]: 2026-02-25 12:54:31.583 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000084 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:54:31 np0005629333 nova_compute[244014]: 2026-02-25 12:54:31.587 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:54:31 np0005629333 nova_compute[244014]: 2026-02-25 12:54:31.588 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000082 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:54:31 np0005629333 nova_compute[244014]: 2026-02-25 12:54:31.592 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:54:31 np0005629333 nova_compute[244014]: 2026-02-25 12:54:31.593 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:54:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:54:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:54:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:54:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:54:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:54:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:54:31 np0005629333 nova_compute[244014]: 2026-02-25 12:54:31.793 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:54:31 np0005629333 nova_compute[244014]: 2026-02-25 12:54:31.795 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3029MB free_disk=59.875610740855336GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 07:54:31 np0005629333 nova_compute[244014]: 2026-02-25 12:54:31.795 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:31 np0005629333 nova_compute[244014]: 2026-02-25 12:54:31.796 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:31 np0005629333 nova_compute[244014]: 2026-02-25 12:54:31.939 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 03d948e4-e7cb-45ea-bf63-7ab363b4d46e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:54:31 np0005629333 nova_compute[244014]: 2026-02-25 12:54:31.939 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 945e5549-40d1-4eae-8179-84ad1d751957 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:54:31 np0005629333 nova_compute[244014]: 2026-02-25 12:54:31.940 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 3da53ea4-e42d-4bfb-b917-d9c81fd3652f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:54:31 np0005629333 nova_compute[244014]: 2026-02-25 12:54:31.940 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 07:54:31 np0005629333 nova_compute[244014]: 2026-02-25 12:54:31.940 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.033 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:54:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:54:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:54:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:54:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:54:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:54:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:54:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:54:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:54:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:54:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.310 244018 DEBUG nova.compute.manager [req-7ad9bee0-0f03-40db-bbc5-0e132cb4d442 req-4491af01-c742-4928-b4da-329e3f4b03bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Received event network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.311 244018 DEBUG oslo_concurrency.lockutils [req-7ad9bee0-0f03-40db-bbc5-0e132cb4d442 req-4491af01-c742-4928-b4da-329e3f4b03bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.311 244018 DEBUG oslo_concurrency.lockutils [req-7ad9bee0-0f03-40db-bbc5-0e132cb4d442 req-4491af01-c742-4928-b4da-329e3f4b03bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.312 244018 DEBUG oslo_concurrency.lockutils [req-7ad9bee0-0f03-40db-bbc5-0e132cb4d442 req-4491af01-c742-4928-b4da-329e3f4b03bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.312 244018 DEBUG nova.compute.manager [req-7ad9bee0-0f03-40db-bbc5-0e132cb4d442 req-4491af01-c742-4928-b4da-329e3f4b03bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] No waiting events found dispatching network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.313 244018 WARNING nova.compute.manager [req-7ad9bee0-0f03-40db-bbc5-0e132cb4d442 req-4491af01-c742-4928-b4da-329e3f4b03bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Received unexpected event network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:54:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:54:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1882036114' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.592 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.598 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.621 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
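[editor's note] Placement capacity for each resource class in the inventory above is (total - reserved) * allocation_ratio, which is how 8 physical vCPUs back 32 schedulable VCPU (matching "Total usable vcpus: 8, total allocated vcpus: 3" against 4.0x overcommit) and how the 59 GB disk is capped near 52 GB. Worked out with the logged numbers:

    inv = {"VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
           "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
           "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9}}
    for rc, v in inv.items():
        cap = int((v["total"] - v["reserved"]) * v["allocation_ratio"])
        print(rc, cap)  # VCPU 32, MEMORY_MB 7167, DISK_GB 52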
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.660 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.660 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.865s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:54:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2219: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.725 244018 DEBUG oslo_concurrency.lockutils [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "945e5549-40d1-4eae-8179-84ad1d751957" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.726 244018 DEBUG oslo_concurrency.lockutils [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.727 244018 DEBUG oslo_concurrency.lockutils [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "945e5549-40d1-4eae-8179-84ad1d751957-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.728 244018 DEBUG oslo_concurrency.lockutils [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.728 244018 DEBUG oslo_concurrency.lockutils [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.731 244018 INFO nova.compute.manager [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Terminating instance#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.733 244018 DEBUG nova.compute.manager [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.783 244018 DEBUG nova.compute.manager [req-891c54a4-6a0c-4a7a-b3cd-6b35032803ba req-0cfbb663-8ad1-4c9c-b3ab-5770b818e15d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Received event network-changed-edede35b-327b-4dc5-8432-cc64bb4a290d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.784 244018 DEBUG nova.compute.manager [req-891c54a4-6a0c-4a7a-b3cd-6b35032803ba req-0cfbb663-8ad1-4c9c-b3ab-5770b818e15d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Refreshing instance network info cache due to event network-changed-edede35b-327b-4dc5-8432-cc64bb4a290d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:54:32 np0005629333 kernel: tapedede35b-32 (unregistering): left promiscuous mode
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.784 244018 DEBUG oslo_concurrency.lockutils [req-891c54a4-6a0c-4a7a-b3cd-6b35032803ba req-0cfbb663-8ad1-4c9c-b3ab-5770b818e15d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-945e5549-40d1-4eae-8179-84ad1d751957" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.786 244018 DEBUG oslo_concurrency.lockutils [req-891c54a4-6a0c-4a7a-b3cd-6b35032803ba req-0cfbb663-8ad1-4c9c-b3ab-5770b818e15d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-945e5549-40d1-4eae-8179-84ad1d751957" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.786 244018 DEBUG nova.network.neutron [req-891c54a4-6a0c-4a7a-b3cd-6b35032803ba req-0cfbb663-8ad1-4c9c-b3ab-5770b818e15d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Refreshing network info cache for port edede35b-327b-4dc5-8432-cc64bb4a290d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:54:32 np0005629333 NetworkManager[49836]: <info>  [1772024072.7895] device (tapedede35b-32): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.799 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.803 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:54:32Z|01409|binding|INFO|Releasing lport edede35b-327b-4dc5-8432-cc64bb4a290d from this chassis (sb_readonly=0)
Feb 25 07:54:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:54:32Z|01410|binding|INFO|Setting lport edede35b-327b-4dc5-8432-cc64bb4a290d down in Southbound
Feb 25 07:54:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:54:32Z|01411|binding|INFO|Removing iface tapedede35b-32 ovn-installed in OVS
Feb 25 07:54:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:32.808 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:b5:86 10.100.0.13 2001:db8:0:1:f816:3eff:fe06:b586 2001:db8::f816:3eff:fe06:b586'], port_security=['fa:16:3e:06:b5:86 10.100.0.13 2001:db8:0:1:f816:3eff:fe06:b586 2001:db8::f816:3eff:fe06:b586'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8:0:1:f816:3eff:fe06:b586/64 2001:db8::f816:3eff:fe06:b586/64', 'neutron:device_id': '945e5549-40d1-4eae-8179-84ad1d751957', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88562c34-222a-439a-b444-9e6f8a6d70cd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cd4482ae-906b-42b8-8dd6-2eff099d2f11', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1dcb5c12-7c78-49d6-b14f-b2c48c93839d, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=edede35b-327b-4dc5-8432-cc64bb4a290d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:54:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:32.809 157129 INFO neutron.agent.ovn.metadata.agent [-] Port edede35b-327b-4dc5-8432-cc64bb4a290d in datapath 88562c34-222a-439a-b444-9e6f8a6d70cd unbound from our chassis#033[00m
Feb 25 07:54:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:32.810 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 88562c34-222a-439a-b444-9e6f8a6d70cd#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.819 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:32.833 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[be229f7b-1aab-415d-b7a8-13c370c307c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:32 np0005629333 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000084.scope: Deactivated successfully.
Feb 25 07:54:32 np0005629333 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000084.scope: Consumed 14.241s CPU time.
Feb 25 07:54:32 np0005629333 systemd-machined[210048]: Machine qemu-164-instance-00000084 terminated.
Feb 25 07:54:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:32.868 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[bb00e625-9d29-4c2b-8066-072697461f2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:32.871 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[108c8a51-a3b0-431a-a3d7-a97fac6b3f88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:32.900 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[15f62c61-1452-4006-8a82-a63463eb4d0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:32.912 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[31d2a44e-c1a3-481d-aa63-b39edbf7f4a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap88562c34-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:5e:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 3328, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 411], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598029, 'reachable_time': 38748, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365111, 'error': None, 'target': 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:32.926 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8a935afc-e17d-4d0f-8007-1f95f414414c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap88562c34-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 598041, 'tstamp': 598041}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365112, 'error': None, 'target': 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap88562c34-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 598043, 'tstamp': 598043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365112, 'error': None, 'target': 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
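[editor's note] The two privsep replies above are raw netlink messages fetched from inside the ovnmeta- namespace: one RTM_NEWLINK for the veth tap88562c34-21 and two RTM_NEWADDR records (10.100.0.2/28 plus the 169.254.169.254/32 metadata address). A sketch of reading the same data directly with pyroute2; the namespace and expected values are taken from the log, while the exact pyroute2 calls are an assumption about its API rather than the agent's own code path:

    from pyroute2 import NetNS

    with NetNS("ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd") as ns:
        for link in ns.get_links():
            name = link.get_attr("IFLA_IFNAME")
            addrs = [a.get_attr("IFA_ADDRESS")
                     for a in ns.get_addr(index=link["index"])]
            # tap88562c34-21 should list 10.100.0.2 and 169.254.169.254.
            print(name, addrs)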
Feb 25 07:54:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:32.928 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88562c34-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.930 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.936 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:32.936 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88562c34-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:54:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:32.937 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:54:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:32.937 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap88562c34-20, col_values=(('external_ids', {'iface-id': '09662fcb-392d-469a-981c-54d31225748b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:54:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:32.938 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.969 244018 INFO nova.virt.libvirt.driver [-] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Instance destroyed successfully.#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.970 244018 DEBUG nova.objects.instance [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid 945e5549-40d1-4eae-8179-84ad1d751957 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.992 244018 DEBUG nova.virt.libvirt.vif [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:53:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-487150499',display_name='tempest-TestGettingAddress-server-487150499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-487150499',id=132,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL/SVWuEVaf+Py7xASyHmNYfi29LBzumkZB6ldUzxX0Dxybzo1LoSeFAK9YF8BeK/8cHma4lQxEX6N+N2g4I6aGP/1FtZuhCE2GUlMamVs4LR8ITh1z/8qYyUh+iaduJ8A==',key_name='tempest-TestGettingAddress-2128145461',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:54:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-9mlkqqq2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:54:09Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=945e5549-40d1-4eae-8179-84ad1d751957,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "edede35b-327b-4dc5-8432-cc64bb4a290d", "address": "fa:16:3e:06:b5:86", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedede35b-32", "ovs_interfaceid": "edede35b-327b-4dc5-8432-cc64bb4a290d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.992 244018 DEBUG nova.network.os_vif_util [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "edede35b-327b-4dc5-8432-cc64bb4a290d", "address": "fa:16:3e:06:b5:86", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedede35b-32", "ovs_interfaceid": "edede35b-327b-4dc5-8432-cc64bb4a290d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.993 244018 DEBUG nova.network.os_vif_util [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:06:b5:86,bridge_name='br-int',has_traffic_filtering=True,id=edede35b-327b-4dc5-8432-cc64bb4a290d,network=Network(88562c34-222a-439a-b444-9e6f8a6d70cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedede35b-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.994 244018 DEBUG os_vif [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:06:b5:86,bridge_name='br-int',has_traffic_filtering=True,id=edede35b-327b-4dc5-8432-cc64bb4a290d,network=Network(88562c34-222a-439a-b444-9e6f8a6d70cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedede35b-32') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.995 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.996 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapedede35b-32, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.997 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:32 np0005629333 nova_compute[244014]: 2026-02-25 12:54:32.998 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.002 244018 INFO os_vif [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:06:b5:86,bridge_name='br-int',has_traffic_filtering=True,id=edede35b-327b-4dc5-8432-cc64bb4a290d,network=Network(88562c34-222a-439a-b444-9e6f8a6d70cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedede35b-32')#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.218 244018 DEBUG oslo_concurrency.lockutils [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.219 244018 DEBUG oslo_concurrency.lockutils [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.219 244018 DEBUG oslo_concurrency.lockutils [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.219 244018 DEBUG oslo_concurrency.lockutils [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.219 244018 DEBUG oslo_concurrency.lockutils [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.220 244018 INFO nova.compute.manager [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Terminating instance#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.221 244018 DEBUG nova.compute.manager [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:54:33 np0005629333 kernel: tap205ead5a-79 (unregistering): left promiscuous mode
Feb 25 07:54:33 np0005629333 NetworkManager[49836]: <info>  [1772024073.2508] device (tap205ead5a-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:54:33 np0005629333 ovn_controller[147040]: 2026-02-25T12:54:33Z|01412|binding|INFO|Releasing lport 205ead5a-797e-421e-87e3-ec5dac2037d3 from this chassis (sb_readonly=0)
Feb 25 07:54:33 np0005629333 ovn_controller[147040]: 2026-02-25T12:54:33Z|01413|binding|INFO|Setting lport 205ead5a-797e-421e-87e3-ec5dac2037d3 down in Southbound
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.255 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:33 np0005629333 ovn_controller[147040]: 2026-02-25T12:54:33Z|01414|binding|INFO|Removing iface tap205ead5a-79 ovn-installed in OVS
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.257 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:33.264 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:de:be 10.100.0.13'], port_security=['fa:16:3e:5f:de:be 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-564267725', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3da53ea4-e42d-4bfb-b917-d9c81fd3652f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02748d96-83c0-45be-acd6-081ad673e4bc', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-564267725', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '9', 'neutron:security_group_ids': '4f6e829b-663a-471b-98d6-d0d40f869440', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.188', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=988642bf-bd28-4e43-bffa-6e16e232d40d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=205ead5a-797e-421e-87e3-ec5dac2037d3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.264 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:33.265 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 205ead5a-797e-421e-87e3-ec5dac2037d3 in datapath 02748d96-83c0-45be-acd6-081ad673e4bc unbound from our chassis#033[00m
Feb 25 07:54:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:33.266 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 02748d96-83c0-45be-acd6-081ad673e4bc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:54:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:33.267 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6a8c9b-5e09-42d6-a2c4-f7d5c9f4c3c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:33.267 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc namespace which is not needed anymore#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.271 244018 INFO nova.virt.libvirt.driver [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Deleting instance files /var/lib/nova/instances/945e5549-40d1-4eae-8179-84ad1d751957_del#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.272 244018 INFO nova.virt.libvirt.driver [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Deletion of /var/lib/nova/instances/945e5549-40d1-4eae-8179-84ad1d751957_del complete#033[00m
Feb 25 07:54:33 np0005629333 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000085.scope: Deactivated successfully.
Feb 25 07:54:33 np0005629333 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000085.scope: Consumed 3.409s CPU time.
Feb 25 07:54:33 np0005629333 systemd-machined[210048]: Machine qemu-165-instance-00000085 terminated.
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.328 244018 INFO nova.compute.manager [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Took 0.59 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.329 244018 DEBUG oslo.service.loopingcall [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.329 244018 DEBUG nova.compute.manager [-] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.329 244018 DEBUG nova.network.neutron [-] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:54:33 np0005629333 neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc[365038]: [NOTICE]   (365042) : haproxy version is 2.8.14-c23fe91
Feb 25 07:54:33 np0005629333 neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc[365038]: [NOTICE]   (365042) : path to executable is /usr/sbin/haproxy
Feb 25 07:54:33 np0005629333 neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc[365038]: [WARNING]  (365042) : Exiting Master process...
Feb 25 07:54:33 np0005629333 neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc[365038]: [ALERT]    (365042) : Current worker (365044) exited with code 143 (Terminated)
Feb 25 07:54:33 np0005629333 neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc[365038]: [WARNING]  (365042) : All workers exited. Exiting... (0)
Feb 25 07:54:33 np0005629333 systemd[1]: libpod-0f02a67e5a7f861ff1a21a7f6506dd83c2517b9075acb95b64461c07ae7316b3.scope: Deactivated successfully.
Feb 25 07:54:33 np0005629333 podman[365165]: 2026-02-25 12:54:33.379777896 +0000 UTC m=+0.045307669 container died 0f02a67e5a7f861ff1a21a7f6506dd83c2517b9075acb95b64461c07ae7316b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:54:33 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0f02a67e5a7f861ff1a21a7f6506dd83c2517b9075acb95b64461c07ae7316b3-userdata-shm.mount: Deactivated successfully.
Feb 25 07:54:33 np0005629333 systemd[1]: var-lib-containers-storage-overlay-094a23f1f3da701437aaad813d898e692f00c7eb9efe6f71e247ac3422f36f17-merged.mount: Deactivated successfully.
Feb 25 07:54:33 np0005629333 podman[365165]: 2026-02-25 12:54:33.42283473 +0000 UTC m=+0.088364413 container cleanup 0f02a67e5a7f861ff1a21a7f6506dd83c2517b9075acb95b64461c07ae7316b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:54:33 np0005629333 systemd[1]: libpod-conmon-0f02a67e5a7f861ff1a21a7f6506dd83c2517b9075acb95b64461c07ae7316b3.scope: Deactivated successfully.
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.440 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.443 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.449 244018 INFO nova.virt.libvirt.driver [-] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Instance destroyed successfully.#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.450 244018 DEBUG nova.objects.instance [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'resources' on Instance uuid 3da53ea4-e42d-4bfb-b917-d9c81fd3652f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:54:33 np0005629333 podman[365194]: 2026-02-25 12:54:33.48095992 +0000 UTC m=+0.040299088 container remove 0f02a67e5a7f861ff1a21a7f6506dd83c2517b9075acb95b64461c07ae7316b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 25 07:54:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:33.486 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c84055e6-79c6-462b-b83f-d86426f4d34c]: (4, ('Wed Feb 25 12:54:33 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc (0f02a67e5a7f861ff1a21a7f6506dd83c2517b9075acb95b64461c07ae7316b3)\n0f02a67e5a7f861ff1a21a7f6506dd83c2517b9075acb95b64461c07ae7316b3\nWed Feb 25 12:54:33 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc (0f02a67e5a7f861ff1a21a7f6506dd83c2517b9075acb95b64461c07ae7316b3)\n0f02a67e5a7f861ff1a21a7f6506dd83c2517b9075acb95b64461c07ae7316b3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:33.488 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[597fa506-150a-4623-a12c-ef3b015fbcf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:33.489 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap02748d96-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.492 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:33 np0005629333 kernel: tap02748d96-80: left promiscuous mode
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.500 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:33.503 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[040ee3dc-c99d-4ce9-a27a-c51f0df98da6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.505 244018 DEBUG nova.virt.libvirt.vif [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:54:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1213997257',display_name='tempest-TestNetworkBasicOps-server-1213997257',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1213997257',id=133,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLNG26zhuNSlF1TfE4sDBmDnvGmo5/ze2p13GCwWfDn/0tZUccugkaXigVz2IREshfwlZeDWvbX4lpuIp+1iD+9XZNIkC3KjSMU6+yZzbD4HnlUsSc5LJ/0HlGl49tXOmw==',key_name='tempest-TestNetworkBasicOps-1599989750',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:54:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ywrihctv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:54:30Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=3da53ea4-e42d-4bfb-b917-d9c81fd3652f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.507 244018 DEBUG nova.network.os_vif_util [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "205ead5a-797e-421e-87e3-ec5dac2037d3", "address": "fa:16:3e:5f:de:be", "network": {"id": "02748d96-83c0-45be-acd6-081ad673e4bc", "bridge": "br-int", "label": "tempest-network-smoke--2036922693", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap205ead5a-79", "ovs_interfaceid": "205ead5a-797e-421e-87e3-ec5dac2037d3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.508 244018 DEBUG nova.network.os_vif_util [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:de:be,bridge_name='br-int',has_traffic_filtering=True,id=205ead5a-797e-421e-87e3-ec5dac2037d3,network=Network(02748d96-83c0-45be-acd6-081ad673e4bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap205ead5a-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.509 244018 DEBUG os_vif [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:de:be,bridge_name='br-int',has_traffic_filtering=True,id=205ead5a-797e-421e-87e3-ec5dac2037d3,network=Network(02748d96-83c0-45be-acd6-081ad673e4bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap205ead5a-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.511 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.512 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap205ead5a-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:54:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:33.518 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f5196b88-00cf-4702-8131-ec24c9cb0278]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.519 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:33.520 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bf56cc6b-3b6d-4069-ad7f-672583368240]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.521 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.524 244018 INFO os_vif [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:de:be,bridge_name='br-int',has_traffic_filtering=True,id=205ead5a-797e-421e-87e3-ec5dac2037d3,network=Network(02748d96-83c0-45be-acd6-081ad673e4bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap205ead5a-79')#033[00m
Feb 25 07:54:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:33.533 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[23d57f6f-7677-48b0-9eb1-6e3d5e53c3da]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603910, 'reachable_time': 18310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365223, 'error': None, 'target': 'ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:33 np0005629333 systemd[1]: run-netns-ovnmeta\x2d02748d96\x2d83c0\x2d45be\x2dacd6\x2d081ad673e4bc.mount: Deactivated successfully.
Feb 25 07:54:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:33.538 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-02748d96-83c0-45be-acd6-081ad673e4bc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:54:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:33.538 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8b237c-b23c-4909-9352-f4971d1c07dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.656 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:54:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.808 244018 INFO nova.virt.libvirt.driver [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Deleting instance files /var/lib/nova/instances/3da53ea4-e42d-4bfb-b917-d9c81fd3652f_del#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.808 244018 INFO nova.virt.libvirt.driver [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Deletion of /var/lib/nova/instances/3da53ea4-e42d-4bfb-b917-d9c81fd3652f_del complete#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.867 244018 INFO nova.compute.manager [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Took 0.65 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.868 244018 DEBUG oslo.service.loopingcall [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.871 244018 DEBUG nova.compute.manager [-] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.871 244018 DEBUG nova.network.neutron [-] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.899 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Feb 25 07:54:33 np0005629333 nova_compute[244014]: 2026-02-25 12:54:33.899 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.060 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.061 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.061 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.061 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid 03d948e4-e7cb-45ea-bf63-7ab363b4d46e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.378 244018 DEBUG nova.network.neutron [-] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.401 244018 INFO nova.compute.manager [-] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Took 1.07 seconds to deallocate network for instance.#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.419 244018 DEBUG nova.compute.manager [req-0d56df08-417a-47ae-a82e-e32c3188ab2a req-96b3291e-dd0e-45f5-bc51-76a3b73fdab4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Received event network-vif-unplugged-205ead5a-797e-421e-87e3-ec5dac2037d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.419 244018 DEBUG oslo_concurrency.lockutils [req-0d56df08-417a-47ae-a82e-e32c3188ab2a req-96b3291e-dd0e-45f5-bc51-76a3b73fdab4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.421 244018 DEBUG oslo_concurrency.lockutils [req-0d56df08-417a-47ae-a82e-e32c3188ab2a req-96b3291e-dd0e-45f5-bc51-76a3b73fdab4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.421 244018 DEBUG oslo_concurrency.lockutils [req-0d56df08-417a-47ae-a82e-e32c3188ab2a req-96b3291e-dd0e-45f5-bc51-76a3b73fdab4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.422 244018 DEBUG nova.compute.manager [req-0d56df08-417a-47ae-a82e-e32c3188ab2a req-96b3291e-dd0e-45f5-bc51-76a3b73fdab4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] No waiting events found dispatching network-vif-unplugged-205ead5a-797e-421e-87e3-ec5dac2037d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.422 244018 DEBUG nova.compute.manager [req-0d56df08-417a-47ae-a82e-e32c3188ab2a req-96b3291e-dd0e-45f5-bc51-76a3b73fdab4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Received event network-vif-unplugged-205ead5a-797e-421e-87e3-ec5dac2037d3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.422 244018 DEBUG nova.compute.manager [req-0d56df08-417a-47ae-a82e-e32c3188ab2a req-96b3291e-dd0e-45f5-bc51-76a3b73fdab4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Received event network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.423 244018 DEBUG oslo_concurrency.lockutils [req-0d56df08-417a-47ae-a82e-e32c3188ab2a req-96b3291e-dd0e-45f5-bc51-76a3b73fdab4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.423 244018 DEBUG oslo_concurrency.lockutils [req-0d56df08-417a-47ae-a82e-e32c3188ab2a req-96b3291e-dd0e-45f5-bc51-76a3b73fdab4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.424 244018 DEBUG oslo_concurrency.lockutils [req-0d56df08-417a-47ae-a82e-e32c3188ab2a req-96b3291e-dd0e-45f5-bc51-76a3b73fdab4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.424 244018 DEBUG nova.compute.manager [req-0d56df08-417a-47ae-a82e-e32c3188ab2a req-96b3291e-dd0e-45f5-bc51-76a3b73fdab4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] No waiting events found dispatching network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.425 244018 WARNING nova.compute.manager [req-0d56df08-417a-47ae-a82e-e32c3188ab2a req-96b3291e-dd0e-45f5-bc51-76a3b73fdab4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Received unexpected event network-vif-plugged-205ead5a-797e-421e-87e3-ec5dac2037d3 for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.460 244018 DEBUG oslo_concurrency.lockutils [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.461 244018 DEBUG oslo_concurrency.lockutils [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.582 244018 DEBUG oslo_concurrency.processutils [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:54:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2220: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 119 op/s
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.941 244018 DEBUG nova.compute.manager [req-73d00dc6-9da6-4cd4-aecd-68445c3e1850 req-fdab7dde-90bd-4898-ba04-53a292424a82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Received event network-vif-unplugged-edede35b-327b-4dc5-8432-cc64bb4a290d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.942 244018 DEBUG oslo_concurrency.lockutils [req-73d00dc6-9da6-4cd4-aecd-68445c3e1850 req-fdab7dde-90bd-4898-ba04-53a292424a82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "945e5549-40d1-4eae-8179-84ad1d751957-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.942 244018 DEBUG oslo_concurrency.lockutils [req-73d00dc6-9da6-4cd4-aecd-68445c3e1850 req-fdab7dde-90bd-4898-ba04-53a292424a82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.943 244018 DEBUG oslo_concurrency.lockutils [req-73d00dc6-9da6-4cd4-aecd-68445c3e1850 req-fdab7dde-90bd-4898-ba04-53a292424a82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.943 244018 DEBUG nova.compute.manager [req-73d00dc6-9da6-4cd4-aecd-68445c3e1850 req-fdab7dde-90bd-4898-ba04-53a292424a82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] No waiting events found dispatching network-vif-unplugged-edede35b-327b-4dc5-8432-cc64bb4a290d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.944 244018 WARNING nova.compute.manager [req-73d00dc6-9da6-4cd4-aecd-68445c3e1850 req-fdab7dde-90bd-4898-ba04-53a292424a82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Received unexpected event network-vif-unplugged-edede35b-327b-4dc5-8432-cc64bb4a290d for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.944 244018 DEBUG nova.compute.manager [req-73d00dc6-9da6-4cd4-aecd-68445c3e1850 req-fdab7dde-90bd-4898-ba04-53a292424a82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Received event network-vif-plugged-edede35b-327b-4dc5-8432-cc64bb4a290d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.945 244018 DEBUG oslo_concurrency.lockutils [req-73d00dc6-9da6-4cd4-aecd-68445c3e1850 req-fdab7dde-90bd-4898-ba04-53a292424a82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "945e5549-40d1-4eae-8179-84ad1d751957-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.945 244018 DEBUG oslo_concurrency.lockutils [req-73d00dc6-9da6-4cd4-aecd-68445c3e1850 req-fdab7dde-90bd-4898-ba04-53a292424a82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.945 244018 DEBUG oslo_concurrency.lockutils [req-73d00dc6-9da6-4cd4-aecd-68445c3e1850 req-fdab7dde-90bd-4898-ba04-53a292424a82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.946 244018 DEBUG nova.compute.manager [req-73d00dc6-9da6-4cd4-aecd-68445c3e1850 req-fdab7dde-90bd-4898-ba04-53a292424a82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] No waiting events found dispatching network-vif-plugged-edede35b-327b-4dc5-8432-cc64bb4a290d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:54:34 np0005629333 nova_compute[244014]: 2026-02-25 12:54:34.946 244018 WARNING nova.compute.manager [req-73d00dc6-9da6-4cd4-aecd-68445c3e1850 req-fdab7dde-90bd-4898-ba04-53a292424a82 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Received unexpected event network-vif-plugged-edede35b-327b-4dc5-8432-cc64bb4a290d for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:54:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:54:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1009173585' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:54:35 np0005629333 nova_compute[244014]: 2026-02-25 12:54:35.203 244018 DEBUG oslo_concurrency.processutils [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:54:35 np0005629333 nova_compute[244014]: 2026-02-25 12:54:35.210 244018 DEBUG nova.compute.provider_tree [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:54:35 np0005629333 nova_compute[244014]: 2026-02-25 12:54:35.229 244018 DEBUG nova.scheduler.client.report [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
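Placement derives schedulable capacity from each inventory record as capacity = (total - reserved) * allocation_ratio, so the inventory above yields 32 VCPU, 7167 MB of RAM, and 52.2 GB of disk. A worked check using the values from the log line:

    # Worked check of the Placement capacity formula,
    #   capacity = (total - reserved) * allocation_ratio,
    # with the inventory values logged above.
    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2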
Feb 25 07:54:35 np0005629333 nova_compute[244014]: 2026-02-25 12:54:35.261 244018 DEBUG oslo_concurrency.lockutils [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:54:35 np0005629333 nova_compute[244014]: 2026-02-25 12:54:35.305 244018 INFO nova.scheduler.client.report [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance 945e5549-40d1-4eae-8179-84ad1d751957#033[00m
Feb 25 07:54:35 np0005629333 nova_compute[244014]: 2026-02-25 12:54:35.378 244018 DEBUG oslo_concurrency.lockutils [None req-c26042c3-d9aa-4aff-baa8-4a287612208a f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "945e5549-40d1-4eae-8179-84ad1d751957" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:54:35 np0005629333 nova_compute[244014]: 2026-02-25 12:54:35.397 244018 DEBUG nova.network.neutron [req-891c54a4-6a0c-4a7a-b3cd-6b35032803ba req-0cfbb663-8ad1-4c9c-b3ab-5770b818e15d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Updated VIF entry in instance network info cache for port edede35b-327b-4dc5-8432-cc64bb4a290d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:54:35 np0005629333 nova_compute[244014]: 2026-02-25 12:54:35.397 244018 DEBUG nova.network.neutron [req-891c54a4-6a0c-4a7a-b3cd-6b35032803ba req-0cfbb663-8ad1-4c9c-b3ab-5770b818e15d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Updating instance_info_cache with network_info: [{"id": "edede35b-327b-4dc5-8432-cc64bb4a290d", "address": "fa:16:3e:06:b5:86", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe06:b586", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedede35b-32", "ovs_interfaceid": "edede35b-327b-4dc5-8432-cc64bb4a290d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
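The instance_info_cache payload logged above is ordinary JSON once lifted out of the log line. A small sketch that walks such a blob and prints each fixed IP; the trimmed sample data is taken from the entry above:

    # Sketch: extract fixed IPs from a nova network_info cache entry.
    # Field names follow the JSON blob in the preceding log line.
    import json

    cache_blob = '''[{"id": "edede35b-327b-4dc5-8432-cc64bb4a290d",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.13"}]}]}}]'''

    for vif in json.loads(cache_blob):
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                print(vif['id'], subnet['cidr'], ip['address'])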
Feb 25 07:54:35 np0005629333 nova_compute[244014]: 2026-02-25 12:54:35.427 244018 DEBUG oslo_concurrency.lockutils [req-891c54a4-6a0c-4a7a-b3cd-6b35032803ba req-0cfbb663-8ad1-4c9c-b3ab-5770b818e15d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-945e5549-40d1-4eae-8179-84ad1d751957" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:54:35 np0005629333 nova_compute[244014]: 2026-02-25 12:54:35.459 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:35 np0005629333 nova_compute[244014]: 2026-02-25 12:54:35.633 244018 DEBUG nova.network.neutron [-] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:54:35 np0005629333 nova_compute[244014]: 2026-02-25 12:54:35.658 244018 INFO nova.compute.manager [-] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Took 1.79 seconds to deallocate network for instance.#033[00m
Feb 25 07:54:35 np0005629333 nova_compute[244014]: 2026-02-25 12:54:35.740 244018 DEBUG oslo_concurrency.lockutils [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:35 np0005629333 nova_compute[244014]: 2026-02-25 12:54:35.741 244018 DEBUG oslo_concurrency.lockutils [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:35 np0005629333 nova_compute[244014]: 2026-02-25 12:54:35.799 244018 DEBUG oslo_concurrency.processutils [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:54:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:54:36 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1154605364' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.442 244018 DEBUG oslo_concurrency.processutils [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.450 244018 DEBUG nova.compute.provider_tree [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.482 244018 DEBUG nova.scheduler.client.report [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.521 244018 DEBUG nova.compute.manager [req-32145513-7911-437b-9991-0b4d71fd81ea req-bfeff6a5-6c90-45dc-ae69-a8672fbc49b6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Received event network-vif-deleted-edede35b-327b-4dc5-8432-cc64bb4a290d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.561 244018 DEBUG oslo_concurrency.lockutils [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.628 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Updating instance_info_cache with network_info: [{"id": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "address": "fa:16:3e:d9:be:ae", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66ec19f-11", "ovs_interfaceid": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.652 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.653 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.654 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.663 244018 INFO nova.scheduler.client.report [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Deleted allocations for instance 3da53ea4-e42d-4bfb-b917-d9c81fd3652f#033[00m
Feb 25 07:54:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 305 active+clean; 358 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 119 op/s
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.697 244018 DEBUG oslo_concurrency.lockutils [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.698 244018 DEBUG oslo_concurrency.lockutils [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.698 244018 DEBUG oslo_concurrency.lockutils [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.699 244018 DEBUG oslo_concurrency.lockutils [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.699 244018 DEBUG oslo_concurrency.lockutils [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.701 244018 INFO nova.compute.manager [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Terminating instance#033[00m
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.703 244018 DEBUG nova.compute.manager [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:54:36 np0005629333 kernel: tapf66ec19f-11 (unregistering): left promiscuous mode
Feb 25 07:54:36 np0005629333 NetworkManager[49836]: <info>  [1772024076.7581] device (tapf66ec19f-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:54:36 np0005629333 ovn_controller[147040]: 2026-02-25T12:54:36Z|01415|binding|INFO|Releasing lport f66ec19f-119c-4e9b-ba57-c8e2d850f5dc from this chassis (sb_readonly=0)
Feb 25 07:54:36 np0005629333 ovn_controller[147040]: 2026-02-25T12:54:36Z|01416|binding|INFO|Setting lport f66ec19f-119c-4e9b-ba57-c8e2d850f5dc down in Southbound
Feb 25 07:54:36 np0005629333 ovn_controller[147040]: 2026-02-25T12:54:36Z|01417|binding|INFO|Removing iface tapf66ec19f-11 ovn-installed in OVS
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.768 244018 DEBUG oslo_concurrency.lockutils [None req-f8f939cf-cc3e-494b-959a-b52a5fcffecc 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "3da53ea4-e42d-4bfb-b917-d9c81fd3652f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:54:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:36.774 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:be:ae 10.100.0.3 2001:db8:0:1:f816:3eff:fed9:beae 2001:db8::f816:3eff:fed9:beae'], port_security=['fa:16:3e:d9:be:ae 10.100.0.3 2001:db8:0:1:f816:3eff:fed9:beae 2001:db8::f816:3eff:fed9:beae'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8:0:1:f816:3eff:fed9:beae/64 2001:db8::f816:3eff:fed9:beae/64', 'neutron:device_id': '03d948e4-e7cb-45ea-bf63-7ab363b4d46e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88562c34-222a-439a-b444-9e6f8a6d70cd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cd4482ae-906b-42b8-8dd6-2eff099d2f11', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1dcb5c12-7c78-49d6-b14f-b2c48c93839d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=f66ec19f-119c-4e9b-ba57-c8e2d850f5dc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.774 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:36.775 157129 INFO neutron.agent.ovn.metadata.agent [-] Port f66ec19f-119c-4e9b-ba57-c8e2d850f5dc in datapath 88562c34-222a-439a-b444-9e6f8a6d70cd unbound from our chassis#033[00m
Feb 25 07:54:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:36.776 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 88562c34-222a-439a-b444-9e6f8a6d70cd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:54:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:36.777 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c946b269-b95f-496f-8d61-6b252b34b636]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:36.778 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd namespace which is not needed anymore#033[00m
Feb 25 07:54:36 np0005629333 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000082.scope: Deactivated successfully.
Feb 25 07:54:36 np0005629333 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000082.scope: Consumed 13.551s CPU time.
Feb 25 07:54:36 np0005629333 systemd-machined[210048]: Machine qemu-162-instance-00000082 terminated.
Feb 25 07:54:36 np0005629333 neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd[362609]: [NOTICE]   (362613) : haproxy version is 2.8.14-c23fe91
Feb 25 07:54:36 np0005629333 neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd[362609]: [NOTICE]   (362613) : path to executable is /usr/sbin/haproxy
Feb 25 07:54:36 np0005629333 neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd[362609]: [WARNING]  (362613) : Exiting Master process...
Feb 25 07:54:36 np0005629333 neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd[362609]: [ALERT]    (362613) : Current worker (362615) exited with code 143 (Terminated)
Feb 25 07:54:36 np0005629333 neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd[362609]: [WARNING]  (362613) : All workers exited. Exiting... (0)
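The worker's exit code 143 follows the shell convention 128 + signal number, i.e. SIGTERM (15): the haproxy serving this metadata namespace was stopped deliberately, not crashed. A one-line check:

    # 128 + SIGTERM == 143: the worker was terminated, it did not crash.
    import signal
    assert 128 + int(signal.SIGTERM) == 143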
Feb 25 07:54:36 np0005629333 systemd[1]: libpod-fea1bab1975a17824e0567deea72d3032b1182111d7bdd8fa9e701667e461d57.scope: Deactivated successfully.
Feb 25 07:54:36 np0005629333 podman[365311]: 2026-02-25 12:54:36.925590636 +0000 UTC m=+0.048250532 container died fea1bab1975a17824e0567deea72d3032b1182111d7bdd8fa9e701667e461d57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.948 244018 INFO nova.virt.libvirt.driver [-] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Instance destroyed successfully.#033[00m
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.949 244018 DEBUG nova.objects.instance [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid 03d948e4-e7cb-45ea-bf63-7ab363b4d46e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:54:36 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fea1bab1975a17824e0567deea72d3032b1182111d7bdd8fa9e701667e461d57-userdata-shm.mount: Deactivated successfully.
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.964 244018 DEBUG nova.virt.libvirt.vif [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:53:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1355608326',display_name='tempest-TestGettingAddress-server-1355608326',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1355608326',id=130,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL/SVWuEVaf+Py7xASyHmNYfi29LBzumkZB6ldUzxX0Dxybzo1LoSeFAK9YF8BeK/8cHma4lQxEX6N+N2g4I6aGP/1FtZuhCE2GUlMamVs4LR8ITh1z/8qYyUh+iaduJ8A==',key_name='tempest-TestGettingAddress-2128145461',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:53:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-0ah75or2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:53:32Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=03d948e4-e7cb-45ea-bf63-7ab363b4d46e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "address": "fa:16:3e:d9:be:ae", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66ec19f-11", "ovs_interfaceid": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.965 244018 DEBUG nova.network.os_vif_util [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "address": "fa:16:3e:d9:be:ae", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66ec19f-11", "ovs_interfaceid": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:54:36 np0005629333 systemd[1]: var-lib-containers-storage-overlay-10a865f5da51854f8f958183be1ede03192965b68e57f97054deb9abb51f58a8-merged.mount: Deactivated successfully.
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.966 244018 DEBUG nova.network.os_vif_util [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d9:be:ae,bridge_name='br-int',has_traffic_filtering=True,id=f66ec19f-119c-4e9b-ba57-c8e2d850f5dc,network=Network(88562c34-222a-439a-b444-9e6f8a6d70cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66ec19f-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.969 244018 DEBUG os_vif [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:be:ae,bridge_name='br-int',has_traffic_filtering=True,id=f66ec19f-119c-4e9b-ba57-c8e2d850f5dc,network=Network(88562c34-222a-439a-b444-9e6f8a6d70cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66ec19f-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.971 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.972 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf66ec19f-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.974 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.975 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:36 np0005629333 nova_compute[244014]: 2026-02-25 12:54:36.977 244018 INFO os_vif [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:be:ae,bridge_name='br-int',has_traffic_filtering=True,id=f66ec19f-119c-4e9b-ba57-c8e2d850f5dc,network=Network(88562c34-222a-439a-b444-9e6f8a6d70cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66ec19f-11')#033[00m
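The DelPortCommand transaction above is os_vif removing the tap device from br-int through the OVSDB IDL. The same cleanup, including the if_exists guard, can be expressed with the ovs-vsctl CLI; port and bridge names below are reused from the log:

    # Sketch: CLI equivalent of the DelPortCommand transaction above.
    # --if-exists mirrors the if_exists=True flag in the logged command.
    import subprocess

    subprocess.check_call([
        'ovs-vsctl', '--if-exists', 'del-port', 'br-int', 'tapf66ec19f-11',
    ])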
Feb 25 07:54:36 np0005629333 podman[365311]: 2026-02-25 12:54:36.978760056 +0000 UTC m=+0.101419932 container cleanup fea1bab1975a17824e0567deea72d3032b1182111d7bdd8fa9e701667e461d57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:54:36 np0005629333 systemd[1]: libpod-conmon-fea1bab1975a17824e0567deea72d3032b1182111d7bdd8fa9e701667e461d57.scope: Deactivated successfully.
Feb 25 07:54:37 np0005629333 nova_compute[244014]: 2026-02-25 12:54:37.033 244018 DEBUG nova.compute.manager [req-269c1321-1407-468b-89cf-e543affc1542 req-a8111be2-d6bf-4c60-9172-aeebb481b63a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Received event network-changed-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:54:37 np0005629333 nova_compute[244014]: 2026-02-25 12:54:37.034 244018 DEBUG nova.compute.manager [req-269c1321-1407-468b-89cf-e543affc1542 req-a8111be2-d6bf-4c60-9172-aeebb481b63a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Refreshing instance network info cache due to event network-changed-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:54:37 np0005629333 nova_compute[244014]: 2026-02-25 12:54:37.034 244018 DEBUG oslo_concurrency.lockutils [req-269c1321-1407-468b-89cf-e543affc1542 req-a8111be2-d6bf-4c60-9172-aeebb481b63a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:54:37 np0005629333 nova_compute[244014]: 2026-02-25 12:54:37.035 244018 DEBUG oslo_concurrency.lockutils [req-269c1321-1407-468b-89cf-e543affc1542 req-a8111be2-d6bf-4c60-9172-aeebb481b63a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:54:37 np0005629333 nova_compute[244014]: 2026-02-25 12:54:37.035 244018 DEBUG nova.network.neutron [req-269c1321-1407-468b-89cf-e543affc1542 req-a8111be2-d6bf-4c60-9172-aeebb481b63a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Refreshing network info cache for port f66ec19f-119c-4e9b-ba57-c8e2d850f5dc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:54:37 np0005629333 podman[365356]: 2026-02-25 12:54:37.064413132 +0000 UTC m=+0.061984739 container remove fea1bab1975a17824e0567deea72d3032b1182111d7bdd8fa9e701667e461d57 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:54:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:37.069 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[faf2f217-4147-44c5-ba85-fc86f170242a]: (4, ('Wed Feb 25 12:54:36 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd (fea1bab1975a17824e0567deea72d3032b1182111d7bdd8fa9e701667e461d57)\nfea1bab1975a17824e0567deea72d3032b1182111d7bdd8fa9e701667e461d57\nWed Feb 25 12:54:36 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd (fea1bab1975a17824e0567deea72d3032b1182111d7bdd8fa9e701667e461d57)\nfea1bab1975a17824e0567deea72d3032b1182111d7bdd8fa9e701667e461d57\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:37.070 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eb3a510b-df9a-46ca-9b51-d7a9e25d1a77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:37.071 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88562c34-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:54:37 np0005629333 nova_compute[244014]: 2026-02-25 12:54:37.073 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:37 np0005629333 kernel: tap88562c34-20: left promiscuous mode
Feb 25 07:54:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:37.078 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[978ea447-d7c3-422a-b867-7d8a87f2fd68]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:37 np0005629333 nova_compute[244014]: 2026-02-25 12:54:37.083 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:37.096 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8507d36e-fabd-47ed-bee8-bc14b4b2f275]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:37.097 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0baf10bd-ea9a-4fbc-9bed-b98b3ea6f156]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:37.112 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f4bba14a-50ad-497d-a178-b37e80f96ccd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598021, 'reachable_time': 26883, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365383, 'error': None, 'target': 'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:37.114 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
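remove_netns tears down the ovnmeta- namespace once no VIF ports remain on the datapath; neutron's privileged ip_lib does this through pyroute2 under privsep. The shell equivalent is a plain ip netns delete; a sketch with the namespace name taken from the log:

    # Sketch: unprivileged-shell equivalent of the remove_netns call
    # logged above (neutron itself goes through pyroute2 under privsep).
    import subprocess

    subprocess.check_call([
        'ip', 'netns', 'delete',
        'ovnmeta-88562c34-222a-439a-b444-9e6f8a6d70cd',
    ])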
Feb 25 07:54:37 np0005629333 systemd[1]: run-netns-ovnmeta\x2d88562c34\x2d222a\x2d439a\x2db444\x2d9e6f8a6d70cd.mount: Deactivated successfully.
Feb 25 07:54:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:37.115 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[d125e578-9eaa-4216-9eb9-a8c80ec7200c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:54:37 np0005629333 nova_compute[244014]: 2026-02-25 12:54:37.243 244018 INFO nova.virt.libvirt.driver [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Deleting instance files /var/lib/nova/instances/03d948e4-e7cb-45ea-bf63-7ab363b4d46e_del#033[00m
Feb 25 07:54:37 np0005629333 nova_compute[244014]: 2026-02-25 12:54:37.244 244018 INFO nova.virt.libvirt.driver [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Deletion of /var/lib/nova/instances/03d948e4-e7cb-45ea-bf63-7ab363b4d46e_del complete#033[00m
Feb 25 07:54:37 np0005629333 nova_compute[244014]: 2026-02-25 12:54:37.313 244018 INFO nova.compute.manager [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Took 0.61 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:54:37 np0005629333 nova_compute[244014]: 2026-02-25 12:54:37.313 244018 DEBUG oslo.service.loopingcall [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:54:37 np0005629333 nova_compute[244014]: 2026-02-25 12:54:37.314 244018 DEBUG nova.compute.manager [-] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:54:37 np0005629333 nova_compute[244014]: 2026-02-25 12:54:37.314 244018 DEBUG nova.network.neutron [-] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:54:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2222: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 199 op/s
Feb 25 07:54:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:54:38 np0005629333 podman[365385]: 2026-02-25 12:54:38.73090492 +0000 UTC m=+0.074312577 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent)
Feb 25 07:54:38 np0005629333 podman[365386]: 2026-02-25 12:54:38.769658363 +0000 UTC m=+0.110462297 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Feb 25 07:54:38 np0005629333 nova_compute[244014]: 2026-02-25 12:54:38.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:54:38 np0005629333 nova_compute[244014]: 2026-02-25 12:54:38.921 244018 DEBUG nova.network.neutron [-] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:54:39 np0005629333 nova_compute[244014]: 2026-02-25 12:54:39.016 244018 INFO nova.compute.manager [-] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Took 1.70 seconds to deallocate network for instance.#033[00m
Feb 25 07:54:39 np0005629333 nova_compute[244014]: 2026-02-25 12:54:39.170 244018 DEBUG nova.compute.manager [req-8058506a-a489-4ea6-97e3-f58aa3d46e34 req-9a5fda54-26b5-44a3-93d2-f5c3684e8c22 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Received event network-vif-unplugged-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:54:39 np0005629333 nova_compute[244014]: 2026-02-25 12:54:39.171 244018 DEBUG oslo_concurrency.lockutils [req-8058506a-a489-4ea6-97e3-f58aa3d46e34 req-9a5fda54-26b5-44a3-93d2-f5c3684e8c22 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:39 np0005629333 nova_compute[244014]: 2026-02-25 12:54:39.171 244018 DEBUG oslo_concurrency.lockutils [req-8058506a-a489-4ea6-97e3-f58aa3d46e34 req-9a5fda54-26b5-44a3-93d2-f5c3684e8c22 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:39 np0005629333 nova_compute[244014]: 2026-02-25 12:54:39.171 244018 DEBUG oslo_concurrency.lockutils [req-8058506a-a489-4ea6-97e3-f58aa3d46e34 req-9a5fda54-26b5-44a3-93d2-f5c3684e8c22 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:54:39 np0005629333 nova_compute[244014]: 2026-02-25 12:54:39.171 244018 DEBUG nova.compute.manager [req-8058506a-a489-4ea6-97e3-f58aa3d46e34 req-9a5fda54-26b5-44a3-93d2-f5c3684e8c22 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] No waiting events found dispatching network-vif-unplugged-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:54:39 np0005629333 nova_compute[244014]: 2026-02-25 12:54:39.172 244018 WARNING nova.compute.manager [req-8058506a-a489-4ea6-97e3-f58aa3d46e34 req-9a5fda54-26b5-44a3-93d2-f5c3684e8c22 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Received unexpected event network-vif-unplugged-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:54:39 np0005629333 nova_compute[244014]: 2026-02-25 12:54:39.172 244018 DEBUG nova.compute.manager [req-8058506a-a489-4ea6-97e3-f58aa3d46e34 req-9a5fda54-26b5-44a3-93d2-f5c3684e8c22 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Received event network-vif-plugged-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:54:39 np0005629333 nova_compute[244014]: 2026-02-25 12:54:39.172 244018 DEBUG oslo_concurrency.lockutils [req-8058506a-a489-4ea6-97e3-f58aa3d46e34 req-9a5fda54-26b5-44a3-93d2-f5c3684e8c22 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:39 np0005629333 nova_compute[244014]: 2026-02-25 12:54:39.172 244018 DEBUG oslo_concurrency.lockutils [req-8058506a-a489-4ea6-97e3-f58aa3d46e34 req-9a5fda54-26b5-44a3-93d2-f5c3684e8c22 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:39 np0005629333 nova_compute[244014]: 2026-02-25 12:54:39.173 244018 DEBUG oslo_concurrency.lockutils [req-8058506a-a489-4ea6-97e3-f58aa3d46e34 req-9a5fda54-26b5-44a3-93d2-f5c3684e8c22 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:54:39 np0005629333 nova_compute[244014]: 2026-02-25 12:54:39.173 244018 DEBUG nova.compute.manager [req-8058506a-a489-4ea6-97e3-f58aa3d46e34 req-9a5fda54-26b5-44a3-93d2-f5c3684e8c22 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] No waiting events found dispatching network-vif-plugged-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:54:39 np0005629333 nova_compute[244014]: 2026-02-25 12:54:39.173 244018 WARNING nova.compute.manager [req-8058506a-a489-4ea6-97e3-f58aa3d46e34 req-9a5fda54-26b5-44a3-93d2-f5c3684e8c22 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Received unexpected event network-vif-plugged-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:54:39 np0005629333 nova_compute[244014]: 2026-02-25 12:54:39.173 244018 DEBUG nova.compute.manager [req-8058506a-a489-4ea6-97e3-f58aa3d46e34 req-9a5fda54-26b5-44a3-93d2-f5c3684e8c22 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Received event network-vif-deleted-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
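The req-8058506a block above is nova's external-event dispatch: each Neutron notification tries to pop a waiter from a per-instance event queue, serialized on the "<uuid>-events" lock, and the WARNING lines fire because the instance is already deleted so nothing is waiting. A minimal sketch of that locking pattern, assuming a plain dict as the event store (the names here are illustrative, not nova's internals):

    from oslo_concurrency import lockutils

    _events = {}  # instance uuid -> list of pending event names

    def pop_instance_event(instance_uuid, event_name):
        # Same shape as the log: acquire "<uuid>-events", mutate, release.
        with lockutils.lock(instance_uuid + '-events'):
            pending = _events.get(instance_uuid, [])
            if event_name in pending:
                pending.remove(event_name)
                return event_name
        # "No waiting events found dispatching ..." corresponds to this path.
        return None

    print(pop_instance_event(
        '03d948e4-e7cb-45ea-bf63-7ab363b4d46e',
        'network-vif-unplugged-f66ec19f-119c-4e9b-ba57-c8e2d850f5dc'))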
Feb 25 07:54:39 np0005629333 nova_compute[244014]: 2026-02-25 12:54:39.178 244018 DEBUG oslo_concurrency.lockutils [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:39 np0005629333 nova_compute[244014]: 2026-02-25 12:54:39.178 244018 DEBUG oslo_concurrency.lockutils [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:39 np0005629333 nova_compute[244014]: 2026-02-25 12:54:39.217 244018 DEBUG oslo_concurrency.processutils [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:54:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:54:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2344282572' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:54:39 np0005629333 nova_compute[244014]: 2026-02-25 12:54:39.711 244018 DEBUG oslo_concurrency.processutils [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
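The 12:54:39.217 to 12:54:39.711 pair shows nova shelling out to ceph df (0.493 s round trip) through oslo.concurrency's processutils; with an RBD image backend the result feeds the DISK_GB inventory. A sketch of the same call, assuming the stats keys present in current ceph df JSON output:

    import json

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    # total_bytes / total_used_bytes / total_avail_bytes are cluster-wide.
    print('free: %.1f GiB' % (stats['total_avail_bytes'] / 1024 ** 3))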
Feb 25 07:54:39 np0005629333 nova_compute[244014]: 2026-02-25 12:54:39.718 244018 DEBUG nova.compute.provider_tree [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:54:39 np0005629333 nova_compute[244014]: 2026-02-25 12:54:39.741 244018 DEBUG nova.scheduler.client.report [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
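The inventory dict in the previous line is what the resource tracker pushes to placement; usable capacity per resource class is (total - reserved) * allocation_ratio. A quick check against the logged values:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2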
Feb 25 07:54:39 np0005629333 nova_compute[244014]: 2026-02-25 12:54:39.775 244018 DEBUG oslo_concurrency.lockutils [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:54:39 np0005629333 nova_compute[244014]: 2026-02-25 12:54:39.832 244018 INFO nova.scheduler.client.report [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance 03d948e4-e7cb-45ea-bf63-7ab363b4d46e#033[00m
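"Deleted allocations for instance ..." is the scheduler report client clearing the instance's placement allocations as a final step of terminate. A hedged sketch of the equivalent raw REST call (DELETE /allocations/{consumer_uuid}); the endpoint and token below are placeholders, not values from this deployment:

    import requests

    PLACEMENT = 'http://placement.example:8778'  # placeholder endpoint
    TOKEN = '...'                                # placeholder auth token
    uuid = '03d948e4-e7cb-45ea-bf63-7ab363b4d46e'

    resp = requests.delete(
        '%s/allocations/%s' % (PLACEMENT, uuid),
        headers={'X-Auth-Token': TOKEN,
                 'OpenStack-API-Version': 'placement 1.28'})
    resp.raise_for_status()  # placement returns 204 No Content on success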
Feb 25 07:54:39 np0005629333 nova_compute[244014]: 2026-02-25 12:54:39.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:54:39 np0005629333 nova_compute[244014]: 2026-02-25 12:54:39.953 244018 DEBUG oslo_concurrency.lockutils [None req-b398e37c-be99-4d90-9d54-8d0717530b6e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "03d948e4-e7cb-45ea-bf63-7ab363b4d46e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:54:40 np0005629333 nova_compute[244014]: 2026-02-25 12:54:40.462 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:40 np0005629333 nova_compute[244014]: 2026-02-25 12:54:40.473 244018 DEBUG nova.network.neutron [req-269c1321-1407-468b-89cf-e543affc1542 req-a8111be2-d6bf-4c60-9172-aeebb481b63a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Updated VIF entry in instance network info cache for port f66ec19f-119c-4e9b-ba57-c8e2d850f5dc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:54:40 np0005629333 nova_compute[244014]: 2026-02-25 12:54:40.474 244018 DEBUG nova.network.neutron [req-269c1321-1407-468b-89cf-e543affc1542 req-a8111be2-d6bf-4c60-9172-aeebb481b63a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Updating instance_info_cache with network_info: [{"id": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "address": "fa:16:3e:d9:be:ae", "network": {"id": "88562c34-222a-439a-b444-9e6f8a6d70cd", "bridge": "br-int", "label": "tempest-network-smoke--1040070558", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fed9:beae", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66ec19f-11", "ovs_interfaceid": "f66ec19f-119c-4e9b-ba57-c8e2d850f5dc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:54:40 np0005629333 nova_compute[244014]: 2026-02-25 12:54:40.496 244018 DEBUG oslo_concurrency.lockutils [req-269c1321-1407-468b-89cf-e543affc1542 req-a8111be2-d6bf-4c60-9172-aeebb481b63a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-03d948e4-e7cb-45ea-bf63-7ab363b4d46e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:54:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 31 KiB/s wr, 154 op/s
Feb 25 07:54:40 np0005629333 nova_compute[244014]: 2026-02-25 12:54:40.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:54:41 np0005629333 nova_compute[244014]: 2026-02-25 12:54:41.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:54:41 np0005629333 nova_compute[244014]: 2026-02-25 12:54:41.975 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2224: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 31 KiB/s wr, 157 op/s
Feb 25 07:54:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:54:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:54:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:54:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:54:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6317816538884318e-05 of space, bias 1.0, pg target 0.004895344961665295 quantized to 32 (current 32)
Feb 25 07:54:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:54:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:54:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:54:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:54:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:54:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024941881055236368 of space, bias 1.0, pg target 0.748256431657091 quantized to 32 (current 32)
Feb 25 07:54:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:54:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.383268670923365e-06 of space, bias 4.0, pg target 0.0016599224051080381 quantized to 16 (current 16)
Feb 25 07:54:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:54:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:54:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:54:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:54:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:54:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:54:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:54:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:54:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:54:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
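Every pg_autoscaler pair above fits one formula: the raw PG target is usage_ratio * bias * (target PGs per OSD * OSD count), which for this 3-OSD cluster at the default mon_target_pg_per_osd=100 gives a 300-PG budget; the result is then quantized to a power of two and clamped by per-pool minimums and the current pg_num, which is why most pools sit at 32. A sketch that reproduces the logged raw targets under those assumptions:

    def raw_pg_target(usage_ratio, bias, osds=3, target_pg_per_osd=100):
        # ~100 PGs per OSD across 3 OSDs = a 300-PG budget for the root
        return usage_ratio * bias * osds * target_pg_per_osd

    print(raw_pg_target(7.185749983720779e-06, 1.0))   # .mgr        -> 0.0021557...
    print(raw_pg_target(0.0024941881055236368, 1.0))   # images      -> 0.7482564...
    print(raw_pg_target(1.383268670923365e-06, 4.0))   # cephfs.meta -> 0.0016599...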
Feb 25 07:54:43 np0005629333 nova_compute[244014]: 2026-02-25 12:54:43.168 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:43 np0005629333 nova_compute[244014]: 2026-02-25 12:54:43.234 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:54:43 np0005629333 nova_compute[244014]: 2026-02-25 12:54:43.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:54:43 np0005629333 nova_compute[244014]: 2026-02-25 12:54:43.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
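_reclaim_queued_deletes above is one of the ComputeManager periodic tasks driven by oslo.service; the "skipping" line is its early return when reclaim_instance_interval <= 0 (deferred reclaim disabled). A minimal sketch of the decorator pattern with an illustrative task:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            interval = 0  # stand-in for CONF.reclaim_instance_interval
            if interval <= 0:
                return  # task still fires on schedule, but is a no-op

    Manager().run_periodic_tasks(context=None)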
Feb 25 07:54:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 6.2 KiB/s wr, 82 op/s
Feb 25 07:54:45 np0005629333 nova_compute[244014]: 2026-02-25 12:54:45.464 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2226: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 6.2 KiB/s wr, 82 op/s
Feb 25 07:54:46 np0005629333 nova_compute[244014]: 2026-02-25 12:54:46.978 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:54:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1668056892' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:54:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:54:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1668056892' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
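The two handle_command/audit pairs above come from a librados client (entity client.openstack on 192.168.122.10, most plausibly the Cinder RBD driver) issuing df and osd pool get-quota as structured mon commands. The same calls can be reproduced with the rados Python binding; the conf path and client name are taken from the log itself:

    import json

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    try:
        # Each mon_command() mirrors one handle_command line in the journal.
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota",
                     "pool": "volumes", "format": "json"}):
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
            print(cmd["prefix"], '->', ret, len(out), 'bytes')
    finally:
        cluster.shutdown()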
Feb 25 07:54:47 np0005629333 nova_compute[244014]: 2026-02-25 12:54:47.968 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024072.9669049, 945e5549-40d1-4eae-8179-84ad1d751957 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:54:47 np0005629333 nova_compute[244014]: 2026-02-25 12:54:47.969 244018 INFO nova.compute.manager [-] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:54:47 np0005629333 nova_compute[244014]: 2026-02-25 12:54:47.993 244018 DEBUG nova.compute.manager [None req-ebc94e2d-99f0-402b-9e96-7868500dfe3f - - - - - -] [instance: 945e5549-40d1-4eae-8179-84ad1d751957] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:54:48 np0005629333 nova_compute[244014]: 2026-02-25 12:54:48.449 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024073.4479399, 3da53ea4-e42d-4bfb-b917-d9c81fd3652f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:54:48 np0005629333 nova_compute[244014]: 2026-02-25 12:54:48.450 244018 INFO nova.compute.manager [-] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:54:48 np0005629333 nova_compute[244014]: 2026-02-25 12:54:48.474 244018 DEBUG nova.compute.manager [None req-f428ee07-986c-424d-815d-035e8ce1c2f6 - - - - - -] [instance: 3da53ea4-e42d-4bfb-b917-d9c81fd3652f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
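The "Emitting event <LifecycleEvent ...>" / "VM Stopped (Lifecycle Event)" pairs are the virt driver translating hypervisor callbacks into events the compute manager consumes, after which it re-checks the instance's real power state rather than trusting the transition. A bare sketch of that emit-and-handle shape; the class and handler are illustrative, not nova's actual types:

    import time

    class LifecycleEvent:
        def __init__(self, uuid, transition):
            self.timestamp = time.time()
            self.uuid = uuid
            self.transition = transition

    def handle_event(event):
        # Manager side: log the event, then independently verify power state.
        print('[instance: %s] VM %s (Lifecycle Event)'
              % (event.uuid, event.transition))

    handle_event(LifecycleEvent('945e5549-40d1-4eae-8179-84ad1d751957',
                                'Stopped'))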
Feb 25 07:54:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2227: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 6.2 KiB/s wr, 82 op/s
Feb 25 07:54:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:54:50 np0005629333 nova_compute[244014]: 2026-02-25 12:54:50.465 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2228: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 1.4 KiB/s rd, 341 B/s wr, 2 op/s
Feb 25 07:54:51 np0005629333 nova_compute[244014]: 2026-02-25 12:54:51.946 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024076.9453082, 03d948e4-e7cb-45ea-bf63-7ab363b4d46e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:54:51 np0005629333 nova_compute[244014]: 2026-02-25 12:54:51.947 244018 INFO nova.compute.manager [-] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:54:51 np0005629333 nova_compute[244014]: 2026-02-25 12:54:51.973 244018 DEBUG nova.compute.manager [None req-c8f4a569-4f5e-4ca4-925d-6e2565d76284 - - - - - -] [instance: 03d948e4-e7cb-45ea-bf63-7ab363b4d46e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:54:51 np0005629333 nova_compute[244014]: 2026-02-25 12:54:51.982 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2229: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 1.4 KiB/s rd, 341 B/s wr, 2 op/s
Feb 25 07:54:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:54:52 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:54:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:54:52 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:54:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:54:52 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:54:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:54:52 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:54:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:54:52 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:54:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:54:52 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:54:53 np0005629333 podman[365597]: 2026-02-25 12:54:53.413155612 +0000 UTC m=+0.058761099 container create b3f1bdaaabb0d3259f1581520c6b16d30b30278042bcb04bdc1612c5ca3a8202 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mcnulty, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:54:53 np0005629333 systemd[1]: Started libpod-conmon-b3f1bdaaabb0d3259f1581520c6b16d30b30278042bcb04bdc1612c5ca3a8202.scope.
Feb 25 07:54:53 np0005629333 podman[365597]: 2026-02-25 12:54:53.390084371 +0000 UTC m=+0.035689948 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:54:53 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:54:53 np0005629333 podman[365597]: 2026-02-25 12:54:53.498438557 +0000 UTC m=+0.144044064 container init b3f1bdaaabb0d3259f1581520c6b16d30b30278042bcb04bdc1612c5ca3a8202 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 07:54:53 np0005629333 podman[365597]: 2026-02-25 12:54:53.508281375 +0000 UTC m=+0.153886882 container start b3f1bdaaabb0d3259f1581520c6b16d30b30278042bcb04bdc1612c5ca3a8202 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:54:53 np0005629333 podman[365597]: 2026-02-25 12:54:53.511973709 +0000 UTC m=+0.157579286 container attach b3f1bdaaabb0d3259f1581520c6b16d30b30278042bcb04bdc1612c5ca3a8202 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 07:54:53 np0005629333 stoic_mcnulty[365614]: 167 167
Feb 25 07:54:53 np0005629333 systemd[1]: libpod-b3f1bdaaabb0d3259f1581520c6b16d30b30278042bcb04bdc1612c5ca3a8202.scope: Deactivated successfully.
Feb 25 07:54:53 np0005629333 podman[365597]: 2026-02-25 12:54:53.51660177 +0000 UTC m=+0.162207247 container died b3f1bdaaabb0d3259f1581520c6b16d30b30278042bcb04bdc1612c5ca3a8202 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mcnulty, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:54:53 np0005629333 systemd[1]: var-lib-containers-storage-overlay-48dc76d065f033bce1c657b79cf08cb6b54423ed454a606ff2d04b4bd7281f9f-merged.mount: Deactivated successfully.
Feb 25 07:54:53 np0005629333 podman[365597]: 2026-02-25 12:54:53.563469142 +0000 UTC m=+0.209074649 container remove b3f1bdaaabb0d3259f1581520c6b16d30b30278042bcb04bdc1612c5ca3a8202 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 07:54:53 np0005629333 systemd[1]: libpod-conmon-b3f1bdaaabb0d3259f1581520c6b16d30b30278042bcb04bdc1612c5ca3a8202.scope: Deactivated successfully.
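The podman[365597] burst above (image pull by digest, create, init, start, attach, died, remove, all within about 0.2 s) is cephadm's usual one-shot helper container; the "167 167" printed by stoic_mcnulty is the ceph uid/gid probed from the image. A hedged sketch of driving such a throwaway container from Python; the digest is copied from the log, while the stat entrypoint is an assumption about what this particular run executed:

    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86')

    # --rm removes the container on exit, matching the
    # create/start/attach/died/remove sequence in the journal.
    result = subprocess.run(
        ['podman', 'run', '--rm', '--entrypoint', 'stat',
         IMAGE, '-c', '%u %g', '/var/lib/ceph'],
        capture_output=True, text=True, check=True)
    print(result.stdout.strip())  # e.g. "167 167"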
Feb 25 07:54:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:54:53 np0005629333 podman[365638]: 2026-02-25 12:54:53.728069225 +0000 UTC m=+0.046957806 container create 9e6f1de8fc23327c4b325e8ccdef5e2c76978a22d329861fd78eefd37e58d8c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pasteur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 07:54:53 np0005629333 systemd[1]: Started libpod-conmon-9e6f1de8fc23327c4b325e8ccdef5e2c76978a22d329861fd78eefd37e58d8c7.scope.
Feb 25 07:54:53 np0005629333 podman[365638]: 2026-02-25 12:54:53.707302659 +0000 UTC m=+0.026191240 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:54:53 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:54:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54cff380e3062d459d4893a1827ed33d329f9732168a00b89872517ce64179/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:54:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54cff380e3062d459d4893a1827ed33d329f9732168a00b89872517ce64179/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:54:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54cff380e3062d459d4893a1827ed33d329f9732168a00b89872517ce64179/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:54:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54cff380e3062d459d4893a1827ed33d329f9732168a00b89872517ce64179/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:54:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54cff380e3062d459d4893a1827ed33d329f9732168a00b89872517ce64179/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:54:53 np0005629333 podman[365638]: 2026-02-25 12:54:53.86792685 +0000 UTC m=+0.186815511 container init 9e6f1de8fc23327c4b325e8ccdef5e2c76978a22d329861fd78eefd37e58d8c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pasteur, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 07:54:53 np0005629333 podman[365638]: 2026-02-25 12:54:53.877172821 +0000 UTC m=+0.196061422 container start 9e6f1de8fc23327c4b325e8ccdef5e2c76978a22d329861fd78eefd37e58d8c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pasteur, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:54:53 np0005629333 podman[365638]: 2026-02-25 12:54:53.881855983 +0000 UTC m=+0.200744614 container attach 9e6f1de8fc23327c4b325e8ccdef5e2c76978a22d329861fd78eefd37e58d8c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pasteur, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 07:54:53 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:54:53 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:54:53 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:54:54 np0005629333 quirky_pasteur[365652]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:54:54 np0005629333 quirky_pasteur[365652]: --> All data devices are unavailable
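quirky_pasteur is a ceph-volume pass over the candidate disks: 0 physical and 3 LVM data devices were offered, and all are "unavailable" because each LV already carries an OSD. The affectionate_tu container that follows prints ceph-volume lvm list style JSON keyed by OSD id (the fragment visible below). A sketch of parsing that structure, built only from the fields shown in the log:

    import json

    captured = json.dumps({
        "0": [{"devices": ["/dev/loop3"], "lv_name": "ceph_lv0"}],
    })  # same shape as the JSON fragment printed below

    for osd_id, lvs in json.loads(captured).items():
        for lv in lvs:
            print('osd.%s on %s (%s)'
                  % (osd_id, ','.join(lv['devices']), lv['lv_name']))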
Feb 25 07:54:54 np0005629333 systemd[1]: libpod-9e6f1de8fc23327c4b325e8ccdef5e2c76978a22d329861fd78eefd37e58d8c7.scope: Deactivated successfully.
Feb 25 07:54:54 np0005629333 podman[365638]: 2026-02-25 12:54:54.354147846 +0000 UTC m=+0.673036427 container died 9e6f1de8fc23327c4b325e8ccdef5e2c76978a22d329861fd78eefd37e58d8c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 07:54:54 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1d54cff380e3062d459d4893a1827ed33d329f9732168a00b89872517ce64179-merged.mount: Deactivated successfully.
Feb 25 07:54:54 np0005629333 podman[365638]: 2026-02-25 12:54:54.40359287 +0000 UTC m=+0.722481481 container remove 9e6f1de8fc23327c4b325e8ccdef5e2c76978a22d329861fd78eefd37e58d8c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pasteur, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:54:54 np0005629333 systemd[1]: libpod-conmon-9e6f1de8fc23327c4b325e8ccdef5e2c76978a22d329861fd78eefd37e58d8c7.scope: Deactivated successfully.
Feb 25 07:54:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2230: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:54:54 np0005629333 podman[365749]: 2026-02-25 12:54:54.914274226 +0000 UTC m=+0.055272120 container create b474227baf721acc16722872c8f4d5004ee44825e6f9d3c175a0ba6126d34ab1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shaw, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:54:54 np0005629333 systemd[1]: Started libpod-conmon-b474227baf721acc16722872c8f4d5004ee44825e6f9d3c175a0ba6126d34ab1.scope.
Feb 25 07:54:54 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:54:54 np0005629333 podman[365749]: 2026-02-25 12:54:54.891930636 +0000 UTC m=+0.032928540 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:54:55 np0005629333 podman[365749]: 2026-02-25 12:54:55.00232016 +0000 UTC m=+0.143318114 container init b474227baf721acc16722872c8f4d5004ee44825e6f9d3c175a0ba6126d34ab1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shaw, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 07:54:55 np0005629333 podman[365749]: 2026-02-25 12:54:55.010299655 +0000 UTC m=+0.151297549 container start b474227baf721acc16722872c8f4d5004ee44825e6f9d3c175a0ba6126d34ab1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 07:54:55 np0005629333 cool_shaw[365765]: 167 167
Feb 25 07:54:55 np0005629333 systemd[1]: libpod-b474227baf721acc16722872c8f4d5004ee44825e6f9d3c175a0ba6126d34ab1.scope: Deactivated successfully.
Feb 25 07:54:55 np0005629333 podman[365749]: 2026-02-25 12:54:55.01615455 +0000 UTC m=+0.157152494 container attach b474227baf721acc16722872c8f4d5004ee44825e6f9d3c175a0ba6126d34ab1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 07:54:55 np0005629333 podman[365749]: 2026-02-25 12:54:55.017008484 +0000 UTC m=+0.158006398 container died b474227baf721acc16722872c8f4d5004ee44825e6f9d3c175a0ba6126d34ab1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shaw, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:54:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:55.032 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:55.033 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:54:55.033 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:54:55 np0005629333 systemd[1]: var-lib-containers-storage-overlay-da148ca0941bbe5b12272c8290d136616501c01c2dbf42759bc1d5927570850d-merged.mount: Deactivated successfully.
Feb 25 07:54:55 np0005629333 podman[365749]: 2026-02-25 12:54:55.062895758 +0000 UTC m=+0.203893642 container remove b474227baf721acc16722872c8f4d5004ee44825e6f9d3c175a0ba6126d34ab1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shaw, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 07:54:55 np0005629333 systemd[1]: libpod-conmon-b474227baf721acc16722872c8f4d5004ee44825e6f9d3c175a0ba6126d34ab1.scope: Deactivated successfully.
Feb 25 07:54:55 np0005629333 podman[365789]: 2026-02-25 12:54:55.271845043 +0000 UTC m=+0.071039765 container create 39800c16f8ba5dfdf7179bc4d7cc261102f4938cc35aca6077e0f5263c2f9094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_tu, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:54:55 np0005629333 systemd[1]: Started libpod-conmon-39800c16f8ba5dfdf7179bc4d7cc261102f4938cc35aca6077e0f5263c2f9094.scope.
Feb 25 07:54:55 np0005629333 podman[365789]: 2026-02-25 12:54:55.243554125 +0000 UTC m=+0.042748887 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:54:55 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:54:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b7ad0e9cb04249b23bb01a36c9b60d1f3e53f5ac1ceaaf085ddc9de43c3e125/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:54:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b7ad0e9cb04249b23bb01a36c9b60d1f3e53f5ac1ceaaf085ddc9de43c3e125/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:54:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b7ad0e9cb04249b23bb01a36c9b60d1f3e53f5ac1ceaaf085ddc9de43c3e125/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:54:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b7ad0e9cb04249b23bb01a36c9b60d1f3e53f5ac1ceaaf085ddc9de43c3e125/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:54:55 np0005629333 podman[365789]: 2026-02-25 12:54:55.373285894 +0000 UTC m=+0.172480596 container init 39800c16f8ba5dfdf7179bc4d7cc261102f4938cc35aca6077e0f5263c2f9094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:54:55 np0005629333 podman[365789]: 2026-02-25 12:54:55.387768113 +0000 UTC m=+0.186962815 container start 39800c16f8ba5dfdf7179bc4d7cc261102f4938cc35aca6077e0f5263c2f9094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:54:55 np0005629333 podman[365789]: 2026-02-25 12:54:55.391813667 +0000 UTC m=+0.191008339 container attach 39800c16f8ba5dfdf7179bc4d7cc261102f4938cc35aca6077e0f5263c2f9094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_tu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:54:55 np0005629333 nova_compute[244014]: 2026-02-25 12:54:55.467 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]: {
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:    "0": [
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:        {
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "devices": [
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "/dev/loop3"
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            ],
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "lv_name": "ceph_lv0",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "lv_size": "21470642176",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "name": "ceph_lv0",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "tags": {
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.cluster_name": "ceph",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.crush_device_class": "",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.encrypted": "0",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.objectstore": "bluestore",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.osd_id": "0",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.type": "block",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.vdo": "0",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.with_tpm": "0"
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            },
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "type": "block",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "vg_name": "ceph_vg0"
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:        }
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:    ],
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:    "1": [
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:        {
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "devices": [
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "/dev/loop4"
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            ],
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "lv_name": "ceph_lv1",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "lv_size": "21470642176",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "name": "ceph_lv1",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "tags": {
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.cluster_name": "ceph",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.crush_device_class": "",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.encrypted": "0",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.objectstore": "bluestore",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.osd_id": "1",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.type": "block",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.vdo": "0",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.with_tpm": "0"
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            },
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "type": "block",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "vg_name": "ceph_vg1"
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:        }
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:    ],
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:    "2": [
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:        {
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "devices": [
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "/dev/loop5"
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            ],
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "lv_name": "ceph_lv2",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "lv_size": "21470642176",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "name": "ceph_lv2",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "tags": {
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.cluster_name": "ceph",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.crush_device_class": "",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.encrypted": "0",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.objectstore": "bluestore",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.osd_id": "2",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.type": "block",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.vdo": "0",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:                "ceph.with_tpm": "0"
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            },
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "type": "block",
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:            "vg_name": "ceph_vg2"
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:        }
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]:    ]
Feb 25 07:54:55 np0005629333 affectionate_tu[365806]: }
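
The JSON the affectionate_tu container prints above has the shape of `ceph-volume lvm list --format json` output: a map from OSD id to a list of LV records, where the flat "lv_tags" string and the parsed "tags" object carry the same key/value pairs. A minimal sketch of consuming it, assuming the blob has been saved to a file (the filename and helper name are illustrative, not from the log):

    import json

    def parse_lv_tags(lv_tags):
        # "k=v,k=v,..." -> dict; values may be empty, as in ceph.crush_device_class=
        return dict(kv.split("=", 1) for kv in lv_tags.split(",") if kv)

    with open("ceph-volume-lvm-list.json") as f:  # hypothetical capture of the output above
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items()):
        for lv in lvs:
            tags = parse_lv_tags(lv["lv_tags"])  # matches lv["tags"] for the records above
            print(osd_id, lv["lv_path"], tags["ceph.osd_fsid"], lv["devices"][0])
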
Feb 25 07:54:55 np0005629333 systemd[1]: libpod-39800c16f8ba5dfdf7179bc4d7cc261102f4938cc35aca6077e0f5263c2f9094.scope: Deactivated successfully.
Feb 25 07:54:55 np0005629333 podman[365789]: 2026-02-25 12:54:55.71132806 +0000 UTC m=+0.510522772 container died 39800c16f8ba5dfdf7179bc4d7cc261102f4938cc35aca6077e0f5263c2f9094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 07:54:55 np0005629333 systemd[1]: var-lib-containers-storage-overlay-9b7ad0e9cb04249b23bb01a36c9b60d1f3e53f5ac1ceaaf085ddc9de43c3e125-merged.mount: Deactivated successfully.
Feb 25 07:54:55 np0005629333 podman[365789]: 2026-02-25 12:54:55.770455138 +0000 UTC m=+0.569649860 container remove 39800c16f8ba5dfdf7179bc4d7cc261102f4938cc35aca6077e0f5263c2f9094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_tu, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:54:55 np0005629333 systemd[1]: libpod-conmon-39800c16f8ba5dfdf7179bc4d7cc261102f4938cc35aca6077e0f5263c2f9094.scope: Deactivated successfully.
Feb 25 07:54:56 np0005629333 podman[365887]: 2026-02-25 12:54:56.25479763 +0000 UTC m=+0.046603235 container create d3d8894f807cb63e69e2a6b0b87ff890538916befb0f753e4efb7b2f669b7df6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_jang, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:54:56 np0005629333 systemd[1]: Started libpod-conmon-d3d8894f807cb63e69e2a6b0b87ff890538916befb0f753e4efb7b2f669b7df6.scope.
Feb 25 07:54:56 np0005629333 podman[365887]: 2026-02-25 12:54:56.230860005 +0000 UTC m=+0.022665650 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:54:56 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:54:56 np0005629333 podman[365887]: 2026-02-25 12:54:56.345214241 +0000 UTC m=+0.137019876 container init d3d8894f807cb63e69e2a6b0b87ff890538916befb0f753e4efb7b2f669b7df6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_jang, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 07:54:56 np0005629333 podman[365887]: 2026-02-25 12:54:56.35333143 +0000 UTC m=+0.145137035 container start d3d8894f807cb63e69e2a6b0b87ff890538916befb0f753e4efb7b2f669b7df6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:54:56 np0005629333 confident_jang[365903]: 167 167
Feb 25 07:54:56 np0005629333 systemd[1]: libpod-d3d8894f807cb63e69e2a6b0b87ff890538916befb0f753e4efb7b2f669b7df6.scope: Deactivated successfully.
Feb 25 07:54:56 np0005629333 podman[365887]: 2026-02-25 12:54:56.358178027 +0000 UTC m=+0.149983632 container attach d3d8894f807cb63e69e2a6b0b87ff890538916befb0f753e4efb7b2f669b7df6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 07:54:56 np0005629333 podman[365887]: 2026-02-25 12:54:56.358676261 +0000 UTC m=+0.150481866 container died d3d8894f807cb63e69e2a6b0b87ff890538916befb0f753e4efb7b2f669b7df6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:54:56 np0005629333 systemd[1]: var-lib-containers-storage-overlay-99ca54d39be380edf7bf8d692a879278a68bcf1c992bcb3100f77c9462e000f2-merged.mount: Deactivated successfully.
Feb 25 07:54:56 np0005629333 podman[365887]: 2026-02-25 12:54:56.406134869 +0000 UTC m=+0.197940464 container remove d3d8894f807cb63e69e2a6b0b87ff890538916befb0f753e4efb7b2f669b7df6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_jang, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 07:54:56 np0005629333 systemd[1]: libpod-conmon-d3d8894f807cb63e69e2a6b0b87ff890538916befb0f753e4efb7b2f669b7df6.scope: Deactivated successfully.
Feb 25 07:54:56 np0005629333 podman[365926]: 2026-02-25 12:54:56.594830391 +0000 UTC m=+0.056166985 container create 54e56ce2d0ae4e85f0823ac403a179969590faf21ee26d98e58fb8358b82955c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_herschel, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 07:54:56 np0005629333 systemd[1]: Started libpod-conmon-54e56ce2d0ae4e85f0823ac403a179969590faf21ee26d98e58fb8358b82955c.scope.
Feb 25 07:54:56 np0005629333 podman[365926]: 2026-02-25 12:54:56.570559267 +0000 UTC m=+0.031895911 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:54:56 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:54:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3294dd482efdca199fbf007ef29655fbc7cda5113c890ace1163249975487324/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:54:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3294dd482efdca199fbf007ef29655fbc7cda5113c890ace1163249975487324/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:54:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3294dd482efdca199fbf007ef29655fbc7cda5113c890ace1163249975487324/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:54:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3294dd482efdca199fbf007ef29655fbc7cda5113c890ace1163249975487324/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:54:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2231: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:54:56 np0005629333 podman[365926]: 2026-02-25 12:54:56.697855627 +0000 UTC m=+0.159192281 container init 54e56ce2d0ae4e85f0823ac403a179969590faf21ee26d98e58fb8358b82955c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 07:54:56 np0005629333 podman[365926]: 2026-02-25 12:54:56.711103891 +0000 UTC m=+0.172440465 container start 54e56ce2d0ae4e85f0823ac403a179969590faf21ee26d98e58fb8358b82955c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_herschel, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 07:54:56 np0005629333 podman[365926]: 2026-02-25 12:54:56.714913239 +0000 UTC m=+0.176249913 container attach 54e56ce2d0ae4e85f0823ac403a179969590faf21ee26d98e58fb8358b82955c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 07:54:56 np0005629333 nova_compute[244014]: 2026-02-25 12:54:56.984 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:54:57 np0005629333 lvm[366021]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:54:57 np0005629333 lvm[366021]: VG ceph_vg0 finished
Feb 25 07:54:57 np0005629333 lvm[366022]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:54:57 np0005629333 lvm[366022]: VG ceph_vg1 finished
Feb 25 07:54:57 np0005629333 lvm[366024]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:54:57 np0005629333 lvm[366024]: VG ceph_vg2 finished
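
The lvm messages above are LVM's event-based autoactivation (pvscan) reporting that each loop-backed PV makes its volume group complete. One way to double-check from the host, assuming the lvm2 CLI and root access (the command and report fields are standard lvm2, but treat this as a sketch):

    import json, subprocess

    out = subprocess.run(
        ["vgs", "--reportformat", "json", "-o", "vg_name,pv_count,lv_count"],
        check=True, capture_output=True, text=True,
    ).stdout
    for vg in json.loads(out)["report"][0]["vg"]:
        if vg["vg_name"].startswith("ceph_vg"):
            # expect one PV and one LV each for ceph_vg0..ceph_vg2, per the log
            print(vg["vg_name"], vg["pv_count"], vg["lv_count"])
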
Feb 25 07:54:57 np0005629333 cool_herschel[365943]: {}
Feb 25 07:54:57 np0005629333 systemd[1]: libpod-54e56ce2d0ae4e85f0823ac403a179969590faf21ee26d98e58fb8358b82955c.scope: Deactivated successfully.
Feb 25 07:54:57 np0005629333 podman[365926]: 2026-02-25 12:54:57.508306669 +0000 UTC m=+0.969643263 container died 54e56ce2d0ae4e85f0823ac403a179969590faf21ee26d98e58fb8358b82955c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_herschel, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:54:57 np0005629333 systemd[1]: libpod-54e56ce2d0ae4e85f0823ac403a179969590faf21ee26d98e58fb8358b82955c.scope: Consumed 1.125s CPU time.
Feb 25 07:54:57 np0005629333 systemd[1]: var-lib-containers-storage-overlay-3294dd482efdca199fbf007ef29655fbc7cda5113c890ace1163249975487324-merged.mount: Deactivated successfully.
Feb 25 07:54:57 np0005629333 podman[365926]: 2026-02-25 12:54:57.56079876 +0000 UTC m=+1.022135354 container remove 54e56ce2d0ae4e85f0823ac403a179969590faf21ee26d98e58fb8358b82955c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 07:54:57 np0005629333 systemd[1]: libpod-conmon-54e56ce2d0ae4e85f0823ac403a179969590faf21ee26d98e58fb8358b82955c.scope: Deactivated successfully.
Feb 25 07:54:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:54:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:54:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:54:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:54:57 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:54:57 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:54:58 np0005629333 nova_compute[244014]: 2026-02-25 12:54:58.460 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:58 np0005629333 nova_compute[244014]: 2026-02-25 12:54:58.461 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:58 np0005629333 nova_compute[244014]: 2026-02-25 12:54:58.489 244018 DEBUG nova.compute.manager [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:54:58 np0005629333 nova_compute[244014]: 2026-02-25 12:54:58.578 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:58 np0005629333 nova_compute[244014]: 2026-02-25 12:54:58.579 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:58 np0005629333 nova_compute[244014]: 2026-02-25 12:54:58.588 244018 DEBUG nova.virt.hardware [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:54:58 np0005629333 nova_compute[244014]: 2026-02-25 12:54:58.589 244018 INFO nova.compute.claims [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:54:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2232: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:54:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:54:58 np0005629333 nova_compute[244014]: 2026-02-25 12:54:58.722 244018 DEBUG oslo_concurrency.processutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:54:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:54:59 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1161396759' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:54:59 np0005629333 nova_compute[244014]: 2026-02-25 12:54:59.306 244018 DEBUG oslo_concurrency.processutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
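
nova's RBD image backend shells out to `ceph df --format=json` (logged above, returning 0 in 0.583s) to learn cluster capacity for its DISK_GB inventory. A sketch of running the same probe and reading the totals, assuming a reachable cluster and the client.openstack keyring; the "stats" keys are the usual ceph df JSON fields, stated here from memory:

    import json, subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print(f'{stats["total_avail_bytes"] / 2**30:.0f} GiB avail '
          f'of {stats["total_bytes"] / 2**30:.0f} GiB')  # ~59 of 60 GiB, per the pgmap lines
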
Feb 25 07:54:59 np0005629333 nova_compute[244014]: 2026-02-25 12:54:59.314 244018 DEBUG nova.compute.provider_tree [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:54:59 np0005629333 nova_compute[244014]: 2026-02-25 12:54:59.332 244018 DEBUG nova.scheduler.client.report [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
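
The inventory dict above is what the resource tracker reports to placement; the schedulable capacity follows from (total - reserved) * allocation_ratio (exact rounding is placement's business, so take the numbers as approximate):

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
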
Feb 25 07:54:59 np0005629333 nova_compute[244014]: 2026-02-25 12:54:59.374 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:54:59 np0005629333 nova_compute[244014]: 2026-02-25 12:54:59.374 244018 DEBUG nova.compute.manager [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:54:59 np0005629333 nova_compute[244014]: 2026-02-25 12:54:59.444 244018 DEBUG nova.compute.manager [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:54:59 np0005629333 nova_compute[244014]: 2026-02-25 12:54:59.445 244018 DEBUG nova.network.neutron [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:54:59 np0005629333 nova_compute[244014]: 2026-02-25 12:54:59.469 244018 INFO nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:54:59 np0005629333 nova_compute[244014]: 2026-02-25 12:54:59.514 244018 DEBUG nova.compute.manager [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:54:59 np0005629333 nova_compute[244014]: 2026-02-25 12:54:59.629 244018 DEBUG nova.compute.manager [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:54:59 np0005629333 nova_compute[244014]: 2026-02-25 12:54:59.631 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:54:59 np0005629333 nova_compute[244014]: 2026-02-25 12:54:59.632 244018 INFO nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Creating image(s)#033[00m
Feb 25 07:54:59 np0005629333 nova_compute[244014]: 2026-02-25 12:54:59.661 244018 DEBUG nova.storage.rbd_utils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:54:59 np0005629333 nova_compute[244014]: 2026-02-25 12:54:59.693 244018 DEBUG nova.storage.rbd_utils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:54:59 np0005629333 nova_compute[244014]: 2026-02-25 12:54:59.722 244018 DEBUG nova.storage.rbd_utils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:54:59 np0005629333 nova_compute[244014]: 2026-02-25 12:54:59.727 244018 DEBUG oslo_concurrency.processutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:54:59 np0005629333 nova_compute[244014]: 2026-02-25 12:54:59.796 244018 DEBUG nova.policy [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:54:59 np0005629333 nova_compute[244014]: 2026-02-25 12:54:59.808 244018 DEBUG oslo_concurrency.processutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
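
The qemu-img probe above is wrapped in oslo.concurrency's prlimit helper (1 GiB address space, 30 s CPU) so a hostile image cannot wedge the compute host; --force-share avoids locking a possibly in-use file and --output=json keeps parsing trivial. Re-running the exact command from the log and reading the result (the JSON keys are qemu-img's standard output fields):

    import json, subprocess

    base = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824", "--cpu=30", "--",
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", base, "--force-share", "--output=json",
    ]
    info = json.loads(subprocess.run(cmd, check=True, capture_output=True, text=True).stdout)
    print(info["format"], info["virtual-size"])
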
Feb 25 07:54:59 np0005629333 nova_compute[244014]: 2026-02-25 12:54:59.809 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:54:59 np0005629333 nova_compute[244014]: 2026-02-25 12:54:59.810 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:54:59 np0005629333 nova_compute[244014]: 2026-02-25 12:54:59.810 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:54:59 np0005629333 nova_compute[244014]: 2026-02-25 12:54:59.841 244018 DEBUG nova.storage.rbd_utils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:54:59 np0005629333 nova_compute[244014]: 2026-02-25 12:54:59.846 244018 DEBUG oslo_concurrency.processutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:55:00 np0005629333 nova_compute[244014]: 2026-02-25 12:55:00.144 244018 DEBUG oslo_concurrency.processutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:55:00 np0005629333 nova_compute[244014]: 2026-02-25 12:55:00.225 244018 DEBUG nova.storage.rbd_utils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] resizing rbd image 4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
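
Root-disk creation is a two-step flow: the cached base image is imported into the vms pool as a format-2 RBD image, then grown to the flavor's 1073741824-byte (1 GiB) root disk. The import command below is verbatim from the log; the resize is sketched with the rbd CLI, whereas nova actually resizes through the librbd python bindings:

    import subprocess

    pool = "vms"
    image = "4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk"
    base = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
    auth = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    subprocess.run(["rbd", "import", "--pool", pool, base, image,
                    "--image-format=2", *auth], check=True)
    subprocess.run(["rbd", "resize", "--pool", pool, "--image", image,
                    "--size", "1G", *auth], check=True)
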
Feb 25 07:55:00 np0005629333 nova_compute[244014]: 2026-02-25 12:55:00.324 244018 DEBUG nova.objects.instance [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'migration_context' on Instance uuid 4e93a158-a4bd-41a0-8fe0-52b2a069c409 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:55:00 np0005629333 nova_compute[244014]: 2026-02-25 12:55:00.346 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:55:00 np0005629333 nova_compute[244014]: 2026-02-25 12:55:00.346 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Ensure instance console log exists: /var/lib/nova/instances/4e93a158-a4bd-41a0-8fe0-52b2a069c409/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:55:00 np0005629333 nova_compute[244014]: 2026-02-25 12:55:00.347 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:00 np0005629333 nova_compute[244014]: 2026-02-25 12:55:00.347 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:00 np0005629333 nova_compute[244014]: 2026-02-25 12:55:00.348 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
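
The Acquiring/acquired/released triplets around _allocate_mdevs come from oslo.concurrency's lockutils, which logs every named-lock transition at DEBUG. A minimal sketch of the same pattern with the public decorator (the function body is a placeholder, not nova's; I assume stdlib logging at DEBUG surfaces the library's acquire/release lines):

    import logging
    from oslo_concurrency import lockutils

    logging.basicConfig(level=logging.DEBUG)  # surfaces the acquire/release DEBUG lines

    @lockutils.synchronized("vgpu_resources")
    def allocate_mdevs():
        return []  # placeholder; nova would pick mediated devices here

    allocate_mdevs()
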
Feb 25 07:55:00 np0005629333 nova_compute[244014]: 2026-02-25 12:55:00.469 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2233: 305 pgs: 305 active+clean; 153 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail
Feb 25 07:55:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:55:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:55:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:55:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:55:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:55:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:55:01 np0005629333 nova_compute[244014]: 2026-02-25 12:55:01.988 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:02.077 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:55:02 np0005629333 nova_compute[244014]: 2026-02-25 12:55:02.078 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:02.080 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 25 07:55:02 np0005629333 nova_compute[244014]: 2026-02-25 12:55:02.613 244018 DEBUG nova.network.neutron [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Successfully created port: 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:55:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2234: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:55:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:55:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2235: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:55:05 np0005629333 nova_compute[244014]: 2026-02-25 12:55:05.472 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:06 np0005629333 nova_compute[244014]: 2026-02-25 12:55:06.170 244018 DEBUG nova.network.neutron [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Successfully updated port: 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:55:06 np0005629333 nova_compute[244014]: 2026-02-25 12:55:06.192 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-4e93a158-a4bd-41a0-8fe0-52b2a069c409" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:55:06 np0005629333 nova_compute[244014]: 2026-02-25 12:55:06.192 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-4e93a158-a4bd-41a0-8fe0-52b2a069c409" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:55:06 np0005629333 nova_compute[244014]: 2026-02-25 12:55:06.192 244018 DEBUG nova.network.neutron [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:55:06 np0005629333 nova_compute[244014]: 2026-02-25 12:55:06.290 244018 DEBUG nova.compute.manager [req-0e6354d6-053d-439b-ab66-0f17a657985a req-1c73b3de-c61f-44be-92e2-4fb9c2b46219 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Received event network-changed-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:55:06 np0005629333 nova_compute[244014]: 2026-02-25 12:55:06.290 244018 DEBUG nova.compute.manager [req-0e6354d6-053d-439b-ab66-0f17a657985a req-1c73b3de-c61f-44be-92e2-4fb9c2b46219 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Refreshing instance network info cache due to event network-changed-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:55:06 np0005629333 nova_compute[244014]: 2026-02-25 12:55:06.291 244018 DEBUG oslo_concurrency.lockutils [req-0e6354d6-053d-439b-ab66-0f17a657985a req-1c73b3de-c61f-44be-92e2-4fb9c2b46219 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-4e93a158-a4bd-41a0-8fe0-52b2a069c409" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:55:06 np0005629333 nova_compute[244014]: 2026-02-25 12:55:06.442 244018 DEBUG nova.network.neutron [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:55:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:06.526 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:91:36 10.100.0.2 2001:db8::f816:3eff:fe31:9136'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe31:9136/64', 'neutron:device_id': 'ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7d68418-660c-4b4a-9631-94822d5aa11e, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=d6ceb7cb-a48f-47c3-acb6-58d61268f8c7) old=Port_Binding(mac=['fa:16:3e:31:91:36 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:55:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:06.527 157129 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port d6ceb7cb-a48f-47c3-acb6-58d61268f8c7 in datapath 9f82a56f-a5c3-446e-9b75-7dc69c93c56b updated#033[00m
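The PortBindingUpdatedEvent match above is an ovsdbapp row event: the metadata agent registers a handler for UPDATE operations on the southbound Port_Binding table and reacts whenever a matching row changes. A minimal sketch of such an event class, assuming a standard ovsdbapp IDL connection (registration on the IDL's notify handler is the usual pattern but is not shown in the log):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fire on any update to a Port_Binding row, as matched above."""

        def __init__(self):
            # Mirrors the repr printed in the log: events=('update',),
            # table='Port_Binding', conditions=None.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # row.logical_port is the Neutron port UUID; `old` carries the
            # previous values of the changed columns (here mac/external_ids).
            print('Port_Binding %s updated' % row.logical_port)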
Feb 25 07:55:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:06.529 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f82a56f-a5c3-446e-9b75-7dc69c93c56b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:55:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:06.530 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9838d597-5dda-452f-b79e-505935356654]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2236: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:55:06 np0005629333 nova_compute[244014]: 2026-02-25 12:55:06.991 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:07 np0005629333 nova_compute[244014]: 2026-02-25 12:55:07.525 244018 DEBUG nova.network.neutron [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Updating instance_info_cache with network_info: [{"id": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "address": "fa:16:3e:e1:08:7c", "network": {"id": "77884df7-14d9-4fbd-8fa6-5eb17d0a82bf", "bridge": "br-int", "label": "tempest-network-smoke--1156045896", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d4ab3e5-d9", "ovs_interfaceid": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
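The network_info blob being cached above is plain JSON. A short, self-contained sketch of pulling the useful fields (port id, MAC, bridge, MTU, fixed IPs) out of one VIF entry, using only the structure visible in the log line:

    def summarize_vif(vif):
        net = vif["network"]
        ips = [ip["address"]
               for subnet in net["subnets"]
               for ip in subnet["ips"]]
        return {"port_id": vif["id"], "mac": vif["address"],
                "bridge": net["bridge"], "mtu": net["meta"]["mtu"],
                "fixed_ips": ips}

    # summarize_vif(network_info[0]) ->
    # {'port_id': '8d4ab3e5-...', 'mac': 'fa:16:3e:e1:08:7c',
    #  'bridge': 'br-int', 'mtu': 1442, 'fixed_ips': ['10.100.0.9']}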
Feb 25 07:55:07 np0005629333 nova_compute[244014]: 2026-02-25 12:55:07.545 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-4e93a158-a4bd-41a0-8fe0-52b2a069c409" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:55:07 np0005629333 nova_compute[244014]: 2026-02-25 12:55:07.545 244018 DEBUG nova.compute.manager [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Instance network_info: |[{"id": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "address": "fa:16:3e:e1:08:7c", "network": {"id": "77884df7-14d9-4fbd-8fa6-5eb17d0a82bf", "bridge": "br-int", "label": "tempest-network-smoke--1156045896", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d4ab3e5-d9", "ovs_interfaceid": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:55:07 np0005629333 nova_compute[244014]: 2026-02-25 12:55:07.546 244018 DEBUG oslo_concurrency.lockutils [req-0e6354d6-053d-439b-ab66-0f17a657985a req-1c73b3de-c61f-44be-92e2-4fb9c2b46219 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-4e93a158-a4bd-41a0-8fe0-52b2a069c409" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:55:07 np0005629333 nova_compute[244014]: 2026-02-25 12:55:07.547 244018 DEBUG nova.network.neutron [req-0e6354d6-053d-439b-ab66-0f17a657985a req-1c73b3de-c61f-44be-92e2-4fb9c2b46219 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Refreshing network info cache for port 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:55:07 np0005629333 nova_compute[244014]: 2026-02-25 12:55:07.552 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Start _get_guest_xml network_info=[{"id": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "address": "fa:16:3e:e1:08:7c", "network": {"id": "77884df7-14d9-4fbd-8fa6-5eb17d0a82bf", "bridge": "br-int", "label": "tempest-network-smoke--1156045896", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d4ab3e5-d9", "ovs_interfaceid": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:55:07 np0005629333 nova_compute[244014]: 2026-02-25 12:55:07.557 244018 WARNING nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:55:07 np0005629333 nova_compute[244014]: 2026-02-25 12:55:07.563 244018 DEBUG nova.virt.libvirt.host [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:55:07 np0005629333 nova_compute[244014]: 2026-02-25 12:55:07.564 244018 DEBUG nova.virt.libvirt.host [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:55:07 np0005629333 nova_compute[244014]: 2026-02-25 12:55:07.573 244018 DEBUG nova.virt.libvirt.host [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:55:07 np0005629333 nova_compute[244014]: 2026-02-25 12:55:07.573 244018 DEBUG nova.virt.libvirt.host [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
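The cgroups probe above (V1 miss, then V2 hit) boils down to checking which hierarchy exposes a cpu controller. A hedged sketch of the V2 half, assuming the standard unified-hierarchy layout rather than Nova's exact implementation:

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        # On a cgroup-v2 (unified) host the enabled controllers are listed,
        # space-separated, in a single file at the hierarchy root.
        try:
            with open(f"{root}/cgroup.controllers") as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False  # no unified hierarchy mounted here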
Feb 25 07:55:07 np0005629333 nova_compute[244014]: 2026-02-25 12:55:07.574 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:55:07 np0005629333 nova_compute[244014]: 2026-02-25 12:55:07.574 244018 DEBUG nova.virt.hardware [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:55:07 np0005629333 nova_compute[244014]: 2026-02-25 12:55:07.575 244018 DEBUG nova.virt.hardware [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:55:07 np0005629333 nova_compute[244014]: 2026-02-25 12:55:07.576 244018 DEBUG nova.virt.hardware [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:55:07 np0005629333 nova_compute[244014]: 2026-02-25 12:55:07.576 244018 DEBUG nova.virt.hardware [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:55:07 np0005629333 nova_compute[244014]: 2026-02-25 12:55:07.576 244018 DEBUG nova.virt.hardware [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:55:07 np0005629333 nova_compute[244014]: 2026-02-25 12:55:07.577 244018 DEBUG nova.virt.hardware [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:55:07 np0005629333 nova_compute[244014]: 2026-02-25 12:55:07.577 244018 DEBUG nova.virt.hardware [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:55:07 np0005629333 nova_compute[244014]: 2026-02-25 12:55:07.578 244018 DEBUG nova.virt.hardware [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:55:07 np0005629333 nova_compute[244014]: 2026-02-25 12:55:07.578 244018 DEBUG nova.virt.hardware [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:55:07 np0005629333 nova_compute[244014]: 2026-02-25 12:55:07.578 244018 DEBUG nova.virt.hardware [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:55:07 np0005629333 nova_compute[244014]: 2026-02-25 12:55:07.579 244018 DEBUG nova.virt.hardware [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
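The topology search above is easy to reproduce: with no flavor or image constraints (limits 0:0:0 fall back to 65536 each), the candidates are the sockets*cores*threads combinations that exactly cover the flavor's vCPU count, and for one vCPU the only candidate is 1:1:1. An illustrative reconstruction, not Nova's exact code:

    def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
        # no factor can exceed the vCPU count itself, so cap the ranges
        for s in range(1, min(max_sockets, vcpus) + 1):
            for c in range(1, min(max_cores, vcpus) + 1):
                for t in range(1, min(max_threads, vcpus) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1, 65536, 65536, 65536)))  # [(1, 1, 1)]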
Feb 25 07:55:07 np0005629333 nova_compute[244014]: 2026-02-25 12:55:07.584 244018 DEBUG oslo_concurrency.processutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:55:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:55:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3196340221' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.185 244018 DEBUG oslo_concurrency.processutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
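The ceph invocation above goes through oslo.concurrency's processutils, the module named in the logged paths. A minimal sketch of the same call (requires oslo.concurrency and a reachable cluster; a non-zero exit raises ProcessExecutionError):

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    # `out` is the JSON monmap Nova parses to find the monitor addresses
    # that later appear as <host name=... port="6789"/> in the disk XML.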
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.221 244018 DEBUG nova.storage.rbd_utils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.226 244018 DEBUG oslo_concurrency.processutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:55:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:55:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2237: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:55:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:55:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1885824017' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.793 244018 DEBUG oslo_concurrency.processutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.795 244018 DEBUG nova.virt.libvirt.vif [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:54:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-308904214',display_name='tempest-TestNetworkBasicOps-server-308904214',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-308904214',id=134,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC4B/zjaDJTncfqsk6mkNSo/RgL3k210CyqHPfVKNsCjY61bbmJZVB8ZwHBrT5XizBW+KM0WAI8Ln67e320MCh/gk3oyfKKaIoKvHrkjtWvb3C4JL4zwHhWY7bHt65jOyg==',key_name='tempest-TestNetworkBasicOps-945311814',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ypd9bclm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:54:59Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=4e93a158-a4bd-41a0-8fe0-52b2a069c409,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "address": "fa:16:3e:e1:08:7c", "network": {"id": "77884df7-14d9-4fbd-8fa6-5eb17d0a82bf", "bridge": "br-int", "label": "tempest-network-smoke--1156045896", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d4ab3e5-d9", "ovs_interfaceid": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.796 244018 DEBUG nova.network.os_vif_util [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "address": "fa:16:3e:e1:08:7c", "network": {"id": "77884df7-14d9-4fbd-8fa6-5eb17d0a82bf", "bridge": "br-int", "label": "tempest-network-smoke--1156045896", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d4ab3e5-d9", "ovs_interfaceid": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.797 244018 DEBUG nova.network.os_vif_util [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:08:7c,bridge_name='br-int',has_traffic_filtering=True,id=8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d,network=Network(77884df7-14d9-4fbd-8fa6-5eb17d0a82bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d4ab3e5-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.798 244018 DEBUG nova.objects.instance [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e93a158-a4bd-41a0-8fe0-52b2a069c409 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.817 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:55:08 np0005629333 nova_compute[244014]:  <uuid>4e93a158-a4bd-41a0-8fe0-52b2a069c409</uuid>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:  <name>instance-00000086</name>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestNetworkBasicOps-server-308904214</nova:name>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:55:07</nova:creationTime>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:55:08 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:        <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:        <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:        <nova:port uuid="8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d">
Feb 25 07:55:08 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <entry name="serial">4e93a158-a4bd-41a0-8fe0-52b2a069c409</entry>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <entry name="uuid">4e93a158-a4bd-41a0-8fe0-52b2a069c409</entry>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk">
Feb 25 07:55:08 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:55:08 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk.config">
Feb 25 07:55:08 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:55:08 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:e1:08:7c"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <target dev="tap8d4ab3e5-d9"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/4e93a158-a4bd-41a0-8fe0-52b2a069c409/console.log" append="off"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:55:08 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:55:08 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:55:08 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:55:08 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
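With the <domain> document above assembled, spawning the guest goes through the libvirt Python bindings (Nova wraps them in nova.virt.libvirt.host/guest, so this is a bare-bones sketch rather than Nova's call chain; `xml` is assumed to hold the string logged above):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml)   # persist the domain definition
    dom.createWithFlags(0)      # boot it (flags=0: default behavior)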
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.818 244018 DEBUG nova.compute.manager [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Preparing to wait for external event network-vif-plugged-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.819 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.819 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.819 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
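The Acquiring/acquired/released triple above is the signature of oslo.concurrency's synchronized decorator: the event bookkeeping runs inside a named in-process lock built from the instance UUID plus "-events". A minimal sketch of the pattern (the function body is a placeholder):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('4e93a158-a4bd-41a0-8fe0-52b2a069c409-events')
    def _create_or_get_event():
        # runs with the per-instance events lock held; concurrent callers
        # for the same instance serialize here
        ...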
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.820 244018 DEBUG nova.virt.libvirt.vif [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:54:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-308904214',display_name='tempest-TestNetworkBasicOps-server-308904214',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-308904214',id=134,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC4B/zjaDJTncfqsk6mkNSo/RgL3k210CyqHPfVKNsCjY61bbmJZVB8ZwHBrT5XizBW+KM0WAI8Ln67e320MCh/gk3oyfKKaIoKvHrkjtWvb3C4JL4zwHhWY7bHt65jOyg==',key_name='tempest-TestNetworkBasicOps-945311814',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ypd9bclm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:54:59Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=4e93a158-a4bd-41a0-8fe0-52b2a069c409,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "address": "fa:16:3e:e1:08:7c", "network": {"id": "77884df7-14d9-4fbd-8fa6-5eb17d0a82bf", "bridge": "br-int", "label": "tempest-network-smoke--1156045896", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d4ab3e5-d9", "ovs_interfaceid": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.821 244018 DEBUG nova.network.os_vif_util [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "address": "fa:16:3e:e1:08:7c", "network": {"id": "77884df7-14d9-4fbd-8fa6-5eb17d0a82bf", "bridge": "br-int", "label": "tempest-network-smoke--1156045896", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d4ab3e5-d9", "ovs_interfaceid": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.821 244018 DEBUG nova.network.os_vif_util [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e1:08:7c,bridge_name='br-int',has_traffic_filtering=True,id=8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d,network=Network(77884df7-14d9-4fbd-8fa6-5eb17d0a82bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d4ab3e5-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.822 244018 DEBUG os_vif [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:08:7c,bridge_name='br-int',has_traffic_filtering=True,id=8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d,network=Network(77884df7-14d9-4fbd-8fa6-5eb17d0a82bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d4ab3e5-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.823 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.823 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.824 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.827 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.827 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d4ab3e5-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.828 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d4ab3e5-d9, col_values=(('external_ids', {'iface-id': '8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e1:08:7c', 'vm-uuid': '4e93a158-a4bd-41a0-8fe0-52b2a069c409'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.830 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:08 np0005629333 NetworkManager[49836]: <info>  [1772024108.8318] manager: (tap8d4ab3e5-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/587)
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.832 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.837 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.839 244018 INFO os_vif [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e1:08:7c,bridge_name='br-int',has_traffic_filtering=True,id=8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d,network=Network(77884df7-14d9-4fbd-8fa6-5eb17d0a82bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d4ab3e5-d9')#033[00m
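The plug sequence above is one ovsdbapp transaction carrying two commands: AddPortCommand puts tap8d4ab3e5-d9 on br-int, then DbSetCommand stamps the Interface row with the external_ids OVN needs to bind the port. A sketch of the equivalent calls, assuming `api` is an already-connected ovsdbapp Open_vSwitch API instance (its construction is not shown in the log):

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap8d4ab3e5-d9', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap8d4ab3e5-d9',
            ('external_ids', {
                'iface-id': '8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:e1:08:7c',
                'vm-uuid': '4e93a158-a4bd-41a0-8fe0-52b2a069c409'})))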
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.902 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.903 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.903 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:e1:08:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.903 244018 INFO nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Using config drive#033[00m
Feb 25 07:55:08 np0005629333 nova_compute[244014]: 2026-02-25 12:55:08.930 244018 DEBUG nova.storage.rbd_utils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:55:08 np0005629333 podman[366318]: 2026-02-25 12:55:08.938796775 +0000 UTC m=+0.056611878 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 07:55:08 np0005629333 podman[366319]: 2026-02-25 12:55:08.967528746 +0000 UTC m=+0.084394952 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 07:55:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:09.081 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
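The DbSetCommand above is the agent acknowledging that it has processed southbound config sequence number 42: it stamps neutron:ovn-metadata-sb-cfg into its Chassis_Private row. A sketch of that write with ovsdbapp, where sb_api stands for an already-connected OVN_Southbound backend (hypothetical handle; the agent also passes if_exists=True so a vanished chassis row is not an error):

    def ack_sb_cfg(sb_api, chassis_uuid, seqnum):
        # One command per transaction, mirroring "txn n=1 command(idx=0)" above.
        with sb_api.transaction(check_error=True) as txn:
            txn.add(sb_api.db_set(
                'Chassis_Private', chassis_uuid,
                ('external_ids', {'neutron:ovn-metadata-sb-cfg': str(seqnum)})))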
Feb 25 07:55:09 np0005629333 nova_compute[244014]: 2026-02-25 12:55:09.510 244018 INFO nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Creating config drive at /var/lib/nova/instances/4e93a158-a4bd-41a0-8fe0-52b2a069c409/disk.config#033[00m
Feb 25 07:55:09 np0005629333 nova_compute[244014]: 2026-02-25 12:55:09.515 244018 DEBUG oslo_concurrency.processutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e93a158-a4bd-41a0-8fe0-52b2a069c409/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_h4jzot2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:55:09 np0005629333 nova_compute[244014]: 2026-02-25 12:55:09.653 244018 DEBUG oslo_concurrency.processutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e93a158-a4bd-41a0-8fe0-52b2a069c409/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_h4jzot2" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
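The two lines above show how nova builds the config drive: the metadata is staged under a temp directory and packed with mkisofs into a Joliet/Rock Ridge ISO labelled config-2, the volume label cloud-init probes for. A sketch reproducing the exact invocation from the log:

    import subprocess

    def build_config_drive(staging_dir, out_path,
                           publisher="OpenStack Compute"):
        # Flags copied from the logged command; -V config-2 is the label
        # guests search for when looking for a config drive.
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", out_path,
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-publisher", publisher, "-quiet", "-J", "-r",
             "-V", "config-2", staging_dir],
            check=True)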
Feb 25 07:55:09 np0005629333 nova_compute[244014]: 2026-02-25 12:55:09.692 244018 DEBUG nova.storage.rbd_utils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:55:09 np0005629333 nova_compute[244014]: 2026-02-25 12:55:09.697 244018 DEBUG oslo_concurrency.processutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4e93a158-a4bd-41a0-8fe0-52b2a069c409/disk.config 4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:55:09 np0005629333 nova_compute[244014]: 2026-02-25 12:55:09.740 244018 DEBUG nova.network.neutron [req-0e6354d6-053d-439b-ab66-0f17a657985a req-1c73b3de-c61f-44be-92e2-4fb9c2b46219 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Updated VIF entry in instance network info cache for port 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:55:09 np0005629333 nova_compute[244014]: 2026-02-25 12:55:09.741 244018 DEBUG nova.network.neutron [req-0e6354d6-053d-439b-ab66-0f17a657985a req-1c73b3de-c61f-44be-92e2-4fb9c2b46219 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Updating instance_info_cache with network_info: [{"id": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "address": "fa:16:3e:e1:08:7c", "network": {"id": "77884df7-14d9-4fbd-8fa6-5eb17d0a82bf", "bridge": "br-int", "label": "tempest-network-smoke--1156045896", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d4ab3e5-d9", "ovs_interfaceid": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:55:09 np0005629333 nova_compute[244014]: 2026-02-25 12:55:09.777 244018 DEBUG oslo_concurrency.lockutils [req-0e6354d6-053d-439b-ab66-0f17a657985a req-1c73b3de-c61f-44be-92e2-4fb9c2b46219 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-4e93a158-a4bd-41a0-8fe0-52b2a069c409" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
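The network_info blob cached two lines up is plain JSON: a list of VIFs, each carrying a network with subnets, which hold the fixed IPs. A small sketch of pulling the MAC/IP pairs back out of such a blob:

    import json

    def fixed_ips(network_info_json):
        # For the cache entry above this yields
        # [('fa:16:3e:e1:08:7c', '10.100.0.9')].
        return [(vif["address"], ip["address"])
                for vif in json.loads(network_info_json)
                for subnet in vif["network"]["subnets"]
                for ip in subnet["ips"]]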
Feb 25 07:55:09 np0005629333 nova_compute[244014]: 2026-02-25 12:55:09.860 244018 DEBUG oslo_concurrency.processutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4e93a158-a4bd-41a0-8fe0-52b2a069c409/disk.config 4e93a158-a4bd-41a0-8fe0-52b2a069c409_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:55:09 np0005629333 nova_compute[244014]: 2026-02-25 12:55:09.861 244018 INFO nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Deleting local config drive /var/lib/nova/instances/4e93a158-a4bd-41a0-8fe0-52b2a069c409/disk.config because it was imported into RBD.#033[00m
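Because this host uses the RBD image backend, the freshly built ISO exists locally only for a moment: nova imports it into the vms pool as <uuid>_disk.config and then removes the local file, as the three lines above show. A sketch of that import-then-delete sequence:

    import os
    import subprocess

    def import_config_drive(local_path, image_name, pool="vms",
                            ceph_id="openstack", conf="/etc/ceph/ceph.conf"):
        subprocess.run(
            ["rbd", "import", "--pool", pool, local_path, image_name,
             "--image-format=2", "--id", ceph_id, "--conf", conf],
            check=True)
        os.unlink(local_path)  # only once rbd import returned 0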
Feb 25 07:55:09 np0005629333 kernel: tap8d4ab3e5-d9: entered promiscuous mode
Feb 25 07:55:09 np0005629333 NetworkManager[49836]: <info>  [1772024109.9244] manager: (tap8d4ab3e5-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/588)
Feb 25 07:55:09 np0005629333 nova_compute[244014]: 2026-02-25 12:55:09.924 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:55:09Z|01418|binding|INFO|Claiming lport 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d for this chassis.
Feb 25 07:55:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:55:09Z|01419|binding|INFO|8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d: Claiming fa:16:3e:e1:08:7c 10.100.0.9
Feb 25 07:55:09 np0005629333 nova_compute[244014]: 2026-02-25 12:55:09.929 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:09 np0005629333 nova_compute[244014]: 2026-02-25 12:55:09.932 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:09 np0005629333 nova_compute[244014]: 2026-02-25 12:55:09.936 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:09.952 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:08:7c 10.100.0.9'], port_security=['fa:16:3e:e1:08:7c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4e93a158-a4bd-41a0-8fe0-52b2a069c409', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e0c08d4b-4169-4e97-8a55-a9553174491d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14ad45b1-188a-47f0-9d37-00198e9d57fa, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:55:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:09.953 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d in datapath 77884df7-14d9-4fbd-8fa6-5eb17d0a82bf bound to our chassis#033[00m
Feb 25 07:55:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:09.955 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77884df7-14d9-4fbd-8fa6-5eb17d0a82bf#033[00m
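The three agent lines above are the OVN metadata agent's event pipeline: an ovsdbapp RowEvent on Port_Binding matched (old=Port_Binding(chassis=[]) means the chassis column just changed from empty), the agent concluded the port is bound to this chassis, and it kicked off metadata provisioning for the datapath. A stripped-down sketch of such an event class; neutron's real PortBindingUpdatedEvent adds chassis filtering and more event types:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBound(row_event.RowEvent):
        def __init__(self):
            # Same shape as the matched event in the log:
            # events=('update',), table='Port_Binding', no conditions.
            super().__init__(('update',), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' carries only the changed columns, so an empty old
            # chassis plus a populated row.chassis means "just bound".
            print('lport %s bound' % row.logical_port)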
Feb 25 07:55:09 np0005629333 systemd-machined[210048]: New machine qemu-166-instance-00000086.
Feb 25 07:55:09 np0005629333 systemd-udevd[366434]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:55:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:09.967 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0811b6ee-78ee-45ed-af02-c9759abf7cd0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:09.968 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap77884df7-11 in ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
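Provisioning starts with a veth pair: the tap77884df7-11 end goes inside the ovnmeta-<network-uuid> namespace (where haproxy will later bind 169.254.169.254) and tap77884df7-10 stays in the root namespace to be plugged into OVS. A pyroute2 sketch of that step, using the names from the log; it needs root and an already-created namespace, and stands in for neutron's privsep-wrapped ip_lib helpers seen in the surrounding lines:

    from pyroute2 import IPRoute

    def make_metadata_veth(outer='tap77884df7-10', inner='tap77884df7-11',
                           ns='ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf'):
        with IPRoute() as ipr:
            ipr.link('add', ifname=outer, kind='veth', peer=inner)
            # Move the inner end into the metadata namespace.
            idx = ipr.link_lookup(ifname=inner)[0]
            ipr.link('set', index=idx, net_ns_fd=ns)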
Feb 25 07:55:09 np0005629333 NetworkManager[49836]: <info>  [1772024109.9740] device (tap8d4ab3e5-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:55:09 np0005629333 NetworkManager[49836]: <info>  [1772024109.9750] device (tap8d4ab3e5-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:55:09 np0005629333 systemd[1]: Started Virtual Machine qemu-166-instance-00000086.
Feb 25 07:55:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:09.971 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap77884df7-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:55:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:09.971 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f19be0bb-0839-45ce-875c-10e6a9974c43]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:09.974 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[62904fb7-7757-4a48-af90-268bf7bb420f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:55:09Z|01420|binding|INFO|Setting lport 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d ovn-installed in OVS
Feb 25 07:55:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:55:09Z|01421|binding|INFO|Setting lport 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d up in Southbound
Feb 25 07:55:09 np0005629333 nova_compute[244014]: 2026-02-25 12:55:09.979 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:09.986 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[5801bdb8-cb8f-4012-91aa-0e469afafd16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.000 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[725d8f68-194c-44f3-94bf-4187cad5d686]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.023 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d598f095-8fd3-4dd7-a1df-9d66453ecebe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:10 np0005629333 systemd-udevd[366437]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:55:10 np0005629333 NetworkManager[49836]: <info>  [1772024110.0303] manager: (tap77884df7-10): new Veth device (/org/freedesktop/NetworkManager/Devices/589)
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.029 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7a8749b5-b2b0-4b4e-a932-af304d71e390]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.053 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4f06a8c9-7330-47a8-b9bb-4d177dacbabf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.058 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2d8db995-90d3-43b1-94f8-4eddba65eb79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:10 np0005629333 NetworkManager[49836]: <info>  [1772024110.0797] device (tap77884df7-10): carrier: link connected
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.083 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb29783-b098-472e-995b-47367109c8b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.101 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3fdd3612-f568-4bd5-91e3-f87e703e83e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77884df7-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:34:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 422], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607948, 'reachable_time': 32603, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366467, 'error': None, 'target': 'ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.120 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[24d64c7e-de7e-42fa-925f-4196bc0c2d59]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe55:3411'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607948, 'tstamp': 607948}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366468, 'error': None, 'target': 'ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.139 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4df5927d-7931-4512-99a1-495dd26082f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77884df7-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:34:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 422], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607948, 'reachable_time': 32603, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366469, 'error': None, 'target': 'ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.169 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[02450337-5c4f-4be8-be51-38cd3c4aaa73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.225 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9fede634-ccea-447a-bb55-bdefa35db250]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.226 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77884df7-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.227 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.228 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77884df7-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.230 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:10 np0005629333 kernel: tap77884df7-10: entered promiscuous mode
Feb 25 07:55:10 np0005629333 NetworkManager[49836]: <info>  [1772024110.2322] manager: (tap77884df7-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/590)
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.235 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77884df7-10, col_values=(('external_ids', {'iface-id': '99924042-340f-4cf2-b7e0-cbd906fe3c45'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
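The three transactions above plug the outer veth end into OVS: drop any stale tap77884df7-10 port from br-ex, add it to br-int, and set external_ids:iface-id to the metadata port's UUID so ovn-controller can match it against the logical metadata port. A sketch with ovsdbapp's Open_vSwitch API, where ovs_api stands for a connected OvsdbIdl instance (hypothetical handle); the agent actually issues these as three single-command transactions, batched here for brevity:

    def plug_metadata_port(ovs_api, port='tap77884df7-10',
                           iface_id='99924042-340f-4cf2-b7e0-cbd906fe3c45'):
        with ovs_api.transaction(check_error=True) as txn:
            txn.add(ovs_api.del_port(port, bridge='br-ex', if_exists=True))
            txn.add(ovs_api.add_port('br-int', port, may_exist=True))
            # iface-id is what ovn-controller matches against the
            # Port_Binding logical_port.
            txn.add(ovs_api.db_set(
                'Interface', port, ('external_ids', {'iface-id': iface_id})))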
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.236 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:10 np0005629333 ovn_controller[147040]: 2026-02-25T12:55:10Z|01422|binding|INFO|Releasing lport 99924042-340f-4cf2-b7e0-cbd906fe3c45 from this chassis (sb_readonly=0)
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.237 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.238 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/77884df7-14d9-4fbd-8fa6-5eb17d0a82bf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/77884df7-14d9-4fbd-8fa6-5eb17d0a82bf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.238 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5309c82f-1711-4e4c-9ebf-ea2a686e9113]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.239 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/77884df7-14d9-4fbd-8fa6-5eb17d0a82bf.pid.haproxy
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 77884df7-14d9-4fbd-8fa6-5eb17d0a82bf
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:55:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:10.240 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf', 'env', 'PROCESS_TAG=haproxy-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/77884df7-14d9-4fbd-8fa6-5eb17d0a82bf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
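That rootwrap command is the last provisioning step: render the haproxy configuration printed above to /var/lib/neutron/ovn-metadata-proxy/<network>.conf and exec haproxy inside the ovnmeta namespace. A direct sketch of the same launch, minus the rootwrap/privsep indirection neutron uses (so it needs root):

    import subprocess

    NETWORK = '77884df7-14d9-4fbd-8fa6-5eb17d0a82bf'
    CFG = '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % NETWORK

    def start_metadata_haproxy(cfg_text):
        with open(CFG, 'w') as f:
            f.write(cfg_text)
        # PROCESS_TAG mirrors the env var in the logged command; haproxy
        # backgrounds itself per the 'daemon' directive in the config.
        subprocess.run(
            ['ip', 'netns', 'exec', 'ovnmeta-%s' % NETWORK,
             'env', 'PROCESS_TAG=haproxy-%s' % NETWORK,
             'haproxy', '-f', CFG],
            check=True)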
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.245 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.264 244018 DEBUG nova.compute.manager [req-78749b2b-d7f2-4fef-85fc-412884bf11c1 req-7de0c61f-81f0-40d0-b550-745144d98243 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Received event network-vif-plugged-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.264 244018 DEBUG oslo_concurrency.lockutils [req-78749b2b-d7f2-4fef-85fc-412884bf11c1 req-7de0c61f-81f0-40d0-b550-745144d98243 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.265 244018 DEBUG oslo_concurrency.lockutils [req-78749b2b-d7f2-4fef-85fc-412884bf11c1 req-7de0c61f-81f0-40d0-b550-745144d98243 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.265 244018 DEBUG oslo_concurrency.lockutils [req-78749b2b-d7f2-4fef-85fc-412884bf11c1 req-7de0c61f-81f0-40d0-b550-745144d98243 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.266 244018 DEBUG nova.compute.manager [req-78749b2b-d7f2-4fef-85fc-412884bf11c1 req-7de0c61f-81f0-40d0-b550-745144d98243 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Processing event network-vif-plugged-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.473 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:10 np0005629333 podman[366501]: 2026-02-25 12:55:10.600006226 +0000 UTC m=+0.070556991 container create 403707fb052a6cce800dd129557287a99d76e360fe7c869d3bc648b6f9ce8141 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:55:10 np0005629333 systemd[1]: Started libpod-conmon-403707fb052a6cce800dd129557287a99d76e360fe7c869d3bc648b6f9ce8141.scope.
Feb 25 07:55:10 np0005629333 podman[366501]: 2026-02-25 12:55:10.567657063 +0000 UTC m=+0.038207888 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:55:10 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:55:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36d579fc02c3db25950a35ca5e2bc6443af220c9434e5f70315a2a4e00da493a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:55:10 np0005629333 podman[366501]: 2026-02-25 12:55:10.693552574 +0000 UTC m=+0.164103359 container init 403707fb052a6cce800dd129557287a99d76e360fe7c869d3bc648b6f9ce8141 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 07:55:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2238: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:55:10 np0005629333 podman[366501]: 2026-02-25 12:55:10.700990004 +0000 UTC m=+0.171540749 container start 403707fb052a6cce800dd129557287a99d76e360fe7c869d3bc648b6f9ce8141 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.710 244018 DEBUG nova.compute.manager [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.712 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024110.7094576, 4e93a158-a4bd-41a0-8fe0-52b2a069c409 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.713 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] VM Started (Lifecycle Event)#033[00m
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.723 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.728 244018 INFO nova.virt.libvirt.driver [-] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Instance spawned successfully.#033[00m
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.728 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:55:10 np0005629333 neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf[366557]: [NOTICE]   (366562) : New worker (366564) forked
Feb 25 07:55:10 np0005629333 neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf[366557]: [NOTICE]   (366562) : Loading success.
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.777 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.781 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
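The numeric states in that line come from nova.compute.power_state; a lookup table for reading these entries (DB power_state 0 against VM power_state 1 is exactly the building/spawning window, which is why the sync is skipped just below):

    # Values per nova.compute.power_state's long-stable mapping.
    POWER_STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                   4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}
    print(POWER_STATE[0], '->', POWER_STATE[1])  # NOSTATE -> RUNNING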
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.848 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.848 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024110.7108655, 4e93a158-a4bd-41a0-8fe0-52b2a069c409 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.849 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.853 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.853 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.854 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.854 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.854 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.855 244018 DEBUG nova.virt.libvirt.driver [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.904 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.908 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024110.7154756, 4e93a158-a4bd-41a0-8fe0-52b2a069c409 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.908 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.936 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.940 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.961 244018 INFO nova.compute.manager [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Took 11.33 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.961 244018 DEBUG nova.compute.manager [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:55:10 np0005629333 nova_compute[244014]: 2026-02-25 12:55:10.969 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:55:11 np0005629333 nova_compute[244014]: 2026-02-25 12:55:11.041 244018 INFO nova.compute.manager [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Took 12.49 seconds to build instance.#033[00m
Feb 25 07:55:11 np0005629333 nova_compute[244014]: 2026-02-25 12:55:11.061 244018 DEBUG oslo_concurrency.lockutils [None req-ea13f419-7a1e-46dc-89d0-81eaf7304372 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:55:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2239: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Feb 25 07:55:13 np0005629333 nova_compute[244014]: 2026-02-25 12:55:13.185 244018 DEBUG nova.compute.manager [req-ac860ee6-bb20-4288-abe5-643834fc6e74 req-5d911c95-1937-4763-a303-502333f5eb3c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Received event network-vif-plugged-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:55:13 np0005629333 nova_compute[244014]: 2026-02-25 12:55:13.186 244018 DEBUG oslo_concurrency.lockutils [req-ac860ee6-bb20-4288-abe5-643834fc6e74 req-5d911c95-1937-4763-a303-502333f5eb3c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:13 np0005629333 nova_compute[244014]: 2026-02-25 12:55:13.186 244018 DEBUG oslo_concurrency.lockutils [req-ac860ee6-bb20-4288-abe5-643834fc6e74 req-5d911c95-1937-4763-a303-502333f5eb3c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:13 np0005629333 nova_compute[244014]: 2026-02-25 12:55:13.186 244018 DEBUG oslo_concurrency.lockutils [req-ac860ee6-bb20-4288-abe5-643834fc6e74 req-5d911c95-1937-4763-a303-502333f5eb3c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:55:13 np0005629333 nova_compute[244014]: 2026-02-25 12:55:13.187 244018 DEBUG nova.compute.manager [req-ac860ee6-bb20-4288-abe5-643834fc6e74 req-5d911c95-1937-4763-a303-502333f5eb3c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] No waiting events found dispatching network-vif-plugged-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:55:13 np0005629333 nova_compute[244014]: 2026-02-25 12:55:13.187 244018 WARNING nova.compute.manager [req-ac860ee6-bb20-4288-abe5-643834fc6e74 req-5d911c95-1937-4763-a303-502333f5eb3c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Received unexpected event network-vif-plugged-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d for instance with vm_state active and task_state None.#033[00m
Feb 25 07:55:13 np0005629333 nova_compute[244014]: 2026-02-25 12:55:13.511 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "dfb7287a-5448-4579-8938-fe909fbf54c6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:13 np0005629333 nova_compute[244014]: 2026-02-25 12:55:13.512 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:13 np0005629333 nova_compute[244014]: 2026-02-25 12:55:13.538 244018 DEBUG nova.compute.manager [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:55:13 np0005629333 nova_compute[244014]: 2026-02-25 12:55:13.614 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:13 np0005629333 nova_compute[244014]: 2026-02-25 12:55:13.615 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:13 np0005629333 nova_compute[244014]: 2026-02-25 12:55:13.622 244018 DEBUG nova.virt.hardware [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:55:13 np0005629333 nova_compute[244014]: 2026-02-25 12:55:13.623 244018 INFO nova.compute.claims [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:55:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:55:13 np0005629333 nova_compute[244014]: 2026-02-25 12:55:13.832 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:13 np0005629333 nova_compute[244014]: 2026-02-25 12:55:13.868 244018 DEBUG oslo_concurrency.processutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:55:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:55:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2944023126' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:55:14 np0005629333 nova_compute[244014]: 2026-02-25 12:55:14.414 244018 DEBUG oslo_concurrency.processutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
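Before claiming disk for the next instance, nova's RBD backend sizes the cluster by shelling out to ceph df, as the lines above show (the ceph-mon audit entries in between record the dispatched command). A sketch of the same probe:

    import json
    import subprocess

    def ceph_df(ceph_id='openstack', conf='/etc/ceph/ceph.conf'):
        out = subprocess.run(
            ['ceph', 'df', '--format=json', '--id', ceph_id, '--conf', conf],
            check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    # e.g. [p['name'] for p in ceph_df()['pools']] lists the pools.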
Feb 25 07:55:14 np0005629333 nova_compute[244014]: 2026-02-25 12:55:14.420 244018 DEBUG nova.compute.provider_tree [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:55:14 np0005629333 nova_compute[244014]: 2026-02-25 12:55:14.444 244018 DEBUG nova.scheduler.client.report [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:55:14 np0005629333 nova_compute[244014]: 2026-02-25 12:55:14.479 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
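
The claim itself is serialized on the in-process "compute_resources" semaphore: acquired at 12:55:13.615 after a 0.001s wait and released 0.864s later, bracketing the NUMA fit check and the placement inventory update. A minimal sketch of the oslo.concurrency pattern behind those acquire/release lines; the function body is a placeholder, not the resource tracker's logic:

    # Sketch of the lockutils pattern visible in the log (placeholder body).
    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def instance_claim(instance):
        # Runs with the "compute_resources" semaphore held, which is what
        # produces the 'acquired ... waited' and 'released ... held' lines.
        ...
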
Feb 25 07:55:14 np0005629333 nova_compute[244014]: 2026-02-25 12:55:14.479 244018 DEBUG nova.compute.manager [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:55:14 np0005629333 nova_compute[244014]: 2026-02-25 12:55:14.538 244018 DEBUG nova.compute.manager [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:55:14 np0005629333 nova_compute[244014]: 2026-02-25 12:55:14.539 244018 DEBUG nova.network.neutron [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:55:14 np0005629333 nova_compute[244014]: 2026-02-25 12:55:14.627 244018 INFO nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:55:14 np0005629333 nova_compute[244014]: 2026-02-25 12:55:14.646 244018 DEBUG nova.compute.manager [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:55:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2240: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 12 KiB/s wr, 69 op/s
Feb 25 07:55:14 np0005629333 nova_compute[244014]: 2026-02-25 12:55:14.736 244018 DEBUG nova.policy [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
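
The failed policy check above is informational: nova probes network:attach_external_network with the requester's credentials (roles reader/member only), and a denial at DEBUG level does not fail the build, as the subsequent port creation shows. A sketch of an equivalent oslo.policy check; the "role:admin" default is illustrative, not nova's registered default:

    # Sketch of an oslo.policy check like the one logged above.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(
        policy.RuleDefault("network:attach_external_network", "role:admin"))

    creds = {"roles": ["reader", "member"],
             "project_id": "25fa1e8dd32c483686f869da2604f2b1"}
    # Returns False for a reader/member token, matching the DEBUG line.
    print(enforcer.enforce("network:attach_external_network", {}, creds))
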
Feb 25 07:55:14 np0005629333 nova_compute[244014]: 2026-02-25 12:55:14.743 244018 DEBUG nova.compute.manager [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:55:14 np0005629333 nova_compute[244014]: 2026-02-25 12:55:14.745 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:55:14 np0005629333 nova_compute[244014]: 2026-02-25 12:55:14.746 244018 INFO nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Creating image(s)#033[00m
Feb 25 07:55:14 np0005629333 nova_compute[244014]: 2026-02-25 12:55:14.784 244018 DEBUG nova.storage.rbd_utils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dfb7287a-5448-4579-8938-fe909fbf54c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:55:14 np0005629333 nova_compute[244014]: 2026-02-25 12:55:14.820 244018 DEBUG nova.storage.rbd_utils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dfb7287a-5448-4579-8938-fe909fbf54c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:55:14 np0005629333 nova_compute[244014]: 2026-02-25 12:55:14.846 244018 DEBUG nova.storage.rbd_utils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dfb7287a-5448-4579-8938-fe909fbf54c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:55:14 np0005629333 nova_compute[244014]: 2026-02-25 12:55:14.849 244018 DEBUG oslo_concurrency.processutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:55:14 np0005629333 nova_compute[244014]: 2026-02-25 12:55:14.940 244018 DEBUG oslo_concurrency.processutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
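
Before touching the image cache, the driver probes the cached base file with qemu-img info, wrapped in oslo_concurrency.prlimit so a malformed image cannot run the prober past 1 GiB of address space or 30s of CPU. A sketch reproducing the exact logged command; the virtual-size/format keys are qemu-img's standard JSON output:

    # Sketch of the bounded "qemu-img info" probe from the log.
    import json
    import subprocess

    BASE = ("/var/lib/nova/instances/_base/"
            "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6")

    out = subprocess.check_output(
        ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
         "--as=1073741824", "--cpu=30", "--",
         "env", "LC_ALL=C", "LANG=C",
         "qemu-img", "info", BASE, "--force-share", "--output=json"])
    info = json.loads(out)
    print(info["format"], info["virtual-size"])  # e.g. 'qcow2' and a byte count
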
Feb 25 07:55:14 np0005629333 nova_compute[244014]: 2026-02-25 12:55:14.941 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:14 np0005629333 nova_compute[244014]: 2026-02-25 12:55:14.942 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:14 np0005629333 nova_compute[244014]: 2026-02-25 12:55:14.942 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:55:14 np0005629333 nova_compute[244014]: 2026-02-25 12:55:14.974 244018 DEBUG nova.storage.rbd_utils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dfb7287a-5448-4579-8938-fe909fbf54c6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:55:14 np0005629333 nova_compute[244014]: 2026-02-25 12:55:14.982 244018 DEBUG oslo_concurrency.processutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 dfb7287a-5448-4579-8938-fe909fbf54c6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:55:15 np0005629333 NetworkManager[49836]: <info>  [1772024115.0383] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/591)
Feb 25 07:55:15 np0005629333 nova_compute[244014]: 2026-02-25 12:55:15.037 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:15 np0005629333 NetworkManager[49836]: <info>  [1772024115.0395] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/592)
Feb 25 07:55:15 np0005629333 nova_compute[244014]: 2026-02-25 12:55:15.065 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:15 np0005629333 ovn_controller[147040]: 2026-02-25T12:55:15Z|01423|binding|INFO|Releasing lport 99924042-340f-4cf2-b7e0-cbd906fe3c45 from this chassis (sb_readonly=0)
Feb 25 07:55:15 np0005629333 nova_compute[244014]: 2026-02-25 12:55:15.082 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:15 np0005629333 nova_compute[244014]: 2026-02-25 12:55:15.233 244018 DEBUG oslo_concurrency.processutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 dfb7287a-5448-4579-8938-fe909fbf54c6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:55:15 np0005629333 nova_compute[244014]: 2026-02-25 12:55:15.305 244018 DEBUG nova.storage.rbd_utils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image dfb7287a-5448-4579-8938-fe909fbf54c6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
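
With the probe done, the root disk is created directly in the vms pool: an rbd import of the cached base image followed by a resize to the flavor's 1 GiB root disk (1073741824 bytes, matching m1.nano root_gb=1). The resize in the log goes through nova's rbd_utils (the librbd python binding) rather than a subprocess, which is why no CMD line is logged for it. A sketch of the same two steps using only the rbd CLI; the --size suffix handling is the CLI's usual behavior, assumed here:

    # Sketch of the import-then-resize sequence, CLI only (nova resizes via
    # librbd instead). Pool, image name and size are taken from the log.
    import subprocess

    base = ("/var/lib/nova/instances/_base/"
            "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6")
    disk = "dfb7287a-5448-4579-8938-fe909fbf54c6_disk"
    auth = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    subprocess.check_call(["rbd", "import", "--pool", "vms", base, disk,
                           "--image-format=2"] + auth)
    subprocess.check_call(["rbd", "resize", "--pool", "vms", "--image", disk,
                           "--size", "1G"] + auth)   # 1073741824 bytes
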
Feb 25 07:55:15 np0005629333 nova_compute[244014]: 2026-02-25 12:55:15.396 244018 DEBUG nova.compute.manager [req-34264510-374b-4236-87ea-98641f0478f7 req-9c363a2b-aaad-4f62-80cf-0b4810bebb90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Received event network-changed-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:55:15 np0005629333 nova_compute[244014]: 2026-02-25 12:55:15.397 244018 DEBUG nova.compute.manager [req-34264510-374b-4236-87ea-98641f0478f7 req-9c363a2b-aaad-4f62-80cf-0b4810bebb90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Refreshing instance network info cache due to event network-changed-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:55:15 np0005629333 nova_compute[244014]: 2026-02-25 12:55:15.397 244018 DEBUG oslo_concurrency.lockutils [req-34264510-374b-4236-87ea-98641f0478f7 req-9c363a2b-aaad-4f62-80cf-0b4810bebb90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-4e93a158-a4bd-41a0-8fe0-52b2a069c409" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:55:15 np0005629333 nova_compute[244014]: 2026-02-25 12:55:15.397 244018 DEBUG oslo_concurrency.lockutils [req-34264510-374b-4236-87ea-98641f0478f7 req-9c363a2b-aaad-4f62-80cf-0b4810bebb90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-4e93a158-a4bd-41a0-8fe0-52b2a069c409" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:55:15 np0005629333 nova_compute[244014]: 2026-02-25 12:55:15.398 244018 DEBUG nova.network.neutron [req-34264510-374b-4236-87ea-98641f0478f7 req-9c363a2b-aaad-4f62-80cf-0b4810bebb90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Refreshing network info cache for port 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:55:15 np0005629333 nova_compute[244014]: 2026-02-25 12:55:15.404 244018 DEBUG nova.objects.instance [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid dfb7287a-5448-4579-8938-fe909fbf54c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:55:15 np0005629333 nova_compute[244014]: 2026-02-25 12:55:15.436 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:55:15 np0005629333 nova_compute[244014]: 2026-02-25 12:55:15.437 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Ensure instance console log exists: /var/lib/nova/instances/dfb7287a-5448-4579-8938-fe909fbf54c6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:55:15 np0005629333 nova_compute[244014]: 2026-02-25 12:55:15.438 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:15 np0005629333 nova_compute[244014]: 2026-02-25 12:55:15.438 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:15 np0005629333 nova_compute[244014]: 2026-02-25 12:55:15.438 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:55:15 np0005629333 nova_compute[244014]: 2026-02-25 12:55:15.476 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:15 np0005629333 nova_compute[244014]: 2026-02-25 12:55:15.655 244018 DEBUG nova.network.neutron [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Successfully created port: 44908825-991d-42d4-9bad-18d2a1f5fe9c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:55:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2241: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 12 KiB/s wr, 69 op/s
Feb 25 07:55:16 np0005629333 nova_compute[244014]: 2026-02-25 12:55:16.767 244018 DEBUG nova.network.neutron [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Successfully updated port: 44908825-991d-42d4-9bad-18d2a1f5fe9c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:55:16 np0005629333 nova_compute[244014]: 2026-02-25 12:55:16.810 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:55:16 np0005629333 nova_compute[244014]: 2026-02-25 12:55:16.810 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:55:16 np0005629333 nova_compute[244014]: 2026-02-25 12:55:16.810 244018 DEBUG nova.network.neutron [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:55:16 np0005629333 nova_compute[244014]: 2026-02-25 12:55:16.837 244018 DEBUG nova.compute.manager [req-26f3faca-fe36-44bb-91d6-36651700490c req-19c33118-ec16-4ee5-b295-72a18a9e9bb0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Received event network-changed-44908825-991d-42d4-9bad-18d2a1f5fe9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:55:16 np0005629333 nova_compute[244014]: 2026-02-25 12:55:16.838 244018 DEBUG nova.compute.manager [req-26f3faca-fe36-44bb-91d6-36651700490c req-19c33118-ec16-4ee5-b295-72a18a9e9bb0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Refreshing instance network info cache due to event network-changed-44908825-991d-42d4-9bad-18d2a1f5fe9c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:55:16 np0005629333 nova_compute[244014]: 2026-02-25 12:55:16.839 244018 DEBUG oslo_concurrency.lockutils [req-26f3faca-fe36-44bb-91d6-36651700490c req-19c33118-ec16-4ee5-b295-72a18a9e9bb0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:55:16 np0005629333 nova_compute[244014]: 2026-02-25 12:55:16.851 244018 DEBUG nova.network.neutron [req-34264510-374b-4236-87ea-98641f0478f7 req-9c363a2b-aaad-4f62-80cf-0b4810bebb90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Updated VIF entry in instance network info cache for port 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:55:16 np0005629333 nova_compute[244014]: 2026-02-25 12:55:16.852 244018 DEBUG nova.network.neutron [req-34264510-374b-4236-87ea-98641f0478f7 req-9c363a2b-aaad-4f62-80cf-0b4810bebb90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Updating instance_info_cache with network_info: [{"id": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "address": "fa:16:3e:e1:08:7c", "network": {"id": "77884df7-14d9-4fbd-8fa6-5eb17d0a82bf", "bridge": "br-int", "label": "tempest-network-smoke--1156045896", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d4ab3e5-d9", "ovs_interfaceid": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:55:16 np0005629333 nova_compute[244014]: 2026-02-25 12:55:16.918 244018 DEBUG oslo_concurrency.lockutils [req-34264510-374b-4236-87ea-98641f0478f7 req-9c363a2b-aaad-4f62-80cf-0b4810bebb90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-4e93a158-a4bd-41a0-8fe0-52b2a069c409" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:55:16 np0005629333 nova_compute[244014]: 2026-02-25 12:55:16.940 244018 DEBUG nova.network.neutron [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:55:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:55:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2242: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 07:55:18 np0005629333 nova_compute[244014]: 2026-02-25 12:55:18.835 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:19 np0005629333 nova_compute[244014]: 2026-02-25 12:55:19.094 244018 DEBUG nova.network.neutron [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Updating instance_info_cache with network_info: [{"id": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "address": "fa:16:3e:e4:45:21", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee4:4521", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44908825-99", "ovs_interfaceid": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:55:19 np0005629333 nova_compute[244014]: 2026-02-25 12:55:19.133 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:55:19 np0005629333 nova_compute[244014]: 2026-02-25 12:55:19.134 244018 DEBUG nova.compute.manager [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Instance network_info: |[{"id": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "address": "fa:16:3e:e4:45:21", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee4:4521", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44908825-99", "ovs_interfaceid": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
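
The network_info blob that lands in the instance cache above carries everything the later VIF and XML steps consume. A minimal sketch of pulling the MAC, tap device and fixed addresses out of one of these entries, assuming only the JSON structure printed in the log:

    # Sketch: walk a network_info entry as printed above and pull out the
    # fields the VIF plug and guest XML use.
    import json

    def summarize(network_info_json):
        vif = json.loads(network_info_json)[0]   # single port on this instance
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        return vif["address"], vif["devname"], ips

    # For the entry above: ('fa:16:3e:e4:45:21', 'tap44908825-99',
    #                       ['10.100.0.3', '2001:db8::f816:3eff:fee4:4521'])
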
Feb 25 07:55:19 np0005629333 nova_compute[244014]: 2026-02-25 12:55:19.134 244018 DEBUG oslo_concurrency.lockutils [req-26f3faca-fe36-44bb-91d6-36651700490c req-19c33118-ec16-4ee5-b295-72a18a9e9bb0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:55:19 np0005629333 nova_compute[244014]: 2026-02-25 12:55:19.134 244018 DEBUG nova.network.neutron [req-26f3faca-fe36-44bb-91d6-36651700490c req-19c33118-ec16-4ee5-b295-72a18a9e9bb0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Refreshing network info cache for port 44908825-991d-42d4-9bad-18d2a1f5fe9c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:55:19 np0005629333 nova_compute[244014]: 2026-02-25 12:55:19.137 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Start _get_guest_xml network_info=[{"id": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "address": "fa:16:3e:e4:45:21", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee4:4521", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44908825-99", "ovs_interfaceid": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:55:19 np0005629333 nova_compute[244014]: 2026-02-25 12:55:19.143 244018 WARNING nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:55:19 np0005629333 nova_compute[244014]: 2026-02-25 12:55:19.149 244018 DEBUG nova.virt.libvirt.host [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:55:19 np0005629333 nova_compute[244014]: 2026-02-25 12:55:19.150 244018 DEBUG nova.virt.libvirt.host [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:55:19 np0005629333 nova_compute[244014]: 2026-02-25 12:55:19.156 244018 DEBUG nova.virt.libvirt.host [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:55:19 np0005629333 nova_compute[244014]: 2026-02-25 12:55:19.156 244018 DEBUG nova.virt.libvirt.host [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:55:19 np0005629333 nova_compute[244014]: 2026-02-25 12:55:19.157 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:55:19 np0005629333 nova_compute[244014]: 2026-02-25 12:55:19.157 244018 DEBUG nova.virt.hardware [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:55:19 np0005629333 nova_compute[244014]: 2026-02-25 12:55:19.157 244018 DEBUG nova.virt.hardware [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:55:19 np0005629333 nova_compute[244014]: 2026-02-25 12:55:19.158 244018 DEBUG nova.virt.hardware [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:55:19 np0005629333 nova_compute[244014]: 2026-02-25 12:55:19.158 244018 DEBUG nova.virt.hardware [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:55:19 np0005629333 nova_compute[244014]: 2026-02-25 12:55:19.158 244018 DEBUG nova.virt.hardware [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:55:19 np0005629333 nova_compute[244014]: 2026-02-25 12:55:19.158 244018 DEBUG nova.virt.hardware [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:55:19 np0005629333 nova_compute[244014]: 2026-02-25 12:55:19.158 244018 DEBUG nova.virt.hardware [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:55:19 np0005629333 nova_compute[244014]: 2026-02-25 12:55:19.159 244018 DEBUG nova.virt.hardware [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:55:19 np0005629333 nova_compute[244014]: 2026-02-25 12:55:19.159 244018 DEBUG nova.virt.hardware [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:55:19 np0005629333 nova_compute[244014]: 2026-02-25 12:55:19.159 244018 DEBUG nova.virt.hardware [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:55:19 np0005629333 nova_compute[244014]: 2026-02-25 12:55:19.159 244018 DEBUG nova.virt.hardware [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
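
The topology walk above is fully unconstrained (flavor and image limits are all 0:0:0 against 65536 maxima), so the only factorization of a single vCPU is sockets=1, cores=1, threads=1. A simplified sketch of the enumeration that yields the "Got 1 possible topologies" line; nova.virt.hardware applies more ordering and preference logic than this:

    # Simplified sketch of possible-topology enumeration for a vCPU count.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        return [(s, c, t)
                for s in range(1, min(vcpus, max_sockets) + 1)
                for c in range(1, min(vcpus, max_cores) + 1)
                for t in range(1, min(vcpus, max_threads) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))   # [(1, 1, 1)] -> the <topology/> in the XML
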
Feb 25 07:55:19 np0005629333 nova_compute[244014]: 2026-02-25 12:55:19.162 244018 DEBUG oslo_concurrency.processutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:55:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:55:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/712710689' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:55:19 np0005629333 nova_compute[244014]: 2026-02-25 12:55:19.698 244018 DEBUG oslo_concurrency.processutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:55:19 np0005629333 nova_compute[244014]: 2026-02-25 12:55:19.730 244018 DEBUG nova.storage.rbd_utils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dfb7287a-5448-4579-8938-fe909fbf54c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:55:19 np0005629333 nova_compute[244014]: 2026-02-25 12:55:19.734 244018 DEBUG oslo_concurrency.processutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:55:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:55:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1690628935' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.357 244018 DEBUG oslo_concurrency.processutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
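
The two "ceph mon dump" calls feed the monitor endpoint list that reappears further down as <host name="192.168.122.100" port="6789"/> elements in the guest disk XML. A sketch of extracting those endpoints; the mons/public_addr key layout is the usual mon dump JSON format and is assumed rather than taken from this log:

    # Sketch: monitor endpoints from "ceph mon dump --format=json" (key
    # layout assumed; only the command line itself is from the log).
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    mons = json.loads(out)["mons"]
    hosts = [m["public_addr"].rsplit("/", 1)[0] for m in mons]
    # e.g. '192.168.122.100:6789/0' -> '192.168.122.100:6789'
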
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.360 244018 DEBUG nova.virt.libvirt.vif [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:55:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-730348204',display_name='tempest-TestGettingAddress-server-730348204',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-730348204',id=135,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCjAwmzTqRLo6MtceITp6kkRRb7F7L6NpmCwm+uERDDuNYjjK2SbSlgPchdRYIPodHunzOlcpbeAY9Mle2akRMrTndDYB62iVXgr7LtFS3t0EZlmgtz0aV+ezHMzkXsV7A==',key_name='tempest-TestGettingAddress-1069883733',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-cliy4xfz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:55:14Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=dfb7287a-5448-4579-8938-fe909fbf54c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "address": "fa:16:3e:e4:45:21", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee4:4521", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44908825-99", "ovs_interfaceid": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.361 244018 DEBUG nova.network.os_vif_util [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "address": "fa:16:3e:e4:45:21", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee4:4521", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44908825-99", "ovs_interfaceid": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.363 244018 DEBUG nova.network.os_vif_util [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:45:21,bridge_name='br-int',has_traffic_filtering=True,id=44908825-991d-42d4-9bad-18d2a1f5fe9c,network=Network(9f82a56f-a5c3-446e-9b75-7dc69c93c56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44908825-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.365 244018 DEBUG nova.objects.instance [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid dfb7287a-5448-4579-8938-fe909fbf54c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.386 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:55:20 np0005629333 nova_compute[244014]:  <uuid>dfb7287a-5448-4579-8938-fe909fbf54c6</uuid>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:  <name>instance-00000087</name>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestGettingAddress-server-730348204</nova:name>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:55:19</nova:creationTime>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:55:20 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:        <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:        <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:        <nova:port uuid="44908825-991d-42d4-9bad-18d2a1f5fe9c">
Feb 25 07:55:20 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fee4:4521" ipVersion="6"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <entry name="serial">dfb7287a-5448-4579-8938-fe909fbf54c6</entry>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <entry name="uuid">dfb7287a-5448-4579-8938-fe909fbf54c6</entry>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/dfb7287a-5448-4579-8938-fe909fbf54c6_disk">
Feb 25 07:55:20 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:55:20 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/dfb7287a-5448-4579-8938-fe909fbf54c6_disk.config">
Feb 25 07:55:20 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:55:20 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:e4:45:21"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <target dev="tap44908825-99"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/dfb7287a-5448-4579-8938-fe909fbf54c6/console.log" append="off"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:55:20 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:55:20 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:55:20 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:55:20 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
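The dump above is the full domain XML nova's _get_guest_xml rendered for instance dfb7287a-5448-4579-8938-fe909fbf54c6: two RBD-backed disks (the virtio root disk vda and the SATA config-drive cdrom sda), one vhost/virtio interface on tap44908825-99, and a q35 machine with a bank of pcie-root-port controllers for hotplug. A minimal stdlib sketch for pulling that device wiring back out of such a dump; the "domain.xml" filename is a stand-in for wherever the XML between <domain> and </domain> is saved:

import xml.etree.ElementTree as ET

root = ET.parse("domain.xml").getroot()

for disk in root.findall("./devices/disk"):
    source = disk.find("source")
    target = disk.find("target")
    print(disk.get("device"),                       # disk / cdrom
          target.get("dev"), target.get("bus"),     # vda/virtio, sda/sata
          source.get("protocol"), source.get("name"))  # rbd, vms/..._disk

for iface in root.findall("./devices/interface"):
    print(iface.find("mac").get("address"),         # fa:16:3e:e4:45:21
          iface.find("target").get("dev"))          # tap44908825-99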
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.388 244018 DEBUG nova.compute.manager [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Preparing to wait for external event network-vif-plugged-44908825-991d-42d4-9bad-18d2a1f5fe9c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.388 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.388 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.389 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.389 244018 DEBUG nova.virt.libvirt.vif [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:55:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-730348204',display_name='tempest-TestGettingAddress-server-730348204',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-730348204',id=135,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCjAwmzTqRLo6MtceITp6kkRRb7F7L6NpmCwm+uERDDuNYjjK2SbSlgPchdRYIPodHunzOlcpbeAY9Mle2akRMrTndDYB62iVXgr7LtFS3t0EZlmgtz0aV+ezHMzkXsV7A==',key_name='tempest-TestGettingAddress-1069883733',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-cliy4xfz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:55:14Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=dfb7287a-5448-4579-8938-fe909fbf54c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "address": "fa:16:3e:e4:45:21", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee4:4521", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, 
"connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44908825-99", "ovs_interfaceid": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.390 244018 DEBUG nova.network.os_vif_util [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "address": "fa:16:3e:e4:45:21", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee4:4521", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44908825-99", "ovs_interfaceid": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.390 244018 DEBUG nova.network.os_vif_util [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:45:21,bridge_name='br-int',has_traffic_filtering=True,id=44908825-991d-42d4-9bad-18d2a1f5fe9c,network=Network(9f82a56f-a5c3-446e-9b75-7dc69c93c56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44908825-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.391 244018 DEBUG os_vif [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:45:21,bridge_name='br-int',has_traffic_filtering=True,id=44908825-991d-42d4-9bad-18d2a1f5fe9c,network=Network(9f82a56f-a5c3-446e-9b75-7dc69c93c56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44908825-99') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.391 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.392 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.392 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.395 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.395 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap44908825-99, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.396 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap44908825-99, col_values=(('external_ids', {'iface-id': '44908825-991d-42d4-9bad-18d2a1f5fe9c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e4:45:21', 'vm-uuid': 'dfb7287a-5448-4579-8938-fe909fbf54c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.399 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:20 np0005629333 NetworkManager[49836]: <info>  [1772024120.4000] manager: (tap44908825-99): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/593)
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.404 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.406 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.407 244018 INFO os_vif [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:45:21,bridge_name='br-int',has_traffic_filtering=True,id=44908825-991d-42d4-9bad-18d2a1f5fe9c,network=Network(9f82a56f-a5c3-446e-9b75-7dc69c93c56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44908825-99')#033[00m
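The AddBridgeCommand/AddPortCommand/DbSetCommand transactions above are os-vif's native OVSDB IDL path for plugging the port into br-int. The same plug, expressed with the ovs-vsctl CLI instead of the ovsdbapp transaction, would look roughly like this sketch (values copied from the DbSetCommand; run as root on the compute node):

import subprocess

PORT = "tap44908825-99"
subprocess.run(
    ["ovs-vsctl", "--may-exist", "add-port", "br-int", PORT,
     "--", "set", "Interface", PORT,
     "external_ids:iface-id=44908825-991d-42d4-9bad-18d2a1f5fe9c",
     "external_ids:iface-status=active",
     "external_ids:attached-mac=fa:16:3e:e4:45:21",
     "external_ids:vm-uuid=dfb7287a-5448-4579-8938-fe909fbf54c6"],
    check=True)

The iface-id in external_ids is what lets ovn-controller match the OVS interface to the Neutron port and claim it a second later.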
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.463 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.464 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.465 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:e4:45:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.466 244018 INFO nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Using config drive#033[00m
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.506 244018 DEBUG nova.storage.rbd_utils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dfb7287a-5448-4579-8938-fe909fbf54c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:55:20 np0005629333 nova_compute[244014]: 2026-02-25 12:55:20.520 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2243: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.105 244018 INFO nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Creating config drive at /var/lib/nova/instances/dfb7287a-5448-4579-8938-fe909fbf54c6/disk.config#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.108 244018 DEBUG oslo_concurrency.processutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dfb7287a-5448-4579-8938-fe909fbf54c6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1faxc0a9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.143 244018 DEBUG nova.network.neutron [req-26f3faca-fe36-44bb-91d6-36651700490c req-19c33118-ec16-4ee5-b295-72a18a9e9bb0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Updated VIF entry in instance network info cache for port 44908825-991d-42d4-9bad-18d2a1f5fe9c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.144 244018 DEBUG nova.network.neutron [req-26f3faca-fe36-44bb-91d6-36651700490c req-19c33118-ec16-4ee5-b295-72a18a9e9bb0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Updating instance_info_cache with network_info: [{"id": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "address": "fa:16:3e:e4:45:21", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee4:4521", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44908825-99", "ovs_interfaceid": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.164 244018 DEBUG oslo_concurrency.lockutils [req-26f3faca-fe36-44bb-91d6-36651700490c req-19c33118-ec16-4ee5-b295-72a18a9e9bb0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.251 244018 DEBUG oslo_concurrency.processutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dfb7287a-5448-4579-8938-fe909fbf54c6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1faxc0a9" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.283 244018 DEBUG nova.storage.rbd_utils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image dfb7287a-5448-4579-8938-fe909fbf54c6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.289 244018 DEBUG oslo_concurrency.processutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dfb7287a-5448-4579-8938-fe909fbf54c6/disk.config dfb7287a-5448-4579-8938-fe909fbf54c6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.444 244018 DEBUG oslo_concurrency.processutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dfb7287a-5448-4579-8938-fe909fbf54c6/disk.config dfb7287a-5448-4579-8938-fe909fbf54c6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.446 244018 INFO nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Deleting local config drive /var/lib/nova/instances/dfb7287a-5448-4579-8938-fe909fbf54c6/disk.config because it was imported into RBD.#033[00m
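The config-drive sequence above has three steps: mkisofs packs the metadata tmpdir into an ISO9660 image labeled config-2, rbd import copies it into the vms pool as <uuid>_disk.config (matching the cdrom <source> in the domain XML), and the local file is deleted. A sketch of the same flow with the commands as logged (the publisher string is shortened here, and the tmpdir path is the one nova happened to log):

import os
import subprocess

uuid = "dfb7287a-5448-4579-8938-fe909fbf54c6"
iso = f"/var/lib/nova/instances/{uuid}/disk.config"
metadata_dir = "/tmp/tmp1faxc0a9"   # tmpdir nova populated with the metadata

subprocess.run(
    ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
     "-allow-multidot", "-l", "-publisher", "OpenStack Compute",
     "-quiet", "-J", "-r", "-V", "config-2", metadata_dir],
    check=True)

subprocess.run(
    ["rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
     "--image-format=2", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    check=True)

os.remove(iso)   # the RBD copy is now the authoritative one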
Feb 25 07:55:21 np0005629333 kernel: tap44908825-99: entered promiscuous mode
Feb 25 07:55:21 np0005629333 NetworkManager[49836]: <info>  [1772024121.4969] manager: (tap44908825-99): new Tun device (/org/freedesktop/NetworkManager/Devices/594)
Feb 25 07:55:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:55:21Z|01424|binding|INFO|Claiming lport 44908825-991d-42d4-9bad-18d2a1f5fe9c for this chassis.
Feb 25 07:55:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:55:21Z|01425|binding|INFO|44908825-991d-42d4-9bad-18d2a1f5fe9c: Claiming fa:16:3e:e4:45:21 10.100.0.3 2001:db8::f816:3eff:fee4:4521
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.498 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.509 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:45:21 10.100.0.3 2001:db8::f816:3eff:fee4:4521'], port_security=['fa:16:3e:e4:45:21 10.100.0.3 2001:db8::f816:3eff:fee4:4521'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fee4:4521/64', 'neutron:device_id': 'dfb7287a-5448-4579-8938-fe909fbf54c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aaaa1259-a3db-497e-8efb-1f3c1f8cc088', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7d68418-660c-4b4a-9631-94822d5aa11e, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=44908825-991d-42d4-9bad-18d2a1f5fe9c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.513 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 44908825-991d-42d4-9bad-18d2a1f5fe9c in datapath 9f82a56f-a5c3-446e-9b75-7dc69c93c56b bound to our chassis#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.512 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:55:21Z|01426|binding|INFO|Setting lport 44908825-991d-42d4-9bad-18d2a1f5fe9c up in Southbound
Feb 25 07:55:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:55:21Z|01427|binding|INFO|Setting lport 44908825-991d-42d4-9bad-18d2a1f5fe9c ovn-installed in OVS
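At this point ovn-controller has claimed the logical port for this chassis and flipped it up in the southbound database, which is what ultimately lets neutron emit network-vif-plugged back to nova. One way to verify the binding from the compute node, assuming ovn-sbctl is installed and can reach the SB database:

import subprocess

out = subprocess.run(
    ["ovn-sbctl", "--columns=logical_port,chassis,up", "find",
     "Port_Binding",
     "logical_port=44908825-991d-42d4-9bad-18d2a1f5fe9c"],
    check=True, capture_output=True, text=True).stdout
print(out)   # chassis should reference compute-0, up should be true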
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.515 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.516 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f82a56f-a5c3-446e-9b75-7dc69c93c56b#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.517 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.528 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[549270e8-73d0-46ee-9754-65ac5c447f93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.529 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9f82a56f-a1 in ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.532 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9f82a56f-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.532 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[891719ac-a7c2-484c-8d30-4455970ff947]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:21 np0005629333 systemd-machined[210048]: New machine qemu-167-instance-00000087.
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.533 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1a46de76-c2f2-48eb-afb0-7cc00e678677]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:21 np0005629333 systemd[1]: Started Virtual Machine qemu-167-instance-00000087.
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.546 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[00c73720-85b7-469d-b0d1-92a0aab3afc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:21 np0005629333 systemd-udevd[366902]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.573 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[caa50fbe-a252-43b0-a00b-6f19aaab1e2a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:21 np0005629333 NetworkManager[49836]: <info>  [1772024121.5789] device (tap44908825-99): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:55:21 np0005629333 NetworkManager[49836]: <info>  [1772024121.5801] device (tap44908825-99): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.605 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[bbbb7c6d-3eed-4929-8049-0db4f00c888f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.612 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c4c1a1d0-5a96-41ad-8072-73d2a164d575]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:21 np0005629333 NetworkManager[49836]: <info>  [1772024121.6167] manager: (tap9f82a56f-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/595)
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.645 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cd7982c7-7f52-460e-939d-e0dc793ed2bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.648 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3c5fbf23-d4ae-435c-8220-7f5ff3c10bb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:21 np0005629333 NetworkManager[49836]: <info>  [1772024121.6695] device (tap9f82a56f-a0): carrier: link connected
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.675 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[afdda326-42e1-4355-ac1b-c75789013fa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.693 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[007b4908-f86d-4c97-8338-64c7738dd1b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f82a56f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:91:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609107, 'reachable_time': 19694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366932, 'error': None, 'target': 'ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.707 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5eb7aa6a-c9f2-40cf-a1ea-5b7b26a58cdc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe31:9136'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609107, 'tstamp': 609107}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366933, 'error': None, 'target': 'ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.717 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[98b8cb06-b990-4fbc-945e-6a88a16d9e59]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f82a56f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:91:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609107, 'reachable_time': 19694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366934, 'error': None, 'target': 'ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
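The two large privsep replies above are netlink RTM_NEWLINK/RTM_NEWADDR dumps for the veth leg tap9f82a56f-a1 that the metadata agent created inside the ovnmeta-9f82a56f-... namespace (MAC fa:16:3e:31:91:36, MTU 1500, operstate UP, link-local fe80::f816:3eff:fe31:9136). The same state can be read far more compactly with pyroute2, the library neutron's ip_lib wraps; a sketch, root required, namespace name taken from the log:

from pyroute2 import NetNS

ns = NetNS("ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b")
try:
    idx = ns.link_lookup(ifname="tap9f82a56f-a1")[0]
    link = ns.get_links(idx)[0]
    print(link.get_attr("IFLA_ADDRESS"),    # fa:16:3e:31:91:36
          link.get_attr("IFLA_MTU"),        # 1500
          link.get_attr("IFLA_OPERSTATE"))  # UP
finally:
    ns.close()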
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.734 244018 DEBUG nova.compute.manager [req-4db9796f-1cc1-42b6-b458-74fc9ec7b371 req-1342e31e-dba9-415e-adbc-d63ae30855c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Received event network-vif-plugged-44908825-991d-42d4-9bad-18d2a1f5fe9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.735 244018 DEBUG oslo_concurrency.lockutils [req-4db9796f-1cc1-42b6-b458-74fc9ec7b371 req-1342e31e-dba9-415e-adbc-d63ae30855c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.735 244018 DEBUG oslo_concurrency.lockutils [req-4db9796f-1cc1-42b6-b458-74fc9ec7b371 req-1342e31e-dba9-415e-adbc-d63ae30855c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.736 244018 DEBUG oslo_concurrency.lockutils [req-4db9796f-1cc1-42b6-b458-74fc9ec7b371 req-1342e31e-dba9-415e-adbc-d63ae30855c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.736 244018 DEBUG nova.compute.manager [req-4db9796f-1cc1-42b6-b458-74fc9ec7b371 req-1342e31e-dba9-415e-adbc-d63ae30855c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Processing event network-vif-plugged-44908825-991d-42d4-9bad-18d2a1f5fe9c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
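This closes the loop opened at 12:55:20.388, where the manager registered a waiter for network-vif-plugged before plugging the port: once OVN binds the port, neutron delivers the event over RPC and pop_instance_event wakes the spawning thread. The coordination pattern in miniature, with a threading.Event standing in for nova's per-instance event objects (all names here are illustrative):

import threading

events = {}                    # (instance_uuid, event_name) -> Event
events_lock = threading.Lock()

def prepare(instance, name):
    # registered BEFORE the action that triggers the event
    with events_lock:
        return events.setdefault((instance, name), threading.Event())

def deliver(instance, name):
    # called from the external-event path when neutron reports in
    with events_lock:
        ev = events.pop((instance, name), None)
    if ev is not None:
        ev.set()

waiter = prepare("dfb7287a", "network-vif-plugged")
# ... plug the VIF, define and launch the domain ...
deliver("dfb7287a", "network-vif-plugged")
assert waiter.wait(timeout=300)   # the spawn thread resumes here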
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.740 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[cd5d7958-1ada-4f7f-a21c-0273037e649c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.789 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fcbc59e3-d648-44db-b747-d2a669b64169]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.791 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f82a56f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.791 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.792 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f82a56f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.793 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:21 np0005629333 NetworkManager[49836]: <info>  [1772024121.7941] manager: (tap9f82a56f-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/596)
Feb 25 07:55:21 np0005629333 kernel: tap9f82a56f-a0: entered promiscuous mode
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.796 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f82a56f-a0, col_values=(('external_ids', {'iface-id': 'd6ceb7cb-a48f-47c3-acb6-58d61268f8c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:55:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:55:21Z|01428|binding|INFO|Releasing lport d6ceb7cb-a48f-47c3-acb6-58d61268f8c7 from this chassis (sb_readonly=0)
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.797 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.798 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9f82a56f-a5c3-446e-9b75-7dc69c93c56b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9f82a56f-a5c3-446e-9b75-7dc69c93c56b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.799 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a86cf020-9cd8-4f84-a0a8-90ea2471412d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.800 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-9f82a56f-a5c3-446e-9b75-7dc69c93c56b
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/9f82a56f-a5c3-446e-9b75-7dc69c93c56b.pid.haproxy
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 9f82a56f-a5c3-446e-9b75-7dc69c93c56b
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:55:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:21.801 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'env', 'PROCESS_TAG=haproxy-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9f82a56f-a5c3-446e-9b75-7dc69c93c56b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
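The rendered haproxy config above binds 169.254.169.254:80 inside the network namespace, proxies requests to the metadata service socket at /var/lib/neutron/metadata_proxy, and tags each request with an X-OVN-Network-ID header so the backend can resolve which network the instance sits on. The launch command the agent just ran through neutron-rootwrap reduces to the following sketch (plain sudo standing in for rootwrap):

import subprocess

net = "9f82a56f-a5c3-446e-9b75-7dc69c93c56b"
subprocess.run(
    ["sudo", "ip", "netns", "exec", f"ovnmeta-{net}",
     "env", f"PROCESS_TAG=haproxy-{net}",
     "haproxy", "-f",
     f"/var/lib/neutron/ovn-metadata-proxy/{net}.conf"],
    check=True)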
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.802 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.887 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024121.8870604, dfb7287a-5448-4579-8938-fe909fbf54c6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.888 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] VM Started (Lifecycle Event)#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.891 244018 DEBUG nova.compute.manager [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.895 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.899 244018 INFO nova.virt.libvirt.driver [-] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Instance spawned successfully.#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.900 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.915 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.922 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.930 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.931 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.932 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.933 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.934 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.935 244018 DEBUG nova.virt.libvirt.driver [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.946 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.947 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024121.887162, dfb7287a-5448-4579-8938-fe909fbf54c6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.947 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.979 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.983 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024121.893982, dfb7287a-5448-4579-8938-fe909fbf54c6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.984 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.994 244018 INFO nova.compute.manager [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Took 7.25 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:55:21 np0005629333 nova_compute[244014]: 2026-02-25 12:55:21.995 244018 DEBUG nova.compute.manager [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:55:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:55:22Z|00173|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e1:08:7c 10.100.0.9
Feb 25 07:55:22 np0005629333 ovn_controller[147040]: 2026-02-25T12:55:22Z|00174|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e1:08:7c 10.100.0.9
Feb 25 07:55:22 np0005629333 nova_compute[244014]: 2026-02-25 12:55:22.004 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:55:22 np0005629333 nova_compute[244014]: 2026-02-25 12:55:22.006 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:55:22 np0005629333 nova_compute[244014]: 2026-02-25 12:55:22.038 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
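The Started/Paused/Resumed burst above is the normal libvirt spawn sequence, and each sync attempt is skipped because task_state is still 'spawning'. A hedged sketch of that decision (the 0/1 constants match the DB power_state and VM power_state printed above; the function is illustrative, not nova's implementation):

    # Why "pending task (spawning). Skip." is logged above:
    # power-state sync defers to any in-flight task. Illustrative only.
    NOSTATE, RUNNING = 0, 1

    def should_sync_power_state(db_state, vm_state, task_state):
        if task_state is not None:      # e.g. 'spawning'
            return False                # -> "Skip."
        return db_state != vm_state

    assert not should_sync_power_state(NOSTATE, RUNNING, 'spawning')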
Feb 25 07:55:22 np0005629333 nova_compute[244014]: 2026-02-25 12:55:22.069 244018 INFO nova.compute.manager [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Took 8.49 seconds to build instance.#033[00m
Feb 25 07:55:22 np0005629333 nova_compute[244014]: 2026-02-25 12:55:22.089 244018 DEBUG oslo_concurrency.lockutils [None req-814b1e6e-7b2a-4468-8f34-017f71aa979e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
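Lock lines like the one above carry wait and hold durations, which makes slow serialization easy to spot (here the per-instance build lock was held for the full 8.577 s build). A small scanner for release lines over a saved journal, with the regex written against the exact format shown:

    import re

    # Matches: Lock "<name>" "released" by "<target>" :: held 8.577s
    HELD = re.compile(r'Lock "([^"]+)" "released" by "([^"]+)" :: held ([0-9.]+)s')

    def slow_locks(lines, threshold=1.0):
        for line in lines:
            m = HELD.search(line)
            if m and float(m.group(3)) >= threshold:
                yield m.group(1), m.group(2), float(m.group(3))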
Feb 25 07:55:22 np0005629333 podman[367008]: 2026-02-25 12:55:22.181124719 +0000 UTC m=+0.065777006 container create 428a904833ac927c13f7cd77da8bdaf3e47abd2d0a464c01afa6fcfb7b29a635 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:55:22 np0005629333 systemd[1]: Started libpod-conmon-428a904833ac927c13f7cd77da8bdaf3e47abd2d0a464c01afa6fcfb7b29a635.scope.
Feb 25 07:55:22 np0005629333 podman[367008]: 2026-02-25 12:55:22.152404449 +0000 UTC m=+0.037056746 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:55:22 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:55:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36eb1392c95fbb33162d7879fcd33b904283c8f0341d2d03795af7b704d65fdf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:55:22 np0005629333 podman[367008]: 2026-02-25 12:55:22.287925132 +0000 UTC m=+0.172577459 container init 428a904833ac927c13f7cd77da8bdaf3e47abd2d0a464c01afa6fcfb7b29a635 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:55:22 np0005629333 podman[367008]: 2026-02-25 12:55:22.292761538 +0000 UTC m=+0.177413825 container start 428a904833ac927c13f7cd77da8bdaf3e47abd2d0a464c01afa6fcfb7b29a635 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 25 07:55:22 np0005629333 neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b[367023]: [NOTICE]   (367027) : New worker (367029) forked
Feb 25 07:55:22 np0005629333 neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b[367023]: [NOTICE]   (367027) : Loading success.
Feb 25 07:55:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2244: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 180 op/s
Feb 25 07:55:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:55:23 np0005629333 nova_compute[244014]: 2026-02-25 12:55:23.849 244018 DEBUG nova.compute.manager [req-85686a0d-d8e6-4c85-85ef-f783d007d8f8 req-8aab2e6c-14ab-410b-af88-1aca8084eb80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Received event network-vif-plugged-44908825-991d-42d4-9bad-18d2a1f5fe9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:55:23 np0005629333 nova_compute[244014]: 2026-02-25 12:55:23.850 244018 DEBUG oslo_concurrency.lockutils [req-85686a0d-d8e6-4c85-85ef-f783d007d8f8 req-8aab2e6c-14ab-410b-af88-1aca8084eb80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:23 np0005629333 nova_compute[244014]: 2026-02-25 12:55:23.851 244018 DEBUG oslo_concurrency.lockutils [req-85686a0d-d8e6-4c85-85ef-f783d007d8f8 req-8aab2e6c-14ab-410b-af88-1aca8084eb80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:23 np0005629333 nova_compute[244014]: 2026-02-25 12:55:23.851 244018 DEBUG oslo_concurrency.lockutils [req-85686a0d-d8e6-4c85-85ef-f783d007d8f8 req-8aab2e6c-14ab-410b-af88-1aca8084eb80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:55:23 np0005629333 nova_compute[244014]: 2026-02-25 12:55:23.852 244018 DEBUG nova.compute.manager [req-85686a0d-d8e6-4c85-85ef-f783d007d8f8 req-8aab2e6c-14ab-410b-af88-1aca8084eb80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] No waiting events found dispatching network-vif-plugged-44908825-991d-42d4-9bad-18d2a1f5fe9c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:55:23 np0005629333 nova_compute[244014]: 2026-02-25 12:55:23.852 244018 WARNING nova.compute.manager [req-85686a0d-d8e6-4c85-85ef-f783d007d8f8 req-8aab2e6c-14ab-410b-af88-1aca8084eb80 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Received unexpected event network-vif-plugged-44908825-991d-42d4-9bad-18d2a1f5fe9c for instance with vm_state active and task_state None.#033[00m
Feb 25 07:55:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2245: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 650 KiB/s rd, 3.9 MiB/s wr, 110 op/s
Feb 25 07:55:25 np0005629333 nova_compute[244014]: 2026-02-25 12:55:25.399 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:25 np0005629333 nova_compute[244014]: 2026-02-25 12:55:25.481 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2246: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 650 KiB/s rd, 3.9 MiB/s wr, 110 op/s
Feb 25 07:55:27 np0005629333 nova_compute[244014]: 2026-02-25 12:55:27.160 244018 DEBUG nova.compute.manager [req-b44c3c97-1f0f-4828-8732-8f2cb71b0ec2 req-b9fb1720-2fa0-4601-bf9e-da7d12a88132 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Received event network-changed-44908825-991d-42d4-9bad-18d2a1f5fe9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:55:27 np0005629333 nova_compute[244014]: 2026-02-25 12:55:27.161 244018 DEBUG nova.compute.manager [req-b44c3c97-1f0f-4828-8732-8f2cb71b0ec2 req-b9fb1720-2fa0-4601-bf9e-da7d12a88132 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Refreshing instance network info cache due to event network-changed-44908825-991d-42d4-9bad-18d2a1f5fe9c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:55:27 np0005629333 nova_compute[244014]: 2026-02-25 12:55:27.162 244018 DEBUG oslo_concurrency.lockutils [req-b44c3c97-1f0f-4828-8732-8f2cb71b0ec2 req-b9fb1720-2fa0-4601-bf9e-da7d12a88132 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:55:27 np0005629333 nova_compute[244014]: 2026-02-25 12:55:27.162 244018 DEBUG oslo_concurrency.lockutils [req-b44c3c97-1f0f-4828-8732-8f2cb71b0ec2 req-b9fb1720-2fa0-4601-bf9e-da7d12a88132 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:55:27 np0005629333 nova_compute[244014]: 2026-02-25 12:55:27.163 244018 DEBUG nova.network.neutron [req-b44c3c97-1f0f-4828-8732-8f2cb71b0ec2 req-b9fb1720-2fa0-4601-bf9e-da7d12a88132 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Refreshing network info cache for port 44908825-991d-42d4-9bad-18d2a1f5fe9c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:55:28 np0005629333 nova_compute[244014]: 2026-02-25 12:55:28.116 244018 INFO nova.compute.manager [None req-72ed04ef-a0f6-4b1a-8dda-1613dfa6e021 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Get console output#033[00m
Feb 25 07:55:28 np0005629333 nova_compute[244014]: 2026-02-25 12:55:28.122 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Feb 25 07:55:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:55:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2247: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 168 op/s
Feb 25 07:55:29 np0005629333 nova_compute[244014]: 2026-02-25 12:55:29.549 244018 DEBUG nova.network.neutron [req-b44c3c97-1f0f-4828-8732-8f2cb71b0ec2 req-b9fb1720-2fa0-4601-bf9e-da7d12a88132 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Updated VIF entry in instance network info cache for port 44908825-991d-42d4-9bad-18d2a1f5fe9c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:55:29 np0005629333 nova_compute[244014]: 2026-02-25 12:55:29.549 244018 DEBUG nova.network.neutron [req-b44c3c97-1f0f-4828-8732-8f2cb71b0ec2 req-b9fb1720-2fa0-4601-bf9e-da7d12a88132 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Updating instance_info_cache with network_info: [{"id": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "address": "fa:16:3e:e4:45:21", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee4:4521", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44908825-99", "ovs_interfaceid": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:55:29 np0005629333 nova_compute[244014]: 2026-02-25 12:55:29.585 244018 DEBUG oslo_concurrency.lockutils [req-b44c3c97-1f0f-4828-8732-8f2cb71b0ec2 req-b9fb1720-2fa0-4601-bf9e-da7d12a88132 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
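The "Updating instance_info_cache" line above embeds the full network_info model as JSON. Given just that payload, the fixed and floating addresses can be summarized in a few lines (a sketch; the field names are exactly the ones visible in the logged JSON):

    import json

    def addresses(network_info_json):
        # Collect fixed and floating IPs from a network_info payload.
        out = []
        for vif in json.loads(network_info_json):
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    out.append(ip['address'])
                    out += [f['address'] for f in ip.get('floating_ips', [])]
        return out

    # For the cache entry above this yields:
    # ['10.100.0.3', '192.168.122.233', '2001:db8::f816:3eff:fee4:4521']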
Feb 25 07:55:30 np0005629333 ovn_controller[147040]: 2026-02-25T12:55:30Z|00175|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e1:08:7c 10.100.0.9
Feb 25 07:55:30 np0005629333 nova_compute[244014]: 2026-02-25 12:55:30.403 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:30 np0005629333 nova_compute[244014]: 2026-02-25 12:55:30.484 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2248: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Feb 25 07:55:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:55:31
Feb 25 07:55:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:55:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:55:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'images', 'default.rgw.control', 'default.rgw.log', 'backups', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr']
Feb 25 07:55:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:55:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:55:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:55:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:55:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:55:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:55:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:55:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:55:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:55:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:55:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:55:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:55:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:55:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:55:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:55:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:55:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:55:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2249: 305 pgs: 305 active+clean; 291 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.3 MiB/s wr, 150 op/s
Feb 25 07:55:32 np0005629333 nova_compute[244014]: 2026-02-25 12:55:32.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:55:32 np0005629333 nova_compute[244014]: 2026-02-25 12:55:32.932 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:32 np0005629333 nova_compute[244014]: 2026-02-25 12:55:32.933 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:32 np0005629333 nova_compute[244014]: 2026-02-25 12:55:32.934 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:55:32 np0005629333 nova_compute[244014]: 2026-02-25 12:55:32.935 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:55:32 np0005629333 nova_compute[244014]: 2026-02-25 12:55:32.935 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:55:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:55:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2627345700' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:55:33 np0005629333 nova_compute[244014]: 2026-02-25 12:55:33.587 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
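The resource audit shells out to ceph df exactly as logged above (0.651 s here, billed against the periodic task). An equivalent call from Python, reusing the same flags and conf path seen in the log:

    import json
    import subprocess

    def ceph_df(user='openstack', conf='/etc/ceph/ceph.conf'):
        # Same command line the resource tracker logs above.
        out = subprocess.check_output(
            ['ceph', 'df', '--format=json', '--id', user, '--conf', conf])
        return json.loads(out)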
Feb 25 07:55:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:55:33 np0005629333 nova_compute[244014]: 2026-02-25 12:55:33.706 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:55:33 np0005629333 nova_compute[244014]: 2026-02-25 12:55:33.707 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:55:33 np0005629333 nova_compute[244014]: 2026-02-25 12:55:33.713 244018 DEBUG nova.compute.manager [req-ba2b4d96-ab1c-4fb9-ae89-f65485cc5fa9 req-29d710fe-7ee9-413f-ad5c-fc6952aade90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Received event network-changed-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:55:33 np0005629333 nova_compute[244014]: 2026-02-25 12:55:33.713 244018 DEBUG nova.compute.manager [req-ba2b4d96-ab1c-4fb9-ae89-f65485cc5fa9 req-29d710fe-7ee9-413f-ad5c-fc6952aade90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Refreshing instance network info cache due to event network-changed-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:55:33 np0005629333 nova_compute[244014]: 2026-02-25 12:55:33.714 244018 DEBUG oslo_concurrency.lockutils [req-ba2b4d96-ab1c-4fb9-ae89-f65485cc5fa9 req-29d710fe-7ee9-413f-ad5c-fc6952aade90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-4e93a158-a4bd-41a0-8fe0-52b2a069c409" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:55:33 np0005629333 nova_compute[244014]: 2026-02-25 12:55:33.714 244018 DEBUG oslo_concurrency.lockutils [req-ba2b4d96-ab1c-4fb9-ae89-f65485cc5fa9 req-29d710fe-7ee9-413f-ad5c-fc6952aade90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-4e93a158-a4bd-41a0-8fe0-52b2a069c409" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:55:33 np0005629333 nova_compute[244014]: 2026-02-25 12:55:33.714 244018 DEBUG nova.network.neutron [req-ba2b4d96-ab1c-4fb9-ae89-f65485cc5fa9 req-29d710fe-7ee9-413f-ad5c-fc6952aade90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Refreshing network info cache for port 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:55:33 np0005629333 nova_compute[244014]: 2026-02-25 12:55:33.721 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:55:33 np0005629333 nova_compute[244014]: 2026-02-25 12:55:33.722 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:55:33 np0005629333 nova_compute[244014]: 2026-02-25 12:55:33.786 244018 DEBUG oslo_concurrency.lockutils [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:33 np0005629333 nova_compute[244014]: 2026-02-25 12:55:33.787 244018 DEBUG oslo_concurrency.lockutils [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:33 np0005629333 nova_compute[244014]: 2026-02-25 12:55:33.788 244018 DEBUG oslo_concurrency.lockutils [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:33 np0005629333 nova_compute[244014]: 2026-02-25 12:55:33.788 244018 DEBUG oslo_concurrency.lockutils [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:33 np0005629333 nova_compute[244014]: 2026-02-25 12:55:33.788 244018 DEBUG oslo_concurrency.lockutils [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:55:33 np0005629333 nova_compute[244014]: 2026-02-25 12:55:33.790 244018 INFO nova.compute.manager [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Terminating instance#033[00m
Feb 25 07:55:33 np0005629333 nova_compute[244014]: 2026-02-25 12:55:33.792 244018 DEBUG nova.compute.manager [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:55:33 np0005629333 kernel: tap8d4ab3e5-d9 (unregistering): left promiscuous mode
Feb 25 07:55:33 np0005629333 NetworkManager[49836]: <info>  [1772024133.9221] device (tap8d4ab3e5-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:55:33 np0005629333 nova_compute[244014]: 2026-02-25 12:55:33.930 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:33 np0005629333 ovn_controller[147040]: 2026-02-25T12:55:33Z|01429|binding|INFO|Releasing lport 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d from this chassis (sb_readonly=0)
Feb 25 07:55:33 np0005629333 ovn_controller[147040]: 2026-02-25T12:55:33Z|01430|binding|INFO|Setting lport 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d down in Southbound
Feb 25 07:55:33 np0005629333 ovn_controller[147040]: 2026-02-25T12:55:33Z|01431|binding|INFO|Removing iface tap8d4ab3e5-d9 ovn-installed in OVS
Feb 25 07:55:33 np0005629333 nova_compute[244014]: 2026-02-25 12:55:33.933 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:33 np0005629333 nova_compute[244014]: 2026-02-25 12:55:33.941 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:33.942 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e1:08:7c 10.100.0.9'], port_security=['fa:16:3e:e1:08:7c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '4e93a158-a4bd-41a0-8fe0-52b2a069c409', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e0c08d4b-4169-4e97-8a55-a9553174491d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14ad45b1-188a-47f0-9d37-00198e9d57fa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:55:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:33.946 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d in datapath 77884df7-14d9-4fbd-8fa6-5eb17d0a82bf unbound from our chassis#033[00m
Feb 25 07:55:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:33.948 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77884df7-14d9-4fbd-8fa6-5eb17d0a82bf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:55:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:33.949 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7dc0b6fd-4786-4e38-822b-f3ddc04c006a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:33.951 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf namespace which is not needed anymore#033[00m
Feb 25 07:55:33 np0005629333 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000086.scope: Deactivated successfully.
Feb 25 07:55:33 np0005629333 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000086.scope: Consumed 12.483s CPU time.
Feb 25 07:55:33 np0005629333 systemd-machined[210048]: Machine qemu-166-instance-00000086 terminated.
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.039 244018 INFO nova.virt.libvirt.driver [-] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Instance destroyed successfully.#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.039 244018 DEBUG nova.objects.instance [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'resources' on Instance uuid 4e93a158-a4bd-41a0-8fe0-52b2a069c409 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.057 244018 DEBUG nova.virt.libvirt.vif [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:54:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-308904214',display_name='tempest-TestNetworkBasicOps-server-308904214',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-308904214',id=134,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBC4B/zjaDJTncfqsk6mkNSo/RgL3k210CyqHPfVKNsCjY61bbmJZVB8ZwHBrT5XizBW+KM0WAI8Ln67e320MCh/gk3oyfKKaIoKvHrkjtWvb3C4JL4zwHhWY7bHt65jOyg==',key_name='tempest-TestNetworkBasicOps-945311814',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:55:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ypd9bclm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:55:11Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=4e93a158-a4bd-41a0-8fe0-52b2a069c409,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "address": "fa:16:3e:e1:08:7c", "network": {"id": "77884df7-14d9-4fbd-8fa6-5eb17d0a82bf", "bridge": "br-int", "label": "tempest-network-smoke--1156045896", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d4ab3e5-d9", "ovs_interfaceid": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.057 244018 DEBUG nova.network.os_vif_util [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "address": "fa:16:3e:e1:08:7c", "network": {"id": "77884df7-14d9-4fbd-8fa6-5eb17d0a82bf", "bridge": "br-int", "label": "tempest-network-smoke--1156045896", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d4ab3e5-d9", "ovs_interfaceid": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.058 244018 DEBUG nova.network.os_vif_util [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e1:08:7c,bridge_name='br-int',has_traffic_filtering=True,id=8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d,network=Network(77884df7-14d9-4fbd-8fa6-5eb17d0a82bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d4ab3e5-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.058 244018 DEBUG os_vif [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:08:7c,bridge_name='br-int',has_traffic_filtering=True,id=8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d,network=Network(77884df7-14d9-4fbd-8fa6-5eb17d0a82bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d4ab3e5-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.060 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.061 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d4ab3e5-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.062 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.065 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.068 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.070 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3204MB free_disk=59.90699964296073GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.070 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.071 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.073 244018 INFO os_vif [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e1:08:7c,bridge_name='br-int',has_traffic_filtering=True,id=8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d,network=Network(77884df7-14d9-4fbd-8fa6-5eb17d0a82bf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d4ab3e5-d9')#033[00m
Feb 25 07:55:34 np0005629333 neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf[366557]: [NOTICE]   (366562) : haproxy version is 2.8.14-c23fe91
Feb 25 07:55:34 np0005629333 neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf[366557]: [NOTICE]   (366562) : path to executable is /usr/sbin/haproxy
Feb 25 07:55:34 np0005629333 neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf[366557]: [WARNING]  (366562) : Exiting Master process...
Feb 25 07:55:34 np0005629333 neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf[366557]: [ALERT]    (366562) : Current worker (366564) exited with code 143 (Terminated)
Feb 25 07:55:34 np0005629333 neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf[366557]: [WARNING]  (366562) : All workers exited. Exiting... (0)
Feb 25 07:55:34 np0005629333 systemd[1]: libpod-403707fb052a6cce800dd129557287a99d76e360fe7c869d3bc648b6f9ce8141.scope: Deactivated successfully.
Feb 25 07:55:34 np0005629333 podman[367093]: 2026-02-25 12:55:34.126068137 +0000 UTC m=+0.067187876 container died 403707fb052a6cce800dd129557287a99d76e360fe7c869d3bc648b6f9ce8141 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 07:55:34 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-403707fb052a6cce800dd129557287a99d76e360fe7c869d3bc648b6f9ce8141-userdata-shm.mount: Deactivated successfully.
Feb 25 07:55:34 np0005629333 systemd[1]: var-lib-containers-storage-overlay-36d579fc02c3db25950a35ca5e2bc6443af220c9434e5f70315a2a4e00da493a-merged.mount: Deactivated successfully.
Feb 25 07:55:34 np0005629333 podman[367093]: 2026-02-25 12:55:34.176620234 +0000 UTC m=+0.117739953 container cleanup 403707fb052a6cce800dd129557287a99d76e360fe7c869d3bc648b6f9ce8141 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.180 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 4e93a158-a4bd-41a0-8fe0-52b2a069c409 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.181 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance dfb7287a-5448-4579-8938-fe909fbf54c6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.182 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.182 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 07:55:34 np0005629333 systemd[1]: libpod-conmon-403707fb052a6cce800dd129557287a99d76e360fe7c869d3bc648b6f9ce8141.scope: Deactivated successfully.
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.203 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.221 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.221 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
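The inventory record in the two DEBUG lines above fully determines what placement will let the scheduler allocate: for each resource class, capacity is (total - reserved) * allocation_ratio. A back-of-envelope check using the logged figures (not Nova code):

    # Inventory copied from the DEBUG lines above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, usable)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2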
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.241 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 25 07:55:34 np0005629333 podman[367141]: 2026-02-25 12:55:34.280447552 +0000 UTC m=+0.076452987 container remove 403707fb052a6cce800dd129557287a99d76e360fe7c869d3bc648b6f9ce8141 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true)
Feb 25 07:55:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:34.287 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6b605807-c2c5-4b00-8da3-ac55738ffbfe]: (4, ('Wed Feb 25 12:55:34 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf (403707fb052a6cce800dd129557287a99d76e360fe7c869d3bc648b6f9ce8141)\n403707fb052a6cce800dd129557287a99d76e360fe7c869d3bc648b6f9ce8141\nWed Feb 25 12:55:34 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf (403707fb052a6cce800dd129557287a99d76e360fe7c869d3bc648b6f9ce8141)\n403707fb052a6cce800dd129557287a99d76e360fe7c869d3bc648b6f9ce8141\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
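The privsep reply above carries the stdout of the neutron kill script: stop the per-network haproxy container, then delete it. By hand that is roughly the following (a sketch; the container ID is taken from the log, and the real wrapper adds the timestamps and kill-script plumbing seen in the reply):

    import subprocess

    cid = "403707fb052a6cce800dd129557287a99d76e360fe7c869d3bc648b6f9ce8141"
    subprocess.run(["podman", "stop", cid], check=True)  # "Stopping container ..."
    subprocess.run(["podman", "rm", cid], check=True)    # "Deleting container ..."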
Feb 25 07:55:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:34.288 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1c19d9c2-85f8-417e-b7a9-8f1b6050c072]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:34.289 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77884df7-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.290 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:34 np0005629333 kernel: tap77884df7-10: left promiscuous mode
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.298 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
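The trait list above is what the report client caches for provider cb4dae98-2ac3-4218-9445-2320139e12ad; a request's required traits are satisfied by simple set containment against it. An illustrative check (not Nova code; the trait names are a small subset copied from the DEBUG line):

    provider_traits = {
        "COMPUTE_NODE", "COMPUTE_IMAGE_TYPE_QCOW2",
        "COMPUTE_NET_VIF_MODEL_VIRTIO", "COMPUTE_SECURITY_TPM_2_0",
    }  # abbreviated from the line above
    required = {"COMPUTE_IMAGE_TYPE_QCOW2", "COMPUTE_NET_VIF_MODEL_VIRTIO"}
    assert required <= provider_traits  # this host can satisfy the request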
Feb 25 07:55:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:34.301 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1f0ab20b-e8c6-4166-9112-ba18730ce947]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.301 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.311 244018 DEBUG nova.compute.manager [req-3827e794-5857-451a-a5f9-f1a2f89f0634 req-0cd1fa25-b72c-48e5-be6e-46554a2cad29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Received event network-vif-unplugged-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.311 244018 DEBUG oslo_concurrency.lockutils [req-3827e794-5857-451a-a5f9-f1a2f89f0634 req-0cd1fa25-b72c-48e5-be6e-46554a2cad29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.312 244018 DEBUG oslo_concurrency.lockutils [req-3827e794-5857-451a-a5f9-f1a2f89f0634 req-0cd1fa25-b72c-48e5-be6e-46554a2cad29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.312 244018 DEBUG oslo_concurrency.lockutils [req-3827e794-5857-451a-a5f9-f1a2f89f0634 req-0cd1fa25-b72c-48e5-be6e-46554a2cad29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.312 244018 DEBUG nova.compute.manager [req-3827e794-5857-451a-a5f9-f1a2f89f0634 req-0cd1fa25-b72c-48e5-be6e-46554a2cad29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] No waiting events found dispatching network-vif-unplugged-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.313 244018 DEBUG nova.compute.manager [req-3827e794-5857-451a-a5f9-f1a2f89f0634 req-0cd1fa25-b72c-48e5-be6e-46554a2cad29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Received event network-vif-unplugged-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
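The acquiring/acquired/released triple in the lockutils lines above is the standard oslo.concurrency pattern guarding the per-instance event queue; a minimal sketch of the same construct:

    from oslo_concurrency import lockutils

    with lockutils.lock("4e93a158-a4bd-41a0-8fe0-52b2a069c409-events"):
        # pop_instance_event body runs here; the DEBUG lines record the
        # wait (0.000s) and hold (0.000s) times on either side of it.
        pass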
Feb 25 07:55:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:34.318 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[46384604-ad53-411c-ab6b-c522ceec5b57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:34.320 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fd378779-ebc1-44a3-8cac-be700ad3be4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:34 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Feb 25 07:55:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:34.331 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e31ec76c-3c3c-411f-900b-445834899d58]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607942, 'reachable_time': 42050, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367155, 'error': None, 'target': 'ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
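The oversized reply above is a netlink RTM_NEWLINK dump for 'lo', taken inside the ovnmeta- namespace just before it is torn down. Producing such a dump looks roughly like this (a sketch assuming pyroute2, which neutron's privileged ip_lib wraps; the namespace name is taken from the log):

    from pyroute2 import NetNS

    with NetNS("ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf") as ns:
        for link in ns.get_links():
            attrs = dict(link["attrs"])
            print(link["index"], attrs.get("IFLA_IFNAME"), attrs.get("IFLA_OPERSTATE"))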
Feb 25 07:55:34 np0005629333 systemd[1]: run-netns-ovnmeta\x2d77884df7\x2d14d9\x2d4fbd\x2d8fa6\x2d5eb17d0a82bf.mount: Deactivated successfully.
Feb 25 07:55:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:34.333 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-77884df7-14d9-4fbd-8fa6-5eb17d0a82bf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:55:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:34.334 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a4b785-14d7-4678-8b36-860d4f6058b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.370 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.432 244018 INFO nova.virt.libvirt.driver [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Deleting instance files /var/lib/nova/instances/4e93a158-a4bd-41a0-8fe0-52b2a069c409_del#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.433 244018 INFO nova.virt.libvirt.driver [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Deletion of /var/lib/nova/instances/4e93a158-a4bd-41a0-8fe0-52b2a069c409_del complete#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.548 244018 INFO nova.compute.manager [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.549 244018 DEBUG oslo.service.loopingcall [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.549 244018 DEBUG nova.compute.manager [-] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.549 244018 DEBUG nova.network.neutron [-] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:55:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:55:34Z|00176|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e4:45:21 10.100.0.3
Feb 25 07:55:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:55:34Z|00177|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e4:45:21 10.100.0.3
Feb 25 07:55:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2250: 305 pgs: 305 active+clean; 291 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.2 MiB/s wr, 70 op/s
Feb 25 07:55:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:55:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/737742452' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.923 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
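The 0.553 s subprocess above is nova's periodic RBD storage-usage probe; the same command run standalone is just the following (a sketch; the top-level field names follow the `ceph df` JSON schema):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True, text=True,
    ).stdout
    stats = json.loads(out)
    print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])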
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.930 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.944 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.979 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 07:55:34 np0005629333 nova_compute[244014]: 2026-02-25 12:55:34.980 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:55:35 np0005629333 nova_compute[244014]: 2026-02-25 12:55:35.487 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:35 np0005629333 nova_compute[244014]: 2026-02-25 12:55:35.976 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:55:35 np0005629333 nova_compute[244014]: 2026-02-25 12:55:35.977 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:55:35 np0005629333 nova_compute[244014]: 2026-02-25 12:55:35.977 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 07:55:35 np0005629333 nova_compute[244014]: 2026-02-25 12:55:35.977 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 07:55:35 np0005629333 nova_compute[244014]: 2026-02-25 12:55:35.994 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Feb 25 07:55:36 np0005629333 nova_compute[244014]: 2026-02-25 12:55:36.134 244018 DEBUG nova.network.neutron [req-ba2b4d96-ab1c-4fb9-ae89-f65485cc5fa9 req-29d710fe-7ee9-413f-ad5c-fc6952aade90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Updated VIF entry in instance network info cache for port 8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:55:36 np0005629333 nova_compute[244014]: 2026-02-25 12:55:36.135 244018 DEBUG nova.network.neutron [req-ba2b4d96-ab1c-4fb9-ae89-f65485cc5fa9 req-29d710fe-7ee9-413f-ad5c-fc6952aade90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Updating instance_info_cache with network_info: [{"id": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "address": "fa:16:3e:e1:08:7c", "network": {"id": "77884df7-14d9-4fbd-8fa6-5eb17d0a82bf", "bridge": "br-int", "label": "tempest-network-smoke--1156045896", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d4ab3e5-d9", "ovs_interfaceid": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
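The cache payload above is plain JSON once lifted out of the log line; pulling each VIF's fixed address from it is a few nested loops (a sketch, with the structure abridged from the line above):

    network_info = [{
        "id": "8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d",
        "network": {"subnets": [{
            "cidr": "10.100.0.0/28",
            "ips": [{"address": "10.100.0.9", "type": "fixed"}],
        }]},
    }]
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["id"], ip["address"])  # -> 10.100.0.9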
Feb 25 07:55:36 np0005629333 nova_compute[244014]: 2026-02-25 12:55:36.149 244018 DEBUG nova.network.neutron [-] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:55:36 np0005629333 nova_compute[244014]: 2026-02-25 12:55:36.153 244018 DEBUG oslo_concurrency.lockutils [req-ba2b4d96-ab1c-4fb9-ae89-f65485cc5fa9 req-29d710fe-7ee9-413f-ad5c-fc6952aade90 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-4e93a158-a4bd-41a0-8fe0-52b2a069c409" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:55:36 np0005629333 nova_compute[244014]: 2026-02-25 12:55:36.168 244018 INFO nova.compute.manager [-] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Took 1.62 seconds to deallocate network for instance.#033[00m
Feb 25 07:55:36 np0005629333 nova_compute[244014]: 2026-02-25 12:55:36.175 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:55:36 np0005629333 nova_compute[244014]: 2026-02-25 12:55:36.176 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:55:36 np0005629333 nova_compute[244014]: 2026-02-25 12:55:36.176 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 25 07:55:36 np0005629333 nova_compute[244014]: 2026-02-25 12:55:36.177 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid dfb7287a-5448-4579-8938-fe909fbf54c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:55:36 np0005629333 nova_compute[244014]: 2026-02-25 12:55:36.241 244018 DEBUG oslo_concurrency.lockutils [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:36 np0005629333 nova_compute[244014]: 2026-02-25 12:55:36.242 244018 DEBUG oslo_concurrency.lockutils [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:36 np0005629333 nova_compute[244014]: 2026-02-25 12:55:36.338 244018 DEBUG oslo_concurrency.processutils [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:55:36 np0005629333 nova_compute[244014]: 2026-02-25 12:55:36.383 244018 DEBUG nova.compute.manager [req-431b43d2-03e9-4d9a-abfd-c65f88c8ec63 req-74b7d936-a298-4157-a5e4-1f6a5a259281 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Received event network-vif-plugged-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:55:36 np0005629333 nova_compute[244014]: 2026-02-25 12:55:36.384 244018 DEBUG oslo_concurrency.lockutils [req-431b43d2-03e9-4d9a-abfd-c65f88c8ec63 req-74b7d936-a298-4157-a5e4-1f6a5a259281 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:36 np0005629333 nova_compute[244014]: 2026-02-25 12:55:36.385 244018 DEBUG oslo_concurrency.lockutils [req-431b43d2-03e9-4d9a-abfd-c65f88c8ec63 req-74b7d936-a298-4157-a5e4-1f6a5a259281 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:36 np0005629333 nova_compute[244014]: 2026-02-25 12:55:36.386 244018 DEBUG oslo_concurrency.lockutils [req-431b43d2-03e9-4d9a-abfd-c65f88c8ec63 req-74b7d936-a298-4157-a5e4-1f6a5a259281 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:55:36 np0005629333 nova_compute[244014]: 2026-02-25 12:55:36.386 244018 DEBUG nova.compute.manager [req-431b43d2-03e9-4d9a-abfd-c65f88c8ec63 req-74b7d936-a298-4157-a5e4-1f6a5a259281 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] No waiting events found dispatching network-vif-plugged-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:55:36 np0005629333 nova_compute[244014]: 2026-02-25 12:55:36.387 244018 WARNING nova.compute.manager [req-431b43d2-03e9-4d9a-abfd-c65f88c8ec63 req-74b7d936-a298-4157-a5e4-1f6a5a259281 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Received unexpected event network-vif-plugged-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:55:36 np0005629333 nova_compute[244014]: 2026-02-25 12:55:36.388 244018 DEBUG nova.compute.manager [req-431b43d2-03e9-4d9a-abfd-c65f88c8ec63 req-74b7d936-a298-4157-a5e4-1f6a5a259281 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Received event network-vif-deleted-8d4ab3e5-d9f2-416c-a6f8-57a0cb15f32d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:55:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2251: 305 pgs: 305 active+clean; 291 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.2 MiB/s wr, 70 op/s
Feb 25 07:55:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:55:36 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/85658916' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:55:36 np0005629333 nova_compute[244014]: 2026-02-25 12:55:36.890 244018 DEBUG oslo_concurrency.processutils [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:55:36 np0005629333 nova_compute[244014]: 2026-02-25 12:55:36.898 244018 DEBUG nova.compute.provider_tree [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:55:36 np0005629333 nova_compute[244014]: 2026-02-25 12:55:36.913 244018 DEBUG nova.scheduler.client.report [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:55:36 np0005629333 nova_compute[244014]: 2026-02-25 12:55:36.937 244018 DEBUG oslo_concurrency.lockutils [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:55:36 np0005629333 nova_compute[244014]: 2026-02-25 12:55:36.968 244018 INFO nova.scheduler.client.report [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Deleted allocations for instance 4e93a158-a4bd-41a0-8fe0-52b2a069c409#033[00m
Feb 25 07:55:37 np0005629333 nova_compute[244014]: 2026-02-25 12:55:37.035 244018 DEBUG oslo_concurrency.lockutils [None req-4f1e1c4a-8fd1-4a0d-9aa6-030f3a18917a 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "4e93a158-a4bd-41a0-8fe0-52b2a069c409" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.248s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:55:37 np0005629333 nova_compute[244014]: 2026-02-25 12:55:37.865 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Updating instance_info_cache with network_info: [{"id": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "address": "fa:16:3e:e4:45:21", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee4:4521", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44908825-99", "ovs_interfaceid": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:55:37 np0005629333 nova_compute[244014]: 2026-02-25 12:55:37.893 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:55:37 np0005629333 nova_compute[244014]: 2026-02-25 12:55:37.894 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 25 07:55:37 np0005629333 nova_compute[244014]: 2026-02-25 12:55:37.894 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:55:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:55:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2252: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 147 op/s
Feb 25 07:55:39 np0005629333 nova_compute[244014]: 2026-02-25 12:55:39.064 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:39 np0005629333 podman[367201]: 2026-02-25 12:55:39.761935536 +0000 UTC m=+0.096341879 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller)
Feb 25 07:55:39 np0005629333 podman[367200]: 2026-02-25 12:55:39.762262035 +0000 UTC m=+0.101541225 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
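Both health_status=healthy events above come from podman's periodic healthcheck timer running the '/openstack/healthcheck' test named in config_data. The same probe can be run on demand (a sketch; the container names are taken from the log):

    import subprocess

    for name in ("ovn_controller", "ovn_metadata_agent"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else "unhealthy")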
Feb 25 07:55:39 np0005629333 nova_compute[244014]: 2026-02-25 12:55:39.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:55:40 np0005629333 nova_compute[244014]: 2026-02-25 12:55:40.490 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2253: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Feb 25 07:55:41 np0005629333 ovn_controller[147040]: 2026-02-25T12:55:41Z|01432|binding|INFO|Releasing lport d6ceb7cb-a48f-47c3-acb6-58d61268f8c7 from this chassis (sb_readonly=0)
Feb 25 07:55:41 np0005629333 nova_compute[244014]: 2026-02-25 12:55:41.593 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:41 np0005629333 nova_compute[244014]: 2026-02-25 12:55:41.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:55:41 np0005629333 nova_compute[244014]: 2026-02-25 12:55:41.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:55:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2254: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 287 KiB/s rd, 2.2 MiB/s wr, 90 op/s
Feb 25 07:55:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:55:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:55:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:55:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:55:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007749709516653071 of space, bias 1.0, pg target 0.23249128549959214 quantized to 32 (current 32)
Feb 25 07:55:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:55:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:55:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:55:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:55:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:55:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002494192933822009 of space, bias 1.0, pg target 0.7482578801466027 quantized to 32 (current 32)
Feb 25 07:55:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:55:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.377850419695492e-06 of space, bias 4.0, pg target 0.0016534205036345905 quantized to 16 (current 16)
Feb 25 07:55:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:55:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:55:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:55:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:55:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:55:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:55:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:55:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:55:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:55:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
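The pg_autoscaler arithmetic above is reproducible from the logged numbers: each pool's ideal PG count is its share of raw space times its bias times the cluster PG budget, which here works out to 300 (consistent with 3 OSDs at the default mon_target_pg_per_osd of 100; that multiplier is an assumption). The ideal is then quantized to a power of two subject to per-pool minimums, and pg_num only changes when ideal and current differ by more than a factor of three, which is why every pool stays put:

    # Back-of-envelope check of the autoscaler lines above (the 300 budget is
    # an assumption; the usage ratios and biases are copied from the log).
    def pg_target(used_ratio, bias, pg_budget=300):
        return used_ratio * bias * pg_budget

    print(pg_target(7.185749983720779e-06, 1.0))  # 0.002156 -> '.mgr' stays at 1
    print(pg_target(0.0007749709516653071, 1.0))  # 0.232491 -> 'vms' stays at 32
    print(pg_target(1.377850419695492e-06, 4.0))  # 0.001653 -> 'cephfs.cephfs.meta' at 16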
Feb 25 07:55:42 np0005629333 nova_compute[244014]: 2026-02-25 12:55:42.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:55:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:55:44 np0005629333 nova_compute[244014]: 2026-02-25 12:55:44.070 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2255: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 271 KiB/s rd, 1015 KiB/s wr, 77 op/s
Feb 25 07:55:44 np0005629333 nova_compute[244014]: 2026-02-25 12:55:44.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:55:44 np0005629333 nova_compute[244014]: 2026-02-25 12:55:44.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
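The skip above is a plain config guard: with reclaim_instance_interval at its default of 0, deleted instances are destroyed immediately rather than parked in SOFT_DELETED for later reclaim, so this periodic task has nothing to do (a sketch of the logic, not Nova source):

    reclaim_instance_interval = 0  # default; implied by the log message
    if reclaim_instance_interval <= 0:
        print("CONF.reclaim_instance_interval <= 0, skipping...")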
Feb 25 07:55:45 np0005629333 nova_compute[244014]: 2026-02-25 12:55:45.492 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:45 np0005629333 nova_compute[244014]: 2026-02-25 12:55:45.541 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:46 np0005629333 nova_compute[244014]: 2026-02-25 12:55:46.438 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "7102d0db-32cc-4a5e-8282-cf5266710872" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:46 np0005629333 nova_compute[244014]: 2026-02-25 12:55:46.439 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:46 np0005629333 nova_compute[244014]: 2026-02-25 12:55:46.468 244018 DEBUG nova.compute.manager [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:55:46 np0005629333 nova_compute[244014]: 2026-02-25 12:55:46.544 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:46 np0005629333 nova_compute[244014]: 2026-02-25 12:55:46.545 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:46 np0005629333 nova_compute[244014]: 2026-02-25 12:55:46.553 244018 DEBUG nova.virt.hardware [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:55:46 np0005629333 nova_compute[244014]: 2026-02-25 12:55:46.553 244018 INFO nova.compute.claims [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:55:46 np0005629333 nova_compute[244014]: 2026-02-25 12:55:46.658 244018 DEBUG oslo_concurrency.processutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:55:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2256: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 271 KiB/s rd, 1015 KiB/s wr, 77 op/s
Feb 25 07:55:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:55:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1657928784' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:55:47 np0005629333 nova_compute[244014]: 2026-02-25 12:55:47.183 244018 DEBUG oslo_concurrency.processutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:55:47 np0005629333 nova_compute[244014]: 2026-02-25 12:55:47.191 244018 DEBUG nova.compute.provider_tree [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:55:47 np0005629333 nova_compute[244014]: 2026-02-25 12:55:47.211 244018 DEBUG nova.scheduler.client.report [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:55:47 np0005629333 nova_compute[244014]: 2026-02-25 12:55:47.260 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:55:47 np0005629333 nova_compute[244014]: 2026-02-25 12:55:47.260 244018 DEBUG nova.compute.manager [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:55:47 np0005629333 nova_compute[244014]: 2026-02-25 12:55:47.359 244018 DEBUG nova.compute.manager [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:55:47 np0005629333 nova_compute[244014]: 2026-02-25 12:55:47.360 244018 DEBUG nova.network.neutron [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:55:47 np0005629333 nova_compute[244014]: 2026-02-25 12:55:47.395 244018 INFO nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:55:47 np0005629333 nova_compute[244014]: 2026-02-25 12:55:47.415 244018 DEBUG nova.compute.manager [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:55:47 np0005629333 nova_compute[244014]: 2026-02-25 12:55:47.507 244018 DEBUG nova.compute.manager [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:55:47 np0005629333 nova_compute[244014]: 2026-02-25 12:55:47.510 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:55:47 np0005629333 nova_compute[244014]: 2026-02-25 12:55:47.510 244018 INFO nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Creating image(s)#033[00m
Feb 25 07:55:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:55:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2792475797' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:55:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:55:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2792475797' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 07:55:47 np0005629333 nova_compute[244014]: 2026-02-25 12:55:47.542 244018 DEBUG nova.storage.rbd_utils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 7102d0db-32cc-4a5e-8282-cf5266710872_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:55:47 np0005629333 nova_compute[244014]: 2026-02-25 12:55:47.581 244018 DEBUG nova.storage.rbd_utils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 7102d0db-32cc-4a5e-8282-cf5266710872_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:55:47 np0005629333 nova_compute[244014]: 2026-02-25 12:55:47.616 244018 DEBUG nova.storage.rbd_utils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 7102d0db-32cc-4a5e-8282-cf5266710872_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:55:47 np0005629333 nova_compute[244014]: 2026-02-25 12:55:47.622 244018 DEBUG oslo_concurrency.processutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:55:47 np0005629333 nova_compute[244014]: 2026-02-25 12:55:47.662 244018 DEBUG nova.policy [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f8eb8dbf8cc448ad946fd23aaae2326e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '25fa1e8dd32c483686f869da2604f2b1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:55:47 np0005629333 nova_compute[244014]: 2026-02-25 12:55:47.713 244018 DEBUG oslo_concurrency.processutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
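
The prlimit wrapper in the command above caps the qemu-img inspection subprocess at 1 GiB of address space and 30 s of CPU time, a guard against malformed images that make `qemu-img info` balloon. oslo.concurrency exposes those limits directly; a sketch of the equivalent call, with the base-image path taken from the log:

    import json

    from oslo_concurrency import processutils

    # Same limits as the logged command line: --as=1073741824 --cpu=30.
    QEMU_IMG_LIMITS = processutils.ProcessLimits(
        cpu_time=30, address_space=1 * 1024 ** 3)

    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=QEMU_IMG_LIMITS)
    info = json.loads(out)
    print(info['format'], info['virtual-size'])
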
Feb 25 07:55:47 np0005629333 nova_compute[244014]: 2026-02-25 12:55:47.715 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:47 np0005629333 nova_compute[244014]: 2026-02-25 12:55:47.716 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:47 np0005629333 nova_compute[244014]: 2026-02-25 12:55:47.717 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
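
The acquire/release pair above serializes image-cache population on the base image's SHA-1, so concurrent spawns of the same image fetch it only once; here the image is already cached, which is why the critical section holds for barely a millisecond. A hypothetical illustration of the lockutils pattern these lines trace (cache_image and fetch are made-up names, not Nova's API):

    from oslo_concurrency import lockutils

    def cache_image(base_hash, fetch):
        # The lock name is the base-image hash: spawns of the same image
        # serialize, while spawns of different images proceed in parallel.
        @lockutils.synchronized(base_hash)
        def fetch_func_sync():
            # Inside the lock: re-check the cache, fetch only on a miss.
            fetch()
        fetch_func_sync()
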
Feb 25 07:55:47 np0005629333 nova_compute[244014]: 2026-02-25 12:55:47.753 244018 DEBUG nova.storage.rbd_utils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 7102d0db-32cc-4a5e-8282-cf5266710872_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:55:47 np0005629333 nova_compute[244014]: 2026-02-25 12:55:47.759 244018 DEBUG oslo_concurrency.processutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 7102d0db-32cc-4a5e-8282-cf5266710872_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:55:48 np0005629333 nova_compute[244014]: 2026-02-25 12:55:48.056 244018 DEBUG oslo_concurrency.processutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 7102d0db-32cc-4a5e-8282-cf5266710872_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.297s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:55:48 np0005629333 nova_compute[244014]: 2026-02-25 12:55:48.140 244018 DEBUG nova.storage.rbd_utils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] resizing rbd image 7102d0db-32cc-4a5e-8282-cf5266710872_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
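
Having imported the cached base into the vms pool, Nova grows the image from the ~21 MB qcow2 payload to the flavor's root disk, 1073741824 bytes for m1.nano's root_gb=1. A sketch of that resize through the python-rbd bindings (assuming they are installed), with the pool, image name, and client id taken from the log:

    import rados
    import rbd

    # Connection parameters mirror the logged CLI flags (--id openstack, --conf ...).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, '7102d0db-32cc-4a5e-8282-cf5266710872_disk') as image:
                image.resize(1 * 1024 ** 3)   # 1073741824 bytes, i.e. root_gb=1
                print(image.size())
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
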
Feb 25 07:55:48 np0005629333 nova_compute[244014]: 2026-02-25 12:55:48.221 244018 DEBUG nova.objects.instance [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'migration_context' on Instance uuid 7102d0db-32cc-4a5e-8282-cf5266710872 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:55:48 np0005629333 nova_compute[244014]: 2026-02-25 12:55:48.252 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:55:48 np0005629333 nova_compute[244014]: 2026-02-25 12:55:48.253 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Ensure instance console log exists: /var/lib/nova/instances/7102d0db-32cc-4a5e-8282-cf5266710872/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:55:48 np0005629333 nova_compute[244014]: 2026-02-25 12:55:48.254 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:48 np0005629333 nova_compute[244014]: 2026-02-25 12:55:48.254 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:48 np0005629333 nova_compute[244014]: 2026-02-25 12:55:48.255 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:55:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:55:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2257: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 271 KiB/s rd, 1019 KiB/s wr, 77 op/s
Feb 25 07:55:49 np0005629333 nova_compute[244014]: 2026-02-25 12:55:49.037 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024134.0357685, 4e93a158-a4bd-41a0-8fe0-52b2a069c409 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:55:49 np0005629333 nova_compute[244014]: 2026-02-25 12:55:49.037 244018 INFO nova.compute.manager [-] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:55:49 np0005629333 nova_compute[244014]: 2026-02-25 12:55:49.059 244018 DEBUG nova.compute.manager [None req-a36ce2df-2508-44ea-882d-36cc01b5b686 - - - - - -] [instance: 4e93a158-a4bd-41a0-8fe0-52b2a069c409] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:55:49 np0005629333 nova_compute[244014]: 2026-02-25 12:55:49.073 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:49 np0005629333 nova_compute[244014]: 2026-02-25 12:55:49.666 244018 DEBUG nova.network.neutron [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Successfully created port: dd7b2b79-55bd-4df0-a79a-3344d8c79c92 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:55:50 np0005629333 nova_compute[244014]: 2026-02-25 12:55:50.424 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:50 np0005629333 nova_compute[244014]: 2026-02-25 12:55:50.494 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2258: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s wr, 0 op/s
Feb 25 07:55:51 np0005629333 nova_compute[244014]: 2026-02-25 12:55:51.255 244018 DEBUG nova.network.neutron [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Successfully updated port: dd7b2b79-55bd-4df0-a79a-3344d8c79c92 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:55:51 np0005629333 nova_compute[244014]: 2026-02-25 12:55:51.277 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "refresh_cache-7102d0db-32cc-4a5e-8282-cf5266710872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:55:51 np0005629333 nova_compute[244014]: 2026-02-25 12:55:51.277 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquired lock "refresh_cache-7102d0db-32cc-4a5e-8282-cf5266710872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:55:51 np0005629333 nova_compute[244014]: 2026-02-25 12:55:51.278 244018 DEBUG nova.network.neutron [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:55:51 np0005629333 nova_compute[244014]: 2026-02-25 12:55:51.359 244018 DEBUG nova.compute.manager [req-e3eb7296-5f22-41ce-b3c1-6caff8a48538 req-5e22e67b-a70e-42d3-85dd-613d43e4a166 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Received event network-changed-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:55:51 np0005629333 nova_compute[244014]: 2026-02-25 12:55:51.360 244018 DEBUG nova.compute.manager [req-e3eb7296-5f22-41ce-b3c1-6caff8a48538 req-5e22e67b-a70e-42d3-85dd-613d43e4a166 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Refreshing instance network info cache due to event network-changed-dd7b2b79-55bd-4df0-a79a-3344d8c79c92. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:55:51 np0005629333 nova_compute[244014]: 2026-02-25 12:55:51.360 244018 DEBUG oslo_concurrency.lockutils [req-e3eb7296-5f22-41ce-b3c1-6caff8a48538 req-5e22e67b-a70e-42d3-85dd-613d43e4a166 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-7102d0db-32cc-4a5e-8282-cf5266710872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:55:51 np0005629333 nova_compute[244014]: 2026-02-25 12:55:51.462 244018 DEBUG nova.network.neutron [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:55:51 np0005629333 nova_compute[244014]: 2026-02-25 12:55:51.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:55:52 np0005629333 nova_compute[244014]: 2026-02-25 12:55:52.596 244018 DEBUG nova.network.neutron [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Updating instance_info_cache with network_info: [{"id": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "address": "fa:16:3e:c1:35:f1", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec1:35f1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd7b2b79-55", "ovs_interfaceid": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
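
One detail worth noting in the cache entry above: the IPv6 address is not arbitrary but the SLAAC/EUI-64 expansion of the port's MAC inside the 2001:db8::/64 prefix (ipv6_address_mode is "slaac"). A toy derivation that reproduces it; the naive string concatenation only works because the prefix ends in "::":

    def eui64_address(prefix, mac):
        octets = [int(b, 16) for b in mac.split(':')]
        octets[0] ^= 0x02                              # flip the universal/local bit
        iid = octets[:3] + [0xff, 0xfe] + octets[3:]   # insert ff:fe mid-MAC
        groups = ['%02x%02x' % (iid[i], iid[i + 1]) for i in range(0, 8, 2)]
        return prefix + ':'.join(groups)

    print(eui64_address('2001:db8::', 'fa:16:3e:c1:35:f1'))
    # 2001:db8::f816:3eff:fec1:35f1 -- the fixed IPv6 address cached above
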
Feb 25 07:55:52 np0005629333 nova_compute[244014]: 2026-02-25 12:55:52.615 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Releasing lock "refresh_cache-7102d0db-32cc-4a5e-8282-cf5266710872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:55:52 np0005629333 nova_compute[244014]: 2026-02-25 12:55:52.616 244018 DEBUG nova.compute.manager [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Instance network_info: |[{"id": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "address": "fa:16:3e:c1:35:f1", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec1:35f1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd7b2b79-55", "ovs_interfaceid": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:55:52 np0005629333 nova_compute[244014]: 2026-02-25 12:55:52.616 244018 DEBUG oslo_concurrency.lockutils [req-e3eb7296-5f22-41ce-b3c1-6caff8a48538 req-5e22e67b-a70e-42d3-85dd-613d43e4a166 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-7102d0db-32cc-4a5e-8282-cf5266710872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:55:52 np0005629333 nova_compute[244014]: 2026-02-25 12:55:52.617 244018 DEBUG nova.network.neutron [req-e3eb7296-5f22-41ce-b3c1-6caff8a48538 req-5e22e67b-a70e-42d3-85dd-613d43e4a166 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Refreshing network info cache for port dd7b2b79-55bd-4df0-a79a-3344d8c79c92 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:55:52 np0005629333 nova_compute[244014]: 2026-02-25 12:55:52.622 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Start _get_guest_xml network_info=[{"id": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "address": "fa:16:3e:c1:35:f1", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec1:35f1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd7b2b79-55", "ovs_interfaceid": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:55:52 np0005629333 nova_compute[244014]: 2026-02-25 12:55:52.628 244018 WARNING nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:55:52 np0005629333 nova_compute[244014]: 2026-02-25 12:55:52.637 244018 DEBUG nova.virt.libvirt.host [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:55:52 np0005629333 nova_compute[244014]: 2026-02-25 12:55:52.640 244018 DEBUG nova.virt.libvirt.host [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:55:52 np0005629333 nova_compute[244014]: 2026-02-25 12:55:52.644 244018 DEBUG nova.virt.libvirt.host [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:55:52 np0005629333 nova_compute[244014]: 2026-02-25 12:55:52.645 244018 DEBUG nova.virt.libvirt.host [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:55:52 np0005629333 nova_compute[244014]: 2026-02-25 12:55:52.646 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:55:52 np0005629333 nova_compute[244014]: 2026-02-25 12:55:52.646 244018 DEBUG nova.virt.hardware [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:55:52 np0005629333 nova_compute[244014]: 2026-02-25 12:55:52.647 244018 DEBUG nova.virt.hardware [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:55:52 np0005629333 nova_compute[244014]: 2026-02-25 12:55:52.647 244018 DEBUG nova.virt.hardware [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:55:52 np0005629333 nova_compute[244014]: 2026-02-25 12:55:52.648 244018 DEBUG nova.virt.hardware [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:55:52 np0005629333 nova_compute[244014]: 2026-02-25 12:55:52.648 244018 DEBUG nova.virt.hardware [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:55:52 np0005629333 nova_compute[244014]: 2026-02-25 12:55:52.649 244018 DEBUG nova.virt.hardware [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:55:52 np0005629333 nova_compute[244014]: 2026-02-25 12:55:52.649 244018 DEBUG nova.virt.hardware [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:55:52 np0005629333 nova_compute[244014]: 2026-02-25 12:55:52.650 244018 DEBUG nova.virt.hardware [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:55:52 np0005629333 nova_compute[244014]: 2026-02-25 12:55:52.650 244018 DEBUG nova.virt.hardware [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:55:52 np0005629333 nova_compute[244014]: 2026-02-25 12:55:52.651 244018 DEBUG nova.virt.hardware [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:55:52 np0005629333 nova_compute[244014]: 2026-02-25 12:55:52.651 244018 DEBUG nova.virt.hardware [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
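
The topology lines above trace a small search: flavor and image set no constraints (0 means unset, so the limits default to 65536), and every factorization of the vCPU count into sockets x cores x threads within those limits is a candidate. For one vCPU only 1:1:1 exists, which is why exactly one topology comes back. A toy re-implementation of that enumeration, not Nova's actual code:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Yield every (sockets, cores, threads) factorization within the limits.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)] -- matches the log
    print(list(possible_topologies(4)))   # six candidates for a 4-vCPU flavor
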
Feb 25 07:55:52 np0005629333 nova_compute[244014]: 2026-02-25 12:55:52.656 244018 DEBUG oslo_concurrency.processutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:55:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2259: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:55:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:55:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/410031636' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.223 244018 DEBUG oslo_concurrency.processutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.254 244018 DEBUG nova.storage.rbd_utils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 7102d0db-32cc-4a5e-8282-cf5266710872_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.259 244018 DEBUG oslo_concurrency.processutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:55:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:55:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:55:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/77170465' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.846 244018 DEBUG oslo_concurrency.processutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.849 244018 DEBUG nova.virt.libvirt.vif [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:55:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1911565712',display_name='tempest-TestGettingAddress-server-1911565712',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1911565712',id=136,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCjAwmzTqRLo6MtceITp6kkRRb7F7L6NpmCwm+uERDDuNYjjK2SbSlgPchdRYIPodHunzOlcpbeAY9Mle2akRMrTndDYB62iVXgr7LtFS3t0EZlmgtz0aV+ezHMzkXsV7A==',key_name='tempest-TestGettingAddress-1069883733',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-tvigm4mr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:55:47Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=7102d0db-32cc-4a5e-8282-cf5266710872,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "address": "fa:16:3e:c1:35:f1", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec1:35f1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd7b2b79-55", "ovs_interfaceid": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.850 244018 DEBUG nova.network.os_vif_util [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "address": "fa:16:3e:c1:35:f1", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec1:35f1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd7b2b79-55", "ovs_interfaceid": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.851 244018 DEBUG nova.network.os_vif_util [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:35:f1,bridge_name='br-int',has_traffic_filtering=True,id=dd7b2b79-55bd-4df0-a79a-3344d8c79c92,network=Network(9f82a56f-a5c3-446e-9b75-7dc69c93c56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd7b2b79-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.853 244018 DEBUG nova.objects.instance [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7102d0db-32cc-4a5e-8282-cf5266710872 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.869 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:55:53 np0005629333 nova_compute[244014]:  <uuid>7102d0db-32cc-4a5e-8282-cf5266710872</uuid>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:  <name>instance-00000088</name>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestGettingAddress-server-1911565712</nova:name>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:55:52</nova:creationTime>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:55:53 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:        <nova:user uuid="f8eb8dbf8cc448ad946fd23aaae2326e">tempest-TestGettingAddress-344063294-project-member</nova:user>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:        <nova:project uuid="25fa1e8dd32c483686f869da2604f2b1">tempest-TestGettingAddress-344063294</nova:project>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:        <nova:port uuid="dd7b2b79-55bd-4df0-a79a-3344d8c79c92">
Feb 25 07:55:53 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fec1:35f1" ipVersion="6"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <entry name="serial">7102d0db-32cc-4a5e-8282-cf5266710872</entry>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <entry name="uuid">7102d0db-32cc-4a5e-8282-cf5266710872</entry>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/7102d0db-32cc-4a5e-8282-cf5266710872_disk">
Feb 25 07:55:53 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:55:53 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/7102d0db-32cc-4a5e-8282-cf5266710872_disk.config">
Feb 25 07:55:53 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:55:53 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:c1:35:f1"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <target dev="tapdd7b2b79-55"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/7102d0db-32cc-4a5e-8282-cf5266710872/console.log" append="off"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:55:53 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:55:53 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:55:53 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:55:53 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
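
The domain XML above goes to libvirt verbatim: both disks point straight at RBD, with the monitor address and cephx secret reference inline, and the NIC is a plain ethernet tap that os-vif plugs into br-int out of band. A quick sanity check that pulls the RBD sources back out of such a document using only the standard library; assume the logged <domain> text has been saved to domain.xml:

    import xml.etree.ElementTree as ET

    domain = ET.fromstring(open('domain.xml').read())
    for disk in domain.findall('./devices/disk'):
        src = disk.find('source')
        if src is not None and src.get('protocol') == 'rbd':
            host = src.find('host')
            print(disk.get('device'), src.get('name'),
                  '%s:%s' % (host.get('name'), host.get('port')))
    # disk vms/7102d0db-32cc-4a5e-8282-cf5266710872_disk 192.168.122.100:6789
    # cdrom vms/7102d0db-32cc-4a5e-8282-cf5266710872_disk.config 192.168.122.100:6789
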
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.870 244018 DEBUG nova.compute.manager [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Preparing to wait for external event network-vif-plugged-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.871 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.871 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.872 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
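[Editor's note] The acquire/"acquired :: waited"/"released :: held" triplet above is the standard oslo.concurrency pattern guarding the per-instance event list. A minimal sketch of the same lock usage, assuming only the oslo.concurrency package:

    from oslo_concurrency import lockutils

    UUID = "7102d0db-32cc-4a5e-8282-cf5266710872"

    # Decorator form, as _create_or_get_event() uses above...
    @lockutils.synchronized(f"{UUID}-events")
    def create_or_get_event():
        pass  # runs with the "<uuid>-events" semaphore held

    # ...or the equivalent context-manager form.
    with lockutils.lock(f"{UUID}-events"):
        pass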
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.873 244018 DEBUG nova.virt.libvirt.vif [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:55:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1911565712',display_name='tempest-TestGettingAddress-server-1911565712',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1911565712',id=136,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCjAwmzTqRLo6MtceITp6kkRRb7F7L6NpmCwm+uERDDuNYjjK2SbSlgPchdRYIPodHunzOlcpbeAY9Mle2akRMrTndDYB62iVXgr7LtFS3t0EZlmgtz0aV+ezHMzkXsV7A==',key_name='tempest-TestGettingAddress-1069883733',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-tvigm4mr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:55:47Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=7102d0db-32cc-4a5e-8282-cf5266710872,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "address": "fa:16:3e:c1:35:f1", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec1:35f1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd7b2b79-55", "ovs_interfaceid": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.874 244018 DEBUG nova.network.os_vif_util [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "address": "fa:16:3e:c1:35:f1", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec1:35f1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd7b2b79-55", "ovs_interfaceid": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.875 244018 DEBUG nova.network.os_vif_util [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:35:f1,bridge_name='br-int',has_traffic_filtering=True,id=dd7b2b79-55bd-4df0-a79a-3344d8c79c92,network=Network(9f82a56f-a5c3-446e-9b75-7dc69c93c56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd7b2b79-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.876 244018 DEBUG os_vif [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:35:f1,bridge_name='br-int',has_traffic_filtering=True,id=dd7b2b79-55bd-4df0-a79a-3344d8c79c92,network=Network(9f82a56f-a5c3-446e-9b75-7dc69c93c56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd7b2b79-55') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
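[Editor's note] os-vif is driven with just two calls: initialize the plugin extensions, then plug the versioned VIF object. A minimal sketch rebuilding the VIFOpenVSwitch logged above (field values copied from the log; actually running it requires root/privsep on the compute host):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin, among others

    my_vif = vif.VIFOpenVSwitch(
        id="dd7b2b79-55bd-4df0-a79a-3344d8c79c92",
        address="fa:16:3e:c1:35:f1",
        bridge_name="br-int",
        vif_name="tapdd7b2b79-55",
        plugin="ovs",
        has_traffic_filtering=True,
        preserve_on_delete=False,
        network=network.Network(id="9f82a56f-a5c3-446e-9b75-7dc69c93c56b"))
    inst = instance_info.InstanceInfo(
        uuid="7102d0db-32cc-4a5e-8282-cf5266710872",
        name="instance-00000088")

    os_vif.plug(my_vif, inst)  # -> "Successfully plugged vif ..." below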
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.877 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.878 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.879 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.883 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.883 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdd7b2b79-55, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.884 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdd7b2b79-55, col_values=(('external_ids', {'iface-id': 'dd7b2b79-55bd-4df0-a79a-3344d8c79c92', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:35:f1', 'vm-uuid': '7102d0db-32cc-4a5e-8282-cf5266710872'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
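[Editor's note] The AddBridgeCommand/AddPortCommand/DbSetCommand transactions above are ovsdbapp's OVSDB IDL API at work; the external_ids written here (iface-id in particular) are what lets ovn-controller match and claim the port moments later. A rough equivalent of the two commands, assuming the default OVSDB socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=5))

    external_ids = {
        "iface-id": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:c1:35:f1",
        "vm-uuid": "7102d0db-32cc-4a5e-8282-cf5266710872",
    }
    # One transaction, two commands, exactly as logged (idx=0 and idx=1).
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tapdd7b2b79-55", may_exist=True))
        txn.add(api.db_set("Interface", "tapdd7b2b79-55",
                           ("external_ids", external_ids)))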
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.886 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:53 np0005629333 NetworkManager[49836]: <info>  [1772024153.8879] manager: (tapdd7b2b79-55): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/597)
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.889 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.894 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.896 244018 INFO os_vif [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:35:f1,bridge_name='br-int',has_traffic_filtering=True,id=dd7b2b79-55bd-4df0-a79a-3344d8c79c92,network=Network(9f82a56f-a5c3-446e-9b75-7dc69c93c56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd7b2b79-55')#033[00m
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.981 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.982 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.982 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] No VIF found with MAC fa:16:3e:c1:35:f1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:55:53 np0005629333 nova_compute[244014]: 2026-02-25 12:55:53.984 244018 INFO nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Using config drive#033[00m
Feb 25 07:55:54 np0005629333 nova_compute[244014]: 2026-02-25 12:55:54.017 244018 DEBUG nova.storage.rbd_utils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 7102d0db-32cc-4a5e-8282-cf5266710872_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:55:54 np0005629333 nova_compute[244014]: 2026-02-25 12:55:54.431 244018 INFO nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Creating config drive at /var/lib/nova/instances/7102d0db-32cc-4a5e-8282-cf5266710872/disk.config#033[00m
Feb 25 07:55:54 np0005629333 nova_compute[244014]: 2026-02-25 12:55:54.436 244018 DEBUG oslo_concurrency.processutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7102d0db-32cc-4a5e-8282-cf5266710872/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_yqx6vlk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:55:54 np0005629333 nova_compute[244014]: 2026-02-25 12:55:54.580 244018 DEBUG oslo_concurrency.processutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7102d0db-32cc-4a5e-8282-cf5266710872/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp_yqx6vlk" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
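[Editor's note] The config drive is a plain ISO 9660 image built from a staged temp directory. A sketch of the same invocation through oslo processutils, with paths copied from the log (/tmp/tmp_yqx6vlk is the transient metadata tree and will not exist outside this run). Note the publisher string is a single argument, even though the flattened command line above makes it look like five:

    from oslo_concurrency import processutils

    processutils.execute(
        "/usr/bin/mkisofs",
        "-o", "/var/lib/nova/instances/7102d0db-32cc-4a5e-8282-cf5266710872/disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
        "-quiet", "-J", "-r", "-V", "config-2",
        "/tmp/tmp_yqx6vlk")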
Feb 25 07:55:54 np0005629333 nova_compute[244014]: 2026-02-25 12:55:54.618 244018 DEBUG nova.storage.rbd_utils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] rbd image 7102d0db-32cc-4a5e-8282-cf5266710872_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:55:54 np0005629333 nova_compute[244014]: 2026-02-25 12:55:54.623 244018 DEBUG oslo_concurrency.processutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7102d0db-32cc-4a5e-8282-cf5266710872/disk.config 7102d0db-32cc-4a5e-8282-cf5266710872_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:55:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2260: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:55:54 np0005629333 nova_compute[244014]: 2026-02-25 12:55:54.757 244018 DEBUG nova.network.neutron [req-e3eb7296-5f22-41ce-b3c1-6caff8a48538 req-5e22e67b-a70e-42d3-85dd-613d43e4a166 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Updated VIF entry in instance network info cache for port dd7b2b79-55bd-4df0-a79a-3344d8c79c92. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:55:54 np0005629333 nova_compute[244014]: 2026-02-25 12:55:54.758 244018 DEBUG nova.network.neutron [req-e3eb7296-5f22-41ce-b3c1-6caff8a48538 req-5e22e67b-a70e-42d3-85dd-613d43e4a166 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Updating instance_info_cache with network_info: [{"id": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "address": "fa:16:3e:c1:35:f1", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec1:35f1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd7b2b79-55", "ovs_interfaceid": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:55:54 np0005629333 nova_compute[244014]: 2026-02-25 12:55:54.813 244018 DEBUG oslo_concurrency.processutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7102d0db-32cc-4a5e-8282-cf5266710872/disk.config 7102d0db-32cc-4a5e-8282-cf5266710872_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:55:54 np0005629333 nova_compute[244014]: 2026-02-25 12:55:54.815 244018 INFO nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Deleting local config drive /var/lib/nova/instances/7102d0db-32cc-4a5e-8282-cf5266710872/disk.config because it was imported into RBD.#033[00m
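[Editor's note] Because this deployment backs ephemeral storage with RBD, the freshly built ISO is pushed into the Ceph "vms" pool and the local copy deleted, as the two records above show. The same step as a processutils sketch, arguments copied verbatim from the log:

    from oslo_concurrency import processutils

    processutils.execute(
        "rbd", "import", "--pool", "vms",
        "/var/lib/nova/instances/7102d0db-32cc-4a5e-8282-cf5266710872/disk.config",
        "7102d0db-32cc-4a5e-8282-cf5266710872_disk.config",
        "--image-format=2",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")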
Feb 25 07:55:54 np0005629333 nova_compute[244014]: 2026-02-25 12:55:54.863 244018 DEBUG oslo_concurrency.lockutils [req-e3eb7296-5f22-41ce-b3c1-6caff8a48538 req-5e22e67b-a70e-42d3-85dd-613d43e4a166 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-7102d0db-32cc-4a5e-8282-cf5266710872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:55:54 np0005629333 kernel: tapdd7b2b79-55: entered promiscuous mode
Feb 25 07:55:54 np0005629333 NetworkManager[49836]: <info>  [1772024154.8762] manager: (tapdd7b2b79-55): new Tun device (/org/freedesktop/NetworkManager/Devices/598)
Feb 25 07:55:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:55:54Z|01433|binding|INFO|Claiming lport dd7b2b79-55bd-4df0-a79a-3344d8c79c92 for this chassis.
Feb 25 07:55:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:55:54Z|01434|binding|INFO|dd7b2b79-55bd-4df0-a79a-3344d8c79c92: Claiming fa:16:3e:c1:35:f1 10.100.0.4 2001:db8::f816:3eff:fec1:35f1
Feb 25 07:55:54 np0005629333 nova_compute[244014]: 2026-02-25 12:55:54.877 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:54 np0005629333 nova_compute[244014]: 2026-02-25 12:55:54.888 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:55:54Z|01435|binding|INFO|Setting lport dd7b2b79-55bd-4df0-a79a-3344d8c79c92 ovn-installed in OVS
Feb 25 07:55:54 np0005629333 nova_compute[244014]: 2026-02-25 12:55:54.891 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:54 np0005629333 systemd-udevd[367563]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:55:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:55:54Z|01436|binding|INFO|Setting lport dd7b2b79-55bd-4df0-a79a-3344d8c79c92 up in Southbound
Feb 25 07:55:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:54.925 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:35:f1 10.100.0.4 2001:db8::f816:3eff:fec1:35f1'], port_security=['fa:16:3e:c1:35:f1 10.100.0.4 2001:db8::f816:3eff:fec1:35f1'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8::f816:3eff:fec1:35f1/64', 'neutron:device_id': '7102d0db-32cc-4a5e-8282-cf5266710872', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aaaa1259-a3db-497e-8efb-1f3c1f8cc088', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7d68418-660c-4b4a-9631-94822d5aa11e, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=dd7b2b79-55bd-4df0-a79a-3344d8c79c92) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:55:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:54.927 157129 INFO neutron.agent.ovn.metadata.agent [-] Port dd7b2b79-55bd-4df0-a79a-3344d8c79c92 in datapath 9f82a56f-a5c3-446e-9b75-7dc69c93c56b bound to our chassis#033[00m
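[Editor's note] The "Matched UPDATE: PortBindingUpdatedEvent" record above is ovsdbapp's event machinery firing: the metadata agent watches Port_Binding rows in the OVN southbound DB and reacts when a port is claimed by its own chassis, which is what triggers the metadata provisioning below. A simplified sketch of such an event class, assuming a recent ovsdbapp (column names as seen in this log); a real agent registers an instance of it with the IDL's notify handler:

    from ovsdbapp.backend.ovs_idl import event

    OUR_CHASSIS = "compute-0.ctlplane.example.com"

    class PortBindingUpdatedEvent(event.RowEvent):
        def __init__(self):
            # (events, table, conditions), as echoed in the log line above
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def match_fn(self, event_, row, old):
            # Fire when chassis goes from unset (old=Port_Binding(chassis=[]))
            # to our own chassis.
            return (not getattr(old, "chassis", None)
                    and row.chassis
                    and row.chassis[0].hostname == OUR_CHASSIS)

        def run(self, event_, row, old):
            print("Port %s bound to our chassis" % row.logical_port)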
Feb 25 07:55:54 np0005629333 NetworkManager[49836]: <info>  [1772024154.9291] device (tapdd7b2b79-55): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:55:54 np0005629333 NetworkManager[49836]: <info>  [1772024154.9303] device (tapdd7b2b79-55): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:55:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:54.931 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f82a56f-a5c3-446e-9b75-7dc69c93c56b#033[00m
Feb 25 07:55:54 np0005629333 systemd-machined[210048]: New machine qemu-168-instance-00000088.
Feb 25 07:55:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:54.948 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ee6bab90-3088-405d-85dd-d010ca959f72]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:54 np0005629333 systemd[1]: Started Virtual Machine qemu-168-instance-00000088.
Feb 25 07:55:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:54.977 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3167656b-afde-4dc4-bbe8-b36e3b90df5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:54.982 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[39656ab0-c161-4cd4-b409-ee06debca756]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:55.012 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[7f09559a-5db8-4892-8e38-c4ec4ff18c7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:55.030 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6a7116da-5e7e-462d-8a4f-c3db5f982a63]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f82a56f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:91:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1656, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 5, 'rx_bytes': 1656, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609107, 'reachable_time': 19694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367579, 'error': None, 'target': 'ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:55:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:55.032 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:55.033 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:55.034 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:55:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:55.047 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0227d22d-8dba-4ceb-b3b6-273b4b021efd]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9f82a56f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609116, 'tstamp': 609116}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367580, 'error': None, 'target': 'ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9f82a56f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609119, 'tstamp': 609119}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367580, 'error': None, 'target': 'ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
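[Editor's note] The two RTM_NEWADDR replies above are pyroute2 netlink messages relayed over privsep: the agent has configured 10.100.0.2/28 and the metadata address 169.254.169.254/32 on tap9f82a56f-a1 inside the ovnmeta- namespace. A sketch of reading those addresses back directly, assuming the pyroute2 package and root privileges:

    from socket import AF_INET
    from pyroute2 import NetNS

    ns = NetNS("ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b")
    try:
        for msg in ns.get_addr(family=AF_INET):
            print(msg.get_attr("IFA_LABEL"), msg.get_attr("IFA_ADDRESS"))
            # -> tap9f82a56f-a1 10.100.0.2
            # -> tap9f82a56f-a1 169.254.169.254
    finally:
        ns.close()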
Feb 25 07:55:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:55.049 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f82a56f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.051 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:55.053 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f82a56f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:55:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:55.053 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:55:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:55.054 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f82a56f-a0, col_values=(('external_ids', {'iface-id': 'd6ceb7cb-a48f-47c3-acb6-58d61268f8c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:55:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:55:55.055 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.319 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024155.3182516, 7102d0db-32cc-4a5e-8282-cf5266710872 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.320 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] VM Started (Lifecycle Event)#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.343 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.348 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024155.3185399, 7102d0db-32cc-4a5e-8282-cf5266710872 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.349 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.372 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.376 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
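[Editor's note] The numeric states in that record decode via nova.compute.power_state; the transient Paused is expected mid-spawn, since the guest is created paused and resumed once setup completes (matching the Resumed event below). For reference:

    # nova.compute.power_state constants (values seen in this trace)
    POWER_STATE = {
        0: "NOSTATE",    # DB power_state before the first sync
        1: "RUNNING",    # reported after the Resumed event
        3: "PAUSED",     # reported here, mid-spawn
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }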
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.388 244018 DEBUG nova.compute.manager [req-8a945c03-f40a-4231-8936-9b122045f010 req-38aec52e-e21f-4c6e-9866-a90507cab2d6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Received event network-vif-plugged-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.389 244018 DEBUG oslo_concurrency.lockutils [req-8a945c03-f40a-4231-8936-9b122045f010 req-38aec52e-e21f-4c6e-9866-a90507cab2d6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.389 244018 DEBUG oslo_concurrency.lockutils [req-8a945c03-f40a-4231-8936-9b122045f010 req-38aec52e-e21f-4c6e-9866-a90507cab2d6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.389 244018 DEBUG oslo_concurrency.lockutils [req-8a945c03-f40a-4231-8936-9b122045f010 req-38aec52e-e21f-4c6e-9866-a90507cab2d6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.390 244018 DEBUG nova.compute.manager [req-8a945c03-f40a-4231-8936-9b122045f010 req-38aec52e-e21f-4c6e-9866-a90507cab2d6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Processing event network-vif-plugged-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.390 244018 DEBUG nova.compute.manager [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.395 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.398 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.398 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024155.3939693, 7102d0db-32cc-4a5e-8282-cf5266710872 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.398 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.402 244018 INFO nova.virt.libvirt.driver [-] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Instance spawned successfully.#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.402 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.428 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.433 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.437 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.437 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.438 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.438 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.438 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.439 244018 DEBUG nova.virt.libvirt.driver [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.481 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.496 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.520 244018 INFO nova.compute.manager [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Took 8.01 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.520 244018 DEBUG nova.compute.manager [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.613 244018 INFO nova.compute.manager [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Took 9.10 seconds to build instance.#033[00m
Feb 25 07:55:55 np0005629333 nova_compute[244014]: 2026-02-25 12:55:55.651 244018 DEBUG oslo_concurrency.lockutils [None req-0510cd42-a915-4ba7-a69e-552af7fa80cf f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:55:56 np0005629333 nova_compute[244014]: 2026-02-25 12:55:56.410 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:56 np0005629333 nova_compute[244014]: 2026-02-25 12:55:56.412 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:56 np0005629333 nova_compute[244014]: 2026-02-25 12:55:56.431 244018 DEBUG nova.compute.manager [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:55:56 np0005629333 nova_compute[244014]: 2026-02-25 12:55:56.525 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:56 np0005629333 nova_compute[244014]: 2026-02-25 12:55:56.526 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:56 np0005629333 nova_compute[244014]: 2026-02-25 12:55:56.534 244018 DEBUG nova.virt.hardware [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:55:56 np0005629333 nova_compute[244014]: 2026-02-25 12:55:56.535 244018 INFO nova.compute.claims [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:55:56 np0005629333 nova_compute[244014]: 2026-02-25 12:55:56.687 244018 DEBUG oslo_concurrency.processutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:55:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2261: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:55:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:55:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2050084465' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:55:57 np0005629333 nova_compute[244014]: 2026-02-25 12:55:57.204 244018 DEBUG oslo_concurrency.processutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:55:57 np0005629333 nova_compute[244014]: 2026-02-25 12:55:57.213 244018 DEBUG nova.compute.provider_tree [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:55:57 np0005629333 nova_compute[244014]: 2026-02-25 12:55:57.236 244018 DEBUG nova.scheduler.client.report [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
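[Editor's note] The inventory dict above is what the resource tracker reports to placement; schedulable capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the values logged:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2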
Feb 25 07:55:57 np0005629333 nova_compute[244014]: 2026-02-25 12:55:57.272 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:55:57 np0005629333 nova_compute[244014]: 2026-02-25 12:55:57.272 244018 DEBUG nova.compute.manager [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:55:57 np0005629333 nova_compute[244014]: 2026-02-25 12:55:57.344 244018 DEBUG nova.compute.manager [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:55:57 np0005629333 nova_compute[244014]: 2026-02-25 12:55:57.344 244018 DEBUG nova.network.neutron [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:55:57 np0005629333 nova_compute[244014]: 2026-02-25 12:55:57.366 244018 INFO nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:55:57 np0005629333 nova_compute[244014]: 2026-02-25 12:55:57.382 244018 DEBUG nova.compute.manager [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:55:57 np0005629333 nova_compute[244014]: 2026-02-25 12:55:57.496 244018 DEBUG nova.compute.manager [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:55:57 np0005629333 nova_compute[244014]: 2026-02-25 12:55:57.497 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:55:57 np0005629333 nova_compute[244014]: 2026-02-25 12:55:57.498 244018 INFO nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Creating image(s)#033[00m
Feb 25 07:55:57 np0005629333 nova_compute[244014]: 2026-02-25 12:55:57.530 244018 DEBUG nova.storage.rbd_utils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image e728a9dc-bb04-4a25-bcad-b787a044bc0b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:55:57 np0005629333 nova_compute[244014]: 2026-02-25 12:55:57.562 244018 DEBUG nova.storage.rbd_utils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image e728a9dc-bb04-4a25-bcad-b787a044bc0b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:55:57 np0005629333 nova_compute[244014]: 2026-02-25 12:55:57.594 244018 DEBUG nova.storage.rbd_utils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image e728a9dc-bb04-4a25-bcad-b787a044bc0b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:55:57 np0005629333 nova_compute[244014]: 2026-02-25 12:55:57.598 244018 DEBUG oslo_concurrency.processutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:55:57 np0005629333 nova_compute[244014]: 2026-02-25 12:55:57.636 244018 DEBUG nova.policy [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:55:57 np0005629333 nova_compute[244014]: 2026-02-25 12:55:57.642 244018 DEBUG nova.compute.manager [req-7af6f9b3-d123-4c9a-b8a0-12a6ce92cd12 req-5b54c028-4dba-4e7e-a34c-ed89e60ef2be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Received event network-vif-plugged-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:55:57 np0005629333 nova_compute[244014]: 2026-02-25 12:55:57.642 244018 DEBUG oslo_concurrency.lockutils [req-7af6f9b3-d123-4c9a-b8a0-12a6ce92cd12 req-5b54c028-4dba-4e7e-a34c-ed89e60ef2be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:57 np0005629333 nova_compute[244014]: 2026-02-25 12:55:57.643 244018 DEBUG oslo_concurrency.lockutils [req-7af6f9b3-d123-4c9a-b8a0-12a6ce92cd12 req-5b54c028-4dba-4e7e-a34c-ed89e60ef2be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:57 np0005629333 nova_compute[244014]: 2026-02-25 12:55:57.643 244018 DEBUG oslo_concurrency.lockutils [req-7af6f9b3-d123-4c9a-b8a0-12a6ce92cd12 req-5b54c028-4dba-4e7e-a34c-ed89e60ef2be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:55:57 np0005629333 nova_compute[244014]: 2026-02-25 12:55:57.643 244018 DEBUG nova.compute.manager [req-7af6f9b3-d123-4c9a-b8a0-12a6ce92cd12 req-5b54c028-4dba-4e7e-a34c-ed89e60ef2be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] No waiting events found dispatching network-vif-plugged-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:55:57 np0005629333 nova_compute[244014]: 2026-02-25 12:55:57.644 244018 WARNING nova.compute.manager [req-7af6f9b3-d123-4c9a-b8a0-12a6ce92cd12 req-5b54c028-4dba-4e7e-a34c-ed89e60ef2be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Received unexpected event network-vif-plugged-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:55:57 np0005629333 nova_compute[244014]: 2026-02-25 12:55:57.665 244018 DEBUG oslo_concurrency.processutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:55:57 np0005629333 nova_compute[244014]: 2026-02-25 12:55:57.666 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:57 np0005629333 nova_compute[244014]: 2026-02-25 12:55:57.667 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:57 np0005629333 nova_compute[244014]: 2026-02-25 12:55:57.668 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:55:57 np0005629333 nova_compute[244014]: 2026-02-25 12:55:57.699 244018 DEBUG nova.storage.rbd_utils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image e728a9dc-bb04-4a25-bcad-b787a044bc0b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:55:57 np0005629333 nova_compute[244014]: 2026-02-25 12:55:57.705 244018 DEBUG oslo_concurrency.processutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 e728a9dc-bb04-4a25-bcad-b787a044bc0b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:55:58 np0005629333 nova_compute[244014]: 2026-02-25 12:55:58.016 244018 DEBUG oslo_concurrency.processutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 e728a9dc-bb04-4a25-bcad-b787a044bc0b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:55:58 np0005629333 nova_compute[244014]: 2026-02-25 12:55:58.110 244018 DEBUG nova.storage.rbd_utils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] resizing rbd image e728a9dc-bb04-4a25-bcad-b787a044bc0b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 25 07:55:58 np0005629333 nova_compute[244014]: 2026-02-25 12:55:58.207 244018 DEBUG nova.objects.instance [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'migration_context' on Instance uuid e728a9dc-bb04-4a25-bcad-b787a044bc0b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:55:58 np0005629333 nova_compute[244014]: 2026-02-25 12:55:58.223 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:55:58 np0005629333 nova_compute[244014]: 2026-02-25 12:55:58.224 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Ensure instance console log exists: /var/lib/nova/instances/e728a9dc-bb04-4a25-bcad-b787a044bc0b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:55:58 np0005629333 nova_compute[244014]: 2026-02-25 12:55:58.224 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:55:58 np0005629333 nova_compute[244014]: 2026-02-25 12:55:58.225 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:55:58 np0005629333 nova_compute[244014]: 2026-02-25 12:55:58.225 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:55:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:55:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:55:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:55:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:55:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:55:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:55:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:55:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:55:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:55:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:55:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:55:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:55:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:55:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2262: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 25 07:55:58 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:55:58 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:55:58 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:55:58 np0005629333 nova_compute[244014]: 2026-02-25 12:55:58.887 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:55:58 np0005629333 podman[367952]: 2026-02-25 12:55:58.91565627 +0000 UTC m=+0.056872325 container create b5684d7e1d356b93bc4a046643bd976060fa2baa273f0ec709c64d21bdfb38c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_colden, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:55:58 np0005629333 systemd[1]: Started libpod-conmon-b5684d7e1d356b93bc4a046643bd976060fa2baa273f0ec709c64d21bdfb38c3.scope.
Feb 25 07:55:58 np0005629333 podman[367952]: 2026-02-25 12:55:58.891491549 +0000 UTC m=+0.032707654 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:55:58 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:55:59 np0005629333 podman[367952]: 2026-02-25 12:55:59.010020202 +0000 UTC m=+0.151236327 container init b5684d7e1d356b93bc4a046643bd976060fa2baa273f0ec709c64d21bdfb38c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_colden, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:55:59 np0005629333 podman[367952]: 2026-02-25 12:55:59.01952125 +0000 UTC m=+0.160737275 container start b5684d7e1d356b93bc4a046643bd976060fa2baa273f0ec709c64d21bdfb38c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_colden, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:55:59 np0005629333 podman[367952]: 2026-02-25 12:55:59.023162573 +0000 UTC m=+0.164378688 container attach b5684d7e1d356b93bc4a046643bd976060fa2baa273f0ec709c64d21bdfb38c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_colden, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:55:59 np0005629333 charming_colden[367968]: 167 167
Feb 25 07:55:59 np0005629333 systemd[1]: libpod-b5684d7e1d356b93bc4a046643bd976060fa2baa273f0ec709c64d21bdfb38c3.scope: Deactivated successfully.
Feb 25 07:55:59 np0005629333 podman[367952]: 2026-02-25 12:55:59.026651011 +0000 UTC m=+0.167867086 container died b5684d7e1d356b93bc4a046643bd976060fa2baa273f0ec709c64d21bdfb38c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_colden, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 07:55:59 np0005629333 nova_compute[244014]: 2026-02-25 12:55:59.055 244018 DEBUG nova.network.neutron [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Successfully created port: bde15f84-edfb-445b-b129-ec33331763f0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:55:59 np0005629333 systemd[1]: var-lib-containers-storage-overlay-de27f618d86a2df5e1713aa4926795056f92ed99ff00a334ae45733434243c24-merged.mount: Deactivated successfully.
Feb 25 07:55:59 np0005629333 podman[367952]: 2026-02-25 12:55:59.081815358 +0000 UTC m=+0.223031423 container remove b5684d7e1d356b93bc4a046643bd976060fa2baa273f0ec709c64d21bdfb38c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_colden, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 07:55:59 np0005629333 systemd[1]: libpod-conmon-b5684d7e1d356b93bc4a046643bd976060fa2baa273f0ec709c64d21bdfb38c3.scope: Deactivated successfully.
Feb 25 07:55:59 np0005629333 podman[367993]: 2026-02-25 12:55:59.279460153 +0000 UTC m=+0.058330506 container create 89d8ed19f826b468d1836306bb633cfac67a49e9f6eddb3691ca6ebd15b29cdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_easley, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 07:55:59 np0005629333 systemd[1]: Started libpod-conmon-89d8ed19f826b468d1836306bb633cfac67a49e9f6eddb3691ca6ebd15b29cdb.scope.
Feb 25 07:55:59 np0005629333 podman[367993]: 2026-02-25 12:55:59.254220261 +0000 UTC m=+0.033090614 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:55:59 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:55:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/233859dc0ff07e762bdd47ed154eedfa4b1c5c2904a418089ea45b442af28ae2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:55:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/233859dc0ff07e762bdd47ed154eedfa4b1c5c2904a418089ea45b442af28ae2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:55:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/233859dc0ff07e762bdd47ed154eedfa4b1c5c2904a418089ea45b442af28ae2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:55:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/233859dc0ff07e762bdd47ed154eedfa4b1c5c2904a418089ea45b442af28ae2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:55:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/233859dc0ff07e762bdd47ed154eedfa4b1c5c2904a418089ea45b442af28ae2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:55:59 np0005629333 podman[367993]: 2026-02-25 12:55:59.386521833 +0000 UTC m=+0.165392176 container init 89d8ed19f826b468d1836306bb633cfac67a49e9f6eddb3691ca6ebd15b29cdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_easley, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 07:55:59 np0005629333 podman[367993]: 2026-02-25 12:55:59.396727961 +0000 UTC m=+0.175598314 container start 89d8ed19f826b468d1836306bb633cfac67a49e9f6eddb3691ca6ebd15b29cdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_easley, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:55:59 np0005629333 podman[367993]: 2026-02-25 12:55:59.400997511 +0000 UTC m=+0.179867904 container attach 89d8ed19f826b468d1836306bb633cfac67a49e9f6eddb3691ca6ebd15b29cdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_easley, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 07:55:59 np0005629333 nova_compute[244014]: 2026-02-25 12:55:59.645 244018 DEBUG nova.compute.manager [req-b9aa0f2b-9c5c-4c42-8125-6573b607894b req-57a128fd-9a56-4ffa-a4c7-a0f7cdd7b9f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Received event network-changed-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:55:59 np0005629333 nova_compute[244014]: 2026-02-25 12:55:59.646 244018 DEBUG nova.compute.manager [req-b9aa0f2b-9c5c-4c42-8125-6573b607894b req-57a128fd-9a56-4ffa-a4c7-a0f7cdd7b9f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Refreshing instance network info cache due to event network-changed-dd7b2b79-55bd-4df0-a79a-3344d8c79c92. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:55:59 np0005629333 nova_compute[244014]: 2026-02-25 12:55:59.647 244018 DEBUG oslo_concurrency.lockutils [req-b9aa0f2b-9c5c-4c42-8125-6573b607894b req-57a128fd-9a56-4ffa-a4c7-a0f7cdd7b9f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-7102d0db-32cc-4a5e-8282-cf5266710872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:55:59 np0005629333 nova_compute[244014]: 2026-02-25 12:55:59.647 244018 DEBUG oslo_concurrency.lockutils [req-b9aa0f2b-9c5c-4c42-8125-6573b607894b req-57a128fd-9a56-4ffa-a4c7-a0f7cdd7b9f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-7102d0db-32cc-4a5e-8282-cf5266710872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:55:59 np0005629333 nova_compute[244014]: 2026-02-25 12:55:59.647 244018 DEBUG nova.network.neutron [req-b9aa0f2b-9c5c-4c42-8125-6573b607894b req-57a128fd-9a56-4ffa-a4c7-a0f7cdd7b9f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Refreshing network info cache for port dd7b2b79-55bd-4df0-a79a-3344d8c79c92 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:55:59 np0005629333 jovial_easley[368009]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:55:59 np0005629333 jovial_easley[368009]: --> All data devices are unavailable
Feb 25 07:55:59 np0005629333 systemd[1]: libpod-89d8ed19f826b468d1836306bb633cfac67a49e9f6eddb3691ca6ebd15b29cdb.scope: Deactivated successfully.
Feb 25 07:55:59 np0005629333 podman[367993]: 2026-02-25 12:55:59.940364286 +0000 UTC m=+0.719234649 container died 89d8ed19f826b468d1836306bb633cfac67a49e9f6eddb3691ca6ebd15b29cdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_easley, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:55:59 np0005629333 systemd[1]: var-lib-containers-storage-overlay-233859dc0ff07e762bdd47ed154eedfa4b1c5c2904a418089ea45b442af28ae2-merged.mount: Deactivated successfully.
Feb 25 07:55:59 np0005629333 podman[367993]: 2026-02-25 12:55:59.991487358 +0000 UTC m=+0.770357681 container remove 89d8ed19f826b468d1836306bb633cfac67a49e9f6eddb3691ca6ebd15b29cdb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_easley, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 07:56:00 np0005629333 systemd[1]: libpod-conmon-89d8ed19f826b468d1836306bb633cfac67a49e9f6eddb3691ca6ebd15b29cdb.scope: Deactivated successfully.
Feb 25 07:56:00 np0005629333 nova_compute[244014]: 2026-02-25 12:56:00.500 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:56:00 np0005629333 podman[368102]: 2026-02-25 12:56:00.526638374 +0000 UTC m=+0.046337628 container create 50eaea7ebe1f56f08c33e0f0bccd042fa9efcc34f797642a9946a6244e60d10d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:56:00 np0005629333 systemd[1]: Started libpod-conmon-50eaea7ebe1f56f08c33e0f0bccd042fa9efcc34f797642a9946a6244e60d10d.scope.
Feb 25 07:56:00 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:56:00 np0005629333 podman[368102]: 2026-02-25 12:56:00.506957949 +0000 UTC m=+0.026657203 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:56:00 np0005629333 podman[368102]: 2026-02-25 12:56:00.611676223 +0000 UTC m=+0.131375467 container init 50eaea7ebe1f56f08c33e0f0bccd042fa9efcc34f797642a9946a6244e60d10d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 07:56:00 np0005629333 podman[368102]: 2026-02-25 12:56:00.619331619 +0000 UTC m=+0.139030873 container start 50eaea7ebe1f56f08c33e0f0bccd042fa9efcc34f797642a9946a6244e60d10d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 07:56:00 np0005629333 podman[368102]: 2026-02-25 12:56:00.623160657 +0000 UTC m=+0.142859901 container attach 50eaea7ebe1f56f08c33e0f0bccd042fa9efcc34f797642a9946a6244e60d10d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:56:00 np0005629333 friendly_vaughan[368118]: 167 167
Feb 25 07:56:00 np0005629333 systemd[1]: libpod-50eaea7ebe1f56f08c33e0f0bccd042fa9efcc34f797642a9946a6244e60d10d.scope: Deactivated successfully.
Feb 25 07:56:00 np0005629333 podman[368102]: 2026-02-25 12:56:00.624488554 +0000 UTC m=+0.144187798 container died 50eaea7ebe1f56f08c33e0f0bccd042fa9efcc34f797642a9946a6244e60d10d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 07:56:00 np0005629333 systemd[1]: var-lib-containers-storage-overlay-4114e1d5785febfe884b89f30ee687f8a8c8881d393b35e3cb67eb3d39ca0fb8-merged.mount: Deactivated successfully.
Feb 25 07:56:00 np0005629333 podman[368102]: 2026-02-25 12:56:00.670876243 +0000 UTC m=+0.190575477 container remove 50eaea7ebe1f56f08c33e0f0bccd042fa9efcc34f797642a9946a6244e60d10d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 07:56:00 np0005629333 systemd[1]: libpod-conmon-50eaea7ebe1f56f08c33e0f0bccd042fa9efcc34f797642a9946a6244e60d10d.scope: Deactivated successfully.
Feb 25 07:56:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2263: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 07:56:00 np0005629333 podman[368141]: 2026-02-25 12:56:00.867033095 +0000 UTC m=+0.066407353 container create ad62ca5ebd7966fc3d2269889750a607919a754c19c5e40ea778f4ea3ca688e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 07:56:00 np0005629333 systemd[1]: Started libpod-conmon-ad62ca5ebd7966fc3d2269889750a607919a754c19c5e40ea778f4ea3ca688e9.scope.
Feb 25 07:56:00 np0005629333 podman[368141]: 2026-02-25 12:56:00.840425525 +0000 UTC m=+0.039799823 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:56:00 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:56:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea6cb00a73bdfb3ea74a190be20e3800fd1f231c48b2d4a86831c8a5e0b89416/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:56:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea6cb00a73bdfb3ea74a190be20e3800fd1f231c48b2d4a86831c8a5e0b89416/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:56:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea6cb00a73bdfb3ea74a190be20e3800fd1f231c48b2d4a86831c8a5e0b89416/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:56:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea6cb00a73bdfb3ea74a190be20e3800fd1f231c48b2d4a86831c8a5e0b89416/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:56:01 np0005629333 podman[368141]: 2026-02-25 12:56:00.999888073 +0000 UTC m=+0.199262371 container init ad62ca5ebd7966fc3d2269889750a607919a754c19c5e40ea778f4ea3ca688e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_fermi, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:56:01 np0005629333 podman[368141]: 2026-02-25 12:56:01.01079137 +0000 UTC m=+0.210165628 container start ad62ca5ebd7966fc3d2269889750a607919a754c19c5e40ea778f4ea3ca688e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_fermi, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 07:56:01 np0005629333 podman[368141]: 2026-02-25 12:56:01.015255136 +0000 UTC m=+0.214629444 container attach ad62ca5ebd7966fc3d2269889750a607919a754c19c5e40ea778f4ea3ca688e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_fermi, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 07:56:01 np0005629333 nova_compute[244014]: 2026-02-25 12:56:01.188 244018 DEBUG nova.network.neutron [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Successfully updated port: bde15f84-edfb-445b-b129-ec33331763f0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:56:01 np0005629333 nova_compute[244014]: 2026-02-25 12:56:01.208 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:56:01 np0005629333 nova_compute[244014]: 2026-02-25 12:56:01.209 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:56:01 np0005629333 nova_compute[244014]: 2026-02-25 12:56:01.209 244018 DEBUG nova.network.neutron [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:56:01 np0005629333 happy_fermi[368157]: {
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:    "0": [
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:        {
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "devices": [
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "/dev/loop3"
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            ],
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "lv_name": "ceph_lv0",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "lv_size": "21470642176",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "name": "ceph_lv0",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "tags": {
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.cluster_name": "ceph",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.crush_device_class": "",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.encrypted": "0",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.objectstore": "bluestore",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.osd_id": "0",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.type": "block",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.vdo": "0",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.with_tpm": "0"
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            },
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "type": "block",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "vg_name": "ceph_vg0"
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:        }
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:    ],
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:    "1": [
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:        {
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "devices": [
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "/dev/loop4"
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            ],
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "lv_name": "ceph_lv1",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "lv_size": "21470642176",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "name": "ceph_lv1",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "tags": {
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.cluster_name": "ceph",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.crush_device_class": "",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.encrypted": "0",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.objectstore": "bluestore",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.osd_id": "1",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.type": "block",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.vdo": "0",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.with_tpm": "0"
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            },
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "type": "block",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "vg_name": "ceph_vg1"
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:        }
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:    ],
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:    "2": [
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:        {
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "devices": [
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "/dev/loop5"
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            ],
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "lv_name": "ceph_lv2",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "lv_size": "21470642176",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "name": "ceph_lv2",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "tags": {
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.cluster_name": "ceph",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.crush_device_class": "",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.encrypted": "0",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.objectstore": "bluestore",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.osd_id": "2",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.type": "block",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.vdo": "0",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:                "ceph.with_tpm": "0"
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            },
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "type": "block",
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:            "vg_name": "ceph_vg2"
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:        }
Feb 25 07:56:01 np0005629333 happy_fermi[368157]:    ]
Feb 25 07:56:01 np0005629333 happy_fermi[368157]: }
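[editor note] The JSON block the happy_fermi container prints above is the per-OSD LVM inventory cephadm gathers; its shape matches `ceph-volume lvm list --format json` output: a dict keyed by OSD id, each value a list of LV records whose tags carry the BlueStore metadata. A minimal sketch of consuming such a dump, assuming the hypothetical helper name osd_device_map:

    import json

    def osd_device_map(raw: str) -> dict:
        """Map each OSD id to its backing devices and block LV, from the dump above."""
        listing = json.loads(raw)
        result = {}
        for osd_id, lvs in listing.items():
            for lv in lvs:
                # Only the "block" LV carries the data device of a BlueStore OSD.
                if lv.get("type") == "block":
                    result[osd_id] = {
                        "devices": lv.get("devices", []),
                        "lv_path": lv.get("lv_path"),
                        "osd_fsid": lv.get("tags", {}).get("ceph.osd_fsid"),
                    }
        return result

For OSD "2" above this yields devices ["/dev/loop5"] and lv_path "/dev/ceph_vg2/ceph_lv2".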
Feb 25 07:56:01 np0005629333 systemd[1]: libpod-ad62ca5ebd7966fc3d2269889750a607919a754c19c5e40ea778f4ea3ca688e9.scope: Deactivated successfully.
Feb 25 07:56:01 np0005629333 podman[368141]: 2026-02-25 12:56:01.32877411 +0000 UTC m=+0.528148368 container died ad62ca5ebd7966fc3d2269889750a607919a754c19c5e40ea778f4ea3ca688e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:56:01 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ea6cb00a73bdfb3ea74a190be20e3800fd1f231c48b2d4a86831c8a5e0b89416-merged.mount: Deactivated successfully.
Feb 25 07:56:01 np0005629333 podman[368141]: 2026-02-25 12:56:01.383097943 +0000 UTC m=+0.582472191 container remove ad62ca5ebd7966fc3d2269889750a607919a754c19c5e40ea778f4ea3ca688e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 07:56:01 np0005629333 systemd[1]: libpod-conmon-ad62ca5ebd7966fc3d2269889750a607919a754c19c5e40ea778f4ea3ca688e9.scope: Deactivated successfully.
Feb 25 07:56:01 np0005629333 nova_compute[244014]: 2026-02-25 12:56:01.421 244018 DEBUG nova.network.neutron [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:56:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:56:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:56:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:56:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:56:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:56:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:56:01 np0005629333 nova_compute[244014]: 2026-02-25 12:56:01.740 244018 DEBUG nova.compute.manager [req-65f29bb7-0720-49c8-86f9-36881a01ed73 req-07331946-fe05-41fe-a6a3-aaf00ecb2edd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-changed-bde15f84-edfb-445b-b129-ec33331763f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:56:01 np0005629333 nova_compute[244014]: 2026-02-25 12:56:01.740 244018 DEBUG nova.compute.manager [req-65f29bb7-0720-49c8-86f9-36881a01ed73 req-07331946-fe05-41fe-a6a3-aaf00ecb2edd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Refreshing instance network info cache due to event network-changed-bde15f84-edfb-445b-b129-ec33331763f0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:56:01 np0005629333 nova_compute[244014]: 2026-02-25 12:56:01.740 244018 DEBUG oslo_concurrency.lockutils [req-65f29bb7-0720-49c8-86f9-36881a01ed73 req-07331946-fe05-41fe-a6a3-aaf00ecb2edd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:56:01 np0005629333 podman[368238]: 2026-02-25 12:56:01.850745064 +0000 UTC m=+0.056379781 container create 17cd20c7fa6784bf3b975686212debce07b85a21d6ffdbd92ef4e8b40b0daa43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_sanderson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 07:56:01 np0005629333 systemd[1]: Started libpod-conmon-17cd20c7fa6784bf3b975686212debce07b85a21d6ffdbd92ef4e8b40b0daa43.scope.
Feb 25 07:56:01 np0005629333 podman[368238]: 2026-02-25 12:56:01.831467731 +0000 UTC m=+0.037102518 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:56:01 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:56:01 np0005629333 podman[368238]: 2026-02-25 12:56:01.945140187 +0000 UTC m=+0.150774904 container init 17cd20c7fa6784bf3b975686212debce07b85a21d6ffdbd92ef4e8b40b0daa43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:56:01 np0005629333 podman[368238]: 2026-02-25 12:56:01.952573997 +0000 UTC m=+0.158208704 container start 17cd20c7fa6784bf3b975686212debce07b85a21d6ffdbd92ef4e8b40b0daa43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_sanderson, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 07:56:01 np0005629333 amazing_sanderson[368254]: 167 167
Feb 25 07:56:01 np0005629333 podman[368238]: 2026-02-25 12:56:01.956677863 +0000 UTC m=+0.162312570 container attach 17cd20c7fa6784bf3b975686212debce07b85a21d6ffdbd92ef4e8b40b0daa43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_sanderson, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 07:56:01 np0005629333 systemd[1]: libpod-17cd20c7fa6784bf3b975686212debce07b85a21d6ffdbd92ef4e8b40b0daa43.scope: Deactivated successfully.
Feb 25 07:56:01 np0005629333 podman[368238]: 2026-02-25 12:56:01.957385763 +0000 UTC m=+0.163020480 container died 17cd20c7fa6784bf3b975686212debce07b85a21d6ffdbd92ef4e8b40b0daa43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_sanderson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 07:56:01 np0005629333 systemd[1]: var-lib-containers-storage-overlay-0a2bf63b41ad4728c354ceae0ea302dbce268672c0f80f34fd0278db1c2c5273-merged.mount: Deactivated successfully.
Feb 25 07:56:02 np0005629333 podman[368238]: 2026-02-25 12:56:02.004005728 +0000 UTC m=+0.209640435 container remove 17cd20c7fa6784bf3b975686212debce07b85a21d6ffdbd92ef4e8b40b0daa43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 07:56:02 np0005629333 systemd[1]: libpod-conmon-17cd20c7fa6784bf3b975686212debce07b85a21d6ffdbd92ef4e8b40b0daa43.scope: Deactivated successfully.
Feb 25 07:56:02 np0005629333 podman[368280]: 2026-02-25 12:56:02.190366675 +0000 UTC m=+0.044185098 container create 650b57c3489d000922f7b0317fd960557287a4963694422339ea5cd91130048d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:56:02 np0005629333 systemd[1]: Started libpod-conmon-650b57c3489d000922f7b0317fd960557287a4963694422339ea5cd91130048d.scope.
Feb 25 07:56:02 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:56:02 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d0327b07df9ec2f1a28bd5aba09a4e82f7b10fd0b7e120c4848bd5c4f7b37d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:56:02 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d0327b07df9ec2f1a28bd5aba09a4e82f7b10fd0b7e120c4848bd5c4f7b37d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:56:02 np0005629333 podman[368280]: 2026-02-25 12:56:02.173032486 +0000 UTC m=+0.026850929 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:56:02 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d0327b07df9ec2f1a28bd5aba09a4e82f7b10fd0b7e120c4848bd5c4f7b37d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:56:02 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d0327b07df9ec2f1a28bd5aba09a4e82f7b10fd0b7e120c4848bd5c4f7b37d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:56:02 np0005629333 podman[368280]: 2026-02-25 12:56:02.30220592 +0000 UTC m=+0.156024423 container init 650b57c3489d000922f7b0317fd960557287a4963694422339ea5cd91130048d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 07:56:02 np0005629333 podman[368280]: 2026-02-25 12:56:02.310769791 +0000 UTC m=+0.164588244 container start 650b57c3489d000922f7b0317fd960557287a4963694422339ea5cd91130048d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 07:56:02 np0005629333 podman[368280]: 2026-02-25 12:56:02.31390565 +0000 UTC m=+0.167724143 container attach 650b57c3489d000922f7b0317fd960557287a4963694422339ea5cd91130048d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goldwasser, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 07:56:02 np0005629333 nova_compute[244014]: 2026-02-25 12:56:02.353 244018 DEBUG nova.network.neutron [req-b9aa0f2b-9c5c-4c42-8125-6573b607894b req-57a128fd-9a56-4ffa-a4c7-a0f7cdd7b9f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Updated VIF entry in instance network info cache for port dd7b2b79-55bd-4df0-a79a-3344d8c79c92. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:56:02 np0005629333 nova_compute[244014]: 2026-02-25 12:56:02.355 244018 DEBUG nova.network.neutron [req-b9aa0f2b-9c5c-4c42-8125-6573b607894b req-57a128fd-9a56-4ffa-a4c7-a0f7cdd7b9f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Updating instance_info_cache with network_info: [{"id": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "address": "fa:16:3e:c1:35:f1", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec1:35f1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd7b2b79-55", "ovs_interfaceid": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:56:02 np0005629333 nova_compute[244014]: 2026-02-25 12:56:02.379 244018 DEBUG oslo_concurrency.lockutils [req-b9aa0f2b-9c5c-4c42-8125-6573b607894b req-57a128fd-9a56-4ffa-a4c7-a0f7cdd7b9f6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-7102d0db-32cc-4a5e-8282-cf5266710872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:56:02 np0005629333 nova_compute[244014]: 2026-02-25 12:56:02.541 244018 DEBUG nova.network.neutron [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Updating instance_info_cache with network_info: [{"id": "bde15f84-edfb-445b-b129-ec33331763f0", "address": "fa:16:3e:99:9e:1f", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbde15f84-ed", "ovs_interfaceid": "bde15f84-edfb-445b-b129-ec33331763f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:56:02 np0005629333 nova_compute[244014]: 2026-02-25 12:56:02.557 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:56:02 np0005629333 nova_compute[244014]: 2026-02-25 12:56:02.558 244018 DEBUG nova.compute.manager [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Instance network_info: |[{"id": "bde15f84-edfb-445b-b129-ec33331763f0", "address": "fa:16:3e:99:9e:1f", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbde15f84-ed", "ovs_interfaceid": "bde15f84-edfb-445b-b129-ec33331763f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:56:02 np0005629333 nova_compute[244014]: 2026-02-25 12:56:02.558 244018 DEBUG oslo_concurrency.lockutils [req-65f29bb7-0720-49c8-86f9-36881a01ed73 req-07331946-fe05-41fe-a6a3-aaf00ecb2edd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:56:02 np0005629333 nova_compute[244014]: 2026-02-25 12:56:02.558 244018 DEBUG nova.network.neutron [req-65f29bb7-0720-49c8-86f9-36881a01ed73 req-07331946-fe05-41fe-a6a3-aaf00ecb2edd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Refreshing network info cache for port bde15f84-edfb-445b-b129-ec33331763f0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:56:02 np0005629333 nova_compute[244014]: 2026-02-25 12:56:02.563 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Start _get_guest_xml network_info=[{"id": "bde15f84-edfb-445b-b129-ec33331763f0", "address": "fa:16:3e:99:9e:1f", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbde15f84-ed", "ovs_interfaceid": "bde15f84-edfb-445b-b129-ec33331763f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:56:02 np0005629333 nova_compute[244014]: 2026-02-25 12:56:02.570 244018 WARNING nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:56:02 np0005629333 nova_compute[244014]: 2026-02-25 12:56:02.579 244018 DEBUG nova.virt.libvirt.host [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:56:02 np0005629333 nova_compute[244014]: 2026-02-25 12:56:02.581 244018 DEBUG nova.virt.libvirt.host [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:56:02 np0005629333 nova_compute[244014]: 2026-02-25 12:56:02.585 244018 DEBUG nova.virt.libvirt.host [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:56:02 np0005629333 nova_compute[244014]: 2026-02-25 12:56:02.585 244018 DEBUG nova.virt.libvirt.host [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
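[editor note] The two probes logged above (cgroups v1, then v2) decide whether nova can apply CPU quotas on this host: v1 lacks a cpu controller, v2 has one. A hedged sketch of the v2 check, not nova's actual implementation, only the filesystem fact it relies on (a cgroup-v2 host lists its enabled controllers in a single file):

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root: str = "/sys/fs/cgroup") -> bool:
        controllers = Path(root, "cgroup.controllers")
        try:
            # File contains a space-separated list, e.g. "cpuset cpu io memory ..."
            return "cpu" in controllers.read_text().split()
        except FileNotFoundError:
            return False  # file absent on pure cgroup-v1 hosts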
Feb 25 07:56:02 np0005629333 nova_compute[244014]: 2026-02-25 12:56:02.586 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:56:02 np0005629333 nova_compute[244014]: 2026-02-25 12:56:02.586 244018 DEBUG nova.virt.hardware [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:56:02 np0005629333 nova_compute[244014]: 2026-02-25 12:56:02.586 244018 DEBUG nova.virt.hardware [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:56:02 np0005629333 nova_compute[244014]: 2026-02-25 12:56:02.586 244018 DEBUG nova.virt.hardware [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:56:02 np0005629333 nova_compute[244014]: 2026-02-25 12:56:02.587 244018 DEBUG nova.virt.hardware [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:56:02 np0005629333 nova_compute[244014]: 2026-02-25 12:56:02.587 244018 DEBUG nova.virt.hardware [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:56:02 np0005629333 nova_compute[244014]: 2026-02-25 12:56:02.587 244018 DEBUG nova.virt.hardware [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:56:02 np0005629333 nova_compute[244014]: 2026-02-25 12:56:02.587 244018 DEBUG nova.virt.hardware [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:56:02 np0005629333 nova_compute[244014]: 2026-02-25 12:56:02.587 244018 DEBUG nova.virt.hardware [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:56:02 np0005629333 nova_compute[244014]: 2026-02-25 12:56:02.588 244018 DEBUG nova.virt.hardware [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:56:02 np0005629333 nova_compute[244014]: 2026-02-25 12:56:02.588 244018 DEBUG nova.virt.hardware [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:56:02 np0005629333 nova_compute[244014]: 2026-02-25 12:56:02.588 244018 DEBUG nova.virt.hardware [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
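[editor note] The topology lines above collapse to a single candidate: with no flavor or image limits and 1 vCPU, only sockets=1, cores=1, threads=1 satisfies the product constraint. A toy enumeration under that rule (an illustration, not nova's exact loop):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Keep every (sockets, cores, threads) triple whose product equals the vCPU count.
        out = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        out.append((s, c, t))
        return out

    assert possible_topologies(1) == [(1, 1, 1)]  # matches the single topology logged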
Feb 25 07:56:02 np0005629333 nova_compute[244014]: 2026-02-25 12:56:02.591 244018 DEBUG oslo_concurrency.processutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:56:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2264: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Feb 25 07:56:02 np0005629333 lvm[368396]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:56:02 np0005629333 lvm[368396]: VG ceph_vg1 finished
Feb 25 07:56:02 np0005629333 lvm[368395]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:56:02 np0005629333 lvm[368395]: VG ceph_vg0 finished
Feb 25 07:56:02 np0005629333 lvm[368398]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:56:02 np0005629333 lvm[368398]: VG ceph_vg2 finished
Feb 25 07:56:03 np0005629333 dazzling_goldwasser[368297]: {}
Feb 25 07:56:03 np0005629333 systemd[1]: libpod-650b57c3489d000922f7b0317fd960557287a4963694422339ea5cd91130048d.scope: Deactivated successfully.
Feb 25 07:56:03 np0005629333 podman[368280]: 2026-02-25 12:56:03.092579255 +0000 UTC m=+0.946397678 container died 650b57c3489d000922f7b0317fd960557287a4963694422339ea5cd91130048d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goldwasser, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Feb 25 07:56:03 np0005629333 systemd[1]: libpod-650b57c3489d000922f7b0317fd960557287a4963694422339ea5cd91130048d.scope: Consumed 1.123s CPU time.
Feb 25 07:56:03 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5d0327b07df9ec2f1a28bd5aba09a4e82f7b10fd0b7e120c4848bd5c4f7b37d2-merged.mount: Deactivated successfully.
Feb 25 07:56:03 np0005629333 podman[368280]: 2026-02-25 12:56:03.138625304 +0000 UTC m=+0.992443767 container remove 650b57c3489d000922f7b0317fd960557287a4963694422339ea5cd91130048d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goldwasser, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 07:56:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:56:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2259164080' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:56:03 np0005629333 systemd[1]: libpod-conmon-650b57c3489d000922f7b0317fd960557287a4963694422339ea5cd91130048d.scope: Deactivated successfully.
Feb 25 07:56:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:56:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:56:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.193 244018 DEBUG oslo_concurrency.processutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:56:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.227 244018 DEBUG nova.storage.rbd_utils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image e728a9dc-bb04-4a25-bcad-b787a044bc0b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.235 244018 DEBUG oslo_concurrency.processutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:56:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:56:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:56:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/671392504' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.759 244018 DEBUG oslo_concurrency.processutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
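[editor note] The two `ceph mon dump` round trips above (0.602s and 0.525s) are nova resolving monitor addresses before rendering the RBD disk XML. The command line is verbatim from the log; a sketch of reproducing it by hand, where the 30-second timeout is an added assumption:

    import json
    import subprocess

    cmd = ["ceph", "mon", "dump", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, text=True, timeout=30, check=True)
    mon_map = json.loads(out.stdout)          # JSON monmap on stdout
    print([m["name"] for m in mon_map.get("mons", [])])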
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.761 244018 DEBUG nova.virt.libvirt.vif [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:55:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-563089963',display_name='tempest-TestNetworkBasicOps-server-563089963',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-563089963',id=137,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDFJ0F94NwULIDtBe1hWjwHL1rXe9bp7hZF1/RCkpzZ1NYU+13CFJoQDcWC4Kzxk0oxcoRc7zBPHT8toFws+wsWPaxnQhFXZM/VRwZJjXXVApOjgca9Ve/Do6WWCKSzVsA==',key_name='tempest-TestNetworkBasicOps-1065518207',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-nqssdsat',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:55:57Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=e728a9dc-bb04-4a25-bcad-b787a044bc0b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bde15f84-edfb-445b-b129-ec33331763f0", "address": "fa:16:3e:99:9e:1f", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbde15f84-ed", "ovs_interfaceid": "bde15f84-edfb-445b-b129-ec33331763f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.762 244018 DEBUG nova.network.os_vif_util [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "bde15f84-edfb-445b-b129-ec33331763f0", "address": "fa:16:3e:99:9e:1f", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbde15f84-ed", "ovs_interfaceid": "bde15f84-edfb-445b-b129-ec33331763f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.763 244018 DEBUG nova.network.os_vif_util [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:9e:1f,bridge_name='br-int',has_traffic_filtering=True,id=bde15f84-edfb-445b-b129-ec33331763f0,network=Network(74cacb3c-0135-4e0b-9776-478b5f7a3349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbde15f84-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.764 244018 DEBUG nova.objects.instance [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_devices' on Instance uuid e728a9dc-bb04-4a25-bcad-b787a044bc0b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.778 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:56:03 np0005629333 nova_compute[244014]:  <uuid>e728a9dc-bb04-4a25-bcad-b787a044bc0b</uuid>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:  <name>instance-00000089</name>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestNetworkBasicOps-server-563089963</nova:name>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:56:02</nova:creationTime>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:56:03 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:        <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:        <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:        <nova:port uuid="bde15f84-edfb-445b-b129-ec33331763f0">
Feb 25 07:56:03 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <entry name="serial">e728a9dc-bb04-4a25-bcad-b787a044bc0b</entry>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <entry name="uuid">e728a9dc-bb04-4a25-bcad-b787a044bc0b</entry>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/e728a9dc-bb04-4a25-bcad-b787a044bc0b_disk">
Feb 25 07:56:03 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:56:03 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/e728a9dc-bb04-4a25-bcad-b787a044bc0b_disk.config">
Feb 25 07:56:03 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:56:03 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:99:9e:1f"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <target dev="tapbde15f84-ed"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/e728a9dc-bb04-4a25-bcad-b787a044bc0b/console.log" append="off"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:56:03 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:56:03 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:56:03 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:56:03 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
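
The XML dump above is the complete guest definition that the libvirt driver renders in _get_guest_xml before creating the domain. A minimal sketch of the define-and-boot step using the libvirt-python bindings (illustrative only, not Nova's code; the XML file path is hypothetical):

    import libvirt

    def spawn_from_xml(xml_path: str) -> None:
        """Define a persistent domain from XML and boot it."""
        conn = libvirt.open("qemu:///system")    # system libvirtd, as Nova uses
        try:
            with open(xml_path) as f:
                dom = conn.defineXML(f.read())   # cf. 'virsh define'
            dom.create()                         # cf. 'virsh start'
            print(dom.name(), "active:", dom.isActive() == 1)
        finally:
            conn.close()
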
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.779 244018 DEBUG nova.compute.manager [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Preparing to wait for external event network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.779 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.779 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.780 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
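
The prepare_for_instance_event calls above register a waiter for network-vif-plugged *before* the VIF is plugged, so the Neutron callback cannot race past the waiter. A hedged sketch of that prepare/pop pattern (the real implementation is nova.compute.manager.InstanceEvents; this only shows the shape):

    import threading

    class InstanceEvents:
        """Shape of Nova's prepare/pop event bookkeeping (illustrative)."""

        def __init__(self):
            self._lock = threading.Lock()          # cf. the "-events" lock above
            self._events = {}                      # (instance_uuid, event) -> Event

        def prepare(self, instance_uuid, event_name):
            with self._lock:
                key = (instance_uuid, event_name)
                return self._events.setdefault(key, threading.Event())

        def pop(self, instance_uuid, event_name):
            with self._lock:
                evt = self._events.pop((instance_uuid, event_name), None)
            if evt is not None:
                evt.set()                          # wakes the spawning thread

    # spawner: evt = events.prepare(uuid, "network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0")
    #          ... plug VIF, define domain ... then evt.wait(timeout=300)
    # receiver (external_instance_event): events.pop(uuid, same event name)
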
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.780 244018 DEBUG nova.virt.libvirt.vif [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:55:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-563089963',display_name='tempest-TestNetworkBasicOps-server-563089963',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-563089963',id=137,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDFJ0F94NwULIDtBe1hWjwHL1rXe9bp7hZF1/RCkpzZ1NYU+13CFJoQDcWC4Kzxk0oxcoRc7zBPHT8toFws+wsWPaxnQhFXZM/VRwZJjXXVApOjgca9Ve/Do6WWCKSzVsA==',key_name='tempest-TestNetworkBasicOps-1065518207',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-nqssdsat',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:55:57Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=e728a9dc-bb04-4a25-bcad-b787a044bc0b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bde15f84-edfb-445b-b129-ec33331763f0", "address": "fa:16:3e:99:9e:1f", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbde15f84-ed", "ovs_interfaceid": "bde15f84-edfb-445b-b129-ec33331763f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.780 244018 DEBUG nova.network.os_vif_util [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "bde15f84-edfb-445b-b129-ec33331763f0", "address": "fa:16:3e:99:9e:1f", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbde15f84-ed", "ovs_interfaceid": "bde15f84-edfb-445b-b129-ec33331763f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.781 244018 DEBUG nova.network.os_vif_util [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:9e:1f,bridge_name='br-int',has_traffic_filtering=True,id=bde15f84-edfb-445b-b129-ec33331763f0,network=Network(74cacb3c-0135-4e0b-9776-478b5f7a3349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbde15f84-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.781 244018 DEBUG os_vif [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:9e:1f,bridge_name='br-int',has_traffic_filtering=True,id=bde15f84-edfb-445b-b129-ec33331763f0,network=Network(74cacb3c-0135-4e0b-9776-478b5f7a3349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbde15f84-ed') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.782 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.782 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.782 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.791 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.791 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbde15f84-ed, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.791 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbde15f84-ed, col_values=(('external_ids', {'iface-id': 'bde15f84-edfb-445b-b129-ec33331763f0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:99:9e:1f', 'vm-uuid': 'e728a9dc-bb04-4a25-bcad-b787a044bc0b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.793 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:03 np0005629333 NetworkManager[49836]: <info>  [1772024163.7946] manager: (tapbde15f84-ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/599)
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.794 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.799 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.800 244018 INFO os_vif [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:9e:1f,bridge_name='br-int',has_traffic_filtering=True,id=bde15f84-edfb-445b-b129-ec33331763f0,network=Network(74cacb3c-0135-4e0b-9776-478b5f7a3349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbde15f84-ed')#033[00m
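
The plug that just succeeded is implemented as the ovsdbapp transactions logged above (AddBridgeCommand, then AddPortCommand plus DbSetCommand on the Interface row). A sketch of the same sequence through ovsdbapp's public API, assuming a local ovsdb-server on the default unix socket:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = "unix:/run/openvswitch/db.sock"   # assumption: default socket path

    idl = connection.OvsdbIdl.from_server(OVSDB, "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # idempotent, mirroring may_exist=True in the logged commands
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tapbde15f84-ed", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tapbde15f84-ed",
            ("external_ids", {
                "iface-id": "bde15f84-edfb-445b-b129-ec33331763f0",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:99:9e:1f",
                "vm-uuid": "e728a9dc-bb04-4a25-bcad-b787a044bc0b"})))
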
Feb 25 07:56:03 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:56:03 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.846 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.846 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.847 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:99:9e:1f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.847 244018 INFO nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Using config drive#033[00m
Feb 25 07:56:03 np0005629333 nova_compute[244014]: 2026-02-25 12:56:03.871 244018 DEBUG nova.storage.rbd_utils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image e728a9dc-bb04-4a25-bcad-b787a044bc0b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:56:04 np0005629333 nova_compute[244014]: 2026-02-25 12:56:04.285 244018 INFO nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Creating config drive at /var/lib/nova/instances/e728a9dc-bb04-4a25-bcad-b787a044bc0b/disk.config#033[00m
Feb 25 07:56:04 np0005629333 nova_compute[244014]: 2026-02-25 12:56:04.293 244018 DEBUG oslo_concurrency.processutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e728a9dc-bb04-4a25-bcad-b787a044bc0b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3kiflx1d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:56:04 np0005629333 nova_compute[244014]: 2026-02-25 12:56:04.437 244018 DEBUG oslo_concurrency.processutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e728a9dc-bb04-4a25-bcad-b787a044bc0b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3kiflx1d" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:56:04 np0005629333 nova_compute[244014]: 2026-02-25 12:56:04.479 244018 DEBUG nova.storage.rbd_utils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image e728a9dc-bb04-4a25-bcad-b787a044bc0b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:56:04 np0005629333 nova_compute[244014]: 2026-02-25 12:56:04.486 244018 DEBUG oslo_concurrency.processutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e728a9dc-bb04-4a25-bcad-b787a044bc0b/disk.config e728a9dc-bb04-4a25-bcad-b787a044bc0b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:56:04 np0005629333 nova_compute[244014]: 2026-02-25 12:56:04.525 244018 DEBUG nova.network.neutron [req-65f29bb7-0720-49c8-86f9-36881a01ed73 req-07331946-fe05-41fe-a6a3-aaf00ecb2edd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Updated VIF entry in instance network info cache for port bde15f84-edfb-445b-b129-ec33331763f0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:56:04 np0005629333 nova_compute[244014]: 2026-02-25 12:56:04.527 244018 DEBUG nova.network.neutron [req-65f29bb7-0720-49c8-86f9-36881a01ed73 req-07331946-fe05-41fe-a6a3-aaf00ecb2edd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Updating instance_info_cache with network_info: [{"id": "bde15f84-edfb-445b-b129-ec33331763f0", "address": "fa:16:3e:99:9e:1f", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbde15f84-ed", "ovs_interfaceid": "bde15f84-edfb-445b-b129-ec33331763f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:56:04 np0005629333 nova_compute[244014]: 2026-02-25 12:56:04.547 244018 DEBUG oslo_concurrency.lockutils [req-65f29bb7-0720-49c8-86f9-36881a01ed73 req-07331946-fe05-41fe-a6a3-aaf00ecb2edd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:56:04 np0005629333 nova_compute[244014]: 2026-02-25 12:56:04.661 244018 DEBUG oslo_concurrency.processutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e728a9dc-bb04-4a25-bcad-b787a044bc0b/disk.config e728a9dc-bb04-4a25-bcad-b787a044bc0b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:56:04 np0005629333 nova_compute[244014]: 2026-02-25 12:56:04.662 244018 INFO nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Deleting local config drive /var/lib/nova/instances/e728a9dc-bb04-4a25-bcad-b787a044bc0b/disk.config because it was imported into RBD.#033[00m
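
The three steps above form the whole config-drive path: build the ISO locally with mkisofs, import it into the Ceph vms pool, then remove the local file. A condensed sketch of that flow (arguments taken from the log entries; the real code is spread across nova.virt.libvirt and nova.storage.rbd_utils):

    import os
    import subprocess

    def config_drive_to_rbd(src_dir, iso_path, image_name, pool="vms",
                            ceph_id="openstack", conf="/etc/ceph/ceph.conf"):
        """Build a config-2 ISO, import it into RBD, drop the local copy."""
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", iso_path, "-ldots", "-allow-lowercase",
             "-allow-multidot", "-l", "-quiet", "-J", "-r", "-V", "config-2",
             src_dir],
            check=True)
        subprocess.run(
            ["rbd", "import", "--pool", pool, iso_path, image_name,
             "--image-format=2", "--id", ceph_id, "--conf", conf],
            check=True)
        os.unlink(iso_path)   # the "Deleting local config drive" step above
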
Feb 25 07:56:04 np0005629333 kernel: tapbde15f84-ed: entered promiscuous mode
Feb 25 07:56:04 np0005629333 NetworkManager[49836]: <info>  [1772024164.7115] manager: (tapbde15f84-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/600)
Feb 25 07:56:04 np0005629333 nova_compute[244014]: 2026-02-25 12:56:04.714 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:04 np0005629333 ovn_controller[147040]: 2026-02-25T12:56:04Z|01437|binding|INFO|Claiming lport bde15f84-edfb-445b-b129-ec33331763f0 for this chassis.
Feb 25 07:56:04 np0005629333 ovn_controller[147040]: 2026-02-25T12:56:04Z|01438|binding|INFO|bde15f84-edfb-445b-b129-ec33331763f0: Claiming fa:16:3e:99:9e:1f 10.100.0.11
Feb 25 07:56:04 np0005629333 systemd-udevd[368393]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:56:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2265: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 07:56:04 np0005629333 nova_compute[244014]: 2026-02-25 12:56:04.733 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:04 np0005629333 ovn_controller[147040]: 2026-02-25T12:56:04Z|01439|binding|INFO|Setting lport bde15f84-edfb-445b-b129-ec33331763f0 ovn-installed in OVS
Feb 25 07:56:04 np0005629333 NetworkManager[49836]: <info>  [1772024164.7398] device (tapbde15f84-ed): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:56:04 np0005629333 nova_compute[244014]: 2026-02-25 12:56:04.739 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:04 np0005629333 NetworkManager[49836]: <info>  [1772024164.7406] device (tapbde15f84-ed): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:56:04 np0005629333 nova_compute[244014]: 2026-02-25 12:56:04.741 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:04 np0005629333 nova_compute[244014]: 2026-02-25 12:56:04.747 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:04 np0005629333 systemd-machined[210048]: New machine qemu-169-instance-00000089.
Feb 25 07:56:04 np0005629333 ovn_controller[147040]: 2026-02-25T12:56:04Z|01440|binding|INFO|Setting lport bde15f84-edfb-445b-b129-ec33331763f0 up in Southbound
Feb 25 07:56:04 np0005629333 systemd[1]: Started Virtual Machine qemu-169-instance-00000089.
Feb 25 07:56:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.775 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:9e:1f 10.100.0.11'], port_security=['fa:16:3e:99:9e:1f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e728a9dc-bb04-4a25-bcad-b787a044bc0b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1d8168d2-f5f8-4f41-af80-56661b6a1e4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=394a85b3-bdd5-4c73-8ada-c7f3f99bacc6, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=bde15f84-edfb-445b-b129-ec33331763f0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:56:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.777 157129 INFO neutron.agent.ovn.metadata.agent [-] Port bde15f84-edfb-445b-b129-ec33331763f0 in datapath 74cacb3c-0135-4e0b-9776-478b5f7a3349 bound to our chassis#033[00m
Feb 25 07:56:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.779 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74cacb3c-0135-4e0b-9776-478b5f7a3349#033[00m
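
The "Matched UPDATE" entry above is ovsdbapp's event framework waking the metadata agent when the Port_Binding row gains a chassis. A rough sketch of such an event class (the real one is neutron.agent.ovn.metadata.agent's PortBindingUpdatedEvent; this assumes an ovsdbapp version exposing the match_fn hook, and provision_datapath stands in for the agent's provisioning entry point):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """React when a Port_Binding row becomes bound to a chassis."""

        def __init__(self, agent):
            # watch UPDATEs on Port_Binding, no static column conditions
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)
            self.agent = agent
            self.event_name = "PortBindingUpdatedEvent"

        def match_fn(self, event, row, old):
            # 'old' carries only changed columns; chassis [] -> [chassis]
            # means the port was just bound (cf. old=Port_Binding(chassis=[]))
            return hasattr(old, "chassis") and bool(row.chassis)

        def run(self, event, row, old):
            # hypothetical entry point: provision the metadata proxy for
            # the datapath (network) this port belongs to
            self.agent.provision_datapath(row.datapath)
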
Feb 25 07:56:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.789 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9bfecc4e-65ab-44ae-b3f0-50aa91617c90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.790 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap74cacb3c-01 in ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
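
Neutron performs the veth creation above through its privsep daemon (the ip_lib replies that follow), which drives pyroute2 under the hood. A condensed sketch done directly with pyroute2, assuming the ovnmeta namespace already exists under /var/run/netns:

    from pyroute2 import IPRoute

    NS = "ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349"

    ipr = IPRoute()
    # create the pair: tap74cacb3c-00 stays in the root namespace (plugged
    # into br-int further down), tap74cacb3c-01 moves into the namespace
    ipr.link("add", ifname="tap74cacb3c-00", kind="veth", peer="tap74cacb3c-01")
    peer = ipr.link_lookup(ifname="tap74cacb3c-01")[0]
    ipr.link("set", index=peer, net_ns_fd=NS)   # namespace must already exist
    host = ipr.link_lookup(ifname="tap74cacb3c-00")[0]
    ipr.link("set", index=host, state="up")
    ipr.close()
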
Feb 25 07:56:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.793 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap74cacb3c-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:56:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.793 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[34a83d7d-bdce-4b4a-9057-e0feb8686942]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.794 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[224810bc-deed-4ec0-b4a0-cb59ac1f8535]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.807 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[2a31d9a2-5c99-45d9-8226-a351cfb5f610]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.831 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dc999532-59e8-438f-ad75-7418a27ba054]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.870 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[61c7e0ea-0ecc-4589-97c5-f49a4f03f7c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.876 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dedd3e63-f6b0-47a6-9e2f-c8a894efac7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:04 np0005629333 NetworkManager[49836]: <info>  [1772024164.8780] manager: (tap74cacb3c-00): new Veth device (/org/freedesktop/NetworkManager/Devices/601)
Feb 25 07:56:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.913 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d8459c6c-0d8a-48bd-8e81-6d483f10a8fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.918 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[449a6dd0-454c-48c9-b0bd-bdfdd57937d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:04 np0005629333 NetworkManager[49836]: <info>  [1772024164.9453] device (tap74cacb3c-00): carrier: link connected
Feb 25 07:56:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.957 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[83af35b7-2bc6-4373-84bc-83114ace39b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.972 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[086531c9-07ce-4690-8fe6-adaa99b4b627]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74cacb3c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:f1:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613435, 'reachable_time': 33349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368583, 'error': None, 'target': 'ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:04 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:04.991 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e038546b-4b47-46d7-8ffc-ba6991799041]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1f:f194'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 613435, 'tstamp': 613435}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368584, 'error': None, 'target': 'ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:05.007 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7a9abb27-b272-4f5b-af54-274a27918556]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74cacb3c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:f1:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613435, 'reachable_time': 33349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 368585, 'error': None, 'target': 'ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:05.041 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d23ecf72-7995-45a8-b770-f4f553c90807]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:05.105 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[467d45f4-1387-4b03-8668-66d4b4a650fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:05.107 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74cacb3c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:05.107 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:05.108 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74cacb3c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.110 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:05 np0005629333 NetworkManager[49836]: <info>  [1772024165.1112] manager: (tap74cacb3c-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/602)
Feb 25 07:56:05 np0005629333 kernel: tap74cacb3c-00: entered promiscuous mode
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:05.113 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74cacb3c-00, col_values=(('external_ids', {'iface-id': 'b1eb3633-3950-4cbb-8a36-6968fb223904'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.114 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:05 np0005629333 ovn_controller[147040]: 2026-02-25T12:56:05Z|01441|binding|INFO|Releasing lport b1eb3633-3950-4cbb-8a36-6968fb223904 from this chassis (sb_readonly=0)
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:05.116 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/74cacb3c-0135-4e0b-9776-478b5f7a3349.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/74cacb3c-0135-4e0b-9776-478b5f7a3349.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:05.117 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f10f70e4-f0bb-43da-9016-5228888c2b25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:05.118 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-74cacb3c-0135-4e0b-9776-478b5f7a3349
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/74cacb3c-0135-4e0b-9776-478b5f7a3349.pid.haproxy
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 74cacb3c-0135-4e0b-9776-478b5f7a3349
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:56:05 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:05.119 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'env', 'PROCESS_TAG=haproxy-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/74cacb3c-0135-4e0b-9776-478b5f7a3349.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
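
The rootwrap command above amounts to launching haproxy inside the ovnmeta network namespace with a PROCESS_TAG in its environment. A hypothetical wrapper showing the same invocation (requires root; Neutron goes through sudo/neutron-rootwrap instead):

    import subprocess

    def spawn_in_netns(netns, cmd, **env):
        """Run cmd inside a named network namespace with extra env vars set."""
        wrapper = ["ip", "netns", "exec", netns, "env"]
        wrapper += [f"{k}={v}" for k, v in env.items()]
        return subprocess.Popen(wrapper + list(cmd))

    spawn_in_netns(
        "ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349",
        ["haproxy", "-f",
         "/var/lib/neutron/ovn-metadata-proxy/"
         "74cacb3c-0135-4e0b-9776-478b5f7a3349.conf"],
        PROCESS_TAG="haproxy-74cacb3c-0135-4e0b-9776-478b5f7a3349",
    )
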
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.121 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.385 244018 DEBUG nova.compute.manager [req-236e7422-b27f-4bcd-bf79-1e124ccd5fdf req-ae0ad119-343c-4eec-85ec-4e374b41d73f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.386 244018 DEBUG oslo_concurrency.lockutils [req-236e7422-b27f-4bcd-bf79-1e124ccd5fdf req-ae0ad119-343c-4eec-85ec-4e374b41d73f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.386 244018 DEBUG oslo_concurrency.lockutils [req-236e7422-b27f-4bcd-bf79-1e124ccd5fdf req-ae0ad119-343c-4eec-85ec-4e374b41d73f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.387 244018 DEBUG oslo_concurrency.lockutils [req-236e7422-b27f-4bcd-bf79-1e124ccd5fdf req-ae0ad119-343c-4eec-85ec-4e374b41d73f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.387 244018 DEBUG nova.compute.manager [req-236e7422-b27f-4bcd-bf79-1e124ccd5fdf req-ae0ad119-343c-4eec-85ec-4e374b41d73f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Processing event network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:56:05 np0005629333 podman[368655]: 2026-02-25 12:56:05.485592118 +0000 UTC m=+0.061047733 container create 0f4ca28b4b7bc70c479451b9c034e9216bba7e832734fdc111c714b31e0214ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.500 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.510 244018 DEBUG nova.compute.manager [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.511 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024165.5100322, e728a9dc-bb04-4a25-bcad-b787a044bc0b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.512 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] VM Started (Lifecycle Event)#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.518 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.523 244018 INFO nova.virt.libvirt.driver [-] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Instance spawned successfully.#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.523 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:56:05 np0005629333 systemd[1]: Started libpod-conmon-0f4ca28b4b7bc70c479451b9c034e9216bba7e832734fdc111c714b31e0214ab.scope.
Feb 25 07:56:05 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:56:05 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e045d31172de33fbbfdc9cfbf5c4682002fbe9f3cdd63cf75b833a745b1cae3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:56:05 np0005629333 podman[368655]: 2026-02-25 12:56:05.457378432 +0000 UTC m=+0.032834047 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.559 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:56:05 np0005629333 podman[368655]: 2026-02-25 12:56:05.561432897 +0000 UTC m=+0.136888522 container init 0f4ca28b4b7bc70c479451b9c034e9216bba7e832734fdc111c714b31e0214ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:56:05 np0005629333 podman[368655]: 2026-02-25 12:56:05.566168711 +0000 UTC m=+0.141624296 container start 0f4ca28b4b7bc70c479451b9c034e9216bba7e832734fdc111c714b31e0214ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.569 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
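
The comparison above (current DB power_state: 0 vs VM power_state: 1) uses Nova's power-state codes, which the driver derives from libvirt's domain state. An illustrative mapping using the enum values from nova.compute.power_state; Nova's authoritative table lives in its libvirt driver and may differ in corner cases:

    import libvirt

    # nova.compute.power_state codes
    NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7

    LIBVIRT_TO_NOVA = {
        libvirt.VIR_DOMAIN_NOSTATE: NOSTATE,
        libvirt.VIR_DOMAIN_RUNNING: RUNNING,
        libvirt.VIR_DOMAIN_BLOCKED: RUNNING,
        libvirt.VIR_DOMAIN_PAUSED: PAUSED,
        libvirt.VIR_DOMAIN_SHUTDOWN: SHUTDOWN,
        libvirt.VIR_DOMAIN_SHUTOFF: SHUTDOWN,
        libvirt.VIR_DOMAIN_CRASHED: CRASHED,
        libvirt.VIR_DOMAIN_PMSUSPENDED: SUSPENDED,
    }

    def get_power_state(dom: libvirt.virDomain) -> int:
        state, _reason = dom.state()  # first element is the VIR_DOMAIN_* code
        return LIBVIRT_TO_NOVA.get(state, NOSTATE)
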
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.576 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.577 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.578 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.578 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.579 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.579 244018 DEBUG nova.virt.libvirt.driver [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
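
The six "Found default for ..." lines above are the libvirt driver persisting its chosen bus/model defaults into the instance's system_metadata, so the device layout stays stable across later reboots and attach operations even if the image's defaults change (the same mechanism is visible further down, where an Instance dump carries image_hw_* entries in its system_metadata). Collected for reference:

    # Defaults registered for instance e728a9dc-bb04-4a25-bcad-b787a044bc0b,
    # taken verbatim from the "Found default for ..." lines above.
    registered_defaults = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }
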
Feb 25 07:56:05 np0005629333 neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349[368671]: [NOTICE]   (368675) : New worker (368677) forked
Feb 25 07:56:05 np0005629333 neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349[368671]: [NOTICE]   (368675) : Loading success.
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.601 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.601 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024165.5184069, e728a9dc-bb04-4a25-bcad-b787a044bc0b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.602 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.645 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.652 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024165.5190613, e728a9dc-bb04-4a25-bcad-b787a044bc0b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.653 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.676 244018 INFO nova.compute.manager [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Took 8.18 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.676 244018 DEBUG nova.compute.manager [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.677 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.684 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.711 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.737 244018 INFO nova.compute.manager [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Took 9.25 seconds to build instance.#033[00m
Feb 25 07:56:05 np0005629333 nova_compute[244014]: 2026-02-25 12:56:05.759 244018 DEBUG oslo_concurrency.lockutils [None req-97b69c0f-dcf1-4552-8ea9-6a21ef38f664 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.346s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
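
The Lock acquired/"released" bookkeeping above comes from oslo.concurrency's lockutils, which nova uses to serialize the whole build per instance UUID (here held 9.346s). A minimal sketch of the same pattern in the equivalent context-manager form; the function body is a placeholder, not nova's code:

    from oslo_concurrency import lockutils

    def do_build_and_run_instance():
        pass  # placeholder for the work serialized per instance UUID

    # lockutils emits the "acquired ... waited" / "released ... held"
    # durations seen in the journal lines above.
    with lockutils.lock("e728a9dc-bb04-4a25-bcad-b787a044bc0b"):
        do_build_and_run_instance()
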
Feb 25 07:56:05 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Feb 25 07:56:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2266: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 07:56:07 np0005629333 nova_compute[244014]: 2026-02-25 12:56:07.464 244018 DEBUG nova.compute.manager [req-23306cf1-c6ab-48c2-a4f6-2e61fcd0dc9d req-8fac0538-daa2-40f0-a8a1-0e81b4342da4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:56:07 np0005629333 nova_compute[244014]: 2026-02-25 12:56:07.464 244018 DEBUG oslo_concurrency.lockutils [req-23306cf1-c6ab-48c2-a4f6-2e61fcd0dc9d req-8fac0538-daa2-40f0-a8a1-0e81b4342da4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:56:07 np0005629333 nova_compute[244014]: 2026-02-25 12:56:07.465 244018 DEBUG oslo_concurrency.lockutils [req-23306cf1-c6ab-48c2-a4f6-2e61fcd0dc9d req-8fac0538-daa2-40f0-a8a1-0e81b4342da4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:56:07 np0005629333 nova_compute[244014]: 2026-02-25 12:56:07.465 244018 DEBUG oslo_concurrency.lockutils [req-23306cf1-c6ab-48c2-a4f6-2e61fcd0dc9d req-8fac0538-daa2-40f0-a8a1-0e81b4342da4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:56:07 np0005629333 nova_compute[244014]: 2026-02-25 12:56:07.465 244018 DEBUG nova.compute.manager [req-23306cf1-c6ab-48c2-a4f6-2e61fcd0dc9d req-8fac0538-daa2-40f0-a8a1-0e81b4342da4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] No waiting events found dispatching network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:56:07 np0005629333 nova_compute[244014]: 2026-02-25 12:56:07.465 244018 WARNING nova.compute.manager [req-23306cf1-c6ab-48c2-a4f6-2e61fcd0dc9d req-8fac0538-daa2-40f0-a8a1-0e81b4342da4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received unexpected event network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:56:07 np0005629333 ovn_controller[147040]: 2026-02-25T12:56:07Z|00178|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c1:35:f1 10.100.0.4
Feb 25 07:56:07 np0005629333 ovn_controller[147040]: 2026-02-25T12:56:07Z|00179|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c1:35:f1 10.100.0.4
Feb 25 07:56:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:56:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2267: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 238 op/s
Feb 25 07:56:08 np0005629333 nova_compute[244014]: 2026-02-25 12:56:08.795 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:09 np0005629333 nova_compute[244014]: 2026-02-25 12:56:09.708 244018 DEBUG nova.compute.manager [req-d8b9b6af-1164-417e-9070-7d11d1a56f81 req-d85abec6-be97-41cb-bd41-56556152ed7c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-changed-bde15f84-edfb-445b-b129-ec33331763f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:56:09 np0005629333 nova_compute[244014]: 2026-02-25 12:56:09.708 244018 DEBUG nova.compute.manager [req-d8b9b6af-1164-417e-9070-7d11d1a56f81 req-d85abec6-be97-41cb-bd41-56556152ed7c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Refreshing instance network info cache due to event network-changed-bde15f84-edfb-445b-b129-ec33331763f0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:56:09 np0005629333 nova_compute[244014]: 2026-02-25 12:56:09.709 244018 DEBUG oslo_concurrency.lockutils [req-d8b9b6af-1164-417e-9070-7d11d1a56f81 req-d85abec6-be97-41cb-bd41-56556152ed7c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:56:09 np0005629333 nova_compute[244014]: 2026-02-25 12:56:09.709 244018 DEBUG oslo_concurrency.lockutils [req-d8b9b6af-1164-417e-9070-7d11d1a56f81 req-d85abec6-be97-41cb-bd41-56556152ed7c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:56:09 np0005629333 nova_compute[244014]: 2026-02-25 12:56:09.710 244018 DEBUG nova.network.neutron [req-d8b9b6af-1164-417e-9070-7d11d1a56f81 req-d85abec6-be97-41cb-bd41-56556152ed7c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Refreshing network info cache for port bde15f84-edfb-445b-b129-ec33331763f0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:56:10 np0005629333 nova_compute[244014]: 2026-02-25 12:56:10.168 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:10.168 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:56:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:10.170 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 25 07:56:10 np0005629333 nova_compute[244014]: 2026-02-25 12:56:10.503 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:10 np0005629333 podman[368686]: 2026-02-25 12:56:10.72451754 +0000 UTC m=+0.058206593 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Feb 25 07:56:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2268: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Feb 25 07:56:10 np0005629333 podman[368687]: 2026-02-25 12:56:10.752643163 +0000 UTC m=+0.086939193 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:56:11 np0005629333 nova_compute[244014]: 2026-02-25 12:56:11.134 244018 DEBUG nova.network.neutron [req-d8b9b6af-1164-417e-9070-7d11d1a56f81 req-d85abec6-be97-41cb-bd41-56556152ed7c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Updated VIF entry in instance network info cache for port bde15f84-edfb-445b-b129-ec33331763f0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:56:11 np0005629333 nova_compute[244014]: 2026-02-25 12:56:11.135 244018 DEBUG nova.network.neutron [req-d8b9b6af-1164-417e-9070-7d11d1a56f81 req-d85abec6-be97-41cb-bd41-56556152ed7c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Updating instance_info_cache with network_info: [{"id": "bde15f84-edfb-445b-b129-ec33331763f0", "address": "fa:16:3e:99:9e:1f", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbde15f84-ed", "ovs_interfaceid": "bde15f84-edfb-445b-b129-ec33331763f0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
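
The instance_info_cache payload in the line above is plain JSON. A minimal extraction sketch, assuming blob holds that logged list:

    import json

    # blob: JSON text of the logged network_info list (assumed).
    # For the entry above this prints:
    #   bde15f84-edfb-445b-b129-ec33331763f0 10.100.0.11 ['192.168.122.198']
    for vif in json.loads(blob):
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floating = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], floating)
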
Feb 25 07:56:11 np0005629333 nova_compute[244014]: 2026-02-25 12:56:11.171 244018 DEBUG oslo_concurrency.lockutils [req-d8b9b6af-1164-417e-9070-7d11d1a56f81 req-d85abec6-be97-41cb-bd41-56556152ed7c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:56:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2269: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Feb 25 07:56:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:56:13 np0005629333 nova_compute[244014]: 2026-02-25 12:56:13.797 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2270: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Feb 25 07:56:15 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:15.173 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:56:15 np0005629333 nova_compute[244014]: 2026-02-25 12:56:15.505 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:56:16Z|00180|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:99:9e:1f 10.100.0.11
Feb 25 07:56:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:56:16Z|00181|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:99:9e:1f 10.100.0.11
Feb 25 07:56:16 np0005629333 nova_compute[244014]: 2026-02-25 12:56:16.240 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "859fd309-32ea-4025-8312-ddecfa0d6a7f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:56:16 np0005629333 nova_compute[244014]: 2026-02-25 12:56:16.241 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:56:16 np0005629333 nova_compute[244014]: 2026-02-25 12:56:16.302 244018 DEBUG nova.compute.manager [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:56:16 np0005629333 nova_compute[244014]: 2026-02-25 12:56:16.554 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:56:16 np0005629333 nova_compute[244014]: 2026-02-25 12:56:16.555 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:56:16 np0005629333 nova_compute[244014]: 2026-02-25 12:56:16.565 244018 DEBUG nova.virt.hardware [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:56:16 np0005629333 nova_compute[244014]: 2026-02-25 12:56:16.565 244018 INFO nova.compute.claims [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:56:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2271: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Feb 25 07:56:16 np0005629333 nova_compute[244014]: 2026-02-25 12:56:16.794 244018 DEBUG oslo_concurrency.processutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
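
Nova issues this capacity probe through oslo.concurrency's processutils; the same call with plain subprocess for reference, assuming the usual stats section of ceph df --format=json output:

    import json
    import subprocess

    # Same probe as the "Running cmd (subprocess)" line above; it
    # returns (rc 0, 0.610s) further down in this log.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    )
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])
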
Feb 25 07:56:16 np0005629333 nova_compute[244014]: 2026-02-25 12:56:16.839 244018 DEBUG nova.compute.manager [req-a711629e-f384-4ecd-b653-47863cd91253 req-56e64340-4e51-48a7-88a9-30a0ba242d8a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Received event network-changed-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:56:16 np0005629333 nova_compute[244014]: 2026-02-25 12:56:16.840 244018 DEBUG nova.compute.manager [req-a711629e-f384-4ecd-b653-47863cd91253 req-56e64340-4e51-48a7-88a9-30a0ba242d8a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Refreshing instance network info cache due to event network-changed-dd7b2b79-55bd-4df0-a79a-3344d8c79c92. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:56:16 np0005629333 nova_compute[244014]: 2026-02-25 12:56:16.840 244018 DEBUG oslo_concurrency.lockutils [req-a711629e-f384-4ecd-b653-47863cd91253 req-56e64340-4e51-48a7-88a9-30a0ba242d8a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-7102d0db-32cc-4a5e-8282-cf5266710872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:56:16 np0005629333 nova_compute[244014]: 2026-02-25 12:56:16.841 244018 DEBUG oslo_concurrency.lockutils [req-a711629e-f384-4ecd-b653-47863cd91253 req-56e64340-4e51-48a7-88a9-30a0ba242d8a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-7102d0db-32cc-4a5e-8282-cf5266710872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:56:16 np0005629333 nova_compute[244014]: 2026-02-25 12:56:16.841 244018 DEBUG nova.network.neutron [req-a711629e-f384-4ecd-b653-47863cd91253 req-56e64340-4e51-48a7-88a9-30a0ba242d8a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Refreshing network info cache for port dd7b2b79-55bd-4df0-a79a-3344d8c79c92 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:56:16 np0005629333 nova_compute[244014]: 2026-02-25 12:56:16.953 244018 DEBUG oslo_concurrency.lockutils [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "7102d0db-32cc-4a5e-8282-cf5266710872" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:56:16 np0005629333 nova_compute[244014]: 2026-02-25 12:56:16.953 244018 DEBUG oslo_concurrency.lockutils [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:56:16 np0005629333 nova_compute[244014]: 2026-02-25 12:56:16.954 244018 DEBUG oslo_concurrency.lockutils [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:56:16 np0005629333 nova_compute[244014]: 2026-02-25 12:56:16.954 244018 DEBUG oslo_concurrency.lockutils [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:56:16 np0005629333 nova_compute[244014]: 2026-02-25 12:56:16.955 244018 DEBUG oslo_concurrency.lockutils [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:56:16 np0005629333 nova_compute[244014]: 2026-02-25 12:56:16.957 244018 INFO nova.compute.manager [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Terminating instance#033[00m
Feb 25 07:56:16 np0005629333 nova_compute[244014]: 2026-02-25 12:56:16.959 244018 DEBUG nova.compute.manager [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:56:17 np0005629333 kernel: tapdd7b2b79-55 (unregistering): left promiscuous mode
Feb 25 07:56:17 np0005629333 NetworkManager[49836]: <info>  [1772024177.0959] device (tapdd7b2b79-55): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:56:17 np0005629333 ovn_controller[147040]: 2026-02-25T12:56:17Z|01442|binding|INFO|Releasing lport dd7b2b79-55bd-4df0-a79a-3344d8c79c92 from this chassis (sb_readonly=0)
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.105 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:17 np0005629333 ovn_controller[147040]: 2026-02-25T12:56:17Z|01443|binding|INFO|Setting lport dd7b2b79-55bd-4df0-a79a-3344d8c79c92 down in Southbound
Feb 25 07:56:17 np0005629333 ovn_controller[147040]: 2026-02-25T12:56:17Z|01444|binding|INFO|Removing iface tapdd7b2b79-55 ovn-installed in OVS
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.112 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:17.124 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:35:f1 10.100.0.4 2001:db8::f816:3eff:fec1:35f1'], port_security=['fa:16:3e:c1:35:f1 10.100.0.4 2001:db8::f816:3eff:fec1:35f1'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8::f816:3eff:fec1:35f1/64', 'neutron:device_id': '7102d0db-32cc-4a5e-8282-cf5266710872', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aaaa1259-a3db-497e-8efb-1f3c1f8cc088', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7d68418-660c-4b4a-9631-94822d5aa11e, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=dd7b2b79-55bd-4df0-a79a-3344d8c79c92) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:56:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:17.126 157129 INFO neutron.agent.ovn.metadata.agent [-] Port dd7b2b79-55bd-4df0-a79a-3344d8c79c92 in datapath 9f82a56f-a5c3-446e-9b75-7dc69c93c56b unbound from our chassis#033[00m
Feb 25 07:56:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:17.129 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f82a56f-a5c3-446e-9b75-7dc69c93c56b#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.136 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:17.155 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9add1fe0-1e66-4c23-921c-5a9d82411ce2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:17 np0005629333 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000088.scope: Deactivated successfully.
Feb 25 07:56:17 np0005629333 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000088.scope: Consumed 12.339s CPU time.
Feb 25 07:56:17 np0005629333 systemd-machined[210048]: Machine qemu-168-instance-00000088 terminated.
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.202 244018 INFO nova.virt.libvirt.driver [-] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Instance destroyed successfully.#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.202 244018 DEBUG nova.objects.instance [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid 7102d0db-32cc-4a5e-8282-cf5266710872 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:56:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:17.204 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ca698616-f2b2-403d-b10d-25396718701c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:17.212 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a47e99cf-aefc-4f3e-8cc9-9286db0ec482]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.218 244018 DEBUG nova.virt.libvirt.vif [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:55:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1911565712',display_name='tempest-TestGettingAddress-server-1911565712',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1911565712',id=136,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCjAwmzTqRLo6MtceITp6kkRRb7F7L6NpmCwm+uERDDuNYjjK2SbSlgPchdRYIPodHunzOlcpbeAY9Mle2akRMrTndDYB62iVXgr7LtFS3t0EZlmgtz0aV+ezHMzkXsV7A==',key_name='tempest-TestGettingAddress-1069883733',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:55:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-tvigm4mr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:55:55Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=7102d0db-32cc-4a5e-8282-cf5266710872,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "address": "fa:16:3e:c1:35:f1", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec1:35f1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd7b2b79-55", "ovs_interfaceid": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.219 244018 DEBUG nova.network.os_vif_util [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "address": "fa:16:3e:c1:35:f1", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec1:35f1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd7b2b79-55", "ovs_interfaceid": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.221 244018 DEBUG nova.network.os_vif_util [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:35:f1,bridge_name='br-int',has_traffic_filtering=True,id=dd7b2b79-55bd-4df0-a79a-3344d8c79c92,network=Network(9f82a56f-a5c3-446e-9b75-7dc69c93c56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd7b2b79-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.221 244018 DEBUG os_vif [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:35:f1,bridge_name='br-int',has_traffic_filtering=True,id=dd7b2b79-55bd-4df0-a79a-3344d8c79c92,network=Network(9f82a56f-a5c3-446e-9b75-7dc69c93c56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd7b2b79-55') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.223 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.224 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd7b2b79-55, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.226 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.229 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.232 244018 INFO os_vif [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:35:f1,bridge_name='br-int',has_traffic_filtering=True,id=dd7b2b79-55bd-4df0-a79a-3344d8c79c92,network=Network(9f82a56f-a5c3-446e-9b75-7dc69c93c56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd7b2b79-55')#033[00m
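
The DelPortCommand above is ovsdbapp's Open_vSwitch schema API, driven here by os-vif. A standalone sketch of the same unplug; the endpoint and timeout are assumptions (os-vif holds a long-lived IDL connection configured elsewhere), not nova's exact wiring:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local ovsdb-server (address assumed for the sketch).
    idl = connection.OvsdbIdl.from_server("tcp:127.0.0.1:6640", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    # Equivalent of the logged DelPortCommand(port=tapdd7b2b79-55,
    # bridge=br-int, if_exists=True).
    api.del_port("tapdd7b2b79-55", bridge="br-int", if_exists=True).execute(
        check_error=True)
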
Feb 25 07:56:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:17.240 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ada6ff47-f71e-4f5a-ad9b-0b57db3d2ece]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:17.255 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[88a5e857-8fcf-4b43-bf93-c2333394c06d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f82a56f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:91:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2780, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 7, 'rx_bytes': 2780, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609107, 'reachable_time': 19694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368780, 'error': None, 'target': 'ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:17.270 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d3f80c3b-c06a-456c-8b93-64e272ab440e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9f82a56f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609116, 'tstamp': 609116}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368789, 'error': None, 'target': 'ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9f82a56f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609119, 'tstamp': 609119}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368789, 'error': None, 'target': 'ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:17.272 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f82a56f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.273 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.276 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:17.276 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f82a56f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:56:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:17.276 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:56:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:17.276 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f82a56f-a0, col_values=(('external_ids', {'iface-id': 'd6ceb7cb-a48f-47c3-acb6-58d61268f8c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:56:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:17.277 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:56:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:56:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3067882301' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.404 244018 DEBUG oslo_concurrency.processutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
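[editor's note] Nova shells out to the ceph CLI here; the monitor records it as a `mon_command` dispatch from client.openstack. A sketch of the same query issued directly with the python-rados binding (conf path and client name from the log; the output key is an assumption based on `ceph df` JSON):

```python
# Sketch: issue the same {"prefix": "df", "format": "json"} mon command that
# ceph-mon logged above, using python-rados instead of the ceph CLI.
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
cluster.connect()
try:
    ret, outbuf, errstr = cluster.mon_command(
        json.dumps({'prefix': 'df', 'format': 'json'}), b'')
    stats = json.loads(outbuf)
    print(stats['stats']['total_avail_bytes'])   # key assumed from `ceph df` JSON
finally:
    cluster.shutdown()
```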
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.412 244018 DEBUG nova.compute.provider_tree [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.432 244018 DEBUG nova.scheduler.client.report [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
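[editor's note] Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio, which is why this 8-core host can accept 32 vCPUs of instances while DISK_GB is deliberately undercommitted at 0.9. A worked check against the values logged above:

```python
# Worked example: schedulable capacity from the inventory dict logged above,
# using placement's capacity formula (total - reserved) * allocation_ratio.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
```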
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.452 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.453 244018 DEBUG nova.compute.manager [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.497 244018 DEBUG nova.compute.manager [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.498 244018 DEBUG nova.network.neutron [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.517 244018 INFO nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.532 244018 DEBUG nova.compute.manager [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.610 244018 DEBUG nova.compute.manager [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.611 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.612 244018 INFO nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Creating image(s)#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.642 244018 DEBUG nova.storage.rbd_utils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 859fd309-32ea-4025-8312-ddecfa0d6a7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.681 244018 DEBUG nova.storage.rbd_utils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 859fd309-32ea-4025-8312-ddecfa0d6a7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.716 244018 DEBUG nova.storage.rbd_utils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 859fd309-32ea-4025-8312-ddecfa0d6a7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.722 244018 DEBUG oslo_concurrency.processutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.826 244018 DEBUG oslo_concurrency.processutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
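[editor's note] The prlimit wrapper above caps the qemu-img child at 1 GiB of address space and 30 s of CPU time, a guard against hostile image files. A sketch of the same invocation through oslo.concurrency, with the limits taken from the logged command line:

```python
# Sketch: run `qemu-img info` under the same resource caps that appear in the
# logged oslo_concurrency.prlimit invocation (1 GiB address space, 30 s CPU).
from oslo_concurrency import processutils

limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
out, err = processutils.execute(
    'env', 'LC_ALL=C', 'LANG=C',
    'qemu-img', 'info',
    '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
    '--force-share', '--output=json',
    prlimit=limits)
print(out)
```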
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.828 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.830 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.830 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
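[editor's note] The acquire/release pair above is oslo.concurrency's named-lock pattern: the image-cache fetch is serialized on the base image's hash so concurrent builds cannot download the same backing file twice. A minimal sketch of the same primitive (function name is illustrative, not nova's):

```python
# Sketch: serialize a critical section on a named lock, the same
# oslo_concurrency.lockutils primitive the image cache uses above.
from oslo_concurrency import lockutils

def fetch_base_image(image_hash):
    # One caller at a time per image hash; the rest block, which is what the
    # "waited 0.001s" / "held 0.001s" bookkeeping in the log measures.
    with lockutils.lock(image_hash):
        pass  # download/verify the base image here
```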
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.864 244018 DEBUG nova.storage.rbd_utils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 859fd309-32ea-4025-8312-ddecfa0d6a7f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:56:17 np0005629333 nova_compute[244014]: 2026-02-25 12:56:17.870 244018 DEBUG oslo_concurrency.processutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 859fd309-32ea-4025-8312-ddecfa0d6a7f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:56:18 np0005629333 nova_compute[244014]: 2026-02-25 12:56:18.219 244018 DEBUG nova.policy [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
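[editor's note] The failed check above is oslo.policy at work: the token carries only the reader and member roles, and attach_external_network defaults to admin-only in nova. A hedged sketch of the mechanism with oslo.policy directly; the 'role:admin' rule string is an assumption for illustration, not nova's exact default expression:

```python
# Sketch: why network:attach_external_network fails for a reader/member token,
# reproduced with oslo.policy (rule string assumed for illustration).
from oslo_config import cfg
from oslo_policy import policy

enforcer = policy.Enforcer(cfg.CONF)
enforcer.register_default(
    policy.RuleDefault('network:attach_external_network', 'role:admin'))

creds = {'roles': ['reader', 'member'],
         'project_id': 'e227b91c24404ab5aed600e2fe792d32'}
print(enforcer.enforce('network:attach_external_network', {}, creds))  # False
```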
Feb 25 07:56:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:56:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2272: 305 pgs: 305 active+clean; 391 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 210 op/s
Feb 25 07:56:19 np0005629333 nova_compute[244014]: 2026-02-25 12:56:19.015 244018 DEBUG oslo_concurrency.processutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 859fd309-32ea-4025-8312-ddecfa0d6a7f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:56:19 np0005629333 nova_compute[244014]: 2026-02-25 12:56:19.108 244018 DEBUG nova.storage.rbd_utils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] resizing rbd image 859fd309-32ea-4025-8312-ddecfa0d6a7f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
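[editor's note] After the `rbd import` of the cached base image completes, the disk is grown to the flavor's 1 GiB root size (1073741824 bytes). A sketch of that resize step using the rbd/rados Python bindings, with pool, image name and size from the log and connection details assumed:

```python
# Sketch: resize the freshly imported RBD image to the flavor's 1 GiB root
# disk, as nova.storage.rbd_utils logs above, via the rbd/rados bindings.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
cluster.connect()
ioctx = cluster.open_ioctx('vms')
try:
    with rbd.Image(ioctx, '859fd309-32ea-4025-8312-ddecfa0d6a7f_disk') as image:
        image.resize(1073741824)   # 1 GiB, matching the logged resize target
finally:
    ioctx.close()
    cluster.shutdown()
```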
Feb 25 07:56:19 np0005629333 nova_compute[244014]: 2026-02-25 12:56:19.159 244018 INFO nova.virt.libvirt.driver [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Deleting instance files /var/lib/nova/instances/7102d0db-32cc-4a5e-8282-cf5266710872_del#033[00m
Feb 25 07:56:19 np0005629333 nova_compute[244014]: 2026-02-25 12:56:19.160 244018 INFO nova.virt.libvirt.driver [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Deletion of /var/lib/nova/instances/7102d0db-32cc-4a5e-8282-cf5266710872_del complete#033[00m
Feb 25 07:56:19 np0005629333 nova_compute[244014]: 2026-02-25 12:56:19.214 244018 DEBUG nova.objects.instance [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'migration_context' on Instance uuid 859fd309-32ea-4025-8312-ddecfa0d6a7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:56:19 np0005629333 nova_compute[244014]: 2026-02-25 12:56:19.230 244018 INFO nova.compute.manager [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Took 2.27 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:56:19 np0005629333 nova_compute[244014]: 2026-02-25 12:56:19.231 244018 DEBUG oslo.service.loopingcall [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:56:19 np0005629333 nova_compute[244014]: 2026-02-25 12:56:19.231 244018 DEBUG nova.compute.manager [-] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:56:19 np0005629333 nova_compute[244014]: 2026-02-25 12:56:19.231 244018 DEBUG nova.network.neutron [-] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:56:19 np0005629333 nova_compute[244014]: 2026-02-25 12:56:19.238 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:56:19 np0005629333 nova_compute[244014]: 2026-02-25 12:56:19.239 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Ensure instance console log exists: /var/lib/nova/instances/859fd309-32ea-4025-8312-ddecfa0d6a7f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:56:19 np0005629333 nova_compute[244014]: 2026-02-25 12:56:19.239 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:56:19 np0005629333 nova_compute[244014]: 2026-02-25 12:56:19.240 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:56:19 np0005629333 nova_compute[244014]: 2026-02-25 12:56:19.240 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:56:20 np0005629333 nova_compute[244014]: 2026-02-25 12:56:20.324 244018 DEBUG nova.compute.manager [req-e4415831-69dc-4ffc-a9bb-c4a52f5c6caf req-f83d7710-1594-42c0-a803-d5d7660bb07b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Received event network-vif-unplugged-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:56:20 np0005629333 nova_compute[244014]: 2026-02-25 12:56:20.325 244018 DEBUG oslo_concurrency.lockutils [req-e4415831-69dc-4ffc-a9bb-c4a52f5c6caf req-f83d7710-1594-42c0-a803-d5d7660bb07b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:56:20 np0005629333 nova_compute[244014]: 2026-02-25 12:56:20.325 244018 DEBUG oslo_concurrency.lockutils [req-e4415831-69dc-4ffc-a9bb-c4a52f5c6caf req-f83d7710-1594-42c0-a803-d5d7660bb07b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:56:20 np0005629333 nova_compute[244014]: 2026-02-25 12:56:20.325 244018 DEBUG oslo_concurrency.lockutils [req-e4415831-69dc-4ffc-a9bb-c4a52f5c6caf req-f83d7710-1594-42c0-a803-d5d7660bb07b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:56:20 np0005629333 nova_compute[244014]: 2026-02-25 12:56:20.326 244018 DEBUG nova.compute.manager [req-e4415831-69dc-4ffc-a9bb-c4a52f5c6caf req-f83d7710-1594-42c0-a803-d5d7660bb07b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] No waiting events found dispatching network-vif-unplugged-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:56:20 np0005629333 nova_compute[244014]: 2026-02-25 12:56:20.326 244018 DEBUG nova.compute.manager [req-e4415831-69dc-4ffc-a9bb-c4a52f5c6caf req-f83d7710-1594-42c0-a803-d5d7660bb07b 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Received event network-vif-unplugged-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:56:20 np0005629333 nova_compute[244014]: 2026-02-25 12:56:20.508 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2273: 305 pgs: 305 active+clean; 391 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 394 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Feb 25 07:56:22 np0005629333 nova_compute[244014]: 2026-02-25 12:56:22.229 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:22 np0005629333 nova_compute[244014]: 2026-02-25 12:56:22.232 244018 DEBUG nova.network.neutron [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Successfully created port: 2a99d3aa-08e4-4712-a8b3-984de83b4b60 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:56:22 np0005629333 nova_compute[244014]: 2026-02-25 12:56:22.433 244018 DEBUG nova.compute.manager [req-40549ba1-04dc-46ea-a77f-0dfeacda46df req-0929082d-2193-45bc-9f10-65ccdb8d106d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Received event network-vif-plugged-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:56:22 np0005629333 nova_compute[244014]: 2026-02-25 12:56:22.434 244018 DEBUG oslo_concurrency.lockutils [req-40549ba1-04dc-46ea-a77f-0dfeacda46df req-0929082d-2193-45bc-9f10-65ccdb8d106d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:56:22 np0005629333 nova_compute[244014]: 2026-02-25 12:56:22.434 244018 DEBUG oslo_concurrency.lockutils [req-40549ba1-04dc-46ea-a77f-0dfeacda46df req-0929082d-2193-45bc-9f10-65ccdb8d106d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:56:22 np0005629333 nova_compute[244014]: 2026-02-25 12:56:22.434 244018 DEBUG oslo_concurrency.lockutils [req-40549ba1-04dc-46ea-a77f-0dfeacda46df req-0929082d-2193-45bc-9f10-65ccdb8d106d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:56:22 np0005629333 nova_compute[244014]: 2026-02-25 12:56:22.435 244018 DEBUG nova.compute.manager [req-40549ba1-04dc-46ea-a77f-0dfeacda46df req-0929082d-2193-45bc-9f10-65ccdb8d106d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] No waiting events found dispatching network-vif-plugged-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:56:22 np0005629333 nova_compute[244014]: 2026-02-25 12:56:22.435 244018 WARNING nova.compute.manager [req-40549ba1-04dc-46ea-a77f-0dfeacda46df req-0929082d-2193-45bc-9f10-65ccdb8d106d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Received unexpected event network-vif-plugged-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:56:22 np0005629333 nova_compute[244014]: 2026-02-25 12:56:22.437 244018 DEBUG nova.network.neutron [-] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:56:22 np0005629333 nova_compute[244014]: 2026-02-25 12:56:22.473 244018 INFO nova.compute.manager [-] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Took 3.24 seconds to deallocate network for instance.#033[00m
Feb 25 07:56:22 np0005629333 nova_compute[244014]: 2026-02-25 12:56:22.521 244018 DEBUG oslo_concurrency.lockutils [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:56:22 np0005629333 nova_compute[244014]: 2026-02-25 12:56:22.522 244018 DEBUG oslo_concurrency.lockutils [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:56:22 np0005629333 nova_compute[244014]: 2026-02-25 12:56:22.551 244018 DEBUG nova.network.neutron [req-a711629e-f384-4ecd-b653-47863cd91253 req-56e64340-4e51-48a7-88a9-30a0ba242d8a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Updated VIF entry in instance network info cache for port dd7b2b79-55bd-4df0-a79a-3344d8c79c92. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:56:22 np0005629333 nova_compute[244014]: 2026-02-25 12:56:22.551 244018 DEBUG nova.network.neutron [req-a711629e-f384-4ecd-b653-47863cd91253 req-56e64340-4e51-48a7-88a9-30a0ba242d8a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Updating instance_info_cache with network_info: [{"id": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "address": "fa:16:3e:c1:35:f1", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec1:35f1", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd7b2b79-55", "ovs_interfaceid": "dd7b2b79-55bd-4df0-a79a-3344d8c79c92", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:56:22 np0005629333 nova_compute[244014]: 2026-02-25 12:56:22.576 244018 DEBUG oslo_concurrency.lockutils [req-a711629e-f384-4ecd-b653-47863cd91253 req-56e64340-4e51-48a7-88a9-30a0ba242d8a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-7102d0db-32cc-4a5e-8282-cf5266710872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:56:22 np0005629333 nova_compute[244014]: 2026-02-25 12:56:22.618 244018 DEBUG oslo_concurrency.processutils [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:56:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2274: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 425 KiB/s rd, 3.9 MiB/s wr, 123 op/s
Feb 25 07:56:22 np0005629333 nova_compute[244014]: 2026-02-25 12:56:22.980 244018 DEBUG nova.network.neutron [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Successfully updated port: 2a99d3aa-08e4-4712-a8b3-984de83b4b60 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:56:22 np0005629333 nova_compute[244014]: 2026-02-25 12:56:22.998 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-859fd309-32ea-4025-8312-ddecfa0d6a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:56:22 np0005629333 nova_compute[244014]: 2026-02-25 12:56:22.999 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-859fd309-32ea-4025-8312-ddecfa0d6a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:56:22 np0005629333 nova_compute[244014]: 2026-02-25 12:56:22.999 244018 DEBUG nova.network.neutron [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:56:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:56:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3147388156' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:56:23 np0005629333 nova_compute[244014]: 2026-02-25 12:56:23.185 244018 DEBUG oslo_concurrency.processutils [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:56:23 np0005629333 nova_compute[244014]: 2026-02-25 12:56:23.191 244018 DEBUG nova.compute.provider_tree [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:56:23 np0005629333 nova_compute[244014]: 2026-02-25 12:56:23.227 244018 DEBUG nova.scheduler.client.report [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:56:23 np0005629333 nova_compute[244014]: 2026-02-25 12:56:23.237 244018 DEBUG nova.network.neutron [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:56:23 np0005629333 nova_compute[244014]: 2026-02-25 12:56:23.312 244018 DEBUG oslo_concurrency.lockutils [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:56:23 np0005629333 nova_compute[244014]: 2026-02-25 12:56:23.358 244018 INFO nova.scheduler.client.report [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance 7102d0db-32cc-4a5e-8282-cf5266710872#033[00m
Feb 25 07:56:23 np0005629333 nova_compute[244014]: 2026-02-25 12:56:23.436 244018 DEBUG oslo_concurrency.lockutils [None req-b266a449-44e9-4a08-95cb-9ab4ef17b90e f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "7102d0db-32cc-4a5e-8282-cf5266710872" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.483s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:56:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.521 244018 DEBUG nova.compute.manager [req-856c293d-4d9c-46bf-8e69-b5897adfd80b req-cead0217-afbe-45d8-8905-4829850e6177 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Received event network-vif-deleted-dd7b2b79-55bd-4df0-a79a-3344d8c79c92 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.521 244018 INFO nova.compute.manager [req-856c293d-4d9c-46bf-8e69-b5897adfd80b req-cead0217-afbe-45d8-8905-4829850e6177 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Neutron deleted interface dd7b2b79-55bd-4df0-a79a-3344d8c79c92; detaching it from the instance and deleting it from the info cache#033[00m
Feb 25 07:56:24 np0005629333 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.522 244018 DEBUG nova.network.neutron [req-856c293d-4d9c-46bf-8e69-b5897adfd80b req-cead0217-afbe-45d8-8905-4829850e6177 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.526 244018 DEBUG nova.compute.manager [req-856c293d-4d9c-46bf-8e69-b5897adfd80b req-cead0217-afbe-45d8-8905-4829850e6177 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Detach interface failed, port_id=dd7b2b79-55bd-4df0-a79a-3344d8c79c92, reason: Instance 7102d0db-32cc-4a5e-8282-cf5266710872 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.526 244018 DEBUG nova.compute.manager [req-856c293d-4d9c-46bf-8e69-b5897adfd80b req-cead0217-afbe-45d8-8905-4829850e6177 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Received event network-changed-2a99d3aa-08e4-4712-a8b3-984de83b4b60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.527 244018 DEBUG nova.compute.manager [req-856c293d-4d9c-46bf-8e69-b5897adfd80b req-cead0217-afbe-45d8-8905-4829850e6177 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Refreshing instance network info cache due to event network-changed-2a99d3aa-08e4-4712-a8b3-984de83b4b60. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.527 244018 DEBUG oslo_concurrency.lockutils [req-856c293d-4d9c-46bf-8e69-b5897adfd80b req-cead0217-afbe-45d8-8905-4829850e6177 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-859fd309-32ea-4025-8312-ddecfa0d6a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.561 244018 DEBUG nova.network.neutron [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Updating instance_info_cache with network_info: [{"id": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "address": "fa:16:3e:7f:bf:de", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a99d3aa-08", "ovs_interfaceid": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.585 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-859fd309-32ea-4025-8312-ddecfa0d6a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.586 244018 DEBUG nova.compute.manager [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Instance network_info: |[{"id": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "address": "fa:16:3e:7f:bf:de", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a99d3aa-08", "ovs_interfaceid": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.586 244018 DEBUG oslo_concurrency.lockutils [req-856c293d-4d9c-46bf-8e69-b5897adfd80b req-cead0217-afbe-45d8-8905-4829850e6177 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-859fd309-32ea-4025-8312-ddecfa0d6a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.587 244018 DEBUG nova.network.neutron [req-856c293d-4d9c-46bf-8e69-b5897adfd80b req-cead0217-afbe-45d8-8905-4829850e6177 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Refreshing network info cache for port 2a99d3aa-08e4-4712-a8b3-984de83b4b60 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.592 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Start _get_guest_xml network_info=[{"id": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "address": "fa:16:3e:7f:bf:de", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a99d3aa-08", "ovs_interfaceid": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.599 244018 WARNING nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.610 244018 DEBUG nova.virt.libvirt.host [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.611 244018 DEBUG nova.virt.libvirt.host [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.615 244018 DEBUG nova.virt.libvirt.host [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.616 244018 DEBUG nova.virt.libvirt.host [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.617 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.617 244018 DEBUG nova.virt.hardware [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.618 244018 DEBUG nova.virt.hardware [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.618 244018 DEBUG nova.virt.hardware [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.619 244018 DEBUG nova.virt.hardware [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.619 244018 DEBUG nova.virt.hardware [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.619 244018 DEBUG nova.virt.hardware [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.620 244018 DEBUG nova.virt.hardware [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.620 244018 DEBUG nova.virt.hardware [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.621 244018 DEBUG nova.virt.hardware [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.621 244018 DEBUG nova.virt.hardware [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.622 244018 DEBUG nova.virt.hardware [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
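[editor's note] The topology walk above enumerates every (sockets, cores, threads) factorization of the vCPU count that fits the limits (65536 apiece when neither flavor nor image constrains it); with a single vCPU the only candidate is 1:1:1. A simplified sketch of that enumeration, not nova's actual code (which also weighs preferences and NUMA):

```python
# Simplified sketch of the possible-topology enumeration logged above: every
# sockets*cores*threads factorization of the vCPU count within the limits.
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    topologies = []
    for sockets in range(1, min(vcpus, max_sockets) + 1):
        for cores in range(1, min(vcpus, max_cores) + 1):
            for threads in range(1, min(vcpus, max_threads) + 1):
                if sockets * cores * threads == vcpus:
                    topologies.append((sockets, cores, threads))
    return topologies

print(possible_topologies(1))  # [(1, 1, 1)] -> "Got 1 possible topologies"
```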
Feb 25 07:56:24 np0005629333 nova_compute[244014]: 2026-02-25 12:56:24.627 244018 DEBUG oslo_concurrency.processutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:56:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2275: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 424 KiB/s rd, 3.9 MiB/s wr, 123 op/s
Feb 25 07:56:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:56:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2112457902' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:56:25 np0005629333 nova_compute[244014]: 2026-02-25 12:56:25.195 244018 DEBUG oslo_concurrency.processutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:56:25 np0005629333 nova_compute[244014]: 2026-02-25 12:56:25.321 244018 DEBUG nova.storage.rbd_utils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 859fd309-32ea-4025-8312-ddecfa0d6a7f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:56:25 np0005629333 nova_compute[244014]: 2026-02-25 12:56:25.326 244018 DEBUG oslo_concurrency.processutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:56:25 np0005629333 nova_compute[244014]: 2026-02-25 12:56:25.511 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:56:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/646772582' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:56:25 np0005629333 nova_compute[244014]: 2026-02-25 12:56:25.950 244018 DEBUG oslo_concurrency.processutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:56:25 np0005629333 nova_compute[244014]: 2026-02-25 12:56:25.952 244018 DEBUG nova.virt.libvirt.vif [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:56:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1835970958',display_name='tempest-TestNetworkBasicOps-server-1835970958',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1835970958',id=138,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKi5hwXM8otzF8onl/sQ2HcXmcnSYZEPkja+OxRBMskTynTP/vErrzil0sRJRcFkhhDMTnC+dkl/140PCZpaQazdCB0rdws3ctDi/BBK2mrv8mGIbCHW3g3d+gqo12lx8A==',key_name='tempest-TestNetworkBasicOps-53938794',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-idncdrcl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:56:17Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=859fd309-32ea-4025-8312-ddecfa0d6a7f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "address": "fa:16:3e:7f:bf:de", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a99d3aa-08", "ovs_interfaceid": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:56:25 np0005629333 nova_compute[244014]: 2026-02-25 12:56:25.953 244018 DEBUG nova.network.os_vif_util [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "address": "fa:16:3e:7f:bf:de", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a99d3aa-08", "ovs_interfaceid": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:56:25 np0005629333 nova_compute[244014]: 2026-02-25 12:56:25.954 244018 DEBUG nova.network.os_vif_util [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:bf:de,bridge_name='br-int',has_traffic_filtering=True,id=2a99d3aa-08e4-4712-a8b3-984de83b4b60,network=Network(74cacb3c-0135-4e0b-9776-478b5f7a3349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a99d3aa-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:56:25 np0005629333 nova_compute[244014]: 2026-02-25 12:56:25.955 244018 DEBUG nova.objects.instance [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_devices' on Instance uuid 859fd309-32ea-4025-8312-ddecfa0d6a7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:56:25 np0005629333 nova_compute[244014]: 2026-02-25 12:56:25.974 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:56:25 np0005629333 nova_compute[244014]:  <uuid>859fd309-32ea-4025-8312-ddecfa0d6a7f</uuid>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:  <name>instance-0000008a</name>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestNetworkBasicOps-server-1835970958</nova:name>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:56:24</nova:creationTime>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:56:25 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:        <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:        <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:        <nova:port uuid="2a99d3aa-08e4-4712-a8b3-984de83b4b60">
Feb 25 07:56:25 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <entry name="serial">859fd309-32ea-4025-8312-ddecfa0d6a7f</entry>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <entry name="uuid">859fd309-32ea-4025-8312-ddecfa0d6a7f</entry>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/859fd309-32ea-4025-8312-ddecfa0d6a7f_disk">
Feb 25 07:56:25 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:56:25 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/859fd309-32ea-4025-8312-ddecfa0d6a7f_disk.config">
Feb 25 07:56:25 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:56:25 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:7f:bf:de"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <target dev="tap2a99d3aa-08"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/859fd309-32ea-4025-8312-ddecfa0d6a7f/console.log" append="off"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:56:25 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:56:25 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:56:25 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:56:25 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 07:56:25 np0005629333 nova_compute[244014]: 2026-02-25 12:56:25.977 244018 DEBUG nova.compute.manager [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Preparing to wait for external event network-vif-plugged-2a99d3aa-08e4-4712-a8b3-984de83b4b60 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 25 07:56:25 np0005629333 nova_compute[244014]: 2026-02-25 12:56:25.978 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:56:25 np0005629333 nova_compute[244014]: 2026-02-25 12:56:25.979 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:56:25 np0005629333 nova_compute[244014]: 2026-02-25 12:56:25.979 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:56:25 np0005629333 nova_compute[244014]: 2026-02-25 12:56:25.981 244018 DEBUG nova.virt.libvirt.vif [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:56:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1835970958',display_name='tempest-TestNetworkBasicOps-server-1835970958',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1835970958',id=138,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKi5hwXM8otzF8onl/sQ2HcXmcnSYZEPkja+OxRBMskTynTP/vErrzil0sRJRcFkhhDMTnC+dkl/140PCZpaQazdCB0rdws3ctDi/BBK2mrv8mGIbCHW3g3d+gqo12lx8A==',key_name='tempest-TestNetworkBasicOps-53938794',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-idncdrcl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:56:17Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=859fd309-32ea-4025-8312-ddecfa0d6a7f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "address": "fa:16:3e:7f:bf:de", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a99d3aa-08", "ovs_interfaceid": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 25 07:56:25 np0005629333 nova_compute[244014]: 2026-02-25 12:56:25.981 244018 DEBUG nova.network.os_vif_util [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "address": "fa:16:3e:7f:bf:de", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a99d3aa-08", "ovs_interfaceid": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:56:25 np0005629333 nova_compute[244014]: 2026-02-25 12:56:25.982 244018 DEBUG nova.network.os_vif_util [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:bf:de,bridge_name='br-int',has_traffic_filtering=True,id=2a99d3aa-08e4-4712-a8b3-984de83b4b60,network=Network(74cacb3c-0135-4e0b-9776-478b5f7a3349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a99d3aa-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:56:25 np0005629333 nova_compute[244014]: 2026-02-25 12:56:25.983 244018 DEBUG os_vif [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:bf:de,bridge_name='br-int',has_traffic_filtering=True,id=2a99d3aa-08e4-4712-a8b3-984de83b4b60,network=Network(74cacb3c-0135-4e0b-9776-478b5f7a3349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a99d3aa-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 25 07:56:25 np0005629333 nova_compute[244014]: 2026-02-25 12:56:25.984 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:56:25 np0005629333 nova_compute[244014]: 2026-02-25 12:56:25.985 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:56:25 np0005629333 nova_compute[244014]: 2026-02-25 12:56:25.985 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 07:56:25 np0005629333 nova_compute[244014]: 2026-02-25 12:56:25.989 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:56:25 np0005629333 nova_compute[244014]: 2026-02-25 12:56:25.990 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a99d3aa-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:56:25 np0005629333 nova_compute[244014]: 2026-02-25 12:56:25.990 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2a99d3aa-08, col_values=(('external_ids', {'iface-id': '2a99d3aa-08e4-4712-a8b3-984de83b4b60', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:bf:de', 'vm-uuid': '859fd309-32ea-4025-8312-ddecfa0d6a7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:56:25 np0005629333 nova_compute[244014]: 2026-02-25 12:56:25.992 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:56:25 np0005629333 NetworkManager[49836]: <info>  [1772024185.9938] manager: (tap2a99d3aa-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/603)
Feb 25 07:56:25 np0005629333 nova_compute[244014]: 2026-02-25 12:56:25.995 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 07:56:25 np0005629333 nova_compute[244014]: 2026-02-25 12:56:25.998 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.000 244018 INFO os_vif [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:bf:de,bridge_name='br-int',has_traffic_filtering=True,id=2a99d3aa-08e4-4712-a8b3-984de83b4b60,network=Network(74cacb3c-0135-4e0b-9776-478b5f7a3349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a99d3aa-08')
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.075 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.076 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.077 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:7f:bf:de, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.077 244018 INFO nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Using config drive
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.111 244018 DEBUG nova.storage.rbd_utils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 859fd309-32ea-4025-8312-ddecfa0d6a7f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.387 244018 DEBUG nova.compute.manager [req-3147b30e-90df-4df0-a401-bba7a20d0b42 req-1a5c5954-4409-4e7f-bf10-c05b482cc8a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Received event network-changed-44908825-991d-42d4-9bad-18d2a1f5fe9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.388 244018 DEBUG nova.compute.manager [req-3147b30e-90df-4df0-a401-bba7a20d0b42 req-1a5c5954-4409-4e7f-bf10-c05b482cc8a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Refreshing instance network info cache due to event network-changed-44908825-991d-42d4-9bad-18d2a1f5fe9c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.388 244018 DEBUG oslo_concurrency.lockutils [req-3147b30e-90df-4df0-a401-bba7a20d0b42 req-1a5c5954-4409-4e7f-bf10-c05b482cc8a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.389 244018 DEBUG oslo_concurrency.lockutils [req-3147b30e-90df-4df0-a401-bba7a20d0b42 req-1a5c5954-4409-4e7f-bf10-c05b482cc8a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.389 244018 DEBUG nova.network.neutron [req-3147b30e-90df-4df0-a401-bba7a20d0b42 req-1a5c5954-4409-4e7f-bf10-c05b482cc8a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Refreshing network info cache for port 44908825-991d-42d4-9bad-18d2a1f5fe9c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.476 244018 DEBUG oslo_concurrency.lockutils [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "dfb7287a-5448-4579-8938-fe909fbf54c6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.477 244018 DEBUG oslo_concurrency.lockutils [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.477 244018 DEBUG oslo_concurrency.lockutils [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.477 244018 DEBUG oslo_concurrency.lockutils [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.477 244018 DEBUG oslo_concurrency.lockutils [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.479 244018 INFO nova.compute.manager [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Terminating instance
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.480 244018 DEBUG nova.compute.manager [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 07:56:26 np0005629333 kernel: tap44908825-99 (unregistering): left promiscuous mode
Feb 25 07:56:26 np0005629333 NetworkManager[49836]: <info>  [1772024186.5729] device (tap44908825-99): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:56:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:56:26Z|01445|binding|INFO|Releasing lport 44908825-991d-42d4-9bad-18d2a1f5fe9c from this chassis (sb_readonly=0)
Feb 25 07:56:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:56:26Z|01446|binding|INFO|Setting lport 44908825-991d-42d4-9bad-18d2a1f5fe9c down in Southbound
Feb 25 07:56:26 np0005629333 ovn_controller[147040]: 2026-02-25T12:56:26Z|01447|binding|INFO|Removing iface tap44908825-99 ovn-installed in OVS
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.584 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.592 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:56:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:26.594 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:45:21 10.100.0.3 2001:db8::f816:3eff:fee4:4521'], port_security=['fa:16:3e:e4:45:21 10.100.0.3 2001:db8::f816:3eff:fee4:4521'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fee4:4521/64', 'neutron:device_id': 'dfb7287a-5448-4579-8938-fe909fbf54c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '25fa1e8dd32c483686f869da2604f2b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aaaa1259-a3db-497e-8efb-1f3c1f8cc088', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e7d68418-660c-4b4a-9631-94822d5aa11e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=44908825-991d-42d4-9bad-18d2a1f5fe9c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:56:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:26.597 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 44908825-991d-42d4-9bad-18d2a1f5fe9c in datapath 9f82a56f-a5c3-446e-9b75-7dc69c93c56b unbound from our chassis
Feb 25 07:56:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:26.600 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f82a56f-a5c3-446e-9b75-7dc69c93c56b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 07:56:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:26.601 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4c6f79fe-3713-488f-b06c-2db9db7888e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:56:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:26.602 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b namespace which is not needed anymore
Feb 25 07:56:26 np0005629333 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000087.scope: Deactivated successfully.
Feb 25 07:56:26 np0005629333 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000087.scope: Consumed 14.386s CPU time.
Feb 25 07:56:26 np0005629333 systemd-machined[210048]: Machine qemu-167-instance-00000087 terminated.
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.635 244018 INFO nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Creating config drive at /var/lib/nova/instances/859fd309-32ea-4025-8312-ddecfa0d6a7f/disk.config
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.642 244018 DEBUG oslo_concurrency.processutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/859fd309-32ea-4025-8312-ddecfa0d6a7f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp4clsxbcm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.722 244018 INFO nova.virt.libvirt.driver [-] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Instance destroyed successfully.
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.724 244018 DEBUG nova.objects.instance [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lazy-loading 'resources' on Instance uuid dfb7287a-5448-4579-8938-fe909fbf54c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:56:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2276: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 424 KiB/s rd, 3.9 MiB/s wr, 123 op/s
Feb 25 07:56:26 np0005629333 neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b[367023]: [NOTICE]   (367027) : haproxy version is 2.8.14-c23fe91
Feb 25 07:56:26 np0005629333 neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b[367023]: [NOTICE]   (367027) : path to executable is /usr/sbin/haproxy
Feb 25 07:56:26 np0005629333 neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b[367023]: [WARNING]  (367027) : Exiting Master process...
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.740 244018 DEBUG nova.virt.libvirt.vif [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:55:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-730348204',display_name='tempest-TestGettingAddress-server-730348204',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-730348204',id=135,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCjAwmzTqRLo6MtceITp6kkRRb7F7L6NpmCwm+uERDDuNYjjK2SbSlgPchdRYIPodHunzOlcpbeAY9Mle2akRMrTndDYB62iVXgr7LtFS3t0EZlmgtz0aV+ezHMzkXsV7A==',key_name='tempest-TestGettingAddress-1069883733',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:55:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='25fa1e8dd32c483686f869da2604f2b1',ramdisk_id='',reservation_id='r-cliy4xfz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-344063294',owner_user_name='tempest-TestGettingAddress-344063294-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:55:22Z,user_data=None,user_id='f8eb8dbf8cc448ad946fd23aaae2326e',uuid=dfb7287a-5448-4579-8938-fe909fbf54c6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "address": "fa:16:3e:e4:45:21", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee4:4521", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44908825-99", "ovs_interfaceid": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 07:56:26 np0005629333 neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b[367023]: [ALERT]    (367027) : Current worker (367029) exited with code 143 (Terminated)
Feb 25 07:56:26 np0005629333 neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b[367023]: [WARNING]  (367027) : All workers exited. Exiting... (0)
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.741 244018 DEBUG nova.network.os_vif_util [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converting VIF {"id": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "address": "fa:16:3e:e4:45:21", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee4:4521", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44908825-99", "ovs_interfaceid": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.743 244018 DEBUG nova.network.os_vif_util [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e4:45:21,bridge_name='br-int',has_traffic_filtering=True,id=44908825-991d-42d4-9bad-18d2a1f5fe9c,network=Network(9f82a56f-a5c3-446e-9b75-7dc69c93c56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44908825-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 25 07:56:26 np0005629333 systemd[1]: libpod-428a904833ac927c13f7cd77da8bdaf3e47abd2d0a464c01afa6fcfb7b29a635.scope: Deactivated successfully.
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.743 244018 DEBUG os_vif [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:45:21,bridge_name='br-int',has_traffic_filtering=True,id=44908825-991d-42d4-9bad-18d2a1f5fe9c,network=Network(9f82a56f-a5c3-446e-9b75-7dc69c93c56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44908825-99') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.747 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.748 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44908825-99, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:56:26 np0005629333 podman[369095]: 2026-02-25 12:56:26.751122223 +0000 UTC m=+0.057905865 container died 428a904833ac927c13f7cd77da8bdaf3e47abd2d0a464c01afa6fcfb7b29a635 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.751 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.753 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.757 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.759 244018 DEBUG nova.network.neutron [req-856c293d-4d9c-46bf-8e69-b5897adfd80b req-cead0217-afbe-45d8-8905-4829850e6177 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Updated VIF entry in instance network info cache for port 2a99d3aa-08e4-4712-a8b3-984de83b4b60. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.760 244018 DEBUG nova.network.neutron [req-856c293d-4d9c-46bf-8e69-b5897adfd80b req-cead0217-afbe-45d8-8905-4829850e6177 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Updating instance_info_cache with network_info: [{"id": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "address": "fa:16:3e:7f:bf:de", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a99d3aa-08", "ovs_interfaceid": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.761 244018 INFO os_vif [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:45:21,bridge_name='br-int',has_traffic_filtering=True,id=44908825-991d-42d4-9bad-18d2a1f5fe9c,network=Network(9f82a56f-a5c3-446e-9b75-7dc69c93c56b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44908825-99')
Feb 25 07:56:26 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-428a904833ac927c13f7cd77da8bdaf3e47abd2d0a464c01afa6fcfb7b29a635-userdata-shm.mount: Deactivated successfully.
Feb 25 07:56:26 np0005629333 systemd[1]: var-lib-containers-storage-overlay-36eb1392c95fbb33162d7879fcd33b904283c8f0341d2d03795af7b704d65fdf-merged.mount: Deactivated successfully.
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.782 244018 DEBUG oslo_concurrency.processutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/859fd309-32ea-4025-8312-ddecfa0d6a7f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp4clsxbcm" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:56:26 np0005629333 podman[369095]: 2026-02-25 12:56:26.798039526 +0000 UTC m=+0.104823198 container cleanup 428a904833ac927c13f7cd77da8bdaf3e47abd2d0a464c01afa6fcfb7b29a635 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:56:26 np0005629333 systemd[1]: libpod-conmon-428a904833ac927c13f7cd77da8bdaf3e47abd2d0a464c01afa6fcfb7b29a635.scope: Deactivated successfully.
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.810 244018 DEBUG nova.storage.rbd_utils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 859fd309-32ea-4025-8312-ddecfa0d6a7f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.816 244018 DEBUG oslo_concurrency.processutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/859fd309-32ea-4025-8312-ddecfa0d6a7f/disk.config 859fd309-32ea-4025-8312-ddecfa0d6a7f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.849 244018 DEBUG oslo_concurrency.lockutils [req-856c293d-4d9c-46bf-8e69-b5897adfd80b req-cead0217-afbe-45d8-8905-4829850e6177 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-859fd309-32ea-4025-8312-ddecfa0d6a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:56:26 np0005629333 podman[369168]: 2026-02-25 12:56:26.893437177 +0000 UTC m=+0.065081727 container remove 428a904833ac927c13f7cd77da8bdaf3e47abd2d0a464c01afa6fcfb7b29a635 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:56:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:26.898 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d8250f6b-1dea-42e6-9ea2-a06b944b15b2]: (4, ('Wed Feb 25 12:56:26 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b (428a904833ac927c13f7cd77da8bdaf3e47abd2d0a464c01afa6fcfb7b29a635)\n428a904833ac927c13f7cd77da8bdaf3e47abd2d0a464c01afa6fcfb7b29a635\nWed Feb 25 12:56:26 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b (428a904833ac927c13f7cd77da8bdaf3e47abd2d0a464c01afa6fcfb7b29a635)\n428a904833ac927c13f7cd77da8bdaf3e47abd2d0a464c01afa6fcfb7b29a635\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:56:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:26.900 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[08012f68-bc99-4d0c-8251-5813a636e125]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:56:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:26.901 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f82a56f-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:56:26 np0005629333 kernel: tap9f82a56f-a0: left promiscuous mode
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.904 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.915 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:26.918 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[403f8d05-412e-4cde-a5ae-e97cf8c39c52]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:26.930 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[33778e97-d5d1-4904-92d4-c788e9116bcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:26.931 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5958a241-ddd2-44d3-8350-469efd669e0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:26.946 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[64eb8071-a9f3-4acf-a6ce-572da1bb431f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609100, 'reachable_time': 15579, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369206, 'error': None, 'target': 'ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:26.948 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:56:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:26.948 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[be7efc2b-36a2-490f-9e50-e7428e97052a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
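[editor's note] The remove_netns call logged just above drops the per-network metadata namespace once its HAProxy container is gone. A rough equivalent using pyroute2 directly (the library that neutron's privileged ip_lib wraps); it must run as root, and the namespace name is copied from the log line:

    from pyroute2 import netns

    # Delete the per-network metadata namespace; same effect as
    # "ip netns delete ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b".
    netns.remove("ovnmeta-9f82a56f-a5c3-446e-9b75-7dc69c93c56b")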
Feb 25 07:56:26 np0005629333 systemd[1]: run-netns-ovnmeta\x2d9f82a56f\x2da5c3\x2d446e\x2d9b75\x2d7dc69c93c56b.mount: Deactivated successfully.
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.989 244018 DEBUG oslo_concurrency.processutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/859fd309-32ea-4025-8312-ddecfa0d6a7f/disk.config 859fd309-32ea-4025-8312-ddecfa0d6a7f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:56:26 np0005629333 nova_compute[244014]: 2026-02-25 12:56:26.990 244018 INFO nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Deleting local config drive /var/lib/nova/instances/859fd309-32ea-4025-8312-ddecfa0d6a7f/disk.config because it was imported into RBD.#033[00m
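[editor's note] The two nova lines above close the config-drive move: the rbd import launched at 12:56:26.816 returned 0, so the local file is deleted. A minimal sketch of the same sequence, assuming oslo.concurrency's processutils; arguments are copied from the logged command and error handling is omitted:

    import os

    from oslo_concurrency import processutils

    base = "/var/lib/nova/instances/859fd309-32ea-4025-8312-ddecfa0d6a7f"
    # Import the config drive into the "vms" RBD pool...
    processutils.execute(
        "rbd", "import", "--pool", "vms",
        os.path.join(base, "disk.config"),
        "859fd309-32ea-4025-8312-ddecfa0d6a7f_disk.config",
        "--image-format=2", "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf",
    )
    # ...then drop the now-redundant local copy, as the INFO line records.
    os.unlink(os.path.join(base, "disk.config"))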
Feb 25 07:56:27 np0005629333 nova_compute[244014]: 2026-02-25 12:56:27.029 244018 DEBUG nova.compute.manager [req-d5f55966-5b95-43db-987e-af5ef42f1938 req-78fd48db-4a5b-4bf4-868e-a808cbe73837 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Received event network-vif-unplugged-44908825-991d-42d4-9bad-18d2a1f5fe9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:56:27 np0005629333 nova_compute[244014]: 2026-02-25 12:56:27.030 244018 DEBUG oslo_concurrency.lockutils [req-d5f55966-5b95-43db-987e-af5ef42f1938 req-78fd48db-4a5b-4bf4-868e-a808cbe73837 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:56:27 np0005629333 nova_compute[244014]: 2026-02-25 12:56:27.030 244018 DEBUG oslo_concurrency.lockutils [req-d5f55966-5b95-43db-987e-af5ef42f1938 req-78fd48db-4a5b-4bf4-868e-a808cbe73837 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:56:27 np0005629333 nova_compute[244014]: 2026-02-25 12:56:27.030 244018 DEBUG oslo_concurrency.lockutils [req-d5f55966-5b95-43db-987e-af5ef42f1938 req-78fd48db-4a5b-4bf4-868e-a808cbe73837 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:56:27 np0005629333 nova_compute[244014]: 2026-02-25 12:56:27.031 244018 DEBUG nova.compute.manager [req-d5f55966-5b95-43db-987e-af5ef42f1938 req-78fd48db-4a5b-4bf4-868e-a808cbe73837 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] No waiting events found dispatching network-vif-unplugged-44908825-991d-42d4-9bad-18d2a1f5fe9c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:56:27 np0005629333 nova_compute[244014]: 2026-02-25 12:56:27.031 244018 DEBUG nova.compute.manager [req-d5f55966-5b95-43db-987e-af5ef42f1938 req-78fd48db-4a5b-4bf4-868e-a808cbe73837 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Received event network-vif-unplugged-44908825-991d-42d4-9bad-18d2a1f5fe9c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
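[editor's note] The Acquiring/acquired/"released" triplets around each external event come from oslo.concurrency's named-lock helper. A minimal sketch of the same pattern; the lock name is taken from the log, and the body is illustrative:

    from oslo_concurrency import lockutils

    with lockutils.lock("dfb7287a-5448-4579-8938-fe909fbf54c6-events"):
        # Pop the waiting event for this instance, if any; holding the
        # lock keeps event registration and dispatch from racing.
        pass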
Feb 25 07:56:27 np0005629333 kernel: tap2a99d3aa-08: entered promiscuous mode
Feb 25 07:56:27 np0005629333 systemd-udevd[369073]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:56:27 np0005629333 NetworkManager[49836]: <info>  [1772024187.0365] manager: (tap2a99d3aa-08): new Tun device (/org/freedesktop/NetworkManager/Devices/604)
Feb 25 07:56:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:56:27Z|01448|binding|INFO|Claiming lport 2a99d3aa-08e4-4712-a8b3-984de83b4b60 for this chassis.
Feb 25 07:56:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:56:27Z|01449|binding|INFO|2a99d3aa-08e4-4712-a8b3-984de83b4b60: Claiming fa:16:3e:7f:bf:de 10.100.0.4
Feb 25 07:56:27 np0005629333 nova_compute[244014]: 2026-02-25 12:56:27.036 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:27 np0005629333 nova_compute[244014]: 2026-02-25 12:56:27.042 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:56:27Z|01450|binding|INFO|Setting lport 2a99d3aa-08e4-4712-a8b3-984de83b4b60 ovn-installed in OVS
Feb 25 07:56:27 np0005629333 ovn_controller[147040]: 2026-02-25T12:56:27Z|01451|binding|INFO|Setting lport 2a99d3aa-08e4-4712-a8b3-984de83b4b60 up in Southbound
Feb 25 07:56:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:27.043 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:bf:de 10.100.0.4'], port_security=['fa:16:3e:7f:bf:de 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '859fd309-32ea-4025-8312-ddecfa0d6a7f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf564118-674f-4f5e-bcfa-9616c32835a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=394a85b3-bdd5-4c73-8ada-c7f3f99bacc6, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=2a99d3aa-08e4-4712-a8b3-984de83b4b60) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:56:27 np0005629333 nova_compute[244014]: 2026-02-25 12:56:27.044 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:27.044 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 2a99d3aa-08e4-4712-a8b3-984de83b4b60 in datapath 74cacb3c-0135-4e0b-9776-478b5f7a3349 bound to our chassis#033[00m
Feb 25 07:56:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:27.045 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74cacb3c-0135-4e0b-9776-478b5f7a3349#033[00m
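[editor's note] The "Matched UPDATE: PortBindingUpdatedEvent..." line above is ovsdbapp's row-event machinery reacting to the Port_Binding row gaining a chassis. A minimal sketch of a comparable event class, assuming the stock ovsdbapp API; the run() body is illustrative, not the agent's real provisioning logic:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Fire on updates to any Port_Binding row, as in the log line.
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def run(self, event, row, old):
            # Called with the new and old row contents; the agent checks
            # whether the port is now bound to this chassis and, if so,
            # provisions metadata for the datapath.
            print("Port_Binding %s updated" % row.logical_port)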
Feb 25 07:56:27 np0005629333 NetworkManager[49836]: <info>  [1772024187.0527] device (tap2a99d3aa-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:56:27 np0005629333 NetworkManager[49836]: <info>  [1772024187.0532] device (tap2a99d3aa-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:56:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:27.056 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[860b12b6-4311-4e48-967a-760fd6c2cb90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:27 np0005629333 systemd-machined[210048]: New machine qemu-170-instance-0000008a.
Feb 25 07:56:27 np0005629333 systemd[1]: Started Virtual Machine qemu-170-instance-0000008a.
Feb 25 07:56:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:27.084 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8af97a65-23a1-414d-a3cf-b4a54009d54e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:27.087 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f03a71f8-6880-45b3-b913-ee2b86be302a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:27.116 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[46811807-6095-46dd-9ac7-457381d14bcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:27.136 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb31767-83dd-49d1-8c7f-70d84336e5d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74cacb3c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:f1:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613435, 'reachable_time': 33349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369236, 'error': None, 'target': 'ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:27 np0005629333 nova_compute[244014]: 2026-02-25 12:56:27.152 244018 INFO nova.virt.libvirt.driver [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Deleting instance files /var/lib/nova/instances/dfb7287a-5448-4579-8938-fe909fbf54c6_del#033[00m
Feb 25 07:56:27 np0005629333 nova_compute[244014]: 2026-02-25 12:56:27.154 244018 INFO nova.virt.libvirt.driver [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Deletion of /var/lib/nova/instances/dfb7287a-5448-4579-8938-fe909fbf54c6_del complete#033[00m
Feb 25 07:56:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:27.154 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[64986f35-c74d-46b2-be01-0423b93ca252]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap74cacb3c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 613447, 'tstamp': 613447}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369237, 'error': None, 'target': 'ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap74cacb3c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 613450, 'tstamp': 613450}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369237, 'error': None, 'target': 'ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:27.156 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74cacb3c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:56:27 np0005629333 nova_compute[244014]: 2026-02-25 12:56:27.159 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:27.160 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74cacb3c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:56:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:27.160 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:56:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:27.160 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74cacb3c-00, col_values=(('external_ids', {'iface-id': 'b1eb3633-3950-4cbb-8a36-6968fb223904'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:56:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:27.161 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
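[editor's note] The three ovsdbapp transactions above (DelPortCommand, AddPortCommand, DbSetCommand) map onto plain ovs-vsctl operations; the last two report "Transaction caused no change" because the port was already wired as requested. A rough CLI rendering with names copied from the log, driven from Python only to keep a single example language:

    import subprocess

    for cmd in (
        # DelPortCommand(port=tap74cacb3c-00, bridge=br-ex, if_exists=True)
        ["ovs-vsctl", "--if-exists", "del-port", "br-ex", "tap74cacb3c-00"],
        # AddPortCommand(bridge=br-int, port=tap74cacb3c-00, may_exist=True)
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", "tap74cacb3c-00"],
        # DbSetCommand(Interface, tap74cacb3c-00, external_ids:iface-id=...)
        ["ovs-vsctl", "set", "Interface", "tap74cacb3c-00",
         "external_ids:iface-id=b1eb3633-3950-4cbb-8a36-6968fb223904"],
    ):
        subprocess.run(cmd, check=True)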
Feb 25 07:56:27 np0005629333 nova_compute[244014]: 2026-02-25 12:56:27.239 244018 INFO nova.compute.manager [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:56:27 np0005629333 nova_compute[244014]: 2026-02-25 12:56:27.240 244018 DEBUG oslo.service.loopingcall [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:56:27 np0005629333 nova_compute[244014]: 2026-02-25 12:56:27.240 244018 DEBUG nova.compute.manager [-] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:56:27 np0005629333 nova_compute[244014]: 2026-02-25 12:56:27.241 244018 DEBUG nova.network.neutron [-] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:56:27 np0005629333 nova_compute[244014]: 2026-02-25 12:56:27.468 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024187.4673276, 859fd309-32ea-4025-8312-ddecfa0d6a7f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:56:27 np0005629333 nova_compute[244014]: 2026-02-25 12:56:27.469 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] VM Started (Lifecycle Event)#033[00m
Feb 25 07:56:27 np0005629333 nova_compute[244014]: 2026-02-25 12:56:27.493 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:56:27 np0005629333 nova_compute[244014]: 2026-02-25 12:56:27.504 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024187.4674957, 859fd309-32ea-4025-8312-ddecfa0d6a7f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:56:27 np0005629333 nova_compute[244014]: 2026-02-25 12:56:27.504 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:56:27 np0005629333 nova_compute[244014]: 2026-02-25 12:56:27.527 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:56:27 np0005629333 nova_compute[244014]: 2026-02-25 12:56:27.532 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:56:27 np0005629333 nova_compute[244014]: 2026-02-25 12:56:27.569 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
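[editor's note] The numeric states in the sync message at 12:56:27.532 are nova's power-state constants: the DB still holds 0 while the guest is spawning, and libvirt reports 3 because the domain is created paused before being resumed. For reference, the values as defined in nova.compute.power_state:

    # nova.compute.power_state constants (values from the nova source).
    NOSTATE = 0    # DB power_state while the instance is still building
    RUNNING = 1    # reported once the guest resumes (see 12:56:29.186)
    PAUSED = 3     # reported here, matching the "Paused" lifecycle event
    SHUTDOWN = 4
    CRASHED = 6
    SUSPENDED = 7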
Feb 25 07:56:28 np0005629333 nova_compute[244014]: 2026-02-25 12:56:28.129 244018 DEBUG nova.network.neutron [req-3147b30e-90df-4df0-a401-bba7a20d0b42 req-1a5c5954-4409-4e7f-bf10-c05b482cc8a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Updated VIF entry in instance network info cache for port 44908825-991d-42d4-9bad-18d2a1f5fe9c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:56:28 np0005629333 nova_compute[244014]: 2026-02-25 12:56:28.130 244018 DEBUG nova.network.neutron [req-3147b30e-90df-4df0-a401-bba7a20d0b42 req-1a5c5954-4409-4e7f-bf10-c05b482cc8a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Updating instance_info_cache with network_info: [{"id": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "address": "fa:16:3e:e4:45:21", "network": {"id": "9f82a56f-a5c3-446e-9b75-7dc69c93c56b", "bridge": "br-int", "label": "tempest-network-smoke--133035031", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee4:4521", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "25fa1e8dd32c483686f869da2604f2b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44908825-99", "ovs_interfaceid": "44908825-991d-42d4-9bad-18d2a1f5fe9c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:56:28 np0005629333 nova_compute[244014]: 2026-02-25 12:56:28.164 244018 DEBUG nova.network.neutron [-] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:56:28 np0005629333 nova_compute[244014]: 2026-02-25 12:56:28.167 244018 DEBUG oslo_concurrency.lockutils [req-3147b30e-90df-4df0-a401-bba7a20d0b42 req-1a5c5954-4409-4e7f-bf10-c05b482cc8a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-dfb7287a-5448-4579-8938-fe909fbf54c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:56:28 np0005629333 nova_compute[244014]: 2026-02-25 12:56:28.190 244018 INFO nova.compute.manager [-] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Took 0.95 seconds to deallocate network for instance.#033[00m
Feb 25 07:56:28 np0005629333 nova_compute[244014]: 2026-02-25 12:56:28.234 244018 DEBUG oslo_concurrency.lockutils [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:56:28 np0005629333 nova_compute[244014]: 2026-02-25 12:56:28.234 244018 DEBUG oslo_concurrency.lockutils [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:56:28 np0005629333 nova_compute[244014]: 2026-02-25 12:56:28.318 244018 DEBUG oslo_concurrency.processutils [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:56:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:56:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2277: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 446 KiB/s rd, 3.9 MiB/s wr, 156 op/s
Feb 25 07:56:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:56:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2940670755' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:56:28 np0005629333 nova_compute[244014]: 2026-02-25 12:56:28.879 244018 DEBUG oslo_concurrency.processutils [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
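[editor's note] The resource tracker's Ceph probe above is a plain CLI call that returns JSON (0.561s here, including the mon round-trip audited at 12:56:28). A minimal sketch of running and parsing it with the same flags as the logged command; the key names follow the standard ceph df JSON layout:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    # Cluster-wide totals, which nova turns into DISK_GB inventory.
    print(stats["total_bytes"], stats["total_avail_bytes"])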
Feb 25 07:56:28 np0005629333 nova_compute[244014]: 2026-02-25 12:56:28.885 244018 DEBUG nova.compute.provider_tree [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:56:28 np0005629333 nova_compute[244014]: 2026-02-25 12:56:28.904 244018 DEBUG nova.scheduler.client.report [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
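[editor's note] The inventory dict above is what placement schedules against: usable capacity per resource class is (total - reserved) * allocation_ratio. A quick check of the figures in the log line:

    # Effective capacity implied by the logged inventory.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2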
Feb 25 07:56:28 np0005629333 nova_compute[244014]: 2026-02-25 12:56:28.930 244018 DEBUG oslo_concurrency.lockutils [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:56:28 np0005629333 nova_compute[244014]: 2026-02-25 12:56:28.956 244018 INFO nova.scheduler.client.report [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Deleted allocations for instance dfb7287a-5448-4579-8938-fe909fbf54c6#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.026 244018 DEBUG oslo_concurrency.lockutils [None req-3916ed0b-8478-4b52-a199-a24d4a3909ce f8eb8dbf8cc448ad946fd23aaae2326e 25fa1e8dd32c483686f869da2604f2b1 - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.124 244018 DEBUG nova.compute.manager [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Received event network-vif-plugged-44908825-991d-42d4-9bad-18d2a1f5fe9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.125 244018 DEBUG oslo_concurrency.lockutils [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.125 244018 DEBUG oslo_concurrency.lockutils [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.126 244018 DEBUG oslo_concurrency.lockutils [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "dfb7287a-5448-4579-8938-fe909fbf54c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.126 244018 DEBUG nova.compute.manager [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] No waiting events found dispatching network-vif-plugged-44908825-991d-42d4-9bad-18d2a1f5fe9c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.127 244018 WARNING nova.compute.manager [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Received unexpected event network-vif-plugged-44908825-991d-42d4-9bad-18d2a1f5fe9c for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.127 244018 DEBUG nova.compute.manager [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Received event network-vif-plugged-2a99d3aa-08e4-4712-a8b3-984de83b4b60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.127 244018 DEBUG oslo_concurrency.lockutils [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.128 244018 DEBUG oslo_concurrency.lockutils [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.128 244018 DEBUG oslo_concurrency.lockutils [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.128 244018 DEBUG nova.compute.manager [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Processing event network-vif-plugged-2a99d3aa-08e4-4712-a8b3-984de83b4b60 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.129 244018 DEBUG nova.compute.manager [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Received event network-vif-plugged-2a99d3aa-08e4-4712-a8b3-984de83b4b60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.129 244018 DEBUG oslo_concurrency.lockutils [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.130 244018 DEBUG oslo_concurrency.lockutils [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.130 244018 DEBUG oslo_concurrency.lockutils [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.130 244018 DEBUG nova.compute.manager [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] No waiting events found dispatching network-vif-plugged-2a99d3aa-08e4-4712-a8b3-984de83b4b60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.131 244018 WARNING nova.compute.manager [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Received unexpected event network-vif-plugged-2a99d3aa-08e4-4712-a8b3-984de83b4b60 for instance with vm_state building and task_state spawning.#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.131 244018 DEBUG nova.compute.manager [req-305257d0-59c8-4d48-b291-064b9b3c2fb2 req-b37782e9-13ac-40e0-ac7c-de15be274da9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Received event network-vif-deleted-44908825-991d-42d4-9bad-18d2a1f5fe9c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.132 244018 DEBUG nova.compute.manager [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.137 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024189.1365752, 859fd309-32ea-4025-8312-ddecfa0d6a7f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.137 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.140 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.145 244018 INFO nova.virt.libvirt.driver [-] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Instance spawned successfully.#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.145 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.171 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.186 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.190 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.190 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.191 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.191 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.192 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.192 244018 DEBUG nova.virt.libvirt.driver [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.215 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.317 244018 INFO nova.compute.manager [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Took 11.71 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.318 244018 DEBUG nova.compute.manager [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.402 244018 INFO nova.compute.manager [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Took 12.88 seconds to build instance.#033[00m
Feb 25 07:56:29 np0005629333 nova_compute[244014]: 2026-02-25 12:56:29.426 244018 DEBUG oslo_concurrency.lockutils [None req-b17fa73a-4f8f-4152-9d94-754ffc379dde 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:56:30 np0005629333 nova_compute[244014]: 2026-02-25 12:56:30.514 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2278: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Feb 25 07:56:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:56:31
Feb 25 07:56:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:56:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:56:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'images', '.mgr', 'vms']
Feb 25 07:56:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:56:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:56:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:56:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:56:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:56:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:56:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:56:31 np0005629333 nova_compute[244014]: 2026-02-25 12:56:31.753 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:56:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:56:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:56:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:56:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:56:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:56:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:56:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:56:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:56:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
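The rbd_support reload above walks each RBD pool (vms, volumes, backups, images) looking for mirror-snapshot and trash-purge schedules; the empty start_after= means it scans from the beginning. The equivalent check from the CLI, sketched in Python (assumes the "rbd" CLI and the pool names from the log):

    # List the schedules the rbd_support module is loading above;
    # assumes the "rbd" CLI and pool "vms" from the log.
    import subprocess

    for cmd in (["rbd", "mirror", "snapshot", "schedule", "ls",
                 "--pool", "vms", "--recursive"],
                ["rbd", "trash", "purge", "schedule", "ls",
                 "--pool", "vms", "--recursive"]):
        subprocess.run(cmd, check=False)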
Feb 25 07:56:32 np0005629333 nova_compute[244014]: 2026-02-25 12:56:32.196 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024177.195008, 7102d0db-32cc-4a5e-8282-cf5266710872 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:56:32 np0005629333 nova_compute[244014]: 2026-02-25 12:56:32.197 244018 INFO nova.compute.manager [-] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] VM Stopped (Lifecycle Event)
Feb 25 07:56:32 np0005629333 nova_compute[244014]: 2026-02-25 12:56:32.222 244018 DEBUG nova.compute.manager [None req-fd89bb00-9117-4391-9ec6-9e26ee3cc708 - - - - - -] [instance: 7102d0db-32cc-4a5e-8282-cf5266710872] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:56:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2279: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 152 op/s
Feb 25 07:56:33 np0005629333 nova_compute[244014]: 2026-02-25 12:56:33.514 244018 DEBUG nova.compute.manager [req-594fbb57-afa1-44b1-997e-31eada81f5fe req-f7592285-5e8d-4a65-87f4-b5a88b799457 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Received event network-changed-2a99d3aa-08e4-4712-a8b3-984de83b4b60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:56:33 np0005629333 nova_compute[244014]: 2026-02-25 12:56:33.514 244018 DEBUG nova.compute.manager [req-594fbb57-afa1-44b1-997e-31eada81f5fe req-f7592285-5e8d-4a65-87f4-b5a88b799457 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Refreshing instance network info cache due to event network-changed-2a99d3aa-08e4-4712-a8b3-984de83b4b60. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:56:33 np0005629333 nova_compute[244014]: 2026-02-25 12:56:33.515 244018 DEBUG oslo_concurrency.lockutils [req-594fbb57-afa1-44b1-997e-31eada81f5fe req-f7592285-5e8d-4a65-87f4-b5a88b799457 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-859fd309-32ea-4025-8312-ddecfa0d6a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:56:33 np0005629333 nova_compute[244014]: 2026-02-25 12:56:33.515 244018 DEBUG oslo_concurrency.lockutils [req-594fbb57-afa1-44b1-997e-31eada81f5fe req-f7592285-5e8d-4a65-87f4-b5a88b799457 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-859fd309-32ea-4025-8312-ddecfa0d6a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:56:33 np0005629333 nova_compute[244014]: 2026-02-25 12:56:33.516 244018 DEBUG nova.network.neutron [req-594fbb57-afa1-44b1-997e-31eada81f5fe req-f7592285-5e8d-4a65-87f4-b5a88b799457 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Refreshing network info cache for port 2a99d3aa-08e4-4712-a8b3-984de83b4b60 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:56:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:56:33 np0005629333 nova_compute[244014]: 2026-02-25 12:56:33.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:56:33 np0005629333 nova_compute[244014]: 2026-02-25 12:56:33.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:56:33 np0005629333 nova_compute[244014]: 2026-02-25 12:56:33.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 07:56:33 np0005629333 nova_compute[244014]: 2026-02-25 12:56:33.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 07:56:34 np0005629333 nova_compute[244014]: 2026-02-25 12:56:34.165 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:56:34 np0005629333 nova_compute[244014]: 2026-02-25 12:56:34.166 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:56:34 np0005629333 nova_compute[244014]: 2026-02-25 12:56:34.166 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 25 07:56:34 np0005629333 nova_compute[244014]: 2026-02-25 12:56:34.167 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid e728a9dc-bb04-4a25-bcad-b787a044bc0b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:56:34 np0005629333 ovn_controller[147040]: 2026-02-25T12:56:34Z|01452|binding|INFO|Releasing lport b1eb3633-3950-4cbb-8a36-6968fb223904 from this chassis (sb_readonly=0)
Feb 25 07:56:34 np0005629333 nova_compute[244014]: 2026-02-25 12:56:34.276 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:56:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2280: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 25 07:56:35 np0005629333 nova_compute[244014]: 2026-02-25 12:56:35.515 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:56:35 np0005629333 nova_compute[244014]: 2026-02-25 12:56:35.659 244018 DEBUG nova.network.neutron [req-594fbb57-afa1-44b1-997e-31eada81f5fe req-f7592285-5e8d-4a65-87f4-b5a88b799457 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Updated VIF entry in instance network info cache for port 2a99d3aa-08e4-4712-a8b3-984de83b4b60. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 07:56:35 np0005629333 nova_compute[244014]: 2026-02-25 12:56:35.660 244018 DEBUG nova.network.neutron [req-594fbb57-afa1-44b1-997e-31eada81f5fe req-f7592285-5e8d-4a65-87f4-b5a88b799457 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Updating instance_info_cache with network_info: [{"id": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "address": "fa:16:3e:7f:bf:de", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a99d3aa-08", "ovs_interfaceid": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:56:35 np0005629333 nova_compute[244014]: 2026-02-25 12:56:35.799 244018 DEBUG oslo_concurrency.lockutils [req-594fbb57-afa1-44b1-997e-31eada81f5fe req-f7592285-5e8d-4a65-87f4-b5a88b799457 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-859fd309-32ea-4025-8312-ddecfa0d6a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:56:36 np0005629333 nova_compute[244014]: 2026-02-25 12:56:36.153 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Updating instance_info_cache with network_info: [{"id": "bde15f84-edfb-445b-b129-ec33331763f0", "address": "fa:16:3e:99:9e:1f", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbde15f84-ed", "ovs_interfaceid": "bde15f84-edfb-445b-b129-ec33331763f0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:56:36 np0005629333 nova_compute[244014]: 2026-02-25 12:56:36.190 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:56:36 np0005629333 nova_compute[244014]: 2026-02-25 12:56:36.191 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 25 07:56:36 np0005629333 nova_compute[244014]: 2026-02-25 12:56:36.192 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:56:36 np0005629333 nova_compute[244014]: 2026-02-25 12:56:36.243 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:56:36 np0005629333 nova_compute[244014]: 2026-02-25 12:56:36.244 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:56:36 np0005629333 nova_compute[244014]: 2026-02-25 12:56:36.244 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:56:36 np0005629333 nova_compute[244014]: 2026-02-25 12:56:36.245 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 07:56:36 np0005629333 nova_compute[244014]: 2026-02-25 12:56:36.246 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
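The instance_info_cache payload two lines above is plain JSON: one OVN-bound VIF carrying a fixed IP with a floating IP hanging off it. A short sketch of pulling the addresses out of that structure (the network_info literal is abbreviated here to just the fields the sketch reads):

    # Extract fixed/floating IPs from the network_info structure
    # logged above (abbreviated to the fields this sketch uses).
    network_info = [{
        "id": "2a99d3aa-08e4-4712-a8b3-984de83b4b60",
        "network": {"subnets": [{
            "ips": [{"address": "10.100.0.4",
                     "floating_ips": [{"address": "192.168.122.242"}]}],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], floats)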
Feb 25 07:56:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2281: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 25 07:56:36 np0005629333 nova_compute[244014]: 2026-02-25 12:56:36.756 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:56:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:56:36 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2611800794' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:56:36 np0005629333 nova_compute[244014]: 2026-02-25 12:56:36.853 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:56:36 np0005629333 nova_compute[244014]: 2026-02-25 12:56:36.964 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:56:36 np0005629333 nova_compute[244014]: 2026-02-25 12:56:36.965 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:56:36 np0005629333 nova_compute[244014]: 2026-02-25 12:56:36.969 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:56:36 np0005629333 nova_compute[244014]: 2026-02-25 12:56:36.970 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Feb 25 07:56:37 np0005629333 nova_compute[244014]: 2026-02-25 12:56:37.168 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:56:37 np0005629333 nova_compute[244014]: 2026-02-25 12:56:37.169 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3213MB free_disk=59.92087952699512GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 07:56:37 np0005629333 nova_compute[244014]: 2026-02-25 12:56:37.170 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:56:37 np0005629333 nova_compute[244014]: 2026-02-25 12:56:37.170 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:56:37 np0005629333 nova_compute[244014]: 2026-02-25 12:56:37.433 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance e728a9dc-bb04-4a25-bcad-b787a044bc0b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 07:56:37 np0005629333 nova_compute[244014]: 2026-02-25 12:56:37.434 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 859fd309-32ea-4025-8312-ddecfa0d6a7f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 07:56:37 np0005629333 nova_compute[244014]: 2026-02-25 12:56:37.435 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 07:56:37 np0005629333 nova_compute[244014]: 2026-02-25 12:56:37.435 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 07:56:37 np0005629333 nova_compute[244014]: 2026-02-25 12:56:37.481 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
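The resource audit shells out to the exact command shown above to size the RBD-backed storage. A sketch of the same call and the cluster-level fields it would read, assuming the usual "stats" section of the ceph df JSON schema (those key names are an assumption; the output itself is not in this log):

    # Re-run the command from the log and read cluster capacity;
    # the "stats" key names are assumed from the common ceph df
    # JSON schema, not shown in this log.
    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])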
Feb 25 07:56:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:56:38 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3826527813' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:56:38 np0005629333 nova_compute[244014]: 2026-02-25 12:56:38.058 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:56:38 np0005629333 nova_compute[244014]: 2026-02-25 12:56:38.067 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:56:38 np0005629333 nova_compute[244014]: 2026-02-25 12:56:38.091 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:56:38 np0005629333 nova_compute[244014]: 2026-02-25 12:56:38.186 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 07:56:38 np0005629333 nova_compute[244014]: 2026-02-25 12:56:38.187 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
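The inventory data above fixes what placement will schedule on this node: per resource class, schedulable capacity is (total - reserved) × allocation_ratio. Worked from the logged numbers:

    # Capacity implied by the inventory data logged above:
    # capacity = (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2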
Feb 25 07:56:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:56:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2282: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 101 op/s
Feb 25 07:56:39 np0005629333 nova_compute[244014]: 2026-02-25 12:56:39.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:56:40 np0005629333 ovn_controller[147040]: 2026-02-25T12:56:40Z|00182|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7f:bf:de 10.100.0.4
Feb 25 07:56:40 np0005629333 ovn_controller[147040]: 2026-02-25T12:56:40Z|00183|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:bf:de 10.100.0.4
Feb 25 07:56:40 np0005629333 nova_compute[244014]: 2026-02-25 12:56:40.518 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:56:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2283: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1023 B/s wr, 68 op/s
Feb 25 07:56:41 np0005629333 nova_compute[244014]: 2026-02-25 12:56:41.720 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024186.7188518, dfb7287a-5448-4579-8938-fe909fbf54c6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:56:41 np0005629333 nova_compute[244014]: 2026-02-25 12:56:41.720 244018 INFO nova.compute.manager [-] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] VM Stopped (Lifecycle Event)
Feb 25 07:56:41 np0005629333 podman[369348]: 2026-02-25 12:56:41.737576266 +0000 UTC m=+0.074110201 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Feb 25 07:56:41 np0005629333 nova_compute[244014]: 2026-02-25 12:56:41.741 244018 DEBUG nova.compute.manager [None req-f72da2e3-b062-4b8a-bee2-1d1de480d0d6 - - - - - -] [instance: dfb7287a-5448-4579-8938-fe909fbf54c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:56:41 np0005629333 nova_compute[244014]: 2026-02-25 12:56:41.760 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:56:41 np0005629333 podman[369349]: 2026-02-25 12:56:41.766592025 +0000 UTC m=+0.102918675 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0)
Feb 25 07:56:41 np0005629333 nova_compute[244014]: 2026-02-25 12:56:41.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:56:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:56:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:56:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:56:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:56:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011237722717163936 of space, bias 1.0, pg target 0.33713168151491807 quantized to 32 (current 32)
Feb 25 07:56:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:56:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:56:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:56:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:56:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:56:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024941998890556766 of space, bias 1.0, pg target 0.748259966716703 quantized to 32 (current 32)
Feb 25 07:56:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:56:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3756613840991307e-06 of space, bias 4.0, pg target 0.001650793660918957 quantized to 16 (current 16)
Feb 25 07:56:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:56:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:56:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:56:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:56:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:56:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:56:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:56:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:56:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:56:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
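Each pg_autoscaler line above applies the same formula: pg target = usage_ratio × bias × (target PGs per OSD × number of OSDs), then quantized (to a power of two, with hysteresis against the current pg_num). The logged values reproduce, to float rounding, with 3 OSDs at a target of 100 PGs per OSD; both constants are assumptions, since neither appears in this log, but they fit every pool exactly:

    # Reproduce the autoscaler's "pg target" numbers logged above.
    # Assumes 3 OSDs and mon_target_pg_per_osd=100 (=> 300 target PGs).
    TARGET_PGS = 100 * 3

    for name, usage, bias in [
        (".mgr",               7.185749983720779e-06, 1.0),
        ("vms",                0.0011237722717163936, 1.0),
        ("cephfs.cephfs.meta", 1.3756613840991307e-06, 4.0),
    ]:
        print(name, usage * bias * TARGET_PGS)
    # ~0.00215572..., ~0.33713168..., ~0.00165079... -- the same
    # "pg target" values logged above, before quantization.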
Feb 25 07:56:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2284: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Feb 25 07:56:42 np0005629333 nova_compute[244014]: 2026-02-25 12:56:42.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:56:42 np0005629333 nova_compute[244014]: 2026-02-25 12:56:42.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:56:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:56:43 np0005629333 nova_compute[244014]: 2026-02-25 12:56:43.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:56:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2285: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 07:56:44 np0005629333 nova_compute[244014]: 2026-02-25 12:56:44.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:56:44 np0005629333 nova_compute[244014]: 2026-02-25 12:56:44.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 07:56:45 np0005629333 nova_compute[244014]: 2026-02-25 12:56:45.520 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:56:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2286: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 07:56:46 np0005629333 nova_compute[244014]: 2026-02-25 12:56:46.762 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:56:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:56:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2944514131' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:56:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:56:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2944514131' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.721163) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024208721197, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 1686, "num_deletes": 250, "total_data_size": 2637898, "memory_usage": 2669248, "flush_reason": "Manual Compaction"}
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024208730323, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 1522098, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47361, "largest_seqno": 49046, "table_properties": {"data_size": 1516493, "index_size": 2681, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15126, "raw_average_key_size": 20, "raw_value_size": 1504016, "raw_average_value_size": 2071, "num_data_blocks": 122, "num_entries": 726, "num_filter_entries": 726, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772024034, "oldest_key_time": 1772024034, "file_creation_time": 1772024208, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 9211 microseconds, and 4061 cpu microseconds.
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.730371) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 1522098 bytes OK
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.730394) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.732974) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.732988) EVENT_LOG_v1 {"time_micros": 1772024208732983, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.733007) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 2630622, prev total WAL file size 2630622, number of live WAL files 2.
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.733591) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373533' seq:72057594037927935, type:22 .. '6D6772737461740032303034' seq:0, type:0; will stop at (end)
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(1486KB)], [110(9660KB)]
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024208733673, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 11414383, "oldest_snapshot_seqno": -1}
Feb 25 07:56:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2287: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 7081 keys, 9139339 bytes, temperature: kUnknown
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024208813860, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 9139339, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9093884, "index_size": 26655, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17733, "raw_key_size": 183288, "raw_average_key_size": 25, "raw_value_size": 8969409, "raw_average_value_size": 1266, "num_data_blocks": 1045, "num_entries": 7081, "num_filter_entries": 7081, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772024208, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.814174) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 9139339 bytes
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.815464) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 142.2 rd, 113.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.4 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(13.5) write-amplify(6.0) OK, records in: 7512, records dropped: 431 output_compression: NoCompression
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.815495) EVENT_LOG_v1 {"time_micros": 1772024208815480, "job": 66, "event": "compaction_finished", "compaction_time_micros": 80246, "compaction_time_cpu_micros": 32818, "output_level": 6, "num_output_files": 1, "total_output_size": 9139339, "num_input_records": 7512, "num_output_records": 7081, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
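The JOB 66 summary above is self-consistent: it merged the 1,522,098-byte L0 flush (table 112) with the existing L6 file (110) for 11,414,383 input bytes, produced one 9,139,339-byte L6 table (113), and dropped 431 of 7,512 records. The two amplification figures follow directly from those byte counts:

    # Amplification figures from the JOB 66 numbers logged above.
    l0_in, total_in, out = 1522098, 11414383, 9139339
    write_amp = out / l0_in            # ~6.0  -> "write-amplify(6.0)"
    rw_amp = (total_in + out) / l0_in  # ~13.5 -> "read-write-amplify(13.5)"
    print(round(write_amp, 1), round(rw_amp, 1))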
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024208815906, "job": 66, "event": "table_file_deletion", "file_number": 112}
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024208817333, "job": 66, "event": "table_file_deletion", "file_number": 110}
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.733450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.817478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.817489) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.817493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.817500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:56:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:56:48.817503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:56:49 np0005629333 nova_compute[244014]: 2026-02-25 12:56:49.367 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:56:49 np0005629333 nova_compute[244014]: 2026-02-25 12:56:49.387 244018 INFO nova.compute.manager [None req-20bd1fec-e1be-4f22-9377-03f383e4b505 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Get console output
Feb 25 07:56:49 np0005629333 nova_compute[244014]: 2026-02-25 12:56:49.396 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 07:56:50 np0005629333 nova_compute[244014]: 2026-02-25 12:56:50.522 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:56:50 np0005629333 nova_compute[244014]: 2026-02-25 12:56:50.631 244018 DEBUG nova.compute.manager [req-ba1f1d63-7e23-496f-9a4b-123d90771d94 req-be348795-0c1a-4b0d-bd0c-8e85b196238a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-changed-bde15f84-edfb-445b-b129-ec33331763f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:56:50 np0005629333 nova_compute[244014]: 2026-02-25 12:56:50.632 244018 DEBUG nova.compute.manager [req-ba1f1d63-7e23-496f-9a4b-123d90771d94 req-be348795-0c1a-4b0d-bd0c-8e85b196238a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Refreshing instance network info cache due to event network-changed-bde15f84-edfb-445b-b129-ec33331763f0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:56:50 np0005629333 nova_compute[244014]: 2026-02-25 12:56:50.632 244018 DEBUG oslo_concurrency.lockutils [req-ba1f1d63-7e23-496f-9a4b-123d90771d94 req-be348795-0c1a-4b0d-bd0c-8e85b196238a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:56:50 np0005629333 nova_compute[244014]: 2026-02-25 12:56:50.632 244018 DEBUG oslo_concurrency.lockutils [req-ba1f1d63-7e23-496f-9a4b-123d90771d94 req-be348795-0c1a-4b0d-bd0c-8e85b196238a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:56:50 np0005629333 nova_compute[244014]: 2026-02-25 12:56:50.632 244018 DEBUG nova.network.neutron [req-ba1f1d63-7e23-496f-9a4b-123d90771d94 req-be348795-0c1a-4b0d-bd0c-8e85b196238a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Refreshing network info cache for port bde15f84-edfb-445b-b129-ec33331763f0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
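The "Ignored error" above has the classic read-loop bug shape: a reader that can return None (for example, nothing buffered on the console pty) fed into bytes concatenation. A minimal reproduction of the failure mode, illustrative only and not Nova's actual console-read code:

    # Minimal reproduction of "can't concat NoneType to bytes";
    # illustrative only -- this is not Nova's console-read code.
    def read_chunk():
        return None  # e.g. EOF / nothing buffered on the pty

    buf = b""
    try:
        buf += read_chunk()  # TypeError: can't concat NoneType to bytes
    except TypeError as e:
        print(f"Ignored error while reading from instance console pty: {e}")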
Feb 25 07:56:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2288: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 07:56:51 np0005629333 nova_compute[244014]: 2026-02-25 12:56:51.653 244018 INFO nova.compute.manager [None req-f6443d22-817e-4990-b9a1-1217a191288f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Get console output#033[00m
Feb 25 07:56:51 np0005629333 nova_compute[244014]: 2026-02-25 12:56:51.657 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Feb 25 07:56:51 np0005629333 nova_compute[244014]: 2026-02-25 12:56:51.764 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:51 np0005629333 nova_compute[244014]: 2026-02-25 12:56:51.910 244018 DEBUG nova.network.neutron [req-ba1f1d63-7e23-496f-9a4b-123d90771d94 req-be348795-0c1a-4b0d-bd0c-8e85b196238a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Updated VIF entry in instance network info cache for port bde15f84-edfb-445b-b129-ec33331763f0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:56:51 np0005629333 nova_compute[244014]: 2026-02-25 12:56:51.911 244018 DEBUG nova.network.neutron [req-ba1f1d63-7e23-496f-9a4b-123d90771d94 req-be348795-0c1a-4b0d-bd0c-8e85b196238a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Updating instance_info_cache with network_info: [{"id": "bde15f84-edfb-445b-b129-ec33331763f0", "address": "fa:16:3e:99:9e:1f", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbde15f84-ed", "ovs_interfaceid": "bde15f84-edfb-445b-b129-ec33331763f0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:56:51 np0005629333 nova_compute[244014]: 2026-02-25 12:56:51.931 244018 DEBUG oslo_concurrency.lockutils [req-ba1f1d63-7e23-496f-9a4b-123d90771d94 req-be348795-0c1a-4b0d-bd0c-8e85b196238a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
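
The exchange above is nova's standard cache-refresh pattern: take the named lock "refresh_cache-<instance uuid>", fetch port state from neutron, write the network_info JSON back into instance_info_cache, then release the lock. As a minimal, illustrative sketch (not nova code; the helper name is hypothetical), this is how the fixed and floating addresses can be read out of a cache entry shaped like the one logged at 12:56:51.911:

    def addresses_from_network_info(network_info):
        """Hypothetical helper: collect IPs from a cached VIF list."""
        fixed, floating = [], []
        for vif in network_info:                         # one entry per port
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    fixed.append(ip["address"])          # 10.100.0.11 above
                    for fip in ip.get("floating_ips", []):
                        floating.append(fip["address"])  # 192.168.122.198 above
        return fixed, floating
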
Feb 25 07:56:52 np0005629333 nova_compute[244014]: 2026-02-25 12:56:52.717 244018 DEBUG nova.compute.manager [req-484a5e73-285b-4586-be00-9855d3ae840c req-1045f6d7-112b-4259-8186-64e21005f1b2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-vif-unplugged-bde15f84-edfb-445b-b129-ec33331763f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:56:52 np0005629333 nova_compute[244014]: 2026-02-25 12:56:52.717 244018 DEBUG oslo_concurrency.lockutils [req-484a5e73-285b-4586-be00-9855d3ae840c req-1045f6d7-112b-4259-8186-64e21005f1b2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:56:52 np0005629333 nova_compute[244014]: 2026-02-25 12:56:52.718 244018 DEBUG oslo_concurrency.lockutils [req-484a5e73-285b-4586-be00-9855d3ae840c req-1045f6d7-112b-4259-8186-64e21005f1b2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:56:52 np0005629333 nova_compute[244014]: 2026-02-25 12:56:52.718 244018 DEBUG oslo_concurrency.lockutils [req-484a5e73-285b-4586-be00-9855d3ae840c req-1045f6d7-112b-4259-8186-64e21005f1b2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:56:52 np0005629333 nova_compute[244014]: 2026-02-25 12:56:52.719 244018 DEBUG nova.compute.manager [req-484a5e73-285b-4586-be00-9855d3ae840c req-1045f6d7-112b-4259-8186-64e21005f1b2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] No waiting events found dispatching network-vif-unplugged-bde15f84-edfb-445b-b129-ec33331763f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:56:52 np0005629333 nova_compute[244014]: 2026-02-25 12:56:52.719 244018 WARNING nova.compute.manager [req-484a5e73-285b-4586-be00-9855d3ae840c req-1045f6d7-112b-4259-8186-64e21005f1b2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received unexpected event network-vif-unplugged-bde15f84-edfb-445b-b129-ec33331763f0 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:56:52 np0005629333 nova_compute[244014]: 2026-02-25 12:56:52.720 244018 DEBUG nova.compute.manager [req-484a5e73-285b-4586-be00-9855d3ae840c req-1045f6d7-112b-4259-8186-64e21005f1b2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:56:52 np0005629333 nova_compute[244014]: 2026-02-25 12:56:52.720 244018 DEBUG oslo_concurrency.lockutils [req-484a5e73-285b-4586-be00-9855d3ae840c req-1045f6d7-112b-4259-8186-64e21005f1b2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:56:52 np0005629333 nova_compute[244014]: 2026-02-25 12:56:52.721 244018 DEBUG oslo_concurrency.lockutils [req-484a5e73-285b-4586-be00-9855d3ae840c req-1045f6d7-112b-4259-8186-64e21005f1b2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:56:52 np0005629333 nova_compute[244014]: 2026-02-25 12:56:52.721 244018 DEBUG oslo_concurrency.lockutils [req-484a5e73-285b-4586-be00-9855d3ae840c req-1045f6d7-112b-4259-8186-64e21005f1b2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:56:52 np0005629333 nova_compute[244014]: 2026-02-25 12:56:52.721 244018 DEBUG nova.compute.manager [req-484a5e73-285b-4586-be00-9855d3ae840c req-1045f6d7-112b-4259-8186-64e21005f1b2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] No waiting events found dispatching network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:56:52 np0005629333 nova_compute[244014]: 2026-02-25 12:56:52.722 244018 WARNING nova.compute.manager [req-484a5e73-285b-4586-be00-9855d3ae840c req-1045f6d7-112b-4259-8186-64e21005f1b2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received unexpected event network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 for instance with vm_state active and task_state None.#033[00m
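
Both warnings above come from the same mechanism: external events from neutron are matched against waiters that nova registers before operations that expect them, and "No waiting events found" means nothing was registered, so the event is reported as unexpected and dropped. A hypothetical sketch of that pop-or-warn pattern (nova's real code is nova.compute.manager.InstanceEvents; the class below is illustrative only):

    import threading

    class EventWaiters:
        """Illustrative stand-in for InstanceEvents, not nova's code."""
        def __init__(self):
            self._lock = threading.Lock()    # plays the "<uuid>-events" lock
            self._waiters = {}               # (instance_uuid, event) -> Event

        def expect(self, instance_uuid, event):
            with self._lock:
                ev = threading.Event()
                self._waiters[(instance_uuid, event)] = ev
                return ev

        def pop(self, instance_uuid, event):
            with self._lock:
                return self._waiters.pop((instance_uuid, event), None)

    waiters = EventWaiters()
    uuid = "e728a9dc-bb04-4a25-bcad-b787a044bc0b"
    if waiters.pop(uuid, "network-vif-plugged") is None:
        print("unexpected event")            # no one was waiting, as logged
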
Feb 25 07:56:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2289: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 07:56:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:56:53 np0005629333 nova_compute[244014]: 2026-02-25 12:56:53.732 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2290: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.3 KiB/s rd, 15 KiB/s wr, 1 op/s
Feb 25 07:56:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:55.033 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:56:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:55.034 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:56:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:55.035 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:56:55 np0005629333 nova_compute[244014]: 2026-02-25 12:56:55.083 244018 DEBUG nova.compute.manager [req-df208256-557c-474b-a349-21ad3dc62e1e req-22bd5e16-78d5-48ef-96e5-cd5b044976be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-changed-bde15f84-edfb-445b-b129-ec33331763f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:56:55 np0005629333 nova_compute[244014]: 2026-02-25 12:56:55.083 244018 DEBUG nova.compute.manager [req-df208256-557c-474b-a349-21ad3dc62e1e req-22bd5e16-78d5-48ef-96e5-cd5b044976be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Refreshing instance network info cache due to event network-changed-bde15f84-edfb-445b-b129-ec33331763f0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:56:55 np0005629333 nova_compute[244014]: 2026-02-25 12:56:55.084 244018 DEBUG oslo_concurrency.lockutils [req-df208256-557c-474b-a349-21ad3dc62e1e req-22bd5e16-78d5-48ef-96e5-cd5b044976be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:56:55 np0005629333 nova_compute[244014]: 2026-02-25 12:56:55.085 244018 DEBUG oslo_concurrency.lockutils [req-df208256-557c-474b-a349-21ad3dc62e1e req-22bd5e16-78d5-48ef-96e5-cd5b044976be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:56:55 np0005629333 nova_compute[244014]: 2026-02-25 12:56:55.085 244018 DEBUG nova.network.neutron [req-df208256-557c-474b-a349-21ad3dc62e1e req-22bd5e16-78d5-48ef-96e5-cd5b044976be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Refreshing network info cache for port bde15f84-edfb-445b-b129-ec33331763f0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:56:55 np0005629333 nova_compute[244014]: 2026-02-25 12:56:55.211 244018 INFO nova.compute.manager [None req-a502e8ad-9a3e-40e0-b8ba-508ca0897766 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Get console output#033[00m
Feb 25 07:56:55 np0005629333 nova_compute[244014]: 2026-02-25 12:56:55.216 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
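
The "Ignored error" entries at 12:56:51.657 and 12:56:55.216 are a plain Python TypeError: the console-log reader concatenates bytes read from the pty, and when a read yields None instead of bytes the addition fails with exactly the message in the log. A two-line reproduction on Python 3.9:

    try:
        b"console output so far" + None
    except TypeError as exc:
        print(exc)   # -> can't concat NoneType to bytes
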
Feb 25 07:56:55 np0005629333 nova_compute[244014]: 2026-02-25 12:56:55.524 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2291: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.3 KiB/s rd, 15 KiB/s wr, 1 op/s
Feb 25 07:56:56 np0005629333 nova_compute[244014]: 2026-02-25 12:56:56.767 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.367 244018 DEBUG nova.compute.manager [req-88281d65-dc09-4694-9241-00aa607e2662 req-bc955ac5-5cac-4b5f-b8c0-31005289c3dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Received event network-changed-2a99d3aa-08e4-4712-a8b3-984de83b4b60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.368 244018 DEBUG nova.compute.manager [req-88281d65-dc09-4694-9241-00aa607e2662 req-bc955ac5-5cac-4b5f-b8c0-31005289c3dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Refreshing instance network info cache due to event network-changed-2a99d3aa-08e4-4712-a8b3-984de83b4b60. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.369 244018 DEBUG oslo_concurrency.lockutils [req-88281d65-dc09-4694-9241-00aa607e2662 req-bc955ac5-5cac-4b5f-b8c0-31005289c3dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-859fd309-32ea-4025-8312-ddecfa0d6a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.369 244018 DEBUG oslo_concurrency.lockutils [req-88281d65-dc09-4694-9241-00aa607e2662 req-bc955ac5-5cac-4b5f-b8c0-31005289c3dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-859fd309-32ea-4025-8312-ddecfa0d6a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.370 244018 DEBUG nova.network.neutron [req-88281d65-dc09-4694-9241-00aa607e2662 req-bc955ac5-5cac-4b5f-b8c0-31005289c3dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Refreshing network info cache for port 2a99d3aa-08e4-4712-a8b3-984de83b4b60 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.376 244018 DEBUG nova.network.neutron [req-df208256-557c-474b-a349-21ad3dc62e1e req-22bd5e16-78d5-48ef-96e5-cd5b044976be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Updated VIF entry in instance network info cache for port bde15f84-edfb-445b-b129-ec33331763f0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.377 244018 DEBUG nova.network.neutron [req-df208256-557c-474b-a349-21ad3dc62e1e req-22bd5e16-78d5-48ef-96e5-cd5b044976be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Updating instance_info_cache with network_info: [{"id": "bde15f84-edfb-445b-b129-ec33331763f0", "address": "fa:16:3e:99:9e:1f", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbde15f84-ed", "ovs_interfaceid": "bde15f84-edfb-445b-b129-ec33331763f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.422 244018 DEBUG oslo_concurrency.lockutils [req-df208256-557c-474b-a349-21ad3dc62e1e req-22bd5e16-78d5-48ef-96e5-cd5b044976be 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.423 244018 DEBUG oslo_concurrency.lockutils [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "859fd309-32ea-4025-8312-ddecfa0d6a7f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.423 244018 DEBUG oslo_concurrency.lockutils [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.424 244018 DEBUG oslo_concurrency.lockutils [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.424 244018 DEBUG oslo_concurrency.lockutils [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.425 244018 DEBUG oslo_concurrency.lockutils [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.427 244018 INFO nova.compute.manager [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Terminating instance#033[00m
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.428 244018 DEBUG nova.compute.manager [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:56:57 np0005629333 kernel: tap2a99d3aa-08 (unregistering): left promiscuous mode
Feb 25 07:56:57 np0005629333 NetworkManager[49836]: <info>  [1772024217.4853] device (tap2a99d3aa-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:56:57 np0005629333 ovn_controller[147040]: 2026-02-25T12:56:57Z|01453|binding|INFO|Releasing lport 2a99d3aa-08e4-4712-a8b3-984de83b4b60 from this chassis (sb_readonly=0)
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.491 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:57 np0005629333 ovn_controller[147040]: 2026-02-25T12:56:57Z|01454|binding|INFO|Setting lport 2a99d3aa-08e4-4712-a8b3-984de83b4b60 down in Southbound
Feb 25 07:56:57 np0005629333 ovn_controller[147040]: 2026-02-25T12:56:57Z|01455|binding|INFO|Removing iface tap2a99d3aa-08 ovn-installed in OVS
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.494 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:57.501 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:bf:de 10.100.0.4'], port_security=['fa:16:3e:7f:bf:de 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '859fd309-32ea-4025-8312-ddecfa0d6a7f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf564118-674f-4f5e-bcfa-9616c32835a7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=394a85b3-bdd5-4c73-8ada-c7f3f99bacc6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=2a99d3aa-08e4-4712-a8b3-984de83b4b60) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:56:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:57.503 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 2a99d3aa-08e4-4712-a8b3-984de83b4b60 in datapath 74cacb3c-0135-4e0b-9776-478b5f7a3349 unbound from our chassis#033[00m
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.503 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:57.505 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74cacb3c-0135-4e0b-9776-478b5f7a3349#033[00m
Feb 25 07:56:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:57.524 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d8f946f2-fd94-4f75-adc3-6c9b4c1e302a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:57 np0005629333 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Feb 25 07:56:57 np0005629333 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d0000008a.scope: Consumed 12.407s CPU time.
Feb 25 07:56:57 np0005629333 systemd-machined[210048]: Machine qemu-170-instance-0000008a terminated.
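
The scope name in the two systemd lines above is unit-name escaped: systemd encodes "-" inside names as "\x2d", which is why the log shows machine-qemu\x2d170\x2dinstance\x2d0000008a.scope for machine qemu-170-instance-0000008a. A quick decode of the \xNN escapes (full systemd unescaping handles more cases; this covers the form seen here):

    import re

    name = r"machine-qemu\x2d170\x2dinstance\x2d0000008a.scope"
    decoded = re.sub(r"\\x([0-9a-fA-F]{2})",
                     lambda m: chr(int(m.group(1), 16)), name)
    print(decoded)   # machine-qemu-170-instance-0000008a.scope
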
Feb 25 07:56:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:57.560 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[6b7f4ebd-eb99-4393-b57c-8898cc9935a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:57.564 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c0f7c5e2-11a1-4913-9038-63198eb28eba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:57.594 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[88019a38-083d-469a-b48f-e7b51f128288]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:57.613 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fbad3c8b-0498-4dcc-a376-a1acb065e9ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74cacb3c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:f1:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613435, 'reachable_time': 33349, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369405, 'error': None, 'target': 'ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:57.628 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3049742a-bb1f-4874-a4bb-67e81bf84068]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap74cacb3c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 613447, 'tstamp': 613447}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369406, 'error': None, 'target': 'ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap74cacb3c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 613450, 'tstamp': 613450}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369406, 'error': None, 'target': 'ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:56:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:57.630 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74cacb3c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.632 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.640 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:57.641 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74cacb3c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:56:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:57.641 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:56:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:57.642 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74cacb3c-00, col_values=(('external_ids', {'iface-id': 'b1eb3633-3950-4cbb-8a36-6968fb223904'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:56:57 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:56:57.643 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
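
The three transactions above (DelPortCommand against br-ex, then AddPortCommand and DbSetCommand against br-int) are the metadata agent re-homing its tap74cacb3c-00 port; both "Transaction caused no change" lines mean the desired state already held, which the if_exists/may_exist flags make safe to repeat. A minimal sketch of issuing the same commands through ovsdbapp, assuming a local ovsdb-server socket (the connection string is an assumption for illustration):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # path assumed
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # Idempotent by construction, hence "Transaction caused no change"
        # when the port is already where it should be.
        txn.add(api.del_port('tap74cacb3c-00', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap74cacb3c-00', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap74cacb3c-00',
            ('external_ids',
             {'iface-id': 'b1eb3633-3950-4cbb-8a36-6968fb223904'})))
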
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.669 244018 INFO nova.virt.libvirt.driver [-] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Instance destroyed successfully.#033[00m
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.670 244018 DEBUG nova.objects.instance [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'resources' on Instance uuid 859fd309-32ea-4025-8312-ddecfa0d6a7f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.684 244018 DEBUG nova.virt.libvirt.vif [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:56:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1835970958',display_name='tempest-TestNetworkBasicOps-server-1835970958',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1835970958',id=138,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKi5hwXM8otzF8onl/sQ2HcXmcnSYZEPkja+OxRBMskTynTP/vErrzil0sRJRcFkhhDMTnC+dkl/140PCZpaQazdCB0rdws3ctDi/BBK2mrv8mGIbCHW3g3d+gqo12lx8A==',key_name='tempest-TestNetworkBasicOps-53938794',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:56:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-idncdrcl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:56:29Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=859fd309-32ea-4025-8312-ddecfa0d6a7f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "address": "fa:16:3e:7f:bf:de", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a99d3aa-08", "ovs_interfaceid": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.685 244018 DEBUG nova.network.os_vif_util [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "address": "fa:16:3e:7f:bf:de", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a99d3aa-08", "ovs_interfaceid": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.686 244018 DEBUG nova.network.os_vif_util [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7f:bf:de,bridge_name='br-int',has_traffic_filtering=True,id=2a99d3aa-08e4-4712-a8b3-984de83b4b60,network=Network(74cacb3c-0135-4e0b-9776-478b5f7a3349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a99d3aa-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.687 244018 DEBUG os_vif [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:bf:de,bridge_name='br-int',has_traffic_filtering=True,id=2a99d3aa-08e4-4712-a8b3-984de83b4b60,network=Network(74cacb3c-0135-4e0b-9776-478b5f7a3349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a99d3aa-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.689 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.690 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a99d3aa-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.692 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.694 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:56:57 np0005629333 nova_compute[244014]: 2026-02-25 12:56:57.697 244018 INFO os_vif [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:bf:de,bridge_name='br-int',has_traffic_filtering=True,id=2a99d3aa-08e4-4712-a8b3-984de83b4b60,network=Network(74cacb3c-0135-4e0b-9776-478b5f7a3349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a99d3aa-08')#033[00m
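
The unplug sequence from 12:56:57.684 onward goes through os_vif: nova converts its VIF dict to a versioned VIFOpenVSwitch object (logged above) and hands it to the ovs plugin, which removes tap2a99d3aa-08 from br-int. A rough sketch of driving the same API directly, with field values taken from the logged object; treat the exact constructor arguments as an approximation rather than a verified call:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads plugins, including 'ovs'

    net = network.Network(id='74cacb3c-0135-4e0b-9776-478b5f7a3349',
                          bridge='br-int')
    my_vif = vif.VIFOpenVSwitch(
        id='2a99d3aa-08e4-4712-a8b3-984de83b4b60',
        address='fa:16:3e:7f:bf:de',
        bridge_name='br-int',
        vif_name='tap2a99d3aa-08',
        plugin='ovs',
        network=net)
    inst = instance_info.InstanceInfo(
        uuid='859fd309-32ea-4025-8312-ddecfa0d6a7f',
        name='instance-0000008a')

    os_vif.unplug(my_vif, inst)  # deletes the port from br-int, as logged
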
Feb 25 07:56:58 np0005629333 nova_compute[244014]: 2026-02-25 12:56:58.235 244018 INFO nova.virt.libvirt.driver [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Deleting instance files /var/lib/nova/instances/859fd309-32ea-4025-8312-ddecfa0d6a7f_del#033[00m
Feb 25 07:56:58 np0005629333 nova_compute[244014]: 2026-02-25 12:56:58.237 244018 INFO nova.virt.libvirt.driver [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Deletion of /var/lib/nova/instances/859fd309-32ea-4025-8312-ddecfa0d6a7f_del complete#033[00m
Feb 25 07:56:58 np0005629333 nova_compute[244014]: 2026-02-25 12:56:58.300 244018 INFO nova.compute.manager [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Took 0.87 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:56:58 np0005629333 nova_compute[244014]: 2026-02-25 12:56:58.301 244018 DEBUG oslo.service.loopingcall [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:56:58 np0005629333 nova_compute[244014]: 2026-02-25 12:56:58.302 244018 DEBUG nova.compute.manager [-] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:56:58 np0005629333 nova_compute[244014]: 2026-02-25 12:56:58.302 244018 DEBUG nova.network.neutron [-] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:56:58 np0005629333 nova_compute[244014]: 2026-02-25 12:56:58.341 244018 DEBUG nova.compute.manager [req-a78844cb-74f3-4373-9c16-23b43cef9cde req-6027b7db-da29-472c-b85a-c0f6d083c379 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Received event network-vif-unplugged-2a99d3aa-08e4-4712-a8b3-984de83b4b60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:56:58 np0005629333 nova_compute[244014]: 2026-02-25 12:56:58.341 244018 DEBUG oslo_concurrency.lockutils [req-a78844cb-74f3-4373-9c16-23b43cef9cde req-6027b7db-da29-472c-b85a-c0f6d083c379 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:56:58 np0005629333 nova_compute[244014]: 2026-02-25 12:56:58.342 244018 DEBUG oslo_concurrency.lockutils [req-a78844cb-74f3-4373-9c16-23b43cef9cde req-6027b7db-da29-472c-b85a-c0f6d083c379 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:56:58 np0005629333 nova_compute[244014]: 2026-02-25 12:56:58.342 244018 DEBUG oslo_concurrency.lockutils [req-a78844cb-74f3-4373-9c16-23b43cef9cde req-6027b7db-da29-472c-b85a-c0f6d083c379 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:56:58 np0005629333 nova_compute[244014]: 2026-02-25 12:56:58.342 244018 DEBUG nova.compute.manager [req-a78844cb-74f3-4373-9c16-23b43cef9cde req-6027b7db-da29-472c-b85a-c0f6d083c379 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] No waiting events found dispatching network-vif-unplugged-2a99d3aa-08e4-4712-a8b3-984de83b4b60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:56:58 np0005629333 nova_compute[244014]: 2026-02-25 12:56:58.343 244018 DEBUG nova.compute.manager [req-a78844cb-74f3-4373-9c16-23b43cef9cde req-6027b7db-da29-472c-b85a-c0f6d083c379 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Received event network-vif-unplugged-2a99d3aa-08e4-4712-a8b3-984de83b4b60 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:56:58 np0005629333 nova_compute[244014]: 2026-02-25 12:56:58.575 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "6185e497-8422-4a5f-a98a-865484d53d4f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:56:58 np0005629333 nova_compute[244014]: 2026-02-25 12:56:58.575 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:56:58 np0005629333 nova_compute[244014]: 2026-02-25 12:56:58.603 244018 DEBUG nova.compute.manager [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:56:58 np0005629333 nova_compute[244014]: 2026-02-25 12:56:58.709 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:56:58 np0005629333 nova_compute[244014]: 2026-02-25 12:56:58.709 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:56:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:56:58 np0005629333 nova_compute[244014]: 2026-02-25 12:56:58.721 244018 DEBUG nova.virt.hardware [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:56:58 np0005629333 nova_compute[244014]: 2026-02-25 12:56:58.722 244018 INFO nova.compute.claims [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:56:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2292: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 19 KiB/s wr, 1 op/s
Feb 25 07:56:58 np0005629333 nova_compute[244014]: 2026-02-25 12:56:58.869 244018 DEBUG oslo_concurrency.processutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.071 244018 DEBUG nova.network.neutron [-] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.087 244018 INFO nova.compute.manager [-] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Took 0.78 seconds to deallocate network for instance.#033[00m
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.135 244018 DEBUG oslo_concurrency.lockutils [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.268 244018 DEBUG nova.network.neutron [req-88281d65-dc09-4694-9241-00aa607e2662 req-bc955ac5-5cac-4b5f-b8c0-31005289c3dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Updated VIF entry in instance network info cache for port 2a99d3aa-08e4-4712-a8b3-984de83b4b60. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.268 244018 DEBUG nova.network.neutron [req-88281d65-dc09-4694-9241-00aa607e2662 req-bc955ac5-5cac-4b5f-b8c0-31005289c3dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Updating instance_info_cache with network_info: [{"id": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "address": "fa:16:3e:7f:bf:de", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a99d3aa-08", "ovs_interfaceid": "2a99d3aa-08e4-4712-a8b3-984de83b4b60", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.284 244018 DEBUG oslo_concurrency.lockutils [req-88281d65-dc09-4694-9241-00aa607e2662 req-bc955ac5-5cac-4b5f-b8c0-31005289c3dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-859fd309-32ea-4025-8312-ddecfa0d6a7f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:56:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:56:59 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3465264861' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.453 244018 DEBUG oslo_concurrency.processutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
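
The 0.585s "ceph df" call above is how nova's RBD image backend samples pool capacity during a claim: processutils runs the CLI as a subprocess (the mon's audit log at 12:56:59 shows the dispatch from client.openstack) and the JSON output is parsed for free space. A minimal sketch of the same call, assuming the same --id/--conf credentials and the usual `ceph df --format=json` top-level layout:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_avail_bytes'])  # cluster-wide free bytes
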
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.461 244018 DEBUG nova.compute.provider_tree [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.478 244018 DEBUG nova.scheduler.client.report [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
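
The inventory dict above is what this node reports to placement. As a sanity check on the logged values: placement treats usable capacity as (total - reserved) * allocation_ratio, so this host can schedule up to 32 VCPUs, 7167 MB of RAM, and about 52.2 GB of disk (illustrative arithmetic, not log output):

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2
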
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.512 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.513 244018 DEBUG nova.compute.manager [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.518 244018 DEBUG oslo_concurrency.lockutils [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.383s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.556 244018 DEBUG nova.compute.manager [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.557 244018 DEBUG nova.network.neutron [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.575 244018 INFO nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.593 244018 DEBUG nova.compute.manager [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.603 244018 DEBUG oslo_concurrency.processutils [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.703 244018 DEBUG nova.compute.manager [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.707 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.707 244018 INFO nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Creating image(s)#033[00m
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.734 244018 DEBUG nova.storage.rbd_utils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 6185e497-8422-4a5f-a98a-865484d53d4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.763 244018 DEBUG nova.storage.rbd_utils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 6185e497-8422-4a5f-a98a-865484d53d4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.801 244018 DEBUG nova.storage.rbd_utils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 6185e497-8422-4a5f-a98a-865484d53d4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.804 244018 DEBUG oslo_concurrency.processutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.890 244018 DEBUG oslo_concurrency.processutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
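The prlimit wrapper above is a guard, not decoration: it caps the qemu-img probe at 1 GiB of address space (--as=1073741824) and 30 CPU seconds (--cpu=30) so a corrupt or malicious image cannot wedge the compute service. Reproducing the probe itself is straightforward (a sketch; the path is the cached base image from the log):

    import json
    import subprocess

    # --force-share lets qemu-img inspect an image another process has
    # open; --output=json returns a machine-readable description.
    out = subprocess.check_output(
        ['qemu-img', 'info', '--force-share', '--output=json',
         '/var/lib/nova/instances/_base/'
         'a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'])
    info = json.loads(out)
    print(info['format'], info['virtual-size'])  # e.g. 'raw', size in bytes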
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.891 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.891 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.892 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.923 244018 DEBUG nova.storage.rbd_utils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 6185e497-8422-4a5f-a98a-865484d53d4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:56:59 np0005629333 nova_compute[244014]: 2026-02-25 12:56:59.927 244018 DEBUG oslo_concurrency.processutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 6185e497-8422-4a5f-a98a-865484d53d4f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:57:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:57:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1267808107' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.179 244018 DEBUG nova.policy [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea895f651dd742a7b5eb2d63fb34641c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b9699483122f465084e3147e4904d13d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
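The policy DEBUG above is an expected denial, not a fault: the token carries only the reader and member roles, so the admin-only network:attach_external_network rule fails and the port simply cannot target an external network. A sketch of the same style of check with oslo.policy (the rule default here is illustrative; nova ships its own):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    # Register an admin-only rule in the spirit of the one in the log.
    enforcer.register_default(
        policy.RuleDefault('network:attach_external_network', 'role:admin'))

    creds = {'roles': ['reader', 'member'], 'is_admin': False,
             'project_id': 'b9699483122f465084e3147e4904d13d'}
    # False for these credentials, mirroring the "Policy check ... failed"
    # line above (enforce() only raises when do_raise=True).
    print(enforcer.enforce('network:attach_external_network', {}, creds))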
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.187 244018 DEBUG oslo_concurrency.processutils [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.194 244018 DEBUG nova.compute.provider_tree [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.210 244018 DEBUG oslo_concurrency.processutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 6185e497-8422-4a5f-a98a-865484d53d4f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.249 244018 DEBUG nova.scheduler.client.report [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.297 244018 DEBUG oslo_concurrency.lockutils [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.304 244018 DEBUG nova.compute.manager [req-d6f599b1-4ec4-4d99-ae07-212a13465132 req-427579dd-a87f-4a65-a45e-5161ce491eac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.304 244018 DEBUG oslo_concurrency.lockutils [req-d6f599b1-4ec4-4d99-ae07-212a13465132 req-427579dd-a87f-4a65-a45e-5161ce491eac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.305 244018 DEBUG oslo_concurrency.lockutils [req-d6f599b1-4ec4-4d99-ae07-212a13465132 req-427579dd-a87f-4a65-a45e-5161ce491eac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.305 244018 DEBUG oslo_concurrency.lockutils [req-d6f599b1-4ec4-4d99-ae07-212a13465132 req-427579dd-a87f-4a65-a45e-5161ce491eac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.305 244018 DEBUG nova.compute.manager [req-d6f599b1-4ec4-4d99-ae07-212a13465132 req-427579dd-a87f-4a65-a45e-5161ce491eac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] No waiting events found dispatching network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.306 244018 WARNING nova.compute.manager [req-d6f599b1-4ec4-4d99-ae07-212a13465132 req-427579dd-a87f-4a65-a45e-5161ce491eac 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received unexpected event network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 for instance with vm_state active and task_state None.#033[00m
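This WARNING is benign. Nova registers a waiter for network-vif-plugged only while a build, migration or similar task is in flight; for an instance that is already active with task_state None there is nothing to wake, so the event is logged and dropped. A deliberately simplified sketch of that pop-or-warn dispatch (not nova's implementation):

    import threading

    # (instance_uuid, event_name) -> threading.Event registered by code
    # that is blocked waiting on an external Neutron event; illustrative.
    _waiters = {}
    _lock = threading.Lock()  # mirrors the "<uuid>-events" lock above

    def dispatch_external_event(uuid, name, vm_state, task_state):
        with _lock:
            waiter = _waiters.pop((uuid, name), None)
        if waiter is not None:
            waiter.set()  # wake the thread blocked on this event
        else:
            print(f'Received unexpected event {name} for instance with '
                  f'vm_state {vm_state} and task_state {task_state}')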
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.312 244018 DEBUG nova.storage.rbd_utils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] resizing rbd image 6185e497-8422-4a5f-a98a-865484d53d4f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
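The rbd import / resize pair is the flat-import path of the RBD image backend: the cached base file becomes vms/<uuid>_disk, then the image is grown to the requested root disk, 1073741824 bytes = 1024**3 = 1 GiB (consistent with a root_gb=1 flavor). The equivalent CLI calls, sketched in Python (pool, image and size from the log; note the rbd CLI takes --size in MiB unless given a unit suffix, while nova resizes in bytes via librbd):

    import subprocess

    base = ('/var/lib/nova/instances/_base/'
            'a63dc6dbb387022d47a8ca49bddcc4af2508a4d6')
    image = '6185e497-8422-4a5f-a98a-865484d53d4f_disk'
    common = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

    # Import the cached base file into the 'vms' pool (same flags as above).
    subprocess.check_call(['rbd', 'import', '--pool', 'vms', base, image,
                           '--image-format=2'] + common)
    # Grow it to 1 GiB: 1073741824 bytes / 1024 / 1024 = 1024 MiB.
    subprocess.check_call(['rbd', 'resize', '--pool', 'vms', image,
                           '--size', '1024'] + common)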
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.344 244018 INFO nova.scheduler.client.report [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Deleted allocations for instance 859fd309-32ea-4025-8312-ddecfa0d6a7f#033[00m
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.397 244018 DEBUG nova.objects.instance [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'migration_context' on Instance uuid 6185e497-8422-4a5f-a98a-865484d53d4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.423 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.423 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Ensure instance console log exists: /var/lib/nova/instances/6185e497-8422-4a5f-a98a-865484d53d4f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.427 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.428 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.428 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.464 244018 DEBUG oslo_concurrency.lockutils [None req-fa212d79-d02a-4b2b-ac49-7add5aa65ea4 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.470 244018 DEBUG nova.compute.manager [req-9d876820-43f2-479e-a847-f1a338edc138 req-d6f16660-adec-4c3c-abb9-99417ae1cc69 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Received event network-vif-plugged-2a99d3aa-08e4-4712-a8b3-984de83b4b60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.471 244018 DEBUG oslo_concurrency.lockutils [req-9d876820-43f2-479e-a847-f1a338edc138 req-d6f16660-adec-4c3c-abb9-99417ae1cc69 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.471 244018 DEBUG oslo_concurrency.lockutils [req-9d876820-43f2-479e-a847-f1a338edc138 req-d6f16660-adec-4c3c-abb9-99417ae1cc69 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.472 244018 DEBUG oslo_concurrency.lockutils [req-9d876820-43f2-479e-a847-f1a338edc138 req-d6f16660-adec-4c3c-abb9-99417ae1cc69 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "859fd309-32ea-4025-8312-ddecfa0d6a7f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.472 244018 DEBUG nova.compute.manager [req-9d876820-43f2-479e-a847-f1a338edc138 req-d6f16660-adec-4c3c-abb9-99417ae1cc69 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] No waiting events found dispatching network-vif-plugged-2a99d3aa-08e4-4712-a8b3-984de83b4b60 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.472 244018 WARNING nova.compute.manager [req-9d876820-43f2-479e-a847-f1a338edc138 req-d6f16660-adec-4c3c-abb9-99417ae1cc69 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Received unexpected event network-vif-plugged-2a99d3aa-08e4-4712-a8b3-984de83b4b60 for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.472 244018 DEBUG nova.compute.manager [req-9d876820-43f2-479e-a847-f1a338edc138 req-d6f16660-adec-4c3c-abb9-99417ae1cc69 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Received event network-vif-deleted-2a99d3aa-08e4-4712-a8b3-984de83b4b60 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.473 244018 INFO nova.compute.manager [req-9d876820-43f2-479e-a847-f1a338edc138 req-d6f16660-adec-4c3c-abb9-99417ae1cc69 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Neutron deleted interface 2a99d3aa-08e4-4712-a8b3-984de83b4b60; detaching it from the instance and deleting it from the info cache#033[00m
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.473 244018 DEBUG nova.network.neutron [req-9d876820-43f2-479e-a847-f1a338edc138 req-d6f16660-adec-4c3c-abb9-99417ae1cc69 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.505 244018 DEBUG nova.compute.manager [req-9d876820-43f2-479e-a847-f1a338edc138 req-d6f16660-adec-4c3c-abb9-99417ae1cc69 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Detach interface failed, port_id=2a99d3aa-08e4-4712-a8b3-984de83b4b60, reason: Instance 859fd309-32ea-4025-8312-ddecfa0d6a7f could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 25 07:57:00 np0005629333 nova_compute[244014]: 2026-02-25 12:57:00.527 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2293: 305 pgs: 305 active+clean; 312 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 4.3 KiB/s wr, 1 op/s
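The ceph-mgr pgmap line is the cluster heartbeat: all 305 placement groups are active+clean, 1.1 GiB of the 60 GiB raw capacity is used, and the rd/wr/op figures are rolling client throughput. Pulling the health-relevant fields out of such a line takes one regex (tailored to the format shown here; other Ceph releases may word it differently):

    import re

    line = ('pgmap v2293: 305 pgs: 305 active+clean; 312 MiB data, '
            '1.1 GiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, '
            '4.3 KiB/s wr, 1 op/s')
    m = re.search(r'pgmap v(\d+): (\d+) pgs: (\d+) ([\w+]+);', line)
    version, total, in_state, state = m.groups()
    # On a healthy cluster every PG is accounted for as active+clean.
    assert state == 'active+clean' and total == in_state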
Feb 25 07:57:01 np0005629333 nova_compute[244014]: 2026-02-25 12:57:01.524 244018 DEBUG nova.network.neutron [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Successfully created port: a4d9181f-fefa-4cde-81ea-c5e59433606c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:57:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:57:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:57:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:57:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:57:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:57:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:57:02 np0005629333 nova_compute[244014]: 2026-02-25 12:57:02.417 244018 DEBUG nova.compute.manager [req-a8129056-298a-4ed3-aab9-9b827dd722cb req-f7c93b8d-acd7-4ad6-8fc3-365b4795ddc0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:57:02 np0005629333 nova_compute[244014]: 2026-02-25 12:57:02.417 244018 DEBUG oslo_concurrency.lockutils [req-a8129056-298a-4ed3-aab9-9b827dd722cb req-f7c93b8d-acd7-4ad6-8fc3-365b4795ddc0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:57:02 np0005629333 nova_compute[244014]: 2026-02-25 12:57:02.418 244018 DEBUG oslo_concurrency.lockutils [req-a8129056-298a-4ed3-aab9-9b827dd722cb req-f7c93b8d-acd7-4ad6-8fc3-365b4795ddc0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:57:02 np0005629333 nova_compute[244014]: 2026-02-25 12:57:02.418 244018 DEBUG oslo_concurrency.lockutils [req-a8129056-298a-4ed3-aab9-9b827dd722cb req-f7c93b8d-acd7-4ad6-8fc3-365b4795ddc0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:57:02 np0005629333 nova_compute[244014]: 2026-02-25 12:57:02.419 244018 DEBUG nova.compute.manager [req-a8129056-298a-4ed3-aab9-9b827dd722cb req-f7c93b8d-acd7-4ad6-8fc3-365b4795ddc0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] No waiting events found dispatching network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:57:02 np0005629333 nova_compute[244014]: 2026-02-25 12:57:02.419 244018 WARNING nova.compute.manager [req-a8129056-298a-4ed3-aab9-9b827dd722cb req-f7c93b8d-acd7-4ad6-8fc3-365b4795ddc0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received unexpected event network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:57:02 np0005629333 nova_compute[244014]: 2026-02-25 12:57:02.691 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:02 np0005629333 nova_compute[244014]: 2026-02-25 12:57:02.717 244018 DEBUG nova.network.neutron [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Successfully updated port: a4d9181f-fefa-4cde-81ea-c5e59433606c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:57:02 np0005629333 nova_compute[244014]: 2026-02-25 12:57:02.721 244018 DEBUG oslo_concurrency.lockutils [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:57:02 np0005629333 nova_compute[244014]: 2026-02-25 12:57:02.722 244018 DEBUG oslo_concurrency.lockutils [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:57:02 np0005629333 nova_compute[244014]: 2026-02-25 12:57:02.722 244018 DEBUG oslo_concurrency.lockutils [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:57:02 np0005629333 nova_compute[244014]: 2026-02-25 12:57:02.723 244018 DEBUG oslo_concurrency.lockutils [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:57:02 np0005629333 nova_compute[244014]: 2026-02-25 12:57:02.723 244018 DEBUG oslo_concurrency.lockutils [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:57:02 np0005629333 nova_compute[244014]: 2026-02-25 12:57:02.724 244018 INFO nova.compute.manager [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Terminating instance#033[00m
Feb 25 07:57:02 np0005629333 nova_compute[244014]: 2026-02-25 12:57:02.726 244018 DEBUG nova.compute.manager [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:57:02 np0005629333 nova_compute[244014]: 2026-02-25 12:57:02.729 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "refresh_cache-6185e497-8422-4a5f-a98a-865484d53d4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:57:02 np0005629333 nova_compute[244014]: 2026-02-25 12:57:02.730 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquired lock "refresh_cache-6185e497-8422-4a5f-a98a-865484d53d4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:57:02 np0005629333 nova_compute[244014]: 2026-02-25 12:57:02.730 244018 DEBUG nova.network.neutron [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:57:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2294: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Feb 25 07:57:02 np0005629333 kernel: tapbde15f84-ed (unregistering): left promiscuous mode
Feb 25 07:57:02 np0005629333 NetworkManager[49836]: <info>  [1772024222.7856] device (tapbde15f84-ed): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:57:02 np0005629333 nova_compute[244014]: 2026-02-25 12:57:02.786 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:02 np0005629333 ovn_controller[147040]: 2026-02-25T12:57:02Z|01456|binding|INFO|Releasing lport bde15f84-edfb-445b-b129-ec33331763f0 from this chassis (sb_readonly=0)
Feb 25 07:57:02 np0005629333 ovn_controller[147040]: 2026-02-25T12:57:02Z|01457|binding|INFO|Setting lport bde15f84-edfb-445b-b129-ec33331763f0 down in Southbound
Feb 25 07:57:02 np0005629333 ovn_controller[147040]: 2026-02-25T12:57:02Z|01458|binding|INFO|Removing iface tapbde15f84-ed ovn-installed in OVS
Feb 25 07:57:02 np0005629333 nova_compute[244014]: 2026-02-25 12:57:02.794 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:02.801 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:9e:1f 10.100.0.11'], port_security=['fa:16:3e:99:9e:1f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e728a9dc-bb04-4a25-bcad-b787a044bc0b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '8', 'neutron:security_group_ids': '1d8168d2-f5f8-4f41-af80-56661b6a1e4c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=394a85b3-bdd5-4c73-8ada-c7f3f99bacc6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=bde15f84-edfb-445b-b129-ec33331763f0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:57:02 np0005629333 nova_compute[244014]: 2026-02-25 12:57:02.803 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:02.804 157129 INFO neutron.agent.ovn.metadata.agent [-] Port bde15f84-edfb-445b-b129-ec33331763f0 in datapath 74cacb3c-0135-4e0b-9776-478b5f7a3349 unbound from our chassis#033[00m
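The Matched UPDATE block above shows the mechanism: the metadata agent keeps an ovsdbapp IDL watcher on the southbound Port_Binding table, and this update cleared the port's chassis column (the old row had chassis=[<Row>], the new one has chassis=[]), which the agent reads as "unbound from our chassis". A hedged sketch of such a watcher (only the RowEvent base class and the table/column names come from the log; the matching logic is illustrative):

    from ovsdbapp.backend.ovs_idl import event

    class PortBindingUnboundEvent(event.RowEvent):
        """Fire when a Port_Binding row loses its chassis assignment."""

        def __init__(self):
            # Watch UPDATEs to Port_Binding, as in the
            # PortBindingUpdatedEvent(events=('update',), ...) line above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event_, row, old):
            # 'old' carries only the columns that changed; a chassis there
            # plus an empty chassis on the new row means the port unbound.
            return bool(getattr(old, 'chassis', None)) and not row.chassis

        def run(self, event_, row, old):
            print(f'Port {row.logical_port} unbound from our chassis')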
Feb 25 07:57:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:02.806 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74cacb3c-0135-4e0b-9776-478b5f7a3349, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:57:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:02.807 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[03c3fe58-41c3-46f3-8fc0-ff6ece663c03]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:02.807 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349 namespace which is not needed anymore#033[00m
Feb 25 07:57:02 np0005629333 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000089.scope: Deactivated successfully.
Feb 25 07:57:02 np0005629333 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000089.scope: Consumed 13.440s CPU time.
Feb 25 07:57:02 np0005629333 systemd-machined[210048]: Machine qemu-169-instance-00000089 terminated.
Feb 25 07:57:02 np0005629333 neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349[368671]: [NOTICE]   (368675) : haproxy version is 2.8.14-c23fe91
Feb 25 07:57:02 np0005629333 neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349[368671]: [NOTICE]   (368675) : path to executable is /usr/sbin/haproxy
Feb 25 07:57:02 np0005629333 neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349[368671]: [WARNING]  (368675) : Exiting Master process...
Feb 25 07:57:02 np0005629333 neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349[368671]: [WARNING]  (368675) : Exiting Master process...
Feb 25 07:57:02 np0005629333 neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349[368671]: [ALERT]    (368675) : Current worker (368677) exited with code 143 (Terminated)
Feb 25 07:57:02 np0005629333 neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349[368671]: [WARNING]  (368675) : All workers exited. Exiting... (0)
Feb 25 07:57:02 np0005629333 systemd[1]: libpod-0f4ca28b4b7bc70c479451b9c034e9216bba7e832734fdc111c714b31e0214ab.scope: Deactivated successfully.
Feb 25 07:57:02 np0005629333 nova_compute[244014]: 2026-02-25 12:57:02.981 244018 INFO nova.virt.libvirt.driver [-] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Instance destroyed successfully.#033[00m
Feb 25 07:57:02 np0005629333 nova_compute[244014]: 2026-02-25 12:57:02.983 244018 DEBUG nova.objects.instance [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'resources' on Instance uuid e728a9dc-bb04-4a25-bcad-b787a044bc0b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:57:02 np0005629333 podman[369673]: 2026-02-25 12:57:02.985528104 +0000 UTC m=+0.064554512 container died 0f4ca28b4b7bc70c479451b9c034e9216bba7e832734fdc111c714b31e0214ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:57:02 np0005629333 nova_compute[244014]: 2026-02-25 12:57:02.997 244018 DEBUG nova.virt.libvirt.vif [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:55:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-563089963',display_name='tempest-TestNetworkBasicOps-server-563089963',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-563089963',id=137,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDFJ0F94NwULIDtBe1hWjwHL1rXe9bp7hZF1/RCkpzZ1NYU+13CFJoQDcWC4Kzxk0oxcoRc7zBPHT8toFws+wsWPaxnQhFXZM/VRwZJjXXVApOjgca9Ve/Do6WWCKSzVsA==',key_name='tempest-TestNetworkBasicOps-1065518207',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:56:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-nqssdsat',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:56:05Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=e728a9dc-bb04-4a25-bcad-b787a044bc0b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bde15f84-edfb-445b-b129-ec33331763f0", "address": "fa:16:3e:99:9e:1f", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbde15f84-ed", "ovs_interfaceid": "bde15f84-edfb-445b-b129-ec33331763f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:57:02 np0005629333 nova_compute[244014]: 2026-02-25 12:57:02.998 244018 DEBUG nova.network.os_vif_util [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "bde15f84-edfb-445b-b129-ec33331763f0", "address": "fa:16:3e:99:9e:1f", "network": {"id": "74cacb3c-0135-4e0b-9776-478b5f7a3349", "bridge": "br-int", "label": "tempest-network-smoke--1059483714", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbde15f84-ed", "ovs_interfaceid": "bde15f84-edfb-445b-b129-ec33331763f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:57:03 np0005629333 nova_compute[244014]: 2026-02-25 12:57:03.000 244018 DEBUG nova.network.os_vif_util [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:9e:1f,bridge_name='br-int',has_traffic_filtering=True,id=bde15f84-edfb-445b-b129-ec33331763f0,network=Network(74cacb3c-0135-4e0b-9776-478b5f7a3349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbde15f84-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:57:03 np0005629333 nova_compute[244014]: 2026-02-25 12:57:03.001 244018 DEBUG os_vif [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:9e:1f,bridge_name='br-int',has_traffic_filtering=True,id=bde15f84-edfb-445b-b129-ec33331763f0,network=Network(74cacb3c-0135-4e0b-9776-478b5f7a3349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbde15f84-ed') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:57:03 np0005629333 nova_compute[244014]: 2026-02-25 12:57:03.004 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:03 np0005629333 nova_compute[244014]: 2026-02-25 12:57:03.004 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbde15f84-ed, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:57:03 np0005629333 nova_compute[244014]: 2026-02-25 12:57:03.006 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:03 np0005629333 nova_compute[244014]: 2026-02-25 12:57:03.008 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:03 np0005629333 nova_compute[244014]: 2026-02-25 12:57:03.012 244018 INFO os_vif [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:9e:1f,bridge_name='br-int',has_traffic_filtering=True,id=bde15f84-edfb-445b-b129-ec33331763f0,network=Network(74cacb3c-0135-4e0b-9776-478b5f7a3349),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbde15f84-ed')#033[00m
Feb 25 07:57:03 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0f4ca28b4b7bc70c479451b9c034e9216bba7e832734fdc111c714b31e0214ab-userdata-shm.mount: Deactivated successfully.
Feb 25 07:57:03 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7e045d31172de33fbbfdc9cfbf5c4682002fbe9f3cdd63cf75b833a745b1cae3-merged.mount: Deactivated successfully.
Feb 25 07:57:03 np0005629333 podman[369673]: 2026-02-25 12:57:03.036630096 +0000 UTC m=+0.115656484 container cleanup 0f4ca28b4b7bc70c479451b9c034e9216bba7e832734fdc111c714b31e0214ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:57:03 np0005629333 systemd[1]: libpod-conmon-0f4ca28b4b7bc70c479451b9c034e9216bba7e832734fdc111c714b31e0214ab.scope: Deactivated successfully.
Feb 25 07:57:03 np0005629333 podman[369730]: 2026-02-25 12:57:03.12011096 +0000 UTC m=+0.061120985 container remove 0f4ca28b4b7bc70c479451b9c034e9216bba7e832734fdc111c714b31e0214ab (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 07:57:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:03.126 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0fa136b9-cbdb-4b4f-8615-0996439531f1]: (4, ('Wed Feb 25 12:57:02 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349 (0f4ca28b4b7bc70c479451b9c034e9216bba7e832734fdc111c714b31e0214ab)\n0f4ca28b4b7bc70c479451b9c034e9216bba7e832734fdc111c714b31e0214ab\nWed Feb 25 12:57:03 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349 (0f4ca28b4b7bc70c479451b9c034e9216bba7e832734fdc111c714b31e0214ab)\n0f4ca28b4b7bc70c479451b9c034e9216bba7e832734fdc111c714b31e0214ab\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:03.129 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bba77c70-b1ba-40b9-a945-810592c138a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:03.130 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74cacb3c-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:57:03 np0005629333 kernel: tap74cacb3c-00: left promiscuous mode
Feb 25 07:57:03 np0005629333 nova_compute[244014]: 2026-02-25 12:57:03.132 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:03 np0005629333 nova_compute[244014]: 2026-02-25 12:57:03.144 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:03.149 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5598ec58-438f-4dcd-a971-3816d76005a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:03.166 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2278683b-c093-4dd0-9a73-cabccd4f6baa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:03.168 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[45295995-90eb-41ee-a119-20da672216d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:03 np0005629333 nova_compute[244014]: 2026-02-25 12:57:03.171 244018 DEBUG nova.network.neutron [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:57:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:03.182 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ac23ba40-fbb0-4cbf-8af8-71df2e84f622]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613427, 'reachable_time': 34629, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369746, 'error': None, 'target': 'ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:03 np0005629333 systemd[1]: run-netns-ovnmeta\x2d74cacb3c\x2d0135\x2d4e0b\x2d9776\x2d478b5f7a3349.mount: Deactivated successfully.
Feb 25 07:57:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:03.185 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-74cacb3c-0135-4e0b-9776-478b5f7a3349 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:57:03 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:03.185 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[0427499c-3fa4-4cc8-ab6e-68a7e3c45da7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:03 np0005629333 nova_compute[244014]: 2026-02-25 12:57:03.316 244018 INFO nova.virt.libvirt.driver [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Deleting instance files /var/lib/nova/instances/e728a9dc-bb04-4a25-bcad-b787a044bc0b_del#033[00m
Feb 25 07:57:03 np0005629333 nova_compute[244014]: 2026-02-25 12:57:03.317 244018 INFO nova.virt.libvirt.driver [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Deletion of /var/lib/nova/instances/e728a9dc-bb04-4a25-bcad-b787a044bc0b_del complete#033[00m
Feb 25 07:57:03 np0005629333 nova_compute[244014]: 2026-02-25 12:57:03.375 244018 INFO nova.compute.manager [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Took 0.65 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:57:03 np0005629333 nova_compute[244014]: 2026-02-25 12:57:03.376 244018 DEBUG oslo.service.loopingcall [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:57:03 np0005629333 nova_compute[244014]: 2026-02-25 12:57:03.376 244018 DEBUG nova.compute.manager [-] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:57:03 np0005629333 nova_compute[244014]: 2026-02-25 12:57:03.376 244018 DEBUG nova.network.neutron [-] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:57:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:57:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:57:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:57:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:57:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:57:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:57:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:57:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:57:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:57:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:57:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:57:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:57:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:57:04 np0005629333 podman[369893]: 2026-02-25 12:57:04.442023219 +0000 UTC m=+0.058515861 container create 93df05ce87de676d044f09068866ef8a83e47d2bc45315cf02036b943f75e0e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_moore, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:57:04 np0005629333 systemd[1]: Started libpod-conmon-93df05ce87de676d044f09068866ef8a83e47d2bc45315cf02036b943f75e0e8.scope.
Feb 25 07:57:04 np0005629333 nova_compute[244014]: 2026-02-25 12:57:04.485 244018 DEBUG nova.network.neutron [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Updating instance_info_cache with network_info: [{"id": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "address": "fa:16:3e:ef:c2:bb", "network": {"id": "f23f1675-b7ff-4265-a011-0912c637d746", "bridge": "br-int", "label": "tempest-network-smoke--923808756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d9181f-fe", "ovs_interfaceid": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:57:04 np0005629333 nova_compute[244014]: 2026-02-25 12:57:04.508 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Releasing lock "refresh_cache-6185e497-8422-4a5f-a98a-865484d53d4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:57:04 np0005629333 nova_compute[244014]: 2026-02-25 12:57:04.509 244018 DEBUG nova.compute.manager [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Instance network_info: |[{"id": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "address": "fa:16:3e:ef:c2:bb", "network": {"id": "f23f1675-b7ff-4265-a011-0912c637d746", "bridge": "br-int", "label": "tempest-network-smoke--923808756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d9181f-fe", "ovs_interfaceid": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:57:04 np0005629333 podman[369893]: 2026-02-25 12:57:04.419776922 +0000 UTC m=+0.036269544 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:57:04 np0005629333 nova_compute[244014]: 2026-02-25 12:57:04.513 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Start _get_guest_xml network_info=[{"id": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "address": "fa:16:3e:ef:c2:bb", "network": {"id": "f23f1675-b7ff-4265-a011-0912c637d746", "bridge": "br-int", "label": "tempest-network-smoke--923808756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d9181f-fe", "ovs_interfaceid": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:57:04 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:57:04 np0005629333 nova_compute[244014]: 2026-02-25 12:57:04.522 244018 WARNING nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:57:04 np0005629333 nova_compute[244014]: 2026-02-25 12:57:04.527 244018 DEBUG nova.virt.libvirt.host [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:57:04 np0005629333 nova_compute[244014]: 2026-02-25 12:57:04.528 244018 DEBUG nova.virt.libvirt.host [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:57:04 np0005629333 nova_compute[244014]: 2026-02-25 12:57:04.532 244018 DEBUG nova.virt.libvirt.host [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:57:04 np0005629333 nova_compute[244014]: 2026-02-25 12:57:04.533 244018 DEBUG nova.virt.libvirt.host [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:57:04 np0005629333 nova_compute[244014]: 2026-02-25 12:57:04.533 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:57:04 np0005629333 nova_compute[244014]: 2026-02-25 12:57:04.534 244018 DEBUG nova.virt.hardware [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:57:04 np0005629333 nova_compute[244014]: 2026-02-25 12:57:04.535 244018 DEBUG nova.virt.hardware [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:57:04 np0005629333 nova_compute[244014]: 2026-02-25 12:57:04.535 244018 DEBUG nova.virt.hardware [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:57:04 np0005629333 nova_compute[244014]: 2026-02-25 12:57:04.535 244018 DEBUG nova.virt.hardware [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:57:04 np0005629333 nova_compute[244014]: 2026-02-25 12:57:04.535 244018 DEBUG nova.virt.hardware [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:57:04 np0005629333 nova_compute[244014]: 2026-02-25 12:57:04.536 244018 DEBUG nova.virt.hardware [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:57:04 np0005629333 nova_compute[244014]: 2026-02-25 12:57:04.536 244018 DEBUG nova.virt.hardware [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:57:04 np0005629333 nova_compute[244014]: 2026-02-25 12:57:04.536 244018 DEBUG nova.virt.hardware [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:57:04 np0005629333 nova_compute[244014]: 2026-02-25 12:57:04.537 244018 DEBUG nova.virt.hardware [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:57:04 np0005629333 nova_compute[244014]: 2026-02-25 12:57:04.537 244018 DEBUG nova.virt.hardware [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:57:04 np0005629333 nova_compute[244014]: 2026-02-25 12:57:04.538 244018 DEBUG nova.virt.hardware [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:57:04 np0005629333 podman[369893]: 2026-02-25 12:57:04.539121998 +0000 UTC m=+0.155614630 container init 93df05ce87de676d044f09068866ef8a83e47d2bc45315cf02036b943f75e0e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_moore, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 07:57:04 np0005629333 nova_compute[244014]: 2026-02-25 12:57:04.543 244018 DEBUG oslo_concurrency.processutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:57:04 np0005629333 podman[369893]: 2026-02-25 12:57:04.547635759 +0000 UTC m=+0.164128371 container start 93df05ce87de676d044f09068866ef8a83e47d2bc45315cf02036b943f75e0e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:57:04 np0005629333 podman[369893]: 2026-02-25 12:57:04.551318252 +0000 UTC m=+0.167810864 container attach 93df05ce87de676d044f09068866ef8a83e47d2bc45315cf02036b943f75e0e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_moore, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:57:04 np0005629333 gallant_moore[369910]: 167 167
Feb 25 07:57:04 np0005629333 systemd[1]: libpod-93df05ce87de676d044f09068866ef8a83e47d2bc45315cf02036b943f75e0e8.scope: Deactivated successfully.
Feb 25 07:57:04 np0005629333 podman[369893]: 2026-02-25 12:57:04.556775996 +0000 UTC m=+0.173268638 container died 93df05ce87de676d044f09068866ef8a83e47d2bc45315cf02036b943f75e0e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_moore, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 07:57:04 np0005629333 systemd[1]: var-lib-containers-storage-overlay-edbb969fb18fc278e6b4947c177e7c8d4b69b26dff864a732fc82a8c3477940a-merged.mount: Deactivated successfully.
Feb 25 07:57:04 np0005629333 podman[369893]: 2026-02-25 12:57:04.609110513 +0000 UTC m=+0.225603145 container remove 93df05ce87de676d044f09068866ef8a83e47d2bc45315cf02036b943f75e0e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_moore, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Feb 25 07:57:04 np0005629333 systemd[1]: libpod-conmon-93df05ce87de676d044f09068866ef8a83e47d2bc45315cf02036b943f75e0e8.scope: Deactivated successfully.
Feb 25 07:57:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2295: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 25 07:57:04 np0005629333 podman[369954]: 2026-02-25 12:57:04.78519031 +0000 UTC m=+0.047541092 container create 1dcbe7c8c24a40fc5d77a8a9e4cc11ae9a8cc3c75e960028a876e3ee518cc1d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_curran, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 07:57:04 np0005629333 systemd[1]: Started libpod-conmon-1dcbe7c8c24a40fc5d77a8a9e4cc11ae9a8cc3c75e960028a876e3ee518cc1d8.scope.
Feb 25 07:57:04 np0005629333 podman[369954]: 2026-02-25 12:57:04.764254619 +0000 UTC m=+0.026605421 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:57:04 np0005629333 nova_compute[244014]: 2026-02-25 12:57:04.864 244018 DEBUG nova.network.neutron [-] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:57:04 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:57:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8de50344cef45ad1ee11d815dbfbcbfab846478d398da10f4bcdb3da1bb5219/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:57:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8de50344cef45ad1ee11d815dbfbcbfab846478d398da10f4bcdb3da1bb5219/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:57:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8de50344cef45ad1ee11d815dbfbcbfab846478d398da10f4bcdb3da1bb5219/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:57:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8de50344cef45ad1ee11d815dbfbcbfab846478d398da10f4bcdb3da1bb5219/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:57:04 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:57:04 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:57:04 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:57:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8de50344cef45ad1ee11d815dbfbcbfab846478d398da10f4bcdb3da1bb5219/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:57:04 np0005629333 podman[369954]: 2026-02-25 12:57:04.908065896 +0000 UTC m=+0.170416758 container init 1dcbe7c8c24a40fc5d77a8a9e4cc11ae9a8cc3c75e960028a876e3ee518cc1d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Feb 25 07:57:04 np0005629333 nova_compute[244014]: 2026-02-25 12:57:04.913 244018 INFO nova.compute.manager [-] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Took 1.54 seconds to deallocate network for instance.#033[00m
Feb 25 07:57:04 np0005629333 podman[369954]: 2026-02-25 12:57:04.917378608 +0000 UTC m=+0.179729370 container start 1dcbe7c8c24a40fc5d77a8a9e4cc11ae9a8cc3c75e960028a876e3ee518cc1d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_curran, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 07:57:04 np0005629333 podman[369954]: 2026-02-25 12:57:04.920902128 +0000 UTC m=+0.183252930 container attach 1dcbe7c8c24a40fc5d77a8a9e4cc11ae9a8cc3c75e960028a876e3ee518cc1d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_curran, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.005 244018 DEBUG nova.compute.manager [req-22e19e12-07ae-4840-ac4d-0b235481572b req-fa4c780c-f222-46e9-b9f6-cdd46f392eaa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-vif-deleted-bde15f84-edfb-445b-b129-ec33331763f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.005 244018 INFO nova.compute.manager [req-22e19e12-07ae-4840-ac4d-0b235481572b req-fa4c780c-f222-46e9-b9f6-cdd46f392eaa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Neutron deleted interface bde15f84-edfb-445b-b129-ec33331763f0; detaching it from the instance and deleting it from the info cache#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.006 244018 DEBUG nova.network.neutron [req-22e19e12-07ae-4840-ac4d-0b235481572b req-fa4c780c-f222-46e9-b9f6-cdd46f392eaa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.009 244018 DEBUG nova.compute.manager [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-changed-bde15f84-edfb-445b-b129-ec33331763f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.009 244018 DEBUG nova.compute.manager [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Refreshing instance network info cache due to event network-changed-bde15f84-edfb-445b-b129-ec33331763f0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.010 244018 DEBUG oslo_concurrency.lockutils [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.011 244018 DEBUG oslo_concurrency.lockutils [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.011 244018 DEBUG nova.network.neutron [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Refreshing network info cache for port bde15f84-edfb-445b-b129-ec33331763f0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.066 244018 DEBUG oslo_concurrency.lockutils [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.067 244018 DEBUG oslo_concurrency.lockutils [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.070 244018 DEBUG nova.compute.manager [req-22e19e12-07ae-4840-ac4d-0b235481572b req-fa4c780c-f222-46e9-b9f6-cdd46f392eaa 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Detach interface failed, port_id=bde15f84-edfb-445b-b129-ec33331763f0, reason: Instance e728a9dc-bb04-4a25-bcad-b787a044bc0b could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.137 244018 DEBUG oslo_concurrency.processutils [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:57:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:57:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1882747267' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.184 244018 DEBUG nova.network.neutron [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.190 244018 DEBUG oslo_concurrency.processutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.227 244018 DEBUG nova.storage.rbd_utils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 6185e497-8422-4a5f-a98a-865484d53d4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.235 244018 DEBUG oslo_concurrency.processutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:57:05 np0005629333 cool_curran[369971]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:57:05 np0005629333 cool_curran[369971]: --> All data devices are unavailable
Feb 25 07:57:05 np0005629333 systemd[1]: libpod-1dcbe7c8c24a40fc5d77a8a9e4cc11ae9a8cc3c75e960028a876e3ee518cc1d8.scope: Deactivated successfully.
Feb 25 07:57:05 np0005629333 podman[369954]: 2026-02-25 12:57:05.442010637 +0000 UTC m=+0.704361439 container died 1dcbe7c8c24a40fc5d77a8a9e4cc11ae9a8cc3c75e960028a876e3ee518cc1d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_curran, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 07:57:05 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f8de50344cef45ad1ee11d815dbfbcbfab846478d398da10f4bcdb3da1bb5219-merged.mount: Deactivated successfully.
Feb 25 07:57:05 np0005629333 podman[369954]: 2026-02-25 12:57:05.491478902 +0000 UTC m=+0.753829694 container remove 1dcbe7c8c24a40fc5d77a8a9e4cc11ae9a8cc3c75e960028a876e3ee518cc1d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 07:57:05 np0005629333 systemd[1]: libpod-conmon-1dcbe7c8c24a40fc5d77a8a9e4cc11ae9a8cc3c75e960028a876e3ee518cc1d8.scope: Deactivated successfully.
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.521 244018 DEBUG nova.network.neutron [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.530 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.540 244018 DEBUG oslo_concurrency.lockutils [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-e728a9dc-bb04-4a25-bcad-b787a044bc0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.541 244018 DEBUG nova.compute.manager [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Received event network-changed-a4d9181f-fefa-4cde-81ea-c5e59433606c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.541 244018 DEBUG nova.compute.manager [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Refreshing instance network info cache due to event network-changed-a4d9181f-fefa-4cde-81ea-c5e59433606c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.542 244018 DEBUG oslo_concurrency.lockutils [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-6185e497-8422-4a5f-a98a-865484d53d4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.542 244018 DEBUG oslo_concurrency.lockutils [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-6185e497-8422-4a5f-a98a-865484d53d4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.542 244018 DEBUG nova.network.neutron [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Refreshing network info cache for port a4d9181f-fefa-4cde-81ea-c5e59433606c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:57:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:57:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4279281827' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.725 244018 DEBUG oslo_concurrency.processutils [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.733 244018 DEBUG nova.compute.provider_tree [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.755 244018 DEBUG nova.scheduler.client.report [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.785 244018 DEBUG oslo_concurrency.lockutils [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:57:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:57:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4251817825' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.805 244018 DEBUG oslo_concurrency.processutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.808 244018 DEBUG nova.virt.libvirt.vif [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:56:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-1963119789',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-1963119789',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=139,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL87R8Bekvm59mo5kwvnuGBsBkp9tXZgiXMYkz6TvQlSCgwb0IPTrLwrZDNNWgUnIBhU5eG7QYkAKCOG1521iSWxvl92auy73uemRv7k4zUT5bfi0v5q+2K3Cz8+sbuK0A==',key_name='tempest-TestSecurityGroupsBasicOps-1295736462',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-405awptu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:56:59Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=6185e497-8422-4a5f-a98a-865484d53d4f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "address": "fa:16:3e:ef:c2:bb", "network": {"id": "f23f1675-b7ff-4265-a011-0912c637d746", "bridge": "br-int", "label": "tempest-network-smoke--923808756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d9181f-fe", "ovs_interfaceid": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.808 244018 DEBUG nova.network.os_vif_util [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "address": "fa:16:3e:ef:c2:bb", "network": {"id": "f23f1675-b7ff-4265-a011-0912c637d746", "bridge": "br-int", "label": "tempest-network-smoke--923808756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d9181f-fe", "ovs_interfaceid": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.810 244018 DEBUG nova.network.os_vif_util [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:c2:bb,bridge_name='br-int',has_traffic_filtering=True,id=a4d9181f-fefa-4cde-81ea-c5e59433606c,network=Network(f23f1675-b7ff-4265-a011-0912c637d746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4d9181f-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.812 244018 DEBUG nova.objects.instance [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'pci_devices' on Instance uuid 6185e497-8422-4a5f-a98a-865484d53d4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.829 244018 INFO nova.scheduler.client.report [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Deleted allocations for instance e728a9dc-bb04-4a25-bcad-b787a044bc0b#033[00m
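The "Deleted allocations" record is the scheduler report client finishing the terminate of the other instance (e728a9dc-...) against the Placement API, whose underlying HTTP request is a DELETE on /allocations/{consumer_uuid}. A rough sketch with requests, noting that the endpoint URL and token are placeholders and that Nova really drives this through a keystoneauth session:

    # Placeholder sketch of the Placement call behind the record above.
    import requests

    PLACEMENT_URL = "http://placement.example.com"  # placeholder endpoint
    TOKEN = "..."                                   # placeholder auth token

    resp = requests.delete(
        f"{PLACEMENT_URL}/allocations/e728a9dc-bb04-4a25-bcad-b787a044bc0b",
        headers={"X-Auth-Token": TOKEN,
                 "OpenStack-API-Version": "placement 1.28"})
    resp.raise_for_status()  # Placement answers 204 No Content on success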
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.844 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:57:05 np0005629333 nova_compute[244014]:  <uuid>6185e497-8422-4a5f-a98a-865484d53d4f</uuid>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:  <name>instance-0000008b</name>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-1963119789</nova:name>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:57:04</nova:creationTime>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:57:05 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:        <nova:user uuid="ea895f651dd742a7b5eb2d63fb34641c">tempest-TestSecurityGroupsBasicOps-948360018-project-member</nova:user>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:        <nova:project uuid="b9699483122f465084e3147e4904d13d">tempest-TestSecurityGroupsBasicOps-948360018</nova:project>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:        <nova:port uuid="a4d9181f-fefa-4cde-81ea-c5e59433606c">
Feb 25 07:57:05 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <entry name="serial">6185e497-8422-4a5f-a98a-865484d53d4f</entry>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <entry name="uuid">6185e497-8422-4a5f-a98a-865484d53d4f</entry>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/6185e497-8422-4a5f-a98a-865484d53d4f_disk">
Feb 25 07:57:05 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:57:05 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/6185e497-8422-4a5f-a98a-865484d53d4f_disk.config">
Feb 25 07:57:05 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:57:05 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:ef:c2:bb"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <target dev="tapa4d9181f-fe"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/6185e497-8422-4a5f-a98a-865484d53d4f/console.log" append="off"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:57:05 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:57:05 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:57:05 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:57:05 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
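The block between "End _get_guest_xml" and the trailing _get_guest_xml marker is the complete domain definition Nova hands to libvirt. A small standard-library sketch that pulls the key fields back out of such a dump (the namespace URI is the one declared on <nova:instance>):

    # Summarize a guest-XML dump like the one logged above (sketch).
    import xml.etree.ElementTree as ET

    NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

    def summarize_domain(xml_text):
        root = ET.fromstring(xml_text)
        return {
            "uuid": root.findtext("uuid"),
            "name": root.findtext("name"),
            "memory_kib": int(root.findtext("memory")),  # 131072 KiB = 128 MiB
            "vcpus": int(root.findtext("vcpu")),
            "flavor": root.find(".//nova:flavor", NS).get("name"),
            "machine_type": root.find("os/type").get("machine"),
        }

For the dump above this returns name='instance-0000008b', flavor='m1.nano' and machine_type='q35', matching the image_hw_machine_type in the instance's system_metadata.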
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.845 244018 DEBUG nova.compute.manager [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Preparing to wait for external event network-vif-plugged-a4d9181f-fefa-4cde-81ea-c5e59433606c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.845 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.845 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.846 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
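The three lockutils records bracket event registration with a per-instance named lock ("<uuid>-events"), held for effectively zero time. The same pattern with oslo.concurrency, as a sketch (the dictionary standing in for Nova's event registry is ours):

    # Same named-lock pattern oslo.concurrency logs above (sketch).
    from oslo_concurrency import lockutils

    _events = {}  # stand-in for InstanceEvents' internal registry

    def prepare_for_instance_event(instance_uuid, event_name):
        with lockutils.lock(f"{instance_uuid}-events"):
            # create-or-get under the lock, as the logged helper name suggests
            return _events.setdefault((instance_uuid, event_name), object())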
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.847 244018 DEBUG nova.virt.libvirt.vif [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:56:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-1963119789',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-1963119789',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=139,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL87R8Bekvm59mo5kwvnuGBsBkp9tXZgiXMYkz6TvQlSCgwb0IPTrLwrZDNNWgUnIBhU5eG7QYkAKCOG1521iSWxvl92auy73uemRv7k4zUT5bfi0v5q+2K3Cz8+sbuK0A==',key_name='tempest-TestSecurityGroupsBasicOps-1295736462',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-405awptu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:56:59Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=6185e497-8422-4a5f-a98a-865484d53d4f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "address": "fa:16:3e:ef:c2:bb", "network": {"id": "f23f1675-b7ff-4265-a011-0912c637d746", "bridge": "br-int", "label": "tempest-network-smoke--923808756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d9181f-fe", "ovs_interfaceid": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.847 244018 DEBUG nova.network.os_vif_util [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "address": "fa:16:3e:ef:c2:bb", "network": {"id": "f23f1675-b7ff-4265-a011-0912c637d746", "bridge": "br-int", "label": "tempest-network-smoke--923808756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d9181f-fe", "ovs_interfaceid": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.848 244018 DEBUG nova.network.os_vif_util [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:c2:bb,bridge_name='br-int',has_traffic_filtering=True,id=a4d9181f-fefa-4cde-81ea-c5e59433606c,network=Network(f23f1675-b7ff-4265-a011-0912c637d746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4d9181f-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.849 244018 DEBUG os_vif [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:c2:bb,bridge_name='br-int',has_traffic_filtering=True,id=a4d9181f-fefa-4cde-81ea-c5e59433606c,network=Network(f23f1675-b7ff-4265-a011-0912c637d746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4d9181f-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.849 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.850 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.850 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.859 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.859 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa4d9181f-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.860 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa4d9181f-fe, col_values=(('external_ids', {'iface-id': 'a4d9181f-fefa-4cde-81ea-c5e59433606c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ef:c2:bb', 'vm-uuid': '6185e497-8422-4a5f-a98a-865484d53d4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
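AddBridgeCommand, AddPortCommand and DbSetCommand are ovsdbapp commands batched into one transaction each against the local OVSDB (the first txn is a no-op because br-int already exists). Issuing the same port-plus-external_ids transaction through ovsdbapp's public API would look roughly like this, assuming ovsdb-server on its default unix socket:

    # Sketch of the ovsdbapp transaction behind the records above;
    # assumes ovsdb-server listening on the default socket path.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/var/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tapa4d9181f-fe", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tapa4d9181f-fe",
            ("external_ids", {
                "iface-id": "a4d9181f-fefa-4cde-81ea-c5e59433606c",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:ef:c2:bb",
                "vm-uuid": "6185e497-8422-4a5f-a98a-865484d53d4f"})))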
Feb 25 07:57:05 np0005629333 NetworkManager[49836]: <info>  [1772024225.8641] manager: (tapa4d9181f-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/605)
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.871 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.872 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:05 np0005629333 nova_compute[244014]: 2026-02-25 12:57:05.874 244018 INFO os_vif [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:c2:bb,bridge_name='br-int',has_traffic_filtering=True,id=a4d9181f-fefa-4cde-81ea-c5e59433606c,network=Network(f23f1675-b7ff-4265-a011-0912c637d746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4d9181f-fe')#033[00m
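"Plugging vif" / "Successfully plugged vif" wrap that whole sequence in the top-level os-vif entry point. A self-contained sketch of the call, with the VIF reduced to the fields shown in the logged repr (whether the ovs plugin accepts such a minimal object is an assumption):

    # Top-level os-vif usage matching the plug records above (sketch).
    import os_vif
    from os_vif.objects import instance_info, vif as vif_obj

    os_vif.initialize()  # load the os-vif plugins (ovs, linux_bridge, ...)

    vif = vif_obj.VIFOpenVSwitch(
        id="a4d9181f-fefa-4cde-81ea-c5e59433606c",
        address="fa:16:3e:ef:c2:bb",
        bridge_name="br-int",
        vif_name="tapa4d9181f-fe",
        plugin="ovs")
    info = instance_info.InstanceInfo(
        uuid="6185e497-8422-4a5f-a98a-865484d53d4f",
        name="instance-0000008b")
    os_vif.plug(vif, info)  # dispatched to the 'ovs' plugin named in the VIF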
Feb 25 07:57:05 np0005629333 podman[370129]: 2026-02-25 12:57:05.960921724 +0000 UTC m=+0.063402189 container create f12eb82b4e555e60be176c6f2f414f7ddf1058fe1eee9289b6c897a7323c3c72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shtern, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Feb 25 07:57:06 np0005629333 systemd[1]: Started libpod-conmon-f12eb82b4e555e60be176c6f2f414f7ddf1058fe1eee9289b6c897a7323c3c72.scope.
Feb 25 07:57:06 np0005629333 nova_compute[244014]: 2026-02-25 12:57:06.022 244018 DEBUG oslo_concurrency.lockutils [None req-3166a7fc-250b-4cd8-bd30-b1d1ec3ca91f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:57:06 np0005629333 podman[370129]: 2026-02-25 12:57:05.935261041 +0000 UTC m=+0.037741546 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:57:06 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:57:06 np0005629333 podman[370129]: 2026-02-25 12:57:06.057082537 +0000 UTC m=+0.159563062 container init f12eb82b4e555e60be176c6f2f414f7ddf1058fe1eee9289b6c897a7323c3c72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 07:57:06 np0005629333 podman[370129]: 2026-02-25 12:57:06.06356487 +0000 UTC m=+0.166045335 container start f12eb82b4e555e60be176c6f2f414f7ddf1058fe1eee9289b6c897a7323c3c72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shtern, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 07:57:06 np0005629333 podman[370129]: 2026-02-25 12:57:06.068983833 +0000 UTC m=+0.171464348 container attach f12eb82b4e555e60be176c6f2f414f7ddf1058fe1eee9289b6c897a7323c3c72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:57:06 np0005629333 trusting_shtern[370147]: 167 167
Feb 25 07:57:06 np0005629333 systemd[1]: libpod-f12eb82b4e555e60be176c6f2f414f7ddf1058fe1eee9289b6c897a7323c3c72.scope: Deactivated successfully.
Feb 25 07:57:06 np0005629333 podman[370129]: 2026-02-25 12:57:06.070825105 +0000 UTC m=+0.173305630 container died f12eb82b4e555e60be176c6f2f414f7ddf1058fe1eee9289b6c897a7323c3c72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shtern, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Feb 25 07:57:06 np0005629333 nova_compute[244014]: 2026-02-25 12:57:06.080 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:57:06 np0005629333 nova_compute[244014]: 2026-02-25 12:57:06.080 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:57:06 np0005629333 nova_compute[244014]: 2026-02-25 12:57:06.081 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No VIF found with MAC fa:16:3e:ef:c2:bb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:57:06 np0005629333 nova_compute[244014]: 2026-02-25 12:57:06.082 244018 INFO nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Using config drive#033[00m
Feb 25 07:57:06 np0005629333 systemd[1]: var-lib-containers-storage-overlay-76fd4d580a93400a8974fb39a4257d5a104621e0b26cf17119c53cd399167e1b-merged.mount: Deactivated successfully.
Feb 25 07:57:06 np0005629333 podman[370129]: 2026-02-25 12:57:06.115113414 +0000 UTC m=+0.217593879 container remove f12eb82b4e555e60be176c6f2f414f7ddf1058fe1eee9289b6c897a7323c3c72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_shtern, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:57:06 np0005629333 nova_compute[244014]: 2026-02-25 12:57:06.120 244018 DEBUG nova.storage.rbd_utils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 6185e497-8422-4a5f-a98a-865484d53d4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:57:06 np0005629333 systemd[1]: libpod-conmon-f12eb82b4e555e60be176c6f2f414f7ddf1058fe1eee9289b6c897a7323c3c72.scope: Deactivated successfully.
Feb 25 07:57:06 np0005629333 podman[370189]: 2026-02-25 12:57:06.266805933 +0000 UTC m=+0.042525361 container create 61e23cc7f15d4c498b983457ae95a3944cbd73c7d903e997fbfcc18b15266797 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shockley, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 07:57:06 np0005629333 systemd[1]: Started libpod-conmon-61e23cc7f15d4c498b983457ae95a3944cbd73c7d903e997fbfcc18b15266797.scope.
Feb 25 07:57:06 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:57:06 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8476635d56727ada9b5724ade5c7efc8774131fbe481fbf88bb2488e495c46a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:57:06 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8476635d56727ada9b5724ade5c7efc8774131fbe481fbf88bb2488e495c46a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:57:06 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8476635d56727ada9b5724ade5c7efc8774131fbe481fbf88bb2488e495c46a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:57:06 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8476635d56727ada9b5724ade5c7efc8774131fbe481fbf88bb2488e495c46a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:57:06 np0005629333 podman[370189]: 2026-02-25 12:57:06.248799225 +0000 UTC m=+0.024518663 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:57:06 np0005629333 podman[370189]: 2026-02-25 12:57:06.362078571 +0000 UTC m=+0.137798049 container init 61e23cc7f15d4c498b983457ae95a3944cbd73c7d903e997fbfcc18b15266797 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shockley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 07:57:06 np0005629333 podman[370189]: 2026-02-25 12:57:06.378586816 +0000 UTC m=+0.154306224 container start 61e23cc7f15d4c498b983457ae95a3944cbd73c7d903e997fbfcc18b15266797 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shockley, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 07:57:06 np0005629333 podman[370189]: 2026-02-25 12:57:06.382387623 +0000 UTC m=+0.158107091 container attach 61e23cc7f15d4c498b983457ae95a3944cbd73c7d903e997fbfcc18b15266797 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 07:57:06 np0005629333 nova_compute[244014]: 2026-02-25 12:57:06.440 244018 INFO nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Creating config drive at /var/lib/nova/instances/6185e497-8422-4a5f-a98a-865484d53d4f/disk.config#033[00m
Feb 25 07:57:06 np0005629333 nova_compute[244014]: 2026-02-25 12:57:06.445 244018 DEBUG oslo_concurrency.processutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6185e497-8422-4a5f-a98a-865484d53d4f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp4fd8f1cp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:57:06 np0005629333 nova_compute[244014]: 2026-02-25 12:57:06.583 244018 DEBUG oslo_concurrency.processutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6185e497-8422-4a5f-a98a-865484d53d4f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp4fd8f1cp" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
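The config drive is built by shelling out to mkisofs with exactly the flags in the CMD line above (ISO 9660 with Joliet and Rock Ridge extensions, volume label config-2). Reproducing the invocation with subprocess:

    # Rebuild the logged mkisofs call with subprocess (sketch; the
    # metadata directory is the temporary path from the log).
    import subprocess

    subprocess.run(
        ["/usr/bin/mkisofs",
         "-o", "/var/lib/nova/instances/6185e497-8422-4a5f-a98a-865484d53d4f/disk.config",
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
         "-quiet", "-J", "-r", "-V", "config-2",
         "/tmp/tmp4fd8f1cp"],
        check=True)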
Feb 25 07:57:06 np0005629333 nova_compute[244014]: 2026-02-25 12:57:06.617 244018 DEBUG nova.storage.rbd_utils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 6185e497-8422-4a5f-a98a-865484d53d4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:57:06 np0005629333 nova_compute[244014]: 2026-02-25 12:57:06.622 244018 DEBUG oslo_concurrency.processutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6185e497-8422-4a5f-a98a-865484d53d4f/disk.config 6185e497-8422-4a5f-a98a-865484d53d4f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]: {
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:    "0": [
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:        {
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "devices": [
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "/dev/loop3"
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            ],
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "lv_name": "ceph_lv0",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "lv_size": "21470642176",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "name": "ceph_lv0",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "tags": {
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.cluster_name": "ceph",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.crush_device_class": "",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.encrypted": "0",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.objectstore": "bluestore",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.osd_id": "0",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.type": "block",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.vdo": "0",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.with_tpm": "0"
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            },
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "type": "block",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "vg_name": "ceph_vg0"
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:        }
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:    ],
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:    "1": [
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:        {
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "devices": [
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "/dev/loop4"
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            ],
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "lv_name": "ceph_lv1",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "lv_size": "21470642176",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "name": "ceph_lv1",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "tags": {
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.cluster_name": "ceph",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.crush_device_class": "",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.encrypted": "0",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.objectstore": "bluestore",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.osd_id": "1",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.type": "block",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.vdo": "0",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.with_tpm": "0"
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            },
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "type": "block",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "vg_name": "ceph_vg1"
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:        }
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:    ],
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:    "2": [
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:        {
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "devices": [
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "/dev/loop5"
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            ],
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "lv_name": "ceph_lv2",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "lv_size": "21470642176",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "name": "ceph_lv2",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "tags": {
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.cluster_name": "ceph",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.crush_device_class": "",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.encrypted": "0",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.objectstore": "bluestore",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.osd_id": "2",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.type": "block",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.vdo": "0",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:                "ceph.with_tpm": "0"
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            },
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "type": "block",
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:            "vg_name": "ceph_vg2"
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:        }
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]:    ]
Feb 25 07:57:06 np0005629333 upbeat_shockley[370205]: }
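The JSON block printed by the upbeat_shockley container is ceph-volume lvm list-style inventory, keyed by OSD id, for the three loop-backed bluestore OSDs on this node. A short sketch that maps each OSD id to its logical volume and backing device (reading the capture from a file is our assumption):

    # Map OSD ids to LVs/devices from ceph-volume style JSON like the
    # block above; the capture file name is hypothetical.
    import json

    with open("ceph_volume_lvm_list.json") as f:
        osds = json.load(f)

    for osd_id, volumes in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for vol in volumes:
            print(f"osd.{osd_id}: lv={vol['lv_path']} "
                  f"devices={','.join(vol['devices'])} "
                  f"osd_fsid={vol['tags']['ceph.osd_fsid']}")

Run against the block above, this prints osd.0 on /dev/loop3, osd.1 on /dev/loop4 and osd.2 on /dev/loop5, all in cluster fsid 8ac33163-6221-5d58-9a39-8b6933fe7762.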
Feb 25 07:57:06 np0005629333 systemd[1]: libpod-61e23cc7f15d4c498b983457ae95a3944cbd73c7d903e997fbfcc18b15266797.scope: Deactivated successfully.
Feb 25 07:57:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2296: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 25 07:57:06 np0005629333 podman[370251]: 2026-02-25 12:57:06.76230167 +0000 UTC m=+0.032467767 container died 61e23cc7f15d4c498b983457ae95a3944cbd73c7d903e997fbfcc18b15266797 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shockley, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:57:06 np0005629333 nova_compute[244014]: 2026-02-25 12:57:06.785 244018 DEBUG oslo_concurrency.processutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6185e497-8422-4a5f-a98a-865484d53d4f/disk.config 6185e497-8422-4a5f-a98a-865484d53d4f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:57:06 np0005629333 nova_compute[244014]: 2026-02-25 12:57:06.786 244018 INFO nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Deleting local config drive /var/lib/nova/instances/6185e497-8422-4a5f-a98a-865484d53d4f/disk.config because it was imported into RBD.#033[00m
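The finished ISO is pushed into the vms pool with rbd import and the local copy deleted, exactly as the two records above describe. The same import-then-delete step as a sketch (pool, client id and conf path are the ones from the log):

    # Import the config drive into RBD, then drop the local copy (sketch).
    import os
    import subprocess

    local_iso = ("/var/lib/nova/instances/"
                 "6185e497-8422-4a5f-a98a-865484d53d4f/disk.config")
    subprocess.run(
        ["rbd", "import", "--pool", "vms", local_iso,
         "6185e497-8422-4a5f-a98a-865484d53d4f_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)
    os.unlink(local_iso)  # it now lives in RBD, as the INFO record says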
Feb 25 07:57:06 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d8476635d56727ada9b5724ade5c7efc8774131fbe481fbf88bb2488e495c46a-merged.mount: Deactivated successfully.
Feb 25 07:57:06 np0005629333 podman[370251]: 2026-02-25 12:57:06.808285767 +0000 UTC m=+0.078451814 container remove 61e23cc7f15d4c498b983457ae95a3944cbd73c7d903e997fbfcc18b15266797 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_shockley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 07:57:06 np0005629333 systemd[1]: libpod-conmon-61e23cc7f15d4c498b983457ae95a3944cbd73c7d903e997fbfcc18b15266797.scope: Deactivated successfully.
Feb 25 07:57:06 np0005629333 kernel: tapa4d9181f-fe: entered promiscuous mode
Feb 25 07:57:06 np0005629333 NetworkManager[49836]: <info>  [1772024226.8417] manager: (tapa4d9181f-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/606)
Feb 25 07:57:06 np0005629333 nova_compute[244014]: 2026-02-25 12:57:06.842 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:06 np0005629333 ovn_controller[147040]: 2026-02-25T12:57:06Z|01459|binding|INFO|Claiming lport a4d9181f-fefa-4cde-81ea-c5e59433606c for this chassis.
Feb 25 07:57:06 np0005629333 ovn_controller[147040]: 2026-02-25T12:57:06Z|01460|binding|INFO|a4d9181f-fefa-4cde-81ea-c5e59433606c: Claiming fa:16:3e:ef:c2:bb 10.100.0.11
Feb 25 07:57:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:06.851 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:c2:bb 10.100.0.11'], port_security=['fa:16:3e:ef:c2:bb 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '6185e497-8422-4a5f-a98a-865484d53d4f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f23f1675-b7ff-4265-a011-0912c637d746', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '87d37268-c5dd-4380-a676-5b9940f82b8f 979c759f-0c66-4e81-a7f1-5a970907b9e7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad5ffac6-eab3-4eca-a87d-9bf8335b5de6, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=a4d9181f-fefa-4cde-81ea-c5e59433606c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:57:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:06.853 157129 INFO neutron.agent.ovn.metadata.agent [-] Port a4d9181f-fefa-4cde-81ea-c5e59433606c in datapath f23f1675-b7ff-4265-a011-0912c637d746 bound to our chassis#033[00m
Feb 25 07:57:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:06.855 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f23f1675-b7ff-4265-a011-0912c637d746#033[00m
Feb 25 07:57:06 np0005629333 nova_compute[244014]: 2026-02-25 12:57:06.856 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:06 np0005629333 ovn_controller[147040]: 2026-02-25T12:57:06Z|01461|binding|INFO|Setting lport a4d9181f-fefa-4cde-81ea-c5e59433606c ovn-installed in OVS
Feb 25 07:57:06 np0005629333 ovn_controller[147040]: 2026-02-25T12:57:06Z|01462|binding|INFO|Setting lport a4d9181f-fefa-4cde-81ea-c5e59433606c up in Southbound
Feb 25 07:57:06 np0005629333 nova_compute[244014]: 2026-02-25 12:57:06.861 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:06 np0005629333 nova_compute[244014]: 2026-02-25 12:57:06.864 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:06.866 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d4536e72-ab57-43c2-a12a-3df9be9672d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:06.868 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf23f1675-b1 in ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:57:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:06.871 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf23f1675-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:57:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:06.871 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[447afe6e-5aca-4e9f-b308-0653d8c63f2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:06.873 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4be4a40b-bba3-40c7-af6e-977addef1b5f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:06.884 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[06da8670-a162-4924-a873-445d4b8a4086]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:06 np0005629333 systemd-machined[210048]: New machine qemu-171-instance-0000008b.
Feb 25 07:57:06 np0005629333 systemd[1]: Started Virtual Machine qemu-171-instance-0000008b.
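[editor's note] systemd-machined registers each libvirt guest as a machine, so the new instance should now be visible to machinectl. A small check, assuming machinectl is on PATH and using the machine name from the log line above:

import subprocess

# "machinectl status <name>" prints the scope unit, leader PID and
# start time for a registered machine.
subprocess.run(["machinectl", "status", "qemu-171-instance-0000008b"], check=True)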
Feb 25 07:57:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:06.906 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7c0d1edd-58a9-4c37-99a4-892b50e67856]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:06 np0005629333 systemd-udevd[370307]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:57:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:06.932 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cbba6b0e-a19a-4d72-8288-4ccbd09b040f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:06 np0005629333 systemd-udevd[370312]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:57:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:06.939 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d62e5cab-f93f-44aa-8f16-1ef7255fa0f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:06 np0005629333 NetworkManager[49836]: <info>  [1772024226.9462] manager: (tapf23f1675-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/607)
Feb 25 07:57:06 np0005629333 NetworkManager[49836]: <info>  [1772024226.9483] device (tapa4d9181f-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:57:06 np0005629333 NetworkManager[49836]: <info>  [1772024226.9501] device (tapa4d9181f-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:57:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:06.973 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[59f7fbbf-e34d-41bc-ae16-c5b1f1a86ce3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:06 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:06.978 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[65997da9-34c0-44b2-a997-be9b4c32e892]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:07 np0005629333 NetworkManager[49836]: <info>  [1772024227.0015] device (tapf23f1675-b0): carrier: link connected
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:07.005 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a860d562-d185-4fdf-94ab-3ee3d7bf44a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:07.022 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fd836392-c4c5-42f6-bda6-e0545d622349]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf23f1675-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:24:4e:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619640, 'reachable_time': 34313, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370363, 'error': None, 'target': 'ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:07.037 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[48222472-b827-4068-bc3b-22af36be4592]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe24:4e9f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 619640, 'tstamp': 619640}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370366, 'error': None, 'target': 'ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:07.054 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[73287dcf-b5c9-4341-803f-11ed6f0bc929]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf23f1675-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:24:4e:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619640, 'reachable_time': 34313, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 370367, 'error': None, 'target': 'ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
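[editor's note] The two RTM_NEWLINK dumps above are netlink replies fetched through privsep; they confirm tapf23f1675-b1 is up in the ovnmeta- namespace with MAC fa:16:3e:24:4e:9f. Modern iproute2 can produce the same view as JSON, which keeps a spot check scriptable; a sketch assuming the namespace still exists (the helper name is made up):

import json
import subprocess

NS = "ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746"

def link_state(ns: str, ifname: str) -> dict:
    """Return iproute2's JSON description of one interface inside a netns."""
    out = subprocess.run(
        ["ip", "-j", "-n", ns, "link", "show", ifname],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)[0]

link = link_state(NS, "tapf23f1675-b1")
print(link["operstate"], link["address"])   # expected: UP fa:16:3e:24:4e:9f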
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:07.082 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f5152cef-8e4b-46bb-a17b-8b95dca5d29d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:07.132 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[82bed630-2ea3-482e-af31-f9b0c79a81f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:07.133 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf23f1675-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:07.134 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:07.135 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf23f1675-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.137 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:07 np0005629333 kernel: tapf23f1675-b0: entered promiscuous mode
Feb 25 07:57:07 np0005629333 NetworkManager[49836]: <info>  [1772024227.1378] manager: (tapf23f1675-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/608)
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:07.140 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf23f1675-b0, col_values=(('external_ids', {'iface-id': '93d7aa8d-1845-4103-8fd2-aed7a8c4298a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
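[editor's note] The three ovsdbapp commands logged above (DelPortCommand on br-ex, AddPortCommand on br-int, DbSetCommand on the Interface row) correspond to a single ovs-vsctl transaction. A sketch of that equivalence, assuming ovs-vsctl on the host; the iface-id value is copied from the log:

import subprocess

port = "tapf23f1675-b0"
iface_id = "93d7aa8d-1845-4103-8fd2-aed7a8c4298a"

# One atomic ovs-vsctl transaction mirroring the logged ovsdbapp commands:
# drop the port from br-ex if present, add it to br-int, and tag the
# Interface row with the iface-id OVN matches against the Port_Binding.
subprocess.run(
    ["ovs-vsctl",
     "--", "--if-exists", "del-port", "br-ex", port,
     "--", "--may-exist", "add-port", "br-int", port,
     "--", "set", "Interface", port, f"external_ids:iface-id={iface_id}"],
    check=True,
)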
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.141 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:07 np0005629333 ovn_controller[147040]: 2026-02-25T12:57:07Z|01463|binding|INFO|Releasing lport 93d7aa8d-1845-4103-8fd2-aed7a8c4298a from this chassis (sb_readonly=0)
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.152 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.153 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:07.153 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f23f1675-b7ff-4265-a011-0912c637d746.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f23f1675-b7ff-4265-a011-0912c637d746.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:07.154 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[34ac1809-8aef-49ae-88fd-3416c3d0d54a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:07.154 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-f23f1675-b7ff-4265-a011-0912c637d746
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/f23f1675-b7ff-4265-a011-0912c637d746.pid.haproxy
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID f23f1675-b7ff-4265-a011-0912c637d746
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 07:57:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:07.155 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746', 'env', 'PROCESS_TAG=haproxy-f23f1675-b7ff-4265-a011-0912c637d746', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f23f1675-b7ff-4265-a011-0912c637d746.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
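[editor's note] The generated configuration runs haproxy inside the ovnmeta- namespace, binds 169.254.169.254:80, forwards to the Unix socket at /var/lib/neutron/metadata_proxy, and stamps each request with X-OVN-Network-ID so the metadata service can resolve the network. Once the daemon is up it writes the pidfile named in the config (the earlier "Unable to access ... .pid.haproxy" line is just the pre-start existence check). A sketch that reads that pidfile and checks the process, assuming the default paths seen in the log:

import os
from pathlib import Path

PIDFILE = Path("/var/lib/neutron/external/pids/"
               "f23f1675-b7ff-4265-a011-0912c637d746.pid.haproxy")

pid = int(PIDFILE.read_text().split()[0])   # first PID if several are listed
os.kill(pid, 0)   # signal 0 does nothing but raises if the PID is gone
print(f"metadata haproxy alive, pid {pid}")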
Feb 25 07:57:07 np0005629333 podman[370389]: 2026-02-25 12:57:07.299243977 +0000 UTC m=+0.052421530 container create 2246d6c88233dc7ab7398dde63460a33c2b150d8e57ec843bca15423e33bc882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_varahamihira, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 07:57:07 np0005629333 systemd[1]: Started libpod-conmon-2246d6c88233dc7ab7398dde63460a33c2b150d8e57ec843bca15423e33bc882.scope.
Feb 25 07:57:07 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:57:07 np0005629333 podman[370389]: 2026-02-25 12:57:07.265006941 +0000 UTC m=+0.018184534 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:57:07 np0005629333 podman[370389]: 2026-02-25 12:57:07.363498869 +0000 UTC m=+0.116676472 container init 2246d6c88233dc7ab7398dde63460a33c2b150d8e57ec843bca15423e33bc882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_varahamihira, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 07:57:07 np0005629333 podman[370389]: 2026-02-25 12:57:07.372984727 +0000 UTC m=+0.126162310 container start 2246d6c88233dc7ab7398dde63460a33c2b150d8e57ec843bca15423e33bc882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_varahamihira, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:57:07 np0005629333 naughty_varahamihira[370405]: 167 167
Feb 25 07:57:07 np0005629333 systemd[1]: libpod-2246d6c88233dc7ab7398dde63460a33c2b150d8e57ec843bca15423e33bc882.scope: Deactivated successfully.
Feb 25 07:57:07 np0005629333 podman[370389]: 2026-02-25 12:57:07.377194996 +0000 UTC m=+0.130372559 container attach 2246d6c88233dc7ab7398dde63460a33c2b150d8e57ec843bca15423e33bc882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_varahamihira, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 07:57:07 np0005629333 podman[370389]: 2026-02-25 12:57:07.377528205 +0000 UTC m=+0.130705768 container died 2246d6c88233dc7ab7398dde63460a33c2b150d8e57ec843bca15423e33bc882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.386 244018 DEBUG nova.compute.manager [req-69ec7bb3-211f-4c14-8dcc-74f997a70d1f req-b68dfb05-ea20-44e4-967e-55833308f702 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Received event network-vif-plugged-a4d9181f-fefa-4cde-81ea-c5e59433606c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.388 244018 DEBUG oslo_concurrency.lockutils [req-69ec7bb3-211f-4c14-8dcc-74f997a70d1f req-b68dfb05-ea20-44e4-967e-55833308f702 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.388 244018 DEBUG oslo_concurrency.lockutils [req-69ec7bb3-211f-4c14-8dcc-74f997a70d1f req-b68dfb05-ea20-44e4-967e-55833308f702 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.388 244018 DEBUG oslo_concurrency.lockutils [req-69ec7bb3-211f-4c14-8dcc-74f997a70d1f req-b68dfb05-ea20-44e4-967e-55833308f702 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.389 244018 DEBUG nova.compute.manager [req-69ec7bb3-211f-4c14-8dcc-74f997a70d1f req-b68dfb05-ea20-44e4-967e-55833308f702 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Processing event network-vif-plugged-a4d9181f-fefa-4cde-81ea-c5e59433606c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:57:07 np0005629333 systemd[1]: var-lib-containers-storage-overlay-58b44eeb6950c252e84e50f6336893559c59935c688db34c83a4dc9cd1a74489-merged.mount: Deactivated successfully.
Feb 25 07:57:07 np0005629333 podman[370389]: 2026-02-25 12:57:07.419461628 +0000 UTC m=+0.172639171 container remove 2246d6c88233dc7ab7398dde63460a33c2b150d8e57ec843bca15423e33bc882 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_varahamihira, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:57:07 np0005629333 systemd[1]: libpod-conmon-2246d6c88233dc7ab7398dde63460a33c2b150d8e57ec843bca15423e33bc882.scope: Deactivated successfully.
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.533 244018 DEBUG nova.network.neutron [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Updated VIF entry in instance network info cache for port a4d9181f-fefa-4cde-81ea-c5e59433606c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.534 244018 DEBUG nova.network.neutron [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Updating instance_info_cache with network_info: [{"id": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "address": "fa:16:3e:ef:c2:bb", "network": {"id": "f23f1675-b7ff-4265-a011-0912c637d746", "bridge": "br-int", "label": "tempest-network-smoke--923808756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d9181f-fe", "ovs_interfaceid": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
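[editor's note] The instance_info_cache payload above is plain JSON, which makes it convenient for ad-hoc inspection. A sketch that pulls the fixed IP and bridge out of a cached entry shaped like the logged one; the variable below holds a trimmed copy of that payload:

import json

# Trimmed copy of the network_info entry logged above.
cached = json.loads("""
[{"id": "a4d9181f-fefa-4cde-81ea-c5e59433606c",
  "address": "fa:16:3e:ef:c2:bb",
  "network": {"id": "f23f1675-b7ff-4265-a011-0912c637d746",
              "bridge": "br-int",
              "subnets": [{"cidr": "10.100.0.0/28",
                           "ips": [{"address": "10.100.0.11"}]}]}}]
""")

for vif in cached:
    net = vif["network"]
    ips = [ip["address"] for s in net["subnets"] for ip in s["ips"]]
    print(vif["id"], net["bridge"], ips)   # -> ... br-int ['10.100.0.11']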
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.549 244018 DEBUG oslo_concurrency.lockutils [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-6185e497-8422-4a5f-a98a-865484d53d4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.550 244018 DEBUG nova.compute.manager [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-vif-unplugged-bde15f84-edfb-445b-b129-ec33331763f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.550 244018 DEBUG oslo_concurrency.lockutils [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.550 244018 DEBUG oslo_concurrency.lockutils [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.551 244018 DEBUG oslo_concurrency.lockutils [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.551 244018 DEBUG nova.compute.manager [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] No waiting events found dispatching network-vif-unplugged-bde15f84-edfb-445b-b129-ec33331763f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.552 244018 DEBUG nova.compute.manager [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-vif-unplugged-bde15f84-edfb-445b-b129-ec33331763f0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.552 244018 DEBUG nova.compute.manager [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received event network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.552 244018 DEBUG oslo_concurrency.lockutils [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.553 244018 DEBUG oslo_concurrency.lockutils [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.553 244018 DEBUG oslo_concurrency.lockutils [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e728a9dc-bb04-4a25-bcad-b787a044bc0b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.553 244018 DEBUG nova.compute.manager [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] No waiting events found dispatching network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.554 244018 WARNING nova.compute.manager [req-35c688b0-be5c-4a43-ab1a-499558accd2a req-a42414e0-6bc6-42af-b1f0-69635ad15d01 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Received unexpected event network-vif-plugged-bde15f84-edfb-445b-b129-ec33331763f0 for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:57:07 np0005629333 podman[370444]: 2026-02-25 12:57:07.586909081 +0000 UTC m=+0.062895345 container create cdc423d51cb186bf96ec29f98f73bc3b4220dfecb23592c7f19f8b857cc9f2c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 07:57:07 np0005629333 podman[370455]: 2026-02-25 12:57:07.595702069 +0000 UTC m=+0.053196801 container create 67d240867a9cba8779007d0b1a160eb107605e3fbbca13059220fbe445a7dee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:57:07 np0005629333 systemd[1]: Started libpod-conmon-cdc423d51cb186bf96ec29f98f73bc3b4220dfecb23592c7f19f8b857cc9f2c7.scope.
Feb 25 07:57:07 np0005629333 systemd[1]: Started libpod-conmon-67d240867a9cba8779007d0b1a160eb107605e3fbbca13059220fbe445a7dee7.scope.
Feb 25 07:57:07 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:57:07 np0005629333 podman[370444]: 2026-02-25 12:57:07.556610087 +0000 UTC m=+0.032596361 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:57:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5a1918a9162265e4a26eeedf3b2aaae3652d2ea097cac975c16800ff9ce3b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:57:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5a1918a9162265e4a26eeedf3b2aaae3652d2ea097cac975c16800ff9ce3b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:57:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5a1918a9162265e4a26eeedf3b2aaae3652d2ea097cac975c16800ff9ce3b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:57:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5a1918a9162265e4a26eeedf3b2aaae3652d2ea097cac975c16800ff9ce3b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:57:07 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:57:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc100cb099adcbdbcce587f0f6d881ddf87a2202a85f400719149b8997f75aa8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:57:07 np0005629333 podman[370455]: 2026-02-25 12:57:07.678501225 +0000 UTC m=+0.135995957 container init 67d240867a9cba8779007d0b1a160eb107605e3fbbca13059220fbe445a7dee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 07:57:07 np0005629333 podman[370455]: 2026-02-25 12:57:07.576765425 +0000 UTC m=+0.034260177 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:57:07 np0005629333 podman[370455]: 2026-02-25 12:57:07.690788902 +0000 UTC m=+0.148283634 container start 67d240867a9cba8779007d0b1a160eb107605e3fbbca13059220fbe445a7dee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:57:07 np0005629333 podman[370455]: 2026-02-25 12:57:07.694104855 +0000 UTC m=+0.151599587 container attach 67d240867a9cba8779007d0b1a160eb107605e3fbbca13059220fbe445a7dee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:57:07 np0005629333 podman[370444]: 2026-02-25 12:57:07.699280421 +0000 UTC m=+0.175266665 container init cdc423d51cb186bf96ec29f98f73bc3b4220dfecb23592c7f19f8b857cc9f2c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 07:57:07 np0005629333 podman[370444]: 2026-02-25 12:57:07.706009521 +0000 UTC m=+0.181995745 container start cdc423d51cb186bf96ec29f98f73bc3b4220dfecb23592c7f19f8b857cc9f2c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 07:57:07 np0005629333 neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746[370480]: [NOTICE]   (370489) : New worker (370491) forked
Feb 25 07:57:07 np0005629333 neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746[370480]: [NOTICE]   (370489) : Loading success.
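[editor's note] With the worker forked and "Loading success" logged, the proxy is serving. A quick functional probe, hedged: this assumes 169.254.169.254 answers from inside the namespace, that curl is installed on the host, and that the check runs as root:

import subprocess

NS = "ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746"

# Hit the metadata IP from inside the namespace; haproxy should answer.
# The backend may still return 404/403 without instance headers, but any
# HTTP-level reply proves the listener and namespace wiring work.
subprocess.run(
    ["ip", "netns", "exec", NS,
     "curl", "-sv", "--max-time", "5", "http://169.254.169.254/"],
    check=False,
)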
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.976 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024227.9753835, 6185e497-8422-4a5f-a98a-865484d53d4f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.977 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] VM Started (Lifecycle Event)#033[00m
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.980 244018 DEBUG nova.compute.manager [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.984 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.987 244018 INFO nova.virt.libvirt.driver [-] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Instance spawned successfully.#033[00m
Feb 25 07:57:07 np0005629333 nova_compute[244014]: 2026-02-25 12:57:07.988 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:57:08 np0005629333 nova_compute[244014]: 2026-02-25 12:57:08.002 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:57:08 np0005629333 nova_compute[244014]: 2026-02-25 12:57:08.011 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:57:08 np0005629333 nova_compute[244014]: 2026-02-25 12:57:08.018 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:57:08 np0005629333 nova_compute[244014]: 2026-02-25 12:57:08.019 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:57:08 np0005629333 nova_compute[244014]: 2026-02-25 12:57:08.020 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:57:08 np0005629333 nova_compute[244014]: 2026-02-25 12:57:08.020 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:57:08 np0005629333 nova_compute[244014]: 2026-02-25 12:57:08.021 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:57:08 np0005629333 nova_compute[244014]: 2026-02-25 12:57:08.021 244018 DEBUG nova.virt.libvirt.driver [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:57:08 np0005629333 nova_compute[244014]: 2026-02-25 12:57:08.043 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:57:08 np0005629333 nova_compute[244014]: 2026-02-25 12:57:08.044 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024227.9768565, 6185e497-8422-4a5f-a98a-865484d53d4f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:57:08 np0005629333 nova_compute[244014]: 2026-02-25 12:57:08.044 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] VM Paused (Lifecycle Event)
Feb 25 07:57:08 np0005629333 nova_compute[244014]: 2026-02-25 12:57:08.075 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:57:08 np0005629333 nova_compute[244014]: 2026-02-25 12:57:08.079 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024227.9838955, 6185e497-8422-4a5f-a98a-865484d53d4f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:57:08 np0005629333 nova_compute[244014]: 2026-02-25 12:57:08.079 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] VM Resumed (Lifecycle Event)
Feb 25 07:57:08 np0005629333 nova_compute[244014]: 2026-02-25 12:57:08.089 244018 INFO nova.compute.manager [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Took 8.39 seconds to spawn the instance on the hypervisor.
Feb 25 07:57:08 np0005629333 nova_compute[244014]: 2026-02-25 12:57:08.090 244018 DEBUG nova.compute.manager [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:57:08 np0005629333 nova_compute[244014]: 2026-02-25 12:57:08.098 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:57:08 np0005629333 nova_compute[244014]: 2026-02-25 12:57:08.105 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:57:08 np0005629333 nova_compute[244014]: 2026-02-25 12:57:08.127 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:57:08 np0005629333 nova_compute[244014]: 2026-02-25 12:57:08.161 244018 INFO nova.compute.manager [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Took 9.49 seconds to build instance.
Feb 25 07:57:08 np0005629333 nova_compute[244014]: 2026-02-25 12:57:08.176 244018 DEBUG oslo_concurrency.lockutils [None req-9c045861-cc3c-4817-8c3e-cc057e7ce96f ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
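Per the INFO lines above, the libvirt spawn took 8.39 s while the whole build reported 9.49 s and held the per-instance lock for 9.600 s, i.e. roughly 1.1 s of pre/post-spawn work (claims, networking, block-device mapping). A small sketch, assuming the same journal text as input, that recovers those durations:

    import re

    TOOK = re.compile(r"Took (?P<sec>[0-9.]+) seconds to (?P<what>spawn|build)")

    def build_timings(lines):
        """E.g. the excerpt above yields {'spawn': 8.39, 'build': 9.49};
        build - spawn ~= 1.1 s of non-hypervisor overhead."""
        out = {}
        for line in lines:
            m = TOOK.search(line)
            if m:
                out[m.group("what")] = float(m.group("sec"))
        return out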
Feb 25 07:57:08 np0005629333 lvm[370612]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:57:08 np0005629333 lvm[370612]: VG ceph_vg0 finished
Feb 25 07:57:08 np0005629333 lvm[370614]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:57:08 np0005629333 lvm[370614]: VG ceph_vg1 finished
Feb 25 07:57:08 np0005629333 lvm[370615]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:57:08 np0005629333 lvm[370615]: VG ceph_vg2 finished
Feb 25 07:57:08 np0005629333 unruffled_payne[370481]: {}
Feb 25 07:57:08 np0005629333 systemd[1]: libpod-67d240867a9cba8779007d0b1a160eb107605e3fbbca13059220fbe445a7dee7.scope: Deactivated successfully.
Feb 25 07:57:08 np0005629333 systemd[1]: libpod-67d240867a9cba8779007d0b1a160eb107605e3fbbca13059220fbe445a7dee7.scope: Consumed 1.112s CPU time.
Feb 25 07:57:08 np0005629333 podman[370455]: 2026-02-25 12:57:08.446076567 +0000 UTC m=+0.903571339 container died 67d240867a9cba8779007d0b1a160eb107605e3fbbca13059220fbe445a7dee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 07:57:08 np0005629333 systemd[1]: var-lib-containers-storage-overlay-be5a1918a9162265e4a26eeedf3b2aaae3652d2ea097cac975c16800ff9ce3b1-merged.mount: Deactivated successfully.
Feb 25 07:57:08 np0005629333 podman[370455]: 2026-02-25 12:57:08.494624317 +0000 UTC m=+0.952119049 container remove 67d240867a9cba8779007d0b1a160eb107605e3fbbca13059220fbe445a7dee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_payne, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 07:57:08 np0005629333 systemd[1]: libpod-conmon-67d240867a9cba8779007d0b1a160eb107605e3fbbca13059220fbe445a7dee7.scope: Deactivated successfully.
Feb 25 07:57:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:57:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:57:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:57:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:57:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:57:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2297: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 1.8 MiB/s wr, 88 op/s
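The ceph-mgr pgmap DBG lines recur every couple of seconds throughout this section and carry the cluster's health snapshot (PG states, data/used/avail, client IO rates). A hedged regex sketch for turning one of them into fields:

    import re

    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?"
        r"(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used, "
        r"(?P<avail>[\d.]+ \w+) / (?P<total>[\d.]+ \w+) avail"
    )

    def parse_pgmap(line):
        """On the v2297 line above this returns:
        {'ver': '2297', 'pgs': '305', 'data': '200 MiB', 'used': '1.1 GiB',
         'avail': '59 GiB', 'total': '60 GiB'}"""
        m = PGMAP.search(line)
        return m.groupdict() if m else None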
Feb 25 07:57:09 np0005629333 nova_compute[244014]: 2026-02-25 12:57:09.464 244018 DEBUG nova.compute.manager [req-b834791c-5890-4307-81fc-6c2106a52c2e req-9b2e9681-1a70-4477-ae28-d134ee4ea0c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Received event network-vif-plugged-a4d9181f-fefa-4cde-81ea-c5e59433606c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:57:09 np0005629333 nova_compute[244014]: 2026-02-25 12:57:09.466 244018 DEBUG oslo_concurrency.lockutils [req-b834791c-5890-4307-81fc-6c2106a52c2e req-9b2e9681-1a70-4477-ae28-d134ee4ea0c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:57:09 np0005629333 nova_compute[244014]: 2026-02-25 12:57:09.467 244018 DEBUG oslo_concurrency.lockutils [req-b834791c-5890-4307-81fc-6c2106a52c2e req-9b2e9681-1a70-4477-ae28-d134ee4ea0c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:57:09 np0005629333 nova_compute[244014]: 2026-02-25 12:57:09.467 244018 DEBUG oslo_concurrency.lockutils [req-b834791c-5890-4307-81fc-6c2106a52c2e req-9b2e9681-1a70-4477-ae28-d134ee4ea0c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:57:09 np0005629333 nova_compute[244014]: 2026-02-25 12:57:09.468 244018 DEBUG nova.compute.manager [req-b834791c-5890-4307-81fc-6c2106a52c2e req-9b2e9681-1a70-4477-ae28-d134ee4ea0c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] No waiting events found dispatching network-vif-plugged-a4d9181f-fefa-4cde-81ea-c5e59433606c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:57:09 np0005629333 nova_compute[244014]: 2026-02-25 12:57:09.468 244018 WARNING nova.compute.manager [req-b834791c-5890-4307-81fc-6c2106a52c2e req-9b2e9681-1a70-4477-ae28-d134ee4ea0c2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Received unexpected event network-vif-plugged-a4d9181f-fefa-4cde-81ea-c5e59433606c for instance with vm_state active and task_state None.
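The six lines above are nova's external-event plumbing: the network-vif-plugged event arrives from neutron, a per-instance "-events" lock is taken so poppers and waiters cannot race, no registered waiter is found (the VM already finished spawning), and the event is logged as an unexpected WARNING and dropped, which is harmless here. A toy illustration of that pop-under-lock shape (not nova's actual code, which lives in nova.compute.manager):

    import threading

    class InstanceEvents:
        """Toy pop-under-lock pattern mirroring the log lines above."""
        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = {}  # (instance_uuid, event_name) -> waiter

        def pop_instance_event(self, instance_uuid, event_name):
            with self._lock:  # "Acquiring lock ...-events"
                waiter = self._waiters.pop((instance_uuid, event_name), None)
            # waiter is None -> "No waiting events found dispatching ..."
            # followed by the "Received unexpected event" WARNING.
            return waiter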
Feb 25 07:57:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:57:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:57:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:57:09Z|01464|binding|INFO|Releasing lport 93d7aa8d-1845-4103-8fd2-aed7a8c4298a from this chassis (sb_readonly=0)
Feb 25 07:57:09 np0005629333 nova_compute[244014]: 2026-02-25 12:57:09.681 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:57:09 np0005629333 ovn_controller[147040]: 2026-02-25T12:57:09Z|01465|binding|INFO|Releasing lport 93d7aa8d-1845-4103-8fd2-aed7a8c4298a from this chassis (sb_readonly=0)
Feb 25 07:57:09 np0005629333 nova_compute[244014]: 2026-02-25 12:57:09.730 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:57:10 np0005629333 nova_compute[244014]: 2026-02-25 12:57:10.420 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:57:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:10.420 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:57:10 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:10.424 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 07:57:10 np0005629333 nova_compute[244014]: 2026-02-25 12:57:10.532 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:57:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2298: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Feb 25 07:57:10 np0005629333 nova_compute[244014]: 2026-02-25 12:57:10.864 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:57:11 np0005629333 NetworkManager[49836]: <info>  [1772024231.6225] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/609)
Feb 25 07:57:11 np0005629333 NetworkManager[49836]: <info>  [1772024231.6241] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/610)
Feb 25 07:57:11 np0005629333 ovn_controller[147040]: 2026-02-25T12:57:11Z|01466|binding|INFO|Releasing lport 93d7aa8d-1845-4103-8fd2-aed7a8c4298a from this chassis (sb_readonly=0)
Feb 25 07:57:11 np0005629333 nova_compute[244014]: 2026-02-25 12:57:11.633 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:57:11 np0005629333 ovn_controller[147040]: 2026-02-25T12:57:11Z|01467|binding|INFO|Releasing lport 93d7aa8d-1845-4103-8fd2-aed7a8c4298a from this chassis (sb_readonly=0)
Feb 25 07:57:11 np0005629333 nova_compute[244014]: 2026-02-25 12:57:11.652 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:57:11 np0005629333 nova_compute[244014]: 2026-02-25 12:57:11.657 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:57:12 np0005629333 nova_compute[244014]: 2026-02-25 12:57:12.084 244018 DEBUG nova.compute.manager [req-603dfce0-60e0-4ca0-8c37-eaf7a2a8fb66 req-ec870f06-32f8-4d4b-ac8f-624e0eee9a02 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Received event network-changed-a4d9181f-fefa-4cde-81ea-c5e59433606c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:57:12 np0005629333 nova_compute[244014]: 2026-02-25 12:57:12.084 244018 DEBUG nova.compute.manager [req-603dfce0-60e0-4ca0-8c37-eaf7a2a8fb66 req-ec870f06-32f8-4d4b-ac8f-624e0eee9a02 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Refreshing instance network info cache due to event network-changed-a4d9181f-fefa-4cde-81ea-c5e59433606c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:57:12 np0005629333 nova_compute[244014]: 2026-02-25 12:57:12.085 244018 DEBUG oslo_concurrency.lockutils [req-603dfce0-60e0-4ca0-8c37-eaf7a2a8fb66 req-ec870f06-32f8-4d4b-ac8f-624e0eee9a02 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-6185e497-8422-4a5f-a98a-865484d53d4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:57:12 np0005629333 nova_compute[244014]: 2026-02-25 12:57:12.085 244018 DEBUG oslo_concurrency.lockutils [req-603dfce0-60e0-4ca0-8c37-eaf7a2a8fb66 req-ec870f06-32f8-4d4b-ac8f-624e0eee9a02 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-6185e497-8422-4a5f-a98a-865484d53d4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:57:12 np0005629333 nova_compute[244014]: 2026-02-25 12:57:12.085 244018 DEBUG nova.network.neutron [req-603dfce0-60e0-4ca0-8c37-eaf7a2a8fb66 req-ec870f06-32f8-4d4b-ac8f-624e0eee9a02 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Refreshing network info cache for port a4d9181f-fefa-4cde-81ea-c5e59433606c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:57:12 np0005629333 nova_compute[244014]: 2026-02-25 12:57:12.667 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024217.6667678, 859fd309-32ea-4025-8312-ddecfa0d6a7f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:57:12 np0005629333 nova_compute[244014]: 2026-02-25 12:57:12.668 244018 INFO nova.compute.manager [-] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] VM Stopped (Lifecycle Event)
Feb 25 07:57:12 np0005629333 nova_compute[244014]: 2026-02-25 12:57:12.694 244018 DEBUG nova.compute.manager [None req-1d0094c3-7f6d-42bb-b768-b3d5ca7c89a8 - - - - - -] [instance: 859fd309-32ea-4025-8312-ddecfa0d6a7f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:57:12 np0005629333 podman[370659]: 2026-02-25 12:57:12.737510021 +0000 UTC m=+0.078004362 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 07:57:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2299: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Feb 25 07:57:12 np0005629333 podman[370660]: 2026-02-25 12:57:12.771402737 +0000 UTC m=+0.111879717 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:57:13 np0005629333 nova_compute[244014]: 2026-02-25 12:57:13.647 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:57:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:57:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2300: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 25 07:57:15 np0005629333 nova_compute[244014]: 2026-02-25 12:57:15.534 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:57:15 np0005629333 nova_compute[244014]: 2026-02-25 12:57:15.581 244018 DEBUG nova.network.neutron [req-603dfce0-60e0-4ca0-8c37-eaf7a2a8fb66 req-ec870f06-32f8-4d4b-ac8f-624e0eee9a02 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Updated VIF entry in instance network info cache for port a4d9181f-fefa-4cde-81ea-c5e59433606c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 07:57:15 np0005629333 nova_compute[244014]: 2026-02-25 12:57:15.582 244018 DEBUG nova.network.neutron [req-603dfce0-60e0-4ca0-8c37-eaf7a2a8fb66 req-ec870f06-32f8-4d4b-ac8f-624e0eee9a02 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Updating instance_info_cache with network_info: [{"id": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "address": "fa:16:3e:ef:c2:bb", "network": {"id": "f23f1675-b7ff-4265-a011-0912c637d746", "bridge": "br-int", "label": "tempest-network-smoke--923808756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d9181f-fe", "ovs_interfaceid": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:57:15 np0005629333 nova_compute[244014]: 2026-02-25 12:57:15.606 244018 DEBUG oslo_concurrency.lockutils [req-603dfce0-60e0-4ca0-8c37-eaf7a2a8fb66 req-ec870f06-32f8-4d4b-ac8f-624e0eee9a02 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-6185e497-8422-4a5f-a98a-865484d53d4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:57:15 np0005629333 nova_compute[244014]: 2026-02-25 12:57:15.866 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:57:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:16.427 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:57:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2301: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Feb 25 07:57:17 np0005629333 nova_compute[244014]: 2026-02-25 12:57:17.974 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024222.973289, e728a9dc-bb04-4a25-bcad-b787a044bc0b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:57:17 np0005629333 nova_compute[244014]: 2026-02-25 12:57:17.975 244018 INFO nova.compute.manager [-] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] VM Stopped (Lifecycle Event)
Feb 25 07:57:18 np0005629333 nova_compute[244014]: 2026-02-25 12:57:18.012 244018 DEBUG nova.compute.manager [None req-7f8b5dbb-f109-416e-9e2d-95bd8d0312d8 - - - - - -] [instance: e728a9dc-bb04-4a25-bcad-b787a044bc0b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:57:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:57:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2302: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 102 op/s
Feb 25 07:57:19 np0005629333 ovn_controller[147040]: 2026-02-25T12:57:19Z|00184|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ef:c2:bb 10.100.0.11
Feb 25 07:57:19 np0005629333 ovn_controller[147040]: 2026-02-25T12:57:19Z|00185|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ef:c2:bb 10.100.0.11
Feb 25 07:57:20 np0005629333 nova_compute[244014]: 2026-02-25 12:57:20.536 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:57:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2303: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 69 op/s
Feb 25 07:57:20 np0005629333 nova_compute[244014]: 2026-02-25 12:57:20.869 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:57:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2304: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Feb 25 07:57:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:57:24 np0005629333 nova_compute[244014]: 2026-02-25 12:57:24.560 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:57:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2305: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 07:57:25 np0005629333 nova_compute[244014]: 2026-02-25 12:57:25.539 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:57:25 np0005629333 nova_compute[244014]: 2026-02-25 12:57:25.870 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:57:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2306: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 07:57:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:57:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2307: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 07:57:30 np0005629333 nova_compute[244014]: 2026-02-25 12:57:30.541 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:57:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2308: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Feb 25 07:57:30 np0005629333 nova_compute[244014]: 2026-02-25 12:57:30.872 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:57:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:57:31
Feb 25 07:57:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:57:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:57:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'vms', 'default.rgw.meta', 'backups', '.mgr', 'default.rgw.control', 'images', '.rgw.root', 'volumes', 'cephfs.cephfs.meta']
Feb 25 07:57:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
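The balancer pass above ran in upmap mode with a 5% misplaced ceiling, scanned 11 pools, and prepared 0 of at most 10 upmap changes, meaning PG placement already satisfied the optimizer. The module's current state can be read back with the real "ceph balancer status" command; a sketch that just relays its output, since the exact fields vary by release:

    import subprocess

    def balancer_status(conf="/etc/ceph/ceph.conf"):
        """Print the mgr balancer module's status as the CLI returns it."""
        out = subprocess.run(["ceph", "balancer", "status", "--conf", conf],
                             check=True, capture_output=True, text=True)
        print(out.stdout)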
Feb 25 07:57:31 np0005629333 nova_compute[244014]: 2026-02-25 12:57:31.365 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:57:31 np0005629333 nova_compute[244014]: 2026-02-25 12:57:31.366 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:57:31 np0005629333 nova_compute[244014]: 2026-02-25 12:57:31.412 244018 DEBUG nova.compute.manager [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:57:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:57:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:57:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:57:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:57:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:57:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:57:31 np0005629333 nova_compute[244014]: 2026-02-25 12:57:31.689 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:57:31 np0005629333 nova_compute[244014]: 2026-02-25 12:57:31.690 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:57:31 np0005629333 nova_compute[244014]: 2026-02-25 12:57:31.700 244018 DEBUG nova.virt.hardware [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:57:31 np0005629333 nova_compute[244014]: 2026-02-25 12:57:31.701 244018 INFO nova.compute.claims [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:57:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:57:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:57:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:57:32 np0005629333 nova_compute[244014]: 2026-02-25 12:57:32.094 244018 DEBUG oslo_concurrency.processutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:57:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:57:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:57:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:57:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:57:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:57:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:57:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:57:32 np0005629333 nova_compute[244014]: 2026-02-25 12:57:32.375 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:57:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:57:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2224785933' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:57:32 np0005629333 nova_compute[244014]: 2026-02-25 12:57:32.662 244018 DEBUG oslo_concurrency.processutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
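Above, nova's resource tracker shells out to ceph df (0.568 s round trip) to size the DISK_GB inventory it reports to placement just below. A standalone repro of that exact command; the stats field names are the usual ceph df JSON keys, but treat them as an assumption for your release:

    import json
    import subprocess

    # Exactly the command nova ran above; --id/--conf are this deployment's.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    df = json.loads(subprocess.run(cmd, check=True, capture_output=True,
                                   text=True).stdout)
    # Cluster-wide totals live under "stats" (assumed field names).
    print(df["stats"]["total_bytes"], df["stats"]["total_avail_bytes"])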
Feb 25 07:57:32 np0005629333 nova_compute[244014]: 2026-02-25 12:57:32.668 244018 DEBUG nova.compute.provider_tree [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:57:32 np0005629333 nova_compute[244014]: 2026-02-25 12:57:32.700 244018 DEBUG nova.scheduler.client.report [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:57:32 np0005629333 nova_compute[244014]: 2026-02-25 12:57:32.724 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:57:32 np0005629333 nova_compute[244014]: 2026-02-25 12:57:32.725 244018 DEBUG nova.compute.manager [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:57:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2309: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 07:57:32 np0005629333 nova_compute[244014]: 2026-02-25 12:57:32.777 244018 DEBUG nova.compute.manager [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:57:32 np0005629333 nova_compute[244014]: 2026-02-25 12:57:32.778 244018 DEBUG nova.network.neutron [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:57:32 np0005629333 nova_compute[244014]: 2026-02-25 12:57:32.799 244018 INFO nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:57:32 np0005629333 nova_compute[244014]: 2026-02-25 12:57:32.821 244018 DEBUG nova.compute.manager [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:57:32 np0005629333 nova_compute[244014]: 2026-02-25 12:57:32.905 244018 DEBUG nova.compute.manager [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:57:32 np0005629333 nova_compute[244014]: 2026-02-25 12:57:32.907 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:57:32 np0005629333 nova_compute[244014]: 2026-02-25 12:57:32.908 244018 INFO nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Creating image(s)
Feb 25 07:57:32 np0005629333 nova_compute[244014]: 2026-02-25 12:57:32.933 244018 DEBUG nova.storage.rbd_utils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:57:32 np0005629333 nova_compute[244014]: 2026-02-25 12:57:32.961 244018 DEBUG nova.storage.rbd_utils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:57:32 np0005629333 nova_compute[244014]: 2026-02-25 12:57:32.987 244018 DEBUG nova.storage.rbd_utils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:57:32 np0005629333 nova_compute[244014]: 2026-02-25 12:57:32.992 244018 DEBUG oslo_concurrency.processutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
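Nova wraps qemu-img info in oslo_concurrency.prlimit, capping address space at 1 GiB (--as=1073741824) and CPU at 30 s, so a crafted image in the cache cannot hang or balloon the compute service. A standalone repro of the exact invocation logged above, parsing the JSON qemu-img returns:

    import json
    import subprocess

    base = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
    cmd = ["python3", "-m", "oslo_concurrency.prlimit",
           "--as=1073741824", "--cpu=30", "--",
           "env", "LC_ALL=C", "LANG=C",
           "qemu-img", "info", base, "--force-share", "--output=json"]
    info = json.loads(subprocess.run(cmd, check=True, capture_output=True,
                                     text=True).stdout)
    # qemu-img's JSON output includes "format" and "virtual-size" (bytes).
    print(info["format"], info["virtual-size"])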
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.007282) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024253007314, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 626, "num_deletes": 251, "total_data_size": 726796, "memory_usage": 739496, "flush_reason": "Manual Compaction"}
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024253016552, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 720263, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49047, "largest_seqno": 49672, "table_properties": {"data_size": 716874, "index_size": 1297, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7810, "raw_average_key_size": 19, "raw_value_size": 710097, "raw_average_value_size": 1762, "num_data_blocks": 58, "num_entries": 403, "num_filter_entries": 403, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772024209, "oldest_key_time": 1772024209, "file_creation_time": 1772024253, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 9326 microseconds, and 2445 cpu microseconds.
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.016604) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 720263 bytes OK
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.016623) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.022028) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.022051) EVENT_LOG_v1 {"time_micros": 1772024253022044, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.022074) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 723411, prev total WAL file size 723411, number of live WAL files 2.
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.022576) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(703KB)], [113(8925KB)]
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024253022637, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 9859602, "oldest_snapshot_seqno": -1}
Feb 25 07:57:33 np0005629333 nova_compute[244014]: 2026-02-25 12:57:33.057 244018 DEBUG oslo_concurrency.processutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:57:33 np0005629333 nova_compute[244014]: 2026-02-25 12:57:33.058 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:57:33 np0005629333 nova_compute[244014]: 2026-02-25 12:57:33.058 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:57:33 np0005629333 nova_compute[244014]: 2026-02-25 12:57:33.059 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:57:33 np0005629333 nova_compute[244014]: 2026-02-25 12:57:33.083 244018 DEBUG nova.storage.rbd_utils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:57:33 np0005629333 nova_compute[244014]: 2026-02-25 12:57:33.086 244018 DEBUG oslo_concurrency.processutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 6970 keys, 8143845 bytes, temperature: kUnknown
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024253170628, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 8143845, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8099996, "index_size": 25305, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17477, "raw_key_size": 181671, "raw_average_key_size": 26, "raw_value_size": 7978292, "raw_average_value_size": 1144, "num_data_blocks": 981, "num_entries": 6970, "num_filter_entries": 6970, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772024253, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.170875) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 8143845 bytes
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.178202) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 66.6 rd, 55.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 8.7 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(25.0) write-amplify(11.3) OK, records in: 7484, records dropped: 514 output_compression: NoCompression
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.178234) EVENT_LOG_v1 {"time_micros": 1772024253178220, "job": 68, "event": "compaction_finished", "compaction_time_micros": 148077, "compaction_time_cpu_micros": 26575, "output_level": 6, "num_output_files": 1, "total_output_size": 8143845, "num_input_records": 7484, "num_output_records": 6970, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024253178491, "job": 68, "event": "table_file_deletion", "file_number": 115}
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024253180077, "job": 68, "event": "table_file_deletion", "file_number": 113}
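The EVENT_LOG_v1 records above are line-oriented JSON appended after a fixed marker, so the compaction history of this monitor's store.db can be recovered from the journal with a few lines of Python. A minimal sketch (the journal.txt path and helper name are illustrative, not anything Ceph or RocksDB ships):

    import json
    import re

    EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def compaction_finished_events(path="journal.txt"):
        # Yield the JSON payload of every compaction_finished event.
        with open(path) as fh:
            for line in fh:
                m = EVENT_RE.search(line)
                if not m:
                    continue
                event = json.loads(m.group(1))
                if event.get("event") == "compaction_finished":
                    yield event

    for ev in compaction_finished_events():
        # lsm_state is the per-level file count after the compaction;
        # job 68 above ends with a single file in L6: [0,0,0,0,0,0,1].
        print(ev["job"], ev["total_output_size"], ev["lsm_state"])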
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.022479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.180193) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.180202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.180207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.180211) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-12:57:33.180215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 07:57:33 np0005629333 nova_compute[244014]: 2026-02-25 12:57:33.237 244018 DEBUG nova.policy [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '31d013eaf26a447394d93c83ab8def60', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e227b91c24404ab5aed600e2fe792d32', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
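The failed network:attach_external_network check above is the expected outcome for a reader/member token; the DEBUG line is informational, not an error. A hypothetical oslo.policy reproduction of that check (the role:admin check string is an assumption about the deployment's policy file, not read from it):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    # Assumed default for illustration; real deployments may differ.
    enforcer.register_default(policy.RuleDefault(
        "network:attach_external_network", "role:admin"))

    creds = {"roles": ["reader", "member"],
             "project_id": "e227b91c24404ab5aed600e2fe792d32"}

    # Returns False for these credentials, matching the DEBUG line above.
    print(enforcer.authorize(
        "network:attach_external_network", {}, creds, do_raise=False))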
Feb 25 07:57:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:57:33 np0005629333 nova_compute[244014]: 2026-02-25 12:57:33.954 244018 DEBUG nova.network.neutron [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Successfully created port: dcb845b3-f7fb-449b-b027-65efdcdcf6ec _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:57:34 np0005629333 nova_compute[244014]: 2026-02-25 12:57:34.011 244018 DEBUG oslo_concurrency.processutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.925s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:57:34 np0005629333 nova_compute[244014]: 2026-02-25 12:57:34.079 244018 DEBUG nova.storage.rbd_utils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] resizing rbd image 0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
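The import-then-resize pair above is how the libvirt driver populates an RBD-backed root disk from the local image cache. A hedged sketch of the same two steps using oslo.concurrency directly (paths and names copied from the log; the resize is shown via the rbd CLI for illustration, while nova.storage.rbd_utils itself resizes through the librbd Python binding):

    from oslo_concurrency import processutils

    BASE = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
    DISK = "0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk"

    # Same command string the driver logged above.
    processutils.execute(
        "rbd", "import", "--pool", "vms", BASE, DISK,
        "--image-format=2", "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf")

    # CLI equivalent of the follow-up resize to 1073741824 bytes (1 GiB).
    processutils.execute(
        "rbd", "resize", "--pool", "vms", DISK, "--size", "1G",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")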
Feb 25 07:57:34 np0005629333 nova_compute[244014]: 2026-02-25 12:57:34.160 244018 DEBUG nova.objects.instance [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'migration_context' on Instance uuid 0389cfa0-085b-4e4e-8d61-95d0b91c413e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:57:34 np0005629333 nova_compute[244014]: 2026-02-25 12:57:34.189 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:57:34 np0005629333 nova_compute[244014]: 2026-02-25 12:57:34.189 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Ensure instance console log exists: /var/lib/nova/instances/0389cfa0-085b-4e4e-8d61-95d0b91c413e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:57:34 np0005629333 nova_compute[244014]: 2026-02-25 12:57:34.190 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:57:34 np0005629333 nova_compute[244014]: 2026-02-25 12:57:34.190 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:57:34 np0005629333 nova_compute[244014]: 2026-02-25 12:57:34.190 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
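The Acquiring/acquired/released triple around "vgpu_resources" is the standard oslo.concurrency lock trace: every caller serializes on a named lock and the waited/held durations are logged on entry and exit. A minimal sketch of both spellings of that primitive (the function body is a placeholder, not nova's real method):

    from oslo_concurrency import lockutils

    # Decorator form: callers serialize on the named in-process lock,
    # producing the waited/held timings logged above.
    @lockutils.synchronized("vgpu_resources")
    def allocate_mdevs():
        return []  # placeholder; nova's real method walks mdev devices

    # Context-manager form of the same lock:
    with lockutils.lock("compute_resources"):
        allocate_mdevs()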
Feb 25 07:57:34 np0005629333 nova_compute[244014]: 2026-02-25 12:57:34.655 244018 DEBUG nova.network.neutron [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Successfully updated port: dcb845b3-f7fb-449b-b027-65efdcdcf6ec _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:57:34 np0005629333 nova_compute[244014]: 2026-02-25 12:57:34.689 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "refresh_cache-0389cfa0-085b-4e4e-8d61-95d0b91c413e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:57:34 np0005629333 nova_compute[244014]: 2026-02-25 12:57:34.689 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquired lock "refresh_cache-0389cfa0-085b-4e4e-8d61-95d0b91c413e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:57:34 np0005629333 nova_compute[244014]: 2026-02-25 12:57:34.690 244018 DEBUG nova.network.neutron [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:57:34 np0005629333 nova_compute[244014]: 2026-02-25 12:57:34.745 244018 DEBUG nova.compute.manager [req-2f7ed36f-ed5f-4c81-a094-06c1d64b061f req-bcf6194d-65a8-47d0-9992-6ef14ecd7933 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Received event network-changed-dcb845b3-f7fb-449b-b027-65efdcdcf6ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:57:34 np0005629333 nova_compute[244014]: 2026-02-25 12:57:34.746 244018 DEBUG nova.compute.manager [req-2f7ed36f-ed5f-4c81-a094-06c1d64b061f req-bcf6194d-65a8-47d0-9992-6ef14ecd7933 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Refreshing instance network info cache due to event network-changed-dcb845b3-f7fb-449b-b027-65efdcdcf6ec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:57:34 np0005629333 nova_compute[244014]: 2026-02-25 12:57:34.746 244018 DEBUG oslo_concurrency.lockutils [req-2f7ed36f-ed5f-4c81-a094-06c1d64b061f req-bcf6194d-65a8-47d0-9992-6ef14ecd7933 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-0389cfa0-085b-4e4e-8d61-95d0b91c413e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:57:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2310: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 15 KiB/s wr, 0 op/s
Feb 25 07:57:34 np0005629333 nova_compute[244014]: 2026-02-25 12:57:34.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:57:34 np0005629333 nova_compute[244014]: 2026-02-25 12:57:34.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:57:34 np0005629333 nova_compute[244014]: 2026-02-25 12:57:34.906 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:57:34 np0005629333 nova_compute[244014]: 2026-02-25 12:57:34.906 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:57:34 np0005629333 nova_compute[244014]: 2026-02-25 12:57:34.907 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:57:34 np0005629333 nova_compute[244014]: 2026-02-25 12:57:34.907 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:57:35 np0005629333 nova_compute[244014]: 2026-02-25 12:57:35.158 244018 DEBUG nova.network.neutron [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:57:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:57:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2722102778' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:57:35 np0005629333 nova_compute[244014]: 2026-02-25 12:57:35.480 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
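The `ceph df --format=json` command the resource tracker shells out to above returns a cluster-wide stats block plus per-pool details. A short sketch that repeats the call and reads the totals (the JSON key names match current Ceph releases but should be treated as assumptions):

    import json
    import subprocess

    raw = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    df = json.loads(raw)

    stats = df["stats"]  # cluster-wide totals
    print("avail GiB:", stats["total_avail_bytes"] / 1024 ** 3)
    for pool in df["pools"]:
        # bytes_used is an assumed key; newer releases also report "stored".
        print(pool["name"], pool["stats"]["bytes_used"])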
Feb 25 07:57:35 np0005629333 nova_compute[244014]: 2026-02-25 12:57:35.544 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:35 np0005629333 nova_compute[244014]: 2026-02-25 12:57:35.553 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:57:35 np0005629333 nova_compute[244014]: 2026-02-25 12:57:35.554 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:57:35 np0005629333 nova_compute[244014]: 2026-02-25 12:57:35.791 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:57:35 np0005629333 nova_compute[244014]: 2026-02-25 12:57:35.793 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3366MB free_disk=59.94174979068339GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 07:57:35 np0005629333 nova_compute[244014]: 2026-02-25 12:57:35.794 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:57:35 np0005629333 nova_compute[244014]: 2026-02-25 12:57:35.794 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:57:35 np0005629333 nova_compute[244014]: 2026-02-25 12:57:35.873 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:35 np0005629333 nova_compute[244014]: 2026-02-25 12:57:35.932 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 6185e497-8422-4a5f-a98a-865484d53d4f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:57:35 np0005629333 nova_compute[244014]: 2026-02-25 12:57:35.933 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 0389cfa0-085b-4e4e-8d61-95d0b91c413e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:57:35 np0005629333 nova_compute[244014]: 2026-02-25 12:57:35.933 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 07:57:35 np0005629333 nova_compute[244014]: 2026-02-25 12:57:35.933 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.108 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.273 244018 DEBUG nova.network.neutron [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Updating instance_info_cache with network_info: [{"id": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "address": "fa:16:3e:89:15:55", "network": {"id": "6c261236-9e75-404c-ae2b-04691f3dc670", "bridge": "br-int", "label": "tempest-network-smoke--1012459848", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb845b3-f7", "ovs_interfaceid": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
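The instance_info_cache payload above is a list of VIF dicts; the fixed address sits three levels down, under network/subnets/ips. A small self-contained sketch of that walk (the literal is trimmed to the fields the walk touches):

    # Trimmed copy of the cached entry above.
    network_info = [{
        "id": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec",
        "address": "fa:16:3e:89:15:55",
        "network": {"subnets": [{
            "cidr": "10.100.0.0/28",
            "ips": [{"address": "10.100.0.8", "type": "fixed"}],
        }]},
    }]

    def fixed_ips(network_info):
        """Collect every fixed IP across all VIFs and subnets."""
        return [ip["address"]
                for vif in network_info
                for subnet in vif["network"]["subnets"]
                for ip in subnet["ips"]
                if ip["type"] == "fixed"]

    print(fixed_ips(network_info))  # ['10.100.0.8']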
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.294 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Releasing lock "refresh_cache-0389cfa0-085b-4e4e-8d61-95d0b91c413e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.295 244018 DEBUG nova.compute.manager [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Instance network_info: |[{"id": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "address": "fa:16:3e:89:15:55", "network": {"id": "6c261236-9e75-404c-ae2b-04691f3dc670", "bridge": "br-int", "label": "tempest-network-smoke--1012459848", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb845b3-f7", "ovs_interfaceid": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.296 244018 DEBUG oslo_concurrency.lockutils [req-2f7ed36f-ed5f-4c81-a094-06c1d64b061f req-bcf6194d-65a8-47d0-9992-6ef14ecd7933 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-0389cfa0-085b-4e4e-8d61-95d0b91c413e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.297 244018 DEBUG nova.network.neutron [req-2f7ed36f-ed5f-4c81-a094-06c1d64b061f req-bcf6194d-65a8-47d0-9992-6ef14ecd7933 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Refreshing network info cache for port dcb845b3-f7fb-449b-b027-65efdcdcf6ec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.302 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Start _get_guest_xml network_info=[{"id": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "address": "fa:16:3e:89:15:55", "network": {"id": "6c261236-9e75-404c-ae2b-04691f3dc670", "bridge": "br-int", "label": "tempest-network-smoke--1012459848", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb845b3-f7", "ovs_interfaceid": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.310 244018 WARNING nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.316 244018 DEBUG nova.virt.libvirt.host [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.317 244018 DEBUG nova.virt.libvirt.host [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.322 244018 DEBUG nova.virt.libvirt.host [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.323 244018 DEBUG nova.virt.libvirt.host [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.324 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.324 244018 DEBUG nova.virt.hardware [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.325 244018 DEBUG nova.virt.hardware [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.325 244018 DEBUG nova.virt.hardware [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.326 244018 DEBUG nova.virt.hardware [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.326 244018 DEBUG nova.virt.hardware [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.326 244018 DEBUG nova.virt.hardware [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.327 244018 DEBUG nova.virt.hardware [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.327 244018 DEBUG nova.virt.hardware [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.328 244018 DEBUG nova.virt.hardware [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.328 244018 DEBUG nova.virt.hardware [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.328 244018 DEBUG nova.virt.hardware [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
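The topology search above reduces to enumerating (sockets, cores, threads) triples whose product equals the vCPU count, bounded by the 65536 per-axis limits that were logged. A simplified sketch of that enumeration (not nova.virt.hardware's exact ordering or preference handling):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Every factorization of vcpus within the per-axis limits.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    # A 1-vCPU flavor admits exactly one topology, as logged:
    print(list(possible_topologies(1)))  # [(1, 1, 1)]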
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.334 244018 DEBUG oslo_concurrency.processutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:57:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:57:36 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3958285174' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.685 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.691 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.711 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
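Placement derives effective capacity from each inventory record as (total - reserved) * allocation_ratio, so the unchanged inventory above corresponds to the following allocatable amounts (a worked sketch of that arithmetic, values copied from the log):

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2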
Feb 25 07:57:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2311: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 15 KiB/s wr, 0 op/s
Feb 25 07:57:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:57:36 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2621989641' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.904 244018 DEBUG oslo_concurrency.processutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.928 244018 DEBUG nova.storage.rbd_utils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.933 244018 DEBUG oslo_concurrency.processutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.961 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 07:57:36 np0005629333 nova_compute[244014]: 2026-02-25 12:57:36.961 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:57:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:57:37 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/429612525' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.443 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.452 244018 DEBUG oslo_concurrency.processutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.453 244018 DEBUG nova.virt.libvirt.vif [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:57:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1742784697',display_name='tempest-TestNetworkBasicOps-server-1742784697',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1742784697',id=140,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAZFg6svZGG2HhSYQn7nNKCzfycyIXr66T8mUvP1MQ920eEEyjRb+o64QsZLqXfBMNPIzdFc3Q2mjbznplS/flTAudp3OXevaI0LCPFmYdclt9P1cih6MnuEw2nz2PTUtw==',key_name='tempest-TestNetworkBasicOps-571395165',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ydu0he2w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:57:32Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=0389cfa0-085b-4e4e-8d61-95d0b91c413e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "address": "fa:16:3e:89:15:55", "network": {"id": "6c261236-9e75-404c-ae2b-04691f3dc670", "bridge": "br-int", "label": "tempest-network-smoke--1012459848", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb845b3-f7", "ovs_interfaceid": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.454 244018 DEBUG nova.network.os_vif_util [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "address": "fa:16:3e:89:15:55", "network": {"id": "6c261236-9e75-404c-ae2b-04691f3dc670", "bridge": "br-int", "label": "tempest-network-smoke--1012459848", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb845b3-f7", "ovs_interfaceid": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.454 244018 DEBUG nova.network.os_vif_util [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:15:55,bridge_name='br-int',has_traffic_filtering=True,id=dcb845b3-f7fb-449b-b027-65efdcdcf6ec,network=Network(6c261236-9e75-404c-ae2b-04691f3dc670),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcb845b3-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.455 244018 DEBUG nova.objects.instance [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0389cfa0-085b-4e4e-8d61-95d0b91c413e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.509 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:57:37 np0005629333 nova_compute[244014]:  <uuid>0389cfa0-085b-4e4e-8d61-95d0b91c413e</uuid>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:  <name>instance-0000008c</name>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestNetworkBasicOps-server-1742784697</nova:name>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:57:36</nova:creationTime>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:57:37 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:        <nova:user uuid="31d013eaf26a447394d93c83ab8def60">tempest-TestNetworkBasicOps-80594480-project-member</nova:user>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:        <nova:project uuid="e227b91c24404ab5aed600e2fe792d32">tempest-TestNetworkBasicOps-80594480</nova:project>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:        <nova:port uuid="dcb845b3-f7fb-449b-b027-65efdcdcf6ec">
Feb 25 07:57:37 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <entry name="serial">0389cfa0-085b-4e4e-8d61-95d0b91c413e</entry>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <entry name="uuid">0389cfa0-085b-4e4e-8d61-95d0b91c413e</entry>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk">
Feb 25 07:57:37 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:57:37 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk.config">
Feb 25 07:57:37 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:57:37 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:89:15:55"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <target dev="tapdcb845b3-f7"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/0389cfa0-085b-4e4e-8d61-95d0b91c413e/console.log" append="off"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:57:37 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:57:37 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:57:37 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:57:37 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
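[editor's note] The record above closes the guest XML that the libvirt driver renders in _get_guest_xml before defining the domain. As a minimal sketch of the define-and-boot step that follows, assuming the libvirt-python bindings, a qemu:///system URI, and a hypothetical file holding the XML dump above (the real driver passes the XML string straight to libvirt):

    import libvirt  # libvirt-python bindings, assumed available on the compute host

    # Hypothetical path holding a domain definition like the one logged above.
    with open("/tmp/instance-0000008c.xml") as f:
        xml = f.read()

    conn = libvirt.open("qemu:///system")  # connection URI is an assumption
    try:
        dom = conn.defineXML(xml)  # persist the definition, like `virsh define`
        dom.create()               # power it on, like `virsh start`
        print(dom.name(), dom.state())  # state() returns [state, reason]
    finally:
        conn.close()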
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.510 244018 DEBUG nova.compute.manager [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Preparing to wait for external event network-vif-plugged-dcb845b3-f7fb-449b-b027-65efdcdcf6ec prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.510 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.510 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.511 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.511 244018 DEBUG nova.virt.libvirt.vif [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:57:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1742784697',display_name='tempest-TestNetworkBasicOps-server-1742784697',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1742784697',id=140,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAZFg6svZGG2HhSYQn7nNKCzfycyIXr66T8mUvP1MQ920eEEyjRb+o64QsZLqXfBMNPIzdFc3Q2mjbznplS/flTAudp3OXevaI0LCPFmYdclt9P1cih6MnuEw2nz2PTUtw==',key_name='tempest-TestNetworkBasicOps-571395165',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ydu0he2w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:57:32Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=0389cfa0-085b-4e4e-8d61-95d0b91c413e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "address": "fa:16:3e:89:15:55", "network": {"id": "6c261236-9e75-404c-ae2b-04691f3dc670", "bridge": "br-int", "label": "tempest-network-smoke--1012459848", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb845b3-f7", "ovs_interfaceid": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.512 244018 DEBUG nova.network.os_vif_util [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "address": "fa:16:3e:89:15:55", "network": {"id": "6c261236-9e75-404c-ae2b-04691f3dc670", "bridge": "br-int", "label": "tempest-network-smoke--1012459848", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb845b3-f7", "ovs_interfaceid": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.512 244018 DEBUG nova.network.os_vif_util [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:15:55,bridge_name='br-int',has_traffic_filtering=True,id=dcb845b3-f7fb-449b-b027-65efdcdcf6ec,network=Network(6c261236-9e75-404c-ae2b-04691f3dc670),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcb845b3-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.512 244018 DEBUG os_vif [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:15:55,bridge_name='br-int',has_traffic_filtering=True,id=dcb845b3-f7fb-449b-b027-65efdcdcf6ec,network=Network(6c261236-9e75-404c-ae2b-04691f3dc670),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcb845b3-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.513 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.513 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.513 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.517 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.517 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdcb845b3-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.517 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdcb845b3-f7, col_values=(('external_ids', {'iface-id': 'dcb845b3-f7fb-449b-b027-65efdcdcf6ec', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:89:15:55', 'vm-uuid': '0389cfa0-085b-4e4e-8d61-95d0b91c413e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
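[editor's note] The two transactions above are os-vif's entire OVS plug: an idempotent AddBridgeCommand for br-int (logged as "Transaction caused no change" because the bridge already exists), then one transaction that adds the tap port and stamps its Interface row with the external_ids keys OVN uses to bind the port. A minimal sketch of the same sequence with ovsdbapp; the socket path is an assumption, since os-vif takes its ovsdb connection string from configuration:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connection string is an assumption (os_vif reads its ovsdb_connection option).
    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Txn 1: ensure br-int exists; a no-op when it already does.
    api.add_br("br-int", may_exist=True, datapath_type="system").execute(check_error=True)

    # Txn 2: add the port and set external_ids, mirroring command(idx=0)/(idx=1) above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tapdcb845b3-f7", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tapdcb845b3-f7",
            ("external_ids", {
                "iface-id": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:89:15:55",
                "vm-uuid": "0389cfa0-085b-4e4e-8d61-95d0b91c413e",
            })))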
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.519 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:37 np0005629333 NetworkManager[49836]: <info>  [1772024257.5202] manager: (tapdcb845b3-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/611)
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.521 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.527 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.528 244018 INFO os_vif [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:15:55,bridge_name='br-int',has_traffic_filtering=True,id=dcb845b3-f7fb-449b-b027-65efdcdcf6ec,network=Network(6c261236-9e75-404c-ae2b-04691f3dc670),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcb845b3-f7')#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.578 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.578 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.578 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] No VIF found with MAC fa:16:3e:89:15:55, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.579 244018 INFO nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Using config drive#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.599 244018 DEBUG nova.storage.rbd_utils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.958 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.958 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.959 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.982 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 07:57:37 np0005629333 nova_compute[244014]: 2026-02-25 12:57:37.995 244018 INFO nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Creating config drive at /var/lib/nova/instances/0389cfa0-085b-4e4e-8d61-95d0b91c413e/disk.config#033[00m
Feb 25 07:57:38 np0005629333 nova_compute[244014]: 2026-02-25 12:57:38.000 244018 DEBUG oslo_concurrency.processutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0389cfa0-085b-4e4e-8d61-95d0b91c413e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpclqw_7j4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:57:38 np0005629333 nova_compute[244014]: 2026-02-25 12:57:38.046 244018 DEBUG nova.network.neutron [req-2f7ed36f-ed5f-4c81-a094-06c1d64b061f req-bcf6194d-65a8-47d0-9992-6ef14ecd7933 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Updated VIF entry in instance network info cache for port dcb845b3-f7fb-449b-b027-65efdcdcf6ec. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:57:38 np0005629333 nova_compute[244014]: 2026-02-25 12:57:38.047 244018 DEBUG nova.network.neutron [req-2f7ed36f-ed5f-4c81-a094-06c1d64b061f req-bcf6194d-65a8-47d0-9992-6ef14ecd7933 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Updating instance_info_cache with network_info: [{"id": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "address": "fa:16:3e:89:15:55", "network": {"id": "6c261236-9e75-404c-ae2b-04691f3dc670", "bridge": "br-int", "label": "tempest-network-smoke--1012459848", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb845b3-f7", "ovs_interfaceid": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:57:38 np0005629333 nova_compute[244014]: 2026-02-25 12:57:38.068 244018 DEBUG oslo_concurrency.lockutils [req-2f7ed36f-ed5f-4c81-a094-06c1d64b061f req-bcf6194d-65a8-47d0-9992-6ef14ecd7933 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-0389cfa0-085b-4e4e-8d61-95d0b91c413e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:57:38 np0005629333 nova_compute[244014]: 2026-02-25 12:57:38.133 244018 DEBUG oslo_concurrency.processutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0389cfa0-085b-4e4e-8d61-95d0b91c413e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpclqw_7j4" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:57:38 np0005629333 nova_compute[244014]: 2026-02-25 12:57:38.163 244018 DEBUG nova.storage.rbd_utils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] rbd image 0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:57:38 np0005629333 nova_compute[244014]: 2026-02-25 12:57:38.167 244018 DEBUG oslo_concurrency.processutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0389cfa0-085b-4e4e-8d61-95d0b91c413e/disk.config 0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:57:38 np0005629333 nova_compute[244014]: 2026-02-25 12:57:38.305 244018 DEBUG oslo_concurrency.processutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0389cfa0-085b-4e4e-8d61-95d0b91c413e/disk.config 0389cfa0-085b-4e4e-8d61-95d0b91c413e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:57:38 np0005629333 nova_compute[244014]: 2026-02-25 12:57:38.306 244018 INFO nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Deleting local config drive /var/lib/nova/instances/0389cfa0-085b-4e4e-8d61-95d0b91c413e/disk.config because it was imported into RBD.#033[00m
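[editor's note] Because this deployment backs disks with RBD, the config drive is built locally, imported into the vms pool, and the local ISO is then deleted, as the three records above show. A sketch of the same steps, reusing the exact commands from the log (the staged metadata directory name is illustrative; Nova uses a throwaway temp directory, and the -publisher flag is omitted here for brevity):

    import os
    import subprocess

    instance = "0389cfa0-085b-4e4e-8d61-95d0b91c413e"
    iso = f"/var/lib/nova/instances/{instance}/disk.config"

    # 1. Build an ISO9660 config drive labelled config-2 from the staged metadata tree.
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-quiet", "-J", "-r", "-V", "config-2",
         "/tmp/staged_metadata"],  # illustrative stand-in for Nova's tmp dir
        check=True)

    # 2. Import it into Ceph as <uuid>_disk.config in the vms pool.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{instance}_disk.config",
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)

    # 3. Drop the local copy now that RBD holds the image.
    os.unlink(iso)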
Feb 25 07:57:38 np0005629333 kernel: tapdcb845b3-f7: entered promiscuous mode
Feb 25 07:57:38 np0005629333 NetworkManager[49836]: <info>  [1772024258.3587] manager: (tapdcb845b3-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/612)
Feb 25 07:57:38 np0005629333 ovn_controller[147040]: 2026-02-25T12:57:38Z|01468|binding|INFO|Claiming lport dcb845b3-f7fb-449b-b027-65efdcdcf6ec for this chassis.
Feb 25 07:57:38 np0005629333 ovn_controller[147040]: 2026-02-25T12:57:38Z|01469|binding|INFO|dcb845b3-f7fb-449b-b027-65efdcdcf6ec: Claiming fa:16:3e:89:15:55 10.100.0.8
Feb 25 07:57:38 np0005629333 nova_compute[244014]: 2026-02-25 12:57:38.362 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.370 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:15:55 10.100.0.8'], port_security=['fa:16:3e:89:15:55 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0389cfa0-085b-4e4e-8d61-95d0b91c413e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c261236-9e75-404c-ae2b-04691f3dc670', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bd6f4c79-0604-475b-8d7d-e893b4f0a2c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e13b5f3-1e2c-45d6-a1db-56c3aecd9d1e, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=dcb845b3-f7fb-449b-b027-65efdcdcf6ec) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:57:38 np0005629333 ovn_controller[147040]: 2026-02-25T12:57:38Z|01470|binding|INFO|Setting lport dcb845b3-f7fb-449b-b027-65efdcdcf6ec up in Southbound
Feb 25 07:57:38 np0005629333 ovn_controller[147040]: 2026-02-25T12:57:38Z|01471|binding|INFO|Setting lport dcb845b3-f7fb-449b-b027-65efdcdcf6ec ovn-installed in OVS
Feb 25 07:57:38 np0005629333 nova_compute[244014]: 2026-02-25 12:57:38.375 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.373 157129 INFO neutron.agent.ovn.metadata.agent [-] Port dcb845b3-f7fb-449b-b027-65efdcdcf6ec in datapath 6c261236-9e75-404c-ae2b-04691f3dc670 bound to our chassis#033[00m
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.376 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c261236-9e75-404c-ae2b-04691f3dc670#033[00m
Feb 25 07:57:38 np0005629333 nova_compute[244014]: 2026-02-25 12:57:38.380 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.387 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4c1e0706-1745-4dea-ba3a-24906a867e21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.388 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6c261236-91 in ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.393 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6c261236-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.393 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5b5b63a1-7322-4a7a-87ba-b3d646bbbc7a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:38 np0005629333 systemd-machined[210048]: New machine qemu-172-instance-0000008c.
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.394 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8d826250-43e2-42f6-8f69-c9c43d195def]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.407 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[ec9f8eb4-3220-4eac-b0fa-292bf482f9f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:38 np0005629333 systemd[1]: Started Virtual Machine qemu-172-instance-0000008c.
Feb 25 07:57:38 np0005629333 systemd-udevd[371073]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.423 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b56026a5-a0ed-4bbe-85bd-3a7e85c35a21]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:38 np0005629333 NetworkManager[49836]: <info>  [1772024258.4369] device (tapdcb845b3-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:57:38 np0005629333 NetworkManager[49836]: <info>  [1772024258.4374] device (tapdcb845b3-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.457 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff49975-e861-4945-81c0-fa5b857d6591]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:38 np0005629333 NetworkManager[49836]: <info>  [1772024258.4657] manager: (tap6c261236-90): new Veth device (/org/freedesktop/NetworkManager/Devices/613)
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.465 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[35679de3-49e9-4cee-a2f7-c1f7b9db9a5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.500 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8e9a0019-08a0-43f8-9280-1792fbb757d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.504 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c199634b-b8b0-47b8-8154-2eb0c83105ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:38 np0005629333 NetworkManager[49836]: <info>  [1772024258.5264] device (tap6c261236-90): carrier: link connected
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.531 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[a0920e65-1d11-4220-bb66-0a6b99abade1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.547 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9f017312-89ae-4167-8888-6b6e5de785fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c261236-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:4c:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 437], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622793, 'reachable_time': 28503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371103, 'error': None, 'target': 'ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.562 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[aa084513-65bf-4afe-ae68-58dd7c3f43e9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:4ce3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622793, 'tstamp': 622793}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371104, 'error': None, 'target': 'ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.573 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[26f85744-7700-4467-9dce-2f0229385b06]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c261236-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:4c:e3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 437], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622793, 'reachable_time': 28503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 371105, 'error': None, 'target': 'ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
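[editor's note] The privsep replies above are raw pyroute2 netlink dumps (RTM_NEWLINK/RTM_NEWADDR) for the freshly created veth leg tap6c261236-91 inside the ovnmeta namespace; the agent reads them to confirm the link is UP and carries the expected MAC. A minimal sketch of the same query, assuming pyroute2 and enough privilege to enter the namespace (the real agent routes this through the oslo.privsep daemon rather than running as root itself):

    from pyroute2 import NetNS

    ns = NetNS("ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670")
    try:
        idx = ns.link_lookup(ifname="tap6c261236-91")[0]
        link = ns.link("get", index=idx)[0]
        print(link.get_attr("IFLA_OPERSTATE"),  # 'UP' in the dump above
              link.get_attr("IFLA_ADDRESS"))    # 'fa:16:3e:dc:4c:e3'
    finally:
        ns.close()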
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.594 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ae70cec1-36fc-436c-b50c-3bc62b64b4dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.638 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4624cc1b-d6bf-4f27-8b6c-60c3fc04dd7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.641 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c261236-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.641 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.642 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c261236-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:57:38 np0005629333 nova_compute[244014]: 2026-02-25 12:57:38.645 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:38 np0005629333 kernel: tap6c261236-90: entered promiscuous mode
Feb 25 07:57:38 np0005629333 NetworkManager[49836]: <info>  [1772024258.6471] manager: (tap6c261236-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/614)
Feb 25 07:57:38 np0005629333 nova_compute[244014]: 2026-02-25 12:57:38.648 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.649 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c261236-90, col_values=(('external_ids', {'iface-id': 'f9b1a684-77d3-4856-bba2-cff43e072272'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:57:38 np0005629333 ovn_controller[147040]: 2026-02-25T12:57:38Z|01472|binding|INFO|Releasing lport f9b1a684-77d3-4856-bba2-cff43e072272 from this chassis (sb_readonly=0)
Feb 25 07:57:38 np0005629333 nova_compute[244014]: 2026-02-25 12:57:38.651 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:38 np0005629333 nova_compute[244014]: 2026-02-25 12:57:38.651 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.652 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6c261236-9e75-404c-ae2b-04691f3dc670.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6c261236-9e75-404c-ae2b-04691f3dc670.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.653 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c9d3c747-a914-490e-8707-5b202ce8c7e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.654 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-6c261236-9e75-404c-ae2b-04691f3dc670
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/6c261236-9e75-404c-ae2b-04691f3dc670.pid.haproxy
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 6c261236-9e75-404c-ae2b-04691f3dc670
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
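[editor's note] The generated haproxy instance binds 169.254.169.254:80 inside the ovnmeta- namespace, injects X-OVN-Network-ID so the backend can tell which network a request came from, and forwards to the agent's unix socket at /var/lib/neutron/metadata_proxy. From a guest on this network the well-known endpoint is then reachable link-locally; a minimal sketch of a guest-side fetch:

    import json
    import urllib.request

    # Standard OpenStack metadata endpoint served via the proxy above.
    URL = "http://169.254.169.254/openstack/latest/meta_data.json"

    with urllib.request.urlopen(URL, timeout=5) as resp:
        md = json.load(resp)
    print(md["uuid"], md.get("name"))  # e.g. the instance UUID seen in this log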
Feb 25 07:57:38 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:38.655 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670', 'env', 'PROCESS_TAG=haproxy-6c261236-9e75-404c-ae2b-04691f3dc670', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6c261236-9e75-404c-ae2b-04691f3dc670.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 07:57:38 np0005629333 nova_compute[244014]: 2026-02-25 12:57:38.656 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:57:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2312: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:57:38 np0005629333 nova_compute[244014]: 2026-02-25 12:57:38.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:57:38 np0005629333 podman[371138]: 2026-02-25 12:57:38.976070717 +0000 UTC m=+0.056110823 container create 3046b04d14faec9fa5503201588fbc566d131cd1b7d3ee6b39d335ecb002632c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:57:39 np0005629333 systemd[1]: Started libpod-conmon-3046b04d14faec9fa5503201588fbc566d131cd1b7d3ee6b39d335ecb002632c.scope.
Feb 25 07:57:39 np0005629333 podman[371138]: 2026-02-25 12:57:38.942034917 +0000 UTC m=+0.022075033 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:57:39 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:57:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57e765a2b026bd301f1d6a49133c1f0c94503a7d34a089d5f615c08ee5f0bc83/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:57:39 np0005629333 podman[371138]: 2026-02-25 12:57:39.059752098 +0000 UTC m=+0.139792224 container init 3046b04d14faec9fa5503201588fbc566d131cd1b7d3ee6b39d335ecb002632c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 25 07:57:39 np0005629333 podman[371138]: 2026-02-25 12:57:39.065004076 +0000 UTC m=+0.145044182 container start 3046b04d14faec9fa5503201588fbc566d131cd1b7d3ee6b39d335ecb002632c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 07:57:39 np0005629333 neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670[371178]: [NOTICE]   (371198) : New worker (371201) forked
Feb 25 07:57:39 np0005629333 neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670[371178]: [NOTICE]   (371198) : Loading success.
Feb 25 07:57:39 np0005629333 nova_compute[244014]: 2026-02-25 12:57:39.136 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024259.1354146, 0389cfa0-085b-4e4e-8d61-95d0b91c413e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:57:39 np0005629333 nova_compute[244014]: 2026-02-25 12:57:39.136 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] VM Started (Lifecycle Event)#033[00m
Feb 25 07:57:39 np0005629333 nova_compute[244014]: 2026-02-25 12:57:39.155 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:57:39 np0005629333 nova_compute[244014]: 2026-02-25 12:57:39.160 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024259.135805, 0389cfa0-085b-4e4e-8d61-95d0b91c413e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:57:39 np0005629333 nova_compute[244014]: 2026-02-25 12:57:39.160 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:57:39 np0005629333 nova_compute[244014]: 2026-02-25 12:57:39.209 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:57:39 np0005629333 nova_compute[244014]: 2026-02-25 12:57:39.213 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:57:39 np0005629333 nova_compute[244014]: 2026-02-25 12:57:39.242 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:57:40 np0005629333 nova_compute[244014]: 2026-02-25 12:57:40.546 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2313: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 07:57:42 np0005629333 nova_compute[244014]: 2026-02-25 12:57:42.333 244018 DEBUG nova.compute.manager [req-ed698516-676d-4f81-acad-cd6f4217dd56 req-695ccad3-da91-4a73-897a-7533f123a9fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Received event network-vif-plugged-dcb845b3-f7fb-449b-b027-65efdcdcf6ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:57:42 np0005629333 nova_compute[244014]: 2026-02-25 12:57:42.333 244018 DEBUG oslo_concurrency.lockutils [req-ed698516-676d-4f81-acad-cd6f4217dd56 req-695ccad3-da91-4a73-897a-7533f123a9fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:57:42 np0005629333 nova_compute[244014]: 2026-02-25 12:57:42.334 244018 DEBUG oslo_concurrency.lockutils [req-ed698516-676d-4f81-acad-cd6f4217dd56 req-695ccad3-da91-4a73-897a-7533f123a9fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:57:42 np0005629333 nova_compute[244014]: 2026-02-25 12:57:42.334 244018 DEBUG oslo_concurrency.lockutils [req-ed698516-676d-4f81-acad-cd6f4217dd56 req-695ccad3-da91-4a73-897a-7533f123a9fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:57:42 np0005629333 nova_compute[244014]: 2026-02-25 12:57:42.334 244018 DEBUG nova.compute.manager [req-ed698516-676d-4f81-acad-cd6f4217dd56 req-695ccad3-da91-4a73-897a-7533f123a9fd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Processing event network-vif-plugged-dcb845b3-f7fb-449b-b027-65efdcdcf6ec _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 07:57:42 np0005629333 nova_compute[244014]: 2026-02-25 12:57:42.335 244018 DEBUG nova.compute.manager [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
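The Acquiring/acquired/released triplet around pop_instance_event is the standard oslo_concurrency.lockutils pattern: a named lock ("<instance-uuid>-events") serializes access to the per-instance event table while a waiter is popped and woken. A hedged sketch of the same bracket (the _events layout here is illustrative, not nova's actual structure):

    from oslo_concurrency import lockutils

    # assumption for the sketch: instance_uuid -> {event_name: waiter}
    _events = {}

    def pop_instance_event(instance_uuid, event_name):
        # The three DEBUG lines "Acquiring lock ... acquired ... released"
        # bracket exactly this critical section.
        with lockutils.lock(instance_uuid + "-events"):
            return _events.get(instance_uuid, {}).pop(event_name, None)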
Feb 25 07:57:42 np0005629333 nova_compute[244014]: 2026-02-25 12:57:42.338 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024262.337872, 0389cfa0-085b-4e4e-8d61-95d0b91c413e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:57:42 np0005629333 nova_compute[244014]: 2026-02-25 12:57:42.338 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] VM Resumed (Lifecycle Event)
Feb 25 07:57:42 np0005629333 nova_compute[244014]: 2026-02-25 12:57:42.339 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 07:57:42 np0005629333 nova_compute[244014]: 2026-02-25 12:57:42.342 244018 INFO nova.virt.libvirt.driver [-] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Instance spawned successfully.
Feb 25 07:57:42 np0005629333 nova_compute[244014]: 2026-02-25 12:57:42.342 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 07:57:42 np0005629333 nova_compute[244014]: 2026-02-25 12:57:42.360 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:57:42 np0005629333 nova_compute[244014]: 2026-02-25 12:57:42.364 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:57:42 np0005629333 nova_compute[244014]: 2026-02-25 12:57:42.368 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:57:42 np0005629333 nova_compute[244014]: 2026-02-25 12:57:42.368 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:57:42 np0005629333 nova_compute[244014]: 2026-02-25 12:57:42.368 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:57:42 np0005629333 nova_compute[244014]: 2026-02-25 12:57:42.369 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:57:42 np0005629333 nova_compute[244014]: 2026-02-25 12:57:42.369 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:57:42 np0005629333 nova_compute[244014]: 2026-02-25 12:57:42.370 244018 DEBUG nova.virt.libvirt.driver [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
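The burst of "Found default for ..." lines records, once per hw_* property the image left unset, the bus or model the driver actually chose, so that later rebuilds keep the same guest ABI. A small sketch reproducing exactly the defaults logged above (the helper name is made up; nova's version is _register_undefined_instance_details):

    # Defaults as reported in the lines above for this image.
    LOGGED_DEFAULTS = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }

    def register_undefined_details(image_properties):
        # Only properties the image did not set get a recorded default.
        return {k: v for k, v in LOGGED_DEFAULTS.items()
                if k not in image_properties}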
Feb 25 07:57:42 np0005629333 nova_compute[244014]: 2026-02-25 12:57:42.394 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:57:42 np0005629333 nova_compute[244014]: 2026-02-25 12:57:42.421 244018 INFO nova.compute.manager [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Took 9.52 seconds to spawn the instance on the hypervisor.
Feb 25 07:57:42 np0005629333 nova_compute[244014]: 2026-02-25 12:57:42.422 244018 DEBUG nova.compute.manager [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:57:42 np0005629333 nova_compute[244014]: 2026-02-25 12:57:42.518 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:57:42 np0005629333 nova_compute[244014]: 2026-02-25 12:57:42.665 244018 INFO nova.compute.manager [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Took 11.21 seconds to build instance.
Feb 25 07:57:42 np0005629333 nova_compute[244014]: 2026-02-25 12:57:42.695 244018 DEBUG oslo_concurrency.lockutils [None req-d777ca1d-150e-4cde-a89e-b7deb4058b9b 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.329s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:57:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:57:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:57:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:57:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:57:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011215693877529972 of space, bias 1.0, pg target 0.33647081632589915 quantized to 32 (current 32)
Feb 25 07:57:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:57:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:57:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:57:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:57:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:57:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024942067356138187 of space, bias 1.0, pg target 0.7482620206841456 quantized to 32 (current 32)
Feb 25 07:57:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:57:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3702431328712579e-06 of space, bias 4.0, pg target 0.0016442917594455095 quantized to 16 (current 16)
Feb 25 07:57:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:57:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:57:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:57:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:57:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:57:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:57:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:57:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:57:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:57:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
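The pg_autoscaler lines are reproducible arithmetic: each raw pg target equals usage_fraction * bias * 300, where 300 is plausibly 3 OSDs times the default mon_target_pg_per_osd of 100 (an assumption — only the product appears in the log; the 64411926528-byte capacity is the cluster's ~60 GiB). A quick check against two of the pools above:

    TOTAL_TARGET_PGS = 300  # assumed: 3 OSDs x 100 target PGs per OSD

    def raw_pg_target(usage_fraction, bias=1.0):
        return usage_fraction * bias * TOTAL_TARGET_PGS

    # 'vms', bias 1.0
    assert abs(raw_pg_target(0.0011215693877529972) - 0.33647081632589915) < 1e-12
    # 'cephfs.cephfs.meta', bias 4.0
    assert abs(raw_pg_target(1.3702431328712579e-06, 4.0) - 0.0016442917594455095) < 1e-12
    # The raw targets are then quantized to a power of two and clamped, which
    # is why every pool logs "quantized to N (current N)" and nothing changes.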
Feb 25 07:57:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2314: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 25 07:57:42 np0005629333 nova_compute[244014]: 2026-02-25 12:57:42.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:57:43 np0005629333 nova_compute[244014]: 2026-02-25 12:57:43.529 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Acquiring lock "621d2b1a-0d06-4a98-b252-2acafee3ba02" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:57:43 np0005629333 nova_compute[244014]: 2026-02-25 12:57:43.529 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:57:43 np0005629333 nova_compute[244014]: 2026-02-25 12:57:43.550 244018 DEBUG nova.compute.manager [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:57:43 np0005629333 nova_compute[244014]: 2026-02-25 12:57:43.635 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:57:43 np0005629333 nova_compute[244014]: 2026-02-25 12:57:43.635 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:57:43 np0005629333 nova_compute[244014]: 2026-02-25 12:57:43.644 244018 DEBUG nova.virt.hardware [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:57:43 np0005629333 nova_compute[244014]: 2026-02-25 12:57:43.644 244018 INFO nova.compute.claims [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Claim successful on node compute-0.ctlplane.example.com
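The hardware.py:2368 line just before the successful claim is an early exit: NUMA fitting only runs when both the host and the instance define a NUMA topology, and this flavor sets no hw:numa_* extra specs, so the claim proceeds as a plain non-NUMA placement. Sketched:

    def numa_fit_instance_to_host(host_topology, instance_topology):
        # "Require both a host and instance NUMA topology to fit instance
        # on host." -- nothing to fit when either side is undefined.
        if not (host_topology and instance_topology):
            return None
        ...  # real fitting logic elided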
Feb 25 07:57:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:57:43 np0005629333 podman[371210]: 2026-02-25 12:57:43.780567136 +0000 UTC m=+0.110149828 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 07:57:43 np0005629333 nova_compute[244014]: 2026-02-25 12:57:43.798 244018 DEBUG oslo_concurrency.processutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:57:43 np0005629333 podman[371211]: 2026-02-25 12:57:43.80340247 +0000 UTC m=+0.135521874 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 25 07:57:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:57:44 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3513331914' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.355 244018 DEBUG oslo_concurrency.processutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.363 244018 DEBUG nova.compute.provider_tree [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.384 244018 DEBUG nova.scheduler.client.report [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
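The inventory dict above fixes schedulable capacity per resource class as (total - reserved) * allocation_ratio, which is how this 8-vCPU host accepts up to 32 vCPUs of instances. Worked out:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2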
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.406 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.407 244018 DEBUG nova.compute.manager [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.472 244018 DEBUG nova.compute.manager [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.473 244018 DEBUG nova.network.neutron [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.505 244018 INFO nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.519 244018 DEBUG nova.compute.manager [req-92cae1a9-3426-4d85-b0c9-0681f78c2e57 req-5ea8e4b9-a777-4ac6-a2d8-2a77074cc281 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Received event network-vif-plugged-dcb845b3-f7fb-449b-b027-65efdcdcf6ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.519 244018 DEBUG oslo_concurrency.lockutils [req-92cae1a9-3426-4d85-b0c9-0681f78c2e57 req-5ea8e4b9-a777-4ac6-a2d8-2a77074cc281 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.520 244018 DEBUG oslo_concurrency.lockutils [req-92cae1a9-3426-4d85-b0c9-0681f78c2e57 req-5ea8e4b9-a777-4ac6-a2d8-2a77074cc281 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.520 244018 DEBUG oslo_concurrency.lockutils [req-92cae1a9-3426-4d85-b0c9-0681f78c2e57 req-5ea8e4b9-a777-4ac6-a2d8-2a77074cc281 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.521 244018 DEBUG nova.compute.manager [req-92cae1a9-3426-4d85-b0c9-0681f78c2e57 req-5ea8e4b9-a777-4ac6-a2d8-2a77074cc281 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] No waiting events found dispatching network-vif-plugged-dcb845b3-f7fb-449b-b027-65efdcdcf6ec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.521 244018 WARNING nova.compute.manager [req-92cae1a9-3426-4d85-b0c9-0681f78c2e57 req-5ea8e4b9-a777-4ac6-a2d8-2a77074cc281 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Received unexpected event network-vif-plugged-dcb845b3-f7fb-449b-b027-65efdcdcf6ec for instance with vm_state active and task_state None.
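The WARNING above is the benign tail of the vif-plug handshake: neutron re-sent network-vif-plugged after the first instance had already gone active, no coroutine was waiting on that (instance, event) pair, so pop_instance_event found nothing and the event was dropped with a warning rather than an error. In outline (the waiters mapping is illustrative, not nova's actual structure):

    def dispatch_external_event(instance_uuid, event_name, waiters):
        # waiters: (instance_uuid, event_name) -> waiting greenthread
        waiter = waiters.get((instance_uuid, event_name))
        if waiter is None:
            # "No waiting events found dispatching ..." then the WARNING
            print("WARNING: unexpected event %s for %s" % (event_name, instance_uuid))
            return
        waiter.send(event_name)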
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.526 244018 DEBUG nova.compute.manager [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.650 244018 DEBUG nova.compute.manager [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.652 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.653 244018 INFO nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Creating image(s)
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.687 244018 DEBUG nova.storage.rbd_utils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] rbd image 621d2b1a-0d06-4a98-b252-2acafee3ba02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.721 244018 DEBUG nova.storage.rbd_utils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] rbd image 621d2b1a-0d06-4a98-b252-2acafee3ba02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.749 244018 DEBUG nova.storage.rbd_utils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] rbd image 621d2b1a-0d06-4a98-b252-2acafee3ba02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.753 244018 DEBUG oslo_concurrency.processutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
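The qemu-img probe above is deliberately wrapped in oslo_concurrency.prlimit so a corrupt or hostile image cannot consume unbounded address space (--as=1073741824 bytes) or CPU time (--cpu=30 seconds) during inspection. A stdlib-only equivalent of the same guard (assumes Linux; nova uses the prlimit wrapper shown in the log, not this helper):

    import json, resource, subprocess

    def qemu_img_info(path, as_bytes=1 << 30, cpu_seconds=30):
        def set_limits():
            # Applied in the child between fork and exec, like prlimit.
            resource.setrlimit(resource.RLIMIT_AS, (as_bytes, as_bytes))
            resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        out = subprocess.check_output(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            preexec_fn=set_limits, env={"LC_ALL": "C", "LANG": "C"})
        return json.loads(out)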
Feb 25 07:57:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2315: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.840 244018 DEBUG oslo_concurrency.processutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.842 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.843 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.843 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.870 244018 DEBUG nova.storage.rbd_utils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] rbd image 621d2b1a-0d06-4a98-b252-2acafee3ba02_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.874 244018 DEBUG oslo_concurrency.processutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 621d2b1a-0d06-4a98-b252-2acafee3ba02_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.909 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:57:44 np0005629333 nova_compute[244014]: 2026-02-25 12:57:44.910 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:57:45 np0005629333 nova_compute[244014]: 2026-02-25 12:57:45.173 244018 DEBUG oslo_concurrency.processutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 621d2b1a-0d06-4a98-b252-2acafee3ba02_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:57:45 np0005629333 nova_compute[244014]: 2026-02-25 12:57:45.256 244018 DEBUG nova.policy [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2248dda8be6e4a51ace44a9e66dc4b45', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '089b8f0ee9684ec69cbdc13d24262170', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:57:45 np0005629333 nova_compute[244014]: 2026-02-25 12:57:45.266 244018 DEBUG nova.storage.rbd_utils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] resizing rbd image 621d2b1a-0d06-4a98-b252-2acafee3ba02_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
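The probe/import/resize sequence above is one flow: confirm <uuid>_disk is absent, rbd-import the locally cached base image into the vms pool, then grow it to the flavor's 1 GiB root disk (1073741824 bytes). A CLI-level sketch using the exact import flags from the log; note nova itself resizes through the python rbd binding (rbd_utils.resize), so the resize invocation below is an assumed CLI equivalent:

    import subprocess

    def import_and_resize(base_path, pool, image_name, size_bytes):
        subprocess.check_call(
            ["rbd", "import", "--pool", pool, base_path, image_name,
             "--image-format=2", "--id", "openstack",
             "--conf", "/etc/ceph/ceph.conf"])
        subprocess.check_call(
            ["rbd", "resize", "--pool", pool, "--image", image_name,
             # rbd sizes default to MiB: 1073741824 B -> 1024
             "--size", str(size_bytes // (1024 * 1024)),
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])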
Feb 25 07:57:45 np0005629333 nova_compute[244014]: 2026-02-25 12:57:45.361 244018 DEBUG nova.objects.instance [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lazy-loading 'migration_context' on Instance uuid 621d2b1a-0d06-4a98-b252-2acafee3ba02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:57:45 np0005629333 nova_compute[244014]: 2026-02-25 12:57:45.403 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:57:45 np0005629333 nova_compute[244014]: 2026-02-25 12:57:45.404 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Ensure instance console log exists: /var/lib/nova/instances/621d2b1a-0d06-4a98-b252-2acafee3ba02/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:57:45 np0005629333 nova_compute[244014]: 2026-02-25 12:57:45.405 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:57:45 np0005629333 nova_compute[244014]: 2026-02-25 12:57:45.405 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:57:45 np0005629333 nova_compute[244014]: 2026-02-25 12:57:45.406 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:57:45 np0005629333 nova_compute[244014]: 2026-02-25 12:57:45.547 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:57:45 np0005629333 nova_compute[244014]: 2026-02-25 12:57:45.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:57:45 np0005629333 nova_compute[244014]: 2026-02-25 12:57:45.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:57:45 np0005629333 nova_compute[244014]: 2026-02-25 12:57:45.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 07:57:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2316: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Feb 25 07:57:46 np0005629333 nova_compute[244014]: 2026-02-25 12:57:46.849 244018 DEBUG nova.network.neutron [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Successfully created port: 13dff95c-b96c-4657-9807-58964669e78a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:57:46 np0005629333 nova_compute[244014]: 2026-02-25 12:57:46.900 244018 DEBUG nova.compute.manager [req-69593754-bc1f-49be-adde-e092cbe632aa req-e151d0c6-7a47-48d4-b42d-b2528dea0603 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Received event network-changed-dcb845b3-f7fb-449b-b027-65efdcdcf6ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:57:46 np0005629333 nova_compute[244014]: 2026-02-25 12:57:46.901 244018 DEBUG nova.compute.manager [req-69593754-bc1f-49be-adde-e092cbe632aa req-e151d0c6-7a47-48d4-b42d-b2528dea0603 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Refreshing instance network info cache due to event network-changed-dcb845b3-f7fb-449b-b027-65efdcdcf6ec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:57:46 np0005629333 nova_compute[244014]: 2026-02-25 12:57:46.901 244018 DEBUG oslo_concurrency.lockutils [req-69593754-bc1f-49be-adde-e092cbe632aa req-e151d0c6-7a47-48d4-b42d-b2528dea0603 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-0389cfa0-085b-4e4e-8d61-95d0b91c413e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:57:46 np0005629333 nova_compute[244014]: 2026-02-25 12:57:46.902 244018 DEBUG oslo_concurrency.lockutils [req-69593754-bc1f-49be-adde-e092cbe632aa req-e151d0c6-7a47-48d4-b42d-b2528dea0603 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-0389cfa0-085b-4e4e-8d61-95d0b91c413e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:57:46 np0005629333 nova_compute[244014]: 2026-02-25 12:57:46.902 244018 DEBUG nova.network.neutron [req-69593754-bc1f-49be-adde-e092cbe632aa req-e151d0c6-7a47-48d4-b42d-b2528dea0603 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Refreshing network info cache for port dcb845b3-f7fb-449b-b027-65efdcdcf6ec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:57:47 np0005629333 nova_compute[244014]: 2026-02-25 12:57:47.520 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:57:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:57:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1895987037' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:57:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:57:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1895987037' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 07:57:48 np0005629333 nova_compute[244014]: 2026-02-25 12:57:48.677 244018 DEBUG nova.network.neutron [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Successfully updated port: 13dff95c-b96c-4657-9807-58964669e78a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:57:48 np0005629333 nova_compute[244014]: 2026-02-25 12:57:48.691 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Acquiring lock "refresh_cache-621d2b1a-0d06-4a98-b252-2acafee3ba02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:57:48 np0005629333 nova_compute[244014]: 2026-02-25 12:57:48.691 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Acquired lock "refresh_cache-621d2b1a-0d06-4a98-b252-2acafee3ba02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:57:48 np0005629333 nova_compute[244014]: 2026-02-25 12:57:48.692 244018 DEBUG nova.network.neutron [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:57:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:57:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2317: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 128 op/s
Feb 25 07:57:48 np0005629333 nova_compute[244014]: 2026-02-25 12:57:48.831 244018 DEBUG nova.network.neutron [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:57:48 np0005629333 nova_compute[244014]: 2026-02-25 12:57:48.989 244018 DEBUG nova.compute.manager [req-8991ebb0-50e8-4aa8-be7c-bc4155816dae req-ab742f1f-bd9e-4a8e-8ba3-a88941ee92ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Received event network-changed-13dff95c-b96c-4657-9807-58964669e78a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:57:48 np0005629333 nova_compute[244014]: 2026-02-25 12:57:48.990 244018 DEBUG nova.compute.manager [req-8991ebb0-50e8-4aa8-be7c-bc4155816dae req-ab742f1f-bd9e-4a8e-8ba3-a88941ee92ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Refreshing instance network info cache due to event network-changed-13dff95c-b96c-4657-9807-58964669e78a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:57:48 np0005629333 nova_compute[244014]: 2026-02-25 12:57:48.991 244018 DEBUG oslo_concurrency.lockutils [req-8991ebb0-50e8-4aa8-be7c-bc4155816dae req-ab742f1f-bd9e-4a8e-8ba3-a88941ee92ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-621d2b1a-0d06-4a98-b252-2acafee3ba02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:57:49 np0005629333 nova_compute[244014]: 2026-02-25 12:57:49.021 244018 DEBUG nova.network.neutron [req-69593754-bc1f-49be-adde-e092cbe632aa req-e151d0c6-7a47-48d4-b42d-b2528dea0603 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Updated VIF entry in instance network info cache for port dcb845b3-f7fb-449b-b027-65efdcdcf6ec. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 07:57:49 np0005629333 nova_compute[244014]: 2026-02-25 12:57:49.022 244018 DEBUG nova.network.neutron [req-69593754-bc1f-49be-adde-e092cbe632aa req-e151d0c6-7a47-48d4-b42d-b2528dea0603 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Updating instance_info_cache with network_info: [{"id": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "address": "fa:16:3e:89:15:55", "network": {"id": "6c261236-9e75-404c-ae2b-04691f3dc670", "bridge": "br-int", "label": "tempest-network-smoke--1012459848", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb845b3-f7", "ovs_interfaceid": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:57:49 np0005629333 nova_compute[244014]: 2026-02-25 12:57:49.040 244018 DEBUG oslo_concurrency.lockutils [req-69593754-bc1f-49be-adde-e092cbe632aa req-e151d0c6-7a47-48d4-b42d-b2528dea0603 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-0389cfa0-085b-4e4e-8d61-95d0b91c413e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:57:49 np0005629333 nova_compute[244014]: 2026-02-25 12:57:49.443 244018 DEBUG nova.network.neutron [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Updating instance_info_cache with network_info: [{"id": "13dff95c-b96c-4657-9807-58964669e78a", "address": "fa:16:3e:42:9b:25", "network": {"id": "f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd", "bridge": "br-int", "label": "tempest-network-smoke--679537891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "089b8f0ee9684ec69cbdc13d24262170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13dff95c-b9", "ovs_interfaceid": "13dff95c-b96c-4657-9807-58964669e78a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:57:49 np0005629333 nova_compute[244014]: 2026-02-25 12:57:49.461 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Releasing lock "refresh_cache-621d2b1a-0d06-4a98-b252-2acafee3ba02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:57:49 np0005629333 nova_compute[244014]: 2026-02-25 12:57:49.462 244018 DEBUG nova.compute.manager [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Instance network_info: |[{"id": "13dff95c-b96c-4657-9807-58964669e78a", "address": "fa:16:3e:42:9b:25", "network": {"id": "f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd", "bridge": "br-int", "label": "tempest-network-smoke--679537891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "089b8f0ee9684ec69cbdc13d24262170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13dff95c-b9", "ovs_interfaceid": "13dff95c-b96c-4657-9807-58964669e78a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
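Once the log framing is stripped, the instance_info_cache payloads above are plain JSON, with addresses nested network -> subnets -> ips -> floating_ips. A small traversal helper, using the port just logged as the expected output:

    def iter_ips(network_info):
        # network_info is the parsed list from the cache entries above.
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    yield (vif["id"], ip["address"],
                           [f["address"] for f in ip["floating_ips"]])

    # For the cache entry above this yields:
    # ('13dff95c-b96c-4657-9807-58964669e78a', '10.100.0.8', [])
    # and for the earlier instance, the same fixed IP with floating
    # 192.168.122.177 attached.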
Feb 25 07:57:49 np0005629333 nova_compute[244014]: 2026-02-25 12:57:49.463 244018 DEBUG oslo_concurrency.lockutils [req-8991ebb0-50e8-4aa8-be7c-bc4155816dae req-ab742f1f-bd9e-4a8e-8ba3-a88941ee92ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-621d2b1a-0d06-4a98-b252-2acafee3ba02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:57:49 np0005629333 nova_compute[244014]: 2026-02-25 12:57:49.463 244018 DEBUG nova.network.neutron [req-8991ebb0-50e8-4aa8-be7c-bc4155816dae req-ab742f1f-bd9e-4a8e-8ba3-a88941ee92ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Refreshing network info cache for port 13dff95c-b96c-4657-9807-58964669e78a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:57:49 np0005629333 nova_compute[244014]: 2026-02-25 12:57:49.468 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Start _get_guest_xml network_info=[{"id": "13dff95c-b96c-4657-9807-58964669e78a", "address": "fa:16:3e:42:9b:25", "network": {"id": "f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd", "bridge": "br-int", "label": "tempest-network-smoke--679537891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "089b8f0ee9684ec69cbdc13d24262170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13dff95c-b9", "ovs_interfaceid": "13dff95c-b96c-4657-9807-58964669e78a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 07:57:49 np0005629333 nova_compute[244014]: 2026-02-25 12:57:49.473 244018 WARNING nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 07:57:49 np0005629333 nova_compute[244014]: 2026-02-25 12:57:49.481 244018 DEBUG nova.virt.libvirt.host [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 07:57:49 np0005629333 nova_compute[244014]: 2026-02-25 12:57:49.482 244018 DEBUG nova.virt.libvirt.host [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 07:57:49 np0005629333 nova_compute[244014]: 2026-02-25 12:57:49.486 244018 DEBUG nova.virt.libvirt.host [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 07:57:49 np0005629333 nova_compute[244014]: 2026-02-25 12:57:49.486 244018 DEBUG nova.virt.libvirt.host [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 07:57:49 np0005629333 nova_compute[244014]: 2026-02-25 12:57:49.487 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:57:49 np0005629333 nova_compute[244014]: 2026-02-25 12:57:49.488 244018 DEBUG nova.virt.hardware [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:57:49 np0005629333 nova_compute[244014]: 2026-02-25 12:57:49.489 244018 DEBUG nova.virt.hardware [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:57:49 np0005629333 nova_compute[244014]: 2026-02-25 12:57:49.489 244018 DEBUG nova.virt.hardware [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:57:49 np0005629333 nova_compute[244014]: 2026-02-25 12:57:49.490 244018 DEBUG nova.virt.hardware [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:57:49 np0005629333 nova_compute[244014]: 2026-02-25 12:57:49.490 244018 DEBUG nova.virt.hardware [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:57:49 np0005629333 nova_compute[244014]: 2026-02-25 12:57:49.491 244018 DEBUG nova.virt.hardware [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:57:49 np0005629333 nova_compute[244014]: 2026-02-25 12:57:49.492 244018 DEBUG nova.virt.hardware [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:57:49 np0005629333 nova_compute[244014]: 2026-02-25 12:57:49.492 244018 DEBUG nova.virt.hardware [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:57:49 np0005629333 nova_compute[244014]: 2026-02-25 12:57:49.493 244018 DEBUG nova.virt.hardware [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:57:49 np0005629333 nova_compute[244014]: 2026-02-25 12:57:49.493 244018 DEBUG nova.virt.hardware [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:57:49 np0005629333 nova_compute[244014]: 2026-02-25 12:57:49.494 244018 DEBUG nova.virt.hardware [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:57:49 np0005629333 nova_compute[244014]: 2026-02-25 12:57:49.499 244018 DEBUG oslo_concurrency.processutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:57:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:57:50 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1456607593' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.055 244018 DEBUG oslo_concurrency.processutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.100 244018 DEBUG nova.storage.rbd_utils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] rbd image 621d2b1a-0d06-4a98-b252-2acafee3ba02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.108 244018 DEBUG oslo_concurrency.processutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.549 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:57:50 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2692681946' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.620 244018 DEBUG oslo_concurrency.processutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.622 244018 DEBUG nova.virt.libvirt.vif [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:57:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-370800561-access_point-2136907043',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-370800561-access_point-2136907043',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-370800561-acc',id=141,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJN+hUOMaKAQ0YB42oqWjvIfKM8uPP+02JazD6RxfhEthxrTmdltfGPXOOUYwxJvxVj/DZ80733jprZ2P4416qp+bN+v8dr9z51p/U8yRQby/Qu2leuQCIhcBO5CRWY1xg==',key_name='tempest-TestSecurityGroupsBasicOps-1642482985',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='089b8f0ee9684ec69cbdc13d24262170',ramdisk_id='',reservation_id='r-r5sfe18w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-370800561',owner_user_name='tempest-TestSecurityGroupsBasicOps-370800561-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:57:44Z,user_data=None,user_id='2248dda8be6e4a51ace44a9e66dc4b45',uuid=621d2b1a-0d06-4a98-b252-2acafee3ba02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "13dff95c-b96c-4657-9807-58964669e78a", "address": "fa:16:3e:42:9b:25", "network": {"id": "f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd", "bridge": "br-int", "label": "tempest-network-smoke--679537891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "089b8f0ee9684ec69cbdc13d24262170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13dff95c-b9", "ovs_interfaceid": "13dff95c-b96c-4657-9807-58964669e78a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.623 244018 DEBUG nova.network.os_vif_util [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Converting VIF {"id": "13dff95c-b96c-4657-9807-58964669e78a", "address": "fa:16:3e:42:9b:25", "network": {"id": "f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd", "bridge": "br-int", "label": "tempest-network-smoke--679537891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "089b8f0ee9684ec69cbdc13d24262170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13dff95c-b9", "ovs_interfaceid": "13dff95c-b96c-4657-9807-58964669e78a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.624 244018 DEBUG nova.network.os_vif_util [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:9b:25,bridge_name='br-int',has_traffic_filtering=True,id=13dff95c-b96c-4657-9807-58964669e78a,network=Network(f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13dff95c-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.626 244018 DEBUG nova.objects.instance [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lazy-loading 'pci_devices' on Instance uuid 621d2b1a-0d06-4a98-b252-2acafee3ba02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.664 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:57:50 np0005629333 nova_compute[244014]:  <uuid>621d2b1a-0d06-4a98-b252-2acafee3ba02</uuid>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:  <name>instance-0000008d</name>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-370800561-access_point-2136907043</nova:name>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:57:49</nova:creationTime>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:57:50 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:        <nova:user uuid="2248dda8be6e4a51ace44a9e66dc4b45">tempest-TestSecurityGroupsBasicOps-370800561-project-member</nova:user>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:        <nova:project uuid="089b8f0ee9684ec69cbdc13d24262170">tempest-TestSecurityGroupsBasicOps-370800561</nova:project>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:        <nova:port uuid="13dff95c-b96c-4657-9807-58964669e78a">
Feb 25 07:57:50 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <entry name="serial">621d2b1a-0d06-4a98-b252-2acafee3ba02</entry>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <entry name="uuid">621d2b1a-0d06-4a98-b252-2acafee3ba02</entry>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/621d2b1a-0d06-4a98-b252-2acafee3ba02_disk">
Feb 25 07:57:50 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:57:50 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/621d2b1a-0d06-4a98-b252-2acafee3ba02_disk.config">
Feb 25 07:57:50 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:57:50 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:42:9b:25"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <target dev="tap13dff95c-b9"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/621d2b1a-0d06-4a98-b252-2acafee3ba02/console.log" append="off"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:57:50 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:57:50 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:57:50 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:57:50 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.667 244018 DEBUG nova.compute.manager [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Preparing to wait for external event network-vif-plugged-13dff95c-b96c-4657-9807-58964669e78a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.668 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Acquiring lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.668 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.669 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.670 244018 DEBUG nova.virt.libvirt.vif [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:57:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-370800561-access_point-2136907043',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-370800561-access_point-2136907043',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-370800561-acc',id=141,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJN+hUOMaKAQ0YB42oqWjvIfKM8uPP+02JazD6RxfhEthxrTmdltfGPXOOUYwxJvxVj/DZ80733jprZ2P4416qp+bN+v8dr9z51p/U8yRQby/Qu2leuQCIhcBO5CRWY1xg==',key_name='tempest-TestSecurityGroupsBasicOps-1642482985',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='089b8f0ee9684ec69cbdc13d24262170',ramdisk_id='',reservation_id='r-r5sfe18w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-370800561',owner_user_name='tempest-TestSecurityGroupsBasicOps-370800561-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:57:44Z,user_data=None,user_id='2248dda8be6e4a51ace44a9e66dc4b45',uuid=621d2b1a-0d06-4a98-b252-2acafee3ba02,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "13dff95c-b96c-4657-9807-58964669e78a", "address": "fa:16:3e:42:9b:25", "network": {"id": "f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd", "bridge": "br-int", "label": "tempest-network-smoke--679537891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "089b8f0ee9684ec69cbdc13d24262170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13dff95c-b9", "ovs_interfaceid": "13dff95c-b96c-4657-9807-58964669e78a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.670 244018 DEBUG nova.network.os_vif_util [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Converting VIF {"id": "13dff95c-b96c-4657-9807-58964669e78a", "address": "fa:16:3e:42:9b:25", "network": {"id": "f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd", "bridge": "br-int", "label": "tempest-network-smoke--679537891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "089b8f0ee9684ec69cbdc13d24262170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13dff95c-b9", "ovs_interfaceid": "13dff95c-b96c-4657-9807-58964669e78a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.671 244018 DEBUG nova.network.os_vif_util [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:9b:25,bridge_name='br-int',has_traffic_filtering=True,id=13dff95c-b96c-4657-9807-58964669e78a,network=Network(f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13dff95c-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.672 244018 DEBUG os_vif [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:9b:25,bridge_name='br-int',has_traffic_filtering=True,id=13dff95c-b96c-4657-9807-58964669e78a,network=Network(f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13dff95c-b9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.673 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.673 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.674 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.684 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.684 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13dff95c-b9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.685 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap13dff95c-b9, col_values=(('external_ids', {'iface-id': '13dff95c-b96c-4657-9807-58964669e78a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:42:9b:25', 'vm-uuid': '621d2b1a-0d06-4a98-b252-2acafee3ba02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.689 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:50 np0005629333 NetworkManager[49836]: <info>  [1772024270.6904] manager: (tap13dff95c-b9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/615)
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.695 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.698 244018 INFO os_vif [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:9b:25,bridge_name='br-int',has_traffic_filtering=True,id=13dff95c-b96c-4657-9807-58964669e78a,network=Network(f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13dff95c-b9')#033[00m
Feb 25 07:57:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2318: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.834 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.834 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.834 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] No VIF found with MAC fa:16:3e:42:9b:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.835 244018 INFO nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Using config drive#033[00m
Feb 25 07:57:50 np0005629333 nova_compute[244014]: 2026-02-25 12:57:50.863 244018 DEBUG nova.storage.rbd_utils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] rbd image 621d2b1a-0d06-4a98-b252-2acafee3ba02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:57:51 np0005629333 nova_compute[244014]: 2026-02-25 12:57:51.338 244018 INFO nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Creating config drive at /var/lib/nova/instances/621d2b1a-0d06-4a98-b252-2acafee3ba02/disk.config#033[00m
Feb 25 07:57:51 np0005629333 nova_compute[244014]: 2026-02-25 12:57:51.342 244018 DEBUG oslo_concurrency.processutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/621d2b1a-0d06-4a98-b252-2acafee3ba02/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp7c74x7i6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:57:51 np0005629333 nova_compute[244014]: 2026-02-25 12:57:51.487 244018 DEBUG oslo_concurrency.processutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/621d2b1a-0d06-4a98-b252-2acafee3ba02/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp7c74x7i6" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:57:51 np0005629333 nova_compute[244014]: 2026-02-25 12:57:51.512 244018 DEBUG nova.storage.rbd_utils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] rbd image 621d2b1a-0d06-4a98-b252-2acafee3ba02_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:57:51 np0005629333 nova_compute[244014]: 2026-02-25 12:57:51.516 244018 DEBUG oslo_concurrency.processutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/621d2b1a-0d06-4a98-b252-2acafee3ba02/disk.config 621d2b1a-0d06-4a98-b252-2acafee3ba02_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:57:51 np0005629333 nova_compute[244014]: 2026-02-25 12:57:51.630 244018 DEBUG nova.network.neutron [req-8991ebb0-50e8-4aa8-be7c-bc4155816dae req-ab742f1f-bd9e-4a8e-8ba3-a88941ee92ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Updated VIF entry in instance network info cache for port 13dff95c-b96c-4657-9807-58964669e78a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:57:51 np0005629333 nova_compute[244014]: 2026-02-25 12:57:51.632 244018 DEBUG nova.network.neutron [req-8991ebb0-50e8-4aa8-be7c-bc4155816dae req-ab742f1f-bd9e-4a8e-8ba3-a88941ee92ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Updating instance_info_cache with network_info: [{"id": "13dff95c-b96c-4657-9807-58964669e78a", "address": "fa:16:3e:42:9b:25", "network": {"id": "f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd", "bridge": "br-int", "label": "tempest-network-smoke--679537891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "089b8f0ee9684ec69cbdc13d24262170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13dff95c-b9", "ovs_interfaceid": "13dff95c-b96c-4657-9807-58964669e78a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:57:51 np0005629333 nova_compute[244014]: 2026-02-25 12:57:51.641 244018 DEBUG oslo_concurrency.processutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/621d2b1a-0d06-4a98-b252-2acafee3ba02/disk.config 621d2b1a-0d06-4a98-b252-2acafee3ba02_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:57:51 np0005629333 nova_compute[244014]: 2026-02-25 12:57:51.641 244018 INFO nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Deleting local config drive /var/lib/nova/instances/621d2b1a-0d06-4a98-b252-2acafee3ba02/disk.config because it was imported into RBD.#033[00m
Feb 25 07:57:51 np0005629333 nova_compute[244014]: 2026-02-25 12:57:51.681 244018 DEBUG oslo_concurrency.lockutils [req-8991ebb0-50e8-4aa8-be7c-bc4155816dae req-ab742f1f-bd9e-4a8e-8ba3-a88941ee92ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-621d2b1a-0d06-4a98-b252-2acafee3ba02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:57:51 np0005629333 kernel: tap13dff95c-b9: entered promiscuous mode
Feb 25 07:57:51 np0005629333 NetworkManager[49836]: <info>  [1772024271.6917] manager: (tap13dff95c-b9): new Tun device (/org/freedesktop/NetworkManager/Devices/616)
Feb 25 07:57:51 np0005629333 nova_compute[244014]: 2026-02-25 12:57:51.692 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:51 np0005629333 ovn_controller[147040]: 2026-02-25T12:57:51Z|01473|binding|INFO|Claiming lport 13dff95c-b96c-4657-9807-58964669e78a for this chassis.
Feb 25 07:57:51 np0005629333 ovn_controller[147040]: 2026-02-25T12:57:51Z|01474|binding|INFO|13dff95c-b96c-4657-9807-58964669e78a: Claiming fa:16:3e:42:9b:25 10.100.0.8
Feb 25 07:57:51 np0005629333 ovn_controller[147040]: 2026-02-25T12:57:51Z|01475|binding|INFO|Setting lport 13dff95c-b96c-4657-9807-58964669e78a ovn-installed in OVS
Feb 25 07:57:51 np0005629333 nova_compute[244014]: 2026-02-25 12:57:51.701 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:51 np0005629333 nova_compute[244014]: 2026-02-25 12:57:51.705 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:57:51 np0005629333 systemd-udevd[371578]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:57:51 np0005629333 ovn_controller[147040]: 2026-02-25T12:57:51Z|01476|binding|INFO|Setting lport 13dff95c-b96c-4657-9807-58964669e78a up in Southbound
Feb 25 07:57:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.727 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:9b:25 10.100.0.8'], port_security=['fa:16:3e:42:9b:25 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '621d2b1a-0d06-4a98-b252-2acafee3ba02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '089b8f0ee9684ec69cbdc13d24262170', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0892ce4b-109e-43d1-a441-11b1be3535ea 1dfe9b6c-62fb-43ca-b23e-3d57cce238fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=916a48b0-3c8c-48cf-bf68-7fcb782233f1, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=13dff95c-b96c-4657-9807-58964669e78a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:57:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.729 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 13dff95c-b96c-4657-9807-58964669e78a in datapath f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd bound to our chassis#033[00m
Feb 25 07:57:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.730 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd#033[00m
Feb 25 07:57:51 np0005629333 NetworkManager[49836]: <info>  [1772024271.7385] device (tap13dff95c-b9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:57:51 np0005629333 NetworkManager[49836]: <info>  [1772024271.7393] device (tap13dff95c-b9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:57:51 np0005629333 systemd-machined[210048]: New machine qemu-173-instance-0000008d.
Feb 25 07:57:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.742 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[61223646-48d3-41c3-b9f2-86e09541c3db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.744 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf1fe6aa3-21 in ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 07:57:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.747 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf1fe6aa3-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 07:57:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.747 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6f607099-1509-430c-9d8e-8130007da305]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.748 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fd849e52-4fc6-406e-a7c6-5b2f7a9cfb7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:51 np0005629333 systemd[1]: Started Virtual Machine qemu-173-instance-0000008d.
Feb 25 07:57:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.759 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[f6c681f1-e547-465b-b838-e42432237e67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.771 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[25237499-6720-497c-aa3c-8c7da38e0a85]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.804 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[3839b749-9332-476b-b60f-7d197d25ce96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.809 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9d230859-7eb6-4c3b-bcfa-eef989466922]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:51 np0005629333 NetworkManager[49836]: <info>  [1772024271.8105] manager: (tapf1fe6aa3-20): new Veth device (/org/freedesktop/NetworkManager/Devices/617)
Feb 25 07:57:51 np0005629333 systemd-udevd[371582]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:57:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.834 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c69ca2e7-4130-4ca3-9f40-e37c8e01bbe6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.837 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[78ad3315-8d27-4fb3-a687-a349c3dbea92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:51 np0005629333 NetworkManager[49836]: <info>  [1772024271.8552] device (tapf1fe6aa3-20): carrier: link connected
Feb 25 07:57:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.861 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f504eb04-32e5-4d18-a561-a8b33aa3b097]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.880 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[37f325f6-3d20-4c90-b85f-90ffabc1cf6e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf1fe6aa3-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:c1:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 439], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624126, 'reachable_time': 42465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371612, 'error': None, 'target': 'ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.897 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4e58651b-e383-4afe-ae1f-3932eda1ea3e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe06:c1b5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 624126, 'tstamp': 624126}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371613, 'error': None, 'target': 'ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:57:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.918 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9671ad0f-bf8e-4d2f-8563-2f63a8a7a36d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf1fe6aa3-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:c1:b5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 439], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624126, 'reachable_time': 42465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 371614, 'error': None, 'target': 'ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:57:51 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:51.947 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[30486d14-e526-4385-ad06-9aa37fe44357]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:52.002 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[fb5a4e7e-007a-4124-9219-5adbff17d205]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:52.004 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1fe6aa3-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:52.004 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:52.004 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf1fe6aa3-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.006 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:57:52 np0005629333 NetworkManager[49836]: <info>  [1772024272.0069] manager: (tapf1fe6aa3-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/618)
Feb 25 07:57:52 np0005629333 kernel: tapf1fe6aa3-20: entered promiscuous mode
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:52.009 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf1fe6aa3-20, col_values=(('external_ids', {'iface-id': 'd5efe0f3-2e55-4d47-9b3f-ed6e541466bd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.010 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:57:52 np0005629333 ovn_controller[147040]: 2026-02-25T12:57:52Z|01477|binding|INFO|Releasing lport d5efe0f3-2e55-4d47-9b3f-ed6e541466bd from this chassis (sb_readonly=0)
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.017 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:52.018 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:52.018 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6113e6-0501-48ef-9fe2-eb8a118b19ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:52.019 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd.pid.haproxy
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 07:57:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:52.020 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd', 'env', 'PROCESS_TAG=haproxy-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.117 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024272.11736, 621d2b1a-0d06-4a98-b252-2acafee3ba02 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.118 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] VM Started (Lifecycle Event)
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.201 244018 DEBUG nova.compute.manager [req-3c081cfc-f83b-4ea7-92cc-de1757964d7a req-5488b1c3-a593-4624-96c9-f0ea65063d65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Received event network-vif-plugged-13dff95c-b96c-4657-9807-58964669e78a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.201 244018 DEBUG oslo_concurrency.lockutils [req-3c081cfc-f83b-4ea7-92cc-de1757964d7a req-5488b1c3-a593-4624-96c9-f0ea65063d65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.201 244018 DEBUG oslo_concurrency.lockutils [req-3c081cfc-f83b-4ea7-92cc-de1757964d7a req-5488b1c3-a593-4624-96c9-f0ea65063d65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.202 244018 DEBUG oslo_concurrency.lockutils [req-3c081cfc-f83b-4ea7-92cc-de1757964d7a req-5488b1c3-a593-4624-96c9-f0ea65063d65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.202 244018 DEBUG nova.compute.manager [req-3c081cfc-f83b-4ea7-92cc-de1757964d7a req-5488b1c3-a593-4624-96c9-f0ea65063d65 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Processing event network-vif-plugged-13dff95c-b96c-4657-9807-58964669e78a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.203 244018 DEBUG nova.compute.manager [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.208 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.212 244018 INFO nova.virt.libvirt.driver [-] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Instance spawned successfully.
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.212 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.284 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.289 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.464 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.465 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.466 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.467 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.467 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.468 244018 DEBUG nova.virt.libvirt.driver [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 07:57:52 np0005629333 podman[371684]: 2026-02-25 12:57:52.511722587 +0000 UTC m=+0.072082614 container create 977d987a9c1cff7ad3eeb65e4b89c2f566fad4bba349902334adf6c1d6d5e94f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 07:57:52 np0005629333 systemd[1]: Started libpod-conmon-977d987a9c1cff7ad3eeb65e4b89c2f566fad4bba349902334adf6c1d6d5e94f.scope.
Feb 25 07:57:52 np0005629333 podman[371684]: 2026-02-25 12:57:52.467803248 +0000 UTC m=+0.028163275 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:57:52 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:57:52 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f481ca962f4dc0356ecf15f122766f43015783bde0babd55e306f3282801e38/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.599 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.599 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024272.1198149, 621d2b1a-0d06-4a98-b252-2acafee3ba02 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.600 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] VM Paused (Lifecycle Event)
Feb 25 07:57:52 np0005629333 podman[371684]: 2026-02-25 12:57:52.605537884 +0000 UTC m=+0.165897931 container init 977d987a9c1cff7ad3eeb65e4b89c2f566fad4bba349902334adf6c1d6d5e94f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 25 07:57:52 np0005629333 podman[371684]: 2026-02-25 12:57:52.610539645 +0000 UTC m=+0.170899662 container start 977d987a9c1cff7ad3eeb65e4b89c2f566fad4bba349902334adf6c1d6d5e94f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 25 07:57:52 np0005629333 neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd[371699]: [NOTICE]   (371703) : New worker (371705) forked
Feb 25 07:57:52 np0005629333 neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd[371699]: [NOTICE]   (371703) : Loading success.
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.659 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.665 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024272.208287, 621d2b1a-0d06-4a98-b252-2acafee3ba02 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.665 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] VM Resumed (Lifecycle Event)
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.709 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.712 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.726 244018 INFO nova.compute.manager [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Took 8.07 seconds to spawn the instance on the hypervisor.
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.726 244018 DEBUG nova.compute.manager [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.734 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 07:57:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2319: 305 pgs: 305 active+clean; 342 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 136 op/s
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.834 244018 INFO nova.compute.manager [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Took 9.24 seconds to build instance.
Feb 25 07:57:52 np0005629333 nova_compute[244014]: 2026-02-25 12:57:52.950 244018 DEBUG oslo_concurrency.lockutils [None req-8484ff80-c3fe-4af1-8613-d2d8963baaef 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.421s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:57:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:57:53 np0005629333 ovn_controller[147040]: 2026-02-25T12:57:53Z|00186|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:89:15:55 10.100.0.8
Feb 25 07:57:53 np0005629333 ovn_controller[147040]: 2026-02-25T12:57:53Z|00187|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:89:15:55 10.100.0.8
Feb 25 07:57:54 np0005629333 nova_compute[244014]: 2026-02-25 12:57:54.374 244018 DEBUG nova.compute.manager [req-a18d4ebe-109b-467c-bfc3-1fcd25cdfaeb req-c4fa798a-440d-45fe-85ab-0dd955f5040e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Received event network-vif-plugged-13dff95c-b96c-4657-9807-58964669e78a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:57:54 np0005629333 nova_compute[244014]: 2026-02-25 12:57:54.375 244018 DEBUG oslo_concurrency.lockutils [req-a18d4ebe-109b-467c-bfc3-1fcd25cdfaeb req-c4fa798a-440d-45fe-85ab-0dd955f5040e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:57:54 np0005629333 nova_compute[244014]: 2026-02-25 12:57:54.375 244018 DEBUG oslo_concurrency.lockutils [req-a18d4ebe-109b-467c-bfc3-1fcd25cdfaeb req-c4fa798a-440d-45fe-85ab-0dd955f5040e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:57:54 np0005629333 nova_compute[244014]: 2026-02-25 12:57:54.375 244018 DEBUG oslo_concurrency.lockutils [req-a18d4ebe-109b-467c-bfc3-1fcd25cdfaeb req-c4fa798a-440d-45fe-85ab-0dd955f5040e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:57:54 np0005629333 nova_compute[244014]: 2026-02-25 12:57:54.375 244018 DEBUG nova.compute.manager [req-a18d4ebe-109b-467c-bfc3-1fcd25cdfaeb req-c4fa798a-440d-45fe-85ab-0dd955f5040e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] No waiting events found dispatching network-vif-plugged-13dff95c-b96c-4657-9807-58964669e78a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:57:54 np0005629333 nova_compute[244014]: 2026-02-25 12:57:54.376 244018 WARNING nova.compute.manager [req-a18d4ebe-109b-467c-bfc3-1fcd25cdfaeb req-c4fa798a-440d-45fe-85ab-0dd955f5040e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Received unexpected event network-vif-plugged-13dff95c-b96c-4657-9807-58964669e78a for instance with vm_state active and task_state None.
Feb 25 07:57:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2320: 305 pgs: 305 active+clean; 342 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.2 MiB/s wr, 126 op/s
Feb 25 07:57:54 np0005629333 nova_compute[244014]: 2026-02-25 12:57:54.873 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:57:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:55.034 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:57:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:55.035 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:57:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:57:55.036 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:57:55 np0005629333 nova_compute[244014]: 2026-02-25 12:57:55.552 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:57:55 np0005629333 nova_compute[244014]: 2026-02-25 12:57:55.689 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:57:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2321: 305 pgs: 305 active+clean; 342 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.2 MiB/s wr, 126 op/s
Feb 25 07:57:56 np0005629333 nova_compute[244014]: 2026-02-25 12:57:56.921 244018 DEBUG nova.compute.manager [req-b34a392c-a402-4519-bb84-2181ba710be6 req-4cc2cf16-1a64-4b26-b6bd-ae9270c42880 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Received event network-changed-13dff95c-b96c-4657-9807-58964669e78a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:57:56 np0005629333 nova_compute[244014]: 2026-02-25 12:57:56.922 244018 DEBUG nova.compute.manager [req-b34a392c-a402-4519-bb84-2181ba710be6 req-4cc2cf16-1a64-4b26-b6bd-ae9270c42880 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Refreshing instance network info cache due to event network-changed-13dff95c-b96c-4657-9807-58964669e78a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:57:56 np0005629333 nova_compute[244014]: 2026-02-25 12:57:56.923 244018 DEBUG oslo_concurrency.lockutils [req-b34a392c-a402-4519-bb84-2181ba710be6 req-4cc2cf16-1a64-4b26-b6bd-ae9270c42880 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-621d2b1a-0d06-4a98-b252-2acafee3ba02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:57:56 np0005629333 nova_compute[244014]: 2026-02-25 12:57:56.923 244018 DEBUG oslo_concurrency.lockutils [req-b34a392c-a402-4519-bb84-2181ba710be6 req-4cc2cf16-1a64-4b26-b6bd-ae9270c42880 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-621d2b1a-0d06-4a98-b252-2acafee3ba02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:57:56 np0005629333 nova_compute[244014]: 2026-02-25 12:57:56.923 244018 DEBUG nova.network.neutron [req-b34a392c-a402-4519-bb84-2181ba710be6 req-4cc2cf16-1a64-4b26-b6bd-ae9270c42880 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Refreshing network info cache for port 13dff95c-b96c-4657-9807-58964669e78a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:57:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:57:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2322: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 228 op/s
Feb 25 07:58:00 np0005629333 nova_compute[244014]: 2026-02-25 12:58:00.555 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:58:00 np0005629333 nova_compute[244014]: 2026-02-25 12:58:00.691 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:58:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2323: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Feb 25 07:58:01 np0005629333 nova_compute[244014]: 2026-02-25 12:58:01.498 244018 INFO nova.compute.manager [None req-57e40573-7566-492d-be2f-04f4c988e21f 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Get console output
Feb 25 07:58:01 np0005629333 nova_compute[244014]: 2026-02-25 12:58:01.507 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 07:58:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:58:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:58:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:58:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:58:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:58:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:58:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2324: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Feb 25 07:58:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:58:04 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:04Z|00188|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:42:9b:25 10.100.0.8
Feb 25 07:58:04 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:04Z|00189|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:42:9b:25 10.100.0.8
Feb 25 07:58:04 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:04Z|01478|binding|INFO|Releasing lport f9b1a684-77d3-4856-bba2-cff43e072272 from this chassis (sb_readonly=0)
Feb 25 07:58:04 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:04Z|01479|binding|INFO|Releasing lport 93d7aa8d-1845-4103-8fd2-aed7a8c4298a from this chassis (sb_readonly=0)
Feb 25 07:58:04 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:04Z|01480|binding|INFO|Releasing lport d5efe0f3-2e55-4d47-9b3f-ed6e541466bd from this chassis (sb_readonly=0)
Feb 25 07:58:04 np0005629333 nova_compute[244014]: 2026-02-25 12:58:04.393 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:58:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2325: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 705 KiB/s wr, 103 op/s
Feb 25 07:58:05 np0005629333 nova_compute[244014]: 2026-02-25 12:58:05.184 244018 DEBUG nova.network.neutron [req-b34a392c-a402-4519-bb84-2181ba710be6 req-4cc2cf16-1a64-4b26-b6bd-ae9270c42880 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Updated VIF entry in instance network info cache for port 13dff95c-b96c-4657-9807-58964669e78a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 07:58:05 np0005629333 nova_compute[244014]: 2026-02-25 12:58:05.185 244018 DEBUG nova.network.neutron [req-b34a392c-a402-4519-bb84-2181ba710be6 req-4cc2cf16-1a64-4b26-b6bd-ae9270c42880 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Updating instance_info_cache with network_info: [{"id": "13dff95c-b96c-4657-9807-58964669e78a", "address": "fa:16:3e:42:9b:25", "network": {"id": "f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd", "bridge": "br-int", "label": "tempest-network-smoke--679537891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "089b8f0ee9684ec69cbdc13d24262170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13dff95c-b9", "ovs_interfaceid": "13dff95c-b96c-4657-9807-58964669e78a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:58:05 np0005629333 nova_compute[244014]: 2026-02-25 12:58:05.213 244018 DEBUG oslo_concurrency.lockutils [req-b34a392c-a402-4519-bb84-2181ba710be6 req-4cc2cf16-1a64-4b26-b6bd-ae9270c42880 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-621d2b1a-0d06-4a98-b252-2acafee3ba02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:58:05 np0005629333 nova_compute[244014]: 2026-02-25 12:58:05.557 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:58:05 np0005629333 nova_compute[244014]: 2026-02-25 12:58:05.651 244018 INFO nova.compute.manager [None req-135e1180-0535-491c-8f05-fdda71f2b806 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Get console output
Feb 25 07:58:05 np0005629333 nova_compute[244014]: 2026-02-25 12:58:05.658 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 07:58:05 np0005629333 nova_compute[244014]: 2026-02-25 12:58:05.694 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:58:06 np0005629333 nova_compute[244014]: 2026-02-25 12:58:06.639 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:58:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2326: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 705 KiB/s wr, 103 op/s
Feb 25 07:58:07 np0005629333 nova_compute[244014]: 2026-02-25 12:58:07.153 244018 INFO nova.compute.manager [None req-85fd1b40-bcc6-4c4d-bd13-3e6ba5d5e427 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Get console output
Feb 25 07:58:07 np0005629333 nova_compute[244014]: 2026-02-25 12:58:07.159 291526 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.559 244018 DEBUG nova.compute.manager [req-2af3b851-1762-4c0e-bf6a-b7af83af8679 req-f72f69d5-d270-4615-8749-bacff32b53b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Received event network-changed-dcb845b3-f7fb-449b-b027-65efdcdcf6ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.561 244018 DEBUG nova.compute.manager [req-2af3b851-1762-4c0e-bf6a-b7af83af8679 req-f72f69d5-d270-4615-8749-bacff32b53b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Refreshing instance network info cache due to event network-changed-dcb845b3-f7fb-449b-b027-65efdcdcf6ec. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.561 244018 DEBUG oslo_concurrency.lockutils [req-2af3b851-1762-4c0e-bf6a-b7af83af8679 req-f72f69d5-d270-4615-8749-bacff32b53b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-0389cfa0-085b-4e4e-8d61-95d0b91c413e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.562 244018 DEBUG oslo_concurrency.lockutils [req-2af3b851-1762-4c0e-bf6a-b7af83af8679 req-f72f69d5-d270-4615-8749-bacff32b53b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-0389cfa0-085b-4e4e-8d61-95d0b91c413e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.562 244018 DEBUG nova.network.neutron [req-2af3b851-1762-4c0e-bf6a-b7af83af8679 req-f72f69d5-d270-4615-8749-bacff32b53b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Refreshing network info cache for port dcb845b3-f7fb-449b-b027-65efdcdcf6ec _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.603 244018 DEBUG oslo_concurrency.lockutils [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.604 244018 DEBUG oslo_concurrency.lockutils [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.605 244018 DEBUG oslo_concurrency.lockutils [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.605 244018 DEBUG oslo_concurrency.lockutils [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.606 244018 DEBUG oslo_concurrency.lockutils [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.607 244018 INFO nova.compute.manager [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Terminating instance
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.608 244018 DEBUG nova.compute.manager [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 07:58:08 np0005629333 kernel: tapdcb845b3-f7 (unregistering): left promiscuous mode
Feb 25 07:58:08 np0005629333 NetworkManager[49836]: <info>  [1772024288.6748] device (tapdcb845b3-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.682 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:58:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:08Z|01481|binding|INFO|Releasing lport dcb845b3-f7fb-449b-b027-65efdcdcf6ec from this chassis (sb_readonly=0)
Feb 25 07:58:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:08Z|01482|binding|INFO|Setting lport dcb845b3-f7fb-449b-b027-65efdcdcf6ec down in Southbound
Feb 25 07:58:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:08Z|01483|binding|INFO|Removing iface tapdcb845b3-f7 ovn-installed in OVS
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.691 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:58:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:08.694 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:15:55 10.100.0.8'], port_security=['fa:16:3e:89:15:55 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0389cfa0-085b-4e4e-8d61-95d0b91c413e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c261236-9e75-404c-ae2b-04691f3dc670', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bd6f4c79-0604-475b-8d7d-e893b4f0a2c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e13b5f3-1e2c-45d6-a1db-56c3aecd9d1e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=dcb845b3-f7fb-449b-b027-65efdcdcf6ec) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:58:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:08.697 157129 INFO neutron.agent.ovn.metadata.agent [-] Port dcb845b3-f7fb-449b-b027-65efdcdcf6ec in datapath 6c261236-9e75-404c-ae2b-04691f3dc670 unbound from our chassis
Feb 25 07:58:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:08.702 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c261236-9e75-404c-ae2b-04691f3dc670, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 07:58:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:08.704 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bb901bd1-a520-4d95-a849-36e52f2ea172]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:58:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:08.704 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670 namespace which is not needed anymore
Feb 25 07:58:08 np0005629333 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Feb 25 07:58:08 np0005629333 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008c.scope: Consumed 12.167s CPU time.
Feb 25 07:58:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:58:08 np0005629333 systemd-machined[210048]: Machine qemu-172-instance-0000008c terminated.
Feb 25 07:58:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2327: 305 pgs: 305 active+clean; 391 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.8 MiB/s wr, 165 op/s
Feb 25 07:58:08 np0005629333 kernel: tapdcb845b3-f7: entered promiscuous mode
Feb 25 07:58:08 np0005629333 systemd-udevd[371744]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:58:08 np0005629333 NetworkManager[49836]: <info>  [1772024288.8341] manager: (tapdcb845b3-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/619)
Feb 25 07:58:08 np0005629333 kernel: tapdcb845b3-f7 (unregistering): left promiscuous mode
Feb 25 07:58:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:08Z|01484|binding|INFO|Claiming lport dcb845b3-f7fb-449b-b027-65efdcdcf6ec for this chassis.
Feb 25 07:58:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:08Z|01485|binding|INFO|dcb845b3-f7fb-449b-b027-65efdcdcf6ec: Claiming fa:16:3e:89:15:55 10.100.0.8
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.836 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:08.846 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:15:55 10.100.0.8'], port_security=['fa:16:3e:89:15:55 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0389cfa0-085b-4e4e-8d61-95d0b91c413e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c261236-9e75-404c-ae2b-04691f3dc670', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bd6f4c79-0604-475b-8d7d-e893b4f0a2c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e13b5f3-1e2c-45d6-a1db-56c3aecd9d1e, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=dcb845b3-f7fb-449b-b027-65efdcdcf6ec) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:58:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:08Z|01486|binding|INFO|Setting lport dcb845b3-f7fb-449b-b027-65efdcdcf6ec ovn-installed in OVS
Feb 25 07:58:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:08Z|01487|binding|INFO|Setting lport dcb845b3-f7fb-449b-b027-65efdcdcf6ec up in Southbound
Feb 25 07:58:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:08Z|01488|binding|INFO|Releasing lport dcb845b3-f7fb-449b-b027-65efdcdcf6ec from this chassis (sb_readonly=1)
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.859 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:08Z|01489|if_status|INFO|Dropped 9 log messages in last 453 seconds (most recently, 443 seconds ago) due to excessive rate
Feb 25 07:58:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:08Z|01490|if_status|INFO|Not setting lport dcb845b3-f7fb-449b-b027-65efdcdcf6ec down as sb is readonly
Feb 25 07:58:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:08Z|01491|binding|INFO|Removing iface tapdcb845b3-f7 ovn-installed in OVS
Feb 25 07:58:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:08Z|01492|binding|INFO|Releasing lport dcb845b3-f7fb-449b-b027-65efdcdcf6ec from this chassis (sb_readonly=0)
Feb 25 07:58:08 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:08Z|01493|binding|INFO|Setting lport dcb845b3-f7fb-449b-b027-65efdcdcf6ec down in Southbound
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.865 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.866 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.868 244018 INFO nova.virt.libvirt.driver [-] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Instance destroyed successfully.#033[00m
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.868 244018 DEBUG nova.objects.instance [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lazy-loading 'resources' on Instance uuid 0389cfa0-085b-4e4e-8d61-95d0b91c413e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:58:08 np0005629333 neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670[371178]: [NOTICE]   (371198) : haproxy version is 2.8.14-c23fe91
Feb 25 07:58:08 np0005629333 neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670[371178]: [NOTICE]   (371198) : path to executable is /usr/sbin/haproxy
Feb 25 07:58:08 np0005629333 neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670[371178]: [WARNING]  (371198) : Exiting Master process...
Feb 25 07:58:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:08.872 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:15:55 10.100.0.8'], port_security=['fa:16:3e:89:15:55 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0389cfa0-085b-4e4e-8d61-95d0b91c413e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c261236-9e75-404c-ae2b-04691f3dc670', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e227b91c24404ab5aed600e2fe792d32', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bd6f4c79-0604-475b-8d7d-e893b4f0a2c8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e13b5f3-1e2c-45d6-a1db-56c3aecd9d1e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=dcb845b3-f7fb-449b-b027-65efdcdcf6ec) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:58:08 np0005629333 neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670[371178]: [ALERT]    (371198) : Current worker (371201) exited with code 143 (Terminated)
Feb 25 07:58:08 np0005629333 neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670[371178]: [WARNING]  (371198) : All workers exited. Exiting... (0)
Feb 25 07:58:08 np0005629333 systemd[1]: libpod-3046b04d14faec9fa5503201588fbc566d131cd1b7d3ee6b39d335ecb002632c.scope: Deactivated successfully.
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 25 07:58:08 np0005629333 podman[371790]: 2026-02-25 12:58:08.88138246 +0000 UTC m=+0.073351590 container died 3046b04d14faec9fa5503201588fbc566d131cd1b7d3ee6b39d335ecb002632c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.889 244018 DEBUG nova.virt.libvirt.vif [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:57:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1742784697',display_name='tempest-TestNetworkBasicOps-server-1742784697',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1742784697',id=140,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAZFg6svZGG2HhSYQn7nNKCzfycyIXr66T8mUvP1MQ920eEEyjRb+o64QsZLqXfBMNPIzdFc3Q2mjbznplS/flTAudp3OXevaI0LCPFmYdclt9P1cih6MnuEw2nz2PTUtw==',key_name='tempest-TestNetworkBasicOps-571395165',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:57:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e227b91c24404ab5aed600e2fe792d32',ramdisk_id='',reservation_id='r-ydu0he2w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-80594480',owner_user_name='tempest-TestNetworkBasicOps-80594480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:57:42Z,user_data=None,user_id='31d013eaf26a447394d93c83ab8def60',uuid=0389cfa0-085b-4e4e-8d61-95d0b91c413e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "address": "fa:16:3e:89:15:55", "network": {"id": "6c261236-9e75-404c-ae2b-04691f3dc670", "bridge": "br-int", "label": "tempest-network-smoke--1012459848", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb845b3-f7", "ovs_interfaceid": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.889 244018 DEBUG nova.network.os_vif_util [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converting VIF {"id": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "address": "fa:16:3e:89:15:55", "network": {"id": "6c261236-9e75-404c-ae2b-04691f3dc670", "bridge": "br-int", "label": "tempest-network-smoke--1012459848", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb845b3-f7", "ovs_interfaceid": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.890 244018 DEBUG nova.network.os_vif_util [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:89:15:55,bridge_name='br-int',has_traffic_filtering=True,id=dcb845b3-f7fb-449b-b027-65efdcdcf6ec,network=Network(6c261236-9e75-404c-ae2b-04691f3dc670),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcb845b3-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.891 244018 DEBUG os_vif [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:15:55,bridge_name='br-int',has_traffic_filtering=True,id=dcb845b3-f7fb-449b-b027-65efdcdcf6ec,network=Network(6c261236-9e75-404c-ae2b-04691f3dc670),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcb845b3-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.894 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.894 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdcb845b3-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
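DelPortCommand is ovsdbapp's transactional counterpart of ovs-vsctl del-port; os-vif queues one into a transaction against br-int and commits it, which is what finally removes the tap device from the integration bridge. A rough standalone equivalent, assuming a local ovsdb-server socket (the path below is an assumption):

    # Rough equivalent of the logged DelPortCommand transaction.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'   # assumed socket path
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # if_exists=True mirrors the logged command: deleting an
    # already-removed port becomes a no-op instead of an error.
    api.del_port('tapdcb845b3-f7', bridge='br-int',
                 if_exists=True).execute(check_error=True)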
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.896 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.897 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.898 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.901 244018 INFO os_vif [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:15:55,bridge_name='br-int',has_traffic_filtering=True,id=dcb845b3-f7fb-449b-b027-65efdcdcf6ec,network=Network(6c261236-9e75-404c-ae2b-04691f3dc670),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdcb845b3-f7')#033[00m
Feb 25 07:58:08 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3046b04d14faec9fa5503201588fbc566d131cd1b7d3ee6b39d335ecb002632c-userdata-shm.mount: Deactivated successfully.
Feb 25 07:58:08 np0005629333 systemd[1]: var-lib-containers-storage-overlay-57e765a2b026bd301f1d6a49133c1f0c94503a7d34a089d5f615c08ee5f0bc83-merged.mount: Deactivated successfully.
Feb 25 07:58:08 np0005629333 podman[371790]: 2026-02-25 12:58:08.92500148 +0000 UTC m=+0.116970600 container cleanup 3046b04d14faec9fa5503201588fbc566d131cd1b7d3ee6b39d335ecb002632c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 07:58:08 np0005629333 systemd[1]: libpod-conmon-3046b04d14faec9fa5503201588fbc566d131cd1b7d3ee6b39d335ecb002632c.scope: Deactivated successfully.
Feb 25 07:58:08 np0005629333 podman[371838]: 2026-02-25 12:58:08.989309134 +0000 UTC m=+0.044501266 container remove 3046b04d14faec9fa5503201588fbc566d131cd1b7d3ee6b39d335ecb002632c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 07:58:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:08.993 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e01ec0bb-f73c-4994-b9f0-101480cd451d]: (4, ('Wed Feb 25 12:58:08 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670 (3046b04d14faec9fa5503201588fbc566d131cd1b7d3ee6b39d335ecb002632c)\n3046b04d14faec9fa5503201588fbc566d131cd1b7d3ee6b39d335ecb002632c\nWed Feb 25 12:58:08 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670 (3046b04d14faec9fa5503201588fbc566d131cd1b7d3ee6b39d335ecb002632c)\n3046b04d14faec9fa5503201588fbc566d131cd1b7d3ee6b39d335ecb002632c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:58:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:08.995 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4a39c086-fe2a-4529-b153-6e2b1a403bad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:58:08 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:08.996 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c261236-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:58:08 np0005629333 nova_compute[244014]: 2026-02-25 12:58:08.998 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:08 np0005629333 kernel: tap6c261236-90: left promiscuous mode
Feb 25 07:58:09 np0005629333 nova_compute[244014]: 2026-02-25 12:58:09.001 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:09.005 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2099a9f7-fcb4-46e3-9f56-ec7eab0efb18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:58:09 np0005629333 nova_compute[244014]: 2026-02-25 12:58:09.010 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:09.017 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[51e4f6f6-f1a2-4441-a45e-0abbc77bbcb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:58:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:09.019 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[49948001-0d20-4f6e-9523-2c3f85cdd22f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:58:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:09.036 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4feffb9d-edf4-4a16-a0d9-bfda378b1937]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622785, 'reachable_time': 22640, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371871, 'error': None, 'target': 'ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:58:09 np0005629333 systemd[1]: run-netns-ovnmeta\x2d6c261236\x2d9e75\x2d404c\x2dae2b\x2d04691f3dc670.mount: Deactivated successfully.
Feb 25 07:58:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:09.041 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
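neutron's privileged remove_netns is a thin wrapper over pyroute2, executed in the privsep daemon because unlinking a namespace needs CAP_SYS_ADMIN. A minimal sketch of the same teardown (namespace name copied from the log; the ENOENT handling mirrors the usual "already gone" tolerance):

    # Remove a network namespace the way the privileged helper does.
    import errno

    from pyroute2 import netns

    try:
        netns.remove('ovnmeta-6c261236-9e75-404c-ae2b-04691f3dc670')
    except OSError as e:
        if e.errno != errno.ENOENT:   # already deleted is fine
            raise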
Feb 25 07:58:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:09.041 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[34860cb0-7a2a-484d-8313-cdcab1d0e9d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:58:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:09.041 157129 INFO neutron.agent.ovn.metadata.agent [-] Port dcb845b3-f7fb-449b-b027-65efdcdcf6ec in datapath 6c261236-9e75-404c-ae2b-04691f3dc670 unbound from our chassis#033[00m
Feb 25 07:58:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:09.042 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c261236-9e75-404c-ae2b-04691f3dc670, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:58:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:09.043 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2b5d2dfa-8172-416f-ad73-47a181407889]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:58:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:09.044 157129 INFO neutron.agent.ovn.metadata.agent [-] Port dcb845b3-f7fb-449b-b027-65efdcdcf6ec in datapath 6c261236-9e75-404c-ae2b-04691f3dc670 unbound from our chassis#033[00m
Feb 25 07:58:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:09.045 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c261236-9e75-404c-ae2b-04691f3dc670, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:58:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:09.045 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8197ec67-fd18-4dae-bd32-cf304fbe0683]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:58:09 np0005629333 nova_compute[244014]: 2026-02-25 12:58:09.165 244018 INFO nova.virt.libvirt.driver [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Deleting instance files /var/lib/nova/instances/0389cfa0-085b-4e4e-8d61-95d0b91c413e_del#033[00m
Feb 25 07:58:09 np0005629333 nova_compute[244014]: 2026-02-25 12:58:09.166 244018 INFO nova.virt.libvirt.driver [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Deletion of /var/lib/nova/instances/0389cfa0-085b-4e4e-8d61-95d0b91c413e_del complete#033[00m
Feb 25 07:58:09 np0005629333 nova_compute[244014]: 2026-02-25 12:58:09.214 244018 INFO nova.compute.manager [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:58:09 np0005629333 nova_compute[244014]: 2026-02-25 12:58:09.216 244018 DEBUG oslo.service.loopingcall [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:58:09 np0005629333 nova_compute[244014]: 2026-02-25 12:58:09.217 244018 DEBUG nova.compute.manager [-] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:58:09 np0005629333 nova_compute[244014]: 2026-02-25 12:58:09.217 244018 DEBUG nova.network.neutron [-] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
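The "Waiting for function ... to return" line two entries up is oslo.service's looping-call machinery: nova wraps network deallocation in a function that is invoked repeatedly until it signals completion by raising LoopingCallDone. A generic sketch of that pattern (the retried work here is a stand-in, not nova's _deallocate_network_with_retries):

    # Generic oslo.service looping-call retry pattern.
    from oslo_service import loopingcall

    attempts = {'n': 0}

    def _work():
        attempts['n'] += 1
        if attempts['n'] < 3:        # pretend the first tries fail
            return                   # returning keeps the loop going
        raise loopingcall.LoopingCallDone(retvalue=True)

    timer = loopingcall.FixedIntervalLoopingCall(_work)
    result = timer.start(interval=1.0).wait()   # blocks until Done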
Feb 25 07:58:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:58:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:58:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:58:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:58:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:58:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:58:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:58:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:58:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:58:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:58:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:58:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:58:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:58:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:58:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
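Each handle_command/dispatch pair above is the mon servicing a JSON-encoded mon_command sent by the cephadm mgr module. The same commands can be issued from Python through librados; a sketch, assuming a reachable cluster and a usable keyring:

    # Issue the "config generate-minimal-conf" mon command seen above.
    import json

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed path
    cluster.connect()
    try:
        ret, out, errs = cluster.mon_command(
            json.dumps({'prefix': 'config generate-minimal-conf'}), b'')
        print(out.decode())
    finally:
        cluster.shutdown()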
Feb 25 07:58:09 np0005629333 podman[371952]: 2026-02-25 12:58:09.752831621 +0000 UTC m=+0.082858967 container create 57f794b9fc21a9bbca87cf27b8a3dd7fc8512b478ebe84c19a517ccdab6066ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:58:09 np0005629333 podman[371952]: 2026-02-25 12:58:09.704340183 +0000 UTC m=+0.034367569 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:58:09 np0005629333 systemd[1]: Started libpod-conmon-57f794b9fc21a9bbca87cf27b8a3dd7fc8512b478ebe84c19a517ccdab6066ce.scope.
Feb 25 07:58:09 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:58:09 np0005629333 podman[371952]: 2026-02-25 12:58:09.88464792 +0000 UTC m=+0.214675286 container init 57f794b9fc21a9bbca87cf27b8a3dd7fc8512b478ebe84c19a517ccdab6066ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_chaplygin, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:58:09 np0005629333 podman[371952]: 2026-02-25 12:58:09.894631471 +0000 UTC m=+0.224658817 container start 57f794b9fc21a9bbca87cf27b8a3dd7fc8512b478ebe84c19a517ccdab6066ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_chaplygin, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:58:09 np0005629333 podman[371952]: 2026-02-25 12:58:09.898315385 +0000 UTC m=+0.228342741 container attach 57f794b9fc21a9bbca87cf27b8a3dd7fc8512b478ebe84c19a517ccdab6066ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_chaplygin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:58:09 np0005629333 zen_chaplygin[371968]: 167 167
Feb 25 07:58:09 np0005629333 systemd[1]: libpod-57f794b9fc21a9bbca87cf27b8a3dd7fc8512b478ebe84c19a517ccdab6066ce.scope: Deactivated successfully.
Feb 25 07:58:09 np0005629333 conmon[371968]: conmon 57f794b9fc21a9bbca87 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-57f794b9fc21a9bbca87cf27b8a3dd7fc8512b478ebe84c19a517ccdab6066ce.scope/container/memory.events
Feb 25 07:58:09 np0005629333 podman[371952]: 2026-02-25 12:58:09.903235734 +0000 UTC m=+0.233263110 container died 57f794b9fc21a9bbca87cf27b8a3dd7fc8512b478ebe84c19a517ccdab6066ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:58:09 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6db8b6cffbde6cf203ee21fb63f37ee113920fd5572a3489c13a7271172f4e07-merged.mount: Deactivated successfully.
Feb 25 07:58:09 np0005629333 podman[371952]: 2026-02-25 12:58:09.948798799 +0000 UTC m=+0.278826175 container remove 57f794b9fc21a9bbca87cf27b8a3dd7fc8512b478ebe84c19a517ccdab6066ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_chaplygin, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 07:58:09 np0005629333 systemd[1]: libpod-conmon-57f794b9fc21a9bbca87cf27b8a3dd7fc8512b478ebe84c19a517ccdab6066ce.scope: Deactivated successfully.
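zen_chaplygin (and awesome_gould below) are cephadm's throwaway ceph containers: create, start, attach, died and remove all land within about a second, the normal pattern for one-shot probes such as ceph-volume inventory. A sketch of the same one-shot pattern via the podman CLI (image digest taken from the log; the command run inside is illustrative):

    # One-shot container run matching the create/died/remove sequence.
    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86')

    # --rm gives the same create -> died -> remove lifecycle as above.
    res = subprocess.run(
        ['podman', 'run', '--rm', IMAGE, 'ceph', '--version'],
        capture_output=True, text=True, check=True)
    print(res.stdout.strip())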
Feb 25 07:58:10 np0005629333 podman[371994]: 2026-02-25 12:58:10.148085041 +0000 UTC m=+0.051475533 container create 584217a6d09471f67af8e6c8fea99d1333745b5f9b74b953efc8ea675fc7d31b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_gould, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:58:10 np0005629333 systemd[1]: Started libpod-conmon-584217a6d09471f67af8e6c8fea99d1333745b5f9b74b953efc8ea675fc7d31b.scope.
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.208 244018 DEBUG nova.network.neutron [-] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:58:10 np0005629333 podman[371994]: 2026-02-25 12:58:10.124402123 +0000 UTC m=+0.027792695 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:58:10 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:58:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6101abcbbaf754af054152e4d3adb3ab30b91982b8a5e948e7b23ade0e08990/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.235 244018 INFO nova.compute.manager [-] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Took 1.02 seconds to deallocate network for instance.#033[00m
Feb 25 07:58:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6101abcbbaf754af054152e4d3adb3ab30b91982b8a5e948e7b23ade0e08990/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:58:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6101abcbbaf754af054152e4d3adb3ab30b91982b8a5e948e7b23ade0e08990/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:58:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6101abcbbaf754af054152e4d3adb3ab30b91982b8a5e948e7b23ade0e08990/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:58:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6101abcbbaf754af054152e4d3adb3ab30b91982b8a5e948e7b23ade0e08990/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.286 244018 DEBUG oslo_concurrency.lockutils [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.287 244018 DEBUG oslo_concurrency.lockutils [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
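The acquire/release pair around "compute_resources" comes from oslo.concurrency's lockutils, which nova uses to serialize resource-tracker updates against concurrent deletes and periodic tasks. The basic pattern (lock name reused from the log; the function is a stand-in):

    # oslo.concurrency locking behind the acquire/release log lines.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_usage():
        pass   # critical section: mutate tracked resources

    # Equivalent context-manager form:
    with lockutils.lock('compute_resources'):
        pass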
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.361 244018 DEBUG oslo_concurrency.processutils [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
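That subprocess is how the libvirt driver sizes its RBD backend: it shells out to ceph df with the openstack cephx id and parses the JSON. A sketch of the same probe via oslo.concurrency (flags copied from the log; the JSON key read at the end follows the usual ceph df layout, worth verifying against your ceph release):

    # Re-run nova's "ceph df" probe and read back the cluster stats.
    import json

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_avail_bytes'])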
Feb 25 07:58:10 np0005629333 podman[371994]: 2026-02-25 12:58:10.385513758 +0000 UTC m=+0.288904280 container init 584217a6d09471f67af8e6c8fea99d1333745b5f9b74b953efc8ea675fc7d31b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_gould, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:58:10 np0005629333 podman[371994]: 2026-02-25 12:58:10.397066194 +0000 UTC m=+0.300456676 container start 584217a6d09471f67af8e6c8fea99d1333745b5f9b74b953efc8ea675fc7d31b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_gould, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.399 244018 DEBUG nova.network.neutron [req-2af3b851-1762-4c0e-bf6a-b7af83af8679 req-f72f69d5-d270-4615-8749-bacff32b53b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Updated VIF entry in instance network info cache for port dcb845b3-f7fb-449b-b027-65efdcdcf6ec. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.400 244018 DEBUG nova.network.neutron [req-2af3b851-1762-4c0e-bf6a-b7af83af8679 req-f72f69d5-d270-4615-8749-bacff32b53b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Updating instance_info_cache with network_info: [{"id": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "address": "fa:16:3e:89:15:55", "network": {"id": "6c261236-9e75-404c-ae2b-04691f3dc670", "bridge": "br-int", "label": "tempest-network-smoke--1012459848", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e227b91c24404ab5aed600e2fe792d32", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdcb845b3-f7", "ovs_interfaceid": "dcb845b3-f7fb-449b-b027-65efdcdcf6ec", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:58:10 np0005629333 podman[371994]: 2026-02-25 12:58:10.403930578 +0000 UTC m=+0.307321060 container attach 584217a6d09471f67af8e6c8fea99d1333745b5f9b74b953efc8ea675fc7d31b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_gould, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.421 244018 DEBUG oslo_concurrency.lockutils [req-2af3b851-1762-4c0e-bf6a-b7af83af8679 req-f72f69d5-d270-4615-8749-bacff32b53b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-0389cfa0-085b-4e4e-8d61-95d0b91c413e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.559 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.637 244018 DEBUG nova.compute.manager [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Received event network-vif-unplugged-dcb845b3-f7fb-449b-b027-65efdcdcf6ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.638 244018 DEBUG oslo_concurrency.lockutils [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.639 244018 DEBUG oslo_concurrency.lockutils [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.640 244018 DEBUG oslo_concurrency.lockutils [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.640 244018 DEBUG nova.compute.manager [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] No waiting events found dispatching network-vif-unplugged-dcb845b3-f7fb-449b-b027-65efdcdcf6ec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.641 244018 WARNING nova.compute.manager [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Received unexpected event network-vif-unplugged-dcb845b3-f7fb-449b-b027-65efdcdcf6ec for instance with vm_state deleted and task_state None.
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.641 244018 DEBUG nova.compute.manager [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Received event network-vif-plugged-dcb845b3-f7fb-449b-b027-65efdcdcf6ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.642 244018 DEBUG oslo_concurrency.lockutils [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.643 244018 DEBUG oslo_concurrency.lockutils [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.643 244018 DEBUG oslo_concurrency.lockutils [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.644 244018 DEBUG nova.compute.manager [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] No waiting events found dispatching network-vif-plugged-dcb845b3-f7fb-449b-b027-65efdcdcf6ec pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.644 244018 WARNING nova.compute.manager [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Received unexpected event network-vif-plugged-dcb845b3-f7fb-449b-b027-65efdcdcf6ec for instance with vm_state deleted and task_state None.
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.645 244018 DEBUG nova.compute.manager [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Received event network-vif-deleted-dcb845b3-f7fb-449b-b027-65efdcdcf6ec external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.646 244018 INFO nova.compute.manager [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Neutron deleted interface dcb845b3-f7fb-449b-b027-65efdcdcf6ec; detaching it from the instance and deleting it from the info cache
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.646 244018 DEBUG nova.network.neutron [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.669 244018 DEBUG nova.compute.manager [req-e5323a77-adb2-4aa5-99e6-20ae166efa7a req-b4367807-d1db-466e-9086-79a76f860df6 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Detach interface failed, port_id=dcb845b3-f7fb-449b-b027-65efdcdcf6ec, reason: Instance 0389cfa0-085b-4e4e-8d61-95d0b91c413e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
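The nova_compute lines above show Nova draining its per-instance external-event queue under an oslo.concurrency lock: each network-vif-* notification from Neutron pops the event list for the instance while holding the "<uuid>-events" lock, and because the instance's vm_state is already deleted, the unplugged/plugged events are logged as unexpected and discarded. A minimal sketch of that synchronized-pop pattern, assuming oslo.concurrency is installed; the events dict and function bodies are illustrative, not Nova's actual code:

    from oslo_concurrency import lockutils

    events = {}  # (instance_uuid, event_name) -> payload; illustrative only

    def pop_instance_event(instance_uuid, event_name):
        # lockutils.synchronized wraps the local function and, at DEBUG
        # level, emits the 'Lock "<uuid>-events" acquired/"released" by
        # "..._pop_event"' lines seen in the log above.
        @lockutils.synchronized(f"{instance_uuid}-events")
        def _pop_event():
            return events.pop((instance_uuid, event_name), None)
        return _pop_event()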
Feb 25 07:58:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2328: 305 pgs: 305 active+clean; 391 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 339 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 07:58:10 np0005629333 awesome_gould[372010]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:58:10 np0005629333 awesome_gould[372010]: --> All data devices are unavailable
Feb 25 07:58:10 np0005629333 systemd[1]: libpod-584217a6d09471f67af8e6c8fea99d1333745b5f9b74b953efc8ea675fc7d31b.scope: Deactivated successfully.
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.878 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
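These two lines are oslo.service's periodic-task runner announcing and then executing ComputeManager._cleanup_incomplete_migrations. A minimal sketch of that decorator-driven pattern, assuming oslo.service and oslo.config are available; the manager class, spacing, and task body are illustrative:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=600)
        def _cleanup_incomplete_migrations(self, context):
            # run_periodic_tasks() logs "Running periodic task <name>"
            # before invoking each due task, which is the first line above.
            pass

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)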
Feb 25 07:58:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:58:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2894236462' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.906 244018 DEBUG oslo_concurrency.processutils [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:58:10 np0005629333 podman[372050]: 2026-02-25 12:58:10.907349888 +0000 UTC m=+0.034562726 container died 584217a6d09471f67af8e6c8fea99d1333745b5f9b74b953efc8ea675fc7d31b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.914 244018 DEBUG nova.compute.provider_tree [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:58:10 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d6101abcbbaf754af054152e4d3adb3ab30b91982b8a5e948e7b23ade0e08990-merged.mount: Deactivated successfully.
Feb 25 07:58:10 np0005629333 podman[372050]: 2026-02-25 12:58:10.945518205 +0000 UTC m=+0.072730983 container remove 584217a6d09471f67af8e6c8fea99d1333745b5f9b74b953efc8ea675fc7d31b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.949 244018 DEBUG nova.scheduler.client.report [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
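The inventory dict in the line above is what the resource tracker reports to Placement, which treats (total - reserved) * allocation_ratio as the schedulable capacity of each resource class. Applying that formula to the logged values (the helper function is ours, for illustration only):

    # Inventory values copied from the log line above (scheduling-relevant
    # keys only).
    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
    }

    def capacity(inv):
        # Effective schedulable capacity per resource class.
        return {rc: (v['total'] - v['reserved']) * v['allocation_ratio']
                for rc, v in inv.items()}

    print(capacity(inventory))
    # {'VCPU': 32.0, 'MEMORY_MB': 7167.0, 'DISK_GB': 52.2}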
Feb 25 07:58:10 np0005629333 systemd[1]: libpod-conmon-584217a6d09471f67af8e6c8fea99d1333745b5f9b74b953efc8ea675fc7d31b.scope: Deactivated successfully.
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.969 244018 DEBUG oslo_concurrency.lockutils [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:58:10 np0005629333 nova_compute[244014]: 2026-02-25 12:58:10.995 244018 INFO nova.scheduler.client.report [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Deleted allocations for instance 0389cfa0-085b-4e4e-8d61-95d0b91c413e
Feb 25 07:58:11 np0005629333 nova_compute[244014]: 2026-02-25 12:58:11.089 244018 DEBUG oslo_concurrency.lockutils [None req-16279967-f788-49c6-bd1e-12e81137d637 31d013eaf26a447394d93c83ab8def60 e227b91c24404ab5aed600e2fe792d32 - - default default] Lock "0389cfa0-085b-4e4e-8d61-95d0b91c413e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.484s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:58:11 np0005629333 podman[372132]: 2026-02-25 12:58:11.437815262 +0000 UTC m=+0.051907975 container create d3b162375f43a4faeaa77b29f5046123a146eda3dd808569da7128f79929fe9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cohen, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:58:11 np0005629333 systemd[1]: Started libpod-conmon-d3b162375f43a4faeaa77b29f5046123a146eda3dd808569da7128f79929fe9a.scope.
Feb 25 07:58:11 np0005629333 podman[372132]: 2026-02-25 12:58:11.415341138 +0000 UTC m=+0.029433881 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:58:11 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:58:11 np0005629333 podman[372132]: 2026-02-25 12:58:11.534786018 +0000 UTC m=+0.148878831 container init d3b162375f43a4faeaa77b29f5046123a146eda3dd808569da7128f79929fe9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cohen, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 07:58:11 np0005629333 podman[372132]: 2026-02-25 12:58:11.544918553 +0000 UTC m=+0.159011266 container start d3b162375f43a4faeaa77b29f5046123a146eda3dd808569da7128f79929fe9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cohen, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Feb 25 07:58:11 np0005629333 podman[372132]: 2026-02-25 12:58:11.548258938 +0000 UTC m=+0.162351751 container attach d3b162375f43a4faeaa77b29f5046123a146eda3dd808569da7128f79929fe9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cohen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 07:58:11 np0005629333 cool_cohen[372148]: 167 167
Feb 25 07:58:11 np0005629333 systemd[1]: libpod-d3b162375f43a4faeaa77b29f5046123a146eda3dd808569da7128f79929fe9a.scope: Deactivated successfully.
Feb 25 07:58:11 np0005629333 podman[372132]: 2026-02-25 12:58:11.551306904 +0000 UTC m=+0.165399657 container died d3b162375f43a4faeaa77b29f5046123a146eda3dd808569da7128f79929fe9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 07:58:11 np0005629333 systemd[1]: var-lib-containers-storage-overlay-8df13f63c09405e79b335bf2f9bacf9906caefa28da0850fa60efc304e2aabcd-merged.mount: Deactivated successfully.
Feb 25 07:58:11 np0005629333 podman[372132]: 2026-02-25 12:58:11.59337266 +0000 UTC m=+0.207465383 container remove d3b162375f43a4faeaa77b29f5046123a146eda3dd808569da7128f79929fe9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cohen, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:58:11 np0005629333 systemd[1]: libpod-conmon-d3b162375f43a4faeaa77b29f5046123a146eda3dd808569da7128f79929fe9a.scope: Deactivated successfully.
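The create → init → start → attach → died → remove sequence above (and the identical awesome_gould / bold_shockley / mystifying_sutherland sequences around it) is cephadm launching short-lived, auto-removed utility containers from the ceph image. The same lifecycle can be watched live with `podman events`; a sketch, assuming the capitalized JSON field names emitted by recent podman releases:

    import json
    import subprocess

    # Stream container lifecycle events from podman as JSON lines.
    proc = subprocess.Popen(
        ['podman', 'events', '--format', 'json', '--filter', 'type=container'],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        # Prints lines like: "create 584217a6d094 awesome_gould",
        # "died d3b162375f43 cool_cohen", "remove ..." as they happen.
        print(ev.get('Status'), ev.get('ID', '')[:12], ev.get('Name', ''))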
Feb 25 07:58:11 np0005629333 podman[372171]: 2026-02-25 12:58:11.78692491 +0000 UTC m=+0.060405005 container create a2f24e98e3e1236bb67470b5666a6a703e56ed673fe056ba75a4fdcb16f5762a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shockley, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:58:11 np0005629333 systemd[1]: Started libpod-conmon-a2f24e98e3e1236bb67470b5666a6a703e56ed673fe056ba75a4fdcb16f5762a.scope.
Feb 25 07:58:11 np0005629333 podman[372171]: 2026-02-25 12:58:11.763305214 +0000 UTC m=+0.036785329 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:58:11 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:58:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc46f80f4fcb0f18a4e3c085a98129672a41c8ac7a10b883b1a6537fd26ee1af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:58:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc46f80f4fcb0f18a4e3c085a98129672a41c8ac7a10b883b1a6537fd26ee1af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:58:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc46f80f4fcb0f18a4e3c085a98129672a41c8ac7a10b883b1a6537fd26ee1af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:58:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc46f80f4fcb0f18a4e3c085a98129672a41c8ac7a10b883b1a6537fd26ee1af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:58:11 np0005629333 podman[372171]: 2026-02-25 12:58:11.887221479 +0000 UTC m=+0.160701644 container init a2f24e98e3e1236bb67470b5666a6a703e56ed673fe056ba75a4fdcb16f5762a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shockley, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 07:58:11 np0005629333 podman[372171]: 2026-02-25 12:58:11.899786864 +0000 UTC m=+0.173266959 container start a2f24e98e3e1236bb67470b5666a6a703e56ed673fe056ba75a4fdcb16f5762a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shockley, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 07:58:11 np0005629333 podman[372171]: 2026-02-25 12:58:11.903097737 +0000 UTC m=+0.176577832 container attach a2f24e98e3e1236bb67470b5666a6a703e56ed673fe056ba75a4fdcb16f5762a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:58:12 np0005629333 bold_shockley[372187]: {
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:    "0": [
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:        {
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "devices": [
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "/dev/loop3"
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            ],
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "lv_name": "ceph_lv0",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "lv_size": "21470642176",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "name": "ceph_lv0",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "tags": {
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.cluster_name": "ceph",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.crush_device_class": "",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.encrypted": "0",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.objectstore": "bluestore",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.osd_id": "0",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.type": "block",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.vdo": "0",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.with_tpm": "0"
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            },
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "type": "block",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "vg_name": "ceph_vg0"
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:        }
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:    ],
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:    "1": [
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:        {
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "devices": [
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "/dev/loop4"
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            ],
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "lv_name": "ceph_lv1",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "lv_size": "21470642176",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "name": "ceph_lv1",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "tags": {
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.cluster_name": "ceph",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.crush_device_class": "",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.encrypted": "0",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.objectstore": "bluestore",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.osd_id": "1",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.type": "block",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.vdo": "0",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.with_tpm": "0"
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            },
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "type": "block",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "vg_name": "ceph_vg1"
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:        }
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:    ],
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:    "2": [
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:        {
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "devices": [
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "/dev/loop5"
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            ],
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "lv_name": "ceph_lv2",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "lv_size": "21470642176",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "name": "ceph_lv2",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "tags": {
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.cluster_name": "ceph",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.crush_device_class": "",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.encrypted": "0",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.objectstore": "bluestore",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.osd_id": "2",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.type": "block",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.vdo": "0",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:                "ceph.with_tpm": "0"
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            },
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "type": "block",
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:            "vg_name": "ceph_vg2"
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:        }
Feb 25 07:58:12 np0005629333 bold_shockley[372187]:    ]
Feb 25 07:58:12 np0005629333 bold_shockley[372187]: }
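The JSON document printed by bold_shockley has the shape of `ceph-volume lvm list --format json` output: a map of OSD id to the logical volumes backing it, with the LVM tags duplicated both as a flat lv_tags string and as a parsed tags object. A sketch that pulls the OSD-to-device mapping out of it (raw_text is trimmed to one OSD for brevity):

    import json

    # One-OSD excerpt of the JSON logged above.
    raw_text = '''{
      "0": [{
        "devices": ["/dev/loop3"],
        "lv_path": "/dev/ceph_vg0/ceph_lv0",
        "tags": {"ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441"}
      }]
    }'''

    for osd_id, lvs in json.loads(raw_text).items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"(fsid {lv['tags']['ceph.osd_fsid']})")
    # osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (fsid d19afe3c-7923-4776-bcc2-88886150b441)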
Feb 25 07:58:12 np0005629333 systemd[1]: libpod-a2f24e98e3e1236bb67470b5666a6a703e56ed673fe056ba75a4fdcb16f5762a.scope: Deactivated successfully.
Feb 25 07:58:12 np0005629333 podman[372171]: 2026-02-25 12:58:12.211052614 +0000 UTC m=+0.484532679 container died a2f24e98e3e1236bb67470b5666a6a703e56ed673fe056ba75a4fdcb16f5762a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 07:58:12 np0005629333 systemd[1]: var-lib-containers-storage-overlay-dc46f80f4fcb0f18a4e3c085a98129672a41c8ac7a10b883b1a6537fd26ee1af-merged.mount: Deactivated successfully.
Feb 25 07:58:12 np0005629333 podman[372171]: 2026-02-25 12:58:12.252564645 +0000 UTC m=+0.526044720 container remove a2f24e98e3e1236bb67470b5666a6a703e56ed673fe056ba75a4fdcb16f5762a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Feb 25 07:58:12 np0005629333 systemd[1]: libpod-conmon-a2f24e98e3e1236bb67470b5666a6a703e56ed673fe056ba75a4fdcb16f5762a.scope: Deactivated successfully.
Feb 25 07:58:12 np0005629333 podman[372270]: 2026-02-25 12:58:12.732475943 +0000 UTC m=+0.056423903 container create 7f28ef584fd63e2871970598687f94e56534d5ef4594fd34b6eba4d2fcb4f82b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:58:12 np0005629333 systemd[1]: Started libpod-conmon-7f28ef584fd63e2871970598687f94e56534d5ef4594fd34b6eba4d2fcb4f82b.scope.
Feb 25 07:58:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2329: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 358 KiB/s rd, 2.2 MiB/s wr, 92 op/s
Feb 25 07:58:12 np0005629333 podman[372270]: 2026-02-25 12:58:12.707846268 +0000 UTC m=+0.031794308 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:58:12 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:58:12 np0005629333 podman[372270]: 2026-02-25 12:58:12.82097804 +0000 UTC m=+0.144926060 container init 7f28ef584fd63e2871970598687f94e56534d5ef4594fd34b6eba4d2fcb4f82b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:58:12 np0005629333 podman[372270]: 2026-02-25 12:58:12.831236969 +0000 UTC m=+0.155184929 container start 7f28ef584fd63e2871970598687f94e56534d5ef4594fd34b6eba4d2fcb4f82b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:58:12 np0005629333 podman[372270]: 2026-02-25 12:58:12.835390576 +0000 UTC m=+0.159338546 container attach 7f28ef584fd63e2871970598687f94e56534d5ef4594fd34b6eba4d2fcb4f82b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:58:12 np0005629333 mystifying_sutherland[372286]: 167 167
Feb 25 07:58:12 np0005629333 systemd[1]: libpod-7f28ef584fd63e2871970598687f94e56534d5ef4594fd34b6eba4d2fcb4f82b.scope: Deactivated successfully.
Feb 25 07:58:12 np0005629333 podman[372270]: 2026-02-25 12:58:12.838545155 +0000 UTC m=+0.162493115 container died 7f28ef584fd63e2871970598687f94e56534d5ef4594fd34b6eba4d2fcb4f82b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_sutherland, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True)
Feb 25 07:58:12 np0005629333 systemd[1]: var-lib-containers-storage-overlay-eef602e4ae2c43315a453425731352c2c38e36207eb0178e9ef676268ad32977-merged.mount: Deactivated successfully.
Feb 25 07:58:12 np0005629333 nova_compute[244014]: 2026-02-25 12:58:12.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:58:12 np0005629333 podman[372270]: 2026-02-25 12:58:12.896747467 +0000 UTC m=+0.220695437 container remove 7f28ef584fd63e2871970598687f94e56534d5ef4594fd34b6eba4d2fcb4f82b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 07:58:12 np0005629333 systemd[1]: libpod-conmon-7f28ef584fd63e2871970598687f94e56534d5ef4594fd34b6eba4d2fcb4f82b.scope: Deactivated successfully.
Feb 25 07:58:13 np0005629333 podman[372310]: 2026-02-25 12:58:13.086445718 +0000 UTC m=+0.055440945 container create 5d08d6f74ea9a19bc1868704b618f2d1e798e042031e5386b141f6378274e36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 07:58:13 np0005629333 systemd[1]: Started libpod-conmon-5d08d6f74ea9a19bc1868704b618f2d1e798e042031e5386b141f6378274e36f.scope.
Feb 25 07:58:13 np0005629333 podman[372310]: 2026-02-25 12:58:13.062325458 +0000 UTC m=+0.031320705 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:58:13 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:58:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ef17f5c20802ed424b625ee61322356594d2dc7bc862243e5e172efdbea316e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:58:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ef17f5c20802ed424b625ee61322356594d2dc7bc862243e5e172efdbea316e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:58:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ef17f5c20802ed424b625ee61322356594d2dc7bc862243e5e172efdbea316e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:58:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ef17f5c20802ed424b625ee61322356594d2dc7bc862243e5e172efdbea316e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:58:13 np0005629333 podman[372310]: 2026-02-25 12:58:13.192127689 +0000 UTC m=+0.161122906 container init 5d08d6f74ea9a19bc1868704b618f2d1e798e042031e5386b141f6378274e36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sanderson, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:58:13 np0005629333 podman[372310]: 2026-02-25 12:58:13.206231567 +0000 UTC m=+0.175226804 container start 5d08d6f74ea9a19bc1868704b618f2d1e798e042031e5386b141f6378274e36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sanderson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 07:58:13 np0005629333 podman[372310]: 2026-02-25 12:58:13.21273387 +0000 UTC m=+0.181729087 container attach 5d08d6f74ea9a19bc1868704b618f2d1e798e042031e5386b141f6378274e36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:58:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:58:13 np0005629333 lvm[372417]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:58:13 np0005629333 lvm[372423]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:58:13 np0005629333 lvm[372423]: VG ceph_vg1 finished
Feb 25 07:58:13 np0005629333 lvm[372424]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:58:13 np0005629333 lvm[372424]: VG ceph_vg2 finished
Feb 25 07:58:13 np0005629333 lvm[372417]: VG ceph_vg0 finished
Feb 25 07:58:13 np0005629333 podman[372401]: 2026-02-25 12:58:13.891122506 +0000 UTC m=+0.069544033 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 25 07:58:13 np0005629333 nova_compute[244014]: 2026-02-25 12:58:13.897 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:58:13 np0005629333 lvm[372446]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:58:13 np0005629333 lvm[372446]: VG ceph_vg0 finished
Feb 25 07:58:13 np0005629333 zealous_sanderson[372326]: {}
Feb 25 07:58:13 np0005629333 podman[372403]: 2026-02-25 12:58:13.909868225 +0000 UTC m=+0.084699721 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:58:13 np0005629333 systemd[1]: libpod-5d08d6f74ea9a19bc1868704b618f2d1e798e042031e5386b141f6378274e36f.scope: Deactivated successfully.
Feb 25 07:58:13 np0005629333 systemd[1]: libpod-5d08d6f74ea9a19bc1868704b618f2d1e798e042031e5386b141f6378274e36f.scope: Consumed 1.126s CPU time.
Feb 25 07:58:13 np0005629333 podman[372310]: 2026-02-25 12:58:13.943964276 +0000 UTC m=+0.912959493 container died 5d08d6f74ea9a19bc1868704b618f2d1e798e042031e5386b141f6378274e36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 07:58:13 np0005629333 systemd[1]: var-lib-containers-storage-overlay-4ef17f5c20802ed424b625ee61322356594d2dc7bc862243e5e172efdbea316e-merged.mount: Deactivated successfully.
Feb 25 07:58:13 np0005629333 podman[372310]: 2026-02-25 12:58:13.983817661 +0000 UTC m=+0.952812858 container remove 5d08d6f74ea9a19bc1868704b618f2d1e798e042031e5386b141f6378274e36f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sanderson, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 07:58:13 np0005629333 systemd[1]: libpod-conmon-5d08d6f74ea9a19bc1868704b618f2d1e798e042031e5386b141f6378274e36f.scope: Deactivated successfully.
Feb 25 07:58:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:58:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:58:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:58:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:58:14 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:58:14 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:58:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2330: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Feb 25 07:58:15 np0005629333 nova_compute[244014]: 2026-02-25 12:58:15.562 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:58:15 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:15Z|01494|binding|INFO|Releasing lport 93d7aa8d-1845-4103-8fd2-aed7a8c4298a from this chassis (sb_readonly=0)
Feb 25 07:58:15 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:15Z|01495|binding|INFO|Releasing lport d5efe0f3-2e55-4d47-9b3f-ed6e541466bd from this chassis (sb_readonly=0)
Feb 25 07:58:15 np0005629333 nova_compute[244014]: 2026-02-25 12:58:15.978 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:58:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:16.271 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:58:16 np0005629333 nova_compute[244014]: 2026-02-25 12:58:16.271 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:58:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:16.273 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 07:58:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2331: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Feb 25 07:58:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:58:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2332: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Feb 25 07:58:18 np0005629333 nova_compute[244014]: 2026-02-25 12:58:18.900 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:20 np0005629333 nova_compute[244014]: 2026-02-25 12:58:20.565 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2333: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 14 KiB/s wr, 28 op/s
Feb 25 07:58:21 np0005629333 nova_compute[244014]: 2026-02-25 12:58:21.751 244018 DEBUG nova.compute.manager [req-601c4bc4-f03d-4aab-9ad8-6d4187db25e0 req-d957187e-e6c6-4616-ae30-98a7ba110a64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Received event network-changed-13dff95c-b96c-4657-9807-58964669e78a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:58:21 np0005629333 nova_compute[244014]: 2026-02-25 12:58:21.751 244018 DEBUG nova.compute.manager [req-601c4bc4-f03d-4aab-9ad8-6d4187db25e0 req-d957187e-e6c6-4616-ae30-98a7ba110a64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Refreshing instance network info cache due to event network-changed-13dff95c-b96c-4657-9807-58964669e78a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:58:21 np0005629333 nova_compute[244014]: 2026-02-25 12:58:21.751 244018 DEBUG oslo_concurrency.lockutils [req-601c4bc4-f03d-4aab-9ad8-6d4187db25e0 req-d957187e-e6c6-4616-ae30-98a7ba110a64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-621d2b1a-0d06-4a98-b252-2acafee3ba02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:58:21 np0005629333 nova_compute[244014]: 2026-02-25 12:58:21.751 244018 DEBUG oslo_concurrency.lockutils [req-601c4bc4-f03d-4aab-9ad8-6d4187db25e0 req-d957187e-e6c6-4616-ae30-98a7ba110a64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-621d2b1a-0d06-4a98-b252-2acafee3ba02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:58:21 np0005629333 nova_compute[244014]: 2026-02-25 12:58:21.751 244018 DEBUG nova.network.neutron [req-601c4bc4-f03d-4aab-9ad8-6d4187db25e0 req-d957187e-e6c6-4616-ae30-98a7ba110a64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Refreshing network info cache for port 13dff95c-b96c-4657-9807-58964669e78a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:58:21 np0005629333 nova_compute[244014]: 2026-02-25 12:58:21.837 244018 DEBUG oslo_concurrency.lockutils [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Acquiring lock "621d2b1a-0d06-4a98-b252-2acafee3ba02" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:58:21 np0005629333 nova_compute[244014]: 2026-02-25 12:58:21.837 244018 DEBUG oslo_concurrency.lockutils [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:58:21 np0005629333 nova_compute[244014]: 2026-02-25 12:58:21.838 244018 DEBUG oslo_concurrency.lockutils [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Acquiring lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:58:21 np0005629333 nova_compute[244014]: 2026-02-25 12:58:21.838 244018 DEBUG oslo_concurrency.lockutils [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:58:21 np0005629333 nova_compute[244014]: 2026-02-25 12:58:21.838 244018 DEBUG oslo_concurrency.lockutils [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:58:21 np0005629333 nova_compute[244014]: 2026-02-25 12:58:21.839 244018 INFO nova.compute.manager [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Terminating instance#033[00m
Feb 25 07:58:21 np0005629333 nova_compute[244014]: 2026-02-25 12:58:21.839 244018 DEBUG nova.compute.manager [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:58:21 np0005629333 kernel: tap13dff95c-b9 (unregistering): left promiscuous mode
Feb 25 07:58:21 np0005629333 NetworkManager[49836]: <info>  [1772024301.8892] device (tap13dff95c-b9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:58:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:21Z|01496|binding|INFO|Releasing lport 13dff95c-b96c-4657-9807-58964669e78a from this chassis (sb_readonly=0)
Feb 25 07:58:21 np0005629333 nova_compute[244014]: 2026-02-25 12:58:21.898 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:21Z|01497|binding|INFO|Setting lport 13dff95c-b96c-4657-9807-58964669e78a down in Southbound
Feb 25 07:58:21 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:21Z|01498|binding|INFO|Removing iface tap13dff95c-b9 ovn-installed in OVS
Feb 25 07:58:21 np0005629333 nova_compute[244014]: 2026-02-25 12:58:21.901 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:21 np0005629333 nova_compute[244014]: 2026-02-25 12:58:21.906 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:21.919 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:9b:25 10.100.0.8'], port_security=['fa:16:3e:42:9b:25 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '621d2b1a-0d06-4a98-b252-2acafee3ba02', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '089b8f0ee9684ec69cbdc13d24262170', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0892ce4b-109e-43d1-a441-11b1be3535ea 1dfe9b6c-62fb-43ca-b23e-3d57cce238fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=916a48b0-3c8c-48cf-bf68-7fcb782233f1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=13dff95c-b96c-4657-9807-58964669e78a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:58:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:21.921 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 13dff95c-b96c-4657-9807-58964669e78a in datapath f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd unbound from our chassis#033[00m
Feb 25 07:58:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:21.923 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:58:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:21.924 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6057bba7-51ba-4891-bfee-1ea69542b976]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:58:21 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:21.925 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd namespace which is not needed anymore#033[00m
Feb 25 07:58:21 np0005629333 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Feb 25 07:58:21 np0005629333 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008d.scope: Consumed 12.776s CPU time.
Feb 25 07:58:21 np0005629333 systemd-machined[210048]: Machine qemu-173-instance-0000008d terminated.
Feb 25 07:58:22 np0005629333 neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd[371699]: [NOTICE]   (371703) : haproxy version is 2.8.14-c23fe91
Feb 25 07:58:22 np0005629333 neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd[371699]: [NOTICE]   (371703) : path to executable is /usr/sbin/haproxy
Feb 25 07:58:22 np0005629333 neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd[371699]: [WARNING]  (371703) : Exiting Master process...
Feb 25 07:58:22 np0005629333 neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd[371699]: [WARNING]  (371703) : Exiting Master process...
Feb 25 07:58:22 np0005629333 neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd[371699]: [ALERT]    (371703) : Current worker (371705) exited with code 143 (Terminated)
Feb 25 07:58:22 np0005629333 neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd[371699]: [WARNING]  (371703) : All workers exited. Exiting... (0)
Feb 25 07:58:22 np0005629333 systemd[1]: libpod-977d987a9c1cff7ad3eeb65e4b89c2f566fad4bba349902334adf6c1d6d5e94f.scope: Deactivated successfully.
Feb 25 07:58:22 np0005629333 podman[372516]: 2026-02-25 12:58:22.064093312 +0000 UTC m=+0.048500829 container died 977d987a9c1cff7ad3eeb65e4b89c2f566fad4bba349902334adf6c1d6d5e94f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 07:58:22 np0005629333 nova_compute[244014]: 2026-02-25 12:58:22.083 244018 INFO nova.virt.libvirt.driver [-] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Instance destroyed successfully.#033[00m
Feb 25 07:58:22 np0005629333 nova_compute[244014]: 2026-02-25 12:58:22.084 244018 DEBUG nova.objects.instance [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lazy-loading 'resources' on Instance uuid 621d2b1a-0d06-4a98-b252-2acafee3ba02 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:58:22 np0005629333 nova_compute[244014]: 2026-02-25 12:58:22.103 244018 DEBUG nova.virt.libvirt.vif [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:57:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-370800561-access_point-2136907043',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-370800561-access_point-2136907043',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-370800561-acc',id=141,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJN+hUOMaKAQ0YB42oqWjvIfKM8uPP+02JazD6RxfhEthxrTmdltfGPXOOUYwxJvxVj/DZ80733jprZ2P4416qp+bN+v8dr9z51p/U8yRQby/Qu2leuQCIhcBO5CRWY1xg==',key_name='tempest-TestSecurityGroupsBasicOps-1642482985',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:57:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='089b8f0ee9684ec69cbdc13d24262170',ramdisk_id='',reservation_id='r-r5sfe18w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-370800561',owner_user_name='tempest-TestSecurityGroupsBasicOps-370800561-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:57:52Z,user_data=None,user_id='2248dda8be6e4a51ace44a9e66dc4b45',uuid=621d2b1a-0d06-4a98-b252-2acafee3ba02,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "13dff95c-b96c-4657-9807-58964669e78a", "address": "fa:16:3e:42:9b:25", "network": {"id": "f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd", "bridge": "br-int", "label": "tempest-network-smoke--679537891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "089b8f0ee9684ec69cbdc13d24262170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13dff95c-b9", "ovs_interfaceid": "13dff95c-b96c-4657-9807-58964669e78a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:58:22 np0005629333 nova_compute[244014]: 2026-02-25 12:58:22.103 244018 DEBUG nova.network.os_vif_util [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Converting VIF {"id": "13dff95c-b96c-4657-9807-58964669e78a", "address": "fa:16:3e:42:9b:25", "network": {"id": "f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd", "bridge": "br-int", "label": "tempest-network-smoke--679537891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "089b8f0ee9684ec69cbdc13d24262170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13dff95c-b9", "ovs_interfaceid": "13dff95c-b96c-4657-9807-58964669e78a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:58:22 np0005629333 nova_compute[244014]: 2026-02-25 12:58:22.104 244018 DEBUG nova.network.os_vif_util [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:42:9b:25,bridge_name='br-int',has_traffic_filtering=True,id=13dff95c-b96c-4657-9807-58964669e78a,network=Network(f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13dff95c-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:58:22 np0005629333 nova_compute[244014]: 2026-02-25 12:58:22.105 244018 DEBUG os_vif [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:9b:25,bridge_name='br-int',has_traffic_filtering=True,id=13dff95c-b96c-4657-9807-58964669e78a,network=Network(f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13dff95c-b9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:58:22 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-977d987a9c1cff7ad3eeb65e4b89c2f566fad4bba349902334adf6c1d6d5e94f-userdata-shm.mount: Deactivated successfully.
Feb 25 07:58:22 np0005629333 nova_compute[244014]: 2026-02-25 12:58:22.107 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:22 np0005629333 nova_compute[244014]: 2026-02-25 12:58:22.108 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13dff95c-b9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:58:22 np0005629333 systemd[1]: var-lib-containers-storage-overlay-3f481ca962f4dc0356ecf15f122766f43015783bde0babd55e306f3282801e38-merged.mount: Deactivated successfully.
Feb 25 07:58:22 np0005629333 nova_compute[244014]: 2026-02-25 12:58:22.110 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:22 np0005629333 nova_compute[244014]: 2026-02-25 12:58:22.113 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:22 np0005629333 nova_compute[244014]: 2026-02-25 12:58:22.117 244018 INFO os_vif [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:9b:25,bridge_name='br-int',has_traffic_filtering=True,id=13dff95c-b96c-4657-9807-58964669e78a,network=Network(f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13dff95c-b9')#033[00m
Feb 25 07:58:22 np0005629333 podman[372516]: 2026-02-25 12:58:22.123478327 +0000 UTC m=+0.107885814 container cleanup 977d987a9c1cff7ad3eeb65e4b89c2f566fad4bba349902334adf6c1d6d5e94f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 07:58:22 np0005629333 systemd[1]: libpod-conmon-977d987a9c1cff7ad3eeb65e4b89c2f566fad4bba349902334adf6c1d6d5e94f.scope: Deactivated successfully.
Feb 25 07:58:22 np0005629333 podman[372563]: 2026-02-25 12:58:22.191872756 +0000 UTC m=+0.047742618 container remove 977d987a9c1cff7ad3eeb65e4b89c2f566fad4bba349902334adf6c1d6d5e94f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:58:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:22.197 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[492e6f8e-16f8-4f9b-8252-7585c9ec961a]: (4, ('Wed Feb 25 12:58:22 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd (977d987a9c1cff7ad3eeb65e4b89c2f566fad4bba349902334adf6c1d6d5e94f)\n977d987a9c1cff7ad3eeb65e4b89c2f566fad4bba349902334adf6c1d6d5e94f\nWed Feb 25 12:58:22 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd (977d987a9c1cff7ad3eeb65e4b89c2f566fad4bba349902334adf6c1d6d5e94f)\n977d987a9c1cff7ad3eeb65e4b89c2f566fad4bba349902334adf6c1d6d5e94f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:58:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:22.199 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6db8123d-b289-4eb8-a458-67ada0063bb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:58:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:22.201 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1fe6aa3-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:58:22 np0005629333 kernel: tapf1fe6aa3-20: left promiscuous mode
Feb 25 07:58:22 np0005629333 nova_compute[244014]: 2026-02-25 12:58:22.204 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:22 np0005629333 nova_compute[244014]: 2026-02-25 12:58:22.210 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:22.211 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e67ba772-59e9-47ee-afce-cd36000c3a10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:58:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:22.225 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e923f7a1-ae12-4d49-bf5e-df514002d986]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:58:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:22.227 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[16f20823-2061-45f9-a018-a09142631096]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:58:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:22.243 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e7280a4d-e75a-491c-bb49-4f03aa6f1763]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624120, 'reachable_time': 43000, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372588, 'error': None, 'target': 'ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:58:22 np0005629333 systemd[1]: run-netns-ovnmeta\x2df1fe6aa3\x2d20a1\x2d49f5\x2d80e1\x2dd26b81a3a5cd.mount: Deactivated successfully.
Feb 25 07:58:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:22.248 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:58:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:22.248 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[3806571b-e4b2-40fc-a44d-9bbf0f8f07db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:58:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:22.276 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:58:22 np0005629333 nova_compute[244014]: 2026-02-25 12:58:22.410 244018 INFO nova.virt.libvirt.driver [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Deleting instance files /var/lib/nova/instances/621d2b1a-0d06-4a98-b252-2acafee3ba02_del#033[00m
Feb 25 07:58:22 np0005629333 nova_compute[244014]: 2026-02-25 12:58:22.411 244018 INFO nova.virt.libvirt.driver [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Deletion of /var/lib/nova/instances/621d2b1a-0d06-4a98-b252-2acafee3ba02_del complete#033[00m
Feb 25 07:58:22 np0005629333 nova_compute[244014]: 2026-02-25 12:58:22.503 244018 INFO nova.compute.manager [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Took 0.66 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:58:22 np0005629333 nova_compute[244014]: 2026-02-25 12:58:22.504 244018 DEBUG oslo.service.loopingcall [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:58:22 np0005629333 nova_compute[244014]: 2026-02-25 12:58:22.505 244018 DEBUG nova.compute.manager [-] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:58:22 np0005629333 nova_compute[244014]: 2026-02-25 12:58:22.505 244018 DEBUG nova.network.neutron [-] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:58:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2334: 305 pgs: 305 active+clean; 247 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 29 KiB/s wr, 49 op/s
Feb 25 07:58:22 np0005629333 nova_compute[244014]: 2026-02-25 12:58:22.922 244018 DEBUG nova.network.neutron [req-601c4bc4-f03d-4aab-9ad8-6d4187db25e0 req-d957187e-e6c6-4616-ae30-98a7ba110a64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Updated VIF entry in instance network info cache for port 13dff95c-b96c-4657-9807-58964669e78a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:58:22 np0005629333 nova_compute[244014]: 2026-02-25 12:58:22.923 244018 DEBUG nova.network.neutron [req-601c4bc4-f03d-4aab-9ad8-6d4187db25e0 req-d957187e-e6c6-4616-ae30-98a7ba110a64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Updating instance_info_cache with network_info: [{"id": "13dff95c-b96c-4657-9807-58964669e78a", "address": "fa:16:3e:42:9b:25", "network": {"id": "f1fe6aa3-20a1-49f5-80e1-d26b81a3a5cd", "bridge": "br-int", "label": "tempest-network-smoke--679537891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "089b8f0ee9684ec69cbdc13d24262170", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13dff95c-b9", "ovs_interfaceid": "13dff95c-b96c-4657-9807-58964669e78a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:58:22 np0005629333 nova_compute[244014]: 2026-02-25 12:58:22.965 244018 DEBUG oslo_concurrency.lockutils [req-601c4bc4-f03d-4aab-9ad8-6d4187db25e0 req-d957187e-e6c6-4616-ae30-98a7ba110a64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-621d2b1a-0d06-4a98-b252-2acafee3ba02" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:58:23 np0005629333 nova_compute[244014]: 2026-02-25 12:58:23.130 244018 DEBUG nova.network.neutron [-] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:58:23 np0005629333 nova_compute[244014]: 2026-02-25 12:58:23.169 244018 INFO nova.compute.manager [-] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Took 0.66 seconds to deallocate network for instance.#033[00m
Feb 25 07:58:23 np0005629333 nova_compute[244014]: 2026-02-25 12:58:23.214 244018 DEBUG nova.compute.manager [req-9e29473a-36d8-4d6f-8202-b721c47b5979 req-e688aa20-415d-4bba-bee5-6233f17e9688 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Received event network-vif-deleted-13dff95c-b96c-4657-9807-58964669e78a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:58:23 np0005629333 nova_compute[244014]: 2026-02-25 12:58:23.225 244018 DEBUG oslo_concurrency.lockutils [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:58:23 np0005629333 nova_compute[244014]: 2026-02-25 12:58:23.226 244018 DEBUG oslo_concurrency.lockutils [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:58:23 np0005629333 nova_compute[244014]: 2026-02-25 12:58:23.324 244018 DEBUG oslo_concurrency.processutils [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:58:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:58:23 np0005629333 nova_compute[244014]: 2026-02-25 12:58:23.855 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024288.8528855, 0389cfa0-085b-4e4e-8d61-95d0b91c413e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:58:23 np0005629333 nova_compute[244014]: 2026-02-25 12:58:23.856 244018 INFO nova.compute.manager [-] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:58:23 np0005629333 nova_compute[244014]: 2026-02-25 12:58:23.860 244018 DEBUG nova.compute.manager [req-24b815a1-be22-4164-a060-a7e20e9a6493 req-104719d0-0cf4-4164-a6fe-d851144073eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Received event network-vif-unplugged-13dff95c-b96c-4657-9807-58964669e78a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:58:23 np0005629333 nova_compute[244014]: 2026-02-25 12:58:23.861 244018 DEBUG oslo_concurrency.lockutils [req-24b815a1-be22-4164-a060-a7e20e9a6493 req-104719d0-0cf4-4164-a6fe-d851144073eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:58:23 np0005629333 nova_compute[244014]: 2026-02-25 12:58:23.861 244018 DEBUG oslo_concurrency.lockutils [req-24b815a1-be22-4164-a060-a7e20e9a6493 req-104719d0-0cf4-4164-a6fe-d851144073eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:58:23 np0005629333 nova_compute[244014]: 2026-02-25 12:58:23.862 244018 DEBUG oslo_concurrency.lockutils [req-24b815a1-be22-4164-a060-a7e20e9a6493 req-104719d0-0cf4-4164-a6fe-d851144073eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:58:23 np0005629333 nova_compute[244014]: 2026-02-25 12:58:23.862 244018 DEBUG nova.compute.manager [req-24b815a1-be22-4164-a060-a7e20e9a6493 req-104719d0-0cf4-4164-a6fe-d851144073eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] No waiting events found dispatching network-vif-unplugged-13dff95c-b96c-4657-9807-58964669e78a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:58:23 np0005629333 nova_compute[244014]: 2026-02-25 12:58:23.862 244018 WARNING nova.compute.manager [req-24b815a1-be22-4164-a060-a7e20e9a6493 req-104719d0-0cf4-4164-a6fe-d851144073eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Received unexpected event network-vif-unplugged-13dff95c-b96c-4657-9807-58964669e78a for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:58:23 np0005629333 nova_compute[244014]: 2026-02-25 12:58:23.863 244018 DEBUG nova.compute.manager [req-24b815a1-be22-4164-a060-a7e20e9a6493 req-104719d0-0cf4-4164-a6fe-d851144073eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Received event network-vif-plugged-13dff95c-b96c-4657-9807-58964669e78a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:58:23 np0005629333 nova_compute[244014]: 2026-02-25 12:58:23.863 244018 DEBUG oslo_concurrency.lockutils [req-24b815a1-be22-4164-a060-a7e20e9a6493 req-104719d0-0cf4-4164-a6fe-d851144073eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:58:23 np0005629333 nova_compute[244014]: 2026-02-25 12:58:23.863 244018 DEBUG oslo_concurrency.lockutils [req-24b815a1-be22-4164-a060-a7e20e9a6493 req-104719d0-0cf4-4164-a6fe-d851144073eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:58:23 np0005629333 nova_compute[244014]: 2026-02-25 12:58:23.863 244018 DEBUG oslo_concurrency.lockutils [req-24b815a1-be22-4164-a060-a7e20e9a6493 req-104719d0-0cf4-4164-a6fe-d851144073eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:58:23 np0005629333 nova_compute[244014]: 2026-02-25 12:58:23.864 244018 DEBUG nova.compute.manager [req-24b815a1-be22-4164-a060-a7e20e9a6493 req-104719d0-0cf4-4164-a6fe-d851144073eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] No waiting events found dispatching network-vif-plugged-13dff95c-b96c-4657-9807-58964669e78a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:58:23 np0005629333 nova_compute[244014]: 2026-02-25 12:58:23.864 244018 WARNING nova.compute.manager [req-24b815a1-be22-4164-a060-a7e20e9a6493 req-104719d0-0cf4-4164-a6fe-d851144073eb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Received unexpected event network-vif-plugged-13dff95c-b96c-4657-9807-58964669e78a for instance with vm_state deleted and task_state None.#033[00m
Feb 25 07:58:23 np0005629333 nova_compute[244014]: 2026-02-25 12:58:23.897 244018 DEBUG nova.compute.manager [None req-e28a871d-9205-4f24-97f5-b7ac8334e9e8 - - - - - -] [instance: 0389cfa0-085b-4e4e-8d61-95d0b91c413e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:58:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:58:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/565057763' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:58:23 np0005629333 nova_compute[244014]: 2026-02-25 12:58:23.962 244018 DEBUG oslo_concurrency.processutils [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:58:23 np0005629333 nova_compute[244014]: 2026-02-25 12:58:23.968 244018 DEBUG nova.compute.provider_tree [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:58:23 np0005629333 nova_compute[244014]: 2026-02-25 12:58:23.993 244018 DEBUG nova.scheduler.client.report [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:58:24 np0005629333 nova_compute[244014]: 2026-02-25 12:58:24.049 244018 DEBUG oslo_concurrency.lockutils [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:58:24 np0005629333 nova_compute[244014]: 2026-02-25 12:58:24.085 244018 INFO nova.scheduler.client.report [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Deleted allocations for instance 621d2b1a-0d06-4a98-b252-2acafee3ba02#033[00m
Feb 25 07:58:24 np0005629333 nova_compute[244014]: 2026-02-25 12:58:24.183 244018 DEBUG oslo_concurrency.lockutils [None req-5f53e641-3466-4846-b91d-f934f0a08136 2248dda8be6e4a51ace44a9e66dc4b45 089b8f0ee9684ec69cbdc13d24262170 - - default default] Lock "621d2b1a-0d06-4a98-b252-2acafee3ba02" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.346s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:58:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2335: 305 pgs: 305 active+clean; 247 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 15 KiB/s wr, 21 op/s
Feb 25 07:58:25 np0005629333 nova_compute[244014]: 2026-02-25 12:58:25.567 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2336: 305 pgs: 305 active+clean; 247 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 15 KiB/s wr, 21 op/s
Feb 25 07:58:27 np0005629333 nova_compute[244014]: 2026-02-25 12:58:27.112 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:58:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2337: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 15 KiB/s wr, 30 op/s
Feb 25 07:58:30 np0005629333 nova_compute[244014]: 2026-02-25 12:58:30.569 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:30 np0005629333 nova_compute[244014]: 2026-02-25 12:58:30.763 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:58:30 np0005629333 nova_compute[244014]: 2026-02-25 12:58:30.787 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Triggering sync for uuid 6185e497-8422-4a5f-a98a-865484d53d4f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Feb 25 07:58:30 np0005629333 nova_compute[244014]: 2026-02-25 12:58:30.788 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "6185e497-8422-4a5f-a98a-865484d53d4f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:58:30 np0005629333 nova_compute[244014]: 2026-02-25 12:58:30.788 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "6185e497-8422-4a5f-a98a-865484d53d4f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:58:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2338: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 15 KiB/s wr, 29 op/s
Feb 25 07:58:30 np0005629333 nova_compute[244014]: 2026-02-25 12:58:30.812 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "6185e497-8422-4a5f-a98a-865484d53d4f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:58:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:58:31
Feb 25 07:58:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:58:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:58:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['images', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'vms', 'default.rgw.log', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta']
Feb 25 07:58:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:58:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:58:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:58:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:58:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:58:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:58:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:58:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:58:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:58:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:58:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:58:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:58:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:58:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:58:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:58:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:58:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:58:32 np0005629333 nova_compute[244014]: 2026-02-25 12:58:32.115 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:32 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:32Z|01499|binding|INFO|Releasing lport 93d7aa8d-1845-4103-8fd2-aed7a8c4298a from this chassis (sb_readonly=0)
Feb 25 07:58:32 np0005629333 nova_compute[244014]: 2026-02-25 12:58:32.655 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:58:32 np0005629333 nova_compute[244014]: 2026-02-25 12:58:32.655 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:58:32 np0005629333 nova_compute[244014]: 2026-02-25 12:58:32.674 244018 DEBUG nova.compute.manager [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:58:32 np0005629333 nova_compute[244014]: 2026-02-25 12:58:32.679 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:32 np0005629333 nova_compute[244014]: 2026-02-25 12:58:32.772 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:58:32 np0005629333 nova_compute[244014]: 2026-02-25 12:58:32.774 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:58:32 np0005629333 nova_compute[244014]: 2026-02-25 12:58:32.789 244018 DEBUG nova.virt.hardware [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:58:32 np0005629333 nova_compute[244014]: 2026-02-25 12:58:32.789 244018 INFO nova.compute.claims [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:58:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2339: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 15 KiB/s wr, 29 op/s
Feb 25 07:58:32 np0005629333 nova_compute[244014]: 2026-02-25 12:58:32.955 244018 DEBUG oslo_concurrency.processutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:58:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:58:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2250003891' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:58:33 np0005629333 nova_compute[244014]: 2026-02-25 12:58:33.512 244018 DEBUG oslo_concurrency.processutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
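The ceph df round trip above (0.557s) is how the libvirt RBD image backend samples pool capacity during the resource claim. A sketch of the same call and the fields it reads, assuming the standard ceph df JSON layout with a top-level 'stats' block:

```python
import json
from oslo_concurrency import processutils

# Same invocation as logged; processutils.execute returns (stdout, stderr).
out, _err = processutils.execute(
    'ceph', 'df', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
stats = json.loads(out)['stats']
print(stats['total_avail_bytes'] / 1024 ** 3)  # ~59 GiB avail per the pgmap
```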
Feb 25 07:58:33 np0005629333 nova_compute[244014]: 2026-02-25 12:58:33.518 244018 DEBUG nova.compute.provider_tree [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:58:33 np0005629333 nova_compute[244014]: 2026-02-25 12:58:33.595 244018 DEBUG nova.scheduler.client.report [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
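The inventory dict in the line above also shows the placement math: effective schedulable capacity per resource class is (total - reserved) * allocation_ratio. Worked directly from the logged numbers:

```python
# Capacity implied by the logged inventory for provider cb4dae98-....
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    effective = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, effective)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
```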
Feb 25 07:58:33 np0005629333 nova_compute[244014]: 2026-02-25 12:58:33.715 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.941s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:58:33 np0005629333 nova_compute[244014]: 2026-02-25 12:58:33.716 244018 DEBUG nova.compute.manager [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:58:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:58:33 np0005629333 nova_compute[244014]: 2026-02-25 12:58:33.894 244018 DEBUG nova.compute.manager [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:58:33 np0005629333 nova_compute[244014]: 2026-02-25 12:58:33.895 244018 DEBUG nova.network.neutron [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:58:33 np0005629333 nova_compute[244014]: 2026-02-25 12:58:33.946 244018 INFO nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:58:33 np0005629333 nova_compute[244014]: 2026-02-25 12:58:33.991 244018 DEBUG nova.compute.manager [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:58:34 np0005629333 nova_compute[244014]: 2026-02-25 12:58:34.250 244018 DEBUG nova.compute.manager [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:58:34 np0005629333 nova_compute[244014]: 2026-02-25 12:58:34.252 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:58:34 np0005629333 nova_compute[244014]: 2026-02-25 12:58:34.252 244018 INFO nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Creating image(s)#033[00m
Feb 25 07:58:34 np0005629333 nova_compute[244014]: 2026-02-25 12:58:34.280 244018 DEBUG nova.storage.rbd_utils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] rbd image eadab8d0-1f24-4ace-b7b4-10329df53f12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:58:34 np0005629333 nova_compute[244014]: 2026-02-25 12:58:34.307 244018 DEBUG nova.storage.rbd_utils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] rbd image eadab8d0-1f24-4ace-b7b4-10329df53f12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:58:34 np0005629333 nova_compute[244014]: 2026-02-25 12:58:34.330 244018 DEBUG nova.storage.rbd_utils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] rbd image eadab8d0-1f24-4ace-b7b4-10329df53f12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:58:34 np0005629333 nova_compute[244014]: 2026-02-25 12:58:34.336 244018 DEBUG oslo_concurrency.processutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:58:34 np0005629333 nova_compute[244014]: 2026-02-25 12:58:34.407 244018 DEBUG oslo_concurrency.processutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
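The qemu-img probe above runs under oslo_concurrency.prlimit so a malformed base image cannot exhaust memory or CPU during inspection. The same guarded call, reconstructed from the logged command line:

```python
import json
from oslo_concurrency import processutils

# prlimit caps address space at 1 GiB (1073741824) and CPU at 30 seconds.
out, _err = processutils.execute(
    '/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
    '--as=%d' % (1 << 30), '--cpu=30', '--',
    'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
    '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
    '--force-share', '--output=json')
info = json.loads(out)
print(info['format'], info['virtual-size'])
```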
Feb 25 07:58:34 np0005629333 nova_compute[244014]: 2026-02-25 12:58:34.408 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:58:34 np0005629333 nova_compute[244014]: 2026-02-25 12:58:34.409 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:58:34 np0005629333 nova_compute[244014]: 2026-02-25 12:58:34.409 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:58:34 np0005629333 nova_compute[244014]: 2026-02-25 12:58:34.433 244018 DEBUG nova.storage.rbd_utils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] rbd image eadab8d0-1f24-4ace-b7b4-10329df53f12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:58:34 np0005629333 nova_compute[244014]: 2026-02-25 12:58:34.438 244018 DEBUG oslo_concurrency.processutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 eadab8d0-1f24-4ace-b7b4-10329df53f12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:58:34 np0005629333 nova_compute[244014]: 2026-02-25 12:58:34.596 244018 DEBUG nova.policy [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2236154a16ea4715a79c2e6f36e22e83', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e20a44a5f2664c25a996daf9d0d14b97', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 07:58:34 np0005629333 nova_compute[244014]: 2026-02-25 12:58:34.720 244018 DEBUG oslo_concurrency.processutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 eadab8d0-1f24-4ace-b7b4-10329df53f12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:58:34 np0005629333 nova_compute[244014]: 2026-02-25 12:58:34.794 244018 DEBUG nova.storage.rbd_utils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] resizing rbd image eadab8d0-1f24-4ace-b7b4-10329df53f12_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
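The import/resize pair above is the Ceph-backed root disk being created: the cached base file goes into the vms pool, then the image is grown to the flavor's 1 GiB root size (1073741824 bytes). A hedged CLI equivalent; nova itself performs the resize through librbd in rbd_utils.resize rather than the rbd binary:

```python
import subprocess

base = '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'
disk = 'eadab8d0-1f24-4ace-b7b4-10329df53f12_disk'
creds = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']
# Import the flat base file as a format-2 RBD image, as logged.
subprocess.run(['rbd', 'import', '--pool', 'vms', base, disk,
                '--image-format=2'] + creds, check=True)
# Grow to the flavor root size: 1073741824 bytes == 1024M.
subprocess.run(['rbd', 'resize', '--pool', 'vms', '--size', '1024M',
                disk] + creds, check=True)
```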
Feb 25 07:58:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2340: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.8 KiB/s rd, 0 B/s wr, 8 op/s
Feb 25 07:58:34 np0005629333 nova_compute[244014]: 2026-02-25 12:58:34.905 244018 DEBUG nova.objects.instance [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lazy-loading 'migration_context' on Instance uuid eadab8d0-1f24-4ace-b7b4-10329df53f12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:58:34 np0005629333 nova_compute[244014]: 2026-02-25 12:58:34.938 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:58:34 np0005629333 nova_compute[244014]: 2026-02-25 12:58:34.938 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Ensure instance console log exists: /var/lib/nova/instances/eadab8d0-1f24-4ace-b7b4-10329df53f12/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:58:34 np0005629333 nova_compute[244014]: 2026-02-25 12:58:34.938 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:58:34 np0005629333 nova_compute[244014]: 2026-02-25 12:58:34.939 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:58:34 np0005629333 nova_compute[244014]: 2026-02-25 12:58:34.939 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:58:35 np0005629333 nova_compute[244014]: 2026-02-25 12:58:35.317 244018 DEBUG nova.network.neutron [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Successfully created port: 98c45ccf-8678-4765-bf66-7a9e3dae8cac _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:58:35 np0005629333 nova_compute[244014]: 2026-02-25 12:58:35.572 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:35 np0005629333 nova_compute[244014]: 2026-02-25 12:58:35.604 244018 DEBUG nova.compute.manager [req-65addcf2-9991-43cf-a80b-cd1c5343fcea req-7ca4af24-e5fc-4472-b75e-6b91c79ef2bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Received event network-changed-a4d9181f-fefa-4cde-81ea-c5e59433606c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:58:35 np0005629333 nova_compute[244014]: 2026-02-25 12:58:35.605 244018 DEBUG nova.compute.manager [req-65addcf2-9991-43cf-a80b-cd1c5343fcea req-7ca4af24-e5fc-4472-b75e-6b91c79ef2bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Refreshing instance network info cache due to event network-changed-a4d9181f-fefa-4cde-81ea-c5e59433606c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:58:35 np0005629333 nova_compute[244014]: 2026-02-25 12:58:35.606 244018 DEBUG oslo_concurrency.lockutils [req-65addcf2-9991-43cf-a80b-cd1c5343fcea req-7ca4af24-e5fc-4472-b75e-6b91c79ef2bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-6185e497-8422-4a5f-a98a-865484d53d4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:58:35 np0005629333 nova_compute[244014]: 2026-02-25 12:58:35.606 244018 DEBUG oslo_concurrency.lockutils [req-65addcf2-9991-43cf-a80b-cd1c5343fcea req-7ca4af24-e5fc-4472-b75e-6b91c79ef2bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-6185e497-8422-4a5f-a98a-865484d53d4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:58:35 np0005629333 nova_compute[244014]: 2026-02-25 12:58:35.607 244018 DEBUG nova.network.neutron [req-65addcf2-9991-43cf-a80b-cd1c5343fcea req-7ca4af24-e5fc-4472-b75e-6b91c79ef2bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Refreshing network info cache for port a4d9181f-fefa-4cde-81ea-c5e59433606c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:58:35 np0005629333 nova_compute[244014]: 2026-02-25 12:58:35.881 244018 DEBUG oslo_concurrency.lockutils [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "6185e497-8422-4a5f-a98a-865484d53d4f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:58:35 np0005629333 nova_compute[244014]: 2026-02-25 12:58:35.882 244018 DEBUG oslo_concurrency.lockutils [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:58:35 np0005629333 nova_compute[244014]: 2026-02-25 12:58:35.883 244018 DEBUG oslo_concurrency.lockutils [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:58:35 np0005629333 nova_compute[244014]: 2026-02-25 12:58:35.883 244018 DEBUG oslo_concurrency.lockutils [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:58:35 np0005629333 nova_compute[244014]: 2026-02-25 12:58:35.884 244018 DEBUG oslo_concurrency.lockutils [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:58:35 np0005629333 nova_compute[244014]: 2026-02-25 12:58:35.886 244018 INFO nova.compute.manager [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Terminating instance#033[00m
Feb 25 07:58:35 np0005629333 nova_compute[244014]: 2026-02-25 12:58:35.888 244018 DEBUG nova.compute.manager [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 07:58:35 np0005629333 nova_compute[244014]: 2026-02-25 12:58:35.899 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:58:35 np0005629333 nova_compute[244014]: 2026-02-25 12:58:35.901 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 07:58:35 np0005629333 nova_compute[244014]: 2026-02-25 12:58:35.902 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 07:58:35 np0005629333 nova_compute[244014]: 2026-02-25 12:58:35.929 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Feb 25 07:58:35 np0005629333 nova_compute[244014]: 2026-02-25 12:58:35.929 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb 25 07:58:35 np0005629333 nova_compute[244014]: 2026-02-25 12:58:35.930 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 07:58:35 np0005629333 kernel: tapa4d9181f-fe (unregistering): left promiscuous mode
Feb 25 07:58:35 np0005629333 NetworkManager[49836]: <info>  [1772024315.9474] device (tapa4d9181f-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:58:35 np0005629333 nova_compute[244014]: 2026-02-25 12:58:35.948 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:35 np0005629333 nova_compute[244014]: 2026-02-25 12:58:35.958 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:35 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:35Z|01500|binding|INFO|Releasing lport a4d9181f-fefa-4cde-81ea-c5e59433606c from this chassis (sb_readonly=0)
Feb 25 07:58:35 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:35Z|01501|binding|INFO|Setting lport a4d9181f-fefa-4cde-81ea-c5e59433606c down in Southbound
Feb 25 07:58:35 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:35Z|01502|binding|INFO|Removing iface tapa4d9181f-fe ovn-installed in OVS
Feb 25 07:58:35 np0005629333 nova_compute[244014]: 2026-02-25 12:58:35.960 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:35 np0005629333 nova_compute[244014]: 2026-02-25 12:58:35.965 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:35.988 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:c2:bb 10.100.0.11'], port_security=['fa:16:3e:ef:c2:bb 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '6185e497-8422-4a5f-a98a-865484d53d4f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f23f1675-b7ff-4265-a011-0912c637d746', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '87d37268-c5dd-4380-a676-5b9940f82b8f 979c759f-0c66-4e81-a7f1-5a970907b9e7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad5ffac6-eab3-4eca-a87d-9bf8335b5de6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=a4d9181f-fefa-4cde-81ea-c5e59433606c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:58:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:35.991 157129 INFO neutron.agent.ovn.metadata.agent [-] Port a4d9181f-fefa-4cde-81ea-c5e59433606c in datapath f23f1675-b7ff-4265-a011-0912c637d746 unbound from our chassis#033[00m
Feb 25 07:58:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:35.993 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f23f1675-b7ff-4265-a011-0912c637d746, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 07:58:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:35.994 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7bbed3e2-5708-4f87-b13f-07974deeb25e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:58:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:35.995 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746 namespace which is not needed anymore#033[00m
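The Matched UPDATE / unbound / teardown burst above is ovsdbapp's row-event machinery: the agent registered a Port_Binding handler whose constructor arguments are echoed verbatim in the log (events=('update',), table='Port_Binding', conditions=None), and its run() fires when a row flips from up=[True]/bound to unbound. A minimal sketch of an event class of that shape; the run() body here is illustrative:

```python
from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    """Sketch of the handler shape behind the 'Matched UPDATE' line."""

    def __init__(self):
        # (events, table, conditions) exactly as echoed in the log above.
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def run(self, event, row, old):
        # old.up == [True] with an emptied chassis means the port was just
        # unbound from this chassis, which triggers namespace teardown.
        print('port %s unbound' % row.logical_port)
```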
Feb 25 07:58:36 np0005629333 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Feb 25 07:58:36 np0005629333 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d0000008b.scope: Consumed 15.708s CPU time.
Feb 25 07:58:36 np0005629333 systemd-machined[210048]: Machine qemu-171-instance-0000008b terminated.
Feb 25 07:58:36 np0005629333 kernel: tapa4d9181f-fe: entered promiscuous mode
Feb 25 07:58:36 np0005629333 kernel: tapa4d9181f-fe (unregistering): left promiscuous mode
Feb 25 07:58:36 np0005629333 nova_compute[244014]: 2026-02-25 12:58:36.118 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:36 np0005629333 nova_compute[244014]: 2026-02-25 12:58:36.127 244018 INFO nova.virt.libvirt.driver [-] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Instance destroyed successfully.#033[00m
Feb 25 07:58:36 np0005629333 nova_compute[244014]: 2026-02-25 12:58:36.128 244018 DEBUG nova.objects.instance [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'resources' on Instance uuid 6185e497-8422-4a5f-a98a-865484d53d4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:58:36 np0005629333 neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746[370480]: [NOTICE]   (370489) : haproxy version is 2.8.14-c23fe91
Feb 25 07:58:36 np0005629333 neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746[370480]: [NOTICE]   (370489) : path to executable is /usr/sbin/haproxy
Feb 25 07:58:36 np0005629333 neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746[370480]: [WARNING]  (370489) : Exiting Master process...
Feb 25 07:58:36 np0005629333 neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746[370480]: [ALERT]    (370489) : Current worker (370491) exited with code 143 (Terminated)
Feb 25 07:58:36 np0005629333 neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746[370480]: [WARNING]  (370489) : All workers exited. Exiting... (0)
Feb 25 07:58:36 np0005629333 systemd[1]: libpod-cdc423d51cb186bf96ec29f98f73bc3b4220dfecb23592c7f19f8b857cc9f2c7.scope: Deactivated successfully.
Feb 25 07:58:36 np0005629333 podman[372823]: 2026-02-25 12:58:36.167897145 +0000 UTC m=+0.063308637 container died cdc423d51cb186bf96ec29f98f73bc3b4220dfecb23592c7f19f8b857cc9f2c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 25 07:58:36 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cdc423d51cb186bf96ec29f98f73bc3b4220dfecb23592c7f19f8b857cc9f2c7-userdata-shm.mount: Deactivated successfully.
Feb 25 07:58:36 np0005629333 systemd[1]: var-lib-containers-storage-overlay-cc100cb099adcbdbcce587f0f6d881ddf87a2202a85f400719149b8997f75aa8-merged.mount: Deactivated successfully.
Feb 25 07:58:36 np0005629333 nova_compute[244014]: 2026-02-25 12:58:36.204 244018 DEBUG nova.virt.libvirt.vif [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:56:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-1963119789',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-1963119789',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=139,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL87R8Bekvm59mo5kwvnuGBsBkp9tXZgiXMYkz6TvQlSCgwb0IPTrLwrZDNNWgUnIBhU5eG7QYkAKCOG1521iSWxvl92auy73uemRv7k4zUT5bfi0v5q+2K3Cz8+sbuK0A==',key_name='tempest-TestSecurityGroupsBasicOps-1295736462',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:57:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-405awptu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:57:08Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=6185e497-8422-4a5f-a98a-865484d53d4f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "address": "fa:16:3e:ef:c2:bb", "network": {"id": "f23f1675-b7ff-4265-a011-0912c637d746", "bridge": "br-int", "label": "tempest-network-smoke--923808756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d9181f-fe", "ovs_interfaceid": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m

Feb 25 07:58:36 np0005629333 nova_compute[244014]: 2026-02-25 12:58:36.206 244018 DEBUG nova.network.os_vif_util [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "address": "fa:16:3e:ef:c2:bb", "network": {"id": "f23f1675-b7ff-4265-a011-0912c637d746", "bridge": "br-int", "label": "tempest-network-smoke--923808756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d9181f-fe", "ovs_interfaceid": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:58:36 np0005629333 nova_compute[244014]: 2026-02-25 12:58:36.208 244018 DEBUG nova.network.os_vif_util [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ef:c2:bb,bridge_name='br-int',has_traffic_filtering=True,id=a4d9181f-fefa-4cde-81ea-c5e59433606c,network=Network(f23f1675-b7ff-4265-a011-0912c637d746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4d9181f-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:58:36 np0005629333 nova_compute[244014]: 2026-02-25 12:58:36.209 244018 DEBUG os_vif [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ef:c2:bb,bridge_name='br-int',has_traffic_filtering=True,id=a4d9181f-fefa-4cde-81ea-c5e59433606c,network=Network(f23f1675-b7ff-4265-a011-0912c637d746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4d9181f-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:58:36 np0005629333 nova_compute[244014]: 2026-02-25 12:58:36.212 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:36 np0005629333 nova_compute[244014]: 2026-02-25 12:58:36.212 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa4d9181f-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:58:36 np0005629333 podman[372823]: 2026-02-25 12:58:36.216464575 +0000 UTC m=+0.111876057 container cleanup cdc423d51cb186bf96ec29f98f73bc3b4220dfecb23592c7f19f8b857cc9f2c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 25 07:58:36 np0005629333 nova_compute[244014]: 2026-02-25 12:58:36.218 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:58:36 np0005629333 nova_compute[244014]: 2026-02-25 12:58:36.220 244018 INFO os_vif [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ef:c2:bb,bridge_name='br-int',has_traffic_filtering=True,id=a4d9181f-fefa-4cde-81ea-c5e59433606c,network=Network(f23f1675-b7ff-4265-a011-0912c637d746),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa4d9181f-fe')#033[00m
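The unplug sequence above is os-vif committing a DelPortCommand with if_exists=True against br-int. The equivalent idempotent delete from the CLI, for reference:

```python
import subprocess

# --if-exists mirrors DelPortCommand(if_exists=True): no error if the tap
# was already removed by an earlier teardown path.
subprocess.run(['ovs-vsctl', '--if-exists', 'del-port',
                'br-int', 'tapa4d9181f-fe'], check=True)
```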
Feb 25 07:58:36 np0005629333 systemd[1]: libpod-conmon-cdc423d51cb186bf96ec29f98f73bc3b4220dfecb23592c7f19f8b857cc9f2c7.scope: Deactivated successfully.
Feb 25 07:58:36 np0005629333 podman[372863]: 2026-02-25 12:58:36.312169104 +0000 UTC m=+0.067170655 container remove cdc423d51cb186bf96ec29f98f73bc3b4220dfecb23592c7f19f8b857cc9f2c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 07:58:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:36.319 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1290c45d-9b1c-4227-930b-1d57c33d9ffd]: (4, ('Wed Feb 25 12:58:36 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746 (cdc423d51cb186bf96ec29f98f73bc3b4220dfecb23592c7f19f8b857cc9f2c7)\ncdc423d51cb186bf96ec29f98f73bc3b4220dfecb23592c7f19f8b857cc9f2c7\nWed Feb 25 12:58:36 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746 (cdc423d51cb186bf96ec29f98f73bc3b4220dfecb23592c7f19f8b857cc9f2c7)\ncdc423d51cb186bf96ec29f98f73bc3b4220dfecb23592c7f19f8b857cc9f2c7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:58:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:36.322 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[16ce3014-bdba-48cc-9e53-71ede35735f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:58:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:36.324 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf23f1675-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:58:36 np0005629333 nova_compute[244014]: 2026-02-25 12:58:36.326 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:36 np0005629333 kernel: tapf23f1675-b0: left promiscuous mode
Feb 25 07:58:36 np0005629333 nova_compute[244014]: 2026-02-25 12:58:36.333 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:36.337 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[00fdb6cc-2370-4e1c-a139-138ceaa09fc9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:58:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:36.356 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[071e39dd-dbac-4cc4-8b83-85691fe77e15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:58:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:36.359 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b62d8fdc-159c-45aa-a57b-4c3d5b02feca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:58:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:36.379 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0d427080-94b8-4b5b-9dca-af2e4712f3cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619633, 'reachable_time': 26442, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372896, 'error': None, 'target': 'ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:58:36 np0005629333 systemd[1]: run-netns-ovnmeta\x2df23f1675\x2db7ff\x2d4265\x2da011\x2d0912c637d746.mount: Deactivated successfully.
Feb 25 07:58:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:36.383 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 07:58:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:36.384 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[ff0d5778-973f-40f8-a6d9-efd58868e268]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
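With no VIFs left on the datapath, the agent tears the metadata namespace down (remove_netns at ip_lib.py:607 above). A hedged pyroute2 equivalent of that privileged call:

```python
from pyroute2 import netns

# Delete the per-network metadata namespace only if it still exists.
ns = 'ovnmeta-f23f1675-b7ff-4265-a011-0912c637d746'
if ns in netns.listnetns():
    netns.remove(ns)
```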
Feb 25 07:58:36 np0005629333 nova_compute[244014]: 2026-02-25 12:58:36.496 244018 INFO nova.virt.libvirt.driver [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Deleting instance files /var/lib/nova/instances/6185e497-8422-4a5f-a98a-865484d53d4f_del#033[00m
Feb 25 07:58:36 np0005629333 nova_compute[244014]: 2026-02-25 12:58:36.497 244018 INFO nova.virt.libvirt.driver [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Deletion of /var/lib/nova/instances/6185e497-8422-4a5f-a98a-865484d53d4f_del complete#033[00m
Feb 25 07:58:36 np0005629333 nova_compute[244014]: 2026-02-25 12:58:36.702 244018 INFO nova.compute.manager [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:58:36 np0005629333 nova_compute[244014]: 2026-02-25 12:58:36.703 244018 DEBUG oslo.service.loopingcall [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:58:36 np0005629333 nova_compute[244014]: 2026-02-25 12:58:36.704 244018 DEBUG nova.compute.manager [-] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:58:36 np0005629333 nova_compute[244014]: 2026-02-25 12:58:36.704 244018 DEBUG nova.network.neutron [-] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:58:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2341: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.8 KiB/s rd, 0 B/s wr, 8 op/s
Feb 25 07:58:36 np0005629333 nova_compute[244014]: 2026-02-25 12:58:36.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:58:36 np0005629333 nova_compute[244014]: 2026-02-25 12:58:36.950 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:58:36 np0005629333 nova_compute[244014]: 2026-02-25 12:58:36.951 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:58:36 np0005629333 nova_compute[244014]: 2026-02-25 12:58:36.951 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:58:36 np0005629333 nova_compute[244014]: 2026-02-25 12:58:36.952 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:58:36 np0005629333 nova_compute[244014]: 2026-02-25 12:58:36.952 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:58:37 np0005629333 nova_compute[244014]: 2026-02-25 12:58:37.081 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024302.0802164, 621d2b1a-0d06-4a98-b252-2acafee3ba02 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:58:37 np0005629333 nova_compute[244014]: 2026-02-25 12:58:37.082 244018 INFO nova.compute.manager [-] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:58:37 np0005629333 nova_compute[244014]: 2026-02-25 12:58:37.114 244018 DEBUG nova.compute.manager [None req-db822f53-7f67-4dfd-9372-21c4548b3fac - - - - - -] [instance: 621d2b1a-0d06-4a98-b252-2acafee3ba02] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:58:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:58:37 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3476929791' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:58:37 np0005629333 nova_compute[244014]: 2026-02-25 12:58:37.495 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:58:37 np0005629333 nova_compute[244014]: 2026-02-25 12:58:37.709 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:58:37 np0005629333 nova_compute[244014]: 2026-02-25 12:58:37.711 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3488MB free_disk=59.941710148938GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 07:58:37 np0005629333 nova_compute[244014]: 2026-02-25 12:58:37.711 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:58:37 np0005629333 nova_compute[244014]: 2026-02-25 12:58:37.712 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:58:37 np0005629333 nova_compute[244014]: 2026-02-25 12:58:37.759 244018 DEBUG nova.compute.manager [req-e05865a4-4bb6-4ee7-b630-96649f5090f0 req-dc23ee5c-9414-456f-9ac4-6955c41a1976 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Received event network-vif-unplugged-a4d9181f-fefa-4cde-81ea-c5e59433606c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:58:37 np0005629333 nova_compute[244014]: 2026-02-25 12:58:37.760 244018 DEBUG oslo_concurrency.lockutils [req-e05865a4-4bb6-4ee7-b630-96649f5090f0 req-dc23ee5c-9414-456f-9ac4-6955c41a1976 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:58:37 np0005629333 nova_compute[244014]: 2026-02-25 12:58:37.760 244018 DEBUG oslo_concurrency.lockutils [req-e05865a4-4bb6-4ee7-b630-96649f5090f0 req-dc23ee5c-9414-456f-9ac4-6955c41a1976 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:58:37 np0005629333 nova_compute[244014]: 2026-02-25 12:58:37.761 244018 DEBUG oslo_concurrency.lockutils [req-e05865a4-4bb6-4ee7-b630-96649f5090f0 req-dc23ee5c-9414-456f-9ac4-6955c41a1976 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:58:37 np0005629333 nova_compute[244014]: 2026-02-25 12:58:37.761 244018 DEBUG nova.compute.manager [req-e05865a4-4bb6-4ee7-b630-96649f5090f0 req-dc23ee5c-9414-456f-9ac4-6955c41a1976 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] No waiting events found dispatching network-vif-unplugged-a4d9181f-fefa-4cde-81ea-c5e59433606c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:58:37 np0005629333 nova_compute[244014]: 2026-02-25 12:58:37.762 244018 DEBUG nova.compute.manager [req-e05865a4-4bb6-4ee7-b630-96649f5090f0 req-dc23ee5c-9414-456f-9ac4-6955c41a1976 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Received event network-vif-unplugged-a4d9181f-fefa-4cde-81ea-c5e59433606c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:58:37 np0005629333 nova_compute[244014]: 2026-02-25 12:58:37.799 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 6185e497-8422-4a5f-a98a-865484d53d4f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:58:37 np0005629333 nova_compute[244014]: 2026-02-25 12:58:37.799 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance eadab8d0-1f24-4ace-b7b4-10329df53f12 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 07:58:37 np0005629333 nova_compute[244014]: 2026-02-25 12:58:37.800 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 07:58:37 np0005629333 nova_compute[244014]: 2026-02-25 12:58:37.800 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 07:58:37 np0005629333 nova_compute[244014]: 2026-02-25 12:58:37.875 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:58:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:58:38 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4244823206' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:58:38 np0005629333 nova_compute[244014]: 2026-02-25 12:58:38.471 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:58:38 np0005629333 nova_compute[244014]: 2026-02-25 12:58:38.476 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:58:38 np0005629333 nova_compute[244014]: 2026-02-25 12:58:38.501 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:58:38 np0005629333 nova_compute[244014]: 2026-02-25 12:58:38.535 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 07:58:38 np0005629333 nova_compute[244014]: 2026-02-25 12:58:38.535 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:58:38 np0005629333 nova_compute[244014]: 2026-02-25 12:58:38.614 244018 DEBUG nova.network.neutron [req-65addcf2-9991-43cf-a80b-cd1c5343fcea req-7ca4af24-e5fc-4472-b75e-6b91c79ef2bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Updated VIF entry in instance network info cache for port a4d9181f-fefa-4cde-81ea-c5e59433606c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:58:38 np0005629333 nova_compute[244014]: 2026-02-25 12:58:38.615 244018 DEBUG nova.network.neutron [req-65addcf2-9991-43cf-a80b-cd1c5343fcea req-7ca4af24-e5fc-4472-b75e-6b91c79ef2bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Updating instance_info_cache with network_info: [{"id": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "address": "fa:16:3e:ef:c2:bb", "network": {"id": "f23f1675-b7ff-4265-a011-0912c637d746", "bridge": "br-int", "label": "tempest-network-smoke--923808756", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa4d9181f-fe", "ovs_interfaceid": "a4d9181f-fefa-4cde-81ea-c5e59433606c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:58:38 np0005629333 nova_compute[244014]: 2026-02-25 12:58:38.643 244018 DEBUG oslo_concurrency.lockutils [req-65addcf2-9991-43cf-a80b-cd1c5343fcea req-7ca4af24-e5fc-4472-b75e-6b91c79ef2bb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-6185e497-8422-4a5f-a98a-865484d53d4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:58:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:58:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2342: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Feb 25 07:58:39 np0005629333 nova_compute[244014]: 2026-02-25 12:58:39.531 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:58:39 np0005629333 nova_compute[244014]: 2026-02-25 12:58:39.715 244018 DEBUG nova.network.neutron [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Successfully updated port: 98c45ccf-8678-4765-bf66-7a9e3dae8cac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:58:39 np0005629333 nova_compute[244014]: 2026-02-25 12:58:39.735 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Acquiring lock "refresh_cache-eadab8d0-1f24-4ace-b7b4-10329df53f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:58:39 np0005629333 nova_compute[244014]: 2026-02-25 12:58:39.736 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Acquired lock "refresh_cache-eadab8d0-1f24-4ace-b7b4-10329df53f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:58:39 np0005629333 nova_compute[244014]: 2026-02-25 12:58:39.736 244018 DEBUG nova.network.neutron [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:58:39 np0005629333 nova_compute[244014]: 2026-02-25 12:58:39.898 244018 DEBUG nova.compute.manager [req-b87d0727-a57e-428e-bce5-c9664dfaa02e req-a6b8716c-c376-4752-a903-00512fb954db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Received event network-vif-plugged-a4d9181f-fefa-4cde-81ea-c5e59433606c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:58:39 np0005629333 nova_compute[244014]: 2026-02-25 12:58:39.898 244018 DEBUG oslo_concurrency.lockutils [req-b87d0727-a57e-428e-bce5-c9664dfaa02e req-a6b8716c-c376-4752-a903-00512fb954db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:58:39 np0005629333 nova_compute[244014]: 2026-02-25 12:58:39.899 244018 DEBUG oslo_concurrency.lockutils [req-b87d0727-a57e-428e-bce5-c9664dfaa02e req-a6b8716c-c376-4752-a903-00512fb954db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:58:39 np0005629333 nova_compute[244014]: 2026-02-25 12:58:39.899 244018 DEBUG oslo_concurrency.lockutils [req-b87d0727-a57e-428e-bce5-c9664dfaa02e req-a6b8716c-c376-4752-a903-00512fb954db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:58:39 np0005629333 nova_compute[244014]: 2026-02-25 12:58:39.900 244018 DEBUG nova.compute.manager [req-b87d0727-a57e-428e-bce5-c9664dfaa02e req-a6b8716c-c376-4752-a903-00512fb954db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] No waiting events found dispatching network-vif-plugged-a4d9181f-fefa-4cde-81ea-c5e59433606c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:58:39 np0005629333 nova_compute[244014]: 2026-02-25 12:58:39.900 244018 WARNING nova.compute.manager [req-b87d0727-a57e-428e-bce5-c9664dfaa02e req-a6b8716c-c376-4752-a903-00512fb954db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Received unexpected event network-vif-plugged-a4d9181f-fefa-4cde-81ea-c5e59433606c for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:58:39 np0005629333 nova_compute[244014]: 2026-02-25 12:58:39.901 244018 DEBUG nova.compute.manager [req-b87d0727-a57e-428e-bce5-c9664dfaa02e req-a6b8716c-c376-4752-a903-00512fb954db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-changed-98c45ccf-8678-4765-bf66-7a9e3dae8cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:58:39 np0005629333 nova_compute[244014]: 2026-02-25 12:58:39.901 244018 DEBUG nova.compute.manager [req-b87d0727-a57e-428e-bce5-c9664dfaa02e req-a6b8716c-c376-4752-a903-00512fb954db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Refreshing instance network info cache due to event network-changed-98c45ccf-8678-4765-bf66-7a9e3dae8cac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:58:39 np0005629333 nova_compute[244014]: 2026-02-25 12:58:39.902 244018 DEBUG oslo_concurrency.lockutils [req-b87d0727-a57e-428e-bce5-c9664dfaa02e req-a6b8716c-c376-4752-a903-00512fb954db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-eadab8d0-1f24-4ace-b7b4-10329df53f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:58:39 np0005629333 nova_compute[244014]: 2026-02-25 12:58:39.903 244018 DEBUG nova.network.neutron [-] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:58:39 np0005629333 nova_compute[244014]: 2026-02-25 12:58:39.931 244018 DEBUG nova.network.neutron [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:58:39 np0005629333 nova_compute[244014]: 2026-02-25 12:58:39.938 244018 INFO nova.compute.manager [-] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Took 3.23 seconds to deallocate network for instance.#033[00m
Feb 25 07:58:39 np0005629333 nova_compute[244014]: 2026-02-25 12:58:39.995 244018 DEBUG oslo_concurrency.lockutils [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:58:39 np0005629333 nova_compute[244014]: 2026-02-25 12:58:39.996 244018 DEBUG oslo_concurrency.lockutils [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:58:40 np0005629333 nova_compute[244014]: 2026-02-25 12:58:40.070 244018 DEBUG oslo_concurrency.processutils [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:58:40 np0005629333 nova_compute[244014]: 2026-02-25 12:58:40.573 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:58:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2461465610' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:58:40 np0005629333 nova_compute[244014]: 2026-02-25 12:58:40.655 244018 DEBUG oslo_concurrency.processutils [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:58:40 np0005629333 nova_compute[244014]: 2026-02-25 12:58:40.671 244018 DEBUG nova.compute.provider_tree [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:58:40 np0005629333 nova_compute[244014]: 2026-02-25 12:58:40.693 244018 DEBUG nova.scheduler.client.report [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:58:40 np0005629333 nova_compute[244014]: 2026-02-25 12:58:40.735 244018 DEBUG oslo_concurrency.lockutils [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:58:40 np0005629333 nova_compute[244014]: 2026-02-25 12:58:40.774 244018 INFO nova.scheduler.client.report [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Deleted allocations for instance 6185e497-8422-4a5f-a98a-865484d53d4f#033[00m
Feb 25 07:58:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2343: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 25 07:58:40 np0005629333 nova_compute[244014]: 2026-02-25 12:58:40.847 244018 DEBUG oslo_concurrency.lockutils [None req-db1b790b-af21-48c3-8ec9-c65e88925543 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "6185e497-8422-4a5f-a98a-865484d53d4f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.965s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:58:40 np0005629333 nova_compute[244014]: 2026-02-25 12:58:40.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:58:41 np0005629333 nova_compute[244014]: 2026-02-25 12:58:41.186 244018 DEBUG nova.network.neutron [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Updating instance_info_cache with network_info: [{"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:58:41 np0005629333 nova_compute[244014]: 2026-02-25 12:58:41.211 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Releasing lock "refresh_cache-eadab8d0-1f24-4ace-b7b4-10329df53f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:58:41 np0005629333 nova_compute[244014]: 2026-02-25 12:58:41.212 244018 DEBUG nova.compute.manager [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Instance network_info: |[{"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:58:41 np0005629333 nova_compute[244014]: 2026-02-25 12:58:41.213 244018 DEBUG oslo_concurrency.lockutils [req-b87d0727-a57e-428e-bce5-c9664dfaa02e req-a6b8716c-c376-4752-a903-00512fb954db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-eadab8d0-1f24-4ace-b7b4-10329df53f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:58:41 np0005629333 nova_compute[244014]: 2026-02-25 12:58:41.213 244018 DEBUG nova.network.neutron [req-b87d0727-a57e-428e-bce5-c9664dfaa02e req-a6b8716c-c376-4752-a903-00512fb954db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Refreshing network info cache for port 98c45ccf-8678-4765-bf66-7a9e3dae8cac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:58:41 np0005629333 nova_compute[244014]: 2026-02-25 12:58:41.217 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Start _get_guest_xml network_info=[{"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:58:41 np0005629333 nova_compute[244014]: 2026-02-25 12:58:41.218 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:41 np0005629333 nova_compute[244014]: 2026-02-25 12:58:41.223 244018 WARNING nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:58:41 np0005629333 nova_compute[244014]: 2026-02-25 12:58:41.228 244018 DEBUG nova.virt.libvirt.host [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:58:41 np0005629333 nova_compute[244014]: 2026-02-25 12:58:41.229 244018 DEBUG nova.virt.libvirt.host [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:58:41 np0005629333 nova_compute[244014]: 2026-02-25 12:58:41.237 244018 DEBUG nova.virt.libvirt.host [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:58:41 np0005629333 nova_compute[244014]: 2026-02-25 12:58:41.238 244018 DEBUG nova.virt.libvirt.host [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:58:41 np0005629333 nova_compute[244014]: 2026-02-25 12:58:41.239 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:58:41 np0005629333 nova_compute[244014]: 2026-02-25 12:58:41.239 244018 DEBUG nova.virt.hardware [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:58:41 np0005629333 nova_compute[244014]: 2026-02-25 12:58:41.240 244018 DEBUG nova.virt.hardware [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:58:41 np0005629333 nova_compute[244014]: 2026-02-25 12:58:41.240 244018 DEBUG nova.virt.hardware [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:58:41 np0005629333 nova_compute[244014]: 2026-02-25 12:58:41.240 244018 DEBUG nova.virt.hardware [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:58:41 np0005629333 nova_compute[244014]: 2026-02-25 12:58:41.240 244018 DEBUG nova.virt.hardware [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:58:41 np0005629333 nova_compute[244014]: 2026-02-25 12:58:41.241 244018 DEBUG nova.virt.hardware [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:58:41 np0005629333 nova_compute[244014]: 2026-02-25 12:58:41.241 244018 DEBUG nova.virt.hardware [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:58:41 np0005629333 nova_compute[244014]: 2026-02-25 12:58:41.241 244018 DEBUG nova.virt.hardware [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:58:41 np0005629333 nova_compute[244014]: 2026-02-25 12:58:41.242 244018 DEBUG nova.virt.hardware [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:58:41 np0005629333 nova_compute[244014]: 2026-02-25 12:58:41.242 244018 DEBUG nova.virt.hardware [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:58:41 np0005629333 nova_compute[244014]: 2026-02-25 12:58:41.242 244018 DEBUG nova.virt.hardware [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 07:58:41 np0005629333 nova_compute[244014]: 2026-02-25 12:58:41.246 244018 DEBUG oslo_concurrency.processutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:58:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:58:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2680388769' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:58:41 np0005629333 nova_compute[244014]: 2026-02-25 12:58:41.911 244018 DEBUG oslo_concurrency.processutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.664s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:58:41 np0005629333 nova_compute[244014]: 2026-02-25 12:58:41.949 244018 DEBUG nova.storage.rbd_utils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] rbd image eadab8d0-1f24-4ace-b7b4-10329df53f12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:58:41 np0005629333 nova_compute[244014]: 2026-02-25 12:58:41.955 244018 DEBUG oslo_concurrency.processutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.011 244018 DEBUG nova.compute.manager [req-e7d23be8-e956-43ef-8d00-f25e0d830351 req-f954cad6-38d0-4b87-9148-da0c887d2bbe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Received event network-vif-deleted-a4d9181f-fefa-4cde-81ea-c5e59433606c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:58:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:58:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2110053394' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.576 244018 DEBUG oslo_concurrency.processutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.579 244018 DEBUG nova.virt.libvirt.vif [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:58:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1391903363',display_name='tempest-TestServerAdvancedOps-server-1391903363',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1391903363',id=142,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e20a44a5f2664c25a996daf9d0d14b97',ramdisk_id='',reservation_id='r-fckrf11t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-400846090',owner_user_name='tempest-TestServerAdvancedOps-400846090-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:58:34Z,user_data=None,user_id='2236154a16ea4715a79c2e6f36e22e83',uuid=eadab8d0-1f24-4ace-b7b4-10329df53f12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.580 244018 DEBUG nova.network.os_vif_util [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Converting VIF {"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.581 244018 DEBUG nova.network.os_vif_util [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:62:35,bridge_name='br-int',has_traffic_filtering=True,id=98c45ccf-8678-4765-bf66-7a9e3dae8cac,network=Network(bcd4fda6-15a9-498e-a842-640e098deac1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c45ccf-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.583 244018 DEBUG nova.objects.instance [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lazy-loading 'pci_devices' on Instance uuid eadab8d0-1f24-4ace-b7b4-10329df53f12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.605 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:58:42 np0005629333 nova_compute[244014]:  <uuid>eadab8d0-1f24-4ace-b7b4-10329df53f12</uuid>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:  <name>instance-0000008e</name>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestServerAdvancedOps-server-1391903363</nova:name>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:58:41</nova:creationTime>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:58:42 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:        <nova:user uuid="2236154a16ea4715a79c2e6f36e22e83">tempest-TestServerAdvancedOps-400846090-project-member</nova:user>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:        <nova:project uuid="e20a44a5f2664c25a996daf9d0d14b97">tempest-TestServerAdvancedOps-400846090</nova:project>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:        <nova:port uuid="98c45ccf-8678-4765-bf66-7a9e3dae8cac">
Feb 25 07:58:42 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <entry name="serial">eadab8d0-1f24-4ace-b7b4-10329df53f12</entry>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <entry name="uuid">eadab8d0-1f24-4ace-b7b4-10329df53f12</entry>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/eadab8d0-1f24-4ace-b7b4-10329df53f12_disk">
Feb 25 07:58:42 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:58:42 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/eadab8d0-1f24-4ace-b7b4-10329df53f12_disk.config">
Feb 25 07:58:42 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:58:42 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:25:62:35"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <target dev="tap98c45ccf-86"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/eadab8d0-1f24-4ace-b7b4-10329df53f12/console.log" append="off"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:58:42 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:58:42 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:58:42 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:58:42 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
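The block above is the complete libvirt guest XML Nova rendered for instance eadab8d0-1f24-4ace-b7b4-10329df53f12: a q35 machine whose root disk and config-drive cdrom are both served from the Ceph "vms" pool over RBD, a virtio NIC destined for br-int, and Nova's own bookkeeping (flavor, owner, ports) under the nova: metadata namespace. A minimal sketch of reading that metadata back out of such a dump, assuming the XML was saved as domain.xml and that the xmlns:nova declaration (cut off earlier in the dump) is Nova's usual http://openstack.org/xmlns/libvirt/nova/1.1 URI:

```python
# Sketch only, not Nova code: extract the nova: metadata from a saved
# libvirt guest XML like the one logged above. The namespace URI is an
# assumption; check the xmlns:nova declaration at the top of the dump.
import xml.etree.ElementTree as ET

NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

root = ET.parse("domain.xml").getroot()          # hypothetical local copy
inst = root.find("metadata/nova:instance", NS)

flavor = inst.find("nova:flavor", NS)
print("vcpus:", flavor.findtext("nova:vcpus", namespaces=NS),
      "memory_mb:", flavor.findtext("nova:memory", namespaces=NS))

owner = inst.find("nova:owner", NS)
print("project:", owner.find("nova:project", NS).text)

for port in inst.findall(".//nova:port", NS):
    for ip in port.findall("nova:ip", NS):
        print("port", port.get("uuid"), "->", ip.get("address"))
```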
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.607 244018 DEBUG nova.compute.manager [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Preparing to wait for external event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.607 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.608 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.608 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.609 244018 DEBUG nova.virt.libvirt.vif [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:58:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1391903363',display_name='tempest-TestServerAdvancedOps-server-1391903363',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1391903363',id=142,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e20a44a5f2664c25a996daf9d0d14b97',ramdisk_id='',reservation_id='r-fckrf11t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-400846090',owner_user_name='tempest-TestServerAdvancedOps-400846090-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:58:34Z,user_data=None,user_id='2236154a16ea4715a79c2e6f36e22e83',uuid=eadab8d0-1f24-4ace-b7b4-10329df53f12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.610 244018 DEBUG nova.network.os_vif_util [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Converting VIF {"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.611 244018 DEBUG nova.network.os_vif_util [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:62:35,bridge_name='br-int',has_traffic_filtering=True,id=98c45ccf-8678-4765-bf66-7a9e3dae8cac,network=Network(bcd4fda6-15a9-498e-a842-640e098deac1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c45ccf-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.612 244018 DEBUG os_vif [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:62:35,bridge_name='br-int',has_traffic_filtering=True,id=98c45ccf-8678-4765-bf66-7a9e3dae8cac,network=Network(bcd4fda6-15a9-498e-a842-640e098deac1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c45ccf-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.612 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.613 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.614 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.617 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.618 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98c45ccf-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.619 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap98c45ccf-86, col_values=(('external_ids', {'iface-id': '98c45ccf-8678-4765-bf66-7a9e3dae8cac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:25:62:35', 'vm-uuid': 'eadab8d0-1f24-4ace-b7b4-10329df53f12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.621 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:42 np0005629333 NetworkManager[49836]: <info>  [1772024322.6225] manager: (tap98c45ccf-86): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/620)
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.624 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.627 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.628 244018 INFO os_vif [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:62:35,bridge_name='br-int',has_traffic_filtering=True,id=98c45ccf-8678-4765-bf66-7a9e3dae8cac,network=Network(bcd4fda6-15a9-498e-a842-640e098deac1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c45ccf-86')#033[00m
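The three ovsdbapp commands logged just before this (AddBridgeCommand, AddPortCommand, DbSetCommand) make up the plug: ensure br-int exists, attach the tap device, and stamp the Interface row with external_ids so ovn-controller can match the interface to the Neutron port binding. A standalone sketch of the same transactions via ovsdbapp, using the values from this log (illustration only, not os-vif itself; the local ovsdb socket path is an assumption):

```python
# Sketch of the ovsdbapp transaction pattern visible in the log above.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

OVSDB = "unix:/run/openvswitch/db.sock"  # assumed ovsdb-server socket

idl = connection.OvsdbIdl.from_server(OVSDB, "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

# AddBridgeCommand: a no-op here ("Transaction caused no change") since
# br-int already exists on a running chassis.
api.add_br("br-int", may_exist=True, datapath_type="system").execute(
    check_error=True)

# AddPortCommand + DbSetCommand in one transaction, as in the log.
with api.transaction(check_error=True) as txn:
    txn.add(api.add_port("br-int", "tap98c45ccf-86", may_exist=True))
    txn.add(api.db_set(
        "Interface", "tap98c45ccf-86",
        ("external_ids", {
            "iface-id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac",
            "iface-status": "active",
            "attached-mac": "fa:16:3e:25:62:35",
            "vm-uuid": "eadab8d0-1f24-4ace-b7b4-10329df53f12",
        })))
```

The iface-id written here is what ovn-controller matches against the Southbound Port_Binding a couple of seconds later ("Claiming lport 98c45ccf-...").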
Feb 25 07:58:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:58:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:58:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:58:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:58:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00036222554203308425 of space, bias 1.0, pg target 0.10866766260992528 quantized to 32 (current 32)
Feb 25 07:58:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:58:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:58:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:58:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:58:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:58:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024942606386747445 of space, bias 1.0, pg target 0.7482781916024234 quantized to 32 (current 32)
Feb 25 07:58:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:58:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3700257818191368e-06 of space, bias 4.0, pg target 0.0016440309381829641 quantized to 16 (current 16)
Feb 25 07:58:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:58:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:58:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:58:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:58:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:58:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:58:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:58:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:58:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:58:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 07:58:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2344: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
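The pg_autoscaler targets above follow directly from the printed usage ratios: target = capacity_ratio × bias × (number of OSDs × mon_target_pg_per_osd). With this cluster's 3 OSDs (the 60 GiB total in the pgmap line) and the default mon_target_pg_per_osd of 100, the multiplier is 300, which reproduces every logged value; the module then quantizes to a power of two, and here every pool is left at its current pg_num. A sketch of the raw arithmetic, under those assumptions (not the mgr module itself, whose rounding and adjustment-threshold logic is omitted):

```python
# Reproduce the "pg target" numbers in the pg_autoscaler lines above.
# Assumes 3 OSDs and the default mon_target_pg_per_osd = 100.
POOLS = [
    (".mgr",               7.185749983720779e-06,  1.0),
    ("vms",                0.00036222554203308425, 1.0),
    ("images",             0.0024942606386747445,  1.0),
    ("cephfs.cephfs.meta", 1.3700257818191368e-06, 4.0),
]

OSDS, TARGET_PER_OSD = 3, 100
for name, capacity_ratio, bias in POOLS:
    target = capacity_ratio * bias * OSDS * TARGET_PER_OSD
    print(f"{name}: pg target {target}")
# e.g. images: 0.0024942606386747445 * 300 = 0.7482781916024234,
# exactly the value logged above.
```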
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.837 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.837 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.838 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] No VIF found with MAC fa:16:3e:25:62:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.838 244018 INFO nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Using config drive#033[00m
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.869 244018 DEBUG nova.storage.rbd_utils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] rbd image eadab8d0-1f24-4ace-b7b4-10329df53f12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:58:42 np0005629333 nova_compute[244014]: 2026-02-25 12:58:42.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:58:43 np0005629333 nova_compute[244014]: 2026-02-25 12:58:43.638 244018 DEBUG nova.network.neutron [req-b87d0727-a57e-428e-bce5-c9664dfaa02e req-a6b8716c-c376-4752-a903-00512fb954db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Updated VIF entry in instance network info cache for port 98c45ccf-8678-4765-bf66-7a9e3dae8cac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:58:43 np0005629333 nova_compute[244014]: 2026-02-25 12:58:43.639 244018 DEBUG nova.network.neutron [req-b87d0727-a57e-428e-bce5-c9664dfaa02e req-a6b8716c-c376-4752-a903-00512fb954db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Updating instance_info_cache with network_info: [{"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:58:43 np0005629333 nova_compute[244014]: 2026-02-25 12:58:43.688 244018 DEBUG oslo_concurrency.lockutils [req-b87d0727-a57e-428e-bce5-c9664dfaa02e req-a6b8716c-c376-4752-a903-00512fb954db 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-eadab8d0-1f24-4ace-b7b4-10329df53f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:58:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:58:43 np0005629333 nova_compute[244014]: 2026-02-25 12:58:43.751 244018 INFO nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Creating config drive at /var/lib/nova/instances/eadab8d0-1f24-4ace-b7b4-10329df53f12/disk.config#033[00m
Feb 25 07:58:43 np0005629333 nova_compute[244014]: 2026-02-25 12:58:43.757 244018 DEBUG oslo_concurrency.processutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eadab8d0-1f24-4ace-b7b4-10329df53f12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpstx3eitn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:58:43 np0005629333 nova_compute[244014]: 2026-02-25 12:58:43.901 244018 DEBUG oslo_concurrency.processutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eadab8d0-1f24-4ace-b7b4-10329df53f12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpstx3eitn" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:58:43 np0005629333 nova_compute[244014]: 2026-02-25 12:58:43.941 244018 DEBUG nova.storage.rbd_utils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] rbd image eadab8d0-1f24-4ace-b7b4-10329df53f12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:58:43 np0005629333 nova_compute[244014]: 2026-02-25 12:58:43.948 244018 DEBUG oslo_concurrency.processutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eadab8d0-1f24-4ace-b7b4-10329df53f12/disk.config eadab8d0-1f24-4ace-b7b4-10329df53f12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:58:44 np0005629333 nova_compute[244014]: 2026-02-25 12:58:44.125 244018 DEBUG oslo_concurrency.processutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eadab8d0-1f24-4ace-b7b4-10329df53f12/disk.config eadab8d0-1f24-4ace-b7b4-10329df53f12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:58:44 np0005629333 nova_compute[244014]: 2026-02-25 12:58:44.126 244018 INFO nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Deleting local config drive /var/lib/nova/instances/eadab8d0-1f24-4ace-b7b4-10329df53f12/disk.config because it was imported into RBD.#033[00m
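The config-drive path is visible end to end in the lines above: mkisofs packs the staged metadata tree into an ISO9660 image labeled config-2 (with Joliet -J and Rock Ridge -r extensions), rbd import copies it into the vms pool as <uuid>_disk.config, which is exactly the cdrom source in the guest XML earlier, and the local file is then deleted. The same two commands replayed as a sketch (the staging directory is the temp dir from the log; the keyring behind --id openstack must be readable):

```python
# Replay of the mkisofs + rbd import sequence logged above (sketch).
import os
import subprocess

uuid = "eadab8d0-1f24-4ace-b7b4-10329df53f12"
iso = f"/var/lib/nova/instances/{uuid}/disk.config"
staging = "/tmp/tmpstx3eitn"  # metadata tree staged by Nova

# Build the config drive ISO, volume label "config-2".
subprocess.run(
    ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
     "-allow-multidot", "-l", "-publisher",
     "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
     "-quiet", "-J", "-r", "-V", "config-2", staging],
    check=True)

# Import it into the vms pool as a format-2 RBD image.
subprocess.run(
    ["rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
     "--image-format=2", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    check=True)

os.remove(iso)  # "Deleting local config drive ... imported into RBD"
```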
Feb 25 07:58:44 np0005629333 kernel: tap98c45ccf-86: entered promiscuous mode
Feb 25 07:58:44 np0005629333 NetworkManager[49836]: <info>  [1772024324.1965] manager: (tap98c45ccf-86): new Tun device (/org/freedesktop/NetworkManager/Devices/621)
Feb 25 07:58:44 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:44Z|01503|binding|INFO|Claiming lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac for this chassis.
Feb 25 07:58:44 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:44Z|01504|binding|INFO|98c45ccf-8678-4765-bf66-7a9e3dae8cac: Claiming fa:16:3e:25:62:35 10.100.0.7
Feb 25 07:58:44 np0005629333 nova_compute[244014]: 2026-02-25 12:58:44.198 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:44 np0005629333 nova_compute[244014]: 2026-02-25 12:58:44.225 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:44 np0005629333 nova_compute[244014]: 2026-02-25 12:58:44.232 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:44 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:44Z|01505|binding|INFO|Setting lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac ovn-installed in OVS
Feb 25 07:58:44 np0005629333 nova_compute[244014]: 2026-02-25 12:58:44.236 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:44 np0005629333 systemd-machined[210048]: New machine qemu-174-instance-0000008e.
Feb 25 07:58:44 np0005629333 systemd[1]: Started Virtual Machine qemu-174-instance-0000008e.
Feb 25 07:58:44 np0005629333 systemd-udevd[373124]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:58:44 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:44Z|01506|binding|INFO|Setting lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac up in Southbound
Feb 25 07:58:44 np0005629333 NetworkManager[49836]: <info>  [1772024324.3001] device (tap98c45ccf-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:58:44 np0005629333 NetworkManager[49836]: <info>  [1772024324.3007] device (tap98c45ccf-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:58:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:44.296 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:62:35 10.100.0.7'], port_security=['fa:16:3e:25:62:35 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'eadab8d0-1f24-4ace-b7b4-10329df53f12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bcd4fda6-15a9-498e-a842-640e098deac1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e20a44a5f2664c25a996daf9d0d14b97', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ec2ac6ed-e266-4c96-886a-7f586d569550', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f782b99-9661-4693-8fe6-ad5ceac2a9fa, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=98c45ccf-8678-4765-bf66-7a9e3dae8cac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:58:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:44.298 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 98c45ccf-8678-4765-bf66-7a9e3dae8cac in datapath bcd4fda6-15a9-498e-a842-640e098deac1 bound to our chassis#033[00m
Feb 25 07:58:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:44.299 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network bcd4fda6-15a9-498e-a842-640e098deac1 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 25 07:58:44 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:44.302 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e76ffb16-da0b-4d14-b57c-4c7441041a21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:58:44 np0005629333 podman[373097]: 2026-02-25 12:58:44.324941041 +0000 UTC m=+0.087621743 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 07:58:44 np0005629333 podman[373098]: 2026-02-25 12:58:44.379952513 +0000 UTC m=+0.140548786 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0)
Feb 25 07:58:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2345: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 25 07:58:44 np0005629333 nova_compute[244014]: 2026-02-25 12:58:44.830 244018 DEBUG nova.compute.manager [req-745c3461-4388-4a92-ae80-0e81113aef14 req-271d335c-ecb1-442b-87e1-e569ed334409 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:58:44 np0005629333 nova_compute[244014]: 2026-02-25 12:58:44.831 244018 DEBUG oslo_concurrency.lockutils [req-745c3461-4388-4a92-ae80-0e81113aef14 req-271d335c-ecb1-442b-87e1-e569ed334409 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:58:44 np0005629333 nova_compute[244014]: 2026-02-25 12:58:44.832 244018 DEBUG oslo_concurrency.lockutils [req-745c3461-4388-4a92-ae80-0e81113aef14 req-271d335c-ecb1-442b-87e1-e569ed334409 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:58:44 np0005629333 nova_compute[244014]: 2026-02-25 12:58:44.832 244018 DEBUG oslo_concurrency.lockutils [req-745c3461-4388-4a92-ae80-0e81113aef14 req-271d335c-ecb1-442b-87e1-e569ed334409 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:58:44 np0005629333 nova_compute[244014]: 2026-02-25 12:58:44.833 244018 DEBUG nova.compute.manager [req-745c3461-4388-4a92-ae80-0e81113aef14 req-271d335c-ecb1-442b-87e1-e569ed334409 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Processing event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:58:44 np0005629333 nova_compute[244014]: 2026-02-25 12:58:44.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:58:45 np0005629333 nova_compute[244014]: 2026-02-25 12:58:45.157 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024325.1570914, eadab8d0-1f24-4ace-b7b4-10329df53f12 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:58:45 np0005629333 nova_compute[244014]: 2026-02-25 12:58:45.158 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] VM Started (Lifecycle Event)#033[00m
Feb 25 07:58:45 np0005629333 nova_compute[244014]: 2026-02-25 12:58:45.162 244018 DEBUG nova.compute.manager [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
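The ordering here matters: the waiter for network-vif-plugged was registered at 12:58:42, before os-vif plugged the port, so the Neutron notification that arrived at 12:58:44 was matched immediately and the wait completed in 0 seconds. A second copy of the same event arrives at 12:58:46, after the waiter has been consumed, and is logged as unexpected (see the WARNING below). A minimal sketch of that prepare-then-wait pattern (illustration only, not Nova's implementation):

```python
# Register interest in an event *before* triggering the action that
# produces it, so a fast callback cannot be lost. Sketch only.
import threading

class InstanceEvents:
    def __init__(self):
        self._lock = threading.Lock()
        self._events = {}  # (instance, event_name) -> threading.Event

    def prepare(self, instance, name):
        with self._lock:  # cf. the "...-events" lock in the log
            return self._events.setdefault((instance, name),
                                           threading.Event())

    def pop(self, instance, name):
        with self._lock:
            return self._events.pop((instance, name), None)

events = InstanceEvents()
waiter = events.prepare("eadab8d0", "network-vif-plugged")
# ... plug the VIF; Neutron reports the event once OVN binds the port ...
ev = events.pop("eadab8d0", "network-vif-plugged")
if ev:
    ev.set()                  # first delivery completes the wait
waiter.wait(timeout=300)      # spawn blocks here until plugged
# A second delivery now finds nothing to pop -> "unexpected event".
```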
Feb 25 07:58:45 np0005629333 nova_compute[244014]: 2026-02-25 12:58:45.167 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:58:45 np0005629333 nova_compute[244014]: 2026-02-25 12:58:45.171 244018 INFO nova.virt.libvirt.driver [-] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Instance spawned successfully.#033[00m
Feb 25 07:58:45 np0005629333 nova_compute[244014]: 2026-02-25 12:58:45.172 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:58:45 np0005629333 nova_compute[244014]: 2026-02-25 12:58:45.350 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:58:45 np0005629333 nova_compute[244014]: 2026-02-25 12:58:45.359 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:58:45 np0005629333 nova_compute[244014]: 2026-02-25 12:58:45.366 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:58:45 np0005629333 nova_compute[244014]: 2026-02-25 12:58:45.367 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:58:45 np0005629333 nova_compute[244014]: 2026-02-25 12:58:45.368 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:58:45 np0005629333 nova_compute[244014]: 2026-02-25 12:58:45.369 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:58:45 np0005629333 nova_compute[244014]: 2026-02-25 12:58:45.370 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:58:45 np0005629333 nova_compute[244014]: 2026-02-25 12:58:45.371 244018 DEBUG nova.virt.libvirt.driver [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:58:45 np0005629333 nova_compute[244014]: 2026-02-25 12:58:45.409 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:58:45 np0005629333 nova_compute[244014]: 2026-02-25 12:58:45.410 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024325.1572623, eadab8d0-1f24-4ace-b7b4-10329df53f12 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:58:45 np0005629333 nova_compute[244014]: 2026-02-25 12:58:45.410 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:58:45 np0005629333 nova_compute[244014]: 2026-02-25 12:58:45.473 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:58:45 np0005629333 nova_compute[244014]: 2026-02-25 12:58:45.478 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024325.1663496, eadab8d0-1f24-4ace-b7b4-10329df53f12 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:58:45 np0005629333 nova_compute[244014]: 2026-02-25 12:58:45.479 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:58:45 np0005629333 nova_compute[244014]: 2026-02-25 12:58:45.539 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:58:45 np0005629333 nova_compute[244014]: 2026-02-25 12:58:45.543 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:58:45 np0005629333 nova_compute[244014]: 2026-02-25 12:58:45.553 244018 INFO nova.compute.manager [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Took 11.30 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:58:45 np0005629333 nova_compute[244014]: 2026-02-25 12:58:45.554 244018 DEBUG nova.compute.manager [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:58:45 np0005629333 nova_compute[244014]: 2026-02-25 12:58:45.574 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:45 np0005629333 nova_compute[244014]: 2026-02-25 12:58:45.618 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:58:45 np0005629333 nova_compute[244014]: 2026-02-25 12:58:45.684 244018 INFO nova.compute.manager [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Took 12.95 seconds to build instance.#033[00m
Feb 25 07:58:45 np0005629333 nova_compute[244014]: 2026-02-25 12:58:45.794 244018 DEBUG oslo_concurrency.lockutils [None req-840a09ff-ffd4-496b-a3ee-142c9a36500b 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:58:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2346: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 25 07:58:46 np0005629333 nova_compute[244014]: 2026-02-25 12:58:46.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:58:46 np0005629333 nova_compute[244014]: 2026-02-25 12:58:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:58:46 np0005629333 nova_compute[244014]: 2026-02-25 12:58:46.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 07:58:46 np0005629333 nova_compute[244014]: 2026-02-25 12:58:46.933 244018 DEBUG nova.compute.manager [req-9dc290eb-1dff-429d-8ce7-f739e5427a07 req-c652255e-11bf-41be-96c8-7806cea697e3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:58:46 np0005629333 nova_compute[244014]: 2026-02-25 12:58:46.933 244018 DEBUG oslo_concurrency.lockutils [req-9dc290eb-1dff-429d-8ce7-f739e5427a07 req-c652255e-11bf-41be-96c8-7806cea697e3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:58:46 np0005629333 nova_compute[244014]: 2026-02-25 12:58:46.934 244018 DEBUG oslo_concurrency.lockutils [req-9dc290eb-1dff-429d-8ce7-f739e5427a07 req-c652255e-11bf-41be-96c8-7806cea697e3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:58:46 np0005629333 nova_compute[244014]: 2026-02-25 12:58:46.934 244018 DEBUG oslo_concurrency.lockutils [req-9dc290eb-1dff-429d-8ce7-f739e5427a07 req-c652255e-11bf-41be-96c8-7806cea697e3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:58:46 np0005629333 nova_compute[244014]: 2026-02-25 12:58:46.934 244018 DEBUG nova.compute.manager [req-9dc290eb-1dff-429d-8ce7-f739e5427a07 req-c652255e-11bf-41be-96c8-7806cea697e3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] No waiting events found dispatching network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:58:46 np0005629333 nova_compute[244014]: 2026-02-25 12:58:46.935 244018 WARNING nova.compute.manager [req-9dc290eb-1dff-429d-8ce7-f739e5427a07 req-c652255e-11bf-41be-96c8-7806cea697e3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received unexpected event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac for instance with vm_state active and task_state None.#033[00m
Feb 25 07:58:47 np0005629333 nova_compute[244014]: 2026-02-25 12:58:47.524 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:58:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3785713222' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:58:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:58:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3785713222' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 07:58:47 np0005629333 nova_compute[244014]: 2026-02-25 12:58:47.572 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:47 np0005629333 nova_compute[244014]: 2026-02-25 12:58:47.621 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:47 np0005629333 nova_compute[244014]: 2026-02-25 12:58:47.878 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:58:48 np0005629333 nova_compute[244014]: 2026-02-25 12:58:48.195 244018 DEBUG nova.objects.instance [None req-962210e9-8602-41fe-a8bb-68938c7a4b0c 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lazy-loading 'pci_devices' on Instance uuid eadab8d0-1f24-4ace-b7b4-10329df53f12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:58:48 np0005629333 nova_compute[244014]: 2026-02-25 12:58:48.225 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024328.2256804, eadab8d0-1f24-4ace-b7b4-10329df53f12 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:58:48 np0005629333 nova_compute[244014]: 2026-02-25 12:58:48.226 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:58:48 np0005629333 nova_compute[244014]: 2026-02-25 12:58:48.259 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:58:48 np0005629333 nova_compute[244014]: 2026-02-25 12:58:48.263 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:58:48 np0005629333 nova_compute[244014]: 2026-02-25 12:58:48.286 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Feb 25 07:58:48 np0005629333 kernel: tap98c45ccf-86 (unregistering): left promiscuous mode
Feb 25 07:58:48 np0005629333 NetworkManager[49836]: <info>  [1772024328.5660] device (tap98c45ccf-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:58:48 np0005629333 nova_compute[244014]: 2026-02-25 12:58:48.570 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:48 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:48Z|01507|binding|INFO|Releasing lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac from this chassis (sb_readonly=0)
Feb 25 07:58:48 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:48Z|01508|binding|INFO|Setting lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac down in Southbound
Feb 25 07:58:48 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:48Z|01509|binding|INFO|Removing iface tap98c45ccf-86 ovn-installed in OVS
Feb 25 07:58:48 np0005629333 nova_compute[244014]: 2026-02-25 12:58:48.576 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:48 np0005629333 nova_compute[244014]: 2026-02-25 12:58:48.580 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:48.584 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:62:35 10.100.0.7'], port_security=['fa:16:3e:25:62:35 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'eadab8d0-1f24-4ace-b7b4-10329df53f12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bcd4fda6-15a9-498e-a842-640e098deac1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e20a44a5f2664c25a996daf9d0d14b97', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ec2ac6ed-e266-4c96-886a-7f586d569550', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f782b99-9661-4693-8fe6-ad5ceac2a9fa, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=98c45ccf-8678-4765-bf66-7a9e3dae8cac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:58:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:48.586 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 98c45ccf-8678-4765-bf66-7a9e3dae8cac in datapath bcd4fda6-15a9-498e-a842-640e098deac1 unbound from our chassis#033[00m
Feb 25 07:58:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:48.587 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network bcd4fda6-15a9-498e-a842-640e098deac1 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 25 07:58:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:48.587 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8ede2cd0-2c97-4b94-990d-130087a6cbc8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:58:48 np0005629333 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Feb 25 07:58:48 np0005629333 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008e.scope: Consumed 4.056s CPU time.
Feb 25 07:58:48 np0005629333 systemd-machined[210048]: Machine qemu-174-instance-0000008e terminated.
Feb 25 07:58:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:58:48 np0005629333 nova_compute[244014]: 2026-02-25 12:58:48.745 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:48 np0005629333 nova_compute[244014]: 2026-02-25 12:58:48.750 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:48 np0005629333 nova_compute[244014]: 2026-02-25 12:58:48.761 244018 DEBUG nova.compute.manager [None req-962210e9-8602-41fe-a8bb-68938c7a4b0c 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:58:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2347: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Feb 25 07:58:49 np0005629333 nova_compute[244014]: 2026-02-25 12:58:49.003 244018 DEBUG nova.compute.manager [req-3d1b125d-d65e-4f4a-89d5-d2d4530376be req-9469cdb6-ef3e-4eaa-84ba-db93c1b41ead 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-vif-unplugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:58:49 np0005629333 nova_compute[244014]: 2026-02-25 12:58:49.004 244018 DEBUG oslo_concurrency.lockutils [req-3d1b125d-d65e-4f4a-89d5-d2d4530376be req-9469cdb6-ef3e-4eaa-84ba-db93c1b41ead 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:58:49 np0005629333 nova_compute[244014]: 2026-02-25 12:58:49.004 244018 DEBUG oslo_concurrency.lockutils [req-3d1b125d-d65e-4f4a-89d5-d2d4530376be req-9469cdb6-ef3e-4eaa-84ba-db93c1b41ead 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:58:49 np0005629333 nova_compute[244014]: 2026-02-25 12:58:49.005 244018 DEBUG oslo_concurrency.lockutils [req-3d1b125d-d65e-4f4a-89d5-d2d4530376be req-9469cdb6-ef3e-4eaa-84ba-db93c1b41ead 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:58:49 np0005629333 nova_compute[244014]: 2026-02-25 12:58:49.005 244018 DEBUG nova.compute.manager [req-3d1b125d-d65e-4f4a-89d5-d2d4530376be req-9469cdb6-ef3e-4eaa-84ba-db93c1b41ead 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] No waiting events found dispatching network-vif-unplugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:58:49 np0005629333 nova_compute[244014]: 2026-02-25 12:58:49.006 244018 WARNING nova.compute.manager [req-3d1b125d-d65e-4f4a-89d5-d2d4530376be req-9469cdb6-ef3e-4eaa-84ba-db93c1b41ead 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received unexpected event network-vif-unplugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac for instance with vm_state suspended and task_state None.#033[00m
Feb 25 07:58:50 np0005629333 nova_compute[244014]: 2026-02-25 12:58:50.376 244018 INFO nova.compute.manager [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Resuming#033[00m
Feb 25 07:58:50 np0005629333 nova_compute[244014]: 2026-02-25 12:58:50.378 244018 DEBUG nova.objects.instance [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lazy-loading 'flavor' on Instance uuid eadab8d0-1f24-4ace-b7b4-10329df53f12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:58:50 np0005629333 nova_compute[244014]: 2026-02-25 12:58:50.425 244018 DEBUG oslo_concurrency.lockutils [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Acquiring lock "refresh_cache-eadab8d0-1f24-4ace-b7b4-10329df53f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:58:50 np0005629333 nova_compute[244014]: 2026-02-25 12:58:50.426 244018 DEBUG oslo_concurrency.lockutils [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Acquired lock "refresh_cache-eadab8d0-1f24-4ace-b7b4-10329df53f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:58:50 np0005629333 nova_compute[244014]: 2026-02-25 12:58:50.426 244018 DEBUG nova.network.neutron [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:58:50 np0005629333 nova_compute[244014]: 2026-02-25 12:58:50.576 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2348: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 07:58:51 np0005629333 nova_compute[244014]: 2026-02-25 12:58:51.116 244018 DEBUG nova.compute.manager [req-9188ae97-7d87-490c-8270-9a438cb3f71b req-da1dc90b-8d7a-4b54-98de-5cf5a3d3911f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:58:51 np0005629333 nova_compute[244014]: 2026-02-25 12:58:51.116 244018 DEBUG oslo_concurrency.lockutils [req-9188ae97-7d87-490c-8270-9a438cb3f71b req-da1dc90b-8d7a-4b54-98de-5cf5a3d3911f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:58:51 np0005629333 nova_compute[244014]: 2026-02-25 12:58:51.116 244018 DEBUG oslo_concurrency.lockutils [req-9188ae97-7d87-490c-8270-9a438cb3f71b req-da1dc90b-8d7a-4b54-98de-5cf5a3d3911f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:58:51 np0005629333 nova_compute[244014]: 2026-02-25 12:58:51.117 244018 DEBUG oslo_concurrency.lockutils [req-9188ae97-7d87-490c-8270-9a438cb3f71b req-da1dc90b-8d7a-4b54-98de-5cf5a3d3911f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:58:51 np0005629333 nova_compute[244014]: 2026-02-25 12:58:51.117 244018 DEBUG nova.compute.manager [req-9188ae97-7d87-490c-8270-9a438cb3f71b req-da1dc90b-8d7a-4b54-98de-5cf5a3d3911f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] No waiting events found dispatching network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:58:51 np0005629333 nova_compute[244014]: 2026-02-25 12:58:51.117 244018 WARNING nova.compute.manager [req-9188ae97-7d87-490c-8270-9a438cb3f71b req-da1dc90b-8d7a-4b54-98de-5cf5a3d3911f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received unexpected event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac for instance with vm_state suspended and task_state resuming.#033[00m
Feb 25 07:58:51 np0005629333 nova_compute[244014]: 2026-02-25 12:58:51.126 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024316.1253798, 6185e497-8422-4a5f-a98a-865484d53d4f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:58:51 np0005629333 nova_compute[244014]: 2026-02-25 12:58:51.126 244018 INFO nova.compute.manager [-] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:58:51 np0005629333 nova_compute[244014]: 2026-02-25 12:58:51.235 244018 DEBUG nova.compute.manager [None req-8e7fd769-c213-42ab-ab50-7d3d1d969a95 - - - - - -] [instance: 6185e497-8422-4a5f-a98a-865484d53d4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:58:52 np0005629333 nova_compute[244014]: 2026-02-25 12:58:52.589 244018 DEBUG nova.network.neutron [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Updating instance_info_cache with network_info: [{"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:58:52 np0005629333 nova_compute[244014]: 2026-02-25 12:58:52.609 244018 DEBUG oslo_concurrency.lockutils [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Releasing lock "refresh_cache-eadab8d0-1f24-4ace-b7b4-10329df53f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:58:52 np0005629333 nova_compute[244014]: 2026-02-25 12:58:52.616 244018 DEBUG nova.virt.libvirt.vif [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:58:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1391903363',display_name='tempest-TestServerAdvancedOps-server-1391903363',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1391903363',id=142,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:58:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='e20a44a5f2664c25a996daf9d0d14b97',ramdisk_id='',reservation_id='r-fckrf11t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-400846090',owner_user_name='tempest-TestServerAdvancedOps-400846090-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:58:48Z,user_data=None,user_id='2236154a16ea4715a79c2e6f36e22e83',uuid=eadab8d0-1f24-4ace-b7b4-10329df53f12,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:58:52 np0005629333 nova_compute[244014]: 2026-02-25 12:58:52.617 244018 DEBUG nova.network.os_vif_util [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Converting VIF {"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:58:52 np0005629333 nova_compute[244014]: 2026-02-25 12:58:52.618 244018 DEBUG nova.network.os_vif_util [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:62:35,bridge_name='br-int',has_traffic_filtering=True,id=98c45ccf-8678-4765-bf66-7a9e3dae8cac,network=Network(bcd4fda6-15a9-498e-a842-640e098deac1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c45ccf-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:58:52 np0005629333 nova_compute[244014]: 2026-02-25 12:58:52.619 244018 DEBUG os_vif [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:62:35,bridge_name='br-int',has_traffic_filtering=True,id=98c45ccf-8678-4765-bf66-7a9e3dae8cac,network=Network(bcd4fda6-15a9-498e-a842-640e098deac1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c45ccf-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:58:52 np0005629333 nova_compute[244014]: 2026-02-25 12:58:52.619 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:52 np0005629333 nova_compute[244014]: 2026-02-25 12:58:52.620 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:58:52 np0005629333 nova_compute[244014]: 2026-02-25 12:58:52.621 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:58:52 np0005629333 nova_compute[244014]: 2026-02-25 12:58:52.623 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:52 np0005629333 nova_compute[244014]: 2026-02-25 12:58:52.625 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:52 np0005629333 nova_compute[244014]: 2026-02-25 12:58:52.625 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98c45ccf-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:58:52 np0005629333 nova_compute[244014]: 2026-02-25 12:58:52.625 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap98c45ccf-86, col_values=(('external_ids', {'iface-id': '98c45ccf-8678-4765-bf66-7a9e3dae8cac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:25:62:35', 'vm-uuid': 'eadab8d0-1f24-4ace-b7b4-10329df53f12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:58:52 np0005629333 nova_compute[244014]: 2026-02-25 12:58:52.626 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:58:52 np0005629333 nova_compute[244014]: 2026-02-25 12:58:52.627 244018 INFO os_vif [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:62:35,bridge_name='br-int',has_traffic_filtering=True,id=98c45ccf-8678-4765-bf66-7a9e3dae8cac,network=Network(bcd4fda6-15a9-498e-a842-640e098deac1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c45ccf-86')#033[00m
Feb 25 07:58:52 np0005629333 nova_compute[244014]: 2026-02-25 12:58:52.651 244018 DEBUG nova.objects.instance [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lazy-loading 'numa_topology' on Instance uuid eadab8d0-1f24-4ace-b7b4-10329df53f12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:58:52 np0005629333 kernel: tap98c45ccf-86: entered promiscuous mode
Feb 25 07:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:52Z|01510|binding|INFO|Claiming lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac for this chassis.
Feb 25 07:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:52Z|01511|binding|INFO|98c45ccf-8678-4765-bf66-7a9e3dae8cac: Claiming fa:16:3e:25:62:35 10.100.0.7
Feb 25 07:58:52 np0005629333 nova_compute[244014]: 2026-02-25 12:58:52.727 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:52 np0005629333 NetworkManager[49836]: <info>  [1772024332.7281] manager: (tap98c45ccf-86): new Tun device (/org/freedesktop/NetworkManager/Devices/622)
Feb 25 07:58:52 np0005629333 nova_compute[244014]: 2026-02-25 12:58:52.729 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:52Z|01512|binding|INFO|Setting lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac up in Southbound
Feb 25 07:58:52 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:52Z|01513|binding|INFO|Setting lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac ovn-installed in OVS
Feb 25 07:58:52 np0005629333 nova_compute[244014]: 2026-02-25 12:58:52.736 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:52.737 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:62:35 10.100.0.7'], port_security=['fa:16:3e:25:62:35 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'eadab8d0-1f24-4ace-b7b4-10329df53f12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bcd4fda6-15a9-498e-a842-640e098deac1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e20a44a5f2664c25a996daf9d0d14b97', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'ec2ac6ed-e266-4c96-886a-7f586d569550', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f782b99-9661-4693-8fe6-ad5ceac2a9fa, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=98c45ccf-8678-4765-bf66-7a9e3dae8cac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:58:52 np0005629333 nova_compute[244014]: 2026-02-25 12:58:52.738 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:52 np0005629333 nova_compute[244014]: 2026-02-25 12:58:52.742 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:52.741 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 98c45ccf-8678-4765-bf66-7a9e3dae8cac in datapath bcd4fda6-15a9-498e-a842-640e098deac1 bound to our chassis#033[00m
Feb 25 07:58:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:52.743 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network bcd4fda6-15a9-498e-a842-640e098deac1 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 25 07:58:52 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:52.744 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[92abb6e4-a9fc-4ae9-84a4-daf541d8fdc8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:58:52 np0005629333 systemd-machined[210048]: New machine qemu-175-instance-0000008e.
Feb 25 07:58:52 np0005629333 systemd[1]: Started Virtual Machine qemu-175-instance-0000008e.
Feb 25 07:58:52 np0005629333 systemd-udevd[373231]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:58:52 np0005629333 NetworkManager[49836]: <info>  [1772024332.8109] device (tap98c45ccf-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:58:52 np0005629333 NetworkManager[49836]: <info>  [1772024332.8120] device (tap98c45ccf-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:58:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2349: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 07:58:53 np0005629333 nova_compute[244014]: 2026-02-25 12:58:53.209 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for eadab8d0-1f24-4ace-b7b4-10329df53f12 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 25 07:58:53 np0005629333 nova_compute[244014]: 2026-02-25 12:58:53.211 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024333.2091112, eadab8d0-1f24-4ace-b7b4-10329df53f12 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:58:53 np0005629333 nova_compute[244014]: 2026-02-25 12:58:53.211 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] VM Started (Lifecycle Event)#033[00m
Feb 25 07:58:53 np0005629333 nova_compute[244014]: 2026-02-25 12:58:53.220 244018 DEBUG nova.compute.manager [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:58:53 np0005629333 nova_compute[244014]: 2026-02-25 12:58:53.220 244018 DEBUG nova.objects.instance [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lazy-loading 'pci_devices' on Instance uuid eadab8d0-1f24-4ace-b7b4-10329df53f12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:58:53 np0005629333 nova_compute[244014]: 2026-02-25 12:58:53.242 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:58:53 np0005629333 nova_compute[244014]: 2026-02-25 12:58:53.244 244018 INFO nova.virt.libvirt.driver [-] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Instance running successfully.#033[00m
Feb 25 07:58:53 np0005629333 virtqemud[243235]: argument unsupported: QEMU guest agent is not configured
Feb 25 07:58:53 np0005629333 nova_compute[244014]: 2026-02-25 12:58:53.246 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:58:53 np0005629333 nova_compute[244014]: 2026-02-25 12:58:53.248 244018 DEBUG nova.virt.libvirt.guest [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Feb 25 07:58:53 np0005629333 nova_compute[244014]: 2026-02-25 12:58:53.248 244018 DEBUG nova.compute.manager [None req-4f2cd2db-33a5-4516-8dbc-33ac3b22fcc7 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:58:53 np0005629333 nova_compute[244014]: 2026-02-25 12:58:53.269 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Feb 25 07:58:53 np0005629333 nova_compute[244014]: 2026-02-25 12:58:53.270 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024333.2121406, eadab8d0-1f24-4ace-b7b4-10329df53f12 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:58:53 np0005629333 nova_compute[244014]: 2026-02-25 12:58:53.270 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:58:53 np0005629333 nova_compute[244014]: 2026-02-25 12:58:53.302 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:58:53 np0005629333 nova_compute[244014]: 2026-02-25 12:58:53.305 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:58:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:58:54 np0005629333 nova_compute[244014]: 2026-02-25 12:58:54.706 244018 DEBUG nova.compute.manager [req-b1bd4024-6e41-4e67-8c04-33796bc7174a req-b8967e38-f53c-40a6-ac61-4d1e49b91a46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:58:54 np0005629333 nova_compute[244014]: 2026-02-25 12:58:54.707 244018 DEBUG oslo_concurrency.lockutils [req-b1bd4024-6e41-4e67-8c04-33796bc7174a req-b8967e38-f53c-40a6-ac61-4d1e49b91a46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:58:54 np0005629333 nova_compute[244014]: 2026-02-25 12:58:54.707 244018 DEBUG oslo_concurrency.lockutils [req-b1bd4024-6e41-4e67-8c04-33796bc7174a req-b8967e38-f53c-40a6-ac61-4d1e49b91a46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:58:54 np0005629333 nova_compute[244014]: 2026-02-25 12:58:54.707 244018 DEBUG oslo_concurrency.lockutils [req-b1bd4024-6e41-4e67-8c04-33796bc7174a req-b8967e38-f53c-40a6-ac61-4d1e49b91a46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:58:54 np0005629333 nova_compute[244014]: 2026-02-25 12:58:54.707 244018 DEBUG nova.compute.manager [req-b1bd4024-6e41-4e67-8c04-33796bc7174a req-b8967e38-f53c-40a6-ac61-4d1e49b91a46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] No waiting events found dispatching network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:58:54 np0005629333 nova_compute[244014]: 2026-02-25 12:58:54.708 244018 WARNING nova.compute.manager [req-b1bd4024-6e41-4e67-8c04-33796bc7174a req-b8967e38-f53c-40a6-ac61-4d1e49b91a46 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received unexpected event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac for instance with vm_state active and task_state None.#033[00m
Feb 25 07:58:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2350: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 07:58:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:55.035 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:58:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:55.035 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:58:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:55.036 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:58:55 np0005629333 nova_compute[244014]: 2026-02-25 12:58:55.578 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:56 np0005629333 nova_compute[244014]: 2026-02-25 12:58:56.777 244018 DEBUG nova.compute.manager [req-925fe5f3-d393-4dc2-b483-9d6d1cd833c2 req-4a92ea2a-7a11-4c19-a907-b380f7fbbb31 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:58:56 np0005629333 nova_compute[244014]: 2026-02-25 12:58:56.778 244018 DEBUG oslo_concurrency.lockutils [req-925fe5f3-d393-4dc2-b483-9d6d1cd833c2 req-4a92ea2a-7a11-4c19-a907-b380f7fbbb31 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:58:56 np0005629333 nova_compute[244014]: 2026-02-25 12:58:56.779 244018 DEBUG oslo_concurrency.lockutils [req-925fe5f3-d393-4dc2-b483-9d6d1cd833c2 req-4a92ea2a-7a11-4c19-a907-b380f7fbbb31 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:58:56 np0005629333 nova_compute[244014]: 2026-02-25 12:58:56.779 244018 DEBUG oslo_concurrency.lockutils [req-925fe5f3-d393-4dc2-b483-9d6d1cd833c2 req-4a92ea2a-7a11-4c19-a907-b380f7fbbb31 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:58:56 np0005629333 nova_compute[244014]: 2026-02-25 12:58:56.780 244018 DEBUG nova.compute.manager [req-925fe5f3-d393-4dc2-b483-9d6d1cd833c2 req-4a92ea2a-7a11-4c19-a907-b380f7fbbb31 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] No waiting events found dispatching network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:58:56 np0005629333 nova_compute[244014]: 2026-02-25 12:58:56.780 244018 WARNING nova.compute.manager [req-925fe5f3-d393-4dc2-b483-9d6d1cd833c2 req-4a92ea2a-7a11-4c19-a907-b380f7fbbb31 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received unexpected event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac for instance with vm_state active and task_state None.#033[00m
Feb 25 07:58:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2351: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 07:58:57 np0005629333 nova_compute[244014]: 2026-02-25 12:58:57.083 244018 DEBUG nova.objects.instance [None req-f8190f48-00cb-4861-8fbc-73bbf802b55c 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lazy-loading 'pci_devices' on Instance uuid eadab8d0-1f24-4ace-b7b4-10329df53f12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:58:57 np0005629333 nova_compute[244014]: 2026-02-25 12:58:57.109 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024337.1094124, eadab8d0-1f24-4ace-b7b4-10329df53f12 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:58:57 np0005629333 nova_compute[244014]: 2026-02-25 12:58:57.110 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:58:57 np0005629333 nova_compute[244014]: 2026-02-25 12:58:57.126 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:58:57 np0005629333 nova_compute[244014]: 2026-02-25 12:58:57.130 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:58:57 np0005629333 nova_compute[244014]: 2026-02-25 12:58:57.147 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Feb 25 07:58:57 np0005629333 nova_compute[244014]: 2026-02-25 12:58:57.626 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:57 np0005629333 kernel: tap98c45ccf-86 (unregistering): left promiscuous mode
Feb 25 07:58:57 np0005629333 NetworkManager[49836]: <info>  [1772024337.6787] device (tap98c45ccf-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:58:57 np0005629333 nova_compute[244014]: 2026-02-25 12:58:57.679 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:57 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:57Z|01514|binding|INFO|Releasing lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac from this chassis (sb_readonly=0)
Feb 25 07:58:57 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:57Z|01515|binding|INFO|Setting lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac down in Southbound
Feb 25 07:58:57 np0005629333 nova_compute[244014]: 2026-02-25 12:58:57.686 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:57 np0005629333 ovn_controller[147040]: 2026-02-25T12:58:57Z|01516|binding|INFO|Removing iface tap98c45ccf-86 ovn-installed in OVS
Feb 25 07:58:57 np0005629333 nova_compute[244014]: 2026-02-25 12:58:57.688 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:57 np0005629333 nova_compute[244014]: 2026-02-25 12:58:57.695 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:57 np0005629333 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Feb 25 07:58:57 np0005629333 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d0000008e.scope: Consumed 4.431s CPU time.
Feb 25 07:58:57 np0005629333 systemd-machined[210048]: Machine qemu-175-instance-0000008e terminated.
Feb 25 07:58:57 np0005629333 nova_compute[244014]: 2026-02-25 12:58:57.842 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:57 np0005629333 nova_compute[244014]: 2026-02-25 12:58:57.847 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:57 np0005629333 nova_compute[244014]: 2026-02-25 12:58:57.856 244018 DEBUG nova.compute.manager [None req-f8190f48-00cb-4861-8fbc-73bbf802b55c 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:58:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:58.124 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:62:35 10.100.0.7'], port_security=['fa:16:3e:25:62:35 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'eadab8d0-1f24-4ace-b7b4-10329df53f12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bcd4fda6-15a9-498e-a842-640e098deac1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e20a44a5f2664c25a996daf9d0d14b97', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'ec2ac6ed-e266-4c96-886a-7f586d569550', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f782b99-9661-4693-8fe6-ad5ceac2a9fa, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=98c45ccf-8678-4765-bf66-7a9e3dae8cac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:58:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:58.125 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 98c45ccf-8678-4765-bf66-7a9e3dae8cac in datapath bcd4fda6-15a9-498e-a842-640e098deac1 unbound from our chassis#033[00m
Feb 25 07:58:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:58.126 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network bcd4fda6-15a9-498e-a842-640e098deac1 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 25 07:58:58 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:58:58.127 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[951b08cc-00f1-4ebd-aaa5-ce18d1380ec6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:58:58 np0005629333 nova_compute[244014]: 2026-02-25 12:58:58.134 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:58:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:58:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2352: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 78 op/s
Feb 25 07:58:58 np0005629333 nova_compute[244014]: 2026-02-25 12:58:58.977 244018 DEBUG nova.compute.manager [req-3465bfa7-3e4a-4f37-9d6a-b03bc8082a20 req-e6459c5e-805d-4e78-afa5-f61eafbf0af2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-vif-unplugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:58:58 np0005629333 nova_compute[244014]: 2026-02-25 12:58:58.977 244018 DEBUG oslo_concurrency.lockutils [req-3465bfa7-3e4a-4f37-9d6a-b03bc8082a20 req-e6459c5e-805d-4e78-afa5-f61eafbf0af2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:58:58 np0005629333 nova_compute[244014]: 2026-02-25 12:58:58.977 244018 DEBUG oslo_concurrency.lockutils [req-3465bfa7-3e4a-4f37-9d6a-b03bc8082a20 req-e6459c5e-805d-4e78-afa5-f61eafbf0af2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:58:58 np0005629333 nova_compute[244014]: 2026-02-25 12:58:58.978 244018 DEBUG oslo_concurrency.lockutils [req-3465bfa7-3e4a-4f37-9d6a-b03bc8082a20 req-e6459c5e-805d-4e78-afa5-f61eafbf0af2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:58:58 np0005629333 nova_compute[244014]: 2026-02-25 12:58:58.978 244018 DEBUG nova.compute.manager [req-3465bfa7-3e4a-4f37-9d6a-b03bc8082a20 req-e6459c5e-805d-4e78-afa5-f61eafbf0af2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] No waiting events found dispatching network-vif-unplugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:58:58 np0005629333 nova_compute[244014]: 2026-02-25 12:58:58.978 244018 WARNING nova.compute.manager [req-3465bfa7-3e4a-4f37-9d6a-b03bc8082a20 req-e6459c5e-805d-4e78-afa5-f61eafbf0af2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received unexpected event network-vif-unplugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac for instance with vm_state suspended and task_state None.#033[00m
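The Acquiring/acquired/"released" triple around pop_instance_event is oslo.concurrency's standard lock tracing: nova serializes access to its per-instance event registry under a lock named "<uuid>-events", pops a registered waiter if one exists, and otherwise emits the "No waiting events found" debug line plus the "unexpected event" warning seen above. A compressed sketch of that pattern, with the dictionary layout and function name assumed rather than copied from nova:

    from oslo_concurrency import lockutils

    _waiters = {}  # assumed shape: instance_uuid -> {event_name: callback}

    def pop_instance_event(instance_uuid, event_name):
        # lockutils.lock() logs acquire/release lines like those above.
        with lockutils.lock(f'{instance_uuid}-events'):
            callback = _waiters.get(instance_uuid, {}).pop(event_name, None)
        if callback is None:
            # -> "No waiting events found dispatching ..." and the WARNING path
            return None
        callback()
        return callback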
Feb 25 07:59:00 np0005629333 nova_compute[244014]: 2026-02-25 12:59:00.579 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:00 np0005629333 nova_compute[244014]: 2026-02-25 12:59:00.694 244018 INFO nova.compute.manager [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Resuming#033[00m
Feb 25 07:59:00 np0005629333 nova_compute[244014]: 2026-02-25 12:59:00.696 244018 DEBUG nova.objects.instance [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lazy-loading 'flavor' on Instance uuid eadab8d0-1f24-4ace-b7b4-10329df53f12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:59:00 np0005629333 nova_compute[244014]: 2026-02-25 12:59:00.741 244018 DEBUG oslo_concurrency.lockutils [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Acquiring lock "refresh_cache-eadab8d0-1f24-4ace-b7b4-10329df53f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:59:00 np0005629333 nova_compute[244014]: 2026-02-25 12:59:00.741 244018 DEBUG oslo_concurrency.lockutils [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Acquired lock "refresh_cache-eadab8d0-1f24-4ace-b7b4-10329df53f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:59:00 np0005629333 nova_compute[244014]: 2026-02-25 12:59:00.742 244018 DEBUG nova.network.neutron [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:59:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2353: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Feb 25 07:59:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:59:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:59:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:59:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:59:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:59:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:59:02 np0005629333 nova_compute[244014]: 2026-02-25 12:59:02.628 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2354: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Feb 25 07:59:03 np0005629333 nova_compute[244014]: 2026-02-25 12:59:03.375 244018 DEBUG nova.compute.manager [req-07e38157-c382-46bd-a02a-85b8b384da19 req-692141c9-27a1-47e9-94e9-d3edb240fad4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:59:03 np0005629333 nova_compute[244014]: 2026-02-25 12:59:03.376 244018 DEBUG oslo_concurrency.lockutils [req-07e38157-c382-46bd-a02a-85b8b384da19 req-692141c9-27a1-47e9-94e9-d3edb240fad4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:59:03 np0005629333 nova_compute[244014]: 2026-02-25 12:59:03.376 244018 DEBUG oslo_concurrency.lockutils [req-07e38157-c382-46bd-a02a-85b8b384da19 req-692141c9-27a1-47e9-94e9-d3edb240fad4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:59:03 np0005629333 nova_compute[244014]: 2026-02-25 12:59:03.377 244018 DEBUG oslo_concurrency.lockutils [req-07e38157-c382-46bd-a02a-85b8b384da19 req-692141c9-27a1-47e9-94e9-d3edb240fad4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:59:03 np0005629333 nova_compute[244014]: 2026-02-25 12:59:03.377 244018 DEBUG nova.compute.manager [req-07e38157-c382-46bd-a02a-85b8b384da19 req-692141c9-27a1-47e9-94e9-d3edb240fad4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] No waiting events found dispatching network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:59:03 np0005629333 nova_compute[244014]: 2026-02-25 12:59:03.377 244018 WARNING nova.compute.manager [req-07e38157-c382-46bd-a02a-85b8b384da19 req-692141c9-27a1-47e9-94e9-d3edb240fad4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received unexpected event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac for instance with vm_state suspended and task_state resuming.#033[00m
Feb 25 07:59:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:59:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2355: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Feb 25 07:59:05 np0005629333 nova_compute[244014]: 2026-02-25 12:59:05.581 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2356: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Feb 25 07:59:07 np0005629333 nova_compute[244014]: 2026-02-25 12:59:07.126 244018 DEBUG nova.network.neutron [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Updating instance_info_cache with network_info: [{"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:59:07 np0005629333 nova_compute[244014]: 2026-02-25 12:59:07.165 244018 DEBUG oslo_concurrency.lockutils [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Releasing lock "refresh_cache-eadab8d0-1f24-4ace-b7b4-10329df53f12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:59:07 np0005629333 nova_compute[244014]: 2026-02-25 12:59:07.173 244018 DEBUG nova.virt.libvirt.vif [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:58:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1391903363',display_name='tempest-TestServerAdvancedOps-server-1391903363',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1391903363',id=142,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:58:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='e20a44a5f2664c25a996daf9d0d14b97',ramdisk_id='',reservation_id='r-fckrf11t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-400846090',owner_user_name='tempest-TestServerAdvancedOps-400846090-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:58:58Z,user_data=None,user_id='2236154a16ea4715a79c2e6f36e22e83',uuid=eadab8d0-1f24-4ace-b7b4-10329df53f12,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:59:07 np0005629333 nova_compute[244014]: 2026-02-25 12:59:07.174 244018 DEBUG nova.network.os_vif_util [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Converting VIF {"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:59:07 np0005629333 nova_compute[244014]: 2026-02-25 12:59:07.176 244018 DEBUG nova.network.os_vif_util [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:62:35,bridge_name='br-int',has_traffic_filtering=True,id=98c45ccf-8678-4765-bf66-7a9e3dae8cac,network=Network(bcd4fda6-15a9-498e-a842-640e098deac1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c45ccf-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:59:07 np0005629333 nova_compute[244014]: 2026-02-25 12:59:07.177 244018 DEBUG os_vif [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:62:35,bridge_name='br-int',has_traffic_filtering=True,id=98c45ccf-8678-4765-bf66-7a9e3dae8cac,network=Network(bcd4fda6-15a9-498e-a842-640e098deac1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c45ccf-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
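The Converting/Converted/Plugging triple is nova handing the Neutron-format VIF dict to os-vif: nova_to_osvif_vif() builds a VIFOpenVSwitch versioned object, and os_vif.plug() dispatches it to the ovs plugin, which emits the ovsdbapp transactions that follow. Roughly, with field values copied from the repr above and everything else (notably the InstanceInfo name, inferred from the systemd-machined line below) assumed:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin used below

    net = network.Network(id='bcd4fda6-15a9-498e-a842-640e098deac1',
                          bridge='br-int', mtu=1442)
    port = vif.VIFOpenVSwitch(
        id='98c45ccf-8678-4765-bf66-7a9e3dae8cac',
        address='fa:16:3e:25:62:35',
        bridge_name='br-int',
        vif_name='tap98c45ccf-86',
        has_traffic_filtering=True,
        network=net)
    info = instance_info.InstanceInfo(
        uuid='eadab8d0-1f24-4ace-b7b4-10329df53f12',
        name='instance-0000008e')  # assumed; matches the machine name below

    os_vif.plug(port, info)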
Feb 25 07:59:07 np0005629333 nova_compute[244014]: 2026-02-25 12:59:07.177 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:07 np0005629333 nova_compute[244014]: 2026-02-25 12:59:07.178 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:59:07 np0005629333 nova_compute[244014]: 2026-02-25 12:59:07.178 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:59:07 np0005629333 nova_compute[244014]: 2026-02-25 12:59:07.182 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:07 np0005629333 nova_compute[244014]: 2026-02-25 12:59:07.183 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98c45ccf-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:59:07 np0005629333 nova_compute[244014]: 2026-02-25 12:59:07.183 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap98c45ccf-86, col_values=(('external_ids', {'iface-id': '98c45ccf-8678-4765-bf66-7a9e3dae8cac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:25:62:35', 'vm-uuid': 'eadab8d0-1f24-4ace-b7b4-10329df53f12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:59:07 np0005629333 nova_compute[244014]: 2026-02-25 12:59:07.184 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:59:07 np0005629333 nova_compute[244014]: 2026-02-25 12:59:07.185 244018 INFO os_vif [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:62:35,bridge_name='br-int',has_traffic_filtering=True,id=98c45ccf-8678-4765-bf66-7a9e3dae8cac,network=Network(bcd4fda6-15a9-498e-a842-640e098deac1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c45ccf-86')#033[00m
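The two "Running txn" commands above and their "Transaction caused no change" results show ovsdbapp's idempotent OVSDB writes: may_exist=True turns AddBridgeCommand/AddPortCommand into no-ops when the rows already exist, so re-plugging a tap that survived the suspend is safe. The equivalent calls through ovsdbapp's Open_vSwitch schema API look roughly like this (the local OVSDB socket path is an assumption):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/var/run/openvswitch/db.sock'  # assumed local socket
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # One transaction, same three commands the log records.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap98c45ccf-86', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap98c45ccf-86',
            ('external_ids', {'iface-id': '98c45ccf-8678-4765-bf66-7a9e3dae8cac',
                              'iface-status': 'active',
                              'attached-mac': 'fa:16:3e:25:62:35'})))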
Feb 25 07:59:07 np0005629333 nova_compute[244014]: 2026-02-25 12:59:07.203 244018 DEBUG nova.objects.instance [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lazy-loading 'numa_topology' on Instance uuid eadab8d0-1f24-4ace-b7b4-10329df53f12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:59:07 np0005629333 kernel: tap98c45ccf-86: entered promiscuous mode
Feb 25 07:59:07 np0005629333 NetworkManager[49836]: <info>  [1772024347.3347] manager: (tap98c45ccf-86): new Tun device (/org/freedesktop/NetworkManager/Devices/623)
Feb 25 07:59:07 np0005629333 nova_compute[244014]: 2026-02-25 12:59:07.334 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:07 np0005629333 ovn_controller[147040]: 2026-02-25T12:59:07Z|01517|binding|INFO|Claiming lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac for this chassis.
Feb 25 07:59:07 np0005629333 ovn_controller[147040]: 2026-02-25T12:59:07Z|01518|binding|INFO|98c45ccf-8678-4765-bf66-7a9e3dae8cac: Claiming fa:16:3e:25:62:35 10.100.0.7
Feb 25 07:59:07 np0005629333 nova_compute[244014]: 2026-02-25 12:59:07.364 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:07 np0005629333 ovn_controller[147040]: 2026-02-25T12:59:07Z|01519|binding|INFO|Setting lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac ovn-installed in OVS
Feb 25 07:59:07 np0005629333 nova_compute[244014]: 2026-02-25 12:59:07.368 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:07 np0005629333 systemd-machined[210048]: New machine qemu-176-instance-0000008e.
Feb 25 07:59:07 np0005629333 systemd-udevd[373317]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:59:07 np0005629333 systemd[1]: Started Virtual Machine qemu-176-instance-0000008e.
Feb 25 07:59:07 np0005629333 NetworkManager[49836]: <info>  [1772024347.3891] device (tap98c45ccf-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:59:07 np0005629333 NetworkManager[49836]: <info>  [1772024347.3903] device (tap98c45ccf-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:59:07 np0005629333 ovn_controller[147040]: 2026-02-25T12:59:07Z|01520|binding|INFO|Setting lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac up in Southbound
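The Claiming/Setting-up sequence from ovn_controller is the chassis binding step: once the Interface with the matching external_ids:iface-id appears in local OVS, ovn-controller (C code, no Python involved) writes the chassis and up columns of the Port_Binding row in the southbound DB. For testing, the same binding can be expressed through ovsdbapp's southbound API; the socket path and the use of the hostname as chassis name are assumptions here:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    SB = 'unix:/var/run/ovn/ovnsb_db.sock'  # assumed socket path
    idl = connection.OvsdbIdl.from_server(SB, 'OVN_Southbound')
    sb = impl_idl.OvnSbApiIdl(connection.Connection(idl, timeout=10))

    # Mirrors "Claiming lport ... for this chassis" for the port in the log.
    sb.lsp_bind('98c45ccf-8678-4765-bf66-7a9e3dae8cac',
                'compute-0.ctlplane.example.com',
                may_exist=True).execute(check_error=True)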
Feb 25 07:59:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:07.400 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:62:35 10.100.0.7'], port_security=['fa:16:3e:25:62:35 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'eadab8d0-1f24-4ace-b7b4-10329df53f12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bcd4fda6-15a9-498e-a842-640e098deac1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e20a44a5f2664c25a996daf9d0d14b97', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'ec2ac6ed-e266-4c96-886a-7f586d569550', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f782b99-9661-4693-8fe6-ad5ceac2a9fa, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=98c45ccf-8678-4765-bf66-7a9e3dae8cac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:59:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:07.402 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 98c45ccf-8678-4765-bf66-7a9e3dae8cac in datapath bcd4fda6-15a9-498e-a842-640e098deac1 bound to our chassis#033[00m
Feb 25 07:59:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:07.403 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network bcd4fda6-15a9-498e-a842-640e098deac1 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 25 07:59:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:07.403 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b38d1deb-e2e3-48bd-9b59-b31dc57b8dc4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:59:07 np0005629333 nova_compute[244014]: 2026-02-25 12:59:07.629 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:08 np0005629333 nova_compute[244014]: 2026-02-25 12:59:08.094 244018 DEBUG nova.virt.libvirt.host [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Removed pending event for eadab8d0-1f24-4ace-b7b4-10329df53f12 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Feb 25 07:59:08 np0005629333 nova_compute[244014]: 2026-02-25 12:59:08.095 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024348.0937846, eadab8d0-1f24-4ace-b7b4-10329df53f12 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:59:08 np0005629333 nova_compute[244014]: 2026-02-25 12:59:08.095 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] VM Started (Lifecycle Event)#033[00m
Feb 25 07:59:08 np0005629333 nova_compute[244014]: 2026-02-25 12:59:08.106 244018 DEBUG nova.compute.manager [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:59:08 np0005629333 nova_compute[244014]: 2026-02-25 12:59:08.107 244018 DEBUG nova.objects.instance [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lazy-loading 'pci_devices' on Instance uuid eadab8d0-1f24-4ace-b7b4-10329df53f12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:59:08 np0005629333 nova_compute[244014]: 2026-02-25 12:59:08.112 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:59:08 np0005629333 nova_compute[244014]: 2026-02-25 12:59:08.117 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:59:08 np0005629333 nova_compute[244014]: 2026-02-25 12:59:08.123 244018 INFO nova.virt.libvirt.driver [-] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Instance running successfully.#033[00m
Feb 25 07:59:08 np0005629333 virtqemud[243235]: argument unsupported: QEMU guest agent is not configured
Feb 25 07:59:08 np0005629333 nova_compute[244014]: 2026-02-25 12:59:08.125 244018 DEBUG nova.virt.libvirt.guest [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Feb 25 07:59:08 np0005629333 nova_compute[244014]: 2026-02-25 12:59:08.125 244018 DEBUG nova.compute.manager [None req-370fa852-14c1-4c58-a4da-65187016b876 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:59:08 np0005629333 nova_compute[244014]: 2026-02-25 12:59:08.140 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Feb 25 07:59:08 np0005629333 nova_compute[244014]: 2026-02-25 12:59:08.141 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024348.0974534, eadab8d0-1f24-4ace-b7b4-10329df53f12 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:59:08 np0005629333 nova_compute[244014]: 2026-02-25 12:59:08.141 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:59:08 np0005629333 nova_compute[244014]: 2026-02-25 12:59:08.175 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:59:08 np0005629333 nova_compute[244014]: 2026-02-25 12:59:08.181 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
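Both "Synchronizing instance power state" lines compare nova's stored power state against what libvirt just reported; the integers follow nova/compute/power_state.py, so "DB power_state: 4, VM power_state: 1" reads as SHUTDOWN on record versus RUNNING on the hypervisor, which is the expected mid-resume combination (a suspended domain is managed-saved, i.e. powered off, until it restarts). A tiny decoder:

    # Constant values mirror nova/compute/power_state.py.
    POWER_STATES = {0x00: 'NOSTATE', 0x01: 'RUNNING', 0x03: 'PAUSED',
                    0x04: 'SHUTDOWN', 0x06: 'CRASHED', 0x07: 'SUSPENDED'}

    def describe(db_state, vm_state):
        return (f"DB: {POWER_STATES.get(db_state, db_state)}, "
                f"hypervisor: {POWER_STATES.get(vm_state, vm_state)}")

    print(describe(4, 1))  # DB: SHUTDOWN, hypervisor: RUNNING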
Feb 25 07:59:08 np0005629333 nova_compute[244014]: 2026-02-25 12:59:08.497 244018 DEBUG nova.compute.manager [req-f40cdecf-2183-4669-9c68-898f7fed71e8 req-eab5eb44-1359-4479-a425-9b4872841670 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:59:08 np0005629333 nova_compute[244014]: 2026-02-25 12:59:08.497 244018 DEBUG oslo_concurrency.lockutils [req-f40cdecf-2183-4669-9c68-898f7fed71e8 req-eab5eb44-1359-4479-a425-9b4872841670 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:59:08 np0005629333 nova_compute[244014]: 2026-02-25 12:59:08.498 244018 DEBUG oslo_concurrency.lockutils [req-f40cdecf-2183-4669-9c68-898f7fed71e8 req-eab5eb44-1359-4479-a425-9b4872841670 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:59:08 np0005629333 nova_compute[244014]: 2026-02-25 12:59:08.498 244018 DEBUG oslo_concurrency.lockutils [req-f40cdecf-2183-4669-9c68-898f7fed71e8 req-eab5eb44-1359-4479-a425-9b4872841670 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:59:08 np0005629333 nova_compute[244014]: 2026-02-25 12:59:08.499 244018 DEBUG nova.compute.manager [req-f40cdecf-2183-4669-9c68-898f7fed71e8 req-eab5eb44-1359-4479-a425-9b4872841670 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] No waiting events found dispatching network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:59:08 np0005629333 nova_compute[244014]: 2026-02-25 12:59:08.499 244018 WARNING nova.compute.manager [req-f40cdecf-2183-4669-9c68-898f7fed71e8 req-eab5eb44-1359-4479-a425-9b4872841670 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received unexpected event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac for instance with vm_state active and task_state None.#033[00m
Feb 25 07:59:08 np0005629333 nova_compute[244014]: 2026-02-25 12:59:08.636 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "fc7d7f86-eb7c-476c-840e-98c97329de34" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:59:08 np0005629333 nova_compute[244014]: 2026-02-25 12:59:08.636 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:59:08 np0005629333 nova_compute[244014]: 2026-02-25 12:59:08.658 244018 DEBUG nova.compute.manager [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 07:59:08 np0005629333 nova_compute[244014]: 2026-02-25 12:59:08.737 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:59:08 np0005629333 nova_compute[244014]: 2026-02-25 12:59:08.738 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:59:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:59:08 np0005629333 nova_compute[244014]: 2026-02-25 12:59:08.745 244018 DEBUG nova.virt.hardware [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 07:59:08 np0005629333 nova_compute[244014]: 2026-02-25 12:59:08.746 244018 INFO nova.compute.claims [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 07:59:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2357: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4 op/s
Feb 25 07:59:08 np0005629333 nova_compute[244014]: 2026-02-25 12:59:08.868 244018 DEBUG oslo_concurrency.processutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:59:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:59:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3707919021' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:59:09 np0005629333 nova_compute[244014]: 2026-02-25 12:59:09.422 244018 DEBUG oslo_concurrency.processutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
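For the disk inventory, the resource tracker shells out to the ceph CLI through oslo.concurrency's processutils rather than binding librados; the audit line on the mon side above is the same request arriving. The equivalent call, with all arguments copied from the logged command and the JSON keys being standard ceph df output:

    import json

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # Cluster-wide totals live under 'stats'; per-pool usage under 'pools'.
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])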
Feb 25 07:59:09 np0005629333 nova_compute[244014]: 2026-02-25 12:59:09.428 244018 DEBUG nova.compute.provider_tree [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:59:09 np0005629333 nova_compute[244014]: 2026-02-25 12:59:09.455 244018 DEBUG nova.scheduler.client.report [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
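The inventory dict above is what placement uses to size this host: effective capacity per resource class is (total - reserved) * allocation_ratio, so the node advertises 32 schedulable VCPUs, 7167 MB of RAM, and about 52 GB of disk. Worked out directly from the logged values:

    inv = {'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
           'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
           'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9}}

    for rc, f in inv.items():
        print(rc, (f['total'] - f['reserved']) * f['allocation_ratio'])
    # VCPU 32.0 / MEMORY_MB 7167.0 / DISK_GB 52.2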
Feb 25 07:59:09 np0005629333 nova_compute[244014]: 2026-02-25 12:59:09.481 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:59:09 np0005629333 nova_compute[244014]: 2026-02-25 12:59:09.481 244018 DEBUG nova.compute.manager [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 07:59:09 np0005629333 nova_compute[244014]: 2026-02-25 12:59:09.544 244018 DEBUG nova.compute.manager [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 07:59:09 np0005629333 nova_compute[244014]: 2026-02-25 12:59:09.544 244018 DEBUG nova.network.neutron [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 07:59:09 np0005629333 nova_compute[244014]: 2026-02-25 12:59:09.575 244018 INFO nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 07:59:09 np0005629333 nova_compute[244014]: 2026-02-25 12:59:09.598 244018 DEBUG nova.compute.manager [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 07:59:09 np0005629333 nova_compute[244014]: 2026-02-25 12:59:09.709 244018 DEBUG nova.compute.manager [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 07:59:09 np0005629333 nova_compute[244014]: 2026-02-25 12:59:09.710 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 07:59:09 np0005629333 nova_compute[244014]: 2026-02-25 12:59:09.710 244018 INFO nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Creating image(s)#033[00m
Feb 25 07:59:09 np0005629333 nova_compute[244014]: 2026-02-25 12:59:09.732 244018 DEBUG nova.storage.rbd_utils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image fc7d7f86-eb7c-476c-840e-98c97329de34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:59:09 np0005629333 nova_compute[244014]: 2026-02-25 12:59:09.753 244018 DEBUG nova.storage.rbd_utils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image fc7d7f86-eb7c-476c-840e-98c97329de34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:59:09 np0005629333 nova_compute[244014]: 2026-02-25 12:59:09.772 244018 DEBUG nova.storage.rbd_utils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image fc7d7f86-eb7c-476c-840e-98c97329de34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:59:09 np0005629333 nova_compute[244014]: 2026-02-25 12:59:09.776 244018 DEBUG oslo_concurrency.processutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:59:09 np0005629333 nova_compute[244014]: 2026-02-25 12:59:09.843 244018 DEBUG oslo_concurrency.processutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
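The prlimit --as=1073741824 --cpu=30 wrapper above caps qemu-img's address space at 1 GiB and its CPU time at 30 s, so a malformed or hostile image can't wedge the compute service while being probed. processutils exposes the same guard directly; a sketch with the path taken from the log:

    import json

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1024 ** 3,
                                           cpu_time=30))
    print(json.loads(out)['virtual-size'])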
Feb 25 07:59:09 np0005629333 nova_compute[244014]: 2026-02-25 12:59:09.845 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:59:09 np0005629333 nova_compute[244014]: 2026-02-25 12:59:09.846 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:59:09 np0005629333 nova_compute[244014]: 2026-02-25 12:59:09.846 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:59:09 np0005629333 nova_compute[244014]: 2026-02-25 12:59:09.877 244018 DEBUG nova.storage.rbd_utils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image fc7d7f86-eb7c-476c-840e-98c97329de34_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:59:09 np0005629333 nova_compute[244014]: 2026-02-25 12:59:09.880 244018 DEBUG oslo_concurrency.processutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 fc7d7f86-eb7c-476c-840e-98c97329de34_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
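The repeated "rbd image ... does not exist" probes are nova's Rbd image backend checking the vms pool before falling back to importing the cached base file via the rbd CLI, which is the command just launched above. A condensed sketch of that check-then-import flow with the python rados/rbd bindings; connection details are copied from the logged command, error handling trimmed:

    import rados
    import rbd
    from oslo_concurrency import processutils

    BASE = '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'
    NAME = 'fc7d7f86-eb7c-476c-840e-98c97329de34_disk'

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            try:
                rbd.Image(ioctx, NAME).close()  # raises if the image is absent
            except rbd.ImageNotFound:
                # Mirrors the logged "rbd import --pool vms ..." subprocess.
                processutils.execute(
                    'rbd', 'import', '--pool', 'vms', BASE, NAME,
                    '--image-format=2', '--id', 'openstack',
                    '--conf', '/etc/ceph/ceph.conf')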
Feb 25 07:59:10 np0005629333 nova_compute[244014]: 2026-02-25 12:59:10.292 244018 DEBUG nova.policy [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea895f651dd742a7b5eb2d63fb34641c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b9699483122f465084e3147e4904d13d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
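The failed network:attach_external_network check above is routine for a non-admin boot: nova evaluates the rule against the request's credentials and, when it fails, simply excludes external networks from auto-allocation. In oslo.policy terms it looks roughly like this; the rule string here is an assumption, not nova's exact default:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'role:admin'))  # assumed rule string

    creds = {'roles': ['reader', 'member'],  # from the logged credentials
             'project_id': 'b9699483122f465084e3147e4904d13d'}
    print(enforcer.enforce('network:attach_external_network', {}, creds))  # False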
Feb 25 07:59:10 np0005629333 nova_compute[244014]: 2026-02-25 12:59:10.588 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:10 np0005629333 nova_compute[244014]: 2026-02-25 12:59:10.611 244018 DEBUG oslo_concurrency.lockutils [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:59:10 np0005629333 nova_compute[244014]: 2026-02-25 12:59:10.612 244018 DEBUG oslo_concurrency.lockutils [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:59:10 np0005629333 nova_compute[244014]: 2026-02-25 12:59:10.612 244018 DEBUG oslo_concurrency.lockutils [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:59:10 np0005629333 nova_compute[244014]: 2026-02-25 12:59:10.612 244018 DEBUG oslo_concurrency.lockutils [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:59:10 np0005629333 nova_compute[244014]: 2026-02-25 12:59:10.613 244018 DEBUG oslo_concurrency.lockutils [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:59:10 np0005629333 nova_compute[244014]: 2026-02-25 12:59:10.614 244018 INFO nova.compute.manager [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Terminating instance#033[00m
Feb 25 07:59:10 np0005629333 nova_compute[244014]: 2026-02-25 12:59:10.615 244018 DEBUG nova.compute.manager [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
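"Start destroying the instance on the hypervisor" leads into the libvirt teardown whose side effects appear below: the tap device leaving promiscuous mode and OVN releasing the lport. At the libvirt level the core of it is roughly the following; the domain name is taken from the systemd-machined line earlier, and this ignores the cleanup of volumes and network info that nova layers on top:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-0000008e')
    if dom.isActive():
        dom.destroy()  # hard power-off; the kernel/OVN lines below follow
    dom.undefine()     # drop the persistent domain definition
    conn.close()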
Feb 25 07:59:10 np0005629333 nova_compute[244014]: 2026-02-25 12:59:10.698 244018 DEBUG nova.compute.manager [req-bcf58436-efa8-479a-8e8a-6167118633dd req-6e5ce227-0bde-4c0b-b448-7eae76ad07a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:59:10 np0005629333 nova_compute[244014]: 2026-02-25 12:59:10.699 244018 DEBUG oslo_concurrency.lockutils [req-bcf58436-efa8-479a-8e8a-6167118633dd req-6e5ce227-0bde-4c0b-b448-7eae76ad07a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:59:10 np0005629333 nova_compute[244014]: 2026-02-25 12:59:10.699 244018 DEBUG oslo_concurrency.lockutils [req-bcf58436-efa8-479a-8e8a-6167118633dd req-6e5ce227-0bde-4c0b-b448-7eae76ad07a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:59:10 np0005629333 nova_compute[244014]: 2026-02-25 12:59:10.699 244018 DEBUG oslo_concurrency.lockutils [req-bcf58436-efa8-479a-8e8a-6167118633dd req-6e5ce227-0bde-4c0b-b448-7eae76ad07a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:59:10 np0005629333 nova_compute[244014]: 2026-02-25 12:59:10.699 244018 DEBUG nova.compute.manager [req-bcf58436-efa8-479a-8e8a-6167118633dd req-6e5ce227-0bde-4c0b-b448-7eae76ad07a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] No waiting events found dispatching network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:59:10 np0005629333 nova_compute[244014]: 2026-02-25 12:59:10.699 244018 WARNING nova.compute.manager [req-bcf58436-efa8-479a-8e8a-6167118633dd req-6e5ce227-0bde-4c0b-b448-7eae76ad07a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received unexpected event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:59:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2358: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 07:59:11 np0005629333 kernel: tap98c45ccf-86 (unregistering): left promiscuous mode
Feb 25 07:59:11 np0005629333 nova_compute[244014]: 2026-02-25 12:59:11.375 244018 DEBUG nova.network.neutron [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Successfully created port: 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:59:11 np0005629333 NetworkManager[49836]: <info>  [1772024351.3775] device (tap98c45ccf-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 07:59:11 np0005629333 nova_compute[244014]: 2026-02-25 12:59:11.379 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:11 np0005629333 ovn_controller[147040]: 2026-02-25T12:59:11Z|01521|binding|INFO|Releasing lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac from this chassis (sb_readonly=0)
Feb 25 07:59:11 np0005629333 ovn_controller[147040]: 2026-02-25T12:59:11Z|01522|binding|INFO|Setting lport 98c45ccf-8678-4765-bf66-7a9e3dae8cac down in Southbound
Feb 25 07:59:11 np0005629333 ovn_controller[147040]: 2026-02-25T12:59:11Z|01523|binding|INFO|Removing iface tap98c45ccf-86 ovn-installed in OVS
Feb 25 07:59:11 np0005629333 nova_compute[244014]: 2026-02-25 12:59:11.384 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:11.390 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:62:35 10.100.0.7'], port_security=['fa:16:3e:25:62:35 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'eadab8d0-1f24-4ace-b7b4-10329df53f12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bcd4fda6-15a9-498e-a842-640e098deac1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e20a44a5f2664c25a996daf9d0d14b97', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'ec2ac6ed-e266-4c96-886a-7f586d569550', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f782b99-9661-4693-8fe6-ad5ceac2a9fa, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=98c45ccf-8678-4765-bf66-7a9e3dae8cac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:59:11 np0005629333 nova_compute[244014]: 2026-02-25 12:59:11.393 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:11.394 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 98c45ccf-8678-4765-bf66-7a9e3dae8cac in datapath bcd4fda6-15a9-498e-a842-640e098deac1 unbound from our chassis#033[00m
Feb 25 07:59:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:11.395 157129 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network bcd4fda6-15a9-498e-a842-640e098deac1 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Feb 25 07:59:11 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:11.397 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3535caae-2a18-400c-99c8-b3ac9f7a3c61]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
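
The PortBindingUpdatedEvent match above is ovsdbapp's row-event mechanism: the metadata agent registers an event against the Southbound Port_Binding table and reacts when the chassis column changes. A sketch of such an event class, following the repr in the log (events=('update',), table='Port_Binding', no conditions); the unbind check in run() is illustrative rather than neutron's exact logic:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Fire on updates to Port_Binding, matching the logged repr.
            super().__init__(('update',), 'Port_Binding', None)

        def run(self, event, row, old):
            # "old" carries the prior values of changed columns; an
            # emptied chassis means the port left this chassis.
            if getattr(old, 'chassis', None) and not row.chassis:
                print(f'Port {row.logical_port} unbound from our chassis')
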
Feb 25 07:59:11 np0005629333 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Feb 25 07:59:11 np0005629333 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d0000008e.scope: Consumed 3.396s CPU time.
Feb 25 07:59:11 np0005629333 systemd-machined[210048]: Machine qemu-176-instance-0000008e terminated.
Feb 25 07:59:11 np0005629333 nova_compute[244014]: 2026-02-25 12:59:11.547 244018 DEBUG oslo_concurrency.processutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 fc7d7f86-eb7c-476c-840e-98c97329de34_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.666s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:59:11 np0005629333 nova_compute[244014]: 2026-02-25 12:59:11.630 244018 DEBUG nova.storage.rbd_utils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] resizing rbd image fc7d7f86-eb7c-476c-840e-98c97329de34_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
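
The two lines above are the rbd-backed root-disk preparation for instance fc7d7f86: the cached base image is imported into the vms pool, then the image is grown to the flavor's root_gb (1073741824 bytes = 1 GiB, matching root_gb=1). A rough equivalent of the pair, reusing the paths and names from the log; the resize step uses the rados/rbd python bindings as nova's rbd_utils does, and error handling is omitted:

    from oslo_concurrency import processutils
    import rados
    import rbd

    BASE = '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'
    DISK = 'fc7d7f86-eb7c-476c-840e-98c97329de34_disk'

    # Step 1: the shelled-out import, as the CMD line above records.
    processutils.execute('rbd', 'import', '--pool', 'vms', BASE, DISK,
                         '--image-format=2', '--id', 'openstack',
                         '--conf', '/etc/ceph/ceph.conf')

    # Step 2: grow the image to the flavor size, as rbd_utils.resize logs.
    with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx, DISK) as image:
                image.resize(1 * 1024 ** 3)  # 1073741824 bytes
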
Feb 25 07:59:11 np0005629333 nova_compute[244014]: 2026-02-25 12:59:11.665 244018 INFO nova.virt.libvirt.driver [-] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Instance destroyed successfully.#033[00m
Feb 25 07:59:11 np0005629333 nova_compute[244014]: 2026-02-25 12:59:11.666 244018 DEBUG nova.objects.instance [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lazy-loading 'resources' on Instance uuid eadab8d0-1f24-4ace-b7b4-10329df53f12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:59:11 np0005629333 nova_compute[244014]: 2026-02-25 12:59:11.680 244018 DEBUG nova.virt.libvirt.vif [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:58:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1391903363',display_name='tempest-TestServerAdvancedOps-server-1391903363',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1391903363',id=142,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:58:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e20a44a5f2664c25a996daf9d0d14b97',ramdisk_id='',reservation_id='r-fckrf11t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-400846090',owner_user_name='tempest-TestServerAdvancedOps-400846090-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:59:08Z,user_data=None,user_id='2236154a16ea4715a79c2e6f36e22e83',uuid=eadab8d0-1f24-4ace-b7b4-10329df53f12,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 07:59:11 np0005629333 nova_compute[244014]: 2026-02-25 12:59:11.680 244018 DEBUG nova.network.os_vif_util [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Converting VIF {"id": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "address": "fa:16:3e:25:62:35", "network": {"id": "bcd4fda6-15a9-498e-a842-640e098deac1", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-465800388-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "e20a44a5f2664c25a996daf9d0d14b97", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c45ccf-86", "ovs_interfaceid": "98c45ccf-8678-4765-bf66-7a9e3dae8cac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:59:11 np0005629333 nova_compute[244014]: 2026-02-25 12:59:11.681 244018 DEBUG nova.network.os_vif_util [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:62:35,bridge_name='br-int',has_traffic_filtering=True,id=98c45ccf-8678-4765-bf66-7a9e3dae8cac,network=Network(bcd4fda6-15a9-498e-a842-640e098deac1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c45ccf-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:59:11 np0005629333 nova_compute[244014]: 2026-02-25 12:59:11.682 244018 DEBUG os_vif [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:62:35,bridge_name='br-int',has_traffic_filtering=True,id=98c45ccf-8678-4765-bf66-7a9e3dae8cac,network=Network(bcd4fda6-15a9-498e-a842-640e098deac1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c45ccf-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 07:59:11 np0005629333 nova_compute[244014]: 2026-02-25 12:59:11.684 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:11 np0005629333 nova_compute[244014]: 2026-02-25 12:59:11.685 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98c45ccf-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:59:11 np0005629333 nova_compute[244014]: 2026-02-25 12:59:11.687 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:11 np0005629333 nova_compute[244014]: 2026-02-25 12:59:11.690 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:59:11 np0005629333 nova_compute[244014]: 2026-02-25 12:59:11.693 244018 INFO os_vif [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:62:35,bridge_name='br-int',has_traffic_filtering=True,id=98c45ccf-8678-4765-bf66-7a9e3dae8cac,network=Network(bcd4fda6-15a9-498e-a842-640e098deac1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c45ccf-86')#033[00m
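
The unplug sequence ending above boils down to one ovsdb transaction, logged as DelPortCommand(port=tap98c45ccf-86, bridge=br-int, if_exists=True). A sketch of issuing the same command through ovsdbapp; the socket path and timeout are assumptions for a local Open vSwitch:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'  # assumed local ovsdb endpoint
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # One transaction with one command, matching "txn n=1 command(idx=0)".
    api.del_port('tap98c45ccf-86', bridge='br-int',
                 if_exists=True).execute(check_error=True)
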
Feb 25 07:59:11 np0005629333 nova_compute[244014]: 2026-02-25 12:59:11.864 244018 DEBUG nova.objects.instance [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'migration_context' on Instance uuid fc7d7f86-eb7c-476c-840e-98c97329de34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:59:11 np0005629333 nova_compute[244014]: 2026-02-25 12:59:11.876 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:59:11 np0005629333 nova_compute[244014]: 2026-02-25 12:59:11.877 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Ensure instance console log exists: /var/lib/nova/instances/fc7d7f86-eb7c-476c-840e-98c97329de34/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:59:11 np0005629333 nova_compute[244014]: 2026-02-25 12:59:11.877 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:59:11 np0005629333 nova_compute[244014]: 2026-02-25 12:59:11.877 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:59:11 np0005629333 nova_compute[244014]: 2026-02-25 12:59:11.878 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:59:12 np0005629333 nova_compute[244014]: 2026-02-25 12:59:12.432 244018 INFO nova.virt.libvirt.driver [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Deleting instance files /var/lib/nova/instances/eadab8d0-1f24-4ace-b7b4-10329df53f12_del#033[00m
Feb 25 07:59:12 np0005629333 nova_compute[244014]: 2026-02-25 12:59:12.433 244018 INFO nova.virt.libvirt.driver [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Deletion of /var/lib/nova/instances/eadab8d0-1f24-4ace-b7b4-10329df53f12_del complete#033[00m
Feb 25 07:59:12 np0005629333 nova_compute[244014]: 2026-02-25 12:59:12.465 244018 DEBUG nova.network.neutron [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Successfully updated port: 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:59:12 np0005629333 nova_compute[244014]: 2026-02-25 12:59:12.481 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:59:12 np0005629333 nova_compute[244014]: 2026-02-25 12:59:12.481 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquired lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:59:12 np0005629333 nova_compute[244014]: 2026-02-25 12:59:12.482 244018 DEBUG nova.network.neutron [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:59:12 np0005629333 nova_compute[244014]: 2026-02-25 12:59:12.487 244018 INFO nova.compute.manager [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Took 1.87 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 07:59:12 np0005629333 nova_compute[244014]: 2026-02-25 12:59:12.487 244018 DEBUG oslo.service.loopingcall [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 07:59:12 np0005629333 nova_compute[244014]: 2026-02-25 12:59:12.488 244018 DEBUG nova.compute.manager [-] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 07:59:12 np0005629333 nova_compute[244014]: 2026-02-25 12:59:12.488 244018 DEBUG nova.network.neutron [-] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 07:59:12 np0005629333 nova_compute[244014]: 2026-02-25 12:59:12.647 244018 DEBUG nova.network.neutron [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:59:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 07:59:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.0 total, 600.0 interval#012Cumulative writes: 11K writes, 50K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s#012Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1390 writes, 6279 keys, 1390 commit groups, 1.0 writes per commit group, ingest: 8.96 MB, 0.01 MB/s#012Interval WAL: 1390 writes, 1390 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     55.2      1.06              0.16        34    0.031       0      0       0.0       0.0#012  L6      1/0    7.77 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.7    103.5     87.3      3.13              0.80        33    0.095    195K    17K       0.0       0.0#012 Sum      1/0    7.77 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.7     77.3     79.2      4.19              0.96        67    0.063    195K    17K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   7.4     83.7     83.4      0.62              0.18        10    0.062     36K   2496       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    103.5     87.3      3.13              0.80        33    0.095    195K    17K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     55.4      1.05              0.16        33    0.032       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4200.0 total, 600.0 interval#012Flush(GB): cumulative 0.057, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.32 GB write, 0.08 MB/s write, 0.32 GB read, 0.08 MB/s read, 4.2 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561a1af858d0#2 capacity: 304.00 MB usage: 37.33 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000341 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2361,35.88 MB,11.8042%) FilterBlock(68,550.42 KB,0.176816%) IndexBlock(68,934.36 KB,0.300151%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb 25 07:59:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2359: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 25 07:59:13 np0005629333 nova_compute[244014]: 2026-02-25 12:59:13.534 244018 DEBUG nova.compute.manager [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-vif-unplugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:59:13 np0005629333 nova_compute[244014]: 2026-02-25 12:59:13.534 244018 DEBUG oslo_concurrency.lockutils [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:59:13 np0005629333 nova_compute[244014]: 2026-02-25 12:59:13.534 244018 DEBUG oslo_concurrency.lockutils [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:59:13 np0005629333 nova_compute[244014]: 2026-02-25 12:59:13.535 244018 DEBUG oslo_concurrency.lockutils [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:59:13 np0005629333 nova_compute[244014]: 2026-02-25 12:59:13.535 244018 DEBUG nova.compute.manager [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] No waiting events found dispatching network-vif-unplugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:59:13 np0005629333 nova_compute[244014]: 2026-02-25 12:59:13.535 244018 DEBUG nova.compute.manager [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-vif-unplugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 07:59:13 np0005629333 nova_compute[244014]: 2026-02-25 12:59:13.535 244018 DEBUG nova.compute.manager [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:59:13 np0005629333 nova_compute[244014]: 2026-02-25 12:59:13.535 244018 DEBUG oslo_concurrency.lockutils [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:59:13 np0005629333 nova_compute[244014]: 2026-02-25 12:59:13.536 244018 DEBUG oslo_concurrency.lockutils [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:59:13 np0005629333 nova_compute[244014]: 2026-02-25 12:59:13.536 244018 DEBUG oslo_concurrency.lockutils [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:59:13 np0005629333 nova_compute[244014]: 2026-02-25 12:59:13.536 244018 DEBUG nova.compute.manager [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] No waiting events found dispatching network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:59:13 np0005629333 nova_compute[244014]: 2026-02-25 12:59:13.536 244018 WARNING nova.compute.manager [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received unexpected event network-vif-plugged-98c45ccf-8678-4765-bf66-7a9e3dae8cac for instance with vm_state active and task_state deleting.#033[00m
Feb 25 07:59:13 np0005629333 nova_compute[244014]: 2026-02-25 12:59:13.537 244018 DEBUG nova.compute.manager [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Received event network-changed-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:59:13 np0005629333 nova_compute[244014]: 2026-02-25 12:59:13.537 244018 DEBUG nova.compute.manager [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Refreshing instance network info cache due to event network-changed-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:59:13 np0005629333 nova_compute[244014]: 2026-02-25 12:59:13.537 244018 DEBUG oslo_concurrency.lockutils [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:59:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:59:13 np0005629333 nova_compute[244014]: 2026-02-25 12:59:13.855 244018 DEBUG nova.network.neutron [-] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:59:13 np0005629333 nova_compute[244014]: 2026-02-25 12:59:13.874 244018 INFO nova.compute.manager [-] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Took 1.39 seconds to deallocate network for instance.#033[00m
Feb 25 07:59:13 np0005629333 nova_compute[244014]: 2026-02-25 12:59:13.952 244018 DEBUG oslo_concurrency.lockutils [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:59:13 np0005629333 nova_compute[244014]: 2026-02-25 12:59:13.953 244018 DEBUG oslo_concurrency.lockutils [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.009 244018 DEBUG oslo_concurrency.processutils [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.242 244018 DEBUG nova.network.neutron [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Updating instance_info_cache with network_info: [{"id": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "address": "fa:16:3e:db:fb:af", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b9c40d0-fe", "ovs_interfaceid": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.263 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Releasing lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.263 244018 DEBUG nova.compute.manager [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Instance network_info: |[{"id": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "address": "fa:16:3e:db:fb:af", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b9c40d0-fe", "ovs_interfaceid": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.264 244018 DEBUG oslo_concurrency.lockutils [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.264 244018 DEBUG nova.network.neutron [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Refreshing network info cache for port 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.267 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Start _get_guest_xml network_info=[{"id": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "address": "fa:16:3e:db:fb:af", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b9c40d0-fe", "ovs_interfaceid": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.274 244018 WARNING nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.290 244018 DEBUG nova.virt.libvirt.host [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.291 244018 DEBUG nova.virt.libvirt.host [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.294 244018 DEBUG nova.virt.libvirt.host [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.295 244018 DEBUG nova.virt.libvirt.host [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.295 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.296 244018 DEBUG nova.virt.hardware [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.296 244018 DEBUG nova.virt.hardware [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.296 244018 DEBUG nova.virt.hardware [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.297 244018 DEBUG nova.virt.hardware [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.297 244018 DEBUG nova.virt.hardware [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.297 244018 DEBUG nova.virt.hardware [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.297 244018 DEBUG nova.virt.hardware [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.298 244018 DEBUG nova.virt.hardware [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.298 244018 DEBUG nova.virt.hardware [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.298 244018 DEBUG nova.virt.hardware [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.298 244018 DEBUG nova.virt.hardware [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
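
The nova.virt.hardware walk above takes the flavor/image constraints (all unset, so preferred 0:0:0 with 65536-wide limits) and enumerates every (sockets, cores, threads) split of the single vCPU, arriving at 1:1:1. An illustrative re-derivation of that enumeration, not nova's code:

    # Enumerate (sockets, cores, threads) triples whose product equals the
    # vCPU count, capped by the logged limits of 65536 each.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -> "Got 1 possible topologies"
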
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.302 244018 DEBUG oslo_concurrency.processutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:59:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:59:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1172366935' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.621 244018 DEBUG oslo_concurrency.processutils [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
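
The ceph df call above (0.613s round trip, dispatched by the monitor a few lines earlier) is how the resource tracker learns the cluster's disk totals. A sketch of running and parsing the same call; the top-level 'stats' keys follow the usual ceph df JSON layout and should be treated as assumptions:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute('ceph', 'df', '--format=json',
                                     '--id', 'openstack',
                                     '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    total_gb = stats['total_bytes'] / 1024 ** 3
    avail_gb = stats['total_avail_bytes'] / 1024 ** 3
    # The pgmap lines above report roughly 59 GiB free of 60 GiB.
    print(f'{avail_gb:.0f} GiB free of {total_gb:.0f} GiB')
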
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.650 244018 DEBUG nova.compute.provider_tree [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.668 244018 DEBUG nova.scheduler.client.report [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.696 244018 DEBUG oslo_concurrency.lockutils [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
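
The inventory dict the report client logs above determines what placement will schedule against: usable capacity per resource class is (total - reserved) * allocation_ratio. A worked example with the numbers copied from that log line:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f'{rc}: schedulable capacity {cap:g}')
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
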
Feb 25 07:59:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:59:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:59:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 07:59:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:59:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 07:59:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:59:14 np0005629333 podman[373717]: 2026-02-25 12:59:14.713250183 +0000 UTC m=+0.056792133 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:59:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 07:59:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 07:59:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 07:59:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:59:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 07:59:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 07:59:14 np0005629333 podman[373718]: 2026-02-25 12:59:14.748043975 +0000 UTC m=+0.087259973 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.757 244018 INFO nova.scheduler.client.report [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Deleted allocations for instance eadab8d0-1f24-4ace-b7b4-10329df53f12#033[00m
Feb 25 07:59:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2360: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 25 07:59:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:59:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2593479441' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.894 244018 DEBUG oslo_concurrency.processutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.922 244018 DEBUG nova.storage.rbd_utils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image fc7d7f86-eb7c-476c-840e-98c97329de34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.930 244018 DEBUG oslo_concurrency.processutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:59:14 np0005629333 nova_compute[244014]: 2026-02-25 12:59:14.976 244018 DEBUG oslo_concurrency.lockutils [None req-86215b07-515d-47ee-9cb4-e0ea23d81e1a 2236154a16ea4715a79c2e6f36e22e83 e20a44a5f2664c25a996daf9d0d14b97 - - default default] Lock "eadab8d0-1f24-4ace-b7b4-10329df53f12" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.364s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:59:15 np0005629333 podman[373862]: 2026-02-25 12:59:15.109392858 +0000 UTC m=+0.041198923 container create d9f7212e5a8a9d5117c9f8f14f153a7fba88cbff642c2b7063e913685baf36f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Feb 25 07:59:15 np0005629333 systemd[1]: Started libpod-conmon-d9f7212e5a8a9d5117c9f8f14f153a7fba88cbff642c2b7063e913685baf36f2.scope.
Feb 25 07:59:15 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:59:15 np0005629333 podman[373862]: 2026-02-25 12:59:15.089758764 +0000 UTC m=+0.021564839 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:59:15 np0005629333 podman[373862]: 2026-02-25 12:59:15.200195139 +0000 UTC m=+0.132001234 container init d9f7212e5a8a9d5117c9f8f14f153a7fba88cbff642c2b7063e913685baf36f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:59:15 np0005629333 podman[373862]: 2026-02-25 12:59:15.2087173 +0000 UTC m=+0.140523345 container start d9f7212e5a8a9d5117c9f8f14f153a7fba88cbff642c2b7063e913685baf36f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:59:15 np0005629333 podman[373862]: 2026-02-25 12:59:15.211893369 +0000 UTC m=+0.143699424 container attach d9f7212e5a8a9d5117c9f8f14f153a7fba88cbff642c2b7063e913685baf36f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_agnesi, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 07:59:15 np0005629333 laughing_agnesi[373878]: 167 167
Feb 25 07:59:15 np0005629333 systemd[1]: libpod-d9f7212e5a8a9d5117c9f8f14f153a7fba88cbff642c2b7063e913685baf36f2.scope: Deactivated successfully.
Feb 25 07:59:15 np0005629333 podman[373862]: 2026-02-25 12:59:15.215251104 +0000 UTC m=+0.147057199 container died d9f7212e5a8a9d5117c9f8f14f153a7fba88cbff642c2b7063e913685baf36f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_agnesi, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:59:15 np0005629333 systemd[1]: var-lib-containers-storage-overlay-40e970e8c8ab26343596feba3e9548a787c2eb69e8233a85e7287f2dbffb4f46-merged.mount: Deactivated successfully.
Feb 25 07:59:15 np0005629333 podman[373862]: 2026-02-25 12:59:15.261104368 +0000 UTC m=+0.192910453 container remove d9f7212e5a8a9d5117c9f8f14f153a7fba88cbff642c2b7063e913685baf36f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 07:59:15 np0005629333 systemd[1]: libpod-conmon-d9f7212e5a8a9d5117c9f8f14f153a7fba88cbff642c2b7063e913685baf36f2.scope: Deactivated successfully.
Feb 25 07:59:15 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 07:59:15 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:59:15 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 07:59:15 np0005629333 podman[373902]: 2026-02-25 12:59:15.432809141 +0000 UTC m=+0.051834303 container create 23e673fcf7bd764c496d71fdc3cf0cf570ec22e79946f58b99a8d8926b36632a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_matsumoto, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:59:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:59:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1594549115' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.455 244018 DEBUG oslo_concurrency.processutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.456 244018 DEBUG nova.virt.libvirt.vif [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:59:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-1045538617',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-1045538617',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=143,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKflPzkYcK8T1jIvTW73/HFgMcFhCGkDieaMKwFdqxoWC54RyT9HKRPaz7zDpWGOyrYK1XW41LVz9p16Co9g6HWeTxnQZJe7dDdqfF7T6FTQYUTjJqoaDTrnYUJSVUiaQA==',key_name='tempest-TestSecurityGroupsBasicOps-600669415',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-q2l93d4m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:59:09Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=fc7d7f86-eb7c-476c-840e-98c97329de34,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "address": "fa:16:3e:db:fb:af", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b9c40d0-fe", "ovs_interfaceid": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.456 244018 DEBUG nova.network.os_vif_util [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "address": "fa:16:3e:db:fb:af", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b9c40d0-fe", "ovs_interfaceid": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.457 244018 DEBUG nova.network.os_vif_util [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:fb:af,bridge_name='br-int',has_traffic_filtering=True,id=5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3,network=Network(62bbaf1c-560e-4f11-b053-43d27fe35ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b9c40d0-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.458 244018 DEBUG nova.objects.instance [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'pci_devices' on Instance uuid fc7d7f86-eb7c-476c-840e-98c97329de34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:59:15 np0005629333 systemd[1]: Started libpod-conmon-23e673fcf7bd764c496d71fdc3cf0cf570ec22e79946f58b99a8d8926b36632a.scope.
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.482 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:59:15 np0005629333 nova_compute[244014]:  <uuid>fc7d7f86-eb7c-476c-840e-98c97329de34</uuid>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:  <name>instance-0000008f</name>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-1045538617</nova:name>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:59:14</nova:creationTime>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:59:15 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:        <nova:user uuid="ea895f651dd742a7b5eb2d63fb34641c">tempest-TestSecurityGroupsBasicOps-948360018-project-member</nova:user>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:        <nova:project uuid="b9699483122f465084e3147e4904d13d">tempest-TestSecurityGroupsBasicOps-948360018</nova:project>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:        <nova:port uuid="5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3">
Feb 25 07:59:15 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <entry name="serial">fc7d7f86-eb7c-476c-840e-98c97329de34</entry>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <entry name="uuid">fc7d7f86-eb7c-476c-840e-98c97329de34</entry>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/fc7d7f86-eb7c-476c-840e-98c97329de34_disk">
Feb 25 07:59:15 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:59:15 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/fc7d7f86-eb7c-476c-840e-98c97329de34_disk.config">
Feb 25 07:59:15 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:59:15 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:db:fb:af"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <target dev="tap5b9c40d0-fe"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/fc7d7f86-eb7c-476c-840e-98c97329de34/console.log" append="off"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:59:15 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:59:15 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:59:15 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:59:15 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.482 244018 DEBUG nova.compute.manager [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Preparing to wait for external event network-vif-plugged-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.482 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.483 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.483 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.483 244018 DEBUG nova.virt.libvirt.vif [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:59:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-1045538617',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-1045538617',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=143,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKflPzkYcK8T1jIvTW73/HFgMcFhCGkDieaMKwFdqxoWC54RyT9HKRPaz7zDpWGOyrYK1XW41LVz9p16Co9g6HWeTxnQZJe7dDdqfF7T6FTQYUTjJqoaDTrnYUJSVUiaQA==',key_name='tempest-TestSecurityGroupsBasicOps-600669415',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-q2l93d4m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:59:09Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=fc7d7f86-eb7c-476c-840e-98c97329de34,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "address": "fa:16:3e:db:fb:af", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b9c40d0-fe", "ovs_interfaceid": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.484 244018 DEBUG nova.network.os_vif_util [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "address": "fa:16:3e:db:fb:af", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b9c40d0-fe", "ovs_interfaceid": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.484 244018 DEBUG nova.network.os_vif_util [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:db:fb:af,bridge_name='br-int',has_traffic_filtering=True,id=5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3,network=Network(62bbaf1c-560e-4f11-b053-43d27fe35ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b9c40d0-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.485 244018 DEBUG os_vif [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:fb:af,bridge_name='br-int',has_traffic_filtering=True,id=5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3,network=Network(62bbaf1c-560e-4f11-b053-43d27fe35ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b9c40d0-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.485 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.486 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.486 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.490 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.490 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b9c40d0-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.491 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5b9c40d0-fe, col_values=(('external_ids', {'iface-id': '5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:db:fb:af', 'vm-uuid': 'fc7d7f86-eb7c-476c-840e-98c97329de34'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.493 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:15 np0005629333 NetworkManager[49836]: <info>  [1772024355.4948] manager: (tap5b9c40d0-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/624)
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.496 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.499 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.500 244018 INFO os_vif [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:db:fb:af,bridge_name='br-int',has_traffic_filtering=True,id=5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3,network=Network(62bbaf1c-560e-4f11-b053-43d27fe35ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b9c40d0-fe')#033[00m
Feb 25 07:59:15 np0005629333 podman[373902]: 2026-02-25 12:59:15.407786675 +0000 UTC m=+0.026811947 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:59:15 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:59:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b53a31222dcf963cb46d0d7b32cc36a1b794e65374ee781413b64005ad4dde6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:59:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b53a31222dcf963cb46d0d7b32cc36a1b794e65374ee781413b64005ad4dde6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:59:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b53a31222dcf963cb46d0d7b32cc36a1b794e65374ee781413b64005ad4dde6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:59:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b53a31222dcf963cb46d0d7b32cc36a1b794e65374ee781413b64005ad4dde6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:59:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b53a31222dcf963cb46d0d7b32cc36a1b794e65374ee781413b64005ad4dde6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 07:59:15 np0005629333 podman[373902]: 2026-02-25 12:59:15.532557295 +0000 UTC m=+0.151582537 container init 23e673fcf7bd764c496d71fdc3cf0cf570ec22e79946f58b99a8d8926b36632a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_matsumoto, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:59:15 np0005629333 podman[373902]: 2026-02-25 12:59:15.541824426 +0000 UTC m=+0.160849598 container start 23e673fcf7bd764c496d71fdc3cf0cf570ec22e79946f58b99a8d8926b36632a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 07:59:15 np0005629333 podman[373902]: 2026-02-25 12:59:15.545316795 +0000 UTC m=+0.164341977 container attach 23e673fcf7bd764c496d71fdc3cf0cf570ec22e79946f58b99a8d8926b36632a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_matsumoto, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.581 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.582 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.583 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No VIF found with MAC fa:16:3e:db:fb:af, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.584 244018 INFO nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Using config drive#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.616 244018 DEBUG nova.storage.rbd_utils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image fc7d7f86-eb7c-476c-840e-98c97329de34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.626 244018 DEBUG nova.network.neutron [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Updated VIF entry in instance network info cache for port 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.627 244018 DEBUG nova.network.neutron [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Updating instance_info_cache with network_info: [{"id": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "address": "fa:16:3e:db:fb:af", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b9c40d0-fe", "ovs_interfaceid": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.628 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.633 244018 DEBUG nova.compute.manager [req-448ad05e-c3e7-4b75-9c49-7dcbeeea5c59 req-b96ce751-8170-4732-9b8e-31483f679f29 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Received event network-vif-deleted-98c45ccf-8678-4765-bf66-7a9e3dae8cac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:59:15 np0005629333 nova_compute[244014]: 2026-02-25 12:59:15.680 244018 DEBUG oslo_concurrency.lockutils [req-0cf7cece-9a87-4836-8725-46301611f28d req-d595079f-96a5-466f-837d-aa8e7aef9591 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:59:15 np0005629333 practical_matsumoto[373923]: --> passed data devices: 0 physical, 3 LVM
Feb 25 07:59:15 np0005629333 practical_matsumoto[373923]: --> All data devices are unavailable
Feb 25 07:59:16 np0005629333 systemd[1]: libpod-23e673fcf7bd764c496d71fdc3cf0cf570ec22e79946f58b99a8d8926b36632a.scope: Deactivated successfully.
Feb 25 07:59:16 np0005629333 podman[373902]: 2026-02-25 12:59:16.01912544 +0000 UTC m=+0.638150632 container died 23e673fcf7bd764c496d71fdc3cf0cf570ec22e79946f58b99a8d8926b36632a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_matsumoto, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:59:16 np0005629333 systemd[1]: var-lib-containers-storage-overlay-3b53a31222dcf963cb46d0d7b32cc36a1b794e65374ee781413b64005ad4dde6-merged.mount: Deactivated successfully.
Feb 25 07:59:16 np0005629333 nova_compute[244014]: 2026-02-25 12:59:16.057 244018 INFO nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Creating config drive at /var/lib/nova/instances/fc7d7f86-eb7c-476c-840e-98c97329de34/disk.config#033[00m
Feb 25 07:59:16 np0005629333 nova_compute[244014]: 2026-02-25 12:59:16.064 244018 DEBUG oslo_concurrency.processutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fc7d7f86-eb7c-476c-840e-98c97329de34/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpawuak_3v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:59:16 np0005629333 podman[373902]: 2026-02-25 12:59:16.070543441 +0000 UTC m=+0.689568633 container remove 23e673fcf7bd764c496d71fdc3cf0cf570ec22e79946f58b99a8d8926b36632a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_matsumoto, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 07:59:16 np0005629333 systemd[1]: libpod-conmon-23e673fcf7bd764c496d71fdc3cf0cf570ec22e79946f58b99a8d8926b36632a.scope: Deactivated successfully.
Feb 25 07:59:16 np0005629333 nova_compute[244014]: 2026-02-25 12:59:16.210 244018 DEBUG oslo_concurrency.processutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fc7d7f86-eb7c-476c-840e-98c97329de34/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpawuak_3v" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:59:16 np0005629333 nova_compute[244014]: 2026-02-25 12:59:16.246 244018 DEBUG nova.storage.rbd_utils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image fc7d7f86-eb7c-476c-840e-98c97329de34_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:59:16 np0005629333 nova_compute[244014]: 2026-02-25 12:59:16.251 244018 DEBUG oslo_concurrency.processutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fc7d7f86-eb7c-476c-840e-98c97329de34/disk.config fc7d7f86-eb7c-476c-840e-98c97329de34_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.475 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.477 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 07:59:16 np0005629333 nova_compute[244014]: 2026-02-25 12:59:16.476 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:59:16 np0005629333 nova_compute[244014]: 2026-02-25 12:59:16.521 244018 DEBUG oslo_concurrency.processutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fc7d7f86-eb7c-476c-840e-98c97329de34/disk.config fc7d7f86-eb7c-476c-840e-98c97329de34_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:59:16 np0005629333 nova_compute[244014]: 2026-02-25 12:59:16.522 244018 INFO nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Deleting local config drive /var/lib/nova/instances/fc7d7f86-eb7c-476c-840e-98c97329de34/disk.config because it was imported into RBD.
Feb 25 07:59:16 np0005629333 podman[374077]: 2026-02-25 12:59:16.533188371 +0000 UTC m=+0.051955156 container create abfd55c0dc735b4496cc6e1ea943ff33b518a77b3b8d8fcaf88d35fac9073a74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:59:16 np0005629333 systemd[1]: Started libpod-conmon-abfd55c0dc735b4496cc6e1ea943ff33b518a77b3b8d8fcaf88d35fac9073a74.scope.
Feb 25 07:59:16 np0005629333 kernel: tap5b9c40d0-fe: entered promiscuous mode
Feb 25 07:59:16 np0005629333 NetworkManager[49836]: <info>  [1772024356.5892] manager: (tap5b9c40d0-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/625)
Feb 25 07:59:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:59:16Z|01524|binding|INFO|Claiming lport 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 for this chassis.
Feb 25 07:59:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:59:16Z|01525|binding|INFO|5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3: Claiming fa:16:3e:db:fb:af 10.100.0.14
Feb 25 07:59:16 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:59:16 np0005629333 nova_compute[244014]: 2026-02-25 12:59:16.595 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:59:16 np0005629333 podman[374077]: 2026-02-25 12:59:16.506065886 +0000 UTC m=+0.024832731 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:59:16 np0005629333 podman[374077]: 2026-02-25 12:59:16.615133323 +0000 UTC m=+0.133900078 container init abfd55c0dc735b4496cc6e1ea943ff33b518a77b3b8d8fcaf88d35fac9073a74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_stonebraker, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 07:59:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:59:16Z|01526|binding|INFO|Setting lport 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 ovn-installed in OVS
Feb 25 07:59:16 np0005629333 nova_compute[244014]: 2026-02-25 12:59:16.617 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:59:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:59:16Z|01527|binding|INFO|Setting lport 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 up in Southbound
Feb 25 07:59:16 np0005629333 podman[374077]: 2026-02-25 12:59:16.622944843 +0000 UTC m=+0.141711588 container start abfd55c0dc735b4496cc6e1ea943ff33b518a77b3b8d8fcaf88d35fac9073a74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:59:16 np0005629333 systemd-udevd[374110]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:59:16 np0005629333 podman[374077]: 2026-02-25 12:59:16.626572275 +0000 UTC m=+0.145339020 container attach abfd55c0dc735b4496cc6e1ea943ff33b518a77b3b8d8fcaf88d35fac9073a74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True)
Feb 25 07:59:16 np0005629333 magical_stonebraker[374098]: 167 167
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.624 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:fb:af 10.100.0.14'], port_security=['fa:16:3e:db:fb:af 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'fc7d7f86-eb7c-476c-840e-98c97329de34', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1134355f-9f11-4363-a935-f82cb22e83bd 1ae32a7b-44d8-4e80-93f7-db39a030f905', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf42b548-d8ba-420a-827b-4e92eb42f3f0, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.627 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 in datapath 62bbaf1c-560e-4f11-b053-43d27fe35ba2 bound to our chassis
Feb 25 07:59:16 np0005629333 podman[374077]: 2026-02-25 12:59:16.629966151 +0000 UTC m=+0.148732896 container died abfd55c0dc735b4496cc6e1ea943ff33b518a77b3b8d8fcaf88d35fac9073a74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:59:16 np0005629333 systemd-machined[210048]: New machine qemu-177-instance-0000008f.
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.634 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62bbaf1c-560e-4f11-b053-43d27fe35ba2
Feb 25 07:59:16 np0005629333 systemd[1]: Started Virtual Machine qemu-177-instance-0000008f.
Feb 25 07:59:16 np0005629333 systemd[1]: libpod-abfd55c0dc735b4496cc6e1ea943ff33b518a77b3b8d8fcaf88d35fac9073a74.scope: Deactivated successfully.
Feb 25 07:59:16 np0005629333 NetworkManager[49836]: <info>  [1772024356.6418] device (tap5b9c40d0-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:59:16 np0005629333 NetworkManager[49836]: <info>  [1772024356.6430] device (tap5b9c40d0-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.648 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3d6d27bb-5fba-49ad-b041-ccf066b56b34]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.649 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap62bbaf1c-51 in ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.652 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap62bbaf1c-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.652 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f207faa0-757b-403b-b863-50c79a0da0fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.653 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9a2a592b-d9ba-4f21-a49a-8707be145567]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:59:16 np0005629333 systemd[1]: var-lib-containers-storage-overlay-2910d1336720f321e2d4f71f810206892fdf3047074ede6af3673d00ceb00274-merged.mount: Deactivated successfully.
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.668 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[84efd376-b9e8-4ca3-9689-bffd46146796]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:59:16 np0005629333 podman[374077]: 2026-02-25 12:59:16.677512152 +0000 UTC m=+0.196278897 container remove abfd55c0dc735b4496cc6e1ea943ff33b518a77b3b8d8fcaf88d35fac9073a74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:59:16 np0005629333 systemd[1]: libpod-conmon-abfd55c0dc735b4496cc6e1ea943ff33b518a77b3b8d8fcaf88d35fac9073a74.scope: Deactivated successfully.
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.691 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5bd64af8-05aa-44bc-820c-0b26a372cad2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:59:16 np0005629333 nova_compute[244014]: 2026-02-25 12:59:16.705 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.714 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9a65e31c-2e44-489a-841d-20969d94e311]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.718 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[601263d8-f602-4745-9870-9933ebfaca64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:59:16 np0005629333 NetworkManager[49836]: <info>  [1772024356.7190] manager: (tap62bbaf1c-50): new Veth device (/org/freedesktop/NetworkManager/Devices/626)
Feb 25 07:59:16 np0005629333 systemd-udevd[374121]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.745 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[45487196-d23f-4206-9eb6-7e85d28e7a75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.748 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[217d59c9-8b7c-4f57-af25-1648095b4881]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:59:16 np0005629333 NetworkManager[49836]: <info>  [1772024356.7631] device (tap62bbaf1c-50): carrier: link connected
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.768 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4d2f74b1-8e9e-46aa-809e-fdccb921489b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.783 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5280d508-d930-4a78-8742-d3c5c250dee1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62bbaf1c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:da:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 450], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632617, 'reachable_time': 39162, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374170, 'error': None, 'target': 'ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.795 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[796b7aca-f418-4c06-94d4-710746940fc5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2f:dac2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632617, 'tstamp': 632617}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374177, 'error': None, 'target': 'ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:59:16 np0005629333 podman[374162]: 2026-02-25 12:59:16.802009304 +0000 UTC m=+0.038009863 container create 5a6341152e8f6fba5396bd19fbb43ba9586cfb21d393006149b1d492202ac5f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mccarthy, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.807 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dacfaa7b-082c-41ce-b890-fd3254cef60b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62bbaf1c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:da:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 450], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632617, 'reachable_time': 39162, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 374179, 'error': None, 'target': 'ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:59:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2361: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Feb 25 07:59:16 np0005629333 systemd[1]: Started libpod-conmon-5a6341152e8f6fba5396bd19fbb43ba9586cfb21d393006149b1d492202ac5f4.scope.
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.832 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dff4d2b6-eddd-4df5-b9a9-8b8135f37d39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:59:16 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:59:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ceed0f413b8011ad086b8b59db954440778209cc857706cbc5f9377b0407d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:59:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ceed0f413b8011ad086b8b59db954440778209cc857706cbc5f9377b0407d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:59:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ceed0f413b8011ad086b8b59db954440778209cc857706cbc5f9377b0407d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:59:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ceed0f413b8011ad086b8b59db954440778209cc857706cbc5f9377b0407d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:59:16 np0005629333 podman[374162]: 2026-02-25 12:59:16.866268747 +0000 UTC m=+0.102269396 container init 5a6341152e8f6fba5396bd19fbb43ba9586cfb21d393006149b1d492202ac5f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mccarthy, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.873 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[756bd72d-7f1b-463e-a808-22ed46693c32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.874 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62bbaf1c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.875 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.875 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62bbaf1c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:59:16 np0005629333 podman[374162]: 2026-02-25 12:59:16.875539629 +0000 UTC m=+0.111540188 container start 5a6341152e8f6fba5396bd19fbb43ba9586cfb21d393006149b1d492202ac5f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mccarthy, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 07:59:16 np0005629333 nova_compute[244014]: 2026-02-25 12:59:16.877 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:59:16 np0005629333 NetworkManager[49836]: <info>  [1772024356.8781] manager: (tap62bbaf1c-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/627)
Feb 25 07:59:16 np0005629333 kernel: tap62bbaf1c-50: entered promiscuous mode
Feb 25 07:59:16 np0005629333 podman[374162]: 2026-02-25 12:59:16.784959163 +0000 UTC m=+0.020959742 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.882 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62bbaf1c-50, col_values=(('external_ids', {'iface-id': 'e8ddd525-f10e-4482-b8e7-397d0dcdd470'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 07:59:16 np0005629333 ovn_controller[147040]: 2026-02-25T12:59:16Z|01528|binding|INFO|Releasing lport e8ddd525-f10e-4482-b8e7-397d0dcdd470 from this chassis (sb_readonly=0)
Feb 25 07:59:16 np0005629333 nova_compute[244014]: 2026-02-25 12:59:16.885 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:59:16 np0005629333 podman[374162]: 2026-02-25 12:59:16.891788817 +0000 UTC m=+0.127789416 container attach 5a6341152e8f6fba5396bd19fbb43ba9586cfb21d393006149b1d492202ac5f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mccarthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.895 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/62bbaf1c-560e-4f11-b053-43d27fe35ba2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/62bbaf1c-560e-4f11-b053-43d27fe35ba2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 25 07:59:16 np0005629333 nova_compute[244014]: 2026-02-25 12:59:16.894 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.896 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b742dd20-d271-48c5-a79d-ae45e5aa8873]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.897 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-62bbaf1c-560e-4f11-b053-43d27fe35ba2
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/62bbaf1c-560e-4f11-b053-43d27fe35ba2.pid.haproxy
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 62bbaf1c-560e-4f11-b053-43d27fe35ba2
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 25 07:59:16 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:16.898 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'env', 'PROCESS_TAG=haproxy-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/62bbaf1c-560e-4f11-b053-43d27fe35ba2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]: {
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:    "0": [
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:        {
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "devices": [
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "/dev/loop3"
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            ],
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "lv_name": "ceph_lv0",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "lv_size": "21470642176",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "name": "ceph_lv0",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "tags": {
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.cluster_name": "ceph",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.crush_device_class": "",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.encrypted": "0",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.objectstore": "bluestore",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.osd_id": "0",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.type": "block",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.vdo": "0",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.with_tpm": "0"
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            },
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "type": "block",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "vg_name": "ceph_vg0"
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:        }
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:    ],
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:    "1": [
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:        {
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "devices": [
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "/dev/loop4"
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            ],
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "lv_name": "ceph_lv1",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "lv_size": "21470642176",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "name": "ceph_lv1",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "tags": {
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.cluster_name": "ceph",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.crush_device_class": "",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.encrypted": "0",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.objectstore": "bluestore",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.osd_id": "1",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.type": "block",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.vdo": "0",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.with_tpm": "0"
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            },
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "type": "block",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "vg_name": "ceph_vg1"
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:        }
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:    ],
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:    "2": [
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:        {
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "devices": [
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "/dev/loop5"
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            ],
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "lv_name": "ceph_lv2",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "lv_size": "21470642176",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "name": "ceph_lv2",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "tags": {
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.cephx_lockbox_secret": "",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.cluster_name": "ceph",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.crush_device_class": "",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.encrypted": "0",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.objectstore": "bluestore",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.osd_id": "2",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.type": "block",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.vdo": "0",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:                "ceph.with_tpm": "0"
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            },
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "type": "block",
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:            "vg_name": "ceph_vg2"
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:        }
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]:    ]
Feb 25 07:59:17 np0005629333 beautiful_mccarthy[374184]: }
Feb 25 07:59:17 np0005629333 systemd[1]: libpod-5a6341152e8f6fba5396bd19fbb43ba9586cfb21d393006149b1d492202ac5f4.scope: Deactivated successfully.
Feb 25 07:59:17 np0005629333 podman[374162]: 2026-02-25 12:59:17.1598858 +0000 UTC m=+0.395886359 container died 5a6341152e8f6fba5396bd19fbb43ba9586cfb21d393006149b1d492202ac5f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 07:59:17 np0005629333 systemd[1]: var-lib-containers-storage-overlay-78ceed0f413b8011ad086b8b59db954440778209cc857706cbc5f9377b0407d8-merged.mount: Deactivated successfully.
Feb 25 07:59:17 np0005629333 podman[374162]: 2026-02-25 12:59:17.201907955 +0000 UTC m=+0.437908514 container remove 5a6341152e8f6fba5396bd19fbb43ba9586cfb21d393006149b1d492202ac5f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mccarthy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 07:59:17 np0005629333 systemd[1]: libpod-conmon-5a6341152e8f6fba5396bd19fbb43ba9586cfb21d393006149b1d492202ac5f4.scope: Deactivated successfully.
Feb 25 07:59:17 np0005629333 podman[374227]: 2026-02-25 12:59:17.242948683 +0000 UTC m=+0.058309276 container create 2a98674612616c51d7e93497e598848a866c7677b00e7d83cc7aa53f3ea9d9ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 25 07:59:17 np0005629333 systemd[1]: Started libpod-conmon-2a98674612616c51d7e93497e598848a866c7677b00e7d83cc7aa53f3ea9d9ea.scope.
Feb 25 07:59:17 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:59:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00e350bf0228b37d39a08274b06ef58a344901c5a3e3eee456d5980226c76505/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 07:59:17 np0005629333 podman[374227]: 2026-02-25 12:59:17.213210274 +0000 UTC m=+0.028570887 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 07:59:17 np0005629333 podman[374227]: 2026-02-25 12:59:17.322911558 +0000 UTC m=+0.138272221 container init 2a98674612616c51d7e93497e598848a866c7677b00e7d83cc7aa53f3ea9d9ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 07:59:17 np0005629333 podman[374227]: 2026-02-25 12:59:17.326654234 +0000 UTC m=+0.142014867 container start 2a98674612616c51d7e93497e598848a866c7677b00e7d83cc7aa53f3ea9d9ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0)
Feb 25 07:59:17 np0005629333 neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2[374269]: [NOTICE]   (374288) : New worker (374300) forked
Feb 25 07:59:17 np0005629333 neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2[374269]: [NOTICE]   (374288) : Loading success.
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.659 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024357.658368, fc7d7f86-eb7c-476c-840e-98c97329de34 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.659 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] VM Started (Lifecycle Event)
Feb 25 07:59:17 np0005629333 podman[374357]: 2026-02-25 12:59:17.684886059 +0000 UTC m=+0.109204971 container create 871c91678cbabe66dbb4a698ec58579217f17d44382bab8da10b1459013413b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_wiles, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.693 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 07:59:17 np0005629333 podman[374357]: 2026-02-25 12:59:17.599100789 +0000 UTC m=+0.023419701 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.698 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024357.661661, fc7d7f86-eb7c-476c-840e-98c97329de34 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.699 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.732 244018 DEBUG nova.compute.manager [req-400d6de2-4f5f-4169-80ab-8208c62a3fe2 req-5cb744c0-5c7f-4606-97f3-88a10a62f152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Received event network-vif-plugged-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.732 244018 DEBUG oslo_concurrency.lockutils [req-400d6de2-4f5f-4169-80ab-8208c62a3fe2 req-5cb744c0-5c7f-4606-97f3-88a10a62f152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.733 244018 DEBUG oslo_concurrency.lockutils [req-400d6de2-4f5f-4169-80ab-8208c62a3fe2 req-5cb744c0-5c7f-4606-97f3-88a10a62f152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.733 244018 DEBUG oslo_concurrency.lockutils [req-400d6de2-4f5f-4169-80ab-8208c62a3fe2 req-5cb744c0-5c7f-4606-97f3-88a10a62f152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.733 244018 DEBUG nova.compute.manager [req-400d6de2-4f5f-4169-80ab-8208c62a3fe2 req-5cb744c0-5c7f-4606-97f3-88a10a62f152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Processing event network-vif-plugged-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.733 244018 DEBUG nova.compute.manager [req-400d6de2-4f5f-4169-80ab-8208c62a3fe2 req-5cb744c0-5c7f-4606-97f3-88a10a62f152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Received event network-vif-plugged-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.734 244018 DEBUG oslo_concurrency.lockutils [req-400d6de2-4f5f-4169-80ab-8208c62a3fe2 req-5cb744c0-5c7f-4606-97f3-88a10a62f152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.734 244018 DEBUG oslo_concurrency.lockutils [req-400d6de2-4f5f-4169-80ab-8208c62a3fe2 req-5cb744c0-5c7f-4606-97f3-88a10a62f152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.734 244018 DEBUG oslo_concurrency.lockutils [req-400d6de2-4f5f-4169-80ab-8208c62a3fe2 req-5cb744c0-5c7f-4606-97f3-88a10a62f152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.734 244018 DEBUG nova.compute.manager [req-400d6de2-4f5f-4169-80ab-8208c62a3fe2 req-5cb744c0-5c7f-4606-97f3-88a10a62f152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] No waiting events found dispatching network-vif-plugged-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.735 244018 WARNING nova.compute.manager [req-400d6de2-4f5f-4169-80ab-8208c62a3fe2 req-5cb744c0-5c7f-4606-97f3-88a10a62f152 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Received unexpected event network-vif-plugged-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 for instance with vm_state building and task_state spawning.#033[00m
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.735 244018 DEBUG nova.compute.manager [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
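
The Acquiring/acquired/released triplets above are oslo.concurrency's lockutils at work: nova serializes access to the per-instance event queue under a lock named "<instance-uuid>-events". A minimal sketch of the same pattern, assuming only that oslo.concurrency is installed (the lock name is copied from the log; the function is illustrative, not nova's actual code):

    # Minimal sketch of the oslo.concurrency locking pattern seen above.
    # The lock name mirrors the per-instance "-events" lock in the log;
    # the wrapped function is illustrative only.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('fc7d7f86-eb7c-476c-840e-98c97329de34-events')
    def pop_event(events, name):
        # Runs with the named lock held; lockutils emits the
        # "acquired ... released" DEBUG lines around this call.
        return events.pop(name, None)

    # The same lock is also available as a context manager:
    with lockutils.lock('fc7d7f86-eb7c-476c-840e-98c97329de34-events'):
        pass  # critical section
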
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.739 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.743 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.744 244018 INFO nova.virt.libvirt.driver [-] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Instance spawned successfully.#033[00m
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.744 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.748 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024357.7388053, fc7d7f86-eb7c-476c-840e-98c97329de34 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.748 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:59:17 np0005629333 systemd[1]: Started libpod-conmon-871c91678cbabe66dbb4a698ec58579217f17d44382bab8da10b1459013413b7.scope.
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.780 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.780 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.781 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.781 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.782 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.782 244018 DEBUG nova.virt.libvirt.driver [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:59:17 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.815 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.819 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:59:17 np0005629333 podman[374357]: 2026-02-25 12:59:17.837469852 +0000 UTC m=+0.261788824 container init 871c91678cbabe66dbb4a698ec58579217f17d44382bab8da10b1459013413b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_wiles, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 07:59:17 np0005629333 podman[374357]: 2026-02-25 12:59:17.846656571 +0000 UTC m=+0.270975493 container start 871c91678cbabe66dbb4a698ec58579217f17d44382bab8da10b1459013413b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:59:17 np0005629333 jovial_wiles[374379]: 167 167
Feb 25 07:59:17 np0005629333 systemd[1]: libpod-871c91678cbabe66dbb4a698ec58579217f17d44382bab8da10b1459013413b7.scope: Deactivated successfully.
Feb 25 07:59:17 np0005629333 podman[374357]: 2026-02-25 12:59:17.880551227 +0000 UTC m=+0.304870109 container attach 871c91678cbabe66dbb4a698ec58579217f17d44382bab8da10b1459013413b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_wiles, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:59:17 np0005629333 podman[374357]: 2026-02-25 12:59:17.888830211 +0000 UTC m=+0.313149093 container died 871c91678cbabe66dbb4a698ec58579217f17d44382bab8da10b1459013413b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.909 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
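
The Started, Paused and Resumed lifecycle events and the comparison "current DB power_state: 0, VM power_state: 1" rely on nova's integer power-state constants. The values below mirror nova.compute.power_state as best recalled, so treat them as an assumption to verify against your tree:

    # Rough reference for the integers compared in the sync_power_state
    # line above. Values recalled from nova.compute.power_state; verify.
    NOSTATE = 0    # DB value while the spawn is still in flight
    RUNNING = 1    # what libvirt reports once the guest resumes
    PAUSED = 3
    SHUTDOWN = 4
    CRASHED = 6
    SUSPENDED = 7

    def out_of_sync(db_state, vm_state):
        # sync_power_state only acts when the two views disagree and no
        # task (here: "spawning") is pending, hence the "Skip" above.
        return db_state != vm_state
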
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.935 244018 INFO nova.compute.manager [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Took 8.23 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:59:17 np0005629333 nova_compute[244014]: 2026-02-25 12:59:17.935 244018 DEBUG nova.compute.manager [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:59:18 np0005629333 systemd[1]: var-lib-containers-storage-overlay-83d07b114da5ebf2cec124c9c87dd82391c5ba1bd9bddd60afd8797dc5bbf4f7-merged.mount: Deactivated successfully.
Feb 25 07:59:18 np0005629333 nova_compute[244014]: 2026-02-25 12:59:18.048 244018 INFO nova.compute.manager [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Took 9.34 seconds to build instance.#033[00m
Feb 25 07:59:18 np0005629333 podman[374357]: 2026-02-25 12:59:18.136109007 +0000 UTC m=+0.560427929 container remove 871c91678cbabe66dbb4a698ec58579217f17d44382bab8da10b1459013413b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 07:59:18 np0005629333 systemd[1]: libpod-conmon-871c91678cbabe66dbb4a698ec58579217f17d44382bab8da10b1459013413b7.scope: Deactivated successfully.
Feb 25 07:59:18 np0005629333 nova_compute[244014]: 2026-02-25 12:59:18.155 244018 DEBUG oslo_concurrency.lockutils [None req-ad10b4c2-95fa-4a77-9f3e-8040fbb4763c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:59:18 np0005629333 podman[374402]: 2026-02-25 12:59:18.316382562 +0000 UTC m=+0.031163060 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 07:59:18 np0005629333 podman[374402]: 2026-02-25 12:59:18.440650307 +0000 UTC m=+0.155430815 container create 7e06c0061553d22c135d3759e296f52b9efdf5e33dfdc9a75e891878c459823a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_noether, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 07:59:18 np0005629333 systemd[1]: Started libpod-conmon-7e06c0061553d22c135d3759e296f52b9efdf5e33dfdc9a75e891878c459823a.scope.
Feb 25 07:59:18 np0005629333 systemd[1]: Started libcrun container.
Feb 25 07:59:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a92e78175949c4efd2e3724f637d11f001650fa371595a7d477176de8180ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 07:59:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a92e78175949c4efd2e3724f637d11f001650fa371595a7d477176de8180ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 07:59:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a92e78175949c4efd2e3724f637d11f001650fa371595a7d477176de8180ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 07:59:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3a92e78175949c4efd2e3724f637d11f001650fa371595a7d477176de8180ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 07:59:18 np0005629333 podman[374402]: 2026-02-25 12:59:18.63004558 +0000 UTC m=+0.344826108 container init 7e06c0061553d22c135d3759e296f52b9efdf5e33dfdc9a75e891878c459823a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_noether, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:59:18 np0005629333 podman[374402]: 2026-02-25 12:59:18.639180298 +0000 UTC m=+0.353960806 container start 7e06c0061553d22c135d3759e296f52b9efdf5e33dfdc9a75e891878c459823a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_noether, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 07:59:18 np0005629333 podman[374402]: 2026-02-25 12:59:18.65380891 +0000 UTC m=+0.368589428 container attach 7e06c0061553d22c135d3759e296f52b9efdf5e33dfdc9a75e891878c459823a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 07:59:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:59:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2362: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Feb 25 07:59:19 np0005629333 lvm[374493]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 07:59:19 np0005629333 lvm[374493]: VG ceph_vg0 finished
Feb 25 07:59:19 np0005629333 lvm[374495]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 07:59:19 np0005629333 lvm[374495]: VG ceph_vg1 finished
Feb 25 07:59:19 np0005629333 lvm[374496]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 07:59:19 np0005629333 lvm[374496]: VG ceph_vg2 finished
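
The three "PV ... online, VG ... is complete" / "VG ... finished" pairs are LVM's event-driven autoactivation confirming that each ceph_vg* volume group became complete as its backing loop device appeared. The same PV-to-VG mapping can be read back with lvm2's JSON reporting; a sketch (the --reportformat flag and JSON layout are recalled from lvm2, so verify locally):

    # List PVs and their VGs, matching the autoactivation events above.
    # Assumes lvm2's "--reportformat json" output layout; verify on your
    # build before relying on the field names.
    import json
    import subprocess

    out = subprocess.run(["pvs", "--reportformat", "json"],
                         capture_output=True, text=True, check=True).stdout
    for pv in json.loads(out)["report"][0]["pv"]:
        print(pv["pv_name"], "->", pv["vg_name"])  # e.g. /dev/loop3 -> ceph_vg0
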
Feb 25 07:59:19 np0005629333 kind_noether[374418]: {}
Feb 25 07:59:19 np0005629333 systemd[1]: libpod-7e06c0061553d22c135d3759e296f52b9efdf5e33dfdc9a75e891878c459823a.scope: Deactivated successfully.
Feb 25 07:59:19 np0005629333 systemd[1]: libpod-7e06c0061553d22c135d3759e296f52b9efdf5e33dfdc9a75e891878c459823a.scope: Consumed 1.074s CPU time.
Feb 25 07:59:19 np0005629333 podman[374402]: 2026-02-25 12:59:19.520410395 +0000 UTC m=+1.235190883 container died 7e06c0061553d22c135d3759e296f52b9efdf5e33dfdc9a75e891878c459823a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_noether, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 07:59:19 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d3a92e78175949c4efd2e3724f637d11f001650fa371595a7d477176de8180ef-merged.mount: Deactivated successfully.
Feb 25 07:59:19 np0005629333 podman[374402]: 2026-02-25 12:59:19.656739811 +0000 UTC m=+1.371520289 container remove 7e06c0061553d22c135d3759e296f52b9efdf5e33dfdc9a75e891878c459823a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 07:59:19 np0005629333 systemd[1]: libpod-conmon-7e06c0061553d22c135d3759e296f52b9efdf5e33dfdc9a75e891878c459823a.scope: Deactivated successfully.
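
The create, init, start, attach, died, remove sequences for jovial_wiles and kind_noether are the footprint of cephadm running short-lived ceph containers and discarding them on exit. A hedged sketch of the same one-shot pattern driven from Python (the image digest is taken from the log; the command run inside the container is a placeholder, since the log does not record what cephadm executed):

    # One-shot container run mirroring the create/start/attach/died/remove
    # lifecycle above. "--rm" deletes the container on exit, which is what
    # produces the "container remove" event. The inner command is a
    # placeholder; the log does not show cephadm's actual invocation.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    result = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "ceph", "--version"],  # placeholder cmd
        capture_output=True, text=True, check=False)
    print(result.returncode, result.stdout.strip())
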
Feb 25 07:59:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 07:59:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:59:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 07:59:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:59:20 np0005629333 nova_compute[244014]: 2026-02-25 12:59:20.494 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:20 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:59:20 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 07:59:20 np0005629333 nova_compute[244014]: 2026-02-25 12:59:20.587 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2363: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Feb 25 07:59:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2364: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 132 op/s
Feb 25 07:59:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:59:24 np0005629333 NetworkManager[49836]: <info>  [1772024364.5051] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/628)
Feb 25 07:59:24 np0005629333 ovn_controller[147040]: 2026-02-25T12:59:24Z|01529|binding|INFO|Releasing lport e8ddd525-f10e-4482-b8e7-397d0dcdd470 from this chassis (sb_readonly=0)
Feb 25 07:59:24 np0005629333 NetworkManager[49836]: <info>  [1772024364.5066] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/629)
Feb 25 07:59:24 np0005629333 nova_compute[244014]: 2026-02-25 12:59:24.501 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:24 np0005629333 ovn_controller[147040]: 2026-02-25T12:59:24Z|01530|binding|INFO|Releasing lport e8ddd525-f10e-4482-b8e7-397d0dcdd470 from this chassis (sb_readonly=0)
Feb 25 07:59:24 np0005629333 nova_compute[244014]: 2026-02-25 12:59:24.518 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:24 np0005629333 nova_compute[244014]: 2026-02-25 12:59:24.523 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2365: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Feb 25 07:59:25 np0005629333 nova_compute[244014]: 2026-02-25 12:59:25.337 244018 DEBUG nova.compute.manager [req-9aafc230-b397-47a9-85f3-5e9e2965437a req-a4cb7a4f-af99-4f35-b4ea-87ab2259ba2e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Received event network-changed-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:59:25 np0005629333 nova_compute[244014]: 2026-02-25 12:59:25.337 244018 DEBUG nova.compute.manager [req-9aafc230-b397-47a9-85f3-5e9e2965437a req-a4cb7a4f-af99-4f35-b4ea-87ab2259ba2e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Refreshing instance network info cache due to event network-changed-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:59:25 np0005629333 nova_compute[244014]: 2026-02-25 12:59:25.337 244018 DEBUG oslo_concurrency.lockutils [req-9aafc230-b397-47a9-85f3-5e9e2965437a req-a4cb7a4f-af99-4f35-b4ea-87ab2259ba2e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:59:25 np0005629333 nova_compute[244014]: 2026-02-25 12:59:25.337 244018 DEBUG oslo_concurrency.lockutils [req-9aafc230-b397-47a9-85f3-5e9e2965437a req-a4cb7a4f-af99-4f35-b4ea-87ab2259ba2e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:59:25 np0005629333 nova_compute[244014]: 2026-02-25 12:59:25.338 244018 DEBUG nova.network.neutron [req-9aafc230-b397-47a9-85f3-5e9e2965437a req-a4cb7a4f-af99-4f35-b4ea-87ab2259ba2e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Refreshing network info cache for port 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:59:25 np0005629333 nova_compute[244014]: 2026-02-25 12:59:25.496 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:25 np0005629333 nova_compute[244014]: 2026-02-25 12:59:25.589 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:26.478 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
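
The ovsdbapp transaction above writes neutron:ovn-metadata-sb-cfg into the external_ids of the agent's Chassis_Private row; this is how the OVN metadata agent acknowledges the southbound config sequence number it has processed. The agent goes through ovsdbapp's IDL transaction API, but the same write can be expressed with the ovn-sbctl CLI; an illustrative sketch (record UUID and value copied from the log; the quoting of the colon-containing key follows ovs db-ctl conventions, so verify on your version):

    # CLI equivalent of the DbSetCommand in the log. Illustrative only;
    # the agent itself uses ovsdbapp, not a subprocess. The key contains
    # colons, hence the embedded double quotes for ovn-sbctl's parser.
    import subprocess

    subprocess.run(
        ["ovn-sbctl", "set", "Chassis_Private",
         "a594384c-d614-4492-9e0a-4d6ec095920c",
         'external_ids:"neutron:ovn-metadata-sb-cfg"=46'],
        check=True)
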
Feb 25 07:59:26 np0005629333 nova_compute[244014]: 2026-02-25 12:59:26.588 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024351.5640838, eadab8d0-1f24-4ace-b7b4-10329df53f12 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:59:26 np0005629333 nova_compute[244014]: 2026-02-25 12:59:26.588 244018 INFO nova.compute.manager [-] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] VM Stopped (Lifecycle Event)#033[00m
Feb 25 07:59:26 np0005629333 nova_compute[244014]: 2026-02-25 12:59:26.611 244018 DEBUG nova.compute.manager [None req-1bf328c5-1afa-4c98-8bd3-1e31115f465f - - - - - -] [instance: eadab8d0-1f24-4ace-b7b4-10329df53f12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:59:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2366: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Feb 25 07:59:27 np0005629333 nova_compute[244014]: 2026-02-25 12:59:27.490 244018 DEBUG nova.network.neutron [req-9aafc230-b397-47a9-85f3-5e9e2965437a req-a4cb7a4f-af99-4f35-b4ea-87ab2259ba2e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Updated VIF entry in instance network info cache for port 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:59:27 np0005629333 nova_compute[244014]: 2026-02-25 12:59:27.491 244018 DEBUG nova.network.neutron [req-9aafc230-b397-47a9-85f3-5e9e2965437a req-a4cb7a4f-af99-4f35-b4ea-87ab2259ba2e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Updating instance_info_cache with network_info: [{"id": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "address": "fa:16:3e:db:fb:af", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b9c40d0-fe", "ovs_interfaceid": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:59:27 np0005629333 nova_compute[244014]: 2026-02-25 12:59:27.515 244018 DEBUG oslo_concurrency.lockutils [req-9aafc230-b397-47a9-85f3-5e9e2965437a req-a4cb7a4f-af99-4f35-b4ea-87ab2259ba2e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
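
The network_info blob cached above is plain JSON and can be inspected directly. A small sketch that pulls out the fixed and floating addresses shown in the log; NETWORK_INFO_JSON is a hypothetical placeholder standing for the logged list:

    # Extract the addresses from the network_info structure logged above.
    # NETWORK_INFO_JSON is a placeholder for the cached list; "vif" is its
    # first (and here only) entry.
    import json

    vif = json.loads(NETWORK_INFO_JSON)[0]
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            print("fixed:", ip["address"])           # 10.100.0.14
            for fip in ip["floating_ips"]:
                print("floating:", fip["address"])   # 192.168.122.196
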
Feb 25 07:59:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:59:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2367: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 77 op/s
Feb 25 07:59:29 np0005629333 nova_compute[244014]: 2026-02-25 12:59:29.185 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:30 np0005629333 nova_compute[244014]: 2026-02-25 12:59:30.498 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:30 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Feb 25 07:59:30 np0005629333 ovn_controller[147040]: 2026-02-25T12:59:30Z|00190|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:db:fb:af 10.100.0.14
Feb 25 07:59:30 np0005629333 ovn_controller[147040]: 2026-02-25T12:59:30Z|00191|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:db:fb:af 10.100.0.14
Feb 25 07:59:30 np0005629333 nova_compute[244014]: 2026-02-25 12:59:30.591 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2368: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 64 op/s
Feb 25 07:59:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_12:59:31
Feb 25 07:59:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 07:59:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 07:59:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'images', 'vms', 'backups', 'cephfs.cephfs.meta', 'volumes']
Feb 25 07:59:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 07:59:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:59:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:59:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:59:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:59:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 07:59:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 07:59:31 np0005629333 nova_compute[244014]: 2026-02-25 12:59:31.916 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 07:59:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:59:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 07:59:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 07:59:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:59:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:59:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 07:59:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:59:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 07:59:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 07:59:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2369: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Feb 25 07:59:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:59:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2370: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 07:59:35 np0005629333 nova_compute[244014]: 2026-02-25 12:59:35.500 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:35 np0005629333 nova_compute[244014]: 2026-02-25 12:59:35.593 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2371: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 07:59:36 np0005629333 nova_compute[244014]: 2026-02-25 12:59:36.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:59:36 np0005629333 nova_compute[244014]: 2026-02-25 12:59:36.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 07:59:36 np0005629333 nova_compute[244014]: 2026-02-25 12:59:36.878 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
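
_heal_instance_info_cache and, a few lines below, update_available_resource are oslo.service periodic tasks: the manager loop invokes every method decorated this way at its configured spacing, producing the "Running periodic task ..." lines. A minimal sketch of the mechanism, with illustrative names:

    # Minimal oslo.service periodic-task sketch, mirroring the
    # "Running periodic task ..." DEBUG lines. Class and method names
    # are illustrative, not nova's.
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def heal_info_cache(self, context):
            pass  # refresh one instance's network info per pass

    # The service timer calls manager.run_periodic_tasks(context),
    # which dispatches each decorated task when its spacing elapses.
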
Feb 25 07:59:37 np0005629333 nova_compute[244014]: 2026-02-25 12:59:37.252 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:59:37 np0005629333 nova_compute[244014]: 2026-02-25 12:59:37.253 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:59:37 np0005629333 nova_compute[244014]: 2026-02-25 12:59:37.253 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 25 07:59:37 np0005629333 nova_compute[244014]: 2026-02-25 12:59:37.254 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid fc7d7f86-eb7c-476c-840e-98c97329de34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:59:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:59:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2372: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 07:59:38 np0005629333 nova_compute[244014]: 2026-02-25 12:59:38.864 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:39 np0005629333 nova_compute[244014]: 2026-02-25 12:59:39.353 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Updating instance_info_cache with network_info: [{"id": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "address": "fa:16:3e:db:fb:af", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b9c40d0-fe", "ovs_interfaceid": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:59:39 np0005629333 nova_compute[244014]: 2026-02-25 12:59:39.400 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:59:39 np0005629333 nova_compute[244014]: 2026-02-25 12:59:39.401 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 25 07:59:39 np0005629333 nova_compute[244014]: 2026-02-25 12:59:39.402 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 07:59:39 np0005629333 nova_compute[244014]: 2026-02-25 12:59:39.532 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:59:39 np0005629333 nova_compute[244014]: 2026-02-25 12:59:39.532 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:59:39 np0005629333 nova_compute[244014]: 2026-02-25 12:59:39.533 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:59:39 np0005629333 nova_compute[244014]: 2026-02-25 12:59:39.533 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 07:59:39 np0005629333 nova_compute[244014]: 2026-02-25 12:59:39.534 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:59:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:59:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2157476487' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:59:40 np0005629333 nova_compute[244014]: 2026-02-25 12:59:40.189 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
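
The resource tracker shells out to ceph df for pool capacity via oslo.concurrency's processutils, exactly the call path logged above. A sketch of that call and of reading the JSON it returns; the field names follow the ceph CLI's "stats" section as recalled, so verify them on your ceph release:

    # Run "ceph df --format=json" the way the resource tracker does and
    # read back total/available bytes. The "stats" key layout is an
    # assumption to check against your ceph release.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])
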
Feb 25 07:59:40 np0005629333 nova_compute[244014]: 2026-02-25 12:59:40.353 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:59:40 np0005629333 nova_compute[244014]: 2026-02-25 12:59:40.354 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 07:59:40 np0005629333 nova_compute[244014]: 2026-02-25 12:59:40.502 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:40 np0005629333 nova_compute[244014]: 2026-02-25 12:59:40.557 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:59:40 np0005629333 nova_compute[244014]: 2026-02-25 12:59:40.559 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3370MB free_disk=59.94178275205195GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 07:59:40 np0005629333 nova_compute[244014]: 2026-02-25 12:59:40.560 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:59:40 np0005629333 nova_compute[244014]: 2026-02-25 12:59:40.560 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:59:40 np0005629333 nova_compute[244014]: 2026-02-25 12:59:40.595 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2373: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 339 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 07:59:41 np0005629333 nova_compute[244014]: 2026-02-25 12:59:41.104 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance fc7d7f86-eb7c-476c-840e-98c97329de34 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 07:59:41 np0005629333 nova_compute[244014]: 2026-02-25 12:59:41.105 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 07:59:41 np0005629333 nova_compute[244014]: 2026-02-25 12:59:41.105 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 07:59:41 np0005629333 nova_compute[244014]: 2026-02-25 12:59:41.144 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:59:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:59:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1508777480' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:59:41 np0005629333 nova_compute[244014]: 2026-02-25 12:59:41.719 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
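This `ceph df --format=json` round trip (with its audit trail on the mon just above) is how the resource tracker samples pool capacity; the free_disk figure in the hypervisor resource view is derived from it. A minimal sketch of reading the same data, assuming the usual `ceph df` JSON layout with per-pool stats:

    import json
    import subprocess

    # Same command the periodic task ran above; assumes the client.openstack
    # keyring and /etc/ceph/ceph.conf are readable from where this runs.
    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    df = json.loads(out)
    vms = next(p for p in df['pools'] if p['name'] == 'vms')
    # bytes_used and max_avail are the inputs to the capacity math
    print(vms['stats']['bytes_used'], vms['stats']['max_avail'])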
Feb 25 07:59:41 np0005629333 nova_compute[244014]: 2026-02-25 12:59:41.727 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:59:41 np0005629333 nova_compute[244014]: 2026-02-25 12:59:41.765 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
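The inventory dict above is what Placement prices this host at: effective capacity per resource class is (total - reserved) * allocation_ratio, with the ratios coming from the configured nova.conf overcommit settings. Worked out for the values logged here:

    # Capacity math behind the inventory reported above (a sketch; the
    # formula is Placement's standard (total - reserved) * allocation_ratio).
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2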
Feb 25 07:59:41 np0005629333 nova_compute[244014]: 2026-02-25 12:59:41.839 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 07:59:41 np0005629333 nova_compute[244014]: 2026-02-25 12:59:41.840 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:59:42 np0005629333 nova_compute[244014]: 2026-02-25 12:59:42.615 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:59:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 07:59:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:59:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 07:59:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:59:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007751263576675736 of space, bias 1.0, pg target 0.23253790730027207 quantized to 32 (current 32)
Feb 25 07:59:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:59:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:59:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:59:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:59:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:59:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024942605610493687 of space, bias 1.0, pg target 0.7482781683148106 quantized to 32 (current 32)
Feb 25 07:59:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:59:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3624184949949025e-06 of space, bias 4.0, pg target 0.001634902193993883 quantized to 16 (current 16)
Feb 25 07:59:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:59:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:59:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:59:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 07:59:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:59:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 07:59:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:59:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 07:59:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 07:59:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
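Each pg_autoscaler pair above follows the same arithmetic: pg target = usage_ratio * bias * a cluster-wide PG budget, then quantized to a power of two and left at the current pg_num unless the ideal is far off. The logged targets imply a budget of 300 PGs, which would match, for example, mon_target_pg_per_osd=100 across 3 OSDs; that split is an assumption, only the 300 is implied by the numbers themselves:

    # Reproducing three of the 'pg target' values logged above.
    PG_BUDGET = 300  # implied by the log; e.g. 100 per OSD * 3 OSDs (assumed split)
    for pool, ratio, bias in [
            ('vms',                0.0007751263576675736,  1.0),
            ('images',             0.0024942605610493687,  1.0),
            ('cephfs.cephfs.meta', 1.3624184949949025e-06, 4.0)]:
        print(pool, ratio * bias * PG_BUDGET)
    # -> 0.2325379..., 0.7482781..., 0.0016349... (the pg targets logged above)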
Feb 25 07:59:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2374: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 07:59:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:59:43 np0005629333 nova_compute[244014]: 2026-02-25 12:59:43.835 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:59:43 np0005629333 nova_compute[244014]: 2026-02-25 12:59:43.835 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:59:43 np0005629333 nova_compute[244014]: 2026-02-25 12:59:43.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:59:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2375: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Feb 25 07:59:45 np0005629333 nova_compute[244014]: 2026-02-25 12:59:45.504 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:59:45 np0005629333 nova_compute[244014]: 2026-02-25 12:59:45.597 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:59:45 np0005629333 podman[374583]: 2026-02-25 12:59:45.736099067 +0000 UTC m=+0.071986802 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 25 07:59:45 np0005629333 podman[374584]: 2026-02-25 12:59:45.769030466 +0000 UTC m=+0.104899350 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0)
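The two health_status=healthy events above are podman running each container's configured healthcheck (the '/openstack/healthcheck' script mounted read-only into the container, per the config_data shown). The same probe can be fired by hand; a sketch, where a zero exit status means healthy:

    import subprocess

    # Runs each container's own configured healthcheck once; raises
    # CalledProcessError if the check reports unhealthy (non-zero exit).
    for name in ('ovn_metadata_agent', 'ovn_controller'):
        subprocess.run(['podman', 'healthcheck', 'run', name], check=True)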
Feb 25 07:59:45 np0005629333 nova_compute[244014]: 2026-02-25 12:59:45.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:59:46 np0005629333 nova_compute[244014]: 2026-02-25 12:59:46.667 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "b0014972-9433-4648-951c-bd9a210b6a69" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:59:46 np0005629333 nova_compute[244014]: 2026-02-25 12:59:46.667 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:59:46 np0005629333 nova_compute[244014]: 2026-02-25 12:59:46.681 244018 DEBUG nova.compute.manager [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:59:46 np0005629333 nova_compute[244014]: 2026-02-25 12:59:46.765 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:59:46 np0005629333 nova_compute[244014]: 2026-02-25 12:59:46.765 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:59:46 np0005629333 nova_compute[244014]: 2026-02-25 12:59:46.774 244018 DEBUG nova.virt.hardware [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:59:46 np0005629333 nova_compute[244014]: 2026-02-25 12:59:46.775 244018 INFO nova.compute.claims [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:59:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2376: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Feb 25 07:59:46 np0005629333 nova_compute[244014]: 2026-02-25 12:59:46.902 244018 DEBUG oslo_concurrency.processutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:59:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:59:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/74252487' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:59:47 np0005629333 nova_compute[244014]: 2026-02-25 12:59:47.437 244018 DEBUG oslo_concurrency.processutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:59:47 np0005629333 nova_compute[244014]: 2026-02-25 12:59:47.445 244018 DEBUG nova.compute.provider_tree [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:59:47 np0005629333 nova_compute[244014]: 2026-02-25 12:59:47.464 244018 DEBUG nova.scheduler.client.report [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:59:47 np0005629333 nova_compute[244014]: 2026-02-25 12:59:47.484 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:59:47 np0005629333 nova_compute[244014]: 2026-02-25 12:59:47.486 244018 DEBUG nova.compute.manager [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:59:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 07:59:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3267162017' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 07:59:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 07:59:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3267162017' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 07:59:47 np0005629333 nova_compute[244014]: 2026-02-25 12:59:47.541 244018 DEBUG nova.compute.manager [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:59:47 np0005629333 nova_compute[244014]: 2026-02-25 12:59:47.542 244018 DEBUG nova.network.neutron [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:59:47 np0005629333 nova_compute[244014]: 2026-02-25 12:59:47.563 244018 INFO nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:59:47 np0005629333 nova_compute[244014]: 2026-02-25 12:59:47.584 244018 DEBUG nova.compute.manager [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:59:47 np0005629333 nova_compute[244014]: 2026-02-25 12:59:47.663 244018 DEBUG nova.compute.manager [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:59:47 np0005629333 nova_compute[244014]: 2026-02-25 12:59:47.664 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:59:47 np0005629333 nova_compute[244014]: 2026-02-25 12:59:47.665 244018 INFO nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Creating image(s)
Feb 25 07:59:47 np0005629333 nova_compute[244014]: 2026-02-25 12:59:47.691 244018 DEBUG nova.storage.rbd_utils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image b0014972-9433-4648-951c-bd9a210b6a69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:59:47 np0005629333 nova_compute[244014]: 2026-02-25 12:59:47.727 244018 DEBUG nova.storage.rbd_utils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image b0014972-9433-4648-951c-bd9a210b6a69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:59:47 np0005629333 nova_compute[244014]: 2026-02-25 12:59:47.760 244018 DEBUG nova.storage.rbd_utils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image b0014972-9433-4648-951c-bd9a210b6a69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:59:47 np0005629333 nova_compute[244014]: 2026-02-25 12:59:47.765 244018 DEBUG oslo_concurrency.processutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:59:47 np0005629333 nova_compute[244014]: 2026-02-25 12:59:47.812 244018 DEBUG nova.policy [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea895f651dd742a7b5eb2d63fb34641c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b9699483122f465084e3147e4904d13d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:59:47 np0005629333 nova_compute[244014]: 2026-02-25 12:59:47.845 244018 DEBUG oslo_concurrency.processutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
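The qemu-img probe above runs under oslo_concurrency.prlimit, which caps the child's address space (--as=1073741824 bytes) and CPU seconds (--cpu=30) so a malformed image cannot wedge the agent. A sketch of the same call and the fields Nova reads back, assuming the standard qemu-img JSON keys:

    import json
    import subprocess

    base = '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'
    cmd = ['/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
           '--as=1073741824', '--cpu=30', '--',
           'env', 'LC_ALL=C', 'LANG=C',
           'qemu-img', 'info', base, '--force-share', '--output=json']
    info = json.loads(subprocess.check_output(cmd))
    print(info['format'], info['virtual-size'])  # image format and size in bytes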
Feb 25 07:59:47 np0005629333 nova_compute[244014]: 2026-02-25 12:59:47.846 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:59:47 np0005629333 nova_compute[244014]: 2026-02-25 12:59:47.847 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:59:47 np0005629333 nova_compute[244014]: 2026-02-25 12:59:47.847 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:59:47 np0005629333 nova_compute[244014]: 2026-02-25 12:59:47.871 244018 DEBUG nova.storage.rbd_utils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image b0014972-9433-4648-951c-bd9a210b6a69_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:59:47 np0005629333 nova_compute[244014]: 2026-02-25 12:59:47.876 244018 DEBUG oslo_concurrency.processutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 b0014972-9433-4648-951c-bd9a210b6a69_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:59:47 np0005629333 nova_compute[244014]: 2026-02-25 12:59:47.909 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:59:47 np0005629333 nova_compute[244014]: 2026-02-25 12:59:47.910 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:59:47 np0005629333 nova_compute[244014]: 2026-02-25 12:59:47.911 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 07:59:48 np0005629333 nova_compute[244014]: 2026-02-25 12:59:48.418 244018 DEBUG oslo_concurrency.processutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 b0014972-9433-4648-951c-bd9a210b6a69_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:59:48 np0005629333 nova_compute[244014]: 2026-02-25 12:59:48.537 244018 DEBUG nova.storage.rbd_utils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] resizing rbd image b0014972-9433-4648-951c-bd9a210b6a69_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
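Creating the root disk is a two-step rbd operation in this trace: import the cached base image into the vms pool (the CLI call above), then grow it to the flavor's root size (1073741824 bytes = 1 GiB, matching the DISK_GB: 1 allocation seen earlier). A sketch of the resize step using the python rbd bindings instead of Nova's wrapper; pool, image name, and credentials mirror the CLI flags above:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, 'b0014972-9433-4648-951c-bd9a210b6a69_disk') as img:
                img.resize(1073741824)  # size argument is in bytes
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()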
Feb 25 07:59:48 np0005629333 nova_compute[244014]: 2026-02-25 12:59:48.736 244018 DEBUG nova.objects.instance [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'migration_context' on Instance uuid b0014972-9433-4648-951c-bd9a210b6a69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 07:59:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:59:48 np0005629333 nova_compute[244014]: 2026-02-25 12:59:48.755 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 07:59:48 np0005629333 nova_compute[244014]: 2026-02-25 12:59:48.757 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Ensure instance console log exists: /var/lib/nova/instances/b0014972-9433-4648-951c-bd9a210b6a69/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 07:59:48 np0005629333 nova_compute[244014]: 2026-02-25 12:59:48.758 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:59:48 np0005629333 nova_compute[244014]: 2026-02-25 12:59:48.759 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:59:48 np0005629333 nova_compute[244014]: 2026-02-25 12:59:48.759 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:59:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2377: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 12 KiB/s wr, 0 op/s
Feb 25 07:59:49 np0005629333 nova_compute[244014]: 2026-02-25 12:59:49.408 244018 DEBUG nova.network.neutron [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Successfully created port: 54f5b116-49ad-43de-8259-78cb20b778c1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 07:59:49 np0005629333 nova_compute[244014]: 2026-02-25 12:59:49.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 07:59:50 np0005629333 nova_compute[244014]: 2026-02-25 12:59:50.048 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "8d338640-2b5f-4571-8f76-b523064ee129" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:59:50 np0005629333 nova_compute[244014]: 2026-02-25 12:59:50.049 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "8d338640-2b5f-4571-8f76-b523064ee129" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:59:50 np0005629333 nova_compute[244014]: 2026-02-25 12:59:50.066 244018 DEBUG nova.compute.manager [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 07:59:50 np0005629333 nova_compute[244014]: 2026-02-25 12:59:50.123 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:59:50 np0005629333 nova_compute[244014]: 2026-02-25 12:59:50.124 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:59:50 np0005629333 nova_compute[244014]: 2026-02-25 12:59:50.132 244018 DEBUG nova.virt.hardware [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 07:59:50 np0005629333 nova_compute[244014]: 2026-02-25 12:59:50.132 244018 INFO nova.compute.claims [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Claim successful on node compute-0.ctlplane.example.com
Feb 25 07:59:50 np0005629333 nova_compute[244014]: 2026-02-25 12:59:50.287 244018 DEBUG oslo_concurrency.processutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:59:50 np0005629333 nova_compute[244014]: 2026-02-25 12:59:50.416 244018 DEBUG nova.network.neutron [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Successfully updated port: 54f5b116-49ad-43de-8259-78cb20b778c1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 07:59:50 np0005629333 nova_compute[244014]: 2026-02-25 12:59:50.436 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "refresh_cache-b0014972-9433-4648-951c-bd9a210b6a69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:59:50 np0005629333 nova_compute[244014]: 2026-02-25 12:59:50.437 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquired lock "refresh_cache-b0014972-9433-4648-951c-bd9a210b6a69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 07:59:50 np0005629333 nova_compute[244014]: 2026-02-25 12:59:50.437 244018 DEBUG nova.network.neutron [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 07:59:50 np0005629333 nova_compute[244014]: 2026-02-25 12:59:50.506 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:59:50 np0005629333 nova_compute[244014]: 2026-02-25 12:59:50.526 244018 DEBUG nova.compute.manager [req-aa8490de-73d3-44d5-a6d7-a6c281cce583 req-881143fb-1888-4f39-9f25-7f8b83456564 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Received event network-changed-54f5b116-49ad-43de-8259-78cb20b778c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 07:59:50 np0005629333 nova_compute[244014]: 2026-02-25 12:59:50.526 244018 DEBUG nova.compute.manager [req-aa8490de-73d3-44d5-a6d7-a6c281cce583 req-881143fb-1888-4f39-9f25-7f8b83456564 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Refreshing instance network info cache due to event network-changed-54f5b116-49ad-43de-8259-78cb20b778c1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 07:59:50 np0005629333 nova_compute[244014]: 2026-02-25 12:59:50.526 244018 DEBUG oslo_concurrency.lockutils [req-aa8490de-73d3-44d5-a6d7-a6c281cce583 req-881143fb-1888-4f39-9f25-7f8b83456564 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-b0014972-9433-4648-951c-bd9a210b6a69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 07:59:50 np0005629333 nova_compute[244014]: 2026-02-25 12:59:50.600 244018 DEBUG nova.network.neutron [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 07:59:50 np0005629333 nova_compute[244014]: 2026-02-25 12:59:50.603 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 07:59:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 07:59:50 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3336040633' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 07:59:50 np0005629333 nova_compute[244014]: 2026-02-25 12:59:50.821 244018 DEBUG oslo_concurrency.processutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:59:50 np0005629333 nova_compute[244014]: 2026-02-25 12:59:50.828 244018 DEBUG nova.compute.provider_tree [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 07:59:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2378: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Feb 25 07:59:50 np0005629333 nova_compute[244014]: 2026-02-25 12:59:50.845 244018 DEBUG nova.scheduler.client.report [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 07:59:50 np0005629333 nova_compute[244014]: 2026-02-25 12:59:50.869 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:59:50 np0005629333 nova_compute[244014]: 2026-02-25 12:59:50.870 244018 DEBUG nova.compute.manager [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 07:59:50 np0005629333 nova_compute[244014]: 2026-02-25 12:59:50.945 244018 DEBUG nova.compute.manager [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 07:59:50 np0005629333 nova_compute[244014]: 2026-02-25 12:59:50.946 244018 DEBUG nova.network.neutron [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 07:59:50 np0005629333 nova_compute[244014]: 2026-02-25 12:59:50.970 244018 INFO nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 07:59:50 np0005629333 nova_compute[244014]: 2026-02-25 12:59:50.991 244018 DEBUG nova.compute.manager [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 07:59:51 np0005629333 nova_compute[244014]: 2026-02-25 12:59:51.114 244018 DEBUG nova.compute.manager [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 07:59:51 np0005629333 nova_compute[244014]: 2026-02-25 12:59:51.116 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 07:59:51 np0005629333 nova_compute[244014]: 2026-02-25 12:59:51.117 244018 INFO nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Creating image(s)
Feb 25 07:59:51 np0005629333 nova_compute[244014]: 2026-02-25 12:59:51.151 244018 DEBUG nova.storage.rbd_utils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] rbd image 8d338640-2b5f-4571-8f76-b523064ee129_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:59:51 np0005629333 nova_compute[244014]: 2026-02-25 12:59:51.189 244018 DEBUG nova.storage.rbd_utils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] rbd image 8d338640-2b5f-4571-8f76-b523064ee129_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:59:51 np0005629333 nova_compute[244014]: 2026-02-25 12:59:51.222 244018 DEBUG nova.storage.rbd_utils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] rbd image 8d338640-2b5f-4571-8f76-b523064ee129_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:59:51 np0005629333 nova_compute[244014]: 2026-02-25 12:59:51.227 244018 DEBUG oslo_concurrency.processutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:59:51 np0005629333 nova_compute[244014]: 2026-02-25 12:59:51.297 244018 DEBUG nova.policy [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7592542cdf7f423c86332695423dbe79', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cf5eb89ba0424237a313b1f369bcb92b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 07:59:51 np0005629333 nova_compute[244014]: 2026-02-25 12:59:51.314 244018 DEBUG oslo_concurrency.processutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:59:51 np0005629333 nova_compute[244014]: 2026-02-25 12:59:51.315 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 07:59:51 np0005629333 nova_compute[244014]: 2026-02-25 12:59:51.315 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 07:59:51 np0005629333 nova_compute[244014]: 2026-02-25 12:59:51.316 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 07:59:51 np0005629333 nova_compute[244014]: 2026-02-25 12:59:51.349 244018 DEBUG nova.storage.rbd_utils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] rbd image 8d338640-2b5f-4571-8f76-b523064ee129_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 07:59:51 np0005629333 nova_compute[244014]: 2026-02-25 12:59:51.355 244018 DEBUG oslo_concurrency.processutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 8d338640-2b5f-4571-8f76-b523064ee129_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 07:59:51 np0005629333 nova_compute[244014]: 2026-02-25 12:59:51.947 244018 DEBUG oslo_concurrency.processutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 8d338640-2b5f-4571-8f76-b523064ee129_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.031 244018 DEBUG nova.storage.rbd_utils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] resizing rbd image 8d338640-2b5f-4571-8f76-b523064ee129_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.064 244018 DEBUG nova.network.neutron [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Updating instance_info_cache with network_info: [{"id": "54f5b116-49ad-43de-8259-78cb20b778c1", "address": "fa:16:3e:33:21:fa", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f5b116-49", "ovs_interfaceid": "54f5b116-49ad-43de-8259-78cb20b778c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.087 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Releasing lock "refresh_cache-b0014972-9433-4648-951c-bd9a210b6a69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.088 244018 DEBUG nova.compute.manager [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Instance network_info: |[{"id": "54f5b116-49ad-43de-8259-78cb20b778c1", "address": "fa:16:3e:33:21:fa", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f5b116-49", "ovs_interfaceid": "54f5b116-49ad-43de-8259-78cb20b778c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.089 244018 DEBUG oslo_concurrency.lockutils [req-aa8490de-73d3-44d5-a6d7-a6c281cce583 req-881143fb-1888-4f39-9f25-7f8b83456564 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-b0014972-9433-4648-951c-bd9a210b6a69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.090 244018 DEBUG nova.network.neutron [req-aa8490de-73d3-44d5-a6d7-a6c281cce583 req-881143fb-1888-4f39-9f25-7f8b83456564 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Refreshing network info cache for port 54f5b116-49ad-43de-8259-78cb20b778c1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.096 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Start _get_guest_xml network_info=[{"id": "54f5b116-49ad-43de-8259-78cb20b778c1", "address": "fa:16:3e:33:21:fa", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f5b116-49", "ovs_interfaceid": "54f5b116-49ad-43de-8259-78cb20b778c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.101 244018 WARNING nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.106 244018 DEBUG nova.virt.libvirt.host [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.106 244018 DEBUG nova.virt.libvirt.host [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.111 244018 DEBUG nova.virt.libvirt.host [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.112 244018 DEBUG nova.virt.libvirt.host [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
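
The pair of probes above determines whether CPU quota enforcement is available on the host: nova first looks for a cgroups-v1 "cpu" controller (absent here) and then for one on the cgroups-v2 unified hierarchy (found). A minimal sketch of the v2 side of such a probe — the path and parsing are standard Linux plumbing, not nova's exact code:

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
        """True if the unified hierarchy exposes the 'cpu' controller."""
        try:
            controllers = Path(root, 'cgroup.controllers').read_text()
        except FileNotFoundError:
            return False  # no unified hierarchy mounted (v1-only host)
        return 'cpu' in controllers.split()
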
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.113 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.114 244018 DEBUG nova.virt.hardware [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.114 244018 DEBUG nova.virt.hardware [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.115 244018 DEBUG nova.virt.hardware [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.115 244018 DEBUG nova.virt.hardware [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.116 244018 DEBUG nova.virt.hardware [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.116 244018 DEBUG nova.virt.hardware [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.116 244018 DEBUG nova.virt.hardware [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.117 244018 DEBUG nova.virt.hardware [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.117 244018 DEBUG nova.virt.hardware [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.118 244018 DEBUG nova.virt.hardware [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.118 244018 DEBUG nova.virt.hardware [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
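
The topology lines above reduce to factoring the vCPU count into sockets x cores x threads within the configured limits; with no flavor or image preference (0:0:0) and effectively unbounded limits (65536 each), a single vCPU admits exactly one topology, 1:1:1. A toy re-derivation of that enumeration (not nova's implementation):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield (sockets, cores, threads) triples whose product is vcpus."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log
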
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.122 244018 DEBUG oslo_concurrency.processutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.155 244018 DEBUG nova.network.neutron [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Successfully created port: e11a4d69-8666-420b-b13c-69a6427567a4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.196 244018 DEBUG nova.objects.instance [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lazy-loading 'migration_context' on Instance uuid 8d338640-2b5f-4571-8f76-b523064ee129 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.215 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.215 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Ensure instance console log exists: /var/lib/nova/instances/8d338640-2b5f-4571-8f76-b523064ee129/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.216 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.216 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.217 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:59:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:59:52 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/145816303' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.720 244018 DEBUG oslo_concurrency.processutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
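
The "ceph mon dump" call above is how the driver discovers the monitor addresses that later appear as <host> elements in the guest disk XML. A sketch of fetching and parsing that JSON with the standard library — the 'mons'/'addr' fields are standard "ceph mon dump" output, though the exact address keys vary across Ceph releases, hence the fallback:

    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    hosts = []
    for mon in json.loads(out).get('mons', []):
        addr = mon.get('addr') or mon.get('public_addr', '')
        host, _, rest = addr.partition(':')
        port = rest.split('/', 1)[0] or '6789'
        hosts.append((host, port))
    print(hosts)  # e.g. [('192.168.122.100', '6789')]
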
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.743 244018 DEBUG nova.storage.rbd_utils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image b0014972-9433-4648-951c-bd9a210b6a69_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:59:52 np0005629333 nova_compute[244014]: 2026-02-25 12:59:52.749 244018 DEBUG oslo_concurrency.processutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:59:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2379: 305 pgs: 305 active+clean; 325 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Feb 25 07:59:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:59:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/876705242' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.262 244018 DEBUG oslo_concurrency.processutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.264 244018 DEBUG nova.virt.libvirt.vif [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:59:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-0-306231765',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-0-306231765',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-gen',id=144,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKflPzkYcK8T1jIvTW73/HFgMcFhCGkDieaMKwFdqxoWC54RyT9HKRPaz7zDpWGOyrYK1XW41LVz9p16Co9g6HWeTxnQZJe7dDdqfF7T6FTQYUTjJqoaDTrnYUJSVUiaQA==',key_name='tempest-TestSecurityGroupsBasicOps-600669415',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-ir6by9q0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:59:47Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=b0014972-9433-4648-951c-bd9a210b6a69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54f5b116-49ad-43de-8259-78cb20b778c1", "address": "fa:16:3e:33:21:fa", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f5b116-49", "ovs_interfaceid": "54f5b116-49ad-43de-8259-78cb20b778c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.264 244018 DEBUG nova.network.os_vif_util [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "54f5b116-49ad-43de-8259-78cb20b778c1", "address": "fa:16:3e:33:21:fa", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f5b116-49", "ovs_interfaceid": "54f5b116-49ad-43de-8259-78cb20b778c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.265 244018 DEBUG nova.network.os_vif_util [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:21:fa,bridge_name='br-int',has_traffic_filtering=True,id=54f5b116-49ad-43de-8259-78cb20b778c1,network=Network(62bbaf1c-560e-4f11-b053-43d27fe35ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f5b116-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.266 244018 DEBUG nova.objects.instance [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'pci_devices' on Instance uuid b0014972-9433-4648-951c-bd9a210b6a69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.281 244018 DEBUG nova.network.neutron [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Successfully updated port: e11a4d69-8666-420b-b13c-69a6427567a4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.290 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:59:53 np0005629333 nova_compute[244014]:  <uuid>b0014972-9433-4648-951c-bd9a210b6a69</uuid>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:  <name>instance-00000090</name>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-0-306231765</nova:name>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:59:52</nova:creationTime>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:59:53 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:        <nova:user uuid="ea895f651dd742a7b5eb2d63fb34641c">tempest-TestSecurityGroupsBasicOps-948360018-project-member</nova:user>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:        <nova:project uuid="b9699483122f465084e3147e4904d13d">tempest-TestSecurityGroupsBasicOps-948360018</nova:project>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:        <nova:port uuid="54f5b116-49ad-43de-8259-78cb20b778c1">
Feb 25 07:59:53 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <entry name="serial">b0014972-9433-4648-951c-bd9a210b6a69</entry>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <entry name="uuid">b0014972-9433-4648-951c-bd9a210b6a69</entry>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/b0014972-9433-4648-951c-bd9a210b6a69_disk">
Feb 25 07:59:53 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:59:53 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/b0014972-9433-4648-951c-bd9a210b6a69_disk.config">
Feb 25 07:59:53 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:59:53 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:33:21:fa"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <target dev="tap54f5b116-49"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/b0014972-9433-4648-951c-bd9a210b6a69/console.log" append="off"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:59:53 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:59:53 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:59:53 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:59:53 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
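
The domain XML dumped above is handed to libvirt as-is; it can also be inspected offline. A small sketch that pulls the RBD disk sources out of a saved copy with the standard library (the file name is hypothetical — nova defines the domain directly rather than writing the XML to disk):

    import xml.etree.ElementTree as ET

    tree = ET.parse('domain.xml')  # hypothetical saved copy of the dump above
    for disk in tree.findall('./devices/disk'):
        src = disk.find('source')
        if src is not None and src.get('protocol') == 'rbd':
            print(disk.get('device'), src.get('name'))
    # disk  vms/b0014972-9433-4648-951c-bd9a210b6a69_disk
    # cdrom vms/b0014972-9433-4648-951c-bd9a210b6a69_disk.config
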
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.292 244018 DEBUG nova.compute.manager [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Preparing to wait for external event network-vif-plugged-54f5b116-49ad-43de-8259-78cb20b778c1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.293 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "b0014972-9433-4648-951c-bd9a210b6a69-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.293 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.294 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.295 244018 DEBUG nova.virt.libvirt.vif [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:59:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-0-306231765',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-0-306231765',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-gen',id=144,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKflPzkYcK8T1jIvTW73/HFgMcFhCGkDieaMKwFdqxoWC54RyT9HKRPaz7zDpWGOyrYK1XW41LVz9p16Co9g6HWeTxnQZJe7dDdqfF7T6FTQYUTjJqoaDTrnYUJSVUiaQA==',key_name='tempest-TestSecurityGroupsBasicOps-600669415',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-ir6by9q0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:59:47Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=b0014972-9433-4648-951c-bd9a210b6a69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54f5b116-49ad-43de-8259-78cb20b778c1", "address": "fa:16:3e:33:21:fa", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f5b116-49", "ovs_interfaceid": "54f5b116-49ad-43de-8259-78cb20b778c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.295 244018 DEBUG nova.network.os_vif_util [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "54f5b116-49ad-43de-8259-78cb20b778c1", "address": "fa:16:3e:33:21:fa", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f5b116-49", "ovs_interfaceid": "54f5b116-49ad-43de-8259-78cb20b778c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.296 244018 DEBUG nova.network.os_vif_util [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:21:fa,bridge_name='br-int',has_traffic_filtering=True,id=54f5b116-49ad-43de-8259-78cb20b778c1,network=Network(62bbaf1c-560e-4f11-b053-43d27fe35ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f5b116-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.296 244018 DEBUG os_vif [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:21:fa,bridge_name='br-int',has_traffic_filtering=True,id=54f5b116-49ad-43de-8259-78cb20b778c1,network=Network(62bbaf1c-560e-4f11-b053-43d27fe35ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f5b116-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.297 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.298 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.299 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.303 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.303 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquired lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.303 244018 DEBUG nova.network.neutron [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.306 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.306 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54f5b116-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.307 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap54f5b116-49, col_values=(('external_ids', {'iface-id': '54f5b116-49ad-43de-8259-78cb20b778c1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:21:fa', 'vm-uuid': 'b0014972-9433-4648-951c-bd9a210b6a69'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
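
The two OVSDB transactions above (AddBridgeCommand, then AddPortCommand plus DbSetCommand) are what os-vif issues to plug the tap device into br-int and tag it with the Neutron port ID. For illustration only, an equivalent CLI form — os-vif speaks the OVSDB protocol directly through ovsdbapp rather than shelling out:

    import subprocess

    # Ensure the integration bridge exists (a no-op here, as the
    # "Transaction caused no change" entry above shows).
    subprocess.check_call([
        'ovs-vsctl', '--may-exist', 'add-br', 'br-int',
        '--', 'set', 'Bridge', 'br-int', 'datapath_type=system',
    ])
    # Add the port and set the external_ids OVN matches on.
    subprocess.check_call([
        'ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tap54f5b116-49',
        '--', 'set', 'Interface', 'tap54f5b116-49',
        'external_ids:iface-id=54f5b116-49ad-43de-8259-78cb20b778c1',
        'external_ids:iface-status=active',
        'external_ids:attached-mac=fa:16:3e:33:21:fa',
        'external_ids:vm-uuid=b0014972-9433-4648-951c-bd9a210b6a69',
    ])
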
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.309 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:53 np0005629333 NetworkManager[49836]: <info>  [1772024393.3097] manager: (tap54f5b116-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/630)
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.311 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.321 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.323 244018 INFO os_vif [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:21:fa,bridge_name='br-int',has_traffic_filtering=True,id=54f5b116-49ad-43de-8259-78cb20b778c1,network=Network(62bbaf1c-560e-4f11-b053-43d27fe35ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f5b116-49')#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.377 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.377 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.378 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No VIF found with MAC fa:16:3e:33:21:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.378 244018 INFO nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Using config drive#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.402 244018 DEBUG nova.storage.rbd_utils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image b0014972-9433-4648-951c-bd9a210b6a69_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.410 244018 DEBUG nova.compute.manager [req-e1577941-4893-472d-9c60-9c0a1061aeb2 req-a5933da8-5025-422f-98f1-c7b1d0c1abf2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Received event network-changed-e11a4d69-8666-420b-b13c-69a6427567a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.410 244018 DEBUG nova.compute.manager [req-e1577941-4893-472d-9c60-9c0a1061aeb2 req-a5933da8-5025-422f-98f1-c7b1d0c1abf2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Refreshing instance network info cache due to event network-changed-e11a4d69-8666-420b-b13c-69a6427567a4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.411 244018 DEBUG oslo_concurrency.lockutils [req-e1577941-4893-472d-9c60-9c0a1061aeb2 req-a5933da8-5025-422f-98f1-c7b1d0c1abf2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.468 244018 DEBUG nova.network.neutron [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.680 244018 DEBUG nova.network.neutron [req-aa8490de-73d3-44d5-a6d7-a6c281cce583 req-881143fb-1888-4f39-9f25-7f8b83456564 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Updated VIF entry in instance network info cache for port 54f5b116-49ad-43de-8259-78cb20b778c1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.681 244018 DEBUG nova.network.neutron [req-aa8490de-73d3-44d5-a6d7-a6c281cce583 req-881143fb-1888-4f39-9f25-7f8b83456564 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Updating instance_info_cache with network_info: [{"id": "54f5b116-49ad-43de-8259-78cb20b778c1", "address": "fa:16:3e:33:21:fa", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f5b116-49", "ovs_interfaceid": "54f5b116-49ad-43de-8259-78cb20b778c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.702 244018 DEBUG oslo_concurrency.lockutils [req-aa8490de-73d3-44d5-a6d7-a6c281cce583 req-881143fb-1888-4f39-9f25-7f8b83456564 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-b0014972-9433-4648-951c-bd9a210b6a69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.752 244018 INFO nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Creating config drive at /var/lib/nova/instances/b0014972-9433-4648-951c-bd9a210b6a69/disk.config#033[00m
Feb 25 07:59:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.757 244018 DEBUG oslo_concurrency.processutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b0014972-9433-4648-951c-bd9a210b6a69/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpda0ick6j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.900 244018 DEBUG oslo_concurrency.processutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b0014972-9433-4648-951c-bd9a210b6a69/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpda0ick6j" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.941 244018 DEBUG nova.storage.rbd_utils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image b0014972-9433-4648-951c-bd9a210b6a69_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:59:53 np0005629333 nova_compute[244014]: 2026-02-25 12:59:53.946 244018 DEBUG oslo_concurrency.processutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b0014972-9433-4648-951c-bd9a210b6a69/disk.config b0014972-9433-4648-951c-bd9a210b6a69_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:59:54 np0005629333 nova_compute[244014]: 2026-02-25 12:59:54.239 244018 DEBUG oslo_concurrency.processutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b0014972-9433-4648-951c-bd9a210b6a69/disk.config b0014972-9433-4648-951c-bd9a210b6a69_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:59:54 np0005629333 nova_compute[244014]: 2026-02-25 12:59:54.240 244018 INFO nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Deleting local config drive /var/lib/nova/instances/b0014972-9433-4648-951c-bd9a210b6a69/disk.config because it was imported into RBD.#033[00m
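
The three steps above are Nova's config-drive path for Ceph-backed instances: build the ISO locally with mkisofs, import it into the "vms" pool as <uuid>_disk.config, then delete the local copy. A minimal sketch of the same flow, reusing the instance UUID, pool, and Ceph credentials from the logged commands (the staging directory is a hypothetical stand-in for Nova's tempdir, and the -publisher flag is omitted; this is illustrative, not Nova's code):

    import os
    import subprocess

    UUID = "b0014972-9433-4648-951c-bd9a210b6a69"
    ISO = f"/var/lib/nova/instances/{UUID}/disk.config"

    # 1. Build the config-drive ISO (flags mirror the logged mkisofs call).
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", ISO, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-quiet", "-J", "-r", "-V", "config-2",
         "/tmp/metadata-staging"],  # hypothetical stand-in for the tempdir
        check=True)

    # 2. Import it into the Ceph 'vms' pool under <uuid>_disk.config.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", ISO, f"{UUID}_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)

    # 3. Drop the local copy once the RBD import succeeds, as the log shows.
    os.remove(ISO)
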
Feb 25 07:59:54 np0005629333 NetworkManager[49836]: <info>  [1772024394.2966] manager: (tap54f5b116-49): new Tun device (/org/freedesktop/NetworkManager/Devices/631)
Feb 25 07:59:54 np0005629333 kernel: tap54f5b116-49: entered promiscuous mode
Feb 25 07:59:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:59:54Z|01531|binding|INFO|Claiming lport 54f5b116-49ad-43de-8259-78cb20b778c1 for this chassis.
Feb 25 07:59:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:59:54Z|01532|binding|INFO|54f5b116-49ad-43de-8259-78cb20b778c1: Claiming fa:16:3e:33:21:fa 10.100.0.4
Feb 25 07:59:54 np0005629333 nova_compute[244014]: 2026-02-25 12:59:54.300 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:54 np0005629333 nova_compute[244014]: 2026-02-25 12:59:54.310 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:59:54Z|01533|binding|INFO|Setting lport 54f5b116-49ad-43de-8259-78cb20b778c1 ovn-installed in OVS
Feb 25 07:59:54 np0005629333 nova_compute[244014]: 2026-02-25 12:59:54.314 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:54 np0005629333 systemd-udevd[375134]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:59:54 np0005629333 NetworkManager[49836]: <info>  [1772024394.3553] device (tap54f5b116-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:59:54 np0005629333 NetworkManager[49836]: <info>  [1772024394.3567] device (tap54f5b116-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:54.628 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:21:fa 10.100.0.4'], port_security=['fa:16:3e:33:21:fa 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b0014972-9433-4648-951c-bd9a210b6a69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1ae32a7b-44d8-4e80-93f7-db39a030f905', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf42b548-d8ba-420a-827b-4e92eb42f3f0, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=54f5b116-49ad-43de-8259-78cb20b778c1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 07:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:54.630 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 54f5b116-49ad-43de-8259-78cb20b778c1 in datapath 62bbaf1c-560e-4f11-b053-43d27fe35ba2 bound to our chassis#033[00m
Feb 25 07:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:54.633 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62bbaf1c-560e-4f11-b053-43d27fe35ba2#033[00m
Feb 25 07:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:54.649 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[27899da6-db4e-4328-b09a-cd025a0c14e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:54.678 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f8bb77d0-097e-4579-8650-8284c1b90a28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:54.682 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[63a885e4-52d1-48f3-ae14-99d4c1395a05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:59:54 np0005629333 ovn_controller[147040]: 2026-02-25T12:59:54Z|01534|binding|INFO|Setting lport 54f5b116-49ad-43de-8259-78cb20b778c1 up in Southbound
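
The claim / ovn-installed / up progression above is ovn-controller completing the Southbound port binding for the new tap device. A quick way to verify the binding state from the chassis, sketched with the standard ovn-sbctl client (the logical port UUID is taken from the log; the wrapper itself is illustrative):

    import subprocess

    LPORT = "54f5b116-49ad-43de-8259-78cb20b778c1"

    # The Port_Binding row should show a chassis and up=true once the
    # claim sequence logged above has finished.
    out = subprocess.run(
        ["ovn-sbctl", "--bare", "--columns=chassis,up",
         "find", "Port_Binding", f"logical_port={LPORT}"],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())
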
Feb 25 07:59:54 np0005629333 systemd-machined[210048]: New machine qemu-178-instance-00000090.
Feb 25 07:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:54.712 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0d7a8743-7864-4c19-bb8c-73538554aea8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:59:54 np0005629333 systemd[1]: Started Virtual Machine qemu-178-instance-00000090.
Feb 25 07:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:54.731 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1b127624-b731-4a8b-b308-5dcc6437ea99]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62bbaf1c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:da:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 450], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632617, 'reachable_time': 39162, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375145, 'error': None, 'target': 'ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:54.744 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9f775a67-aa05-44ad-b3bc-7e9a12dd03b7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap62bbaf1c-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632625, 'tstamp': 632625}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375146, 'error': None, 'target': 'ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap62bbaf1c-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632627, 'tstamp': 632627}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375146, 'error': None, 'target': 'ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 07:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:54.746 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62bbaf1c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:59:54 np0005629333 nova_compute[244014]: 2026-02-25 12:59:54.748 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:54 np0005629333 nova_compute[244014]: 2026-02-25 12:59:54.749 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:54.750 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62bbaf1c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:54.750 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:54.750 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62bbaf1c-50, col_values=(('external_ids', {'iface-id': 'e8ddd525-f10e-4482-b8e7-397d0dcdd470'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:59:54 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:54.751 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
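
Provisioning metadata for the network creates an ovnmeta-<network-uuid> namespace; the RTM_NEWADDR replies above show 169.254.169.254/32 bound on the namespace tap. A quick check that the proxy answers inside that namespace, with the namespace name taken from the log (this probe is illustrative, not part of the agent):

    import subprocess

    NETNS = "ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2"

    # curl prints the HTTP status; the metadata proxy serves its version
    # listing at the link-local address once provisioning completes.
    subprocess.run(
        ["ip", "netns", "exec", NETNS, "curl", "-s",
         "-o", "/dev/null", "-w", "%{http_code}\n",
         "http://169.254.169.254/"],
        check=True)
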
Feb 25 07:59:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2380: 305 pgs: 305 active+clean; 325 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Feb 25 07:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:55.036 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:55.036 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 12:59:55.037 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.399 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024395.3990338, b0014972-9433-4648-951c-bd9a210b6a69 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.399 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b0014972-9433-4648-951c-bd9a210b6a69] VM Started (Lifecycle Event)#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.423 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.427 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024395.4016118, b0014972-9433-4648-951c-bd9a210b6a69 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.427 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b0014972-9433-4648-951c-bd9a210b6a69] VM Paused (Lifecycle Event)#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.445 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.447 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.470 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b0014972-9433-4648-951c-bd9a210b6a69] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.590 244018 DEBUG nova.compute.manager [req-2363be12-1364-4ab4-9f3e-e6798ed39c4d req-7fe29d74-d973-4775-bdf1-6c2e1da7afc9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Received event network-vif-plugged-54f5b116-49ad-43de-8259-78cb20b778c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.591 244018 DEBUG oslo_concurrency.lockutils [req-2363be12-1364-4ab4-9f3e-e6798ed39c4d req-7fe29d74-d973-4775-bdf1-6c2e1da7afc9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b0014972-9433-4648-951c-bd9a210b6a69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.592 244018 DEBUG oslo_concurrency.lockutils [req-2363be12-1364-4ab4-9f3e-e6798ed39c4d req-7fe29d74-d973-4775-bdf1-6c2e1da7afc9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.592 244018 DEBUG oslo_concurrency.lockutils [req-2363be12-1364-4ab4-9f3e-e6798ed39c4d req-7fe29d74-d973-4775-bdf1-6c2e1da7afc9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.593 244018 DEBUG nova.compute.manager [req-2363be12-1364-4ab4-9f3e-e6798ed39c4d req-7fe29d74-d973-4775-bdf1-6c2e1da7afc9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Processing event network-vif-plugged-54f5b116-49ad-43de-8259-78cb20b778c1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.593 244018 DEBUG nova.compute.manager [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.597 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024395.597441, b0014972-9433-4648-951c-bd9a210b6a69 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.598 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b0014972-9433-4648-951c-bd9a210b6a69] VM Resumed (Lifecycle Event)#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.601 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.601 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.607 244018 INFO nova.virt.libvirt.driver [-] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Instance spawned successfully.#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.608 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.617 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.621 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
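
The integer power states in these sync messages are the nova.compute.power_state constants: the database still holds 0 (NOSTATE) while libvirt reports 3 (PAUSED) mid-spawn and then 1 (RUNNING) after the resume. For reference:

    # Values as defined in nova.compute.power_state; the three below are
    # the ones exercised by the sync messages in this log.
    NOSTATE = 0x00  # DB power_state before the first successful sync
    RUNNING = 0x01  # VM power_state after the "Resumed" lifecycle event
    PAUSED = 0x03   # VM power_state while the guest is paused mid-spawn
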
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.629 244018 DEBUG nova.network.neutron [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Updating instance_info_cache with network_info: [{"id": "e11a4d69-8666-420b-b13c-69a6427567a4", "address": "fa:16:3e:15:04:04", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape11a4d69-86", "ovs_interfaceid": "e11a4d69-8666-420b-b13c-69a6427567a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.637 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.638 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.638 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.639 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.640 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.641 244018 DEBUG nova.virt.libvirt.driver [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.646 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: b0014972-9433-4648-951c-bd9a210b6a69] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.666 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Releasing lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.666 244018 DEBUG nova.compute.manager [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Instance network_info: |[{"id": "e11a4d69-8666-420b-b13c-69a6427567a4", "address": "fa:16:3e:15:04:04", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape11a4d69-86", "ovs_interfaceid": "e11a4d69-8666-420b-b13c-69a6427567a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.667 244018 DEBUG oslo_concurrency.lockutils [req-e1577941-4893-472d-9c60-9c0a1061aeb2 req-a5933da8-5025-422f-98f1-c7b1d0c1abf2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.668 244018 DEBUG nova.network.neutron [req-e1577941-4893-472d-9c60-9c0a1061aeb2 req-a5933da8-5025-422f-98f1-c7b1d0c1abf2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Refreshing network info cache for port e11a4d69-8666-420b-b13c-69a6427567a4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.673 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Start _get_guest_xml network_info=[{"id": "e11a4d69-8666-420b-b13c-69a6427567a4", "address": "fa:16:3e:15:04:04", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape11a4d69-86", "ovs_interfaceid": "e11a4d69-8666-420b-b13c-69a6427567a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.679 244018 WARNING nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.686 244018 DEBUG nova.virt.libvirt.host [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.688 244018 DEBUG nova.virt.libvirt.host [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.692 244018 DEBUG nova.virt.libvirt.host [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.693 244018 DEBUG nova.virt.libvirt.host [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.693 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.694 244018 DEBUG nova.virt.hardware [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.695 244018 DEBUG nova.virt.hardware [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.695 244018 DEBUG nova.virt.hardware [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.696 244018 DEBUG nova.virt.hardware [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.696 244018 DEBUG nova.virt.hardware [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.696 244018 DEBUG nova.virt.hardware [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.697 244018 DEBUG nova.virt.hardware [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.697 244018 DEBUG nova.virt.hardware [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.698 244018 DEBUG nova.virt.hardware [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.698 244018 DEBUG nova.virt.hardware [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.699 244018 DEBUG nova.virt.hardware [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
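
The topology walk above reduces to simple arithmetic: with no flavor or image constraints (limits of 65536 each) and a single vCPU, the only (sockets, cores, threads) factorization is 1:1:1. A sketch of that enumeration (the same search space, not Nova's exact code):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield (sockets, cores, threads) triples whose product is vcpus."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log
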
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.704 244018 DEBUG oslo_concurrency.processutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.747 244018 INFO nova.compute.manager [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Took 8.08 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.748 244018 DEBUG nova.compute.manager [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.813 244018 INFO nova.compute.manager [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Took 9.08 seconds to build instance.#033[00m
Feb 25 07:59:55 np0005629333 nova_compute[244014]: 2026-02-25 12:59:55.829 244018 DEBUG oslo_concurrency.lockutils [None req-b34d08b0-eb1d-41b9-b595-95f3693798f6 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
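
The three timings logged here nest as expected: 8.08 s for the hypervisor spawn, 9.08 s for the whole build, and a 9.162 s hold on the per-instance build lock. Pulling such figures out of a saved journal excerpt is a small regex job (the file name is hypothetical):

    import re

    PAT = re.compile(r"Took (\d+\.\d+) seconds to (spawn|build)")

    # Scan a saved excerpt of this journal for Nova's timing lines.
    with open("journal-excerpt.log") as fh:  # hypothetical capture
        for line in fh:
            m = PAT.search(line)
            if m:
                print(f"{m.group(2)}: {m.group(1)} s")
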
Feb 25 07:59:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:59:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1818604780' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.289 244018 DEBUG oslo_concurrency.processutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
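
Before touching RBD for the second instance, the driver shells out to "ceph mon dump" to resolve the monitor addresses for its Ceph-backed disks. Parsing that JSON is straightforward; a sketch using the same credentials as the logged command (the printed fields are for inspection only):

    import json
    import subprocess

    # Same invocation as the logged command; the JSON carries a "mons"
    # list with one entry per monitor.
    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True)
    print([m["name"] for m in json.loads(out.stdout)["mons"]])
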
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.329 244018 DEBUG nova.storage.rbd_utils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] rbd image 8d338640-2b5f-4571-8f76-b523064ee129_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.334 244018 DEBUG oslo_concurrency.processutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.771 244018 DEBUG nova.network.neutron [req-e1577941-4893-472d-9c60-9c0a1061aeb2 req-a5933da8-5025-422f-98f1-c7b1d0c1abf2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Updated VIF entry in instance network info cache for port e11a4d69-8666-420b-b13c-69a6427567a4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.772 244018 DEBUG nova.network.neutron [req-e1577941-4893-472d-9c60-9c0a1061aeb2 req-a5933da8-5025-422f-98f1-c7b1d0c1abf2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Updating instance_info_cache with network_info: [{"id": "e11a4d69-8666-420b-b13c-69a6427567a4", "address": "fa:16:3e:15:04:04", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape11a4d69-86", "ovs_interfaceid": "e11a4d69-8666-420b-b13c-69a6427567a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.794 244018 DEBUG oslo_concurrency.lockutils [req-e1577941-4893-472d-9c60-9c0a1061aeb2 req-a5933da8-5025-422f-98f1-c7b1d0c1abf2 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 07:59:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2381: 305 pgs: 305 active+clean; 325 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Feb 25 07:59:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 07:59:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1711484078' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.859 244018 DEBUG oslo_concurrency.processutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.861 244018 DEBUG nova.virt.libvirt.vif [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:59:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1793809408',display_name='tempest-TestSnapshotPattern-server-1793809408',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1793809408',id=145,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDCWEdl3+ajGxJOkCeAxfCSENIRZwbPgzOnOZcpgV3Hlv4XmE2Mrj2poBzyZ23ScEZYFPY48VJJN3YnpuQrByWx+iTcrPbMvMX+80KQEtNnqR2GlURJo6gN1+GRlPtyo4Q==',key_name='tempest-TestSnapshotPattern-301066188',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cf5eb89ba0424237a313b1f369bcb92b',ramdisk_id='',reservation_id='r-b4g1xjnl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1122231160',owner_user_name='tempest-TestSnapshotPattern-1122231160-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:59:51Z,user_data=None,user_id='7592542cdf7f423c86332695423dbe79',uuid=8d338640-2b5f-4571-8f76-b523064ee129,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e11a4d69-8666-420b-b13c-69a6427567a4", "address": "fa:16:3e:15:04:04", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape11a4d69-86", "ovs_interfaceid": "e11a4d69-8666-420b-b13c-69a6427567a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.861 244018 DEBUG nova.network.os_vif_util [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Converting VIF {"id": "e11a4d69-8666-420b-b13c-69a6427567a4", "address": "fa:16:3e:15:04:04", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape11a4d69-86", "ovs_interfaceid": "e11a4d69-8666-420b-b13c-69a6427567a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.862 244018 DEBUG nova.network.os_vif_util [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:04:04,bridge_name='br-int',has_traffic_filtering=True,id=e11a4d69-8666-420b-b13c-69a6427567a4,network=Network(4273798e-5f22-4d98-8d00-a22d1ea2c776),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape11a4d69-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.863 244018 DEBUG nova.objects.instance [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lazy-loading 'pci_devices' on Instance uuid 8d338640-2b5f-4571-8f76-b523064ee129 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.879 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] End _get_guest_xml xml=<domain type="kvm">
Feb 25 07:59:56 np0005629333 nova_compute[244014]:  <uuid>8d338640-2b5f-4571-8f76-b523064ee129</uuid>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:  <name>instance-00000091</name>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestSnapshotPattern-server-1793809408</nova:name>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 12:59:55</nova:creationTime>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 07:59:56 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:        <nova:user uuid="7592542cdf7f423c86332695423dbe79">tempest-TestSnapshotPattern-1122231160-project-member</nova:user>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:        <nova:project uuid="cf5eb89ba0424237a313b1f369bcb92b">tempest-TestSnapshotPattern-1122231160</nova:project>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:        <nova:port uuid="e11a4d69-8666-420b-b13c-69a6427567a4">
Feb 25 07:59:56 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <system>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <entry name="serial">8d338640-2b5f-4571-8f76-b523064ee129</entry>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <entry name="uuid">8d338640-2b5f-4571-8f76-b523064ee129</entry>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    </system>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:  <os>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:  </os>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:  <features>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:  </features>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:  </clock>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:  <devices>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/8d338640-2b5f-4571-8f76-b523064ee129_disk">
Feb 25 07:59:56 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:59:56 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/8d338640-2b5f-4571-8f76-b523064ee129_disk.config">
Feb 25 07:59:56 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      </source>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 07:59:56 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      </auth>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    </disk>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:15:04:04"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <target dev="tape11a4d69-86"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    </interface>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/8d338640-2b5f-4571-8f76-b523064ee129/console.log" append="off"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    </serial>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <video>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    </video>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    </rng>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 07:59:56 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 07:59:56 np0005629333 nova_compute[244014]:  </devices>
Feb 25 07:59:56 np0005629333 nova_compute[244014]: </domain>
Feb 25 07:59:56 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.880 244018 DEBUG nova.compute.manager [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Preparing to wait for external event network-vif-plugged-e11a4d69-8666-420b-b13c-69a6427567a4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.881 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "8d338640-2b5f-4571-8f76-b523064ee129-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.881 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "8d338640-2b5f-4571-8f76-b523064ee129-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.881 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "8d338640-2b5f-4571-8f76-b523064ee129-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.882 244018 DEBUG nova.virt.libvirt.vif [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T12:59:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1793809408',display_name='tempest-TestSnapshotPattern-server-1793809408',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1793809408',id=145,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDCWEdl3+ajGxJOkCeAxfCSENIRZwbPgzOnOZcpgV3Hlv4XmE2Mrj2poBzyZ23ScEZYFPY48VJJN3YnpuQrByWx+iTcrPbMvMX+80KQEtNnqR2GlURJo6gN1+GRlPtyo4Q==',key_name='tempest-TestSnapshotPattern-301066188',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cf5eb89ba0424237a313b1f369bcb92b',ramdisk_id='',reservation_id='r-b4g1xjnl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1122231160',owner_user_name='tempest-TestSnapshotPattern-1122231160-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T12:59:51Z,user_data=None,user_id='7592542cdf7f423c86332695423dbe79',uuid=8d338640-2b5f-4571-8f76-b523064ee129,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e11a4d69-8666-420b-b13c-69a6427567a4", "address": "fa:16:3e:15:04:04", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape11a4d69-86", "ovs_interfaceid": "e11a4d69-8666-420b-b13c-69a6427567a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.882 244018 DEBUG nova.network.os_vif_util [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Converting VIF {"id": "e11a4d69-8666-420b-b13c-69a6427567a4", "address": "fa:16:3e:15:04:04", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape11a4d69-86", "ovs_interfaceid": "e11a4d69-8666-420b-b13c-69a6427567a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.883 244018 DEBUG nova.network.os_vif_util [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:04:04,bridge_name='br-int',has_traffic_filtering=True,id=e11a4d69-8666-420b-b13c-69a6427567a4,network=Network(4273798e-5f22-4d98-8d00-a22d1ea2c776),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape11a4d69-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.883 244018 DEBUG os_vif [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:04:04,bridge_name='br-int',has_traffic_filtering=True,id=e11a4d69-8666-420b-b13c-69a6427567a4,network=Network(4273798e-5f22-4d98-8d00-a22d1ea2c776),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape11a4d69-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.884 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.884 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.885 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.888 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.888 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape11a4d69-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.888 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape11a4d69-86, col_values=(('external_ids', {'iface-id': 'e11a4d69-8666-420b-b13c-69a6427567a4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:04:04', 'vm-uuid': '8d338640-2b5f-4571-8f76-b523064ee129'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.890 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:56 np0005629333 NetworkManager[49836]: <info>  [1772024396.8911] manager: (tape11a4d69-86): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/632)
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.892 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.894 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:56 np0005629333 nova_compute[244014]: 2026-02-25 12:59:56.895 244018 INFO os_vif [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:04:04,bridge_name='br-int',has_traffic_filtering=True,id=e11a4d69-8666-420b-b13c-69a6427567a4,network=Network(4273798e-5f22-4d98-8d00-a22d1ea2c776),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape11a4d69-86')#033[00m
Feb 25 07:59:57 np0005629333 nova_compute[244014]: 2026-02-25 12:59:57.237 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:59:57 np0005629333 nova_compute[244014]: 2026-02-25 12:59:57.237 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 07:59:57 np0005629333 nova_compute[244014]: 2026-02-25 12:59:57.237 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] No VIF found with MAC fa:16:3e:15:04:04, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 07:59:57 np0005629333 nova_compute[244014]: 2026-02-25 12:59:57.238 244018 INFO nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Using config drive#033[00m
Feb 25 07:59:57 np0005629333 nova_compute[244014]: 2026-02-25 12:59:57.262 244018 DEBUG nova.storage.rbd_utils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] rbd image 8d338640-2b5f-4571-8f76-b523064ee129_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:59:57 np0005629333 nova_compute[244014]: 2026-02-25 12:59:57.662 244018 DEBUG nova.compute.manager [req-a15781d7-96ff-4f5f-a5f6-63e46a2a1aee req-4a289c39-078e-4b0f-9150-281a6e3eee7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Received event network-vif-plugged-54f5b116-49ad-43de-8259-78cb20b778c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 07:59:57 np0005629333 nova_compute[244014]: 2026-02-25 12:59:57.663 244018 DEBUG oslo_concurrency.lockutils [req-a15781d7-96ff-4f5f-a5f6-63e46a2a1aee req-4a289c39-078e-4b0f-9150-281a6e3eee7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b0014972-9433-4648-951c-bd9a210b6a69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 07:59:57 np0005629333 nova_compute[244014]: 2026-02-25 12:59:57.663 244018 DEBUG oslo_concurrency.lockutils [req-a15781d7-96ff-4f5f-a5f6-63e46a2a1aee req-4a289c39-078e-4b0f-9150-281a6e3eee7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 07:59:57 np0005629333 nova_compute[244014]: 2026-02-25 12:59:57.664 244018 DEBUG oslo_concurrency.lockutils [req-a15781d7-96ff-4f5f-a5f6-63e46a2a1aee req-4a289c39-078e-4b0f-9150-281a6e3eee7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 07:59:57 np0005629333 nova_compute[244014]: 2026-02-25 12:59:57.665 244018 DEBUG nova.compute.manager [req-a15781d7-96ff-4f5f-a5f6-63e46a2a1aee req-4a289c39-078e-4b0f-9150-281a6e3eee7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] No waiting events found dispatching network-vif-plugged-54f5b116-49ad-43de-8259-78cb20b778c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 07:59:57 np0005629333 nova_compute[244014]: 2026-02-25 12:59:57.665 244018 WARNING nova.compute.manager [req-a15781d7-96ff-4f5f-a5f6-63e46a2a1aee req-4a289c39-078e-4b0f-9150-281a6e3eee7f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Received unexpected event network-vif-plugged-54f5b116-49ad-43de-8259-78cb20b778c1 for instance with vm_state active and task_state None.#033[00m
Feb 25 07:59:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 07:59:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2382: 305 pgs: 305 active+clean; 325 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 128 op/s
Feb 25 07:59:59 np0005629333 nova_compute[244014]: 2026-02-25 12:59:59.424 244018 INFO nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Creating config drive at /var/lib/nova/instances/8d338640-2b5f-4571-8f76-b523064ee129/disk.config#033[00m
Feb 25 07:59:59 np0005629333 nova_compute[244014]: 2026-02-25 12:59:59.427 244018 DEBUG oslo_concurrency.processutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8d338640-2b5f-4571-8f76-b523064ee129/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpbz3_su6e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:59:59 np0005629333 nova_compute[244014]: 2026-02-25 12:59:59.554 244018 DEBUG oslo_concurrency.processutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8d338640-2b5f-4571-8f76-b523064ee129/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpbz3_su6e" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:59:59 np0005629333 nova_compute[244014]: 2026-02-25 12:59:59.578 244018 DEBUG nova.storage.rbd_utils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] rbd image 8d338640-2b5f-4571-8f76-b523064ee129_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 07:59:59 np0005629333 nova_compute[244014]: 2026-02-25 12:59:59.584 244018 DEBUG oslo_concurrency.processutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8d338640-2b5f-4571-8f76-b523064ee129/disk.config 8d338640-2b5f-4571-8f76-b523064ee129_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 07:59:59 np0005629333 nova_compute[244014]: 2026-02-25 12:59:59.730 244018 DEBUG oslo_concurrency.processutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8d338640-2b5f-4571-8f76-b523064ee129/disk.config 8d338640-2b5f-4571-8f76-b523064ee129_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 07:59:59 np0005629333 nova_compute[244014]: 2026-02-25 12:59:59.731 244018 INFO nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Deleting local config drive /var/lib/nova/instances/8d338640-2b5f-4571-8f76-b523064ee129/disk.config because it was imported into RBD.#033[00m
Feb 25 07:59:59 np0005629333 NetworkManager[49836]: <info>  [1772024399.7734] manager: (tape11a4d69-86): new Tun device (/org/freedesktop/NetworkManager/Devices/633)
Feb 25 07:59:59 np0005629333 kernel: tape11a4d69-86: entered promiscuous mode
Feb 25 07:59:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:59:59Z|01535|binding|INFO|Claiming lport e11a4d69-8666-420b-b13c-69a6427567a4 for this chassis.
Feb 25 07:59:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:59:59Z|01536|binding|INFO|e11a4d69-8666-420b-b13c-69a6427567a4: Claiming fa:16:3e:15:04:04 10.100.0.9
Feb 25 07:59:59 np0005629333 nova_compute[244014]: 2026-02-25 12:59:59.778 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:59 np0005629333 nova_compute[244014]: 2026-02-25 12:59:59.786 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:59 np0005629333 ovn_controller[147040]: 2026-02-25T12:59:59Z|01537|binding|INFO|Setting lport e11a4d69-8666-420b-b13c-69a6427567a4 ovn-installed in OVS
Feb 25 07:59:59 np0005629333 nova_compute[244014]: 2026-02-25 12:59:59.789 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 07:59:59 np0005629333 systemd-udevd[375328]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 07:59:59 np0005629333 NetworkManager[49836]: <info>  [1772024399.8052] device (tape11a4d69-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 07:59:59 np0005629333 NetworkManager[49836]: <info>  [1772024399.8064] device (tape11a4d69-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 07:59:59 np0005629333 nova_compute[244014]: 2026-02-25 12:59:59.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:00:00 np0005629333 ovn_controller[147040]: 2026-02-25T13:00:00Z|01538|binding|INFO|Setting lport e11a4d69-8666-420b-b13c-69a6427567a4 up in Southbound
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.403 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:04:04 10.100.0.9'], port_security=['fa:16:3e:15:04:04 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8d338640-2b5f-4571-8f76-b523064ee129', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cf5eb89ba0424237a313b1f369bcb92b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9876e36c-27f9-4526-a220-4685b5be270d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b451f093-c10b-47f6-9a8c-8892cb3ad351, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=e11a4d69-8666-420b-b13c-69a6427567a4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.405 157129 INFO neutron.agent.ovn.metadata.agent [-] Port e11a4d69-8666-420b-b13c-69a6427567a4 in datapath 4273798e-5f22-4d98-8d00-a22d1ea2c776 bound to our chassis#033[00m
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.408 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4273798e-5f22-4d98-8d00-a22d1ea2c776#033[00m
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.420 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b0fa6213-887a-4453-8e64-d73e22933e14]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.421 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4273798e-51 in ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.423 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4273798e-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.423 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[21e01010-3d8d-4d51-afe7-71886a68433b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.426 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[94b925e5-60b3-4b3f-83b7-e6be851c7724]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:00 np0005629333 systemd-machined[210048]: New machine qemu-179-instance-00000091.
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.444 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[8c95307b-88a4-4e73-b33a-15e126268296]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:00 np0005629333 systemd[1]: Started Virtual Machine qemu-179-instance-00000091.
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.467 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7da352eb-af29-45a8-82d3-f7b219593101]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.492 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[cffddcb6-533a-41d2-853c-fb9e4026b0c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:00 np0005629333 NetworkManager[49836]: <info>  [1772024400.4989] manager: (tap4273798e-50): new Veth device (/org/freedesktop/NetworkManager/Devices/634)
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.498 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7bfcada9-d4c2-4c95-a6de-522c34de8556]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.530 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[bab68de0-291e-4800-a172-bfae83f85b79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.533 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c73d0eb8-76ab-41d6-b059-888cdd03ec7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:00 np0005629333 NetworkManager[49836]: <info>  [1772024400.5504] device (tap4273798e-50): carrier: link connected
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.555 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ab88fa72-a3b2-4cb1-b568-38b08582c2c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.571 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ad9a8601-97ea-4f1b-b95f-95980437021e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4273798e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:a3:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 453], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636995, 'reachable_time': 38262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375364, 'error': None, 'target': 'ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.583 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[296ca9d1-e013-4827-bf13-672727c6d902]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:a341'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 636995, 'tstamp': 636995}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375365, 'error': None, 'target': 'ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.596 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eaf4718c-07a0-4aa9-911f-c503ccea4806]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4273798e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:a3:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 453], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636995, 'reachable_time': 38262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 375366, 'error': None, 'target': 'ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:00 np0005629333 nova_compute[244014]: 2026-02-25 13:00:00.603 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.624 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a989c8c3-b338-4b5c-bfdc-5a293d218518]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.680 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[281c252c-4750-4b19-a53a-39469868fb93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.682 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4273798e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.683 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.683 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4273798e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:00:00 np0005629333 NetworkManager[49836]: <info>  [1772024400.6862] manager: (tap4273798e-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/635)
Feb 25 08:00:00 np0005629333 nova_compute[244014]: 2026-02-25 13:00:00.685 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:00 np0005629333 kernel: tap4273798e-50: entered promiscuous mode
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.690 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4273798e-50, col_values=(('external_ids', {'iface-id': 'bba4cbca-ca61-4422-a903-61bd05b8ebd6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:00:00 np0005629333 nova_compute[244014]: 2026-02-25 13:00:00.690 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:00 np0005629333 ovn_controller[147040]: 2026-02-25T13:00:00Z|01539|binding|INFO|Releasing lport bba4cbca-ca61-4422-a903-61bd05b8ebd6 from this chassis (sb_readonly=1)
Feb 25 08:00:00 np0005629333 nova_compute[244014]: 2026-02-25 13:00:00.698 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.699 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4273798e-5f22-4d98-8d00-a22d1ea2c776.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4273798e-5f22-4d98-8d00-a22d1ea2c776.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.699 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[90be80e9-2a39-419c-8817-750fd2538959]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.700 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-4273798e-5f22-4d98-8d00-a22d1ea2c776
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/4273798e-5f22-4d98-8d00-a22d1ea2c776.pid.haproxy
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 4273798e-5f22-4d98-8d00-a22d1ea2c776
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 08:00:00 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:00.701 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'env', 'PROCESS_TAG=haproxy-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4273798e-5f22-4d98-8d00-a22d1ea2c776.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
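The create_process line above shows the full privilege chain used to start the proxy: sudo, then neutron-rootwrap (which validates the command against the filters referenced by /etc/neutron/rootwrap.conf), then ip netns exec to enter the ovnmeta- namespace, then haproxy itself reading the config just written. A bare-bones reconstruction with subprocess, skipping rootwrap and therefore assuming it runs as root; the namespace and config paths mirror the log:

    import subprocess

    netns = 'ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776'
    conf = ('/var/lib/neutron/ovn-metadata-proxy/'
            '4273798e-5f22-4d98-8d00-a22d1ea2c776.conf')

    # haproxy daemonizes itself ("daemon" in the global section of the
    # config above), so run() returns once the master has forked.
    subprocess.run(
        ['ip', 'netns', 'exec', netns,
         'env', 'PROCESS_TAG=haproxy-4273798e-5f22-4d98-8d00-a22d1ea2c776',
         'haproxy', '-f', conf],
        check=True)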
Feb 25 08:00:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2383: 305 pgs: 305 active+clean; 325 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 128 op/s
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.012 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024401.0114937, 8d338640-2b5f-4571-8f76-b523064ee129 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.012 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] VM Started (Lifecycle Event)#033[00m
Feb 25 08:00:01 np0005629333 podman[375437]: 2026-02-25 13:00:01.076167105 +0000 UTC m=+0.033414914 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.218 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.224 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024401.01498, 8d338640-2b5f-4571-8f76-b523064ee129 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.224 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] VM Paused (Lifecycle Event)#033[00m
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.246 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.250 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.273 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
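The "current DB power_state: 0, VM power_state: 3" comparison above uses Nova's integer power-state constants: the database still holds NOSTATE while libvirt already reports PAUSED, which is expected mid-spawn, so the sync is skipped while task_state is spawning. For reference, the relevant values from nova.compute.power_state:

    # nova/compute/power_state.py (constants only, not the full module)
    NOSTATE = 0x00    # 0 - DB value before the first successful sync
    RUNNING = 0x01    # 1 - what the later "Resumed" event maps to
    PAUSED = 0x03     # 3 - QEMU started but paused, as logged above
    SHUTDOWN = 0x04   # 4
    CRASHED = 0x06    # 6
    SUSPENDED = 0x07  # 7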
Feb 25 08:00:01 np0005629333 podman[375437]: 2026-02-25 13:00:01.339242976 +0000 UTC m=+0.296490765 container create 563da1d83ed457a7fbc129442ed41559a2aa55f8da57581e46f1589881a8ff7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 25 08:00:01 np0005629333 systemd[1]: Started libpod-conmon-563da1d83ed457a7fbc129442ed41559a2aa55f8da57581e46f1589881a8ff7d.scope.
Feb 25 08:00:01 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:00:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c064d32c5a8bc9b32dbc5298e999e73ca9ab9f090eb5ef23c399a7752b4fd87b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 08:00:01 np0005629333 podman[375437]: 2026-02-25 13:00:01.50280519 +0000 UTC m=+0.460053049 container init 563da1d83ed457a7fbc129442ed41559a2aa55f8da57581e46f1589881a8ff7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS)
Feb 25 08:00:01 np0005629333 podman[375437]: 2026-02-25 13:00:01.510563619 +0000 UTC m=+0.467811438 container start 563da1d83ed457a7fbc129442ed41559a2aa55f8da57581e46f1589881a8ff7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 25 08:00:01 np0005629333 neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776[375453]: [NOTICE]   (375457) : New worker (375459) forked
Feb 25 08:00:01 np0005629333 neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776[375453]: [NOTICE]   (375457) : Loading success.
Feb 25 08:00:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:00:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:00:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:00:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:00:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:00:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.729 244018 DEBUG nova.compute.manager [req-d8eb3859-290d-499f-aec0-3be3b0d97d48 req-78f0ed17-b299-475b-a0b6-6e617c5a2f57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Received event network-vif-plugged-e11a4d69-8666-420b-b13c-69a6427567a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.729 244018 DEBUG oslo_concurrency.lockutils [req-d8eb3859-290d-499f-aec0-3be3b0d97d48 req-78f0ed17-b299-475b-a0b6-6e617c5a2f57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8d338640-2b5f-4571-8f76-b523064ee129-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.730 244018 DEBUG oslo_concurrency.lockutils [req-d8eb3859-290d-499f-aec0-3be3b0d97d48 req-78f0ed17-b299-475b-a0b6-6e617c5a2f57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8d338640-2b5f-4571-8f76-b523064ee129-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.730 244018 DEBUG oslo_concurrency.lockutils [req-d8eb3859-290d-499f-aec0-3be3b0d97d48 req-78f0ed17-b299-475b-a0b6-6e617c5a2f57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8d338640-2b5f-4571-8f76-b523064ee129-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.730 244018 DEBUG nova.compute.manager [req-d8eb3859-290d-499f-aec0-3be3b0d97d48 req-78f0ed17-b299-475b-a0b6-6e617c5a2f57 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Processing event network-vif-plugged-e11a4d69-8666-420b-b13c-69a6427567a4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.731 244018 DEBUG nova.compute.manager [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
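The Acquiring/acquired/released triplet around "8d338640-...-events" comes from oslo.concurrency's in-process locks, which Nova uses to serialize external-event handling per instance; the waited/held timings in those messages are emitted by lockutils itself. The decorator form of the same primitive, with an illustrative lock name and a toy shared structure:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('instance-events')
    def pop_event(events, name):
        # Only one thread at a time can be inside any function guarded
        # by this lock name, so mutating the shared dict is safe.
        return events.pop(name, None)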
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.736 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.736 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024401.7359138, 8d338640-2b5f-4571-8f76-b523064ee129 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.737 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] VM Resumed (Lifecycle Event)#033[00m
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.741 244018 INFO nova.virt.libvirt.driver [-] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Instance spawned successfully.#033[00m
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.742 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.765 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.770 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.774 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.775 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.775 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.776 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.776 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.777 244018 DEBUG nova.virt.libvirt.driver [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.807 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.845 244018 INFO nova.compute.manager [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Took 10.73 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.845 244018 DEBUG nova.compute.manager [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.892 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.938 244018 INFO nova.compute.manager [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Took 11.83 seconds to build instance.#033[00m
Feb 25 08:00:01 np0005629333 nova_compute[244014]: 2026-02-25 13:00:01.975 244018 DEBUG oslo_concurrency.lockutils [None req-afef978a-f3b6-483b-8ad4-a6999e1d1945 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "8d338640-2b5f-4571-8f76-b523064ee129" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:00:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2384: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 162 op/s
Feb 25 08:00:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:00:03 np0005629333 nova_compute[244014]: 2026-02-25 13:00:03.812 244018 DEBUG nova.compute.manager [req-22e02954-1f44-48f8-ac96-aaa9cca2a0ea req-282a1bc0-df1c-44c3-b7c3-e52341c1cdca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Received event network-vif-plugged-e11a4d69-8666-420b-b13c-69a6427567a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:00:03 np0005629333 nova_compute[244014]: 2026-02-25 13:00:03.813 244018 DEBUG oslo_concurrency.lockutils [req-22e02954-1f44-48f8-ac96-aaa9cca2a0ea req-282a1bc0-df1c-44c3-b7c3-e52341c1cdca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8d338640-2b5f-4571-8f76-b523064ee129-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:00:03 np0005629333 nova_compute[244014]: 2026-02-25 13:00:03.813 244018 DEBUG oslo_concurrency.lockutils [req-22e02954-1f44-48f8-ac96-aaa9cca2a0ea req-282a1bc0-df1c-44c3-b7c3-e52341c1cdca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8d338640-2b5f-4571-8f76-b523064ee129-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:00:03 np0005629333 nova_compute[244014]: 2026-02-25 13:00:03.813 244018 DEBUG oslo_concurrency.lockutils [req-22e02954-1f44-48f8-ac96-aaa9cca2a0ea req-282a1bc0-df1c-44c3-b7c3-e52341c1cdca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8d338640-2b5f-4571-8f76-b523064ee129-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:00:03 np0005629333 nova_compute[244014]: 2026-02-25 13:00:03.813 244018 DEBUG nova.compute.manager [req-22e02954-1f44-48f8-ac96-aaa9cca2a0ea req-282a1bc0-df1c-44c3-b7c3-e52341c1cdca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] No waiting events found dispatching network-vif-plugged-e11a4d69-8666-420b-b13c-69a6427567a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 08:00:03 np0005629333 nova_compute[244014]: 2026-02-25 13:00:03.814 244018 WARNING nova.compute.manager [req-22e02954-1f44-48f8-ac96-aaa9cca2a0ea req-282a1bc0-df1c-44c3-b7c3-e52341c1cdca 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Received unexpected event network-vif-plugged-e11a4d69-8666-420b-b13c-69a6427567a4 for instance with vm_state active and task_state None.#033[00m
Feb 25 08:00:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2385: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 27 KiB/s wr, 107 op/s
Feb 25 08:00:05 np0005629333 nova_compute[244014]: 2026-02-25 13:00:05.606 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:05 np0005629333 nova_compute[244014]: 2026-02-25 13:00:05.933 244018 DEBUG nova.compute.manager [req-c7fd2024-a984-4aa0-8c1e-eb153aa82ac4 req-9badcacf-bdf7-4e47-8961-47504c679805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Received event network-changed-e11a4d69-8666-420b-b13c-69a6427567a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:00:05 np0005629333 nova_compute[244014]: 2026-02-25 13:00:05.933 244018 DEBUG nova.compute.manager [req-c7fd2024-a984-4aa0-8c1e-eb153aa82ac4 req-9badcacf-bdf7-4e47-8961-47504c679805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Refreshing instance network info cache due to event network-changed-e11a4d69-8666-420b-b13c-69a6427567a4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 08:00:05 np0005629333 nova_compute[244014]: 2026-02-25 13:00:05.934 244018 DEBUG oslo_concurrency.lockutils [req-c7fd2024-a984-4aa0-8c1e-eb153aa82ac4 req-9badcacf-bdf7-4e47-8961-47504c679805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 08:00:05 np0005629333 nova_compute[244014]: 2026-02-25 13:00:05.934 244018 DEBUG oslo_concurrency.lockutils [req-c7fd2024-a984-4aa0-8c1e-eb153aa82ac4 req-9badcacf-bdf7-4e47-8961-47504c679805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 08:00:05 np0005629333 nova_compute[244014]: 2026-02-25 13:00:05.935 244018 DEBUG nova.network.neutron [req-c7fd2024-a984-4aa0-8c1e-eb153aa82ac4 req-9badcacf-bdf7-4e47-8961-47504c679805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Refreshing network info cache for port e11a4d69-8666-420b-b13c-69a6427567a4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 08:00:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2386: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 27 KiB/s wr, 107 op/s
Feb 25 08:00:06 np0005629333 nova_compute[244014]: 2026-02-25 13:00:06.895 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:07 np0005629333 nova_compute[244014]: 2026-02-25 13:00:07.124 244018 DEBUG nova.network.neutron [req-c7fd2024-a984-4aa0-8c1e-eb153aa82ac4 req-9badcacf-bdf7-4e47-8961-47504c679805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Updated VIF entry in instance network info cache for port e11a4d69-8666-420b-b13c-69a6427567a4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 08:00:07 np0005629333 nova_compute[244014]: 2026-02-25 13:00:07.125 244018 DEBUG nova.network.neutron [req-c7fd2024-a984-4aa0-8c1e-eb153aa82ac4 req-9badcacf-bdf7-4e47-8961-47504c679805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Updating instance_info_cache with network_info: [{"id": "e11a4d69-8666-420b-b13c-69a6427567a4", "address": "fa:16:3e:15:04:04", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape11a4d69-86", "ovs_interfaceid": "e11a4d69-8666-420b-b13c-69a6427567a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:00:07 np0005629333 nova_compute[244014]: 2026-02-25 13:00:07.154 244018 DEBUG oslo_concurrency.lockutils [req-c7fd2024-a984-4aa0-8c1e-eb153aa82ac4 req-9badcacf-bdf7-4e47-8961-47504c679805 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
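The network_info blob cached above is a JSON-serializable list, one entry per VIF, with addresses nested under network.subnets[].ips; the floating IP 192.168.122.237 rides along with the fixed 10.100.0.9. A small walk over a trimmed copy of that structure:

    import json

    # Heavily trimmed version of the blob in the log line above.
    cached_blob = '''[{"id": "e11a4d69-8666-420b-b13c-69a6427567a4",
      "network": {"subnets": [{"ips": [{"address": "10.100.0.9",
        "floating_ips": [{"address": "192.168.122.237"}]}]}]}}]'''

    for vif in json.loads(cached_blob):
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floats = [f['address'] for f in ip.get('floating_ips', [])]
                print(vif['id'], ip['address'], floats)
    # -> e11a4d69-8666-420b-b13c-69a6427567a4 10.100.0.9 ['192.168.122.237']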
Feb 25 08:00:07 np0005629333 ovn_controller[147040]: 2026-02-25T13:00:07Z|00192|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:33:21:fa 10.100.0.4
Feb 25 08:00:07 np0005629333 ovn_controller[147040]: 2026-02-25T13:00:07Z|00193|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:33:21:fa 10.100.0.4
Feb 25 08:00:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:00:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2387: 305 pgs: 305 active+clean; 357 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 207 op/s
Feb 25 08:00:10 np0005629333 nova_compute[244014]: 2026-02-25 13:00:10.608 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2388: 305 pgs: 305 active+clean; 357 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Feb 25 08:00:11 np0005629333 nova_compute[244014]: 2026-02-25 13:00:11.898 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:12 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Feb 25 08:00:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2389: 305 pgs: 305 active+clean; 375 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Feb 25 08:00:13 np0005629333 ovn_controller[147040]: 2026-02-25T13:00:13Z|00194|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:15:04:04 10.100.0.9
Feb 25 08:00:13 np0005629333 ovn_controller[147040]: 2026-02-25T13:00:13Z|00195|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:15:04:04 10.100.0.9
Feb 25 08:00:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.706 244018 DEBUG oslo_concurrency.lockutils [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "b0014972-9433-4648-951c-bd9a210b6a69" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.707 244018 DEBUG oslo_concurrency.lockutils [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.707 244018 DEBUG oslo_concurrency.lockutils [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "b0014972-9433-4648-951c-bd9a210b6a69-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.708 244018 DEBUG oslo_concurrency.lockutils [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.709 244018 DEBUG oslo_concurrency.lockutils [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.711 244018 INFO nova.compute.manager [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Terminating instance#033[00m
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.712 244018 DEBUG nova.compute.manager [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 08:00:14 np0005629333 kernel: tap54f5b116-49 (unregistering): left promiscuous mode
Feb 25 08:00:14 np0005629333 NetworkManager[49836]: <info>  [1772024414.7740] device (tap54f5b116-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 08:00:14 np0005629333 ovn_controller[147040]: 2026-02-25T13:00:14Z|01540|binding|INFO|Releasing lport 54f5b116-49ad-43de-8259-78cb20b778c1 from this chassis (sb_readonly=0)
Feb 25 08:00:14 np0005629333 ovn_controller[147040]: 2026-02-25T13:00:14Z|01541|binding|INFO|Setting lport 54f5b116-49ad-43de-8259-78cb20b778c1 down in Southbound
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.781 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:14 np0005629333 ovn_controller[147040]: 2026-02-25T13:00:14Z|01542|binding|INFO|Removing iface tap54f5b116-49 ovn-installed in OVS
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.784 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:14.789 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:21:fa 10.100.0.4'], port_security=['fa:16:3e:33:21:fa 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b0014972-9433-4648-951c-bd9a210b6a69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1ae32a7b-44d8-4e80-93f7-db39a030f905', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf42b548-d8ba-420a-827b-4e92eb42f3f0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=54f5b116-49ad-43de-8259-78cb20b778c1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 08:00:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:14.790 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 54f5b116-49ad-43de-8259-78cb20b778c1 in datapath 62bbaf1c-560e-4f11-b053-43d27fe35ba2 unbound from our chassis#033[00m
Feb 25 08:00:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:14.792 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62bbaf1c-560e-4f11-b053-43d27fe35ba2#033[00m
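The "Matched UPDATE: PortBindingUpdatedEvent" line above is ovsdbapp's event framework at work: the metadata agent registers row events against the southbound Port_Binding table and reacts when a port's binding changes, here unbinding 54f5b116-... and then reprovisioning the datapath. A skeletal event class in the same style (the real Neutron class carries more logic, and handle_binding_change is a hypothetical agent method):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, agent):
            self.agent = agent
            # Fire only on updates to Port_Binding rows, matching the
            # events=('update',) shown in the log.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # "old" carries the previous values of the changed columns,
            # e.g. chassis/up flipping when the port is released.
            if hasattr(old, 'chassis'):
                self.agent.handle_binding_change(row.logical_port)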
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.794 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:14.807 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c1a06369-a45f-4d59-b850-4f4b8fb49c5c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:14 np0005629333 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000090.scope: Deactivated successfully.
Feb 25 08:00:14 np0005629333 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000090.scope: Consumed 12.085s CPU time.
Feb 25 08:00:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:14.835 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[dd0c3230-62ac-43df-b856-e525ca2be2c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:14 np0005629333 systemd-machined[210048]: Machine qemu-178-instance-00000090 terminated.
Feb 25 08:00:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:14.839 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f96ed993-bdb4-4fb5-9c7a-88f11913b990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2390: 305 pgs: 305 active+clean; 375 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 130 op/s
Feb 25 08:00:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:14.865 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ce25ee47-b4c3-429c-8470-8dcb08ec3d67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:14.887 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[565db2e2-c7d0-4ab4-ab2d-dd640be29a26]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62bbaf1c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:da:c2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 450], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632617, 'reachable_time': 39162, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375479, 'error': None, 'target': 'ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:14.901 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f3c05db7-08b2-4a3f-9740-67ff06335121]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap62bbaf1c-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632625, 'tstamp': 632625}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375480, 'error': None, 'target': 'ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap62bbaf1c-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632627, 'tstamp': 632627}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375480, 'error': None, 'target': 'ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
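The two privsep replies above are netlink dumps (RTM_NEWLINK, then RTM_NEWADDR) that the agent requested from inside the ovnmeta- namespace: the tap62bbaf1c-51 veth is up and carries both the subnet address 10.100.0.2/28 and the metadata address 169.254.169.254/32. Roughly the same queries, issued directly with pyroute2 (requires root and an existing namespace; the namespace name is copied from the log):

    from pyroute2 import NetNS

    with NetNS('ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2') as ns:
        # Equivalent of the RTM_NEWLINK dump: interface names and state.
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'), link['state'])
        # Equivalent of the RTM_NEWADDR dump: addresses and prefixes.
        for addr in ns.get_addr():
            print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])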
Feb 25 08:00:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:14.903 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62bbaf1c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.905 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.910 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:14.910 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62bbaf1c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:00:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:14.911 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 08:00:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:14.911 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62bbaf1c-50, col_values=(('external_ids', {'iface-id': 'e8ddd525-f10e-4482-b8e7-397d0dcdd470'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:00:14 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:14.912 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.933 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.940 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.951 244018 INFO nova.virt.libvirt.driver [-] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Instance destroyed successfully.#033[00m
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.951 244018 DEBUG nova.objects.instance [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'resources' on Instance uuid b0014972-9433-4648-951c-bd9a210b6a69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.987 244018 DEBUG nova.compute.manager [req-5dc3661e-51d9-4385-93de-b19f065b6434 req-3da77c23-91bc-4c3e-88d6-1c11eac114c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Received event network-vif-unplugged-54f5b116-49ad-43de-8259-78cb20b778c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.987 244018 DEBUG oslo_concurrency.lockutils [req-5dc3661e-51d9-4385-93de-b19f065b6434 req-3da77c23-91bc-4c3e-88d6-1c11eac114c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b0014972-9433-4648-951c-bd9a210b6a69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.987 244018 DEBUG oslo_concurrency.lockutils [req-5dc3661e-51d9-4385-93de-b19f065b6434 req-3da77c23-91bc-4c3e-88d6-1c11eac114c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.988 244018 DEBUG oslo_concurrency.lockutils [req-5dc3661e-51d9-4385-93de-b19f065b6434 req-3da77c23-91bc-4c3e-88d6-1c11eac114c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.988 244018 DEBUG nova.compute.manager [req-5dc3661e-51d9-4385-93de-b19f065b6434 req-3da77c23-91bc-4c3e-88d6-1c11eac114c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] No waiting events found dispatching network-vif-unplugged-54f5b116-49ad-43de-8259-78cb20b778c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.988 244018 DEBUG nova.compute.manager [req-5dc3661e-51d9-4385-93de-b19f065b6434 req-3da77c23-91bc-4c3e-88d6-1c11eac114c9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Received event network-vif-unplugged-54f5b116-49ad-43de-8259-78cb20b778c1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.991 244018 DEBUG nova.virt.libvirt.vif [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:59:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-0-306231765',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-0-306231765',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-gen',id=144,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKflPzkYcK8T1jIvTW73/HFgMcFhCGkDieaMKwFdqxoWC54RyT9HKRPaz7zDpWGOyrYK1XW41LVz9p16Co9g6HWeTxnQZJe7dDdqfF7T6FTQYUTjJqoaDTrnYUJSVUiaQA==',key_name='tempest-TestSecurityGroupsBasicOps-600669415',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:59:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-ir6by9q0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:59:55Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=b0014972-9433-4648-951c-bd9a210b6a69,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "54f5b116-49ad-43de-8259-78cb20b778c1", "address": "fa:16:3e:33:21:fa", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f5b116-49", "ovs_interfaceid": "54f5b116-49ad-43de-8259-78cb20b778c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.991 244018 DEBUG nova.network.os_vif_util [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "54f5b116-49ad-43de-8259-78cb20b778c1", "address": "fa:16:3e:33:21:fa", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f5b116-49", "ovs_interfaceid": "54f5b116-49ad-43de-8259-78cb20b778c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.992 244018 DEBUG nova.network.os_vif_util [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:21:fa,bridge_name='br-int',has_traffic_filtering=True,id=54f5b116-49ad-43de-8259-78cb20b778c1,network=Network(62bbaf1c-560e-4f11-b053-43d27fe35ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f5b116-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.993 244018 DEBUG os_vif [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:21:fa,bridge_name='br-int',has_traffic_filtering=True,id=54f5b116-49ad-43de-8259-78cb20b778c1,network=Network(62bbaf1c-560e-4f11-b053-43d27fe35ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f5b116-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
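The three entries above show the unplug path: nova.network.os_vif_util converts nova's internal VIF model into an os-vif VIFOpenVSwitch object, and os_vif.unplug() hands it to the 'ovs' plugin. A minimal standalone sketch of that call, reusing the values from the log entry above (the bare InstanceInfo is the only assumption):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the os-vif plugins, including 'ovs'

    net = network.Network(id='62bbaf1c-560e-4f11-b053-43d27fe35ba2',
                          bridge='br-int')
    v = vif.VIFOpenVSwitch(id='54f5b116-49ad-43de-8259-78cb20b778c1',
                           address='fa:16:3e:33:21:fa',
                           bridge_name='br-int',
                           vif_name='tap54f5b116-49',
                           plugin='ovs',
                           network=net)
    inst = instance_info.InstanceInfo(uuid='b0014972-9433-4648-951c-bd9a210b6a69')
    os_vif.unplug(v, inst)  # the plugin removes the port from br-int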
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.995 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.996 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54f5b116-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
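DelPortCommand is the ovsdbapp transaction primitive that actually removes the port row. The same operation is reachable through ovsdbapp's public Open_vSwitch API; a sketch, where the ovsdb-server socket path is an assumption:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    # if_exists=True makes the delete a no-op if the port is already gone
    api.del_port('tap54f5b116-49', bridge='br-int',
                 if_exists=True).execute(check_error=True)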
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.998 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:14 np0005629333 nova_compute[244014]: 2026-02-25 13:00:14.999 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:15 np0005629333 nova_compute[244014]: 2026-02-25 13:00:15.003 244018 INFO os_vif [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:21:fa,bridge_name='br-int',has_traffic_filtering=True,id=54f5b116-49ad-43de-8259-78cb20b778c1,network=Network(62bbaf1c-560e-4f11-b053-43d27fe35ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f5b116-49')#033[00m
Feb 25 08:00:15 np0005629333 nova_compute[244014]: 2026-02-25 13:00:15.342 244018 INFO nova.virt.libvirt.driver [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Deleting instance files /var/lib/nova/instances/b0014972-9433-4648-951c-bd9a210b6a69_del#033[00m
Feb 25 08:00:15 np0005629333 nova_compute[244014]: 2026-02-25 13:00:15.343 244018 INFO nova.virt.libvirt.driver [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Deletion of /var/lib/nova/instances/b0014972-9433-4648-951c-bd9a210b6a69_del complete#033[00m
Feb 25 08:00:15 np0005629333 nova_compute[244014]: 2026-02-25 13:00:15.395 244018 INFO nova.compute.manager [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 08:00:15 np0005629333 nova_compute[244014]: 2026-02-25 13:00:15.395 244018 DEBUG oslo.service.loopingcall [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 08:00:15 np0005629333 nova_compute[244014]: 2026-02-25 13:00:15.396 244018 DEBUG nova.compute.manager [-] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 08:00:15 np0005629333 nova_compute[244014]: 2026-02-25 13:00:15.396 244018 DEBUG nova.network.neutron [-] [instance: b0014972-9433-4648-951c-bd9a210b6a69] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
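The loopingcall entry above is oslo.service waiting on nova's _deallocate_network_with_retries helper, which keeps retrying the neutron deallocation until it succeeds. A generic sketch of that retry pattern (the stand-in callable and exception are illustrative, not nova's actual code):

    from oslo_service import loopingcall

    class TransientNeutronError(Exception):   # illustrative stand-in
        pass

    def deallocate_for_instance():            # illustrative stand-in
        pass

    def _deallocate_with_retries():
        try:
            deallocate_for_instance()
        except TransientNeutronError:
            return                            # try again on the next tick
        raise loopingcall.LoopingCallDone()   # success: stop the loop

    loopingcall.FixedIntervalLoopingCall(_deallocate_with_retries).start(
        interval=1.0).wait()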
Feb 25 08:00:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 08:00:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 44K writes, 173K keys, 44K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s#012Cumulative WAL: 44K writes, 15K syncs, 2.76 writes per sync, written: 0.18 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5670 writes, 23K keys, 5670 commit groups, 1.0 writes per commit group, ingest: 27.83 MB, 0.05 MB/s#012Interval WAL: 5670 writes, 2215 syncs, 2.56 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
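The #033[00m tails on the daemon entries and the #012 runs in the rocksdb dump above are control bytes escaped as #NNN octal by the syslog pipeline (033 = ESC, so #033[00m is the ANSI color reset; 012 = newline, so the rocksdb stats are really a multi-line block). A small decoder for reading such entries:

    import re

    def unescape_syslog(line: str) -> str:
        # turn each #NNN octal escape back into its raw byte
        return re.sub(r'#([0-7]{3})',
                      lambda m: chr(int(m.group(1), 8)), line)

    print(unescape_syslog('DUMPING STATS#012** DB Stats **'))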
Feb 25 08:00:15 np0005629333 nova_compute[244014]: 2026-02-25 13:00:15.611 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:16 np0005629333 podman[375512]: 2026-02-25 13:00:16.74616305 +0000 UTC m=+0.078902147 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Feb 25 08:00:16 np0005629333 podman[375513]: 2026-02-25 13:00:16.784850671 +0000 UTC m=+0.117555107 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible)
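The two health_status records above come from podman running the healthcheck declared in config_data ('test': '/openstack/healthcheck') on its timer. The same check can be triggered on demand; a sketch assuming the podman CLI is on PATH:

    import subprocess

    # runs the container's configured healthcheck once; rc 0 means healthy
    rc = subprocess.call(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'])
    print('healthy' if rc == 0 else 'unhealthy (rc=%d)' % rc)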
Feb 25 08:00:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2391: 305 pgs: 305 active+clean; 375 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 130 op/s
Feb 25 08:00:17 np0005629333 nova_compute[244014]: 2026-02-25 13:00:17.106 244018 DEBUG nova.compute.manager [req-01df855d-d2d7-43f6-864c-0c1d0e0e6cda req-e3816c3a-e27c-4df2-990d-16c62c6e6ac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Received event network-vif-plugged-54f5b116-49ad-43de-8259-78cb20b778c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:00:17 np0005629333 nova_compute[244014]: 2026-02-25 13:00:17.106 244018 DEBUG oslo_concurrency.lockutils [req-01df855d-d2d7-43f6-864c-0c1d0e0e6cda req-e3816c3a-e27c-4df2-990d-16c62c6e6ac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "b0014972-9433-4648-951c-bd9a210b6a69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:00:17 np0005629333 nova_compute[244014]: 2026-02-25 13:00:17.106 244018 DEBUG oslo_concurrency.lockutils [req-01df855d-d2d7-43f6-864c-0c1d0e0e6cda req-e3816c3a-e27c-4df2-990d-16c62c6e6ac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:00:17 np0005629333 nova_compute[244014]: 2026-02-25 13:00:17.107 244018 DEBUG oslo_concurrency.lockutils [req-01df855d-d2d7-43f6-864c-0c1d0e0e6cda req-e3816c3a-e27c-4df2-990d-16c62c6e6ac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
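The Acquiring/acquired/released trio above is oslo.concurrency's lockutils serializing access to the per-instance event registry. A sketch of the decorator usage that produces exactly these debug lines (the registry and function body are illustrative):

    from oslo_concurrency import lockutils

    _events = {}  # illustrative per-instance event registry

    @lockutils.synchronized('b0014972-9433-4648-951c-bd9a210b6a69-events')
    def _pop_event(name):
        # runs with the named lock held; lockutils logs acquire/release
        return _events.pop(name, None)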
Feb 25 08:00:17 np0005629333 nova_compute[244014]: 2026-02-25 13:00:17.107 244018 DEBUG nova.compute.manager [req-01df855d-d2d7-43f6-864c-0c1d0e0e6cda req-e3816c3a-e27c-4df2-990d-16c62c6e6ac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] No waiting events found dispatching network-vif-plugged-54f5b116-49ad-43de-8259-78cb20b778c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 08:00:17 np0005629333 nova_compute[244014]: 2026-02-25 13:00:17.107 244018 WARNING nova.compute.manager [req-01df855d-d2d7-43f6-864c-0c1d0e0e6cda req-e3816c3a-e27c-4df2-990d-16c62c6e6ac7 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Received unexpected event network-vif-plugged-54f5b116-49ad-43de-8259-78cb20b778c1 for instance with vm_state active and task_state deleting.#033[00m
Feb 25 08:00:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:17.638 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
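SbGlobalUpdateEvent is a row event the metadata agent registers against the SB_Global table; ovsdbapp matches it whenever the row changes (here nb_cfg moved from 46 to 47). A sketch of how such an event class is declared on ovsdbapp's RowEvent base (neutron's real class adds priority handling; the handler body is illustrative):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        """Fire on any update to the single SB_Global row."""
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

        def run(self, event, row, old):
            print('nb_cfg moved to', row.nb_cfg)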
Feb 25 08:00:17 np0005629333 nova_compute[244014]: 2026-02-25 13:00:17.638 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:17.639 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 25 08:00:18 np0005629333 nova_compute[244014]: 2026-02-25 13:00:18.618 244018 DEBUG nova.network.neutron [-] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:00:18 np0005629333 nova_compute[244014]: 2026-02-25 13:00:18.642 244018 INFO nova.compute.manager [-] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Took 3.25 seconds to deallocate network for instance.#033[00m
Feb 25 08:00:18 np0005629333 nova_compute[244014]: 2026-02-25 13:00:18.710 244018 DEBUG oslo_concurrency.lockutils [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:00:18 np0005629333 nova_compute[244014]: 2026-02-25 13:00:18.710 244018 DEBUG oslo_concurrency.lockutils [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:00:18 np0005629333 nova_compute[244014]: 2026-02-25 13:00:18.738 244018 DEBUG nova.compute.manager [req-76fdac8d-b780-40c8-9798-ffcaf17babe9 req-ed683fe8-323c-4d38-975c-b07d6c2d43d5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Received event network-vif-deleted-54f5b116-49ad-43de-8259-78cb20b778c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:00:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:00:18 np0005629333 nova_compute[244014]: 2026-02-25 13:00:18.803 244018 DEBUG oslo_concurrency.processutils [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:00:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2392: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.3 MiB/s wr, 189 op/s
Feb 25 08:00:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:00:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4035139781' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:00:19 np0005629333 nova_compute[244014]: 2026-02-25 13:00:19.347 244018 DEBUG oslo_concurrency.processutils [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
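The "Running cmd (subprocess)" / "returned: 0" pair is oslo.concurrency's processutils wrapper, which the libvirt driver uses here to measure Ceph pool usage for the resource tracker. The equivalent direct call, assuming the ceph CLI and keyring are present:

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'ceph', 'df', '--format=json', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
    # raises ProcessExecutionError on a non-zero exit; out is the JSON text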
Feb 25 08:00:19 np0005629333 nova_compute[244014]: 2026-02-25 13:00:19.353 244018 DEBUG nova.compute.provider_tree [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:00:19 np0005629333 nova_compute[244014]: 2026-02-25 13:00:19.372 244018 DEBUG nova.scheduler.client.report [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
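Placement treats schedulable capacity per resource class as (total - reserved) * allocation_ratio, so the unchanged inventory above works out as follows:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2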
Feb 25 08:00:19 np0005629333 nova_compute[244014]: 2026-02-25 13:00:19.393 244018 DEBUG oslo_concurrency.lockutils [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:00:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 08:00:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.2 total, 600.0 interval#012Cumulative writes: 45K writes, 181K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s#012Cumulative WAL: 45K writes, 16K syncs, 2.78 writes per sync, written: 0.18 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5581 writes, 24K keys, 5581 commit groups, 1.0 writes per commit group, ingest: 27.22 MB, 0.05 MB/s#012Interval WAL: 5581 writes, 2111 syncs, 2.64 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 08:00:19 np0005629333 nova_compute[244014]: 2026-02-25 13:00:19.421 244018 INFO nova.scheduler.client.report [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Deleted allocations for instance b0014972-9433-4648-951c-bd9a210b6a69#033[00m
Feb 25 08:00:19 np0005629333 nova_compute[244014]: 2026-02-25 13:00:19.493 244018 DEBUG oslo_concurrency.lockutils [None req-9434cbfb-9d6c-480a-b548-fe0ba2d27a74 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "b0014972-9433-4648-951c-bd9a210b6a69" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:00:19 np0005629333 nova_compute[244014]: 2026-02-25 13:00:19.998 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:20 np0005629333 nova_compute[244014]: 2026-02-25 13:00:20.247 244018 DEBUG oslo_concurrency.lockutils [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "fc7d7f86-eb7c-476c-840e-98c97329de34" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:00:20 np0005629333 nova_compute[244014]: 2026-02-25 13:00:20.248 244018 DEBUG oslo_concurrency.lockutils [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:00:20 np0005629333 nova_compute[244014]: 2026-02-25 13:00:20.248 244018 DEBUG oslo_concurrency.lockutils [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:00:20 np0005629333 nova_compute[244014]: 2026-02-25 13:00:20.249 244018 DEBUG oslo_concurrency.lockutils [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:00:20 np0005629333 nova_compute[244014]: 2026-02-25 13:00:20.249 244018 DEBUG oslo_concurrency.lockutils [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:00:20 np0005629333 nova_compute[244014]: 2026-02-25 13:00:20.251 244018 INFO nova.compute.manager [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Terminating instance#033[00m
Feb 25 08:00:20 np0005629333 nova_compute[244014]: 2026-02-25 13:00:20.253 244018 DEBUG nova.compute.manager [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 08:00:20 np0005629333 kernel: tap5b9c40d0-fe (unregistering): left promiscuous mode
Feb 25 08:00:20 np0005629333 NetworkManager[49836]: <info>  [1772024420.3259] device (tap5b9c40d0-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 08:00:20 np0005629333 ovn_controller[147040]: 2026-02-25T13:00:20Z|01543|binding|INFO|Releasing lport 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 from this chassis (sb_readonly=0)
Feb 25 08:00:20 np0005629333 nova_compute[244014]: 2026-02-25 13:00:20.331 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:20 np0005629333 ovn_controller[147040]: 2026-02-25T13:00:20Z|01544|binding|INFO|Setting lport 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 down in Southbound
Feb 25 08:00:20 np0005629333 ovn_controller[147040]: 2026-02-25T13:00:20Z|01545|binding|INFO|Removing iface tap5b9c40d0-fe ovn-installed in OVS
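The three ovn_controller entries above release the logical port, set it down in the Southbound DB, and clear the ovn-installed marker on the OVS interface. The resulting Port_Binding state can be checked from the SB DB; a sketch shelling out to ovn-sbctl (assuming it is installed and the SB DB is reachable with default options):

    import subprocess

    out = subprocess.check_output(
        ['ovn-sbctl', '--format=json', 'find', 'Port_Binding',
         'logical_port=5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3'])
    # after the release, the row's 'chassis' is empty and 'up' is false
    print(out.decode())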
Feb 25 08:00:20 np0005629333 nova_compute[244014]: 2026-02-25 13:00:20.334 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:20.340 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:db:fb:af 10.100.0.14'], port_security=['fa:16:3e:db:fb:af 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'fc7d7f86-eb7c-476c-840e-98c97329de34', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1134355f-9f11-4363-a935-f82cb22e83bd 1ae32a7b-44d8-4e80-93f7-db39a030f905', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf42b548-d8ba-420a-827b-4e92eb42f3f0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 08:00:20 np0005629333 nova_compute[244014]: 2026-02-25 13:00:20.341 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:20.342 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 in datapath 62bbaf1c-560e-4f11-b053-43d27fe35ba2 unbound from our chassis#033[00m
Feb 25 08:00:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:20.343 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 62bbaf1c-560e-4f11-b053-43d27fe35ba2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 08:00:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:20.344 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eba56fa9-ed88-4cbb-a546-2ec83f2f41e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:20.345 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2 namespace which is not needed anymore#033[00m
Feb 25 08:00:20 np0005629333 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Feb 25 08:00:20 np0005629333 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d0000008f.scope: Consumed 14.723s CPU time.
Feb 25 08:00:20 np0005629333 systemd-machined[210048]: Machine qemu-177-instance-0000008f terminated.
Feb 25 08:00:20 np0005629333 nova_compute[244014]: 2026-02-25 13:00:20.480 244018 INFO nova.virt.libvirt.driver [-] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Instance destroyed successfully.#033[00m
Feb 25 08:00:20 np0005629333 nova_compute[244014]: 2026-02-25 13:00:20.481 244018 DEBUG nova.objects.instance [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'resources' on Instance uuid fc7d7f86-eb7c-476c-840e-98c97329de34 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 08:00:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:00:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:00:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:00:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:00:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:00:20 np0005629333 nova_compute[244014]: 2026-02-25 13:00:20.498 244018 DEBUG nova.virt.libvirt.vif [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:59:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-1045538617',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-1045538617',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=143,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKflPzkYcK8T1jIvTW73/HFgMcFhCGkDieaMKwFdqxoWC54RyT9HKRPaz7zDpWGOyrYK1XW41LVz9p16Co9g6HWeTxnQZJe7dDdqfF7T6FTQYUTjJqoaDTrnYUJSVUiaQA==',key_name='tempest-TestSecurityGroupsBasicOps-600669415',keypairs=<?>,launch_index=0,launched_at=2026-02-25T12:59:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-q2l93d4m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T12:59:18Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=fc7d7f86-eb7c-476c-840e-98c97329de34,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "address": "fa:16:3e:db:fb:af", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b9c40d0-fe", "ovs_interfaceid": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 08:00:20 np0005629333 nova_compute[244014]: 2026-02-25 13:00:20.498 244018 DEBUG nova.network.os_vif_util [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "address": "fa:16:3e:db:fb:af", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b9c40d0-fe", "ovs_interfaceid": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 08:00:20 np0005629333 nova_compute[244014]: 2026-02-25 13:00:20.499 244018 DEBUG nova.network.os_vif_util [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:db:fb:af,bridge_name='br-int',has_traffic_filtering=True,id=5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3,network=Network(62bbaf1c-560e-4f11-b053-43d27fe35ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b9c40d0-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 08:00:20 np0005629333 nova_compute[244014]: 2026-02-25 13:00:20.499 244018 DEBUG os_vif [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:db:fb:af,bridge_name='br-int',has_traffic_filtering=True,id=5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3,network=Network(62bbaf1c-560e-4f11-b053-43d27fe35ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b9c40d0-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 08:00:20 np0005629333 nova_compute[244014]: 2026-02-25 13:00:20.501 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:20 np0005629333 nova_compute[244014]: 2026-02-25 13:00:20.501 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b9c40d0-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:00:20 np0005629333 nova_compute[244014]: 2026-02-25 13:00:20.502 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:20 np0005629333 nova_compute[244014]: 2026-02-25 13:00:20.505 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 08:00:20 np0005629333 neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2[374269]: [NOTICE]   (374288) : haproxy version is 2.8.14-c23fe91
Feb 25 08:00:20 np0005629333 neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2[374269]: [NOTICE]   (374288) : path to executable is /usr/sbin/haproxy
Feb 25 08:00:20 np0005629333 neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2[374269]: [WARNING]  (374288) : Exiting Master process...
Feb 25 08:00:20 np0005629333 neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2[374269]: [ALERT]    (374288) : Current worker (374300) exited with code 143 (Terminated)
Feb 25 08:00:20 np0005629333 neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2[374269]: [WARNING]  (374288) : All workers exited. Exiting... (0)
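The haproxy worker's exit code 143 above is the usual 128 + signal-number convention, i.e. termination by SIGTERM rather than a crash:

    import signal

    # 143 = 128 + 15: the worker was killed by SIGTERM
    assert 143 - 128 == signal.SIGTERM == 15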
Feb 25 08:00:20 np0005629333 nova_compute[244014]: 2026-02-25 13:00:20.508 244018 INFO os_vif [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:db:fb:af,bridge_name='br-int',has_traffic_filtering=True,id=5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3,network=Network(62bbaf1c-560e-4f11-b053-43d27fe35ba2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b9c40d0-fe')#033[00m
Feb 25 08:00:20 np0005629333 systemd[1]: libpod-2a98674612616c51d7e93497e598848a866c7677b00e7d83cc7aa53f3ea9d9ea.scope: Deactivated successfully.
Feb 25 08:00:20 np0005629333 podman[375684]: 2026-02-25 13:00:20.516857424 +0000 UTC m=+0.088244180 container died 2a98674612616c51d7e93497e598848a866c7677b00e7d83cc7aa53f3ea9d9ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 08:00:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:00:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:00:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:00:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:00:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:00:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:00:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:00:20 np0005629333 nova_compute[244014]: 2026-02-25 13:00:20.612 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:20 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a98674612616c51d7e93497e598848a866c7677b00e7d83cc7aa53f3ea9d9ea-userdata-shm.mount: Deactivated successfully.
Feb 25 08:00:20 np0005629333 systemd[1]: var-lib-containers-storage-overlay-00e350bf0228b37d39a08274b06ef58a344901c5a3e3eee456d5980226c76505-merged.mount: Deactivated successfully.
Feb 25 08:00:20 np0005629333 podman[375684]: 2026-02-25 13:00:20.640497082 +0000 UTC m=+0.211883838 container cleanup 2a98674612616c51d7e93497e598848a866c7677b00e7d83cc7aa53f3ea9d9ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 08:00:20 np0005629333 systemd[1]: libpod-conmon-2a98674612616c51d7e93497e598848a866c7677b00e7d83cc7aa53f3ea9d9ea.scope: Deactivated successfully.
Feb 25 08:00:20 np0005629333 podman[375794]: 2026-02-25 13:00:20.748960481 +0000 UTC m=+0.085419800 container remove 2a98674612616c51d7e93497e598848a866c7677b00e7d83cc7aa53f3ea9d9ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 08:00:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:20.759 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[054abd8e-7a3e-4546-ab63-6a6fef539af9]: (4, ('Wed Feb 25 01:00:20 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2 (2a98674612616c51d7e93497e598848a866c7677b00e7d83cc7aa53f3ea9d9ea)\n2a98674612616c51d7e93497e598848a866c7677b00e7d83cc7aa53f3ea9d9ea\nWed Feb 25 01:00:20 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2 (2a98674612616c51d7e93497e598848a866c7677b00e7d83cc7aa53f3ea9d9ea)\n2a98674612616c51d7e93497e598848a866c7677b00e7d83cc7aa53f3ea9d9ea\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:20.761 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1a393f8b-7885-4b8c-a684-49464afde31a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:20.761 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62bbaf1c-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:00:20 np0005629333 nova_compute[244014]: 2026-02-25 13:00:20.763 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:20 np0005629333 kernel: tap62bbaf1c-50: left promiscuous mode
Feb 25 08:00:20 np0005629333 nova_compute[244014]: 2026-02-25 13:00:20.774 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:20.777 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[59f6d8a5-f273-4469-9bdc-991ef7655b18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:20.790 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3daea00c-c24e-4759-a890-dbd5916ef82d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:20.791 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e6dc0fda-cf9c-468f-92d7-e49d9ff78b2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:20.805 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2d14eefa-1516-4ccc-a286-44bf29e4c3f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632611, 'reachable_time': 43680, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375809, 'error': None, 'target': 'ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:20.808 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
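The privsep daemon performed the link dump and the namespace delete on the agent's behalf; both primitives map onto pyroute2. A minimal sketch of the same two operations (namespace name taken from the log; requires the namespace to still exist and root privileges):

    from pyroute2 import NetNS, netns

    ns_name = 'ovnmeta-62bbaf1c-560e-4f11-b053-43d27fe35ba2'
    ns = NetNS(ns_name)
    try:
        for link in ns.get_links():   # yields RTM_NEWLINK dumps like the one above
            print(link.get_attr('IFLA_IFNAME'), link['state'])
    finally:
        ns.close()
    netns.remove(ns_name)             # equivalent of `ip netns delete`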
Feb 25 08:00:20 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:20.809 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[0ffa9d40-afe7-4398-9d71-7508c55a6a97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:20 np0005629333 systemd[1]: run-netns-ovnmeta\x2d62bbaf1c\x2d560e\x2d4f11\x2db053\x2d43d27fe35ba2.mount: Deactivated successfully.
Feb 25 08:00:20 np0005629333 nova_compute[244014]: 2026-02-25 13:00:20.851 244018 DEBUG nova.compute.manager [req-122fc605-aa65-4e23-8dc5-78394d0010ac req-64dc8962-a33c-4d70-a93d-1c6a322658ab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Received event network-changed-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:00:20 np0005629333 nova_compute[244014]: 2026-02-25 13:00:20.852 244018 DEBUG nova.compute.manager [req-122fc605-aa65-4e23-8dc5-78394d0010ac req-64dc8962-a33c-4d70-a93d-1c6a322658ab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Refreshing instance network info cache due to event network-changed-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 08:00:20 np0005629333 nova_compute[244014]: 2026-02-25 13:00:20.852 244018 DEBUG oslo_concurrency.lockutils [req-122fc605-aa65-4e23-8dc5-78394d0010ac req-64dc8962-a33c-4d70-a93d-1c6a322658ab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 08:00:20 np0005629333 nova_compute[244014]: 2026-02-25 13:00:20.853 244018 DEBUG oslo_concurrency.lockutils [req-122fc605-aa65-4e23-8dc5-78394d0010ac req-64dc8962-a33c-4d70-a93d-1c6a322658ab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 08:00:20 np0005629333 nova_compute[244014]: 2026-02-25 13:00:20.853 244018 DEBUG nova.network.neutron [req-122fc605-aa65-4e23-8dc5-78394d0010ac req-64dc8962-a33c-4d70-a93d-1c6a322658ab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Refreshing network info cache for port 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 08:00:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2393: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 266 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Feb 25 08:00:20 np0005629333 podman[375822]: 2026-02-25 13:00:20.963904205 +0000 UTC m=+0.067064113 container create 7cf4ada2fc744567a1ad85a0423ec9851c2acb4193b9d052fe0ff34a594d35c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_pike, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 08:00:21 np0005629333 podman[375822]: 2026-02-25 13:00:20.9226152 +0000 UTC m=+0.025775178 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:00:21 np0005629333 systemd[1]: Started libpod-conmon-7cf4ada2fc744567a1ad85a0423ec9851c2acb4193b9d052fe0ff34a594d35c8.scope.
Feb 25 08:00:21 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:00:21 np0005629333 podman[375822]: 2026-02-25 13:00:21.089010094 +0000 UTC m=+0.192170062 container init 7cf4ada2fc744567a1ad85a0423ec9851c2acb4193b9d052fe0ff34a594d35c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 08:00:21 np0005629333 podman[375822]: 2026-02-25 13:00:21.096901666 +0000 UTC m=+0.200061584 container start 7cf4ada2fc744567a1ad85a0423ec9851c2acb4193b9d052fe0ff34a594d35c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_pike, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:00:21 np0005629333 jovial_pike[375838]: 167 167
Feb 25 08:00:21 np0005629333 systemd[1]: libpod-7cf4ada2fc744567a1ad85a0423ec9851c2acb4193b9d052fe0ff34a594d35c8.scope: Deactivated successfully.
Feb 25 08:00:21 np0005629333 podman[375822]: 2026-02-25 13:00:21.129136736 +0000 UTC m=+0.232296834 container attach 7cf4ada2fc744567a1ad85a0423ec9851c2acb4193b9d052fe0ff34a594d35c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_pike, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:00:21 np0005629333 podman[375822]: 2026-02-25 13:00:21.129552868 +0000 UTC m=+0.232712776 container died 7cf4ada2fc744567a1ad85a0423ec9851c2acb4193b9d052fe0ff34a594d35c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:00:21 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7e75caac92e819f6daf5aa44940c49249d3387b5f8b1672fc941ba3212bda929-merged.mount: Deactivated successfully.
Feb 25 08:00:21 np0005629333 nova_compute[244014]: 2026-02-25 13:00:21.223 244018 INFO nova.virt.libvirt.driver [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Deleting instance files /var/lib/nova/instances/fc7d7f86-eb7c-476c-840e-98c97329de34_del
Feb 25 08:00:21 np0005629333 nova_compute[244014]: 2026-02-25 13:00:21.226 244018 INFO nova.virt.libvirt.driver [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Deletion of /var/lib/nova/instances/fc7d7f86-eb7c-476c-840e-98c97329de34_del complete
Feb 25 08:00:21 np0005629333 nova_compute[244014]: 2026-02-25 13:00:21.288 244018 INFO nova.compute.manager [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Took 1.03 seconds to destroy the instance on the hypervisor.
Feb 25 08:00:21 np0005629333 nova_compute[244014]: 2026-02-25 13:00:21.289 244018 DEBUG oslo.service.loopingcall [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 08:00:21 np0005629333 nova_compute[244014]: 2026-02-25 13:00:21.290 244018 DEBUG nova.compute.manager [-] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 08:00:21 np0005629333 nova_compute[244014]: 2026-02-25 13:00:21.291 244018 DEBUG nova.network.neutron [-] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 08:00:21 np0005629333 podman[375822]: 2026-02-25 13:00:21.294055898 +0000 UTC m=+0.397215816 container remove 7cf4ada2fc744567a1ad85a0423ec9851c2acb4193b9d052fe0ff34a594d35c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_pike, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 08:00:21 np0005629333 systemd[1]: libpod-conmon-7cf4ada2fc744567a1ad85a0423ec9851c2acb4193b9d052fe0ff34a594d35c8.scope: Deactivated successfully.
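[editor's note] The jovial_pike container above runs the full one-shot podman lifecycle (create, init, start, attach, died, remove) in under half a second, and its only output is "167 167", the uid/gid that Ceph containers run as. This looks like one of cephadm's short-lived helper invocations. A rough equivalent under those assumptions; the command inside the container is illustrative, not pulled from the log:

```python
# Sketch reproducing the one-shot container lifecycle in the log:
# create -> init -> start -> attach -> died -> remove within ~300 ms.
# The image digest is taken from the log; the stat command is an assumption.
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

# "--rm" is what makes podman emit "container remove" as soon as the
# process exits, matching the died/remove pair in the journal.
result = subprocess.run(
    ["podman", "run", "--rm", IMAGE,
     "stat", "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # e.g. "167 167", the ceph uid/gid
```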
Feb 25 08:00:21 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:00:21 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:00:21 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:00:21 np0005629333 podman[375863]: 2026-02-25 13:00:21.501571412 +0000 UTC m=+0.059318205 container create a872d4d2a3acfe89105ece074e542f5403706406295ebbc8bcd8085908948385 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:00:21 np0005629333 systemd[1]: Started libpod-conmon-a872d4d2a3acfe89105ece074e542f5403706406295ebbc8bcd8085908948385.scope.
Feb 25 08:00:21 np0005629333 podman[375863]: 2026-02-25 13:00:21.470368091 +0000 UTC m=+0.028114934 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:00:21 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:00:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd31e84eee565bae2034566e17733d6392359be72829b5d6dea6a847df343293/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:00:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd31e84eee565bae2034566e17733d6392359be72829b5d6dea6a847df343293/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:00:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd31e84eee565bae2034566e17733d6392359be72829b5d6dea6a847df343293/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:00:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd31e84eee565bae2034566e17733d6392359be72829b5d6dea6a847df343293/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:00:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd31e84eee565bae2034566e17733d6392359be72829b5d6dea6a847df343293/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:00:21 np0005629333 podman[375863]: 2026-02-25 13:00:21.616891115 +0000 UTC m=+0.174637928 container init a872d4d2a3acfe89105ece074e542f5403706406295ebbc8bcd8085908948385 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 08:00:21 np0005629333 podman[375863]: 2026-02-25 13:00:21.630451057 +0000 UTC m=+0.188197860 container start a872d4d2a3acfe89105ece074e542f5403706406295ebbc8bcd8085908948385 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:00:21 np0005629333 podman[375863]: 2026-02-25 13:00:21.635294334 +0000 UTC m=+0.193041137 container attach a872d4d2a3acfe89105ece074e542f5403706406295ebbc8bcd8085908948385 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_proskuriakova, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:00:21 np0005629333 nova_compute[244014]: 2026-02-25 13:00:21.775 244018 DEBUG nova.network.neutron [-] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 08:00:21 np0005629333 nova_compute[244014]: 2026-02-25 13:00:21.795 244018 INFO nova.compute.manager [-] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Took 0.50 seconds to deallocate network for instance.
Feb 25 08:00:21 np0005629333 nova_compute[244014]: 2026-02-25 13:00:21.842 244018 DEBUG oslo_concurrency.lockutils [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:00:21 np0005629333 nova_compute[244014]: 2026-02-25 13:00:21.843 244018 DEBUG oslo_concurrency.lockutils [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:00:21 np0005629333 nova_compute[244014]: 2026-02-25 13:00:21.846 244018 DEBUG nova.compute.manager [req-ca1b2299-21d4-40e7-966b-37b12b17162e req-e026ea22-2325-4378-a373-9537c6283bc5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Received event network-vif-deleted-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 08:00:21 np0005629333 nova_compute[244014]: 2026-02-25 13:00:21.908 244018 DEBUG oslo_concurrency.processutils [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
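[editor's note] Nova's RBD image backend sizes its disk inventory by shelling out to `ceph df`, which is the subprocess logged above (the reply arrives ~0.6 s later, further down). A sketch of the same call under the logged flags; the "stats" keys follow the standard `ceph df --format=json` layout:

```python
# Sketch: run the exact command from the log and pull cluster usage out of
# the JSON reply.
import json
import subprocess

cmd = ["ceph", "df", "--format=json",
       "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
df = json.loads(out)

total = df["stats"]["total_bytes"]
avail = df["stats"]["total_avail_bytes"]
print(f"cluster: {avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")
```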
Feb 25 08:00:21 np0005629333 nova_compute[244014]: 2026-02-25 13:00:21.989 244018 DEBUG nova.network.neutron [req-122fc605-aa65-4e23-8dc5-78394d0010ac req-64dc8962-a33c-4d70-a93d-1c6a322658ab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Updated VIF entry in instance network info cache for port 5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 08:00:21 np0005629333 nova_compute[244014]: 2026-02-25 13:00:21.990 244018 DEBUG nova.network.neutron [req-122fc605-aa65-4e23-8dc5-78394d0010ac req-64dc8962-a33c-4d70-a93d-1c6a322658ab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Updating instance_info_cache with network_info: [{"id": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "address": "fa:16:3e:db:fb:af", "network": {"id": "62bbaf1c-560e-4f11-b053-43d27fe35ba2", "bridge": "br-int", "label": "tempest-network-smoke--661671396", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b9c40d0-fe", "ovs_interfaceid": "5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 08:00:22 np0005629333 nova_compute[244014]: 2026-02-25 13:00:22.012 244018 DEBUG oslo_concurrency.lockutils [req-122fc605-aa65-4e23-8dc5-78394d0010ac req-64dc8962-a33c-4d70-a93d-1c6a322658ab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-fc7d7f86-eb7c-476c-840e-98c97329de34" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 08:00:22 np0005629333 nova_compute[244014]: 2026-02-25 13:00:22.013 244018 DEBUG nova.compute.manager [req-122fc605-aa65-4e23-8dc5-78394d0010ac req-64dc8962-a33c-4d70-a93d-1c6a322658ab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Received event network-vif-unplugged-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 08:00:22 np0005629333 nova_compute[244014]: 2026-02-25 13:00:22.013 244018 DEBUG oslo_concurrency.lockutils [req-122fc605-aa65-4e23-8dc5-78394d0010ac req-64dc8962-a33c-4d70-a93d-1c6a322658ab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:00:22 np0005629333 nova_compute[244014]: 2026-02-25 13:00:22.013 244018 DEBUG oslo_concurrency.lockutils [req-122fc605-aa65-4e23-8dc5-78394d0010ac req-64dc8962-a33c-4d70-a93d-1c6a322658ab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:00:22 np0005629333 nova_compute[244014]: 2026-02-25 13:00:22.013 244018 DEBUG oslo_concurrency.lockutils [req-122fc605-aa65-4e23-8dc5-78394d0010ac req-64dc8962-a33c-4d70-a93d-1c6a322658ab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:00:22 np0005629333 nova_compute[244014]: 2026-02-25 13:00:22.014 244018 DEBUG nova.compute.manager [req-122fc605-aa65-4e23-8dc5-78394d0010ac req-64dc8962-a33c-4d70-a93d-1c6a322658ab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] No waiting events found dispatching network-vif-unplugged-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 08:00:22 np0005629333 nova_compute[244014]: 2026-02-25 13:00:22.014 244018 DEBUG nova.compute.manager [req-122fc605-aa65-4e23-8dc5-78394d0010ac req-64dc8962-a33c-4d70-a93d-1c6a322658ab 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Received event network-vif-unplugged-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 25 08:00:22 np0005629333 nova_compute[244014]: 2026-02-25 13:00:22.086 244018 DEBUG nova.compute.manager [None req-5a04c5ce-6502-44f9-8ca1-ae3cf3453291 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 08:00:22 np0005629333 jovial_proskuriakova[375880]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:00:22 np0005629333 jovial_proskuriakova[375880]: --> All data devices are unavailable
Feb 25 08:00:22 np0005629333 systemd[1]: libpod-a872d4d2a3acfe89105ece074e542f5403706406295ebbc8bcd8085908948385.scope: Deactivated successfully.
Feb 25 08:00:22 np0005629333 nova_compute[244014]: 2026-02-25 13:00:22.138 244018 INFO nova.compute.manager [None req-5a04c5ce-6502-44f9-8ca1-ae3cf3453291 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] instance snapshotting
Feb 25 08:00:22 np0005629333 podman[375920]: 2026-02-25 13:00:22.162252418 +0000 UTC m=+0.024880743 container died a872d4d2a3acfe89105ece074e542f5403706406295ebbc8bcd8085908948385 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_proskuriakova, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 25 08:00:22 np0005629333 systemd[1]: var-lib-containers-storage-overlay-dd31e84eee565bae2034566e17733d6392359be72829b5d6dea6a847df343293-merged.mount: Deactivated successfully.
Feb 25 08:00:22 np0005629333 podman[375920]: 2026-02-25 13:00:22.257076362 +0000 UTC m=+0.119704637 container remove a872d4d2a3acfe89105ece074e542f5403706406295ebbc8bcd8085908948385 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_proskuriakova, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Feb 25 08:00:22 np0005629333 systemd[1]: libpod-conmon-a872d4d2a3acfe89105ece074e542f5403706406295ebbc8bcd8085908948385.scope: Deactivated successfully.
Feb 25 08:00:22 np0005629333 nova_compute[244014]: 2026-02-25 13:00:22.350 244018 INFO nova.virt.libvirt.driver [None req-5a04c5ce-6502-44f9-8ca1-ae3cf3453291 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Beginning live snapshot process
Feb 25 08:00:22 np0005629333 nova_compute[244014]: 2026-02-25 13:00:22.484 244018 DEBUG nova.virt.libvirt.imagebackend [None req-5a04c5ce-6502-44f9-8ca1-ae3cf3453291 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] No parent info for c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Feb 25 08:00:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:00:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/626962576' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:00:22 np0005629333 nova_compute[244014]: 2026-02-25 13:00:22.512 244018 DEBUG oslo_concurrency.processutils [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:00:22 np0005629333 nova_compute[244014]: 2026-02-25 13:00:22.518 244018 DEBUG nova.compute.provider_tree [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:00:22 np0005629333 nova_compute[244014]: 2026-02-25 13:00:22.540 244018 DEBUG nova.scheduler.client.report [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
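[editor's note] The inventory dict above encodes schedulable capacity: Placement treats usable capacity as (total − reserved) × allocation_ratio per resource class, so this node advertises 32 vCPUs (8 × 4.0), 7167 MB of RAM (7679 − 512 at ratio 1.0), and 52.2 GB of disk ((59 − 1) × 0.9). A quick worked check with the values copied from the log:

```python
# Worked check of the inventory line above. Capacity per resource class is
# (total - reserved) * allocation_ratio.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable")
# VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
```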
Feb 25 08:00:22 np0005629333 nova_compute[244014]: 2026-02-25 13:00:22.559 244018 DEBUG oslo_concurrency.lockutils [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:00:22 np0005629333 nova_compute[244014]: 2026-02-25 13:00:22.584 244018 INFO nova.scheduler.client.report [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Deleted allocations for instance fc7d7f86-eb7c-476c-840e-98c97329de34
Feb 25 08:00:22 np0005629333 nova_compute[244014]: 2026-02-25 13:00:22.640 244018 DEBUG oslo_concurrency.lockutils [None req-94b20fa3-8cbf-4618-91e7-8f433067056c ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.392s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:00:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:22.642 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 08:00:22 np0005629333 nova_compute[244014]: 2026-02-25 13:00:22.662 244018 DEBUG nova.storage.rbd_utils [None req-5a04c5ce-6502-44f9-8ca1-ae3cf3453291 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] creating snapshot(0548b3c2c0a14dbf963fcd1dadaa89d0) on rbd image(8d338640-2b5f-4571-8f76-b523064ee129_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Feb 25 08:00:22 np0005629333 podman[376051]: 2026-02-25 13:00:22.758794585 +0000 UTC m=+0.055337622 container create b53aefff2e1fb5c563997560f1b0626e39d4aa43d636edaf5b0b2d4650119809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dubinsky, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:00:22 np0005629333 systemd[1]: Started libpod-conmon-b53aefff2e1fb5c563997560f1b0626e39d4aa43d636edaf5b0b2d4650119809.scope.
Feb 25 08:00:22 np0005629333 podman[376051]: 2026-02-25 13:00:22.732619237 +0000 UTC m=+0.029162294 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:00:22 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:00:22 np0005629333 podman[376051]: 2026-02-25 13:00:22.853644681 +0000 UTC m=+0.150187768 container init b53aefff2e1fb5c563997560f1b0626e39d4aa43d636edaf5b0b2d4650119809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 25 08:00:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2394: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 288 KiB/s rd, 2.2 MiB/s wr, 123 op/s
Feb 25 08:00:22 np0005629333 podman[376051]: 2026-02-25 13:00:22.861665137 +0000 UTC m=+0.158208174 container start b53aefff2e1fb5c563997560f1b0626e39d4aa43d636edaf5b0b2d4650119809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dubinsky, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 08:00:22 np0005629333 romantic_dubinsky[376068]: 167 167
Feb 25 08:00:22 np0005629333 systemd[1]: libpod-b53aefff2e1fb5c563997560f1b0626e39d4aa43d636edaf5b0b2d4650119809.scope: Deactivated successfully.
Feb 25 08:00:22 np0005629333 podman[376051]: 2026-02-25 13:00:22.87419077 +0000 UTC m=+0.170733887 container attach b53aefff2e1fb5c563997560f1b0626e39d4aa43d636edaf5b0b2d4650119809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dubinsky, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Feb 25 08:00:22 np0005629333 podman[376051]: 2026-02-25 13:00:22.875013194 +0000 UTC m=+0.171556261 container died b53aefff2e1fb5c563997560f1b0626e39d4aa43d636edaf5b0b2d4650119809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 08:00:22 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e18761e552bef6bf6bf4fec2bd65caed3de9b2dcbe435fdb036a4e852829d2c2-merged.mount: Deactivated successfully.
Feb 25 08:00:22 np0005629333 podman[376051]: 2026-02-25 13:00:22.93478429 +0000 UTC m=+0.231327347 container remove b53aefff2e1fb5c563997560f1b0626e39d4aa43d636edaf5b0b2d4650119809 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dubinsky, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 08:00:22 np0005629333 systemd[1]: libpod-conmon-b53aefff2e1fb5c563997560f1b0626e39d4aa43d636edaf5b0b2d4650119809.scope: Deactivated successfully.
Feb 25 08:00:22 np0005629333 nova_compute[244014]: 2026-02-25 13:00:22.946 244018 DEBUG nova.compute.manager [req-4e531fc1-0678-47f3-bf8b-23c18db08a3c req-5d2a1332-8017-44a7-a408-0b7f7cf3a51e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Received event network-vif-plugged-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 08:00:22 np0005629333 nova_compute[244014]: 2026-02-25 13:00:22.948 244018 DEBUG oslo_concurrency.lockutils [req-4e531fc1-0678-47f3-bf8b-23c18db08a3c req-5d2a1332-8017-44a7-a408-0b7f7cf3a51e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:00:22 np0005629333 nova_compute[244014]: 2026-02-25 13:00:22.948 244018 DEBUG oslo_concurrency.lockutils [req-4e531fc1-0678-47f3-bf8b-23c18db08a3c req-5d2a1332-8017-44a7-a408-0b7f7cf3a51e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:00:22 np0005629333 nova_compute[244014]: 2026-02-25 13:00:22.949 244018 DEBUG oslo_concurrency.lockutils [req-4e531fc1-0678-47f3-bf8b-23c18db08a3c req-5d2a1332-8017-44a7-a408-0b7f7cf3a51e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "fc7d7f86-eb7c-476c-840e-98c97329de34-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:00:22 np0005629333 nova_compute[244014]: 2026-02-25 13:00:22.949 244018 DEBUG nova.compute.manager [req-4e531fc1-0678-47f3-bf8b-23c18db08a3c req-5d2a1332-8017-44a7-a408-0b7f7cf3a51e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] No waiting events found dispatching network-vif-plugged-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 08:00:22 np0005629333 nova_compute[244014]: 2026-02-25 13:00:22.950 244018 WARNING nova.compute.manager [req-4e531fc1-0678-47f3-bf8b-23c18db08a3c req-5d2a1332-8017-44a7-a408-0b7f7cf3a51e 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Received unexpected event network-vif-plugged-5b9c40d0-fe09-4e2d-82aa-fe03755f4bf3 for instance with vm_state deleted and task_state None.
Feb 25 08:00:23 np0005629333 podman[376094]: 2026-02-25 13:00:23.114844629 +0000 UTC m=+0.061268959 container create 7d99ba1756274fa4ba616b748564604099fc3344590388648573420046cc3f53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:00:23 np0005629333 systemd[1]: Started libpod-conmon-7d99ba1756274fa4ba616b748564604099fc3344590388648573420046cc3f53.scope.
Feb 25 08:00:23 np0005629333 podman[376094]: 2026-02-25 13:00:23.086684405 +0000 UTC m=+0.033108785 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:00:23 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:00:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aef29fcdcc0175c44a613718f354cc7e0dd9654d9bedd6f43f32ecaeb1d2fa7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:00:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aef29fcdcc0175c44a613718f354cc7e0dd9654d9bedd6f43f32ecaeb1d2fa7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:00:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aef29fcdcc0175c44a613718f354cc7e0dd9654d9bedd6f43f32ecaeb1d2fa7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:00:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aef29fcdcc0175c44a613718f354cc7e0dd9654d9bedd6f43f32ecaeb1d2fa7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:00:23 np0005629333 podman[376094]: 2026-02-25 13:00:23.210491507 +0000 UTC m=+0.156915917 container init 7d99ba1756274fa4ba616b748564604099fc3344590388648573420046cc3f53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_yonath, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:00:23 np0005629333 podman[376094]: 2026-02-25 13:00:23.221444426 +0000 UTC m=+0.167868756 container start 7d99ba1756274fa4ba616b748564604099fc3344590388648573420046cc3f53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:00:23 np0005629333 podman[376094]: 2026-02-25 13:00:23.224513833 +0000 UTC m=+0.170938193 container attach 7d99ba1756274fa4ba616b748564604099fc3344590388648573420046cc3f53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]: {
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:    "0": [
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:        {
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "devices": [
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "/dev/loop3"
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            ],
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "lv_name": "ceph_lv0",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "lv_size": "21470642176",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "name": "ceph_lv0",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "tags": {
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.cluster_name": "ceph",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.crush_device_class": "",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.encrypted": "0",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.objectstore": "bluestore",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.osd_id": "0",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.type": "block",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.vdo": "0",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.with_tpm": "0"
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            },
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "type": "block",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "vg_name": "ceph_vg0"
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:        }
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:    ],
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:    "1": [
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:        {
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "devices": [
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "/dev/loop4"
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            ],
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "lv_name": "ceph_lv1",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "lv_size": "21470642176",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "name": "ceph_lv1",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "tags": {
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.cluster_name": "ceph",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.crush_device_class": "",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.encrypted": "0",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.objectstore": "bluestore",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.osd_id": "1",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.type": "block",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.vdo": "0",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.with_tpm": "0"
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            },
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "type": "block",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "vg_name": "ceph_vg1"
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:        }
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:    ],
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:    "2": [
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:        {
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "devices": [
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "/dev/loop5"
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            ],
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "lv_name": "ceph_lv2",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "lv_size": "21470642176",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "name": "ceph_lv2",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "tags": {
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.cluster_name": "ceph",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.crush_device_class": "",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.encrypted": "0",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.objectstore": "bluestore",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.osd_id": "2",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.type": "block",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.vdo": "0",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:                "ceph.with_tpm": "0"
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            },
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "type": "block",
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:            "vg_name": "ceph_vg2"
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:        }
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]:    ]
Feb 25 08:00:23 np0005629333 nervous_yonath[376110]: }
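[editor's note] The JSON blob above, relayed by the nervous_yonath helper container, has the shape of `ceph-volume lvm list --format json` output: a map of OSD id to its LVM logical volumes and ceph.* tags. A small sketch of reducing it back to a per-OSD summary; the parsing is generic, and the dump filename is a hypothetical:

```python
# Sketch: reduce the `ceph-volume lvm list --format json` blob above to an
# osd_id -> (lv_path, backing device, osd_fsid) summary.
import json

with open("ceph-volume-lvm-list.json") as f:  # hypothetical dump of the blob
    report = json.load(f)

for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
              f"fsid={tags['ceph.osd_fsid']}")
# osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 fsid=d19afe3c-...
```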
Feb 25 08:00:23 np0005629333 systemd[1]: libpod-7d99ba1756274fa4ba616b748564604099fc3344590388648573420046cc3f53.scope: Deactivated successfully.
Feb 25 08:00:23 np0005629333 podman[376094]: 2026-02-25 13:00:23.540508806 +0000 UTC m=+0.486933136 container died 7d99ba1756274fa4ba616b748564604099fc3344590388648573420046cc3f53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_yonath, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:00:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Feb 25 08:00:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Feb 25 08:00:23 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Feb 25 08:00:23 np0005629333 systemd[1]: var-lib-containers-storage-overlay-aef29fcdcc0175c44a613718f354cc7e0dd9654d9bedd6f43f32ecaeb1d2fa7f-merged.mount: Deactivated successfully.
Feb 25 08:00:23 np0005629333 podman[376094]: 2026-02-25 13:00:23.668329482 +0000 UTC m=+0.614753842 container remove 7d99ba1756274fa4ba616b748564604099fc3344590388648573420046cc3f53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_yonath, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:00:23 np0005629333 systemd[1]: libpod-conmon-7d99ba1756274fa4ba616b748564604099fc3344590388648573420046cc3f53.scope: Deactivated successfully.
Feb 25 08:00:23 np0005629333 nova_compute[244014]: 2026-02-25 13:00:23.683 244018 DEBUG nova.storage.rbd_utils [None req-5a04c5ce-6502-44f9-8ca1-ae3cf3453291 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] cloning vms/8d338640-2b5f-4571-8f76-b523064ee129_disk@0548b3c2c0a14dbf963fcd1dadaa89d0 to images/7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 25 08:00:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:00:23 np0005629333 nova_compute[244014]: 2026-02-25 13:00:23.840 244018 DEBUG nova.storage.rbd_utils [None req-5a04c5ce-6502-44f9-8ca1-ae3cf3453291 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] flattening images/7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb 25 08:00:24 np0005629333 podman[376248]: 2026-02-25 13:00:24.229348388 +0000 UTC m=+0.081064768 container create ff0d4b3dee5f6d2b7fdaded3b0096284f669feb163e2fd3c0de343f775d0cc84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gagarin, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:00:24 np0005629333 podman[376248]: 2026-02-25 13:00:24.193487116 +0000 UTC m=+0.045203546 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:00:24 np0005629333 systemd[1]: Started libpod-conmon-ff0d4b3dee5f6d2b7fdaded3b0096284f669feb163e2fd3c0de343f775d0cc84.scope.
Feb 25 08:00:24 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:00:24 np0005629333 podman[376248]: 2026-02-25 13:00:24.514834281 +0000 UTC m=+0.366550671 container init ff0d4b3dee5f6d2b7fdaded3b0096284f669feb163e2fd3c0de343f775d0cc84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gagarin, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:00:24 np0005629333 podman[376248]: 2026-02-25 13:00:24.522319042 +0000 UTC m=+0.374035422 container start ff0d4b3dee5f6d2b7fdaded3b0096284f669feb163e2fd3c0de343f775d0cc84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 08:00:24 np0005629333 pedantic_gagarin[376265]: 167 167
Feb 25 08:00:24 np0005629333 systemd[1]: libpod-ff0d4b3dee5f6d2b7fdaded3b0096284f669feb163e2fd3c0de343f775d0cc84.scope: Deactivated successfully.
Feb 25 08:00:24 np0005629333 podman[376248]: 2026-02-25 13:00:24.701463755 +0000 UTC m=+0.553180105 container attach ff0d4b3dee5f6d2b7fdaded3b0096284f669feb163e2fd3c0de343f775d0cc84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gagarin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:00:24 np0005629333 podman[376248]: 2026-02-25 13:00:24.70232424 +0000 UTC m=+0.554040610 container died ff0d4b3dee5f6d2b7fdaded3b0096284f669feb163e2fd3c0de343f775d0cc84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:00:24 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ea83933d114ee582d4f9ade2ec8c154f26eba861f13f9c07a8afec710fdfd25b-merged.mount: Deactivated successfully.
Feb 25 08:00:24 np0005629333 podman[376248]: 2026-02-25 13:00:24.811928361 +0000 UTC m=+0.663644721 container remove ff0d4b3dee5f6d2b7fdaded3b0096284f669feb163e2fd3c0de343f775d0cc84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:00:24 np0005629333 systemd[1]: libpod-conmon-ff0d4b3dee5f6d2b7fdaded3b0096284f669feb163e2fd3c0de343f775d0cc84.scope: Deactivated successfully.
Feb 25 08:00:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2396: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 256 KiB/s rd, 458 KiB/s wr, 110 op/s
Feb 25 08:00:24 np0005629333 nova_compute[244014]: 2026-02-25 13:00:24.862 244018 DEBUG nova.storage.rbd_utils [None req-5a04c5ce-6502-44f9-8ca1-ae3cf3453291 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] removing snapshot(0548b3c2c0a14dbf963fcd1dadaa89d0) on rbd image(8d338640-2b5f-4571-8f76-b523064ee129_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 25 08:00:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 08:00:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.6 total, 600.0 interval#012Cumulative writes: 35K writes, 142K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s#012Cumulative WAL: 35K writes, 12K syncs, 2.81 writes per sync, written: 0.14 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3752 writes, 14K keys, 3752 commit groups, 1.0 writes per commit group, ingest: 17.80 MB, 0.03 MB/s#012Interval WAL: 3752 writes, 1485 syncs, 2.53 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 08:00:25 np0005629333 podman[376308]: 2026-02-25 13:00:25.027443291 +0000 UTC m=+0.072098525 container create a0136e579afa398bd2c9909435a25ddb924ba4af52176bd55d4a1e4a61312f0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_heyrovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Feb 25 08:00:25 np0005629333 podman[376308]: 2026-02-25 13:00:24.979880329 +0000 UTC m=+0.024535563 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:00:25 np0005629333 systemd[1]: Started libpod-conmon-a0136e579afa398bd2c9909435a25ddb924ba4af52176bd55d4a1e4a61312f0e.scope.
Feb 25 08:00:25 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:00:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6776945804d9c1e74ed5a87f858a6e9dd6b825a00438677a28711d343aaa96c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:00:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6776945804d9c1e74ed5a87f858a6e9dd6b825a00438677a28711d343aaa96c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:00:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6776945804d9c1e74ed5a87f858a6e9dd6b825a00438677a28711d343aaa96c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:00:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6776945804d9c1e74ed5a87f858a6e9dd6b825a00438677a28711d343aaa96c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:00:25 np0005629333 podman[376308]: 2026-02-25 13:00:25.236630392 +0000 UTC m=+0.281285616 container init a0136e579afa398bd2c9909435a25ddb924ba4af52176bd55d4a1e4a61312f0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_heyrovsky, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:00:25 np0005629333 podman[376308]: 2026-02-25 13:00:25.245499622 +0000 UTC m=+0.290154846 container start a0136e579afa398bd2c9909435a25ddb924ba4af52176bd55d4a1e4a61312f0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_heyrovsky, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:00:25 np0005629333 podman[376308]: 2026-02-25 13:00:25.252644593 +0000 UTC m=+0.297299797 container attach a0136e579afa398bd2c9909435a25ddb924ba4af52176bd55d4a1e4a61312f0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:00:25 np0005629333 nova_compute[244014]: 2026-02-25 13:00:25.503 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:25 np0005629333 nova_compute[244014]: 2026-02-25 13:00:25.614 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Feb 25 08:00:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Feb 25 08:00:25 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Feb 25 08:00:25 np0005629333 nova_compute[244014]: 2026-02-25 13:00:25.796 244018 DEBUG nova.storage.rbd_utils [None req-5a04c5ce-6502-44f9-8ca1-ae3cf3453291 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] creating snapshot(snap) on rbd image(7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 25 08:00:25 np0005629333 ovn_controller[147040]: 2026-02-25T13:00:25Z|01546|binding|INFO|Releasing lport bba4cbca-ca61-4422-a903-61bd05b8ebd6 from this chassis (sb_readonly=0)
Feb 25 08:00:25 np0005629333 nova_compute[244014]: 2026-02-25 13:00:25.875 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:25 np0005629333 lvm[376424]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:00:25 np0005629333 lvm[376423]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:00:25 np0005629333 lvm[376420]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:00:25 np0005629333 lvm[376420]: VG ceph_vg0 finished
Feb 25 08:00:25 np0005629333 lvm[376424]: VG ceph_vg2 finished
Feb 25 08:00:25 np0005629333 lvm[376423]: VG ceph_vg1 finished
Feb 25 08:00:26 np0005629333 brave_heyrovsky[376325]: {}
Feb 25 08:00:26 np0005629333 systemd[1]: libpod-a0136e579afa398bd2c9909435a25ddb924ba4af52176bd55d4a1e4a61312f0e.scope: Deactivated successfully.
Feb 25 08:00:26 np0005629333 systemd[1]: libpod-a0136e579afa398bd2c9909435a25ddb924ba4af52176bd55d4a1e4a61312f0e.scope: Consumed 1.122s CPU time.
Feb 25 08:00:26 np0005629333 podman[376308]: 2026-02-25 13:00:26.084087836 +0000 UTC m=+1.128743070 container died a0136e579afa398bd2c9909435a25ddb924ba4af52176bd55d4a1e4a61312f0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_heyrovsky, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 08:00:26 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6776945804d9c1e74ed5a87f858a6e9dd6b825a00438677a28711d343aaa96c5-merged.mount: Deactivated successfully.
Feb 25 08:00:26 np0005629333 podman[376308]: 2026-02-25 13:00:26.140969181 +0000 UTC m=+1.185624405 container remove a0136e579afa398bd2c9909435a25ddb924ba4af52176bd55d4a1e4a61312f0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_heyrovsky, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:00:26 np0005629333 systemd[1]: libpod-conmon-a0136e579afa398bd2c9909435a25ddb924ba4af52176bd55d4a1e4a61312f0e.scope: Deactivated successfully.
Feb 25 08:00:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:00:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:00:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:00:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:00:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Feb 25 08:00:26 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:00:26 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:00:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Feb 25 08:00:26 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Feb 25 08:00:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2399: 305 pgs: 305 active+clean; 233 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 64 KiB/s wr, 66 op/s
Feb 25 08:00:27 np0005629333 ceph-mgr[76641]: [devicehealth INFO root] Check health
Feb 25 08:00:28 np0005629333 nova_compute[244014]: 2026-02-25 13:00:28.016 244018 INFO nova.virt.libvirt.driver [None req-5a04c5ce-6502-44f9-8ca1-ae3cf3453291 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Snapshot image upload complete#033[00m
Feb 25 08:00:28 np0005629333 nova_compute[244014]: 2026-02-25 13:00:28.017 244018 INFO nova.compute.manager [None req-5a04c5ce-6502-44f9-8ca1-ae3cf3453291 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Took 5.88 seconds to snapshot the instance on the hypervisor.#033[00m
Feb 25 08:00:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:00:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2400: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 153 op/s
Feb 25 08:00:28 np0005629333 nova_compute[244014]: 2026-02-25 13:00:28.990 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:29 np0005629333 nova_compute[244014]: 2026-02-25 13:00:29.949 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024414.947953, b0014972-9433-4648-951c-bd9a210b6a69 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:00:29 np0005629333 nova_compute[244014]: 2026-02-25 13:00:29.949 244018 INFO nova.compute.manager [-] [instance: b0014972-9433-4648-951c-bd9a210b6a69] VM Stopped (Lifecycle Event)#033[00m
Feb 25 08:00:29 np0005629333 nova_compute[244014]: 2026-02-25 13:00:29.983 244018 DEBUG nova.compute.manager [None req-bc5bbe2d-2591-4c66-b566-dddcb6bb4606 - - - - - -] [instance: b0014972-9433-4648-951c-bd9a210b6a69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:00:30 np0005629333 nova_compute[244014]: 2026-02-25 13:00:30.506 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:30 np0005629333 nova_compute[244014]: 2026-02-25 13:00:30.616 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2401: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 6.4 MiB/s wr, 126 op/s
Feb 25 08:00:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:00:31
Feb 25 08:00:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:00:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:00:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['.mgr', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'vms', 'default.rgw.log', 'backups']
Feb 25 08:00:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:00:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:00:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:00:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:00:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:00:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:00:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:00:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:00:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:00:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:00:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:00:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:00:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:00:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:00:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:00:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:00:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:00:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2402: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 128 op/s
Feb 25 08:00:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:00:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Feb 25 08:00:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Feb 25 08:00:33 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Feb 25 08:00:33 np0005629333 nova_compute[244014]: 2026-02-25 13:00:33.912 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:00:33 np0005629333 nova_compute[244014]: 2026-02-25 13:00:33.912 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:00:33 np0005629333 nova_compute[244014]: 2026-02-25 13:00:33.927 244018 DEBUG nova.compute.manager [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 08:00:34 np0005629333 nova_compute[244014]: 2026-02-25 13:00:34.002 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:00:34 np0005629333 nova_compute[244014]: 2026-02-25 13:00:34.002 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:00:34 np0005629333 nova_compute[244014]: 2026-02-25 13:00:34.012 244018 DEBUG nova.virt.hardware [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 08:00:34 np0005629333 nova_compute[244014]: 2026-02-25 13:00:34.012 244018 INFO nova.compute.claims [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 08:00:34 np0005629333 nova_compute[244014]: 2026-02-25 13:00:34.144 244018 DEBUG oslo_concurrency.processutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:00:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:00:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3440491008' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:00:34 np0005629333 nova_compute[244014]: 2026-02-25 13:00:34.675 244018 DEBUG oslo_concurrency.processutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:00:34 np0005629333 nova_compute[244014]: 2026-02-25 13:00:34.682 244018 DEBUG nova.compute.provider_tree [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:00:34 np0005629333 nova_compute[244014]: 2026-02-25 13:00:34.714 244018 DEBUG nova.scheduler.client.report [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:00:34 np0005629333 nova_compute[244014]: 2026-02-25 13:00:34.744 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:00:34 np0005629333 nova_compute[244014]: 2026-02-25 13:00:34.746 244018 DEBUG nova.compute.manager [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 08:00:34 np0005629333 nova_compute[244014]: 2026-02-25 13:00:34.824 244018 DEBUG nova.compute.manager [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 08:00:34 np0005629333 nova_compute[244014]: 2026-02-25 13:00:34.825 244018 DEBUG nova.network.neutron [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 08:00:34 np0005629333 nova_compute[244014]: 2026-02-25 13:00:34.843 244018 INFO nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 08:00:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2404: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 5.8 MiB/s wr, 127 op/s
Feb 25 08:00:34 np0005629333 nova_compute[244014]: 2026-02-25 13:00:34.874 244018 DEBUG nova.compute.manager [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 08:00:34 np0005629333 nova_compute[244014]: 2026-02-25 13:00:34.981 244018 DEBUG nova.compute.manager [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 08:00:34 np0005629333 nova_compute[244014]: 2026-02-25 13:00:34.983 244018 DEBUG nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 08:00:34 np0005629333 nova_compute[244014]: 2026-02-25 13:00:34.984 244018 INFO nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Creating image(s)#033[00m
Feb 25 08:00:35 np0005629333 nova_compute[244014]: 2026-02-25 13:00:35.013 244018 DEBUG nova.storage.rbd_utils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] rbd image 0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:00:35 np0005629333 nova_compute[244014]: 2026-02-25 13:00:35.036 244018 DEBUG nova.storage.rbd_utils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] rbd image 0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:00:35 np0005629333 nova_compute[244014]: 2026-02-25 13:00:35.065 244018 DEBUG nova.storage.rbd_utils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] rbd image 0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:00:35 np0005629333 nova_compute[244014]: 2026-02-25 13:00:35.068 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "fa283b84f237165ed51c00dab3d16371bfe8f671" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:00:35 np0005629333 nova_compute[244014]: 2026-02-25 13:00:35.069 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "fa283b84f237165ed51c00dab3d16371bfe8f671" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:00:35 np0005629333 nova_compute[244014]: 2026-02-25 13:00:35.295 244018 DEBUG nova.virt.libvirt.imagebackend [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Image locations are: [{'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Feb 25 08:00:35 np0005629333 nova_compute[244014]: 2026-02-25 13:00:35.362 244018 DEBUG nova.virt.libvirt.imagebackend [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Selected location: {'url': 'rbd://8ac33163-6221-5d58-9a39-8b6933fe7762/images/7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Feb 25 08:00:35 np0005629333 nova_compute[244014]: 2026-02-25 13:00:35.363 244018 DEBUG nova.storage.rbd_utils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] cloning images/7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170@snap to None/0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 25 08:00:35 np0005629333 nova_compute[244014]: 2026-02-25 13:00:35.420 244018 DEBUG nova.policy [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7592542cdf7f423c86332695423dbe79', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cf5eb89ba0424237a313b1f369bcb92b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 08:00:35 np0005629333 nova_compute[244014]: 2026-02-25 13:00:35.473 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "fa283b84f237165ed51c00dab3d16371bfe8f671" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.404s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:00:35 np0005629333 nova_compute[244014]: 2026-02-25 13:00:35.525 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:35 np0005629333 nova_compute[244014]: 2026-02-25 13:00:35.527 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024420.4781866, fc7d7f86-eb7c-476c-840e-98c97329de34 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:00:35 np0005629333 nova_compute[244014]: 2026-02-25 13:00:35.528 244018 INFO nova.compute.manager [-] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] VM Stopped (Lifecycle Event)#033[00m
Feb 25 08:00:35 np0005629333 nova_compute[244014]: 2026-02-25 13:00:35.586 244018 DEBUG nova.compute.manager [None req-87618ec9-fd58-470b-88f3-4cee70c77b38 - - - - - -] [instance: fc7d7f86-eb7c-476c-840e-98c97329de34] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:00:35 np0005629333 nova_compute[244014]: 2026-02-25 13:00:35.640 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:35 np0005629333 nova_compute[244014]: 2026-02-25 13:00:35.649 244018 DEBUG nova.objects.instance [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lazy-loading 'migration_context' on Instance uuid 0e4f3bd8-9df3-4329-afdc-27d91b98810d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 08:00:35 np0005629333 nova_compute[244014]: 2026-02-25 13:00:35.673 244018 DEBUG nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 08:00:35 np0005629333 nova_compute[244014]: 2026-02-25 13:00:35.674 244018 DEBUG nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Ensure instance console log exists: /var/lib/nova/instances/0e4f3bd8-9df3-4329-afdc-27d91b98810d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 08:00:35 np0005629333 nova_compute[244014]: 2026-02-25 13:00:35.675 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:00:35 np0005629333 nova_compute[244014]: 2026-02-25 13:00:35.675 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:00:35 np0005629333 nova_compute[244014]: 2026-02-25 13:00:35.676 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:00:36 np0005629333 nova_compute[244014]: 2026-02-25 13:00:36.158 244018 DEBUG nova.network.neutron [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Successfully created port: 4f9273d8-a479-491a-bbac-087a11ac1a08 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 08:00:36 np0005629333 nova_compute[244014]: 2026-02-25 13:00:36.164 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2405: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 102 op/s
Feb 25 08:00:37 np0005629333 nova_compute[244014]: 2026-02-25 13:00:37.641 244018 DEBUG nova.network.neutron [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Successfully updated port: 4f9273d8-a479-491a-bbac-087a11ac1a08 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 08:00:37 np0005629333 nova_compute[244014]: 2026-02-25 13:00:37.662 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "refresh_cache-0e4f3bd8-9df3-4329-afdc-27d91b98810d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 08:00:37 np0005629333 nova_compute[244014]: 2026-02-25 13:00:37.663 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquired lock "refresh_cache-0e4f3bd8-9df3-4329-afdc-27d91b98810d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 08:00:37 np0005629333 nova_compute[244014]: 2026-02-25 13:00:37.663 244018 DEBUG nova.network.neutron [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 08:00:37 np0005629333 nova_compute[244014]: 2026-02-25 13:00:37.734 244018 DEBUG nova.compute.manager [req-ff6daa29-098f-448f-86a6-ac7b72a4d3f2 req-8193e8ba-bcce-49ba-8017-20deb33065ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Received event network-changed-4f9273d8-a479-491a-bbac-087a11ac1a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:00:37 np0005629333 nova_compute[244014]: 2026-02-25 13:00:37.734 244018 DEBUG nova.compute.manager [req-ff6daa29-098f-448f-86a6-ac7b72a4d3f2 req-8193e8ba-bcce-49ba-8017-20deb33065ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Refreshing instance network info cache due to event network-changed-4f9273d8-a479-491a-bbac-087a11ac1a08. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 08:00:37 np0005629333 nova_compute[244014]: 2026-02-25 13:00:37.734 244018 DEBUG oslo_concurrency.lockutils [req-ff6daa29-098f-448f-86a6-ac7b72a4d3f2 req-8193e8ba-bcce-49ba-8017-20deb33065ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-0e4f3bd8-9df3-4329-afdc-27d91b98810d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 08:00:37 np0005629333 nova_compute[244014]: 2026-02-25 13:00:37.832 244018 DEBUG nova.network.neutron [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 08:00:37 np0005629333 nova_compute[244014]: 2026-02-25 13:00:37.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:00:37 np0005629333 nova_compute[244014]: 2026-02-25 13:00:37.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:00:37 np0005629333 nova_compute[244014]: 2026-02-25 13:00:37.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:00:37 np0005629333 nova_compute[244014]: 2026-02-25 13:00:37.895 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Feb 25 08:00:38 np0005629333 nova_compute[244014]: 2026-02-25 13:00:38.497 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 08:00:38 np0005629333 nova_compute[244014]: 2026-02-25 13:00:38.497 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 08:00:38 np0005629333 nova_compute[244014]: 2026-02-25 13:00:38.498 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 25 08:00:38 np0005629333 nova_compute[244014]: 2026-02-25 13:00:38.498 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8d338640-2b5f-4571-8f76-b523064ee129 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.796824) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024438796851, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 1818, "num_deletes": 257, "total_data_size": 2889250, "memory_usage": 2934024, "flush_reason": "Manual Compaction"}
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024438851291, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 2837246, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49673, "largest_seqno": 51490, "table_properties": {"data_size": 2828935, "index_size": 5061, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17301, "raw_average_key_size": 20, "raw_value_size": 2812148, "raw_average_value_size": 3251, "num_data_blocks": 225, "num_entries": 865, "num_filter_entries": 865, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772024254, "oldest_key_time": 1772024254, "file_creation_time": 1772024438, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 54550 microseconds, and 4439 cpu microseconds.
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.851365) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 2837246 bytes OK
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.851391) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.859757) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.859782) EVENT_LOG_v1 {"time_micros": 1772024438859774, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.859804) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 2881427, prev total WAL file size 2881427, number of live WAL files 2.
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.860712) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303134' seq:72057594037927935, type:22 .. '6C6F676D0032323636' seq:0, type:0; will stop at (end)
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(2770KB)], [116(7952KB)]
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024438860754, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 10981091, "oldest_snapshot_seqno": -1}
Feb 25 08:00:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2406: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.1 KiB/s wr, 43 op/s
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 7305 keys, 10857741 bytes, temperature: kUnknown
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024438935816, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 10857741, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10808436, "index_size": 29929, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18309, "raw_key_size": 189626, "raw_average_key_size": 25, "raw_value_size": 10677768, "raw_average_value_size": 1461, "num_data_blocks": 1178, "num_entries": 7305, "num_filter_entries": 7305, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772024438, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.936230) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 10857741 bytes
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.937510) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.1 rd, 144.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 7.8 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(7.7) write-amplify(3.8) OK, records in: 7835, records dropped: 530 output_compression: NoCompression
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.937611) EVENT_LOG_v1 {"time_micros": 1772024438937595, "job": 70, "event": "compaction_finished", "compaction_time_micros": 75161, "compaction_time_cpu_micros": 27607, "output_level": 6, "num_output_files": 1, "total_output_size": 10857741, "num_input_records": 7835, "num_output_records": 7305, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024438938380, "job": 70, "event": "table_file_deletion", "file_number": 118}
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024438940124, "job": 70, "event": "table_file_deletion", "file_number": 116}
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.860555) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.940246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.940252) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.940254) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.940256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:00:38 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:00:38.940258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:00:39 np0005629333 nova_compute[244014]: 2026-02-25 13:00:39.467 244018 DEBUG nova.network.neutron [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Updating instance_info_cache with network_info: [{"id": "4f9273d8-a479-491a-bbac-087a11ac1a08", "address": "fa:16:3e:f0:48:d9", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f9273d8-a4", "ovs_interfaceid": "4f9273d8-a479-491a-bbac-087a11ac1a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:00:39 np0005629333 nova_compute[244014]: 2026-02-25 13:00:39.490 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Releasing lock "refresh_cache-0e4f3bd8-9df3-4329-afdc-27d91b98810d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 08:00:39 np0005629333 nova_compute[244014]: 2026-02-25 13:00:39.491 244018 DEBUG nova.compute.manager [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Instance network_info: |[{"id": "4f9273d8-a479-491a-bbac-087a11ac1a08", "address": "fa:16:3e:f0:48:d9", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f9273d8-a4", "ovs_interfaceid": "4f9273d8-a479-491a-bbac-087a11ac1a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 08:00:39 np0005629333 nova_compute[244014]: 2026-02-25 13:00:39.491 244018 DEBUG oslo_concurrency.lockutils [req-ff6daa29-098f-448f-86a6-ac7b72a4d3f2 req-8193e8ba-bcce-49ba-8017-20deb33065ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-0e4f3bd8-9df3-4329-afdc-27d91b98810d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 08:00:39 np0005629333 nova_compute[244014]: 2026-02-25 13:00:39.492 244018 DEBUG nova.network.neutron [req-ff6daa29-098f-448f-86a6-ac7b72a4d3f2 req-8193e8ba-bcce-49ba-8017-20deb33065ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Refreshing network info cache for port 4f9273d8-a479-491a-bbac-087a11ac1a08 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 08:00:39 np0005629333 nova_compute[244014]: 2026-02-25 13:00:39.497 244018 DEBUG nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Start _get_guest_xml network_info=[{"id": "4f9273d8-a479-491a-bbac-087a11ac1a08", "address": "fa:16:3e:f0:48:d9", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f9273d8-a4", "ovs_interfaceid": "4f9273d8-a479-491a-bbac-087a11ac1a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-02-25T13:00:21Z,direct_url=<?>,disk_format='raw',id=7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-1341513254',owner='cf5eb89ba0424237a313b1f369bcb92b',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-25T13:00:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': '7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 08:00:39 np0005629333 nova_compute[244014]: 2026-02-25 13:00:39.503 244018 WARNING nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:00:39 np0005629333 nova_compute[244014]: 2026-02-25 13:00:39.512 244018 DEBUG nova.virt.libvirt.host [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 08:00:39 np0005629333 nova_compute[244014]: 2026-02-25 13:00:39.513 244018 DEBUG nova.virt.libvirt.host [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 08:00:39 np0005629333 nova_compute[244014]: 2026-02-25 13:00:39.524 244018 DEBUG nova.virt.libvirt.host [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 08:00:39 np0005629333 nova_compute[244014]: 2026-02-25 13:00:39.525 244018 DEBUG nova.virt.libvirt.host [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 08:00:39 np0005629333 nova_compute[244014]: 2026-02-25 13:00:39.526 244018 DEBUG nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 08:00:39 np0005629333 nova_compute[244014]: 2026-02-25 13:00:39.526 244018 DEBUG nova.virt.hardware [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-02-25T13:00:21Z,direct_url=<?>,disk_format='raw',id=7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-1341513254',owner='cf5eb89ba0424237a313b1f369bcb92b',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-25T13:00:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 08:00:39 np0005629333 nova_compute[244014]: 2026-02-25 13:00:39.527 244018 DEBUG nova.virt.hardware [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 08:00:39 np0005629333 nova_compute[244014]: 2026-02-25 13:00:39.528 244018 DEBUG nova.virt.hardware [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 08:00:39 np0005629333 nova_compute[244014]: 2026-02-25 13:00:39.529 244018 DEBUG nova.virt.hardware [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 08:00:39 np0005629333 nova_compute[244014]: 2026-02-25 13:00:39.529 244018 DEBUG nova.virt.hardware [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 08:00:39 np0005629333 nova_compute[244014]: 2026-02-25 13:00:39.530 244018 DEBUG nova.virt.hardware [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 08:00:39 np0005629333 nova_compute[244014]: 2026-02-25 13:00:39.530 244018 DEBUG nova.virt.hardware [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 08:00:39 np0005629333 nova_compute[244014]: 2026-02-25 13:00:39.531 244018 DEBUG nova.virt.hardware [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 08:00:39 np0005629333 nova_compute[244014]: 2026-02-25 13:00:39.532 244018 DEBUG nova.virt.hardware [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 08:00:39 np0005629333 nova_compute[244014]: 2026-02-25 13:00:39.532 244018 DEBUG nova.virt.hardware [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 08:00:39 np0005629333 nova_compute[244014]: 2026-02-25 13:00:39.533 244018 DEBUG nova.virt.hardware [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 08:00:39 np0005629333 nova_compute[244014]: 2026-02-25 13:00:39.538 244018 DEBUG oslo_concurrency.processutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:00:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 08:00:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1019647448' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.071 244018 DEBUG oslo_concurrency.processutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.106 244018 DEBUG nova.storage.rbd_utils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] rbd image 0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.111 244018 DEBUG oslo_concurrency.processutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.511 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Updating instance_info_cache with network_info: [{"id": "e11a4d69-8666-420b-b13c-69a6427567a4", "address": "fa:16:3e:15:04:04", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape11a4d69-86", "ovs_interfaceid": "e11a4d69-8666-420b-b13c-69a6427567a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.528 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.540 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.540 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.542 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.566 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.567 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.567 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.568 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.568 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:00:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 08:00:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2455738330' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.620 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.634 244018 DEBUG oslo_concurrency.processutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.637 244018 DEBUG nova.virt.libvirt.vif [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T13:00:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1321825921',display_name='tempest-TestSnapshotPattern-server-1321825921',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1321825921',id=146,image_ref='7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDCWEdl3+ajGxJOkCeAxfCSENIRZwbPgzOnOZcpgV3Hlv4XmE2Mrj2poBzyZ23ScEZYFPY48VJJN3YnpuQrByWx+iTcrPbMvMX+80KQEtNnqR2GlURJo6gN1+GRlPtyo4Q==',key_name='tempest-TestSnapshotPattern-301066188',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cf5eb89ba0424237a313b1f369bcb92b',ramdisk_id='',reservation_id='r-1efchiu9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='8d338640-2b5f-4571-8f76-b523064ee129',image_min_disk='1',image_min_ram='0',image_owner_id='cf5eb89ba0424237a313b1f369bcb92b',image_owner_project_name='tempest-TestSnapshotPattern-1122231160',image_owner_user_name='tempest-TestSnapshotPattern-1122231160-project-member',image_user_id='7592542cdf7f423c86332695423dbe79',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1122231160',owner_user_name='tempest-TestSnapshotPattern-1122231160-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T13:00:34Z,user_data=None,user_id='7592542cdf7f423c86332695423dbe79',uuid=0e4f3bd8-9df3-4329-afdc-27d91b98810d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f9273d8-a479-491a-bbac-087a11ac1a08", "address": "fa:16:3e:f0:48:d9", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f9273d8-a4", "ovs_interfaceid": "4f9273d8-a479-491a-bbac-087a11ac1a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.638 244018 DEBUG nova.network.os_vif_util [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Converting VIF {"id": "4f9273d8-a479-491a-bbac-087a11ac1a08", "address": "fa:16:3e:f0:48:d9", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f9273d8-a4", "ovs_interfaceid": "4f9273d8-a479-491a-bbac-087a11ac1a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.639 244018 DEBUG nova.network.os_vif_util [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:48:d9,bridge_name='br-int',has_traffic_filtering=True,id=4f9273d8-a479-491a-bbac-087a11ac1a08,network=Network(4273798e-5f22-4d98-8d00-a22d1ea2c776),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f9273d8-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.641 244018 DEBUG nova.objects.instance [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lazy-loading 'pci_devices' on Instance uuid 0e4f3bd8-9df3-4329-afdc-27d91b98810d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.670 244018 DEBUG nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] End _get_guest_xml xml=<domain type="kvm">
Feb 25 08:00:40 np0005629333 nova_compute[244014]:  <uuid>0e4f3bd8-9df3-4329-afdc-27d91b98810d</uuid>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:  <name>instance-00000092</name>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestSnapshotPattern-server-1321825921</nova:name>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 13:00:39</nova:creationTime>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 08:00:40 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:        <nova:user uuid="7592542cdf7f423c86332695423dbe79">tempest-TestSnapshotPattern-1122231160-project-member</nova:user>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:        <nova:project uuid="cf5eb89ba0424237a313b1f369bcb92b">tempest-TestSnapshotPattern-1122231160</nova:project>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:        <nova:port uuid="4f9273d8-a479-491a-bbac-087a11ac1a08">
Feb 25 08:00:40 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <system>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <entry name="serial">0e4f3bd8-9df3-4329-afdc-27d91b98810d</entry>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <entry name="uuid">0e4f3bd8-9df3-4329-afdc-27d91b98810d</entry>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    </system>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:  <os>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:  </os>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:  <features>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:  </features>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:  </clock>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:  <devices>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk">
Feb 25 08:00:40 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      </source>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 08:00:40 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      </auth>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    </disk>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk.config">
Feb 25 08:00:40 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      </source>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 08:00:40 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      </auth>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    </disk>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:f0:48:d9"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <target dev="tap4f9273d8-a4"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    </interface>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/0e4f3bd8-9df3-4329-afdc-27d91b98810d/console.log" append="off"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    </serial>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <video>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    </video>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <input type="keyboard" bus="usb"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    </rng>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 08:00:40 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 08:00:40 np0005629333 nova_compute[244014]:  </devices>
Feb 25 08:00:40 np0005629333 nova_compute[244014]: </domain>
Feb 25 08:00:40 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.678 244018 DEBUG nova.compute.manager [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Preparing to wait for external event network-vif-plugged-4f9273d8-a479-491a-bbac-087a11ac1a08 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.678 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.678 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.679 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.679 244018 DEBUG nova.virt.libvirt.vif [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T13:00:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1321825921',display_name='tempest-TestSnapshotPattern-server-1321825921',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1321825921',id=146,image_ref='7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDCWEdl3+ajGxJOkCeAxfCSENIRZwbPgzOnOZcpgV3Hlv4XmE2Mrj2poBzyZ23ScEZYFPY48VJJN3YnpuQrByWx+iTcrPbMvMX+80KQEtNnqR2GlURJo6gN1+GRlPtyo4Q==',key_name='tempest-TestSnapshotPattern-301066188',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cf5eb89ba0424237a313b1f369bcb92b',ramdisk_id='',reservation_id='r-1efchiu9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='8d338640-2b5f-4571-8f76-b523064ee129',image_min_disk='1',image_min_ram='0',image_owner_id='cf5eb89ba0424237a313b1f369bcb92b',image_owner_project_name='tempest-TestSnapshotPattern-1122231160',image_owner_user_name='tempest-TestSnapshotPattern-1122231160-project-member',image_user_id='7592542cdf7f423c86332695423dbe79',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1122231160',owner_user_name='tempest-TestSnapshotPattern-1122231160-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T13:00:34Z,user_data=None,user_id='7592542cdf7f423c86332695423dbe79',uuid=0e4f3bd8-9df3-4329-afdc-27d91b98810d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f9273d8-a479-491a-bbac-087a11ac1a08", "address": "fa:16:3e:f0:48:d9", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f9273d8-a4", "ovs_interfaceid": "4f9273d8-a479-491a-bbac-087a11ac1a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.679 244018 DEBUG nova.network.os_vif_util [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Converting VIF {"id": "4f9273d8-a479-491a-bbac-087a11ac1a08", "address": "fa:16:3e:f0:48:d9", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f9273d8-a4", "ovs_interfaceid": "4f9273d8-a479-491a-bbac-087a11ac1a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.680 244018 DEBUG nova.network.os_vif_util [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:48:d9,bridge_name='br-int',has_traffic_filtering=True,id=4f9273d8-a479-491a-bbac-087a11ac1a08,network=Network(4273798e-5f22-4d98-8d00-a22d1ea2c776),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f9273d8-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.680 244018 DEBUG os_vif [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:48:d9,bridge_name='br-int',has_traffic_filtering=True,id=4f9273d8-a479-491a-bbac-087a11ac1a08,network=Network(4273798e-5f22-4d98-8d00-a22d1ea2c776),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f9273d8-a4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.681 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.681 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.682 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.685 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.685 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f9273d8-a4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.685 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4f9273d8-a4, col_values=(('external_ids', {'iface-id': '4f9273d8-a479-491a-bbac-087a11ac1a08', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f0:48:d9', 'vm-uuid': '0e4f3bd8-9df3-4329-afdc-27d91b98810d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.687 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:40 np0005629333 NetworkManager[49836]: <info>  [1772024440.6881] manager: (tap4f9273d8-a4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/636)
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.690 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.695 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.697 244018 INFO os_vif [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:48:d9,bridge_name='br-int',has_traffic_filtering=True,id=4f9273d8-a479-491a-bbac-087a11ac1a08,network=Network(4273798e-5f22-4d98-8d00-a22d1ea2c776),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f9273d8-a4')#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.762 244018 DEBUG nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.763 244018 DEBUG nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.763 244018 DEBUG nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] No VIF found with MAC fa:16:3e:f0:48:d9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.764 244018 INFO nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Using config drive#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.783 244018 DEBUG nova.storage.rbd_utils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] rbd image 0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:00:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2407: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.1 KiB/s wr, 43 op/s
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.983 244018 DEBUG nova.network.neutron [req-ff6daa29-098f-448f-86a6-ac7b72a4d3f2 req-8193e8ba-bcce-49ba-8017-20deb33065ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Updated VIF entry in instance network info cache for port 4f9273d8-a479-491a-bbac-087a11ac1a08. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 08:00:40 np0005629333 nova_compute[244014]: 2026-02-25 13:00:40.984 244018 DEBUG nova.network.neutron [req-ff6daa29-098f-448f-86a6-ac7b72a4d3f2 req-8193e8ba-bcce-49ba-8017-20deb33065ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Updating instance_info_cache with network_info: [{"id": "4f9273d8-a479-491a-bbac-087a11ac1a08", "address": "fa:16:3e:f0:48:d9", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f9273d8-a4", "ovs_interfaceid": "4f9273d8-a479-491a-bbac-087a11ac1a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.005 244018 DEBUG oslo_concurrency.lockutils [req-ff6daa29-098f-448f-86a6-ac7b72a4d3f2 req-8193e8ba-bcce-49ba-8017-20deb33065ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-0e4f3bd8-9df3-4329-afdc-27d91b98810d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 08:00:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:00:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/602947456' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.115 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.173 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.173 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.178 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.178 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.220 244018 INFO nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Creating config drive at /var/lib/nova/instances/0e4f3bd8-9df3-4329-afdc-27d91b98810d/disk.config#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.226 244018 DEBUG oslo_concurrency.processutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0e4f3bd8-9df3-4329-afdc-27d91b98810d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmprbjh_j77 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.266 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "e53615dc-61ae-4f86-a246-c36739b4d2ae" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.267 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.297 244018 DEBUG nova.compute.manager [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.367 244018 DEBUG oslo_concurrency.processutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0e4f3bd8-9df3-4329-afdc-27d91b98810d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmprbjh_j77" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.392 244018 DEBUG nova.storage.rbd_utils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] rbd image 0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.396 244018 DEBUG oslo_concurrency.processutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0e4f3bd8-9df3-4329-afdc-27d91b98810d/disk.config 0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.430 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.431 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.439 244018 DEBUG nova.virt.hardware [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.439 244018 INFO nova.compute.claims [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.497 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.498 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3308MB free_disk=59.941679435782135GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.498 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.533 244018 DEBUG nova.scheduler.client.report [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.550 244018 DEBUG nova.scheduler.client.report [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.551 244018 DEBUG nova.compute.provider_tree [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.552 244018 DEBUG oslo_concurrency.processutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0e4f3bd8-9df3-4329-afdc-27d91b98810d/disk.config 0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.553 244018 INFO nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Deleting local config drive /var/lib/nova/instances/0e4f3bd8-9df3-4329-afdc-27d91b98810d/disk.config because it was imported into RBD.#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.570 244018 DEBUG nova.scheduler.client.report [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.588 244018 DEBUG nova.scheduler.client.report [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 25 08:00:41 np0005629333 kernel: tap4f9273d8-a4: entered promiscuous mode
Feb 25 08:00:41 np0005629333 NetworkManager[49836]: <info>  [1772024441.5949] manager: (tap4f9273d8-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/637)
Feb 25 08:00:41 np0005629333 ovn_controller[147040]: 2026-02-25T13:00:41Z|01547|binding|INFO|Claiming lport 4f9273d8-a479-491a-bbac-087a11ac1a08 for this chassis.
Feb 25 08:00:41 np0005629333 ovn_controller[147040]: 2026-02-25T13:00:41Z|01548|binding|INFO|4f9273d8-a479-491a-bbac-087a11ac1a08: Claiming fa:16:3e:f0:48:d9 10.100.0.10
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.599 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:41 np0005629333 ovn_controller[147040]: 2026-02-25T13:00:41Z|01549|binding|INFO|Setting lport 4f9273d8-a479-491a-bbac-087a11ac1a08 ovn-installed in OVS
Feb 25 08:00:41 np0005629333 ovn_controller[147040]: 2026-02-25T13:00:41Z|01550|binding|INFO|Setting lport 4f9273d8-a479-491a-bbac-087a11ac1a08 up in Southbound
Feb 25 08:00:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:41.607 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:48:d9 10.100.0.10'], port_security=['fa:16:3e:f0:48:d9 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '0e4f3bd8-9df3-4329-afdc-27d91b98810d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cf5eb89ba0424237a313b1f369bcb92b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9876e36c-27f9-4526-a220-4685b5be270d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b451f093-c10b-47f6-9a8c-8892cb3ad351, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=4f9273d8-a479-491a-bbac-087a11ac1a08) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.607 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:41.610 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 4f9273d8-a479-491a-bbac-087a11ac1a08 in datapath 4273798e-5f22-4d98-8d00-a22d1ea2c776 bound to our chassis#033[00m
Feb 25 08:00:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:41.613 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4273798e-5f22-4d98-8d00-a22d1ea2c776#033[00m
Feb 25 08:00:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:41.627 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[767f5ca0-8e81-4474-a041-13f338a6f040]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:41 np0005629333 systemd-machined[210048]: New machine qemu-180-instance-00000092.
Feb 25 08:00:41 np0005629333 systemd[1]: Started Virtual Machine qemu-180-instance-00000092.
Feb 25 08:00:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:41.655 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f4362663-1d5c-4871-92c1-3938a270f9ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:41.661 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0f9dfa8d-1fcc-458d-a3eb-7d772e73d1e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:41 np0005629333 systemd-udevd[376827]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 08:00:41 np0005629333 NetworkManager[49836]: <info>  [1772024441.6832] device (tap4f9273d8-a4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 08:00:41 np0005629333 NetworkManager[49836]: <info>  [1772024441.6840] device (tap4f9273d8-a4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 08:00:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:41.691 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[97ef81ac-1ac3-460d-87eb-c8e46abd9787]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.694 244018 DEBUG oslo_concurrency.processutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:00:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:41.715 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8013257f-c0e3-4168-8527-d0fae538d0eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4273798e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:a3:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 453], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636995, 'reachable_time': 38262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376834, 'error': None, 'target': 'ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:41.732 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[8d0c001e-67d2-43f1-a084-849d503f9fb6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4273798e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 637005, 'tstamp': 637005}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376839, 'error': None, 'target': 'ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4273798e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 637008, 'tstamp': 637008}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376839, 'error': None, 'target': 'ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:41.734 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4273798e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.736 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:41.738 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4273798e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:00:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:41.738 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 08:00:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:41.738 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4273798e-50, col_values=(('external_ids', {'iface-id': 'bba4cbca-ca61-4422-a903-61bd05b8ebd6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:00:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:41.739 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.871 244018 DEBUG nova.compute.manager [req-7a617b28-a0f9-45bd-acf4-605537846295 req-901208ee-0a92-482d-a093-47a24d3dde9d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Received event network-vif-plugged-4f9273d8-a479-491a-bbac-087a11ac1a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.872 244018 DEBUG oslo_concurrency.lockutils [req-7a617b28-a0f9-45bd-acf4-605537846295 req-901208ee-0a92-482d-a093-47a24d3dde9d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.873 244018 DEBUG oslo_concurrency.lockutils [req-7a617b28-a0f9-45bd-acf4-605537846295 req-901208ee-0a92-482d-a093-47a24d3dde9d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.873 244018 DEBUG oslo_concurrency.lockutils [req-7a617b28-a0f9-45bd-acf4-605537846295 req-901208ee-0a92-482d-a093-47a24d3dde9d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:00:41 np0005629333 nova_compute[244014]: 2026-02-25 13:00:41.874 244018 DEBUG nova.compute.manager [req-7a617b28-a0f9-45bd-acf4-605537846295 req-901208ee-0a92-482d-a093-47a24d3dde9d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Processing event network-vif-plugged-4f9273d8-a479-491a-bbac-087a11ac1a08 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.015 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024442.0145745, 0e4f3bd8-9df3-4329-afdc-27d91b98810d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.015 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] VM Started (Lifecycle Event)#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.023 244018 DEBUG nova.compute.manager [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.028 244018 DEBUG nova.virt.libvirt.driver [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.031 244018 INFO nova.virt.libvirt.driver [-] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Instance spawned successfully.#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.032 244018 INFO nova.compute.manager [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Took 7.05 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.032 244018 DEBUG nova.compute.manager [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.045 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.053 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.111 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.111 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024442.01482, 0e4f3bd8-9df3-4329-afdc-27d91b98810d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.112 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] VM Paused (Lifecycle Event)#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.146 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.149 244018 INFO nova.compute.manager [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Took 8.18 seconds to build instance.#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.154 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024442.0263085, 0e4f3bd8-9df3-4329-afdc-27d91b98810d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.155 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] VM Resumed (Lifecycle Event)#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.190 244018 DEBUG oslo_concurrency.lockutils [None req-4ed2d892-86ca-46e4-8f37-8faadbd9e76d 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.278s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.194 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.198 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 08:00:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:00:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3098004259' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.296 244018 DEBUG oslo_concurrency.processutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.301 244018 DEBUG nova.compute.provider_tree [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.327 244018 DEBUG nova.scheduler.client.report [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.353 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.922s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.353 244018 DEBUG nova.compute.manager [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.356 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.419 244018 DEBUG nova.compute.manager [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.419 244018 DEBUG nova.network.neutron [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.597 244018 INFO nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.651 244018 DEBUG nova.compute.manager [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.657 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 8d338640-2b5f-4571-8f76-b523064ee129 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.658 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 0e4f3bd8-9df3-4329-afdc-27d91b98810d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.658 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance e53615dc-61ae-4f86-a246-c36739b4d2ae actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.658 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.659 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.736 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.771 244018 DEBUG nova.compute.manager [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.773 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.774 244018 INFO nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Creating image(s)#033[00m
Feb 25 08:00:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:00:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:00:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:00:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:00:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007768486318795051 of space, bias 1.0, pg target 0.23305458956385153 quantized to 32 (current 32)
Feb 25 08:00:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:00:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:00:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:00:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:00:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:00:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00325201735596192 of space, bias 1.0, pg target 0.975605206788576 quantized to 32 (current 32)
Feb 25 08:00:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:00:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3623253445439937e-06 of space, bias 4.0, pg target 0.0016347904134527923 quantized to 16 (current 16)
Feb 25 08:00:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:00:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:00:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:00:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:00:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:00:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:00:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:00:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:00:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:00:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.805 244018 DEBUG nova.storage.rbd_utils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image e53615dc-61ae-4f86-a246-c36739b4d2ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.831 244018 DEBUG nova.storage.rbd_utils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image e53615dc-61ae-4f86-a246-c36739b4d2ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:00:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2408: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 16 KiB/s wr, 51 op/s
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.865 244018 DEBUG nova.storage.rbd_utils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image e53615dc-61ae-4f86-a246-c36739b4d2ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.876 244018 DEBUG oslo_concurrency.processutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.912 244018 DEBUG nova.policy [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea895f651dd742a7b5eb2d63fb34641c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b9699483122f465084e3147e4904d13d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
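The failed policy check above is expected for this request: the context carries only the 'reader' and 'member' roles, and network:attach_external_network defaults to an admin-oriented rule, so nova simply will not attach external networks for this boot. A hypothetical re-check of that decision (is_authorized is an illustrative stand-in for oslo.policy's rule evaluation, and the admin-only default is an assumption):

    # credentials taken from the log line above
    creds = {"roles": ["reader", "member"], "is_admin": False}

    def is_authorized(creds):
        # stand-in for a rule like "rule:context_is_admin"
        return creds["is_admin"] or "admin" in creds["roles"]

    print(is_authorized(creds))   # False -> "Policy check ... failed"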
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.970 244018 DEBUG oslo_concurrency.processutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.971 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.971 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.971 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
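The acquire/release pair above (waited 0.000s, held 0.000s) is nova serializing base-image fetches per image hash: the base image was already cached, so the critical section returned immediately. A minimal sketch of the pattern, assuming oslo.concurrency is available (fetch_base is a hypothetical stand-in for fetch_func_sync):

    from oslo_concurrency import lockutils

    # the lock name is the base image's hash, so concurrent boots of the
    # same image serialize on the download while different images proceed
    @lockutils.synchronized("a63dc6dbb387022d47a8ca49bddcc4af2508a4d6")
    def fetch_base():
        # held 0.000s in the log: the base image already existed on disk,
        # so nothing had to be downloaded inside the critical section
        pass

    fetch_base()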
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.992 244018 DEBUG nova.storage.rbd_utils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image e53615dc-61ae-4f86-a246-c36739b4d2ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:00:42 np0005629333 nova_compute[244014]: 2026-02-25 13:00:42.996 244018 DEBUG oslo_concurrency.processutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 e53615dc-61ae-4f86-a246-c36739b4d2ae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:00:43 np0005629333 nova_compute[244014]: 2026-02-25 13:00:43.280 244018 DEBUG oslo_concurrency.processutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 e53615dc-61ae-4f86-a246-c36739b4d2ae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:00:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:00:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2475209737' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:00:43 np0005629333 nova_compute[244014]: 2026-02-25 13:00:43.342 244018 DEBUG nova.storage.rbd_utils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] resizing rbd image e53615dc-61ae-4f86-a246-c36739b4d2ae_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
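The lines above trace the whole clone path: probe the cached base with qemu-img info (wrapped in prlimit), rbd import it into the 'vms' pool, then resize the image to the flavor's root disk (root_gb=1, hence 1073741824 bytes). Nova performs the resize through the librbd Python bindings; a CLI-level sketch of the equivalent steps, with error handling omitted:

    import json, subprocess

    base = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
    disk = "e53615dc-61ae-4f86-a246-c36739b4d2ae_disk"
    ceph = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    # 1. probe the cached base image (nova additionally caps it with prlimit)
    info = json.loads(subprocess.check_output(
        ["qemu-img", "info", base, "--force-share", "--output=json"]))
    print(info.get("virtual-size"))

    # 2. import it into the 'vms' pool as the instance's root disk
    subprocess.check_call(["rbd", "import", "--pool", "vms", base, disk,
                           "--image-format=2"] + ceph)

    # 3. grow to the flavor's root_gb (1 GiB); rbd --size defaults to MiB
    subprocess.check_call(["rbd", "resize", "--pool", "vms", "--size", "1024",
                           disk] + ceph)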
Feb 25 08:00:43 np0005629333 nova_compute[244014]: 2026-02-25 13:00:43.374 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:00:43 np0005629333 nova_compute[244014]: 2026-02-25 13:00:43.380 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:00:43 np0005629333 nova_compute[244014]: 2026-02-25 13:00:43.422 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
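The unchanged inventory above fixes this node's schedulable capacity. Assuming the standard placement formula (total - reserved) * allocation_ratio, the numbers work out as follows:

    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        cap = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(rc, cap)
    # VCPU 32.0 (8 cores oversubscribed 4x), MEMORY_MB 7167.0, DISK_GB 52.2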
Feb 25 08:00:43 np0005629333 nova_compute[244014]: 2026-02-25 13:00:43.429 244018 DEBUG nova.objects.instance [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'migration_context' on Instance uuid e53615dc-61ae-4f86-a246-c36739b4d2ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 08:00:43 np0005629333 nova_compute[244014]: 2026-02-25 13:00:43.460 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 08:00:43 np0005629333 nova_compute[244014]: 2026-02-25 13:00:43.460 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Ensure instance console log exists: /var/lib/nova/instances/e53615dc-61ae-4f86-a246-c36739b4d2ae/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 08:00:43 np0005629333 nova_compute[244014]: 2026-02-25 13:00:43.460 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:00:43 np0005629333 nova_compute[244014]: 2026-02-25 13:00:43.461 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:00:43 np0005629333 nova_compute[244014]: 2026-02-25 13:00:43.461 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:00:43 np0005629333 nova_compute[244014]: 2026-02-25 13:00:43.470 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:00:43 np0005629333 nova_compute[244014]: 2026-02-25 13:00:43.470 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:00:43 np0005629333 nova_compute[244014]: 2026-02-25 13:00:43.775 244018 DEBUG nova.network.neutron [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Successfully created port: 614ca81e-067a-4af2-852a-f3155f0f177c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 08:00:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:00:43 np0005629333 nova_compute[244014]: 2026-02-25 13:00:43.973 244018 DEBUG nova.compute.manager [req-226b5fac-6c14-4b72-89ac-ad3cadf8a23f req-37def11a-554c-4c98-96fc-276dc2c8b6d5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Received event network-vif-plugged-4f9273d8-a479-491a-bbac-087a11ac1a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:00:43 np0005629333 nova_compute[244014]: 2026-02-25 13:00:43.973 244018 DEBUG oslo_concurrency.lockutils [req-226b5fac-6c14-4b72-89ac-ad3cadf8a23f req-37def11a-554c-4c98-96fc-276dc2c8b6d5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:00:43 np0005629333 nova_compute[244014]: 2026-02-25 13:00:43.973 244018 DEBUG oslo_concurrency.lockutils [req-226b5fac-6c14-4b72-89ac-ad3cadf8a23f req-37def11a-554c-4c98-96fc-276dc2c8b6d5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:00:43 np0005629333 nova_compute[244014]: 2026-02-25 13:00:43.973 244018 DEBUG oslo_concurrency.lockutils [req-226b5fac-6c14-4b72-89ac-ad3cadf8a23f req-37def11a-554c-4c98-96fc-276dc2c8b6d5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:00:43 np0005629333 nova_compute[244014]: 2026-02-25 13:00:43.974 244018 DEBUG nova.compute.manager [req-226b5fac-6c14-4b72-89ac-ad3cadf8a23f req-37def11a-554c-4c98-96fc-276dc2c8b6d5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] No waiting events found dispatching network-vif-plugged-4f9273d8-a479-491a-bbac-087a11ac1a08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 08:00:43 np0005629333 nova_compute[244014]: 2026-02-25 13:00:43.974 244018 WARNING nova.compute.manager [req-226b5fac-6c14-4b72-89ac-ad3cadf8a23f req-37def11a-554c-4c98-96fc-276dc2c8b6d5 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Received unexpected event network-vif-plugged-4f9273d8-a479-491a-bbac-087a11ac1a08 for instance with vm_state active and task_state None.#033[00m
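The warning above is benign: neutron delivered network-vif-plugged for an instance that is already active with no task in flight, so there was no registered waiter to wake. A sketch of that dispatch logic (the names and structure are illustrative, not nova's internals):

    waiters = {}   # instance_uuid -> {event_name: callback}

    def dispatch(instance_uuid, event_name, vm_state, task_state):
        cb = waiters.get(instance_uuid, {}).pop(event_name, None)
        if cb is not None:
            cb()                     # someone was blocked on this event
        else:
            print(f"Received unexpected event {event_name} for instance "
                  f"with vm_state {vm_state} and task_state {task_state}.")

    dispatch("0e4f3bd8-9df3-4329-afdc-27d91b98810d",
             "network-vif-plugged-4f9273d8-a479-491a-bbac-087a11ac1a08",
             "active", None)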
Feb 25 08:00:44 np0005629333 nova_compute[244014]: 2026-02-25 13:00:44.466 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:00:44 np0005629333 nova_compute[244014]: 2026-02-25 13:00:44.466 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:00:44 np0005629333 nova_compute[244014]: 2026-02-25 13:00:44.847 244018 DEBUG nova.network.neutron [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Successfully updated port: 614ca81e-067a-4af2-852a-f3155f0f177c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 08:00:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2409: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 15 KiB/s wr, 46 op/s
Feb 25 08:00:44 np0005629333 nova_compute[244014]: 2026-02-25 13:00:44.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:00:44 np0005629333 nova_compute[244014]: 2026-02-25 13:00:44.886 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "refresh_cache-e53615dc-61ae-4f86-a246-c36739b4d2ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 08:00:44 np0005629333 nova_compute[244014]: 2026-02-25 13:00:44.886 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquired lock "refresh_cache-e53615dc-61ae-4f86-a246-c36739b4d2ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 08:00:44 np0005629333 nova_compute[244014]: 2026-02-25 13:00:44.886 244018 DEBUG nova.network.neutron [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 08:00:45 np0005629333 nova_compute[244014]: 2026-02-25 13:00:45.107 244018 DEBUG nova.network.neutron [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 08:00:45 np0005629333 nova_compute[244014]: 2026-02-25 13:00:45.622 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:45 np0005629333 nova_compute[244014]: 2026-02-25 13:00:45.687 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.006 244018 DEBUG nova.network.neutron [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Updating instance_info_cache with network_info: [{"id": "614ca81e-067a-4af2-852a-f3155f0f177c", "address": "fa:16:3e:d5:16:22", "network": {"id": "c6c760d4-99a5-4930-b051-8fdaa331a4d4", "bridge": "br-int", "label": "tempest-network-smoke--308292664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap614ca81e-06", "ovs_interfaceid": "614ca81e-067a-4af2-852a-f3155f0f177c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.068 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Releasing lock "refresh_cache-e53615dc-61ae-4f86-a246-c36739b4d2ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.069 244018 DEBUG nova.compute.manager [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Instance network_info: |[{"id": "614ca81e-067a-4af2-852a-f3155f0f177c", "address": "fa:16:3e:d5:16:22", "network": {"id": "c6c760d4-99a5-4930-b051-8fdaa331a4d4", "bridge": "br-int", "label": "tempest-network-smoke--308292664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap614ca81e-06", "ovs_interfaceid": "614ca81e-067a-4af2-852a-f3155f0f177c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
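Stripped of the log prefix, the cached network_info above is plain JSON, and every value nova needs for the guest definition (MAC, tap device, fixed IP, MTU 1442) can be read straight out of it; all four reappear in the domain XML further down. A trimmed example:

    import json

    nw_info = json.loads('''[{"id": "614ca81e-067a-4af2-852a-f3155f0f177c",
        "address": "fa:16:3e:d5:16:22",
        "network": {"meta": {"mtu": 1442},
                    "subnets": [{"ips": [{"address": "10.100.0.13"}]}]},
        "devname": "tap614ca81e-06"}]''')

    vif = nw_info[0]
    print(vif["address"], vif["devname"],
          vif["network"]["subnets"][0]["ips"][0]["address"],
          vif["network"]["meta"]["mtu"])
    # fa:16:3e:d5:16:22 tap614ca81e-06 10.100.0.13 1442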
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.071 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Start _get_guest_xml network_info=[{"id": "614ca81e-067a-4af2-852a-f3155f0f177c", "address": "fa:16:3e:d5:16:22", "network": {"id": "c6c760d4-99a5-4930-b051-8fdaa331a4d4", "bridge": "br-int", "label": "tempest-network-smoke--308292664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap614ca81e-06", "ovs_interfaceid": "614ca81e-067a-4af2-852a-f3155f0f177c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.075 244018 WARNING nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.079 244018 DEBUG nova.compute.manager [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Received event network-changed-614ca81e-067a-4af2-852a-f3155f0f177c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.079 244018 DEBUG nova.compute.manager [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Refreshing instance network info cache due to event network-changed-614ca81e-067a-4af2-852a-f3155f0f177c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.080 244018 DEBUG oslo_concurrency.lockutils [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-e53615dc-61ae-4f86-a246-c36739b4d2ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.080 244018 DEBUG oslo_concurrency.lockutils [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-e53615dc-61ae-4f86-a246-c36739b4d2ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.080 244018 DEBUG nova.network.neutron [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Refreshing network info cache for port 614ca81e-067a-4af2-852a-f3155f0f177c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.081 244018 DEBUG nova.virt.libvirt.host [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.082 244018 DEBUG nova.virt.libvirt.host [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.085 244018 DEBUG nova.virt.libvirt.host [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.085 244018 DEBUG nova.virt.libvirt.host [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.085 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.085 244018 DEBUG nova.virt.hardware [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.086 244018 DEBUG nova.virt.hardware [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.086 244018 DEBUG nova.virt.hardware [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.086 244018 DEBUG nova.virt.hardware [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.086 244018 DEBUG nova.virt.hardware [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.087 244018 DEBUG nova.virt.hardware [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.087 244018 DEBUG nova.virt.hardware [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.087 244018 DEBUG nova.virt.hardware [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.087 244018 DEBUG nova.virt.hardware [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.087 244018 DEBUG nova.virt.hardware [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.088 244018 DEBUG nova.virt.hardware [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
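With a 1-vCPU flavor and no topology constraints from flavor or image (all the 0:0:0 lines above), the only factorization is 1 socket x 1 core x 1 thread, which is exactly the <topology> element emitted in the XML below. A sketch of the enumeration, assuming a brute-force search within the 65536 limits:

    import itertools

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # every sockets*cores*threads factorization of the vCPU count
        for s, c, t in itertools.product(range(1, vcpus + 1), repeat=3):
            if (s * c * t == vcpus and s <= max_sockets
                    and c <= max_cores and t <= max_threads):
                yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)] -- matches the log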
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.090 244018 DEBUG oslo_concurrency.processutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:00:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 08:00:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1415538947' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.685 244018 DEBUG oslo_concurrency.processutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.708 244018 DEBUG nova.storage.rbd_utils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image e53615dc-61ae-4f86-a246-c36739b4d2ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.713 244018 DEBUG oslo_concurrency.processutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:00:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2410: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 86 KiB/s rd, 13 KiB/s wr, 42 op/s
Feb 25 08:00:46 np0005629333 nova_compute[244014]: 2026-02-25 13:00:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:00:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 08:00:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/843307968' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.232 244018 DEBUG oslo_concurrency.processutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.234 244018 DEBUG nova.virt.libvirt.vif [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T13:00:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-2025216436',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-2025216436',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=147,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKSnXyyRLcIe+wwroSDefDYtDDfXrk/YMn2qsODUMBYqG2hXn2+h/Zx9KIePkIbsUhd+oTI/uxQzz33WttQ4dgrPw3EcoKeusKnTCEMOPy/CP9hk5/kHcYVWCklV9tEBEw==',key_name='tempest-TestSecurityGroupsBasicOps-302753452',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-tvsldivx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T13:00:42Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=e53615dc-61ae-4f86-a246-c36739b4d2ae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "614ca81e-067a-4af2-852a-f3155f0f177c", "address": "fa:16:3e:d5:16:22", "network": {"id": "c6c760d4-99a5-4930-b051-8fdaa331a4d4", "bridge": "br-int", "label": "tempest-network-smoke--308292664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap614ca81e-06", "ovs_interfaceid": "614ca81e-067a-4af2-852a-f3155f0f177c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.234 244018 DEBUG nova.network.os_vif_util [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "614ca81e-067a-4af2-852a-f3155f0f177c", "address": "fa:16:3e:d5:16:22", "network": {"id": "c6c760d4-99a5-4930-b051-8fdaa331a4d4", "bridge": "br-int", "label": "tempest-network-smoke--308292664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap614ca81e-06", "ovs_interfaceid": "614ca81e-067a-4af2-852a-f3155f0f177c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.235 244018 DEBUG nova.network.os_vif_util [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:16:22,bridge_name='br-int',has_traffic_filtering=True,id=614ca81e-067a-4af2-852a-f3155f0f177c,network=Network(c6c760d4-99a5-4930-b051-8fdaa331a4d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap614ca81e-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
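The conversion above flattens nova's VIF dict into a typed object that os-vif knows how to plug. A reduced stand-in for os_vif.objects.vif.VIFOpenVSwitch (the real class carries more fields, including the network and port profile):

    from dataclasses import dataclass

    @dataclass
    class VIFOpenVSwitch:
        id: str
        address: str
        bridge_name: str
        vif_name: str
        has_traffic_filtering: bool
        preserve_on_delete: bool
        active: bool

    def nova_to_osvif_vif(vif):
        # map the nova-side dict fields onto the os-vif object
        return VIFOpenVSwitch(
            id=vif["id"],
            address=vif["address"],
            bridge_name=vif["details"]["bridge_name"],
            vif_name=vif["devname"],
            has_traffic_filtering=vif["details"]["port_filter"],
            preserve_on_delete=vif["preserve_on_delete"],
            active=vif["active"],
        )

    print(nova_to_osvif_vif({
        "id": "614ca81e-067a-4af2-852a-f3155f0f177c",
        "address": "fa:16:3e:d5:16:22",
        "devname": "tap614ca81e-06",
        "preserve_on_delete": False,
        "active": False,
        "details": {"bridge_name": "br-int", "port_filter": True},
    }))   # mirrors the "Converted object VIFOpenVSwitch(...)" line above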
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.236 244018 DEBUG nova.objects.instance [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'pci_devices' on Instance uuid e53615dc-61ae-4f86-a246-c36739b4d2ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.254 244018 DEBUG nova.network.neutron [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Updated VIF entry in instance network info cache for port 614ca81e-067a-4af2-852a-f3155f0f177c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.254 244018 DEBUG nova.network.neutron [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Updating instance_info_cache with network_info: [{"id": "614ca81e-067a-4af2-852a-f3155f0f177c", "address": "fa:16:3e:d5:16:22", "network": {"id": "c6c760d4-99a5-4930-b051-8fdaa331a4d4", "bridge": "br-int", "label": "tempest-network-smoke--308292664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap614ca81e-06", "ovs_interfaceid": "614ca81e-067a-4af2-852a-f3155f0f177c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.277 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] End _get_guest_xml xml=<domain type="kvm">
Feb 25 08:00:47 np0005629333 nova_compute[244014]:  <uuid>e53615dc-61ae-4f86-a246-c36739b4d2ae</uuid>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:  <name>instance-00000093</name>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-2025216436</nova:name>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 13:00:46</nova:creationTime>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 08:00:47 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:        <nova:user uuid="ea895f651dd742a7b5eb2d63fb34641c">tempest-TestSecurityGroupsBasicOps-948360018-project-member</nova:user>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:        <nova:project uuid="b9699483122f465084e3147e4904d13d">tempest-TestSecurityGroupsBasicOps-948360018</nova:project>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:        <nova:port uuid="614ca81e-067a-4af2-852a-f3155f0f177c">
Feb 25 08:00:47 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <system>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <entry name="serial">e53615dc-61ae-4f86-a246-c36739b4d2ae</entry>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <entry name="uuid">e53615dc-61ae-4f86-a246-c36739b4d2ae</entry>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    </system>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:  <os>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:  </os>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:  <features>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:  </features>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:  </clock>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:  <devices>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/e53615dc-61ae-4f86-a246-c36739b4d2ae_disk">
Feb 25 08:00:47 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      </source>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 08:00:47 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      </auth>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    </disk>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/e53615dc-61ae-4f86-a246-c36739b4d2ae_disk.config">
Feb 25 08:00:47 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      </source>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 08:00:47 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      </auth>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    </disk>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:d5:16:22"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <target dev="tap614ca81e-06"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    </interface>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/e53615dc-61ae-4f86-a246-c36739b4d2ae/console.log" append="off"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    </serial>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <video>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    </video>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    </rng>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 08:00:47 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 08:00:47 np0005629333 nova_compute[244014]:  </devices>
Feb 25 08:00:47 np0005629333 nova_compute[244014]: </domain>
Feb 25 08:00:47 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.279 244018 DEBUG nova.compute.manager [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Preparing to wait for external event network-vif-plugged-614ca81e-067a-4af2-852a-f3155f0f177c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.279 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.279 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.280 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.281 244018 DEBUG nova.virt.libvirt.vif [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T13:00:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-2025216436',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-2025216436',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=147,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKSnXyyRLcIe+wwroSDefDYtDDfXrk/YMn2qsODUMBYqG2hXn2+h/Zx9KIePkIbsUhd+oTI/uxQzz33WttQ4dgrPw3EcoKeusKnTCEMOPy/CP9hk5/kHcYVWCklV9tEBEw==',key_name='tempest-TestSecurityGroupsBasicOps-302753452',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-tvsldivx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T13:00:42Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=e53615dc-61ae-4f86-a246-c36739b4d2ae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "614ca81e-067a-4af2-852a-f3155f0f177c", "address": "fa:16:3e:d5:16:22", "network": {"id": "c6c760d4-99a5-4930-b051-8fdaa331a4d4", "bridge": "br-int", "label": "tempest-network-smoke--308292664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap614ca81e-06", "ovs_interfaceid": "614ca81e-067a-4af2-852a-f3155f0f177c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.281 244018 DEBUG nova.network.os_vif_util [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "614ca81e-067a-4af2-852a-f3155f0f177c", "address": "fa:16:3e:d5:16:22", "network": {"id": "c6c760d4-99a5-4930-b051-8fdaa331a4d4", "bridge": "br-int", "label": "tempest-network-smoke--308292664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap614ca81e-06", "ovs_interfaceid": "614ca81e-067a-4af2-852a-f3155f0f177c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.282 244018 DEBUG nova.network.os_vif_util [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:16:22,bridge_name='br-int',has_traffic_filtering=True,id=614ca81e-067a-4af2-852a-f3155f0f177c,network=Network(c6c760d4-99a5-4930-b051-8fdaa331a4d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap614ca81e-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.282 244018 DEBUG os_vif [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:16:22,bridge_name='br-int',has_traffic_filtering=True,id=614ca81e-067a-4af2-852a-f3155f0f177c,network=Network(c6c760d4-99a5-4930-b051-8fdaa331a4d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap614ca81e-06') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.283 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.284 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.285 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.286 244018 DEBUG oslo_concurrency.lockutils [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-e53615dc-61ae-4f86-a246-c36739b4d2ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.287 244018 DEBUG nova.compute.manager [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Received event network-changed-4f9273d8-a479-491a-bbac-087a11ac1a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.287 244018 DEBUG nova.compute.manager [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Refreshing instance network info cache due to event network-changed-4f9273d8-a479-491a-bbac-087a11ac1a08. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.287 244018 DEBUG oslo_concurrency.lockutils [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-0e4f3bd8-9df3-4329-afdc-27d91b98810d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.287 244018 DEBUG oslo_concurrency.lockutils [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-0e4f3bd8-9df3-4329-afdc-27d91b98810d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.288 244018 DEBUG nova.network.neutron [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Refreshing network info cache for port 4f9273d8-a479-491a-bbac-087a11ac1a08 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.289 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.290 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap614ca81e-06, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.291 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap614ca81e-06, col_values=(('external_ids', {'iface-id': '614ca81e-067a-4af2-852a-f3155f0f177c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d5:16:22', 'vm-uuid': 'e53615dc-61ae-4f86-a246-c36739b4d2ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.292 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:47 np0005629333 NetworkManager[49836]: <info>  [1772024447.2939] manager: (tap614ca81e-06): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/638)
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.295 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.299 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.301 244018 INFO os_vif [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:16:22,bridge_name='br-int',has_traffic_filtering=True,id=614ca81e-067a-4af2-852a-f3155f0f177c,network=Network(c6c760d4-99a5-4930-b051-8fdaa331a4d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap614ca81e-06')#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.367 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.367 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.368 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No VIF found with MAC fa:16:3e:d5:16:22, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.369 244018 INFO nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Using config drive#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.403 244018 DEBUG nova.storage.rbd_utils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image e53615dc-61ae-4f86-a246-c36739b4d2ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:00:47 np0005629333 podman[377156]: 2026-02-25 13:00:47.405045766 +0000 UTC m=+0.063272526 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 25 08:00:47 np0005629333 podman[377158]: 2026-02-25 13:00:47.465283265 +0000 UTC m=+0.122767104 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 25 08:00:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:00:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3939963270' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:00:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:00:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3939963270' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.684 244018 INFO nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Creating config drive at /var/lib/nova/instances/e53615dc-61ae-4f86-a246-c36739b4d2ae/disk.config#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.688 244018 DEBUG oslo_concurrency.processutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e53615dc-61ae-4f86-a246-c36739b4d2ae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkwius9tw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.823 244018 DEBUG oslo_concurrency.processutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e53615dc-61ae-4f86-a246-c36739b4d2ae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpkwius9tw" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.844 244018 DEBUG nova.storage.rbd_utils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image e53615dc-61ae-4f86-a246-c36739b4d2ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.848 244018 DEBUG oslo_concurrency.processutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e53615dc-61ae-4f86-a246-c36739b4d2ae/disk.config e53615dc-61ae-4f86-a246-c36739b4d2ae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.960 244018 DEBUG oslo_concurrency.processutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e53615dc-61ae-4f86-a246-c36739b4d2ae/disk.config e53615dc-61ae-4f86-a246-c36739b4d2ae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.962 244018 INFO nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Deleting local config drive /var/lib/nova/instances/e53615dc-61ae-4f86-a246-c36739b4d2ae/disk.config because it was imported into RBD.#033[00m
Feb 25 08:00:47 np0005629333 kernel: tap614ca81e-06: entered promiscuous mode
Feb 25 08:00:47 np0005629333 ovn_controller[147040]: 2026-02-25T13:00:47Z|01551|binding|INFO|Claiming lport 614ca81e-067a-4af2-852a-f3155f0f177c for this chassis.
Feb 25 08:00:47 np0005629333 ovn_controller[147040]: 2026-02-25T13:00:47Z|01552|binding|INFO|614ca81e-067a-4af2-852a-f3155f0f177c: Claiming fa:16:3e:d5:16:22 10.100.0.13
Feb 25 08:00:47 np0005629333 nova_compute[244014]: 2026-02-25 13:00:47.996 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:48 np0005629333 NetworkManager[49836]: <info>  [1772024448.0032] manager: (tap614ca81e-06): new Tun device (/org/freedesktop/NetworkManager/Devices/639)
Feb 25 08:00:48 np0005629333 ovn_controller[147040]: 2026-02-25T13:00:48Z|01553|binding|INFO|Setting lport 614ca81e-067a-4af2-852a-f3155f0f177c ovn-installed in OVS
Feb 25 08:00:48 np0005629333 ovn_controller[147040]: 2026-02-25T13:00:48Z|01554|binding|INFO|Setting lport 614ca81e-067a-4af2-852a-f3155f0f177c up in Southbound
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.003 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:16:22 10.100.0.13'], port_security=['fa:16:3e:d5:16:22 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e53615dc-61ae-4f86-a246-c36739b4d2ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6c760d4-99a5-4930-b051-8fdaa331a4d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '09018a8e-2978-4907-9f5b-7472a21cdef8 7177af96-a132-4797-a5d1-f9104ca018eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a6a1380-5e0c-4178-ac05-c458e549fab7, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=614ca81e-067a-4af2-852a-f3155f0f177c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.004 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 614ca81e-067a-4af2-852a-f3155f0f177c in datapath c6c760d4-99a5-4930-b051-8fdaa331a4d4 bound to our chassis#033[00m
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.005 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c6c760d4-99a5-4930-b051-8fdaa331a4d4#033[00m
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.006 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.018 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ec6cc0f5-792c-449d-829c-98045038193e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.019 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc6c760d4-91 in ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.021 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc6c760d4-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.021 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[497ba391-cb60-4b24-b50d-3e309bd9a7b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:48 np0005629333 systemd-machined[210048]: New machine qemu-181-instance-00000093.
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.022 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[691ddc7d-5e87-453b-8f6f-442dce3bd95d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.034 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[26d47f7e-7fa2-4fc9-8508-f6aedfb068c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:48 np0005629333 systemd[1]: Started Virtual Machine qemu-181-instance-00000093.
Feb 25 08:00:48 np0005629333 systemd-udevd[377275]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.055 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[28779397-193d-467c-8bfa-86ce6464b221]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:48 np0005629333 NetworkManager[49836]: <info>  [1772024448.0678] device (tap614ca81e-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 08:00:48 np0005629333 NetworkManager[49836]: <info>  [1772024448.0686] device (tap614ca81e-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.088 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1e1ed66c-60a4-4df3-84d5-bcc8ff07d908]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.096 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[05005577-b981-41b0-b0d7-6abd0ee6ee04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:48 np0005629333 systemd-udevd[377281]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 08:00:48 np0005629333 NetworkManager[49836]: <info>  [1772024448.0982] manager: (tapc6c760d4-90): new Veth device (/org/freedesktop/NetworkManager/Devices/640)
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.129 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[40c80016-ae06-4ca6-a147-f97e236e8029]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.133 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[f1e1112e-6733-4db3-8121-7131bd07cc16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:48 np0005629333 NetworkManager[49836]: <info>  [1772024448.1554] device (tapc6c760d4-90): carrier: link connected
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.158 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ae4d16c7-053d-437a-916d-3d1012a2322f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.170 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3996f83a-7c43-4631-93cf-652f90ef72fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6c760d4-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:23:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 458], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641756, 'reachable_time': 19140, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377305, 'error': None, 'target': 'ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.188 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[54ed1ffb-7bd6-4684-b68a-541dae8bd2a7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:23d1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641756, 'tstamp': 641756}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377306, 'error': None, 'target': 'ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.202 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3b8a9549-78a9-4fef-aba9-9805ac75b264]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6c760d4-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:23:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 458], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641756, 'reachable_time': 19140, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 377307, 'error': None, 'target': 'ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.231 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[febfcb53-3d78-48b1-9f22-866b6b16713d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.289 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[609d643b-99f5-420a-85f6-b1b775024bdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.291 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6c760d4-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.291 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.292 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6c760d4-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.294 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:48 np0005629333 kernel: tapc6c760d4-90: entered promiscuous mode
Feb 25 08:00:48 np0005629333 NetworkManager[49836]: <info>  [1772024448.2953] manager: (tapc6c760d4-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/641)
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.301 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc6c760d4-90, col_values=(('external_ids', {'iface-id': '1b4f0006-e243-4f86-aa8d-a93d1170014d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.302 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:48 np0005629333 ovn_controller[147040]: 2026-02-25T13:00:48Z|01555|binding|INFO|Releasing lport 1b4f0006-e243-4f86-aa8d-a93d1170014d from this chassis (sb_readonly=0)
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.308 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.309 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c6c760d4-99a5-4930-b051-8fdaa331a4d4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c6c760d4-99a5-4930-b051-8fdaa331a4d4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.310 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6ad52d5b-6e74-4c57-aadf-c3aaa56f2816]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.312 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-c6c760d4-99a5-4930-b051-8fdaa331a4d4
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/c6c760d4-99a5-4930-b051-8fdaa331a4d4.pid.haproxy
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID c6c760d4-99a5-4930-b051-8fdaa331a4d4
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 08:00:48 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:48.313 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4', 'env', 'PROCESS_TAG=haproxy-c6c760d4-99a5-4930-b051-8fdaa331a4d4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c6c760d4-99a5-4930-b051-8fdaa331a4d4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.376 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024448.3756309, e53615dc-61ae-4f86-a246-c36739b4d2ae => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.376 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] VM Started (Lifecycle Event)#033[00m
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.397 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.401 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024448.3758302, e53615dc-61ae-4f86-a246-c36739b4d2ae => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.402 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] VM Paused (Lifecycle Event)
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.443 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.446 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.459 244018 DEBUG nova.compute.manager [req-44e64a43-73fb-4802-affe-5c39a6a03fb2 req-9660ea15-6953-4fc4-b27b-707eeed2346a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Received event network-vif-plugged-614ca81e-067a-4af2-852a-f3155f0f177c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.459 244018 DEBUG oslo_concurrency.lockutils [req-44e64a43-73fb-4802-affe-5c39a6a03fb2 req-9660ea15-6953-4fc4-b27b-707eeed2346a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.459 244018 DEBUG oslo_concurrency.lockutils [req-44e64a43-73fb-4802-affe-5c39a6a03fb2 req-9660ea15-6953-4fc4-b27b-707eeed2346a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.459 244018 DEBUG oslo_concurrency.lockutils [req-44e64a43-73fb-4802-affe-5c39a6a03fb2 req-9660ea15-6953-4fc4-b27b-707eeed2346a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.460 244018 DEBUG nova.compute.manager [req-44e64a43-73fb-4802-affe-5c39a6a03fb2 req-9660ea15-6953-4fc4-b27b-707eeed2346a 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Processing event network-vif-plugged-614ca81e-067a-4af2-852a-f3155f0f177c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.460 244018 DEBUG nova.compute.manager [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.472 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.474 244018 INFO nova.virt.libvirt.driver [-] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Instance spawned successfully.
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.475 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.480 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.480 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024448.465376, e53615dc-61ae-4f86-a246-c36739b4d2ae => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.480 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] VM Resumed (Lifecycle Event)
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.493 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.493 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.500 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.500 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.500 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.501 244018 DEBUG nova.virt.libvirt.driver [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.571 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.575 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.615 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.622 244018 INFO nova.compute.manager [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Took 5.85 seconds to spawn the instance on the hypervisor.
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.623 244018 DEBUG nova.compute.manager [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 08:00:48 np0005629333 podman[377381]: 2026-02-25 13:00:48.657672391 +0000 UTC m=+0.060847167 container create 992e0def4086ad22f6763b88dc5d7cd4c4f8077e6d39c469fad8bd2220fa1e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.680 244018 DEBUG nova.network.neutron [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Updated VIF entry in instance network info cache for port 4f9273d8-a479-491a-bbac-087a11ac1a08. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.681 244018 DEBUG nova.network.neutron [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Updating instance_info_cache with network_info: [{"id": "4f9273d8-a479-491a-bbac-087a11ac1a08", "address": "fa:16:3e:f0:48:d9", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f9273d8-a4", "ovs_interfaceid": "4f9273d8-a479-491a-bbac-087a11ac1a08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 08:00:48 np0005629333 systemd[1]: Started libpod-conmon-992e0def4086ad22f6763b88dc5d7cd4c4f8077e6d39c469fad8bd2220fa1e51.scope.
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.712 244018 INFO nova.compute.manager [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Took 7.36 seconds to build instance.
Feb 25 08:00:48 np0005629333 podman[377381]: 2026-02-25 13:00:48.627123469 +0000 UTC m=+0.030298325 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.732 244018 DEBUG oslo_concurrency.lockutils [req-28c69574-b8ca-4b09-ae82-afccb989ad1a req-0d77e185-1032-4d9b-8037-b77362787914 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-0e4f3bd8-9df3-4329-afdc-27d91b98810d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 08:00:48 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:00:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f3d2482bb7304f1dfd8f2b101c4a80fae58fc8d8a42e192363e9767cd83f9d9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 08:00:48 np0005629333 nova_compute[244014]: 2026-02-25 13:00:48.743 244018 DEBUG oslo_concurrency.lockutils [None req-678fb070-dd2f-4f16-b5ba-74cc04a6e80a ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.476s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:00:48 np0005629333 podman[377381]: 2026-02-25 13:00:48.763831256 +0000 UTC m=+0.167006112 container init 992e0def4086ad22f6763b88dc5d7cd4c4f8077e6d39c469fad8bd2220fa1e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:00:48 np0005629333 podman[377381]: 2026-02-25 13:00:48.767938972 +0000 UTC m=+0.171113778 container start 992e0def4086ad22f6763b88dc5d7cd4c4f8077e6d39c469fad8bd2220fa1e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 08:00:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:00:48 np0005629333 neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4[377396]: [NOTICE]   (377400) : New worker (377402) forked
Feb 25 08:00:48 np0005629333 neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4[377396]: [NOTICE]   (377400) : Loading success.
Feb 25 08:00:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2411: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 131 op/s
Feb 25 08:00:49 np0005629333 nova_compute[244014]: 2026-02-25 13:00:49.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:00:49 np0005629333 nova_compute[244014]: 2026-02-25 13:00:49.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:00:49 np0005629333 nova_compute[244014]: 2026-02-25 13:00:49.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 08:00:50 np0005629333 nova_compute[244014]: 2026-02-25 13:00:50.624 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:00:50 np0005629333 nova_compute[244014]: 2026-02-25 13:00:50.652 244018 DEBUG nova.compute.manager [req-c1c45092-7e1d-4f69-91e8-f3fb4e3e38c5 req-b748fa72-5816-4718-9a68-a0321a0c46d9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Received event network-vif-plugged-614ca81e-067a-4af2-852a-f3155f0f177c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 08:00:50 np0005629333 nova_compute[244014]: 2026-02-25 13:00:50.653 244018 DEBUG oslo_concurrency.lockutils [req-c1c45092-7e1d-4f69-91e8-f3fb4e3e38c5 req-b748fa72-5816-4718-9a68-a0321a0c46d9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:00:50 np0005629333 nova_compute[244014]: 2026-02-25 13:00:50.653 244018 DEBUG oslo_concurrency.lockutils [req-c1c45092-7e1d-4f69-91e8-f3fb4e3e38c5 req-b748fa72-5816-4718-9a68-a0321a0c46d9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:00:50 np0005629333 nova_compute[244014]: 2026-02-25 13:00:50.653 244018 DEBUG oslo_concurrency.lockutils [req-c1c45092-7e1d-4f69-91e8-f3fb4e3e38c5 req-b748fa72-5816-4718-9a68-a0321a0c46d9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:00:50 np0005629333 nova_compute[244014]: 2026-02-25 13:00:50.654 244018 DEBUG nova.compute.manager [req-c1c45092-7e1d-4f69-91e8-f3fb4e3e38c5 req-b748fa72-5816-4718-9a68-a0321a0c46d9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] No waiting events found dispatching network-vif-plugged-614ca81e-067a-4af2-852a-f3155f0f177c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 08:00:50 np0005629333 nova_compute[244014]: 2026-02-25 13:00:50.654 244018 WARNING nova.compute.manager [req-c1c45092-7e1d-4f69-91e8-f3fb4e3e38c5 req-b748fa72-5816-4718-9a68-a0321a0c46d9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Received unexpected event network-vif-plugged-614ca81e-067a-4af2-852a-f3155f0f177c for instance with vm_state active and task_state None.
Feb 25 08:00:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2412: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Feb 25 08:00:51 np0005629333 nova_compute[244014]: 2026-02-25 13:00:51.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:00:52 np0005629333 nova_compute[244014]: 2026-02-25 13:00:52.292 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:00:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2413: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 178 op/s
Feb 25 08:00:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:00:53 np0005629333 ovn_controller[147040]: 2026-02-25T13:00:53Z|00196|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.9 does not match offer 10.100.0.10
Feb 25 08:00:53 np0005629333 ovn_controller[147040]: 2026-02-25T13:00:53Z|00197|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:f0:48:d9 10.100.0.10
Feb 25 08:00:54 np0005629333 nova_compute[244014]: 2026-02-25 13:00:54.104 244018 DEBUG nova.compute.manager [req-f6ca3886-64e5-4556-85f7-6a8a188429df req-fc41a30e-2069-451a-aeac-041407469ebb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Received event network-changed-614ca81e-067a-4af2-852a-f3155f0f177c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 08:00:54 np0005629333 nova_compute[244014]: 2026-02-25 13:00:54.105 244018 DEBUG nova.compute.manager [req-f6ca3886-64e5-4556-85f7-6a8a188429df req-fc41a30e-2069-451a-aeac-041407469ebb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Refreshing instance network info cache due to event network-changed-614ca81e-067a-4af2-852a-f3155f0f177c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 08:00:54 np0005629333 nova_compute[244014]: 2026-02-25 13:00:54.106 244018 DEBUG oslo_concurrency.lockutils [req-f6ca3886-64e5-4556-85f7-6a8a188429df req-fc41a30e-2069-451a-aeac-041407469ebb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-e53615dc-61ae-4f86-a246-c36739b4d2ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 08:00:54 np0005629333 nova_compute[244014]: 2026-02-25 13:00:54.106 244018 DEBUG oslo_concurrency.lockutils [req-f6ca3886-64e5-4556-85f7-6a8a188429df req-fc41a30e-2069-451a-aeac-041407469ebb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-e53615dc-61ae-4f86-a246-c36739b4d2ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 08:00:54 np0005629333 nova_compute[244014]: 2026-02-25 13:00:54.107 244018 DEBUG nova.network.neutron [req-f6ca3886-64e5-4556-85f7-6a8a188429df req-fc41a30e-2069-451a-aeac-041407469ebb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Refreshing network info cache for port 614ca81e-067a-4af2-852a-f3155f0f177c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 08:00:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2414: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 163 op/s
Feb 25 08:00:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:55.037 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:00:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:55.038 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:00:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:00:55.039 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:00:55 np0005629333 nova_compute[244014]: 2026-02-25 13:00:55.626 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:00:55 np0005629333 nova_compute[244014]: 2026-02-25 13:00:55.811 244018 DEBUG nova.network.neutron [req-f6ca3886-64e5-4556-85f7-6a8a188429df req-fc41a30e-2069-451a-aeac-041407469ebb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Updated VIF entry in instance network info cache for port 614ca81e-067a-4af2-852a-f3155f0f177c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 08:00:55 np0005629333 nova_compute[244014]: 2026-02-25 13:00:55.812 244018 DEBUG nova.network.neutron [req-f6ca3886-64e5-4556-85f7-6a8a188429df req-fc41a30e-2069-451a-aeac-041407469ebb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Updating instance_info_cache with network_info: [{"id": "614ca81e-067a-4af2-852a-f3155f0f177c", "address": "fa:16:3e:d5:16:22", "network": {"id": "c6c760d4-99a5-4930-b051-8fdaa331a4d4", "bridge": "br-int", "label": "tempest-network-smoke--308292664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap614ca81e-06", "ovs_interfaceid": "614ca81e-067a-4af2-852a-f3155f0f177c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 08:00:55 np0005629333 nova_compute[244014]: 2026-02-25 13:00:55.861 244018 DEBUG oslo_concurrency.lockutils [req-f6ca3886-64e5-4556-85f7-6a8a188429df req-fc41a30e-2069-451a-aeac-041407469ebb 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-e53615dc-61ae-4f86-a246-c36739b4d2ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 08:00:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2415: 305 pgs: 305 active+clean; 358 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 163 op/s
Feb 25 08:00:57 np0005629333 nova_compute[244014]: 2026-02-25 13:00:57.293 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:00:57 np0005629333 ovn_controller[147040]: 2026-02-25T13:00:57Z|00198|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.9 does not match offer 10.100.0.10
Feb 25 08:00:57 np0005629333 ovn_controller[147040]: 2026-02-25T13:00:57Z|00199|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:f0:48:d9 10.100.0.10
Feb 25 08:00:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:00:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2416: 305 pgs: 305 active+clean; 373 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 2.3 MiB/s wr, 213 op/s
Feb 25 08:00:58 np0005629333 ovn_controller[147040]: 2026-02-25T13:00:58Z|00200|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f0:48:d9 10.100.0.10
Feb 25 08:00:58 np0005629333 ovn_controller[147040]: 2026-02-25T13:00:58Z|00201|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f0:48:d9 10.100.0.10
Feb 25 08:01:00 np0005629333 ovn_controller[147040]: 2026-02-25T13:01:00Z|00202|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d5:16:22 10.100.0.13
Feb 25 08:01:00 np0005629333 ovn_controller[147040]: 2026-02-25T13:01:00Z|00203|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d5:16:22 10.100.0.13
Feb 25 08:01:00 np0005629333 nova_compute[244014]: 2026-02-25 13:01:00.628 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:01:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2417: 305 pgs: 305 active+clean; 373 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 512 KiB/s wr, 125 op/s
Feb 25 08:01:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:01:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:01:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:01:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:01:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:01:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:01:02 np0005629333 nova_compute[244014]: 2026-02-25 13:01:02.296 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:01:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2418: 305 pgs: 305 active+clean; 409 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.7 MiB/s wr, 191 op/s
Feb 25 08:01:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:01:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2419: 305 pgs: 305 active+clean; 409 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 116 op/s
Feb 25 08:01:05 np0005629333 nova_compute[244014]: 2026-02-25 13:01:05.632 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:01:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2420: 305 pgs: 305 active+clean; 409 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 116 op/s
Feb 25 08:01:07 np0005629333 nova_compute[244014]: 2026-02-25 13:01:07.298 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:01:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:01:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2421: 305 pgs: 305 active+clean; 409 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 117 op/s
Feb 25 08:01:10 np0005629333 nova_compute[244014]: 2026-02-25 13:01:10.634 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:01:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2422: 305 pgs: 305 active+clean; 409 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Feb 25 08:01:12 np0005629333 nova_compute[244014]: 2026-02-25 13:01:12.300 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.427472) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024472427500, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 521, "num_deletes": 251, "total_data_size": 503407, "memory_usage": 514232, "flush_reason": "Manual Compaction"}
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024472431795, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 498590, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51491, "largest_seqno": 52011, "table_properties": {"data_size": 495686, "index_size": 876, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6801, "raw_average_key_size": 18, "raw_value_size": 490015, "raw_average_value_size": 1364, "num_data_blocks": 39, "num_entries": 359, "num_filter_entries": 359, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772024439, "oldest_key_time": 1772024439, "file_creation_time": 1772024472, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 4360 microseconds, and 2024 cpu microseconds.
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.431832) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 498590 bytes OK
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.431846) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.433231) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.433246) EVENT_LOG_v1 {"time_micros": 1772024472433241, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.433259) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 500421, prev total WAL file size 500421, number of live WAL files 2.
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.433596) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(486KB)], [119(10MB)]
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024472433652, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 11356331, "oldest_snapshot_seqno": -1}
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 7155 keys, 9618085 bytes, temperature: kUnknown
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024472495751, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 9618085, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9570840, "index_size": 28253, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17925, "raw_key_size": 187208, "raw_average_key_size": 26, "raw_value_size": 9443785, "raw_average_value_size": 1319, "num_data_blocks": 1099, "num_entries": 7155, "num_filter_entries": 7155, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772024472, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.496039) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 9618085 bytes
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.497525) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.7 rd, 154.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.4 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(42.1) write-amplify(19.3) OK, records in: 7664, records dropped: 509 output_compression: NoCompression
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.497554) EVENT_LOG_v1 {"time_micros": 1772024472497541, "job": 72, "event": "compaction_finished", "compaction_time_micros": 62174, "compaction_time_cpu_micros": 33163, "output_level": 6, "num_output_files": 1, "total_output_size": 9618085, "num_input_records": 7664, "num_output_records": 7155, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024472497783, "job": 72, "event": "table_file_deletion", "file_number": 121}
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024472499269, "job": 72, "event": "table_file_deletion", "file_number": 119}
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.433485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.499417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.499425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.499427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.499430) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:01:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:01:12.499433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:01:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2423: 305 pgs: 305 active+clean; 412 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 547 KiB/s rd, 2.4 MiB/s wr, 72 op/s
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.213 244018 DEBUG nova.compute.manager [req-7f0ffa5e-cefb-4407-ac47-e42e62b4c82e req-d9c2f30a-0d2a-4b4d-9a1d-c63d85a8d0b4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Received event network-changed-614ca81e-067a-4af2-852a-f3155f0f177c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.214 244018 DEBUG nova.compute.manager [req-7f0ffa5e-cefb-4407-ac47-e42e62b4c82e req-d9c2f30a-0d2a-4b4d-9a1d-c63d85a8d0b4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Refreshing instance network info cache due to event network-changed-614ca81e-067a-4af2-852a-f3155f0f177c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.214 244018 DEBUG oslo_concurrency.lockutils [req-7f0ffa5e-cefb-4407-ac47-e42e62b4c82e req-d9c2f30a-0d2a-4b4d-9a1d-c63d85a8d0b4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-e53615dc-61ae-4f86-a246-c36739b4d2ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.215 244018 DEBUG oslo_concurrency.lockutils [req-7f0ffa5e-cefb-4407-ac47-e42e62b4c82e req-d9c2f30a-0d2a-4b4d-9a1d-c63d85a8d0b4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-e53615dc-61ae-4f86-a246-c36739b4d2ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.215 244018 DEBUG nova.network.neutron [req-7f0ffa5e-cefb-4407-ac47-e42e62b4c82e req-d9c2f30a-0d2a-4b4d-9a1d-c63d85a8d0b4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Refreshing network info cache for port 614ca81e-067a-4af2-852a-f3155f0f177c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.307 244018 DEBUG oslo_concurrency.lockutils [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "e53615dc-61ae-4f86-a246-c36739b4d2ae" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.307 244018 DEBUG oslo_concurrency.lockutils [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.308 244018 DEBUG oslo_concurrency.lockutils [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.308 244018 DEBUG oslo_concurrency.lockutils [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.309 244018 DEBUG oslo_concurrency.lockutils [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.310 244018 INFO nova.compute.manager [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Terminating instance
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.312 244018 DEBUG nova.compute.manager [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 25 08:01:13 np0005629333 kernel: tap614ca81e-06 (unregistering): left promiscuous mode
Feb 25 08:01:13 np0005629333 NetworkManager[49836]: <info>  [1772024473.3752] device (tap614ca81e-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 08:01:13 np0005629333 ovn_controller[147040]: 2026-02-25T13:01:13Z|01556|binding|INFO|Releasing lport 614ca81e-067a-4af2-852a-f3155f0f177c from this chassis (sb_readonly=0)
Feb 25 08:01:13 np0005629333 ovn_controller[147040]: 2026-02-25T13:01:13Z|01557|binding|INFO|Setting lport 614ca81e-067a-4af2-852a-f3155f0f177c down in Southbound
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.383 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:01:13 np0005629333 ovn_controller[147040]: 2026-02-25T13:01:13Z|01558|binding|INFO|Removing iface tap614ca81e-06 ovn-installed in OVS
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.384 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:01:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:13.390 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:16:22 10.100.0.13'], port_security=['fa:16:3e:d5:16:22 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e53615dc-61ae-4f86-a246-c36739b4d2ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6c760d4-99a5-4930-b051-8fdaa331a4d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '09018a8e-2978-4907-9f5b-7472a21cdef8 7177af96-a132-4797-a5d1-f9104ca018eb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a6a1380-5e0c-4178-ac05-c458e549fab7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=614ca81e-067a-4af2-852a-f3155f0f177c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 08:01:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:13.392 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 614ca81e-067a-4af2-852a-f3155f0f177c in datapath c6c760d4-99a5-4930-b051-8fdaa331a4d4 unbound from our chassis
Feb 25 08:01:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:13.394 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c6c760d4-99a5-4930-b051-8fdaa331a4d4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.398 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:01:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:13.399 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7065c938-492d-4e4c-ab6c-7744c1bcf21a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 08:01:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:13.399 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4 namespace which is not needed anymore
Feb 25 08:01:13 np0005629333 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000093.scope: Deactivated successfully.
Feb 25 08:01:13 np0005629333 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000093.scope: Consumed 12.696s CPU time.
Feb 25 08:01:13 np0005629333 systemd-machined[210048]: Machine qemu-181-instance-00000093 terminated.
Feb 25 08:01:13 np0005629333 neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4[377396]: [NOTICE]   (377400) : haproxy version is 2.8.14-c23fe91
Feb 25 08:01:13 np0005629333 neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4[377396]: [NOTICE]   (377400) : path to executable is /usr/sbin/haproxy
Feb 25 08:01:13 np0005629333 neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4[377396]: [WARNING]  (377400) : Exiting Master process...
Feb 25 08:01:13 np0005629333 neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4[377396]: [ALERT]    (377400) : Current worker (377402) exited with code 143 (Terminated)
Feb 25 08:01:13 np0005629333 neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4[377396]: [WARNING]  (377400) : All workers exited. Exiting... (0)
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.554 244018 INFO nova.virt.libvirt.driver [-] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Instance destroyed successfully.
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.555 244018 DEBUG nova.objects.instance [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'resources' on Instance uuid e53615dc-61ae-4f86-a246-c36739b4d2ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 08:01:13 np0005629333 systemd[1]: libpod-992e0def4086ad22f6763b88dc5d7cd4c4f8077e6d39c469fad8bd2220fa1e51.scope: Deactivated successfully.
Feb 25 08:01:13 np0005629333 podman[377445]: 2026-02-25 13:01:13.562248806 +0000 UTC m=+0.050711872 container died 992e0def4086ad22f6763b88dc5d7cd4c4f8077e6d39c469fad8bd2220fa1e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.568 244018 DEBUG nova.virt.libvirt.vif [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T13:00:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-2025216436',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-2025216436',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=147,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKSnXyyRLcIe+wwroSDefDYtDDfXrk/YMn2qsODUMBYqG2hXn2+h/Zx9KIePkIbsUhd+oTI/uxQzz33WttQ4dgrPw3EcoKeusKnTCEMOPy/CP9hk5/kHcYVWCklV9tEBEw==',key_name='tempest-TestSecurityGroupsBasicOps-302753452',keypairs=<?>,launch_index=0,launched_at=2026-02-25T13:00:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-tvsldivx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T13:00:48Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=e53615dc-61ae-4f86-a246-c36739b4d2ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "614ca81e-067a-4af2-852a-f3155f0f177c", "address": "fa:16:3e:d5:16:22", "network": {"id": "c6c760d4-99a5-4930-b051-8fdaa331a4d4", "bridge": "br-int", "label": "tempest-network-smoke--308292664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap614ca81e-06", "ovs_interfaceid": "614ca81e-067a-4af2-852a-f3155f0f177c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.569 244018 DEBUG nova.network.os_vif_util [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "614ca81e-067a-4af2-852a-f3155f0f177c", "address": "fa:16:3e:d5:16:22", "network": {"id": "c6c760d4-99a5-4930-b051-8fdaa331a4d4", "bridge": "br-int", "label": "tempest-network-smoke--308292664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap614ca81e-06", "ovs_interfaceid": "614ca81e-067a-4af2-852a-f3155f0f177c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.570 244018 DEBUG nova.network.os_vif_util [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d5:16:22,bridge_name='br-int',has_traffic_filtering=True,id=614ca81e-067a-4af2-852a-f3155f0f177c,network=Network(c6c760d4-99a5-4930-b051-8fdaa331a4d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap614ca81e-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.570 244018 DEBUG os_vif [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:16:22,bridge_name='br-int',has_traffic_filtering=True,id=614ca81e-067a-4af2-852a-f3155f0f177c,network=Network(c6c760d4-99a5-4930-b051-8fdaa331a4d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap614ca81e-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
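The unplug call above enters through os-vif's public module-level API. A minimal sketch of that entry point, using the VIF identifiers from this log (the InstanceInfo name is a placeholder, and the real VIFOpenVSwitch object carries the full network/profile shown above, omitted here):

    import os_vif
    from os_vif.objects import instance_info, vif as vif_obj

    os_vif.initialize()  # plugin discovery; loads the 'ovs' plugin used in this log

    inst = instance_info.InstanceInfo(
        uuid='e53615dc-61ae-4f86-a246-c36739b4d2ae',
        name='placeholder-name')  # name is an assumption, not taken from the log

    vif = vif_obj.VIFOpenVSwitch(
        id='614ca81e-067a-4af2-852a-f3155f0f177c',
        address='fa:16:3e:d5:16:22',
        bridge_name='br-int',
        vif_name='tap614ca81e-06')

    os_vif.unplug(vif, inst)  # dispatches to the 'ovs' plugin's unplug()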
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.572 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.573 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap614ca81e-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.574 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.577 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.579 244018 INFO os_vif [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:16:22,bridge_name='br-int',has_traffic_filtering=True,id=614ca81e-067a-4af2-852a-f3155f0f177c,network=Network(c6c760d4-99a5-4930-b051-8fdaa331a4d4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap614ca81e-06')#033[00m
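Under the hood, that unplug is the DelPortCommand transaction logged at 13:01:13.573: one ovsdbapp write against the local ovsdb-server. A rough standalone equivalent, assuming the default Open vSwitch socket path (deployments can differ):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Socket path is the common default, an assumption.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Same operation the log shows: DelPortCommand(port=tap614ca81e-06,
    # bridge=br-int, if_exists=True).
    ovs.del_port('tap614ca81e-06', bridge='br-int',
                 if_exists=True).execute(check_error=True)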
Feb 25 08:01:13 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-992e0def4086ad22f6763b88dc5d7cd4c4f8077e6d39c469fad8bd2220fa1e51-userdata-shm.mount: Deactivated successfully.
Feb 25 08:01:13 np0005629333 systemd[1]: var-lib-containers-storage-overlay-9f3d2482bb7304f1dfd8f2b101c4a80fae58fc8d8a42e192363e9767cd83f9d9-merged.mount: Deactivated successfully.
Feb 25 08:01:13 np0005629333 podman[377445]: 2026-02-25 13:01:13.611783883 +0000 UTC m=+0.100246939 container cleanup 992e0def4086ad22f6763b88dc5d7cd4c4f8077e6d39c469fad8bd2220fa1e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 08:01:13 np0005629333 systemd[1]: libpod-conmon-992e0def4086ad22f6763b88dc5d7cd4c4f8077e6d39c469fad8bd2220fa1e51.scope: Deactivated successfully.
Feb 25 08:01:13 np0005629333 podman[377501]: 2026-02-25 13:01:13.676632812 +0000 UTC m=+0.044529007 container remove 992e0def4086ad22f6763b88dc5d7cd4c4f8077e6d39c469fad8bd2220fa1e51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223)
Feb 25 08:01:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:13.680 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f4ddcdba-42f1-4497-8ebd-db590339b2ef]: (4, ('Wed Feb 25 01:01:13 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4 (992e0def4086ad22f6763b88dc5d7cd4c4f8077e6d39c469fad8bd2220fa1e51)\n992e0def4086ad22f6763b88dc5d7cd4c4f8077e6d39c469fad8bd2220fa1e51\nWed Feb 25 01:01:13 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4 (992e0def4086ad22f6763b88dc5d7cd4c4f8077e6d39c469fad8bd2220fa1e51)\n992e0def4086ad22f6763b88dc5d7cd4c4f8077e6d39c469fad8bd2220fa1e51\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:01:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:13.682 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1a63fade-9685-48a7-8c4e-78790c3f696e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:01:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:13.683 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6c760d4-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:01:13 np0005629333 kernel: tapc6c760d4-90: left promiscuous mode
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.685 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:13.689 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[212795aa-30cb-4f07-8979-6d05b3256fd2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.692 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:13.703 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6ff4b619-4d1c-43f5-90ff-1e57bc01e7e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:01:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:13.704 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2bb30413-ebd7-4a8a-8b02-252147708030]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:01:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:13.717 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7fa2d891-94fd-46b0-9562-1a119ad280e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641749, 'reachable_time': 29797, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377519, 'error': None, 'target': 'ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:01:13 np0005629333 systemd[1]: run-netns-ovnmeta\x2dc6c760d4\x2d99a5\x2d4930\x2db051\x2d8fdaa331a4d4.mount: Deactivated successfully.
Feb 25 08:01:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:13.721 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 08:01:13 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:13.721 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[9669f32f-c833-4495-93ee-c16b59e2a93f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
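The remove_netns call at ip_lib.py:607 is a thin privsep-wrapped layer over pyroute2; the privileged daemon performs roughly the following (a sketch only, and it must run as root, which is why neutron routes it through the oslo.privsep daemon rather than the agent process):

    from pyroute2 import netns

    # Deletes the metadata namespace torn down above.
    netns.remove('ovnmeta-c6c760d4-99a5-4930-b051-8fdaa331a4d4')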
Feb 25 08:01:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.880 244018 INFO nova.virt.libvirt.driver [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Deleting instance files /var/lib/nova/instances/e53615dc-61ae-4f86-a246-c36739b4d2ae_del#033[00m
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.882 244018 INFO nova.virt.libvirt.driver [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Deletion of /var/lib/nova/instances/e53615dc-61ae-4f86-a246-c36739b4d2ae_del complete#033[00m
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.959 244018 INFO nova.compute.manager [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Took 0.65 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.960 244018 DEBUG oslo.service.loopingcall [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.961 244018 DEBUG nova.compute.manager [-] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 08:01:13 np0005629333 nova_compute[244014]: 2026-02-25 13:01:13.961 244018 DEBUG nova.network.neutron [-] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
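The _deallocate_network_with_retries wrapper being waited on above follows oslo.service's looping-call pattern: the deallocation function is retried with backoff until it signals completion. A generic sketch of that pattern (the intervals are illustrative, not nova's exact settings):

    from oslo_service import loopingcall

    def _deallocate():
        # ...call neutron to deallocate the instance's ports; on success,
        # end the loop:
        raise loopingcall.LoopingCallDone()

    timer = loopingcall.BackOffLoopingCall(_deallocate)
    timer.start(starting_interval=1, timeout=300, jitter=0.75).wait()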
Feb 25 08:01:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2424: 305 pgs: 305 active+clean; 412 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 186 KiB/s rd, 211 KiB/s wr, 6 op/s
Feb 25 08:01:15 np0005629333 nova_compute[244014]: 2026-02-25 13:01:15.299 244018 DEBUG nova.compute.manager [req-5e07efcd-1a55-4960-b2bc-35523b4282a9 req-b06fe561-2982-4f50-845e-10d4d947b61c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Received event network-vif-unplugged-614ca81e-067a-4af2-852a-f3155f0f177c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:01:15 np0005629333 nova_compute[244014]: 2026-02-25 13:01:15.299 244018 DEBUG oslo_concurrency.lockutils [req-5e07efcd-1a55-4960-b2bc-35523b4282a9 req-b06fe561-2982-4f50-845e-10d4d947b61c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:01:15 np0005629333 nova_compute[244014]: 2026-02-25 13:01:15.300 244018 DEBUG oslo_concurrency.lockutils [req-5e07efcd-1a55-4960-b2bc-35523b4282a9 req-b06fe561-2982-4f50-845e-10d4d947b61c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:01:15 np0005629333 nova_compute[244014]: 2026-02-25 13:01:15.300 244018 DEBUG oslo_concurrency.lockutils [req-5e07efcd-1a55-4960-b2bc-35523b4282a9 req-b06fe561-2982-4f50-845e-10d4d947b61c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:01:15 np0005629333 nova_compute[244014]: 2026-02-25 13:01:15.300 244018 DEBUG nova.compute.manager [req-5e07efcd-1a55-4960-b2bc-35523b4282a9 req-b06fe561-2982-4f50-845e-10d4d947b61c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] No waiting events found dispatching network-vif-unplugged-614ca81e-067a-4af2-852a-f3155f0f177c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 08:01:15 np0005629333 nova_compute[244014]: 2026-02-25 13:01:15.300 244018 DEBUG nova.compute.manager [req-5e07efcd-1a55-4960-b2bc-35523b4282a9 req-b06fe561-2982-4f50-845e-10d4d947b61c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Received event network-vif-unplugged-614ca81e-067a-4af2-852a-f3155f0f177c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 08:01:15 np0005629333 nova_compute[244014]: 2026-02-25 13:01:15.301 244018 DEBUG nova.compute.manager [req-5e07efcd-1a55-4960-b2bc-35523b4282a9 req-b06fe561-2982-4f50-845e-10d4d947b61c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Received event network-vif-plugged-614ca81e-067a-4af2-852a-f3155f0f177c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:01:15 np0005629333 nova_compute[244014]: 2026-02-25 13:01:15.301 244018 DEBUG oslo_concurrency.lockutils [req-5e07efcd-1a55-4960-b2bc-35523b4282a9 req-b06fe561-2982-4f50-845e-10d4d947b61c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:01:15 np0005629333 nova_compute[244014]: 2026-02-25 13:01:15.301 244018 DEBUG oslo_concurrency.lockutils [req-5e07efcd-1a55-4960-b2bc-35523b4282a9 req-b06fe561-2982-4f50-845e-10d4d947b61c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:01:15 np0005629333 nova_compute[244014]: 2026-02-25 13:01:15.301 244018 DEBUG oslo_concurrency.lockutils [req-5e07efcd-1a55-4960-b2bc-35523b4282a9 req-b06fe561-2982-4f50-845e-10d4d947b61c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:01:15 np0005629333 nova_compute[244014]: 2026-02-25 13:01:15.301 244018 DEBUG nova.compute.manager [req-5e07efcd-1a55-4960-b2bc-35523b4282a9 req-b06fe561-2982-4f50-845e-10d4d947b61c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] No waiting events found dispatching network-vif-plugged-614ca81e-067a-4af2-852a-f3155f0f177c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 08:01:15 np0005629333 nova_compute[244014]: 2026-02-25 13:01:15.302 244018 WARNING nova.compute.manager [req-5e07efcd-1a55-4960-b2bc-35523b4282a9 req-b06fe561-2982-4f50-845e-10d4d947b61c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Received unexpected event network-vif-plugged-614ca81e-067a-4af2-852a-f3155f0f177c for instance with vm_state active and task_state deleting.#033[00m
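The acquire/release pairs above are nova's per-instance event lock: every external event from neutron is popped under a lock named "<instance-uuid>-events" so event dispatch and waiter registration cannot race. The same pattern with oslo.concurrency directly, as a sketch:

    from oslo_concurrency import lockutils

    uuid = 'e53615dc-61ae-4f86-a246-c36739b4d2ae'
    with lockutils.lock(uuid + '-events'):  # in-process lock, external=False
        # look up (and remove) any waiter registered for the incoming
        # network-vif-unplugged / network-vif-plugged event here
        pass

The "plugged" event arriving while the instance is already in task_state deleting is what produces the WARNING above; it is logged as unexpected and dropped rather than treated as an error.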
Feb 25 08:01:15 np0005629333 nova_compute[244014]: 2026-02-25 13:01:15.637 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:15 np0005629333 nova_compute[244014]: 2026-02-25 13:01:15.675 244018 DEBUG nova.network.neutron [-] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:01:15 np0005629333 nova_compute[244014]: 2026-02-25 13:01:15.693 244018 INFO nova.compute.manager [-] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Took 1.73 seconds to deallocate network for instance.#033[00m
Feb 25 08:01:15 np0005629333 nova_compute[244014]: 2026-02-25 13:01:15.745 244018 DEBUG oslo_concurrency.lockutils [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:01:15 np0005629333 nova_compute[244014]: 2026-02-25 13:01:15.746 244018 DEBUG oslo_concurrency.lockutils [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:01:15 np0005629333 nova_compute[244014]: 2026-02-25 13:01:15.841 244018 DEBUG nova.network.neutron [req-7f0ffa5e-cefb-4407-ac47-e42e62b4c82e req-d9c2f30a-0d2a-4b4d-9a1d-c63d85a8d0b4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Updated VIF entry in instance network info cache for port 614ca81e-067a-4af2-852a-f3155f0f177c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 08:01:15 np0005629333 nova_compute[244014]: 2026-02-25 13:01:15.842 244018 DEBUG nova.network.neutron [req-7f0ffa5e-cefb-4407-ac47-e42e62b4c82e req-d9c2f30a-0d2a-4b4d-9a1d-c63d85a8d0b4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Updating instance_info_cache with network_info: [{"id": "614ca81e-067a-4af2-852a-f3155f0f177c", "address": "fa:16:3e:d5:16:22", "network": {"id": "c6c760d4-99a5-4930-b051-8fdaa331a4d4", "bridge": "br-int", "label": "tempest-network-smoke--308292664", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap614ca81e-06", "ovs_interfaceid": "614ca81e-067a-4af2-852a-f3155f0f177c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:01:15 np0005629333 nova_compute[244014]: 2026-02-25 13:01:15.858 244018 DEBUG oslo_concurrency.processutils [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:01:15 np0005629333 nova_compute[244014]: 2026-02-25 13:01:15.903 244018 DEBUG oslo_concurrency.lockutils [req-7f0ffa5e-cefb-4407-ac47-e42e62b4c82e req-d9c2f30a-0d2a-4b4d-9a1d-c63d85a8d0b4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-e53615dc-61ae-4f86-a246-c36739b4d2ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 08:01:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:01:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/703727400' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:01:16 np0005629333 nova_compute[244014]: 2026-02-25 13:01:16.430 244018 DEBUG oslo_concurrency.processutils [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
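The "ceph df" round trip above (exit code 0 in 0.572s) goes through oslo.concurrency's subprocess helper. Equivalent direct usage, as a sketch:

    import json
    from oslo_concurrency import processutils

    # Returns (stdout, stderr); raises ProcessExecutionError on a
    # non-zero exit code.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    df = json.loads(out)  # pool and cluster usage, as dispatched by the mon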
Feb 25 08:01:16 np0005629333 nova_compute[244014]: 2026-02-25 13:01:16.437 244018 DEBUG nova.compute.provider_tree [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:01:16 np0005629333 nova_compute[244014]: 2026-02-25 13:01:16.460 244018 DEBUG nova.scheduler.client.report [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
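For reference, placement derives usable capacity from such an inventory as (total - reserved) * allocation_ratio; checking the numbers in that line:

    inventory = {
        'VCPU':      (8,    0,   4.0),
        'MEMORY_MB': (7679, 512, 1.0),
        'DISK_GB':   (59,   1,   0.9),
    }
    for rc, (total, reserved, ratio) in inventory.items():
        print(rc, (total - reserved) * ratio)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2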
Feb 25 08:01:16 np0005629333 nova_compute[244014]: 2026-02-25 13:01:16.490 244018 DEBUG oslo_concurrency.lockutils [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:01:16 np0005629333 nova_compute[244014]: 2026-02-25 13:01:16.521 244018 INFO nova.scheduler.client.report [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Deleted allocations for instance e53615dc-61ae-4f86-a246-c36739b4d2ae#033[00m
Feb 25 08:01:16 np0005629333 nova_compute[244014]: 2026-02-25 13:01:16.584 244018 DEBUG oslo_concurrency.lockutils [None req-acecdef1-720c-4c12-adfd-aa87eae260dd ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "e53615dc-61ae-4f86-a246-c36739b4d2ae" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.276s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:01:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2425: 305 pgs: 305 active+clean; 412 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 186 KiB/s rd, 211 KiB/s wr, 6 op/s
Feb 25 08:01:17 np0005629333 nova_compute[244014]: 2026-02-25 13:01:17.399 244018 DEBUG nova.compute.manager [req-ab594362-acec-4442-9a97-734b041e6c6a req-049c266e-1e0d-4c5e-90bd-0fbc51ce28ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Received event network-vif-deleted-614ca81e-067a-4af2-852a-f3155f0f177c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:01:17 np0005629333 nova_compute[244014]: 2026-02-25 13:01:17.399 244018 INFO nova.compute.manager [req-ab594362-acec-4442-9a97-734b041e6c6a req-049c266e-1e0d-4c5e-90bd-0fbc51ce28ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Neutron deleted interface 614ca81e-067a-4af2-852a-f3155f0f177c; detaching it from the instance and deleting it from the info cache#033[00m
Feb 25 08:01:17 np0005629333 nova_compute[244014]: 2026-02-25 13:01:17.400 244018 DEBUG nova.network.neutron [req-ab594362-acec-4442-9a97-734b041e6c6a req-049c266e-1e0d-4c5e-90bd-0fbc51ce28ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Feb 25 08:01:17 np0005629333 nova_compute[244014]: 2026-02-25 13:01:17.403 244018 DEBUG nova.compute.manager [req-ab594362-acec-4442-9a97-734b041e6c6a req-049c266e-1e0d-4c5e-90bd-0fbc51ce28ae 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Detach interface failed, port_id=614ca81e-067a-4af2-852a-f3155f0f177c, reason: Instance e53615dc-61ae-4f86-a246-c36739b4d2ae could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Feb 25 08:01:17 np0005629333 podman[377544]: 2026-02-25 13:01:17.72751534 +0000 UTC m=+0.064433688 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 08:01:17 np0005629333 podman[377545]: 2026-02-25 13:01:17.760419889 +0000 UTC m=+0.094798525 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
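The two health_status=healthy events above come from podman's healthcheck timer running the configured '/openstack/healthcheck' test inside each container. The same check can be triggered by hand; a sketch:

    import subprocess

    # Exit code 0 corresponds to health_status=healthy in the events above.
    for name in ('ovn_metadata_agent', 'ovn_controller'):
        subprocess.run(['podman', 'healthcheck', 'run', name], check=True)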
Feb 25 08:01:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:18.011 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 08:01:18 np0005629333 nova_compute[244014]: 2026-02-25 13:01:18.012 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:18 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:18.013 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 25 08:01:18 np0005629333 nova_compute[244014]: 2026-02-25 13:01:18.575 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:18 np0005629333 nova_compute[244014]: 2026-02-25 13:01:18.598 244018 DEBUG nova.compute.manager [None req-85cf15e4-a6bd-45e6-943c-d01fa587f42f 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:01:18 np0005629333 nova_compute[244014]: 2026-02-25 13:01:18.638 244018 INFO nova.compute.manager [None req-85cf15e4-a6bd-45e6-943c-d01fa587f42f 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] instance snapshotting#033[00m
Feb 25 08:01:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:01:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2426: 305 pgs: 305 active+clean; 333 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 206 KiB/s rd, 213 KiB/s wr, 34 op/s
Feb 25 08:01:18 np0005629333 nova_compute[244014]: 2026-02-25 13:01:18.912 244018 INFO nova.virt.libvirt.driver [None req-85cf15e4-a6bd-45e6-943c-d01fa587f42f 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Beginning live snapshot process#033[00m
Feb 25 08:01:19 np0005629333 nova_compute[244014]: 2026-02-25 13:01:19.083 244018 DEBUG nova.storage.rbd_utils [None req-85cf15e4-a6bd-45e6-943c-d01fa587f42f 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] creating snapshot(dfc9a21f590643fdaf313ec9d01a20d1) on rbd image(0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Feb 25 08:01:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Feb 25 08:01:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Feb 25 08:01:19 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Feb 25 08:01:19 np0005629333 ovn_controller[147040]: 2026-02-25T13:01:19Z|01559|binding|INFO|Releasing lport bba4cbca-ca61-4422-a903-61bd05b8ebd6 from this chassis (sb_readonly=0)
Feb 25 08:01:19 np0005629333 nova_compute[244014]: 2026-02-25 13:01:19.514 244018 DEBUG nova.storage.rbd_utils [None req-85cf15e4-a6bd-45e6-943c-d01fa587f42f 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] cloning vms/0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk@dfc9a21f590643fdaf313ec9d01a20d1 to images/b3595e75-97c7-4173-a1f7-d933f93b52fe clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Feb 25 08:01:19 np0005629333 nova_compute[244014]: 2026-02-25 13:01:19.547 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:19 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Feb 25 08:01:19 np0005629333 nova_compute[244014]: 2026-02-25 13:01:19.656 244018 DEBUG nova.storage.rbd_utils [None req-85cf15e4-a6bd-45e6-943c-d01fa587f42f 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] flattening images/b3595e75-97c7-4173-a1f7-d933f93b52fe flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Feb 25 08:01:20 np0005629333 nova_compute[244014]: 2026-02-25 13:01:20.409 244018 DEBUG nova.storage.rbd_utils [None req-85cf15e4-a6bd-45e6-943c-d01fa587f42f 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] removing snapshot(dfc9a21f590643fdaf313ec9d01a20d1) on rbd image(0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Feb 25 08:01:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Feb 25 08:01:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Feb 25 08:01:20 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Feb 25 08:01:20 np0005629333 nova_compute[244014]: 2026-02-25 13:01:20.514 244018 DEBUG nova.storage.rbd_utils [None req-85cf15e4-a6bd-45e6-943c-d01fa587f42f 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] creating snapshot(snap) on rbd image(b3595e75-97c7-4173-a1f7-d933f93b52fe) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
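Lines 13:01:19.083 through 13:01:20.514 are nova's RBD-backed live snapshot in full: snapshot the disk in the vms pool, clone that snapshot into the images pool, flatten the clone so it no longer references the parent, drop the temporary snapshot, then snapshot the finished clone for glance. A condensed sketch with the python rbd bindings (image names and pools taken from this log; the protect/unprotect steps are required for clone v1 and can be skipped on clusters with clone v2):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    vms, images = cluster.open_ioctx('vms'), cluster.open_ioctx('images')

    disk = '0e4f3bd8-9df3-4329-afdc-27d91b98810d_disk'
    tmp_snap = 'dfc9a21f590643fdaf313ec9d01a20d1'
    dest = 'b3595e75-97c7-4173-a1f7-d933f93b52fe'

    src = rbd.Image(vms, disk)
    src.create_snap(tmp_snap)
    src.protect_snap(tmp_snap)          # clone v1 requires a protected snap
    rbd.RBD().clone(vms, disk, tmp_snap, images, dest)

    clone = rbd.Image(images, dest)
    clone.flatten()                     # copy blocks, break the parent link
    src.unprotect_snap(tmp_snap)
    src.remove_snap(tmp_snap)           # temporary snapshot no longer needed
    clone.create_snap('snap')           # the final 'snap' seen in the log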
Feb 25 08:01:20 np0005629333 nova_compute[244014]: 2026-02-25 13:01:20.640 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2429: 305 pgs: 305 active+clean; 333 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.7 KiB/s wr, 41 op/s
Feb 25 08:01:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Feb 25 08:01:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Feb 25 08:01:21 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Feb 25 08:01:22 np0005629333 nova_compute[244014]: 2026-02-25 13:01:22.834 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2431: 305 pgs: 305 active+clean; 434 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 15 MiB/s wr, 323 op/s
Feb 25 08:01:23 np0005629333 nova_compute[244014]: 2026-02-25 13:01:23.364 244018 INFO nova.virt.libvirt.driver [None req-85cf15e4-a6bd-45e6-943c-d01fa587f42f 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Snapshot image upload complete#033[00m
Feb 25 08:01:23 np0005629333 nova_compute[244014]: 2026-02-25 13:01:23.365 244018 INFO nova.compute.manager [None req-85cf15e4-a6bd-45e6-943c-d01fa587f42f 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Took 4.73 seconds to snapshot the instance on the hypervisor.#033[00m
Feb 25 08:01:23 np0005629333 nova_compute[244014]: 2026-02-25 13:01:23.577 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:01:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Feb 25 08:01:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Feb 25 08:01:23 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Feb 25 08:01:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Feb 25 08:01:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Feb 25 08:01:24 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Feb 25 08:01:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2434: 305 pgs: 305 active+clean; 434 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 20 MiB/s wr, 364 op/s
Feb 25 08:01:25 np0005629333 nova_compute[244014]: 2026-02-25 13:01:25.642 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:26 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:26.014 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
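That DbSetCommand is the agent acknowledging nb_cfg=48 (received at 13:01:18 and deliberately delayed about 8 seconds, per the "Delaying updating chassis table" line) by stamping its Chassis_Private row. Through ovsdbapp's generic API the same write looks roughly like this, where sb_idl stands for an already-built Southbound API handle (connection setup elided):

    # sb_idl: an ovsdbapp API instance connected to the OVN Southbound DB
    sb_idl.db_set(
        'Chassis_Private', 'a594384c-d614-4492-9e0a-4d6ec095920c',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),
        if_exists=True,
    ).execute(check_error=True)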
Feb 25 08:01:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2435: 305 pgs: 305 active+clean; 434 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 15 MiB/s wr, 267 op/s
Feb 25 08:01:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:01:27 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:01:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:01:27 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:01:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:01:27 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:01:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:01:27 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:01:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:01:27 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:01:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:01:27 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
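Each handle_command/audit pair above is a structured mon command arriving over librados, the same mechanism the earlier "ceph df" CLI call used. Issuing one directly from Python, as a sketch:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    # Mirrors cmd={"prefix": "df", "format": "json"} from the audit log.
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({'prefix': 'df', 'format': 'json'}), b'')
    assert ret == 0, outs
    stats = json.loads(outbuf)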
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.201 244018 DEBUG nova.compute.manager [req-210848af-dbc1-422f-8c73-f258c22b62ac req-69636d32-ed08-4b3d-abb8-835f27ca896d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Received event network-changed-4f9273d8-a479-491a-bbac-087a11ac1a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.203 244018 DEBUG nova.compute.manager [req-210848af-dbc1-422f-8c73-f258c22b62ac req-69636d32-ed08-4b3d-abb8-835f27ca896d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Refreshing instance network info cache due to event network-changed-4f9273d8-a479-491a-bbac-087a11ac1a08. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.203 244018 DEBUG oslo_concurrency.lockutils [req-210848af-dbc1-422f-8c73-f258c22b62ac req-69636d32-ed08-4b3d-abb8-835f27ca896d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-0e4f3bd8-9df3-4329-afdc-27d91b98810d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.204 244018 DEBUG oslo_concurrency.lockutils [req-210848af-dbc1-422f-8c73-f258c22b62ac req-69636d32-ed08-4b3d-abb8-835f27ca896d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-0e4f3bd8-9df3-4329-afdc-27d91b98810d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.204 244018 DEBUG nova.network.neutron [req-210848af-dbc1-422f-8c73-f258c22b62ac req-69636d32-ed08-4b3d-abb8-835f27ca896d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Refreshing network info cache for port 4f9273d8-a479-491a-bbac-087a11ac1a08 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.253 244018 DEBUG oslo_concurrency.lockutils [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.254 244018 DEBUG oslo_concurrency.lockutils [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.254 244018 DEBUG oslo_concurrency.lockutils [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.255 244018 DEBUG oslo_concurrency.lockutils [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.255 244018 DEBUG oslo_concurrency.lockutils [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.257 244018 INFO nova.compute.manager [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Terminating instance#033[00m
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.258 244018 DEBUG nova.compute.manager [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 08:01:27 np0005629333 kernel: tap4f9273d8-a4 (unregistering): left promiscuous mode
Feb 25 08:01:27 np0005629333 NetworkManager[49836]: <info>  [1772024487.3395] device (tap4f9273d8-a4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.348 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:27 np0005629333 ovn_controller[147040]: 2026-02-25T13:01:27Z|01560|binding|INFO|Releasing lport 4f9273d8-a479-491a-bbac-087a11ac1a08 from this chassis (sb_readonly=0)
Feb 25 08:01:27 np0005629333 ovn_controller[147040]: 2026-02-25T13:01:27Z|01561|binding|INFO|Setting lport 4f9273d8-a479-491a-bbac-087a11ac1a08 down in Southbound
Feb 25 08:01:27 np0005629333 ovn_controller[147040]: 2026-02-25T13:01:27Z|01562|binding|INFO|Removing iface tap4f9273d8-a4 ovn-installed in OVS
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.352 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.360 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:27.371 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:48:d9 10.100.0.10'], port_security=['fa:16:3e:f0:48:d9 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '0e4f3bd8-9df3-4329-afdc-27d91b98810d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cf5eb89ba0424237a313b1f369bcb92b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9876e36c-27f9-4526-a220-4685b5be270d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b451f093-c10b-47f6-9a8c-8892cb3ad351, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=4f9273d8-a479-491a-bbac-087a11ac1a08) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 08:01:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:27.373 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 4f9273d8-a479-491a-bbac-087a11ac1a08 in datapath 4273798e-5f22-4d98-8d00-a22d1ea2c776 unbound from our chassis#033[00m
Feb 25 08:01:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:27.376 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4273798e-5f22-4d98-8d00-a22d1ea2c776#033[00m
Feb 25 08:01:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:27.390 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b99137-2e28-401d-90c9-ba3fe874f6df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:01:27 np0005629333 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000092.scope: Deactivated successfully.
Feb 25 08:01:27 np0005629333 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000092.scope: Consumed 13.779s CPU time.
Feb 25 08:01:27 np0005629333 systemd-machined[210048]: Machine qemu-180-instance-00000092 terminated.
Feb 25 08:01:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:27.419 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[9ea6d775-33da-43c1-9bcc-1399281d599d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:01:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:27.422 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1a75c592-7c82-4529-af06-7b4f0e37aed6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:01:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:27.447 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[39e44ec2-ad42-4b1c-99d5-57517b608398]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:01:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:27.462 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c391b481-9908-4440-81ae-9699c7615ae1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4273798e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:a3:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 453], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636995, 'reachable_time': 38262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377893, 'error': None, 'target': 'ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:01:27 np0005629333 podman[377881]: 2026-02-25 13:01:27.469178844 +0000 UTC m=+0.040932556 container create 6b6d59fe69737c921fa963b93b9fa87462b64f68ace8d117859f9bb2365bdb85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.475 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:27.477 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[dd7680cc-3dfd-4f3e-ae55-17190d1b66b1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4273798e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 637005, 'tstamp': 637005}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377898, 'error': None, 'target': 'ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4273798e-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 637008, 'tstamp': 637008}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377898, 'error': None, 'target': 'ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:01:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:27.482 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4273798e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.484 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.488 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.492 244018 INFO nova.virt.libvirt.driver [-] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Instance destroyed successfully.#033[00m
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.492 244018 DEBUG nova.objects.instance [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lazy-loading 'resources' on Instance uuid 0e4f3bd8-9df3-4329-afdc-27d91b98810d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 08:01:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:27.491 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4273798e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:01:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:27.493 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 08:01:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:27.493 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4273798e-50, col_values=(('external_ids', {'iface-id': 'bba4cbca-ca61-4422-a903-61bd05b8ebd6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:01:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:27.494 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.508 244018 DEBUG nova.virt.libvirt.vif [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T13:00:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1321825921',display_name='tempest-TestSnapshotPattern-server-1321825921',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1321825921',id=146,image_ref='7c8e8d1d-4a3c-44a1-a64e-6873dbc9d170',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDCWEdl3+ajGxJOkCeAxfCSENIRZwbPgzOnOZcpgV3Hlv4XmE2Mrj2poBzyZ23ScEZYFPY48VJJN3YnpuQrByWx+iTcrPbMvMX+80KQEtNnqR2GlURJo6gN1+GRlPtyo4Q==',key_name='tempest-TestSnapshotPattern-301066188',keypairs=<?>,launch_index=0,launched_at=2026-02-25T13:00:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cf5eb89ba0424237a313b1f369bcb92b',ramdisk_id='',reservation_id='r-1efchiu9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='8d338640-2b5f-4571-8f76-b523064ee129',image_min_disk='1',image_min_ram='0',image_owner_id='cf5eb89ba0424237a313b1f369bcb92b',image_owner_project_name='tempest-TestSnapshotPattern-1122231160',image_owner_user_name='tempest-TestSnapshotPattern-1122231160-project-member',image_user_id='7592542cdf7f423c86332695423dbe79',image_version='8.0',owner_project_name='tempest-TestSnapshotPattern-1122231160',owner_user_name='tempest-TestSnapshotPattern-1122231160-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T13:01:23Z,user_data=None,user_id='7592542cdf7f423c86332695423dbe79',uuid=0e4f3bd8-9df3-4329-afdc-27d91b98810d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f9273d8-a479-491a-bbac-087a11ac1a08", "address": "fa:16:3e:f0:48:d9", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f9273d8-a4", "ovs_interfaceid": "4f9273d8-a479-491a-bbac-087a11ac1a08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.508 244018 DEBUG nova.network.os_vif_util [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Converting VIF {"id": "4f9273d8-a479-491a-bbac-087a11ac1a08", "address": "fa:16:3e:f0:48:d9", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f9273d8-a4", "ovs_interfaceid": "4f9273d8-a479-491a-bbac-087a11ac1a08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.509 244018 DEBUG nova.network.os_vif_util [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f0:48:d9,bridge_name='br-int',has_traffic_filtering=True,id=4f9273d8-a479-491a-bbac-087a11ac1a08,network=Network(4273798e-5f22-4d98-8d00-a22d1ea2c776),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f9273d8-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.509 244018 DEBUG os_vif [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:48:d9,bridge_name='br-int',has_traffic_filtering=True,id=4f9273d8-a479-491a-bbac-087a11ac1a08,network=Network(4273798e-5f22-4d98-8d00-a22d1ea2c776),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f9273d8-a4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 08:01:27 np0005629333 systemd[1]: Started libpod-conmon-6b6d59fe69737c921fa963b93b9fa87462b64f68ace8d117859f9bb2365bdb85.scope.
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.511 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.511 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f9273d8-a4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.512 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.513 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.515 244018 INFO os_vif [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:48:d9,bridge_name='br-int',has_traffic_filtering=True,id=4f9273d8-a479-491a-bbac-087a11ac1a08,network=Network(4273798e-5f22-4d98-8d00-a22d1ea2c776),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f9273d8-a4')#033[00m
Feb 25 08:01:27 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:01:27 np0005629333 podman[377881]: 2026-02-25 13:01:27.54382585 +0000 UTC m=+0.115579632 container init 6b6d59fe69737c921fa963b93b9fa87462b64f68ace8d117859f9bb2365bdb85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 08:01:27 np0005629333 podman[377881]: 2026-02-25 13:01:27.450992861 +0000 UTC m=+0.022746603 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:01:27 np0005629333 podman[377881]: 2026-02-25 13:01:27.552849804 +0000 UTC m=+0.124603526 container start 6b6d59fe69737c921fa963b93b9fa87462b64f68ace8d117859f9bb2365bdb85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 08:01:27 np0005629333 podman[377881]: 2026-02-25 13:01:27.55696807 +0000 UTC m=+0.128721872 container attach 6b6d59fe69737c921fa963b93b9fa87462b64f68ace8d117859f9bb2365bdb85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 08:01:27 np0005629333 jovial_golick[377912]: 167 167
Feb 25 08:01:27 np0005629333 systemd[1]: libpod-6b6d59fe69737c921fa963b93b9fa87462b64f68ace8d117859f9bb2365bdb85.scope: Deactivated successfully.
Feb 25 08:01:27 np0005629333 conmon[377912]: conmon 6b6d59fe69737c921fa9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6b6d59fe69737c921fa963b93b9fa87462b64f68ace8d117859f9bb2365bdb85.scope/container/memory.events
Feb 25 08:01:27 np0005629333 podman[377881]: 2026-02-25 13:01:27.561754495 +0000 UTC m=+0.133508247 container died 6b6d59fe69737c921fa963b93b9fa87462b64f68ace8d117859f9bb2365bdb85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 08:01:27 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d0e21cb2129aa2121b0823bf71888bd5e2b579095797723a43d262d3dc7e8980-merged.mount: Deactivated successfully.
Feb 25 08:01:27 np0005629333 podman[377881]: 2026-02-25 13:01:27.613392422 +0000 UTC m=+0.185146174 container remove 6b6d59fe69737c921fa963b93b9fa87462b64f68ace8d117859f9bb2365bdb85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:01:27 np0005629333 systemd[1]: libpod-conmon-6b6d59fe69737c921fa963b93b9fa87462b64f68ace8d117859f9bb2365bdb85.scope: Deactivated successfully.
Feb 25 08:01:27 np0005629333 podman[377956]: 2026-02-25 13:01:27.778919131 +0000 UTC m=+0.050266079 container create 8e7b1fadc1d884996e115ec84fd98edbcb321b60056b11cbb482d2fffca8a29c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heyrovsky, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.788 244018 INFO nova.virt.libvirt.driver [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Deleting instance files /var/lib/nova/instances/0e4f3bd8-9df3-4329-afdc-27d91b98810d_del#033[00m
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.790 244018 INFO nova.virt.libvirt.driver [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Deletion of /var/lib/nova/instances/0e4f3bd8-9df3-4329-afdc-27d91b98810d_del complete#033[00m
Feb 25 08:01:27 np0005629333 systemd[1]: Started libpod-conmon-8e7b1fadc1d884996e115ec84fd98edbcb321b60056b11cbb482d2fffca8a29c.scope.
Feb 25 08:01:27 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:01:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcbec27f0eadba756cdfce642b7c1476c490b9b2846b89faa99113fc4b1e1521/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:01:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcbec27f0eadba756cdfce642b7c1476c490b9b2846b89faa99113fc4b1e1521/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:01:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcbec27f0eadba756cdfce642b7c1476c490b9b2846b89faa99113fc4b1e1521/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:01:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcbec27f0eadba756cdfce642b7c1476c490b9b2846b89faa99113fc4b1e1521/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:01:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcbec27f0eadba756cdfce642b7c1476c490b9b2846b89faa99113fc4b1e1521/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:01:27 np0005629333 podman[377956]: 2026-02-25 13:01:27.849478792 +0000 UTC m=+0.120825760 container init 8e7b1fadc1d884996e115ec84fd98edbcb321b60056b11cbb482d2fffca8a29c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 08:01:27 np0005629333 podman[377956]: 2026-02-25 13:01:27.759978717 +0000 UTC m=+0.031325695 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:01:27 np0005629333 podman[377956]: 2026-02-25 13:01:27.857426426 +0000 UTC m=+0.128773384 container start 8e7b1fadc1d884996e115ec84fd98edbcb321b60056b11cbb482d2fffca8a29c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.857 244018 INFO nova.compute.manager [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.858 244018 DEBUG oslo.service.loopingcall [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.858 244018 DEBUG nova.compute.manager [-] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 08:01:27 np0005629333 nova_compute[244014]: 2026-02-25 13:01:27.858 244018 DEBUG nova.network.neutron [-] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 08:01:27 np0005629333 podman[377956]: 2026-02-25 13:01:27.863275801 +0000 UTC m=+0.134622769 container attach 8e7b1fadc1d884996e115ec84fd98edbcb321b60056b11cbb482d2fffca8a29c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heyrovsky, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:01:27 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:01:27 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:01:27 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:01:28 np0005629333 interesting_heyrovsky[377972]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:01:28 np0005629333 interesting_heyrovsky[377972]: --> All data devices are unavailable
Feb 25 08:01:28 np0005629333 systemd[1]: libpod-8e7b1fadc1d884996e115ec84fd98edbcb321b60056b11cbb482d2fffca8a29c.scope: Deactivated successfully.
Feb 25 08:01:28 np0005629333 podman[377956]: 2026-02-25 13:01:28.385285816 +0000 UTC m=+0.656632814 container died 8e7b1fadc1d884996e115ec84fd98edbcb321b60056b11cbb482d2fffca8a29c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heyrovsky, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 08:01:28 np0005629333 systemd[1]: var-lib-containers-storage-overlay-bcbec27f0eadba756cdfce642b7c1476c490b9b2846b89faa99113fc4b1e1521-merged.mount: Deactivated successfully.
Feb 25 08:01:28 np0005629333 podman[377956]: 2026-02-25 13:01:28.449536288 +0000 UTC m=+0.720883216 container remove 8e7b1fadc1d884996e115ec84fd98edbcb321b60056b11cbb482d2fffca8a29c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 08:01:28 np0005629333 systemd[1]: libpod-conmon-8e7b1fadc1d884996e115ec84fd98edbcb321b60056b11cbb482d2fffca8a29c.scope: Deactivated successfully.
Feb 25 08:01:28 np0005629333 nova_compute[244014]: 2026-02-25 13:01:28.550 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024473.5488775, e53615dc-61ae-4f86-a246-c36739b4d2ae => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:01:28 np0005629333 nova_compute[244014]: 2026-02-25 13:01:28.551 244018 INFO nova.compute.manager [-] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] VM Stopped (Lifecycle Event)#033[00m
Feb 25 08:01:28 np0005629333 nova_compute[244014]: 2026-02-25 13:01:28.568 244018 DEBUG nova.compute.manager [None req-9cebc55d-66e9-4f50-bc3a-4f6d131c1044 - - - - - -] [instance: e53615dc-61ae-4f86-a246-c36739b4d2ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:01:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:01:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2436: 305 pgs: 305 active+clean; 333 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 12 MiB/s wr, 311 op/s
Feb 25 08:01:28 np0005629333 podman[378069]: 2026-02-25 13:01:28.949151122 +0000 UTC m=+0.062411252 container create f6352c3daed4c3e46962394c65a7ba069a91886a8cb84efa96edb0c06e930ef2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:01:28 np0005629333 systemd[1]: Started libpod-conmon-f6352c3daed4c3e46962394c65a7ba069a91886a8cb84efa96edb0c06e930ef2.scope.
Feb 25 08:01:29 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:01:29 np0005629333 podman[378069]: 2026-02-25 13:01:28.92781518 +0000 UTC m=+0.041075290 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:01:29 np0005629333 podman[378069]: 2026-02-25 13:01:29.03415072 +0000 UTC m=+0.147410860 container init f6352c3daed4c3e46962394c65a7ba069a91886a8cb84efa96edb0c06e930ef2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_morse, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:01:29 np0005629333 podman[378069]: 2026-02-25 13:01:29.042907157 +0000 UTC m=+0.156167257 container start f6352c3daed4c3e46962394c65a7ba069a91886a8cb84efa96edb0c06e930ef2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 08:01:29 np0005629333 quirky_morse[378085]: 167 167
Feb 25 08:01:29 np0005629333 systemd[1]: libpod-f6352c3daed4c3e46962394c65a7ba069a91886a8cb84efa96edb0c06e930ef2.scope: Deactivated successfully.
Feb 25 08:01:29 np0005629333 podman[378069]: 2026-02-25 13:01:29.04728467 +0000 UTC m=+0.160544820 container attach f6352c3daed4c3e46962394c65a7ba069a91886a8cb84efa96edb0c06e930ef2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_morse, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:01:29 np0005629333 conmon[378085]: conmon f6352c3daed4c3e46962 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f6352c3daed4c3e46962394c65a7ba069a91886a8cb84efa96edb0c06e930ef2.scope/container/memory.events
Feb 25 08:01:29 np0005629333 podman[378069]: 2026-02-25 13:01:29.048549716 +0000 UTC m=+0.161809816 container died f6352c3daed4c3e46962394c65a7ba069a91886a8cb84efa96edb0c06e930ef2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_morse, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 08:01:29 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a70267c61fb7bba96c773f397444fcd3bac8342c256ec3a520ec56280e5dd7a2-merged.mount: Deactivated successfully.
Feb 25 08:01:29 np0005629333 podman[378069]: 2026-02-25 13:01:29.096405846 +0000 UTC m=+0.209665976 container remove f6352c3daed4c3e46962394c65a7ba069a91886a8cb84efa96edb0c06e930ef2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_morse, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:01:29 np0005629333 systemd[1]: libpod-conmon-f6352c3daed4c3e46962394c65a7ba069a91886a8cb84efa96edb0c06e930ef2.scope: Deactivated successfully.
Feb 25 08:01:29 np0005629333 nova_compute[244014]: 2026-02-25 13:01:29.130 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:29 np0005629333 podman[378109]: 2026-02-25 13:01:29.233787071 +0000 UTC m=+0.043225460 container create b9486fe477e6b42f22dc10b0e2e435245a53db41a8d339d96f204111239618e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:01:29 np0005629333 systemd[1]: Started libpod-conmon-b9486fe477e6b42f22dc10b0e2e435245a53db41a8d339d96f204111239618e0.scope.
Feb 25 08:01:29 np0005629333 nova_compute[244014]: 2026-02-25 13:01:29.275 244018 DEBUG nova.network.neutron [-] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:01:29 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:01:29 np0005629333 nova_compute[244014]: 2026-02-25 13:01:29.294 244018 INFO nova.compute.manager [-] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Took 1.44 seconds to deallocate network for instance.#033[00m
Feb 25 08:01:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42f6afe07e70fe27a767fb842dcd31712aff17707bfbd185e95eee2756958c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:01:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42f6afe07e70fe27a767fb842dcd31712aff17707bfbd185e95eee2756958c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:01:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42f6afe07e70fe27a767fb842dcd31712aff17707bfbd185e95eee2756958c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:01:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b42f6afe07e70fe27a767fb842dcd31712aff17707bfbd185e95eee2756958c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:01:29 np0005629333 podman[378109]: 2026-02-25 13:01:29.212430569 +0000 UTC m=+0.021869008 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:01:29 np0005629333 podman[378109]: 2026-02-25 13:01:29.340867412 +0000 UTC m=+0.150305841 container init b9486fe477e6b42f22dc10b0e2e435245a53db41a8d339d96f204111239618e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_beaver, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 08:01:29 np0005629333 podman[378109]: 2026-02-25 13:01:29.348406574 +0000 UTC m=+0.157844973 container start b9486fe477e6b42f22dc10b0e2e435245a53db41a8d339d96f204111239618e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:01:29 np0005629333 podman[378109]: 2026-02-25 13:01:29.351622945 +0000 UTC m=+0.161061334 container attach b9486fe477e6b42f22dc10b0e2e435245a53db41a8d339d96f204111239618e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_beaver, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 08:01:29 np0005629333 nova_compute[244014]: 2026-02-25 13:01:29.357 244018 DEBUG oslo_concurrency.lockutils [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:01:29 np0005629333 nova_compute[244014]: 2026-02-25 13:01:29.357 244018 DEBUG oslo_concurrency.lockutils [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:01:29 np0005629333 nova_compute[244014]: 2026-02-25 13:01:29.374 244018 DEBUG nova.compute.manager [req-b9ab88b9-1ab1-4788-a59c-da3f449c85c0 req-30821dd9-e55e-4641-8dc1-3691f55309af 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Received event network-vif-deleted-4f9273d8-a479-491a-bbac-087a11ac1a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:01:29 np0005629333 nova_compute[244014]: 2026-02-25 13:01:29.451 244018 DEBUG oslo_concurrency.processutils [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:01:29 np0005629333 nova_compute[244014]: 2026-02-25 13:01:29.628 244018 DEBUG nova.network.neutron [req-210848af-dbc1-422f-8c73-f258c22b62ac req-69636d32-ed08-4b3d-abb8-835f27ca896d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Updated VIF entry in instance network info cache for port 4f9273d8-a479-491a-bbac-087a11ac1a08. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 08:01:29 np0005629333 nova_compute[244014]: 2026-02-25 13:01:29.629 244018 DEBUG nova.network.neutron [req-210848af-dbc1-422f-8c73-f258c22b62ac req-69636d32-ed08-4b3d-abb8-835f27ca896d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Updating instance_info_cache with network_info: [{"id": "4f9273d8-a479-491a-bbac-087a11ac1a08", "address": "fa:16:3e:f0:48:d9", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f9273d8-a4", "ovs_interfaceid": "4f9273d8-a479-491a-bbac-087a11ac1a08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:01:29 np0005629333 charming_beaver[378125]: {
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:    "0": [
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:        {
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "devices": [
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "/dev/loop3"
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            ],
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "lv_name": "ceph_lv0",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "lv_size": "21470642176",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "name": "ceph_lv0",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "tags": {
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.cluster_name": "ceph",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.crush_device_class": "",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.encrypted": "0",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.objectstore": "bluestore",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.osd_id": "0",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.type": "block",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.vdo": "0",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.with_tpm": "0"
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            },
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "type": "block",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "vg_name": "ceph_vg0"
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:        }
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:    ],
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:    "1": [
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:        {
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "devices": [
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "/dev/loop4"
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            ],
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "lv_name": "ceph_lv1",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "lv_size": "21470642176",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "name": "ceph_lv1",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "tags": {
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.cluster_name": "ceph",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.crush_device_class": "",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.encrypted": "0",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.objectstore": "bluestore",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.osd_id": "1",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.type": "block",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.vdo": "0",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.with_tpm": "0"
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            },
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "type": "block",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "vg_name": "ceph_vg1"
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:        }
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:    ],
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:    "2": [
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:        {
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "devices": [
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "/dev/loop5"
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            ],
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "lv_name": "ceph_lv2",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "lv_size": "21470642176",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "name": "ceph_lv2",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "tags": {
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.cluster_name": "ceph",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.crush_device_class": "",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.encrypted": "0",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.objectstore": "bluestore",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.osd_id": "2",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.type": "block",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.vdo": "0",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:                "ceph.with_tpm": "0"
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            },
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "type": "block",
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:            "vg_name": "ceph_vg2"
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:        }
Feb 25 08:01:29 np0005629333 charming_beaver[378125]:    ]
Feb 25 08:01:29 np0005629333 charming_beaver[378125]: }
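[Editor's note] The charming_beaver JSON above is an OSD-keyed inventory (each top-level key is an OSD id, each value a list of LV records), consistent with `ceph-volume lvm list --format json` run inside a cephadm helper container. A small sketch, under that assumption, for pulling the OSD-to-device mapping out of such a report (function name is illustrative):

    import json

    def osd_devices(report_text):
        """Map osd_id -> (lv_path, backing devices) from the report."""
        report = json.loads(report_text)
        return {
            osd_id: (lv["lv_path"], lv["devices"])
            for osd_id, lvs in report.items()
            for lv in lvs
        }

    # For the report above this yields:
    # {'0': ('/dev/ceph_vg0/ceph_lv0', ['/dev/loop3']),
    #  '1': ('/dev/ceph_vg1/ceph_lv1', ['/dev/loop4']),
    #  '2': ('/dev/ceph_vg2/ceph_lv2', ['/dev/loop5'])}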
Feb 25 08:01:29 np0005629333 nova_compute[244014]: 2026-02-25 13:01:29.650 244018 DEBUG oslo_concurrency.lockutils [req-210848af-dbc1-422f-8c73-f258c22b62ac req-69636d32-ed08-4b3d-abb8-835f27ca896d 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-0e4f3bd8-9df3-4329-afdc-27d91b98810d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 08:01:29 np0005629333 systemd[1]: libpod-b9486fe477e6b42f22dc10b0e2e435245a53db41a8d339d96f204111239618e0.scope: Deactivated successfully.
Feb 25 08:01:29 np0005629333 conmon[378125]: conmon b9486fe477e6b42f22dc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b9486fe477e6b42f22dc10b0e2e435245a53db41a8d339d96f204111239618e0.scope/container/memory.events
Feb 25 08:01:29 np0005629333 podman[378109]: 2026-02-25 13:01:29.694868098 +0000 UTC m=+0.504306527 container died b9486fe477e6b42f22dc10b0e2e435245a53db41a8d339d96f204111239618e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:01:29 np0005629333 systemd[1]: var-lib-containers-storage-overlay-b42f6afe07e70fe27a767fb842dcd31712aff17707bfbd185e95eee2756958c6-merged.mount: Deactivated successfully.
Feb 25 08:01:29 np0005629333 podman[378109]: 2026-02-25 13:01:29.744371104 +0000 UTC m=+0.553809493 container remove b9486fe477e6b42f22dc10b0e2e435245a53db41a8d339d96f204111239618e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 08:01:29 np0005629333 systemd[1]: libpod-conmon-b9486fe477e6b42f22dc10b0e2e435245a53db41a8d339d96f204111239618e0.scope: Deactivated successfully.
Feb 25 08:01:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:01:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2363102777' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:01:30 np0005629333 nova_compute[244014]: 2026-02-25 13:01:30.099 244018 DEBUG oslo_concurrency.processutils [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
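[Editor's note] nova-compute drives the `ceph df` call above through oslo_concurrency.processutils; a standalone equivalent of the same command is sketched below (hedged: the "stats" keys reflect the usual `ceph df --format=json` layout, not this cluster's captured output):

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, check=True, capture_output=True,
                         text=True).stdout
    stats = json.loads(out)

    # Cluster-wide totals live under the top-level "stats" key.
    print(stats["stats"]["total_bytes"],
          stats["stats"]["total_avail_bytes"])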
Feb 25 08:01:30 np0005629333 nova_compute[244014]: 2026-02-25 13:01:30.106 244018 DEBUG nova.compute.provider_tree [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:01:30 np0005629333 nova_compute[244014]: 2026-02-25 13:01:30.128 244018 DEBUG nova.scheduler.client.report [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:01:30 np0005629333 nova_compute[244014]: 2026-02-25 13:01:30.145 244018 DEBUG oslo_concurrency.lockutils [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:01:30 np0005629333 nova_compute[244014]: 2026-02-25 13:01:30.170 244018 INFO nova.scheduler.client.report [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Deleted allocations for instance 0e4f3bd8-9df3-4329-afdc-27d91b98810d#033[00m
Feb 25 08:01:30 np0005629333 nova_compute[244014]: 2026-02-25 13:01:30.237 244018 DEBUG oslo_concurrency.lockutils [None req-987646e2-0e83-4764-8320-80daae5ec720 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "0e4f3bd8-9df3-4329-afdc-27d91b98810d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.983s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:01:30 np0005629333 podman[378231]: 2026-02-25 13:01:30.275103714 +0000 UTC m=+0.056958967 container create 6b6bf59faa1e24f2a53e3b69188454ec0a8b94c47e29e877c5f3b46cb8d629cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tharp, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:01:30 np0005629333 podman[378231]: 2026-02-25 13:01:30.24056383 +0000 UTC m=+0.022418933 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:01:30 np0005629333 systemd[1]: Started libpod-conmon-6b6bf59faa1e24f2a53e3b69188454ec0a8b94c47e29e877c5f3b46cb8d629cd.scope.
Feb 25 08:01:30 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:01:30 np0005629333 podman[378231]: 2026-02-25 13:01:30.40045003 +0000 UTC m=+0.182305073 container init 6b6bf59faa1e24f2a53e3b69188454ec0a8b94c47e29e877c5f3b46cb8d629cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 08:01:30 np0005629333 podman[378231]: 2026-02-25 13:01:30.406716967 +0000 UTC m=+0.188571990 container start 6b6bf59faa1e24f2a53e3b69188454ec0a8b94c47e29e877c5f3b46cb8d629cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tharp, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:01:30 np0005629333 systemd[1]: libpod-6b6bf59faa1e24f2a53e3b69188454ec0a8b94c47e29e877c5f3b46cb8d629cd.scope: Deactivated successfully.
Feb 25 08:01:30 np0005629333 hungry_tharp[378247]: 167 167
Feb 25 08:01:30 np0005629333 conmon[378247]: conmon 6b6bf59faa1e24f2a53e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6b6bf59faa1e24f2a53e3b69188454ec0a8b94c47e29e877c5f3b46cb8d629cd.scope/container/memory.events
Feb 25 08:01:30 np0005629333 podman[378231]: 2026-02-25 13:01:30.418676364 +0000 UTC m=+0.200531387 container attach 6b6bf59faa1e24f2a53e3b69188454ec0a8b94c47e29e877c5f3b46cb8d629cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tharp, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 08:01:30 np0005629333 podman[378231]: 2026-02-25 13:01:30.419293592 +0000 UTC m=+0.201148655 container died 6b6bf59faa1e24f2a53e3b69188454ec0a8b94c47e29e877c5f3b46cb8d629cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tharp, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 08:01:30 np0005629333 systemd[1]: var-lib-containers-storage-overlay-aa0dc72c781d0f92e64c51eb95fbce108f654edae2a52ca075191d203a902764-merged.mount: Deactivated successfully.
Feb 25 08:01:30 np0005629333 podman[378231]: 2026-02-25 13:01:30.462915052 +0000 UTC m=+0.244770085 container remove 6b6bf59faa1e24f2a53e3b69188454ec0a8b94c47e29e877c5f3b46cb8d629cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_tharp, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:01:30 np0005629333 systemd[1]: libpod-conmon-6b6bf59faa1e24f2a53e3b69188454ec0a8b94c47e29e877c5f3b46cb8d629cd.scope: Deactivated successfully.
Feb 25 08:01:30 np0005629333 nova_compute[244014]: 2026-02-25 13:01:30.645 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:30 np0005629333 podman[378271]: 2026-02-25 13:01:30.665566969 +0000 UTC m=+0.065559540 container create 57e8ed1b2640aae0a154d4812e2f0fa0184f3b8dc361d0c2d6e0690d284cb814 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 08:01:30 np0005629333 systemd[1]: Started libpod-conmon-57e8ed1b2640aae0a154d4812e2f0fa0184f3b8dc361d0c2d6e0690d284cb814.scope.
Feb 25 08:01:30 np0005629333 podman[378271]: 2026-02-25 13:01:30.63405245 +0000 UTC m=+0.034045101 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:01:30 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:01:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6cba24229211a96d32df0687ec53145e7d52104874956f2b84599b1433913b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:01:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6cba24229211a96d32df0687ec53145e7d52104874956f2b84599b1433913b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:01:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6cba24229211a96d32df0687ec53145e7d52104874956f2b84599b1433913b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:01:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6cba24229211a96d32df0687ec53145e7d52104874956f2b84599b1433913b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:01:30 np0005629333 podman[378271]: 2026-02-25 13:01:30.781362355 +0000 UTC m=+0.181355006 container init 57e8ed1b2640aae0a154d4812e2f0fa0184f3b8dc361d0c2d6e0690d284cb814 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_robinson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 08:01:30 np0005629333 podman[378271]: 2026-02-25 13:01:30.789807934 +0000 UTC m=+0.189800525 container start 57e8ed1b2640aae0a154d4812e2f0fa0184f3b8dc361d0c2d6e0690d284cb814 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_robinson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 08:01:30 np0005629333 podman[378271]: 2026-02-25 13:01:30.804875778 +0000 UTC m=+0.204868389 container attach 57e8ed1b2640aae0a154d4812e2f0fa0184f3b8dc361d0c2d6e0690d284cb814 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_robinson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 08:01:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2437: 305 pgs: 305 active+clean; 333 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 4.1 KiB/s wr, 85 op/s
Feb 25 08:01:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Feb 25 08:01:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Feb 25 08:01:30 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Feb 25 08:01:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:01:31
Feb 25 08:01:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:01:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:01:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'vms', '.mgr', '.rgw.root', 'images', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control']
Feb 25 08:01:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:01:31 np0005629333 lvm[378365]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:01:31 np0005629333 lvm[378365]: VG ceph_vg0 finished
Feb 25 08:01:31 np0005629333 lvm[378368]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:01:31 np0005629333 lvm[378368]: VG ceph_vg1 finished
Feb 25 08:01:31 np0005629333 lvm[378370]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:01:31 np0005629333 lvm[378370]: VG ceph_vg2 finished
Feb 25 08:01:31 np0005629333 lvm[378371]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:01:31 np0005629333 lvm[378371]: VG ceph_vg0 finished
Feb 25 08:01:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:01:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:01:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:01:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:01:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:01:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:01:31 np0005629333 vigilant_robinson[378288]: {}
Feb 25 08:01:31 np0005629333 systemd[1]: libpod-57e8ed1b2640aae0a154d4812e2f0fa0184f3b8dc361d0c2d6e0690d284cb814.scope: Deactivated successfully.
Feb 25 08:01:31 np0005629333 podman[378271]: 2026-02-25 13:01:31.697150918 +0000 UTC m=+1.097143479 container died 57e8ed1b2640aae0a154d4812e2f0fa0184f3b8dc361d0c2d6e0690d284cb814 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_robinson, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:01:31 np0005629333 systemd[1]: libpod-57e8ed1b2640aae0a154d4812e2f0fa0184f3b8dc361d0c2d6e0690d284cb814.scope: Consumed 1.301s CPU time.
Feb 25 08:01:31 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c6cba24229211a96d32df0687ec53145e7d52104874956f2b84599b1433913b4-merged.mount: Deactivated successfully.
Feb 25 08:01:31 np0005629333 podman[378271]: 2026-02-25 13:01:31.738508905 +0000 UTC m=+1.138501476 container remove 57e8ed1b2640aae0a154d4812e2f0fa0184f3b8dc361d0c2d6e0690d284cb814 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_robinson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:01:31 np0005629333 systemd[1]: libpod-conmon-57e8ed1b2640aae0a154d4812e2f0fa0184f3b8dc361d0c2d6e0690d284cb814.scope: Deactivated successfully.
Feb 25 08:01:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:01:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:01:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:01:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:01:31 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:01:31 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:01:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:01:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:01:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:01:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:01:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:01:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:01:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:01:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:01:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:01:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:01:32 np0005629333 nova_compute[244014]: 2026-02-25 13:01:32.310 244018 DEBUG nova.compute.manager [req-a3351704-1229-42e1-83d9-77f38108760e req-182a0627-6c1f-4dfa-9368-29c6ce90d816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Received event network-changed-e11a4d69-8666-420b-b13c-69a6427567a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:01:32 np0005629333 nova_compute[244014]: 2026-02-25 13:01:32.311 244018 DEBUG nova.compute.manager [req-a3351704-1229-42e1-83d9-77f38108760e req-182a0627-6c1f-4dfa-9368-29c6ce90d816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Refreshing instance network info cache due to event network-changed-e11a4d69-8666-420b-b13c-69a6427567a4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 08:01:32 np0005629333 nova_compute[244014]: 2026-02-25 13:01:32.312 244018 DEBUG oslo_concurrency.lockutils [req-a3351704-1229-42e1-83d9-77f38108760e req-182a0627-6c1f-4dfa-9368-29c6ce90d816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 08:01:32 np0005629333 nova_compute[244014]: 2026-02-25 13:01:32.312 244018 DEBUG oslo_concurrency.lockutils [req-a3351704-1229-42e1-83d9-77f38108760e req-182a0627-6c1f-4dfa-9368-29c6ce90d816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 08:01:32 np0005629333 nova_compute[244014]: 2026-02-25 13:01:32.313 244018 DEBUG nova.network.neutron [req-a3351704-1229-42e1-83d9-77f38108760e req-182a0627-6c1f-4dfa-9368-29c6ce90d816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Refreshing network info cache for port e11a4d69-8666-420b-b13c-69a6427567a4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 08:01:32 np0005629333 nova_compute[244014]: 2026-02-25 13:01:32.508 244018 DEBUG oslo_concurrency.lockutils [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "8d338640-2b5f-4571-8f76-b523064ee129" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:01:32 np0005629333 nova_compute[244014]: 2026-02-25 13:01:32.509 244018 DEBUG oslo_concurrency.lockutils [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "8d338640-2b5f-4571-8f76-b523064ee129" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:01:32 np0005629333 nova_compute[244014]: 2026-02-25 13:01:32.509 244018 DEBUG oslo_concurrency.lockutils [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "8d338640-2b5f-4571-8f76-b523064ee129-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:01:32 np0005629333 nova_compute[244014]: 2026-02-25 13:01:32.509 244018 DEBUG oslo_concurrency.lockutils [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "8d338640-2b5f-4571-8f76-b523064ee129-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:01:32 np0005629333 nova_compute[244014]: 2026-02-25 13:01:32.509 244018 DEBUG oslo_concurrency.lockutils [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "8d338640-2b5f-4571-8f76-b523064ee129-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:01:32 np0005629333 nova_compute[244014]: 2026-02-25 13:01:32.511 244018 INFO nova.compute.manager [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Terminating instance#033[00m
Feb 25 08:01:32 np0005629333 nova_compute[244014]: 2026-02-25 13:01:32.512 244018 DEBUG nova.compute.manager [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 08:01:32 np0005629333 nova_compute[244014]: 2026-02-25 13:01:32.513 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:32 np0005629333 kernel: tape11a4d69-86 (unregistering): left promiscuous mode
Feb 25 08:01:32 np0005629333 NetworkManager[49836]: <info>  [1772024492.5690] device (tape11a4d69-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 08:01:32 np0005629333 ovn_controller[147040]: 2026-02-25T13:01:32Z|01563|binding|INFO|Releasing lport e11a4d69-8666-420b-b13c-69a6427567a4 from this chassis (sb_readonly=0)
Feb 25 08:01:32 np0005629333 ovn_controller[147040]: 2026-02-25T13:01:32Z|01564|binding|INFO|Setting lport e11a4d69-8666-420b-b13c-69a6427567a4 down in Southbound
Feb 25 08:01:32 np0005629333 ovn_controller[147040]: 2026-02-25T13:01:32Z|01565|binding|INFO|Removing iface tape11a4d69-86 ovn-installed in OVS
Feb 25 08:01:32 np0005629333 nova_compute[244014]: 2026-02-25 13:01:32.577 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:32.583 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:04:04 10.100.0.9'], port_security=['fa:16:3e:15:04:04 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8d338640-2b5f-4571-8f76-b523064ee129', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cf5eb89ba0424237a313b1f369bcb92b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9876e36c-27f9-4526-a220-4685b5be270d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b451f093-c10b-47f6-9a8c-8892cb3ad351, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=e11a4d69-8666-420b-b13c-69a6427567a4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 08:01:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:32.585 157129 INFO neutron.agent.ovn.metadata.agent [-] Port e11a4d69-8666-420b-b13c-69a6427567a4 in datapath 4273798e-5f22-4d98-8d00-a22d1ea2c776 unbound from our chassis#033[00m
Feb 25 08:01:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:32.586 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4273798e-5f22-4d98-8d00-a22d1ea2c776, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 08:01:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:32.588 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[392ff96d-7173-439c-9d72-79057b35e443]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:01:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:32.589 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776 namespace which is not needed anymore#033[00m
Feb 25 08:01:32 np0005629333 nova_compute[244014]: 2026-02-25 13:01:32.589 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:32 np0005629333 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000091.scope: Deactivated successfully.
Feb 25 08:01:32 np0005629333 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000091.scope: Consumed 15.438s CPU time.
Feb 25 08:01:32 np0005629333 systemd-machined[210048]: Machine qemu-179-instance-00000091 terminated.
Feb 25 08:01:32 np0005629333 neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776[375453]: [NOTICE]   (375457) : haproxy version is 2.8.14-c23fe91
Feb 25 08:01:32 np0005629333 neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776[375453]: [NOTICE]   (375457) : path to executable is /usr/sbin/haproxy
Feb 25 08:01:32 np0005629333 neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776[375453]: [WARNING]  (375457) : Exiting Master process...
Feb 25 08:01:32 np0005629333 neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776[375453]: [ALERT]    (375457) : Current worker (375459) exited with code 143 (Terminated)
Feb 25 08:01:32 np0005629333 neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776[375453]: [WARNING]  (375457) : All workers exited. Exiting... (0)
Feb 25 08:01:32 np0005629333 systemd[1]: libpod-563da1d83ed457a7fbc129442ed41559a2aa55f8da57581e46f1589881a8ff7d.scope: Deactivated successfully.
Feb 25 08:01:32 np0005629333 NetworkManager[49836]: <info>  [1772024492.7342] manager: (tape11a4d69-86): new Tun device (/org/freedesktop/NetworkManager/Devices/642)
Feb 25 08:01:32 np0005629333 podman[378433]: 2026-02-25 13:01:32.735331334 +0000 UTC m=+0.046919154 container died 563da1d83ed457a7fbc129442ed41559a2aa55f8da57581e46f1589881a8ff7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 25 08:01:32 np0005629333 nova_compute[244014]: 2026-02-25 13:01:32.749 244018 INFO nova.virt.libvirt.driver [-] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Instance destroyed successfully.#033[00m
Feb 25 08:01:32 np0005629333 nova_compute[244014]: 2026-02-25 13:01:32.749 244018 DEBUG nova.objects.instance [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lazy-loading 'resources' on Instance uuid 8d338640-2b5f-4571-8f76-b523064ee129 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 08:01:32 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-563da1d83ed457a7fbc129442ed41559a2aa55f8da57581e46f1589881a8ff7d-userdata-shm.mount: Deactivated successfully.
Feb 25 08:01:32 np0005629333 nova_compute[244014]: 2026-02-25 13:01:32.775 244018 DEBUG nova.virt.libvirt.vif [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T12:59:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1793809408',display_name='tempest-TestSnapshotPattern-server-1793809408',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1793809408',id=145,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDCWEdl3+ajGxJOkCeAxfCSENIRZwbPgzOnOZcpgV3Hlv4XmE2Mrj2poBzyZ23ScEZYFPY48VJJN3YnpuQrByWx+iTcrPbMvMX+80KQEtNnqR2GlURJo6gN1+GRlPtyo4Q==',key_name='tempest-TestSnapshotPattern-301066188',keypairs=<?>,launch_index=0,launched_at=2026-02-25T13:00:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cf5eb89ba0424237a313b1f369bcb92b',ramdisk_id='',reservation_id='r-b4g1xjnl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-1122231160',owner_user_name='tempest-TestSnapshotPattern-1122231160-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T13:00:28Z,user_data=None,user_id='7592542cdf7f423c86332695423dbe79',uuid=8d338640-2b5f-4571-8f76-b523064ee129,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e11a4d69-8666-420b-b13c-69a6427567a4", "address": "fa:16:3e:15:04:04", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape11a4d69-86", "ovs_interfaceid": "e11a4d69-8666-420b-b13c-69a6427567a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 08:01:32 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c064d32c5a8bc9b32dbc5298e999e73ca9ab9f090eb5ef23c399a7752b4fd87b-merged.mount: Deactivated successfully.
Feb 25 08:01:32 np0005629333 nova_compute[244014]: 2026-02-25 13:01:32.775 244018 DEBUG nova.network.os_vif_util [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Converting VIF {"id": "e11a4d69-8666-420b-b13c-69a6427567a4", "address": "fa:16:3e:15:04:04", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape11a4d69-86", "ovs_interfaceid": "e11a4d69-8666-420b-b13c-69a6427567a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 08:01:32 np0005629333 nova_compute[244014]: 2026-02-25 13:01:32.778 244018 DEBUG nova.network.os_vif_util [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:15:04:04,bridge_name='br-int',has_traffic_filtering=True,id=e11a4d69-8666-420b-b13c-69a6427567a4,network=Network(4273798e-5f22-4d98-8d00-a22d1ea2c776),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape11a4d69-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 08:01:32 np0005629333 nova_compute[244014]: 2026-02-25 13:01:32.778 244018 DEBUG os_vif [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:04:04,bridge_name='br-int',has_traffic_filtering=True,id=e11a4d69-8666-420b-b13c-69a6427567a4,network=Network(4273798e-5f22-4d98-8d00-a22d1ea2c776),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape11a4d69-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 08:01:32 np0005629333 nova_compute[244014]: 2026-02-25 13:01:32.783 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:32 np0005629333 nova_compute[244014]: 2026-02-25 13:01:32.784 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape11a4d69-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:01:32 np0005629333 nova_compute[244014]: 2026-02-25 13:01:32.786 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:32 np0005629333 nova_compute[244014]: 2026-02-25 13:01:32.790 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 08:01:32 np0005629333 podman[378433]: 2026-02-25 13:01:32.794120673 +0000 UTC m=+0.105708513 container cleanup 563da1d83ed457a7fbc129442ed41559a2aa55f8da57581e46f1589881a8ff7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 08:01:32 np0005629333 nova_compute[244014]: 2026-02-25 13:01:32.794 244018 INFO os_vif [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:04:04,bridge_name='br-int',has_traffic_filtering=True,id=e11a4d69-8666-420b-b13c-69a6427567a4,network=Network(4273798e-5f22-4d98-8d00-a22d1ea2c776),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape11a4d69-86')
Feb 25 08:01:32 np0005629333 systemd[1]: libpod-conmon-563da1d83ed457a7fbc129442ed41559a2aa55f8da57581e46f1589881a8ff7d.scope: Deactivated successfully.
Feb 25 08:01:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2439: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 7.5 KiB/s wr, 150 op/s
Feb 25 08:01:32 np0005629333 podman[378478]: 2026-02-25 13:01:32.892007004 +0000 UTC m=+0.070850190 container remove 563da1d83ed457a7fbc129442ed41559a2aa55f8da57581e46f1589881a8ff7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 25 08:01:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:32.900 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[220f47c8-1b28-4c7b-8718-5de859b2a336]: (4, ('Wed Feb 25 01:01:32 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776 (563da1d83ed457a7fbc129442ed41559a2aa55f8da57581e46f1589881a8ff7d)\n563da1d83ed457a7fbc129442ed41559a2aa55f8da57581e46f1589881a8ff7d\nWed Feb 25 01:01:32 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776 (563da1d83ed457a7fbc129442ed41559a2aa55f8da57581e46f1589881a8ff7d)\n563da1d83ed457a7fbc129442ed41559a2aa55f8da57581e46f1589881a8ff7d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 08:01:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:32.902 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[30915a4e-bbeb-4095-9ac8-a9f733fab509]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 08:01:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:32.904 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4273798e-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 08:01:32 np0005629333 kernel: tap4273798e-50: left promiscuous mode
Feb 25 08:01:32 np0005629333 nova_compute[244014]: 2026-02-25 13:01:32.907 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:01:32 np0005629333 nova_compute[244014]: 2026-02-25 13:01:32.914 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:01:32 np0005629333 nova_compute[244014]: 2026-02-25 13:01:32.915 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:01:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:32.919 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[41ea2a7a-aa4c-4f5b-a6e4-1312f3e0131f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 08:01:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:32.935 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5f31444e-fc59-4993-8365-8c0467ce18b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 08:01:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:32.936 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[98fea3c1-fa33-455e-ad14-b501494bded6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 08:01:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:32.952 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[86d5053a-8c80-480f-a768-1f10a5956f04]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 636989, 'reachable_time': 21227, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378506, 'error': None, 'target': 'ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
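The privsep reply above is a raw RTM_NEWLINK netlink dump for the loopback device inside the ovnmeta namespace (note the 'target' field in the header), taken just before the namespace is torn down. A sketch of how such a dump is produced with pyroute2, which neutron drives through its privsep daemon (run here against the current namespace for simplicity):

    from pyroute2 import IPRoute

    with IPRoute() as ipr:
        # link("get", ...) returns RTM_NEWLINK messages carrying the same
        # IFLA_* attributes (IFLA_IFNAME, IFLA_MTU, IFLA_STATS64, ...)
        # seen in the reply above.
        for msg in ipr.link("get", ifname="lo"):
            print(msg.get_attr("IFLA_IFNAME"), msg.get_attr("IFLA_MTU"))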
Feb 25 08:01:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:32.954 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 25 08:01:32 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:32.955 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[721419f8-9542-4084-9ba0-2b0ba8e623ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 08:01:32 np0005629333 systemd[1]: run-netns-ovnmeta\x2d4273798e\x2d5f22\x2d4d98\x2d8d00\x2da22d1ea2c776.mount: Deactivated successfully.
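With the haproxy sidecar gone and the tap port removed, the agent deletes the metadata namespace itself (the remove_netns line above), and systemd then reaps the now-empty bind mount under /run/netns. A minimal sketch of the underlying call, assuming pyroute2 as neutron's privileged ip_lib uses:

    from pyroute2 import netns

    # Equivalent to `ip netns delete ovnmeta-<uuid>`.
    netns.remove("ovnmeta-4273798e-5f22-4d98-8d00-a22d1ea2c776")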
Feb 25 08:01:33 np0005629333 nova_compute[244014]: 2026-02-25 13:01:33.209 244018 INFO nova.virt.libvirt.driver [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Deleting instance files /var/lib/nova/instances/8d338640-2b5f-4571-8f76-b523064ee129_del
Feb 25 08:01:33 np0005629333 nova_compute[244014]: 2026-02-25 13:01:33.210 244018 INFO nova.virt.libvirt.driver [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Deletion of /var/lib/nova/instances/8d338640-2b5f-4571-8f76-b523064ee129_del complete
Feb 25 08:01:33 np0005629333 nova_compute[244014]: 2026-02-25 13:01:33.292 244018 INFO nova.compute.manager [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Took 0.78 seconds to destroy the instance on the hypervisor.
Feb 25 08:01:33 np0005629333 nova_compute[244014]: 2026-02-25 13:01:33.292 244018 DEBUG oslo.service.loopingcall [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 25 08:01:33 np0005629333 nova_compute[244014]: 2026-02-25 13:01:33.293 244018 DEBUG nova.compute.manager [-] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 25 08:01:33 np0005629333 nova_compute[244014]: 2026-02-25 13:01:33.293 244018 DEBUG nova.network.neutron [-] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 25 08:01:33 np0005629333 nova_compute[244014]: 2026-02-25 13:01:33.570 244018 DEBUG nova.network.neutron [req-a3351704-1229-42e1-83d9-77f38108760e req-182a0627-6c1f-4dfa-9368-29c6ce90d816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Updated VIF entry in instance network info cache for port e11a4d69-8666-420b-b13c-69a6427567a4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 08:01:33 np0005629333 nova_compute[244014]: 2026-02-25 13:01:33.570 244018 DEBUG nova.network.neutron [req-a3351704-1229-42e1-83d9-77f38108760e req-182a0627-6c1f-4dfa-9368-29c6ce90d816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Updating instance_info_cache with network_info: [{"id": "e11a4d69-8666-420b-b13c-69a6427567a4", "address": "fa:16:3e:15:04:04", "network": {"id": "4273798e-5f22-4d98-8d00-a22d1ea2c776", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-142504463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf5eb89ba0424237a313b1f369bcb92b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape11a4d69-86", "ovs_interfaceid": "e11a4d69-8666-420b-b13c-69a6427567a4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 08:01:33 np0005629333 nova_compute[244014]: 2026-02-25 13:01:33.571 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "ea4e69b0-bf04-4637-847b-9807c884d103" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:01:33 np0005629333 nova_compute[244014]: 2026-02-25 13:01:33.572 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:01:33 np0005629333 nova_compute[244014]: 2026-02-25 13:01:33.596 244018 DEBUG oslo_concurrency.lockutils [req-a3351704-1229-42e1-83d9-77f38108760e req-182a0627-6c1f-4dfa-9368-29c6ce90d816 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8d338640-2b5f-4571-8f76-b523064ee129" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 08:01:33 np0005629333 nova_compute[244014]: 2026-02-25 13:01:33.597 244018 DEBUG nova.compute.manager [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 08:01:33 np0005629333 nova_compute[244014]: 2026-02-25 13:01:33.667 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:01:33 np0005629333 nova_compute[244014]: 2026-02-25 13:01:33.668 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
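The Acquiring/acquired/released triplets above all come from oslo.concurrency's lockutils wrappers, which serialize work per instance UUID and per resource tracker. A minimal sketch of both forms, with lock names taken from the log for illustration:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def instance_claim():
        # Runs with the named lock held; the wrapper emits the
        # "acquired ... waited" and "released ... held" DEBUG lines.
        pass

    with lockutils.lock("ea4e69b0-bf04-4637-847b-9807c884d103"):
        pass  # per-instance critical section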
Feb 25 08:01:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:01:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Feb 25 08:01:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Feb 25 08:01:33 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Feb 25 08:01:33 np0005629333 nova_compute[244014]: 2026-02-25 13:01:33.827 244018 DEBUG nova.virt.hardware [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 08:01:33 np0005629333 nova_compute[244014]: 2026-02-25 13:01:33.828 244018 INFO nova.compute.claims [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Claim successful on node compute-0.ctlplane.example.com
Feb 25 08:01:33 np0005629333 nova_compute[244014]: 2026-02-25 13:01:33.908 244018 DEBUG nova.network.neutron [-] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 08:01:33 np0005629333 nova_compute[244014]: 2026-02-25 13:01:33.926 244018 INFO nova.compute.manager [-] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Took 0.63 seconds to deallocate network for instance.
Feb 25 08:01:33 np0005629333 nova_compute[244014]: 2026-02-25 13:01:33.951 244018 DEBUG oslo_concurrency.processutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:01:33 np0005629333 nova_compute[244014]: 2026-02-25 13:01:33.989 244018 DEBUG oslo_concurrency.lockutils [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:01:34 np0005629333 nova_compute[244014]: 2026-02-25 13:01:34.426 244018 DEBUG nova.compute.manager [req-316772cd-74e0-42fb-b4b9-159cb72dc279 req-93f69feb-be7d-45b3-bd95-c389b52eeb4f 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Received event network-vif-deleted-e11a4d69-8666-420b-b13c-69a6427567a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 08:01:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:01:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/573217387' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:01:34 np0005629333 nova_compute[244014]: 2026-02-25 13:01:34.515 244018 DEBUG oslo_concurrency.processutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:01:34 np0005629333 nova_compute[244014]: 2026-02-25 13:01:34.523 244018 DEBUG nova.compute.provider_tree [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:01:34 np0005629333 nova_compute[244014]: 2026-02-25 13:01:34.545 244018 DEBUG nova.scheduler.client.report [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
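Given the inventory in the line above, the capacity placement can schedule against is (total - reserved) * allocation_ratio per resource class. A quick check with the logged numbers:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2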
Feb 25 08:01:34 np0005629333 nova_compute[244014]: 2026-02-25 13:01:34.577 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:01:34 np0005629333 nova_compute[244014]: 2026-02-25 13:01:34.577 244018 DEBUG nova.compute.manager [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 08:01:34 np0005629333 nova_compute[244014]: 2026-02-25 13:01:34.580 244018 DEBUG oslo_concurrency.lockutils [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:01:34 np0005629333 nova_compute[244014]: 2026-02-25 13:01:34.648 244018 DEBUG nova.compute.manager [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 08:01:34 np0005629333 nova_compute[244014]: 2026-02-25 13:01:34.648 244018 DEBUG nova.network.neutron [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 08:01:34 np0005629333 nova_compute[244014]: 2026-02-25 13:01:34.655 244018 DEBUG oslo_concurrency.processutils [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:01:34 np0005629333 nova_compute[244014]: 2026-02-25 13:01:34.715 244018 INFO nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 08:01:34 np0005629333 nova_compute[244014]: 2026-02-25 13:01:34.738 244018 DEBUG nova.compute.manager [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 08:01:34 np0005629333 nova_compute[244014]: 2026-02-25 13:01:34.831 244018 DEBUG nova.compute.manager [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 08:01:34 np0005629333 nova_compute[244014]: 2026-02-25 13:01:34.834 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 08:01:34 np0005629333 nova_compute[244014]: 2026-02-25 13:01:34.834 244018 INFO nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Creating image(s)
Feb 25 08:01:34 np0005629333 nova_compute[244014]: 2026-02-25 13:01:34.872 244018 DEBUG nova.storage.rbd_utils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image ea4e69b0-bf04-4637-847b-9807c884d103_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 08:01:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2441: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 7.5 KiB/s wr, 151 op/s
Feb 25 08:01:34 np0005629333 nova_compute[244014]: 2026-02-25 13:01:34.905 244018 DEBUG nova.storage.rbd_utils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image ea4e69b0-bf04-4637-847b-9807c884d103_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 08:01:34 np0005629333 nova_compute[244014]: 2026-02-25 13:01:34.933 244018 DEBUG nova.storage.rbd_utils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image ea4e69b0-bf04-4637-847b-9807c884d103_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 08:01:34 np0005629333 nova_compute[244014]: 2026-02-25 13:01:34.936 244018 DEBUG oslo_concurrency.processutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:01:35 np0005629333 nova_compute[244014]: 2026-02-25 13:01:35.008 244018 DEBUG oslo_concurrency.processutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
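The qemu-img probe above runs under oslo.concurrency's prlimit wrapper, which caps address space at 1 GiB and CPU time at 30 s so a malformed base image cannot hang or balloon the compute service. A sketch of the same invocation through processutils (arguments copied from the logged command line):

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6",
        "--force-share", "--output=json",
        # Re-executes via `python3 -m oslo_concurrency.prlimit --as=... --cpu=...`
        prlimit=processutils.ProcessLimits(address_space=1073741824, cpu_time=30),
    )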
Feb 25 08:01:35 np0005629333 nova_compute[244014]: 2026-02-25 13:01:35.009 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:01:35 np0005629333 nova_compute[244014]: 2026-02-25 13:01:35.010 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:01:35 np0005629333 nova_compute[244014]: 2026-02-25 13:01:35.011 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:01:35 np0005629333 nova_compute[244014]: 2026-02-25 13:01:35.042 244018 DEBUG nova.storage.rbd_utils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image ea4e69b0-bf04-4637-847b-9807c884d103_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 08:01:35 np0005629333 nova_compute[244014]: 2026-02-25 13:01:35.046 244018 DEBUG oslo_concurrency.processutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 ea4e69b0-bf04-4637-847b-9807c884d103_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:01:35 np0005629333 nova_compute[244014]: 2026-02-25 13:01:35.083 244018 DEBUG nova.policy [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea895f651dd742a7b5eb2d63fb34641c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b9699483122f465084e3147e4904d13d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 08:01:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:01:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/817129133' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:01:35 np0005629333 nova_compute[244014]: 2026-02-25 13:01:35.280 244018 DEBUG oslo_concurrency.processutils [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:01:35 np0005629333 nova_compute[244014]: 2026-02-25 13:01:35.286 244018 DEBUG nova.compute.provider_tree [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:01:35 np0005629333 nova_compute[244014]: 2026-02-25 13:01:35.305 244018 DEBUG nova.scheduler.client.report [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 08:01:35 np0005629333 nova_compute[244014]: 2026-02-25 13:01:35.333 244018 DEBUG oslo_concurrency.lockutils [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:01:35 np0005629333 nova_compute[244014]: 2026-02-25 13:01:35.372 244018 INFO nova.scheduler.client.report [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Deleted allocations for instance 8d338640-2b5f-4571-8f76-b523064ee129
Feb 25 08:01:35 np0005629333 nova_compute[244014]: 2026-02-25 13:01:35.403 244018 DEBUG oslo_concurrency.processutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 ea4e69b0-bf04-4637-847b-9807c884d103_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.357s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:01:35 np0005629333 nova_compute[244014]: 2026-02-25 13:01:35.445 244018 DEBUG oslo_concurrency.lockutils [None req-4829a0e2-845a-41ee-a415-e5668316a41a 7592542cdf7f423c86332695423dbe79 cf5eb89ba0424237a313b1f369bcb92b - - default default] Lock "8d338640-2b5f-4571-8f76-b523064ee129" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:01:35 np0005629333 nova_compute[244014]: 2026-02-25 13:01:35.487 244018 DEBUG nova.storage.rbd_utils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] resizing rbd image ea4e69b0-bf04-4637-847b-9807c884d103_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
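The root disk is imported into the vms pool and then grown to the flavor's 1 GiB root disk (1073741824 bytes, per the resize line above). Nova performs the resize through librbd in rbd_utils, but a sketch of the CLI equivalent of the two steps (credentials as in the logged command) is:

    import subprocess

    base = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
    disk = "ea4e69b0-bf04-4637-847b-9807c884d103_disk"
    ceph = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    subprocess.run(["rbd", "import", "--pool", "vms", base, disk,
                    "--image-format=2", *ceph], check=True)
    # rbd sizes are in MiB by default: 1024 MiB == 1073741824 bytes.
    subprocess.run(["rbd", "resize", "--pool", "vms", "--image", disk,
                    "--size", "1024", *ceph], check=True)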
Feb 25 08:01:35 np0005629333 nova_compute[244014]: 2026-02-25 13:01:35.593 244018 DEBUG nova.objects.instance [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'migration_context' on Instance uuid ea4e69b0-bf04-4637-847b-9807c884d103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 08:01:35 np0005629333 nova_compute[244014]: 2026-02-25 13:01:35.606 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 08:01:35 np0005629333 nova_compute[244014]: 2026-02-25 13:01:35.606 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Ensure instance console log exists: /var/lib/nova/instances/ea4e69b0-bf04-4637-847b-9807c884d103/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 08:01:35 np0005629333 nova_compute[244014]: 2026-02-25 13:01:35.607 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:01:35 np0005629333 nova_compute[244014]: 2026-02-25 13:01:35.607 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:01:35 np0005629333 nova_compute[244014]: 2026-02-25 13:01:35.608 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:01:35 np0005629333 nova_compute[244014]: 2026-02-25 13:01:35.647 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:01:35 np0005629333 nova_compute[244014]: 2026-02-25 13:01:35.974 244018 DEBUG nova.network.neutron [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Successfully created port: 5a5e6571-f930-49df-8399-0abb9daf7f4f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 08:01:36 np0005629333 nova_compute[244014]: 2026-02-25 13:01:36.790 244018 DEBUG nova.network.neutron [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Successfully updated port: 5a5e6571-f930-49df-8399-0abb9daf7f4f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 08:01:36 np0005629333 nova_compute[244014]: 2026-02-25 13:01:36.813 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 08:01:36 np0005629333 nova_compute[244014]: 2026-02-25 13:01:36.814 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquired lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 08:01:36 np0005629333 nova_compute[244014]: 2026-02-25 13:01:36.814 244018 DEBUG nova.network.neutron [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 08:01:36 np0005629333 nova_compute[244014]: 2026-02-25 13:01:36.878 244018 DEBUG nova.compute.manager [req-46d40674-17a5-49f3-bd0d-b1e58c6d926f req-67b6b340-d115-4771-83a9-8d2adbef90f4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Received event network-changed-5a5e6571-f930-49df-8399-0abb9daf7f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 08:01:36 np0005629333 nova_compute[244014]: 2026-02-25 13:01:36.879 244018 DEBUG nova.compute.manager [req-46d40674-17a5-49f3-bd0d-b1e58c6d926f req-67b6b340-d115-4771-83a9-8d2adbef90f4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Refreshing instance network info cache due to event network-changed-5a5e6571-f930-49df-8399-0abb9daf7f4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 08:01:36 np0005629333 nova_compute[244014]: 2026-02-25 13:01:36.879 244018 DEBUG oslo_concurrency.lockutils [req-46d40674-17a5-49f3-bd0d-b1e58c6d926f req-67b6b340-d115-4771-83a9-8d2adbef90f4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 08:01:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2442: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 3.4 KiB/s wr, 65 op/s
Feb 25 08:01:37 np0005629333 nova_compute[244014]: 2026-02-25 13:01:37.192 244018 DEBUG nova.network.neutron [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 08:01:37 np0005629333 nova_compute[244014]: 2026-02-25 13:01:37.786 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:01:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:01:38 np0005629333 nova_compute[244014]: 2026-02-25 13:01:38.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:01:38 np0005629333 nova_compute[244014]: 2026-02-25 13:01:38.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 08:01:38 np0005629333 nova_compute[244014]: 2026-02-25 13:01:38.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 08:01:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2443: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 2.7 MiB/s wr, 147 op/s
Feb 25 08:01:38 np0005629333 nova_compute[244014]: 2026-02-25 13:01:38.894 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Feb 25 08:01:38 np0005629333 nova_compute[244014]: 2026-02-25 13:01:38.894 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
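_heal_instance_info_cache and update_available_resource are oslo.service periodic tasks; the "Running periodic task" lines above and below come from its dispatcher. A minimal sketch of how such a task is declared (the spacing value is illustrative, not nova's configuration):

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            # Called by run_periodic_tasks, which emits the
            # "Running periodic task" DEBUG line seen in the log.
            pass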
Feb 25 08:01:39 np0005629333 nova_compute[244014]: 2026-02-25 13:01:39.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:01:39 np0005629333 nova_compute[244014]: 2026-02-25 13:01:39.937 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:01:39 np0005629333 nova_compute[244014]: 2026-02-25 13:01:39.938 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:01:39 np0005629333 nova_compute[244014]: 2026-02-25 13:01:39.938 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:01:39 np0005629333 nova_compute[244014]: 2026-02-25 13:01:39.938 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 08:01:39 np0005629333 nova_compute[244014]: 2026-02-25 13:01:39.938 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.041 244018 DEBUG nova.network.neutron [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Updating instance_info_cache with network_info: [{"id": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "address": "fa:16:3e:8b:fe:07", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a5e6571-f9", "ovs_interfaceid": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.071 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Releasing lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.071 244018 DEBUG nova.compute.manager [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Instance network_info: |[{"id": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "address": "fa:16:3e:8b:fe:07", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a5e6571-f9", "ovs_interfaceid": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.072 244018 DEBUG oslo_concurrency.lockutils [req-46d40674-17a5-49f3-bd0d-b1e58c6d926f req-67b6b340-d115-4771-83a9-8d2adbef90f4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.072 244018 DEBUG nova.network.neutron [req-46d40674-17a5-49f3-bd0d-b1e58c6d926f req-67b6b340-d115-4771-83a9-8d2adbef90f4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Refreshing network info cache for port 5a5e6571-f930-49df-8399-0abb9daf7f4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.074 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Start _get_guest_xml network_info=[{"id": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "address": "fa:16:3e:8b:fe:07", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a5e6571-f9", "ovs_interfaceid": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.078 244018 WARNING nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.082 244018 DEBUG nova.virt.libvirt.host [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.082 244018 DEBUG nova.virt.libvirt.host [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.086 244018 DEBUG nova.virt.libvirt.host [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.086 244018 DEBUG nova.virt.libvirt.host [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.086 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.087 244018 DEBUG nova.virt.hardware [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.087 244018 DEBUG nova.virt.hardware [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.087 244018 DEBUG nova.virt.hardware [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.088 244018 DEBUG nova.virt.hardware [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.088 244018 DEBUG nova.virt.hardware [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.088 244018 DEBUG nova.virt.hardware [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.088 244018 DEBUG nova.virt.hardware [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.089 244018 DEBUG nova.virt.hardware [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.090 244018 DEBUG nova.virt.hardware [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.090 244018 DEBUG nova.virt.hardware [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.090 244018 DEBUG nova.virt.hardware [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.093 244018 DEBUG oslo_concurrency.processutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:01:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:01:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/257235082' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.439 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
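These Running cmd / CMD returned pairs are plain CLI round trips through oslo.concurrency. A sketch of the same call, with the --id/--conf flags copied from the log and the JSON field names as found in current Ceph releases:

    import json
    from oslo_concurrency import processutils

    # processutils.execute returns a (stdout, stderr) tuple.
    stdout, _stderr = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    mon_map = json.loads(stdout)
    print(mon_map['epoch'], [m['name'] for m in mon_map['mons']])
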
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.566 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.567 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3434MB free_disk=59.96653531957418GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.567 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.568 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:01:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 08:01:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1383724056' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.642 244018 DEBUG oslo_concurrency.processutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.667 244018 DEBUG nova.storage.rbd_utils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image ea4e69b0-bf04-4637-847b-9807c884d103_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
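The "does not exist" probe maps onto the python-rbd binding, which signals a missing image with rbd.ImageNotFound. A minimal sketch with the pool and image names taken from the log; error handling is reduced to the one case the DEBUG line reports:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        with rbd.Image(ioctx, 'ea4e69b0-bf04-4637-847b-9807c884d103_disk.config'):
            print('image exists')
    except rbd.ImageNotFound:
        print('rbd image does not exist')   # matches the DEBUG line above
    finally:
        ioctx.close()
        cluster.shutdown()
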
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.671 244018 DEBUG oslo_concurrency.processutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.697 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.701 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance ea4e69b0-bf04-4637-847b-9807c884d103 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.702 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.702 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
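The used_ram/used_disk figures in the final view are simple sums: the inventory reported a minute later shows 512 MB of MEMORY_MB reserved and 1 GB of DISK_GB reserved, and the only guest is the 128 MB / 1 GB m1.nano instance. A worked check, assuming the 512 MB gap is exactly the reserved host memory:

    reserved_host_memory_mb = 512   # 'reserved' in the MEMORY_MB inventory below
    guests = [{'memory_mb': 128, 'root_gb': 1, 'vcpus': 1}]   # the one m1.nano guest
    used_ram = reserved_host_memory_mb + sum(g['memory_mb'] for g in guests)  # 640 MB
    used_disk = sum(g['root_gb'] for g in guests)                             # 1 GB
    used_vcpus = sum(g['vcpus'] for g in guests)                              # 1
    assert (used_ram, used_disk, used_vcpus) == (640, 1, 1)
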
Feb 25 08:01:40 np0005629333 nova_compute[244014]: 2026-02-25 13:01:40.741 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:01:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2444: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Feb 25 08:01:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 08:01:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3332419605' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.214 244018 DEBUG oslo_concurrency.processutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.216 244018 DEBUG nova.virt.libvirt.vif [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T13:01:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-852563561',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-852563561',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=148,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDG3qQBqZbx+RdRlYmfQhfQx6MRvbn5P1SDfQdH7zgAHK3JPnG4I4PJxwsk8GxkBiwXqDS8mMRQo7u+5+wHwxJ+FcVnrZkfYuYFmbh/Gbve/0HWzg16VZ7xAIbYve6W3FQ==',key_name='tempest-TestSecurityGroupsBasicOps-1940037159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-nfnujduq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T13:01:34Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=ea4e69b0-bf04-4637-847b-9807c884d103,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "address": "fa:16:3e:8b:fe:07", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a5e6571-f9", "ovs_interfaceid": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.217 244018 DEBUG nova.network.os_vif_util [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "address": "fa:16:3e:8b:fe:07", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a5e6571-f9", "ovs_interfaceid": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.218 244018 DEBUG nova.network.os_vif_util [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:fe:07,bridge_name='br-int',has_traffic_filtering=True,id=5a5e6571-f930-49df-8399-0abb9daf7f4f,network=Network(81e387f6-426b-4301-9279-fdf7597d2c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a5e6571-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.220 244018 DEBUG nova.objects.instance [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'pci_devices' on Instance uuid ea4e69b0-bf04-4637-847b-9807c884d103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.238 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] End _get_guest_xml xml=<domain type="kvm">
Feb 25 08:01:41 np0005629333 nova_compute[244014]:  <uuid>ea4e69b0-bf04-4637-847b-9807c884d103</uuid>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:  <name>instance-00000094</name>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-852563561</nova:name>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 13:01:40</nova:creationTime>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 08:01:41 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:        <nova:user uuid="ea895f651dd742a7b5eb2d63fb34641c">tempest-TestSecurityGroupsBasicOps-948360018-project-member</nova:user>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:        <nova:project uuid="b9699483122f465084e3147e4904d13d">tempest-TestSecurityGroupsBasicOps-948360018</nova:project>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:        <nova:port uuid="5a5e6571-f930-49df-8399-0abb9daf7f4f">
Feb 25 08:01:41 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <system>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <entry name="serial">ea4e69b0-bf04-4637-847b-9807c884d103</entry>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <entry name="uuid">ea4e69b0-bf04-4637-847b-9807c884d103</entry>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    </system>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:  <os>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:  </os>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:  <features>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:  </features>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:  </clock>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:  <devices>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/ea4e69b0-bf04-4637-847b-9807c884d103_disk">
Feb 25 08:01:41 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      </source>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 08:01:41 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      </auth>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    </disk>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/ea4e69b0-bf04-4637-847b-9807c884d103_disk.config">
Feb 25 08:01:41 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      </source>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 08:01:41 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      </auth>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    </disk>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:8b:fe:07"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <target dev="tap5a5e6571-f9"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    </interface>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/ea4e69b0-bf04-4637-847b-9807c884d103/console.log" append="off"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    </serial>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <video>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    </video>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    </rng>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 08:01:41 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 08:01:41 np0005629333 nova_compute[244014]:  </devices>
Feb 25 08:01:41 np0005629333 nova_compute[244014]: </domain>
Feb 25 08:01:41 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
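The domain XML above is what ultimately reaches libvirt. Launching such a definition by hand would look like this sketch with the libvirt Python binding; the file name is hypothetical, and Nova itself goes through nova.virt.libvirt.guest rather than calling the binding this directly:

    import libvirt

    xml = open('instance-00000094.xml').read()   # hypothetical dump of the XML above
    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)   # persist the domain definition
        dom.create()                # power it on (equivalent to 'virsh start')
        print(dom.name(), 'running with id', dom.ID())
    finally:
        conn.close()

The "Started Virtual Machine qemu-182-instance-00000094" systemd record further down is the visible result of this step.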
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.240 244018 DEBUG nova.compute.manager [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Preparing to wait for external event network-vif-plugged-5a5e6571-f930-49df-8399-0abb9daf7f4f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.240 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.241 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.241 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.242 244018 DEBUG nova.virt.libvirt.vif [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T13:01:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-852563561',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-852563561',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=148,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDG3qQBqZbx+RdRlYmfQhfQx6MRvbn5P1SDfQdH7zgAHK3JPnG4I4PJxwsk8GxkBiwXqDS8mMRQo7u+5+wHwxJ+FcVnrZkfYuYFmbh/Gbve/0HWzg16VZ7xAIbYve6W3FQ==',key_name='tempest-TestSecurityGroupsBasicOps-1940037159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-nfnujduq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T13:01:34Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=ea4e69b0-bf04-4637-847b-9807c884d103,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "address": "fa:16:3e:8b:fe:07", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a5e6571-f9", "ovs_interfaceid": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.243 244018 DEBUG nova.network.os_vif_util [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "address": "fa:16:3e:8b:fe:07", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a5e6571-f9", "ovs_interfaceid": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.243 244018 DEBUG nova.network.os_vif_util [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:fe:07,bridge_name='br-int',has_traffic_filtering=True,id=5a5e6571-f930-49df-8399-0abb9daf7f4f,network=Network(81e387f6-426b-4301-9279-fdf7597d2c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a5e6571-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.244 244018 DEBUG os_vif [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:fe:07,bridge_name='br-int',has_traffic_filtering=True,id=5a5e6571-f930-49df-8399-0abb9daf7f4f,network=Network(81e387f6-426b-4301-9279-fdf7597d2c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a5e6571-f9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.245 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.246 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.246 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.249 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.249 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a5e6571-f9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.250 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5a5e6571-f9, col_values=(('external_ids', {'iface-id': '5a5e6571-f930-49df-8399-0abb9daf7f4f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:fe:07', 'vm-uuid': 'ea4e69b0-bf04-4637-847b-9807c884d103'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:01:41 np0005629333 NetworkManager[49836]: <info>  [1772024501.2536] manager: (tap5a5e6571-f9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/643)
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.261 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.262 244018 INFO os_vif [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:fe:07,bridge_name='br-int',has_traffic_filtering=True,id=5a5e6571-f930-49df-8399-0abb9daf7f4f,network=Network(81e387f6-426b-4301-9279-fdf7597d2c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a5e6571-f9')#033[00m
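The AddPortCommand/DbSetCommand transaction above corresponds to client code along these lines with ovsdbapp. The socket path and timeout are assumptions, not os-vif's actual wiring, and the API names are taken from current ovsdbapp releases:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    # One transaction: add the tap port to br-int, then tag its Interface row
    # with the Neutron port metadata seen in the DbSetCommand above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap5a5e6571-f9', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap5a5e6571-f9',
            ('external_ids', {'iface-id': '5a5e6571-f930-49df-8399-0abb9daf7f4f',
                              'iface-status': 'active',
                              'attached-mac': 'fa:16:3e:8b:fe:07',
                              'vm-uuid': 'ea4e69b0-bf04-4637-847b-9807c884d103'})))

The iface-id written here is what ovn-controller matches against a Southbound logical port when it claims the binding further down.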
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.312 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.313 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.313 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No VIF found with MAC fa:16:3e:8b:fe:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.314 244018 INFO nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Using config drive#033[00m
Feb 25 08:01:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:01:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2489639036' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.344 244018 DEBUG nova.storage.rbd_utils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image ea4e69b0-bf04-4637-847b-9807c884d103_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.350 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.355 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.373 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.394 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:01:41 np0005629333 nova_compute[244014]: 2026-02-25 13:01:41.394 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:01:42 np0005629333 nova_compute[244014]: 2026-02-25 13:01:42.309 244018 INFO nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Creating config drive at /var/lib/nova/instances/ea4e69b0-bf04-4637-847b-9807c884d103/disk.config#033[00m
Feb 25 08:01:42 np0005629333 nova_compute[244014]: 2026-02-25 13:01:42.312 244018 DEBUG oslo_concurrency.processutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ea4e69b0-bf04-4637-847b-9807c884d103/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5nbp24b_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:01:42 np0005629333 nova_compute[244014]: 2026-02-25 13:01:42.375 244018 DEBUG nova.network.neutron [req-46d40674-17a5-49f3-bd0d-b1e58c6d926f req-67b6b340-d115-4771-83a9-8d2adbef90f4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Updated VIF entry in instance network info cache for port 5a5e6571-f930-49df-8399-0abb9daf7f4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 08:01:42 np0005629333 nova_compute[244014]: 2026-02-25 13:01:42.377 244018 DEBUG nova.network.neutron [req-46d40674-17a5-49f3-bd0d-b1e58c6d926f req-67b6b340-d115-4771-83a9-8d2adbef90f4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Updating instance_info_cache with network_info: [{"id": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "address": "fa:16:3e:8b:fe:07", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a5e6571-f9", "ovs_interfaceid": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:01:42 np0005629333 nova_compute[244014]: 2026-02-25 13:01:42.390 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:01:42 np0005629333 nova_compute[244014]: 2026-02-25 13:01:42.393 244018 DEBUG oslo_concurrency.lockutils [req-46d40674-17a5-49f3-bd0d-b1e58c6d926f req-67b6b340-d115-4771-83a9-8d2adbef90f4 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 08:01:42 np0005629333 nova_compute[244014]: 2026-02-25 13:01:42.453 244018 DEBUG oslo_concurrency.processutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ea4e69b0-bf04-4637-847b-9807c884d103/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp5nbp24b_" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
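The config drive is just an ISO9660 image with the volume label config-2. A sketch of the same invocation via subprocess, with every flag copied from the command line above; the staging directory is whatever metadata Nova laid out in its temp dir, elided here:

    import subprocess

    subprocess.run(
        ['/usr/bin/mkisofs', '-o', 'disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         # the publisher string is a single argv element, despite its spaces
         '-publisher', 'OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9',
         '-quiet', '-J', '-r', '-V', 'config-2',
         '/tmp/tmp5nbp24b_'],   # staging dir from the log; substitute your own
        check=True)

The resulting file is then pushed into the vms pool with rbd import and deleted locally, as the records below show.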
Feb 25 08:01:42 np0005629333 nova_compute[244014]: 2026-02-25 13:01:42.482 244018 DEBUG nova.storage.rbd_utils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image ea4e69b0-bf04-4637-847b-9807c884d103_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:01:42 np0005629333 nova_compute[244014]: 2026-02-25 13:01:42.486 244018 DEBUG oslo_concurrency.processutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ea4e69b0-bf04-4637-847b-9807c884d103/disk.config ea4e69b0-bf04-4637-847b-9807c884d103_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:01:42 np0005629333 nova_compute[244014]: 2026-02-25 13:01:42.519 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024487.4897103, 0e4f3bd8-9df3-4329-afdc-27d91b98810d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:01:42 np0005629333 nova_compute[244014]: 2026-02-25 13:01:42.519 244018 INFO nova.compute.manager [-] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] VM Stopped (Lifecycle Event)#033[00m
Feb 25 08:01:42 np0005629333 nova_compute[244014]: 2026-02-25 13:01:42.541 244018 DEBUG nova.compute.manager [None req-05d1c5b1-8943-4255-805c-369cf7ebc239 - - - - - -] [instance: 0e4f3bd8-9df3-4329-afdc-27d91b98810d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:01:42 np0005629333 nova_compute[244014]: 2026-02-25 13:01:42.695 244018 DEBUG oslo_concurrency.processutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ea4e69b0-bf04-4637-847b-9807c884d103/disk.config ea4e69b0-bf04-4637-847b-9807c884d103_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:01:42 np0005629333 nova_compute[244014]: 2026-02-25 13:01:42.696 244018 INFO nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Deleting local config drive /var/lib/nova/instances/ea4e69b0-bf04-4637-847b-9807c884d103/disk.config because it was imported into RBD.#033[00m
Feb 25 08:01:42 np0005629333 kernel: tap5a5e6571-f9: entered promiscuous mode
Feb 25 08:01:42 np0005629333 NetworkManager[49836]: <info>  [1772024502.7560] manager: (tap5a5e6571-f9): new Tun device (/org/freedesktop/NetworkManager/Devices/644)
Feb 25 08:01:42 np0005629333 nova_compute[244014]: 2026-02-25 13:01:42.758 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:42 np0005629333 ovn_controller[147040]: 2026-02-25T13:01:42Z|01566|binding|INFO|Claiming lport 5a5e6571-f930-49df-8399-0abb9daf7f4f for this chassis.
Feb 25 08:01:42 np0005629333 ovn_controller[147040]: 2026-02-25T13:01:42Z|01567|binding|INFO|5a5e6571-f930-49df-8399-0abb9daf7f4f: Claiming fa:16:3e:8b:fe:07 10.100.0.5
Feb 25 08:01:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.768 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:fe:07 10.100.0.5'], port_security=['fa:16:3e:8b:fe:07 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ea4e69b0-bf04-4637-847b-9807c884d103', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81e387f6-426b-4301-9279-fdf7597d2c7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '11991dde-8004-44c6-851b-7ca96866b068 266cd610-5f0f-4f4d-8bfb-2ebdf5d189b9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2daa638-d0b7-4114-8331-0d5928db072a, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=5a5e6571-f930-49df-8399-0abb9daf7f4f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 08:01:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.770 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 5a5e6571-f930-49df-8399-0abb9daf7f4f in datapath 81e387f6-426b-4301-9279-fdf7597d2c7e bound to our chassis#033[00m
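[annotation] The agent reached "bound to our chassis" because the ovsdbapp row event just above matched: the Port_Binding row's chassis column went from `[]` (see `old=Port_Binding(chassis=[])`) to this chassis. A stripped-down event class in the same shape, assuming ovsdbapp's RowEvent interface; `agent.chassis_name` and `agent.provision_port` are placeholders for whatever the consuming agent exposes, and the real neutron class also filters on datapath and requested chassis:

```python
from ovsdbapp.backend.ovs_idl import event as row_event


class PortBindingUpdatedEvent(row_event.RowEvent):
    """Sketch: fire when a Port_Binding row gets claimed by our chassis."""

    def __init__(self, agent):
        self.agent = agent
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def match_fn(self, event, row, old=None):
        # Only interesting when the chassis column changed and now points at
        # us; `old` carries just the changed columns, hence the hasattr check.
        if not hasattr(old, 'chassis') or not row.chassis:
            return False
        return row.chassis[0].name == self.agent.chassis_name

    def run(self, event, row, old):
        self.agent.provision_port(row.logical_port, row.datapath)
```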
Feb 25 08:01:42 np0005629333 ovn_controller[147040]: 2026-02-25T13:01:42Z|01568|binding|INFO|Setting lport 5a5e6571-f930-49df-8399-0abb9daf7f4f ovn-installed in OVS
Feb 25 08:01:42 np0005629333 ovn_controller[147040]: 2026-02-25T13:01:42Z|01569|binding|INFO|Setting lport 5a5e6571-f930-49df-8399-0abb9daf7f4f up in Southbound
Feb 25 08:01:42 np0005629333 nova_compute[244014]: 2026-02-25 13:01:42.773 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.775 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 81e387f6-426b-4301-9279-fdf7597d2c7e#033[00m
Feb 25 08:01:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.786 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a6493f-5d59-4043-a262-80a1df048c11]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:01:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.787 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap81e387f6-41 in ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Feb 25 08:01:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.790 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap81e387f6-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 08:01:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.790 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[334cec34-a7e8-4e91-8128-3044137894da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:01:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.791 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ff807221-a95e-4eb4-a683-88b6bfa03b16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
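[annotation] The privsep replies above belong to the VETH provisioning announced at 13:01:42.787: the "Interface tap81e387f6-40 not found" probe is the pre-create check, then one veth end (tap81e387f6-40) stays in the root namespace for OVS while the peer (tap81e387f6-41) is moved into ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e. A rough pyroute2 equivalent, assuming the namespace already exists under /run/netns (a sketch, not neutron's privileged ip_lib code):

```python
from pyroute2 import IPRoute

ns = 'ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e'
ipr = IPRoute()
try:
    # Create the pair in the root namespace ...
    ipr.link('add', ifname='tap81e387f6-40', kind='veth',
             peer='tap81e387f6-41')
    # ... then push the inner end into the metadata namespace.
    idx = ipr.link_lookup(ifname='tap81e387f6-41')[0]
    ipr.link('set', index=idx, net_ns_fd=ns)
finally:
    ipr.close()
```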
Feb 25 08:01:42 np0005629333 systemd-udevd[378899]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 08:01:42 np0005629333 systemd-machined[210048]: New machine qemu-182-instance-00000094.
Feb 25 08:01:42 np0005629333 NetworkManager[49836]: <info>  [1772024502.8088] device (tap5a5e6571-f9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 08:01:42 np0005629333 NetworkManager[49836]: <info>  [1772024502.8096] device (tap5a5e6571-f9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 08:01:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:01:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.810 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[a013265f-cdb8-4e3d-ade8-35ca3486c24a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:01:42 np0005629333 systemd[1]: Started Virtual Machine qemu-182-instance-00000094.
Feb 25 08:01:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:01:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:01:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:01:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00036250297512604155 of space, bias 1.0, pg target 0.10875089253781246 quantized to 32 (current 32)
Feb 25 08:01:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:01:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:01:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:01:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:01:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:01:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002494450541393998 of space, bias 1.0, pg target 0.7483351624181993 quantized to 32 (current 32)
Feb 25 08:01:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:01:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4133407414918176e-06 of space, bias 4.0, pg target 0.0016960088897901811 quantized to 16 (current 16)
Feb 25 08:01:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:01:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:01:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:01:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:01:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:01:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:01:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:01:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:01:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:01:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
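[annotation] Every pg_autoscaler pair above fits one formula: pg_target = used_ratio × bias × 300, where 300 is evidently the cluster PG budget (consistent with mon_target_pg_per_osd = 100 on the 3 OSDs backing the 60 GiB shown in the pgmap lines; that constant is inferred from these numbers, not logged directly). Worked check in Python:

```python
def pg_target(used_ratio, bias, budget=300):
    # budget ~= mon_target_pg_per_osd * num_osds (assumed 100 * 3 here)
    return used_ratio * bias * budget

# Values copied from the pg_autoscaler lines above.
assert abs(pg_target(7.185749983720779e-06, 1.0) - 0.0021557249951162337) < 1e-15  # '.mgr'
assert abs(pg_target(0.002494450541393998, 1.0) - 0.7483351624181993) < 1e-12      # 'images'
assert abs(pg_target(1.4133407414918176e-06, 4.0) - 0.0016960088897901811) < 1e-15 # 'cephfs.cephfs.meta'
```

The "quantized to" figure is then the nearest power of two floored at each pool's pg_num_min, which is what these lines suggest: 32 for ordinary pools, 16 for the cephfs metadata pool, 1 for .mgr, all matching their current values, so no pool is resized.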
Feb 25 08:01:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.823 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[0551de6b-68a6-485d-bf1b-020897e41d8b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:01:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.848 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[383df195-9c83-4139-aab0-646c4f606062]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:01:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.854 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[b9c93cae-3909-45de-9ddb-1d5a87c0f590]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:01:42 np0005629333 NetworkManager[49836]: <info>  [1772024502.8546] manager: (tap81e387f6-40): new Veth device (/org/freedesktop/NetworkManager/Devices/645)
Feb 25 08:01:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.881 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ee5a05cf-8040-4db8-ba28-ba79797d98e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:01:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.884 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[91975f55-497b-433a-8975-bca5b2aa9ef7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:01:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2445: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 08:01:42 np0005629333 NetworkManager[49836]: <info>  [1772024502.9062] device (tap81e387f6-40): carrier: link connected
Feb 25 08:01:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.913 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[078b9c18-0851-45b4-81c7-da271d60066f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:01:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.930 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e12cd059-4ad0-4873-a85e-6d19b50f6544]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81e387f6-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:15:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647231, 'reachable_time': 28642, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378931, 'error': None, 'target': 'ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:01:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.941 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f364b3ad-91f6-4fe0-a6dd-de7eff0bbdda]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedd:15b1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647231, 'tstamp': 647231}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378932, 'error': None, 'target': 'ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:01:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.954 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e7a4ad76-b1c1-4daf-9564-1f8c62cb8b76]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81e387f6-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:15:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647231, 'reachable_time': 28642, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 378933, 'error': None, 'target': 'ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
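[annotation] The two oversized replies above are raw pyroute2 netlink messages (RTM_NEWLINK for tap81e387f6-41, RTM_NEWADDR for its link-local address) marshalled back through the privsep daemon; note the `'target': 'ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e'` header, meaning the query ran inside the metadata namespace. Roughly the same data can be pulled with pyroute2 directly, assuming the namespace exists under /run/netns (a sketch, not the agent's code path):

```python
from pyroute2 import NetNS

ns = NetNS('ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e')
try:
    for link in ns.get_links():
        # e.g. "tap81e387f6-41 fa:16:3e:dd:15:b1 up", matching the dump above
        print(link.get_attr('IFLA_IFNAME'),
              link.get_attr('IFLA_ADDRESS'),
              link['state'])
finally:
    ns.close()
```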
Feb 25 08:01:42 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:42.976 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9e2c0400-504f-4b87-a331-a085965c94ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:43.012 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[86ab515b-3ec7-44e1-8609-8c89221054a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:43.013 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81e387f6-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:43.013 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:43.014 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81e387f6-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.015 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:43 np0005629333 NetworkManager[49836]: <info>  [1772024503.0163] manager: (tap81e387f6-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/646)
Feb 25 08:01:43 np0005629333 kernel: tap81e387f6-40: entered promiscuous mode
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.018 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:43.019 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap81e387f6-40, col_values=(('external_ids', {'iface-id': '6e4a9a2d-90d8-4fd6-b7cc-e40dc20beeef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
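[annotation] The three transactions above are the OVS plumbing for the metadata port: drop tap81e387f6-40 from br-ex if present (a no-op here, hence "Transaction caused no change"), add it to br-int, and stamp the Interface with the Neutron port's iface-id so ovn-controller can bind it. A sketch under stated assumptions about ovsdbapp's Open_vSwitch helper (the from_server/Connection bootstrap follows ovsdbapp's documented pattern but is untested here; the agent also runs these as three single-command transactions, combined below for brevity):

```python
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

port, iface_id = 'tap81e387f6-40', '6e4a9a2d-90d8-4fd6-b7cc-e40dc20beeef'
with ovs.transaction(check_error=True) as txn:
    txn.add(ovs.del_port(port, bridge='br-ex', if_exists=True))  # no-op above
    txn.add(ovs.add_port('br-int', port, may_exist=True))
    txn.add(ovs.db_set('Interface', port,
                       ('external_ids', {'iface-id': iface_id})))
```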
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.021 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:43 np0005629333 ovn_controller[147040]: 2026-02-25T13:01:43Z|01570|binding|INFO|Releasing lport 6e4a9a2d-90d8-4fd6-b7cc-e40dc20beeef from this chassis (sb_readonly=0)
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.027 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:43.029 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/81e387f6-426b-4301-9279-fdf7597d2c7e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/81e387f6-426b-4301-9279-fdf7597d2c7e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:43.030 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[20a5563c-ebab-4993-9973-2d3a895f7e72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:43.030 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-81e387f6-426b-4301-9279-fdf7597d2c7e
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/81e387f6-426b-4301-9279-fdf7597d2c7e.pid.haproxy
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 81e387f6-426b-4301-9279-fdf7597d2c7e
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 08:01:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:43.032 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e', 'env', 'PROCESS_TAG=haproxy-81e387f6-426b-4301-9279-fdf7597d2c7e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/81e387f6-426b-4301-9279-fdf7597d2c7e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
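[annotation] The dumped haproxy_cfg is the entire per-network metadata proxy: bind 169.254.169.254:80 inside the ovnmeta namespace, forward to the metadata service's unix-socket backend at /var/lib/neutron/metadata_proxy, and inject X-OVN-Network-ID so the metadata API can map each request back to network 81e387f6-426b-4301-9279-fdf7597d2c7e. The spawn on the last line can be reproduced almost verbatim (illustrative wrapper; the command list is copied from the log):

```python
from oslo_concurrency import processutils

network_id = '81e387f6-426b-4301-9279-fdf7597d2c7e'
cmd = [
    'sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
    'ip', 'netns', 'exec', f'ovnmeta-{network_id}',
    'env', f'PROCESS_TAG=haproxy-{network_id}',
    'haproxy', '-f',
    f'/var/lib/neutron/ovn-metadata-proxy/{network_id}.conf',
]
processutils.execute(*cmd)
```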
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.409 244018 DEBUG nova.compute.manager [req-78ea9f71-74e4-452a-bd1d-e81cc9190c05 req-1bc519d8-ef4a-4d8b-b682-1cff37fafbe8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Received event network-vif-plugged-5a5e6571-f930-49df-8399-0abb9daf7f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.409 244018 DEBUG oslo_concurrency.lockutils [req-78ea9f71-74e4-452a-bd1d-e81cc9190c05 req-1bc519d8-ef4a-4d8b-b682-1cff37fafbe8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.409 244018 DEBUG oslo_concurrency.lockutils [req-78ea9f71-74e4-452a-bd1d-e81cc9190c05 req-1bc519d8-ef4a-4d8b-b682-1cff37fafbe8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.410 244018 DEBUG oslo_concurrency.lockutils [req-78ea9f71-74e4-452a-bd1d-e81cc9190c05 req-1bc519d8-ef4a-4d8b-b682-1cff37fafbe8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.410 244018 DEBUG nova.compute.manager [req-78ea9f71-74e4-452a-bd1d-e81cc9190c05 req-1bc519d8-ef4a-4d8b-b682-1cff37fafbe8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Processing event network-vif-plugged-5a5e6571-f930-49df-8399-0abb9daf7f4f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 08:01:43 np0005629333 podman[378963]: 2026-02-25 13:01:43.319186459 +0000 UTC m=+0.023300238 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.697 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024503.6968124, ea4e69b0-bf04-4637-847b-9807c884d103 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.698 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] VM Started (Lifecycle Event)#033[00m
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.702 244018 DEBUG nova.compute.manager [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.706 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 08:01:43 np0005629333 podman[378963]: 2026-02-25 13:01:43.710370704 +0000 UTC m=+0.414484443 container create 47eadc59421d1d38142f2438394fc8c198c542685813ac8cfd5bdf8cb8939451 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.711 244018 INFO nova.virt.libvirt.driver [-] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Instance spawned successfully.#033[00m
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.712 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.742 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.756 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.762 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.762 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.763 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.763 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.764 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.764 244018 DEBUG nova.virt.libvirt.driver [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:01:43 np0005629333 systemd[1]: Started libpod-conmon-47eadc59421d1d38142f2438394fc8c198c542685813ac8cfd5bdf8cb8939451.scope.
Feb 25 08:01:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.804 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
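[annotation] The "Skip" above is deliberate: the DB still says power_state 0 (NOSTATE) while libvirt already reports 1 (RUNNING), but the instance is mid-spawn (task_state "spawning"), so the periodic sync must not "correct" anything while the build task owns the instance. The guard reduces to the following sketch, mirroring the logged condition rather than nova's exact source:

```python
import logging

LOG = logging.getLogger(__name__)


def sync_power_state(instance, vm_power_state):
    """Sketch of the lifecycle-event reconciliation guard logged above."""
    if instance.task_state is not None:
        # -> "During sync_power_state the instance has a pending task
        #     (spawning). Skip."
        LOG.info("pending task (%s), skipping sync", instance.task_state)
        return
    if instance.power_state != vm_power_state:
        instance.power_state = vm_power_state
        instance.save()
```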
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.805 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024503.6971457, ea4e69b0-bf04-4637-847b-9807c884d103 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.806 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] VM Paused (Lifecycle Event)#033[00m
Feb 25 08:01:43 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:01:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3abb1a1fefa49a0b771e7d6edf00e3f98906310d6c7b5e12ddbb0b47d61691b7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.841 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.848 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024503.7056663, ea4e69b0-bf04-4637-847b-9807c884d103 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.848 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] VM Resumed (Lifecycle Event)#033[00m
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.854 244018 INFO nova.compute.manager [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Took 9.02 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.854 244018 DEBUG nova.compute.manager [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.882 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.885 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.922 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.939 244018 INFO nova.compute.manager [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Took 10.29 seconds to build instance.#033[00m
Feb 25 08:01:43 np0005629333 podman[378963]: 2026-02-25 13:01:43.943422658 +0000 UTC m=+0.647536467 container init 47eadc59421d1d38142f2438394fc8c198c542685813ac8cfd5bdf8cb8939451 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 08:01:43 np0005629333 podman[378963]: 2026-02-25 13:01:43.951337391 +0000 UTC m=+0.655451130 container start 47eadc59421d1d38142f2438394fc8c198c542685813ac8cfd5bdf8cb8939451 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 08:01:43 np0005629333 nova_compute[244014]: 2026-02-25 13:01:43.955 244018 DEBUG oslo_concurrency.lockutils [None req-425deafe-0dad-41d8-af18-ae47668e0b3b ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.383s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:01:43 np0005629333 neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e[379021]: [NOTICE]   (379025) : New worker (379027) forked
Feb 25 08:01:43 np0005629333 neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e[379021]: [NOTICE]   (379025) : Loading success.
Feb 25 08:01:44 np0005629333 ovn_controller[147040]: 2026-02-25T13:01:44Z|01571|binding|INFO|Releasing lport 6e4a9a2d-90d8-4fd6-b7cc-e40dc20beeef from this chassis (sb_readonly=0)
Feb 25 08:01:44 np0005629333 nova_compute[244014]: 2026-02-25 13:01:44.214 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:44 np0005629333 ovn_controller[147040]: 2026-02-25T13:01:44Z|01572|binding|INFO|Releasing lport 6e4a9a2d-90d8-4fd6-b7cc-e40dc20beeef from this chassis (sb_readonly=0)
Feb 25 08:01:44 np0005629333 nova_compute[244014]: 2026-02-25 13:01:44.267 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2446: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.9 MiB/s wr, 59 op/s
Feb 25 08:01:45 np0005629333 nova_compute[244014]: 2026-02-25 13:01:45.577 244018 DEBUG nova.compute.manager [req-350c8635-316c-4487-a8e5-8203218b6e25 req-4a9a28fa-b79b-441d-b086-ef685e3e81a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Received event network-vif-plugged-5a5e6571-f930-49df-8399-0abb9daf7f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:01:45 np0005629333 nova_compute[244014]: 2026-02-25 13:01:45.577 244018 DEBUG oslo_concurrency.lockutils [req-350c8635-316c-4487-a8e5-8203218b6e25 req-4a9a28fa-b79b-441d-b086-ef685e3e81a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:01:45 np0005629333 nova_compute[244014]: 2026-02-25 13:01:45.577 244018 DEBUG oslo_concurrency.lockutils [req-350c8635-316c-4487-a8e5-8203218b6e25 req-4a9a28fa-b79b-441d-b086-ef685e3e81a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:01:45 np0005629333 nova_compute[244014]: 2026-02-25 13:01:45.578 244018 DEBUG oslo_concurrency.lockutils [req-350c8635-316c-4487-a8e5-8203218b6e25 req-4a9a28fa-b79b-441d-b086-ef685e3e81a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:01:45 np0005629333 nova_compute[244014]: 2026-02-25 13:01:45.578 244018 DEBUG nova.compute.manager [req-350c8635-316c-4487-a8e5-8203218b6e25 req-4a9a28fa-b79b-441d-b086-ef685e3e81a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] No waiting events found dispatching network-vif-plugged-5a5e6571-f930-49df-8399-0abb9daf7f4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 08:01:45 np0005629333 nova_compute[244014]: 2026-02-25 13:01:45.578 244018 WARNING nova.compute.manager [req-350c8635-316c-4487-a8e5-8203218b6e25 req-4a9a28fa-b79b-441d-b086-ef685e3e81a3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Received unexpected event network-vif-plugged-5a5e6571-f930-49df-8399-0abb9daf7f4f for instance with vm_state active and task_state None.#033[00m
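[annotation] This WARNING is benign here: the first network-vif-plugged at 13:01:43 satisfied the waiter the build registered ("Instance event wait completed in 0 seconds"), so when the event arrives again after the build finished (vm_state active, task_state None) pop_instance_event finds no one waiting. The waiter table is essentially a per-instance dict of events; a reduced, illustrative version (not nova's actual class):

```python
import threading


class InstanceEvents:
    """Toy version of nova's event-waiter table."""

    def __init__(self):
        self._events = {}  # instance uuid -> {event name: threading.Event}
        self._lock = threading.Lock()

    def prepare(self, uuid, name):
        with self._lock:
            ev = threading.Event()
            self._events.setdefault(uuid, {})[name] = ev
            return ev  # the build waits on this

    def pop(self, uuid, name):
        with self._lock:
            ev = self._events.get(uuid, {}).pop(name, None)
        if ev is None:
            # -> "Received unexpected event ... for instance with vm_state
            #     active and task_state None."
            print(f'unexpected event {name} for {uuid}')
        else:
            ev.set()
```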
Feb 25 08:01:45 np0005629333 nova_compute[244014]: 2026-02-25 13:01:45.652 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:45 np0005629333 nova_compute[244014]: 2026-02-25 13:01:45.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:01:46 np0005629333 nova_compute[244014]: 2026-02-25 13:01:46.253 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2447: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 25 08:01:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:01:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2937760942' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:01:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:01:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2937760942' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:01:47 np0005629333 nova_compute[244014]: 2026-02-25 13:01:47.746 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024492.7447417, 8d338640-2b5f-4571-8f76-b523064ee129 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:01:47 np0005629333 nova_compute[244014]: 2026-02-25 13:01:47.748 244018 INFO nova.compute.manager [-] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] VM Stopped (Lifecycle Event)#033[00m
Feb 25 08:01:47 np0005629333 nova_compute[244014]: 2026-02-25 13:01:47.772 244018 DEBUG nova.compute.manager [None req-a9e54686-3614-4bcf-a898-9d0bf3c5faca - - - - - -] [instance: 8d338640-2b5f-4571-8f76-b523064ee129] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:01:48 np0005629333 NetworkManager[49836]: <info>  [1772024508.0918] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/647)
Feb 25 08:01:48 np0005629333 NetworkManager[49836]: <info>  [1772024508.0928] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/648)
Feb 25 08:01:48 np0005629333 nova_compute[244014]: 2026-02-25 13:01:48.093 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:48 np0005629333 nova_compute[244014]: 2026-02-25 13:01:48.128 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:48 np0005629333 ovn_controller[147040]: 2026-02-25T13:01:48Z|01573|binding|INFO|Releasing lport 6e4a9a2d-90d8-4fd6-b7cc-e40dc20beeef from this chassis (sb_readonly=0)
Feb 25 08:01:48 np0005629333 nova_compute[244014]: 2026-02-25 13:01:48.140 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:48 np0005629333 nova_compute[244014]: 2026-02-25 13:01:48.610 244018 DEBUG nova.compute.manager [req-00a91e1f-f6db-4d7c-97be-b44ab33f9b0e req-1e1a24f2-8185-4eef-b698-431927e764b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Received event network-changed-5a5e6571-f930-49df-8399-0abb9daf7f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:01:48 np0005629333 nova_compute[244014]: 2026-02-25 13:01:48.610 244018 DEBUG nova.compute.manager [req-00a91e1f-f6db-4d7c-97be-b44ab33f9b0e req-1e1a24f2-8185-4eef-b698-431927e764b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Refreshing instance network info cache due to event network-changed-5a5e6571-f930-49df-8399-0abb9daf7f4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 08:01:48 np0005629333 nova_compute[244014]: 2026-02-25 13:01:48.611 244018 DEBUG oslo_concurrency.lockutils [req-00a91e1f-f6db-4d7c-97be-b44ab33f9b0e req-1e1a24f2-8185-4eef-b698-431927e764b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 08:01:48 np0005629333 nova_compute[244014]: 2026-02-25 13:01:48.611 244018 DEBUG oslo_concurrency.lockutils [req-00a91e1f-f6db-4d7c-97be-b44ab33f9b0e req-1e1a24f2-8185-4eef-b698-431927e764b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 08:01:48 np0005629333 nova_compute[244014]: 2026-02-25 13:01:48.611 244018 DEBUG nova.network.neutron [req-00a91e1f-f6db-4d7c-97be-b44ab33f9b0e req-1e1a24f2-8185-4eef-b698-431927e764b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Refreshing network info cache for port 5a5e6571-f930-49df-8399-0abb9daf7f4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 08:01:48 np0005629333 podman[379038]: 2026-02-25 13:01:48.732899541 +0000 UTC m=+0.066922319 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 08:01:48 np0005629333 podman[379039]: 2026-02-25 13:01:48.759801579 +0000 UTC m=+0.087763696 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 08:01:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:01:48 np0005629333 nova_compute[244014]: 2026-02-25 13:01:48.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:01:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2448: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Feb 25 08:01:48 np0005629333 ovn_controller[147040]: 2026-02-25T13:01:48Z|01574|binding|INFO|Releasing lport 6e4a9a2d-90d8-4fd6-b7cc-e40dc20beeef from this chassis (sb_readonly=0)
Feb 25 08:01:48 np0005629333 nova_compute[244014]: 2026-02-25 13:01:48.961 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:50 np0005629333 nova_compute[244014]: 2026-02-25 13:01:50.315 244018 DEBUG nova.network.neutron [req-00a91e1f-f6db-4d7c-97be-b44ab33f9b0e req-1e1a24f2-8185-4eef-b698-431927e764b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Updated VIF entry in instance network info cache for port 5a5e6571-f930-49df-8399-0abb9daf7f4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 08:01:50 np0005629333 nova_compute[244014]: 2026-02-25 13:01:50.316 244018 DEBUG nova.network.neutron [req-00a91e1f-f6db-4d7c-97be-b44ab33f9b0e req-1e1a24f2-8185-4eef-b698-431927e764b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Updating instance_info_cache with network_info: [{"id": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "address": "fa:16:3e:8b:fe:07", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a5e6571-f9", "ovs_interfaceid": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:01:50 np0005629333 nova_compute[244014]: 2026-02-25 13:01:50.375 244018 DEBUG oslo_concurrency.lockutils [req-00a91e1f-f6db-4d7c-97be-b44ab33f9b0e req-1e1a24f2-8185-4eef-b698-431927e764b0 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 08:01:50 np0005629333 nova_compute[244014]: 2026-02-25 13:01:50.653 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:50 np0005629333 nova_compute[244014]: 2026-02-25 13:01:50.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:01:50 np0005629333 nova_compute[244014]: 2026-02-25 13:01:50.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:01:50 np0005629333 nova_compute[244014]: 2026-02-25 13:01:50.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 08:01:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2449: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 08:01:51 np0005629333 nova_compute[244014]: 2026-02-25 13:01:51.255 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:52 np0005629333 nova_compute[244014]: 2026-02-25 13:01:52.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:01:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2450: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Feb 25 08:01:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:01:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2451: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Feb 25 08:01:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:55.039 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:01:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:55.039 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:01:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:01:55.040 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:01:55 np0005629333 nova_compute[244014]: 2026-02-25 13:01:55.655 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:56 np0005629333 nova_compute[244014]: 2026-02-25 13:01:56.257 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:01:56 np0005629333 ovn_controller[147040]: 2026-02-25T13:01:56Z|00204|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8b:fe:07 10.100.0.5
Feb 25 08:01:56 np0005629333 ovn_controller[147040]: 2026-02-25T13:01:56Z|00205|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8b:fe:07 10.100.0.5
Feb 25 08:01:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2452: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Feb 25 08:01:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:01:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2453: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Feb 25 08:01:59 np0005629333 nova_compute[244014]: 2026-02-25 13:01:59.542 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:00 np0005629333 nova_compute[244014]: 2026-02-25 13:02:00.657 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2454: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 08:02:01 np0005629333 nova_compute[244014]: 2026-02-25 13:02:01.259 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:02:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:02:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:02:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:02:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:02:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:02:02 np0005629333 nova_compute[244014]: 2026-02-25 13:02:02.102 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2455: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 25 08:02:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:02:03 np0005629333 nova_compute[244014]: 2026-02-25 13:02:03.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:02:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2456: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 397 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 08:02:05 np0005629333 nova_compute[244014]: 2026-02-25 13:02:05.661 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:06 np0005629333 nova_compute[244014]: 2026-02-25 13:02:06.261 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2457: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 397 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 08:02:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:02:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2458: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 397 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 08:02:09 np0005629333 ovn_controller[147040]: 2026-02-25T13:02:09Z|01575|binding|INFO|Releasing lport 6e4a9a2d-90d8-4fd6-b7cc-e40dc20beeef from this chassis (sb_readonly=0)
Feb 25 08:02:09 np0005629333 nova_compute[244014]: 2026-02-25 13:02:09.234 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:09 np0005629333 nova_compute[244014]: 2026-02-25 13:02:09.408 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "d2d1c933-a594-4748-8f09-84790cab7d73" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:02:09 np0005629333 nova_compute[244014]: 2026-02-25 13:02:09.410 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:02:09 np0005629333 nova_compute[244014]: 2026-02-25 13:02:09.436 244018 DEBUG nova.compute.manager [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 08:02:09 np0005629333 nova_compute[244014]: 2026-02-25 13:02:09.536 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:02:09 np0005629333 nova_compute[244014]: 2026-02-25 13:02:09.537 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:02:09 np0005629333 nova_compute[244014]: 2026-02-25 13:02:09.547 244018 DEBUG nova.virt.hardware [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 08:02:09 np0005629333 nova_compute[244014]: 2026-02-25 13:02:09.547 244018 INFO nova.compute.claims [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 08:02:09 np0005629333 nova_compute[244014]: 2026-02-25 13:02:09.675 244018 DEBUG oslo_concurrency.processutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:02:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:02:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4222605547' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:02:10 np0005629333 nova_compute[244014]: 2026-02-25 13:02:10.233 244018 DEBUG oslo_concurrency.processutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:02:10 np0005629333 nova_compute[244014]: 2026-02-25 13:02:10.664 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2459: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 11 KiB/s wr, 0 op/s
Feb 25 08:02:10 np0005629333 nova_compute[244014]: 2026-02-25 13:02:10.980 244018 DEBUG nova.compute.provider_tree [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:02:11 np0005629333 nova_compute[244014]: 2026-02-25 13:02:11.006 244018 DEBUG nova.scheduler.client.report [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:02:11 np0005629333 nova_compute[244014]: 2026-02-25 13:02:11.033 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:02:11 np0005629333 nova_compute[244014]: 2026-02-25 13:02:11.035 244018 DEBUG nova.compute.manager [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 08:02:11 np0005629333 nova_compute[244014]: 2026-02-25 13:02:11.164 244018 DEBUG nova.compute.manager [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 08:02:11 np0005629333 nova_compute[244014]: 2026-02-25 13:02:11.165 244018 DEBUG nova.network.neutron [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 08:02:11 np0005629333 nova_compute[244014]: 2026-02-25 13:02:11.182 244018 INFO nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 08:02:11 np0005629333 nova_compute[244014]: 2026-02-25 13:02:11.203 244018 DEBUG nova.compute.manager [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 08:02:11 np0005629333 nova_compute[244014]: 2026-02-25 13:02:11.263 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:11 np0005629333 nova_compute[244014]: 2026-02-25 13:02:11.316 244018 DEBUG nova.compute.manager [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 08:02:11 np0005629333 nova_compute[244014]: 2026-02-25 13:02:11.318 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 08:02:11 np0005629333 nova_compute[244014]: 2026-02-25 13:02:11.319 244018 INFO nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Creating image(s)#033[00m
Feb 25 08:02:11 np0005629333 nova_compute[244014]: 2026-02-25 13:02:11.349 244018 DEBUG nova.storage.rbd_utils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image d2d1c933-a594-4748-8f09-84790cab7d73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:02:11 np0005629333 nova_compute[244014]: 2026-02-25 13:02:11.376 244018 DEBUG nova.storage.rbd_utils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image d2d1c933-a594-4748-8f09-84790cab7d73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:02:11 np0005629333 nova_compute[244014]: 2026-02-25 13:02:11.397 244018 DEBUG nova.storage.rbd_utils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image d2d1c933-a594-4748-8f09-84790cab7d73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:02:11 np0005629333 nova_compute[244014]: 2026-02-25 13:02:11.400 244018 DEBUG oslo_concurrency.processutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:02:11 np0005629333 nova_compute[244014]: 2026-02-25 13:02:11.434 244018 DEBUG nova.policy [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea895f651dd742a7b5eb2d63fb34641c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b9699483122f465084e3147e4904d13d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 08:02:11 np0005629333 nova_compute[244014]: 2026-02-25 13:02:11.474 244018 DEBUG oslo_concurrency.processutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:02:11 np0005629333 nova_compute[244014]: 2026-02-25 13:02:11.475 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:02:11 np0005629333 nova_compute[244014]: 2026-02-25 13:02:11.476 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:02:11 np0005629333 nova_compute[244014]: 2026-02-25 13:02:11.477 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:02:11 np0005629333 nova_compute[244014]: 2026-02-25 13:02:11.514 244018 DEBUG nova.storage.rbd_utils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image d2d1c933-a594-4748-8f09-84790cab7d73_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:02:11 np0005629333 nova_compute[244014]: 2026-02-25 13:02:11.519 244018 DEBUG oslo_concurrency.processutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d2d1c933-a594-4748-8f09-84790cab7d73_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:02:11 np0005629333 nova_compute[244014]: 2026-02-25 13:02:11.956 244018 DEBUG oslo_concurrency.processutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 d2d1c933-a594-4748-8f09-84790cab7d73_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:02:12 np0005629333 nova_compute[244014]: 2026-02-25 13:02:12.044 244018 DEBUG nova.storage.rbd_utils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] resizing rbd image d2d1c933-a594-4748-8f09-84790cab7d73_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 25 08:02:12 np0005629333 nova_compute[244014]: 2026-02-25 13:02:12.201 244018 DEBUG nova.objects.instance [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'migration_context' on Instance uuid d2d1c933-a594-4748-8f09-84790cab7d73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 08:02:12 np0005629333 nova_compute[244014]: 2026-02-25 13:02:12.232 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 08:02:12 np0005629333 nova_compute[244014]: 2026-02-25 13:02:12.233 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Ensure instance console log exists: /var/lib/nova/instances/d2d1c933-a594-4748-8f09-84790cab7d73/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 08:02:12 np0005629333 nova_compute[244014]: 2026-02-25 13:02:12.234 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:02:12 np0005629333 nova_compute[244014]: 2026-02-25 13:02:12.234 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:02:12 np0005629333 nova_compute[244014]: 2026-02-25 13:02:12.235 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:02:12 np0005629333 nova_compute[244014]: 2026-02-25 13:02:12.627 244018 DEBUG nova.network.neutron [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Successfully created port: 1167ad0b-6ff5-4138-8c61-fde6601ec903 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 08:02:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2460: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 08:02:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:02:14 np0005629333 nova_compute[244014]: 2026-02-25 13:02:14.231 244018 DEBUG nova.network.neutron [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Successfully updated port: 1167ad0b-6ff5-4138-8c61-fde6601ec903 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 08:02:14 np0005629333 nova_compute[244014]: 2026-02-25 13:02:14.249 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "refresh_cache-d2d1c933-a594-4748-8f09-84790cab7d73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 08:02:14 np0005629333 nova_compute[244014]: 2026-02-25 13:02:14.249 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquired lock "refresh_cache-d2d1c933-a594-4748-8f09-84790cab7d73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 08:02:14 np0005629333 nova_compute[244014]: 2026-02-25 13:02:14.249 244018 DEBUG nova.network.neutron [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 08:02:14 np0005629333 nova_compute[244014]: 2026-02-25 13:02:14.324 244018 DEBUG nova.compute.manager [req-f3d81340-037b-4a05-ba25-5a95a5086095 req-6012dc88-52ef-462f-93a4-0bbcfc1346a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Received event network-changed-1167ad0b-6ff5-4138-8c61-fde6601ec903 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:02:14 np0005629333 nova_compute[244014]: 2026-02-25 13:02:14.325 244018 DEBUG nova.compute.manager [req-f3d81340-037b-4a05-ba25-5a95a5086095 req-6012dc88-52ef-462f-93a4-0bbcfc1346a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Refreshing instance network info cache due to event network-changed-1167ad0b-6ff5-4138-8c61-fde6601ec903. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 08:02:14 np0005629333 nova_compute[244014]: 2026-02-25 13:02:14.325 244018 DEBUG oslo_concurrency.lockutils [req-f3d81340-037b-4a05-ba25-5a95a5086095 req-6012dc88-52ef-462f-93a4-0bbcfc1346a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d2d1c933-a594-4748-8f09-84790cab7d73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 08:02:14 np0005629333 nova_compute[244014]: 2026-02-25 13:02:14.386 244018 DEBUG nova.network.neutron [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 08:02:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2461: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 08:02:15 np0005629333 nova_compute[244014]: 2026-02-25 13:02:15.430 244018 DEBUG nova.network.neutron [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Updating instance_info_cache with network_info: [{"id": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "address": "fa:16:3e:16:de:15", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1167ad0b-6f", "ovs_interfaceid": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:02:15 np0005629333 nova_compute[244014]: 2026-02-25 13:02:15.454 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Releasing lock "refresh_cache-d2d1c933-a594-4748-8f09-84790cab7d73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 08:02:15 np0005629333 nova_compute[244014]: 2026-02-25 13:02:15.454 244018 DEBUG nova.compute.manager [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Instance network_info: |[{"id": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "address": "fa:16:3e:16:de:15", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1167ad0b-6f", "ovs_interfaceid": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 08:02:15 np0005629333 nova_compute[244014]: 2026-02-25 13:02:15.455 244018 DEBUG oslo_concurrency.lockutils [req-f3d81340-037b-4a05-ba25-5a95a5086095 req-6012dc88-52ef-462f-93a4-0bbcfc1346a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d2d1c933-a594-4748-8f09-84790cab7d73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 08:02:15 np0005629333 nova_compute[244014]: 2026-02-25 13:02:15.455 244018 DEBUG nova.network.neutron [req-f3d81340-037b-4a05-ba25-5a95a5086095 req-6012dc88-52ef-462f-93a4-0bbcfc1346a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Refreshing network info cache for port 1167ad0b-6ff5-4138-8c61-fde6601ec903 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 08:02:15 np0005629333 nova_compute[244014]: 2026-02-25 13:02:15.458 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Start _get_guest_xml network_info=[{"id": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "address": "fa:16:3e:16:de:15", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1167ad0b-6f", "ovs_interfaceid": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 08:02:15 np0005629333 nova_compute[244014]: 2026-02-25 13:02:15.463 244018 WARNING nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:02:15 np0005629333 nova_compute[244014]: 2026-02-25 13:02:15.468 244018 DEBUG nova.virt.libvirt.host [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 08:02:15 np0005629333 nova_compute[244014]: 2026-02-25 13:02:15.469 244018 DEBUG nova.virt.libvirt.host [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 08:02:15 np0005629333 nova_compute[244014]: 2026-02-25 13:02:15.481 244018 DEBUG nova.virt.libvirt.host [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 08:02:15 np0005629333 nova_compute[244014]: 2026-02-25 13:02:15.481 244018 DEBUG nova.virt.libvirt.host [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 08:02:15 np0005629333 nova_compute[244014]: 2026-02-25 13:02:15.482 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 08:02:15 np0005629333 nova_compute[244014]: 2026-02-25 13:02:15.482 244018 DEBUG nova.virt.hardware [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 08:02:15 np0005629333 nova_compute[244014]: 2026-02-25 13:02:15.483 244018 DEBUG nova.virt.hardware [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 08:02:15 np0005629333 nova_compute[244014]: 2026-02-25 13:02:15.484 244018 DEBUG nova.virt.hardware [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 08:02:15 np0005629333 nova_compute[244014]: 2026-02-25 13:02:15.484 244018 DEBUG nova.virt.hardware [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 08:02:15 np0005629333 nova_compute[244014]: 2026-02-25 13:02:15.485 244018 DEBUG nova.virt.hardware [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 08:02:15 np0005629333 nova_compute[244014]: 2026-02-25 13:02:15.485 244018 DEBUG nova.virt.hardware [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 08:02:15 np0005629333 nova_compute[244014]: 2026-02-25 13:02:15.485 244018 DEBUG nova.virt.hardware [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 08:02:15 np0005629333 nova_compute[244014]: 2026-02-25 13:02:15.486 244018 DEBUG nova.virt.hardware [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 08:02:15 np0005629333 nova_compute[244014]: 2026-02-25 13:02:15.486 244018 DEBUG nova.virt.hardware [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 08:02:15 np0005629333 nova_compute[244014]: 2026-02-25 13:02:15.487 244018 DEBUG nova.virt.hardware [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 08:02:15 np0005629333 nova_compute[244014]: 2026-02-25 13:02:15.487 244018 DEBUG nova.virt.hardware [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 08:02:15 np0005629333 nova_compute[244014]: 2026-02-25 13:02:15.492 244018 DEBUG oslo_concurrency.processutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:02:15 np0005629333 nova_compute[244014]: 2026-02-25 13:02:15.665 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 08:02:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1265620149' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.083 244018 DEBUG oslo_concurrency.processutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.101 244018 DEBUG nova.storage.rbd_utils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image d2d1c933-a594-4748-8f09-84790cab7d73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.104 244018 DEBUG oslo_concurrency.processutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.266 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.503 244018 DEBUG nova.network.neutron [req-f3d81340-037b-4a05-ba25-5a95a5086095 req-6012dc88-52ef-462f-93a4-0bbcfc1346a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Updated VIF entry in instance network info cache for port 1167ad0b-6ff5-4138-8c61-fde6601ec903. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.504 244018 DEBUG nova.network.neutron [req-f3d81340-037b-4a05-ba25-5a95a5086095 req-6012dc88-52ef-462f-93a4-0bbcfc1346a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Updating instance_info_cache with network_info: [{"id": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "address": "fa:16:3e:16:de:15", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1167ad0b-6f", "ovs_interfaceid": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.524 244018 DEBUG oslo_concurrency.lockutils [req-f3d81340-037b-4a05-ba25-5a95a5086095 req-6012dc88-52ef-462f-93a4-0bbcfc1346a9 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d2d1c933-a594-4748-8f09-84790cab7d73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 08:02:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 08:02:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1221534215' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.658 244018 DEBUG oslo_concurrency.processutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.660 244018 DEBUG nova.virt.libvirt.vif [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T13:02:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-1-1254828585',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-1-1254828585',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-gen',id=149,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDG3qQBqZbx+RdRlYmfQhfQx6MRvbn5P1SDfQdH7zgAHK3JPnG4I4PJxwsk8GxkBiwXqDS8mMRQo7u+5+wHwxJ+FcVnrZkfYuYFmbh/Gbve/0HWzg16VZ7xAIbYve6W3FQ==',key_name='tempest-TestSecurityGroupsBasicOps-1940037159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-rc0rknoo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T13:02:11Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=d2d1c933-a594-4748-8f09-84790cab7d73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "address": "fa:16:3e:16:de:15", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1167ad0b-6f", "ovs_interfaceid": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.661 244018 DEBUG nova.network.os_vif_util [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "address": "fa:16:3e:16:de:15", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1167ad0b-6f", "ovs_interfaceid": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.662 244018 DEBUG nova.network.os_vif_util [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:de:15,bridge_name='br-int',has_traffic_filtering=True,id=1167ad0b-6ff5-4138-8c61-fde6601ec903,network=Network(81e387f6-426b-4301-9279-fdf7597d2c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1167ad0b-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
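
The pair of records above shows nova.network.os_vif_util translating Neutron's port dict into an os-vif VIFOpenVSwitch object before handing it to the plugin. A rough sketch of building the same object directly with the os-vif library; the values are copied from the log, and the exact constructor fields are an assumption about this os-vif release:

    # Sketch: the VIFOpenVSwitch object that nova_to_osvif_vif logged above.
    from os_vif.objects import network, vif

    net = network.Network(
        id='81e387f6-426b-4301-9279-fdf7597d2c7e',
        bridge='br-int',
        mtu=1442)
    v = vif.VIFOpenVSwitch(
        id='1167ad0b-6ff5-4138-8c61-fde6601ec903',
        address='fa:16:3e:16:de:15',
        bridge_name='br-int',
        has_traffic_filtering=True,  # port_filter=true in the binding details
        preserve_on_delete=False,
        vif_name='tap1167ad0b-6f',
        network=net)
    print(v)
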
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.664 244018 DEBUG nova.objects.instance [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'pci_devices' on Instance uuid d2d1c933-a594-4748-8f09-84790cab7d73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.690 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] End _get_guest_xml xml=<domain type="kvm">
Feb 25 08:02:16 np0005629333 nova_compute[244014]:  <uuid>d2d1c933-a594-4748-8f09-84790cab7d73</uuid>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:  <name>instance-00000095</name>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-1-1254828585</nova:name>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 13:02:15</nova:creationTime>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 08:02:16 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:        <nova:user uuid="ea895f651dd742a7b5eb2d63fb34641c">tempest-TestSecurityGroupsBasicOps-948360018-project-member</nova:user>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:        <nova:project uuid="b9699483122f465084e3147e4904d13d">tempest-TestSecurityGroupsBasicOps-948360018</nova:project>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:        <nova:port uuid="1167ad0b-6ff5-4138-8c61-fde6601ec903">
Feb 25 08:02:16 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <system>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <entry name="serial">d2d1c933-a594-4748-8f09-84790cab7d73</entry>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <entry name="uuid">d2d1c933-a594-4748-8f09-84790cab7d73</entry>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    </system>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:  <os>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:  </os>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:  <features>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:  </features>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:  </clock>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:  <devices>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/d2d1c933-a594-4748-8f09-84790cab7d73_disk">
Feb 25 08:02:16 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      </source>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 08:02:16 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      </auth>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    </disk>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/d2d1c933-a594-4748-8f09-84790cab7d73_disk.config">
Feb 25 08:02:16 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      </source>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 08:02:16 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      </auth>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    </disk>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:16:de:15"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <target dev="tap1167ad0b-6f"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    </interface>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/d2d1c933-a594-4748-8f09-84790cab7d73/console.log" append="off"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    </serial>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <video>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    </video>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    </rng>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 08:02:16 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 08:02:16 np0005629333 nova_compute[244014]:  </devices>
Feb 25 08:02:16 np0005629333 nova_compute[244014]: </domain>
Feb 25 08:02:16 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
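
The dump above is the complete domain definition Nova handed to libvirt: an RBD-backed virtio root disk, an RBD config-drive CD-ROM on SATA, one virtio tap interface with MTU 1442, and the q35 PCIe controller set. A short sketch for pulling the operationally useful bits back out of such a dump, assuming it has been saved to domain.xml:

    # Sketch: extract disk sources and network devices from a Nova-generated
    # libvirt domain XML (e.g. the dump above saved as domain.xml).
    import xml.etree.ElementTree as ET

    root = ET.parse('domain.xml').getroot()
    for disk in root.findall('./devices/disk'):
        src = disk.find('source')
        tgt = disk.find('target')
        # RBD disks carry protocol="rbd" and a pool/image name instead of a path.
        print(tgt.get('dev'), src.get('protocol'), src.get('name'))
    for iface in root.findall('./devices/interface'):
        print(iface.find('target').get('dev'), iface.find('mac').get('address'))
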
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.691 244018 DEBUG nova.compute.manager [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Preparing to wait for external event network-vif-plugged-1167ad0b-6ff5-4138-8c61-fde6601ec903 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.692 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.692 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.692 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
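
The three lockutils records above bracket prepare_for_instance_event: before plugging the VIF, the compute manager registers a waiter for network-vif-plugged-<port> so the spawn thread can later block until Neutron confirms the port is up. A simplified model of that register-then-wait pattern, with hypothetical names; the real implementation is nova.compute.manager.InstanceEvents:

    # Simplified model of Nova's prepare-then-wait external-event pattern.
    import threading

    _events = {}        # (instance_uuid, event_name) -> threading.Event
    _lock = threading.Lock()

    def prepare_for_instance_event(instance_uuid, event_name):
        with _lock:     # mirrors the "<uuid>-events" lock in the records above
            return _events.setdefault((instance_uuid, event_name),
                                      threading.Event())

    def external_instance_event(instance_uuid, event_name):
        with _lock:
            waiter = _events.pop((instance_uuid, event_name), None)
        if waiter:
            waiter.set()  # wakes the spawn thread blocked in the wait below

    # Spawn side: register first, plug the VIF, then wait with a timeout.
    w = prepare_for_instance_event(
        'd2d1c933-a594-4748-8f09-84790cab7d73',
        'network-vif-plugged-1167ad0b-6ff5-4138-8c61-fde6601ec903')
    # ... os_vif plug happens here ...
    # w.wait(timeout=300)
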
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.693 244018 DEBUG nova.virt.libvirt.vif [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T13:02:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-1-1254828585',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-1-1254828585',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-gen',id=149,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDG3qQBqZbx+RdRlYmfQhfQx6MRvbn5P1SDfQdH7zgAHK3JPnG4I4PJxwsk8GxkBiwXqDS8mMRQo7u+5+wHwxJ+FcVnrZkfYuYFmbh/Gbve/0HWzg16VZ7xAIbYve6W3FQ==',key_name='tempest-TestSecurityGroupsBasicOps-1940037159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-rc0rknoo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T13:02:11Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=d2d1c933-a594-4748-8f09-84790cab7d73,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "address": "fa:16:3e:16:de:15", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1167ad0b-6f", "ovs_interfaceid": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.693 244018 DEBUG nova.network.os_vif_util [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "address": "fa:16:3e:16:de:15", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1167ad0b-6f", "ovs_interfaceid": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.694 244018 DEBUG nova.network.os_vif_util [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:de:15,bridge_name='br-int',has_traffic_filtering=True,id=1167ad0b-6ff5-4138-8c61-fde6601ec903,network=Network(81e387f6-426b-4301-9279-fdf7597d2c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1167ad0b-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.694 244018 DEBUG os_vif [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:de:15,bridge_name='br-int',has_traffic_filtering=True,id=1167ad0b-6ff5-4138-8c61-fde6601ec903,network=Network(81e387f6-426b-4301-9279-fdf7597d2c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1167ad0b-6f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.695 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.695 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.695 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.699 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.699 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1167ad0b-6f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.700 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1167ad0b-6f, col_values=(('external_ids', {'iface-id': '1167ad0b-6ff5-4138-8c61-fde6601ec903', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:16:de:15', 'vm-uuid': 'd2d1c933-a594-4748-8f09-84790cab7d73'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
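
The two ovsdbapp transactions above are os-vif's ovs plugin at work: an idempotent add-bridge that was a no-op here, then add-port plus the external_ids entries that let ovn-controller match the tap device to its logical port. The same effect can be sketched through the ovs-vsctl CLI, assuming ovs-vsctl on PATH and root privileges:

    # Sketch: the ovs-vsctl equivalent of the AddPortCommand/DbSetCommand
    # transaction above; values copied from the log record.
    import subprocess

    port = 'tap1167ad0b-6f'
    subprocess.run(
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port, '--',
         'set', 'Interface', port,
         'external_ids:iface-id=1167ad0b-6ff5-4138-8c61-fde6601ec903',
         'external_ids:iface-status=active',
         'external_ids:attached-mac=fa:16:3e:16:de:15',
         'external_ids:vm-uuid=d2d1c933-a594-4748-8f09-84790cab7d73'],
        check=True)
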
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.701 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:16 np0005629333 NetworkManager[49836]: <info>  [1772024536.7036] manager: (tap1167ad0b-6f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/649)
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.703 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.707 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.708 244018 INFO os_vif [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:de:15,bridge_name='br-int',has_traffic_filtering=True,id=1167ad0b-6ff5-4138-8c61-fde6601ec903,network=Network(81e387f6-426b-4301-9279-fdf7597d2c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1167ad0b-6f')#033[00m
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.757 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.758 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.758 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No VIF found with MAC fa:16:3e:16:de:15, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.759 244018 INFO nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Using config drive#033[00m
Feb 25 08:02:16 np0005629333 nova_compute[244014]: 2026-02-25 13:02:16.783 244018 DEBUG nova.storage.rbd_utils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image d2d1c933-a594-4748-8f09-84790cab7d73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:02:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2462: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 08:02:17 np0005629333 nova_compute[244014]: 2026-02-25 13:02:17.095 244018 INFO nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Creating config drive at /var/lib/nova/instances/d2d1c933-a594-4748-8f09-84790cab7d73/disk.config#033[00m
Feb 25 08:02:17 np0005629333 nova_compute[244014]: 2026-02-25 13:02:17.102 244018 DEBUG oslo_concurrency.processutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d2d1c933-a594-4748-8f09-84790cab7d73/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpetv24cum execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:02:17 np0005629333 nova_compute[244014]: 2026-02-25 13:02:17.247 244018 DEBUG oslo_concurrency.processutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d2d1c933-a594-4748-8f09-84790cab7d73/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpetv24cum" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:02:17 np0005629333 nova_compute[244014]: 2026-02-25 13:02:17.289 244018 DEBUG nova.storage.rbd_utils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image d2d1c933-a594-4748-8f09-84790cab7d73_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:02:17 np0005629333 nova_compute[244014]: 2026-02-25 13:02:17.295 244018 DEBUG oslo_concurrency.processutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d2d1c933-a594-4748-8f09-84790cab7d73/disk.config d2d1c933-a594-4748-8f09-84790cab7d73_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:02:17 np0005629333 nova_compute[244014]: 2026-02-25 13:02:17.581 244018 DEBUG oslo_concurrency.processutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d2d1c933-a594-4748-8f09-84790cab7d73/disk.config d2d1c933-a594-4748-8f09-84790cab7d73_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:02:17 np0005629333 nova_compute[244014]: 2026-02-25 13:02:17.583 244018 INFO nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Deleting local config drive /var/lib/nova/instances/d2d1c933-a594-4748-8f09-84790cab7d73/disk.config because it was imported into RBD.#033[00m
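
The sequence from 13:02:16.759 to 13:02:17.583 is the config-drive path for RBD-backed instances: build the ISO locally with mkisofs, rbd import it into the vms pool as <uuid>_disk.config, then delete the local copy, since the domain XML's CD-ROM already points at RBD. A subprocess sketch of the same two commands; paths and names are copied from the log, and the publisher string is abbreviated:

    # Sketch: Nova's config-drive-to-RBD sequence as logged above.
    # Assumes the mkisofs and rbd CLIs are installed on the compute node.
    import subprocess

    iso = ('/var/lib/nova/instances/'
           'd2d1c933-a594-4748-8f09-84790cab7d73/disk.config')
    subprocess.run(
        ['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
         '-allow-multidot', '-l', '-publisher', 'OpenStack Compute',
         '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpetv24cum'],
        check=True)
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', iso,
         'd2d1c933-a594-4748-8f09-84790cab7d73_disk.config',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)
    # The local ISO is then deleted; the guest reads the copy in RBD.
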
Feb 25 08:02:17 np0005629333 kernel: tap1167ad0b-6f: entered promiscuous mode
Feb 25 08:02:17 np0005629333 NetworkManager[49836]: <info>  [1772024537.6363] manager: (tap1167ad0b-6f): new Tun device (/org/freedesktop/NetworkManager/Devices/650)
Feb 25 08:02:17 np0005629333 ovn_controller[147040]: 2026-02-25T13:02:17Z|01576|binding|INFO|Claiming lport 1167ad0b-6ff5-4138-8c61-fde6601ec903 for this chassis.
Feb 25 08:02:17 np0005629333 ovn_controller[147040]: 2026-02-25T13:02:17Z|01577|binding|INFO|1167ad0b-6ff5-4138-8c61-fde6601ec903: Claiming fa:16:3e:16:de:15 10.100.0.12
Feb 25 08:02:17 np0005629333 nova_compute[244014]: 2026-02-25 13:02:17.638 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:17.648 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:de:15 10.100.0.12'], port_security=['fa:16:3e:16:de:15 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd2d1c933-a594-4748-8f09-84790cab7d73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81e387f6-426b-4301-9279-fdf7597d2c7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '11991dde-8004-44c6-851b-7ca96866b068', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2daa638-d0b7-4114-8331-0d5928db072a, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=1167ad0b-6ff5-4138-8c61-fde6601ec903) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 08:02:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:17.651 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 1167ad0b-6ff5-4138-8c61-fde6601ec903 in datapath 81e387f6-426b-4301-9279-fdf7597d2c7e bound to our chassis#033[00m
Feb 25 08:02:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:17.654 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 81e387f6-426b-4301-9279-fdf7597d2c7e#033[00m
Feb 25 08:02:17 np0005629333 ovn_controller[147040]: 2026-02-25T13:02:17Z|01578|binding|INFO|Setting lport 1167ad0b-6ff5-4138-8c61-fde6601ec903 ovn-installed in OVS
Feb 25 08:02:17 np0005629333 ovn_controller[147040]: 2026-02-25T13:02:17Z|01579|binding|INFO|Setting lport 1167ad0b-6ff5-4138-8c61-fde6601ec903 up in Southbound
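
Once an interface with a matching iface-id appears on br-int, ovn-controller claims the logical port for this chassis, marks it ovn-installed in OVS, and flips it up in the Southbound DB, which is exactly the binding sequence in the records above. A sketch for verifying the binding afterwards, assuming this node can reach the Southbound DB with ovn-sbctl:

    # Sketch: confirm which chassis claimed the logical port, via ovn-sbctl.
    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', '--columns=chassis,up', 'find', 'Port_Binding',
         'logical_port=1167ad0b-6ff5-4138-8c61-fde6601ec903'],
        check=True, capture_output=True, text=True).stdout
    print(out)  # chassis should reference this node once the lport is set up
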
Feb 25 08:02:17 np0005629333 nova_compute[244014]: 2026-02-25 13:02:17.657 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:17.672 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[851ff627-4ec4-43b4-973f-3a4b0246fb81]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:02:17 np0005629333 systemd-machined[210048]: New machine qemu-183-instance-00000095.
Feb 25 08:02:17 np0005629333 systemd[1]: Started Virtual Machine qemu-183-instance-00000095.
Feb 25 08:02:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:17.699 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ffe2f228-c45d-4931-88cc-f736a396af0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:02:17 np0005629333 systemd-udevd[379407]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 08:02:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:17.704 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[c741936c-d675-486c-9fa1-311d32896c89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:02:17 np0005629333 NetworkManager[49836]: <info>  [1772024537.7184] device (tap1167ad0b-6f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 08:02:17 np0005629333 NetworkManager[49836]: <info>  [1772024537.7200] device (tap1167ad0b-6f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 08:02:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:17.732 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0f07c25d-0646-4306-86e5-f9e9752217aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:02:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:17.753 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e473b7c7-5425-47a9-96a8-fdaa07654503]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81e387f6-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:15:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647231, 'reachable_time': 28642, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379415, 'error': None, 'target': 'ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:02:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:17.767 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e0869a7d-b6a0-4cc6-8aec-5b06cfcdc681]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap81e387f6-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647239, 'tstamp': 647239}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379418, 'error': None, 'target': 'ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap81e387f6-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647241, 'tstamp': 647241}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379418, 'error': None, 'target': 'ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:02:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:17.769 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81e387f6-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:02:17 np0005629333 nova_compute[244014]: 2026-02-25 13:02:17.770 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:17.772 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81e387f6-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:02:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:17.772 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 08:02:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:17.773 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap81e387f6-40, col_values=(('external_ids', {'iface-id': '6e4a9a2d-90d8-4fd6-b7cc-e40dc20beeef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:02:17 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:17.773 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
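
"Provisioning metadata for network" plus the privsep netlink replies above show the agent wiring up the ovnmeta-<network> namespace: its tap device carries 10.100.0.2/28 and the metadata VIP 169.254.169.254/32, and the OVS end is plugged into br-int under iface-id 6e4a9a2d-90d8-4fd6-b7cc-e40dc20beeef. A quick inspection sketch, assuming root and the iproute2 tools:

    # Sketch: inspect the OVN metadata namespace the agent just provisioned.
    # Namespace name is ovnmeta-<neutron network uuid>, per the records above.
    import subprocess

    ns = 'ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e'
    # Expect 10.100.0.2/28 plus the link-local metadata VIP 169.254.169.254/32.
    subprocess.run(['ip', 'netns', 'exec', ns, 'ip', '-4', 'addr', 'show'],
                   check=True)
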
Feb 25 08:02:17 np0005629333 nova_compute[244014]: 2026-02-25 13:02:17.837 244018 DEBUG nova.compute.manager [req-dd95fc86-5df6-4fd6-931a-48918bf67e75 req-81364e4b-bf3e-44de-928c-61004eb54602 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Received event network-vif-plugged-1167ad0b-6ff5-4138-8c61-fde6601ec903 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:02:17 np0005629333 nova_compute[244014]: 2026-02-25 13:02:17.838 244018 DEBUG oslo_concurrency.lockutils [req-dd95fc86-5df6-4fd6-931a-48918bf67e75 req-81364e4b-bf3e-44de-928c-61004eb54602 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:02:17 np0005629333 nova_compute[244014]: 2026-02-25 13:02:17.839 244018 DEBUG oslo_concurrency.lockutils [req-dd95fc86-5df6-4fd6-931a-48918bf67e75 req-81364e4b-bf3e-44de-928c-61004eb54602 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:02:17 np0005629333 nova_compute[244014]: 2026-02-25 13:02:17.839 244018 DEBUG oslo_concurrency.lockutils [req-dd95fc86-5df6-4fd6-931a-48918bf67e75 req-81364e4b-bf3e-44de-928c-61004eb54602 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:02:17 np0005629333 nova_compute[244014]: 2026-02-25 13:02:17.840 244018 DEBUG nova.compute.manager [req-dd95fc86-5df6-4fd6-931a-48918bf67e75 req-81364e4b-bf3e-44de-928c-61004eb54602 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Processing event network-vif-plugged-1167ad0b-6ff5-4138-8c61-fde6601ec903 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 08:02:18 np0005629333 nova_compute[244014]: 2026-02-25 13:02:18.186 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024538.1856692, d2d1c933-a594-4748-8f09-84790cab7d73 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:02:18 np0005629333 nova_compute[244014]: 2026-02-25 13:02:18.187 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] VM Started (Lifecycle Event)#033[00m
Feb 25 08:02:18 np0005629333 nova_compute[244014]: 2026-02-25 13:02:18.190 244018 DEBUG nova.compute.manager [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 08:02:18 np0005629333 nova_compute[244014]: 2026-02-25 13:02:18.194 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 08:02:18 np0005629333 nova_compute[244014]: 2026-02-25 13:02:18.198 244018 INFO nova.virt.libvirt.driver [-] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Instance spawned successfully.#033[00m
Feb 25 08:02:18 np0005629333 nova_compute[244014]: 2026-02-25 13:02:18.199 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 08:02:18 np0005629333 nova_compute[244014]: 2026-02-25 13:02:18.205 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:02:18 np0005629333 nova_compute[244014]: 2026-02-25 13:02:18.209 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 08:02:18 np0005629333 nova_compute[244014]: 2026-02-25 13:02:18.226 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:02:18 np0005629333 nova_compute[244014]: 2026-02-25 13:02:18.227 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:02:18 np0005629333 nova_compute[244014]: 2026-02-25 13:02:18.227 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:02:18 np0005629333 nova_compute[244014]: 2026-02-25 13:02:18.228 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:02:18 np0005629333 nova_compute[244014]: 2026-02-25 13:02:18.229 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:02:18 np0005629333 nova_compute[244014]: 2026-02-25 13:02:18.230 244018 DEBUG nova.virt.libvirt.driver [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:02:18 np0005629333 nova_compute[244014]: 2026-02-25 13:02:18.236 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
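
The "Synchronizing instance power state" records compare the DB power state (0, NOSTATE) against what libvirt reports (1, RUNNING), but reconciliation is skipped because task_state is still 'spawning': the spawn path owns the transition, so the mismatch is expected and harmless. A simplified model of that guard, not Nova's actual code:

    # Simplified model of the sync_power_state guard seen above.
    NOSTATE, RUNNING = 0, 1

    def handle_lifecycle_event(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            # e.g. 'spawning': an operation is in flight, so a transient
            # mismatch (DB=0, VM=1) is expected -- log it and skip.
            return 'skip'
        if db_power_state != vm_power_state:
            return 'sync'  # otherwise reconcile the DB with the hypervisor
        return 'noop'

    print(handle_lifecycle_event(NOSTATE, RUNNING, 'spawning'))  # -> skip
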
Feb 25 08:02:18 np0005629333 nova_compute[244014]: 2026-02-25 13:02:18.237 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024538.1860127, d2d1c933-a594-4748-8f09-84790cab7d73 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:02:18 np0005629333 nova_compute[244014]: 2026-02-25 13:02:18.237 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] VM Paused (Lifecycle Event)#033[00m
Feb 25 08:02:18 np0005629333 nova_compute[244014]: 2026-02-25 13:02:18.275 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:02:18 np0005629333 nova_compute[244014]: 2026-02-25 13:02:18.279 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024538.1937666, d2d1c933-a594-4748-8f09-84790cab7d73 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:02:18 np0005629333 nova_compute[244014]: 2026-02-25 13:02:18.279 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] VM Resumed (Lifecycle Event)#033[00m
Feb 25 08:02:18 np0005629333 nova_compute[244014]: 2026-02-25 13:02:18.289 244018 INFO nova.compute.manager [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Took 6.97 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 08:02:18 np0005629333 nova_compute[244014]: 2026-02-25 13:02:18.290 244018 DEBUG nova.compute.manager [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:02:18 np0005629333 nova_compute[244014]: 2026-02-25 13:02:18.301 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:02:18 np0005629333 nova_compute[244014]: 2026-02-25 13:02:18.305 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 08:02:18 np0005629333 nova_compute[244014]: 2026-02-25 13:02:18.338 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 08:02:18 np0005629333 nova_compute[244014]: 2026-02-25 13:02:18.369 244018 INFO nova.compute.manager [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Took 8.87 seconds to build instance.#033[00m
Feb 25 08:02:18 np0005629333 nova_compute[244014]: 2026-02-25 13:02:18.392 244018 DEBUG oslo_concurrency.lockutils [None req-1632cd58-4743-43c5-aa35-9de9671e7163 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.982s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
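The lock line above shows Nova's per-instance serialization: the lock name is the instance UUID, and oslo.concurrency's synchronized() wrapper ("inner" in lockutils.py) emits the acquired/waited and released/held pairs. A minimal sketch of that pattern, assuming oslo.concurrency is installed; the function body is a placeholder, not Nova's actual build path:

```python
from oslo_concurrency import lockutils


def build_and_run_instance(uuid: str) -> None:
    # Nova-style pattern: a per-instance lock name, applied via
    # synchronized() on a local function. The decorator's wrapper ("inner"
    # in lockutils.py) logs the acquired/waited and released/held DEBUG
    # lines seen above.
    @lockutils.synchronized(uuid)
    def _locked_do_build_and_run_instance() -> None:
        print(f"spawning {uuid}")  # placeholder, not Nova's real build steps

    _locked_do_build_and_run_instance()


build_and_run_instance("d2d1c933-a594-4748-8f09-84790cab7d73")
```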
Feb 25 08:02:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:02:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2463: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Feb 25 08:02:19 np0005629333 podman[379462]: 2026-02-25 13:02:19.726683664 +0000 UTC m=+0.060859718 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, io.buildah.version=1.43.0)
Feb 25 08:02:19 np0005629333 podman[379463]: 2026-02-25 13:02:19.761669981 +0000 UTC m=+0.095206867 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Feb 25 08:02:19 np0005629333 nova_compute[244014]: 2026-02-25 13:02:19.985 244018 DEBUG nova.compute.manager [req-2813d889-5041-454a-8541-1a4e25a390f7 req-2b7e5f30-03a4-4563-8b50-074a60c0ccdc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Received event network-vif-plugged-1167ad0b-6ff5-4138-8c61-fde6601ec903 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 08:02:19 np0005629333 nova_compute[244014]: 2026-02-25 13:02:19.986 244018 DEBUG oslo_concurrency.lockutils [req-2813d889-5041-454a-8541-1a4e25a390f7 req-2b7e5f30-03a4-4563-8b50-074a60c0ccdc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:02:19 np0005629333 nova_compute[244014]: 2026-02-25 13:02:19.986 244018 DEBUG oslo_concurrency.lockutils [req-2813d889-5041-454a-8541-1a4e25a390f7 req-2b7e5f30-03a4-4563-8b50-074a60c0ccdc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:02:19 np0005629333 nova_compute[244014]: 2026-02-25 13:02:19.986 244018 DEBUG oslo_concurrency.lockutils [req-2813d889-5041-454a-8541-1a4e25a390f7 req-2b7e5f30-03a4-4563-8b50-074a60c0ccdc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:02:19 np0005629333 nova_compute[244014]: 2026-02-25 13:02:19.987 244018 DEBUG nova.compute.manager [req-2813d889-5041-454a-8541-1a4e25a390f7 req-2b7e5f30-03a4-4563-8b50-074a60c0ccdc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] No waiting events found dispatching network-vif-plugged-1167ad0b-6ff5-4138-8c61-fde6601ec903 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 08:02:19 np0005629333 nova_compute[244014]: 2026-02-25 13:02:19.987 244018 WARNING nova.compute.manager [req-2813d889-5041-454a-8541-1a4e25a390f7 req-2b7e5f30-03a4-4563-8b50-074a60c0ccdc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Received unexpected event network-vif-plugged-1167ad0b-6ff5-4138-8c61-fde6601ec903 for instance with vm_state active and task_state None.
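The pop_instance_event lines trace Nova's waiter registry: the spawn path registers a waiter keyed by event name (here network-vif-plugged-<port>), and the external-event handler pops it; with no waiter registered, the event is logged as unexpected, as in the WARNING above. A toy illustration of that registry pattern, not nova.compute.manager's actual implementation:

```python
import threading
from typing import Dict


class InstanceEvents:
    """Toy waiter registry; a simplified illustration of the pattern,
    not nova.compute.manager's actual code."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._waiters: Dict[str, threading.Event] = {}

    def prepare(self, name: str) -> threading.Event:
        # The spawn path registers a waiter before plugging the VIF.
        with self._lock:
            return self._waiters.setdefault(name, threading.Event())

    def pop(self, name: str) -> bool:
        # The external-event handler pops the waiter by event name.
        with self._lock:
            waiter = self._waiters.pop(name, None)
        if waiter is None:
            print(f"Received unexpected event {name}")
            return False
        waiter.set()
        return True


events = InstanceEvents()
# No waiter was registered, so this reproduces the "unexpected event" branch.
events.pop("network-vif-plugged-1167ad0b-6ff5-4138-8c61-fde6601ec903")
```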
Feb 25 08:02:20 np0005629333 nova_compute[244014]: 2026-02-25 13:02:20.667 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:02:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2464: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Feb 25 08:02:21 np0005629333 nova_compute[244014]: 2026-02-25 13:02:21.702 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:02:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2465: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 25 08:02:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:02:24 np0005629333 nova_compute[244014]: 2026-02-25 13:02:24.404 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:02:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:24.404 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 08:02:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:24.405 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 08:02:24 np0005629333 nova_compute[244014]: 2026-02-25 13:02:24.518 244018 DEBUG nova.compute.manager [req-593f57e0-0645-448e-99af-ee438a658adc req-a81cfc28-2952-489f-b3fd-9e99b364c617 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Received event network-changed-1167ad0b-6ff5-4138-8c61-fde6601ec903 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 08:02:24 np0005629333 nova_compute[244014]: 2026-02-25 13:02:24.519 244018 DEBUG nova.compute.manager [req-593f57e0-0645-448e-99af-ee438a658adc req-a81cfc28-2952-489f-b3fd-9e99b364c617 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Refreshing instance network info cache due to event network-changed-1167ad0b-6ff5-4138-8c61-fde6601ec903. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 08:02:24 np0005629333 nova_compute[244014]: 2026-02-25 13:02:24.519 244018 DEBUG oslo_concurrency.lockutils [req-593f57e0-0645-448e-99af-ee438a658adc req-a81cfc28-2952-489f-b3fd-9e99b364c617 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d2d1c933-a594-4748-8f09-84790cab7d73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 08:02:24 np0005629333 nova_compute[244014]: 2026-02-25 13:02:24.519 244018 DEBUG oslo_concurrency.lockutils [req-593f57e0-0645-448e-99af-ee438a658adc req-a81cfc28-2952-489f-b3fd-9e99b364c617 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d2d1c933-a594-4748-8f09-84790cab7d73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 08:02:24 np0005629333 nova_compute[244014]: 2026-02-25 13:02:24.519 244018 DEBUG nova.network.neutron [req-593f57e0-0645-448e-99af-ee438a658adc req-a81cfc28-2952-489f-b3fd-9e99b364c617 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Refreshing network info cache for port 1167ad0b-6ff5-4138-8c61-fde6601ec903 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 08:02:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2466: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Feb 25 08:02:25 np0005629333 nova_compute[244014]: 2026-02-25 13:02:25.668 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:02:26 np0005629333 nova_compute[244014]: 2026-02-25 13:02:26.253 244018 DEBUG nova.network.neutron [req-593f57e0-0645-448e-99af-ee438a658adc req-a81cfc28-2952-489f-b3fd-9e99b364c617 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Updated VIF entry in instance network info cache for port 1167ad0b-6ff5-4138-8c61-fde6601ec903. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 08:02:26 np0005629333 nova_compute[244014]: 2026-02-25 13:02:26.254 244018 DEBUG nova.network.neutron [req-593f57e0-0645-448e-99af-ee438a658adc req-a81cfc28-2952-489f-b3fd-9e99b364c617 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Updating instance_info_cache with network_info: [{"id": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "address": "fa:16:3e:16:de:15", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1167ad0b-6f", "ovs_interfaceid": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 08:02:26 np0005629333 nova_compute[244014]: 2026-02-25 13:02:26.282 244018 DEBUG oslo_concurrency.lockutils [req-593f57e0-0645-448e-99af-ee438a658adc req-a81cfc28-2952-489f-b3fd-9e99b364c617 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d2d1c933-a594-4748-8f09-84790cab7d73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 08:02:26 np0005629333 nova_compute[244014]: 2026-02-25 13:02:26.494 244018 DEBUG nova.compute.manager [req-7f9e5160-10a6-4642-b1ef-8592a66b0343 req-ee794187-8274-46ec-b180-24cee3556745 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Received event network-changed-1167ad0b-6ff5-4138-8c61-fde6601ec903 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 08:02:26 np0005629333 nova_compute[244014]: 2026-02-25 13:02:26.495 244018 DEBUG nova.compute.manager [req-7f9e5160-10a6-4642-b1ef-8592a66b0343 req-ee794187-8274-46ec-b180-24cee3556745 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Refreshing instance network info cache due to event network-changed-1167ad0b-6ff5-4138-8c61-fde6601ec903. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 08:02:26 np0005629333 nova_compute[244014]: 2026-02-25 13:02:26.495 244018 DEBUG oslo_concurrency.lockutils [req-7f9e5160-10a6-4642-b1ef-8592a66b0343 req-ee794187-8274-46ec-b180-24cee3556745 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-d2d1c933-a594-4748-8f09-84790cab7d73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 08:02:26 np0005629333 nova_compute[244014]: 2026-02-25 13:02:26.496 244018 DEBUG oslo_concurrency.lockutils [req-7f9e5160-10a6-4642-b1ef-8592a66b0343 req-ee794187-8274-46ec-b180-24cee3556745 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-d2d1c933-a594-4748-8f09-84790cab7d73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 08:02:26 np0005629333 nova_compute[244014]: 2026-02-25 13:02:26.496 244018 DEBUG nova.network.neutron [req-7f9e5160-10a6-4642-b1ef-8592a66b0343 req-ee794187-8274-46ec-b180-24cee3556745 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Refreshing network info cache for port 1167ad0b-6ff5-4138-8c61-fde6601ec903 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 08:02:26 np0005629333 nova_compute[244014]: 2026-02-25 13:02:26.704 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:02:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2467: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Feb 25 08:02:27 np0005629333 nova_compute[244014]: 2026-02-25 13:02:27.557 244018 DEBUG nova.network.neutron [req-7f9e5160-10a6-4642-b1ef-8592a66b0343 req-ee794187-8274-46ec-b180-24cee3556745 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Updated VIF entry in instance network info cache for port 1167ad0b-6ff5-4138-8c61-fde6601ec903. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 08:02:27 np0005629333 nova_compute[244014]: 2026-02-25 13:02:27.559 244018 DEBUG nova.network.neutron [req-7f9e5160-10a6-4642-b1ef-8592a66b0343 req-ee794187-8274-46ec-b180-24cee3556745 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Updating instance_info_cache with network_info: [{"id": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "address": "fa:16:3e:16:de:15", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1167ad0b-6f", "ovs_interfaceid": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 08:02:27 np0005629333 nova_compute[244014]: 2026-02-25 13:02:27.582 244018 DEBUG oslo_concurrency.lockutils [req-7f9e5160-10a6-4642-b1ef-8592a66b0343 req-ee794187-8274-46ec-b180-24cee3556745 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-d2d1c933-a594-4748-8f09-84790cab7d73" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
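The network_info payload cached above is a JSON list of VIF records. A small sketch that pulls the port ID, MAC, device name and fixed IPs out of such a record, using a trimmed copy of the structure logged above:

```python
import json

# Trimmed copy of the network_info structure from the log; only the fields
# this sketch reads are kept.
NETWORK_INFO = json.loads("""
[{"id": "1167ad0b-6ff5-4138-8c61-fde6601ec903",
  "address": "fa:16:3e:16:de:15",
  "devname": "tap1167ad0b-6f",
  "network": {"label": "tempest-network-smoke--545682579",
              "subnets": [{"cidr": "10.100.0.0/28",
                           "ips": [{"address": "10.100.0.12", "type": "fixed"}]}]}}]
""")

for vif in NETWORK_INFO:
    fixed_ips = [ip["address"]
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"] if ip["type"] == "fixed"]
    print(vif["id"], vif["address"], vif["devname"], fixed_ips)
```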
Feb 25 08:02:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:02:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2468: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Feb 25 08:02:30 np0005629333 ovn_controller[147040]: 2026-02-25T13:02:30Z|00206|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:16:de:15 10.100.0.12
Feb 25 08:02:30 np0005629333 ovn_controller[147040]: 2026-02-25T13:02:30Z|00207|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:16:de:15 10.100.0.12
Feb 25 08:02:30 np0005629333 nova_compute[244014]: 2026-02-25 13:02:30.671 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:02:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2469: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 71 op/s
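The ceph-mgr pgmap DEBUG lines follow a fixed textual layout. A sketch that extracts the pgmap version, PG count and throughput figures from one of them, assuming exactly this layout:

```python
import re

# Field layout taken from the pgmap lines above.
PGMAP_RE = re.compile(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, .*? avail"
    r"(?:; (?P<rates>.*))?$"
)

line = ("pgmap v2469: 305 pgs: 305 active+clean; 279 MiB data, "
        "1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 71 op/s")
m = PGMAP_RE.search(line)
if m:
    print(m.group("ver"), m.group("pgs"), m.group("data"), m.group("rates"))
```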
Feb 25 08:02:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:02:31
Feb 25 08:02:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:02:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:02:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'backups', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'default.rgw.control', 'vms', 'volumes', 'default.rgw.meta', '.rgw.root']
Feb 25 08:02:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
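The balancer lines above are one pass of the upmap optimizer (mode upmap, max misplaced 0.05, zero changes prepared across the listed pools). The same state can be read back from the CLI; a sketch assuming the ceph CLI and an admin keyring are available on the host, and that balancer status accepts --format json as most mon/mgr commands do:

```python
import json
import subprocess

# Read back the balancer state behind the "Optimize plan"/"do_upmap" lines.
out = subprocess.run(
    ["ceph", "balancer", "status", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout
status = json.loads(out)
print(status.get("active"), status.get("mode"))
```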
Feb 25 08:02:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:02:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:02:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:02:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:02:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:02:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:02:31 np0005629333 nova_compute[244014]: 2026-02-25 13:02:31.707 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:02:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:02:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:02:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:02:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:02:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:02:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:02:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:02:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:02:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:02:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:02:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:02:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:02:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:02:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:02:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2470: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Feb 25 08:02:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:02:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:02:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:02:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:02:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:02:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:02:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:02:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:02:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:02:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:02:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:02:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:02:33 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:02:33 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:02:33 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:02:33 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:02:33 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
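Among the audited mon commands above, cephadm queries the OSD tree restricted to destroyed OSDs in JSON. The equivalent query from the shell, wrapped in Python; assumes the ceph CLI and a client.admin keyring, and the "status" field on OSD nodes is an assumption about the JSON shape:

```python
import json
import subprocess

# Same query as the audited mon_command: osd tree filtered to destroyed OSDs.
out = subprocess.run(
    ["ceph", "osd", "tree", "destroyed", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout
tree = json.loads(out)
print([n["id"] for n in tree.get("nodes", []) if n.get("status") == "destroyed"])
```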
Feb 25 08:02:33 np0005629333 podman[379722]: 2026-02-25 13:02:33.609243216 +0000 UTC m=+0.033358572 container create 676d09072d5d1f2d1370a63c74f5e37356cfbe01d6e12c597f6e1c6c9a1ec989 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wescoff, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:02:33 np0005629333 podman[379722]: 2026-02-25 13:02:33.594820889 +0000 UTC m=+0.018936255 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:02:33 np0005629333 systemd[1]: Started libpod-conmon-676d09072d5d1f2d1370a63c74f5e37356cfbe01d6e12c597f6e1c6c9a1ec989.scope.
Feb 25 08:02:33 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:02:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:02:33 np0005629333 podman[379722]: 2026-02-25 13:02:33.960552506 +0000 UTC m=+0.384667882 container init 676d09072d5d1f2d1370a63c74f5e37356cfbe01d6e12c597f6e1c6c9a1ec989 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wescoff, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:02:33 np0005629333 podman[379722]: 2026-02-25 13:02:33.970573799 +0000 UTC m=+0.394689155 container start 676d09072d5d1f2d1370a63c74f5e37356cfbe01d6e12c597f6e1c6c9a1ec989 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wescoff, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:02:33 np0005629333 podman[379722]: 2026-02-25 13:02:33.97556248 +0000 UTC m=+0.399678036 container attach 676d09072d5d1f2d1370a63c74f5e37356cfbe01d6e12c597f6e1c6c9a1ec989 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wescoff, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:02:33 np0005629333 frosty_wescoff[379739]: 167 167
Feb 25 08:02:33 np0005629333 systemd[1]: libpod-676d09072d5d1f2d1370a63c74f5e37356cfbe01d6e12c597f6e1c6c9a1ec989.scope: Deactivated successfully.
Feb 25 08:02:33 np0005629333 conmon[379739]: conmon 676d09072d5d1f2d1370 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-676d09072d5d1f2d1370a63c74f5e37356cfbe01d6e12c597f6e1c6c9a1ec989.scope/container/memory.events
Feb 25 08:02:34 np0005629333 podman[379744]: 2026-02-25 13:02:34.0376236 +0000 UTC m=+0.034565226 container died 676d09072d5d1f2d1370a63c74f5e37356cfbe01d6e12c597f6e1c6c9a1ec989 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wescoff, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:02:34 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d2087928fa407259d5cbabd8701ead140c952d531e91ce535ee12b8aa94b5820-merged.mount: Deactivated successfully.
Feb 25 08:02:34 np0005629333 podman[379744]: 2026-02-25 13:02:34.240472902 +0000 UTC m=+0.237414568 container remove 676d09072d5d1f2d1370a63c74f5e37356cfbe01d6e12c597f6e1c6c9a1ec989 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:02:34 np0005629333 systemd[1]: libpod-conmon-676d09072d5d1f2d1370a63c74f5e37356cfbe01d6e12c597f6e1c6c9a1ec989.scope: Deactivated successfully.
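The create/init/start/attach/died/remove sequence above is cephadm running a short-lived probe inside the ceph image and discarding the container. A sketch of the same one-shot pattern with podman; the image digest is taken from the log, while the command run inside is a hypothetical stand-in (the log does not show cephadm's actual argv):

```python
import subprocess

# Image digest taken from the log lines above.
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

# One-shot container: --rm deletes it on exit, collapsing the
# create/start/attach/died/remove sequence into a single call. The inner
# command is a hypothetical stand-in.
subprocess.run(["podman", "run", "--rm", IMAGE, "ceph", "--version"], check=True)
```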
Feb 25 08:02:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:34.407 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
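The DbSetCommand above writes neutron:ovn-metadata-sb-cfg=49 into the Chassis_Private row's external_ids, completing the update the agent delayed by 10 seconds earlier. The same write expressed through ovn-sbctl, wrapped in Python; the record UUID comes from the log, and this assumes ovn-sbctl on the host can reach the southbound DB (TLS/remote flags omitted):

```python
import subprocess

RECORD = "a594384c-d614-4492-9e0a-4d6ec095920c"  # Chassis_Private row from the log

# Equivalent of the DbSetCommand above; the colon inside the key must be
# protected by double quotes in db-ctl syntax.
subprocess.run(
    ["ovn-sbctl", "set", "Chassis_Private", RECORD,
     'external_ids:"neutron:ovn-metadata-sb-cfg"=49'],
    check=True,
)
```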
Feb 25 08:02:34 np0005629333 podman[379764]: 2026-02-25 13:02:34.431236364 +0000 UTC m=+0.054717445 container create 7e45a4ea115da58950bb5112960cb1c465f7083e2d391fa77ec593e892ecb0d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_feynman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 08:02:34 np0005629333 systemd[1]: Started libpod-conmon-7e45a4ea115da58950bb5112960cb1c465f7083e2d391fa77ec593e892ecb0d3.scope.
Feb 25 08:02:34 np0005629333 podman[379764]: 2026-02-25 13:02:34.405256731 +0000 UTC m=+0.028737842 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:02:34 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:02:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a106698f8202ba14db5a3f2656c939da6b89704ce3e5ddb4e74591b3ef564f32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:02:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a106698f8202ba14db5a3f2656c939da6b89704ce3e5ddb4e74591b3ef564f32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:02:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a106698f8202ba14db5a3f2656c939da6b89704ce3e5ddb4e74591b3ef564f32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:02:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a106698f8202ba14db5a3f2656c939da6b89704ce3e5ddb4e74591b3ef564f32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:02:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a106698f8202ba14db5a3f2656c939da6b89704ce3e5ddb4e74591b3ef564f32/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:02:34 np0005629333 podman[379764]: 2026-02-25 13:02:34.571062357 +0000 UTC m=+0.194543478 container init 7e45a4ea115da58950bb5112960cb1c465f7083e2d391fa77ec593e892ecb0d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:02:34 np0005629333 podman[379764]: 2026-02-25 13:02:34.578675672 +0000 UTC m=+0.202156783 container start 7e45a4ea115da58950bb5112960cb1c465f7083e2d391fa77ec593e892ecb0d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_feynman, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:02:34 np0005629333 podman[379764]: 2026-02-25 13:02:34.587735417 +0000 UTC m=+0.211216598 container attach 7e45a4ea115da58950bb5112960cb1c465f7083e2d391fa77ec593e892ecb0d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_feynman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:02:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2471: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 08:02:35 np0005629333 optimistic_feynman[379785]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:02:35 np0005629333 optimistic_feynman[379785]: --> All data devices are unavailable
Feb 25 08:02:35 np0005629333 systemd[1]: libpod-7e45a4ea115da58950bb5112960cb1c465f7083e2d391fa77ec593e892ecb0d3.scope: Deactivated successfully.
Feb 25 08:02:35 np0005629333 podman[379764]: 2026-02-25 13:02:35.134593353 +0000 UTC m=+0.758074434 container died 7e45a4ea115da58950bb5112960cb1c465f7083e2d391fa77ec593e892ecb0d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_feynman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 08:02:35 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a106698f8202ba14db5a3f2656c939da6b89704ce3e5ddb4e74591b3ef564f32-merged.mount: Deactivated successfully.
Feb 25 08:02:35 np0005629333 podman[379764]: 2026-02-25 13:02:35.18764938 +0000 UTC m=+0.811130461 container remove 7e45a4ea115da58950bb5112960cb1c465f7083e2d391fa77ec593e892ecb0d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_feynman, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 08:02:35 np0005629333 systemd[1]: libpod-conmon-7e45a4ea115da58950bb5112960cb1c465f7083e2d391fa77ec593e892ecb0d3.scope: Deactivated successfully.
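The sharp_herschel container further below prints a JSON inventory whose shape matches ceph-volume lvm list --format json: a mapping of OSD id to a list of logical volumes with their tags. A sketch that runs that command and maps each OSD id to its LV path, backing devices and OSD fsid; assumes ceph-volume is available (in the log it only exists inside the ceph container):

```python
import json
import subprocess

# The output shape is the one printed by the container below: osd_id -> [LVs].
out = subprocess.run(
    ["ceph-volume", "lvm", "list", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout
for osd_id, lvs in json.loads(out).items():
    for lv in lvs:
        print(osd_id, lv["lv_path"], lv["devices"], lv["tags"]["ceph.osd_fsid"])
```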
Feb 25 08:02:35 np0005629333 podman[379881]: 2026-02-25 13:02:35.670764408 +0000 UTC m=+0.049851757 container create e443eecc6af2ddf5f58ee41fe5533b301ca9a951285db109bc31d0a41185adc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:02:35 np0005629333 nova_compute[244014]: 2026-02-25 13:02:35.674 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:02:35 np0005629333 systemd[1]: Started libpod-conmon-e443eecc6af2ddf5f58ee41fe5533b301ca9a951285db109bc31d0a41185adc7.scope.
Feb 25 08:02:35 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:02:35 np0005629333 podman[379881]: 2026-02-25 13:02:35.64603083 +0000 UTC m=+0.025118219 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:02:35 np0005629333 podman[379881]: 2026-02-25 13:02:35.750442796 +0000 UTC m=+0.129530135 container init e443eecc6af2ddf5f58ee41fe5533b301ca9a951285db109bc31d0a41185adc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 08:02:35 np0005629333 podman[379881]: 2026-02-25 13:02:35.756330002 +0000 UTC m=+0.135417311 container start e443eecc6af2ddf5f58ee41fe5533b301ca9a951285db109bc31d0a41185adc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 08:02:35 np0005629333 systemd[1]: libpod-e443eecc6af2ddf5f58ee41fe5533b301ca9a951285db109bc31d0a41185adc7.scope: Deactivated successfully.
Feb 25 08:02:35 np0005629333 bold_payne[379896]: 167 167
Feb 25 08:02:35 np0005629333 conmon[379896]: conmon e443eecc6af2ddf5f58e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e443eecc6af2ddf5f58ee41fe5533b301ca9a951285db109bc31d0a41185adc7.scope/container/memory.events
Feb 25 08:02:35 np0005629333 podman[379881]: 2026-02-25 13:02:35.762578288 +0000 UTC m=+0.141665627 container attach e443eecc6af2ddf5f58ee41fe5533b301ca9a951285db109bc31d0a41185adc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_payne, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 08:02:35 np0005629333 podman[379881]: 2026-02-25 13:02:35.762980519 +0000 UTC m=+0.142067828 container died e443eecc6af2ddf5f58ee41fe5533b301ca9a951285db109bc31d0a41185adc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:02:35 np0005629333 systemd[1]: var-lib-containers-storage-overlay-2af4c0eb307a88072aecd4a5bfce97ebc59a8340bdd5d7f7fc9c084a4484730f-merged.mount: Deactivated successfully.
Feb 25 08:02:35 np0005629333 podman[379881]: 2026-02-25 13:02:35.79917473 +0000 UTC m=+0.178262039 container remove e443eecc6af2ddf5f58ee41fe5533b301ca9a951285db109bc31d0a41185adc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Feb 25 08:02:35 np0005629333 systemd[1]: libpod-conmon-e443eecc6af2ddf5f58ee41fe5533b301ca9a951285db109bc31d0a41185adc7.scope: Deactivated successfully.
Feb 25 08:02:35 np0005629333 podman[379921]: 2026-02-25 13:02:35.959515173 +0000 UTC m=+0.048003495 container create 0d6b79b8e40c2f5d166858d57ce31bd48c68cd9f348141f6dda0327508f7e9d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_herschel, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:02:36 np0005629333 systemd[1]: Started libpod-conmon-0d6b79b8e40c2f5d166858d57ce31bd48c68cd9f348141f6dda0327508f7e9d3.scope.
Feb 25 08:02:36 np0005629333 podman[379921]: 2026-02-25 13:02:35.941701221 +0000 UTC m=+0.030189503 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:02:36 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:02:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa8eaefb56f575c985262c2101e567cee53dd3f0ab9acf9d76cc569e7d67daa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:02:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa8eaefb56f575c985262c2101e567cee53dd3f0ab9acf9d76cc569e7d67daa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:02:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa8eaefb56f575c985262c2101e567cee53dd3f0ab9acf9d76cc569e7d67daa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:02:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa8eaefb56f575c985262c2101e567cee53dd3f0ab9acf9d76cc569e7d67daa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:02:36 np0005629333 podman[379921]: 2026-02-25 13:02:36.073772596 +0000 UTC m=+0.162260958 container init 0d6b79b8e40c2f5d166858d57ce31bd48c68cd9f348141f6dda0327508f7e9d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_herschel, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 08:02:36 np0005629333 podman[379921]: 2026-02-25 13:02:36.080446755 +0000 UTC m=+0.168935077 container start 0d6b79b8e40c2f5d166858d57ce31bd48c68cd9f348141f6dda0327508f7e9d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_herschel, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 08:02:36 np0005629333 podman[379921]: 2026-02-25 13:02:36.086165056 +0000 UTC m=+0.174653398 container attach 0d6b79b8e40c2f5d166858d57ce31bd48c68cd9f348141f6dda0327508f7e9d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_herschel, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]: {
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:    "0": [
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:        {
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "devices": [
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "/dev/loop3"
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            ],
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "lv_name": "ceph_lv0",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "lv_size": "21470642176",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "name": "ceph_lv0",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "tags": {
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.cluster_name": "ceph",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.crush_device_class": "",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.encrypted": "0",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.objectstore": "bluestore",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.osd_id": "0",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.type": "block",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.vdo": "0",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.with_tpm": "0"
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            },
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "type": "block",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "vg_name": "ceph_vg0"
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:        }
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:    ],
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:    "1": [
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:        {
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "devices": [
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "/dev/loop4"
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            ],
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "lv_name": "ceph_lv1",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "lv_size": "21470642176",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "name": "ceph_lv1",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "tags": {
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.cluster_name": "ceph",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.crush_device_class": "",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.encrypted": "0",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.objectstore": "bluestore",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.osd_id": "1",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.type": "block",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.vdo": "0",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.with_tpm": "0"
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            },
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "type": "block",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "vg_name": "ceph_vg1"
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:        }
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:    ],
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:    "2": [
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:        {
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "devices": [
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "/dev/loop5"
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            ],
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "lv_name": "ceph_lv2",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "lv_size": "21470642176",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "name": "ceph_lv2",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "tags": {
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.cluster_name": "ceph",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.crush_device_class": "",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.encrypted": "0",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.objectstore": "bluestore",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.osd_id": "2",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.type": "block",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.vdo": "0",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:                "ceph.with_tpm": "0"
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            },
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "type": "block",
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:            "vg_name": "ceph_vg2"
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:        }
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]:    ]
Feb 25 08:02:36 np0005629333 sharp_herschel[379938]: }
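The JSON block above appears to be the per-OSD device report that cephadm's short-lived ceph-volume helper container prints (the shape matches `ceph-volume lvm list --format json`): one entry per OSD id carrying the LV path, the backing device, and a comma-separated lv_tags string; the mon lines further below storing mgr/cephadm/host.compute-0.devices.0 are consistent with the mgr persisting such a refresh. A minimal sketch of post-processing a saved copy of this output, assuming it was captured to a file (the filename and the osd_map helper are hypothetical):

    import json

    def osd_map(report):
        """Map each OSD id to its LV path, backing devices, and parsed tags."""
        out = {}
        for osd_id, lvs in report.items():
            for lv in lvs:
                # lv_tags is "k=v,k=v,..."; the values in this report contain no commas
                tags = dict(kv.split("=", 1) for kv in lv["lv_tags"].split(","))
                out[osd_id] = {
                    "lv_path": lv["lv_path"],
                    "devices": lv["devices"],
                    "osd_fsid": tags.get("ceph.osd_fsid"),
                    "objectstore": tags.get("ceph.objectstore"),
                }
        return out

    with open("ceph_volume_lvm_list.json") as f:  # hypothetical capture of the JSON above
        print(osd_map(json.load(f)))
    # {'0': {'lv_path': '/dev/ceph_vg0/ceph_lv0', 'devices': ['/dev/loop3'], ...}, ...}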
Feb 25 08:02:36 np0005629333 systemd[1]: libpod-0d6b79b8e40c2f5d166858d57ce31bd48c68cd9f348141f6dda0327508f7e9d3.scope: Deactivated successfully.
Feb 25 08:02:36 np0005629333 podman[379921]: 2026-02-25 13:02:36.422072241 +0000 UTC m=+0.510560553 container died 0d6b79b8e40c2f5d166858d57ce31bd48c68cd9f348141f6dda0327508f7e9d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_herschel, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 08:02:36 np0005629333 systemd[1]: var-lib-containers-storage-overlay-8aa8eaefb56f575c985262c2101e567cee53dd3f0ab9acf9d76cc569e7d67daa-merged.mount: Deactivated successfully.
Feb 25 08:02:36 np0005629333 podman[379921]: 2026-02-25 13:02:36.526082995 +0000 UTC m=+0.614571317 container remove 0d6b79b8e40c2f5d166858d57ce31bd48c68cd9f348141f6dda0327508f7e9d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_herschel, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 08:02:36 np0005629333 systemd[1]: libpod-conmon-0d6b79b8e40c2f5d166858d57ce31bd48c68cd9f348141f6dda0327508f7e9d3.scope: Deactivated successfully.
Feb 25 08:02:36 np0005629333 nova_compute[244014]: 2026-02-25 13:02:36.709 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2472: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 08:02:37 np0005629333 podman[380021]: 2026-02-25 13:02:37.055419097 +0000 UTC m=+0.060773565 container create adf789da310610ce5431c410c8199a3d9a750b3ec28fc88f7221501d81aa4f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:02:37 np0005629333 systemd[1]: Started libpod-conmon-adf789da310610ce5431c410c8199a3d9a750b3ec28fc88f7221501d81aa4f27.scope.
Feb 25 08:02:37 np0005629333 podman[380021]: 2026-02-25 13:02:37.016261083 +0000 UTC m=+0.021615441 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:02:37 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:02:37 np0005629333 nova_compute[244014]: 2026-02-25 13:02:37.193 244018 DEBUG oslo_concurrency.lockutils [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "d2d1c933-a594-4748-8f09-84790cab7d73" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:02:37 np0005629333 nova_compute[244014]: 2026-02-25 13:02:37.195 244018 DEBUG oslo_concurrency.lockutils [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:02:37 np0005629333 nova_compute[244014]: 2026-02-25 13:02:37.196 244018 DEBUG oslo_concurrency.lockutils [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:02:37 np0005629333 nova_compute[244014]: 2026-02-25 13:02:37.196 244018 DEBUG oslo_concurrency.lockutils [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:02:37 np0005629333 nova_compute[244014]: 2026-02-25 13:02:37.196 244018 DEBUG oslo_concurrency.lockutils [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:02:37 np0005629333 nova_compute[244014]: 2026-02-25 13:02:37.198 244018 INFO nova.compute.manager [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Terminating instance#033[00m
Feb 25 08:02:37 np0005629333 nova_compute[244014]: 2026-02-25 13:02:37.199 244018 DEBUG nova.compute.manager [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 08:02:37 np0005629333 podman[380021]: 2026-02-25 13:02:37.211353986 +0000 UTC m=+0.216708314 container init adf789da310610ce5431c410c8199a3d9a750b3ec28fc88f7221501d81aa4f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_jackson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:02:37 np0005629333 podman[380021]: 2026-02-25 13:02:37.219396883 +0000 UTC m=+0.224751221 container start adf789da310610ce5431c410c8199a3d9a750b3ec28fc88f7221501d81aa4f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_jackson, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 08:02:37 np0005629333 hungry_jackson[380037]: 167 167
Feb 25 08:02:37 np0005629333 systemd[1]: libpod-adf789da310610ce5431c410c8199a3d9a750b3ec28fc88f7221501d81aa4f27.scope: Deactivated successfully.
Feb 25 08:02:37 np0005629333 conmon[380037]: conmon adf789da310610ce5431 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-adf789da310610ce5431c410c8199a3d9a750b3ec28fc88f7221501d81aa4f27.scope/container/memory.events
Feb 25 08:02:37 np0005629333 podman[380021]: 2026-02-25 13:02:37.273016585 +0000 UTC m=+0.278371133 container attach adf789da310610ce5431c410c8199a3d9a750b3ec28fc88f7221501d81aa4f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 08:02:37 np0005629333 podman[380021]: 2026-02-25 13:02:37.273936391 +0000 UTC m=+0.279290719 container died adf789da310610ce5431c410c8199a3d9a750b3ec28fc88f7221501d81aa4f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_jackson, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
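The single payload "167 167" printed by hungry_jackson before it exits is consistent with cephadm probing the uid and gid of the ceph user inside the image; 167:167 is the fixed ceph uid:gid in RHEL-family packaging. A minimal sketch of the same check from the host side (the path is illustrative; cephadm runs its probe inside the container image):

    import os

    st = os.stat("/var/lib/ceph")  # illustrative path owned by the ceph user
    print(st.st_uid, st.st_gid)    # expect "167 167" on RHEL-family Ceph hosts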
Feb 25 08:02:37 np0005629333 kernel: tap1167ad0b-6f (unregistering): left promiscuous mode
Feb 25 08:02:37 np0005629333 NetworkManager[49836]: <info>  [1772024557.3685] device (tap1167ad0b-6f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 08:02:37 np0005629333 nova_compute[244014]: 2026-02-25 13:02:37.377 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:37 np0005629333 ovn_controller[147040]: 2026-02-25T13:02:37Z|01580|binding|INFO|Releasing lport 1167ad0b-6ff5-4138-8c61-fde6601ec903 from this chassis (sb_readonly=0)
Feb 25 08:02:37 np0005629333 ovn_controller[147040]: 2026-02-25T13:02:37Z|01581|binding|INFO|Setting lport 1167ad0b-6ff5-4138-8c61-fde6601ec903 down in Southbound
Feb 25 08:02:37 np0005629333 ovn_controller[147040]: 2026-02-25T13:02:37Z|01582|binding|INFO|Removing iface tap1167ad0b-6f ovn-installed in OVS
Feb 25 08:02:37 np0005629333 nova_compute[244014]: 2026-02-25 13:02:37.382 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:37 np0005629333 nova_compute[244014]: 2026-02-25 13:02:37.386 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:37.409 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:de:15 10.100.0.12', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd2d1c933-a594-4748-8f09-84790cab7d73', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81e387f6-426b-4301-9279-fdf7597d2c7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2daa638-d0b7-4114-8331-0d5928db072a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=1167ad0b-6ff5-4138-8c61-fde6601ec903) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 08:02:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:37.411 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 1167ad0b-6ff5-4138-8c61-fde6601ec903 in datapath 81e387f6-426b-4301-9279-fdf7597d2c7e unbound from our chassis#033[00m
Feb 25 08:02:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:37.412 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 81e387f6-426b-4301-9279-fdf7597d2c7e#033[00m
Feb 25 08:02:37 np0005629333 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000095.scope: Deactivated successfully.
Feb 25 08:02:37 np0005629333 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000095.scope: Consumed 12.407s CPU time.
Feb 25 08:02:37 np0005629333 systemd-machined[210048]: Machine qemu-183-instance-00000095 terminated.
Feb 25 08:02:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:37.428 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f51cd758-655b-417f-b5b1-5982cc5585c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:02:37 np0005629333 systemd[1]: var-lib-containers-storage-overlay-20cc89d3bf2578635e53d1c7e563be88f35c103f7432236adf77f7c12f6df52c-merged.mount: Deactivated successfully.
Feb 25 08:02:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:37.455 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[677e4de6-f925-4a7c-ab7d-2d4ef8d401e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:02:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:37.460 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[105ff05d-32a7-4323-b0d4-6dc8bbfbaa80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:02:37 np0005629333 podman[380021]: 2026-02-25 13:02:37.470606399 +0000 UTC m=+0.475960707 container remove adf789da310610ce5431c410c8199a3d9a750b3ec28fc88f7221501d81aa4f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 08:02:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:37.482 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[895115f8-567c-4968-b940-787570aff39a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:02:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:37.499 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9731a3f8-f10e-4dd2-9379-5c78562dfa7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap81e387f6-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dd:15:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 7, 'rx_bytes': 1370, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 7, 'rx_bytes': 1370, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647231, 'reachable_time': 33442, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 13, 'inoctets': 936, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 13, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 936, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 13, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380068, 'error': None, 'target': 'ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:02:37 np0005629333 systemd[1]: libpod-conmon-adf789da310610ce5431c410c8199a3d9a750b3ec28fc88f7221501d81aa4f27.scope: Deactivated successfully.
Feb 25 08:02:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:37.513 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6e235780-fcbf-4c08-b13a-2b471ceaf861]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap81e387f6-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647239, 'tstamp': 647239}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380069, 'error': None, 'target': 'ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap81e387f6-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647241, 'tstamp': 647241}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380069, 'error': None, 'target': 'ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:02:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:37.515 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81e387f6-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:02:37 np0005629333 nova_compute[244014]: 2026-02-25 13:02:37.517 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:37 np0005629333 nova_compute[244014]: 2026-02-25 13:02:37.522 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:37.523 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81e387f6-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:02:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:37.523 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 08:02:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:37.524 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap81e387f6-40, col_values=(('external_ids', {'iface-id': '6e4a9a2d-90d8-4fd6-b7cc-e40dc20beeef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:02:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:37.525 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
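The three transactions the metadata agent logs here (DelPortCommand, AddPortCommand, DbSetCommand) are ovsdbapp commands that keep the metadata tap on br-int with the right iface-id; the two "Transaction caused no change" lines mean the desired state was already in place, so those commits were no-ops. A minimal sketch of issuing the same idempotent sequence through ovsdbapp's Open_vSwitch API (the socket path and timeout are assumptions, and the agent actually runs these as three separate one-command transactions rather than one batch):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # assumed local ovsdb socket; the agent uses its own configured endpoint
    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port("tap81e387f6-40", bridge="br-ex", if_exists=True))
        txn.add(api.add_port("br-int", "tap81e387f6-40", may_exist=True))
        txn.add(api.db_set("Interface", "tap81e387f6-40",
                           ("external_ids",
                            {"iface-id": "6e4a9a2d-90d8-4fd6-b7cc-e40dc20beeef"})))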
Feb 25 08:02:37 np0005629333 nova_compute[244014]: 2026-02-25 13:02:37.645 244018 INFO nova.virt.libvirt.driver [-] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Instance destroyed successfully.#033[00m
Feb 25 08:02:37 np0005629333 nova_compute[244014]: 2026-02-25 13:02:37.646 244018 DEBUG nova.objects.instance [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'resources' on Instance uuid d2d1c933-a594-4748-8f09-84790cab7d73 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 08:02:37 np0005629333 nova_compute[244014]: 2026-02-25 13:02:37.661 244018 DEBUG nova.virt.libvirt.vif [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T13:02:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-1-1254828585',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-1-1254828585',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-gen',id=149,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDG3qQBqZbx+RdRlYmfQhfQx6MRvbn5P1SDfQdH7zgAHK3JPnG4I4PJxwsk8GxkBiwXqDS8mMRQo7u+5+wHwxJ+FcVnrZkfYuYFmbh/Gbve/0HWzg16VZ7xAIbYve6W3FQ==',key_name='tempest-TestSecurityGroupsBasicOps-1940037159',keypairs=<?>,launch_index=0,launched_at=2026-02-25T13:02:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-rc0rknoo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T13:02:18Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=d2d1c933-a594-4748-8f09-84790cab7d73,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "address": "fa:16:3e:16:de:15", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1167ad0b-6f", "ovs_interfaceid": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 08:02:37 np0005629333 nova_compute[244014]: 2026-02-25 13:02:37.662 244018 DEBUG nova.network.os_vif_util [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "address": "fa:16:3e:16:de:15", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1167ad0b-6f", "ovs_interfaceid": "1167ad0b-6ff5-4138-8c61-fde6601ec903", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 08:02:37 np0005629333 nova_compute[244014]: 2026-02-25 13:02:37.662 244018 DEBUG nova.network.os_vif_util [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:16:de:15,bridge_name='br-int',has_traffic_filtering=True,id=1167ad0b-6ff5-4138-8c61-fde6601ec903,network=Network(81e387f6-426b-4301-9279-fdf7597d2c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1167ad0b-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 08:02:37 np0005629333 nova_compute[244014]: 2026-02-25 13:02:37.663 244018 DEBUG os_vif [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:de:15,bridge_name='br-int',has_traffic_filtering=True,id=1167ad0b-6ff5-4138-8c61-fde6601ec903,network=Network(81e387f6-426b-4301-9279-fdf7597d2c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1167ad0b-6f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 08:02:37 np0005629333 nova_compute[244014]: 2026-02-25 13:02:37.666 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:37 np0005629333 nova_compute[244014]: 2026-02-25 13:02:37.666 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1167ad0b-6f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:02:37 np0005629333 nova_compute[244014]: 2026-02-25 13:02:37.668 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:37 np0005629333 podman[380076]: 2026-02-25 13:02:37.670059825 +0000 UTC m=+0.073906236 container create 2a191f684dbb4d79893a8083b44380ee003ec0c5f0e507443e98380f3408062a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:02:37 np0005629333 nova_compute[244014]: 2026-02-25 13:02:37.671 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 08:02:37 np0005629333 nova_compute[244014]: 2026-02-25 13:02:37.674 244018 INFO os_vif [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:de:15,bridge_name='br-int',has_traffic_filtering=True,id=1167ad0b-6ff5-4138-8c61-fde6601ec903,network=Network(81e387f6-426b-4301-9279-fdf7597d2c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1167ad0b-6f')#033[00m
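The unplug path logged above goes through the os-vif library's module-level API: nova converts its VIF dict to a VIFOpenVSwitch object (the "Converting VIF" / "Converted object" lines) and hands it to os_vif.unplug(), which removes the tap from br-int. A minimal sketch of that call using the field values from this log; only a subset of fields is set here, whereas nova populates the full object including the network, so treat this as an illustration rather than nova's exact call site:

    import os_vif
    from os_vif.objects.instance_info import InstanceInfo
    from os_vif.objects.vif import VIFOpenVSwitch

    os_vif.initialize()  # loads the registered plugins, including 'ovs'
    vif = VIFOpenVSwitch(id="1167ad0b-6ff5-4138-8c61-fde6601ec903",
                         address="fa:16:3e:16:de:15",
                         bridge_name="br-int",
                         vif_name="tap1167ad0b-6f",
                         plugin="ovs")
    info = InstanceInfo(uuid="d2d1c933-a594-4748-8f09-84790cab7d73",
                        name="instance-00000095")
    os_vif.unplug(vif, info)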
Feb 25 08:02:37 np0005629333 podman[380076]: 2026-02-25 13:02:37.626682142 +0000 UTC m=+0.030528553 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:02:37 np0005629333 systemd[1]: Started libpod-conmon-2a191f684dbb4d79893a8083b44380ee003ec0c5f0e507443e98380f3408062a.scope.
Feb 25 08:02:37 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:02:37 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8e0dcea9d1c85ddad041aa71d0bf5de4064d6df9adb967f5d86283c28b6bda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:02:37 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8e0dcea9d1c85ddad041aa71d0bf5de4064d6df9adb967f5d86283c28b6bda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:02:37 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8e0dcea9d1c85ddad041aa71d0bf5de4064d6df9adb967f5d86283c28b6bda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:02:37 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c8e0dcea9d1c85ddad041aa71d0bf5de4064d6df9adb967f5d86283c28b6bda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
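The kernel prints these warnings when an XFS filesystem created without the bigtime feature is (re)mounted, as happens here for every overlay bind target: inode timestamps are signed 32-bit seconds, so they run out at 0x7fffffff seconds past the Unix epoch. A quick check of that cutoff:

    from datetime import datetime, timezone

    # 0x7fffffff == 2**31 - 1 seconds after the Unix epoch
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00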
Feb 25 08:02:37 np0005629333 podman[380076]: 2026-02-25 13:02:37.809960632 +0000 UTC m=+0.213807063 container init 2a191f684dbb4d79893a8083b44380ee003ec0c5f0e507443e98380f3408062a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_driscoll, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 08:02:37 np0005629333 podman[380076]: 2026-02-25 13:02:37.820936171 +0000 UTC m=+0.224782612 container start 2a191f684dbb4d79893a8083b44380ee003ec0c5f0e507443e98380f3408062a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_driscoll, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 08:02:37 np0005629333 podman[380076]: 2026-02-25 13:02:37.832110166 +0000 UTC m=+0.235956597 container attach 2a191f684dbb4d79893a8083b44380ee003ec0c5f0e507443e98380f3408062a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_driscoll, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 08:02:38 np0005629333 nova_compute[244014]: 2026-02-25 13:02:38.328 244018 DEBUG nova.compute.manager [req-fce8f2af-6f5b-4718-b6f0-09dc1ee95047 req-51caf21f-936d-4056-95fe-b914792e9e43 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Received event network-vif-unplugged-1167ad0b-6ff5-4138-8c61-fde6601ec903 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:02:38 np0005629333 nova_compute[244014]: 2026-02-25 13:02:38.329 244018 DEBUG oslo_concurrency.lockutils [req-fce8f2af-6f5b-4718-b6f0-09dc1ee95047 req-51caf21f-936d-4056-95fe-b914792e9e43 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:02:38 np0005629333 nova_compute[244014]: 2026-02-25 13:02:38.331 244018 DEBUG oslo_concurrency.lockutils [req-fce8f2af-6f5b-4718-b6f0-09dc1ee95047 req-51caf21f-936d-4056-95fe-b914792e9e43 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:02:38 np0005629333 nova_compute[244014]: 2026-02-25 13:02:38.331 244018 DEBUG oslo_concurrency.lockutils [req-fce8f2af-6f5b-4718-b6f0-09dc1ee95047 req-51caf21f-936d-4056-95fe-b914792e9e43 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:02:38 np0005629333 nova_compute[244014]: 2026-02-25 13:02:38.332 244018 DEBUG nova.compute.manager [req-fce8f2af-6f5b-4718-b6f0-09dc1ee95047 req-51caf21f-936d-4056-95fe-b914792e9e43 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] No waiting events found dispatching network-vif-unplugged-1167ad0b-6ff5-4138-8c61-fde6601ec903 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 08:02:38 np0005629333 nova_compute[244014]: 2026-02-25 13:02:38.332 244018 DEBUG nova.compute.manager [req-fce8f2af-6f5b-4718-b6f0-09dc1ee95047 req-51caf21f-936d-4056-95fe-b914792e9e43 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Received event network-vif-unplugged-1167ad0b-6ff5-4138-8c61-fde6601ec903 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 08:02:38 np0005629333 lvm[380198]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:02:38 np0005629333 lvm[380198]: VG ceph_vg0 finished
Feb 25 08:02:38 np0005629333 nova_compute[244014]: 2026-02-25 13:02:38.407 244018 INFO nova.virt.libvirt.driver [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Deleting instance files /var/lib/nova/instances/d2d1c933-a594-4748-8f09-84790cab7d73_del#033[00m
Feb 25 08:02:38 np0005629333 nova_compute[244014]: 2026-02-25 13:02:38.408 244018 INFO nova.virt.libvirt.driver [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Deletion of /var/lib/nova/instances/d2d1c933-a594-4748-8f09-84790cab7d73_del complete#033[00m
Feb 25 08:02:38 np0005629333 lvm[380200]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:02:38 np0005629333 lvm[380200]: VG ceph_vg1 finished
Feb 25 08:02:38 np0005629333 lvm[380201]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:02:38 np0005629333 lvm[380201]: VG ceph_vg2 finished
Feb 25 08:02:38 np0005629333 lvm[380202]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:02:38 np0005629333 lvm[380202]: VG ceph_vg0 finished
Feb 25 08:02:38 np0005629333 nova_compute[244014]: 2026-02-25 13:02:38.466 244018 INFO nova.compute.manager [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Took 1.27 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 08:02:38 np0005629333 nova_compute[244014]: 2026-02-25 13:02:38.466 244018 DEBUG oslo.service.loopingcall [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 08:02:38 np0005629333 nova_compute[244014]: 2026-02-25 13:02:38.467 244018 DEBUG nova.compute.manager [-] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 08:02:38 np0005629333 nova_compute[244014]: 2026-02-25 13:02:38.467 244018 DEBUG nova.network.neutron [-] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 08:02:38 np0005629333 lvm[380204]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:02:38 np0005629333 lvm[380204]: VG ceph_vg2 finished
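The repeated "PV ... online, VG ... complete" / "VG ... finished" pairs come from LVM's event-driven autoactivation: a udev-triggered pvscan marks each PV online and reports once the owning VG has all of its PVs, and each ceph_vg* here has exactly one loop-backed PV. A minimal sketch of confirming that state with lvm2's JSON reporting (assumes a modern lvm2 with --reportformat json; run as root):

    import json
    import subprocess

    out = subprocess.run(
        ["vgs", "--reportformat", "json", "-o", "vg_name,pv_count,lv_count"],
        check=True, capture_output=True, text=True,
    ).stdout
    for vg in json.loads(out)["report"][0]["vg"]:
        print(vg["vg_name"], vg["pv_count"], vg["lv_count"])
    # expect ceph_vg0/ceph_vg1/ceph_vg2 each with 1 PV and 1 LV on this host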
Feb 25 08:02:38 np0005629333 objective_driscoll[380122]: {}
Feb 25 08:02:38 np0005629333 systemd[1]: libpod-2a191f684dbb4d79893a8083b44380ee003ec0c5f0e507443e98380f3408062a.scope: Deactivated successfully.
Feb 25 08:02:38 np0005629333 systemd[1]: libpod-2a191f684dbb4d79893a8083b44380ee003ec0c5f0e507443e98380f3408062a.scope: Consumed 1.088s CPU time.
Feb 25 08:02:38 np0005629333 conmon[380122]: conmon 2a191f684dbb4d79893a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2a191f684dbb4d79893a8083b44380ee003ec0c5f0e507443e98380f3408062a.scope/container/memory.events
Feb 25 08:02:38 np0005629333 podman[380076]: 2026-02-25 13:02:38.559819683 +0000 UTC m=+0.963666124 container died 2a191f684dbb4d79893a8083b44380ee003ec0c5f0e507443e98380f3408062a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_driscoll, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:02:38 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6c8e0dcea9d1c85ddad041aa71d0bf5de4064d6df9adb967f5d86283c28b6bda-merged.mount: Deactivated successfully.
Feb 25 08:02:38 np0005629333 podman[380076]: 2026-02-25 13:02:38.634054707 +0000 UTC m=+1.037901118 container remove 2a191f684dbb4d79893a8083b44380ee003ec0c5f0e507443e98380f3408062a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_driscoll, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 08:02:38 np0005629333 systemd[1]: libpod-conmon-2a191f684dbb4d79893a8083b44380ee003ec0c5f0e507443e98380f3408062a.scope: Deactivated successfully.
Feb 25 08:02:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:02:38 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:02:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:02:38 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:02:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:02:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2473: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 08:02:39 np0005629333 nova_compute[244014]: 2026-02-25 13:02:39.442 244018 DEBUG nova.network.neutron [-] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:02:39 np0005629333 nova_compute[244014]: 2026-02-25 13:02:39.459 244018 INFO nova.compute.manager [-] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Took 0.99 seconds to deallocate network for instance.#033[00m
Feb 25 08:02:39 np0005629333 nova_compute[244014]: 2026-02-25 13:02:39.497 244018 DEBUG oslo_concurrency.lockutils [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:02:39 np0005629333 nova_compute[244014]: 2026-02-25 13:02:39.497 244018 DEBUG oslo_concurrency.lockutils [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:02:39 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:02:39 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:02:39 np0005629333 nova_compute[244014]: 2026-02-25 13:02:39.775 244018 DEBUG oslo_concurrency.processutils [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:02:39 np0005629333 nova_compute[244014]: 2026-02-25 13:02:39.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:02:39 np0005629333 nova_compute[244014]: 2026-02-25 13:02:39.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:02:39 np0005629333 nova_compute[244014]: 2026-02-25 13:02:39.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:02:40 np0005629333 nova_compute[244014]: 2026-02-25 13:02:40.044 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 08:02:40 np0005629333 nova_compute[244014]: 2026-02-25 13:02:40.044 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquired lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 08:02:40 np0005629333 nova_compute[244014]: 2026-02-25 13:02:40.044 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Feb 25 08:02:40 np0005629333 nova_compute[244014]: 2026-02-25 13:02:40.045 244018 DEBUG nova.objects.instance [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lazy-loading 'info_cache' on Instance uuid ea4e69b0-bf04-4637-847b-9807c884d103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 08:02:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:02:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/949477923' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:02:40 np0005629333 nova_compute[244014]: 2026-02-25 13:02:40.341 244018 DEBUG oslo_concurrency.processutils [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:02:40 np0005629333 nova_compute[244014]: 2026-02-25 13:02:40.347 244018 DEBUG nova.compute.provider_tree [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:02:40 np0005629333 nova_compute[244014]: 2026-02-25 13:02:40.362 244018 DEBUG nova.scheduler.client.report [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:02:40 np0005629333 nova_compute[244014]: 2026-02-25 13:02:40.389 244018 DEBUG oslo_concurrency.lockutils [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:02:40 np0005629333 nova_compute[244014]: 2026-02-25 13:02:40.402 244018 DEBUG nova.compute.manager [req-7ae73f94-fc0e-4b6f-bcbc-ad0b8265b418 req-f425f903-59b1-43c0-9587-a817fcdc3b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Received event network-vif-plugged-1167ad0b-6ff5-4138-8c61-fde6601ec903 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:02:40 np0005629333 nova_compute[244014]: 2026-02-25 13:02:40.403 244018 DEBUG oslo_concurrency.lockutils [req-7ae73f94-fc0e-4b6f-bcbc-ad0b8265b418 req-f425f903-59b1-43c0-9587-a817fcdc3b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:02:40 np0005629333 nova_compute[244014]: 2026-02-25 13:02:40.403 244018 DEBUG oslo_concurrency.lockutils [req-7ae73f94-fc0e-4b6f-bcbc-ad0b8265b418 req-f425f903-59b1-43c0-9587-a817fcdc3b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:02:40 np0005629333 nova_compute[244014]: 2026-02-25 13:02:40.404 244018 DEBUG oslo_concurrency.lockutils [req-7ae73f94-fc0e-4b6f-bcbc-ad0b8265b418 req-f425f903-59b1-43c0-9587-a817fcdc3b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:02:40 np0005629333 nova_compute[244014]: 2026-02-25 13:02:40.404 244018 DEBUG nova.compute.manager [req-7ae73f94-fc0e-4b6f-bcbc-ad0b8265b418 req-f425f903-59b1-43c0-9587-a817fcdc3b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] No waiting events found dispatching network-vif-plugged-1167ad0b-6ff5-4138-8c61-fde6601ec903 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 08:02:40 np0005629333 nova_compute[244014]: 2026-02-25 13:02:40.404 244018 WARNING nova.compute.manager [req-7ae73f94-fc0e-4b6f-bcbc-ad0b8265b418 req-f425f903-59b1-43c0-9587-a817fcdc3b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Received unexpected event network-vif-plugged-1167ad0b-6ff5-4138-8c61-fde6601ec903 for instance with vm_state deleted and task_state None.#033[00m
Feb 25 08:02:40 np0005629333 nova_compute[244014]: 2026-02-25 13:02:40.405 244018 DEBUG nova.compute.manager [req-7ae73f94-fc0e-4b6f-bcbc-ad0b8265b418 req-f425f903-59b1-43c0-9587-a817fcdc3b19 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Received event network-vif-deleted-1167ad0b-6ff5-4138-8c61-fde6601ec903 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:02:40 np0005629333 nova_compute[244014]: 2026-02-25 13:02:40.419 244018 INFO nova.scheduler.client.report [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Deleted allocations for instance d2d1c933-a594-4748-8f09-84790cab7d73#033[00m
Feb 25 08:02:40 np0005629333 nova_compute[244014]: 2026-02-25 13:02:40.476 244018 DEBUG oslo_concurrency.lockutils [None req-aa71c2c2-6773-4aa0-960c-767615478d50 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "d2d1c933-a594-4748-8f09-84790cab7d73" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.281s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:02:40 np0005629333 nova_compute[244014]: 2026-02-25 13:02:40.676 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2474: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 317 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.450 244018 DEBUG oslo_concurrency.lockutils [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "ea4e69b0-bf04-4637-847b-9807c884d103" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.451 244018 DEBUG oslo_concurrency.lockutils [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.451 244018 DEBUG oslo_concurrency.lockutils [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.451 244018 DEBUG oslo_concurrency.lockutils [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.452 244018 DEBUG oslo_concurrency.lockutils [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.453 244018 INFO nova.compute.manager [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Terminating instance#033[00m
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.456 244018 DEBUG nova.compute.manager [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 08:02:41 np0005629333 kernel: tap5a5e6571-f9 (unregistering): left promiscuous mode
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.513 244018 DEBUG nova.network.neutron [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Updating instance_info_cache with network_info: [{"id": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "address": "fa:16:3e:8b:fe:07", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a5e6571-f9", "ovs_interfaceid": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:02:41 np0005629333 NetworkManager[49836]: <info>  [1772024561.5155] device (tap5a5e6571-f9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.524 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:41 np0005629333 ovn_controller[147040]: 2026-02-25T13:02:41Z|01583|binding|INFO|Releasing lport 5a5e6571-f930-49df-8399-0abb9daf7f4f from this chassis (sb_readonly=0)
Feb 25 08:02:41 np0005629333 ovn_controller[147040]: 2026-02-25T13:02:41Z|01584|binding|INFO|Setting lport 5a5e6571-f930-49df-8399-0abb9daf7f4f down in Southbound
Feb 25 08:02:41 np0005629333 ovn_controller[147040]: 2026-02-25T13:02:41Z|01585|binding|INFO|Removing iface tap5a5e6571-f9 ovn-installed in OVS
Feb 25 08:02:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:41.532 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:fe:07 10.100.0.5'], port_security=['fa:16:3e:8b:fe:07 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ea4e69b0-bf04-4637-847b-9807c884d103', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-81e387f6-426b-4301-9279-fdf7597d2c7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '11991dde-8004-44c6-851b-7ca96866b068 266cd610-5f0f-4f4d-8bfb-2ebdf5d189b9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2daa638-d0b7-4114-8331-0d5928db072a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=5a5e6571-f930-49df-8399-0abb9daf7f4f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.534 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Releasing lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 08:02:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:41.534 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 5a5e6571-f930-49df-8399-0abb9daf7f4f in datapath 81e387f6-426b-4301-9279-fdf7597d2c7e unbound from our chassis#033[00m
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.534 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.535 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:02:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:41.535 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 81e387f6-426b-4301-9279-fdf7597d2c7e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 08:02:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:41.537 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[2b6ef8b2-2aac-4c89-9a44-c2192377c5ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:02:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:41.538 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e namespace which is not needed anymore#033[00m
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.547 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.555 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.555 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.556 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.556 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.556 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:02:41 np0005629333 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000094.scope: Deactivated successfully.
Feb 25 08:02:41 np0005629333 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000094.scope: Consumed 14.334s CPU time.
Feb 25 08:02:41 np0005629333 systemd-machined[210048]: Machine qemu-182-instance-00000094 terminated.
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.681 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.685 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.695 244018 INFO nova.virt.libvirt.driver [-] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Instance destroyed successfully.#033[00m
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.697 244018 DEBUG nova.objects.instance [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'resources' on Instance uuid ea4e69b0-bf04-4637-847b-9807c884d103 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 08:02:41 np0005629333 neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e[379021]: [NOTICE]   (379025) : haproxy version is 2.8.14-c23fe91
Feb 25 08:02:41 np0005629333 neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e[379021]: [NOTICE]   (379025) : path to executable is /usr/sbin/haproxy
Feb 25 08:02:41 np0005629333 neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e[379021]: [WARNING]  (379025) : Exiting Master process...
Feb 25 08:02:41 np0005629333 neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e[379021]: [WARNING]  (379025) : Exiting Master process...
Feb 25 08:02:41 np0005629333 neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e[379021]: [ALERT]    (379025) : Current worker (379027) exited with code 143 (Terminated)
Feb 25 08:02:41 np0005629333 neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e[379021]: [WARNING]  (379025) : All workers exited. Exiting... (0)
Feb 25 08:02:41 np0005629333 systemd[1]: libpod-47eadc59421d1d38142f2438394fc8c198c542685813ac8cfd5bdf8cb8939451.scope: Deactivated successfully.
Feb 25 08:02:41 np0005629333 podman[380292]: 2026-02-25 13:02:41.730943735 +0000 UTC m=+0.065546929 container died 47eadc59421d1d38142f2438394fc8c198c542685813ac8cfd5bdf8cb8939451 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2)
Feb 25 08:02:41 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-47eadc59421d1d38142f2438394fc8c198c542685813ac8cfd5bdf8cb8939451-userdata-shm.mount: Deactivated successfully.
Feb 25 08:02:41 np0005629333 systemd[1]: var-lib-containers-storage-overlay-3abb1a1fefa49a0b771e7d6edf00e3f98906310d6c7b5e12ddbb0b47d61691b7-merged.mount: Deactivated successfully.
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.783 244018 DEBUG nova.virt.libvirt.vif [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T13:01:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-852563561',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-852563561',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=148,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDG3qQBqZbx+RdRlYmfQhfQx6MRvbn5P1SDfQdH7zgAHK3JPnG4I4PJxwsk8GxkBiwXqDS8mMRQo7u+5+wHwxJ+FcVnrZkfYuYFmbh/Gbve/0HWzg16VZ7xAIbYve6W3FQ==',key_name='tempest-TestSecurityGroupsBasicOps-1940037159',keypairs=<?>,launch_index=0,launched_at=2026-02-25T13:01:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-nfnujduq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T13:01:43Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=ea4e69b0-bf04-4637-847b-9807c884d103,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "address": "fa:16:3e:8b:fe:07", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a5e6571-f9", "ovs_interfaceid": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.784 244018 DEBUG nova.network.os_vif_util [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "address": "fa:16:3e:8b:fe:07", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a5e6571-f9", "ovs_interfaceid": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.784 244018 DEBUG nova.network.os_vif_util [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:fe:07,bridge_name='br-int',has_traffic_filtering=True,id=5a5e6571-f930-49df-8399-0abb9daf7f4f,network=Network(81e387f6-426b-4301-9279-fdf7597d2c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a5e6571-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.785 244018 DEBUG os_vif [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:fe:07,bridge_name='br-int',has_traffic_filtering=True,id=5a5e6571-f930-49df-8399-0abb9daf7f4f,network=Network(81e387f6-426b-4301-9279-fdf7597d2c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a5e6571-f9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.786 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.786 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a5e6571-f9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.789 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.791 244018 INFO os_vif [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:fe:07,bridge_name='br-int',has_traffic_filtering=True,id=5a5e6571-f930-49df-8399-0abb9daf7f4f,network=Network(81e387f6-426b-4301-9279-fdf7597d2c7e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a5e6571-f9')#033[00m
Feb 25 08:02:41 np0005629333 podman[380292]: 2026-02-25 13:02:41.82614969 +0000 UTC m=+0.160752864 container cleanup 47eadc59421d1d38142f2438394fc8c198c542685813ac8cfd5bdf8cb8939451 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 25 08:02:41 np0005629333 systemd[1]: libpod-conmon-47eadc59421d1d38142f2438394fc8c198c542685813ac8cfd5bdf8cb8939451.scope: Deactivated successfully.
Feb 25 08:02:41 np0005629333 podman[380369]: 2026-02-25 13:02:41.921678065 +0000 UTC m=+0.073239227 container remove 47eadc59421d1d38142f2438394fc8c198c542685813ac8cfd5bdf8cb8939451 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:02:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:41.927 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[7d529a06-d45c-4419-a975-2db17dd8a6ce]: (4, ('Wed Feb 25 01:02:41 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e (47eadc59421d1d38142f2438394fc8c198c542685813ac8cfd5bdf8cb8939451)\n47eadc59421d1d38142f2438394fc8c198c542685813ac8cfd5bdf8cb8939451\nWed Feb 25 01:02:41 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e (47eadc59421d1d38142f2438394fc8c198c542685813ac8cfd5bdf8cb8939451)\n47eadc59421d1d38142f2438394fc8c198c542685813ac8cfd5bdf8cb8939451\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:02:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:41.929 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bac1dedb-1265-4d71-aeb9-75c9c9f47900]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:02:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:41.930 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81e387f6-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:02:41 np0005629333 kernel: tap81e387f6-40: left promiscuous mode
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.938 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:41 np0005629333 nova_compute[244014]: 2026-02-25 13:02:41.943 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:02:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:41.947 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[afc7e705-dc70-41dd-b1aa-95fc01e89ef2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:02:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:41.965 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1a43affe-b9f7-4f81-95b5-a5d231b3701d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:02:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:41.967 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c2d0b3a5-c746-4c97-ab1a-ed2fa72c756c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:02:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:41.981 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[037f57f0-6caa-4dfb-82d0-f66fc7a8576a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647225, 'reachable_time': 44491, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380384, 'error': None, 'target': 'ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:02:41 np0005629333 systemd[1]: run-netns-ovnmeta\x2d81e387f6\x2d426b\x2d4301\x2d9279\x2dfdf7597d2c7e.mount: Deactivated successfully.
Feb 25 08:02:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:41.992 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-81e387f6-426b-4301-9279-fdf7597d2c7e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 08:02:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:41.993 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[c3b2295c-0724-497c-bf46-301490695f15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:02:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:02:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/250342729' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:02:42 np0005629333 nova_compute[244014]: 2026-02-25 13:02:42.110 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:02:42 np0005629333 nova_compute[244014]: 2026-02-25 13:02:42.172 244018 INFO nova.virt.libvirt.driver [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Deleting instance files /var/lib/nova/instances/ea4e69b0-bf04-4637-847b-9807c884d103_del#033[00m
Feb 25 08:02:42 np0005629333 nova_compute[244014]: 2026-02-25 13:02:42.173 244018 INFO nova.virt.libvirt.driver [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Deletion of /var/lib/nova/instances/ea4e69b0-bf04-4637-847b-9807c884d103_del complete#033[00m
Feb 25 08:02:42 np0005629333 nova_compute[244014]: 2026-02-25 13:02:42.224 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 08:02:42 np0005629333 nova_compute[244014]: 2026-02-25 13:02:42.224 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 08:02:42 np0005629333 nova_compute[244014]: 2026-02-25 13:02:42.248 244018 INFO nova.compute.manager [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 08:02:42 np0005629333 nova_compute[244014]: 2026-02-25 13:02:42.249 244018 DEBUG oslo.service.loopingcall [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 08:02:42 np0005629333 nova_compute[244014]: 2026-02-25 13:02:42.249 244018 DEBUG nova.compute.manager [-] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 08:02:42 np0005629333 nova_compute[244014]: 2026-02-25 13:02:42.250 244018 DEBUG nova.network.neutron [-] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 08:02:42 np0005629333 nova_compute[244014]: 2026-02-25 13:02:42.376 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:02:42 np0005629333 nova_compute[244014]: 2026-02-25 13:02:42.377 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3442MB free_disk=59.8961376408115GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 08:02:42 np0005629333 nova_compute[244014]: 2026-02-25 13:02:42.377 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:02:42 np0005629333 nova_compute[244014]: 2026-02-25 13:02:42.377 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:02:42 np0005629333 nova_compute[244014]: 2026-02-25 13:02:42.445 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance ea4e69b0-bf04-4637-847b-9807c884d103 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 08:02:42 np0005629333 nova_compute[244014]: 2026-02-25 13:02:42.446 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 08:02:42 np0005629333 nova_compute[244014]: 2026-02-25 13:02:42.446 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 08:02:42 np0005629333 nova_compute[244014]: 2026-02-25 13:02:42.495 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:02:42 np0005629333 nova_compute[244014]: 2026-02-25 13:02:42.538 244018 DEBUG nova.compute.manager [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Received event network-changed-5a5e6571-f930-49df-8399-0abb9daf7f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 08:02:42 np0005629333 nova_compute[244014]: 2026-02-25 13:02:42.540 244018 DEBUG nova.compute.manager [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Refreshing instance network info cache due to event network-changed-5a5e6571-f930-49df-8399-0abb9daf7f4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 08:02:42 np0005629333 nova_compute[244014]: 2026-02-25 13:02:42.540 244018 DEBUG oslo_concurrency.lockutils [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 08:02:42 np0005629333 nova_compute[244014]: 2026-02-25 13:02:42.540 244018 DEBUG oslo_concurrency.lockutils [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 08:02:42 np0005629333 nova_compute[244014]: 2026-02-25 13:02:42.541 244018 DEBUG nova.network.neutron [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Refreshing network info cache for port 5a5e6571-f930-49df-8399-0abb9daf7f4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 25 08:02:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:02:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:02:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:02:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:02:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015360268250475516 of space, bias 1.0, pg target 0.46080804751426546 quantized to 32 (current 32)
Feb 25 08:02:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:02:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:02:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:02:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:02:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:02:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002494462883828743 of space, bias 1.0, pg target 0.7483388651486229 quantized to 32 (current 32)
Feb 25 08:02:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:02:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4133407414918176e-06 of space, bias 4.0, pg target 0.0016960088897901811 quantized to 16 (current 16)
Feb 25 08:02:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:02:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:02:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:02:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:02:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:02:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:02:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:02:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:02:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:02:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 08:02:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2475: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Feb 25 08:02:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:02:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/653326501' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:02:43 np0005629333 nova_compute[244014]: 2026-02-25 13:02:43.060 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:02:43 np0005629333 nova_compute[244014]: 2026-02-25 13:02:43.067 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:02:43 np0005629333 nova_compute[244014]: 2026-02-25 13:02:43.090 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 08:02:43 np0005629333 nova_compute[244014]: 2026-02-25 13:02:43.125 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 08:02:43 np0005629333 nova_compute[244014]: 2026-02-25 13:02:43.126 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:02:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:02:44 np0005629333 nova_compute[244014]: 2026-02-25 13:02:44.324 244018 DEBUG nova.network.neutron [-] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 08:02:44 np0005629333 nova_compute[244014]: 2026-02-25 13:02:44.342 244018 INFO nova.compute.manager [-] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Took 2.09 seconds to deallocate network for instance.
Feb 25 08:02:44 np0005629333 nova_compute[244014]: 2026-02-25 13:02:44.386 244018 DEBUG oslo_concurrency.lockutils [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:02:44 np0005629333 nova_compute[244014]: 2026-02-25 13:02:44.387 244018 DEBUG oslo_concurrency.lockutils [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:02:44 np0005629333 nova_compute[244014]: 2026-02-25 13:02:44.434 244018 DEBUG oslo_concurrency.processutils [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:02:44 np0005629333 nova_compute[244014]: 2026-02-25 13:02:44.586 244018 DEBUG nova.compute.manager [req-5ded0371-698e-4e0c-b96b-2e202be8894b req-a8f2fabd-2d9d-4bfc-84a4-c5f965ce0326 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Received event network-vif-deleted-5a5e6571-f930-49df-8399-0abb9daf7f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 08:02:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2476: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 17 KiB/s wr, 56 op/s
Feb 25 08:02:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:02:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1115451864' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:02:45 np0005629333 nova_compute[244014]: 2026-02-25 13:02:45.042 244018 DEBUG oslo_concurrency.processutils [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:02:45 np0005629333 nova_compute[244014]: 2026-02-25 13:02:45.049 244018 DEBUG nova.compute.provider_tree [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:02:45 np0005629333 nova_compute[244014]: 2026-02-25 13:02:45.066 244018 DEBUG nova.scheduler.client.report [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 08:02:45 np0005629333 nova_compute[244014]: 2026-02-25 13:02:45.085 244018 DEBUG oslo_concurrency.lockutils [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:02:45 np0005629333 nova_compute[244014]: 2026-02-25 13:02:45.108 244018 INFO nova.scheduler.client.report [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Deleted allocations for instance ea4e69b0-bf04-4637-847b-9807c884d103
Feb 25 08:02:45 np0005629333 nova_compute[244014]: 2026-02-25 13:02:45.203 244018 DEBUG oslo_concurrency.lockutils [None req-028e5b50-027c-4897-ae23-797fbdbaca48 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:02:45 np0005629333 nova_compute[244014]: 2026-02-25 13:02:45.297 244018 DEBUG nova.network.neutron [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Updated VIF entry in instance network info cache for port 5a5e6571-f930-49df-8399-0abb9daf7f4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 25 08:02:45 np0005629333 nova_compute[244014]: 2026-02-25 13:02:45.297 244018 DEBUG nova.network.neutron [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Updating instance_info_cache with network_info: [{"id": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "address": "fa:16:3e:8b:fe:07", "network": {"id": "81e387f6-426b-4301-9279-fdf7597d2c7e", "bridge": "br-int", "label": "tempest-network-smoke--545682579", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a5e6571-f9", "ovs_interfaceid": "5a5e6571-f930-49df-8399-0abb9daf7f4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 25 08:02:45 np0005629333 nova_compute[244014]: 2026-02-25 13:02:45.322 244018 DEBUG oslo_concurrency.lockutils [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-ea4e69b0-bf04-4637-847b-9807c884d103" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 25 08:02:45 np0005629333 nova_compute[244014]: 2026-02-25 13:02:45.322 244018 DEBUG nova.compute.manager [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Received event network-vif-unplugged-5a5e6571-f930-49df-8399-0abb9daf7f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 08:02:45 np0005629333 nova_compute[244014]: 2026-02-25 13:02:45.323 244018 DEBUG oslo_concurrency.lockutils [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:02:45 np0005629333 nova_compute[244014]: 2026-02-25 13:02:45.323 244018 DEBUG oslo_concurrency.lockutils [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:02:45 np0005629333 nova_compute[244014]: 2026-02-25 13:02:45.324 244018 DEBUG oslo_concurrency.lockutils [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:02:45 np0005629333 nova_compute[244014]: 2026-02-25 13:02:45.324 244018 DEBUG nova.compute.manager [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] No waiting events found dispatching network-vif-unplugged-5a5e6571-f930-49df-8399-0abb9daf7f4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 08:02:45 np0005629333 nova_compute[244014]: 2026-02-25 13:02:45.324 244018 DEBUG nova.compute.manager [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Received event network-vif-unplugged-5a5e6571-f930-49df-8399-0abb9daf7f4f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 08:02:45 np0005629333 nova_compute[244014]: 2026-02-25 13:02:45.325 244018 DEBUG nova.compute.manager [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Received event network-vif-plugged-5a5e6571-f930-49df-8399-0abb9daf7f4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 08:02:45 np0005629333 nova_compute[244014]: 2026-02-25 13:02:45.325 244018 DEBUG oslo_concurrency.lockutils [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:02:45 np0005629333 nova_compute[244014]: 2026-02-25 13:02:45.325 244018 DEBUG oslo_concurrency.lockutils [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:02:45 np0005629333 nova_compute[244014]: 2026-02-25 13:02:45.326 244018 DEBUG oslo_concurrency.lockutils [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "ea4e69b0-bf04-4637-847b-9807c884d103-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:02:45 np0005629333 nova_compute[244014]: 2026-02-25 13:02:45.326 244018 DEBUG nova.compute.manager [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] No waiting events found dispatching network-vif-plugged-5a5e6571-f930-49df-8399-0abb9daf7f4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 25 08:02:45 np0005629333 nova_compute[244014]: 2026-02-25 13:02:45.326 244018 WARNING nova.compute.manager [req-88191b71-0fb1-47e9-ae3d-b242da82fab2 req-007b2c45-77ec-4fc1-888c-ff50f8477954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Received unexpected event network-vif-plugged-5a5e6571-f930-49df-8399-0abb9daf7f4f for instance with vm_state active and task_state deleting.
Feb 25 08:02:45 np0005629333 nova_compute[244014]: 2026-02-25 13:02:45.679 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:02:46 np0005629333 nova_compute[244014]: 2026-02-25 13:02:46.121 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:02:46 np0005629333 nova_compute[244014]: 2026-02-25 13:02:46.122 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:02:46 np0005629333 nova_compute[244014]: 2026-02-25 13:02:46.122 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:02:46 np0005629333 nova_compute[244014]: 2026-02-25 13:02:46.789 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:02:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2477: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 17 KiB/s wr, 56 op/s
Feb 25 08:02:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:02:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3828213001' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:02:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:02:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3828213001' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:02:48 np0005629333 nova_compute[244014]: 2026-02-25 13:02:48.571 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:02:48 np0005629333 nova_compute[244014]: 2026-02-25 13:02:48.621 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:02:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:02:48 np0005629333 nova_compute[244014]: 2026-02-25 13:02:48.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:02:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2478: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 17 KiB/s wr, 56 op/s
Feb 25 08:02:50 np0005629333 nova_compute[244014]: 2026-02-25 13:02:50.680 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:02:50 np0005629333 podman[380434]: 2026-02-25 13:02:50.741645371 +0000 UTC m=+0.086197932 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Feb 25 08:02:50 np0005629333 podman[380435]: 2026-02-25 13:02:50.790759017 +0000 UTC m=+0.130416300 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 08:02:50 np0005629333 nova_compute[244014]: 2026-02-25 13:02:50.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:02:50 np0005629333 nova_compute[244014]: 2026-02-25 13:02:50.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:02:50 np0005629333 nova_compute[244014]: 2026-02-25 13:02:50.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 08:02:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2479: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.4 KiB/s wr, 55 op/s
Feb 25 08:02:51 np0005629333 nova_compute[244014]: 2026-02-25 13:02:51.792 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:02:52 np0005629333 nova_compute[244014]: 2026-02-25 13:02:52.644 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024557.6426325, d2d1c933-a594-4748-8f09-84790cab7d73 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 08:02:52 np0005629333 nova_compute[244014]: 2026-02-25 13:02:52.645 244018 INFO nova.compute.manager [-] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] VM Stopped (Lifecycle Event)
Feb 25 08:02:52 np0005629333 nova_compute[244014]: 2026-02-25 13:02:52.671 244018 DEBUG nova.compute.manager [None req-223caa6c-cc0e-49bc-a080-da3adbf4ae09 - - - - - -] [instance: d2d1c933-a594-4748-8f09-84790cab7d73] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 08:02:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2480: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.4 KiB/s wr, 55 op/s
Feb 25 08:02:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:02:54 np0005629333 nova_compute[244014]: 2026-02-25 13:02:54.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:02:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2481: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:02:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:55.040 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:02:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:55.040 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:02:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:02:55.040 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:02:55 np0005629333 nova_compute[244014]: 2026-02-25 13:02:55.682 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:02:56 np0005629333 nova_compute[244014]: 2026-02-25 13:02:56.693 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024561.6922565, ea4e69b0-bf04-4637-847b-9807c884d103 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 08:02:56 np0005629333 nova_compute[244014]: 2026-02-25 13:02:56.694 244018 INFO nova.compute.manager [-] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] VM Stopped (Lifecycle Event)
Feb 25 08:02:56 np0005629333 nova_compute[244014]: 2026-02-25 13:02:56.715 244018 DEBUG nova.compute.manager [None req-22197e49-2bed-45ed-b23b-db58fa1abce8 - - - - - -] [instance: ea4e69b0-bf04-4637-847b-9807c884d103] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 08:02:56 np0005629333 nova_compute[244014]: 2026-02-25 13:02:56.796 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:02:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2482: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:02:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:02:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2483: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:03:00 np0005629333 nova_compute[244014]: 2026-02-25 13:03:00.684 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:03:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2484: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:03:01 np0005629333 nova_compute[244014]: 2026-02-25 13:03:01.469 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:03:01 np0005629333 nova_compute[244014]: 2026-02-25 13:03:01.470 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:03:01 np0005629333 nova_compute[244014]: 2026-02-25 13:03:01.489 244018 DEBUG nova.compute.manager [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 08:03:01 np0005629333 nova_compute[244014]: 2026-02-25 13:03:01.574 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:03:01 np0005629333 nova_compute[244014]: 2026-02-25 13:03:01.575 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:03:01 np0005629333 nova_compute[244014]: 2026-02-25 13:03:01.584 244018 DEBUG nova.virt.hardware [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 08:03:01 np0005629333 nova_compute[244014]: 2026-02-25 13:03:01.584 244018 INFO nova.compute.claims [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Claim successful on node compute-0.ctlplane.example.com
Feb 25 08:03:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:03:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:03:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:03:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:03:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:03:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:03:01 np0005629333 nova_compute[244014]: 2026-02-25 13:03:01.706 244018 DEBUG oslo_concurrency.processutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:03:01 np0005629333 nova_compute[244014]: 2026-02-25 13:03:01.798 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:03:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:03:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/967103420' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:03:02 np0005629333 nova_compute[244014]: 2026-02-25 13:03:02.270 244018 DEBUG oslo_concurrency.processutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:03:02 np0005629333 nova_compute[244014]: 2026-02-25 13:03:02.280 244018 DEBUG nova.compute.provider_tree [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:03:02 np0005629333 nova_compute[244014]: 2026-02-25 13:03:02.299 244018 DEBUG nova.scheduler.client.report [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 08:03:02 np0005629333 nova_compute[244014]: 2026-02-25 13:03:02.331 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:03:02 np0005629333 nova_compute[244014]: 2026-02-25 13:03:02.332 244018 DEBUG nova.compute.manager [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 08:03:02 np0005629333 nova_compute[244014]: 2026-02-25 13:03:02.385 244018 DEBUG nova.compute.manager [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 08:03:02 np0005629333 nova_compute[244014]: 2026-02-25 13:03:02.386 244018 DEBUG nova.network.neutron [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 08:03:02 np0005629333 nova_compute[244014]: 2026-02-25 13:03:02.409 244018 INFO nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 08:03:02 np0005629333 nova_compute[244014]: 2026-02-25 13:03:02.431 244018 DEBUG nova.compute.manager [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 08:03:02 np0005629333 nova_compute[244014]: 2026-02-25 13:03:02.536 244018 DEBUG nova.compute.manager [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 08:03:02 np0005629333 nova_compute[244014]: 2026-02-25 13:03:02.538 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 08:03:02 np0005629333 nova_compute[244014]: 2026-02-25 13:03:02.538 244018 INFO nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Creating image(s)
Feb 25 08:03:02 np0005629333 nova_compute[244014]: 2026-02-25 13:03:02.564 244018 DEBUG nova.storage.rbd_utils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 08:03:02 np0005629333 nova_compute[244014]: 2026-02-25 13:03:02.592 244018 DEBUG nova.storage.rbd_utils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 08:03:02 np0005629333 nova_compute[244014]: 2026-02-25 13:03:02.615 244018 DEBUG nova.storage.rbd_utils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 08:03:02 np0005629333 nova_compute[244014]: 2026-02-25 13:03:02.619 244018 DEBUG oslo_concurrency.processutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:03:02 np0005629333 nova_compute[244014]: 2026-02-25 13:03:02.694 244018 DEBUG oslo_concurrency.processutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:03:02 np0005629333 nova_compute[244014]: 2026-02-25 13:03:02.696 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:03:02 np0005629333 nova_compute[244014]: 2026-02-25 13:03:02.697 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:03:02 np0005629333 nova_compute[244014]: 2026-02-25 13:03:02.698 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:03:02 np0005629333 nova_compute[244014]: 2026-02-25 13:03:02.732 244018 DEBUG nova.storage.rbd_utils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 08:03:02 np0005629333 nova_compute[244014]: 2026-02-25 13:03:02.738 244018 DEBUG oslo_concurrency.processutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:03:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2485: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:03:03 np0005629333 nova_compute[244014]: 2026-02-25 13:03:03.107 244018 DEBUG nova.policy [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea895f651dd742a7b5eb2d63fb34641c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b9699483122f465084e3147e4904d13d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 25 08:03:03 np0005629333 nova_compute[244014]: 2026-02-25 13:03:03.121 244018 DEBUG oslo_concurrency.processutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.383s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:03:03 np0005629333 nova_compute[244014]: 2026-02-25 13:03:03.206 244018 DEBUG nova.storage.rbd_utils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] resizing rbd image 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 25 08:03:03 np0005629333 nova_compute[244014]: 2026-02-25 13:03:03.437 244018 DEBUG nova.objects.instance [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'migration_context' on Instance uuid 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 08:03:03 np0005629333 nova_compute[244014]: 2026-02-25 13:03:03.471 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 08:03:03 np0005629333 nova_compute[244014]: 2026-02-25 13:03:03.472 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Ensure instance console log exists: /var/lib/nova/instances/8c7de3d4-fa84-4c1c-9e61-e3b610cb177d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 08:03:03 np0005629333 nova_compute[244014]: 2026-02-25 13:03:03.472 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:03:03 np0005629333 nova_compute[244014]: 2026-02-25 13:03:03.473 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:03:03 np0005629333 nova_compute[244014]: 2026-02-25 13:03:03.473 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:03:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:03:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2486: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:03:05 np0005629333 nova_compute[244014]: 2026-02-25 13:03:05.252 244018 DEBUG nova.network.neutron [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Successfully created port: 7d538e82-43ce-4f65-b84b-fb9efe7d35b0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Feb 25 08:03:05 np0005629333 nova_compute[244014]: 2026-02-25 13:03:05.686 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:03:05 np0005629333 nova_compute[244014]: 2026-02-25 13:03:05.870 244018 DEBUG nova.network.neutron [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Successfully updated port: 7d538e82-43ce-4f65-b84b-fb9efe7d35b0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 25 08:03:05 np0005629333 nova_compute[244014]: 2026-02-25 13:03:05.894 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "refresh_cache-8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 08:03:05 np0005629333 nova_compute[244014]: 2026-02-25 13:03:05.895 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquired lock "refresh_cache-8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 25 08:03:05 np0005629333 nova_compute[244014]: 2026-02-25 13:03:05.895 244018 DEBUG nova.network.neutron [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 25 08:03:06 np0005629333 nova_compute[244014]: 2026-02-25 13:03:06.012 244018 DEBUG nova.compute.manager [req-22191467-bed4-4866-a22c-33bcd1321455 req-38ea70fa-900e-48f0-b62f-a2013e4ffdfd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Received event network-changed-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 08:03:06 np0005629333 nova_compute[244014]: 2026-02-25 13:03:06.013 244018 DEBUG nova.compute.manager [req-22191467-bed4-4866-a22c-33bcd1321455 req-38ea70fa-900e-48f0-b62f-a2013e4ffdfd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Refreshing instance network info cache due to event network-changed-7d538e82-43ce-4f65-b84b-fb9efe7d35b0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 25 08:03:06 np0005629333 nova_compute[244014]: 2026-02-25 13:03:06.013 244018 DEBUG oslo_concurrency.lockutils [req-22191467-bed4-4866-a22c-33bcd1321455 req-38ea70fa-900e-48f0-b62f-a2013e4ffdfd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 25 08:03:06 np0005629333 nova_compute[244014]: 2026-02-25 13:03:06.065 244018 DEBUG nova.network.neutron [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 25 08:03:06 np0005629333 nova_compute[244014]: 2026-02-25 13:03:06.802 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:03:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2487: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:03:06 np0005629333 nova_compute[244014]: 2026-02-25 13:03:06.987 244018 DEBUG nova.network.neutron [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Updating instance_info_cache with network_info: [{"id": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "address": "fa:16:3e:1a:bc:b9", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d538e82-43", "ovs_interfaceid": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
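
The network_info blob cached above is plain JSON. A stdlib sketch that pulls out the fields most debugging sessions need; the literal is the logged structure trimmed to the keys actually read:

    import json

    network_info = json.loads("""[{"id": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0",
      "address": "fa:16:3e:1a:bc:b9", "devname": "tap7d538e82-43",
      "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de",
        "meta": {"mtu": 1442},
        "subnets": [{"cidr": "10.100.0.0/28",
          "ips": [{"address": "10.100.0.6", "type": "fixed"}]}]}}]""")

    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["id"], vif["address"], vif["devname"],
              "mtu=%d" % vif["network"]["meta"]["mtu"], ips)
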
Feb 25 08:03:07 np0005629333 nova_compute[244014]: 2026-02-25 13:03:07.006 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Releasing lock "refresh_cache-8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 08:03:07 np0005629333 nova_compute[244014]: 2026-02-25 13:03:07.007 244018 DEBUG nova.compute.manager [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Instance network_info: |[{"id": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "address": "fa:16:3e:1a:bc:b9", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d538e82-43", "ovs_interfaceid": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 08:03:07 np0005629333 nova_compute[244014]: 2026-02-25 13:03:07.008 244018 DEBUG oslo_concurrency.lockutils [req-22191467-bed4-4866-a22c-33bcd1321455 req-38ea70fa-900e-48f0-b62f-a2013e4ffdfd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 08:03:07 np0005629333 nova_compute[244014]: 2026-02-25 13:03:07.008 244018 DEBUG nova.network.neutron [req-22191467-bed4-4866-a22c-33bcd1321455 req-38ea70fa-900e-48f0-b62f-a2013e4ffdfd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Refreshing network info cache for port 7d538e82-43ce-4f65-b84b-fb9efe7d35b0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 08:03:07 np0005629333 nova_compute[244014]: 2026-02-25 13:03:07.013 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Start _get_guest_xml network_info=[{"id": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "address": "fa:16:3e:1a:bc:b9", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d538e82-43", "ovs_interfaceid": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 08:03:07 np0005629333 nova_compute[244014]: 2026-02-25 13:03:07.018 244018 WARNING nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:03:07 np0005629333 nova_compute[244014]: 2026-02-25 13:03:07.031 244018 DEBUG nova.virt.libvirt.host [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 08:03:07 np0005629333 nova_compute[244014]: 2026-02-25 13:03:07.031 244018 DEBUG nova.virt.libvirt.host [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 08:03:07 np0005629333 nova_compute[244014]: 2026-02-25 13:03:07.037 244018 DEBUG nova.virt.libvirt.host [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 08:03:07 np0005629333 nova_compute[244014]: 2026-02-25 13:03:07.037 244018 DEBUG nova.virt.libvirt.host [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
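
The two probes above (v1 controller missing, v2 controller found) can be reproduced from userspace by inspecting the cgroup filesystem. This is a simplified stand-in for nova's checks, not its actual code:

    from pathlib import Path

    def has_cgroupsv1_cpu() -> bool:
        # cgroup v1 mounts one directory per controller.
        return Path("/sys/fs/cgroup/cpu").is_dir()

    def has_cgroupsv2_cpu() -> bool:
        # cgroup v2 lists enabled controllers in a single file.
        f = Path("/sys/fs/cgroup/cgroup.controllers")
        return f.is_file() and "cpu" in f.read_text().split()

    print("v1 cpu controller:", has_cgroupsv1_cpu())
    print("v2 cpu controller:", has_cgroupsv2_cpu())

On this host the answers are False/True, i.e. an EL9 unified-hierarchy (cgroup v2) system.
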
Feb 25 08:03:07 np0005629333 nova_compute[244014]: 2026-02-25 13:03:07.038 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 08:03:07 np0005629333 nova_compute[244014]: 2026-02-25 13:03:07.039 244018 DEBUG nova.virt.hardware [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 08:03:07 np0005629333 nova_compute[244014]: 2026-02-25 13:03:07.039 244018 DEBUG nova.virt.hardware [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 08:03:07 np0005629333 nova_compute[244014]: 2026-02-25 13:03:07.040 244018 DEBUG nova.virt.hardware [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 08:03:07 np0005629333 nova_compute[244014]: 2026-02-25 13:03:07.040 244018 DEBUG nova.virt.hardware [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 08:03:07 np0005629333 nova_compute[244014]: 2026-02-25 13:03:07.041 244018 DEBUG nova.virt.hardware [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 08:03:07 np0005629333 nova_compute[244014]: 2026-02-25 13:03:07.041 244018 DEBUG nova.virt.hardware [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 08:03:07 np0005629333 nova_compute[244014]: 2026-02-25 13:03:07.041 244018 DEBUG nova.virt.hardware [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 08:03:07 np0005629333 nova_compute[244014]: 2026-02-25 13:03:07.042 244018 DEBUG nova.virt.hardware [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 08:03:07 np0005629333 nova_compute[244014]: 2026-02-25 13:03:07.042 244018 DEBUG nova.virt.hardware [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 08:03:07 np0005629333 nova_compute[244014]: 2026-02-25 13:03:07.043 244018 DEBUG nova.virt.hardware [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 08:03:07 np0005629333 nova_compute[244014]: 2026-02-25 13:03:07.043 244018 DEBUG nova.virt.hardware [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
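
The topology walk above follows from a simple enumeration: every (sockets, cores, threads) triple whose product equals the vCPU count, filtered by the flavor/image limits (65536 apiece here, i.e. unconstrained). For 1 vCPU that leaves exactly the 1:1:1 candidate the log reports. A stripped-down illustration of the logic, not nova's implementation:

    from itertools import product

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        divisors = [d for d in range(1, vcpus + 1) if vcpus % d == 0]
        return [(s, c, t)
                for s, c, t in product(divisors, repeat=3)
                if s * c * t == vcpus
                and s <= max_sockets and c <= max_cores and t <= max_threads]

    print(possible_topologies(1))  # [(1, 1, 1)] -- matches "Got 1 possible topologies"
    print(possible_topologies(4))  # (1, 1, 4), (1, 2, 2), ... for comparison
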
Feb 25 08:03:07 np0005629333 nova_compute[244014]: 2026-02-25 13:03:07.047 244018 DEBUG oslo_concurrency.processutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:03:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 08:03:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3410567786' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 08:03:07 np0005629333 nova_compute[244014]: 2026-02-25 13:03:07.552 244018 DEBUG oslo_concurrency.processutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:03:07 np0005629333 nova_compute[244014]: 2026-02-25 13:03:07.583 244018 DEBUG nova.storage.rbd_utils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:03:07 np0005629333 nova_compute[244014]: 2026-02-25 13:03:07.588 244018 DEBUG oslo_concurrency.processutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:03:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 08:03:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4278592762' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.181 244018 DEBUG oslo_concurrency.processutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
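
nova's RBD driver discovers monitors by shelling out to `ceph mon dump --format=json` (it runs twice above, once per image handle). The same probe in a few lines, assuming the client keyring implied by --id openstack is readable:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True, text=True).stdout

    monmap = json.loads(out)
    # A monmap typically carries "epoch", "fsid" and a "mons" list with
    # per-monitor addresses (exact keys vary slightly across Ceph releases).
    for mon in monmap.get("mons", []):
        print(mon.get("name"), mon.get("public_addr") or mon.get("addr"))
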
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.185 244018 DEBUG nova.virt.libvirt.vif [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T13:03:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-214149727',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-214149727',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=150,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbQ2uY7T8R5wVQnvFbyA1segFh0PnNufgJbxt3APzZeuF0vcSW1YfpqbsmGbwvIughYn7yKsWGZo7qrK+BPyfIKQ42RLyvp7znat3TJdsHJA5kxzzkNZRLtYgFDUNF7bA==',key_name='tempest-TestSecurityGroupsBasicOps-1917346637',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-pgg8mh7f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T13:03:02Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=8c7de3d4-fa84-4c1c-9e61-e3b610cb177d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "address": "fa:16:3e:1a:bc:b9", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d538e82-43", "ovs_interfaceid": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.185 244018 DEBUG nova.network.os_vif_util [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "address": "fa:16:3e:1a:bc:b9", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d538e82-43", "ovs_interfaceid": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.187 244018 DEBUG nova.network.os_vif_util [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:bc:b9,bridge_name='br-int',has_traffic_filtering=True,id=7d538e82-43ce-4f65-b84b-fb9efe7d35b0,network=Network(60551679-32db-4035-bd76-7d0d38a9d6de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d538e82-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.189 244018 DEBUG nova.objects.instance [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'pci_devices' on Instance uuid 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.211 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] End _get_guest_xml xml=<domain type="kvm">
Feb 25 08:03:08 np0005629333 nova_compute[244014]:  <uuid>8c7de3d4-fa84-4c1c-9e61-e3b610cb177d</uuid>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:  <name>instance-00000096</name>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-214149727</nova:name>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 13:03:07</nova:creationTime>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 08:03:08 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:        <nova:user uuid="ea895f651dd742a7b5eb2d63fb34641c">tempest-TestSecurityGroupsBasicOps-948360018-project-member</nova:user>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:        <nova:project uuid="b9699483122f465084e3147e4904d13d">tempest-TestSecurityGroupsBasicOps-948360018</nova:project>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:        <nova:port uuid="7d538e82-43ce-4f65-b84b-fb9efe7d35b0">
Feb 25 08:03:08 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <system>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <entry name="serial">8c7de3d4-fa84-4c1c-9e61-e3b610cb177d</entry>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <entry name="uuid">8c7de3d4-fa84-4c1c-9e61-e3b610cb177d</entry>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    </system>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:  <os>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:  </os>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:  <features>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:  </features>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:  </clock>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:  <devices>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk">
Feb 25 08:03:08 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      </source>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 08:03:08 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      </auth>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    </disk>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk.config">
Feb 25 08:03:08 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      </source>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 08:03:08 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      </auth>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    </disk>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:1a:bc:b9"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <target dev="tap7d538e82-43"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    </interface>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/8c7de3d4-fa84-4c1c-9e61-e3b610cb177d/console.log" append="off"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    </serial>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <video>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    </video>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    </rng>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 08:03:08 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 08:03:08 np0005629333 nova_compute[244014]:  </devices>
Feb 25 08:03:08 np0005629333 nova_compute[244014]: </domain>
Feb 25 08:03:08 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
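
The emitted domain definition is ordinary libvirt XML, so it can be picked apart with the stdlib parser. A sketch over a trimmed copy of the document above (the full XML parses identically); it pulls out the RBD disk sources and the nova metadata, which live in their own namespace:

    import xml.etree.ElementTree as ET

    NOVA_NS = "http://openstack.org/xmlns/libvirt/nova/1.1"

    domain_xml = """
    <domain type="kvm">
      <uuid>8c7de3d4-fa84-4c1c-9e61-e3b610cb177d</uuid>
      <name>instance-00000096</name>
      <memory>131072</memory>
      <metadata>
        <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
          <nova:flavor name="m1.nano"/>
        </nova:instance>
      </metadata>
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk"/>
          <target dev="vda" bus="virtio"/>
        </disk>
      </devices>
    </domain>"""

    dom = ET.fromstring(domain_xml)
    print("name:", dom.findtext("name"), "uuid:", dom.findtext("uuid"),
          "memory(KiB):", dom.findtext("memory"))
    for disk in dom.findall("./devices/disk"):
        src = disk.find("source")
        print("disk:", disk.get("device"), src.get("protocol"), src.get("name"))
    flavor = dom.find(f"./metadata/{{{NOVA_NS}}}instance/{{{NOVA_NS}}}flavor")
    print("flavor:", flavor.get("name"))

Note <memory> is in KiB: 131072 KiB = 128 MiB, the m1.nano flavor's memory_mb.
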
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.214 244018 DEBUG nova.compute.manager [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Preparing to wait for external event network-vif-plugged-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.214 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.215 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.215 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
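
prepare_for_instance_event registers a waiter under the per-instance events lock before the vif plug happens, so the later network-vif-plugged callback from neutron cannot slip through unobserved. The same prepare-then-wait shape in stdlib terms (a simplified stand-in for nova's event machinery):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}  # (instance_uuid, event_name) -> threading.Event

        def prepare(self, instance_uuid, event_name):
            # Register the waiter *before* kicking off the action that fires it.
            with self._lock:
                return self._events.setdefault((instance_uuid, event_name),
                                               threading.Event())

        def notify(self, instance_uuid, event_name):
            with self._lock:
                ev = self._events.pop((instance_uuid, event_name), None)
            if ev:
                ev.set()

    events = InstanceEvents()
    waiter = events.prepare("8c7de3d4-fa84-4c1c-9e61-e3b610cb177d",
                            "network-vif-plugged-7d538e82-43ce-4f65-b84b-fb9efe7d35b0")
    # ...plug the vif; neutron's notification arrives asynchronously and calls:
    events.notify("8c7de3d4-fa84-4c1c-9e61-e3b610cb177d",
                  "network-vif-plugged-7d538e82-43ce-4f65-b84b-fb9efe7d35b0")
    print("plugged:", waiter.wait(timeout=1))
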
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.217 244018 DEBUG nova.virt.libvirt.vif [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T13:03:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-214149727',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-214149727',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=150,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbQ2uY7T8R5wVQnvFbyA1segFh0PnNufgJbxt3APzZeuF0vcSW1YfpqbsmGbwvIughYn7yKsWGZo7qrK+BPyfIKQ42RLyvp7znat3TJdsHJA5kxzzkNZRLtYgFDUNF7bA==',key_name='tempest-TestSecurityGroupsBasicOps-1917346637',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-pgg8mh7f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T13:03:02Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=8c7de3d4-fa84-4c1c-9e61-e3b610cb177d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "address": "fa:16:3e:1a:bc:b9", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d538e82-43", "ovs_interfaceid": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.217 244018 DEBUG nova.network.os_vif_util [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "address": "fa:16:3e:1a:bc:b9", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d538e82-43", "ovs_interfaceid": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.218 244018 DEBUG nova.network.os_vif_util [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:bc:b9,bridge_name='br-int',has_traffic_filtering=True,id=7d538e82-43ce-4f65-b84b-fb9efe7d35b0,network=Network(60551679-32db-4035-bd76-7d0d38a9d6de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d538e82-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.219 244018 DEBUG os_vif [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:bc:b9,bridge_name='br-int',has_traffic_filtering=True,id=7d538e82-43ce-4f65-b84b-fb9efe7d35b0,network=Network(60551679-32db-4035-bd76-7d0d38a9d6de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d538e82-43') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.220 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.221 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.222 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.226 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.226 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7d538e82-43, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.227 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7d538e82-43, col_values=(('external_ids', {'iface-id': '7d538e82-43ce-4f65-b84b-fb9efe7d35b0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:bc:b9', 'vm-uuid': '8c7de3d4-fa84-4c1c-9e61-e3b610cb177d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
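
Those two commands ride in one ovsdb transaction: add the tap port to br-int, then stamp the Interface row with the external_ids that let ovn-controller bind the logical port. The equivalent ovsdbapp client code, assuming a local ovsdb-server socket (the connection string is an assumption; names and values come from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = "unix:/run/openvswitch/db.sock"  # assumed local socket
    conn = connection.Connection(
        idl=connection.OvsdbIdl.from_server(OVSDB, "Open_vSwitch"), timeout=10)
    ovs = impl_idl.OvsdbIdl(conn)

    external_ids = {"iface-id": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0",
                    "iface-status": "active",
                    "attached-mac": "fa:16:3e:1a:bc:b9",
                    "vm-uuid": "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d"}

    # One transaction, two commands -- the txn n=1 idx=0/idx=1 pair above.
    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.add_port("br-int", "tap7d538e82-43", may_exist=True))
        txn.add(ovs.db_set("Interface", "tap7d538e82-43",
                           ("external_ids", external_ids)))

The iface-id is the neutron port UUID; ovn-controller watches for it and claims the lport a second later (see the binding lines below).
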
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.230 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:08 np0005629333 NetworkManager[49836]: <info>  [1772024588.2310] manager: (tap7d538e82-43): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/651)
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.234 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.236 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.238 244018 INFO os_vif [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:bc:b9,bridge_name='br-int',has_traffic_filtering=True,id=7d538e82-43ce-4f65-b84b-fb9efe7d35b0,network=Network(60551679-32db-4035-bd76-7d0d38a9d6de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d538e82-43')#033[00m
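
The plug itself goes through os_vif's small public API: initialize() loads the plugins, plug() hands the versioned VIF object to the ovs plugin. A sketch that rebuilds the logged VIFOpenVSwitch (field values from the log; running it for real needs the plugin's privileges, so treat it as illustrative):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # load configured plugins (ovs, linux_bridge, ...)

    my_vif = vif.VIFOpenVSwitch(
        id="7d538e82-43ce-4f65-b84b-fb9efe7d35b0",
        address="fa:16:3e:1a:bc:b9",
        vif_name="tap7d538e82-43",
        bridge_name="br-int",
        network=network.Network(id="60551679-32db-4035-bd76-7d0d38a9d6de"))

    inst = instance_info.InstanceInfo(
        uuid="8c7de3d4-fa84-4c1c-9e61-e3b610cb177d",
        name="instance-00000096")

    os_vif.plug(my_vif, inst)  # the "Successfully plugged vif" step above
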
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.304 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.305 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.305 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No VIF found with MAC fa:16:3e:1a:bc:b9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.306 244018 INFO nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Using config drive#033[00m
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.344 244018 DEBUG nova.storage.rbd_utils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.632 244018 DEBUG nova.network.neutron [req-22191467-bed4-4866-a22c-33bcd1321455 req-38ea70fa-900e-48f0-b62f-a2013e4ffdfd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Updated VIF entry in instance network info cache for port 7d538e82-43ce-4f65-b84b-fb9efe7d35b0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.632 244018 DEBUG nova.network.neutron [req-22191467-bed4-4866-a22c-33bcd1321455 req-38ea70fa-900e-48f0-b62f-a2013e4ffdfd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Updating instance_info_cache with network_info: [{"id": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "address": "fa:16:3e:1a:bc:b9", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d538e82-43", "ovs_interfaceid": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:03:08 np0005629333 nova_compute[244014]: 2026-02-25 13:03:08.653 244018 DEBUG oslo_concurrency.lockutils [req-22191467-bed4-4866-a22c-33bcd1321455 req-38ea70fa-900e-48f0-b62f-a2013e4ffdfd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 08:03:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:03:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2488: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 08:03:09 np0005629333 nova_compute[244014]: 2026-02-25 13:03:09.119 244018 INFO nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Creating config drive at /var/lib/nova/instances/8c7de3d4-fa84-4c1c-9e61-e3b610cb177d/disk.config#033[00m
Feb 25 08:03:09 np0005629333 nova_compute[244014]: 2026-02-25 13:03:09.125 244018 DEBUG oslo_concurrency.processutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8c7de3d4-fa84-4c1c-9e61-e3b610cb177d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp04qyp_3m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:03:09 np0005629333 nova_compute[244014]: 2026-02-25 13:03:09.266 244018 DEBUG oslo_concurrency.processutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8c7de3d4-fa84-4c1c-9e61-e3b610cb177d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp04qyp_3m" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
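
The config drive is a plain ISO 9660 image: Joliet + Rock Ridge, volume label config-2 (the label cloud-init looks for in the guest). A subprocess sketch reproducing the logged command flag for flag; src_dir stands in for the metadata tree nova staged under /tmp:

    import subprocess

    def build_config_drive(src_dir: str, out_iso: str) -> None:
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", out_iso,
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-publisher",
             "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
             "-quiet", "-J", "-r", "-V", "config-2", src_dir],
            check=True)

    # build_config_drive("<staged-metadata-dir>", "disk.config")
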
Feb 25 08:03:09 np0005629333 nova_compute[244014]: 2026-02-25 13:03:09.323 244018 DEBUG nova.storage.rbd_utils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:03:09 np0005629333 nova_compute[244014]: 2026-02-25 13:03:09.329 244018 DEBUG oslo_concurrency.processutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8c7de3d4-fa84-4c1c-9e61-e3b610cb177d/disk.config 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:03:09 np0005629333 nova_compute[244014]: 2026-02-25 13:03:09.502 244018 DEBUG oslo_concurrency.processutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8c7de3d4-fa84-4c1c-9e61-e3b610cb177d/disk.config 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:03:09 np0005629333 nova_compute[244014]: 2026-02-25 13:03:09.503 244018 INFO nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Deleting local config drive /var/lib/nova/instances/8c7de3d4-fa84-4c1c-9e61-e3b610cb177d/disk.config because it was imported into RBD.#033[00m
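The lines above show the freshly built ISO being checked for in RBD, imported into the Ceph "vms" pool, and the local copy deleted once the import succeeds. A sketch of that import-then-delete sequence, with the rbd CLI flags copied verbatim from the logged command and placeholder paths:

import os
import subprocess

local = '/var/lib/nova/instances/<uuid>/disk.config'  # placeholder
image = '<uuid>_disk.config'                          # placeholder

subprocess.run(
    ['rbd', 'import', '--pool', 'vms', local, image,
     '--image-format=2', '--id', 'openstack',
     '--conf', '/etc/ceph/ceph.conf'],
    check=True,  # keep the local copy if the import fails
)
os.unlink(local)  # "Deleting local config drive ... imported into RBD"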
Feb 25 08:03:09 np0005629333 kernel: tap7d538e82-43: entered promiscuous mode
Feb 25 08:03:09 np0005629333 NetworkManager[49836]: <info>  [1772024589.5570] manager: (tap7d538e82-43): new Tun device (/org/freedesktop/NetworkManager/Devices/652)
Feb 25 08:03:09 np0005629333 nova_compute[244014]: 2026-02-25 13:03:09.557 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:09 np0005629333 ovn_controller[147040]: 2026-02-25T13:03:09Z|01586|binding|INFO|Claiming lport 7d538e82-43ce-4f65-b84b-fb9efe7d35b0 for this chassis.
Feb 25 08:03:09 np0005629333 ovn_controller[147040]: 2026-02-25T13:03:09Z|01587|binding|INFO|7d538e82-43ce-4f65-b84b-fb9efe7d35b0: Claiming fa:16:3e:1a:bc:b9 10.100.0.6
Feb 25 08:03:09 np0005629333 nova_compute[244014]: 2026-02-25 13:03:09.561 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:09 np0005629333 nova_compute[244014]: 2026-02-25 13:03:09.567 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:09 np0005629333 nova_compute[244014]: 2026-02-25 13:03:09.572 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.579 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:bc:b9 10.100.0.6'], port_security=['fa:16:3e:1a:bc:b9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8c7de3d4-fa84-4c1c-9e61-e3b610cb177d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60551679-32db-4035-bd76-7d0d38a9d6de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '411a5034-2bd3-44c2-8d69-e8230e12b7ba 45df576e-9e98-4f75-84d8-faaa81c292fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2a78513-3d45-4660-838e-2ad213a5a7d3, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=7d538e82-43ce-4f65-b84b-fb9efe7d35b0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.581 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 7d538e82-43ce-4f65-b84b-fb9efe7d35b0 in datapath 60551679-32db-4035-bd76-7d0d38a9d6de bound to our chassis#033[00m
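The "Matched UPDATE" line above is the metadata agent's ovsdbapp hook firing as the Port_Binding row gains our chassis. Below is a rough, illustrative paraphrase of such a hook on the ovsdbapp RowEvent pattern; treat the class and method names as assumptions about the shape of the code, not neutron's verbatim implementation.

from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self, agent):
        self.agent = agent
        # ('update',) and 'Port_Binding' match the fields printed in the
        # "Matched UPDATE" DEBUG line above.
        super().__init__(('update',), 'Port_Binding', None)

    def run(self, event, row, old):
        # The row now carries our chassis: provision metadata for its
        # network, as the two INFO lines above and below report.
        self.agent.provision_datapath(row)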
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.582 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60551679-32db-4035-bd76-7d0d38a9d6de#033[00m
Feb 25 08:03:09 np0005629333 systemd-machined[210048]: New machine qemu-184-instance-00000096.
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.592 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c2809a6a-353d-4136-a277-90dfc210bb66]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.593 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap60551679-31 in ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
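Provisioning continues with a veth pair whose inner end (tap...-31) lives inside the ovnmeta-<network> namespace while the outer end (tap...-30) is plugged into br-int a few lines below. The agent does this through privsep and netlink; the following is an illustrative equivalent using plain iproute2 commands, with names taken from the log:

import subprocess

ns = 'ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de'
outer, inner = 'tap60551679-30', 'tap60551679-31'

subprocess.run(['ip', 'netns', 'add', ns], check=False)  # may already exist
subprocess.run(['ip', 'link', 'add', outer, 'type', 'veth',
                'peer', 'name', inner], check=True)
subprocess.run(['ip', 'link', 'set', inner, 'netns', ns], check=True)
subprocess.run(['ip', 'link', 'set', outer, 'up'], check=True)
subprocess.run(['ip', 'netns', 'exec', ns,
                'ip', 'link', 'set', inner, 'up'], check=True)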
Feb 25 08:03:09 np0005629333 systemd-udevd[380801]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.595 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap60551679-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.595 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[68cc4277-9ad9-4c96-9afe-2f73d1ac32cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.597 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[5d732679-b02c-437e-bfb3-fd57a273627e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:03:09 np0005629333 ovn_controller[147040]: 2026-02-25T13:03:09Z|01588|binding|INFO|Setting lport 7d538e82-43ce-4f65-b84b-fb9efe7d35b0 ovn-installed in OVS
Feb 25 08:03:09 np0005629333 ovn_controller[147040]: 2026-02-25T13:03:09Z|01589|binding|INFO|Setting lport 7d538e82-43ce-4f65-b84b-fb9efe7d35b0 up in Southbound
Feb 25 08:03:09 np0005629333 NetworkManager[49836]: <info>  [1772024589.6069] device (tap7d538e82-43): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 08:03:09 np0005629333 nova_compute[244014]: 2026-02-25 13:03:09.607 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:09 np0005629333 systemd[1]: Started Virtual Machine qemu-184-instance-00000096.
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.606 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[e27481d6-71a2-4caf-b892-e6367016c0dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:03:09 np0005629333 NetworkManager[49836]: <info>  [1772024589.6092] device (tap7d538e82-43): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.631 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f5b3158c-0a0b-4df5-a415-c7aa6ebf3f95]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.658 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[caf801e8-79ae-483b-9f23-f88d0ed386b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.663 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[eea89d6f-e8c8-411e-913a-9f194856240e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:03:09 np0005629333 NetworkManager[49836]: <info>  [1772024589.6650] manager: (tap60551679-30): new Veth device (/org/freedesktop/NetworkManager/Devices/653)
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.693 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[55464254-13c5-4337-a5b4-d68d5ed0e009]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.695 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[0094861e-96f7-4981-942f-f6d968b5c34b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:03:09 np0005629333 NetworkManager[49836]: <info>  [1772024589.7132] device (tap60551679-30): carrier: link connected
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.716 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[1f303967-3f5b-4038-8d09-2bd3083defa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.731 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1e09d918-d66b-4c89-bc95-1ac46ec8a625]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60551679-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:a9:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655912, 'reachable_time': 23194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380833, 'error': None, 'target': 'ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.744 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ba53ff78-f92a-47c8-aa33-c95412b34252]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9b:a9f1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 655912, 'tstamp': 655912}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380834, 'error': None, 'target': 'ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.757 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[4f603e0d-17c1-4464-937d-49a94ff68f3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60551679-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:a9:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655912, 'reachable_time': 23194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 380835, 'error': None, 'target': 'ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.779 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[60c0aeb3-20e0-4361-ba15-9850f7bd2db0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.823 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[a93a3921-5455-4606-aa65-f35007f83499]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.825 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60551679-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.825 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.825 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60551679-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:03:09 np0005629333 nova_compute[244014]: 2026-02-25 13:03:09.828 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:09 np0005629333 kernel: tap60551679-30: entered promiscuous mode
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.831 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60551679-30, col_values=(('external_ids', {'iface-id': '5108b2ec-2ba2-4c9d-af59-95b6cba805a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:03:09 np0005629333 nova_compute[244014]: 2026-02-25 13:03:09.832 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:09 np0005629333 NetworkManager[49836]: <info>  [1772024589.8327] manager: (tap60551679-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/654)
Feb 25 08:03:09 np0005629333 nova_compute[244014]: 2026-02-25 13:03:09.836 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:09 np0005629333 ovn_controller[147040]: 2026-02-25T13:03:09Z|01590|binding|INFO|Releasing lport 5108b2ec-2ba2-4c9d-af59-95b6cba805a9 from this chassis (sb_readonly=0)
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.836 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/60551679-32db-4035-bd76-7d0d38a9d6de.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/60551679-32db-4035-bd76-7d0d38a9d6de.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 08:03:09 np0005629333 nova_compute[244014]: 2026-02-25 13:03:09.847 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.847 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e311e079-d11e-483a-94b8-50eb6e0c2ceb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.848 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-60551679-32db-4035-bd76-7d0d38a9d6de
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/60551679-32db-4035-bd76-7d0d38a9d6de.pid.haproxy
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID 60551679-32db-4035-bd76-7d0d38a9d6de
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 08:03:09 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:09.849 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de', 'env', 'PROCESS_TAG=haproxy-60551679-32db-4035-bd76-7d0d38a9d6de', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/60551679-32db-4035-bd76-7d0d38a9d6de.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
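The command list above is the agent launching haproxy inside the metadata namespace via rootwrap (in this deployment the haproxy binary is a wrapper that runs it in a podman container, as the container-create lines further below show). Stripped of the rootwrap indirection, the launch amounts to the following sketch; the network UUID is taken from the log:

import subprocess

net = '60551679-32db-4035-bd76-7d0d38a9d6de'
cmd = [
    'ip', 'netns', 'exec', f'ovnmeta-{net}',
    'env', f'PROCESS_TAG=haproxy-{net}',
    'haproxy', '-f', f'/var/lib/neutron/ovn-metadata-proxy/{net}.conf',
]
# haproxy daemonizes itself ("daemon" in the rendered config above), so
# run() returns once the master process has forked its worker.
subprocess.run(cmd, check=True)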
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.163 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024590.1633112, 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.164 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] VM Started (Lifecycle Event)#033[00m
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.175 244018 DEBUG nova.compute.manager [req-5e67936b-08bc-4f2d-83bc-9bcdf19068b3 req-331ab6a8-19dd-4ebf-aa97-4d3be4347ad3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Received event network-vif-plugged-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.176 244018 DEBUG oslo_concurrency.lockutils [req-5e67936b-08bc-4f2d-83bc-9bcdf19068b3 req-331ab6a8-19dd-4ebf-aa97-4d3be4347ad3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.176 244018 DEBUG oslo_concurrency.lockutils [req-5e67936b-08bc-4f2d-83bc-9bcdf19068b3 req-331ab6a8-19dd-4ebf-aa97-4d3be4347ad3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.177 244018 DEBUG oslo_concurrency.lockutils [req-5e67936b-08bc-4f2d-83bc-9bcdf19068b3 req-331ab6a8-19dd-4ebf-aa97-4d3be4347ad3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.177 244018 DEBUG nova.compute.manager [req-5e67936b-08bc-4f2d-83bc-9bcdf19068b3 req-331ab6a8-19dd-4ebf-aa97-4d3be4347ad3 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Processing event network-vif-plugged-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.178 244018 DEBUG nova.compute.manager [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.182 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.188 244018 INFO nova.virt.libvirt.driver [-] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Instance spawned successfully.#033[00m
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.188 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.191 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.195 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.210 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.211 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.212 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.212 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.213 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.213 244018 DEBUG nova.virt.libvirt.driver [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.219 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
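The "Skip" line above is nova's power-state sync declining to act while the instance still has a task in flight: the DB says power_state 0 (NOSTATE), the hypervisor says 1 (RUNNING), but task_state is "spawning", so the mismatch is expected. A hedged sketch of that guard follows; names and signatures are illustrative, not nova's actual code.

def sync_power_state(instance, vm_power_state, log):
    # Pending task (e.g. "spawning"): the mismatch is expected, so skip.
    if instance.task_state is not None:
        log.info('During sync_power_state the instance has a pending '
                 'task (%s). Skip.', instance.task_state)
        return
    # Otherwise reconcile the DB with the hypervisor's view.
    if instance.power_state != vm_power_state:
        instance.power_state = vm_power_state
        instance.save()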
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.219 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024590.164456, 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.220 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] VM Paused (Lifecycle Event)#033[00m
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.243 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.247 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024590.181352, 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.247 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] VM Resumed (Lifecycle Event)#033[00m
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.280 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.286 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 08:03:10 np0005629333 podman[380907]: 2026-02-25 13:03:10.28691093 +0000 UTC m=+0.132299452 container create 8ef82320a342dca80ac5c36f5a312b72987ce73068f699469a0a05f7925da602 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0)
Feb 25 08:03:10 np0005629333 podman[380907]: 2026-02-25 13:03:10.192620021 +0000 UTC m=+0.038008503 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.297 244018 INFO nova.compute.manager [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Took 7.76 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.298 244018 DEBUG nova.compute.manager [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.305 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 08:03:10 np0005629333 systemd[1]: Started libpod-conmon-8ef82320a342dca80ac5c36f5a312b72987ce73068f699469a0a05f7925da602.scope.
Feb 25 08:03:10 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:03:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/683ddb331af2ab3814797ad79c5d49ef9abaf7e6aaf6d9a7ae140104b9cbf429/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.361 244018 INFO nova.compute.manager [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Took 8.83 seconds to build instance.#033[00m
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.375 244018 DEBUG oslo_concurrency.lockutils [None req-4a901e85-a00d-4c09-8cdd-a1f15b3e770e ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
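The three durations reported above are consistent: the spawn took 7.76 s, the whole build 8.83 s, and the build lock was held 8.905 s, which places the lock acquisition at roughly 13:03:01.470. A small helper (not part of nova) for recovering such intervals from the timestamps when auditing a log like this one; the start value below is inferred from the 8.905 s figure, not read from an earlier line:

from datetime import datetime

FMT = '%Y-%m-%d %H:%M:%S.%f'  # matches the nova_compute timestamps

def elapsed(start: str, end: str) -> float:
    """Seconds between two log timestamps."""
    return (datetime.strptime(end, FMT)
            - datetime.strptime(start, FMT)).total_seconds()

print(elapsed('2026-02-25 13:03:01.470', '2026-02-25 13:03:10.375'))  # 8.905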
Feb 25 08:03:10 np0005629333 podman[380907]: 2026-02-25 13:03:10.438968816 +0000 UTC m=+0.284357298 container init 8ef82320a342dca80ac5c36f5a312b72987ce73068f699469a0a05f7925da602 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 08:03:10 np0005629333 podman[380907]: 2026-02-25 13:03:10.446333054 +0000 UTC m=+0.291721536 container start 8ef82320a342dca80ac5c36f5a312b72987ce73068f699469a0a05f7925da602 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 25 08:03:10 np0005629333 neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de[380922]: [NOTICE]   (380926) : New worker (380928) forked
Feb 25 08:03:10 np0005629333 neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de[380922]: [NOTICE]   (380926) : Loading success.
Feb 25 08:03:10 np0005629333 nova_compute[244014]: 2026-02-25 13:03:10.688 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2489: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 08:03:11 np0005629333 nova_compute[244014]: 2026-02-25 13:03:11.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:03:11 np0005629333 nova_compute[244014]: 2026-02-25 13:03:11.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 25 08:03:11 np0005629333 nova_compute[244014]: 2026-02-25 13:03:11.891 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb 25 08:03:12 np0005629333 nova_compute[244014]: 2026-02-25 13:03:12.312 244018 DEBUG nova.compute.manager [req-9a33edf7-f770-43fd-a01e-6fc21cf11f91 req-e2681d5b-94dd-4bfa-801a-1c869109e0dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Received event network-vif-plugged-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:03:12 np0005629333 nova_compute[244014]: 2026-02-25 13:03:12.313 244018 DEBUG oslo_concurrency.lockutils [req-9a33edf7-f770-43fd-a01e-6fc21cf11f91 req-e2681d5b-94dd-4bfa-801a-1c869109e0dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:03:12 np0005629333 nova_compute[244014]: 2026-02-25 13:03:12.313 244018 DEBUG oslo_concurrency.lockutils [req-9a33edf7-f770-43fd-a01e-6fc21cf11f91 req-e2681d5b-94dd-4bfa-801a-1c869109e0dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:03:12 np0005629333 nova_compute[244014]: 2026-02-25 13:03:12.313 244018 DEBUG oslo_concurrency.lockutils [req-9a33edf7-f770-43fd-a01e-6fc21cf11f91 req-e2681d5b-94dd-4bfa-801a-1c869109e0dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:03:12 np0005629333 nova_compute[244014]: 2026-02-25 13:03:12.313 244018 DEBUG nova.compute.manager [req-9a33edf7-f770-43fd-a01e-6fc21cf11f91 req-e2681d5b-94dd-4bfa-801a-1c869109e0dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] No waiting events found dispatching network-vif-plugged-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 08:03:12 np0005629333 nova_compute[244014]: 2026-02-25 13:03:12.313 244018 WARNING nova.compute.manager [req-9a33edf7-f770-43fd-a01e-6fc21cf11f91 req-e2681d5b-94dd-4bfa-801a-1c869109e0dd 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Received unexpected event network-vif-plugged-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 for instance with vm_state active and task_state None.#033[00m
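The WARNING above is the benign tail of the plug sequence: a second network-vif-plugged event arrives after the VM is already active, finds no registered waiter, and is logged and dropped. Below is an illustrative simplification of that pop-or-warn bookkeeping; it paraphrases the idea behind the DEBUG lines above, not nova's real InstanceEvents implementation.

import threading

class InstanceEvents:
    def __init__(self):
        self._events = {}              # {instance_uuid: {event_name: waiter}}
        self._lock = threading.Lock()  # the "<uuid>-events" lock in the log

    def pop_instance_event(self, uuid, name, log):
        with self._lock:
            waiter = self._events.get(uuid, {}).pop(name, None)
        if waiter is None:
            log.warning('Received unexpected event %s for instance %s',
                        name, uuid)
        return waiter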
Feb 25 08:03:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2490: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 08:03:13 np0005629333 nova_compute[244014]: 2026-02-25 13:03:13.230 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:03:14 np0005629333 nova_compute[244014]: 2026-02-25 13:03:14.105 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:14 np0005629333 ovn_controller[147040]: 2026-02-25T13:03:14Z|01591|binding|INFO|Releasing lport 5108b2ec-2ba2-4c9d-af59-95b6cba805a9 from this chassis (sb_readonly=0)
Feb 25 08:03:14 np0005629333 NetworkManager[49836]: <info>  [1772024594.1115] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/655)
Feb 25 08:03:14 np0005629333 NetworkManager[49836]: <info>  [1772024594.1127] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/656)
Feb 25 08:03:14 np0005629333 ovn_controller[147040]: 2026-02-25T13:03:14Z|01592|binding|INFO|Releasing lport 5108b2ec-2ba2-4c9d-af59-95b6cba805a9 from this chassis (sb_readonly=0)
Feb 25 08:03:14 np0005629333 nova_compute[244014]: 2026-02-25 13:03:14.118 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:14 np0005629333 nova_compute[244014]: 2026-02-25 13:03:14.123 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:14 np0005629333 nova_compute[244014]: 2026-02-25 13:03:14.494 244018 DEBUG nova.compute.manager [req-7d9062e2-afea-4f67-bfe4-8037727229db req-f82186c2-d2ff-4ccd-9a09-f0b838888756 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Received event network-changed-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:03:14 np0005629333 nova_compute[244014]: 2026-02-25 13:03:14.494 244018 DEBUG nova.compute.manager [req-7d9062e2-afea-4f67-bfe4-8037727229db req-f82186c2-d2ff-4ccd-9a09-f0b838888756 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Refreshing instance network info cache due to event network-changed-7d538e82-43ce-4f65-b84b-fb9efe7d35b0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 08:03:14 np0005629333 nova_compute[244014]: 2026-02-25 13:03:14.494 244018 DEBUG oslo_concurrency.lockutils [req-7d9062e2-afea-4f67-bfe4-8037727229db req-f82186c2-d2ff-4ccd-9a09-f0b838888756 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 08:03:14 np0005629333 nova_compute[244014]: 2026-02-25 13:03:14.494 244018 DEBUG oslo_concurrency.lockutils [req-7d9062e2-afea-4f67-bfe4-8037727229db req-f82186c2-d2ff-4ccd-9a09-f0b838888756 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 08:03:14 np0005629333 nova_compute[244014]: 2026-02-25 13:03:14.495 244018 DEBUG nova.network.neutron [req-7d9062e2-afea-4f67-bfe4-8037727229db req-f82186c2-d2ff-4ccd-9a09-f0b838888756 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Refreshing network info cache for port 7d538e82-43ce-4f65-b84b-fb9efe7d35b0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 08:03:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2491: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 08:03:15 np0005629333 nova_compute[244014]: 2026-02-25 13:03:15.691 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:15 np0005629333 nova_compute[244014]: 2026-02-25 13:03:15.838 244018 DEBUG nova.network.neutron [req-7d9062e2-afea-4f67-bfe4-8037727229db req-f82186c2-d2ff-4ccd-9a09-f0b838888756 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Updated VIF entry in instance network info cache for port 7d538e82-43ce-4f65-b84b-fb9efe7d35b0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 08:03:15 np0005629333 nova_compute[244014]: 2026-02-25 13:03:15.838 244018 DEBUG nova.network.neutron [req-7d9062e2-afea-4f67-bfe4-8037727229db req-f82186c2-d2ff-4ccd-9a09-f0b838888756 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Updating instance_info_cache with network_info: [{"id": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "address": "fa:16:3e:1a:bc:b9", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d538e82-43", "ovs_interfaceid": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:03:15 np0005629333 nova_compute[244014]: 2026-02-25 13:03:15.864 244018 DEBUG oslo_concurrency.lockutils [req-7d9062e2-afea-4f67-bfe4-8037727229db req-f82186c2-d2ff-4ccd-9a09-f0b838888756 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 08:03:15 np0005629333 nova_compute[244014]: 2026-02-25 13:03:15.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:03:16 np0005629333 nova_compute[244014]: 2026-02-25 13:03:16.885 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:03:16 np0005629333 nova_compute[244014]: 2026-02-25 13:03:16.886 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 25 08:03:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2492: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 08:03:18 np0005629333 nova_compute[244014]: 2026-02-25 13:03:18.232 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:03:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2493: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 08:03:20 np0005629333 nova_compute[244014]: 2026-02-25 13:03:20.692 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2494: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Feb 25 08:03:21 np0005629333 podman[380940]: 2026-02-25 13:03:21.703524332 +0000 UTC m=+0.049958209 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 25 08:03:21 np0005629333 podman[380941]: 2026-02-25 13:03:21.724289908 +0000 UTC m=+0.070778227 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Feb 25 08:03:22 np0005629333 ovn_controller[147040]: 2026-02-25T13:03:22Z|00208|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1a:bc:b9 10.100.0.6
Feb 25 08:03:22 np0005629333 ovn_controller[147040]: 2026-02-25T13:03:22Z|00209|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1a:bc:b9 10.100.0.6
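[Annotation] The DHCPOFFER/DHCPACK above are answered by OVN's built-in DHCP responder (the pinctrl thread), not by a dnsmasq process. A sketch for inspecting the DHCP options OVN is serving, assuming ovn-nbctl on this host can reach the northbound DB:

    import subprocess

    # One UUID per DHCP_Options row; each row carries a CIDR plus options.
    uuids = subprocess.check_output(
        ["ovn-nbctl", "dhcp-options-list"], text=True).split()
    for uuid in uuids:
        opts = subprocess.check_output(
            ["ovn-nbctl", "dhcp-options-get-options", uuid], text=True)
        print(uuid, opts.strip())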
Feb 25 08:03:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2495: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Feb 25 08:03:23 np0005629333 nova_compute[244014]: 2026-02-25 13:03:23.234 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:03:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2496: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 08:03:25 np0005629333 nova_compute[244014]: 2026-02-25 13:03:25.694 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2497: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 08:03:28 np0005629333 nova_compute[244014]: 2026-02-25 13:03:28.236 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:03:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2498: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 08:03:30 np0005629333 nova_compute[244014]: 2026-02-25 13:03:30.696 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2499: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Feb 25 08:03:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:03:31
Feb 25 08:03:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:03:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:03:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'volumes', 'images', 'default.rgw.control', '.rgw.root', '.mgr', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data']
Feb 25 08:03:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
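[Annotation] The mgr balancer ran in upmap mode and prepared 0/10 changes, i.e. the PGs are already evenly placed. A sketch for checking the same state by hand with the stock mgr command (assumes a local admin keyring):

    import json
    import subprocess

    status = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "--format=json"], text=True))
    print(status["mode"], status["active"])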
Feb 25 08:03:31 np0005629333 nova_compute[244014]: 2026-02-25 13:03:31.349 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:31.349 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 08:03:31 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:31.351 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 25 08:03:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:03:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:03:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:03:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:03:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:03:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:03:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:03:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:03:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:03:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:03:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:03:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:03:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:03:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:03:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:03:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:03:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2500: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 08:03:33 np0005629333 nova_compute[244014]: 2026-02-25 13:03:33.238 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:03:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2501: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 0 op/s
Feb 25 08:03:35 np0005629333 nova_compute[244014]: 2026-02-25 13:03:35.698 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:36 np0005629333 nova_compute[244014]: 2026-02-25 13:03:36.471 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "5cd02959-e31f-4dd4-a50d-caafefc56629" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:03:36 np0005629333 nova_compute[244014]: 2026-02-25 13:03:36.471 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:03:36 np0005629333 nova_compute[244014]: 2026-02-25 13:03:36.489 244018 DEBUG nova.compute.manager [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 08:03:36 np0005629333 nova_compute[244014]: 2026-02-25 13:03:36.579 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:03:36 np0005629333 nova_compute[244014]: 2026-02-25 13:03:36.580 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
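[Annotation] The Acquiring/acquired/released lines are oslo.concurrency's in-process locks: the build path serializes on the instance UUID, the resource tracker on "compute_resources". A minimal sketch of the same decorator pattern (lock name copied from the log for illustration):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("5cd02959-e31f-4dd4-a50d-caafefc56629")
    def _locked_do_build_and_run_instance():
        pass  # only one builder per instance UUID may run at a time

    _locked_do_build_and_run_instance()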
Feb 25 08:03:36 np0005629333 nova_compute[244014]: 2026-02-25 13:03:36.594 244018 DEBUG nova.virt.hardware [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 08:03:36 np0005629333 nova_compute[244014]: 2026-02-25 13:03:36.595 244018 INFO nova.compute.claims [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 08:03:36 np0005629333 nova_compute[244014]: 2026-02-25 13:03:36.736 244018 DEBUG oslo_concurrency.processutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:03:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2502: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 0 op/s
Feb 25 08:03:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:03:37 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2682082842' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:03:37 np0005629333 nova_compute[244014]: 2026-02-25 13:03:37.264 244018 DEBUG oslo_concurrency.processutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
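[Annotation] Nova shells out to `ceph df` (0.528s above) to size the RBD backend during the resource claim. The same call reduced to a sketch, with the client id and conf path taken from the logged command line:

    import json
    import subprocess

    df = json.loads(subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"], text=True))
    for pool in df["pools"]:
        print(pool["name"], pool["stats"]["bytes_used"])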
Feb 25 08:03:37 np0005629333 nova_compute[244014]: 2026-02-25 13:03:37.270 244018 DEBUG nova.compute.provider_tree [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:03:37 np0005629333 nova_compute[244014]: 2026-02-25 13:03:37.286 244018 DEBUG nova.scheduler.client.report [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
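[Annotation] Placement derives usable capacity from that inventory as (total - reserved) * allocation_ratio, so the node above advertises 32 vCPUs, 7167 MB of RAM, and 52.2 GB of disk:

    # Worked from the inventory data in the log line above.
    inv = {"VCPU": (8, 0, 4.0), "MEMORY_MB": (7679, 512, 1.0), "DISK_GB": (59, 1, 0.9)}
    for rc, (total, reserved, ratio) in inv.items():
        print(rc, (total - reserved) * ratio)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2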
Feb 25 08:03:37 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:37.354 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:03:37 np0005629333 nova_compute[244014]: 2026-02-25 13:03:37.368 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:03:37 np0005629333 nova_compute[244014]: 2026-02-25 13:03:37.369 244018 DEBUG nova.compute.manager [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 08:03:37 np0005629333 nova_compute[244014]: 2026-02-25 13:03:37.425 244018 DEBUG nova.compute.manager [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 08:03:37 np0005629333 nova_compute[244014]: 2026-02-25 13:03:37.425 244018 DEBUG nova.network.neutron [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 08:03:37 np0005629333 nova_compute[244014]: 2026-02-25 13:03:37.446 244018 INFO nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 08:03:37 np0005629333 nova_compute[244014]: 2026-02-25 13:03:37.461 244018 DEBUG nova.compute.manager [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 08:03:37 np0005629333 nova_compute[244014]: 2026-02-25 13:03:37.550 244018 DEBUG nova.compute.manager [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 08:03:37 np0005629333 nova_compute[244014]: 2026-02-25 13:03:37.553 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 08:03:37 np0005629333 nova_compute[244014]: 2026-02-25 13:03:37.553 244018 INFO nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Creating image(s)#033[00m
Feb 25 08:03:37 np0005629333 nova_compute[244014]: 2026-02-25 13:03:37.581 244018 DEBUG nova.storage.rbd_utils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 5cd02959-e31f-4dd4-a50d-caafefc56629_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:03:37 np0005629333 nova_compute[244014]: 2026-02-25 13:03:37.608 244018 DEBUG nova.storage.rbd_utils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 5cd02959-e31f-4dd4-a50d-caafefc56629_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:03:37 np0005629333 nova_compute[244014]: 2026-02-25 13:03:37.634 244018 DEBUG nova.storage.rbd_utils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 5cd02959-e31f-4dd4-a50d-caafefc56629_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:03:37 np0005629333 nova_compute[244014]: 2026-02-25 13:03:37.638 244018 DEBUG oslo_concurrency.processutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:03:37 np0005629333 nova_compute[244014]: 2026-02-25 13:03:37.669 244018 DEBUG nova.policy [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea895f651dd742a7b5eb2d63fb34641c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b9699483122f465084e3147e4904d13d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 08:03:37 np0005629333 nova_compute[244014]: 2026-02-25 13:03:37.709 244018 DEBUG oslo_concurrency.processutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
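[Annotation] qemu-img runs under oslo's prlimit wrapper (1 GiB address space, 30s CPU) so a malformed image cannot wedge the compute process. A sketch of the unwrapped probe, with the base-image path from the log; --force-share avoids taking a write lock on a file another process holds open:

    import json
    import subprocess

    info = json.loads(subprocess.check_output(
        ["qemu-img", "info", "--force-share", "--output=json",
         "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"],
        text=True))
    print(info["format"], info["virtual-size"])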
Feb 25 08:03:37 np0005629333 nova_compute[244014]: 2026-02-25 13:03:37.709 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:03:37 np0005629333 nova_compute[244014]: 2026-02-25 13:03:37.710 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:03:37 np0005629333 nova_compute[244014]: 2026-02-25 13:03:37.710 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:03:37 np0005629333 nova_compute[244014]: 2026-02-25 13:03:37.734 244018 DEBUG nova.storage.rbd_utils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 5cd02959-e31f-4dd4-a50d-caafefc56629_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:03:37 np0005629333 nova_compute[244014]: 2026-02-25 13:03:37.738 244018 DEBUG oslo_concurrency.processutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 5cd02959-e31f-4dd4-a50d-caafefc56629_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:03:38 np0005629333 nova_compute[244014]: 2026-02-25 13:03:38.129 244018 DEBUG oslo_concurrency.processutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 5cd02959-e31f-4dd4-a50d-caafefc56629_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.391s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:03:38 np0005629333 nova_compute[244014]: 2026-02-25 13:03:38.209 244018 DEBUG nova.storage.rbd_utils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] resizing rbd image 5cd02959-e31f-4dd4-a50d-caafefc56629_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
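[Annotation] The root disk is created by importing the cached base image into the vms pool, then resizing it up to the flavor's 1 GiB (1073741824 bytes) root disk. The equivalent CLI steps, as a sketch mirroring the logged commands:

    import subprocess

    base = "/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6"
    disk = "5cd02959-e31f-4dd4-a50d-caafefc56629_disk"
    subprocess.check_call(["rbd", "import", "--pool", "vms", base, disk,
                           "--image-format=2", "--id", "openstack",
                           "--conf", "/etc/ceph/ceph.conf"])
    subprocess.check_call(["rbd", "resize", "--pool", "vms", disk,
                           "--size", "1024",  # MiB; 1073741824 bytes as logged
                           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])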
Feb 25 08:03:38 np0005629333 nova_compute[244014]: 2026-02-25 13:03:38.248 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:38 np0005629333 nova_compute[244014]: 2026-02-25 13:03:38.321 244018 DEBUG nova.objects.instance [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'migration_context' on Instance uuid 5cd02959-e31f-4dd4-a50d-caafefc56629 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 08:03:38 np0005629333 nova_compute[244014]: 2026-02-25 13:03:38.337 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 08:03:38 np0005629333 nova_compute[244014]: 2026-02-25 13:03:38.338 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Ensure instance console log exists: /var/lib/nova/instances/5cd02959-e31f-4dd4-a50d-caafefc56629/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 08:03:38 np0005629333 nova_compute[244014]: 2026-02-25 13:03:38.338 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:03:38 np0005629333 nova_compute[244014]: 2026-02-25 13:03:38.339 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:03:38 np0005629333 nova_compute[244014]: 2026-02-25 13:03:38.340 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:03:38 np0005629333 nova_compute[244014]: 2026-02-25 13:03:38.525 244018 DEBUG nova.network.neutron [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Successfully created port: b54ec6ac-470f-4041-8bcc-ed69244b5317 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 08:03:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:03:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2503: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 14 KiB/s wr, 0 op/s
Feb 25 08:03:39 np0005629333 nova_compute[244014]: 2026-02-25 13:03:39.379 244018 DEBUG nova.network.neutron [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Successfully updated port: b54ec6ac-470f-4041-8bcc-ed69244b5317 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 08:03:39 np0005629333 nova_compute[244014]: 2026-02-25 13:03:39.396 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "refresh_cache-5cd02959-e31f-4dd4-a50d-caafefc56629" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 08:03:39 np0005629333 nova_compute[244014]: 2026-02-25 13:03:39.397 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquired lock "refresh_cache-5cd02959-e31f-4dd4-a50d-caafefc56629" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 08:03:39 np0005629333 nova_compute[244014]: 2026-02-25 13:03:39.397 244018 DEBUG nova.network.neutron [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 08:03:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 25 08:03:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 08:03:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:03:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:03:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:03:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:03:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:03:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:03:39 np0005629333 nova_compute[244014]: 2026-02-25 13:03:39.500 244018 DEBUG nova.compute.manager [req-b0567c34-8aca-43ed-be76-399a0bfd1d15 req-fce3b14f-cc46-4b97-8019-c7a69fd06e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Received event network-changed-b54ec6ac-470f-4041-8bcc-ed69244b5317 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:03:39 np0005629333 nova_compute[244014]: 2026-02-25 13:03:39.500 244018 DEBUG nova.compute.manager [req-b0567c34-8aca-43ed-be76-399a0bfd1d15 req-fce3b14f-cc46-4b97-8019-c7a69fd06e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Refreshing instance network info cache due to event network-changed-b54ec6ac-470f-4041-8bcc-ed69244b5317. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 08:03:39 np0005629333 nova_compute[244014]: 2026-02-25 13:03:39.500 244018 DEBUG oslo_concurrency.lockutils [req-b0567c34-8aca-43ed-be76-399a0bfd1d15 req-fce3b14f-cc46-4b97-8019-c7a69fd06e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-5cd02959-e31f-4dd4-a50d-caafefc56629" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 08:03:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:03:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:03:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:03:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:03:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:03:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:03:39 np0005629333 nova_compute[244014]: 2026-02-25 13:03:39.545 244018 DEBUG nova.network.neutron [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 08:03:39 np0005629333 nova_compute[244014]: 2026-02-25 13:03:39.889 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:03:39 np0005629333 nova_compute[244014]: 2026-02-25 13:03:39.890 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:03:39 np0005629333 podman[381318]: 2026-02-25 13:03:39.900769 +0000 UTC m=+0.079159393 container create 6e4c290ac6ebaad32b47c4457ea75aa5535d6bbfc9a3491a6026e971a254da5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_shaw, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:03:39 np0005629333 nova_compute[244014]: 2026-02-25 13:03:39.913 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:03:39 np0005629333 systemd[1]: Started libpod-conmon-6e4c290ac6ebaad32b47c4457ea75aa5535d6bbfc9a3491a6026e971a254da5a.scope.
Feb 25 08:03:39 np0005629333 podman[381318]: 2026-02-25 13:03:39.865911968 +0000 UTC m=+0.044302441 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:03:39 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:03:40 np0005629333 podman[381318]: 2026-02-25 13:03:40.021881796 +0000 UTC m=+0.200272269 container init 6e4c290ac6ebaad32b47c4457ea75aa5535d6bbfc9a3491a6026e971a254da5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_shaw, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:03:40 np0005629333 podman[381318]: 2026-02-25 13:03:40.035570582 +0000 UTC m=+0.213961005 container start 6e4c290ac6ebaad32b47c4457ea75aa5535d6bbfc9a3491a6026e971a254da5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 08:03:40 np0005629333 frosty_shaw[381334]: 167 167
Feb 25 08:03:40 np0005629333 systemd[1]: libpod-6e4c290ac6ebaad32b47c4457ea75aa5535d6bbfc9a3491a6026e971a254da5a.scope: Deactivated successfully.
Feb 25 08:03:40 np0005629333 podman[381318]: 2026-02-25 13:03:40.096902491 +0000 UTC m=+0.275292974 container attach 6e4c290ac6ebaad32b47c4457ea75aa5535d6bbfc9a3491a6026e971a254da5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_shaw, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 08:03:40 np0005629333 podman[381318]: 2026-02-25 13:03:40.09792085 +0000 UTC m=+0.276311283 container died 6e4c290ac6ebaad32b47c4457ea75aa5535d6bbfc9a3491a6026e971a254da5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_shaw, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 08:03:40 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 08:03:40 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:03:40 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:03:40 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:03:40 np0005629333 systemd[1]: var-lib-containers-storage-overlay-244e6d46700cc4535f2a74d909f1c262f62b0f220771f31726b3af3efa5278dc-merged.mount: Deactivated successfully.
Feb 25 08:03:40 np0005629333 podman[381318]: 2026-02-25 13:03:40.195117731 +0000 UTC m=+0.373508154 container remove 6e4c290ac6ebaad32b47c4457ea75aa5535d6bbfc9a3491a6026e971a254da5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:03:40 np0005629333 systemd[1]: libpod-conmon-6e4c290ac6ebaad32b47c4457ea75aa5535d6bbfc9a3491a6026e971a254da5a.scope: Deactivated successfully.
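[Annotation] The create/init/start/attach/died/remove sequence above is a one-shot cephadm helper container (auto-named frosty_shaw) that printed "167 167" and exited within ~0.4s. Roughly the same lifecycle collapses to a single `podman run --rm`; the command given here is illustrative, not the one cephadm actually ran:

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    # Illustrative payload; the logged container emitted a uid/gid pair.
    out = subprocess.check_output(["podman", "run", "--rm", image, "id", "-u"],
                                  text=True)
    print(out.strip())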
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.249 244018 DEBUG nova.network.neutron [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Updating instance_info_cache with network_info: [{"id": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "address": "fa:16:3e:fa:d1:ab", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54ec6ac-47", "ovs_interfaceid": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.268 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Releasing lock "refresh_cache-5cd02959-e31f-4dd4-a50d-caafefc56629" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.269 244018 DEBUG nova.compute.manager [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Instance network_info: |[{"id": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "address": "fa:16:3e:fa:d1:ab", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54ec6ac-47", "ovs_interfaceid": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.269 244018 DEBUG oslo_concurrency.lockutils [req-b0567c34-8aca-43ed-be76-399a0bfd1d15 req-fce3b14f-cc46-4b97-8019-c7a69fd06e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-5cd02959-e31f-4dd4-a50d-caafefc56629" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.270 244018 DEBUG nova.network.neutron [req-b0567c34-8aca-43ed-be76-399a0bfd1d15 req-fce3b14f-cc46-4b97-8019-c7a69fd06e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Refreshing network info cache for port b54ec6ac-470f-4041-8bcc-ed69244b5317 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.275 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Start _get_guest_xml network_info=[{"id": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "address": "fa:16:3e:fa:d1:ab", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54ec6ac-47", "ovs_interfaceid": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.281 244018 WARNING nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.293 244018 DEBUG nova.virt.libvirt.host [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.294 244018 DEBUG nova.virt.libvirt.host [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.298 244018 DEBUG nova.virt.libvirt.host [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.299 244018 DEBUG nova.virt.libvirt.host [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
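[Annotation] Nova probed cgroups v1 (controller missing) and then v2 (controller found) before emitting guest CPU tuning. On a unified-hierarchy host the v2 check reduces to reading the root controllers file, as a sketch (assumes cgroups v2 is mounted at the usual path):

    from pathlib import Path

    controllers = Path("/sys/fs/cgroup/cgroup.controllers").read_text().split()
    print("cpu controller present:", "cpu" in controllers)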
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.299 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.300 244018 DEBUG nova.virt.hardware [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.300 244018 DEBUG nova.virt.hardware [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.301 244018 DEBUG nova.virt.hardware [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.301 244018 DEBUG nova.virt.hardware [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.301 244018 DEBUG nova.virt.hardware [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.302 244018 DEBUG nova.virt.hardware [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.302 244018 DEBUG nova.virt.hardware [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.303 244018 DEBUG nova.virt.hardware [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.303 244018 DEBUG nova.virt.hardware [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.303 244018 DEBUG nova.virt.hardware [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.303 244018 DEBUG nova.virt.hardware [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.309 244018 DEBUG oslo_concurrency.processutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:03:40 np0005629333 podman[381359]: 2026-02-25 13:03:40.343996439 +0000 UTC m=+0.049103216 container create f47778d62a4af87f5c8a2facd310e31705438a1f6c05dd8e5e9c24c13307faeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cartwright, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 08:03:40 np0005629333 systemd[1]: Started libpod-conmon-f47778d62a4af87f5c8a2facd310e31705438a1f6c05dd8e5e9c24c13307faeb.scope.
Feb 25 08:03:40 np0005629333 podman[381359]: 2026-02-25 13:03:40.319533029 +0000 UTC m=+0.024639826 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:03:40 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:03:40 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ab0e555455b1bd5a5b126cad8d81f1f8e6c3d3dc7711fad34fb973716018f5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:03:40 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ab0e555455b1bd5a5b126cad8d81f1f8e6c3d3dc7711fad34fb973716018f5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:03:40 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ab0e555455b1bd5a5b126cad8d81f1f8e6c3d3dc7711fad34fb973716018f5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:03:40 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ab0e555455b1bd5a5b126cad8d81f1f8e6c3d3dc7711fad34fb973716018f5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:03:40 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ab0e555455b1bd5a5b126cad8d81f1f8e6c3d3dc7711fad34fb973716018f5e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:03:40 np0005629333 podman[381359]: 2026-02-25 13:03:40.448578008 +0000 UTC m=+0.153684805 container init f47778d62a4af87f5c8a2facd310e31705438a1f6c05dd8e5e9c24c13307faeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:03:40 np0005629333 podman[381359]: 2026-02-25 13:03:40.457549851 +0000 UTC m=+0.162656628 container start f47778d62a4af87f5c8a2facd310e31705438a1f6c05dd8e5e9c24c13307faeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cartwright, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Feb 25 08:03:40 np0005629333 podman[381359]: 2026-02-25 13:03:40.465752962 +0000 UTC m=+0.170859759 container attach f47778d62a4af87f5c8a2facd310e31705438a1f6c05dd8e5e9c24c13307faeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cartwright, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.701 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:03:40 np0005629333 affectionate_cartwright[381378]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:03:40 np0005629333 affectionate_cartwright[381378]: --> All data devices are unavailable
Feb 25 08:03:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 08:03:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3883640857' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 08:03:40 np0005629333 systemd[1]: libpod-f47778d62a4af87f5c8a2facd310e31705438a1f6c05dd8e5e9c24c13307faeb.scope: Deactivated successfully.
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.911 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:03:40 np0005629333 podman[381359]: 2026-02-25 13:03:40.912360676 +0000 UTC m=+0.617467463 container died f47778d62a4af87f5c8a2facd310e31705438a1f6c05dd8e5e9c24c13307faeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.912 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.913 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.913 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.915 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:03:40 np0005629333 systemd[1]: var-lib-containers-storage-overlay-9ab0e555455b1bd5a5b126cad8d81f1f8e6c3d3dc7711fad34fb973716018f5e-merged.mount: Deactivated successfully.
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.950 244018 DEBUG oslo_concurrency.processutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
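[editor's note] The mon dump above is issued by Nova's RBD backend as an external command through oslo.concurrency rather than through librados. A sketch of the equivalent call, assuming a reachable cluster, /etc/ceph/ceph.conf, and a keyring for client.openstack:

    # Re-issue the logged command with oslo.concurrency's processutils and
    # parse the monmap JSON. Paths and the 'openstack' id match the log above.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    monmap = json.loads(out)
    print([mon["name"] for mon in monmap["mons"]])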
Feb 25 08:03:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2504: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 2.0 KiB/s wr, 0 op/s
Feb 25 08:03:40 np0005629333 podman[381359]: 2026-02-25 13:03:40.95290048 +0000 UTC m=+0.658007257 container remove f47778d62a4af87f5c8a2facd310e31705438a1f6c05dd8e5e9c24c13307faeb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_cartwright, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:03:40 np0005629333 systemd[1]: libpod-conmon-f47778d62a4af87f5c8a2facd310e31705438a1f6c05dd8e5e9c24c13307faeb.scope: Deactivated successfully.
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.975 244018 DEBUG nova.storage.rbd_utils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 5cd02959-e31f-4dd4-a50d-caafefc56629_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:03:40 np0005629333 nova_compute[244014]: 2026-02-25 13:03:40.979 244018 DEBUG oslo_concurrency.processutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:03:41 np0005629333 podman[381551]: 2026-02-25 13:03:41.363123587 +0000 UTC m=+0.059574411 container create 18d0162314896c0ce6e9e5832758bb2efca1f7991cef51e23134c7c4a556a55a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.379 244018 DEBUG nova.network.neutron [req-b0567c34-8aca-43ed-be76-399a0bfd1d15 req-fce3b14f-cc46-4b97-8019-c7a69fd06e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Updated VIF entry in instance network info cache for port b54ec6ac-470f-4041-8bcc-ed69244b5317. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.380 244018 DEBUG nova.network.neutron [req-b0567c34-8aca-43ed-be76-399a0bfd1d15 req-fce3b14f-cc46-4b97-8019-c7a69fd06e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Updating instance_info_cache with network_info: [{"id": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "address": "fa:16:3e:fa:d1:ab", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54ec6ac-47", "ovs_interfaceid": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.403 244018 DEBUG oslo_concurrency.lockutils [req-b0567c34-8aca-43ed-be76-399a0bfd1d15 req-fce3b14f-cc46-4b97-8019-c7a69fd06e2c 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-5cd02959-e31f-4dd4-a50d-caafefc56629" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
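[editor's note] The network_info blob cached above is plain JSON; pulling the fixed addresses out of it needs no Nova code. A small sketch over a trimmed-down copy of the logged structure:

    # Walk a network_info list (as logged in "Updating instance_info_cache")
    # and collect the fixed IPs. The dict below is a trimmed copy of the log.
    network_info = [{
        "id": "b54ec6ac-470f-4041-8bcc-ed69244b5317",
        "address": "fa:16:3e:fa:d1:ab",
        "network": {"subnets": [{"cidr": "10.100.0.0/28",
                                 "ips": [{"address": "10.100.0.10",
                                          "type": "fixed"}]}]},
    }]

    fixed_ips = [ip["address"]
                 for vif in network_info
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"]
                 if ip["type"] == "fixed"]
    print(fixed_ips)  # ['10.100.0.10']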
Feb 25 08:03:41 np0005629333 systemd[1]: Started libpod-conmon-18d0162314896c0ce6e9e5832758bb2efca1f7991cef51e23134c7c4a556a55a.scope.
Feb 25 08:03:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:03:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/357654892' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:03:41 np0005629333 podman[381551]: 2026-02-25 13:03:41.336349072 +0000 UTC m=+0.032799886 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:03:41 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.443 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:03:41 np0005629333 podman[381551]: 2026-02-25 13:03:41.45612186 +0000 UTC m=+0.152572704 container init 18d0162314896c0ce6e9e5832758bb2efca1f7991cef51e23134c7c4a556a55a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_kapitsa, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:03:41 np0005629333 podman[381551]: 2026-02-25 13:03:41.463342603 +0000 UTC m=+0.159793397 container start 18d0162314896c0ce6e9e5832758bb2efca1f7991cef51e23134c7c4a556a55a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:03:41 np0005629333 nervous_kapitsa[381568]: 167 167
Feb 25 08:03:41 np0005629333 systemd[1]: libpod-18d0162314896c0ce6e9e5832758bb2efca1f7991cef51e23134c7c4a556a55a.scope: Deactivated successfully.
Feb 25 08:03:41 np0005629333 podman[381551]: 2026-02-25 13:03:41.470488655 +0000 UTC m=+0.166939519 container attach 18d0162314896c0ce6e9e5832758bb2efca1f7991cef51e23134c7c4a556a55a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_kapitsa, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 08:03:41 np0005629333 podman[381551]: 2026-02-25 13:03:41.471075781 +0000 UTC m=+0.167526575 container died 18d0162314896c0ce6e9e5832758bb2efca1f7991cef51e23134c7c4a556a55a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_kapitsa, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 08:03:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 08:03:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3250334373' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.506 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.506 244018 DEBUG nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Feb 25 08:03:41 np0005629333 systemd[1]: var-lib-containers-storage-overlay-df343bbb39ede07b31ca689e7ef4fd93f4929b214c0fdccec9a6940206686fb3-merged.mount: Deactivated successfully.
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.519 244018 DEBUG oslo_concurrency.processutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.520 244018 DEBUG nova.virt.libvirt.vif [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T13:03:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-1-1820301438',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-1-1820301438',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-gen',id=151,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbQ2uY7T8R5wVQnvFbyA1segFh0PnNufgJbxt3APzZeuF0vcSW1YfpqbsmGbwvIughYn7yKsWGZo7qrK+BPyfIKQ42RLyvp7znat3TJdsHJA5kxzzkNZRLtYgFDUNF7bA==',key_name='tempest-TestSecurityGroupsBasicOps-1917346637',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-pztet3gy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T13:03:37Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=5cd02959-e31f-4dd4-a50d-caafefc56629,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "address": "fa:16:3e:fa:d1:ab", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54ec6ac-47", "ovs_interfaceid": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.521 244018 DEBUG nova.network.os_vif_util [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "address": "fa:16:3e:fa:d1:ab", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54ec6ac-47", "ovs_interfaceid": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.521 244018 DEBUG nova.network.os_vif_util [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:d1:ab,bridge_name='br-int',has_traffic_filtering=True,id=b54ec6ac-470f-4041-8bcc-ed69244b5317,network=Network(60551679-32db-4035-bd76-7d0d38a9d6de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb54ec6ac-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.523 244018 DEBUG nova.objects.instance [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'pci_devices' on Instance uuid 5cd02959-e31f-4dd4-a50d-caafefc56629 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.537 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] End _get_guest_xml xml=<domain type="kvm">
Feb 25 08:03:41 np0005629333 nova_compute[244014]:  <uuid>5cd02959-e31f-4dd4-a50d-caafefc56629</uuid>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:  <name>instance-00000097</name>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-1-1820301438</nova:name>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 13:03:40</nova:creationTime>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 08:03:41 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:        <nova:user uuid="ea895f651dd742a7b5eb2d63fb34641c">tempest-TestSecurityGroupsBasicOps-948360018-project-member</nova:user>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:        <nova:project uuid="b9699483122f465084e3147e4904d13d">tempest-TestSecurityGroupsBasicOps-948360018</nova:project>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:        <nova:port uuid="b54ec6ac-470f-4041-8bcc-ed69244b5317">
Feb 25 08:03:41 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <system>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <entry name="serial">5cd02959-e31f-4dd4-a50d-caafefc56629</entry>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <entry name="uuid">5cd02959-e31f-4dd4-a50d-caafefc56629</entry>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    </system>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:  <os>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:  </os>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:  <features>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:  </features>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:  </clock>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:  <devices>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/5cd02959-e31f-4dd4-a50d-caafefc56629_disk">
Feb 25 08:03:41 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      </source>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 08:03:41 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      </auth>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    </disk>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/5cd02959-e31f-4dd4-a50d-caafefc56629_disk.config">
Feb 25 08:03:41 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      </source>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 08:03:41 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      </auth>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    </disk>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:fa:d1:ab"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <target dev="tapb54ec6ac-47"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    </interface>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/5cd02959-e31f-4dd4-a50d-caafefc56629/console.log" append="off"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    </serial>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <video>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    </video>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    </rng>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 08:03:41 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 08:03:41 np0005629333 nova_compute[244014]:  </devices>
Feb 25 08:03:41 np0005629333 nova_compute[244014]: </domain>
Feb 25 08:03:41 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
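[editor's note] The XML dumped above is the complete domain definition Nova hands to libvirt. A sketch of feeding such XML to libvirt directly with the libvirt-python binding, assuming the XML has been saved to a local file (the filename here is illustrative):

    # Define and start a domain from an XML file like the one logged above.
    # Assumes libvirt-python is installed and qemu:///system is reachable.
    import libvirt

    with open("instance-00000097.xml") as f:  # hypothetical dump of the XML
        xml = f.read()

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(xml)   # persist the definition
        dom.create()                # boot it, as Nova does after plugging VIFs
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()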
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.537 244018 DEBUG nova.compute.manager [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Preparing to wait for external event network-vif-plugged-b54ec6ac-470f-4041-8bcc-ed69244b5317 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.537 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.538 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.538 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
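[editor's note] The acquire/release pair above is oslo.concurrency's named-lock wrapper guarding the per-instance event registry while Nova waits for network-vif-plugged. The same pattern through the public API, sketched:

    # Guard a critical section with the same named lock seen in the log.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("5cd02959-e31f-4dd4-a50d-caafefc56629-events")
    def _create_or_get_event():
        # Look up or register the event object for this instance so the
        # spawning thread and the network-vif-plugged callback share it.
        return object()

    _create_or_get_event()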
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.539 244018 DEBUG nova.virt.libvirt.vif [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-25T13:03:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-1-1820301438',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-1-1820301438',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-gen',id=151,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbQ2uY7T8R5wVQnvFbyA1segFh0PnNufgJbxt3APzZeuF0vcSW1YfpqbsmGbwvIughYn7yKsWGZo7qrK+BPyfIKQ42RLyvp7znat3TJdsHJA5kxzzkNZRLtYgFDUNF7bA==',key_name='tempest-TestSecurityGroupsBasicOps-1917346637',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-pztet3gy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T13:03:37Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=5cd02959-e31f-4dd4-a50d-caafefc56629,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "address": "fa:16:3e:fa:d1:ab", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54ec6ac-47", "ovs_interfaceid": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.539 244018 DEBUG nova.network.os_vif_util [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "address": "fa:16:3e:fa:d1:ab", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54ec6ac-47", "ovs_interfaceid": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.540 244018 DEBUG nova.network.os_vif_util [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:d1:ab,bridge_name='br-int',has_traffic_filtering=True,id=b54ec6ac-470f-4041-8bcc-ed69244b5317,network=Network(60551679-32db-4035-bd76-7d0d38a9d6de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb54ec6ac-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.540 244018 DEBUG os_vif [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:d1:ab,bridge_name='br-int',has_traffic_filtering=True,id=b54ec6ac-470f-4041-8bcc-ed69244b5317,network=Network(60551679-32db-4035-bd76-7d0d38a9d6de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb54ec6ac-47') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.541 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.542 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.542 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.546 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.546 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb54ec6ac-47, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.547 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb54ec6ac-47, col_values=(('external_ids', {'iface-id': 'b54ec6ac-470f-4041-8bcc-ed69244b5317', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fa:d1:ab', 'vm-uuid': '5cd02959-e31f-4dd4-a50d-caafefc56629'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.549 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:41 np0005629333 NetworkManager[49836]: <info>  [1772024621.5504] manager: (tapb54ec6ac-47): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/657)
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.551 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 08:03:41 np0005629333 podman[381551]: 2026-02-25 13:03:41.552858358 +0000 UTC m=+0.249309142 container remove 18d0162314896c0ce6e9e5832758bb2efca1f7991cef51e23134c7c4a556a55a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.556 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.557 244018 INFO os_vif [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:d1:ab,bridge_name='br-int',has_traffic_filtering=True,id=b54ec6ac-470f-4041-8bcc-ed69244b5317,network=Network(60551679-32db-4035-bd76-7d0d38a9d6de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb54ec6ac-47')
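Annotation: the plug sequence above (AddBridgeCommand with may_exist=True, AddPortCommand, then DbSetCommand writing the Neutron external_ids) maps onto plain ovs-vsctl operations. A minimal sketch of that mapping, using the identifiers from the log and assuming ovs-vsctl is on PATH; this is an illustration only, not os-vif's actual code path (os-vif drives ovsdbapp directly, as the txn lines show):

    import subprocess

    def plug_ovs_vif(bridge, dev, iface_id, mac, vm_uuid):
        # "--may-exist" mirrors may_exist=True in the AddBridge/AddPort commands above.
        subprocess.run(["ovs-vsctl", "--may-exist", "add-br", bridge], check=True)
        subprocess.run(
            ["ovs-vsctl", "--may-exist", "add-port", bridge, dev,
             "--", "set", "Interface", dev,
             f"external_ids:iface-id={iface_id}",
             "external_ids:iface-status=active",
             f"external_ids:attached-mac={mac}",
             f"external_ids:vm-uuid={vm_uuid}"],
            check=True)

    plug_ovs_vif("br-int", "tapb54ec6ac-47",
                 "b54ec6ac-470f-4041-8bcc-ed69244b5317", "fa:16:3e:fa:d1:ab",
                 "5cd02959-e31f-4dd4-a50d-caafefc56629")

The "Transaction caused no change" line is the idempotent case: br-int already existed, so the AddBridgeCommand was a no-op.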
Feb 25 08:03:41 np0005629333 systemd[1]: libpod-conmon-18d0162314896c0ce6e9e5832758bb2efca1f7991cef51e23134c7c4a556a55a.scope: Deactivated successfully.
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.599 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.600 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.600 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] No VIF found with MAC fa:16:3e:fa:d1:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.601 244018 INFO nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Using config drive
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.620 244018 DEBUG nova.storage.rbd_utils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 5cd02959-e31f-4dd4-a50d-caafefc56629_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 08:03:41 np0005629333 podman[381617]: 2026-02-25 13:03:41.695423208 +0000 UTC m=+0.038608840 container create 0418b6f9ce1329cff6138021b8093a2c3b2929245dbd71c2461b70b8cb999b76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_khayyam, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.729 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.731 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3238MB free_disk=59.941737859509885GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.731 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.731 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:03:41 np0005629333 systemd[1]: Started libpod-conmon-0418b6f9ce1329cff6138021b8093a2c3b2929245dbd71c2461b70b8cb999b76.scope.
Feb 25 08:03:41 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:03:41 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f2b70a08eb0a06013a2b5c7813a414045d833868d693d90ae8d6eb4d9e16b38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:03:41 np0005629333 podman[381617]: 2026-02-25 13:03:41.675677301 +0000 UTC m=+0.018862963 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:03:41 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f2b70a08eb0a06013a2b5c7813a414045d833868d693d90ae8d6eb4d9e16b38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:03:41 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f2b70a08eb0a06013a2b5c7813a414045d833868d693d90ae8d6eb4d9e16b38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:03:41 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f2b70a08eb0a06013a2b5c7813a414045d833868d693d90ae8d6eb4d9e16b38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:03:41 np0005629333 podman[381617]: 2026-02-25 13:03:41.809389742 +0000 UTC m=+0.152575474 container init 0418b6f9ce1329cff6138021b8093a2c3b2929245dbd71c2461b70b8cb999b76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_khayyam, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 08:03:41 np0005629333 podman[381617]: 2026-02-25 13:03:41.818169549 +0000 UTC m=+0.161355181 container start 0418b6f9ce1329cff6138021b8093a2c3b2929245dbd71c2461b70b8cb999b76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Feb 25 08:03:41 np0005629333 podman[381617]: 2026-02-25 13:03:41.82812847 +0000 UTC m=+0.171314192 container attach 0418b6f9ce1329cff6138021b8093a2c3b2929245dbd71c2461b70b8cb999b76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.829 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.829 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Instance 5cd02959-e31f-4dd4-a50d-caafefc56629 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.830 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.830 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
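Annotation: the "Final resource view" numbers are consistent with the two placement allocations reported just above plus the 512 MB reserved host memory visible in the inventory logged a moment later. A worked check of that arithmetic:

    # Two instances, each {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}, per the
    # _remove_deleted_instances_allocations lines above; 512 MB reserved RAM
    # per the MEMORY_MB inventory ('reserved': 512) reported below.
    allocations = [{"DISK_GB": 1, "MEMORY_MB": 128, "VCPU": 1}] * 2
    used_ram = 512 + sum(a["MEMORY_MB"] for a in allocations)  # 768 MB
    used_disk = sum(a["DISK_GB"] for a in allocations)         # 2 GB
    used_vcpus = sum(a["VCPU"] for a in allocations)           # 2
    print(used_ram, used_disk, used_vcpus)                     # 768 2 2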
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.875 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.901 244018 INFO nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Creating config drive at /var/lib/nova/instances/5cd02959-e31f-4dd4-a50d-caafefc56629/disk.config
Feb 25 08:03:41 np0005629333 nova_compute[244014]: 2026-02-25 13:03:41.905 244018 DEBUG oslo_concurrency.processutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5cd02959-e31f-4dd4-a50d-caafefc56629/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpbmapajmg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:03:42 np0005629333 nova_compute[244014]: 2026-02-25 13:03:42.041 244018 DEBUG oslo_concurrency.processutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5cd02959-e31f-4dd4-a50d-caafefc56629/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpbmapajmg" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
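Annotation: the config drive is a plain ISO9660 image labelled config-2, built here with mkisofs from a temporary staging directory. A minimal sketch of the same invocation; the openstack/latest/meta_data.json layout is the standard config-drive convention, simplified here, and not a literal copy of Nova's builder:

    import json, pathlib, subprocess, tempfile

    def build_config_drive(out_iso, meta):
        with tempfile.TemporaryDirectory() as tmp:
            latest = pathlib.Path(tmp, "openstack", "latest")
            latest.mkdir(parents=True)
            (latest / "meta_data.json").write_text(json.dumps(meta))
            # Same flags as the logged command, minus -publisher/-quiet.
            subprocess.run(
                ["/usr/bin/mkisofs", "-o", out_iso, "-ldots", "-allow-lowercase",
                 "-allow-multidot", "-l", "-J", "-r", "-V", "config-2", tmp],
                check=True)

    build_config_drive("/tmp/disk.config",
                       {"uuid": "5cd02959-e31f-4dd4-a50d-caafefc56629"})

The -V config-2 volume label is what cloud-init later uses to locate the drive inside the guest.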
Feb 25 08:03:42 np0005629333 nova_compute[244014]: 2026-02-25 13:03:42.077 244018 DEBUG nova.storage.rbd_utils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] rbd image 5cd02959-e31f-4dd4-a50d-caafefc56629_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 08:03:42 np0005629333 nova_compute[244014]: 2026-02-25 13:03:42.082 244018 DEBUG oslo_concurrency.processutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5cd02959-e31f-4dd4-a50d-caafefc56629/disk.config 5cd02959-e31f-4dd4-a50d-caafefc56629_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]: {
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:    "0": [
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:        {
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "devices": [
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "/dev/loop3"
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            ],
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "lv_name": "ceph_lv0",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "lv_size": "21470642176",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "name": "ceph_lv0",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "tags": {
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.cluster_name": "ceph",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.crush_device_class": "",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.encrypted": "0",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.objectstore": "bluestore",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.osd_id": "0",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.type": "block",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.vdo": "0",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.with_tpm": "0"
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            },
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "type": "block",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "vg_name": "ceph_vg0"
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:        }
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:    ],
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:    "1": [
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:        {
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "devices": [
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "/dev/loop4"
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            ],
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "lv_name": "ceph_lv1",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "lv_size": "21470642176",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "name": "ceph_lv1",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "tags": {
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.cluster_name": "ceph",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.crush_device_class": "",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.encrypted": "0",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.objectstore": "bluestore",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.osd_id": "1",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.type": "block",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.vdo": "0",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.with_tpm": "0"
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            },
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "type": "block",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "vg_name": "ceph_vg1"
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:        }
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:    ],
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:    "2": [
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:        {
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "devices": [
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "/dev/loop5"
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            ],
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "lv_name": "ceph_lv2",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "lv_size": "21470642176",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "name": "ceph_lv2",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "tags": {
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.cluster_name": "ceph",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.crush_device_class": "",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.encrypted": "0",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.objectstore": "bluestore",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.osd_id": "2",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.type": "block",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.vdo": "0",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:                "ceph.with_tpm": "0"
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            },
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "type": "block",
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:            "vg_name": "ceph_vg2"
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:        }
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]:    ]
Feb 25 08:03:42 np0005629333 suspicious_khayyam[381633]: }
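Annotation: the JSON the suspicious_khayyam container printed above has the shape of ceph-volume lvm list --format json output: a map from OSD id ("0", "1", "2") to the logical volumes backing that OSD; the container appears to be cephadm invoking ceph-volume inside the Ceph image. A short sketch of pulling the useful fields back out, assuming the same command is rerun:

    import json, subprocess

    out = subprocess.run(["ceph-volume", "lvm", "list", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    for osd_id, lvs in sorted(json.loads(out).items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"osd_fsid={lv['tags']['ceph.osd_fsid']}")

On this host that would report three BlueStore OSDs on /dev/loop3..5, each backed by a ~21.5 GB LV, matching the 60 GiB total the pgmap line below reports.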
Feb 25 08:03:42 np0005629333 systemd[1]: libpod-0418b6f9ce1329cff6138021b8093a2c3b2929245dbd71c2461b70b8cb999b76.scope: Deactivated successfully.
Feb 25 08:03:42 np0005629333 podman[381617]: 2026-02-25 13:03:42.114240108 +0000 UTC m=+0.457425780 container died 0418b6f9ce1329cff6138021b8093a2c3b2929245dbd71c2461b70b8cb999b76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 08:03:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:03:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2898101735' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:03:42 np0005629333 nova_compute[244014]: 2026-02-25 13:03:42.371 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
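Annotation: with RBD-backed instance storage, the resource tracker sizes its DISK_GB view from the Ceph cluster rather than from local disk, which is why the periodic task runs ceph df here (and why the ceph-mon audit lines above show the dispatch). A sketch of the same query and the per-pool fields it consumes; the field names follow the ceph df JSON schema (pools[].stats.bytes_used / max_avail):

    import json, subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    vms = next(p for p in json.loads(out)["pools"] if p["name"] == "vms")
    print(vms["stats"]["bytes_used"], vms["stats"]["max_avail"])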
Feb 25 08:03:42 np0005629333 nova_compute[244014]: 2026-02-25 13:03:42.376 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:03:42 np0005629333 systemd[1]: var-lib-containers-storage-overlay-4f2b70a08eb0a06013a2b5c7813a414045d833868d693d90ae8d6eb4d9e16b38-merged.mount: Deactivated successfully.
Feb 25 08:03:42 np0005629333 nova_compute[244014]: 2026-02-25 13:03:42.389 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 08:03:42 np0005629333 nova_compute[244014]: 2026-02-25 13:03:42.408 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 08:03:42 np0005629333 nova_compute[244014]: 2026-02-25 13:03:42.408 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:03:42 np0005629333 podman[381617]: 2026-02-25 13:03:42.826377799 +0000 UTC m=+1.169563471 container remove 0418b6f9ce1329cff6138021b8093a2c3b2929245dbd71c2461b70b8cb999b76 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Feb 25 08:03:42 np0005629333 systemd[1]: libpod-conmon-0418b6f9ce1329cff6138021b8093a2c3b2929245dbd71c2461b70b8cb999b76.scope: Deactivated successfully.
Feb 25 08:03:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:03:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:03:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:03:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:03:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007758747128651013 of space, bias 1.0, pg target 0.2327624138595304 quantized to 32 (current 32)
Feb 25 08:03:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:03:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:03:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:03:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:03:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:03:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024944784865292705 of space, bias 1.0, pg target 0.7483435459587812 quantized to 32 (current 32)
Feb 25 08:03:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:03:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4133407414918176e-06 of space, bias 4.0, pg target 0.0016960088897901811 quantized to 16 (current 16)
Feb 25 08:03:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:03:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:03:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:03:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:03:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:03:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:03:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:03:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:03:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:03:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 08:03:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2505: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
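Annotation: the pg_autoscaler lines above fit a simple formula: pg target = usage ratio x bias x total target PGs, where the total here is 300 (the three OSDs listed earlier times the default mon_target_pg_per_osd of 100); the result is then quantized to a power of two, and the current pg_num is kept when the change is below the autoscaler's threshold. Reproducing two of the logged lines:

    TOTAL_TARGET_PGS = 3 * 100  # 3 OSDs x mon_target_pg_per_osd (default 100)

    for pool, ratio, bias in [("images", 0.0024944784865292705, 1.0),
                              ("cephfs.cephfs.meta", 1.4133407414918176e-06, 4.0)]:
        print(pool, ratio * bias * TOTAL_TARGET_PGS)
    # images 0.7483435459587812              -> "quantized to 32 (current 32)" above
    # cephfs.cephfs.meta 0.0016960088897901811 -> "quantized to 16 (current 16)" above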
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.295 244018 DEBUG oslo_concurrency.processutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5cd02959-e31f-4dd4-a50d-caafefc56629/disk.config 5cd02959-e31f-4dd4-a50d-caafefc56629_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.295 244018 INFO nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Deleting local config drive /var/lib/nova/instances/5cd02959-e31f-4dd4-a50d-caafefc56629/disk.config because it was imported into RBD.
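Annotation: because the instance's disks live in RBD, the freshly built ISO is imported into the vms pool and the local copy removed. The equivalent standalone steps (the rbd command is copied from the log; the unlink stands in for the "Deleting local config drive" step):

    import os, subprocess

    src = "/var/lib/nova/instances/5cd02959-e31f-4dd4-a50d-caafefc56629/disk.config"
    subprocess.run(
        ["rbd", "import", "--pool", "vms", src,
         "5cd02959-e31f-4dd4-a50d-caafefc56629_disk.config",
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)
    os.unlink(src)  # the local copy is redundant once the image exists in RBD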
Feb 25 08:03:43 np0005629333 kernel: tapb54ec6ac-47: entered promiscuous mode
Feb 25 08:03:43 np0005629333 NetworkManager[49836]: <info>  [1772024623.3365] manager: (tapb54ec6ac-47): new Tun device (/org/freedesktop/NetworkManager/Devices/658)
Feb 25 08:03:43 np0005629333 ovn_controller[147040]: 2026-02-25T13:03:43Z|01593|binding|INFO|Claiming lport b54ec6ac-470f-4041-8bcc-ed69244b5317 for this chassis.
Feb 25 08:03:43 np0005629333 ovn_controller[147040]: 2026-02-25T13:03:43Z|01594|binding|INFO|b54ec6ac-470f-4041-8bcc-ed69244b5317: Claiming fa:16:3e:fa:d1:ab 10.100.0.10
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.338 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.343 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:03:43 np0005629333 ovn_controller[147040]: 2026-02-25T13:03:43Z|01595|binding|INFO|Setting lport b54ec6ac-470f-4041-8bcc-ed69244b5317 up in Southbound
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.345 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:03:43 np0005629333 ovn_controller[147040]: 2026-02-25T13:03:43Z|01596|binding|INFO|Setting lport b54ec6ac-470f-4041-8bcc-ed69244b5317 ovn-installed in OVS
Feb 25 08:03:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:43.345 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:d1:ab 10.100.0.10'], port_security=['fa:16:3e:fa:d1:ab 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5cd02959-e31f-4dd4-a50d-caafefc56629', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60551679-32db-4035-bd76-7d0d38a9d6de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '411a5034-2bd3-44c2-8d69-e8230e12b7ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2a78513-3d45-4660-838e-2ad213a5a7d3, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=b54ec6ac-470f-4041-8bcc-ed69244b5317) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.346 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:03:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:43.347 157129 INFO neutron.agent.ovn.metadata.agent [-] Port b54ec6ac-470f-4041-8bcc-ed69244b5317 in datapath 60551679-32db-4035-bd76-7d0d38a9d6de bound to our chassis
Feb 25 08:03:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:43.349 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60551679-32db-4035-bd76-7d0d38a9d6de
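Annotation: "Provisioning metadata" means the agent plumbs a per-network namespace (named ovnmeta-<network-uuid>, as the privsep 'target' fields further down show) whose tap interface carries both a subnet address and the metadata address 169.254.169.254. A sketch of inspecting that namespace from the host, assuming iproute2 is installed:

    import subprocess

    ns = "ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de"
    # Expected to list 10.100.0.2/28 and 169.254.169.254/32 on tap60551679-31,
    # matching the RTM_NEWADDR privsep replies below.
    subprocess.run(["ip", "netns", "exec", ns, "ip", "addr", "show",
                    "tap60551679-31"], check=True)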
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.349 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.353 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:03:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:43.365 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6e2f9a24-14c9-4803-8ac8-e8e4da483464]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 08:03:43 np0005629333 podman[381780]: 2026-02-25 13:03:43.370489572 +0000 UTC m=+0.052131711 container create 9b684dc8cf16ab066afaa09e516139fdfad2b4d228826540c64073fbed051b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_fermat, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 08:03:43 np0005629333 systemd-machined[210048]: New machine qemu-185-instance-00000097.
Feb 25 08:03:43 np0005629333 systemd-udevd[381809]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 08:03:43 np0005629333 systemd[1]: Started Virtual Machine qemu-185-instance-00000097.
Feb 25 08:03:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:43.385 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[8b6906fb-9c85-4aef-acec-b3bdb14f0af8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 08:03:43 np0005629333 NetworkManager[49836]: <info>  [1772024623.3873] device (tapb54ec6ac-47): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 08:03:43 np0005629333 NetworkManager[49836]: <info>  [1772024623.3899] device (tapb54ec6ac-47): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 08:03:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:43.388 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4789409c-02ef-4077-80e5-b66903f886e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 08:03:43 np0005629333 systemd[1]: Started libpod-conmon-9b684dc8cf16ab066afaa09e516139fdfad2b4d228826540c64073fbed051b12.scope.
Feb 25 08:03:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:43.413 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[fe28eaa4-d50f-4068-8bb7-11c1a2b5d4f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 08:03:43 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:03:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:43.432 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1bc5c218-a428-49e7-9c0e-4f3f0ace74ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60551679-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:a9:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655912, 'reachable_time': 23194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381819, 'error': None, 'target': 'ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 08:03:43 np0005629333 podman[381780]: 2026-02-25 13:03:43.350002355 +0000 UTC m=+0.031644504 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:03:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:43.454 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[6fb21b62-fd84-4660-9618-8c58a401ae7d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap60551679-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 655920, 'tstamp': 655920}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381827, 'error': None, 'target': 'ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap60551679-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 655922, 'tstamp': 655922}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381827, 'error': None, 'target': 'ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 25 08:03:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:43.456 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60551679-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.458 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.458 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:03:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:43.459 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60551679-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 08:03:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:43.459 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 08:03:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:43.459 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60551679-30, col_values=(('external_ids', {'iface-id': '5108b2ec-2ba2-4c9d-af59-95b6cba805a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 08:03:43 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:43.460 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.522 244018 DEBUG nova.compute.manager [req-ffa78798-f3c0-4725-b721-e2c593083982 req-d400de03-8507-47a4-b606-4404cb9839cc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Received event network-vif-plugged-b54ec6ac-470f-4041-8bcc-ed69244b5317 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.522 244018 DEBUG oslo_concurrency.lockutils [req-ffa78798-f3c0-4725-b721-e2c593083982 req-d400de03-8507-47a4-b606-4404cb9839cc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.523 244018 DEBUG oslo_concurrency.lockutils [req-ffa78798-f3c0-4725-b721-e2c593083982 req-d400de03-8507-47a4-b606-4404cb9839cc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.523 244018 DEBUG oslo_concurrency.lockutils [req-ffa78798-f3c0-4725-b721-e2c593083982 req-d400de03-8507-47a4-b606-4404cb9839cc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.524 244018 DEBUG nova.compute.manager [req-ffa78798-f3c0-4725-b721-e2c593083982 req-d400de03-8507-47a4-b606-4404cb9839cc 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Processing event network-vif-plugged-b54ec6ac-470f-4041-8bcc-ed69244b5317 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
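Annotation: the lock/pop sequence above is the receiving half of Nova's external-event handshake: spawn registers a waiter for network-vif-plugged, Neutron posts the event once OVN binds the port, and the waiter is released (the "Instance event wait completed in 0 seconds" line below). A minimal sketch of the pattern with plain threading primitives; these are not Nova's actual classes:

    import threading

    waiters = {}  # (event_name, tag) -> threading.Event

    def prepare_for_event(name, tag):       # done by spawn before plugging the VIF
        waiters[(name, tag)] = threading.Event()

    def external_instance_event(name, tag):  # driven by the Neutron->Nova callback
        waiters[(name, tag)].set()

    prepare_for_event("network-vif-plugged", "b54ec6ac-470f-4041-8bcc-ed69244b5317")
    external_instance_event("network-vif-plugged",
                            "b54ec6ac-470f-4041-8bcc-ed69244b5317")
    assert waiters[("network-vif-plugged",
                    "b54ec6ac-470f-4041-8bcc-ed69244b5317")].wait(timeout=300)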
Feb 25 08:03:43 np0005629333 podman[381780]: 2026-02-25 13:03:43.617758805 +0000 UTC m=+0.299400984 container init 9b684dc8cf16ab066afaa09e516139fdfad2b4d228826540c64073fbed051b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:03:43 np0005629333 podman[381780]: 2026-02-25 13:03:43.628759385 +0000 UTC m=+0.310401534 container start 9b684dc8cf16ab066afaa09e516139fdfad2b4d228826540c64073fbed051b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_fermat, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:03:43 np0005629333 systemd[1]: libpod-9b684dc8cf16ab066afaa09e516139fdfad2b4d228826540c64073fbed051b12.scope: Deactivated successfully.
Feb 25 08:03:43 np0005629333 blissful_fermat[381817]: 167 167
Feb 25 08:03:43 np0005629333 conmon[381817]: conmon 9b684dc8cf16ab066afa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9b684dc8cf16ab066afaa09e516139fdfad2b4d228826540c64073fbed051b12.scope/container/memory.events
Feb 25 08:03:43 np0005629333 podman[381780]: 2026-02-25 13:03:43.744064417 +0000 UTC m=+0.425706576 container attach 9b684dc8cf16ab066afaa09e516139fdfad2b4d228826540c64073fbed051b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:03:43 np0005629333 podman[381780]: 2026-02-25 13:03:43.744814788 +0000 UTC m=+0.426456927 container died 9b684dc8cf16ab066afaa09e516139fdfad2b4d228826540c64073fbed051b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 08:03:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.905 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024623.904618, 5cd02959-e31f-4dd4-a50d-caafefc56629 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.905 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] VM Started (Lifecycle Event)#033[00m
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.908 244018 DEBUG nova.compute.manager [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.913 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.917 244018 INFO nova.virt.libvirt.driver [-] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Instance spawned successfully.#033[00m
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.917 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.939 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:03:43 np0005629333 systemd[1]: var-lib-containers-storage-overlay-056b89cb761559a247c1da86d852e8f03a8dff30dfe92586d6c636302ad55c5c-merged.mount: Deactivated successfully.
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.948 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
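Editor's note: "current DB power_state: 0, VM power_state: 1" in the record above compares nova's stored state with what libvirt reports for the guest. The numeric codes come from nova.compute.power_state; paraphrased here as a standalone table rather than imported:

    # Power-state codes used in the sync message above (paraphrase of
    # nova.compute.power_state; 0 is the pre-sync placeholder, 1 is RUNNING).
    POWER_STATE = {
        0: 'NOSTATE',
        1: 'RUNNING',
        3: 'PAUSED',
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }
    print(POWER_STATE[0], '->', POWER_STATE[1])  # building: NOSTATE -> RUNNING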
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.954 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.954 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.954 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.955 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.955 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.956 244018 DEBUG nova.virt.libvirt.driver [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.981 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.981 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024623.9048011, 5cd02959-e31f-4dd4-a50d-caafefc56629 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:03:43 np0005629333 nova_compute[244014]: 2026-02-25 13:03:43.981 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] VM Paused (Lifecycle Event)#033[00m
Feb 25 08:03:44 np0005629333 nova_compute[244014]: 2026-02-25 13:03:44.012 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:03:44 np0005629333 nova_compute[244014]: 2026-02-25 13:03:44.015 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024623.9116669, 5cd02959-e31f-4dd4-a50d-caafefc56629 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:03:44 np0005629333 nova_compute[244014]: 2026-02-25 13:03:44.015 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] VM Resumed (Lifecycle Event)#033[00m
Feb 25 08:03:44 np0005629333 nova_compute[244014]: 2026-02-25 13:03:44.026 244018 INFO nova.compute.manager [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Took 6.47 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 08:03:44 np0005629333 nova_compute[244014]: 2026-02-25 13:03:44.026 244018 DEBUG nova.compute.manager [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:03:44 np0005629333 nova_compute[244014]: 2026-02-25 13:03:44.033 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:03:44 np0005629333 nova_compute[244014]: 2026-02-25 13:03:44.036 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 08:03:44 np0005629333 nova_compute[244014]: 2026-02-25 13:03:44.058 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 08:03:44 np0005629333 nova_compute[244014]: 2026-02-25 13:03:44.087 244018 INFO nova.compute.manager [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Took 7.55 seconds to build instance.#033[00m
Feb 25 08:03:44 np0005629333 nova_compute[244014]: 2026-02-25 13:03:44.101 244018 DEBUG oslo_concurrency.lockutils [None req-39dfa206-2315-499b-8ce9-fefcfdba6b49 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
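Editor's note: the spawn took 6.47 s, the whole build 7.55 s, and the build lock was held for 7.630 s — the small remainder is bookkeeping around the locked section. A hypothetical helper for harvesting these figures from a saved journal like this one; the regex targets only the two INFO formats above:

    import re

    PAT = re.compile(
        r"\[instance: ([0-9a-f-]+)\] Took ([\d.]+) seconds to (spawn|build)")

    def build_times(journal_path):
        # Yields (instance_uuid, phase, seconds) for every matching record.
        with open(journal_path) as fh:
            for line in fh:
                m = PAT.search(line)
                if m:
                    yield m.group(1), m.group(3), float(m.group(2))

    # e.g. ('5cd02959-...', 'spawn', 6.47) then ('5cd02959-...', 'build', 7.55)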
Feb 25 08:03:44 np0005629333 podman[381780]: 2026-02-25 13:03:44.292110581 +0000 UTC m=+0.973752720 container remove 9b684dc8cf16ab066afaa09e516139fdfad2b4d228826540c64073fbed051b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_fermat, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:03:44 np0005629333 systemd[1]: libpod-conmon-9b684dc8cf16ab066afaa09e516139fdfad2b4d228826540c64073fbed051b12.scope: Deactivated successfully.
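Editor's note: the blissful_fermat records above trace one disposable container through init, start, attach, died and remove inside roughly a second — the cadence of a cephadm host probe run via podman. A sketch of that pattern; the image digest is from the log, but the stat command is an assumption (the "167 167" the container printed matches ceph's uid:gid, so a probe of this shape is plausible, not confirmed):

    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86')

    # One-shot container: podman creates, starts, attaches, reaps and removes
    # it, producing exactly the init/start/attach/died/remove journal records
    # seen above. The stat probe is illustrative, not taken from the log.
    out = subprocess.run(
        ['podman', 'run', '--rm', '--entrypoint', 'stat', IMAGE,
         '-c', '%u %g', '/var/lib/ceph'],
        capture_output=True, text=True, check=True).stdout.strip()
    print(out)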
Feb 25 08:03:44 np0005629333 podman[381891]: 2026-02-25 13:03:44.457618378 +0000 UTC m=+0.034671028 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:03:44 np0005629333 podman[381891]: 2026-02-25 13:03:44.572811117 +0000 UTC m=+0.149863747 container create 9d5a09a904518c4481f1f962c3894c633c8541fa33ff96d7abb8d01b3c8ae293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wiles, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:03:44 np0005629333 systemd[1]: Started libpod-conmon-9d5a09a904518c4481f1f962c3894c633c8541fa33ff96d7abb8d01b3c8ae293.scope.
Feb 25 08:03:44 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:03:44 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/129bcbd2b93463b1a4dd488497ba9793e008a8d94c09ef78eead4e906ce7828f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:03:44 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/129bcbd2b93463b1a4dd488497ba9793e008a8d94c09ef78eead4e906ce7828f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:03:44 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/129bcbd2b93463b1a4dd488497ba9793e008a8d94c09ef78eead4e906ce7828f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:03:44 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/129bcbd2b93463b1a4dd488497ba9793e008a8d94c09ef78eead4e906ce7828f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
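Editor's note: the four kernel records above are the stock warning for xfs filesystems created without the bigtime feature: inode timestamps are signed 32-bit seconds, so they run out at 0x7fffffff. The conversion, as a quick check:

    from datetime import datetime, timezone

    # 0x7fffffff is the classic signed 32-bit time_t ceiling.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00, matching "supports timestamps until 2038"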
Feb 25 08:03:44 np0005629333 podman[381891]: 2026-02-25 13:03:44.898516461 +0000 UTC m=+0.475569131 container init 9d5a09a904518c4481f1f962c3894c633c8541fa33ff96d7abb8d01b3c8ae293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wiles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:03:44 np0005629333 podman[381891]: 2026-02-25 13:03:44.904817909 +0000 UTC m=+0.481870529 container start 9d5a09a904518c4481f1f962c3894c633c8541fa33ff96d7abb8d01b3c8ae293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wiles, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 08:03:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2506: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 08:03:44 np0005629333 podman[381891]: 2026-02-25 13:03:44.981863101 +0000 UTC m=+0.558915831 container attach 9d5a09a904518c4481f1f962c3894c633c8541fa33ff96d7abb8d01b3c8ae293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wiles, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:03:45 np0005629333 nova_compute[244014]: 2026-02-25 13:03:45.403 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:03:45 np0005629333 lvm[381983]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:03:45 np0005629333 lvm[381983]: VG ceph_vg0 finished
Feb 25 08:03:45 np0005629333 lvm[381985]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:03:45 np0005629333 lvm[381985]: VG ceph_vg1 finished
Feb 25 08:03:45 np0005629333 lvm[381987]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:03:45 np0005629333 lvm[381987]: VG ceph_vg2 finished
Feb 25 08:03:45 np0005629333 nova_compute[244014]: 2026-02-25 13:03:45.590 244018 DEBUG nova.compute.manager [req-9313ea9e-961c-45f6-88b7-0c4ca8189a84 req-10dbf1a4-5832-4c3d-b731-0dbeeb9f94a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Received event network-vif-plugged-b54ec6ac-470f-4041-8bcc-ed69244b5317 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:03:45 np0005629333 nova_compute[244014]: 2026-02-25 13:03:45.591 244018 DEBUG oslo_concurrency.lockutils [req-9313ea9e-961c-45f6-88b7-0c4ca8189a84 req-10dbf1a4-5832-4c3d-b731-0dbeeb9f94a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:03:45 np0005629333 nova_compute[244014]: 2026-02-25 13:03:45.591 244018 DEBUG oslo_concurrency.lockutils [req-9313ea9e-961c-45f6-88b7-0c4ca8189a84 req-10dbf1a4-5832-4c3d-b731-0dbeeb9f94a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:03:45 np0005629333 nova_compute[244014]: 2026-02-25 13:03:45.591 244018 DEBUG oslo_concurrency.lockutils [req-9313ea9e-961c-45f6-88b7-0c4ca8189a84 req-10dbf1a4-5832-4c3d-b731-0dbeeb9f94a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:03:45 np0005629333 nova_compute[244014]: 2026-02-25 13:03:45.592 244018 DEBUG nova.compute.manager [req-9313ea9e-961c-45f6-88b7-0c4ca8189a84 req-10dbf1a4-5832-4c3d-b731-0dbeeb9f94a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] No waiting events found dispatching network-vif-plugged-b54ec6ac-470f-4041-8bcc-ed69244b5317 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 08:03:45 np0005629333 nova_compute[244014]: 2026-02-25 13:03:45.592 244018 WARNING nova.compute.manager [req-9313ea9e-961c-45f6-88b7-0c4ca8189a84 req-10dbf1a4-5832-4c3d-b731-0dbeeb9f94a8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Received unexpected event network-vif-plugged-b54ec6ac-470f-4041-8bcc-ed69244b5317 for instance with vm_state active and task_state None.#033[00m
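Editor's note: this WARNING is benign here. The first delivery of network-vif-plugged (08:03:43) released the waiter that spawn had registered, so this second delivery (08:03:45) pops an empty queue while the instance is already active with no task. A paraphrase of the wait-then-plug pattern behind the earlier, successful path (names approximate nova's virtapi helper, not exact signatures):

    # Sketch only: nova registers the events it expects, plugs the VIF, and
    # blocks until neutron posts network-vif-plugged; a re-delivered event with
    # no registered waiter is logged as "unexpected", as above.
    events = [('network-vif-plugged', 'b54ec6ac-470f-4041-8bcc-ed69244b5317')]

    def spawn_with_vif_wait(virtapi, instance, plug_vifs):
        with virtapi.wait_for_instance_event(instance, events, deadline=300):
            plug_vifs(instance)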
Feb 25 08:03:45 np0005629333 naughty_wiles[381908]: {}
Feb 25 08:03:45 np0005629333 systemd[1]: libpod-9d5a09a904518c4481f1f962c3894c633c8541fa33ff96d7abb8d01b3c8ae293.scope: Deactivated successfully.
Feb 25 08:03:45 np0005629333 podman[381891]: 2026-02-25 13:03:45.697146122 +0000 UTC m=+1.274198782 container died 9d5a09a904518c4481f1f962c3894c633c8541fa33ff96d7abb8d01b3c8ae293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wiles, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 08:03:45 np0005629333 systemd[1]: libpod-9d5a09a904518c4481f1f962c3894c633c8541fa33ff96d7abb8d01b3c8ae293.scope: Consumed 1.048s CPU time.
Feb 25 08:03:45 np0005629333 nova_compute[244014]: 2026-02-25 13:03:45.703 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:45 np0005629333 systemd[1]: var-lib-containers-storage-overlay-129bcbd2b93463b1a4dd488497ba9793e008a8d94c09ef78eead4e906ce7828f-merged.mount: Deactivated successfully.
Feb 25 08:03:45 np0005629333 nova_compute[244014]: 2026-02-25 13:03:45.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:03:45 np0005629333 podman[381891]: 2026-02-25 13:03:45.905760654 +0000 UTC m=+1.482813314 container remove 9d5a09a904518c4481f1f962c3894c633c8541fa33ff96d7abb8d01b3c8ae293 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wiles, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:03:45 np0005629333 systemd[1]: libpod-conmon-9d5a09a904518c4481f1f962c3894c633c8541fa33ff96d7abb8d01b3c8ae293.scope: Deactivated successfully.
Feb 25 08:03:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:03:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:03:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:03:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:03:46 np0005629333 nova_compute[244014]: 2026-02-25 13:03:46.549 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:46 np0005629333 nova_compute[244014]: 2026-02-25 13:03:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:03:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2507: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Feb 25 08:03:47 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:03:47 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:03:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:03:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1613258657' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:03:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:03:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1613258657' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:03:47 np0005629333 nova_compute[244014]: 2026-02-25 13:03:47.670 244018 DEBUG nova.compute.manager [req-e151b66e-16b2-4622-bcb7-7a8ecfb489da req-0cf44f33-3e2b-4580-bb4a-7c68d390d131 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Received event network-changed-b54ec6ac-470f-4041-8bcc-ed69244b5317 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:03:47 np0005629333 nova_compute[244014]: 2026-02-25 13:03:47.671 244018 DEBUG nova.compute.manager [req-e151b66e-16b2-4622-bcb7-7a8ecfb489da req-0cf44f33-3e2b-4580-bb4a-7c68d390d131 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Refreshing instance network info cache due to event network-changed-b54ec6ac-470f-4041-8bcc-ed69244b5317. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 08:03:47 np0005629333 nova_compute[244014]: 2026-02-25 13:03:47.671 244018 DEBUG oslo_concurrency.lockutils [req-e151b66e-16b2-4622-bcb7-7a8ecfb489da req-0cf44f33-3e2b-4580-bb4a-7c68d390d131 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-5cd02959-e31f-4dd4-a50d-caafefc56629" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 08:03:47 np0005629333 nova_compute[244014]: 2026-02-25 13:03:47.671 244018 DEBUG oslo_concurrency.lockutils [req-e151b66e-16b2-4622-bcb7-7a8ecfb489da req-0cf44f33-3e2b-4580-bb4a-7c68d390d131 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-5cd02959-e31f-4dd4-a50d-caafefc56629" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 08:03:47 np0005629333 nova_compute[244014]: 2026-02-25 13:03:47.672 244018 DEBUG nova.network.neutron [req-e151b66e-16b2-4622-bcb7-7a8ecfb489da req-0cf44f33-3e2b-4580-bb4a-7c68d390d131 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Refreshing network info cache for port b54ec6ac-470f-4041-8bcc-ed69244b5317 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 08:03:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:03:48 np0005629333 nova_compute[244014]: 2026-02-25 13:03:48.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:03:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2508: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 25 08:03:49 np0005629333 nova_compute[244014]: 2026-02-25 13:03:49.125 244018 DEBUG nova.network.neutron [req-e151b66e-16b2-4622-bcb7-7a8ecfb489da req-0cf44f33-3e2b-4580-bb4a-7c68d390d131 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Updated VIF entry in instance network info cache for port b54ec6ac-470f-4041-8bcc-ed69244b5317. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 08:03:49 np0005629333 nova_compute[244014]: 2026-02-25 13:03:49.126 244018 DEBUG nova.network.neutron [req-e151b66e-16b2-4622-bcb7-7a8ecfb489da req-0cf44f33-3e2b-4580-bb4a-7c68d390d131 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Updating instance_info_cache with network_info: [{"id": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "address": "fa:16:3e:fa:d1:ab", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54ec6ac-47", "ovs_interfaceid": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:03:49 np0005629333 nova_compute[244014]: 2026-02-25 13:03:49.147 244018 DEBUG oslo_concurrency.lockutils [req-e151b66e-16b2-4622-bcb7-7a8ecfb489da req-0cf44f33-3e2b-4580-bb4a-7c68d390d131 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-5cd02959-e31f-4dd4-a50d-caafefc56629" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
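Editor's note: the "Updating instance_info_cache with network_info: [...]" payload above is plain JSON, so the addresses that the later DHCP and port-binding records refer to can be pulled straight out of it. A small extractor, with field paths taken from the blob itself:

    import json

    def fixed_ips(network_info_json):
        # Yields (mac, fixed_ip) pairs from a nova network_info dump.
        for vif in json.loads(network_info_json):
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    yield vif['address'], ip['address']

    # For the cache update above: ('fa:16:3e:fa:d1:ab', '10.100.0.10')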
Feb 25 08:03:50 np0005629333 nova_compute[244014]: 2026-02-25 13:03:50.706 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2509: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 25 08:03:51 np0005629333 nova_compute[244014]: 2026-02-25 13:03:51.551 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:52 np0005629333 podman[382028]: 2026-02-25 13:03:52.734859146 +0000 UTC m=+0.083690261 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 25 08:03:52 np0005629333 podman[382029]: 2026-02-25 13:03:52.751783833 +0000 UTC m=+0.091533672 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 08:03:52 np0005629333 nova_compute[244014]: 2026-02-25 13:03:52.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:03:52 np0005629333 nova_compute[244014]: 2026-02-25 13:03:52.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:03:52 np0005629333 nova_compute[244014]: 2026-02-25 13:03:52.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
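Editor's note: the req-285bc9e4 DEBUG records are oslo.service's periodic-task loop walking ComputeManager's timers; _reclaim_queued_deletes exits immediately because reclaim_instance_interval is not positive. A minimal sketch of that machinery (the spacing and the guard value are illustrative stand-ins for nova's configuration):

    from oslo_service import periodic_task

    RECLAIM_INSTANCE_INTERVAL = 0  # stand-in for CONF.reclaim_instance_interval

    class Tasks(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            if RECLAIM_INSTANCE_INTERVAL <= 0:
                return  # logged as "skipping..." in the record above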
Feb 25 08:03:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2510: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Feb 25 08:03:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:03:54 np0005629333 nova_compute[244014]: 2026-02-25 13:03:54.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:03:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2511: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 73 op/s
Feb 25 08:03:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:55.040 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:03:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:55.041 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:03:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:03:55.042 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:03:55 np0005629333 nova_compute[244014]: 2026-02-25 13:03:55.708 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:56 np0005629333 nova_compute[244014]: 2026-02-25 13:03:56.554 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:03:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2512: 305 pgs: 305 active+clean; 279 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 73 op/s
Feb 25 08:03:56 np0005629333 ovn_controller[147040]: 2026-02-25T13:03:56Z|00210|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fa:d1:ab 10.100.0.10
Feb 25 08:03:56 np0005629333 ovn_controller[147040]: 2026-02-25T13:03:56Z|00211|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fa:d1:ab 10.100.0.10
Feb 25 08:03:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:03:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2513: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Feb 25 08:04:00 np0005629333 nova_compute[244014]: 2026-02-25 13:04:00.710 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2514: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 08:04:01 np0005629333 nova_compute[244014]: 2026-02-25 13:04:01.556 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:04:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:04:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:04:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:04:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:04:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:04:02 np0005629333 nova_compute[244014]: 2026-02-25 13:04:02.219 244018 DEBUG oslo_concurrency.lockutils [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "5cd02959-e31f-4dd4-a50d-caafefc56629" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:04:02 np0005629333 nova_compute[244014]: 2026-02-25 13:04:02.220 244018 DEBUG oslo_concurrency.lockutils [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:04:02 np0005629333 nova_compute[244014]: 2026-02-25 13:04:02.220 244018 DEBUG oslo_concurrency.lockutils [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:04:02 np0005629333 nova_compute[244014]: 2026-02-25 13:04:02.221 244018 DEBUG oslo_concurrency.lockutils [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:04:02 np0005629333 nova_compute[244014]: 2026-02-25 13:04:02.221 244018 DEBUG oslo_concurrency.lockutils [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:04:02 np0005629333 nova_compute[244014]: 2026-02-25 13:04:02.223 244018 INFO nova.compute.manager [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Terminating instance#033[00m
Feb 25 08:04:02 np0005629333 nova_compute[244014]: 2026-02-25 13:04:02.225 244018 DEBUG nova.compute.manager [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 08:04:02 np0005629333 kernel: tapb54ec6ac-47 (unregistering): left promiscuous mode
Feb 25 08:04:02 np0005629333 NetworkManager[49836]: <info>  [1772024642.3886] device (tapb54ec6ac-47): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 08:04:02 np0005629333 nova_compute[244014]: 2026-02-25 13:04:02.396 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:02 np0005629333 ovn_controller[147040]: 2026-02-25T13:04:02Z|01597|binding|INFO|Releasing lport b54ec6ac-470f-4041-8bcc-ed69244b5317 from this chassis (sb_readonly=0)
Feb 25 08:04:02 np0005629333 ovn_controller[147040]: 2026-02-25T13:04:02Z|01598|binding|INFO|Setting lport b54ec6ac-470f-4041-8bcc-ed69244b5317 down in Southbound
Feb 25 08:04:02 np0005629333 ovn_controller[147040]: 2026-02-25T13:04:02Z|01599|binding|INFO|Removing iface tapb54ec6ac-47 ovn-installed in OVS
Feb 25 08:04:02 np0005629333 nova_compute[244014]: 2026-02-25 13:04:02.399 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:02.405 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:d1:ab 10.100.0.10'], port_security=['fa:16:3e:fa:d1:ab 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5cd02959-e31f-4dd4-a50d-caafefc56629', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60551679-32db-4035-bd76-7d0d38a9d6de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '5', 'neutron:security_group_ids': '9c3f9f1f-262b-43f9-8a65-6c1e7c9534cc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2a78513-3d45-4660-838e-2ad213a5a7d3, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=b54ec6ac-470f-4041-8bcc-ed69244b5317) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 08:04:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:02.408 157129 INFO neutron.agent.ovn.metadata.agent [-] Port b54ec6ac-470f-4041-8bcc-ed69244b5317 in datapath 60551679-32db-4035-bd76-7d0d38a9d6de unbound from our chassis#033[00m
Feb 25 08:04:02 np0005629333 nova_compute[244014]: 2026-02-25 13:04:02.409 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:02.411 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 60551679-32db-4035-bd76-7d0d38a9d6de#033[00m
Feb 25 08:04:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:02.427 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[078a6907-a734-408f-b9b7-bae46ac603b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:04:02 np0005629333 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000097.scope: Deactivated successfully.
Feb 25 08:04:02 np0005629333 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000097.scope: Consumed 12.498s CPU time.
Feb 25 08:04:02 np0005629333 systemd-machined[210048]: Machine qemu-185-instance-00000097 terminated.
Feb 25 08:04:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:02.457 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[5a87bccd-90e4-4f17-93da-e93a9b0bcd18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:04:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:02.462 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e853315d-9846-483d-92fb-23c284b62351]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:04:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:02.493 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[ba850934-c2e3-4221-82bb-cfb53c396114]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:04:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:02.513 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[acd1a0c9-6cd6-4419-8fa5-8ef4dbf2e9b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap60551679-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9b:a9:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655912, 'reachable_time': 23194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382086, 'error': None, 'target': 'ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:04:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:02.532 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[bb977c6c-930f-41cd-9e5f-8c11e9fefb58]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap60551679-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 655920, 'tstamp': 655920}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382087, 'error': None, 'target': 'ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap60551679-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 655922, 'tstamp': 655922}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382087, 'error': None, 'target': 'ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:04:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:02.534 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60551679-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:04:02 np0005629333 nova_compute[244014]: 2026-02-25 13:04:02.536 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:02 np0005629333 nova_compute[244014]: 2026-02-25 13:04:02.540 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:02.541 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60551679-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:04:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:02.542 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 08:04:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:02.543 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap60551679-30, col_values=(('external_ids', {'iface-id': '5108b2ec-2ba2-4c9d-af59-95b6cba805a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:04:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:02.543 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 08:04:02 np0005629333 nova_compute[244014]: 2026-02-25 13:04:02.650 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:02 np0005629333 nova_compute[244014]: 2026-02-25 13:04:02.657 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:02 np0005629333 nova_compute[244014]: 2026-02-25 13:04:02.665 244018 INFO nova.virt.libvirt.driver [-] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Instance destroyed successfully.#033[00m
Feb 25 08:04:02 np0005629333 nova_compute[244014]: 2026-02-25 13:04:02.666 244018 DEBUG nova.objects.instance [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'resources' on Instance uuid 5cd02959-e31f-4dd4-a50d-caafefc56629 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 08:04:02 np0005629333 nova_compute[244014]: 2026-02-25 13:04:02.680 244018 DEBUG nova.virt.libvirt.vif [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T13:03:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-1-1820301438',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-gen-1-1820301438',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-gen',id=151,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbQ2uY7T8R5wVQnvFbyA1segFh0PnNufgJbxt3APzZeuF0vcSW1YfpqbsmGbwvIughYn7yKsWGZo7qrK+BPyfIKQ42RLyvp7znat3TJdsHJA5kxzzkNZRLtYgFDUNF7bA==',key_name='tempest-TestSecurityGroupsBasicOps-1917346637',keypairs=<?>,launch_index=0,launched_at=2026-02-25T13:03:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-pztet3gy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T13:03:44Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=5cd02959-e31f-4dd4-a50d-caafefc56629,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "address": "fa:16:3e:fa:d1:ab", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54ec6ac-47", "ovs_interfaceid": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 08:04:02 np0005629333 nova_compute[244014]: 2026-02-25 13:04:02.681 244018 DEBUG nova.network.os_vif_util [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "address": "fa:16:3e:fa:d1:ab", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54ec6ac-47", "ovs_interfaceid": "b54ec6ac-470f-4041-8bcc-ed69244b5317", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 08:04:02 np0005629333 nova_compute[244014]: 2026-02-25 13:04:02.683 244018 DEBUG nova.network.os_vif_util [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fa:d1:ab,bridge_name='br-int',has_traffic_filtering=True,id=b54ec6ac-470f-4041-8bcc-ed69244b5317,network=Network(60551679-32db-4035-bd76-7d0d38a9d6de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb54ec6ac-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 08:04:02 np0005629333 nova_compute[244014]: 2026-02-25 13:04:02.684 244018 DEBUG os_vif [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fa:d1:ab,bridge_name='br-int',has_traffic_filtering=True,id=b54ec6ac-470f-4041-8bcc-ed69244b5317,network=Network(60551679-32db-4035-bd76-7d0d38a9d6de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb54ec6ac-47') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 08:04:02 np0005629333 nova_compute[244014]: 2026-02-25 13:04:02.687 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:02 np0005629333 nova_compute[244014]: 2026-02-25 13:04:02.687 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb54ec6ac-47, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:04:02 np0005629333 nova_compute[244014]: 2026-02-25 13:04:02.691 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:02 np0005629333 nova_compute[244014]: 2026-02-25 13:04:02.694 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 08:04:02 np0005629333 nova_compute[244014]: 2026-02-25 13:04:02.696 244018 INFO os_vif [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fa:d1:ab,bridge_name='br-int',has_traffic_filtering=True,id=b54ec6ac-470f-4041-8bcc-ed69244b5317,network=Network(60551679-32db-4035-bd76-7d0d38a9d6de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb54ec6ac-47')#033[00m
Feb 25 08:04:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2515: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 08:04:03 np0005629333 nova_compute[244014]: 2026-02-25 13:04:03.282 244018 DEBUG nova.compute.manager [req-138c1e22-448d-4558-8f23-ee6b0c09cb65 req-36463188-d059-476a-a183-e0f67a413954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Received event network-vif-unplugged-b54ec6ac-470f-4041-8bcc-ed69244b5317 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:04:03 np0005629333 nova_compute[244014]: 2026-02-25 13:04:03.282 244018 DEBUG oslo_concurrency.lockutils [req-138c1e22-448d-4558-8f23-ee6b0c09cb65 req-36463188-d059-476a-a183-e0f67a413954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:04:03 np0005629333 nova_compute[244014]: 2026-02-25 13:04:03.283 244018 DEBUG oslo_concurrency.lockutils [req-138c1e22-448d-4558-8f23-ee6b0c09cb65 req-36463188-d059-476a-a183-e0f67a413954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:04:03 np0005629333 nova_compute[244014]: 2026-02-25 13:04:03.283 244018 DEBUG oslo_concurrency.lockutils [req-138c1e22-448d-4558-8f23-ee6b0c09cb65 req-36463188-d059-476a-a183-e0f67a413954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:04:03 np0005629333 nova_compute[244014]: 2026-02-25 13:04:03.283 244018 DEBUG nova.compute.manager [req-138c1e22-448d-4558-8f23-ee6b0c09cb65 req-36463188-d059-476a-a183-e0f67a413954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] No waiting events found dispatching network-vif-unplugged-b54ec6ac-470f-4041-8bcc-ed69244b5317 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 08:04:03 np0005629333 nova_compute[244014]: 2026-02-25 13:04:03.284 244018 DEBUG nova.compute.manager [req-138c1e22-448d-4558-8f23-ee6b0c09cb65 req-36463188-d059-476a-a183-e0f67a413954 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Received event network-vif-unplugged-b54ec6ac-470f-4041-8bcc-ed69244b5317 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 08:04:03 np0005629333 nova_compute[244014]: 2026-02-25 13:04:03.386 244018 INFO nova.virt.libvirt.driver [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Deleting instance files /var/lib/nova/instances/5cd02959-e31f-4dd4-a50d-caafefc56629_del#033[00m
Feb 25 08:04:03 np0005629333 nova_compute[244014]: 2026-02-25 13:04:03.386 244018 INFO nova.virt.libvirt.driver [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Deletion of /var/lib/nova/instances/5cd02959-e31f-4dd4-a50d-caafefc56629_del complete#033[00m
Feb 25 08:04:03 np0005629333 nova_compute[244014]: 2026-02-25 13:04:03.440 244018 INFO nova.compute.manager [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Took 1.22 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 08:04:03 np0005629333 nova_compute[244014]: 2026-02-25 13:04:03.441 244018 DEBUG oslo.service.loopingcall [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 08:04:03 np0005629333 nova_compute[244014]: 2026-02-25 13:04:03.442 244018 DEBUG nova.compute.manager [-] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 08:04:03 np0005629333 nova_compute[244014]: 2026-02-25 13:04:03.442 244018 DEBUG nova.network.neutron [-] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 08:04:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:04:04 np0005629333 nova_compute[244014]: 2026-02-25 13:04:04.851 244018 DEBUG nova.network.neutron [-] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:04:04 np0005629333 nova_compute[244014]: 2026-02-25 13:04:04.869 244018 INFO nova.compute.manager [-] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Took 1.43 seconds to deallocate network for instance.#033[00m
Feb 25 08:04:04 np0005629333 nova_compute[244014]: 2026-02-25 13:04:04.911 244018 DEBUG oslo_concurrency.lockutils [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:04:04 np0005629333 nova_compute[244014]: 2026-02-25 13:04:04.912 244018 DEBUG oslo_concurrency.lockutils [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:04:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2516: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 08:04:04 np0005629333 nova_compute[244014]: 2026-02-25 13:04:04.979 244018 DEBUG oslo_concurrency.processutils [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:04:05 np0005629333 nova_compute[244014]: 2026-02-25 13:04:05.441 244018 DEBUG nova.compute.manager [req-55d6d207-fcb6-4bc7-9be3-7e7d5212b6ab req-61ce9470-aeac-479a-96c0-8c0cfc9556ed 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Received event network-vif-plugged-b54ec6ac-470f-4041-8bcc-ed69244b5317 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:04:05 np0005629333 nova_compute[244014]: 2026-02-25 13:04:05.442 244018 DEBUG oslo_concurrency.lockutils [req-55d6d207-fcb6-4bc7-9be3-7e7d5212b6ab req-61ce9470-aeac-479a-96c0-8c0cfc9556ed 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:04:05 np0005629333 nova_compute[244014]: 2026-02-25 13:04:05.443 244018 DEBUG oslo_concurrency.lockutils [req-55d6d207-fcb6-4bc7-9be3-7e7d5212b6ab req-61ce9470-aeac-479a-96c0-8c0cfc9556ed 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:04:05 np0005629333 nova_compute[244014]: 2026-02-25 13:04:05.443 244018 DEBUG oslo_concurrency.lockutils [req-55d6d207-fcb6-4bc7-9be3-7e7d5212b6ab req-61ce9470-aeac-479a-96c0-8c0cfc9556ed 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:04:05 np0005629333 nova_compute[244014]: 2026-02-25 13:04:05.444 244018 DEBUG nova.compute.manager [req-55d6d207-fcb6-4bc7-9be3-7e7d5212b6ab req-61ce9470-aeac-479a-96c0-8c0cfc9556ed 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] No waiting events found dispatching network-vif-plugged-b54ec6ac-470f-4041-8bcc-ed69244b5317 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 08:04:05 np0005629333 nova_compute[244014]: 2026-02-25 13:04:05.444 244018 WARNING nova.compute.manager [req-55d6d207-fcb6-4bc7-9be3-7e7d5212b6ab req-61ce9470-aeac-479a-96c0-8c0cfc9556ed 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Received unexpected event network-vif-plugged-b54ec6ac-470f-4041-8bcc-ed69244b5317 for instance with vm_state deleted and task_state None.#033[00m
Feb 25 08:04:05 np0005629333 nova_compute[244014]: 2026-02-25 13:04:05.445 244018 DEBUG nova.compute.manager [req-55d6d207-fcb6-4bc7-9be3-7e7d5212b6ab req-61ce9470-aeac-479a-96c0-8c0cfc9556ed 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Received event network-vif-deleted-b54ec6ac-470f-4041-8bcc-ed69244b5317 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:04:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:04:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/82752679' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:04:05 np0005629333 nova_compute[244014]: 2026-02-25 13:04:05.580 244018 DEBUG oslo_concurrency.processutils [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:04:05 np0005629333 nova_compute[244014]: 2026-02-25 13:04:05.586 244018 DEBUG nova.compute.provider_tree [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:04:05 np0005629333 nova_compute[244014]: 2026-02-25 13:04:05.600 244018 DEBUG nova.scheduler.client.report [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:04:05 np0005629333 nova_compute[244014]: 2026-02-25 13:04:05.630 244018 DEBUG oslo_concurrency.lockutils [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:04:05 np0005629333 nova_compute[244014]: 2026-02-25 13:04:05.667 244018 INFO nova.scheduler.client.report [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Deleted allocations for instance 5cd02959-e31f-4dd4-a50d-caafefc56629#033[00m
Feb 25 08:04:05 np0005629333 nova_compute[244014]: 2026-02-25 13:04:05.712 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:05 np0005629333 nova_compute[244014]: 2026-02-25 13:04:05.739 244018 DEBUG oslo_concurrency.lockutils [None req-9d25a0d4-f6d5-4730-afd0-ed68c6c9a20d ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "5cd02959-e31f-4dd4-a50d-caafefc56629" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:04:05 np0005629333 nova_compute[244014]: 2026-02-25 13:04:05.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:04:06 np0005629333 nova_compute[244014]: 2026-02-25 13:04:06.863 244018 DEBUG nova.compute.manager [req-7d3fb5ef-41cf-44e1-bcf5-cc729443e471 req-c9013877-cb5a-46af-8255-c0935a071ad8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Received event network-changed-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:04:06 np0005629333 nova_compute[244014]: 2026-02-25 13:04:06.863 244018 DEBUG nova.compute.manager [req-7d3fb5ef-41cf-44e1-bcf5-cc729443e471 req-c9013877-cb5a-46af-8255-c0935a071ad8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Refreshing instance network info cache due to event network-changed-7d538e82-43ce-4f65-b84b-fb9efe7d35b0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 08:04:06 np0005629333 nova_compute[244014]: 2026-02-25 13:04:06.864 244018 DEBUG oslo_concurrency.lockutils [req-7d3fb5ef-41cf-44e1-bcf5-cc729443e471 req-c9013877-cb5a-46af-8255-c0935a071ad8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 08:04:06 np0005629333 nova_compute[244014]: 2026-02-25 13:04:06.864 244018 DEBUG oslo_concurrency.lockutils [req-7d3fb5ef-41cf-44e1-bcf5-cc729443e471 req-c9013877-cb5a-46af-8255-c0935a071ad8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 08:04:06 np0005629333 nova_compute[244014]: 2026-02-25 13:04:06.865 244018 DEBUG nova.network.neutron [req-7d3fb5ef-41cf-44e1-bcf5-cc729443e471 req-c9013877-cb5a-46af-8255-c0935a071ad8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Refreshing network info cache for port 7d538e82-43ce-4f65-b84b-fb9efe7d35b0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 08:04:06 np0005629333 nova_compute[244014]: 2026-02-25 13:04:06.961 244018 DEBUG oslo_concurrency.lockutils [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:04:06 np0005629333 nova_compute[244014]: 2026-02-25 13:04:06.962 244018 DEBUG oslo_concurrency.lockutils [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:04:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2517: 305 pgs: 305 active+clean; 312 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Feb 25 08:04:06 np0005629333 nova_compute[244014]: 2026-02-25 13:04:06.962 244018 DEBUG oslo_concurrency.lockutils [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:04:06 np0005629333 nova_compute[244014]: 2026-02-25 13:04:06.963 244018 DEBUG oslo_concurrency.lockutils [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:04:06 np0005629333 nova_compute[244014]: 2026-02-25 13:04:06.964 244018 DEBUG oslo_concurrency.lockutils [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:04:06 np0005629333 nova_compute[244014]: 2026-02-25 13:04:06.966 244018 INFO nova.compute.manager [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Terminating instance#033[00m
Feb 25 08:04:06 np0005629333 nova_compute[244014]: 2026-02-25 13:04:06.968 244018 DEBUG nova.compute.manager [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 08:04:07 np0005629333 kernel: tap7d538e82-43 (unregistering): left promiscuous mode
Feb 25 08:04:07 np0005629333 NetworkManager[49836]: <info>  [1772024647.0294] device (tap7d538e82-43): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 08:04:07 np0005629333 ovn_controller[147040]: 2026-02-25T13:04:07Z|01600|binding|INFO|Releasing lport 7d538e82-43ce-4f65-b84b-fb9efe7d35b0 from this chassis (sb_readonly=0)
Feb 25 08:04:07 np0005629333 ovn_controller[147040]: 2026-02-25T13:04:07Z|01601|binding|INFO|Setting lport 7d538e82-43ce-4f65-b84b-fb9efe7d35b0 down in Southbound
Feb 25 08:04:07 np0005629333 ovn_controller[147040]: 2026-02-25T13:04:07Z|01602|binding|INFO|Removing iface tap7d538e82-43 ovn-installed in OVS
Feb 25 08:04:07 np0005629333 nova_compute[244014]: 2026-02-25 13:04:07.033 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:07 np0005629333 nova_compute[244014]: 2026-02-25 13:04:07.043 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:07.045 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:bc:b9 10.100.0.6'], port_security=['fa:16:3e:1a:bc:b9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8c7de3d4-fa84-4c1c-9e61-e3b610cb177d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60551679-32db-4035-bd76-7d0d38a9d6de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b9699483122f465084e3147e4904d13d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '411a5034-2bd3-44c2-8d69-e8230e12b7ba 45df576e-9e98-4f75-84d8-faaa81c292fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2a78513-3d45-4660-838e-2ad213a5a7d3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=7d538e82-43ce-4f65-b84b-fb9efe7d35b0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 08:04:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:07.049 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 7d538e82-43ce-4f65-b84b-fb9efe7d35b0 in datapath 60551679-32db-4035-bd76-7d0d38a9d6de unbound from our chassis#033[00m
Feb 25 08:04:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:07.050 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 60551679-32db-4035-bd76-7d0d38a9d6de, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 08:04:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:07.052 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f14866a0-7fa1-4051-8d47-12aeac191b80]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:04:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:07.052 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de namespace which is not needed anymore#033[00m
Feb 25 08:04:07 np0005629333 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000096.scope: Deactivated successfully.
Feb 25 08:04:07 np0005629333 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000096.scope: Consumed 13.837s CPU time.
Feb 25 08:04:07 np0005629333 systemd-machined[210048]: Machine qemu-184-instance-00000096 terminated.
Feb 25 08:04:07 np0005629333 nova_compute[244014]: 2026-02-25 13:04:07.210 244018 INFO nova.virt.libvirt.driver [-] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Instance destroyed successfully.#033[00m
Feb 25 08:04:07 np0005629333 nova_compute[244014]: 2026-02-25 13:04:07.212 244018 DEBUG nova.objects.instance [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lazy-loading 'resources' on Instance uuid 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 08:04:07 np0005629333 neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de[380922]: [NOTICE]   (380926) : haproxy version is 2.8.14-c23fe91
Feb 25 08:04:07 np0005629333 neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de[380922]: [NOTICE]   (380926) : path to executable is /usr/sbin/haproxy
Feb 25 08:04:07 np0005629333 neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de[380922]: [WARNING]  (380926) : Exiting Master process...
Feb 25 08:04:07 np0005629333 neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de[380922]: [ALERT]    (380926) : Current worker (380928) exited with code 143 (Terminated)
Feb 25 08:04:07 np0005629333 neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de[380922]: [WARNING]  (380926) : All workers exited. Exiting... (0)
Feb 25 08:04:07 np0005629333 systemd[1]: libpod-8ef82320a342dca80ac5c36f5a312b72987ce73068f699469a0a05f7925da602.scope: Deactivated successfully.
Feb 25 08:04:07 np0005629333 podman[382164]: 2026-02-25 13:04:07.229070526 +0000 UTC m=+0.070620383 container died 8ef82320a342dca80ac5c36f5a312b72987ce73068f699469a0a05f7925da602 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 08:04:07 np0005629333 nova_compute[244014]: 2026-02-25 13:04:07.230 244018 DEBUG nova.virt.libvirt.vif [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T13:03:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-214149727',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-948360018-access_point-214149727',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-948360018-acc',id=150,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbQ2uY7T8R5wVQnvFbyA1segFh0PnNufgJbxt3APzZeuF0vcSW1YfpqbsmGbwvIughYn7yKsWGZo7qrK+BPyfIKQ42RLyvp7znat3TJdsHJA5kxzzkNZRLtYgFDUNF7bA==',key_name='tempest-TestSecurityGroupsBasicOps-1917346637',keypairs=<?>,launch_index=0,launched_at=2026-02-25T13:03:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b9699483122f465084e3147e4904d13d',ramdisk_id='',reservation_id='r-pgg8mh7f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-948360018',owner_user_name='tempest-TestSecurityGroupsBasicOps-948360018-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T13:03:10Z,user_data=None,user_id='ea895f651dd742a7b5eb2d63fb34641c',uuid=8c7de3d4-fa84-4c1c-9e61-e3b610cb177d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "address": "fa:16:3e:1a:bc:b9", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d538e82-43", "ovs_interfaceid": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 08:04:07 np0005629333 nova_compute[244014]: 2026-02-25 13:04:07.231 244018 DEBUG nova.network.os_vif_util [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converting VIF {"id": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "address": "fa:16:3e:1a:bc:b9", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d538e82-43", "ovs_interfaceid": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 08:04:07 np0005629333 nova_compute[244014]: 2026-02-25 13:04:07.232 244018 DEBUG nova.network.os_vif_util [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1a:bc:b9,bridge_name='br-int',has_traffic_filtering=True,id=7d538e82-43ce-4f65-b84b-fb9efe7d35b0,network=Network(60551679-32db-4035-bd76-7d0d38a9d6de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d538e82-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 08:04:07 np0005629333 nova_compute[244014]: 2026-02-25 13:04:07.233 244018 DEBUG os_vif [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:bc:b9,bridge_name='br-int',has_traffic_filtering=True,id=7d538e82-43ce-4f65-b84b-fb9efe7d35b0,network=Network(60551679-32db-4035-bd76-7d0d38a9d6de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d538e82-43') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 08:04:07 np0005629333 nova_compute[244014]: 2026-02-25 13:04:07.235 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:07 np0005629333 nova_compute[244014]: 2026-02-25 13:04:07.236 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7d538e82-43, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:04:07 np0005629333 nova_compute[244014]: 2026-02-25 13:04:07.240 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:07 np0005629333 nova_compute[244014]: 2026-02-25 13:04:07.243 244018 INFO os_vif [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:bc:b9,bridge_name='br-int',has_traffic_filtering=True,id=7d538e82-43ce-4f65-b84b-fb9efe7d35b0,network=Network(60551679-32db-4035-bd76-7d0d38a9d6de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7d538e82-43')#033[00m
Feb 25 08:04:07 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8ef82320a342dca80ac5c36f5a312b72987ce73068f699469a0a05f7925da602-userdata-shm.mount: Deactivated successfully.
Feb 25 08:04:07 np0005629333 systemd[1]: var-lib-containers-storage-overlay-683ddb331af2ab3814797ad79c5d49ef9abaf7e6aaf6d9a7ae140104b9cbf429-merged.mount: Deactivated successfully.
Feb 25 08:04:07 np0005629333 podman[382164]: 2026-02-25 13:04:07.284382535 +0000 UTC m=+0.125932432 container cleanup 8ef82320a342dca80ac5c36f5a312b72987ce73068f699469a0a05f7925da602 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 25 08:04:07 np0005629333 systemd[1]: libpod-conmon-8ef82320a342dca80ac5c36f5a312b72987ce73068f699469a0a05f7925da602.scope: Deactivated successfully.
Feb 25 08:04:07 np0005629333 podman[382226]: 2026-02-25 13:04:07.37213362 +0000 UTC m=+0.059247642 container remove 8ef82320a342dca80ac5c36f5a312b72987ce73068f699469a0a05f7925da602 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 25 08:04:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:07.377 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e9cfae20-8f6f-407c-b0ab-efe5867d2e50]: (4, ('Wed Feb 25 01:04:07 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de (8ef82320a342dca80ac5c36f5a312b72987ce73068f699469a0a05f7925da602)\n8ef82320a342dca80ac5c36f5a312b72987ce73068f699469a0a05f7925da602\nWed Feb 25 01:04:07 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de (8ef82320a342dca80ac5c36f5a312b72987ce73068f699469a0a05f7925da602)\n8ef82320a342dca80ac5c36f5a312b72987ce73068f699469a0a05f7925da602\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:04:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:07.379 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f615c0a9-9471-4e44-9d77-19457b7f390d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:04:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:07.380 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60551679-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:04:07 np0005629333 nova_compute[244014]: 2026-02-25 13:04:07.382 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:07 np0005629333 kernel: tap60551679-30: left promiscuous mode
Feb 25 08:04:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:07.389 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[f5dfe7c0-b81e-432d-b066-686a2f03382f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:04:07 np0005629333 nova_compute[244014]: 2026-02-25 13:04:07.392 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:07.405 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c7b2fbf6-0c7b-4f9f-a26c-9ab98010798b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:04:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:07.408 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[09564744-e7bf-4ead-8463-161a7110c7c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:04:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:07.424 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[97930067-f423-4a6e-8973-a088e9d3d0ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 655906, 'reachable_time': 41303, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382241, 'error': None, 'target': 'ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:04:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:07.427 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 08:04:07 np0005629333 systemd[1]: run-netns-ovnmeta\x2d60551679\x2d32db\x2d4035\x2dbd76\x2d7d0d38a9d6de.mount: Deactivated successfully.
Feb 25 08:04:07 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:07.427 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[ed7ac954-ccb7-4afb-814a-68b500ecb311]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
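remove_netns() above deletes /run/netns/ovnmeta-..., and systemd then reports the namespace's bind-mount unit deactivating, with '-' escaped as \x2d in the unit name. A sketch of both halves, assuming pyroute2; the unit-name escaping shown covers only the characters that occur here (systemd-escape handles the general case):

    # Delete the metadata namespace the way remove_netns() ultimately does,
    # then derive the mount unit name journald logs when the mount goes away.
    from pyroute2 import netns

    name = 'ovnmeta-60551679-32db-4035-bd76-7d0d38a9d6de'
    netns.remove(name)  # drops /run/netns/<name> and its bind mount

    # systemd escapes '-' inside path components as '\x2d' and joins
    # path components with '-', so /run/netns/<name> becomes:
    unit = 'run-netns-' + name.replace('-', '\\x2d') + '.mount'
    print(unit)  # run-netns-ovnmeta\x2d60551679\x2d32db\x2d...\x2d7d0d38a9d6de.mount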
Feb 25 08:04:07 np0005629333 nova_compute[244014]: 2026-02-25 13:04:07.660 244018 INFO nova.virt.libvirt.driver [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Deleting instance files /var/lib/nova/instances/8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_del#033[00m
Feb 25 08:04:07 np0005629333 nova_compute[244014]: 2026-02-25 13:04:07.661 244018 INFO nova.virt.libvirt.driver [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Deletion of /var/lib/nova/instances/8c7de3d4-fa84-4c1c-9e61-e3b610cb177d_del complete#033[00m
Feb 25 08:04:07 np0005629333 nova_compute[244014]: 2026-02-25 13:04:07.719 244018 INFO nova.compute.manager [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 08:04:07 np0005629333 nova_compute[244014]: 2026-02-25 13:04:07.720 244018 DEBUG oslo.service.loopingcall [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 08:04:07 np0005629333 nova_compute[244014]: 2026-02-25 13:04:07.720 244018 DEBUG nova.compute.manager [-] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 08:04:07 np0005629333 nova_compute[244014]: 2026-02-25 13:04:07.721 244018 DEBUG nova.network.neutron [-] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
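The "Waiting for function ... to return" line is oslo.service's looping-call machinery blocking until the retried network deallocation completes. A minimal sketch of that retry pattern, assuming oslo.service; nova's real wiring in ComputeManager._try_deallocate_network differs in detail:

    # Retry a flaky operation on a fixed interval until it succeeds;
    # loopingcall logs the "Waiting for function ..." DEBUG line in wait().
    from oslo_service import loopingcall

    attempts = {'n': 0}

    def _deallocate_with_retries():
        attempts['n'] += 1
        try:
            pass  # call neutron's deallocate_for_instance(...) here
        except Exception:
            if attempts['n'] < 3:
                return                        # run again on the next interval
            raise                             # give up; wait() re-raises
        raise loopingcall.LoopingCallDone()   # success: stop the loop

    timer = loopingcall.FixedIntervalLoopingCall(_deallocate_with_retries)
    timer.start(interval=1).wait()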
Feb 25 08:04:08 np0005629333 nova_compute[244014]: 2026-02-25 13:04:08.024 244018 DEBUG nova.network.neutron [req-7d3fb5ef-41cf-44e1-bcf5-cc729443e471 req-c9013877-cb5a-46af-8255-c0935a071ad8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Updated VIF entry in instance network info cache for port 7d538e82-43ce-4f65-b84b-fb9efe7d35b0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 08:04:08 np0005629333 nova_compute[244014]: 2026-02-25 13:04:08.025 244018 DEBUG nova.network.neutron [req-7d3fb5ef-41cf-44e1-bcf5-cc729443e471 req-c9013877-cb5a-46af-8255-c0935a071ad8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Updating instance_info_cache with network_info: [{"id": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "address": "fa:16:3e:1a:bc:b9", "network": {"id": "60551679-32db-4035-bd76-7d0d38a9d6de", "bridge": "br-int", "label": "tempest-network-smoke--346788591", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b9699483122f465084e3147e4904d13d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7d538e82-43", "ovs_interfaceid": "7d538e82-43ce-4f65-b84b-fb9efe7d35b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:04:08 np0005629333 nova_compute[244014]: 2026-02-25 13:04:08.086 244018 DEBUG oslo_concurrency.lockutils [req-7d3fb5ef-41cf-44e1-bcf5-cc729443e471 req-c9013877-cb5a-46af-8255-c0935a071ad8 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 08:04:08 np0005629333 nova_compute[244014]: 2026-02-25 13:04:08.343 244018 DEBUG nova.network.neutron [-] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:04:08 np0005629333 nova_compute[244014]: 2026-02-25 13:04:08.364 244018 INFO nova.compute.manager [-] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Took 0.64 seconds to deallocate network for instance.#033[00m
Feb 25 08:04:08 np0005629333 nova_compute[244014]: 2026-02-25 13:04:08.425 244018 DEBUG oslo_concurrency.lockutils [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:04:08 np0005629333 nova_compute[244014]: 2026-02-25 13:04:08.426 244018 DEBUG oslo_concurrency.lockutils [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:04:08 np0005629333 nova_compute[244014]: 2026-02-25 13:04:08.489 244018 DEBUG oslo_concurrency.processutils [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:04:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:04:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2518: 305 pgs: 305 active+clean; 187 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 362 KiB/s rd, 2.1 MiB/s wr, 115 op/s
Feb 25 08:04:08 np0005629333 nova_compute[244014]: 2026-02-25 13:04:08.972 244018 DEBUG nova.compute.manager [req-fa0ba157-1f55-4717-a07d-a193d140f618 req-485060fe-17ce-47dc-9fdc-348003136042 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Received event network-vif-unplugged-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:04:08 np0005629333 nova_compute[244014]: 2026-02-25 13:04:08.973 244018 DEBUG oslo_concurrency.lockutils [req-fa0ba157-1f55-4717-a07d-a193d140f618 req-485060fe-17ce-47dc-9fdc-348003136042 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:04:08 np0005629333 nova_compute[244014]: 2026-02-25 13:04:08.974 244018 DEBUG oslo_concurrency.lockutils [req-fa0ba157-1f55-4717-a07d-a193d140f618 req-485060fe-17ce-47dc-9fdc-348003136042 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:04:08 np0005629333 nova_compute[244014]: 2026-02-25 13:04:08.974 244018 DEBUG oslo_concurrency.lockutils [req-fa0ba157-1f55-4717-a07d-a193d140f618 req-485060fe-17ce-47dc-9fdc-348003136042 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:04:08 np0005629333 nova_compute[244014]: 2026-02-25 13:04:08.975 244018 DEBUG nova.compute.manager [req-fa0ba157-1f55-4717-a07d-a193d140f618 req-485060fe-17ce-47dc-9fdc-348003136042 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] No waiting events found dispatching network-vif-unplugged-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 08:04:08 np0005629333 nova_compute[244014]: 2026-02-25 13:04:08.975 244018 WARNING nova.compute.manager [req-fa0ba157-1f55-4717-a07d-a193d140f618 req-485060fe-17ce-47dc-9fdc-348003136042 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Received unexpected event network-vif-unplugged-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 for instance with vm_state deleted and task_state None.#033[00m
Feb 25 08:04:08 np0005629333 nova_compute[244014]: 2026-02-25 13:04:08.976 244018 DEBUG nova.compute.manager [req-fa0ba157-1f55-4717-a07d-a193d140f618 req-485060fe-17ce-47dc-9fdc-348003136042 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Received event network-vif-plugged-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:04:08 np0005629333 nova_compute[244014]: 2026-02-25 13:04:08.976 244018 DEBUG oslo_concurrency.lockutils [req-fa0ba157-1f55-4717-a07d-a193d140f618 req-485060fe-17ce-47dc-9fdc-348003136042 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:04:08 np0005629333 nova_compute[244014]: 2026-02-25 13:04:08.977 244018 DEBUG oslo_concurrency.lockutils [req-fa0ba157-1f55-4717-a07d-a193d140f618 req-485060fe-17ce-47dc-9fdc-348003136042 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:04:08 np0005629333 nova_compute[244014]: 2026-02-25 13:04:08.977 244018 DEBUG oslo_concurrency.lockutils [req-fa0ba157-1f55-4717-a07d-a193d140f618 req-485060fe-17ce-47dc-9fdc-348003136042 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:04:08 np0005629333 nova_compute[244014]: 2026-02-25 13:04:08.978 244018 DEBUG nova.compute.manager [req-fa0ba157-1f55-4717-a07d-a193d140f618 req-485060fe-17ce-47dc-9fdc-348003136042 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] No waiting events found dispatching network-vif-plugged-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 08:04:08 np0005629333 nova_compute[244014]: 2026-02-25 13:04:08.979 244018 WARNING nova.compute.manager [req-fa0ba157-1f55-4717-a07d-a193d140f618 req-485060fe-17ce-47dc-9fdc-348003136042 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Received unexpected event network-vif-plugged-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 for instance with vm_state deleted and task_state None.#033[00m
Feb 25 08:04:08 np0005629333 nova_compute[244014]: 2026-02-25 13:04:08.979 244018 DEBUG nova.compute.manager [req-fa0ba157-1f55-4717-a07d-a193d140f618 req-485060fe-17ce-47dc-9fdc-348003136042 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Received event network-vif-deleted-7d538e82-43ce-4f65-b84b-fb9efe7d35b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
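The Acquiring/acquired/released triplets above (lockutils.py:404, 409, 423) come from oslo.concurrency's lock helpers guarding the per-instance event table. A sketch of the two forms that emit exactly those DEBUG lines, assuming oslo.concurrency:

    # Decorator form, as used around pop_instance_event's _pop_event:
    from oslo_concurrency import lockutils

    @lockutils.synchronized('8c7de3d4-fa84-4c1c-9e61-e3b610cb177d-events')
    def _pop_event():
        pass  # mutate the instance-events dict while holding the lock

    _pop_event()

    # Context-manager form, as used for the resource tracker:
    with lockutils.lock('compute_resources'):
        pass  # critical section; "held N.NNNs" is logged on exit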
Feb 25 08:04:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:04:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2147639800' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:04:09 np0005629333 nova_compute[244014]: 2026-02-25 13:04:09.043 244018 DEBUG oslo_concurrency.processutils [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
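nova's RBD image backend shells out to ceph df --format=json (0.554s here) to size the shared DISK_GB inventory. A sketch of consuming that output; the JSON key names ('pools', 'stats', 'max_avail', 'stored') follow recent Ceph releases and are assumptions about the payload, not nova's code:

    # Ask Ceph for cluster/pool usage the same way the logged command does,
    # then compute free space for the pool backing nova ('vms').
    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    df = json.loads(out)
    vms = next(p for p in df['pools'] if p['name'] == 'vms')
    print('vms: %.1f GiB stored, %.1f GiB available'
          % (vms['stats']['stored'] / 1024**3,
             vms['stats']['max_avail'] / 1024**3))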
Feb 25 08:04:09 np0005629333 nova_compute[244014]: 2026-02-25 13:04:09.050 244018 DEBUG nova.compute.provider_tree [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:04:09 np0005629333 nova_compute[244014]: 2026-02-25 13:04:09.072 244018 DEBUG nova.scheduler.client.report [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
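The inventory dict above is what placement turns into schedulable capacity: (total - reserved) * allocation_ratio per resource class. Worked out for this host:

    # Effective capacity from the logged inventory:
    inv = {'VCPU': (8, 0, 4.0),
           'MEMORY_MB': (7679, 512, 1.0),
           'DISK_GB': (59, 1, 0.9)}
    for rc, (total, reserved, ratio) in inv.items():
        print(rc, (total - reserved) * ratio)
    # VCPU 32.0 / MEMORY_MB 7167.0 / DISK_GB 52.2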
Feb 25 08:04:09 np0005629333 nova_compute[244014]: 2026-02-25 13:04:09.097 244018 DEBUG oslo_concurrency.lockutils [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:04:09 np0005629333 nova_compute[244014]: 2026-02-25 13:04:09.133 244018 INFO nova.scheduler.client.report [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Deleted allocations for instance 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d#033[00m
Feb 25 08:04:09 np0005629333 nova_compute[244014]: 2026-02-25 13:04:09.216 244018 DEBUG oslo_concurrency.lockutils [None req-9be40da2-e5b3-4368-a1b1-9284c228d949 ea895f651dd742a7b5eb2d63fb34641c b9699483122f465084e3147e4904d13d - - default default] Lock "8c7de3d4-fa84-4c1c-9e61-e3b610cb177d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.254s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:04:10 np0005629333 nova_compute[244014]: 2026-02-25 13:04:10.714 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2519: 305 pgs: 305 active+clean; 187 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 3.0 KiB/s wr, 51 op/s
Feb 25 08:04:12 np0005629333 nova_compute[244014]: 2026-02-25 13:04:12.230 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:12 np0005629333 nova_compute[244014]: 2026-02-25 13:04:12.238 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:12 np0005629333 nova_compute[244014]: 2026-02-25 13:04:12.256 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2520: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Feb 25 08:04:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:04:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2521: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Feb 25 08:04:15 np0005629333 nova_compute[244014]: 2026-02-25 13:04:15.716 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2522: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Feb 25 08:04:17 np0005629333 nova_compute[244014]: 2026-02-25 13:04:17.241 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:17 np0005629333 nova_compute[244014]: 2026-02-25 13:04:17.663 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024642.6627862, 5cd02959-e31f-4dd4-a50d-caafefc56629 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:04:17 np0005629333 nova_compute[244014]: 2026-02-25 13:04:17.663 244018 INFO nova.compute.manager [-] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] VM Stopped (Lifecycle Event)#033[00m
Feb 25 08:04:17 np0005629333 nova_compute[244014]: 2026-02-25 13:04:17.701 244018 DEBUG nova.compute.manager [None req-a402d69e-0a32-41f6-9097-d9c1c9f9eba2 - - - - - -] [instance: 5cd02959-e31f-4dd4-a50d-caafefc56629] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:04:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:04:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2523: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Feb 25 08:04:20 np0005629333 nova_compute[244014]: 2026-02-25 13:04:20.719 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2524: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 3 op/s
Feb 25 08:04:22 np0005629333 nova_compute[244014]: 2026-02-25 13:04:22.207 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024647.2063859, 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:04:22 np0005629333 nova_compute[244014]: 2026-02-25 13:04:22.208 244018 INFO nova.compute.manager [-] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] VM Stopped (Lifecycle Event)#033[00m
Feb 25 08:04:22 np0005629333 nova_compute[244014]: 2026-02-25 13:04:22.237 244018 DEBUG nova.compute.manager [None req-1d8b2e26-36c6-4e75-98e2-e1b03ff35dfd - - - - - -] [instance: 8c7de3d4-fa84-4c1c-9e61-e3b610cb177d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:04:22 np0005629333 nova_compute[244014]: 2026-02-25 13:04:22.243 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2525: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 3 op/s
Feb 25 08:04:23 np0005629333 podman[382271]: 2026-02-25 13:04:23.708414673 +0000 UTC m=+0.051913345 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 25 08:04:23 np0005629333 podman[382272]: 2026-02-25 13:04:23.754609366 +0000 UTC m=+0.098053046 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Feb 25 08:04:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:04:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2526: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:04:25 np0005629333 nova_compute[244014]: 2026-02-25 13:04:25.721 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2527: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:04:27 np0005629333 nova_compute[244014]: 2026-02-25 13:04:27.246 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:04:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2528: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:04:30 np0005629333 nova_compute[244014]: 2026-02-25 13:04:30.723 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2529: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:04:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:04:31
Feb 25 08:04:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:04:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:04:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'backups', 'vms', 'volumes', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'default.rgw.meta']
Feb 25 08:04:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:04:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:04:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:04:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:04:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:04:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:04:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:04:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:04:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:04:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:04:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:04:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:04:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:04:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:04:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:04:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:04:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:04:32 np0005629333 nova_compute[244014]: 2026-02-25 13:04:32.249 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2530: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:04:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:33.314 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 08:04:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:33.315 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 25 08:04:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:33.317 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
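The transaction above acknowledges the SB_Global nb_cfg bump (50 to 51) by stamping neutron:ovn-metadata-sb-cfg into the agent's Chassis_Private row. A sketch of issuing the same write with ovsdbapp directly; the connection helpers and the OvnSbApiIdlImpl class name are my assumptions about ovsdbapp's public API, and the socket path is illustrative:

    # Set external_ids on a Chassis_Private record in OVN_Southbound.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/ovn/ovnsb_db.sock',
                                          'OVN_Southbound')
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))
    sb.db_set('Chassis_Private', 'a594384c-d614-4492-9e0a-4d6ec095920c',
              ('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),
              if_exists=True).execute(check_error=True)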
Feb 25 08:04:33 np0005629333 nova_compute[244014]: 2026-02-25 13:04:33.316 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:04:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2531: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:04:35 np0005629333 nova_compute[244014]: 2026-02-25 13:04:35.725 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2532: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:04:37 np0005629333 nova_compute[244014]: 2026-02-25 13:04:37.252 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:04:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2533: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:04:40 np0005629333 nova_compute[244014]: 2026-02-25 13:04:40.727 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:40 np0005629333 nova_compute[244014]: 2026-02-25 13:04:40.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:04:40 np0005629333 nova_compute[244014]: 2026-02-25 13:04:40.900 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:04:40 np0005629333 nova_compute[244014]: 2026-02-25 13:04:40.901 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:04:40 np0005629333 nova_compute[244014]: 2026-02-25 13:04:40.901 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:04:40 np0005629333 nova_compute[244014]: 2026-02-25 13:04:40.901 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:04:40 np0005629333 nova_compute[244014]: 2026-02-25 13:04:40.902 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:04:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2534: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:04:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:04:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4032463823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:04:41 np0005629333 nova_compute[244014]: 2026-02-25 13:04:41.431 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:04:41 np0005629333 nova_compute[244014]: 2026-02-25 13:04:41.582 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:04:41 np0005629333 nova_compute[244014]: 2026-02-25 13:04:41.584 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3523MB free_disk=59.98728773742914GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:04:41 np0005629333 nova_compute[244014]: 2026-02-25 13:04:41.584 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:04:41 np0005629333 nova_compute[244014]: 2026-02-25 13:04:41.584 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:04:41 np0005629333 nova_compute[244014]: 2026-02-25 13:04:41.646 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:04:41 np0005629333 nova_compute[244014]: 2026-02-25 13:04:41.647 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:04:41 np0005629333 nova_compute[244014]: 2026-02-25 13:04:41.668 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:04:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:04:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2995689333' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:04:42 np0005629333 nova_compute[244014]: 2026-02-25 13:04:42.255 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:42 np0005629333 nova_compute[244014]: 2026-02-25 13:04:42.260 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:04:42 np0005629333 nova_compute[244014]: 2026-02-25 13:04:42.267 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:04:42 np0005629333 nova_compute[244014]: 2026-02-25 13:04:42.295 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:04:42 np0005629333 nova_compute[244014]: 2026-02-25 13:04:42.321 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:04:42 np0005629333 nova_compute[244014]: 2026-02-25 13:04:42.321 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:04:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:04:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:04:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:04:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:04:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.6561777569814966e-05 of space, bias 1.0, pg target 0.00496853327094449 quantized to 32 (current 32)
Feb 25 08:04:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:04:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:04:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:04:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:04:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:04:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002494506322989017 of space, bias 1.0, pg target 0.7483518968967051 quantized to 32 (current 32)
Feb 25 08:04:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:04:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.410111525860306e-06 of space, bias 4.0, pg target 0.0016921338310323672 quantized to 16 (current 16)
Feb 25 08:04:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:04:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:04:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:04:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:04:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:04:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:04:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:04:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:04:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:04:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
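Each pg_autoscaler line above follows one relation: pg target = used-space fraction x bias x PG budget, quantized afterwards (pools stay at their current pg_num when the computed target is far below it). The budget works out to exactly 300 here, plausibly 100 target PGs per OSD across 3 OSDs, though that breakdown is an inference from the numbers:

    # Reproduce the "pg target" values logged above from the used
    # fractions and biases; the 300 PG budget is inferred (see lead-in).
    pools = {'.mgr': (7.185749983720779e-06, 1.0),
             'images': (0.002494506322989017, 1.0),
             'cephfs.cephfs.meta': (1.410111525860306e-06, 4.0)}
    for name, (used_frac, bias) in pools.items():
        print(name, used_frac * bias * 300)  # matches the log to the digit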
Feb 25 08:04:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2535: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:04:43 np0005629333 nova_compute[244014]: 2026-02-25 13:04:43.323 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:04:43 np0005629333 nova_compute[244014]: 2026-02-25 13:04:43.324 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:04:43 np0005629333 nova_compute[244014]: 2026-02-25 13:04:43.324 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:04:43 np0005629333 nova_compute[244014]: 2026-02-25 13:04:43.342 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:04:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:04:43 np0005629333 nova_compute[244014]: 2026-02-25 13:04:43.891 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:04:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2536: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:04:45 np0005629333 nova_compute[244014]: 2026-02-25 13:04:45.730 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:46 np0005629333 podman[382456]: 2026-02-25 13:04:46.736220768 +0000 UTC m=+0.081704675 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:04:46 np0005629333 podman[382456]: 2026-02-25 13:04:46.845126939 +0000 UTC m=+0.190610786 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:04:46 np0005629333 nova_compute[244014]: 2026-02-25 13:04:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:04:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2537: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:04:47 np0005629333 nova_compute[244014]: 2026-02-25 13:04:47.260 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2722819388' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2722819388' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.710932) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024687711179, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 2095, "num_deletes": 253, "total_data_size": 3438033, "memory_usage": 3500784, "flush_reason": "Manual Compaction"}
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024687734026, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 3367957, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52012, "largest_seqno": 54106, "table_properties": {"data_size": 3358355, "index_size": 6096, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19560, "raw_average_key_size": 20, "raw_value_size": 3339167, "raw_average_value_size": 3474, "num_data_blocks": 269, "num_entries": 961, "num_filter_entries": 961, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772024473, "oldest_key_time": 1772024473, "file_creation_time": 1772024687, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 23152 microseconds, and 8373 cpu microseconds.
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.734079) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 3367957 bytes OK
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.734103) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.736964) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.736985) EVENT_LOG_v1 {"time_micros": 1772024687736978, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.737006) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 3429223, prev total WAL file size 3472487, number of live WAL files 2.
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.738039) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(3289KB)], [122(9392KB)]
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024687738114, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 12986042, "oldest_snapshot_seqno": -1}
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 7594 keys, 11282083 bytes, temperature: kUnknown
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024687817173, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 11282083, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11230178, "index_size": 31825, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19013, "raw_key_size": 197045, "raw_average_key_size": 25, "raw_value_size": 11093674, "raw_average_value_size": 1460, "num_data_blocks": 1247, "num_entries": 7594, "num_filter_entries": 7594, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772024687, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.817417) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 11282083 bytes
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.820351) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.1 rd, 142.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 9.2 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(7.2) write-amplify(3.3) OK, records in: 8116, records dropped: 522 output_compression: NoCompression
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.820381) EVENT_LOG_v1 {"time_micros": 1772024687820368, "job": 74, "event": "compaction_finished", "compaction_time_micros": 79126, "compaction_time_cpu_micros": 36594, "output_level": 6, "num_output_files": 1, "total_output_size": 11282083, "num_input_records": 8116, "num_output_records": 7594, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024687821169, "job": 74, "event": "table_file_deletion", "file_number": 124}
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024687822814, "job": 74, "event": "table_file_deletion", "file_number": 122}
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.737857) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.822928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.822935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.822939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.822943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:04:47 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:04:47.822947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
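
[Editor's note] The rocksdb lines above are one manual-compaction cycle on the mon's store.db: flush job 73 writes the ~3.4 MB memtable to L0 table #124, compaction job 74 merges it with L6 table #122 into table #125, and the obsolete WAL (000120.log) and SSTs (#122, #124) are deleted. The `EVENT_LOG_v1` payloads are plain JSON, so the cycle can be summarized mechanically; a sketch that reads journal text on stdin:

    import json
    import re
    import sys

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    for line in sys.stdin:          # e.g. piped from journalctl
        m = EVENT.search(line)
        if not m:
            continue
        ev = json.loads(m.group(1))
        if ev.get("event") == "compaction_finished":
            mb = ev["total_output_size"] / 1e6
            secs = ev["compaction_time_micros"] / 1e6
            print(f"job {ev['job']}: {mb:.1f} MB in {secs:.3f} s "
                  f"({mb / secs:.1f} MB/s)")

On the compaction_finished event above this prints roughly 11.3 MB in 0.079 s (~142.6 MB/s), matching the "142.6 wr" figure in the compaction summary line.
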
Feb 25 08:04:47 np0005629333 nova_compute[244014]: 2026-02-25 13:04:47.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:04:48 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:04:48 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:04:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:04:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:04:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:04:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:04:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:04:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:04:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:04:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:04:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:04:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:04:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:04:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:04:48 np0005629333 podman[382784]: 2026-02-25 13:04:48.765434298 +0000 UTC m=+0.049778954 container create 6ccfa3b3492605a3ba39ab15668ffdf9ecaf5e23a2998210734265de538eccc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_pascal, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 08:04:48 np0005629333 systemd[1]: Started libpod-conmon-6ccfa3b3492605a3ba39ab15668ffdf9ecaf5e23a2998210734265de538eccc2.scope.
Feb 25 08:04:48 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:04:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:04:48 np0005629333 podman[382784]: 2026-02-25 13:04:48.738351295 +0000 UTC m=+0.022695961 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:04:48 np0005629333 podman[382784]: 2026-02-25 13:04:48.855516919 +0000 UTC m=+0.139861665 container init 6ccfa3b3492605a3ba39ab15668ffdf9ecaf5e23a2998210734265de538eccc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_pascal, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 08:04:48 np0005629333 podman[382784]: 2026-02-25 13:04:48.861855057 +0000 UTC m=+0.146199703 container start 6ccfa3b3492605a3ba39ab15668ffdf9ecaf5e23a2998210734265de538eccc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_pascal, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 08:04:48 np0005629333 crazy_pascal[382800]: 167 167
Feb 25 08:04:48 np0005629333 systemd[1]: libpod-6ccfa3b3492605a3ba39ab15668ffdf9ecaf5e23a2998210734265de538eccc2.scope: Deactivated successfully.
Feb 25 08:04:48 np0005629333 podman[382784]: 2026-02-25 13:04:48.868072713 +0000 UTC m=+0.152417359 container attach 6ccfa3b3492605a3ba39ab15668ffdf9ecaf5e23a2998210734265de538eccc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:04:48 np0005629333 podman[382784]: 2026-02-25 13:04:48.869145263 +0000 UTC m=+0.153489919 container died 6ccfa3b3492605a3ba39ab15668ffdf9ecaf5e23a2998210734265de538eccc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_pascal, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 08:04:48 np0005629333 nova_compute[244014]: 2026-02-25 13:04:48.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:04:48 np0005629333 systemd[1]: var-lib-containers-storage-overlay-18555a5b6a9a66ee023b35514c802b592c6c695f9dcf7a17ca1f8b778b5a28fa-merged.mount: Deactivated successfully.
Feb 25 08:04:48 np0005629333 podman[382784]: 2026-02-25 13:04:48.971118178 +0000 UTC m=+0.255462854 container remove 6ccfa3b3492605a3ba39ab15668ffdf9ecaf5e23a2998210734265de538eccc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 08:04:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2538: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:04:48 np0005629333 systemd[1]: libpod-conmon-6ccfa3b3492605a3ba39ab15668ffdf9ecaf5e23a2998210734265de538eccc2.scope: Deactivated successfully.
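
[Editor's note] The podman lines above are the full lifecycle of one short-lived cephadm helper container (`crazy_pascal`): create, init, start, attach, died, remove in roughly 0.2 s, emitting only "167 167" (the ceph UID/GID inside the image). A sketch that pairs `container create`/`container remove` events by container ID to measure these lifetimes, assuming journal text on stdin:

    import re
    import sys
    from datetime import datetime

    EVT = re.compile(r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC"
                     r" .*container (create|remove) ([0-9a-f]{64})")

    created = {}
    for line in sys.stdin:
        m = EVT.search(line)
        if not m:
            continue
        # podman prints nanoseconds; %f parses at most six digits, so truncate.
        ts = datetime.strptime(m.group(1)[:26], "%Y-%m-%d %H:%M:%S.%f")
        cid = m.group(3)
        if m.group(2) == "create":
            created[cid] = ts
        elif cid in created:
            secs = (ts - created.pop(cid)).total_seconds()
            print(cid[:12], f"{secs:.3f} s")

For `crazy_pascal` above this yields about 0.206 s, typical of the one-shot probes cephadm spawns.
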
Feb 25 08:04:49 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:04:49 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:04:49 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:04:49 np0005629333 podman[382826]: 2026-02-25 13:04:49.132434847 +0000 UTC m=+0.050138695 container create 4a923c29e7efa3ede78aa9a7c22ad436d935d901567fcce0e9e7c796158a7c11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hodgkin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 08:04:49 np0005629333 systemd[1]: Started libpod-conmon-4a923c29e7efa3ede78aa9a7c22ad436d935d901567fcce0e9e7c796158a7c11.scope.
Feb 25 08:04:49 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:04:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1398c34ebf746def8f1bc3aa60092b35f8536914ddc05dadf4bee7dc01e2d136/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:04:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1398c34ebf746def8f1bc3aa60092b35f8536914ddc05dadf4bee7dc01e2d136/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:04:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1398c34ebf746def8f1bc3aa60092b35f8536914ddc05dadf4bee7dc01e2d136/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:04:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1398c34ebf746def8f1bc3aa60092b35f8536914ddc05dadf4bee7dc01e2d136/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:04:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1398c34ebf746def8f1bc3aa60092b35f8536914ddc05dadf4bee7dc01e2d136/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
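
[Editor's note] Each path bind-mounted into the helper container triggers the kernel's XFS reminder that inodes with 32-bit timestamps overflow at 0x7fffffff seconds after the Unix epoch. A one-line check of what that limit means:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the Unix epoch -- the limit the message cites.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc).isoformat())
    # -> 2038-01-19T03:14:07+00:00
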
Feb 25 08:04:49 np0005629333 podman[382826]: 2026-02-25 13:04:49.109941643 +0000 UTC m=+0.027645571 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:04:49 np0005629333 podman[382826]: 2026-02-25 13:04:49.217067464 +0000 UTC m=+0.134771342 container init 4a923c29e7efa3ede78aa9a7c22ad436d935d901567fcce0e9e7c796158a7c11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hodgkin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:04:49 np0005629333 podman[382826]: 2026-02-25 13:04:49.222797605 +0000 UTC m=+0.140501494 container start 4a923c29e7efa3ede78aa9a7c22ad436d935d901567fcce0e9e7c796158a7c11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hodgkin, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:04:49 np0005629333 podman[382826]: 2026-02-25 13:04:49.22649333 +0000 UTC m=+0.144197248 container attach 4a923c29e7efa3ede78aa9a7c22ad436d935d901567fcce0e9e7c796158a7c11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hodgkin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 08:04:49 np0005629333 eager_hodgkin[382842]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:04:49 np0005629333 eager_hodgkin[382842]: --> All data devices are unavailable
Feb 25 08:04:49 np0005629333 systemd[1]: libpod-4a923c29e7efa3ede78aa9a7c22ad436d935d901567fcce0e9e7c796158a7c11.scope: Deactivated successfully.
Feb 25 08:04:49 np0005629333 podman[382826]: 2026-02-25 13:04:49.681864751 +0000 UTC m=+0.599568589 container died 4a923c29e7efa3ede78aa9a7c22ad436d935d901567fcce0e9e7c796158a7c11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hodgkin, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:04:49 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1398c34ebf746def8f1bc3aa60092b35f8536914ddc05dadf4bee7dc01e2d136-merged.mount: Deactivated successfully.
Feb 25 08:04:49 np0005629333 podman[382826]: 2026-02-25 13:04:49.926578871 +0000 UTC m=+0.844282719 container remove 4a923c29e7efa3ede78aa9a7c22ad436d935d901567fcce0e9e7c796158a7c11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hodgkin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:04:49 np0005629333 systemd[1]: libpod-conmon-4a923c29e7efa3ede78aa9a7c22ad436d935d901567fcce0e9e7c796158a7c11.scope: Deactivated successfully.
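
[Editor's note] `eager_hodgkin` is cephadm's OSD-placement probe: it was handed 0 physical and 3 LVM data devices and rejected all of them as unavailable, because each LV already carries an OSD (see the `ceph-volume lvm list` dump further below), so no new OSDs are created. A sketch of the equivalent check through the orchestrator; the `name`, `devices`, `available`, and `rejected_reasons` field names are assumptions about the `ceph orch device ls --format json` output shape and should be verified against your release:

    import json
    import subprocess

    hosts = json.loads(subprocess.check_output(
        ["ceph", "orch", "device", "ls", "--format", "json"]))
    for host in hosts:
        for dev in host.get("devices", []):
            if not dev.get("available"):
                # e.g. "LVM detected" / "Has a FileSystem" style reasons
                print(host["name"], dev["path"], dev.get("rejected_reasons"))
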
Feb 25 08:04:50 np0005629333 podman[382938]: 2026-02-25 13:04:50.360992771 +0000 UTC m=+0.055012412 container create 32a158decdf52e694b9e7d9da6fad1419ff1aff61a7571fae22e11c2e2e0b948 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_allen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 08:04:50 np0005629333 systemd[1]: Started libpod-conmon-32a158decdf52e694b9e7d9da6fad1419ff1aff61a7571fae22e11c2e2e0b948.scope.
Feb 25 08:04:50 np0005629333 podman[382938]: 2026-02-25 13:04:50.325752648 +0000 UTC m=+0.019772299 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:04:50 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:04:50 np0005629333 podman[382938]: 2026-02-25 13:04:50.472490806 +0000 UTC m=+0.166510457 container init 32a158decdf52e694b9e7d9da6fad1419ff1aff61a7571fae22e11c2e2e0b948 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_allen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 08:04:50 np0005629333 podman[382938]: 2026-02-25 13:04:50.479664528 +0000 UTC m=+0.173684129 container start 32a158decdf52e694b9e7d9da6fad1419ff1aff61a7571fae22e11c2e2e0b948 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_allen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 08:04:50 np0005629333 vigorous_allen[382955]: 167 167
Feb 25 08:04:50 np0005629333 systemd[1]: libpod-32a158decdf52e694b9e7d9da6fad1419ff1aff61a7571fae22e11c2e2e0b948.scope: Deactivated successfully.
Feb 25 08:04:50 np0005629333 podman[382938]: 2026-02-25 13:04:50.483114365 +0000 UTC m=+0.177133966 container attach 32a158decdf52e694b9e7d9da6fad1419ff1aff61a7571fae22e11c2e2e0b948 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_allen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 08:04:50 np0005629333 podman[382938]: 2026-02-25 13:04:50.48363668 +0000 UTC m=+0.177656271 container died 32a158decdf52e694b9e7d9da6fad1419ff1aff61a7571fae22e11c2e2e0b948 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_allen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:04:50 np0005629333 systemd[1]: var-lib-containers-storage-overlay-15b1f211fc6c7bdf28813207c8ee51c4281012ef046e6deb28c42aa9f01e6855-merged.mount: Deactivated successfully.
Feb 25 08:04:50 np0005629333 podman[382938]: 2026-02-25 13:04:50.538497087 +0000 UTC m=+0.232516748 container remove 32a158decdf52e694b9e7d9da6fad1419ff1aff61a7571fae22e11c2e2e0b948 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_allen, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:04:50 np0005629333 systemd[1]: libpod-conmon-32a158decdf52e694b9e7d9da6fad1419ff1aff61a7571fae22e11c2e2e0b948.scope: Deactivated successfully.
Feb 25 08:04:50 np0005629333 podman[382980]: 2026-02-25 13:04:50.72233209 +0000 UTC m=+0.071906559 container create 5116c59aec838213d0af8a9f0010773afeeaf05bea89730fc58fe653feafffbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 08:04:50 np0005629333 nova_compute[244014]: 2026-02-25 13:04:50.732 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:04:50 np0005629333 systemd[1]: Started libpod-conmon-5116c59aec838213d0af8a9f0010773afeeaf05bea89730fc58fe653feafffbc.scope.
Feb 25 08:04:50 np0005629333 podman[382980]: 2026-02-25 13:04:50.671026653 +0000 UTC m=+0.020601022 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:04:50 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:04:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da8b87f0f23be93eabf3cf7df4242977c8b669931ce26b38c6364e032f934f68/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:04:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da8b87f0f23be93eabf3cf7df4242977c8b669931ce26b38c6364e032f934f68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:04:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da8b87f0f23be93eabf3cf7df4242977c8b669931ce26b38c6364e032f934f68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:04:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da8b87f0f23be93eabf3cf7df4242977c8b669931ce26b38c6364e032f934f68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:04:50 np0005629333 podman[382980]: 2026-02-25 13:04:50.82164684 +0000 UTC m=+0.171221229 container init 5116c59aec838213d0af8a9f0010773afeeaf05bea89730fc58fe653feafffbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shockley, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:04:50 np0005629333 podman[382980]: 2026-02-25 13:04:50.830371466 +0000 UTC m=+0.179945855 container start 5116c59aec838213d0af8a9f0010773afeeaf05bea89730fc58fe653feafffbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 08:04:50 np0005629333 podman[382980]: 2026-02-25 13:04:50.84151002 +0000 UTC m=+0.191084409 container attach 5116c59aec838213d0af8a9f0010773afeeaf05bea89730fc58fe653feafffbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Feb 25 08:04:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2539: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]: {
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:    "0": [
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:        {
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "devices": [
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "/dev/loop3"
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            ],
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "lv_name": "ceph_lv0",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "lv_size": "21470642176",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "name": "ceph_lv0",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "tags": {
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.cluster_name": "ceph",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.crush_device_class": "",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.encrypted": "0",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.objectstore": "bluestore",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.osd_id": "0",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.type": "block",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.vdo": "0",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.with_tpm": "0"
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            },
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "type": "block",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "vg_name": "ceph_vg0"
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:        }
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:    ],
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:    "1": [
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:        {
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "devices": [
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "/dev/loop4"
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            ],
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "lv_name": "ceph_lv1",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "lv_size": "21470642176",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "name": "ceph_lv1",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "tags": {
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.cluster_name": "ceph",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.crush_device_class": "",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.encrypted": "0",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.objectstore": "bluestore",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.osd_id": "1",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.type": "block",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.vdo": "0",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.with_tpm": "0"
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            },
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "type": "block",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "vg_name": "ceph_vg1"
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:        }
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:    ],
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:    "2": [
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:        {
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "devices": [
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "/dev/loop5"
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            ],
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "lv_name": "ceph_lv2",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "lv_size": "21470642176",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "name": "ceph_lv2",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "tags": {
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.cluster_name": "ceph",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.crush_device_class": "",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.encrypted": "0",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.objectstore": "bluestore",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.osd_id": "2",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.type": "block",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.vdo": "0",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:                "ceph.with_tpm": "0"
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            },
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "type": "block",
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:            "vg_name": "ceph_vg2"
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:        }
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]:    ]
Feb 25 08:04:51 np0005629333 unruffled_shockley[382996]: }
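
[Editor's note] The JSON blob above is the per-OSD LVM inventory for this host (its shape matches `ceph-volume lvm list --format json`, which is presumably what the `unruffled_shockley` container ran): OSD id mapped to its logical volumes, with the OSD binding duplicated in the raw `lv_tags` string and the parsed `tags` map. A sketch that reduces it to one line per OSD, reading the blob on stdin:

    import json
    import sys

    # Reads the {"0": [...], "1": [...], "2": [...]} blob printed above.
    inventory = json.load(sys.stdin)
    for osd_id, lvs in sorted(inventory.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"fsid={tags['ceph.osd_fsid']}")

For the dump above this prints osd.0 through osd.2 on /dev/loop3 through /dev/loop5: three loop-backed OSDs of ~20 GiB each, consistent with the "59 GiB / 60 GiB avail" the pgmap lines report.
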
Feb 25 08:04:51 np0005629333 systemd[1]: libpod-5116c59aec838213d0af8a9f0010773afeeaf05bea89730fc58fe653feafffbc.scope: Deactivated successfully.
Feb 25 08:04:51 np0005629333 podman[382980]: 2026-02-25 13:04:51.14361443 +0000 UTC m=+0.493188789 container died 5116c59aec838213d0af8a9f0010773afeeaf05bea89730fc58fe653feafffbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shockley, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 08:04:51 np0005629333 systemd[1]: var-lib-containers-storage-overlay-da8b87f0f23be93eabf3cf7df4242977c8b669931ce26b38c6364e032f934f68-merged.mount: Deactivated successfully.
Feb 25 08:04:51 np0005629333 podman[382980]: 2026-02-25 13:04:51.245535114 +0000 UTC m=+0.595109493 container remove 5116c59aec838213d0af8a9f0010773afeeaf05bea89730fc58fe653feafffbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Feb 25 08:04:51 np0005629333 systemd[1]: libpod-conmon-5116c59aec838213d0af8a9f0010773afeeaf05bea89730fc58fe653feafffbc.scope: Deactivated successfully.
Feb 25 08:04:51 np0005629333 podman[383079]: 2026-02-25 13:04:51.729167791 +0000 UTC m=+0.095605676 container create a3d09145a4b70b21c0756f9eac555ccb8719e453846a6403893e51bc8d8cce0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:04:51 np0005629333 podman[383079]: 2026-02-25 13:04:51.651199713 +0000 UTC m=+0.017637618 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:04:51 np0005629333 systemd[1]: Started libpod-conmon-a3d09145a4b70b21c0756f9eac555ccb8719e453846a6403893e51bc8d8cce0c.scope.
Feb 25 08:04:51 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:04:52 np0005629333 podman[383079]: 2026-02-25 13:04:52.014081116 +0000 UTC m=+0.380519071 container init a3d09145a4b70b21c0756f9eac555ccb8719e453846a6403893e51bc8d8cce0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_ride, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 08:04:52 np0005629333 podman[383079]: 2026-02-25 13:04:52.021520405 +0000 UTC m=+0.387958330 container start a3d09145a4b70b21c0756f9eac555ccb8719e453846a6403893e51bc8d8cce0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_ride, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:04:52 np0005629333 jolly_ride[383096]: 167 167
Feb 25 08:04:52 np0005629333 systemd[1]: libpod-a3d09145a4b70b21c0756f9eac555ccb8719e453846a6403893e51bc8d8cce0c.scope: Deactivated successfully.
Feb 25 08:04:52 np0005629333 podman[383079]: 2026-02-25 13:04:52.071219597 +0000 UTC m=+0.437657572 container attach a3d09145a4b70b21c0756f9eac555ccb8719e453846a6403893e51bc8d8cce0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 08:04:52 np0005629333 podman[383079]: 2026-02-25 13:04:52.072071161 +0000 UTC m=+0.438509076 container died a3d09145a4b70b21c0756f9eac555ccb8719e453846a6403893e51bc8d8cce0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_ride, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:04:52 np0005629333 systemd[1]: var-lib-containers-storage-overlay-61d1e462643882f36d75d7d8600b3d11dd61ae2772f70c46a433bc31b6f7bd72-merged.mount: Deactivated successfully.
Feb 25 08:04:52 np0005629333 podman[383079]: 2026-02-25 13:04:52.182733021 +0000 UTC m=+0.549170906 container remove a3d09145a4b70b21c0756f9eac555ccb8719e453846a6403893e51bc8d8cce0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_ride, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:04:52 np0005629333 systemd[1]: libpod-conmon-a3d09145a4b70b21c0756f9eac555ccb8719e453846a6403893e51bc8d8cce0c.scope: Deactivated successfully.
Feb 25 08:04:52 np0005629333 nova_compute[244014]: 2026-02-25 13:04:52.263 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:52 np0005629333 podman[383122]: 2026-02-25 13:04:52.34652448 +0000 UTC m=+0.055978769 container create cdb488d821e7caaa16920d2e0a1def63460bde14a9674460728ffb45dbd7a642 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 08:04:52 np0005629333 systemd[1]: Started libpod-conmon-cdb488d821e7caaa16920d2e0a1def63460bde14a9674460728ffb45dbd7a642.scope.
Feb 25 08:04:52 np0005629333 podman[383122]: 2026-02-25 13:04:52.31425422 +0000 UTC m=+0.023708489 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:04:52 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:04:52 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac8869d6b7490f4ad49eb30c8eba2d3b78fa9eb32ec41e9c5c4beb7f0ffb9054/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:04:52 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac8869d6b7490f4ad49eb30c8eba2d3b78fa9eb32ec41e9c5c4beb7f0ffb9054/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:04:52 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac8869d6b7490f4ad49eb30c8eba2d3b78fa9eb32ec41e9c5c4beb7f0ffb9054/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:04:52 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac8869d6b7490f4ad49eb30c8eba2d3b78fa9eb32ec41e9c5c4beb7f0ffb9054/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:04:52 np0005629333 podman[383122]: 2026-02-25 13:04:52.458784946 +0000 UTC m=+0.168239225 container init cdb488d821e7caaa16920d2e0a1def63460bde14a9674460728ffb45dbd7a642 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swirles, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 08:04:52 np0005629333 podman[383122]: 2026-02-25 13:04:52.463799167 +0000 UTC m=+0.173253456 container start cdb488d821e7caaa16920d2e0a1def63460bde14a9674460728ffb45dbd7a642 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swirles, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:04:52 np0005629333 podman[383122]: 2026-02-25 13:04:52.489769369 +0000 UTC m=+0.199223718 container attach cdb488d821e7caaa16920d2e0a1def63460bde14a9674460728ffb45dbd7a642 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swirles, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 08:04:52 np0005629333 nova_compute[244014]: 2026-02-25 13:04:52.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:04:52 np0005629333 nova_compute[244014]: 2026-02-25 13:04:52.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 08:04:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2540: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:04:53 np0005629333 lvm[383217]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:04:53 np0005629333 lvm[383217]: VG ceph_vg0 finished
Feb 25 08:04:53 np0005629333 lvm[383220]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:04:53 np0005629333 lvm[383220]: VG ceph_vg1 finished
Feb 25 08:04:53 np0005629333 lvm[383222]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:04:53 np0005629333 lvm[383222]: VG ceph_vg2 finished
Feb 25 08:04:53 np0005629333 confident_swirles[383141]: {}
Feb 25 08:04:53 np0005629333 systemd[1]: libpod-cdb488d821e7caaa16920d2e0a1def63460bde14a9674460728ffb45dbd7a642.scope: Deactivated successfully.
Feb 25 08:04:53 np0005629333 systemd[1]: libpod-cdb488d821e7caaa16920d2e0a1def63460bde14a9674460728ffb45dbd7a642.scope: Consumed 1.064s CPU time.
Feb 25 08:04:53 np0005629333 podman[383122]: 2026-02-25 13:04:53.237091673 +0000 UTC m=+0.946545962 container died cdb488d821e7caaa16920d2e0a1def63460bde14a9674460728ffb45dbd7a642 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swirles, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:04:53 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ac8869d6b7490f4ad49eb30c8eba2d3b78fa9eb32ec41e9c5c4beb7f0ffb9054-merged.mount: Deactivated successfully.
Feb 25 08:04:53 np0005629333 podman[383122]: 2026-02-25 13:04:53.280352653 +0000 UTC m=+0.989806902 container remove cdb488d821e7caaa16920d2e0a1def63460bde14a9674460728ffb45dbd7a642 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swirles, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:04:53 np0005629333 systemd[1]: libpod-conmon-cdb488d821e7caaa16920d2e0a1def63460bde14a9674460728ffb45dbd7a642.scope: Deactivated successfully.
Feb 25 08:04:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:04:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:04:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:04:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:04:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:04:54 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:04:54 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:04:54 np0005629333 nova_compute[244014]: 2026-02-25 13:04:54.454 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Acquiring lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:04:54 np0005629333 nova_compute[244014]: 2026-02-25 13:04:54.455 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:04:54 np0005629333 nova_compute[244014]: 2026-02-25 13:04:54.482 244018 DEBUG nova.compute.manager [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Feb 25 08:04:54 np0005629333 podman[383261]: 2026-02-25 13:04:54.716376876 +0000 UTC m=+0.054848088 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0)
Feb 25 08:04:54 np0005629333 podman[383262]: 2026-02-25 13:04:54.736349999 +0000 UTC m=+0.074907353 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 08:04:54 np0005629333 nova_compute[244014]: 2026-02-25 13:04:54.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:04:54 np0005629333 nova_compute[244014]: 2026-02-25 13:04:54.921 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:04:54 np0005629333 nova_compute[244014]: 2026-02-25 13:04:54.921 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:04:54 np0005629333 nova_compute[244014]: 2026-02-25 13:04:54.931 244018 DEBUG nova.virt.hardware [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Feb 25 08:04:54 np0005629333 nova_compute[244014]: 2026-02-25 13:04:54.932 244018 INFO nova.compute.claims [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Feb 25 08:04:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2541: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:04:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:55.042 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:04:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:55.042 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:04:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:04:55.043 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:04:55 np0005629333 nova_compute[244014]: 2026-02-25 13:04:55.116 244018 DEBUG oslo_concurrency.processutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:04:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:04:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2083294746' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:04:55 np0005629333 nova_compute[244014]: 2026-02-25 13:04:55.734 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:55 np0005629333 nova_compute[244014]: 2026-02-25 13:04:55.755 244018 DEBUG oslo_concurrency.processutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:04:55 np0005629333 nova_compute[244014]: 2026-02-25 13:04:55.761 244018 DEBUG nova.compute.provider_tree [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:04:55 np0005629333 nova_compute[244014]: 2026-02-25 13:04:55.783 244018 DEBUG nova.scheduler.client.report [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:04:55 np0005629333 nova_compute[244014]: 2026-02-25 13:04:55.808 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:04:55 np0005629333 nova_compute[244014]: 2026-02-25 13:04:55.809 244018 DEBUG nova.compute.manager [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Feb 25 08:04:55 np0005629333 nova_compute[244014]: 2026-02-25 13:04:55.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:04:55 np0005629333 nova_compute[244014]: 2026-02-25 13:04:55.883 244018 DEBUG nova.compute.manager [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Feb 25 08:04:55 np0005629333 nova_compute[244014]: 2026-02-25 13:04:55.884 244018 DEBUG nova.network.neutron [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Feb 25 08:04:55 np0005629333 nova_compute[244014]: 2026-02-25 13:04:55.910 244018 INFO nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Feb 25 08:04:55 np0005629333 nova_compute[244014]: 2026-02-25 13:04:55.937 244018 DEBUG nova.compute.manager [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Feb 25 08:04:56 np0005629333 nova_compute[244014]: 2026-02-25 13:04:56.067 244018 DEBUG nova.compute.manager [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Feb 25 08:04:56 np0005629333 nova_compute[244014]: 2026-02-25 13:04:56.069 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Feb 25 08:04:56 np0005629333 nova_compute[244014]: 2026-02-25 13:04:56.069 244018 INFO nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Creating image(s)#033[00m
Feb 25 08:04:56 np0005629333 nova_compute[244014]: 2026-02-25 13:04:56.098 244018 DEBUG nova.storage.rbd_utils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] rbd image 65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:04:56 np0005629333 nova_compute[244014]: 2026-02-25 13:04:56.123 244018 DEBUG nova.storage.rbd_utils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] rbd image 65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:04:56 np0005629333 nova_compute[244014]: 2026-02-25 13:04:56.148 244018 DEBUG nova.storage.rbd_utils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] rbd image 65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:04:56 np0005629333 nova_compute[244014]: 2026-02-25 13:04:56.151 244018 DEBUG oslo_concurrency.processutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:04:56 np0005629333 nova_compute[244014]: 2026-02-25 13:04:56.224 244018 DEBUG nova.policy [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '49675ebd5cbe44e1a3373d23daabdc78', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7b441b24147b4cdbaeba6c8bc41ce081', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Feb 25 08:04:56 np0005629333 nova_compute[244014]: 2026-02-25 13:04:56.234 244018 DEBUG oslo_concurrency.processutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:04:56 np0005629333 nova_compute[244014]: 2026-02-25 13:04:56.234 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:04:56 np0005629333 nova_compute[244014]: 2026-02-25 13:04:56.235 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:04:56 np0005629333 nova_compute[244014]: 2026-02-25 13:04:56.235 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:04:56 np0005629333 nova_compute[244014]: 2026-02-25 13:04:56.257 244018 DEBUG nova.storage.rbd_utils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] rbd image 65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:04:56 np0005629333 nova_compute[244014]: 2026-02-25 13:04:56.261 244018 DEBUG oslo_concurrency.processutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:04:56 np0005629333 nova_compute[244014]: 2026-02-25 13:04:56.520 244018 DEBUG oslo_concurrency.processutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.259s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:04:56 np0005629333 nova_compute[244014]: 2026-02-25 13:04:56.583 244018 DEBUG nova.storage.rbd_utils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] resizing rbd image 65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Feb 25 08:04:56 np0005629333 nova_compute[244014]: 2026-02-25 13:04:56.669 244018 DEBUG nova.objects.instance [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lazy-loading 'migration_context' on Instance uuid 65c18a8f-0df3-4e29-b22d-fbc9362683f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 08:04:56 np0005629333 nova_compute[244014]: 2026-02-25 13:04:56.689 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Feb 25 08:04:56 np0005629333 nova_compute[244014]: 2026-02-25 13:04:56.689 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Ensure instance console log exists: /var/lib/nova/instances/65c18a8f-0df3-4e29-b22d-fbc9362683f8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Feb 25 08:04:56 np0005629333 nova_compute[244014]: 2026-02-25 13:04:56.690 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:04:56 np0005629333 nova_compute[244014]: 2026-02-25 13:04:56.690 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:04:56 np0005629333 nova_compute[244014]: 2026-02-25 13:04:56.691 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:04:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2542: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:04:56 np0005629333 nova_compute[244014]: 2026-02-25 13:04:56.996 244018 DEBUG nova.network.neutron [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Successfully created port: 6beae6ca-810c-48a5-8fcd-5f58732e64f4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Feb 25 08:04:57 np0005629333 nova_compute[244014]: 2026-02-25 13:04:57.265 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:04:57 np0005629333 nova_compute[244014]: 2026-02-25 13:04:57.516 244018 DEBUG nova.network.neutron [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Successfully updated port: 6beae6ca-810c-48a5-8fcd-5f58732e64f4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Feb 25 08:04:57 np0005629333 nova_compute[244014]: 2026-02-25 13:04:57.531 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Acquiring lock "refresh_cache-65c18a8f-0df3-4e29-b22d-fbc9362683f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 08:04:57 np0005629333 nova_compute[244014]: 2026-02-25 13:04:57.532 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Acquired lock "refresh_cache-65c18a8f-0df3-4e29-b22d-fbc9362683f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 08:04:57 np0005629333 nova_compute[244014]: 2026-02-25 13:04:57.532 244018 DEBUG nova.network.neutron [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 08:04:57 np0005629333 ovn_controller[147040]: 2026-02-25T13:04:57Z|01603|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Feb 25 08:04:57 np0005629333 nova_compute[244014]: 2026-02-25 13:04:57.622 244018 DEBUG nova.compute.manager [req-6c87d8fa-c3e2-4b06-8c24-0fbf79477c34 req-7af23e3d-dd46-4cdd-a0ff-6849e46dd146 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Received event network-changed-6beae6ca-810c-48a5-8fcd-5f58732e64f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:04:57 np0005629333 nova_compute[244014]: 2026-02-25 13:04:57.622 244018 DEBUG nova.compute.manager [req-6c87d8fa-c3e2-4b06-8c24-0fbf79477c34 req-7af23e3d-dd46-4cdd-a0ff-6849e46dd146 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Refreshing instance network info cache due to event network-changed-6beae6ca-810c-48a5-8fcd-5f58732e64f4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 08:04:57 np0005629333 nova_compute[244014]: 2026-02-25 13:04:57.623 244018 DEBUG oslo_concurrency.lockutils [req-6c87d8fa-c3e2-4b06-8c24-0fbf79477c34 req-7af23e3d-dd46-4cdd-a0ff-6849e46dd146 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-65c18a8f-0df3-4e29-b22d-fbc9362683f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 08:04:58 np0005629333 nova_compute[244014]: 2026-02-25 13:04:58.158 244018 DEBUG nova.network.neutron [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 08:04:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:04:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2543: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 08:04:59 np0005629333 nova_compute[244014]: 2026-02-25 13:04:59.604 244018 DEBUG nova.network.neutron [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Updating instance_info_cache with network_info: [{"id": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "address": "fa:16:3e:8a:be:b7", "network": {"id": "e5412d48-5ea8-48ec-b0d9-17bc49a104c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1780627753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b441b24147b4cdbaeba6c8bc41ce081", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6beae6ca-81", "ovs_interfaceid": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:04:59 np0005629333 nova_compute[244014]: 2026-02-25 13:04:59.630 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Releasing lock "refresh_cache-65c18a8f-0df3-4e29-b22d-fbc9362683f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 08:04:59 np0005629333 nova_compute[244014]: 2026-02-25 13:04:59.631 244018 DEBUG nova.compute.manager [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Instance network_info: |[{"id": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "address": "fa:16:3e:8a:be:b7", "network": {"id": "e5412d48-5ea8-48ec-b0d9-17bc49a104c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1780627753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b441b24147b4cdbaeba6c8bc41ce081", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6beae6ca-81", "ovs_interfaceid": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Feb 25 08:04:59 np0005629333 nova_compute[244014]: 2026-02-25 13:04:59.631 244018 DEBUG oslo_concurrency.lockutils [req-6c87d8fa-c3e2-4b06-8c24-0fbf79477c34 req-7af23e3d-dd46-4cdd-a0ff-6849e46dd146 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-65c18a8f-0df3-4e29-b22d-fbc9362683f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 08:04:59 np0005629333 nova_compute[244014]: 2026-02-25 13:04:59.632 244018 DEBUG nova.network.neutron [req-6c87d8fa-c3e2-4b06-8c24-0fbf79477c34 req-7af23e3d-dd46-4cdd-a0ff-6849e46dd146 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Refreshing network info cache for port 6beae6ca-810c-48a5-8fcd-5f58732e64f4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 08:04:59 np0005629333 nova_compute[244014]: 2026-02-25 13:04:59.636 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Start _get_guest_xml network_info=[{"id": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "address": "fa:16:3e:8a:be:b7", "network": {"id": "e5412d48-5ea8-48ec-b0d9-17bc49a104c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1780627753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b441b24147b4cdbaeba6c8bc41ce081", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6beae6ca-81", "ovs_interfaceid": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Feb 25 08:04:59 np0005629333 nova_compute[244014]: 2026-02-25 13:04:59.641 244018 WARNING nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:04:59 np0005629333 nova_compute[244014]: 2026-02-25 13:04:59.647 244018 DEBUG nova.virt.libvirt.host [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Feb 25 08:04:59 np0005629333 nova_compute[244014]: 2026-02-25 13:04:59.648 244018 DEBUG nova.virt.libvirt.host [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Feb 25 08:04:59 np0005629333 nova_compute[244014]: 2026-02-25 13:04:59.660 244018 DEBUG nova.virt.libvirt.host [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Feb 25 08:04:59 np0005629333 nova_compute[244014]: 2026-02-25 13:04:59.661 244018 DEBUG nova.virt.libvirt.host [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Feb 25 08:04:59 np0005629333 nova_compute[244014]: 2026-02-25 13:04:59.662 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Feb 25 08:04:59 np0005629333 nova_compute[244014]: 2026-02-25 13:04:59.662 244018 DEBUG nova.virt.hardware [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Feb 25 08:04:59 np0005629333 nova_compute[244014]: 2026-02-25 13:04:59.663 244018 DEBUG nova.virt.hardware [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Feb 25 08:04:59 np0005629333 nova_compute[244014]: 2026-02-25 13:04:59.664 244018 DEBUG nova.virt.hardware [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Feb 25 08:04:59 np0005629333 nova_compute[244014]: 2026-02-25 13:04:59.664 244018 DEBUG nova.virt.hardware [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Feb 25 08:04:59 np0005629333 nova_compute[244014]: 2026-02-25 13:04:59.665 244018 DEBUG nova.virt.hardware [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Feb 25 08:04:59 np0005629333 nova_compute[244014]: 2026-02-25 13:04:59.665 244018 DEBUG nova.virt.hardware [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Feb 25 08:04:59 np0005629333 nova_compute[244014]: 2026-02-25 13:04:59.666 244018 DEBUG nova.virt.hardware [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 08:04:59 np0005629333 nova_compute[244014]: 2026-02-25 13:04:59.666 244018 DEBUG nova.virt.hardware [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 08:04:59 np0005629333 nova_compute[244014]: 2026-02-25 13:04:59.667 244018 DEBUG nova.virt.hardware [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 08:04:59 np0005629333 nova_compute[244014]: 2026-02-25 13:04:59.667 244018 DEBUG nova.virt.hardware [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 08:04:59 np0005629333 nova_compute[244014]: 2026-02-25 13:04:59.668 244018 DEBUG nova.virt.hardware [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 08:04:59 np0005629333 nova_compute[244014]: 2026-02-25 13:04:59.672 244018 DEBUG oslo_concurrency.processutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:05:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 08:05:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4011732510' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.205 244018 DEBUG oslo_concurrency.processutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.228 244018 DEBUG nova.storage.rbd_utils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] rbd image 65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.231 244018 DEBUG oslo_concurrency.processutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.736 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 08:05:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/312836393' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.774 244018 DEBUG oslo_concurrency.processutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.775 244018 DEBUG nova.virt.libvirt.vif [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T13:04:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-326185766',display_name='tempest-TestServerBasicOps-server-326185766',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-326185766',id=152,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAgD3tk1qiKui8lJJf3rJQ9kydu9X1XrZCAFY86RY3JpEFptTJu0F5JBGrnQbFsHJxgtY6rKDs+7evBr0WJnOj8cmec2yTpdNqIYUHkpoNnjetcb9Sa9n8HT0qnNjdCSzw==',key_name='tempest-TestServerBasicOps-1849175750',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7b441b24147b4cdbaeba6c8bc41ce081',ramdisk_id='',reservation_id='r-pdnxw3eu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1933998932',owner_user_name='tempest-TestServerBasicOps-1933998932-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T13:04:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='49675ebd5cbe44e1a3373d23daabdc78',uuid=65c18a8f-0df3-4e29-b22d-fbc9362683f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "address": "fa:16:3e:8a:be:b7", "network": {"id": "e5412d48-5ea8-48ec-b0d9-17bc49a104c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1780627753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b441b24147b4cdbaeba6c8bc41ce081", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6beae6ca-81", "ovs_interfaceid": 
"6beae6ca-810c-48a5-8fcd-5f58732e64f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.776 244018 DEBUG nova.network.os_vif_util [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Converting VIF {"id": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "address": "fa:16:3e:8a:be:b7", "network": {"id": "e5412d48-5ea8-48ec-b0d9-17bc49a104c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1780627753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b441b24147b4cdbaeba6c8bc41ce081", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6beae6ca-81", "ovs_interfaceid": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.777 244018 DEBUG nova.network.os_vif_util [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:be:b7,bridge_name='br-int',has_traffic_filtering=True,id=6beae6ca-810c-48a5-8fcd-5f58732e64f4,network=Network(e5412d48-5ea8-48ec-b0d9-17bc49a104c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6beae6ca-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.778 244018 DEBUG nova.objects.instance [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lazy-loading 'pci_devices' on Instance uuid 65c18a8f-0df3-4e29-b22d-fbc9362683f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.802 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] End _get_guest_xml xml=<domain type="kvm">
Feb 25 08:05:00 np0005629333 nova_compute[244014]:  <uuid>65c18a8f-0df3-4e29-b22d-fbc9362683f8</uuid>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:  <name>instance-00000098</name>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <nova:name>tempest-TestServerBasicOps-server-326185766</nova:name>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 13:04:59</nova:creationTime>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 08:05:00 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:        <nova:user uuid="49675ebd5cbe44e1a3373d23daabdc78">tempest-TestServerBasicOps-1933998932-project-member</nova:user>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:        <nova:project uuid="7b441b24147b4cdbaeba6c8bc41ce081">tempest-TestServerBasicOps-1933998932</nova:project>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <nova:ports>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:        <nova:port uuid="6beae6ca-810c-48a5-8fcd-5f58732e64f4">
Feb 25 08:05:00 np0005629333 nova_compute[244014]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:        </nova:port>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      </nova:ports>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <system>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <entry name="serial">65c18a8f-0df3-4e29-b22d-fbc9362683f8</entry>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <entry name="uuid">65c18a8f-0df3-4e29-b22d-fbc9362683f8</entry>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    </system>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:  <os>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:  </os>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:  <features>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:  </features>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:  </clock>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:  <devices>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk">
Feb 25 08:05:00 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      </source>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 08:05:00 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      </auth>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    </disk>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk.config">
Feb 25 08:05:00 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      </source>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 08:05:00 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      </auth>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    </disk>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <interface type="ethernet">
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <mac address="fa:16:3e:8a:be:b7"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <driver name="vhost" rx_queue_size="512"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <mtu size="1442"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <target dev="tap6beae6ca-81"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    </interface>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/65c18a8f-0df3-4e29-b22d-fbc9362683f8/console.log" append="off"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    </serial>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <video>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    </video>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    </rng>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 08:05:00 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 08:05:00 np0005629333 nova_compute[244014]:  </devices>
Feb 25 08:05:00 np0005629333 nova_compute[244014]: </domain>
Feb 25 08:05:00 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
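[Editor's note] The End _get_guest_xml record above is the complete libvirt domain definition nova hands to libvirtd for instance-00000098. If that XML is saved to a file (with the journal prefixes stripped), the fields that matter for this spawn can be pulled out with the standard library; a sketch, assuming the hypothetical file name instance-00000098.xml:

    import xml.etree.ElementTree as ET

    root = ET.parse("instance-00000098.xml").getroot()
    print("uuid:", root.findtext("uuid"))

    # <cpu><topology sockets="1" cores="1" threads="1"/>, the result of the
    # topology search earlier in the log.
    print("cpu topology:", root.find("cpu/topology").attrib)

    # Both disks are network disks backed by RBD: the root disk in
    # vms/..._disk and the config drive in vms/..._disk.config.
    for disk in root.findall("devices/disk"):
        source = disk.find("source")
        print(disk.get("device"), source.get("protocol"), source.get("name"))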
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.803 244018 DEBUG nova.compute.manager [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Preparing to wait for external event network-vif-plugged-6beae6ca-810c-48a5-8fcd-5f58732e64f4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.803 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Acquiring lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.804 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.804 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
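[Editor's note] Before plugging the VIF, the compute manager registers an event it will later block on (network-vif-plugged-<port-id>, per the "Preparing to wait" line), and the three lockutils lines show the per-instance "<uuid>-events" lock taken around that registration. A rough sketch of the pattern, assuming threading.Event as a stand-in for nova's internal event objects:

    import threading
    from oslo_concurrency import lockutils

    _events = {}

    # Lock name mirrors the "<instance-uuid>-events" name in the log.
    @lockutils.synchronized("65c18a8f-0df3-4e29-b22d-fbc9362683f8-events")
    def _create_or_get_event(name):
        return _events.setdefault(name, threading.Event())

    ev = _create_or_get_event(
        "network-vif-plugged-6beae6ca-810c-48a5-8fcd-5f58732e64f4")
    # Later, the neutron "plugged" notification would call ev.set();
    # the spawn path blocks on something like ev.wait(timeout=300).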
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.805 244018 DEBUG nova.virt.libvirt.vif [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T13:04:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-326185766',display_name='tempest-TestServerBasicOps-server-326185766',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-326185766',id=152,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAgD3tk1qiKui8lJJf3rJQ9kydu9X1XrZCAFY86RY3JpEFptTJu0F5JBGrnQbFsHJxgtY6rKDs+7evBr0WJnOj8cmec2yTpdNqIYUHkpoNnjetcb9Sa9n8HT0qnNjdCSzw==',key_name='tempest-TestServerBasicOps-1849175750',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7b441b24147b4cdbaeba6c8bc41ce081',ramdisk_id='',reservation_id='r-pdnxw3eu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1933998932',owner_user_name='tempest-TestServerBasicOps-1933998932-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-25T13:04:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='49675ebd5cbe44e1a3373d23daabdc78',uuid=65c18a8f-0df3-4e29-b22d-fbc9362683f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "address": "fa:16:3e:8a:be:b7", "network": {"id": "e5412d48-5ea8-48ec-b0d9-17bc49a104c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1780627753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b441b24147b4cdbaeba6c8bc41ce081", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6beae6ca-81", "ovs_interfaceid": 
"6beae6ca-810c-48a5-8fcd-5f58732e64f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.805 244018 DEBUG nova.network.os_vif_util [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Converting VIF {"id": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "address": "fa:16:3e:8a:be:b7", "network": {"id": "e5412d48-5ea8-48ec-b0d9-17bc49a104c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1780627753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b441b24147b4cdbaeba6c8bc41ce081", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6beae6ca-81", "ovs_interfaceid": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.806 244018 DEBUG nova.network.os_vif_util [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:be:b7,bridge_name='br-int',has_traffic_filtering=True,id=6beae6ca-810c-48a5-8fcd-5f58732e64f4,network=Network(e5412d48-5ea8-48ec-b0d9-17bc49a104c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6beae6ca-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.806 244018 DEBUG os_vif [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:be:b7,bridge_name='br-int',has_traffic_filtering=True,id=6beae6ca-810c-48a5-8fcd-5f58732e64f4,network=Network(e5412d48-5ea8-48ec-b0d9-17bc49a104c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6beae6ca-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.807 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.807 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.808 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.811 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.811 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6beae6ca-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.811 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6beae6ca-81, col_values=(('external_ids', {'iface-id': '6beae6ca-810c-48a5-8fcd-5f58732e64f4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8a:be:b7', 'vm-uuid': '65c18a8f-0df3-4e29-b22d-fbc9362683f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.813 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:00 np0005629333 NetworkManager[49836]: <info>  [1772024700.8143] manager: (tap6beae6ca-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/659)
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.817 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.819 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.821 244018 INFO os_vif [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:be:b7,bridge_name='br-int',has_traffic_filtering=True,id=6beae6ca-810c-48a5-8fcd-5f58732e64f4,network=Network(e5412d48-5ea8-48ec-b0d9-17bc49a104c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6beae6ca-81')#033[00m
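[Editor's note] Plugging the VIF comes down to the two ovsdbapp commands logged at 13:05:00.811: AddPortCommand puts tap6beae6ca-81 on br-int, and DbSetCommand writes the external_ids that let OVN match the interface to its logical port. For reference, a sketch of the equivalent operation expressed through ovs-vsctl (os-vif itself speaks native OVSDB as shown above; values are copied from the log):

    import subprocess

    tap = "tap6beae6ca-81"
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", tap, "--",
         "set", "Interface", tap,
         "external_ids:iface-id=6beae6ca-810c-48a5-8fcd-5f58732e64f4",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:8a:be:b7",
         "external_ids:vm-uuid=65c18a8f-0df3-4e29-b22d-fbc9362683f8"],
        check=True,
    )

ovn-controller watches for exactly that iface-id; the "Claiming lport" lines further down are its reaction once the tap device exists.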
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.865 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.865 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.866 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] No VIF found with MAC fa:16:3e:8a:be:b7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.866 244018 INFO nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Using config drive#033[00m
Feb 25 08:05:00 np0005629333 nova_compute[244014]: 2026-02-25 13:05:00.888 244018 DEBUG nova.storage.rbd_utils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] rbd image 65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:05:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2544: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 08:05:01 np0005629333 nova_compute[244014]: 2026-02-25 13:05:01.328 244018 INFO nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Creating config drive at /var/lib/nova/instances/65c18a8f-0df3-4e29-b22d-fbc9362683f8/disk.config#033[00m
Feb 25 08:05:01 np0005629333 nova_compute[244014]: 2026-02-25 13:05:01.336 244018 DEBUG oslo_concurrency.processutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/65c18a8f-0df3-4e29-b22d-fbc9362683f8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1c2go8pu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:05:01 np0005629333 nova_compute[244014]: 2026-02-25 13:05:01.405 244018 DEBUG nova.network.neutron [req-6c87d8fa-c3e2-4b06-8c24-0fbf79477c34 req-7af23e3d-dd46-4cdd-a0ff-6849e46dd146 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Updated VIF entry in instance network info cache for port 6beae6ca-810c-48a5-8fcd-5f58732e64f4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 08:05:01 np0005629333 nova_compute[244014]: 2026-02-25 13:05:01.406 244018 DEBUG nova.network.neutron [req-6c87d8fa-c3e2-4b06-8c24-0fbf79477c34 req-7af23e3d-dd46-4cdd-a0ff-6849e46dd146 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Updating instance_info_cache with network_info: [{"id": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "address": "fa:16:3e:8a:be:b7", "network": {"id": "e5412d48-5ea8-48ec-b0d9-17bc49a104c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1780627753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b441b24147b4cdbaeba6c8bc41ce081", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6beae6ca-81", "ovs_interfaceid": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:05:01 np0005629333 nova_compute[244014]: 2026-02-25 13:05:01.431 244018 DEBUG oslo_concurrency.lockutils [req-6c87d8fa-c3e2-4b06-8c24-0fbf79477c34 req-7af23e3d-dd46-4cdd-a0ff-6849e46dd146 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-65c18a8f-0df3-4e29-b22d-fbc9362683f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 08:05:01 np0005629333 nova_compute[244014]: 2026-02-25 13:05:01.481 244018 DEBUG oslo_concurrency.processutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/65c18a8f-0df3-4e29-b22d-fbc9362683f8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp1c2go8pu" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
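[Editor's note] The config drive is a plain ISO9660 image built with mkisofs; oslo's debug line prints the argv joined with spaces, so the multi-word publisher string looks unquoted but is passed as a single argument. A sketch of the same invocation (the /tmp/tmp1c2go8pu staging directory is whatever temporary directory holds the generated metadata files):

    import subprocess

    # Argv copied from the "Running cmd (subprocess)" line above.
    subprocess.run(
        ["/usr/bin/mkisofs",
         "-o", "/var/lib/nova/instances/"
               "65c18a8f-0df3-4e29-b22d-fbc9362683f8/disk.config",
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
         "-quiet", "-J", "-r", "-V", "config-2",
         "/tmp/tmp1c2go8pu"],
        check=True,
    )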
Feb 25 08:05:01 np0005629333 nova_compute[244014]: 2026-02-25 13:05:01.519 244018 DEBUG nova.storage.rbd_utils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] rbd image 65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Feb 25 08:05:01 np0005629333 nova_compute[244014]: 2026-02-25 13:05:01.525 244018 DEBUG oslo_concurrency.processutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/65c18a8f-0df3-4e29-b22d-fbc9362683f8/disk.config 65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:05:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:05:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:05:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:05:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:05:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:05:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:05:01 np0005629333 nova_compute[244014]: 2026-02-25 13:05:01.676 244018 DEBUG oslo_concurrency.processutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/65c18a8f-0df3-4e29-b22d-fbc9362683f8/disk.config 65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:05:01 np0005629333 nova_compute[244014]: 2026-02-25 13:05:01.677 244018 INFO nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Deleting local config drive /var/lib/nova/instances/65c18a8f-0df3-4e29-b22d-fbc9362683f8/disk.config because it was imported into RBD.#033[00m
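[Editor's note] Because this deployment keeps instance disks in the Ceph vms pool, the finished ISO is then imported as an RBD image (matching the vms/..._disk.config source in the domain XML) and the local copy deleted, which is what the two lines above record. A sketch of the same two steps:

    import os
    import subprocess

    iso = ("/var/lib/nova/instances/"
           "65c18a8f-0df3-4e29-b22d-fbc9362683f8/disk.config")

    # Argv copied from the rbd import command logged above.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso,
         "65c18a8f-0df3-4e29-b22d-fbc9362683f8_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )
    os.remove(iso)  # "Deleting local config drive ... imported into RBD"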
Feb 25 08:05:01 np0005629333 kernel: tap6beae6ca-81: entered promiscuous mode
Feb 25 08:05:01 np0005629333 ovn_controller[147040]: 2026-02-25T13:05:01Z|01604|binding|INFO|Claiming lport 6beae6ca-810c-48a5-8fcd-5f58732e64f4 for this chassis.
Feb 25 08:05:01 np0005629333 ovn_controller[147040]: 2026-02-25T13:05:01Z|01605|binding|INFO|6beae6ca-810c-48a5-8fcd-5f58732e64f4: Claiming fa:16:3e:8a:be:b7 10.100.0.3
Feb 25 08:05:01 np0005629333 NetworkManager[49836]: <info>  [1772024701.7412] manager: (tap6beae6ca-81): new Tun device (/org/freedesktop/NetworkManager/Devices/660)
Feb 25 08:05:01 np0005629333 nova_compute[244014]: 2026-02-25 13:05:01.741 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:01 np0005629333 nova_compute[244014]: 2026-02-25 13:05:01.748 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.755 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:be:b7 10.100.0.3'], port_security=['fa:16:3e:8a:be:b7 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '65c18a8f-0df3-4e29-b22d-fbc9362683f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5412d48-5ea8-48ec-b0d9-17bc49a104c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7b441b24147b4cdbaeba6c8bc41ce081', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1265db5c-9190-4fc7-98f2-b8b04c9ddabc 876954b7-4b56-43af-996d-ec29d7f8f2ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=247650f0-77ff-40c8-b14f-1d2a2a430ca3, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6beae6ca-810c-48a5-8fcd-5f58732e64f4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 08:05:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.756 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6beae6ca-810c-48a5-8fcd-5f58732e64f4 in datapath e5412d48-5ea8-48ec-b0d9-17bc49a104c2 bound to our chassis#033[00m
Feb 25 08:05:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.757 157129 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e5412d48-5ea8-48ec-b0d9-17bc49a104c2#033[00m
Feb 25 08:05:01 np0005629333 systemd-machined[210048]: New machine qemu-186-instance-00000098.
Feb 25 08:05:01 np0005629333 ovn_controller[147040]: 2026-02-25T13:05:01Z|01606|binding|INFO|Setting lport 6beae6ca-810c-48a5-8fcd-5f58732e64f4 ovn-installed in OVS
Feb 25 08:05:01 np0005629333 ovn_controller[147040]: 2026-02-25T13:05:01Z|01607|binding|INFO|Setting lport 6beae6ca-810c-48a5-8fcd-5f58732e64f4 up in Southbound
Feb 25 08:05:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.767 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[77e8661e-c32e-46a0-aea3-d43529846449]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:05:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.768 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape5412d48-51 in ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
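[Editor's note] Provisioning metadata for the datapath means giving the network an isolated namespace with one end of a veth pair inside it; the privsep replies that follow are the agent doing this through pyroute2. A sketch of the equivalent ip(8) operations, with names copied from the log (the -50 end is later attached to br-int, as the AddPortCommand for tape5412d48-50 shows further down):

    import subprocess

    ns = "ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2"
    subprocess.run(["ip", "netns", "add", ns], check=True)
    subprocess.run(["ip", "link", "add", "tape5412d48-50",
                    "type", "veth", "peer", "name", "tape5412d48-51"],
                   check=True)
    # Move the inner end into the metadata namespace and bring both ends up.
    subprocess.run(["ip", "link", "set", "tape5412d48-51", "netns", ns],
                   check=True)
    subprocess.run(["ip", "link", "set", "tape5412d48-50", "up"], check=True)
    subprocess.run(["ip", "-n", ns, "link", "set", "tape5412d48-51", "up"],
                   check=True)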
Feb 25 08:05:01 np0005629333 nova_compute[244014]: 2026-02-25 13:05:01.768 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.770 253268 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape5412d48-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Feb 25 08:05:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.771 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[640207f4-8b80-45b6-aba1-ba6848bae831]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:05:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.771 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e8f699ee-1f53-44b2-8590-b5c08810744d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:05:01 np0005629333 systemd[1]: Started Virtual Machine qemu-186-instance-00000098.
Feb 25 08:05:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.784 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[09c94362-ea14-4f18-a28d-d82dcd23699f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:05:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.796 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ff7f7987-8ca4-453d-8e7c-4b8afa1da010]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:05:01 np0005629333 systemd-udevd[383630]: Network interface NamePolicy= disabled on kernel command line.
Feb 25 08:05:01 np0005629333 NetworkManager[49836]: <info>  [1772024701.8148] device (tap6beae6ca-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Feb 25 08:05:01 np0005629333 NetworkManager[49836]: <info>  [1772024701.8153] device (tap6beae6ca-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Feb 25 08:05:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.822 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[e238740f-bea2-43d0-a30c-ea581369b70b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:05:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.828 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[1df8a820-49a9-4863-a63a-2686e4a417b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:05:01 np0005629333 NetworkManager[49836]: <info>  [1772024701.8287] manager: (tape5412d48-50): new Veth device (/org/freedesktop/NetworkManager/Devices/661)
Feb 25 08:05:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.855 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[d9fc8256-6bef-40fd-9676-0b9f3fe3b954]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:05:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.858 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[2eb0df85-8812-49be-95bf-4e8c84a30a4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:05:01 np0005629333 NetworkManager[49836]: <info>  [1772024701.8792] device (tape5412d48-50): carrier: link connected
Feb 25 08:05:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.886 253343 DEBUG oslo.privsep.daemon [-] privsep: reply[4b93967d-e381-4746-bec9-9a66e629f696]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:05:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.901 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[459198b4-3be9-4990-b376-a079b58c0668]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5412d48-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:45:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667128, 'reachable_time': 27561, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
Feb 25 08:05:01 np0005629333 ovn_metadata_agent[157124]: 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383659, 'error': None, 'target': 'ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:05:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.917 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3277280c-6056-46ed-ac01-e29edda9f8b1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed2:455f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 667128, 'tstamp': 667128}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383660, 'error': None, 'target': 'ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:05:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.934 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[c8897625-c3ab-49d0-b0d4-fd128d75b3c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape5412d48-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:45:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667128, 'reachable_time': 27561, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 383661, 'error': None, 'target': 'ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:05:01 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:01.973 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d2555bc1-8b63-4a97-88b4-846a03ac392e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:02.031 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e29dd2fd-02f0-40b0-b95b-2739016a16fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:02.033 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5412d48-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:02.033 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:02.033 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape5412d48-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:05:02 np0005629333 kernel: tape5412d48-50: entered promiscuous mode
Feb 25 08:05:02 np0005629333 NetworkManager[49836]: <info>  [1772024702.0367] manager: (tape5412d48-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/662)
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:02.038 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape5412d48-50, col_values=(('external_ids', {'iface-id': 'b38e9a57-21bb-4d54-870d-12b7311abbfa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.038 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:02 np0005629333 ovn_controller[147040]: 2026-02-25T13:05:02Z|01608|binding|INFO|Releasing lport b38e9a57-21bb-4d54-870d-12b7311abbfa from this chassis (sb_readonly=0)
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.047 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:02.049 157129 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e5412d48-5ea8-48ec-b0d9-17bc49a104c2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e5412d48-5ea8-48ec-b0d9-17bc49a104c2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:02.050 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[45474884-3556-47fb-b201-127d7af1e962]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:02.050 157129 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]: global
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]:    log         /dev/log local0 debug
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]:    log-tag     haproxy-metadata-proxy-e5412d48-5ea8-48ec-b0d9-17bc49a104c2
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]:    user        root
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]:    group       root
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]:    maxconn     1024
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]:    pidfile     /var/lib/neutron/external/pids/e5412d48-5ea8-48ec-b0d9-17bc49a104c2.pid.haproxy
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]:    daemon
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]: defaults
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]:    log global
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]:    mode http
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]:    option httplog
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]:    option dontlognull
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]:    option http-server-close
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]:    option forwardfor
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]:    retries                 3
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]:    timeout http-request    30s
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]:    timeout connect         30s
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]:    timeout client          32s
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]:    timeout server          32s
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]:    timeout http-keep-alive 30s
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]: 
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]: listen listener
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]:    bind 169.254.169.254:80
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]:    server metadata /var/lib/neutron/metadata_proxy
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]:    http-request add-header X-OVN-Network-ID e5412d48-5ea8-48ec-b0d9-17bc49a104c2
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Feb 25 08:05:02 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:02.051 157129 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2', 'env', 'PROCESS_TAG=haproxy-e5412d48-5ea8-48ec-b0d9-17bc49a104c2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e5412d48-5ea8-48ec-b0d9-17bc49a104c2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.246 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024702.2458346, 65c18a8f-0df3-4e29-b22d-fbc9362683f8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.247 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] VM Started (Lifecycle Event)#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.284 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.288 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024702.2460415, 65c18a8f-0df3-4e29-b22d-fbc9362683f8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.294 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] VM Paused (Lifecycle Event)#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.315 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.321 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.343 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.364 244018 DEBUG nova.compute.manager [req-ef459609-0249-40bc-bb31-91a9f1eebab5 req-5f1d6f27-2a0e-463f-92d6-cfbe2ce2ea53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Received event network-vif-plugged-6beae6ca-810c-48a5-8fcd-5f58732e64f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.365 244018 DEBUG oslo_concurrency.lockutils [req-ef459609-0249-40bc-bb31-91a9f1eebab5 req-5f1d6f27-2a0e-463f-92d6-cfbe2ce2ea53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.366 244018 DEBUG oslo_concurrency.lockutils [req-ef459609-0249-40bc-bb31-91a9f1eebab5 req-5f1d6f27-2a0e-463f-92d6-cfbe2ce2ea53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.366 244018 DEBUG oslo_concurrency.lockutils [req-ef459609-0249-40bc-bb31-91a9f1eebab5 req-5f1d6f27-2a0e-463f-92d6-cfbe2ce2ea53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.366 244018 DEBUG nova.compute.manager [req-ef459609-0249-40bc-bb31-91a9f1eebab5 req-5f1d6f27-2a0e-463f-92d6-cfbe2ce2ea53 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Processing event network-vif-plugged-6beae6ca-810c-48a5-8fcd-5f58732e64f4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.367 244018 DEBUG nova.compute.manager [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.371 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024702.3710856, 65c18a8f-0df3-4e29-b22d-fbc9362683f8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.372 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] VM Resumed (Lifecycle Event)#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.373 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.376 244018 INFO nova.virt.libvirt.driver [-] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Instance spawned successfully.#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.377 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.392 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.395 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.404 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.404 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.405 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.405 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.406 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.406 244018 DEBUG nova.virt.libvirt.driver [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.413 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 08:05:02 np0005629333 podman[383735]: 2026-02-25 13:05:02.447059632 +0000 UTC m=+0.065486758 container create 8de1161d2b77c701346395c1228547fa336d8f21373a96aa40714a392538d024 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.467 244018 INFO nova.compute.manager [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Took 6.40 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.468 244018 DEBUG nova.compute.manager [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:05:02 np0005629333 systemd[1]: Started libpod-conmon-8de1161d2b77c701346395c1228547fa336d8f21373a96aa40714a392538d024.scope.
Feb 25 08:05:02 np0005629333 podman[383735]: 2026-02-25 13:05:02.414310048 +0000 UTC m=+0.032737214 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.527 244018 INFO nova.compute.manager [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Took 7.99 seconds to build instance.#033[00m
Feb 25 08:05:02 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:05:02 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47e92cfb3f6edfebaff05e0277f03fba995760b8b6190fab1455ec297808617c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 25 08:05:02 np0005629333 nova_compute[244014]: 2026-02-25 13:05:02.544 244018 DEBUG oslo_concurrency.lockutils [None req-838fdf77-529c-4977-81bf-2990eea0a58c 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:05:02 np0005629333 podman[383735]: 2026-02-25 13:05:02.554546263 +0000 UTC m=+0.172973369 container init 8de1161d2b77c701346395c1228547fa336d8f21373a96aa40714a392538d024 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223)
Feb 25 08:05:02 np0005629333 podman[383735]: 2026-02-25 13:05:02.56047725 +0000 UTC m=+0.178904356 container start 8de1161d2b77c701346395c1228547fa336d8f21373a96aa40714a392538d024 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 25 08:05:02 np0005629333 neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2[383750]: [NOTICE]   (383754) : New worker (383756) forked
Feb 25 08:05:02 np0005629333 neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2[383750]: [NOTICE]   (383754) : Loading success.
Feb 25 08:05:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2545: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 25 08:05:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:05:04 np0005629333 NetworkManager[49836]: <info>  [1772024704.2957] manager: (patch-provnet-b685db6e-d440-45ba-9962-47f768dffdfd-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/663)
Feb 25 08:05:04 np0005629333 NetworkManager[49836]: <info>  [1772024704.2967] manager: (patch-br-int-to-provnet-b685db6e-d440-45ba-9962-47f768dffdfd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/664)
Feb 25 08:05:04 np0005629333 nova_compute[244014]: 2026-02-25 13:05:04.303 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:04 np0005629333 nova_compute[244014]: 2026-02-25 13:05:04.336 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:04 np0005629333 ovn_controller[147040]: 2026-02-25T13:05:04Z|01609|binding|INFO|Releasing lport b38e9a57-21bb-4d54-870d-12b7311abbfa from this chassis (sb_readonly=0)
Feb 25 08:05:04 np0005629333 nova_compute[244014]: 2026-02-25 13:05:04.347 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:04 np0005629333 nova_compute[244014]: 2026-02-25 13:05:04.484 244018 DEBUG nova.compute.manager [req-c4c5466b-dbdc-4432-b26d-767ccbc9041d req-f5a35a35-8451-4de9-8cf0-a8ae3a0d6d30 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Received event network-vif-plugged-6beae6ca-810c-48a5-8fcd-5f58732e64f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:05:04 np0005629333 nova_compute[244014]: 2026-02-25 13:05:04.484 244018 DEBUG oslo_concurrency.lockutils [req-c4c5466b-dbdc-4432-b26d-767ccbc9041d req-f5a35a35-8451-4de9-8cf0-a8ae3a0d6d30 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:05:04 np0005629333 nova_compute[244014]: 2026-02-25 13:05:04.485 244018 DEBUG oslo_concurrency.lockutils [req-c4c5466b-dbdc-4432-b26d-767ccbc9041d req-f5a35a35-8451-4de9-8cf0-a8ae3a0d6d30 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:05:04 np0005629333 nova_compute[244014]: 2026-02-25 13:05:04.485 244018 DEBUG oslo_concurrency.lockutils [req-c4c5466b-dbdc-4432-b26d-767ccbc9041d req-f5a35a35-8451-4de9-8cf0-a8ae3a0d6d30 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:05:04 np0005629333 nova_compute[244014]: 2026-02-25 13:05:04.485 244018 DEBUG nova.compute.manager [req-c4c5466b-dbdc-4432-b26d-767ccbc9041d req-f5a35a35-8451-4de9-8cf0-a8ae3a0d6d30 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] No waiting events found dispatching network-vif-plugged-6beae6ca-810c-48a5-8fcd-5f58732e64f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 08:05:04 np0005629333 nova_compute[244014]: 2026-02-25 13:05:04.486 244018 WARNING nova.compute.manager [req-c4c5466b-dbdc-4432-b26d-767ccbc9041d req-f5a35a35-8451-4de9-8cf0-a8ae3a0d6d30 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Received unexpected event network-vif-plugged-6beae6ca-810c-48a5-8fcd-5f58732e64f4 for instance with vm_state active and task_state None.#033[00m
Feb 25 08:05:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2546: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 25 08:05:05 np0005629333 nova_compute[244014]: 2026-02-25 13:05:05.739 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:05 np0005629333 nova_compute[244014]: 2026-02-25 13:05:05.813 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:06 np0005629333 nova_compute[244014]: 2026-02-25 13:05:06.594 244018 DEBUG nova.compute.manager [req-58406bf1-d45a-45fb-88ba-7c5ea36b72c4 req-52f8c1e3-c5b5-423b-864c-5ed0df99bf33 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Received event network-changed-6beae6ca-810c-48a5-8fcd-5f58732e64f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:05:06 np0005629333 nova_compute[244014]: 2026-02-25 13:05:06.594 244018 DEBUG nova.compute.manager [req-58406bf1-d45a-45fb-88ba-7c5ea36b72c4 req-52f8c1e3-c5b5-423b-864c-5ed0df99bf33 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Refreshing instance network info cache due to event network-changed-6beae6ca-810c-48a5-8fcd-5f58732e64f4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Feb 25 08:05:06 np0005629333 nova_compute[244014]: 2026-02-25 13:05:06.594 244018 DEBUG oslo_concurrency.lockutils [req-58406bf1-d45a-45fb-88ba-7c5ea36b72c4 req-52f8c1e3-c5b5-423b-864c-5ed0df99bf33 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "refresh_cache-65c18a8f-0df3-4e29-b22d-fbc9362683f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 08:05:06 np0005629333 nova_compute[244014]: 2026-02-25 13:05:06.595 244018 DEBUG oslo_concurrency.lockutils [req-58406bf1-d45a-45fb-88ba-7c5ea36b72c4 req-52f8c1e3-c5b5-423b-864c-5ed0df99bf33 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquired lock "refresh_cache-65c18a8f-0df3-4e29-b22d-fbc9362683f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 08:05:06 np0005629333 nova_compute[244014]: 2026-02-25 13:05:06.595 244018 DEBUG nova.network.neutron [req-58406bf1-d45a-45fb-88ba-7c5ea36b72c4 req-52f8c1e3-c5b5-423b-864c-5ed0df99bf33 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Refreshing network info cache for port 6beae6ca-810c-48a5-8fcd-5f58732e64f4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Feb 25 08:05:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2547: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Feb 25 08:05:08 np0005629333 nova_compute[244014]: 2026-02-25 13:05:08.211 244018 DEBUG nova.network.neutron [req-58406bf1-d45a-45fb-88ba-7c5ea36b72c4 req-52f8c1e3-c5b5-423b-864c-5ed0df99bf33 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Updated VIF entry in instance network info cache for port 6beae6ca-810c-48a5-8fcd-5f58732e64f4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Feb 25 08:05:08 np0005629333 nova_compute[244014]: 2026-02-25 13:05:08.212 244018 DEBUG nova.network.neutron [req-58406bf1-d45a-45fb-88ba-7c5ea36b72c4 req-52f8c1e3-c5b5-423b-864c-5ed0df99bf33 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Updating instance_info_cache with network_info: [{"id": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "address": "fa:16:3e:8a:be:b7", "network": {"id": "e5412d48-5ea8-48ec-b0d9-17bc49a104c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1780627753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b441b24147b4cdbaeba6c8bc41ce081", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6beae6ca-81", "ovs_interfaceid": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:05:08 np0005629333 nova_compute[244014]: 2026-02-25 13:05:08.237 244018 DEBUG oslo_concurrency.lockutils [req-58406bf1-d45a-45fb-88ba-7c5ea36b72c4 req-52f8c1e3-c5b5-423b-864c-5ed0df99bf33 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Releasing lock "refresh_cache-65c18a8f-0df3-4e29-b22d-fbc9362683f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.845474) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024708845540, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 462, "num_deletes": 250, "total_data_size": 403755, "memory_usage": 412832, "flush_reason": "Manual Compaction"}
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024708849634, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 335913, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54107, "largest_seqno": 54568, "table_properties": {"data_size": 333303, "index_size": 646, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 7000, "raw_average_key_size": 20, "raw_value_size": 328032, "raw_average_value_size": 967, "num_data_blocks": 28, "num_entries": 339, "num_filter_entries": 339, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772024687, "oldest_key_time": 1772024687, "file_creation_time": 1772024708, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 4215 microseconds, and 1653 cpu microseconds.
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.849681) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 335913 bytes OK
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.849717) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.851143) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.851157) EVENT_LOG_v1 {"time_micros": 1772024708851152, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.851177) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 400967, prev total WAL file size 400967, number of live WAL files 2.
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.851590) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303033' seq:72057594037927935, type:22 .. '6D6772737461740032323534' seq:0, type:0; will stop at (end)
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(328KB)], [125(10MB)]
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024708851637, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 11617996, "oldest_snapshot_seqno": -1}
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 7426 keys, 8329578 bytes, temperature: kUnknown
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024708903896, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 8329578, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8283382, "index_size": 26499, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18629, "raw_key_size": 193756, "raw_average_key_size": 26, "raw_value_size": 8154456, "raw_average_value_size": 1098, "num_data_blocks": 1026, "num_entries": 7426, "num_filter_entries": 7426, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772024708, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.904283) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 8329578 bytes
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.907310) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 221.8 rd, 159.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.8 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(59.4) write-amplify(24.8) OK, records in: 7933, records dropped: 507 output_compression: NoCompression
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.907334) EVENT_LOG_v1 {"time_micros": 1772024708907322, "job": 76, "event": "compaction_finished", "compaction_time_micros": 52371, "compaction_time_cpu_micros": 29462, "output_level": 6, "num_output_files": 1, "total_output_size": 8329578, "num_input_records": 7933, "num_output_records": 7426, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024708907514, "job": 76, "event": "table_file_deletion", "file_number": 127}
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024708908791, "job": 76, "event": "table_file_deletion", "file_number": 125}
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.851529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.908846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.908852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.908854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.908856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:05:08 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:05:08.908858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:05:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2548: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 25 08:05:10 np0005629333 nova_compute[244014]: 2026-02-25 13:05:10.741 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:10 np0005629333 nova_compute[244014]: 2026-02-25 13:05:10.816 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2549: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Feb 25 08:05:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2550: 305 pgs: 305 active+clean; 221 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Feb 25 08:05:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:05:14 np0005629333 ovn_controller[147040]: 2026-02-25T13:05:14Z|00212|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8a:be:b7 10.100.0.3
Feb 25 08:05:14 np0005629333 ovn_controller[147040]: 2026-02-25T13:05:14Z|00213|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8a:be:b7 10.100.0.3
Feb 25 08:05:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2551: 305 pgs: 305 active+clean; 221 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 102 op/s
Feb 25 08:05:15 np0005629333 nova_compute[244014]: 2026-02-25 13:05:15.743 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:15 np0005629333 nova_compute[244014]: 2026-02-25 13:05:15.817 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2552: 305 pgs: 305 active+clean; 221 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 102 op/s
Feb 25 08:05:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:05:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2553: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Feb 25 08:05:20 np0005629333 nova_compute[244014]: 2026-02-25 13:05:20.745 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:20 np0005629333 nova_compute[244014]: 2026-02-25 13:05:20.819 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2554: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 25 08:05:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:22.468 157394 DEBUG eventlet.wsgi.server [-] (157394) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Feb 25 08:05:22 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:22.470 157394 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0#015
Feb 25 08:05:22 np0005629333 ovn_metadata_agent[157124]: Accept: */*#015
Feb 25 08:05:22 np0005629333 ovn_metadata_agent[157124]: Connection: close#015
Feb 25 08:05:22 np0005629333 ovn_metadata_agent[157124]: Content-Type: text/plain#015
Feb 25 08:05:22 np0005629333 ovn_metadata_agent[157124]: Host: 169.254.169.254#015
Feb 25 08:05:22 np0005629333 ovn_metadata_agent[157124]: User-Agent: curl/7.84.0#015
Feb 25 08:05:22 np0005629333 ovn_metadata_agent[157124]: X-Forwarded-For: 10.100.0.3#015
Feb 25 08:05:22 np0005629333 ovn_metadata_agent[157124]: X-Ovn-Network-Id: e5412d48-5ea8-48ec-b0d9-17bc49a104c2 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Feb 25 08:05:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2555: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Feb 25 08:05:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:05:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:24.350 157394 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Feb 25 08:05:24 np0005629333 haproxy-metadata-proxy-e5412d48-5ea8-48ec-b0d9-17bc49a104c2[383756]: 10.100.0.3:33532 [25/Feb/2026:13:05:22.464] listener listener/metadata 0/0/0/1886/1886 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Feb 25 08:05:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:24.351 157394 INFO eventlet.wsgi.server [-] 10.100.0.3,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.8806088#033[00m
Feb 25 08:05:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:24.482 157394 DEBUG eventlet.wsgi.server [-] (157394) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Feb 25 08:05:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:24.483 157394 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0#015
Feb 25 08:05:24 np0005629333 ovn_metadata_agent[157124]: Accept: */*#015
Feb 25 08:05:24 np0005629333 ovn_metadata_agent[157124]: Connection: close#015
Feb 25 08:05:24 np0005629333 ovn_metadata_agent[157124]: Content-Length: 100#015
Feb 25 08:05:24 np0005629333 ovn_metadata_agent[157124]: Content-Type: application/x-www-form-urlencoded#015
Feb 25 08:05:24 np0005629333 ovn_metadata_agent[157124]: Host: 169.254.169.254#015
Feb 25 08:05:24 np0005629333 ovn_metadata_agent[157124]: User-Agent: curl/7.84.0#015
Feb 25 08:05:24 np0005629333 ovn_metadata_agent[157124]: X-Forwarded-For: 10.100.0.3#015
Feb 25 08:05:24 np0005629333 ovn_metadata_agent[157124]: X-Ovn-Network-Id: e5412d48-5ea8-48ec-b0d9-17bc49a104c2#015
Feb 25 08:05:24 np0005629333 ovn_metadata_agent[157124]: #015
Feb 25 08:05:24 np0005629333 ovn_metadata_agent[157124]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Feb 25 08:05:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:24.739 157394 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Feb 25 08:05:24 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:24.740 157394 INFO eventlet.wsgi.server [-] 10.100.0.3,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2570720#033[00m
Feb 25 08:05:24 np0005629333 haproxy-metadata-proxy-e5412d48-5ea8-48ec-b0d9-17bc49a104c2[383756]: 10.100.0.3:33534 [25/Feb/2026:13:05:24.481] listener listener/metadata 0/0/0/259/259 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Feb 25 08:05:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2556: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 212 KiB/s rd, 109 KiB/s wr, 27 op/s
Feb 25 08:05:25 np0005629333 nova_compute[244014]: 2026-02-25 13:05:25.748 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:25 np0005629333 podman[383767]: 2026-02-25 13:05:25.785712815 +0000 UTC m=+0.050369129 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 08:05:25 np0005629333 podman[383768]: 2026-02-25 13:05:25.811455871 +0000 UTC m=+0.078251255 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 25 08:05:25 np0005629333 nova_compute[244014]: 2026-02-25 13:05:25.821 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2557: 305 pgs: 305 active+clean; 233 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 216 KiB/s rd, 109 KiB/s wr, 28 op/s
Feb 25 08:05:27 np0005629333 nova_compute[244014]: 2026-02-25 13:05:27.077 244018 DEBUG oslo_concurrency.lockutils [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Acquiring lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:05:27 np0005629333 nova_compute[244014]: 2026-02-25 13:05:27.078 244018 DEBUG oslo_concurrency.lockutils [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:05:27 np0005629333 nova_compute[244014]: 2026-02-25 13:05:27.079 244018 DEBUG oslo_concurrency.lockutils [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Acquiring lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:05:27 np0005629333 nova_compute[244014]: 2026-02-25 13:05:27.080 244018 DEBUG oslo_concurrency.lockutils [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:05:27 np0005629333 nova_compute[244014]: 2026-02-25 13:05:27.080 244018 DEBUG oslo_concurrency.lockutils [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:05:27 np0005629333 nova_compute[244014]: 2026-02-25 13:05:27.082 244018 INFO nova.compute.manager [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Terminating instance#033[00m
Feb 25 08:05:27 np0005629333 nova_compute[244014]: 2026-02-25 13:05:27.083 244018 DEBUG nova.compute.manager [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 08:05:27 np0005629333 kernel: tap6beae6ca-81 (unregistering): left promiscuous mode
Feb 25 08:05:27 np0005629333 NetworkManager[49836]: <info>  [1772024727.1895] device (tap6beae6ca-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Feb 25 08:05:27 np0005629333 nova_compute[244014]: 2026-02-25 13:05:27.189 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:27 np0005629333 nova_compute[244014]: 2026-02-25 13:05:27.197 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:27 np0005629333 ovn_controller[147040]: 2026-02-25T13:05:27Z|01610|binding|INFO|Releasing lport 6beae6ca-810c-48a5-8fcd-5f58732e64f4 from this chassis (sb_readonly=0)
Feb 25 08:05:27 np0005629333 ovn_controller[147040]: 2026-02-25T13:05:27Z|01611|binding|INFO|Setting lport 6beae6ca-810c-48a5-8fcd-5f58732e64f4 down in Southbound
Feb 25 08:05:27 np0005629333 ovn_controller[147040]: 2026-02-25T13:05:27Z|01612|binding|INFO|Removing iface tap6beae6ca-81 ovn-installed in OVS
Feb 25 08:05:27 np0005629333 nova_compute[244014]: 2026-02-25 13:05:27.199 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.206 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:be:b7 10.100.0.3'], port_security=['fa:16:3e:8a:be:b7 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '65c18a8f-0df3-4e29-b22d-fbc9362683f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5412d48-5ea8-48ec-b0d9-17bc49a104c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7b441b24147b4cdbaeba6c8bc41ce081', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1265db5c-9190-4fc7-98f2-b8b04c9ddabc 876954b7-4b56-43af-996d-ec29d7f8f2ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=247650f0-77ff-40c8-b14f-1d2a2a430ca3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6beae6ca-810c-48a5-8fcd-5f58732e64f4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 08:05:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.209 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6beae6ca-810c-48a5-8fcd-5f58732e64f4 in datapath e5412d48-5ea8-48ec-b0d9-17bc49a104c2 unbound from our chassis#033[00m
Feb 25 08:05:27 np0005629333 nova_compute[244014]: 2026-02-25 13:05:27.211 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.211 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e5412d48-5ea8-48ec-b0d9-17bc49a104c2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 08:05:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.213 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[ea7c35c4-5979-4e93-b259-fa72f3b78647]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:05:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.214 157129 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2 namespace which is not needed anymore#033[00m
Feb 25 08:05:27 np0005629333 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000098.scope: Deactivated successfully.
Feb 25 08:05:27 np0005629333 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000098.scope: Consumed 12.593s CPU time.
Feb 25 08:05:27 np0005629333 systemd-machined[210048]: Machine qemu-186-instance-00000098 terminated.
Feb 25 08:05:27 np0005629333 kernel: tap6beae6ca-81: entered promiscuous mode
Feb 25 08:05:27 np0005629333 kernel: tap6beae6ca-81 (unregistering): left promiscuous mode
Feb 25 08:05:27 np0005629333 NetworkManager[49836]: <info>  [1772024727.3118] manager: (tap6beae6ca-81): new Tun device (/org/freedesktop/NetworkManager/Devices/665)
Feb 25 08:05:27 np0005629333 nova_compute[244014]: 2026-02-25 13:05:27.307 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:27 np0005629333 ovn_controller[147040]: 2026-02-25T13:05:27Z|01613|binding|INFO|Claiming lport 6beae6ca-810c-48a5-8fcd-5f58732e64f4 for this chassis.
Feb 25 08:05:27 np0005629333 ovn_controller[147040]: 2026-02-25T13:05:27Z|01614|binding|INFO|6beae6ca-810c-48a5-8fcd-5f58732e64f4: Claiming fa:16:3e:8a:be:b7 10.100.0.3
Feb 25 08:05:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.322 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:be:b7 10.100.0.3'], port_security=['fa:16:3e:8a:be:b7 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '65c18a8f-0df3-4e29-b22d-fbc9362683f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5412d48-5ea8-48ec-b0d9-17bc49a104c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7b441b24147b4cdbaeba6c8bc41ce081', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1265db5c-9190-4fc7-98f2-b8b04c9ddabc 876954b7-4b56-43af-996d-ec29d7f8f2ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=247650f0-77ff-40c8-b14f-1d2a2a430ca3, chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6beae6ca-810c-48a5-8fcd-5f58732e64f4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 08:05:27 np0005629333 ovn_controller[147040]: 2026-02-25T13:05:27Z|01615|binding|INFO|Setting lport 6beae6ca-810c-48a5-8fcd-5f58732e64f4 up in Southbound
Feb 25 08:05:27 np0005629333 nova_compute[244014]: 2026-02-25 13:05:27.327 244018 INFO nova.virt.libvirt.driver [-] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Instance destroyed successfully.#033[00m
Feb 25 08:05:27 np0005629333 nova_compute[244014]: 2026-02-25 13:05:27.328 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:27 np0005629333 ovn_controller[147040]: 2026-02-25T13:05:27Z|01616|binding|INFO|Releasing lport 6beae6ca-810c-48a5-8fcd-5f58732e64f4 from this chassis (sb_readonly=1)
Feb 25 08:05:27 np0005629333 ovn_controller[147040]: 2026-02-25T13:05:27Z|01617|if_status|INFO|Dropped 1 log messages in last 438 seconds (most recently, 438 seconds ago) due to excessive rate
Feb 25 08:05:27 np0005629333 ovn_controller[147040]: 2026-02-25T13:05:27Z|01618|if_status|INFO|Not setting lport 6beae6ca-810c-48a5-8fcd-5f58732e64f4 down as sb is readonly
Feb 25 08:05:27 np0005629333 nova_compute[244014]: 2026-02-25 13:05:27.331 244018 DEBUG nova.objects.instance [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lazy-loading 'resources' on Instance uuid 65c18a8f-0df3-4e29-b22d-fbc9362683f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 08:05:27 np0005629333 ovn_controller[147040]: 2026-02-25T13:05:27Z|01619|binding|INFO|Releasing lport 6beae6ca-810c-48a5-8fcd-5f58732e64f4 from this chassis (sb_readonly=0)
Feb 25 08:05:27 np0005629333 ovn_controller[147040]: 2026-02-25T13:05:27Z|01620|binding|INFO|Setting lport 6beae6ca-810c-48a5-8fcd-5f58732e64f4 down in Southbound
Feb 25 08:05:27 np0005629333 nova_compute[244014]: 2026-02-25 13:05:27.339 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.341 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:be:b7 10.100.0.3'], port_security=['fa:16:3e:8a:be:b7 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '65c18a8f-0df3-4e29-b22d-fbc9362683f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e5412d48-5ea8-48ec-b0d9-17bc49a104c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7b441b24147b4cdbaeba6c8bc41ce081', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1265db5c-9190-4fc7-98f2-b8b04c9ddabc 876954b7-4b56-43af-996d-ec29d7f8f2ae', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=247650f0-77ff-40c8-b14f-1d2a2a430ca3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>], logical_port=6beae6ca-810c-48a5-8fcd-5f58732e64f4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe5544ee940>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 08:05:27 np0005629333 nova_compute[244014]: 2026-02-25 13:05:27.344 244018 DEBUG nova.virt.libvirt.vif [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-25T13:04:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-326185766',display_name='tempest-TestServerBasicOps-server-326185766',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-326185766',id=152,image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAgD3tk1qiKui8lJJf3rJQ9kydu9X1XrZCAFY86RY3JpEFptTJu0F5JBGrnQbFsHJxgtY6rKDs+7evBr0WJnOj8cmec2yTpdNqIYUHkpoNnjetcb9Sa9n8HT0qnNjdCSzw==',key_name='tempest-TestServerBasicOps-1849175750',keypairs=<?>,launch_index=0,launched_at=2026-02-25T13:05:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7b441b24147b4cdbaeba6c8bc41ce081',ramdisk_id='',reservation_id='r-pdnxw3eu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-1933998932',owner_user_name='tempest-TestServerBasicOps-1933998932-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-25T13:05:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='49675ebd5cbe44e1a3373d23daabdc78',uuid=65c18a8f-0df3-4e29-b22d-fbc9362683f8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "address": "fa:16:3e:8a:be:b7", "network": {"id": "e5412d48-5ea8-48ec-b0d9-17bc49a104c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1780627753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b441b24147b4cdbaeba6c8bc41ce081", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6beae6ca-81", "ovs_interfaceid": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Feb 25 08:05:27 np0005629333 nova_compute[244014]: 2026-02-25 13:05:27.345 244018 DEBUG nova.network.os_vif_util [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Converting VIF {"id": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "address": "fa:16:3e:8a:be:b7", "network": {"id": "e5412d48-5ea8-48ec-b0d9-17bc49a104c2", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1780627753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7b441b24147b4cdbaeba6c8bc41ce081", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6beae6ca-81", "ovs_interfaceid": "6beae6ca-810c-48a5-8fcd-5f58732e64f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Feb 25 08:05:27 np0005629333 nova_compute[244014]: 2026-02-25 13:05:27.346 244018 DEBUG nova.network.os_vif_util [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8a:be:b7,bridge_name='br-int',has_traffic_filtering=True,id=6beae6ca-810c-48a5-8fcd-5f58732e64f4,network=Network(e5412d48-5ea8-48ec-b0d9-17bc49a104c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6beae6ca-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Feb 25 08:05:27 np0005629333 nova_compute[244014]: 2026-02-25 13:05:27.347 244018 DEBUG os_vif [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8a:be:b7,bridge_name='br-int',has_traffic_filtering=True,id=6beae6ca-810c-48a5-8fcd-5f58732e64f4,network=Network(e5412d48-5ea8-48ec-b0d9-17bc49a104c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6beae6ca-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Feb 25 08:05:27 np0005629333 nova_compute[244014]: 2026-02-25 13:05:27.350 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:27 np0005629333 nova_compute[244014]: 2026-02-25 13:05:27.350 244018 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6beae6ca-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:05:27 np0005629333 nova_compute[244014]: 2026-02-25 13:05:27.353 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:27 np0005629333 nova_compute[244014]: 2026-02-25 13:05:27.354 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:27 np0005629333 nova_compute[244014]: 2026-02-25 13:05:27.358 244018 INFO os_vif [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8a:be:b7,bridge_name='br-int',has_traffic_filtering=True,id=6beae6ca-810c-48a5-8fcd-5f58732e64f4,network=Network(e5412d48-5ea8-48ec-b0d9-17bc49a104c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6beae6ca-81')#033[00m
Feb 25 08:05:27 np0005629333 neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2[383750]: [NOTICE]   (383754) : haproxy version is 2.8.14-c23fe91
Feb 25 08:05:27 np0005629333 neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2[383750]: [NOTICE]   (383754) : path to executable is /usr/sbin/haproxy
Feb 25 08:05:27 np0005629333 neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2[383750]: [WARNING]  (383754) : Exiting Master process...
Feb 25 08:05:27 np0005629333 neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2[383750]: [ALERT]    (383754) : Current worker (383756) exited with code 143 (Terminated)
Feb 25 08:05:27 np0005629333 neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2[383750]: [WARNING]  (383754) : All workers exited. Exiting... (0)
Feb 25 08:05:27 np0005629333 systemd[1]: libpod-8de1161d2b77c701346395c1228547fa336d8f21373a96aa40714a392538d024.scope: Deactivated successfully.
Feb 25 08:05:27 np0005629333 podman[383835]: 2026-02-25 13:05:27.402783714 +0000 UTC m=+0.101805941 container died 8de1161d2b77c701346395c1228547fa336d8f21373a96aa40714a392538d024 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 08:05:27 np0005629333 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8de1161d2b77c701346395c1228547fa336d8f21373a96aa40714a392538d024-userdata-shm.mount: Deactivated successfully.
Feb 25 08:05:27 np0005629333 systemd[1]: var-lib-containers-storage-overlay-47e92cfb3f6edfebaff05e0277f03fba995760b8b6190fab1455ec297808617c-merged.mount: Deactivated successfully.
Feb 25 08:05:27 np0005629333 podman[383835]: 2026-02-25 13:05:27.542980788 +0000 UTC m=+0.242003015 container cleanup 8de1161d2b77c701346395c1228547fa336d8f21373a96aa40714a392538d024 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 08:05:27 np0005629333 systemd[1]: libpod-conmon-8de1161d2b77c701346395c1228547fa336d8f21373a96aa40714a392538d024.scope: Deactivated successfully.
Feb 25 08:05:27 np0005629333 podman[383894]: 2026-02-25 13:05:27.675714511 +0000 UTC m=+0.105169707 container remove 8de1161d2b77c701346395c1228547fa336d8f21373a96aa40714a392538d024 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Feb 25 08:05:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.681 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[9909ff6f-97df-4582-a544-1b0d4c7a901e]: (4, ('Wed Feb 25 01:05:27 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2 (8de1161d2b77c701346395c1228547fa336d8f21373a96aa40714a392538d024)\n8de1161d2b77c701346395c1228547fa336d8f21373a96aa40714a392538d024\nWed Feb 25 01:05:27 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2 (8de1161d2b77c701346395c1228547fa336d8f21373a96aa40714a392538d024)\n8de1161d2b77c701346395c1228547fa336d8f21373a96aa40714a392538d024\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:05:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.684 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[907c047b-c3e3-47ef-bd1e-889f377778cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:05:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.686 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape5412d48-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:05:27 np0005629333 nova_compute[244014]: 2026-02-25 13:05:27.688 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:27 np0005629333 kernel: tape5412d48-50: left promiscuous mode
Feb 25 08:05:27 np0005629333 nova_compute[244014]: 2026-02-25 13:05:27.694 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.697 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[70dbdce8-1f6c-4753-9f6c-b604a14db2ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:05:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.713 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[31e7a596-8df0-4d09-b5b4-ce6686fa9ec6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:05:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.715 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[d181dd07-1cf9-49b6-b43b-bf3daf1d120f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:05:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.736 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c4569c-4c15-4ad5-a43c-75a39566461f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 667122, 'reachable_time': 37626, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383909, 'error': None, 'target': 'ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:05:27 np0005629333 systemd[1]: run-netns-ovnmeta\x2de5412d48\x2d5ea8\x2d48ec\x2db0d9\x2d17bc49a104c2.mount: Deactivated successfully.
Feb 25 08:05:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.741 157528 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e5412d48-5ea8-48ec-b0d9-17bc49a104c2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Feb 25 08:05:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.741 157528 DEBUG oslo.privsep.daemon [-] privsep: reply[34346772-d730-49d6-ad6d-630413e35cb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:05:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.743 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6beae6ca-810c-48a5-8fcd-5f58732e64f4 in datapath e5412d48-5ea8-48ec-b0d9-17bc49a104c2 unbound from our chassis#033[00m
Feb 25 08:05:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.745 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e5412d48-5ea8-48ec-b0d9-17bc49a104c2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 08:05:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.746 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[505f4bb6-9a45-4ff3-ab2c-5d0abcc11379]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:05:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.746 157129 INFO neutron.agent.ovn.metadata.agent [-] Port 6beae6ca-810c-48a5-8fcd-5f58732e64f4 in datapath e5412d48-5ea8-48ec-b0d9-17bc49a104c2 unbound from our chassis#033[00m
Feb 25 08:05:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.747 157129 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e5412d48-5ea8-48ec-b0d9-17bc49a104c2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Feb 25 08:05:27 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:27.748 253268 DEBUG oslo.privsep.daemon [-] privsep: reply[3d7cec63-08e2-4a89-ada9-979e071ed1ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Feb 25 08:05:28 np0005629333 nova_compute[244014]: 2026-02-25 13:05:28.017 244018 INFO nova.virt.libvirt.driver [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Deleting instance files /var/lib/nova/instances/65c18a8f-0df3-4e29-b22d-fbc9362683f8_del#033[00m
Feb 25 08:05:28 np0005629333 nova_compute[244014]: 2026-02-25 13:05:28.018 244018 INFO nova.virt.libvirt.driver [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Deletion of /var/lib/nova/instances/65c18a8f-0df3-4e29-b22d-fbc9362683f8_del complete#033[00m
Feb 25 08:05:28 np0005629333 nova_compute[244014]: 2026-02-25 13:05:28.096 244018 INFO nova.compute.manager [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Took 1.01 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 08:05:28 np0005629333 nova_compute[244014]: 2026-02-25 13:05:28.097 244018 DEBUG oslo.service.loopingcall [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 08:05:28 np0005629333 nova_compute[244014]: 2026-02-25 13:05:28.097 244018 DEBUG nova.compute.manager [-] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 08:05:28 np0005629333 nova_compute[244014]: 2026-02-25 13:05:28.098 244018 DEBUG nova.network.neutron [-] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 08:05:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:05:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2558: 305 pgs: 305 active+clean; 215 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 223 KiB/s rd, 110 KiB/s wr, 37 op/s
Feb 25 08:05:30 np0005629333 nova_compute[244014]: 2026-02-25 13:05:30.293 244018 DEBUG nova.compute.manager [req-69290896-1dd6-43ac-a7f6-1bf884878d5a req-68f57877-e558-4ae1-9cb9-865410c903fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Received event network-vif-unplugged-6beae6ca-810c-48a5-8fcd-5f58732e64f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:05:30 np0005629333 nova_compute[244014]: 2026-02-25 13:05:30.295 244018 DEBUG oslo_concurrency.lockutils [req-69290896-1dd6-43ac-a7f6-1bf884878d5a req-68f57877-e558-4ae1-9cb9-865410c903fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:05:30 np0005629333 nova_compute[244014]: 2026-02-25 13:05:30.296 244018 DEBUG oslo_concurrency.lockutils [req-69290896-1dd6-43ac-a7f6-1bf884878d5a req-68f57877-e558-4ae1-9cb9-865410c903fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:05:30 np0005629333 nova_compute[244014]: 2026-02-25 13:05:30.296 244018 DEBUG oslo_concurrency.lockutils [req-69290896-1dd6-43ac-a7f6-1bf884878d5a req-68f57877-e558-4ae1-9cb9-865410c903fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:05:30 np0005629333 nova_compute[244014]: 2026-02-25 13:05:30.296 244018 DEBUG nova.compute.manager [req-69290896-1dd6-43ac-a7f6-1bf884878d5a req-68f57877-e558-4ae1-9cb9-865410c903fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] No waiting events found dispatching network-vif-unplugged-6beae6ca-810c-48a5-8fcd-5f58732e64f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 08:05:30 np0005629333 nova_compute[244014]: 2026-02-25 13:05:30.297 244018 DEBUG nova.compute.manager [req-69290896-1dd6-43ac-a7f6-1bf884878d5a req-68f57877-e558-4ae1-9cb9-865410c903fe 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Received event network-vif-unplugged-6beae6ca-810c-48a5-8fcd-5f58732e64f4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Feb 25 08:05:30 np0005629333 nova_compute[244014]: 2026-02-25 13:05:30.751 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2559: 305 pgs: 305 active+clean; 215 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 14 KiB/s wr, 10 op/s
Feb 25 08:05:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:05:31
Feb 25 08:05:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:05:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:05:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'volumes', 'backups', '.rgw.root', 'vms', '.mgr']
Feb 25 08:05:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:05:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:05:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:05:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:05:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:05:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:05:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:05:31 np0005629333 nova_compute[244014]: 2026-02-25 13:05:31.655 244018 DEBUG nova.network.neutron [-] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:05:31 np0005629333 nova_compute[244014]: 2026-02-25 13:05:31.684 244018 INFO nova.compute.manager [-] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Took 3.59 seconds to deallocate network for instance.#033[00m
Feb 25 08:05:31 np0005629333 nova_compute[244014]: 2026-02-25 13:05:31.790 244018 DEBUG oslo_concurrency.lockutils [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:05:31 np0005629333 nova_compute[244014]: 2026-02-25 13:05:31.791 244018 DEBUG oslo_concurrency.lockutils [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:05:31 np0005629333 nova_compute[244014]: 2026-02-25 13:05:31.854 244018 DEBUG oslo_concurrency.processutils [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:05:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:05:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:05:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:05:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:05:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:05:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:05:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:05:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:05:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:05:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:05:32 np0005629333 nova_compute[244014]: 2026-02-25 13:05:32.355 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:32 np0005629333 nova_compute[244014]: 2026-02-25 13:05:32.393 244018 DEBUG nova.compute.manager [req-420eb7ae-35b1-4e50-9c11-6307081e244e req-c6d5e65b-4362-445b-8206-0bdcfb05cc64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Received event network-vif-deleted-6beae6ca-810c-48a5-8fcd-5f58732e64f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:05:32 np0005629333 nova_compute[244014]: 2026-02-25 13:05:32.394 244018 DEBUG nova.compute.manager [req-420eb7ae-35b1-4e50-9c11-6307081e244e req-c6d5e65b-4362-445b-8206-0bdcfb05cc64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Received event network-vif-plugged-6beae6ca-810c-48a5-8fcd-5f58732e64f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Feb 25 08:05:32 np0005629333 nova_compute[244014]: 2026-02-25 13:05:32.395 244018 DEBUG oslo_concurrency.lockutils [req-420eb7ae-35b1-4e50-9c11-6307081e244e req-c6d5e65b-4362-445b-8206-0bdcfb05cc64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Acquiring lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:05:32 np0005629333 nova_compute[244014]: 2026-02-25 13:05:32.395 244018 DEBUG oslo_concurrency.lockutils [req-420eb7ae-35b1-4e50-9c11-6307081e244e req-c6d5e65b-4362-445b-8206-0bdcfb05cc64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:05:32 np0005629333 nova_compute[244014]: 2026-02-25 13:05:32.396 244018 DEBUG oslo_concurrency.lockutils [req-420eb7ae-35b1-4e50-9c11-6307081e244e req-c6d5e65b-4362-445b-8206-0bdcfb05cc64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:05:32 np0005629333 nova_compute[244014]: 2026-02-25 13:05:32.396 244018 DEBUG nova.compute.manager [req-420eb7ae-35b1-4e50-9c11-6307081e244e req-c6d5e65b-4362-445b-8206-0bdcfb05cc64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] No waiting events found dispatching network-vif-plugged-6beae6ca-810c-48a5-8fcd-5f58732e64f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Feb 25 08:05:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:05:32 np0005629333 nova_compute[244014]: 2026-02-25 13:05:32.397 244018 WARNING nova.compute.manager [req-420eb7ae-35b1-4e50-9c11-6307081e244e req-c6d5e65b-4362-445b-8206-0bdcfb05cc64 8d8ae619eeb042d288adeecaeb3aff1e f49d31b112924d8f8f817128c4cbfbbf - - default default] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Received unexpected event network-vif-plugged-6beae6ca-810c-48a5-8fcd-5f58732e64f4 for instance with vm_state deleted and task_state None.#033[00m
Feb 25 08:05:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1821226578' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:05:32 np0005629333 nova_compute[244014]: 2026-02-25 13:05:32.416 244018 DEBUG oslo_concurrency.processutils [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:05:32 np0005629333 nova_compute[244014]: 2026-02-25 13:05:32.422 244018 DEBUG nova.compute.provider_tree [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:05:32 np0005629333 nova_compute[244014]: 2026-02-25 13:05:32.444 244018 DEBUG nova.scheduler.client.report [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:05:32 np0005629333 nova_compute[244014]: 2026-02-25 13:05:32.471 244018 DEBUG oslo_concurrency.lockutils [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:05:32 np0005629333 nova_compute[244014]: 2026-02-25 13:05:32.510 244018 INFO nova.scheduler.client.report [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Deleted allocations for instance 65c18a8f-0df3-4e29-b22d-fbc9362683f8#033[00m
Feb 25 08:05:32 np0005629333 nova_compute[244014]: 2026-02-25 13:05:32.583 244018 DEBUG oslo_concurrency.lockutils [None req-73504e21-6cfc-49a4-afe2-1a5abea4b554 49675ebd5cbe44e1a3373d23daabdc78 7b441b24147b4cdbaeba6c8bc41ce081 - - default default] Lock "65c18a8f-0df3-4e29-b22d-fbc9362683f8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:05:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2560: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 15 KiB/s wr, 30 op/s
Feb 25 08:05:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:05:33 np0005629333 nova_compute[244014]: 2026-02-25 13:05:33.922 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:33.923 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 08:05:33 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:33.924 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 25 08:05:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2561: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Feb 25 08:05:35 np0005629333 nova_compute[244014]: 2026-02-25 13:05:35.752 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:05:35 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:35.926 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:05:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2562: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 KiB/s wr, 30 op/s
Feb 25 08:05:37 np0005629333 nova_compute[244014]: 2026-02-25 13:05:37.360 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:05:37 np0005629333 nova_compute[244014]: 2026-02-25 13:05:37.468 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:05:37 np0005629333 nova_compute[244014]: 2026-02-25 13:05:37.532 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:05:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:05:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2563: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 2.8 KiB/s wr, 29 op/s
Feb 25 08:05:40 np0005629333 nova_compute[244014]: 2026-02-25 13:05:40.756 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:05:40 np0005629333 nova_compute[244014]: 2026-02-25 13:05:40.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:05:40 np0005629333 nova_compute[244014]: 2026-02-25 13:05:40.903 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:05:40 np0005629333 nova_compute[244014]: 2026-02-25 13:05:40.903 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:05:40 np0005629333 nova_compute[244014]: 2026-02-25 13:05:40.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:05:40 np0005629333 nova_compute[244014]: 2026-02-25 13:05:40.904 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 08:05:40 np0005629333 nova_compute[244014]: 2026-02-25 13:05:40.904 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:05:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2564: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Feb 25 08:05:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:05:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/196505153' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:05:41 np0005629333 nova_compute[244014]: 2026-02-25 13:05:41.528 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
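nova's RBD image backend sizes its DISK_GB inventory from the `ceph df --format=json` output fetched above. A sketch of that arithmetic on a hand-written sample document, shaped to match the cluster figures in this log (~60 GiB total, ~1.1 GiB used); real output carries many more fields:

# Parse the cluster totals the way the RBD driver consumes them.
# The sample JSON is an assumption matched to this log, not real output.
import json

sample = json.loads("""
{"stats": {"total_bytes": 64411926528,
           "total_used_bytes": 1181116006,
           "total_avail_bytes": 63230810522},
 "pools": [{"name": "vms", "stats": {"stored": 1098907}}]}
""")

GiB = 1024 ** 3
stats = sample['stats']
print('total %.0f GiB, used %.1f GiB, avail %.1f GiB' % (
    stats['total_bytes'] / GiB,
    stats['total_used_bytes'] / GiB,
    stats['total_avail_bytes'] / GiB))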
Feb 25 08:05:41 np0005629333 nova_compute[244014]: 2026-02-25 13:05:41.712 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 08:05:41 np0005629333 nova_compute[244014]: 2026-02-25 13:05:41.713 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3529MB free_disk=59.98725803941488GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 08:05:41 np0005629333 nova_compute[244014]: 2026-02-25 13:05:41.713 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:05:41 np0005629333 nova_compute[244014]: 2026-02-25 13:05:41.713 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:05:41 np0005629333 nova_compute[244014]: 2026-02-25 13:05:41.794 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 08:05:41 np0005629333 nova_compute[244014]: 2026-02-25 13:05:41.794 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 08:05:41 np0005629333 nova_compute[244014]: 2026-02-25 13:05:41.813 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 08:05:41 np0005629333 nova_compute[244014]: 2026-02-25 13:05:41.842 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 08:05:41 np0005629333 nova_compute[244014]: 2026-02-25 13:05:41.843 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 08:05:41 np0005629333 nova_compute[244014]: 2026-02-25 13:05:41.861 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 08:05:41 np0005629333 nova_compute[244014]: 2026-02-25 13:05:41.894 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 08:05:41 np0005629333 nova_compute[244014]: 2026-02-25 13:05:41.910 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:05:42 np0005629333 nova_compute[244014]: 2026-02-25 13:05:42.326 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024727.3249142, 65c18a8f-0df3-4e29-b22d-fbc9362683f8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 25 08:05:42 np0005629333 nova_compute[244014]: 2026-02-25 13:05:42.326 244018 INFO nova.compute.manager [-] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] VM Stopped (Lifecycle Event)
Feb 25 08:05:42 np0005629333 nova_compute[244014]: 2026-02-25 13:05:42.348 244018 DEBUG nova.compute.manager [None req-d3138dc2-63a6-4321-a940-07f7342911d2 - - - - - -] [instance: 65c18a8f-0df3-4e29-b22d-fbc9362683f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 25 08:05:42 np0005629333 nova_compute[244014]: 2026-02-25 13:05:42.363 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:05:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:05:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1808517837' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:05:42 np0005629333 nova_compute[244014]: 2026-02-25 13:05:42.466 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:05:42 np0005629333 nova_compute[244014]: 2026-02-25 13:05:42.472 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:05:42 np0005629333 nova_compute[244014]: 2026-02-25 13:05:42.490 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
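The inventory dict that placement reports unchanged above is what the scheduler ultimately budgets against: effective capacity per resource class is (total - reserved) * allocation_ratio. Reproducing that arithmetic for provider cb4dae98-2ac3-4218-9445-2320139e12ad:

# Effective schedulable capacity from the logged inventory values.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print('%s: %.0f schedulable' % (rc, cap))
# -> VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52 (52.2 before rounding)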
Feb 25 08:05:42 np0005629333 nova_compute[244014]: 2026-02-25 13:05:42.521 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 08:05:42 np0005629333 nova_compute[244014]: 2026-02-25 13:05:42.522 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:05:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:05:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:05:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:05:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:05:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.705684116624595e-05 of space, bias 1.0, pg target 0.005117052349873786 quantized to 32 (current 32)
Feb 25 08:05:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:05:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:05:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:05:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:05:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:05:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024945035750507153 of space, bias 1.0, pg target 0.7483510725152146 quantized to 32 (current 32)
Feb 25 08:05:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:05:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4068823102287942e-06 of space, bias 4.0, pg target 0.001688258772274553 quantized to 16 (current 16)
Feb 25 08:05:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:05:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:05:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:05:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:05:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:05:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:05:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:05:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:05:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:05:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
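The pg_autoscaler lines above are internally consistent: each raw pg target equals usage_ratio * bias * 300, which matches a PG budget of three OSDs at the default mon_target_pg_per_osd of 100 (an inference; the OSD count is not logged). A short check of that model against the logged pools; the power-of-two quantization and the keep-current behaviour (the autoscaler only resizes on roughly a 3x discrepancy) are not reproduced here:

# Verify raw pg targets against the logged usage ratios and biases.
# PG_BUDGET = 300 is inferred from the log, not stated in it.
PG_BUDGET = 300

pools = [
    ('.mgr',               7.185749983720779e-06, 1.0),
    ('vms',                1.705684116624595e-05, 1.0),
    ('images',             0.0024945035750507153, 1.0),
    ('cephfs.cephfs.meta', 1.4068823102287942e-06, 4.0),
]
for name, ratio, bias in pools:
    print('%-20s raw pg target %.6f' % (name, ratio * bias * PG_BUDGET))
# images -> 0.748351, matching the logged "pg target 0.7483510725152146"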
Feb 25 08:05:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2565: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Feb 25 08:05:43 np0005629333 nova_compute[244014]: 2026-02-25 13:05:43.522 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:05:43 np0005629333 nova_compute[244014]: 2026-02-25 13:05:43.523 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 08:05:43 np0005629333 nova_compute[244014]: 2026-02-25 13:05:43.524 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 08:05:43 np0005629333 nova_compute[244014]: 2026-02-25 13:05:43.544 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 08:05:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:05:44 np0005629333 nova_compute[244014]: 2026-02-25 13:05:44.893 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:05:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2566: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:05:45 np0005629333 nova_compute[244014]: 2026-02-25 13:05:45.758 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:05:46 np0005629333 nova_compute[244014]: 2026-02-25 13:05:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:05:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2567: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:05:47 np0005629333 nova_compute[244014]: 2026-02-25 13:05:47.367 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:05:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:05:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1036224926' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:05:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:05:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1036224926' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:05:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:05:48 np0005629333 nova_compute[244014]: 2026-02-25 13:05:48.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:05:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2568: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:05:50 np0005629333 nova_compute[244014]: 2026-02-25 13:05:50.761 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:05:50 np0005629333 nova_compute[244014]: 2026-02-25 13:05:50.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:05:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2569: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:05:52 np0005629333 nova_compute[244014]: 2026-02-25 13:05:52.080 244018 DEBUG oslo_concurrency.lockutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Acquiring lock "44f7664a-bc33-4251-9857-884c305df488" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:05:52 np0005629333 nova_compute[244014]: 2026-02-25 13:05:52.081 244018 DEBUG oslo_concurrency.lockutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lock "44f7664a-bc33-4251-9857-884c305df488" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:05:52 np0005629333 nova_compute[244014]: 2026-02-25 13:05:52.099 244018 DEBUG nova.compute.manager [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 25 08:05:52 np0005629333 nova_compute[244014]: 2026-02-25 13:05:52.176 244018 DEBUG oslo_concurrency.lockutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:05:52 np0005629333 nova_compute[244014]: 2026-02-25 13:05:52.177 244018 DEBUG oslo_concurrency.lockutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:05:52 np0005629333 nova_compute[244014]: 2026-02-25 13:05:52.186 244018 DEBUG nova.virt.hardware [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 25 08:05:52 np0005629333 nova_compute[244014]: 2026-02-25 13:05:52.187 244018 INFO nova.compute.claims [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Claim successful on node compute-0.ctlplane.example.com
Feb 25 08:05:52 np0005629333 nova_compute[244014]: 2026-02-25 13:05:52.307 244018 DEBUG oslo_concurrency.processutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:05:52 np0005629333 nova_compute[244014]: 2026-02-25 13:05:52.371 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:05:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:05:52 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3347998741' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:05:52 np0005629333 nova_compute[244014]: 2026-02-25 13:05:52.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:05:52 np0005629333 nova_compute[244014]: 2026-02-25 13:05:52.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
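ComputeManager's periodic tasks fire on independent timers; _reclaim_queued_deletes bails out immediately here because reclaim_instance_interval is unset. A bare-bones stand-in for that skip logic (the real scheduling lives in oslo.service's periodic_task machinery, not in this sketch):

# Stand-in for the guard that produces the "skipping..." line above.
def _reclaim_queued_deletes(conf):
    if conf['reclaim_instance_interval'] <= 0:
        print('CONF.reclaim_instance_interval <= 0, skipping...')
        return
    print('reclaiming instances soft-deleted more than %ds ago'
          % conf['reclaim_instance_interval'])

_reclaim_queued_deletes({'reclaim_instance_interval': 0})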
Feb 25 08:05:52 np0005629333 nova_compute[244014]: 2026-02-25 13:05:52.894 244018 DEBUG oslo_concurrency.processutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:05:52 np0005629333 nova_compute[244014]: 2026-02-25 13:05:52.901 244018 DEBUG nova.compute.provider_tree [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:05:52 np0005629333 nova_compute[244014]: 2026-02-25 13:05:52.920 244018 DEBUG nova.scheduler.client.report [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 08:05:52 np0005629333 nova_compute[244014]: 2026-02-25 13:05:52.940 244018 DEBUG oslo_concurrency.lockutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:05:52 np0005629333 nova_compute[244014]: 2026-02-25 13:05:52.941 244018 DEBUG nova.compute.manager [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 25 08:05:52 np0005629333 nova_compute[244014]: 2026-02-25 13:05:52.984 244018 DEBUG nova.compute.manager [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 25 08:05:52 np0005629333 nova_compute[244014]: 2026-02-25 13:05:52.985 244018 DEBUG nova.network.neutron [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 25 08:05:53 np0005629333 nova_compute[244014]: 2026-02-25 13:05:53.005 244018 INFO nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 25 08:05:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2570: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:05:53 np0005629333 nova_compute[244014]: 2026-02-25 13:05:53.027 244018 DEBUG nova.compute.manager [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 25 08:05:53 np0005629333 nova_compute[244014]: 2026-02-25 13:05:53.169 244018 DEBUG nova.compute.manager [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 25 08:05:53 np0005629333 nova_compute[244014]: 2026-02-25 13:05:53.171 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 25 08:05:53 np0005629333 nova_compute[244014]: 2026-02-25 13:05:53.171 244018 INFO nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Creating image(s)
Feb 25 08:05:53 np0005629333 nova_compute[244014]: 2026-02-25 13:05:53.200 244018 DEBUG nova.storage.rbd_utils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] rbd image 44f7664a-bc33-4251-9857-884c305df488_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 08:05:53 np0005629333 nova_compute[244014]: 2026-02-25 13:05:53.228 244018 DEBUG nova.storage.rbd_utils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] rbd image 44f7664a-bc33-4251-9857-884c305df488_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 08:05:53 np0005629333 nova_compute[244014]: 2026-02-25 13:05:53.256 244018 DEBUG nova.storage.rbd_utils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] rbd image 44f7664a-bc33-4251-9857-884c305df488_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 08:05:53 np0005629333 nova_compute[244014]: 2026-02-25 13:05:53.260 244018 DEBUG oslo_concurrency.processutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:05:53 np0005629333 nova_compute[244014]: 2026-02-25 13:05:53.346 244018 DEBUG oslo_concurrency.processutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:05:53 np0005629333 nova_compute[244014]: 2026-02-25 13:05:53.347 244018 DEBUG oslo_concurrency.lockutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Acquiring lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:05:53 np0005629333 nova_compute[244014]: 2026-02-25 13:05:53.348 244018 DEBUG oslo_concurrency.lockutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:05:53 np0005629333 nova_compute[244014]: 2026-02-25 13:05:53.348 244018 DEBUG oslo_concurrency.lockutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lock "a63dc6dbb387022d47a8ca49bddcc4af2508a4d6" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:05:53 np0005629333 nova_compute[244014]: 2026-02-25 13:05:53.375 244018 DEBUG nova.storage.rbd_utils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] rbd image 44f7664a-bc33-4251-9857-884c305df488_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 08:05:53 np0005629333 nova_compute[244014]: 2026-02-25 13:05:53.380 244018 DEBUG oslo_concurrency.processutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 44f7664a-bc33-4251-9857-884c305df488_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:05:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:05:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:05:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:05:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:05:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:05:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:05:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:05:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:05:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:05:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:05:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:05:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:05:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:05:54 np0005629333 nova_compute[244014]: 2026-02-25 13:05:54.233 244018 DEBUG oslo_concurrency.processutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 44f7664a-bc33-4251-9857-884c305df488_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.852s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:05:54 np0005629333 nova_compute[244014]: 2026-02-25 13:05:54.317 244018 DEBUG nova.storage.rbd_utils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] resizing rbd image 44f7664a-bc33-4251-9857-884c305df488_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
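The build path above probes the cached Glance image with qemu-img, imports it into the vms pool, then grows the RBD image to the flavor's 1 GiB root disk (1073741824 bytes). A sketch of the same sequence with subprocess, assuming a reachable cluster and the client.openstack keyring; note nova performs the resize through the rbd python binding (nova.storage.rbd_utils), while this sketch uses the equivalent CLI call:

import json
import subprocess

base = '/var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6'
disk = '44f7664a-bc33-4251-9857-884c305df488_disk'

# Probe the cached base image (nova additionally wraps this call in
# oslo_concurrency.prlimit to cap address space and CPU time).
info = json.loads(subprocess.check_output(
    ['qemu-img', 'info', base, '--force-share', '--output=json']))
print(info['format'], info['virtual-size'])

# Import into the vms pool, then grow to the flavor root disk (1 GiB).
subprocess.check_call(['rbd', 'import', '--pool', 'vms', base, disk,
                       '--image-format=2', '--id', 'openstack',
                       '--conf', '/etc/ceph/ceph.conf'])
subprocess.check_call(['rbd', 'resize', '--pool', 'vms', '--image', disk,
                       '--size', '1G', '--id', 'openstack',
                       '--conf', '/etc/ceph/ceph.conf'])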
Feb 25 08:05:54 np0005629333 nova_compute[244014]: 2026-02-25 13:05:54.557 244018 DEBUG nova.objects.instance [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lazy-loading 'migration_context' on Instance uuid 44f7664a-bc33-4251-9857-884c305df488 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 08:05:54 np0005629333 nova_compute[244014]: 2026-02-25 13:05:54.583 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 25 08:05:54 np0005629333 nova_compute[244014]: 2026-02-25 13:05:54.584 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Ensure instance console log exists: /var/lib/nova/instances/44f7664a-bc33-4251-9857-884c305df488/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 25 08:05:54 np0005629333 nova_compute[244014]: 2026-02-25 13:05:54.585 244018 DEBUG oslo_concurrency.lockutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:05:54 np0005629333 nova_compute[244014]: 2026-02-25 13:05:54.586 244018 DEBUG oslo_concurrency.lockutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:05:54 np0005629333 nova_compute[244014]: 2026-02-25 13:05:54.586 244018 DEBUG oslo_concurrency.lockutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:05:54 np0005629333 podman[384309]: 2026-02-25 13:05:54.693654379 +0000 UTC m=+0.057401459 container create 3ef6f556f5fcacd54d85b02ad6db974574aaf2f5ef42e267765ee612bd76626c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mclaren, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 08:05:54 np0005629333 podman[384309]: 2026-02-25 13:05:54.665512416 +0000 UTC m=+0.029259566 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:05:54 np0005629333 systemd[1]: Started libpod-conmon-3ef6f556f5fcacd54d85b02ad6db974574aaf2f5ef42e267765ee612bd76626c.scope.
Feb 25 08:05:54 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:05:54 np0005629333 nova_compute[244014]: 2026-02-25 13:05:54.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:05:54 np0005629333 podman[384309]: 2026-02-25 13:05:54.96958054 +0000 UTC m=+0.333327690 container init 3ef6f556f5fcacd54d85b02ad6db974574aaf2f5ef42e267765ee612bd76626c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mclaren, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:05:54 np0005629333 podman[384309]: 2026-02-25 13:05:54.980651222 +0000 UTC m=+0.344398322 container start 3ef6f556f5fcacd54d85b02ad6db974574aaf2f5ef42e267765ee612bd76626c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:05:54 np0005629333 mystifying_mclaren[384325]: 167 167
Feb 25 08:05:54 np0005629333 systemd[1]: libpod-3ef6f556f5fcacd54d85b02ad6db974574aaf2f5ef42e267765ee612bd76626c.scope: Deactivated successfully.
Feb 25 08:05:55 np0005629333 podman[384309]: 2026-02-25 13:05:55.013577419 +0000 UTC m=+0.377324569 container attach 3ef6f556f5fcacd54d85b02ad6db974574aaf2f5ef42e267765ee612bd76626c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mclaren, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:05:55 np0005629333 podman[384309]: 2026-02-25 13:05:55.01642698 +0000 UTC m=+0.380174080 container died 3ef6f556f5fcacd54d85b02ad6db974574aaf2f5ef42e267765ee612bd76626c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
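The create/init/start/attach/died burst above is cephadm running a throwaway container from the ceph image; the container printed "167 167" (the ceph uid and gid baked into the image). A hedged one-shot equivalent; the stat probe is an assumption about what cephadm executes, since the container's command line is not in the log:

# One-shot container mirroring the short lifecycle logged above.
# The `stat` probe is an assumption consistent with the "167 167" output.
import subprocess

out = subprocess.check_output(
    ['podman', 'run', '--rm',
     'quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86',
     'stat', '-c', '%u %g', '/var/lib/ceph'])
print(out.decode().strip())   # expected: "167 167"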
Feb 25 08:05:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2571: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:05:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:55.044 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:05:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:55.045 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:05:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:05:55.045 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:05:55 np0005629333 systemd[1]: var-lib-containers-storage-overlay-06ac11d5deecd53e49b56438ff9f51280bb645fea97318b76ec0a0e348a2944f-merged.mount: Deactivated successfully.
Feb 25 08:05:55 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:05:55 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:05:55 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:05:55 np0005629333 nova_compute[244014]: 2026-02-25 13:05:55.165 244018 DEBUG nova.network.neutron [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Feb 25 08:05:55 np0005629333 nova_compute[244014]: 2026-02-25 13:05:55.166 244018 DEBUG nova.compute.manager [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 25 08:05:55 np0005629333 nova_compute[244014]: 2026-02-25 13:05:55.170 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'guest_format': None, 'encrypted': False, 'encryption_format': None, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'image_id': 'c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 25 08:05:55 np0005629333 nova_compute[244014]: 2026-02-25 13:05:55.179 244018 WARNING nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 08:05:55 np0005629333 nova_compute[244014]: 2026-02-25 13:05:55.188 244018 DEBUG nova.virt.libvirt.host [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 25 08:05:55 np0005629333 nova_compute[244014]: 2026-02-25 13:05:55.191 244018 DEBUG nova.virt.libvirt.host [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 25 08:05:55 np0005629333 nova_compute[244014]: 2026-02-25 13:05:55.197 244018 DEBUG nova.virt.libvirt.host [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 25 08:05:55 np0005629333 nova_compute[244014]: 2026-02-25 13:05:55.198 244018 DEBUG nova.virt.libvirt.host [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 25 08:05:55 np0005629333 nova_compute[244014]: 2026-02-25 13:05:55.199 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 25 08:05:55 np0005629333 nova_compute[244014]: 2026-02-25 13:05:55.200 244018 DEBUG nova.virt.hardware [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-25T12:14:23Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f1e0aba0-7190-4c7b-af7d-75cabf0f015e',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-25T12:14:23Z,direct_url=<?>,disk_format='qcow2',id=c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='3b2095d7947f4e57abf66907344b90fc',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-25T12:14:24Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 25 08:05:55 np0005629333 nova_compute[244014]: 2026-02-25 13:05:55.201 244018 DEBUG nova.virt.hardware [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 25 08:05:55 np0005629333 nova_compute[244014]: 2026-02-25 13:05:55.201 244018 DEBUG nova.virt.hardware [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 25 08:05:55 np0005629333 nova_compute[244014]: 2026-02-25 13:05:55.202 244018 DEBUG nova.virt.hardware [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 25 08:05:55 np0005629333 nova_compute[244014]: 2026-02-25 13:05:55.202 244018 DEBUG nova.virt.hardware [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 25 08:05:55 np0005629333 nova_compute[244014]: 2026-02-25 13:05:55.203 244018 DEBUG nova.virt.hardware [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 25 08:05:55 np0005629333 nova_compute[244014]: 2026-02-25 13:05:55.203 244018 DEBUG nova.virt.hardware [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Feb 25 08:05:55 np0005629333 nova_compute[244014]: 2026-02-25 13:05:55.204 244018 DEBUG nova.virt.hardware [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Feb 25 08:05:55 np0005629333 nova_compute[244014]: 2026-02-25 13:05:55.204 244018 DEBUG nova.virt.hardware [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Feb 25 08:05:55 np0005629333 nova_compute[244014]: 2026-02-25 13:05:55.205 244018 DEBUG nova.virt.hardware [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Feb 25 08:05:55 np0005629333 nova_compute[244014]: 2026-02-25 13:05:55.205 244018 DEBUG nova.virt.hardware [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Feb 25 08:05:55 np0005629333 nova_compute[244014]: 2026-02-25 13:05:55.211 244018 DEBUG oslo_concurrency.processutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:05:55 np0005629333 podman[384309]: 2026-02-25 13:05:55.234038546 +0000 UTC m=+0.597785636 container remove 3ef6f556f5fcacd54d85b02ad6db974574aaf2f5ef42e267765ee612bd76626c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 08:05:55 np0005629333 systemd[1]: libpod-conmon-3ef6f556f5fcacd54d85b02ad6db974574aaf2f5ef42e267765ee612bd76626c.scope: Deactivated successfully.
Feb 25 08:05:55 np0005629333 podman[384363]: 2026-02-25 13:05:55.437890215 +0000 UTC m=+0.062964917 container create e842fff2be2c3942f545ba89aa40c20eef133707e749ecb258bf8582154976f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hoover, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 08:05:55 np0005629333 systemd[1]: Started libpod-conmon-e842fff2be2c3942f545ba89aa40c20eef133707e749ecb258bf8582154976f9.scope.
Feb 25 08:05:55 np0005629333 podman[384363]: 2026-02-25 13:05:55.412743245 +0000 UTC m=+0.037818027 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:05:55 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:05:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e50d1ca347cc9e3095b6aff6fc226072a37b47619d6cb59268ed8af5694274b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:05:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e50d1ca347cc9e3095b6aff6fc226072a37b47619d6cb59268ed8af5694274b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:05:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e50d1ca347cc9e3095b6aff6fc226072a37b47619d6cb59268ed8af5694274b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:05:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e50d1ca347cc9e3095b6aff6fc226072a37b47619d6cb59268ed8af5694274b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:05:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e50d1ca347cc9e3095b6aff6fc226072a37b47619d6cb59268ed8af5694274b7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:05:55 np0005629333 podman[384363]: 2026-02-25 13:05:55.562933511 +0000 UTC m=+0.188008213 container init e842fff2be2c3942f545ba89aa40c20eef133707e749ecb258bf8582154976f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hoover, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:05:55 np0005629333 podman[384363]: 2026-02-25 13:05:55.572791269 +0000 UTC m=+0.197865981 container start e842fff2be2c3942f545ba89aa40c20eef133707e749ecb258bf8582154976f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hoover, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 08:05:55 np0005629333 podman[384363]: 2026-02-25 13:05:55.634024005 +0000 UTC m=+0.259098717 container attach e842fff2be2c3942f545ba89aa40c20eef133707e749ecb258bf8582154976f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hoover, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:05:55 np0005629333 nova_compute[244014]: 2026-02-25 13:05:55.762 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:05:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 08:05:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2000528062' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 08:05:55 np0005629333 nova_compute[244014]: 2026-02-25 13:05:55.806 244018 DEBUG oslo_concurrency.processutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:05:55 np0005629333 nova_compute[244014]: 2026-02-25 13:05:55.837 244018 DEBUG nova.storage.rbd_utils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] rbd image 44f7664a-bc33-4251-9857-884c305df488_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 08:05:55 np0005629333 nova_compute[244014]: 2026-02-25 13:05:55.841 244018 DEBUG oslo_concurrency.processutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:05:56 np0005629333 heuristic_hoover[384389]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:05:56 np0005629333 heuristic_hoover[384389]: --> All data devices are unavailable
Feb 25 08:05:56 np0005629333 systemd[1]: libpod-e842fff2be2c3942f545ba89aa40c20eef133707e749ecb258bf8582154976f9.scope: Deactivated successfully.
Feb 25 08:05:56 np0005629333 podman[384450]: 2026-02-25 13:05:56.0948584 +0000 UTC m=+0.032326382 container died e842fff2be2c3942f545ba89aa40c20eef133707e749ecb258bf8582154976f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 08:05:56 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e50d1ca347cc9e3095b6aff6fc226072a37b47619d6cb59268ed8af5694274b7-merged.mount: Deactivated successfully.
Feb 25 08:05:56 np0005629333 podman[384450]: 2026-02-25 13:05:56.277898062 +0000 UTC m=+0.215366054 container remove e842fff2be2c3942f545ba89aa40c20eef133707e749ecb258bf8582154976f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 08:05:56 np0005629333 podman[384449]: 2026-02-25 13:05:56.281351679 +0000 UTC m=+0.206410021 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 08:05:56 np0005629333 systemd[1]: libpod-conmon-e842fff2be2c3942f545ba89aa40c20eef133707e749ecb258bf8582154976f9.scope: Deactivated successfully.
Feb 25 08:05:56 np0005629333 podman[384456]: 2026-02-25 13:05:56.363385943 +0000 UTC m=+0.288294071 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 25 08:05:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 25 08:05:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1496961262' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 25 08:05:56 np0005629333 nova_compute[244014]: 2026-02-25 13:05:56.415 244018 DEBUG oslo_concurrency.processutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:05:56 np0005629333 nova_compute[244014]: 2026-02-25 13:05:56.417 244018 DEBUG nova.objects.instance [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lazy-loading 'pci_devices' on Instance uuid 44f7664a-bc33-4251-9857-884c305df488 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 25 08:05:56 np0005629333 nova_compute[244014]: 2026-02-25 13:05:56.438 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] End _get_guest_xml xml=<domain type="kvm">
Feb 25 08:05:56 np0005629333 nova_compute[244014]:  <uuid>44f7664a-bc33-4251-9857-884c305df488</uuid>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:  <name>instance-00000099</name>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:  <memory>131072</memory>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:  <vcpu>1</vcpu>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:  <metadata>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      <nova:name>tempest-AggregatesAdminTestJSON-server-1169940556</nova:name>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      <nova:creationTime>2026-02-25 13:05:55</nova:creationTime>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      <nova:flavor name="m1.nano">
Feb 25 08:05:56 np0005629333 nova_compute[244014]:        <nova:memory>128</nova:memory>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:        <nova:disk>1</nova:disk>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:        <nova:swap>0</nova:swap>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:        <nova:ephemeral>0</nova:ephemeral>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:        <nova:vcpus>1</nova:vcpus>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      </nova:flavor>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      <nova:owner>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:        <nova:user uuid="c4171ec1b0ec456bbea3ceb6441b3e0c">tempest-AggregatesAdminTestJSON-1553068081-project-member</nova:user>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:        <nova:project uuid="a573180da6e44eebadfb343d41eb8105">tempest-AggregatesAdminTestJSON-1553068081</nova:project>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      </nova:owner>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      <nova:root type="image" uuid="c2d5c3a2-ecd1-4a2c-8d50-98a83d9be5a6"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      <nova:ports/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    </nova:instance>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:  </metadata>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:  <sysinfo type="smbios">
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <system>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      <entry name="manufacturer">RDO</entry>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      <entry name="product">OpenStack Compute</entry>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      <entry name="serial">44f7664a-bc33-4251-9857-884c305df488</entry>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      <entry name="uuid">44f7664a-bc33-4251-9857-884c305df488</entry>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      <entry name="family">Virtual Machine</entry>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    </system>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:  </sysinfo>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:  <os>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <type arch="x86_64" machine="q35">hvm</type>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <boot dev="hd"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <smbios mode="sysinfo"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:  </os>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:  <features>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <acpi/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <apic/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <vmcoreinfo/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:  </features>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:  <clock offset="utc">
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <timer name="pit" tickpolicy="delay"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <timer name="rtc" tickpolicy="catchup"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <timer name="hpet" present="no"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:  </clock>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:  <cpu mode="host-model" match="exact">
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <topology sockets="1" cores="1" threads="1"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:  </cpu>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:  <devices>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <disk type="network" device="disk">
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/44f7664a-bc33-4251-9857-884c305df488_disk">
Feb 25 08:05:56 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      </source>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 08:05:56 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      </auth>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      <target dev="vda" bus="virtio"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    </disk>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <disk type="network" device="cdrom">
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      <driver type="raw" cache="none"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      <source protocol="rbd" name="vms/44f7664a-bc33-4251-9857-884c305df488_disk.config">
Feb 25 08:05:56 np0005629333 nova_compute[244014]:        <host name="192.168.122.100" port="6789"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      </source>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      <auth username="openstack">
Feb 25 08:05:56 np0005629333 nova_compute[244014]:        <secret type="ceph" uuid="8ac33163-6221-5d58-9a39-8b6933fe7762"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      </auth>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      <target dev="sda" bus="sata"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    </disk>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <serial type="pty">
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      <log file="/var/lib/nova/instances/44f7664a-bc33-4251-9857-884c305df488/console.log" append="off"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    </serial>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <video>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      <model type="virtio"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    </video>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <input type="tablet" bus="usb"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <rng model="virtio">
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      <backend model="random">/dev/urandom</backend>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    </rng>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <controller type="pci" model="pcie-root-port"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <controller type="usb" index="0"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    <memballoon model="virtio">
Feb 25 08:05:56 np0005629333 nova_compute[244014]:      <stats period="10"/>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:    </memballoon>
Feb 25 08:05:56 np0005629333 nova_compute[244014]:  </devices>
Feb 25 08:05:56 np0005629333 nova_compute[244014]: </domain>
Feb 25 08:05:56 np0005629333 nova_compute[244014]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 25 08:05:56 np0005629333 nova_compute[244014]: 2026-02-25 13:05:56.573 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 08:05:56 np0005629333 nova_compute[244014]: 2026-02-25 13:05:56.574 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 25 08:05:56 np0005629333 nova_compute[244014]: 2026-02-25 13:05:56.578 244018 INFO nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Using config drive
Feb 25 08:05:56 np0005629333 nova_compute[244014]: 2026-02-25 13:05:56.621 244018 DEBUG nova.storage.rbd_utils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] rbd image 44f7664a-bc33-4251-9857-884c305df488_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 08:05:56 np0005629333 podman[384588]: 2026-02-25 13:05:56.725299448 +0000 UTC m=+0.062213145 container create d96ec792bb66ea57e538e66c766c8c44483996d2b438c5300fbfbf415c39c4f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_montalcini, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 08:05:56 np0005629333 podman[384588]: 2026-02-25 13:05:56.68350041 +0000 UTC m=+0.020413997 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:05:56 np0005629333 systemd[1]: Started libpod-conmon-d96ec792bb66ea57e538e66c766c8c44483996d2b438c5300fbfbf415c39c4f7.scope.
Feb 25 08:05:56 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:05:56 np0005629333 nova_compute[244014]: 2026-02-25 13:05:56.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:05:56 np0005629333 podman[384588]: 2026-02-25 13:05:56.878149488 +0000 UTC m=+0.215063055 container init d96ec792bb66ea57e538e66c766c8c44483996d2b438c5300fbfbf415c39c4f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:05:56 np0005629333 podman[384588]: 2026-02-25 13:05:56.884310172 +0000 UTC m=+0.221223739 container start d96ec792bb66ea57e538e66c766c8c44483996d2b438c5300fbfbf415c39c4f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 08:05:56 np0005629333 focused_montalcini[384604]: 167 167
Feb 25 08:05:56 np0005629333 podman[384588]: 2026-02-25 13:05:56.890437855 +0000 UTC m=+0.227351422 container attach d96ec792bb66ea57e538e66c766c8c44483996d2b438c5300fbfbf415c39c4f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:05:56 np0005629333 systemd[1]: libpod-d96ec792bb66ea57e538e66c766c8c44483996d2b438c5300fbfbf415c39c4f7.scope: Deactivated successfully.
Feb 25 08:05:56 np0005629333 podman[384588]: 2026-02-25 13:05:56.891783163 +0000 UTC m=+0.228696830 container died d96ec792bb66ea57e538e66c766c8c44483996d2b438c5300fbfbf415c39c4f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 08:05:56 np0005629333 nova_compute[244014]: 2026-02-25 13:05:56.899 244018 INFO nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Creating config drive at /var/lib/nova/instances/44f7664a-bc33-4251-9857-884c305df488/disk.config
Feb 25 08:05:56 np0005629333 nova_compute[244014]: 2026-02-25 13:05:56.902 244018 DEBUG oslo_concurrency.processutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/44f7664a-bc33-4251-9857-884c305df488/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpbo97o_fa execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:05:56 np0005629333 systemd[1]: var-lib-containers-storage-overlay-0d1291ae2908d17ebdc5055a4b298f77b32fd5cb0cb98bc17d932f0347c2a22a-merged.mount: Deactivated successfully.
Feb 25 08:05:56 np0005629333 podman[384588]: 2026-02-25 13:05:56.997626227 +0000 UTC m=+0.334539824 container remove d96ec792bb66ea57e538e66c766c8c44483996d2b438c5300fbfbf415c39c4f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_montalcini, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 08:05:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2572: 305 pgs: 305 active+clean; 183 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 MiB/s wr, 25 op/s
Feb 25 08:05:57 np0005629333 systemd[1]: libpod-conmon-d96ec792bb66ea57e538e66c766c8c44483996d2b438c5300fbfbf415c39c4f7.scope: Deactivated successfully.
Feb 25 08:05:57 np0005629333 nova_compute[244014]: 2026-02-25 13:05:57.045 244018 DEBUG oslo_concurrency.processutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/44f7664a-bc33-4251-9857-884c305df488/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpbo97o_fa" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:05:57 np0005629333 nova_compute[244014]: 2026-02-25 13:05:57.080 244018 DEBUG nova.storage.rbd_utils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] rbd image 44f7664a-bc33-4251-9857-884c305df488_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 25 08:05:57 np0005629333 nova_compute[244014]: 2026-02-25 13:05:57.084 244018 DEBUG oslo_concurrency.processutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/44f7664a-bc33-4251-9857-884c305df488/disk.config 44f7664a-bc33-4251-9857-884c305df488_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:05:57 np0005629333 podman[384652]: 2026-02-25 13:05:57.16901348 +0000 UTC m=+0.035737049 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:05:57 np0005629333 podman[384652]: 2026-02-25 13:05:57.362266639 +0000 UTC m=+0.229017639 container create 893c2b26488cdc17088722539cd5faf4f345d36c01cfc101a209d57b766538c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 08:05:57 np0005629333 nova_compute[244014]: 2026-02-25 13:05:57.373 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:05:57 np0005629333 systemd[1]: Started libpod-conmon-893c2b26488cdc17088722539cd5faf4f345d36c01cfc101a209d57b766538c5.scope.
Feb 25 08:05:57 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:05:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a68786a4ff9fd8e2dcde6a63abd7287994d71a0bd82cb186d3c649bae622af76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:05:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a68786a4ff9fd8e2dcde6a63abd7287994d71a0bd82cb186d3c649bae622af76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:05:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a68786a4ff9fd8e2dcde6a63abd7287994d71a0bd82cb186d3c649bae622af76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:05:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a68786a4ff9fd8e2dcde6a63abd7287994d71a0bd82cb186d3c649bae622af76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:05:57 np0005629333 podman[384652]: 2026-02-25 13:05:57.604806549 +0000 UTC m=+0.471530118 container init 893c2b26488cdc17088722539cd5faf4f345d36c01cfc101a209d57b766538c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 08:05:57 np0005629333 podman[384652]: 2026-02-25 13:05:57.612074304 +0000 UTC m=+0.478797873 container start 893c2b26488cdc17088722539cd5faf4f345d36c01cfc101a209d57b766538c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 08:05:57 np0005629333 podman[384652]: 2026-02-25 13:05:57.625341018 +0000 UTC m=+0.492064587 container attach 893c2b26488cdc17088722539cd5faf4f345d36c01cfc101a209d57b766538c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 08:05:57 np0005629333 nova_compute[244014]: 2026-02-25 13:05:57.633 244018 DEBUG oslo_concurrency.processutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/44f7664a-bc33-4251-9857-884c305df488/disk.config 44f7664a-bc33-4251-9857-884c305df488_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:05:57 np0005629333 nova_compute[244014]: 2026-02-25 13:05:57.634 244018 INFO nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Deleting local config drive /var/lib/nova/instances/44f7664a-bc33-4251-9857-884c305df488/disk.config because it was imported into RBD.
Feb 25 08:05:57 np0005629333 systemd-machined[210048]: New machine qemu-187-instance-00000099.
Feb 25 08:05:57 np0005629333 systemd[1]: Started Virtual Machine qemu-187-instance-00000099.
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]: {
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:    "0": [
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:        {
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "devices": [
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "/dev/loop3"
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            ],
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "lv_name": "ceph_lv0",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "lv_size": "21470642176",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "name": "ceph_lv0",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "tags": {
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.cluster_name": "ceph",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.crush_device_class": "",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.encrypted": "0",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.objectstore": "bluestore",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.osd_id": "0",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.type": "block",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.vdo": "0",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.with_tpm": "0"
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            },
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "type": "block",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "vg_name": "ceph_vg0"
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:        }
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:    ],
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:    "1": [
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:        {
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "devices": [
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "/dev/loop4"
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            ],
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "lv_name": "ceph_lv1",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "lv_size": "21470642176",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "name": "ceph_lv1",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "tags": {
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.cluster_name": "ceph",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.crush_device_class": "",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.encrypted": "0",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.objectstore": "bluestore",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.osd_id": "1",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.type": "block",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.vdo": "0",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.with_tpm": "0"
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            },
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "type": "block",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "vg_name": "ceph_vg1"
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:        }
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:    ],
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:    "2": [
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:        {
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "devices": [
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "/dev/loop5"
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            ],
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "lv_name": "ceph_lv2",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "lv_size": "21470642176",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "name": "ceph_lv2",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "tags": {
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.cluster_name": "ceph",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.crush_device_class": "",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.encrypted": "0",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.objectstore": "bluestore",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.osd_id": "2",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.type": "block",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.vdo": "0",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:                "ceph.with_tpm": "0"
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            },
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "type": "block",
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:            "vg_name": "ceph_vg2"
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:        }
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]:    ]
Feb 25 08:05:57 np0005629333 beautiful_mahavira[384683]: }
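
[annotation] The JSON block the beautiful_mahavira container just printed matches the shape of `ceph-volume lvm list --format json` output: a map keyed by OSD id, each entry listing the backing devices and the ceph.* LV tags. A minimal Python sketch (assuming that command and that output shape) to summarize which device backs each OSD:

    # Sketch: summarize ceph-volume lvm list JSON (assumes the shape shown
    # above: {"<osd_id>": [{"devices": [...], "lv_path": ..., "tags": {...}}]}).
    import json
    import subprocess

    raw = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    inventory = json.loads(raw)

    for osd_id, lvs in sorted(inventory.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"(fsid={tags.get('ceph.osd_fsid', '?')})")

For the three OSDs above this would report /dev/loop3, /dev/loop4 and /dev/loop5 as the respective backing devices.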
Feb 25 08:05:57 np0005629333 podman[384652]: 2026-02-25 13:05:57.930178794 +0000 UTC m=+0.796902403 container died 893c2b26488cdc17088722539cd5faf4f345d36c01cfc101a209d57b766538c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:05:57 np0005629333 systemd[1]: libpod-893c2b26488cdc17088722539cd5faf4f345d36c01cfc101a209d57b766538c5.scope: Deactivated successfully.
Feb 25 08:05:58 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a68786a4ff9fd8e2dcde6a63abd7287994d71a0bd82cb186d3c649bae622af76-merged.mount: Deactivated successfully.
Feb 25 08:05:58 np0005629333 podman[384652]: 2026-02-25 13:05:58.090776132 +0000 UTC m=+0.957499661 container remove 893c2b26488cdc17088722539cd5faf4f345d36c01cfc101a209d57b766538c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_mahavira, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 08:05:58 np0005629333 systemd[1]: libpod-conmon-893c2b26488cdc17088722539cd5faf4f345d36c01cfc101a209d57b766538c5.scope: Deactivated successfully.
Feb 25 08:05:58 np0005629333 nova_compute[244014]: 2026-02-25 13:05:58.306 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024758.3062577, 44f7664a-bc33-4251-9857-884c305df488 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:05:58 np0005629333 nova_compute[244014]: 2026-02-25 13:05:58.308 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 44f7664a-bc33-4251-9857-884c305df488] VM Resumed (Lifecycle Event)#033[00m
Feb 25 08:05:58 np0005629333 nova_compute[244014]: 2026-02-25 13:05:58.314 244018 DEBUG nova.compute.manager [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Feb 25 08:05:58 np0005629333 nova_compute[244014]: 2026-02-25 13:05:58.315 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Feb 25 08:05:58 np0005629333 nova_compute[244014]: 2026-02-25 13:05:58.319 244018 INFO nova.virt.libvirt.driver [-] [instance: 44f7664a-bc33-4251-9857-884c305df488] Instance spawned successfully.#033[00m
Feb 25 08:05:58 np0005629333 nova_compute[244014]: 2026-02-25 13:05:58.320 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Feb 25 08:05:58 np0005629333 nova_compute[244014]: 2026-02-25 13:05:58.335 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 44f7664a-bc33-4251-9857-884c305df488] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:05:58 np0005629333 nova_compute[244014]: 2026-02-25 13:05:58.341 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 44f7664a-bc33-4251-9857-884c305df488] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 08:05:58 np0005629333 nova_compute[244014]: 2026-02-25 13:05:58.345 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:05:58 np0005629333 nova_compute[244014]: 2026-02-25 13:05:58.345 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:05:58 np0005629333 nova_compute[244014]: 2026-02-25 13:05:58.346 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:05:58 np0005629333 nova_compute[244014]: 2026-02-25 13:05:58.346 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:05:58 np0005629333 nova_compute[244014]: 2026-02-25 13:05:58.347 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:05:58 np0005629333 nova_compute[244014]: 2026-02-25 13:05:58.347 244018 DEBUG nova.virt.libvirt.driver [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Feb 25 08:05:58 np0005629333 nova_compute[244014]: 2026-02-25 13:05:58.379 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 44f7664a-bc33-4251-9857-884c305df488] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 08:05:58 np0005629333 nova_compute[244014]: 2026-02-25 13:05:58.380 244018 DEBUG nova.virt.driver [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] Emitting event <LifecycleEvent: 1772024758.312484, 44f7664a-bc33-4251-9857-884c305df488 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:05:58 np0005629333 nova_compute[244014]: 2026-02-25 13:05:58.380 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 44f7664a-bc33-4251-9857-884c305df488] VM Started (Lifecycle Event)#033[00m
Feb 25 08:05:58 np0005629333 nova_compute[244014]: 2026-02-25 13:05:58.410 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 44f7664a-bc33-4251-9857-884c305df488] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:05:58 np0005629333 nova_compute[244014]: 2026-02-25 13:05:58.417 244018 DEBUG nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 44f7664a-bc33-4251-9857-884c305df488] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Feb 25 08:05:58 np0005629333 nova_compute[244014]: 2026-02-25 13:05:58.421 244018 INFO nova.compute.manager [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Took 5.25 seconds to spawn the instance on the hypervisor.#033[00m
Feb 25 08:05:58 np0005629333 nova_compute[244014]: 2026-02-25 13:05:58.422 244018 DEBUG nova.compute.manager [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:05:58 np0005629333 nova_compute[244014]: 2026-02-25 13:05:58.485 244018 INFO nova.compute.manager [None req-2bd4467e-1c0f-49bd-96b5-b7c4d6699224 - - - - - -] [instance: 44f7664a-bc33-4251-9857-884c305df488] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Feb 25 08:05:58 np0005629333 nova_compute[244014]: 2026-02-25 13:05:58.539 244018 INFO nova.compute.manager [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Took 6.39 seconds to build instance.#033[00m
Feb 25 08:05:58 np0005629333 nova_compute[244014]: 2026-02-25 13:05:58.557 244018 DEBUG oslo_concurrency.lockutils [None req-5662cebd-2863-435f-8cb0-404fa254578f c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lock "44f7664a-bc33-4251-9857-884c305df488" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
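
[annotation] The spawn of instance 44f7664a-bc33-4251-9857-884c305df488 can be followed end to end through the interleaved log by its request ID (req-5662cebd-2863-435f-8cb0-404fa254578f for the build, req-2bd4467e-... for the lifecycle events). A small sketch, assuming the journal has been exported to a plain-text file named journal.log (a placeholder name):

    # Sketch: extract every line carrying one Nova request ID from a saved
    # journal dump. File name and request ID are taken from this log.
    REQ = "req-5662cebd-2863-435f-8cb0-404fa254578f"

    with open("journal.log", encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if REQ in line:
                print(line.rstrip())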
Feb 25 08:05:58 np0005629333 podman[384825]: 2026-02-25 13:05:58.59146487 +0000 UTC m=+0.047139279 container create 502cefe468ad4ee09d4c272db50a2fcf7fd238fa7d0f25cdec9c3fafe2b075b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bartik, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:05:58 np0005629333 systemd[1]: Started libpod-conmon-502cefe468ad4ee09d4c272db50a2fcf7fd238fa7d0f25cdec9c3fafe2b075b3.scope.
Feb 25 08:05:58 np0005629333 podman[384825]: 2026-02-25 13:05:58.566032264 +0000 UTC m=+0.021706693 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:05:58 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:05:58 np0005629333 podman[384825]: 2026-02-25 13:05:58.682488707 +0000 UTC m=+0.138163196 container init 502cefe468ad4ee09d4c272db50a2fcf7fd238fa7d0f25cdec9c3fafe2b075b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bartik, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:05:58 np0005629333 podman[384825]: 2026-02-25 13:05:58.690993037 +0000 UTC m=+0.146667446 container start 502cefe468ad4ee09d4c272db50a2fcf7fd238fa7d0f25cdec9c3fafe2b075b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:05:58 np0005629333 podman[384825]: 2026-02-25 13:05:58.696952815 +0000 UTC m=+0.152627274 container attach 502cefe468ad4ee09d4c272db50a2fcf7fd238fa7d0f25cdec9c3fafe2b075b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bartik, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:05:58 np0005629333 affectionate_bartik[384841]: 167 167
Feb 25 08:05:58 np0005629333 systemd[1]: libpod-502cefe468ad4ee09d4c272db50a2fcf7fd238fa7d0f25cdec9c3fafe2b075b3.scope: Deactivated successfully.
Feb 25 08:05:58 np0005629333 conmon[384841]: conmon 502cefe468ad4ee09d4c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-502cefe468ad4ee09d4c272db50a2fcf7fd238fa7d0f25cdec9c3fafe2b075b3.scope/container/memory.events
Feb 25 08:05:58 np0005629333 podman[384825]: 2026-02-25 13:05:58.697913152 +0000 UTC m=+0.153587561 container died 502cefe468ad4ee09d4c272db50a2fcf7fd238fa7d0f25cdec9c3fafe2b075b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bartik, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Feb 25 08:05:58 np0005629333 systemd[1]: var-lib-containers-storage-overlay-92b92eece74bae07cae4c0368b4de6007e84249d40fff741ca7b90f01a332e81-merged.mount: Deactivated successfully.
Feb 25 08:05:58 np0005629333 podman[384825]: 2026-02-25 13:05:58.763837631 +0000 UTC m=+0.219512040 container remove 502cefe468ad4ee09d4c272db50a2fcf7fd238fa7d0f25cdec9c3fafe2b075b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bartik, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 08:05:58 np0005629333 systemd[1]: libpod-conmon-502cefe468ad4ee09d4c272db50a2fcf7fd238fa7d0f25cdec9c3fafe2b075b3.scope: Deactivated successfully.
Feb 25 08:05:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:05:58 np0005629333 podman[384867]: 2026-02-25 13:05:58.946805641 +0000 UTC m=+0.063463261 container create fcea88edada0c4d3ca24db8ba97a01bd68ab9511098023efa745ab717fad639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_carver, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:05:58 np0005629333 systemd[1]: Started libpod-conmon-fcea88edada0c4d3ca24db8ba97a01bd68ab9511098023efa745ab717fad639a.scope.
Feb 25 08:05:59 np0005629333 podman[384867]: 2026-02-25 13:05:58.914429338 +0000 UTC m=+0.031087018 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:05:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2573: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 08:05:59 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:05:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bca8dd9da628a506c6dc931924660f564597705bf3a7c5adafdc574dc2b0f75d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:05:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bca8dd9da628a506c6dc931924660f564597705bf3a7c5adafdc574dc2b0f75d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:05:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bca8dd9da628a506c6dc931924660f564597705bf3a7c5adafdc574dc2b0f75d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:05:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bca8dd9da628a506c6dc931924660f564597705bf3a7c5adafdc574dc2b0f75d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
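
[annotation] The kernel notes that the XFS filesystem backing these overlay mounts only supports timestamps until 2038, i.e. it was created without the bigtime feature. A sketch to check a given filesystem, assuming xfs_info is installed and /var/lib/containers is the mount of interest (both assumptions):

    # Sketch: check whether an XFS filesystem was created with bigtime
    # (timestamps beyond 2038). The mount point is a placeholder.
    import subprocess

    out = subprocess.run(
        ["xfs_info", "/var/lib/containers"],
        capture_output=True, text=True, check=True,
    ).stdout
    print("bigtime enabled" if "bigtime=1" in out
          else "bigtime disabled (y2038 timestamp limit)")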
Feb 25 08:05:59 np0005629333 podman[384867]: 2026-02-25 13:05:59.06026293 +0000 UTC m=+0.176920600 container init fcea88edada0c4d3ca24db8ba97a01bd68ab9511098023efa745ab717fad639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:05:59 np0005629333 podman[384867]: 2026-02-25 13:05:59.070111438 +0000 UTC m=+0.186769078 container start fcea88edada0c4d3ca24db8ba97a01bd68ab9511098023efa745ab717fad639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_carver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:05:59 np0005629333 podman[384867]: 2026-02-25 13:05:59.126596751 +0000 UTC m=+0.243254371 container attach fcea88edada0c4d3ca24db8ba97a01bd68ab9511098023efa745ab717fad639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_carver, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:05:59 np0005629333 nova_compute[244014]: 2026-02-25 13:05:59.593 244018 DEBUG oslo_concurrency.lockutils [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Acquiring lock "44f7664a-bc33-4251-9857-884c305df488" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:05:59 np0005629333 nova_compute[244014]: 2026-02-25 13:05:59.594 244018 DEBUG oslo_concurrency.lockutils [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lock "44f7664a-bc33-4251-9857-884c305df488" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:05:59 np0005629333 nova_compute[244014]: 2026-02-25 13:05:59.595 244018 DEBUG oslo_concurrency.lockutils [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Acquiring lock "44f7664a-bc33-4251-9857-884c305df488-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:05:59 np0005629333 nova_compute[244014]: 2026-02-25 13:05:59.595 244018 DEBUG oslo_concurrency.lockutils [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lock "44f7664a-bc33-4251-9857-884c305df488-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:05:59 np0005629333 nova_compute[244014]: 2026-02-25 13:05:59.595 244018 DEBUG oslo_concurrency.lockutils [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lock "44f7664a-bc33-4251-9857-884c305df488-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:05:59 np0005629333 nova_compute[244014]: 2026-02-25 13:05:59.596 244018 INFO nova.compute.manager [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Terminating instance#033[00m
Feb 25 08:05:59 np0005629333 nova_compute[244014]: 2026-02-25 13:05:59.597 244018 DEBUG oslo_concurrency.lockutils [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Acquiring lock "refresh_cache-44f7664a-bc33-4251-9857-884c305df488" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Feb 25 08:05:59 np0005629333 nova_compute[244014]: 2026-02-25 13:05:59.597 244018 DEBUG oslo_concurrency.lockutils [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Acquired lock "refresh_cache-44f7664a-bc33-4251-9857-884c305df488" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Feb 25 08:05:59 np0005629333 nova_compute[244014]: 2026-02-25 13:05:59.597 244018 DEBUG nova.network.neutron [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Feb 25 08:05:59 np0005629333 lvm[384958]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:05:59 np0005629333 lvm[384958]: VG ceph_vg0 finished
Feb 25 08:05:59 np0005629333 lvm[384961]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:05:59 np0005629333 lvm[384961]: VG ceph_vg1 finished
Feb 25 08:05:59 np0005629333 lvm[384962]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:05:59 np0005629333 lvm[384962]: VG ceph_vg2 finished
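
[annotation] lvm's event-driven autoactivation has just brought all three ceph volume groups online (one PV each, on /dev/loop3..5). A sketch to confirm the activated LVs via lvs' JSON report (the --reportformat json option is part of lvm2):

    # Sketch: list the ceph_vg* logical volumes via lvs' JSON report.
    # Output shape {"report": [{"lv": [...]}]} follows lvm2's reporter.
    import json
    import subprocess

    report = json.loads(subprocess.run(
        ["lvs", "--reportformat", "json",
         "-o", "lv_name,vg_name,lv_size,lv_tags"],
        capture_output=True, text=True, check=True,
    ).stdout)

    for lv in report["report"][0]["lv"]:
        if lv["vg_name"].startswith("ceph_vg"):
            print(lv["vg_name"], lv["lv_name"], lv["lv_size"])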
Feb 25 08:05:59 np0005629333 elated_carver[384883]: {}
Feb 25 08:05:59 np0005629333 systemd[1]: libpod-fcea88edada0c4d3ca24db8ba97a01bd68ab9511098023efa745ab717fad639a.scope: Deactivated successfully.
Feb 25 08:05:59 np0005629333 podman[384867]: 2026-02-25 13:05:59.882896907 +0000 UTC m=+0.999554547 container died fcea88edada0c4d3ca24db8ba97a01bd68ab9511098023efa745ab717fad639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_carver, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:05:59 np0005629333 systemd[1]: libpod-fcea88edada0c4d3ca24db8ba97a01bd68ab9511098023efa745ab717fad639a.scope: Consumed 1.070s CPU time.
Feb 25 08:05:59 np0005629333 systemd[1]: var-lib-containers-storage-overlay-bca8dd9da628a506c6dc931924660f564597705bf3a7c5adafdc574dc2b0f75d-merged.mount: Deactivated successfully.
Feb 25 08:05:59 np0005629333 podman[384867]: 2026-02-25 13:05:59.981105687 +0000 UTC m=+1.097763287 container remove fcea88edada0c4d3ca24db8ba97a01bd68ab9511098023efa745ab717fad639a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:06:00 np0005629333 systemd[1]: libpod-conmon-fcea88edada0c4d3ca24db8ba97a01bd68ab9511098023efa745ab717fad639a.scope: Deactivated successfully.
Feb 25 08:06:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:06:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:06:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:06:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:06:00 np0005629333 nova_compute[244014]: 2026-02-25 13:06:00.169 244018 DEBUG nova.network.neutron [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 08:06:00 np0005629333 nova_compute[244014]: 2026-02-25 13:06:00.455 244018 DEBUG nova.network.neutron [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:06:00 np0005629333 nova_compute[244014]: 2026-02-25 13:06:00.471 244018 DEBUG oslo_concurrency.lockutils [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Releasing lock "refresh_cache-44f7664a-bc33-4251-9857-884c305df488" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Feb 25 08:06:00 np0005629333 nova_compute[244014]: 2026-02-25 13:06:00.471 244018 DEBUG nova.compute.manager [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Feb 25 08:06:00 np0005629333 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d00000099.scope: Deactivated successfully.
Feb 25 08:06:00 np0005629333 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d00000099.scope: Consumed 2.765s CPU time.
Feb 25 08:06:00 np0005629333 systemd-machined[210048]: Machine qemu-187-instance-00000099 terminated.
Feb 25 08:06:00 np0005629333 nova_compute[244014]: 2026-02-25 13:06:00.699 244018 INFO nova.virt.libvirt.driver [-] [instance: 44f7664a-bc33-4251-9857-884c305df488] Instance destroyed successfully.#033[00m
Feb 25 08:06:00 np0005629333 nova_compute[244014]: 2026-02-25 13:06:00.700 244018 DEBUG nova.objects.instance [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lazy-loading 'resources' on Instance uuid 44f7664a-bc33-4251-9857-884c305df488 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Feb 25 08:06:00 np0005629333 nova_compute[244014]: 2026-02-25 13:06:00.764 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:06:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2574: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Feb 25 08:06:01 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:06:01 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:06:01 np0005629333 nova_compute[244014]: 2026-02-25 13:06:01.564 244018 INFO nova.virt.libvirt.driver [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Deleting instance files /var/lib/nova/instances/44f7664a-bc33-4251-9857-884c305df488_del#033[00m
Feb 25 08:06:01 np0005629333 nova_compute[244014]: 2026-02-25 13:06:01.565 244018 INFO nova.virt.libvirt.driver [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Deletion of /var/lib/nova/instances/44f7664a-bc33-4251-9857-884c305df488_del complete#033[00m
Feb 25 08:06:01 np0005629333 nova_compute[244014]: 2026-02-25 13:06:01.611 244018 INFO nova.compute.manager [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] [instance: 44f7664a-bc33-4251-9857-884c305df488] Took 1.14 seconds to destroy the instance on the hypervisor.#033[00m
Feb 25 08:06:01 np0005629333 nova_compute[244014]: 2026-02-25 13:06:01.612 244018 DEBUG oslo.service.loopingcall [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Feb 25 08:06:01 np0005629333 nova_compute[244014]: 2026-02-25 13:06:01.612 244018 DEBUG nova.compute.manager [-] [instance: 44f7664a-bc33-4251-9857-884c305df488] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Feb 25 08:06:01 np0005629333 nova_compute[244014]: 2026-02-25 13:06:01.612 244018 DEBUG nova.network.neutron [-] [instance: 44f7664a-bc33-4251-9857-884c305df488] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Feb 25 08:06:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:06:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:06:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:06:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:06:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:06:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:06:02 np0005629333 nova_compute[244014]: 2026-02-25 13:06:02.131 244018 DEBUG nova.network.neutron [-] [instance: 44f7664a-bc33-4251-9857-884c305df488] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Feb 25 08:06:02 np0005629333 nova_compute[244014]: 2026-02-25 13:06:02.152 244018 DEBUG nova.network.neutron [-] [instance: 44f7664a-bc33-4251-9857-884c305df488] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Feb 25 08:06:02 np0005629333 nova_compute[244014]: 2026-02-25 13:06:02.169 244018 INFO nova.compute.manager [-] [instance: 44f7664a-bc33-4251-9857-884c305df488] Took 0.56 seconds to deallocate network for instance.#033[00m
Feb 25 08:06:02 np0005629333 nova_compute[244014]: 2026-02-25 13:06:02.217 244018 DEBUG oslo_concurrency.lockutils [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:06:02 np0005629333 nova_compute[244014]: 2026-02-25 13:06:02.218 244018 DEBUG oslo_concurrency.lockutils [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:06:02 np0005629333 nova_compute[244014]: 2026-02-25 13:06:02.301 244018 DEBUG oslo_concurrency.processutils [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:06:02 np0005629333 nova_compute[244014]: 2026-02-25 13:06:02.378 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:06:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:06:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3510927282' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:06:02 np0005629333 nova_compute[244014]: 2026-02-25 13:06:02.850 244018 DEBUG oslo_concurrency.processutils [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
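
[annotation] Nova's resource tracker sizes its DISK_GB inventory from the `ceph df --format=json` subprocess shown above (and the mon audit log records the matching dispatch from client.openstack). A sketch of the same probe; the field names follow the ceph df JSON schema (stats.total_bytes, stats.total_avail_bytes):

    # Sketch: the capacity probe nova runs above, reading cluster-wide
    # totals from ceph df's JSON output. Requires the openstack keyring.
    import json
    import subprocess

    df = json.loads(subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout)

    total = df["stats"]["total_bytes"]
    avail = df["stats"]["total_avail_bytes"]
    print(f"{avail / 2**30:.0f} GiB free of {total / 2**30:.0f} GiB")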
Feb 25 08:06:02 np0005629333 nova_compute[244014]: 2026-02-25 13:06:02.857 244018 DEBUG nova.compute.provider_tree [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:06:02 np0005629333 nova_compute[244014]: 2026-02-25 13:06:02.889 244018 DEBUG nova.scheduler.client.report [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:06:02 np0005629333 nova_compute[244014]: 2026-02-25 13:06:02.924 244018 DEBUG oslo_concurrency.lockutils [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:06:02 np0005629333 nova_compute[244014]: 2026-02-25 13:06:02.964 244018 INFO nova.scheduler.client.report [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Deleted allocations for instance 44f7664a-bc33-4251-9857-884c305df488#033[00m
Feb 25 08:06:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2575: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Feb 25 08:06:03 np0005629333 nova_compute[244014]: 2026-02-25 13:06:03.052 244018 DEBUG oslo_concurrency.lockutils [None req-8ca19ab8-053b-4d75-8814-f95430293431 c4171ec1b0ec456bbea3ceb6441b3e0c a573180da6e44eebadfb343d41eb8105 - - default default] Lock "44f7664a-bc33-4251-9857-884c305df488" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.458s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:06:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:06:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2576: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Feb 25 08:06:05 np0005629333 nova_compute[244014]: 2026-02-25 13:06:05.766 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:06:06 np0005629333 nova_compute[244014]: 2026-02-25 13:06:06.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:06:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2577: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Feb 25 08:06:07 np0005629333 nova_compute[244014]: 2026-02-25 13:06:07.382 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:06:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:06:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2578: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 727 KiB/s wr, 101 op/s
Feb 25 08:06:10 np0005629333 nova_compute[244014]: 2026-02-25 13:06:10.768 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:06:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2579: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Feb 25 08:06:12 np0005629333 nova_compute[244014]: 2026-02-25 13:06:12.386 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:06:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2580: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Feb 25 08:06:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:06:14 np0005629333 ovn_controller[147040]: 2026-02-25T13:06:14Z|01621|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Feb 25 08:06:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2581: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 426 B/s wr, 1 op/s
Feb 25 08:06:15 np0005629333 nova_compute[244014]: 2026-02-25 13:06:15.697 244018 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772024760.6957903, 44f7664a-bc33-4251-9857-884c305df488 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Feb 25 08:06:15 np0005629333 nova_compute[244014]: 2026-02-25 13:06:15.697 244018 INFO nova.compute.manager [-] [instance: 44f7664a-bc33-4251-9857-884c305df488] VM Stopped (Lifecycle Event)#033[00m
Feb 25 08:06:15 np0005629333 nova_compute[244014]: 2026-02-25 13:06:15.756 244018 DEBUG nova.compute.manager [None req-eb285364-1e0b-4e25-b374-3e758321ec70 - - - - - -] [instance: 44f7664a-bc33-4251-9857-884c305df488] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Feb 25 08:06:15 np0005629333 nova_compute[244014]: 2026-02-25 13:06:15.769 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:06:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2582: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s rd, 426 B/s wr, 1 op/s
Feb 25 08:06:17 np0005629333 nova_compute[244014]: 2026-02-25 13:06:17.389 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:06:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:06:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2583: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:06:20 np0005629333 nova_compute[244014]: 2026-02-25 13:06:20.772 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:06:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2584: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:06:22 np0005629333 nova_compute[244014]: 2026-02-25 13:06:22.393 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:06:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2585: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 08:06:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:06:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2586: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 08:06:25 np0005629333 nova_compute[244014]: 2026-02-25 13:06:25.772 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:06:26 np0005629333 podman[385046]: 2026-02-25 13:06:26.742737679 +0000 UTC m=+0.074419949 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 25 08:06:26 np0005629333 podman[385047]: 2026-02-25 13:06:26.777596242 +0000 UTC m=+0.109325594 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
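
The two health_status events above are podman's healthcheck timers running the configured '/openstack/healthcheck' test inside each container (see the 'healthcheck' key in config_data). A minimal sketch of triggering the same check by hand and reading back the recorded state, assuming podman 4.x on PATH and the container name from the log:

    import json
    import subprocess

    def health(name: str) -> str:
        # Run the container's configured healthcheck once; exit code 0
        # means healthy (same test the systemd timer runs periodically).
        subprocess.run(["podman", "healthcheck", "run", name], check=False)
        # Read the recorded state; on older podman the field may be
        # .State.Healthcheck instead of .State.Health (assumption).
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", name],
            capture_output=True, text=True, check=True).stdout
        return json.loads(out)["Status"]

    print(health("ovn_controller"))  # expected: healthy, per the log
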
Feb 25 08:06:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2587: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 08:06:27 np0005629333 nova_compute[244014]: 2026-02-25 13:06:27.396 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:06:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:06:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2588: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 08:06:30 np0005629333 nova_compute[244014]: 2026-02-25 13:06:30.774 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:06:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2589: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 08:06:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:06:31
Feb 25 08:06:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:06:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:06:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'backups']
Feb 25 08:06:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
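
The balancer block above is one scheduled pass: plan auto_2026-02-25_13:06:31 in upmap mode with a 5% max-misplaced budget, scanning the eleven listed pools and preparing 0 of a possible 10 upmap changes because the 305 PGs are already balanced. A sketch of inspecting that state from a client with the standard ceph CLI (cluster auth/config resolution left to the environment):

    import json
    import subprocess

    def ceph_json(*args):
        out = subprocess.run(["ceph", *args, "--format", "json"],
                             capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    status = ceph_json("balancer", "status")
    print(status["mode"], status["active"])   # e.g. upmap True
    # 'ceph balancer eval' scores the current distribution; lower is better.
    print(subprocess.run(["ceph", "balancer", "eval"], capture_output=True,
                         text=True, check=True).stdout.strip())
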
Feb 25 08:06:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:06:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:06:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:06:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:06:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:06:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:06:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:06:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:06:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:06:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:06:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:06:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:06:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:06:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:06:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:06:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
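
Each rbd_support handler (MirrorSnapshotScheduleHandler, TrashPurgeScheduleHandler) reloads its schedules on a timer, which is why every rbd pool (vms, volumes, backups, images) is logged twice; the empty start_after= indicates no schedule entries were found. A sketch of listing any configured mirror snapshot schedules for one pool with the standard rbd CLI (treat the exact flag spelling as an assumption):

    import json
    import subprocess

    def snapshot_schedules(pool: str):
        out = subprocess.run(
            ["rbd", "mirror", "snapshot", "schedule", "ls",
             "--pool", pool, "--recursive", "--format", "json"],
            capture_output=True, text=True, check=True).stdout
        return json.loads(out) if out.strip() else []

    print(snapshot_schedules("vms"))  # [] would match the empty start_after=
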
Feb 25 08:06:32 np0005629333 nova_compute[244014]: 2026-02-25 13:06:32.400 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:06:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2590: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 08:06:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:06:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:06:34.555 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 08:06:34 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:06:34.556 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
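
These two lines are the metadata agent's SB_Global watcher: nb_cfg moved from 52 to 53, and the agent deliberately waits before echoing the new value into its Chassis_Private row (the DbSetCommand two seconds later). The shape of such a watcher in ovsdbapp, as a sketch (class and table names from the log; the handler body and agent callback are illustrative):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        """Fires on updates to the single SB_Global row."""

        def __init__(self, agent):
            table = 'SB_Global'
            events = (self.ROW_UPDATE,)
            super().__init__(events, table, None)
            self.event_name = self.__class__.__name__
            self.agent = agent

        def run(self, event, row, old):
            # row.nb_cfg is the new value (53 in the log); the real agent
            # delays before writing it back to Chassis_Private.
            self.agent.sync_sb_cfg(row.nb_cfg)
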
Feb 25 08:06:34 np0005629333 nova_compute[244014]: 2026-02-25 13:06:34.556 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:06:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2591: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:06:35 np0005629333 nova_compute[244014]: 2026-02-25 13:06:35.776 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:06:36 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:06:36.558 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
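
The delayed write lands here: one transaction containing a single DbSetCommand that merges {'neutron:ovn-metadata-sb-cfg': '53'} into external_ids of the agent's Chassis_Private record. Roughly the same call through ovsdbapp's public API, as a sketch (connection setup elided; 'api' is assumed to be an ovsdbapp API object bound to the OVN southbound DB, and the if_exists passthrough is an assumption):

    record = 'a594384c-d614-4492-9e0a-4d6ec095920c'  # UUID from the log line
    with api.transaction(check_error=True) as txn:
        txn.add(api.db_set(
            'Chassis_Private', record,
            ('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),
            if_exists=True))
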
Feb 25 08:06:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2592: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:06:37 np0005629333 nova_compute[244014]: 2026-02-25 13:06:37.404 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:06:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:06:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2593: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:06:40 np0005629333 nova_compute[244014]: 2026-02-25 13:06:40.778 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:06:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2594: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:06:41 np0005629333 nova_compute[244014]: 2026-02-25 13:06:41.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:06:41 np0005629333 nova_compute[244014]: 2026-02-25 13:06:41.903 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:06:41 np0005629333 nova_compute[244014]: 2026-02-25 13:06:41.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:06:41 np0005629333 nova_compute[244014]: 2026-02-25 13:06:41.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:06:41 np0005629333 nova_compute[244014]: 2026-02-25 13:06:41.905 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 08:06:41 np0005629333 nova_compute[244014]: 2026-02-25 13:06:41.905 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:06:42 np0005629333 nova_compute[244014]: 2026-02-25 13:06:42.407 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:06:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:06:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1948775156' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:06:42 np0005629333 nova_compute[244014]: 2026-02-25 13:06:42.465 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
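
The resource audit sizes its RBD-backed storage by shelling out to ceph df (the mon_command dispatch above is the server side of the same call); the free_disk value reported two lines below is derived from these pool stats. The equivalent probe, with the command line copied from the log and standard ceph df JSON keys:

    import json
    import subprocess

    CMD = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    raw = subprocess.run(CMD, capture_output=True, text=True, check=True).stdout
    stats = json.loads(raw)["stats"]
    print("free GiB:", stats["total_avail_bytes"] / 1024 ** 3)   # ~59.99
    print("total GiB:", stats["total_bytes"] / 1024 ** 3)        # ~60
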
Feb 25 08:06:42 np0005629333 nova_compute[244014]: 2026-02-25 13:06:42.679 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 08:06:42 np0005629333 nova_compute[244014]: 2026-02-25 13:06:42.681 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3527MB free_disk=59.98725803941488GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 08:06:42 np0005629333 nova_compute[244014]: 2026-02-25 13:06:42.682 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:06:42 np0005629333 nova_compute[244014]: 2026-02-25 13:06:42.683 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:06:42 np0005629333 nova_compute[244014]: 2026-02-25 13:06:42.754 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 08:06:42 np0005629333 nova_compute[244014]: 2026-02-25 13:06:42.754 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 08:06:42 np0005629333 nova_compute[244014]: 2026-02-25 13:06:42.772 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:06:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:06:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:06:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:06:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:06:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.705684116624595e-05 of space, bias 1.0, pg target 0.005117052349873786 quantized to 32 (current 32)
Feb 25 08:06:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:06:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:06:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:06:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:06:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:06:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002494511648089794 of space, bias 1.0, pg target 0.7483534944269382 quantized to 32 (current 32)
Feb 25 08:06:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:06:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4068823102287942e-06 of space, bias 4.0, pg target 0.001688258772274553 quantized to 16 (current 16)
Feb 25 08:06:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:06:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:06:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:06:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:06:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:06:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:06:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:06:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:06:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:06:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
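
Every pg_autoscaler line above is the same calculation: pg_target = space_ratio * bias * PG_budget, where the budget works out to 300 (consistent with the default mon_target_pg_per_osd of 100 times 3 OSDs; the budget itself is not printed, so treat it as inferred). The result is then quantized to a power of two, keeping the current pg_num when the change is small. Checking two of the logged values:

    ratios = {
        '.mgr': (7.185749983720779e-06, 1.0),
        'cephfs.cephfs.meta': (1.4068823102287942e-06, 4.0),
    }
    BUDGET = 300  # inferred: mon_target_pg_per_osd (100) * 3 OSDs

    for pool, (ratio, bias) in ratios.items():
        print(pool, ratio * bias * BUDGET)
    # .mgr 0.0021557249951162337              -> quantized to 1 (current 1)
    # cephfs.cephfs.meta 0.001688258772274553 -> quantized to 16 (current 16)
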
Feb 25 08:06:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2595: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:06:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:06:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/375962641' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:06:43 np0005629333 nova_compute[244014]: 2026-02-25 13:06:43.315 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:06:43 np0005629333 nova_compute[244014]: 2026-02-25 13:06:43.325 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:06:43 np0005629333 nova_compute[244014]: 2026-02-25 13:06:43.346 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 08:06:43 np0005629333 nova_compute[244014]: 2026-02-25 13:06:43.379 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 08:06:43 np0005629333 nova_compute[244014]: 2026-02-25 13:06:43.379 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
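
The whole update_available_resource pass ran under the "compute_resources" lock; the Acquiring/acquired/released lines are emitted by oslo.concurrency itself, with the waited/held durations appended. The same pattern as a sketch (lock name from the log; the function body is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        ...  # audit hypervisor resources, then sync placement inventory

    # or inline, matching the acquire/release pairs in the log:
    with lockutils.lock('compute_resources'):
        pass  # critical section
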
Feb 25 08:06:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:06:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2596: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:06:45 np0005629333 nova_compute[244014]: 2026-02-25 13:06:45.381 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:06:45 np0005629333 nova_compute[244014]: 2026-02-25 13:06:45.381 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 08:06:45 np0005629333 nova_compute[244014]: 2026-02-25 13:06:45.381 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 08:06:45 np0005629333 nova_compute[244014]: 2026-02-25 13:06:45.406 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 08:06:45 np0005629333 nova_compute[244014]: 2026-02-25 13:06:45.780 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:06:46 np0005629333 nova_compute[244014]: 2026-02-25 13:06:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:06:46 np0005629333 nova_compute[244014]: 2026-02-25 13:06:46.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:06:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2597: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:06:47 np0005629333 nova_compute[244014]: 2026-02-25 13:06:47.410 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:06:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:06:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/939471437' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:06:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:06:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/939471437' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:06:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:06:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2598: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:06:49 np0005629333 nova_compute[244014]: 2026-02-25 13:06:49.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:06:50 np0005629333 nova_compute[244014]: 2026-02-25 13:06:50.782 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:06:50 np0005629333 nova_compute[244014]: 2026-02-25 13:06:50.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:06:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2599: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:06:51 np0005629333 nova_compute[244014]: 2026-02-25 13:06:51.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:06:52 np0005629333 nova_compute[244014]: 2026-02-25 13:06:52.414 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:06:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2600: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:06:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:06:53 np0005629333 nova_compute[244014]: 2026-02-25 13:06:53.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:06:53 np0005629333 nova_compute[244014]: 2026-02-25 13:06:53.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 08:06:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:06:55.045 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:06:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:06:55.045 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:06:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:06:55.045 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:06:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2601: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:06:55 np0005629333 nova_compute[244014]: 2026-02-25 13:06:55.784 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:06:56 np0005629333 nova_compute[244014]: 2026-02-25 13:06:56.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:06:56 np0005629333 nova_compute[244014]: 2026-02-25 13:06:56.878 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:06:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2602: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:06:57 np0005629333 nova_compute[244014]: 2026-02-25 13:06:57.418 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:06:57 np0005629333 podman[385135]: 2026-02-25 13:06:57.722634904 +0000 UTC m=+0.059469498 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 08:06:57 np0005629333 podman[385136]: 2026-02-25 13:06:57.747213917 +0000 UTC m=+0.083945268 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Feb 25 08:06:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:06:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2603: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:06:59 np0005629333 ovn_controller[147040]: 2026-02-25T13:06:59Z|01622|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Feb 25 08:07:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:07:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:07:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:07:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:07:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:07:00 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:07:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:07:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:07:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:07:00 np0005629333 nova_compute[244014]: 2026-02-25 13:07:00.786 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:07:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:07:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:07:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:07:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:07:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2604: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:07:01 np0005629333 podman[385324]: 2026-02-25 13:07:01.238001733 +0000 UTC m=+0.085210533 container create dcd689f82f30c43a9090bb9a706b4787e37c558f14eab689bb424e763dcbeba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:07:01 np0005629333 podman[385324]: 2026-02-25 13:07:01.192434188 +0000 UTC m=+0.039643038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:07:01 np0005629333 systemd[1]: Started libpod-conmon-dcd689f82f30c43a9090bb9a706b4787e37c558f14eab689bb424e763dcbeba6.scope.
Feb 25 08:07:01 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:07:01 np0005629333 podman[385324]: 2026-02-25 13:07:01.362872225 +0000 UTC m=+0.210081085 container init dcd689f82f30c43a9090bb9a706b4787e37c558f14eab689bb424e763dcbeba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:07:01 np0005629333 podman[385324]: 2026-02-25 13:07:01.368868654 +0000 UTC m=+0.216077484 container start dcd689f82f30c43a9090bb9a706b4787e37c558f14eab689bb424e763dcbeba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Feb 25 08:07:01 np0005629333 systemd[1]: libpod-dcd689f82f30c43a9090bb9a706b4787e37c558f14eab689bb424e763dcbeba6.scope: Deactivated successfully.
Feb 25 08:07:01 np0005629333 peaceful_lovelace[385341]: 167 167
Feb 25 08:07:01 np0005629333 conmon[385341]: conmon dcd689f82f30c43a9090 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dcd689f82f30c43a9090bb9a706b4787e37c558f14eab689bb424e763dcbeba6.scope/container/memory.events
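
The conmon <nwarn> is benign here: the one-shot container exits so quickly that its cgroup, including the memory.events file conmon polls for OOM counts, disappears before conmon can open it. memory.events is a flat key/value file under cgroup v2; a tiny parser for reference (the path only exists while the scope is alive):

    def memory_events(path):
        # cgroup v2 memory.events: lines like "oom 0", "oom_kill 0"
        with open(path) as fh:
            return {k: int(v) for k, v in (line.split() for line in fh)}

    # e.g. memory_events("/sys/fs/cgroup/machine.slice/<scope>/container/memory.events")
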
Feb 25 08:07:01 np0005629333 podman[385324]: 2026-02-25 13:07:01.401862434 +0000 UTC m=+0.249071294 container attach dcd689f82f30c43a9090bb9a706b4787e37c558f14eab689bb424e763dcbeba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:07:01 np0005629333 podman[385324]: 2026-02-25 13:07:01.402519863 +0000 UTC m=+0.249728693 container died dcd689f82f30c43a9090bb9a706b4787e37c558f14eab689bb424e763dcbeba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:07:01 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f2db2506ae02e201926c0a2351d92c9747f9be73b5069a1389544ae506a8fe2c-merged.mount: Deactivated successfully.
Feb 25 08:07:01 np0005629333 podman[385324]: 2026-02-25 13:07:01.480146402 +0000 UTC m=+0.327355192 container remove dcd689f82f30c43a9090bb9a706b4787e37c558f14eab689bb424e763dcbeba6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:07:01 np0005629333 systemd[1]: libpod-conmon-dcd689f82f30c43a9090bb9a706b4787e37c558f14eab689bb424e763dcbeba6.scope: Deactivated successfully.
Feb 25 08:07:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:07:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:07:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:07:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:07:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:07:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:07:01 np0005629333 podman[385365]: 2026-02-25 13:07:01.620400027 +0000 UTC m=+0.030371358 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:07:01 np0005629333 podman[385365]: 2026-02-25 13:07:01.716391334 +0000 UTC m=+0.126362675 container create 558b0f8c67d4982d2cdb7ffd0a44869bd3d0b39a60f7fe35a067251e949c895b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mestorf, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 08:07:01 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:07:01 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:07:01 np0005629333 systemd[1]: Started libpod-conmon-558b0f8c67d4982d2cdb7ffd0a44869bd3d0b39a60f7fe35a067251e949c895b.scope.
Feb 25 08:07:01 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:07:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50c7728839bc4f4ffbe64b137e24a3c18d667c3b1a3729835ec3607503dcf370/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:07:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50c7728839bc4f4ffbe64b137e24a3c18d667c3b1a3729835ec3607503dcf370/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:07:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50c7728839bc4f4ffbe64b137e24a3c18d667c3b1a3729835ec3607503dcf370/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:07:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50c7728839bc4f4ffbe64b137e24a3c18d667c3b1a3729835ec3607503dcf370/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:07:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50c7728839bc4f4ffbe64b137e24a3c18d667c3b1a3729835ec3607503dcf370/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
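
The xfs warnings above are informational: those overlay filesystems carry 32-bit timestamps, so the last representable instant is 0x7fffffff = 2^31 - 1 = 2147483647 seconds after the Unix epoch:

    from datetime import datetime, timezone

    limit = 0x7FFFFFFF  # 2147483647, from the kernel message
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
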
Feb 25 08:07:01 np0005629333 podman[385365]: 2026-02-25 13:07:01.92299675 +0000 UTC m=+0.332968131 container init 558b0f8c67d4982d2cdb7ffd0a44869bd3d0b39a60f7fe35a067251e949c895b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mestorf, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Feb 25 08:07:01 np0005629333 podman[385365]: 2026-02-25 13:07:01.931921221 +0000 UTC m=+0.341892562 container start 558b0f8c67d4982d2cdb7ffd0a44869bd3d0b39a60f7fe35a067251e949c895b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mestorf, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 08:07:01 np0005629333 podman[385365]: 2026-02-25 13:07:01.93825373 +0000 UTC m=+0.348225131 container attach 558b0f8c67d4982d2cdb7ffd0a44869bd3d0b39a60f7fe35a067251e949c895b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:07:02 np0005629333 wonderful_mestorf[385382]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:07:02 np0005629333 wonderful_mestorf[385382]: --> All data devices are unavailable
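
This short-lived pair of containers is cephadm's periodic device scan: ceph-volume inside a throwaway ceph image reports 0 physical and 3 LVM data devices and concludes none are available for new OSDs (the three LVs already back the existing OSDs, matching "3 total, 3 up, 3 in" below). A hand-run equivalent of that probe, as a sketch (image digest from the log; cephadm's exact invocation differs, and 'ceph-volume inventory' is used here as the standard reporting subcommand):

    import json
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    out = subprocess.run(
        ["podman", "run", "--rm", "--privileged", "-v", "/dev:/dev",
         IMAGE, "ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    for dev in json.loads(out):
        print(dev["path"], "available" if dev["available"] else "unavailable")
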
Feb 25 08:07:02 np0005629333 systemd[1]: libpod-558b0f8c67d4982d2cdb7ffd0a44869bd3d0b39a60f7fe35a067251e949c895b.scope: Deactivated successfully.
Feb 25 08:07:02 np0005629333 conmon[385382]: conmon 558b0f8c67d4982d2cdb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-558b0f8c67d4982d2cdb7ffd0a44869bd3d0b39a60f7fe35a067251e949c895b.scope/container/memory.events
Feb 25 08:07:02 np0005629333 nova_compute[244014]: 2026-02-25 13:07:02.422 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:07:02 np0005629333 podman[385365]: 2026-02-25 13:07:02.425137829 +0000 UTC m=+0.835109170 container died 558b0f8c67d4982d2cdb7ffd0a44869bd3d0b39a60f7fe35a067251e949c895b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mestorf, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 08:07:02 np0005629333 systemd[1]: var-lib-containers-storage-overlay-50c7728839bc4f4ffbe64b137e24a3c18d667c3b1a3729835ec3607503dcf370-merged.mount: Deactivated successfully.
Feb 25 08:07:02 np0005629333 podman[385365]: 2026-02-25 13:07:02.535705097 +0000 UTC m=+0.945676408 container remove 558b0f8c67d4982d2cdb7ffd0a44869bd3d0b39a60f7fe35a067251e949c895b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:07:02 np0005629333 systemd[1]: libpod-conmon-558b0f8c67d4982d2cdb7ffd0a44869bd3d0b39a60f7fe35a067251e949c895b.scope: Deactivated successfully.
Feb 25 08:07:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Feb 25 08:07:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Feb 25 08:07:02 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Feb 25 08:07:03 np0005629333 podman[385479]: 2026-02-25 13:07:03.002667754 +0000 UTC m=+0.047603392 container create 2271ac3add1b369aa4ffc4d50cad3d32285009e828250797219607a86e200ff8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:07:03 np0005629333 systemd[1]: Started libpod-conmon-2271ac3add1b369aa4ffc4d50cad3d32285009e828250797219607a86e200ff8.scope.
Feb 25 08:07:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2606: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:07:03 np0005629333 podman[385479]: 2026-02-25 13:07:02.985242874 +0000 UTC m=+0.030178502 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:07:03 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:07:03 np0005629333 podman[385479]: 2026-02-25 13:07:03.109211349 +0000 UTC m=+0.154146987 container init 2271ac3add1b369aa4ffc4d50cad3d32285009e828250797219607a86e200ff8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 08:07:03 np0005629333 podman[385479]: 2026-02-25 13:07:03.118520331 +0000 UTC m=+0.163455929 container start 2271ac3add1b369aa4ffc4d50cad3d32285009e828250797219607a86e200ff8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:07:03 np0005629333 podman[385479]: 2026-02-25 13:07:03.122651368 +0000 UTC m=+0.167587056 container attach 2271ac3add1b369aa4ffc4d50cad3d32285009e828250797219607a86e200ff8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 08:07:03 np0005629333 confident_engelbart[385495]: 167 167
Feb 25 08:07:03 np0005629333 systemd[1]: libpod-2271ac3add1b369aa4ffc4d50cad3d32285009e828250797219607a86e200ff8.scope: Deactivated successfully.
Feb 25 08:07:03 np0005629333 podman[385479]: 2026-02-25 13:07:03.125139708 +0000 UTC m=+0.170075316 container died 2271ac3add1b369aa4ffc4d50cad3d32285009e828250797219607a86e200ff8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_engelbart, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:07:03 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5de943fdc16be057fc3e9a5c6d1cec65a5110d98a7719a2ae37d5b543e303a4a-merged.mount: Deactivated successfully.
Feb 25 08:07:03 np0005629333 podman[385479]: 2026-02-25 13:07:03.169155659 +0000 UTC m=+0.214091287 container remove 2271ac3add1b369aa4ffc4d50cad3d32285009e828250797219607a86e200ff8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 08:07:03 np0005629333 systemd[1]: libpod-conmon-2271ac3add1b369aa4ffc4d50cad3d32285009e828250797219607a86e200ff8.scope: Deactivated successfully.
Feb 25 08:07:03 np0005629333 podman[385520]: 2026-02-25 13:07:03.350278256 +0000 UTC m=+0.061995109 container create 5615a0fef2de071fbb22163bab50f114759476bec16d420e56e1b583c9a36a93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 08:07:03 np0005629333 systemd[1]: Started libpod-conmon-5615a0fef2de071fbb22163bab50f114759476bec16d420e56e1b583c9a36a93.scope.
Feb 25 08:07:03 np0005629333 podman[385520]: 2026-02-25 13:07:03.323671046 +0000 UTC m=+0.035387929 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:07:03 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:07:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/350dce8a289b8ac8ac03656986f9d1bc9fc66b4a8f58ab1485bd8fcf6596322b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:07:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/350dce8a289b8ac8ac03656986f9d1bc9fc66b4a8f58ab1485bd8fcf6596322b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:07:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/350dce8a289b8ac8ac03656986f9d1bc9fc66b4a8f58ab1485bd8fcf6596322b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:07:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/350dce8a289b8ac8ac03656986f9d1bc9fc66b4a8f58ab1485bd8fcf6596322b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:07:03 np0005629333 podman[385520]: 2026-02-25 13:07:03.453878948 +0000 UTC m=+0.165595841 container init 5615a0fef2de071fbb22163bab50f114759476bec16d420e56e1b583c9a36a93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 08:07:03 np0005629333 podman[385520]: 2026-02-25 13:07:03.46211213 +0000 UTC m=+0.173828963 container start 5615a0fef2de071fbb22163bab50f114759476bec16d420e56e1b583c9a36a93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_cartwright, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:07:03 np0005629333 podman[385520]: 2026-02-25 13:07:03.466814423 +0000 UTC m=+0.178531336 container attach 5615a0fef2de071fbb22163bab50f114759476bec16d420e56e1b583c9a36a93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]: {
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:    "0": [
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:        {
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "devices": [
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "/dev/loop3"
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            ],
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "lv_name": "ceph_lv0",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "lv_size": "21470642176",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "name": "ceph_lv0",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "tags": {
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.cluster_name": "ceph",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.crush_device_class": "",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.encrypted": "0",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.objectstore": "bluestore",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.osd_id": "0",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.type": "block",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.vdo": "0",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.with_tpm": "0"
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            },
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "type": "block",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "vg_name": "ceph_vg0"
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:        }
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:    ],
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:    "1": [
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:        {
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "devices": [
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "/dev/loop4"
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            ],
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "lv_name": "ceph_lv1",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "lv_size": "21470642176",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "name": "ceph_lv1",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "tags": {
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.cluster_name": "ceph",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.crush_device_class": "",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.encrypted": "0",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.objectstore": "bluestore",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.osd_id": "1",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.type": "block",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.vdo": "0",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.with_tpm": "0"
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            },
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "type": "block",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "vg_name": "ceph_vg1"
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:        }
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:    ],
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:    "2": [
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:        {
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "devices": [
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "/dev/loop5"
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            ],
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "lv_name": "ceph_lv2",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "lv_size": "21470642176",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "name": "ceph_lv2",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "tags": {
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.cluster_name": "ceph",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.crush_device_class": "",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.encrypted": "0",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.objectstore": "bluestore",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.osd_id": "2",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.type": "block",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.vdo": "0",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:                "ceph.with_tpm": "0"
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            },
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "type": "block",
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:            "vg_name": "ceph_vg2"
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:        }
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]:    ]
Feb 25 08:07:03 np0005629333 hopeful_cartwright[385536]: }
Feb 25 08:07:03 np0005629333 systemd[1]: libpod-5615a0fef2de071fbb22163bab50f114759476bec16d420e56e1b583c9a36a93.scope: Deactivated successfully.
Feb 25 08:07:03 np0005629333 podman[385520]: 2026-02-25 13:07:03.773355177 +0000 UTC m=+0.485072000 container died 5615a0fef2de071fbb22163bab50f114759476bec16d420e56e1b583c9a36a93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:07:03 np0005629333 systemd[1]: var-lib-containers-storage-overlay-350dce8a289b8ac8ac03656986f9d1bc9fc66b4a8f58ab1485bd8fcf6596322b-merged.mount: Deactivated successfully.
Feb 25 08:07:03 np0005629333 podman[385520]: 2026-02-25 13:07:03.820764854 +0000 UTC m=+0.532481697 container remove 5615a0fef2de071fbb22163bab50f114759476bec16d420e56e1b583c9a36a93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_cartwright, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:07:03 np0005629333 systemd[1]: libpod-conmon-5615a0fef2de071fbb22163bab50f114759476bec16d420e56e1b583c9a36a93.scope: Deactivated successfully.
Feb 25 08:07:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:07:04 np0005629333 podman[385620]: 2026-02-25 13:07:04.296163529 +0000 UTC m=+0.047948963 container create 704dc5a573ded53f6a76c1a60c9a0a3ee6ad6b5ed2138ca8da17b8c5dd09565a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 08:07:04 np0005629333 systemd[1]: Started libpod-conmon-704dc5a573ded53f6a76c1a60c9a0a3ee6ad6b5ed2138ca8da17b8c5dd09565a.scope.
Feb 25 08:07:04 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:07:04 np0005629333 podman[385620]: 2026-02-25 13:07:04.368783197 +0000 UTC m=+0.120568641 container init 704dc5a573ded53f6a76c1a60c9a0a3ee6ad6b5ed2138ca8da17b8c5dd09565a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_wing, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 08:07:04 np0005629333 podman[385620]: 2026-02-25 13:07:04.37420795 +0000 UTC m=+0.125993374 container start 704dc5a573ded53f6a76c1a60c9a0a3ee6ad6b5ed2138ca8da17b8c5dd09565a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_wing, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 08:07:04 np0005629333 podman[385620]: 2026-02-25 13:07:04.279076138 +0000 UTC m=+0.030861582 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:07:04 np0005629333 zen_wing[385636]: 167 167
Feb 25 08:07:04 np0005629333 podman[385620]: 2026-02-25 13:07:04.377904484 +0000 UTC m=+0.129689908 container attach 704dc5a573ded53f6a76c1a60c9a0a3ee6ad6b5ed2138ca8da17b8c5dd09565a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_wing, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 08:07:04 np0005629333 systemd[1]: libpod-704dc5a573ded53f6a76c1a60c9a0a3ee6ad6b5ed2138ca8da17b8c5dd09565a.scope: Deactivated successfully.
Feb 25 08:07:04 np0005629333 podman[385620]: 2026-02-25 13:07:04.378482021 +0000 UTC m=+0.130267445 container died 704dc5a573ded53f6a76c1a60c9a0a3ee6ad6b5ed2138ca8da17b8c5dd09565a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_wing, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 08:07:04 np0005629333 systemd[1]: var-lib-containers-storage-overlay-99b896afc94016e8366c4b303b2747dfe353788549f4392563de8f36bcd1ed8f-merged.mount: Deactivated successfully.
Feb 25 08:07:04 np0005629333 podman[385620]: 2026-02-25 13:07:04.417435309 +0000 UTC m=+0.169220733 container remove 704dc5a573ded53f6a76c1a60c9a0a3ee6ad6b5ed2138ca8da17b8c5dd09565a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_wing, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:07:04 np0005629333 systemd[1]: libpod-conmon-704dc5a573ded53f6a76c1a60c9a0a3ee6ad6b5ed2138ca8da17b8c5dd09565a.scope: Deactivated successfully.
Feb 25 08:07:04 np0005629333 podman[385661]: 2026-02-25 13:07:04.55646333 +0000 UTC m=+0.055149007 container create a9e904114e0c752ae4d2a1ef1579bdcb17a54b04e81053c16619177d3e5d12fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 08:07:04 np0005629333 systemd[1]: Started libpod-conmon-a9e904114e0c752ae4d2a1ef1579bdcb17a54b04e81053c16619177d3e5d12fa.scope.
Feb 25 08:07:04 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:07:04 np0005629333 podman[385661]: 2026-02-25 13:07:04.532312689 +0000 UTC m=+0.030998386 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:07:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e70fc4f1e1a85c75104e300d15b37466f6cc9ce61f33ebc662d447a1689d2fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:07:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e70fc4f1e1a85c75104e300d15b37466f6cc9ce61f33ebc662d447a1689d2fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:07:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e70fc4f1e1a85c75104e300d15b37466f6cc9ce61f33ebc662d447a1689d2fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:07:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e70fc4f1e1a85c75104e300d15b37466f6cc9ce61f33ebc662d447a1689d2fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:07:04 np0005629333 podman[385661]: 2026-02-25 13:07:04.659382842 +0000 UTC m=+0.158068609 container init a9e904114e0c752ae4d2a1ef1579bdcb17a54b04e81053c16619177d3e5d12fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hofstadter, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 08:07:04 np0005629333 podman[385661]: 2026-02-25 13:07:04.668090087 +0000 UTC m=+0.166775804 container start a9e904114e0c752ae4d2a1ef1579bdcb17a54b04e81053c16619177d3e5d12fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hofstadter, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 08:07:04 np0005629333 podman[385661]: 2026-02-25 13:07:04.679960532 +0000 UTC m=+0.178646299 container attach a9e904114e0c752ae4d2a1ef1579bdcb17a54b04e81053c16619177d3e5d12fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hofstadter, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 08:07:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2607: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:07:05 np0005629333 lvm[385756]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:07:05 np0005629333 lvm[385756]: VG ceph_vg1 finished
Feb 25 08:07:05 np0005629333 lvm[385755]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:07:05 np0005629333 lvm[385755]: VG ceph_vg0 finished
Feb 25 08:07:05 np0005629333 lvm[385758]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:07:05 np0005629333 lvm[385758]: VG ceph_vg2 finished
Feb 25 08:07:05 np0005629333 interesting_hofstadter[385677]: {}
Feb 25 08:07:05 np0005629333 systemd[1]: libpod-a9e904114e0c752ae4d2a1ef1579bdcb17a54b04e81053c16619177d3e5d12fa.scope: Deactivated successfully.
Feb 25 08:07:05 np0005629333 podman[385661]: 2026-02-25 13:07:05.544562243 +0000 UTC m=+1.043248000 container died a9e904114e0c752ae4d2a1ef1579bdcb17a54b04e81053c16619177d3e5d12fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 08:07:05 np0005629333 systemd[1]: libpod-a9e904114e0c752ae4d2a1ef1579bdcb17a54b04e81053c16619177d3e5d12fa.scope: Consumed 1.102s CPU time.
Feb 25 08:07:05 np0005629333 systemd[1]: var-lib-containers-storage-overlay-0e70fc4f1e1a85c75104e300d15b37466f6cc9ce61f33ebc662d447a1689d2fb-merged.mount: Deactivated successfully.
Feb 25 08:07:05 np0005629333 podman[385661]: 2026-02-25 13:07:05.781801963 +0000 UTC m=+1.280487680 container remove a9e904114e0c752ae4d2a1ef1579bdcb17a54b04e81053c16619177d3e5d12fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_hofstadter, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:07:05 np0005629333 nova_compute[244014]: 2026-02-25 13:07:05.788 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:07:05 np0005629333 systemd[1]: libpod-conmon-a9e904114e0c752ae4d2a1ef1579bdcb17a54b04e81053c16619177d3e5d12fa.scope: Deactivated successfully.
Feb 25 08:07:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:07:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:07:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:07:05 np0005629333 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 08:07:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:07:06 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:07:06 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:07:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2608: 305 pgs: 305 active+clean; 81 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.7 KiB/s rd, 511 B/s wr, 7 op/s
Feb 25 08:07:07 np0005629333 nova_compute[244014]: 2026-02-25 13:07:07.428 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:07:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:07:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2609: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 25 08:07:10 np0005629333 nova_compute[244014]: 2026-02-25 13:07:10.789 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:07:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2610: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 25 08:07:12 np0005629333 nova_compute[244014]: 2026-02-25 13:07:12.431 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:07:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2611: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 25 08:07:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:07:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Feb 25 08:07:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Feb 25 08:07:13 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Feb 25 08:07:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2613: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 25 08:07:15 np0005629333 nova_compute[244014]: 2026-02-25 13:07:15.791 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:07:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2614: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 921 B/s wr, 18 op/s
Feb 25 08:07:17 np0005629333 nova_compute[244014]: 2026-02-25 13:07:17.435 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:07:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:07:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2615: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:07:20 np0005629333 nova_compute[244014]: 2026-02-25 13:07:20.793 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:07:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2616: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:07:22 np0005629333 nova_compute[244014]: 2026-02-25 13:07:22.440 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:07:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2617: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:07:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:07:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2618: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:07:25 np0005629333 nova_compute[244014]: 2026-02-25 13:07:25.795 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:07:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2619: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:07:27 np0005629333 nova_compute[244014]: 2026-02-25 13:07:27.443 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:07:28 np0005629333 podman[385801]: 2026-02-25 13:07:28.769368753 +0000 UTC m=+0.102064609 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Feb 25 08:07:28 np0005629333 podman[385802]: 2026-02-25 13:07:28.770063832 +0000 UTC m=+0.105504606 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 08:07:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:07:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2620: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:07:30 np0005629333 nova_compute[244014]: 2026-02-25 13:07:30.796 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:07:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2621: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:07:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:07:31
Feb 25 08:07:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:07:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:07:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['volumes', 'vms', 'backups', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', '.rgw.root']
Feb 25 08:07:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:07:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:07:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:07:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:07:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:07:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:07:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:07:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:07:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:07:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:07:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:07:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:07:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:07:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:07:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:07:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:07:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:07:32 np0005629333 nova_compute[244014]: 2026-02-25 13:07:32.447 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:07:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2622: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:07:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:07:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2623: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:07:35 np0005629333 nova_compute[244014]: 2026-02-25 13:07:35.799 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:07:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2624: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:07:37 np0005629333 nova_compute[244014]: 2026-02-25 13:07:37.451 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:07:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:07:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2625: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:07:40 np0005629333 nova_compute[244014]: 2026-02-25 13:07:40.800 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:07:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2626: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:07:42 np0005629333 nova_compute[244014]: 2026-02-25 13:07:42.455 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:07:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:07:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:07:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:07:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:07:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.7142632731533132e-05 of space, bias 1.0, pg target 0.00514278981945994 quantized to 32 (current 32)
Feb 25 08:07:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:07:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:07:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:07:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:07:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:07:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006710760464711434 of space, bias 1.0, pg target 0.20132281394134302 quantized to 32 (current 32)
Feb 25 08:07:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:07:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4125800128093941e-06 of space, bias 4.0, pg target 0.001695096015371273 quantized to 16 (current 16)
Feb 25 08:07:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:07:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:07:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:07:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:07:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:07:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:07:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:07:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:07:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:07:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 08:07:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2627: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:07:43 np0005629333 nova_compute[244014]: 2026-02-25 13:07:43.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:07:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:07:43 np0005629333 nova_compute[244014]: 2026-02-25 13:07:43.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:07:43 np0005629333 nova_compute[244014]: 2026-02-25 13:07:43.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:07:43 np0005629333 nova_compute[244014]: 2026-02-25 13:07:43.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:07:43 np0005629333 nova_compute[244014]: 2026-02-25 13:07:43.911 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:07:43 np0005629333 nova_compute[244014]: 2026-02-25 13:07:43.911 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:07:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:07:44 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1448772498' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:07:44 np0005629333 nova_compute[244014]: 2026-02-25 13:07:44.479 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:07:44 np0005629333 nova_compute[244014]: 2026-02-25 13:07:44.647 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:07:44 np0005629333 nova_compute[244014]: 2026-02-25 13:07:44.649 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3528MB free_disk=59.987252892926335GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:07:44 np0005629333 nova_compute[244014]: 2026-02-25 13:07:44.650 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:07:44 np0005629333 nova_compute[244014]: 2026-02-25 13:07:44.650 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:07:44 np0005629333 nova_compute[244014]: 2026-02-25 13:07:44.749 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:07:44 np0005629333 nova_compute[244014]: 2026-02-25 13:07:44.750 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:07:44 np0005629333 nova_compute[244014]: 2026-02-25 13:07:44.775 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:07:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2628: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:07:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:07:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/501854444' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:07:45 np0005629333 nova_compute[244014]: 2026-02-25 13:07:45.332 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:07:45 np0005629333 nova_compute[244014]: 2026-02-25 13:07:45.337 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:07:45 np0005629333 nova_compute[244014]: 2026-02-25 13:07:45.358 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:07:45 np0005629333 nova_compute[244014]: 2026-02-25 13:07:45.361 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:07:45 np0005629333 nova_compute[244014]: 2026-02-25 13:07:45.361 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:07:45 np0005629333 nova_compute[244014]: 2026-02-25 13:07:45.802 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:07:46 np0005629333 nova_compute[244014]: 2026-02-25 13:07:46.364 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:07:46 np0005629333 nova_compute[244014]: 2026-02-25 13:07:46.364 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:07:46 np0005629333 nova_compute[244014]: 2026-02-25 13:07:46.365 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:07:46 np0005629333 nova_compute[244014]: 2026-02-25 13:07:46.393 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:07:46 np0005629333 nova_compute[244014]: 2026-02-25 13:07:46.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:07:46 np0005629333 nova_compute[244014]: 2026-02-25 13:07:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:07:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2629: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:07:47 np0005629333 nova_compute[244014]: 2026-02-25 13:07:47.459 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:07:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:07:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/396203184' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:07:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:07:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/396203184' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:07:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:07:48 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Feb 25 08:07:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:48.895067) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:07:48 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Feb 25 08:07:48 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024868895106, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 1546, "num_deletes": 256, "total_data_size": 2455540, "memory_usage": 2492128, "flush_reason": "Manual Compaction"}
Feb 25 08:07:48 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Feb 25 08:07:48 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024868910061, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 2420424, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54569, "largest_seqno": 56114, "table_properties": {"data_size": 2413208, "index_size": 4222, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14838, "raw_average_key_size": 19, "raw_value_size": 2398685, "raw_average_value_size": 3193, "num_data_blocks": 188, "num_entries": 751, "num_filter_entries": 751, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772024709, "oldest_key_time": 1772024709, "file_creation_time": 1772024868, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:07:48 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 15088 microseconds, and 6483 cpu microseconds.
Feb 25 08:07:48 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:07:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:48.910153) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 2420424 bytes OK
Feb 25 08:07:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:48.910177) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Feb 25 08:07:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:48.913828) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Feb 25 08:07:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:48.913851) EVENT_LOG_v1 {"time_micros": 1772024868913844, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:07:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:48.913874) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:07:48 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 2448782, prev total WAL file size 2448782, number of live WAL files 2.
Feb 25 08:07:48 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:07:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:48.914778) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323635' seq:72057594037927935, type:22 .. '6C6F676D0032353136' seq:0, type:0; will stop at (end)
Feb 25 08:07:48 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:07:48 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(2363KB)], [128(8134KB)]
Feb 25 08:07:48 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024868914824, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 10750002, "oldest_snapshot_seqno": -1}
Feb 25 08:07:49 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 7649 keys, 10632648 bytes, temperature: kUnknown
Feb 25 08:07:49 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024869044625, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 10632648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10582117, "index_size": 30322, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19141, "raw_key_size": 199282, "raw_average_key_size": 26, "raw_value_size": 10446360, "raw_average_value_size": 1365, "num_data_blocks": 1189, "num_entries": 7649, "num_filter_entries": 7649, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772024868, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:07:49 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:07:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:49.044991) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 10632648 bytes
Feb 25 08:07:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:49.048908) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 82.7 rd, 81.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 7.9 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(8.8) write-amplify(4.4) OK, records in: 8177, records dropped: 528 output_compression: NoCompression
Feb 25 08:07:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:49.048938) EVENT_LOG_v1 {"time_micros": 1772024869048924, "job": 78, "event": "compaction_finished", "compaction_time_micros": 129939, "compaction_time_cpu_micros": 20211, "output_level": 6, "num_output_files": 1, "total_output_size": 10632648, "num_input_records": 8177, "num_output_records": 7649, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 08:07:49 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:07:49 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024869049574, "job": 78, "event": "table_file_deletion", "file_number": 130}
Feb 25 08:07:49 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:07:49 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024869051114, "job": 78, "event": "table_file_deletion", "file_number": 128}
Feb 25 08:07:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:48.914721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:07:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:49.051209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:07:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:49.051216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:07:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:49.051219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:07:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:49.051222) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:07:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:07:49.051225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:07:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2630: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:07:49 np0005629333 nova_compute[244014]: 2026-02-25 13:07:49.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:07:50 np0005629333 nova_compute[244014]: 2026-02-25 13:07:50.804 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:07:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2631: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:07:51 np0005629333 nova_compute[244014]: 2026-02-25 13:07:51.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:07:52 np0005629333 nova_compute[244014]: 2026-02-25 13:07:52.462 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:07:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2632: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:07:53 np0005629333 nova_compute[244014]: 2026-02-25 13:07:53.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:07:53 np0005629333 nova_compute[244014]: 2026-02-25 13:07:53.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 08:07:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:07:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:07:55.045 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:07:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:07:55.046 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:07:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:07:55.046 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:07:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2633: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:07:55 np0005629333 nova_compute[244014]: 2026-02-25 13:07:55.807 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:07:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2634: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:07:57 np0005629333 nova_compute[244014]: 2026-02-25 13:07:57.466 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:07:57 np0005629333 systemd[1]: virtsecretd.service: Deactivated successfully.
Feb 25 08:07:57 np0005629333 systemd[1]: virtsecretd.service: Consumed 1.165s CPU time.
Feb 25 08:07:57 np0005629333 nova_compute[244014]: 2026-02-25 13:07:57.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:07:57 np0005629333 nova_compute[244014]: 2026-02-25 13:07:57.878 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:07:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:07:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2635: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:07:59 np0005629333 podman[385894]: 2026-02-25 13:07:59.764096317 +0000 UTC m=+0.100764542 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Feb 25 08:07:59 np0005629333 podman[385895]: 2026-02-25 13:07:59.769812168 +0000 UTC m=+0.106655638 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 08:08:00 np0005629333 nova_compute[244014]: 2026-02-25 13:08:00.808 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:08:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2636: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:08:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:08:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:08:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:08:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:08:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:08:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:08:02 np0005629333 nova_compute[244014]: 2026-02-25 13:08:02.469 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:08:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2637: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:08:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:08:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2638: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:08:05 np0005629333 nova_compute[244014]: 2026-02-25 13:08:05.810 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:08:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:08:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:08:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:08:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:08:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:08:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:08:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:08:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:08:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:08:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:08:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:08:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:08:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2639: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:08:07 np0005629333 podman[386081]: 2026-02-25 13:08:07.139679539 +0000 UTC m=+0.036938503 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:08:07 np0005629333 podman[386081]: 2026-02-25 13:08:07.237640521 +0000 UTC m=+0.134899395 container create b5096d6f5ff39c9293484f0572d915e2325692a311eb1a0500bb0fbee23a5d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:08:07 np0005629333 systemd[1]: Started libpod-conmon-b5096d6f5ff39c9293484f0572d915e2325692a311eb1a0500bb0fbee23a5d61.scope.
Feb 25 08:08:07 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:08:07 np0005629333 podman[386081]: 2026-02-25 13:08:07.430736135 +0000 UTC m=+0.327995109 container init b5096d6f5ff39c9293484f0572d915e2325692a311eb1a0500bb0fbee23a5d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_galileo, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 08:08:07 np0005629333 podman[386081]: 2026-02-25 13:08:07.43942151 +0000 UTC m=+0.336680434 container start b5096d6f5ff39c9293484f0572d915e2325692a311eb1a0500bb0fbee23a5d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_galileo, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 08:08:07 np0005629333 loving_galileo[386098]: 167 167
Feb 25 08:08:07 np0005629333 systemd[1]: libpod-b5096d6f5ff39c9293484f0572d915e2325692a311eb1a0500bb0fbee23a5d61.scope: Deactivated successfully.
Feb 25 08:08:07 np0005629333 nova_compute[244014]: 2026-02-25 13:08:07.472 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:08:07 np0005629333 podman[386081]: 2026-02-25 13:08:07.487224808 +0000 UTC m=+0.384483772 container attach b5096d6f5ff39c9293484f0572d915e2325692a311eb1a0500bb0fbee23a5d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 08:08:07 np0005629333 podman[386081]: 2026-02-25 13:08:07.487768163 +0000 UTC m=+0.385027097 container died b5096d6f5ff39c9293484f0572d915e2325692a311eb1a0500bb0fbee23a5d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_galileo, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 08:08:07 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:08:07 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:08:07 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:08:07 np0005629333 systemd[1]: var-lib-containers-storage-overlay-cd8e24f435283ecb0a620845d1f31bd85aff4108bf39629a86d2bad1a88fd88e-merged.mount: Deactivated successfully.
Feb 25 08:08:07 np0005629333 podman[386081]: 2026-02-25 13:08:07.78338934 +0000 UTC m=+0.680648254 container remove b5096d6f5ff39c9293484f0572d915e2325692a311eb1a0500bb0fbee23a5d61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:08:07 np0005629333 systemd[1]: libpod-conmon-b5096d6f5ff39c9293484f0572d915e2325692a311eb1a0500bb0fbee23a5d61.scope: Deactivated successfully.
Feb 25 08:08:07 np0005629333 podman[386125]: 2026-02-25 13:08:07.989389659 +0000 UTC m=+0.096894044 container create d0777d6d6347e927ed9b478d16a89d600cc984fa58b6b3d8ab67210b509c85e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_diffie, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 08:08:08 np0005629333 podman[386125]: 2026-02-25 13:08:07.915254298 +0000 UTC m=+0.022758653 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:08:08 np0005629333 systemd[1]: Started libpod-conmon-d0777d6d6347e927ed9b478d16a89d600cc984fa58b6b3d8ab67210b509c85e6.scope.
Feb 25 08:08:08 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:08:08 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a38be95699b92267fb0c206590629ffdee86813cf7cffd697a3c71830f0d7c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:08:08 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a38be95699b92267fb0c206590629ffdee86813cf7cffd697a3c71830f0d7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:08:08 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a38be95699b92267fb0c206590629ffdee86813cf7cffd697a3c71830f0d7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:08:08 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a38be95699b92267fb0c206590629ffdee86813cf7cffd697a3c71830f0d7c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:08:08 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a38be95699b92267fb0c206590629ffdee86813cf7cffd697a3c71830f0d7c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:08:08 np0005629333 podman[386125]: 2026-02-25 13:08:08.167174132 +0000 UTC m=+0.274678537 container init d0777d6d6347e927ed9b478d16a89d600cc984fa58b6b3d8ab67210b509c85e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_diffie, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 08:08:08 np0005629333 podman[386125]: 2026-02-25 13:08:08.17488548 +0000 UTC m=+0.282389855 container start d0777d6d6347e927ed9b478d16a89d600cc984fa58b6b3d8ab67210b509c85e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_diffie, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 08:08:08 np0005629333 podman[386125]: 2026-02-25 13:08:08.220530927 +0000 UTC m=+0.328035292 container attach d0777d6d6347e927ed9b478d16a89d600cc984fa58b6b3d8ab67210b509c85e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:08:08 np0005629333 trusting_diffie[386141]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:08:08 np0005629333 trusting_diffie[386141]: --> All data devices are unavailable
Feb 25 08:08:08 np0005629333 systemd[1]: libpod-d0777d6d6347e927ed9b478d16a89d600cc984fa58b6b3d8ab67210b509c85e6.scope: Deactivated successfully.
Feb 25 08:08:08 np0005629333 podman[386125]: 2026-02-25 13:08:08.681098784 +0000 UTC m=+0.788603179 container died d0777d6d6347e927ed9b478d16a89d600cc984fa58b6b3d8ab67210b509c85e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 08:08:08 np0005629333 systemd[1]: var-lib-containers-storage-overlay-36a38be95699b92267fb0c206590629ffdee86813cf7cffd697a3c71830f0d7c-merged.mount: Deactivated successfully.
Feb 25 08:08:08 np0005629333 nova_compute[244014]: 2026-02-25 13:08:08.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:08:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:08:08 np0005629333 podman[386125]: 2026-02-25 13:08:08.992036762 +0000 UTC m=+1.099541127 container remove d0777d6d6347e927ed9b478d16a89d600cc984fa58b6b3d8ab67210b509c85e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_diffie, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 08:08:08 np0005629333 systemd[1]: libpod-conmon-d0777d6d6347e927ed9b478d16a89d600cc984fa58b6b3d8ab67210b509c85e6.scope: Deactivated successfully.
Feb 25 08:08:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2640: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:08:09 np0005629333 podman[386239]: 2026-02-25 13:08:09.485559549 +0000 UTC m=+0.084674068 container create 24831bb2e421fe1ffc7e61d694f72693d194d002e87f6eff81a289edb16d330d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_goldstine, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:08:09 np0005629333 podman[386239]: 2026-02-25 13:08:09.426622087 +0000 UTC m=+0.025736666 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:08:09 np0005629333 systemd[1]: Started libpod-conmon-24831bb2e421fe1ffc7e61d694f72693d194d002e87f6eff81a289edb16d330d.scope.
Feb 25 08:08:09 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:08:09 np0005629333 podman[386239]: 2026-02-25 13:08:09.647494046 +0000 UTC m=+0.246608655 container init 24831bb2e421fe1ffc7e61d694f72693d194d002e87f6eff81a289edb16d330d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 08:08:09 np0005629333 podman[386239]: 2026-02-25 13:08:09.656322035 +0000 UTC m=+0.255436564 container start 24831bb2e421fe1ffc7e61d694f72693d194d002e87f6eff81a289edb16d330d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 08:08:09 np0005629333 eager_goldstine[386257]: 167 167
Feb 25 08:08:09 np0005629333 systemd[1]: libpod-24831bb2e421fe1ffc7e61d694f72693d194d002e87f6eff81a289edb16d330d.scope: Deactivated successfully.
Feb 25 08:08:09 np0005629333 podman[386239]: 2026-02-25 13:08:09.69375404 +0000 UTC m=+0.292868639 container attach 24831bb2e421fe1ffc7e61d694f72693d194d002e87f6eff81a289edb16d330d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:08:09 np0005629333 podman[386239]: 2026-02-25 13:08:09.694252264 +0000 UTC m=+0.293366793 container died 24831bb2e421fe1ffc7e61d694f72693d194d002e87f6eff81a289edb16d330d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:08:09 np0005629333 systemd[1]: var-lib-containers-storage-overlay-529385d85c231b1acd84d9e4d2c770c61609a83aef408ab17fc864ce67349b29-merged.mount: Deactivated successfully.
Feb 25 08:08:09 np0005629333 podman[386239]: 2026-02-25 13:08:09.939991324 +0000 UTC m=+0.539105843 container remove 24831bb2e421fe1ffc7e61d694f72693d194d002e87f6eff81a289edb16d330d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_goldstine, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:08:09 np0005629333 systemd[1]: libpod-conmon-24831bb2e421fe1ffc7e61d694f72693d194d002e87f6eff81a289edb16d330d.scope: Deactivated successfully.
Feb 25 08:08:10 np0005629333 podman[386283]: 2026-02-25 13:08:10.160220234 +0000 UTC m=+0.085325437 container create a5968c036251b2a5ab9a7775d707da4c4ce99db3e95740bb01ff6751e9228629 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_noyce, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:08:10 np0005629333 podman[386283]: 2026-02-25 13:08:10.107311132 +0000 UTC m=+0.032416385 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:08:10 np0005629333 systemd[1]: Started libpod-conmon-a5968c036251b2a5ab9a7775d707da4c4ce99db3e95740bb01ff6751e9228629.scope.
Feb 25 08:08:10 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:08:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bda7a6d527ef1004c759cffe95cdf2d8c39451206c0ab37c67cc9696010ea838/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:08:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bda7a6d527ef1004c759cffe95cdf2d8c39451206c0ab37c67cc9696010ea838/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:08:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bda7a6d527ef1004c759cffe95cdf2d8c39451206c0ab37c67cc9696010ea838/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:08:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bda7a6d527ef1004c759cffe95cdf2d8c39451206c0ab37c67cc9696010ea838/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:08:10 np0005629333 podman[386283]: 2026-02-25 13:08:10.692596856 +0000 UTC m=+0.617702119 container init a5968c036251b2a5ab9a7775d707da4c4ce99db3e95740bb01ff6751e9228629 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_noyce, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 08:08:10 np0005629333 podman[386283]: 2026-02-25 13:08:10.701170148 +0000 UTC m=+0.626275321 container start a5968c036251b2a5ab9a7775d707da4c4ce99db3e95740bb01ff6751e9228629 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_noyce, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 08:08:10 np0005629333 podman[386283]: 2026-02-25 13:08:10.730883466 +0000 UTC m=+0.655988729 container attach a5968c036251b2a5ab9a7775d707da4c4ce99db3e95740bb01ff6751e9228629 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_noyce, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:08:10 np0005629333 nova_compute[244014]: 2026-02-25 13:08:10.811 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:08:10 np0005629333 clever_noyce[386299]: {
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:    "0": [
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:        {
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "devices": [
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "/dev/loop3"
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            ],
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "lv_name": "ceph_lv0",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "lv_size": "21470642176",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "name": "ceph_lv0",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "tags": {
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.cluster_name": "ceph",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.crush_device_class": "",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.encrypted": "0",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.objectstore": "bluestore",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.osd_id": "0",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.type": "block",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.vdo": "0",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.with_tpm": "0"
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            },
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "type": "block",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "vg_name": "ceph_vg0"
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:        }
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:    ],
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:    "1": [
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:        {
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "devices": [
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "/dev/loop4"
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            ],
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "lv_name": "ceph_lv1",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "lv_size": "21470642176",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "name": "ceph_lv1",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "tags": {
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.cluster_name": "ceph",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.crush_device_class": "",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.encrypted": "0",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.objectstore": "bluestore",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.osd_id": "1",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.type": "block",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.vdo": "0",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.with_tpm": "0"
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            },
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "type": "block",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "vg_name": "ceph_vg1"
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:        }
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:    ],
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:    "2": [
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:        {
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "devices": [
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "/dev/loop5"
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            ],
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "lv_name": "ceph_lv2",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "lv_size": "21470642176",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "name": "ceph_lv2",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "tags": {
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.cluster_name": "ceph",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.crush_device_class": "",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.encrypted": "0",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.objectstore": "bluestore",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.osd_id": "2",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.type": "block",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.vdo": "0",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:                "ceph.with_tpm": "0"
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            },
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "type": "block",
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:            "vg_name": "ceph_vg2"
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:        }
Feb 25 08:08:10 np0005629333 clever_noyce[386299]:    ]
Feb 25 08:08:10 np0005629333 clever_noyce[386299]: }
Feb 25 08:08:11 np0005629333 podman[386283]: 2026-02-25 13:08:11.033192529 +0000 UTC m=+0.958297742 container died a5968c036251b2a5ab9a7775d707da4c4ce99db3e95740bb01ff6751e9228629 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_noyce, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:08:11 np0005629333 systemd[1]: libpod-a5968c036251b2a5ab9a7775d707da4c4ce99db3e95740bb01ff6751e9228629.scope: Deactivated successfully.
Feb 25 08:08:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2641: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:08:11 np0005629333 systemd[1]: var-lib-containers-storage-overlay-bda7a6d527ef1004c759cffe95cdf2d8c39451206c0ab37c67cc9696010ea838-merged.mount: Deactivated successfully.
Feb 25 08:08:11 np0005629333 podman[386283]: 2026-02-25 13:08:11.23463771 +0000 UTC m=+1.159742913 container remove a5968c036251b2a5ab9a7775d707da4c4ce99db3e95740bb01ff6751e9228629 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 08:08:11 np0005629333 systemd[1]: libpod-conmon-a5968c036251b2a5ab9a7775d707da4c4ce99db3e95740bb01ff6751e9228629.scope: Deactivated successfully.
Feb 25 08:08:11 np0005629333 podman[386384]: 2026-02-25 13:08:11.79278544 +0000 UTC m=+0.107488642 container create 9d43d456f5ab52fd60f7d255a7a60dda50899e7b12a346a6d58d6b9d1bfdf46c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:08:11 np0005629333 podman[386384]: 2026-02-25 13:08:11.720276765 +0000 UTC m=+0.034980037 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:08:11 np0005629333 systemd[1]: Started libpod-conmon-9d43d456f5ab52fd60f7d255a7a60dda50899e7b12a346a6d58d6b9d1bfdf46c.scope.
Feb 25 08:08:11 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:08:11 np0005629333 podman[386384]: 2026-02-25 13:08:11.94316386 +0000 UTC m=+0.257867072 container init 9d43d456f5ab52fd60f7d255a7a60dda50899e7b12a346a6d58d6b9d1bfdf46c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 08:08:11 np0005629333 podman[386384]: 2026-02-25 13:08:11.949764336 +0000 UTC m=+0.264467548 container start 9d43d456f5ab52fd60f7d255a7a60dda50899e7b12a346a6d58d6b9d1bfdf46c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mendeleev, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 08:08:11 np0005629333 stupefied_mendeleev[386400]: 167 167
Feb 25 08:08:11 np0005629333 systemd[1]: libpod-9d43d456f5ab52fd60f7d255a7a60dda50899e7b12a346a6d58d6b9d1bfdf46c.scope: Deactivated successfully.
Feb 25 08:08:11 np0005629333 podman[386384]: 2026-02-25 13:08:11.970330056 +0000 UTC m=+0.285033328 container attach 9d43d456f5ab52fd60f7d255a7a60dda50899e7b12a346a6d58d6b9d1bfdf46c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 08:08:11 np0005629333 podman[386384]: 2026-02-25 13:08:11.971018746 +0000 UTC m=+0.285721958 container died 9d43d456f5ab52fd60f7d255a7a60dda50899e7b12a346a6d58d6b9d1bfdf46c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mendeleev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:08:12 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e504318cdf1566d6ae4dc58cf107572b6170b28ef134a4e3d73270ff1ebdc910-merged.mount: Deactivated successfully.
Feb 25 08:08:12 np0005629333 podman[386384]: 2026-02-25 13:08:12.017373823 +0000 UTC m=+0.332077035 container remove 9d43d456f5ab52fd60f7d255a7a60dda50899e7b12a346a6d58d6b9d1bfdf46c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mendeleev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 08:08:12 np0005629333 systemd[1]: libpod-conmon-9d43d456f5ab52fd60f7d255a7a60dda50899e7b12a346a6d58d6b9d1bfdf46c.scope: Deactivated successfully.
Feb 25 08:08:12 np0005629333 podman[386425]: 2026-02-25 13:08:12.213751271 +0000 UTC m=+0.051085532 container create 828f4563a1ee25486b16db4772bd36041623c4a725682c60a803c68d1bab377b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_solomon, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:08:12 np0005629333 systemd[1]: Started libpod-conmon-828f4563a1ee25486b16db4772bd36041623c4a725682c60a803c68d1bab377b.scope.
Feb 25 08:08:12 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:08:12 np0005629333 podman[386425]: 2026-02-25 13:08:12.193960963 +0000 UTC m=+0.031295264 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:08:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f489aef18e8d21422bc5703790bfaf6b268fa6cf873f2ef3a2fcc11a35e9c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:08:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f489aef18e8d21422bc5703790bfaf6b268fa6cf873f2ef3a2fcc11a35e9c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:08:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f489aef18e8d21422bc5703790bfaf6b268fa6cf873f2ef3a2fcc11a35e9c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:08:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f489aef18e8d21422bc5703790bfaf6b268fa6cf873f2ef3a2fcc11a35e9c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:08:12 np0005629333 podman[386425]: 2026-02-25 13:08:12.31161285 +0000 UTC m=+0.148947131 container init 828f4563a1ee25486b16db4772bd36041623c4a725682c60a803c68d1bab377b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_solomon, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 08:08:12 np0005629333 podman[386425]: 2026-02-25 13:08:12.324417841 +0000 UTC m=+0.161752102 container start 828f4563a1ee25486b16db4772bd36041623c4a725682c60a803c68d1bab377b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_solomon, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 08:08:12 np0005629333 podman[386425]: 2026-02-25 13:08:12.328018393 +0000 UTC m=+0.165352644 container attach 828f4563a1ee25486b16db4772bd36041623c4a725682c60a803c68d1bab377b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_solomon, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:08:12 np0005629333 nova_compute[244014]: 2026-02-25 13:08:12.477 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:08:12 np0005629333 lvm[386520]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:08:12 np0005629333 lvm[386519]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:08:12 np0005629333 lvm[386520]: VG ceph_vg1 finished
Feb 25 08:08:12 np0005629333 lvm[386519]: VG ceph_vg0 finished
Feb 25 08:08:13 np0005629333 lvm[386522]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:08:13 np0005629333 lvm[386522]: VG ceph_vg2 finished
Feb 25 08:08:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2642: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:08:13 np0005629333 condescending_solomon[386441]: {}
Feb 25 08:08:13 np0005629333 systemd[1]: libpod-828f4563a1ee25486b16db4772bd36041623c4a725682c60a803c68d1bab377b.scope: Deactivated successfully.
Feb 25 08:08:13 np0005629333 systemd[1]: libpod-828f4563a1ee25486b16db4772bd36041623c4a725682c60a803c68d1bab377b.scope: Consumed 1.159s CPU time.
Feb 25 08:08:13 np0005629333 podman[386425]: 2026-02-25 13:08:13.154242961 +0000 UTC m=+0.991577222 container died 828f4563a1ee25486b16db4772bd36041623c4a725682c60a803c68d1bab377b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_solomon, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 08:08:13 np0005629333 systemd[1]: var-lib-containers-storage-overlay-b5f489aef18e8d21422bc5703790bfaf6b268fa6cf873f2ef3a2fcc11a35e9c2-merged.mount: Deactivated successfully.
Feb 25 08:08:13 np0005629333 podman[386425]: 2026-02-25 13:08:13.510062125 +0000 UTC m=+1.347396406 container remove 828f4563a1ee25486b16db4772bd36041623c4a725682c60a803c68d1bab377b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_solomon, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 08:08:13 np0005629333 systemd[1]: libpod-conmon-828f4563a1ee25486b16db4772bd36041623c4a725682c60a803c68d1bab377b.scope: Deactivated successfully.
Feb 25 08:08:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:08:13 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:08:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:08:13 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:08:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:08:14 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:08:14 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:08:14 np0005629333 nova_compute[244014]: 2026-02-25 13:08:14.738 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:08:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2643: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:08:15 np0005629333 nova_compute[244014]: 2026-02-25 13:08:15.813 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:08:15 np0005629333 nova_compute[244014]: 2026-02-25 13:08:15.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:08:15 np0005629333 nova_compute[244014]: 2026-02-25 13:08:15.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 08:08:15 np0005629333 nova_compute[244014]: 2026-02-25 13:08:15.890 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 08:08:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2644: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:08:17 np0005629333 nova_compute[244014]: 2026-02-25 13:08:17.482 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:08:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:08:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2645: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:08:20 np0005629333 nova_compute[244014]: 2026-02-25 13:08:20.815 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:08:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2646: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:08:21 np0005629333 nova_compute[244014]: 2026-02-25 13:08:21.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:08:22 np0005629333 nova_compute[244014]: 2026-02-25 13:08:22.485 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:08:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2647: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:08:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:08:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2648: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:08:25 np0005629333 nova_compute[244014]: 2026-02-25 13:08:25.818 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.096739) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024907096783, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 557, "num_deletes": 251, "total_data_size": 602471, "memory_usage": 613688, "flush_reason": "Manual Compaction"}
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Feb 25 08:08:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2649: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024907122076, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 597222, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56115, "largest_seqno": 56671, "table_properties": {"data_size": 594100, "index_size": 1093, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7188, "raw_average_key_size": 19, "raw_value_size": 587906, "raw_average_value_size": 1571, "num_data_blocks": 48, "num_entries": 374, "num_filter_entries": 374, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772024869, "oldest_key_time": 1772024869, "file_creation_time": 1772024907, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 25425 microseconds, and 3111 cpu microseconds.
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.122153) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 597222 bytes OK
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.122195) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.131439) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.131480) EVENT_LOG_v1 {"time_micros": 1772024907131469, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.131517) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 599353, prev total WAL file size 599353, number of live WAL files 2.
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.132304) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(583KB)], [131(10MB)]
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024907132354, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 11229870, "oldest_snapshot_seqno": -1}
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 7510 keys, 9512100 bytes, temperature: kUnknown
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024907202519, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 9512100, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9463504, "index_size": 28707, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18821, "raw_key_size": 197067, "raw_average_key_size": 26, "raw_value_size": 9331131, "raw_average_value_size": 1242, "num_data_blocks": 1114, "num_entries": 7510, "num_filter_entries": 7510, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772024907, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.203340) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 9512100 bytes
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.213858) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.8 rd, 135.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 10.1 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(34.7) write-amplify(15.9) OK, records in: 8023, records dropped: 513 output_compression: NoCompression
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.213898) EVENT_LOG_v1 {"time_micros": 1772024907213881, "job": 80, "event": "compaction_finished", "compaction_time_micros": 70291, "compaction_time_cpu_micros": 21564, "output_level": 6, "num_output_files": 1, "total_output_size": 9512100, "num_input_records": 8023, "num_output_records": 7510, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024907214468, "job": 80, "event": "table_file_deletion", "file_number": 133}
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772024907216243, "job": 80, "event": "table_file_deletion", "file_number": 131}
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.132206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.216343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.216354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.216357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.216361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:08:27 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:08:27.216364) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:08:27 np0005629333 nova_compute[244014]: 2026-02-25 13:08:27.488 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:08:28 np0005629333 nova_compute[244014]: 2026-02-25 13:08:28.893 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:08:28 np0005629333 nova_compute[244014]: 2026-02-25 13:08:28.894 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 08:08:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:08:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2650: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:08:30 np0005629333 podman[386564]: 2026-02-25 13:08:30.783261034 +0000 UTC m=+0.047417465 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb 25 08:08:30 np0005629333 podman[386565]: 2026-02-25 13:08:30.799256395 +0000 UTC m=+0.062219992 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:08:30 np0005629333 nova_compute[244014]: 2026-02-25 13:08:30.819 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:08:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:08:31
Feb 25 08:08:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:08:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:08:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'backups', 'images', 'default.rgw.control', '.rgw.root', 'vms', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr']
Feb 25 08:08:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:08:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2651: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:08:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:08:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:08:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:08:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:08:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:08:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:08:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:08:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:08:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:08:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:08:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:08:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:08:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:08:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:08:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:08:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:08:32 np0005629333 nova_compute[244014]: 2026-02-25 13:08:32.492 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:08:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2652: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:08:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:08:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2653: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:08:35 np0005629333 nova_compute[244014]: 2026-02-25 13:08:35.821 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:08:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2654: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:08:37 np0005629333 nova_compute[244014]: 2026-02-25 13:08:37.496 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:08:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:08:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2655: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:08:40 np0005629333 nova_compute[244014]: 2026-02-25 13:08:40.824 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:08:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2656: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:08:42 np0005629333 nova_compute[244014]: 2026-02-25 13:08:42.499 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:08:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:08:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:08:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:08:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:08:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.7142632731533132e-05 of space, bias 1.0, pg target 0.00514278981945994 quantized to 32 (current 32)
Feb 25 08:08:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:08:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:08:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:08:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:08:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:08:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006710760464711434 of space, bias 1.0, pg target 0.20132281394134302 quantized to 32 (current 32)
Feb 25 08:08:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:08:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4125800128093941e-06 of space, bias 4.0, pg target 0.001695096015371273 quantized to 16 (current 16)
Feb 25 08:08:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:08:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:08:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:08:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:08:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:08:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:08:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:08:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:08:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:08:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 08:08:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2657: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:08:43 np0005629333 nova_compute[244014]: 2026-02-25 13:08:43.757 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:08:43 np0005629333 nova_compute[244014]: 2026-02-25 13:08:43.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:08:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:08:43 np0005629333 nova_compute[244014]: 2026-02-25 13:08:43.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:08:43 np0005629333 nova_compute[244014]: 2026-02-25 13:08:43.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:08:43 np0005629333 nova_compute[244014]: 2026-02-25 13:08:43.906 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:08:43 np0005629333 nova_compute[244014]: 2026-02-25 13:08:43.906 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:08:43 np0005629333 nova_compute[244014]: 2026-02-25 13:08:43.907 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:08:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:08:44 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2376885638' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:08:44 np0005629333 nova_compute[244014]: 2026-02-25 13:08:44.501 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:08:44 np0005629333 nova_compute[244014]: 2026-02-25 13:08:44.650 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:08:44 np0005629333 nova_compute[244014]: 2026-02-25 13:08:44.651 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3566MB free_disk=59.987252892926335GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:08:44 np0005629333 nova_compute[244014]: 2026-02-25 13:08:44.651 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:08:44 np0005629333 nova_compute[244014]: 2026-02-25 13:08:44.651 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:08:44 np0005629333 nova_compute[244014]: 2026-02-25 13:08:44.728 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:08:44 np0005629333 nova_compute[244014]: 2026-02-25 13:08:44.729 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:08:44 np0005629333 nova_compute[244014]: 2026-02-25 13:08:44.747 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:08:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2658: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:08:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:08:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4005707079' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:08:45 np0005629333 nova_compute[244014]: 2026-02-25 13:08:45.345 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:08:45 np0005629333 nova_compute[244014]: 2026-02-25 13:08:45.354 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:08:45 np0005629333 nova_compute[244014]: 2026-02-25 13:08:45.381 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:08:45 np0005629333 nova_compute[244014]: 2026-02-25 13:08:45.384 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:08:45 np0005629333 nova_compute[244014]: 2026-02-25 13:08:45.384 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:08:45 np0005629333 nova_compute[244014]: 2026-02-25 13:08:45.826 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:08:46 np0005629333 nova_compute[244014]: 2026-02-25 13:08:46.385 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:08:46 np0005629333 nova_compute[244014]: 2026-02-25 13:08:46.386 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:08:46 np0005629333 nova_compute[244014]: 2026-02-25 13:08:46.386 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:08:46 np0005629333 nova_compute[244014]: 2026-02-25 13:08:46.401 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:08:46 np0005629333 nova_compute[244014]: 2026-02-25 13:08:46.887 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:08:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2659: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:08:47 np0005629333 nova_compute[244014]: 2026-02-25 13:08:47.503 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:08:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:08:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2664787863' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:08:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:08:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2664787863' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:08:47 np0005629333 nova_compute[244014]: 2026-02-25 13:08:47.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:08:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:08:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2660: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:08:50 np0005629333 nova_compute[244014]: 2026-02-25 13:08:50.828 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:08:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2661: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:08:51 np0005629333 nova_compute[244014]: 2026-02-25 13:08:51.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:08:52 np0005629333 nova_compute[244014]: 2026-02-25 13:08:52.508 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:08:52 np0005629333 nova_compute[244014]: 2026-02-25 13:08:52.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:08:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2662: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:08:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:08:54 np0005629333 nova_compute[244014]: 2026-02-25 13:08:54.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:08:54 np0005629333 nova_compute[244014]: 2026-02-25 13:08:54.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 08:08:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:08:55.047 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:08:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:08:55.047 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:08:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:08:55.048 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:08:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2663: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:08:55 np0005629333 nova_compute[244014]: 2026-02-25 13:08:55.832 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:08:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2664: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:08:57 np0005629333 nova_compute[244014]: 2026-02-25 13:08:57.511 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:08:58 np0005629333 nova_compute[244014]: 2026-02-25 13:08:58.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:08:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:08:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2665: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 85 B/s wr, 5 op/s
Feb 25 08:08:59 np0005629333 nova_compute[244014]: 2026-02-25 13:08:59.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:09:00 np0005629333 nova_compute[244014]: 2026-02-25 13:09:00.833 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:09:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2666: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 85 B/s wr, 5 op/s
Feb 25 08:09:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:09:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:09:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:09:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:09:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:09:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:09:01 np0005629333 podman[386655]: 2026-02-25 13:09:01.736044642 +0000 UTC m=+0.067413112 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 08:09:01 np0005629333 podman[386656]: 2026-02-25 13:09:01.778474508 +0000 UTC m=+0.107432970 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Feb 25 08:09:02 np0005629333 nova_compute[244014]: 2026-02-25 13:09:02.513 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:09:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2667: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Feb 25 08:09:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:09:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2668: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Feb 25 08:09:05 np0005629333 nova_compute[244014]: 2026-02-25 13:09:05.835 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:09:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Feb 25 08:09:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Feb 25 08:09:06 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Feb 25 08:09:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2670: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 204 B/s wr, 9 op/s
Feb 25 08:09:07 np0005629333 nova_compute[244014]: 2026-02-25 13:09:07.517 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:09:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:09:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2671: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 25 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 470 KiB/s rd, 1023 B/s wr, 21 op/s
Feb 25 08:09:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Feb 25 08:09:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Feb 25 08:09:10 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Feb 25 08:09:10 np0005629333 nova_compute[244014]: 2026-02-25 13:09:10.837 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:09:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2673: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 25 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Feb 25 08:09:12 np0005629333 nova_compute[244014]: 2026-02-25 13:09:12.520 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:09:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 08:09:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.0 total, 600.0 interval#012Cumulative writes: 12K writes, 56K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s#012Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1405 writes, 6581 keys, 1405 commit groups, 1.0 writes per commit group, ingest: 9.04 MB, 0.02 MB/s#012Interval WAL: 1405 writes, 1405 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     57.4      1.19              0.19        40    0.030       0      0       0.0       0.0#012  L6      1/0    9.07 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.9    108.2     91.9      3.60              0.97        39    0.092    243K    21K       0.0       0.0#012 Sum      1/0    9.07 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.9     81.4     83.3      4.79              1.15        79    0.061    243K    21K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.0    110.3    112.5      0.60              0.19        12    0.050     47K   3109       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0    108.2     91.9      3.60              0.97        39    0.092    243K    21K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     57.6      1.18              0.19        39    0.030       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4800.0 total, 600.0 interval#012Flush(GB): cumulative 0.066, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.39 GB write, 0.08 MB/s write, 0.38 GB read, 0.08 MB/s read, 4.8 seconds#012Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561a1af858d0#2 capacity: 304.00 MB usage: 44.72 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000406 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2810,42.94 MB,14.1264%) FilterBlock(80,680.55 KB,0.218617%) IndexBlock(80,1.11 MB,0.366045%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb 25 08:09:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2674: 305 pgs: 305 active+clean; 462 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 62 op/s
Feb 25 08:09:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:09:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Feb 25 08:09:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Feb 25 08:09:13 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Feb 25 08:09:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:09:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:09:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:09:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:09:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:09:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:09:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:09:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:09:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:09:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:09:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:09:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:09:14 np0005629333 podman[386844]: 2026-02-25 13:09:14.871512065 +0000 UTC m=+0.036046497 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:09:14 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:09:14 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:09:14 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:09:14 np0005629333 podman[386844]: 2026-02-25 13:09:14.97097173 +0000 UTC m=+0.135506102 container create 10dd8bdc40b78cc2708c27da0abe462bc8d83b29e79a739dc63dc30cebb201f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 08:09:15 np0005629333 systemd[1]: Started libpod-conmon-10dd8bdc40b78cc2708c27da0abe462bc8d83b29e79a739dc63dc30cebb201f9.scope.
Feb 25 08:09:15 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:09:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2676: 305 pgs: 305 active+clean; 462 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 3.4 KiB/s wr, 60 op/s
Feb 25 08:09:15 np0005629333 podman[386844]: 2026-02-25 13:09:15.15114102 +0000 UTC m=+0.315675462 container init 10dd8bdc40b78cc2708c27da0abe462bc8d83b29e79a739dc63dc30cebb201f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_margulis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:09:15 np0005629333 podman[386844]: 2026-02-25 13:09:15.159585298 +0000 UTC m=+0.324119670 container start 10dd8bdc40b78cc2708c27da0abe462bc8d83b29e79a739dc63dc30cebb201f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:09:15 np0005629333 stupefied_margulis[386860]: 167 167
Feb 25 08:09:15 np0005629333 systemd[1]: libpod-10dd8bdc40b78cc2708c27da0abe462bc8d83b29e79a739dc63dc30cebb201f9.scope: Deactivated successfully.
Feb 25 08:09:15 np0005629333 conmon[386860]: conmon 10dd8bdc40b78cc2708c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-10dd8bdc40b78cc2708c27da0abe462bc8d83b29e79a739dc63dc30cebb201f9.scope/container/memory.events
Feb 25 08:09:15 np0005629333 podman[386844]: 2026-02-25 13:09:15.189482481 +0000 UTC m=+0.354016844 container attach 10dd8bdc40b78cc2708c27da0abe462bc8d83b29e79a739dc63dc30cebb201f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 08:09:15 np0005629333 podman[386844]: 2026-02-25 13:09:15.190160201 +0000 UTC m=+0.354694533 container died 10dd8bdc40b78cc2708c27da0abe462bc8d83b29e79a739dc63dc30cebb201f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_margulis, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:09:15 np0005629333 systemd[1]: var-lib-containers-storage-overlay-9b8a726ea5ff8570b109c9ddd517f70cfd248193bdd182ece47d0f01e369281e-merged.mount: Deactivated successfully.
Feb 25 08:09:15 np0005629333 podman[386844]: 2026-02-25 13:09:15.668594471 +0000 UTC m=+0.833128853 container remove 10dd8bdc40b78cc2708c27da0abe462bc8d83b29e79a739dc63dc30cebb201f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_margulis, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:09:15 np0005629333 systemd[1]: libpod-conmon-10dd8bdc40b78cc2708c27da0abe462bc8d83b29e79a739dc63dc30cebb201f9.scope: Deactivated successfully.
Feb 25 08:09:15 np0005629333 nova_compute[244014]: 2026-02-25 13:09:15.839 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:09:15 np0005629333 podman[386886]: 2026-02-25 13:09:15.931038162 +0000 UTC m=+0.108902392 container create 786defd59f8137ae66c1d1455df5acbffdf737a887a6e3fabc99fa1a1478fcb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:09:15 np0005629333 podman[386886]: 2026-02-25 13:09:15.854624327 +0000 UTC m=+0.032488607 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:09:16 np0005629333 systemd[1]: Started libpod-conmon-786defd59f8137ae66c1d1455df5acbffdf737a887a6e3fabc99fa1a1478fcb7.scope.
Feb 25 08:09:16 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:09:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8511c03cbf7bc11ab27618aec6a96ed9a54c2388570c5fd4e4e790f144192020/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:09:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8511c03cbf7bc11ab27618aec6a96ed9a54c2388570c5fd4e4e790f144192020/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:09:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8511c03cbf7bc11ab27618aec6a96ed9a54c2388570c5fd4e4e790f144192020/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:09:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8511c03cbf7bc11ab27618aec6a96ed9a54c2388570c5fd4e4e790f144192020/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:09:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8511c03cbf7bc11ab27618aec6a96ed9a54c2388570c5fd4e4e790f144192020/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
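The kernel warnings above fire once per bind mount into the container rootfs; "until 2038 (0x7fffffff)" is the largest 32-bit signed time_t, the limit for xfs inodes created without the bigtime feature. A quick check of that epoch value:

import datetime

# 0x7fffffff is the largest 32-bit signed time_t; xfs inodes without the
# bigtime feature cannot hold timestamps past this instant.
limit = 0x7fffffff
print(limit)  # 2147483647
print(datetime.datetime.fromtimestamp(limit, tz=datetime.timezone.utc))
# 2038-01-19 03:14:07+00:00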
Feb 25 08:09:16 np0005629333 podman[386886]: 2026-02-25 13:09:16.187795212 +0000 UTC m=+0.365659472 container init 786defd59f8137ae66c1d1455df5acbffdf737a887a6e3fabc99fa1a1478fcb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_taussig, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 08:09:16 np0005629333 podman[386886]: 2026-02-25 13:09:16.198103173 +0000 UTC m=+0.375967383 container start 786defd59f8137ae66c1d1455df5acbffdf737a887a6e3fabc99fa1a1478fcb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_taussig, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:09:16 np0005629333 podman[386886]: 2026-02-25 13:09:16.241928749 +0000 UTC m=+0.419792979 container attach 786defd59f8137ae66c1d1455df5acbffdf737a887a6e3fabc99fa1a1478fcb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 08:09:16 np0005629333 determined_taussig[386902]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:09:16 np0005629333 determined_taussig[386902]: --> All data devices are unavailable
Feb 25 08:09:16 np0005629333 systemd[1]: libpod-786defd59f8137ae66c1d1455df5acbffdf737a887a6e3fabc99fa1a1478fcb7.scope: Deactivated successfully.
Feb 25 08:09:16 np0005629333 conmon[386902]: conmon 786defd59f8137ae66c1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-786defd59f8137ae66c1d1455df5acbffdf737a887a6e3fabc99fa1a1478fcb7.scope/container/memory.events
Feb 25 08:09:16 np0005629333 podman[386886]: 2026-02-25 13:09:16.808776493 +0000 UTC m=+0.986640733 container died 786defd59f8137ae66c1d1455df5acbffdf737a887a6e3fabc99fa1a1478fcb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:09:16 np0005629333 systemd[1]: var-lib-containers-storage-overlay-8511c03cbf7bc11ab27618aec6a96ed9a54c2388570c5fd4e4e790f144192020-merged.mount: Deactivated successfully.
Feb 25 08:09:17 np0005629333 podman[386886]: 2026-02-25 13:09:17.086229897 +0000 UTC m=+1.264094137 container remove 786defd59f8137ae66c1d1455df5acbffdf737a887a6e3fabc99fa1a1478fcb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 08:09:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2677: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.2 KiB/s wr, 37 op/s
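The ceph-mgr pgmap lines recur every couple of seconds and carry the cluster-wide counters in a fixed shape. A sketch of pulling the figures out of one such line; the regex mirrors the format seen in this journal and is not guaranteed against other Ceph releases:

import re

PGMAP_RE = re.compile(
    r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
)

line = ("pgmap v2677: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, "
        "59 GiB / 60 GiB avail; 26 KiB/s rd, 2.2 KiB/s wr, 37 op/s")
print(PGMAP_RE.search(line).groupdict())
# {'version': '2677', 'pgs': '305', 'states': '305 active+clean', 'data': '462 KiB', ...}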
Feb 25 08:09:17 np0005629333 systemd[1]: libpod-conmon-786defd59f8137ae66c1d1455df5acbffdf737a887a6e3fabc99fa1a1478fcb7.scope: Deactivated successfully.
Feb 25 08:09:17 np0005629333 nova_compute[244014]: 2026-02-25 13:09:17.523 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:09:17 np0005629333 podman[386997]: 2026-02-25 13:09:17.649246314 +0000 UTC m=+0.114388557 container create 09021ba57c38f906004965420a475e3eb11d413737d133e2f62bb4b78d0b9961 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cori, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 08:09:17 np0005629333 podman[386997]: 2026-02-25 13:09:17.556678873 +0000 UTC m=+0.021821096 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:09:17 np0005629333 systemd[1]: Started libpod-conmon-09021ba57c38f906004965420a475e3eb11d413737d133e2f62bb4b78d0b9961.scope.
Feb 25 08:09:17 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:09:17 np0005629333 podman[386997]: 2026-02-25 13:09:17.817999113 +0000 UTC m=+0.283141416 container init 09021ba57c38f906004965420a475e3eb11d413737d133e2f62bb4b78d0b9961 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cori, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:09:17 np0005629333 podman[386997]: 2026-02-25 13:09:17.825897555 +0000 UTC m=+0.291039788 container start 09021ba57c38f906004965420a475e3eb11d413737d133e2f62bb4b78d0b9961 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 08:09:17 np0005629333 admiring_cori[387013]: 167 167
Feb 25 08:09:17 np0005629333 systemd[1]: libpod-09021ba57c38f906004965420a475e3eb11d413737d133e2f62bb4b78d0b9961.scope: Deactivated successfully.
Feb 25 08:09:17 np0005629333 podman[386997]: 2026-02-25 13:09:17.880077233 +0000 UTC m=+0.345219476 container attach 09021ba57c38f906004965420a475e3eb11d413737d133e2f62bb4b78d0b9961 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cori, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 08:09:17 np0005629333 podman[386997]: 2026-02-25 13:09:17.880565437 +0000 UTC m=+0.345707680 container died 09021ba57c38f906004965420a475e3eb11d413737d133e2f62bb4b78d0b9961 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cori, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030)
Feb 25 08:09:18 np0005629333 systemd[1]: var-lib-containers-storage-overlay-8b84d5a81af260a32f16401d3c9a3b3f3bdf2c7ce9c1c277e591bb1003092acb-merged.mount: Deactivated successfully.
Feb 25 08:09:18 np0005629333 podman[386997]: 2026-02-25 13:09:18.177748237 +0000 UTC m=+0.642890460 container remove 09021ba57c38f906004965420a475e3eb11d413737d133e2f62bb4b78d0b9961 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:09:18 np0005629333 systemd[1]: libpod-conmon-09021ba57c38f906004965420a475e3eb11d413737d133e2f62bb4b78d0b9961.scope: Deactivated successfully.
Feb 25 08:09:18 np0005629333 podman[387040]: 2026-02-25 13:09:18.381459231 +0000 UTC m=+0.086295654 container create 1972827f09bc157e6a95acad3b0b9abd3619276608b96dde97ebb85a5f4ec1a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 08:09:18 np0005629333 podman[387040]: 2026-02-25 13:09:18.3204159 +0000 UTC m=+0.025252383 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:09:18 np0005629333 systemd[1]: Started libpod-conmon-1972827f09bc157e6a95acad3b0b9abd3619276608b96dde97ebb85a5f4ec1a2.scope.
Feb 25 08:09:18 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:09:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c1ba6ca5f5920d5e8e89c56e8d59c16c676f3448fef4f12f4c2cff1be4072b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:09:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c1ba6ca5f5920d5e8e89c56e8d59c16c676f3448fef4f12f4c2cff1be4072b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:09:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c1ba6ca5f5920d5e8e89c56e8d59c16c676f3448fef4f12f4c2cff1be4072b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:09:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99c1ba6ca5f5920d5e8e89c56e8d59c16c676f3448fef4f12f4c2cff1be4072b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:09:18 np0005629333 podman[387040]: 2026-02-25 13:09:18.580327959 +0000 UTC m=+0.285164352 container init 1972827f09bc157e6a95acad3b0b9abd3619276608b96dde97ebb85a5f4ec1a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 08:09:18 np0005629333 podman[387040]: 2026-02-25 13:09:18.588985353 +0000 UTC m=+0.293821736 container start 1972827f09bc157e6a95acad3b0b9abd3619276608b96dde97ebb85a5f4ec1a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_meninsky, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:09:18 np0005629333 podman[387040]: 2026-02-25 13:09:18.621658224 +0000 UTC m=+0.326494647 container attach 1972827f09bc157e6a95acad3b0b9abd3619276608b96dde97ebb85a5f4ec1a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]: {
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:    "0": [
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:        {
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "devices": [
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "/dev/loop3"
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            ],
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "lv_name": "ceph_lv0",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "lv_size": "21470642176",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "name": "ceph_lv0",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "tags": {
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.cluster_name": "ceph",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.crush_device_class": "",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.encrypted": "0",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.objectstore": "bluestore",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.osd_id": "0",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.type": "block",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.vdo": "0",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.with_tpm": "0"
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            },
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "type": "block",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "vg_name": "ceph_vg0"
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:        }
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:    ],
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:    "1": [
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:        {
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "devices": [
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "/dev/loop4"
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            ],
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "lv_name": "ceph_lv1",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "lv_size": "21470642176",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "name": "ceph_lv1",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "tags": {
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.cluster_name": "ceph",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.crush_device_class": "",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.encrypted": "0",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.objectstore": "bluestore",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.osd_id": "1",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.type": "block",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.vdo": "0",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.with_tpm": "0"
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            },
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "type": "block",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "vg_name": "ceph_vg1"
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:        }
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:    ],
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:    "2": [
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:        {
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "devices": [
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "/dev/loop5"
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            ],
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "lv_name": "ceph_lv2",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "lv_size": "21470642176",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "name": "ceph_lv2",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "tags": {
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.cluster_name": "ceph",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.crush_device_class": "",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.encrypted": "0",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.objectstore": "bluestore",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.osd_id": "2",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.type": "block",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.vdo": "0",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:                "ceph.with_tpm": "0"
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            },
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "type": "block",
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:            "vg_name": "ceph_vg2"
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:        }
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]:    ]
Feb 25 08:09:18 np0005629333 crazy_meninsky[387057]: }
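The JSON emitted by crazy_meninsky maps OSD ids to their backing logical volumes with the full set of ceph.* LV tags; it has the shape of `ceph-volume lvm list --format json` output, which cephadm runs in a helper container while refreshing device state. A sketch of summarising it, assuming the block above has been copied out of the journal (prefixes stripped) into lvm_list.json:

import json

with open("lvm_list.json") as f:
    inventory = json.load(f)

for osd_id, lvs in sorted(inventory.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"on {','.join(lv['devices'])} "
              f"fsid={tags['ceph.osd_fsid']} "
              f"objectstore={tags['ceph.objectstore']}")
# osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 fsid=d19afe3c-7923-4776-bcc2-88886150b441 objectstore=bluestore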
Feb 25 08:09:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:09:18 np0005629333 systemd[1]: libpod-1972827f09bc157e6a95acad3b0b9abd3619276608b96dde97ebb85a5f4ec1a2.scope: Deactivated successfully.
Feb 25 08:09:18 np0005629333 podman[387040]: 2026-02-25 13:09:18.936724549 +0000 UTC m=+0.641560952 container died 1972827f09bc157e6a95acad3b0b9abd3619276608b96dde97ebb85a5f4ec1a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:09:19 np0005629333 systemd[1]: var-lib-containers-storage-overlay-99c1ba6ca5f5920d5e8e89c56e8d59c16c676f3448fef4f12f4c2cff1be4072b-merged.mount: Deactivated successfully.
Feb 25 08:09:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2678: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 KiB/s wr, 34 op/s
Feb 25 08:09:19 np0005629333 podman[387040]: 2026-02-25 13:09:19.267461745 +0000 UTC m=+0.972298168 container remove 1972827f09bc157e6a95acad3b0b9abd3619276608b96dde97ebb85a5f4ec1a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:09:19 np0005629333 systemd[1]: libpod-conmon-1972827f09bc157e6a95acad3b0b9abd3619276608b96dde97ebb85a5f4ec1a2.scope: Deactivated successfully.
Feb 25 08:09:19 np0005629333 podman[387143]: 2026-02-25 13:09:19.81697888 +0000 UTC m=+0.097726466 container create 4965d579db20c60c8d5fa10d6b59b6b34f315af06db38b9f5eba2aba2c99ed53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Feb 25 08:09:19 np0005629333 podman[387143]: 2026-02-25 13:09:19.742081148 +0000 UTC m=+0.022828724 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:09:19 np0005629333 systemd[1]: Started libpod-conmon-4965d579db20c60c8d5fa10d6b59b6b34f315af06db38b9f5eba2aba2c99ed53.scope.
Feb 25 08:09:19 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:09:19 np0005629333 podman[387143]: 2026-02-25 13:09:19.992557891 +0000 UTC m=+0.273305537 container init 4965d579db20c60c8d5fa10d6b59b6b34f315af06db38b9f5eba2aba2c99ed53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 08:09:20 np0005629333 podman[387143]: 2026-02-25 13:09:20.000274109 +0000 UTC m=+0.281021715 container start 4965d579db20c60c8d5fa10d6b59b6b34f315af06db38b9f5eba2aba2c99ed53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 08:09:20 np0005629333 zen_heisenberg[387159]: 167 167
Feb 25 08:09:20 np0005629333 systemd[1]: libpod-4965d579db20c60c8d5fa10d6b59b6b34f315af06db38b9f5eba2aba2c99ed53.scope: Deactivated successfully.
Feb 25 08:09:20 np0005629333 podman[387143]: 2026-02-25 13:09:20.026508699 +0000 UTC m=+0.307256285 container attach 4965d579db20c60c8d5fa10d6b59b6b34f315af06db38b9f5eba2aba2c99ed53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 08:09:20 np0005629333 podman[387143]: 2026-02-25 13:09:20.027235249 +0000 UTC m=+0.307982825 container died 4965d579db20c60c8d5fa10d6b59b6b34f315af06db38b9f5eba2aba2c99ed53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:09:20 np0005629333 systemd[1]: var-lib-containers-storage-overlay-71868a684af994b3d13042458fd2fb760b5af80349af26f4c02ae581a1f018ec-merged.mount: Deactivated successfully.
Feb 25 08:09:20 np0005629333 podman[387143]: 2026-02-25 13:09:20.268432631 +0000 UTC m=+0.549180237 container remove 4965d579db20c60c8d5fa10d6b59b6b34f315af06db38b9f5eba2aba2c99ed53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 08:09:20 np0005629333 systemd[1]: libpod-conmon-4965d579db20c60c8d5fa10d6b59b6b34f315af06db38b9f5eba2aba2c99ed53.scope: Deactivated successfully.
Feb 25 08:09:20 np0005629333 podman[387187]: 2026-02-25 13:09:20.504778936 +0000 UTC m=+0.103295244 container create 08e6fe8b2f67eb9f345a7e103ee249b98d883584ca65127e6b8f898b4fc7dea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Feb 25 08:09:20 np0005629333 podman[387187]: 2026-02-25 13:09:20.426318043 +0000 UTC m=+0.024834391 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:09:20 np0005629333 systemd[1]: Started libpod-conmon-08e6fe8b2f67eb9f345a7e103ee249b98d883584ca65127e6b8f898b4fc7dea3.scope.
Feb 25 08:09:20 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:09:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55148601925dbc1443ed079060c8272ba53b51bdf4d3e376f5f5e2711a25c6ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:09:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55148601925dbc1443ed079060c8272ba53b51bdf4d3e376f5f5e2711a25c6ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:09:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55148601925dbc1443ed079060c8272ba53b51bdf4d3e376f5f5e2711a25c6ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:09:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55148601925dbc1443ed079060c8272ba53b51bdf4d3e376f5f5e2711a25c6ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:09:20 np0005629333 podman[387187]: 2026-02-25 13:09:20.705552487 +0000 UTC m=+0.304068785 container init 08e6fe8b2f67eb9f345a7e103ee249b98d883584ca65127e6b8f898b4fc7dea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 08:09:20 np0005629333 podman[387187]: 2026-02-25 13:09:20.714188421 +0000 UTC m=+0.312704729 container start 08e6fe8b2f67eb9f345a7e103ee249b98d883584ca65127e6b8f898b4fc7dea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bardeen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 08:09:20 np0005629333 podman[387187]: 2026-02-25 13:09:20.753048427 +0000 UTC m=+0.351564795 container attach 08e6fe8b2f67eb9f345a7e103ee249b98d883584ca65127e6b8f898b4fc7dea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bardeen, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Feb 25 08:09:20 np0005629333 nova_compute[244014]: 2026-02-25 13:09:20.841 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:09:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2679: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 KiB/s wr, 30 op/s
Feb 25 08:09:21 np0005629333 lvm[387281]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:09:21 np0005629333 lvm[387281]: VG ceph_vg0 finished
Feb 25 08:09:21 np0005629333 lvm[387282]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:09:21 np0005629333 lvm[387282]: VG ceph_vg1 finished
Feb 25 08:09:21 np0005629333 lvm[387284]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:09:21 np0005629333 lvm[387284]: VG ceph_vg2 finished
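The lvm messages above come from event-based autoactivation: as each loop device's PV appears, lvm marks its single-PV volume group complete. A sketch of listing the same PV-to-VG mapping through lvm's JSON reporting; it assumes an lvm2 build that accepts --reportformat json:

import json
import subprocess

out = subprocess.run(
    ["pvs", "--reportformat", "json", "-o", "pv_name,vg_name"],
    check=True, capture_output=True, text=True,
).stdout
for pv in json.loads(out)["report"][0]["pv"]:
    print(pv["pv_name"], "->", pv["vg_name"] or "(orphan)")
# /dev/loop3 -> ceph_vg0, /dev/loop4 -> ceph_vg1, /dev/loop5 -> ceph_vg2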
Feb 25 08:09:21 np0005629333 exciting_bardeen[387203]: {}
Feb 25 08:09:21 np0005629333 systemd[1]: libpod-08e6fe8b2f67eb9f345a7e103ee249b98d883584ca65127e6b8f898b4fc7dea3.scope: Deactivated successfully.
Feb 25 08:09:21 np0005629333 systemd[1]: libpod-08e6fe8b2f67eb9f345a7e103ee249b98d883584ca65127e6b8f898b4fc7dea3.scope: Consumed 1.103s CPU time.
Feb 25 08:09:21 np0005629333 podman[387187]: 2026-02-25 13:09:21.473185494 +0000 UTC m=+1.071701792 container died 08e6fe8b2f67eb9f345a7e103ee249b98d883584ca65127e6b8f898b4fc7dea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bardeen, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 08:09:21 np0005629333 systemd[1]: var-lib-containers-storage-overlay-55148601925dbc1443ed079060c8272ba53b51bdf4d3e376f5f5e2711a25c6ab-merged.mount: Deactivated successfully.
Feb 25 08:09:21 np0005629333 podman[387187]: 2026-02-25 13:09:21.666611248 +0000 UTC m=+1.265127546 container remove 08e6fe8b2f67eb9f345a7e103ee249b98d883584ca65127e6b8f898b4fc7dea3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:09:21 np0005629333 systemd[1]: libpod-conmon-08e6fe8b2f67eb9f345a7e103ee249b98d883584ca65127e6b8f898b4fc7dea3.scope: Deactivated successfully.
Feb 25 08:09:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:09:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:09:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:09:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:09:22 np0005629333 nova_compute[244014]: 2026-02-25 13:09:22.527 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:09:22 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:09:22 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:09:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2680: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Feb 25 08:09:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:09:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2681: 305 pgs: 305 active+clean; 462 KiB data, 988 MiB used, 59 GiB / 60 GiB avail; 91 B/s rd, 0 B/s wr, 0 op/s
Feb 25 08:09:25 np0005629333 nova_compute[244014]: 2026-02-25 13:09:25.843 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:09:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2682: 305 pgs: 305 active+clean; 72 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 6.0 MiB/s wr, 19 op/s
Feb 25 08:09:27 np0005629333 nova_compute[244014]: 2026-02-25 13:09:27.532 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:09:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Feb 25 08:09:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Feb 25 08:09:28 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Feb 25 08:09:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:09:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2684: 305 pgs: 305 active+clean; 80 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 8.0 MiB/s wr, 22 op/s
Feb 25 08:09:30 np0005629333 nova_compute[244014]: 2026-02-25 13:09:30.846 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:09:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:09:31
Feb 25 08:09:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:09:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:09:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', '.rgw.root', 'images', 'volumes', 'vms', 'default.rgw.control', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta']
Feb 25 08:09:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
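The balancer pass above ran in upmap mode, was allowed up to 10 upmap changes per pass, and prepared none, meaning the listed pools were already evenly mapped. The "max misplaced 0.050000" figure is the throttle it checks first; a toy paraphrase of that gate (an assumption drawn from the log wording, not the mgr module's actual code):

def balancer_may_optimize(misplaced_ratio: float, max_misplaced: float = 0.05) -> bool:
    # The balancer defers a new plan while more than max_misplaced of the
    # cluster's objects are already being shuffled around.
    return misplaced_ratio <= max_misplaced

print(balancer_may_optimize(0.00))  # True: proceed (here it then prepared 0/10 changes)
print(balancer_may_optimize(0.12))  # False: wait for backfill to settle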
Feb 25 08:09:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2685: 305 pgs: 305 active+clean; 80 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 8.0 MiB/s wr, 22 op/s
Feb 25 08:09:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:09:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:09:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:09:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:09:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:09:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:09:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:09:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:09:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:09:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:09:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:09:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:09:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:09:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:09:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:09:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:09:32 np0005629333 nova_compute[244014]: 2026-02-25 13:09:32.536 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:09:32 np0005629333 podman[387326]: 2026-02-25 13:09:32.738472807 +0000 UTC m=+0.071188889 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 25 08:09:32 np0005629333 podman[387327]: 2026-02-25 13:09:32.772108115 +0000 UTC m=+0.104926690 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
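The two health_status events are podman's periodic healthchecks for ovn_metadata_agent and ovn_controller; each runs the 'test' command from its config_data ('/openstack/healthcheck') and here reports healthy with a failing streak of 0. The same state can be read back on demand; a sketch assuming the ovn_controller container exists on this host and that this podman version stores health under State.Health (older builds used State.Healthcheck):

import json
import subprocess

out = subprocess.run(
    ["podman", "inspect", "--format", "json", "ovn_controller"],
    check=True, capture_output=True, text=True,
).stdout
state = json.loads(out)[0]["State"]
health = state.get("Health") or state.get("Healthcheck") or {}
print(health.get("Status"), health.get("FailingStreak"))
# healthy 0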
Feb 25 08:09:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2686: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 11 MiB/s wr, 33 op/s
Feb 25 08:09:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:09:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2687: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 11 MiB/s wr, 33 op/s
Feb 25 08:09:35 np0005629333 nova_compute[244014]: 2026-02-25 13:09:35.849 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:09:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2688: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 4.0 MiB/s wr, 10 op/s
Feb 25 08:09:37 np0005629333 nova_compute[244014]: 2026-02-25 13:09:37.539 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:09:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:09:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2689: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.6 KiB/s rd, 2.9 MiB/s wr, 9 op/s
Feb 25 08:09:40 np0005629333 nova_compute[244014]: 2026-02-25 13:09:40.849 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:09:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2690: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 2.7 MiB/s wr, 8 op/s
Feb 25 08:09:42 np0005629333 nova_compute[244014]: 2026-02-25 13:09:42.543 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:09:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:09:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:09:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:09:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:09:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.723792564281303e-05 of space, bias 1.0, pg target 0.005171377692843909 quantized to 32 (current 32)
Feb 25 08:09:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:09:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:09:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:09:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:09:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:09:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0018289410571936495 of space, bias 1.0, pg target 0.5486823171580949 quantized to 32 (current 32)
Feb 25 08:09:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:09:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4127973638615152e-06 of space, bias 4.0, pg target 0.0016953568366338183 quantized to 16 (current 16)
Feb 25 08:09:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:09:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:09:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:09:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:09:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:09:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:09:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:09:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:09:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:09:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
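The pg_autoscaler pass above is plain arithmetic: each pool's used-space fraction is multiplied by its bias and by the cluster's PG budget, then quantized to a power of two. Assuming the three ceph-osd daemons seen elsewhere in this log and the default mon_target_pg_per_osd of 100 (neither is printed here, so both are assumptions), the budget is 300, which reproduces the printed targets exactly:

    # Hedged sketch of the autoscaler math; the quantization step and the
    # real module's min/max pg_num rules are omitted.
    OSDS, TARGET_PG_PER_OSD = 3, 100   # assumed values

    def pg_target(used_fraction: float, bias: float) -> float:
        return used_fraction * bias * OSDS * TARGET_PG_PER_OSD

    print(pg_target(7.185749983720779e-06, 1.0))   # .mgr        -> 0.0021557..., as logged
    print(pg_target(0.0018289410571936495, 1.0))   # images      -> 0.5486823..., as logged
    print(pg_target(1.4127973638615152e-06, 4.0))  # cephfs.meta -> 0.0016953..., as logged

All three pools land exactly on the "pg target" values in the log, which is why every pool stays at its current pg_num.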
Feb 25 08:09:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2691: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 2.7 MiB/s wr, 8 op/s
Feb 25 08:09:43 np0005629333 nova_compute[244014]: 2026-02-25 13:09:43.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:09:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:09:43 np0005629333 nova_compute[244014]: 2026-02-25 13:09:43.980 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:09:43 np0005629333 nova_compute[244014]: 2026-02-25 13:09:43.981 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:09:43 np0005629333 nova_compute[244014]: 2026-02-25 13:09:43.981 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:09:43 np0005629333 nova_compute[244014]: 2026-02-25 13:09:43.981 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:09:43 np0005629333 nova_compute[244014]: 2026-02-25 13:09:43.981 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:09:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:09:44 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/861744831' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:09:44 np0005629333 nova_compute[244014]: 2026-02-25 13:09:44.525 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
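The `ceph df` round trip above (dispatched on the mon, returned in 0.543s) is how the RBD-backed libvirt driver sizes the hypervisor's disk pool. An equivalent standalone call, assuming the modern `ceph df --format=json` schema with a top-level "stats" object:

    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    # total_avail_bytes is what feeds the free_disk figure in the
    # 'Hypervisor/Node resource view' line that follows.
    print("free_disk=%sGB" % (stats["total_avail_bytes"] / 1024**3))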
Feb 25 08:09:44 np0005629333 nova_compute[244014]: 2026-02-25 13:09:44.641 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:09:44 np0005629333 nova_compute[244014]: 2026-02-25 13:09:44.642 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3559MB free_disk=59.98724717646837GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:09:44 np0005629333 nova_compute[244014]: 2026-02-25 13:09:44.642 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:09:44 np0005629333 nova_compute[244014]: 2026-02-25 13:09:44.642 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:09:44 np0005629333 nova_compute[244014]: 2026-02-25 13:09:44.713 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:09:44 np0005629333 nova_compute[244014]: 2026-02-25 13:09:44.714 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:09:44 np0005629333 nova_compute[244014]: 2026-02-25 13:09:44.733 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:09:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2692: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:09:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:09:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/613871194' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:09:45 np0005629333 nova_compute[244014]: 2026-02-25 13:09:45.279 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:09:45 np0005629333 nova_compute[244014]: 2026-02-25 13:09:45.285 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:09:45 np0005629333 nova_compute[244014]: 2026-02-25 13:09:45.307 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
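The inventory record above is what placement actually schedules against: usable capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the values logged:

    # Arithmetic check on the inventory dict reported to placement.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2

So this 8-vCPU node can accept up to 32 vCPUs of instances thanks to the 4.0 CPU overcommit, while disk is slightly undercommitted at 0.9.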
Feb 25 08:09:45 np0005629333 nova_compute[244014]: 2026-02-25 13:09:45.308 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:09:45 np0005629333 nova_compute[244014]: 2026-02-25 13:09:45.309 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:09:45 np0005629333 nova_compute[244014]: 2026-02-25 13:09:45.852 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:09:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2693: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:09:47 np0005629333 nova_compute[244014]: 2026-02-25 13:09:47.310 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:09:47 np0005629333 nova_compute[244014]: 2026-02-25 13:09:47.310 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:09:47 np0005629333 nova_compute[244014]: 2026-02-25 13:09:47.311 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:09:47 np0005629333 nova_compute[244014]: 2026-02-25 13:09:47.326 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:09:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:09:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2473000453' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:09:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:09:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2473000453' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:09:47 np0005629333 nova_compute[244014]: 2026-02-25 13:09:47.546 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:09:47 np0005629333 nova_compute[244014]: 2026-02-25 13:09:47.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:09:48 np0005629333 nova_compute[244014]: 2026-02-25 13:09:48.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:09:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:09:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2694: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:09:50 np0005629333 nova_compute[244014]: 2026-02-25 13:09:50.854 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:09:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2695: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:09:52 np0005629333 nova_compute[244014]: 2026-02-25 13:09:52.549 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:09:52 np0005629333 nova_compute[244014]: 2026-02-25 13:09:52.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:09:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2696: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:09:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:09:54 np0005629333 nova_compute[244014]: 2026-02-25 13:09:54.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:09:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:09:55.048 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:09:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:09:55.049 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:09:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:09:55.049 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
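The acquire/acquired/released triple above is the standard trace emitted by oslo.concurrency's synchronized decorator, which neutron's ProcessMonitor uses to serialize its child-process check. A minimal sketch of the pattern (the body here is hypothetical; the real method respawns dead haproxy children):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # Only one caller at a time enters this section; each entry and
        # exit produces the Acquiring/acquired/released DEBUG lines above.
        pass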
Feb 25 08:09:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2697: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:09:55 np0005629333 nova_compute[244014]: 2026-02-25 13:09:55.856 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:09:56 np0005629333 nova_compute[244014]: 2026-02-25 13:09:56.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:09:56 np0005629333 nova_compute[244014]: 2026-02-25 13:09:56.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 08:09:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2698: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:09:57 np0005629333 nova_compute[244014]: 2026-02-25 13:09:57.552 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:09:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:09:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2699: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:09:59 np0005629333 nova_compute[244014]: 2026-02-25 13:09:59.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:10:00 np0005629333 nova_compute[244014]: 2026-02-25 13:10:00.859 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:10:00 np0005629333 nova_compute[244014]: 2026-02-25 13:10:00.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:10:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2700: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:10:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:10:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:10:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:10:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:10:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:10:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:10:02 np0005629333 nova_compute[244014]: 2026-02-25 13:10:02.556 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:10:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2701: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:10:03 np0005629333 podman[387415]: 2026-02-25 13:10:03.731638959 +0000 UTC m=+0.075374186 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223)
Feb 25 08:10:03 np0005629333 podman[387416]: 2026-02-25 13:10:03.764064884 +0000 UTC m=+0.102734839 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 08:10:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:10:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2702: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:10:05 np0005629333 nova_compute[244014]: 2026-02-25 13:10:05.861 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:10:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2703: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:10:07 np0005629333 nova_compute[244014]: 2026-02-25 13:10:07.560 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:10:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:10:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2704: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:10:10 np0005629333 nova_compute[244014]: 2026-02-25 13:10:10.863 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:10:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2705: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:10:12 np0005629333 nova_compute[244014]: 2026-02-25 13:10:12.565 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:10:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2706: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:10:13 np0005629333 nova_compute[244014]: 2026-02-25 13:10:13.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:10:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:10:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2707: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:10:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 08:10:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 4800.1 total, 600.0 interval
    Cumulative writes: 46K writes, 184K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s
    Cumulative WAL: 46K writes, 17K syncs, 2.75 writes per sync, written: 0.19 GB, 0.04 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 2661 writes, 10K keys, 2661 commit groups, 1.0 writes per commit group, ingest: 10.48 MB, 0.02 MB/s
    Interval WAL: 2661 writes, 1045 syncs, 2.55 writes per sync, written: 0.01 GB, 0.02 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
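A quick consistency check on the interval figures in that dump: "writes per sync" is simply interval writes divided by interval WAL syncs.

    print(round(2661 / 1045, 2))  # 2.55, matching the dump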
Feb 25 08:10:15 np0005629333 nova_compute[244014]: 2026-02-25 13:10:15.865 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:10:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2708: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:10:17 np0005629333 nova_compute[244014]: 2026-02-25 13:10:17.568 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:10:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:10:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2709: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:10:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 08:10:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 4800.2 total, 600.0 interval
    Cumulative writes: 48K writes, 193K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s
    Cumulative WAL: 48K writes, 17K syncs, 2.76 writes per sync, written: 0.19 GB, 0.04 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 2895 writes, 12K keys, 2895 commit groups, 1.0 writes per commit group, ingest: 11.47 MB, 0.02 MB/s
    Interval WAL: 2895 writes, 1135 syncs, 2.55 writes per sync, written: 0.01 GB, 0.02 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 08:10:20 np0005629333 nova_compute[244014]: 2026-02-25 13:10:20.869 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:10:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2710: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:10:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:10:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:10:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:10:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:10:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:10:22 np0005629333 nova_compute[244014]: 2026-02-25 13:10:22.571 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:10:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:10:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:10:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:10:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:10:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:10:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:10:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:10:22 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:10:22 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:10:22 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:10:23 np0005629333 podman[387606]: 2026-02-25 13:10:23.062470855 +0000 UTC m=+0.061965659 container create eba78ca3b24bb983f539073f1c5dcab94ecdbd6e2d0109e6ce13bfc92fde74d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_burnell, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:10:23 np0005629333 podman[387606]: 2026-02-25 13:10:23.032814558 +0000 UTC m=+0.032309402 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:10:23 np0005629333 systemd[1]: Started libpod-conmon-eba78ca3b24bb983f539073f1c5dcab94ecdbd6e2d0109e6ce13bfc92fde74d1.scope.
Feb 25 08:10:23 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:10:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2711: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:10:23 np0005629333 podman[387606]: 2026-02-25 13:10:23.209368937 +0000 UTC m=+0.208863791 container init eba78ca3b24bb983f539073f1c5dcab94ecdbd6e2d0109e6ce13bfc92fde74d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_burnell, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 08:10:23 np0005629333 podman[387606]: 2026-02-25 13:10:23.218895526 +0000 UTC m=+0.218390330 container start eba78ca3b24bb983f539073f1c5dcab94ecdbd6e2d0109e6ce13bfc92fde74d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:10:23 np0005629333 reverent_burnell[387622]: 167 167
Feb 25 08:10:23 np0005629333 systemd[1]: libpod-eba78ca3b24bb983f539073f1c5dcab94ecdbd6e2d0109e6ce13bfc92fde74d1.scope: Deactivated successfully.
Feb 25 08:10:23 np0005629333 conmon[387622]: conmon eba78ca3b24bb983f539 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eba78ca3b24bb983f539073f1c5dcab94ecdbd6e2d0109e6ce13bfc92fde74d1.scope/container/memory.events
Feb 25 08:10:23 np0005629333 podman[387606]: 2026-02-25 13:10:23.241986187 +0000 UTC m=+0.241480991 container attach eba78ca3b24bb983f539073f1c5dcab94ecdbd6e2d0109e6ce13bfc92fde74d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_burnell, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 08:10:23 np0005629333 podman[387606]: 2026-02-25 13:10:23.242876032 +0000 UTC m=+0.242370836 container died eba78ca3b24bb983f539073f1c5dcab94ecdbd6e2d0109e6ce13bfc92fde74d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_burnell, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 08:10:23 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f7f93c1970064bb46e0de382d5ef5080920829432fb418ae88df05c28a7371d0-merged.mount: Deactivated successfully.
Feb 25 08:10:23 np0005629333 podman[387606]: 2026-02-25 13:10:23.482163708 +0000 UTC m=+0.481658502 container remove eba78ca3b24bb983f539073f1c5dcab94ecdbd6e2d0109e6ce13bfc92fde74d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_burnell, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:10:23 np0005629333 systemd[1]: libpod-conmon-eba78ca3b24bb983f539073f1c5dcab94ecdbd6e2d0109e6ce13bfc92fde74d1.scope: Deactivated successfully.
Feb 25 08:10:23 np0005629333 podman[387647]: 2026-02-25 13:10:23.637147389 +0000 UTC m=+0.072870076 container create df03f91a4cf884963960011cc693dce570229b045c8970cade49d1cd00820e13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_kirch, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:10:23 np0005629333 podman[387647]: 2026-02-25 13:10:23.585038659 +0000 UTC m=+0.020761346 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:10:23 np0005629333 systemd[1]: Started libpod-conmon-df03f91a4cf884963960011cc693dce570229b045c8970cade49d1cd00820e13.scope.
Feb 25 08:10:23 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:10:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea85d4a74b2db8de3b50ebf749215a57e924095c09fa2b82bcc4e46aa78cd17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:10:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea85d4a74b2db8de3b50ebf749215a57e924095c09fa2b82bcc4e46aa78cd17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:10:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea85d4a74b2db8de3b50ebf749215a57e924095c09fa2b82bcc4e46aa78cd17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:10:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea85d4a74b2db8de3b50ebf749215a57e924095c09fa2b82bcc4e46aa78cd17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:10:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea85d4a74b2db8de3b50ebf749215a57e924095c09fa2b82bcc4e46aa78cd17/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:10:23 np0005629333 podman[387647]: 2026-02-25 13:10:23.809760966 +0000 UTC m=+0.245483693 container init df03f91a4cf884963960011cc693dce570229b045c8970cade49d1cd00820e13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_kirch, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 08:10:23 np0005629333 podman[387647]: 2026-02-25 13:10:23.82016586 +0000 UTC m=+0.255888507 container start df03f91a4cf884963960011cc693dce570229b045c8970cade49d1cd00820e13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_kirch, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 08:10:23 np0005629333 podman[387647]: 2026-02-25 13:10:23.851758891 +0000 UTC m=+0.287481598 container attach df03f91a4cf884963960011cc693dce570229b045c8970cade49d1cd00820e13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:10:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:10:24 np0005629333 elated_kirch[387663]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:10:24 np0005629333 elated_kirch[387663]: --> All data devices are unavailable
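The short-lived ceph containers above (reverent_burnell, elated_kirch) are cephadm's periodic device probe: ceph-volume sees three LVM-backed devices that are already consumed by running OSDs, so it reports them unavailable and no new OSDs are deployed. A comparable manual check, assuming ceph-volume is available on the host or inside a ceph container (the exact command cephadm runs is not shown in this log):

    import json, subprocess

    # `ceph-volume inventory --format json` returns one record per device,
    # including an "available" flag and any "rejected_reasons".
    report = json.loads(subprocess.check_output(
        ["ceph-volume", "inventory", "--format", "json"]))
    for dev in report:
        status = "available" if dev["available"] else "unavailable"
        print(dev["path"], status, dev.get("rejected_reasons", []))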
Feb 25 08:10:24 np0005629333 systemd[1]: libpod-df03f91a4cf884963960011cc693dce570229b045c8970cade49d1cd00820e13.scope: Deactivated successfully.
Feb 25 08:10:24 np0005629333 podman[387647]: 2026-02-25 13:10:24.269336136 +0000 UTC m=+0.705058793 container died df03f91a4cf884963960011cc693dce570229b045c8970cade49d1cd00820e13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Feb 25 08:10:24 np0005629333 systemd[1]: var-lib-containers-storage-overlay-9ea85d4a74b2db8de3b50ebf749215a57e924095c09fa2b82bcc4e46aa78cd17-merged.mount: Deactivated successfully.
Feb 25 08:10:24 np0005629333 podman[387647]: 2026-02-25 13:10:24.5088649 +0000 UTC m=+0.944587587 container remove df03f91a4cf884963960011cc693dce570229b045c8970cade49d1cd00820e13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Feb 25 08:10:24 np0005629333 systemd[1]: libpod-conmon-df03f91a4cf884963960011cc693dce570229b045c8970cade49d1cd00820e13.scope: Deactivated successfully.
Feb 25 08:10:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 08:10:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 4800.6 total, 600.0 interval
    Cumulative writes: 37K writes, 148K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
    Cumulative WAL: 37K writes, 13K syncs, 2.78 writes per sync, written: 0.15 GB, 0.03 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 1807 writes, 6626 keys, 1807 commit groups, 1.0 writes per commit group, ingest: 5.30 MB, 0.01 MB/s
    Interval WAL: 1807 writes, 773 syncs, 2.34 writes per sync, written: 0.01 GB, 0.01 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 08:10:25 np0005629333 podman[387761]: 2026-02-25 13:10:25.10496699 +0000 UTC m=+0.100488335 container create aebd5c626330c1626e80764e43dd5701e4279ea9d8b1683d603f97c32ac067ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 08:10:25 np0005629333 podman[387761]: 2026-02-25 13:10:25.044043872 +0000 UTC m=+0.039565277 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:10:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2712: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:10:25 np0005629333 systemd[1]: Started libpod-conmon-aebd5c626330c1626e80764e43dd5701e4279ea9d8b1683d603f97c32ac067ad.scope.
Feb 25 08:10:25 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:10:25 np0005629333 podman[387761]: 2026-02-25 13:10:25.348764035 +0000 UTC m=+0.344285390 container init aebd5c626330c1626e80764e43dd5701e4279ea9d8b1683d603f97c32ac067ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:10:25 np0005629333 podman[387761]: 2026-02-25 13:10:25.358454318 +0000 UTC m=+0.353975673 container start aebd5c626330c1626e80764e43dd5701e4279ea9d8b1683d603f97c32ac067ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lumiere, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:10:25 np0005629333 awesome_lumiere[387777]: 167 167
Feb 25 08:10:25 np0005629333 systemd[1]: libpod-aebd5c626330c1626e80764e43dd5701e4279ea9d8b1683d603f97c32ac067ad.scope: Deactivated successfully.
Feb 25 08:10:25 np0005629333 podman[387761]: 2026-02-25 13:10:25.483933546 +0000 UTC m=+0.479454911 container attach aebd5c626330c1626e80764e43dd5701e4279ea9d8b1683d603f97c32ac067ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:10:25 np0005629333 podman[387761]: 2026-02-25 13:10:25.485501911 +0000 UTC m=+0.481023326 container died aebd5c626330c1626e80764e43dd5701e4279ea9d8b1683d603f97c32ac067ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lumiere, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:10:25 np0005629333 systemd[1]: var-lib-containers-storage-overlay-36a9b4590c0e515b068d37467e8532f0afbf3140fcf072cbf8c9da1e363bc5fe-merged.mount: Deactivated successfully.
Feb 25 08:10:25 np0005629333 nova_compute[244014]: 2026-02-25 13:10:25.869 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:10:25 np0005629333 podman[387761]: 2026-02-25 13:10:25.902988634 +0000 UTC m=+0.898509989 container remove aebd5c626330c1626e80764e43dd5701e4279ea9d8b1683d603f97c32ac067ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 08:10:25 np0005629333 systemd[1]: libpod-conmon-aebd5c626330c1626e80764e43dd5701e4279ea9d8b1683d603f97c32ac067ad.scope: Deactivated successfully.
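The create → init → start → attach → died → remove sequence above (and its repetitions below) is cephadm running one-shot probe containers against the Ceph image; each lives well under a second. The only output of this one, "167 167", matches a uid/gid lookup for the ceph account, which is created as uid 167 / gid 167 in the official Ceph images. A hypothetical reconstruction of such a probe (the actual command cephadm ran is not visible in this log; only the output is):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # One-shot probe: print uid and gid of the in-container ceph data dir.
    # Assumed command -- the log only shows the container's output "167 167".
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)  # expected: 167 167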
Feb 25 08:10:26 np0005629333 podman[387803]: 2026-02-25 13:10:26.062579234 +0000 UTC m=+0.029278106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:10:26 np0005629333 podman[387803]: 2026-02-25 13:10:26.191351716 +0000 UTC m=+0.158050558 container create 992a76b3ef531b2f5329a32f37352ea02be650a7a594fa1fc48da502ab1cae4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_boyd, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:10:26 np0005629333 systemd[1]: Started libpod-conmon-992a76b3ef531b2f5329a32f37352ea02be650a7a594fa1fc48da502ab1cae4c.scope.
Feb 25 08:10:26 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:10:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82e5dae33155446abc287c6156981fb26365959027938c0c0f9650aba576d1f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:10:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82e5dae33155446abc287c6156981fb26365959027938c0c0f9650aba576d1f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:10:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82e5dae33155446abc287c6156981fb26365959027938c0c0f9650aba576d1f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:10:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82e5dae33155446abc287c6156981fb26365959027938c0c0f9650aba576d1f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:10:26 np0005629333 podman[387803]: 2026-02-25 13:10:26.501516222 +0000 UTC m=+0.468215034 container init 992a76b3ef531b2f5329a32f37352ea02be650a7a594fa1fc48da502ab1cae4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_boyd, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:10:26 np0005629333 podman[387803]: 2026-02-25 13:10:26.507991065 +0000 UTC m=+0.474689877 container start 992a76b3ef531b2f5329a32f37352ea02be650a7a594fa1fc48da502ab1cae4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_boyd, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:10:26 np0005629333 podman[387803]: 2026-02-25 13:10:26.64541176 +0000 UTC m=+0.612110612 container attach 992a76b3ef531b2f5329a32f37352ea02be650a7a594fa1fc48da502ab1cae4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]: {
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:    "0": [
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:        {
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "devices": [
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "/dev/loop3"
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            ],
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "lv_name": "ceph_lv0",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "lv_size": "21470642176",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "name": "ceph_lv0",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "tags": {
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.cluster_name": "ceph",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.crush_device_class": "",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.encrypted": "0",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.objectstore": "bluestore",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.osd_id": "0",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.type": "block",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.vdo": "0",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.with_tpm": "0"
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            },
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "type": "block",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "vg_name": "ceph_vg0"
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:        }
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:    ],
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:    "1": [
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:        {
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "devices": [
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "/dev/loop4"
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            ],
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "lv_name": "ceph_lv1",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "lv_size": "21470642176",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "name": "ceph_lv1",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "tags": {
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.cluster_name": "ceph",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.crush_device_class": "",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.encrypted": "0",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.objectstore": "bluestore",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.osd_id": "1",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.type": "block",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.vdo": "0",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.with_tpm": "0"
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            },
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "type": "block",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "vg_name": "ceph_vg1"
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:        }
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:    ],
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:    "2": [
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:        {
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "devices": [
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "/dev/loop5"
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            ],
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "lv_name": "ceph_lv2",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "lv_size": "21470642176",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "name": "ceph_lv2",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "tags": {
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.cluster_name": "ceph",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.crush_device_class": "",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.encrypted": "0",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.objectstore": "bluestore",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.osd_id": "2",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.type": "block",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.vdo": "0",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:                "ceph.with_tpm": "0"
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            },
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "type": "block",
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:            "vg_name": "ceph_vg2"
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:        }
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]:    ]
Feb 25 08:10:26 np0005629333 jovial_boyd[387820]: }
Feb 25 08:10:26 np0005629333 systemd[1]: libpod-992a76b3ef531b2f5329a32f37352ea02be650a7a594fa1fc48da502ab1cae4c.scope: Deactivated successfully.
Feb 25 08:10:26 np0005629333 podman[387803]: 2026-02-25 13:10:26.793578488 +0000 UTC m=+0.760277280 container died 992a76b3ef531b2f5329a32f37352ea02be650a7a594fa1fc48da502ab1cae4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_boyd, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:10:27 np0005629333 systemd[1]: var-lib-containers-storage-overlay-82e5dae33155446abc287c6156981fb26365959027938c0c0f9650aba576d1f4-merged.mount: Deactivated successfully.
Feb 25 08:10:27 np0005629333 ceph-mgr[76641]: [devicehealth INFO root] Check health
Feb 25 08:10:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2713: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:10:27 np0005629333 podman[387803]: 2026-02-25 13:10:27.302850998 +0000 UTC m=+1.269549830 container remove 992a76b3ef531b2f5329a32f37352ea02be650a7a594fa1fc48da502ab1cae4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_boyd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 08:10:27 np0005629333 systemd[1]: libpod-conmon-992a76b3ef531b2f5329a32f37352ea02be650a7a594fa1fc48da502ab1cae4c.scope: Deactivated successfully.
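The JSON that jovial_boyd printed above is an inventory of BlueStore logical volumes keyed by OSD id, in the shape produced by ceph-volume lvm list --format json; the config-key set calls at 08:10:30 below then persist host state under mgr/cephadm/host.compute-0. A short sketch of reducing such a report to the facts that matter (OSD id, backing device, LV path, OSD fsid); the report literal is a trimmed stand-in for the full output above (OSD 0 only; OSDs 1 and 2 are analogous):

    import json

    # Trimmed stand-in for the logged report.
    report = json.loads("""
    {
      "0": [{
        "devices": ["/dev/loop3"],
        "lv_path": "/dev/ceph_vg0/ceph_lv0",
        "tags": {"ceph.osd_id": "0", "ceph.type": "block",
                 "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441"}
      }]
    }
    """)

    for osd_id, lvs in sorted(report.items()):
        for lv in lvs:
            print(osd_id, lv["devices"][0], lv["lv_path"],
                  lv["tags"]["ceph.osd_fsid"])
    # -> 0 /dev/loop3 /dev/ceph_vg0/ceph_lv0 d19afe3c-7923-4776-bcc2-88886150b441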
Feb 25 08:10:27 np0005629333 nova_compute[244014]: 2026-02-25 13:10:27.574 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:10:27 np0005629333 podman[387905]: 2026-02-25 13:10:27.93907956 +0000 UTC m=+0.124070740 container create 33b9f3f2ce8e1500d3cebd018230dc7d1268aeb68b3895ccd9c026a46a22a5a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_beaver, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 08:10:27 np0005629333 podman[387905]: 2026-02-25 13:10:27.848990059 +0000 UTC m=+0.033981279 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:10:28 np0005629333 systemd[1]: Started libpod-conmon-33b9f3f2ce8e1500d3cebd018230dc7d1268aeb68b3895ccd9c026a46a22a5a1.scope.
Feb 25 08:10:28 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:10:28 np0005629333 podman[387905]: 2026-02-25 13:10:28.290848269 +0000 UTC m=+0.475839459 container init 33b9f3f2ce8e1500d3cebd018230dc7d1268aeb68b3895ccd9c026a46a22a5a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_beaver, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:10:28 np0005629333 podman[387905]: 2026-02-25 13:10:28.300130751 +0000 UTC m=+0.485121921 container start 33b9f3f2ce8e1500d3cebd018230dc7d1268aeb68b3895ccd9c026a46a22a5a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 08:10:28 np0005629333 infallible_beaver[387921]: 167 167
Feb 25 08:10:28 np0005629333 systemd[1]: libpod-33b9f3f2ce8e1500d3cebd018230dc7d1268aeb68b3895ccd9c026a46a22a5a1.scope: Deactivated successfully.
Feb 25 08:10:28 np0005629333 podman[387905]: 2026-02-25 13:10:28.424551899 +0000 UTC m=+0.609543119 container attach 33b9f3f2ce8e1500d3cebd018230dc7d1268aeb68b3895ccd9c026a46a22a5a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:10:28 np0005629333 podman[387905]: 2026-02-25 13:10:28.425051693 +0000 UTC m=+0.610042863 container died 33b9f3f2ce8e1500d3cebd018230dc7d1268aeb68b3895ccd9c026a46a22a5a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:10:28 np0005629333 systemd[1]: var-lib-containers-storage-overlay-bfd77ceea6eb450576a33ffa1d08214b58bfc7849c22f00f96711566b6c9d246-merged.mount: Deactivated successfully.
Feb 25 08:10:28 np0005629333 podman[387905]: 2026-02-25 13:10:28.726025 +0000 UTC m=+0.911016150 container remove 33b9f3f2ce8e1500d3cebd018230dc7d1268aeb68b3895ccd9c026a46a22a5a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:10:28 np0005629333 systemd[1]: libpod-conmon-33b9f3f2ce8e1500d3cebd018230dc7d1268aeb68b3895ccd9c026a46a22a5a1.scope: Deactivated successfully.
Feb 25 08:10:28 np0005629333 podman[387946]: 2026-02-25 13:10:28.95687029 +0000 UTC m=+0.092094198 container create d173b7a4645717e0c55202379206bf2fc97a2fae86205836308cc7730e6b3652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_morse, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 08:10:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:10:28 np0005629333 podman[387946]: 2026-02-25 13:10:28.890395716 +0000 UTC m=+0.025619694 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:10:29 np0005629333 systemd[1]: Started libpod-conmon-d173b7a4645717e0c55202379206bf2fc97a2fae86205836308cc7730e6b3652.scope.
Feb 25 08:10:29 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:10:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa3ffde88bdedbc12627b9bdec881582b5b463c4ae0e16d4ddb5cb145f8b0152/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:10:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa3ffde88bdedbc12627b9bdec881582b5b463c4ae0e16d4ddb5cb145f8b0152/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:10:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa3ffde88bdedbc12627b9bdec881582b5b463c4ae0e16d4ddb5cb145f8b0152/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:10:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa3ffde88bdedbc12627b9bdec881582b5b463c4ae0e16d4ddb5cb145f8b0152/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:10:29 np0005629333 podman[387946]: 2026-02-25 13:10:29.122237803 +0000 UTC m=+0.257461731 container init d173b7a4645717e0c55202379206bf2fc97a2fae86205836308cc7730e6b3652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 08:10:29 np0005629333 podman[387946]: 2026-02-25 13:10:29.12922648 +0000 UTC m=+0.264450368 container start d173b7a4645717e0c55202379206bf2fc97a2fae86205836308cc7730e6b3652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_morse, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:10:29 np0005629333 podman[387946]: 2026-02-25 13:10:29.159208106 +0000 UTC m=+0.294432004 container attach d173b7a4645717e0c55202379206bf2fc97a2fae86205836308cc7730e6b3652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_morse, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 08:10:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2714: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:10:29 np0005629333 lvm[388041]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:10:29 np0005629333 lvm[388041]: VG ceph_vg1 finished
Feb 25 08:10:29 np0005629333 lvm[388040]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:10:29 np0005629333 lvm[388040]: VG ceph_vg0 finished
Feb 25 08:10:29 np0005629333 lvm[388043]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:10:29 np0005629333 lvm[388043]: VG ceph_vg2 finished
Feb 25 08:10:29 np0005629333 beautiful_morse[387962]: {}
Feb 25 08:10:30 np0005629333 systemd[1]: libpod-d173b7a4645717e0c55202379206bf2fc97a2fae86205836308cc7730e6b3652.scope: Deactivated successfully.
Feb 25 08:10:30 np0005629333 systemd[1]: libpod-d173b7a4645717e0c55202379206bf2fc97a2fae86205836308cc7730e6b3652.scope: Consumed 1.308s CPU time.
Feb 25 08:10:30 np0005629333 podman[387946]: 2026-02-25 13:10:30.000968863 +0000 UTC m=+1.136192771 container died d173b7a4645717e0c55202379206bf2fc97a2fae86205836308cc7730e6b3652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_morse, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:10:30 np0005629333 systemd[1]: var-lib-containers-storage-overlay-fa3ffde88bdedbc12627b9bdec881582b5b463c4ae0e16d4ddb5cb145f8b0152-merged.mount: Deactivated successfully.
Feb 25 08:10:30 np0005629333 podman[387946]: 2026-02-25 13:10:30.290664332 +0000 UTC m=+1.425888250 container remove d173b7a4645717e0c55202379206bf2fc97a2fae86205836308cc7730e6b3652 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:10:30 np0005629333 systemd[1]: libpod-conmon-d173b7a4645717e0c55202379206bf2fc97a2fae86205836308cc7730e6b3652.scope: Deactivated successfully.
Feb 25 08:10:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:10:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:10:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:10:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:10:30 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:10:30 np0005629333 nova_compute[244014]: 2026-02-25 13:10:30.871 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:10:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:10:31
Feb 25 08:10:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:10:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:10:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.control', 'vms', '.mgr', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'backups', 'images', 'cephfs.cephfs.meta']
Feb 25 08:10:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:10:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2715: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:10:31 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:10:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:10:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:10:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:10:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:10:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:10:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:10:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:10:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:10:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:10:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:10:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:10:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:10:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:10:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:10:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:10:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:10:32 np0005629333 nova_compute[244014]: 2026-02-25 13:10:32.577 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:10:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2716: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:10:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:10:34 np0005629333 podman[388083]: 2026-02-25 13:10:34.734071881 +0000 UTC m=+0.077164827 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 08:10:34 np0005629333 podman[388084]: 2026-02-25 13:10:34.791902592 +0000 UTC m=+0.129565515 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller)
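Both health_status events above report healthy with health_failing_streak=0; podman periodically runs the command configured under 'healthcheck' in config_data (the /openstack/healthcheck script, bind-mounted read-only into each container) and records the result. The same check can be exercised by hand; a sketch, assuming the container names shown in the log:

    import subprocess

    # Trigger each container's configured healthcheck once.
    # Exit status 0 means healthy; non-zero increments the failing streak.
    for name in ("ovn_metadata_agent", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")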
Feb 25 08:10:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2717: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:10:35 np0005629333 nova_compute[244014]: 2026-02-25 13:10:35.872 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:10:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2718: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:10:37 np0005629333 nova_compute[244014]: 2026-02-25 13:10:37.580 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:10:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:10:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2719: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:10:40 np0005629333 nova_compute[244014]: 2026-02-25 13:10:40.874 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:10:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2720: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:10:42 np0005629333 nova_compute[244014]: 2026-02-25 13:10:42.584 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:10:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:10:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:10:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:10:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:10:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.723792564281303e-05 of space, bias 1.0, pg target 0.005171377692843909 quantized to 32 (current 32)
Feb 25 08:10:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:10:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:10:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:10:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:10:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:10:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0018289410571936495 of space, bias 1.0, pg target 0.5486823171580949 quantized to 32 (current 32)
Feb 25 08:10:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:10:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4127973638615152e-06 of space, bias 4.0, pg target 0.0016953568366338183 quantized to 16 (current 16)
Feb 25 08:10:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:10:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:10:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:10:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:10:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:10:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:10:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:10:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:10:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:10:42 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
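Every pg target above is reproducible from the logged inputs: a pool's target is its share of raw capacity (the "using ... of space" figure, computed against the 64411926528-byte subtree) times its bias times a cluster-wide PG budget. With the three OSDs seen earlier in this log and the default mon_target_pg_per_osd of 100, that budget is 300, which matches every line; a check in Python (the 300 multiplier is inferred, not logged):

    # Capacity ratio and bias exactly as logged by pg_autoscaler above.
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (1.723792564281303e-05, 1.0),
        "images":             (0.0018289410571936495, 1.0),
        "cephfs.cephfs.meta": (1.4127973638615152e-06, 4.0),
    }
    PG_BUDGET = 3 * 100  # 3 OSDs x mon_target_pg_per_osd (default 100) -- inferred

    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * PG_BUDGET)
    # .mgr               -> log reports 0.0021557249951162337
    # vms                -> log reports 0.005171377692843909
    # images             -> log reports 0.5486823171580949
    # cephfs.cephfs.meta -> log reports 0.0016953568366338183

The autoscaler then quantizes each target and, since none of the results cross its change threshold, leaves every pool at its current pg_num.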
Feb 25 08:10:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2721: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:10:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:10:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2722: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:10:45 np0005629333 nova_compute[244014]: 2026-02-25 13:10:45.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:10:45 np0005629333 nova_compute[244014]: 2026-02-25 13:10:45.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 08:10:45 np0005629333 nova_compute[244014]: 2026-02-25 13:10:45.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 08:10:45 np0005629333 nova_compute[244014]: 2026-02-25 13:10:45.879 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:10:45 np0005629333 nova_compute[244014]: 2026-02-25 13:10:45.923 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 08:10:45 np0005629333 nova_compute[244014]: 2026-02-25 13:10:45.924 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:10:45 np0005629333 nova_compute[244014]: 2026-02-25 13:10:45.962 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:10:45 np0005629333 nova_compute[244014]: 2026-02-25 13:10:45.962 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:10:45 np0005629333 nova_compute[244014]: 2026-02-25 13:10:45.962 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:10:45 np0005629333 nova_compute[244014]: 2026-02-25 13:10:45.963 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 08:10:45 np0005629333 nova_compute[244014]: 2026-02-25 13:10:45.963 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:10:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:10:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/640803340' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:10:46 np0005629333 nova_compute[244014]: 2026-02-25 13:10:46.497 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
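The CMD lines show the resource audit shelling out to `ceph df --format=json` with the client.openstack identity and parsing the result to size the RBD-backed disk pool. A sketch of the same probe, assuming a reachable cluster and the keyring referenced in the log:

```python
import json
import subprocess

def ceph_df(conf="/etc/ceph/ceph.conf", client="openstack"):
    """Run the probe logged above and return its parsed JSON output."""
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", client, "--conf", conf],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

# Cluster-wide totals live under "stats" (key layout per recent Ceph releases).
print(ceph_df()["stats"]["total_avail_bytes"])
```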
Feb 25 08:10:46 np0005629333 nova_compute[244014]: 2026-02-25 13:10:46.656 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 08:10:46 np0005629333 nova_compute[244014]: 2026-02-25 13:10:46.657 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3589MB free_disk=59.98724717646837GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 08:10:46 np0005629333 nova_compute[244014]: 2026-02-25 13:10:46.657 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:10:46 np0005629333 nova_compute[244014]: 2026-02-25 13:10:46.657 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:10:46 np0005629333 nova_compute[244014]: 2026-02-25 13:10:46.902 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 08:10:46 np0005629333 nova_compute[244014]: 2026-02-25 13:10:46.902 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 08:10:46 np0005629333 nova_compute[244014]: 2026-02-25 13:10:46.995 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 08:10:47 np0005629333 nova_compute[244014]: 2026-02-25 13:10:47.117 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 08:10:47 np0005629333 nova_compute[244014]: 2026-02-25 13:10:47.118 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
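The inventory payload above is what placement turns into schedulable capacity via (total - reserved) * allocation_ratio: 8 * 4.0 = 32 VCPUs, 7679 - 512 = 7167 MB of RAM, and (59 - 1) * 0.9 = 52.2 GB of disk. A worked check of that arithmetic (illustrative only, using the records logged above):

```python
# Inventory records copied from the ProviderTree update logged above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

def capacity(rec):
    # Placement's effective capacity: (total - reserved) * allocation_ratio.
    return (rec["total"] - rec["reserved"]) * rec["allocation_ratio"]

for name, rec in inventory.items():
    print(name, round(capacity(rec), 1))
# -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
```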
Feb 25 08:10:47 np0005629333 nova_compute[244014]: 2026-02-25 13:10:47.132 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 08:10:47 np0005629333 nova_compute[244014]: 2026-02-25 13:10:47.154 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 08:10:47 np0005629333 nova_compute[244014]: 2026-02-25 13:10:47.174 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:10:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2723: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:10:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:10:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4136985533' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:10:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:10:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4136985533' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:10:47 np0005629333 nova_compute[244014]: 2026-02-25 13:10:47.588 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:10:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:10:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3286221838' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:10:47 np0005629333 nova_compute[244014]: 2026-02-25 13:10:47.728 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:10:47 np0005629333 nova_compute[244014]: 2026-02-25 13:10:47.734 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:10:47 np0005629333 nova_compute[244014]: 2026-02-25 13:10:47.794 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 08:10:47 np0005629333 nova_compute[244014]: 2026-02-25 13:10:47.797 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 08:10:47 np0005629333 nova_compute[244014]: 2026-02-25 13:10:47.797 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:10:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:10:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2724: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:10:50 np0005629333 nova_compute[244014]: 2026-02-25 13:10:50.793 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:10:50 np0005629333 nova_compute[244014]: 2026-02-25 13:10:50.794 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:10:50 np0005629333 nova_compute[244014]: 2026-02-25 13:10:50.879 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:10:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2725: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:10:52 np0005629333 nova_compute[244014]: 2026-02-25 13:10:52.592 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:10:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2726: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:10:53 np0005629333 nova_compute[244014]: 2026-02-25 13:10:53.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:10:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:10:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:10:55.050 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:10:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:10:55.051 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:10:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:10:55.051 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:10:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2727: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:10:55 np0005629333 nova_compute[244014]: 2026-02-25 13:10:55.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:10:55 np0005629333 nova_compute[244014]: 2026-02-25 13:10:55.880 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:10:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2728: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:10:57 np0005629333 nova_compute[244014]: 2026-02-25 13:10:57.595 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:10:58 np0005629333 nova_compute[244014]: 2026-02-25 13:10:58.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:10:58 np0005629333 nova_compute[244014]: 2026-02-25 13:10:58.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 08:10:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:10:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2729: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:11:00 np0005629333 nova_compute[244014]: 2026-02-25 13:11:00.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:11:00 np0005629333 nova_compute[244014]: 2026-02-25 13:11:00.883 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:11:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2730: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:11:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:11:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:11:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:11:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:11:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:11:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:11:02 np0005629333 nova_compute[244014]: 2026-02-25 13:11:02.599 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:11:02 np0005629333 nova_compute[244014]: 2026-02-25 13:11:02.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:11:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2731: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:11:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:11:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2732: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:11:05 np0005629333 podman[388172]: 2026-02-25 13:11:05.73822123 +0000 UTC m=+0.075872030 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0)
Feb 25 08:11:05 np0005629333 podman[388173]: 2026-02-25 13:11:05.81092846 +0000 UTC m=+0.135903333 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
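The two podman records above are timer-driven health_status events for ovn_metadata_agent and ovn_controller, produced by each container's configured /openstack/healthcheck test (visible in config_data). The same check can be run on demand; a sketch assuming podman on PATH and the container names from the log:

```python
import subprocess

# Trigger the configured healthcheck by hand; exit status 0 corresponds
# to the health_status=healthy fields in the events above.
for name in ("ovn_metadata_agent", "ovn_controller"):
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    print(name, "healthy" if rc == 0 else "unhealthy")
```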
Feb 25 08:11:05 np0005629333 nova_compute[244014]: 2026-02-25 13:11:05.885 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:11:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2733: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:11:07 np0005629333 nova_compute[244014]: 2026-02-25 13:11:07.603 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:11:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:11:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2734: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:11:10 np0005629333 nova_compute[244014]: 2026-02-25 13:11:10.887 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:11:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2735: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:11:12 np0005629333 nova_compute[244014]: 2026-02-25 13:11:12.607 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:11:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2736: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:11:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:11:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2737: 305 pgs: 305 active+clean; 112 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:11:15 np0005629333 nova_compute[244014]: 2026-02-25 13:11:15.889 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:11:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Feb 25 08:11:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Feb 25 08:11:16 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Feb 25 08:11:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2739: 305 pgs: 305 active+clean; 133 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.9 KiB/s rd, 2.0 MiB/s wr, 14 op/s
Feb 25 08:11:17 np0005629333 nova_compute[244014]: 2026-02-25 13:11:17.611 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:11:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:11:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Feb 25 08:11:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Feb 25 08:11:19 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Feb 25 08:11:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2741: 305 pgs: 305 active+clean; 133 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 2.6 MiB/s wr, 18 op/s
Feb 25 08:11:20 np0005629333 nova_compute[244014]: 2026-02-25 13:11:20.892 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:11:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2742: 305 pgs: 305 active+clean; 133 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 2.6 MiB/s wr, 18 op/s
Feb 25 08:11:22 np0005629333 nova_compute[244014]: 2026-02-25 13:11:22.615 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:11:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2743: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 5.1 MiB/s wr, 68 op/s
Feb 25 08:11:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:11:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2744: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 2.5 MiB/s wr, 49 op/s
Feb 25 08:11:25 np0005629333 nova_compute[244014]: 2026-02-25 13:11:25.893 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:11:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2745: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 2.0 MiB/s wr, 95 op/s
Feb 25 08:11:27 np0005629333 nova_compute[244014]: 2026-02-25 13:11:27.618 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:11:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:11:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2746: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 2.0 MiB/s wr, 95 op/s
Feb 25 08:11:30 np0005629333 nova_compute[244014]: 2026-02-25 13:11:30.894 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:11:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:11:31
Feb 25 08:11:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:11:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:11:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['backups', 'vms', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'images', '.mgr']
Feb 25 08:11:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
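This balancer pass runs in upmap mode under a 5% max-misplaced budget and, with all 305 PGs already active+clean and evenly placed across the listed pools, prepares none of its up-to-10 candidate changes. Its state can be confirmed from the CLI; a sketch assuming an admin-capable keyring on this host:

```python
import subprocess

# Query the mgr balancer module for its current mode and plan state.
print(subprocess.run(["ceph", "balancer", "status"],
                     check=True, capture_output=True, text=True).stdout)
```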
Feb 25 08:11:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:11:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:11:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:11:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:11:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:11:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:11:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:11:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:11:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:11:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:11:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:11:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2747: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 1.7 MiB/s wr, 79 op/s
Feb 25 08:11:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:11:31 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:11:31 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:11:31 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:11:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:11:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:11:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:11:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:11:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:11:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:11:31 np0005629333 podman[388364]: 2026-02-25 13:11:31.615759495 +0000 UTC m=+0.027922209 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:11:31 np0005629333 podman[388364]: 2026-02-25 13:11:31.747075248 +0000 UTC m=+0.159237912 container create 85472d00eb11888ecc3a896871373054c9d7ebfefa8d1a7a80ec10f27d33f887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shirley, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 08:11:31 np0005629333 systemd[1]: Started libpod-conmon-85472d00eb11888ecc3a896871373054c9d7ebfefa8d1a7a80ec10f27d33f887.scope.
Feb 25 08:11:31 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:11:31 np0005629333 podman[388364]: 2026-02-25 13:11:31.89859327 +0000 UTC m=+0.310756044 container init 85472d00eb11888ecc3a896871373054c9d7ebfefa8d1a7a80ec10f27d33f887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 08:11:31 np0005629333 podman[388364]: 2026-02-25 13:11:31.908547921 +0000 UTC m=+0.320710635 container start 85472d00eb11888ecc3a896871373054c9d7ebfefa8d1a7a80ec10f27d33f887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:11:31 np0005629333 confident_shirley[388380]: 167 167
Feb 25 08:11:31 np0005629333 systemd[1]: libpod-85472d00eb11888ecc3a896871373054c9d7ebfefa8d1a7a80ec10f27d33f887.scope: Deactivated successfully.
Feb 25 08:11:31 np0005629333 podman[388364]: 2026-02-25 13:11:31.949155346 +0000 UTC m=+0.361318070 container attach 85472d00eb11888ecc3a896871373054c9d7ebfefa8d1a7a80ec10f27d33f887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shirley, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 08:11:31 np0005629333 podman[388364]: 2026-02-25 13:11:31.949900397 +0000 UTC m=+0.362063091 container died 85472d00eb11888ecc3a896871373054c9d7ebfefa8d1a7a80ec10f27d33f887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:11:32 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ba2bfa754e0f2961c59b94ad2797e57100ebfba7b1a626079925ea3e79962075-merged.mount: Deactivated successfully.
Feb 25 08:11:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:11:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:11:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:11:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:11:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:11:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:11:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:11:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:11:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:11:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:11:32 np0005629333 podman[388364]: 2026-02-25 13:11:32.306821942 +0000 UTC m=+0.718984636 container remove 85472d00eb11888ecc3a896871373054c9d7ebfefa8d1a7a80ec10f27d33f887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shirley, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 08:11:32 np0005629333 systemd[1]: libpod-conmon-85472d00eb11888ecc3a896871373054c9d7ebfefa8d1a7a80ec10f27d33f887.scope: Deactivated successfully.
Feb 25 08:11:32 np0005629333 podman[388404]: 2026-02-25 13:11:32.540970085 +0000 UTC m=+0.101210995 container create c2bb1e9a84c431c88329380b73705627e0ae9e73a5aa89a98ac94c468263a020 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lumiere, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:11:32 np0005629333 podman[388404]: 2026-02-25 13:11:32.465997201 +0000 UTC m=+0.026238161 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:11:32 np0005629333 systemd[1]: Started libpod-conmon-c2bb1e9a84c431c88329380b73705627e0ae9e73a5aa89a98ac94c468263a020.scope.
Feb 25 08:11:32 np0005629333 nova_compute[244014]: 2026-02-25 13:11:32.620 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:11:32 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:11:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbac53e56975fd2f6573273905cfc53e2b2fcf32ff0d574bdee3a8fcc670ebef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:11:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbac53e56975fd2f6573273905cfc53e2b2fcf32ff0d574bdee3a8fcc670ebef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:11:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbac53e56975fd2f6573273905cfc53e2b2fcf32ff0d574bdee3a8fcc670ebef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:11:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbac53e56975fd2f6573273905cfc53e2b2fcf32ff0d574bdee3a8fcc670ebef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:11:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbac53e56975fd2f6573273905cfc53e2b2fcf32ff0d574bdee3a8fcc670ebef/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
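These kernel notices are the XFS big-timestamps warning: the overlay layers just mounted use 32-bit inode timestamps, valid only up to 0x7fffffff seconds after the epoch. A quick check of where that limit lands:

```python
from datetime import datetime, timezone

# The signed 32-bit time_t ceiling the kernel is warning about.
print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc).isoformat())
# -> 2038-01-19T03:14:07+00:00
```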
Feb 25 08:11:32 np0005629333 podman[388404]: 2026-02-25 13:11:32.673892353 +0000 UTC m=+0.234133313 container init c2bb1e9a84c431c88329380b73705627e0ae9e73a5aa89a98ac94c468263a020 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lumiere, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 08:11:32 np0005629333 podman[388404]: 2026-02-25 13:11:32.684607115 +0000 UTC m=+0.244848015 container start c2bb1e9a84c431c88329380b73705627e0ae9e73a5aa89a98ac94c468263a020 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lumiere, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 08:11:32 np0005629333 podman[388404]: 2026-02-25 13:11:32.704107065 +0000 UTC m=+0.264348025 container attach c2bb1e9a84c431c88329380b73705627e0ae9e73a5aa89a98ac94c468263a020 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lumiere, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:11:33 np0005629333 charming_lumiere[388422]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:11:33 np0005629333 charming_lumiere[388422]: --> All data devices are unavailable
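The short-lived ceph containers in this stretch (confident_shirley, charming_lumiere, adoring_satoshi) are cephadm's periodic device probes; charming_lumiere's two lines are ceph-volume reporting that all three LVM-backed data devices are already consumed, so no new OSDs get created. A related probe can be run directly; a sketch assuming it executes where ceph-volume is installed, e.g. inside the quay.io/ceph/ceph image seen in the podman events above:

```python
import json
import subprocess

# cephadm-style device probe: ask ceph-volume which disks remain usable.
out = subprocess.run(
    ["ceph-volume", "inventory", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout
for dev in json.loads(out):
    print(dev["path"], "available" if dev["available"] else "unavailable")
```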
Feb 25 08:11:33 np0005629333 systemd[1]: libpod-c2bb1e9a84c431c88329380b73705627e0ae9e73a5aa89a98ac94c468263a020.scope: Deactivated successfully.
Feb 25 08:11:33 np0005629333 podman[388404]: 2026-02-25 13:11:33.126232948 +0000 UTC m=+0.686473848 container died c2bb1e9a84c431c88329380b73705627e0ae9e73a5aa89a98ac94c468263a020 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lumiere, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:11:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2748: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 1.7 MiB/s wr, 79 op/s
Feb 25 08:11:33 np0005629333 systemd[1]: var-lib-containers-storage-overlay-cbac53e56975fd2f6573273905cfc53e2b2fcf32ff0d574bdee3a8fcc670ebef-merged.mount: Deactivated successfully.
Feb 25 08:11:33 np0005629333 podman[388404]: 2026-02-25 13:11:33.301519501 +0000 UTC m=+0.861760411 container remove c2bb1e9a84c431c88329380b73705627e0ae9e73a5aa89a98ac94c468263a020 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 08:11:33 np0005629333 systemd[1]: libpod-conmon-c2bb1e9a84c431c88329380b73705627e0ae9e73a5aa89a98ac94c468263a020.scope: Deactivated successfully.
Feb 25 08:11:33 np0005629333 podman[388519]: 2026-02-25 13:11:33.835920921 +0000 UTC m=+0.091244654 container create 27f32422b1287c1688ccd7bc3627aebc609e5f6fa09c4db20250ceecff8ad4e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_satoshi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 08:11:33 np0005629333 podman[388519]: 2026-02-25 13:11:33.780984612 +0000 UTC m=+0.036308395 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:11:33 np0005629333 systemd[1]: Started libpod-conmon-27f32422b1287c1688ccd7bc3627aebc609e5f6fa09c4db20250ceecff8ad4e7.scope.
Feb 25 08:11:33 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:11:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:11:33 np0005629333 podman[388519]: 2026-02-25 13:11:33.985557471 +0000 UTC m=+0.240881244 container init 27f32422b1287c1688ccd7bc3627aebc609e5f6fa09c4db20250ceecff8ad4e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_satoshi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:11:33 np0005629333 podman[388519]: 2026-02-25 13:11:33.993265828 +0000 UTC m=+0.248589551 container start 27f32422b1287c1688ccd7bc3627aebc609e5f6fa09c4db20250ceecff8ad4e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:11:34 np0005629333 adoring_satoshi[388535]: 167 167
Feb 25 08:11:34 np0005629333 systemd[1]: libpod-27f32422b1287c1688ccd7bc3627aebc609e5f6fa09c4db20250ceecff8ad4e7.scope: Deactivated successfully.
Feb 25 08:11:34 np0005629333 conmon[388535]: conmon 27f32422b1287c1688cc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-27f32422b1287c1688ccd7bc3627aebc609e5f6fa09c4db20250ceecff8ad4e7.scope/container/memory.events
Feb 25 08:11:34 np0005629333 podman[388519]: 2026-02-25 13:11:34.017777789 +0000 UTC m=+0.273101572 container attach 27f32422b1287c1688ccd7bc3627aebc609e5f6fa09c4db20250ceecff8ad4e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_satoshi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:11:34 np0005629333 podman[388519]: 2026-02-25 13:11:34.018328935 +0000 UTC m=+0.273652658 container died 27f32422b1287c1688ccd7bc3627aebc609e5f6fa09c4db20250ceecff8ad4e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 08:11:34 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ef1218157ad34fb7ad8e7bf239015886f71dd45abb17aee75de5044e1475d83d-merged.mount: Deactivated successfully.
Feb 25 08:11:34 np0005629333 podman[388519]: 2026-02-25 13:11:34.211587394 +0000 UTC m=+0.466911087 container remove 27f32422b1287c1688ccd7bc3627aebc609e5f6fa09c4db20250ceecff8ad4e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_satoshi, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 08:11:34 np0005629333 systemd[1]: libpod-conmon-27f32422b1287c1688ccd7bc3627aebc609e5f6fa09c4db20250ceecff8ad4e7.scope: Deactivated successfully.
Feb 25 08:11:34 np0005629333 podman[388561]: 2026-02-25 13:11:34.444121322 +0000 UTC m=+0.096842062 container create fd673f746eec532bc4a13747e2665c6f25bd1901ff2d0948dbc4c20414a47197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_leavitt, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 08:11:34 np0005629333 podman[388561]: 2026-02-25 13:11:34.387573857 +0000 UTC m=+0.040294397 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:11:34 np0005629333 systemd[1]: Started libpod-conmon-fd673f746eec532bc4a13747e2665c6f25bd1901ff2d0948dbc4c20414a47197.scope.
Feb 25 08:11:34 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:11:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da387f8eef0e1d2c0eaa2d7b51b17361d88a0503413c4c6fbf8ca0efc85f578/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:11:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da387f8eef0e1d2c0eaa2d7b51b17361d88a0503413c4c6fbf8ca0efc85f578/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:11:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da387f8eef0e1d2c0eaa2d7b51b17361d88a0503413c4c6fbf8ca0efc85f578/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:11:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da387f8eef0e1d2c0eaa2d7b51b17361d88a0503413c4c6fbf8ca0efc85f578/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:11:34 np0005629333 podman[388561]: 2026-02-25 13:11:34.610255756 +0000 UTC m=+0.262976267 container init fd673f746eec532bc4a13747e2665c6f25bd1901ff2d0948dbc4c20414a47197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_leavitt, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:11:34 np0005629333 podman[388561]: 2026-02-25 13:11:34.621396821 +0000 UTC m=+0.274117371 container start fd673f746eec532bc4a13747e2665c6f25bd1901ff2d0948dbc4c20414a47197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 08:11:34 np0005629333 podman[388561]: 2026-02-25 13:11:34.692057533 +0000 UTC m=+0.344778073 container attach fd673f746eec532bc4a13747e2665c6f25bd1901ff2d0948dbc4c20414a47197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]: {
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:    "0": [
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:        {
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "devices": [
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "/dev/loop3"
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            ],
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "lv_name": "ceph_lv0",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "lv_size": "21470642176",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "name": "ceph_lv0",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "tags": {
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.cluster_name": "ceph",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.crush_device_class": "",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.encrypted": "0",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.objectstore": "bluestore",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.osd_id": "0",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.type": "block",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.vdo": "0",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.with_tpm": "0"
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            },
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "type": "block",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "vg_name": "ceph_vg0"
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:        }
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:    ],
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:    "1": [
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:        {
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "devices": [
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "/dev/loop4"
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            ],
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "lv_name": "ceph_lv1",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "lv_size": "21470642176",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "name": "ceph_lv1",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "tags": {
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.cluster_name": "ceph",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.crush_device_class": "",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.encrypted": "0",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.objectstore": "bluestore",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.osd_id": "1",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.type": "block",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.vdo": "0",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.with_tpm": "0"
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            },
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "type": "block",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "vg_name": "ceph_vg1"
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:        }
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:    ],
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:    "2": [
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:        {
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "devices": [
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "/dev/loop5"
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            ],
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "lv_name": "ceph_lv2",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "lv_size": "21470642176",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "name": "ceph_lv2",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "tags": {
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.cluster_name": "ceph",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.crush_device_class": "",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.encrypted": "0",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.objectstore": "bluestore",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.osd_id": "2",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.type": "block",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.vdo": "0",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:                "ceph.with_tpm": "0"
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            },
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "type": "block",
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:            "vg_name": "ceph_vg2"
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:        }
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]:    ]
Feb 25 08:11:34 np0005629333 eloquent_leavitt[388577]: }
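The JSON blob emitted by eloquent_leavitt above has the shape of `ceph-volume lvm list --format json` output (an assumption based on its structure: OSD ids as keys, one LV entry per OSD). A minimal sketch for pulling the OSD-to-device mapping out of such a capture, assuming it has been saved to a file:

```python
import json

# Hypothetical capture file holding the JSON printed by the container above.
with open("ceph-volume-lvm-list.json") as f:
    lvm = json.load(f)

# Map each OSD id to its logical volume, backing device(s), and OSD fsid.
for osd_id, entries in sorted(lvm.items()):
    for e in entries:
        print(osd_id, e["lv_path"], e["devices"], e["tags"]["ceph.osd_fsid"])
# e.g.: 0 /dev/ceph_vg0/ceph_lv0 ['/dev/loop3'] d19afe3c-7923-4776-bcc2-88886150b441
```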
Feb 25 08:11:34 np0005629333 systemd[1]: libpod-fd673f746eec532bc4a13747e2665c6f25bd1901ff2d0948dbc4c20414a47197.scope: Deactivated successfully.
Feb 25 08:11:34 np0005629333 podman[388561]: 2026-02-25 13:11:34.969351843 +0000 UTC m=+0.622072383 container died fd673f746eec532bc4a13747e2665c6f25bd1901ff2d0948dbc4c20414a47197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_leavitt, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:11:35 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7da387f8eef0e1d2c0eaa2d7b51b17361d88a0503413c4c6fbf8ca0efc85f578-merged.mount: Deactivated successfully.
Feb 25 08:11:35 np0005629333 podman[388561]: 2026-02-25 13:11:35.199003637 +0000 UTC m=+0.851724177 container remove fd673f746eec532bc4a13747e2665c6f25bd1901ff2d0948dbc4c20414a47197 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Feb 25 08:11:35 np0005629333 systemd[1]: libpod-conmon-fd673f746eec532bc4a13747e2665c6f25bd1901ff2d0948dbc4c20414a47197.scope: Deactivated successfully.
Feb 25 08:11:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2749: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Feb 25 08:11:35 np0005629333 podman[388660]: 2026-02-25 13:11:35.77924696 +0000 UTC m=+0.099181958 container create 731fd28c69a74ae7c5026142e38a6b63b510cca13690649c0fcde874e7d4e230 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_pike, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:11:35 np0005629333 podman[388660]: 2026-02-25 13:11:35.711981593 +0000 UTC m=+0.031916651 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:11:35 np0005629333 systemd[1]: Started libpod-conmon-731fd28c69a74ae7c5026142e38a6b63b510cca13690649c0fcde874e7d4e230.scope.
Feb 25 08:11:35 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:11:35 np0005629333 nova_compute[244014]: 2026-02-25 13:11:35.897 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:11:35 np0005629333 podman[388660]: 2026-02-25 13:11:35.936061972 +0000 UTC m=+0.255997020 container init 731fd28c69a74ae7c5026142e38a6b63b510cca13690649c0fcde874e7d4e230 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_pike, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 08:11:35 np0005629333 podman[388660]: 2026-02-25 13:11:35.945338974 +0000 UTC m=+0.265273982 container start 731fd28c69a74ae7c5026142e38a6b63b510cca13690649c0fcde874e7d4e230 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:11:35 np0005629333 keen_pike[388689]: 167 167
Feb 25 08:11:35 np0005629333 systemd[1]: libpod-731fd28c69a74ae7c5026142e38a6b63b510cca13690649c0fcde874e7d4e230.scope: Deactivated successfully.
Feb 25 08:11:35 np0005629333 podman[388660]: 2026-02-25 13:11:35.992354099 +0000 UTC m=+0.312289147 container attach 731fd28c69a74ae7c5026142e38a6b63b510cca13690649c0fcde874e7d4e230 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_pike, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:11:35 np0005629333 podman[388660]: 2026-02-25 13:11:35.992896715 +0000 UTC m=+0.312831713 container died 731fd28c69a74ae7c5026142e38a6b63b510cca13690649c0fcde874e7d4e230 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_pike, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:11:36 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ce8aff379a310feaa89dd0b0a76efaf9ae8407c8bfd519f700b684c6f4497040-merged.mount: Deactivated successfully.
Feb 25 08:11:36 np0005629333 podman[388660]: 2026-02-25 13:11:36.222866719 +0000 UTC m=+0.542801727 container remove 731fd28c69a74ae7c5026142e38a6b63b510cca13690649c0fcde874e7d4e230 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:11:36 np0005629333 podman[388674]: 2026-02-25 13:11:36.225610997 +0000 UTC m=+0.412757770 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 08:11:36 np0005629333 podman[388690]: 2026-02-25 13:11:36.229457755 +0000 UTC m=+0.356572846 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 25 08:11:36 np0005629333 systemd[1]: libpod-conmon-731fd28c69a74ae7c5026142e38a6b63b510cca13690649c0fcde874e7d4e230.scope: Deactivated successfully.
Feb 25 08:11:36 np0005629333 podman[388746]: 2026-02-25 13:11:36.445363704 +0000 UTC m=+0.079873784 container create 7387b0c33ad743ec648b1c6992469d5445bdfc6424a489cc1bce6b7f6d5d3e99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shockley, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 08:11:36 np0005629333 podman[388746]: 2026-02-25 13:11:36.396238559 +0000 UTC m=+0.030748699 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:11:36 np0005629333 systemd[1]: Started libpod-conmon-7387b0c33ad743ec648b1c6992469d5445bdfc6424a489cc1bce6b7f6d5d3e99.scope.
Feb 25 08:11:36 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:11:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2855a0a0bd0f7a7ba9fe8c416a48225b4e3f79002c865ddad898a9981d16d487/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:11:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2855a0a0bd0f7a7ba9fe8c416a48225b4e3f79002c865ddad898a9981d16d487/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:11:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2855a0a0bd0f7a7ba9fe8c416a48225b4e3f79002c865ddad898a9981d16d487/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:11:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2855a0a0bd0f7a7ba9fe8c416a48225b4e3f79002c865ddad898a9981d16d487/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:11:36 np0005629333 podman[388746]: 2026-02-25 13:11:36.587858512 +0000 UTC m=+0.222368622 container init 7387b0c33ad743ec648b1c6992469d5445bdfc6424a489cc1bce6b7f6d5d3e99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shockley, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 08:11:36 np0005629333 podman[388746]: 2026-02-25 13:11:36.595399485 +0000 UTC m=+0.229909565 container start 7387b0c33ad743ec648b1c6992469d5445bdfc6424a489cc1bce6b7f6d5d3e99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:11:36 np0005629333 podman[388746]: 2026-02-25 13:11:36.617806717 +0000 UTC m=+0.252316807 container attach 7387b0c33ad743ec648b1c6992469d5445bdfc6424a489cc1bce6b7f6d5d3e99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shockley, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:11:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2750: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Feb 25 08:11:37 np0005629333 lvm[388839]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:11:37 np0005629333 lvm[388839]: VG ceph_vg0 finished
Feb 25 08:11:37 np0005629333 lvm[388842]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:11:37 np0005629333 lvm[388842]: VG ceph_vg1 finished
Feb 25 08:11:37 np0005629333 lvm[388844]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:11:37 np0005629333 lvm[388844]: VG ceph_vg2 finished
Feb 25 08:11:37 np0005629333 serene_shockley[388762]: {}
Feb 25 08:11:37 np0005629333 systemd[1]: libpod-7387b0c33ad743ec648b1c6992469d5445bdfc6424a489cc1bce6b7f6d5d3e99.scope: Deactivated successfully.
Feb 25 08:11:37 np0005629333 systemd[1]: libpod-7387b0c33ad743ec648b1c6992469d5445bdfc6424a489cc1bce6b7f6d5d3e99.scope: Consumed 1.185s CPU time.
Feb 25 08:11:37 np0005629333 podman[388746]: 2026-02-25 13:11:37.411087516 +0000 UTC m=+1.045597576 container died 7387b0c33ad743ec648b1c6992469d5445bdfc6424a489cc1bce6b7f6d5d3e99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shockley, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 08:11:37 np0005629333 systemd[1]: var-lib-containers-storage-overlay-2855a0a0bd0f7a7ba9fe8c416a48225b4e3f79002c865ddad898a9981d16d487-merged.mount: Deactivated successfully.
Feb 25 08:11:37 np0005629333 podman[388746]: 2026-02-25 13:11:37.584434395 +0000 UTC m=+1.218944485 container remove 7387b0c33ad743ec648b1c6992469d5445bdfc6424a489cc1bce6b7f6d5d3e99 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 08:11:37 np0005629333 systemd[1]: libpod-conmon-7387b0c33ad743ec648b1c6992469d5445bdfc6424a489cc1bce6b7f6d5d3e99.scope: Deactivated successfully.
Feb 25 08:11:37 np0005629333 nova_compute[244014]: 2026-02-25 13:11:37.623 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:11:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:11:37 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:11:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:11:37 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:11:38 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:11:38 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:11:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:11:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2751: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:11:40 np0005629333 nova_compute[244014]: 2026-02-25 13:11:40.902 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:11:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:11:41.096 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 25 08:11:41 np0005629333 nova_compute[244014]: 2026-02-25 13:11:41.097 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:11:41 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:11:41.097 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 25 08:11:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2752: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:11:42 np0005629333 nova_compute[244014]: 2026-02-25 13:11:42.628 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:11:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:11:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:11:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:11:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:11:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.7350063260632303e-05 of space, bias 1.0, pg target 0.005205018978189691 quantized to 32 (current 32)
Feb 25 08:11:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:11:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:11:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:11:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:11:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:11:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0024948454527306263 of space, bias 1.0, pg target 0.7484536358191879 quantized to 32 (current 32)
Feb 25 08:11:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:11:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4184950664421151e-06 of space, bias 4.0, pg target 0.0017021940797305381 quantized to 16 (current 16)
Feb 25 08:11:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:11:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:11:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:11:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:11:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:11:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:11:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:11:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:11:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:11:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
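The pg_autoscaler lines above are internally consistent with pg_target = capacity_ratio × bias × N, where N = 300 for this cluster (plausibly 3 OSDs × the default mon_target_pg_per_osd of 100; the 300 is inferred from the logged numbers, not logged directly). A quick check against three of the pools shown above:

```python
# Verify the logged "pg target" values: pg_target = capacity_ratio * bias * N.
N = 300  # assumption: 3 OSDs x mon_target_pg_per_osd (default 100)
pools = [
    (".mgr",               7.185749983720779e-06,  1.0, 0.0021557249951162337),
    ("images",             0.0024948454527306263,  1.0, 0.7484536358191879),
    ("cephfs.cephfs.meta", 1.4184950664421151e-06, 4.0, 0.0017021940797305381),
]
for name, ratio, bias, logged in pools:
    assert abs(ratio * bias * N - logged) < 1e-12, name  # all three match
```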
Feb 25 08:11:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2753: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:11:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:11:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2754: 305 pgs: 305 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:11:45 np0005629333 nova_compute[244014]: 2026-02-25 13:11:45.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:11:45 np0005629333 nova_compute[244014]: 2026-02-25 13:11:45.902 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:11:45 np0005629333 nova_compute[244014]: 2026-02-25 13:11:45.925 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:11:45 np0005629333 nova_compute[244014]: 2026-02-25 13:11:45.927 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:11:45 np0005629333 nova_compute[244014]: 2026-02-25 13:11:45.927 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:11:45 np0005629333 nova_compute[244014]: 2026-02-25 13:11:45.928 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 08:11:45 np0005629333 nova_compute[244014]: 2026-02-25 13:11:45.928 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:11:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:11:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1242954410' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:11:46 np0005629333 nova_compute[244014]: 2026-02-25 13:11:46.486 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
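The `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` call above is how the resource tracker measures RBD-backed disk capacity. A minimal sketch of the same query, assuming the standard `ceph df` JSON layout (a top-level "stats" object with total/avail byte counters):

```python
import json
import subprocess

# Run the exact command from the log and read the cluster-wide counters.
raw = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True,
).stdout
stats = json.loads(raw)["stats"]
print(f'avail: {stats["total_avail_bytes"] / 1024**3:.2f} GiB')
# Roughly 59 GiB here, matching free_disk in the resource view below.
```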
Feb 25 08:11:46 np0005629333 nova_compute[244014]: 2026-02-25 13:11:46.684 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 08:11:46 np0005629333 nova_compute[244014]: 2026-02-25 13:11:46.686 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3557MB free_disk=59.987240449525416GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 08:11:46 np0005629333 nova_compute[244014]: 2026-02-25 13:11:46.686 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:11:46 np0005629333 nova_compute[244014]: 2026-02-25 13:11:46.687 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:11:46 np0005629333 nova_compute[244014]: 2026-02-25 13:11:46.759 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 08:11:46 np0005629333 nova_compute[244014]: 2026-02-25 13:11:46.759 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 08:11:46 np0005629333 nova_compute[244014]: 2026-02-25 13:11:46.775 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:11:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Feb 25 08:11:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Feb 25 08:11:47 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Feb 25 08:11:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2756: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Feb 25 08:11:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:11:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3957610471' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:11:47 np0005629333 nova_compute[244014]: 2026-02-25 13:11:47.343 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:11:47 np0005629333 nova_compute[244014]: 2026-02-25 13:11:47.351 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:11:47 np0005629333 nova_compute[244014]: 2026-02-25 13:11:47.383 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
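The inventory dict in the line above feeds the usual Placement capacity rule, usable = (total - reserved) × allocation_ratio. Re-deriving the schedulable capacity from the logged values (a sketch; the rule is standard Placement behavior, the numbers are taken from this log):

```python
# Schedulable capacity implied by the inventory logged above:
#   usable = (total - reserved) * allocation_ratio
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, v in inventory.items():
    print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
# VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2
```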
Feb 25 08:11:47 np0005629333 nova_compute[244014]: 2026-02-25 13:11:47.386 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 08:11:47 np0005629333 nova_compute[244014]: 2026-02-25 13:11:47.386 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
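
The acquire/release bracket around the whole update (waited 0.000s, held 0.700s) comes from oslo.concurrency's lock decorator; the inner wrapper at lockutils.py:409/423 emits both lines. The same pattern in miniature (hypothetical function name, real oslo_concurrency API; nova wraps this in its own synchronized helper):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        # Runs under the same in-process 'compute_resources' lock the
        # resource tracker logs above; concurrent callers block in
        # 'waited' until the holder releases.
        ...
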
Feb 25 08:11:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:11:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4066214724' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:11:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:11:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4066214724' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:11:47 np0005629333 nova_compute[244014]: 2026-02-25 13:11:47.632 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:11:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:11:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:11:49.099 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 08:11:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2757: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Feb 25 08:11:49 np0005629333 nova_compute[244014]: 2026-02-25 13:11:49.387 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:11:49 np0005629333 nova_compute[244014]: 2026-02-25 13:11:49.388 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 08:11:49 np0005629333 nova_compute[244014]: 2026-02-25 13:11:49.388 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 08:11:49 np0005629333 nova_compute[244014]: 2026-02-25 13:11:49.403 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 08:11:49 np0005629333 nova_compute[244014]: 2026-02-25 13:11:49.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:11:50 np0005629333 nova_compute[244014]: 2026-02-25 13:11:50.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:11:50 np0005629333 nova_compute[244014]: 2026-02-25 13:11:50.904 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:11:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2758: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 153 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Feb 25 08:11:52 np0005629333 nova_compute[244014]: 2026-02-25 13:11:52.637 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:11:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2759: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Feb 25 08:11:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:11:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Feb 25 08:11:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Feb 25 08:11:54 np0005629333 ceph-mon[76335]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Feb 25 08:11:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:11:55.051 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:11:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:11:55.052 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:11:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:11:55.052 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:11:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2761: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Feb 25 08:11:55 np0005629333 nova_compute[244014]: 2026-02-25 13:11:55.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:11:55 np0005629333 nova_compute[244014]: 2026-02-25 13:11:55.906 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:11:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2762: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Feb 25 08:11:57 np0005629333 nova_compute[244014]: 2026-02-25 13:11:57.640 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:11:57 np0005629333 nova_compute[244014]: 2026-02-25 13:11:57.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:11:58 np0005629333 nova_compute[244014]: 2026-02-25 13:11:58.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:11:58 np0005629333 nova_compute[244014]: 2026-02-25 13:11:58.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
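
_reclaim_queued_deletes shows the standard guard in nova's periodic tasks: the task still fires on schedule, but returns immediately when its interval option disables it, which is why the log shows the task running and then skipping. The shape of it, sketched with the real oslo.service decorator (the manager class and config access are illustrative):

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task
        def _reclaim_queued_deletes(self, context):
            if self.conf.reclaim_instance_interval <= 0:
                # Matches "CONF.reclaim_instance_interval <= 0,
                # skipping..." above.
                return
            # ... reclaim instances soft-deleted longer ago than the
            # configured interval ...
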
Feb 25 08:11:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:11:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2763: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Feb 25 08:12:00 np0005629333 nova_compute[244014]: 2026-02-25 13:12:00.910 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:12:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2764: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Feb 25 08:12:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:12:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:12:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:12:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:12:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:12:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:12:02 np0005629333 nova_compute[244014]: 2026-02-25 13:12:02.644 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:12:02 np0005629333 nova_compute[244014]: 2026-02-25 13:12:02.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:12:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2765: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:12:03 np0005629333 nova_compute[244014]: 2026-02-25 13:12:03.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:12:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:12:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2766: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:12:05 np0005629333 nova_compute[244014]: 2026-02-25 13:12:05.912 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:12:06 np0005629333 podman[388929]: 2026-02-25 13:12:06.729830455 +0000 UTC m=+0.064428878 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 08:12:06 np0005629333 podman[388930]: 2026-02-25 13:12:06.76580941 +0000 UTC m=+0.095481474 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
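
These two podman health_status entries are the container healthchecks declared in config_data ('healthcheck': {'test': '/openstack/healthcheck'}): podman runs the test inside each container on a timer and records healthy with a zero failing streak. The same check can be forced by hand; a sketch via subprocess (podman healthcheck run is the real subcommand; container names are taken from the log):

    import subprocess

    for name in ('ovn_metadata_agent', 'ovn_controller'):
        # Exit status 0 corresponds to health_status=healthy above.
        r = subprocess.run(['podman', 'healthcheck', 'run', name])
        print(name, 'healthy' if r.returncode == 0 else 'unhealthy')
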
Feb 25 08:12:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2767: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:12:07 np0005629333 nova_compute[244014]: 2026-02-25 13:12:07.647 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:12:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:12:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2768: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.712005) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025130712049, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 2081, "num_deletes": 253, "total_data_size": 3481435, "memory_usage": 3542080, "flush_reason": "Manual Compaction"}
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025130771526, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 3423942, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56672, "largest_seqno": 58752, "table_properties": {"data_size": 3414337, "index_size": 6162, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19124, "raw_average_key_size": 20, "raw_value_size": 3395243, "raw_average_value_size": 3596, "num_data_blocks": 272, "num_entries": 944, "num_filter_entries": 944, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772024908, "oldest_key_time": 1772024908, "file_creation_time": 1772025130, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 59628 microseconds, and 10860 cpu microseconds.
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.771627) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 3423942 bytes OK
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.771655) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.816065) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.816125) EVENT_LOG_v1 {"time_micros": 1772025130816110, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.816161) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 3472735, prev total WAL file size 3472735, number of live WAL files 2.
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.817550) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(3343KB)], [134(9289KB)]
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025130817629, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 12936042, "oldest_snapshot_seqno": -1}
Feb 25 08:12:10 np0005629333 nova_compute[244014]: 2026-02-25 13:12:10.915 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 7932 keys, 11196676 bytes, temperature: kUnknown
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025130941103, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 11196676, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11143495, "index_size": 32268, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19845, "raw_key_size": 206468, "raw_average_key_size": 26, "raw_value_size": 11001943, "raw_average_value_size": 1387, "num_data_blocks": 1261, "num_entries": 7932, "num_filter_entries": 7932, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772025130, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.941415) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 11196676 bytes
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.966149) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 104.7 rd, 90.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 9.1 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(7.0) write-amplify(3.3) OK, records in: 8454, records dropped: 522 output_compression: NoCompression
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.966202) EVENT_LOG_v1 {"time_micros": 1772025130966180, "job": 82, "event": "compaction_finished", "compaction_time_micros": 123559, "compaction_time_cpu_micros": 29480, "output_level": 6, "num_output_files": 1, "total_output_size": 11196676, "num_input_records": 8454, "num_output_records": 7932, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
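
Jobs 81 and 82 together are one flush-plus-manual-compaction cycle in the mon's RocksDB store: the 2081-entry memtable is flushed to L0 table #136, which is then merged with the existing L6 table #134 into a single new L6 table #137, after which both inputs are deleted below. The amplification figures in the summary line reproduce directly from the logged file sizes (arithmetic check only, using output/L0-input and total-I/O/L0-input):

    l0_in  = 3_423_942      # table #136, flushed by job 81
    l6_in  = 9289 * 1024    # table #134, the pre-existing L6 file (9289KB)
    l6_out = 11_196_676     # table #137, written by job 82

    write_amp = l6_out / l0_in                  # ~3.3, as logged
    rw_amp = (l0_in + l6_in + l6_out) / l0_in   # ~7.0, as logged
    print(round(write_amp, 1), round(rw_amp, 1))
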
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025130967000, "job": 82, "event": "table_file_deletion", "file_number": 136}
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025130968524, "job": 82, "event": "table_file_deletion", "file_number": 134}
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.817416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.968621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.968629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.968632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.968635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:12:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:12:10.968638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:12:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2769: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:12:12 np0005629333 nova_compute[244014]: 2026-02-25 13:12:12.652 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:12:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2770: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:12:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:12:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2771: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:12:15 np0005629333 nova_compute[244014]: 2026-02-25 13:12:15.916 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:12:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2772: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:12:17 np0005629333 nova_compute[244014]: 2026-02-25 13:12:17.655 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:12:18 np0005629333 nova_compute[244014]: 2026-02-25 13:12:18.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:12:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:12:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2773: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:12:20 np0005629333 nova_compute[244014]: 2026-02-25 13:12:20.919 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:12:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2774: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:12:22 np0005629333 nova_compute[244014]: 2026-02-25 13:12:22.659 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:12:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2775: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:12:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:12:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2776: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:12:25 np0005629333 nova_compute[244014]: 2026-02-25 13:12:25.921 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:12:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2777: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:12:27 np0005629333 nova_compute[244014]: 2026-02-25 13:12:27.663 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:12:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:12:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2778: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:12:30 np0005629333 nova_compute[244014]: 2026-02-25 13:12:30.922 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:12:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:12:31
Feb 25 08:12:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:12:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:12:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['volumes', 'backups', '.mgr', '.rgw.root', 'default.rgw.log', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', 'images']
Feb 25 08:12:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
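
A balancer pass in upmap mode that finds nothing to move: "prepared 0/10 upmap changes" means none of the budget of 10 changes was needed, so PGs are already evenly mapped across the pools listed. The "max misplaced 0.050000" figure is the ratio that throttles any plan that is generated; for this cluster it caps concurrent movement at roughly (arithmetic only, the exact accounting lives in the mgr):

    pgs = 305
    max_misplaced = 0.05
    print(int(pgs * max_misplaced))   # ~15 PGs in motion at once
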
Feb 25 08:12:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2779: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:12:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:12:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:12:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:12:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:12:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:12:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:12:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:12:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:12:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:12:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:12:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:12:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:12:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:12:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:12:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:12:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:12:32 np0005629333 nova_compute[244014]: 2026-02-25 13:12:32.666 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:12:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2780: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:12:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:12:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2781: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:12:35 np0005629333 nova_compute[244014]: 2026-02-25 13:12:35.925 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:12:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2782: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:12:37 np0005629333 nova_compute[244014]: 2026-02-25 13:12:37.669 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:12:37 np0005629333 podman[388974]: 2026-02-25 13:12:37.735330577 +0000 UTC m=+0.077288150 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:12:37 np0005629333 podman[388975]: 2026-02-25 13:12:37.761068613 +0000 UTC m=+0.099392854 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 08:12:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:12:38 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:12:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:12:38 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:12:38 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:12:38 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:12:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:12:38 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:12:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:12:38 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:12:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:12:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:12:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:12:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:12:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:12:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:12:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:12:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:12:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:12:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2783: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:12:39 np0005629333 podman[389235]: 2026-02-25 13:12:39.479604713 +0000 UTC m=+0.077613989 container create cc247ab28f6bbb07bbd595e647197ea779e78d18ef8ff79118d696d30f6f2c10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 08:12:39 np0005629333 podman[389235]: 2026-02-25 13:12:39.427939977 +0000 UTC m=+0.025949303 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:12:39 np0005629333 systemd[1]: Started libpod-conmon-cc247ab28f6bbb07bbd595e647197ea779e78d18ef8ff79118d696d30f6f2c10.scope.
Feb 25 08:12:39 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:12:39 np0005629333 podman[389235]: 2026-02-25 13:12:39.638964126 +0000 UTC m=+0.236973442 container init cc247ab28f6bbb07bbd595e647197ea779e78d18ef8ff79118d696d30f6f2c10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Feb 25 08:12:39 np0005629333 podman[389235]: 2026-02-25 13:12:39.64901231 +0000 UTC m=+0.247021576 container start cc247ab28f6bbb07bbd595e647197ea779e78d18ef8ff79118d696d30f6f2c10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:12:39 np0005629333 infallible_shockley[389251]: 167 167
Feb 25 08:12:39 np0005629333 systemd[1]: libpod-cc247ab28f6bbb07bbd595e647197ea779e78d18ef8ff79118d696d30f6f2c10.scope: Deactivated successfully.
Feb 25 08:12:39 np0005629333 podman[389235]: 2026-02-25 13:12:39.68696586 +0000 UTC m=+0.284975126 container attach cc247ab28f6bbb07bbd595e647197ea779e78d18ef8ff79118d696d30f6f2c10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:12:39 np0005629333 podman[389235]: 2026-02-25 13:12:39.68804276 +0000 UTC m=+0.286052026 container died cc247ab28f6bbb07bbd595e647197ea779e78d18ef8ff79118d696d30f6f2c10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 08:12:39 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:12:39 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:12:39 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:12:39 np0005629333 systemd[1]: var-lib-containers-storage-overlay-4e34f12875dfaf04ba9ca18b6876e0f6cffbc9677bb95c2395ed16adde12b4c1-merged.mount: Deactivated successfully.
Feb 25 08:12:39 np0005629333 podman[389235]: 2026-02-25 13:12:39.887098374 +0000 UTC m=+0.485107660 container remove cc247ab28f6bbb07bbd595e647197ea779e78d18ef8ff79118d696d30f6f2c10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shockley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:12:39 np0005629333 systemd[1]: libpod-conmon-cc247ab28f6bbb07bbd595e647197ea779e78d18ef8ff79118d696d30f6f2c10.scope: Deactivated successfully.
Feb 25 08:12:40 np0005629333 podman[389275]: 2026-02-25 13:12:40.107830178 +0000 UTC m=+0.080456120 container create a57761b2dc5a7090ed76d95372bc93738590880986674d0ba70d2f31b5766887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 08:12:40 np0005629333 podman[389275]: 2026-02-25 13:12:40.065811293 +0000 UTC m=+0.038437275 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:12:40 np0005629333 systemd[1]: Started libpod-conmon-a57761b2dc5a7090ed76d95372bc93738590880986674d0ba70d2f31b5766887.scope.
Feb 25 08:12:40 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:12:40 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5134339b3de37133b7ebb92599c8fe8293d4ff55c549d57fa97833fad58bf99b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:12:40 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5134339b3de37133b7ebb92599c8fe8293d4ff55c549d57fa97833fad58bf99b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:12:40 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5134339b3de37133b7ebb92599c8fe8293d4ff55c549d57fa97833fad58bf99b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:12:40 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5134339b3de37133b7ebb92599c8fe8293d4ff55c549d57fa97833fad58bf99b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:12:40 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5134339b3de37133b7ebb92599c8fe8293d4ff55c549d57fa97833fad58bf99b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
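
The repeated xfs warnings mean these overlay mounts sit on an xfs filesystem without the bigtime feature, so inode timestamps saturate at the signed 32-bit epoch limit the kernel prints. The cutoff date falls straight out of the hex value:

    from datetime import datetime, timezone

    limit = 0x7fffffff  # 2147483647, from the kernel message above
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
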
Feb 25 08:12:40 np0005629333 podman[389275]: 2026-02-25 13:12:40.240042856 +0000 UTC m=+0.212668808 container init a57761b2dc5a7090ed76d95372bc93738590880986674d0ba70d2f31b5766887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_snyder, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 08:12:40 np0005629333 podman[389275]: 2026-02-25 13:12:40.245651604 +0000 UTC m=+0.218277546 container start a57761b2dc5a7090ed76d95372bc93738590880986674d0ba70d2f31b5766887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 08:12:40 np0005629333 podman[389275]: 2026-02-25 13:12:40.271678778 +0000 UTC m=+0.244304770 container attach a57761b2dc5a7090ed76d95372bc93738590880986674d0ba70d2f31b5766887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 08:12:40 np0005629333 epic_snyder[389291]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:12:40 np0005629333 epic_snyder[389291]: --> All data devices are unavailable
Feb 25 08:12:40 np0005629333 systemd[1]: libpod-a57761b2dc5a7090ed76d95372bc93738590880986674d0ba70d2f31b5766887.scope: Deactivated successfully.
Feb 25 08:12:40 np0005629333 conmon[389291]: conmon a57761b2dc5a7090ed76 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a57761b2dc5a7090ed76d95372bc93738590880986674d0ba70d2f31b5766887.scope/container/memory.events
Feb 25 08:12:40 np0005629333 podman[389275]: 2026-02-25 13:12:40.732355359 +0000 UTC m=+0.704981261 container died a57761b2dc5a7090ed76d95372bc93738590880986674d0ba70d2f31b5766887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 08:12:40 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5134339b3de37133b7ebb92599c8fe8293d4ff55c549d57fa97833fad58bf99b-merged.mount: Deactivated successfully.
Feb 25 08:12:40 np0005629333 nova_compute[244014]: 2026-02-25 13:12:40.926 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:12:40 np0005629333 podman[389275]: 2026-02-25 13:12:40.950255774 +0000 UTC m=+0.922881686 container remove a57761b2dc5a7090ed76d95372bc93738590880986674d0ba70d2f31b5766887 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_snyder, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:12:40 np0005629333 systemd[1]: libpod-conmon-a57761b2dc5a7090ed76d95372bc93738590880986674d0ba70d2f31b5766887.scope: Deactivated successfully.
Feb 25 08:12:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2784: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:12:41 np0005629333 podman[389387]: 2026-02-25 13:12:41.385511998 +0000 UTC m=+0.062146764 container create f693cc89440603ba3cf36e4bc05ccbd8a059e01790d641c30a8f0a60b91687b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_swartz, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 08:12:41 np0005629333 podman[389387]: 2026-02-25 13:12:41.346136957 +0000 UTC m=+0.022771763 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:12:41 np0005629333 systemd[1]: Started libpod-conmon-f693cc89440603ba3cf36e4bc05ccbd8a059e01790d641c30a8f0a60b91687b0.scope.
Feb 25 08:12:41 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:12:41 np0005629333 podman[389387]: 2026-02-25 13:12:41.528947032 +0000 UTC m=+0.205581888 container init f693cc89440603ba3cf36e4bc05ccbd8a059e01790d641c30a8f0a60b91687b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 08:12:41 np0005629333 podman[389387]: 2026-02-25 13:12:41.537071551 +0000 UTC m=+0.213706357 container start f693cc89440603ba3cf36e4bc05ccbd8a059e01790d641c30a8f0a60b91687b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:12:41 np0005629333 trusting_swartz[389403]: 167 167
Feb 25 08:12:41 np0005629333 systemd[1]: libpod-f693cc89440603ba3cf36e4bc05ccbd8a059e01790d641c30a8f0a60b91687b0.scope: Deactivated successfully.
Feb 25 08:12:41 np0005629333 podman[389387]: 2026-02-25 13:12:41.558241568 +0000 UTC m=+0.234876414 container attach f693cc89440603ba3cf36e4bc05ccbd8a059e01790d641c30a8f0a60b91687b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_swartz, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 08:12:41 np0005629333 podman[389387]: 2026-02-25 13:12:41.558848505 +0000 UTC m=+0.235483301 container died f693cc89440603ba3cf36e4bc05ccbd8a059e01790d641c30a8f0a60b91687b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_swartz, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:12:41 np0005629333 systemd[1]: var-lib-containers-storage-overlay-eeedaa37a4f34ce23227c2c36d7c167f4055db91766cee158e058c7f7a43df78-merged.mount: Deactivated successfully.
Feb 25 08:12:41 np0005629333 podman[389387]: 2026-02-25 13:12:41.695674334 +0000 UTC m=+0.372309130 container remove f693cc89440603ba3cf36e4bc05ccbd8a059e01790d641c30a8f0a60b91687b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_swartz, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 08:12:41 np0005629333 systemd[1]: libpod-conmon-f693cc89440603ba3cf36e4bc05ccbd8a059e01790d641c30a8f0a60b91687b0.scope: Deactivated successfully.
Feb 25 08:12:41 np0005629333 podman[389427]: 2026-02-25 13:12:41.866220803 +0000 UTC m=+0.062912785 container create 07f8eb58de0f4a1e4ce685e130fbd96ec605aad86043ac28d4731b7634b73ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_feistel, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 08:12:41 np0005629333 podman[389427]: 2026-02-25 13:12:41.824052964 +0000 UTC m=+0.020745036 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:12:41 np0005629333 systemd[1]: Started libpod-conmon-07f8eb58de0f4a1e4ce685e130fbd96ec605aad86043ac28d4731b7634b73ce0.scope.
Feb 25 08:12:41 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:12:41 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6141721858c18f2cc811783fbe8d0fe7993f061333088afa8a4b553500aba87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:12:41 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6141721858c18f2cc811783fbe8d0fe7993f061333088afa8a4b553500aba87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:12:41 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6141721858c18f2cc811783fbe8d0fe7993f061333088afa8a4b553500aba87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:12:41 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6141721858c18f2cc811783fbe8d0fe7993f061333088afa8a4b553500aba87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:12:41 np0005629333 podman[389427]: 2026-02-25 13:12:41.994192092 +0000 UTC m=+0.190884104 container init 07f8eb58de0f4a1e4ce685e130fbd96ec605aad86043ac28d4731b7634b73ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_feistel, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:12:42 np0005629333 podman[389427]: 2026-02-25 13:12:42.000212972 +0000 UTC m=+0.196905034 container start 07f8eb58de0f4a1e4ce685e130fbd96ec605aad86043ac28d4731b7634b73ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_feistel, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:12:42 np0005629333 podman[389427]: 2026-02-25 13:12:42.02249306 +0000 UTC m=+0.219185122 container attach 07f8eb58de0f4a1e4ce685e130fbd96ec605aad86043ac28d4731b7634b73ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_feistel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]: {
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:    "0": [
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:        {
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "devices": [
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "/dev/loop3"
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            ],
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "lv_name": "ceph_lv0",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "lv_size": "21470642176",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "name": "ceph_lv0",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "tags": {
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.cluster_name": "ceph",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.crush_device_class": "",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.encrypted": "0",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.objectstore": "bluestore",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.osd_id": "0",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.type": "block",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.vdo": "0",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.with_tpm": "0"
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            },
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "type": "block",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "vg_name": "ceph_vg0"
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:        }
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:    ],
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:    "1": [
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:        {
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "devices": [
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "/dev/loop4"
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            ],
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "lv_name": "ceph_lv1",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "lv_size": "21470642176",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "name": "ceph_lv1",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "tags": {
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.cluster_name": "ceph",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.crush_device_class": "",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.encrypted": "0",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.objectstore": "bluestore",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.osd_id": "1",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.type": "block",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.vdo": "0",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.with_tpm": "0"
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            },
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "type": "block",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "vg_name": "ceph_vg1"
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:        }
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:    ],
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:    "2": [
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:        {
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "devices": [
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "/dev/loop5"
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            ],
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "lv_name": "ceph_lv2",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "lv_size": "21470642176",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "name": "ceph_lv2",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "tags": {
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.cluster_name": "ceph",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.crush_device_class": "",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.encrypted": "0",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.objectstore": "bluestore",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.osd_id": "2",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.type": "block",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.vdo": "0",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:                "ceph.with_tpm": "0"
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            },
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "type": "block",
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:            "vg_name": "ceph_vg2"
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:        }
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]:    ]
Feb 25 08:12:42 np0005629333 awesome_feistel[389444]: }
Feb 25 08:12:42 np0005629333 systemd[1]: libpod-07f8eb58de0f4a1e4ce685e130fbd96ec605aad86043ac28d4731b7634b73ce0.scope: Deactivated successfully.
Feb 25 08:12:42 np0005629333 podman[389427]: 2026-02-25 13:12:42.288399198 +0000 UTC m=+0.485091190 container died 07f8eb58de0f4a1e4ce685e130fbd96ec605aad86043ac28d4731b7634b73ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_feistel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 08:12:42 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a6141721858c18f2cc811783fbe8d0fe7993f061333088afa8a4b553500aba87-merged.mount: Deactivated successfully.
Feb 25 08:12:42 np0005629333 podman[389427]: 2026-02-25 13:12:42.45621885 +0000 UTC m=+0.652910832 container remove 07f8eb58de0f4a1e4ce685e130fbd96ec605aad86043ac28d4731b7634b73ce0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_feistel, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:12:42 np0005629333 systemd[1]: libpod-conmon-07f8eb58de0f4a1e4ce685e130fbd96ec605aad86043ac28d4731b7634b73ce0.scope: Deactivated successfully.
Feb 25 08:12:42 np0005629333 nova_compute[244014]: 2026-02-25 13:12:42.672 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:12:42 np0005629333 podman[389529]: 2026-02-25 13:12:42.984012564 +0000 UTC m=+0.090367260 container create ea9eb2f1fd7c47caf0289ce95341f7ad82ee24a396695886bcb0c23bb82e7ac8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:12:43 np0005629333 podman[389529]: 2026-02-25 13:12:42.924056513 +0000 UTC m=+0.030411239 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:12:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:12:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:12:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:12:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:12:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:12:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:12:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:12:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:12:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:12:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:12:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:12:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:12:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:12:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:12:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:12:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:12:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:12:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:12:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:12:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:12:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:12:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:12:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 08:12:43 np0005629333 systemd[1]: Started libpod-conmon-ea9eb2f1fd7c47caf0289ce95341f7ad82ee24a396695886bcb0c23bb82e7ac8.scope.
Feb 25 08:12:43 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:12:43 np0005629333 podman[389529]: 2026-02-25 13:12:43.134560718 +0000 UTC m=+0.240915384 container init ea9eb2f1fd7c47caf0289ce95341f7ad82ee24a396695886bcb0c23bb82e7ac8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 08:12:43 np0005629333 podman[389529]: 2026-02-25 13:12:43.144417066 +0000 UTC m=+0.250771772 container start ea9eb2f1fd7c47caf0289ce95341f7ad82ee24a396695886bcb0c23bb82e7ac8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 08:12:43 np0005629333 friendly_vaughan[389545]: 167 167
Feb 25 08:12:43 np0005629333 systemd[1]: libpod-ea9eb2f1fd7c47caf0289ce95341f7ad82ee24a396695886bcb0c23bb82e7ac8.scope: Deactivated successfully.
Feb 25 08:12:43 np0005629333 podman[389529]: 2026-02-25 13:12:43.165536101 +0000 UTC m=+0.271890777 container attach ea9eb2f1fd7c47caf0289ce95341f7ad82ee24a396695886bcb0c23bb82e7ac8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 08:12:43 np0005629333 podman[389529]: 2026-02-25 13:12:43.16688992 +0000 UTC m=+0.273244596 container died ea9eb2f1fd7c47caf0289ce95341f7ad82ee24a396695886bcb0c23bb82e7ac8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 08:12:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2785: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:12:43 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6aad1699090114f49a7087e520d026bc599f7498de3cc77e1e8a16c7a01b600f-merged.mount: Deactivated successfully.
Feb 25 08:12:43 np0005629333 podman[389529]: 2026-02-25 13:12:43.377851198 +0000 UTC m=+0.484205904 container remove ea9eb2f1fd7c47caf0289ce95341f7ad82ee24a396695886bcb0c23bb82e7ac8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 08:12:43 np0005629333 systemd[1]: libpod-conmon-ea9eb2f1fd7c47caf0289ce95341f7ad82ee24a396695886bcb0c23bb82e7ac8.scope: Deactivated successfully.
Feb 25 08:12:43 np0005629333 podman[389570]: 2026-02-25 13:12:43.599823078 +0000 UTC m=+0.084282188 container create fc05bf687f91dcb10946e808cbc263fdf894d195d5dc320ab2ee1c88a5b15736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_taussig, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:12:43 np0005629333 podman[389570]: 2026-02-25 13:12:43.551139625 +0000 UTC m=+0.035598725 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:12:43 np0005629333 systemd[1]: Started libpod-conmon-fc05bf687f91dcb10946e808cbc263fdf894d195d5dc320ab2ee1c88a5b15736.scope.
Feb 25 08:12:43 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:12:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6149f953c6a21441004105408f7a2b2016a1cf459ede3a62052456aa26f0a632/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:12:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6149f953c6a21441004105408f7a2b2016a1cf459ede3a62052456aa26f0a632/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:12:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6149f953c6a21441004105408f7a2b2016a1cf459ede3a62052456aa26f0a632/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:12:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6149f953c6a21441004105408f7a2b2016a1cf459ede3a62052456aa26f0a632/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:12:43 np0005629333 podman[389570]: 2026-02-25 13:12:43.738183949 +0000 UTC m=+0.222643009 container init fc05bf687f91dcb10946e808cbc263fdf894d195d5dc320ab2ee1c88a5b15736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:12:43 np0005629333 podman[389570]: 2026-02-25 13:12:43.744473296 +0000 UTC m=+0.228932306 container start fc05bf687f91dcb10946e808cbc263fdf894d195d5dc320ab2ee1c88a5b15736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 08:12:43 np0005629333 podman[389570]: 2026-02-25 13:12:43.788246811 +0000 UTC m=+0.272705871 container attach fc05bf687f91dcb10946e808cbc263fdf894d195d5dc320ab2ee1c88a5b15736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_taussig, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 08:12:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:12:44 np0005629333 lvm[389664]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:12:44 np0005629333 lvm[389664]: VG ceph_vg1 finished
Feb 25 08:12:44 np0005629333 lvm[389665]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:12:44 np0005629333 lvm[389665]: VG ceph_vg0 finished
Feb 25 08:12:44 np0005629333 lvm[389667]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:12:44 np0005629333 lvm[389667]: VG ceph_vg2 finished
Feb 25 08:12:44 np0005629333 busy_taussig[389586]: {}
Feb 25 08:12:44 np0005629333 systemd[1]: libpod-fc05bf687f91dcb10946e808cbc263fdf894d195d5dc320ab2ee1c88a5b15736.scope: Deactivated successfully.
Feb 25 08:12:44 np0005629333 systemd[1]: libpod-fc05bf687f91dcb10946e808cbc263fdf894d195d5dc320ab2ee1c88a5b15736.scope: Consumed 1.064s CPU time.
Feb 25 08:12:44 np0005629333 podman[389570]: 2026-02-25 13:12:44.486546872 +0000 UTC m=+0.971005912 container died fc05bf687f91dcb10946e808cbc263fdf894d195d5dc320ab2ee1c88a5b15736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_taussig, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 08:12:44 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6149f953c6a21441004105408f7a2b2016a1cf459ede3a62052456aa26f0a632-merged.mount: Deactivated successfully.
Feb 25 08:12:44 np0005629333 podman[389570]: 2026-02-25 13:12:44.859757467 +0000 UTC m=+1.344216477 container remove fc05bf687f91dcb10946e808cbc263fdf894d195d5dc320ab2ee1c88a5b15736 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:12:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:12:44 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:12:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:12:44 np0005629333 systemd[1]: libpod-conmon-fc05bf687f91dcb10946e808cbc263fdf894d195d5dc320ab2ee1c88a5b15736.scope: Deactivated successfully.
Feb 25 08:12:44 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:12:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2786: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:12:45 np0005629333 nova_compute[244014]: 2026-02-25 13:12:45.930 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:12:45 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:12:46 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:12:46 np0005629333 nova_compute[244014]: 2026-02-25 13:12:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:12:46 np0005629333 nova_compute[244014]: 2026-02-25 13:12:46.925 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:12:46 np0005629333 nova_compute[244014]: 2026-02-25 13:12:46.925 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:12:46 np0005629333 nova_compute[244014]: 2026-02-25 13:12:46.925 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:12:46 np0005629333 nova_compute[244014]: 2026-02-25 13:12:46.925 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 08:12:46 np0005629333 nova_compute[244014]: 2026-02-25 13:12:46.926 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:12:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2787: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:12:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:12:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1293435788' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:12:47 np0005629333 nova_compute[244014]: 2026-02-25 13:12:47.472 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:12:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:12:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/672947010' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:12:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:12:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/672947010' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:12:47 np0005629333 nova_compute[244014]: 2026-02-25 13:12:47.675 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 08:12:47 np0005629333 nova_compute[244014]: 2026-02-25 13:12:47.677 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3542MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 08:12:47 np0005629333 nova_compute[244014]: 2026-02-25 13:12:47.677 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:12:47 np0005629333 nova_compute[244014]: 2026-02-25 13:12:47.678 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:12:47 np0005629333 nova_compute[244014]: 2026-02-25 13:12:47.679 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:12:47 np0005629333 nova_compute[244014]: 2026-02-25 13:12:47.753 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 08:12:47 np0005629333 nova_compute[244014]: 2026-02-25 13:12:47.754 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 08:12:47 np0005629333 nova_compute[244014]: 2026-02-25 13:12:47.775 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:12:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:12:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1366442388' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:12:48 np0005629333 nova_compute[244014]: 2026-02-25 13:12:48.467 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.693s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:12:48 np0005629333 nova_compute[244014]: 2026-02-25 13:12:48.475 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:12:48 np0005629333 nova_compute[244014]: 2026-02-25 13:12:48.496 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 08:12:48 np0005629333 nova_compute[244014]: 2026-02-25 13:12:48.498 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 08:12:48 np0005629333 nova_compute[244014]: 2026-02-25 13:12:48.499 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:12:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:12:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2788: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:12:49 np0005629333 nova_compute[244014]: 2026-02-25 13:12:49.499 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:12:49 np0005629333 nova_compute[244014]: 2026-02-25 13:12:49.501 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 08:12:49 np0005629333 nova_compute[244014]: 2026-02-25 13:12:49.501 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 08:12:49 np0005629333 nova_compute[244014]: 2026-02-25 13:12:49.731 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 08:12:50 np0005629333 nova_compute[244014]: 2026-02-25 13:12:50.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:12:50 np0005629333 nova_compute[244014]: 2026-02-25 13:12:50.931 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:12:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2789: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:12:52 np0005629333 nova_compute[244014]: 2026-02-25 13:12:52.682 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:12:52 np0005629333 nova_compute[244014]: 2026-02-25 13:12:52.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:12:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2790: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:12:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:12:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:12:55.052 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:12:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:12:55.053 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:12:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:12:55.054 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:12:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2791: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:12:55 np0005629333 nova_compute[244014]: 2026-02-25 13:12:55.933 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:12:56 np0005629333 nova_compute[244014]: 2026-02-25 13:12:56.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:12:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2792: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:12:57 np0005629333 nova_compute[244014]: 2026-02-25 13:12:57.686 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:12:57 np0005629333 nova_compute[244014]: 2026-02-25 13:12:57.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:12:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:12:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2793: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:00 np0005629333 nova_compute[244014]: 2026-02-25 13:13:00.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:13:00 np0005629333 nova_compute[244014]: 2026-02-25 13:13:00.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 08:13:00 np0005629333 nova_compute[244014]: 2026-02-25 13:13:00.935 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:13:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2794: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:13:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:13:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:13:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:13:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:13:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:13:02 np0005629333 nova_compute[244014]: 2026-02-25 13:13:02.690 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:13:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2795: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:03 np0005629333 nova_compute[244014]: 2026-02-25 13:13:03.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:13:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:13:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2796: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:05 np0005629333 nova_compute[244014]: 2026-02-25 13:13:05.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:13:05 np0005629333 nova_compute[244014]: 2026-02-25 13:13:05.938 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:13:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2797: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:07 np0005629333 nova_compute[244014]: 2026-02-25 13:13:07.692 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:13:08 np0005629333 podman[389753]: 2026-02-25 13:13:08.748624354 +0000 UTC m=+0.077995021 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Feb 25 08:13:08 np0005629333 podman[389754]: 2026-02-25 13:13:08.783382904 +0000 UTC m=+0.113017898 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Feb 25 08:13:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:13:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2798: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:10 np0005629333 nova_compute[244014]: 2026-02-25 13:13:10.940 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:13:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2799: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:12 np0005629333 nova_compute[244014]: 2026-02-25 13:13:12.695 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:13:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2800: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:13:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2801: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:15 np0005629333 nova_compute[244014]: 2026-02-25 13:13:15.943 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:13:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2802: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:17 np0005629333 nova_compute[244014]: 2026-02-25 13:13:17.699 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:13:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:13:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2803: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:20 np0005629333 nova_compute[244014]: 2026-02-25 13:13:20.945 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:13:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2804: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:22 np0005629333 nova_compute[244014]: 2026-02-25 13:13:22.702 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:13:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2805: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:23 np0005629333 nova_compute[244014]: 2026-02-25 13:13:23.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:13:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:13:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2806: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:25 np0005629333 nova_compute[244014]: 2026-02-25 13:13:25.947 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:13:25 np0005629333 nova_compute[244014]: 2026-02-25 13:13:25.957 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:13:25 np0005629333 nova_compute[244014]: 2026-02-25 13:13:25.957 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 08:13:25 np0005629333 nova_compute[244014]: 2026-02-25 13:13:25.994 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 08:13:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2807: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:27 np0005629333 nova_compute[244014]: 2026-02-25 13:13:27.705 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.059611) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025209059751, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 870, "num_deletes": 250, "total_data_size": 1220985, "memory_usage": 1237896, "flush_reason": "Manual Compaction"}
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025209103990, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 759052, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58753, "largest_seqno": 59622, "table_properties": {"data_size": 755524, "index_size": 1307, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9394, "raw_average_key_size": 20, "raw_value_size": 747947, "raw_average_value_size": 1640, "num_data_blocks": 59, "num_entries": 456, "num_filter_entries": 456, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772025131, "oldest_key_time": 1772025131, "file_creation_time": 1772025209, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 44447 microseconds, and 3634 cpu microseconds.
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.104068) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 759052 bytes OK
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.104107) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.138657) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.138755) EVENT_LOG_v1 {"time_micros": 1772025209138744, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.138785) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 1216733, prev total WAL file size 1216733, number of live WAL files 2.
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.139517) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323533' seq:72057594037927935, type:22 .. '6D6772737461740032353034' seq:0, type:0; will stop at (end)
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(741KB)], [137(10MB)]
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025209139556, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 11955728, "oldest_snapshot_seqno": -1}
Feb 25 08:13:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2808: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 7909 keys, 9042563 bytes, temperature: kUnknown
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025209322865, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 9042563, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8993316, "index_size": 28398, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19781, "raw_key_size": 206129, "raw_average_key_size": 26, "raw_value_size": 8855881, "raw_average_value_size": 1119, "num_data_blocks": 1102, "num_entries": 7909, "num_filter_entries": 7909, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772025209, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.323154) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 9042563 bytes
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.330555) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 65.2 rd, 49.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 10.7 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(27.7) write-amplify(11.9) OK, records in: 8388, records dropped: 479 output_compression: NoCompression
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.330588) EVENT_LOG_v1 {"time_micros": 1772025209330574, "job": 84, "event": "compaction_finished", "compaction_time_micros": 183397, "compaction_time_cpu_micros": 33913, "output_level": 6, "num_output_files": 1, "total_output_size": 9042563, "num_input_records": 8388, "num_output_records": 7909, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025209330887, "job": 84, "event": "table_file_deletion", "file_number": 139}
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025209332899, "job": 84, "event": "table_file_deletion", "file_number": 137}
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.139425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.332971) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.332977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.332980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.332983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:13:29 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:13:29.332986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:13:29 np0005629333 nova_compute[244014]: 2026-02-25 13:13:29.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:13:29 np0005629333 nova_compute[244014]: 2026-02-25 13:13:29.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 08:13:30 np0005629333 nova_compute[244014]: 2026-02-25 13:13:30.949 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:13:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:13:31
Feb 25 08:13:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:13:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:13:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', '.mgr', 'volumes', 'backups', 'images', 'default.rgw.meta', 'vms', 'default.rgw.control']
Feb 25 08:13:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:13:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2809: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:13:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:13:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:13:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:13:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:13:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:13:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:13:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:13:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:13:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:13:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:13:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:13:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:13:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:13:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:13:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:13:32 np0005629333 nova_compute[244014]: 2026-02-25 13:13:32.710 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:13:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2810: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:13:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2811: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:35 np0005629333 nova_compute[244014]: 2026-02-25 13:13:35.951 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:13:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2812: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:37 np0005629333 nova_compute[244014]: 2026-02-25 13:13:37.713 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:13:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:13:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2813: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:39 np0005629333 podman[389796]: 2026-02-25 13:13:39.721768738 +0000 UTC m=+0.058242003 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 08:13:39 np0005629333 podman[389797]: 2026-02-25 13:13:39.787764859 +0000 UTC m=+0.117043241 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller)
Feb 25 08:13:40 np0005629333 nova_compute[244014]: 2026-02-25 13:13:40.953 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:13:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2814: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:42 np0005629333 nova_compute[244014]: 2026-02-25 13:13:42.716 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:13:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:13:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:13:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:13:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:13:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:13:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:13:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:13:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:13:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:13:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:13:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:13:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:13:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:13:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:13:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:13:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:13:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:13:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:13:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:13:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:13:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:13:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:13:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 08:13:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2815: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:13:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2816: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 25 08:13:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 08:13:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:13:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:13:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:13:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:13:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:13:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:13:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:13:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:13:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:13:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:13:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:13:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:13:45 np0005629333 nova_compute[244014]: 2026-02-25 13:13:45.954 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:13:46 np0005629333 podman[389985]: 2026-02-25 13:13:46.353859584 +0000 UTC m=+0.081572511 container create 151d86828d6b9f9e9fe2c681ab4d25fb774033b167707a9353fb912ccf7dc264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mendel, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:13:46 np0005629333 podman[389985]: 2026-02-25 13:13:46.30725869 +0000 UTC m=+0.034971587 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:13:46 np0005629333 systemd[1]: Started libpod-conmon-151d86828d6b9f9e9fe2c681ab4d25fb774033b167707a9353fb912ccf7dc264.scope.
Feb 25 08:13:46 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:13:46 np0005629333 podman[389985]: 2026-02-25 13:13:46.49377937 +0000 UTC m=+0.221492357 container init 151d86828d6b9f9e9fe2c681ab4d25fb774033b167707a9353fb912ccf7dc264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mendel, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 08:13:46 np0005629333 podman[389985]: 2026-02-25 13:13:46.503499154 +0000 UTC m=+0.231212081 container start 151d86828d6b9f9e9fe2c681ab4d25fb774033b167707a9353fb912ccf7dc264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mendel, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 08:13:46 np0005629333 vigilant_mendel[390001]: 167 167
Feb 25 08:13:46 np0005629333 systemd[1]: libpod-151d86828d6b9f9e9fe2c681ab4d25fb774033b167707a9353fb912ccf7dc264.scope: Deactivated successfully.
Feb 25 08:13:46 np0005629333 podman[389985]: 2026-02-25 13:13:46.525860374 +0000 UTC m=+0.253573351 container attach 151d86828d6b9f9e9fe2c681ab4d25fb774033b167707a9353fb912ccf7dc264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mendel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:13:46 np0005629333 podman[389985]: 2026-02-25 13:13:46.526267326 +0000 UTC m=+0.253980253 container died 151d86828d6b9f9e9fe2c681ab4d25fb774033b167707a9353fb912ccf7dc264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mendel, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:13:46 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e989c288c7b5eda488ebd1edf6a84c9c1b24fbe3efcb56a56e1363a521c657e6-merged.mount: Deactivated successfully.
Feb 25 08:13:46 np0005629333 podman[389985]: 2026-02-25 13:13:46.760994955 +0000 UTC m=+0.488707882 container remove 151d86828d6b9f9e9fe2c681ab4d25fb774033b167707a9353fb912ccf7dc264 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:13:46 np0005629333 systemd[1]: libpod-conmon-151d86828d6b9f9e9fe2c681ab4d25fb774033b167707a9353fb912ccf7dc264.scope: Deactivated successfully.
Feb 25 08:13:46 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 08:13:46 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:13:46 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:13:46 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:13:46 np0005629333 podman[390027]: 2026-02-25 13:13:46.947374441 +0000 UTC m=+0.067437143 container create 91d7b2d403a09ff6c4c0f1aee1cbe6194326ddf548ae1ca9b8ad5400bb36e00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_cannon, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 08:13:46 np0005629333 podman[390027]: 2026-02-25 13:13:46.902990759 +0000 UTC m=+0.023053551 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:13:47 np0005629333 systemd[1]: Started libpod-conmon-91d7b2d403a09ff6c4c0f1aee1cbe6194326ddf548ae1ca9b8ad5400bb36e00a.scope.
Feb 25 08:13:47 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:13:47 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7fa1f76e845dc4ae21c59c24f84fcd7a90b67acc9794955dbf10c90cefb820b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:13:47 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7fa1f76e845dc4ae21c59c24f84fcd7a90b67acc9794955dbf10c90cefb820b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:13:47 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7fa1f76e845dc4ae21c59c24f84fcd7a90b67acc9794955dbf10c90cefb820b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:13:47 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7fa1f76e845dc4ae21c59c24f84fcd7a90b67acc9794955dbf10c90cefb820b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:13:47 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7fa1f76e845dc4ae21c59c24f84fcd7a90b67acc9794955dbf10c90cefb820b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:13:47 np0005629333 podman[390027]: 2026-02-25 13:13:47.077951073 +0000 UTC m=+0.198013875 container init 91d7b2d403a09ff6c4c0f1aee1cbe6194326ddf548ae1ca9b8ad5400bb36e00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_cannon, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:13:47 np0005629333 podman[390027]: 2026-02-25 13:13:47.089082516 +0000 UTC m=+0.209145228 container start 91d7b2d403a09ff6c4c0f1aee1cbe6194326ddf548ae1ca9b8ad5400bb36e00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_cannon, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:13:47 np0005629333 podman[390027]: 2026-02-25 13:13:47.110725297 +0000 UTC m=+0.230788099 container attach 91d7b2d403a09ff6c4c0f1aee1cbe6194326ddf548ae1ca9b8ad5400bb36e00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 08:13:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2817: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:47 np0005629333 systemd-logind[811]: New session 52 of user zuul.
Feb 25 08:13:47 np0005629333 systemd[1]: Started Session 52 of User zuul.
Feb 25 08:13:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:13:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/519517514' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:13:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:13:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/519517514' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:13:47 np0005629333 pensive_cannon[390044]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:13:47 np0005629333 pensive_cannon[390044]: --> All data devices are unavailable
Feb 25 08:13:47 np0005629333 systemd[1]: libpod-91d7b2d403a09ff6c4c0f1aee1cbe6194326ddf548ae1ca9b8ad5400bb36e00a.scope: Deactivated successfully.
Feb 25 08:13:47 np0005629333 podman[390114]: 2026-02-25 13:13:47.623100054 +0000 UTC m=+0.032440586 container died 91d7b2d403a09ff6c4c0f1aee1cbe6194326ddf548ae1ca9b8ad5400bb36e00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_cannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 08:13:47 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d7fa1f76e845dc4ae21c59c24f84fcd7a90b67acc9794955dbf10c90cefb820b-merged.mount: Deactivated successfully.
Feb 25 08:13:47 np0005629333 nova_compute[244014]: 2026-02-25 13:13:47.720 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:13:47 np0005629333 podman[390114]: 2026-02-25 13:13:47.749472608 +0000 UTC m=+0.158813130 container remove 91d7b2d403a09ff6c4c0f1aee1cbe6194326ddf548ae1ca9b8ad5400bb36e00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_cannon, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:13:47 np0005629333 systemd[1]: libpod-conmon-91d7b2d403a09ff6c4c0f1aee1cbe6194326ddf548ae1ca9b8ad5400bb36e00a.scope: Deactivated successfully.
Feb 25 08:13:48 np0005629333 podman[390192]: 2026-02-25 13:13:48.288345854 +0000 UTC m=+0.074259616 container create 5eb339572435cd4c1930d254e9e90dd5798f487ea2f031c130098c6b1033f762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cray, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 08:13:48 np0005629333 podman[390192]: 2026-02-25 13:13:48.247116741 +0000 UTC m=+0.033030563 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:13:48 np0005629333 systemd[1]: Started libpod-conmon-5eb339572435cd4c1930d254e9e90dd5798f487ea2f031c130098c6b1033f762.scope.
Feb 25 08:13:48 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:13:48 np0005629333 podman[390192]: 2026-02-25 13:13:48.413321678 +0000 UTC m=+0.199235490 container init 5eb339572435cd4c1930d254e9e90dd5798f487ea2f031c130098c6b1033f762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cray, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 08:13:48 np0005629333 podman[390192]: 2026-02-25 13:13:48.421144958 +0000 UTC m=+0.207058700 container start 5eb339572435cd4c1930d254e9e90dd5798f487ea2f031c130098c6b1033f762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:13:48 np0005629333 gracious_cray[390208]: 167 167
Feb 25 08:13:48 np0005629333 systemd[1]: libpod-5eb339572435cd4c1930d254e9e90dd5798f487ea2f031c130098c6b1033f762.scope: Deactivated successfully.
Feb 25 08:13:48 np0005629333 nova_compute[244014]: 2026-02-25 13:13:48.431 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:13:48 np0005629333 podman[390192]: 2026-02-25 13:13:48.454023375 +0000 UTC m=+0.239937107 container attach 5eb339572435cd4c1930d254e9e90dd5798f487ea2f031c130098c6b1033f762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cray, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 08:13:48 np0005629333 podman[390192]: 2026-02-25 13:13:48.454449687 +0000 UTC m=+0.240363419 container died 5eb339572435cd4c1930d254e9e90dd5798f487ea2f031c130098c6b1033f762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cray, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 08:13:48 np0005629333 nova_compute[244014]: 2026-02-25 13:13:48.487 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:13:48 np0005629333 nova_compute[244014]: 2026-02-25 13:13:48.487 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:13:48 np0005629333 nova_compute[244014]: 2026-02-25 13:13:48.487 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:13:48 np0005629333 nova_compute[244014]: 2026-02-25 13:13:48.487 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:13:48 np0005629333 nova_compute[244014]: 2026-02-25 13:13:48.487 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:13:48 np0005629333 systemd[1]: var-lib-containers-storage-overlay-06cafe6ad857fad7149c131511fa881ab1402be7ae63aad83419805028cd094a-merged.mount: Deactivated successfully.
Feb 25 08:13:48 np0005629333 podman[390192]: 2026-02-25 13:13:48.641335928 +0000 UTC m=+0.427249700 container remove 5eb339572435cd4c1930d254e9e90dd5798f487ea2f031c130098c6b1033f762 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 08:13:48 np0005629333 systemd[1]: libpod-conmon-5eb339572435cd4c1930d254e9e90dd5798f487ea2f031c130098c6b1033f762.scope: Deactivated successfully.
Feb 25 08:13:48 np0005629333 podman[390255]: 2026-02-25 13:13:48.861774074 +0000 UTC m=+0.079949886 container create 8a46fcd055710aa59c6164db0588eb8603a90c51459065cbfc1359be6e5e9cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_ardinghelli, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 08:13:48 np0005629333 podman[390255]: 2026-02-25 13:13:48.817227187 +0000 UTC m=+0.035403049 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:13:48 np0005629333 systemd[1]: Started libpod-conmon-8a46fcd055710aa59c6164db0588eb8603a90c51459065cbfc1359be6e5e9cd6.scope.
Feb 25 08:13:48 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:13:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db674359c749870a527409bae09f6c29dafc9cda2a4abf8a78eef2ac5bc94a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:13:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db674359c749870a527409bae09f6c29dafc9cda2a4abf8a78eef2ac5bc94a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:13:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db674359c749870a527409bae09f6c29dafc9cda2a4abf8a78eef2ac5bc94a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:13:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db674359c749870a527409bae09f6c29dafc9cda2a4abf8a78eef2ac5bc94a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:13:48 np0005629333 podman[390255]: 2026-02-25 13:13:48.991973935 +0000 UTC m=+0.210149797 container init 8a46fcd055710aa59c6164db0588eb8603a90c51459065cbfc1359be6e5e9cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True)
Feb 25 08:13:49 np0005629333 podman[390255]: 2026-02-25 13:13:49.001077392 +0000 UTC m=+0.219253204 container start 8a46fcd055710aa59c6164db0588eb8603a90c51459065cbfc1359be6e5e9cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_ardinghelli, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:13:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:13:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/605900426' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:13:49 np0005629333 podman[390255]: 2026-02-25 13:13:49.012169975 +0000 UTC m=+0.230345847 container attach 8a46fcd055710aa59c6164db0588eb8603a90c51459065cbfc1359be6e5e9cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_ardinghelli, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:13:49 np0005629333 nova_compute[244014]: 2026-02-25 13:13:49.021 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:13:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:13:49 np0005629333 nova_compute[244014]: 2026-02-25 13:13:49.172 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:13:49 np0005629333 nova_compute[244014]: 2026-02-25 13:13:49.175 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3544MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:13:49 np0005629333 nova_compute[244014]: 2026-02-25 13:13:49.175 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:13:49 np0005629333 nova_compute[244014]: 2026-02-25 13:13:49.176 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:13:49 np0005629333 nova_compute[244014]: 2026-02-25 13:13:49.241 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:13:49 np0005629333 nova_compute[244014]: 2026-02-25 13:13:49.241 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:13:49 np0005629333 nova_compute[244014]: 2026-02-25 13:13:49.256 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:13:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2818: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]: {
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:    "0": [
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:        {
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "devices": [
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "/dev/loop3"
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            ],
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "lv_name": "ceph_lv0",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "lv_size": "21470642176",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "name": "ceph_lv0",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "tags": {
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.cluster_name": "ceph",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.crush_device_class": "",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.encrypted": "0",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.objectstore": "bluestore",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.osd_id": "0",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.type": "block",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.vdo": "0",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.with_tpm": "0"
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            },
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "type": "block",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "vg_name": "ceph_vg0"
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:        }
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:    ],
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:    "1": [
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:        {
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "devices": [
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "/dev/loop4"
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            ],
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "lv_name": "ceph_lv1",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "lv_size": "21470642176",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "name": "ceph_lv1",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "tags": {
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.cluster_name": "ceph",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.crush_device_class": "",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.encrypted": "0",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.objectstore": "bluestore",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.osd_id": "1",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.type": "block",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.vdo": "0",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.with_tpm": "0"
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            },
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "type": "block",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "vg_name": "ceph_vg1"
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:        }
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:    ],
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:    "2": [
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:        {
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "devices": [
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "/dev/loop5"
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            ],
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "lv_name": "ceph_lv2",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "lv_size": "21470642176",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "name": "ceph_lv2",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "tags": {
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.cluster_name": "ceph",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.crush_device_class": "",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.encrypted": "0",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.objectstore": "bluestore",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.osd_id": "2",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.type": "block",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.vdo": "0",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:                "ceph.with_tpm": "0"
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            },
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "type": "block",
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:            "vg_name": "ceph_vg2"
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:        }
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]:    ]
Feb 25 08:13:49 np0005629333 awesome_ardinghelli[390271]: }
Feb 25 08:13:49 np0005629333 systemd[1]: libpod-8a46fcd055710aa59c6164db0588eb8603a90c51459065cbfc1359be6e5e9cd6.scope: Deactivated successfully.
Feb 25 08:13:49 np0005629333 podman[390283]: 2026-02-25 13:13:49.381786707 +0000 UTC m=+0.024252914 container died 8a46fcd055710aa59c6164db0588eb8603a90c51459065cbfc1359be6e5e9cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_ardinghelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 08:13:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:13:49.423 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 08:13:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:13:49.424 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 25 08:13:49 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:13:49.424 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Feb 25 08:13:49 np0005629333 nova_compute[244014]: 2026-02-25 13:13:49.424 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:13:49 np0005629333 systemd[1]: var-lib-containers-storage-overlay-3db674359c749870a527409bae09f6c29dafc9cda2a4abf8a78eef2ac5bc94a3-merged.mount: Deactivated successfully.
Feb 25 08:13:49 np0005629333 podman[390283]: 2026-02-25 13:13:49.542674064 +0000 UTC m=+0.185140261 container remove 8a46fcd055710aa59c6164db0588eb8603a90c51459065cbfc1359be6e5e9cd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_ardinghelli, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:13:49 np0005629333 systemd[1]: libpod-conmon-8a46fcd055710aa59c6164db0588eb8603a90c51459065cbfc1359be6e5e9cd6.scope: Deactivated successfully.
Feb 25 08:13:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:13:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2807305594' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:13:49 np0005629333 nova_compute[244014]: 2026-02-25 13:13:49.857 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:13:49 np0005629333 nova_compute[244014]: 2026-02-25 13:13:49.862 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:13:49 np0005629333 nova_compute[244014]: 2026-02-25 13:13:49.882 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:13:49 np0005629333 nova_compute[244014]: 2026-02-25 13:13:49.884 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:13:49 np0005629333 nova_compute[244014]: 2026-02-25 13:13:49.885 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:13:50 np0005629333 podman[390426]: 2026-02-25 13:13:50.024037248 +0000 UTC m=+0.062669258 container create fabd23f5b3f9820822ead2503cc9fda271a69efa65364e44612d421782a5f249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mirzakhani, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 08:13:50 np0005629333 podman[390426]: 2026-02-25 13:13:49.97906576 +0000 UTC m=+0.017697760 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:13:50 np0005629333 systemd[1]: Started libpod-conmon-fabd23f5b3f9820822ead2503cc9fda271a69efa65364e44612d421782a5f249.scope.
Feb 25 08:13:50 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:13:50 np0005629333 podman[390426]: 2026-02-25 13:13:50.153481468 +0000 UTC m=+0.192113518 container init fabd23f5b3f9820822ead2503cc9fda271a69efa65364e44612d421782a5f249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 08:13:50 np0005629333 podman[390426]: 2026-02-25 13:13:50.16418436 +0000 UTC m=+0.202816340 container start fabd23f5b3f9820822ead2503cc9fda271a69efa65364e44612d421782a5f249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:13:50 np0005629333 jovial_mirzakhani[390508]: 167 167
Feb 25 08:13:50 np0005629333 systemd[1]: libpod-fabd23f5b3f9820822ead2503cc9fda271a69efa65364e44612d421782a5f249.scope: Deactivated successfully.
Feb 25 08:13:50 np0005629333 podman[390426]: 2026-02-25 13:13:50.195167444 +0000 UTC m=+0.233799494 container attach fabd23f5b3f9820822ead2503cc9fda271a69efa65364e44612d421782a5f249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:13:50 np0005629333 podman[390426]: 2026-02-25 13:13:50.195514394 +0000 UTC m=+0.234146404 container died fabd23f5b3f9820822ead2503cc9fda271a69efa65364e44612d421782a5f249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Feb 25 08:13:50 np0005629333 systemd[1]: var-lib-containers-storage-overlay-0f684db7d63c84d66b26e028e617b9a43a891e681ed1745088b7bf770bc14721-merged.mount: Deactivated successfully.
Feb 25 08:13:50 np0005629333 podman[390426]: 2026-02-25 13:13:50.409437126 +0000 UTC m=+0.448069096 container remove fabd23f5b3f9820822ead2503cc9fda271a69efa65364e44612d421782a5f249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_mirzakhani, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 08:13:50 np0005629333 systemd[1]: libpod-conmon-fabd23f5b3f9820822ead2503cc9fda271a69efa65364e44612d421782a5f249.scope: Deactivated successfully.
Feb 25 08:13:50 np0005629333 ceph-osd[89088]: bluestore.MempoolThread fragmentation_score=0.004165 took=0.000096s
Feb 25 08:13:50 np0005629333 podman[390604]: 2026-02-25 13:13:50.590238514 +0000 UTC m=+0.077183427 container create 7956f4290db90bd5afaf2097680f5992db26e3b9385c1a251b8552dcd9f5b508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 08:13:50 np0005629333 ceph-osd[86953]: bluestore.MempoolThread fragmentation_score=0.004726 took=0.000082s
Feb 25 08:13:50 np0005629333 ceph-osd[88012]: bluestore.MempoolThread fragmentation_score=0.004788 took=0.000089s
Feb 25 08:13:50 np0005629333 podman[390604]: 2026-02-25 13:13:50.546938744 +0000 UTC m=+0.033883707 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:13:50 np0005629333 systemd[1]: Started libpod-conmon-7956f4290db90bd5afaf2097680f5992db26e3b9385c1a251b8552dcd9f5b508.scope.
Feb 25 08:13:50 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:13:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/014b91665c03d8cbb525f583ba3133ab09952b80dfe0b26a0bb08a1bacc17e53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:13:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/014b91665c03d8cbb525f583ba3133ab09952b80dfe0b26a0bb08a1bacc17e53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:13:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/014b91665c03d8cbb525f583ba3133ab09952b80dfe0b26a0bb08a1bacc17e53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:13:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/014b91665c03d8cbb525f583ba3133ab09952b80dfe0b26a0bb08a1bacc17e53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:13:50 np0005629333 podman[390604]: 2026-02-25 13:13:50.727981559 +0000 UTC m=+0.214926512 container init 7956f4290db90bd5afaf2097680f5992db26e3b9385c1a251b8552dcd9f5b508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 08:13:50 np0005629333 podman[390604]: 2026-02-25 13:13:50.736092617 +0000 UTC m=+0.223037500 container start 7956f4290db90bd5afaf2097680f5992db26e3b9385c1a251b8552dcd9f5b508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 08:13:50 np0005629333 podman[390604]: 2026-02-25 13:13:50.773755199 +0000 UTC m=+0.260700172 container attach 7956f4290db90bd5afaf2097680f5992db26e3b9385c1a251b8552dcd9f5b508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_mclean, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:13:50 np0005629333 nova_compute[244014]: 2026-02-25 13:13:50.956 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:13:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2819: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:51 np0005629333 nova_compute[244014]: 2026-02-25 13:13:51.330 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:13:51 np0005629333 nova_compute[244014]: 2026-02-25 13:13:51.331 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:13:51 np0005629333 nova_compute[244014]: 2026-02-25 13:13:51.331 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:13:51 np0005629333 nova_compute[244014]: 2026-02-25 13:13:51.349 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:13:51 np0005629333 nova_compute[244014]: 2026-02-25 13:13:51.350 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:13:51 np0005629333 lvm[390702]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:13:51 np0005629333 lvm[390699]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:13:51 np0005629333 lvm[390702]: VG ceph_vg2 finished
Feb 25 08:13:51 np0005629333 lvm[390699]: VG ceph_vg0 finished
Feb 25 08:13:51 np0005629333 lvm[390700]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:13:51 np0005629333 lvm[390700]: VG ceph_vg1 finished
Feb 25 08:13:51 np0005629333 quirky_mclean[390621]: {}
Feb 25 08:13:51 np0005629333 systemd[1]: libpod-7956f4290db90bd5afaf2097680f5992db26e3b9385c1a251b8552dcd9f5b508.scope: Deactivated successfully.
Feb 25 08:13:51 np0005629333 systemd[1]: libpod-7956f4290db90bd5afaf2097680f5992db26e3b9385c1a251b8552dcd9f5b508.scope: Consumed 1.212s CPU time.
Feb 25 08:13:51 np0005629333 podman[390604]: 2026-02-25 13:13:51.577991337 +0000 UTC m=+1.064936270 container died 7956f4290db90bd5afaf2097680f5992db26e3b9385c1a251b8552dcd9f5b508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_mclean, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:13:51 np0005629333 systemd[1]: var-lib-containers-storage-overlay-014b91665c03d8cbb525f583ba3133ab09952b80dfe0b26a0bb08a1bacc17e53-merged.mount: Deactivated successfully.
Feb 25 08:13:51 np0005629333 podman[390604]: 2026-02-25 13:13:51.811684517 +0000 UTC m=+1.298629440 container remove 7956f4290db90bd5afaf2097680f5992db26e3b9385c1a251b8552dcd9f5b508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_mclean, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:13:51 np0005629333 systemd[1]: libpod-conmon-7956f4290db90bd5afaf2097680f5992db26e3b9385c1a251b8552dcd9f5b508.scope: Deactivated successfully.
Feb 25 08:13:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:13:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:13:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:13:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:13:52 np0005629333 systemd[1]: session-52.scope: Deactivated successfully.
Feb 25 08:13:52 np0005629333 systemd-logind[811]: Session 52 logged out. Waiting for processes to exit.
Feb 25 08:13:52 np0005629333 systemd-logind[811]: Removed session 52.
Feb 25 08:13:52 np0005629333 nova_compute[244014]: 2026-02-25 13:13:52.724 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:13:52 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:13:52 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:13:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2820: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:53 np0005629333 nova_compute[244014]: 2026-02-25 13:13:53.891 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:13:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:13:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:13:55.053 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:13:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:13:55.054 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:13:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:13:55.054 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:13:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2821: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:55 np0005629333 nova_compute[244014]: 2026-02-25 13:13:55.959 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:13:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2822: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:13:57 np0005629333 nova_compute[244014]: 2026-02-25 13:13:57.729 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:13:57 np0005629333 nova_compute[244014]: 2026-02-25 13:13:57.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:13:57 np0005629333 nova_compute[244014]: 2026-02-25 13:13:57.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:13:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:13:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2823: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:00 np0005629333 nova_compute[244014]: 2026-02-25 13:14:00.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:14:00 np0005629333 nova_compute[244014]: 2026-02-25 13:14:00.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 08:14:00 np0005629333 nova_compute[244014]: 2026-02-25 13:14:00.961 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:14:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2824: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:14:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:14:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:14:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:14:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:14:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:14:02 np0005629333 nova_compute[244014]: 2026-02-25 13:14:02.732 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:14:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2825: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:14:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2826: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:05 np0005629333 nova_compute[244014]: 2026-02-25 13:14:05.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:14:05 np0005629333 nova_compute[244014]: 2026-02-25 13:14:05.878 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:14:05 np0005629333 nova_compute[244014]: 2026-02-25 13:14:05.965 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:14:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2827: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:07 np0005629333 nova_compute[244014]: 2026-02-25 13:14:07.736 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:14:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:14:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2828: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:10 np0005629333 nova_compute[244014]: 2026-02-25 13:14:10.968 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:14:11 np0005629333 podman[390767]: 2026-02-25 13:14:11.15451592 +0000 UTC m=+0.098984422 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 25 08:14:11 np0005629333 podman[390768]: 2026-02-25 13:14:11.167553508 +0000 UTC m=+0.109065727 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 25 08:14:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2829: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:12 np0005629333 nova_compute[244014]: 2026-02-25 13:14:12.740 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:14:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2830: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:14:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2831: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:15 np0005629333 nova_compute[244014]: 2026-02-25 13:14:15.972 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:14:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2832: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:17 np0005629333 nova_compute[244014]: 2026-02-25 13:14:17.745 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:14:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:14:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2833: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:19 np0005629333 nova_compute[244014]: 2026-02-25 13:14:19.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:14:20 np0005629333 nova_compute[244014]: 2026-02-25 13:14:20.972 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:14:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2834: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:22 np0005629333 nova_compute[244014]: 2026-02-25 13:14:22.748 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:14:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2835: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:14:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2836: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:25 np0005629333 nova_compute[244014]: 2026-02-25 13:14:25.974 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:14:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2837: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:27 np0005629333 nova_compute[244014]: 2026-02-25 13:14:27.751 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:14:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:14:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2838: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:30 np0005629333 nova_compute[244014]: 2026-02-25 13:14:30.977 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:14:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:14:31
Feb 25 08:14:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:14:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:14:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'backups', 'default.rgw.control', 'images', '.mgr', 'volumes', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta']
Feb 25 08:14:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:14:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2839: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:14:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:14:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:14:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:14:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:14:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:14:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:14:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:14:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:14:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:14:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:14:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:14:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:14:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:14:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:14:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:14:32 np0005629333 nova_compute[244014]: 2026-02-25 13:14:32.754 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:14:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2840: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:14:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2841: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:35 np0005629333 nova_compute[244014]: 2026-02-25 13:14:35.980 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:14:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2842: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:37 np0005629333 nova_compute[244014]: 2026-02-25 13:14:37.757 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:14:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:14:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2843: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:40 np0005629333 nova_compute[244014]: 2026-02-25 13:14:40.982 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:14:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2844: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:41 np0005629333 podman[390811]: 2026-02-25 13:14:41.751837858 +0000 UTC m=+0.089420962 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Feb 25 08:14:41 np0005629333 podman[390812]: 2026-02-25 13:14:41.770786953 +0000 UTC m=+0.105805655 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Feb 25 08:14:42 np0005629333 nova_compute[244014]: 2026-02-25 13:14:42.761 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:14:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:14:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:14:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:14:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:14:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:14:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:14:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:14:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:14:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:14:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:14:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:14:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:14:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:14:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:14:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:14:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:14:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:14:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:14:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:14:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:14:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:14:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:14:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 08:14:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2845: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:14:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2846: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:45 np0005629333 nova_compute[244014]: 2026-02-25 13:14:45.984 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:14:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2847: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:14:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1043124153' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:14:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:14:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1043124153' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:14:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:14:47.723 157129 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:e1:5e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '7e:5d:8d:ad:cd:13'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Feb 25 08:14:47 np0005629333 nova_compute[244014]: 2026-02-25 13:14:47.724 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:14:47 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:14:47.726 157129 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Feb 25 08:14:47 np0005629333 nova_compute[244014]: 2026-02-25 13:14:47.763 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:14:47 np0005629333 systemd-logind[811]: New session 53 of user zuul.
Feb 25 08:14:47 np0005629333 systemd[1]: Started Session 53 of User zuul.
Feb 25 08:14:48 np0005629333 nova_compute[244014]: 2026-02-25 13:14:48.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:14:48 np0005629333 nova_compute[244014]: 2026-02-25 13:14:48.921 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:14:48 np0005629333 nova_compute[244014]: 2026-02-25 13:14:48.922 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:14:48 np0005629333 nova_compute[244014]: 2026-02-25 13:14:48.922 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:14:48 np0005629333 nova_compute[244014]: 2026-02-25 13:14:48.923 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:14:48 np0005629333 nova_compute[244014]: 2026-02-25 13:14:48.923 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:14:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:14:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2848: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:14:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2653152889' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:14:49 np0005629333 nova_compute[244014]: 2026-02-25 13:14:49.518 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:14:49 np0005629333 nova_compute[244014]: 2026-02-25 13:14:49.736 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:14:49 np0005629333 nova_compute[244014]: 2026-02-25 13:14:49.738 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3597MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:14:49 np0005629333 nova_compute[244014]: 2026-02-25 13:14:49.739 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:14:49 np0005629333 nova_compute[244014]: 2026-02-25 13:14:49.740 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:14:49 np0005629333 nova_compute[244014]: 2026-02-25 13:14:49.821 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:14:49 np0005629333 nova_compute[244014]: 2026-02-25 13:14:49.822 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:14:49 np0005629333 nova_compute[244014]: 2026-02-25 13:14:49.849 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:14:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:14:50 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1874255157' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:14:50 np0005629333 nova_compute[244014]: 2026-02-25 13:14:50.420 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:14:50 np0005629333 nova_compute[244014]: 2026-02-25 13:14:50.426 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:14:50 np0005629333 nova_compute[244014]: 2026-02-25 13:14:50.450 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:14:50 np0005629333 nova_compute[244014]: 2026-02-25 13:14:50.452 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:14:50 np0005629333 nova_compute[244014]: 2026-02-25 13:14:50.452 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:14:50 np0005629333 nova_compute[244014]: 2026-02-25 13:14:50.988 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:14:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2849: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:52 np0005629333 nova_compute[244014]: 2026-02-25 13:14:52.454 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:14:52 np0005629333 nova_compute[244014]: 2026-02-25 13:14:52.454 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:14:52 np0005629333 nova_compute[244014]: 2026-02-25 13:14:52.454 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:14:52 np0005629333 nova_compute[244014]: 2026-02-25 13:14:52.493 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:14:52 np0005629333 nova_compute[244014]: 2026-02-25 13:14:52.493 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:14:52 np0005629333 systemd-logind[811]: New session 54 of user zuul.
Feb 25 08:14:52 np0005629333 systemd[1]: Started Session 54 of User zuul.
Feb 25 08:14:52 np0005629333 nova_compute[244014]: 2026-02-25 13:14:52.766 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:14:52 np0005629333 podman[391170]: 2026-02-25 13:14:52.777042757 +0000 UTC m=+0.263151112 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 08:14:52 np0005629333 podman[391170]: 2026-02-25 13:14:52.977761047 +0000 UTC m=+0.463869342 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 08:14:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2850: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:14:53 np0005629333 systemd[1]: Reloading.
Feb 25 08:14:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:14:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:14:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:14:54 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 08:14:54 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 08:14:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:14:54 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:14:54 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:14:55 np0005629333 systemd[1]: Reloading.
Feb 25 08:14:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:14:55.053 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:14:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:14:55.055 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:14:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:14:55.055 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
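The acquiring/acquired/released triple above is the standard oslo.concurrency pattern: the monitor method is wrapped with lockutils.synchronized, whose wrapper emits exactly these debug lines. A minimal sketch, assuming only the lock name taken from the log:

from oslo_concurrency import lockutils

@lockutils.synchronized('_check_child_processes')
def _check_child_processes():
    # Runs with the named in-process lock held; with debug logging enabled
    # the wrapper logs the "Acquiring" / "acquired" / "released" lines above.
    pass

_check_child_processes()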
Feb 25 08:14:55 np0005629333 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 25 08:14:55 np0005629333 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 25 08:14:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2851: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:55 np0005629333 systemd[1]: Starting Podman API Socket...
Feb 25 08:14:55 np0005629333 systemd[1]: Listening on Podman API Socket.
Feb 25 08:14:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:14:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:14:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:14:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:14:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:14:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:14:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:14:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:14:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:14:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:14:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:14:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
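Each handle_command/dispatch pair above is a client-issued monitor command (here from the cephadm mgr module). A minimal sketch of issuing the same generate-minimal-conf command through the python-rados binding, assuming a readable /etc/ceph/ceph.conf and a usable keyring on the host:

import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
cmd = json.dumps({"prefix": "config generate-minimal-conf"})
ret, outbuf, errs = cluster.mon_command(cmd, b'')  # same command the audit log shows
print(ret, outbuf.decode())
cluster.shutdown()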
Feb 25 08:14:55 np0005629333 dbus-broker-launch[795]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Feb 25 08:14:55 np0005629333 systemd[1]: podman.socket: Deactivated successfully.
Feb 25 08:14:55 np0005629333 systemd[1]: Closed Podman API Socket.
Feb 25 08:14:55 np0005629333 systemd[1]: Stopping Podman API Socket...
Feb 25 08:14:55 np0005629333 systemd[1]: Starting Podman API Socket...
Feb 25 08:14:55 np0005629333 systemd[1]: Listening on Podman API Socket.
Feb 25 08:14:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:14:55.728 157129 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a594384c-d614-4492-9e0a-4d6ec095920c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 25 08:14:55 np0005629333 systemd-logind[811]: New session 55 of user zuul.
Feb 25 08:14:55 np0005629333 systemd[1]: Started Session 55 of User zuul.
Feb 25 08:14:55 np0005629333 systemd[1]: Starting Podman API Service...
Feb 25 08:14:55 np0005629333 systemd[1]: Started Podman API Service.
Feb 25 08:14:55 np0005629333 podman[391760]: time="2026-02-25T13:14:55Z" level=info msg="/usr/bin/podman filtering at log level info"
Feb 25 08:14:55 np0005629333 podman[391760]: time="2026-02-25T13:14:55Z" level=info msg="Setting parallel job count to 25"
Feb 25 08:14:55 np0005629333 podman[391760]: time="2026-02-25T13:14:55Z" level=info msg="Using sqlite as database backend"
Feb 25 08:14:55 np0005629333 nova_compute[244014]: 2026-02-25 13:14:55.912 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:14:55 np0005629333 podman[391749]: 2026-02-25 13:14:55.937310092 +0000 UTC m=+0.095369870 container create ffa7b13af04acf74ab5e7442bc84dacb7134b6006564f481173beb95c9641b9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jackson, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:14:55 np0005629333 podman[391760]: time="2026-02-25T13:14:55Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Feb 25 08:14:55 np0005629333 podman[391760]: time="2026-02-25T13:14:55Z" level=info msg="Using systemd socket activation to determine API endpoint"
Feb 25 08:14:55 np0005629333 podman[391760]: time="2026-02-25T13:14:55Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Feb 25 08:14:55 np0005629333 podman[391749]: 2026-02-25 13:14:55.864754966 +0000 UTC m=+0.022814764 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:14:55 np0005629333 podman[391760]: @ - - [25/Feb/2026:13:14:55 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Feb 25 08:14:55 np0005629333 nova_compute[244014]: 2026-02-25 13:14:55.991 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:14:56 np0005629333 systemd[1]: Started libpod-conmon-ffa7b13af04acf74ab5e7442bc84dacb7134b6006564f481173beb95c9641b9a.scope.
Feb 25 08:14:56 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:14:56 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:14:56 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:14:56 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:14:56 np0005629333 podman[391749]: 2026-02-25 13:14:56.182327991 +0000 UTC m=+0.340387759 container init ffa7b13af04acf74ab5e7442bc84dacb7134b6006564f481173beb95c9641b9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jackson, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True)
Feb 25 08:14:56 np0005629333 podman[391749]: 2026-02-25 13:14:56.191759437 +0000 UTC m=+0.349819225 container start ffa7b13af04acf74ab5e7442bc84dacb7134b6006564f481173beb95c9641b9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jackson, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 08:14:56 np0005629333 laughing_jackson[391780]: 167 167
Feb 25 08:14:56 np0005629333 systemd[1]: libpod-ffa7b13af04acf74ab5e7442bc84dacb7134b6006564f481173beb95c9641b9a.scope: Deactivated successfully.
Feb 25 08:14:56 np0005629333 podman[391749]: 2026-02-25 13:14:56.230797858 +0000 UTC m=+0.388857636 container attach ffa7b13af04acf74ab5e7442bc84dacb7134b6006564f481173beb95c9641b9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jackson, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 08:14:56 np0005629333 podman[391760]: 2026-02-25 13:14:56.231949 +0000 UTC m=+0.352874741 container died ffa7b13af04acf74ab5e7442bc84dacb7134b6006564f481173beb95c9641b9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 08:14:56 np0005629333 podman[391760]: @ - - [25/Feb/2026:13:14:55 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 22903 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
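The HEAD /libpod/_ping and GET /libpod/containers/json requests above carry the PodmanPy user agent, i.e. a podman-py client talking to the socket systemd just activated. A minimal sketch of the same two calls, assuming podman-py is installed and the socket path from the log:

from podman import PodmanClient

with PodmanClient(base_url="unix:///run/podman/podman.sock") as client:
    print(client.ping())                # HEAD /libpod/_ping, as logged above
    for c in client.containers.list():  # GET /libpod/containers/json
        print(c.id[:12], c.name, c.status)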
Feb 25 08:14:56 np0005629333 systemd[1]: var-lib-containers-storage-overlay-29ac0a6ec45d2b7b01c59176f7d3214958938745e6534066b192d7b7f33af01a-merged.mount: Deactivated successfully.
Feb 25 08:14:56 np0005629333 podman[391749]: 2026-02-25 13:14:56.595935014 +0000 UTC m=+0.753994792 container remove ffa7b13af04acf74ab5e7442bc84dacb7134b6006564f481173beb95c9641b9a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:14:56 np0005629333 systemd[1]: libpod-conmon-ffa7b13af04acf74ab5e7442bc84dacb7134b6006564f481173beb95c9641b9a.scope: Deactivated successfully.
Feb 25 08:14:56 np0005629333 podman[391803]: 2026-02-25 13:14:56.760124104 +0000 UTC m=+0.084198265 container create 08a98ed288ddd01b5daa61dc689e2eb3894bbdd0d15c3123c3cc56ea3d138cb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:14:56 np0005629333 podman[391803]: 2026-02-25 13:14:56.709278201 +0000 UTC m=+0.033352432 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:14:56 np0005629333 systemd[1]: Started libpod-conmon-08a98ed288ddd01b5daa61dc689e2eb3894bbdd0d15c3123c3cc56ea3d138cb4.scope.
Feb 25 08:14:56 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:14:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd43e65fbd99e3cc9a8d6fc3feb8ae4c8bbe10522ba8d67c92aea2c2d008d3fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:14:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd43e65fbd99e3cc9a8d6fc3feb8ae4c8bbe10522ba8d67c92aea2c2d008d3fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:14:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd43e65fbd99e3cc9a8d6fc3feb8ae4c8bbe10522ba8d67c92aea2c2d008d3fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:14:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd43e65fbd99e3cc9a8d6fc3feb8ae4c8bbe10522ba8d67c92aea2c2d008d3fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:14:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd43e65fbd99e3cc9a8d6fc3feb8ae4c8bbe10522ba8d67c92aea2c2d008d3fa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
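The 0x7fffffff in the xfs remount messages above is the signed 32-bit time_t ceiling, which is where "until 2038" comes from:

from datetime import datetime, timezone

# 0x7fffffff seconds after the Unix epoch is the 32-bit timestamp limit.
print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00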
Feb 25 08:14:56 np0005629333 podman[391803]: 2026-02-25 13:14:56.947503798 +0000 UTC m=+0.271577989 container init 08a98ed288ddd01b5daa61dc689e2eb3894bbdd0d15c3123c3cc56ea3d138cb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bose, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:14:56 np0005629333 podman[391803]: 2026-02-25 13:14:56.957887361 +0000 UTC m=+0.281961552 container start 08a98ed288ddd01b5daa61dc689e2eb3894bbdd0d15c3123c3cc56ea3d138cb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bose, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:14:57 np0005629333 podman[391803]: 2026-02-25 13:14:56.999942007 +0000 UTC m=+0.324016198 container attach 08a98ed288ddd01b5daa61dc689e2eb3894bbdd0d15c3123c3cc56ea3d138cb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bose, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 08:14:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2852: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:57 np0005629333 xenodochial_bose[391820]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:14:57 np0005629333 xenodochial_bose[391820]: --> All data devices are unavailable
Feb 25 08:14:57 np0005629333 systemd[1]: libpod-08a98ed288ddd01b5daa61dc689e2eb3894bbdd0d15c3123c3cc56ea3d138cb4.scope: Deactivated successfully.
Feb 25 08:14:57 np0005629333 podman[391803]: 2026-02-25 13:14:57.468748137 +0000 UTC m=+0.792822318 container died 08a98ed288ddd01b5daa61dc689e2eb3894bbdd0d15c3123c3cc56ea3d138cb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 08:14:57 np0005629333 systemd[1]: var-lib-containers-storage-overlay-fd43e65fbd99e3cc9a8d6fc3feb8ae4c8bbe10522ba8d67c92aea2c2d008d3fa-merged.mount: Deactivated successfully.
Feb 25 08:14:57 np0005629333 nova_compute[244014]: 2026-02-25 13:14:57.769 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:14:57 np0005629333 podman[391803]: 2026-02-25 13:14:57.815057873 +0000 UTC m=+1.139132024 container remove 08a98ed288ddd01b5daa61dc689e2eb3894bbdd0d15c3123c3cc56ea3d138cb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_bose, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:14:57 np0005629333 systemd[1]: libpod-conmon-08a98ed288ddd01b5daa61dc689e2eb3894bbdd0d15c3123c3cc56ea3d138cb4.scope: Deactivated successfully.
Feb 25 08:14:58 np0005629333 podman[391916]: 2026-02-25 13:14:58.381177737 +0000 UTC m=+0.114007916 container create 1d65213661a598e076591ce9aeb77365e4a3b56a4b76ca2d76dfe5b00e356e96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:14:58 np0005629333 podman[391916]: 2026-02-25 13:14:58.305529303 +0000 UTC m=+0.038359572 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:14:58 np0005629333 systemd[1]: Started libpod-conmon-1d65213661a598e076591ce9aeb77365e4a3b56a4b76ca2d76dfe5b00e356e96.scope.
Feb 25 08:14:58 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:14:58 np0005629333 podman[391916]: 2026-02-25 13:14:58.518802468 +0000 UTC m=+0.251632727 container init 1d65213661a598e076591ce9aeb77365e4a3b56a4b76ca2d76dfe5b00e356e96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:14:58 np0005629333 podman[391916]: 2026-02-25 13:14:58.528291065 +0000 UTC m=+0.261121264 container start 1d65213661a598e076591ce9aeb77365e4a3b56a4b76ca2d76dfe5b00e356e96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 08:14:58 np0005629333 exciting_hamilton[391933]: 167 167
Feb 25 08:14:58 np0005629333 systemd[1]: libpod-1d65213661a598e076591ce9aeb77365e4a3b56a4b76ca2d76dfe5b00e356e96.scope: Deactivated successfully.
Feb 25 08:14:58 np0005629333 podman[391916]: 2026-02-25 13:14:58.556900362 +0000 UTC m=+0.289730621 container attach 1d65213661a598e076591ce9aeb77365e4a3b56a4b76ca2d76dfe5b00e356e96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 08:14:58 np0005629333 podman[391916]: 2026-02-25 13:14:58.557607022 +0000 UTC m=+0.290437221 container died 1d65213661a598e076591ce9aeb77365e4a3b56a4b76ca2d76dfe5b00e356e96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 08:14:58 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e64bc4e1c02ea56fa7a3b4adb6ef692f2c841aef9042aa362d675a2349fe7ed4-merged.mount: Deactivated successfully.
Feb 25 08:14:58 np0005629333 podman[391916]: 2026-02-25 13:14:58.770628139 +0000 UTC m=+0.503458338 container remove 1d65213661a598e076591ce9aeb77365e4a3b56a4b76ca2d76dfe5b00e356e96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_hamilton, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:14:58 np0005629333 systemd[1]: libpod-conmon-1d65213661a598e076591ce9aeb77365e4a3b56a4b76ca2d76dfe5b00e356e96.scope: Deactivated successfully.
Feb 25 08:14:58 np0005629333 nova_compute[244014]: 2026-02-25 13:14:58.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:14:58 np0005629333 podman[391959]: 2026-02-25 13:14:58.992779023 +0000 UTC m=+0.081672314 container create b8fb7cc79a8d6380d35f30e776a6ce35f5d9b134a9befd697263156af04da5e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 08:14:59 np0005629333 podman[391959]: 2026-02-25 13:14:58.945685195 +0000 UTC m=+0.034578556 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:14:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:14:59 np0005629333 systemd[1]: Started libpod-conmon-b8fb7cc79a8d6380d35f30e776a6ce35f5d9b134a9befd697263156af04da5e9.scope.
Feb 25 08:14:59 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:14:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8644fae478fb2cadf732445acd0cfeefa0b4f88f115125a9743f346c8e86975e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:14:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8644fae478fb2cadf732445acd0cfeefa0b4f88f115125a9743f346c8e86975e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:14:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8644fae478fb2cadf732445acd0cfeefa0b4f88f115125a9743f346c8e86975e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:14:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8644fae478fb2cadf732445acd0cfeefa0b4f88f115125a9743f346c8e86975e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:14:59 np0005629333 podman[391959]: 2026-02-25 13:14:59.178029696 +0000 UTC m=+0.266922977 container init b8fb7cc79a8d6380d35f30e776a6ce35f5d9b134a9befd697263156af04da5e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:14:59 np0005629333 podman[391959]: 2026-02-25 13:14:59.186821144 +0000 UTC m=+0.275714445 container start b8fb7cc79a8d6380d35f30e776a6ce35f5d9b134a9befd697263156af04da5e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 08:14:59 np0005629333 podman[391959]: 2026-02-25 13:14:59.246430665 +0000 UTC m=+0.335324026 container attach b8fb7cc79a8d6380d35f30e776a6ce35f5d9b134a9befd697263156af04da5e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:14:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2853: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]: {
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:    "0": [
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:        {
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "devices": [
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "/dev/loop3"
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            ],
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "lv_name": "ceph_lv0",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "lv_size": "21470642176",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "name": "ceph_lv0",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "tags": {
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.cluster_name": "ceph",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.crush_device_class": "",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.encrypted": "0",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.objectstore": "bluestore",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.osd_id": "0",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.type": "block",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.vdo": "0",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.with_tpm": "0"
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            },
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "type": "block",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "vg_name": "ceph_vg0"
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:        }
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:    ],
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:    "1": [
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:        {
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "devices": [
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "/dev/loop4"
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            ],
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "lv_name": "ceph_lv1",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "lv_size": "21470642176",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "name": "ceph_lv1",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "tags": {
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.cluster_name": "ceph",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.crush_device_class": "",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.encrypted": "0",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.objectstore": "bluestore",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.osd_id": "1",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.type": "block",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.vdo": "0",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.with_tpm": "0"
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            },
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "type": "block",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "vg_name": "ceph_vg1"
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:        }
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:    ],
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:    "2": [
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:        {
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "devices": [
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "/dev/loop5"
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            ],
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "lv_name": "ceph_lv2",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "lv_size": "21470642176",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "name": "ceph_lv2",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "tags": {
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.cluster_name": "ceph",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.crush_device_class": "",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.encrypted": "0",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.objectstore": "bluestore",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.osd_id": "2",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.type": "block",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.vdo": "0",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:                "ceph.with_tpm": "0"
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            },
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "type": "block",
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:            "vg_name": "ceph_vg2"
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:        }
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]:    ]
Feb 25 08:14:59 np0005629333 crazy_rhodes[391975]: }
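The crazy_rhodes output above is a ceph-volume lvm list report in JSON, which cephadm consumes to map OSD ids to their logical volumes. A minimal parsing sketch; the field names are taken from the report above, with the sample trimmed to a single OSD:

import json

raw = '''{"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
                 "devices": ["/dev/loop3"],
                 "tags": {"ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441"}}]}'''

for osd_id, lvs in json.loads(raw).items():
    for lv in lvs:
        print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
              f"(fsid {lv['tags']['ceph.osd_fsid']})")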
Feb 25 08:14:59 np0005629333 systemd[1]: libpod-b8fb7cc79a8d6380d35f30e776a6ce35f5d9b134a9befd697263156af04da5e9.scope: Deactivated successfully.
Feb 25 08:14:59 np0005629333 podman[391959]: 2026-02-25 13:14:59.536033181 +0000 UTC m=+0.624926512 container died b8fb7cc79a8d6380d35f30e776a6ce35f5d9b134a9befd697263156af04da5e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_rhodes, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 08:14:59 np0005629333 systemd[1]: var-lib-containers-storage-overlay-8644fae478fb2cadf732445acd0cfeefa0b4f88f115125a9743f346c8e86975e-merged.mount: Deactivated successfully.
Feb 25 08:14:59 np0005629333 nova_compute[244014]: 2026-02-25 13:14:59.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:14:59 np0005629333 podman[391959]: 2026-02-25 13:14:59.971353467 +0000 UTC m=+1.060246758 container remove b8fb7cc79a8d6380d35f30e776a6ce35f5d9b134a9befd697263156af04da5e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 08:14:59 np0005629333 systemd[1]: libpod-conmon-b8fb7cc79a8d6380d35f30e776a6ce35f5d9b134a9befd697263156af04da5e9.scope: Deactivated successfully.
Feb 25 08:15:00 np0005629333 podman[392059]: 2026-02-25 13:15:00.586206525 +0000 UTC m=+0.125250743 container create c3a9d2f0e972994d016cbf0b565198967643e7e4cdaeb270dff5aa33ada17d2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hodgkin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 08:15:00 np0005629333 podman[392059]: 2026-02-25 13:15:00.49134958 +0000 UTC m=+0.030393828 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:15:00 np0005629333 systemd[1]: Started libpod-conmon-c3a9d2f0e972994d016cbf0b565198967643e7e4cdaeb270dff5aa33ada17d2e.scope.
Feb 25 08:15:00 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:15:00 np0005629333 podman[392059]: 2026-02-25 13:15:00.755132438 +0000 UTC m=+0.294176696 container init c3a9d2f0e972994d016cbf0b565198967643e7e4cdaeb270dff5aa33ada17d2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hodgkin, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:15:00 np0005629333 podman[392059]: 2026-02-25 13:15:00.762920388 +0000 UTC m=+0.301964636 container start c3a9d2f0e972994d016cbf0b565198967643e7e4cdaeb270dff5aa33ada17d2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hodgkin, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:15:00 np0005629333 vibrant_hodgkin[392075]: 167 167
Feb 25 08:15:00 np0005629333 systemd[1]: libpod-c3a9d2f0e972994d016cbf0b565198967643e7e4cdaeb270dff5aa33ada17d2e.scope: Deactivated successfully.
Feb 25 08:15:00 np0005629333 podman[392059]: 2026-02-25 13:15:00.876059059 +0000 UTC m=+0.415103277 container attach c3a9d2f0e972994d016cbf0b565198967643e7e4cdaeb270dff5aa33ada17d2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 08:15:00 np0005629333 nova_compute[244014]: 2026-02-25 13:15:00.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:15:00 np0005629333 podman[392059]: 2026-02-25 13:15:00.876401358 +0000 UTC m=+0.415445576 container died c3a9d2f0e972994d016cbf0b565198967643e7e4cdaeb270dff5aa33ada17d2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hodgkin, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:15:00 np0005629333 nova_compute[244014]: 2026-02-25 13:15:00.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
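The ComputeManager lines above are oslo.service periodic tasks; _reclaim_queued_deletes no-ops when its interval option is non-positive, which is the "skipping..." message. A minimal sketch of the pattern (the 60 s spacing is illustrative, not Nova's actual configuration):

from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF
CONF.register_opts([cfg.IntOpt('reclaim_instance_interval', default=0)])

class DemoManager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(CONF)

    @periodic_task.periodic_task(spacing=60)  # illustrative spacing
    def _reclaim_queued_deletes(self, context):
        if CONF.reclaim_instance_interval <= 0:
            return  # matches the "skipping..." debug line above

DemoManager().run_periodic_tasks(context=None)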
Feb 25 08:15:00 np0005629333 nova_compute[244014]: 2026-02-25 13:15:00.991 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:15:01 np0005629333 systemd[1]: var-lib-containers-storage-overlay-bf8ad3185058c38b4b5439c03b0c21f3a49eb13c0866adfcd52928f6f14d3f7b-merged.mount: Deactivated successfully.
Feb 25 08:15:01 np0005629333 podman[392059]: 2026-02-25 13:15:01.283043295 +0000 UTC m=+0.822087543 container remove c3a9d2f0e972994d016cbf0b565198967643e7e4cdaeb270dff5aa33ada17d2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 08:15:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2854: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:15:01 np0005629333 systemd[1]: libpod-conmon-c3a9d2f0e972994d016cbf0b565198967643e7e4cdaeb270dff5aa33ada17d2e.scope: Deactivated successfully.
Feb 25 08:15:01 np0005629333 podman[392099]: 2026-02-25 13:15:01.439363593 +0000 UTC m=+0.026887979 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:15:01 np0005629333 podman[392099]: 2026-02-25 13:15:01.562817865 +0000 UTC m=+0.150342271 container create b62b641977e7b3bdf88832653301c887cbda78a7dad6d3b4af3380c7f272c9fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_williams, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 08:15:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:15:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:15:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:15:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:15:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:15:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:15:01 np0005629333 systemd[1]: Started libpod-conmon-b62b641977e7b3bdf88832653301c887cbda78a7dad6d3b4af3380c7f272c9fd.scope.
Feb 25 08:15:01 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:15:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86b0577b17480e530ea10bfa1f39d428cb1f22f4828b255f8be01f6532126513/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:15:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86b0577b17480e530ea10bfa1f39d428cb1f22f4828b255f8be01f6532126513/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:15:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86b0577b17480e530ea10bfa1f39d428cb1f22f4828b255f8be01f6532126513/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:15:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86b0577b17480e530ea10bfa1f39d428cb1f22f4828b255f8be01f6532126513/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:15:01 np0005629333 podman[392099]: 2026-02-25 13:15:01.814519773 +0000 UTC m=+0.402044189 container init b62b641977e7b3bdf88832653301c887cbda78a7dad6d3b4af3380c7f272c9fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 08:15:01 np0005629333 podman[392099]: 2026-02-25 13:15:01.824683859 +0000 UTC m=+0.412208255 container start b62b641977e7b3bdf88832653301c887cbda78a7dad6d3b4af3380c7f272c9fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 08:15:01 np0005629333 podman[392099]: 2026-02-25 13:15:01.856088545 +0000 UTC m=+0.443612951 container attach b62b641977e7b3bdf88832653301c887cbda78a7dad6d3b4af3380c7f272c9fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:15:02 np0005629333 lvm[392194]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:15:02 np0005629333 lvm[392193]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:15:02 np0005629333 lvm[392194]: VG ceph_vg1 finished
Feb 25 08:15:02 np0005629333 lvm[392193]: VG ceph_vg0 finished
Feb 25 08:15:02 np0005629333 lvm[392196]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:15:02 np0005629333 lvm[392196]: VG ceph_vg2 finished
Feb 25 08:15:02 np0005629333 musing_williams[392115]: {}
Feb 25 08:15:02 np0005629333 systemd[1]: libpod-b62b641977e7b3bdf88832653301c887cbda78a7dad6d3b4af3380c7f272c9fd.scope: Deactivated successfully.
Feb 25 08:15:02 np0005629333 podman[392099]: 2026-02-25 13:15:02.661726622 +0000 UTC m=+1.249250978 container died b62b641977e7b3bdf88832653301c887cbda78a7dad6d3b4af3380c7f272c9fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_williams, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:15:02 np0005629333 systemd[1]: libpod-b62b641977e7b3bdf88832653301c887cbda78a7dad6d3b4af3380c7f272c9fd.scope: Consumed 1.176s CPU time.
Feb 25 08:15:02 np0005629333 nova_compute[244014]: 2026-02-25 13:15:02.773 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:15:02 np0005629333 systemd[1]: var-lib-containers-storage-overlay-86b0577b17480e530ea10bfa1f39d428cb1f22f4828b255f8be01f6532126513-merged.mount: Deactivated successfully.
Feb 25 08:15:03 np0005629333 podman[392099]: 2026-02-25 13:15:03.098839097 +0000 UTC m=+1.686363503 container remove b62b641977e7b3bdf88832653301c887cbda78a7dad6d3b4af3380c7f272c9fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:15:03 np0005629333 systemd[1]: libpod-conmon-b62b641977e7b3bdf88832653301c887cbda78a7dad6d3b4af3380c7f272c9fd.scope: Deactivated successfully.
Feb 25 08:15:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:15:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:15:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:15:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:15:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2855: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:15:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:15:04 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:15:04 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:15:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2856: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:15:05 np0005629333 nova_compute[244014]: 2026-02-25 13:15:05.993 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:15:06 np0005629333 nova_compute[244014]: 2026-02-25 13:15:06.878 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:15:06 np0005629333 nova_compute[244014]: 2026-02-25 13:15:06.879 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:15:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2857: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:15:07 np0005629333 nova_compute[244014]: 2026-02-25 13:15:07.780 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:15:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:15:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2858: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:15:10 np0005629333 nova_compute[244014]: 2026-02-25 13:15:10.996 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:15:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2859: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:15:11 np0005629333 podman[391760]: time="2026-02-25T13:15:11Z" level=info msg="Received shutdown.Stop(), terminating!" PID=391760
Feb 25 08:15:11 np0005629333 systemd[1]: podman.service: Deactivated successfully.
Feb 25 08:15:12 np0005629333 podman[392236]: 2026-02-25 13:15:12.7264241 +0000 UTC m=+0.063304006 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:15:12 np0005629333 podman[392237]: 2026-02-25 13:15:12.757526577 +0000 UTC m=+0.095373180 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:15:12 np0005629333 nova_compute[244014]: 2026-02-25 13:15:12.781 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:15:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2860: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:15:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:15:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2861: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:15:15 np0005629333 nova_compute[244014]: 2026-02-25 13:15:15.998 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:15:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2862: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:15:17 np0005629333 nova_compute[244014]: 2026-02-25 13:15:17.786 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:15:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:15:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2863: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:15:19 np0005629333 systemd[1]: session-53.scope: Deactivated successfully.
Feb 25 08:15:19 np0005629333 systemd-logind[811]: Session 53 logged out. Waiting for processes to exit.
Feb 25 08:15:19 np0005629333 systemd-logind[811]: Removed session 53.
Feb 25 08:15:19 np0005629333 systemd[1]: session-54.scope: Deactivated successfully.
Feb 25 08:15:19 np0005629333 systemd[1]: session-54.scope: Consumed 1.180s CPU time.
Feb 25 08:15:19 np0005629333 systemd-logind[811]: Session 54 logged out. Waiting for processes to exit.
Feb 25 08:15:19 np0005629333 systemd-logind[811]: Removed session 54.
Feb 25 08:15:20 np0005629333 systemd[1]: session-55.scope: Deactivated successfully.
Feb 25 08:15:20 np0005629333 systemd-logind[811]: Session 55 logged out. Waiting for processes to exit.
Feb 25 08:15:20 np0005629333 systemd-logind[811]: Removed session 55.
Feb 25 08:15:21 np0005629333 nova_compute[244014]: 2026-02-25 13:15:21.000 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:15:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2864: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:15:22 np0005629333 nova_compute[244014]: 2026-02-25 13:15:22.790 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:15:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2865: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:15:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:15:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2866: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:15:26 np0005629333 nova_compute[244014]: 2026-02-25 13:15:26.002 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:15:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2867: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:15:27 np0005629333 nova_compute[244014]: 2026-02-25 13:15:27.794 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:15:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:15:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2868: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:15:31 np0005629333 nova_compute[244014]: 2026-02-25 13:15:31.003 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:15:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:15:31
Feb 25 08:15:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:15:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:15:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['backups', '.mgr', 'images', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'default.rgw.meta', 'vms']
Feb 25 08:15:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:15:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2869: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:15:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:15:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:15:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:15:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:15:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:15:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:15:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:15:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:15:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:15:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:15:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:15:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:15:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:15:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:15:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:15:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:15:32 np0005629333 nova_compute[244014]: 2026-02-25 13:15:32.798 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:15:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2870: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.100105) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025334100231, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 1253, "num_deletes": 256, "total_data_size": 1956944, "memory_usage": 1987904, "flush_reason": "Manual Compaction"}
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025334125673, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 1915780, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59623, "largest_seqno": 60875, "table_properties": {"data_size": 1909795, "index_size": 3315, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12351, "raw_average_key_size": 19, "raw_value_size": 1897786, "raw_average_value_size": 2983, "num_data_blocks": 149, "num_entries": 636, "num_filter_entries": 636, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772025210, "oldest_key_time": 1772025210, "file_creation_time": 1772025334, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 25693 microseconds, and 5624 cpu microseconds.
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.125810) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 1915780 bytes OK
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.125847) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.132895) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.132954) EVENT_LOG_v1 {"time_micros": 1772025334132942, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.132992) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 1951253, prev total WAL file size 1951253, number of live WAL files 2.
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.133873) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353135' seq:72057594037927935, type:22 .. '6C6F676D0032373637' seq:0, type:0; will stop at (end)
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(1870KB)], [140(8830KB)]
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025334133922, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 10958343, "oldest_snapshot_seqno": -1}
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 8021 keys, 10838721 bytes, temperature: kUnknown
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025334322822, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 10838721, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10786313, "index_size": 31239, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20101, "raw_key_size": 209336, "raw_average_key_size": 26, "raw_value_size": 10644395, "raw_average_value_size": 1327, "num_data_blocks": 1223, "num_entries": 8021, "num_filter_entries": 8021, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772025334, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.323165) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 10838721 bytes
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.372033) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 58.0 rd, 57.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 8.6 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(11.4) write-amplify(5.7) OK, records in: 8545, records dropped: 524 output_compression: NoCompression
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.372081) EVENT_LOG_v1 {"time_micros": 1772025334372062, "job": 86, "event": "compaction_finished", "compaction_time_micros": 188998, "compaction_time_cpu_micros": 19461, "output_level": 6, "num_output_files": 1, "total_output_size": 10838721, "num_input_records": 8545, "num_output_records": 8021, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025334372539, "job": 86, "event": "table_file_deletion", "file_number": 142}
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025334373769, "job": 86, "event": "table_file_deletion", "file_number": 140}
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.133805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.373806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.373812) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.373815) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.373818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:15:34 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:34.373821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:15:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2871: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:15:36 np0005629333 nova_compute[244014]: 2026-02-25 13:15:36.005 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:15:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2872: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:15:37 np0005629333 nova_compute[244014]: 2026-02-25 13:15:37.801 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:15:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:15:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2873: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:15:41 np0005629333 nova_compute[244014]: 2026-02-25 13:15:41.009 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:15:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2874: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:15:42 np0005629333 nova_compute[244014]: 2026-02-25 13:15:42.807 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:15:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:15:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:15:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:15:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:15:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:15:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:15:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:15:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:15:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:15:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:15:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:15:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:15:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:15:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:15:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:15:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:15:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:15:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:15:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:15:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:15:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:15:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:15:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 08:15:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2875: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:15:43 np0005629333 podman[392336]: 2026-02-25 13:15:43.760012516 +0000 UTC m=+0.093054735 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 25 08:15:43 np0005629333 podman[392335]: 2026-02-25 13:15:43.76296967 +0000 UTC m=+0.097106700 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:15:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:15:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2876: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:15:46 np0005629333 nova_compute[244014]: 2026-02-25 13:15:46.011 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:15:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2877: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:15:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:15:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2428991676' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:15:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:15:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2428991676' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:15:47 np0005629333 nova_compute[244014]: 2026-02-25 13:15:47.810 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:15:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:15:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2878: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:15:50 np0005629333 nova_compute[244014]: 2026-02-25 13:15:50.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:15:50 np0005629333 nova_compute[244014]: 2026-02-25 13:15:50.913 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:15:50 np0005629333 nova_compute[244014]: 2026-02-25 13:15:50.914 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:15:50 np0005629333 nova_compute[244014]: 2026-02-25 13:15:50.914 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:15:50 np0005629333 nova_compute[244014]: 2026-02-25 13:15:50.915 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 08:15:50 np0005629333 nova_compute[244014]: 2026-02-25 13:15:50.915 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:15:51 np0005629333 nova_compute[244014]: 2026-02-25 13:15:51.013 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:15:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2879: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:15:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:15:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3109523005' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:15:51 np0005629333 nova_compute[244014]: 2026-02-25 13:15:51.503 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:15:51 np0005629333 nova_compute[244014]: 2026-02-25 13:15:51.694 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 08:15:51 np0005629333 nova_compute[244014]: 2026-02-25 13:15:51.695 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3611MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 08:15:51 np0005629333 nova_compute[244014]: 2026-02-25 13:15:51.695 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:15:51 np0005629333 nova_compute[244014]: 2026-02-25 13:15:51.695 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:15:51 np0005629333 nova_compute[244014]: 2026-02-25 13:15:51.896 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 08:15:51 np0005629333 nova_compute[244014]: 2026-02-25 13:15:51.896 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 08:15:51 np0005629333 nova_compute[244014]: 2026-02-25 13:15:51.964 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 08:15:52 np0005629333 nova_compute[244014]: 2026-02-25 13:15:52.034 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 08:15:52 np0005629333 nova_compute[244014]: 2026-02-25 13:15:52.035 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 08:15:52 np0005629333 nova_compute[244014]: 2026-02-25 13:15:52.057 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 08:15:52 np0005629333 nova_compute[244014]: 2026-02-25 13:15:52.084 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 08:15:52 np0005629333 nova_compute[244014]: 2026-02-25 13:15:52.105 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:15:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:15:52 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4196835951' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:15:52 np0005629333 nova_compute[244014]: 2026-02-25 13:15:52.651 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:15:52 np0005629333 nova_compute[244014]: 2026-02-25 13:15:52.658 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:15:52 np0005629333 nova_compute[244014]: 2026-02-25 13:15:52.679 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 08:15:52 np0005629333 nova_compute[244014]: 2026-02-25 13:15:52.683 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 08:15:52 np0005629333 nova_compute[244014]: 2026-02-25 13:15:52.683 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.988s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:15:52 np0005629333 nova_compute[244014]: 2026-02-25 13:15:52.813 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:15:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2880: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:15:53 np0005629333 nova_compute[244014]: 2026-02-25 13:15:53.685 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:15:53 np0005629333 nova_compute[244014]: 2026-02-25 13:15:53.686 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 08:15:53 np0005629333 nova_compute[244014]: 2026-02-25 13:15:53.686 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 08:15:53 np0005629333 nova_compute[244014]: 2026-02-25 13:15:53.713 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 08:15:53 np0005629333 nova_compute[244014]: 2026-02-25 13:15:53.714 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:15:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:15:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:15:55.054 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:15:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:15:55.055 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:15:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:15:55.056 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:15:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2881: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.708788) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025355708857, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 426, "num_deletes": 251, "total_data_size": 326747, "memory_usage": 334472, "flush_reason": "Manual Compaction"}
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025355727829, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 323674, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60876, "largest_seqno": 61301, "table_properties": {"data_size": 321192, "index_size": 581, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6019, "raw_average_key_size": 18, "raw_value_size": 316316, "raw_average_value_size": 982, "num_data_blocks": 26, "num_entries": 322, "num_filter_entries": 322, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772025335, "oldest_key_time": 1772025335, "file_creation_time": 1772025355, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 19113 microseconds, and 2591 cpu microseconds.
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.727901) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 323674 bytes OK
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.727932) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.752782) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.752823) EVENT_LOG_v1 {"time_micros": 1772025355752813, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.752850) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 324114, prev total WAL file size 324114, number of live WAL files 2.
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.753471) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(316KB)], [143(10MB)]
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025355753545, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 11162395, "oldest_snapshot_seqno": -1}
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 7834 keys, 9420206 bytes, temperature: kUnknown
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025355850648, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 9420206, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9370366, "index_size": 29119, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19653, "raw_key_size": 206136, "raw_average_key_size": 26, "raw_value_size": 9233116, "raw_average_value_size": 1178, "num_data_blocks": 1125, "num_entries": 7834, "num_filter_entries": 7834, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772025355, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.850929) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 9420206 bytes
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.864928) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 114.8 rd, 96.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.3 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(63.6) write-amplify(29.1) OK, records in: 8343, records dropped: 509 output_compression: NoCompression
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.864977) EVENT_LOG_v1 {"time_micros": 1772025355864957, "job": 88, "event": "compaction_finished", "compaction_time_micros": 97204, "compaction_time_cpu_micros": 24339, "output_level": 6, "num_output_files": 1, "total_output_size": 9420206, "num_input_records": 8343, "num_output_records": 7834, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025355865293, "job": 88, "event": "table_file_deletion", "file_number": 145}
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025355866987, "job": 88, "event": "table_file_deletion", "file_number": 143}
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.753396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.867075) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.867085) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.867089) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.867093) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:15:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:15:55.867097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:15:56 np0005629333 nova_compute[244014]: 2026-02-25 13:15:56.014 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:15:56 np0005629333 nova_compute[244014]: 2026-02-25 13:15:56.901 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:15:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2882: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:15:57 np0005629333 nova_compute[244014]: 2026-02-25 13:15:57.816 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:15:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:15:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2883: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:00 np0005629333 nova_compute[244014]: 2026-02-25 13:16:00.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:16:00 np0005629333 nova_compute[244014]: 2026-02-25 13:16:00.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:16:00 np0005629333 nova_compute[244014]: 2026-02-25 13:16:00.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 08:16:01 np0005629333 nova_compute[244014]: 2026-02-25 13:16:01.016 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:16:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2884: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:16:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:16:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:16:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:16:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:16:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:16:01 np0005629333 nova_compute[244014]: 2026-02-25 13:16:01.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:16:02 np0005629333 nova_compute[244014]: 2026-02-25 13:16:02.819 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:16:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2885: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:16:04 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:16:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:16:04 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:16:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:16:04 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:16:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:16:04 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:16:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:16:04 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:16:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:16:04 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:16:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:16:04 np0005629333 podman[392566]: 2026-02-25 13:16:04.428427817 +0000 UTC m=+0.022006111 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:16:04 np0005629333 podman[392566]: 2026-02-25 13:16:04.584936111 +0000 UTC m=+0.178514385 container create 47052eccdbe622716f365598eb25181b36badaf5f375f95d72aa4e9df4534d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:16:04 np0005629333 systemd[1]: Started libpod-conmon-47052eccdbe622716f365598eb25181b36badaf5f375f95d72aa4e9df4534d5b.scope.
Feb 25 08:16:04 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:16:04 np0005629333 podman[392566]: 2026-02-25 13:16:04.868986261 +0000 UTC m=+0.462564555 container init 47052eccdbe622716f365598eb25181b36badaf5f375f95d72aa4e9df4534d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_poitras, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 08:16:04 np0005629333 podman[392566]: 2026-02-25 13:16:04.878663904 +0000 UTC m=+0.472242218 container start 47052eccdbe622716f365598eb25181b36badaf5f375f95d72aa4e9df4534d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_poitras, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:16:04 np0005629333 beautiful_poitras[392582]: 167 167
Feb 25 08:16:04 np0005629333 systemd[1]: libpod-47052eccdbe622716f365598eb25181b36badaf5f375f95d72aa4e9df4534d5b.scope: Deactivated successfully.
Feb 25 08:16:04 np0005629333 podman[392566]: 2026-02-25 13:16:04.935799775 +0000 UTC m=+0.529378069 container attach 47052eccdbe622716f365598eb25181b36badaf5f375f95d72aa4e9df4534d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_poitras, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:16:04 np0005629333 podman[392566]: 2026-02-25 13:16:04.936462573 +0000 UTC m=+0.530040847 container died 47052eccdbe622716f365598eb25181b36badaf5f375f95d72aa4e9df4534d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_poitras, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 08:16:05 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:16:05 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:16:05 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:16:05 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a4163020eac511f760582b23ece772c307aea096c5cdbe9099c2c1d45b11700e-merged.mount: Deactivated successfully.
Feb 25 08:16:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2886: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:05 np0005629333 podman[392566]: 2026-02-25 13:16:05.598179923 +0000 UTC m=+1.191758237 container remove 47052eccdbe622716f365598eb25181b36badaf5f375f95d72aa4e9df4534d5b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Feb 25 08:16:05 np0005629333 systemd[1]: libpod-conmon-47052eccdbe622716f365598eb25181b36badaf5f375f95d72aa4e9df4534d5b.scope: Deactivated successfully.
Feb 25 08:16:05 np0005629333 podman[392606]: 2026-02-25 13:16:05.738050817 +0000 UTC m=+0.040613466 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:16:05 np0005629333 podman[392606]: 2026-02-25 13:16:05.823294771 +0000 UTC m=+0.125857380 container create 05d3cec2b34d05fb7345141af7045317dbcda34284ae1dbfdc83cc416eb545eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:16:05 np0005629333 systemd[1]: Started libpod-conmon-05d3cec2b34d05fb7345141af7045317dbcda34284ae1dbfdc83cc416eb545eb.scope.
Feb 25 08:16:05 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:16:05 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d53b3aa4e49721ddc0af72ce71f7c4c6ea168895d0126266d94e70c690f6ae4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:16:06 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d53b3aa4e49721ddc0af72ce71f7c4c6ea168895d0126266d94e70c690f6ae4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:16:06 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d53b3aa4e49721ddc0af72ce71f7c4c6ea168895d0126266d94e70c690f6ae4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:16:06 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d53b3aa4e49721ddc0af72ce71f7c4c6ea168895d0126266d94e70c690f6ae4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:16:06 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d53b3aa4e49721ddc0af72ce71f7c4c6ea168895d0126266d94e70c690f6ae4a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:16:06 np0005629333 nova_compute[244014]: 2026-02-25 13:16:06.018 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:16:06 np0005629333 podman[392606]: 2026-02-25 13:16:06.11088178 +0000 UTC m=+0.413444379 container init 05d3cec2b34d05fb7345141af7045317dbcda34284ae1dbfdc83cc416eb545eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_herschel, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:16:06 np0005629333 podman[392606]: 2026-02-25 13:16:06.118969378 +0000 UTC m=+0.421531987 container start 05d3cec2b34d05fb7345141af7045317dbcda34284ae1dbfdc83cc416eb545eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_herschel, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 08:16:06 np0005629333 podman[392606]: 2026-02-25 13:16:06.201070483 +0000 UTC m=+0.503633072 container attach 05d3cec2b34d05fb7345141af7045317dbcda34284ae1dbfdc83cc416eb545eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:16:06 np0005629333 distracted_herschel[392623]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:16:06 np0005629333 distracted_herschel[392623]: --> All data devices are unavailable
Feb 25 08:16:06 np0005629333 systemd[1]: libpod-05d3cec2b34d05fb7345141af7045317dbcda34284ae1dbfdc83cc416eb545eb.scope: Deactivated successfully.
Feb 25 08:16:06 np0005629333 podman[392643]: 2026-02-25 13:16:06.718248087 +0000 UTC m=+0.036175691 container died 05d3cec2b34d05fb7345141af7045317dbcda34284ae1dbfdc83cc416eb545eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_herschel, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 08:16:06 np0005629333 nova_compute[244014]: 2026-02-25 13:16:06.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:16:07 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d53b3aa4e49721ddc0af72ce71f7c4c6ea168895d0126266d94e70c690f6ae4a-merged.mount: Deactivated successfully.
Feb 25 08:16:07 np0005629333 podman[392643]: 2026-02-25 13:16:07.282846767 +0000 UTC m=+0.600774291 container remove 05d3cec2b34d05fb7345141af7045317dbcda34284ae1dbfdc83cc416eb545eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_herschel, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:16:07 np0005629333 systemd[1]: libpod-conmon-05d3cec2b34d05fb7345141af7045317dbcda34284ae1dbfdc83cc416eb545eb.scope: Deactivated successfully.
Feb 25 08:16:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2887: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:07 np0005629333 nova_compute[244014]: 2026-02-25 13:16:07.823 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:16:07 np0005629333 podman[392721]: 2026-02-25 13:16:07.746643456 +0000 UTC m=+0.020157419 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:16:07 np0005629333 podman[392721]: 2026-02-25 13:16:07.834778502 +0000 UTC m=+0.108292425 container create 59118bce2ede9514be1f5bccbc75ac898b37707ebded335a3476086898b6cec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_kirch, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 08:16:07 np0005629333 systemd[1]: Started libpod-conmon-59118bce2ede9514be1f5bccbc75ac898b37707ebded335a3476086898b6cec8.scope.
Feb 25 08:16:07 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:16:08 np0005629333 podman[392721]: 2026-02-25 13:16:08.023468862 +0000 UTC m=+0.296982825 container init 59118bce2ede9514be1f5bccbc75ac898b37707ebded335a3476086898b6cec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_kirch, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 25 08:16:08 np0005629333 podman[392721]: 2026-02-25 13:16:08.033531766 +0000 UTC m=+0.307045689 container start 59118bce2ede9514be1f5bccbc75ac898b37707ebded335a3476086898b6cec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_kirch, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 08:16:08 np0005629333 pensive_kirch[392737]: 167 167
Feb 25 08:16:08 np0005629333 systemd[1]: libpod-59118bce2ede9514be1f5bccbc75ac898b37707ebded335a3476086898b6cec8.scope: Deactivated successfully.
Feb 25 08:16:08 np0005629333 podman[392721]: 2026-02-25 13:16:08.153313584 +0000 UTC m=+0.426827587 container attach 59118bce2ede9514be1f5bccbc75ac898b37707ebded335a3476086898b6cec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:16:08 np0005629333 podman[392721]: 2026-02-25 13:16:08.153843809 +0000 UTC m=+0.427357762 container died 59118bce2ede9514be1f5bccbc75ac898b37707ebded335a3476086898b6cec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_kirch, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 08:16:08 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5e6058b545dccf91b6455fbb0e0239dc850ee1b0ba00a76363dfa9f8a89ce4d8-merged.mount: Deactivated successfully.
Feb 25 08:16:08 np0005629333 podman[392721]: 2026-02-25 13:16:08.827542226 +0000 UTC m=+1.101056169 container remove 59118bce2ede9514be1f5bccbc75ac898b37707ebded335a3476086898b6cec8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_kirch, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:16:08 np0005629333 systemd[1]: libpod-conmon-59118bce2ede9514be1f5bccbc75ac898b37707ebded335a3476086898b6cec8.scope: Deactivated successfully.
Feb 25 08:16:08 np0005629333 nova_compute[244014]: 2026-02-25 13:16:08.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:16:09 np0005629333 podman[392763]: 2026-02-25 13:16:08.99821435 +0000 UTC m=+0.031095328 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:16:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:16:09 np0005629333 podman[392763]: 2026-02-25 13:16:09.102311495 +0000 UTC m=+0.135192353 container create 7ec08ce54b0db846727f49931d9e465def28cb8331f6dcb7cde4ce4196038552 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_williamson, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 25 08:16:09 np0005629333 systemd[1]: Started libpod-conmon-7ec08ce54b0db846727f49931d9e465def28cb8331f6dcb7cde4ce4196038552.scope.
Feb 25 08:16:09 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:16:09 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85fe74231d6ef6dd57158e0cf3a6f56150af7d2179e52caadf8f21e39f2b6df2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:16:09 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85fe74231d6ef6dd57158e0cf3a6f56150af7d2179e52caadf8f21e39f2b6df2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:16:09 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85fe74231d6ef6dd57158e0cf3a6f56150af7d2179e52caadf8f21e39f2b6df2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:16:09 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85fe74231d6ef6dd57158e0cf3a6f56150af7d2179e52caadf8f21e39f2b6df2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:16:09 np0005629333 podman[392763]: 2026-02-25 13:16:09.343938709 +0000 UTC m=+0.376819587 container init 7ec08ce54b0db846727f49931d9e465def28cb8331f6dcb7cde4ce4196038552 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 08:16:09 np0005629333 podman[392763]: 2026-02-25 13:16:09.351220424 +0000 UTC m=+0.384101292 container start 7ec08ce54b0db846727f49931d9e465def28cb8331f6dcb7cde4ce4196038552 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_williamson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:16:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2888: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:09 np0005629333 podman[392763]: 2026-02-25 13:16:09.454005752 +0000 UTC m=+0.486886680 container attach 7ec08ce54b0db846727f49931d9e465def28cb8331f6dcb7cde4ce4196038552 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_williamson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]: {
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:    "0": [
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:        {
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "devices": [
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "/dev/loop3"
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            ],
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "lv_name": "ceph_lv0",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "lv_size": "21470642176",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "name": "ceph_lv0",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "tags": {
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.cluster_name": "ceph",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.crush_device_class": "",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.encrypted": "0",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.objectstore": "bluestore",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.osd_id": "0",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.type": "block",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.vdo": "0",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.with_tpm": "0"
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            },
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "type": "block",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "vg_name": "ceph_vg0"
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:        }
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:    ],
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:    "1": [
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:        {
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "devices": [
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "/dev/loop4"
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            ],
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "lv_name": "ceph_lv1",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "lv_size": "21470642176",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "name": "ceph_lv1",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "tags": {
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.cluster_name": "ceph",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.crush_device_class": "",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.encrypted": "0",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.objectstore": "bluestore",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.osd_id": "1",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.type": "block",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.vdo": "0",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.with_tpm": "0"
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            },
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "type": "block",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "vg_name": "ceph_vg1"
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:        }
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:    ],
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:    "2": [
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:        {
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "devices": [
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "/dev/loop5"
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            ],
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "lv_name": "ceph_lv2",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "lv_size": "21470642176",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "name": "ceph_lv2",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "tags": {
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.cluster_name": "ceph",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.crush_device_class": "",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.encrypted": "0",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.objectstore": "bluestore",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.osd_id": "2",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.type": "block",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.vdo": "0",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:                "ceph.with_tpm": "0"
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            },
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "type": "block",
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:            "vg_name": "ceph_vg2"
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:        }
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]:    ]
Feb 25 08:16:09 np0005629333 lucid_williamson[392779]: }
Feb 25 08:16:09 np0005629333 systemd[1]: libpod-7ec08ce54b0db846727f49931d9e465def28cb8331f6dcb7cde4ce4196038552.scope: Deactivated successfully.
Feb 25 08:16:09 np0005629333 podman[392788]: 2026-02-25 13:16:09.699360901 +0000 UTC m=+0.035274096 container died 7ec08ce54b0db846727f49931d9e465def28cb8331f6dcb7cde4ce4196038552 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 08:16:09 np0005629333 systemd[1]: var-lib-containers-storage-overlay-85fe74231d6ef6dd57158e0cf3a6f56150af7d2179e52caadf8f21e39f2b6df2-merged.mount: Deactivated successfully.
Feb 25 08:16:10 np0005629333 podman[392788]: 2026-02-25 13:16:10.163017865 +0000 UTC m=+0.498931020 container remove 7ec08ce54b0db846727f49931d9e465def28cb8331f6dcb7cde4ce4196038552 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_williamson, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:16:10 np0005629333 systemd[1]: libpod-conmon-7ec08ce54b0db846727f49931d9e465def28cb8331f6dcb7cde4ce4196038552.scope: Deactivated successfully.
Feb 25 08:16:11 np0005629333 nova_compute[244014]: 2026-02-25 13:16:11.019 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:16:11 np0005629333 podman[392866]: 2026-02-25 13:16:11.181022171 +0000 UTC m=+0.105734633 container create a89a13a2d4d660ec8031475d3919211c7f3d2dbec24d174dd95566cb88035faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_elbakyan, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 08:16:11 np0005629333 podman[392866]: 2026-02-25 13:16:11.114062603 +0000 UTC m=+0.038775115 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:16:11 np0005629333 systemd[1]: Started libpod-conmon-a89a13a2d4d660ec8031475d3919211c7f3d2dbec24d174dd95566cb88035faa.scope.
Feb 25 08:16:11 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:16:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2889: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:11 np0005629333 podman[392866]: 2026-02-25 13:16:11.397112535 +0000 UTC m=+0.321825057 container init a89a13a2d4d660ec8031475d3919211c7f3d2dbec24d174dd95566cb88035faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_elbakyan, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Feb 25 08:16:11 np0005629333 podman[392866]: 2026-02-25 13:16:11.403519525 +0000 UTC m=+0.328231997 container start a89a13a2d4d660ec8031475d3919211c7f3d2dbec24d174dd95566cb88035faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:16:11 np0005629333 laughing_elbakyan[392883]: 167 167
Feb 25 08:16:11 np0005629333 systemd[1]: libpod-a89a13a2d4d660ec8031475d3919211c7f3d2dbec24d174dd95566cb88035faa.scope: Deactivated successfully.
Feb 25 08:16:11 np0005629333 podman[392866]: 2026-02-25 13:16:11.468666692 +0000 UTC m=+0.393379184 container attach a89a13a2d4d660ec8031475d3919211c7f3d2dbec24d174dd95566cb88035faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_elbakyan, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:16:11 np0005629333 podman[392866]: 2026-02-25 13:16:11.469428524 +0000 UTC m=+0.394140986 container died a89a13a2d4d660ec8031475d3919211c7f3d2dbec24d174dd95566cb88035faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_elbakyan, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:16:11 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5c2229fafcf850a592fedf9e0efb03182e90b0f71b74879cc08b02cebddc9ba8-merged.mount: Deactivated successfully.
Feb 25 08:16:11 np0005629333 podman[392866]: 2026-02-25 13:16:11.924559018 +0000 UTC m=+0.849271490 container remove a89a13a2d4d660ec8031475d3919211c7f3d2dbec24d174dd95566cb88035faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_elbakyan, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Feb 25 08:16:11 np0005629333 systemd[1]: libpod-conmon-a89a13a2d4d660ec8031475d3919211c7f3d2dbec24d174dd95566cb88035faa.scope: Deactivated successfully.
Feb 25 08:16:12 np0005629333 podman[392905]: 2026-02-25 13:16:12.160928994 +0000 UTC m=+0.115278102 container create de9bf3a096e8e0af2c7d8dd901c08f155a9a5e2e94b35018793077864c34371d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:16:12 np0005629333 podman[392905]: 2026-02-25 13:16:12.081590176 +0000 UTC m=+0.035939284 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:16:12 np0005629333 systemd[1]: Started libpod-conmon-de9bf3a096e8e0af2c7d8dd901c08f155a9a5e2e94b35018793077864c34371d.scope.
Feb 25 08:16:12 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:16:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea0443692e1abc9a96ec6c33c3b8b11d5dabc037acd415dc742149010fca94a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:16:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea0443692e1abc9a96ec6c33c3b8b11d5dabc037acd415dc742149010fca94a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:16:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea0443692e1abc9a96ec6c33c3b8b11d5dabc037acd415dc742149010fca94a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:16:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea0443692e1abc9a96ec6c33c3b8b11d5dabc037acd415dc742149010fca94a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:16:12 np0005629333 podman[392905]: 2026-02-25 13:16:12.445644093 +0000 UTC m=+0.399993201 container init de9bf3a096e8e0af2c7d8dd901c08f155a9a5e2e94b35018793077864c34371d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cray, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Feb 25 08:16:12 np0005629333 podman[392905]: 2026-02-25 13:16:12.454333478 +0000 UTC m=+0.408682586 container start de9bf3a096e8e0af2c7d8dd901c08f155a9a5e2e94b35018793077864c34371d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:16:12 np0005629333 podman[392905]: 2026-02-25 13:16:12.579417265 +0000 UTC m=+0.533766373 container attach de9bf3a096e8e0af2c7d8dd901c08f155a9a5e2e94b35018793077864c34371d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 08:16:12 np0005629333 nova_compute[244014]: 2026-02-25 13:16:12.827 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:16:13 np0005629333 lvm[393002]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:16:13 np0005629333 lvm[393002]: VG ceph_vg1 finished
Feb 25 08:16:13 np0005629333 lvm[393003]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:16:13 np0005629333 lvm[393003]: VG ceph_vg2 finished
Feb 25 08:16:13 np0005629333 lvm[393000]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:16:13 np0005629333 lvm[393000]: VG ceph_vg0 finished
Feb 25 08:16:13 np0005629333 strange_cray[392922]: {}
Feb 25 08:16:13 np0005629333 systemd[1]: libpod-de9bf3a096e8e0af2c7d8dd901c08f155a9a5e2e94b35018793077864c34371d.scope: Deactivated successfully.
Feb 25 08:16:13 np0005629333 systemd[1]: libpod-de9bf3a096e8e0af2c7d8dd901c08f155a9a5e2e94b35018793077864c34371d.scope: Consumed 1.094s CPU time.
Feb 25 08:16:13 np0005629333 podman[392905]: 2026-02-25 13:16:13.214186495 +0000 UTC m=+1.168535563 container died de9bf3a096e8e0af2c7d8dd901c08f155a9a5e2e94b35018793077864c34371d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cray, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 08:16:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2890: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:13 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ea0443692e1abc9a96ec6c33c3b8b11d5dabc037acd415dc742149010fca94a1-merged.mount: Deactivated successfully.
Feb 25 08:16:13 np0005629333 podman[392905]: 2026-02-25 13:16:13.610219102 +0000 UTC m=+1.564568180 container remove de9bf3a096e8e0af2c7d8dd901c08f155a9a5e2e94b35018793077864c34371d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cray, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 08:16:13 np0005629333 systemd[1]: libpod-conmon-de9bf3a096e8e0af2c7d8dd901c08f155a9a5e2e94b35018793077864c34371d.scope: Deactivated successfully.
Feb 25 08:16:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:16:13 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:16:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:16:13 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:16:14 np0005629333 podman[393044]: 2026-02-25 13:16:14.034743315 +0000 UTC m=+0.102500913 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0)
Feb 25 08:16:14 np0005629333 podman[393045]: 2026-02-25 13:16:14.057106634 +0000 UTC m=+0.123285988 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 08:16:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:16:14 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:16:14 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:16:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2891: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:16 np0005629333 nova_compute[244014]: 2026-02-25 13:16:16.021 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:16:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2892: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:17 np0005629333 nova_compute[244014]: 2026-02-25 13:16:17.831 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:16:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:16:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2893: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:19 np0005629333 nova_compute[244014]: 2026-02-25 13:16:19.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:16:21 np0005629333 nova_compute[244014]: 2026-02-25 13:16:21.025 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:16:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2894: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:22 np0005629333 nova_compute[244014]: 2026-02-25 13:16:22.834 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:16:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2895: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:16:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2896: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:26 np0005629333 nova_compute[244014]: 2026-02-25 13:16:26.028 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:16:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2897: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:27 np0005629333 nova_compute[244014]: 2026-02-25 13:16:27.837 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:16:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:16:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2898: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:31 np0005629333 nova_compute[244014]: 2026-02-25 13:16:31.031 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:16:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:16:31
Feb 25 08:16:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:16:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:16:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['.mgr', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'volumes', 'default.rgw.log', '.rgw.root', 'backups']
Feb 25 08:16:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:16:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2899: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:16:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:16:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:16:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:16:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:16:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:16:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:16:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:16:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:16:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:16:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:16:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:16:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:16:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:16:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:16:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:16:32 np0005629333 nova_compute[244014]: 2026-02-25 13:16:32.840 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:16:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2900: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:16:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2901: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:36 np0005629333 nova_compute[244014]: 2026-02-25 13:16:36.034 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:16:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2902: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:37 np0005629333 nova_compute[244014]: 2026-02-25 13:16:37.843 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:16:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:16:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2903: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:41 np0005629333 nova_compute[244014]: 2026-02-25 13:16:41.037 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:16:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2904: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:42 np0005629333 nova_compute[244014]: 2026-02-25 13:16:42.846 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:16:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:16:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:16:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:16:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:16:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:16:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:16:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:16:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:16:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:16:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:16:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:16:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:16:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:16:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:16:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:16:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:16:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:16:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:16:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:16:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:16:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:16:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:16:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 08:16:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2905: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:16:44 np0005629333 podman[393088]: 2026-02-25 13:16:44.775052309 +0000 UTC m=+0.103142720 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 08:16:44 np0005629333 podman[393089]: 2026-02-25 13:16:44.813798891 +0000 UTC m=+0.140517483 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 08:16:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2906: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:46 np0005629333 nova_compute[244014]: 2026-02-25 13:16:46.037 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:16:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2907: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:16:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2179013673' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:16:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:16:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2179013673' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:16:47 np0005629333 nova_compute[244014]: 2026-02-25 13:16:47.850 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:16:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:16:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2908: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:50 np0005629333 nova_compute[244014]: 2026-02-25 13:16:50.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:16:50 np0005629333 nova_compute[244014]: 2026-02-25 13:16:50.903 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:16:50 np0005629333 nova_compute[244014]: 2026-02-25 13:16:50.903 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:16:50 np0005629333 nova_compute[244014]: 2026-02-25 13:16:50.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:16:50 np0005629333 nova_compute[244014]: 2026-02-25 13:16:50.904 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:16:50 np0005629333 nova_compute[244014]: 2026-02-25 13:16:50.904 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:16:51 np0005629333 nova_compute[244014]: 2026-02-25 13:16:51.038 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:16:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2909: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:16:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/360309141' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:16:51 np0005629333 nova_compute[244014]: 2026-02-25 13:16:51.439 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:16:51 np0005629333 nova_compute[244014]: 2026-02-25 13:16:51.635 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:16:51 np0005629333 nova_compute[244014]: 2026-02-25 13:16:51.636 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3581MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:16:51 np0005629333 nova_compute[244014]: 2026-02-25 13:16:51.636 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:16:51 np0005629333 nova_compute[244014]: 2026-02-25 13:16:51.637 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:16:51 np0005629333 nova_compute[244014]: 2026-02-25 13:16:51.694 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:16:51 np0005629333 nova_compute[244014]: 2026-02-25 13:16:51.694 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:16:51 np0005629333 nova_compute[244014]: 2026-02-25 13:16:51.708 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:16:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:16:52 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1866881172' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:16:52 np0005629333 nova_compute[244014]: 2026-02-25 13:16:52.241 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:16:52 np0005629333 nova_compute[244014]: 2026-02-25 13:16:52.249 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:16:52 np0005629333 nova_compute[244014]: 2026-02-25 13:16:52.276 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 08:16:52 np0005629333 nova_compute[244014]: 2026-02-25 13:16:52.279 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 08:16:52 np0005629333 nova_compute[244014]: 2026-02-25 13:16:52.280 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:16:52 np0005629333 nova_compute[244014]: 2026-02-25 13:16:52.853 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:16:53 np0005629333 nova_compute[244014]: 2026-02-25 13:16:53.281 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:16:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2910: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:53 np0005629333 nova_compute[244014]: 2026-02-25 13:16:53.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:16:53 np0005629333 nova_compute[244014]: 2026-02-25 13:16:53.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 08:16:53 np0005629333 nova_compute[244014]: 2026-02-25 13:16:53.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 08:16:53 np0005629333 nova_compute[244014]: 2026-02-25 13:16:53.896 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 08:16:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:16:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:16:55.056 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:16:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:16:55.056 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:16:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:16:55.057 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:16:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2911: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:56 np0005629333 nova_compute[244014]: 2026-02-25 13:16:56.040 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:16:56 np0005629333 nova_compute[244014]: 2026-02-25 13:16:56.893 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:16:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2912: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:16:57 np0005629333 nova_compute[244014]: 2026-02-25 13:16:57.857 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:16:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:16:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2913: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:01 np0005629333 nova_compute[244014]: 2026-02-25 13:17:01.042 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:17:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2914: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:17:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:17:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:17:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:17:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:17:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:17:01 np0005629333 nova_compute[244014]: 2026-02-25 13:17:01.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:17:02 np0005629333 nova_compute[244014]: 2026-02-25 13:17:02.859 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:17:02 np0005629333 nova_compute[244014]: 2026-02-25 13:17:02.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:17:02 np0005629333 nova_compute[244014]: 2026-02-25 13:17:02.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:17:02 np0005629333 nova_compute[244014]: 2026-02-25 13:17:02.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 08:17:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2915: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:17:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2916: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:06 np0005629333 nova_compute[244014]: 2026-02-25 13:17:06.043 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:17:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2917: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:07 np0005629333 nova_compute[244014]: 2026-02-25 13:17:07.864 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:17:07 np0005629333 nova_compute[244014]: 2026-02-25 13:17:07.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:17:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:17:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2918: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:10 np0005629333 nova_compute[244014]: 2026-02-25 13:17:10.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:17:11 np0005629333 nova_compute[244014]: 2026-02-25 13:17:11.045 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:17:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2919: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:12 np0005629333 nova_compute[244014]: 2026-02-25 13:17:12.867 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:17:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2920: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:17:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:17:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:17:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:17:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:17:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:17:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:17:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:17:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:17:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:17:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:17:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:17:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:17:14 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:17:14 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:17:14 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:17:14 np0005629333 podman[393306]: 2026-02-25 13:17:14.895398503 +0000 UTC m=+0.063217974 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Feb 25 08:17:14 np0005629333 podman[393307]: 2026-02-25 13:17:14.92439179 +0000 UTC m=+0.089108134 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Feb 25 08:17:15 np0005629333 podman[393363]: 2026-02-25 13:17:15.165504539 +0000 UTC m=+0.107690958 container create 87a801ecf8b01381ce1bff147cd530a507f88d04a5c21711b3b630e8a2b72dd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:17:15 np0005629333 podman[393363]: 2026-02-25 13:17:15.083025893 +0000 UTC m=+0.025212312 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:17:15 np0005629333 systemd[1]: Started libpod-conmon-87a801ecf8b01381ce1bff147cd530a507f88d04a5c21711b3b630e8a2b72dd1.scope.
Feb 25 08:17:15 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:17:15 np0005629333 podman[393363]: 2026-02-25 13:17:15.286973174 +0000 UTC m=+0.229159663 container init 87a801ecf8b01381ce1bff147cd530a507f88d04a5c21711b3b630e8a2b72dd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 08:17:15 np0005629333 podman[393363]: 2026-02-25 13:17:15.295777822 +0000 UTC m=+0.237964261 container start 87a801ecf8b01381ce1bff147cd530a507f88d04a5c21711b3b630e8a2b72dd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heyrovsky, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 08:17:15 np0005629333 interesting_heyrovsky[393380]: 167 167
Feb 25 08:17:15 np0005629333 systemd[1]: libpod-87a801ecf8b01381ce1bff147cd530a507f88d04a5c21711b3b630e8a2b72dd1.scope: Deactivated successfully.
Feb 25 08:17:15 np0005629333 podman[393363]: 2026-02-25 13:17:15.318119002 +0000 UTC m=+0.260305481 container attach 87a801ecf8b01381ce1bff147cd530a507f88d04a5c21711b3b630e8a2b72dd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 08:17:15 np0005629333 podman[393363]: 2026-02-25 13:17:15.318608126 +0000 UTC m=+0.260794555 container died 87a801ecf8b01381ce1bff147cd530a507f88d04a5c21711b3b630e8a2b72dd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heyrovsky, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 08:17:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2921: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:15 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e90193464154d31b60bebc690dc085c3959eeb99625ace28b6f253a1eceb67d9-merged.mount: Deactivated successfully.
Feb 25 08:17:15 np0005629333 podman[393363]: 2026-02-25 13:17:15.670680624 +0000 UTC m=+0.612867053 container remove 87a801ecf8b01381ce1bff147cd530a507f88d04a5c21711b3b630e8a2b72dd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 08:17:15 np0005629333 systemd[1]: libpod-conmon-87a801ecf8b01381ce1bff147cd530a507f88d04a5c21711b3b630e8a2b72dd1.scope: Deactivated successfully.
Feb 25 08:17:15 np0005629333 podman[393407]: 2026-02-25 13:17:15.905372883 +0000 UTC m=+0.089932068 container create 0ab5c0827988a74c1216ca863f1f298472302656c80b241bd0d816e6d4f7bdd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_khayyam, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 08:17:15 np0005629333 podman[393407]: 2026-02-25 13:17:15.850021502 +0000 UTC m=+0.034580657 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:17:15 np0005629333 systemd[1]: Started libpod-conmon-0ab5c0827988a74c1216ca863f1f298472302656c80b241bd0d816e6d4f7bdd4.scope.
Feb 25 08:17:16 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:17:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a913741bcb213d04044faabc818e9423b3cf1f187d76e16301212b89c1c6af86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:17:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a913741bcb213d04044faabc818e9423b3cf1f187d76e16301212b89c1c6af86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:17:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a913741bcb213d04044faabc818e9423b3cf1f187d76e16301212b89c1c6af86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:17:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a913741bcb213d04044faabc818e9423b3cf1f187d76e16301212b89c1c6af86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:17:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a913741bcb213d04044faabc818e9423b3cf1f187d76e16301212b89c1c6af86/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:17:16 np0005629333 nova_compute[244014]: 2026-02-25 13:17:16.047 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:17:16 np0005629333 podman[393407]: 2026-02-25 13:17:16.068992596 +0000 UTC m=+0.253551791 container init 0ab5c0827988a74c1216ca863f1f298472302656c80b241bd0d816e6d4f7bdd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 08:17:16 np0005629333 podman[393407]: 2026-02-25 13:17:16.079204804 +0000 UTC m=+0.263763979 container start 0ab5c0827988a74c1216ca863f1f298472302656c80b241bd0d816e6d4f7bdd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_khayyam, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 08:17:16 np0005629333 podman[393407]: 2026-02-25 13:17:16.118834972 +0000 UTC m=+0.303394147 container attach 0ab5c0827988a74c1216ca863f1f298472302656c80b241bd0d816e6d4f7bdd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_khayyam, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:17:16 np0005629333 heuristic_khayyam[393424]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:17:16 np0005629333 heuristic_khayyam[393424]: --> All data devices are unavailable
Feb 25 08:17:16 np0005629333 systemd[1]: libpod-0ab5c0827988a74c1216ca863f1f298472302656c80b241bd0d816e6d4f7bdd4.scope: Deactivated successfully.
Feb 25 08:17:16 np0005629333 podman[393407]: 2026-02-25 13:17:16.561619628 +0000 UTC m=+0.746178763 container died 0ab5c0827988a74c1216ca863f1f298472302656c80b241bd0d816e6d4f7bdd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_khayyam, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:17:16 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a913741bcb213d04044faabc818e9423b3cf1f187d76e16301212b89c1c6af86-merged.mount: Deactivated successfully.
Feb 25 08:17:16 np0005629333 podman[393407]: 2026-02-25 13:17:16.851975716 +0000 UTC m=+1.036534881 container remove 0ab5c0827988a74c1216ca863f1f298472302656c80b241bd0d816e6d4f7bdd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_khayyam, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:17:16 np0005629333 systemd[1]: libpod-conmon-0ab5c0827988a74c1216ca863f1f298472302656c80b241bd0d816e6d4f7bdd4.scope: Deactivated successfully.
Feb 25 08:17:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2922: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:17 np0005629333 podman[393522]: 2026-02-25 13:17:17.417000269 +0000 UTC m=+0.107345928 container create e8a2dd0d4dbc5250f787fe88bd95fb2a92ee50397081ad3794b1fdcc0ed810cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lichterman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 08:17:17 np0005629333 podman[393522]: 2026-02-25 13:17:17.348765205 +0000 UTC m=+0.039110914 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:17:17 np0005629333 systemd[1]: Started libpod-conmon-e8a2dd0d4dbc5250f787fe88bd95fb2a92ee50397081ad3794b1fdcc0ed810cd.scope.
Feb 25 08:17:17 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:17:17 np0005629333 podman[393522]: 2026-02-25 13:17:17.58053374 +0000 UTC m=+0.270879449 container init e8a2dd0d4dbc5250f787fe88bd95fb2a92ee50397081ad3794b1fdcc0ed810cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lichterman, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 25 08:17:17 np0005629333 podman[393522]: 2026-02-25 13:17:17.588968348 +0000 UTC m=+0.279314007 container start e8a2dd0d4dbc5250f787fe88bd95fb2a92ee50397081ad3794b1fdcc0ed810cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:17:17 np0005629333 practical_lichterman[393538]: 167 167
Feb 25 08:17:17 np0005629333 systemd[1]: libpod-e8a2dd0d4dbc5250f787fe88bd95fb2a92ee50397081ad3794b1fdcc0ed810cd.scope: Deactivated successfully.
Feb 25 08:17:17 np0005629333 podman[393522]: 2026-02-25 13:17:17.65746547 +0000 UTC m=+0.347811119 container attach e8a2dd0d4dbc5250f787fe88bd95fb2a92ee50397081ad3794b1fdcc0ed810cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 08:17:17 np0005629333 podman[393522]: 2026-02-25 13:17:17.658417587 +0000 UTC m=+0.348763246 container died e8a2dd0d4dbc5250f787fe88bd95fb2a92ee50397081ad3794b1fdcc0ed810cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lichterman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 08:17:17 np0005629333 systemd[1]: var-lib-containers-storage-overlay-8286e0765f25e1cf641c8c32c64db067663c250d50465313862393de2f9c311a-merged.mount: Deactivated successfully.
Feb 25 08:17:17 np0005629333 nova_compute[244014]: 2026-02-25 13:17:17.870 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:17:18 np0005629333 podman[393522]: 2026-02-25 13:17:18.017157522 +0000 UTC m=+0.707503181 container remove e8a2dd0d4dbc5250f787fe88bd95fb2a92ee50397081ad3794b1fdcc0ed810cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:17:18 np0005629333 systemd[1]: libpod-conmon-e8a2dd0d4dbc5250f787fe88bd95fb2a92ee50397081ad3794b1fdcc0ed810cd.scope: Deactivated successfully.
Feb 25 08:17:18 np0005629333 podman[393564]: 2026-02-25 13:17:18.192854447 +0000 UTC m=+0.062574326 container create 308ce07721908a2eac65fc23012f43bf39706afa4f2823e2dab819d08ee238fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 08:17:18 np0005629333 podman[393564]: 2026-02-25 13:17:18.152971952 +0000 UTC m=+0.022691841 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:17:18 np0005629333 systemd[1]: Started libpod-conmon-308ce07721908a2eac65fc23012f43bf39706afa4f2823e2dab819d08ee238fe.scope.
Feb 25 08:17:18 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:17:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d9a89e2d1fb376c7a6093ef45626f860157c9e31ee92bbc6c069d5c91dedfa1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:17:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d9a89e2d1fb376c7a6093ef45626f860157c9e31ee92bbc6c069d5c91dedfa1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:17:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d9a89e2d1fb376c7a6093ef45626f860157c9e31ee92bbc6c069d5c91dedfa1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:17:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d9a89e2d1fb376c7a6093ef45626f860157c9e31ee92bbc6c069d5c91dedfa1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:17:18 np0005629333 podman[393564]: 2026-02-25 13:17:18.34758341 +0000 UTC m=+0.217303349 container init 308ce07721908a2eac65fc23012f43bf39706afa4f2823e2dab819d08ee238fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ganguly, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 08:17:18 np0005629333 podman[393564]: 2026-02-25 13:17:18.357250913 +0000 UTC m=+0.226970792 container start 308ce07721908a2eac65fc23012f43bf39706afa4f2823e2dab819d08ee238fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ganguly, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:17:18 np0005629333 podman[393564]: 2026-02-25 13:17:18.404618868 +0000 UTC m=+0.274338807 container attach 308ce07721908a2eac65fc23012f43bf39706afa4f2823e2dab819d08ee238fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]: {
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:    "0": [
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:        {
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "devices": [
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "/dev/loop3"
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            ],
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "lv_name": "ceph_lv0",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "lv_size": "21470642176",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "name": "ceph_lv0",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "tags": {
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.cluster_name": "ceph",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.crush_device_class": "",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.encrypted": "0",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.objectstore": "bluestore",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.osd_id": "0",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.type": "block",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.vdo": "0",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.with_tpm": "0"
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            },
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "type": "block",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "vg_name": "ceph_vg0"
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:        }
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:    ],
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:    "1": [
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:        {
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "devices": [
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "/dev/loop4"
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            ],
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "lv_name": "ceph_lv1",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "lv_size": "21470642176",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "name": "ceph_lv1",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "tags": {
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.cluster_name": "ceph",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.crush_device_class": "",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.encrypted": "0",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.objectstore": "bluestore",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.osd_id": "1",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.type": "block",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.vdo": "0",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.with_tpm": "0"
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            },
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "type": "block",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "vg_name": "ceph_vg1"
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:        }
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:    ],
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:    "2": [
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:        {
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "devices": [
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "/dev/loop5"
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            ],
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "lv_name": "ceph_lv2",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "lv_size": "21470642176",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "name": "ceph_lv2",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "tags": {
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.cluster_name": "ceph",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.crush_device_class": "",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.encrypted": "0",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.objectstore": "bluestore",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.osd_id": "2",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.type": "block",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.vdo": "0",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:                "ceph.with_tpm": "0"
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            },
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "type": "block",
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:            "vg_name": "ceph_vg2"
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:        }
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]:    ]
Feb 25 08:17:18 np0005629333 reverent_ganguly[393580]: }
Feb 25 08:17:18 np0005629333 systemd[1]: libpod-308ce07721908a2eac65fc23012f43bf39706afa4f2823e2dab819d08ee238fe.scope: Deactivated successfully.
Feb 25 08:17:18 np0005629333 podman[393589]: 2026-02-25 13:17:18.70648009 +0000 UTC m=+0.032501698 container died 308ce07721908a2eac65fc23012f43bf39706afa4f2823e2dab819d08ee238fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ganguly, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 08:17:18 np0005629333 systemd[1]: var-lib-containers-storage-overlay-4d9a89e2d1fb376c7a6093ef45626f860157c9e31ee92bbc6c069d5c91dedfa1-merged.mount: Deactivated successfully.
Feb 25 08:17:18 np0005629333 podman[393589]: 2026-02-25 13:17:18.913372374 +0000 UTC m=+0.239393962 container remove 308ce07721908a2eac65fc23012f43bf39706afa4f2823e2dab819d08ee238fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ganguly, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:17:18 np0005629333 systemd[1]: libpod-conmon-308ce07721908a2eac65fc23012f43bf39706afa4f2823e2dab819d08ee238fe.scope: Deactivated successfully.
Feb 25 08:17:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:17:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2923: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:19 np0005629333 podman[393667]: 2026-02-25 13:17:19.507127117 +0000 UTC m=+0.114881931 container create 2e95a30c8cd1926a9a01d8b86338e748ccca081392c2355af022935f3072fb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Feb 25 08:17:19 np0005629333 podman[393667]: 2026-02-25 13:17:19.416737918 +0000 UTC m=+0.024492782 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:17:19 np0005629333 systemd[1]: Started libpod-conmon-2e95a30c8cd1926a9a01d8b86338e748ccca081392c2355af022935f3072fb77.scope.
Feb 25 08:17:19 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:17:19 np0005629333 podman[393667]: 2026-02-25 13:17:19.694865151 +0000 UTC m=+0.302619985 container init 2e95a30c8cd1926a9a01d8b86338e748ccca081392c2355af022935f3072fb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chaum, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:17:19 np0005629333 podman[393667]: 2026-02-25 13:17:19.702836936 +0000 UTC m=+0.310591730 container start 2e95a30c8cd1926a9a01d8b86338e748ccca081392c2355af022935f3072fb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chaum, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 08:17:19 np0005629333 xenodochial_chaum[393683]: 167 167
Feb 25 08:17:19 np0005629333 systemd[1]: libpod-2e95a30c8cd1926a9a01d8b86338e748ccca081392c2355af022935f3072fb77.scope: Deactivated successfully.
Feb 25 08:17:19 np0005629333 podman[393667]: 2026-02-25 13:17:19.731728801 +0000 UTC m=+0.339483675 container attach 2e95a30c8cd1926a9a01d8b86338e748ccca081392c2355af022935f3072fb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 08:17:19 np0005629333 podman[393667]: 2026-02-25 13:17:19.732434971 +0000 UTC m=+0.340189795 container died 2e95a30c8cd1926a9a01d8b86338e748ccca081392c2355af022935f3072fb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chaum, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:17:19 np0005629333 systemd[1]: var-lib-containers-storage-overlay-af27a813d730bc32e8bf095735c6e517ec0e2878f5f4891ebf3c6997838f7a11-merged.mount: Deactivated successfully.
Feb 25 08:17:20 np0005629333 podman[393667]: 2026-02-25 13:17:20.02037245 +0000 UTC m=+0.628127274 container remove 2e95a30c8cd1926a9a01d8b86338e748ccca081392c2355af022935f3072fb77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_chaum, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:17:20 np0005629333 systemd[1]: libpod-conmon-2e95a30c8cd1926a9a01d8b86338e748ccca081392c2355af022935f3072fb77.scope: Deactivated successfully.
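[annotation] The helper containers above (reverent_ganguly, then xenodochial_chaum) each run the full podman lifecycle, create -> init -> start -> attach -> died -> remove, in well under a second: cephadm launches them, captures their stdout, and discards them. The same sequence can be watched live with podman events; a sketch, with the event field names (Type, Status, Name, ID) assumed from podman's JSON event format:

    import json
    import subprocess

    # Stream podman events as one JSON object per line ("{{json .}}" is a
    # standard Go template accepted by `podman events`). Runs until interrupted.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "{{json .}}"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        if ev.get("Type") == "container":
            # Prints the create/init/start/attach/died/remove steps seen above.
            print(ev.get("Status"), ev.get("Name"), str(ev.get("ID", ""))[:12])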
Feb 25 08:17:20 np0005629333 podman[393710]: 2026-02-25 13:17:20.268373943 +0000 UTC m=+0.108836250 container create 7d48584a7bde584b9885770a7f25942d880cd1c359139e4ba66032250801821a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:17:20 np0005629333 podman[393710]: 2026-02-25 13:17:20.182598194 +0000 UTC m=+0.023060571 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:17:20 np0005629333 systemd[1]: Started libpod-conmon-7d48584a7bde584b9885770a7f25942d880cd1c359139e4ba66032250801821a.scope.
Feb 25 08:17:20 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:17:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b50a8af593e5a5f618215eb4170ad7cf4b2d8591625dc9629d5c9a76fcc8a4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:17:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b50a8af593e5a5f618215eb4170ad7cf4b2d8591625dc9629d5c9a76fcc8a4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:17:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b50a8af593e5a5f618215eb4170ad7cf4b2d8591625dc9629d5c9a76fcc8a4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:17:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b50a8af593e5a5f618215eb4170ad7cf4b2d8591625dc9629d5c9a76fcc8a4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:17:20 np0005629333 podman[393710]: 2026-02-25 13:17:20.441134305 +0000 UTC m=+0.281596692 container init 7d48584a7bde584b9885770a7f25942d880cd1c359139e4ba66032250801821a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 08:17:20 np0005629333 podman[393710]: 2026-02-25 13:17:20.449804569 +0000 UTC m=+0.290266896 container start 7d48584a7bde584b9885770a7f25942d880cd1c359139e4ba66032250801821a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_fermi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 08:17:20 np0005629333 podman[393710]: 2026-02-25 13:17:20.465611015 +0000 UTC m=+0.306073412 container attach 7d48584a7bde584b9885770a7f25942d880cd1c359139e4ba66032250801821a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 08:17:21 np0005629333 nova_compute[244014]: 2026-02-25 13:17:21.049 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:17:21 np0005629333 lvm[393806]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:17:21 np0005629333 lvm[393805]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:17:21 np0005629333 lvm[393805]: VG ceph_vg0 finished
Feb 25 08:17:21 np0005629333 lvm[393806]: VG ceph_vg1 finished
Feb 25 08:17:21 np0005629333 lvm[393808]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:17:21 np0005629333 lvm[393808]: VG ceph_vg2 finished
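[annotation] The lvm[...] messages above are udev event activation at work: as each loop-device PV appears, pvscan records it online, declares the owning VG complete once all of its PVs are present, and autoactivates the LVs, which is what puts the ceph_vg0..2 volumes back under the OSDs. The resulting state can be confirmed with LVM's JSON reporting; a sketch using standard lvs flags:

    import json
    import subprocess

    # `lvs --reportformat json` is standard LVM2; `lv_active` shows activation state.
    raw = subprocess.run(
        ["lvs", "--reportformat", "json",
         "-o", "lv_name,vg_name,lv_active,devices"],
        capture_output=True, text=True, check=True,
    ).stdout
    for lv in json.loads(raw)["report"][0]["lv"]:
        # Expect ceph_lv0/ceph_vg0 .. ceph_lv2/ceph_vg2 active on /dev/loop3..5.
        print(lv["vg_name"], lv["lv_name"], lv["lv_active"], lv["devices"])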
Feb 25 08:17:21 np0005629333 optimistic_fermi[393727]: {}
Feb 25 08:17:21 np0005629333 systemd[1]: libpod-7d48584a7bde584b9885770a7f25942d880cd1c359139e4ba66032250801821a.scope: Deactivated successfully.
Feb 25 08:17:21 np0005629333 podman[393710]: 2026-02-25 13:17:21.253601456 +0000 UTC m=+1.094063823 container died 7d48584a7bde584b9885770a7f25942d880cd1c359139e4ba66032250801821a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_fermi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:17:21 np0005629333 systemd[1]: libpod-7d48584a7bde584b9885770a7f25942d880cd1c359139e4ba66032250801821a.scope: Consumed 1.107s CPU time.
Feb 25 08:17:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2924: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:21 np0005629333 systemd[1]: var-lib-containers-storage-overlay-4b50a8af593e5a5f618215eb4170ad7cf4b2d8591625dc9629d5c9a76fcc8a4a-merged.mount: Deactivated successfully.
Feb 25 08:17:21 np0005629333 podman[393710]: 2026-02-25 13:17:21.619530634 +0000 UTC m=+1.459992971 container remove 7d48584a7bde584b9885770a7f25942d880cd1c359139e4ba66032250801821a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:17:21 np0005629333 systemd[1]: libpod-conmon-7d48584a7bde584b9885770a7f25942d880cd1c359139e4ba66032250801821a.scope: Deactivated successfully.
Feb 25 08:17:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:17:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:17:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:17:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:17:22 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:17:22 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
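[annotation] The mon_command calls above are the cephadm mgr module persisting compute-0's freshly refreshed device inventory and host metadata into the monitors' config-key store (keys mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0). Those blobs can be read back with the standard config-key interface; that the stored value parses as JSON is an assumption about cephadm's storage format:

    import json
    import subprocess

    # Key name taken verbatim from the audit log above.
    raw = subprocess.run(
        ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.dumps(json.loads(raw), indent=2)[:400])  # first few hundred chars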
Feb 25 08:17:22 np0005629333 nova_compute[244014]: 2026-02-25 13:17:22.875 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:17:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2925: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:17:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2926: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:26 np0005629333 nova_compute[244014]: 2026-02-25 13:17:26.051 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:17:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2927: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:27 np0005629333 nova_compute[244014]: 2026-02-25 13:17:27.879 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:17:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:17:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2928: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:31 np0005629333 nova_compute[244014]: 2026-02-25 13:17:31.057 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:17:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:17:31
Feb 25 08:17:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:17:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:17:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.log', '.rgw.root', 'default.rgw.control', '.mgr', 'vms', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta']
Feb 25 08:17:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
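[annotation] This balancer pass runs in upmap mode with a 5% max-misplaced budget, considers the eleven pools listed, and prepares 0 of a possible 10 upmap changes, i.e. the 305 PGs are already evenly placed. The same summary is available on demand from the mgr; `ceph balancer status` emits JSON, though the exact keys used below (mode, active, optimize_result) are assumed:

    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status"],
        capture_output=True, text=True, check=True,
    ).stdout)
    print(status.get("mode"), status.get("active"), status.get("optimize_result"))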
Feb 25 08:17:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2929: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:17:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:17:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:17:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:17:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:17:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:17:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:17:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:17:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:17:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:17:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:17:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:17:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:17:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:17:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:17:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:17:32 np0005629333 nova_compute[244014]: 2026-02-25 13:17:32.882 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:17:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2930: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:17:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2931: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:36 np0005629333 nova_compute[244014]: 2026-02-25 13:17:36.059 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:17:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2932: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:37 np0005629333 nova_compute[244014]: 2026-02-25 13:17:37.887 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:17:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:17:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2933: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:41 np0005629333 nova_compute[244014]: 2026-02-25 13:17:41.062 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:17:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2934: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:42 np0005629333 nova_compute[244014]: 2026-02-25 13:17:42.891 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:17:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:17:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:17:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:17:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:17:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:17:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:17:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:17:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:17:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:17:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:17:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:17:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:17:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:17:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:17:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:17:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:17:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:17:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:17:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:17:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:17:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:17:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:17:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
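[annotation] The pg_autoscaler numbers above are internally consistent: each pool's pg target is capacity_ratio * bias * B, where B is the cluster-wide PG budget, and every logged target matches B = 300, which is what the default mon_target_pg_per_osd = 100 gives across this cluster's 3 OSDs (an inference from the arithmetic, not something the log states). The target is then quantized to a power of two and left at the current pg_num when the deviation is small, hence "quantized to 32 (current 32)". A quick verification against the logged values:

    # Check: pg target = capacity_ratio * bias * budget, with budget = 300
    # assumed (mon_target_pg_per_osd=100 * 3 OSDs). Values copied from the log.
    BUDGET = 300
    pools = {
        ".mgr":               (7.185749983720779e-06,  1.0, 0.0021557249951162337),
        "vms":                (1.73878357684759e-05,   1.0, 0.005216350730542769),
        "images":             (0.0006714637386478266,  1.0, 0.20143912159434796),
        "cephfs.cephfs.meta": (1.3916366864300228e-06, 4.0, 0.0016699640237160273),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0, 0.00015261769876929088),
    }
    for name, (ratio, bias, logged) in pools.items():
        assert abs(ratio * bias * BUDGET - logged) < 1e-12, name
    print("all logged pg targets consistent with a 300-PG budget")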
Feb 25 08:17:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2935: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:17:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2936: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:45 np0005629333 podman[393850]: 2026-02-25 13:17:45.740867092 +0000 UTC m=+0.079370989 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent)
Feb 25 08:17:45 np0005629333 podman[393851]: 2026-02-25 13:17:45.77979748 +0000 UTC m=+0.116136136 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, io.buildah.version=1.43.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:17:46 np0005629333 nova_compute[244014]: 2026-02-25 13:17:46.064 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:17:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2937: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:17:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2122349000' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:17:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:17:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2122349000' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:17:47 np0005629333 nova_compute[244014]: 2026-02-25 13:17:47.894 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:17:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:17:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2938: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:51 np0005629333 nova_compute[244014]: 2026-02-25 13:17:51.065 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:17:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2939: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:51 np0005629333 nova_compute[244014]: 2026-02-25 13:17:51.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:17:51 np0005629333 nova_compute[244014]: 2026-02-25 13:17:51.907 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:17:51 np0005629333 nova_compute[244014]: 2026-02-25 13:17:51.908 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:17:51 np0005629333 nova_compute[244014]: 2026-02-25 13:17:51.908 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:17:51 np0005629333 nova_compute[244014]: 2026-02-25 13:17:51.908 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:17:51 np0005629333 nova_compute[244014]: 2026-02-25 13:17:51.909 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:17:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:17:52 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4266829098' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:17:52 np0005629333 nova_compute[244014]: 2026-02-25 13:17:52.445 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:17:52 np0005629333 nova_compute[244014]: 2026-02-25 13:17:52.655 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:17:52 np0005629333 nova_compute[244014]: 2026-02-25 13:17:52.656 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3580MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:17:52 np0005629333 nova_compute[244014]: 2026-02-25 13:17:52.656 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:17:52 np0005629333 nova_compute[244014]: 2026-02-25 13:17:52.656 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:17:52 np0005629333 nova_compute[244014]: 2026-02-25 13:17:52.717 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:17:52 np0005629333 nova_compute[244014]: 2026-02-25 13:17:52.717 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:17:52 np0005629333 nova_compute[244014]: 2026-02-25 13:17:52.731 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:17:52 np0005629333 nova_compute[244014]: 2026-02-25 13:17:52.898 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:17:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:17:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4281192284' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:17:53 np0005629333 nova_compute[244014]: 2026-02-25 13:17:53.292 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:17:53 np0005629333 nova_compute[244014]: 2026-02-25 13:17:53.298 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:17:53 np0005629333 nova_compute[244014]: 2026-02-25 13:17:53.315 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:17:53 np0005629333 nova_compute[244014]: 2026-02-25 13:17:53.318 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:17:53 np0005629333 nova_compute[244014]: 2026-02-25 13:17:53.318 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
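[annotation] The update_available_resource pass above (13:17:51.876 through 13:17:53.318) derives the hypervisor's free_disk of ~59.99 GB for this RBD-backed compute node by shelling out to the exact command logged at 13:17:51.909 and reading cluster-wide free space from the result. A sketch of that derivation; the stats key names follow ceph df's JSON schema (total_avail_bytes), an assumption worth checking on other releases:

    import json
    import subprocess

    # Same command nova_compute runs above; needs the client.openstack keyring.
    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    free_gb = json.loads(raw)["stats"]["total_avail_bytes"] / 1024 ** 3
    print(f"free_disk={free_gb}GB")  # ~59.98 GB on this cluster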
Feb 25 08:17:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2940: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:17:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:17:55.057 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:17:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:17:55.058 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:17:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:17:55.058 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:17:55 np0005629333 nova_compute[244014]: 2026-02-25 13:17:55.319 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:17:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2941: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:55 np0005629333 nova_compute[244014]: 2026-02-25 13:17:55.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:17:55 np0005629333 nova_compute[244014]: 2026-02-25 13:17:55.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:17:55 np0005629333 nova_compute[244014]: 2026-02-25 13:17:55.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:17:55 np0005629333 nova_compute[244014]: 2026-02-25 13:17:55.891 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:17:56 np0005629333 nova_compute[244014]: 2026-02-25 13:17:56.068 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:17:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2942: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:17:57 np0005629333 nova_compute[244014]: 2026-02-25 13:17:57.904 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:17:58 np0005629333 nova_compute[244014]: 2026-02-25 13:17:58.886 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:17:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:17:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2943: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:01 np0005629333 nova_compute[244014]: 2026-02-25 13:18:01.070 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:18:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2944: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:18:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:18:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:18:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:18:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:18:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:18:02 np0005629333 nova_compute[244014]: 2026-02-25 13:18:02.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:18:02 np0005629333 nova_compute[244014]: 2026-02-25 13:18:02.910 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:18:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2945: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:03 np0005629333 nova_compute[244014]: 2026-02-25 13:18:03.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:18:03 np0005629333 nova_compute[244014]: 2026-02-25 13:18:03.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:18:03 np0005629333 nova_compute[244014]: 2026-02-25 13:18:03.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 08:18:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:18:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2946: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:06 np0005629333 nova_compute[244014]: 2026-02-25 13:18:06.073 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:18:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2947: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:07 np0005629333 nova_compute[244014]: 2026-02-25 13:18:07.914 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:18:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:18:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2948: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:09 np0005629333 nova_compute[244014]: 2026-02-25 13:18:09.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:18:11 np0005629333 nova_compute[244014]: 2026-02-25 13:18:11.074 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:18:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2949: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:12 np0005629333 nova_compute[244014]: 2026-02-25 13:18:12.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:18:12 np0005629333 nova_compute[244014]: 2026-02-25 13:18:12.916 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:18:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2950: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:18:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2951: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:16 np0005629333 nova_compute[244014]: 2026-02-25 13:18:16.076 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:18:16 np0005629333 podman[393938]: 2026-02-25 13:18:16.768826808 +0000 UTC m=+0.105046324 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 08:18:16 np0005629333 podman[393939]: 2026-02-25 13:18:16.778713226 +0000 UTC m=+0.113241654 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, managed_by=edpm_ansible)
Feb 25 08:18:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2952: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:17 np0005629333 nova_compute[244014]: 2026-02-25 13:18:17.920 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:18:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:18:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2953: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:20 np0005629333 nova_compute[244014]: 2026-02-25 13:18:20.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:18:21 np0005629333 nova_compute[244014]: 2026-02-25 13:18:21.078 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:18:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2954: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:18:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:18:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:18:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:18:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:18:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:18:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:18:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:18:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:18:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:18:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:18:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:18:22 np0005629333 nova_compute[244014]: 2026-02-25 13:18:22.924 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:18:22 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:18:22 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:18:22 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:18:23 np0005629333 podman[394124]: 2026-02-25 13:18:23.114360972 +0000 UTC m=+0.026564960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:18:23 np0005629333 podman[394124]: 2026-02-25 13:18:23.226007091 +0000 UTC m=+0.138211099 container create dc7c38754d400d3abab9dd6d9966f4036c51ae4a7813df8bbcd04d06f514bc08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_leakey, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 08:18:23 np0005629333 systemd[1]: Started libpod-conmon-dc7c38754d400d3abab9dd6d9966f4036c51ae4a7813df8bbcd04d06f514bc08.scope.
Feb 25 08:18:23 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:18:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2955: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:23 np0005629333 podman[394124]: 2026-02-25 13:18:23.476611258 +0000 UTC m=+0.388815266 container init dc7c38754d400d3abab9dd6d9966f4036c51ae4a7813df8bbcd04d06f514bc08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_leakey, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 08:18:23 np0005629333 podman[394124]: 2026-02-25 13:18:23.485953821 +0000 UTC m=+0.398157819 container start dc7c38754d400d3abab9dd6d9966f4036c51ae4a7813df8bbcd04d06f514bc08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 08:18:23 np0005629333 affectionate_leakey[394140]: 167 167
Feb 25 08:18:23 np0005629333 systemd[1]: libpod-dc7c38754d400d3abab9dd6d9966f4036c51ae4a7813df8bbcd04d06f514bc08.scope: Deactivated successfully.
Feb 25 08:18:23 np0005629333 podman[394124]: 2026-02-25 13:18:23.528984845 +0000 UTC m=+0.441188853 container attach dc7c38754d400d3abab9dd6d9966f4036c51ae4a7813df8bbcd04d06f514bc08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 08:18:23 np0005629333 podman[394124]: 2026-02-25 13:18:23.529917041 +0000 UTC m=+0.442121029 container died dc7c38754d400d3abab9dd6d9966f4036c51ae4a7813df8bbcd04d06f514bc08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_leakey, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:18:23 np0005629333 systemd[1]: var-lib-containers-storage-overlay-b8549f61d61791ff68f31dc6786cb1d5fca4735267c8e4f67265051c9f21e4c1-merged.mount: Deactivated successfully.
Feb 25 08:18:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:18:24 np0005629333 podman[394124]: 2026-02-25 13:18:24.267653595 +0000 UTC m=+1.179857593 container remove dc7c38754d400d3abab9dd6d9966f4036c51ae4a7813df8bbcd04d06f514bc08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 08:18:24 np0005629333 systemd[1]: libpod-conmon-dc7c38754d400d3abab9dd6d9966f4036c51ae4a7813df8bbcd04d06f514bc08.scope: Deactivated successfully.
Feb 25 08:18:24 np0005629333 podman[394166]: 2026-02-25 13:18:24.427046509 +0000 UTC m=+0.038238379 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:18:24 np0005629333 podman[394166]: 2026-02-25 13:18:24.529464648 +0000 UTC m=+0.140656538 container create c679dd12f1f217e07c5903f06fdc68a90041847f512f669acd5da34c1e9125f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_haslett, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:18:24 np0005629333 systemd[1]: Started libpod-conmon-c679dd12f1f217e07c5903f06fdc68a90041847f512f669acd5da34c1e9125f5.scope.
Feb 25 08:18:24 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:18:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4dacc0b06fc1a86863393b9ecec298fb0a40c15ce3274fc31be268f146849d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:18:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4dacc0b06fc1a86863393b9ecec298fb0a40c15ce3274fc31be268f146849d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:18:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4dacc0b06fc1a86863393b9ecec298fb0a40c15ce3274fc31be268f146849d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:18:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4dacc0b06fc1a86863393b9ecec298fb0a40c15ce3274fc31be268f146849d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:18:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4dacc0b06fc1a86863393b9ecec298fb0a40c15ce3274fc31be268f146849d8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:18:24 np0005629333 podman[394166]: 2026-02-25 13:18:24.755546112 +0000 UTC m=+0.366738052 container init c679dd12f1f217e07c5903f06fdc68a90041847f512f669acd5da34c1e9125f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_haslett, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 08:18:24 np0005629333 podman[394166]: 2026-02-25 13:18:24.765044021 +0000 UTC m=+0.376235901 container start c679dd12f1f217e07c5903f06fdc68a90041847f512f669acd5da34c1e9125f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_haslett, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 08:18:24 np0005629333 podman[394166]: 2026-02-25 13:18:24.832491223 +0000 UTC m=+0.443683173 container attach c679dd12f1f217e07c5903f06fdc68a90041847f512f669acd5da34c1e9125f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_haslett, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:18:25 np0005629333 upbeat_haslett[394183]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:18:25 np0005629333 upbeat_haslett[394183]: --> All data devices are unavailable
Feb 25 08:18:25 np0005629333 systemd[1]: libpod-c679dd12f1f217e07c5903f06fdc68a90041847f512f669acd5da34c1e9125f5.scope: Deactivated successfully.
Feb 25 08:18:25 np0005629333 conmon[394183]: conmon c679dd12f1f217e07c59 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c679dd12f1f217e07c5903f06fdc68a90041847f512f669acd5da34c1e9125f5.scope/container/memory.events
Feb 25 08:18:25 np0005629333 podman[394166]: 2026-02-25 13:18:25.312485638 +0000 UTC m=+0.923677528 container died c679dd12f1f217e07c5903f06fdc68a90041847f512f669acd5da34c1e9125f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:18:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2956: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:25 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a4dacc0b06fc1a86863393b9ecec298fb0a40c15ce3274fc31be268f146849d8-merged.mount: Deactivated successfully.
Feb 25 08:18:25 np0005629333 podman[394166]: 2026-02-25 13:18:25.851085446 +0000 UTC m=+1.462277336 container remove c679dd12f1f217e07c5903f06fdc68a90041847f512f669acd5da34c1e9125f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_haslett, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:18:25 np0005629333 systemd[1]: libpod-conmon-c679dd12f1f217e07c5903f06fdc68a90041847f512f669acd5da34c1e9125f5.scope: Deactivated successfully.
Feb 25 08:18:26 np0005629333 nova_compute[244014]: 2026-02-25 13:18:26.080 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:18:26 np0005629333 podman[394279]: 2026-02-25 13:18:26.365178353 +0000 UTC m=+0.038671771 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:18:26 np0005629333 podman[394279]: 2026-02-25 13:18:26.530484755 +0000 UTC m=+0.203978093 container create 0d10c3f806a970a5891a0d344c1281a9032222d0461a4ec7d1b17208ca52b9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_fermat, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:18:26 np0005629333 systemd[1]: Started libpod-conmon-0d10c3f806a970a5891a0d344c1281a9032222d0461a4ec7d1b17208ca52b9e8.scope.
Feb 25 08:18:26 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:18:26 np0005629333 podman[394279]: 2026-02-25 13:18:26.847957516 +0000 UTC m=+0.521450924 container init 0d10c3f806a970a5891a0d344c1281a9032222d0461a4ec7d1b17208ca52b9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_fermat, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 08:18:26 np0005629333 podman[394279]: 2026-02-25 13:18:26.856719243 +0000 UTC m=+0.530212601 container start 0d10c3f806a970a5891a0d344c1281a9032222d0461a4ec7d1b17208ca52b9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_fermat, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Feb 25 08:18:26 np0005629333 adoring_fermat[394295]: 167 167
Feb 25 08:18:26 np0005629333 systemd[1]: libpod-0d10c3f806a970a5891a0d344c1281a9032222d0461a4ec7d1b17208ca52b9e8.scope: Deactivated successfully.
Feb 25 08:18:26 np0005629333 podman[394279]: 2026-02-25 13:18:26.903983246 +0000 UTC m=+0.577476604 container attach 0d10c3f806a970a5891a0d344c1281a9032222d0461a4ec7d1b17208ca52b9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 08:18:26 np0005629333 podman[394279]: 2026-02-25 13:18:26.905839068 +0000 UTC m=+0.579332426 container died 0d10c3f806a970a5891a0d344c1281a9032222d0461a4ec7d1b17208ca52b9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_fermat, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 08:18:27 np0005629333 systemd[1]: var-lib-containers-storage-overlay-902b78625c8eb3d4ecdefab77387772df62f7020af4e90875168ea5ad74da193-merged.mount: Deactivated successfully.
Feb 25 08:18:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2957: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:27 np0005629333 podman[394279]: 2026-02-25 13:18:27.513185684 +0000 UTC m=+1.186679042 container remove 0d10c3f806a970a5891a0d344c1281a9032222d0461a4ec7d1b17208ca52b9e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 08:18:27 np0005629333 systemd[1]: libpod-conmon-0d10c3f806a970a5891a0d344c1281a9032222d0461a4ec7d1b17208ca52b9e8.scope: Deactivated successfully.
Feb 25 08:18:27 np0005629333 podman[394321]: 2026-02-25 13:18:27.684295599 +0000 UTC m=+0.031698814 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:18:27 np0005629333 podman[394321]: 2026-02-25 13:18:27.779956417 +0000 UTC m=+0.127359542 container create e77de4b4eda7f78aafb3d08fd925e562a026cf1a5368267c1be0cf9aa6d63032 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_nightingale, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:18:27 np0005629333 systemd[1]: Started libpod-conmon-e77de4b4eda7f78aafb3d08fd925e562a026cf1a5368267c1be0cf9aa6d63032.scope.
Feb 25 08:18:27 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:18:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15edb6f83ecf7a1b0d19bff60ae62dc46822fc2cf292babae29e5aeeb5524d4c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:18:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15edb6f83ecf7a1b0d19bff60ae62dc46822fc2cf292babae29e5aeeb5524d4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:18:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15edb6f83ecf7a1b0d19bff60ae62dc46822fc2cf292babae29e5aeeb5524d4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:18:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15edb6f83ecf7a1b0d19bff60ae62dc46822fc2cf292babae29e5aeeb5524d4c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:18:27 np0005629333 nova_compute[244014]: 2026-02-25 13:18:27.927 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:18:28 np0005629333 podman[394321]: 2026-02-25 13:18:28.047511291 +0000 UTC m=+0.394914496 container init e77de4b4eda7f78aafb3d08fd925e562a026cf1a5368267c1be0cf9aa6d63032 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_nightingale, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:18:28 np0005629333 podman[394321]: 2026-02-25 13:18:28.056457084 +0000 UTC m=+0.403860229 container start e77de4b4eda7f78aafb3d08fd925e562a026cf1a5368267c1be0cf9aa6d63032 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 08:18:28 np0005629333 podman[394321]: 2026-02-25 13:18:28.123017881 +0000 UTC m=+0.470421026 container attach e77de4b4eda7f78aafb3d08fd925e562a026cf1a5368267c1be0cf9aa6d63032 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]: {
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:    "0": [
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:        {
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "devices": [
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "/dev/loop3"
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            ],
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "lv_name": "ceph_lv0",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "lv_size": "21470642176",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "name": "ceph_lv0",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "tags": {
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.cluster_name": "ceph",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.crush_device_class": "",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.encrypted": "0",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.objectstore": "bluestore",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.osd_id": "0",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.type": "block",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.vdo": "0",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.with_tpm": "0"
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            },
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "type": "block",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "vg_name": "ceph_vg0"
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:        }
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:    ],
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:    "1": [
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:        {
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "devices": [
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "/dev/loop4"
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            ],
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "lv_name": "ceph_lv1",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "lv_size": "21470642176",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "name": "ceph_lv1",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "tags": {
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.cluster_name": "ceph",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.crush_device_class": "",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.encrypted": "0",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.objectstore": "bluestore",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.osd_id": "1",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.type": "block",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.vdo": "0",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.with_tpm": "0"
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            },
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "type": "block",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "vg_name": "ceph_vg1"
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:        }
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:    ],
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:    "2": [
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:        {
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "devices": [
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "/dev/loop5"
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            ],
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "lv_name": "ceph_lv2",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "lv_size": "21470642176",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "name": "ceph_lv2",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "tags": {
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.cluster_name": "ceph",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.crush_device_class": "",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.encrypted": "0",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.objectstore": "bluestore",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.osd_id": "2",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.type": "block",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.vdo": "0",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:                "ceph.with_tpm": "0"
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            },
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "type": "block",
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:            "vg_name": "ceph_vg2"
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:        }
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]:    ]
Feb 25 08:18:28 np0005629333 zen_nightingale[394338]: }
Feb 25 08:18:28 np0005629333 systemd[1]: libpod-e77de4b4eda7f78aafb3d08fd925e562a026cf1a5368267c1be0cf9aa6d63032.scope: Deactivated successfully.
Feb 25 08:18:28 np0005629333 podman[394321]: 2026-02-25 13:18:28.38398341 +0000 UTC m=+0.731386545 container died e77de4b4eda7f78aafb3d08fd925e562a026cf1a5368267c1be0cf9aa6d63032 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 08:18:28 np0005629333 systemd[1]: var-lib-containers-storage-overlay-15edb6f83ecf7a1b0d19bff60ae62dc46822fc2cf292babae29e5aeeb5524d4c-merged.mount: Deactivated successfully.
Feb 25 08:18:28 np0005629333 podman[394321]: 2026-02-25 13:18:28.802036878 +0000 UTC m=+1.149440033 container remove e77de4b4eda7f78aafb3d08fd925e562a026cf1a5368267c1be0cf9aa6d63032 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_nightingale, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 08:18:28 np0005629333 systemd[1]: libpod-conmon-e77de4b4eda7f78aafb3d08fd925e562a026cf1a5368267c1be0cf9aa6d63032.scope: Deactivated successfully.
Feb 25 08:18:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:18:29 np0005629333 podman[394423]: 2026-02-25 13:18:29.301318077 +0000 UTC m=+0.035376568 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:18:29 np0005629333 podman[394423]: 2026-02-25 13:18:29.383478244 +0000 UTC m=+0.117536745 container create 9c47944436bdf4cbbe0b897fadc014e3f3b0baccd0c269bfc9903a8b650f8508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_panini, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:18:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2958: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:29 np0005629333 systemd[1]: Started libpod-conmon-9c47944436bdf4cbbe0b897fadc014e3f3b0baccd0c269bfc9903a8b650f8508.scope.
Feb 25 08:18:29 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:18:29 np0005629333 podman[394423]: 2026-02-25 13:18:29.59076592 +0000 UTC m=+0.324824471 container init 9c47944436bdf4cbbe0b897fadc014e3f3b0baccd0c269bfc9903a8b650f8508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_panini, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 08:18:29 np0005629333 podman[394423]: 2026-02-25 13:18:29.599113805 +0000 UTC m=+0.333172306 container start 9c47944436bdf4cbbe0b897fadc014e3f3b0baccd0c269bfc9903a8b650f8508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_panini, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 08:18:29 np0005629333 sweet_panini[394439]: 167 167
Feb 25 08:18:29 np0005629333 systemd[1]: libpod-9c47944436bdf4cbbe0b897fadc014e3f3b0baccd0c269bfc9903a8b650f8508.scope: Deactivated successfully.
Feb 25 08:18:29 np0005629333 podman[394423]: 2026-02-25 13:18:29.644091953 +0000 UTC m=+0.378150514 container attach 9c47944436bdf4cbbe0b897fadc014e3f3b0baccd0c269bfc9903a8b650f8508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:18:29 np0005629333 podman[394423]: 2026-02-25 13:18:29.644629918 +0000 UTC m=+0.378688419 container died 9c47944436bdf4cbbe0b897fadc014e3f3b0baccd0c269bfc9903a8b650f8508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_panini, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 08:18:29 np0005629333 systemd[1]: var-lib-containers-storage-overlay-51b98b82405b81e4c2ece1f53af7dcb398b449d62dd9aed423dc9fa64045c616-merged.mount: Deactivated successfully.
Feb 25 08:18:29 np0005629333 podman[394423]: 2026-02-25 13:18:29.987608 +0000 UTC m=+0.721666491 container remove 9c47944436bdf4cbbe0b897fadc014e3f3b0baccd0c269bfc9903a8b650f8508 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_panini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:18:29 np0005629333 systemd[1]: libpod-conmon-9c47944436bdf4cbbe0b897fadc014e3f3b0baccd0c269bfc9903a8b650f8508.scope: Deactivated successfully.
Feb 25 08:18:30 np0005629333 podman[394464]: 2026-02-25 13:18:30.156741339 +0000 UTC m=+0.035334427 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:18:30 np0005629333 podman[394464]: 2026-02-25 13:18:30.254753222 +0000 UTC m=+0.133346240 container create fcfb309d72fa899ebdba0381b0c5d05189897ee8d475d18af012bafa967f3786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lederberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:18:30 np0005629333 systemd[1]: Started libpod-conmon-fcfb309d72fa899ebdba0381b0c5d05189897ee8d475d18af012bafa967f3786.scope.
Feb 25 08:18:30 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:18:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62b06ce619c2403b8442acd923856030a570669f149d1e8af5bb99d307ea615/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:18:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62b06ce619c2403b8442acd923856030a570669f149d1e8af5bb99d307ea615/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:18:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62b06ce619c2403b8442acd923856030a570669f149d1e8af5bb99d307ea615/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:18:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62b06ce619c2403b8442acd923856030a570669f149d1e8af5bb99d307ea615/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:18:30 np0005629333 podman[394464]: 2026-02-25 13:18:30.535087167 +0000 UTC m=+0.413680185 container init fcfb309d72fa899ebdba0381b0c5d05189897ee8d475d18af012bafa967f3786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lederberg, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:18:30 np0005629333 podman[394464]: 2026-02-25 13:18:30.54297956 +0000 UTC m=+0.421572568 container start fcfb309d72fa899ebdba0381b0c5d05189897ee8d475d18af012bafa967f3786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lederberg, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 08:18:30 np0005629333 podman[394464]: 2026-02-25 13:18:30.60045787 +0000 UTC m=+0.479050898 container attach fcfb309d72fa899ebdba0381b0c5d05189897ee8d475d18af012bafa967f3786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:18:30 np0005629333 nova_compute[244014]: 2026-02-25 13:18:30.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:18:31 np0005629333 nova_compute[244014]: 2026-02-25 13:18:31.082 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:18:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:18:31
Feb 25 08:18:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:18:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:18:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['images', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'backups']
Feb 25 08:18:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:18:31 np0005629333 lvm[394559]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:18:31 np0005629333 lvm[394560]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:18:31 np0005629333 lvm[394559]: VG ceph_vg0 finished
Feb 25 08:18:31 np0005629333 lvm[394562]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:18:31 np0005629333 lvm[394562]: VG ceph_vg2 finished
Feb 25 08:18:31 np0005629333 lvm[394560]: VG ceph_vg1 finished
Feb 25 08:18:31 np0005629333 wonderful_lederberg[394481]: {}
Feb 25 08:18:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2959: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:31 np0005629333 systemd[1]: libpod-fcfb309d72fa899ebdba0381b0c5d05189897ee8d475d18af012bafa967f3786.scope: Deactivated successfully.
Feb 25 08:18:31 np0005629333 systemd[1]: libpod-fcfb309d72fa899ebdba0381b0c5d05189897ee8d475d18af012bafa967f3786.scope: Consumed 1.254s CPU time.
Feb 25 08:18:31 np0005629333 podman[394464]: 2026-02-25 13:18:31.452358063 +0000 UTC m=+1.330951051 container died fcfb309d72fa899ebdba0381b0c5d05189897ee8d475d18af012bafa967f3786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lederberg, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:18:31 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d62b06ce619c2403b8442acd923856030a570669f149d1e8af5bb99d307ea615-merged.mount: Deactivated successfully.
Feb 25 08:18:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:18:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:18:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:18:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:18:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:18:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:18:31 np0005629333 podman[394464]: 2026-02-25 13:18:31.692956227 +0000 UTC m=+1.571549245 container remove fcfb309d72fa899ebdba0381b0c5d05189897ee8d475d18af012bafa967f3786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lederberg, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:18:31 np0005629333 systemd[1]: libpod-conmon-fcfb309d72fa899ebdba0381b0c5d05189897ee8d475d18af012bafa967f3786.scope: Deactivated successfully.
Feb 25 08:18:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:18:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:18:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:18:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:18:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:18:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:18:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:18:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:18:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:18:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:18:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:18:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:18:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:18:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:18:32 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:18:32 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:18:32 np0005629333 nova_compute[244014]: 2026-02-25 13:18:32.931 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:18:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2960: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:18:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2961: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:35 np0005629333 nova_compute[244014]: 2026-02-25 13:18:35.890 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:18:35 np0005629333 nova_compute[244014]: 2026-02-25 13:18:35.891 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 25 08:18:36 np0005629333 nova_compute[244014]: 2026-02-25 13:18:36.085 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:18:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2962: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:37 np0005629333 nova_compute[244014]: 2026-02-25 13:18:37.896 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:18:37 np0005629333 nova_compute[244014]: 2026-02-25 13:18:37.897 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 25 08:18:37 np0005629333 nova_compute[244014]: 2026-02-25 13:18:37.911 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb 25 08:18:37 np0005629333 nova_compute[244014]: 2026-02-25 13:18:37.935 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:18:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:18:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2963: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:41 np0005629333 nova_compute[244014]: 2026-02-25 13:18:41.086 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:18:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2964: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:42 np0005629333 nova_compute[244014]: 2026-02-25 13:18:42.939 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:18:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:18:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:18:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:18:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:18:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:18:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:18:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:18:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:18:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:18:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:18:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:18:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:18:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:18:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:18:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:18:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:18:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:18:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:18:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:18:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:18:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:18:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:18:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 08:18:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2965: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:18:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2966: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:46 np0005629333 nova_compute[244014]: 2026-02-25 13:18:46.087 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:18:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2967: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:18:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3649150109' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:18:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:18:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3649150109' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:18:47 np0005629333 podman[394604]: 2026-02-25 13:18:47.754857272 +0000 UTC m=+0.091002397 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 08:18:47 np0005629333 podman[394605]: 2026-02-25 13:18:47.784729614 +0000 UTC m=+0.120914020 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 08:18:47 np0005629333 nova_compute[244014]: 2026-02-25 13:18:47.941 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:18:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:18:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2968: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:50 np0005629333 nova_compute[244014]: 2026-02-25 13:18:50.665 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:18:51 np0005629333 nova_compute[244014]: 2026-02-25 13:18:51.089 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:18:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2969: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:52 np0005629333 nova_compute[244014]: 2026-02-25 13:18:52.945 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:18:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2970: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:53 np0005629333 nova_compute[244014]: 2026-02-25 13:18:53.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:18:53 np0005629333 nova_compute[244014]: 2026-02-25 13:18:53.908 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:18:53 np0005629333 nova_compute[244014]: 2026-02-25 13:18:53.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:18:53 np0005629333 nova_compute[244014]: 2026-02-25 13:18:53.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:18:53 np0005629333 nova_compute[244014]: 2026-02-25 13:18:53.909 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:18:53 np0005629333 nova_compute[244014]: 2026-02-25 13:18:53.910 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:18:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:18:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:18:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4150166556' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:18:54 np0005629333 nova_compute[244014]: 2026-02-25 13:18:54.525 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:18:54 np0005629333 nova_compute[244014]: 2026-02-25 13:18:54.740 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:18:54 np0005629333 nova_compute[244014]: 2026-02-25 13:18:54.742 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3603MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:18:54 np0005629333 nova_compute[244014]: 2026-02-25 13:18:54.743 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:18:54 np0005629333 nova_compute[244014]: 2026-02-25 13:18:54.743 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:18:54 np0005629333 nova_compute[244014]: 2026-02-25 13:18:54.807 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:18:54 np0005629333 nova_compute[244014]: 2026-02-25 13:18:54.808 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:18:54 np0005629333 nova_compute[244014]: 2026-02-25 13:18:54.825 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:18:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:18:55.058 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:18:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:18:55.059 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:18:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:18:55.059 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:18:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:18:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1928768406' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:18:55 np0005629333 nova_compute[244014]: 2026-02-25 13:18:55.416 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:18:55 np0005629333 nova_compute[244014]: 2026-02-25 13:18:55.422 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:18:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2971: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:55 np0005629333 nova_compute[244014]: 2026-02-25 13:18:55.448 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:18:55 np0005629333 nova_compute[244014]: 2026-02-25 13:18:55.451 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:18:55 np0005629333 nova_compute[244014]: 2026-02-25 13:18:55.452 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:18:56 np0005629333 nova_compute[244014]: 2026-02-25 13:18:56.091 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:18:56 np0005629333 nova_compute[244014]: 2026-02-25 13:18:56.453 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:18:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2972: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:18:57 np0005629333 nova_compute[244014]: 2026-02-25 13:18:57.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:18:57 np0005629333 nova_compute[244014]: 2026-02-25 13:18:57.878 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:18:57 np0005629333 nova_compute[244014]: 2026-02-25 13:18:57.878 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:18:57 np0005629333 nova_compute[244014]: 2026-02-25 13:18:57.902 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:18:57 np0005629333 nova_compute[244014]: 2026-02-25 13:18:57.948 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:18:58 np0005629333 nova_compute[244014]: 2026-02-25 13:18:58.895 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:18:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:18:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2973: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:01 np0005629333 nova_compute[244014]: 2026-02-25 13:19:01.093 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:19:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2974: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:19:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:19:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:19:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:19:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:19:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:19:02 np0005629333 nova_compute[244014]: 2026-02-25 13:19:02.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:19:02 np0005629333 nova_compute[244014]: 2026-02-25 13:19:02.951 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:19:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2975: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:03 np0005629333 nova_compute[244014]: 2026-02-25 13:19:03.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:19:03 np0005629333 nova_compute[244014]: 2026-02-25 13:19:03.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 08:19:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:19:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2976: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:05 np0005629333 nova_compute[244014]: 2026-02-25 13:19:05.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:19:06 np0005629333 nova_compute[244014]: 2026-02-25 13:19:06.094 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:19:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2977: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:07 np0005629333 nova_compute[244014]: 2026-02-25 13:19:07.955 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:19:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:19:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2978: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:09 np0005629333 nova_compute[244014]: 2026-02-25 13:19:09.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:19:11 np0005629333 nova_compute[244014]: 2026-02-25 13:19:11.096 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:19:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2979: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 08:19:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.0 total, 600.0 interval#012Cumulative writes: 13K writes, 62K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s#012Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1328 writes, 5767 keys, 1328 commit groups, 1.0 writes per commit group, ingest: 8.78 MB, 0.01 MB/s#012Interval WAL: 1328 writes, 1328 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     55.6      1.33              0.21        44    0.030       0      0       0.0       0.0#012  L6      1/0    8.98 MB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   5.0    103.6     88.1      4.20              1.07        43    0.098    276K    23K       0.0       0.0#012 Sum      1/0    8.98 MB   0.0      0.4     0.1      0.4       0.4      0.1       0.0   6.0     78.6     80.2      5.53              1.28        87    0.064    276K    23K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   7.3     60.4     60.3      0.74              0.13         8    0.093     33K   2034       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   0.0    103.6     88.1      4.20              1.07        43    0.098    276K    23K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     55.7      1.33              0.21        43    0.031       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 5400.0 total, 600.0 interval#012Flush(GB): cumulative 0.072, interval 0.006#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.43 GB write, 0.08 MB/s write, 0.42 GB read, 0.08 MB/s read, 5.5 seconds#012Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561a1af858d0#2 capacity: 304.00 MB usage: 49.66 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000326 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3116,47.66 MB,15.6781%) FilterBlock(88,767.17 KB,0.246445%) IndexBlock(88,1.25 MB,0.411139%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Feb 25 08:19:12 np0005629333 nova_compute[244014]: 2026-02-25 13:19:12.956 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:19:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2980: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:13 np0005629333 nova_compute[244014]: 2026-02-25 13:19:13.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:19:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:19:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2981: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:16 np0005629333 nova_compute[244014]: 2026-02-25 13:19:16.098 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:19:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2982: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:17 np0005629333 nova_compute[244014]: 2026-02-25 13:19:17.960 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:19:18 np0005629333 podman[394693]: 2026-02-25 13:19:18.719660117 +0000 UTC m=+0.059118329 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 08:19:18 np0005629333 podman[394694]: 2026-02-25 13:19:18.741597755 +0000 UTC m=+0.078225647 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, io.buildah.version=1.43.0)
Feb 25 08:19:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:19:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2983: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:21 np0005629333 nova_compute[244014]: 2026-02-25 13:19:21.101 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:19:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2984: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:23 np0005629333 nova_compute[244014]: 2026-02-25 13:19:23.009 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:19:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2985: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:19:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2986: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:26 np0005629333 nova_compute[244014]: 2026-02-25 13:19:26.101 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:19:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2987: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:28 np0005629333 nova_compute[244014]: 2026-02-25 13:19:28.012 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:19:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:19:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2988: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:31 np0005629333 nova_compute[244014]: 2026-02-25 13:19:31.102 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:19:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:19:31
Feb 25 08:19:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:19:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:19:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'images', '.mgr', 'vms', 'default.rgw.meta', 'volumes', '.rgw.root', 'default.rgw.log']
Feb 25 08:19:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:19:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2989: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:19:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:19:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:19:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:19:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:19:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:19:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:19:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:19:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:19:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:19:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:19:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:19:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:19:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:19:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:19:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:19:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:19:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:19:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:19:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:19:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:19:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:19:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:19:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:19:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:19:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:19:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:19:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:19:33 np0005629333 nova_compute[244014]: 2026-02-25 13:19:33.015 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:19:33 np0005629333 podman[394882]: 2026-02-25 13:19:33.187600952 +0000 UTC m=+0.109219741 container create 7c07a6edb2a935601e1a5da5369e4ad2f9514bee5b77a925f6cc76b5b52ed6a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:19:33 np0005629333 podman[394882]: 2026-02-25 13:19:33.103445269 +0000 UTC m=+0.025064068 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:19:33 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:19:33 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:19:33 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:19:33 np0005629333 systemd[1]: Started libpod-conmon-7c07a6edb2a935601e1a5da5369e4ad2f9514bee5b77a925f6cc76b5b52ed6a4.scope.
Feb 25 08:19:33 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:19:33 np0005629333 podman[394882]: 2026-02-25 13:19:33.414496151 +0000 UTC m=+0.336114990 container init 7c07a6edb2a935601e1a5da5369e4ad2f9514bee5b77a925f6cc76b5b52ed6a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:19:33 np0005629333 podman[394882]: 2026-02-25 13:19:33.425226093 +0000 UTC m=+0.346844852 container start 7c07a6edb2a935601e1a5da5369e4ad2f9514bee5b77a925f6cc76b5b52ed6a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 08:19:33 np0005629333 great_taussig[394898]: 167 167
Feb 25 08:19:33 np0005629333 systemd[1]: libpod-7c07a6edb2a935601e1a5da5369e4ad2f9514bee5b77a925f6cc76b5b52ed6a4.scope: Deactivated successfully.
Feb 25 08:19:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2990: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:33 np0005629333 podman[394882]: 2026-02-25 13:19:33.469662486 +0000 UTC m=+0.391281315 container attach 7c07a6edb2a935601e1a5da5369e4ad2f9514bee5b77a925f6cc76b5b52ed6a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:19:33 np0005629333 podman[394882]: 2026-02-25 13:19:33.470289234 +0000 UTC m=+0.391908023 container died 7c07a6edb2a935601e1a5da5369e4ad2f9514bee5b77a925f6cc76b5b52ed6a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Feb 25 08:19:33 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1f185d3ec9a2d61050ceadb2f38700487c0f0836e0f0d713b5deedb3a5b5e82b-merged.mount: Deactivated successfully.
Feb 25 08:19:33 np0005629333 podman[394882]: 2026-02-25 13:19:33.903467919 +0000 UTC m=+0.825086708 container remove 7c07a6edb2a935601e1a5da5369e4ad2f9514bee5b77a925f6cc76b5b52ed6a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 08:19:33 np0005629333 systemd[1]: libpod-conmon-7c07a6edb2a935601e1a5da5369e4ad2f9514bee5b77a925f6cc76b5b52ed6a4.scope: Deactivated successfully.
Feb 25 08:19:34 np0005629333 podman[394924]: 2026-02-25 13:19:34.129597215 +0000 UTC m=+0.079842332 container create e914fa927d1b84935281a850394afd68ab6b83872d22d98b7ae7b1d290cbd952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_colden, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 08:19:34 np0005629333 podman[394924]: 2026-02-25 13:19:34.080639505 +0000 UTC m=+0.030884622 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:19:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:19:34 np0005629333 systemd[1]: Started libpod-conmon-e914fa927d1b84935281a850394afd68ab6b83872d22d98b7ae7b1d290cbd952.scope.
Feb 25 08:19:34 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:19:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/284ff02fdaf8b75f13e2fef2803575bb4beafd1b456bf7f87bf90f29918f12c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:19:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/284ff02fdaf8b75f13e2fef2803575bb4beafd1b456bf7f87bf90f29918f12c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:19:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/284ff02fdaf8b75f13e2fef2803575bb4beafd1b456bf7f87bf90f29918f12c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:19:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/284ff02fdaf8b75f13e2fef2803575bb4beafd1b456bf7f87bf90f29918f12c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:19:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/284ff02fdaf8b75f13e2fef2803575bb4beafd1b456bf7f87bf90f29918f12c1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:19:34 np0005629333 podman[394924]: 2026-02-25 13:19:34.283363672 +0000 UTC m=+0.233608819 container init e914fa927d1b84935281a850394afd68ab6b83872d22d98b7ae7b1d290cbd952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_colden, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 08:19:34 np0005629333 podman[394924]: 2026-02-25 13:19:34.292911651 +0000 UTC m=+0.243156768 container start e914fa927d1b84935281a850394afd68ab6b83872d22d98b7ae7b1d290cbd952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_colden, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 08:19:34 np0005629333 podman[394924]: 2026-02-25 13:19:34.342992493 +0000 UTC m=+0.293237630 container attach e914fa927d1b84935281a850394afd68ab6b83872d22d98b7ae7b1d290cbd952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_colden, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 08:19:34 np0005629333 optimistic_colden[394941]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:19:34 np0005629333 optimistic_colden[394941]: --> All data devices are unavailable
Feb 25 08:19:34 np0005629333 systemd[1]: libpod-e914fa927d1b84935281a850394afd68ab6b83872d22d98b7ae7b1d290cbd952.scope: Deactivated successfully.
Feb 25 08:19:34 np0005629333 podman[394924]: 2026-02-25 13:19:34.761896245 +0000 UTC m=+0.712141412 container died e914fa927d1b84935281a850394afd68ab6b83872d22d98b7ae7b1d290cbd952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_colden, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:19:34 np0005629333 systemd[1]: var-lib-containers-storage-overlay-284ff02fdaf8b75f13e2fef2803575bb4beafd1b456bf7f87bf90f29918f12c1-merged.mount: Deactivated successfully.
Feb 25 08:19:35 np0005629333 podman[394924]: 2026-02-25 13:19:35.082769683 +0000 UTC m=+1.033014830 container remove e914fa927d1b84935281a850394afd68ab6b83872d22d98b7ae7b1d290cbd952 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_colden, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:19:35 np0005629333 systemd[1]: libpod-conmon-e914fa927d1b84935281a850394afd68ab6b83872d22d98b7ae7b1d290cbd952.scope: Deactivated successfully.
Feb 25 08:19:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2991: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:35 np0005629333 podman[395037]: 2026-02-25 13:19:35.576914617 +0000 UTC m=+0.037019835 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:19:35 np0005629333 podman[395037]: 2026-02-25 13:19:35.725664082 +0000 UTC m=+0.185769240 container create 7fbef5b22fe24e8ca78ce81c7484dde8ee1f04b0107db5c24eb8c23e99a0fa32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_visvesvaraya, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 08:19:35 np0005629333 systemd[1]: Started libpod-conmon-7fbef5b22fe24e8ca78ce81c7484dde8ee1f04b0107db5c24eb8c23e99a0fa32.scope.
Feb 25 08:19:35 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:19:36 np0005629333 podman[395037]: 2026-02-25 13:19:36.069904089 +0000 UTC m=+0.530009267 container init 7fbef5b22fe24e8ca78ce81c7484dde8ee1f04b0107db5c24eb8c23e99a0fa32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 08:19:36 np0005629333 podman[395037]: 2026-02-25 13:19:36.077498223 +0000 UTC m=+0.537603381 container start 7fbef5b22fe24e8ca78ce81c7484dde8ee1f04b0107db5c24eb8c23e99a0fa32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:19:36 np0005629333 nice_visvesvaraya[395054]: 167 167
Feb 25 08:19:36 np0005629333 systemd[1]: libpod-7fbef5b22fe24e8ca78ce81c7484dde8ee1f04b0107db5c24eb8c23e99a0fa32.scope: Deactivated successfully.
Feb 25 08:19:36 np0005629333 nova_compute[244014]: 2026-02-25 13:19:36.103 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:19:36 np0005629333 podman[395037]: 2026-02-25 13:19:36.189682177 +0000 UTC m=+0.649787385 container attach 7fbef5b22fe24e8ca78ce81c7484dde8ee1f04b0107db5c24eb8c23e99a0fa32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:19:36 np0005629333 podman[395037]: 2026-02-25 13:19:36.190131089 +0000 UTC m=+0.650236237 container died 7fbef5b22fe24e8ca78ce81c7484dde8ee1f04b0107db5c24eb8c23e99a0fa32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:19:36 np0005629333 systemd[1]: var-lib-containers-storage-overlay-9bc090a5629e3e2998eaa45eaf766acf265a7c60a0b6c578bfa2dad44e079fa4-merged.mount: Deactivated successfully.
Feb 25 08:19:36 np0005629333 podman[395037]: 2026-02-25 13:19:36.477922544 +0000 UTC m=+0.938027702 container remove 7fbef5b22fe24e8ca78ce81c7484dde8ee1f04b0107db5c24eb8c23e99a0fa32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 08:19:36 np0005629333 systemd[1]: libpod-conmon-7fbef5b22fe24e8ca78ce81c7484dde8ee1f04b0107db5c24eb8c23e99a0fa32.scope: Deactivated successfully.
Feb 25 08:19:36 np0005629333 podman[395079]: 2026-02-25 13:19:36.631006491 +0000 UTC m=+0.032026624 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:19:36 np0005629333 podman[395079]: 2026-02-25 13:19:36.760400569 +0000 UTC m=+0.161420652 container create 036194e9c1dac6be5dec3f600ae96a3dcdb88d654b978161da2fe66596300189 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sanderson, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:19:36 np0005629333 systemd[1]: Started libpod-conmon-036194e9c1dac6be5dec3f600ae96a3dcdb88d654b978161da2fe66596300189.scope.
Feb 25 08:19:36 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:19:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b601dc05a3886d339d99154bc19f0c2b31760a46c8e76c8d3a0e266f663032ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:19:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b601dc05a3886d339d99154bc19f0c2b31760a46c8e76c8d3a0e266f663032ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:19:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b601dc05a3886d339d99154bc19f0c2b31760a46c8e76c8d3a0e266f663032ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:19:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b601dc05a3886d339d99154bc19f0c2b31760a46c8e76c8d3a0e266f663032ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:19:36 np0005629333 podman[395079]: 2026-02-25 13:19:36.961306375 +0000 UTC m=+0.362326438 container init 036194e9c1dac6be5dec3f600ae96a3dcdb88d654b978161da2fe66596300189 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Feb 25 08:19:36 np0005629333 podman[395079]: 2026-02-25 13:19:36.966238324 +0000 UTC m=+0.367258367 container start 036194e9c1dac6be5dec3f600ae96a3dcdb88d654b978161da2fe66596300189 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sanderson, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:19:36 np0005629333 podman[395079]: 2026-02-25 13:19:36.998239137 +0000 UTC m=+0.399259180 container attach 036194e9c1dac6be5dec3f600ae96a3dcdb88d654b978161da2fe66596300189 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sanderson, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]: {
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:    "0": [
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:        {
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "devices": [
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "/dev/loop3"
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            ],
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "lv_name": "ceph_lv0",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "lv_size": "21470642176",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "name": "ceph_lv0",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "tags": {
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.cluster_name": "ceph",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.crush_device_class": "",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.encrypted": "0",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.objectstore": "bluestore",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.osd_id": "0",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.type": "block",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.vdo": "0",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.with_tpm": "0"
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            },
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "type": "block",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "vg_name": "ceph_vg0"
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:        }
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:    ],
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:    "1": [
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:        {
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "devices": [
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "/dev/loop4"
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            ],
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "lv_name": "ceph_lv1",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "lv_size": "21470642176",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "name": "ceph_lv1",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "tags": {
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.cluster_name": "ceph",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.crush_device_class": "",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.encrypted": "0",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.objectstore": "bluestore",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.osd_id": "1",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.type": "block",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.vdo": "0",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.with_tpm": "0"
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            },
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "type": "block",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "vg_name": "ceph_vg1"
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:        }
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:    ],
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:    "2": [
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:        {
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "devices": [
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "/dev/loop5"
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            ],
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "lv_name": "ceph_lv2",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "lv_size": "21470642176",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "name": "ceph_lv2",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "tags": {
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.cluster_name": "ceph",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.crush_device_class": "",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.encrypted": "0",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.objectstore": "bluestore",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.osd_id": "2",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.type": "block",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.vdo": "0",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:                "ceph.with_tpm": "0"
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            },
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "type": "block",
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:            "vg_name": "ceph_vg2"
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:        }
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]:    ]
Feb 25 08:19:37 np0005629333 ecstatic_sanderson[395096]: }
Feb 25 08:19:37 np0005629333 systemd[1]: libpod-036194e9c1dac6be5dec3f600ae96a3dcdb88d654b978161da2fe66596300189.scope: Deactivated successfully.
Feb 25 08:19:37 np0005629333 podman[395079]: 2026-02-25 13:19:37.292444993 +0000 UTC m=+0.693465066 container died 036194e9c1dac6be5dec3f600ae96a3dcdb88d654b978161da2fe66596300189 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sanderson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 08:19:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2992: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:37 np0005629333 systemd[1]: var-lib-containers-storage-overlay-b601dc05a3886d339d99154bc19f0c2b31760a46c8e76c8d3a0e266f663032ce-merged.mount: Deactivated successfully.
Feb 25 08:19:38 np0005629333 nova_compute[244014]: 2026-02-25 13:19:38.020 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:19:38 np0005629333 podman[395079]: 2026-02-25 13:19:38.240120545 +0000 UTC m=+1.641140628 container remove 036194e9c1dac6be5dec3f600ae96a3dcdb88d654b978161da2fe66596300189 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_sanderson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 08:19:38 np0005629333 systemd[1]: libpod-conmon-036194e9c1dac6be5dec3f600ae96a3dcdb88d654b978161da2fe66596300189.scope: Deactivated successfully.
Feb 25 08:19:38 np0005629333 podman[395182]: 2026-02-25 13:19:38.731573644 +0000 UTC m=+0.031566032 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:19:38 np0005629333 podman[395182]: 2026-02-25 13:19:38.924069652 +0000 UTC m=+0.224061980 container create adf563610dd6dba85c6691f00751e90f7853a451c5e77e131f366231bfc5460b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 08:19:39 np0005629333 systemd[1]: Started libpod-conmon-adf563610dd6dba85c6691f00751e90f7853a451c5e77e131f366231bfc5460b.scope.
Feb 25 08:19:39 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:19:39 np0005629333 podman[395182]: 2026-02-25 13:19:39.126214532 +0000 UTC m=+0.426206870 container init adf563610dd6dba85c6691f00751e90f7853a451c5e77e131f366231bfc5460b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:19:39 np0005629333 podman[395182]: 2026-02-25 13:19:39.136193493 +0000 UTC m=+0.436185831 container start adf563610dd6dba85c6691f00751e90f7853a451c5e77e131f366231bfc5460b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_feynman, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:19:39 np0005629333 sad_feynman[395198]: 167 167
Feb 25 08:19:39 np0005629333 systemd[1]: libpod-adf563610dd6dba85c6691f00751e90f7853a451c5e77e131f366231bfc5460b.scope: Deactivated successfully.
Feb 25 08:19:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:19:39 np0005629333 podman[395182]: 2026-02-25 13:19:39.198392777 +0000 UTC m=+0.498385105 container attach adf563610dd6dba85c6691f00751e90f7853a451c5e77e131f366231bfc5460b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_feynman, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 08:19:39 np0005629333 podman[395182]: 2026-02-25 13:19:39.198882371 +0000 UTC m=+0.498874709 container died adf563610dd6dba85c6691f00751e90f7853a451c5e77e131f366231bfc5460b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_feynman, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Feb 25 08:19:39 np0005629333 systemd[1]: var-lib-containers-storage-overlay-986ddff3983d1c196f836a3f4e0834d59bd0193cabf42d678b6db63d55eacc33-merged.mount: Deactivated successfully.
Feb 25 08:19:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2993: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:39 np0005629333 podman[395182]: 2026-02-25 13:19:39.612304079 +0000 UTC m=+0.912296377 container remove adf563610dd6dba85c6691f00751e90f7853a451c5e77e131f366231bfc5460b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 08:19:39 np0005629333 systemd[1]: libpod-conmon-adf563610dd6dba85c6691f00751e90f7853a451c5e77e131f366231bfc5460b.scope: Deactivated successfully.
Feb 25 08:19:39 np0005629333 podman[395223]: 2026-02-25 13:19:39.860803077 +0000 UTC m=+0.108270554 container create 44c2498b99b12f07b200e528217dcfa7b3306af0e971f6250be5e5f794ffd6f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 08:19:39 np0005629333 podman[395223]: 2026-02-25 13:19:39.775937894 +0000 UTC m=+0.023405381 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:19:39 np0005629333 systemd[1]: Started libpod-conmon-44c2498b99b12f07b200e528217dcfa7b3306af0e971f6250be5e5f794ffd6f0.scope.
Feb 25 08:19:39 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:19:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098a51ac0d9296898eda4e5f487e7e12ea1fec5b8e27f74088e4c0897f23ff22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:19:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098a51ac0d9296898eda4e5f487e7e12ea1fec5b8e27f74088e4c0897f23ff22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:19:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098a51ac0d9296898eda4e5f487e7e12ea1fec5b8e27f74088e4c0897f23ff22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:19:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/098a51ac0d9296898eda4e5f487e7e12ea1fec5b8e27f74088e4c0897f23ff22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:19:40 np0005629333 podman[395223]: 2026-02-25 13:19:40.075264124 +0000 UTC m=+0.322731621 container init 44c2498b99b12f07b200e528217dcfa7b3306af0e971f6250be5e5f794ffd6f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_maxwell, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:19:40 np0005629333 podman[395223]: 2026-02-25 13:19:40.086685856 +0000 UTC m=+0.334153323 container start 44c2498b99b12f07b200e528217dcfa7b3306af0e971f6250be5e5f794ffd6f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_maxwell, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:19:40 np0005629333 podman[395223]: 2026-02-25 13:19:40.165393246 +0000 UTC m=+0.412860723 container attach 44c2498b99b12f07b200e528217dcfa7b3306af0e971f6250be5e5f794ffd6f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_maxwell, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:19:40 np0005629333 lvm[395319]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:19:40 np0005629333 lvm[395318]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:19:40 np0005629333 lvm[395318]: VG ceph_vg0 finished
Feb 25 08:19:40 np0005629333 lvm[395319]: VG ceph_vg1 finished
Feb 25 08:19:40 np0005629333 lvm[395321]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:19:40 np0005629333 lvm[395321]: VG ceph_vg2 finished
Feb 25 08:19:40 np0005629333 youthful_maxwell[395240]: {}
Feb 25 08:19:41 np0005629333 systemd[1]: libpod-44c2498b99b12f07b200e528217dcfa7b3306af0e971f6250be5e5f794ffd6f0.scope: Deactivated successfully.
Feb 25 08:19:41 np0005629333 systemd[1]: libpod-44c2498b99b12f07b200e528217dcfa7b3306af0e971f6250be5e5f794ffd6f0.scope: Consumed 1.268s CPU time.
Feb 25 08:19:41 np0005629333 podman[395223]: 2026-02-25 13:19:41.013417099 +0000 UTC m=+1.260884566 container died 44c2498b99b12f07b200e528217dcfa7b3306af0e971f6250be5e5f794ffd6f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_maxwell, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:19:41 np0005629333 nova_compute[244014]: 2026-02-25 13:19:41.138 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:19:41 np0005629333 systemd[1]: var-lib-containers-storage-overlay-098a51ac0d9296898eda4e5f487e7e12ea1fec5b8e27f74088e4c0897f23ff22-merged.mount: Deactivated successfully.
Feb 25 08:19:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2994: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:41 np0005629333 podman[395223]: 2026-02-25 13:19:41.500591137 +0000 UTC m=+1.748058604 container remove 44c2498b99b12f07b200e528217dcfa7b3306af0e971f6250be5e5f794ffd6f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:19:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:19:41 np0005629333 systemd[1]: libpod-conmon-44c2498b99b12f07b200e528217dcfa7b3306af0e971f6250be5e5f794ffd6f0.scope: Deactivated successfully.
Feb 25 08:19:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:19:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:19:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:19:42 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:19:42 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:19:43 np0005629333 nova_compute[244014]: 2026-02-25 13:19:43.024 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:19:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:19:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:19:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:19:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:19:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:19:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:19:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:19:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:19:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:19:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:19:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:19:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:19:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:19:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:19:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:19:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:19:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:19:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:19:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:19:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:19:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:19:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:19:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
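Every pg_autoscaler line above fits one formula: the logged pg target equals the pool's usage fraction times its bias times a 300-PG budget, which is then quantized to a power of two (here every pool lands on its current pg_num, so nothing changes). A minimal Python check against three of the logged values follows; reading the 300 as mon_target_pg_per_osd (default 100) times this cluster's 3 OSDs is an assumption, not something the log states.

# Sketch: verify usage * bias * 300 reproduces the logged pg targets.
PG_BUDGET = 300  # assumed: mon_target_pg_per_osd (100) * 3 OSDs

pools = [
    # (pool, usage_fraction, bias, logged_pg_target) -- values from the log
    (".mgr",               7.185749983720779e-06,  1.0, 0.0021557249951162337),
    ("vms",                1.73878357684759e-05,   1.0, 0.005216350730542769),
    ("cephfs.cephfs.meta", 1.3916366864300228e-06, 4.0, 0.0016699640237160273),
]

for name, usage, bias, logged in pools:
    target = usage * bias * PG_BUDGET
    assert abs(target - logged) < 1e-12, name
    print(f"{name}: pg target {target:.6g} matches the log")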
Feb 25 08:19:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2995: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:19:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2996: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:46 np0005629333 nova_compute[244014]: 2026-02-25 13:19:46.139 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:19:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2997: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:19:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3303749402' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:19:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:19:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3303749402' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:19:48 np0005629333 nova_compute[244014]: 2026-02-25 13:19:48.064 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.090474) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025588090548, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 2052, "num_deletes": 251, "total_data_size": 3513323, "memory_usage": 3577360, "flush_reason": "Manual Compaction"}
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025588158759, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 3445434, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61302, "largest_seqno": 63353, "table_properties": {"data_size": 3435980, "index_size": 6011, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18656, "raw_average_key_size": 20, "raw_value_size": 3417321, "raw_average_value_size": 3678, "num_data_blocks": 267, "num_entries": 929, "num_filter_entries": 929, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772025356, "oldest_key_time": 1772025356, "file_creation_time": 1772025588, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 68318 microseconds, and 6417 cpu microseconds.
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.158805) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 3445434 bytes OK
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.158824) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.222556) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.222608) EVENT_LOG_v1 {"time_micros": 1772025588222597, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.222636) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 3504732, prev total WAL file size 3504732, number of live WAL files 2.
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.223829) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(3364KB)], [146(9199KB)]
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025588223941, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 12865640, "oldest_snapshot_seqno": -1}
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 8249 keys, 11104958 bytes, temperature: kUnknown
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025588367440, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 11104958, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11050781, "index_size": 32432, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20677, "raw_key_size": 215365, "raw_average_key_size": 26, "raw_value_size": 10904577, "raw_average_value_size": 1321, "num_data_blocks": 1262, "num_entries": 8249, "num_filter_entries": 8249, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772025588, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.367869) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 11104958 bytes
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.381037) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 89.6 rd, 77.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 9.0 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 8763, records dropped: 514 output_compression: NoCompression
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.381067) EVENT_LOG_v1 {"time_micros": 1772025588381054, "job": 90, "event": "compaction_finished", "compaction_time_micros": 143573, "compaction_time_cpu_micros": 18641, "output_level": 6, "num_output_files": 1, "total_output_size": 11104958, "num_input_records": 8763, "num_output_records": 8249, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025588381618, "job": 90, "event": "table_file_deletion", "file_number": 148}
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025588382882, "job": 90, "event": "table_file_deletion", "file_number": 146}
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.223621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.383017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.383029) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.383033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.383037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:19:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:19:48.383040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
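The amplification figures in the JOB 90 compaction summary above follow directly from the byte counts in the flush and compaction events; a quick arithmetic check:

# Reproduce the JOB 90 amplification figures from the logged byte counts.
l0_in    = 3_445_434    # table #148: the flushed L0 input file
total_in = 12_865_640   # input_data_size from the compaction_started event
out      = 11_104_958   # table #149: the compacted L6 output file

write_amp = out / l0_in             # 3.22 -> logged as write-amplify(3.2)
rw_amp = (total_in + out) / l0_in   # 6.96 -> logged as read-write-amplify(7.0)
print(f"write-amplify {write_amp:.1f}, read-write-amplify {rw_amp:.1f}")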
Feb 25 08:19:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:19:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2998: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:49 np0005629333 podman[395361]: 2026-02-25 13:19:49.763095209 +0000 UTC m=+0.088192398 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 08:19:49 np0005629333 podman[395362]: 2026-02-25 13:19:49.802463 +0000 UTC m=+0.127564289 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 08:19:51 np0005629333 nova_compute[244014]: 2026-02-25 13:19:51.141 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:19:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v2999: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:53 np0005629333 nova_compute[244014]: 2026-02-25 13:19:53.100 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:19:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3000: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:53 np0005629333 nova_compute[244014]: 2026-02-25 13:19:53.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:19:53 np0005629333 nova_compute[244014]: 2026-02-25 13:19:53.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:19:53 np0005629333 nova_compute[244014]: 2026-02-25 13:19:53.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:19:53 np0005629333 nova_compute[244014]: 2026-02-25 13:19:53.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:19:53 np0005629333 nova_compute[244014]: 2026-02-25 13:19:53.905 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:19:53 np0005629333 nova_compute[244014]: 2026-02-25 13:19:53.906 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:19:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:19:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:19:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3425891276' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:19:54 np0005629333 nova_compute[244014]: 2026-02-25 13:19:54.563 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.657s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
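The probe behind this dispatch/audit round trip is a plain ceph df call; a minimal sketch of the same invocation, with the client id and conf path copied verbatim from the log line (the stats/total_avail_bytes keys read back are an assumption about the ceph df JSON schema, not shown in the log):

import json
import subprocess

# Same command nova's resource tracker runs above via oslo_concurrency.
out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True,
).stdout

stats = json.loads(out).get("stats", {})
# Cluster-wide availability feeds the free_disk figure logged moments later.
print(stats.get("total_avail_bytes", 0) / 2**30, "GiB available")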
Feb 25 08:19:54 np0005629333 nova_compute[244014]: 2026-02-25 13:19:54.757 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:19:54 np0005629333 nova_compute[244014]: 2026-02-25 13:19:54.758 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3596MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:19:54 np0005629333 nova_compute[244014]: 2026-02-25 13:19:54.759 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:19:54 np0005629333 nova_compute[244014]: 2026-02-25 13:19:54.759 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:19:54 np0005629333 nova_compute[244014]: 2026-02-25 13:19:54.823 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:19:54 np0005629333 nova_compute[244014]: 2026-02-25 13:19:54.824 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:19:54 np0005629333 nova_compute[244014]: 2026-02-25 13:19:54.838 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:19:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:19:55.060 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:19:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:19:55.061 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:19:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:19:55.061 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:19:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:19:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3331256269' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:19:55 np0005629333 nova_compute[244014]: 2026-02-25 13:19:55.385 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:19:55 np0005629333 nova_compute[244014]: 2026-02-25 13:19:55.391 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:19:55 np0005629333 nova_compute[244014]: 2026-02-25 13:19:55.420 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:19:55 np0005629333 nova_compute[244014]: 2026-02-25 13:19:55.424 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:19:55 np0005629333 nova_compute[244014]: 2026-02-25 13:19:55.425 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:19:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3001: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:56 np0005629333 nova_compute[244014]: 2026-02-25 13:19:56.144 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:19:56 np0005629333 nova_compute[244014]: 2026-02-25 13:19:56.425 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:19:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3002: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:19:57 np0005629333 nova_compute[244014]: 2026-02-25 13:19:57.878 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:19:57 np0005629333 nova_compute[244014]: 2026-02-25 13:19:57.879 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:19:57 np0005629333 nova_compute[244014]: 2026-02-25 13:19:57.879 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:19:57 np0005629333 nova_compute[244014]: 2026-02-25 13:19:57.906 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:19:58 np0005629333 nova_compute[244014]: 2026-02-25 13:19:58.104 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:19:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:19:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3003: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:00 np0005629333 nova_compute[244014]: 2026-02-25 13:20:00.900 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:20:01 np0005629333 nova_compute[244014]: 2026-02-25 13:20:01.145 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:20:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3004: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:20:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:20:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:20:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:20:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:20:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:20:03 np0005629333 nova_compute[244014]: 2026-02-25 13:20:03.141 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:20:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3005: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:03 np0005629333 nova_compute[244014]: 2026-02-25 13:20:03.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:20:03 np0005629333 nova_compute[244014]: 2026-02-25 13:20:03.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:20:03 np0005629333 nova_compute[244014]: 2026-02-25 13:20:03.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 08:20:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:20:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3006: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:06 np0005629333 nova_compute[244014]: 2026-02-25 13:20:06.146 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:20:06 np0005629333 nova_compute[244014]: 2026-02-25 13:20:06.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:20:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3007: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:08 np0005629333 nova_compute[244014]: 2026-02-25 13:20:08.145 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:20:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:20:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3008: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:09 np0005629333 nova_compute[244014]: 2026-02-25 13:20:09.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:20:11 np0005629333 nova_compute[244014]: 2026-02-25 13:20:11.190 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:20:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3009: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:13 np0005629333 nova_compute[244014]: 2026-02-25 13:20:13.148 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:20:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3010: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:20:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3011: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 08:20:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111]
  ** DB Stats **
  Uptime(secs): 5400.1 total, 600.0 interval
  Cumulative writes: 47K writes, 184K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s
  Cumulative WAL: 47K writes, 17K syncs, 2.74 writes per sync, written: 0.19 GB, 0.04 MB/s
  Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
  Interval writes: 262 writes, 636 keys, 262 commit groups, 1.0 writes per commit group, ingest: 0.25 MB, 0.00 MB/s
  Interval WAL: 262 writes, 126 syncs, 2.08 writes per sync, written: 0.00 GB, 0.00 MB/s
  Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 08:20:15 np0005629333 nova_compute[244014]: 2026-02-25 13:20:15.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:20:16 np0005629333 nova_compute[244014]: 2026-02-25 13:20:16.194 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:20:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3012: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:18 np0005629333 nova_compute[244014]: 2026-02-25 13:20:18.150 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:20:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:20:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 08:20:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111]
  ** DB Stats **
  Uptime(secs): 5400.2 total, 600.0 interval
  Cumulative writes: 48K writes, 194K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s
  Cumulative WAL: 48K writes, 17K syncs, 2.76 writes per sync, written: 0.19 GB, 0.04 MB/s
  Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
  Interval writes: 335 writes, 989 keys, 335 commit groups, 1.0 writes per commit group, ingest: 0.39 MB, 0.00 MB/s
  Interval WAL: 335 writes, 158 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s
  Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 08:20:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3013: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:20 np0005629333 podman[395449]: 2026-02-25 13:20:20.727455989 +0000 UTC m=+0.064990515 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 08:20:20 np0005629333 podman[395450]: 2026-02-25 13:20:20.741390662 +0000 UTC m=+0.080722249 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 25 08:20:21 np0005629333 nova_compute[244014]: 2026-02-25 13:20:21.195 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:20:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3014: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:22 np0005629333 nova_compute[244014]: 2026-02-25 13:20:22.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:20:23 np0005629333 nova_compute[244014]: 2026-02-25 13:20:23.153 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:20:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3015: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:20:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 08:20:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111]
Feb 25 08:20:24 np0005629333 ceph-osd[89088]: rocksdb: ** DB Stats **
Feb 25 08:20:24 np0005629333 ceph-osd[89088]: rocksdb: Uptime(secs): 5400.6 total, 600.0 interval
Feb 25 08:20:24 np0005629333 ceph-osd[89088]: rocksdb: Cumulative writes: 37K writes, 149K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
Feb 25 08:20:24 np0005629333 ceph-osd[89088]: rocksdb: Cumulative WAL: 37K writes, 13K syncs, 2.77 writes per sync, written: 0.15 GB, 0.03 MB/s
Feb 25 08:20:24 np0005629333 ceph-osd[89088]: rocksdb: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 08:20:24 np0005629333 ceph-osd[89088]: rocksdb: Interval writes: 312 writes, 919 keys, 312 commit groups, 1.0 writes per commit group, ingest: 0.35 MB, 0.00 MB/s
Feb 25 08:20:24 np0005629333 ceph-osd[89088]: rocksdb: Interval WAL: 312 writes, 148 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s
Feb 25 08:20:24 np0005629333 ceph-osd[89088]: rocksdb: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
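The per-sync ratios in that dump are plain quotients of the raw counters; the cumulative figures are rounded to thousands (37K writes, 13K syncs), but the interval ones can be checked exactly:

    interval_wal_writes = 312
    interval_wal_syncs = 148
    print(round(interval_wal_writes / interval_wal_syncs, 2))  # 2.11, as logged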
Feb 25 08:20:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3016: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:26 np0005629333 nova_compute[244014]: 2026-02-25 13:20:26.198 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:20:27 np0005629333 ceph-mgr[76641]: [devicehealth INFO root] Check health
Feb 25 08:20:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3017: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:28 np0005629333 nova_compute[244014]: 2026-02-25 13:20:28.200 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:20:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:20:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3018: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:20:31
Feb 25 08:20:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:20:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:20:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['.rgw.root', 'vms', 'volumes', '.mgr', 'images', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log']
Feb 25 08:20:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:20:31 np0005629333 nova_compute[244014]: 2026-02-25 13:20:31.202 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:20:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3019: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:20:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:20:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:20:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:20:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:20:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:20:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:20:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:20:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:20:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:20:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:20:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:20:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:20:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:20:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:20:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:20:33 np0005629333 nova_compute[244014]: 2026-02-25 13:20:33.205 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:20:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3020: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:20:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3021: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:36 np0005629333 nova_compute[244014]: 2026-02-25 13:20:36.202 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:20:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3022: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:38 np0005629333 nova_compute[244014]: 2026-02-25 13:20:38.252 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:20:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:20:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3023: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:41 np0005629333 nova_compute[244014]: 2026-02-25 13:20:41.204 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:20:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3024: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:20:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:20:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:20:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:20:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:20:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:20:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:20:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:20:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:20:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:20:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:20:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:20:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:20:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:20:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:20:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:20:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:20:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:20:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:20:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:20:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:20:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:20:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:20:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:20:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:20:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:20:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:20:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:20:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:20:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:20:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:20:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:20:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:20:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:20:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:20:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:20:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:20:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:20:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
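Each pg_autoscaler line above is straight arithmetic: pg target = capacity ratio × bias × cluster PG budget. The budget here works out to 300, consistent with this cluster's 3 OSDs and the default mon_target_pg_per_osd of 100 (an assumption, not read from the log); the result is then quantized to a power of two, and pools stay at their current pg_num when the difference is below the autoscaler's change threshold, which is why nothing moves here. A quick check of the printed figures:

    CLUSTER_PG_BUDGET = 3 * 100  # 3 OSDs x mon_target_pg_per_osd (assumed default 100)

    # (capacity ratio, bias) copied from the mgr lines above
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (1.73878357684759e-05, 1.0),
        "images":             (0.0006714637386478266, 1.0),
        "cephfs.cephfs.meta": (1.3916366864300228e-06, 4.0),
    }
    for pool, (ratio, bias) in pools.items():
        print(pool, ratio * bias * CLUSTER_PG_BUDGET)
    # Reproduces the logged targets: 0.0021557..., 0.0052163...,
    # 0.2014391..., 0.0016699...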
Feb 25 08:20:43 np0005629333 nova_compute[244014]: 2026-02-25 13:20:43.256 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:20:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3025: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:43 np0005629333 podman[395711]: 2026-02-25 13:20:43.513676964 +0000 UTC m=+0.036526632 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:20:43 np0005629333 podman[395711]: 2026-02-25 13:20:43.610406703 +0000 UTC m=+0.133256261 container create 3d3d2466ad02f2123a496c6c6a8b70677ae29b36732363cbf05d3982391a44a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_lichterman, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:20:43 np0005629333 systemd[1]: Started libpod-conmon-3d3d2466ad02f2123a496c6c6a8b70677ae29b36732363cbf05d3982391a44a1.scope.
Feb 25 08:20:43 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:20:43 np0005629333 podman[395711]: 2026-02-25 13:20:43.800473008 +0000 UTC m=+0.323322656 container init 3d3d2466ad02f2123a496c6c6a8b70677ae29b36732363cbf05d3982391a44a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_lichterman, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:20:43 np0005629333 podman[395711]: 2026-02-25 13:20:43.808888605 +0000 UTC m=+0.331738183 container start 3d3d2466ad02f2123a496c6c6a8b70677ae29b36732363cbf05d3982391a44a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_lichterman, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:20:43 np0005629333 systemd[1]: libpod-3d3d2466ad02f2123a496c6c6a8b70677ae29b36732363cbf05d3982391a44a1.scope: Deactivated successfully.
Feb 25 08:20:43 np0005629333 sad_lichterman[395727]: 167 167
Feb 25 08:20:43 np0005629333 conmon[395727]: conmon 3d3d2466ad02f2123a49 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3d3d2466ad02f2123a496c6c6a8b70677ae29b36732363cbf05d3982391a44a1.scope/container/memory.events
Feb 25 08:20:43 np0005629333 podman[395711]: 2026-02-25 13:20:43.856778477 +0000 UTC m=+0.379628125 container attach 3d3d2466ad02f2123a496c6c6a8b70677ae29b36732363cbf05d3982391a44a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_lichterman, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:20:43 np0005629333 podman[395711]: 2026-02-25 13:20:43.857623311 +0000 UTC m=+0.380472919 container died 3d3d2466ad02f2123a496c6c6a8b70677ae29b36732363cbf05d3982391a44a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 08:20:43 np0005629333 systemd[1]: var-lib-containers-storage-overlay-19d50b3a7dfa393d2c008de1680e14bd93c31a181eb10737033cf8eaa16a8f75-merged.mount: Deactivated successfully.
Feb 25 08:20:44 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:20:44 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:20:44 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:20:44 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:20:44 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:20:44 np0005629333 podman[395711]: 2026-02-25 13:20:44.16445588 +0000 UTC m=+0.687305458 container remove 3d3d2466ad02f2123a496c6c6a8b70677ae29b36732363cbf05d3982391a44a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:20:44 np0005629333 systemd[1]: libpod-conmon-3d3d2466ad02f2123a496c6c6a8b70677ae29b36732363cbf05d3982391a44a1.scope: Deactivated successfully.
Feb 25 08:20:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:20:44 np0005629333 podman[395752]: 2026-02-25 13:20:44.404434803 +0000 UTC m=+0.099767507 container create f6ff3696a5d083e697247938605a7dda87f9084e687fd288da1819c5a1d6dc32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rhodes, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 08:20:44 np0005629333 podman[395752]: 2026-02-25 13:20:44.340118338 +0000 UTC m=+0.035451112 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:20:44 np0005629333 systemd[1]: Started libpod-conmon-f6ff3696a5d083e697247938605a7dda87f9084e687fd288da1819c5a1d6dc32.scope.
Feb 25 08:20:44 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:20:44 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbe0828d6b291410b7637cb3850692f2bb8af4d6578f4bcb30ca087180d223e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:20:44 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbe0828d6b291410b7637cb3850692f2bb8af4d6578f4bcb30ca087180d223e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:20:44 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbe0828d6b291410b7637cb3850692f2bb8af4d6578f4bcb30ca087180d223e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:20:44 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbe0828d6b291410b7637cb3850692f2bb8af4d6578f4bcb30ca087180d223e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:20:44 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbe0828d6b291410b7637cb3850692f2bb8af4d6578f4bcb30ca087180d223e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
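The xfs remount messages are informational: the limit 0x7fffffff the kernel prints is the 32-bit signed cutoff in seconds since the Unix epoch, i.e. 2038-01-19. Verifying the cutoff:

    from datetime import datetime, timezone
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00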
Feb 25 08:20:44 np0005629333 podman[395752]: 2026-02-25 13:20:44.576088117 +0000 UTC m=+0.271420901 container init f6ff3696a5d083e697247938605a7dda87f9084e687fd288da1819c5a1d6dc32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:20:44 np0005629333 podman[395752]: 2026-02-25 13:20:44.583723523 +0000 UTC m=+0.279056207 container start f6ff3696a5d083e697247938605a7dda87f9084e687fd288da1819c5a1d6dc32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rhodes, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:20:44 np0005629333 podman[395752]: 2026-02-25 13:20:44.608178423 +0000 UTC m=+0.303511187 container attach f6ff3696a5d083e697247938605a7dda87f9084e687fd288da1819c5a1d6dc32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rhodes, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 08:20:45 np0005629333 compassionate_rhodes[395770]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:20:45 np0005629333 compassionate_rhodes[395770]: --> All data devices are unavailable
Feb 25 08:20:45 np0005629333 systemd[1]: libpod-f6ff3696a5d083e697247938605a7dda87f9084e687fd288da1819c5a1d6dc32.scope: Deactivated successfully.
Feb 25 08:20:45 np0005629333 podman[395752]: 2026-02-25 13:20:45.072120257 +0000 UTC m=+0.767452951 container died f6ff3696a5d083e697247938605a7dda87f9084e687fd288da1819c5a1d6dc32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:20:45 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6fbe0828d6b291410b7637cb3850692f2bb8af4d6578f4bcb30ca087180d223e-merged.mount: Deactivated successfully.
Feb 25 08:20:45 np0005629333 podman[395752]: 2026-02-25 13:20:45.256184001 +0000 UTC m=+0.951516715 container remove f6ff3696a5d083e697247938605a7dda87f9084e687fd288da1819c5a1d6dc32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_rhodes, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 08:20:45 np0005629333 systemd[1]: libpod-conmon-f6ff3696a5d083e697247938605a7dda87f9084e687fd288da1819c5a1d6dc32.scope: Deactivated successfully.
Feb 25 08:20:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3026: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:45 np0005629333 podman[395866]: 2026-02-25 13:20:45.776249339 +0000 UTC m=+0.082122349 container create 497b67b9c2bb0e1ce0f4d6be1032f65520b6999008d24a5ea9547a88e8cf38d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_babbage, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:20:45 np0005629333 podman[395866]: 2026-02-25 13:20:45.720376332 +0000 UTC m=+0.026249382 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:20:45 np0005629333 systemd[1]: Started libpod-conmon-497b67b9c2bb0e1ce0f4d6be1032f65520b6999008d24a5ea9547a88e8cf38d6.scope.
Feb 25 08:20:45 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:20:45 np0005629333 podman[395866]: 2026-02-25 13:20:45.901136374 +0000 UTC m=+0.207009444 container init 497b67b9c2bb0e1ce0f4d6be1032f65520b6999008d24a5ea9547a88e8cf38d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_babbage, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True)
Feb 25 08:20:45 np0005629333 podman[395866]: 2026-02-25 13:20:45.909297534 +0000 UTC m=+0.215170544 container start 497b67b9c2bb0e1ce0f4d6be1032f65520b6999008d24a5ea9547a88e8cf38d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 08:20:45 np0005629333 awesome_babbage[395883]: 167 167
Feb 25 08:20:45 np0005629333 systemd[1]: libpod-497b67b9c2bb0e1ce0f4d6be1032f65520b6999008d24a5ea9547a88e8cf38d6.scope: Deactivated successfully.
Feb 25 08:20:45 np0005629333 podman[395866]: 2026-02-25 13:20:45.93254318 +0000 UTC m=+0.238416240 container attach 497b67b9c2bb0e1ce0f4d6be1032f65520b6999008d24a5ea9547a88e8cf38d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_babbage, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:20:45 np0005629333 podman[395866]: 2026-02-25 13:20:45.933582289 +0000 UTC m=+0.239455369 container died 497b67b9c2bb0e1ce0f4d6be1032f65520b6999008d24a5ea9547a88e8cf38d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_babbage, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:20:46 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e68432b294e11086fed4ac9c4675563975d7c2d943018602cbce70925fe07657-merged.mount: Deactivated successfully.
Feb 25 08:20:46 np0005629333 podman[395866]: 2026-02-25 13:20:46.104897764 +0000 UTC m=+0.410770784 container remove 497b67b9c2bb0e1ce0f4d6be1032f65520b6999008d24a5ea9547a88e8cf38d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 08:20:46 np0005629333 systemd[1]: libpod-conmon-497b67b9c2bb0e1ce0f4d6be1032f65520b6999008d24a5ea9547a88e8cf38d6.scope: Deactivated successfully.
Feb 25 08:20:46 np0005629333 nova_compute[244014]: 2026-02-25 13:20:46.206 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:20:46 np0005629333 podman[395907]: 2026-02-25 13:20:46.34289043 +0000 UTC m=+0.096430092 container create 442f30454d78c932d6e0a52bc5b70f4beacee5482865d931bc2380bdbaa0d97d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_goldstine, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 08:20:46 np0005629333 podman[395907]: 2026-02-25 13:20:46.281984671 +0000 UTC m=+0.035524383 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:20:46 np0005629333 systemd[1]: Started libpod-conmon-442f30454d78c932d6e0a52bc5b70f4beacee5482865d931bc2380bdbaa0d97d.scope.
Feb 25 08:20:46 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:20:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/468b3043ed79b771a2f0f280411a0d3d849cd5d649d56e857d8e150a4480095c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:20:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/468b3043ed79b771a2f0f280411a0d3d849cd5d649d56e857d8e150a4480095c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:20:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/468b3043ed79b771a2f0f280411a0d3d849cd5d649d56e857d8e150a4480095c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:20:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/468b3043ed79b771a2f0f280411a0d3d849cd5d649d56e857d8e150a4480095c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:20:46 np0005629333 podman[395907]: 2026-02-25 13:20:46.504558483 +0000 UTC m=+0.258098125 container init 442f30454d78c932d6e0a52bc5b70f4beacee5482865d931bc2380bdbaa0d97d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 08:20:46 np0005629333 podman[395907]: 2026-02-25 13:20:46.513968218 +0000 UTC m=+0.267507850 container start 442f30454d78c932d6e0a52bc5b70f4beacee5482865d931bc2380bdbaa0d97d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_goldstine, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:20:46 np0005629333 podman[395907]: 2026-02-25 13:20:46.539775877 +0000 UTC m=+0.293315509 container attach 442f30454d78c932d6e0a52bc5b70f4beacee5482865d931bc2380bdbaa0d97d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]: {
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:    "0": [
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:        {
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "devices": [
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "/dev/loop3"
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            ],
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "lv_name": "ceph_lv0",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "lv_size": "21470642176",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "name": "ceph_lv0",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "tags": {
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.cluster_name": "ceph",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.crush_device_class": "",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.encrypted": "0",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.objectstore": "bluestore",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.osd_id": "0",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.type": "block",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.vdo": "0",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.with_tpm": "0"
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            },
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "type": "block",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "vg_name": "ceph_vg0"
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:        }
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:    ],
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:    "1": [
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:        {
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "devices": [
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "/dev/loop4"
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            ],
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "lv_name": "ceph_lv1",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "lv_size": "21470642176",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "name": "ceph_lv1",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "tags": {
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.cluster_name": "ceph",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.crush_device_class": "",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.encrypted": "0",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.objectstore": "bluestore",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.osd_id": "1",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.type": "block",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.vdo": "0",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.with_tpm": "0"
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            },
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "type": "block",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "vg_name": "ceph_vg1"
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:        }
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:    ],
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:    "2": [
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:        {
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "devices": [
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "/dev/loop5"
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            ],
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "lv_name": "ceph_lv2",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "lv_size": "21470642176",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "name": "ceph_lv2",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "tags": {
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.cluster_name": "ceph",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.crush_device_class": "",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.encrypted": "0",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.objectstore": "bluestore",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.osd_id": "2",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.type": "block",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.vdo": "0",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:                "ceph.with_tpm": "0"
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            },
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "type": "block",
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:            "vg_name": "ceph_vg2"
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:        }
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]:    ]
Feb 25 08:20:46 np0005629333 optimistic_goldstine[395924]: }
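The JSON printed by optimistic_goldstine maps each OSD id to its backing LVM stack; this is the shape of ceph-volume lvm list --format json output, which cephadm runs in these short-lived containers. The three ~20 GiB LVs on /dev/loop3-5 account for the 60 GiB total the pgmap lines report. A sketch that flattens it into one line per OSD, assuming the block has been saved to lvm_list.json (an illustrative filename):

    import json

    with open("lvm_list.json") as fh:  # the JSON block above, saved to a file
        osds = json.load(fh)

    for osd_id in sorted(osds, key=int):
        for lv in osds[osd_id]:
            gib = int(lv["lv_size"]) / 2**30
            print(f"osd.{osd_id}  {lv['lv_path']}  on {lv['devices'][0]}  "
                  f"{gib:.0f} GiB  fsid {lv['tags']['ceph.osd_fsid']}")
    # osd.0  /dev/ceph_vg0/ceph_lv0  on /dev/loop3  20 GiB  fsid d19afe3c-...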
Feb 25 08:20:46 np0005629333 systemd[1]: libpod-442f30454d78c932d6e0a52bc5b70f4beacee5482865d931bc2380bdbaa0d97d.scope: Deactivated successfully.
Feb 25 08:20:46 np0005629333 podman[395907]: 2026-02-25 13:20:46.781107648 +0000 UTC m=+0.534647300 container died 442f30454d78c932d6e0a52bc5b70f4beacee5482865d931bc2380bdbaa0d97d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:20:46 np0005629333 systemd[1]: var-lib-containers-storage-overlay-468b3043ed79b771a2f0f280411a0d3d849cd5d649d56e857d8e150a4480095c-merged.mount: Deactivated successfully.
Feb 25 08:20:46 np0005629333 podman[395907]: 2026-02-25 13:20:46.945571879 +0000 UTC m=+0.699111541 container remove 442f30454d78c932d6e0a52bc5b70f4beacee5482865d931bc2380bdbaa0d97d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:20:46 np0005629333 systemd[1]: libpod-conmon-442f30454d78c932d6e0a52bc5b70f4beacee5482865d931bc2380bdbaa0d97d.scope: Deactivated successfully.
Feb 25 08:20:47 np0005629333 podman[396009]: 2026-02-25 13:20:47.409041989 +0000 UTC m=+0.063828312 container create 006b6c367cc701309cceaa92ca5db4baeb0da43a367658a57fc4444043d2a5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Feb 25 08:20:47 np0005629333 podman[396009]: 2026-02-25 13:20:47.378105897 +0000 UTC m=+0.032892220 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:20:47 np0005629333 systemd[1]: Started libpod-conmon-006b6c367cc701309cceaa92ca5db4baeb0da43a367658a57fc4444043d2a5c2.scope.
Feb 25 08:20:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3027: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:47 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:20:47 np0005629333 podman[396009]: 2026-02-25 13:20:47.543109083 +0000 UTC m=+0.197895376 container init 006b6c367cc701309cceaa92ca5db4baeb0da43a367658a57fc4444043d2a5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 08:20:47 np0005629333 podman[396009]: 2026-02-25 13:20:47.552973382 +0000 UTC m=+0.207759695 container start 006b6c367cc701309cceaa92ca5db4baeb0da43a367658a57fc4444043d2a5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_borg, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 08:20:47 np0005629333 confident_borg[396024]: 167 167
Feb 25 08:20:47 np0005629333 systemd[1]: libpod-006b6c367cc701309cceaa92ca5db4baeb0da43a367658a57fc4444043d2a5c2.scope: Deactivated successfully.
Feb 25 08:20:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:20:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4236125798' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:20:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:20:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4236125798' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
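The two audit entries above show client.openstack dispatching the `df` and `osd pool get-quota` monitor commands as raw JSON payloads, which is what the ceph CLI builds internally. A minimal sketch issuing the identical payloads through the python-rados binding, assuming the same /etc/ceph/ceph.conf and a readable client.openstack keyring:

```python
# Sketch: send the same mon commands seen in the audit log via python-rados.
# Assumes /etc/ceph/ceph.conf and the client.openstack keyring are readable.
import json
import rados

with rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack") as cluster:
    for cmd in (
        {"prefix": "df", "format": "json"},
        {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"},
    ):
        # mon_command returns (retcode, output buffer, status string).
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
        print(cmd["prefix"], "->", ret, json.loads(out or b"{}"))
```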
Feb 25 08:20:47 np0005629333 podman[396009]: 2026-02-25 13:20:47.579966584 +0000 UTC m=+0.234752867 container attach 006b6c367cc701309cceaa92ca5db4baeb0da43a367658a57fc4444043d2a5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 08:20:47 np0005629333 podman[396009]: 2026-02-25 13:20:47.581471696 +0000 UTC m=+0.236257979 container died 006b6c367cc701309cceaa92ca5db4baeb0da43a367658a57fc4444043d2a5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_borg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 08:20:47 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ad56cb9785c65fccfb29b75fc1375afca10c3af7b51e9ecf291399e846e77502-merged.mount: Deactivated successfully.
Feb 25 08:20:47 np0005629333 podman[396009]: 2026-02-25 13:20:47.810967313 +0000 UTC m=+0.465753636 container remove 006b6c367cc701309cceaa92ca5db4baeb0da43a367658a57fc4444043d2a5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_borg, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 08:20:47 np0005629333 systemd[1]: libpod-conmon-006b6c367cc701309cceaa92ca5db4baeb0da43a367658a57fc4444043d2a5c2.scope: Deactivated successfully.
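The confident_borg container's only output above was "167 167", which matches the fixed ceph uid:gid (167:167) used in RHEL/CentOS-family ceph images; cephadm runs short probes like this to discover ownership inside the image. A sketch of what such a probe could boil down to; the stat entrypoint and target path are assumptions for illustration, and only the image digest is taken from the log:

```python
# Sketch (assumed probe): print the uid/gid owning a path inside the ceph
# image, which would yield "167 167" as in the log. The entrypoint and path
# are hypothetical; the image digest is the one logged above.
import subprocess

image = ("quay.io/ceph/ceph@sha256:"
         "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
print(subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat", image,
     "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True,
).stdout.strip())
```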
Feb 25 08:20:48 np0005629333 podman[396051]: 2026-02-25 13:20:48.011340608 +0000 UTC m=+0.076095829 container create 526f99a8cabf57ac1f98664ea7ac760accd4aed5e77c90b9b67d79c9d90c7942 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_snyder, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 08:20:48 np0005629333 podman[396051]: 2026-02-25 13:20:47.961732308 +0000 UTC m=+0.026487549 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:20:48 np0005629333 systemd[1]: Started libpod-conmon-526f99a8cabf57ac1f98664ea7ac760accd4aed5e77c90b9b67d79c9d90c7942.scope.
Feb 25 08:20:48 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:20:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1742874af25eaa16ea953606d72a7a094ed9d97248746d9e9da7980e1dce53c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:20:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1742874af25eaa16ea953606d72a7a094ed9d97248746d9e9da7980e1dce53c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:20:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1742874af25eaa16ea953606d72a7a094ed9d97248746d9e9da7980e1dce53c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:20:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1742874af25eaa16ea953606d72a7a094ed9d97248746d9e9da7980e1dce53c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
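The four kernel warnings above fire because the xfs filesystem backing these overlay mounts stores timestamps as signed 32-bit seconds (it lacks the bigtime feature), so they top out at 0x7fffffff. A quick check of where that limit lands:

```python
# Sketch: the 0x7fffffff ceiling in the xfs warnings is the classic signed
# 32-bit time_t limit.
from datetime import datetime, timezone

print(hex(0x7fffffff), datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
# 0x7fffffff 2038-01-19 03:14:07+00:00
```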
Feb 25 08:20:48 np0005629333 podman[396051]: 2026-02-25 13:20:48.168170214 +0000 UTC m=+0.232925435 container init 526f99a8cabf57ac1f98664ea7ac760accd4aed5e77c90b9b67d79c9d90c7942 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:20:48 np0005629333 podman[396051]: 2026-02-25 13:20:48.178682281 +0000 UTC m=+0.243437472 container start 526f99a8cabf57ac1f98664ea7ac760accd4aed5e77c90b9b67d79c9d90c7942 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_snyder, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:20:48 np0005629333 podman[396051]: 2026-02-25 13:20:48.1938864 +0000 UTC m=+0.258641601 container attach 526f99a8cabf57ac1f98664ea7ac760accd4aed5e77c90b9b67d79c9d90c7942 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_snyder, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:20:48 np0005629333 nova_compute[244014]: 2026-02-25 13:20:48.258 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:20:48 np0005629333 lvm[396146]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:20:48 np0005629333 lvm[396146]: VG ceph_vg1 finished
Feb 25 08:20:48 np0005629333 lvm[396148]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:20:48 np0005629333 lvm[396148]: VG ceph_vg2 finished
Feb 25 08:20:48 np0005629333 lvm[396145]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:20:48 np0005629333 lvm[396145]: VG ceph_vg0 finished
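These lvm messages are event-driven autoactivation: as udev reports each loop-device PV online, lvm checks whether the owning VG now has all of its PVs and marks it complete. A minimal sketch confirming the PV-to-VG mapping the messages imply (loop3/4/5 backing ceph_vg0/1/2), using LVM's standard JSON reporting; it would normally need root:

```python
# Sketch: confirm the PV -> VG mapping implied by the "VG ... is complete"
# messages. "pvs --reportformat json" with pv_name/vg_name is standard LVM
# reporting; typically requires root privileges.
import json
import subprocess

out = subprocess.run(
    ["pvs", "--reportformat", "json", "-o", "pv_name,vg_name"],
    capture_output=True, text=True, check=True,
).stdout
for pv in json.loads(out)["report"][0]["pv"]:
    print(pv["pv_name"], "->", pv["vg_name"])
```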
Feb 25 08:20:49 np0005629333 quizzical_snyder[396067]: {}
Feb 25 08:20:49 np0005629333 systemd[1]: libpod-526f99a8cabf57ac1f98664ea7ac760accd4aed5e77c90b9b67d79c9d90c7942.scope: Deactivated successfully.
Feb 25 08:20:49 np0005629333 podman[396051]: 2026-02-25 13:20:49.027986831 +0000 UTC m=+1.092742122 container died 526f99a8cabf57ac1f98664ea7ac760accd4aed5e77c90b9b67d79c9d90c7942 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_snyder, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:20:49 np0005629333 systemd[1]: libpod-526f99a8cabf57ac1f98664ea7ac760accd4aed5e77c90b9b67d79c9d90c7942.scope: Consumed 1.099s CPU time.
Feb 25 08:20:49 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1742874af25eaa16ea953606d72a7a094ed9d97248746d9e9da7980e1dce53c6-merged.mount: Deactivated successfully.
Feb 25 08:20:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:20:49 np0005629333 podman[396051]: 2026-02-25 13:20:49.243645157 +0000 UTC m=+1.308400388 container remove 526f99a8cabf57ac1f98664ea7ac760accd4aed5e77c90b9b67d79c9d90c7942 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_snyder, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:20:49 np0005629333 systemd[1]: libpod-conmon-526f99a8cabf57ac1f98664ea7ac760accd4aed5e77c90b9b67d79c9d90c7942.scope: Deactivated successfully.
Feb 25 08:20:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:20:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:20:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:20:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:20:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3028: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:50 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:20:50 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:20:51 np0005629333 nova_compute[244014]: 2026-02-25 13:20:51.209 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:20:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3029: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:51 np0005629333 podman[396190]: 2026-02-25 13:20:51.730897302 +0000 UTC m=+0.065029136 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.43.0)
Feb 25 08:20:51 np0005629333 podman[396191]: 2026-02-25 13:20:51.802330028 +0000 UTC m=+0.134166477 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_id=ovn_controller, tcib_managed=true)
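The two health_status=healthy events above are podman's timer-driven healthchecks executing the 'test': '/openstack/healthcheck' command from each container's config_data. The same check can be fired on demand; a minimal sketch, with the container names taken from the log:

```python
# Sketch: run the same healthcheck podman fires on its timer. "podman
# healthcheck run" executes the container's configured test command and
# exits 0 when healthy.
import subprocess

for name in ("ovn_metadata_agent", "ovn_controller"):
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")
```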
Feb 25 08:20:53 np0005629333 nova_compute[244014]: 2026-02-25 13:20:53.296 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:20:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3030: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:20:54 np0005629333 nova_compute[244014]: 2026-02-25 13:20:54.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:20:54 np0005629333 nova_compute[244014]: 2026-02-25 13:20:54.916 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:20:54 np0005629333 nova_compute[244014]: 2026-02-25 13:20:54.916 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:20:54 np0005629333 nova_compute[244014]: 2026-02-25 13:20:54.916 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:20:54 np0005629333 nova_compute[244014]: 2026-02-25 13:20:54.917 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:20:54 np0005629333 nova_compute[244014]: 2026-02-25 13:20:54.917 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:20:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:20:55.062 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:20:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:20:55.062 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:20:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:20:55.063 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:20:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:20:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2186151979' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:20:55 np0005629333 nova_compute[244014]: 2026-02-25 13:20:55.495 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
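Between the lock lines and this return line, the resource tracker shelled out to `ceph df` exactly as logged (rc 0 in 0.578s). A minimal sketch reproducing that call with oslo.concurrency's processutils, the helper the logged source path points at; the command line is copied verbatim from the log:

```python
# Sketch: replicate the "Running cmd (subprocess)" call from the log with
# oslo.concurrency, the module named in the logged source paths. The
# command and flags are copied verbatim from the log line.
from oslo_concurrency import processutils

stdout, stderr = processutils.execute(
    "ceph", "df", "--format=json",
    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
)
print(stdout[:200])
```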
Feb 25 08:20:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3031: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:55 np0005629333 nova_compute[244014]: 2026-02-25 13:20:55.651 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:20:55 np0005629333 nova_compute[244014]: 2026-02-25 13:20:55.653 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3575MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:20:55 np0005629333 nova_compute[244014]: 2026-02-25 13:20:55.653 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:20:55 np0005629333 nova_compute[244014]: 2026-02-25 13:20:55.653 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:20:55 np0005629333 nova_compute[244014]: 2026-02-25 13:20:55.821 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:20:55 np0005629333 nova_compute[244014]: 2026-02-25 13:20:55.822 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:20:55 np0005629333 nova_compute[244014]: 2026-02-25 13:20:55.905 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 25 08:20:55 np0005629333 nova_compute[244014]: 2026-02-25 13:20:55.974 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 25 08:20:55 np0005629333 nova_compute[244014]: 2026-02-25 13:20:55.974 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 25 08:20:55 np0005629333 nova_compute[244014]: 2026-02-25 13:20:55.990 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 25 08:20:56 np0005629333 nova_compute[244014]: 2026-02-25 13:20:56.009 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 25 08:20:56 np0005629333 nova_compute[244014]: 2026-02-25 13:20:56.050 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:20:56 np0005629333 nova_compute[244014]: 2026-02-25 13:20:56.211 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:20:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:20:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3836706908' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:20:56 np0005629333 nova_compute[244014]: 2026-02-25 13:20:56.620 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:20:56 np0005629333 nova_compute[244014]: 2026-02-25 13:20:56.627 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:20:56 np0005629333 nova_compute[244014]: 2026-02-25 13:20:56.645 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
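The inventory payload is unchanged across refreshes, so placement keeps the same capacity ceilings. Assuming placement's usual capacity rule, (total - reserved) * allocation_ratio (an assumption stated here, not something the log asserts), the logged numbers imply the following schedulable maxima:

```python
# Sketch: capacity implied by the inventory dict logged above, assuming
# placement's capacity rule capacity = (total - reserved) * allocation_ratio.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {cap:g}")  # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
```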
Feb 25 08:20:56 np0005629333 nova_compute[244014]: 2026-02-25 13:20:56.648 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:20:56 np0005629333 nova_compute[244014]: 2026-02-25 13:20:56.648 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:20:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3032: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:20:57 np0005629333 nova_compute[244014]: 2026-02-25 13:20:57.649 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:20:57 np0005629333 nova_compute[244014]: 2026-02-25 13:20:57.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:20:57 np0005629333 nova_compute[244014]: 2026-02-25 13:20:57.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:20:57 np0005629333 nova_compute[244014]: 2026-02-25 13:20:57.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:20:57 np0005629333 nova_compute[244014]: 2026-02-25 13:20:57.893 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:20:58 np0005629333 nova_compute[244014]: 2026-02-25 13:20:58.315 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:20:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:20:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3033: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:21:00 np0005629333 nova_compute[244014]: 2026-02-25 13:21:00.887 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:21:01 np0005629333 nova_compute[244014]: 2026-02-25 13:21:01.213 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:21:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3034: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:21:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:21:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:21:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:21:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:21:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:21:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:21:03 np0005629333 nova_compute[244014]: 2026-02-25 13:21:03.319 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:21:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3035: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:21:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:21:04 np0005629333 nova_compute[244014]: 2026-02-25 13:21:04.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:21:04 np0005629333 nova_compute[244014]: 2026-02-25 13:21:04.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:21:04 np0005629333 nova_compute[244014]: 2026-02-25 13:21:04.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
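The recurring "Running periodic task ..." lines come from oslo.service's PeriodicTasks dispatcher, and _reclaim_queued_deletes shows the usual pattern of a task that stays scheduled but short-circuits on configuration. A minimal, self-contained sketch of that pattern; the 60-second spacing and the option default are illustrative, not nova's real values:

```python
# Sketch: the oslo.service periodic-task pattern behind the "Running
# periodic task ..." log lines. Spacing and option default are illustrative.
from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF
CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])

class Manager(periodic_task.PeriodicTasks):
    @periodic_task.periodic_task(spacing=60)
    def _reclaim_queued_deletes(self, context):
        # Mirrors the logged guard: a non-positive interval skips the work
        # without unscheduling the task.
        if CONF.reclaim_instance_interval <= 0:
            return

mgr = Manager(CONF)
mgr.run_periodic_tasks(context=None)
```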
Feb 25 08:21:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3036: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:21:06 np0005629333 nova_compute[244014]: 2026-02-25 13:21:06.215 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:21:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3037: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:21:08 np0005629333 nova_compute[244014]: 2026-02-25 13:21:08.323 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:21:08 np0005629333 nova_compute[244014]: 2026-02-25 13:21:08.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:21:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:21:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3038: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:21:11 np0005629333 nova_compute[244014]: 2026-02-25 13:21:11.217 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:21:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3039: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:21:11 np0005629333 nova_compute[244014]: 2026-02-25 13:21:11.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:21:13 np0005629333 nova_compute[244014]: 2026-02-25 13:21:13.326 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:21:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3040: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:21:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:21:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3041: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:21:15 np0005629333 nova_compute[244014]: 2026-02-25 13:21:15.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:21:16 np0005629333 nova_compute[244014]: 2026-02-25 13:21:16.218 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:21:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3042: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:21:18 np0005629333 nova_compute[244014]: 2026-02-25 13:21:18.329 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:21:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:21:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3043: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:21:21 np0005629333 nova_compute[244014]: 2026-02-25 13:21:21.219 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:21:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3044: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 7 op/s
Feb 25 08:21:22 np0005629333 podman[396276]: 2026-02-25 13:21:22.735498595 +0000 UTC m=+0.071376475 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 08:21:22 np0005629333 podman[396277]: 2026-02-25 13:21:22.776608305 +0000 UTC m=+0.110599422 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, config_id=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 25 08:21:23 np0005629333 nova_compute[244014]: 2026-02-25 13:21:23.384 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:21:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3045: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 37 op/s
Feb 25 08:21:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:21:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3046: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 37 op/s
Feb 25 08:21:26 np0005629333 nova_compute[244014]: 2026-02-25 13:21:26.221 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:21:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3047: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Feb 25 08:21:28 np0005629333 nova_compute[244014]: 2026-02-25 13:21:28.388 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:21:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:21:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3048: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Feb 25 08:21:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:21:31
Feb 25 08:21:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:21:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:21:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', '.mgr', 'backups', '.rgw.root', 'images']
Feb 25 08:21:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
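The balancer module just evaluated one upmap optimization pass over all eleven pools and prepared no changes, consistent with every PG already being active+clean. A minimal sketch querying the same module state; `ceph balancer status` is the standard mgr query, though the exact JSON fields can vary by release:

```python
# Sketch: inspect the balancer state these log lines come from.
import json
import subprocess

status = json.loads(subprocess.run(
    ["ceph", "balancer", "status", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout)
print(status.get("active"), status.get("mode"))  # expect: True upmap
```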
Feb 25 08:21:31 np0005629333 nova_compute[244014]: 2026-02-25 13:21:31.262 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:21:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3049: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Feb 25 08:21:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:21:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:21:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:21:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:21:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:21:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:21:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:21:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:21:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:21:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:21:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:21:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:21:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:21:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:21:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:21:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
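The rbd_support module reloads its trash-purge and mirror-snapshot schedules per pool on each refresh, which is why each pool name appears once per handler with an empty start_after. A minimal sketch listing the same schedules from the CLI; empty output would match the empty schedules implied above:

```python
# Sketch: list the schedules the rbd_support handlers just reloaded.
# Both subcommands are standard rbd CLI.
import subprocess

for args in (
    ["rbd", "trash", "purge", "schedule", "ls", "--recursive"],
    ["rbd", "mirror", "snapshot", "schedule", "ls", "--recursive"],
):
    print("$", " ".join(args))
    subprocess.run(args, check=False)
```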
Feb 25 08:21:33 np0005629333 nova_compute[244014]: 2026-02-25 13:21:33.392 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:21:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3050: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 0 B/s wr, 67 op/s
Feb 25 08:21:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:21:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3051: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 37 op/s
Feb 25 08:21:36 np0005629333 nova_compute[244014]: 2026-02-25 13:21:36.263 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:21:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3052: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 37 op/s
Feb 25 08:21:38 np0005629333 nova_compute[244014]: 2026-02-25 13:21:38.395 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:21:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:21:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3053: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:21:41 np0005629333 nova_compute[244014]: 2026-02-25 13:21:41.263 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:21:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3054: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:21:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:21:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:21:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:21:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:21:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:21:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:21:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:21:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:21:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:21:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:21:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:21:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:21:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:21:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:21:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:21:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:21:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:21:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:21:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:21:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:21:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:21:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:21:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
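Each pg_autoscaler pair of lines reconstructs as a simple product: the pool's share of raw capacity, times its bias, times a PG budget for the CRUSH root, here about 300 (inferred from the logged numbers; the module derives it from mon_target_pg_per_osd and the number of OSDs under the root). A quick check against three of the pools above:

    # Verifying the autoscaler arithmetic (pg_budget ~300 is an inference
    # from these log lines, not a quoted configuration value).
    def pg_target(capacity_ratio, bias, pg_budget=300):
        return capacity_ratio * bias * pg_budget

    print(pg_target(7.185749983720779e-06, 1.0))   # ~0.0021557  ('.mgr')
    print(pg_target(1.3916366864300228e-06, 4.0))  # ~0.0016700  ('cephfs.cephfs.meta')
    print(pg_target(0.0006714637386478266, 1.0))   # ~0.2014     ('images')

The "quantized to" figure rounds toward a power of two but also honors the pool's minimum and only recommends a change when the ideal is far from the current pg_num, which is why every pool here keeps its current value.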
Feb 25 08:21:43 np0005629333 nova_compute[244014]: 2026-02-25 13:21:43.431 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:21:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3055: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:21:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:21:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3056: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:21:46 np0005629333 nova_compute[244014]: 2026-02-25 13:21:46.265 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:21:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:21:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4285245314' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:21:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:21:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4285245314' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:21:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3057: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:21:48 np0005629333 nova_compute[244014]: 2026-02-25 13:21:48.435 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.218726) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025709218768, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 1189, "num_deletes": 250, "total_data_size": 1825449, "memory_usage": 1845736, "flush_reason": "Manual Compaction"}
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025709226321, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 1075170, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63354, "largest_seqno": 64542, "table_properties": {"data_size": 1070843, "index_size": 1850, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11380, "raw_average_key_size": 20, "raw_value_size": 1061415, "raw_average_value_size": 1922, "num_data_blocks": 84, "num_entries": 552, "num_filter_entries": 552, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772025589, "oldest_key_time": 1772025589, "file_creation_time": 1772025709, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 7670 microseconds, and 2822 cpu microseconds.
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.226395) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 1075170 bytes OK
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.226416) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.228760) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.228774) EVENT_LOG_v1 {"time_micros": 1772025709228769, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.228792) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 1820044, prev total WAL file size 1820044, number of live WAL files 2.
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.229356) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353033' seq:72057594037927935, type:22 .. '6D6772737461740032373534' seq:0, type:0; will stop at (end)
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(1049KB)], [149(10MB)]
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025709229407, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 12180128, "oldest_snapshot_seqno": -1}
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 8343 keys, 9557371 bytes, temperature: kUnknown
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025709302328, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 9557371, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9505821, "index_size": 29613, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20869, "raw_key_size": 217432, "raw_average_key_size": 26, "raw_value_size": 9361100, "raw_average_value_size": 1122, "num_data_blocks": 1148, "num_entries": 8343, "num_filter_entries": 8343, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772025709, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.302660) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 9557371 bytes
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.304436) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.8 rd, 130.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.6 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(20.2) write-amplify(8.9) OK, records in: 8801, records dropped: 458 output_compression: NoCompression
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.304466) EVENT_LOG_v1 {"time_micros": 1772025709304452, "job": 92, "event": "compaction_finished", "compaction_time_micros": 73034, "compaction_time_cpu_micros": 27351, "output_level": 6, "num_output_files": 1, "total_output_size": 9557371, "num_input_records": 8801, "num_output_records": 8343, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025709304795, "job": 92, "event": "table_file_deletion", "file_number": 151}
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025709306456, "job": 92, "event": "table_file_deletion", "file_number": 149}
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.229285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.306496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.306503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.306506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.306509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:21:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:21:49.306512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
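JOB 92's summary line already contains the amplification math. Using the byte counts from the event log (L0 input table #151, total input, and output table #152), the logged write-amplify(8.9) and read-write-amplify(20.2) fall out directly:

    # Reconstructing the JOB 92 amplification figures from the events above.
    in_l0    = 1075170     # flushed L0 table #151
    in_total = 12180128    # "input_data_size" (table #151 + L6 table #149)
    out      = 9557371     # compacted L6 output, table #152
    print(round(out / in_l0, 1))                # 8.9  -> write-amplify
    print(round((in_total + out) / in_l0, 1))   # 20.2 -> read-write-amplify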
Feb 25 08:21:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3058: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:21:50 np0005629333 podman[396464]: 2026-02-25 13:21:50.624158685 +0000 UTC m=+0.070660145 container create e5c9e96e1293bc3dc74932da5b0d041731f175f6e313119a80e77f73992cbe0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 08:21:50 np0005629333 systemd[1]: Started libpod-conmon-e5c9e96e1293bc3dc74932da5b0d041731f175f6e313119a80e77f73992cbe0f.scope.
Feb 25 08:21:50 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:21:50 np0005629333 podman[396464]: 2026-02-25 13:21:50.587880382 +0000 UTC m=+0.034381762 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:21:50 np0005629333 podman[396464]: 2026-02-25 13:21:50.697607897 +0000 UTC m=+0.144109257 container init e5c9e96e1293bc3dc74932da5b0d041731f175f6e313119a80e77f73992cbe0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_dubinsky, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:21:50 np0005629333 podman[396464]: 2026-02-25 13:21:50.703253677 +0000 UTC m=+0.149755017 container start e5c9e96e1293bc3dc74932da5b0d041731f175f6e313119a80e77f73992cbe0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_dubinsky, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:21:50 np0005629333 sweet_dubinsky[396480]: 167 167
Feb 25 08:21:50 np0005629333 systemd[1]: libpod-e5c9e96e1293bc3dc74932da5b0d041731f175f6e313119a80e77f73992cbe0f.scope: Deactivated successfully.
Feb 25 08:21:50 np0005629333 podman[396464]: 2026-02-25 13:21:50.72534881 +0000 UTC m=+0.171850240 container attach e5c9e96e1293bc3dc74932da5b0d041731f175f6e313119a80e77f73992cbe0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_dubinsky, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:21:50 np0005629333 podman[396464]: 2026-02-25 13:21:50.725771282 +0000 UTC m=+0.172272622 container died e5c9e96e1293bc3dc74932da5b0d041731f175f6e313119a80e77f73992cbe0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Feb 25 08:21:50 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6697c9bd8fa06246c6da4221d441f2f8e41aa2a2d545f8e119df4f991369a522-merged.mount: Deactivated successfully.
Feb 25 08:21:50 np0005629333 podman[396464]: 2026-02-25 13:21:50.787817673 +0000 UTC m=+0.234319053 container remove e5c9e96e1293bc3dc74932da5b0d041731f175f6e313119a80e77f73992cbe0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_dubinsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:21:50 np0005629333 systemd[1]: libpod-conmon-e5c9e96e1293bc3dc74932da5b0d041731f175f6e313119a80e77f73992cbe0f.scope: Deactivated successfully.
Feb 25 08:21:50 np0005629333 podman[396506]: 2026-02-25 13:21:50.954386014 +0000 UTC m=+0.048783558 container create e533b15e610383fbb2250938a73d3db2ef326c483208f70d44916800f663a71d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hertz, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:21:51 np0005629333 systemd[1]: Started libpod-conmon-e533b15e610383fbb2250938a73d3db2ef326c483208f70d44916800f663a71d.scope.
Feb 25 08:21:51 np0005629333 podman[396506]: 2026-02-25 13:21:50.931107127 +0000 UTC m=+0.025504761 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:21:51 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:21:51 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/925cd72d561f9a9a86214093e40c4d463d746aa45dbeb1d817e894c3f45b078b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:21:51 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/925cd72d561f9a9a86214093e40c4d463d746aa45dbeb1d817e894c3f45b078b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:21:51 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/925cd72d561f9a9a86214093e40c4d463d746aa45dbeb1d817e894c3f45b078b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:21:51 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/925cd72d561f9a9a86214093e40c4d463d746aa45dbeb1d817e894c3f45b078b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:21:51 np0005629333 podman[396506]: 2026-02-25 13:21:51.061642361 +0000 UTC m=+0.156039935 container init e533b15e610383fbb2250938a73d3db2ef326c483208f70d44916800f663a71d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:21:51 np0005629333 podman[396506]: 2026-02-25 13:21:51.069676888 +0000 UTC m=+0.164074472 container start e533b15e610383fbb2250938a73d3db2ef326c483208f70d44916800f663a71d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:21:51 np0005629333 podman[396506]: 2026-02-25 13:21:51.07329616 +0000 UTC m=+0.167693714 container attach e533b15e610383fbb2250938a73d3db2ef326c483208f70d44916800f663a71d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hertz, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:21:51 np0005629333 nova_compute[244014]: 2026-02-25 13:21:51.266 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:21:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3059: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]: [
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:    {
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:        "available": false,
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:        "being_replaced": false,
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:        "ceph_device_lvm": false,
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:        "device_id": "QEMU_DVD-ROM_QM00001",
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:        "lsm_data": {},
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:        "lvs": [],
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:        "path": "/dev/sr0",
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:        "rejected_reasons": [
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:            "Insufficient space (<5GB)",
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:            "Has a FileSystem"
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:        ],
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:        "sys_api": {
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:            "actuators": null,
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:            "device_nodes": [
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:                "sr0"
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:            ],
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:            "devname": "sr0",
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:            "human_readable_size": "482.00 KB",
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:            "id_bus": "ata",
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:            "model": "QEMU DVD-ROM",
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:            "nr_requests": "2",
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:            "parent": "/dev/sr0",
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:            "partitions": {},
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:            "path": "/dev/sr0",
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:            "removable": "1",
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:            "rev": "2.5+",
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:            "ro": "0",
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:            "rotational": "1",
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:            "sas_address": "",
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:            "sas_device_handle": "",
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:            "scheduler_mode": "mq-deadline",
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:            "sectors": 0,
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:            "sectorsize": "2048",
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:            "size": 493568.0,
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:            "support_discard": "2048",
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:            "type": "disk",
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:            "vendor": "QEMU"
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:        }
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]:    }
Feb 25 08:21:51 np0005629333 nifty_hertz[396523]: ]
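The JSON block from nifty_hertz is a ceph-volume inventory report; cephadm stores it under the mgr/cephadm/host.compute-0.devices.0 config-key seen a few lines below, and any device with a non-empty rejected_reasons list is unusable for OSDs. A minimal filter over that structure (the variable name inventory_json is illustrative):

    import json

    # Keep only devices ceph-volume considers usable OSD candidates.
    def usable_devices(inventory_json: str):
        return [d["path"] for d in json.loads(inventory_json)
                if d.get("available") and not d.get("rejected_reasons")]

    # On this host the report lists only /dev/sr0 (rejected: "Insufficient
    # space (<5GB)", "Has a FileSystem"), so the result is an empty list.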
Feb 25 08:21:51 np0005629333 systemd[1]: libpod-e533b15e610383fbb2250938a73d3db2ef326c483208f70d44916800f663a71d.scope: Deactivated successfully.
Feb 25 08:21:51 np0005629333 podman[396506]: 2026-02-25 13:21:51.647922877 +0000 UTC m=+0.742320461 container died e533b15e610383fbb2250938a73d3db2ef326c483208f70d44916800f663a71d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hertz, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 08:21:51 np0005629333 systemd[1]: var-lib-containers-storage-overlay-925cd72d561f9a9a86214093e40c4d463d746aa45dbeb1d817e894c3f45b078b-merged.mount: Deactivated successfully.
Feb 25 08:21:51 np0005629333 podman[396506]: 2026-02-25 13:21:51.702483647 +0000 UTC m=+0.796881221 container remove e533b15e610383fbb2250938a73d3db2ef326c483208f70d44916800f663a71d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 08:21:51 np0005629333 systemd[1]: libpod-conmon-e533b15e610383fbb2250938a73d3db2ef326c483208f70d44916800f663a71d.scope: Deactivated successfully.
Feb 25 08:21:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:21:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:21:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:21:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:21:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:21:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:21:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:21:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:21:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:21:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:21:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:21:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:21:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:21:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:21:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:21:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:21:52 np0005629333 podman[397319]: 2026-02-25 13:21:52.245080021 +0000 UTC m=+0.057889195 container create 3c75680b972273ab5ef0b77a05c3a8055106241e229b9593baeac18c8c6131d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kepler, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 08:21:52 np0005629333 systemd[1]: Started libpod-conmon-3c75680b972273ab5ef0b77a05c3a8055106241e229b9593baeac18c8c6131d0.scope.
Feb 25 08:21:52 np0005629333 podman[397319]: 2026-02-25 13:21:52.220736104 +0000 UTC m=+0.033545318 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:21:52 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:21:52 np0005629333 podman[397319]: 2026-02-25 13:21:52.333869677 +0000 UTC m=+0.146678911 container init 3c75680b972273ab5ef0b77a05c3a8055106241e229b9593baeac18c8c6131d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kepler, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 08:21:52 np0005629333 podman[397319]: 2026-02-25 13:21:52.341747889 +0000 UTC m=+0.154557053 container start 3c75680b972273ab5ef0b77a05c3a8055106241e229b9593baeac18c8c6131d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kepler, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:21:52 np0005629333 podman[397319]: 2026-02-25 13:21:52.345441463 +0000 UTC m=+0.158250687 container attach 3c75680b972273ab5ef0b77a05c3a8055106241e229b9593baeac18c8c6131d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 08:21:52 np0005629333 pedantic_kepler[397336]: 167 167
Feb 25 08:21:52 np0005629333 systemd[1]: libpod-3c75680b972273ab5ef0b77a05c3a8055106241e229b9593baeac18c8c6131d0.scope: Deactivated successfully.
Feb 25 08:21:52 np0005629333 podman[397319]: 2026-02-25 13:21:52.348099438 +0000 UTC m=+0.160908602 container died 3c75680b972273ab5ef0b77a05c3a8055106241e229b9593baeac18c8c6131d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 08:21:52 np0005629333 systemd[1]: var-lib-containers-storage-overlay-175e1d6f7cf79deeb9b1fee486766be8534d5e06517a4e669506a0bd43dbf66b-merged.mount: Deactivated successfully.
Feb 25 08:21:52 np0005629333 podman[397319]: 2026-02-25 13:21:52.399912471 +0000 UTC m=+0.212721635 container remove 3c75680b972273ab5ef0b77a05c3a8055106241e229b9593baeac18c8c6131d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Feb 25 08:21:52 np0005629333 systemd[1]: libpod-conmon-3c75680b972273ab5ef0b77a05c3a8055106241e229b9593baeac18c8c6131d0.scope: Deactivated successfully.
Feb 25 08:21:52 np0005629333 podman[397361]: 2026-02-25 13:21:52.584962753 +0000 UTC m=+0.057698659 container create 873766ed40ac51109b113d06b4c9a1cf5dee793f870ddccfdf742ba3e2e13972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_spence, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Feb 25 08:21:52 np0005629333 systemd[1]: Started libpod-conmon-873766ed40ac51109b113d06b4c9a1cf5dee793f870ddccfdf742ba3e2e13972.scope.
Feb 25 08:21:52 np0005629333 podman[397361]: 2026-02-25 13:21:52.562101368 +0000 UTC m=+0.034837344 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:21:52 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:21:52 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07106e0e24b13b22d768c4b4c00df4819ba6e90f196eab97dd5e8f6edb74a0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:21:52 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07106e0e24b13b22d768c4b4c00df4819ba6e90f196eab97dd5e8f6edb74a0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:21:52 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07106e0e24b13b22d768c4b4c00df4819ba6e90f196eab97dd5e8f6edb74a0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:21:52 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07106e0e24b13b22d768c4b4c00df4819ba6e90f196eab97dd5e8f6edb74a0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:21:52 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a07106e0e24b13b22d768c4b4c00df4819ba6e90f196eab97dd5e8f6edb74a0e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:21:52 np0005629333 podman[397361]: 2026-02-25 13:21:52.6904488 +0000 UTC m=+0.163184776 container init 873766ed40ac51109b113d06b4c9a1cf5dee793f870ddccfdf742ba3e2e13972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_spence, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 08:21:52 np0005629333 podman[397361]: 2026-02-25 13:21:52.703772306 +0000 UTC m=+0.176508212 container start 873766ed40ac51109b113d06b4c9a1cf5dee793f870ddccfdf742ba3e2e13972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_spence, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 08:21:52 np0005629333 podman[397361]: 2026-02-25 13:21:52.708563632 +0000 UTC m=+0.181299558 container attach 873766ed40ac51109b113d06b4c9a1cf5dee793f870ddccfdf742ba3e2e13972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:21:52 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:21:52 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:21:52 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:21:52 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:21:52 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:21:53 np0005629333 happy_spence[397378]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:21:53 np0005629333 happy_spence[397378]: --> All data devices are unavailable
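happy_spence is the OSD-creation pre-flight for this host: three LVM-backed data devices were passed in and all were filtered out, so nothing was prepared. A hedged sketch of that check, with is_lvm and available as assumed attributes rather than ceph-volume's internals:

    # Sketch of the batch pre-flight behind the two lines above
    # (attribute names are assumptions, not ceph-volume's real internals).
    def preflight(devices):
        physical = [d for d in devices if not d.is_lvm]
        lvm = [d for d in devices if d.is_lvm]
        print(f"--> passed data devices: {len(physical)} physical, {len(lvm)} LVM")
        usable = [d for d in devices if d.available]
        if not usable:
            print("--> All data devices are unavailable")
        return usable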
Feb 25 08:21:53 np0005629333 systemd[1]: libpod-873766ed40ac51109b113d06b4c9a1cf5dee793f870ddccfdf742ba3e2e13972.scope: Deactivated successfully.
Feb 25 08:21:53 np0005629333 podman[397361]: 2026-02-25 13:21:53.176380475 +0000 UTC m=+0.649116361 container died 873766ed40ac51109b113d06b4c9a1cf5dee793f870ddccfdf742ba3e2e13972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_spence, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 08:21:53 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a07106e0e24b13b22d768c4b4c00df4819ba6e90f196eab97dd5e8f6edb74a0e-merged.mount: Deactivated successfully.
Feb 25 08:21:53 np0005629333 podman[397361]: 2026-02-25 13:21:53.236591654 +0000 UTC m=+0.709327530 container remove 873766ed40ac51109b113d06b4c9a1cf5dee793f870ddccfdf742ba3e2e13972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_spence, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:21:53 np0005629333 systemd[1]: libpod-conmon-873766ed40ac51109b113d06b4c9a1cf5dee793f870ddccfdf742ba3e2e13972.scope: Deactivated successfully.
Feb 25 08:21:53 np0005629333 podman[397399]: 2026-02-25 13:21:53.330436173 +0000 UTC m=+0.122768806 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Feb 25 08:21:53 np0005629333 podman[397406]: 2026-02-25 13:21:53.342986137 +0000 UTC m=+0.132870701 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 08:21:53 np0005629333 nova_compute[244014]: 2026-02-25 13:21:53.437 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:21:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3060: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:21:53 np0005629333 podman[397515]: 2026-02-25 13:21:53.686308086 +0000 UTC m=+0.062168405 container create 0a18e6a5b3b2afeab6a1d8e7db99bb73b7a9da0eccc4d773f640dd14502a14a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_mcnulty, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:21:53 np0005629333 systemd[1]: Started libpod-conmon-0a18e6a5b3b2afeab6a1d8e7db99bb73b7a9da0eccc4d773f640dd14502a14a1.scope.
Feb 25 08:21:53 np0005629333 podman[397515]: 2026-02-25 13:21:53.659136229 +0000 UTC m=+0.034996608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:21:53 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:21:53 np0005629333 podman[397515]: 2026-02-25 13:21:53.781979496 +0000 UTC m=+0.157839875 container init 0a18e6a5b3b2afeab6a1d8e7db99bb73b7a9da0eccc4d773f640dd14502a14a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_mcnulty, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:21:53 np0005629333 podman[397515]: 2026-02-25 13:21:53.789321693 +0000 UTC m=+0.165182022 container start 0a18e6a5b3b2afeab6a1d8e7db99bb73b7a9da0eccc4d773f640dd14502a14a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_mcnulty, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 08:21:53 np0005629333 podman[397515]: 2026-02-25 13:21:53.794807798 +0000 UTC m=+0.170668167 container attach 0a18e6a5b3b2afeab6a1d8e7db99bb73b7a9da0eccc4d773f640dd14502a14a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_mcnulty, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:21:53 np0005629333 clever_mcnulty[397531]: 167 167
Feb 25 08:21:53 np0005629333 systemd[1]: libpod-0a18e6a5b3b2afeab6a1d8e7db99bb73b7a9da0eccc4d773f640dd14502a14a1.scope: Deactivated successfully.
Feb 25 08:21:53 np0005629333 podman[397515]: 2026-02-25 13:21:53.796242019 +0000 UTC m=+0.172102348 container died 0a18e6a5b3b2afeab6a1d8e7db99bb73b7a9da0eccc4d773f640dd14502a14a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_mcnulty, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:21:53 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f903077075e667bcb6095c3a4eac141570a50628b01871cf9f64cb920de8dee9-merged.mount: Deactivated successfully.
Feb 25 08:21:53 np0005629333 podman[397515]: 2026-02-25 13:21:53.837869674 +0000 UTC m=+0.213729993 container remove 0a18e6a5b3b2afeab6a1d8e7db99bb73b7a9da0eccc4d773f640dd14502a14a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_mcnulty, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 08:21:53 np0005629333 systemd[1]: libpod-conmon-0a18e6a5b3b2afeab6a1d8e7db99bb73b7a9da0eccc4d773f640dd14502a14a1.scope: Deactivated successfully.
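The clever_mcnulty lines above trace one short-lived exec container through its whole lifecycle (create, init, start, attach, died, remove) with the matching systemd scope units starting and deactivating around it. A sketch of watching that same sequence live; the Status and Name field names are an assumption about podman's JSON event schema, not taken from this log:

    # Follow container lifecycle events (create/init/start/attach/died/remove)
    # as podman emits them, one JSON object per line.
    import json
    import subprocess

    proc = subprocess.Popen(
        ["podman", "events", "--format", "json"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Status"), ev.get("Name"))  # e.g. "died clever_mcnulty"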
Feb 25 08:21:54 np0005629333 podman[397555]: 2026-02-25 13:21:54.001570804 +0000 UTC m=+0.054228492 container create f096043f4b69525837aa4e8cb3935ca7117283d3681534e35ddb542fe8d6a643 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_thompson, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:21:54 np0005629333 systemd[1]: Started libpod-conmon-f096043f4b69525837aa4e8cb3935ca7117283d3681534e35ddb542fe8d6a643.scope.
Feb 25 08:21:54 np0005629333 podman[397555]: 2026-02-25 13:21:53.971876056 +0000 UTC m=+0.024533764 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:21:54 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:21:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c81f49af591c3ed5f02096a19d703dc60122e73c4b23ff44770358c1b881c3dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:21:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c81f49af591c3ed5f02096a19d703dc60122e73c4b23ff44770358c1b881c3dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:21:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c81f49af591c3ed5f02096a19d703dc60122e73c4b23ff44770358c1b881c3dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:21:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c81f49af591c3ed5f02096a19d703dc60122e73c4b23ff44770358c1b881c3dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:21:54 np0005629333 podman[397555]: 2026-02-25 13:21:54.103635724 +0000 UTC m=+0.156293392 container init f096043f4b69525837aa4e8cb3935ca7117283d3681534e35ddb542fe8d6a643 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_thompson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 08:21:54 np0005629333 podman[397555]: 2026-02-25 13:21:54.119010758 +0000 UTC m=+0.171668416 container start f096043f4b69525837aa4e8cb3935ca7117283d3681534e35ddb542fe8d6a643 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_thompson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 08:21:54 np0005629333 podman[397555]: 2026-02-25 13:21:54.123558067 +0000 UTC m=+0.176215725 container attach f096043f4b69525837aa4e8cb3935ca7117283d3681534e35ddb542fe8d6a643 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:21:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]: {
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:    "0": [
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:        {
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "devices": [
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "/dev/loop3"
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            ],
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "lv_name": "ceph_lv0",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "lv_size": "21470642176",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "name": "ceph_lv0",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "tags": {
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.cluster_name": "ceph",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.crush_device_class": "",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.encrypted": "0",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.objectstore": "bluestore",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.osd_id": "0",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.type": "block",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.vdo": "0",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.with_tpm": "0"
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            },
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "type": "block",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "vg_name": "ceph_vg0"
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:        }
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:    ],
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:    "1": [
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:        {
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "devices": [
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "/dev/loop4"
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            ],
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "lv_name": "ceph_lv1",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "lv_size": "21470642176",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "name": "ceph_lv1",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "tags": {
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.cluster_name": "ceph",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.crush_device_class": "",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.encrypted": "0",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.objectstore": "bluestore",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.osd_id": "1",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.type": "block",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.vdo": "0",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.with_tpm": "0"
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            },
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "type": "block",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "vg_name": "ceph_vg1"
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:        }
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:    ],
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:    "2": [
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:        {
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "devices": [
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "/dev/loop5"
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            ],
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "lv_name": "ceph_lv2",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "lv_size": "21470642176",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "name": "ceph_lv2",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "tags": {
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.cluster_name": "ceph",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.crush_device_class": "",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.encrypted": "0",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.objectstore": "bluestore",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.osd_id": "2",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.type": "block",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.vdo": "0",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:                "ceph.with_tpm": "0"
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            },
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "type": "block",
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:            "vg_name": "ceph_vg2"
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:        }
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]:    ]
Feb 25 08:21:54 np0005629333 adoring_thompson[397571]: }
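The JSON block printed by adoring_thompson above is ceph-volume-style inventory, keyed by OSD id, with one LV record per OSD carrying the ceph.* tags. A minimal sketch of extracting the OSD-to-device mapping from it, assuming the block has been captured to a file (osd_list.json is a hypothetical name):

    # Map each OSD id to its logical volume, backing device, and osd_fsid,
    # using only keys visible in the JSON block above.
    import json

    with open("osd_list.json") as f:
        osds = json.load(f)

    for osd_id, records in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for rec in records:
            print(f"osd.{osd_id}: {rec['lv_path']} "
                  f"on {','.join(rec['devices'])} "
                  f"(osd_fsid={rec['tags']['ceph.osd_fsid']})")

For the log shown, this would print osd.0 on /dev/loop3, osd.1 on /dev/loop4, and osd.2 on /dev/loop5.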
Feb 25 08:21:54 np0005629333 systemd[1]: libpod-f096043f4b69525837aa4e8cb3935ca7117283d3681534e35ddb542fe8d6a643.scope: Deactivated successfully.
Feb 25 08:21:54 np0005629333 podman[397555]: 2026-02-25 13:21:54.439749459 +0000 UTC m=+0.492407157 container died f096043f4b69525837aa4e8cb3935ca7117283d3681534e35ddb542fe8d6a643 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_thompson, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:21:54 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c81f49af591c3ed5f02096a19d703dc60122e73c4b23ff44770358c1b881c3dd-merged.mount: Deactivated successfully.
Feb 25 08:21:54 np0005629333 podman[397555]: 2026-02-25 13:21:54.495720619 +0000 UTC m=+0.548378297 container remove f096043f4b69525837aa4e8cb3935ca7117283d3681534e35ddb542fe8d6a643 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_thompson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:21:54 np0005629333 systemd[1]: libpod-conmon-f096043f4b69525837aa4e8cb3935ca7117283d3681534e35ddb542fe8d6a643.scope: Deactivated successfully.
Feb 25 08:21:54 np0005629333 nova_compute[244014]: 2026-02-25 13:21:54.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:21:54 np0005629333 nova_compute[244014]: 2026-02-25 13:21:54.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:21:54 np0005629333 nova_compute[244014]: 2026-02-25 13:21:54.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:21:54 np0005629333 nova_compute[244014]: 2026-02-25 13:21:54.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:21:54 np0005629333 nova_compute[244014]: 2026-02-25 13:21:54.905 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:21:54 np0005629333 nova_compute[244014]: 2026-02-25 13:21:54.905 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:21:55 np0005629333 podman[397656]: 2026-02-25 13:21:55.033488606 +0000 UTC m=+0.068309949 container create a194225dfab1511ddcb393d44ef8d67530d47a8746a63438f79801b991135ee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cerf, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 08:21:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:21:55.063 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:21:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:21:55.065 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:21:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:21:55.065 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:21:55 np0005629333 systemd[1]: Started libpod-conmon-a194225dfab1511ddcb393d44ef8d67530d47a8746a63438f79801b991135ee3.scope.
Feb 25 08:21:55 np0005629333 podman[397656]: 2026-02-25 13:21:55.001280597 +0000 UTC m=+0.036101970 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:21:55 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:21:55 np0005629333 podman[397656]: 2026-02-25 13:21:55.131454331 +0000 UTC m=+0.166275744 container init a194225dfab1511ddcb393d44ef8d67530d47a8746a63438f79801b991135ee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cerf, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:21:55 np0005629333 podman[397656]: 2026-02-25 13:21:55.141532176 +0000 UTC m=+0.176353529 container start a194225dfab1511ddcb393d44ef8d67530d47a8746a63438f79801b991135ee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cerf, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 08:21:55 np0005629333 podman[397656]: 2026-02-25 13:21:55.150110598 +0000 UTC m=+0.184932001 container attach a194225dfab1511ddcb393d44ef8d67530d47a8746a63438f79801b991135ee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cerf, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:21:55 np0005629333 adoring_cerf[397693]: 167 167
Feb 25 08:21:55 np0005629333 systemd[1]: libpod-a194225dfab1511ddcb393d44ef8d67530d47a8746a63438f79801b991135ee3.scope: Deactivated successfully.
Feb 25 08:21:55 np0005629333 conmon[397693]: conmon a194225dfab1511ddcb3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a194225dfab1511ddcb393d44ef8d67530d47a8746a63438f79801b991135ee3.scope/container/memory.events
Feb 25 08:21:55 np0005629333 podman[397656]: 2026-02-25 13:21:55.154152702 +0000 UTC m=+0.188974045 container died a194225dfab1511ddcb393d44ef8d67530d47a8746a63438f79801b991135ee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cerf, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 08:21:55 np0005629333 systemd[1]: var-lib-containers-storage-overlay-aeafa9c77999755d48ee17b738ee5c2b332e1568032f037f59d9a1e9b2813f82-merged.mount: Deactivated successfully.
Feb 25 08:21:55 np0005629333 podman[397656]: 2026-02-25 13:21:55.214861455 +0000 UTC m=+0.249682808 container remove a194225dfab1511ddcb393d44ef8d67530d47a8746a63438f79801b991135ee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cerf, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 08:21:55 np0005629333 systemd[1]: libpod-conmon-a194225dfab1511ddcb393d44ef8d67530d47a8746a63438f79801b991135ee3.scope: Deactivated successfully.
Feb 25 08:21:55 np0005629333 podman[397717]: 2026-02-25 13:21:55.354368772 +0000 UTC m=+0.043040975 container create 24659205ebfae825e8b28fb7fde7d900b370ebb3680956e66891bee1cdeed671 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 08:21:55 np0005629333 systemd[1]: Started libpod-conmon-24659205ebfae825e8b28fb7fde7d900b370ebb3680956e66891bee1cdeed671.scope.
Feb 25 08:21:55 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:21:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/926c7b64e86c2544cca1f1cfde25be80fea6a3040eba2faab5be2f0a00e58982/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:21:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/926c7b64e86c2544cca1f1cfde25be80fea6a3040eba2faab5be2f0a00e58982/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:21:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/926c7b64e86c2544cca1f1cfde25be80fea6a3040eba2faab5be2f0a00e58982/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:21:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/926c7b64e86c2544cca1f1cfde25be80fea6a3040eba2faab5be2f0a00e58982/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:21:55 np0005629333 podman[397717]: 2026-02-25 13:21:55.335296884 +0000 UTC m=+0.023969087 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:21:55 np0005629333 podman[397717]: 2026-02-25 13:21:55.446838772 +0000 UTC m=+0.135510975 container init 24659205ebfae825e8b28fb7fde7d900b370ebb3680956e66891bee1cdeed671 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_jones, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Feb 25 08:21:55 np0005629333 podman[397717]: 2026-02-25 13:21:55.456746282 +0000 UTC m=+0.145418505 container start 24659205ebfae825e8b28fb7fde7d900b370ebb3680956e66891bee1cdeed671 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 08:21:55 np0005629333 podman[397717]: 2026-02-25 13:21:55.461745073 +0000 UTC m=+0.150417276 container attach 24659205ebfae825e8b28fb7fde7d900b370ebb3680956e66891bee1cdeed671 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:21:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:21:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1494353662' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:21:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3061: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:21:55 np0005629333 nova_compute[244014]: 2026-02-25 13:21:55.534 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
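The subprocess lines above show nova's resource tracker shelling out to ceph df to size the RBD-backed disk pool; the ceph-mon audit log records the same request as cmd={"prefix": "df", "format": "json"}. A sketch of the same probe, assuming ceph's JSON output exposes cluster totals under a stats key (the key names are assumptions about ceph's format, not taken from this log):

    # Run the same capacity probe nova logs above and report cluster totals.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        text=True,
    )
    stats = json.loads(out)["stats"]
    gib = 1024 ** 3
    print(f"{stats['total_avail_bytes'] / gib:.1f} GiB avail "
          f"of {stats['total_bytes'] / gib:.1f} GiB")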
Feb 25 08:21:55 np0005629333 nova_compute[244014]: 2026-02-25 13:21:55.671 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:21:55 np0005629333 nova_compute[244014]: 2026-02-25 13:21:55.672 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3544MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:21:55 np0005629333 nova_compute[244014]: 2026-02-25 13:21:55.672 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:21:55 np0005629333 nova_compute[244014]: 2026-02-25 13:21:55.672 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:21:55 np0005629333 nova_compute[244014]: 2026-02-25 13:21:55.738 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:21:55 np0005629333 nova_compute[244014]: 2026-02-25 13:21:55.739 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:21:55 np0005629333 nova_compute[244014]: 2026-02-25 13:21:55.757 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:21:56 np0005629333 lvm[397835]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:21:56 np0005629333 lvm[397835]: VG ceph_vg1 finished
Feb 25 08:21:56 np0005629333 lvm[397833]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:21:56 np0005629333 lvm[397833]: VG ceph_vg0 finished
Feb 25 08:21:56 np0005629333 lvm[397836]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:21:56 np0005629333 lvm[397836]: VG ceph_vg2 finished
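The lvm messages above are event-driven autoactivation: as each loop device's PV comes online, pvscan declares its volume group complete and activates it (ceph_vg0 through ceph_vg2, matching the OSD inventory earlier). A sketch of listing the same VG/LV/tag mapping afterwards; the report[0]["lv"] nesting is an assumption about lvm2's JSON report layout:

    # List the ceph_* volume groups with their LVs and ceph.* tags.
    import json
    import subprocess

    out = subprocess.check_output(
        ["lvs", "--reportformat", "json", "-o", "vg_name,lv_name,lv_tags"],
        text=True,
    )
    for lv in json.loads(out)["report"][0]["lv"]:
        if lv["vg_name"].startswith("ceph_vg"):
            print(lv["vg_name"], lv["lv_name"], lv["lv_tags"])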
Feb 25 08:21:56 np0005629333 admiring_jones[397733]: {}
Feb 25 08:21:56 np0005629333 systemd[1]: libpod-24659205ebfae825e8b28fb7fde7d900b370ebb3680956e66891bee1cdeed671.scope: Deactivated successfully.
Feb 25 08:21:56 np0005629333 systemd[1]: libpod-24659205ebfae825e8b28fb7fde7d900b370ebb3680956e66891bee1cdeed671.scope: Consumed 1.078s CPU time.
Feb 25 08:21:56 np0005629333 podman[397717]: 2026-02-25 13:21:56.212392228 +0000 UTC m=+0.901064421 container died 24659205ebfae825e8b28fb7fde7d900b370ebb3680956e66891bee1cdeed671 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_jones, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 08:21:56 np0005629333 systemd[1]: var-lib-containers-storage-overlay-926c7b64e86c2544cca1f1cfde25be80fea6a3040eba2faab5be2f0a00e58982-merged.mount: Deactivated successfully.
Feb 25 08:21:56 np0005629333 podman[397717]: 2026-02-25 13:21:56.25248845 +0000 UTC m=+0.941160633 container remove 24659205ebfae825e8b28fb7fde7d900b370ebb3680956e66891bee1cdeed671 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_jones, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:21:56 np0005629333 nova_compute[244014]: 2026-02-25 13:21:56.267 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:21:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:21:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/552420912' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:21:56 np0005629333 systemd[1]: libpod-conmon-24659205ebfae825e8b28fb7fde7d900b370ebb3680956e66891bee1cdeed671.scope: Deactivated successfully.
Feb 25 08:21:56 np0005629333 nova_compute[244014]: 2026-02-25 13:21:56.299 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:21:56 np0005629333 nova_compute[244014]: 2026-02-25 13:21:56.304 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:21:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:21:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:21:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:21:56 np0005629333 nova_compute[244014]: 2026-02-25 13:21:56.323 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:21:56 np0005629333 nova_compute[244014]: 2026-02-25 13:21:56.325 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:21:56 np0005629333 nova_compute[244014]: 2026-02-25 13:21:56.325 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
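The inventory record reported at 13:21:56.323 fixes this host's schedulable capacity: placement treats (total - reserved) * allocation_ratio as the limit per resource class, so the values above work out to 32 schedulable vCPUs, 7167 MB of RAM, and about 52.2 GB of disk. A worked check using only numbers from that log line:

    # Capacity implied by the inventory the resource tracker reported above:
    # (total - reserved) * allocation_ratio per resource class.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2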
Feb 25 08:21:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:21:56 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:21:56 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:21:57 np0005629333 nova_compute[244014]: 2026-02-25 13:21:57.326 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:21:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3062: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:21:57 np0005629333 nova_compute[244014]: 2026-02-25 13:21:57.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:21:57 np0005629333 nova_compute[244014]: 2026-02-25 13:21:57.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:21:57 np0005629333 nova_compute[244014]: 2026-02-25 13:21:57.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:21:57 np0005629333 nova_compute[244014]: 2026-02-25 13:21:57.892 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:21:58 np0005629333 nova_compute[244014]: 2026-02-25 13:21:58.440 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:21:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:21:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3063: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:01 np0005629333 nova_compute[244014]: 2026-02-25 13:22:01.269 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:22:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3064: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:22:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:22:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:22:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:22:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:22:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:22:02 np0005629333 nova_compute[244014]: 2026-02-25 13:22:02.887 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:22:03 np0005629333 nova_compute[244014]: 2026-02-25 13:22:03.444 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:22:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3065: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:22:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3066: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:05 np0005629333 nova_compute[244014]: 2026-02-25 13:22:05.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:22:05 np0005629333 nova_compute[244014]: 2026-02-25 13:22:05.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
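The skip above is the short-circuit for soft-delete reclaim: unless reclaim_instance_interval is set to a positive number of seconds in nova.conf, deleted instances are destroyed immediately and this task has nothing to do. Roughly (paraphrased sketch, not Nova's verbatim source):

    # Paraphrased shape of the guard behind the "skipping..." line; the
    # interval value 0 mirrors this deployment's effective setting.
    reclaim_instance_interval = 0

    def _reclaim_queued_deletes():
        if reclaim_instance_interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # otherwise: look up SOFT_DELETED instances older than the
        # interval and delete them for real.

    _reclaim_queued_deletes()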
Feb 25 08:22:06 np0005629333 nova_compute[244014]: 2026-02-25 13:22:06.271 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:22:06 np0005629333 nova_compute[244014]: 2026-02-25 13:22:06.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:22:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3067: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:08 np0005629333 nova_compute[244014]: 2026-02-25 13:22:08.447 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:22:08 np0005629333 nova_compute[244014]: 2026-02-25 13:22:08.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:22:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:22:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3068: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:11 np0005629333 nova_compute[244014]: 2026-02-25 13:22:11.274 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:22:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3069: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:11 np0005629333 nova_compute[244014]: 2026-02-25 13:22:11.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:22:13 np0005629333 nova_compute[244014]: 2026-02-25 13:22:13.451 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:22:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3070: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:22:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3071: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:16 np0005629333 nova_compute[244014]: 2026-02-25 13:22:16.276 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:22:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3072: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:17 np0005629333 nova_compute[244014]: 2026-02-25 13:22:17.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:22:18 np0005629333 nova_compute[244014]: 2026-02-25 13:22:18.456 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:22:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:22:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3073: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:21 np0005629333 nova_compute[244014]: 2026-02-25 13:22:21.277 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:22:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3074: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:23 np0005629333 nova_compute[244014]: 2026-02-25 13:22:23.459 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:22:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3075: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:23 np0005629333 podman[397881]: 2026-02-25 13:22:23.761838817 +0000 UTC m=+0.091794822 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Feb 25 08:22:23 np0005629333 podman[397882]: 2026-02-25 13:22:23.781891193 +0000 UTC m=+0.110156600 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
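The two health_status=healthy events above are podman's healthcheck timers firing the test declared in each container's config_data ('test': '/openstack/healthcheck'). The same check can be run on demand; exit status 0 means healthy (a sketch, with the container names taken from the log):

    # Run the configured healthcheck on demand; "podman healthcheck run"
    # exits 0 when the container is healthy. Names come from the log above.
    import subprocess

    for name in ("ovn_metadata_agent", "ovn_controller"):
        r = subprocess.run(["podman", "healthcheck", "run", name])
        print(name, "healthy" if r.returncode == 0 else "unhealthy")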
Feb 25 08:22:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:22:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3076: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:25 np0005629333 nova_compute[244014]: 2026-02-25 13:22:25.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:22:26 np0005629333 nova_compute[244014]: 2026-02-25 13:22:26.279 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:22:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3077: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:28 np0005629333 nova_compute[244014]: 2026-02-25 13:22:28.461 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:22:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:22:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3078: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:22:31
Feb 25 08:22:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:22:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:22:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', '.mgr', 'backups', 'images']
Feb 25 08:22:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
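"prepared 0/10 upmap changes" means the balancer evaluated all eleven pools and found nothing to move: every PG is already where upmap wants it, so the plan auto_2026-02-25_13:22:31 is empty (the /10 appears to be the per-round optimization cap). The balancer's current state can be read back as JSON, for example:

    # Read back the balancer's status; "ceph balancer status" is a standard
    # mgr command. Assumes a usable client keyring on the host.
    import json, subprocess

    status = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "--format", "json"]))
    print(status["active"], status["mode"])   # expected here: True upmap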
Feb 25 08:22:31 np0005629333 nova_compute[244014]: 2026-02-25 13:22:31.280 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:22:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3079: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:22:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:22:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:22:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:22:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:22:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:22:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:22:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:22:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:22:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:22:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:22:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:22:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:22:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:22:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:22:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
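The rbd_support module above is reloading its trash-purge and mirror-snapshot schedules; each pool (vms, volumes, backups, images) is logged twice because the two handlers apparently scan independently, and the empty start_after means a scan from the beginning. The loaded schedules can be listed with the rbd CLI (a sketch; pool names from the log):

    # List the schedules the rbd_support handlers just reloaded; both
    # subcommands exist in the rbd CLI. Pool names are taken from the log.
    import subprocess

    for pool in ("vms", "volumes", "backups", "images"):
        subprocess.run(["rbd", "trash", "purge", "schedule", "ls",
                        "--pool", pool, "--recursive"])
        subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls",
                        "--pool", pool, "--recursive"])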
Feb 25 08:22:33 np0005629333 nova_compute[244014]: 2026-02-25 13:22:33.465 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:22:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3080: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:22:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3081: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:36 np0005629333 nova_compute[244014]: 2026-02-25 13:22:36.281 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:22:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3082: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:38 np0005629333 nova_compute[244014]: 2026-02-25 13:22:38.468 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:22:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:22:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3083: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:41 np0005629333 nova_compute[244014]: 2026-02-25 13:22:41.284 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:22:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3084: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:22:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:22:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:22:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:22:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:22:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:22:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:22:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:22:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:22:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:22:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:22:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:22:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:22:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:22:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:22:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:22:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:22:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:22:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:22:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:22:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:22:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:22:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
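Each "pg target" above is capacity_ratio x bias x (OSD count x mon_target_pg_per_osd). With the default mon_target_pg_per_osd=100 and the three OSDs implied by the 60 GiB pgmap lines, the multiplier is 300, which reproduces the logged numbers; the target is then rounded to a power of two, and pg_num is only actually changed when it drifts far enough from the pool's current value, hence "quantized to ... (current ...)". A quick check:

    # Reproduce the pg_autoscaler arithmetic above. The 300 assumes three
    # OSDs x the default mon_target_pg_per_osd=100 (consistent with the
    # 60 GiB cluster in the pgmap lines); ratios and biases are copied
    # straight from the log.
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "images":             (0.0006714637386478266, 1.0),
        "cephfs.cephfs.meta": (1.3916366864300228e-06, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        # matches each logged "pg target" up to float rounding
        print(name, ratio * bias * 300)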
Feb 25 08:22:43 np0005629333 nova_compute[244014]: 2026-02-25 13:22:43.472 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:22:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3085: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:22:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3086: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:46 np0005629333 nova_compute[244014]: 2026-02-25 13:22:46.285 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:22:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3087: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:22:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/716683261' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:22:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:22:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/716683261' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:22:48 np0005629333 nova_compute[244014]: 2026-02-25 13:22:48.476 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:22:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:22:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3088: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:51 np0005629333 nova_compute[244014]: 2026-02-25 13:22:51.287 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:22:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3089: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:53 np0005629333 nova_compute[244014]: 2026-02-25 13:22:53.543 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:22:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3090: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:22:54 np0005629333 podman[397927]: 2026-02-25 13:22:54.721537649 +0000 UTC m=+0.062580318 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Feb 25 08:22:54 np0005629333 podman[397928]: 2026-02-25 13:22:54.782934261 +0000 UTC m=+0.122111137 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 25 08:22:54 np0005629333 nova_compute[244014]: 2026-02-25 13:22:54.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:22:54 np0005629333 nova_compute[244014]: 2026-02-25 13:22:54.976 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:22:54 np0005629333 nova_compute[244014]: 2026-02-25 13:22:54.977 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:22:54 np0005629333 nova_compute[244014]: 2026-02-25 13:22:54.977 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:22:54 np0005629333 nova_compute[244014]: 2026-02-25 13:22:54.977 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 08:22:54 np0005629333 nova_compute[244014]: 2026-02-25 13:22:54.978 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:22:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:22:55.065 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:22:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:22:55.065 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:22:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:22:55.065 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:22:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:22:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/39669680' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:22:55 np0005629333 nova_compute[244014]: 2026-02-25 13:22:55.491 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:22:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3091: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:55 np0005629333 nova_compute[244014]: 2026-02-25 13:22:55.657 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 08:22:55 np0005629333 nova_compute[244014]: 2026-02-25 13:22:55.659 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3596MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 08:22:55 np0005629333 nova_compute[244014]: 2026-02-25 13:22:55.659 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:22:55 np0005629333 nova_compute[244014]: 2026-02-25 13:22:55.660 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:22:55 np0005629333 nova_compute[244014]: 2026-02-25 13:22:55.759 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 08:22:55 np0005629333 nova_compute[244014]: 2026-02-25 13:22:55.759 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 08:22:55 np0005629333 nova_compute[244014]: 2026-02-25 13:22:55.777 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:22:56 np0005629333 nova_compute[244014]: 2026-02-25 13:22:56.288 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:22:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:22:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4053153168' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:22:56 np0005629333 nova_compute[244014]: 2026-02-25 13:22:56.345 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:22:56 np0005629333 nova_compute[244014]: 2026-02-25 13:22:56.353 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:22:56 np0005629333 nova_compute[244014]: 2026-02-25 13:22:56.370 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 08:22:56 np0005629333 nova_compute[244014]: 2026-02-25 13:22:56.373 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 08:22:56 np0005629333 nova_compute[244014]: 2026-02-25 13:22:56.373 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
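The inventory dict logged at 13:22:56.370 is what the resource tracker reports to Placement. The schedulable ceiling per resource class is (total - reserved) x allocation_ratio, so this audit leaves 32 vCPUs, 7167 MB of RAM and 52.2 GB of disk claimable:

    # Usable capacity implied by the inventory dict above; this is the
    # standard Placement capacity formula, with the numbers from the log.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, d in inventory.items():
        print(rc, (d["total"] - d["reserved"]) * d["allocation_ratio"])
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2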
Feb 25 08:22:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:22:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:22:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:22:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:22:57 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:22:57 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:22:57 np0005629333 nova_compute[244014]: 2026-02-25 13:22:57.374 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:22:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3092: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:22:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:22:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:22:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:22:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:22:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:22:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:22:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:22:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:22:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:22:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:22:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:22:57 np0005629333 nova_compute[244014]: 2026-02-25 13:22:57.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:22:57 np0005629333 nova_compute[244014]: 2026-02-25 13:22:57.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 08:22:57 np0005629333 nova_compute[244014]: 2026-02-25 13:22:57.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 08:22:57 np0005629333 nova_compute[244014]: 2026-02-25 13:22:57.902 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 08:22:58 np0005629333 podman[398230]: 2026-02-25 13:22:58.042607767 +0000 UTC m=+0.060464308 container create aa26b2bea8718aaaeeea8854105eff66a30c33a57bee265068cff43ae84b77e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:22:58 np0005629333 systemd[1]: Started libpod-conmon-aa26b2bea8718aaaeeea8854105eff66a30c33a57bee265068cff43ae84b77e1.scope.
Feb 25 08:22:58 np0005629333 podman[398230]: 2026-02-25 13:22:58.014544505 +0000 UTC m=+0.032401106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:22:58 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:22:58 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:22:58 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:22:58 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:22:58 np0005629333 podman[398230]: 2026-02-25 13:22:58.141976861 +0000 UTC m=+0.159833462 container init aa26b2bea8718aaaeeea8854105eff66a30c33a57bee265068cff43ae84b77e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 08:22:58 np0005629333 podman[398230]: 2026-02-25 13:22:58.152538529 +0000 UTC m=+0.170395070 container start aa26b2bea8718aaaeeea8854105eff66a30c33a57bee265068cff43ae84b77e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 08:22:58 np0005629333 podman[398230]: 2026-02-25 13:22:58.157341285 +0000 UTC m=+0.175197836 container attach aa26b2bea8718aaaeeea8854105eff66a30c33a57bee265068cff43ae84b77e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:22:58 np0005629333 quirky_carson[398246]: 167 167
Feb 25 08:22:58 np0005629333 systemd[1]: libpod-aa26b2bea8718aaaeeea8854105eff66a30c33a57bee265068cff43ae84b77e1.scope: Deactivated successfully.
Feb 25 08:22:58 np0005629333 podman[398230]: 2026-02-25 13:22:58.159587068 +0000 UTC m=+0.177443629 container died aa26b2bea8718aaaeeea8854105eff66a30c33a57bee265068cff43ae84b77e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 08:22:58 np0005629333 systemd[1]: var-lib-containers-storage-overlay-0b7e37d35d025ac9672fc0493a04fe334df0423993b1840342037db5550cd251-merged.mount: Deactivated successfully.
Feb 25 08:22:58 np0005629333 podman[398230]: 2026-02-25 13:22:58.199746211 +0000 UTC m=+0.217602722 container remove aa26b2bea8718aaaeeea8854105eff66a30c33a57bee265068cff43ae84b77e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 08:22:58 np0005629333 systemd[1]: libpod-conmon-aa26b2bea8718aaaeeea8854105eff66a30c33a57bee265068cff43ae84b77e1.scope: Deactivated successfully.
Feb 25 08:22:58 np0005629333 podman[398271]: 2026-02-25 13:22:58.362881746 +0000 UTC m=+0.069282117 container create 9f0e386d8dbf88b353de478bf0e2cc7a97d901e9b74a35a19b41c53b1ba36de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 08:22:58 np0005629333 systemd[1]: Started libpod-conmon-9f0e386d8dbf88b353de478bf0e2cc7a97d901e9b74a35a19b41c53b1ba36de7.scope.
Feb 25 08:22:58 np0005629333 podman[398271]: 2026-02-25 13:22:58.331577592 +0000 UTC m=+0.037978023 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:22:58 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:22:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f07d095ebff7dbb5a252051f48a55ea5049e6a52e49cce69728a399d6cf39e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:22:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f07d095ebff7dbb5a252051f48a55ea5049e6a52e49cce69728a399d6cf39e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:22:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f07d095ebff7dbb5a252051f48a55ea5049e6a52e49cce69728a399d6cf39e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:22:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f07d095ebff7dbb5a252051f48a55ea5049e6a52e49cce69728a399d6cf39e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:22:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97f07d095ebff7dbb5a252051f48a55ea5049e6a52e49cce69728a399d6cf39e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:22:58 np0005629333 podman[398271]: 2026-02-25 13:22:58.478367435 +0000 UTC m=+0.184767816 container init 9f0e386d8dbf88b353de478bf0e2cc7a97d901e9b74a35a19b41c53b1ba36de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_khorana, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:22:58 np0005629333 podman[398271]: 2026-02-25 13:22:58.496440705 +0000 UTC m=+0.202841076 container start 9f0e386d8dbf88b353de478bf0e2cc7a97d901e9b74a35a19b41c53b1ba36de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 08:22:58 np0005629333 podman[398271]: 2026-02-25 13:22:58.500811928 +0000 UTC m=+0.207212299 container attach 9f0e386d8dbf88b353de478bf0e2cc7a97d901e9b74a35a19b41c53b1ba36de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_khorana, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:22:58 np0005629333 nova_compute[244014]: 2026-02-25 13:22:58.546 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:22:59 np0005629333 nostalgic_khorana[398288]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:22:59 np0005629333 nostalgic_khorana[398288]: --> All data devices are unavailable
Feb 25 08:22:59 np0005629333 systemd[1]: libpod-9f0e386d8dbf88b353de478bf0e2cc7a97d901e9b74a35a19b41c53b1ba36de7.scope: Deactivated successfully.
Feb 25 08:22:59 np0005629333 podman[398308]: 2026-02-25 13:22:59.082304008 +0000 UTC m=+0.025308885 container died 9f0e386d8dbf88b353de478bf0e2cc7a97d901e9b74a35a19b41c53b1ba36de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 08:22:59 np0005629333 systemd[1]: var-lib-containers-storage-overlay-97f07d095ebff7dbb5a252051f48a55ea5049e6a52e49cce69728a399d6cf39e-merged.mount: Deactivated successfully.
Feb 25 08:22:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:22:59 np0005629333 podman[398308]: 2026-02-25 13:22:59.232115006 +0000 UTC m=+0.175119833 container remove 9f0e386d8dbf88b353de478bf0e2cc7a97d901e9b74a35a19b41c53b1ba36de7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 08:22:59 np0005629333 systemd[1]: libpod-conmon-9f0e386d8dbf88b353de478bf0e2cc7a97d901e9b74a35a19b41c53b1ba36de7.scope: Deactivated successfully.
Feb 25 08:22:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3093: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:22:59 np0005629333 podman[398385]: 2026-02-25 13:22:59.742908172 +0000 UTC m=+0.058055109 container create add84a903d91582cd8503cc74c6e614bf463ab3fba3e1bfb88ba1e46ef75ea75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_payne, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:22:59 np0005629333 systemd[1]: Started libpod-conmon-add84a903d91582cd8503cc74c6e614bf463ab3fba3e1bfb88ba1e46ef75ea75.scope.
Feb 25 08:22:59 np0005629333 podman[398385]: 2026-02-25 13:22:59.719670366 +0000 UTC m=+0.034817383 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:22:59 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:22:59 np0005629333 podman[398385]: 2026-02-25 13:22:59.85019781 +0000 UTC m=+0.165344827 container init add84a903d91582cd8503cc74c6e614bf463ab3fba3e1bfb88ba1e46ef75ea75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_payne, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 08:22:59 np0005629333 podman[398385]: 2026-02-25 13:22:59.858375891 +0000 UTC m=+0.173522858 container start add84a903d91582cd8503cc74c6e614bf463ab3fba3e1bfb88ba1e46ef75ea75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 08:22:59 np0005629333 podman[398385]: 2026-02-25 13:22:59.862737304 +0000 UTC m=+0.177884281 container attach add84a903d91582cd8503cc74c6e614bf463ab3fba3e1bfb88ba1e46ef75ea75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 08:22:59 np0005629333 pensive_payne[398402]: 167 167
Feb 25 08:22:59 np0005629333 systemd[1]: libpod-add84a903d91582cd8503cc74c6e614bf463ab3fba3e1bfb88ba1e46ef75ea75.scope: Deactivated successfully.
Feb 25 08:22:59 np0005629333 podman[398385]: 2026-02-25 13:22:59.86505805 +0000 UTC m=+0.180205017 container died add84a903d91582cd8503cc74c6e614bf463ab3fba3e1bfb88ba1e46ef75ea75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_payne, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 08:22:59 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e25561167777e21a12837edee48c1eaabdbd72c34a398f9367296591814f5bea-merged.mount: Deactivated successfully.
Feb 25 08:22:59 np0005629333 podman[398385]: 2026-02-25 13:22:59.919586748 +0000 UTC m=+0.234733715 container remove add84a903d91582cd8503cc74c6e614bf463ab3fba3e1bfb88ba1e46ef75ea75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_payne, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:22:59 np0005629333 systemd[1]: libpod-conmon-add84a903d91582cd8503cc74c6e614bf463ab3fba3e1bfb88ba1e46ef75ea75.scope: Deactivated successfully.
Feb 25 08:23:00 np0005629333 podman[398426]: 2026-02-25 13:23:00.109148518 +0000 UTC m=+0.082895020 container create 8144fd6002ac73e71840f738b45a16399f897adde9f26a39223b5e82b07309d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 08:23:00 np0005629333 podman[398426]: 2026-02-25 13:23:00.062173013 +0000 UTC m=+0.035919485 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:23:00 np0005629333 systemd[1]: Started libpod-conmon-8144fd6002ac73e71840f738b45a16399f897adde9f26a39223b5e82b07309d3.scope.
Feb 25 08:23:00 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:23:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0fbc7cf782ebce48467802c590068fb5718a5a11661c6345a3552a641e1b2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:23:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0fbc7cf782ebce48467802c590068fb5718a5a11661c6345a3552a641e1b2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:23:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0fbc7cf782ebce48467802c590068fb5718a5a11661c6345a3552a641e1b2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:23:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0fbc7cf782ebce48467802c590068fb5718a5a11661c6345a3552a641e1b2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:23:00 np0005629333 podman[398426]: 2026-02-25 13:23:00.278472347 +0000 UTC m=+0.252218769 container init 8144fd6002ac73e71840f738b45a16399f897adde9f26a39223b5e82b07309d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 08:23:00 np0005629333 podman[398426]: 2026-02-25 13:23:00.287229044 +0000 UTC m=+0.260975466 container start 8144fd6002ac73e71840f738b45a16399f897adde9f26a39223b5e82b07309d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_feynman, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 08:23:00 np0005629333 podman[398426]: 2026-02-25 13:23:00.292605706 +0000 UTC m=+0.266352128 container attach 8144fd6002ac73e71840f738b45a16399f897adde9f26a39223b5e82b07309d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_feynman, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 08:23:00 np0005629333 musing_feynman[398442]: {
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:    "0": [
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:        {
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "devices": [
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "/dev/loop3"
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            ],
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "lv_name": "ceph_lv0",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "lv_size": "21470642176",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "name": "ceph_lv0",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "tags": {
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.cluster_name": "ceph",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.crush_device_class": "",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.encrypted": "0",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.objectstore": "bluestore",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.osd_id": "0",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.type": "block",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.vdo": "0",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.with_tpm": "0"
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            },
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "type": "block",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "vg_name": "ceph_vg0"
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:        }
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:    ],
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:    "1": [
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:        {
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "devices": [
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "/dev/loop4"
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            ],
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "lv_name": "ceph_lv1",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "lv_size": "21470642176",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "name": "ceph_lv1",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "tags": {
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.cluster_name": "ceph",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.crush_device_class": "",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.encrypted": "0",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.objectstore": "bluestore",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.osd_id": "1",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.type": "block",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.vdo": "0",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.with_tpm": "0"
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            },
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "type": "block",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "vg_name": "ceph_vg1"
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:        }
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:    ],
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:    "2": [
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:        {
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "devices": [
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "/dev/loop5"
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            ],
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "lv_name": "ceph_lv2",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "lv_size": "21470642176",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "name": "ceph_lv2",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "tags": {
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.cluster_name": "ceph",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.crush_device_class": "",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.encrypted": "0",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.objectstore": "bluestore",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.osd_id": "2",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.type": "block",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.vdo": "0",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:                "ceph.with_tpm": "0"
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            },
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "type": "block",
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:            "vg_name": "ceph_vg2"
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:        }
Feb 25 08:23:00 np0005629333 musing_feynman[398442]:    ]
Feb 25 08:23:00 np0005629333 musing_feynman[398442]: }
Feb 25 08:23:00 np0005629333 systemd[1]: libpod-8144fd6002ac73e71840f738b45a16399f897adde9f26a39223b5e82b07309d3.scope: Deactivated successfully.
Feb 25 08:23:00 np0005629333 podman[398426]: 2026-02-25 13:23:00.592460359 +0000 UTC m=+0.566206751 container died 8144fd6002ac73e71840f738b45a16399f897adde9f26a39223b5e82b07309d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_feynman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:23:00 np0005629333 systemd[1]: var-lib-containers-storage-overlay-0c0fbc7cf782ebce48467802c590068fb5718a5a11661c6345a3552a641e1b2e-merged.mount: Deactivated successfully.
Feb 25 08:23:00 np0005629333 podman[398426]: 2026-02-25 13:23:00.690847166 +0000 UTC m=+0.664593588 container remove 8144fd6002ac73e71840f738b45a16399f897adde9f26a39223b5e82b07309d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_feynman, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 08:23:00 np0005629333 systemd[1]: libpod-conmon-8144fd6002ac73e71840f738b45a16399f897adde9f26a39223b5e82b07309d3.scope: Deactivated successfully.
Feb 25 08:23:01 np0005629333 podman[398527]: 2026-02-25 13:23:01.205612213 +0000 UTC m=+0.069687998 container create 8b1016d548e6d09cea279fce37625a64486d318c093e71cbaec11212c6d38885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:23:01 np0005629333 systemd[1]: Started libpod-conmon-8b1016d548e6d09cea279fce37625a64486d318c093e71cbaec11212c6d38885.scope.
Feb 25 08:23:01 np0005629333 podman[398527]: 2026-02-25 13:23:01.169526535 +0000 UTC m=+0.033602389 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:23:01 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:23:01 np0005629333 nova_compute[244014]: 2026-02-25 13:23:01.290 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:23:01 np0005629333 podman[398527]: 2026-02-25 13:23:01.293940336 +0000 UTC m=+0.158016190 container init 8b1016d548e6d09cea279fce37625a64486d318c093e71cbaec11212c6d38885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_matsumoto, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 08:23:01 np0005629333 podman[398527]: 2026-02-25 13:23:01.304433793 +0000 UTC m=+0.168509587 container start 8b1016d548e6d09cea279fce37625a64486d318c093e71cbaec11212c6d38885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:23:01 np0005629333 stupefied_matsumoto[398543]: 167 167
Feb 25 08:23:01 np0005629333 systemd[1]: libpod-8b1016d548e6d09cea279fce37625a64486d318c093e71cbaec11212c6d38885.scope: Deactivated successfully.
Feb 25 08:23:01 np0005629333 podman[398527]: 2026-02-25 13:23:01.317552273 +0000 UTC m=+0.181628027 container attach 8b1016d548e6d09cea279fce37625a64486d318c093e71cbaec11212c6d38885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_matsumoto, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:23:01 np0005629333 podman[398527]: 2026-02-25 13:23:01.31815497 +0000 UTC m=+0.182230794 container died 8b1016d548e6d09cea279fce37625a64486d318c093e71cbaec11212c6d38885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Feb 25 08:23:01 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d5a6c5e0ff7948e2fd5f7ec918715cf540c574ff85cbddf15c42122a78cd3719-merged.mount: Deactivated successfully.
Feb 25 08:23:01 np0005629333 podman[398527]: 2026-02-25 13:23:01.400244027 +0000 UTC m=+0.264319811 container remove 8b1016d548e6d09cea279fce37625a64486d318c093e71cbaec11212c6d38885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 08:23:01 np0005629333 systemd[1]: libpod-conmon-8b1016d548e6d09cea279fce37625a64486d318c093e71cbaec11212c6d38885.scope: Deactivated successfully.
Feb 25 08:23:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3094: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:23:01 np0005629333 podman[398567]: 2026-02-25 13:23:01.608912706 +0000 UTC m=+0.081201563 container create 92bed0945d180312f79ff0e7dfa19b36ddbd9ce274ac33f8713361cf5d476b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_mccarthy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Feb 25 08:23:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:23:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:23:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:23:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:23:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:23:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:23:01 np0005629333 podman[398567]: 2026-02-25 13:23:01.556535887 +0000 UTC m=+0.028824804 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:23:01 np0005629333 systemd[1]: Started libpod-conmon-92bed0945d180312f79ff0e7dfa19b36ddbd9ce274ac33f8713361cf5d476b12.scope.
Feb 25 08:23:01 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:23:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740e57860a96bf295a8894079a2ba24a1cdb1ff7059cf8d72d8a096cd98f50a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:23:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740e57860a96bf295a8894079a2ba24a1cdb1ff7059cf8d72d8a096cd98f50a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:23:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740e57860a96bf295a8894079a2ba24a1cdb1ff7059cf8d72d8a096cd98f50a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:23:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740e57860a96bf295a8894079a2ba24a1cdb1ff7059cf8d72d8a096cd98f50a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:23:01 np0005629333 podman[398567]: 2026-02-25 13:23:01.717201302 +0000 UTC m=+0.189490179 container init 92bed0945d180312f79ff0e7dfa19b36ddbd9ce274ac33f8713361cf5d476b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_mccarthy, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 08:23:01 np0005629333 podman[398567]: 2026-02-25 13:23:01.723717436 +0000 UTC m=+0.196006303 container start 92bed0945d180312f79ff0e7dfa19b36ddbd9ce274ac33f8713361cf5d476b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_mccarthy, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 08:23:01 np0005629333 podman[398567]: 2026-02-25 13:23:01.763020405 +0000 UTC m=+0.235309242 container attach 92bed0945d180312f79ff0e7dfa19b36ddbd9ce274ac33f8713361cf5d476b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 08:23:02 np0005629333 lvm[398659]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:23:02 np0005629333 lvm[398659]: VG ceph_vg0 finished
Feb 25 08:23:02 np0005629333 lvm[398662]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:23:02 np0005629333 lvm[398662]: VG ceph_vg1 finished
Feb 25 08:23:02 np0005629333 lvm[398664]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:23:02 np0005629333 lvm[398664]: VG ceph_vg2 finished
Feb 25 08:23:02 np0005629333 frosty_mccarthy[398583]: {}
Feb 25 08:23:02 np0005629333 systemd[1]: libpod-92bed0945d180312f79ff0e7dfa19b36ddbd9ce274ac33f8713361cf5d476b12.scope: Deactivated successfully.
Feb 25 08:23:02 np0005629333 podman[398567]: 2026-02-25 13:23:02.531360279 +0000 UTC m=+1.003649146 container died 92bed0945d180312f79ff0e7dfa19b36ddbd9ce274ac33f8713361cf5d476b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_mccarthy, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 08:23:02 np0005629333 systemd[1]: libpod-92bed0945d180312f79ff0e7dfa19b36ddbd9ce274ac33f8713361cf5d476b12.scope: Consumed 1.178s CPU time.
Feb 25 08:23:02 np0005629333 systemd[1]: var-lib-containers-storage-overlay-740e57860a96bf295a8894079a2ba24a1cdb1ff7059cf8d72d8a096cd98f50a6-merged.mount: Deactivated successfully.
Feb 25 08:23:02 np0005629333 podman[398567]: 2026-02-25 13:23:02.702586901 +0000 UTC m=+1.174875758 container remove 92bed0945d180312f79ff0e7dfa19b36ddbd9ce274ac33f8713361cf5d476b12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_mccarthy, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:23:02 np0005629333 systemd[1]: libpod-conmon-92bed0945d180312f79ff0e7dfa19b36ddbd9ce274ac33f8713361cf5d476b12.scope: Deactivated successfully.
Feb 25 08:23:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:23:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:23:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:23:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:23:03 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:23:03 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:23:03 np0005629333 nova_compute[244014]: 2026-02-25 13:23:03.549 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:23:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3095: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:23:03 np0005629333 nova_compute[244014]: 2026-02-25 13:23:03.898 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:23:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:23:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3096: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:23:05 np0005629333 nova_compute[244014]: 2026-02-25 13:23:05.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:23:05 np0005629333 nova_compute[244014]: 2026-02-25 13:23:05.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 08:23:06 np0005629333 nova_compute[244014]: 2026-02-25 13:23:06.294 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:23:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3097: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:23:07 np0005629333 nova_compute[244014]: 2026-02-25 13:23:07.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:23:08 np0005629333 nova_compute[244014]: 2026-02-25 13:23:08.553 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:23:08 np0005629333 nova_compute[244014]: 2026-02-25 13:23:08.881 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:23:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:23:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3098: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:23:11 np0005629333 nova_compute[244014]: 2026-02-25 13:23:11.295 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:23:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3099: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:23:12 np0005629333 nova_compute[244014]: 2026-02-25 13:23:12.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:23:13 np0005629333 nova_compute[244014]: 2026-02-25 13:23:13.557 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:23:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3100: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:23:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:23:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3101: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:23:16 np0005629333 nova_compute[244014]: 2026-02-25 13:23:16.300 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:23:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3102: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:23:18 np0005629333 nova_compute[244014]: 2026-02-25 13:23:18.562 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:23:18 np0005629333 nova_compute[244014]: 2026-02-25 13:23:18.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.393534) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025799393632, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 1011, "num_deletes": 256, "total_data_size": 1489277, "memory_usage": 1513168, "flush_reason": "Manual Compaction"}
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025799526941, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 1442145, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64543, "largest_seqno": 65553, "table_properties": {"data_size": 1437199, "index_size": 2469, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10513, "raw_average_key_size": 19, "raw_value_size": 1427275, "raw_average_value_size": 2609, "num_data_blocks": 111, "num_entries": 547, "num_filter_entries": 547, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772025710, "oldest_key_time": 1772025710, "file_creation_time": 1772025799, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 133512 microseconds, and 5296 cpu microseconds.
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:23:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3103: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.527048) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 1442145 bytes OK
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.527095) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.578287) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.578362) EVENT_LOG_v1 {"time_micros": 1772025799578344, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.578411) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 1484464, prev total WAL file size 1484464, number of live WAL files 2.
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.579532) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373636' seq:72057594037927935, type:22 .. '6C6F676D0033303138' seq:0, type:0; will stop at (end)
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(1408KB)], [152(9333KB)]
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025799579596, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 10999516, "oldest_snapshot_seqno": -1}
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 8367 keys, 10888073 bytes, temperature: kUnknown
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025799853221, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 10888073, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10834439, "index_size": 31633, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20933, "raw_key_size": 218825, "raw_average_key_size": 26, "raw_value_size": 10687356, "raw_average_value_size": 1277, "num_data_blocks": 1234, "num_entries": 8367, "num_filter_entries": 8367, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772025799, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.853672) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 10888073 bytes
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.906080) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 40.2 rd, 39.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.1 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(15.2) write-amplify(7.5) OK, records in: 8890, records dropped: 523 output_compression: NoCompression
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.906123) EVENT_LOG_v1 {"time_micros": 1772025799906104, "job": 94, "event": "compaction_finished", "compaction_time_micros": 273813, "compaction_time_cpu_micros": 39349, "output_level": 6, "num_output_files": 1, "total_output_size": 10888073, "num_input_records": 8890, "num_output_records": 8367, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025799906835, "job": 94, "event": "table_file_deletion", "file_number": 154}
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025799909018, "job": 94, "event": "table_file_deletion", "file_number": 152}
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.579433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.909176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.909183) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.909185) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.909187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:23:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:19.909189) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
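The job-94 summary above reports MB/sec 40.2 rd / 39.8 wr with read-write-amplify(15.2) and write-amplify(7.5); those figures follow directly from the byte counts logged in the same job. A minimal sketch reproducing them from the logged values (the formulas are inferred to match RocksDB's printed numbers, not lifted from its source):

    # Reproduce RocksDB's job-94 compaction stats from the values logged above.
    KB = 1024
    l0_in = 1408 * KB          # input file 154 at level 0 ("1408KB")
    l6_in = 9333 * KB          # input file 152 at level 6 ("9333KB")
    out   = 10888073           # total_output_size from compaction_finished
    secs  = 273813 / 1e6       # compaction_time_micros

    read_mb_s  = (l0_in + l6_in) / secs / 1e6   # ~40.2 rd
    write_mb_s = out / secs / 1e6               # ~39.8 wr
    write_amp  = out / l0_in                    # ~7.5: bytes written per L0 input byte
    rw_amp     = (l0_in + l6_in + out) / l0_in  # ~15.2: all compaction I/O per L0 input byte

    print(f"MB/sec: {read_mb_s:.1f} rd, {write_mb_s:.1f} wr, "
          f"read-write-amplify({rw_amp:.1f}) write-amplify({write_amp:.1f})")

The high amplification is expected for this pattern: the monitor repeatedly compacts a small L0 flush into a single ~10 MB L6 file, so every small write rewrites the whole level (job 96 below shows the same effect more sharply, at write-amplify 44.5).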
Feb 25 08:23:21 np0005629333 nova_compute[244014]: 2026-02-25 13:23:21.302 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:23:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3104: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:23:23 np0005629333 nova_compute[244014]: 2026-02-25 13:23:23.566 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:23:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3105: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:23:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:23:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3106: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:23:25 np0005629333 podman[398707]: 2026-02-25 13:23:25.769465211 +0000 UTC m=+0.101773913 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
Feb 25 08:23:25 np0005629333 podman[398708]: 2026-02-25 13:23:25.800831186 +0000 UTC m=+0.133265562 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
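The two podman records above are periodic healthcheck transcriptions for ovn_metadata_agent and ovn_controller (health_status=healthy, failing streak 0); the configured probe is the mounted /openstack/healthcheck script. The same check can be run on demand; a small sketch using podman's standard healthcheck subcommand, where exit code 0 means healthy:

    import subprocess

    def container_healthy(name: str) -> bool:
        # `podman healthcheck run NAME` executes the container's configured
        # healthcheck (here: /openstack/healthcheck) and exits 0 on success.
        res = subprocess.run(["podman", "healthcheck", "run", name],
                             capture_output=True, text=True)
        return res.returncode == 0

    for name in ("ovn_metadata_agent", "ovn_controller"):
        print(name, "healthy" if container_healthy(name) else "unhealthy")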
Feb 25 08:23:26 np0005629333 nova_compute[244014]: 2026-02-25 13:23:26.304 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:23:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3107: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:23:28 np0005629333 nova_compute[244014]: 2026-02-25 13:23:28.571 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:23:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:23:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3108: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:23:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:23:31
Feb 25 08:23:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:23:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:23:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['vms', 'backups', '.mgr', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'default.rgw.log']
Feb 25 08:23:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
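The balancer pass above ran in upmap mode, capped at 5% misplaced PGs, and prepared 0 of at most 10 upmap changes: with all 305 PGs active+clean there is nothing to move. The module's state can be inspected from the CLI at any time; a one-line sketch:

    import subprocess

    # `ceph balancer status` reports the mode ("upmap"), whether the module is
    # active, and the timestamps of the last optimize pass (cf. plan
    # auto_2026-02-25_13:23:31 above).
    print(subprocess.check_output(["ceph", "balancer", "status"], text=True))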
Feb 25 08:23:31 np0005629333 nova_compute[244014]: 2026-02-25 13:23:31.307 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:23:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3109: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:23:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:23:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:23:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:23:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:23:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:23:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:23:31 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Feb 25 08:23:31 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:31.825346) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:23:31 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Feb 25 08:23:31 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025811825428, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 349, "num_deletes": 251, "total_data_size": 211202, "memory_usage": 217864, "flush_reason": "Manual Compaction"}
Feb 25 08:23:31 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Feb 25 08:23:31 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025811847803, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 209711, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65554, "largest_seqno": 65902, "table_properties": {"data_size": 207478, "index_size": 396, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5415, "raw_average_key_size": 18, "raw_value_size": 203173, "raw_average_value_size": 693, "num_data_blocks": 18, "num_entries": 293, "num_filter_entries": 293, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772025800, "oldest_key_time": 1772025800, "file_creation_time": 1772025811, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:23:31 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 22489 microseconds, and 1367 cpu microseconds.
Feb 25 08:23:31 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:23:31 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:31.847847) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 209711 bytes OK
Feb 25 08:23:31 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:31.847883) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Feb 25 08:23:31 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:31.893260) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Feb 25 08:23:31 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:31.893307) EVENT_LOG_v1 {"time_micros": 1772025811893297, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:23:31 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:31.893337) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:23:31 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 208844, prev total WAL file size 208844, number of live WAL files 2.
Feb 25 08:23:31 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:23:31 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:31.893944) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Feb 25 08:23:31 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:23:31 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(204KB)], [155(10MB)]
Feb 25 08:23:31 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025811894044, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 11097784, "oldest_snapshot_seqno": -1}
Feb 25 08:23:32 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 8150 keys, 9334176 bytes, temperature: kUnknown
Feb 25 08:23:32 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025812013842, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 9334176, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9283403, "index_size": 29284, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20421, "raw_key_size": 214997, "raw_average_key_size": 26, "raw_value_size": 9141514, "raw_average_value_size": 1121, "num_data_blocks": 1126, "num_entries": 8150, "num_filter_entries": 8150, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772025811, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:23:32 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:23:32 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:32.014157) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 9334176 bytes
Feb 25 08:23:32 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:32.047538) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 92.6 rd, 77.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.4 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(97.4) write-amplify(44.5) OK, records in: 8660, records dropped: 510 output_compression: NoCompression
Feb 25 08:23:32 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:32.047584) EVENT_LOG_v1 {"time_micros": 1772025812047566, "job": 96, "event": "compaction_finished", "compaction_time_micros": 119886, "compaction_time_cpu_micros": 33449, "output_level": 6, "num_output_files": 1, "total_output_size": 9334176, "num_input_records": 8660, "num_output_records": 8150, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 08:23:32 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:23:32 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025812047910, "job": 96, "event": "table_file_deletion", "file_number": 157}
Feb 25 08:23:32 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:23:32 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772025812049601, "job": 96, "event": "table_file_deletion", "file_number": 155}
Feb 25 08:23:32 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:31.893806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:23:32 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:32.049733) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:23:32 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:32.049741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:23:32 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:32.049746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:23:32 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:32.049750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:23:32 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:23:32.049754) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
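Jobs 95 and 96 above repeat the flush-then-manual-compaction cycle of jobs 93/94. The EVENT_LOG_v1 payloads embedded in these lines are plain JSON, so the cycle can be trended straight out of the journal; a minimal parser sketch (regex and event names taken from the lines above):

    import json, re, sys

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})\s*$")

    # Feed journal text on stdin, e.g.:
    #   journalctl -t ceph-mon | python3 rocksdb_events.py
    for line in sys.stdin:
        m = EVENT.search(line)
        if not m:
            continue
        ev = json.loads(m.group(1))
        if ev.get("event") == "compaction_finished":
            print(ev["job"], "compaction", ev["compaction_time_micros"], "us,",
                  ev["total_output_size"], "bytes, lsm_state", ev["lsm_state"])
        elif ev.get("event") == "flush_finished":
            print(ev["job"], "flush, lsm_state", ev["lsm_state"])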
Feb 25 08:23:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:23:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:23:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:23:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:23:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:23:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:23:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:23:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:23:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:23:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:23:32 np0005629333 nova_compute[244014]: 2026-02-25 13:23:32.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:23:33 np0005629333 nova_compute[244014]: 2026-02-25 13:23:33.573 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:23:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3110: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:23:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:23:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3111: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:23:36 np0005629333 nova_compute[244014]: 2026-02-25 13:23:36.310 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:23:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3112: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:23:38 np0005629333 nova_compute[244014]: 2026-02-25 13:23:38.578 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:23:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:23:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3113: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:23:41 np0005629333 nova_compute[244014]: 2026-02-25 13:23:41.311 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:23:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3114: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:23:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:23:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:23:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:23:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:23:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:23:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:23:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:23:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:23:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:23:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:23:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:23:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:23:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:23:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:23:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:23:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:23:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:23:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:23:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:23:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:23:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:23:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:23:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
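Each pg_autoscaler line above is usage × bias × a capacity factor, and with the logged values that factor works out to exactly 300, which matches mon_target_pg_per_osd at its default of 100 across 3 OSDs; both figures are assumptions here (the OSD count is inferred from the 60 GiB cluster, it is not in these lines). A worked sketch reproducing two of the targets; the final quantization (staying at the current 32/16/1) is the autoscaler's damping, which leaves pg_num alone unless the ideal value is off by more than a threshold factor:

    # Reproduce the "pg target" figures in the pg_autoscaler lines above.
    TARGET_PG_PER_OSD = 100   # mon_target_pg_per_osd default (assumption)
    NUM_OSDS = 3              # inferred from the 60 GiB cluster, not logged here

    def pg_target(usage_ratio: float, bias: float) -> float:
        return usage_ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS

    # Pool 'images': using 0.0006714637386478266 of space, bias 1.0
    print(pg_target(0.0006714637386478266, 1.0))   # ~0.2014, as logged
    # Pool 'cephfs.cephfs.meta': using 1.3916366864300228e-06 of space, bias 4.0
    print(pg_target(1.3916366864300228e-06, 4.0))  # ~0.00167, as logged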
Feb 25 08:23:43 np0005629333 nova_compute[244014]: 2026-02-25 13:23:43.580 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:23:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3115: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:23:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:23:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3116: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:23:46 np0005629333 nova_compute[244014]: 2026-02-25 13:23:46.314 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:23:46 np0005629333 nova_compute[244014]: 2026-02-25 13:23:46.907 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:23:46 np0005629333 nova_compute[244014]: 2026-02-25 13:23:46.908 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 25 08:23:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:23:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/259004825' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:23:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:23:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/259004825' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
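The audit lines above show client.openstack at 192.168.122.10 dispatching 'df' and 'osd pool get-quota' as JSON-encoded monitor commands (this is the Cinder side sizing its volumes pool). The same calls can be issued through the librados Python binding; a minimal sketch, assuming the client.openstack keyring is readable:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    try:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes",
                     "format": "json"}):
            # mon_command() sends the JSON verbatim; the mon records it as the
            # handle_command/audit pair seen above.
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], "->", ret, errs or "ok")
    finally:
        cluster.shutdown()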
Feb 25 08:23:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3117: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:23:48 np0005629333 nova_compute[244014]: 2026-02-25 13:23:48.584 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:23:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:23:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3118: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:23:51 np0005629333 nova_compute[244014]: 2026-02-25 13:23:51.316 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:23:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3119: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:23:51 np0005629333 nova_compute[244014]: 2026-02-25 13:23:51.889 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:23:51 np0005629333 nova_compute[244014]: 2026-02-25 13:23:51.890 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 25 08:23:51 np0005629333 nova_compute[244014]: 2026-02-25 13:23:51.903 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
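The _run_pending_deletes entries above (and _cleanup_incomplete_migrations further up) are oslo.service periodic tasks firing inside nova-compute; every pass produces one "Running periodic task ..." DEBUG line from run_periodic_tasks. The registration mechanism is a decorator; a standalone sketch of the same pattern (the Manager class and the 600 s spacing are illustrative, not nova's actual values):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=600)
        def _run_pending_deletes(self, context):
            # nova's version logs "Cleaning up deleted instances" and then
            # "There are N instances to clean", as seen above.
            print("cleaning up deleted instances")

    mgr = Manager()
    # A service loop invokes this on a timer; each due task yields one
    # "Running periodic task ..." line.
    mgr.run_periodic_tasks(context=None, raise_on_error=False)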
Feb 25 08:23:53 np0005629333 nova_compute[244014]: 2026-02-25 13:23:53.588 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:23:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3120: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:23:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:23:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:23:55.066 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:23:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:23:55.066 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:23:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:23:55.067 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
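The ovn_metadata_agent trio above (Acquiring / acquired / released, with wait and hold times) is oslo.concurrency's standard lock trace. It is produced by wrapping a callable with lockutils; a minimal sketch with the lock name copied from the log:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # While this body runs, other users of the same lock name block; with
        # DEBUG logging enabled, oslo emits the Acquiring/acquired/released
        # lines seen above, including the waited/held durations.
        pass

    check_child_processes()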
Feb 25 08:23:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3121: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:23:55 np0005629333 nova_compute[244014]: 2026-02-25 13:23:55.890 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:23:55 np0005629333 nova_compute[244014]: 2026-02-25 13:23:55.891 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:23:55 np0005629333 nova_compute[244014]: 2026-02-25 13:23:55.920 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:23:55 np0005629333 nova_compute[244014]: 2026-02-25 13:23:55.921 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:23:55 np0005629333 nova_compute[244014]: 2026-02-25 13:23:55.921 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:23:55 np0005629333 nova_compute[244014]: 2026-02-25 13:23:55.921 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:23:55 np0005629333 nova_compute[244014]: 2026-02-25 13:23:55.923 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:23:56 np0005629333 nova_compute[244014]: 2026-02-25 13:23:56.318 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:23:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:23:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2115060889' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:23:56 np0005629333 nova_compute[244014]: 2026-02-25 13:23:56.540 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:23:56 np0005629333 podman[398771]: 2026-02-25 13:23:56.734652941 +0000 UTC m=+0.073385242 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent)
Feb 25 08:23:56 np0005629333 nova_compute[244014]: 2026-02-25 13:23:56.799 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:23:56 np0005629333 nova_compute[244014]: 2026-02-25 13:23:56.801 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3601MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:23:56 np0005629333 nova_compute[244014]: 2026-02-25 13:23:56.802 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:23:56 np0005629333 nova_compute[244014]: 2026-02-25 13:23:56.802 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:23:56 np0005629333 podman[398772]: 2026-02-25 13:23:56.806856769 +0000 UTC m=+0.145933680 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:23:56 np0005629333 nova_compute[244014]: 2026-02-25 13:23:56.866 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:23:56 np0005629333 nova_compute[244014]: 2026-02-25 13:23:56.867 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:23:56 np0005629333 nova_compute[244014]: 2026-02-25 13:23:56.884 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:23:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:23:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3811398553' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:23:57 np0005629333 nova_compute[244014]: 2026-02-25 13:23:57.453 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:23:57 np0005629333 nova_compute[244014]: 2026-02-25 13:23:57.459 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:23:57 np0005629333 nova_compute[244014]: 2026-02-25 13:23:57.471 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:23:57 np0005629333 nova_compute[244014]: 2026-02-25 13:23:57.473 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:23:57 np0005629333 nova_compute[244014]: 2026-02-25 13:23:57.473 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
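During update_available_resource the libvirt driver shells out to the exact command logged above to size the shared Ceph storage (hence free_disk 59.98 GB in the hypervisor view). A short sketch reproducing that probe; the field names follow the `ceph df --format=json` schema, and the GiB math is the obvious conversion rather than nova's literal code:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]

    GiB = 1024 ** 3
    total = stats["total_bytes"] / GiB
    avail = stats["total_avail_bytes"] / GiB
    # Matches the pgmap lines above: "59 GiB / 60 GiB avail".
    print(f"{avail:.2f} GiB free of {total:.2f} GiB")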
Feb 25 08:23:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3122: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:23:58 np0005629333 nova_compute[244014]: 2026-02-25 13:23:58.591 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:23:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:23:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3123: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:01 np0005629333 nova_compute[244014]: 2026-02-25 13:24:01.320 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:24:01 np0005629333 nova_compute[244014]: 2026-02-25 13:24:01.458 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:24:01 np0005629333 nova_compute[244014]: 2026-02-25 13:24:01.459 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:24:01 np0005629333 nova_compute[244014]: 2026-02-25 13:24:01.459 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:24:01 np0005629333 nova_compute[244014]: 2026-02-25 13:24:01.474 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:24:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3124: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:24:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:24:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:24:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:24:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:24:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:24:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 25 08:24:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 08:24:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:24:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:24:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:24:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:24:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:24:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:24:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:24:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:24:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:24:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:24:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:24:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:24:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3125: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:03 np0005629333 nova_compute[244014]: 2026-02-25 13:24:03.653 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:24:03 np0005629333 nova_compute[244014]: 2026-02-25 13:24:03.887 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:24:03 np0005629333 podman[398986]: 2026-02-25 13:24:03.974486265 +0000 UTC m=+0.039523026 container create 289987ea426b35155fd87d2d9f642bf68fcc8d0960b5e892c86144f0dfe0eb5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ritchie, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:24:04 np0005629333 systemd[1]: Started libpod-conmon-289987ea426b35155fd87d2d9f642bf68fcc8d0960b5e892c86144f0dfe0eb5e.scope.
Feb 25 08:24:04 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:24:04 np0005629333 podman[398986]: 2026-02-25 13:24:03.952021531 +0000 UTC m=+0.017058292 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:24:04 np0005629333 podman[398986]: 2026-02-25 13:24:04.048952617 +0000 UTC m=+0.113989358 container init 289987ea426b35155fd87d2d9f642bf68fcc8d0960b5e892c86144f0dfe0eb5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ritchie, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:24:04 np0005629333 podman[398986]: 2026-02-25 13:24:04.056367386 +0000 UTC m=+0.121404117 container start 289987ea426b35155fd87d2d9f642bf68fcc8d0960b5e892c86144f0dfe0eb5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 08:24:04 np0005629333 podman[398986]: 2026-02-25 13:24:04.060067641 +0000 UTC m=+0.125104392 container attach 289987ea426b35155fd87d2d9f642bf68fcc8d0960b5e892c86144f0dfe0eb5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ritchie, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 08:24:04 np0005629333 quizzical_ritchie[399002]: 167 167
Feb 25 08:24:04 np0005629333 systemd[1]: libpod-289987ea426b35155fd87d2d9f642bf68fcc8d0960b5e892c86144f0dfe0eb5e.scope: Deactivated successfully.
Feb 25 08:24:04 np0005629333 podman[398986]: 2026-02-25 13:24:04.062504169 +0000 UTC m=+0.127540930 container died 289987ea426b35155fd87d2d9f642bf68fcc8d0960b5e892c86144f0dfe0eb5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 08:24:04 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 08:24:04 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:24:04 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:24:04 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:24:04 np0005629333 systemd[1]: var-lib-containers-storage-overlay-44b60101a2870ff2af998998ab2ba707f9a5a232da5652c85be0ea1fdea847cc-merged.mount: Deactivated successfully.
Feb 25 08:24:04 np0005629333 podman[398986]: 2026-02-25 13:24:04.115134875 +0000 UTC m=+0.180171606 container remove 289987ea426b35155fd87d2d9f642bf68fcc8d0960b5e892c86144f0dfe0eb5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_ritchie, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:24:04 np0005629333 systemd[1]: libpod-conmon-289987ea426b35155fd87d2d9f642bf68fcc8d0960b5e892c86144f0dfe0eb5e.scope: Deactivated successfully.
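
The quizzical_ritchie lifecycle above (create, init, start, attach, died, remove within roughly 150 ms) is a short-lived helper container; its only output is "167 167", which matches the ceph uid/gid used by the CentOS Stream based ceph images. A hypothetical reconstruction of such a probe, assuming podman on PATH and stat as the entrypoint (neither command line appears in the log):

    import subprocess

    # Image digest copied verbatim from the podman lines above.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # Assumed probe: stat the ceph data directory inside the image and read
    # its owner. On these images the expected output is "167 167".
    result = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True)
    uid, gid = (int(x) for x in result.stdout.split())
    print(f"ceph uid={uid} gid={gid}")
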
Feb 25 08:24:04 np0005629333 podman[399026]: 2026-02-25 13:24:04.285332198 +0000 UTC m=+0.055435595 container create af9c50dcbc01c5a85cf56321867734fa086b8525b2c3d9144cfed2ec129e9222 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_thompson, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:24:04 np0005629333 systemd[1]: Started libpod-conmon-af9c50dcbc01c5a85cf56321867734fa086b8525b2c3d9144cfed2ec129e9222.scope.
Feb 25 08:24:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:24:04 np0005629333 podman[399026]: 2026-02-25 13:24:04.261889017 +0000 UTC m=+0.031992434 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:24:04 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:24:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3852ca26faa144afb4580bce0f43527ae8c812ecc1e72f8d1577641f02294b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:24:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3852ca26faa144afb4580bce0f43527ae8c812ecc1e72f8d1577641f02294b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:24:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3852ca26faa144afb4580bce0f43527ae8c812ecc1e72f8d1577641f02294b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:24:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3852ca26faa144afb4580bce0f43527ae8c812ecc1e72f8d1577641f02294b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:24:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3852ca26faa144afb4580bce0f43527ae8c812ecc1e72f8d1577641f02294b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:24:04 np0005629333 podman[399026]: 2026-02-25 13:24:04.41224373 +0000 UTC m=+0.182347187 container init af9c50dcbc01c5a85cf56321867734fa086b8525b2c3d9144cfed2ec129e9222 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 08:24:04 np0005629333 podman[399026]: 2026-02-25 13:24:04.421510281 +0000 UTC m=+0.191613678 container start af9c50dcbc01c5a85cf56321867734fa086b8525b2c3d9144cfed2ec129e9222 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_thompson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:24:04 np0005629333 podman[399026]: 2026-02-25 13:24:04.425645038 +0000 UTC m=+0.195748445 container attach af9c50dcbc01c5a85cf56321867734fa086b8525b2c3d9144cfed2ec129e9222 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_thompson, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:24:04 np0005629333 serene_thompson[399043]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:24:04 np0005629333 serene_thompson[399043]: --> All data devices are unavailable
Feb 25 08:24:04 np0005629333 systemd[1]: libpod-af9c50dcbc01c5a85cf56321867734fa086b8525b2c3d9144cfed2ec129e9222.scope: Deactivated successfully.
Feb 25 08:24:04 np0005629333 podman[399026]: 2026-02-25 13:24:04.917225932 +0000 UTC m=+0.687329339 container died af9c50dcbc01c5a85cf56321867734fa086b8525b2c3d9144cfed2ec129e9222 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_thompson, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 08:24:04 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e3852ca26faa144afb4580bce0f43527ae8c812ecc1e72f8d1577641f02294b6-merged.mount: Deactivated successfully.
Feb 25 08:24:04 np0005629333 podman[399026]: 2026-02-25 13:24:04.960685128 +0000 UTC m=+0.730788495 container remove af9c50dcbc01c5a85cf56321867734fa086b8525b2c3d9144cfed2ec129e9222 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_thompson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 08:24:04 np0005629333 systemd[1]: libpod-conmon-af9c50dcbc01c5a85cf56321867734fa086b8525b2c3d9144cfed2ec129e9222.scope: Deactivated successfully.
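
serene_thompson's "passed data devices: 0 physical, 3 LVM" followed by "All data devices are unavailable" is a ceph-volume batch-style dry run concluding there is nothing new to deploy: the three LVs listed further down are already consumed by OSDs. One way to cross-check what the orchestrator considers usable; "ceph orch device ls" is real cephadm CLI, but the JSON field names below (addr, devices, path, available, rejected_reasons) are assumptions about its output shape:

    import json
    import subprocess

    # Dump the orchestrator's device inventory and print availability.
    out = subprocess.run(
        ["ceph", "orch", "device", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for host in json.loads(out):                       # field names assumed
        for dev in host.get("devices", []):
            state = "available" if dev.get("available") else "unavailable"
            print(host.get("addr"), dev.get("path"), state,
                  dev.get("rejected_reasons", []))
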
Feb 25 08:24:05 np0005629333 podman[399137]: 2026-02-25 13:24:05.413856358 +0000 UTC m=+0.057369780 container create 4fbc0f1e87bcc2f4aa34e6ec721be8f233ff2ef29b538128c96190f78140ad77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_shannon, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:24:05 np0005629333 systemd[1]: Started libpod-conmon-4fbc0f1e87bcc2f4aa34e6ec721be8f233ff2ef29b538128c96190f78140ad77.scope.
Feb 25 08:24:05 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:24:05 np0005629333 podman[399137]: 2026-02-25 13:24:05.392373452 +0000 UTC m=+0.035886954 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:24:05 np0005629333 podman[399137]: 2026-02-25 13:24:05.499404642 +0000 UTC m=+0.142918114 container init 4fbc0f1e87bcc2f4aa34e6ec721be8f233ff2ef29b538128c96190f78140ad77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 08:24:05 np0005629333 podman[399137]: 2026-02-25 13:24:05.507926543 +0000 UTC m=+0.151439965 container start 4fbc0f1e87bcc2f4aa34e6ec721be8f233ff2ef29b538128c96190f78140ad77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_shannon, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:24:05 np0005629333 hopeful_shannon[399154]: 167 167
Feb 25 08:24:05 np0005629333 systemd[1]: libpod-4fbc0f1e87bcc2f4aa34e6ec721be8f233ff2ef29b538128c96190f78140ad77.scope: Deactivated successfully.
Feb 25 08:24:05 np0005629333 podman[399137]: 2026-02-25 13:24:05.517765281 +0000 UTC m=+0.161278783 container attach 4fbc0f1e87bcc2f4aa34e6ec721be8f233ff2ef29b538128c96190f78140ad77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_shannon, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 08:24:05 np0005629333 podman[399137]: 2026-02-25 13:24:05.518376788 +0000 UTC m=+0.161890240 container died 4fbc0f1e87bcc2f4aa34e6ec721be8f233ff2ef29b538128c96190f78140ad77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 08:24:05 np0005629333 systemd[1]: var-lib-containers-storage-overlay-2ed51cc6b16cd23d0944517a165eea8e90f81bd813949b9449eca53247fd71e6-merged.mount: Deactivated successfully.
Feb 25 08:24:05 np0005629333 podman[399137]: 2026-02-25 13:24:05.560477916 +0000 UTC m=+0.203991338 container remove 4fbc0f1e87bcc2f4aa34e6ec721be8f233ff2ef29b538128c96190f78140ad77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:24:05 np0005629333 systemd[1]: libpod-conmon-4fbc0f1e87bcc2f4aa34e6ec721be8f233ff2ef29b538128c96190f78140ad77.scope: Deactivated successfully.
Feb 25 08:24:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3126: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:05 np0005629333 podman[399178]: 2026-02-25 13:24:05.747258208 +0000 UTC m=+0.050664471 container create a44de9a866dda1292a60df40787c19e0c67d12f1b70658b6ba14d30093c4da26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:24:05 np0005629333 systemd[1]: Started libpod-conmon-a44de9a866dda1292a60df40787c19e0c67d12f1b70658b6ba14d30093c4da26.scope.
Feb 25 08:24:05 np0005629333 podman[399178]: 2026-02-25 13:24:05.722633743 +0000 UTC m=+0.026040066 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:24:05 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:24:05 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b9457f49f48fa19129a19a637dc634327f06ad02c532716b78e89ebb0c641d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:24:05 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b9457f49f48fa19129a19a637dc634327f06ad02c532716b78e89ebb0c641d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:24:05 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b9457f49f48fa19129a19a637dc634327f06ad02c532716b78e89ebb0c641d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:24:05 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b9457f49f48fa19129a19a637dc634327f06ad02c532716b78e89ebb0c641d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:24:05 np0005629333 podman[399178]: 2026-02-25 13:24:05.8614248 +0000 UTC m=+0.164831123 container init a44de9a866dda1292a60df40787c19e0c67d12f1b70658b6ba14d30093c4da26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_babbage, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 08:24:05 np0005629333 podman[399178]: 2026-02-25 13:24:05.868109748 +0000 UTC m=+0.171515981 container start a44de9a866dda1292a60df40787c19e0c67d12f1b70658b6ba14d30093c4da26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_babbage, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:24:05 np0005629333 podman[399178]: 2026-02-25 13:24:05.87276676 +0000 UTC m=+0.176173003 container attach a44de9a866dda1292a60df40787c19e0c67d12f1b70658b6ba14d30093c4da26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_babbage, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 08:24:05 np0005629333 nova_compute[244014]: 2026-02-25 13:24:05.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:24:05 np0005629333 nova_compute[244014]: 2026-02-25 13:24:05.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 08:24:06 np0005629333 focused_babbage[399195]: {
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:    "0": [
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:        {
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "devices": [
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "/dev/loop3"
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            ],
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "lv_name": "ceph_lv0",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "lv_size": "21470642176",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "name": "ceph_lv0",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "tags": {
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.cluster_name": "ceph",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.crush_device_class": "",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.encrypted": "0",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.objectstore": "bluestore",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.osd_id": "0",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.type": "block",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.vdo": "0",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.with_tpm": "0"
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            },
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "type": "block",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "vg_name": "ceph_vg0"
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:        }
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:    ],
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:    "1": [
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:        {
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "devices": [
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "/dev/loop4"
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            ],
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "lv_name": "ceph_lv1",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "lv_size": "21470642176",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "name": "ceph_lv1",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "tags": {
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.cluster_name": "ceph",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.crush_device_class": "",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.encrypted": "0",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.objectstore": "bluestore",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.osd_id": "1",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.type": "block",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.vdo": "0",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.with_tpm": "0"
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            },
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "type": "block",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "vg_name": "ceph_vg1"
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:        }
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:    ],
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:    "2": [
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:        {
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "devices": [
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "/dev/loop5"
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            ],
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "lv_name": "ceph_lv2",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "lv_size": "21470642176",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "name": "ceph_lv2",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "tags": {
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.cluster_name": "ceph",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.crush_device_class": "",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.encrypted": "0",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.objectstore": "bluestore",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.osd_id": "2",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.type": "block",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.vdo": "0",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:                "ceph.with_tpm": "0"
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            },
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "type": "block",
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:            "vg_name": "ceph_vg2"
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:        }
Feb 25 08:24:06 np0005629333 focused_babbage[399195]:    ]
Feb 25 08:24:06 np0005629333 focused_babbage[399195]: }
Feb 25 08:24:06 np0005629333 systemd[1]: libpod-a44de9a866dda1292a60df40787c19e0c67d12f1b70658b6ba14d30093c4da26.scope: Deactivated successfully.
Feb 25 08:24:06 np0005629333 podman[399178]: 2026-02-25 13:24:06.182551693 +0000 UTC m=+0.485958056 container died a44de9a866dda1292a60df40787c19e0c67d12f1b70658b6ba14d30093c4da26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_babbage, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:24:06 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6b9457f49f48fa19129a19a637dc634327f06ad02c532716b78e89ebb0c641d6-merged.mount: Deactivated successfully.
Feb 25 08:24:06 np0005629333 podman[399178]: 2026-02-25 13:24:06.221041359 +0000 UTC m=+0.524447592 container remove a44de9a866dda1292a60df40787c19e0c67d12f1b70658b6ba14d30093c4da26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_babbage, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:24:06 np0005629333 systemd[1]: libpod-conmon-a44de9a866dda1292a60df40787c19e0c67d12f1b70658b6ba14d30093c4da26.scope: Deactivated successfully.
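
The JSON block printed by focused_babbage maps OSD ids ("0", "1", "2") to their backing logical volumes and LVM tags; it has the shape of "ceph-volume lvm list --format json" output (an inference — the log never shows the command line). A minimal parser for that structure:

    import json

    def osd_summary(lvm_list_json: str) -> dict:
        """Map osd_id -> lv_path/osd_fsid/devices from a blob like the one above."""
        out = {}
        for osd_id, lvs in json.loads(lvm_list_json).items():
            for lv in lvs:
                tags = lv.get("tags", {})
                out[int(osd_id)] = {
                    "lv_path": lv["lv_path"],          # e.g. /dev/ceph_vg0/ceph_lv0
                    "osd_fsid": tags.get("ceph.osd_fsid"),
                    "devices": lv.get("devices", []),  # e.g. ["/dev/loop3"]
                }
        return out

Run against the blob above, this yields OSDs 0, 1 and 2 on /dev/loop3, /dev/loop4 and /dev/loop5, all tagged with cluster fsid 8ac33163-6221-5d58-9a39-8b6933fe7762.
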
Feb 25 08:24:06 np0005629333 nova_compute[244014]: 2026-02-25 13:24:06.322 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:24:06 np0005629333 podman[399279]: 2026-02-25 13:24:06.737306318 +0000 UTC m=+0.060210870 container create 262c2c334c7354702e91cbf9f91abc02ca7f7d8efba90f4131aa4931727f2232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 08:24:06 np0005629333 systemd[1]: Started libpod-conmon-262c2c334c7354702e91cbf9f91abc02ca7f7d8efba90f4131aa4931727f2232.scope.
Feb 25 08:24:06 np0005629333 podman[399279]: 2026-02-25 13:24:06.710313256 +0000 UTC m=+0.033217888 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:24:06 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:24:06 np0005629333 podman[399279]: 2026-02-25 13:24:06.82738003 +0000 UTC m=+0.150284662 container init 262c2c334c7354702e91cbf9f91abc02ca7f7d8efba90f4131aa4931727f2232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ardinghelli, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:24:06 np0005629333 podman[399279]: 2026-02-25 13:24:06.835992163 +0000 UTC m=+0.158896715 container start 262c2c334c7354702e91cbf9f91abc02ca7f7d8efba90f4131aa4931727f2232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 08:24:06 np0005629333 podman[399279]: 2026-02-25 13:24:06.839881213 +0000 UTC m=+0.162785845 container attach 262c2c334c7354702e91cbf9f91abc02ca7f7d8efba90f4131aa4931727f2232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ardinghelli, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 08:24:06 np0005629333 jovial_ardinghelli[399295]: 167 167
Feb 25 08:24:06 np0005629333 systemd[1]: libpod-262c2c334c7354702e91cbf9f91abc02ca7f7d8efba90f4131aa4931727f2232.scope: Deactivated successfully.
Feb 25 08:24:06 np0005629333 podman[399279]: 2026-02-25 13:24:06.84258916 +0000 UTC m=+0.165493742 container died 262c2c334c7354702e91cbf9f91abc02ca7f7d8efba90f4131aa4931727f2232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:24:06 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e963b47f02d26cf43cd34ca3f887c4fa85a38faf16887d1427ed6971df45c700-merged.mount: Deactivated successfully.
Feb 25 08:24:06 np0005629333 podman[399279]: 2026-02-25 13:24:06.886833338 +0000 UTC m=+0.209737910 container remove 262c2c334c7354702e91cbf9f91abc02ca7f7d8efba90f4131aa4931727f2232 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ardinghelli, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:24:06 np0005629333 systemd[1]: libpod-conmon-262c2c334c7354702e91cbf9f91abc02ca7f7d8efba90f4131aa4931727f2232.scope: Deactivated successfully.
Feb 25 08:24:07 np0005629333 podman[399318]: 2026-02-25 13:24:07.068044773 +0000 UTC m=+0.061732654 container create ea2e9c0c479305992f733535fe018efc7e50133f26e56ee865992fa4a13874b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 08:24:07 np0005629333 systemd[1]: Started libpod-conmon-ea2e9c0c479305992f733535fe018efc7e50133f26e56ee865992fa4a13874b9.scope.
Feb 25 08:24:07 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:24:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8450383ee497e4a21aa8e2173561d35a0dd2fb5e4e56fee84508832b4071bb7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:24:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8450383ee497e4a21aa8e2173561d35a0dd2fb5e4e56fee84508832b4071bb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:24:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8450383ee497e4a21aa8e2173561d35a0dd2fb5e4e56fee84508832b4071bb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:24:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8450383ee497e4a21aa8e2173561d35a0dd2fb5e4e56fee84508832b4071bb7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:24:07 np0005629333 podman[399318]: 2026-02-25 13:24:07.038789217 +0000 UTC m=+0.032477148 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:24:07 np0005629333 podman[399318]: 2026-02-25 13:24:07.153477424 +0000 UTC m=+0.147165345 container init ea2e9c0c479305992f733535fe018efc7e50133f26e56ee865992fa4a13874b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_babbage, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 08:24:07 np0005629333 podman[399318]: 2026-02-25 13:24:07.159562775 +0000 UTC m=+0.153250656 container start ea2e9c0c479305992f733535fe018efc7e50133f26e56ee865992fa4a13874b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_babbage, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 08:24:07 np0005629333 podman[399318]: 2026-02-25 13:24:07.163741203 +0000 UTC m=+0.157429134 container attach ea2e9c0c479305992f733535fe018efc7e50133f26e56ee865992fa4a13874b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_babbage, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:24:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3127: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:07 np0005629333 lvm[399414]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:24:07 np0005629333 lvm[399414]: VG ceph_vg2 finished
Feb 25 08:24:07 np0005629333 lvm[399412]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:24:07 np0005629333 lvm[399413]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:24:07 np0005629333 lvm[399413]: VG ceph_vg1 finished
Feb 25 08:24:07 np0005629333 lvm[399412]: VG ceph_vg0 finished
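
The lvm[...] lines above are lvm2's event-driven autoactivation: pvscan records each PV as it comes online and, once the last PV of a VG arrives, declares the VG complete and activates it. A quick completeness check over the ceph VGs; "vgs --reportformat json" is real lvm2 CLI, but the vg_missing_pv_count field name is an assumption:

    import json
    import subprocess

    # Report ceph VGs and flag any with missing PVs -- the same condition
    # the "VG ... is complete" messages assert. Field names assumed.
    report = json.loads(subprocess.run(
        ["vgs", "--reportformat", "json",
         "-o", "vg_name,pv_count,vg_missing_pv_count"],
        capture_output=True, text=True, check=True,
    ).stdout)
    for vg in report["report"][0]["vg"]:
        if vg["vg_name"].startswith("ceph_vg"):
            ok = vg.get("vg_missing_pv_count", "0") == "0"
            print(vg["vg_name"], "complete" if ok else "missing PVs")
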
Feb 25 08:24:07 np0005629333 inspiring_babbage[399333]: {}
Feb 25 08:24:07 np0005629333 systemd[1]: libpod-ea2e9c0c479305992f733535fe018efc7e50133f26e56ee865992fa4a13874b9.scope: Deactivated successfully.
Feb 25 08:24:07 np0005629333 systemd[1]: libpod-ea2e9c0c479305992f733535fe018efc7e50133f26e56ee865992fa4a13874b9.scope: Consumed 1.033s CPU time.
Feb 25 08:24:07 np0005629333 podman[399318]: 2026-02-25 13:24:07.871378005 +0000 UTC m=+0.865065876 container died ea2e9c0c479305992f733535fe018efc7e50133f26e56ee865992fa4a13874b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_babbage, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 08:24:07 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c8450383ee497e4a21aa8e2173561d35a0dd2fb5e4e56fee84508832b4071bb7-merged.mount: Deactivated successfully.
Feb 25 08:24:07 np0005629333 podman[399318]: 2026-02-25 13:24:07.939820986 +0000 UTC m=+0.933508857 container remove ea2e9c0c479305992f733535fe018efc7e50133f26e56ee865992fa4a13874b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_babbage, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 08:24:07 np0005629333 systemd[1]: libpod-conmon-ea2e9c0c479305992f733535fe018efc7e50133f26e56ee865992fa4a13874b9.scope: Deactivated successfully.
Feb 25 08:24:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:24:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:24:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:24:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:24:08 np0005629333 nova_compute[244014]: 2026-02-25 13:24:08.689 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:24:08 np0005629333 nova_compute[244014]: 2026-02-25 13:24:08.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:24:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:24:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:24:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
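
`_set_new_cache_sizes` is the monitor's periodic memory autotuning: it recomputes its cache budget and splits it into incremental-osdmap, full-osdmap and key-value (RocksDB) allocations. The three allocations should sum to slightly less than `cache_size`; a quick check against the line above (values in bytes):

    inc_alloc  = 343_932_928
    full_alloc = 348_127_232
    kv_alloc   = 318_767_104
    cache_size = 1_020_054_731

    total = inc_alloc + full_alloc + kv_alloc
    # 1_010_827_264 bytes allocated out of a ~0.95 GiB budget (~99.1%).
    print(total, total / cache_size)
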
Feb 25 08:24:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3128: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:10 np0005629333 nova_compute[244014]: 2026-02-25 13:24:10.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:24:11 np0005629333 nova_compute[244014]: 2026-02-25 13:24:11.324 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:24:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3129: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:12 np0005629333 nova_compute[244014]: 2026-02-25 13:24:12.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:24:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3130: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:13 np0005629333 nova_compute[244014]: 2026-02-25 13:24:13.692 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:24:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:24:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3131: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:16 np0005629333 nova_compute[244014]: 2026-02-25 13:24:16.325 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:24:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3132: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:18 np0005629333 nova_compute[244014]: 2026-02-25 13:24:18.695 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:24:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:24:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3133: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:19 np0005629333 nova_compute[244014]: 2026-02-25 13:24:19.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:24:21 np0005629333 nova_compute[244014]: 2026-02-25 13:24:21.328 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:24:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3134: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3135: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:23 np0005629333 nova_compute[244014]: 2026-02-25 13:24:23.698 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:24:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:24:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3136: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:25 np0005629333 nova_compute[244014]: 2026-02-25 13:24:25.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:24:26 np0005629333 nova_compute[244014]: 2026-02-25 13:24:26.329 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:24:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3137: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:27 np0005629333 podman[399455]: 2026-02-25 13:24:27.738616194 +0000 UTC m=+0.072586580 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 25 08:24:27 np0005629333 podman[399456]: 2026-02-25 13:24:27.774240849 +0000 UTC m=+0.108088831 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
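
Both health probes come from podman's healthcheck timer: each `config_data` block above mounts `/var/lib/openstack/healthchecks/<service>` into the container and runs `/openstack/healthcheck` as the test command. The same probe can be run by hand; a sketch, assuming the container names from the entries above:

    import subprocess

    # podman healthcheck run exits 0 for healthy, non-zero otherwise.
    for name in ("ovn_metadata_agent", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")
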
Feb 25 08:24:28 np0005629333 nova_compute[244014]: 2026-02-25 13:24:28.700 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:24:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:24:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3138: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:24:31
Feb 25 08:24:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:24:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:24:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'images', 'default.rgw.log', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'backups', '.rgw.root']
Feb 25 08:24:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
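
The balancer pass is self-describing: in upmap mode it refuses to act while more than `max misplaced` (5%) of objects are already moving, then asks the optimizer for at most 10 upmap changes across the listed pools. `prepared 0/10` means it found nothing to improve, which is expected for a cluster that is already fully `active+clean`. The gate reduces to roughly this, a sketch of the decision rather than the module's real API:

    def should_optimize(num_misplaced: int, num_objects: int,
                        max_misplaced: float = 0.05) -> bool:
        # Skip a balancing round while too much data is already in flight.
        return num_objects == 0 or num_misplaced / num_objects <= max_misplaced

    print(should_optimize(0, 10_000))   # True: nothing misplaced, safe to plan
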
Feb 25 08:24:31 np0005629333 nova_compute[244014]: 2026-02-25 13:24:31.331 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:24:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3139: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:24:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:24:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:24:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:24:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:24:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:24:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:24:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:24:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:24:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:24:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:24:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:24:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:24:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:24:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:24:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
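
On each refresh the rbd_support module reloads trash-purge and mirror-snapshot schedules pool by pool (`vms`, `volumes`, `backups`, `images`); the empty `start_after=` means it reads each pool's schedule list from the beginning. The configured schedules can be inspected from the rbd CLI; a sketch, assuming the pool names above:

    import subprocess

    for pool in ("vms", "volumes", "backups", "images"):
        for sched in ("trash purge schedule", "mirror snapshot schedule"):
            # Lists any schedules configured for this pool (often empty here).
            subprocess.run(["rbd", *sched.split(), "ls", "--pool", pool])
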
Feb 25 08:24:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3140: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:33 np0005629333 nova_compute[244014]: 2026-02-25 13:24:33.703 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:24:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:24:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3141: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:36 np0005629333 nova_compute[244014]: 2026-02-25 13:24:36.333 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:24:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3142: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:38 np0005629333 nova_compute[244014]: 2026-02-25 13:24:38.708 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:24:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:24:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3143: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:41 np0005629333 nova_compute[244014]: 2026-02-25 13:24:41.335 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:24:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3144: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:24:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:24:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:24:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:24:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:24:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:24:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:24:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:24:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:24:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:24:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:24:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:24:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:24:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:24:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:24:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:24:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:24:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:24:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:24:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:24:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:24:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:24:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
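
Each pg_autoscaler pair above is a small calculation: pg target = capacity_ratio × bias × N, where N works out to exactly 300 for every pool here (consistent with the default 100 target PGs per OSD across what is presumably a 3-OSD cluster, an assumption). The raw target is then quantized to a power of two, and the current pg_num is kept unless the target is far enough off. Reproducing the `images` line:

    capacity_ratio = 0.0006714637386478266   # from the 'images' entry above
    bias = 1.0
    per_cluster_target = 300                 # assumed: 100 PGs/OSD * 3 OSDs

    raw_target = capacity_ratio * bias * per_cluster_target
    # ~0.20143912159434796, matching the log up to float rounding; far below
    # the current pg_num=32, which is kept, presumably because the change
    # threshold for resizing is not met.
    print(raw_target)
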
Feb 25 08:24:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3145: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:43 np0005629333 nova_compute[244014]: 2026-02-25 13:24:43.711 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:24:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:24:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3146: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:46 np0005629333 nova_compute[244014]: 2026-02-25 13:24:46.337 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:24:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:24:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2120872837' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:24:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:24:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2120872837' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:24:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3147: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:48 np0005629333 nova_compute[244014]: 2026-02-25 13:24:48.715 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:24:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:24:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3148: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:51 np0005629333 nova_compute[244014]: 2026-02-25 13:24:51.339 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:24:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3149: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3150: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:53 np0005629333 nova_compute[244014]: 2026-02-25 13:24:53.732 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:24:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:24:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:24:55.067 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:24:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:24:55.068 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:24:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:24:55.068 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:24:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3151: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:55 np0005629333 nova_compute[244014]: 2026-02-25 13:24:55.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:24:56 np0005629333 nova_compute[244014]: 2026-02-25 13:24:56.340 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:24:56 np0005629333 nova_compute[244014]: 2026-02-25 13:24:56.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:24:56 np0005629333 nova_compute[244014]: 2026-02-25 13:24:56.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:24:56 np0005629333 nova_compute[244014]: 2026-02-25 13:24:56.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:24:56 np0005629333 nova_compute[244014]: 2026-02-25 13:24:56.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:24:56 np0005629333 nova_compute[244014]: 2026-02-25 13:24:56.905 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:24:56 np0005629333 nova_compute[244014]: 2026-02-25 13:24:56.906 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:24:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:24:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3061810097' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:24:57 np0005629333 nova_compute[244014]: 2026-02-25 13:24:57.486 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
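
Nova (via its RBD storage driver) sizes the hypervisor's disk pool by shelling out to `ceph df` exactly as logged: the subprocess on the compute side shows up on the monitor as a `client.openstack` audit entry within the same second. A sketch of the round trip, assuming the usual `ceph df --format=json` layout with a top-level `stats` object:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"]
    )
    stats = json.loads(out)["stats"]
    print(stats["total_avail_bytes"], "of", stats["total_bytes"], "bytes free")
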
Feb 25 08:24:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3152: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:24:57 np0005629333 nova_compute[244014]: 2026-02-25 13:24:57.655 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:24:57 np0005629333 nova_compute[244014]: 2026-02-25 13:24:57.657 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3601MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:24:57 np0005629333 nova_compute[244014]: 2026-02-25 13:24:57.657 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:24:57 np0005629333 nova_compute[244014]: 2026-02-25 13:24:57.658 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:24:57 np0005629333 nova_compute[244014]: 2026-02-25 13:24:57.720 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:24:57 np0005629333 nova_compute[244014]: 2026-02-25 13:24:57.720 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:24:57 np0005629333 nova_compute[244014]: 2026-02-25 13:24:57.751 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:24:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:24:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/93496846' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:24:58 np0005629333 nova_compute[244014]: 2026-02-25 13:24:58.266 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:24:58 np0005629333 nova_compute[244014]: 2026-02-25 13:24:58.272 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:24:58 np0005629333 nova_compute[244014]: 2026-02-25 13:24:58.290 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
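
The inventory entry is enough to recompute what placement will actually schedule against: for each resource class, capacity = (total − reserved) × allocation_ratio. Worked through with the values above:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
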
Feb 25 08:24:58 np0005629333 nova_compute[244014]: 2026-02-25 13:24:58.292 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:24:58 np0005629333 nova_compute[244014]: 2026-02-25 13:24:58.293 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:24:58 np0005629333 podman[399544]: 2026-02-25 13:24:58.709393587 +0000 UTC m=+0.049411416 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 08:24:58 np0005629333 nova_compute[244014]: 2026-02-25 13:24:58.733 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:24:58 np0005629333 podman[399545]: 2026-02-25 13:24:58.740447383 +0000 UTC m=+0.075793430 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 08:24:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:24:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3153: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:01 np0005629333 nova_compute[244014]: 2026-02-25 13:25:01.345 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:25:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3154: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:25:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:25:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:25:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:25:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:25:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:25:03 np0005629333 nova_compute[244014]: 2026-02-25 13:25:03.295 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:25:03 np0005629333 nova_compute[244014]: 2026-02-25 13:25:03.296 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:25:03 np0005629333 nova_compute[244014]: 2026-02-25 13:25:03.296 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:25:03 np0005629333 nova_compute[244014]: 2026-02-25 13:25:03.314 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:25:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3155: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:03 np0005629333 nova_compute[244014]: 2026-02-25 13:25:03.737 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:25:03 np0005629333 nova_compute[244014]: 2026-02-25 13:25:03.890 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:25:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:25:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3156: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:06 np0005629333 nova_compute[244014]: 2026-02-25 13:25:06.348 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:25:06 np0005629333 nova_compute[244014]: 2026-02-25 13:25:06.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:25:06 np0005629333 nova_compute[244014]: 2026-02-25 13:25:06.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
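
`_reclaim_queued_deletes` bails out immediately because `reclaim_instance_interval` defaults to 0, i.e. soft-delete reclaim is disabled. All of the `Running periodic task ComputeManager.*` lines come from the same oslo.service machinery; a minimal sketch of how such a task is declared, not nova's actual code:

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _poll_rebooting_instances(self, context):
            # Invoked by run_periodic_tasks() roughly every `spacing` seconds,
            # producing the DEBUG lines seen throughout this log.
            pass
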
Feb 25 08:25:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3157: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:08 np0005629333 podman[399685]: 2026-02-25 13:25:08.719547435 +0000 UTC m=+0.065572381 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 08:25:08 np0005629333 nova_compute[244014]: 2026-02-25 13:25:08.794 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:25:08 np0005629333 podman[399685]: 2026-02-25 13:25:08.810144612 +0000 UTC m=+0.156169558 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 08:25:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:25:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3158: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:25:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:25:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:25:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:25:09 np0005629333 nova_compute[244014]: 2026-02-25 13:25:09.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:25:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:25:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:25:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:25:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:25:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:25:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:25:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:25:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:25:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:25:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:25:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:25:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:25:10 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:25:10 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:25:10 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:25:10 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:25:10 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:25:10 np0005629333 podman[400017]: 2026-02-25 13:25:10.853771219 +0000 UTC m=+0.053536652 container create 5400225fc248d81e6aa9843c1bb3a3a4399905ae4a446cac369dba2c1d8c64b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 25 08:25:10 np0005629333 nova_compute[244014]: 2026-02-25 13:25:10.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:25:10 np0005629333 systemd[1]: Started libpod-conmon-5400225fc248d81e6aa9843c1bb3a3a4399905ae4a446cac369dba2c1d8c64b5.scope.
Feb 25 08:25:10 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:25:10 np0005629333 podman[400017]: 2026-02-25 13:25:10.826127369 +0000 UTC m=+0.025892852 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:25:10 np0005629333 podman[400017]: 2026-02-25 13:25:10.933669634 +0000 UTC m=+0.133435107 container init 5400225fc248d81e6aa9843c1bb3a3a4399905ae4a446cac369dba2c1d8c64b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 08:25:10 np0005629333 podman[400017]: 2026-02-25 13:25:10.939368094 +0000 UTC m=+0.139133527 container start 5400225fc248d81e6aa9843c1bb3a3a4399905ae4a446cac369dba2c1d8c64b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wilbur, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 08:25:10 np0005629333 podman[400017]: 2026-02-25 13:25:10.942183304 +0000 UTC m=+0.141948757 container attach 5400225fc248d81e6aa9843c1bb3a3a4399905ae4a446cac369dba2c1d8c64b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wilbur, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True)
Feb 25 08:25:10 np0005629333 wonderful_wilbur[400033]: 167 167
Feb 25 08:25:10 np0005629333 systemd[1]: libpod-5400225fc248d81e6aa9843c1bb3a3a4399905ae4a446cac369dba2c1d8c64b5.scope: Deactivated successfully.
Feb 25 08:25:10 np0005629333 conmon[400033]: conmon 5400225fc248d81e6aa9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5400225fc248d81e6aa9843c1bb3a3a4399905ae4a446cac369dba2c1d8c64b5.scope/container/memory.events
Feb 25 08:25:10 np0005629333 podman[400017]: 2026-02-25 13:25:10.945105226 +0000 UTC m=+0.144870689 container died 5400225fc248d81e6aa9843c1bb3a3a4399905ae4a446cac369dba2c1d8c64b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:25:10 np0005629333 systemd[1]: var-lib-containers-storage-overlay-107c1ca87659f65c1790c1262e9c88d0f58da94b334b7c6ad3a90ac67f424303-merged.mount: Deactivated successfully.
Feb 25 08:25:10 np0005629333 podman[400017]: 2026-02-25 13:25:10.985559058 +0000 UTC m=+0.185324491 container remove 5400225fc248d81e6aa9843c1bb3a3a4399905ae4a446cac369dba2c1d8c64b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 08:25:10 np0005629333 systemd[1]: libpod-conmon-5400225fc248d81e6aa9843c1bb3a3a4399905ae4a446cac369dba2c1d8c64b5.scope: Deactivated successfully.
Feb 25 08:25:11 np0005629333 podman[400058]: 2026-02-25 13:25:11.134170451 +0000 UTC m=+0.047687787 container create cb1decc7d2c5a356d4dcdead67afb1b7ab78e7bbf68bbb6ac096f813e820939a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:25:11 np0005629333 systemd[1]: Started libpod-conmon-cb1decc7d2c5a356d4dcdead67afb1b7ab78e7bbf68bbb6ac096f813e820939a.scope.
Feb 25 08:25:11 np0005629333 podman[400058]: 2026-02-25 13:25:11.106859911 +0000 UTC m=+0.020377307 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:25:11 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:25:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d51b06960a47a83ec732d76188af908a41a2c9dbc983ce258ba420ff58486d41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:25:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d51b06960a47a83ec732d76188af908a41a2c9dbc983ce258ba420ff58486d41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:25:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d51b06960a47a83ec732d76188af908a41a2c9dbc983ce258ba420ff58486d41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:25:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d51b06960a47a83ec732d76188af908a41a2c9dbc983ce258ba420ff58486d41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:25:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d51b06960a47a83ec732d76188af908a41a2c9dbc983ce258ba420ff58486d41/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:25:11 np0005629333 podman[400058]: 2026-02-25 13:25:11.244350531 +0000 UTC m=+0.157867917 container init cb1decc7d2c5a356d4dcdead67afb1b7ab78e7bbf68bbb6ac096f813e820939a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_satoshi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:25:11 np0005629333 podman[400058]: 2026-02-25 13:25:11.251786321 +0000 UTC m=+0.165303667 container start cb1decc7d2c5a356d4dcdead67afb1b7ab78e7bbf68bbb6ac096f813e820939a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_satoshi, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 08:25:11 np0005629333 podman[400058]: 2026-02-25 13:25:11.255425933 +0000 UTC m=+0.168943249 container attach cb1decc7d2c5a356d4dcdead67afb1b7ab78e7bbf68bbb6ac096f813e820939a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Feb 25 08:25:11 np0005629333 nova_compute[244014]: 2026-02-25 13:25:11.350 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:25:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3159: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:11 np0005629333 sharp_satoshi[400075]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:25:11 np0005629333 sharp_satoshi[400075]: --> All data devices are unavailable
Feb 25 08:25:11 np0005629333 systemd[1]: libpod-cb1decc7d2c5a356d4dcdead67afb1b7ab78e7bbf68bbb6ac096f813e820939a.scope: Deactivated successfully.
Feb 25 08:25:11 np0005629333 podman[400058]: 2026-02-25 13:25:11.74605915 +0000 UTC m=+0.659576496 container died cb1decc7d2c5a356d4dcdead67afb1b7ab78e7bbf68bbb6ac096f813e820939a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:25:11 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d51b06960a47a83ec732d76188af908a41a2c9dbc983ce258ba420ff58486d41-merged.mount: Deactivated successfully.
Feb 25 08:25:11 np0005629333 podman[400058]: 2026-02-25 13:25:11.791153383 +0000 UTC m=+0.704670699 container remove cb1decc7d2c5a356d4dcdead67afb1b7ab78e7bbf68bbb6ac096f813e820939a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:25:11 np0005629333 systemd[1]: libpod-conmon-cb1decc7d2c5a356d4dcdead67afb1b7ab78e7bbf68bbb6ac096f813e820939a.scope: Deactivated successfully.
Feb 25 08:25:12 np0005629333 podman[400169]: 2026-02-25 13:25:12.283503838 +0000 UTC m=+0.044202498 container create d6cc46dfaec38d4d1eae5f8e65e47367f8593de83bfc4dd0f0049129fceba4f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_saha, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:25:12 np0005629333 systemd[1]: Started libpod-conmon-d6cc46dfaec38d4d1eae5f8e65e47367f8593de83bfc4dd0f0049129fceba4f9.scope.
Feb 25 08:25:12 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:25:12 np0005629333 podman[400169]: 2026-02-25 13:25:12.355512821 +0000 UTC m=+0.116211481 container init d6cc46dfaec38d4d1eae5f8e65e47367f8593de83bfc4dd0f0049129fceba4f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_saha, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 08:25:12 np0005629333 podman[400169]: 2026-02-25 13:25:12.262093014 +0000 UTC m=+0.022791694 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:25:12 np0005629333 podman[400169]: 2026-02-25 13:25:12.366150181 +0000 UTC m=+0.126848881 container start d6cc46dfaec38d4d1eae5f8e65e47367f8593de83bfc4dd0f0049129fceba4f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_saha, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 08:25:12 np0005629333 podman[400169]: 2026-02-25 13:25:12.370069052 +0000 UTC m=+0.130767732 container attach d6cc46dfaec38d4d1eae5f8e65e47367f8593de83bfc4dd0f0049129fceba4f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_saha, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 08:25:12 np0005629333 amazing_saha[400185]: 167 167
Feb 25 08:25:12 np0005629333 systemd[1]: libpod-d6cc46dfaec38d4d1eae5f8e65e47367f8593de83bfc4dd0f0049129fceba4f9.scope: Deactivated successfully.
Feb 25 08:25:12 np0005629333 podman[400169]: 2026-02-25 13:25:12.37426915 +0000 UTC m=+0.134967820 container died d6cc46dfaec38d4d1eae5f8e65e47367f8593de83bfc4dd0f0049129fceba4f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:25:12 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ef2ececb9d1b5f14faae4d418bac24830e553bc271e0572f4af47150f4c9e1b7-merged.mount: Deactivated successfully.
Feb 25 08:25:12 np0005629333 podman[400169]: 2026-02-25 13:25:12.415071442 +0000 UTC m=+0.175770142 container remove d6cc46dfaec38d4d1eae5f8e65e47367f8593de83bfc4dd0f0049129fceba4f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_saha, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 08:25:12 np0005629333 systemd[1]: libpod-conmon-d6cc46dfaec38d4d1eae5f8e65e47367f8593de83bfc4dd0f0049129fceba4f9.scope: Deactivated successfully.
Feb 25 08:25:12 np0005629333 podman[400210]: 2026-02-25 13:25:12.602716317 +0000 UTC m=+0.057119263 container create 7ec7e0f6f48c6ab1f96684b7f559fd1c2ea851ad6e50f7fb42104ec37eaf3f2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dubinsky, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 08:25:12 np0005629333 systemd[1]: Started libpod-conmon-7ec7e0f6f48c6ab1f96684b7f559fd1c2ea851ad6e50f7fb42104ec37eaf3f2c.scope.
Feb 25 08:25:12 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:25:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/819404410e6d20528a47a5251d304b33af890d27ceb0ce521d0918ba98c683e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:25:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/819404410e6d20528a47a5251d304b33af890d27ceb0ce521d0918ba98c683e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:25:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/819404410e6d20528a47a5251d304b33af890d27ceb0ce521d0918ba98c683e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:25:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/819404410e6d20528a47a5251d304b33af890d27ceb0ce521d0918ba98c683e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:25:12 np0005629333 podman[400210]: 2026-02-25 13:25:12.583270469 +0000 UTC m=+0.037673445 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:25:12 np0005629333 podman[400210]: 2026-02-25 13:25:12.685988058 +0000 UTC m=+0.140391054 container init 7ec7e0f6f48c6ab1f96684b7f559fd1c2ea851ad6e50f7fb42104ec37eaf3f2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 08:25:12 np0005629333 podman[400210]: 2026-02-25 13:25:12.696157685 +0000 UTC m=+0.150560631 container start 7ec7e0f6f48c6ab1f96684b7f559fd1c2ea851ad6e50f7fb42104ec37eaf3f2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:25:12 np0005629333 podman[400210]: 2026-02-25 13:25:12.699970842 +0000 UTC m=+0.154373878 container attach 7ec7e0f6f48c6ab1f96684b7f559fd1c2ea851ad6e50f7fb42104ec37eaf3f2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dubinsky, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]: {
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:    "0": [
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:        {
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "devices": [
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "/dev/loop3"
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            ],
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "lv_name": "ceph_lv0",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "lv_size": "21470642176",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "name": "ceph_lv0",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "tags": {
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.cluster_name": "ceph",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.crush_device_class": "",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.encrypted": "0",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.objectstore": "bluestore",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.osd_id": "0",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.type": "block",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.vdo": "0",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.with_tpm": "0"
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            },
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "type": "block",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "vg_name": "ceph_vg0"
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:        }
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:    ],
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:    "1": [
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:        {
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "devices": [
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "/dev/loop4"
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            ],
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "lv_name": "ceph_lv1",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "lv_size": "21470642176",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "name": "ceph_lv1",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "tags": {
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.cluster_name": "ceph",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.crush_device_class": "",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.encrypted": "0",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.objectstore": "bluestore",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.osd_id": "1",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.type": "block",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.vdo": "0",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.with_tpm": "0"
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            },
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "type": "block",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "vg_name": "ceph_vg1"
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:        }
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:    ],
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:    "2": [
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:        {
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "devices": [
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "/dev/loop5"
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            ],
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "lv_name": "ceph_lv2",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "lv_size": "21470642176",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "name": "ceph_lv2",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "tags": {
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.cluster_name": "ceph",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.crush_device_class": "",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.encrypted": "0",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.objectstore": "bluestore",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.osd_id": "2",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.type": "block",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.vdo": "0",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:                "ceph.with_tpm": "0"
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            },
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "type": "block",
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:            "vg_name": "ceph_vg2"
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:        }
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]:    ]
Feb 25 08:25:12 np0005629333 interesting_dubinsky[400227]: }
Feb 25 08:25:12 np0005629333 systemd[1]: libpod-7ec7e0f6f48c6ab1f96684b7f559fd1c2ea851ad6e50f7fb42104ec37eaf3f2c.scope: Deactivated successfully.
Feb 25 08:25:13 np0005629333 podman[400236]: 2026-02-25 13:25:13.041060009 +0000 UTC m=+0.028798144 container died 7ec7e0f6f48c6ab1f96684b7f559fd1c2ea851ad6e50f7fb42104ec37eaf3f2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dubinsky, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 08:25:13 np0005629333 systemd[1]: var-lib-containers-storage-overlay-819404410e6d20528a47a5251d304b33af890d27ceb0ce521d0918ba98c683e8-merged.mount: Deactivated successfully.
Feb 25 08:25:13 np0005629333 podman[400236]: 2026-02-25 13:25:13.090872164 +0000 UTC m=+0.078610319 container remove 7ec7e0f6f48c6ab1f96684b7f559fd1c2ea851ad6e50f7fb42104ec37eaf3f2c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_dubinsky, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:25:13 np0005629333 systemd[1]: libpod-conmon-7ec7e0f6f48c6ab1f96684b7f559fd1c2ea851ad6e50f7fb42104ec37eaf3f2c.scope: Deactivated successfully.
Feb 25 08:25:13 np0005629333 podman[400313]: 2026-02-25 13:25:13.597379969 +0000 UTC m=+0.061949399 container create 2b3f26a247b1a67a624925c395d20eedd205e4abb7afdcb7dca354a9a77bfa4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_newton, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:25:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3160: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:13 np0005629333 systemd[1]: Started libpod-conmon-2b3f26a247b1a67a624925c395d20eedd205e4abb7afdcb7dca354a9a77bfa4d.scope.
Feb 25 08:25:13 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:25:13 np0005629333 podman[400313]: 2026-02-25 13:25:13.573398403 +0000 UTC m=+0.037967803 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:25:13 np0005629333 podman[400313]: 2026-02-25 13:25:13.679673532 +0000 UTC m=+0.144242962 container init 2b3f26a247b1a67a624925c395d20eedd205e4abb7afdcb7dca354a9a77bfa4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_newton, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 08:25:13 np0005629333 podman[400313]: 2026-02-25 13:25:13.688028018 +0000 UTC m=+0.152597448 container start 2b3f26a247b1a67a624925c395d20eedd205e4abb7afdcb7dca354a9a77bfa4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_newton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:25:13 np0005629333 podman[400313]: 2026-02-25 13:25:13.692223686 +0000 UTC m=+0.156793126 container attach 2b3f26a247b1a67a624925c395d20eedd205e4abb7afdcb7dca354a9a77bfa4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_newton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 08:25:13 np0005629333 trusting_newton[400330]: 167 167
Feb 25 08:25:13 np0005629333 systemd[1]: libpod-2b3f26a247b1a67a624925c395d20eedd205e4abb7afdcb7dca354a9a77bfa4d.scope: Deactivated successfully.
Feb 25 08:25:13 np0005629333 conmon[400330]: conmon 2b3f26a247b1a67a6249 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2b3f26a247b1a67a624925c395d20eedd205e4abb7afdcb7dca354a9a77bfa4d.scope/container/memory.events
Feb 25 08:25:13 np0005629333 podman[400313]: 2026-02-25 13:25:13.69695478 +0000 UTC m=+0.161524210 container died 2b3f26a247b1a67a624925c395d20eedd205e4abb7afdcb7dca354a9a77bfa4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_newton, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:25:13 np0005629333 systemd[1]: var-lib-containers-storage-overlay-dd52b9cbfcd70bebf77f70d53d944c579d2eb4eab523d9245aad92edc7023b03-merged.mount: Deactivated successfully.
Feb 25 08:25:13 np0005629333 podman[400313]: 2026-02-25 13:25:13.740666233 +0000 UTC m=+0.205235653 container remove 2b3f26a247b1a67a624925c395d20eedd205e4abb7afdcb7dca354a9a77bfa4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_newton, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:25:13 np0005629333 systemd[1]: libpod-conmon-2b3f26a247b1a67a624925c395d20eedd205e4abb7afdcb7dca354a9a77bfa4d.scope: Deactivated successfully.
Feb 25 08:25:13 np0005629333 nova_compute[244014]: 2026-02-25 13:25:13.833 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:25:13 np0005629333 podman[400353]: 2026-02-25 13:25:13.89750361 +0000 UTC m=+0.050012763 container create fa6638c4bf674dbafb877a8d576d07f9d1309913c15ada101efb2cafcafd78ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:25:13 np0005629333 systemd[1]: Started libpod-conmon-fa6638c4bf674dbafb877a8d576d07f9d1309913c15ada101efb2cafcafd78ab.scope.
Feb 25 08:25:13 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:25:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfb5e9ed3b3ed8ab8b826134ab3cee082ab7d4a5dc104117811e7cc6c3e4b328/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:25:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfb5e9ed3b3ed8ab8b826134ab3cee082ab7d4a5dc104117811e7cc6c3e4b328/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:25:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfb5e9ed3b3ed8ab8b826134ab3cee082ab7d4a5dc104117811e7cc6c3e4b328/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:25:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfb5e9ed3b3ed8ab8b826134ab3cee082ab7d4a5dc104117811e7cc6c3e4b328/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:25:13 np0005629333 podman[400353]: 2026-02-25 13:25:13.878515584 +0000 UTC m=+0.031024827 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:25:13 np0005629333 podman[400353]: 2026-02-25 13:25:13.993397286 +0000 UTC m=+0.145906529 container init fa6638c4bf674dbafb877a8d576d07f9d1309913c15ada101efb2cafcafd78ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:25:14 np0005629333 podman[400353]: 2026-02-25 13:25:14.000489326 +0000 UTC m=+0.152998539 container start fa6638c4bf674dbafb877a8d576d07f9d1309913c15ada101efb2cafcafd78ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 08:25:14 np0005629333 podman[400353]: 2026-02-25 13:25:14.005004514 +0000 UTC m=+0.157513747 container attach fa6638c4bf674dbafb877a8d576d07f9d1309913c15ada101efb2cafcafd78ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 08:25:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:25:14 np0005629333 lvm[400448]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:25:14 np0005629333 lvm[400448]: VG ceph_vg0 finished
Feb 25 08:25:14 np0005629333 lvm[400449]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:25:14 np0005629333 lvm[400449]: VG ceph_vg1 finished
Feb 25 08:25:14 np0005629333 lvm[400451]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:25:14 np0005629333 lvm[400451]: VG ceph_vg2 finished
Feb 25 08:25:14 np0005629333 musing_mclaren[400370]: {}
Feb 25 08:25:14 np0005629333 systemd[1]: libpod-fa6638c4bf674dbafb877a8d576d07f9d1309913c15ada101efb2cafcafd78ab.scope: Deactivated successfully.
Feb 25 08:25:14 np0005629333 systemd[1]: libpod-fa6638c4bf674dbafb877a8d576d07f9d1309913c15ada101efb2cafcafd78ab.scope: Consumed 1.293s CPU time.
Feb 25 08:25:14 np0005629333 podman[400353]: 2026-02-25 13:25:14.870592052 +0000 UTC m=+1.023101205 container died fa6638c4bf674dbafb877a8d576d07f9d1309913c15ada101efb2cafcafd78ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 08:25:14 np0005629333 nova_compute[244014]: 2026-02-25 13:25:14.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:25:14 np0005629333 systemd[1]: var-lib-containers-storage-overlay-cfb5e9ed3b3ed8ab8b826134ab3cee082ab7d4a5dc104117811e7cc6c3e4b328-merged.mount: Deactivated successfully.
Feb 25 08:25:14 np0005629333 podman[400353]: 2026-02-25 13:25:14.921087497 +0000 UTC m=+1.073596670 container remove fa6638c4bf674dbafb877a8d576d07f9d1309913c15ada101efb2cafcafd78ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:25:14 np0005629333 systemd[1]: libpod-conmon-fa6638c4bf674dbafb877a8d576d07f9d1309913c15ada101efb2cafcafd78ab.scope: Deactivated successfully.
Feb 25 08:25:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:25:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:25:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:25:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:25:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3161: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:15 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:25:15 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:25:16 np0005629333 nova_compute[244014]: 2026-02-25 13:25:16.352 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:25:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3162: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:18 np0005629333 nova_compute[244014]: 2026-02-25 13:25:18.837 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:25:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:25:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3163: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:20 np0005629333 nova_compute[244014]: 2026-02-25 13:25:20.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:25:21 np0005629333 nova_compute[244014]: 2026-02-25 13:25:21.391 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:25:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3164: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3165: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:23 np0005629333 nova_compute[244014]: 2026-02-25 13:25:23.886 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:25:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:25:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3166: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:26 np0005629333 nova_compute[244014]: 2026-02-25 13:25:26.392 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:25:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3167: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:28 np0005629333 nova_compute[244014]: 2026-02-25 13:25:28.890 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:25:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:25:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3168: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:29 np0005629333 podman[400490]: 2026-02-25 13:25:29.730921925 +0000 UTC m=+0.070307445 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 08:25:29 np0005629333 podman[400491]: 2026-02-25 13:25:29.768556457 +0000 UTC m=+0.107226597 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 25 08:25:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:25:31
Feb 25 08:25:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:25:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:25:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['.rgw.root', 'images', '.mgr', 'backups', 'vms', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta']
Feb 25 08:25:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:25:31 np0005629333 nova_compute[244014]: 2026-02-25 13:25:31.396 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:25:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3169: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:25:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:25:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:25:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:25:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:25:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:25:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:25:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:25:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:25:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:25:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:25:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:25:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:25:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:25:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:25:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:25:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3170: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:33 np0005629333 nova_compute[244014]: 2026-02-25 13:25:33.894 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:25:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:25:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3171: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:36 np0005629333 nova_compute[244014]: 2026-02-25 13:25:36.397 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:25:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3172: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:38 np0005629333 nova_compute[244014]: 2026-02-25 13:25:38.898 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:25:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:25:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3173: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:41 np0005629333 nova_compute[244014]: 2026-02-25 13:25:41.450 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:25:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3174: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:25:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:25:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:25:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:25:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:25:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:25:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:25:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:25:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:25:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:25:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:25:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:25:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:25:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:25:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:25:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:25:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:25:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:25:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:25:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:25:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:25:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:25:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
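
The pg targets printed by the autoscaler above can be reproduced from the logged numbers alone as usage * bias * 300; the factor 300 is inferred here by dividing each logged target by its usage and bias (it is not printed by the module itself), so treat it as an observation about this cluster rather than a documented constant. A quick check:

    # Reproduce the pg_autoscaler targets logged above; usage and bias are
    # copied verbatim from the log, the multiplier 300 is inferred.
    pools = {
        ".mgr":               (7.185749983720779e-06,  1.0),  # logged target 0.0021557...
        "vms":                (1.73878357684759e-05,   1.0),  # logged target 0.0052163...
        "images":             (0.0006714637386478266,  1.0),  # logged target 0.2014391...
        "cephfs.cephfs.meta": (1.3916366864300228e-06, 4.0),  # logged target 0.0016699...
    }
    for name, (usage, bias) in pools.items():
        print(f"{name}: pg target {usage * bias * 300}")
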
Feb 25 08:25:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3175: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:43 np0005629333 nova_compute[244014]: 2026-02-25 13:25:43.936 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:25:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:25:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3176: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:46 np0005629333 nova_compute[244014]: 2026-02-25 13:25:46.475 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:25:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:25:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2341387188' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:25:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:25:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2341387188' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:25:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3177: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:48 np0005629333 nova_compute[244014]: 2026-02-25 13:25:48.940 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:25:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:25:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3178: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:51 np0005629333 nova_compute[244014]: 2026-02-25 13:25:51.477 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:25:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3179: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3180: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:53 np0005629333 nova_compute[244014]: 2026-02-25 13:25:53.943 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:25:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:25:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:25:55.069 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:25:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:25:55.070 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:25:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:25:55.070 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:25:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3181: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:56 np0005629333 nova_compute[244014]: 2026-02-25 13:25:56.479 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:25:56 np0005629333 nova_compute[244014]: 2026-02-25 13:25:56.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:25:56 np0005629333 nova_compute[244014]: 2026-02-25 13:25:56.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:25:56 np0005629333 nova_compute[244014]: 2026-02-25 13:25:56.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:25:56 np0005629333 nova_compute[244014]: 2026-02-25 13:25:56.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:25:56 np0005629333 nova_compute[244014]: 2026-02-25 13:25:56.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:25:56 np0005629333 nova_compute[244014]: 2026-02-25 13:25:56.905 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 08:25:56 np0005629333 nova_compute[244014]: 2026-02-25 13:25:56.905 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:25:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:25:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1976471032' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:25:57 np0005629333 nova_compute[244014]: 2026-02-25 13:25:57.495 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
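
The resource-tracker audit above shells out to ceph df and parses its JSON (the mon audit lines show the same command being dispatched). A standalone sketch of that probe, using the exact command string from the log; the JSON keys used below ('stats', 'total_bytes', 'total_avail_bytes') follow the standard ceph df --format=json layout and are an assumption here, since the output itself is not in this log:

    import json
    import subprocess

    # Same command nova_compute logs above.
    out = subprocess.check_output([
        "ceph", "df", "--format=json",
        "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf",
    ])
    df = json.loads(out)
    stats = df["stats"]  # cluster-wide totals; per-pool data is under df["pools"]
    print("total bytes:", stats["total_bytes"])
    print("avail bytes:", stats["total_avail_bytes"])
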
Feb 25 08:25:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3182: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:25:57 np0005629333 nova_compute[244014]: 2026-02-25 13:25:57.655 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 08:25:57 np0005629333 nova_compute[244014]: 2026-02-25 13:25:57.656 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3578MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 08:25:57 np0005629333 nova_compute[244014]: 2026-02-25 13:25:57.657 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:25:57 np0005629333 nova_compute[244014]: 2026-02-25 13:25:57.657 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:25:57 np0005629333 nova_compute[244014]: 2026-02-25 13:25:57.917 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 08:25:57 np0005629333 nova_compute[244014]: 2026-02-25 13:25:57.918 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 08:25:57 np0005629333 nova_compute[244014]: 2026-02-25 13:25:57.991 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 08:25:58 np0005629333 nova_compute[244014]: 2026-02-25 13:25:58.085 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 08:25:58 np0005629333 nova_compute[244014]: 2026-02-25 13:25:58.085 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 08:25:58 np0005629333 nova_compute[244014]: 2026-02-25 13:25:58.100 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 08:25:58 np0005629333 nova_compute[244014]: 2026-02-25 13:25:58.120 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 08:25:58 np0005629333 nova_compute[244014]: 2026-02-25 13:25:58.134 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:25:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:25:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1051834929' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:25:58 np0005629333 nova_compute[244014]: 2026-02-25 13:25:58.701 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:25:58 np0005629333 nova_compute[244014]: 2026-02-25 13:25:58.707 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:25:58 np0005629333 nova_compute[244014]: 2026-02-25 13:25:58.723 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 08:25:58 np0005629333 nova_compute[244014]: 2026-02-25 13:25:58.724 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 08:25:58 np0005629333 nova_compute[244014]: 2026-02-25 13:25:58.724 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:25:58 np0005629333 nova_compute[244014]: 2026-02-25 13:25:58.947 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
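
The Acquiring/acquired/released triples around "compute_resources" above, with their waited/held timings, are emitted by oslo.concurrency's lock decorator at DEBUG level. A minimal sketch of that pattern, assuming oslo.concurrency is installed; the function name below is illustrative, not nova's:

    import logging

    from oslo_concurrency import lockutils

    # With DEBUG logging enabled, lockutils prints the same
    # "Acquiring lock ... / acquired ... waited / released ... held" lines.
    logging.basicConfig(level=logging.DEBUG)

    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        pass  # critical section; nova's ResourceTracker work runs here

    update_available_resource()
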
Feb 25 08:25:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:25:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3183: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:00 np0005629333 podman[400583]: 2026-02-25 13:26:00.73848434 +0000 UTC m=+0.081167802 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 08:26:00 np0005629333 podman[400584]: 2026-02-25 13:26:00.752452134 +0000 UTC m=+0.088279332 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Feb 25 08:26:01 np0005629333 nova_compute[244014]: 2026-02-25 13:26:01.481 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:26:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:26:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:26:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:26:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:26:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:26:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:26:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3184: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3185: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:03 np0005629333 nova_compute[244014]: 2026-02-25 13:26:03.724 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:26:03 np0005629333 nova_compute[244014]: 2026-02-25 13:26:03.725 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 08:26:03 np0005629333 nova_compute[244014]: 2026-02-25 13:26:03.725 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 08:26:03 np0005629333 nova_compute[244014]: 2026-02-25 13:26:03.742 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 08:26:03 np0005629333 nova_compute[244014]: 2026-02-25 13:26:03.889 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:26:03 np0005629333 nova_compute[244014]: 2026-02-25 13:26:03.950 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:26:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:26:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3186: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:06 np0005629333 nova_compute[244014]: 2026-02-25 13:26:06.483 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:26:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3187: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:07 np0005629333 nova_compute[244014]: 2026-02-25 13:26:07.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:26:07 np0005629333 nova_compute[244014]: 2026-02-25 13:26:07.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 08:26:08 np0005629333 nova_compute[244014]: 2026-02-25 13:26:08.953 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:26:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:26:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3188: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:10 np0005629333 nova_compute[244014]: 2026-02-25 13:26:10.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:26:11 np0005629333 nova_compute[244014]: 2026-02-25 13:26:11.485 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:26:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3189: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:12 np0005629333 nova_compute[244014]: 2026-02-25 13:26:12.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:26:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3190: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:13 np0005629333 nova_compute[244014]: 2026-02-25 13:26:13.957 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:26:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:26:14 np0005629333 nova_compute[244014]: 2026-02-25 13:26:14.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:26:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3191: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:26:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:26:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:26:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:26:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:26:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:26:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:26:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:26:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:26:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:26:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:26:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:26:15 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:26:15 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:26:15 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:26:16 np0005629333 podman[400774]: 2026-02-25 13:26:16.080097294 +0000 UTC m=+0.060297553 container create fe5e1358e7e42308aefe07e33ece0639b7e48ad6a8e3633db34966abf4950954 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_carson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Feb 25 08:26:16 np0005629333 podman[400774]: 2026-02-25 13:26:16.044347655 +0000 UTC m=+0.024547924 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:26:16 np0005629333 systemd[1]: Started libpod-conmon-fe5e1358e7e42308aefe07e33ece0639b7e48ad6a8e3633db34966abf4950954.scope.
Feb 25 08:26:16 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:26:16 np0005629333 podman[400774]: 2026-02-25 13:26:16.201424668 +0000 UTC m=+0.181624957 container init fe5e1358e7e42308aefe07e33ece0639b7e48ad6a8e3633db34966abf4950954 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_carson, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 08:26:16 np0005629333 podman[400774]: 2026-02-25 13:26:16.209097844 +0000 UTC m=+0.189298113 container start fe5e1358e7e42308aefe07e33ece0639b7e48ad6a8e3633db34966abf4950954 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Feb 25 08:26:16 np0005629333 happy_carson[400791]: 167 167
Feb 25 08:26:16 np0005629333 podman[400774]: 2026-02-25 13:26:16.214754724 +0000 UTC m=+0.194955043 container attach fe5e1358e7e42308aefe07e33ece0639b7e48ad6a8e3633db34966abf4950954 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_carson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:26:16 np0005629333 systemd[1]: libpod-fe5e1358e7e42308aefe07e33ece0639b7e48ad6a8e3633db34966abf4950954.scope: Deactivated successfully.
Feb 25 08:26:16 np0005629333 podman[400774]: 2026-02-25 13:26:16.215523176 +0000 UTC m=+0.195723445 container died fe5e1358e7e42308aefe07e33ece0639b7e48ad6a8e3633db34966abf4950954 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 08:26:16 np0005629333 systemd[1]: var-lib-containers-storage-overlay-01adfa051313829154f07c00f5756cbfc125bcf8480ffbbbf07d3cea68dcbca9-merged.mount: Deactivated successfully.
Feb 25 08:26:16 np0005629333 podman[400774]: 2026-02-25 13:26:16.374200554 +0000 UTC m=+0.354400783 container remove fe5e1358e7e42308aefe07e33ece0639b7e48ad6a8e3633db34966abf4950954 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_carson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Feb 25 08:26:16 np0005629333 systemd[1]: libpod-conmon-fe5e1358e7e42308aefe07e33ece0639b7e48ad6a8e3633db34966abf4950954.scope: Deactivated successfully.
Feb 25 08:26:16 np0005629333 nova_compute[244014]: 2026-02-25 13:26:16.487 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:26:16 np0005629333 podman[400817]: 2026-02-25 13:26:16.511435987 +0000 UTC m=+0.056097374 container create 6fdee2a0c5d6fbc37ab99933101c634395063c43e943ee3bde259291cefd52a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:26:16 np0005629333 podman[400817]: 2026-02-25 13:26:16.475718599 +0000 UTC m=+0.020380036 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:26:16 np0005629333 systemd[1]: Started libpod-conmon-6fdee2a0c5d6fbc37ab99933101c634395063c43e943ee3bde259291cefd52a1.scope.
Feb 25 08:26:16 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:26:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b6c7656cf1c2d8f0dedc7d72d4770c34621bd885c3e346d2b9263f2150cbb07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:26:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b6c7656cf1c2d8f0dedc7d72d4770c34621bd885c3e346d2b9263f2150cbb07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:26:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b6c7656cf1c2d8f0dedc7d72d4770c34621bd885c3e346d2b9263f2150cbb07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:26:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b6c7656cf1c2d8f0dedc7d72d4770c34621bd885c3e346d2b9263f2150cbb07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:26:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b6c7656cf1c2d8f0dedc7d72d4770c34621bd885c3e346d2b9263f2150cbb07/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:26:16 np0005629333 podman[400817]: 2026-02-25 13:26:16.786214962 +0000 UTC m=+0.330876399 container init 6fdee2a0c5d6fbc37ab99933101c634395063c43e943ee3bde259291cefd52a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 08:26:16 np0005629333 podman[400817]: 2026-02-25 13:26:16.795208256 +0000 UTC m=+0.339869633 container start 6fdee2a0c5d6fbc37ab99933101c634395063c43e943ee3bde259291cefd52a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_brattain, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:26:16 np0005629333 podman[400817]: 2026-02-25 13:26:16.813339748 +0000 UTC m=+0.358001135 container attach 6fdee2a0c5d6fbc37ab99933101c634395063c43e943ee3bde259291cefd52a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 08:26:17 np0005629333 fervent_brattain[400833]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:26:17 np0005629333 fervent_brattain[400833]: --> All data devices are unavailable
Feb 25 08:26:17 np0005629333 systemd[1]: libpod-6fdee2a0c5d6fbc37ab99933101c634395063c43e943ee3bde259291cefd52a1.scope: Deactivated successfully.
Feb 25 08:26:17 np0005629333 podman[400817]: 2026-02-25 13:26:17.256074033 +0000 UTC m=+0.800735420 container died 6fdee2a0c5d6fbc37ab99933101c634395063c43e943ee3bde259291cefd52a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_brattain, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 25 08:26:17 np0005629333 systemd[1]: var-lib-containers-storage-overlay-4b6c7656cf1c2d8f0dedc7d72d4770c34621bd885c3e346d2b9263f2150cbb07-merged.mount: Deactivated successfully.
Feb 25 08:26:17 np0005629333 podman[400817]: 2026-02-25 13:26:17.561208405 +0000 UTC m=+1.105869782 container remove 6fdee2a0c5d6fbc37ab99933101c634395063c43e943ee3bde259291cefd52a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:26:17 np0005629333 systemd[1]: libpod-conmon-6fdee2a0c5d6fbc37ab99933101c634395063c43e943ee3bde259291cefd52a1.scope: Deactivated successfully.
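The create/start/attach/died/remove burst above is cephadm launching a short-lived ceph-volume probe inside the ceph image; the two fervent_brattain "-->" lines are that probe's output, reporting that all three LVM-backed candidate devices already carry OSDs. A minimal sketch of an equivalent manual probe, using the image digest from the log; the exact ceph-volume subcommand cephadm invoked is not shown, so `inventory` stands in here as an assumption:

```python
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

# Run a throwaway container, as cephadm does, and ask ceph-volume what it
# sees on the host. --privileged and /dev are needed so the tool can
# inspect block devices and LVM metadata.
subprocess.run(
    ["podman", "run", "--rm", "--privileged", "-v", "/dev:/dev",
     "--entrypoint", "ceph-volume", IMAGE, "inventory", "--format", "json"],
    check=True,
)
```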
Feb 25 08:26:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3192: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:18 np0005629333 podman[400928]: 2026-02-25 13:26:18.072224427 +0000 UTC m=+0.053319866 container create 566825a9b651ebe2ffad6195cce33359797d6554ddf20d91823b2b4a78504a4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 08:26:18 np0005629333 systemd[1]: Started libpod-conmon-566825a9b651ebe2ffad6195cce33359797d6554ddf20d91823b2b4a78504a4f.scope.
Feb 25 08:26:18 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:26:18 np0005629333 podman[400928]: 2026-02-25 13:26:18.124433371 +0000 UTC m=+0.105528690 container init 566825a9b651ebe2ffad6195cce33359797d6554ddf20d91823b2b4a78504a4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 08:26:18 np0005629333 podman[400928]: 2026-02-25 13:26:18.131709636 +0000 UTC m=+0.112804955 container start 566825a9b651ebe2ffad6195cce33359797d6554ddf20d91823b2b4a78504a4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:26:18 np0005629333 podman[400928]: 2026-02-25 13:26:18.135434631 +0000 UTC m=+0.116529960 container attach 566825a9b651ebe2ffad6195cce33359797d6554ddf20d91823b2b4a78504a4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_vaughan, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:26:18 np0005629333 adoring_vaughan[400944]: 167 167
Feb 25 08:26:18 np0005629333 systemd[1]: libpod-566825a9b651ebe2ffad6195cce33359797d6554ddf20d91823b2b4a78504a4f.scope: Deactivated successfully.
Feb 25 08:26:18 np0005629333 podman[400928]: 2026-02-25 13:26:18.136398078 +0000 UTC m=+0.117493427 container died 566825a9b651ebe2ffad6195cce33359797d6554ddf20d91823b2b4a78504a4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_vaughan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:26:18 np0005629333 podman[400928]: 2026-02-25 13:26:18.04505765 +0000 UTC m=+0.026153019 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:26:18 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c14fc64b8203ca1328886bf9388603627280efaa971a3ce8d5fadcbd55c63fa0-merged.mount: Deactivated successfully.
Feb 25 08:26:18 np0005629333 podman[400928]: 2026-02-25 13:26:18.180902574 +0000 UTC m=+0.161997923 container remove 566825a9b651ebe2ffad6195cce33359797d6554ddf20d91823b2b4a78504a4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:26:18 np0005629333 systemd[1]: libpod-conmon-566825a9b651ebe2ffad6195cce33359797d6554ddf20d91823b2b4a78504a4f.scope: Deactivated successfully.
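The single line adoring_vaughan printed, "167 167", is consistent with a uid/gid probe: cephadm inspects the image to learn which uid/gid the ceph user maps to (167:167 in these images) before chowning host data directories. A hedged sketch of such a probe; the stat target path is an assumption:

```python
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

# Ask the image which uid/gid owns the ceph data directory inside it.
out = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
     "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(out)  # expected: "167 167", matching the log line above
```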
Feb 25 08:26:18 np0005629333 podman[400967]: 2026-02-25 13:26:18.338110581 +0000 UTC m=+0.047869792 container create 5dfca3591cbd34a5ac727fbf49681cf554ef59fd9083d6067a18851826aa9782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:26:18 np0005629333 systemd[1]: Started libpod-conmon-5dfca3591cbd34a5ac727fbf49681cf554ef59fd9083d6067a18851826aa9782.scope.
Feb 25 08:26:18 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:26:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e2e63ec4e7e42ee52b9931607c5082e8f23dcd6dd33086948ca9a0f6b3e849/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:26:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e2e63ec4e7e42ee52b9931607c5082e8f23dcd6dd33086948ca9a0f6b3e849/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:26:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e2e63ec4e7e42ee52b9931607c5082e8f23dcd6dd33086948ca9a0f6b3e849/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:26:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e2e63ec4e7e42ee52b9931607c5082e8f23dcd6dd33086948ca9a0f6b3e849/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:26:18 np0005629333 podman[400967]: 2026-02-25 13:26:18.317798288 +0000 UTC m=+0.027557499 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:26:18 np0005629333 podman[400967]: 2026-02-25 13:26:18.445168062 +0000 UTC m=+0.154927333 container init 5dfca3591cbd34a5ac727fbf49681cf554ef59fd9083d6067a18851826aa9782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:26:18 np0005629333 podman[400967]: 2026-02-25 13:26:18.453816147 +0000 UTC m=+0.163575328 container start 5dfca3591cbd34a5ac727fbf49681cf554ef59fd9083d6067a18851826aa9782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:26:18 np0005629333 podman[400967]: 2026-02-25 13:26:18.463301844 +0000 UTC m=+0.173061055 container attach 5dfca3591cbd34a5ac727fbf49681cf554ef59fd9083d6067a18851826aa9782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chaplygin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]: {
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:    "0": [
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:        {
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "devices": [
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "/dev/loop3"
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            ],
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "lv_name": "ceph_lv0",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "lv_size": "21470642176",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "name": "ceph_lv0",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "tags": {
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.cluster_name": "ceph",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.crush_device_class": "",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.encrypted": "0",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.objectstore": "bluestore",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.osd_id": "0",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.type": "block",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.vdo": "0",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.with_tpm": "0"
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            },
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "type": "block",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "vg_name": "ceph_vg0"
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:        }
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:    ],
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:    "1": [
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:        {
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "devices": [
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "/dev/loop4"
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            ],
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "lv_name": "ceph_lv1",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "lv_size": "21470642176",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "name": "ceph_lv1",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "tags": {
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.cluster_name": "ceph",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.crush_device_class": "",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.encrypted": "0",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.objectstore": "bluestore",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.osd_id": "1",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.type": "block",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.vdo": "0",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.with_tpm": "0"
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            },
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "type": "block",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "vg_name": "ceph_vg1"
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:        }
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:    ],
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:    "2": [
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:        {
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "devices": [
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "/dev/loop5"
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            ],
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "lv_name": "ceph_lv2",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "lv_size": "21470642176",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "name": "ceph_lv2",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "tags": {
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.cluster_name": "ceph",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.crush_device_class": "",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.encrypted": "0",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.objectstore": "bluestore",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.osd_id": "2",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.type": "block",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.vdo": "0",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:                "ceph.with_tpm": "0"
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            },
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "type": "block",
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:            "vg_name": "ceph_vg2"
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:        }
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]:    ]
Feb 25 08:26:18 np0005629333 tender_chaplygin[400984]: }
Feb 25 08:26:18 np0005629333 systemd[1]: libpod-5dfca3591cbd34a5ac727fbf49681cf554ef59fd9083d6067a18851826aa9782.scope: Deactivated successfully.
Feb 25 08:26:18 np0005629333 podman[400993]: 2026-02-25 13:26:18.808208998 +0000 UTC m=+0.036760228 container died 5dfca3591cbd34a5ac727fbf49681cf554ef59fd9083d6067a18851826aa9782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:26:18 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a4e2e63ec4e7e42ee52b9931607c5082e8f23dcd6dd33086948ca9a0f6b3e849-merged.mount: Deactivated successfully.
Feb 25 08:26:18 np0005629333 podman[400993]: 2026-02-25 13:26:18.85043747 +0000 UTC m=+0.078988680 container remove 5dfca3591cbd34a5ac727fbf49681cf554ef59fd9083d6067a18851826aa9782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 08:26:18 np0005629333 systemd[1]: libpod-conmon-5dfca3591cbd34a5ac727fbf49681cf554ef59fd9083d6067a18851826aa9782.scope: Deactivated successfully.
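The JSON block tender_chaplygin emitted is `ceph-volume lvm list --format json` output, keyed by OSD id. A short sketch of consuming that structure, assuming the output above was captured to a file (the filename is hypothetical); field names are verbatim from the log:

```python
import json

# Hypothetical capture of the tender_chaplygin output above.
with open("ceph-volume-lvm-list.json") as f:
    osds = json.load(f)

# Map each OSD id to its backing LV, physical device, and key tags.
for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
              f"(osd_fsid={tags['ceph.osd_fsid']}, "
              f"objectstore={tags['ceph.objectstore']})")
```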
Feb 25 08:26:18 np0005629333 nova_compute[244014]: 2026-02-25 13:26:18.962 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:26:19 np0005629333 podman[401072]: 2026-02-25 13:26:19.326170586 +0000 UTC m=+0.048790128 container create 4d222772e6c8ba9cd5ad2645bb2b463d6058f9c235683e091a14db83c68cdf29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:26:19 np0005629333 systemd[1]: Started libpod-conmon-4d222772e6c8ba9cd5ad2645bb2b463d6058f9c235683e091a14db83c68cdf29.scope.
Feb 25 08:26:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:26:19 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:26:19 np0005629333 podman[401072]: 2026-02-25 13:26:19.303412333 +0000 UTC m=+0.026031915 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:26:19 np0005629333 podman[401072]: 2026-02-25 13:26:19.406306817 +0000 UTC m=+0.128926449 container init 4d222772e6c8ba9cd5ad2645bb2b463d6058f9c235683e091a14db83c68cdf29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_beaver, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 08:26:19 np0005629333 podman[401072]: 2026-02-25 13:26:19.411240117 +0000 UTC m=+0.133859689 container start 4d222772e6c8ba9cd5ad2645bb2b463d6058f9c235683e091a14db83c68cdf29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_beaver, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:26:19 np0005629333 podman[401072]: 2026-02-25 13:26:19.414947471 +0000 UTC m=+0.137567103 container attach 4d222772e6c8ba9cd5ad2645bb2b463d6058f9c235683e091a14db83c68cdf29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:26:19 np0005629333 amazing_beaver[401088]: 167 167
Feb 25 08:26:19 np0005629333 systemd[1]: libpod-4d222772e6c8ba9cd5ad2645bb2b463d6058f9c235683e091a14db83c68cdf29.scope: Deactivated successfully.
Feb 25 08:26:19 np0005629333 podman[401072]: 2026-02-25 13:26:19.417130933 +0000 UTC m=+0.139750505 container died 4d222772e6c8ba9cd5ad2645bb2b463d6058f9c235683e091a14db83c68cdf29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_beaver, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 08:26:19 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6d27586696e125ab9a0321af42179fd80826e875552d46471f093f866d1ff37a-merged.mount: Deactivated successfully.
Feb 25 08:26:19 np0005629333 podman[401072]: 2026-02-25 13:26:19.460184938 +0000 UTC m=+0.182804500 container remove 4d222772e6c8ba9cd5ad2645bb2b463d6058f9c235683e091a14db83c68cdf29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_beaver, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 08:26:19 np0005629333 systemd[1]: libpod-conmon-4d222772e6c8ba9cd5ad2645bb2b463d6058f9c235683e091a14db83c68cdf29.scope: Deactivated successfully.
Feb 25 08:26:19 np0005629333 podman[401112]: 2026-02-25 13:26:19.643300656 +0000 UTC m=+0.060622892 container create e65082168cdad43ff2055abd7f7f840cdd179b8ea25d7ccae84dcac7a4581f51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 08:26:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3193: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:19 np0005629333 systemd[1]: Started libpod-conmon-e65082168cdad43ff2055abd7f7f840cdd179b8ea25d7ccae84dcac7a4581f51.scope.
Feb 25 08:26:19 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:26:19 np0005629333 podman[401112]: 2026-02-25 13:26:19.622913221 +0000 UTC m=+0.040235487 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:26:19 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09c9d7734a4ea9fca85ce1d912d1e3cce9934536ded80be0fdd3f40016b6613e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:26:19 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09c9d7734a4ea9fca85ce1d912d1e3cce9934536ded80be0fdd3f40016b6613e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:26:19 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09c9d7734a4ea9fca85ce1d912d1e3cce9934536ded80be0fdd3f40016b6613e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:26:19 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09c9d7734a4ea9fca85ce1d912d1e3cce9934536ded80be0fdd3f40016b6613e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:26:19 np0005629333 podman[401112]: 2026-02-25 13:26:19.737235737 +0000 UTC m=+0.154558013 container init e65082168cdad43ff2055abd7f7f840cdd179b8ea25d7ccae84dcac7a4581f51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_brown, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:26:19 np0005629333 podman[401112]: 2026-02-25 13:26:19.749845893 +0000 UTC m=+0.167168149 container start e65082168cdad43ff2055abd7f7f840cdd179b8ea25d7ccae84dcac7a4581f51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 08:26:19 np0005629333 podman[401112]: 2026-02-25 13:26:19.756323446 +0000 UTC m=+0.173645752 container attach e65082168cdad43ff2055abd7f7f840cdd179b8ea25d7ccae84dcac7a4581f51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_brown, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:26:20 np0005629333 lvm[401207]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:26:20 np0005629333 lvm[401207]: VG ceph_vg0 finished
Feb 25 08:26:20 np0005629333 lvm[401208]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:26:20 np0005629333 lvm[401208]: VG ceph_vg1 finished
Feb 25 08:26:20 np0005629333 lvm[401210]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:26:20 np0005629333 lvm[401210]: VG ceph_vg2 finished
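The three lvm[...] pairs above are LVM event-based activation: a udev-triggered pvscan marks each PV online, and once the last PV of a VG arrives the VG is declared complete and autoactivated. A sketch confirming the result via LVM's JSON reporting, assuming stock lvs field names; VG names come from the log:

```python
import json
import subprocess

# lvs supports machine-readable output via --reportformat json.
report = json.loads(subprocess.run(
    ["lvs", "--reportformat", "json",
     "-o", "vg_name,lv_name,lv_active,devices"],
    capture_output=True, text=True, check=True,
).stdout)

for lv in report["report"][0]["lv"]:
    if lv["vg_name"] in ("ceph_vg0", "ceph_vg1", "ceph_vg2"):
        print(lv["vg_name"], lv["lv_name"], lv["lv_active"], lv["devices"])
```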
Feb 25 08:26:20 np0005629333 ecstatic_brown[401129]: {}
Feb 25 08:26:20 np0005629333 systemd[1]: libpod-e65082168cdad43ff2055abd7f7f840cdd179b8ea25d7ccae84dcac7a4581f51.scope: Deactivated successfully.
Feb 25 08:26:20 np0005629333 systemd[1]: libpod-e65082168cdad43ff2055abd7f7f840cdd179b8ea25d7ccae84dcac7a4581f51.scope: Consumed 1.231s CPU time.
Feb 25 08:26:20 np0005629333 podman[401112]: 2026-02-25 13:26:20.561990844 +0000 UTC m=+0.979313110 container died e65082168cdad43ff2055abd7f7f840cdd179b8ea25d7ccae84dcac7a4581f51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_brown, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Feb 25 08:26:20 np0005629333 systemd[1]: var-lib-containers-storage-overlay-09c9d7734a4ea9fca85ce1d912d1e3cce9934536ded80be0fdd3f40016b6613e-merged.mount: Deactivated successfully.
Feb 25 08:26:20 np0005629333 podman[401112]: 2026-02-25 13:26:20.811592618 +0000 UTC m=+1.228914884 container remove e65082168cdad43ff2055abd7f7f840cdd179b8ea25d7ccae84dcac7a4581f51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_brown, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:26:20 np0005629333 systemd[1]: libpod-conmon-e65082168cdad43ff2055abd7f7f840cdd179b8ea25d7ccae84dcac7a4581f51.scope: Deactivated successfully.
Feb 25 08:26:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:26:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:26:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:26:20 np0005629333 nova_compute[244014]: 2026-02-25 13:26:20.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:26:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:26:20 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:26:20 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:26:21 np0005629333 nova_compute[244014]: 2026-02-25 13:26:21.522 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:26:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3194: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3195: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:23 np0005629333 nova_compute[244014]: 2026-02-25 13:26:23.965 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:26:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:26:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3196: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:26 np0005629333 nova_compute[244014]: 2026-02-25 13:26:26.522 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:26:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3197: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:28 np0005629333 nova_compute[244014]: 2026-02-25 13:26:28.873 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:26:28 np0005629333 nova_compute[244014]: 2026-02-25 13:26:28.969 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:26:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:26:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3198: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:26:31
Feb 25 08:26:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:26:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:26:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'volumes', 'backups', 'images']
Feb 25 08:26:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
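The balancer lines record one no-op optimization pass: in upmap mode the module only moves data while the misplaced fraction stays under the configured maximum (0.05 here), and with all 305 PGs active+clean there is nothing to improve, hence "prepared 0/10 upmap changes". A toy sketch of that gate; the real module measures misplaced objects rather than PG counts, so this is an approximation:

```python
# PG state counts taken from the pgmap lines in this log.
pg_states = {"active+clean": 305}

misplaced = sum(n for s, n in pg_states.items() if "misplaced" in s)
ratio = misplaced / sum(pg_states.values())

MAX_MISPLACED = 0.05  # "Mode upmap, max misplaced 0.050000"
if ratio >= MAX_MISPLACED:
    print("too much data already in flight; skip this pass")
else:
    print(f"misplaced ratio {ratio:.3f} is under {MAX_MISPLACED}; "
          "propose up to 10 upmap changes (0 needed here)")
```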
Feb 25 08:26:31 np0005629333 nova_compute[244014]: 2026-02-25 13:26:31.525 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:26:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:26:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:26:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:26:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:26:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:26:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:26:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3199: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:31 np0005629333 podman[401252]: 2026-02-25 13:26:31.739794761 +0000 UTC m=+0.078649751 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 25 08:26:31 np0005629333 podman[401253]: 2026-02-25 13:26:31.802952634 +0000 UTC m=+0.141665870 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
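The two health_status=healthy events above are podman's scheduled execution of each container's configured check, the 'healthcheck' entry in the embedded config_data ('test': '/openstack/healthcheck'). The same check can be invoked on demand; container names come from the log:

```python
import subprocess

# podman healthcheck run executes the container's configured test command;
# exit code 0 means healthy, 1 means unhealthy.
for name in ("ovn_metadata_agent", "ovn_controller"):
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")
```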
Feb 25 08:26:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:26:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:26:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:26:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:26:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:26:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:26:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:26:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:26:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:26:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
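The rbd_support lines show the mgr module reloading its per-pool schedule state for mirror snapshots and trash purging; the bare `start_after=` values indicate no schedules are configured. A sketch listing the same schedules from the CLI, with pool names taken from the log; empty output would match the state shown here:

```python
import subprocess

# Both schedule types the mgr module reloads above can be listed per pool.
for pool in ("vms", "volumes", "backups", "images"):
    subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls",
                    "--pool", pool, "--recursive"])
    subprocess.run(["rbd", "trash", "purge", "schedule", "ls",
                    "--pool", pool, "--recursive"])
```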
Feb 25 08:26:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3200: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:33 np0005629333 nova_compute[244014]: 2026-02-25 13:26:33.972 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:26:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:26:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3201: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:36 np0005629333 nova_compute[244014]: 2026-02-25 13:26:36.527 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:26:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3202: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:38 np0005629333 nova_compute[244014]: 2026-02-25 13:26:38.976 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:26:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:26:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3203: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:41 np0005629333 nova_compute[244014]: 2026-02-25 13:26:41.555 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:26:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3204: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:26:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:26:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:26:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:26:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:26:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:26:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:26:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:26:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:26:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:26:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:26:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:26:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:26:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:26:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:26:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:26:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:26:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:26:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:26:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:26:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:26:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:26:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
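The pg_autoscaler numbers above follow a simple rule: the printed pg target is capacity_ratio x bias x (mon_target_pg_per_osd x number of OSDs). With this cluster's 3 OSDs and the default mon_target_pg_per_osd of 100 the multiplier is 300, which reproduces the logged targets exactly (the trailing 64411926528 in each effective_target_ratio line is the ~60 GiB raw capacity). The ideal is then rounded to a power of two, and pg_num is left alone unless it differs from the ideal by more than the module's threshold (3x by default), which is why every pool stays at its current value. A worked check, with the constants assumed as stated:

```python
def pg_target(capacity_ratio, bias, num_osds=3, target_pg_per_osd=100):
    # Reproduces the "pg target" figure printed by the autoscaler above.
    return capacity_ratio * bias * target_pg_per_osd * num_osds

print(pg_target(7.185749983720779e-06, 1.0))   # .mgr               -> ~0.0021557249951
print(pg_target(1.3916366864300228e-06, 4.0))  # cephfs.cephfs.meta -> ~0.0016699640237
print(pg_target(0.0006714637386478266, 1.0))   # images             -> ~0.2014391215943
```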
Feb 25 08:26:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3205: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:43 np0005629333 nova_compute[244014]: 2026-02-25 13:26:43.979 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:26:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:26:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3206: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:46 np0005629333 nova_compute[244014]: 2026-02-25 13:26:46.556 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:26:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:26:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3026047940' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:26:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:26:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3026047940' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:26:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3207: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:48 np0005629333 nova_compute[244014]: 2026-02-25 13:26:48.982 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:26:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:26:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3208: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:51 np0005629333 nova_compute[244014]: 2026-02-25 13:26:51.589 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:26:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3209: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3210: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:53 np0005629333 nova_compute[244014]: 2026-02-25 13:26:53.984 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:26:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:26:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:26:55.070 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:26:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:26:55.070 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:26:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:26:55.070 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:26:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3211: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:56 np0005629333 nova_compute[244014]: 2026-02-25 13:26:56.592 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:26:56 np0005629333 nova_compute[244014]: 2026-02-25 13:26:56.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:26:56 np0005629333 nova_compute[244014]: 2026-02-25 13:26:56.898 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:26:56 np0005629333 nova_compute[244014]: 2026-02-25 13:26:56.898 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:26:56 np0005629333 nova_compute[244014]: 2026-02-25 13:26:56.898 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:26:56 np0005629333 nova_compute[244014]: 2026-02-25 13:26:56.898 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 08:26:56 np0005629333 nova_compute[244014]: 2026-02-25 13:26:56.899 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:26:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:26:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2786108979' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:26:57 np0005629333 nova_compute[244014]: 2026-02-25 13:26:57.451 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:26:57 np0005629333 nova_compute[244014]: 2026-02-25 13:26:57.658 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 08:26:57 np0005629333 nova_compute[244014]: 2026-02-25 13:26:57.659 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3581MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 08:26:57 np0005629333 nova_compute[244014]: 2026-02-25 13:26:57.660 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:26:57 np0005629333 nova_compute[244014]: 2026-02-25 13:26:57.660 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:26:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3212: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:26:57 np0005629333 nova_compute[244014]: 2026-02-25 13:26:57.725 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 08:26:57 np0005629333 nova_compute[244014]: 2026-02-25 13:26:57.725 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 08:26:57 np0005629333 nova_compute[244014]: 2026-02-25 13:26:57.742 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:26:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:26:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/978799601' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:26:58 np0005629333 nova_compute[244014]: 2026-02-25 13:26:58.279 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:26:58 np0005629333 nova_compute[244014]: 2026-02-25 13:26:58.286 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:26:58 np0005629333 nova_compute[244014]: 2026-02-25 13:26:58.318 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 08:26:58 np0005629333 nova_compute[244014]: 2026-02-25 13:26:58.321 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 08:26:58 np0005629333 nova_compute[244014]: 2026-02-25 13:26:58.321 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:26:58 np0005629333 nova_compute[244014]: 2026-02-25 13:26:58.986 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:26:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:26:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3213: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:00 np0005629333 nova_compute[244014]: 2026-02-25 13:27:00.323 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:27:01 np0005629333 nova_compute[244014]: 2026-02-25 13:27:01.594 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:27:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:27:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:27:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:27:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:27:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:27:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:27:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3214: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:01 np0005629333 nova_compute[244014]: 2026-02-25 13:27:01.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:27:01 np0005629333 nova_compute[244014]: 2026-02-25 13:27:01.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 08:27:01 np0005629333 nova_compute[244014]: 2026-02-25 13:27:01.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 08:27:01 np0005629333 nova_compute[244014]: 2026-02-25 13:27:01.924 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 08:27:02 np0005629333 podman[401343]: 2026-02-25 13:27:02.721715462 +0000 UTC m=+0.063212365 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:27:02 np0005629333 podman[401344]: 2026-02-25 13:27:02.789594868 +0000 UTC m=+0.126597814 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 08:27:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3215: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:03 np0005629333 nova_compute[244014]: 2026-02-25 13:27:03.919 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:27:03 np0005629333 nova_compute[244014]: 2026-02-25 13:27:03.989 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:27:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:27:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3216: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:06 np0005629333 nova_compute[244014]: 2026-02-25 13:27:06.597 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:27:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3217: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:08 np0005629333 nova_compute[244014]: 2026-02-25 13:27:08.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:27:08 np0005629333 nova_compute[244014]: 2026-02-25 13:27:08.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 08:27:08 np0005629333 nova_compute[244014]: 2026-02-25 13:27:08.992 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:27:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:27:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3218: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:11 np0005629333 nova_compute[244014]: 2026-02-25 13:27:11.599 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:27:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3219: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:12 np0005629333 nova_compute[244014]: 2026-02-25 13:27:12.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:27:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3220: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:13 np0005629333 nova_compute[244014]: 2026-02-25 13:27:13.995 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:27:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:27:14 np0005629333 nova_compute[244014]: 2026-02-25 13:27:14.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:27:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3221: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:16 np0005629333 nova_compute[244014]: 2026-02-25 13:27:16.601 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:27:16 np0005629333 nova_compute[244014]: 2026-02-25 13:27:16.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:27:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3222: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:18 np0005629333 nova_compute[244014]: 2026-02-25 13:27:18.999 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:27:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3223: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.841969) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026039842002, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 2042, "num_deletes": 251, "total_data_size": 3448554, "memory_usage": 3505608, "flush_reason": "Manual Compaction"}
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026039858613, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 3381859, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65903, "largest_seqno": 67944, "table_properties": {"data_size": 3372522, "index_size": 5894, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18497, "raw_average_key_size": 20, "raw_value_size": 3354061, "raw_average_value_size": 3633, "num_data_blocks": 261, "num_entries": 923, "num_filter_entries": 923, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772025812, "oldest_key_time": 1772025812, "file_creation_time": 1772026039, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 16899 microseconds, and 5081 cpu microseconds.
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.858848) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 3381859 bytes OK
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.858928) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.861435) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.861460) EVENT_LOG_v1 {"time_micros": 1772026039861451, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.861494) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 3440033, prev total WAL file size 3440033, number of live WAL files 2.
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.862900) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(3302KB)], [158(9115KB)]
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026039862982, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 12716035, "oldest_snapshot_seqno": -1}
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 8559 keys, 10939832 bytes, temperature: kUnknown
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026039929087, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 10939832, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10884785, "index_size": 32534, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21445, "raw_key_size": 224067, "raw_average_key_size": 26, "raw_value_size": 10734144, "raw_average_value_size": 1254, "num_data_blocks": 1260, "num_entries": 8559, "num_filter_entries": 8559, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772026039, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.929393) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 10939832 bytes
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.930935) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 192.1 rd, 165.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.9 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 9073, records dropped: 514 output_compression: NoCompression
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.930960) EVENT_LOG_v1 {"time_micros": 1772026039930948, "job": 98, "event": "compaction_finished", "compaction_time_micros": 66206, "compaction_time_cpu_micros": 29981, "output_level": 6, "num_output_files": 1, "total_output_size": 10939832, "num_input_records": 9073, "num_output_records": 8559, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026039931634, "job": 98, "event": "table_file_deletion", "file_number": 160}
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026039933110, "job": 98, "event": "table_file_deletion", "file_number": 158}
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.862777) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.933208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.933215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.933217) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.933219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:27:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:27:19.933220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:27:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:27:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:27:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:27:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:27:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:27:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:27:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:27:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:27:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:27:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:27:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:27:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:27:21 np0005629333 nova_compute[244014]: 2026-02-25 13:27:21.604 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:27:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3224: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:21 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:27:21 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:27:21 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:27:21 np0005629333 nova_compute[244014]: 2026-02-25 13:27:21.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:27:21 np0005629333 podman[401533]: 2026-02-25 13:27:21.953930072 +0000 UTC m=+0.050951439 container create 904697578e47b3b4eb426ec5a4305d1916c1fac21f65f2fe2eeb2dfb86e10614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_brattain, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 08:27:22 np0005629333 systemd[1]: Started libpod-conmon-904697578e47b3b4eb426ec5a4305d1916c1fac21f65f2fe2eeb2dfb86e10614.scope.
Feb 25 08:27:22 np0005629333 podman[401533]: 2026-02-25 13:27:21.929887703 +0000 UTC m=+0.026908980 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:27:22 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:27:22 np0005629333 podman[401533]: 2026-02-25 13:27:22.052965307 +0000 UTC m=+0.149986584 container init 904697578e47b3b4eb426ec5a4305d1916c1fac21f65f2fe2eeb2dfb86e10614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_brattain, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 08:27:22 np0005629333 podman[401533]: 2026-02-25 13:27:22.058119532 +0000 UTC m=+0.155140729 container start 904697578e47b3b4eb426ec5a4305d1916c1fac21f65f2fe2eeb2dfb86e10614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 08:27:22 np0005629333 podman[401533]: 2026-02-25 13:27:22.061258891 +0000 UTC m=+0.158280118 container attach 904697578e47b3b4eb426ec5a4305d1916c1fac21f65f2fe2eeb2dfb86e10614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 08:27:22 np0005629333 recursing_brattain[401549]: 167 167
Feb 25 08:27:22 np0005629333 systemd[1]: libpod-904697578e47b3b4eb426ec5a4305d1916c1fac21f65f2fe2eeb2dfb86e10614.scope: Deactivated successfully.
Feb 25 08:27:22 np0005629333 conmon[401549]: conmon 904697578e47b3b4eb42 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-904697578e47b3b4eb426ec5a4305d1916c1fac21f65f2fe2eeb2dfb86e10614.scope/container/memory.events
Feb 25 08:27:22 np0005629333 podman[401533]: 2026-02-25 13:27:22.067173188 +0000 UTC m=+0.164194425 container died 904697578e47b3b4eb426ec5a4305d1916c1fac21f65f2fe2eeb2dfb86e10614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_brattain, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:27:22 np0005629333 systemd[1]: var-lib-containers-storage-overlay-4dbee3334885b4f9d9a202c3e1819c942cf16115e5a537044ce0692c4ff7fb63-merged.mount: Deactivated successfully.
Feb 25 08:27:22 np0005629333 podman[401533]: 2026-02-25 13:27:22.116907781 +0000 UTC m=+0.213928978 container remove 904697578e47b3b4eb426ec5a4305d1916c1fac21f65f2fe2eeb2dfb86e10614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 08:27:22 np0005629333 systemd[1]: libpod-conmon-904697578e47b3b4eb426ec5a4305d1916c1fac21f65f2fe2eeb2dfb86e10614.scope: Deactivated successfully.
Feb 25 08:27:22 np0005629333 podman[401573]: 2026-02-25 13:27:22.281043584 +0000 UTC m=+0.055896409 container create faefb34d508be619fd3f87c6b67b86e4f3d7d1d1b9dd86cfbe153730583d0107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 08:27:22 np0005629333 systemd[1]: Started libpod-conmon-faefb34d508be619fd3f87c6b67b86e4f3d7d1d1b9dd86cfbe153730583d0107.scope.
Feb 25 08:27:22 np0005629333 podman[401573]: 2026-02-25 13:27:22.259529927 +0000 UTC m=+0.034382822 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:27:22 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:27:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e493027bc6a9c3354444a8298bb0fff2a713b755f49b00db3b82f28707764b5a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:27:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e493027bc6a9c3354444a8298bb0fff2a713b755f49b00db3b82f28707764b5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:27:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e493027bc6a9c3354444a8298bb0fff2a713b755f49b00db3b82f28707764b5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:27:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e493027bc6a9c3354444a8298bb0fff2a713b755f49b00db3b82f28707764b5a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:27:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e493027bc6a9c3354444a8298bb0fff2a713b755f49b00db3b82f28707764b5a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:27:22 np0005629333 podman[401573]: 2026-02-25 13:27:22.381620702 +0000 UTC m=+0.156473597 container init faefb34d508be619fd3f87c6b67b86e4f3d7d1d1b9dd86cfbe153730583d0107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mclaren, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:27:22 np0005629333 podman[401573]: 2026-02-25 13:27:22.396246355 +0000 UTC m=+0.171099210 container start faefb34d508be619fd3f87c6b67b86e4f3d7d1d1b9dd86cfbe153730583d0107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 08:27:22 np0005629333 podman[401573]: 2026-02-25 13:27:22.401259877 +0000 UTC m=+0.176112782 container attach faefb34d508be619fd3f87c6b67b86e4f3d7d1d1b9dd86cfbe153730583d0107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True)
Feb 25 08:27:22 np0005629333 competent_mclaren[401590]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:27:22 np0005629333 competent_mclaren[401590]: --> All data devices are unavailable
Feb 25 08:27:22 np0005629333 systemd[1]: libpod-faefb34d508be619fd3f87c6b67b86e4f3d7d1d1b9dd86cfbe153730583d0107.scope: Deactivated successfully.
Feb 25 08:27:22 np0005629333 podman[401610]: 2026-02-25 13:27:22.939388924 +0000 UTC m=+0.036030268 container died faefb34d508be619fd3f87c6b67b86e4f3d7d1d1b9dd86cfbe153730583d0107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 08:27:22 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e493027bc6a9c3354444a8298bb0fff2a713b755f49b00db3b82f28707764b5a-merged.mount: Deactivated successfully.
Feb 25 08:27:22 np0005629333 podman[401610]: 2026-02-25 13:27:22.98742397 +0000 UTC m=+0.084065294 container remove faefb34d508be619fd3f87c6b67b86e4f3d7d1d1b9dd86cfbe153730583d0107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mclaren, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 08:27:22 np0005629333 systemd[1]: libpod-conmon-faefb34d508be619fd3f87c6b67b86e4f3d7d1d1b9dd86cfbe153730583d0107.scope: Deactivated successfully.
Feb 25 08:27:23 np0005629333 podman[401689]: 2026-02-25 13:27:23.419453973 +0000 UTC m=+0.053848451 container create 09b54eecc301d3e15a5717bab3ccdcad2fdaa50b2254e658bae92306f4f4c57a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 08:27:23 np0005629333 systemd[1]: Started libpod-conmon-09b54eecc301d3e15a5717bab3ccdcad2fdaa50b2254e658bae92306f4f4c57a.scope.
Feb 25 08:27:23 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:27:23 np0005629333 podman[401689]: 2026-02-25 13:27:23.392329817 +0000 UTC m=+0.026724295 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:27:23 np0005629333 podman[401689]: 2026-02-25 13:27:23.49554188 +0000 UTC m=+0.129936408 container init 09b54eecc301d3e15a5717bab3ccdcad2fdaa50b2254e658bae92306f4f4c57a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 08:27:23 np0005629333 podman[401689]: 2026-02-25 13:27:23.503580977 +0000 UTC m=+0.137975455 container start 09b54eecc301d3e15a5717bab3ccdcad2fdaa50b2254e658bae92306f4f4c57a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:27:23 np0005629333 inspiring_hodgkin[401705]: 167 167
Feb 25 08:27:23 np0005629333 systemd[1]: libpod-09b54eecc301d3e15a5717bab3ccdcad2fdaa50b2254e658bae92306f4f4c57a.scope: Deactivated successfully.
Feb 25 08:27:23 np0005629333 podman[401689]: 2026-02-25 13:27:23.511601423 +0000 UTC m=+0.145995971 container attach 09b54eecc301d3e15a5717bab3ccdcad2fdaa50b2254e658bae92306f4f4c57a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hodgkin, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:27:23 np0005629333 podman[401689]: 2026-02-25 13:27:23.51219221 +0000 UTC m=+0.146586688 container died 09b54eecc301d3e15a5717bab3ccdcad2fdaa50b2254e658bae92306f4f4c57a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hodgkin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 08:27:23 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a1bd878ac09ff8b2fd416dbdd347d2d228e08988b5d5e029a5826dbabef9be98-merged.mount: Deactivated successfully.
Feb 25 08:27:23 np0005629333 podman[401689]: 2026-02-25 13:27:23.559163305 +0000 UTC m=+0.193557783 container remove 09b54eecc301d3e15a5717bab3ccdcad2fdaa50b2254e658bae92306f4f4c57a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:27:23 np0005629333 systemd[1]: libpod-conmon-09b54eecc301d3e15a5717bab3ccdcad2fdaa50b2254e658bae92306f4f4c57a.scope: Deactivated successfully.
Feb 25 08:27:23 np0005629333 podman[401729]: 2026-02-25 13:27:23.68301498 +0000 UTC m=+0.043640853 container create 303a391948f4d763a2364f0ee4619548fc8d6faf53c6b68ea7a9835d9c840986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_brown, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:27:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3225: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:23 np0005629333 systemd[1]: Started libpod-conmon-303a391948f4d763a2364f0ee4619548fc8d6faf53c6b68ea7a9835d9c840986.scope.
Feb 25 08:27:23 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:27:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f87d0dad9ded3b284ab1a5c36b906e606ba7e7a62a7a223658a91616db3ee0b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:27:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f87d0dad9ded3b284ab1a5c36b906e606ba7e7a62a7a223658a91616db3ee0b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:27:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f87d0dad9ded3b284ab1a5c36b906e606ba7e7a62a7a223658a91616db3ee0b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:27:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f87d0dad9ded3b284ab1a5c36b906e606ba7e7a62a7a223658a91616db3ee0b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:27:23 np0005629333 podman[401729]: 2026-02-25 13:27:23.6656612 +0000 UTC m=+0.026287123 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:27:23 np0005629333 podman[401729]: 2026-02-25 13:27:23.782057785 +0000 UTC m=+0.142683678 container init 303a391948f4d763a2364f0ee4619548fc8d6faf53c6b68ea7a9835d9c840986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 08:27:23 np0005629333 podman[401729]: 2026-02-25 13:27:23.789511576 +0000 UTC m=+0.150137449 container start 303a391948f4d763a2364f0ee4619548fc8d6faf53c6b68ea7a9835d9c840986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_brown, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:27:23 np0005629333 podman[401729]: 2026-02-25 13:27:23.792914492 +0000 UTC m=+0.153540365 container attach 303a391948f4d763a2364f0ee4619548fc8d6faf53c6b68ea7a9835d9c840986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_brown, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 08:27:24 np0005629333 nova_compute[244014]: 2026-02-25 13:27:24.003 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:27:24 np0005629333 musing_brown[401745]: {
Feb 25 08:27:24 np0005629333 musing_brown[401745]:    "0": [
Feb 25 08:27:24 np0005629333 musing_brown[401745]:        {
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "devices": [
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "/dev/loop3"
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            ],
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "lv_name": "ceph_lv0",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "lv_size": "21470642176",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "name": "ceph_lv0",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "tags": {
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.cluster_name": "ceph",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.crush_device_class": "",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.encrypted": "0",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.objectstore": "bluestore",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.osd_id": "0",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.type": "block",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.vdo": "0",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.with_tpm": "0"
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            },
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "type": "block",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "vg_name": "ceph_vg0"
Feb 25 08:27:24 np0005629333 musing_brown[401745]:        }
Feb 25 08:27:24 np0005629333 musing_brown[401745]:    ],
Feb 25 08:27:24 np0005629333 musing_brown[401745]:    "1": [
Feb 25 08:27:24 np0005629333 musing_brown[401745]:        {
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "devices": [
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "/dev/loop4"
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            ],
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "lv_name": "ceph_lv1",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "lv_size": "21470642176",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "name": "ceph_lv1",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "tags": {
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.cluster_name": "ceph",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.crush_device_class": "",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.encrypted": "0",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.objectstore": "bluestore",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.osd_id": "1",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.type": "block",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.vdo": "0",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.with_tpm": "0"
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            },
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "type": "block",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "vg_name": "ceph_vg1"
Feb 25 08:27:24 np0005629333 musing_brown[401745]:        }
Feb 25 08:27:24 np0005629333 musing_brown[401745]:    ],
Feb 25 08:27:24 np0005629333 musing_brown[401745]:    "2": [
Feb 25 08:27:24 np0005629333 musing_brown[401745]:        {
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "devices": [
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "/dev/loop5"
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            ],
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "lv_name": "ceph_lv2",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "lv_size": "21470642176",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "name": "ceph_lv2",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "tags": {
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.cluster_name": "ceph",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.crush_device_class": "",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.encrypted": "0",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.objectstore": "bluestore",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.osd_id": "2",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.type": "block",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.vdo": "0",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:                "ceph.with_tpm": "0"
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            },
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "type": "block",
Feb 25 08:27:24 np0005629333 musing_brown[401745]:            "vg_name": "ceph_vg2"
Feb 25 08:27:24 np0005629333 musing_brown[401745]:        }
Feb 25 08:27:24 np0005629333 musing_brown[401745]:    ]
Feb 25 08:27:24 np0005629333 musing_brown[401745]: }
Feb 25 08:27:24 np0005629333 systemd[1]: libpod-303a391948f4d763a2364f0ee4619548fc8d6faf53c6b68ea7a9835d9c840986.scope: Deactivated successfully.
Feb 25 08:27:24 np0005629333 podman[401729]: 2026-02-25 13:27:24.120646411 +0000 UTC m=+0.481272324 container died 303a391948f4d763a2364f0ee4619548fc8d6faf53c6b68ea7a9835d9c840986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 08:27:24 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f87d0dad9ded3b284ab1a5c36b906e606ba7e7a62a7a223658a91616db3ee0b6-merged.mount: Deactivated successfully.
Feb 25 08:27:24 np0005629333 podman[401729]: 2026-02-25 13:27:24.179876733 +0000 UTC m=+0.540502646 container remove 303a391948f4d763a2364f0ee4619548fc8d6faf53c6b68ea7a9835d9c840986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_brown, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:27:24 np0005629333 systemd[1]: libpod-conmon-303a391948f4d763a2364f0ee4619548fc8d6faf53c6b68ea7a9835d9c840986.scope: Deactivated successfully.
Feb 25 08:27:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:27:24 np0005629333 podman[401830]: 2026-02-25 13:27:24.704462438 +0000 UTC m=+0.059743997 container create 531fd18e8e76ae1eefffb5004981b4d0cfa468db170563d89b812b8dd7a8357b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ellis, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 08:27:24 np0005629333 systemd[1]: Started libpod-conmon-531fd18e8e76ae1eefffb5004981b4d0cfa468db170563d89b812b8dd7a8357b.scope.
Feb 25 08:27:24 np0005629333 podman[401830]: 2026-02-25 13:27:24.670397027 +0000 UTC m=+0.025678686 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:27:24 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:27:24 np0005629333 podman[401830]: 2026-02-25 13:27:24.794299953 +0000 UTC m=+0.149581522 container init 531fd18e8e76ae1eefffb5004981b4d0cfa468db170563d89b812b8dd7a8357b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ellis, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 08:27:24 np0005629333 podman[401830]: 2026-02-25 13:27:24.80303362 +0000 UTC m=+0.158315199 container start 531fd18e8e76ae1eefffb5004981b4d0cfa468db170563d89b812b8dd7a8357b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 08:27:24 np0005629333 podman[401830]: 2026-02-25 13:27:24.807157126 +0000 UTC m=+0.162438735 container attach 531fd18e8e76ae1eefffb5004981b4d0cfa468db170563d89b812b8dd7a8357b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ellis, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:27:24 np0005629333 gracious_ellis[401846]: 167 167
Feb 25 08:27:24 np0005629333 systemd[1]: libpod-531fd18e8e76ae1eefffb5004981b4d0cfa468db170563d89b812b8dd7a8357b.scope: Deactivated successfully.
Feb 25 08:27:24 np0005629333 conmon[401846]: conmon 531fd18e8e76ae1eefff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-531fd18e8e76ae1eefffb5004981b4d0cfa468db170563d89b812b8dd7a8357b.scope/container/memory.events
Feb 25 08:27:24 np0005629333 podman[401830]: 2026-02-25 13:27:24.812013843 +0000 UTC m=+0.167295402 container died 531fd18e8e76ae1eefffb5004981b4d0cfa468db170563d89b812b8dd7a8357b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ellis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:27:24 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7955590d32cf8b16f7aa83e561e1ff3427b3084a3781f857a6570fa92313e393-merged.mount: Deactivated successfully.
Feb 25 08:27:24 np0005629333 podman[401830]: 2026-02-25 13:27:24.856326354 +0000 UTC m=+0.211607953 container remove 531fd18e8e76ae1eefffb5004981b4d0cfa468db170563d89b812b8dd7a8357b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_ellis, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 08:27:24 np0005629333 systemd[1]: libpod-conmon-531fd18e8e76ae1eefffb5004981b4d0cfa468db170563d89b812b8dd7a8357b.scope: Deactivated successfully.
Feb 25 08:27:25 np0005629333 podman[401872]: 2026-02-25 13:27:25.06132116 +0000 UTC m=+0.054210561 container create ac18b52b19456dcb9796846de3e21d0d51c4e31a4f8fc5a7a23caa12eb2da2ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 08:27:25 np0005629333 systemd[1]: Started libpod-conmon-ac18b52b19456dcb9796846de3e21d0d51c4e31a4f8fc5a7a23caa12eb2da2ad.scope.
Feb 25 08:27:25 np0005629333 podman[401872]: 2026-02-25 13:27:25.043459875 +0000 UTC m=+0.036349266 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:27:25 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:27:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8735976528e93f207e2eb195322ff674ad88ab64a3ee03bda687ede125e6b436/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:27:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8735976528e93f207e2eb195322ff674ad88ab64a3ee03bda687ede125e6b436/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:27:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8735976528e93f207e2eb195322ff674ad88ab64a3ee03bda687ede125e6b436/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:27:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8735976528e93f207e2eb195322ff674ad88ab64a3ee03bda687ede125e6b436/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:27:25 np0005629333 podman[401872]: 2026-02-25 13:27:25.171959012 +0000 UTC m=+0.164848483 container init ac18b52b19456dcb9796846de3e21d0d51c4e31a4f8fc5a7a23caa12eb2da2ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:27:25 np0005629333 podman[401872]: 2026-02-25 13:27:25.181161902 +0000 UTC m=+0.174051273 container start ac18b52b19456dcb9796846de3e21d0d51c4e31a4f8fc5a7a23caa12eb2da2ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 08:27:25 np0005629333 podman[401872]: 2026-02-25 13:27:25.186856772 +0000 UTC m=+0.179746164 container attach ac18b52b19456dcb9796846de3e21d0d51c4e31a4f8fc5a7a23caa12eb2da2ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:27:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3226: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:25 np0005629333 lvm[401966]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:27:25 np0005629333 lvm[401967]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:27:25 np0005629333 lvm[401967]: VG ceph_vg1 finished
Feb 25 08:27:25 np0005629333 lvm[401966]: VG ceph_vg0 finished
Feb 25 08:27:25 np0005629333 lvm[401969]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:27:25 np0005629333 lvm[401969]: VG ceph_vg2 finished
Feb 25 08:27:26 np0005629333 peaceful_lovelace[401888]: {}
Feb 25 08:27:26 np0005629333 systemd[1]: libpod-ac18b52b19456dcb9796846de3e21d0d51c4e31a4f8fc5a7a23caa12eb2da2ad.scope: Deactivated successfully.
Feb 25 08:27:26 np0005629333 podman[401872]: 2026-02-25 13:27:26.083089397 +0000 UTC m=+1.075978758 container died ac18b52b19456dcb9796846de3e21d0d51c4e31a4f8fc5a7a23caa12eb2da2ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 08:27:26 np0005629333 systemd[1]: libpod-ac18b52b19456dcb9796846de3e21d0d51c4e31a4f8fc5a7a23caa12eb2da2ad.scope: Consumed 1.240s CPU time.
Feb 25 08:27:26 np0005629333 systemd[1]: var-lib-containers-storage-overlay-8735976528e93f207e2eb195322ff674ad88ab64a3ee03bda687ede125e6b436-merged.mount: Deactivated successfully.
Feb 25 08:27:26 np0005629333 podman[401872]: 2026-02-25 13:27:26.126674257 +0000 UTC m=+1.119563618 container remove ac18b52b19456dcb9796846de3e21d0d51c4e31a4f8fc5a7a23caa12eb2da2ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lovelace, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 08:27:26 np0005629333 systemd[1]: libpod-conmon-ac18b52b19456dcb9796846de3e21d0d51c4e31a4f8fc5a7a23caa12eb2da2ad.scope: Deactivated successfully.
Feb 25 08:27:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:27:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:27:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:27:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:27:26 np0005629333 nova_compute[244014]: 2026-02-25 13:27:26.605 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:27:27 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:27:27 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:27:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3227: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:29 np0005629333 nova_compute[244014]: 2026-02-25 13:27:29.008 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:27:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:27:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3228: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:27:31
Feb 25 08:27:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:27:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:27:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups']
Feb 25 08:27:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:27:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:27:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:27:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:27:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:27:31 np0005629333 nova_compute[244014]: 2026-02-25 13:27:31.649 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:27:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:27:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:27:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3229: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:27:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:27:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:27:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:27:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:27:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:27:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:27:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:27:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:27:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:27:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3230: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:33 np0005629333 podman[402011]: 2026-02-25 13:27:33.788966685 +0000 UTC m=+0.109561814 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0)
Feb 25 08:27:33 np0005629333 podman[402012]: 2026-02-25 13:27:33.802825996 +0000 UTC m=+0.120197294 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2)
Feb 25 08:27:34 np0005629333 nova_compute[244014]: 2026-02-25 13:27:34.011 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:27:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:27:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3231: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:36 np0005629333 nova_compute[244014]: 2026-02-25 13:27:36.698 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:27:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3232: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:39 np0005629333 nova_compute[244014]: 2026-02-25 13:27:39.014 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:27:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:27:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3233: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:41 np0005629333 nova_compute[244014]: 2026-02-25 13:27:41.700 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:27:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3234: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:27:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:27:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:27:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:27:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:27:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:27:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:27:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:27:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:27:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:27:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:27:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:27:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:27:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:27:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:27:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:27:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:27:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:27:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:27:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:27:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:27:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:27:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 08:27:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3235: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:44 np0005629333 nova_compute[244014]: 2026-02-25 13:27:44.018 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:27:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:27:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3236: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:46 np0005629333 nova_compute[244014]: 2026-02-25 13:27:46.702 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:27:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:27:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1587764996' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:27:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:27:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1587764996' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:27:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3237: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:49 np0005629333 nova_compute[244014]: 2026-02-25 13:27:49.022 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:27:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:27:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3238: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:51 np0005629333 nova_compute[244014]: 2026-02-25 13:27:51.704 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:27:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3239: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3240: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:54 np0005629333 nova_compute[244014]: 2026-02-25 13:27:54.026 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:27:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:27:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:27:55.071 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:27:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:27:55.071 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:27:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:27:55.071 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:27:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3241: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:56 np0005629333 nova_compute[244014]: 2026-02-25 13:27:56.706 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:27:56 np0005629333 nova_compute[244014]: 2026-02-25 13:27:56.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:27:56 np0005629333 nova_compute[244014]: 2026-02-25 13:27:56.918 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:27:56 np0005629333 nova_compute[244014]: 2026-02-25 13:27:56.919 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:27:56 np0005629333 nova_compute[244014]: 2026-02-25 13:27:56.919 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:27:56 np0005629333 nova_compute[244014]: 2026-02-25 13:27:56.919 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 08:27:56 np0005629333 nova_compute[244014]: 2026-02-25 13:27:56.920 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:27:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:27:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4104010315' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:27:57 np0005629333 nova_compute[244014]: 2026-02-25 13:27:57.433 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:27:57 np0005629333 nova_compute[244014]: 2026-02-25 13:27:57.611 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 08:27:57 np0005629333 nova_compute[244014]: 2026-02-25 13:27:57.613 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3563MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 08:27:57 np0005629333 nova_compute[244014]: 2026-02-25 13:27:57.614 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:27:57 np0005629333 nova_compute[244014]: 2026-02-25 13:27:57.614 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:27:57 np0005629333 nova_compute[244014]: 2026-02-25 13:27:57.689 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 08:27:57 np0005629333 nova_compute[244014]: 2026-02-25 13:27:57.690 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 08:27:57 np0005629333 nova_compute[244014]: 2026-02-25 13:27:57.706 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:27:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3242: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:27:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:27:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3195868700' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:27:58 np0005629333 nova_compute[244014]: 2026-02-25 13:27:58.279 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:27:58 np0005629333 nova_compute[244014]: 2026-02-25 13:27:58.285 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:27:58 np0005629333 nova_compute[244014]: 2026-02-25 13:27:58.298 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 08:27:58 np0005629333 nova_compute[244014]: 2026-02-25 13:27:58.300 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 08:27:58 np0005629333 nova_compute[244014]: 2026-02-25 13:27:58.300 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:27:59 np0005629333 nova_compute[244014]: 2026-02-25 13:27:59.029 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:27:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:27:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3243: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:28:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:28:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:28:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:28:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:28:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:28:01 np0005629333 nova_compute[244014]: 2026-02-25 13:28:01.709 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:28:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3244: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:02 np0005629333 nova_compute[244014]: 2026-02-25 13:28:02.301 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:28:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3245: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:03 np0005629333 nova_compute[244014]: 2026-02-25 13:28:03.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:28:03 np0005629333 nova_compute[244014]: 2026-02-25 13:28:03.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:28:03 np0005629333 nova_compute[244014]: 2026-02-25 13:28:03.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 08:28:03 np0005629333 nova_compute[244014]: 2026-02-25 13:28:03.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 08:28:03 np0005629333 nova_compute[244014]: 2026-02-25 13:28:03.894 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 08:28:04 np0005629333 nova_compute[244014]: 2026-02-25 13:28:04.033 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:28:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:28:04 np0005629333 podman[402100]: 2026-02-25 13:28:04.715852333 +0000 UTC m=+0.053510331 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, io.buildah.version=1.43.0)
Feb 25 08:28:04 np0005629333 podman[402101]: 2026-02-25 13:28:04.757666223 +0000 UTC m=+0.095862197 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 25 08:28:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3246: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:06 np0005629333 nova_compute[244014]: 2026-02-25 13:28:06.710 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:28:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3247: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:09 np0005629333 nova_compute[244014]: 2026-02-25 13:28:09.037 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:28:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:28:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3248: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:10 np0005629333 nova_compute[244014]: 2026-02-25 13:28:10.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:28:10 np0005629333 nova_compute[244014]: 2026-02-25 13:28:10.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 08:28:11 np0005629333 nova_compute[244014]: 2026-02-25 13:28:11.713 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:28:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3249: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3250: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:13 np0005629333 nova_compute[244014]: 2026-02-25 13:28:13.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:28:14 np0005629333 nova_compute[244014]: 2026-02-25 13:28:14.042 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:28:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:28:14 np0005629333 nova_compute[244014]: 2026-02-25 13:28:14.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:28:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3251: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:16 np0005629333 nova_compute[244014]: 2026-02-25 13:28:16.714 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:28:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3252: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:18 np0005629333 nova_compute[244014]: 2026-02-25 13:28:18.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:28:19 np0005629333 nova_compute[244014]: 2026-02-25 13:28:19.045 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:28:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:28:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3253: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:21 np0005629333 nova_compute[244014]: 2026-02-25 13:28:21.716 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:28:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3254: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:21 np0005629333 nova_compute[244014]: 2026-02-25 13:28:21.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:28:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3255: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:24 np0005629333 nova_compute[244014]: 2026-02-25 13:28:24.047 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:28:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:28:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3256: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:26 np0005629333 nova_compute[244014]: 2026-02-25 13:28:26.717 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:28:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:28:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:28:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:28:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:28:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:28:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:28:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:28:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:28:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:28:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:28:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:28:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:28:27 np0005629333 podman[402287]: 2026-02-25 13:28:27.352800631 +0000 UTC m=+0.035293657 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:28:27 np0005629333 podman[402287]: 2026-02-25 13:28:27.585368935 +0000 UTC m=+0.267861951 container create 266d19cbd5e2c020a1d4e6dae5ac8045782e891445bfbb79da8c7b0d034acffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 08:28:27 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:28:27 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:28:27 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:28:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3257: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:27 np0005629333 systemd[1]: Started libpod-conmon-266d19cbd5e2c020a1d4e6dae5ac8045782e891445bfbb79da8c7b0d034acffd.scope.
Feb 25 08:28:27 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:28:27 np0005629333 podman[402287]: 2026-02-25 13:28:27.871073448 +0000 UTC m=+0.553566484 container init 266d19cbd5e2c020a1d4e6dae5ac8045782e891445bfbb79da8c7b0d034acffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:28:27 np0005629333 podman[402287]: 2026-02-25 13:28:27.880504044 +0000 UTC m=+0.562997040 container start 266d19cbd5e2c020a1d4e6dae5ac8045782e891445bfbb79da8c7b0d034acffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:28:27 np0005629333 zealous_haslett[402304]: 167 167
Feb 25 08:28:27 np0005629333 systemd[1]: libpod-266d19cbd5e2c020a1d4e6dae5ac8045782e891445bfbb79da8c7b0d034acffd.scope: Deactivated successfully.
Feb 25 08:28:28 np0005629333 podman[402287]: 2026-02-25 13:28:28.017837959 +0000 UTC m=+0.700330975 container attach 266d19cbd5e2c020a1d4e6dae5ac8045782e891445bfbb79da8c7b0d034acffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_haslett, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:28:28 np0005629333 podman[402287]: 2026-02-25 13:28:28.019113335 +0000 UTC m=+0.701606361 container died 266d19cbd5e2c020a1d4e6dae5ac8045782e891445bfbb79da8c7b0d034acffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 08:28:28 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a09d9e9d9f7ba9d62d30ccc0253f316a167e74e0825d5c6d96a51032530354a1-merged.mount: Deactivated successfully.
Feb 25 08:28:28 np0005629333 podman[402287]: 2026-02-25 13:28:28.63722165 +0000 UTC m=+1.319714666 container remove 266d19cbd5e2c020a1d4e6dae5ac8045782e891445bfbb79da8c7b0d034acffd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 08:28:28 np0005629333 systemd[1]: libpod-conmon-266d19cbd5e2c020a1d4e6dae5ac8045782e891445bfbb79da8c7b0d034acffd.scope: Deactivated successfully.
Feb 25 08:28:28 np0005629333 podman[402329]: 2026-02-25 13:28:28.802593557 +0000 UTC m=+0.038081795 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:28:28 np0005629333 podman[402329]: 2026-02-25 13:28:28.986418195 +0000 UTC m=+0.221906453 container create 54023270fcc63537e2d7fdf9b29a972fd46af69a2b94191b96b6c32b634eb940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_margulis, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 08:28:29 np0005629333 nova_compute[244014]: 2026-02-25 13:28:29.051 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:28:29 np0005629333 systemd[1]: Started libpod-conmon-54023270fcc63537e2d7fdf9b29a972fd46af69a2b94191b96b6c32b634eb940.scope.
Feb 25 08:28:29 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:28:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00ea697dfee1607e78abb3d738dffb7f59c97197504f4b2cb4e6abbd94f0337b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:28:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00ea697dfee1607e78abb3d738dffb7f59c97197504f4b2cb4e6abbd94f0337b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:28:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00ea697dfee1607e78abb3d738dffb7f59c97197504f4b2cb4e6abbd94f0337b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:28:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00ea697dfee1607e78abb3d738dffb7f59c97197504f4b2cb4e6abbd94f0337b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:28:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00ea697dfee1607e78abb3d738dffb7f59c97197504f4b2cb4e6abbd94f0337b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:28:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:28:29 np0005629333 podman[402329]: 2026-02-25 13:28:29.50107557 +0000 UTC m=+0.736563828 container init 54023270fcc63537e2d7fdf9b29a972fd46af69a2b94191b96b6c32b634eb940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_margulis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:28:29 np0005629333 podman[402329]: 2026-02-25 13:28:29.510478515 +0000 UTC m=+0.745966763 container start 54023270fcc63537e2d7fdf9b29a972fd46af69a2b94191b96b6c32b634eb940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_margulis, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:28:29 np0005629333 podman[402329]: 2026-02-25 13:28:29.610323114 +0000 UTC m=+0.845811372 container attach 54023270fcc63537e2d7fdf9b29a972fd46af69a2b94191b96b6c32b634eb940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_margulis, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:28:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3258: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:29 np0005629333 nova_compute[244014]: 2026-02-25 13:28:29.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:28:29 np0005629333 nova_compute[244014]: 2026-02-25 13:28:29.878 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:28:29 np0005629333 nova_compute[244014]: 2026-02-25 13:28:29.879 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:28:29 np0005629333 nova_compute[244014]: 2026-02-25 13:28:29.880 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:28:29 np0005629333 nova_compute[244014]: 2026-02-25 13:28:29.880 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:28:29 np0005629333 nova_compute[244014]: 2026-02-25 13:28:29.881 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:28:29 np0005629333 nova_compute[244014]: 2026-02-25 13:28:29.882 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:28:29 np0005629333 great_margulis[402346]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:28:29 np0005629333 great_margulis[402346]: --> All data devices are unavailable
Feb 25 08:28:29 np0005629333 nova_compute[244014]: 2026-02-25 13:28:29.954 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Feb 25 08:28:29 np0005629333 nova_compute[244014]: 2026-02-25 13:28:29.971 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Feb 25 08:28:29 np0005629333 nova_compute[244014]: 2026-02-25 13:28:29.971 244018 WARNING nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Unknown base file: /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6
Feb 25 08:28:29 np0005629333 nova_compute[244014]: 2026-02-25 13:28:29.971 244018 WARNING nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Unknown base file: /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538
Feb 25 08:28:29 np0005629333 nova_compute[244014]: 2026-02-25 13:28:29.972 244018 INFO nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Removable base files: /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538
Feb 25 08:28:29 np0005629333 nova_compute[244014]: 2026-02-25 13:28:29.972 244018 INFO nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6
Feb 25 08:28:29 np0005629333 nova_compute[244014]: 2026-02-25 13:28:29.973 244018 INFO nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538
Feb 25 08:28:29 np0005629333 nova_compute[244014]: 2026-02-25 13:28:29.973 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Feb 25 08:28:29 np0005629333 nova_compute[244014]: 2026-02-25 13:28:29.973 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Feb 25 08:28:29 np0005629333 nova_compute[244014]: 2026-02-25 13:28:29.974 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Feb 25 08:28:29 np0005629333 nova_compute[244014]: 2026-02-25 13:28:29.974 244018 INFO nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Feb 25 08:28:29 np0005629333 systemd[1]: libpod-54023270fcc63537e2d7fdf9b29a972fd46af69a2b94191b96b6c32b634eb940.scope: Deactivated successfully.
Feb 25 08:28:29 np0005629333 podman[402329]: 2026-02-25 13:28:29.983589998 +0000 UTC m=+1.219078306 container died 54023270fcc63537e2d7fdf9b29a972fd46af69a2b94191b96b6c32b634eb940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_margulis, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:28:30 np0005629333 systemd[1]: var-lib-containers-storage-overlay-00ea697dfee1607e78abb3d738dffb7f59c97197504f4b2cb4e6abbd94f0337b-merged.mount: Deactivated successfully.
Feb 25 08:28:30 np0005629333 podman[402329]: 2026-02-25 13:28:30.348028234 +0000 UTC m=+1.583516482 container remove 54023270fcc63537e2d7fdf9b29a972fd46af69a2b94191b96b6c32b634eb940 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:28:30 np0005629333 systemd[1]: libpod-conmon-54023270fcc63537e2d7fdf9b29a972fd46af69a2b94191b96b6c32b634eb940.scope: Deactivated successfully.
Feb 25 08:28:30 np0005629333 podman[402440]: 2026-02-25 13:28:30.765856636 +0000 UTC m=+0.049101007 container create 945bf4a53efd45042675306a535d37cfee7e31b214d9b3fbc0226d43e1d0d01b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_swanson, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 08:28:30 np0005629333 podman[402440]: 2026-02-25 13:28:30.736786645 +0000 UTC m=+0.020030936 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:28:30 np0005629333 nova_compute[244014]: 2026-02-25 13:28:30.970 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:28:30 np0005629333 systemd[1]: Started libpod-conmon-945bf4a53efd45042675306a535d37cfee7e31b214d9b3fbc0226d43e1d0d01b.scope.
Feb 25 08:28:31 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:28:31 np0005629333 podman[402440]: 2026-02-25 13:28:31.140657484 +0000 UTC m=+0.423901765 container init 945bf4a53efd45042675306a535d37cfee7e31b214d9b3fbc0226d43e1d0d01b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_swanson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 08:28:31 np0005629333 podman[402440]: 2026-02-25 13:28:31.147286341 +0000 UTC m=+0.430530602 container start 945bf4a53efd45042675306a535d37cfee7e31b214d9b3fbc0226d43e1d0d01b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_swanson, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 08:28:31 np0005629333 zen_swanson[402457]: 167 167
Feb 25 08:28:31 np0005629333 systemd[1]: libpod-945bf4a53efd45042675306a535d37cfee7e31b214d9b3fbc0226d43e1d0d01b.scope: Deactivated successfully.
Feb 25 08:28:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:28:31
Feb 25 08:28:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:28:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:28:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['volumes', 'backups', 'vms', 'images', '.mgr', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta']
Feb 25 08:28:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:28:31 np0005629333 podman[402440]: 2026-02-25 13:28:31.218767638 +0000 UTC m=+0.502012009 container attach 945bf4a53efd45042675306a535d37cfee7e31b214d9b3fbc0226d43e1d0d01b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:28:31 np0005629333 podman[402440]: 2026-02-25 13:28:31.219739435 +0000 UTC m=+0.502983696 container died 945bf4a53efd45042675306a535d37cfee7e31b214d9b3fbc0226d43e1d0d01b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 08:28:31 np0005629333 systemd[1]: var-lib-containers-storage-overlay-19bbc5d8d720a1c326d1f00cdfc4db54fa38b27467ec4c45d3f445254dc12d07-merged.mount: Deactivated successfully.
Feb 25 08:28:31 np0005629333 podman[402440]: 2026-02-25 13:28:31.264964042 +0000 UTC m=+0.548208303 container remove 945bf4a53efd45042675306a535d37cfee7e31b214d9b3fbc0226d43e1d0d01b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_swanson, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 08:28:31 np0005629333 systemd[1]: libpod-conmon-945bf4a53efd45042675306a535d37cfee7e31b214d9b3fbc0226d43e1d0d01b.scope: Deactivated successfully.
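zen_swanson is a throwaway container: created, started, one line of output ("167 167"), dead and removed within a second. That is the probe pattern cephadm uses against the Ceph image, and 167:167 matches the uid:gid of the ceph user in Ceph container images. A hedged sketch of such a probe; the stat target is an assumption, since the log records only the output:

    # Hypothetical reconstruction of the uid/gid probe; only the "167 167"
    # output appears in the log, so the exact command is an assumption.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # expected "167 167" for the ceph user/group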
Feb 25 08:28:31 np0005629333 podman[402483]: 2026-02-25 13:28:31.392312556 +0000 UTC m=+0.022657070 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:28:31 np0005629333 podman[402483]: 2026-02-25 13:28:31.520559315 +0000 UTC m=+0.150903759 container create 1260d2145879c7e87134ed57b7aaa6a58da10d045438cbcf00324951b3f95d0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_merkle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:28:31 np0005629333 systemd[1]: Started libpod-conmon-1260d2145879c7e87134ed57b7aaa6a58da10d045438cbcf00324951b3f95d0c.scope.
Feb 25 08:28:31 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:28:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e03afe70cafa10626e260673cac8c390d1138630758fa85902c009d8adbb084/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:28:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e03afe70cafa10626e260673cac8c390d1138630758fa85902c009d8adbb084/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:28:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e03afe70cafa10626e260673cac8c390d1138630758fa85902c009d8adbb084/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:28:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e03afe70cafa10626e260673cac8c390d1138630758fa85902c009d8adbb084/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
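The kernel tags each of these overlay remounts with the filesystem's timestamp horizon; 0x7fffffff is the largest signed 32-bit time_t, so the xfs messages are simply flagging the year-2038 limit. Decoding it is one line:

    # 0x7fffffff seconds after the Unix epoch is the 32-bit time_t limit
    # the xfs messages above are warning about.
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00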
Feb 25 08:28:31 np0005629333 podman[402483]: 2026-02-25 13:28:31.643256317 +0000 UTC m=+0.273600781 container init 1260d2145879c7e87134ed57b7aaa6a58da10d045438cbcf00324951b3f95d0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 08:28:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:28:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:28:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:28:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:28:31 np0005629333 podman[402483]: 2026-02-25 13:28:31.652034315 +0000 UTC m=+0.282378789 container start 1260d2145879c7e87134ed57b7aaa6a58da10d045438cbcf00324951b3f95d0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_merkle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:28:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:28:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:28:31 np0005629333 podman[402483]: 2026-02-25 13:28:31.656146241 +0000 UTC m=+0.286490685 container attach 1260d2145879c7e87134ed57b7aaa6a58da10d045438cbcf00324951b3f95d0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:28:31 np0005629333 nova_compute[244014]: 2026-02-25 13:28:31.719 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:28:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3259: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]: {
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:    "0": [
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:        {
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "devices": [
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "/dev/loop3"
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            ],
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "lv_name": "ceph_lv0",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "lv_size": "21470642176",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "name": "ceph_lv0",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "tags": {
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.cluster_name": "ceph",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.crush_device_class": "",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.encrypted": "0",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.objectstore": "bluestore",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.osd_id": "0",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.type": "block",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.vdo": "0",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.with_tpm": "0"
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            },
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "type": "block",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "vg_name": "ceph_vg0"
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:        }
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:    ],
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:    "1": [
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:        {
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "devices": [
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "/dev/loop4"
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            ],
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "lv_name": "ceph_lv1",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "lv_size": "21470642176",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "name": "ceph_lv1",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "tags": {
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.cluster_name": "ceph",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.crush_device_class": "",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.encrypted": "0",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.objectstore": "bluestore",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.osd_id": "1",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.type": "block",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.vdo": "0",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.with_tpm": "0"
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            },
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "type": "block",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "vg_name": "ceph_vg1"
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:        }
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:    ],
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:    "2": [
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:        {
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "devices": [
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "/dev/loop5"
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            ],
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "lv_name": "ceph_lv2",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "lv_size": "21470642176",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "name": "ceph_lv2",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "tags": {
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.cluster_name": "ceph",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.crush_device_class": "",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.encrypted": "0",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.objectstore": "bluestore",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.osd_id": "2",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.type": "block",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.vdo": "0",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:                "ceph.with_tpm": "0"
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            },
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "type": "block",
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:            "vg_name": "ceph_vg2"
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:        }
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]:    ]
Feb 25 08:28:31 np0005629333 elastic_merkle[402500]: }
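The JSON block emitted by elastic_merkle is a ceph-volume listing of this host's OSD logical volumes, keyed by OSD id: osd.0-2 map to ceph_vg0/ceph_lv0 through ceph_vg2/ceph_lv2 on /dev/loop3-5, all bluestore, all in the same cluster fsid. A minimal sketch that reduces such output to an OSD-to-device map, assuming the block has been captured to a file (in the journal it is interleaved with other units):

    # Flatten the ceph-volume JSON above into one line per OSD. Field names
    # ("devices", "lv_path", "tags", "ceph.osd_fsid") are taken from the log.
    import json

    with open("ceph_volume_list.json") as f:  # captured copy of the block above
        inventory = json.load(f)

    for osd_id in sorted(inventory, key=int):
        for lv in inventory[osd_id]:
            devs = ",".join(lv["devices"])
            print(f"osd.{osd_id}: {lv['lv_path']} on {devs} "
                  f"(osd_fsid {lv['tags']['ceph.osd_fsid']})")
    # osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (osd_fsid d19afe3c-...)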
Feb 25 08:28:32 np0005629333 systemd[1]: libpod-1260d2145879c7e87134ed57b7aaa6a58da10d045438cbcf00324951b3f95d0c.scope: Deactivated successfully.
Feb 25 08:28:32 np0005629333 podman[402483]: 2026-02-25 13:28:32.006596292 +0000 UTC m=+0.636940766 container died 1260d2145879c7e87134ed57b7aaa6a58da10d045438cbcf00324951b3f95d0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_merkle, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 08:28:32 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7e03afe70cafa10626e260673cac8c390d1138630758fa85902c009d8adbb084-merged.mount: Deactivated successfully.
Feb 25 08:28:32 np0005629333 podman[402483]: 2026-02-25 13:28:32.061411609 +0000 UTC m=+0.691756053 container remove 1260d2145879c7e87134ed57b7aaa6a58da10d045438cbcf00324951b3f95d0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_merkle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 08:28:32 np0005629333 systemd[1]: libpod-conmon-1260d2145879c7e87134ed57b7aaa6a58da10d045438cbcf00324951b3f95d0c.scope: Deactivated successfully.
Feb 25 08:28:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:28:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:28:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:28:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:28:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:28:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:28:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:28:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:28:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:28:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:28:32 np0005629333 podman[402584]: 2026-02-25 13:28:32.512244482 +0000 UTC m=+0.039467674 container create decf8117a4a9eac0518ed71e47d9d6d169fe62102af61f069c956a6b58c6f658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_chaum, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Feb 25 08:28:32 np0005629333 systemd[1]: Started libpod-conmon-decf8117a4a9eac0518ed71e47d9d6d169fe62102af61f069c956a6b58c6f658.scope.
Feb 25 08:28:32 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:28:32 np0005629333 podman[402584]: 2026-02-25 13:28:32.494600524 +0000 UTC m=+0.021823796 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:28:32 np0005629333 podman[402584]: 2026-02-25 13:28:32.598239569 +0000 UTC m=+0.125462872 container init decf8117a4a9eac0518ed71e47d9d6d169fe62102af61f069c956a6b58c6f658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 08:28:32 np0005629333 podman[402584]: 2026-02-25 13:28:32.603146018 +0000 UTC m=+0.130369260 container start decf8117a4a9eac0518ed71e47d9d6d169fe62102af61f069c956a6b58c6f658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_chaum, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 08:28:32 np0005629333 confident_chaum[402602]: 167 167
Feb 25 08:28:32 np0005629333 systemd[1]: libpod-decf8117a4a9eac0518ed71e47d9d6d169fe62102af61f069c956a6b58c6f658.scope: Deactivated successfully.
Feb 25 08:28:32 np0005629333 podman[402584]: 2026-02-25 13:28:32.607632885 +0000 UTC m=+0.134856167 container attach decf8117a4a9eac0518ed71e47d9d6d169fe62102af61f069c956a6b58c6f658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_chaum, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 08:28:32 np0005629333 podman[402584]: 2026-02-25 13:28:32.609740114 +0000 UTC m=+0.136963336 container died decf8117a4a9eac0518ed71e47d9d6d169fe62102af61f069c956a6b58c6f658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_chaum, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 08:28:32 np0005629333 systemd[1]: var-lib-containers-storage-overlay-37d7839bdfe70a38e9f7ca5c4a1640090fabac713c285df47fafba65177fad48-merged.mount: Deactivated successfully.
Feb 25 08:28:32 np0005629333 podman[402584]: 2026-02-25 13:28:32.651594585 +0000 UTC m=+0.178817777 container remove decf8117a4a9eac0518ed71e47d9d6d169fe62102af61f069c956a6b58c6f658 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_chaum, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 08:28:32 np0005629333 systemd[1]: libpod-conmon-decf8117a4a9eac0518ed71e47d9d6d169fe62102af61f069c956a6b58c6f658.scope: Deactivated successfully.
Feb 25 08:28:32 np0005629333 podman[402628]: 2026-02-25 13:28:32.777442727 +0000 UTC m=+0.038438316 container create 52262382169e0a51ed82716b5747f462e8e088146d91e4eafe4802ec1721f88e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_franklin, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 08:28:32 np0005629333 systemd[1]: Started libpod-conmon-52262382169e0a51ed82716b5747f462e8e088146d91e4eafe4802ec1721f88e.scope.
Feb 25 08:28:32 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:28:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9932d22b592cc940385780343397cb8b144e3d0a89c3e41f36708a8a600546/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:28:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9932d22b592cc940385780343397cb8b144e3d0a89c3e41f36708a8a600546/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:28:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9932d22b592cc940385780343397cb8b144e3d0a89c3e41f36708a8a600546/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:28:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e9932d22b592cc940385780343397cb8b144e3d0a89c3e41f36708a8a600546/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:28:32 np0005629333 podman[402628]: 2026-02-25 13:28:32.85303154 +0000 UTC m=+0.114027039 container init 52262382169e0a51ed82716b5747f462e8e088146d91e4eafe4802ec1721f88e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_franklin, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 08:28:32 np0005629333 podman[402628]: 2026-02-25 13:28:32.75911078 +0000 UTC m=+0.020106299 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:28:32 np0005629333 podman[402628]: 2026-02-25 13:28:32.859662318 +0000 UTC m=+0.120657817 container start 52262382169e0a51ed82716b5747f462e8e088146d91e4eafe4802ec1721f88e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_franklin, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:28:32 np0005629333 podman[402628]: 2026-02-25 13:28:32.863206178 +0000 UTC m=+0.124201677 container attach 52262382169e0a51ed82716b5747f462e8e088146d91e4eafe4802ec1721f88e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 08:28:33 np0005629333 lvm[402720]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:28:33 np0005629333 lvm[402720]: VG ceph_vg0 finished
Feb 25 08:28:33 np0005629333 lvm[402723]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:28:33 np0005629333 lvm[402723]: VG ceph_vg1 finished
Feb 25 08:28:33 np0005629333 lvm[402725]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:28:33 np0005629333 lvm[402725]: VG ceph_vg2 finished
Feb 25 08:28:33 np0005629333 crazy_franklin[402644]: {}
Feb 25 08:28:33 np0005629333 systemd[1]: libpod-52262382169e0a51ed82716b5747f462e8e088146d91e4eafe4802ec1721f88e.scope: Deactivated successfully.
Feb 25 08:28:33 np0005629333 systemd[1]: libpod-52262382169e0a51ed82716b5747f462e8e088146d91e4eafe4802ec1721f88e.scope: Consumed 1.043s CPU time.
Feb 25 08:28:33 np0005629333 podman[402628]: 2026-02-25 13:28:33.606622209 +0000 UTC m=+0.867617698 container died 52262382169e0a51ed82716b5747f462e8e088146d91e4eafe4802ec1721f88e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 08:28:33 np0005629333 systemd[1]: var-lib-containers-storage-overlay-9e9932d22b592cc940385780343397cb8b144e3d0a89c3e41f36708a8a600546-merged.mount: Deactivated successfully.
Feb 25 08:28:33 np0005629333 podman[402628]: 2026-02-25 13:28:33.65877047 +0000 UTC m=+0.919765939 container remove 52262382169e0a51ed82716b5747f462e8e088146d91e4eafe4802ec1721f88e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_franklin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 08:28:33 np0005629333 systemd[1]: libpod-conmon-52262382169e0a51ed82716b5747f462e8e088146d91e4eafe4802ec1721f88e.scope: Deactivated successfully.
Feb 25 08:28:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:28:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3260: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:28:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:28:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
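The two handle_command entries show the cephadm mgr module persisting what it just gathered: the host's device inventory and host metadata go into the mon's config-key store under mgr/cephadm/host.compute-0.*. A sketch reading one entry back; ceph config-key get prints the stored value, which for cephadm keys is typically a JSON blob (an assumption here, since the log does not show the payload):

    # Read back the key the mon just accepted (key name from the log line).
    import subprocess

    out = subprocess.run(
        ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout[:200])  # usually a JSON document describing the devices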
Feb 25 08:28:34 np0005629333 nova_compute[244014]: 2026-02-25 13:28:34.054 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:28:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:28:34 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:28:34 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:28:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3261: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:35 np0005629333 podman[402766]: 2026-02-25 13:28:35.731493857 +0000 UTC m=+0.071886580 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 08:28:35 np0005629333 podman[402767]: 2026-02-25 13:28:35.770513298 +0000 UTC m=+0.107460954 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
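Both health_status entries come from podman's periodic healthcheck timers: each container mounts a script at /openstack/healthcheck (the 'healthcheck' stanza in config_data) and both report healthy with a failing streak of 0. The same check can be run on demand; a short sketch using the container names from the log:

    # Trigger the same healthcheck podman's timer just ran; exit code 0
    # means healthy. Container names are taken from the log lines above.
    import subprocess

    for name in ("ovn_metadata_agent", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")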
Feb 25 08:28:36 np0005629333 nova_compute[244014]: 2026-02-25 13:28:36.722 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:28:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3262: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:37 np0005629333 nova_compute[244014]: 2026-02-25 13:28:37.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:28:39 np0005629333 nova_compute[244014]: 2026-02-25 13:28:39.057 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:28:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:28:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3263: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:41 np0005629333 nova_compute[244014]: 2026-02-25 13:28:41.724 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:28:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3264: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:28:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:28:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:28:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:28:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:28:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:28:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:28:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:28:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:28:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:28:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:28:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:28:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:28:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:28:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:28:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:28:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:28:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:28:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:28:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:28:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:28:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:28:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
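Every pg_autoscaler pair above follows one rule: the raw pg target is capacity ratio x bias x (cluster-wide target PG count), and the result is then quantized to a power of two, staying at the current pg_num unless it is far off. The logged values reproduce exactly if the cluster-wide target is 300 PGs, consistent with 3 OSDs at the default mon_target_pg_per_osd of 100 (both assumptions, inferred from the arithmetic rather than stated in the log):

    # Worked check of the autoscaler lines above, assuming 3 OSDs and
    # mon_target_pg_per_osd = 100, i.e. 300 target PGs cluster-wide.
    OSDS, TARGET_PG_PER_OSD = 3, 100

    def raw_pg_target(capacity_ratio, bias):
        return capacity_ratio * bias * OSDS * TARGET_PG_PER_OSD

    print(raw_pg_target(7.185749983720779e-06, 1.0))   # 0.0021557... ('.mgr')
    print(raw_pg_target(0.0006714637386478266, 1.0))   # 0.2014391... ('images')
    print(raw_pg_target(1.3916366864300228e-06, 4.0))  # 0.0016699... ('cephfs.cephfs.meta')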
Feb 25 08:28:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3265: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:44 np0005629333 nova_compute[244014]: 2026-02-25 13:28:44.067 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:28:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:28:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3266: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:46 np0005629333 nova_compute[244014]: 2026-02-25 13:28:46.725 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:28:46 np0005629333 nova_compute[244014]: 2026-02-25 13:28:46.899 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:28:46 np0005629333 nova_compute[244014]: 2026-02-25 13:28:46.899 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 08:28:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:28:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/774299455' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:28:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:28:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/774299455' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:28:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3267: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:49 np0005629333 nova_compute[244014]: 2026-02-25 13:28:49.071 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:28:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:28:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3268: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:51 np0005629333 nova_compute[244014]: 2026-02-25 13:28:51.727 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:28:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3269: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3270: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:54 np0005629333 nova_compute[244014]: 2026-02-25 13:28:54.075 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:28:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:28:54 np0005629333 nova_compute[244014]: 2026-02-25 13:28:54.891 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:28:54 np0005629333 nova_compute[244014]: 2026-02-25 13:28:54.892 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 08:28:54 np0005629333 nova_compute[244014]: 2026-02-25 13:28:54.910 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 08:28:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:28:55.071 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:28:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:28:55.072 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:28:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:28:55.072 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:28:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3271: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:56 np0005629333 nova_compute[244014]: 2026-02-25 13:28:56.729 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:28:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3272: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:28:58 np0005629333 nova_compute[244014]: 2026-02-25 13:28:58.895 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:28:58 np0005629333 nova_compute[244014]: 2026-02-25 13:28:58.925 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:28:58 np0005629333 nova_compute[244014]: 2026-02-25 13:28:58.926 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:28:58 np0005629333 nova_compute[244014]: 2026-02-25 13:28:58.926 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:28:58 np0005629333 nova_compute[244014]: 2026-02-25 13:28:58.926 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 08:28:58 np0005629333 nova_compute[244014]: 2026-02-25 13:28:58.927 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:28:59 np0005629333 nova_compute[244014]: 2026-02-25 13:28:59.077 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:28:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:28:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:28:59 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2321926870' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:28:59 np0005629333 nova_compute[244014]: 2026-02-25 13:28:59.474 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
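The Running cmd / returned pair above is the resource tracker sizing its Ceph-backed disk pool by shelling out to the ceph CLI. A standalone sketch of the same call, assuming the stats keys commonly present in ceph df JSON output (exactly which fields nova consumes varies by release):

    import json
    import subprocess

    # Same command nova runs above; --id/--conf select the client.openstack
    # keyring and the cluster config file.
    out = subprocess.check_output([
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])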
Feb 25 08:28:59 np0005629333 nova_compute[244014]: 2026-02-25 13:28:59.600 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:28:59 np0005629333 nova_compute[244014]: 2026-02-25 13:28:59.601 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3577MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:28:59 np0005629333 nova_compute[244014]: 2026-02-25 13:28:59.602 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:28:59 np0005629333 nova_compute[244014]: 2026-02-25 13:28:59.602 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:28:59 np0005629333 nova_compute[244014]: 2026-02-25 13:28:59.664 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:28:59 np0005629333 nova_compute[244014]: 2026-02-25 13:28:59.665 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:28:59 np0005629333 nova_compute[244014]: 2026-02-25 13:28:59.679 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:28:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3273: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:29:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1574394186' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:29:00 np0005629333 nova_compute[244014]: 2026-02-25 13:29:00.227 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:29:00 np0005629333 nova_compute[244014]: 2026-02-25 13:29:00.233 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:29:00 np0005629333 nova_compute[244014]: 2026-02-25 13:29:00.250 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
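The inventory dict above is what placement turns into schedulable capacity: for each resource class, capacity is (total - reserved) * allocation_ratio. Worked out for the values logged here (a sketch of the arithmetic only; the authoritative check lives in the placement service):

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        # placement admits requests while used + requested <= capacity
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2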
Feb 25 08:29:00 np0005629333 nova_compute[244014]: 2026-02-25 13:29:00.252 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:29:00 np0005629333 nova_compute[244014]: 2026-02-25 13:29:00.253 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:29:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:29:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:29:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:29:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:29:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:29:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:29:01 np0005629333 nova_compute[244014]: 2026-02-25 13:29:01.732 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:29:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3274: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:02 np0005629333 nova_compute[244014]: 2026-02-25 13:29:02.235 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:29:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3275: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:03 np0005629333 nova_compute[244014]: 2026-02-25 13:29:03.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:29:03 np0005629333 nova_compute[244014]: 2026-02-25 13:29:03.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:29:03 np0005629333 nova_compute[244014]: 2026-02-25 13:29:03.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:29:03 np0005629333 nova_compute[244014]: 2026-02-25 13:29:03.893 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:29:04 np0005629333 nova_compute[244014]: 2026-02-25 13:29:04.080 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:29:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:29:04 np0005629333 nova_compute[244014]: 2026-02-25 13:29:04.887 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:29:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3276: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:06 np0005629333 podman[402852]: 2026-02-25 13:29:06.720595799 +0000 UTC m=+0.063478532 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 08:29:06 np0005629333 nova_compute[244014]: 2026-02-25 13:29:06.734 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:29:06 np0005629333 podman[402853]: 2026-02-25 13:29:06.739486382 +0000 UTC m=+0.083336363 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
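The two health_status=healthy events above are podman's healthcheck timer executing each container's configured test command ('/openstack/healthcheck' per the config_data). The same check can be run by hand; a sketch using the container names from the log:

    import subprocess

    # "podman healthcheck run" executes the container's configured test once;
    # a non-zero exit status means the check failed.
    for name in ("ovn_metadata_agent", "ovn_controller"):
        rc = subprocess.call(["podman", "healthcheck", "run", name])
        print(name, "healthy" if rc == 0 else "unhealthy")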
Feb 25 08:29:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3277: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:09 np0005629333 nova_compute[244014]: 2026-02-25 13:29:09.084 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:29:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:29:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3278: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:10 np0005629333 nova_compute[244014]: 2026-02-25 13:29:10.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:29:10 np0005629333 nova_compute[244014]: 2026-02-25 13:29:10.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 08:29:11 np0005629333 nova_compute[244014]: 2026-02-25 13:29:11.737 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:29:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3279: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 08:29:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.0 total, 600.0 interval
Cumulative writes: 15K writes, 68K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s
Cumulative WAL: 15K writes, 15K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1330 writes, 6014 keys, 1330 commit groups, 1.0 writes per commit group, ingest: 8.79 MB, 0.01 MB/s
Interval WAL: 1330 writes, 1330 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     52.6      1.58              0.23        49    0.032       0      0       0.0       0.0
  L6      1/0   10.43 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.0    100.9     86.0      4.87              1.22        48    0.102    321K    25K       0.0       0.0
 Sum      1/0   10.43 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.0     76.2     77.8      6.46              1.45        97    0.067    321K    25K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.4     61.7     63.3      0.93              0.17        10    0.093     44K   2519       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    100.9     86.0      4.87              1.22        48    0.102    321K    25K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     52.7      1.58              0.23        48    0.033       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.0 total, 600.0 interval
Flush(GB): cumulative 0.081, interval 0.009
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.49 GB write, 0.08 MB/s write, 0.48 GB read, 0.08 MB/s read, 6.5 seconds
Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.9 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561a1af858d0#2 capacity: 304.00 MB usage: 56.97 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000411 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3566,54.67 MB,17.9839%) FilterBlock(98,892.36 KB,0.286659%) IndexBlock(98,1.42 MB,0.468219%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
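The multi-line dump above arrives in the journal as a single record with its control characters escaped rsyslog-style as #ooo octal (#012 is newline; #033 is the ESC that begins the #033[00m ANSI color resets on the nova lines); it is expanded here for readability. A small sketch for undoing that escaping when post-processing exported logs:

    import re

    def unescape_syslog(line: str) -> str:
        # "#NNN" (exactly three octal digits) -> the original control byte
        return re.sub(r'#([0-7]{3})', lambda m: chr(int(m.group(1), 8)), line)

    print(unescape_syslog('a#012b'))   # prints a, then a newline, then b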
Feb 25 08:29:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3280: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:14 np0005629333 nova_compute[244014]: 2026-02-25 13:29:14.088 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:29:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:29:14 np0005629333 nova_compute[244014]: 2026-02-25 13:29:14.651 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:29:14 np0005629333 nova_compute[244014]: 2026-02-25 13:29:14.894 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:29:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3281: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:16 np0005629333 nova_compute[244014]: 2026-02-25 13:29:16.739 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:29:16 np0005629333 nova_compute[244014]: 2026-02-25 13:29:16.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:29:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3282: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:19 np0005629333 nova_compute[244014]: 2026-02-25 13:29:19.091 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:29:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:29:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3283: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:20 np0005629333 nova_compute[244014]: 2026-02-25 13:29:20.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:29:21 np0005629333 nova_compute[244014]: 2026-02-25 13:29:21.740 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:29:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3284: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3285: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:23 np0005629333 nova_compute[244014]: 2026-02-25 13:29:23.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:29:24 np0005629333 nova_compute[244014]: 2026-02-25 13:29:24.095 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:29:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:29:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3286: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:26 np0005629333 nova_compute[244014]: 2026-02-25 13:29:26.743 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:29:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3287: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:29 np0005629333 nova_compute[244014]: 2026-02-25 13:29:29.099 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:29:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:29:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3288: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:29:31
Feb 25 08:29:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:29:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:29:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.control', 'volumes', '.mgr', 'default.rgw.log', 'vms', 'backups', '.rgw.root', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Feb 25 08:29:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
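The balancer lines above are one pass of the mgr balancer module in upmap mode: it walks the listed pools and, with the PGs already evenly placed, prepares 0 of a possible 10 upmap changes. A sketch of inspecting the same state from the CLI (wrapped in subprocess for consistency with the other examples here):

    import subprocess

    # One-shot view of what the balancer logs continuously: current mode,
    # queued plans, and the result of the last optimize pass.
    print(subprocess.check_output(["ceph", "balancer", "status"]).decode())
    # "max misplaced 0.050000" above corresponds to the mgr option
    # target_max_misplaced_ratio, the ceiling on how much data a single
    # optimization round may set in motion.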
Feb 25 08:29:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:29:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:29:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:29:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:29:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:29:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:29:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3289: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:31 np0005629333 nova_compute[244014]: 2026-02-25 13:29:31.762 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:29:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:29:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:29:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:29:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:29:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:29:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:29:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:29:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:29:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:29:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:29:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3290: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:34 np0005629333 nova_compute[244014]: 2026-02-25 13:29:34.143 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:29:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:29:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:29:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:29:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:29:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:29:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:29:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:29:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:29:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:29:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:29:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:29:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:29:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:29:34 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:29:34 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:29:34 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
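Each handle_command/audit pair above is one JSON mon command dispatched by the active mgr (entity mgr.compute-0.jzfame) as part of cephadm's periodic refresh. The same interface is reachable from librados; a sketch using the Python rados binding (conffile and client name are illustrative):

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
    cluster.connect()
    # mon_command takes the same JSON the mon logs in handle_command above
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "config generate-minimal-conf"}), b"")
    print(outbuf.decode())
    cluster.shutdown()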
Feb 25 08:29:35 np0005629333 podman[403042]: 2026-02-25 13:29:35.058363575 +0000 UTC m=+0.052515364 container create ae5e88320468ea62ff6ebfe0661996966bafb27aaf372c758997c813f8d12c58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:29:35 np0005629333 systemd[1]: Started libpod-conmon-ae5e88320468ea62ff6ebfe0661996966bafb27aaf372c758997c813f8d12c58.scope.
Feb 25 08:29:35 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:29:35 np0005629333 podman[403042]: 2026-02-25 13:29:35.038457313 +0000 UTC m=+0.032609122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:29:35 np0005629333 podman[403042]: 2026-02-25 13:29:35.133118754 +0000 UTC m=+0.127270623 container init ae5e88320468ea62ff6ebfe0661996966bafb27aaf372c758997c813f8d12c58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 08:29:35 np0005629333 podman[403042]: 2026-02-25 13:29:35.14006207 +0000 UTC m=+0.134213889 container start ae5e88320468ea62ff6ebfe0661996966bafb27aaf372c758997c813f8d12c58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 08:29:35 np0005629333 podman[403042]: 2026-02-25 13:29:35.144511616 +0000 UTC m=+0.138663485 container attach ae5e88320468ea62ff6ebfe0661996966bafb27aaf372c758997c813f8d12c58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_solomon, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 08:29:35 np0005629333 stoic_solomon[403058]: 167 167
Feb 25 08:29:35 np0005629333 systemd[1]: libpod-ae5e88320468ea62ff6ebfe0661996966bafb27aaf372c758997c813f8d12c58.scope: Deactivated successfully.
Feb 25 08:29:35 np0005629333 conmon[403058]: conmon ae5e88320468ea62ff6e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ae5e88320468ea62ff6ebfe0661996966bafb27aaf372c758997c813f8d12c58.scope/container/memory.events
Feb 25 08:29:35 np0005629333 podman[403042]: 2026-02-25 13:29:35.147778748 +0000 UTC m=+0.141930597 container died ae5e88320468ea62ff6ebfe0661996966bafb27aaf372c758997c813f8d12c58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_solomon, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 08:29:35 np0005629333 systemd[1]: var-lib-containers-storage-overlay-2212dda2bf8fb1a948e631fb56e3a54f167699bb40f8b55f02860109d76f51a7-merged.mount: Deactivated successfully.
Feb 25 08:29:35 np0005629333 podman[403042]: 2026-02-25 13:29:35.199715394 +0000 UTC m=+0.193867223 container remove ae5e88320468ea62ff6ebfe0661996966bafb27aaf372c758997c813f8d12c58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_solomon, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 08:29:35 np0005629333 systemd[1]: libpod-conmon-ae5e88320468ea62ff6ebfe0661996966bafb27aaf372c758997c813f8d12c58.scope: Deactivated successfully.
Feb 25 08:29:35 np0005629333 podman[403081]: 2026-02-25 13:29:35.393306777 +0000 UTC m=+0.059976693 container create 77e240e4487134777e9bf1e39905e1f52cd94e87f9abe4cb65d451a746fb749e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:29:35 np0005629333 systemd[1]: Started libpod-conmon-77e240e4487134777e9bf1e39905e1f52cd94e87f9abe4cb65d451a746fb749e.scope.
Feb 25 08:29:35 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:29:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d695844127dbb28f2c5d10072abb67ce52145ef02a464b531a4a3e2e42445e3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:29:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d695844127dbb28f2c5d10072abb67ce52145ef02a464b531a4a3e2e42445e3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:29:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d695844127dbb28f2c5d10072abb67ce52145ef02a464b531a4a3e2e42445e3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:29:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d695844127dbb28f2c5d10072abb67ce52145ef02a464b531a4a3e2e42445e3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:29:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d695844127dbb28f2c5d10072abb67ce52145ef02a464b531a4a3e2e42445e3b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:29:35 np0005629333 podman[403081]: 2026-02-25 13:29:35.366913203 +0000 UTC m=+0.033583169 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:29:35 np0005629333 podman[403081]: 2026-02-25 13:29:35.475611051 +0000 UTC m=+0.142281027 container init 77e240e4487134777e9bf1e39905e1f52cd94e87f9abe4cb65d451a746fb749e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 08:29:35 np0005629333 podman[403081]: 2026-02-25 13:29:35.482476904 +0000 UTC m=+0.149146820 container start 77e240e4487134777e9bf1e39905e1f52cd94e87f9abe4cb65d451a746fb749e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ritchie, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:29:35 np0005629333 podman[403081]: 2026-02-25 13:29:35.54077321 +0000 UTC m=+0.207443136 container attach 77e240e4487134777e9bf1e39905e1f52cd94e87f9abe4cb65d451a746fb749e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ritchie, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:29:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3291: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:35 np0005629333 bold_ritchie[403097]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:29:35 np0005629333 bold_ritchie[403097]: --> All data devices are unavailable
Feb 25 08:29:35 np0005629333 systemd[1]: libpod-77e240e4487134777e9bf1e39905e1f52cd94e87f9abe4cb65d451a746fb749e.scope: Deactivated successfully.
Feb 25 08:29:35 np0005629333 podman[403081]: 2026-02-25 13:29:35.959434144 +0000 UTC m=+0.626104100 container died 77e240e4487134777e9bf1e39905e1f52cd94e87f9abe4cb65d451a746fb749e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:29:36 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d695844127dbb28f2c5d10072abb67ce52145ef02a464b531a4a3e2e42445e3b-merged.mount: Deactivated successfully.
Feb 25 08:29:36 np0005629333 podman[403081]: 2026-02-25 13:29:36.04148594 +0000 UTC m=+0.708155886 container remove 77e240e4487134777e9bf1e39905e1f52cd94e87f9abe4cb65d451a746fb749e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 08:29:36 np0005629333 systemd[1]: libpod-conmon-77e240e4487134777e9bf1e39905e1f52cd94e87f9abe4cb65d451a746fb749e.scope: Deactivated successfully.
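The short-lived bold_ritchie container above is cephadm probing for deployable disks with ceph-volume: it was offered three LVM data devices, found all of them unavailable (already consumed), and exited without creating OSDs. Why a given device is rejected can be inspected directly; a sketch meant to run inside the ceph container image:

    import json
    import subprocess

    # ceph-volume reports per-device availability and rejection reasons.
    out = subprocess.check_output(
        ["ceph-volume", "inventory", "--format", "json"])
    for dev in json.loads(out):
        print(dev["path"], dev["available"], dev.get("rejected_reasons"))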
Feb 25 08:29:36 np0005629333 podman[403191]: 2026-02-25 13:29:36.545458213 +0000 UTC m=+0.076936582 container create bc68d3c15ef2f6e97aec71c54c04e39db8ffe3aa23718ce341fb4f63e9be8561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_brown, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:29:36 np0005629333 systemd[1]: Started libpod-conmon-bc68d3c15ef2f6e97aec71c54c04e39db8ffe3aa23718ce341fb4f63e9be8561.scope.
Feb 25 08:29:36 np0005629333 podman[403191]: 2026-02-25 13:29:36.502387698 +0000 UTC m=+0.033866127 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:29:36 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:29:36 np0005629333 podman[403191]: 2026-02-25 13:29:36.634032983 +0000 UTC m=+0.165511352 container init bc68d3c15ef2f6e97aec71c54c04e39db8ffe3aa23718ce341fb4f63e9be8561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 08:29:36 np0005629333 podman[403191]: 2026-02-25 13:29:36.641954827 +0000 UTC m=+0.173433186 container start bc68d3c15ef2f6e97aec71c54c04e39db8ffe3aa23718ce341fb4f63e9be8561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_brown, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:29:36 np0005629333 podman[403191]: 2026-02-25 13:29:36.647001839 +0000 UTC m=+0.178480178 container attach bc68d3c15ef2f6e97aec71c54c04e39db8ffe3aa23718ce341fb4f63e9be8561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_brown, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:29:36 np0005629333 awesome_brown[403208]: 167 167
Feb 25 08:29:36 np0005629333 systemd[1]: libpod-bc68d3c15ef2f6e97aec71c54c04e39db8ffe3aa23718ce341fb4f63e9be8561.scope: Deactivated successfully.
Feb 25 08:29:36 np0005629333 conmon[403208]: conmon bc68d3c15ef2f6e97aec <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bc68d3c15ef2f6e97aec71c54c04e39db8ffe3aa23718ce341fb4f63e9be8561.scope/container/memory.events
Feb 25 08:29:36 np0005629333 podman[403191]: 2026-02-25 13:29:36.650451127 +0000 UTC m=+0.181929466 container died bc68d3c15ef2f6e97aec71c54c04e39db8ffe3aa23718ce341fb4f63e9be8561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:29:36 np0005629333 systemd[1]: var-lib-containers-storage-overlay-2cf06cc1e162033785b54d299adf9f8a4b6d45b5464b66565c554c47243206ec-merged.mount: Deactivated successfully.
Feb 25 08:29:36 np0005629333 podman[403191]: 2026-02-25 13:29:36.699900942 +0000 UTC m=+0.231379311 container remove bc68d3c15ef2f6e97aec71c54c04e39db8ffe3aa23718ce341fb4f63e9be8561 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 08:29:36 np0005629333 systemd[1]: libpod-conmon-bc68d3c15ef2f6e97aec71c54c04e39db8ffe3aa23718ce341fb4f63e9be8561.scope: Deactivated successfully.
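The pair of numbers printed by awesome_brown above ("167 167") is the uid/gid of the ceph user inside the image; cephadm runs short-lived helper containers like this one to probe the image before deploying daemons. A minimal sketch of such a probe, assuming podman is on PATH and that /var/lib/ceph is owned by the ceph user in the image (the stat target path is an assumption for illustration, not shown in the log):

    # Hedged sketch: reproduce the uid/gid probe suggested by the "167 167"
    # output above. The stat target path is an assumption, not from the log.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    uid, gid = int(out[0]), int(out[1])
    print(uid, gid)  # expected: 167 167, matching the container output above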
Feb 25 08:29:36 np0005629333 nova_compute[244014]: 2026-02-25 13:29:36.763 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:29:36 np0005629333 podman[403231]: 2026-02-25 13:29:36.863847149 +0000 UTC m=+0.054799247 container create f61778cbd81d64e49e2824b40486a16cdccb671e113ac97f01012dd962c0c791 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_poincare, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 25 08:29:36 np0005629333 systemd[1]: Started libpod-conmon-f61778cbd81d64e49e2824b40486a16cdccb671e113ac97f01012dd962c0c791.scope.
Feb 25 08:29:36 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:29:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe94d3ec4b12884c9c67f021f89cd713858fb0aa4dd0a6f762c0c002252fe6f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:29:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe94d3ec4b12884c9c67f021f89cd713858fb0aa4dd0a6f762c0c002252fe6f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:29:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe94d3ec4b12884c9c67f021f89cd713858fb0aa4dd0a6f762c0c002252fe6f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:29:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe94d3ec4b12884c9c67f021f89cd713858fb0aa4dd0a6f762c0c002252fe6f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:29:36 np0005629333 podman[403231]: 2026-02-25 13:29:36.840908802 +0000 UTC m=+0.031860880 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:29:36 np0005629333 podman[403231]: 2026-02-25 13:29:36.971361423 +0000 UTC m=+0.162313551 container init f61778cbd81d64e49e2824b40486a16cdccb671e113ac97f01012dd962c0c791 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:29:36 np0005629333 podman[403247]: 2026-02-25 13:29:36.974360858 +0000 UTC m=+0.068943937 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb 25 08:29:36 np0005629333 podman[403231]: 2026-02-25 13:29:36.980579144 +0000 UTC m=+0.171531232 container start f61778cbd81d64e49e2824b40486a16cdccb671e113ac97f01012dd962c0c791 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_poincare, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 08:29:37 np0005629333 podman[403250]: 2026-02-25 13:29:37.007952856 +0000 UTC m=+0.104476979 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
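The health_status=healthy events for ovn_metadata_agent and ovn_controller come from podman's periodic healthchecks running the configured test (/openstack/healthcheck, per the config_data above). The same state can be read back with podman inspect; a short sketch, with container names taken from the log and field names following podman's standard Health schema:

    import json
    import subprocess

    # Read back the health state behind the health_status events above.
    for name in ("ovn_metadata_agent", "ovn_controller"):
        data = json.loads(subprocess.run(
            ["podman", "inspect", name],
            capture_output=True, text=True, check=True).stdout)
        health = data[0]["State"]["Health"]
        print(name, health["Status"], "failing_streak:", health["FailingStreak"])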
Feb 25 08:29:37 np0005629333 podman[403231]: 2026-02-25 13:29:37.00666061 +0000 UTC m=+0.197612728 container attach f61778cbd81d64e49e2824b40486a16cdccb671e113ac97f01012dd962c0c791 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]: {
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:    "0": [
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:        {
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "devices": [
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "/dev/loop3"
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            ],
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "lv_name": "ceph_lv0",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "lv_size": "21470642176",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "name": "ceph_lv0",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "tags": {
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.cluster_name": "ceph",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.crush_device_class": "",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.encrypted": "0",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.objectstore": "bluestore",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.osd_id": "0",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.type": "block",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.vdo": "0",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.with_tpm": "0"
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            },
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "type": "block",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "vg_name": "ceph_vg0"
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:        }
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:    ],
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:    "1": [
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:        {
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "devices": [
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "/dev/loop4"
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            ],
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "lv_name": "ceph_lv1",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "lv_size": "21470642176",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "name": "ceph_lv1",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "tags": {
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.cluster_name": "ceph",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.crush_device_class": "",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.encrypted": "0",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.objectstore": "bluestore",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.osd_id": "1",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.type": "block",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.vdo": "0",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.with_tpm": "0"
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            },
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "type": "block",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "vg_name": "ceph_vg1"
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:        }
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:    ],
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:    "2": [
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:        {
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "devices": [
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "/dev/loop5"
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            ],
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "lv_name": "ceph_lv2",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "lv_size": "21470642176",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "name": "ceph_lv2",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "tags": {
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.cluster_name": "ceph",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.crush_device_class": "",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.encrypted": "0",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.objectstore": "bluestore",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.osd_id": "2",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.type": "block",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.vdo": "0",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:                "ceph.with_tpm": "0"
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            },
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "type": "block",
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:            "vg_name": "ceph_vg2"
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:        }
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]:    ]
Feb 25 08:29:37 np0005629333 peaceful_poincare[403251]: }
Feb 25 08:29:37 np0005629333 systemd[1]: libpod-f61778cbd81d64e49e2824b40486a16cdccb671e113ac97f01012dd962c0c791.scope: Deactivated successfully.
Feb 25 08:29:37 np0005629333 podman[403231]: 2026-02-25 13:29:37.299937997 +0000 UTC m=+0.490890055 container died f61778cbd81d64e49e2824b40486a16cdccb671e113ac97f01012dd962c0c791 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_poincare, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:29:37 np0005629333 systemd[1]: var-lib-containers-storage-overlay-fe94d3ec4b12884c9c67f021f89cd713858fb0aa4dd0a6f762c0c002252fe6f0-merged.mount: Deactivated successfully.
Feb 25 08:29:37 np0005629333 podman[403231]: 2026-02-25 13:29:37.337659231 +0000 UTC m=+0.528611299 container remove f61778cbd81d64e49e2824b40486a16cdccb671e113ac97f01012dd962c0c791 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_poincare, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:29:37 np0005629333 systemd[1]: libpod-conmon-f61778cbd81d64e49e2824b40486a16cdccb671e113ac97f01012dd962c0c791.scope: Deactivated successfully.
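The JSON block emitted by peaceful_poincare matches the output shape of ceph-volume lvm list --format json: a map of OSD id to logical volumes, with the ceph.* LV tags carrying the OSD metadata. A small sketch that summarizes such a report (trimmed to one OSD here; the exact subcommand is inferred from the structure, not stated in the log):

    import json

    # Trimmed sample of the structure printed above (one OSD, fields abridged).
    raw_json = """{
      "0": [{
        "devices": ["/dev/loop3"],
        "lv_path": "/dev/ceph_vg0/ceph_lv0",
        "tags": {"ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
                 "ceph.objectstore": "bluestore"}
      }]
    }"""

    report = json.loads(raw_json)
    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"fsid={tags['ceph.osd_fsid']} store={tags['ceph.objectstore']}")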
Feb 25 08:29:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3292: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:37 np0005629333 podman[403378]: 2026-02-25 13:29:37.797353375 +0000 UTC m=+0.028715281 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:29:37 np0005629333 podman[403378]: 2026-02-25 13:29:37.924636217 +0000 UTC m=+0.155998133 container create 26ffb762497e8178801fc970fbcce79a3f4ddbfa310ca37fab514239cec5f54b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 08:29:38 np0005629333 systemd[1]: Started libpod-conmon-26ffb762497e8178801fc970fbcce79a3f4ddbfa310ca37fab514239cec5f54b.scope.
Feb 25 08:29:38 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:29:38 np0005629333 podman[403378]: 2026-02-25 13:29:38.127113612 +0000 UTC m=+0.358475578 container init 26ffb762497e8178801fc970fbcce79a3f4ddbfa310ca37fab514239cec5f54b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_poitras, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Feb 25 08:29:38 np0005629333 podman[403378]: 2026-02-25 13:29:38.136440395 +0000 UTC m=+0.367802311 container start 26ffb762497e8178801fc970fbcce79a3f4ddbfa310ca37fab514239cec5f54b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_poitras, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Feb 25 08:29:38 np0005629333 podman[403378]: 2026-02-25 13:29:38.14087396 +0000 UTC m=+0.372235936 container attach 26ffb762497e8178801fc970fbcce79a3f4ddbfa310ca37fab514239cec5f54b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:29:38 np0005629333 vigorous_poitras[403394]: 167 167
Feb 25 08:29:38 np0005629333 systemd[1]: libpod-26ffb762497e8178801fc970fbcce79a3f4ddbfa310ca37fab514239cec5f54b.scope: Deactivated successfully.
Feb 25 08:29:38 np0005629333 podman[403378]: 2026-02-25 13:29:38.142745723 +0000 UTC m=+0.374107639 container died 26ffb762497e8178801fc970fbcce79a3f4ddbfa310ca37fab514239cec5f54b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_poitras, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:29:38 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d4a1fb3cca6948e7e1704d596cf7fcdf1fd5a43a043aa1c58f69a441de1c7da0-merged.mount: Deactivated successfully.
Feb 25 08:29:38 np0005629333 podman[403378]: 2026-02-25 13:29:38.23017183 +0000 UTC m=+0.461533746 container remove 26ffb762497e8178801fc970fbcce79a3f4ddbfa310ca37fab514239cec5f54b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:29:38 np0005629333 systemd[1]: libpod-conmon-26ffb762497e8178801fc970fbcce79a3f4ddbfa310ca37fab514239cec5f54b.scope: Deactivated successfully.
Feb 25 08:29:38 np0005629333 podman[403419]: 2026-02-25 13:29:38.401447974 +0000 UTC m=+0.064696487 container create ab57dfeb6b670ffe117a8970dd8690d89574615b311cdc455c7286352cebf0af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:29:38 np0005629333 systemd[1]: Started libpod-conmon-ab57dfeb6b670ffe117a8970dd8690d89574615b311cdc455c7286352cebf0af.scope.
Feb 25 08:29:38 np0005629333 podman[403419]: 2026-02-25 13:29:38.373961949 +0000 UTC m=+0.037210532 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:29:38 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:29:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc177674907a4033a178e69821653badc64231f4e5d8ad02fed63a6537fad47e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:29:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc177674907a4033a178e69821653badc64231f4e5d8ad02fed63a6537fad47e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:29:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc177674907a4033a178e69821653badc64231f4e5d8ad02fed63a6537fad47e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:29:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc177674907a4033a178e69821653badc64231f4e5d8ad02fed63a6537fad47e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:29:38 np0005629333 podman[403419]: 2026-02-25 13:29:38.498181814 +0000 UTC m=+0.161430327 container init ab57dfeb6b670ffe117a8970dd8690d89574615b311cdc455c7286352cebf0af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_gauss, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:29:38 np0005629333 podman[403419]: 2026-02-25 13:29:38.507059625 +0000 UTC m=+0.170308138 container start ab57dfeb6b670ffe117a8970dd8690d89574615b311cdc455c7286352cebf0af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_gauss, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 08:29:38 np0005629333 podman[403419]: 2026-02-25 13:29:38.511230383 +0000 UTC m=+0.174478876 container attach ab57dfeb6b670ffe117a8970dd8690d89574615b311cdc455c7286352cebf0af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_gauss, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:29:39 np0005629333 lvm[403514]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:29:39 np0005629333 lvm[403514]: VG ceph_vg0 finished
Feb 25 08:29:39 np0005629333 lvm[403516]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:29:39 np0005629333 lvm[403516]: VG ceph_vg1 finished
Feb 25 08:29:39 np0005629333 lvm[403518]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:29:39 np0005629333 lvm[403518]: VG ceph_vg2 finished
Feb 25 08:29:39 np0005629333 nova_compute[244014]: 2026-02-25 13:29:39.147 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:29:39 np0005629333 elated_gauss[403435]: {}
Feb 25 08:29:39 np0005629333 systemd[1]: libpod-ab57dfeb6b670ffe117a8970dd8690d89574615b311cdc455c7286352cebf0af.scope: Deactivated successfully.
Feb 25 08:29:39 np0005629333 podman[403419]: 2026-02-25 13:29:39.25806402 +0000 UTC m=+0.921312513 container died ab57dfeb6b670ffe117a8970dd8690d89574615b311cdc455c7286352cebf0af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_gauss, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:29:39 np0005629333 systemd[1]: libpod-ab57dfeb6b670ffe117a8970dd8690d89574615b311cdc455c7286352cebf0af.scope: Consumed 1.011s CPU time.
Feb 25 08:29:39 np0005629333 systemd[1]: var-lib-containers-storage-overlay-fc177674907a4033a178e69821653badc64231f4e5d8ad02fed63a6537fad47e-merged.mount: Deactivated successfully.
Feb 25 08:29:39 np0005629333 podman[403419]: 2026-02-25 13:29:39.346382263 +0000 UTC m=+1.009630776 container remove ab57dfeb6b670ffe117a8970dd8690d89574615b311cdc455c7286352cebf0af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 08:29:39 np0005629333 systemd[1]: libpod-conmon-ab57dfeb6b670ffe117a8970dd8690d89574615b311cdc455c7286352cebf0af.scope: Deactivated successfully.
Feb 25 08:29:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:29:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:29:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:29:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:29:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:29:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3293: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:40 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:29:40 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
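The config-key set commands above are the cephadm mgr module persisting the device report this host just gathered; the key name mgr/cephadm/host.compute-0.devices.0 appears verbatim in the handle_command line. The blob can be read back through the standard config-key interface; a sketch:

    import subprocess

    # Fetch the inventory blob cephadm persisted above; the key name is taken
    # verbatim from the handle_command log line.
    key = "mgr/cephadm/host.compute-0.devices.0"
    blob = subprocess.run(["ceph", "config-key", "get", key],
                          capture_output=True, text=True, check=True).stdout
    print(blob[:200])  # JSON device report, truncated for display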
Feb 25 08:29:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3294: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:41 np0005629333 nova_compute[244014]: 2026-02-25 13:29:41.766 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:29:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:29:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:29:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:29:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:29:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:29:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:29:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:29:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:29:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:29:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:29:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:29:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:29:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:29:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:29:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:29:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:29:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:29:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:29:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:29:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:29:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:29:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:29:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
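The per-pool autoscaler lines follow pg_target = used_ratio x bias x budget, where a budget of 300 PGs reproduces every logged target. That budget is an inference consistent with these numbers, not stated in the log: 3 OSDs (the three LVs listed earlier; 3 x 21470642176 bytes equals the 64411926528 shown in each effective_target_ratio line) times the default mon_target_pg_per_osd of 100. The result is then snapped to a power of two, and the autoscaler leaves pg_num alone unless the ideal and current values differ by a large factor (3x by default), which is why tiny targets still read "quantized to 32 (current 32)". A worked check:

    # Worked check of the pg_autoscaler arithmetic above. Assumption: the PG
    # budget is 3 OSDs * mon_target_pg_per_osd (default 100) = 300.
    budget = 3 * 100

    pools = {
        # name: (used_ratio from the log, bias from the log)
        ".mgr":               (7.185749983720779e-06,  1.0),
        "vms":                (1.73878357684759e-05,   1.0),
        "images":             (0.0006714637386478266,  1.0),
        "cephfs.cephfs.meta": (1.3916366864300228e-06, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(f"{name}: pg target {ratio * bias * budget:.16g}")
    # Reproduces the logged targets (0.0021557..., 0.0052163..., 0.2014391...,
    # 0.0016699...) to float precision.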
Feb 25 08:29:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3295: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:44 np0005629333 nova_compute[244014]: 2026-02-25 13:29:44.175 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:29:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:29:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3296: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:46 np0005629333 nova_compute[244014]: 2026-02-25 13:29:46.806 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:29:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:29:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1837394778' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:29:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:29:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1837394778' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:29:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3297: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:49 np0005629333 nova_compute[244014]: 2026-02-25 13:29:49.206 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:29:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:29:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3298: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3299: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:51 np0005629333 nova_compute[244014]: 2026-02-25 13:29:51.808 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:29:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3300: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:54 np0005629333 nova_compute[244014]: 2026-02-25 13:29:54.210 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:29:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:29:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:29:55.072 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:29:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:29:55.072 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:29:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:29:55.072 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:29:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3301: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:56 np0005629333 nova_compute[244014]: 2026-02-25 13:29:56.834 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:29:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3302: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:29:59 np0005629333 nova_compute[244014]: 2026-02-25 13:29:59.251 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:29:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:29:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3303: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:30:00 np0005629333 nova_compute[244014]: 2026-02-25 13:30:00.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:30:00 np0005629333 nova_compute[244014]: 2026-02-25 13:30:00.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:30:00 np0005629333 nova_compute[244014]: 2026-02-25 13:30:00.926 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:30:00 np0005629333 nova_compute[244014]: 2026-02-25 13:30:00.927 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:30:00 np0005629333 nova_compute[244014]: 2026-02-25 13:30:00.927 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:30:00 np0005629333 nova_compute[244014]: 2026-02-25 13:30:00.928 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:30:00 np0005629333 nova_compute[244014]: 2026-02-25 13:30:00.928 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:30:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:30:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2440226851' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:30:01 np0005629333 nova_compute[244014]: 2026-02-25 13:30:01.468 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
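The resource audit shells out to the exact command logged at 13:30:00.928, so the same capacity data can be fetched standalone; a sketch, assuming the client.openstack keyring and /etc/ceph/ceph.conf are readable by the caller (the top-level "stats" keys follow the standard ceph df JSON schema):

    import json
    import subprocess

    # Same command nova-compute ran above, invoked directly.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    df = json.loads(subprocess.run(cmd, capture_output=True, text=True,
                                   check=True).stdout)
    stats = df["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])  # 60 GiB / 59 GiB here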
Feb 25 08:30:01 np0005629333 nova_compute[244014]: 2026-02-25 13:30:01.628 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:30:01 np0005629333 nova_compute[244014]: 2026-02-25 13:30:01.629 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3537MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:30:01 np0005629333 nova_compute[244014]: 2026-02-25 13:30:01.630 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:30:01 np0005629333 nova_compute[244014]: 2026-02-25 13:30:01.630 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:30:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:30:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:30:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:30:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:30:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:30:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:30:01 np0005629333 nova_compute[244014]: 2026-02-25 13:30:01.694 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:30:01 np0005629333 nova_compute[244014]: 2026-02-25 13:30:01.694 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:30:01 np0005629333 nova_compute[244014]: 2026-02-25 13:30:01.709 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:30:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3304: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
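
The recurring ceph-mgr pgmap DBG lines above and below all report the same healthy shape: 305 placement groups, all active+clean. A toy parser, assuming this exact line shape, that pulls those fields out of one such line:

    import re

    # Toy parser for the recurring ceph-mgr pgmap lines; the sample line is
    # copied from the log, and the regex assumes this exact healthy shape.
    line = ("pgmap v3304: 305 pgs: 305 active+clean; "
            "41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail")
    m = re.search(r"pgmap v(\d+): (\d+) pgs: (\d+) active\+clean", line)
    version, total_pgs, clean_pgs = map(int, m.groups())
    assert total_pgs == clean_pgs  # every PG is healthy in this cluster
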
Feb 25 08:30:01 np0005629333 nova_compute[244014]: 2026-02-25 13:30:01.836 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:30:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:30:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1322894189' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:30:02 np0005629333 nova_compute[244014]: 2026-02-25 13:30:02.306 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
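
The two processutils lines bracketing the Ceph query show nova's disk-capacity probe: spawn `ceph df --format=json` as a subprocess and read the JSON it returns. A minimal sketch of that pattern, assuming the standard `ceph df` JSON layout:

    import json

    from oslo_concurrency import processutils

    # Run the same command the log records and parse its JSON output.
    # The key names assume the standard `ceph df --format=json` layout.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    avail_gb = stats['stats']['total_avail_bytes'] / 1024 ** 3
    print(round(avail_gb))  # ~59 for the cluster shown in these lines
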
Feb 25 08:30:02 np0005629333 nova_compute[244014]: 2026-02-25 13:30:02.311 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:30:02 np0005629333 nova_compute[244014]: 2026-02-25 13:30:02.327 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
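
Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio, which is why this 8-vCPU, 7679 MB host can be oversubscribed to 32 vCPUs but gets no headroom on memory and a safety margin on disk. The arithmetic, using the values from the log line above:

    # Usable capacity per resource class = (total - reserved) * allocation_ratio,
    # computed from the inventory dict in the log line above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, f in inventory.items():
        cap = (f['total'] - f['reserved']) * f['allocation_ratio']
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
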
Feb 25 08:30:02 np0005629333 nova_compute[244014]: 2026-02-25 13:30:02.329 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:30:02 np0005629333 nova_compute[244014]: 2026-02-25 13:30:02.330 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
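
The acquire/release pair around the resource-tracker update is oslo.concurrency's named-lock decorator at work; the log even reports the wait (0.000s) and hold (0.700s) times. A minimal sketch of the pattern, with an illustrative function name rather than Nova's actual code:

    from oslo_concurrency import lockutils

    # Illustrative use of the primitive behind the lock lines above: the
    # decorated body runs only while holding the named 'compute_resources'
    # lock, and concurrent callers serialize on it.
    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        pass  # resource-tracker work happens under the lock

    update_available_resource()
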
Feb 25 08:30:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3305: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:30:04 np0005629333 nova_compute[244014]: 2026-02-25 13:30:04.254 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:30:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:30:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3306: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:30:06 np0005629333 nova_compute[244014]: 2026-02-25 13:30:06.330 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:30:06 np0005629333 nova_compute[244014]: 2026-02-25 13:30:06.331 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:30:06 np0005629333 nova_compute[244014]: 2026-02-25 13:30:06.331 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:30:06 np0005629333 nova_compute[244014]: 2026-02-25 13:30:06.349 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:30:06 np0005629333 nova_compute[244014]: 2026-02-25 13:30:06.838 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:30:06 np0005629333 nova_compute[244014]: 2026-02-25 13:30:06.890 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
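
All of the "Running periodic task ComputeManager._*" lines come from oslo.service's periodic-task machinery: decorated methods are collected on a manager class and fired by run_periodic_tasks. A self-contained sketch with stand-in names (DemoManager and the 60s spacing are illustrative, not Nova's values):

    from oslo_config import cfg
    from oslo_service import periodic_task

    # Stand-in manager showing the mechanism behind the periodic-task lines.
    class DemoManager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            print('healing info cache')

    mgr = DemoManager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)  # called on a timer in a real service
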
Feb 25 08:30:07 np0005629333 podman[403604]: 2026-02-25 13:30:07.704588586 +0000 UTC m=+0.047522752 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 08:30:07 np0005629333 podman[403605]: 2026-02-25 13:30:07.74724512 +0000 UTC m=+0.085632388 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
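
The two podman events record periodic healthcheck runs for ovn_metadata_agent and ovn_controller, both healthy with a zero failing streak. The same state can be read back on demand; a sketch assuming podman's Docker-compatible inspect schema for the .State.Health fields:

    import json
    import subprocess

    # Query the health that the periodic events above report.
    for name in ('ovn_metadata_agent', 'ovn_controller'):
        out = subprocess.check_output(
            ['podman', 'inspect', '--format',
             '{{json .State.Health.Status}}', name])
        print(name, json.loads(out))  # e.g. ovn_metadata_agent healthy
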
Feb 25 08:30:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3307: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:30:09 np0005629333 nova_compute[244014]: 2026-02-25 13:30:09.257 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.451092) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026209451148, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 1589, "num_deletes": 250, "total_data_size": 2555789, "memory_usage": 2590904, "flush_reason": "Manual Compaction"}
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026209490307, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 1471334, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 67945, "largest_seqno": 69533, "table_properties": {"data_size": 1465965, "index_size": 2572, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14067, "raw_average_key_size": 20, "raw_value_size": 1454104, "raw_average_value_size": 2138, "num_data_blocks": 118, "num_entries": 680, "num_filter_entries": 680, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772026040, "oldest_key_time": 1772026040, "file_creation_time": 1772026209, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 39298 microseconds, and 6208 cpu microseconds.
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.490390) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 1471334 bytes OK
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.490423) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.506796) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.506841) EVENT_LOG_v1 {"time_micros": 1772026209506831, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.506866) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 2548904, prev total WAL file size 2548904, number of live WAL files 2.
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.507735) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373533' seq:72057594037927935, type:22 .. '6D6772737461740033303034' seq:0, type:0; will stop at (end)
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(1436KB)], [161(10MB)]
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026209507790, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 12411166, "oldest_snapshot_seqno": -1}
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 8805 keys, 10105871 bytes, temperature: kUnknown
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026209603763, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 10105871, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10051764, "index_size": 31018, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22021, "raw_key_size": 229301, "raw_average_key_size": 26, "raw_value_size": 9899227, "raw_average_value_size": 1124, "num_data_blocks": 1203, "num_entries": 8805, "num_filter_entries": 8805, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772026209, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.604119) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 10105871 bytes
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.685421) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.2 rd, 105.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 10.4 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(15.3) write-amplify(6.9) OK, records in: 9239, records dropped: 434 output_compression: NoCompression
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.685452) EVENT_LOG_v1 {"time_micros": 1772026209685441, "job": 100, "event": "compaction_finished", "compaction_time_micros": 96070, "compaction_time_cpu_micros": 34020, "output_level": 6, "num_output_files": 1, "total_output_size": 10105871, "num_input_records": 9239, "num_output_records": 8805, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
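
The compaction summary's amplification figures can be re-derived from the event sizes logged above: the "new" data is the freshly flushed L0 file #163 (1,471,334 bytes), the job read files #163 + #161 (input_data_size 12,411,166) and wrote file #164 (10,105,871 bytes) to L6.

    # Re-derive JOB 100's amplification numbers from the logged byte counts.
    l0_new  = 1_471_334    # file 163: the newly flushed L0 input
    read_in = 12_411_166   # files 163 + 161: total compaction input
    written = 10_105_871   # file 164: compaction output at L6
    print(round(written / l0_new, 1))              # 6.9  (write-amplify)
    print(round((read_in + written) / l0_new, 1))  # 15.3 (read-write-amplify)
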
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026209685809, "job": 100, "event": "table_file_deletion", "file_number": 163}
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026209686782, "job": 100, "event": "table_file_deletion", "file_number": 161}
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.507603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.686897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.686906) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.686910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.686914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:30:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:09.686918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:30:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3308: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:30:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3309: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:30:11 np0005629333 nova_compute[244014]: 2026-02-25 13:30:11.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:30:11 np0005629333 nova_compute[244014]: 2026-02-25 13:30:11.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 08:30:11 np0005629333 nova_compute[244014]: 2026-02-25 13:30:11.881 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:30:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3310: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:30:14 np0005629333 nova_compute[244014]: 2026-02-25 13:30:14.300 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:30:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:30:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 08:30:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111] ** DB Stats ** Uptime(secs): 6000.1 total, 600.0 interval. Cumulative: 47K writes, 185K keys, ingest 0.19 GB (0.03 MB/s); WAL 47K writes, 17K syncs; no stalls. Interval: 248 writes, 385 keys. Compaction stats [default]: L0 2 files / 2.63 KB, 1 compaction, negligible I/O; [m-0] and [m-1] empty. Block cache: 2.09 KB used of 1.12 GB capacity. (Remainder of the per-column-family tables lost to journal truncation.)
Feb 25 08:30:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3311: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:30:15 np0005629333 nova_compute[244014]: 2026-02-25 13:30:15.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:30:16 np0005629333 nova_compute[244014]: 2026-02-25 13:30:16.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:30:16 np0005629333 nova_compute[244014]: 2026-02-25 13:30:16.881 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:30:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3312: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:30:19 np0005629333 nova_compute[244014]: 2026-02-25 13:30:19.305 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:30:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 08:30:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111] ** DB Stats ** Uptime(secs): 6000.2 total, 600.0 interval. Cumulative: 49K writes, 195K keys, ingest 0.19 GB (0.03 MB/s); WAL 49K writes, 17K syncs; no stalls. Interval: 224 writes, 347 keys. Compaction stats [default]: L0 2 files / 2.63 KB, 1 compaction, negligible I/O; [m-0] and [m-1] empty. Block cache: 2.09 KB used of 1.12 GB capacity. (Remainder of the per-column-family tables lost to journal truncation.)
Feb 25 08:30:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:30:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3313: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:30:20 np0005629333 nova_compute[244014]: 2026-02-25 13:30:20.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:30:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3314: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:30:21 np0005629333 nova_compute[244014]: 2026-02-25 13:30:21.883 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:30:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3315: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:30:24 np0005629333 nova_compute[244014]: 2026-02-25 13:30:24.342 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:30:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:30:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 08:30:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111] ** DB Stats ** Uptime(secs): 6000.6 total, 600.0 interval. Cumulative: 37K writes, 150K keys, ingest 0.15 GB (0.03 MB/s); WAL 37K writes, 13K syncs; no stalls. Interval: 228 writes, 353 keys. Compaction stats [default]: L0 2 files / 2.63 KB, 1 compaction, negligible I/O; [m-0] and [m-1] empty. Block cache: 2.09 KB used of 1.12 GB capacity. (Remainder of the per-column-family tables lost to journal truncation.)
Feb 25 08:30:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3316: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:30:25 np0005629333 nova_compute[244014]: 2026-02-25 13:30:25.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:30:26 np0005629333 nova_compute[244014]: 2026-02-25 13:30:26.885 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:30:27 np0005629333 ceph-mgr[76641]: [devicehealth INFO root] Check health
Feb 25 08:30:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3317: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:30:29 np0005629333 nova_compute[244014]: 2026-02-25 13:30:29.346 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:30:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:30:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3318: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:30:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:30:31
Feb 25 08:30:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:30:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:30:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.log', 'images', 'volumes', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'backups', '.mgr', 'default.rgw.control']
Feb 25 08:30:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
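
"prepared 0/10 upmap changes" means the balancer evaluated up to 10 candidate moves across those pools and found the PG distribution already optimal. An operator can confirm the same from the CLI; a hedged sketch assuming the usual `ceph balancer status` JSON keys:

    import json
    import subprocess

    # Ask the mgr balancer module for its state; 'active' and 'mode' are
    # assumed keys of the usual `ceph balancer status` JSON output.
    status = json.loads(subprocess.check_output(
        ['ceph', 'balancer', 'status', '--format=json']))
    print(status['active'], status['mode'])  # e.g. True upmap
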
Feb 25 08:30:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:30:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:30:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:30:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:30:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:30:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:30:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3319: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:30:31 np0005629333 nova_compute[244014]: 2026-02-25 13:30:31.939 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:30:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:30:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:30:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:30:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:30:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:30:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:30:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:30:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:30:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:30:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
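
The rbd_support handlers above reload trash-purge and mirror-snapshot schedules for the vms, volumes, backups and images pools. A hedged sketch that lists what was just loaded, assuming the stock `rbd ... schedule ls` subcommands:

    import subprocess

    # List the schedules the TrashPurgeScheduleHandler reloaded; the
    # subcommand spelling assumes the stock rbd CLI.
    for pool in ('vms', 'volumes', 'backups', 'images'):
        subprocess.run(
            ['rbd', 'trash', 'purge', 'schedule', 'ls',
             '--pool', pool, '--recursive'],
            check=False)  # check=False: a pool without schedules is fine
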
Feb 25 08:30:32 np0005629333 nova_compute[244014]: 2026-02-25 13:30:32.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:30:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3320: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:30:34 np0005629333 nova_compute[244014]: 2026-02-25 13:30:34.355 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:30:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:30:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3321: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:30:36 np0005629333 nova_compute[244014]: 2026-02-25 13:30:36.941 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:30:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3322: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:30:38 np0005629333 podman[403649]: 2026-02-25 13:30:38.725492898 +0000 UTC m=+0.060498859 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 08:30:38 np0005629333 podman[403650]: 2026-02-25 13:30:38.792107308 +0000 UTC m=+0.127873820 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:30:39 np0005629333 nova_compute[244014]: 2026-02-25 13:30:39.357 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:30:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:30:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3323: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:30:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:30:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:30:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:30:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:30:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:30:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:30:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:30:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:30:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:30:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:30:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:30:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:30:40 np0005629333 podman[403838]: 2026-02-25 13:30:40.638811896 +0000 UTC m=+0.058278316 container create e7b24a88395acfb192704a9d02521cf8e219a5ec484c683554baa76804eedf55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ride, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 08:30:40 np0005629333 systemd[1]: Started libpod-conmon-e7b24a88395acfb192704a9d02521cf8e219a5ec484c683554baa76804eedf55.scope.
Feb 25 08:30:40 np0005629333 podman[403838]: 2026-02-25 13:30:40.611058022 +0000 UTC m=+0.030524462 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:30:40 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:30:40 np0005629333 podman[403838]: 2026-02-25 13:30:40.732610733 +0000 UTC m=+0.152077233 container init e7b24a88395acfb192704a9d02521cf8e219a5ec484c683554baa76804eedf55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ride, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 08:30:40 np0005629333 podman[403838]: 2026-02-25 13:30:40.741423922 +0000 UTC m=+0.160890382 container start e7b24a88395acfb192704a9d02521cf8e219a5ec484c683554baa76804eedf55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ride, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 08:30:40 np0005629333 distracted_ride[403855]: 167 167
Feb 25 08:30:40 np0005629333 systemd[1]: libpod-e7b24a88395acfb192704a9d02521cf8e219a5ec484c683554baa76804eedf55.scope: Deactivated successfully.
Feb 25 08:30:40 np0005629333 podman[403838]: 2026-02-25 13:30:40.748024228 +0000 UTC m=+0.167490688 container attach e7b24a88395acfb192704a9d02521cf8e219a5ec484c683554baa76804eedf55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ride, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 08:30:40 np0005629333 podman[403838]: 2026-02-25 13:30:40.749616313 +0000 UTC m=+0.169082763 container died e7b24a88395acfb192704a9d02521cf8e219a5ec484c683554baa76804eedf55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ride, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 08:30:40 np0005629333 systemd[1]: var-lib-containers-storage-overlay-dd65b05d06f03b3ddf2476bfacc68ffaf176cc77e3a22835a26a5dfe5f804ca2-merged.mount: Deactivated successfully.
Feb 25 08:30:40 np0005629333 podman[403838]: 2026-02-25 13:30:40.815837292 +0000 UTC m=+0.235303712 container remove e7b24a88395acfb192704a9d02521cf8e219a5ec484c683554baa76804eedf55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 08:30:40 np0005629333 systemd[1]: libpod-conmon-e7b24a88395acfb192704a9d02521cf8e219a5ec484c683554baa76804eedf55.scope: Deactivated successfully.
Feb 25 08:30:40 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:30:40 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:30:40 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:30:40 np0005629333 podman[403879]: 2026-02-25 13:30:40.971892896 +0000 UTC m=+0.051024591 container create 42ac099aa900e3aa866c70aaeee2343aca2a05f02c5866d6a56cd0dcc2f1388f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:30:41 np0005629333 systemd[1]: Started libpod-conmon-42ac099aa900e3aa866c70aaeee2343aca2a05f02c5866d6a56cd0dcc2f1388f.scope.
Feb 25 08:30:41 np0005629333 podman[403879]: 2026-02-25 13:30:40.947206429 +0000 UTC m=+0.026338164 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:30:41 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:30:41 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09351871fa61df44f86eb1c5844256176f446fa50809ce7230574a8c098c52dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:30:41 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09351871fa61df44f86eb1c5844256176f446fa50809ce7230574a8c098c52dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:30:41 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09351871fa61df44f86eb1c5844256176f446fa50809ce7230574a8c098c52dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:30:41 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09351871fa61df44f86eb1c5844256176f446fa50809ce7230574a8c098c52dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:30:41 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09351871fa61df44f86eb1c5844256176f446fa50809ce7230574a8c098c52dd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:30:41 np0005629333 podman[403879]: 2026-02-25 13:30:41.085219754 +0000 UTC m=+0.164351479 container init 42ac099aa900e3aa866c70aaeee2343aca2a05f02c5866d6a56cd0dcc2f1388f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 08:30:41 np0005629333 podman[403879]: 2026-02-25 13:30:41.092428477 +0000 UTC m=+0.171560162 container start 42ac099aa900e3aa866c70aaeee2343aca2a05f02c5866d6a56cd0dcc2f1388f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_burnell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:30:41 np0005629333 podman[403879]: 2026-02-25 13:30:41.096458631 +0000 UTC m=+0.175590316 container attach 42ac099aa900e3aa866c70aaeee2343aca2a05f02c5866d6a56cd0dcc2f1388f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:30:41 np0005629333 charming_burnell[403896]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:30:41 np0005629333 charming_burnell[403896]: --> All data devices are unavailable
Feb 25 08:30:41 np0005629333 systemd[1]: libpod-42ac099aa900e3aa866c70aaeee2343aca2a05f02c5866d6a56cd0dcc2f1388f.scope: Deactivated successfully.
Feb 25 08:30:41 np0005629333 conmon[403896]: conmon 42ac099aa900e3aa866c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-42ac099aa900e3aa866c70aaeee2343aca2a05f02c5866d6a56cd0dcc2f1388f.scope/container/memory.events
Feb 25 08:30:41 np0005629333 podman[403879]: 2026-02-25 13:30:41.557207274 +0000 UTC m=+0.636338999 container died 42ac099aa900e3aa866c70aaeee2343aca2a05f02c5866d6a56cd0dcc2f1388f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_burnell, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:30:41 np0005629333 systemd[1]: var-lib-containers-storage-overlay-09351871fa61df44f86eb1c5844256176f446fa50809ce7230574a8c098c52dd-merged.mount: Deactivated successfully.
Feb 25 08:30:41 np0005629333 podman[403879]: 2026-02-25 13:30:41.633797676 +0000 UTC m=+0.712929371 container remove 42ac099aa900e3aa866c70aaeee2343aca2a05f02c5866d6a56cd0dcc2f1388f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 08:30:41 np0005629333 systemd[1]: libpod-conmon-42ac099aa900e3aa866c70aaeee2343aca2a05f02c5866d6a56cd0dcc2f1388f.scope: Deactivated successfully.
Feb 25 08:30:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3324: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:30:41 np0005629333 nova_compute[244014]: 2026-02-25 13:30:41.944 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:30:42 np0005629333 podman[403991]: 2026-02-25 13:30:42.10274097 +0000 UTC m=+0.040024970 container create 59f6210c459783819146bc84d749fc6b319f0eba12a1beb0a851c1db8eb03d4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 08:30:42 np0005629333 systemd[1]: Started libpod-conmon-59f6210c459783819146bc84d749fc6b319f0eba12a1beb0a851c1db8eb03d4d.scope.
Feb 25 08:30:42 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:30:42 np0005629333 podman[403991]: 2026-02-25 13:30:42.084144376 +0000 UTC m=+0.021428486 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:30:42 np0005629333 podman[403991]: 2026-02-25 13:30:42.19235641 +0000 UTC m=+0.129640510 container init 59f6210c459783819146bc84d749fc6b319f0eba12a1beb0a851c1db8eb03d4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_keller, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:30:42 np0005629333 podman[403991]: 2026-02-25 13:30:42.20016471 +0000 UTC m=+0.137448760 container start 59f6210c459783819146bc84d749fc6b319f0eba12a1beb0a851c1db8eb03d4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 08:30:42 np0005629333 recursing_keller[404008]: 167 167
Feb 25 08:30:42 np0005629333 systemd[1]: libpod-59f6210c459783819146bc84d749fc6b319f0eba12a1beb0a851c1db8eb03d4d.scope: Deactivated successfully.
Feb 25 08:30:42 np0005629333 podman[403991]: 2026-02-25 13:30:42.206768586 +0000 UTC m=+0.144052686 container attach 59f6210c459783819146bc84d749fc6b319f0eba12a1beb0a851c1db8eb03d4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_keller, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:30:42 np0005629333 podman[403991]: 2026-02-25 13:30:42.207983931 +0000 UTC m=+0.145267981 container died 59f6210c459783819146bc84d749fc6b319f0eba12a1beb0a851c1db8eb03d4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 08:30:42 np0005629333 systemd[1]: var-lib-containers-storage-overlay-3bf66153b4353a344cf7eb141d9076d2349dcd09d6f2c3316414e2013a6948d8-merged.mount: Deactivated successfully.
Feb 25 08:30:42 np0005629333 podman[403991]: 2026-02-25 13:30:42.28342333 +0000 UTC m=+0.220707370 container remove 59f6210c459783819146bc84d749fc6b319f0eba12a1beb0a851c1db8eb03d4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_keller, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 08:30:42 np0005629333 systemd[1]: libpod-conmon-59f6210c459783819146bc84d749fc6b319f0eba12a1beb0a851c1db8eb03d4d.scope: Deactivated successfully.
Feb 25 08:30:42 np0005629333 podman[404031]: 2026-02-25 13:30:42.4311884 +0000 UTC m=+0.043592131 container create c5569b65d4371b2b1988f889e497f94a610b8ef14686e517f865714d0368d82b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ritchie, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:30:42 np0005629333 systemd[1]: Started libpod-conmon-c5569b65d4371b2b1988f889e497f94a610b8ef14686e517f865714d0368d82b.scope.
Feb 25 08:30:42 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:30:42 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98dc42163140e89944425e33b066c610034805847869a11f66918244649345be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:30:42 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98dc42163140e89944425e33b066c610034805847869a11f66918244649345be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:30:42 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98dc42163140e89944425e33b066c610034805847869a11f66918244649345be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:30:42 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98dc42163140e89944425e33b066c610034805847869a11f66918244649345be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:30:42 np0005629333 podman[404031]: 2026-02-25 13:30:42.411090063 +0000 UTC m=+0.023493844 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:30:42 np0005629333 podman[404031]: 2026-02-25 13:30:42.512632719 +0000 UTC m=+0.125036430 container init c5569b65d4371b2b1988f889e497f94a610b8ef14686e517f865714d0368d82b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ritchie, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 08:30:42 np0005629333 podman[404031]: 2026-02-25 13:30:42.516431736 +0000 UTC m=+0.128835437 container start c5569b65d4371b2b1988f889e497f94a610b8ef14686e517f865714d0368d82b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 08:30:42 np0005629333 podman[404031]: 2026-02-25 13:30:42.520584123 +0000 UTC m=+0.132987824 container attach c5569b65d4371b2b1988f889e497f94a610b8ef14686e517f865714d0368d82b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]: {
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:    "0": [
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:        {
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "devices": [
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "/dev/loop3"
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            ],
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "lv_name": "ceph_lv0",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "lv_size": "21470642176",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "name": "ceph_lv0",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "tags": {
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.cluster_name": "ceph",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.crush_device_class": "",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.encrypted": "0",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.objectstore": "bluestore",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.osd_id": "0",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.type": "block",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.vdo": "0",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.with_tpm": "0"
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            },
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "type": "block",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "vg_name": "ceph_vg0"
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:        }
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:    ],
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:    "1": [
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:        {
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "devices": [
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "/dev/loop4"
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            ],
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "lv_name": "ceph_lv1",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "lv_size": "21470642176",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "name": "ceph_lv1",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "tags": {
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.cluster_name": "ceph",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.crush_device_class": "",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.encrypted": "0",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.objectstore": "bluestore",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.osd_id": "1",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.type": "block",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.vdo": "0",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.with_tpm": "0"
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            },
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "type": "block",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "vg_name": "ceph_vg1"
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:        }
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:    ],
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:    "2": [
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:        {
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "devices": [
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "/dev/loop5"
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            ],
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "lv_name": "ceph_lv2",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "lv_size": "21470642176",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "name": "ceph_lv2",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "tags": {
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.cluster_name": "ceph",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.crush_device_class": "",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.encrypted": "0",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.objectstore": "bluestore",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.osd_id": "2",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.type": "block",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.vdo": "0",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:                "ceph.with_tpm": "0"
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            },
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "type": "block",
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:            "vg_name": "ceph_vg2"
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:        }
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]:    ]
Feb 25 08:30:42 np0005629333 gifted_ritchie[404048]: }
Feb 25 08:30:42 np0005629333 systemd[1]: libpod-c5569b65d4371b2b1988f889e497f94a610b8ef14686e517f865714d0368d82b.scope: Deactivated successfully.
Feb 25 08:30:42 np0005629333 podman[404031]: 2026-02-25 13:30:42.812155262 +0000 UTC m=+0.424559003 container died c5569b65d4371b2b1988f889e497f94a610b8ef14686e517f865714d0368d82b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 08:30:42 np0005629333 systemd[1]: var-lib-containers-storage-overlay-98dc42163140e89944425e33b066c610034805847869a11f66918244649345be-merged.mount: Deactivated successfully.
Feb 25 08:30:43 np0005629333 podman[404031]: 2026-02-25 13:30:43.024972628 +0000 UTC m=+0.637376369 container remove c5569b65d4371b2b1988f889e497f94a610b8ef14686e517f865714d0368d82b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 08:30:43 np0005629333 systemd[1]: libpod-conmon-c5569b65d4371b2b1988f889e497f94a610b8ef14686e517f865714d0368d82b.scope: Deactivated successfully.
Feb 25 08:30:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:30:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:30:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:30:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:30:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:30:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:30:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:30:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:30:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:30:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:30:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:30:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:30:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:30:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:30:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:30:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:30:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:30:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:30:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:30:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:30:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:30:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:30:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 08:30:43 np0005629333 podman[404131]: 2026-02-25 13:30:43.605615665 +0000 UTC m=+0.114089370 container create 9792e56361834076c9ab9a5b66e0c58d81123dd5d22af434f77952d065ab1977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:30:43 np0005629333 podman[404131]: 2026-02-25 13:30:43.523800056 +0000 UTC m=+0.032273761 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:30:43 np0005629333 systemd[1]: Started libpod-conmon-9792e56361834076c9ab9a5b66e0c58d81123dd5d22af434f77952d065ab1977.scope.
Feb 25 08:30:43 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:30:43 np0005629333 podman[404131]: 2026-02-25 13:30:43.706357018 +0000 UTC m=+0.214830783 container init 9792e56361834076c9ab9a5b66e0c58d81123dd5d22af434f77952d065ab1977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_jepsen, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 08:30:43 np0005629333 podman[404131]: 2026-02-25 13:30:43.718379278 +0000 UTC m=+0.226852953 container start 9792e56361834076c9ab9a5b66e0c58d81123dd5d22af434f77952d065ab1977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:30:43 np0005629333 confident_jepsen[404147]: 167 167
Feb 25 08:30:43 np0005629333 systemd[1]: libpod-9792e56361834076c9ab9a5b66e0c58d81123dd5d22af434f77952d065ab1977.scope: Deactivated successfully.
Feb 25 08:30:43 np0005629333 podman[404131]: 2026-02-25 13:30:43.784390721 +0000 UTC m=+0.292864476 container attach 9792e56361834076c9ab9a5b66e0c58d81123dd5d22af434f77952d065ab1977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_jepsen, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:30:43 np0005629333 podman[404131]: 2026-02-25 13:30:43.786328096 +0000 UTC m=+0.294801801 container died 9792e56361834076c9ab9a5b66e0c58d81123dd5d22af434f77952d065ab1977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_jepsen, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:30:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3325: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:30:43 np0005629333 systemd[1]: var-lib-containers-storage-overlay-92eabbc82def4bfbc2ff37e7eef6e6850ba2b955404b1ecb8fbd389a8b64e592-merged.mount: Deactivated successfully.
Feb 25 08:30:43 np0005629333 podman[404131]: 2026-02-25 13:30:43.975476433 +0000 UTC m=+0.483950128 container remove 9792e56361834076c9ab9a5b66e0c58d81123dd5d22af434f77952d065ab1977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 08:30:43 np0005629333 systemd[1]: libpod-conmon-9792e56361834076c9ab9a5b66e0c58d81123dd5d22af434f77952d065ab1977.scope: Deactivated successfully.
Feb 25 08:30:44 np0005629333 podman[404174]: 2026-02-25 13:30:44.171001781 +0000 UTC m=+0.061166237 container create e0d0032367b85e02367d693bcffa520056acc607aad7cfb2088568fb94dd0ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_williamson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:30:44 np0005629333 systemd[1]: Started libpod-conmon-e0d0032367b85e02367d693bcffa520056acc607aad7cfb2088568fb94dd0ca3.scope.
Feb 25 08:30:44 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:30:44 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44bc15a584f5d5dca13a75fa2bb3a51d675530db58dedbdcc428785edd1058fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:30:44 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44bc15a584f5d5dca13a75fa2bb3a51d675530db58dedbdcc428785edd1058fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:30:44 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44bc15a584f5d5dca13a75fa2bb3a51d675530db58dedbdcc428785edd1058fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:30:44 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44bc15a584f5d5dca13a75fa2bb3a51d675530db58dedbdcc428785edd1058fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
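The four xfs warnings above are the kernel noting that these overlay mounts sit on XFS inodes apparently formatted without the bigtime feature, so their timestamps are capped at 0x7fffffff seconds since the epoch. A one-line check of what that cutoff means, using nothing beyond the logged constant:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed epoch timestamp,
    # matching the "(0x7fffffff)" the kernel prints above.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00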
Feb 25 08:30:44 np0005629333 podman[404174]: 2026-02-25 13:30:44.143958778 +0000 UTC m=+0.034123314 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:30:44 np0005629333 podman[404174]: 2026-02-25 13:30:44.243996781 +0000 UTC m=+0.134161267 container init e0d0032367b85e02367d693bcffa520056acc607aad7cfb2088568fb94dd0ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_williamson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:30:44 np0005629333 podman[404174]: 2026-02-25 13:30:44.25173416 +0000 UTC m=+0.141898616 container start e0d0032367b85e02367d693bcffa520056acc607aad7cfb2088568fb94dd0ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 08:30:44 np0005629333 podman[404174]: 2026-02-25 13:30:44.260801396 +0000 UTC m=+0.150965852 container attach e0d0032367b85e02367d693bcffa520056acc607aad7cfb2088568fb94dd0ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_williamson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 08:30:44 np0005629333 nova_compute[244014]: 2026-02-25 13:30:44.359 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:30:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:30:44 np0005629333 lvm[404266]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:30:44 np0005629333 lvm[404266]: VG ceph_vg0 finished
Feb 25 08:30:44 np0005629333 lvm[404269]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:30:44 np0005629333 lvm[404269]: VG ceph_vg1 finished
Feb 25 08:30:44 np0005629333 lvm[404271]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:30:44 np0005629333 lvm[404271]: VG ceph_vg2 finished
Feb 25 08:30:45 np0005629333 elegant_williamson[404190]: {}
Feb 25 08:30:45 np0005629333 systemd[1]: libpod-e0d0032367b85e02367d693bcffa520056acc607aad7cfb2088568fb94dd0ca3.scope: Deactivated successfully.
Feb 25 08:30:45 np0005629333 systemd[1]: libpod-e0d0032367b85e02367d693bcffa520056acc607aad7cfb2088568fb94dd0ca3.scope: Consumed 1.096s CPU time.
Feb 25 08:30:45 np0005629333 podman[404174]: 2026-02-25 13:30:45.050464522 +0000 UTC m=+0.940628978 container died e0d0032367b85e02367d693bcffa520056acc607aad7cfb2088568fb94dd0ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_williamson, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 08:30:45 np0005629333 systemd[1]: var-lib-containers-storage-overlay-44bc15a584f5d5dca13a75fa2bb3a51d675530db58dedbdcc428785edd1058fb-merged.mount: Deactivated successfully.
Feb 25 08:30:45 np0005629333 podman[404174]: 2026-02-25 13:30:45.122302059 +0000 UTC m=+1.012466505 container remove e0d0032367b85e02367d693bcffa520056acc607aad7cfb2088568fb94dd0ca3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:30:45 np0005629333 systemd[1]: libpod-conmon-e0d0032367b85e02367d693bcffa520056acc607aad7cfb2088568fb94dd0ca3.scope: Deactivated successfully.
Feb 25 08:30:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:30:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:30:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:30:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:30:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3326: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:30:46 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:30:46 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:30:46 np0005629333 nova_compute[244014]: 2026-02-25 13:30:46.990 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:30:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:30:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2000992389' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:30:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:30:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2000992389' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:30:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3327: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:30:49 np0005629333 nova_compute[244014]: 2026-02-25 13:30:49.398 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:30:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:30:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3328: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:30:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3329: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:30:52 np0005629333 nova_compute[244014]: 2026-02-25 13:30:52.013 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:30:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3330: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:30:54 np0005629333 nova_compute[244014]: 2026-02-25 13:30:54.401 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:30:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:30:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:30:55.073 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:30:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:30:55.073 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:30:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:30:55.073 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:30:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3331: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:30:57 np0005629333 nova_compute[244014]: 2026-02-25 13:30:57.015 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:30:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3332: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:30:59 np0005629333 nova_compute[244014]: 2026-02-25 13:30:59.405 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.464188) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026259464219, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 646, "num_deletes": 255, "total_data_size": 770294, "memory_usage": 783072, "flush_reason": "Manual Compaction"}
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026259473639, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 763599, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69534, "largest_seqno": 70179, "table_properties": {"data_size": 760156, "index_size": 1350, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7666, "raw_average_key_size": 18, "raw_value_size": 753224, "raw_average_value_size": 1841, "num_data_blocks": 60, "num_entries": 409, "num_filter_entries": 409, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772026211, "oldest_key_time": 1772026211, "file_creation_time": 1772026259, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 9512 microseconds, and 2234 cpu microseconds.
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.473675) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 763599 bytes OK
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.473748) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.478951) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.478975) EVENT_LOG_v1 {"time_micros": 1772026259478968, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.478994) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 766837, prev total WAL file size 767994, number of live WAL files 2.
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.479457) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303137' seq:72057594037927935, type:22 .. '6C6F676D0033323638' seq:0, type:0; will stop at (end)
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(745KB)], [164(9869KB)]
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026259479483, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 10869470, "oldest_snapshot_seqno": -1}
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 8693 keys, 10766512 bytes, temperature: kUnknown
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026259525716, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 10766512, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10711761, "index_size": 31918, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21765, "raw_key_size": 227881, "raw_average_key_size": 26, "raw_value_size": 10559830, "raw_average_value_size": 1214, "num_data_blocks": 1240, "num_entries": 8693, "num_filter_entries": 8693, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772026259, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.525965) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 10766512 bytes
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.530805) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 234.7 rd, 232.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.6 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(28.3) write-amplify(14.1) OK, records in: 9214, records dropped: 521 output_compression: NoCompression
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.530829) EVENT_LOG_v1 {"time_micros": 1772026259530818, "job": 102, "event": "compaction_finished", "compaction_time_micros": 46309, "compaction_time_cpu_micros": 20667, "output_level": 6, "num_output_files": 1, "total_output_size": 10766512, "num_input_records": 9214, "num_output_records": 8693, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026259531079, "job": 102, "event": "table_file_deletion", "file_number": 166}
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026259532585, "job": 102, "event": "table_file_deletion", "file_number": 164}
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.479399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.532610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.532615) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.532617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.532619) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:30:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:30:59.532622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
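JOB 102's summary line above reports "MB/sec: 234.7 rd", "read-write-amplify(28.3)" and "write-amplify(14.1)"; all three follow directly from the byte counts in the surrounding EVENT_LOG_v1 entries. A small reproduction, using only numbers taken verbatim from this log:

    # Byte counts logged by JOB 101/102 above.
    l0_input = 763_599          # flush output: table #166 file_size
    total_input = 10_869_470    # compaction_started: input_data_size (L0 + L6)
    total_output = 10_766_512   # compaction_finished: total_output_size
    micros = 46_309             # compaction_finished: compaction_time_micros

    write_amplify = total_output / l0_input                       # ~14.1
    read_write_amplify = (total_input + total_output) / l0_input  # ~28.3
    read_mb_per_s = total_input / (micros / 1e6) / 1e6            # ~234.7
    print(f"{write_amplify:.1f} {read_write_amplify:.1f} {read_mb_per_s:.1f}")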
Feb 25 08:30:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3333: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:31:00 np0005629333 nova_compute[244014]: 2026-02-25 13:31:00.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:31:00 np0005629333 nova_compute[244014]: 2026-02-25 13:31:00.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:31:00 np0005629333 nova_compute[244014]: 2026-02-25 13:31:00.917 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:31:00 np0005629333 nova_compute[244014]: 2026-02-25 13:31:00.918 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:31:00 np0005629333 nova_compute[244014]: 2026-02-25 13:31:00.918 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:31:00 np0005629333 nova_compute[244014]: 2026-02-25 13:31:00.918 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:31:00 np0005629333 nova_compute[244014]: 2026-02-25 13:31:00.919 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:31:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:31:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/356898532' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:31:01 np0005629333 nova_compute[244014]: 2026-02-25 13:31:01.546 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:31:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:31:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:31:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:31:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:31:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:31:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:31:01 np0005629333 nova_compute[244014]: 2026-02-25 13:31:01.699 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:31:01 np0005629333 nova_compute[244014]: 2026-02-25 13:31:01.700 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3557MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:31:01 np0005629333 nova_compute[244014]: 2026-02-25 13:31:01.700 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:31:01 np0005629333 nova_compute[244014]: 2026-02-25 13:31:01.701 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:31:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3334: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:31:02 np0005629333 nova_compute[244014]: 2026-02-25 13:31:02.017 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:31:02 np0005629333 nova_compute[244014]: 2026-02-25 13:31:02.038 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:31:02 np0005629333 nova_compute[244014]: 2026-02-25 13:31:02.039 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:31:02 np0005629333 nova_compute[244014]: 2026-02-25 13:31:02.135 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 25 08:31:02 np0005629333 nova_compute[244014]: 2026-02-25 13:31:02.276 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 25 08:31:02 np0005629333 nova_compute[244014]: 2026-02-25 13:31:02.277 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 25 08:31:02 np0005629333 nova_compute[244014]: 2026-02-25 13:31:02.292 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 25 08:31:02 np0005629333 nova_compute[244014]: 2026-02-25 13:31:02.321 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 25 08:31:02 np0005629333 nova_compute[244014]: 2026-02-25 13:31:02.339 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:31:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:31:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1034938038' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:31:02 np0005629333 nova_compute[244014]: 2026-02-25 13:31:02.855 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:31:02 np0005629333 nova_compute[244014]: 2026-02-25 13:31:02.862 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:31:02 np0005629333 nova_compute[244014]: 2026-02-25 13:31:02.878 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:31:02 np0005629333 nova_compute[244014]: 2026-02-25 13:31:02.881 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:31:02 np0005629333 nova_compute[244014]: 2026-02-25 13:31:02.882 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
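The periodic update_available_resource cycle above ends with the resource tracker confirming the placement inventory, where each resource class carries a total, a reserved amount, and an allocation_ratio. Placement's effective capacity for a class is (total - reserved) * allocation_ratio, so the logged inventory implies the schedulable headroom computed below; a minimal sketch using only values from the lines above:

    # Inventory as logged for provider cb4dae98-2ac3-4218-9445-2320139e12ad.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g}")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2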
Feb 25 08:31:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3335: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:31:04 np0005629333 nova_compute[244014]: 2026-02-25 13:31:04.409 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.621949) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026264622031, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 315, "num_deletes": 251, "total_data_size": 101251, "memory_usage": 107176, "flush_reason": "Manual Compaction"}
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026264641028, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 100286, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70180, "largest_seqno": 70494, "table_properties": {"data_size": 98250, "index_size": 199, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5195, "raw_average_key_size": 18, "raw_value_size": 94295, "raw_average_value_size": 334, "num_data_blocks": 9, "num_entries": 282, "num_filter_entries": 282, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772026259, "oldest_key_time": 1772026259, "file_creation_time": 1772026264, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 19137 microseconds, and 1146 cpu microseconds.
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.641093) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 100286 bytes OK
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.641121) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.645948) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.645974) EVENT_LOG_v1 {"time_micros": 1772026264645966, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.645995) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 99011, prev total WAL file size 99011, number of live WAL files 2.
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.646434) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(97KB)], [167(10MB)]
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026264646638, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 10866798, "oldest_snapshot_seqno": -1}
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 8466 keys, 9092546 bytes, temperature: kUnknown
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026264714305, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 9092546, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9040920, "index_size": 29369, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21189, "raw_key_size": 223857, "raw_average_key_size": 26, "raw_value_size": 8894493, "raw_average_value_size": 1050, "num_data_blocks": 1124, "num_entries": 8466, "num_filter_entries": 8466, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772026264, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.714580) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 9092546 bytes
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.716077) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.6 rd, 134.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 10.3 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(199.0) write-amplify(90.7) OK, records in: 8975, records dropped: 509 output_compression: NoCompression
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.716108) EVENT_LOG_v1 {"time_micros": 1772026264716094, "job": 104, "event": "compaction_finished", "compaction_time_micros": 67661, "compaction_time_cpu_micros": 28549, "output_level": 6, "num_output_files": 1, "total_output_size": 9092546, "num_input_records": 8975, "num_output_records": 8466, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026264716260, "job": 104, "event": "table_file_deletion", "file_number": 169}
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026264718136, "job": 104, "event": "table_file_deletion", "file_number": 167}
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.646335) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.718362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.718376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.718380) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.718384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:31:04 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:31:04.718389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:31:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3336: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:31:06 np0005629333 nova_compute[244014]: 2026-02-25 13:31:06.883 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:31:06 np0005629333 nova_compute[244014]: 2026-02-25 13:31:06.883 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:31:06 np0005629333 nova_compute[244014]: 2026-02-25 13:31:06.883 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:31:06 np0005629333 nova_compute[244014]: 2026-02-25 13:31:06.975 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:31:07 np0005629333 nova_compute[244014]: 2026-02-25 13:31:07.027 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:31:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3337: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:31:07 np0005629333 nova_compute[244014]: 2026-02-25 13:31:07.964 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:31:09 np0005629333 nova_compute[244014]: 2026-02-25 13:31:09.450 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:31:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:31:09 np0005629333 podman[404356]: 2026-02-25 13:31:09.776718862 +0000 UTC m=+0.104692275 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Feb 25 08:31:09 np0005629333 podman[404357]: 2026-02-25 13:31:09.78124922 +0000 UTC m=+0.109412549 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
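The two health_status records above come from podman running each container's configured healthcheck ('test': '/openstack/healthcheck', mounted at /openstack) and report health_status=healthy with a failing streak of 0. A hedged sketch of reading that same state back via podman inspect; the JSON key has moved between podman releases, so both spellings are tried:

    import json
    import subprocess

    def health(name: str) -> dict:
        out = subprocess.run(["podman", "inspect", name],
                             capture_output=True, text=True, check=True).stdout
        state = json.loads(out)[0].get("State", {})
        # Older podman exposes "Healthcheck"; newer, Docker-compatible
        # versions expose "Health".
        return state.get("Health") or state.get("Healthcheck") or {}

    print(health("ovn_controller").get("Status"))  # e.g. "healthy"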
Feb 25 08:31:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3338: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:31:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3339: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:31:12 np0005629333 nova_compute[244014]: 2026-02-25 13:31:12.072 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:31:12 np0005629333 nova_compute[244014]: 2026-02-25 13:31:12.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:31:12 np0005629333 nova_compute[244014]: 2026-02-25 13:31:12.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 08:31:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3340: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:31:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:31:14 np0005629333 nova_compute[244014]: 2026-02-25 13:31:14.494 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:31:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3341: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:31:15 np0005629333 nova_compute[244014]: 2026-02-25 13:31:15.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:31:17 np0005629333 nova_compute[244014]: 2026-02-25 13:31:17.074 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:31:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3342: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:31:17 np0005629333 nova_compute[244014]: 2026-02-25 13:31:17.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:31:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:31:19 np0005629333 nova_compute[244014]: 2026-02-25 13:31:19.498 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:31:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3343: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:31:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3344: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Feb 25 08:31:21 np0005629333 nova_compute[244014]: 2026-02-25 13:31:21.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:31:22 np0005629333 nova_compute[244014]: 2026-02-25 13:31:22.077 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:31:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3345: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Feb 25 08:31:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:31:24 np0005629333 nova_compute[244014]: 2026-02-25 13:31:24.501 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:31:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3346: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Feb 25 08:31:25 np0005629333 nova_compute[244014]: 2026-02-25 13:31:25.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:31:27 np0005629333 nova_compute[244014]: 2026-02-25 13:31:27.124 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:31:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3347: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 08:31:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:31:29 np0005629333 nova_compute[244014]: 2026-02-25 13:31:29.506 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:31:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3348: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 08:31:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:31:31
Feb 25 08:31:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:31:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:31:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'vms', 'images', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'volumes']
Feb 25 08:31:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:31:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:31:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:31:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:31:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:31:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:31:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:31:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3349: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 08:31:32 np0005629333 nova_compute[244014]: 2026-02-25 13:31:32.171 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:31:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:31:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:31:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:31:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:31:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:31:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:31:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:31:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:31:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:31:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:31:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3350: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Feb 25 08:31:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:31:34 np0005629333 nova_compute[244014]: 2026-02-25 13:31:34.540 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:31:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3351: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Feb 25 08:31:37 np0005629333 nova_compute[244014]: 2026-02-25 13:31:37.171 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:31:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3352: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Feb 25 08:31:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:31:39 np0005629333 nova_compute[244014]: 2026-02-25 13:31:39.569 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:31:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3353: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:31:40 np0005629333 podman[404403]: 2026-02-25 13:31:40.719703793 +0000 UTC m=+0.057492123 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:31:40 np0005629333 podman[404404]: 2026-02-25 13:31:40.796544212 +0000 UTC m=+0.126630855 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 08:31:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3354: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:31:42 np0005629333 nova_compute[244014]: 2026-02-25 13:31:42.209 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:31:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:31:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:31:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:31:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:31:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:31:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:31:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:31:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:31:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:31:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:31:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:31:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:31:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:31:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:31:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:31:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:31:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:31:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:31:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:31:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:31:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:31:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:31:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 08:31:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3355: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:31:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:31:44 np0005629333 nova_compute[244014]: 2026-02-25 13:31:44.573 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:31:45 np0005629333 auditd[725]: Audit daemon rotating log files
Feb 25 08:31:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3356: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:31:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:31:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:31:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:31:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:31:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:31:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:31:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:31:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:31:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:31:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:31:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:31:45 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:31:46 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:31:46 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:31:46 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:31:46 np0005629333 podman[404593]: 2026-02-25 13:31:46.303175012 +0000 UTC m=+0.057681339 container create c9b17f4d68401dc43c95e39a79993b578387629f54b967916e28450d8b9ca184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_darwin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 08:31:46 np0005629333 systemd[1]: Started libpod-conmon-c9b17f4d68401dc43c95e39a79993b578387629f54b967916e28450d8b9ca184.scope.
Feb 25 08:31:46 np0005629333 podman[404593]: 2026-02-25 13:31:46.278321021 +0000 UTC m=+0.032827378 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:31:46 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:31:46 np0005629333 podman[404593]: 2026-02-25 13:31:46.394050107 +0000 UTC m=+0.148556404 container init c9b17f4d68401dc43c95e39a79993b578387629f54b967916e28450d8b9ca184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_darwin, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 08:31:46 np0005629333 podman[404593]: 2026-02-25 13:31:46.39949659 +0000 UTC m=+0.154002917 container start c9b17f4d68401dc43c95e39a79993b578387629f54b967916e28450d8b9ca184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_darwin, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 08:31:46 np0005629333 podman[404593]: 2026-02-25 13:31:46.403162144 +0000 UTC m=+0.157668451 container attach c9b17f4d68401dc43c95e39a79993b578387629f54b967916e28450d8b9ca184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_darwin, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 08:31:46 np0005629333 zen_darwin[404609]: 167 167
Feb 25 08:31:46 np0005629333 systemd[1]: libpod-c9b17f4d68401dc43c95e39a79993b578387629f54b967916e28450d8b9ca184.scope: Deactivated successfully.
Feb 25 08:31:46 np0005629333 podman[404593]: 2026-02-25 13:31:46.40623475 +0000 UTC m=+0.160741067 container died c9b17f4d68401dc43c95e39a79993b578387629f54b967916e28450d8b9ca184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:31:46 np0005629333 systemd[1]: var-lib-containers-storage-overlay-60537f04877f9e70e0d30c2243bffc238f7e0c7eebad064c654f920fb1f98f84-merged.mount: Deactivated successfully.
Feb 25 08:31:46 np0005629333 podman[404593]: 2026-02-25 13:31:46.454102311 +0000 UTC m=+0.208608628 container remove c9b17f4d68401dc43c95e39a79993b578387629f54b967916e28450d8b9ca184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_darwin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 08:31:46 np0005629333 systemd[1]: libpod-conmon-c9b17f4d68401dc43c95e39a79993b578387629f54b967916e28450d8b9ca184.scope: Deactivated successfully.
Feb 25 08:31:46 np0005629333 podman[404634]: 2026-02-25 13:31:46.624026397 +0000 UTC m=+0.055222979 container create e1495218393cddeeb63ff0b529e1faf041ed9168319d894bd3fdf77c8f514e06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:31:46 np0005629333 systemd[1]: Started libpod-conmon-e1495218393cddeeb63ff0b529e1faf041ed9168319d894bd3fdf77c8f514e06.scope.
Feb 25 08:31:46 np0005629333 podman[404634]: 2026-02-25 13:31:46.595234715 +0000 UTC m=+0.026431347 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:31:46 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:31:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc0dc3ff0947b56502e3a82bd21174487b7950f5802cfdce6dea8bd993b8fa03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:31:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc0dc3ff0947b56502e3a82bd21174487b7950f5802cfdce6dea8bd993b8fa03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:31:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc0dc3ff0947b56502e3a82bd21174487b7950f5802cfdce6dea8bd993b8fa03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:31:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc0dc3ff0947b56502e3a82bd21174487b7950f5802cfdce6dea8bd993b8fa03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:31:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc0dc3ff0947b56502e3a82bd21174487b7950f5802cfdce6dea8bd993b8fa03/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:31:46 np0005629333 podman[404634]: 2026-02-25 13:31:46.716380874 +0000 UTC m=+0.147577456 container init e1495218393cddeeb63ff0b529e1faf041ed9168319d894bd3fdf77c8f514e06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_euler, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:31:46 np0005629333 podman[404634]: 2026-02-25 13:31:46.724159373 +0000 UTC m=+0.155355945 container start e1495218393cddeeb63ff0b529e1faf041ed9168319d894bd3fdf77c8f514e06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_euler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 08:31:46 np0005629333 podman[404634]: 2026-02-25 13:31:46.730591645 +0000 UTC m=+0.161788227 container attach e1495218393cddeeb63ff0b529e1faf041ed9168319d894bd3fdf77c8f514e06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:31:47 np0005629333 affectionate_euler[404651]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:31:47 np0005629333 affectionate_euler[404651]: --> All data devices are unavailable
Feb 25 08:31:47 np0005629333 nova_compute[244014]: 2026-02-25 13:31:47.211 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:31:47 np0005629333 systemd[1]: libpod-e1495218393cddeeb63ff0b529e1faf041ed9168319d894bd3fdf77c8f514e06.scope: Deactivated successfully.
Feb 25 08:31:47 np0005629333 podman[404671]: 2026-02-25 13:31:47.266265823 +0000 UTC m=+0.032188800 container died e1495218393cddeeb63ff0b529e1faf041ed9168319d894bd3fdf77c8f514e06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_euler, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:31:47 np0005629333 systemd[1]: var-lib-containers-storage-overlay-fc0dc3ff0947b56502e3a82bd21174487b7950f5802cfdce6dea8bd993b8fa03-merged.mount: Deactivated successfully.
Feb 25 08:31:47 np0005629333 podman[404671]: 2026-02-25 13:31:47.319206927 +0000 UTC m=+0.085129904 container remove e1495218393cddeeb63ff0b529e1faf041ed9168319d894bd3fdf77c8f514e06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_euler, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:31:47 np0005629333 systemd[1]: libpod-conmon-e1495218393cddeeb63ff0b529e1faf041ed9168319d894bd3fdf77c8f514e06.scope: Deactivated successfully.
Feb 25 08:31:47 np0005629333 podman[404749]: 2026-02-25 13:31:47.800708956 +0000 UTC m=+0.048834829 container create 68aef1a53e1c991fc5a7049615368771958ba1ab5c073e592997d702c86c3a58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:31:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3357: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:31:47 np0005629333 systemd[1]: Started libpod-conmon-68aef1a53e1c991fc5a7049615368771958ba1ab5c073e592997d702c86c3a58.scope.
Feb 25 08:31:47 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:31:47 np0005629333 podman[404749]: 2026-02-25 13:31:47.783000336 +0000 UTC m=+0.031126259 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:31:47 np0005629333 podman[404749]: 2026-02-25 13:31:47.885464758 +0000 UTC m=+0.133590761 container init 68aef1a53e1c991fc5a7049615368771958ba1ab5c073e592997d702c86c3a58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_leavitt, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:31:47 np0005629333 podman[404749]: 2026-02-25 13:31:47.894257376 +0000 UTC m=+0.142383279 container start 68aef1a53e1c991fc5a7049615368771958ba1ab5c073e592997d702c86c3a58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_leavitt, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 08:31:47 np0005629333 pedantic_leavitt[404766]: 167 167
Feb 25 08:31:47 np0005629333 podman[404749]: 2026-02-25 13:31:47.897860458 +0000 UTC m=+0.145986371 container attach 68aef1a53e1c991fc5a7049615368771958ba1ab5c073e592997d702c86c3a58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_leavitt, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:31:47 np0005629333 systemd[1]: libpod-68aef1a53e1c991fc5a7049615368771958ba1ab5c073e592997d702c86c3a58.scope: Deactivated successfully.
Feb 25 08:31:47 np0005629333 podman[404749]: 2026-02-25 13:31:47.898509436 +0000 UTC m=+0.146635349 container died 68aef1a53e1c991fc5a7049615368771958ba1ab5c073e592997d702c86c3a58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_leavitt, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:31:47 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ca7c6c60f9573baa1e6e964031b22639186cfe9e2c7742e401159599a280632e-merged.mount: Deactivated successfully.
Feb 25 08:31:47 np0005629333 podman[404749]: 2026-02-25 13:31:47.94646048 +0000 UTC m=+0.194586383 container remove 68aef1a53e1c991fc5a7049615368771958ba1ab5c073e592997d702c86c3a58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_leavitt, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 08:31:47 np0005629333 systemd[1]: libpod-conmon-68aef1a53e1c991fc5a7049615368771958ba1ab5c073e592997d702c86c3a58.scope: Deactivated successfully.
Feb 25 08:31:48 np0005629333 podman[404790]: 2026-02-25 13:31:48.109545012 +0000 UTC m=+0.047373448 container create 7269152b087e66ae7bb80a8c00a7f595b16d0c6a05033af1644024c599a55168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 08:31:48 np0005629333 systemd[1]: Started libpod-conmon-7269152b087e66ae7bb80a8c00a7f595b16d0c6a05033af1644024c599a55168.scope.
Feb 25 08:31:48 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:31:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9482df189ec1d759f1cbcde8328518d38c8ea19eca252c295e02083bf25ce641/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:31:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9482df189ec1d759f1cbcde8328518d38c8ea19eca252c295e02083bf25ce641/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:31:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9482df189ec1d759f1cbcde8328518d38c8ea19eca252c295e02083bf25ce641/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:31:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9482df189ec1d759f1cbcde8328518d38c8ea19eca252c295e02083bf25ce641/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:31:48 np0005629333 podman[404790]: 2026-02-25 13:31:48.085559935 +0000 UTC m=+0.023388421 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:31:48 np0005629333 podman[404790]: 2026-02-25 13:31:48.191428533 +0000 UTC m=+0.129256999 container init 7269152b087e66ae7bb80a8c00a7f595b16d0c6a05033af1644024c599a55168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:31:48 np0005629333 podman[404790]: 2026-02-25 13:31:48.199400908 +0000 UTC m=+0.137229384 container start 7269152b087e66ae7bb80a8c00a7f595b16d0c6a05033af1644024c599a55168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_driscoll, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 08:31:48 np0005629333 podman[404790]: 2026-02-25 13:31:48.202932218 +0000 UTC m=+0.140760684 container attach 7269152b087e66ae7bb80a8c00a7f595b16d0c6a05033af1644024c599a55168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]: {
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:    "0": [
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:        {
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "devices": [
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "/dev/loop3"
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            ],
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "lv_name": "ceph_lv0",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "lv_size": "21470642176",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "name": "ceph_lv0",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "tags": {
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.cluster_name": "ceph",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.crush_device_class": "",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.encrypted": "0",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.objectstore": "bluestore",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.osd_id": "0",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.type": "block",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.vdo": "0",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.with_tpm": "0"
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            },
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "type": "block",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "vg_name": "ceph_vg0"
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:        }
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:    ],
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:    "1": [
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:        {
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "devices": [
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "/dev/loop4"
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            ],
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "lv_name": "ceph_lv1",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "lv_size": "21470642176",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "name": "ceph_lv1",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "tags": {
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.cluster_name": "ceph",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.crush_device_class": "",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.encrypted": "0",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.objectstore": "bluestore",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.osd_id": "1",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.type": "block",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.vdo": "0",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.with_tpm": "0"
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            },
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "type": "block",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "vg_name": "ceph_vg1"
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:        }
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:    ],
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:    "2": [
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:        {
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "devices": [
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "/dev/loop5"
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            ],
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "lv_name": "ceph_lv2",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "lv_size": "21470642176",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "name": "ceph_lv2",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "tags": {
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.cluster_name": "ceph",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.crush_device_class": "",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.encrypted": "0",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.objectstore": "bluestore",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.osd_id": "2",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.type": "block",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.vdo": "0",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:                "ceph.with_tpm": "0"
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            },
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "type": "block",
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:            "vg_name": "ceph_vg2"
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:        }
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]:    ]
Feb 25 08:31:48 np0005629333 hopeful_driscoll[404806]: }
Feb 25 08:31:48 np0005629333 systemd[1]: libpod-7269152b087e66ae7bb80a8c00a7f595b16d0c6a05033af1644024c599a55168.scope: Deactivated successfully.
Feb 25 08:31:48 np0005629333 podman[404790]: 2026-02-25 13:31:48.510280961 +0000 UTC m=+0.448109427 container died 7269152b087e66ae7bb80a8c00a7f595b16d0c6a05033af1644024c599a55168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:31:48 np0005629333 systemd[1]: var-lib-containers-storage-overlay-9482df189ec1d759f1cbcde8328518d38c8ea19eca252c295e02083bf25ce641-merged.mount: Deactivated successfully.
Feb 25 08:31:48 np0005629333 podman[404790]: 2026-02-25 13:31:48.548929492 +0000 UTC m=+0.486757928 container remove 7269152b087e66ae7bb80a8c00a7f595b16d0c6a05033af1644024c599a55168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_driscoll, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:31:48 np0005629333 systemd[1]: libpod-conmon-7269152b087e66ae7bb80a8c00a7f595b16d0c6a05033af1644024c599a55168.scope: Deactivated successfully.
Feb 25 08:31:49 np0005629333 podman[404887]: 2026-02-25 13:31:49.006020302 +0000 UTC m=+0.064961064 container create 587e0e27246aee1a898f506d88564f2d34f4fd777ced1befbaa9fb1ad914b0f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 08:31:49 np0005629333 systemd[1]: Started libpod-conmon-587e0e27246aee1a898f506d88564f2d34f4fd777ced1befbaa9fb1ad914b0f5.scope.
Feb 25 08:31:49 np0005629333 podman[404887]: 2026-02-25 13:31:48.977617721 +0000 UTC m=+0.036558543 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:31:49 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:31:49 np0005629333 podman[404887]: 2026-02-25 13:31:49.102250757 +0000 UTC m=+0.161191559 container init 587e0e27246aee1a898f506d88564f2d34f4fd777ced1befbaa9fb1ad914b0f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 08:31:49 np0005629333 podman[404887]: 2026-02-25 13:31:49.112364013 +0000 UTC m=+0.171304745 container start 587e0e27246aee1a898f506d88564f2d34f4fd777ced1befbaa9fb1ad914b0f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kilby, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:31:49 np0005629333 podman[404887]: 2026-02-25 13:31:49.115780079 +0000 UTC m=+0.174720901 container attach 587e0e27246aee1a898f506d88564f2d34f4fd777ced1befbaa9fb1ad914b0f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kilby, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 08:31:49 np0005629333 vigilant_kilby[404903]: 167 167
Feb 25 08:31:49 np0005629333 systemd[1]: libpod-587e0e27246aee1a898f506d88564f2d34f4fd777ced1befbaa9fb1ad914b0f5.scope: Deactivated successfully.
Feb 25 08:31:49 np0005629333 conmon[404903]: conmon 587e0e27246aee1a898f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-587e0e27246aee1a898f506d88564f2d34f4fd777ced1befbaa9fb1ad914b0f5.scope/container/memory.events
Feb 25 08:31:49 np0005629333 podman[404887]: 2026-02-25 13:31:49.119061362 +0000 UTC m=+0.178002124 container died 587e0e27246aee1a898f506d88564f2d34f4fd777ced1befbaa9fb1ad914b0f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kilby, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:31:49 np0005629333 systemd[1]: var-lib-containers-storage-overlay-223d78bc8b2f6235e874f56f9f61bca2c49537fdfc1a647490ecd97fa0a317b3-merged.mount: Deactivated successfully.
Feb 25 08:31:49 np0005629333 podman[404887]: 2026-02-25 13:31:49.167871879 +0000 UTC m=+0.226812641 container remove 587e0e27246aee1a898f506d88564f2d34f4fd777ced1befbaa9fb1ad914b0f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:31:49 np0005629333 systemd[1]: libpod-conmon-587e0e27246aee1a898f506d88564f2d34f4fd777ced1befbaa9fb1ad914b0f5.scope: Deactivated successfully.
Feb 25 08:31:49 np0005629333 podman[404927]: 2026-02-25 13:31:49.374666885 +0000 UTC m=+0.057949206 container create 591b062f30d41cbd148c090b549568d822aff1009306fcf39716c0d78582963e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hawking, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 08:31:49 np0005629333 systemd[1]: Started libpod-conmon-591b062f30d41cbd148c090b549568d822aff1009306fcf39716c0d78582963e.scope.
Feb 25 08:31:49 np0005629333 podman[404927]: 2026-02-25 13:31:49.352991814 +0000 UTC m=+0.036274125 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:31:49 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:31:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5bb3543ed2be9a2b56ec4b23c4baadaa68c43aadd58471e102ea98d806b9d9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:31:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5bb3543ed2be9a2b56ec4b23c4baadaa68c43aadd58471e102ea98d806b9d9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:31:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5bb3543ed2be9a2b56ec4b23c4baadaa68c43aadd58471e102ea98d806b9d9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:31:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5bb3543ed2be9a2b56ec4b23c4baadaa68c43aadd58471e102ea98d806b9d9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:31:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:31:49 np0005629333 podman[404927]: 2026-02-25 13:31:49.491958606 +0000 UTC m=+0.175240927 container init 591b062f30d41cbd148c090b549568d822aff1009306fcf39716c0d78582963e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hawking, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 08:31:49 np0005629333 podman[404927]: 2026-02-25 13:31:49.500218259 +0000 UTC m=+0.183500580 container start 591b062f30d41cbd148c090b549568d822aff1009306fcf39716c0d78582963e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 08:31:49 np0005629333 podman[404927]: 2026-02-25 13:31:49.503512082 +0000 UTC m=+0.186794403 container attach 591b062f30d41cbd148c090b549568d822aff1009306fcf39716c0d78582963e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hawking, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 08:31:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:31:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3089635615' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:31:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:31:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3089635615' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:31:49 np0005629333 nova_compute[244014]: 2026-02-25 13:31:49.576 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:31:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3358: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:31:50 np0005629333 lvm[405024]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:31:50 np0005629333 lvm[405024]: VG ceph_vg1 finished
Feb 25 08:31:50 np0005629333 lvm[405025]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:31:50 np0005629333 lvm[405025]: VG ceph_vg2 finished
Feb 25 08:31:50 np0005629333 lvm[405023]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:31:50 np0005629333 lvm[405023]: VG ceph_vg0 finished
Feb 25 08:31:50 np0005629333 thirsty_hawking[404944]: {}
Feb 25 08:31:50 np0005629333 systemd[1]: libpod-591b062f30d41cbd148c090b549568d822aff1009306fcf39716c0d78582963e.scope: Deactivated successfully.
Feb 25 08:31:50 np0005629333 systemd[1]: libpod-591b062f30d41cbd148c090b549568d822aff1009306fcf39716c0d78582963e.scope: Consumed 1.128s CPU time.
Feb 25 08:31:50 np0005629333 podman[404927]: 2026-02-25 13:31:50.258628443 +0000 UTC m=+0.941910754 container died 591b062f30d41cbd148c090b549568d822aff1009306fcf39716c0d78582963e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hawking, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 08:31:50 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e5bb3543ed2be9a2b56ec4b23c4baadaa68c43aadd58471e102ea98d806b9d9b-merged.mount: Deactivated successfully.
Feb 25 08:31:50 np0005629333 podman[404927]: 2026-02-25 13:31:50.316053103 +0000 UTC m=+0.999335414 container remove 591b062f30d41cbd148c090b549568d822aff1009306fcf39716c0d78582963e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 08:31:50 np0005629333 systemd[1]: libpod-conmon-591b062f30d41cbd148c090b549568d822aff1009306fcf39716c0d78582963e.scope: Deactivated successfully.
Feb 25 08:31:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:31:50 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:31:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:31:50 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:31:51 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:31:51 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:31:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3359: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:31:52 np0005629333 nova_compute[244014]: 2026-02-25 13:31:52.213 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:31:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3360: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:31:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:31:54 np0005629333 nova_compute[244014]: 2026-02-25 13:31:54.582 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:31:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:31:55.074 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:31:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:31:55.075 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:31:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:31:55.075 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:31:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3361: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:31:57 np0005629333 nova_compute[244014]: 2026-02-25 13:31:57.215 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:31:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3362: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:31:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:31:59 np0005629333 nova_compute[244014]: 2026-02-25 13:31:59.583 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:31:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3363: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:32:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:32:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:32:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:32:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:32:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:32:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3364: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:01 np0005629333 nova_compute[244014]: 2026-02-25 13:32:01.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:32:02 np0005629333 nova_compute[244014]: 2026-02-25 13:32:02.216 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:32:02 np0005629333 nova_compute[244014]: 2026-02-25 13:32:02.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:32:02 np0005629333 nova_compute[244014]: 2026-02-25 13:32:02.903 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:32:02 np0005629333 nova_compute[244014]: 2026-02-25 13:32:02.903 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:32:02 np0005629333 nova_compute[244014]: 2026-02-25 13:32:02.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:32:02 np0005629333 nova_compute[244014]: 2026-02-25 13:32:02.904 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:32:02 np0005629333 nova_compute[244014]: 2026-02-25 13:32:02.904 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:32:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:32:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/170130205' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:32:03 np0005629333 nova_compute[244014]: 2026-02-25 13:32:03.462 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:32:03 np0005629333 nova_compute[244014]: 2026-02-25 13:32:03.649 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:32:03 np0005629333 nova_compute[244014]: 2026-02-25 13:32:03.651 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3555MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:32:03 np0005629333 nova_compute[244014]: 2026-02-25 13:32:03.651 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:32:03 np0005629333 nova_compute[244014]: 2026-02-25 13:32:03.652 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:32:03 np0005629333 nova_compute[244014]: 2026-02-25 13:32:03.804 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:32:03 np0005629333 nova_compute[244014]: 2026-02-25 13:32:03.805 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:32:03 np0005629333 nova_compute[244014]: 2026-02-25 13:32:03.821 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:32:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3365: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:32:04 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/236991576' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:32:04 np0005629333 nova_compute[244014]: 2026-02-25 13:32:04.354 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:32:04 np0005629333 nova_compute[244014]: 2026-02-25 13:32:04.360 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:32:04 np0005629333 nova_compute[244014]: 2026-02-25 13:32:04.374 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:32:04 np0005629333 nova_compute[244014]: 2026-02-25 13:32:04.376 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:32:04 np0005629333 nova_compute[244014]: 2026-02-25 13:32:04.376 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:32:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:32:04 np0005629333 nova_compute[244014]: 2026-02-25 13:32:04.586 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:32:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3366: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:07 np0005629333 nova_compute[244014]: 2026-02-25 13:32:07.218 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:32:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3367: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:08 np0005629333 nova_compute[244014]: 2026-02-25 13:32:08.378 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:32:08 np0005629333 nova_compute[244014]: 2026-02-25 13:32:08.378 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:32:08 np0005629333 nova_compute[244014]: 2026-02-25 13:32:08.378 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:32:08 np0005629333 nova_compute[244014]: 2026-02-25 13:32:08.400 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:32:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:32:09 np0005629333 nova_compute[244014]: 2026-02-25 13:32:09.589 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:32:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3368: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:10 np0005629333 nova_compute[244014]: 2026-02-25 13:32:10.874 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:32:11 np0005629333 podman[405111]: 2026-02-25 13:32:11.723408075 +0000 UTC m=+0.065130319 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:32:11 np0005629333 podman[405112]: 2026-02-25 13:32:11.791790455 +0000 UTC m=+0.129298480 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 25 08:32:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3369: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:11 np0005629333 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 08:32:12 np0005629333 nova_compute[244014]: 2026-02-25 13:32:12.263 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:32:12 np0005629333 nova_compute[244014]: 2026-02-25 13:32:12.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:32:12 np0005629333 nova_compute[244014]: 2026-02-25 13:32:12.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 08:32:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3370: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:32:14 np0005629333 nova_compute[244014]: 2026-02-25 13:32:14.593 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:32:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3371: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:17 np0005629333 nova_compute[244014]: 2026-02-25 13:32:17.266 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:32:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3372: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:17 np0005629333 nova_compute[244014]: 2026-02-25 13:32:17.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:32:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:32:19 np0005629333 nova_compute[244014]: 2026-02-25 13:32:19.597 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:32:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3373: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:19 np0005629333 nova_compute[244014]: 2026-02-25 13:32:19.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:32:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3374: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:22 np0005629333 nova_compute[244014]: 2026-02-25 13:32:22.303 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:32:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3375: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:23 np0005629333 nova_compute[244014]: 2026-02-25 13:32:23.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:32:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:32:24 np0005629333 nova_compute[244014]: 2026-02-25 13:32:24.601 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:32:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3376: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:26 np0005629333 nova_compute[244014]: 2026-02-25 13:32:26.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:32:27 np0005629333 nova_compute[244014]: 2026-02-25 13:32:27.335 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:32:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3377: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:32:29 np0005629333 nova_compute[244014]: 2026-02-25 13:32:29.604 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:32:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3378: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:32:31
Feb 25 08:32:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:32:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:32:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['images', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'vms', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', 'default.rgw.control']
Feb 25 08:32:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:32:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:32:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:32:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:32:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:32:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:32:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:32:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3379: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:32 np0005629333 nova_compute[244014]: 2026-02-25 13:32:32.373 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:32:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:32:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:32:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:32:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:32:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:32:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:32:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:32:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:32:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:32:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:32:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3380: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:32:34 np0005629333 nova_compute[244014]: 2026-02-25 13:32:34.640 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:32:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3381: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:35 np0005629333 nova_compute[244014]: 2026-02-25 13:32:35.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:32:37 np0005629333 nova_compute[244014]: 2026-02-25 13:32:37.407 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:32:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3382: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:32:39 np0005629333 nova_compute[244014]: 2026-02-25 13:32:39.643 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:32:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3383: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3384: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:42 np0005629333 nova_compute[244014]: 2026-02-25 13:32:42.458 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:32:42 np0005629333 podman[405158]: 2026-02-25 13:32:42.731162186 +0000 UTC m=+0.068117824 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent)
Feb 25 08:32:42 np0005629333 podman[405159]: 2026-02-25 13:32:42.749508014 +0000 UTC m=+0.083929470 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 08:32:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:32:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:32:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:32:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:32:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:32:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:32:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:32:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:32:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:32:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:32:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:32:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:32:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:32:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:32:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:32:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:32:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:32:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:32:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:32:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:32:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:32:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:32:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 08:32:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3385: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:32:44 np0005629333 nova_compute[244014]: 2026-02-25 13:32:44.645 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:32:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3386: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:47 np0005629333 nova_compute[244014]: 2026-02-25 13:32:47.458 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:32:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:32:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1269309088' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:32:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:32:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1269309088' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:32:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3387: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:32:49 np0005629333 nova_compute[244014]: 2026-02-25 13:32:49.649 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:32:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3388: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:32:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:32:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:32:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:32:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:32:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:32:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:32:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:32:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:32:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:32:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:32:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:32:51 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:32:51 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:32:51 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:32:51 np0005629333 podman[405348]: 2026-02-25 13:32:51.461313871 +0000 UTC m=+0.046100983 container create 9abe4c86096b7858aa251483399a18c81ad0c1126e0a775751bed17e8560613e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 08:32:51 np0005629333 systemd[1]: Started libpod-conmon-9abe4c86096b7858aa251483399a18c81ad0c1126e0a775751bed17e8560613e.scope.
Feb 25 08:32:51 np0005629333 podman[405348]: 2026-02-25 13:32:51.437987042 +0000 UTC m=+0.022774144 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:32:51 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:32:51 np0005629333 podman[405348]: 2026-02-25 13:32:51.557943228 +0000 UTC m=+0.142730400 container init 9abe4c86096b7858aa251483399a18c81ad0c1126e0a775751bed17e8560613e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 08:32:51 np0005629333 podman[405348]: 2026-02-25 13:32:51.570501352 +0000 UTC m=+0.155288464 container start 9abe4c86096b7858aa251483399a18c81ad0c1126e0a775751bed17e8560613e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_leakey, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 08:32:51 np0005629333 podman[405348]: 2026-02-25 13:32:51.577974623 +0000 UTC m=+0.162761725 container attach 9abe4c86096b7858aa251483399a18c81ad0c1126e0a775751bed17e8560613e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_leakey, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:32:51 np0005629333 systemd[1]: libpod-9abe4c86096b7858aa251483399a18c81ad0c1126e0a775751bed17e8560613e.scope: Deactivated successfully.
Feb 25 08:32:51 np0005629333 sharp_leakey[405364]: 167 167
Feb 25 08:32:51 np0005629333 conmon[405364]: conmon 9abe4c86096b7858aa25 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9abe4c86096b7858aa251483399a18c81ad0c1126e0a775751bed17e8560613e.scope/container/memory.events
Feb 25 08:32:51 np0005629333 podman[405348]: 2026-02-25 13:32:51.580953427 +0000 UTC m=+0.165740539 container died 9abe4c86096b7858aa251483399a18c81ad0c1126e0a775751bed17e8560613e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_leakey, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:32:51 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7fda520e2c9d4bd3796ad15a8df085ae483a569d25881a0caf2e41620549e747-merged.mount: Deactivated successfully.
Feb 25 08:32:51 np0005629333 podman[405348]: 2026-02-25 13:32:51.650011686 +0000 UTC m=+0.234798798 container remove 9abe4c86096b7858aa251483399a18c81ad0c1126e0a775751bed17e8560613e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 08:32:51 np0005629333 systemd[1]: libpod-conmon-9abe4c86096b7858aa251483399a18c81ad0c1126e0a775751bed17e8560613e.scope: Deactivated successfully.
Feb 25 08:32:51 np0005629333 podman[405389]: 2026-02-25 13:32:51.785463309 +0000 UTC m=+0.048088928 container create c863115e59260586244d3a0c999de78ed56098093935896b0cfb0c39889812f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 08:32:51 np0005629333 systemd[1]: Started libpod-conmon-c863115e59260586244d3a0c999de78ed56098093935896b0cfb0c39889812f4.scope.
Feb 25 08:32:51 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:32:51 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28e296994b170439bab68818c6d989fd472da5cb69d9d6e690db7127241ebc1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:32:51 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28e296994b170439bab68818c6d989fd472da5cb69d9d6e690db7127241ebc1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:32:51 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28e296994b170439bab68818c6d989fd472da5cb69d9d6e690db7127241ebc1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:32:51 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28e296994b170439bab68818c6d989fd472da5cb69d9d6e690db7127241ebc1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:32:51 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28e296994b170439bab68818c6d989fd472da5cb69d9d6e690db7127241ebc1e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:32:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3389: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:51 np0005629333 podman[405389]: 2026-02-25 13:32:51.760727211 +0000 UTC m=+0.023352840 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:32:51 np0005629333 podman[405389]: 2026-02-25 13:32:51.874130021 +0000 UTC m=+0.136755610 container init c863115e59260586244d3a0c999de78ed56098093935896b0cfb0c39889812f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:32:51 np0005629333 podman[405389]: 2026-02-25 13:32:51.8854206 +0000 UTC m=+0.148046189 container start c863115e59260586244d3a0c999de78ed56098093935896b0cfb0c39889812f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:32:51 np0005629333 podman[405389]: 2026-02-25 13:32:51.888869007 +0000 UTC m=+0.151494686 container attach c863115e59260586244d3a0c999de78ed56098093935896b0cfb0c39889812f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:32:52 np0005629333 charming_mirzakhani[405405]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:32:52 np0005629333 charming_mirzakhani[405405]: --> All data devices are unavailable
Feb 25 08:32:52 np0005629333 systemd[1]: libpod-c863115e59260586244d3a0c999de78ed56098093935896b0cfb0c39889812f4.scope: Deactivated successfully.
Feb 25 08:32:52 np0005629333 podman[405389]: 2026-02-25 13:32:52.350848676 +0000 UTC m=+0.613474255 container died c863115e59260586244d3a0c999de78ed56098093935896b0cfb0c39889812f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 08:32:52 np0005629333 systemd[1]: var-lib-containers-storage-overlay-28e296994b170439bab68818c6d989fd472da5cb69d9d6e690db7127241ebc1e-merged.mount: Deactivated successfully.
Feb 25 08:32:52 np0005629333 podman[405389]: 2026-02-25 13:32:52.396964927 +0000 UTC m=+0.659590516 container remove c863115e59260586244d3a0c999de78ed56098093935896b0cfb0c39889812f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:32:52 np0005629333 systemd[1]: libpod-conmon-c863115e59260586244d3a0c999de78ed56098093935896b0cfb0c39889812f4.scope: Deactivated successfully.
Feb 25 08:32:52 np0005629333 nova_compute[244014]: 2026-02-25 13:32:52.459 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:32:52 np0005629333 podman[405498]: 2026-02-25 13:32:52.849985151 +0000 UTC m=+0.041922094 container create ec4f2412c3e6500e96551fae5778156357746734e27aea673e7a2b257e774249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_leavitt, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:32:52 np0005629333 systemd[1]: Started libpod-conmon-ec4f2412c3e6500e96551fae5778156357746734e27aea673e7a2b257e774249.scope.
Feb 25 08:32:52 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:32:52 np0005629333 podman[405498]: 2026-02-25 13:32:52.831680125 +0000 UTC m=+0.023617098 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:32:52 np0005629333 podman[405498]: 2026-02-25 13:32:52.93143183 +0000 UTC m=+0.123368843 container init ec4f2412c3e6500e96551fae5778156357746734e27aea673e7a2b257e774249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 08:32:52 np0005629333 podman[405498]: 2026-02-25 13:32:52.938134359 +0000 UTC m=+0.130071312 container start ec4f2412c3e6500e96551fae5778156357746734e27aea673e7a2b257e774249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_leavitt, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:32:52 np0005629333 infallible_leavitt[405515]: 167 167
Feb 25 08:32:52 np0005629333 systemd[1]: libpod-ec4f2412c3e6500e96551fae5778156357746734e27aea673e7a2b257e774249.scope: Deactivated successfully.
Feb 25 08:32:52 np0005629333 podman[405498]: 2026-02-25 13:32:52.94206443 +0000 UTC m=+0.134001423 container attach ec4f2412c3e6500e96551fae5778156357746734e27aea673e7a2b257e774249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_leavitt, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Feb 25 08:32:52 np0005629333 podman[405498]: 2026-02-25 13:32:52.942391759 +0000 UTC m=+0.134328702 container died ec4f2412c3e6500e96551fae5778156357746734e27aea673e7a2b257e774249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_leavitt, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:32:52 np0005629333 systemd[1]: var-lib-containers-storage-overlay-80fb7a27b11c05fdd97126bf2f71fbe5d9b63022cdc10d947577ea33fb2e53ab-merged.mount: Deactivated successfully.
Feb 25 08:32:52 np0005629333 podman[405498]: 2026-02-25 13:32:52.983915141 +0000 UTC m=+0.175852074 container remove ec4f2412c3e6500e96551fae5778156357746734e27aea673e7a2b257e774249 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 08:32:52 np0005629333 systemd[1]: libpod-conmon-ec4f2412c3e6500e96551fae5778156357746734e27aea673e7a2b257e774249.scope: Deactivated successfully.
Feb 25 08:32:53 np0005629333 podman[405540]: 2026-02-25 13:32:53.132363721 +0000 UTC m=+0.044768854 container create 036704d213f825f4ee74cd2668a0dbeebb2dbbf1f8a868979e21d2d1c00fbd72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:32:53 np0005629333 systemd[1]: Started libpod-conmon-036704d213f825f4ee74cd2668a0dbeebb2dbbf1f8a868979e21d2d1c00fbd72.scope.
Feb 25 08:32:53 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:32:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/391dd125f5577384a1f2dd5f795fe3f9aa0877d3c98f81094a335ba9b4a6ef1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:32:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/391dd125f5577384a1f2dd5f795fe3f9aa0877d3c98f81094a335ba9b4a6ef1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:32:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/391dd125f5577384a1f2dd5f795fe3f9aa0877d3c98f81094a335ba9b4a6ef1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:32:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/391dd125f5577384a1f2dd5f795fe3f9aa0877d3c98f81094a335ba9b4a6ef1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:32:53 np0005629333 podman[405540]: 2026-02-25 13:32:53.113599201 +0000 UTC m=+0.026004384 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:32:53 np0005629333 podman[405540]: 2026-02-25 13:32:53.22482551 +0000 UTC m=+0.137230703 container init 036704d213f825f4ee74cd2668a0dbeebb2dbbf1f8a868979e21d2d1c00fbd72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_khayyam, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:32:53 np0005629333 podman[405540]: 2026-02-25 13:32:53.231010295 +0000 UTC m=+0.143415438 container start 036704d213f825f4ee74cd2668a0dbeebb2dbbf1f8a868979e21d2d1c00fbd72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:32:53 np0005629333 podman[405540]: 2026-02-25 13:32:53.23472831 +0000 UTC m=+0.147133473 container attach 036704d213f825f4ee74cd2668a0dbeebb2dbbf1f8a868979e21d2d1c00fbd72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]: {
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:    "0": [
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:        {
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "devices": [
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "/dev/loop3"
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            ],
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "lv_name": "ceph_lv0",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "lv_size": "21470642176",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "name": "ceph_lv0",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "tags": {
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.cluster_name": "ceph",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.crush_device_class": "",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.encrypted": "0",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.objectstore": "bluestore",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.osd_id": "0",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.type": "block",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.vdo": "0",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.with_tpm": "0"
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            },
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "type": "block",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "vg_name": "ceph_vg0"
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:        }
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:    ],
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:    "1": [
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:        {
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "devices": [
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "/dev/loop4"
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            ],
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "lv_name": "ceph_lv1",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "lv_size": "21470642176",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "name": "ceph_lv1",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "tags": {
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.cluster_name": "ceph",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.crush_device_class": "",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.encrypted": "0",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.objectstore": "bluestore",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.osd_id": "1",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.type": "block",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.vdo": "0",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.with_tpm": "0"
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            },
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "type": "block",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "vg_name": "ceph_vg1"
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:        }
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:    ],
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:    "2": [
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:        {
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "devices": [
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "/dev/loop5"
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            ],
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "lv_name": "ceph_lv2",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "lv_size": "21470642176",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "name": "ceph_lv2",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "tags": {
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.cluster_name": "ceph",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.crush_device_class": "",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.encrypted": "0",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.objectstore": "bluestore",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.osd_id": "2",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.type": "block",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.vdo": "0",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:                "ceph.with_tpm": "0"
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            },
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "type": "block",
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:            "vg_name": "ceph_vg2"
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:        }
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]:    ]
Feb 25 08:32:53 np0005629333 lucid_khayyam[405556]: }
Feb 25 08:32:53 np0005629333 systemd[1]: libpod-036704d213f825f4ee74cd2668a0dbeebb2dbbf1f8a868979e21d2d1c00fbd72.scope: Deactivated successfully.
Feb 25 08:32:53 np0005629333 podman[405540]: 2026-02-25 13:32:53.542095405 +0000 UTC m=+0.454500578 container died 036704d213f825f4ee74cd2668a0dbeebb2dbbf1f8a868979e21d2d1c00fbd72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_khayyam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:32:53 np0005629333 systemd[1]: var-lib-containers-storage-overlay-391dd125f5577384a1f2dd5f795fe3f9aa0877d3c98f81094a335ba9b4a6ef1a-merged.mount: Deactivated successfully.
Feb 25 08:32:53 np0005629333 podman[405540]: 2026-02-25 13:32:53.591791307 +0000 UTC m=+0.504196450 container remove 036704d213f825f4ee74cd2668a0dbeebb2dbbf1f8a868979e21d2d1c00fbd72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_khayyam, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:32:53 np0005629333 systemd[1]: libpod-conmon-036704d213f825f4ee74cd2668a0dbeebb2dbbf1f8a868979e21d2d1c00fbd72.scope: Deactivated successfully.
Feb 25 08:32:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3390: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:54 np0005629333 podman[405643]: 2026-02-25 13:32:54.05783969 +0000 UTC m=+0.045023091 container create 941638e4646d802696f59e1e35ad47f2d6dcafa63a54f7590401165f7decc9e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_greider, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:32:54 np0005629333 systemd[1]: Started libpod-conmon-941638e4646d802696f59e1e35ad47f2d6dcafa63a54f7590401165f7decc9e3.scope.
Feb 25 08:32:54 np0005629333 podman[405643]: 2026-02-25 13:32:54.037056334 +0000 UTC m=+0.024239765 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:32:54 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:32:54 np0005629333 podman[405643]: 2026-02-25 13:32:54.149220539 +0000 UTC m=+0.136403990 container init 941638e4646d802696f59e1e35ad47f2d6dcafa63a54f7590401165f7decc9e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_greider, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:32:54 np0005629333 podman[405643]: 2026-02-25 13:32:54.157850173 +0000 UTC m=+0.145033584 container start 941638e4646d802696f59e1e35ad47f2d6dcafa63a54f7590401165f7decc9e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_greider, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:32:54 np0005629333 nice_greider[405659]: 167 167
Feb 25 08:32:54 np0005629333 systemd[1]: libpod-941638e4646d802696f59e1e35ad47f2d6dcafa63a54f7590401165f7decc9e3.scope: Deactivated successfully.
Feb 25 08:32:54 np0005629333 podman[405643]: 2026-02-25 13:32:54.162220416 +0000 UTC m=+0.149403827 container attach 941638e4646d802696f59e1e35ad47f2d6dcafa63a54f7590401165f7decc9e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_greider, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:32:54 np0005629333 podman[405643]: 2026-02-25 13:32:54.163137482 +0000 UTC m=+0.150320873 container died 941638e4646d802696f59e1e35ad47f2d6dcafa63a54f7590401165f7decc9e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_greider, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 08:32:54 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c7917bb8e6d2bd63c5aeee4a380212ff24f990bf8c50b41ca725b6f0478c66af-merged.mount: Deactivated successfully.
Feb 25 08:32:54 np0005629333 podman[405643]: 2026-02-25 13:32:54.203873302 +0000 UTC m=+0.191056693 container remove 941638e4646d802696f59e1e35ad47f2d6dcafa63a54f7590401165f7decc9e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:32:54 np0005629333 systemd[1]: libpod-conmon-941638e4646d802696f59e1e35ad47f2d6dcafa63a54f7590401165f7decc9e3.scope: Deactivated successfully.
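[annotation] The full create → init → start → attach → died → remove cycle for the randomly named `nice_greider` container above spans roughly 150 ms and emits only "167 167" (the Ceph uid/gid pair) — the signature of a cephadm one-shot probe container, not a crash. A minimal sketch of the same short-lived pattern, assuming podman is on PATH; the `stat` probe command is hypothetical (cephadm's actual probe may differ):

```python
# Minimal sketch of a one-shot "probe" container like the nice_greider run
# above: podman creates, starts, attaches, and removes it in one call, and
# only the command's stdout reaches the journal. Assumes podman is installed
# and the pinned quay.io/ceph/ceph image is pullable.
import subprocess

IMAGE = "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86"

def run_probe(cmd: list) -> str:
    """Run a short-lived container with --rm, mirroring the
    create/start/attach/died/remove podman events in the log."""
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", cmd[0], IMAGE, *cmd[1:]],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

if __name__ == "__main__":
    # Hypothetical probe: query the uid/gid of the in-image ceph state dir;
    # "167 167" is the conventional ceph uid/gid seen in the log output.
    print(run_probe(["stat", "-c", "%u %g", "/var/lib/ceph"]))
```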
Feb 25 08:32:54 np0005629333 podman[405685]: 2026-02-25 13:32:54.368377115 +0000 UTC m=+0.048355336 container create 3cc2ebaeaeb19f3ba9587d36106b6eef14d032edaed691a1c4fc4c43f4f3bd5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_robinson, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:32:54 np0005629333 systemd[1]: Started libpod-conmon-3cc2ebaeaeb19f3ba9587d36106b6eef14d032edaed691a1c4fc4c43f4f3bd5a.scope.
Feb 25 08:32:54 np0005629333 podman[405685]: 2026-02-25 13:32:54.343285136 +0000 UTC m=+0.023263407 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:32:54 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:32:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ee68e03f7a4bfb0f69eaec00f405fb3ca25a325b4cca35959375e767cc4886e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:32:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ee68e03f7a4bfb0f69eaec00f405fb3ca25a325b4cca35959375e767cc4886e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:32:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ee68e03f7a4bfb0f69eaec00f405fb3ca25a325b4cca35959375e767cc4886e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:32:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ee68e03f7a4bfb0f69eaec00f405fb3ca25a325b4cca35959375e767cc4886e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:32:54 np0005629333 podman[405685]: 2026-02-25 13:32:54.485553622 +0000 UTC m=+0.165531893 container init 3cc2ebaeaeb19f3ba9587d36106b6eef14d032edaed691a1c4fc4c43f4f3bd5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_robinson, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:32:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:32:54 np0005629333 podman[405685]: 2026-02-25 13:32:54.493846946 +0000 UTC m=+0.173825167 container start 3cc2ebaeaeb19f3ba9587d36106b6eef14d032edaed691a1c4fc4c43f4f3bd5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_robinson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 08:32:54 np0005629333 podman[405685]: 2026-02-25 13:32:54.497369455 +0000 UTC m=+0.177347726 container attach 3cc2ebaeaeb19f3ba9587d36106b6eef14d032edaed691a1c4fc4c43f4f3bd5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_robinson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 08:32:54 np0005629333 nova_compute[244014]: 2026-02-25 13:32:54.652 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:32:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:32:55.075 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:32:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:32:55.076 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:32:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:32:55.076 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:32:55 np0005629333 lvm[405780]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:32:55 np0005629333 lvm[405780]: VG ceph_vg0 finished
Feb 25 08:32:55 np0005629333 lvm[405781]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:32:55 np0005629333 lvm[405781]: VG ceph_vg1 finished
Feb 25 08:32:55 np0005629333 lvm[405783]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:32:55 np0005629333 lvm[405783]: VG ceph_vg2 finished
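[annotation] The three lvm pvscan events above report each loop-backed volume group (`ceph_vg0`..`ceph_vg2`) as complete once its single PV comes online. A sketch for confirming the same state from LVM's JSON reporting, assuming the lvm2 tools are installed on the host:

```python
# Sketch: list the VGs the pvscan events above report as complete, using
# LVM's JSON report output. `vgs --reportformat json` is the real CLI; the
# field selection here is just one reasonable choice.
import json
import subprocess

def vgs() -> list:
    out = subprocess.run(
        ["vgs", "--reportformat", "json", "-o", "vg_name,pv_count,vg_size"],
        capture_output=True, text=True, check=True,
    )
    # JSON shape: {"report": [{"vg": [{"vg_name": ..., ...}, ...]}]}
    return json.loads(out.stdout)["report"][0]["vg"]

for vg in vgs():
    print(vg["vg_name"], vg["pv_count"], vg["vg_size"])
```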
Feb 25 08:32:55 np0005629333 vibrant_robinson[405702]: {}
Feb 25 08:32:55 np0005629333 systemd[1]: libpod-3cc2ebaeaeb19f3ba9587d36106b6eef14d032edaed691a1c4fc4c43f4f3bd5a.scope: Deactivated successfully.
Feb 25 08:32:55 np0005629333 systemd[1]: libpod-3cc2ebaeaeb19f3ba9587d36106b6eef14d032edaed691a1c4fc4c43f4f3bd5a.scope: Consumed 1.117s CPU time.
Feb 25 08:32:55 np0005629333 podman[405685]: 2026-02-25 13:32:55.302506668 +0000 UTC m=+0.982484849 container died 3cc2ebaeaeb19f3ba9587d36106b6eef14d032edaed691a1c4fc4c43f4f3bd5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:32:55 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1ee68e03f7a4bfb0f69eaec00f405fb3ca25a325b4cca35959375e767cc4886e-merged.mount: Deactivated successfully.
Feb 25 08:32:55 np0005629333 podman[405685]: 2026-02-25 13:32:55.353631051 +0000 UTC m=+1.033609242 container remove 3cc2ebaeaeb19f3ba9587d36106b6eef14d032edaed691a1c4fc4c43f4f3bd5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_robinson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 08:32:55 np0005629333 systemd[1]: libpod-conmon-3cc2ebaeaeb19f3ba9587d36106b6eef14d032edaed691a1c4fc4c43f4f3bd5a.scope: Deactivated successfully.
Feb 25 08:32:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:32:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:32:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:32:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:32:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3391: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:56 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:32:56 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:32:57 np0005629333 nova_compute[244014]: 2026-02-25 13:32:57.461 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:32:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3392: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:32:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:32:59 np0005629333 nova_compute[244014]: 2026-02-25 13:32:59.654 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:32:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3393: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:33:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:33:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:33:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:33:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:33:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:33:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3394: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:02 np0005629333 nova_compute[244014]: 2026-02-25 13:33:02.463 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:33:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3395: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:03 np0005629333 nova_compute[244014]: 2026-02-25 13:33:03.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:33:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:33:04 np0005629333 nova_compute[244014]: 2026-02-25 13:33:04.657 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:33:04 np0005629333 nova_compute[244014]: 2026-02-25 13:33:04.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:33:04 np0005629333 nova_compute[244014]: 2026-02-25 13:33:04.915 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:33:04 np0005629333 nova_compute[244014]: 2026-02-25 13:33:04.916 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:33:04 np0005629333 nova_compute[244014]: 2026-02-25 13:33:04.916 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:33:04 np0005629333 nova_compute[244014]: 2026-02-25 13:33:04.916 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:33:04 np0005629333 nova_compute[244014]: 2026-02-25 13:33:04.917 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:33:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:33:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3265346086' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:33:05 np0005629333 nova_compute[244014]: 2026-02-25 13:33:05.456 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:33:05 np0005629333 nova_compute[244014]: 2026-02-25 13:33:05.603 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:33:05 np0005629333 nova_compute[244014]: 2026-02-25 13:33:05.604 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3528MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:33:05 np0005629333 nova_compute[244014]: 2026-02-25 13:33:05.604 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:33:05 np0005629333 nova_compute[244014]: 2026-02-25 13:33:05.605 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:33:05 np0005629333 nova_compute[244014]: 2026-02-25 13:33:05.766 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:33:05 np0005629333 nova_compute[244014]: 2026-02-25 13:33:05.766 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:33:05 np0005629333 nova_compute[244014]: 2026-02-25 13:33:05.782 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:33:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3396: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:33:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1660571484' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:33:06 np0005629333 nova_compute[244014]: 2026-02-25 13:33:06.326 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:33:06 np0005629333 nova_compute[244014]: 2026-02-25 13:33:06.334 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:33:06 np0005629333 nova_compute[244014]: 2026-02-25 13:33:06.379 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:33:06 np0005629333 nova_compute[244014]: 2026-02-25 13:33:06.385 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:33:06 np0005629333 nova_compute[244014]: 2026-02-25 13:33:06.385 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
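[annotation] The update_available_resource cycle above shells out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` twice and derives the hypervisor's `free_disk` figure from the cluster-wide stats. A hedged sketch of extracting those bytes from the same JSON; the field names follow the `ceph df` JSON schema, but this is an illustration, not nova's actual RBD driver code:

```python
# Sketch: derive the free/total GiB figures the audit reports, by parsing
# `ceph df --format=json`. Top-level "stats" with "total_bytes" and
# "total_avail_bytes" is the standard ceph df JSON layout.
import json
import subprocess

def ceph_df(conf="/etc/ceph/ceph.conf", client_id="openstack") -> dict:
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", client_id, "--conf", conf],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

if __name__ == "__main__":
    stats = ceph_df()["stats"]
    gib = 1024 ** 3
    total = stats["total_bytes"] / gib
    avail = stats["total_avail_bytes"] / gib
    # Consistent with the "59 GiB / 60 GiB avail" pgmap lines in this log.
    print(f"{avail:.2f} GiB free of {total:.2f} GiB")
```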
Feb 25 08:33:07 np0005629333 nova_compute[244014]: 2026-02-25 13:33:07.464 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:33:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3397: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:33:09 np0005629333 nova_compute[244014]: 2026-02-25 13:33:09.701 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:33:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3398: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:10 np0005629333 nova_compute[244014]: 2026-02-25 13:33:10.387 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:33:10 np0005629333 nova_compute[244014]: 2026-02-25 13:33:10.387 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:33:10 np0005629333 nova_compute[244014]: 2026-02-25 13:33:10.388 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:33:10 np0005629333 nova_compute[244014]: 2026-02-25 13:33:10.462 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:33:10 np0005629333 nova_compute[244014]: 2026-02-25 13:33:10.947 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:33:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3399: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:12 np0005629333 nova_compute[244014]: 2026-02-25 13:33:12.467 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:33:13 np0005629333 podman[405866]: 2026-02-25 13:33:13.720949129 +0000 UTC m=+0.062736112 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 08:33:13 np0005629333 podman[405867]: 2026-02-25 13:33:13.752902061 +0000 UTC m=+0.094685604 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller)
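[annotation] The two `health_status=healthy` events above are podman healthcheck runs against the edpm-managed agents; the embedded config_data shows the check is the mounted `/openstack/healthcheck` script. The same check can be triggered by hand with `podman healthcheck run` (a real podman subcommand; the container names are taken from these log lines):

```python
# Sketch: invoke the same healthchecks podman logs above as
# "health_status=healthy" events. Exit status 0 means healthy.
import subprocess

for name in ("ovn_metadata_agent", "ovn_controller"):
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")
```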
Feb 25 08:33:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3400: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:13 np0005629333 nova_compute[244014]: 2026-02-25 13:33:13.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:33:13 np0005629333 nova_compute[244014]: 2026-02-25 13:33:13.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 08:33:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:33:14 np0005629333 nova_compute[244014]: 2026-02-25 13:33:14.703 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:33:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3401: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:17 np0005629333 nova_compute[244014]: 2026-02-25 13:33:17.505 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:33:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3402: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:33:19 np0005629333 nova_compute[244014]: 2026-02-25 13:33:19.707 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:33:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3403: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:19 np0005629333 nova_compute[244014]: 2026-02-25 13:33:19.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:33:20 np0005629333 nova_compute[244014]: 2026-02-25 13:33:20.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:33:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3404: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:22 np0005629333 nova_compute[244014]: 2026-02-25 13:33:22.544 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:33:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3405: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:33:24 np0005629333 nova_compute[244014]: 2026-02-25 13:33:24.710 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:33:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3406: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:25 np0005629333 nova_compute[244014]: 2026-02-25 13:33:25.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:33:27 np0005629333 nova_compute[244014]: 2026-02-25 13:33:27.547 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:33:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3407: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:28 np0005629333 nova_compute[244014]: 2026-02-25 13:33:28.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:33:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:33:29 np0005629333 nova_compute[244014]: 2026-02-25 13:33:29.713 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:33:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3408: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:33:31
Feb 25 08:33:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:33:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:33:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', '.mgr', 'vms', 'default.rgw.log', 'backups', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'default.rgw.control']
Feb 25 08:33:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:33:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:33:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:33:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:33:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:33:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:33:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:33:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3409: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:33:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:33:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:33:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:33:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:33:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:33:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:33:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:33:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:33:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:33:32 np0005629333 nova_compute[244014]: 2026-02-25 13:33:32.568 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:33:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3410: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:33:34 np0005629333 nova_compute[244014]: 2026-02-25 13:33:34.749 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:33:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3411: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:37 np0005629333 nova_compute[244014]: 2026-02-25 13:33:37.570 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:33:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3412: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:33:39 np0005629333 nova_compute[244014]: 2026-02-25 13:33:39.798 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:33:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3413: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3414: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:42 np0005629333 nova_compute[244014]: 2026-02-25 13:33:42.572 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:33:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:33:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:33:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:33:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:33:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:33:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:33:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:33:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:33:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:33:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:33:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:33:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:33:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:33:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:33:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:33:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:33:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:33:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:33:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:33:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:33:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:33:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:33:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
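[annotation] Each pg_autoscaler pair above logs a capacity ratio, a bias, and a raw pg target before quantizing. The logged targets are all consistent with ratio × bias × 300 (e.g. 7.1857e-06 × 300 ≈ 0.0021557 for '.mgr'; 1.3916e-06 × 4.0 × 300 ≈ 0.0016700 for 'cephfs.cephfs.meta'), with tiny targets leaving pg_num unchanged. A sketch reproducing that arithmetic from the logged values; the factor 300 (plausibly target-PGs-per-OSD × OSD count) and the quantization rule are inferred from these lines, not lifted from the autoscaler's source:

```python
# Reproduce the pg_autoscaler arithmetic visible in the log lines above.
# ratio * bias * 300 matches every logged "pg target"; the 300 and the
# keep-current behavior for tiny targets are inferred from the log itself.
POOLS = {
    # pool: (capacity_ratio, bias, current_pg_num) -- values from the log
    ".mgr":               (7.185749983720779e-06,  1.0, 1),
    "vms":                (1.73878357684759e-05,   1.0, 32),
    "images":             (0.0006714637386478266,  1.0, 32),
    "cephfs.cephfs.meta": (1.3916366864300228e-06, 4.0, 16),
}

def quantize(target: float, current: int) -> int:
    """Round up to the next power of two; targets below 1 leave the
    pool at its current pg_num, matching the '(current N)' log output."""
    if target < 1:
        return current
    p = 1
    while p < target:
        p *= 2
    return p

for pool, (ratio, bias, current) in POOLS.items():
    target = ratio * bias * 300
    print(f"{pool}: pg target {target:.6f} -> quantized {quantize(target, current)}")
```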
Feb 25 08:33:43 np0005629333 nova_compute[244014]: 2026-02-25 13:33:43.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:33:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3415: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:33:44 np0005629333 podman[405912]: 2026-02-25 13:33:44.717087247 +0000 UTC m=+0.062536306 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 25 08:33:44 np0005629333 podman[405913]: 2026-02-25 13:33:44.757419145 +0000 UTC m=+0.093263633 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, io.buildah.version=1.43.0)
Feb 25 08:33:44 np0005629333 nova_compute[244014]: 2026-02-25 13:33:44.799 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:33:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3416: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:33:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3674833158' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:33:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:33:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3674833158' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:33:47 np0005629333 nova_compute[244014]: 2026-02-25 13:33:47.573 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:33:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3417: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:33:49 np0005629333 nova_compute[244014]: 2026-02-25 13:33:49.802 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:33:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3418: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:50 np0005629333 nova_compute[244014]: 2026-02-25 13:33:50.894 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:33:50 np0005629333 nova_compute[244014]: 2026-02-25 13:33:50.895 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 25 08:33:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3419: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:52 np0005629333 nova_compute[244014]: 2026-02-25 13:33:52.576 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:33:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3420: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:33:54 np0005629333 nova_compute[244014]: 2026-02-25 13:33:54.805 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:33:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:33:55.076 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:33:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:33:55.077 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:33:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:33:55.077 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
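The three ovn_metadata_agent lines above are the standard oslo.concurrency acquire/acquired/released triple that brackets ProcessMonitor._check_child_processes. A minimal sketch of the decorator that produces it; the lock name matches the log, the body is a placeholder:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        # the real neutron code checks each monitored child process and respawns dead ones
        pass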
Feb 25 08:33:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3421: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:33:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:33:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:33:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:33:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:33:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:33:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:33:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:33:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:33:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:33:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:33:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:33:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:33:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:33:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:33:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
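"config generate-minimal-conf", dispatched twice above by the cephadm mgr module, returns a stripped-down ceph.conf for distribution to managed hosts. Illustrative output only (the actual reply is not captured in this log); the fsid comes from the LV tags printed further below, and the mon address is an assumption based on the mgr's 192.168.122.100 endpoint:

    [global]
            fsid = 8ac33163-6221-5d58-9a39-8b6933fe7762
            mon_host = [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]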
Feb 25 08:33:57 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:33:57 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:33:57 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:33:57 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:33:57 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:33:57 np0005629333 podman[406173]: 2026-02-25 13:33:57.174889095 +0000 UTC m=+0.058703198 container create 771e8367264055dbd41d8278de6d97469d446ee03824206cc1288c3d7a53da53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hamilton, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 08:33:57 np0005629333 systemd[1]: Started libpod-conmon-771e8367264055dbd41d8278de6d97469d446ee03824206cc1288c3d7a53da53.scope.
Feb 25 08:33:57 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:33:57 np0005629333 podman[406173]: 2026-02-25 13:33:57.150998861 +0000 UTC m=+0.034813014 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:33:57 np0005629333 podman[406173]: 2026-02-25 13:33:57.257635929 +0000 UTC m=+0.141450083 container init 771e8367264055dbd41d8278de6d97469d446ee03824206cc1288c3d7a53da53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hamilton, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:33:57 np0005629333 podman[406173]: 2026-02-25 13:33:57.265973585 +0000 UTC m=+0.149787668 container start 771e8367264055dbd41d8278de6d97469d446ee03824206cc1288c3d7a53da53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hamilton, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 08:33:57 np0005629333 podman[406173]: 2026-02-25 13:33:57.2693436 +0000 UTC m=+0.153157713 container attach 771e8367264055dbd41d8278de6d97469d446ee03824206cc1288c3d7a53da53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 08:33:57 np0005629333 loving_hamilton[406190]: 167 167
Feb 25 08:33:57 np0005629333 systemd[1]: libpod-771e8367264055dbd41d8278de6d97469d446ee03824206cc1288c3d7a53da53.scope: Deactivated successfully.
Feb 25 08:33:57 np0005629333 podman[406173]: 2026-02-25 13:33:57.271551672 +0000 UTC m=+0.155365785 container died 771e8367264055dbd41d8278de6d97469d446ee03824206cc1288c3d7a53da53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:33:57 np0005629333 systemd[1]: var-lib-containers-storage-overlay-71e72b451bdefb754ae5d10586e431c404b5616c80aaafa642349f100b07b15a-merged.mount: Deactivated successfully.
Feb 25 08:33:57 np0005629333 podman[406173]: 2026-02-25 13:33:57.316155791 +0000 UTC m=+0.199969894 container remove 771e8367264055dbd41d8278de6d97469d446ee03824206cc1288c3d7a53da53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_hamilton, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:33:57 np0005629333 systemd[1]: libpod-conmon-771e8367264055dbd41d8278de6d97469d446ee03824206cc1288c3d7a53da53.scope: Deactivated successfully.
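The loving_hamilton sequence above is one of cephadm's short-lived probe containers: create, init, start, attach, a single line of output ("167 167", the ceph uid/gid), then died and remove, all within roughly 150 ms. A sketch reproducing the pattern with podman run --rm; the image digest is taken from the log, while the stat probe of /var/lib/ceph is an assumed stand-in for whatever cephadm actually executes:

    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86')

    # --rm yields the same create/start/attach/died/remove sequence journald records.
    out = subprocess.run(
        ['podman', 'run', '--rm', '--entrypoint', 'stat', IMAGE,
         '-c', '%u %g', '/var/lib/ceph'],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # prints "167 167" if the image keeps ceph's uid/gid at 167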
Feb 25 08:33:57 np0005629333 podman[406213]: 2026-02-25 13:33:57.492738115 +0000 UTC m=+0.054273063 container create 26ec644fab9e8722a240b0b24010fa031e8a95d6c6a5efdc3946fd39474d1da5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wilbur, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 08:33:57 np0005629333 systemd[1]: Started libpod-conmon-26ec644fab9e8722a240b0b24010fa031e8a95d6c6a5efdc3946fd39474d1da5.scope.
Feb 25 08:33:57 np0005629333 podman[406213]: 2026-02-25 13:33:57.469460688 +0000 UTC m=+0.030995676 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:33:57 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:33:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/483c9564aa6af82273cd82b930a8333d8d26dead088c8f41d24f73975660423e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:33:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/483c9564aa6af82273cd82b930a8333d8d26dead088c8f41d24f73975660423e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:33:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/483c9564aa6af82273cd82b930a8333d8d26dead088c8f41d24f73975660423e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:33:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/483c9564aa6af82273cd82b930a8333d8d26dead088c8f41d24f73975660423e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:33:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/483c9564aa6af82273cd82b930a8333d8d26dead088c8f41d24f73975660423e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
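The kernel prints one such line per bind-mounted path whenever an XFS filesystem lacking the bigtime feature is remounted: 0x7fffffff is the 2038-01-19 cap on classic 32-bit inode timestamps, and the message is informational. A sketch for checking whether a filesystem was formatted with bigtime; xfs_info and the mount point are assumptions about the host, not taken from this log:

    import subprocess

    # bigtime=1 in the meta-data line means y2038-safe timestamps (no warning).
    info = subprocess.run(['xfs_info', '/var/lib/containers'],
                          capture_output=True, text=True, check=True).stdout
    print('bigtime=1' in info)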
Feb 25 08:33:57 np0005629333 nova_compute[244014]: 2026-02-25 13:33:57.577 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:33:57 np0005629333 podman[406213]: 2026-02-25 13:33:57.587662434 +0000 UTC m=+0.149197412 container init 26ec644fab9e8722a240b0b24010fa031e8a95d6c6a5efdc3946fd39474d1da5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wilbur, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 08:33:57 np0005629333 podman[406213]: 2026-02-25 13:33:57.59959674 +0000 UTC m=+0.161131678 container start 26ec644fab9e8722a240b0b24010fa031e8a95d6c6a5efdc3946fd39474d1da5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wilbur, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 08:33:57 np0005629333 podman[406213]: 2026-02-25 13:33:57.602955325 +0000 UTC m=+0.164490313 container attach 26ec644fab9e8722a240b0b24010fa031e8a95d6c6a5efdc3946fd39474d1da5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wilbur, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 08:33:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3422: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:58 np0005629333 modest_wilbur[406229]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:33:58 np0005629333 modest_wilbur[406229]: --> All data devices are unavailable
Feb 25 08:33:58 np0005629333 systemd[1]: libpod-26ec644fab9e8722a240b0b24010fa031e8a95d6c6a5efdc3946fd39474d1da5.scope: Deactivated successfully.
Feb 25 08:33:58 np0005629333 podman[406213]: 2026-02-25 13:33:58.079224937 +0000 UTC m=+0.640759875 container died 26ec644fab9e8722a240b0b24010fa031e8a95d6c6a5efdc3946fd39474d1da5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:33:58 np0005629333 systemd[1]: var-lib-containers-storage-overlay-483c9564aa6af82273cd82b930a8333d8d26dead088c8f41d24f73975660423e-merged.mount: Deactivated successfully.
Feb 25 08:33:58 np0005629333 podman[406213]: 2026-02-25 13:33:58.130325419 +0000 UTC m=+0.691860357 container remove 26ec644fab9e8722a240b0b24010fa031e8a95d6c6a5efdc3946fd39474d1da5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wilbur, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 08:33:58 np0005629333 systemd[1]: libpod-conmon-26ec644fab9e8722a240b0b24010fa031e8a95d6c6a5efdc3946fd39474d1da5.scope: Deactivated successfully.
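The modest_wilbur run above is cephadm's ceph-volume batch report against the drive group: "0 physical, 3 LVM" and "All data devices are unavailable" mean the three LVs are already consumed by OSDs 0-2, so no new OSDs get created. A hedged sketch of an equivalent manual report, with LV paths taken from the inventory printed below; whether batch accepts these exact arguments depends on the ceph-volume version in the image:

    import subprocess

    subprocess.run(
        ['cephadm', 'shell', '--', 'ceph-volume', 'lvm', 'batch',
         '--report', '--format', 'pretty',
         '/dev/ceph_vg0/ceph_lv0', '/dev/ceph_vg1/ceph_lv1', '/dev/ceph_vg2/ceph_lv2'],
        check=False)  # reporting on already-consumed LVs; a non-zero exit is expected (assumption)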
Feb 25 08:33:58 np0005629333 podman[406324]: 2026-02-25 13:33:58.607511937 +0000 UTC m=+0.045358172 container create e78ff032e06e4d183e11611da0fdb21a0957ab3a33c605c57e0f462c16504d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:33:58 np0005629333 systemd[1]: Started libpod-conmon-e78ff032e06e4d183e11611da0fdb21a0957ab3a33c605c57e0f462c16504d83.scope.
Feb 25 08:33:58 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:33:58 np0005629333 podman[406324]: 2026-02-25 13:33:58.591879195 +0000 UTC m=+0.029725430 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:33:58 np0005629333 podman[406324]: 2026-02-25 13:33:58.691946359 +0000 UTC m=+0.129792624 container init e78ff032e06e4d183e11611da0fdb21a0957ab3a33c605c57e0f462c16504d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_napier, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:33:58 np0005629333 podman[406324]: 2026-02-25 13:33:58.703416423 +0000 UTC m=+0.141262658 container start e78ff032e06e4d183e11611da0fdb21a0957ab3a33c605c57e0f462c16504d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_napier, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 08:33:58 np0005629333 podman[406324]: 2026-02-25 13:33:58.706964353 +0000 UTC m=+0.144810658 container attach e78ff032e06e4d183e11611da0fdb21a0957ab3a33c605c57e0f462c16504d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 08:33:58 np0005629333 dreamy_napier[406340]: 167 167
Feb 25 08:33:58 np0005629333 systemd[1]: libpod-e78ff032e06e4d183e11611da0fdb21a0957ab3a33c605c57e0f462c16504d83.scope: Deactivated successfully.
Feb 25 08:33:58 np0005629333 podman[406324]: 2026-02-25 13:33:58.708485626 +0000 UTC m=+0.146331861 container died e78ff032e06e4d183e11611da0fdb21a0957ab3a33c605c57e0f462c16504d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 08:33:58 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f5a2497e6cbf500356a6d2e3b499f5c1022b824bccf6a92ddbf02c1ec1885a38-merged.mount: Deactivated successfully.
Feb 25 08:33:58 np0005629333 podman[406324]: 2026-02-25 13:33:58.749458383 +0000 UTC m=+0.187304588 container remove e78ff032e06e4d183e11611da0fdb21a0957ab3a33c605c57e0f462c16504d83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_napier, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:33:58 np0005629333 systemd[1]: libpod-conmon-e78ff032e06e4d183e11611da0fdb21a0957ab3a33c605c57e0f462c16504d83.scope: Deactivated successfully.
Feb 25 08:33:58 np0005629333 podman[406365]: 2026-02-25 13:33:58.920782698 +0000 UTC m=+0.058392419 container create 6b36aec1f3881536d3705f8b778bfeb0fa00bab0ae422bd8cd538297aba599a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_fermat, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 08:33:58 np0005629333 systemd[1]: Started libpod-conmon-6b36aec1f3881536d3705f8b778bfeb0fa00bab0ae422bd8cd538297aba599a1.scope.
Feb 25 08:33:58 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:33:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b73b878a2d91db9503a5605d9cd4fafc259715ee03c85798994c96944b9c1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:33:58 np0005629333 podman[406365]: 2026-02-25 13:33:58.900089664 +0000 UTC m=+0.037699445 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:33:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b73b878a2d91db9503a5605d9cd4fafc259715ee03c85798994c96944b9c1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:33:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b73b878a2d91db9503a5605d9cd4fafc259715ee03c85798994c96944b9c1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:33:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b73b878a2d91db9503a5605d9cd4fafc259715ee03c85798994c96944b9c1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:33:59 np0005629333 podman[406365]: 2026-02-25 13:33:59.014268636 +0000 UTC m=+0.151878457 container init 6b36aec1f3881536d3705f8b778bfeb0fa00bab0ae422bd8cd538297aba599a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_fermat, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:33:59 np0005629333 podman[406365]: 2026-02-25 13:33:59.023356363 +0000 UTC m=+0.160966084 container start 6b36aec1f3881536d3705f8b778bfeb0fa00bab0ae422bd8cd538297aba599a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:33:59 np0005629333 podman[406365]: 2026-02-25 13:33:59.027381506 +0000 UTC m=+0.164991287 container attach 6b36aec1f3881536d3705f8b778bfeb0fa00bab0ae422bd8cd538297aba599a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_fermat, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]: {
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:    "0": [
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:        {
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "devices": [
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "/dev/loop3"
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            ],
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "lv_name": "ceph_lv0",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "lv_size": "21470642176",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "name": "ceph_lv0",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "tags": {
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.cluster_name": "ceph",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.crush_device_class": "",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.encrypted": "0",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.objectstore": "bluestore",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.osd_id": "0",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.type": "block",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.vdo": "0",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.with_tpm": "0"
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            },
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "type": "block",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "vg_name": "ceph_vg0"
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:        }
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:    ],
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:    "1": [
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:        {
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "devices": [
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "/dev/loop4"
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            ],
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "lv_name": "ceph_lv1",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "lv_size": "21470642176",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "name": "ceph_lv1",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "tags": {
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.cluster_name": "ceph",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.crush_device_class": "",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.encrypted": "0",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.objectstore": "bluestore",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.osd_id": "1",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.type": "block",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.vdo": "0",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.with_tpm": "0"
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            },
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "type": "block",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "vg_name": "ceph_vg1"
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:        }
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:    ],
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:    "2": [
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:        {
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "devices": [
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "/dev/loop5"
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            ],
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "lv_name": "ceph_lv2",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "lv_size": "21470642176",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "name": "ceph_lv2",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "tags": {
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.cluster_name": "ceph",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.crush_device_class": "",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.encrypted": "0",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.objectstore": "bluestore",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.osd_id": "2",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.type": "block",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.vdo": "0",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:                "ceph.with_tpm": "0"
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            },
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "type": "block",
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:            "vg_name": "ceph_vg2"
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:        }
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]:    ]
Feb 25 08:33:59 np0005629333 dazzling_fermat[406381]: }
Feb 25 08:33:59 np0005629333 systemd[1]: libpod-6b36aec1f3881536d3705f8b778bfeb0fa00bab0ae422bd8cd538297aba599a1.scope: Deactivated successfully.
Feb 25 08:33:59 np0005629333 podman[406365]: 2026-02-25 13:33:59.357197985 +0000 UTC m=+0.494807716 container died 6b36aec1f3881536d3705f8b778bfeb0fa00bab0ae422bd8cd538297aba599a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 08:33:59 np0005629333 systemd[1]: var-lib-containers-storage-overlay-76b73b878a2d91db9503a5605d9cd4fafc259715ee03c85798994c96944b9c1b-merged.mount: Deactivated successfully.
Feb 25 08:33:59 np0005629333 podman[406365]: 2026-02-25 13:33:59.408738859 +0000 UTC m=+0.546348590 container remove 6b36aec1f3881536d3705f8b778bfeb0fa00bab0ae422bd8cd538297aba599a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_fermat, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 08:33:59 np0005629333 systemd[1]: libpod-conmon-6b36aec1f3881536d3705f8b778bfeb0fa00bab0ae422bd8cd538297aba599a1.scope: Deactivated successfully.
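The dazzling_fermat container printed a ceph-volume lvm list --format json style inventory for OSDs 0-2 (loop-backed LVs ceph_vg0-2/ceph_lv0-2). A small sketch that maps each OSD id to its LV path, backing device, and osd_fsid; the field names match the JSON above, while the input file name is an assumption (the log shows the JSON only on the container's stdout):

    import json

    with open('lvm_list.json') as f:
        inventory = json.load(f)

    for osd_id, lvs in sorted(inventory.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(osd_id, lv['lv_path'], lv['devices'][0], lv['tags']['ceph.osd_fsid'])
    # first row: 0 /dev/ceph_vg0/ceph_lv0 /dev/loop3 d19afe3c-7923-4776-bcc2-88886150b441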
Feb 25 08:33:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:33:59 np0005629333 nova_compute[244014]: 2026-02-25 13:33:59.808 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:33:59 np0005629333 podman[406464]: 2026-02-25 13:33:59.878560149 +0000 UTC m=+0.042182542 container create f7342f2f5de36016a4ba9908a66f247bd54a9891644c79a6a223540bf6bf6476 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 08:33:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3423: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:33:59 np0005629333 nova_compute[244014]: 2026-02-25 13:33:59.893 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:33:59 np0005629333 nova_compute[244014]: 2026-02-25 13:33:59.893 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 08:33:59 np0005629333 nova_compute[244014]: 2026-02-25 13:33:59.915 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
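_cleanup_incomplete_migrations (08:33:50) and _run_pending_deletes here are nova ComputeManager periodic tasks driven by oslo_service. A minimal sketch of the mechanism behind the "Running periodic task ..." lines; the 300 s spacing is illustrative, not read from nova.conf:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=300)
        def _run_pending_deletes(self, context):
            pass  # nova's real task reclaims soft-deleted instances

    mgr = Manager(cfg.CONF)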
Feb 25 08:33:59 np0005629333 systemd[1]: Started libpod-conmon-f7342f2f5de36016a4ba9908a66f247bd54a9891644c79a6a223540bf6bf6476.scope.
Feb 25 08:33:59 np0005629333 podman[406464]: 2026-02-25 13:33:59.857598357 +0000 UTC m=+0.021220840 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:33:59 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:33:59 np0005629333 podman[406464]: 2026-02-25 13:33:59.987403341 +0000 UTC m=+0.151025744 container init f7342f2f5de36016a4ba9908a66f247bd54a9891644c79a6a223540bf6bf6476 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 08:33:59 np0005629333 podman[406464]: 2026-02-25 13:33:59.997563067 +0000 UTC m=+0.161185470 container start f7342f2f5de36016a4ba9908a66f247bd54a9891644c79a6a223540bf6bf6476 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 08:34:00 np0005629333 podman[406464]: 2026-02-25 13:34:00.001769516 +0000 UTC m=+0.165391929 container attach f7342f2f5de36016a4ba9908a66f247bd54a9891644c79a6a223540bf6bf6476 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lichterman, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:34:00 np0005629333 loving_lichterman[406481]: 167 167
Feb 25 08:34:00 np0005629333 systemd[1]: libpod-f7342f2f5de36016a4ba9908a66f247bd54a9891644c79a6a223540bf6bf6476.scope: Deactivated successfully.
Feb 25 08:34:00 np0005629333 podman[406464]: 2026-02-25 13:34:00.003857925 +0000 UTC m=+0.167480338 container died f7342f2f5de36016a4ba9908a66f247bd54a9891644c79a6a223540bf6bf6476 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lichterman, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle)
Feb 25 08:34:00 np0005629333 systemd[1]: var-lib-containers-storage-overlay-0059155171ce90425e44194ce3f3883950e0180bf15788e5e5d98bdb97ed6d76-merged.mount: Deactivated successfully.
Feb 25 08:34:00 np0005629333 podman[406464]: 2026-02-25 13:34:00.054482834 +0000 UTC m=+0.218105217 container remove f7342f2f5de36016a4ba9908a66f247bd54a9891644c79a6a223540bf6bf6476 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lichterman, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:34:00 np0005629333 systemd[1]: libpod-conmon-f7342f2f5de36016a4ba9908a66f247bd54a9891644c79a6a223540bf6bf6476.scope: Deactivated successfully.
Feb 25 08:34:00 np0005629333 podman[406505]: 2026-02-25 13:34:00.242993914 +0000 UTC m=+0.066820517 container create 1bde9d43205d57aa8aabcc37a978297705c6b80bb23b5d2f3cb4a8c8fe807ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_perlman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:34:00 np0005629333 systemd[1]: Started libpod-conmon-1bde9d43205d57aa8aabcc37a978297705c6b80bb23b5d2f3cb4a8c8fe807ef8.scope.
Feb 25 08:34:00 np0005629333 podman[406505]: 2026-02-25 13:34:00.213726118 +0000 UTC m=+0.037552721 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:34:00 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:34:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce2d1d1906a88f06659c44a48b8fe65ce9e2518ffeddc33828b5c9886de1380/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:34:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce2d1d1906a88f06659c44a48b8fe65ce9e2518ffeddc33828b5c9886de1380/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:34:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce2d1d1906a88f06659c44a48b8fe65ce9e2518ffeddc33828b5c9886de1380/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:34:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce2d1d1906a88f06659c44a48b8fe65ce9e2518ffeddc33828b5c9886de1380/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:34:00 np0005629333 podman[406505]: 2026-02-25 13:34:00.35020246 +0000 UTC m=+0.174029073 container init 1bde9d43205d57aa8aabcc37a978297705c6b80bb23b5d2f3cb4a8c8fe807ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_perlman, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 08:34:00 np0005629333 podman[406505]: 2026-02-25 13:34:00.363017522 +0000 UTC m=+0.186844085 container start 1bde9d43205d57aa8aabcc37a978297705c6b80bb23b5d2f3cb4a8c8fe807ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_perlman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Feb 25 08:34:00 np0005629333 podman[406505]: 2026-02-25 13:34:00.36652262 +0000 UTC m=+0.190349243 container attach 1bde9d43205d57aa8aabcc37a978297705c6b80bb23b5d2f3cb4a8c8fe807ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 08:34:01 np0005629333 lvm[406600]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:34:01 np0005629333 lvm[406600]: VG ceph_vg1 finished
Feb 25 08:34:01 np0005629333 lvm[406598]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:34:01 np0005629333 lvm[406598]: VG ceph_vg0 finished
Feb 25 08:34:01 np0005629333 lvm[406602]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:34:01 np0005629333 lvm[406602]: VG ceph_vg2 finished
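
The lvm messages are event-driven autoactivation: udev fires pvscan as each loop device appears, and once the last PV of a VG is seen the VG is reported complete (ceph_vg0 through ceph_vg2 back this node's three OSDs). A quick programmatic way to confirm the PV-to-VG layout, a sketch assuming stock lvm2:

    #!/usr/bin/env python3
    # List the PV-to-VG mapping as JSON. `pvs --reportformat json` and
    # the pv_name/vg_name/pv_size fields are standard lvm2 report options.
    import json, subprocess

    report = json.loads(subprocess.run(
        ["pvs", "--reportformat", "json", "-o", "pv_name,vg_name,pv_size"],
        capture_output=True, text=True, check=True,
    ).stdout)

    for pv in report["report"][0]["pv"]:
        print(pv["pv_name"], "->", pv["vg_name"], pv["pv_size"])
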
Feb 25 08:34:01 np0005629333 busy_perlman[406521]: {}
Feb 25 08:34:01 np0005629333 systemd[1]: libpod-1bde9d43205d57aa8aabcc37a978297705c6b80bb23b5d2f3cb4a8c8fe807ef8.scope: Deactivated successfully.
Feb 25 08:34:01 np0005629333 systemd[1]: libpod-1bde9d43205d57aa8aabcc37a978297705c6b80bb23b5d2f3cb4a8c8fe807ef8.scope: Consumed 1.137s CPU time.
Feb 25 08:34:01 np0005629333 podman[406505]: 2026-02-25 13:34:01.187213982 +0000 UTC m=+1.011040575 container died 1bde9d43205d57aa8aabcc37a978297705c6b80bb23b5d2f3cb4a8c8fe807ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_perlman, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:34:01 np0005629333 systemd[1]: var-lib-containers-storage-overlay-3ce2d1d1906a88f06659c44a48b8fe65ce9e2518ffeddc33828b5c9886de1380-merged.mount: Deactivated successfully.
Feb 25 08:34:01 np0005629333 podman[406505]: 2026-02-25 13:34:01.235909356 +0000 UTC m=+1.059735919 container remove 1bde9d43205d57aa8aabcc37a978297705c6b80bb23b5d2f3cb4a8c8fe807ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 08:34:01 np0005629333 systemd[1]: libpod-conmon-1bde9d43205d57aa8aabcc37a978297705c6b80bb23b5d2f3cb4a8c8fe807ef8.scope: Deactivated successfully.
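
Taken together, the podman lines are the whole lifecycle of a one-shot cephadm helper: init, start and attach at 13:34:00, died and remove barely a second later, with the container's only stdout being the `{}` on the busy_perlman line above, and systemd cleaning up the libpod scope and overlay mount in between. A minimal reproduction of the pattern, a sketch in which the image tag and the ceph-volume invocation are illustrative rather than taken from the log:

    #!/usr/bin/env python3
    # Run a one-shot command in a ceph container the way cephadm's
    # helpers do: --rm so the container is removed on exit, stdout
    # captured. Image tag and command are illustrative.
    import subprocess

    result = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "ceph-volume",
         "quay.io/ceph/ceph:v19", "inventory", "--format", "json"],
        capture_output=True, text=True,
    )
    print(result.returncode, result.stdout[:200])
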
Feb 25 08:34:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:34:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:34:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:34:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
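
Those config-key set commands are the cephadm mgr module persisting the device inventory it just gathered into the monitors' key-value store, under per-host keys such as mgr/cephadm/host.compute-0.devices.0. The stored blob can be read back directly, a sketch assuming admin access to the cluster:

    #!/usr/bin/env python3
    # Read a cephadm inventory blob back from the mon KV store.
    # `ceph config-key get <key>` is a standard command; the key name
    # is taken from the audit lines above.
    import subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    blob = subprocess.run(
        ["ceph", "config-key", "get", key],
        capture_output=True, text=True, check=True,
    ).stdout
    print(blob[:400])
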
Feb 25 08:34:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:34:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:34:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:34:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:34:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:34:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:34:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3424: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:34:02 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:34:02 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:34:02 np0005629333 nova_compute[244014]: 2026-02-25 13:34:02.579 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:34:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3425: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:34:03 np0005629333 nova_compute[244014]: 2026-02-25 13:34:03.899 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:34:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:34:04 np0005629333 nova_compute[244014]: 2026-02-25 13:34:04.843 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:34:04 np0005629333 nova_compute[244014]: 2026-02-25 13:34:04.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:34:04 np0005629333 nova_compute[244014]: 2026-02-25 13:34:04.902 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:34:04 np0005629333 nova_compute[244014]: 2026-02-25 13:34:04.902 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:34:04 np0005629333 nova_compute[244014]: 2026-02-25 13:34:04.903 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
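
The acquire/acquired/released triple around "compute_resources" is oslo.concurrency's lock instrumentation: nova's ResourceTracker serializes every method that touches tracked resources behind one named semaphore, and the DEBUG lines report the wait and hold times (here 0.001s waited, 0.000s held). The pattern in miniature, a sketch using the public oslo_concurrency API with a stand-in function body:

    #!/usr/bin/env python3
    # The locking pattern behind the "compute_resources" lines, using
    # oslo.concurrency (pip install oslo.concurrency). The decorated
    # body here is a stand-in, not nova's actual method.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # Runs with the named internal semaphore held; concurrent
        # callers block, and waited/held durations are logged at DEBUG.
        pass

    clean_compute_node_cache()
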
Feb 25 08:34:04 np0005629333 nova_compute[244014]: 2026-02-25 13:34:04.903 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:34:04 np0005629333 nova_compute[244014]: 2026-02-25 13:34:04.904 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:34:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:34:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3709027456' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:34:05 np0005629333 nova_compute[244014]: 2026-02-25 13:34:05.469 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
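
The 0.5-0.6s `ceph df` subprocesses are how the resource audit measures RBD capacity: nova shells out as client.openstack (matching the entity='client.openstack' audit lines on the mon) and parses the JSON. A sketch of that round trip, assuming the usual `ceph df --format=json` layout with a top-level "stats" object carrying total_bytes and total_avail_bytes:

    #!/usr/bin/env python3
    # Fetch cluster capacity the way the log shows nova doing it.
    # Assumes the standard `ceph df --format=json` output layout.
    import json, subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print("avail GiB:", stats["total_avail_bytes"] / 2**30)
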
Feb 25 08:34:05 np0005629333 nova_compute[244014]: 2026-02-25 13:34:05.699 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:34:05 np0005629333 nova_compute[244014]: 2026-02-25 13:34:05.700 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3491MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:34:05 np0005629333 nova_compute[244014]: 2026-02-25 13:34:05.700 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:34:05 np0005629333 nova_compute[244014]: 2026-02-25 13:34:05.701 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:34:05 np0005629333 nova_compute[244014]: 2026-02-25 13:34:05.764 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:34:05 np0005629333 nova_compute[244014]: 2026-02-25 13:34:05.764 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:34:05 np0005629333 nova_compute[244014]: 2026-02-25 13:34:05.779 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:34:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3426: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:34:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:34:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3365646580' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:34:06 np0005629333 nova_compute[244014]: 2026-02-25 13:34:06.345 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:34:06 np0005629333 nova_compute[244014]: 2026-02-25 13:34:06.352 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:34:06 np0005629333 nova_compute[244014]: 2026-02-25 13:34:06.373 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:34:06 np0005629333 nova_compute[244014]: 2026-02-25 13:34:06.375 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:34:06 np0005629333 nova_compute[244014]: 2026-02-25 13:34:06.376 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
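
The inventory dict the report client compares against encodes the placement capacity rule: what the scheduler may consume from a resource class is (total - reserved) * allocation_ratio. With the values logged above that works out to (8 - 0) * 4.0 = 32 VCPU, (7679 - 512) * 1.0 = 7167 MB of RAM and (59 - 1) * 0.9 = 52.2 GB of disk. As arithmetic:

    #!/usr/bin/env python3
    # Worked example: schedulable capacity from the inventory logged
    # above, using capacity = (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g} schedulable")
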
Feb 25 08:34:07 np0005629333 nova_compute[244014]: 2026-02-25 13:34:07.581 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:34:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3427: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:34:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:34:09 np0005629333 nova_compute[244014]: 2026-02-25 13:34:09.845 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:34:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3428: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:34:11 np0005629333 nova_compute[244014]: 2026-02-25 13:34:11.377 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:34:11 np0005629333 nova_compute[244014]: 2026-02-25 13:34:11.378 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:34:11 np0005629333 nova_compute[244014]: 2026-02-25 13:34:11.378 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:34:11 np0005629333 nova_compute[244014]: 2026-02-25 13:34:11.397 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:34:11 np0005629333 nova_compute[244014]: 2026-02-25 13:34:11.891 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:34:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3429: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:34:12 np0005629333 nova_compute[244014]: 2026-02-25 13:34:12.584 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:34:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3430: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:34:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:34:14 np0005629333 nova_compute[244014]: 2026-02-25 13:34:14.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:34:14 np0005629333 nova_compute[244014]: 2026-02-25 13:34:14.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 08:34:14 np0005629333 nova_compute[244014]: 2026-02-25 13:34:14.918 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:34:15 np0005629333 podman[406688]: 2026-02-25 13:34:15.730856764 +0000 UTC m=+0.065204641 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Feb 25 08:34:15 np0005629333 podman[406689]: 2026-02-25 13:34:15.799575384 +0000 UTC m=+0.133055406 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
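
The health_status=healthy events arriving every ~31 seconds are podman executing each container's configured check (the 'healthcheck': {'test': '/openstack/healthcheck'} entry in config_data above) and recording the result together with health_failing_streak. The same check can be forced on demand, a sketch:

    #!/usr/bin/env python3
    # Trigger a container's configured healthcheck immediately.
    # `podman healthcheck run <name>` is a standard subcommand and
    # exits 0 when the check passes.
    import subprocess

    for name in ("ovn_metadata_agent", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")
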
Feb 25 08:34:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3431: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:34:17 np0005629333 nova_compute[244014]: 2026-02-25 13:34:17.586 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:34:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3432: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:34:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:34:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3433: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:34:19 np0005629333 nova_compute[244014]: 2026-02-25 13:34:19.955 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:34:21 np0005629333 nova_compute[244014]: 2026-02-25 13:34:21.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:34:21 np0005629333 nova_compute[244014]: 2026-02-25 13:34:21.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:34:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3434: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:34:22 np0005629333 nova_compute[244014]: 2026-02-25 13:34:22.622 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:34:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3435: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:34:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:34:24 np0005629333 nova_compute[244014]: 2026-02-25 13:34:24.959 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:34:25 np0005629333 nova_compute[244014]: 2026-02-25 13:34:25.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:34:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3436: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:34:27 np0005629333 nova_compute[244014]: 2026-02-25 13:34:27.670 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:34:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3437: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:34:28 np0005629333 nova_compute[244014]: 2026-02-25 13:34:28.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:34:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:34:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3438: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:34:29 np0005629333 nova_compute[244014]: 2026-02-25 13:34:29.963 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:34:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:34:31
Feb 25 08:34:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:34:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:34:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['backups', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'vms', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'images']
Feb 25 08:34:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
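
This is one balancer pass: in upmap mode it walks the listed pools, prepares at most 10 pg-upmap changes per round while keeping the misplaced fraction under the logged 0.05 ceiling, and here finds nothing to move because all 305 PGs are already evenly placed. Its state can be queried from the CLI, a sketch that assumes `ceph balancer status` honors the common --format json flag:

    #!/usr/bin/env python3
    # Inspect the balancer described by the mgr lines above. Field
    # names are the usual ones in `ceph balancer status` output; the
    # JSON formatting flag is assumed to apply here as elsewhere.
    import json, subprocess

    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout)
    print(status.get("mode"), status.get("active"),
          status.get("last_optimize_duration"))
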
Feb 25 08:34:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:34:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:34:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:34:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:34:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:34:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:34:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3439: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:34:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:34:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:34:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:34:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:34:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:34:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:34:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:34:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:34:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:34:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
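
The rbd_support module periodically reloads its trash-purge and mirror-snapshot schedules, scanning every pool with the rbd application enabled (vms, volumes, backups, images); the empty start_after= just means each listing starts from the beginning. The schedules themselves are managed through the rbd CLI, a sketch:

    #!/usr/bin/env python3
    # List trash-purge schedules for the pools named in the log.
    # `rbd trash purge schedule list --pool <pool>` is the standard
    # subcommand (`ls` is its alias).
    import subprocess

    for pool in ("vms", "volumes", "backups", "images"):
        out = subprocess.run(
            ["rbd", "trash", "purge", "schedule", "list", "--pool", pool],
            capture_output=True, text=True,
        ).stdout.strip()
        print(pool, "->", out or "no schedule")
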
Feb 25 08:34:32 np0005629333 nova_compute[244014]: 2026-02-25 13:34:32.672 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:34:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3440: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:34:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:34:35 np0005629333 nova_compute[244014]: 2026-02-25 13:34:35.008 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:34:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3441: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:34:37 np0005629333 nova_compute[244014]: 2026-02-25 13:34:37.672 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:34:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3442: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:34:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:34:39 np0005629333 nova_compute[244014]: 2026-02-25 13:34:39.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:34:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3443: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:34:40 np0005629333 nova_compute[244014]: 2026-02-25 13:34:40.062 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:34:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3444: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:34:42 np0005629333 nova_compute[244014]: 2026-02-25 13:34:42.674 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:34:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:34:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:34:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:34:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:34:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:34:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:34:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:34:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:34:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:34:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:34:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:34:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:34:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:34:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:34:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:34:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:34:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:34:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:34:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:34:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:34:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:34:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:34:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
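
Every pg_autoscaler line instantiates the same rule: pg target = capacity_ratio * bias * PG budget, quantized to a power of two and left alone unless it strays far from the current value. The logged numbers are self-consistent with a budget of 300, i.e. 3 OSDs at the default mon_target_pg_per_osd of 100 (an inference from the arithmetic, not stated in the log): 0.0006714637 * 1.0 * 300 ≈ 0.2014 for 'images' and 1.3916e-06 * 4.0 * 300 ≈ 0.00167 for 'cephfs.cephfs.meta', matching the printed pg targets. A worked check:

    #!/usr/bin/env python3
    # Worked check of the pg_autoscaler lines above; the x300 budget
    # (3 OSDs * mon_target_pg_per_osd=100) is inferred, not logged.
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "images":             (0.0006714637386478266, 1.0),
        "cephfs.cephfs.meta": (1.3916366864300228e-06, 4.0),
    }
    for name, (capacity_ratio, bias) in pools.items():
        target = capacity_ratio * bias * 300
        print(f"{name}: pg target {target:.6g}")
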
Feb 25 08:34:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3445: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:34:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:34:45 np0005629333 nova_compute[244014]: 2026-02-25 13:34:45.095 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:34:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3446: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:34:46 np0005629333 podman[406731]: 2026-02-25 13:34:46.719926746 +0000 UTC m=+0.062727572 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 08:34:46 np0005629333 podman[406732]: 2026-02-25 13:34:46.754469371 +0000 UTC m=+0.093405328 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Feb 25 08:34:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:34:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1860475510' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:34:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:34:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1860475510' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:34:47 np0005629333 nova_compute[244014]: 2026-02-25 13:34:47.675 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:34:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3447: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:34:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:34:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3448: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:34:50 np0005629333 nova_compute[244014]: 2026-02-25 13:34:50.099 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:34:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3449: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:34:52 np0005629333 nova_compute[244014]: 2026-02-25 13:34:52.678 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:34:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3450: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #171. Immutable memtables: 0.
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.437169) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 171
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026494437199, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 2042, "num_deletes": 251, "total_data_size": 3475600, "memory_usage": 3534368, "flush_reason": "Manual Compaction"}
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #172: started
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026494494807, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 172, "file_size": 3408896, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70495, "largest_seqno": 72536, "table_properties": {"data_size": 3399582, "index_size": 5935, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18441, "raw_average_key_size": 20, "raw_value_size": 3381102, "raw_average_value_size": 3671, "num_data_blocks": 264, "num_entries": 921, "num_filter_entries": 921, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772026265, "oldest_key_time": 1772026265, "file_creation_time": 1772026494, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 172, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 57720 microseconds, and 4688 cpu microseconds.
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.494879) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #172: 3408896 bytes OK
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.494906) [db/memtable_list.cc:519] [default] Level-0 commit table #172 started
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.508726) [db/memtable_list.cc:722] [default] Level-0 commit table #172: memtable #1 done
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.508785) EVENT_LOG_v1 {"time_micros": 1772026494508773, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.508816) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 3467079, prev total WAL file size 3467079, number of live WAL files 2.
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000168.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.509884) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [172(3329KB)], [170(8879KB)]
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026494509929, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [172], "files_L6": [170], "score": -1, "input_data_size": 12501442, "oldest_snapshot_seqno": -1}
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #173: 8873 keys, 10757820 bytes, temperature: kUnknown
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026494668960, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 173, "file_size": 10757820, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10701812, "index_size": 32727, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22213, "raw_key_size": 232871, "raw_average_key_size": 26, "raw_value_size": 10546606, "raw_average_value_size": 1188, "num_data_blocks": 1263, "num_entries": 8873, "num_filter_entries": 8873, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772026494, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.669494) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 10757820 bytes
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.776929) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 78.5 rd, 67.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 8.7 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(6.8) write-amplify(3.2) OK, records in: 9387, records dropped: 514 output_compression: NoCompression
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.776971) EVENT_LOG_v1 {"time_micros": 1772026494776954, "job": 106, "event": "compaction_finished", "compaction_time_micros": 159335, "compaction_time_cpu_micros": 28534, "output_level": 6, "num_output_files": 1, "total_output_size": 10757820, "num_input_records": 9387, "num_output_records": 8873, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000172.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026494778084, "job": 106, "event": "table_file_deletion", "file_number": 172}
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026494779120, "job": 106, "event": "table_file_deletion", "file_number": 170}
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.509799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.779253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.779258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.779260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.779263) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:34:54 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:34:54.779265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
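
This rocksdb burst is the mon compacting its store.db: JOB 105 flushes a ~3.3 MB memtable to L0 as table #172, JOB 106 merges it with the existing L6 file #170 into #173 and deletes both inputs, and the repeated "Manual compaction starting" lines are the remaining per-range requests finding nothing left to do. The amplification figures in the JOB 106 summary follow from the logged byte counts, as this check shows:

    #!/usr/bin/env python3
    # Worked check of JOB 106's summary line: amplification ratios
    # from the logged file sizes.
    in_l0 = 3408896              # table #172, the L0 input
    in_l6 = 12501442 - in_l0     # table #170 (input_data_size minus L0)
    out   = 10757820             # table #173, the compaction output
    print("read-write-amplify:", round((in_l0 + in_l6 + out) / in_l0, 1))  # 6.8
    print("write-amplify:", round(out / in_l0, 1))                         # 3.2
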
Feb 25 08:34:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:34:55.076 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:34:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:34:55.077 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:34:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:34:55.077 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:34:55 np0005629333 nova_compute[244014]: 2026-02-25 13:34:55.143 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:34:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3451: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:34:57 np0005629333 nova_compute[244014]: 2026-02-25 13:34:57.730 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:34:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3452: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:34:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:34:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3453: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:00 np0005629333 nova_compute[244014]: 2026-02-25 13:35:00.147 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:35:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:35:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:35:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:35:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:35:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:35:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:35:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3454: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 25 08:35:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 08:35:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:35:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:35:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:35:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:35:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:35:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:35:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:35:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:35:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:35:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:35:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:35:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:35:02 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 08:35:02 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:35:02 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:35:02 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
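Aside: each handle_command/audit pair above is the mgr (mgr.14122) dispatching a structured command to the mon as a JSON object with a "prefix" key. The same commands can be issued programmatically through the python-rados bindings; this is a minimal sketch, assuming python-rados is installed and that /etc/ceph/ceph.conf plus the client.admin keyring are readable on this host:

    import json
    import rados

    # Connect as client.admin (same cluster the mgr above is talking to).
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
    cluster.connect()
    try:
        # One of the commands the mgr dispatches at 08:35:02 above.
        cmd = {"prefix": "config generate-minimal-conf"}
        ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
        if ret == 0:
            print(outbuf.decode())
        else:
            print(f"mon_command failed: {ret} {outs}")
    finally:
        cluster.shutdown()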
Feb 25 08:35:02 np0005629333 podman[406920]: 2026-02-25 13:35:02.486420028 +0000 UTC m=+0.055684602 container create 479839b617e32968e5a335a21fb15feddb81358fb2a4abfde52d5c4314c8fd8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hopper, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:35:02 np0005629333 systemd[1]: Started libpod-conmon-479839b617e32968e5a335a21fb15feddb81358fb2a4abfde52d5c4314c8fd8b.scope.
Feb 25 08:35:02 np0005629333 podman[406920]: 2026-02-25 13:35:02.454759105 +0000 UTC m=+0.024023739 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:35:02 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:35:02 np0005629333 podman[406920]: 2026-02-25 13:35:02.583451067 +0000 UTC m=+0.152715631 container init 479839b617e32968e5a335a21fb15feddb81358fb2a4abfde52d5c4314c8fd8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hopper, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:35:02 np0005629333 podman[406920]: 2026-02-25 13:35:02.591822163 +0000 UTC m=+0.161086727 container start 479839b617e32968e5a335a21fb15feddb81358fb2a4abfde52d5c4314c8fd8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hopper, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 08:35:02 np0005629333 podman[406920]: 2026-02-25 13:35:02.594753696 +0000 UTC m=+0.164018240 container attach 479839b617e32968e5a335a21fb15feddb81358fb2a4abfde52d5c4314c8fd8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hopper, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 08:35:02 np0005629333 hardcore_hopper[406936]: 167 167
Feb 25 08:35:02 np0005629333 systemd[1]: libpod-479839b617e32968e5a335a21fb15feddb81358fb2a4abfde52d5c4314c8fd8b.scope: Deactivated successfully.
Feb 25 08:35:02 np0005629333 conmon[406936]: conmon 479839b617e32968e5a3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-479839b617e32968e5a335a21fb15feddb81358fb2a4abfde52d5c4314c8fd8b.scope/container/memory.events
Feb 25 08:35:02 np0005629333 podman[406941]: 2026-02-25 13:35:02.652143435 +0000 UTC m=+0.036080819 container died 479839b617e32968e5a335a21fb15feddb81358fb2a4abfde52d5c4314c8fd8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 08:35:02 np0005629333 systemd[1]: var-lib-containers-storage-overlay-b7b9680e34998470af8d682ce4a49b346a0953a6d807798e01eabd9948e4a010-merged.mount: Deactivated successfully.
Feb 25 08:35:02 np0005629333 podman[406941]: 2026-02-25 13:35:02.692552896 +0000 UTC m=+0.076490270 container remove 479839b617e32968e5a335a21fb15feddb81358fb2a4abfde52d5c4314c8fd8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_hopper, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 08:35:02 np0005629333 systemd[1]: libpod-conmon-479839b617e32968e5a335a21fb15feddb81358fb2a4abfde52d5c4314c8fd8b.scope: Deactivated successfully.
Feb 25 08:35:02 np0005629333 nova_compute[244014]: 2026-02-25 13:35:02.732 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:35:02 np0005629333 podman[406964]: 2026-02-25 13:35:02.883866705 +0000 UTC m=+0.051419522 container create 093dce83346e495bf7d29929d9f896a7ee6e2fea8b4973b9f81ad4710fde95ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_knuth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Feb 25 08:35:02 np0005629333 systemd[1]: Started libpod-conmon-093dce83346e495bf7d29929d9f896a7ee6e2fea8b4973b9f81ad4710fde95ac.scope.
Feb 25 08:35:02 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:35:02 np0005629333 podman[406964]: 2026-02-25 13:35:02.859345433 +0000 UTC m=+0.026898330 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:35:02 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a4c5f3d74eb34f3dbff2e7f8446695cb059fe55399d008084b437488d3c67f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:35:02 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a4c5f3d74eb34f3dbff2e7f8446695cb059fe55399d008084b437488d3c67f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:35:02 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a4c5f3d74eb34f3dbff2e7f8446695cb059fe55399d008084b437488d3c67f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:35:02 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a4c5f3d74eb34f3dbff2e7f8446695cb059fe55399d008084b437488d3c67f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:35:02 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a4c5f3d74eb34f3dbff2e7f8446695cb059fe55399d008084b437488d3c67f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:35:02 np0005629333 podman[406964]: 2026-02-25 13:35:02.979824804 +0000 UTC m=+0.147377701 container init 093dce83346e495bf7d29929d9f896a7ee6e2fea8b4973b9f81ad4710fde95ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:35:02 np0005629333 podman[406964]: 2026-02-25 13:35:02.987121169 +0000 UTC m=+0.154674026 container start 093dce83346e495bf7d29929d9f896a7ee6e2fea8b4973b9f81ad4710fde95ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 08:35:02 np0005629333 podman[406964]: 2026-02-25 13:35:02.992738048 +0000 UTC m=+0.160290905 container attach 093dce83346e495bf7d29929d9f896a7ee6e2fea8b4973b9f81ad4710fde95ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Feb 25 08:35:03 np0005629333 agitated_knuth[406981]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:35:03 np0005629333 agitated_knuth[406981]: --> All data devices are unavailable
Feb 25 08:35:03 np0005629333 systemd[1]: libpod-093dce83346e495bf7d29929d9f896a7ee6e2fea8b4973b9f81ad4710fde95ac.scope: Deactivated successfully.
Feb 25 08:35:03 np0005629333 podman[406964]: 2026-02-25 13:35:03.476508221 +0000 UTC m=+0.644061078 container died 093dce83346e495bf7d29929d9f896a7ee6e2fea8b4973b9f81ad4710fde95ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_knuth, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 08:35:03 np0005629333 systemd[1]: var-lib-containers-storage-overlay-52a4c5f3d74eb34f3dbff2e7f8446695cb059fe55399d008084b437488d3c67f-merged.mount: Deactivated successfully.
Feb 25 08:35:03 np0005629333 podman[406964]: 2026-02-25 13:35:03.529016923 +0000 UTC m=+0.696569740 container remove 093dce83346e495bf7d29929d9f896a7ee6e2fea8b4973b9f81ad4710fde95ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:35:03 np0005629333 systemd[1]: libpod-conmon-093dce83346e495bf7d29929d9f896a7ee6e2fea8b4973b9f81ad4710fde95ac.scope: Deactivated successfully.
Feb 25 08:35:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3455: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:03 np0005629333 podman[407075]: 2026-02-25 13:35:03.961967522 +0000 UTC m=+0.056133735 container create 2feff06d2ff593bed05e847aa636e72412227b5255ed0636bc1713591d6dd1ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:35:04 np0005629333 systemd[1]: Started libpod-conmon-2feff06d2ff593bed05e847aa636e72412227b5255ed0636bc1713591d6dd1ed.scope.
Feb 25 08:35:04 np0005629333 podman[407075]: 2026-02-25 13:35:03.935814524 +0000 UTC m=+0.029980787 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:35:04 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:35:04 np0005629333 podman[407075]: 2026-02-25 13:35:04.048870745 +0000 UTC m=+0.143036968 container init 2feff06d2ff593bed05e847aa636e72412227b5255ed0636bc1713591d6dd1ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:35:04 np0005629333 podman[407075]: 2026-02-25 13:35:04.055952835 +0000 UTC m=+0.150119058 container start 2feff06d2ff593bed05e847aa636e72412227b5255ed0636bc1713591d6dd1ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhaskara, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Feb 25 08:35:04 np0005629333 podman[407075]: 2026-02-25 13:35:04.059404082 +0000 UTC m=+0.153570325 container attach 2feff06d2ff593bed05e847aa636e72412227b5255ed0636bc1713591d6dd1ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhaskara, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:35:04 np0005629333 eloquent_bhaskara[407091]: 167 167
Feb 25 08:35:04 np0005629333 systemd[1]: libpod-2feff06d2ff593bed05e847aa636e72412227b5255ed0636bc1713591d6dd1ed.scope: Deactivated successfully.
Feb 25 08:35:04 np0005629333 podman[407075]: 2026-02-25 13:35:04.061291915 +0000 UTC m=+0.155458128 container died 2feff06d2ff593bed05e847aa636e72412227b5255ed0636bc1713591d6dd1ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 08:35:04 np0005629333 systemd[1]: var-lib-containers-storage-overlay-228d71ac330a135e557a938a64d77bc0e44881eb38ed48649272688994b1f9fa-merged.mount: Deactivated successfully.
Feb 25 08:35:04 np0005629333 podman[407075]: 2026-02-25 13:35:04.110164525 +0000 UTC m=+0.204330748 container remove 2feff06d2ff593bed05e847aa636e72412227b5255ed0636bc1713591d6dd1ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhaskara, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Feb 25 08:35:04 np0005629333 systemd[1]: libpod-conmon-2feff06d2ff593bed05e847aa636e72412227b5255ed0636bc1713591d6dd1ed.scope: Deactivated successfully.
Feb 25 08:35:04 np0005629333 podman[407115]: 2026-02-25 13:35:04.287824769 +0000 UTC m=+0.042345706 container create 56fdf9135c7325eb57d76a02c1782cfd077e3294718e8b7079363a357e124f16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:35:04 np0005629333 systemd[1]: Started libpod-conmon-56fdf9135c7325eb57d76a02c1782cfd077e3294718e8b7079363a357e124f16.scope.
Feb 25 08:35:04 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:35:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c79afc9c76c7b1493054b0a2875b862fee207201696c61d3a912ca616b3b88e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:35:04 np0005629333 podman[407115]: 2026-02-25 13:35:04.27052031 +0000 UTC m=+0.025041227 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:35:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c79afc9c76c7b1493054b0a2875b862fee207201696c61d3a912ca616b3b88e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:35:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c79afc9c76c7b1493054b0a2875b862fee207201696c61d3a912ca616b3b88e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:35:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c79afc9c76c7b1493054b0a2875b862fee207201696c61d3a912ca616b3b88e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:35:04 np0005629333 podman[407115]: 2026-02-25 13:35:04.382436509 +0000 UTC m=+0.136957496 container init 56fdf9135c7325eb57d76a02c1782cfd077e3294718e8b7079363a357e124f16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wu, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 08:35:04 np0005629333 podman[407115]: 2026-02-25 13:35:04.389653423 +0000 UTC m=+0.144174360 container start 56fdf9135c7325eb57d76a02c1782cfd077e3294718e8b7079363a357e124f16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wu, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:35:04 np0005629333 podman[407115]: 2026-02-25 13:35:04.39346379 +0000 UTC m=+0.147984727 container attach 56fdf9135c7325eb57d76a02c1782cfd077e3294718e8b7079363a357e124f16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 08:35:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]: {
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:    "0": [
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:        {
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "devices": [
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "/dev/loop3"
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            ],
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "lv_name": "ceph_lv0",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "lv_size": "21470642176",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "name": "ceph_lv0",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "tags": {
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.cluster_name": "ceph",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.crush_device_class": "",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.encrypted": "0",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.objectstore": "bluestore",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.osd_id": "0",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.type": "block",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.vdo": "0",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.with_tpm": "0"
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            },
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "type": "block",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "vg_name": "ceph_vg0"
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:        }
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:    ],
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:    "1": [
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:        {
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "devices": [
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "/dev/loop4"
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            ],
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "lv_name": "ceph_lv1",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "lv_size": "21470642176",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "name": "ceph_lv1",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "tags": {
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.cluster_name": "ceph",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.crush_device_class": "",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.encrypted": "0",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.objectstore": "bluestore",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.osd_id": "1",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.type": "block",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.vdo": "0",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.with_tpm": "0"
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            },
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "type": "block",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "vg_name": "ceph_vg1"
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:        }
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:    ],
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:    "2": [
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:        {
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "devices": [
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "/dev/loop5"
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            ],
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "lv_name": "ceph_lv2",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "lv_size": "21470642176",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "name": "ceph_lv2",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "tags": {
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.cluster_name": "ceph",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.crush_device_class": "",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.encrypted": "0",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.objectstore": "bluestore",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.osd_id": "2",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.type": "block",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.vdo": "0",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:                "ceph.with_tpm": "0"
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            },
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "type": "block",
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:            "vg_name": "ceph_vg2"
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:        }
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]:    ]
Feb 25 08:35:04 np0005629333 inspiring_wu[407132]: }
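Aside: the JSON printed by the inspiring_wu container is ceph-volume "lvm list"-style output, a map from OSD id to the logical volumes backing it, with the ceph.* LV tags duplicated in the flat lv_tags string and the parsed tags object. A minimal sketch for turning a captured copy of this report into an OSD-to-device summary (the file name lvm_list.json is illustrative):

    import json

    # Parse ceph-volume "lvm list"-style JSON, as captured above, into a
    # simple osd_id -> (lv_path, backing devices, fsid) summary.
    with open("lvm_list.json") as fh:
        report = json.load(fh)

    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"(fsid {tags['ceph.osd_fsid']})")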
Feb 25 08:35:04 np0005629333 systemd[1]: libpod-56fdf9135c7325eb57d76a02c1782cfd077e3294718e8b7079363a357e124f16.scope: Deactivated successfully.
Feb 25 08:35:04 np0005629333 podman[407115]: 2026-02-25 13:35:04.721330093 +0000 UTC m=+0.475851000 container died 56fdf9135c7325eb57d76a02c1782cfd077e3294718e8b7079363a357e124f16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wu, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 08:35:04 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1c79afc9c76c7b1493054b0a2875b862fee207201696c61d3a912ca616b3b88e-merged.mount: Deactivated successfully.
Feb 25 08:35:04 np0005629333 podman[407115]: 2026-02-25 13:35:04.762012592 +0000 UTC m=+0.516533509 container remove 56fdf9135c7325eb57d76a02c1782cfd077e3294718e8b7079363a357e124f16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wu, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:35:04 np0005629333 systemd[1]: libpod-conmon-56fdf9135c7325eb57d76a02c1782cfd077e3294718e8b7079363a357e124f16.scope: Deactivated successfully.
Feb 25 08:35:04 np0005629333 nova_compute[244014]: 2026-02-25 13:35:04.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:35:04 np0005629333 nova_compute[244014]: 2026-02-25 13:35:04.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:35:04 np0005629333 nova_compute[244014]: 2026-02-25 13:35:04.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:35:04 np0005629333 nova_compute[244014]: 2026-02-25 13:35:04.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:35:04 np0005629333 nova_compute[244014]: 2026-02-25 13:35:04.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:35:04 np0005629333 nova_compute[244014]: 2026-02-25 13:35:04.910 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 08:35:04 np0005629333 nova_compute[244014]: 2026-02-25 13:35:04.911 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:35:05 np0005629333 nova_compute[244014]: 2026-02-25 13:35:05.194 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:35:05 np0005629333 podman[407233]: 2026-02-25 13:35:05.268779373 +0000 UTC m=+0.038488256 container create 1879bd3689d8d34a52983ee8f04b48cad1e7c1ae973835f7dac1efda6a6fb0de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:35:05 np0005629333 systemd[1]: Started libpod-conmon-1879bd3689d8d34a52983ee8f04b48cad1e7c1ae973835f7dac1efda6a6fb0de.scope.
Feb 25 08:35:05 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:35:05 np0005629333 podman[407233]: 2026-02-25 13:35:05.250506997 +0000 UTC m=+0.020215890 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:35:05 np0005629333 podman[407233]: 2026-02-25 13:35:05.347550956 +0000 UTC m=+0.117259829 container init 1879bd3689d8d34a52983ee8f04b48cad1e7c1ae973835f7dac1efda6a6fb0de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_stonebraker, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 08:35:05 np0005629333 podman[407233]: 2026-02-25 13:35:05.354846012 +0000 UTC m=+0.124554885 container start 1879bd3689d8d34a52983ee8f04b48cad1e7c1ae973835f7dac1efda6a6fb0de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_stonebraker, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 08:35:05 np0005629333 podman[407233]: 2026-02-25 13:35:05.357653121 +0000 UTC m=+0.127362014 container attach 1879bd3689d8d34a52983ee8f04b48cad1e7c1ae973835f7dac1efda6a6fb0de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_stonebraker, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 08:35:05 np0005629333 stupefied_stonebraker[407250]: 167 167
Feb 25 08:35:05 np0005629333 systemd[1]: libpod-1879bd3689d8d34a52983ee8f04b48cad1e7c1ae973835f7dac1efda6a6fb0de.scope: Deactivated successfully.
Feb 25 08:35:05 np0005629333 podman[407233]: 2026-02-25 13:35:05.359474422 +0000 UTC m=+0.129183305 container died 1879bd3689d8d34a52983ee8f04b48cad1e7c1ae973835f7dac1efda6a6fb0de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 08:35:05 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e990e5032b2d4456d26ac81a58a555d07b58336667146a021dd7e6be12be8ca9-merged.mount: Deactivated successfully.
Feb 25 08:35:05 np0005629333 podman[407233]: 2026-02-25 13:35:05.39979073 +0000 UTC m=+0.169499603 container remove 1879bd3689d8d34a52983ee8f04b48cad1e7c1ae973835f7dac1efda6a6fb0de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:35:05 np0005629333 systemd[1]: libpod-conmon-1879bd3689d8d34a52983ee8f04b48cad1e7c1ae973835f7dac1efda6a6fb0de.scope: Deactivated successfully.
Feb 25 08:35:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:35:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/397516983' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:35:05 np0005629333 nova_compute[244014]: 2026-02-25 13:35:05.497 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
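Aside: the ceph df call nova_compute shells out to above ("ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf", returning 0 in 0.587s) can be reproduced directly. A sketch, assuming the same openstack credentials are available on the host; the field names read from the JSON ("stats", "total_bytes", "total_avail_bytes") may vary across Ceph releases:

    import json
    import subprocess

    # Re-run the exact command nova_compute executes above and parse it.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True)
    report = json.loads(out.stdout)

    stats = report.get("stats", {})
    total = stats.get("total_bytes", 0)
    avail = stats.get("total_avail_bytes", 0)
    print(f"cluster: {avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")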
Feb 25 08:35:05 np0005629333 podman[407276]: 2026-02-25 13:35:05.570131798 +0000 UTC m=+0.059879281 container create 8ffe58e457ac7327b21547a73ce8d859a35980d413649a939d4f4bac4e8d7a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_chatelet, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:35:05 np0005629333 systemd[1]: Started libpod-conmon-8ffe58e457ac7327b21547a73ce8d859a35980d413649a939d4f4bac4e8d7a2e.scope.
Feb 25 08:35:05 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:35:05 np0005629333 podman[407276]: 2026-02-25 13:35:05.54931865 +0000 UTC m=+0.039066123 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:35:05 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8697738b3cbd88a7a8c3ac0507337f621c80c11a1fabec81c5d374959408c605/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:35:05 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8697738b3cbd88a7a8c3ac0507337f621c80c11a1fabec81c5d374959408c605/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:35:05 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8697738b3cbd88a7a8c3ac0507337f621c80c11a1fabec81c5d374959408c605/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:35:05 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8697738b3cbd88a7a8c3ac0507337f621c80c11a1fabec81c5d374959408c605/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:35:05 np0005629333 podman[407276]: 2026-02-25 13:35:05.669307137 +0000 UTC m=+0.159054630 container init 8ffe58e457ac7327b21547a73ce8d859a35980d413649a939d4f4bac4e8d7a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:35:05 np0005629333 podman[407276]: 2026-02-25 13:35:05.67404205 +0000 UTC m=+0.163789503 container start 8ffe58e457ac7327b21547a73ce8d859a35980d413649a939d4f4bac4e8d7a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 08:35:05 np0005629333 podman[407276]: 2026-02-25 13:35:05.677422816 +0000 UTC m=+0.167170319 container attach 8ffe58e457ac7327b21547a73ce8d859a35980d413649a939d4f4bac4e8d7a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 08:35:05 np0005629333 nova_compute[244014]: 2026-02-25 13:35:05.695 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:35:05 np0005629333 nova_compute[244014]: 2026-02-25 13:35:05.697 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3492MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:35:05 np0005629333 nova_compute[244014]: 2026-02-25 13:35:05.697 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:35:05 np0005629333 nova_compute[244014]: 2026-02-25 13:35:05.697 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:35:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3456: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:05 np0005629333 nova_compute[244014]: 2026-02-25 13:35:05.987 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:35:05 np0005629333 nova_compute[244014]: 2026-02-25 13:35:05.987 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:35:06 np0005629333 nova_compute[244014]: 2026-02-25 13:35:06.004 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:35:06 np0005629333 lvm[407391]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:35:06 np0005629333 lvm[407391]: VG ceph_vg0 finished
Feb 25 08:35:06 np0005629333 lvm[407392]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:35:06 np0005629333 lvm[407392]: VG ceph_vg1 finished
Feb 25 08:35:06 np0005629333 lvm[407394]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:35:06 np0005629333 lvm[407394]: VG ceph_vg2 finished
Feb 25 08:35:06 np0005629333 infallible_chatelet[407292]: {}
Feb 25 08:35:06 np0005629333 podman[407276]: 2026-02-25 13:35:06.431784966 +0000 UTC m=+0.921532449 container died 8ffe58e457ac7327b21547a73ce8d859a35980d413649a939d4f4bac4e8d7a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_chatelet, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 08:35:06 np0005629333 systemd[1]: libpod-8ffe58e457ac7327b21547a73ce8d859a35980d413649a939d4f4bac4e8d7a2e.scope: Deactivated successfully.
Feb 25 08:35:06 np0005629333 systemd[1]: libpod-8ffe58e457ac7327b21547a73ce8d859a35980d413649a939d4f4bac4e8d7a2e.scope: Consumed 1.116s CPU time.
Feb 25 08:35:06 np0005629333 systemd[1]: var-lib-containers-storage-overlay-8697738b3cbd88a7a8c3ac0507337f621c80c11a1fabec81c5d374959408c605-merged.mount: Deactivated successfully.
Feb 25 08:35:06 np0005629333 podman[407276]: 2026-02-25 13:35:06.478435393 +0000 UTC m=+0.968182856 container remove 8ffe58e457ac7327b21547a73ce8d859a35980d413649a939d4f4bac4e8d7a2e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_chatelet, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:35:06 np0005629333 systemd[1]: libpod-conmon-8ffe58e457ac7327b21547a73ce8d859a35980d413649a939d4f4bac4e8d7a2e.scope: Deactivated successfully.
Feb 25 08:35:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:35:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:35:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:35:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:35:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:35:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1443480120' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:35:06 np0005629333 nova_compute[244014]: 2026-02-25 13:35:06.605 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:35:06 np0005629333 nova_compute[244014]: 2026-02-25 13:35:06.612 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:35:06 np0005629333 nova_compute[244014]: 2026-02-25 13:35:06.667 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:35:06 np0005629333 nova_compute[244014]: 2026-02-25 13:35:06.670 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:35:06 np0005629333 nova_compute[244014]: 2026-02-25 13:35:06.670 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:35:07 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:35:07 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:35:07 np0005629333 nova_compute[244014]: 2026-02-25 13:35:07.735 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:35:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3457: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:35:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3458: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:10 np0005629333 nova_compute[244014]: 2026-02-25 13:35:10.198 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:35:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3459: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:12 np0005629333 nova_compute[244014]: 2026-02-25 13:35:12.671 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:35:12 np0005629333 nova_compute[244014]: 2026-02-25 13:35:12.671 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:35:12 np0005629333 nova_compute[244014]: 2026-02-25 13:35:12.672 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:35:12 np0005629333 nova_compute[244014]: 2026-02-25 13:35:12.688 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:35:12 np0005629333 nova_compute[244014]: 2026-02-25 13:35:12.774 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:35:13 np0005629333 nova_compute[244014]: 2026-02-25 13:35:13.889 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:35:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3460: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:35:15 np0005629333 nova_compute[244014]: 2026-02-25 13:35:15.218 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:35:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3461: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:16 np0005629333 nova_compute[244014]: 2026-02-25 13:35:16.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:35:16 np0005629333 nova_compute[244014]: 2026-02-25 13:35:16.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 08:35:17 np0005629333 podman[407435]: 2026-02-25 13:35:17.777837789 +0000 UTC m=+0.112075494 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 25 08:35:17 np0005629333 nova_compute[244014]: 2026-02-25 13:35:17.819 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:35:17 np0005629333 podman[407436]: 2026-02-25 13:35:17.842783672 +0000 UTC m=+0.172862359 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller)
Feb 25 08:35:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3462: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:35:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3463: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:20 np0005629333 nova_compute[244014]: 2026-02-25 13:35:20.254 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:35:21 np0005629333 nova_compute[244014]: 2026-02-25 13:35:21.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:35:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3464: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:22 np0005629333 nova_compute[244014]: 2026-02-25 13:35:22.855 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:35:22 np0005629333 nova_compute[244014]: 2026-02-25 13:35:22.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:35:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3465: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:35:25 np0005629333 nova_compute[244014]: 2026-02-25 13:35:25.258 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:35:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3466: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:27 np0005629333 nova_compute[244014]: 2026-02-25 13:35:27.857 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:35:27 np0005629333 nova_compute[244014]: 2026-02-25 13:35:27.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:35:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3467: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:35:29 np0005629333 nova_compute[244014]: 2026-02-25 13:35:29.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:35:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3468: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:30 np0005629333 nova_compute[244014]: 2026-02-25 13:35:30.261 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:35:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:35:31
Feb 25 08:35:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:35:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:35:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['volumes', 'backups', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'images', '.mgr', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log']
Feb 25 08:35:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:35:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:35:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:35:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:35:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:35:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:35:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:35:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3469: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:35:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:35:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:35:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:35:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:35:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:35:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:35:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:35:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:35:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:35:32 np0005629333 nova_compute[244014]: 2026-02-25 13:35:32.861 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:35:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3470: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:35:35 np0005629333 nova_compute[244014]: 2026-02-25 13:35:35.264 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:35:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3471: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:37 np0005629333 nova_compute[244014]: 2026-02-25 13:35:37.862 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:35:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3472: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:35:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3473: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:40 np0005629333 nova_compute[244014]: 2026-02-25 13:35:40.298 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:35:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3474: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:42 np0005629333 nova_compute[244014]: 2026-02-25 13:35:42.911 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:35:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:35:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:35:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:35:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:35:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:35:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:35:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:35:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:35:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:35:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:35:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:35:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:35:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:35:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:35:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:35:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:35:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:35:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:35:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:35:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:35:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:35:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:35:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 08:35:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3475: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:35:45 np0005629333 nova_compute[244014]: 2026-02-25 13:35:45.345 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:35:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3476: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:35:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3154633280' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:35:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:35:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3154633280' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:35:47 np0005629333 nova_compute[244014]: 2026-02-25 13:35:47.944 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:35:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3477: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:48 np0005629333 podman[407481]: 2026-02-25 13:35:48.734201847 +0000 UTC m=+0.070540322 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, io.buildah.version=1.43.0)
Feb 25 08:35:48 np0005629333 podman[407482]: 2026-02-25 13:35:48.768528116 +0000 UTC m=+0.104957033 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Feb 25 08:35:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:35:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3478: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:50 np0005629333 nova_compute[244014]: 2026-02-25 13:35:50.347 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:35:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3479: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:52 np0005629333 nova_compute[244014]: 2026-02-25 13:35:52.975 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:35:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3480: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:35:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:35:55.077 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:35:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:35:55.078 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:35:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:35:55.078 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:35:55 np0005629333 nova_compute[244014]: 2026-02-25 13:35:55.351 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:35:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3481: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3482: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:35:57 np0005629333 nova_compute[244014]: 2026-02-25 13:35:57.976 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:35:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:35:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3483: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:36:00 np0005629333 nova_compute[244014]: 2026-02-25 13:36:00.373 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:36:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:36:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:36:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:36:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:36:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:36:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:36:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3484: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:36:02 np0005629333 nova_compute[244014]: 2026-02-25 13:36:02.979 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:36:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3485: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:36:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:36:04 np0005629333 nova_compute[244014]: 2026-02-25 13:36:04.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:36:04 np0005629333 nova_compute[244014]: 2026-02-25 13:36:04.932 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:36:04 np0005629333 nova_compute[244014]: 2026-02-25 13:36:04.933 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:36:04 np0005629333 nova_compute[244014]: 2026-02-25 13:36:04.934 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:36:04 np0005629333 nova_compute[244014]: 2026-02-25 13:36:04.934 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:36:04 np0005629333 nova_compute[244014]: 2026-02-25 13:36:04.935 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:36:05 np0005629333 nova_compute[244014]: 2026-02-25 13:36:05.407 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:36:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:36:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/814438940' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:36:05 np0005629333 nova_compute[244014]: 2026-02-25 13:36:05.570 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:36:05 np0005629333 nova_compute[244014]: 2026-02-25 13:36:05.819 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:36:05 np0005629333 nova_compute[244014]: 2026-02-25 13:36:05.822 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3554MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:36:05 np0005629333 nova_compute[244014]: 2026-02-25 13:36:05.822 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:36:05 np0005629333 nova_compute[244014]: 2026-02-25 13:36:05.823 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:36:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3486: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:36:06 np0005629333 nova_compute[244014]: 2026-02-25 13:36:06.076 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 08:36:06 np0005629333 nova_compute[244014]: 2026-02-25 13:36:06.077 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 08:36:06 np0005629333 nova_compute[244014]: 2026-02-25 13:36:06.201 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 08:36:06 np0005629333 nova_compute[244014]: 2026-02-25 13:36:06.329 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 08:36:06 np0005629333 nova_compute[244014]: 2026-02-25 13:36:06.330 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 08:36:06 np0005629333 nova_compute[244014]: 2026-02-25 13:36:06.350 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 08:36:06 np0005629333 nova_compute[244014]: 2026-02-25 13:36:06.372 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 08:36:06 np0005629333 nova_compute[244014]: 2026-02-25 13:36:06.386 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:36:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:36:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3582902453' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:36:06 np0005629333 nova_compute[244014]: 2026-02-25 13:36:06.979 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:36:06 np0005629333 nova_compute[244014]: 2026-02-25 13:36:06.985 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:36:07 np0005629333 nova_compute[244014]: 2026-02-25 13:36:07.000 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 08:36:07 np0005629333 nova_compute[244014]: 2026-02-25 13:36:07.002 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 08:36:07 np0005629333 nova_compute[244014]: 2026-02-25 13:36:07.003 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:36:07 np0005629333 podman[407661]: 2026-02-25 13:36:07.138485518 +0000 UTC m=+0.055312862 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:36:07 np0005629333 podman[407661]: 2026-02-25 13:36:07.217132288 +0000 UTC m=+0.133959662 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 08:36:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3487: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:36:07 np0005629333 nova_compute[244014]: 2026-02-25 13:36:07.980 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:36:08 np0005629333 nova_compute[244014]: 2026-02-25 13:36:08.002 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:36:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:36:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:36:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:36:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:36:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:36:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:36:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:36:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:36:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:36:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:36:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:36:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:36:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:36:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:36:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:36:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:36:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:36:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:36:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:36:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:36:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:36:09 np0005629333 podman[407994]: 2026-02-25 13:36:09.339582299 +0000 UTC m=+0.038815796 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:36:09 np0005629333 podman[407994]: 2026-02-25 13:36:09.495767157 +0000 UTC m=+0.195000654 container create 3197fb297fb5eb7814f533768a37b5307cce9f0390981afa3ba4c78acd5bc5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Feb 25 08:36:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:36:09 np0005629333 systemd[1]: Started libpod-conmon-3197fb297fb5eb7814f533768a37b5307cce9f0390981afa3ba4c78acd5bc5c6.scope.
Feb 25 08:36:09 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:36:09 np0005629333 podman[407994]: 2026-02-25 13:36:09.587557478 +0000 UTC m=+0.286791005 container init 3197fb297fb5eb7814f533768a37b5307cce9f0390981afa3ba4c78acd5bc5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:36:09 np0005629333 podman[407994]: 2026-02-25 13:36:09.596457599 +0000 UTC m=+0.295691086 container start 3197fb297fb5eb7814f533768a37b5307cce9f0390981afa3ba4c78acd5bc5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_dijkstra, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 08:36:09 np0005629333 podman[407994]: 2026-02-25 13:36:09.600149033 +0000 UTC m=+0.299382590 container attach 3197fb297fb5eb7814f533768a37b5307cce9f0390981afa3ba4c78acd5bc5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_dijkstra, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:36:09 np0005629333 great_dijkstra[408010]: 167 167
Feb 25 08:36:09 np0005629333 systemd[1]: libpod-3197fb297fb5eb7814f533768a37b5307cce9f0390981afa3ba4c78acd5bc5c6.scope: Deactivated successfully.
Feb 25 08:36:09 np0005629333 podman[407994]: 2026-02-25 13:36:09.604833975 +0000 UTC m=+0.304067472 container died 3197fb297fb5eb7814f533768a37b5307cce9f0390981afa3ba4c78acd5bc5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_dijkstra, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 08:36:09 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6e59f8605a1413455443d0f17ed8a3e7eab879dab1d4166735a6b3ec2fad7a7f-merged.mount: Deactivated successfully.
Feb 25 08:36:09 np0005629333 podman[407994]: 2026-02-25 13:36:09.642856369 +0000 UTC m=+0.342089866 container remove 3197fb297fb5eb7814f533768a37b5307cce9f0390981afa3ba4c78acd5bc5c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_dijkstra, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:36:09 np0005629333 systemd[1]: libpod-conmon-3197fb297fb5eb7814f533768a37b5307cce9f0390981afa3ba4c78acd5bc5c6.scope: Deactivated successfully.
Feb 25 08:36:09 np0005629333 podman[408033]: 2026-02-25 13:36:09.84349468 +0000 UTC m=+0.060854278 container create c1f0d52b02bed71abee2b87594eb933246db767476bbcc5e92a718fd89c55f6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_panini, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 08:36:09 np0005629333 systemd[1]: Started libpod-conmon-c1f0d52b02bed71abee2b87594eb933246db767476bbcc5e92a718fd89c55f6a.scope.
Feb 25 08:36:09 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:36:09 np0005629333 podman[408033]: 2026-02-25 13:36:09.812581648 +0000 UTC m=+0.029941266 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:36:09 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfb33e6496514c4fa836b1ce3128d2eb9ed2116d9d7baf534477d5c9f95bd37a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:36:09 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfb33e6496514c4fa836b1ce3128d2eb9ed2116d9d7baf534477d5c9f95bd37a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:36:09 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfb33e6496514c4fa836b1ce3128d2eb9ed2116d9d7baf534477d5c9f95bd37a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:36:09 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfb33e6496514c4fa836b1ce3128d2eb9ed2116d9d7baf534477d5c9f95bd37a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:36:09 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfb33e6496514c4fa836b1ce3128d2eb9ed2116d9d7baf534477d5c9f95bd37a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:36:09 np0005629333 podman[408033]: 2026-02-25 13:36:09.941256569 +0000 UTC m=+0.158616177 container init c1f0d52b02bed71abee2b87594eb933246db767476bbcc5e92a718fd89c55f6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_panini, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 08:36:09 np0005629333 podman[408033]: 2026-02-25 13:36:09.948238886 +0000 UTC m=+0.165598514 container start c1f0d52b02bed71abee2b87594eb933246db767476bbcc5e92a718fd89c55f6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_panini, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:36:09 np0005629333 podman[408033]: 2026-02-25 13:36:09.95225513 +0000 UTC m=+0.169614818 container attach c1f0d52b02bed71abee2b87594eb933246db767476bbcc5e92a718fd89c55f6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_panini, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 08:36:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3488: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:36:10 np0005629333 nova_compute[244014]: 2026-02-25 13:36:10.411 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:36:10 np0005629333 recursing_panini[408050]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:36:10 np0005629333 recursing_panini[408050]: --> All data devices are unavailable
Feb 25 08:36:10 np0005629333 systemd[1]: libpod-c1f0d52b02bed71abee2b87594eb933246db767476bbcc5e92a718fd89c55f6a.scope: Deactivated successfully.
Feb 25 08:36:10 np0005629333 conmon[408050]: conmon c1f0d52b02bed71abee2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c1f0d52b02bed71abee2b87594eb933246db767476bbcc5e92a718fd89c55f6a.scope/container/memory.events
Feb 25 08:36:10 np0005629333 podman[408033]: 2026-02-25 13:36:10.468327485 +0000 UTC m=+0.685687113 container died c1f0d52b02bed71abee2b87594eb933246db767476bbcc5e92a718fd89c55f6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_panini, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 08:36:10 np0005629333 systemd[1]: var-lib-containers-storage-overlay-cfb33e6496514c4fa836b1ce3128d2eb9ed2116d9d7baf534477d5c9f95bd37a-merged.mount: Deactivated successfully.
Feb 25 08:36:10 np0005629333 podman[408033]: 2026-02-25 13:36:10.52203778 +0000 UTC m=+0.739397368 container remove c1f0d52b02bed71abee2b87594eb933246db767476bbcc5e92a718fd89c55f6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_panini, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:36:10 np0005629333 systemd[1]: libpod-conmon-c1f0d52b02bed71abee2b87594eb933246db767476bbcc5e92a718fd89c55f6a.scope: Deactivated successfully.
Feb 25 08:36:10 np0005629333 podman[408145]: 2026-02-25 13:36:10.988071363 +0000 UTC m=+0.056392342 container create 8198afdb452a7f67490e9f7dc92c797016b3637d32104f4c65f3d1a24b7639d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 08:36:11 np0005629333 systemd[1]: Started libpod-conmon-8198afdb452a7f67490e9f7dc92c797016b3637d32104f4c65f3d1a24b7639d6.scope.
Feb 25 08:36:11 np0005629333 podman[408145]: 2026-02-25 13:36:10.962931454 +0000 UTC m=+0.031252493 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:36:11 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:36:11 np0005629333 podman[408145]: 2026-02-25 13:36:11.079582956 +0000 UTC m=+0.147903925 container init 8198afdb452a7f67490e9f7dc92c797016b3637d32104f4c65f3d1a24b7639d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_hopper, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 08:36:11 np0005629333 podman[408145]: 2026-02-25 13:36:11.086134681 +0000 UTC m=+0.154455640 container start 8198afdb452a7f67490e9f7dc92c797016b3637d32104f4c65f3d1a24b7639d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_hopper, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:36:11 np0005629333 podman[408145]: 2026-02-25 13:36:11.090741791 +0000 UTC m=+0.159062770 container attach 8198afdb452a7f67490e9f7dc92c797016b3637d32104f4c65f3d1a24b7639d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_hopper, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:36:11 np0005629333 systemd[1]: libpod-8198afdb452a7f67490e9f7dc92c797016b3637d32104f4c65f3d1a24b7639d6.scope: Deactivated successfully.
Feb 25 08:36:11 np0005629333 adoring_hopper[408161]: 167 167
Feb 25 08:36:11 np0005629333 conmon[408161]: conmon 8198afdb452a7f67490e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8198afdb452a7f67490e9f7dc92c797016b3637d32104f4c65f3d1a24b7639d6.scope/container/memory.events
Feb 25 08:36:11 np0005629333 podman[408145]: 2026-02-25 13:36:11.093403506 +0000 UTC m=+0.161724455 container died 8198afdb452a7f67490e9f7dc92c797016b3637d32104f4c65f3d1a24b7639d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_hopper, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 08:36:11 np0005629333 systemd[1]: var-lib-containers-storage-overlay-4dd3514c1982c177d0afb88e57c33d3c989a319da9e154dca00ae7cb4cce4f69-merged.mount: Deactivated successfully.
Feb 25 08:36:11 np0005629333 podman[408145]: 2026-02-25 13:36:11.135415072 +0000 UTC m=+0.203736051 container remove 8198afdb452a7f67490e9f7dc92c797016b3637d32104f4c65f3d1a24b7639d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_hopper, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:36:11 np0005629333 systemd[1]: libpod-conmon-8198afdb452a7f67490e9f7dc92c797016b3637d32104f4c65f3d1a24b7639d6.scope: Deactivated successfully.
Feb 25 08:36:11 np0005629333 podman[408185]: 2026-02-25 13:36:11.266429669 +0000 UTC m=+0.037697095 container create d0fc1a7ee2ec844173ffd0669f5dfd4f0cb5532eccc5630a8e85c03513d75ffa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_nightingale, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:36:11 np0005629333 systemd[1]: Started libpod-conmon-d0fc1a7ee2ec844173ffd0669f5dfd4f0cb5532eccc5630a8e85c03513d75ffa.scope.
Feb 25 08:36:11 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:36:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1411771f850afc6c182f201d843f968a9c20ab26b8ca3038b10412f614997c9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:36:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1411771f850afc6c182f201d843f968a9c20ab26b8ca3038b10412f614997c9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:36:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1411771f850afc6c182f201d843f968a9c20ab26b8ca3038b10412f614997c9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:36:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1411771f850afc6c182f201d843f968a9c20ab26b8ca3038b10412f614997c9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:36:11 np0005629333 podman[408185]: 2026-02-25 13:36:11.342849206 +0000 UTC m=+0.114116662 container init d0fc1a7ee2ec844173ffd0669f5dfd4f0cb5532eccc5630a8e85c03513d75ffa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 08:36:11 np0005629333 podman[408185]: 2026-02-25 13:36:11.252976299 +0000 UTC m=+0.024243725 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:36:11 np0005629333 podman[408185]: 2026-02-25 13:36:11.348763743 +0000 UTC m=+0.120031159 container start d0fc1a7ee2ec844173ffd0669f5dfd4f0cb5532eccc5630a8e85c03513d75ffa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_nightingale, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 08:36:11 np0005629333 podman[408185]: 2026-02-25 13:36:11.358407905 +0000 UTC m=+0.129675381 container attach d0fc1a7ee2ec844173ffd0669f5dfd4f0cb5532eccc5630a8e85c03513d75ffa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]: {
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:    "0": [
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:        {
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "devices": [
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "/dev/loop3"
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            ],
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "lv_name": "ceph_lv0",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "lv_size": "21470642176",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "name": "ceph_lv0",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "tags": {
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.cluster_name": "ceph",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.crush_device_class": "",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.encrypted": "0",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.objectstore": "bluestore",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.osd_id": "0",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.type": "block",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.vdo": "0",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.with_tpm": "0"
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            },
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "type": "block",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "vg_name": "ceph_vg0"
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:        }
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:    ],
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:    "1": [
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:        {
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "devices": [
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "/dev/loop4"
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            ],
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "lv_name": "ceph_lv1",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "lv_size": "21470642176",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "name": "ceph_lv1",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "tags": {
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.cluster_name": "ceph",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.crush_device_class": "",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.encrypted": "0",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.objectstore": "bluestore",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.osd_id": "1",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.type": "block",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.vdo": "0",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.with_tpm": "0"
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            },
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "type": "block",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "vg_name": "ceph_vg1"
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:        }
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:    ],
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:    "2": [
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:        {
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "devices": [
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "/dev/loop5"
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            ],
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "lv_name": "ceph_lv2",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "lv_size": "21470642176",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "name": "ceph_lv2",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "tags": {
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.cluster_name": "ceph",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.crush_device_class": "",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.encrypted": "0",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.objectstore": "bluestore",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.osd_id": "2",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.type": "block",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.vdo": "0",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:                "ceph.with_tpm": "0"
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            },
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "type": "block",
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:            "vg_name": "ceph_vg2"
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:        }
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]:    ]
Feb 25 08:36:11 np0005629333 intelligent_nightingale[408202]: }
Feb 25 08:36:11 np0005629333 systemd[1]: libpod-d0fc1a7ee2ec844173ffd0669f5dfd4f0cb5532eccc5630a8e85c03513d75ffa.scope: Deactivated successfully.
Feb 25 08:36:11 np0005629333 podman[408185]: 2026-02-25 13:36:11.62931038 +0000 UTC m=+0.400577856 container died d0fc1a7ee2ec844173ffd0669f5dfd4f0cb5532eccc5630a8e85c03513d75ffa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:36:11 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1411771f850afc6c182f201d843f968a9c20ab26b8ca3038b10412f614997c9b-merged.mount: Deactivated successfully.
Feb 25 08:36:11 np0005629333 podman[408185]: 2026-02-25 13:36:11.693848882 +0000 UTC m=+0.465116328 container remove d0fc1a7ee2ec844173ffd0669f5dfd4f0cb5532eccc5630a8e85c03513d75ffa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 08:36:11 np0005629333 systemd[1]: libpod-conmon-d0fc1a7ee2ec844173ffd0669f5dfd4f0cb5532eccc5630a8e85c03513d75ffa.scope: Deactivated successfully.
Feb 25 08:36:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3489: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:36:12 np0005629333 podman[408287]: 2026-02-25 13:36:12.262520731 +0000 UTC m=+0.054898780 container create d75ba2809b3185a065da1d69176136faecdd5f41cc075dae48a1e4e412daf1ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:36:12 np0005629333 systemd[1]: Started libpod-conmon-d75ba2809b3185a065da1d69176136faecdd5f41cc075dae48a1e4e412daf1ec.scope.
Feb 25 08:36:12 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:36:12 np0005629333 podman[408287]: 2026-02-25 13:36:12.23837238 +0000 UTC m=+0.030750499 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:36:12 np0005629333 podman[408287]: 2026-02-25 13:36:12.352075439 +0000 UTC m=+0.144453488 container init d75ba2809b3185a065da1d69176136faecdd5f41cc075dae48a1e4e412daf1ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_sanderson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 08:36:12 np0005629333 podman[408287]: 2026-02-25 13:36:12.362374819 +0000 UTC m=+0.154752888 container start d75ba2809b3185a065da1d69176136faecdd5f41cc075dae48a1e4e412daf1ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_sanderson, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 08:36:12 np0005629333 podman[408287]: 2026-02-25 13:36:12.366916108 +0000 UTC m=+0.159294177 container attach d75ba2809b3185a065da1d69176136faecdd5f41cc075dae48a1e4e412daf1ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_sanderson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default)
Feb 25 08:36:12 np0005629333 loving_sanderson[408304]: 167 167
Feb 25 08:36:12 np0005629333 systemd[1]: libpod-d75ba2809b3185a065da1d69176136faecdd5f41cc075dae48a1e4e412daf1ec.scope: Deactivated successfully.
Feb 25 08:36:12 np0005629333 podman[408287]: 2026-02-25 13:36:12.369332196 +0000 UTC m=+0.161710255 container died d75ba2809b3185a065da1d69176136faecdd5f41cc075dae48a1e4e412daf1ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 08:36:12 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a4c5d3dd577ccdfad935af8d26456f688b7565d0550c603666acce5a05c5404e-merged.mount: Deactivated successfully.
Feb 25 08:36:12 np0005629333 podman[408287]: 2026-02-25 13:36:12.407964776 +0000 UTC m=+0.200342815 container remove d75ba2809b3185a065da1d69176136faecdd5f41cc075dae48a1e4e412daf1ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_sanderson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:36:12 np0005629333 systemd[1]: libpod-conmon-d75ba2809b3185a065da1d69176136faecdd5f41cc075dae48a1e4e412daf1ec.scope: Deactivated successfully.
Feb 25 08:36:12 np0005629333 podman[408328]: 2026-02-25 13:36:12.609764161 +0000 UTC m=+0.065732476 container create 45b7f3917940912cc02c36062d0f3bf5cd752ef84f86d037632c38780bb5029b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 08:36:12 np0005629333 systemd[1]: Started libpod-conmon-45b7f3917940912cc02c36062d0f3bf5cd752ef84f86d037632c38780bb5029b.scope.
Feb 25 08:36:12 np0005629333 podman[408328]: 2026-02-25 13:36:12.584269352 +0000 UTC m=+0.040237747 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:36:12 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:36:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac316ab41687d5302a5ab03364c8ecf27bbb399c2ab2d9734a583f43f36feb3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:36:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac316ab41687d5302a5ab03364c8ecf27bbb399c2ab2d9734a583f43f36feb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:36:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac316ab41687d5302a5ab03364c8ecf27bbb399c2ab2d9734a583f43f36feb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:36:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac316ab41687d5302a5ab03364c8ecf27bbb399c2ab2d9734a583f43f36feb3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:36:12 np0005629333 podman[408328]: 2026-02-25 13:36:12.718955703 +0000 UTC m=+0.174924108 container init 45b7f3917940912cc02c36062d0f3bf5cd752ef84f86d037632c38780bb5029b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:36:12 np0005629333 podman[408328]: 2026-02-25 13:36:12.729419478 +0000 UTC m=+0.185387813 container start 45b7f3917940912cc02c36062d0f3bf5cd752ef84f86d037632c38780bb5029b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:36:12 np0005629333 podman[408328]: 2026-02-25 13:36:12.733798562 +0000 UTC m=+0.189766907 container attach 45b7f3917940912cc02c36062d0f3bf5cd752ef84f86d037632c38780bb5029b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:36:12 np0005629333 nova_compute[244014]: 2026-02-25 13:36:12.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:36:12 np0005629333 nova_compute[244014]: 2026-02-25 13:36:12.878 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 08:36:12 np0005629333 nova_compute[244014]: 2026-02-25 13:36:12.878 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 08:36:12 np0005629333 nova_compute[244014]: 2026-02-25 13:36:12.940 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 08:36:13 np0005629333 nova_compute[244014]: 2026-02-25 13:36:13.018 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:36:13 np0005629333 lvm[408420]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:36:13 np0005629333 lvm[408420]: VG ceph_vg0 finished
Feb 25 08:36:13 np0005629333 lvm[408423]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:36:13 np0005629333 lvm[408423]: VG ceph_vg1 finished
Feb 25 08:36:13 np0005629333 lvm[408425]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:36:13 np0005629333 lvm[408425]: VG ceph_vg2 finished
Feb 25 08:36:13 np0005629333 friendly_galileo[408344]: {}
Feb 25 08:36:13 np0005629333 systemd[1]: libpod-45b7f3917940912cc02c36062d0f3bf5cd752ef84f86d037632c38780bb5029b.scope: Deactivated successfully.
Feb 25 08:36:13 np0005629333 podman[408328]: 2026-02-25 13:36:13.618945262 +0000 UTC m=+1.074913607 container died 45b7f3917940912cc02c36062d0f3bf5cd752ef84f86d037632c38780bb5029b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_galileo, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:36:13 np0005629333 systemd[1]: libpod-45b7f3917940912cc02c36062d0f3bf5cd752ef84f86d037632c38780bb5029b.scope: Consumed 1.176s CPU time.
Feb 25 08:36:13 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1ac316ab41687d5302a5ab03364c8ecf27bbb399c2ab2d9734a583f43f36feb3-merged.mount: Deactivated successfully.
Feb 25 08:36:13 np0005629333 podman[408328]: 2026-02-25 13:36:13.678266226 +0000 UTC m=+1.134234521 container remove 45b7f3917940912cc02c36062d0f3bf5cd752ef84f86d037632c38780bb5029b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_galileo, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:36:13 np0005629333 systemd[1]: libpod-conmon-45b7f3917940912cc02c36062d0f3bf5cd752ef84f86d037632c38780bb5029b.scope: Deactivated successfully.
Feb 25 08:36:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:36:13 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:36:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:36:13 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:36:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3490: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:36:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:36:14 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:36:14 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:36:14 np0005629333 nova_compute[244014]: 2026-02-25 13:36:14.937 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:36:15 np0005629333 nova_compute[244014]: 2026-02-25 13:36:15.464 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:36:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3491: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:36:17 np0005629333 nova_compute[244014]: 2026-02-25 13:36:17.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:36:17 np0005629333 nova_compute[244014]: 2026-02-25 13:36:17.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 08:36:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3492: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:36:18 np0005629333 nova_compute[244014]: 2026-02-25 13:36:18.066 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:36:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:36:19 np0005629333 podman[408467]: 2026-02-25 13:36:19.723520407 +0000 UTC m=+0.059423948 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 08:36:19 np0005629333 podman[408468]: 2026-02-25 13:36:19.756742635 +0000 UTC m=+0.091953866 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller)
Feb 25 08:36:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3493: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:36:20 np0005629333 nova_compute[244014]: 2026-02-25 13:36:20.496 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:36:21 np0005629333 nova_compute[244014]: 2026-02-25 13:36:21.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:36:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3494: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 11 op/s
Feb 25 08:36:23 np0005629333 nova_compute[244014]: 2026-02-25 13:36:23.109 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:36:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3495: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 08:36:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:36:24 np0005629333 nova_compute[244014]: 2026-02-25 13:36:24.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:36:25 np0005629333 nova_compute[244014]: 2026-02-25 13:36:25.543 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:36:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3496: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 08:36:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3497: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 08:36:28 np0005629333 nova_compute[244014]: 2026-02-25 13:36:28.147 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:36:28 np0005629333 nova_compute[244014]: 2026-02-25 13:36:28.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:36:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:36:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3498: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 08:36:30 np0005629333 nova_compute[244014]: 2026-02-25 13:36:30.588 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:36:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:36:31
Feb 25 08:36:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:36:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:36:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', '.rgw.root', 'backups', 'images', 'vms', 'cephfs.cephfs.data']
Feb 25 08:36:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:36:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:36:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:36:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:36:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:36:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:36:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:36:31 np0005629333 nova_compute[244014]: 2026-02-25 13:36:31.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:36:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3499: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 08:36:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:36:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:36:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:36:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:36:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:36:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:36:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:36:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:36:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:36:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:36:33 np0005629333 nova_compute[244014]: 2026-02-25 13:36:33.188 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:36:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3500: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 4 op/s
Feb 25 08:36:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:36:35 np0005629333 nova_compute[244014]: 2026-02-25 13:36:35.591 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:36:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3501: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:36:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3502: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:36:38 np0005629333 nova_compute[244014]: 2026-02-25 13:36:38.232 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:36:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:36:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3503: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:36:40 np0005629333 nova_compute[244014]: 2026-02-25 13:36:40.618 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:36:40 np0005629333 nova_compute[244014]: 2026-02-25 13:36:40.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:36:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3504: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:36:43 np0005629333 nova_compute[244014]: 2026-02-25 13:36:43.238 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:36:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:36:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:36:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:36:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:36:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:36:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:36:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:36:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:36:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:36:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:36:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:36:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:36:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:36:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:36:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:36:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:36:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:36:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:36:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:36:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:36:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:36:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:36:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 08:36:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3505: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:36:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:36:45 np0005629333 nova_compute[244014]: 2026-02-25 13:36:45.623 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:36:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3506: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:36:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:36:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/578608184' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:36:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:36:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/578608184' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:36:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3507: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:36:48 np0005629333 nova_compute[244014]: 2026-02-25 13:36:48.280 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:36:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:36:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3508: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:36:50 np0005629333 nova_compute[244014]: 2026-02-25 13:36:50.670 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:36:50 np0005629333 podman[408513]: 2026-02-25 13:36:50.8036442 +0000 UTC m=+0.104378546 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 25 08:36:50 np0005629333 podman[408514]: 2026-02-25 13:36:50.819412615 +0000 UTC m=+0.111858667 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:36:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3509: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:36:53 np0005629333 nova_compute[244014]: 2026-02-25 13:36:53.313 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:36:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3510: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:36:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:36:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:36:55.079 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:36:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:36:55.080 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:36:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:36:55.080 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:36:55 np0005629333 nova_compute[244014]: 2026-02-25 13:36:55.703 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:36:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3511: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:36:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3512: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:36:58 np0005629333 nova_compute[244014]: 2026-02-25 13:36:58.315 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:36:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:36:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3513: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:00 np0005629333 nova_compute[244014]: 2026-02-25 13:37:00.741 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:37:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:37:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:37:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:37:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:37:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:37:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:37:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3514: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:03 np0005629333 nova_compute[244014]: 2026-02-25 13:37:03.320 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:37:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3515: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:37:05 np0005629333 nova_compute[244014]: 2026-02-25 13:37:05.783 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:37:05 np0005629333 nova_compute[244014]: 2026-02-25 13:37:05.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:37:05 np0005629333 nova_compute[244014]: 2026-02-25 13:37:05.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:37:05 np0005629333 nova_compute[244014]: 2026-02-25 13:37:05.931 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:37:05 np0005629333 nova_compute[244014]: 2026-02-25 13:37:05.932 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:37:05 np0005629333 nova_compute[244014]: 2026-02-25 13:37:05.932 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:37:05 np0005629333 nova_compute[244014]: 2026-02-25 13:37:05.932 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 08:37:05 np0005629333 nova_compute[244014]: 2026-02-25 13:37:05.933 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:37:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3516: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:37:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4181088331' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:37:06 np0005629333 nova_compute[244014]: 2026-02-25 13:37:06.483 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:37:06 np0005629333 nova_compute[244014]: 2026-02-25 13:37:06.691 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 08:37:06 np0005629333 nova_compute[244014]: 2026-02-25 13:37:06.693 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3556MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 08:37:06 np0005629333 nova_compute[244014]: 2026-02-25 13:37:06.693 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:37:06 np0005629333 nova_compute[244014]: 2026-02-25 13:37:06.694 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:37:06 np0005629333 nova_compute[244014]: 2026-02-25 13:37:06.783 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 08:37:06 np0005629333 nova_compute[244014]: 2026-02-25 13:37:06.784 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 08:37:06 np0005629333 nova_compute[244014]: 2026-02-25 13:37:06.809 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:37:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:37:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2524635839' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:37:07 np0005629333 nova_compute[244014]: 2026-02-25 13:37:07.353 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:37:07 np0005629333 nova_compute[244014]: 2026-02-25 13:37:07.360 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:37:07 np0005629333 nova_compute[244014]: 2026-02-25 13:37:07.376 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 08:37:07 np0005629333 nova_compute[244014]: 2026-02-25 13:37:07.378 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 08:37:07 np0005629333 nova_compute[244014]: 2026-02-25 13:37:07.378 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:37:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3517: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:08 np0005629333 nova_compute[244014]: 2026-02-25 13:37:08.322 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:37:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:37:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3518: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:10 np0005629333 nova_compute[244014]: 2026-02-25 13:37:10.816 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:37:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3519: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:13 np0005629333 nova_compute[244014]: 2026-02-25 13:37:13.324 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:37:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3520: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:37:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:37:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:37:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:37:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:37:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:37:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:37:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:37:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:37:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:37:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:37:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:37:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:37:14 np0005629333 podman[408743]: 2026-02-25 13:37:14.969968799 +0000 UTC m=+0.074454742 container create 1c7026e6d2e14d982e1c4a04c4c946ccaa0d943277cb36f58dc841a10a84a922 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_brattain, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:37:15 np0005629333 systemd[1]: Started libpod-conmon-1c7026e6d2e14d982e1c4a04c4c946ccaa0d943277cb36f58dc841a10a84a922.scope.
Feb 25 08:37:15 np0005629333 podman[408743]: 2026-02-25 13:37:14.923515888 +0000 UTC m=+0.028001651 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:37:15 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:37:15 np0005629333 podman[408743]: 2026-02-25 13:37:15.070890227 +0000 UTC m=+0.175375930 container init 1c7026e6d2e14d982e1c4a04c4c946ccaa0d943277cb36f58dc841a10a84a922 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_brattain, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:37:15 np0005629333 podman[408743]: 2026-02-25 13:37:15.079675555 +0000 UTC m=+0.184161238 container start 1c7026e6d2e14d982e1c4a04c4c946ccaa0d943277cb36f58dc841a10a84a922 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_brattain, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 08:37:15 np0005629333 podman[408743]: 2026-02-25 13:37:15.083174284 +0000 UTC m=+0.187659967 container attach 1c7026e6d2e14d982e1c4a04c4c946ccaa0d943277cb36f58dc841a10a84a922 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_brattain, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:37:15 np0005629333 sharp_brattain[408760]: 167 167
Feb 25 08:37:15 np0005629333 systemd[1]: libpod-1c7026e6d2e14d982e1c4a04c4c946ccaa0d943277cb36f58dc841a10a84a922.scope: Deactivated successfully.
Feb 25 08:37:15 np0005629333 conmon[408760]: conmon 1c7026e6d2e14d982e1c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1c7026e6d2e14d982e1c4a04c4c946ccaa0d943277cb36f58dc841a10a84a922.scope/container/memory.events
Feb 25 08:37:15 np0005629333 podman[408743]: 2026-02-25 13:37:15.088793253 +0000 UTC m=+0.193278966 container died 1c7026e6d2e14d982e1c4a04c4c946ccaa0d943277cb36f58dc841a10a84a922 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_brattain, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:37:15 np0005629333 systemd[1]: var-lib-containers-storage-overlay-43f8fa95b7333d04f1ffdac96f596ebdb2f0000fed1964d91a151968ba3c7dd0-merged.mount: Deactivated successfully.
Feb 25 08:37:15 np0005629333 podman[408743]: 2026-02-25 13:37:15.130096328 +0000 UTC m=+0.234582021 container remove 1c7026e6d2e14d982e1c4a04c4c946ccaa0d943277cb36f58dc841a10a84a922 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_brattain, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:37:15 np0005629333 systemd[1]: libpod-conmon-1c7026e6d2e14d982e1c4a04c4c946ccaa0d943277cb36f58dc841a10a84a922.scope: Deactivated successfully.
Feb 25 08:37:15 np0005629333 podman[408784]: 2026-02-25 13:37:15.286813591 +0000 UTC m=+0.046532204 container create f9e586455248e1045f517719dc67d568fed97cb3f6bbe1f78f830ed9d5f67ccd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_buck, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 08:37:15 np0005629333 systemd[1]: Started libpod-conmon-f9e586455248e1045f517719dc67d568fed97cb3f6bbe1f78f830ed9d5f67ccd.scope.
Feb 25 08:37:15 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:37:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b359e59d972087a8f00c165bdb564965caab6f6606529efcec5ba4dedf519aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:37:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b359e59d972087a8f00c165bdb564965caab6f6606529efcec5ba4dedf519aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:37:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b359e59d972087a8f00c165bdb564965caab6f6606529efcec5ba4dedf519aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:37:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b359e59d972087a8f00c165bdb564965caab6f6606529efcec5ba4dedf519aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:37:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b359e59d972087a8f00c165bdb564965caab6f6606529efcec5ba4dedf519aa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:37:15 np0005629333 podman[408784]: 2026-02-25 13:37:15.26834746 +0000 UTC m=+0.028066083 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:37:15 np0005629333 podman[408784]: 2026-02-25 13:37:15.396901488 +0000 UTC m=+0.156620121 container init f9e586455248e1045f517719dc67d568fed97cb3f6bbe1f78f830ed9d5f67ccd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_buck, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 08:37:15 np0005629333 podman[408784]: 2026-02-25 13:37:15.404907454 +0000 UTC m=+0.164626107 container start f9e586455248e1045f517719dc67d568fed97cb3f6bbe1f78f830ed9d5f67ccd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_buck, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:37:15 np0005629333 podman[408784]: 2026-02-25 13:37:15.409119943 +0000 UTC m=+0.168838596 container attach f9e586455248e1045f517719dc67d568fed97cb3f6bbe1f78f830ed9d5f67ccd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_buck, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:37:15 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:37:15 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:37:15 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:37:15 np0005629333 nova_compute[244014]: 2026-02-25 13:37:15.847 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:37:15 np0005629333 nice_buck[408801]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:37:15 np0005629333 nice_buck[408801]: --> All data devices are unavailable
Feb 25 08:37:15 np0005629333 systemd[1]: libpod-f9e586455248e1045f517719dc67d568fed97cb3f6bbe1f78f830ed9d5f67ccd.scope: Deactivated successfully.
Feb 25 08:37:15 np0005629333 podman[408784]: 2026-02-25 13:37:15.9036454 +0000 UTC m=+0.663364053 container died f9e586455248e1045f517719dc67d568fed97cb3f6bbe1f78f830ed9d5f67ccd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_buck, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 08:37:15 np0005629333 systemd[1]: var-lib-containers-storage-overlay-2b359e59d972087a8f00c165bdb564965caab6f6606529efcec5ba4dedf519aa-merged.mount: Deactivated successfully.
Feb 25 08:37:15 np0005629333 podman[408784]: 2026-02-25 13:37:15.961790561 +0000 UTC m=+0.721509184 container remove f9e586455248e1045f517719dc67d568fed97cb3f6bbe1f78f830ed9d5f67ccd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 08:37:15 np0005629333 systemd[1]: libpod-conmon-f9e586455248e1045f517719dc67d568fed97cb3f6bbe1f78f830ed9d5f67ccd.scope: Deactivated successfully.
Feb 25 08:37:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3521: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:16 np0005629333 nova_compute[244014]: 2026-02-25 13:37:16.374 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:37:16 np0005629333 nova_compute[244014]: 2026-02-25 13:37:16.374 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:37:16 np0005629333 nova_compute[244014]: 2026-02-25 13:37:16.375 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 08:37:16 np0005629333 nova_compute[244014]: 2026-02-25 13:37:16.375 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 08:37:16 np0005629333 nova_compute[244014]: 2026-02-25 13:37:16.391 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 08:37:16 np0005629333 podman[408897]: 2026-02-25 13:37:16.429573063 +0000 UTC m=+0.048355456 container create 7301d389c92663235800bc296946d23468999d5236f629b6fbde6e799cebb325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_jang, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:37:16 np0005629333 systemd[1]: Started libpod-conmon-7301d389c92663235800bc296946d23468999d5236f629b6fbde6e799cebb325.scope.
Feb 25 08:37:16 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:37:16 np0005629333 podman[408897]: 2026-02-25 13:37:16.415677261 +0000 UTC m=+0.034459674 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:37:16 np0005629333 podman[408897]: 2026-02-25 13:37:16.515830157 +0000 UTC m=+0.134612590 container init 7301d389c92663235800bc296946d23468999d5236f629b6fbde6e799cebb325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_jang, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 08:37:16 np0005629333 podman[408897]: 2026-02-25 13:37:16.52335647 +0000 UTC m=+0.142138903 container start 7301d389c92663235800bc296946d23468999d5236f629b6fbde6e799cebb325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_jang, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:37:16 np0005629333 kind_jang[408914]: 167 167
Feb 25 08:37:16 np0005629333 podman[408897]: 2026-02-25 13:37:16.526645143 +0000 UTC m=+0.145427646 container attach 7301d389c92663235800bc296946d23468999d5236f629b6fbde6e799cebb325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:37:16 np0005629333 systemd[1]: libpod-7301d389c92663235800bc296946d23468999d5236f629b6fbde6e799cebb325.scope: Deactivated successfully.
Feb 25 08:37:16 np0005629333 podman[408897]: 2026-02-25 13:37:16.527771034 +0000 UTC m=+0.146553517 container died 7301d389c92663235800bc296946d23468999d5236f629b6fbde6e799cebb325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_jang, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 08:37:16 np0005629333 systemd[1]: var-lib-containers-storage-overlay-410358fdca1bdc3a11cacd0527710b4efc11b1b0e5d5cbe5dd88fbd61f1b3077-merged.mount: Deactivated successfully.
Feb 25 08:37:16 np0005629333 podman[408897]: 2026-02-25 13:37:16.57012249 +0000 UTC m=+0.188904923 container remove 7301d389c92663235800bc296946d23468999d5236f629b6fbde6e799cebb325 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_jang, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:37:16 np0005629333 systemd[1]: libpod-conmon-7301d389c92663235800bc296946d23468999d5236f629b6fbde6e799cebb325.scope: Deactivated successfully.
Feb 25 08:37:16 np0005629333 podman[408939]: 2026-02-25 13:37:16.774003404 +0000 UTC m=+0.060322994 container create 0f5a6cd3a595ccc181067572c9caeeb861624ffe77c28ef2c8f688c5c16d9fc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:37:16 np0005629333 systemd[1]: Started libpod-conmon-0f5a6cd3a595ccc181067572c9caeeb861624ffe77c28ef2c8f688c5c16d9fc4.scope.
Feb 25 08:37:16 np0005629333 podman[408939]: 2026-02-25 13:37:16.751876619 +0000 UTC m=+0.038196269 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:37:16 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:37:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce3221252e21fb9722ae8bd1c020a8226522ce26a8d8831390416c6453a248c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:37:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce3221252e21fb9722ae8bd1c020a8226522ce26a8d8831390416c6453a248c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:37:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce3221252e21fb9722ae8bd1c020a8226522ce26a8d8831390416c6453a248c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:37:16 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce3221252e21fb9722ae8bd1c020a8226522ce26a8d8831390416c6453a248c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:37:16 np0005629333 podman[408939]: 2026-02-25 13:37:16.881800476 +0000 UTC m=+0.168120086 container init 0f5a6cd3a595ccc181067572c9caeeb861624ffe77c28ef2c8f688c5c16d9fc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldstine, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 08:37:16 np0005629333 podman[408939]: 2026-02-25 13:37:16.890383728 +0000 UTC m=+0.176703278 container start 0f5a6cd3a595ccc181067572c9caeeb861624ffe77c28ef2c8f688c5c16d9fc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:37:16 np0005629333 podman[408939]: 2026-02-25 13:37:16.893588569 +0000 UTC m=+0.179908129 container attach 0f5a6cd3a595ccc181067572c9caeeb861624ffe77c28ef2c8f688c5c16d9fc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldstine, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]: {
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:    "0": [
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:        {
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "devices": [
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "/dev/loop3"
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            ],
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "lv_name": "ceph_lv0",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "lv_size": "21470642176",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "name": "ceph_lv0",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "tags": {
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.cluster_name": "ceph",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.crush_device_class": "",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.encrypted": "0",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.objectstore": "bluestore",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.osd_id": "0",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.type": "block",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.vdo": "0",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.with_tpm": "0"
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            },
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "type": "block",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "vg_name": "ceph_vg0"
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:        }
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:    ],
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:    "1": [
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:        {
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "devices": [
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "/dev/loop4"
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            ],
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "lv_name": "ceph_lv1",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "lv_size": "21470642176",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "name": "ceph_lv1",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "tags": {
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.cluster_name": "ceph",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.crush_device_class": "",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.encrypted": "0",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.objectstore": "bluestore",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.osd_id": "1",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.type": "block",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.vdo": "0",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.with_tpm": "0"
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            },
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "type": "block",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "vg_name": "ceph_vg1"
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:        }
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:    ],
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:    "2": [
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:        {
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "devices": [
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "/dev/loop5"
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            ],
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "lv_name": "ceph_lv2",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "lv_size": "21470642176",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "name": "ceph_lv2",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "tags": {
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.cluster_name": "ceph",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.crush_device_class": "",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.encrypted": "0",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.objectstore": "bluestore",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.osd_id": "2",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.type": "block",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.vdo": "0",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:                "ceph.with_tpm": "0"
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            },
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "type": "block",
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:            "vg_name": "ceph_vg2"
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:        }
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]:    ]
Feb 25 08:37:17 np0005629333 beautiful_goldstine[408956]: }
Feb 25 08:37:17 np0005629333 systemd[1]: libpod-0f5a6cd3a595ccc181067572c9caeeb861624ffe77c28ef2c8f688c5c16d9fc4.scope: Deactivated successfully.
Feb 25 08:37:17 np0005629333 podman[408939]: 2026-02-25 13:37:17.203595198 +0000 UTC m=+0.489914798 container died 0f5a6cd3a595ccc181067572c9caeeb861624ffe77c28ef2c8f688c5c16d9fc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Feb 25 08:37:17 np0005629333 systemd[1]: var-lib-containers-storage-overlay-cce3221252e21fb9722ae8bd1c020a8226522ce26a8d8831390416c6453a248c-merged.mount: Deactivated successfully.
Feb 25 08:37:17 np0005629333 podman[408939]: 2026-02-25 13:37:17.253129576 +0000 UTC m=+0.539449176 container remove 0f5a6cd3a595ccc181067572c9caeeb861624ffe77c28ef2c8f688c5c16d9fc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goldstine, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 08:37:17 np0005629333 systemd[1]: libpod-conmon-0f5a6cd3a595ccc181067572c9caeeb861624ffe77c28ef2c8f688c5c16d9fc4.scope: Deactivated successfully.
Feb 25 08:37:17 np0005629333 podman[409038]: 2026-02-25 13:37:17.732975127 +0000 UTC m=+0.050890517 container create c3f0167d26eb5c68196ce90aa303eef79ff9bbe2dfd84a70276fc7d91e86bd15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_brattain, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:37:17 np0005629333 systemd[1]: Started libpod-conmon-c3f0167d26eb5c68196ce90aa303eef79ff9bbe2dfd84a70276fc7d91e86bd15.scope.
Feb 25 08:37:17 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:37:17 np0005629333 podman[409038]: 2026-02-25 13:37:17.708558438 +0000 UTC m=+0.026473918 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:37:17 np0005629333 podman[409038]: 2026-02-25 13:37:17.806556504 +0000 UTC m=+0.124471954 container init c3f0167d26eb5c68196ce90aa303eef79ff9bbe2dfd84a70276fc7d91e86bd15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_brattain, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 08:37:17 np0005629333 podman[409038]: 2026-02-25 13:37:17.814963081 +0000 UTC m=+0.132878461 container start c3f0167d26eb5c68196ce90aa303eef79ff9bbe2dfd84a70276fc7d91e86bd15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:37:17 np0005629333 podman[409038]: 2026-02-25 13:37:17.818627875 +0000 UTC m=+0.136543295 container attach c3f0167d26eb5c68196ce90aa303eef79ff9bbe2dfd84a70276fc7d91e86bd15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_brattain, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:37:17 np0005629333 nervous_brattain[409054]: 167 167
Feb 25 08:37:17 np0005629333 systemd[1]: libpod-c3f0167d26eb5c68196ce90aa303eef79ff9bbe2dfd84a70276fc7d91e86bd15.scope: Deactivated successfully.
Feb 25 08:37:17 np0005629333 podman[409038]: 2026-02-25 13:37:17.820878288 +0000 UTC m=+0.138793708 container died c3f0167d26eb5c68196ce90aa303eef79ff9bbe2dfd84a70276fc7d91e86bd15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_brattain, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:37:17 np0005629333 systemd[1]: var-lib-containers-storage-overlay-2d292c59b9b86d003400440795c4278bcab64b611aed77c44835df3290f43647-merged.mount: Deactivated successfully.
Feb 25 08:37:17 np0005629333 podman[409038]: 2026-02-25 13:37:17.864387796 +0000 UTC m=+0.182303396 container remove c3f0167d26eb5c68196ce90aa303eef79ff9bbe2dfd84a70276fc7d91e86bd15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 08:37:17 np0005629333 systemd[1]: libpod-conmon-c3f0167d26eb5c68196ce90aa303eef79ff9bbe2dfd84a70276fc7d91e86bd15.scope: Deactivated successfully.
Feb 25 08:37:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3522: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:18 np0005629333 podman[409079]: 2026-02-25 13:37:18.01795362 +0000 UTC m=+0.058804221 container create de70c9c0280d7d3af6383cea4a9f64d6c4b06138ac423d22584c8f08a9a3f1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 08:37:18 np0005629333 systemd[1]: Started libpod-conmon-de70c9c0280d7d3af6383cea4a9f64d6c4b06138ac423d22584c8f08a9a3f1f4.scope.
Feb 25 08:37:18 np0005629333 podman[409079]: 2026-02-25 13:37:17.99136029 +0000 UTC m=+0.032210941 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:37:18 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:37:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a2812c59015fad87faa53daa6e551ed329b7349cd5d0c8336998d75a796f0f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:37:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a2812c59015fad87faa53daa6e551ed329b7349cd5d0c8336998d75a796f0f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:37:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a2812c59015fad87faa53daa6e551ed329b7349cd5d0c8336998d75a796f0f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:37:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a2812c59015fad87faa53daa6e551ed329b7349cd5d0c8336998d75a796f0f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:37:18 np0005629333 podman[409079]: 2026-02-25 13:37:18.122191152 +0000 UTC m=+0.163041723 container init de70c9c0280d7d3af6383cea4a9f64d6c4b06138ac423d22584c8f08a9a3f1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_buck, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:37:18 np0005629333 podman[409079]: 2026-02-25 13:37:18.1306153 +0000 UTC m=+0.171465881 container start de70c9c0280d7d3af6383cea4a9f64d6c4b06138ac423d22584c8f08a9a3f1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_buck, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:37:18 np0005629333 podman[409079]: 2026-02-25 13:37:18.134666364 +0000 UTC m=+0.175516975 container attach de70c9c0280d7d3af6383cea4a9f64d6c4b06138ac423d22584c8f08a9a3f1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_buck, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 08:37:18 np0005629333 nova_compute[244014]: 2026-02-25 13:37:18.327 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:37:18 np0005629333 lvm[409172]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:37:18 np0005629333 lvm[409172]: VG ceph_vg0 finished
Feb 25 08:37:18 np0005629333 lvm[409175]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:37:18 np0005629333 lvm[409175]: VG ceph_vg1 finished
Feb 25 08:37:18 np0005629333 lvm[409177]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:37:18 np0005629333 lvm[409177]: VG ceph_vg2 finished
Feb 25 08:37:18 np0005629333 adoring_buck[409096]: {}
Feb 25 08:37:18 np0005629333 nova_compute[244014]: 2026-02-25 13:37:18.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:37:18 np0005629333 nova_compute[244014]: 2026-02-25 13:37:18.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 08:37:18 np0005629333 systemd[1]: libpod-de70c9c0280d7d3af6383cea4a9f64d6c4b06138ac423d22584c8f08a9a3f1f4.scope: Deactivated successfully.
Feb 25 08:37:18 np0005629333 podman[409079]: 2026-02-25 13:37:18.902808603 +0000 UTC m=+0.943659174 container died de70c9c0280d7d3af6383cea4a9f64d6c4b06138ac423d22584c8f08a9a3f1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_buck, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 08:37:18 np0005629333 systemd[1]: libpod-de70c9c0280d7d3af6383cea4a9f64d6c4b06138ac423d22584c8f08a9a3f1f4.scope: Consumed 1.070s CPU time.
Feb 25 08:37:18 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7a2812c59015fad87faa53daa6e551ed329b7349cd5d0c8336998d75a796f0f5-merged.mount: Deactivated successfully.
Feb 25 08:37:18 np0005629333 podman[409079]: 2026-02-25 13:37:18.948244535 +0000 UTC m=+0.989095106 container remove de70c9c0280d7d3af6383cea4a9f64d6c4b06138ac423d22584c8f08a9a3f1f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:37:18 np0005629333 systemd[1]: libpod-conmon-de70c9c0280d7d3af6383cea4a9f64d6c4b06138ac423d22584c8f08a9a3f1f4.scope: Deactivated successfully.
Feb 25 08:37:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:37:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:37:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:37:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:37:19 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:37:19 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:37:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:37:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3523: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:20 np0005629333 nova_compute[244014]: 2026-02-25 13:37:20.884 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:37:21 np0005629333 podman[409218]: 2026-02-25 13:37:21.77280479 +0000 UTC m=+0.107358721 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 08:37:21 np0005629333 podman[409219]: 2026-02-25 13:37:21.792078084 +0000 UTC m=+0.127066937 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 08:37:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3524: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:22 np0005629333 nova_compute[244014]: 2026-02-25 13:37:22.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:37:23 np0005629333 nova_compute[244014]: 2026-02-25 13:37:23.330 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:37:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3525: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:37:25 np0005629333 nova_compute[244014]: 2026-02-25 13:37:25.888 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:37:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3526: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:26 np0005629333 nova_compute[244014]: 2026-02-25 13:37:26.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:37:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3527: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:28 np0005629333 nova_compute[244014]: 2026-02-25 13:37:28.332 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:37:28 np0005629333 nova_compute[244014]: 2026-02-25 13:37:28.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:37:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:37:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3528: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:30 np0005629333 nova_compute[244014]: 2026-02-25 13:37:30.890 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:37:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:37:31
Feb 25 08:37:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:37:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:37:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['backups', 'default.rgw.log', '.mgr', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'vms', 'default.rgw.control', 'volumes']
Feb 25 08:37:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:37:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:37:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:37:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:37:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:37:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:37:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:37:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3529: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:37:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:37:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:37:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:37:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:37:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:37:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:37:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:37:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:37:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:37:32 np0005629333 nova_compute[244014]: 2026-02-25 13:37:32.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:37:33 np0005629333 nova_compute[244014]: 2026-02-25 13:37:33.374 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:37:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3530: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:37:35 np0005629333 nova_compute[244014]: 2026-02-25 13:37:35.892 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:37:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3531: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3532: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:38 np0005629333 nova_compute[244014]: 2026-02-25 13:37:38.376 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:37:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:37:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3533: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:40 np0005629333 nova_compute[244014]: 2026-02-25 13:37:40.894 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:37:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3534: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:43 np0005629333 nova_compute[244014]: 2026-02-25 13:37:43.378 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:37:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:37:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:37:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:37:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:37:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:37:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:37:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:37:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:37:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:37:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:37:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:37:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:37:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:37:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:37:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:37:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:37:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:37:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:37:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:37:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:37:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:37:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:37:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 08:37:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3535: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:37:45 np0005629333 nova_compute[244014]: 2026-02-25 13:37:45.896 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:37:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3536: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:37:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3521667704' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:37:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:37:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3521667704' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:37:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3537: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:48 np0005629333 nova_compute[244014]: 2026-02-25 13:37:48.428 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:37:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:37:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3538: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:50 np0005629333 nova_compute[244014]: 2026-02-25 13:37:50.900 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:37:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3539: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:52 np0005629333 podman[409267]: 2026-02-25 13:37:52.726925157 +0000 UTC m=+0.064908034 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Feb 25 08:37:52 np0005629333 podman[409268]: 2026-02-25 13:37:52.746761717 +0000 UTC m=+0.085631250 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Feb 25 08:37:53 np0005629333 nova_compute[244014]: 2026-02-25 13:37:53.429 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:37:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3540: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:37:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:37:55.081 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:37:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:37:55.081 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:37:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:37:55.081 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:37:55 np0005629333 nova_compute[244014]: 2026-02-25 13:37:55.935 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:37:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3541: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3542: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:37:58 np0005629333 nova_compute[244014]: 2026-02-25 13:37:58.433 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:37:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:38:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3543: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:00 np0005629333 nova_compute[244014]: 2026-02-25 13:38:00.939 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:38:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:38:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:38:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:38:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:38:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:38:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:38:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3544: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:03 np0005629333 nova_compute[244014]: 2026-02-25 13:38:03.435 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:38:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3545: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:38:05 np0005629333 nova_compute[244014]: 2026-02-25 13:38:05.941 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:38:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3546: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:06 np0005629333 nova_compute[244014]: 2026-02-25 13:38:06.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:38:06 np0005629333 nova_compute[244014]: 2026-02-25 13:38:06.926 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:38:06 np0005629333 nova_compute[244014]: 2026-02-25 13:38:06.926 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:38:06 np0005629333 nova_compute[244014]: 2026-02-25 13:38:06.927 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:38:06 np0005629333 nova_compute[244014]: 2026-02-25 13:38:06.927 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 08:38:06 np0005629333 nova_compute[244014]: 2026-02-25 13:38:06.927 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:38:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:38:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4070224935' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:38:07 np0005629333 nova_compute[244014]: 2026-02-25 13:38:07.474 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:38:07 np0005629333 nova_compute[244014]: 2026-02-25 13:38:07.668 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 08:38:07 np0005629333 nova_compute[244014]: 2026-02-25 13:38:07.669 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3525MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 08:38:07 np0005629333 nova_compute[244014]: 2026-02-25 13:38:07.670 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:38:07 np0005629333 nova_compute[244014]: 2026-02-25 13:38:07.670 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:38:07 np0005629333 nova_compute[244014]: 2026-02-25 13:38:07.777 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 08:38:07 np0005629333 nova_compute[244014]: 2026-02-25 13:38:07.777 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 08:38:07 np0005629333 nova_compute[244014]: 2026-02-25 13:38:07.818 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:38:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3547: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:38:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4162965353' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:38:08 np0005629333 nova_compute[244014]: 2026-02-25 13:38:08.362 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:38:08 np0005629333 nova_compute[244014]: 2026-02-25 13:38:08.369 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:38:08 np0005629333 nova_compute[244014]: 2026-02-25 13:38:08.383 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 08:38:08 np0005629333 nova_compute[244014]: 2026-02-25 13:38:08.385 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 08:38:08 np0005629333 nova_compute[244014]: 2026-02-25 13:38:08.385 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:38:08 np0005629333 nova_compute[244014]: 2026-02-25 13:38:08.437 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:38:09 np0005629333 nova_compute[244014]: 2026-02-25 13:38:09.387 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:38:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:38:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3548: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:10 np0005629333 nova_compute[244014]: 2026-02-25 13:38:10.944 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:38:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3549: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:13 np0005629333 nova_compute[244014]: 2026-02-25 13:38:13.441 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:38:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3550: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:38:15 np0005629333 nova_compute[244014]: 2026-02-25 13:38:15.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:38:15 np0005629333 nova_compute[244014]: 2026-02-25 13:38:15.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 08:38:15 np0005629333 nova_compute[244014]: 2026-02-25 13:38:15.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 08:38:15 np0005629333 nova_compute[244014]: 2026-02-25 13:38:15.894 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 08:38:15 np0005629333 nova_compute[244014]: 2026-02-25 13:38:15.947 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:38:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3551: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:16 np0005629333 nova_compute[244014]: 2026-02-25 13:38:16.891 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:38:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3552: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:18 np0005629333 nova_compute[244014]: 2026-02-25 13:38:18.444 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:38:18 np0005629333 nova_compute[244014]: 2026-02-25 13:38:18.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:38:18 np0005629333 nova_compute[244014]: 2026-02-25 13:38:18.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 08:38:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:38:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:38:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:38:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:38:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:38:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:38:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:38:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:38:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:38:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:38:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:38:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:38:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:38:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3553: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:20 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:38:20 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:38:20 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:38:20 np0005629333 podman[409502]: 2026-02-25 13:38:20.181073022 +0000 UTC m=+0.050653015 container create f1b8188469fc3f52afb68c3f18735f6086e178b70482971d3823a4a9069bc4aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_gould, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 08:38:20 np0005629333 systemd[1]: Started libpod-conmon-f1b8188469fc3f52afb68c3f18735f6086e178b70482971d3823a4a9069bc4aa.scope.
Feb 25 08:38:20 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:38:20 np0005629333 podman[409502]: 2026-02-25 13:38:20.244239885 +0000 UTC m=+0.113819899 container init f1b8188469fc3f52afb68c3f18735f6086e178b70482971d3823a4a9069bc4aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_gould, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Feb 25 08:38:20 np0005629333 podman[409502]: 2026-02-25 13:38:20.151791361 +0000 UTC m=+0.021371414 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:38:20 np0005629333 podman[409502]: 2026-02-25 13:38:20.248637032 +0000 UTC m=+0.118217015 container start f1b8188469fc3f52afb68c3f18735f6086e178b70482971d3823a4a9069bc4aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_gould, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 08:38:20 np0005629333 youthful_gould[409519]: 167 167
Feb 25 08:38:20 np0005629333 systemd[1]: libpod-f1b8188469fc3f52afb68c3f18735f6086e178b70482971d3823a4a9069bc4aa.scope: Deactivated successfully.
Feb 25 08:38:20 np0005629333 conmon[409519]: conmon f1b8188469fc3f52afb6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f1b8188469fc3f52afb68c3f18735f6086e178b70482971d3823a4a9069bc4aa.scope/container/memory.events
Feb 25 08:38:20 np0005629333 podman[409502]: 2026-02-25 13:38:20.259442032 +0000 UTC m=+0.129022025 container attach f1b8188469fc3f52afb68c3f18735f6086e178b70482971d3823a4a9069bc4aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_gould, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:38:20 np0005629333 podman[409502]: 2026-02-25 13:38:20.259870004 +0000 UTC m=+0.129450007 container died f1b8188469fc3f52afb68c3f18735f6086e178b70482971d3823a4a9069bc4aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 08:38:20 np0005629333 systemd[1]: var-lib-containers-storage-overlay-b393cf27f121cf6bd9ecea87b63bed72dc59c710ce1088d93be5a135c576069d-merged.mount: Deactivated successfully.
Feb 25 08:38:20 np0005629333 podman[409502]: 2026-02-25 13:38:20.315383888 +0000 UTC m=+0.184963911 container remove f1b8188469fc3f52afb68c3f18735f6086e178b70482971d3823a4a9069bc4aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_gould, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 08:38:20 np0005629333 systemd[1]: libpod-conmon-f1b8188469fc3f52afb68c3f18735f6086e178b70482971d3823a4a9069bc4aa.scope: Deactivated successfully.
Feb 25 08:38:20 np0005629333 podman[409543]: 2026-02-25 13:38:20.493913494 +0000 UTC m=+0.061479867 container create 637f72a9dae0a7e7579b20242d562ae9680a227ef29c76bb21d412212080e780 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_ganguly, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 08:38:20 np0005629333 systemd[1]: Started libpod-conmon-637f72a9dae0a7e7579b20242d562ae9680a227ef29c76bb21d412212080e780.scope.
Feb 25 08:38:20 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:38:20 np0005629333 podman[409543]: 2026-02-25 13:38:20.46799975 +0000 UTC m=+0.035566183 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:38:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5891961e7bdf14d2e5cf1e5f8e17a6da746e74d4006e5271fa05bc9d49551102/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:38:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5891961e7bdf14d2e5cf1e5f8e17a6da746e74d4006e5271fa05bc9d49551102/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:38:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5891961e7bdf14d2e5cf1e5f8e17a6da746e74d4006e5271fa05bc9d49551102/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:38:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5891961e7bdf14d2e5cf1e5f8e17a6da746e74d4006e5271fa05bc9d49551102/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:38:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5891961e7bdf14d2e5cf1e5f8e17a6da746e74d4006e5271fa05bc9d49551102/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:38:20 np0005629333 podman[409543]: 2026-02-25 13:38:20.591105144 +0000 UTC m=+0.158671537 container init 637f72a9dae0a7e7579b20242d562ae9680a227ef29c76bb21d412212080e780 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:38:20 np0005629333 podman[409543]: 2026-02-25 13:38:20.601224474 +0000 UTC m=+0.168790857 container start 637f72a9dae0a7e7579b20242d562ae9680a227ef29c76bb21d412212080e780 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:38:20 np0005629333 podman[409543]: 2026-02-25 13:38:20.60491135 +0000 UTC m=+0.172477793 container attach 637f72a9dae0a7e7579b20242d562ae9680a227ef29c76bb21d412212080e780 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 08:38:20 np0005629333 nova_compute[244014]: 2026-02-25 13:38:20.950 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:38:20 np0005629333 clever_ganguly[409560]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:38:20 np0005629333 clever_ganguly[409560]: --> All data devices are unavailable
Feb 25 08:38:21 np0005629333 systemd[1]: libpod-637f72a9dae0a7e7579b20242d562ae9680a227ef29c76bb21d412212080e780.scope: Deactivated successfully.
Feb 25 08:38:21 np0005629333 podman[409543]: 2026-02-25 13:38:21.004198834 +0000 UTC m=+0.571765217 container died 637f72a9dae0a7e7579b20242d562ae9680a227ef29c76bb21d412212080e780 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_ganguly, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:38:21 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5891961e7bdf14d2e5cf1e5f8e17a6da746e74d4006e5271fa05bc9d49551102-merged.mount: Deactivated successfully.
Feb 25 08:38:21 np0005629333 podman[409543]: 2026-02-25 13:38:21.063779314 +0000 UTC m=+0.631345667 container remove 637f72a9dae0a7e7579b20242d562ae9680a227ef29c76bb21d412212080e780 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_ganguly, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:38:21 np0005629333 systemd[1]: libpod-conmon-637f72a9dae0a7e7579b20242d562ae9680a227ef29c76bb21d412212080e780.scope: Deactivated successfully.
Feb 25 08:38:21 np0005629333 podman[409656]: 2026-02-25 13:38:21.510855759 +0000 UTC m=+0.037675902 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:38:21 np0005629333 podman[409656]: 2026-02-25 13:38:21.622561456 +0000 UTC m=+0.149381559 container create 03b75920a332f5d623120e35b239404a2937e4e28897abe1c08de44463b85878 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_montalcini, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 08:38:21 np0005629333 systemd[1]: Started libpod-conmon-03b75920a332f5d623120e35b239404a2937e4e28897abe1c08de44463b85878.scope.
Feb 25 08:38:21 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:38:21 np0005629333 podman[409656]: 2026-02-25 13:38:21.70732595 +0000 UTC m=+0.234146033 container init 03b75920a332f5d623120e35b239404a2937e4e28897abe1c08de44463b85878 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 08:38:21 np0005629333 podman[409656]: 2026-02-25 13:38:21.712960232 +0000 UTC m=+0.239780295 container start 03b75920a332f5d623120e35b239404a2937e4e28897abe1c08de44463b85878 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_montalcini, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 08:38:21 np0005629333 stoic_montalcini[409672]: 167 167
Feb 25 08:38:21 np0005629333 systemd[1]: libpod-03b75920a332f5d623120e35b239404a2937e4e28897abe1c08de44463b85878.scope: Deactivated successfully.
Feb 25 08:38:21 np0005629333 podman[409656]: 2026-02-25 13:38:21.721589439 +0000 UTC m=+0.248409532 container attach 03b75920a332f5d623120e35b239404a2937e4e28897abe1c08de44463b85878 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_montalcini, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 08:38:21 np0005629333 podman[409656]: 2026-02-25 13:38:21.722123055 +0000 UTC m=+0.248943118 container died 03b75920a332f5d623120e35b239404a2937e4e28897abe1c08de44463b85878 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 08:38:21 np0005629333 systemd[1]: var-lib-containers-storage-overlay-afc0b2b5f982e4585f7f933df1f82d26a43275af7a68bbf76530439bbbdc1369-merged.mount: Deactivated successfully.
Feb 25 08:38:21 np0005629333 podman[409656]: 2026-02-25 13:38:21.774830898 +0000 UTC m=+0.301650961 container remove 03b75920a332f5d623120e35b239404a2937e4e28897abe1c08de44463b85878 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_montalcini, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 08:38:21 np0005629333 systemd[1]: libpod-conmon-03b75920a332f5d623120e35b239404a2937e4e28897abe1c08de44463b85878.scope: Deactivated successfully.
Feb 25 08:38:21 np0005629333 podman[409695]: 2026-02-25 13:38:21.967543941 +0000 UTC m=+0.066274624 container create 7178765d24f441d5643427dadcba4eca8a6717b4b825c0b98c3d95550a02c4e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_ishizaka, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:38:22 np0005629333 systemd[1]: Started libpod-conmon-7178765d24f441d5643427dadcba4eca8a6717b4b825c0b98c3d95550a02c4e1.scope.
Feb 25 08:38:22 np0005629333 podman[409695]: 2026-02-25 13:38:21.938083215 +0000 UTC m=+0.036813948 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:38:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3554: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:22 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:38:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcfd305bc0b1fd9bc1593986989f082d3fc53d568bded84d0edd31e65259aeb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:38:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcfd305bc0b1fd9bc1593986989f082d3fc53d568bded84d0edd31e65259aeb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:38:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcfd305bc0b1fd9bc1593986989f082d3fc53d568bded84d0edd31e65259aeb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:38:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcfd305bc0b1fd9bc1593986989f082d3fc53d568bded84d0edd31e65259aeb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:38:22 np0005629333 podman[409695]: 2026-02-25 13:38:22.069168378 +0000 UTC m=+0.167899111 container init 7178765d24f441d5643427dadcba4eca8a6717b4b825c0b98c3d95550a02c4e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_ishizaka, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Feb 25 08:38:22 np0005629333 podman[409695]: 2026-02-25 13:38:22.078881577 +0000 UTC m=+0.177612230 container start 7178765d24f441d5643427dadcba4eca8a6717b4b825c0b98c3d95550a02c4e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_ishizaka, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:38:22 np0005629333 podman[409695]: 2026-02-25 13:38:22.084014184 +0000 UTC m=+0.182744877 container attach 7178765d24f441d5643427dadcba4eca8a6717b4b825c0b98c3d95550a02c4e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_ishizaka, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]: {
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:    "0": [
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:        {
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "devices": [
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "/dev/loop3"
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            ],
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "lv_name": "ceph_lv0",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "lv_size": "21470642176",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "name": "ceph_lv0",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "tags": {
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.cluster_name": "ceph",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.crush_device_class": "",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.encrypted": "0",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.objectstore": "bluestore",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.osd_id": "0",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.type": "block",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.vdo": "0",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.with_tpm": "0"
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            },
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "type": "block",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "vg_name": "ceph_vg0"
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:        }
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:    ],
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:    "1": [
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:        {
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "devices": [
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "/dev/loop4"
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            ],
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "lv_name": "ceph_lv1",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "lv_size": "21470642176",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "name": "ceph_lv1",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "tags": {
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.cluster_name": "ceph",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.crush_device_class": "",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.encrypted": "0",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.objectstore": "bluestore",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.osd_id": "1",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.type": "block",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.vdo": "0",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.with_tpm": "0"
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            },
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "type": "block",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "vg_name": "ceph_vg1"
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:        }
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:    ],
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:    "2": [
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:        {
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "devices": [
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "/dev/loop5"
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            ],
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "lv_name": "ceph_lv2",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "lv_size": "21470642176",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "name": "ceph_lv2",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "tags": {
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.cluster_name": "ceph",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.crush_device_class": "",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.encrypted": "0",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.objectstore": "bluestore",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.osd_id": "2",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.type": "block",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.vdo": "0",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:                "ceph.with_tpm": "0"
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            },
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "type": "block",
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:            "vg_name": "ceph_vg2"
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:        }
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]:    ]
Feb 25 08:38:22 np0005629333 focused_ishizaka[409711]: }
Feb 25 08:38:22 np0005629333 systemd[1]: libpod-7178765d24f441d5643427dadcba4eca8a6717b4b825c0b98c3d95550a02c4e1.scope: Deactivated successfully.
Feb 25 08:38:22 np0005629333 podman[409695]: 2026-02-25 13:38:22.407946543 +0000 UTC m=+0.506677266 container died 7178765d24f441d5643427dadcba4eca8a6717b4b825c0b98c3d95550a02c4e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_ishizaka, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 08:38:22 np0005629333 systemd[1]: var-lib-containers-storage-overlay-bdcfd305bc0b1fd9bc1593986989f082d3fc53d568bded84d0edd31e65259aeb-merged.mount: Deactivated successfully.
Feb 25 08:38:22 np0005629333 podman[409695]: 2026-02-25 13:38:22.468532933 +0000 UTC m=+0.567263596 container remove 7178765d24f441d5643427dadcba4eca8a6717b4b825c0b98c3d95550a02c4e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:38:22 np0005629333 systemd[1]: libpod-conmon-7178765d24f441d5643427dadcba4eca8a6717b4b825c0b98c3d95550a02c4e1.scope: Deactivated successfully.
Feb 25 08:38:22 np0005629333 podman[409794]: 2026-02-25 13:38:22.922135015 +0000 UTC m=+0.044670313 container create ecba3b05c3a2f5ce18264afe583639c6296ac81cddd81b3ad1a0273e11b0cd91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_einstein, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 08:38:22 np0005629333 systemd[1]: Started libpod-conmon-ecba3b05c3a2f5ce18264afe583639c6296ac81cddd81b3ad1a0273e11b0cd91.scope.
Feb 25 08:38:22 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:38:22 np0005629333 podman[409794]: 2026-02-25 13:38:22.902541773 +0000 UTC m=+0.025077101 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:38:23 np0005629333 podman[409794]: 2026-02-25 13:38:23.001586196 +0000 UTC m=+0.124121474 container init ecba3b05c3a2f5ce18264afe583639c6296ac81cddd81b3ad1a0273e11b0cd91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:38:23 np0005629333 podman[409794]: 2026-02-25 13:38:23.01043464 +0000 UTC m=+0.132969928 container start ecba3b05c3a2f5ce18264afe583639c6296ac81cddd81b3ad1a0273e11b0cd91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 08:38:23 np0005629333 gracious_einstein[409812]: 167 167
Feb 25 08:38:23 np0005629333 systemd[1]: libpod-ecba3b05c3a2f5ce18264afe583639c6296ac81cddd81b3ad1a0273e11b0cd91.scope: Deactivated successfully.
Feb 25 08:38:23 np0005629333 podman[409794]: 2026-02-25 13:38:23.015750263 +0000 UTC m=+0.138285561 container attach ecba3b05c3a2f5ce18264afe583639c6296ac81cddd81b3ad1a0273e11b0cd91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_einstein, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:38:23 np0005629333 podman[409794]: 2026-02-25 13:38:23.016450613 +0000 UTC m=+0.138985881 container died ecba3b05c3a2f5ce18264afe583639c6296ac81cddd81b3ad1a0273e11b0cd91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_einstein, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:38:23 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1068e2d16d9b57d537d31a69a973aae86e14e6bb87f30f01b4332aa6a9a7b123-merged.mount: Deactivated successfully.
Feb 25 08:38:23 np0005629333 podman[409794]: 2026-02-25 13:38:23.069682731 +0000 UTC m=+0.192218019 container remove ecba3b05c3a2f5ce18264afe583639c6296ac81cddd81b3ad1a0273e11b0cd91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_einstein, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:38:23 np0005629333 podman[409808]: 2026-02-25 13:38:23.074843009 +0000 UTC m=+0.107565290 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 08:38:23 np0005629333 systemd[1]: libpod-conmon-ecba3b05c3a2f5ce18264afe583639c6296ac81cddd81b3ad1a0273e11b0cd91.scope: Deactivated successfully.
Feb 25 08:38:23 np0005629333 podman[409811]: 2026-02-25 13:38:23.107296251 +0000 UTC m=+0.139668921 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 08:38:23 np0005629333 podman[409877]: 2026-02-25 13:38:23.210327729 +0000 UTC m=+0.038889457 container create f986928b5a050e84bfa1f1ec5fdfd9c08feb5809fb01b0b3bd99edf2cb304604 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_borg, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:38:23 np0005629333 systemd[1]: Started libpod-conmon-f986928b5a050e84bfa1f1ec5fdfd9c08feb5809fb01b0b3bd99edf2cb304604.scope.
Feb 25 08:38:23 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:38:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/267bdf6be22d4916161fa344824b9aaefdd3b97349199497ad84b74ea901d4d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:38:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/267bdf6be22d4916161fa344824b9aaefdd3b97349199497ad84b74ea901d4d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:38:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/267bdf6be22d4916161fa344824b9aaefdd3b97349199497ad84b74ea901d4d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:38:23 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/267bdf6be22d4916161fa344824b9aaefdd3b97349199497ad84b74ea901d4d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:38:23 np0005629333 podman[409877]: 2026-02-25 13:38:23.192471367 +0000 UTC m=+0.021033115 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:38:23 np0005629333 podman[409877]: 2026-02-25 13:38:23.295859135 +0000 UTC m=+0.124420873 container init f986928b5a050e84bfa1f1ec5fdfd9c08feb5809fb01b0b3bd99edf2cb304604 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_borg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:38:23 np0005629333 podman[409877]: 2026-02-25 13:38:23.301476296 +0000 UTC m=+0.130038034 container start f986928b5a050e84bfa1f1ec5fdfd9c08feb5809fb01b0b3bd99edf2cb304604 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_borg, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:38:23 np0005629333 podman[409877]: 2026-02-25 13:38:23.314062057 +0000 UTC m=+0.142623775 container attach f986928b5a050e84bfa1f1ec5fdfd9c08feb5809fb01b0b3bd99edf2cb304604 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:38:23 np0005629333 nova_compute[244014]: 2026-02-25 13:38:23.443 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:38:23 np0005629333 lvm[409969]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:38:23 np0005629333 lvm[409969]: VG ceph_vg0 finished
Feb 25 08:38:23 np0005629333 lvm[409972]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:38:23 np0005629333 lvm[409972]: VG ceph_vg1 finished
Feb 25 08:38:23 np0005629333 lvm[409974]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:38:23 np0005629333 lvm[409974]: VG ceph_vg2 finished
Feb 25 08:38:23 np0005629333 stoic_borg[409893]: {}
Feb 25 08:38:24 np0005629333 podman[409877]: 2026-02-25 13:38:24.019092238 +0000 UTC m=+0.847654186 container died f986928b5a050e84bfa1f1ec5fdfd9c08feb5809fb01b0b3bd99edf2cb304604 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:38:24 np0005629333 systemd[1]: libpod-f986928b5a050e84bfa1f1ec5fdfd9c08feb5809fb01b0b3bd99edf2cb304604.scope: Deactivated successfully.
Feb 25 08:38:24 np0005629333 systemd[1]: libpod-f986928b5a050e84bfa1f1ec5fdfd9c08feb5809fb01b0b3bd99edf2cb304604.scope: Consumed 1.041s CPU time.
Feb 25 08:38:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3555: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:24 np0005629333 systemd[1]: var-lib-containers-storage-overlay-267bdf6be22d4916161fa344824b9aaefdd3b97349199497ad84b74ea901d4d2-merged.mount: Deactivated successfully.
Feb 25 08:38:24 np0005629333 podman[409877]: 2026-02-25 13:38:24.086995958 +0000 UTC m=+0.915557676 container remove f986928b5a050e84bfa1f1ec5fdfd9c08feb5809fb01b0b3bd99edf2cb304604 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_borg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 08:38:24 np0005629333 systemd[1]: libpod-conmon-f986928b5a050e84bfa1f1ec5fdfd9c08feb5809fb01b0b3bd99edf2cb304604.scope: Deactivated successfully.
Feb 25 08:38:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:38:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:38:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:38:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:38:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:38:24 np0005629333 nova_compute[244014]: 2026-02-25 13:38:24.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:38:25 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:38:25 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:38:25 np0005629333 nova_compute[244014]: 2026-02-25 13:38:25.954 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:38:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3556: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:27 np0005629333 nova_compute[244014]: 2026-02-25 13:38:27.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:38:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3557: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:28 np0005629333 nova_compute[244014]: 2026-02-25 13:38:28.446 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:38:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:38:29 np0005629333 nova_compute[244014]: 2026-02-25 13:38:29.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #174. Immutable memtables: 0.
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.011510) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 174
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026710011585, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 1953, "num_deletes": 250, "total_data_size": 3328374, "memory_usage": 3377576, "flush_reason": "Manual Compaction"}
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #175: started
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026710034443, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 175, "file_size": 1898215, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72537, "largest_seqno": 74489, "table_properties": {"data_size": 1891909, "index_size": 3251, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16429, "raw_average_key_size": 20, "raw_value_size": 1877798, "raw_average_value_size": 2367, "num_data_blocks": 150, "num_entries": 793, "num_filter_entries": 793, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772026495, "oldest_key_time": 1772026495, "file_creation_time": 1772026710, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 175, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 22980 microseconds, and 7539 cpu microseconds.
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:38:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3558: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.034496) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #175: 1898215 bytes OK
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.034518) [db/memtable_list.cc:519] [default] Level-0 commit table #175 started
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.105168) [db/memtable_list.cc:722] [default] Level-0 commit table #175: memtable #1 done
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.105237) EVENT_LOG_v1 {"time_micros": 1772026710105223, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.105274) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 3320141, prev total WAL file size 3320141, number of live WAL files 2.
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000171.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.106534) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303033' seq:72057594037927935, type:22 .. '6D6772737461740033323534' seq:0, type:0; will stop at (end)
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [175(1853KB)], [173(10MB)]
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026710106609, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [175], "files_L6": [173], "score": -1, "input_data_size": 12656035, "oldest_snapshot_seqno": -1}
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #176: 9255 keys, 10658676 bytes, temperature: kUnknown
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026710329641, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 176, "file_size": 10658676, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10601959, "index_size": 32476, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23173, "raw_key_size": 240927, "raw_average_key_size": 26, "raw_value_size": 10441760, "raw_average_value_size": 1128, "num_data_blocks": 1261, "num_entries": 9255, "num_filter_entries": 9255, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772026710, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.330060) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 10658676 bytes
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.341968) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 56.7 rd, 47.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 10.3 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(12.3) write-amplify(5.6) OK, records in: 9666, records dropped: 411 output_compression: NoCompression
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.341999) EVENT_LOG_v1 {"time_micros": 1772026710341985, "job": 108, "event": "compaction_finished", "compaction_time_micros": 223193, "compaction_time_cpu_micros": 19932, "output_level": 6, "num_output_files": 1, "total_output_size": 10658676, "num_input_records": 9666, "num_output_records": 9255, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000175.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026710342303, "job": 108, "event": "table_file_deletion", "file_number": 175}
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026710343840, "job": 108, "event": "table_file_deletion", "file_number": 173}
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.106410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.344020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.344036) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.344042) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.344046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:38:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:30.344051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
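The amplification figures in job 108's summary above follow directly from the sizes logged by the same job: flushing 1,898,215 bytes from L0 forced a rewrite of the ~10 MB L6 file. A quick check of the arithmetic, using only numbers from the lines above:

```python
# Byte counts from rocksdb job 108 above.
l0_input = 1_898_215      # table 175, the fresh L0 flush
total_input = 12_656_035  # input_data_size from the compaction_started event
output = 10_658_676       # table 176, the rewritten L6 file

print(f"write-amplify      {output / l0_input:.1f}")                  # 5.6
print(f"read-write-amplify {(total_input + output) / l0_input:.1f}")  # 12.3
```

Jobs 110 and 112 below show the same effect more sharply: ever-smaller L0 flushes (148 KB, then 40 KB) rewriting the same ~10 MB L6 file push write-amplify to 70.4 and then 218.9.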
Feb 25 08:38:30 np0005629333 nova_compute[244014]: 2026-02-25 13:38:30.956 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:38:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:38:31
Feb 25 08:38:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:38:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:38:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'default.rgw.control', '.rgw.root', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'vms', 'images']
Feb 25 08:38:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
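A no-op balancer pass: in upmap mode with max misplaced 5%, all eleven pools were evaluated and 0 of an allowed 10 upmap changes were prepared, i.e. PG placement is already even. A small sketch, assuming only the message format above, for spotting passes that actually move data (`mgr.log` is a hypothetical extract of this journal):

```python
import re

BALANCER = re.compile(r"prepared (\d+)/(\d+) upmap changes")

def active_balancer_passes(lines):
    """Yield (prepared, limit) for every balancer pass that made changes."""
    for line in lines:
        m = BALANCER.search(line)
        if m and int(m.group(1)) > 0:
            yield int(m.group(1)), int(m.group(2))

with open("mgr.log") as f:  # hypothetical extract of this journal
    for prepared, limit in active_balancer_passes(f):
        print(f"balancer prepared {prepared}/{limit} upmap changes")
```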
Feb 25 08:38:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:38:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:38:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:38:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:38:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:38:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:38:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3559: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:38:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:38:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:38:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:38:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:38:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:38:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:38:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:38:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:38:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:38:33 np0005629333 nova_compute[244014]: 2026-02-25 13:38:33.448 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:38:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3560: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:38:34 np0005629333 nova_compute[244014]: 2026-02-25 13:38:34.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:38:35 np0005629333 nova_compute[244014]: 2026-02-25 13:38:35.959 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:38:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3561: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3562: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:38 np0005629333 nova_compute[244014]: 2026-02-25 13:38:38.450 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #177. Immutable memtables: 0.
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.738328) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 177
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026719738397, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 329, "num_deletes": 254, "total_data_size": 153185, "memory_usage": 159480, "flush_reason": "Manual Compaction"}
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #178: started
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026719740888, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 178, "file_size": 152257, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74490, "largest_seqno": 74818, "table_properties": {"data_size": 150134, "index_size": 286, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5048, "raw_average_key_size": 17, "raw_value_size": 146043, "raw_average_value_size": 508, "num_data_blocks": 13, "num_entries": 287, "num_filter_entries": 287, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772026711, "oldest_key_time": 1772026711, "file_creation_time": 1772026719, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 178, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 2581 microseconds, and 977 cpu microseconds.
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.740922) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #178: 152257 bytes OK
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.740937) [db/memtable_list.cc:519] [default] Level-0 commit table #178 started
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.742020) [db/memtable_list.cc:722] [default] Level-0 commit table #178: memtable #1 done
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.742032) EVENT_LOG_v1 {"time_micros": 1772026719742028, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.742049) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 150892, prev total WAL file size 150892, number of live WAL files 2.
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000174.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.742392) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323637' seq:72057594037927935, type:22 .. '6C6F676D0033353138' seq:0, type:0; will stop at (end)
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [178(148KB)], [176(10MB)]
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026719742432, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [178], "files_L6": [176], "score": -1, "input_data_size": 10810933, "oldest_snapshot_seqno": -1}
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #179: 9027 keys, 10722315 bytes, temperature: kUnknown
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026719810092, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 179, "file_size": 10722315, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10666330, "index_size": 32320, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22597, "raw_key_size": 237117, "raw_average_key_size": 26, "raw_value_size": 10509285, "raw_average_value_size": 1164, "num_data_blocks": 1253, "num_entries": 9027, "num_filter_entries": 9027, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772026719, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.810510) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 10722315 bytes
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.812314) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.5 rd, 158.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 10.2 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(141.4) write-amplify(70.4) OK, records in: 9542, records dropped: 515 output_compression: NoCompression
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.812359) EVENT_LOG_v1 {"time_micros": 1772026719812339, "job": 110, "event": "compaction_finished", "compaction_time_micros": 67788, "compaction_time_cpu_micros": 19816, "output_level": 6, "num_output_files": 1, "total_output_size": 10722315, "num_input_records": 9542, "num_output_records": 9027, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000178.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026719812589, "job": 110, "event": "table_file_deletion", "file_number": 178}
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026719815269, "job": 110, "event": "table_file_deletion", "file_number": 176}
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.742307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.815390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.815399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.815402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.815405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:38:39 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:39.815408) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:38:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3563: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #180. Immutable memtables: 0.
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.759847) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 180
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026720759894, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 270, "num_deletes": 251, "total_data_size": 41355, "memory_usage": 46536, "flush_reason": "Manual Compaction"}
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #181: started
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026720762684, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 181, "file_size": 41289, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74819, "largest_seqno": 75088, "table_properties": {"data_size": 39405, "index_size": 112, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4821, "raw_average_key_size": 18, "raw_value_size": 35790, "raw_average_value_size": 135, "num_data_blocks": 5, "num_entries": 264, "num_filter_entries": 264, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772026720, "oldest_key_time": 1772026720, "file_creation_time": 1772026720, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 181, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 2907 microseconds, and 903 cpu microseconds.
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.762758) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #181: 41289 bytes OK
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.762777) [db/memtable_list.cc:519] [default] Level-0 commit table #181 started
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.764448) [db/memtable_list.cc:722] [default] Level-0 commit table #181: memtable #1 done
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.764474) EVENT_LOG_v1 {"time_micros": 1772026720764466, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.764498) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 39286, prev total WAL file size 39286, number of live WAL files 2.
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000177.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.765093) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [181(40KB)], [179(10MB)]
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026720765136, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [181], "files_L6": [179], "score": -1, "input_data_size": 10763604, "oldest_snapshot_seqno": -1}
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #182: 8782 keys, 9038677 bytes, temperature: kUnknown
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026720830390, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 182, "file_size": 9038677, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8985917, "index_size": 29671, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22021, "raw_key_size": 232719, "raw_average_key_size": 26, "raw_value_size": 8834733, "raw_average_value_size": 1006, "num_data_blocks": 1132, "num_entries": 8782, "num_filter_entries": 8782, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772026720, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.830813) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 9038677 bytes
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.832650) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.7 rd, 138.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 10.2 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(479.6) write-amplify(218.9) OK, records in: 9291, records dropped: 509 output_compression: NoCompression
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.832750) EVENT_LOG_v1 {"time_micros": 1772026720832722, "job": 112, "event": "compaction_finished", "compaction_time_micros": 65349, "compaction_time_cpu_micros": 35833, "output_level": 6, "num_output_files": 1, "total_output_size": 9038677, "num_input_records": 9291, "num_output_records": 8782, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000181.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026720833016, "job": 112, "event": "table_file_deletion", "file_number": 181}
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026720834846, "job": 112, "event": "table_file_deletion", "file_number": 179}
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.764972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.834962) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.835003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.835008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.835013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:38:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:38:40.835017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:38:40 np0005629333 nova_compute[244014]: 2026-02-25 13:38:40.963 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:38:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3564: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:43 np0005629333 nova_compute[244014]: 2026-02-25 13:38:43.452 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:38:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:38:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:38:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:38:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:38:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:38:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:38:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:38:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:38:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:38:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:38:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:38:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:38:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:38:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:38:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:38:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:38:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:38:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:38:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:38:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:38:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:38:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:38:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
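Each `pg target` above is simply capacity_ratio × bias × a cluster-wide PG budget: dividing any target by its `using ... of space` ratio gives exactly 300 with bias 1.0 and 1200 with bias 4.0. A budget of 300 is consistent with, e.g., mon_target_pg_per_osd = 100 across 3 OSDs, though that split is an assumption; only the 300 itself is visible in the log. A worked check against the 'images' pool:

```python
# Values copied verbatim from the pg_autoscaler lines above.
capacity_ratio = 0.0006714637386478266  # 'images' share of raw capacity
bias = 1.0
pg_budget = 300  # e.g. mon_target_pg_per_osd=100 * 3 OSDs (assumed split)

pg_target = capacity_ratio * bias * pg_budget
print(pg_target)  # 0.20143912159434796 -- matches the logged pg target

# The raw target is then quantized, and pools stay at their current pg_num
# (32 here) unless the ideal count differs from it by a large factor
# (3x by default in the autoscaler -- stated here as an assumption).
```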
Feb 25 08:38:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3565: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:38:45 np0005629333 nova_compute[244014]: 2026-02-25 13:38:45.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:38:45 np0005629333 nova_compute[244014]: 2026-02-25 13:38:45.965 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:38:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3566: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:46 np0005629333 nova_compute[244014]: 2026-02-25 13:38:46.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:38:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:38:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1627703015' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:38:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:38:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1627703015' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:38:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3567: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:48 np0005629333 nova_compute[244014]: 2026-02-25 13:38:48.454 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:38:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:38:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3568: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:50 np0005629333 nova_compute[244014]: 2026-02-25 13:38:50.968 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:38:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3569: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:53 np0005629333 nova_compute[244014]: 2026-02-25 13:38:53.486 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:38:53 np0005629333 podman[410015]: 2026-02-25 13:38:53.747268007 +0000 UTC m=+0.074059397 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:38:53 np0005629333 podman[410016]: 2026-02-25 13:38:53.782996923 +0000 UTC m=+0.109555226 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
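Both health_status events above embed the container's entire managed configuration as a Python-literal `config_data` dict (bind mounts, environment, and the `/openstack/healthcheck` test that just reported healthy). Since it is a Python literal rather than JSON, `ast.literal_eval` can recover it; a sketch with a brace-matching helper inferred from the two lines above:

```python
import ast

def extract_config(event_line):
    """Recover the config_data={...} literal from a podman event line."""
    start = event_line.index("config_data=") + len("config_data=")
    depth = 0
    for i, ch in enumerate(event_line[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:  # matched the outermost brace
                return ast.literal_eval(event_line[start : i + 1])
    raise ValueError("unbalanced config_data literal")

# With 'line' holding either event above:
#   extract_config(line)["healthcheck"]["test"]  -> '/openstack/healthcheck'
#   len(extract_config(line)["volumes"])         -> number of bind mounts
```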
Feb 25 08:38:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3570: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:38:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:38:55.082 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:38:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:38:55.082 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:38:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:38:55.082 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
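The acquire/release pair above is oslo.concurrency's `lockutils` at work: ProcessMonitor's child-process check runs under a named in-process lock so overlapping timer ticks cannot interleave. A minimal sketch of the same pattern (simplified; not neutron's actual code):

```python
from oslo_concurrency import lockutils


@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    # Runs with the named lock held; concurrent callers serialize,
    # producing acquire/release pairs like the DEBUG lines above.
    pass
```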
Feb 25 08:38:55 np0005629333 nova_compute[244014]: 2026-02-25 13:38:55.972 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:38:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3571: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3572: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:38:58 np0005629333 nova_compute[244014]: 2026-02-25 13:38:58.488 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:38:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:39:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3573: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:00 np0005629333 nova_compute[244014]: 2026-02-25 13:39:00.976 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:39:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:39:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:39:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:39:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:39:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:39:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:39:01 np0005629333 nova_compute[244014]: 2026-02-25 13:39:01.892 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:39:01 np0005629333 nova_compute[244014]: 2026-02-25 13:39:01.892 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 08:39:01 np0005629333 nova_compute[244014]: 2026-02-25 13:39:01.911 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 08:39:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3574: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:03 np0005629333 nova_compute[244014]: 2026-02-25 13:39:03.524 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:39:03 np0005629333 nova_compute[244014]: 2026-02-25 13:39:03.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:39:03 np0005629333 nova_compute[244014]: 2026-02-25 13:39:03.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 08:39:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3575: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:39:06 np0005629333 nova_compute[244014]: 2026-02-25 13:39:06.011 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:39:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3576: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:06 np0005629333 nova_compute[244014]: 2026-02-25 13:39:06.891 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:39:06 np0005629333 nova_compute[244014]: 2026-02-25 13:39:06.934 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:39:06 np0005629333 nova_compute[244014]: 2026-02-25 13:39:06.934 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:39:06 np0005629333 nova_compute[244014]: 2026-02-25 13:39:06.935 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:39:06 np0005629333 nova_compute[244014]: 2026-02-25 13:39:06.935 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 08:39:06 np0005629333 nova_compute[244014]: 2026-02-25 13:39:06.936 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:39:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:39:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1447459465' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:39:07 np0005629333 nova_compute[244014]: 2026-02-25 13:39:07.557 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
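The resource audit shells out to `ceph df` for pool capacity, exactly as logged above (0.621 s round trip, and the monitor's matching audit entry appears just before). The call can be reproduced stand-alone, assuming the client.openstack keyring and the /etc/ceph/ceph.conf from the log are readable:

```python
import json
import subprocess

# The exact command nova ran above; requires the client.openstack keyring.
cmd = ["ceph", "df", "--format=json",
       "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

stats = json.loads(out)
# Cluster-wide totals live under "stats" in ceph df's JSON output.
print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])
```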
Feb 25 08:39:07 np0005629333 nova_compute[244014]: 2026-02-25 13:39:07.766 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:39:07 np0005629333 nova_compute[244014]: 2026-02-25 13:39:07.768 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3548MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:39:07 np0005629333 nova_compute[244014]: 2026-02-25 13:39:07.768 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:39:07 np0005629333 nova_compute[244014]: 2026-02-25 13:39:07.769 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:39:07 np0005629333 nova_compute[244014]: 2026-02-25 13:39:07.838 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:39:07 np0005629333 nova_compute[244014]: 2026-02-25 13:39:07.839 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:39:07 np0005629333 nova_compute[244014]: 2026-02-25 13:39:07.859 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:39:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3577: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:39:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/641077900' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:39:08 np0005629333 nova_compute[244014]: 2026-02-25 13:39:08.447 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:39:08 np0005629333 nova_compute[244014]: 2026-02-25 13:39:08.453 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:39:08 np0005629333 nova_compute[244014]: 2026-02-25 13:39:08.470 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:39:08 np0005629333 nova_compute[244014]: 2026-02-25 13:39:08.472 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:39:08 np0005629333 nova_compute[244014]: 2026-02-25 13:39:08.472 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
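
[note] The inventory dict logged at 13:39:08.470 is what the scheduler report client pushes to placement; the schedulable headroom per resource class follows from total, reserved and allocation_ratio. A sketch of that arithmetic — the dict literal is copied from the log, and the formula is placement's documented capacity calculation, (total - reserved) * allocation_ratio:

    # Capacity placement schedules against, per resource class, from the
    # inventory logged above. Formula assumed from placement's usage docs:
    # (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
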
Feb 25 08:39:08 np0005629333 nova_compute[244014]: 2026-02-25 13:39:08.526 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:39:09 np0005629333 nova_compute[244014]: 2026-02-25 13:39:09.457 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:39:09 np0005629333 nova_compute[244014]: 2026-02-25 13:39:09.738 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:39:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:39:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3578: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:11 np0005629333 nova_compute[244014]: 2026-02-25 13:39:11.013 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:39:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3579: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 08:39:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6600.0 total, 600.0 interval
Cumulative writes: 16K writes, 75K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.02 MB/s
Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.10 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1326 writes, 6497 keys, 1326 commit groups, 1.0 writes per commit group, ingest: 8.73 MB, 0.01 MB/s
Interval WAL: 1326 writes, 1326 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     52.2      1.74              0.25        56    0.031       0      0       0.0       0.0
  L6      1/0    8.62 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.4    101.6     87.0      5.60              1.41        55    0.102    386K    29K       0.0       0.0
 Sum      1/0    8.62 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.4     77.6     78.7      7.34              1.66       111    0.066    386K    29K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0  10.1     87.7     85.6      0.88              0.21        14    0.063     65K   3413       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0    101.6     87.0      5.60              1.41        55    0.102    386K    29K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     52.3      1.73              0.25        55    0.032       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6600.0 total, 600.0 interval
Flush(GB): cumulative 0.089, interval 0.007
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.56 GB write, 0.09 MB/s write, 0.56 GB read, 0.09 MB/s read, 7.3 seconds
Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.9 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561a1af858d0#2 capacity: 304.00 MB usage: 64.98 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.001126 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(4046,62.28 MB,20.4875%) FilterBlock(112,1.05 MB,0.34394%) IndexBlock(112,1.66 MB,0.545015%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
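
[note] The RocksDB dump above arrived as one flattened journal line (expanded here for readability); its #012 separators, like the #033[00m tails on the nova lines, are rsyslog/journal-style escapes: '#' plus three octal digits standing in for a non-printable byte (#012 = newline, #033 = ESC). A small decoder under that assumption — note it is naive and would also rewrite a literal '#' followed by three octal digits:

    import re

    def unescape_journal(line: str) -> str:
        # '#' + 3 octal digits -> the corresponding byte,
        # e.g. #012 -> '\n', #033 -> '\x1b' (ESC).
        return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), line)

    raw = "pgmap line#012second line#033[00m"
    print(unescape_journal(raw))
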
Feb 25 08:39:13 np0005629333 nova_compute[244014]: 2026-02-25 13:39:13.566 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:39:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3580: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:39:16 np0005629333 nova_compute[244014]: 2026-02-25 13:39:16.017 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:39:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3581: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:16 np0005629333 nova_compute[244014]: 2026-02-25 13:39:16.874 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:39:16 np0005629333 nova_compute[244014]: 2026-02-25 13:39:16.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:39:16 np0005629333 nova_compute[244014]: 2026-02-25 13:39:16.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:39:16 np0005629333 nova_compute[244014]: 2026-02-25 13:39:16.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:39:16 np0005629333 nova_compute[244014]: 2026-02-25 13:39:16.891 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:39:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3582: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:18 np0005629333 nova_compute[244014]: 2026-02-25 13:39:18.568 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:39:18 np0005629333 nova_compute[244014]: 2026-02-25 13:39:18.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:39:18 np0005629333 nova_compute[244014]: 2026-02-25 13:39:18.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 08:39:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:39:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3583: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:21 np0005629333 nova_compute[244014]: 2026-02-25 13:39:21.020 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:39:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3584: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:23 np0005629333 nova_compute[244014]: 2026-02-25 13:39:23.599 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:39:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3585: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:24 np0005629333 podman[410130]: 2026-02-25 13:39:24.394881134 +0000 UTC m=+0.068647202 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 08:39:24 np0005629333 podman[410131]: 2026-02-25 13:39:24.443304134 +0000 UTC m=+0.112215613 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
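
[note] The two health_status=healthy events are podman's scheduled healthchecks for ovn_metadata_agent and ovn_controller; per the config_data in the events, the configured test is the mounted /openstack/healthcheck script. The same check can be run on demand with "podman healthcheck run"; a sketch assuming podman on PATH and these container names:

    import subprocess

    # 'podman healthcheck run' executes the container's configured healthcheck
    # (here the mounted /openstack/healthcheck script) and exits non-zero on
    # failure.
    for name in ("ovn_metadata_agent", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")
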
Feb 25 08:39:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:39:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:39:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:39:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:39:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:39:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:39:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:39:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:39:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:39:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:39:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:39:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:39:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
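
[note] This mon_command audit trail is the cephadm mgr (mgr.compute-0.jzfame) refreshing its minimal config and keyrings. The same "config generate-minimal-conf" prefix dispatched above is reachable from the CLI; a sketch, assuming an authorized keyring on the host:

    import subprocess

    # Equivalent of the mon_command {"prefix": "config generate-minimal-conf"}
    # seen in the audit log; prints a minimal ceph.conf for this cluster.
    print(subprocess.run(
        ["ceph", "config", "generate-minimal-conf"],
        capture_output=True, check=True, text=True,
    ).stdout)
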
Feb 25 08:39:25 np0005629333 podman[410293]: 2026-02-25 13:39:25.45643661 +0000 UTC m=+0.066261553 container create e7fdbd4bd934461123a23b6eae4b1c8f47d36c6bd44c4e72c29a627d395a5457 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 08:39:25 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:39:25 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:39:25 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:39:25 np0005629333 podman[410293]: 2026-02-25 13:39:25.414250849 +0000 UTC m=+0.024075852 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:39:25 np0005629333 systemd[1]: Started libpod-conmon-e7fdbd4bd934461123a23b6eae4b1c8f47d36c6bd44c4e72c29a627d395a5457.scope.
Feb 25 08:39:25 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:39:25 np0005629333 podman[410293]: 2026-02-25 13:39:25.586468253 +0000 UTC m=+0.196293156 container init e7fdbd4bd934461123a23b6eae4b1c8f47d36c6bd44c4e72c29a627d395a5457 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 08:39:25 np0005629333 podman[410293]: 2026-02-25 13:39:25.597085208 +0000 UTC m=+0.206910151 container start e7fdbd4bd934461123a23b6eae4b1c8f47d36c6bd44c4e72c29a627d395a5457 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:39:25 np0005629333 podman[410293]: 2026-02-25 13:39:25.605207931 +0000 UTC m=+0.215032884 container attach e7fdbd4bd934461123a23b6eae4b1c8f47d36c6bd44c4e72c29a627d395a5457 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_montalcini, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 08:39:25 np0005629333 distracted_montalcini[410309]: 167 167
Feb 25 08:39:25 np0005629333 systemd[1]: libpod-e7fdbd4bd934461123a23b6eae4b1c8f47d36c6bd44c4e72c29a627d395a5457.scope: Deactivated successfully.
Feb 25 08:39:25 np0005629333 podman[410293]: 2026-02-25 13:39:25.607673892 +0000 UTC m=+0.217498825 container died e7fdbd4bd934461123a23b6eae4b1c8f47d36c6bd44c4e72c29a627d395a5457 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_montalcini, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:39:25 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a18f116b686561bc2be2f66d280c5bca9dcbb5fc12186fec5bf86abb19c9fdff-merged.mount: Deactivated successfully.
Feb 25 08:39:25 np0005629333 podman[410293]: 2026-02-25 13:39:25.669562019 +0000 UTC m=+0.279386932 container remove e7fdbd4bd934461123a23b6eae4b1c8f47d36c6bd44c4e72c29a627d395a5457 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_montalcini, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 08:39:25 np0005629333 systemd[1]: libpod-conmon-e7fdbd4bd934461123a23b6eae4b1c8f47d36c6bd44c4e72c29a627d395a5457.scope: Deactivated successfully.
Feb 25 08:39:25 np0005629333 podman[410335]: 2026-02-25 13:39:25.908079747 +0000 UTC m=+0.089986275 container create 7c31d13f3b203e093618a41fb4c243faa1e90eaa7562bb580bf26a979186bd46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_roentgen, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True)
Feb 25 08:39:25 np0005629333 podman[410335]: 2026-02-25 13:39:25.857377451 +0000 UTC m=+0.039284029 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:39:25 np0005629333 systemd[1]: Started libpod-conmon-7c31d13f3b203e093618a41fb4c243faa1e90eaa7562bb580bf26a979186bd46.scope.
Feb 25 08:39:25 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:39:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8baacd8ae74a9868086de22f27e1e673b4888089cbd37f527deca18d2232a03b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:39:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8baacd8ae74a9868086de22f27e1e673b4888089cbd37f527deca18d2232a03b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:39:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8baacd8ae74a9868086de22f27e1e673b4888089cbd37f527deca18d2232a03b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:39:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8baacd8ae74a9868086de22f27e1e673b4888089cbd37f527deca18d2232a03b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:39:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8baacd8ae74a9868086de22f27e1e673b4888089cbd37f527deca18d2232a03b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:39:26 np0005629333 podman[410335]: 2026-02-25 13:39:26.00991448 +0000 UTC m=+0.191821058 container init 7c31d13f3b203e093618a41fb4c243faa1e90eaa7562bb580bf26a979186bd46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_roentgen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:39:26 np0005629333 podman[410335]: 2026-02-25 13:39:26.016479619 +0000 UTC m=+0.198386107 container start 7c31d13f3b203e093618a41fb4c243faa1e90eaa7562bb580bf26a979186bd46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_roentgen, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 08:39:26 np0005629333 podman[410335]: 2026-02-25 13:39:26.020807323 +0000 UTC m=+0.202713811 container attach 7c31d13f3b203e093618a41fb4c243faa1e90eaa7562bb580bf26a979186bd46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:39:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3586: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:26 np0005629333 nova_compute[244014]: 2026-02-25 13:39:26.071 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:39:26 np0005629333 focused_roentgen[410352]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:39:26 np0005629333 focused_roentgen[410352]: --> All data devices are unavailable
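
[note] "passed data devices: 0 physical, 3 LVM" followed by "All data devices are unavailable" reads as ceph-volume rejecting every candidate LV, which is consistent with the lvm list output further below showing all three LVs already carrying OSDs 0-2. One way to confirm is ceph-volume's inventory report; a sketch, assuming ceph-volume is installed where it runs and that the JSON fields are its usual path/available/rejected_reasons keys:

    import json
    import subprocess

    # 'ceph-volume inventory --format json' returns a list of device records,
    # each flagging availability and any rejection reasons.
    devices = json.loads(subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        capture_output=True, check=True, text=True,
    ).stdout)
    for dev in devices:
        print(dev["path"], dev["available"], dev.get("rejected_reasons"))
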
Feb 25 08:39:26 np0005629333 systemd[1]: libpod-7c31d13f3b203e093618a41fb4c243faa1e90eaa7562bb580bf26a979186bd46.scope: Deactivated successfully.
Feb 25 08:39:26 np0005629333 podman[410335]: 2026-02-25 13:39:26.493658628 +0000 UTC m=+0.675565196 container died 7c31d13f3b203e093618a41fb4c243faa1e90eaa7562bb580bf26a979186bd46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 08:39:26 np0005629333 systemd[1]: var-lib-containers-storage-overlay-8baacd8ae74a9868086de22f27e1e673b4888089cbd37f527deca18d2232a03b-merged.mount: Deactivated successfully.
Feb 25 08:39:26 np0005629333 podman[410335]: 2026-02-25 13:39:26.549861811 +0000 UTC m=+0.731768309 container remove 7c31d13f3b203e093618a41fb4c243faa1e90eaa7562bb580bf26a979186bd46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_roentgen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:39:26 np0005629333 systemd[1]: libpod-conmon-7c31d13f3b203e093618a41fb4c243faa1e90eaa7562bb580bf26a979186bd46.scope: Deactivated successfully.
Feb 25 08:39:26 np0005629333 nova_compute[244014]: 2026-02-25 13:39:26.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:39:27 np0005629333 podman[410445]: 2026-02-25 13:39:27.024106856 +0000 UTC m=+0.051921771 container create 55b9a1996519b1979dfe231f36a4e3a817f89ba50529f5f1b0979f5d2e92f9ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 08:39:27 np0005629333 systemd[1]: Started libpod-conmon-55b9a1996519b1979dfe231f36a4e3a817f89ba50529f5f1b0979f5d2e92f9ce.scope.
Feb 25 08:39:27 np0005629333 podman[410445]: 2026-02-25 13:39:26.997856973 +0000 UTC m=+0.025671978 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:39:27 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:39:27 np0005629333 podman[410445]: 2026-02-25 13:39:27.113571805 +0000 UTC m=+0.141386730 container init 55b9a1996519b1979dfe231f36a4e3a817f89ba50529f5f1b0979f5d2e92f9ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goodall, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 08:39:27 np0005629333 podman[410445]: 2026-02-25 13:39:27.123345475 +0000 UTC m=+0.151160380 container start 55b9a1996519b1979dfe231f36a4e3a817f89ba50529f5f1b0979f5d2e92f9ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goodall, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 08:39:27 np0005629333 sweet_goodall[410461]: 167 167
Feb 25 08:39:27 np0005629333 systemd[1]: libpod-55b9a1996519b1979dfe231f36a4e3a817f89ba50529f5f1b0979f5d2e92f9ce.scope: Deactivated successfully.
Feb 25 08:39:27 np0005629333 podman[410445]: 2026-02-25 13:39:27.132703394 +0000 UTC m=+0.160518329 container attach 55b9a1996519b1979dfe231f36a4e3a817f89ba50529f5f1b0979f5d2e92f9ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:39:27 np0005629333 podman[410445]: 2026-02-25 13:39:27.133109716 +0000 UTC m=+0.160924631 container died 55b9a1996519b1979dfe231f36a4e3a817f89ba50529f5f1b0979f5d2e92f9ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goodall, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:39:27 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f543c270a9df64899965e7b39b9027dcb5e51a5ab9fa1cb77bc919b7337b9a42-merged.mount: Deactivated successfully.
Feb 25 08:39:27 np0005629333 podman[410445]: 2026-02-25 13:39:27.247150759 +0000 UTC m=+0.274965674 container remove 55b9a1996519b1979dfe231f36a4e3a817f89ba50529f5f1b0979f5d2e92f9ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_goodall, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:39:27 np0005629333 systemd[1]: libpod-conmon-55b9a1996519b1979dfe231f36a4e3a817f89ba50529f5f1b0979f5d2e92f9ce.scope: Deactivated successfully.
Feb 25 08:39:27 np0005629333 podman[410488]: 2026-02-25 13:39:27.380177218 +0000 UTC m=+0.040801292 container create 75ac64f43abe9e8a8c078f837eb3384ddc63e3c2d8a6fcf298d939f5ab306b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_elgamal, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:39:27 np0005629333 systemd[1]: Started libpod-conmon-75ac64f43abe9e8a8c078f837eb3384ddc63e3c2d8a6fcf298d939f5ab306b6c.scope.
Feb 25 08:39:27 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:39:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0390ea1be3afc50cf01e6e2894577eb9f58914b9bbc38040ad0ea2db8c499e68/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:39:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0390ea1be3afc50cf01e6e2894577eb9f58914b9bbc38040ad0ea2db8c499e68/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:39:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0390ea1be3afc50cf01e6e2894577eb9f58914b9bbc38040ad0ea2db8c499e68/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:39:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0390ea1be3afc50cf01e6e2894577eb9f58914b9bbc38040ad0ea2db8c499e68/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:39:27 np0005629333 podman[410488]: 2026-02-25 13:39:27.363757607 +0000 UTC m=+0.024381681 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:39:27 np0005629333 podman[410488]: 2026-02-25 13:39:27.465044794 +0000 UTC m=+0.125668898 container init 75ac64f43abe9e8a8c078f837eb3384ddc63e3c2d8a6fcf298d939f5ab306b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:39:27 np0005629333 podman[410488]: 2026-02-25 13:39:27.472233491 +0000 UTC m=+0.132857545 container start 75ac64f43abe9e8a8c078f837eb3384ddc63e3c2d8a6fcf298d939f5ab306b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 08:39:27 np0005629333 podman[410488]: 2026-02-25 13:39:27.475505835 +0000 UTC m=+0.136129909 container attach 75ac64f43abe9e8a8c078f837eb3384ddc63e3c2d8a6fcf298d939f5ab306b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
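
[note] The JSON this container prints below has the shape of "ceph-volume lvm list --format json" output: a dict keyed by OSD id, each value a list of LV records carrying the ceph.* tags. A sketch that fetches and summarizes it under that assumption:

    import json
    import subprocess

    # Dict keyed by OSD id (as strings); each value is a list of LV records
    # like the ones printed below, with lv_path, devices, and tags.
    osds = json.loads(subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, check=True, text=True,
    ).stdout)
    for osd_id, lvs in sorted(osds.items()):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])}")
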
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]: {
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:    "0": [
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:        {
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "devices": [
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "/dev/loop3"
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            ],
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "lv_name": "ceph_lv0",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "lv_size": "21470642176",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "name": "ceph_lv0",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "tags": {
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.cluster_name": "ceph",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.crush_device_class": "",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.encrypted": "0",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.objectstore": "bluestore",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.osd_id": "0",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.type": "block",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.vdo": "0",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.with_tpm": "0"
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            },
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "type": "block",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "vg_name": "ceph_vg0"
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:        }
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:    ],
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:    "1": [
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:        {
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "devices": [
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "/dev/loop4"
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            ],
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "lv_name": "ceph_lv1",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "lv_size": "21470642176",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "name": "ceph_lv1",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "tags": {
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.cluster_name": "ceph",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.crush_device_class": "",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.encrypted": "0",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.objectstore": "bluestore",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.osd_id": "1",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.type": "block",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.vdo": "0",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.with_tpm": "0"
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            },
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "type": "block",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "vg_name": "ceph_vg1"
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:        }
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:    ],
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:    "2": [
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:        {
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "devices": [
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "/dev/loop5"
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            ],
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "lv_name": "ceph_lv2",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "lv_size": "21470642176",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "name": "ceph_lv2",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "tags": {
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.cluster_name": "ceph",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.crush_device_class": "",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.encrypted": "0",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.objectstore": "bluestore",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.osd_id": "2",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.type": "block",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.vdo": "0",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:                "ceph.with_tpm": "0"
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            },
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "type": "block",
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:            "vg_name": "ceph_vg2"
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:        }
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]:    ]
Feb 25 08:39:27 np0005629333 flamboyant_elgamal[410505]: }
Feb 25 08:39:27 np0005629333 systemd[1]: libpod-75ac64f43abe9e8a8c078f837eb3384ddc63e3c2d8a6fcf298d939f5ab306b6c.scope: Deactivated successfully.
Feb 25 08:39:27 np0005629333 podman[410488]: 2026-02-25 13:39:27.742442748 +0000 UTC m=+0.403066842 container died 75ac64f43abe9e8a8c078f837eb3384ddc63e3c2d8a6fcf298d939f5ab306b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_elgamal, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 08:39:27 np0005629333 systemd[1]: var-lib-containers-storage-overlay-0390ea1be3afc50cf01e6e2894577eb9f58914b9bbc38040ad0ea2db8c499e68-merged.mount: Deactivated successfully.
Feb 25 08:39:27 np0005629333 podman[410488]: 2026-02-25 13:39:27.784794034 +0000 UTC m=+0.445418078 container remove 75ac64f43abe9e8a8c078f837eb3384ddc63e3c2d8a6fcf298d939f5ab306b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_elgamal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Feb 25 08:39:27 np0005629333 systemd[1]: libpod-conmon-75ac64f43abe9e8a8c078f837eb3384ddc63e3c2d8a6fcf298d939f5ab306b6c.scope: Deactivated successfully.
Feb 25 08:39:27 np0005629333 nova_compute[244014]: 2026-02-25 13:39:27.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:39:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3587: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:28 np0005629333 podman[410590]: 2026-02-25 13:39:28.23222825 +0000 UTC m=+0.045348223 container create a58f7a1ccdae60407b1a1e61835fe094db293e00cb898c57ad7d4b04cf87cabe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_ishizaka, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:39:28 np0005629333 systemd[1]: Started libpod-conmon-a58f7a1ccdae60407b1a1e61835fe094db293e00cb898c57ad7d4b04cf87cabe.scope.
Feb 25 08:39:28 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:39:28 np0005629333 podman[410590]: 2026-02-25 13:39:28.213580825 +0000 UTC m=+0.026700848 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:39:28 np0005629333 podman[410590]: 2026-02-25 13:39:28.313154704 +0000 UTC m=+0.126274737 container init a58f7a1ccdae60407b1a1e61835fe094db293e00cb898c57ad7d4b04cf87cabe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:39:28 np0005629333 podman[410590]: 2026-02-25 13:39:28.319260559 +0000 UTC m=+0.132380532 container start a58f7a1ccdae60407b1a1e61835fe094db293e00cb898c57ad7d4b04cf87cabe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_ishizaka, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 08:39:28 np0005629333 unruffled_ishizaka[410607]: 167 167
Feb 25 08:39:28 np0005629333 systemd[1]: libpod-a58f7a1ccdae60407b1a1e61835fe094db293e00cb898c57ad7d4b04cf87cabe.scope: Deactivated successfully.
Feb 25 08:39:28 np0005629333 conmon[410607]: conmon a58f7a1ccdae60407b1a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a58f7a1ccdae60407b1a1e61835fe094db293e00cb898c57ad7d4b04cf87cabe.scope/container/memory.events
Feb 25 08:39:28 np0005629333 podman[410590]: 2026-02-25 13:39:28.326736313 +0000 UTC m=+0.139856386 container attach a58f7a1ccdae60407b1a1e61835fe094db293e00cb898c57ad7d4b04cf87cabe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 08:39:28 np0005629333 podman[410590]: 2026-02-25 13:39:28.327170206 +0000 UTC m=+0.140290219 container died a58f7a1ccdae60407b1a1e61835fe094db293e00cb898c57ad7d4b04cf87cabe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:39:28 np0005629333 systemd[1]: var-lib-containers-storage-overlay-17ab5ac3e96527171ec5fd51f5584b11e787368a4a5e595af79fa12ec87fac34-merged.mount: Deactivated successfully.
Feb 25 08:39:28 np0005629333 podman[410590]: 2026-02-25 13:39:28.366015271 +0000 UTC m=+0.179135274 container remove a58f7a1ccdae60407b1a1e61835fe094db293e00cb898c57ad7d4b04cf87cabe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_ishizaka, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:39:28 np0005629333 systemd[1]: libpod-conmon-a58f7a1ccdae60407b1a1e61835fe094db293e00cb898c57ad7d4b04cf87cabe.scope: Deactivated successfully.
Feb 25 08:39:28 np0005629333 podman[410631]: 2026-02-25 13:39:28.513817085 +0000 UTC m=+0.049447101 container create 875d7c103bea70c176b869fee14ed58877aa26d8da5e6a7b3c65bac84782c74e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_black, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 08:39:28 np0005629333 systemd[1]: Started libpod-conmon-875d7c103bea70c176b869fee14ed58877aa26d8da5e6a7b3c65bac84782c74e.scope.
Feb 25 08:39:28 np0005629333 podman[410631]: 2026-02-25 13:39:28.489038523 +0000 UTC m=+0.024668539 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:39:28 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:39:28 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0798e061a4fa9ce8ab15fab0cea1513db3ffe7761c051ff6a5656b228605472a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:39:28 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0798e061a4fa9ce8ab15fab0cea1513db3ffe7761c051ff6a5656b228605472a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:39:28 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0798e061a4fa9ce8ab15fab0cea1513db3ffe7761c051ff6a5656b228605472a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:39:28 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0798e061a4fa9ce8ab15fab0cea1513db3ffe7761c051ff6a5656b228605472a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:39:28 np0005629333 nova_compute[244014]: 2026-02-25 13:39:28.601 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:39:28 np0005629333 podman[410631]: 2026-02-25 13:39:28.630418262 +0000 UTC m=+0.166048318 container init 875d7c103bea70c176b869fee14ed58877aa26d8da5e6a7b3c65bac84782c74e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_black, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 08:39:28 np0005629333 podman[410631]: 2026-02-25 13:39:28.639340878 +0000 UTC m=+0.174970854 container start 875d7c103bea70c176b869fee14ed58877aa26d8da5e6a7b3c65bac84782c74e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_black, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Feb 25 08:39:28 np0005629333 podman[410631]: 2026-02-25 13:39:28.64287588 +0000 UTC m=+0.178505956 container attach 875d7c103bea70c176b869fee14ed58877aa26d8da5e6a7b3c65bac84782c74e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_black, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 08:39:29 np0005629333 lvm[410727]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:39:29 np0005629333 lvm[410727]: VG ceph_vg1 finished
Feb 25 08:39:29 np0005629333 lvm[410726]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:39:29 np0005629333 lvm[410726]: VG ceph_vg0 finished
Feb 25 08:39:29 np0005629333 lvm[410729]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:39:29 np0005629333 lvm[410729]: VG ceph_vg2 finished
Feb 25 08:39:29 np0005629333 pedantic_black[410648]: {}
Feb 25 08:39:29 np0005629333 systemd[1]: libpod-875d7c103bea70c176b869fee14ed58877aa26d8da5e6a7b3c65bac84782c74e.scope: Deactivated successfully.
Feb 25 08:39:29 np0005629333 systemd[1]: libpod-875d7c103bea70c176b869fee14ed58877aa26d8da5e6a7b3c65bac84782c74e.scope: Consumed 1.095s CPU time.
Feb 25 08:39:29 np0005629333 podman[410732]: 2026-02-25 13:39:29.45274784 +0000 UTC m=+0.037204249 container died 875d7c103bea70c176b869fee14ed58877aa26d8da5e6a7b3c65bac84782c74e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_black, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 08:39:29 np0005629333 systemd[1]: var-lib-containers-storage-overlay-0798e061a4fa9ce8ab15fab0cea1513db3ffe7761c051ff6a5656b228605472a-merged.mount: Deactivated successfully.
Feb 25 08:39:29 np0005629333 podman[410732]: 2026-02-25 13:39:29.501902282 +0000 UTC m=+0.086358641 container remove 875d7c103bea70c176b869fee14ed58877aa26d8da5e6a7b3c65bac84782c74e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 08:39:29 np0005629333 systemd[1]: libpod-conmon-875d7c103bea70c176b869fee14ed58877aa26d8da5e6a7b3c65bac84782c74e.scope: Deactivated successfully.
Feb 25 08:39:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:39:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:39:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:39:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:39:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:39:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3588: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:30 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:39:30 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:39:31 np0005629333 nova_compute[244014]: 2026-02-25 13:39:31.077 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:39:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:39:31
Feb 25 08:39:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:39:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:39:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['vms', 'backups', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'default.rgw.log']
Feb 25 08:39:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:39:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:39:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:39:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:39:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:39:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:39:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:39:31 np0005629333 nova_compute[244014]: 2026-02-25 13:39:31.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:39:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3589: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:39:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:39:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:39:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:39:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:39:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:39:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:39:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:39:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:39:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:39:33 np0005629333 nova_compute[244014]: 2026-02-25 13:39:33.605 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:39:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3590: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:39:34 np0005629333 nova_compute[244014]: 2026-02-25 13:39:34.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:39:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3591: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:36 np0005629333 nova_compute[244014]: 2026-02-25 13:39:36.079 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:39:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3592: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:38 np0005629333 nova_compute[244014]: 2026-02-25 13:39:38.643 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:39:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:39:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3593: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:41 np0005629333 nova_compute[244014]: 2026-02-25 13:39:41.109 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:39:41 np0005629333 nova_compute[244014]: 2026-02-25 13:39:41.650 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:39:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3594: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:39:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:39:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:39:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:39:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:39:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:39:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:39:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:39:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:39:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:39:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:39:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:39:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:39:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:39:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:39:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:39:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:39:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:39:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:39:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:39:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:39:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:39:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 08:39:43 np0005629333 nova_compute[244014]: 2026-02-25 13:39:43.646 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:39:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3595: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:39:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3596: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:46 np0005629333 nova_compute[244014]: 2026-02-25 13:39:46.113 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:39:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:39:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2194471842' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:39:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:39:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2194471842' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:39:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3597: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:48 np0005629333 nova_compute[244014]: 2026-02-25 13:39:48.647 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:39:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:39:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3598: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:51 np0005629333 nova_compute[244014]: 2026-02-25 13:39:51.161 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:39:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3599: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:53 np0005629333 nova_compute[244014]: 2026-02-25 13:39:53.694 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:39:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3600: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:54 np0005629333 podman[410773]: 2026-02-25 13:39:54.747776888 +0000 UTC m=+0.077759644 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 08:39:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:39:54 np0005629333 podman[410774]: 2026-02-25 13:39:54.778972194 +0000 UTC m=+0.108022903 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0)
Feb 25 08:39:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:39:55.083 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:39:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:39:55.084 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:39:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:39:55.084 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:39:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3601: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:56 np0005629333 nova_compute[244014]: 2026-02-25 13:39:56.195 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:39:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3602: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:39:58 np0005629333 nova_compute[244014]: 2026-02-25 13:39:58.717 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:39:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:40:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3603: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:01 np0005629333 nova_compute[244014]: 2026-02-25 13:40:01.199 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:40:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:40:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:40:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:40:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:40:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:40:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:40:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3604: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:03 np0005629333 nova_compute[244014]: 2026-02-25 13:40:03.720 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:40:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3605: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:40:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3606: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:06 np0005629333 nova_compute[244014]: 2026-02-25 13:40:06.203 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:40:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3607: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:08 np0005629333 nova_compute[244014]: 2026-02-25 13:40:08.722 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:40:08 np0005629333 nova_compute[244014]: 2026-02-25 13:40:08.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:40:08 np0005629333 nova_compute[244014]: 2026-02-25 13:40:08.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:40:08 np0005629333 nova_compute[244014]: 2026-02-25 13:40:08.912 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:40:08 np0005629333 nova_compute[244014]: 2026-02-25 13:40:08.913 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:40:08 np0005629333 nova_compute[244014]: 2026-02-25 13:40:08.913 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:40:08 np0005629333 nova_compute[244014]: 2026-02-25 13:40:08.913 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:40:08 np0005629333 nova_compute[244014]: 2026-02-25 13:40:08.914 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:40:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:40:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1337006553' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:40:09 np0005629333 nova_compute[244014]: 2026-02-25 13:40:09.456 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:40:09 np0005629333 nova_compute[244014]: 2026-02-25 13:40:09.702 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:40:09 np0005629333 nova_compute[244014]: 2026-02-25 13:40:09.704 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3534MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:40:09 np0005629333 nova_compute[244014]: 2026-02-25 13:40:09.705 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:40:09 np0005629333 nova_compute[244014]: 2026-02-25 13:40:09.706 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:40:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:40:09 np0005629333 nova_compute[244014]: 2026-02-25 13:40:09.775 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:40:09 np0005629333 nova_compute[244014]: 2026-02-25 13:40:09.776 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:40:09 np0005629333 nova_compute[244014]: 2026-02-25 13:40:09.795 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:40:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3608: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:40:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1864052239' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:40:10 np0005629333 nova_compute[244014]: 2026-02-25 13:40:10.397 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:40:10 np0005629333 nova_compute[244014]: 2026-02-25 13:40:10.404 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:40:10 np0005629333 nova_compute[244014]: 2026-02-25 13:40:10.423 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:40:10 np0005629333 nova_compute[244014]: 2026-02-25 13:40:10.427 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 08:40:10 np0005629333 nova_compute[244014]: 2026-02-25 13:40:10.428 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.722s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:40:11 np0005629333 nova_compute[244014]: 2026-02-25 13:40:11.207 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:40:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3609: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:13 np0005629333 nova_compute[244014]: 2026-02-25 13:40:13.725 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:40:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3610: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:40:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 08:40:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6600.1 total, 600.0 interval
Cumulative writes: 47K writes, 185K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
Cumulative WAL: 47K writes, 17K syncs, 2.73 writes per sync, written: 0.19 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
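The "writes per sync" figure in these periodic rocksdb dumps is simply the ratio of WAL writes to WAL syncs over the window; the interval numbers above check out exactly:

```python
# Interval WAL from the dump above: 248 writes, 124 syncs
writes, syncs = 248, 124
print(writes / syncs)  # 2.0, matching "2.00 writes per sync"
# The cumulative figure (47K / 17K ~ 2.76 vs. the printed 2.73) only matches
# approximately because rocksdb rounds both counters to thousands before printing.
```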
Feb 25 08:40:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3611: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:16 np0005629333 nova_compute[244014]: 2026-02-25 13:40:16.218 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:40:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3612: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:18 np0005629333 nova_compute[244014]: 2026-02-25 13:40:18.763 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:40:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 08:40:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6600.2 total, 600.0 interval
Cumulative writes: 49K writes, 195K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
Cumulative WAL: 49K writes, 17K syncs, 2.75 writes per sync, written: 0.19 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 08:40:19 np0005629333 nova_compute[244014]: 2026-02-25 13:40:19.430 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:40:19 np0005629333 nova_compute[244014]: 2026-02-25 13:40:19.430 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:40:19 np0005629333 nova_compute[244014]: 2026-02-25 13:40:19.430 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 08:40:19 np0005629333 nova_compute[244014]: 2026-02-25 13:40:19.431 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 08:40:19 np0005629333 nova_compute[244014]: 2026-02-25 13:40:19.452 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 08:40:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:40:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3613: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:20 np0005629333 nova_compute[244014]: 2026-02-25 13:40:20.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:40:20 np0005629333 nova_compute[244014]: 2026-02-25 13:40:20.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 08:40:21 np0005629333 nova_compute[244014]: 2026-02-25 13:40:21.221 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:40:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3614: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:23 np0005629333 nova_compute[244014]: 2026-02-25 13:40:23.764 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:40:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3615: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:40:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 08:40:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6600.6 total, 600.0 interval
Cumulative writes: 37K writes, 150K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
Cumulative WAL: 37K writes, 13K syncs, 2.76 writes per sync, written: 0.15 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 08:40:25 np0005629333 podman[410862]: 2026-02-25 13:40:25.733349027 +0000 UTC m=+0.074820899 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Feb 25 08:40:25 np0005629333 podman[410863]: 2026-02-25 13:40:25.820931001 +0000 UTC m=+0.148959807 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
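Both health_status events above come from podman's healthcheck timers executing the configured test, '/openstack/healthcheck', which is bind-mounted into each container from /var/lib/openstack/healthchecks/. The same probe can be run by hand; a sketch, assuming the container names from the log and that the script exits non-zero on failure:

```python
import subprocess

# Invoke the same healthcheck script podman's timer runs inside each container.
for name in ("ovn_metadata_agent", "ovn_controller"):
    rc = subprocess.run(["podman", "exec", name, "/openstack/healthcheck"]).returncode
    print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")
```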
Feb 25 08:40:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3616: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:26 np0005629333 nova_compute[244014]: 2026-02-25 13:40:26.223 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:40:27 np0005629333 ceph-mgr[76641]: [devicehealth INFO root] Check health
Feb 25 08:40:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3617: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:28 np0005629333 nova_compute[244014]: 2026-02-25 13:40:28.766 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:40:28 np0005629333 nova_compute[244014]: 2026-02-25 13:40:28.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:40:28 np0005629333 nova_compute[244014]: 2026-02-25 13:40:28.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:40:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:40:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3618: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:40:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:40:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:40:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:40:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:40:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:40:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:40:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:40:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:40:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:40:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:40:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:40:30 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:40:30 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:40:30 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
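The handle_command/audit pairs above are the cephadm mgr module refreshing its view of the cluster; each cmd={...} payload is a mon command in the same JSON form the librados API accepts. A sketch of issuing the "osd tree" query from the audit trail via python-rados (assuming python-rados is installed and /etc/ceph/ceph.conf plus an admin keyring are readable; cephadm itself runs these as mgr.compute-0.jzfame):

```python
import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    # Identical payload to the audited command above.
    cmd = json.dumps({"prefix": "osd tree", "states": ["destroyed"], "format": "json"})
    ret, out, errs = cluster.mon_command(cmd, b"")
    if ret == 0:
        tree = json.loads(out)
        # This cluster has no destroyed OSDs, so the filtered tree is empty
        # (or reduced to bare bucket nodes).
        print([n["id"] for n in tree.get("nodes", [])])
finally:
    cluster.shutdown()
```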
Feb 25 08:40:30 np0005629333 podman[411048]: 2026-02-25 13:40:30.676524181 +0000 UTC m=+0.069027243 container create b854fdb7c6bfb93b2e403e4e713e229ee2692de2aaef275be73ac683b9b83d3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:40:30 np0005629333 systemd[1]: Started libpod-conmon-b854fdb7c6bfb93b2e403e4e713e229ee2692de2aaef275be73ac683b9b83d3f.scope.
Feb 25 08:40:30 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:40:30 np0005629333 podman[411048]: 2026-02-25 13:40:30.648088535 +0000 UTC m=+0.040591627 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:40:30 np0005629333 podman[411048]: 2026-02-25 13:40:30.760579944 +0000 UTC m=+0.153083106 container init b854fdb7c6bfb93b2e403e4e713e229ee2692de2aaef275be73ac683b9b83d3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:40:30 np0005629333 podman[411048]: 2026-02-25 13:40:30.771810537 +0000 UTC m=+0.164313629 container start b854fdb7c6bfb93b2e403e4e713e229ee2692de2aaef275be73ac683b9b83d3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 08:40:30 np0005629333 podman[411048]: 2026-02-25 13:40:30.775449061 +0000 UTC m=+0.167952273 container attach b854fdb7c6bfb93b2e403e4e713e229ee2692de2aaef275be73ac683b9b83d3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 08:40:30 np0005629333 systemd[1]: libpod-b854fdb7c6bfb93b2e403e4e713e229ee2692de2aaef275be73ac683b9b83d3f.scope: Deactivated successfully.
Feb 25 08:40:30 np0005629333 boring_chandrasekhar[411064]: 167 167
Feb 25 08:40:30 np0005629333 conmon[411064]: conmon b854fdb7c6bfb93b2e40 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b854fdb7c6bfb93b2e403e4e713e229ee2692de2aaef275be73ac683b9b83d3f.scope/container/memory.events
Feb 25 08:40:30 np0005629333 podman[411048]: 2026-02-25 13:40:30.781540986 +0000 UTC m=+0.174044058 container died b854fdb7c6bfb93b2e403e4e713e229ee2692de2aaef275be73ac683b9b83d3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 08:40:30 np0005629333 systemd[1]: var-lib-containers-storage-overlay-47cecd2cd9816e3686cfc2b65bf16d1b26beecbff4f51eead22f14402a5e5830-merged.mount: Deactivated successfully.
Feb 25 08:40:30 np0005629333 podman[411048]: 2026-02-25 13:40:30.824573042 +0000 UTC m=+0.217076114 container remove b854fdb7c6bfb93b2e403e4e713e229ee2692de2aaef275be73ac683b9b83d3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 08:40:30 np0005629333 systemd[1]: libpod-conmon-b854fdb7c6bfb93b2e403e4e713e229ee2692de2aaef275be73ac683b9b83d3f.scope: Deactivated successfully.
Feb 25 08:40:30 np0005629333 podman[411089]: 2026-02-25 13:40:30.989794514 +0000 UTC m=+0.048718359 container create d41003795a2c977e732e427b182d727ef81c8513fcc267277529c69bdac80adc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_mayer, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:40:31 np0005629333 systemd[1]: Started libpod-conmon-d41003795a2c977e732e427b182d727ef81c8513fcc267277529c69bdac80adc.scope.
Feb 25 08:40:31 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:40:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb07a3523fee5bc7b345f00252a85789e232b7d1584c6e20ed5a5f96eb0d92e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:40:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb07a3523fee5bc7b345f00252a85789e232b7d1584c6e20ed5a5f96eb0d92e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:40:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb07a3523fee5bc7b345f00252a85789e232b7d1584c6e20ed5a5f96eb0d92e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:40:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb07a3523fee5bc7b345f00252a85789e232b7d1584c6e20ed5a5f96eb0d92e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:40:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb07a3523fee5bc7b345f00252a85789e232b7d1584c6e20ed5a5f96eb0d92e8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:40:31 np0005629333 podman[411089]: 2026-02-25 13:40:30.973511746 +0000 UTC m=+0.032435381 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:40:31 np0005629333 podman[411089]: 2026-02-25 13:40:31.072733395 +0000 UTC m=+0.131657030 container init d41003795a2c977e732e427b182d727ef81c8513fcc267277529c69bdac80adc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 08:40:31 np0005629333 podman[411089]: 2026-02-25 13:40:31.086178191 +0000 UTC m=+0.145101836 container start d41003795a2c977e732e427b182d727ef81c8513fcc267277529c69bdac80adc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_mayer, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 08:40:31 np0005629333 podman[411089]: 2026-02-25 13:40:31.089110555 +0000 UTC m=+0.148034190 container attach d41003795a2c977e732e427b182d727ef81c8513fcc267277529c69bdac80adc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_mayer, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 08:40:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:40:31
Feb 25 08:40:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:40:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:40:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', '.mgr', '.rgw.root', 'default.rgw.log', 'backups', 'vms', 'images', 'default.rgw.control', 'cephfs.cephfs.data']
Feb 25 08:40:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:40:31 np0005629333 nova_compute[244014]: 2026-02-25 13:40:31.225 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:40:31 np0005629333 elastic_mayer[411106]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:40:31 np0005629333 elastic_mayer[411106]: --> All data devices are unavailable
Feb 25 08:40:31 np0005629333 systemd[1]: libpod-d41003795a2c977e732e427b182d727ef81c8513fcc267277529c69bdac80adc.scope: Deactivated successfully.
Feb 25 08:40:31 np0005629333 podman[411089]: 2026-02-25 13:40:31.592225409 +0000 UTC m=+0.651149054 container died d41003795a2c977e732e427b182d727ef81c8513fcc267277529c69bdac80adc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 08:40:31 np0005629333 systemd[1]: var-lib-containers-storage-overlay-cb07a3523fee5bc7b345f00252a85789e232b7d1584c6e20ed5a5f96eb0d92e8-merged.mount: Deactivated successfully.
Feb 25 08:40:31 np0005629333 podman[411089]: 2026-02-25 13:40:31.646673993 +0000 UTC m=+0.705597638 container remove d41003795a2c977e732e427b182d727ef81c8513fcc267277529c69bdac80adc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:40:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:40:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:40:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:40:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:40:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:40:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:40:31 np0005629333 systemd[1]: libpod-conmon-d41003795a2c977e732e427b182d727ef81c8513fcc267277529c69bdac80adc.scope: Deactivated successfully.
Feb 25 08:40:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3619: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:32 np0005629333 podman[411199]: 2026-02-25 13:40:32.124978914 +0000 UTC m=+0.044117607 container create 557aced7d8021e045a74badf655dd65ac39f2b396501976828090eb09054f064 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_johnson, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:40:32 np0005629333 systemd[1]: Started libpod-conmon-557aced7d8021e045a74badf655dd65ac39f2b396501976828090eb09054f064.scope.
Feb 25 08:40:32 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:40:32 np0005629333 podman[411199]: 2026-02-25 13:40:32.103135537 +0000 UTC m=+0.022274290 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:40:32 np0005629333 podman[411199]: 2026-02-25 13:40:32.216129891 +0000 UTC m=+0.135268644 container init 557aced7d8021e045a74badf655dd65ac39f2b396501976828090eb09054f064 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_johnson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 08:40:32 np0005629333 podman[411199]: 2026-02-25 13:40:32.224584984 +0000 UTC m=+0.143723677 container start 557aced7d8021e045a74badf655dd65ac39f2b396501976828090eb09054f064 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 08:40:32 np0005629333 podman[411199]: 2026-02-25 13:40:32.228218458 +0000 UTC m=+0.147357171 container attach 557aced7d8021e045a74badf655dd65ac39f2b396501976828090eb09054f064 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_johnson, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:40:32 np0005629333 xenodochial_johnson[411215]: 167 167
Feb 25 08:40:32 np0005629333 systemd[1]: libpod-557aced7d8021e045a74badf655dd65ac39f2b396501976828090eb09054f064.scope: Deactivated successfully.
Feb 25 08:40:32 np0005629333 podman[411199]: 2026-02-25 13:40:32.232084859 +0000 UTC m=+0.151223572 container died 557aced7d8021e045a74badf655dd65ac39f2b396501976828090eb09054f064 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_johnson, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:40:32 np0005629333 systemd[1]: var-lib-containers-storage-overlay-eeb433963c808659c6c56d73c7970b615d912c1bb4c82bb52a4f276b618086b5-merged.mount: Deactivated successfully.
Feb 25 08:40:32 np0005629333 podman[411199]: 2026-02-25 13:40:32.272292564 +0000 UTC m=+0.191431257 container remove 557aced7d8021e045a74badf655dd65ac39f2b396501976828090eb09054f064 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_johnson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 08:40:32 np0005629333 systemd[1]: libpod-conmon-557aced7d8021e045a74badf655dd65ac39f2b396501976828090eb09054f064.scope: Deactivated successfully.
Feb 25 08:40:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:40:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:40:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:40:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:40:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:40:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:40:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:40:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:40:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:40:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:40:32 np0005629333 podman[411239]: 2026-02-25 13:40:32.500812485 +0000 UTC m=+0.079453333 container create 7b4eac132fafe7b6a85524eb79270bf968da09a6c2537f13c6110eea93f8231a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 08:40:32 np0005629333 systemd[1]: Started libpod-conmon-7b4eac132fafe7b6a85524eb79270bf968da09a6c2537f13c6110eea93f8231a.scope.
Feb 25 08:40:32 np0005629333 podman[411239]: 2026-02-25 13:40:32.46720761 +0000 UTC m=+0.045848518 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:40:32 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:40:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d169bcc17047e5a8ba80931c26329fbdc0699bb5e7a0dfe93b6646f9fcbe463/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:40:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d169bcc17047e5a8ba80931c26329fbdc0699bb5e7a0dfe93b6646f9fcbe463/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:40:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d169bcc17047e5a8ba80931c26329fbdc0699bb5e7a0dfe93b6646f9fcbe463/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:40:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d169bcc17047e5a8ba80931c26329fbdc0699bb5e7a0dfe93b6646f9fcbe463/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:40:32 np0005629333 podman[411239]: 2026-02-25 13:40:32.612259814 +0000 UTC m=+0.190900682 container init 7b4eac132fafe7b6a85524eb79270bf968da09a6c2537f13c6110eea93f8231a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 08:40:32 np0005629333 podman[411239]: 2026-02-25 13:40:32.62221203 +0000 UTC m=+0.200852848 container start 7b4eac132fafe7b6a85524eb79270bf968da09a6c2537f13c6110eea93f8231a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_haibt, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 08:40:32 np0005629333 podman[411239]: 2026-02-25 13:40:32.626069611 +0000 UTC m=+0.204710499 container attach 7b4eac132fafe7b6a85524eb79270bf968da09a6c2537f13c6110eea93f8231a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_haibt, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 08:40:32 np0005629333 nova_compute[244014]: 2026-02-25 13:40:32.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]: {
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:    "0": [
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:        {
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "devices": [
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "/dev/loop3"
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            ],
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "lv_name": "ceph_lv0",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "lv_size": "21470642176",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "name": "ceph_lv0",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "tags": {
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.cluster_name": "ceph",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.crush_device_class": "",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.encrypted": "0",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.objectstore": "bluestore",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.osd_id": "0",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.type": "block",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.vdo": "0",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.with_tpm": "0"
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            },
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "type": "block",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "vg_name": "ceph_vg0"
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:        }
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:    ],
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:    "1": [
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:        {
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "devices": [
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "/dev/loop4"
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            ],
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "lv_name": "ceph_lv1",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "lv_size": "21470642176",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "name": "ceph_lv1",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "tags": {
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.cluster_name": "ceph",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.crush_device_class": "",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.encrypted": "0",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.objectstore": "bluestore",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.osd_id": "1",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.type": "block",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.vdo": "0",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.with_tpm": "0"
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            },
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "type": "block",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "vg_name": "ceph_vg1"
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:        }
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:    ],
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:    "2": [
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:        {
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "devices": [
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "/dev/loop5"
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            ],
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "lv_name": "ceph_lv2",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "lv_size": "21470642176",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "name": "ceph_lv2",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "tags": {
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.cluster_name": "ceph",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.crush_device_class": "",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.encrypted": "0",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.objectstore": "bluestore",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.osd_id": "2",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.type": "block",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.vdo": "0",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:                "ceph.with_tpm": "0"
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            },
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "type": "block",
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:            "vg_name": "ceph_vg2"
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:        }
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]:    ]
Feb 25 08:40:32 np0005629333 mystifying_haibt[411256]: }
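[Annotation] The JSON closed above is the tail of an inventory report in the shape of `ceph-volume lvm list --format json`: a map from OSD id to the logical volumes backing it, with the `ceph.*` LV tags carrying the OSD metadata. A minimal sketch of consuming such a report, assuming the full document follows the shape shown here (the file name is illustrative):

    import json

    # Load a report shaped like the one logged above: {"0": [...], "1": [...], "2": [...]}
    with open("ceph-volume-lvm-list.json") as f:
        report = json.load(f)

    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid={tags.get('ceph.osd_fsid')}, "
                  f"objectstore={tags.get('ceph.objectstore')})")

Against this excerpt, osd.2 would print as /dev/ceph_vg2/ceph_lv2 on /dev/loop5 with objectstore bluestore.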
Feb 25 08:40:32 np0005629333 systemd[1]: libpod-7b4eac132fafe7b6a85524eb79270bf968da09a6c2537f13c6110eea93f8231a.scope: Deactivated successfully.
Feb 25 08:40:32 np0005629333 podman[411239]: 2026-02-25 13:40:32.938008986 +0000 UTC m=+0.516649874 container died 7b4eac132fafe7b6a85524eb79270bf968da09a6c2537f13c6110eea93f8231a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 08:40:32 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7d169bcc17047e5a8ba80931c26329fbdc0699bb5e7a0dfe93b6646f9fcbe463-merged.mount: Deactivated successfully.
Feb 25 08:40:32 np0005629333 podman[411239]: 2026-02-25 13:40:32.97924919 +0000 UTC m=+0.557889998 container remove 7b4eac132fafe7b6a85524eb79270bf968da09a6c2537f13c6110eea93f8231a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:40:32 np0005629333 systemd[1]: libpod-conmon-7b4eac132fafe7b6a85524eb79270bf968da09a6c2537f13c6110eea93f8231a.scope: Deactivated successfully.
Feb 25 08:40:33 np0005629333 podman[411339]: 2026-02-25 13:40:33.420143278 +0000 UTC m=+0.040976587 container create 1d433edeaf3336fb2fc5aa7c30c859bad4424ccab24165d0942ac6e0b6934f5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 08:40:33 np0005629333 systemd[1]: Started libpod-conmon-1d433edeaf3336fb2fc5aa7c30c859bad4424ccab24165d0942ac6e0b6934f5a.scope.
Feb 25 08:40:33 np0005629333 podman[411339]: 2026-02-25 13:40:33.400487444 +0000 UTC m=+0.021320813 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:40:33 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:40:33 np0005629333 podman[411339]: 2026-02-25 13:40:33.525310377 +0000 UTC m=+0.146143676 container init 1d433edeaf3336fb2fc5aa7c30c859bad4424ccab24165d0942ac6e0b6934f5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_swartz, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:40:33 np0005629333 podman[411339]: 2026-02-25 13:40:33.538186477 +0000 UTC m=+0.159019816 container start 1d433edeaf3336fb2fc5aa7c30c859bad4424ccab24165d0942ac6e0b6934f5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 08:40:33 np0005629333 podman[411339]: 2026-02-25 13:40:33.542086549 +0000 UTC m=+0.162919888 container attach 1d433edeaf3336fb2fc5aa7c30c859bad4424ccab24165d0942ac6e0b6934f5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_swartz, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:40:33 np0005629333 systemd[1]: libpod-1d433edeaf3336fb2fc5aa7c30c859bad4424ccab24165d0942ac6e0b6934f5a.scope: Deactivated successfully.
Feb 25 08:40:33 np0005629333 vibrant_swartz[411356]: 167 167
Feb 25 08:40:33 np0005629333 conmon[411356]: conmon 1d433edeaf3336fb2fc5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1d433edeaf3336fb2fc5aa7c30c859bad4424ccab24165d0942ac6e0b6934f5a.scope/container/memory.events
Feb 25 08:40:33 np0005629333 podman[411339]: 2026-02-25 13:40:33.545363953 +0000 UTC m=+0.166197252 container died 1d433edeaf3336fb2fc5aa7c30c859bad4424ccab24165d0942ac6e0b6934f5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_swartz, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 08:40:33 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6d8f607a08e229b0793e7ef8d39b5af184901e962e75cb16906105a9e7869315-merged.mount: Deactivated successfully.
Feb 25 08:40:33 np0005629333 podman[411339]: 2026-02-25 13:40:33.586430862 +0000 UTC m=+0.207264141 container remove 1d433edeaf3336fb2fc5aa7c30c859bad4424ccab24165d0942ac6e0b6934f5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_swartz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 08:40:33 np0005629333 systemd[1]: libpod-conmon-1d433edeaf3336fb2fc5aa7c30c859bad4424ccab24165d0942ac6e0b6934f5a.scope: Deactivated successfully.
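[Annotation] The create/init/start/attach/died/remove sequences for mystifying_haibt, vibrant_swartz and the containers that follow are short-lived cephadm helper containers run from the quay.io/ceph/ceph image (the names are podman's random names); the conmon "Failed to open cgroups file" warning is expected for a container that exits almost immediately. A sketch for pairing these events to measure helper lifetimes, assuming journal lines in the format above (the input file name is illustrative):

    import re
    from datetime import datetime

    # Matches podman event lines such as:
    #   podman[411339]: 2026-02-25 13:40:33.42... +0000 UTC m=+0.04... container create <64-hex id> (..., name=vibrant_swartz, ...)
    EVENT = re.compile(
        r"podman\[\d+\]: (?P<ts>\S+ \S+) \+0000 UTC m=\S+ "
        r"container (?P<event>\w+) (?P<cid>[0-9a-f]{64}) "
        r".*?, name=(?P<name>[^,)]+)"
    )

    def container_lifetimes(lines):
        created = {}
        for line in lines:
            m = EVENT.search(line)
            if not m:
                continue
            # Truncate to microseconds; podman logs nanosecond precision.
            ts = datetime.strptime(m.group("ts")[:26], "%Y-%m-%d %H:%M:%S.%f")
            if m.group("event") == "create":
                created[m.group("cid")] = (m.group("name"), ts)
            elif m.group("event") == "remove" and m.group("cid") in created:
                name, start = created.pop(m.group("cid"))
                yield name, (ts - start).total_seconds()

    for name, seconds in container_lifetimes(open("journal.txt")):
        print(name, seconds)

For the vibrant_swartz lifecycle above this yields roughly 0.17 seconds from create to remove.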
Feb 25 08:40:33 np0005629333 nova_compute[244014]: 2026-02-25 13:40:33.768 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:40:33 np0005629333 podman[411381]: 2026-02-25 13:40:33.772356 +0000 UTC m=+0.053401164 container create 9dc06d96e5a7446454557e31e11017504221dc2f9cc88bce10778bda09735c95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 08:40:33 np0005629333 systemd[1]: Started libpod-conmon-9dc06d96e5a7446454557e31e11017504221dc2f9cc88bce10778bda09735c95.scope.
Feb 25 08:40:33 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:40:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e6d0e7cb9e3cbf67bfddb875d4c8bf011d4791098031eab6c3b59ca1fb042e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:40:33 np0005629333 podman[411381]: 2026-02-25 13:40:33.743237714 +0000 UTC m=+0.024282918 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:40:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e6d0e7cb9e3cbf67bfddb875d4c8bf011d4791098031eab6c3b59ca1fb042e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:40:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e6d0e7cb9e3cbf67bfddb875d4c8bf011d4791098031eab6c3b59ca1fb042e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:40:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e6d0e7cb9e3cbf67bfddb875d4c8bf011d4791098031eab6c3b59ca1fb042e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:40:33 np0005629333 podman[411381]: 2026-02-25 13:40:33.856248178 +0000 UTC m=+0.137293352 container init 9dc06d96e5a7446454557e31e11017504221dc2f9cc88bce10778bda09735c95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hoover, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:40:33 np0005629333 podman[411381]: 2026-02-25 13:40:33.861945012 +0000 UTC m=+0.142990146 container start 9dc06d96e5a7446454557e31e11017504221dc2f9cc88bce10778bda09735c95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hoover, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:40:33 np0005629333 podman[411381]: 2026-02-25 13:40:33.865967787 +0000 UTC m=+0.147012951 container attach 9dc06d96e5a7446454557e31e11017504221dc2f9cc88bce10778bda09735c95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hoover, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 08:40:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3620: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:34 np0005629333 lvm[411477]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:40:34 np0005629333 lvm[411477]: VG ceph_vg1 finished
Feb 25 08:40:34 np0005629333 lvm[411476]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:40:34 np0005629333 lvm[411476]: VG ceph_vg0 finished
Feb 25 08:40:34 np0005629333 lvm[411479]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:40:34 np0005629333 lvm[411479]: VG ceph_vg2 finished
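[Annotation] The lvm[...] pairs above are event-based autoactivation: each pvscan instance reports its PV online and, once every PV of a VG has been seen, marks the VG complete and activates it. A cross-check sketch using lvm2's JSON reporter (run as root; the report key layout is assumed from the lvm2 `--reportformat json` output):

    import json
    import subprocess

    # List LVs with their VG and tags; ceph-volume stores OSD metadata in lv_tags.
    out = subprocess.check_output(
        ["lvs", "--reportformat", "json", "-o", "lv_name,vg_name,lv_tags"]
    )
    for lv in json.loads(out)["report"][0]["lv"]:
        print(lv["vg_name"], lv["lv_name"], lv["lv_tags"])

Expected here: the LVs of ceph_vg0 through ceph_vg2 with their ceph.* tags, matching the report logged earlier.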
Feb 25 08:40:34 np0005629333 reverent_hoover[411398]: {}
Feb 25 08:40:34 np0005629333 systemd[1]: libpod-9dc06d96e5a7446454557e31e11017504221dc2f9cc88bce10778bda09735c95.scope: Deactivated successfully.
Feb 25 08:40:34 np0005629333 systemd[1]: libpod-9dc06d96e5a7446454557e31e11017504221dc2f9cc88bce10778bda09735c95.scope: Consumed 1.315s CPU time.
Feb 25 08:40:34 np0005629333 podman[411381]: 2026-02-25 13:40:34.729254811 +0000 UTC m=+1.010299975 container died 9dc06d96e5a7446454557e31e11017504221dc2f9cc88bce10778bda09735c95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hoover, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:40:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:40:34 np0005629333 systemd[1]: var-lib-containers-storage-overlay-80e6d0e7cb9e3cbf67bfddb875d4c8bf011d4791098031eab6c3b59ca1fb042e-merged.mount: Deactivated successfully.
Feb 25 08:40:34 np0005629333 podman[411381]: 2026-02-25 13:40:34.784455376 +0000 UTC m=+1.065500540 container remove 9dc06d96e5a7446454557e31e11017504221dc2f9cc88bce10778bda09735c95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hoover, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:40:34 np0005629333 systemd[1]: libpod-conmon-9dc06d96e5a7446454557e31e11017504221dc2f9cc88bce10778bda09735c95.scope: Deactivated successfully.
Feb 25 08:40:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:40:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:40:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:40:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:40:35 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:40:35 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:40:35 np0005629333 nova_compute[244014]: 2026-02-25 13:40:35.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:40:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3621: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:36 np0005629333 nova_compute[244014]: 2026-02-25 13:40:36.231 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:40:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3622: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:38 np0005629333 nova_compute[244014]: 2026-02-25 13:40:38.770 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:40:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:40:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3623: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:41 np0005629333 nova_compute[244014]: 2026-02-25 13:40:41.235 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:40:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3624: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:40:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:40:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:40:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:40:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:40:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:40:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:40:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:40:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:40:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:40:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:40:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:40:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:40:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:40:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:40:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:40:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:40:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:40:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:40:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:40:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:40:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:40:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
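[Annotation] The autoscaler numbers above are internally consistent: with the default mon_target_pg_per_osd of 100 and the three ~20 GiB OSDs on this host (total 64411926528 bytes, about 60 GiB), each logged "pg target" equals usage_fraction x bias x 300. A check against the logged values (a sketch; the 100 x 3 budget is an assumption consistent with, not stated in, the log):

    # pg target ~= usage_fraction * bias * (mon_target_pg_per_osd * n_osds)
    TARGET_PGS = 100 * 3  # assumed default target per OSD, times 3 OSDs

    def pg_target(usage_fraction, bias):
        return usage_fraction * bias * TARGET_PGS

    # Values copied from the pg_autoscaler lines above:
    print(pg_target(7.185749983720779e-06, 1.0))   # '.mgr'               -> 0.0021557249951162337
    print(pg_target(0.0006714637386478266, 1.0))   # 'images'             -> 0.20143912159434796
    print(pg_target(1.3916366864300228e-06, 4.0))  # 'cephfs.cephfs.meta' -> 0.0016699640237160273

Each product matches the logged "pg target" exactly; since the fractional targets are far below the pools' current pg_num, the autoscaler keeps the current values (1, 32 and 16) and proposes no resize.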
Feb 25 08:40:43 np0005629333 nova_compute[244014]: 2026-02-25 13:40:43.773 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:40:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3625: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:40:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3626: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:46 np0005629333 nova_compute[244014]: 2026-02-25 13:40:46.239 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:40:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:40:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3630820822' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:40:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:40:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3630820822' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:40:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3627: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:48 np0005629333 nova_compute[244014]: 2026-02-25 13:40:48.776 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:40:48 np0005629333 nova_compute[244014]: 2026-02-25 13:40:48.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:40:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:40:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3628: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:51 np0005629333 nova_compute[244014]: 2026-02-25 13:40:51.244 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:40:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3629: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:53 np0005629333 nova_compute[244014]: 2026-02-25 13:40:53.776 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:40:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3630: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:40:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:40:55.084 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:40:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:40:55.084 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:40:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:40:55.085 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
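[Annotation] The acquiring/acquired/released triplet above is the standard oslo.concurrency pattern; the debug lines are emitted by the lockutils wrapper, not by neutron itself. A minimal sketch of the pattern, assuming oslo.concurrency is installed (the function body is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # Runs with the in-process lock held; the "Acquiring lock ...",
        # "Lock ... acquired ..." and "Lock ... released ..." debug lines
        # in the log come from the decorator's inner() wrapper.
        pass

    _check_child_processes()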
Feb 25 08:40:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3631: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:56 np0005629333 nova_compute[244014]: 2026-02-25 13:40:56.249 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:40:56 np0005629333 podman[411519]: 2026-02-25 13:40:56.728363924 +0000 UTC m=+0.061923728 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0)
Feb 25 08:40:56 np0005629333 podman[411520]: 2026-02-25 13:40:56.760020313 +0000 UTC m=+0.092490546 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
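[Annotation] Both health_status=healthy events above come from podman's healthcheck timer running the configured test (the /openstack/healthcheck script mounted from /var/lib/openstack/healthchecks/..., per the config_data in the event). The same check can be triggered by hand; a sketch, assuming the podman CLI and these container names:

    import subprocess

    # `podman healthcheck run` exits 0 when the configured test passes.
    for name in ("ovn_metadata_agent", "ovn_controller"):
        rc = subprocess.call(["podman", "healthcheck", "run", name])
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")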
Feb 25 08:40:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3632: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:40:58 np0005629333 nova_compute[244014]: 2026-02-25 13:40:58.778 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:40:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:41:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3633: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:41:01 np0005629333 nova_compute[244014]: 2026-02-25 13:41:01.252 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:41:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:41:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:41:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:41:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:41:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:41:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:41:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3634: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:41:03 np0005629333 nova_compute[244014]: 2026-02-25 13:41:03.780 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:41:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3635: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:41:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:41:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3636: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:41:06 np0005629333 nova_compute[244014]: 2026-02-25 13:41:06.265 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:41:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3637: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:41:08 np0005629333 nova_compute[244014]: 2026-02-25 13:41:08.783 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:41:08 np0005629333 nova_compute[244014]: 2026-02-25 13:41:08.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:41:08 np0005629333 nova_compute[244014]: 2026-02-25 13:41:08.915 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:41:08 np0005629333 nova_compute[244014]: 2026-02-25 13:41:08.915 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:41:08 np0005629333 nova_compute[244014]: 2026-02-25 13:41:08.916 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:41:08 np0005629333 nova_compute[244014]: 2026-02-25 13:41:08.916 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:41:08 np0005629333 nova_compute[244014]: 2026-02-25 13:41:08.917 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:41:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:41:09 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/632894411' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:41:09 np0005629333 nova_compute[244014]: 2026-02-25 13:41:09.471 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
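[Annotation] update_available_resource shells out to `ceph df --format=json` (dispatched by the mon as client.openstack above) to size the RBD-backed disk pool; the free_disk value in the hypervisor resource view below is total_avail_bytes converted to GiB. A sketch of the same query, mirroring the logged command and assuming the standard `ceph df` JSON layout with a top-level "stats" object:

    import json
    import subprocess

    raw = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    )
    stats = json.loads(raw)["stats"]
    print("free_disk=%sGB" % (stats["total_avail_bytes"] / 2**30))

On this cluster that corresponds to the free_disk=59.98723818361759GB reported two entries below.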
Feb 25 08:41:09 np0005629333 nova_compute[244014]: 2026-02-25 13:41:09.676 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:41:09 np0005629333 nova_compute[244014]: 2026-02-25 13:41:09.677 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3546MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:41:09 np0005629333 nova_compute[244014]: 2026-02-25 13:41:09.678 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:41:09 np0005629333 nova_compute[244014]: 2026-02-25 13:41:09.678 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:41:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:41:09 np0005629333 nova_compute[244014]: 2026-02-25 13:41:09.847 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:41:09 np0005629333 nova_compute[244014]: 2026-02-25 13:41:09.848 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:41:09 np0005629333 nova_compute[244014]: 2026-02-25 13:41:09.935 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 25 08:41:10 np0005629333 nova_compute[244014]: 2026-02-25 13:41:10.039 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 25 08:41:10 np0005629333 nova_compute[244014]: 2026-02-25 13:41:10.040 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 25 08:41:10 np0005629333 nova_compute[244014]: 2026-02-25 13:41:10.058 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 25 08:41:10 np0005629333 nova_compute[244014]: 2026-02-25 13:41:10.081 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 25 08:41:10 np0005629333 nova_compute[244014]: 2026-02-25 13:41:10.098 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:41:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3638: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:41:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:41:10 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3197378911' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:41:10 np0005629333 nova_compute[244014]: 2026-02-25 13:41:10.669 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:41:10 np0005629333 nova_compute[244014]: 2026-02-25 13:41:10.676 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:41:10 np0005629333 nova_compute[244014]: 2026-02-25 13:41:10.708 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:41:10 np0005629333 nova_compute[244014]: 2026-02-25 13:41:10.711 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:41:10 np0005629333 nova_compute[244014]: 2026-02-25 13:41:10.712 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
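[Annotation] The inventory dict the tracker reports to Placement above determines schedulable capacity per resource class as (total - reserved) x allocation_ratio, which is Placement's documented capacity rule. Worked out for the logged values (a sketch):

    # Effective capacity per resource class: (total - reserved) * allocation_ratio
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2

So this 8-vCPU, 7679 MB node advertises 32 schedulable vCPUs, 7167 MB of RAM and 52.2 GB of disk to the scheduler.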
Feb 25 08:41:11 np0005629333 nova_compute[244014]: 2026-02-25 13:41:11.303 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:41:11 np0005629333 nova_compute[244014]: 2026-02-25 13:41:11.713 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:41:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3639: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:41:13 np0005629333 nova_compute[244014]: 2026-02-25 13:41:13.784 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:41:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3640: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:41:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:41:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3641: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:41:16 np0005629333 nova_compute[244014]: 2026-02-25 13:41:16.307 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:41:17 np0005629333 nova_compute[244014]: 2026-02-25 13:41:17.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:41:17 np0005629333 nova_compute[244014]: 2026-02-25 13:41:17.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:41:17 np0005629333 nova_compute[244014]: 2026-02-25 13:41:17.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:41:17 np0005629333 nova_compute[244014]: 2026-02-25 13:41:17.895 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:41:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3642: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:41:18 np0005629333 nova_compute[244014]: 2026-02-25 13:41:18.786 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:41:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:41:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3643: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:41:20 np0005629333 nova_compute[244014]: 2026-02-25 13:41:20.891 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:41:21 np0005629333 nova_compute[244014]: 2026-02-25 13:41:21.350 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:41:21 np0005629333 nova_compute[244014]: 2026-02-25 13:41:21.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:41:21 np0005629333 nova_compute[244014]: 2026-02-25 13:41:21.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 08:41:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3644: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Feb 25 08:41:23 np0005629333 nova_compute[244014]: 2026-02-25 13:41:23.790 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:41:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3645: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Feb 25 08:41:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:41:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3646: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Feb 25 08:41:26 np0005629333 nova_compute[244014]: 2026-02-25 13:41:26.352 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:41:27 np0005629333 podman[411606]: 2026-02-25 13:41:27.744717875 +0000 UTC m=+0.084237769 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 08:41:27 np0005629333 podman[411607]: 2026-02-25 13:41:27.74907339 +0000 UTC m=+0.087931115 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
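Both agents report health_status=healthy with a failing streak of 0. The same state can be read back on demand by inspecting the containers named in the events above (a minimal sketch shelling out to podman):

    import json
    import subprocess

    for name in ("ovn_metadata_agent", "ovn_controller"):
        out = subprocess.check_output(["podman", "inspect", name])
        state = json.loads(out)[0]["State"]
        print(name, state.get("Health", {}).get("Status"))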
Feb 25 08:41:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3647: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 08:41:28 np0005629333 nova_compute[244014]: 2026-02-25 13:41:28.790 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:41:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:41:29 np0005629333 nova_compute[244014]: 2026-02-25 13:41:29.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:41:29 np0005629333 nova_compute[244014]: 2026-02-25 13:41:29.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:41:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3648: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 08:41:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:41:31
Feb 25 08:41:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:41:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:41:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['images', 'backups', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes']
Feb 25 08:41:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
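One complete balancer pass: upmap mode with a 5% misplaced ceiling, a walk over the eleven pools, and 0 of at most 10 upmap changes prepared because placement is already even. The same summary is available on demand; a minimal sketch calling the ceph CLI from Python (assuming an admin keyring on the host, and that the status JSON carries the usual mode/active fields):

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "balancer", "status", "--format", "json"])
    status = json.loads(out)
    print(status["mode"], status["active"])  # e.g. "upmap" True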
Feb 25 08:41:31 np0005629333 nova_compute[244014]: 2026-02-25 13:41:31.354 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:41:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:41:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:41:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:41:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:41:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:41:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:41:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3649: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 08:41:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:41:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:41:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:41:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:41:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:41:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:41:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:41:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:41:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:41:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:41:32 np0005629333 nova_compute[244014]: 2026-02-25 13:41:32.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:41:33 np0005629333 nova_compute[244014]: 2026-02-25 13:41:33.792 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:41:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3650: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Feb 25 08:41:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:41:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:41:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:41:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:41:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:41:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:41:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:41:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:41:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:41:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:41:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:41:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:41:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
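This audit burst is cephadm's periodic reconcile: the mgr requests a minimal ceph.conf, the client.admin and client.bootstrap-osd keyrings, the destroyed-OSD subtree, and persists mgr/cephadm/osd_remove_queue. The same mon_command interface the mgr uses here is reachable from Python via librados (a minimal sketch assuming /etc/ceph/ceph.conf and an admin keyring are readable):

    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        cmd = json.dumps({"prefix": "config generate-minimal-conf"})
        ret, out, errs = cluster.mon_command(cmd, b"")
        print(out.decode())  # minimal [global] section: fsid + mon_host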
Feb 25 08:41:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3651: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Feb 25 08:41:36 np0005629333 podman[411795]: 2026-02-25 13:41:36.103494307 +0000 UTC m=+0.037964241 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:41:36 np0005629333 podman[411795]: 2026-02-25 13:41:36.393159143 +0000 UTC m=+0.327629037 container create fc1405cc682651ec8638f25eb242e0373d906701c683d56f35fb942d40d336b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_hugle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 08:41:36 np0005629333 nova_compute[244014]: 2026-02-25 13:41:36.399 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:41:36 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:41:36 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:41:36 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:41:36 np0005629333 systemd[1]: Started libpod-conmon-fc1405cc682651ec8638f25eb242e0373d906701c683d56f35fb942d40d336b0.scope.
Feb 25 08:41:36 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:41:36 np0005629333 podman[411795]: 2026-02-25 13:41:36.549032888 +0000 UTC m=+0.483502812 container init fc1405cc682651ec8638f25eb242e0373d906701c683d56f35fb942d40d336b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 08:41:36 np0005629333 podman[411795]: 2026-02-25 13:41:36.55989979 +0000 UTC m=+0.494369684 container start fc1405cc682651ec8638f25eb242e0373d906701c683d56f35fb942d40d336b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_hugle, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:41:36 np0005629333 podman[411795]: 2026-02-25 13:41:36.564297356 +0000 UTC m=+0.498767310 container attach fc1405cc682651ec8638f25eb242e0373d906701c683d56f35fb942d40d336b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Feb 25 08:41:36 np0005629333 admiring_hugle[411812]: 167 167
Feb 25 08:41:36 np0005629333 systemd[1]: libpod-fc1405cc682651ec8638f25eb242e0373d906701c683d56f35fb942d40d336b0.scope: Deactivated successfully.
Feb 25 08:41:36 np0005629333 podman[411795]: 2026-02-25 13:41:36.572831321 +0000 UTC m=+0.507301225 container died fc1405cc682651ec8638f25eb242e0373d906701c683d56f35fb942d40d336b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_hugle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:41:36 np0005629333 systemd[1]: var-lib-containers-storage-overlay-538fa063cc7656193617f738e2c911c406992be5ff989fb5f17edebb821009e1-merged.mount: Deactivated successfully.
Feb 25 08:41:36 np0005629333 podman[411795]: 2026-02-25 13:41:36.662435204 +0000 UTC m=+0.596905098 container remove fc1405cc682651ec8638f25eb242e0373d906701c683d56f35fb942d40d336b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_hugle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 08:41:36 np0005629333 systemd[1]: libpod-conmon-fc1405cc682651ec8638f25eb242e0373d906701c683d56f35fb942d40d336b0.scope: Deactivated successfully.
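create, init, start, attach, died, remove inside one second is the signature of a short-lived podman run --rm helper launched by cephadm; this one printed "167 167", the ceph uid/gid inside the image. The same churn can be watched live from the events stream (a minimal sketch; field names assume podman's JSON event format):

    import json
    import subprocess

    # Runs until interrupted; prints lifecycle events as cephadm spawns helpers.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        if ev.get("Status") in ("create", "start", "died", "remove"):
            print(ev["Status"], ev.get("Name"))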
Feb 25 08:41:36 np0005629333 podman[411835]: 2026-02-25 13:41:36.8758386 +0000 UTC m=+0.051045446 container create f2c5afa8cefba78a7d3f33ec7c573ac6cf209cdfffb728f175b92a4667397987 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_feistel, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 08:41:36 np0005629333 nova_compute[244014]: 2026-02-25 13:41:36.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:41:36 np0005629333 systemd[1]: Started libpod-conmon-f2c5afa8cefba78a7d3f33ec7c573ac6cf209cdfffb728f175b92a4667397987.scope.
Feb 25 08:41:36 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:41:36 np0005629333 podman[411835]: 2026-02-25 13:41:36.85597177 +0000 UTC m=+0.031178426 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:41:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d743f57c2839b889c137297e8c4efb9cdbc77fb2f64e56671aca8d38b8d67f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:41:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d743f57c2839b889c137297e8c4efb9cdbc77fb2f64e56671aca8d38b8d67f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:41:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d743f57c2839b889c137297e8c4efb9cdbc77fb2f64e56671aca8d38b8d67f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:41:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d743f57c2839b889c137297e8c4efb9cdbc77fb2f64e56671aca8d38b8d67f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:41:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d743f57c2839b889c137297e8c4efb9cdbc77fb2f64e56671aca8d38b8d67f1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:41:37 np0005629333 podman[411835]: 2026-02-25 13:41:37.091809851 +0000 UTC m=+0.267016507 container init f2c5afa8cefba78a7d3f33ec7c573ac6cf209cdfffb728f175b92a4667397987 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 08:41:37 np0005629333 podman[411835]: 2026-02-25 13:41:37.100561442 +0000 UTC m=+0.275768098 container start f2c5afa8cefba78a7d3f33ec7c573ac6cf209cdfffb728f175b92a4667397987 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_feistel, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 08:41:37 np0005629333 podman[411835]: 2026-02-25 13:41:37.120197816 +0000 UTC m=+0.295404492 container attach f2c5afa8cefba78a7d3f33ec7c573ac6cf209cdfffb728f175b92a4667397987 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_feistel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:41:37 np0005629333 stupefied_feistel[411851]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:41:37 np0005629333 stupefied_feistel[411851]: --> All data devices are unavailable
Feb 25 08:41:37 np0005629333 systemd[1]: libpod-f2c5afa8cefba78a7d3f33ec7c573ac6cf209cdfffb728f175b92a4667397987.scope: Deactivated successfully.
Feb 25 08:41:37 np0005629333 podman[411872]: 2026-02-25 13:41:37.734959165 +0000 UTC m=+0.044957901 container died f2c5afa8cefba78a7d3f33ec7c573ac6cf209cdfffb728f175b92a4667397987 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:41:37 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6d743f57c2839b889c137297e8c4efb9cdbc77fb2f64e56671aca8d38b8d67f1-merged.mount: Deactivated successfully.
Feb 25 08:41:38 np0005629333 podman[411872]: 2026-02-25 13:41:38.055611951 +0000 UTC m=+0.365610687 container remove f2c5afa8cefba78a7d3f33ec7c573ac6cf209cdfffb728f175b92a4667397987 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_feistel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 08:41:38 np0005629333 systemd[1]: libpod-conmon-f2c5afa8cefba78a7d3f33ec7c573ac6cf209cdfffb728f175b92a4667397987.scope: Deactivated successfully.
Feb 25 08:41:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3652: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Feb 25 08:41:38 np0005629333 podman[411951]: 2026-02-25 13:41:38.521594119 +0000 UTC m=+0.046769254 container create da8f42a1593c9ae24a90034d29d020be1fcdff3b4e752d04551051f6b780faff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:41:38 np0005629333 systemd[1]: Started libpod-conmon-da8f42a1593c9ae24a90034d29d020be1fcdff3b4e752d04551051f6b780faff.scope.
Feb 25 08:41:38 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:41:38 np0005629333 podman[411951]: 2026-02-25 13:41:38.496817818 +0000 UTC m=+0.021992963 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:41:38 np0005629333 podman[411951]: 2026-02-25 13:41:38.608182125 +0000 UTC m=+0.133357310 container init da8f42a1593c9ae24a90034d29d020be1fcdff3b4e752d04551051f6b780faff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Feb 25 08:41:38 np0005629333 podman[411951]: 2026-02-25 13:41:38.617594025 +0000 UTC m=+0.142769160 container start da8f42a1593c9ae24a90034d29d020be1fcdff3b4e752d04551051f6b780faff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:41:38 np0005629333 vigorous_cartwright[411967]: 167 167
Feb 25 08:41:38 np0005629333 systemd[1]: libpod-da8f42a1593c9ae24a90034d29d020be1fcdff3b4e752d04551051f6b780faff.scope: Deactivated successfully.
Feb 25 08:41:38 np0005629333 podman[411951]: 2026-02-25 13:41:38.626053158 +0000 UTC m=+0.151228353 container attach da8f42a1593c9ae24a90034d29d020be1fcdff3b4e752d04551051f6b780faff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 08:41:38 np0005629333 podman[411951]: 2026-02-25 13:41:38.627411227 +0000 UTC m=+0.152586322 container died da8f42a1593c9ae24a90034d29d020be1fcdff3b4e752d04551051f6b780faff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:41:38 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7fc0eabf5ef6fa6fe6a6803adbcc5ad364b2d9ea588041a89117f2d583c3c994-merged.mount: Deactivated successfully.
Feb 25 08:41:38 np0005629333 podman[411951]: 2026-02-25 13:41:38.70484493 +0000 UTC m=+0.230020025 container remove da8f42a1593c9ae24a90034d29d020be1fcdff3b4e752d04551051f6b780faff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 08:41:38 np0005629333 systemd[1]: libpod-conmon-da8f42a1593c9ae24a90034d29d020be1fcdff3b4e752d04551051f6b780faff.scope: Deactivated successfully.
Feb 25 08:41:38 np0005629333 nova_compute[244014]: 2026-02-25 13:41:38.794 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:41:38 np0005629333 podman[411994]: 2026-02-25 13:41:38.958296895 +0000 UTC m=+0.101936946 container create 94ec62c9056636c79fcf13f9546cbc7664ea4f55a68da0e2b5994ce3b470f6bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_euler, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:41:38 np0005629333 podman[411994]: 2026-02-25 13:41:38.895518564 +0000 UTC m=+0.039158655 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:41:39 np0005629333 systemd[1]: Started libpod-conmon-94ec62c9056636c79fcf13f9546cbc7664ea4f55a68da0e2b5994ce3b470f6bf.scope.
Feb 25 08:41:39 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:41:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebebc17e82a824634b32afdb9b01b7f7c259e72a93c8f07893f04766c9ec129/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:41:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebebc17e82a824634b32afdb9b01b7f7c259e72a93c8f07893f04766c9ec129/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:41:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebebc17e82a824634b32afdb9b01b7f7c259e72a93c8f07893f04766c9ec129/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:41:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebebc17e82a824634b32afdb9b01b7f7c259e72a93c8f07893f04766c9ec129/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:41:39 np0005629333 podman[411994]: 2026-02-25 13:41:39.088189445 +0000 UTC m=+0.231829476 container init 94ec62c9056636c79fcf13f9546cbc7664ea4f55a68da0e2b5994ce3b470f6bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_euler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 08:41:39 np0005629333 podman[411994]: 2026-02-25 13:41:39.096869444 +0000 UTC m=+0.240509445 container start 94ec62c9056636c79fcf13f9546cbc7664ea4f55a68da0e2b5994ce3b470f6bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 08:41:39 np0005629333 podman[411994]: 2026-02-25 13:41:39.102507396 +0000 UTC m=+0.246147387 container attach 94ec62c9056636c79fcf13f9546cbc7664ea4f55a68da0e2b5994ce3b470f6bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_euler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]: {
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:    "0": [
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:        {
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "devices": [
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "/dev/loop3"
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            ],
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "lv_name": "ceph_lv0",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "lv_size": "21470642176",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "name": "ceph_lv0",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "tags": {
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.cluster_name": "ceph",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.crush_device_class": "",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.encrypted": "0",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.objectstore": "bluestore",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.osd_id": "0",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.type": "block",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.vdo": "0",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.with_tpm": "0"
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            },
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "type": "block",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "vg_name": "ceph_vg0"
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:        }
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:    ],
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:    "1": [
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:        {
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "devices": [
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "/dev/loop4"
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            ],
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "lv_name": "ceph_lv1",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "lv_size": "21470642176",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "name": "ceph_lv1",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "tags": {
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.cluster_name": "ceph",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.crush_device_class": "",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.encrypted": "0",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.objectstore": "bluestore",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.osd_id": "1",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.type": "block",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.vdo": "0",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.with_tpm": "0"
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            },
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "type": "block",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "vg_name": "ceph_vg1"
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:        }
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:    ],
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:    "2": [
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:        {
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "devices": [
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "/dev/loop5"
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            ],
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "lv_name": "ceph_lv2",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "lv_size": "21470642176",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "name": "ceph_lv2",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "tags": {
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.cluster_name": "ceph",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.crush_device_class": "",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.encrypted": "0",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.objectstore": "bluestore",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.osd_id": "2",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.type": "block",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.vdo": "0",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:                "ceph.with_tpm": "0"
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            },
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "type": "block",
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:            "vg_name": "ceph_vg2"
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:        }
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]:    ]
Feb 25 08:41:39 np0005629333 unruffled_euler[412010]: }
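The helper dumped what looks like ceph-volume lvm list --format json: a map of OSD id to logical volumes, with the cluster fsid, osd_fsid, and objectstore carried as LV tags. Pulling out the essentials is a one-pass walk (a minimal sketch; feed it the JSON block printed above on stdin):

    import json
    import sys

    inventory = json.loads(sys.stdin.read())
    for osd_id, lvs in sorted(inventory.items()):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices'][0]} "
                  f"(osd_fsid={tags['ceph.osd_fsid']})")
    # -> osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (osd_fsid=d19afe3c-...)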
Feb 25 08:41:39 np0005629333 systemd[1]: libpod-94ec62c9056636c79fcf13f9546cbc7664ea4f55a68da0e2b5994ce3b470f6bf.scope: Deactivated successfully.
Feb 25 08:41:39 np0005629333 podman[411994]: 2026-02-25 13:41:39.415267775 +0000 UTC m=+0.558907776 container died 94ec62c9056636c79fcf13f9546cbc7664ea4f55a68da0e2b5994ce3b470f6bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:41:39 np0005629333 systemd[1]: var-lib-containers-storage-overlay-eebebc17e82a824634b32afdb9b01b7f7c259e72a93c8f07893f04766c9ec129-merged.mount: Deactivated successfully.
Feb 25 08:41:39 np0005629333 podman[411994]: 2026-02-25 13:41:39.473678282 +0000 UTC m=+0.617318273 container remove 94ec62c9056636c79fcf13f9546cbc7664ea4f55a68da0e2b5994ce3b470f6bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_euler, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:41:39 np0005629333 systemd[1]: libpod-conmon-94ec62c9056636c79fcf13f9546cbc7664ea4f55a68da0e2b5994ce3b470f6bf.scope: Deactivated successfully.
Feb 25 08:41:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:41:39 np0005629333 podman[412093]: 2026-02-25 13:41:39.990976933 +0000 UTC m=+0.086064482 container create ea4c7992dffb0ca9c5f4f2405a8fcdd663df1c0754a967b60b9a089ee0f84db8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 08:41:40 np0005629333 podman[412093]: 2026-02-25 13:41:39.931143715 +0000 UTC m=+0.026231324 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:41:40 np0005629333 systemd[1]: Started libpod-conmon-ea4c7992dffb0ca9c5f4f2405a8fcdd663df1c0754a967b60b9a089ee0f84db8.scope.
Feb 25 08:41:40 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:41:40 np0005629333 podman[412093]: 2026-02-25 13:41:40.08490616 +0000 UTC m=+0.179993669 container init ea4c7992dffb0ca9c5f4f2405a8fcdd663df1c0754a967b60b9a089ee0f84db8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 08:41:40 np0005629333 podman[412093]: 2026-02-25 13:41:40.094289849 +0000 UTC m=+0.189377388 container start ea4c7992dffb0ca9c5f4f2405a8fcdd663df1c0754a967b60b9a089ee0f84db8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 08:41:40 np0005629333 cranky_dubinsky[412109]: 167 167
Feb 25 08:41:40 np0005629333 systemd[1]: libpod-ea4c7992dffb0ca9c5f4f2405a8fcdd663df1c0754a967b60b9a089ee0f84db8.scope: Deactivated successfully.
Feb 25 08:41:40 np0005629333 podman[412093]: 2026-02-25 13:41:40.105681856 +0000 UTC m=+0.200769395 container attach ea4c7992dffb0ca9c5f4f2405a8fcdd663df1c0754a967b60b9a089ee0f84db8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_dubinsky, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:41:40 np0005629333 podman[412093]: 2026-02-25 13:41:40.106965133 +0000 UTC m=+0.202052642 container died ea4c7992dffb0ca9c5f4f2405a8fcdd663df1c0754a967b60b9a089ee0f84db8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 08:41:40 np0005629333 systemd[1]: var-lib-containers-storage-overlay-8d90b6984cfc6d28c1bfc6c7b2c0edc1643098481a3f37f544544814231aff08-merged.mount: Deactivated successfully.
Feb 25 08:41:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3653: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:41:40 np0005629333 podman[412093]: 2026-02-25 13:41:40.178781415 +0000 UTC m=+0.273868934 container remove ea4c7992dffb0ca9c5f4f2405a8fcdd663df1c0754a967b60b9a089ee0f84db8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 08:41:40 np0005629333 systemd[1]: libpod-conmon-ea4c7992dffb0ca9c5f4f2405a8fcdd663df1c0754a967b60b9a089ee0f84db8.scope: Deactivated successfully.
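
The create/init/start/attach/died/remove sequence above is one short-lived probe container, the pattern cephadm uses to run one-off commands on a host inside the ceph image. Such containers live only fractions of a second; a sketch for measuring their lifetimes from this journal, assuming only the timestamp and "container <verb> <id>" layout visible in the podman lines above (hypothetical helper, not a podman API):

    import re
    from datetime import datetime

    EVENT = re.compile(
        r'(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC .* '
        r'container (?P<verb>\w+) (?P<cid>[0-9a-f]{64})')

    def lifetimes(journal_lines):
        """Yield (short container id, seconds alive) for each create..remove pair."""
        created = {}
        for line in journal_lines:
            m = EVENT.search(line)
            if not m:
                continue
            # journald shows nanoseconds; strptime takes at most microseconds.
            ts = datetime.strptime(m['ts'][:26], '%Y-%m-%d %H:%M:%S.%f')
            if m['verb'] == 'create':
                created[m['cid']] = ts
            elif m['verb'] == 'remove' and m['cid'] in created:
                yield m['cid'][:12], (ts - created.pop(m['cid'])).total_seconds()

For ea4c7992dffb above (created 13:41:39.990, removed 13:41:40.178) this gives roughly 0.19 s.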
Feb 25 08:41:40 np0005629333 podman[412131]: 2026-02-25 13:41:40.358960838 +0000 UTC m=+0.053438245 container create 036d238a0749e5cbd6658f9a8cf47b3277c943cf58bce06eb7042f8b709c411e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cartwright, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:41:40 np0005629333 systemd[1]: Started libpod-conmon-036d238a0749e5cbd6658f9a8cf47b3277c943cf58bce06eb7042f8b709c411e.scope.
Feb 25 08:41:40 np0005629333 podman[412131]: 2026-02-25 13:41:40.331807888 +0000 UTC m=+0.026285295 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:41:40 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:41:40 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f6641106191e3bf7331cff4f38338ed7b0a3a68668faa14573aba53a4837571/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:41:40 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f6641106191e3bf7331cff4f38338ed7b0a3a68668faa14573aba53a4837571/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:41:40 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f6641106191e3bf7331cff4f38338ed7b0a3a68668faa14573aba53a4837571/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:41:40 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f6641106191e3bf7331cff4f38338ed7b0a3a68668faa14573aba53a4837571/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
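
These four xfs warnings fire because the filesystems were created without the bigtime feature, so inode timestamps top out at 0x7fffffff seconds. The cutoff the kernel prints is easy to verify:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed Unix time, the value in the warning.
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00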
Feb 25 08:41:40 np0005629333 podman[412131]: 2026-02-25 13:41:40.515582374 +0000 UTC m=+0.210059751 container init 036d238a0749e5cbd6658f9a8cf47b3277c943cf58bce06eb7042f8b709c411e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 08:41:40 np0005629333 podman[412131]: 2026-02-25 13:41:40.522517614 +0000 UTC m=+0.216994981 container start 036d238a0749e5cbd6658f9a8cf47b3277c943cf58bce06eb7042f8b709c411e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 08:41:40 np0005629333 podman[412131]: 2026-02-25 13:41:40.545650078 +0000 UTC m=+0.240127455 container attach 036d238a0749e5cbd6658f9a8cf47b3277c943cf58bce06eb7042f8b709c411e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cartwright, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:41:41 np0005629333 lvm[412225]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:41:41 np0005629333 lvm[412227]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:41:41 np0005629333 lvm[412228]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:41:41 np0005629333 lvm[412225]: VG ceph_vg0 finished
Feb 25 08:41:41 np0005629333 lvm[412228]: VG ceph_vg2 finished
Feb 25 08:41:41 np0005629333 lvm[412227]: VG ceph_vg1 finished
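
The lvm messages show event-driven autoactivation: each pvscan marks a PV online, and once every PV in a VG is online the VG is reported complete and activated (here each ceph_vg* sits on a single loop device). A sketch that reproduces the PV-to-VG mapping those messages describe, assuming lvm2's JSON reporting (`pvs --reportformat json`) is available:

    import json, subprocess
    from collections import defaultdict

    out = subprocess.run(
        ['pvs', '--reportformat', 'json', '-o', 'pv_name,vg_name'],
        capture_output=True, text=True, check=True).stdout

    by_vg = defaultdict(list)
    for pv in json.loads(out)['report'][0]['pv']:
        by_vg[pv['vg_name']].append(pv['pv_name'])

    for vg, pvs in sorted(by_vg.items()):
        print(f"VG {vg}: {len(pvs)} PV(s): {', '.join(pvs)}")
    # expected here: ceph_vg0 -> /dev/loop3, ceph_vg1 -> /dev/loop4, ceph_vg2 -> /dev/loop5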
Feb 25 08:41:41 np0005629333 adoring_cartwright[412147]: {}
Feb 25 08:41:41 np0005629333 systemd[1]: libpod-036d238a0749e5cbd6658f9a8cf47b3277c943cf58bce06eb7042f8b709c411e.scope: Deactivated successfully.
Feb 25 08:41:41 np0005629333 systemd[1]: libpod-036d238a0749e5cbd6658f9a8cf47b3277c943cf58bce06eb7042f8b709c411e.scope: Consumed 1.273s CPU time.
Feb 25 08:41:41 np0005629333 podman[412131]: 2026-02-25 13:41:41.372452335 +0000 UTC m=+1.066929702 container died 036d238a0749e5cbd6658f9a8cf47b3277c943cf58bce06eb7042f8b709c411e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cartwright, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 08:41:41 np0005629333 nova_compute[244014]: 2026-02-25 13:41:41.402 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:41:41 np0005629333 systemd[1]: var-lib-containers-storage-overlay-9f6641106191e3bf7331cff4f38338ed7b0a3a68668faa14573aba53a4837571-merged.mount: Deactivated successfully.
Feb 25 08:41:41 np0005629333 podman[412131]: 2026-02-25 13:41:41.781178799 +0000 UTC m=+1.475656186 container remove 036d238a0749e5cbd6658f9a8cf47b3277c943cf58bce06eb7042f8b709c411e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 08:41:41 np0005629333 systemd[1]: libpod-conmon-036d238a0749e5cbd6658f9a8cf47b3277c943cf58bce06eb7042f8b709c411e.scope: Deactivated successfully.
Feb 25 08:41:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:41:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:41:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:41:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
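
The two handle_command/audit pairs above are consistent with the cephadm mgr module caching this host's device inventory (gathered by the probe containers just torn down) under mgr/cephadm/* keys in the monitor's config-key store. The cached value can be read back through the standard config-key interface; a hedged sketch, with the key name taken from the log line and the payload assumed to be ceph-volume inventory JSON:

    import json, subprocess

    key = 'mgr/cephadm/host.compute-0.devices.0'
    out = subprocess.run(['ceph', 'config-key', 'get', key],
                         capture_output=True, text=True, check=True).stdout
    devices = json.loads(out)
    print(f'{key}: {len(out)} bytes of cached inventory')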
Feb 25 08:41:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3654: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:41:42 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:41:42 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:41:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:41:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:41:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:41:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:41:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:41:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:41:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:41:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:41:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:41:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:41:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:41:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:41:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:41:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:41:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:41:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:41:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:41:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:41:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:41:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:41:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:41:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:41:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
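
Each pg_autoscaler pair above follows the same arithmetic: the pool's share of raw space, times its bias, times the cluster's PG budget gives the raw pg target, which is then quantized to a power of two (and left alone when it is already at or below the pool's current value). The budget here works out to 300, consistent with 3 OSDs at the default mon_target_pg_per_osd of 100; that constant is inferred from the numbers, not stated in the log. A check against the logged values:

    # usage ratio and bias copied from the pg_autoscaler lines above
    BUDGET = 300  # assumed: 3 OSDs * mon_target_pg_per_osd (default 100)

    for pool, ratio, bias in [
        ('.mgr',               7.185749983720779e-06,  1.0),
        ('images',             0.0006714637386478266,  1.0),
        ('cephfs.cephfs.meta', 1.3916366864300228e-06, 4.0),
    ]:
        target = ratio * bias * BUDGET
        print(f"{pool:20s} pg target {target:.16g}")
    # -> 0.002155724995116234, 0.201439121594348, 0.001669964023716027
    # matching the logged targets to float precision; all are far below each
    # pool's current pg_num, so every pool stays quantized where it is.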
Feb 25 08:41:43 np0005629333 nova_compute[244014]: 2026-02-25 13:41:43.796 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:41:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3655: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:41:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:41:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3656: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:41:46 np0005629333 nova_compute[244014]: 2026-02-25 13:41:46.405 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:41:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:41:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1120100958' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:41:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:41:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1120100958' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:41:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3657: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:41:48 np0005629333 nova_compute[244014]: 2026-02-25 13:41:48.799 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:41:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:41:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3658: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:41:51 np0005629333 nova_compute[244014]: 2026-02-25 13:41:51.459 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:41:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3659: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:41:53 np0005629333 nova_compute[244014]: 2026-02-25 13:41:53.841 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:41:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3660: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:41:54 np0005629333 kernel: hrtimer: interrupt took 12430707 ns
Feb 25 08:41:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:41:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:41:55.085 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:41:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:41:55.086 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:41:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:41:55.086 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:41:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3661: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:41:56 np0005629333 nova_compute[244014]: 2026-02-25 13:41:56.462 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:41:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3662: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:41:58 np0005629333 nova_compute[244014]: 2026-02-25 13:41:58.843 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:41:58 np0005629333 podman[412271]: 2026-02-25 13:41:58.84679613 +0000 UTC m=+0.178016381 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 08:41:58 np0005629333 podman[412272]: 2026-02-25 13:41:58.869987566 +0000 UTC m=+0.085374401 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223)
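
The two health_status entries are podman's periodic healthchecks: the config_data block shows each container mounting /var/lib/openstack/healthchecks/<name> at /openstack and running /openstack/healthcheck as its test command, and both report healthy with a zero failing streak. The same probe can be triggered on demand; a sketch using the podman CLI, with the container name taken from the log:

    import subprocess

    # 'podman healthcheck run' executes the container's configured test
    # command and exits 0 when the container reports healthy.
    result = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'])
    print('healthy' if result.returncode == 0 else f'unhealthy ({result.returncode})')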
Feb 25 08:41:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:42:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3663: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:01 np0005629333 nova_compute[244014]: 2026-02-25 13:42:01.467 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:42:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:42:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:42:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:42:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:42:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:42:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:42:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3664: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:03 np0005629333 nova_compute[244014]: 2026-02-25 13:42:03.846 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:42:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3665: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:42:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3666: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:06 np0005629333 nova_compute[244014]: 2026-02-25 13:42:06.486 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:42:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3667: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:08 np0005629333 nova_compute[244014]: 2026-02-25 13:42:08.849 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:42:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:42:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3668: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:10 np0005629333 nova_compute[244014]: 2026-02-25 13:42:10.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:42:10 np0005629333 nova_compute[244014]: 2026-02-25 13:42:10.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:42:10 np0005629333 nova_compute[244014]: 2026-02-25 13:42:10.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:42:10 np0005629333 nova_compute[244014]: 2026-02-25 13:42:10.911 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:42:10 np0005629333 nova_compute[244014]: 2026-02-25 13:42:10.911 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:42:10 np0005629333 nova_compute[244014]: 2026-02-25 13:42:10.911 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 08:42:10 np0005629333 nova_compute[244014]: 2026-02-25 13:42:10.912 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:42:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:42:11 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1069399012' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:42:11 np0005629333 nova_compute[244014]: 2026-02-25 13:42:11.488 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:42:11 np0005629333 nova_compute[244014]: 2026-02-25 13:42:11.500 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:42:11 np0005629333 nova_compute[244014]: 2026-02-25 13:42:11.659 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 08:42:11 np0005629333 nova_compute[244014]: 2026-02-25 13:42:11.660 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3529MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 08:42:11 np0005629333 nova_compute[244014]: 2026-02-25 13:42:11.661 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:42:11 np0005629333 nova_compute[244014]: 2026-02-25 13:42:11.661 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:42:11 np0005629333 nova_compute[244014]: 2026-02-25 13:42:11.751 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 08:42:11 np0005629333 nova_compute[244014]: 2026-02-25 13:42:11.751 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 08:42:11 np0005629333 nova_compute[244014]: 2026-02-25 13:42:11.773 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:42:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3669: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:42:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1420449654' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:42:12 np0005629333 nova_compute[244014]: 2026-02-25 13:42:12.332 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:42:12 np0005629333 nova_compute[244014]: 2026-02-25 13:42:12.337 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:42:12 np0005629333 nova_compute[244014]: 2026-02-25 13:42:12.366 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 08:42:12 np0005629333 nova_compute[244014]: 2026-02-25 13:42:12.368 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 08:42:12 np0005629333 nova_compute[244014]: 2026-02-25 13:42:12.369 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
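
This audit runs `ceph df` twice because, with RBD-backed ephemeral storage, the libvirt driver takes its DISK_GB figures from the cluster rather than the local filesystem; the free_disk=59.98...GB in the hypervisor view matches the 59 GiB avail the pgmap lines keep reporting. A sketch of that derivation, using the exact command from the log and the standard ceph df JSON fields:

    import json, subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout

    stats = json.loads(out)['stats']
    free_gb = stats['total_avail_bytes'] / 2**30
    total_gb = stats['total_bytes'] / 2**30
    print(f"free_disk={free_gb:.14f}GB of {total_gb:.0f}GB")  # ~59.99GB of 60GB here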
Feb 25 08:42:13 np0005629333 nova_compute[244014]: 2026-02-25 13:42:13.850 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:42:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3670: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:42:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3671: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:16 np0005629333 nova_compute[244014]: 2026-02-25 13:42:16.491 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:42:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3672: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:18 np0005629333 nova_compute[244014]: 2026-02-25 13:42:18.853 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:42:19 np0005629333 nova_compute[244014]: 2026-02-25 13:42:19.369 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:42:19 np0005629333 nova_compute[244014]: 2026-02-25 13:42:19.369 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 08:42:19 np0005629333 nova_compute[244014]: 2026-02-25 13:42:19.369 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 08:42:19 np0005629333 nova_compute[244014]: 2026-02-25 13:42:19.398 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 08:42:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:42:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3673: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:21 np0005629333 nova_compute[244014]: 2026-02-25 13:42:21.494 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:42:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3674: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:22 np0005629333 nova_compute[244014]: 2026-02-25 13:42:22.901 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:42:23 np0005629333 nova_compute[244014]: 2026-02-25 13:42:23.856 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:42:23 np0005629333 nova_compute[244014]: 2026-02-25 13:42:23.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:42:23 np0005629333 nova_compute[244014]: 2026-02-25 13:42:23.875 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
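
These periodic-task lines follow oslo.service's decorator pattern, and _reclaim_queued_deletes shows the usual config gate: the task still fires on schedule but returns immediately when its interval option is unset. A sketch of that shape (class and option registration abbreviated; not nova's actual code):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt('reclaim_instance_interval', default=0)])

    class ManagerSketch(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            if CONF.reclaim_instance_interval <= 0:
                # matches the "skipping..." DEBUG line in the journal
                return
            # ... soft-deleted instances would be reclaimed here ...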
Feb 25 08:42:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3675: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:42:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3676: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:26 np0005629333 nova_compute[244014]: 2026-02-25 13:42:26.496 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:42:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3677: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:28 np0005629333 nova_compute[244014]: 2026-02-25 13:42:28.859 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:42:29 np0005629333 podman[412359]: 2026-02-25 13:42:29.760468517 +0000 UTC m=+0.084043314 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 25 08:42:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:42:29 np0005629333 podman[412360]: 2026-02-25 13:42:29.839201677 +0000 UTC m=+0.154710883 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Feb 25 08:42:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3678: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:30 np0005629333 nova_compute[244014]: 2026-02-25 13:42:30.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:42:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:42:31
Feb 25 08:42:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:42:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:42:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'volumes', '.rgw.root', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'images', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta']
Feb 25 08:42:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
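
The balancer block above is one idle optimization round: in upmap mode it only acts while the misplaced ratio stays under the logged 0.05 cap, and "prepared 0/10" means none of the up-to-10 attempted upmap changes were needed, as expected with all 305 PGs active+clean. A small sketch for checking the module's state from the CLI (field names assumed from ceph's JSON output):

    import json, subprocess

    out = subprocess.run(['ceph', 'balancer', 'status', '--format=json'],
                         capture_output=True, text=True, check=True).stdout
    status = json.loads(out)
    print('mode:', status.get('mode'), 'active:', status.get('active'))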
Feb 25 08:42:31 np0005629333 nova_compute[244014]: 2026-02-25 13:42:31.499 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:42:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:42:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:42:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:42:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:42:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:42:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:42:31 np0005629333 nova_compute[244014]: 2026-02-25 13:42:31.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:42:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3679: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #183. Immutable memtables: 0.
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.442559) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 183
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026952442606, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 2042, "num_deletes": 251, "total_data_size": 3458396, "memory_usage": 3523184, "flush_reason": "Manual Compaction"}
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #184: started
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026952461430, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 184, "file_size": 3402960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75089, "largest_seqno": 77130, "table_properties": {"data_size": 3393565, "index_size": 5952, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18479, "raw_average_key_size": 20, "raw_value_size": 3375080, "raw_average_value_size": 3660, "num_data_blocks": 264, "num_entries": 922, "num_filter_entries": 922, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772026722, "oldest_key_time": 1772026722, "file_creation_time": 1772026952, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 184, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 18906 microseconds, and 5937 cpu microseconds.
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.461464) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #184: 3402960 bytes OK
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.461484) [db/memtable_list.cc:519] [default] Level-0 commit table #184 started
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.463177) [db/memtable_list.cc:722] [default] Level-0 commit table #184: memtable #1 done
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.463200) EVENT_LOG_v1 {"time_micros": 1772026952463193, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.463224) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 3449875, prev total WAL file size 3449875, number of live WAL files 2.
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000180.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.464350) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [184(3323KB)], [182(8826KB)]
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026952464451, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [184], "files_L6": [182], "score": -1, "input_data_size": 12441637, "oldest_snapshot_seqno": -1}
Feb 25 08:42:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:42:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:42:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:42:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:42:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:42:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:42:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:42:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:42:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:42:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #185: 9190 keys, 10684347 bytes, temperature: kUnknown
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026952543173, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 185, "file_size": 10684347, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10627266, "index_size": 33032, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 241771, "raw_average_key_size": 26, "raw_value_size": 10467300, "raw_average_value_size": 1138, "num_data_blocks": 1271, "num_entries": 9190, "num_filter_entries": 9190, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772026952, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.543644) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 10684347 bytes
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.544967) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.8 rd, 135.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.6 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(6.8) write-amplify(3.1) OK, records in: 9704, records dropped: 514 output_compression: NoCompression
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.545000) EVENT_LOG_v1 {"time_micros": 1772026952544983, "job": 114, "event": "compaction_finished", "compaction_time_micros": 78865, "compaction_time_cpu_micros": 40636, "output_level": 6, "num_output_files": 1, "total_output_size": 10684347, "num_input_records": 9704, "num_output_records": 9190, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000184.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026952545756, "job": 114, "event": "table_file_deletion", "file_number": 184}
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772026952547512, "job": 114, "event": "table_file_deletion", "file_number": 182}
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.464144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.547648) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.547657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.547659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.547661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:42:32 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:42:32.547663) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
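Note: the JOB 113/114 flush-and-compaction pair above logs enough byte counts to recompute the amplification figures RocksDB reports, which appear to be write-amplify = output bytes / L0 input bytes and read-write-amplify = (all input + output) / L0 input. A minimal Python sketch using only the values quoted in the EVENT_LOG_v1 lines above:

    # Byte counts copied verbatim from the JOB 113/114 events above.
    l0_input = 3402960            # table #184 (JOB 113 flush output)
    total_input = 12441637        # "input_data_size" in compaction_started
    l6_input = total_input - l0_input   # table #182, ~8826 KB
    output = 10684347             # "total_output_size" in compaction_finished

    write_amplify = output / l0_input                       # logged: 3.1
    read_write_amplify = (total_input + output) / l0_input  # logged: 6.8
    print(f"write-amplify={write_amplify:.1f} "
          f"read-write-amplify={read_write_amplify:.1f}")

The 514 records dropped (9704 in, 9190 out) are consistent with the 251 tombstones flushed in table #184 being applied and discarded at the bottom level, along with the entries they shadowed.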
Feb 25 08:42:33 np0005629333 nova_compute[244014]: 2026-02-25 13:42:33.860 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:42:33 np0005629333 nova_compute[244014]: 2026-02-25 13:42:33.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:42:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3680: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:42:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3681: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:36 np0005629333 nova_compute[244014]: 2026-02-25 13:42:36.503 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:42:37 np0005629333 nova_compute[244014]: 2026-02-25 13:42:37.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:42:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3682: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:38 np0005629333 nova_compute[244014]: 2026-02-25 13:42:38.862 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:42:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:42:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3683: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:41 np0005629333 nova_compute[244014]: 2026-02-25 13:42:41.507 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:42:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3684: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:42:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:42:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:42:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:42:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:42:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:42:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:42:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:42:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:42:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:42:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:42:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
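Note: the dispatches above are cephadm's periodic refresh, in which the mgr regenerates a minimal ceph.conf, fetches keyrings, and lists destroyed OSDs. The same mon commands can be issued from the CLI; a hedged Python sketch (assuming a host with a working /etc/ceph/ceph.conf and a client.admin keyring):

    import json
    import subprocess

    def ceph(*args):
        # Thin wrapper over the ceph CLI; raises on nonzero exit.
        return subprocess.run(("ceph",) + args, check=True,
                              capture_output=True, text=True).stdout

    minimal_conf = ceph("config", "generate-minimal-conf")
    admin_keyring = ceph("auth", "get", "client.admin")
    destroyed = json.loads(ceph("osd", "tree", "destroyed",
                                "--format", "json"))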
Feb 25 08:42:43 np0005629333 podman[412548]: 2026-02-25 13:42:43.083593054 +0000 UTC m=+0.067069087 container create 98a809290a253b89a7b7402db16758efbeef67721cbc36ddd6b318a6a514969f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 08:42:43 np0005629333 systemd[1]: Started libpod-conmon-98a809290a253b89a7b7402db16758efbeef67721cbc36ddd6b318a6a514969f.scope.
Feb 25 08:42:43 np0005629333 podman[412548]: 2026-02-25 13:42:43.052991725 +0000 UTC m=+0.036467818 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:42:43 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:42:43 np0005629333 podman[412548]: 2026-02-25 13:42:43.207978245 +0000 UTC m=+0.191454248 container init 98a809290a253b89a7b7402db16758efbeef67721cbc36ddd6b318a6a514969f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lovelace, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:42:43 np0005629333 podman[412548]: 2026-02-25 13:42:43.219617739 +0000 UTC m=+0.203093772 container start 98a809290a253b89a7b7402db16758efbeef67721cbc36ddd6b318a6a514969f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lovelace, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:42:43 np0005629333 podman[412548]: 2026-02-25 13:42:43.223321605 +0000 UTC m=+0.206797658 container attach 98a809290a253b89a7b7402db16758efbeef67721cbc36ddd6b318a6a514969f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 08:42:43 np0005629333 dazzling_lovelace[412564]: 167 167
Feb 25 08:42:43 np0005629333 systemd[1]: libpod-98a809290a253b89a7b7402db16758efbeef67721cbc36ddd6b318a6a514969f.scope: Deactivated successfully.
Feb 25 08:42:43 np0005629333 conmon[412564]: conmon 98a809290a253b89a7b7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-98a809290a253b89a7b7402db16758efbeef67721cbc36ddd6b318a6a514969f.scope/container/memory.events
Feb 25 08:42:43 np0005629333 podman[412548]: 2026-02-25 13:42:43.228956697 +0000 UTC m=+0.212432740 container died 98a809290a253b89a7b7402db16758efbeef67721cbc36ddd6b318a6a514969f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:42:43 np0005629333 systemd[1]: var-lib-containers-storage-overlay-0eff0d699604d8151910fb9c6151e30dab6dc48dcf1ce115164bda94694b851c-merged.mount: Deactivated successfully.
Feb 25 08:42:43 np0005629333 podman[412548]: 2026-02-25 13:42:43.27713342 +0000 UTC m=+0.260609463 container remove 98a809290a253b89a7b7402db16758efbeef67721cbc36ddd6b318a6a514969f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lovelace, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:42:43 np0005629333 systemd[1]: libpod-conmon-98a809290a253b89a7b7402db16758efbeef67721cbc36ddd6b318a6a514969f.scope: Deactivated successfully.
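Note: the create/init/start/attach/died/remove sequence above is a cephadm probe container completing in roughly 0.15 s (13:42:43.083 to 13:42:43.228); the "167 167" line is the container's stdout. A minimal sketch that pairs podman "container create" and "container died" journal lines to measure such lifetimes (the regex is an assumption based on the line format shown here):

    import re
    from datetime import datetime

    EVENT = re.compile(
        r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC "
        r"m=\+\S+ (?P<event>container \w+) (?P<cid>[0-9a-f]{64})")

    def lifetimes(journal_lines):
        # Yields (short id, seconds alive) for each create/died pair.
        created = {}
        for line in journal_lines:
            m = EVENT.search(line)
            if not m:
                continue
            # podman prints nanoseconds; datetime parses at most microseconds.
            ts = datetime.strptime(m["ts"][:26], "%Y-%m-%d %H:%M:%S.%f")
            if m["event"] == "container create":
                created[m["cid"]] = ts
            elif m["event"] == "container died" and m["cid"] in created:
                dt = ts - created.pop(m["cid"])
                yield m["cid"][:12], dt.total_seconds()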
Feb 25 08:42:43 np0005629333 podman[412586]: 2026-02-25 13:42:43.463651854 +0000 UTC m=+0.052591941 container create 6be9e3fa1286d88629caeb73a0e80449db90c72a0eb49ac2f7a4ddf0b9e7a491 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:42:43 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:42:43 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:42:43 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:42:43 np0005629333 systemd[1]: Started libpod-conmon-6be9e3fa1286d88629caeb73a0e80449db90c72a0eb49ac2f7a4ddf0b9e7a491.scope.
Feb 25 08:42:43 np0005629333 podman[412586]: 2026-02-25 13:42:43.437927205 +0000 UTC m=+0.026867402 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:42:43 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:42:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d96fb2c47f59828acce1a3f67878b373946d3340bba8374d618865a0200277e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:42:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d96fb2c47f59828acce1a3f67878b373946d3340bba8374d618865a0200277e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:42:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d96fb2c47f59828acce1a3f67878b373946d3340bba8374d618865a0200277e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:42:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d96fb2c47f59828acce1a3f67878b373946d3340bba8374d618865a0200277e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:42:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d96fb2c47f59828acce1a3f67878b373946d3340bba8374d618865a0200277e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:42:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:42:43 np0005629333 podman[412586]: 2026-02-25 13:42:43.572339724 +0000 UTC m=+0.161279871 container init 6be9e3fa1286d88629caeb73a0e80449db90c72a0eb49ac2f7a4ddf0b9e7a491 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_northcutt, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:42:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:42:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:42:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:42:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:42:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:42:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:42:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:42:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:42:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:42:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:42:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:42:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:42:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:42:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:42:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:42:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:42:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:42:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:42:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:42:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:42:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:42:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
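Note: the pool targets above are consistent with pg_target = capacity_ratio × bias × (osd_count × mon_target_pg_per_osd), with 3 OSDs and the default mon_target_pg_per_osd = 100 (both assumptions for this cluster; the three ~20 GiB OSD LVs appear in the inventory output further below), before quantization to the nearest allowed pg_num. A minimal check against two of the logged pools:

    OSD_COUNT = 3                 # assumption: three ~20 GiB OSDs
    MON_TARGET_PG_PER_OSD = 100   # assumption: Ceph default

    def pg_target(capacity_ratio, bias):
        return capacity_ratio * bias * OSD_COUNT * MON_TARGET_PG_PER_OSD

    # Pool '.mgr': "using 7.185749983720779e-06 of space, bias 1.0"
    assert abs(pg_target(7.185749983720779e-06, 1.0)
               - 0.0021557249951162337) < 1e-15
    # Pool 'cephfs.cephfs.meta': bias 4.0
    assert abs(pg_target(1.3916366864300228e-06, 4.0)
               - 0.0016699640237160273) < 1e-15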
Feb 25 08:42:43 np0005629333 podman[412586]: 2026-02-25 13:42:43.581517338 +0000 UTC m=+0.170457475 container start 6be9e3fa1286d88629caeb73a0e80449db90c72a0eb49ac2f7a4ddf0b9e7a491 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:42:43 np0005629333 podman[412586]: 2026-02-25 13:42:43.586295175 +0000 UTC m=+0.175235342 container attach 6be9e3fa1286d88629caeb73a0e80449db90c72a0eb49ac2f7a4ddf0b9e7a491 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_northcutt, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 08:42:43 np0005629333 nova_compute[244014]: 2026-02-25 13:42:43.864 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:42:44 np0005629333 pedantic_northcutt[412602]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:42:44 np0005629333 pedantic_northcutt[412602]: --> All data devices are unavailable
Feb 25 08:42:44 np0005629333 systemd[1]: libpod-6be9e3fa1286d88629caeb73a0e80449db90c72a0eb49ac2f7a4ddf0b9e7a491.scope: Deactivated successfully.
Feb 25 08:42:44 np0005629333 podman[412586]: 2026-02-25 13:42:44.038408005 +0000 UTC m=+0.627348132 container died 6be9e3fa1286d88629caeb73a0e80449db90c72a0eb49ac2f7a4ddf0b9e7a491 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_northcutt, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 08:42:44 np0005629333 systemd[1]: var-lib-containers-storage-overlay-9d96fb2c47f59828acce1a3f67878b373946d3340bba8374d618865a0200277e-merged.mount: Deactivated successfully.
Feb 25 08:42:44 np0005629333 podman[412586]: 2026-02-25 13:42:44.090939603 +0000 UTC m=+0.679879700 container remove 6be9e3fa1286d88629caeb73a0e80449db90c72a0eb49ac2f7a4ddf0b9e7a491 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_northcutt, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:42:44 np0005629333 systemd[1]: libpod-conmon-6be9e3fa1286d88629caeb73a0e80449db90c72a0eb49ac2f7a4ddf0b9e7a491.scope: Deactivated successfully.
Feb 25 08:42:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3685: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:44 np0005629333 podman[412700]: 2026-02-25 13:42:44.567781363 +0000 UTC m=+0.038474586 container create 89a3790791818cb4ed447833c01cf29a755daedc330953d8c2868c9a1eb4da5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_noether, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 08:42:44 np0005629333 systemd[1]: Started libpod-conmon-89a3790791818cb4ed447833c01cf29a755daedc330953d8c2868c9a1eb4da5a.scope.
Feb 25 08:42:44 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:42:44 np0005629333 podman[412700]: 2026-02-25 13:42:44.550801365 +0000 UTC m=+0.021494638 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:42:44 np0005629333 podman[412700]: 2026-02-25 13:42:44.663519631 +0000 UTC m=+0.134212904 container init 89a3790791818cb4ed447833c01cf29a755daedc330953d8c2868c9a1eb4da5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_noether, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:42:44 np0005629333 podman[412700]: 2026-02-25 13:42:44.672610253 +0000 UTC m=+0.143303516 container start 89a3790791818cb4ed447833c01cf29a755daedc330953d8c2868c9a1eb4da5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 08:42:44 np0005629333 podman[412700]: 2026-02-25 13:42:44.676913736 +0000 UTC m=+0.147606969 container attach 89a3790791818cb4ed447833c01cf29a755daedc330953d8c2868c9a1eb4da5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_noether, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:42:44 np0005629333 systemd[1]: libpod-89a3790791818cb4ed447833c01cf29a755daedc330953d8c2868c9a1eb4da5a.scope: Deactivated successfully.
Feb 25 08:42:44 np0005629333 confident_noether[412716]: 167 167
Feb 25 08:42:44 np0005629333 conmon[412716]: conmon 89a3790791818cb4ed44 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-89a3790791818cb4ed447833c01cf29a755daedc330953d8c2868c9a1eb4da5a.scope/container/memory.events
Feb 25 08:42:44 np0005629333 podman[412700]: 2026-02-25 13:42:44.679326885 +0000 UTC m=+0.150020148 container died 89a3790791818cb4ed447833c01cf29a755daedc330953d8c2868c9a1eb4da5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_noether, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:42:44 np0005629333 systemd[1]: var-lib-containers-storage-overlay-97a36df360caa88c3618cba59c03e1edc62fc4bf19d1b31e65e5e3a735fc2a08-merged.mount: Deactivated successfully.
Feb 25 08:42:44 np0005629333 podman[412700]: 2026-02-25 13:42:44.729052403 +0000 UTC m=+0.199745666 container remove 89a3790791818cb4ed447833c01cf29a755daedc330953d8c2868c9a1eb4da5a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_noether, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 08:42:44 np0005629333 systemd[1]: libpod-conmon-89a3790791818cb4ed447833c01cf29a755daedc330953d8c2868c9a1eb4da5a.scope: Deactivated successfully.
Feb 25 08:42:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:42:44 np0005629333 podman[412740]: 2026-02-25 13:42:44.913880299 +0000 UTC m=+0.050327926 container create 1dbed839f82fd3ce5077da918d1e018b7396157a51728d56e987bd3858e0adc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mcclintock, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:42:44 np0005629333 systemd[1]: Started libpod-conmon-1dbed839f82fd3ce5077da918d1e018b7396157a51728d56e987bd3858e0adc0.scope.
Feb 25 08:42:44 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:42:44 np0005629333 podman[412740]: 2026-02-25 13:42:44.888962104 +0000 UTC m=+0.025409821 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:42:44 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4be718dc34891acd46b9622dd801978a5e396c93bac3828e36e86ab02c3fd984/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:42:44 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4be718dc34891acd46b9622dd801978a5e396c93bac3828e36e86ab02c3fd984/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:42:44 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4be718dc34891acd46b9622dd801978a5e396c93bac3828e36e86ab02c3fd984/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:42:44 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4be718dc34891acd46b9622dd801978a5e396c93bac3828e36e86ab02c3fd984/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:42:45 np0005629333 podman[412740]: 2026-02-25 13:42:45.005493809 +0000 UTC m=+0.141941476 container init 1dbed839f82fd3ce5077da918d1e018b7396157a51728d56e987bd3858e0adc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mcclintock, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 08:42:45 np0005629333 podman[412740]: 2026-02-25 13:42:45.013212081 +0000 UTC m=+0.149659748 container start 1dbed839f82fd3ce5077da918d1e018b7396157a51728d56e987bd3858e0adc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mcclintock, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Feb 25 08:42:45 np0005629333 podman[412740]: 2026-02-25 13:42:45.017089302 +0000 UTC m=+0.153537079 container attach 1dbed839f82fd3ce5077da918d1e018b7396157a51728d56e987bd3858e0adc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mcclintock, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]: {
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:    "0": [
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:        {
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "devices": [
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "/dev/loop3"
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            ],
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "lv_name": "ceph_lv0",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "lv_size": "21470642176",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "name": "ceph_lv0",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "tags": {
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.cluster_name": "ceph",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.crush_device_class": "",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.encrypted": "0",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.objectstore": "bluestore",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.osd_id": "0",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.type": "block",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.vdo": "0",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.with_tpm": "0"
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            },
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "type": "block",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "vg_name": "ceph_vg0"
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:        }
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:    ],
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:    "1": [
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:        {
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "devices": [
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "/dev/loop4"
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            ],
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "lv_name": "ceph_lv1",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "lv_size": "21470642176",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "name": "ceph_lv1",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "tags": {
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.cluster_name": "ceph",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.crush_device_class": "",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.encrypted": "0",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.objectstore": "bluestore",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.osd_id": "1",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.type": "block",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.vdo": "0",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.with_tpm": "0"
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            },
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "type": "block",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "vg_name": "ceph_vg1"
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:        }
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:    ],
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:    "2": [
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:        {
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "devices": [
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "/dev/loop5"
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            ],
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "lv_name": "ceph_lv2",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "lv_size": "21470642176",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "name": "ceph_lv2",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "tags": {
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.cluster_name": "ceph",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.crush_device_class": "",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.encrypted": "0",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.objectstore": "bluestore",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.osd_id": "2",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.type": "block",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.vdo": "0",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:                "ceph.with_tpm": "0"
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            },
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "type": "block",
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:            "vg_name": "ceph_vg2"
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:        }
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]:    ]
Feb 25 08:42:45 np0005629333 practical_mcclintock[412757]: }
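Note: the JSON above (emitted by the probe container, in the shape of `ceph-volume lvm list --format json`, an assumption about the command run) maps each OSD id to its backing logical volume. A minimal sketch extracting id → (device, LV path, OSD fsid), assuming exactly the shape shown:

    import json

    def osd_devices(raw_json):
        # {osd_id: (physical device, lv_path, osd_fsid)} for block LVs.
        out = {}
        for osd_id, entries in json.loads(raw_json).items():
            for entry in entries:
                if entry.get("type") == "block":
                    out[int(osd_id)] = (entry["devices"][0],
                                        entry["lv_path"],
                                        entry["tags"]["ceph.osd_fsid"])
        return out

    # For the output above:
    # {0: ('/dev/loop3', '/dev/ceph_vg0/ceph_lv0', 'd19afe3c-7923-4776-bcc2-88886150b441'),
    #  1: ('/dev/loop4', '/dev/ceph_vg1/ceph_lv1', 'a25b4fc6-1504-44d3-aca7-62c5ef316350'),
    #  2: ('/dev/loop5', '/dev/ceph_vg2/ceph_lv2', 'f84d59d3-cae3-44c8-8bca-9fa4643cfc60')}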
Feb 25 08:42:45 np0005629333 systemd[1]: libpod-1dbed839f82fd3ce5077da918d1e018b7396157a51728d56e987bd3858e0adc0.scope: Deactivated successfully.
Feb 25 08:42:45 np0005629333 podman[412740]: 2026-02-25 13:42:45.32177274 +0000 UTC m=+0.458220407 container died 1dbed839f82fd3ce5077da918d1e018b7396157a51728d56e987bd3858e0adc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 08:42:45 np0005629333 systemd[1]: var-lib-containers-storage-overlay-4be718dc34891acd46b9622dd801978a5e396c93bac3828e36e86ab02c3fd984-merged.mount: Deactivated successfully.
Feb 25 08:42:45 np0005629333 podman[412740]: 2026-02-25 13:42:45.368455 +0000 UTC m=+0.504902667 container remove 1dbed839f82fd3ce5077da918d1e018b7396157a51728d56e987bd3858e0adc0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 08:42:45 np0005629333 systemd[1]: libpod-conmon-1dbed839f82fd3ce5077da918d1e018b7396157a51728d56e987bd3858e0adc0.scope: Deactivated successfully.
Feb 25 08:42:45 np0005629333 podman[412839]: 2026-02-25 13:42:45.890730714 +0000 UTC m=+0.058649185 container create fafe76e5e2eb9b5bd9b8e1dd92c5dd750f3dd59b3149040657153871705acb71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kowalevski, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 08:42:45 np0005629333 systemd[1]: Started libpod-conmon-fafe76e5e2eb9b5bd9b8e1dd92c5dd750f3dd59b3149040657153871705acb71.scope.
Feb 25 08:42:45 np0005629333 podman[412839]: 2026-02-25 13:42:45.853478745 +0000 UTC m=+0.021397246 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:42:45 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:42:46 np0005629333 podman[412839]: 2026-02-25 13:42:46.026238924 +0000 UTC m=+0.194157385 container init fafe76e5e2eb9b5bd9b8e1dd92c5dd750f3dd59b3149040657153871705acb71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 08:42:46 np0005629333 podman[412839]: 2026-02-25 13:42:46.036639903 +0000 UTC m=+0.204558374 container start fafe76e5e2eb9b5bd9b8e1dd92c5dd750f3dd59b3149040657153871705acb71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kowalevski, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:42:46 np0005629333 charming_kowalevski[412855]: 167 167
Feb 25 08:42:46 np0005629333 systemd[1]: libpod-fafe76e5e2eb9b5bd9b8e1dd92c5dd750f3dd59b3149040657153871705acb71.scope: Deactivated successfully.
Feb 25 08:42:46 np0005629333 podman[412839]: 2026-02-25 13:42:46.082195281 +0000 UTC m=+0.250113822 container attach fafe76e5e2eb9b5bd9b8e1dd92c5dd750f3dd59b3149040657153871705acb71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 08:42:46 np0005629333 podman[412839]: 2026-02-25 13:42:46.083822298 +0000 UTC m=+0.251740739 container died fafe76e5e2eb9b5bd9b8e1dd92c5dd750f3dd59b3149040657153871705acb71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:42:46 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c239df0a4946ace8bfe7dbc22ffa400c6a9b0605155463151a94e6934459e13e-merged.mount: Deactivated successfully.
Feb 25 08:42:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3686: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:46 np0005629333 podman[412839]: 2026-02-25 13:42:46.188140422 +0000 UTC m=+0.356058873 container remove fafe76e5e2eb9b5bd9b8e1dd92c5dd750f3dd59b3149040657153871705acb71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:42:46 np0005629333 systemd[1]: libpod-conmon-fafe76e5e2eb9b5bd9b8e1dd92c5dd750f3dd59b3149040657153871705acb71.scope: Deactivated successfully.
Feb 25 08:42:46 np0005629333 podman[412879]: 2026-02-25 13:42:46.405353429 +0000 UTC m=+0.066279504 container create 093d4744694afd05f0585987a29a96a53a5202868e2146a8f2143643420dba06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_kilby, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:42:46 np0005629333 podman[412879]: 2026-02-25 13:42:46.374615416 +0000 UTC m=+0.035541531 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:42:46 np0005629333 systemd[1]: Started libpod-conmon-093d4744694afd05f0585987a29a96a53a5202868e2146a8f2143643420dba06.scope.
Feb 25 08:42:46 np0005629333 nova_compute[244014]: 2026-02-25 13:42:46.510 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:42:46 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:42:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e714aee9383cb47f2877ab562ea929d067cc736f72b32ea00cc1464a91c19d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:42:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e714aee9383cb47f2877ab562ea929d067cc736f72b32ea00cc1464a91c19d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:42:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e714aee9383cb47f2877ab562ea929d067cc736f72b32ea00cc1464a91c19d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:42:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e714aee9383cb47f2877ab562ea929d067cc736f72b32ea00cc1464a91c19d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:42:46 np0005629333 podman[412879]: 2026-02-25 13:42:46.57607988 +0000 UTC m=+0.237006015 container init 093d4744694afd05f0585987a29a96a53a5202868e2146a8f2143643420dba06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_kilby, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 08:42:46 np0005629333 podman[412879]: 2026-02-25 13:42:46.581815845 +0000 UTC m=+0.242741920 container start 093d4744694afd05f0585987a29a96a53a5202868e2146a8f2143643420dba06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_kilby, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:42:46 np0005629333 podman[412879]: 2026-02-25 13:42:46.609650084 +0000 UTC m=+0.270576219 container attach 093d4744694afd05f0585987a29a96a53a5202868e2146a8f2143643420dba06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_kilby, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:42:47 np0005629333 lvm[412975]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:42:47 np0005629333 lvm[412975]: VG ceph_vg0 finished
Feb 25 08:42:47 np0005629333 lvm[412976]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:42:47 np0005629333 lvm[412976]: VG ceph_vg1 finished
Feb 25 08:42:47 np0005629333 lvm[412978]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:42:47 np0005629333 lvm[412978]: VG ceph_vg2 finished
Feb 25 08:42:47 np0005629333 epic_kilby[412896]: {}
Feb 25 08:42:47 np0005629333 systemd[1]: libpod-093d4744694afd05f0585987a29a96a53a5202868e2146a8f2143643420dba06.scope: Deactivated successfully.
Feb 25 08:42:47 np0005629333 systemd[1]: libpod-093d4744694afd05f0585987a29a96a53a5202868e2146a8f2143643420dba06.scope: Consumed 1.256s CPU time.
Feb 25 08:42:47 np0005629333 podman[412879]: 2026-02-25 13:42:47.413532322 +0000 UTC m=+1.074458367 container died 093d4744694afd05f0585987a29a96a53a5202868e2146a8f2143643420dba06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_kilby, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 08:42:47 np0005629333 systemd[1]: var-lib-containers-storage-overlay-3e714aee9383cb47f2877ab562ea929d067cc736f72b32ea00cc1464a91c19d9-merged.mount: Deactivated successfully.
Feb 25 08:42:47 np0005629333 podman[412879]: 2026-02-25 13:42:47.57473935 +0000 UTC m=+1.235665395 container remove 093d4744694afd05f0585987a29a96a53a5202868e2146a8f2143643420dba06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_kilby, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 08:42:47 np0005629333 systemd[1]: libpod-conmon-093d4744694afd05f0585987a29a96a53a5202868e2146a8f2143643420dba06.scope: Deactivated successfully.
Feb 25 08:42:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:42:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3517324608' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:42:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:42:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3517324608' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:42:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:42:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:42:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:42:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:42:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3687: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:48 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:42:48 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:42:48 np0005629333 nova_compute[244014]: 2026-02-25 13:42:48.866 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:42:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:42:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3688: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:51 np0005629333 nova_compute[244014]: 2026-02-25 13:42:51.514 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:42:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3689: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:52 np0005629333 nova_compute[244014]: 2026-02-25 13:42:52.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:42:53 np0005629333 nova_compute[244014]: 2026-02-25 13:42:53.868 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:42:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3690: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:42:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:42:55.086 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:42:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:42:55.086 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:42:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:42:55.087 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:42:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3691: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:56 np0005629333 nova_compute[244014]: 2026-02-25 13:42:56.518 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:42:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3692: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:42:58 np0005629333 nova_compute[244014]: 2026-02-25 13:42:58.870 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:42:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:43:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3693: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:00 np0005629333 podman[413020]: 2026-02-25 13:43:00.738704837 +0000 UTC m=+0.071752821 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 25 08:43:00 np0005629333 podman[413021]: 2026-02-25 13:43:00.785941083 +0000 UTC m=+0.119644636 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Feb 25 08:43:01 np0005629333 nova_compute[244014]: 2026-02-25 13:43:01.553 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:43:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:43:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:43:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:43:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:43:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:43:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:43:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3694: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:03 np0005629333 nova_compute[244014]: 2026-02-25 13:43:03.871 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:43:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3695: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:43:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3696: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:06 np0005629333 nova_compute[244014]: 2026-02-25 13:43:06.554 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:43:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3697: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:08 np0005629333 nova_compute[244014]: 2026-02-25 13:43:08.873 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:43:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:43:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3698: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:10 np0005629333 nova_compute[244014]: 2026-02-25 13:43:10.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:43:11 np0005629333 nova_compute[244014]: 2026-02-25 13:43:11.558 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:43:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3699: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:12 np0005629333 nova_compute[244014]: 2026-02-25 13:43:12.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:43:12 np0005629333 nova_compute[244014]: 2026-02-25 13:43:12.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:43:12 np0005629333 nova_compute[244014]: 2026-02-25 13:43:12.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:43:12 np0005629333 nova_compute[244014]: 2026-02-25 13:43:12.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:43:12 np0005629333 nova_compute[244014]: 2026-02-25 13:43:12.904 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:43:12 np0005629333 nova_compute[244014]: 2026-02-25 13:43:12.905 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:43:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:43:13 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2432907870' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:43:13 np0005629333 nova_compute[244014]: 2026-02-25 13:43:13.489 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:43:13 np0005629333 nova_compute[244014]: 2026-02-25 13:43:13.640 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:43:13 np0005629333 nova_compute[244014]: 2026-02-25 13:43:13.642 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3538MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:43:13 np0005629333 nova_compute[244014]: 2026-02-25 13:43:13.643 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:43:13 np0005629333 nova_compute[244014]: 2026-02-25 13:43:13.643 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:43:13 np0005629333 nova_compute[244014]: 2026-02-25 13:43:13.718 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:43:13 np0005629333 nova_compute[244014]: 2026-02-25 13:43:13.719 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:43:13 np0005629333 nova_compute[244014]: 2026-02-25 13:43:13.734 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:43:13 np0005629333 nova_compute[244014]: 2026-02-25 13:43:13.874 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:43:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3700: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:43:14 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4022764544' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:43:14 np0005629333 nova_compute[244014]: 2026-02-25 13:43:14.328 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:43:14 np0005629333 nova_compute[244014]: 2026-02-25 13:43:14.335 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:43:14 np0005629333 nova_compute[244014]: 2026-02-25 13:43:14.357 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:43:14 np0005629333 nova_compute[244014]: 2026-02-25 13:43:14.361 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:43:14 np0005629333 nova_compute[244014]: 2026-02-25 13:43:14.361 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:43:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:43:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3701: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:16 np0005629333 nova_compute[244014]: 2026-02-25 13:43:16.562 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:43:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3702: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:18 np0005629333 nova_compute[244014]: 2026-02-25 13:43:18.876 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:43:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:43:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3703: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:21 np0005629333 nova_compute[244014]: 2026-02-25 13:43:21.362 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:43:21 np0005629333 nova_compute[244014]: 2026-02-25 13:43:21.363 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:43:21 np0005629333 nova_compute[244014]: 2026-02-25 13:43:21.364 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:43:21 np0005629333 nova_compute[244014]: 2026-02-25 13:43:21.383 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:43:21 np0005629333 nova_compute[244014]: 2026-02-25 13:43:21.565 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:43:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3704: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:23 np0005629333 nova_compute[244014]: 2026-02-25 13:43:23.910 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:43:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3705: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:24 np0005629333 nova_compute[244014]: 2026-02-25 13:43:24.893 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:43:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:43:25 np0005629333 nova_compute[244014]: 2026-02-25 13:43:25.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:43:25 np0005629333 nova_compute[244014]: 2026-02-25 13:43:25.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 08:43:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3706: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:26 np0005629333 nova_compute[244014]: 2026-02-25 13:43:26.568 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:43:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3707: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:28 np0005629333 nova_compute[244014]: 2026-02-25 13:43:28.912 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:43:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:43:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3708: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:43:31
Feb 25 08:43:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:43:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:43:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'backups', 'volumes', 'vms', '.mgr']
Feb 25 08:43:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:43:31 np0005629333 nova_compute[244014]: 2026-02-25 13:43:31.572 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:43:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:43:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:43:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:43:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:43:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:43:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:43:31 np0005629333 podman[413112]: 2026-02-25 13:43:31.72313026 +0000 UTC m=+0.066932463 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 08:43:31 np0005629333 podman[413113]: 2026-02-25 13:43:31.753543353 +0000 UTC m=+0.097363326 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260223)
Feb 25 08:43:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3709: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:43:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:43:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:43:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:43:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:43:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:43:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:43:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:43:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:43:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:43:32 np0005629333 nova_compute[244014]: 2026-02-25 13:43:32.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:43:33 np0005629333 nova_compute[244014]: 2026-02-25 13:43:33.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:43:33 np0005629333 nova_compute[244014]: 2026-02-25 13:43:33.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:43:33 np0005629333 nova_compute[244014]: 2026-02-25 13:43:33.916 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:43:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3710: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:43:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3711: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:36 np0005629333 nova_compute[244014]: 2026-02-25 13:43:36.575 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:43:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3712: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:38 np0005629333 nova_compute[244014]: 2026-02-25 13:43:38.917 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:43:39 np0005629333 nova_compute[244014]: 2026-02-25 13:43:39.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:43:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:43:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3713: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:41 np0005629333 nova_compute[244014]: 2026-02-25 13:43:41.626 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:43:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3714: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:43:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:43:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:43:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:43:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:43:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:43:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:43:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:43:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:43:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:43:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:43:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:43:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:43:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:43:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:43:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:43:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:43:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:43:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:43:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:43:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:43:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:43:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
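[editor's note] The pg_autoscaler burst above exposes its arithmetic directly: every logged "pg target" equals the pool's usage ratio times its bias times a constant 300, and the 64411926528 in each effective_target_ratio line is the raw cluster capacity (about 60 GiB, matching the pgmap lines). A minimal sketch of that arithmetic, assuming the 300 is the upstream default mon_target_pg_per_osd (100) multiplied by this cluster's 3 OSDs (an inference from the osd ids 0-2 seen later in this log, not something the autoscaler prints):

    # Sketch of the pg_autoscaler arithmetic from the lines above.
    # Assumption: 300 = mon_target_pg_per_osd (default 100) * 3 OSDs.
    OSDS = 3
    TARGET_PG_PER_OSD = 100

    def raw_pg_target(usage_ratio: float, bias: float) -> float:
        return usage_ratio * bias * OSDS * TARGET_PG_PER_OSD

    # Pool 'images': using 0.0006714637386478266 of space, bias 1.0
    assert abs(raw_pg_target(0.0006714637386478266, 1.0)
               - 0.20143912159434796) < 1e-15
    # Pool 'cephfs.cephfs.meta': the 4.0 bias scales the target up
    assert abs(raw_pg_target(1.3916366864300228e-06, 4.0)
               - 0.0016699640237160273) < 1e-15

The "quantized to N (current N)" suffix is that raw target rounded to a power of two and reconciled with the pool's current pg_num; the autoscaler only applies a change when the ideal value differs from the current one by more than its threshold (3x by default), which is why every pool in this pass keeps its current value.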
Feb 25 08:43:43 np0005629333 nova_compute[244014]: 2026-02-25 13:43:43.954 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:43:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3715: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:43:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3716: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:46 np0005629333 nova_compute[244014]: 2026-02-25 13:43:46.629 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:43:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:43:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2084020284' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:43:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:43:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2084020284' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:43:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3717: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:43:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:43:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:43:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:43:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:43:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:43:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:43:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:43:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:43:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:43:48 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:43:48 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:43:48 np0005629333 nova_compute[244014]: 2026-02-25 13:43:48.955 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:43:49 np0005629333 podman[413298]: 2026-02-25 13:43:49.134491863 +0000 UTC m=+0.112826671 container create 36434a377835c3e10d49d210c988a17e77f46f2509935fd45c8ea9d89fa1f5da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_ardinghelli, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 08:43:49 np0005629333 podman[413298]: 2026-02-25 13:43:49.047466644 +0000 UTC m=+0.025801472 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:43:49 np0005629333 systemd[1]: Started libpod-conmon-36434a377835c3e10d49d210c988a17e77f46f2509935fd45c8ea9d89fa1f5da.scope.
Feb 25 08:43:49 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:43:49 np0005629333 podman[413298]: 2026-02-25 13:43:49.426254509 +0000 UTC m=+0.404589397 container init 36434a377835c3e10d49d210c988a17e77f46f2509935fd45c8ea9d89fa1f5da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 08:43:49 np0005629333 podman[413298]: 2026-02-25 13:43:49.433185228 +0000 UTC m=+0.411520016 container start 36434a377835c3e10d49d210c988a17e77f46f2509935fd45c8ea9d89fa1f5da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 08:43:49 np0005629333 competent_ardinghelli[413314]: 167 167
Feb 25 08:43:49 np0005629333 systemd[1]: libpod-36434a377835c3e10d49d210c988a17e77f46f2509935fd45c8ea9d89fa1f5da.scope: Deactivated successfully.
Feb 25 08:43:49 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:43:49 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:43:49 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:43:49 np0005629333 podman[413298]: 2026-02-25 13:43:49.533281882 +0000 UTC m=+0.511616660 container attach 36434a377835c3e10d49d210c988a17e77f46f2509935fd45c8ea9d89fa1f5da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:43:49 np0005629333 podman[413298]: 2026-02-25 13:43:49.534720773 +0000 UTC m=+0.513055611 container died 36434a377835c3e10d49d210c988a17e77f46f2509935fd45c8ea9d89fa1f5da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:43:49 np0005629333 systemd[1]: var-lib-containers-storage-overlay-181f92c405fa7e24e69e3f0ba55a9f352e51b35cca2a35b18af4bc8052bd0f18-merged.mount: Deactivated successfully.
Feb 25 08:43:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:43:50 np0005629333 podman[413298]: 2026-02-25 13:43:50.046355351 +0000 UTC m=+1.024690169 container remove 36434a377835c3e10d49d210c988a17e77f46f2509935fd45c8ea9d89fa1f5da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:43:50 np0005629333 systemd[1]: libpod-conmon-36434a377835c3e10d49d210c988a17e77f46f2509935fd45c8ea9d89fa1f5da.scope: Deactivated successfully.
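[editor's note] The short-lived container above (create, start, a bare "167 167" on stdout, die, remove, all within a second) is consistent with cephadm probing the ceph image for the ceph user's uid and gid; 167 is the fixed ceph uid/gid in Red Hat based images. A sketch of an equivalent probe; the exact command cephadm runs is an assumption:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # stat a ceph-owned path inside the image to learn its uid/gid.
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True)
    uid, gid = map(int, out.stdout.split())  # -> 167 167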
Feb 25 08:43:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3718: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:50 np0005629333 podman[413339]: 2026-02-25 13:43:50.241898735 +0000 UTC m=+0.067683284 container create 82a4d3c217a4623b6fa8f160e939ea3f370a726650bbf4b923e2a5af0ee30920 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_keldysh, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 08:43:50 np0005629333 podman[413339]: 2026-02-25 13:43:50.200039763 +0000 UTC m=+0.025824372 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:43:50 np0005629333 systemd[1]: Started libpod-conmon-82a4d3c217a4623b6fa8f160e939ea3f370a726650bbf4b923e2a5af0ee30920.scope.
Feb 25 08:43:50 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:43:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8794dad66e40c5e4e2938e01b3781d940cebf70ad16ea62cf0c0ecc77ae66a5a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:43:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8794dad66e40c5e4e2938e01b3781d940cebf70ad16ea62cf0c0ecc77ae66a5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:43:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8794dad66e40c5e4e2938e01b3781d940cebf70ad16ea62cf0c0ecc77ae66a5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:43:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8794dad66e40c5e4e2938e01b3781d940cebf70ad16ea62cf0c0ecc77ae66a5a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:43:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8794dad66e40c5e4e2938e01b3781d940cebf70ad16ea62cf0c0ecc77ae66a5a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
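[editor's note] The recurring xfs "supports timestamps until 2038 (0x7fffffff)" notices mean these filesystems were created without the XFS bigtime feature, so their inode timestamps are still bounded by the 32-bit time_t limit the kernel prints in hex. Converting that constant shows the exact cutoff:

    from datetime import datetime, timezone

    limit = 0x7FFFFFFF  # 2147483647: the classic 32-bit time_t maximum
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00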
Feb 25 08:43:50 np0005629333 podman[413339]: 2026-02-25 13:43:50.434845285 +0000 UTC m=+0.260629884 container init 82a4d3c217a4623b6fa8f160e939ea3f370a726650bbf4b923e2a5af0ee30920 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_keldysh, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 08:43:50 np0005629333 podman[413339]: 2026-02-25 13:43:50.445851471 +0000 UTC m=+0.271635990 container start 82a4d3c217a4623b6fa8f160e939ea3f370a726650bbf4b923e2a5af0ee30920 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_keldysh, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 08:43:50 np0005629333 podman[413339]: 2026-02-25 13:43:50.587878828 +0000 UTC m=+0.413663377 container attach 82a4d3c217a4623b6fa8f160e939ea3f370a726650bbf4b923e2a5af0ee30920 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_keldysh, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True)
Feb 25 08:43:50 np0005629333 nova_compute[244014]: 2026-02-25 13:43:50.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:43:50 np0005629333 keen_keldysh[413356]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:43:50 np0005629333 keen_keldysh[413356]: --> All data devices are unavailable
Feb 25 08:43:50 np0005629333 systemd[1]: libpod-82a4d3c217a4623b6fa8f160e939ea3f370a726650bbf4b923e2a5af0ee30920.scope: Deactivated successfully.
Feb 25 08:43:50 np0005629333 podman[413339]: 2026-02-25 13:43:50.945973019 +0000 UTC m=+0.771757578 container died 82a4d3c217a4623b6fa8f160e939ea3f370a726650bbf4b923e2a5af0ee30920 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 08:43:51 np0005629333 systemd[1]: var-lib-containers-storage-overlay-8794dad66e40c5e4e2938e01b3781d940cebf70ad16ea62cf0c0ecc77ae66a5a-merged.mount: Deactivated successfully.
Feb 25 08:43:51 np0005629333 nova_compute[244014]: 2026-02-25 13:43:51.631 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:43:51 np0005629333 podman[413339]: 2026-02-25 13:43:51.825106247 +0000 UTC m=+1.650890806 container remove 82a4d3c217a4623b6fa8f160e939ea3f370a726650bbf4b923e2a5af0ee30920 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 08:43:51 np0005629333 systemd[1]: libpod-conmon-82a4d3c217a4623b6fa8f160e939ea3f370a726650bbf4b923e2a5af0ee30920.scope: Deactivated successfully.
Feb 25 08:43:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3719: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:52 np0005629333 podman[413453]: 2026-02-25 13:43:52.401823054 +0000 UTC m=+0.078903206 container create 150d92795cfff472ca9f9b35400069dbfd990f2df7b4f9986636d8157a39d112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:43:52 np0005629333 podman[413453]: 2026-02-25 13:43:52.345387094 +0000 UTC m=+0.022467286 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:43:52 np0005629333 systemd[1]: Started libpod-conmon-150d92795cfff472ca9f9b35400069dbfd990f2df7b4f9986636d8157a39d112.scope.
Feb 25 08:43:52 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:43:52 np0005629333 podman[413453]: 2026-02-25 13:43:52.532821955 +0000 UTC m=+0.209902067 container init 150d92795cfff472ca9f9b35400069dbfd990f2df7b4f9986636d8157a39d112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_blackburn, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:43:52 np0005629333 podman[413453]: 2026-02-25 13:43:52.539799516 +0000 UTC m=+0.216879628 container start 150d92795cfff472ca9f9b35400069dbfd990f2df7b4f9986636d8157a39d112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_blackburn, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:43:52 np0005629333 hardcore_blackburn[413469]: 167 167
Feb 25 08:43:52 np0005629333 systemd[1]: libpod-150d92795cfff472ca9f9b35400069dbfd990f2df7b4f9986636d8157a39d112.scope: Deactivated successfully.
Feb 25 08:43:52 np0005629333 podman[413453]: 2026-02-25 13:43:52.547146117 +0000 UTC m=+0.224226229 container attach 150d92795cfff472ca9f9b35400069dbfd990f2df7b4f9986636d8157a39d112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_blackburn, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 25 08:43:52 np0005629333 podman[413453]: 2026-02-25 13:43:52.547668892 +0000 UTC m=+0.224749004 container died 150d92795cfff472ca9f9b35400069dbfd990f2df7b4f9986636d8157a39d112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:43:52 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ab81f7ccd4ef84386ff3e835908aabb5ea977538dd7f8c38b347e8a140d3e9a7-merged.mount: Deactivated successfully.
Feb 25 08:43:52 np0005629333 podman[413453]: 2026-02-25 13:43:52.81235368 +0000 UTC m=+0.489433832 container remove 150d92795cfff472ca9f9b35400069dbfd990f2df7b4f9986636d8157a39d112 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_blackburn, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 08:43:52 np0005629333 systemd[1]: libpod-conmon-150d92795cfff472ca9f9b35400069dbfd990f2df7b4f9986636d8157a39d112.scope: Deactivated successfully.
Feb 25 08:43:53 np0005629333 podman[413494]: 2026-02-25 13:43:53.093877282 +0000 UTC m=+0.123536037 container create 666e3e24a76278e28d8d0fdd7b5a6d65bc82fc96864922c9f208c6893b680f6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 08:43:53 np0005629333 podman[413494]: 2026-02-25 13:43:53.006762292 +0000 UTC m=+0.036421057 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:43:53 np0005629333 systemd[1]: Started libpod-conmon-666e3e24a76278e28d8d0fdd7b5a6d65bc82fc96864922c9f208c6893b680f6b.scope.
Feb 25 08:43:53 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:43:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3bed7b34cf90be767f7a791dc520543fd416b38a4a60c009525849cabefa53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:43:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3bed7b34cf90be767f7a791dc520543fd416b38a4a60c009525849cabefa53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:43:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3bed7b34cf90be767f7a791dc520543fd416b38a4a60c009525849cabefa53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:43:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f3bed7b34cf90be767f7a791dc520543fd416b38a4a60c009525849cabefa53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:43:53 np0005629333 podman[413494]: 2026-02-25 13:43:53.303666905 +0000 UTC m=+0.333325701 container init 666e3e24a76278e28d8d0fdd7b5a6d65bc82fc96864922c9f208c6893b680f6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_cray, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:43:53 np0005629333 podman[413494]: 2026-02-25 13:43:53.314513607 +0000 UTC m=+0.344172362 container start 666e3e24a76278e28d8d0fdd7b5a6d65bc82fc96864922c9f208c6893b680f6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:43:53 np0005629333 podman[413494]: 2026-02-25 13:43:53.340719709 +0000 UTC m=+0.370378464 container attach 666e3e24a76278e28d8d0fdd7b5a6d65bc82fc96864922c9f208c6893b680f6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_cray, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:43:53 np0005629333 awesome_cray[413510]: {
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:    "0": [
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:        {
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "devices": [
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "/dev/loop3"
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            ],
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "lv_name": "ceph_lv0",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "lv_size": "21470642176",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "name": "ceph_lv0",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "tags": {
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.cluster_name": "ceph",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.crush_device_class": "",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.encrypted": "0",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.objectstore": "bluestore",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.osd_id": "0",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.type": "block",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.vdo": "0",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.with_tpm": "0"
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            },
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "type": "block",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "vg_name": "ceph_vg0"
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:        }
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:    ],
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:    "1": [
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:        {
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "devices": [
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "/dev/loop4"
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            ],
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "lv_name": "ceph_lv1",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "lv_size": "21470642176",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "name": "ceph_lv1",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "tags": {
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.cluster_name": "ceph",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.crush_device_class": "",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.encrypted": "0",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.objectstore": "bluestore",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.osd_id": "1",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.type": "block",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.vdo": "0",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.with_tpm": "0"
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            },
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "type": "block",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "vg_name": "ceph_vg1"
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:        }
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:    ],
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:    "2": [
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:        {
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "devices": [
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "/dev/loop5"
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            ],
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "lv_name": "ceph_lv2",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "lv_size": "21470642176",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "name": "ceph_lv2",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "tags": {
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.cluster_name": "ceph",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.crush_device_class": "",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.encrypted": "0",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.objectstore": "bluestore",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.osd_id": "2",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.type": "block",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.vdo": "0",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:                "ceph.with_tpm": "0"
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            },
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "type": "block",
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:            "vg_name": "ceph_vg2"
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:        }
Feb 25 08:43:53 np0005629333 awesome_cray[413510]:    ]
Feb 25 08:43:53 np0005629333 awesome_cray[413510]: }
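[editor's note] The JSON the awesome_cray container printed is in the shape of `ceph-volume lvm list --format json` output: one entry per OSD id describing the backing LV, its physical device, and the ceph.* LV tags. A small sketch that consumes it, mapping each OSD to its LV and device; the field names are exactly those shown above, and reading the blob from a file named lvm_list.json is an assumption:

    import json

    with open('lvm_list.json') as f:
        inventory = json.load(f)

    for osd_id in sorted(inventory, key=int):
        for lv in inventory[osd_id]:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"({tags['ceph.objectstore']}, "
                  f"fsid {tags['ceph.osd_fsid']})")
    # osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (bluestore, fsid d19afe3c-7923-4776-bcc2-88886150b441)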
Feb 25 08:43:53 np0005629333 systemd[1]: libpod-666e3e24a76278e28d8d0fdd7b5a6d65bc82fc96864922c9f208c6893b680f6b.scope: Deactivated successfully.
Feb 25 08:43:53 np0005629333 podman[413494]: 2026-02-25 13:43:53.637885161 +0000 UTC m=+0.667543886 container died 666e3e24a76278e28d8d0fdd7b5a6d65bc82fc96864922c9f208c6893b680f6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_cray, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Feb 25 08:43:53 np0005629333 systemd[1]: var-lib-containers-storage-overlay-0f3bed7b34cf90be767f7a791dc520543fd416b38a4a60c009525849cabefa53-merged.mount: Deactivated successfully.
Feb 25 08:43:53 np0005629333 podman[413494]: 2026-02-25 13:43:53.882257237 +0000 UTC m=+0.911915952 container remove 666e3e24a76278e28d8d0fdd7b5a6d65bc82fc96864922c9f208c6893b680f6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_cray, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 08:43:53 np0005629333 systemd[1]: libpod-conmon-666e3e24a76278e28d8d0fdd7b5a6d65bc82fc96864922c9f208c6893b680f6b.scope: Deactivated successfully.
Feb 25 08:43:53 np0005629333 nova_compute[244014]: 2026-02-25 13:43:53.955 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:43:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3720: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:54 np0005629333 podman[413593]: 2026-02-25 13:43:54.300861255 +0000 UTC m=+0.056629117 container create c4994f2106d571ede83e3d078cef4c5f3bb72faf795868edb2e6f728971de3c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_jones, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:43:54 np0005629333 podman[413593]: 2026-02-25 13:43:54.272733187 +0000 UTC m=+0.028501119 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:43:54 np0005629333 systemd[1]: Started libpod-conmon-c4994f2106d571ede83e3d078cef4c5f3bb72faf795868edb2e6f728971de3c6.scope.
Feb 25 08:43:54 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:43:54 np0005629333 podman[413593]: 2026-02-25 13:43:54.505439508 +0000 UTC m=+0.261207400 container init c4994f2106d571ede83e3d078cef4c5f3bb72faf795868edb2e6f728971de3c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_jones, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:43:54 np0005629333 podman[413593]: 2026-02-25 13:43:54.512513301 +0000 UTC m=+0.268281153 container start c4994f2106d571ede83e3d078cef4c5f3bb72faf795868edb2e6f728971de3c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 08:43:54 np0005629333 distracted_jones[413610]: 167 167
Feb 25 08:43:54 np0005629333 systemd[1]: libpod-c4994f2106d571ede83e3d078cef4c5f3bb72faf795868edb2e6f728971de3c6.scope: Deactivated successfully.
Feb 25 08:43:54 np0005629333 podman[413593]: 2026-02-25 13:43:54.572527094 +0000 UTC m=+0.328294966 container attach c4994f2106d571ede83e3d078cef4c5f3bb72faf795868edb2e6f728971de3c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 08:43:54 np0005629333 podman[413593]: 2026-02-25 13:43:54.57342423 +0000 UTC m=+0.329192082 container died c4994f2106d571ede83e3d078cef4c5f3bb72faf795868edb2e6f728971de3c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 08:43:54 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f8c8ca95ce2a8ed9feef03fcd061634892afd217b1dae720f3cfe66d45fc2008-merged.mount: Deactivated successfully.
Feb 25 08:43:54 np0005629333 podman[413593]: 2026-02-25 13:43:54.727209685 +0000 UTC m=+0.482977537 container remove c4994f2106d571ede83e3d078cef4c5f3bb72faf795868edb2e6f728971de3c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 08:43:54 np0005629333 systemd[1]: libpod-conmon-c4994f2106d571ede83e3d078cef4c5f3bb72faf795868edb2e6f728971de3c6.scope: Deactivated successfully.
Feb 25 08:43:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:43:54 np0005629333 podman[413633]: 2026-02-25 13:43:54.954174241 +0000 UTC m=+0.093920298 container create d529ffd5f6d38fa158260ef5ffdae6bdd7b2f496424e7a494ecf86dd7593c6a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_dijkstra, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 08:43:54 np0005629333 podman[413633]: 2026-02-25 13:43:54.900959083 +0000 UTC m=+0.040705210 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:43:55 np0005629333 systemd[1]: Started libpod-conmon-d529ffd5f6d38fa158260ef5ffdae6bdd7b2f496424e7a494ecf86dd7593c6a1.scope.
Feb 25 08:43:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:43:55.087 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:43:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:43:55.089 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:43:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:43:55.089 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:43:55 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:43:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a84038c714feb6afc15792d3e959e1f10c6860d3c9ac78114beb661a1d1596/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:43:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a84038c714feb6afc15792d3e959e1f10c6860d3c9ac78114beb661a1d1596/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:43:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a84038c714feb6afc15792d3e959e1f10c6860d3c9ac78114beb661a1d1596/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:43:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07a84038c714feb6afc15792d3e959e1f10c6860d3c9ac78114beb661a1d1596/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:43:55 np0005629333 podman[413633]: 2026-02-25 13:43:55.108011356 +0000 UTC m=+0.247757403 container init d529ffd5f6d38fa158260ef5ffdae6bdd7b2f496424e7a494ecf86dd7593c6a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 08:43:55 np0005629333 podman[413633]: 2026-02-25 13:43:55.113883095 +0000 UTC m=+0.253629142 container start d529ffd5f6d38fa158260ef5ffdae6bdd7b2f496424e7a494ecf86dd7593c6a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_dijkstra, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:43:55 np0005629333 podman[413633]: 2026-02-25 13:43:55.282183977 +0000 UTC m=+0.421930064 container attach d529ffd5f6d38fa158260ef5ffdae6bdd7b2f496424e7a494ecf86dd7593c6a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_dijkstra, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 08:43:55 np0005629333 lvm[413729]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:43:55 np0005629333 lvm[413729]: VG ceph_vg1 finished
Feb 25 08:43:55 np0005629333 lvm[413728]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:43:55 np0005629333 lvm[413728]: VG ceph_vg0 finished
Feb 25 08:43:55 np0005629333 lvm[413731]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:43:55 np0005629333 lvm[413731]: VG ceph_vg2 finished
Feb 25 08:43:55 np0005629333 elegant_dijkstra[413650]: {}
Feb 25 08:43:55 np0005629333 systemd[1]: libpod-d529ffd5f6d38fa158260ef5ffdae6bdd7b2f496424e7a494ecf86dd7593c6a1.scope: Deactivated successfully.
Feb 25 08:43:55 np0005629333 systemd[1]: libpod-d529ffd5f6d38fa158260ef5ffdae6bdd7b2f496424e7a494ecf86dd7593c6a1.scope: Consumed 1.200s CPU time.
Feb 25 08:43:55 np0005629333 podman[413734]: 2026-02-25 13:43:55.952084159 +0000 UTC m=+0.030593389 container died d529ffd5f6d38fa158260ef5ffdae6bdd7b2f496424e7a494ecf86dd7593c6a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 08:43:56 np0005629333 systemd[1]: var-lib-containers-storage-overlay-07a84038c714feb6afc15792d3e959e1f10c6860d3c9ac78114beb661a1d1596-merged.mount: Deactivated successfully.
Feb 25 08:43:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3721: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:56 np0005629333 podman[413734]: 2026-02-25 13:43:56.485899905 +0000 UTC m=+0.564409155 container remove d529ffd5f6d38fa158260ef5ffdae6bdd7b2f496424e7a494ecf86dd7593c6a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:43:56 np0005629333 systemd[1]: libpod-conmon-d529ffd5f6d38fa158260ef5ffdae6bdd7b2f496424e7a494ecf86dd7593c6a1.scope: Deactivated successfully.
Feb 25 08:43:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:43:56 np0005629333 nova_compute[244014]: 2026-02-25 13:43:56.674 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:43:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:43:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:43:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:43:57 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:43:57 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:43:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3722: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:43:59 np0005629333 nova_compute[244014]: 2026-02-25 13:43:59.043 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:43:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:44:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3723: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:44:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:44:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:44:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:44:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:44:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:44:01 np0005629333 nova_compute[244014]: 2026-02-25 13:44:01.679 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:44:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3724: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:02 np0005629333 podman[413775]: 2026-02-25 13:44:02.720251917 +0000 UTC m=+0.062722422 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260223, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Feb 25 08:44:02 np0005629333 podman[413776]: 2026-02-25 13:44:02.772808975 +0000 UTC m=+0.109819824 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_controller)
Feb 25 08:44:04 np0005629333 nova_compute[244014]: 2026-02-25 13:44:04.044 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:44:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3725: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:44:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3726: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:06 np0005629333 nova_compute[244014]: 2026-02-25 13:44:06.682 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:44:07 np0005629333 nova_compute[244014]: 2026-02-25 13:44:07.890 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:44:07 np0005629333 nova_compute[244014]: 2026-02-25 13:44:07.890 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 25 08:44:07 np0005629333 nova_compute[244014]: 2026-02-25 13:44:07.907 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb 25 08:44:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3727: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:09 np0005629333 nova_compute[244014]: 2026-02-25 13:44:09.046 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:44:09 np0005629333 nova_compute[244014]: 2026-02-25 13:44:09.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:44:09 np0005629333 nova_compute[244014]: 2026-02-25 13:44:09.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 25 08:44:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:44:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3728: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:11 np0005629333 nova_compute[244014]: 2026-02-25 13:44:11.683 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:44:11 np0005629333 nova_compute[244014]: 2026-02-25 13:44:11.892 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:44:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3729: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:14 np0005629333 nova_compute[244014]: 2026-02-25 13:44:14.050 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:44:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3730: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:14 np0005629333 nova_compute[244014]: 2026-02-25 13:44:14.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:44:14 np0005629333 nova_compute[244014]: 2026-02-25 13:44:14.920 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:44:14 np0005629333 nova_compute[244014]: 2026-02-25 13:44:14.920 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:44:14 np0005629333 nova_compute[244014]: 2026-02-25 13:44:14.920 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:44:14 np0005629333 nova_compute[244014]: 2026-02-25 13:44:14.921 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:44:14 np0005629333 nova_compute[244014]: 2026-02-25 13:44:14.921 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:44:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:44:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:44:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3048526410' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:44:15 np0005629333 nova_compute[244014]: 2026-02-25 13:44:15.461 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:44:15 np0005629333 nova_compute[244014]: 2026-02-25 13:44:15.643 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:44:15 np0005629333 nova_compute[244014]: 2026-02-25 13:44:15.645 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3535MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:44:15 np0005629333 nova_compute[244014]: 2026-02-25 13:44:15.646 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:44:15 np0005629333 nova_compute[244014]: 2026-02-25 13:44:15.646 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:44:15 np0005629333 nova_compute[244014]: 2026-02-25 13:44:15.736 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:44:15 np0005629333 nova_compute[244014]: 2026-02-25 13:44:15.736 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:44:15 np0005629333 nova_compute[244014]: 2026-02-25 13:44:15.761 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:44:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3731: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:44:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1324508909' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:44:16 np0005629333 nova_compute[244014]: 2026-02-25 13:44:16.326 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:44:16 np0005629333 nova_compute[244014]: 2026-02-25 13:44:16.331 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:44:16 np0005629333 nova_compute[244014]: 2026-02-25 13:44:16.346 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:44:16 np0005629333 nova_compute[244014]: 2026-02-25 13:44:16.348 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:44:16 np0005629333 nova_compute[244014]: 2026-02-25 13:44:16.348 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:44:16 np0005629333 nova_compute[244014]: 2026-02-25 13:44:16.685 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:44:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3732: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:19 np0005629333 nova_compute[244014]: 2026-02-25 13:44:19.052 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:44:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:44:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3733: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:21 np0005629333 nova_compute[244014]: 2026-02-25 13:44:21.349 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:44:21 np0005629333 nova_compute[244014]: 2026-02-25 13:44:21.350 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:44:21 np0005629333 nova_compute[244014]: 2026-02-25 13:44:21.350 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:44:21 np0005629333 nova_compute[244014]: 2026-02-25 13:44:21.372 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:44:21 np0005629333 nova_compute[244014]: 2026-02-25 13:44:21.727 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:44:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3734: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:24 np0005629333 nova_compute[244014]: 2026-02-25 13:44:24.055 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:44:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3735: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:44:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3736: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:26 np0005629333 nova_compute[244014]: 2026-02-25 13:44:26.730 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:44:26 np0005629333 nova_compute[244014]: 2026-02-25 13:44:26.894 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:44:27 np0005629333 nova_compute[244014]: 2026-02-25 13:44:27.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:44:27 np0005629333 nova_compute[244014]: 2026-02-25 13:44:27.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 08:44:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3737: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:29 np0005629333 nova_compute[244014]: 2026-02-25 13:44:29.069 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:44:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:44:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3738: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:44:31
Feb 25 08:44:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:44:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:44:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'vms', 'backups', '.rgw.root', 'default.rgw.log', 'volumes', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Feb 25 08:44:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:44:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:44:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:44:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:44:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:44:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:44:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:44:31 np0005629333 nova_compute[244014]: 2026-02-25 13:44:31.734 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:44:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3739: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:44:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:44:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:44:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:44:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:44:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:44:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:44:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:44:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:44:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:44:32 np0005629333 nova_compute[244014]: 2026-02-25 13:44:32.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:44:33 np0005629333 podman[413862]: 2026-02-25 13:44:33.734168442 +0000 UTC m=+0.072924004 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 08:44:33 np0005629333 podman[413863]: 2026-02-25 13:44:33.805716667 +0000 UTC m=+0.137439087 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 25 08:44:33 np0005629333 nova_compute[244014]: 2026-02-25 13:44:33.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:44:34 np0005629333 nova_compute[244014]: 2026-02-25 13:44:34.098 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:44:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3740: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:44:35 np0005629333 nova_compute[244014]: 2026-02-25 13:44:35.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:44:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3741: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:36 np0005629333 nova_compute[244014]: 2026-02-25 13:44:36.736 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:44:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3742: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:39 np0005629333 nova_compute[244014]: 2026-02-25 13:44:39.148 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:44:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:44:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3743: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:40 np0005629333 nova_compute[244014]: 2026-02-25 13:44:40.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:44:41 np0005629333 nova_compute[244014]: 2026-02-25 13:44:41.746 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:44:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3744: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:44:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:44:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:44:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:44:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:44:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:44:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:44:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:44:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:44:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:44:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:44:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:44:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:44:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:44:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:44:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:44:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:44:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:44:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:44:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:44:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:44:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:44:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 08:44:44 np0005629333 nova_compute[244014]: 2026-02-25 13:44:44.184 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:44:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3745: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:44:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3746: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:46 np0005629333 nova_compute[244014]: 2026-02-25 13:44:46.779 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:44:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:44:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4207949395' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:44:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:44:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4207949395' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:44:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3747: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:49 np0005629333 nova_compute[244014]: 2026-02-25 13:44:49.220 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:44:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:44:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3748: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:51 np0005629333 nova_compute[244014]: 2026-02-25 13:44:51.783 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:44:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3749: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:53 np0005629333 nova_compute[244014]: 2026-02-25 13:44:53.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:44:54 np0005629333 nova_compute[244014]: 2026-02-25 13:44:54.222 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:44:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3750: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:44:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:44:55.088 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:44:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:44:55.089 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:44:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:44:55.089 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:44:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3751: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:56 np0005629333 nova_compute[244014]: 2026-02-25 13:44:56.786 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:44:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:44:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:44:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:44:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:44:57 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:44:57 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:44:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3752: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:44:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:44:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:44:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:44:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:44:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:44:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:44:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:44:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:44:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:44:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:44:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:44:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:44:58 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:44:58 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:44:58 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
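The handle_command entries above are the cephadm mgr module driving the monitor through Ceph's JSON command interface; each audit line records the dispatched {"prefix": ...} body. A sketch of issuing the same commands from Python with the rados bindings, assuming a reachable cluster and the default conf and keyring paths (the command bodies are copied from the log):

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        for cmd in ({"prefix": "config generate-minimal-conf"},
                    {"prefix": "auth get", "entity": "client.bootstrap-osd"}):
            # mon_command takes the JSON command body and an input buffer,
            # returning (retcode, output bytes, status string).
            ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b'')
            print(ret, outbuf.decode() or outs)
    finally:
        cluster.shutdown()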
Feb 25 08:44:58 np0005629333 podman[414118]: 2026-02-25 13:44:58.826302067 +0000 UTC m=+0.103926105 container create af321ee48fd5dc2e557ef18164b331d78d6ab432da8cf1f25d8c6a73225a68fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lichterman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:44:58 np0005629333 podman[414118]: 2026-02-25 13:44:58.752631502 +0000 UTC m=+0.030255660 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:44:58 np0005629333 systemd[1]: Started libpod-conmon-af321ee48fd5dc2e557ef18164b331d78d6ab432da8cf1f25d8c6a73225a68fd.scope.
Feb 25 08:44:58 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:44:59 np0005629333 podman[414118]: 2026-02-25 13:44:59.0211278 +0000 UTC m=+0.298751928 container init af321ee48fd5dc2e557ef18164b331d78d6ab432da8cf1f25d8c6a73225a68fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lichterman, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:44:59 np0005629333 podman[414118]: 2026-02-25 13:44:59.029906362 +0000 UTC m=+0.307530400 container start af321ee48fd5dc2e557ef18164b331d78d6ab432da8cf1f25d8c6a73225a68fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lichterman, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 08:44:59 np0005629333 awesome_lichterman[414134]: 167 167
Feb 25 08:44:59 np0005629333 systemd[1]: libpod-af321ee48fd5dc2e557ef18164b331d78d6ab432da8cf1f25d8c6a73225a68fd.scope: Deactivated successfully.
Feb 25 08:44:59 np0005629333 podman[414118]: 2026-02-25 13:44:59.086357423 +0000 UTC m=+0.363981881 container attach af321ee48fd5dc2e557ef18164b331d78d6ab432da8cf1f25d8c6a73225a68fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lichterman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:44:59 np0005629333 podman[414118]: 2026-02-25 13:44:59.087564938 +0000 UTC m=+0.365189006 container died af321ee48fd5dc2e557ef18164b331d78d6ab432da8cf1f25d8c6a73225a68fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 08:44:59 np0005629333 nova_compute[244014]: 2026-02-25 13:44:59.265 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:44:59 np0005629333 systemd[1]: var-lib-containers-storage-overlay-30d8bc6a90a6be1c36977364c5b591e7623a4255e86d2ecba616f1dbb517bdd6-merged.mount: Deactivated successfully.
Feb 25 08:44:59 np0005629333 podman[414118]: 2026-02-25 13:44:59.901127972 +0000 UTC m=+1.178752040 container remove af321ee48fd5dc2e557ef18164b331d78d6ab432da8cf1f25d8c6a73225a68fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_lichterman, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:44:59 np0005629333 systemd[1]: libpod-conmon-af321ee48fd5dc2e557ef18164b331d78d6ab432da8cf1f25d8c6a73225a68fd.scope: Deactivated successfully.
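The podman[414118] events above trace one short-lived cephadm helper container from start to finish: create, init, start, attach, died, remove, all inside a libpod-conmon scope, with the container printing "167 167" (the ceph uid/gid that cephadm probes for). A rough reproduction of the pattern with subprocess; the image digest is copied from the log, while the stat command is an assumption, since the log does not record the container's argv:

    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86')

    # One-shot container: podman emits the same create/init/start/attach/
    # died/remove event sequence, then deletes the container (--rm).
    out = subprocess.run(
        ['podman', 'run', '--rm', '--entrypoint', 'stat', IMAGE,
         '-c', '%u %g', '/var/lib/ceph'],
        check=True, capture_output=True, text=True).stdout
    print(out.strip())  # prints "167 167" when the path is owned by ceph:ceph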
Feb 25 08:44:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:45:00 np0005629333 podman[414158]: 2026-02-25 13:45:00.04524171 +0000 UTC m=+0.028129708 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:45:00 np0005629333 podman[414158]: 2026-02-25 13:45:00.177419785 +0000 UTC m=+0.160307703 container create a984fbc28166d4a710b992e2989cfc62a9168158f6061c4cd485dcc905d12663 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_haibt, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 08:45:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3753: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:00 np0005629333 systemd[1]: Started libpod-conmon-a984fbc28166d4a710b992e2989cfc62a9168158f6061c4cd485dcc905d12663.scope.
Feb 25 08:45:00 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:45:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e8340af428344367cce6102944ba1a8dfed5808ea1c1ca78ca0eb58337930c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:45:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e8340af428344367cce6102944ba1a8dfed5808ea1c1ca78ca0eb58337930c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:45:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e8340af428344367cce6102944ba1a8dfed5808ea1c1ca78ca0eb58337930c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:45:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e8340af428344367cce6102944ba1a8dfed5808ea1c1ca78ca0eb58337930c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:45:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e8340af428344367cce6102944ba1a8dfed5808ea1c1ca78ca0eb58337930c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:45:00 np0005629333 podman[414158]: 2026-02-25 13:45:00.582075742 +0000 UTC m=+0.564963670 container init a984fbc28166d4a710b992e2989cfc62a9168158f6061c4cd485dcc905d12663 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_haibt, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 08:45:00 np0005629333 podman[414158]: 2026-02-25 13:45:00.591784821 +0000 UTC m=+0.574672769 container start a984fbc28166d4a710b992e2989cfc62a9168158f6061c4cd485dcc905d12663 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_haibt, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:45:00 np0005629333 podman[414158]: 2026-02-25 13:45:00.701528042 +0000 UTC m=+0.684415980 container attach a984fbc28166d4a710b992e2989cfc62a9168158f6061c4cd485dcc905d12663 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_haibt, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:45:01 np0005629333 ecstatic_haibt[414174]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:45:01 np0005629333 ecstatic_haibt[414174]: --> All data devices are unavailable
Feb 25 08:45:01 np0005629333 systemd[1]: libpod-a984fbc28166d4a710b992e2989cfc62a9168158f6061c4cd485dcc905d12663.scope: Deactivated successfully.
Feb 25 08:45:01 np0005629333 podman[414158]: 2026-02-25 13:45:01.07347747 +0000 UTC m=+1.056365398 container died a984fbc28166d4a710b992e2989cfc62a9168158f6061c4cd485dcc905d12663 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_haibt, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 08:45:01 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c7e8340af428344367cce6102944ba1a8dfed5808ea1c1ca78ca0eb58337930c-merged.mount: Deactivated successfully.
Feb 25 08:45:01 np0005629333 podman[414158]: 2026-02-25 13:45:01.583642617 +0000 UTC m=+1.566530565 container remove a984fbc28166d4a710b992e2989cfc62a9168158f6061c4cd485dcc905d12663 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 08:45:01 np0005629333 systemd[1]: libpod-conmon-a984fbc28166d4a710b992e2989cfc62a9168158f6061c4cd485dcc905d12663.scope: Deactivated successfully.
Feb 25 08:45:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:45:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:45:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:45:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:45:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:45:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:45:01 np0005629333 nova_compute[244014]: 2026-02-25 13:45:01.821 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:45:02 np0005629333 podman[414270]: 2026-02-25 13:45:02.007397893 +0000 UTC m=+0.033081381 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:45:02 np0005629333 podman[414270]: 2026-02-25 13:45:02.170421303 +0000 UTC m=+0.196104741 container create a60b6bd9678b4b148020d2f3842867e73d3f6065cddb1f22639138723f416a47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_matsumoto, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:45:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3754: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:02 np0005629333 systemd[1]: Started libpod-conmon-a60b6bd9678b4b148020d2f3842867e73d3f6065cddb1f22639138723f416a47.scope.
Feb 25 08:45:02 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:45:02 np0005629333 podman[414270]: 2026-02-25 13:45:02.383860811 +0000 UTC m=+0.409544259 container init a60b6bd9678b4b148020d2f3842867e73d3f6065cddb1f22639138723f416a47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_matsumoto, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 08:45:02 np0005629333 podman[414270]: 2026-02-25 13:45:02.389157143 +0000 UTC m=+0.414840541 container start a60b6bd9678b4b148020d2f3842867e73d3f6065cddb1f22639138723f416a47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:45:02 np0005629333 epic_matsumoto[414287]: 167 167
Feb 25 08:45:02 np0005629333 systemd[1]: libpod-a60b6bd9678b4b148020d2f3842867e73d3f6065cddb1f22639138723f416a47.scope: Deactivated successfully.
Feb 25 08:45:02 np0005629333 podman[414270]: 2026-02-25 13:45:02.500648584 +0000 UTC m=+0.526332082 container attach a60b6bd9678b4b148020d2f3842867e73d3f6065cddb1f22639138723f416a47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_matsumoto, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:45:02 np0005629333 podman[414270]: 2026-02-25 13:45:02.501371394 +0000 UTC m=+0.527054822 container died a60b6bd9678b4b148020d2f3842867e73d3f6065cddb1f22639138723f416a47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_matsumoto, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:45:02 np0005629333 systemd[1]: var-lib-containers-storage-overlay-777d5397c68e377d63c29e0279f7b6c3bae18c844ba775a2419203a2dfdd5e3d-merged.mount: Deactivated successfully.
Feb 25 08:45:02 np0005629333 podman[414270]: 2026-02-25 13:45:02.789523677 +0000 UTC m=+0.815207085 container remove a60b6bd9678b4b148020d2f3842867e73d3f6065cddb1f22639138723f416a47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:45:02 np0005629333 systemd[1]: libpod-conmon-a60b6bd9678b4b148020d2f3842867e73d3f6065cddb1f22639138723f416a47.scope: Deactivated successfully.
Feb 25 08:45:03 np0005629333 podman[414311]: 2026-02-25 13:45:03.005853487 +0000 UTC m=+0.088261754 container create 224680b5a162215787aa22e937ae1bbfeb119360fb20695b5825d9a1f07308db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_elgamal, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 08:45:03 np0005629333 podman[414311]: 2026-02-25 13:45:02.946533675 +0000 UTC m=+0.028941952 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:45:03 np0005629333 systemd[1]: Started libpod-conmon-224680b5a162215787aa22e937ae1bbfeb119360fb20695b5825d9a1f07308db.scope.
Feb 25 08:45:03 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:45:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9249cee0f8af2ef12a5f8a4f616b2674a47303d9024b4ee4c6ef152575f5a89/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:45:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9249cee0f8af2ef12a5f8a4f616b2674a47303d9024b4ee4c6ef152575f5a89/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:45:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9249cee0f8af2ef12a5f8a4f616b2674a47303d9024b4ee4c6ef152575f5a89/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:45:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9249cee0f8af2ef12a5f8a4f616b2674a47303d9024b4ee4c6ef152575f5a89/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:45:03 np0005629333 podman[414311]: 2026-02-25 13:45:03.151249991 +0000 UTC m=+0.233658338 container init 224680b5a162215787aa22e937ae1bbfeb119360fb20695b5825d9a1f07308db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_elgamal, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:45:03 np0005629333 podman[414311]: 2026-02-25 13:45:03.160883728 +0000 UTC m=+0.243292025 container start 224680b5a162215787aa22e937ae1bbfeb119360fb20695b5825d9a1f07308db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_elgamal, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:45:03 np0005629333 podman[414311]: 2026-02-25 13:45:03.36718272 +0000 UTC m=+0.449591087 container attach 224680b5a162215787aa22e937ae1bbfeb119360fb20695b5825d9a1f07308db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]: {
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:    "0": [
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:        {
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "devices": [
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "/dev/loop3"
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            ],
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "lv_name": "ceph_lv0",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "lv_size": "21470642176",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "name": "ceph_lv0",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "tags": {
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.cluster_name": "ceph",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.crush_device_class": "",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.encrypted": "0",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.objectstore": "bluestore",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.osd_id": "0",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.type": "block",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.vdo": "0",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.with_tpm": "0"
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            },
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "type": "block",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "vg_name": "ceph_vg0"
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:        }
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:    ],
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:    "1": [
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:        {
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "devices": [
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "/dev/loop4"
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            ],
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "lv_name": "ceph_lv1",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "lv_size": "21470642176",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "name": "ceph_lv1",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "tags": {
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.cluster_name": "ceph",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.crush_device_class": "",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.encrypted": "0",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.objectstore": "bluestore",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.osd_id": "1",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.type": "block",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.vdo": "0",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.with_tpm": "0"
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            },
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "type": "block",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "vg_name": "ceph_vg1"
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:        }
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:    ],
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:    "2": [
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:        {
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "devices": [
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "/dev/loop5"
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            ],
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "lv_name": "ceph_lv2",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "lv_size": "21470642176",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "name": "ceph_lv2",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "tags": {
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.cluster_name": "ceph",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.crush_device_class": "",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.encrypted": "0",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.objectstore": "bluestore",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.osd_id": "2",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.type": "block",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.vdo": "0",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:                "ceph.with_tpm": "0"
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            },
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "type": "block",
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:            "vg_name": "ceph_vg2"
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:        }
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]:    ]
Feb 25 08:45:03 np0005629333 ecstatic_elgamal[414328]: }
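The JSON block above is ceph-volume lvm list style output keyed by OSD id; each entry pairs the backing physical device with its logical volume and the ceph.* LV tags. A short parsing sketch, assuming the block has been saved verbatim to a file named lvm_list.json (a hypothetical filename):

    import json

    with open('lvm_list.json') as f:
        report = json.load(f)

    # Map each OSD id to its LV path, physical device(s), and osd_fsid tag.
    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv['tags']
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']}")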
Feb 25 08:45:03 np0005629333 systemd[1]: libpod-224680b5a162215787aa22e937ae1bbfeb119360fb20695b5825d9a1f07308db.scope: Deactivated successfully.
Feb 25 08:45:03 np0005629333 podman[414311]: 2026-02-25 13:45:03.494283009 +0000 UTC m=+0.576691306 container died 224680b5a162215787aa22e937ae1bbfeb119360fb20695b5825d9a1f07308db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Feb 25 08:45:03 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c9249cee0f8af2ef12a5f8a4f616b2674a47303d9024b4ee4c6ef152575f5a89-merged.mount: Deactivated successfully.
Feb 25 08:45:03 np0005629333 podman[414311]: 2026-02-25 13:45:03.995554071 +0000 UTC m=+1.077962338 container remove 224680b5a162215787aa22e937ae1bbfeb119360fb20695b5825d9a1f07308db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_elgamal, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 08:45:04 np0005629333 podman[414349]: 2026-02-25 13:45:04.087561382 +0000 UTC m=+0.304218925 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 08:45:04 np0005629333 systemd[1]: libpod-conmon-224680b5a162215787aa22e937ae1bbfeb119360fb20695b5825d9a1f07308db.scope: Deactivated successfully.
Feb 25 08:45:04 np0005629333 podman[414392]: 2026-02-25 13:45:04.233744849 +0000 UTC m=+0.097249453 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, config_id=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
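The two health_status events above are podman running each container's configured healthcheck (the embedded config_data shows the test command '/openstack/healthcheck') and reporting healthy with a zero failing streak. The same check can be triggered on demand; a sketch via subprocess, with the container name taken from the log:

    import subprocess

    # 'podman healthcheck run' executes the container's configured test
    # command and exits 0 when the container is healthy.
    rc = subprocess.run(['podman', 'healthcheck', 'run',
                         'ovn_metadata_agent']).returncode
    print('healthy' if rc == 0 else f'unhealthy (rc={rc})')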
Feb 25 08:45:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3755: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:04 np0005629333 nova_compute[244014]: 2026-02-25 13:45:04.268 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:45:04 np0005629333 podman[414457]: 2026-02-25 13:45:04.518419282 +0000 UTC m=+0.098894500 container create 7375749be80d90bed8d44e94a512e8ce9a0b490b6a83b6b08e707af767188002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sanderson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 08:45:04 np0005629333 podman[414457]: 2026-02-25 13:45:04.444763468 +0000 UTC m=+0.025238676 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:45:04 np0005629333 systemd[1]: Started libpod-conmon-7375749be80d90bed8d44e94a512e8ce9a0b490b6a83b6b08e707af767188002.scope.
Feb 25 08:45:04 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:45:04 np0005629333 podman[414457]: 2026-02-25 13:45:04.651715469 +0000 UTC m=+0.232190677 container init 7375749be80d90bed8d44e94a512e8ce9a0b490b6a83b6b08e707af767188002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sanderson, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:45:04 np0005629333 podman[414457]: 2026-02-25 13:45:04.659646957 +0000 UTC m=+0.240122145 container start 7375749be80d90bed8d44e94a512e8ce9a0b490b6a83b6b08e707af767188002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sanderson, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 08:45:04 np0005629333 boring_sanderson[414474]: 167 167
Feb 25 08:45:04 np0005629333 systemd[1]: libpod-7375749be80d90bed8d44e94a512e8ce9a0b490b6a83b6b08e707af767188002.scope: Deactivated successfully.
Feb 25 08:45:04 np0005629333 conmon[414474]: conmon 7375749be80d90bed8d4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7375749be80d90bed8d44e94a512e8ce9a0b490b6a83b6b08e707af767188002.scope/container/memory.events
Feb 25 08:45:04 np0005629333 podman[414457]: 2026-02-25 13:45:04.669971473 +0000 UTC m=+0.250446691 container attach 7375749be80d90bed8d44e94a512e8ce9a0b490b6a83b6b08e707af767188002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sanderson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Feb 25 08:45:04 np0005629333 podman[414457]: 2026-02-25 13:45:04.67091736 +0000 UTC m=+0.251392548 container died 7375749be80d90bed8d44e94a512e8ce9a0b490b6a83b6b08e707af767188002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:45:04 np0005629333 systemd[1]: var-lib-containers-storage-overlay-475a2c90b3ca4dbebb3a7e0f48695749c7a5360fb2761835599df753194bc314-merged.mount: Deactivated successfully.
Feb 25 08:45:04 np0005629333 podman[414457]: 2026-02-25 13:45:04.93143969 +0000 UTC m=+0.511914898 container remove 7375749be80d90bed8d44e94a512e8ce9a0b490b6a83b6b08e707af767188002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 08:45:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:45:04 np0005629333 systemd[1]: libpod-conmon-7375749be80d90bed8d44e94a512e8ce9a0b490b6a83b6b08e707af767188002.scope: Deactivated successfully.
Feb 25 08:45:05 np0005629333 podman[414498]: 2026-02-25 13:45:05.142604172 +0000 UTC m=+0.109535415 container create ce3d603b5365ba97af05524942a80ffc43363088ffd7092211e76b28fb68adbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 08:45:05 np0005629333 podman[414498]: 2026-02-25 13:45:05.070203704 +0000 UTC m=+0.037134967 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:45:05 np0005629333 systemd[1]: Started libpod-conmon-ce3d603b5365ba97af05524942a80ffc43363088ffd7092211e76b28fb68adbb.scope.
Feb 25 08:45:05 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:45:05 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9bbc8d5554595e6ebd61dd411be6fe42eca5d1b0df83a7f26543048fc59c3e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:45:05 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9bbc8d5554595e6ebd61dd411be6fe42eca5d1b0df83a7f26543048fc59c3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:45:05 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9bbc8d5554595e6ebd61dd411be6fe42eca5d1b0df83a7f26543048fc59c3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:45:05 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9bbc8d5554595e6ebd61dd411be6fe42eca5d1b0df83a7f26543048fc59c3e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:45:05 np0005629333 podman[414498]: 2026-02-25 13:45:05.278633488 +0000 UTC m=+0.245564751 container init ce3d603b5365ba97af05524942a80ffc43363088ffd7092211e76b28fb68adbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_wilbur, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:45:05 np0005629333 podman[414498]: 2026-02-25 13:45:05.285080273 +0000 UTC m=+0.252011506 container start ce3d603b5365ba97af05524942a80ffc43363088ffd7092211e76b28fb68adbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_wilbur, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 08:45:05 np0005629333 podman[414498]: 2026-02-25 13:45:05.331157986 +0000 UTC m=+0.298089219 container attach ce3d603b5365ba97af05524942a80ffc43363088ffd7092211e76b28fb68adbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_wilbur, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:45:06 np0005629333 lvm[414592]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:45:06 np0005629333 lvm[414592]: VG ceph_vg0 finished
Feb 25 08:45:06 np0005629333 lvm[414593]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:45:06 np0005629333 lvm[414593]: VG ceph_vg1 finished
Feb 25 08:45:06 np0005629333 lvm[414595]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:45:06 np0005629333 lvm[414595]: VG ceph_vg2 finished
Feb 25 08:45:06 np0005629333 adoring_wilbur[414514]: {}
Feb 25 08:45:06 np0005629333 systemd[1]: libpod-ce3d603b5365ba97af05524942a80ffc43363088ffd7092211e76b28fb68adbb.scope: Deactivated successfully.
Feb 25 08:45:06 np0005629333 systemd[1]: libpod-ce3d603b5365ba97af05524942a80ffc43363088ffd7092211e76b28fb68adbb.scope: Consumed 1.353s CPU time.
Feb 25 08:45:06 np0005629333 podman[414498]: 2026-02-25 13:45:06.200905296 +0000 UTC m=+1.167836539 container died ce3d603b5365ba97af05524942a80ffc43363088ffd7092211e76b28fb68adbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 08:45:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3756: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:06 np0005629333 systemd[1]: var-lib-containers-storage-overlay-cb9bbc8d5554595e6ebd61dd411be6fe42eca5d1b0df83a7f26543048fc59c3e-merged.mount: Deactivated successfully.
Feb 25 08:45:06 np0005629333 podman[414498]: 2026-02-25 13:45:06.766802911 +0000 UTC m=+1.733734144 container remove ce3d603b5365ba97af05524942a80ffc43363088ffd7092211e76b28fb68adbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 08:45:06 np0005629333 systemd[1]: libpod-conmon-ce3d603b5365ba97af05524942a80ffc43363088ffd7092211e76b28fb68adbb.scope: Deactivated successfully.
Feb 25 08:45:06 np0005629333 nova_compute[244014]: 2026-02-25 13:45:06.824 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:45:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:45:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:45:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:45:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:45:08 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:45:08 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:45:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3757: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:09 np0005629333 nova_compute[244014]: 2026-02-25 13:45:09.271 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:45:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:45:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3758: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:11 np0005629333 nova_compute[244014]: 2026-02-25 13:45:11.826 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:45:11 np0005629333 nova_compute[244014]: 2026-02-25 13:45:11.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:45:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3759: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3760: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:14 np0005629333 nova_compute[244014]: 2026-02-25 13:45:14.272 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:45:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:45:15 np0005629333 nova_compute[244014]: 2026-02-25 13:45:15.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:45:15 np0005629333 nova_compute[244014]: 2026-02-25 13:45:15.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:45:15 np0005629333 nova_compute[244014]: 2026-02-25 13:45:15.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:45:15 np0005629333 nova_compute[244014]: 2026-02-25 13:45:15.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:45:15 np0005629333 nova_compute[244014]: 2026-02-25 13:45:15.911 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:45:15 np0005629333 nova_compute[244014]: 2026-02-25 13:45:15.912 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:45:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3761: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:45:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2491815796' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:45:16 np0005629333 nova_compute[244014]: 2026-02-25 13:45:16.457 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:45:16 np0005629333 nova_compute[244014]: 2026-02-25 13:45:16.659 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:45:16 np0005629333 nova_compute[244014]: 2026-02-25 13:45:16.662 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3517MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:45:16 np0005629333 nova_compute[244014]: 2026-02-25 13:45:16.662 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:45:16 np0005629333 nova_compute[244014]: 2026-02-25 13:45:16.663 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:45:16 np0005629333 nova_compute[244014]: 2026-02-25 13:45:16.747 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:45:16 np0005629333 nova_compute[244014]: 2026-02-25 13:45:16.748 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:45:16 np0005629333 nova_compute[244014]: 2026-02-25 13:45:16.772 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:45:16 np0005629333 nova_compute[244014]: 2026-02-25 13:45:16.865 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:45:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:45:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3849367050' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:45:17 np0005629333 nova_compute[244014]: 2026-02-25 13:45:17.344 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:45:17 np0005629333 nova_compute[244014]: 2026-02-25 13:45:17.351 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:45:17 np0005629333 nova_compute[244014]: 2026-02-25 13:45:17.377 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:45:17 np0005629333 nova_compute[244014]: 2026-02-25 13:45:17.380 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:45:17 np0005629333 nova_compute[244014]: 2026-02-25 13:45:17.380 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:45:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3762: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:19 np0005629333 nova_compute[244014]: 2026-02-25 13:45:19.288 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:45:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:45:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3763: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:21 np0005629333 nova_compute[244014]: 2026-02-25 13:45:21.868 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:45:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3764: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:22 np0005629333 nova_compute[244014]: 2026-02-25 13:45:22.382 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:45:22 np0005629333 nova_compute[244014]: 2026-02-25 13:45:22.382 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:45:22 np0005629333 nova_compute[244014]: 2026-02-25 13:45:22.382 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:45:22 np0005629333 nova_compute[244014]: 2026-02-25 13:45:22.397 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:45:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3765: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:24 np0005629333 nova_compute[244014]: 2026-02-25 13:45:24.311 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:45:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:45:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3766: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:26 np0005629333 nova_compute[244014]: 2026-02-25 13:45:26.872 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:45:27 np0005629333 nova_compute[244014]: 2026-02-25 13:45:27.886 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:45:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3767: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:28 np0005629333 nova_compute[244014]: 2026-02-25 13:45:28.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:45:28 np0005629333 nova_compute[244014]: 2026-02-25 13:45:28.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 08:45:29 np0005629333 nova_compute[244014]: 2026-02-25 13:45:29.313 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:45:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:45:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3768: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:45:31
Feb 25 08:45:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:45:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:45:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'backups', 'images', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control']
Feb 25 08:45:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:45:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:45:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:45:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:45:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:45:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:45:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:45:31 np0005629333 nova_compute[244014]: 2026-02-25 13:45:31.873 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:45:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3769: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:45:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:45:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:45:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:45:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:45:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:45:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:45:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:45:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:45:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:45:33 np0005629333 nova_compute[244014]: 2026-02-25 13:45:33.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:45:33 np0005629333 nova_compute[244014]: 2026-02-25 13:45:33.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:45:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3770: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:34 np0005629333 nova_compute[244014]: 2026-02-25 13:45:34.315 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:45:34 np0005629333 podman[414683]: 2026-02-25 13:45:34.734552039 +0000 UTC m=+0.067929021 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Feb 25 08:45:34 np0005629333 podman[414684]: 2026-02-25 13:45:34.781540798 +0000 UTC m=+0.107861278 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 25 08:45:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:45:35 np0005629333 nova_compute[244014]: 2026-02-25 13:45:35.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:45:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3771: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:36 np0005629333 nova_compute[244014]: 2026-02-25 13:45:36.915 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:45:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3772: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:39 np0005629333 nova_compute[244014]: 2026-02-25 13:45:39.317 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:45:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:45:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3773: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:40 np0005629333 nova_compute[244014]: 2026-02-25 13:45:40.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:45:41 np0005629333 nova_compute[244014]: 2026-02-25 13:45:41.919 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:45:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3774: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:45:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:45:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:45:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:45:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:45:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:45:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:45:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:45:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:45:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:45:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:45:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:45:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:45:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:45:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:45:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:45:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:45:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:45:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:45:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:45:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:45:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:45:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 08:45:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3775: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:44 np0005629333 nova_compute[244014]: 2026-02-25 13:45:44.320 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:45:44 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:45:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3776: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:46 np0005629333 nova_compute[244014]: 2026-02-25 13:45:46.922 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:45:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:45:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/721339229' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:45:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:45:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/721339229' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:45:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3777: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:49 np0005629333 nova_compute[244014]: 2026-02-25 13:45:49.322 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:45:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:45:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3778: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:51 np0005629333 nova_compute[244014]: 2026-02-25 13:45:51.924 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:45:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3779: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3780: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:54 np0005629333 nova_compute[244014]: 2026-02-25 13:45:54.326 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:45:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:45:55.090 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:45:55.091 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:45:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:45:55.091 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:45:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3781: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:56 np0005629333 nova_compute[244014]: 2026-02-25 13:45:56.928 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:45:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3782: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:45:59 np0005629333 nova_compute[244014]: 2026-02-25 13:45:59.328 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:45:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:46:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3783: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:46:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:46:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:46:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:46:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:46:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:46:01 np0005629333 nova_compute[244014]: 2026-02-25 13:46:01.932 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:46:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3784: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3785: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:04 np0005629333 nova_compute[244014]: 2026-02-25 13:46:04.330 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:46:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:46:05 np0005629333 podman[414729]: 2026-02-25 13:46:05.736940678 +0000 UTC m=+0.071833383 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0)
Feb 25 08:46:05 np0005629333 podman[414730]: 2026-02-25 13:46:05.769709299 +0000 UTC m=+0.099385574 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 25 08:46:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3786: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:06 np0005629333 nova_compute[244014]: 2026-02-25 13:46:06.934 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:46:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 25 08:46:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 08:46:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:46:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:46:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:46:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:46:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:46:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:46:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:46:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:46:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:46:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:46:07 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:46:07 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:46:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3787: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:08 np0005629333 podman[414914]: 2026-02-25 13:46:08.338111926 +0000 UTC m=+0.026086240 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:46:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 08:46:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:46:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:46:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:46:09 np0005629333 podman[414914]: 2026-02-25 13:46:09.325346079 +0000 UTC m=+1.013320363 container create 91ba03ab8f192535c23fed2ba5f0a90b3b29135fdfa3f67173cd643be7400cc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:46:09 np0005629333 nova_compute[244014]: 2026-02-25 13:46:09.331 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:46:09 np0005629333 systemd[1]: Started libpod-conmon-91ba03ab8f192535c23fed2ba5f0a90b3b29135fdfa3f67173cd643be7400cc7.scope.
Feb 25 08:46:09 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:46:09 np0005629333 podman[414914]: 2026-02-25 13:46:09.597166612 +0000 UTC m=+1.285140906 container init 91ba03ab8f192535c23fed2ba5f0a90b3b29135fdfa3f67173cd643be7400cc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bouman, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 08:46:09 np0005629333 podman[414914]: 2026-02-25 13:46:09.610410033 +0000 UTC m=+1.298384337 container start 91ba03ab8f192535c23fed2ba5f0a90b3b29135fdfa3f67173cd643be7400cc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bouman, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 08:46:09 np0005629333 thirsty_bouman[414930]: 167 167
Feb 25 08:46:09 np0005629333 systemd[1]: libpod-91ba03ab8f192535c23fed2ba5f0a90b3b29135fdfa3f67173cd643be7400cc7.scope: Deactivated successfully.
Feb 25 08:46:09 np0005629333 conmon[414930]: conmon 91ba03ab8f192535c23f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-91ba03ab8f192535c23fed2ba5f0a90b3b29135fdfa3f67173cd643be7400cc7.scope/container/memory.events
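conmon's <nwarn> above is a benign race: the container had already exited, so the systemd scope (and its cgroup) was torn down before conmon could read memory.events. While a scope is alive that file is plain "key value" text; a sketch of reading it, with the container id below as a placeholder and the cgroup-v2 path layout taken from the log line itself:

```python
from pathlib import Path

# Placeholder scope path; substitute a live container id.
path = Path("/sys/fs/cgroup/machine.slice/"
            "libpod-<container-id>.scope/container/memory.events")
events = dict(line.split() for line in path.read_text().splitlines())
# cgroup v2 counters: low, high, max, oom, oom_kill
print(events)
```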
Feb 25 08:46:09 np0005629333 podman[414914]: 2026-02-25 13:46:09.647012143 +0000 UTC m=+1.334986467 container attach 91ba03ab8f192535c23fed2ba5f0a90b3b29135fdfa3f67173cd643be7400cc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bouman, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:46:09 np0005629333 podman[414914]: 2026-02-25 13:46:09.648609039 +0000 UTC m=+1.336583333 container died 91ba03ab8f192535c23fed2ba5f0a90b3b29135fdfa3f67173cd643be7400cc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bouman, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:46:09 np0005629333 systemd[1]: var-lib-containers-storage-overlay-47d05874ebf4f654853e690b2a9a358a46c1e3bc41cac5103a0d69914fd9085b-merged.mount: Deactivated successfully.
Feb 25 08:46:09 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
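The _set_new_cache_sizes line above is the mon autotuning its caches; the three allocations are all 4 MiB-aligned and together cover roughly 99% of the reported cache_size. Checking the log's own numbers:

```python
MiB = 1024 * 1024
cache_size = 1020054731
allocs = {"inc_alloc": 343932928, "full_alloc": 348127232, "kv_alloc": 318767104}

for name, b in allocs.items():
    assert b % (4 * MiB) == 0  # each allocation is a whole number of 4 MiB chunks
    print(f"{name} = {b / MiB:6.0f} MiB ({b / cache_size:.1%} of cache_size)")
print(f"sum = {sum(allocs.values()) / cache_size:.1%} of cache_size")  # ~99.1%
```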
Feb 25 08:46:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3788: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:10 np0005629333 podman[414914]: 2026-02-25 13:46:10.358964223 +0000 UTC m=+2.046938527 container remove 91ba03ab8f192535c23fed2ba5f0a90b3b29135fdfa3f67173cd643be7400cc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_bouman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 08:46:10 np0005629333 systemd[1]: libpod-conmon-91ba03ab8f192535c23fed2ba5f0a90b3b29135fdfa3f67173cd643be7400cc7.scope: Deactivated successfully.
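The burst above (image pull, create, init, start, attach, died, remove, all within about two seconds) is cephadm running a throwaway container; the "167 167" line is the container printing the ceph uid/gid pair. The same lifecycle can be watched live with podman's event stream. A small watcher sketch; the Status/Name/ID field names are my reading of podman's JSON event format and may differ by version:

```python
import json
import subprocess

# Stream container lifecycle events; stop with Ctrl-C.
proc = subprocess.Popen(
    ["podman", "events", "--format", "json", "--filter", "type=container"],
    stdout=subprocess.PIPE, text=True,
)
for line in proc.stdout:
    ev = json.loads(line)
    # Status carries the step: create, init, start, attach, died, remove
    print(ev.get("Status"), ev.get("Name"), str(ev.get("ID", ""))[:12])
```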
Feb 25 08:46:10 np0005629333 podman[414956]: 2026-02-25 13:46:10.544629723 +0000 UTC m=+0.037286841 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:46:10 np0005629333 podman[414956]: 2026-02-25 13:46:10.7148367 +0000 UTC m=+0.207493848 container create ad36a1dccd0df1322ef3a0ee949b64f124ae57819709585996686be8ab50533d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hermann, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 08:46:10 np0005629333 systemd[1]: Started libpod-conmon-ad36a1dccd0df1322ef3a0ee949b64f124ae57819709585996686be8ab50533d.scope.
Feb 25 08:46:10 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:46:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3c01e5fc2c7670c87d875c784e08b6be2c0256ed71ab789716965ba35520acd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:46:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3c01e5fc2c7670c87d875c784e08b6be2c0256ed71ab789716965ba35520acd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:46:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3c01e5fc2c7670c87d875c784e08b6be2c0256ed71ab789716965ba35520acd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:46:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3c01e5fc2c7670c87d875c784e08b6be2c0256ed71ab789716965ba35520acd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:46:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3c01e5fc2c7670c87d875c784e08b6be2c0256ed71ab789716965ba35520acd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
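The repeated kernel notices above are the XFS timestamp limit for this on-disk format: inode timestamps top out at 0x7fffffff seconds after the epoch, the classic signed 32-bit boundary:

```python
from datetime import datetime, timezone

limit = 0x7FFFFFFF  # 2147483647, as printed by the kernel above
print(datetime.fromtimestamp(limit, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
```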
Feb 25 08:46:11 np0005629333 podman[414956]: 2026-02-25 13:46:11.031788518 +0000 UTC m=+0.524445656 container init ad36a1dccd0df1322ef3a0ee949b64f124ae57819709585996686be8ab50533d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hermann, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 08:46:11 np0005629333 podman[414956]: 2026-02-25 13:46:11.041289491 +0000 UTC m=+0.533946599 container start ad36a1dccd0df1322ef3a0ee949b64f124ae57819709585996686be8ab50533d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hermann, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:46:11 np0005629333 podman[414956]: 2026-02-25 13:46:11.138179713 +0000 UTC m=+0.630836871 container attach ad36a1dccd0df1322ef3a0ee949b64f124ae57819709585996686be8ab50533d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hermann, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:46:11 np0005629333 romantic_hermann[414973]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:46:11 np0005629333 romantic_hermann[414973]: --> All data devices are unavailable
Feb 25 08:46:11 np0005629333 systemd[1]: libpod-ad36a1dccd0df1322ef3a0ee949b64f124ae57819709585996686be8ab50533d.scope: Deactivated successfully.
Feb 25 08:46:11 np0005629333 podman[414956]: 2026-02-25 13:46:11.493898605 +0000 UTC m=+0.986555763 container died ad36a1dccd0df1322ef3a0ee949b64f124ae57819709585996686be8ab50533d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hermann, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:46:11 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f3c01e5fc2c7670c87d875c784e08b6be2c0256ed71ab789716965ba35520acd-merged.mount: Deactivated successfully.
Feb 25 08:46:11 np0005629333 nova_compute[244014]: 2026-02-25 13:46:11.937 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:46:11 np0005629333 podman[414956]: 2026-02-25 13:46:11.964682731 +0000 UTC m=+1.457339859 container remove ad36a1dccd0df1322ef3a0ee949b64f124ae57819709585996686be8ab50533d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hermann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:46:11 np0005629333 systemd[1]: libpod-conmon-ad36a1dccd0df1322ef3a0ee949b64f124ae57819709585996686be8ab50533d.scope: Deactivated successfully.
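romantic_hermann above is a ceph-volume pass over the drive group: it saw 3 LVM data devices and declared them all unavailable, which is the expected steady state once every LV already hosts an OSD (compare the lvm list dump further down). A sketch of asking ceph-volume for the per-device verdict; the path/available/rejected_reasons field names reflect my understanding of the inventory JSON and may vary by release:

```python
import json
import subprocess

out = subprocess.run(
    ["ceph-volume", "inventory", "--format", "json"],
    capture_output=True, text=True, check=True,
)
for dev in json.loads(out.stdout):
    verdict = "available" if dev["available"] else dev["rejected_reasons"]
    print(dev["path"], verdict)
```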
Feb 25 08:46:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3789: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:12 np0005629333 podman[415066]: 2026-02-25 13:46:12.451042574 +0000 UTC m=+0.032684019 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:46:12 np0005629333 podman[415066]: 2026-02-25 13:46:12.525051049 +0000 UTC m=+0.106692454 container create 5d2fc6a2ed0d8defc3ceeee0a2eb1ee75459024c3bb1c0cadbf54eb1685cd06c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:46:12 np0005629333 systemd[1]: Started libpod-conmon-5d2fc6a2ed0d8defc3ceeee0a2eb1ee75459024c3bb1c0cadbf54eb1685cd06c.scope.
Feb 25 08:46:12 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:46:12 np0005629333 podman[415066]: 2026-02-25 13:46:12.74557778 +0000 UTC m=+0.327219235 container init 5d2fc6a2ed0d8defc3ceeee0a2eb1ee75459024c3bb1c0cadbf54eb1685cd06c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 08:46:12 np0005629333 podman[415066]: 2026-02-25 13:46:12.753683083 +0000 UTC m=+0.335324488 container start 5d2fc6a2ed0d8defc3ceeee0a2eb1ee75459024c3bb1c0cadbf54eb1685cd06c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:46:12 np0005629333 thirsty_lalande[415083]: 167 167
Feb 25 08:46:12 np0005629333 systemd[1]: libpod-5d2fc6a2ed0d8defc3ceeee0a2eb1ee75459024c3bb1c0cadbf54eb1685cd06c.scope: Deactivated successfully.
Feb 25 08:46:12 np0005629333 podman[415066]: 2026-02-25 13:46:12.812493811 +0000 UTC m=+0.394135216 container attach 5d2fc6a2ed0d8defc3ceeee0a2eb1ee75459024c3bb1c0cadbf54eb1685cd06c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lalande, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:46:12 np0005629333 podman[415066]: 2026-02-25 13:46:12.81313427 +0000 UTC m=+0.394775675 container died 5d2fc6a2ed0d8defc3ceeee0a2eb1ee75459024c3bb1c0cadbf54eb1685cd06c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lalande, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:46:12 np0005629333 systemd[1]: var-lib-containers-storage-overlay-38f7ad349a542bde433583cbf24f3251ae1752ef23060cc11db7ca916947a1d9-merged.mount: Deactivated successfully.
Feb 25 08:46:13 np0005629333 podman[415066]: 2026-02-25 13:46:13.085975523 +0000 UTC m=+0.667616918 container remove 5d2fc6a2ed0d8defc3ceeee0a2eb1ee75459024c3bb1c0cadbf54eb1685cd06c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lalande, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 08:46:13 np0005629333 systemd[1]: libpod-conmon-5d2fc6a2ed0d8defc3ceeee0a2eb1ee75459024c3bb1c0cadbf54eb1685cd06c.scope: Deactivated successfully.
Feb 25 08:46:13 np0005629333 podman[415108]: 2026-02-25 13:46:13.281329641 +0000 UTC m=+0.034682686 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:46:13 np0005629333 podman[415108]: 2026-02-25 13:46:13.3843896 +0000 UTC m=+0.137742665 container create 3c8ed3f30844eb650fad6564274abc7704a947ab85cfc1c3521d93af11186e01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wright, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 08:46:13 np0005629333 systemd[1]: Started libpod-conmon-3c8ed3f30844eb650fad6564274abc7704a947ab85cfc1c3521d93af11186e01.scope.
Feb 25 08:46:13 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:46:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd55e2588934be1968233ea146fd72fea9c709b0f98a4f465cc70460e4caac8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:46:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd55e2588934be1968233ea146fd72fea9c709b0f98a4f465cc70460e4caac8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:46:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd55e2588934be1968233ea146fd72fea9c709b0f98a4f465cc70460e4caac8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:46:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd55e2588934be1968233ea146fd72fea9c709b0f98a4f465cc70460e4caac8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:46:13 np0005629333 podman[415108]: 2026-02-25 13:46:13.632848893 +0000 UTC m=+0.386201958 container init 3c8ed3f30844eb650fad6564274abc7704a947ab85cfc1c3521d93af11186e01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 08:46:13 np0005629333 podman[415108]: 2026-02-25 13:46:13.639843214 +0000 UTC m=+0.393196249 container start 3c8ed3f30844eb650fad6564274abc7704a947ab85cfc1c3521d93af11186e01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wright, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 08:46:13 np0005629333 podman[415108]: 2026-02-25 13:46:13.650388027 +0000 UTC m=+0.403741102 container attach 3c8ed3f30844eb650fad6564274abc7704a947ab85cfc1c3521d93af11186e01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Feb 25 08:46:13 np0005629333 nova_compute[244014]: 2026-02-25 13:46:13.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:46:13 np0005629333 brave_wright[415125]: {
Feb 25 08:46:13 np0005629333 brave_wright[415125]:    "0": [
Feb 25 08:46:13 np0005629333 brave_wright[415125]:        {
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "devices": [
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "/dev/loop3"
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            ],
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "lv_name": "ceph_lv0",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "lv_size": "21470642176",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "name": "ceph_lv0",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "tags": {
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.cluster_name": "ceph",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.crush_device_class": "",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.encrypted": "0",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.objectstore": "bluestore",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.osd_id": "0",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.type": "block",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.vdo": "0",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.with_tpm": "0"
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            },
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "type": "block",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "vg_name": "ceph_vg0"
Feb 25 08:46:13 np0005629333 brave_wright[415125]:        }
Feb 25 08:46:13 np0005629333 brave_wright[415125]:    ],
Feb 25 08:46:13 np0005629333 brave_wright[415125]:    "1": [
Feb 25 08:46:13 np0005629333 brave_wright[415125]:        {
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "devices": [
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "/dev/loop4"
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            ],
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "lv_name": "ceph_lv1",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "lv_size": "21470642176",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "name": "ceph_lv1",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "tags": {
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.cluster_name": "ceph",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.crush_device_class": "",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.encrypted": "0",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.objectstore": "bluestore",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.osd_id": "1",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.type": "block",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.vdo": "0",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.with_tpm": "0"
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            },
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "type": "block",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "vg_name": "ceph_vg1"
Feb 25 08:46:13 np0005629333 brave_wright[415125]:        }
Feb 25 08:46:13 np0005629333 brave_wright[415125]:    ],
Feb 25 08:46:13 np0005629333 brave_wright[415125]:    "2": [
Feb 25 08:46:13 np0005629333 brave_wright[415125]:        {
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "devices": [
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "/dev/loop5"
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            ],
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "lv_name": "ceph_lv2",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "lv_size": "21470642176",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "name": "ceph_lv2",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "tags": {
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.cluster_name": "ceph",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.crush_device_class": "",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.encrypted": "0",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.objectstore": "bluestore",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.osd_id": "2",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.type": "block",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.vdo": "0",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:                "ceph.with_tpm": "0"
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            },
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "type": "block",
Feb 25 08:46:13 np0005629333 brave_wright[415125]:            "vg_name": "ceph_vg2"
Feb 25 08:46:13 np0005629333 brave_wright[415125]:        }
Feb 25 08:46:13 np0005629333 brave_wright[415125]:    ]
Feb 25 08:46:13 np0005629333 brave_wright[415125]: }
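The JSON dump above is `ceph-volume lvm list --format json` output: a map of OSD id to logical volumes, with the authoritative metadata carried in the lv tags. A minimal parse of a saved copy (the filename is hypothetical):

```python
import json

with open("ceph-volume-lvm-list.json") as f:  # hypothetical saved copy of the dump above
    listing = json.load(f)

for osd_id, lvs in sorted(listing.items()):
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
              f"fsid={tags['ceph.osd_fsid']} ({tags['ceph.objectstore']})")
```

Against the dump above this prints one line per OSD, e.g. osd.0 on /dev/loop3 backed by /dev/ceph_vg0/ceph_lv0.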
Feb 25 08:46:13 np0005629333 systemd[1]: libpod-3c8ed3f30844eb650fad6564274abc7704a947ab85cfc1c3521d93af11186e01.scope: Deactivated successfully.
Feb 25 08:46:13 np0005629333 podman[415108]: 2026-02-25 13:46:13.938489328 +0000 UTC m=+0.691842383 container died 3c8ed3f30844eb650fad6564274abc7704a947ab85cfc1c3521d93af11186e01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 08:46:14 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5fd55e2588934be1968233ea146fd72fea9c709b0f98a4f465cc70460e4caac8-merged.mount: Deactivated successfully.
Feb 25 08:46:14 np0005629333 podman[415108]: 2026-02-25 13:46:14.26005627 +0000 UTC m=+1.013409335 container remove 3c8ed3f30844eb650fad6564274abc7704a947ab85cfc1c3521d93af11186e01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wright, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:46:14 np0005629333 systemd[1]: libpod-conmon-3c8ed3f30844eb650fad6564274abc7704a947ab85cfc1c3521d93af11186e01.scope: Deactivated successfully.
Feb 25 08:46:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3790: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:14 np0005629333 nova_compute[244014]: 2026-02-25 13:46:14.334 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:46:14 np0005629333 podman[415210]: 2026-02-25 13:46:14.775797956 +0000 UTC m=+0.038997521 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:46:14 np0005629333 podman[415210]: 2026-02-25 13:46:14.886185424 +0000 UTC m=+0.149384999 container create e49dca05058f6f441842334a20038af40c3b6d0a87c288ac2cb70246a18cf0b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:46:14 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:46:15 np0005629333 systemd[1]: Started libpod-conmon-e49dca05058f6f441842334a20038af40c3b6d0a87c288ac2cb70246a18cf0b4.scope.
Feb 25 08:46:15 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:46:15 np0005629333 podman[415210]: 2026-02-25 13:46:15.210984709 +0000 UTC m=+0.474184304 container init e49dca05058f6f441842334a20038af40c3b6d0a87c288ac2cb70246a18cf0b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_diffie, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:46:15 np0005629333 podman[415210]: 2026-02-25 13:46:15.220812752 +0000 UTC m=+0.484012337 container start e49dca05058f6f441842334a20038af40c3b6d0a87c288ac2cb70246a18cf0b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 08:46:15 np0005629333 blissful_diffie[415227]: 167 167
Feb 25 08:46:15 np0005629333 systemd[1]: libpod-e49dca05058f6f441842334a20038af40c3b6d0a87c288ac2cb70246a18cf0b4.scope: Deactivated successfully.
Feb 25 08:46:15 np0005629333 podman[415210]: 2026-02-25 13:46:15.241055343 +0000 UTC m=+0.504254988 container attach e49dca05058f6f441842334a20038af40c3b6d0a87c288ac2cb70246a18cf0b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:46:15 np0005629333 podman[415210]: 2026-02-25 13:46:15.242725321 +0000 UTC m=+0.505924876 container died e49dca05058f6f441842334a20038af40c3b6d0a87c288ac2cb70246a18cf0b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_diffie, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:46:15 np0005629333 systemd[1]: var-lib-containers-storage-overlay-210ed1822ca5176939e49be21ff0c23fcfb31b7a4fda6c14394a5b03920fb4dc-merged.mount: Deactivated successfully.
Feb 25 08:46:15 np0005629333 podman[415210]: 2026-02-25 13:46:15.436455842 +0000 UTC m=+0.699655407 container remove e49dca05058f6f441842334a20038af40c3b6d0a87c288ac2cb70246a18cf0b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:46:15 np0005629333 systemd[1]: libpod-conmon-e49dca05058f6f441842334a20038af40c3b6d0a87c288ac2cb70246a18cf0b4.scope: Deactivated successfully.
Feb 25 08:46:15 np0005629333 podman[415253]: 2026-02-25 13:46:15.579588512 +0000 UTC m=+0.027322556 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:46:15 np0005629333 podman[415253]: 2026-02-25 13:46:15.648889101 +0000 UTC m=+0.096623135 container create a3231d1790aecb48fdd4a766888dc0618493629f7bd4db6425c8a5f6ba25e260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:46:15 np0005629333 systemd[1]: Started libpod-conmon-a3231d1790aecb48fdd4a766888dc0618493629f7bd4db6425c8a5f6ba25e260.scope.
Feb 25 08:46:15 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:46:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d89af7b82d723e33117f79798ef770d68d4afbc58a820f3abacd179239575d8c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:46:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d89af7b82d723e33117f79798ef770d68d4afbc58a820f3abacd179239575d8c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:46:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d89af7b82d723e33117f79798ef770d68d4afbc58a820f3abacd179239575d8c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:46:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d89af7b82d723e33117f79798ef770d68d4afbc58a820f3abacd179239575d8c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:46:15 np0005629333 podman[415253]: 2026-02-25 13:46:15.817137322 +0000 UTC m=+0.264871356 container init a3231d1790aecb48fdd4a766888dc0618493629f7bd4db6425c8a5f6ba25e260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Feb 25 08:46:15 np0005629333 podman[415253]: 2026-02-25 13:46:15.825326317 +0000 UTC m=+0.273060321 container start a3231d1790aecb48fdd4a766888dc0618493629f7bd4db6425c8a5f6ba25e260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:46:15 np0005629333 podman[415253]: 2026-02-25 13:46:15.852662591 +0000 UTC m=+0.300396595 container attach a3231d1790aecb48fdd4a766888dc0618493629f7bd4db6425c8a5f6ba25e260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:46:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3791: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:16 np0005629333 lvm[415348]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:46:16 np0005629333 lvm[415348]: VG ceph_vg1 finished
Feb 25 08:46:16 np0005629333 lvm[415349]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:46:16 np0005629333 lvm[415349]: VG ceph_vg0 finished
Feb 25 08:46:16 np0005629333 lvm[415351]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:46:16 np0005629333 lvm[415351]: VG ceph_vg2 finished
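The lvm[] lines above are event-based autoactivation: as each loop PV is observed, pvscan marks it online and, once a VG's PV set is complete, finishes activating the VG. PV-to-VG membership can be confirmed via lvm's JSON reporting; the "report"/"pv" layout below is an assumption about the lvm2 report format:

```python
import json
import subprocess

out = subprocess.run(
    ["pvs", "-o", "pv_name,vg_name", "--reportformat", "json"],
    capture_output=True, text=True, check=True,
)
for pv in json.loads(out.stdout)["report"][0]["pv"]:
    print(pv["pv_name"], "->", pv["vg_name"])  # e.g. /dev/loop3 -> ceph_vg0
```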
Feb 25 08:46:16 np0005629333 pedantic_carson[415270]: {}
Feb 25 08:46:16 np0005629333 systemd[1]: libpod-a3231d1790aecb48fdd4a766888dc0618493629f7bd4db6425c8a5f6ba25e260.scope: Deactivated successfully.
Feb 25 08:46:16 np0005629333 systemd[1]: libpod-a3231d1790aecb48fdd4a766888dc0618493629f7bd4db6425c8a5f6ba25e260.scope: Consumed 1.213s CPU time.
Feb 25 08:46:16 np0005629333 podman[415253]: 2026-02-25 13:46:16.624306455 +0000 UTC m=+1.072040449 container died a3231d1790aecb48fdd4a766888dc0618493629f7bd4db6425c8a5f6ba25e260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_carson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:46:16 np0005629333 nova_compute[244014]: 2026-02-25 13:46:16.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:46:16 np0005629333 nova_compute[244014]: 2026-02-25 13:46:16.900 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:46:16 np0005629333 nova_compute[244014]: 2026-02-25 13:46:16.901 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:46:16 np0005629333 nova_compute[244014]: 2026-02-25 13:46:16.902 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:46:16 np0005629333 nova_compute[244014]: 2026-02-25 13:46:16.902 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 08:46:16 np0005629333 nova_compute[244014]: 2026-02-25 13:46:16.903 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:46:16 np0005629333 nova_compute[244014]: 2026-02-25 13:46:16.946 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:46:17 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d89af7b82d723e33117f79798ef770d68d4afbc58a820f3abacd179239575d8c-merged.mount: Deactivated successfully.
Feb 25 08:46:17 np0005629333 podman[415253]: 2026-02-25 13:46:17.438908142 +0000 UTC m=+1.886642176 container remove a3231d1790aecb48fdd4a766888dc0618493629f7bd4db6425c8a5f6ba25e260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_carson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:46:17 np0005629333 systemd[1]: libpod-conmon-a3231d1790aecb48fdd4a766888dc0618493629f7bd4db6425c8a5f6ba25e260.scope: Deactivated successfully.
Feb 25 08:46:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:46:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1220553453' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:46:17 np0005629333 nova_compute[244014]: 2026-02-25 13:46:17.502 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
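The round trip above is nova's resource tracker shelling out to `ceph df` (dispatched by the mon as client.openstack) to size the RBD-backed disk pool; it returned in 0.599 s. A sketch reading the same figures; the stats field names (total_bytes, total_avail_bytes) are my assumption about the df JSON schema:

```python
import json
import subprocess

out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True,
)
stats = json.loads(out.stdout)["stats"]
GiB = 1024 ** 3
print(f"{stats['total_avail_bytes'] / GiB:.0f} GiB avail "
      f"of {stats['total_bytes'] / GiB:.0f} GiB")  # ~59 of 60 GiB per the pgmap lines
```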
Feb 25 08:46:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:46:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:46:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:46:17 np0005629333 nova_compute[244014]: 2026-02-25 13:46:17.695 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:46:17 np0005629333 nova_compute[244014]: 2026-02-25 13:46:17.697 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3494MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:46:17 np0005629333 nova_compute[244014]: 2026-02-25 13:46:17.697 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:46:17 np0005629333 nova_compute[244014]: 2026-02-25 13:46:17.697 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:46:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:46:17 np0005629333 nova_compute[244014]: 2026-02-25 13:46:17.934 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:46:17 np0005629333 nova_compute[244014]: 2026-02-25 13:46:17.935 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:46:18 np0005629333 nova_compute[244014]: 2026-02-25 13:46:18.054 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 25 08:46:18 np0005629333 nova_compute[244014]: 2026-02-25 13:46:18.152 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 25 08:46:18 np0005629333 nova_compute[244014]: 2026-02-25 13:46:18.153 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
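Placement derives schedulable capacity per resource class as (total - reserved) * allocation_ratio; applied to the inventory above (a worked check, not nova code):

    def capacity(total, reserved, allocation_ratio):
        # Effective capacity placement advertises for a resource class.
        return (total - reserved) * allocation_ratio

    print(capacity(8, 0, 4.0))       # VCPU      -> 32.0
    print(capacity(7679, 512, 1.0))  # MEMORY_MB -> 7167.0
    print(capacity(59, 1, 0.9))      # DISK_GB   -> 52.2 (modulo float rounding)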
Feb 25 08:46:18 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:46:18 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:46:18 np0005629333 nova_compute[244014]: 2026-02-25 13:46:18.179 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 25 08:46:18 np0005629333 nova_compute[244014]: 2026-02-25 13:46:18.199 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 25 08:46:18 np0005629333 nova_compute[244014]: 2026-02-25 13:46:18.217 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:46:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3792: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:46:18 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/322719798' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:46:18 np0005629333 nova_compute[244014]: 2026-02-25 13:46:18.802 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:46:18 np0005629333 nova_compute[244014]: 2026-02-25 13:46:18.812 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:46:18 np0005629333 nova_compute[244014]: 2026-02-25 13:46:18.830 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:46:18 np0005629333 nova_compute[244014]: 2026-02-25 13:46:18.832 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:46:18 np0005629333 nova_compute[244014]: 2026-02-25 13:46:18.832 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:46:19 np0005629333 nova_compute[244014]: 2026-02-25 13:46:19.336 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:46:19 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #186. Immutable memtables: 0.
Feb 25 08:46:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:19.564902) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:46:19 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 186
Feb 25 08:46:19 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027179564957, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 2054, "num_deletes": 251, "total_data_size": 3476039, "memory_usage": 3521168, "flush_reason": "Manual Compaction"}
Feb 25 08:46:19 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #187: started
Feb 25 08:46:19 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027179699351, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 187, "file_size": 3397976, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77131, "largest_seqno": 79184, "table_properties": {"data_size": 3388592, "index_size": 5941, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18718, "raw_average_key_size": 20, "raw_value_size": 3369990, "raw_average_value_size": 3615, "num_data_blocks": 264, "num_entries": 932, "num_filter_entries": 932, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772026953, "oldest_key_time": 1772026953, "file_creation_time": 1772027179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 187, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:46:19 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 134514 microseconds, and 10322 cpu microseconds.
Feb 25 08:46:19 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:46:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:19.699411) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #187: 3397976 bytes OK
Feb 25 08:46:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:19.699442) [db/memtable_list.cc:519] [default] Level-0 commit table #187 started
Feb 25 08:46:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:19.852921) [db/memtable_list.cc:722] [default] Level-0 commit table #187: memtable #1 done
Feb 25 08:46:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:19.852995) EVENT_LOG_v1 {"time_micros": 1772027179852980, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:46:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:19.853037) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:46:19 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 3467434, prev total WAL file size 3467434, number of live WAL files 2.
Feb 25 08:46:19 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000183.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
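The EVENT_LOG_v1 entries carry machine-readable JSON after a fixed marker, which makes the mon's RocksDB activity easy to mine straight out of the journal; a small extraction sketch (marker handling inferred from the lines above):

    import json

    def rocksdb_events(lines):
        # Yield the JSON payload of every "... EVENT_LOG_v1 {...}" line.
        marker = 'EVENT_LOG_v1 '
        for line in lines:
            _, _, payload = line.partition(marker)
            if payload.startswith('{'):
                yield json.loads(payload)

    # e.g. total bytes written by flushes/compactions in a captured log:
    # sum(e['file_size'] for e in rocksdb_events(open('mon.log'))
    #     if e.get('event') == 'table_file_creation')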
Feb 25 08:46:19 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:19.854534) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
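The manual-compaction key bounds on the preceding line are hex-encoded monitor store keys; decoding them shows this pass covers a range of paxos transaction keys (a quick check):

    print(bytes.fromhex('7061786F730037373831'))  # b'paxos\x007781'
    print(bytes.fromhex('7061786F730038303333'))  # b'paxos\x008033'
    # The later compaction jobs cover b'logm\x00...' and b'mgrstat\x00...'
    # key ranges the same way.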
Feb 25 08:46:19 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:46:19 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [187(3318KB)], [185(10MB)]
Feb 25 08:46:19 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027179854603, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [187], "files_L6": [185], "score": -1, "input_data_size": 14082323, "oldest_snapshot_seqno": -1}
Feb 25 08:46:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #188: 9608 keys, 12298268 bytes, temperature: kUnknown
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027180062359, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 188, "file_size": 12298268, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12236832, "index_size": 36299, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24069, "raw_key_size": 251062, "raw_average_key_size": 26, "raw_value_size": 12067981, "raw_average_value_size": 1256, "num_data_blocks": 1406, "num_entries": 9608, "num_filter_entries": 9608, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772027179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.062992) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 12298268 bytes
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.084152) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 67.7 rd, 59.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.2 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 10122, records dropped: 514 output_compression: NoCompression
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.084198) EVENT_LOG_v1 {"time_micros": 1772027180084175, "job": 116, "event": "compaction_finished", "compaction_time_micros": 208110, "compaction_time_cpu_micros": 51090, "output_level": 6, "num_output_files": 1, "total_output_size": 12298268, "num_input_records": 10122, "num_output_records": 9608, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
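The amplification figures in the compaction summary follow directly from the byte counts in the surrounding events: read-write-amplify is (bytes read + bytes written) / L0 input, write-amplify is bytes written / L0 input. Reproducing job 116's numbers:

    l0_in = 3_397_976       # table #187, the flush output fed into the compaction
    total_in = 14_082_323   # input_data_size from the compaction_started event
    out = 12_298_268        # total_output_size from compaction_finished

    print(round((total_in + out) / l0_in, 1))  # read-write-amplify -> 7.8
    print(round(out / l0_in, 1))               # write-amplify      -> 3.6

The same arithmetic reproduces job 118's extreme 1836.2/914.8 figures below, where the L0 input is only 13362 bytes.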
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000187.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027180085916, "job": 116, "event": "table_file_deletion", "file_number": 187}
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027180088231, "job": 116, "event": "table_file_deletion", "file_number": 185}
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:19.854396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.088475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.088484) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.088486) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.088489) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.088492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:46:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3793: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #189. Immutable memtables: 0.
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.322227) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 189
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027180322298, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 260, "num_deletes": 255, "total_data_size": 13120, "memory_usage": 19768, "flush_reason": "Manual Compaction"}
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #190: started
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027180400060, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 190, "file_size": 13362, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79185, "largest_seqno": 79444, "table_properties": {"data_size": 11545, "index_size": 49, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4481, "raw_average_key_size": 17, "raw_value_size": 8118, "raw_average_value_size": 31, "num_data_blocks": 2, "num_entries": 260, "num_filter_entries": 260, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772027180, "oldest_key_time": 1772027180, "file_creation_time": 1772027180, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 190, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 77923 microseconds, and 1678 cpu microseconds.
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.400146) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #190: 13362 bytes OK
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.400183) [db/memtable_list.cc:519] [default] Level-0 commit table #190 started
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.423225) [db/memtable_list.cc:722] [default] Level-0 commit table #190: memtable #1 done
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.423291) EVENT_LOG_v1 {"time_micros": 1772027180423278, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.423336) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 11069, prev total WAL file size 11069, number of live WAL files 2.
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000186.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.424044) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353137' seq:72057594037927935, type:22 .. '6C6F676D0033373638' seq:0, type:0; will stop at (end)
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [190(13KB)], [188(11MB)]
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027180424092, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [190], "files_L6": [188], "score": -1, "input_data_size": 12311630, "oldest_snapshot_seqno": -1}
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #191: 9354 keys, 12223722 bytes, temperature: kUnknown
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027180689039, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 191, "file_size": 12223722, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12163303, "index_size": 35922, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23429, "raw_key_size": 246720, "raw_average_key_size": 26, "raw_value_size": 11998189, "raw_average_value_size": 1282, "num_data_blocks": 1388, "num_entries": 9354, "num_filter_entries": 9354, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772027180, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.689801) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 12223722 bytes
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.718609) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 46.4 rd, 46.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 11.7 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(1836.2) write-amplify(914.8) OK, records in: 9868, records dropped: 514 output_compression: NoCompression
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.718668) EVENT_LOG_v1 {"time_micros": 1772027180718643, "job": 118, "event": "compaction_finished", "compaction_time_micros": 265075, "compaction_time_cpu_micros": 42579, "output_level": 6, "num_output_files": 1, "total_output_size": 12223722, "num_input_records": 9868, "num_output_records": 9354, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000190.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027180718978, "job": 118, "event": "table_file_deletion", "file_number": 190}
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027180721624, "job": 118, "event": "table_file_deletion", "file_number": 188}
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.423951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.721757) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.721766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.721769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.721773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:46:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:20.721776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:46:21 np0005629333 nova_compute[244014]: 2026-02-25 13:46:21.949 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:46:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3794: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3795: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:24 np0005629333 nova_compute[244014]: 2026-02-25 13:46:24.337 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:46:24 np0005629333 nova_compute[244014]: 2026-02-25 13:46:24.833 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:46:24 np0005629333 nova_compute[244014]: 2026-02-25 13:46:24.834 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:46:24 np0005629333 nova_compute[244014]: 2026-02-25 13:46:24.834 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:46:24 np0005629333 nova_compute[244014]: 2026-02-25 13:46:24.850 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
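The _heal_instance_info_cache, _check_instance_build_time, _poll_* and _reclaim_queued_deletes entries in this stretch are all oslo.service periodic tasks; the registration pattern looks roughly like this (spacing value illustrative, and PeriodicTasks expects a ConfigOpts instance at init time):

    from oslo_service import periodic_task

    class ComputeManagerSketch(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            # run_periodic_tasks logs the "Running periodic task ..." line,
            # then the task body decides whether there is any work to do.
            pass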
Feb 25 08:46:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:46:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3796: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:26 np0005629333 nova_compute[244014]: 2026-02-25 13:46:26.952 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:46:27 np0005629333 nova_compute[244014]: 2026-02-25 13:46:27.888 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:46:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3797: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:29 np0005629333 nova_compute[244014]: 2026-02-25 13:46:29.339 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:46:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:46:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3798: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:30 np0005629333 nova_compute[244014]: 2026-02-25 13:46:30.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:46:30 np0005629333 nova_compute[244014]: 2026-02-25 13:46:30.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 08:46:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:46:31
Feb 25 08:46:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:46:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:46:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'default.rgw.meta', 'vms', 'images', '.mgr', 'backups', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Feb 25 08:46:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:46:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:46:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:46:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:46:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:46:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:46:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:46:31 np0005629333 nova_compute[244014]: 2026-02-25 13:46:31.953 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:46:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3799: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:46:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:46:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:46:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:46:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:46:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:46:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:46:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:46:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:46:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:46:33 np0005629333 nova_compute[244014]: 2026-02-25 13:46:33.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:46:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3800: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:34 np0005629333 nova_compute[244014]: 2026-02-25 13:46:34.343 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:46:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:46:35 np0005629333 nova_compute[244014]: 2026-02-25 13:46:35.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:46:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3801: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:36 np0005629333 podman[415437]: 2026-02-25 13:46:36.735413545 +0000 UTC m=+0.070616568 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 08:46:36 np0005629333 podman[415438]: 2026-02-25 13:46:36.78400161 +0000 UTC m=+0.118095941 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
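These health_status=healthy events come from podman's healthcheck timer executing the configured test command ('/openstack/healthcheck' per the config_data above). The same check can be run by hand; a sketch via the CLI (container name taken from the event, exit semantics assumed: 0 when the check passes):

    import subprocess

    # `podman healthcheck run` executes the container's configured
    # healthcheck and exits 0 if it reports healthy.
    subprocess.run(['podman', 'healthcheck', 'run', 'ovn_controller'],
                   check=True)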
Feb 25 08:46:36 np0005629333 nova_compute[244014]: 2026-02-25 13:46:36.956 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:46:37 np0005629333 nova_compute[244014]: 2026-02-25 13:46:37.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:46:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3802: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:39 np0005629333 nova_compute[244014]: 2026-02-25 13:46:39.345 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:46:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:46:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3803: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:40 np0005629333 nova_compute[244014]: 2026-02-25 13:46:40.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:46:41 np0005629333 nova_compute[244014]: 2026-02-25 13:46:41.959 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:46:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3804: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:46:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:46:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:46:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:46:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:46:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:46:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:46:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:46:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:46:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:46:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:46:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:46:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:46:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:46:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:46:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:46:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:46:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:46:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:46:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:46:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:46:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:46:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
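In the autoscaler lines above, each pg target is the pool's share of raw space times its bias times a cluster-wide PG budget, which works out to exactly 300 here (plausibly mon_target_pg_per_osd=100 across 3 OSDs, though that is inferred from the ratios, not queried); the tiny results are then quantized, so every pool keeps its current pg_num. Re-deriving two of the lines:

    PG_BUDGET = 300  # inferred multiplier; matches every pool in this log

    def pg_target(usage_ratio, bias=1.0):
        return usage_ratio * bias * PG_BUDGET

    print(pg_target(1.73878357684759e-05))         # vms                -> 0.005216350730542769
    print(pg_target(1.3916366864300228e-06, 4.0))  # cephfs.cephfs.meta -> 0.0016699640237160273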
Feb 25 08:46:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3805: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:44 np0005629333 nova_compute[244014]: 2026-02-25 13:46:44.347 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:46:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:46:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3806: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:46 np0005629333 nova_compute[244014]: 2026-02-25 13:46:46.960 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:46:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:46:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2592143154' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:46:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:46:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2592143154' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:46:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3807: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:49 np0005629333 nova_compute[244014]: 2026-02-25 13:46:49.349 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #192. Immutable memtables: 0.
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.198471) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 192
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027210198513, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 473, "num_deletes": 250, "total_data_size": 437336, "memory_usage": 446528, "flush_reason": "Manual Compaction"}
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #193: started
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027210204576, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 193, "file_size": 313249, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79445, "largest_seqno": 79917, "table_properties": {"data_size": 310811, "index_size": 537, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6642, "raw_average_key_size": 20, "raw_value_size": 305835, "raw_average_value_size": 932, "num_data_blocks": 25, "num_entries": 328, "num_filter_entries": 328, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772027180, "oldest_key_time": 1772027180, "file_creation_time": 1772027210, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 193, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 6157 microseconds, and 1463 cpu microseconds.
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.204626) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #193: 313249 bytes OK
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.204648) [db/memtable_list.cc:519] [default] Level-0 commit table #193 started
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.211580) [db/memtable_list.cc:722] [default] Level-0 commit table #193: memtable #1 done
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.211637) EVENT_LOG_v1 {"time_micros": 1772027210211627, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.211672) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 434543, prev total WAL file size 434543, number of live WAL files 2.
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000189.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.212252) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323533' seq:72057594037927935, type:22 .. '6D6772737461740033353034' seq:0, type:0; will stop at (end)
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [193(305KB)], [191(11MB)]
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027210212515, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [193], "files_L6": [191], "score": -1, "input_data_size": 12536971, "oldest_snapshot_seqno": -1}
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #194: 9184 keys, 9327679 bytes, temperature: kUnknown
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027210310335, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 194, "file_size": 9327679, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9272954, "index_size": 30676, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 243357, "raw_average_key_size": 26, "raw_value_size": 9115295, "raw_average_value_size": 992, "num_data_blocks": 1170, "num_entries": 9184, "num_filter_entries": 9184, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772027210, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:46:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3808: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.310782) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 9327679 bytes
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.324420) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.1 rd, 95.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.7 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(69.8) write-amplify(29.8) OK, records in: 9682, records dropped: 498 output_compression: NoCompression
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.324465) EVENT_LOG_v1 {"time_micros": 1772027210324445, "job": 120, "event": "compaction_finished", "compaction_time_micros": 97874, "compaction_time_cpu_micros": 38178, "output_level": 6, "num_output_files": 1, "total_output_size": 9327679, "num_input_records": 9682, "num_output_records": 9184, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000193.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027210324814, "job": 120, "event": "table_file_deletion", "file_number": 193}
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027210327091, "job": 120, "event": "table_file_deletion", "file_number": 191}
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.212119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.327214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.327238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.327241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.327244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:46:50 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:46:50.327247) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:46:51 np0005629333 nova_compute[244014]: 2026-02-25 13:46:51.963 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:46:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3809: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3810: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:54 np0005629333 nova_compute[244014]: 2026-02-25 13:46:54.352 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:46:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:46:55.091 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:46:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:46:55.092 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:46:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:46:55.092 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:46:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:46:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3811: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:56 np0005629333 nova_compute[244014]: 2026-02-25 13:46:56.965 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:46:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3812: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:46:58 np0005629333 nova_compute[244014]: 2026-02-25 13:46:58.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:46:59 np0005629333 nova_compute[244014]: 2026-02-25 13:46:59.355 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:47:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:47:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3813: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:47:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:47:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:47:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:47:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:47:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:47:01 np0005629333 nova_compute[244014]: 2026-02-25 13:47:01.967 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:47:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3814: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3815: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:04 np0005629333 nova_compute[244014]: 2026-02-25 13:47:04.357 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:47:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:47:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3816: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:06 np0005629333 nova_compute[244014]: 2026-02-25 13:47:06.969 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:47:07 np0005629333 podman[415485]: 2026-02-25 13:47:07.70449063 +0000 UTC m=+0.049843972 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:47:07 np0005629333 podman[415486]: 2026-02-25 13:47:07.736066327 +0000 UTC m=+0.078681590 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 08:47:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3817: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:09 np0005629333 nova_compute[244014]: 2026-02-25 13:47:09.363 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:47:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:47:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3818: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:11 np0005629333 nova_compute[244014]: 2026-02-25 13:47:11.973 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:47:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3819: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:13 np0005629333 nova_compute[244014]: 2026-02-25 13:47:13.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:47:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3820: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:14 np0005629333 nova_compute[244014]: 2026-02-25 13:47:14.405 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:47:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:47:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3821: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:16 np0005629333 nova_compute[244014]: 2026-02-25 13:47:16.977 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:47:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3822: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:18 np0005629333 podman[415623]: 2026-02-25 13:47:18.538802612 +0000 UTC m=+0.112825920 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:47:18 np0005629333 podman[415623]: 2026-02-25 13:47:18.675177997 +0000 UTC m=+0.249201335 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 08:47:18 np0005629333 nova_compute[244014]: 2026-02-25 13:47:18.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:47:18 np0005629333 nova_compute[244014]: 2026-02-25 13:47:18.918 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:47:18 np0005629333 nova_compute[244014]: 2026-02-25 13:47:18.918 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:47:18 np0005629333 nova_compute[244014]: 2026-02-25 13:47:18.919 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:47:18 np0005629333 nova_compute[244014]: 2026-02-25 13:47:18.919 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 08:47:18 np0005629333 nova_compute[244014]: 2026-02-25 13:47:18.919 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:47:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:47:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3856843576' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:47:19 np0005629333 nova_compute[244014]: 2026-02-25 13:47:19.465 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:47:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:47:19 np0005629333 nova_compute[244014]: 2026-02-25 13:47:19.485 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:47:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:47:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:47:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:47:19 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:47:19 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:47:19 np0005629333 nova_compute[244014]: 2026-02-25 13:47:19.661 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 08:47:19 np0005629333 nova_compute[244014]: 2026-02-25 13:47:19.663 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3506MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 08:47:19 np0005629333 nova_compute[244014]: 2026-02-25 13:47:19.663 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:47:19 np0005629333 nova_compute[244014]: 2026-02-25 13:47:19.664 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:47:19 np0005629333 nova_compute[244014]: 2026-02-25 13:47:19.730 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 08:47:19 np0005629333 nova_compute[244014]: 2026-02-25 13:47:19.730 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 08:47:19 np0005629333 nova_compute[244014]: 2026-02-25 13:47:19.749 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:47:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:47:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:47:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:47:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:47:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:47:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:47:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:47:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:47:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:47:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:47:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:47:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:47:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:47:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3823: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:47:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2918798991' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:47:20 np0005629333 nova_compute[244014]: 2026-02-25 13:47:20.404 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:47:20 np0005629333 nova_compute[244014]: 2026-02-25 13:47:20.412 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:47:20 np0005629333 nova_compute[244014]: 2026-02-25 13:47:20.429 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 08:47:20 np0005629333 nova_compute[244014]: 2026-02-25 13:47:20.431 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 08:47:20 np0005629333 nova_compute[244014]: 2026-02-25 13:47:20.431 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:47:20 np0005629333 podman[415997]: 2026-02-25 13:47:20.555489769 +0000 UTC m=+0.044310593 container create 653745aee1c5e0d06e81fc4b4e60ff5bd631741310cd7b6d6318e44b09d4bce6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_liskov, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:47:20 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:47:20 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:47:20 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:47:20 np0005629333 systemd[1]: Started libpod-conmon-653745aee1c5e0d06e81fc4b4e60ff5bd631741310cd7b6d6318e44b09d4bce6.scope.
Feb 25 08:47:20 np0005629333 podman[415997]: 2026-02-25 13:47:20.533398835 +0000 UTC m=+0.022219699 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:47:20 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:47:20 np0005629333 podman[415997]: 2026-02-25 13:47:20.688203939 +0000 UTC m=+0.177024863 container init 653745aee1c5e0d06e81fc4b4e60ff5bd631741310cd7b6d6318e44b09d4bce6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_liskov, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:47:20 np0005629333 podman[415997]: 2026-02-25 13:47:20.698270528 +0000 UTC m=+0.187091392 container start 653745aee1c5e0d06e81fc4b4e60ff5bd631741310cd7b6d6318e44b09d4bce6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:47:20 np0005629333 amazing_liskov[416013]: 167 167
Feb 25 08:47:20 np0005629333 systemd[1]: libpod-653745aee1c5e0d06e81fc4b4e60ff5bd631741310cd7b6d6318e44b09d4bce6.scope: Deactivated successfully.
Feb 25 08:47:20 np0005629333 conmon[416013]: conmon 653745aee1c5e0d06e81 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-653745aee1c5e0d06e81fc4b4e60ff5bd631741310cd7b6d6318e44b09d4bce6.scope/container/memory.events
Feb 25 08:47:20 np0005629333 podman[415997]: 2026-02-25 13:47:20.717548242 +0000 UTC m=+0.206369106 container attach 653745aee1c5e0d06e81fc4b4e60ff5bd631741310cd7b6d6318e44b09d4bce6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_liskov, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 08:47:20 np0005629333 podman[415997]: 2026-02-25 13:47:20.719465117 +0000 UTC m=+0.208285971 container died 653745aee1c5e0d06e81fc4b4e60ff5bd631741310cd7b6d6318e44b09d4bce6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 08:47:20 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e9c43c3b23dcd6678a5c4175abf03869e6bcad0cddab3da78d565412090eac95-merged.mount: Deactivated successfully.
Feb 25 08:47:20 np0005629333 podman[415997]: 2026-02-25 13:47:20.859015763 +0000 UTC m=+0.347836587 container remove 653745aee1c5e0d06e81fc4b4e60ff5bd631741310cd7b6d6318e44b09d4bce6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_liskov, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 08:47:20 np0005629333 systemd[1]: libpod-conmon-653745aee1c5e0d06e81fc4b4e60ff5bd631741310cd7b6d6318e44b09d4bce6.scope: Deactivated successfully.
Feb 25 08:47:21 np0005629333 podman[416040]: 2026-02-25 13:47:21.052071615 +0000 UTC m=+0.068196358 container create 3853e692a6eb833f0a253136da8c949a3724736a08aca128863e8a53cb2d7d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_goldwasser, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 08:47:21 np0005629333 podman[416040]: 2026-02-25 13:47:21.012446738 +0000 UTC m=+0.028571511 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:47:21 np0005629333 systemd[1]: Started libpod-conmon-3853e692a6eb833f0a253136da8c949a3724736a08aca128863e8a53cb2d7d08.scope.
Feb 25 08:47:21 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:47:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34289d3129fabf01524f372d2a828f0601aac300fd3a93b4886dc5cb4b18636/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:47:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34289d3129fabf01524f372d2a828f0601aac300fd3a93b4886dc5cb4b18636/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:47:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34289d3129fabf01524f372d2a828f0601aac300fd3a93b4886dc5cb4b18636/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:47:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34289d3129fabf01524f372d2a828f0601aac300fd3a93b4886dc5cb4b18636/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:47:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a34289d3129fabf01524f372d2a828f0601aac300fd3a93b4886dc5cb4b18636/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:47:21 np0005629333 podman[416040]: 2026-02-25 13:47:21.211598125 +0000 UTC m=+0.227722928 container init 3853e692a6eb833f0a253136da8c949a3724736a08aca128863e8a53cb2d7d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_goldwasser, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 08:47:21 np0005629333 podman[416040]: 2026-02-25 13:47:21.22048324 +0000 UTC m=+0.236607973 container start 3853e692a6eb833f0a253136da8c949a3724736a08aca128863e8a53cb2d7d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:47:21 np0005629333 podman[416040]: 2026-02-25 13:47:21.242675368 +0000 UTC m=+0.258800181 container attach 3853e692a6eb833f0a253136da8c949a3724736a08aca128863e8a53cb2d7d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_goldwasser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 08:47:21 np0005629333 romantic_goldwasser[416057]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:47:21 np0005629333 romantic_goldwasser[416057]: --> All data devices are unavailable
Feb 25 08:47:21 np0005629333 systemd[1]: libpod-3853e692a6eb833f0a253136da8c949a3724736a08aca128863e8a53cb2d7d08.scope: Deactivated successfully.
Feb 25 08:47:21 np0005629333 conmon[416057]: conmon 3853e692a6eb833f0a25 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3853e692a6eb833f0a253136da8c949a3724736a08aca128863e8a53cb2d7d08.scope/container/memory.events
Feb 25 08:47:21 np0005629333 podman[416040]: 2026-02-25 13:47:21.738463411 +0000 UTC m=+0.754588154 container died 3853e692a6eb833f0a253136da8c949a3724736a08aca128863e8a53cb2d7d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_goldwasser, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 08:47:21 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a34289d3129fabf01524f372d2a828f0601aac300fd3a93b4886dc5cb4b18636-merged.mount: Deactivated successfully.
Feb 25 08:47:21 np0005629333 podman[416040]: 2026-02-25 13:47:21.813410603 +0000 UTC m=+0.829535326 container remove 3853e692a6eb833f0a253136da8c949a3724736a08aca128863e8a53cb2d7d08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle)
Feb 25 08:47:21 np0005629333 systemd[1]: libpod-conmon-3853e692a6eb833f0a253136da8c949a3724736a08aca128863e8a53cb2d7d08.scope: Deactivated successfully.
Feb 25 08:47:21 np0005629333 nova_compute[244014]: 2026-02-25 13:47:21.981 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:47:22 np0005629333 podman[416154]: 2026-02-25 13:47:22.273721828 +0000 UTC m=+0.058758287 container create 170438083b0c48bcd11039fb266d888c85f566eaba96893959004c0340d7d048 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_raman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 08:47:22 np0005629333 systemd[1]: Started libpod-conmon-170438083b0c48bcd11039fb266d888c85f566eaba96893959004c0340d7d048.scope.
Feb 25 08:47:22 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:47:22 np0005629333 podman[416154]: 2026-02-25 13:47:22.236732696 +0000 UTC m=+0.021769165 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:47:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3824: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:22 np0005629333 podman[416154]: 2026-02-25 13:47:22.353150529 +0000 UTC m=+0.138187008 container init 170438083b0c48bcd11039fb266d888c85f566eaba96893959004c0340d7d048 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_raman, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:47:22 np0005629333 podman[416154]: 2026-02-25 13:47:22.358746489 +0000 UTC m=+0.143782958 container start 170438083b0c48bcd11039fb266d888c85f566eaba96893959004c0340d7d048 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_raman, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 08:47:22 np0005629333 nifty_raman[416171]: 167 167
Feb 25 08:47:22 np0005629333 systemd[1]: libpod-170438083b0c48bcd11039fb266d888c85f566eaba96893959004c0340d7d048.scope: Deactivated successfully.
Feb 25 08:47:22 np0005629333 podman[416154]: 2026-02-25 13:47:22.378682782 +0000 UTC m=+0.163719261 container attach 170438083b0c48bcd11039fb266d888c85f566eaba96893959004c0340d7d048 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_raman, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:47:22 np0005629333 podman[416154]: 2026-02-25 13:47:22.380385861 +0000 UTC m=+0.165422330 container died 170438083b0c48bcd11039fb266d888c85f566eaba96893959004c0340d7d048 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Feb 25 08:47:22 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1f290521a73e6315359bd24e32aeeea145a3441e0ba0f9b07aec9d134e1dd3c3-merged.mount: Deactivated successfully.
Feb 25 08:47:22 np0005629333 podman[416154]: 2026-02-25 13:47:22.505861023 +0000 UTC m=+0.290897482 container remove 170438083b0c48bcd11039fb266d888c85f566eaba96893959004c0340d7d048 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_raman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 08:47:22 np0005629333 systemd[1]: libpod-conmon-170438083b0c48bcd11039fb266d888c85f566eaba96893959004c0340d7d048.scope: Deactivated successfully.
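[annotation] The nifty_raman lines above are one pass of the short-lived helper containers cephadm launches: init, start, attach, died, remove, each wrapped in a libpod/conmon systemd scope. The container's only stdout is "167 167", which matches the uid/gid of the ceph user and group (167) in the Ceph images. A minimal sketch for pairing these events into container lifetimes, assuming journal text like the above is piped on stdin (the timestamp layout and 64-hex container ids are taken from the lines themselves):

    #!/usr/bin/env python3
    # Pair podman "container create" / "container remove" journal events to
    # measure how long each short-lived cephadm helper container lives.
    import re
    import sys
    from datetime import datetime

    EVENT = re.compile(
        r'(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC '
        r'm=\+[\d.]+ container (?P<event>\w+) (?P<cid>[0-9a-f]{64})'
    )

    created = {}
    for line in sys.stdin:
        m = EVENT.search(line)
        if not m:
            continue
        # journald prints 9 fractional digits; %f accepts at most 6.
        ts = datetime.strptime(m['ts'][:26], '%Y-%m-%d %H:%M:%S.%f')
        if m['event'] == 'create':
            created[m['cid']] = ts
        elif m['event'] == 'remove' and m['cid'] in created:
            life = (ts - created.pop(m['cid'])).total_seconds()
            print(f"{m['cid'][:12]} lived {life:.3f}s")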
Feb 25 08:47:22 np0005629333 podman[416195]: 2026-02-25 13:47:22.672131245 +0000 UTC m=+0.070924267 container create bfb684cbe494566cdf9c971b0bc2fb63e4d2715117a868e38a0eb3516fb4eb7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:47:22 np0005629333 systemd[1]: Started libpod-conmon-bfb684cbe494566cdf9c971b0bc2fb63e4d2715117a868e38a0eb3516fb4eb7b.scope.
Feb 25 08:47:22 np0005629333 podman[416195]: 2026-02-25 13:47:22.637560643 +0000 UTC m=+0.036353675 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:47:22 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:47:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f5c659aaad5d0200192bc97399ad3102099c2168f05bcfa13c9890b7883488d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:47:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f5c659aaad5d0200192bc97399ad3102099c2168f05bcfa13c9890b7883488d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:47:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f5c659aaad5d0200192bc97399ad3102099c2168f05bcfa13c9890b7883488d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:47:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f5c659aaad5d0200192bc97399ad3102099c2168f05bcfa13c9890b7883488d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
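[annotation] The four xfs warnings fire because these overlay mounts sit on an xfs filesystem without the bigtime feature, whose inode timestamps are a signed 32-bit time_t; 0x7fffffff seconds after the epoch is exactly the 2038 cutoff the kernel prints. A one-liner check:

    from datetime import datetime, timezone
    # 0x7fffffff is the largest signed 32-bit time_t; it decodes to the
    # "supports timestamps until 2038" cutoff in the kernel messages above.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00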
Feb 25 08:47:22 np0005629333 podman[416195]: 2026-02-25 13:47:22.782519105 +0000 UTC m=+0.181312147 container init bfb684cbe494566cdf9c971b0bc2fb63e4d2715117a868e38a0eb3516fb4eb7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_brattain, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 08:47:22 np0005629333 podman[416195]: 2026-02-25 13:47:22.791940075 +0000 UTC m=+0.190733097 container start bfb684cbe494566cdf9c971b0bc2fb63e4d2715117a868e38a0eb3516fb4eb7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:47:22 np0005629333 podman[416195]: 2026-02-25 13:47:22.809475429 +0000 UTC m=+0.208268531 container attach bfb684cbe494566cdf9c971b0bc2fb63e4d2715117a868e38a0eb3516fb4eb7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_brattain, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:47:23 np0005629333 sad_brattain[416212]: {
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:    "0": [
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:        {
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "devices": [
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "/dev/loop3"
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            ],
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "lv_name": "ceph_lv0",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "lv_size": "21470642176",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "name": "ceph_lv0",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "tags": {
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.cluster_name": "ceph",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.crush_device_class": "",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.encrypted": "0",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.objectstore": "bluestore",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.osd_id": "0",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.type": "block",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.vdo": "0",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.with_tpm": "0"
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            },
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "type": "block",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "vg_name": "ceph_vg0"
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:        }
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:    ],
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:    "1": [
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:        {
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "devices": [
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "/dev/loop4"
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            ],
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "lv_name": "ceph_lv1",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "lv_size": "21470642176",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "name": "ceph_lv1",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "tags": {
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.cluster_name": "ceph",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.crush_device_class": "",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.encrypted": "0",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.objectstore": "bluestore",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.osd_id": "1",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.type": "block",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.vdo": "0",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.with_tpm": "0"
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            },
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "type": "block",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "vg_name": "ceph_vg1"
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:        }
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:    ],
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:    "2": [
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:        {
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "devices": [
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "/dev/loop5"
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            ],
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "lv_name": "ceph_lv2",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "lv_size": "21470642176",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "name": "ceph_lv2",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "tags": {
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.cluster_name": "ceph",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.crush_device_class": "",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.encrypted": "0",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.objectstore": "bluestore",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.osd_id": "2",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.type": "block",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.vdo": "0",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:                "ceph.with_tpm": "0"
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            },
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "type": "block",
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:            "vg_name": "ceph_vg2"
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:        }
Feb 25 08:47:23 np0005629333 sad_brattain[416212]:    ]
Feb 25 08:47:23 np0005629333 sad_brattain[416212]: }
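[annotation] The sad_brattain payload looks like ceph-volume's JSON LVM inventory: one key per OSD id ("0", "1", "2"), each carrying the backing loop device, LV path, and the ceph.* LV tags. Each lv_size of 21470642176 bytes is roughly 20 GiB, and the three OSDs together account for the 60 GiB raw capacity in the pgmap lines that follow. A sketch that strips the journald prefix and parses the payload, assuming only the JSON lines above are piped in:

    import json
    import re
    import sys

    # Strip the journald prefix ("Feb 25 08:47:23 host unit[pid]: ") and
    # parse the remaining ceph-volume style JSON inventory.
    PREFIX = re.compile(r'^\w{3} +\d+ [\d:]{8} \S+ \S+\[\d+\]: ')
    payload = ''.join(PREFIX.sub('', line) for line in sys.stdin)
    inventory = json.loads(payload)

    for osd_id, entries in sorted(inventory.items()):
        for lv in entries:
            size_gib = int(lv['lv_size']) / 2**30
            print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices'][0]} "
                  f"({size_gib:.1f} GiB)")
    # Three ~20 GiB LVs -> the 60 GiB raw total reported by pgmap below.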
Feb 25 08:47:23 np0005629333 systemd[1]: libpod-bfb684cbe494566cdf9c971b0bc2fb63e4d2715117a868e38a0eb3516fb4eb7b.scope: Deactivated successfully.
Feb 25 08:47:23 np0005629333 podman[416195]: 2026-02-25 13:47:23.077400551 +0000 UTC m=+0.476193563 container died bfb684cbe494566cdf9c971b0bc2fb63e4d2715117a868e38a0eb3516fb4eb7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:47:23 np0005629333 systemd[1]: var-lib-containers-storage-overlay-4f5c659aaad5d0200192bc97399ad3102099c2168f05bcfa13c9890b7883488d-merged.mount: Deactivated successfully.
Feb 25 08:47:23 np0005629333 podman[416195]: 2026-02-25 13:47:23.208187935 +0000 UTC m=+0.606980967 container remove bfb684cbe494566cdf9c971b0bc2fb63e4d2715117a868e38a0eb3516fb4eb7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:47:23 np0005629333 systemd[1]: libpod-conmon-bfb684cbe494566cdf9c971b0bc2fb63e4d2715117a868e38a0eb3516fb4eb7b.scope: Deactivated successfully.
Feb 25 08:47:23 np0005629333 podman[416298]: 2026-02-25 13:47:23.790083772 +0000 UTC m=+0.118262487 container create 127ac96cc9468fafaa44124b30467a7b781b5c3fe76115ca80801b2b2fd4cb56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sammet, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:47:23 np0005629333 podman[416298]: 2026-02-25 13:47:23.696862685 +0000 UTC m=+0.025041400 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:47:23 np0005629333 systemd[1]: Started libpod-conmon-127ac96cc9468fafaa44124b30467a7b781b5c3fe76115ca80801b2b2fd4cb56.scope.
Feb 25 08:47:23 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:47:23 np0005629333 podman[416298]: 2026-02-25 13:47:23.932068958 +0000 UTC m=+0.260247683 container init 127ac96cc9468fafaa44124b30467a7b781b5c3fe76115ca80801b2b2fd4cb56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sammet, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:47:23 np0005629333 podman[416298]: 2026-02-25 13:47:23.945845543 +0000 UTC m=+0.274024238 container start 127ac96cc9468fafaa44124b30467a7b781b5c3fe76115ca80801b2b2fd4cb56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Feb 25 08:47:23 np0005629333 intelligent_sammet[416315]: 167 167
Feb 25 08:47:23 np0005629333 systemd[1]: libpod-127ac96cc9468fafaa44124b30467a7b781b5c3fe76115ca80801b2b2fd4cb56.scope: Deactivated successfully.
Feb 25 08:47:23 np0005629333 podman[416298]: 2026-02-25 13:47:23.959271199 +0000 UTC m=+0.287449894 container attach 127ac96cc9468fafaa44124b30467a7b781b5c3fe76115ca80801b2b2fd4cb56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sammet, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Feb 25 08:47:23 np0005629333 podman[416298]: 2026-02-25 13:47:23.961676898 +0000 UTC m=+0.289855623 container died 127ac96cc9468fafaa44124b30467a7b781b5c3fe76115ca80801b2b2fd4cb56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sammet, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 08:47:24 np0005629333 systemd[1]: var-lib-containers-storage-overlay-4125d7dc175fb8266aecbaad9b9c8ce4d3d10a3cf858413db2ad730b3fdf398b-merged.mount: Deactivated successfully.
Feb 25 08:47:24 np0005629333 podman[416298]: 2026-02-25 13:47:24.044578378 +0000 UTC m=+0.372757103 container remove 127ac96cc9468fafaa44124b30467a7b781b5c3fe76115ca80801b2b2fd4cb56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_sammet, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:47:24 np0005629333 systemd[1]: libpod-conmon-127ac96cc9468fafaa44124b30467a7b781b5c3fe76115ca80801b2b2fd4cb56.scope: Deactivated successfully.
Feb 25 08:47:24 np0005629333 podman[416339]: 2026-02-25 13:47:24.229292741 +0000 UTC m=+0.055874325 container create d3e88c8ed3aecdc329f31bd4a51defad1b0e46aeda03c0e81e763fd5eace45c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 08:47:24 np0005629333 podman[416339]: 2026-02-25 13:47:24.194643816 +0000 UTC m=+0.021225460 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:47:24 np0005629333 systemd[1]: Started libpod-conmon-d3e88c8ed3aecdc329f31bd4a51defad1b0e46aeda03c0e81e763fd5eace45c1.scope.
Feb 25 08:47:24 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:47:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d48f580e87c455f2360f0d43fc5aa0baafb943e307bf0f904a60a2cb8a8af0a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:47:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d48f580e87c455f2360f0d43fc5aa0baafb943e307bf0f904a60a2cb8a8af0a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:47:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d48f580e87c455f2360f0d43fc5aa0baafb943e307bf0f904a60a2cb8a8af0a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:47:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d48f580e87c455f2360f0d43fc5aa0baafb943e307bf0f904a60a2cb8a8af0a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:47:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3825: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:24 np0005629333 podman[416339]: 2026-02-25 13:47:24.361381473 +0000 UTC m=+0.187963077 container init d3e88c8ed3aecdc329f31bd4a51defad1b0e46aeda03c0e81e763fd5eace45c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gould, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 08:47:24 np0005629333 podman[416339]: 2026-02-25 13:47:24.366090699 +0000 UTC m=+0.192672283 container start d3e88c8ed3aecdc329f31bd4a51defad1b0e46aeda03c0e81e763fd5eace45c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gould, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 08:47:24 np0005629333 podman[416339]: 2026-02-25 13:47:24.379150883 +0000 UTC m=+0.205732497 container attach d3e88c8ed3aecdc329f31bd4a51defad1b0e46aeda03c0e81e763fd5eace45c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gould, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:47:24 np0005629333 nova_compute[244014]: 2026-02-25 13:47:24.502 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:47:24 np0005629333 lvm[416434]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:47:24 np0005629333 lvm[416434]: VG ceph_vg1 finished
Feb 25 08:47:24 np0005629333 lvm[416435]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:47:24 np0005629333 lvm[416435]: VG ceph_vg0 finished
Feb 25 08:47:24 np0005629333 lvm[416437]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:47:24 np0005629333 lvm[416437]: VG ceph_vg2 finished
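[annotation] These lvm messages are event-based autoactivation: as each loop-device PV is (re)detected, lvm notices its volume group is complete and finishes activating it. The resulting state can be read back afterwards; "vgs --reportformat json" is standard lvm2 and needs root:

    import json
    import subprocess

    # Confirm the three VGs the event-activation messages report complete.
    out = subprocess.run(
        ['vgs', '--reportformat', 'json', '-o', 'vg_name,pv_count,lv_count'],
        capture_output=True, text=True, check=True,
    ).stdout
    for vg in json.loads(out)['report'][0]['vg']:
        print(vg['vg_name'], vg['pv_count'], vg['lv_count'])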
Feb 25 08:47:25 np0005629333 reverent_gould[416356]: {}
Feb 25 08:47:25 np0005629333 systemd[1]: libpod-d3e88c8ed3aecdc329f31bd4a51defad1b0e46aeda03c0e81e763fd5eace45c1.scope: Deactivated successfully.
Feb 25 08:47:25 np0005629333 systemd[1]: libpod-d3e88c8ed3aecdc329f31bd4a51defad1b0e46aeda03c0e81e763fd5eace45c1.scope: Consumed 1.106s CPU time.
Feb 25 08:47:25 np0005629333 podman[416339]: 2026-02-25 13:47:25.097146857 +0000 UTC m=+0.923728491 container died d3e88c8ed3aecdc329f31bd4a51defad1b0e46aeda03c0e81e763fd5eace45c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gould, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 08:47:25 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d48f580e87c455f2360f0d43fc5aa0baafb943e307bf0f904a60a2cb8a8af0a1-merged.mount: Deactivated successfully.
Feb 25 08:47:25 np0005629333 podman[416339]: 2026-02-25 13:47:25.191492265 +0000 UTC m=+1.018073849 container remove d3e88c8ed3aecdc329f31bd4a51defad1b0e46aeda03c0e81e763fd5eace45c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:47:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:47:25 np0005629333 systemd[1]: libpod-conmon-d3e88c8ed3aecdc329f31bd4a51defad1b0e46aeda03c0e81e763fd5eace45c1.scope: Deactivated successfully.
Feb 25 08:47:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:47:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:47:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:47:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:47:25 np0005629333 nova_compute[244014]: 2026-02-25 13:47:25.433 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:47:25 np0005629333 nova_compute[244014]: 2026-02-25 13:47:25.435 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:47:25 np0005629333 nova_compute[244014]: 2026-02-25 13:47:25.435 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:47:25 np0005629333 nova_compute[244014]: 2026-02-25 13:47:25.451 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:47:25 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:47:25 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
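[annotation] The two config-key set commands are cephadm caching this host's device inventory and host metadata in the monitor's key-value store; the key names appear in the handle_command lines. Reading one back uses the standard "ceph config-key get"; treating the stored value as JSON is an assumption here:

    import json
    import subprocess

    # Key name taken from the audit lines above; "ceph config-key get" is
    # standard CLI. Decoding the value as JSON is an assumption.
    key = 'mgr/cephadm/host.compute-0.devices.0'
    blob = subprocess.run(['ceph', 'config-key', 'get', key],
                          capture_output=True, text=True, check=True).stdout
    print(json.dumps(json.loads(blob), indent=2)[:400])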
Feb 25 08:47:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3826: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:27 np0005629333 nova_compute[244014]: 2026-02-25 13:47:27.027 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:47:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3827: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:28 np0005629333 nova_compute[244014]: 2026-02-25 13:47:28.889 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:47:29 np0005629333 nova_compute[244014]: 2026-02-25 13:47:29.518 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:47:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:47:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3828: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:47:31
Feb 25 08:47:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:47:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:47:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'images', 'default.rgw.control', 'default.rgw.meta', 'vms', 'backups', '.mgr']
Feb 25 08:47:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
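[annotation] One balancer pass in upmap mode: it evaluated the listed pools and prepared 0 of at most 10 optimizations, as expected while all 305 PGs are active+clean and evenly spread over three OSDs. The module's state is queryable; "ceph balancer status" is standard CLI and prints JSON:

    import json
    import subprocess

    # Expect mode "upmap" and active true, matching the log lines above.
    status = json.loads(subprocess.run(
        ['ceph', 'balancer', 'status'],
        capture_output=True, text=True, check=True).stdout)
    print(status['mode'], status['active'])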
Feb 25 08:47:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:47:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:47:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:47:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:47:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:47:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:47:32 np0005629333 nova_compute[244014]: 2026-02-25 13:47:32.081 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:47:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3829: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:47:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:47:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:47:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:47:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:47:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:47:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:47:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:47:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:47:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:47:32 np0005629333 nova_compute[244014]: 2026-02-25 13:47:32.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:47:32 np0005629333 nova_compute[244014]: 2026-02-25 13:47:32.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 08:47:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3830: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:34 np0005629333 nova_compute[244014]: 2026-02-25 13:47:34.564 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:47:34 np0005629333 nova_compute[244014]: 2026-02-25 13:47:34.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:47:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
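[annotation] The recurring _set_new_cache_sizes line is the monitor's cache autotuner re-splitting its memory budget; reading inc/full/kv as the incremental-osdmap, full-osdmap, and rocksdb cache allocations is an assumption. Converting the unchanging byte figures to MiB:

    # Convert the _set_new_cache_sizes byte figures above to MiB.
    for label, nbytes in [('cache_size', 1020054731),
                          ('inc_alloc', 343932928),
                          ('full_alloc', 348127232),
                          ('kv_alloc', 318767104)]:
        print(f'{label}: {nbytes / 2**20:.0f} MiB')
    # cache_size: 973 MiB, inc_alloc: 328 MiB, full_alloc: 332 MiB,
    # kv_alloc: 304 MiB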
Feb 25 08:47:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3831: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:37 np0005629333 nova_compute[244014]: 2026-02-25 13:47:37.085 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:47:37 np0005629333 nova_compute[244014]: 2026-02-25 13:47:37.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:47:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3832: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:38 np0005629333 podman[416480]: 2026-02-25 13:47:38.720928876 +0000 UTC m=+0.061161677 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Feb 25 08:47:38 np0005629333 podman[416481]: 2026-02-25 13:47:38.772069084 +0000 UTC m=+0.111827871 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
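[annotation] Both health_status events report healthy with a zero failing streak; podman's healthcheck timer is executing the configured 'test': '/openstack/healthcheck' inside each container. The same check can be run on demand with "podman healthcheck run", which is standard and exits 0 when healthy:

    import subprocess

    # Run the configured healthcheck on demand; exit status 0 means healthy.
    for name in ('ovn_metadata_agent', 'ovn_controller'):
        result = subprocess.run(['podman', 'healthcheck', 'run', name])
        print(name, 'healthy' if result.returncode == 0 else 'unhealthy')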
Feb 25 08:47:38 np0005629333 nova_compute[244014]: 2026-02-25 13:47:38.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:47:39 np0005629333 nova_compute[244014]: 2026-02-25 13:47:39.574 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:47:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:47:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3833: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:40 np0005629333 nova_compute[244014]: 2026-02-25 13:47:40.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:47:42 np0005629333 nova_compute[244014]: 2026-02-25 13:47:42.128 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:47:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3834: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:47:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:47:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:47:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:47:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:47:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:47:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:47:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:47:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:47:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:47:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:47:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:47:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:47:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:47:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:47:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:47:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:47:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:47:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:47:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:47:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:47:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:47:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
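[annotation] This pg_autoscaler pass reads naturally as pg_target = usage_ratio * bias * total_target_pgs with total_target_pgs = 300, presumably mon_target_pg_per_osd (default 100) times the three OSDs; every logged pg target reproduces under that model, and all pools stay at their current pg_num because the targets are far below the change threshold. A check against three of the logged figures (the model is inferred from the numbers, not stated in the log):

    # Inferred model: pg_target = usage_ratio * bias * total_target_pgs,
    # with total_target_pgs = 100 (mon_target_pg_per_osd) * 3 OSDs.
    TOTAL_TARGET_PGS = 100 * 3
    pools = {
        '.mgr':               (7.185749983720779e-06, 1.0),
        'images':             (0.0006714637386478266, 1.0),
        'cephfs.cephfs.meta': (1.3916366864300228e-06, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(f'{name}: pg target {ratio * bias * TOTAL_TARGET_PGS}')
    # .mgr -> 0.0021557249951162337, images -> ~0.201439121594348,
    # cephfs.cephfs.meta -> 0.0016699640237160273, matching the logged
    # targets (up to float rounding in the middle case).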
Feb 25 08:47:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3835: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:44 np0005629333 nova_compute[244014]: 2026-02-25 13:47:44.615 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:47:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:47:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3836: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:47 np0005629333 nova_compute[244014]: 2026-02-25 13:47:47.131 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:47:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:47:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1916913705' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:47:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:47:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1916913705' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
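[annotation] client.openstack at 192.168.122.10 is polling cluster capacity and the volumes pool quota, the usual periodic stats collection of an OpenStack RBD driver. The exact commands are in the audit lines and both exist in the standard CLI:

    import json
    import subprocess

    def ceph_json(*args):
        """Run a ceph CLI subcommand and decode its JSON output."""
        out = subprocess.run(['ceph', *args, '--format', 'json'],
                             capture_output=True, text=True, check=True)
        return json.loads(out.stdout)

    df = ceph_json('df')
    print(df['stats']['total_bytes'], df['stats']['total_avail_bytes'])
    quota = ceph_json('osd', 'pool', 'get-quota', 'volumes')
    print(quota.get('quota_max_bytes'), quota.get('quota_max_objects'))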
Feb 25 08:47:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3837: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:49 np0005629333 nova_compute[244014]: 2026-02-25 13:47:49.616 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:47:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:47:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3838: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:52 np0005629333 nova_compute[244014]: 2026-02-25 13:47:52.134 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:47:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3839: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3840: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:54 np0005629333 nova_compute[244014]: 2026-02-25 13:47:54.634 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:47:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:47:55.093 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:47:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:47:55.093 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:47:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:47:55.093 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:47:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:47:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3841: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:57 np0005629333 nova_compute[244014]: 2026-02-25 13:47:57.160 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:47:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3842: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:47:59 np0005629333 nova_compute[244014]: 2026-02-25 13:47:59.636 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:48:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:48:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3843: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:48:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:48:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:48:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:48:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:48:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:48:02 np0005629333 nova_compute[244014]: 2026-02-25 13:48:02.163 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:48:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3844: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3845: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:04 np0005629333 nova_compute[244014]: 2026-02-25 13:48:04.637 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:48:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:48:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3846: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:07 np0005629333 nova_compute[244014]: 2026-02-25 13:48:07.166 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:48:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3847: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:09 np0005629333 nova_compute[244014]: 2026-02-25 13:48:09.682 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:48:09 np0005629333 podman[416523]: 2026-02-25 13:48:09.766630012 +0000 UTC m=+0.066795709 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Feb 25 08:48:09 np0005629333 podman[416524]: 2026-02-25 13:48:09.793402821 +0000 UTC m=+0.093419723 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Feb 25 08:48:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:48:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3848: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:12 np0005629333 nova_compute[244014]: 2026-02-25 13:48:12.201 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:48:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3849: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3850: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:14 np0005629333 nova_compute[244014]: 2026-02-25 13:48:14.718 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:48:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:48:15 np0005629333 nova_compute[244014]: 2026-02-25 13:48:15.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:48:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3851: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:17 np0005629333 nova_compute[244014]: 2026-02-25 13:48:17.205 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:48:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3852: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:19 np0005629333 nova_compute[244014]: 2026-02-25 13:48:19.719 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:48:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:48:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3853: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:20 np0005629333 nova_compute[244014]: 2026-02-25 13:48:20.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:48:20 np0005629333 nova_compute[244014]: 2026-02-25 13:48:20.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:48:20 np0005629333 nova_compute[244014]: 2026-02-25 13:48:20.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:48:20 np0005629333 nova_compute[244014]: 2026-02-25 13:48:20.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:48:20 np0005629333 nova_compute[244014]: 2026-02-25 13:48:20.906 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 08:48:20 np0005629333 nova_compute[244014]: 2026-02-25 13:48:20.906 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:48:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:48:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3671368893' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:48:21 np0005629333 nova_compute[244014]: 2026-02-25 13:48:21.487 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:48:21 np0005629333 nova_compute[244014]: 2026-02-25 13:48:21.646 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 08:48:21 np0005629333 nova_compute[244014]: 2026-02-25 13:48:21.647 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3555MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 08:48:21 np0005629333 nova_compute[244014]: 2026-02-25 13:48:21.647 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:48:21 np0005629333 nova_compute[244014]: 2026-02-25 13:48:21.648 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:48:21 np0005629333 nova_compute[244014]: 2026-02-25 13:48:21.716 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 08:48:21 np0005629333 nova_compute[244014]: 2026-02-25 13:48:21.717 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 08:48:21 np0005629333 nova_compute[244014]: 2026-02-25 13:48:21.735 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:48:22 np0005629333 nova_compute[244014]: 2026-02-25 13:48:22.238 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:48:22 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:48:22 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3219253594' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:48:22 np0005629333 nova_compute[244014]: 2026-02-25 13:48:22.266 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:48:22 np0005629333 nova_compute[244014]: 2026-02-25 13:48:22.273 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:48:22 np0005629333 nova_compute[244014]: 2026-02-25 13:48:22.297 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 08:48:22 np0005629333 nova_compute[244014]: 2026-02-25 13:48:22.300 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 08:48:22 np0005629333 nova_compute[244014]: 2026-02-25 13:48:22.300 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:48:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3854: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3855: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:24 np0005629333 nova_compute[244014]: 2026-02-25 13:48:24.755 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:48:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:48:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:48:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:48:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:48:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:48:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:48:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:48:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:48:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:48:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:48:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:48:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:48:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:48:26 np0005629333 nova_compute[244014]: 2026-02-25 13:48:26.302 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:48:26 np0005629333 nova_compute[244014]: 2026-02-25 13:48:26.303 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 08:48:26 np0005629333 nova_compute[244014]: 2026-02-25 13:48:26.304 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 08:48:26 np0005629333 nova_compute[244014]: 2026-02-25 13:48:26.332 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 08:48:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3856: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:26 np0005629333 podman[416756]: 2026-02-25 13:48:26.415169038 +0000 UTC m=+0.058001186 container create 8a3f9d70cc287c0889541025ec75fa4c9c32c8bf09c35c97ddaeffff9765616c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_lewin, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 08:48:26 np0005629333 systemd[1]: Started libpod-conmon-8a3f9d70cc287c0889541025ec75fa4c9c32c8bf09c35c97ddaeffff9765616c.scope.
Feb 25 08:48:26 np0005629333 podman[416756]: 2026-02-25 13:48:26.390023006 +0000 UTC m=+0.032855194 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:48:26 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:48:26 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:48:26 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:48:26 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:48:26 np0005629333 podman[416756]: 2026-02-25 13:48:26.517101745 +0000 UTC m=+0.159933943 container init 8a3f9d70cc287c0889541025ec75fa4c9c32c8bf09c35c97ddaeffff9765616c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:48:26 np0005629333 podman[416756]: 2026-02-25 13:48:26.52529302 +0000 UTC m=+0.168125158 container start 8a3f9d70cc287c0889541025ec75fa4c9c32c8bf09c35c97ddaeffff9765616c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_lewin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Feb 25 08:48:26 np0005629333 podman[416756]: 2026-02-25 13:48:26.529607134 +0000 UTC m=+0.172439272 container attach 8a3f9d70cc287c0889541025ec75fa4c9c32c8bf09c35c97ddaeffff9765616c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:48:26 np0005629333 keen_lewin[416772]: 167 167
Feb 25 08:48:26 np0005629333 systemd[1]: libpod-8a3f9d70cc287c0889541025ec75fa4c9c32c8bf09c35c97ddaeffff9765616c.scope: Deactivated successfully.
Feb 25 08:48:26 np0005629333 conmon[416772]: conmon 8a3f9d70cc287c088954 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8a3f9d70cc287c0889541025ec75fa4c9c32c8bf09c35c97ddaeffff9765616c.scope/container/memory.events
Feb 25 08:48:26 np0005629333 podman[416756]: 2026-02-25 13:48:26.533602458 +0000 UTC m=+0.176434626 container died 8a3f9d70cc287c0889541025ec75fa4c9c32c8bf09c35c97ddaeffff9765616c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:48:26 np0005629333 systemd[1]: var-lib-containers-storage-overlay-4a2f586921dbc67a698e0e4bd8147107acefb22a8dff68b8e647f0ca37ec4f61-merged.mount: Deactivated successfully.
Feb 25 08:48:26 np0005629333 podman[416756]: 2026-02-25 13:48:26.586101676 +0000 UTC m=+0.228933824 container remove 8a3f9d70cc287c0889541025ec75fa4c9c32c8bf09c35c97ddaeffff9765616c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_lewin, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:48:26 np0005629333 systemd[1]: libpod-conmon-8a3f9d70cc287c0889541025ec75fa4c9c32c8bf09c35c97ddaeffff9765616c.scope: Deactivated successfully.
Feb 25 08:48:26 np0005629333 podman[416798]: 2026-02-25 13:48:26.76077492 +0000 UTC m=+0.064770190 container create 3b3099875f22a68ee4cc9c49b58a09fdb4c49146f8eab0f35fd9200749022526 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sanderson, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:48:26 np0005629333 systemd[1]: Started libpod-conmon-3b3099875f22a68ee4cc9c49b58a09fdb4c49146f8eab0f35fd9200749022526.scope.
Feb 25 08:48:26 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:48:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5603d07f9c5e8deba0c7587431bd73f5f8be661e9da4d8cbdd6c0affe0d4673e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:48:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5603d07f9c5e8deba0c7587431bd73f5f8be661e9da4d8cbdd6c0affe0d4673e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:48:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5603d07f9c5e8deba0c7587431bd73f5f8be661e9da4d8cbdd6c0affe0d4673e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:48:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5603d07f9c5e8deba0c7587431bd73f5f8be661e9da4d8cbdd6c0affe0d4673e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:48:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5603d07f9c5e8deba0c7587431bd73f5f8be661e9da4d8cbdd6c0affe0d4673e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:48:26 np0005629333 podman[416798]: 2026-02-25 13:48:26.737597745 +0000 UTC m=+0.041593085 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:48:26 np0005629333 podman[416798]: 2026-02-25 13:48:26.857662872 +0000 UTC m=+0.161658132 container init 3b3099875f22a68ee4cc9c49b58a09fdb4c49146f8eab0f35fd9200749022526 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Feb 25 08:48:26 np0005629333 podman[416798]: 2026-02-25 13:48:26.875438062 +0000 UTC m=+0.179433322 container start 3b3099875f22a68ee4cc9c49b58a09fdb4c49146f8eab0f35fd9200749022526 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sanderson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 08:48:26 np0005629333 podman[416798]: 2026-02-25 13:48:26.879277462 +0000 UTC m=+0.183272752 container attach 3b3099875f22a68ee4cc9c49b58a09fdb4c49146f8eab0f35fd9200749022526 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sanderson, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 08:48:27 np0005629333 nova_compute[244014]: 2026-02-25 13:48:27.285 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:48:27 np0005629333 pensive_sanderson[416815]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:48:27 np0005629333 pensive_sanderson[416815]: --> All data devices are unavailable
Feb 25 08:48:27 np0005629333 systemd[1]: libpod-3b3099875f22a68ee4cc9c49b58a09fdb4c49146f8eab0f35fd9200749022526.scope: Deactivated successfully.
Feb 25 08:48:27 np0005629333 podman[416798]: 2026-02-25 13:48:27.456421161 +0000 UTC m=+0.760416461 container died 3b3099875f22a68ee4cc9c49b58a09fdb4c49146f8eab0f35fd9200749022526 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sanderson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:48:27 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5603d07f9c5e8deba0c7587431bd73f5f8be661e9da4d8cbdd6c0affe0d4673e-merged.mount: Deactivated successfully.
Feb 25 08:48:27 np0005629333 podman[416798]: 2026-02-25 13:48:27.506792267 +0000 UTC m=+0.810787527 container remove 3b3099875f22a68ee4cc9c49b58a09fdb4c49146f8eab0f35fd9200749022526 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sanderson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:48:27 np0005629333 systemd[1]: libpod-conmon-3b3099875f22a68ee4cc9c49b58a09fdb4c49146f8eab0f35fd9200749022526.scope: Deactivated successfully.
Feb 25 08:48:28 np0005629333 podman[416909]: 2026-02-25 13:48:28.010172839 +0000 UTC m=+0.060618781 container create a9d065eae17d8a5836465fb8424cafe6d96929630a9d2479d9f6afeb73475088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_euclid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:48:28 np0005629333 podman[416909]: 2026-02-25 13:48:27.981563057 +0000 UTC m=+0.032009019 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:48:28 np0005629333 systemd[1]: Started libpod-conmon-a9d065eae17d8a5836465fb8424cafe6d96929630a9d2479d9f6afeb73475088.scope.
Feb 25 08:48:28 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:48:28 np0005629333 podman[416909]: 2026-02-25 13:48:28.172574031 +0000 UTC m=+0.223020033 container init a9d065eae17d8a5836465fb8424cafe6d96929630a9d2479d9f6afeb73475088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_euclid, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:48:28 np0005629333 podman[416909]: 2026-02-25 13:48:28.181170568 +0000 UTC m=+0.231616550 container start a9d065eae17d8a5836465fb8424cafe6d96929630a9d2479d9f6afeb73475088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:48:28 np0005629333 peaceful_euclid[416925]: 167 167
Feb 25 08:48:28 np0005629333 systemd[1]: libpod-a9d065eae17d8a5836465fb8424cafe6d96929630a9d2479d9f6afeb73475088.scope: Deactivated successfully.
Feb 25 08:48:28 np0005629333 conmon[416925]: conmon a9d065eae17d8a583646 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a9d065eae17d8a5836465fb8424cafe6d96929630a9d2479d9f6afeb73475088.scope/container/memory.events
Feb 25 08:48:28 np0005629333 podman[416909]: 2026-02-25 13:48:28.246597856 +0000 UTC m=+0.297043898 container attach a9d065eae17d8a5836465fb8424cafe6d96929630a9d2479d9f6afeb73475088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_euclid, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:48:28 np0005629333 podman[416909]: 2026-02-25 13:48:28.249227372 +0000 UTC m=+0.299673374 container died a9d065eae17d8a5836465fb8424cafe6d96929630a9d2479d9f6afeb73475088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_euclid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:48:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3857: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:28 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6459034f7da4d54f3e08b42abc8eccf36606954d8319cace49b19af788687cf2-merged.mount: Deactivated successfully.
Feb 25 08:48:28 np0005629333 podman[416909]: 2026-02-25 13:48:28.681472661 +0000 UTC m=+0.731918643 container remove a9d065eae17d8a5836465fb8424cafe6d96929630a9d2479d9f6afeb73475088 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 08:48:28 np0005629333 systemd[1]: libpod-conmon-a9d065eae17d8a5836465fb8424cafe6d96929630a9d2479d9f6afeb73475088.scope: Deactivated successfully.
Feb 25 08:48:28 np0005629333 podman[416951]: 2026-02-25 13:48:28.898103141 +0000 UTC m=+0.060189989 container create 04c54032690b1beb66412cd5529a92ab9472557215dbb327bd72ed06b7f51184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jemison, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:48:28 np0005629333 podman[416951]: 2026-02-25 13:48:28.864740803 +0000 UTC m=+0.026827731 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:48:28 np0005629333 systemd[1]: Started libpod-conmon-04c54032690b1beb66412cd5529a92ab9472557215dbb327bd72ed06b7f51184.scope.
Feb 25 08:48:29 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:48:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d127f2825e17f9df3ca30a3c22295fe3c4af07ddf5aa7d9a1840fb999a519434/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:48:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d127f2825e17f9df3ca30a3c22295fe3c4af07ddf5aa7d9a1840fb999a519434/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:48:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d127f2825e17f9df3ca30a3c22295fe3c4af07ddf5aa7d9a1840fb999a519434/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:48:29 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d127f2825e17f9df3ca30a3c22295fe3c4af07ddf5aa7d9a1840fb999a519434/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:48:29 np0005629333 podman[416951]: 2026-02-25 13:48:29.049501287 +0000 UTC m=+0.211588135 container init 04c54032690b1beb66412cd5529a92ab9472557215dbb327bd72ed06b7f51184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jemison, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 08:48:29 np0005629333 podman[416951]: 2026-02-25 13:48:29.058710722 +0000 UTC m=+0.220797590 container start 04c54032690b1beb66412cd5529a92ab9472557215dbb327bd72ed06b7f51184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jemison, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:48:29 np0005629333 podman[416951]: 2026-02-25 13:48:29.065680142 +0000 UTC m=+0.227766990 container attach 04c54032690b1beb66412cd5529a92ab9472557215dbb327bd72ed06b7f51184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jemison, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]: {
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:    "0": [
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:        {
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "devices": [
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "/dev/loop3"
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            ],
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "lv_name": "ceph_lv0",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "lv_size": "21470642176",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "name": "ceph_lv0",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "tags": {
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.cluster_name": "ceph",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.crush_device_class": "",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.encrypted": "0",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.objectstore": "bluestore",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.osd_id": "0",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.type": "block",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.vdo": "0",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.with_tpm": "0"
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            },
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "type": "block",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "vg_name": "ceph_vg0"
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:        }
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:    ],
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:    "1": [
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:        {
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "devices": [
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "/dev/loop4"
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            ],
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "lv_name": "ceph_lv1",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "lv_size": "21470642176",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "name": "ceph_lv1",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "tags": {
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.cluster_name": "ceph",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.crush_device_class": "",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.encrypted": "0",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.objectstore": "bluestore",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.osd_id": "1",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.type": "block",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.vdo": "0",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.with_tpm": "0"
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            },
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "type": "block",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "vg_name": "ceph_vg1"
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:        }
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:    ],
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:    "2": [
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:        {
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "devices": [
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "/dev/loop5"
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            ],
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "lv_name": "ceph_lv2",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "lv_size": "21470642176",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "name": "ceph_lv2",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "tags": {
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.cluster_name": "ceph",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.crush_device_class": "",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.encrypted": "0",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.objectstore": "bluestore",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.osd_id": "2",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.type": "block",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.vdo": "0",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:                "ceph.with_tpm": "0"
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            },
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "type": "block",
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:            "vg_name": "ceph_vg2"
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:        }
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]:    ]
Feb 25 08:48:29 np0005629333 laughing_jemison[416968]: }
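[annotation] The JSON block above is a `ceph-volume lvm list --format json` style inventory emitted by the short-lived `laughing_jemison` container: each top-level key is an OSD id, holding the LV records that back that OSD. A minimal sketch of consuming such a dump, assuming it has been saved to a file (the filename is illustrative):

```python
import json

# Top-level keys of the dump are OSD ids; each maps to a list of LV
# records. The filename is illustrative: substitute wherever the JSON
# emitted by the container above was captured.
with open("ceph-volume-lvm-list.json") as f:
    osds = json.load(f)

for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv.get("tags", {})
        print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
              f"(osd_fsid={tags.get('ceph.osd_fsid')}, "
              f"objectstore={tags.get('ceph.objectstore')})")
```

For this host it would print three lines: osd.0 on /dev/loop3, osd.1 on /dev/loop4, and osd.2 on /dev/loop5.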
Feb 25 08:48:29 np0005629333 systemd[1]: libpod-04c54032690b1beb66412cd5529a92ab9472557215dbb327bd72ed06b7f51184.scope: Deactivated successfully.
Feb 25 08:48:29 np0005629333 podman[416951]: 2026-02-25 13:48:29.357099708 +0000 UTC m=+0.519186556 container died 04c54032690b1beb66412cd5529a92ab9472557215dbb327bd72ed06b7f51184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jemison, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 08:48:29 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d127f2825e17f9df3ca30a3c22295fe3c4af07ddf5aa7d9a1840fb999a519434-merged.mount: Deactivated successfully.
Feb 25 08:48:29 np0005629333 podman[416951]: 2026-02-25 13:48:29.406459645 +0000 UTC m=+0.568546483 container remove 04c54032690b1beb66412cd5529a92ab9472557215dbb327bd72ed06b7f51184 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 08:48:29 np0005629333 systemd[1]: libpod-conmon-04c54032690b1beb66412cd5529a92ab9472557215dbb327bd72ed06b7f51184.scope: Deactivated successfully.
Feb 25 08:48:29 np0005629333 nova_compute[244014]: 2026-02-25 13:48:29.756 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:48:29 np0005629333 nova_compute[244014]: 2026-02-25 13:48:29.902 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:48:29 np0005629333 podman[417053]: 2026-02-25 13:48:29.903887836 +0000 UTC m=+0.045521178 container create fb94376c164880929394478cd0466351756434851d2404d2fa419f3280082a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 08:48:29 np0005629333 systemd[1]: Started libpod-conmon-fb94376c164880929394478cd0466351756434851d2404d2fa419f3280082a29.scope.
Feb 25 08:48:29 np0005629333 podman[417053]: 2026-02-25 13:48:29.88522641 +0000 UTC m=+0.026859712 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:48:29 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:48:30 np0005629333 podman[417053]: 2026-02-25 13:48:30.034784254 +0000 UTC m=+0.176417576 container init fb94376c164880929394478cd0466351756434851d2404d2fa419f3280082a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 08:48:30 np0005629333 podman[417053]: 2026-02-25 13:48:30.042432494 +0000 UTC m=+0.184065806 container start fb94376c164880929394478cd0466351756434851d2404d2fa419f3280082a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:48:30 np0005629333 distracted_elgamal[417069]: 167 167
Feb 25 08:48:30 np0005629333 systemd[1]: libpod-fb94376c164880929394478cd0466351756434851d2404d2fa419f3280082a29.scope: Deactivated successfully.
Feb 25 08:48:30 np0005629333 podman[417053]: 2026-02-25 13:48:30.048435676 +0000 UTC m=+0.190069008 container attach fb94376c164880929394478cd0466351756434851d2404d2fa419f3280082a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_elgamal, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:48:30 np0005629333 podman[417053]: 2026-02-25 13:48:30.050129915 +0000 UTC m=+0.191763217 container died fb94376c164880929394478cd0466351756434851d2404d2fa419f3280082a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_elgamal, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Feb 25 08:48:30 np0005629333 systemd[1]: var-lib-containers-storage-overlay-3ba04d0a548b4e04f0447706d4a2eec0253fe07cda8976d72238164945091ef8-merged.mount: Deactivated successfully.
Feb 25 08:48:30 np0005629333 podman[417053]: 2026-02-25 13:48:30.105306619 +0000 UTC m=+0.246939921 container remove fb94376c164880929394478cd0466351756434851d2404d2fa419f3280082a29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_elgamal, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:48:30 np0005629333 systemd[1]: libpod-conmon-fb94376c164880929394478cd0466351756434851d2404d2fa419f3280082a29.scope: Deactivated successfully.
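[annotation] Each of these bursts (create → init → start → attach → died → remove, all within about a second) is a one-shot probe container that cephadm runs against the Ceph image and discards. The `167 167` printed by `distracted_elgamal` is consistent with a uid/gid probe: 167:167 is the `ceph` user and group in Red Hat-family Ceph images. A rough reconstruction of such a probe (an assumption about what the container ran, not cephadm's actual code):

```python
import subprocess

# Print the owner of /var/lib/ceph inside the image; in Red Hat-family
# Ceph images that is 167:167, the ceph user/group.
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

out = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
     "-c", "%u %g", "/var/lib/ceph"],
    check=True, capture_output=True, text=True,
).stdout.strip()
uid, gid = out.split()
print(uid, gid)   # expected: "167 167", matching the container output above
```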
Feb 25 08:48:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:48:30 np0005629333 podman[417093]: 2026-02-25 13:48:30.258415154 +0000 UTC m=+0.044378315 container create bf7de8932baee4a6c270940d99d010a28fa672e29ac4d710eb7d26373aa91bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hertz, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:48:30 np0005629333 systemd[1]: Started libpod-conmon-bf7de8932baee4a6c270940d99d010a28fa672e29ac4d710eb7d26373aa91bbe.scope.
Feb 25 08:48:30 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:48:30 np0005629333 podman[417093]: 2026-02-25 13:48:30.233336054 +0000 UTC m=+0.019299205 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:48:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8df5f4cff6da65729321a606740ffb4506ae6bff24f72776ce513082872396/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:48:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8df5f4cff6da65729321a606740ffb4506ae6bff24f72776ce513082872396/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:48:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8df5f4cff6da65729321a606740ffb4506ae6bff24f72776ce513082872396/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:48:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8df5f4cff6da65729321a606740ffb4506ae6bff24f72776ce513082872396/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:48:30 np0005629333 podman[417093]: 2026-02-25 13:48:30.356454299 +0000 UTC m=+0.142417510 container init bf7de8932baee4a6c270940d99d010a28fa672e29ac4d710eb7d26373aa91bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hertz, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 08:48:30 np0005629333 podman[417093]: 2026-02-25 13:48:30.365342694 +0000 UTC m=+0.151305865 container start bf7de8932baee4a6c270940d99d010a28fa672e29ac4d710eb7d26373aa91bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hertz, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:48:30 np0005629333 podman[417093]: 2026-02-25 13:48:30.370326147 +0000 UTC m=+0.156289358 container attach bf7de8932baee4a6c270940d99d010a28fa672e29ac4d710eb7d26373aa91bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 08:48:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3858: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:31 np0005629333 lvm[417188]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:48:31 np0005629333 lvm[417188]: VG ceph_vg0 finished
Feb 25 08:48:31 np0005629333 lvm[417189]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:48:31 np0005629333 lvm[417189]: VG ceph_vg1 finished
Feb 25 08:48:31 np0005629333 lvm[417191]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:48:31 np0005629333 lvm[417191]: VG ceph_vg2 finished
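[annotation] The lvm messages are event-based autoactivation at work: as each loop device's PV comes online, LVM declares the owning VG complete and finishes activating it. One way to cross-check from userspace is to ask LVM for a JSON report of the same VGs (a sketch; the VG names are taken from the log, and `vgs` needs root):

```python
import json
import subprocess

# `vgs --reportformat json` is standard LVM2; report structure is
# {"report": [{"vg": [...]}]}.
report = json.loads(subprocess.run(
    ["vgs", "--reportformat", "json", "-o", "vg_name,pv_count,lv_count",
     "ceph_vg0", "ceph_vg1", "ceph_vg2"],
    check=True, capture_output=True, text=True,
).stdout)

for vg in report["report"][0]["vg"]:
    print(vg["vg_name"], "PVs:", vg["pv_count"], "LVs:", vg["lv_count"])
```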
Feb 25 08:48:31 np0005629333 focused_hertz[417110]: {}
Feb 25 08:48:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:48:31
Feb 25 08:48:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:48:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:48:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['volumes', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'vms', 'backups', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'default.rgw.meta']
Feb 25 08:48:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
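[annotation] This balancer pass is a no-op: in upmap mode, with at most 5% of PGs allowed to be misplaced per plan, it walked the listed pools and prepared 0 of a possible 10 upmap changes because the 305 PGs are already evenly distributed. The same state can be read back with `ceph balancer status`; a sketch (output field names can vary slightly between releases):

```python
import json
import subprocess

# Read back the balancer state the mgr log is narrating.
status = json.loads(subprocess.run(
    ["ceph", "balancer", "status", "-f", "json"],
    check=True, capture_output=True, text=True,
).stdout)

print(status.get("mode"), status.get("active"), status.get("optimize_result"))
```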
Feb 25 08:48:31 np0005629333 systemd[1]: libpod-bf7de8932baee4a6c270940d99d010a28fa672e29ac4d710eb7d26373aa91bbe.scope: Deactivated successfully.
Feb 25 08:48:31 np0005629333 systemd[1]: libpod-bf7de8932baee4a6c270940d99d010a28fa672e29ac4d710eb7d26373aa91bbe.scope: Consumed 1.332s CPU time.
Feb 25 08:48:31 np0005629333 podman[417093]: 2026-02-25 13:48:31.269995325 +0000 UTC m=+1.055958496 container died bf7de8932baee4a6c270940d99d010a28fa672e29ac4d710eb7d26373aa91bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hertz, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:48:31 np0005629333 systemd[1]: var-lib-containers-storage-overlay-fa8df5f4cff6da65729321a606740ffb4506ae6bff24f72776ce513082872396-merged.mount: Deactivated successfully.
Feb 25 08:48:31 np0005629333 podman[417093]: 2026-02-25 13:48:31.320130385 +0000 UTC m=+1.106093556 container remove bf7de8932baee4a6c270940d99d010a28fa672e29ac4d710eb7d26373aa91bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hertz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:48:31 np0005629333 systemd[1]: libpod-conmon-bf7de8932baee4a6c270940d99d010a28fa672e29ac4d710eb7d26373aa91bbe.scope: Deactivated successfully.
Feb 25 08:48:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:48:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:48:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:48:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:48:31 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:48:31 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:48:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:48:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:48:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:48:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:48:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:48:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:48:32 np0005629333 nova_compute[244014]: 2026-02-25 13:48:32.289 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:48:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3859: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:48:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:48:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:48:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:48:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:48:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:48:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:48:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:48:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:48:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:48:33 np0005629333 nova_compute[244014]: 2026-02-25 13:48:33.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:48:33 np0005629333 nova_compute[244014]: 2026-02-25 13:48:33.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 08:48:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3860: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:34 np0005629333 nova_compute[244014]: 2026-02-25 13:48:34.759 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:48:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:48:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3861: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:36 np0005629333 nova_compute[244014]: 2026-02-25 13:48:36.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:48:37 np0005629333 nova_compute[244014]: 2026-02-25 13:48:37.292 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:48:37 np0005629333 nova_compute[244014]: 2026-02-25 13:48:37.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:48:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3862: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:39 np0005629333 nova_compute[244014]: 2026-02-25 13:48:39.761 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:48:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:48:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3863: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:40 np0005629333 podman[417231]: 2026-02-25 13:48:40.713100181 +0000 UTC m=+0.058976994 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 25 08:48:40 np0005629333 podman[417232]: 2026-02-25 13:48:40.784135601 +0000 UTC m=+0.124217778 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
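[annotation] The `health_status=healthy` events are podman's periodic healthchecks firing for `ovn_metadata_agent` and `ovn_controller`; the `healthcheck` entry in each container's `config_data` mounts the `/openstack/healthcheck` test script that gets executed. The current state can be read back from `podman inspect` (a sketch; the health key name has varied across podman releases):

```python
import json
import subprocess

# podman inspect returns a JSON array; the health section has been named
# "Health" (newer podman) or "Healthcheck" (older), so probe both.
insp = json.loads(subprocess.run(
    ["podman", "inspect", "ovn_controller"],
    check=True, capture_output=True, text=True,
).stdout)[0]

health = insp["State"].get("Health") or insp["State"].get("Healthcheck") or {}
print(health.get("Status"), "failing streak:", health.get("FailingStreak"))
```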
Feb 25 08:48:40 np0005629333 nova_compute[244014]: 2026-02-25 13:48:40.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:48:41 np0005629333 nova_compute[244014]: 2026-02-25 13:48:41.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:48:42 np0005629333 nova_compute[244014]: 2026-02-25 13:48:42.294 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:48:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3864: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:48:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:48:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:48:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:48:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:48:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:48:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:48:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:48:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:48:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:48:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:48:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:48:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:48:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:48:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:48:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:48:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:48:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:48:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:48:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:48:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:48:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:48:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
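[annotation] Each pg_autoscaler line is straightforward arithmetic: the raw pg target is usage_ratio × bias × the cluster's overall PG budget. With this cluster's 3 OSDs (ids 0-2 in the LVM listing above) and the default `mon_target_pg_per_osd` of 100, a budget of 300 reproduces the logged numbers; the result is then quantized to a power of two and left at the current pg_num unless the target differs substantially. A sketch of the computation (the budget is inferred from the numbers, not read from configuration):

```python
# Reproduce the pg_autoscaler arithmetic from the lines above.
PG_BUDGET = 3 * 100   # 3 OSDs x assumed mon_target_pg_per_osd = 100

# (usage_ratio, bias) pairs copied from the log.
pools = {
    ".mgr":               (7.185749983720779e-06, 1.0),
    "vms":                (1.73878357684759e-05, 1.0),
    "cephfs.cephfs.meta": (1.3916366864300228e-06, 4.0),
}

for name, (usage_ratio, bias) in pools.items():
    raw_target = usage_ratio * bias * PG_BUDGET
    # Matches the logged "pg target" values (up to the last float digit),
    # e.g. 7.185749983720779e-06 * 1.0 * 300 = 0.0021557249951162337
    # for the '.mgr' pool.
    print(f"{name}: pg target {raw_target!r}")
```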
Feb 25 08:48:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3865: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:44 np0005629333 nova_compute[244014]: 2026-02-25 13:48:44.792 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:48:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:48:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3866: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:47 np0005629333 nova_compute[244014]: 2026-02-25 13:48:47.345 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:48:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:48:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2533190081' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:48:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:48:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2533190081' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:48:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3867: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:49 np0005629333 nova_compute[244014]: 2026-02-25 13:48:49.832 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:48:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:48:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3868: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:52 np0005629333 nova_compute[244014]: 2026-02-25 13:48:52.347 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:48:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3869: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3870: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:54 np0005629333 nova_compute[244014]: 2026-02-25 13:48:54.867 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:48:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:48:55.093 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:48:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:48:55.094 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:48:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:48:55.094 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:48:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:48:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3871: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:57 np0005629333 nova_compute[244014]: 2026-02-25 13:48:57.383 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:48:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3872: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:48:59 np0005629333 nova_compute[244014]: 2026-02-25 13:48:59.914 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:49:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:49:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3873: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:49:00 np0005629333 nova_compute[244014]: 2026-02-25 13:49:00.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:49:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:49:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:49:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:49:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:49:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:49:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:49:01 np0005629333 nova_compute[244014]: 2026-02-25 13:49:01.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:49:02 np0005629333 nova_compute[244014]: 2026-02-25 13:49:02.385 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:49:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3874: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:49:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3875: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:49:04 np0005629333 nova_compute[244014]: 2026-02-25 13:49:04.957 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:49:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:49:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3876: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:49:07 np0005629333 nova_compute[244014]: 2026-02-25 13:49:07.431 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:49:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3877: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:49:09 np0005629333 nova_compute[244014]: 2026-02-25 13:49:09.960 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:49:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:49:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3878: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:49:11 np0005629333 podman[417278]: 2026-02-25 13:49:11.744766262 +0000 UTC m=+0.076152548 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0)
Feb 25 08:49:11 np0005629333 podman[417279]: 2026-02-25 13:49:11.786584402 +0000 UTC m=+0.108412413 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:49:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3879: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:49:12 np0005629333 nova_compute[244014]: 2026-02-25 13:49:12.434 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:49:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 08:49:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7200.0 total, 600.0 interval
Cumulative writes: 17K writes, 80K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.02 MB/s
Cumulative WAL: 17K writes, 17K syncs, 1.00 writes per sync, written: 0.11 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1310 writes, 5692 keys, 1310 commit groups, 1.0 writes per commit group, ingest: 8.65 MB, 0.01 MB/s
Interval WAL: 1310 writes, 1310 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     49.4      1.98              0.27        60    0.033       0      0       0.0       0.0
  L6      1/0    8.90 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.4     98.9     84.7      6.25              1.58        59    0.106    425K    31K       0.0       0.0
 Sum      1/0    8.90 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.4     75.2     76.2      8.22              1.86       119    0.069    425K    31K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   7.2     55.2     55.5      0.89              0.19         8    0.111     39K   2040       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0     98.9     84.7      6.25              1.58        59    0.106    425K    31K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     49.5      1.97              0.27        59    0.033       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 7200.0 total, 600.0 interval
Flush(GB): cumulative 0.095, interval 0.007
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.61 GB write, 0.09 MB/s write, 0.60 GB read, 0.09 MB/s read, 8.2 seconds
Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.9 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561a1af858d0#2 capacity: 304.00 MB usage: 70.08 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.000815 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(4355,67.12 MB,22.0781%) FilterBlock(120,1.15 MB,0.376987%) IndexBlock(120,1.82 MB,0.598099%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
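
The stats dump above was logged as a single record with embedded newlines; syslog collectors that escape control characters render such records with #012 for LF and #033 for ESC, which is why the oslo-formatted nova_compute lines in this log end in a literal #033[00m (an ANSI color reset). A minimal Python sketch for decoding those escapes when post-processing the log; the unescape helper is our own, not part of any tooling shown here:

    import re

    OCTAL = re.compile(r"#([0-7]{3})")    # rsyslog-style escapes: #012 = LF, #033 = ESC
    ANSI = re.compile(r"\x1b\[[0-9;]*m")  # ANSI SGR sequences such as ESC[00m

    def unescape(line: str) -> str:
        """Decode #NNN octal escapes, then strip color/reset codes."""
        decoded = OCTAL.sub(lambda m: chr(int(m.group(1), 8)), line)
        return ANSI.sub("", decoded)
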
Feb 25 08:49:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3880: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:49:14 np0005629333 nova_compute[244014]: 2026-02-25 13:49:14.997 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:49:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
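
Every ~5 s the mon leader re-partitions its memory budget: the three allocations in the line above are the shares handed to incremental osdmaps, full osdmaps, and the key-value (RocksDB) cache, and together they should roughly exhaust cache_size. A quick arithmetic check with the exact figures logged (just verification, not mon code):

    cache_size = 1020054731
    inc_alloc = 343932928    # 328 MiB for incremental osdmaps
    full_alloc = 348127232   # 332 MiB for full osdmaps
    kv_alloc = 318767104     # 304 MiB, matching the 304.00 MB block cache capacity above

    allocated = inc_alloc + full_alloc + kv_alloc
    print(allocated, f"{allocated / cache_size:.1%}")  # 1010827264, ~99.1% of the budget
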
Feb 25 08:49:15 np0005629333 nova_compute[244014]: 2026-02-25 13:49:15.887 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:49:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3881: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:49:17 np0005629333 nova_compute[244014]: 2026-02-25 13:49:17.438 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:49:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3882: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:49:20 np0005629333 nova_compute[244014]: 2026-02-25 13:49:20.040 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:49:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:49:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3883: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:49:21 np0005629333 nova_compute[244014]: 2026-02-25 13:49:21.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:49:21 np0005629333 nova_compute[244014]: 2026-02-25 13:49:21.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 25 08:49:21 np0005629333 nova_compute[244014]: 2026-02-25 13:49:21.894 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb 25 08:49:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3884: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:49:22 np0005629333 nova_compute[244014]: 2026-02-25 13:49:22.442 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:49:22 np0005629333 nova_compute[244014]: 2026-02-25 13:49:22.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:49:22 np0005629333 nova_compute[244014]: 2026-02-25 13:49:22.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:49:22 np0005629333 nova_compute[244014]: 2026-02-25 13:49:22.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:49:22 np0005629333 nova_compute[244014]: 2026-02-25 13:49:22.911 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:49:22 np0005629333 nova_compute[244014]: 2026-02-25 13:49:22.911 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:49:22 np0005629333 nova_compute[244014]: 2026-02-25 13:49:22.912 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:49:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:49:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4003319857' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:49:23 np0005629333 nova_compute[244014]: 2026-02-25 13:49:23.473 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
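
For the disk side of the audit, the resource tracker shells out to ceph df rather than checking a local filesystem. The same call can be reproduced standalone; a minimal sketch assuming the client.openstack keyring referenced above is readable, with the standard ceph df JSON field names:

    import json
    import subprocess

    # Same command the resource tracker logged above
    raw = subprocess.check_output([
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    stats = json.loads(raw).get("stats", {})
    print(stats.get("total_bytes"), stats.get("total_used_bytes"),
          stats.get("total_avail_bytes"))
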
Feb 25 08:49:23 np0005629333 nova_compute[244014]: 2026-02-25 13:49:23.629 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:49:23 np0005629333 nova_compute[244014]: 2026-02-25 13:49:23.631 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3547MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:49:23 np0005629333 nova_compute[244014]: 2026-02-25 13:49:23.631 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:49:23 np0005629333 nova_compute[244014]: 2026-02-25 13:49:23.632 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:49:23 np0005629333 nova_compute[244014]: 2026-02-25 13:49:23.685 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:49:23 np0005629333 nova_compute[244014]: 2026-02-25 13:49:23.686 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:49:23 np0005629333 nova_compute[244014]: 2026-02-25 13:49:23.702 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:49:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:49:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1713972298' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:49:24 np0005629333 nova_compute[244014]: 2026-02-25 13:49:24.213 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:49:24 np0005629333 nova_compute[244014]: 2026-02-25 13:49:24.220 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:49:24 np0005629333 nova_compute[244014]: 2026-02-25 13:49:24.234 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
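
Placement turns that inventory into schedulable capacity as (total - reserved) * allocation_ratio, which is how this 8-vCPU, 59 GB host can accept 32 vCPUs of instances but only about 52 GB of disk. Reworking the arithmetic with the exact inventory from the line above:

    # Inventory reported for provider cb4dae98-2ac3-4218-9445-2320139e12ad
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
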
Feb 25 08:49:24 np0005629333 nova_compute[244014]: 2026-02-25 13:49:24.237 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:49:24 np0005629333 nova_compute[244014]: 2026-02-25 13:49:24.237 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:49:24 np0005629333 nova_compute[244014]: 2026-02-25 13:49:24.238 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:49:24 np0005629333 nova_compute[244014]: 2026-02-25 13:49:24.239 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 25 08:49:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3885: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:49:25 np0005629333 nova_compute[244014]: 2026-02-25 13:49:25.042 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:49:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:49:26 np0005629333 nova_compute[244014]: 2026-02-25 13:49:26.248 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:49:26 np0005629333 nova_compute[244014]: 2026-02-25 13:49:26.249 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:49:26 np0005629333 nova_compute[244014]: 2026-02-25 13:49:26.249 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:49:26 np0005629333 nova_compute[244014]: 2026-02-25 13:49:26.269 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:49:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3886: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:49:27 np0005629333 nova_compute[244014]: 2026-02-25 13:49:27.479 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:49:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3887: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:49:29 np0005629333 nova_compute[244014]: 2026-02-25 13:49:29.893 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:49:30 np0005629333 nova_compute[244014]: 2026-02-25 13:49:30.086 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:49:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:49:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3888: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:49:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:49:31
Feb 25 08:49:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:49:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:49:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'images', 'default.rgw.log', 'volumes', '.mgr', 'default.rgw.meta', 'vms']
Feb 25 08:49:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
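
That is one complete balancer round: upmap mode, a 5% ceiling on misplaced PGs, all eleven pools considered, and 0 of the up-to-10 candidate upmap changes prepared, i.e. placement is already even. The module's state can be read back with the balancer status command; a sketch, with key names as the balancer module reports them:

    import json
    import subprocess

    status = json.loads(subprocess.check_output(
        ["ceph", "balancer", "status", "--format=json"]))
    print(status.get("mode"), status.get("active"))  # expect: upmap True
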
Feb 25 08:49:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:49:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:49:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:49:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:49:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:49:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:49:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:49:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:49:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:49:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:49:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:49:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:49:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:49:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:49:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:49:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:49:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:49:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
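
This burst is the cephadm mgr module refreshing its host payloads: it regenerates a minimal ceph.conf, fetches the client.admin and client.bootstrap-osd keyrings, and persists its OSD-removal queue under a config-key. The CLI equivalents of the mon_command prefixes dispatched above:

    import subprocess

    # CLI forms of the commands the mgr sent to the mon
    for cmd in (
        ["ceph", "config", "generate-minimal-conf"],
        ["ceph", "auth", "get", "client.admin"],
        ["ceph", "auth", "get", "client.bootstrap-osd"],
    ):
        print(subprocess.check_output(cmd, text=True))
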
Feb 25 08:49:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3889: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:49:32 np0005629333 nova_compute[244014]: 2026-02-25 13:49:32.522 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:49:32 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:49:32 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:49:32 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:49:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:49:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:49:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:49:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:49:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:49:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:49:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:49:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:49:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:49:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
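
The rbd_support handlers reload their per-pool schedules for the vms, volumes, backups and images pools (an empty start_after means the reload starts from the beginning). What is configured can be spot-checked from the CLI; a sketch assuming the stock rbd scheduling subcommands:

    import subprocess

    # Pools taken from the load_schedules lines above
    for pool in ("vms", "volumes", "backups", "images"):
        subprocess.run(["rbd", "trash", "purge", "schedule", "ls", "--pool", pool])
        subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls", "--pool", pool])
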
Feb 25 08:49:32 np0005629333 podman[417511]: 2026-02-25 13:49:32.610624713 +0000 UTC m=+0.033186304 container create bbfa3a978b13ca2b7e5ae37ffe2b5790b0ed19645809c89a77399ef6e22435a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_liskov, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 08:49:32 np0005629333 systemd[1]: Started libpod-conmon-bbfa3a978b13ca2b7e5ae37ffe2b5790b0ed19645809c89a77399ef6e22435a3.scope.
Feb 25 08:49:32 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:49:32 np0005629333 podman[417511]: 2026-02-25 13:49:32.595628962 +0000 UTC m=+0.018190593 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:49:32 np0005629333 podman[417511]: 2026-02-25 13:49:32.705191098 +0000 UTC m=+0.127752719 container init bbfa3a978b13ca2b7e5ae37ffe2b5790b0ed19645809c89a77399ef6e22435a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_liskov, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 08:49:32 np0005629333 podman[417511]: 2026-02-25 13:49:32.713057624 +0000 UTC m=+0.135619235 container start bbfa3a978b13ca2b7e5ae37ffe2b5790b0ed19645809c89a77399ef6e22435a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:49:32 np0005629333 podman[417511]: 2026-02-25 13:49:32.716605696 +0000 UTC m=+0.139167297 container attach bbfa3a978b13ca2b7e5ae37ffe2b5790b0ed19645809c89a77399ef6e22435a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 08:49:32 np0005629333 systemd[1]: libpod-bbfa3a978b13ca2b7e5ae37ffe2b5790b0ed19645809c89a77399ef6e22435a3.scope: Deactivated successfully.
Feb 25 08:49:32 np0005629333 serene_liskov[417527]: 167 167
Feb 25 08:49:32 np0005629333 conmon[417527]: conmon bbfa3a978b13ca2b7e5a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bbfa3a978b13ca2b7e5ae37ffe2b5790b0ed19645809c89a77399ef6e22435a3.scope/container/memory.events
Feb 25 08:49:32 np0005629333 podman[417511]: 2026-02-25 13:49:32.719406666 +0000 UTC m=+0.141968267 container died bbfa3a978b13ca2b7e5ae37ffe2b5790b0ed19645809c89a77399ef6e22435a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_liskov, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 08:49:32 np0005629333 systemd[1]: var-lib-containers-storage-overlay-18122d899c3e0a774677654341a4559f8375506253ab79aba0b8d269d11bcb09-merged.mount: Deactivated successfully.
Feb 25 08:49:32 np0005629333 podman[417511]: 2026-02-25 13:49:32.75786813 +0000 UTC m=+0.180429731 container remove bbfa3a978b13ca2b7e5ae37ffe2b5790b0ed19645809c89a77399ef6e22435a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_liskov, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Feb 25 08:49:32 np0005629333 systemd[1]: libpod-conmon-bbfa3a978b13ca2b7e5ae37ffe2b5790b0ed19645809c89a77399ef6e22435a3.scope: Deactivated successfully.
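
That was one complete short-lived container lifecycle (create, init, start, attach, died, remove) in roughly 150 ms, wrapped in transient libpod-conmon scopes: cephadm probing the host through the ceph image, with "167 167" (the ceph uid/gid baked into the image) as the only output. The actual entrypoint and arguments are not logged; a plausible reconstruction of the pattern with the same image digest:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # One-shot container, removed on exit, like the serene_liskov run above.
    # 'stat -c "%u %g"' on the ceph data dir is a stand-in that also yields a
    # "uid gid" pair; the real command cephadm ran is not in the log.
    out = subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"], text=True)
    print(out.strip())  # e.g. "167 167"
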
Feb 25 08:49:32 np0005629333 podman[417549]: 2026-02-25 13:49:32.923442874 +0000 UTC m=+0.060650772 container create 5b8a1ee45682d1f24290650e1149a18be5fe49cdd786e9314a611bec199c1d12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cori, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 08:49:32 np0005629333 systemd[1]: Started libpod-conmon-5b8a1ee45682d1f24290650e1149a18be5fe49cdd786e9314a611bec199c1d12.scope.
Feb 25 08:49:32 np0005629333 podman[417549]: 2026-02-25 13:49:32.890536789 +0000 UTC m=+0.027744787 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:49:33 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:49:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c456f8ab265e334cbd6826e304b6512ba9c97e6e5ef0e699653fd9c0d16eacd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:49:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c456f8ab265e334cbd6826e304b6512ba9c97e6e5ef0e699653fd9c0d16eacd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:49:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c456f8ab265e334cbd6826e304b6512ba9c97e6e5ef0e699653fd9c0d16eacd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:49:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c456f8ab265e334cbd6826e304b6512ba9c97e6e5ef0e699653fd9c0d16eacd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:49:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c456f8ab265e334cbd6826e304b6512ba9c97e6e5ef0e699653fd9c0d16eacd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:49:33 np0005629333 podman[417549]: 2026-02-25 13:49:33.037771226 +0000 UTC m=+0.174979164 container init 5b8a1ee45682d1f24290650e1149a18be5fe49cdd786e9314a611bec199c1d12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:49:33 np0005629333 podman[417549]: 2026-02-25 13:49:33.052852659 +0000 UTC m=+0.190060557 container start 5b8a1ee45682d1f24290650e1149a18be5fe49cdd786e9314a611bec199c1d12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cori, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 08:49:33 np0005629333 podman[417549]: 2026-02-25 13:49:33.056671949 +0000 UTC m=+0.193879847 container attach 5b8a1ee45682d1f24290650e1149a18be5fe49cdd786e9314a611bec199c1d12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cori, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 08:49:33 np0005629333 beautiful_cori[417565]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:49:33 np0005629333 beautiful_cori[417565]: --> All data devices are unavailable
Feb 25 08:49:33 np0005629333 systemd[1]: libpod-5b8a1ee45682d1f24290650e1149a18be5fe49cdd786e9314a611bec199c1d12.scope: Deactivated successfully.
Feb 25 08:49:33 np0005629333 podman[417549]: 2026-02-25 13:49:33.50048662 +0000 UTC m=+0.637694548 container died 5b8a1ee45682d1f24290650e1149a18be5fe49cdd786e9314a611bec199c1d12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cori, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:49:33 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7c456f8ab265e334cbd6826e304b6512ba9c97e6e5ef0e699653fd9c0d16eacd-merged.mount: Deactivated successfully.
Feb 25 08:49:33 np0005629333 podman[417549]: 2026-02-25 13:49:33.576992627 +0000 UTC m=+0.714200555 container remove 5b8a1ee45682d1f24290650e1149a18be5fe49cdd786e9314a611bec199c1d12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_cori, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:49:33 np0005629333 systemd[1]: libpod-conmon-5b8a1ee45682d1f24290650e1149a18be5fe49cdd786e9314a611bec199c1d12.scope: Deactivated successfully.
Feb 25 08:49:33 np0005629333 nova_compute[244014]: 2026-02-25 13:49:33.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:49:33 np0005629333 nova_compute[244014]: 2026-02-25 13:49:33.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
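
_reclaim_queued_deletes short-circuits because reclaim_instance_interval is at its default of 0: instance deletes are immediate, and nothing is ever queued for reclamation. Opting in to soft delete would look like this in nova.conf (illustrative value):

    [DEFAULT]
    # Keep soft-deleted instances this many seconds before the periodic task
    # purges them; 0 (the default) disables soft delete entirely.
    reclaim_instance_interval = 3600
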
Feb 25 08:49:34 np0005629333 podman[417659]: 2026-02-25 13:49:34.00345536 +0000 UTC m=+0.039236097 container create 21f5c8760c590aeb11d87dae9bbb35555dad0a00898c7ecdea8747b264b07cee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_lamarr, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 08:49:34 np0005629333 systemd[1]: Started libpod-conmon-21f5c8760c590aeb11d87dae9bbb35555dad0a00898c7ecdea8747b264b07cee.scope.
Feb 25 08:49:34 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:49:34 np0005629333 podman[417659]: 2026-02-25 13:49:33.987127101 +0000 UTC m=+0.022907848 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:49:34 np0005629333 podman[417659]: 2026-02-25 13:49:34.088833631 +0000 UTC m=+0.124614358 container init 21f5c8760c590aeb11d87dae9bbb35555dad0a00898c7ecdea8747b264b07cee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_lamarr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 08:49:34 np0005629333 podman[417659]: 2026-02-25 13:49:34.094831904 +0000 UTC m=+0.130612601 container start 21f5c8760c590aeb11d87dae9bbb35555dad0a00898c7ecdea8747b264b07cee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_lamarr, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 08:49:34 np0005629333 podman[417659]: 2026-02-25 13:49:34.098037986 +0000 UTC m=+0.133818703 container attach 21f5c8760c590aeb11d87dae9bbb35555dad0a00898c7ecdea8747b264b07cee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:49:34 np0005629333 fervent_lamarr[417675]: 167 167
Feb 25 08:49:34 np0005629333 systemd[1]: libpod-21f5c8760c590aeb11d87dae9bbb35555dad0a00898c7ecdea8747b264b07cee.scope: Deactivated successfully.
Feb 25 08:49:34 np0005629333 podman[417659]: 2026-02-25 13:49:34.09993111 +0000 UTC m=+0.135711847 container died 21f5c8760c590aeb11d87dae9bbb35555dad0a00898c7ecdea8747b264b07cee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Feb 25 08:49:34 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7f2e79767b05bd29bde5f655772d3ce7fb8d741d9a358c96e0b8975366026efe-merged.mount: Deactivated successfully.
Feb 25 08:49:34 np0005629333 podman[417659]: 2026-02-25 13:49:34.144562031 +0000 UTC m=+0.180342728 container remove 21f5c8760c590aeb11d87dae9bbb35555dad0a00898c7ecdea8747b264b07cee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_lamarr, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Feb 25 08:49:34 np0005629333 systemd[1]: libpod-conmon-21f5c8760c590aeb11d87dae9bbb35555dad0a00898c7ecdea8747b264b07cee.scope: Deactivated successfully.
Feb 25 08:49:34 np0005629333 podman[417698]: 2026-02-25 13:49:34.308783236 +0000 UTC m=+0.045966571 container create 468aa7d511cb967bbdc558795cf6d265ce157ca07097883891aa622e683a2efe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_snyder, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Feb 25 08:49:34 np0005629333 systemd[1]: Started libpod-conmon-468aa7d511cb967bbdc558795cf6d265ce157ca07097883891aa622e683a2efe.scope.
Feb 25 08:49:34 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:49:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c962d65377f6b14f09a14c7f62a4739144c5b48f1016c09b0184342679a3f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:49:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c962d65377f6b14f09a14c7f62a4739144c5b48f1016c09b0184342679a3f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:49:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c962d65377f6b14f09a14c7f62a4739144c5b48f1016c09b0184342679a3f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:49:34 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4c962d65377f6b14f09a14c7f62a4739144c5b48f1016c09b0184342679a3f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:49:34 np0005629333 podman[417698]: 2026-02-25 13:49:34.290525102 +0000 UTC m=+0.027708467 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:49:34 np0005629333 podman[417698]: 2026-02-25 13:49:34.399630554 +0000 UTC m=+0.136813899 container init 468aa7d511cb967bbdc558795cf6d265ce157ca07097883891aa622e683a2efe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:49:34 np0005629333 podman[417698]: 2026-02-25 13:49:34.412939316 +0000 UTC m=+0.150122651 container start 468aa7d511cb967bbdc558795cf6d265ce157ca07097883891aa622e683a2efe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_snyder, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 08:49:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3890: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:49:34 np0005629333 podman[417698]: 2026-02-25 13:49:34.416633982 +0000 UTC m=+0.153817337 container attach 468aa7d511cb967bbdc558795cf6d265ce157ca07097883891aa622e683a2efe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
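
The JSON that stoic_snyder prints next has the shape of ceph-volume lvm list --format json output: a map of OSD id to its logical volumes, in which the flat lv_tags string mirrors the structured tags object. A small helper for the flat form (our own; it assumes values never contain commas, which holds for the tags below):

    def parse_lv_tags(lv_tags: str) -> dict:
        """Split a 'k=v,k=v' lv_tags string, e.g. 'ceph.osd_id=0,...', into a dict."""
        return dict(item.split("=", 1) for item in lv_tags.split(",") if item)

    tags = parse_lv_tags(
        "ceph.osd_id=0,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.type=block")
    print(tags["ceph.osd_id"], tags["ceph.type"])  # 0 block
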
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]: {
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:    "0": [
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:        {
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "devices": [
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "/dev/loop3"
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            ],
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "lv_name": "ceph_lv0",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "lv_size": "21470642176",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "name": "ceph_lv0",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "tags": {
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.cluster_name": "ceph",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.crush_device_class": "",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.encrypted": "0",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.objectstore": "bluestore",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.osd_id": "0",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.type": "block",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.vdo": "0",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.with_tpm": "0"
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            },
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "type": "block",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "vg_name": "ceph_vg0"
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:        }
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:    ],
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:    "1": [
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:        {
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "devices": [
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "/dev/loop4"
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            ],
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "lv_name": "ceph_lv1",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "lv_size": "21470642176",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "name": "ceph_lv1",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "tags": {
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.cluster_name": "ceph",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.crush_device_class": "",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.encrypted": "0",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.objectstore": "bluestore",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.osd_id": "1",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.type": "block",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.vdo": "0",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.with_tpm": "0"
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            },
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "type": "block",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "vg_name": "ceph_vg1"
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:        }
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:    ],
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:    "2": [
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:        {
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "devices": [
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "/dev/loop5"
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            ],
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "lv_name": "ceph_lv2",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "lv_size": "21470642176",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "name": "ceph_lv2",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "tags": {
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.cluster_name": "ceph",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.crush_device_class": "",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.encrypted": "0",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.objectstore": "bluestore",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.osd_id": "2",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.type": "block",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.vdo": "0",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:                "ceph.with_tpm": "0"
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            },
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "type": "block",
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:            "vg_name": "ceph_vg2"
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:        }
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]:    ]
Feb 25 08:49:34 np0005629333 stoic_snyder[417715]: }
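[annotation] The JSON block above is what appears to be the output of "ceph-volume lvm list --format json", run by cephadm inside the short-lived stoic_snyder container: each top-level key is an OSD id ("0", "1", "2"), and the LVM tags bind each logical volume to its OSD fsid, objectstore, and drive group. A minimal sketch of consuming that report (the filename "lvm_list.json" is an assumed capture of the output, not something from the log):

    import json

    # Load a captured `ceph-volume lvm list --format json` report;
    # top-level keys are OSD ids, values are lists of LV records.
    with open("lvm_list.json") as f:
        report = json.load(f)

    for osd_id, lvs in sorted(report.items()):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: devices={lv['devices']}, "
                  f"lv_path={lv['lv_path']}, "
                  f"osd_fsid={tags['ceph.osd_fsid']}, "
                  f"objectstore={tags['ceph.objectstore']}")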
Feb 25 08:49:34 np0005629333 systemd[1]: libpod-468aa7d511cb967bbdc558795cf6d265ce157ca07097883891aa622e683a2efe.scope: Deactivated successfully.
Feb 25 08:49:34 np0005629333 podman[417698]: 2026-02-25 13:49:34.697207847 +0000 UTC m=+0.434391202 container died 468aa7d511cb967bbdc558795cf6d265ce157ca07097883891aa622e683a2efe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 08:49:34 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f4c962d65377f6b14f09a14c7f62a4739144c5b48f1016c09b0184342679a3f8-merged.mount: Deactivated successfully.
Feb 25 08:49:34 np0005629333 podman[417698]: 2026-02-25 13:49:34.739816841 +0000 UTC m=+0.477000176 container remove 468aa7d511cb967bbdc558795cf6d265ce157ca07097883891aa622e683a2efe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:49:34 np0005629333 systemd[1]: libpod-conmon-468aa7d511cb967bbdc558795cf6d265ce157ca07097883891aa622e683a2efe.scope: Deactivated successfully.
Feb 25 08:49:35 np0005629333 nova_compute[244014]: 2026-02-25 13:49:35.133 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:49:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:49:35 np0005629333 podman[417799]: 2026-02-25 13:49:35.250026498 +0000 UTC m=+0.053829687 container create 10fddffa63d592be2fe10b76735c4b40c917f121c5a3c0171a385ea9191233a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_vaughan, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:49:35 np0005629333 systemd[1]: Started libpod-conmon-10fddffa63d592be2fe10b76735c4b40c917f121c5a3c0171a385ea9191233a3.scope.
Feb 25 08:49:35 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:49:35 np0005629333 podman[417799]: 2026-02-25 13:49:35.23062206 +0000 UTC m=+0.034425269 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:49:35 np0005629333 podman[417799]: 2026-02-25 13:49:35.331397884 +0000 UTC m=+0.135201073 container init 10fddffa63d592be2fe10b76735c4b40c917f121c5a3c0171a385ea9191233a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_vaughan, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Feb 25 08:49:35 np0005629333 podman[417799]: 2026-02-25 13:49:35.337430767 +0000 UTC m=+0.141233956 container start 10fddffa63d592be2fe10b76735c4b40c917f121c5a3c0171a385ea9191233a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 08:49:35 np0005629333 podman[417799]: 2026-02-25 13:49:35.340378251 +0000 UTC m=+0.144181440 container attach 10fddffa63d592be2fe10b76735c4b40c917f121c5a3c0171a385ea9191233a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_vaughan, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 08:49:35 np0005629333 trusting_vaughan[417815]: 167 167
Feb 25 08:49:35 np0005629333 systemd[1]: libpod-10fddffa63d592be2fe10b76735c4b40c917f121c5a3c0171a385ea9191233a3.scope: Deactivated successfully.
Feb 25 08:49:35 np0005629333 podman[417799]: 2026-02-25 13:49:35.342682298 +0000 UTC m=+0.146485547 container died 10fddffa63d592be2fe10b76735c4b40c917f121c5a3c0171a385ea9191233a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 08:49:35 np0005629333 systemd[1]: var-lib-containers-storage-overlay-03a6cba1b78c944d9fccb3909d431ed03821f52fc5a82e714c37d4d850b1d9f4-merged.mount: Deactivated successfully.
Feb 25 08:49:35 np0005629333 podman[417799]: 2026-02-25 13:49:35.382064568 +0000 UTC m=+0.185867757 container remove 10fddffa63d592be2fe10b76735c4b40c917f121c5a3c0171a385ea9191233a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Feb 25 08:49:35 np0005629333 systemd[1]: libpod-conmon-10fddffa63d592be2fe10b76735c4b40c917f121c5a3c0171a385ea9191233a3.scope: Deactivated successfully.
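[annotation] stoic_snyder, trusting_vaughan, and elated_tharp are podman's auto-generated names for what are evidently cephadm one-shot helper containers: each runs the full create, init, start, attach, died, remove cycle in well under a second. The monotonic offsets (m=+...) in the podman events let the lifetime be read off directly; for trusting_vaughan, using the create and died offsets logged above:

    # Lifetime of the one-shot trusting_vaughan container, from the
    # podman monotonic offsets above (create vs. died).
    create, died = 0.053829687, 0.146485547
    print(f"container ran for ~{(died - create) * 1000:.0f} ms")  # ~93 ms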
Feb 25 08:49:35 np0005629333 podman[417840]: 2026-02-25 13:49:35.559707598 +0000 UTC m=+0.058921742 container create 469870d46825347d77d0e084d52f3ea527f9fc4557461fca38b979f5d0766dd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_tharp, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 08:49:35 np0005629333 systemd[1]: Started libpod-conmon-469870d46825347d77d0e084d52f3ea527f9fc4557461fca38b979f5d0766dd9.scope.
Feb 25 08:49:35 np0005629333 podman[417840]: 2026-02-25 13:49:35.535620317 +0000 UTC m=+0.034834541 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:49:35 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:49:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebfe8fb2dd12dcfa4b0c7a94fe2e30221d42bc2016e89496aa9d4b309b755e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:49:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebfe8fb2dd12dcfa4b0c7a94fe2e30221d42bc2016e89496aa9d4b309b755e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:49:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebfe8fb2dd12dcfa4b0c7a94fe2e30221d42bc2016e89496aa9d4b309b755e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:49:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cebfe8fb2dd12dcfa4b0c7a94fe2e30221d42bc2016e89496aa9d4b309b755e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:49:35 np0005629333 podman[417840]: 2026-02-25 13:49:35.65623715 +0000 UTC m=+0.155451304 container init 469870d46825347d77d0e084d52f3ea527f9fc4557461fca38b979f5d0766dd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_tharp, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 08:49:35 np0005629333 podman[417840]: 2026-02-25 13:49:35.669108449 +0000 UTC m=+0.168322583 container start 469870d46825347d77d0e084d52f3ea527f9fc4557461fca38b979f5d0766dd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:49:35 np0005629333 podman[417840]: 2026-02-25 13:49:35.674180345 +0000 UTC m=+0.173394519 container attach 469870d46825347d77d0e084d52f3ea527f9fc4557461fca38b979f5d0766dd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 08:49:36 np0005629333 lvm[417936]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:49:36 np0005629333 lvm[417936]: VG ceph_vg0 finished
Feb 25 08:49:36 np0005629333 lvm[417937]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:49:36 np0005629333 lvm[417937]: VG ceph_vg1 finished
Feb 25 08:49:36 np0005629333 lvm[417939]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:49:36 np0005629333 lvm[417939]: VG ceph_vg2 finished
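[annotation] The lvm[...] messages are event-driven autoactivation: once udev has observed every PV of a volume group, pvscan marks the VG complete and activates its LVs, which is why each single-PV ceph_vg* group is reported complete and finished in one step. Roughly the same step by hand, as a sketch (device name taken from the log; requires root, and the exact udev-invoked command line may differ):

    import subprocess

    # Re-run the autoactivation step performed when /dev/loop3 appears.
    subprocess.run(
        ["pvscan", "--cache", "--activate", "ay", "/dev/loop3"],
        check=True,
    )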
Feb 25 08:49:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3891: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:49:36 np0005629333 elated_tharp[417857]: {}
Feb 25 08:49:36 np0005629333 systemd[1]: libpod-469870d46825347d77d0e084d52f3ea527f9fc4557461fca38b979f5d0766dd9.scope: Deactivated successfully.
Feb 25 08:49:36 np0005629333 systemd[1]: libpod-469870d46825347d77d0e084d52f3ea527f9fc4557461fca38b979f5d0766dd9.scope: Consumed 1.282s CPU time.
Feb 25 08:49:36 np0005629333 podman[417840]: 2026-02-25 13:49:36.529056388 +0000 UTC m=+1.028270562 container died 469870d46825347d77d0e084d52f3ea527f9fc4557461fca38b979f5d0766dd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:49:36 np0005629333 systemd[1]: var-lib-containers-storage-overlay-cebfe8fb2dd12dcfa4b0c7a94fe2e30221d42bc2016e89496aa9d4b309b755e3-merged.mount: Deactivated successfully.
Feb 25 08:49:36 np0005629333 podman[417840]: 2026-02-25 13:49:36.591095769 +0000 UTC m=+1.090309933 container remove 469870d46825347d77d0e084d52f3ea527f9fc4557461fca38b979f5d0766dd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 08:49:36 np0005629333 systemd[1]: libpod-conmon-469870d46825347d77d0e084d52f3ea527f9fc4557461fca38b979f5d0766dd9.scope: Deactivated successfully.
Feb 25 08:49:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:49:36 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:49:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:49:36 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:49:36 np0005629333 nova_compute[244014]: 2026-02-25 13:49:36.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:49:37 np0005629333 nova_compute[244014]: 2026-02-25 13:49:37.572 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:49:37 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:49:37 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:49:37 np0005629333 nova_compute[244014]: 2026-02-25 13:49:37.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:49:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3892: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:49:40 np0005629333 nova_compute[244014]: 2026-02-25 13:49:40.135 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:49:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:49:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3893: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:49:40 np0005629333 nova_compute[244014]: 2026-02-25 13:49:40.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:49:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3894: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:49:42 np0005629333 nova_compute[244014]: 2026-02-25 13:49:42.624 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:49:42 np0005629333 podman[417982]: 2026-02-25 13:49:42.73693681 +0000 UTC m=+0.071844194 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 08:49:42 np0005629333 podman[417983]: 2026-02-25 13:49:42.793989198 +0000 UTC m=+0.121635003 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.build-date=20260223, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
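[annotation] The two health_status=healthy events are podman's periodic healthchecks for the ovn_metadata_agent and ovn_controller containers, driven by the 'healthcheck' stanza visible in config_data ('test': '/openstack/healthcheck', mounted read-only into the container). The same probe can be fired on demand; a sketch, with container names taken from the log:

    import subprocess

    # Trigger the probe podman's timer runs; exit code 0 means healthy.
    for name in ("ovn_metadata_agent", "ovn_controller"):
        r = subprocess.run(["podman", "healthcheck", "run", name])
        print(name, "healthy" if r.returncode == 0 else "unhealthy")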
Feb 25 08:49:42 np0005629333 nova_compute[244014]: 2026-02-25 13:49:42.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:49:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:49:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:49:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:49:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:49:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:49:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:49:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:49:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:49:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:49:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:49:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:49:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:49:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:49:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:49:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:49:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:49:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:49:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:49:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:49:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:49:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:49:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:49:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
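[annotation] Each pg_autoscaler line applies the same formula per pool: pg target = capacity_ratio x bias x PG budget, then quantized to a power of two. From the figures logged here the budget works out to exactly 300 PGs, consistent with three OSDs at the default mon_target_pg_per_osd=100, though that decomposition is an inference from the numbers rather than something the log states. Reproducing the logged targets:

    # pg_target = capacity_ratio * bias * budget; the budget of 300 is
    # inferred from the log (e.g. 7.1857e-06 * 300 = 0.0021557).
    PG_BUDGET = 300
    pools = {
        ".mgr":               (7.185749983720779e-06,  1.0),
        "vms":                (1.73878357684759e-05,   1.0),
        "images":             (0.0006714637386478266,  1.0),
        "cephfs.cephfs.meta": (1.3916366864300228e-06, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(f"{name}: pg target {ratio * bias * PG_BUDGET}")

The printed values match the log to full precision; the quantized results (1, 32, 32, 16) then come from rounding to a power of two without shrinking existing pools.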
Feb 25 08:49:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3895: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:49:44 np0005629333 nova_compute[244014]: 2026-02-25 13:49:44.650 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:49:45 np0005629333 nova_compute[244014]: 2026-02-25 13:49:45.186 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:49:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:49:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3896: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:49:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:49:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/558277791' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:49:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:49:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/558277791' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
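[annotation] These audit entries show client.openstack (the OpenStack storage client at 192.168.122.10) polling "df" and "osd pool get-quota" for the volumes pool, i.e. routine capacity reporting against the monitor. The same commands can be issued through the librados Python binding; a sketch, where the conffile path and client name are taken as assumptions from the log:

    import json
    import rados

    # Issue the monitor commands seen in the audit log above.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    for cmd in ({"prefix": "df", "format": "json"},
                {"prefix": "osd pool get-quota", "pool": "volumes",
                 "format": "json"}):
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
        print(cmd["prefix"], "->", ret, out[:80])
    cluster.shutdown()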
Feb 25 08:49:47 np0005629333 nova_compute[244014]: 2026-02-25 13:49:47.626 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:49:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3897: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:49:50 np0005629333 nova_compute[244014]: 2026-02-25 13:49:50.186 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:49:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:49:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3898: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:49:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3899: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:49:52 np0005629333 nova_compute[244014]: 2026-02-25 13:49:52.673 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:49:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3900: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:49:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:49:55.095 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:49:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:49:55.095 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:49:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:49:55.096 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:49:55 np0005629333 nova_compute[244014]: 2026-02-25 13:49:55.190 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:49:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:49:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3901: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:49:57 np0005629333 nova_compute[244014]: 2026-02-25 13:49:57.676 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:49:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3902: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:00 np0005629333 nova_compute[244014]: 2026-02-25 13:50:00.190 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:50:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:50:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3903: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:50:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:50:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:50:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:50:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:50:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:50:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3904: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:02 np0005629333 nova_compute[244014]: 2026-02-25 13:50:02.680 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:50:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3905: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:50:05 np0005629333 nova_compute[244014]: 2026-02-25 13:50:05.240 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:50:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3906: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:07 np0005629333 nova_compute[244014]: 2026-02-25 13:50:07.732 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:50:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3907: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #195. Immutable memtables: 0.
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.450610) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 195
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027409450665, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 1815, "num_deletes": 251, "total_data_size": 3053461, "memory_usage": 3101296, "flush_reason": "Manual Compaction"}
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #196: started
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027409471029, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 196, "file_size": 2991160, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79918, "largest_seqno": 81732, "table_properties": {"data_size": 2982765, "index_size": 5208, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16852, "raw_average_key_size": 20, "raw_value_size": 2966066, "raw_average_value_size": 3522, "num_data_blocks": 232, "num_entries": 842, "num_filter_entries": 842, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772027211, "oldest_key_time": 1772027211, "file_creation_time": 1772027409, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 196, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 20510 microseconds, and 9534 cpu microseconds.
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.471112) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #196: 2991160 bytes OK
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.471151) [db/memtable_list.cc:519] [default] Level-0 commit table #196 started
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.473479) [db/memtable_list.cc:722] [default] Level-0 commit table #196: memtable #1 done
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.473506) EVENT_LOG_v1 {"time_micros": 1772027409473498, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.473539) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 3045737, prev total WAL file size 3045737, number of live WAL files 2.
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000192.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.474515) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [196(2921KB)], [194(9109KB)]
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027409474597, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [196], "files_L6": [194], "score": -1, "input_data_size": 12318839, "oldest_snapshot_seqno": -1}
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #197: 9509 keys, 10570270 bytes, temperature: kUnknown
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027409536366, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 197, "file_size": 10570270, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10512081, "index_size": 33308, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23813, "raw_key_size": 250709, "raw_average_key_size": 26, "raw_value_size": 10347423, "raw_average_value_size": 1088, "num_data_blocks": 1277, "num_entries": 9509, "num_filter_entries": 9509, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772027409, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.536843) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 10570270 bytes
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.538040) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 199.1 rd, 170.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 8.9 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(7.7) write-amplify(3.5) OK, records in: 10026, records dropped: 517 output_compression: NoCompression
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.538065) EVENT_LOG_v1 {"time_micros": 1772027409538053, "job": 122, "event": "compaction_finished", "compaction_time_micros": 61883, "compaction_time_cpu_micros": 26509, "output_level": 6, "num_output_files": 1, "total_output_size": 10570270, "num_input_records": 10026, "num_output_records": 9509, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000196.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027409538751, "job": 122, "event": "table_file_deletion", "file_number": 196}
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027409540423, "job": 122, "event": "table_file_deletion", "file_number": 194}
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.474378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.540499) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.540507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.540510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.540513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:50:09 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:50:09.540515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
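[annotation] The flush-plus-compaction cycle above is the monitor's periodic manual compaction of its store.db: JOB 121 flushes the memtable to L0 table 196, JOB 122 merges that with the existing L6 table 194 into table 197, and both inputs plus the old WAL are deleted. The amplification figures in the JOB 122 summary follow directly from the byte counts in its events:

    # Amplification for JOB 122, from the compaction events above.
    l0_in    = 2_991_160    # file 196, the freshly flushed L0 table
    total_in = 12_318_839   # input_data_size: file 196 + L6 file 194
    out      = 10_570_270   # total_output_size: new L6 file 197

    print(f"write-amplify      {out / l0_in:.1f}")               # 3.5
    print(f"read-write-amplify {(total_in + out) / l0_in:.1f}")  # 7.7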
Feb 25 08:50:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:50:10 np0005629333 nova_compute[244014]: 2026-02-25 13:50:10.242 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:50:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3908: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3909: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:12 np0005629333 nova_compute[244014]: 2026-02-25 13:50:12.779 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:50:13 np0005629333 podman[418029]: 2026-02-25 13:50:13.739875832 +0000 UTC m=+0.073547892 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Feb 25 08:50:13 np0005629333 podman[418030]: 2026-02-25 13:50:13.776642608 +0000 UTC m=+0.104250734 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:50:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3910: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:50:15 np0005629333 nova_compute[244014]: 2026-02-25 13:50:15.245 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 08:50:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 7200.1 total, 600.0 interval
    Cumulative writes: 47K writes, 186K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
    Cumulative WAL: 47K writes, 17K syncs, 2.73 writes per sync, written: 0.19 GB, 0.03 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s
    Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
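
The derived figures in the dump are simple ratios of the cumulative counters; a quick check with the rounded values as logged (the raw counts behind "47K" and "17K" explain the small drift from the printed 2.73):

    uptime_s = 7200.1          # "Uptime(secs): 7200.1 total"
    wal_writes = 47_000        # "Cumulative WAL: 47K writes"
    wal_syncs = 17_000         # "... 17K syncs"
    ingest_gb = 0.19           # "ingest: 0.19 GB"

    print(wal_writes / wal_syncs)        # ~2.76 writes per sync (logged: 2.73)
    print(ingest_gb * 1024 / uptime_s)   # ~0.027 MB/s (logged: 0.03 MB/s)
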
Feb 25 08:50:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3911: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:17 np0005629333 nova_compute[244014]: 2026-02-25 13:50:17.782 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:50:17 np0005629333 nova_compute[244014]: 2026-02-25 13:50:17.891 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:50:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3912: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 08:50:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 7200.2 total, 600.0 interval
    Cumulative writes: 49K writes, 195K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
    Cumulative WAL: 49K writes, 18K syncs, 2.74 writes per sync, written: 0.19 GB, 0.03 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
    Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 08:50:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:50:20 np0005629333 nova_compute[244014]: 2026-02-25 13:50:20.245 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:50:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3913: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3914: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:22 np0005629333 nova_compute[244014]: 2026-02-25 13:50:22.821 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:50:22 np0005629333 nova_compute[244014]: 2026-02-25 13:50:22.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:50:22 np0005629333 nova_compute[244014]: 2026-02-25 13:50:22.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:50:22 np0005629333 nova_compute[244014]: 2026-02-25 13:50:22.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:50:22 np0005629333 nova_compute[244014]: 2026-02-25 13:50:22.906 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:50:22 np0005629333 nova_compute[244014]: 2026-02-25 13:50:22.906 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:50:22 np0005629333 nova_compute[244014]: 2026-02-25 13:50:22.906 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:50:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:50:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1015985271' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:50:23 np0005629333 nova_compute[244014]: 2026-02-25 13:50:23.517 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
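
The resource audit shells out to the same ceph CLI an operator would use. A minimal sketch that reproduces the probe and reads the cluster totals, assuming the usual ceph df JSON layout with a top-level "stats" object of byte counters (the field names are not shown in this log):

    import json
    import subprocess

    # Mirror the command line logged by oslo_concurrency.processutils above.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)["stats"]   # assumed schema: total_* byte counters

    print(stats["total_avail_bytes"] / 1024**3, "GiB free of",
          stats["total_bytes"] / 1024**3, "GiB")
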
Feb 25 08:50:23 np0005629333 nova_compute[244014]: 2026-02-25 13:50:23.704 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:50:23 np0005629333 nova_compute[244014]: 2026-02-25 13:50:23.706 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3538MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:50:23 np0005629333 nova_compute[244014]: 2026-02-25 13:50:23.707 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:50:23 np0005629333 nova_compute[244014]: 2026-02-25 13:50:23.707 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:50:23 np0005629333 nova_compute[244014]: 2026-02-25 13:50:23.880 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:50:23 np0005629333 nova_compute[244014]: 2026-02-25 13:50:23.882 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:50:23 np0005629333 nova_compute[244014]: 2026-02-25 13:50:23.923 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:50:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3915: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:50:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1033871979' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:50:24 np0005629333 nova_compute[244014]: 2026-02-25 13:50:24.471 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:50:24 np0005629333 nova_compute[244014]: 2026-02-25 13:50:24.476 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:50:24 np0005629333 nova_compute[244014]: 2026-02-25 13:50:24.496 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
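
Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio per resource class, which is why the 8 physical vCPUs above advertise as 32 schedulable ones:

    # Values copied from the inventory dict logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
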
Feb 25 08:50:24 np0005629333 nova_compute[244014]: 2026-02-25 13:50:24.498 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:50:24 np0005629333 nova_compute[244014]: 2026-02-25 13:50:24.498 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 08:50:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 7200.6 total, 600.0 interval
    Cumulative writes: 38K writes, 150K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
    Cumulative WAL: 38K writes, 13K syncs, 2.76 writes per sync, written: 0.15 GB, 0.02 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
    Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 08:50:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:50:25 np0005629333 nova_compute[244014]: 2026-02-25 13:50:25.247 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:50:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3916: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:26 np0005629333 nova_compute[244014]: 2026-02-25 13:50:26.499 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:50:26 np0005629333 nova_compute[244014]: 2026-02-25 13:50:26.500 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:50:26 np0005629333 nova_compute[244014]: 2026-02-25 13:50:26.500 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:50:26 np0005629333 nova_compute[244014]: 2026-02-25 13:50:26.524 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:50:27 np0005629333 ceph-mgr[76641]: [devicehealth INFO root] Check health
Feb 25 08:50:27 np0005629333 nova_compute[244014]: 2026-02-25 13:50:27.859 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:50:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3917: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:29 np0005629333 nova_compute[244014]: 2026-02-25 13:50:29.896 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:50:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:50:30 np0005629333 nova_compute[244014]: 2026-02-25 13:50:30.250 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:50:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3918: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:50:31
Feb 25 08:50:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:50:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:50:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['.rgw.root', 'backups', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'volumes', 'vms']
Feb 25 08:50:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:50:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:50:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:50:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:50:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:50:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:50:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:50:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3919: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:50:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:50:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:50:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:50:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:50:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:50:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:50:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:50:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:50:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:50:32 np0005629333 nova_compute[244014]: 2026-02-25 13:50:32.864 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:50:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3920: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:34 np0005629333 nova_compute[244014]: 2026-02-25 13:50:34.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:50:34 np0005629333 nova_compute[244014]: 2026-02-25 13:50:34.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 08:50:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:50:35 np0005629333 nova_compute[244014]: 2026-02-25 13:50:35.288 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:50:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3921: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:50:37 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:50:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:50:37 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:50:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:50:37 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:50:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:50:37 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:50:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:50:37 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:50:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:50:37 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:50:37 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:50:37 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:50:37 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:50:37 np0005629333 nova_compute[244014]: 2026-02-25 13:50:37.909 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:50:37 np0005629333 podman[418262]: 2026-02-25 13:50:37.947255404 +0000 UTC m=+0.113691275 container create 15f89afc2bca87004dc5e8d25804b9309d7d7b345f96baae506a1e4eea1a6b17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_wu, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 08:50:37 np0005629333 podman[418262]: 2026-02-25 13:50:37.858076904 +0000 UTC m=+0.024512815 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:50:37 np0005629333 systemd[1]: Started libpod-conmon-15f89afc2bca87004dc5e8d25804b9309d7d7b345f96baae506a1e4eea1a6b17.scope.
Feb 25 08:50:38 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:50:38 np0005629333 podman[418262]: 2026-02-25 13:50:38.049002606 +0000 UTC m=+0.215438497 container init 15f89afc2bca87004dc5e8d25804b9309d7d7b345f96baae506a1e4eea1a6b17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_wu, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 08:50:38 np0005629333 podman[418262]: 2026-02-25 13:50:38.058770956 +0000 UTC m=+0.225206827 container start 15f89afc2bca87004dc5e8d25804b9309d7d7b345f96baae506a1e4eea1a6b17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_wu, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:50:38 np0005629333 podman[418262]: 2026-02-25 13:50:38.0620609 +0000 UTC m=+0.228496771 container attach 15f89afc2bca87004dc5e8d25804b9309d7d7b345f96baae506a1e4eea1a6b17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_wu, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 08:50:38 np0005629333 angry_wu[418279]: 167 167
Feb 25 08:50:38 np0005629333 systemd[1]: libpod-15f89afc2bca87004dc5e8d25804b9309d7d7b345f96baae506a1e4eea1a6b17.scope: Deactivated successfully.
Feb 25 08:50:38 np0005629333 podman[418262]: 2026-02-25 13:50:38.06830552 +0000 UTC m=+0.234741411 container died 15f89afc2bca87004dc5e8d25804b9309d7d7b345f96baae506a1e4eea1a6b17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_wu, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:50:38 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d957ff41396857cfafcfc983ab6ad07c98c701bff37145827c977431279970bd-merged.mount: Deactivated successfully.
Feb 25 08:50:38 np0005629333 podman[418262]: 2026-02-25 13:50:38.125504072 +0000 UTC m=+0.291939943 container remove 15f89afc2bca87004dc5e8d25804b9309d7d7b345f96baae506a1e4eea1a6b17 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_wu, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 08:50:38 np0005629333 systemd[1]: libpod-conmon-15f89afc2bca87004dc5e8d25804b9309d7d7b345f96baae506a1e4eea1a6b17.scope: Deactivated successfully.
Feb 25 08:50:38 np0005629333 podman[418302]: 2026-02-25 13:50:38.294237436 +0000 UTC m=+0.057052749 container create f9c2d9911f9a82819d7dc72c88cd33d43434a3e4419bb543fc1acee822ed4c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_chaum, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 08:50:38 np0005629333 systemd[1]: Started libpod-conmon-f9c2d9911f9a82819d7dc72c88cd33d43434a3e4419bb543fc1acee822ed4c06.scope.
Feb 25 08:50:38 np0005629333 podman[418302]: 2026-02-25 13:50:38.268771815 +0000 UTC m=+0.031587098 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:50:38 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:50:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c12a374253e822cfc43be746488a442480cfa3873d8d05190924ee4ac8c9815/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:50:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c12a374253e822cfc43be746488a442480cfa3873d8d05190924ee4ac8c9815/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:50:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c12a374253e822cfc43be746488a442480cfa3873d8d05190924ee4ac8c9815/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:50:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c12a374253e822cfc43be746488a442480cfa3873d8d05190924ee4ac8c9815/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:50:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c12a374253e822cfc43be746488a442480cfa3873d8d05190924ee4ac8c9815/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:50:38 np0005629333 podman[418302]: 2026-02-25 13:50:38.40791403 +0000 UTC m=+0.170729413 container init f9c2d9911f9a82819d7dc72c88cd33d43434a3e4419bb543fc1acee822ed4c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_chaum, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 08:50:38 np0005629333 podman[418302]: 2026-02-25 13:50:38.420729838 +0000 UTC m=+0.183545151 container start f9c2d9911f9a82819d7dc72c88cd33d43434a3e4419bb543fc1acee822ed4c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_chaum, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:50:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3922: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:38 np0005629333 podman[418302]: 2026-02-25 13:50:38.458124801 +0000 UTC m=+0.220940154 container attach f9c2d9911f9a82819d7dc72c88cd33d43434a3e4419bb543fc1acee822ed4c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_chaum, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 08:50:38 np0005629333 nova_compute[244014]: 2026-02-25 13:50:38.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:50:38 np0005629333 nova_compute[244014]: 2026-02-25 13:50:38.879 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:50:38 np0005629333 cool_chaum[418319]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:50:38 np0005629333 cool_chaum[418319]: --> All data devices are unavailable
Feb 25 08:50:38 np0005629333 systemd[1]: libpod-f9c2d9911f9a82819d7dc72c88cd33d43434a3e4419bb543fc1acee822ed4c06.scope: Deactivated successfully.
Feb 25 08:50:38 np0005629333 podman[418302]: 2026-02-25 13:50:38.954018188 +0000 UTC m=+0.716833461 container died f9c2d9911f9a82819d7dc72c88cd33d43434a3e4419bb543fc1acee822ed4c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_chaum, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:50:38 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7c12a374253e822cfc43be746488a442480cfa3873d8d05190924ee4ac8c9815-merged.mount: Deactivated successfully.
Feb 25 08:50:39 np0005629333 podman[418302]: 2026-02-25 13:50:39.008051569 +0000 UTC m=+0.770866852 container remove f9c2d9911f9a82819d7dc72c88cd33d43434a3e4419bb543fc1acee822ed4c06 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_chaum, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:50:39 np0005629333 systemd[1]: libpod-conmon-f9c2d9911f9a82819d7dc72c88cd33d43434a3e4419bb543fc1acee822ed4c06.scope: Deactivated successfully.
Feb 25 08:50:39 np0005629333 podman[418414]: 2026-02-25 13:50:39.475489238 +0000 UTC m=+0.047719601 container create 72b7f20541c2f90996510a496b3b507a246e898244aec8d86d06b1156cb818b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_noether, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 08:50:39 np0005629333 systemd[1]: Started libpod-conmon-72b7f20541c2f90996510a496b3b507a246e898244aec8d86d06b1156cb818b8.scope.
Feb 25 08:50:39 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:50:39 np0005629333 podman[418414]: 2026-02-25 13:50:39.451259703 +0000 UTC m=+0.023490146 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:50:39 np0005629333 podman[418414]: 2026-02-25 13:50:39.553609161 +0000 UTC m=+0.125839574 container init 72b7f20541c2f90996510a496b3b507a246e898244aec8d86d06b1156cb818b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:50:39 np0005629333 podman[418414]: 2026-02-25 13:50:39.568439977 +0000 UTC m=+0.140670380 container start 72b7f20541c2f90996510a496b3b507a246e898244aec8d86d06b1156cb818b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_noether, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 08:50:39 np0005629333 hopeful_noether[418430]: 167 167
Feb 25 08:50:39 np0005629333 systemd[1]: libpod-72b7f20541c2f90996510a496b3b507a246e898244aec8d86d06b1156cb818b8.scope: Deactivated successfully.
Feb 25 08:50:39 np0005629333 podman[418414]: 2026-02-25 13:50:39.573148592 +0000 UTC m=+0.145379045 container attach 72b7f20541c2f90996510a496b3b507a246e898244aec8d86d06b1156cb818b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_noether, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:50:39 np0005629333 podman[418414]: 2026-02-25 13:50:39.574313245 +0000 UTC m=+0.146543648 container died 72b7f20541c2f90996510a496b3b507a246e898244aec8d86d06b1156cb818b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 08:50:39 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d941f70cb23bd14b2418997dcdf1d42bb041ea8555644c2bac83604a99287dd9-merged.mount: Deactivated successfully.
Feb 25 08:50:39 np0005629333 podman[418414]: 2026-02-25 13:50:39.621100139 +0000 UTC m=+0.193330542 container remove 72b7f20541c2f90996510a496b3b507a246e898244aec8d86d06b1156cb818b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_noether, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 08:50:39 np0005629333 systemd[1]: libpod-conmon-72b7f20541c2f90996510a496b3b507a246e898244aec8d86d06b1156cb818b8.scope: Deactivated successfully.
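
These short-lived ceph containers are cephadm's periodic device scan; the next one (crazy_williams, below) prints the output of ceph-volume lvm list in JSON, keyed by OSD id. A minimal sketch that reduces that JSON to an OSD-to-device map, run wherever ceph-volume is available (cephadm runs it inside the ceph image):

    import json
    import subprocess

    # Same data the crazy_williams container dumps below: a dict keyed by
    # OSD id, each value a list of LV records with lv_path and devices.
    out = subprocess.run(["ceph-volume", "lvm", "list", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], ",".join(lv["devices"]))
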
Feb 25 08:50:39 np0005629333 podman[418457]: 2026-02-25 13:50:39.791294125 +0000 UTC m=+0.048542165 container create bcf5f6aa96c83fbeba2e382c2ed6efe0188905949772b5207a660a518db20c6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_williams, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 08:50:39 np0005629333 systemd[1]: Started libpod-conmon-bcf5f6aa96c83fbeba2e382c2ed6efe0188905949772b5207a660a518db20c6b.scope.
Feb 25 08:50:39 np0005629333 podman[418457]: 2026-02-25 13:50:39.764846075 +0000 UTC m=+0.022094165 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:50:39 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:50:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14b2ebdbff89bf055ba9631e7ac4aa4de942f26ebc48e4aad667fc8cccf05a15/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:50:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14b2ebdbff89bf055ba9631e7ac4aa4de942f26ebc48e4aad667fc8cccf05a15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:50:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14b2ebdbff89bf055ba9631e7ac4aa4de942f26ebc48e4aad667fc8cccf05a15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:50:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14b2ebdbff89bf055ba9631e7ac4aa4de942f26ebc48e4aad667fc8cccf05a15/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:50:39 np0005629333 podman[418457]: 2026-02-25 13:50:39.901680944 +0000 UTC m=+0.158929014 container init bcf5f6aa96c83fbeba2e382c2ed6efe0188905949772b5207a660a518db20c6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:50:39 np0005629333 podman[418457]: 2026-02-25 13:50:39.90780861 +0000 UTC m=+0.165056610 container start bcf5f6aa96c83fbeba2e382c2ed6efe0188905949772b5207a660a518db20c6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:50:39 np0005629333 podman[418457]: 2026-02-25 13:50:39.911564768 +0000 UTC m=+0.168812778 container attach bcf5f6aa96c83fbeba2e382c2ed6efe0188905949772b5207a660a518db20c6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 08:50:40 np0005629333 crazy_williams[418473]: {
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:    "0": [
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:        {
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "devices": [
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "/dev/loop3"
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            ],
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "lv_name": "ceph_lv0",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "lv_size": "21470642176",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "name": "ceph_lv0",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "tags": {
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.cluster_name": "ceph",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.crush_device_class": "",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.encrypted": "0",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.objectstore": "bluestore",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.osd_id": "0",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.type": "block",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.vdo": "0",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.with_tpm": "0"
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            },
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "type": "block",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "vg_name": "ceph_vg0"
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:        }
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:    ],
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:    "1": [
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:        {
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "devices": [
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "/dev/loop4"
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            ],
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "lv_name": "ceph_lv1",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "lv_size": "21470642176",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "name": "ceph_lv1",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "tags": {
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.cluster_name": "ceph",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.crush_device_class": "",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.encrypted": "0",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.objectstore": "bluestore",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.osd_id": "1",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.type": "block",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.vdo": "0",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.with_tpm": "0"
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            },
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "type": "block",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "vg_name": "ceph_vg1"
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:        }
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:    ],
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:    "2": [
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:        {
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "devices": [
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "/dev/loop5"
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            ],
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "lv_name": "ceph_lv2",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "lv_size": "21470642176",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "name": "ceph_lv2",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "tags": {
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.cluster_name": "ceph",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.crush_device_class": "",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.encrypted": "0",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.objectstore": "bluestore",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.osd_id": "2",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.type": "block",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.vdo": "0",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:                "ceph.with_tpm": "0"
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            },
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "type": "block",
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:            "vg_name": "ceph_vg2"
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:        }
Feb 25 08:50:40 np0005629333 crazy_williams[418473]:    ]
Feb 25 08:50:40 np0005629333 crazy_williams[418473]: }
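
The JSON dump above has the shape of `ceph-volume lvm list --format json` output: a map from OSD id ("0", "1", "2") to a list of logical-volume records, with the authoritative metadata duplicated in the `tags` object of each record. A minimal sketch for summarizing it, assuming the blob has been captured to a file (the filename is illustrative):

    import json

    # Assumed capture of the JSON printed by the probe container above.
    with open("ceph_volume_lvm_list.json") as f:
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"fsid={tags['ceph.osd_fsid']} "
                  f"objectstore={tags['ceph.objectstore']}")

Against the records above this prints one line per OSD, e.g. "osd.1: /dev/ceph_vg1/ceph_lv1 on /dev/loop4 fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350 objectstore=bluestore".
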
Feb 25 08:50:40 np0005629333 systemd[1]: libpod-bcf5f6aa96c83fbeba2e382c2ed6efe0188905949772b5207a660a518db20c6b.scope: Deactivated successfully.
Feb 25 08:50:40 np0005629333 podman[418457]: 2026-02-25 13:50:40.201948554 +0000 UTC m=+0.459196554 container died bcf5f6aa96c83fbeba2e382c2ed6efe0188905949772b5207a660a518db20c6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:50:40 np0005629333 systemd[1]: var-lib-containers-storage-overlay-14b2ebdbff89bf055ba9631e7ac4aa4de942f26ebc48e4aad667fc8cccf05a15-merged.mount: Deactivated successfully.
Feb 25 08:50:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:50:40 np0005629333 podman[418457]: 2026-02-25 13:50:40.247852592 +0000 UTC m=+0.505100592 container remove bcf5f6aa96c83fbeba2e382c2ed6efe0188905949772b5207a660a518db20c6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:50:40 np0005629333 systemd[1]: libpod-conmon-bcf5f6aa96c83fbeba2e382c2ed6efe0188905949772b5207a660a518db20c6b.scope: Deactivated successfully.
Feb 25 08:50:40 np0005629333 nova_compute[244014]: 2026-02-25 13:50:40.291 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:50:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3923: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:40 np0005629333 podman[418556]: 2026-02-25 13:50:40.722008035 +0000 UTC m=+0.055362080 container create a11d945cf5df0be40805d68a94a7f23d9f525063ed2ba07803616e4f6f96f75c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_payne, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:50:40 np0005629333 systemd[1]: Started libpod-conmon-a11d945cf5df0be40805d68a94a7f23d9f525063ed2ba07803616e4f6f96f75c.scope.
Feb 25 08:50:40 np0005629333 podman[418556]: 2026-02-25 13:50:40.696444611 +0000 UTC m=+0.029798726 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:50:40 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:50:40 np0005629333 podman[418556]: 2026-02-25 13:50:40.824360904 +0000 UTC m=+0.157715029 container init a11d945cf5df0be40805d68a94a7f23d9f525063ed2ba07803616e4f6f96f75c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Feb 25 08:50:40 np0005629333 podman[418556]: 2026-02-25 13:50:40.834285528 +0000 UTC m=+0.167639543 container start a11d945cf5df0be40805d68a94a7f23d9f525063ed2ba07803616e4f6f96f75c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 08:50:40 np0005629333 podman[418556]: 2026-02-25 13:50:40.83780503 +0000 UTC m=+0.171159075 container attach a11d945cf5df0be40805d68a94a7f23d9f525063ed2ba07803616e4f6f96f75c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_payne, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 08:50:40 np0005629333 agitated_payne[418573]: 167 167
Feb 25 08:50:40 np0005629333 systemd[1]: libpod-a11d945cf5df0be40805d68a94a7f23d9f525063ed2ba07803616e4f6f96f75c.scope: Deactivated successfully.
Feb 25 08:50:40 np0005629333 podman[418578]: 2026-02-25 13:50:40.896755202 +0000 UTC m=+0.035415728 container died a11d945cf5df0be40805d68a94a7f23d9f525063ed2ba07803616e4f6f96f75c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_payne, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 08:50:40 np0005629333 systemd[1]: var-lib-containers-storage-overlay-87576b35e9d88143f705d93babfe7d5513bb57ba7fe82a1bf0df3f9bf2a78b70-merged.mount: Deactivated successfully.
Feb 25 08:50:40 np0005629333 podman[418578]: 2026-02-25 13:50:40.937755789 +0000 UTC m=+0.076416255 container remove a11d945cf5df0be40805d68a94a7f23d9f525063ed2ba07803616e4f6f96f75c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_payne, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 08:50:40 np0005629333 systemd[1]: libpod-conmon-a11d945cf5df0be40805d68a94a7f23d9f525063ed2ba07803616e4f6f96f75c.scope: Deactivated successfully.
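
The agitated_payne container shows the full one-shot lifecycle cephadm uses for host probes: image pull, create, init, start, attach, a single line of output ("167 167", consistent with a uid/gid probe; 167 is the ceph user and group in these images), then died and remove within the same second. A hedged reproduction of that pattern with a throwaway container; the image digest is taken from the log, while the probed path and command are assumptions:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # --rm yields the same create/init/start/attach/died/remove sequence
    # seen in the journal once the entrypoint exits.
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # assumption: "167 167", the ceph uid/gid
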
Feb 25 08:50:41 np0005629333 podman[418600]: 2026-02-25 13:50:41.124522071 +0000 UTC m=+0.041044919 container create bb79cf11af0f20753f5fbe4ca1da415125b3ca94fdca61b3167e235c129c14a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_payne, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 08:50:41 np0005629333 systemd[1]: Started libpod-conmon-bb79cf11af0f20753f5fbe4ca1da415125b3ca94fdca61b3167e235c129c14a5.scope.
Feb 25 08:50:41 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:50:41 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d721e828d642c79dda4ae3366b1319ec9797c4ef071bf3660180832cbed8cf0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:50:41 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d721e828d642c79dda4ae3366b1319ec9797c4ef071bf3660180832cbed8cf0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:50:41 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d721e828d642c79dda4ae3366b1319ec9797c4ef071bf3660180832cbed8cf0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:50:41 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d721e828d642c79dda4ae3366b1319ec9797c4ef071bf3660180832cbed8cf0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:50:41 np0005629333 podman[418600]: 2026-02-25 13:50:41.106208255 +0000 UTC m=+0.022731123 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:50:41 np0005629333 podman[418600]: 2026-02-25 13:50:41.206666379 +0000 UTC m=+0.123189247 container init bb79cf11af0f20753f5fbe4ca1da415125b3ca94fdca61b3167e235c129c14a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_payne, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 08:50:41 np0005629333 podman[418600]: 2026-02-25 13:50:41.219731584 +0000 UTC m=+0.136254432 container start bb79cf11af0f20753f5fbe4ca1da415125b3ca94fdca61b3167e235c129c14a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:50:41 np0005629333 podman[418600]: 2026-02-25 13:50:41.223572424 +0000 UTC m=+0.140095272 container attach bb79cf11af0f20753f5fbe4ca1da415125b3ca94fdca61b3167e235c129c14a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_payne, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:50:41 np0005629333 lvm[418696]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:50:41 np0005629333 lvm[418696]: VG ceph_vg1 finished
Feb 25 08:50:41 np0005629333 lvm[418695]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:50:41 np0005629333 lvm[418695]: VG ceph_vg0 finished
Feb 25 08:50:41 np0005629333 lvm[418698]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:50:41 np0005629333 lvm[418698]: VG ceph_vg2 finished
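
The three lvm[...] pairs are udev-driven event activation: pvscan marks each PV online and declares the VG complete once every PV backing it is present (here each ceph_vgN sits on a single loop device). The same PV-to-VG view can be pulled programmatically; a sketch using LVM's JSON reporting interface:

    import json
    import subprocess

    # `pvs --reportformat json` is LVM's stable scripting interface.
    report = json.loads(subprocess.run(
        ["pvs", "--reportformat", "json", "-o", "pv_name,vg_name"],
        capture_output=True, text=True, check=True).stdout)

    for pv in report["report"][0]["pv"]:
        print(f"{pv['pv_name']} -> {pv['vg_name'] or '(orphan)'}")

On this host that should list /dev/loop3 -> ceph_vg0, /dev/loop4 -> ceph_vg1 and /dev/loop5 -> ceph_vg2, matching the activation events above.
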
Feb 25 08:50:41 np0005629333 nova_compute[244014]: 2026-02-25 13:50:41.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:50:41 np0005629333 quirky_payne[418616]: {}
Feb 25 08:50:42 np0005629333 systemd[1]: libpod-bb79cf11af0f20753f5fbe4ca1da415125b3ca94fdca61b3167e235c129c14a5.scope: Deactivated successfully.
Feb 25 08:50:42 np0005629333 systemd[1]: libpod-bb79cf11af0f20753f5fbe4ca1da415125b3ca94fdca61b3167e235c129c14a5.scope: Consumed 1.148s CPU time.
Feb 25 08:50:42 np0005629333 podman[418600]: 2026-02-25 13:50:42.014268565 +0000 UTC m=+0.930791413 container died bb79cf11af0f20753f5fbe4ca1da415125b3ca94fdca61b3167e235c129c14a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_payne, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 08:50:42 np0005629333 systemd[1]: var-lib-containers-storage-overlay-8d721e828d642c79dda4ae3366b1319ec9797c4ef071bf3660180832cbed8cf0-merged.mount: Deactivated successfully.
Feb 25 08:50:42 np0005629333 podman[418600]: 2026-02-25 13:50:42.062959233 +0000 UTC m=+0.979482081 container remove bb79cf11af0f20753f5fbe4ca1da415125b3ca94fdca61b3167e235c129c14a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_payne, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 08:50:42 np0005629333 systemd[1]: libpod-conmon-bb79cf11af0f20753f5fbe4ca1da415125b3ca94fdca61b3167e235c129c14a5.scope: Deactivated successfully.
Feb 25 08:50:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:50:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:50:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:50:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
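
These two mon_command entries are the cephadm mgr persisting the freshly gathered host inventory under config-key storage; the key names appear in the commands. Assuming the stored values are JSON, as cephadm inventory normally is, they can be read back directly. A sketch:

    import json
    import subprocess

    KEY = "mgr/cephadm/host.compute-0.devices.0"  # key from the audit line above

    raw = subprocess.run(
        ["ceph", "config-key", "get", KEY],
        capture_output=True, text=True, check=True).stdout
    print(json.dumps(json.loads(raw), indent=2)[:400])  # peek at the inventory
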
Feb 25 08:50:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3924: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:42 np0005629333 nova_compute[244014]: 2026-02-25 13:50:42.951 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:50:43 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:50:43 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:50:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:50:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:50:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:50:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:50:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:50:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:50:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:50:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:50:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:50:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:50:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:50:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:50:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:50:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:50:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:50:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:50:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:50:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:50:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:50:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:50:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:50:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:50:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
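
The pg_autoscaler lines above are internally consistent with the documented sizing rule: target PGs = fraction of capacity used * bias * (mon_target_pg_per_osd * number of OSDs), then quantization to a power of two. With the default mon_target_pg_per_osd of 100 and the three OSDs listed earlier, the implied multiplier is 300, which reproduces every printed target; a check in plain arithmetic:

    # Multiplier implied by the log: mon_target_pg_per_osd (100) * 3 OSDs.
    TARGET = 100 * 3

    for pool, used, bias in [
        (".mgr",               7.185749983720779e-06,  1.0),
        ("images",             0.0006714637386478266,  1.0),
        ("cephfs.cephfs.meta", 1.3916366864300228e-06, 4.0),
    ]:
        print(pool, used * bias * TARGET)
    # .mgr   -> 0.0021557...   (log: 0.0021557249951162337)
    # images -> 0.2014391...   (log: 0.20143912159434796)
    # meta   -> 0.0016699...   (log: 0.0016699640237160273)

Because every raw target quantizes to the pool's current pg_num, the autoscaler proposes no changes in this pass.
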
Feb 25 08:50:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3925: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:44 np0005629333 podman[418737]: 2026-02-25 13:50:44.726436628 +0000 UTC m=+0.065934994 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Feb 25 08:50:44 np0005629333 podman[418738]: 2026-02-25 13:50:44.767529138 +0000 UTC m=+0.106141528 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
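
The two health_status=healthy events are podman's healthcheck timer firing for ovn_metadata_agent and ovn_controller; per the embedded config_data, the test is the /openstack/healthcheck script bind-mounted into each container. The same check can be driven by hand; a sketch, with the container name taken from the log:

    import subprocess

    # Exit code 0 means healthy, matching health_status=healthy above.
    rc = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_controller"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")
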
Feb 25 08:50:44 np0005629333 nova_compute[244014]: 2026-02-25 13:50:44.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:50:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:50:45 np0005629333 nova_compute[244014]: 2026-02-25 13:50:45.329 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:50:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3926: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:50:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3464252026' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:50:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:50:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3464252026' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:50:48 np0005629333 nova_compute[244014]: 2026-02-25 13:50:48.001 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:50:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3927: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:50:50 np0005629333 nova_compute[244014]: 2026-02-25 13:50:50.371 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:50:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3928: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3929: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:53 np0005629333 nova_compute[244014]: 2026-02-25 13:50:53.005 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:50:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3930: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:50:55.096 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:50:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:50:55.096 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:50:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:50:55.096 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:50:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:50:55 np0005629333 nova_compute[244014]: 2026-02-25 13:50:55.407 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:50:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3931: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:50:58 np0005629333 nova_compute[244014]: 2026-02-25 13:50:58.010 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:50:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3932: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:51:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:51:00 np0005629333 nova_compute[244014]: 2026-02-25 13:51:00.447 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:51:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3933: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:51:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:51:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:51:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:51:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:51:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:51:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:51:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3934: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:51:03 np0005629333 nova_compute[244014]: 2026-02-25 13:51:03.048 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:51:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3935: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:51:04 np0005629333 nova_compute[244014]: 2026-02-25 13:51:04.873 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:51:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:51:05 np0005629333 nova_compute[244014]: 2026-02-25 13:51:05.448 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:51:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3936: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:51:08 np0005629333 nova_compute[244014]: 2026-02-25 13:51:08.051 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:51:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3937: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:51:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:51:10 np0005629333 nova_compute[244014]: 2026-02-25 13:51:10.451 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:51:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3938: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:51:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3939: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:51:13 np0005629333 nova_compute[244014]: 2026-02-25 13:51:13.085 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:51:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3940: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:51:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:51:15 np0005629333 nova_compute[244014]: 2026-02-25 13:51:15.454 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:51:15 np0005629333 podman[418784]: 2026-02-25 13:51:15.735996175 +0000 UTC m=+0.075563410 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 08:51:15 np0005629333 podman[418785]: 2026-02-25 13:51:15.808206268 +0000 UTC m=+0.137027385 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 25 08:51:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3941: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:51:18 np0005629333 nova_compute[244014]: 2026-02-25 13:51:18.132 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:51:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3942: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:51:19 np0005629333 nova_compute[244014]: 2026-02-25 13:51:19.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:51:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:51:20 np0005629333 nova_compute[244014]: 2026-02-25 13:51:20.455 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:51:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3943: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:51:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3944: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 30 op/s
Feb 25 08:51:23 np0005629333 nova_compute[244014]: 2026-02-25 13:51:23.163 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:51:23 np0005629333 nova_compute[244014]: 2026-02-25 13:51:23.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:51:23 np0005629333 nova_compute[244014]: 2026-02-25 13:51:23.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:51:23 np0005629333 nova_compute[244014]: 2026-02-25 13:51:23.911 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:51:23 np0005629333 nova_compute[244014]: 2026-02-25 13:51:23.911 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:51:23 np0005629333 nova_compute[244014]: 2026-02-25 13:51:23.911 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:51:23 np0005629333 nova_compute[244014]: 2026-02-25 13:51:23.911 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:51:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:51:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2053274885' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:51:24 np0005629333 nova_compute[244014]: 2026-02-25 13:51:24.448 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
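
nova-compute's resource audit shells out to the ceph CLI exactly as logged (a 0.536s round trip) and parses the JSON to size the RBD-backed disk pool. A sketch of the same probe; field names follow the `ceph df --format=json` schema, and the 'vms' pool name is taken from the pool list logged earlier on the assumption that it backs nova ephemeral disks here:

    import json
    import subprocess

    df = json.loads(subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout)

    vms = next(p for p in df["pools"] if p["name"] == "vms")
    print("bytes_used:", vms["stats"]["bytes_used"],
          "max_avail:", vms["stats"]["max_avail"])
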
Feb 25 08:51:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3945: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 30 op/s
Feb 25 08:51:24 np0005629333 nova_compute[244014]: 2026-02-25 13:51:24.590 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:51:24 np0005629333 nova_compute[244014]: 2026-02-25 13:51:24.591 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3537MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:51:24 np0005629333 nova_compute[244014]: 2026-02-25 13:51:24.591 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:51:24 np0005629333 nova_compute[244014]: 2026-02-25 13:51:24.592 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:51:24 np0005629333 nova_compute[244014]: 2026-02-25 13:51:24.729 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:51:24 np0005629333 nova_compute[244014]: 2026-02-25 13:51:24.730 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:51:24 np0005629333 nova_compute[244014]: 2026-02-25 13:51:24.802 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 25 08:51:24 np0005629333 nova_compute[244014]: 2026-02-25 13:51:24.902 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 25 08:51:24 np0005629333 nova_compute[244014]: 2026-02-25 13:51:24.903 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 25 08:51:24 np0005629333 nova_compute[244014]: 2026-02-25 13:51:24.919 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 25 08:51:24 np0005629333 nova_compute[244014]: 2026-02-25 13:51:24.943 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 25 08:51:24 np0005629333 nova_compute[244014]: 2026-02-25 13:51:24.960 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:51:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:51:25 np0005629333 nova_compute[244014]: 2026-02-25 13:51:25.458 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:51:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:51:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1077855961' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:51:25 np0005629333 nova_compute[244014]: 2026-02-25 13:51:25.514 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
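
[annotation] The `ceph df --format=json` call above is how the resource tracker sizes an RBD-backed hypervisor: free disk comes from the Ceph pool, not the local filesystem. A minimal sketch of the same query, assuming the host has the `ceph` CLI and the `client.openstack` keyring; the helper name is illustrative:

    import json
    import subprocess

    def rbd_pool_usage(pool, conf="/etc/ceph/ceph.conf", user="openstack"):
        # Same command logged by oslo_concurrency.processutils above.
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf])
        df = json.loads(out)
        # "stats" holds cluster-wide totals; "pools" holds per-pool usage.
        total = df["stats"]["total_bytes"]
        for p in df["pools"]:
            if p["name"] == pool:
                return total, p["stats"]
        raise KeyError("pool %s not found" % pool)
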
Feb 25 08:51:25 np0005629333 nova_compute[244014]: 2026-02-25 13:51:25.520 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:51:25 np0005629333 nova_compute[244014]: 2026-02-25 13:51:25.541 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:51:25 np0005629333 nova_compute[244014]: 2026-02-25 13:51:25.542 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:51:25 np0005629333 nova_compute[244014]: 2026-02-25 13:51:25.542 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
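
[annotation] The inventory pushed to placement in the lines above becomes schedulable capacity as (total - reserved) * allocation_ratio per resource class. A quick check with the values copied from the DEBUG lines (a sketch, not nova code):

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        # What the scheduler can actually place against this provider.
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
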
Feb 25 08:51:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3946: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 52 op/s
Feb 25 08:51:26 np0005629333 nova_compute[244014]: 2026-02-25 13:51:26.542 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:51:26 np0005629333 nova_compute[244014]: 2026-02-25 13:51:26.543 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:51:26 np0005629333 nova_compute[244014]: 2026-02-25 13:51:26.543 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:51:26 np0005629333 nova_compute[244014]: 2026-02-25 13:51:26.558 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
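
[annotation] _heal_instance_info_cache here, and _check_instance_build_time, _reclaim_queued_deletes, _poll_rebooting_instances below, are all fired by oslo_service's periodic task loop. A stdlib-only illustration of the dispatch pattern (not oslo's actual implementation; task names and spacings are made up):

    import time

    def run_periodic(tasks, tick=1.0):
        # tasks: name -> (spacing_seconds, callable); fire each task once
        # its spacing has elapsed, then keep polling on a coarse tick.
        last = {name: 0.0 for name in tasks}
        while True:
            now = time.monotonic()
            for name, (spacing, fn) in tasks.items():
                if now - last[name] >= spacing:
                    last[name] = now
                    fn()
            time.sleep(tick)

    run_periodic({"heal_info_cache": (60.0, lambda: print("heal"))})
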
Feb 25 08:51:28 np0005629333 nova_compute[244014]: 2026-02-25 13:51:28.201 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:51:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3947: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 0 B/s wr, 88 op/s
Feb 25 08:51:29 np0005629333 nova_compute[244014]: 2026-02-25 13:51:29.888 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:51:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:51:30 np0005629333 nova_compute[244014]: 2026-02-25 13:51:30.460 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:51:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3948: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 0 B/s wr, 88 op/s
Feb 25 08:51:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:51:31
Feb 25 08:51:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:51:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:51:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['vms', 'backups', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'images', 'cephfs.cephfs.data', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control']
Feb 25 08:51:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:51:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:51:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:51:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:51:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:51:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:51:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:51:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3949: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 0 B/s wr, 88 op/s
Feb 25 08:51:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:51:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:51:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:51:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:51:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:51:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:51:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:51:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:51:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:51:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:51:33 np0005629333 nova_compute[244014]: 2026-02-25 13:51:33.240 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:51:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3950: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Feb 25 08:51:34 np0005629333 nova_compute[244014]: 2026-02-25 13:51:34.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:51:34 np0005629333 nova_compute[244014]: 2026-02-25 13:51:34.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
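
[annotation] The skip message above is the expected behaviour of reclaim_instance_interval's default of 0: soft-deleted instances are only purged when the interval is positive. An illustrative nova.conf fragment (example value, not this deployment's configuration):

    [DEFAULT]
    # Seconds between runs that purge SOFT_DELETED instances; <= 0 disables.
    reclaim_instance_interval = 300
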
Feb 25 08:51:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:51:35 np0005629333 nova_compute[244014]: 2026-02-25 13:51:35.462 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:51:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3951: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Feb 25 08:51:38 np0005629333 nova_compute[244014]: 2026-02-25 13:51:38.289 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:51:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3952: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Feb 25 08:51:38 np0005629333 nova_compute[244014]: 2026-02-25 13:51:38.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:51:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:51:40 np0005629333 nova_compute[244014]: 2026-02-25 13:51:40.464 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:51:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3953: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:51:40 np0005629333 nova_compute[244014]: 2026-02-25 13:51:40.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:51:41 np0005629333 nova_compute[244014]: 2026-02-25 13:51:41.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:51:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3954: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:51:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:51:43 np0005629333 nova_compute[244014]: 2026-02-25 13:51:43.336 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:51:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:51:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:51:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:51:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:51:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:51:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:51:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:51:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:51:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:51:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:51:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:51:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:51:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:51:43 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:51:43 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
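
[annotation] Each handle_command line above is a monitor command arriving as a JSON document keyed by "prefix". The same interface is scriptable through python-rados; a sketch issuing the "df" call seen earlier, assuming the client.openstack keyring is readable (mon_command returns a (ret, outbuf, status) tuple):

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    cmd = json.dumps({"prefix": "df", "format": "json"})
    ret, outbuf, status = cluster.mon_command(cmd, b"")
    if ret == 0:
        print(json.loads(outbuf)["stats"]["total_bytes"])
    cluster.shutdown()
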
Feb 25 08:51:43 np0005629333 podman[419084]: 2026-02-25 13:51:43.729401311 +0000 UTC m=+0.038575039 container create 2077f7ad675e234bdfb871107e43a332bc625fb566e98175ffef5a99d2abe95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 08:51:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:51:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:51:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:51:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:51:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:51:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:51:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:51:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:51:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:51:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:51:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:51:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:51:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:51:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:51:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:51:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:51:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:51:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:51:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:51:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:51:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:51:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:51:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
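
[annotation] The pg targets above are reproducible: target = capacity_ratio * bias * (target PGs per OSD * number of OSDs), which is then quantized to a power of two. With this cluster's 3 OSDs and the default mon_target_pg_per_osd of 100 (an assumption; the log only shows inputs and results):

    def pg_target(capacity_ratio, bias, n_osds=3, per_osd=100):
        # capacity_ratio and bias are printed verbatim in the lines above.
        return capacity_ratio * bias * per_osd * n_osds

    print(pg_target(7.185749983720779e-06, 1.0))   # 0.0021557... -> '.mgr'
    print(pg_target(1.3916366864300228e-06, 4.0))  # 0.0016699... -> 'cephfs.cephfs.meta'
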
Feb 25 08:51:43 np0005629333 systemd[1]: Started libpod-conmon-2077f7ad675e234bdfb871107e43a332bc625fb566e98175ffef5a99d2abe95a.scope.
Feb 25 08:51:43 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:51:43 np0005629333 podman[419084]: 2026-02-25 13:51:43.80706577 +0000 UTC m=+0.116239518 container init 2077f7ad675e234bdfb871107e43a332bc625fb566e98175ffef5a99d2abe95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lovelace, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:51:43 np0005629333 podman[419084]: 2026-02-25 13:51:43.712291639 +0000 UTC m=+0.021465367 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:51:43 np0005629333 podman[419084]: 2026-02-25 13:51:43.816326426 +0000 UTC m=+0.125500154 container start 2077f7ad675e234bdfb871107e43a332bc625fb566e98175ffef5a99d2abe95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lovelace, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:51:43 np0005629333 podman[419084]: 2026-02-25 13:51:43.820145476 +0000 UTC m=+0.129319224 container attach 2077f7ad675e234bdfb871107e43a332bc625fb566e98175ffef5a99d2abe95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lovelace, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 08:51:43 np0005629333 priceless_lovelace[419101]: 167 167
Feb 25 08:51:43 np0005629333 systemd[1]: libpod-2077f7ad675e234bdfb871107e43a332bc625fb566e98175ffef5a99d2abe95a.scope: Deactivated successfully.
Feb 25 08:51:43 np0005629333 podman[419084]: 2026-02-25 13:51:43.824240003 +0000 UTC m=+0.133413741 container died 2077f7ad675e234bdfb871107e43a332bc625fb566e98175ffef5a99d2abe95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lovelace, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:51:43 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:51:43 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:51:43 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:51:43 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:51:43 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:51:43 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6e55a14074da5eee28cf4baec77129003b82a0695e87049d0512a5e5dc270201-merged.mount: Deactivated successfully.
Feb 25 08:51:43 np0005629333 podman[419084]: 2026-02-25 13:51:43.866811125 +0000 UTC m=+0.175984853 container remove 2077f7ad675e234bdfb871107e43a332bc625fb566e98175ffef5a99d2abe95a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lovelace, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:51:43 np0005629333 systemd[1]: libpod-conmon-2077f7ad675e234bdfb871107e43a332bc625fb566e98175ffef5a99d2abe95a.scope: Deactivated successfully.
Feb 25 08:51:43 np0005629333 podman[419126]: 2026-02-25 13:51:43.998335631 +0000 UTC m=+0.041135823 container create f69a9d007358a9358900cd1be23727d5a570977e9065a292ca29eed4edb11582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:51:44 np0005629333 systemd[1]: Started libpod-conmon-f69a9d007358a9358900cd1be23727d5a570977e9065a292ca29eed4edb11582.scope.
Feb 25 08:51:44 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:51:44 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba71e920d551dcff9b00fc8df870256bfc895686ebe0af73fa517e86b68674b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:51:44 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba71e920d551dcff9b00fc8df870256bfc895686ebe0af73fa517e86b68674b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:51:44 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba71e920d551dcff9b00fc8df870256bfc895686ebe0af73fa517e86b68674b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:51:44 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba71e920d551dcff9b00fc8df870256bfc895686ebe0af73fa517e86b68674b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:51:44 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cba71e920d551dcff9b00fc8df870256bfc895686ebe0af73fa517e86b68674b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:51:44 np0005629333 podman[419126]: 2026-02-25 13:51:43.982083134 +0000 UTC m=+0.024883346 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:51:44 np0005629333 podman[419126]: 2026-02-25 13:51:44.081609561 +0000 UTC m=+0.124409773 container init f69a9d007358a9358900cd1be23727d5a570977e9065a292ca29eed4edb11582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 08:51:44 np0005629333 podman[419126]: 2026-02-25 13:51:44.087201542 +0000 UTC m=+0.130001734 container start f69a9d007358a9358900cd1be23727d5a570977e9065a292ca29eed4edb11582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yonath, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:51:44 np0005629333 podman[419126]: 2026-02-25 13:51:44.091164736 +0000 UTC m=+0.133964928 container attach f69a9d007358a9358900cd1be23727d5a570977e9065a292ca29eed4edb11582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 08:51:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3955: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:51:44 np0005629333 hardcore_yonath[419143]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:51:44 np0005629333 hardcore_yonath[419143]: --> All data devices are unavailable
Feb 25 08:51:44 np0005629333 systemd[1]: libpod-f69a9d007358a9358900cd1be23727d5a570977e9065a292ca29eed4edb11582.scope: Deactivated successfully.
Feb 25 08:51:44 np0005629333 podman[419126]: 2026-02-25 13:51:44.617467725 +0000 UTC m=+0.660267927 container died f69a9d007358a9358900cd1be23727d5a570977e9065a292ca29eed4edb11582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yonath, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 08:51:44 np0005629333 systemd[1]: var-lib-containers-storage-overlay-cba71e920d551dcff9b00fc8df870256bfc895686ebe0af73fa517e86b68674b-merged.mount: Deactivated successfully.
Feb 25 08:51:44 np0005629333 podman[419126]: 2026-02-25 13:51:44.671528497 +0000 UTC m=+0.714328689 container remove f69a9d007358a9358900cd1be23727d5a570977e9065a292ca29eed4edb11582 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_yonath, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:51:44 np0005629333 systemd[1]: libpod-conmon-f69a9d007358a9358900cd1be23727d5a570977e9065a292ca29eed4edb11582.scope: Deactivated successfully.
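
[annotation] These short-lived ceph containers are cephadm refreshing its device inventory; "passed data devices: 0 physical, 3 LVM ... All data devices are unavailable" is a ceph-volume batch report meaning every candidate device already carries an OSD LV. The charming_tharp container below prints the matching `ceph-volume lvm list --format json` output; a sketch reducing that JSON to an OSD-to-device map (direct invocation assumed here, though cephadm runs it containerized):

    import json
    import subprocess

    def osd_device_map():
        out = subprocess.check_output(
            ["ceph-volume", "lvm", "list", "--format", "json"])
        listing = json.loads(out)
        # Top-level keys are OSD ids ("0", "1", "2"); each holds its LVs.
        return {osd: [(lv["lv_path"], lv["devices"]) for lv in lvs]
                for osd, lvs in listing.items()}
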
Feb 25 08:51:45 np0005629333 podman[419235]: 2026-02-25 13:51:45.149918672 +0000 UTC m=+0.064619476 container create fb248bb7f213bf54a0bf8e720b264e9063a736154ea429b266e51b2f254514c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:51:45 np0005629333 systemd[1]: Started libpod-conmon-fb248bb7f213bf54a0bf8e720b264e9063a736154ea429b266e51b2f254514c6.scope.
Feb 25 08:51:45 np0005629333 podman[419235]: 2026-02-25 13:51:45.122894236 +0000 UTC m=+0.037595130 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:51:45 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:51:45 np0005629333 podman[419235]: 2026-02-25 13:51:45.241017067 +0000 UTC m=+0.155717891 container init fb248bb7f213bf54a0bf8e720b264e9063a736154ea429b266e51b2f254514c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 08:51:45 np0005629333 podman[419235]: 2026-02-25 13:51:45.251633662 +0000 UTC m=+0.166334486 container start fb248bb7f213bf54a0bf8e720b264e9063a736154ea429b266e51b2f254514c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:51:45 np0005629333 podman[419235]: 2026-02-25 13:51:45.256158532 +0000 UTC m=+0.170859356 container attach fb248bb7f213bf54a0bf8e720b264e9063a736154ea429b266e51b2f254514c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:51:45 np0005629333 relaxed_proskuriakova[419251]: 167 167
Feb 25 08:51:45 np0005629333 systemd[1]: libpod-fb248bb7f213bf54a0bf8e720b264e9063a736154ea429b266e51b2f254514c6.scope: Deactivated successfully.
Feb 25 08:51:45 np0005629333 podman[419235]: 2026-02-25 13:51:45.258747856 +0000 UTC m=+0.173448690 container died fb248bb7f213bf54a0bf8e720b264e9063a736154ea429b266e51b2f254514c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:51:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:51:45 np0005629333 systemd[1]: var-lib-containers-storage-overlay-befe559a2a5510f2ea65676191490b5a17dc8545a63fbfb6756f3ebbcd77c092-merged.mount: Deactivated successfully.
Feb 25 08:51:45 np0005629333 podman[419235]: 2026-02-25 13:51:45.306022223 +0000 UTC m=+0.220723027 container remove fb248bb7f213bf54a0bf8e720b264e9063a736154ea429b266e51b2f254514c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 08:51:45 np0005629333 systemd[1]: libpod-conmon-fb248bb7f213bf54a0bf8e720b264e9063a736154ea429b266e51b2f254514c6.scope: Deactivated successfully.
Feb 25 08:51:45 np0005629333 nova_compute[244014]: 2026-02-25 13:51:45.466 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:51:45 np0005629333 podman[419277]: 2026-02-25 13:51:45.469921249 +0000 UTC m=+0.045437326 container create 3ced94fdd26704f729c0488a5a5321366f8360167a64e666276f1354917286e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_tharp, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:51:45 np0005629333 systemd[1]: Started libpod-conmon-3ced94fdd26704f729c0488a5a5321366f8360167a64e666276f1354917286e6.scope.
Feb 25 08:51:45 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:51:45 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaac5c7dfeb7007731ac909c2ba7765fa7f8c8c9e705af967fb83ae446cb6a96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:51:45 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaac5c7dfeb7007731ac909c2ba7765fa7f8c8c9e705af967fb83ae446cb6a96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:51:45 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaac5c7dfeb7007731ac909c2ba7765fa7f8c8c9e705af967fb83ae446cb6a96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:51:45 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaac5c7dfeb7007731ac909c2ba7765fa7f8c8c9e705af967fb83ae446cb6a96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:51:45 np0005629333 podman[419277]: 2026-02-25 13:51:45.546488997 +0000 UTC m=+0.122005094 container init 3ced94fdd26704f729c0488a5a5321366f8360167a64e666276f1354917286e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_tharp, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:51:45 np0005629333 podman[419277]: 2026-02-25 13:51:45.449956346 +0000 UTC m=+0.025472453 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:51:45 np0005629333 podman[419277]: 2026-02-25 13:51:45.553115047 +0000 UTC m=+0.128631124 container start 3ced94fdd26704f729c0488a5a5321366f8360167a64e666276f1354917286e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_tharp, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 08:51:45 np0005629333 podman[419277]: 2026-02-25 13:51:45.556813503 +0000 UTC m=+0.132329590 container attach 3ced94fdd26704f729c0488a5a5321366f8360167a64e666276f1354917286e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_tharp, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:51:45 np0005629333 charming_tharp[419294]: {
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:    "0": [
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:        {
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "devices": [
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "/dev/loop3"
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            ],
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "lv_name": "ceph_lv0",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "lv_size": "21470642176",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "name": "ceph_lv0",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "tags": {
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.cluster_name": "ceph",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.crush_device_class": "",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.encrypted": "0",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.objectstore": "bluestore",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.osd_id": "0",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.type": "block",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.vdo": "0",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.with_tpm": "0"
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            },
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "type": "block",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "vg_name": "ceph_vg0"
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:        }
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:    ],
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:    "1": [
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:        {
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "devices": [
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "/dev/loop4"
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            ],
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "lv_name": "ceph_lv1",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "lv_size": "21470642176",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "name": "ceph_lv1",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "tags": {
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.cluster_name": "ceph",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.crush_device_class": "",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.encrypted": "0",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.objectstore": "bluestore",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.osd_id": "1",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.type": "block",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.vdo": "0",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.with_tpm": "0"
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            },
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "type": "block",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "vg_name": "ceph_vg1"
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:        }
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:    ],
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:    "2": [
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:        {
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "devices": [
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "/dev/loop5"
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            ],
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "lv_name": "ceph_lv2",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "lv_size": "21470642176",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "name": "ceph_lv2",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "tags": {
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.cluster_name": "ceph",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.crush_device_class": "",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.encrypted": "0",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.objectstore": "bluestore",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.osd_id": "2",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.type": "block",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.vdo": "0",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:                "ceph.with_tpm": "0"
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            },
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "type": "block",
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:            "vg_name": "ceph_vg2"
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:        }
Feb 25 08:51:45 np0005629333 charming_tharp[419294]:    ]
Feb 25 08:51:45 np0005629333 charming_tharp[419294]: }
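
The block above matches the JSON shape of "ceph-volume lvm list --format json" for OSDs 0-2: each logical volume carries its metadata twice, once flattened into the "lv_tags" string and once as the structured "tags" object. A minimal Python sketch (not part of the log) that rebuilds the mapping from the flattened form, assuming tag values contain no commas or '=' characters, which holds for the ceph.* tags shown here:

    def parse_lv_tags(lv_tags: str) -> dict:
        """Split 'k1=v1,k2=v2,...' into {'k1': 'v1', 'k2': 'v2', ...}."""
        tags = {}
        for pair in lv_tags.split(","):
            key, _, value = pair.partition("=")
            tags[key] = value  # empty values (e.g. ceph.crush_device_class=) stay ""
        return tags

    tags = parse_lv_tags("ceph.osd_id=1,ceph.type=block,ceph.crush_device_class=")
    assert tags == {"ceph.osd_id": "1", "ceph.type": "block", "ceph.crush_device_class": ""}
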
Feb 25 08:51:45 np0005629333 systemd[1]: libpod-3ced94fdd26704f729c0488a5a5321366f8360167a64e666276f1354917286e6.scope: Deactivated successfully.
Feb 25 08:51:45 np0005629333 podman[419277]: 2026-02-25 13:51:45.861232933 +0000 UTC m=+0.436749040 container died 3ced94fdd26704f729c0488a5a5321366f8360167a64e666276f1354917286e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 08:51:45 np0005629333 systemd[1]: var-lib-containers-storage-overlay-aaac5c7dfeb7007731ac909c2ba7765fa7f8c8c9e705af967fb83ae446cb6a96-merged.mount: Deactivated successfully.
Feb 25 08:51:45 np0005629333 podman[419277]: 2026-02-25 13:51:45.912774243 +0000 UTC m=+0.488290310 container remove 3ced94fdd26704f729c0488a5a5321366f8360167a64e666276f1354917286e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_tharp, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:51:45 np0005629333 systemd[1]: libpod-conmon-3ced94fdd26704f729c0488a5a5321366f8360167a64e666276f1354917286e6.scope: Deactivated successfully.
Feb 25 08:51:45 np0005629333 podman[419304]: 2026-02-25 13:51:45.969940814 +0000 UTC m=+0.074623843 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Feb 25 08:51:45 np0005629333 podman[419312]: 2026-02-25 13:51:45.996065444 +0000 UTC m=+0.100776204 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
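
The two health_status=healthy events above are podman's healthcheck timers running each container's configured test command ('/openstack/healthcheck', mounted read-only per the config_data shown). A minimal sketch (not part of the log) that triggers the same check on demand; podman reflects the result in the exit status:

    import subprocess

    # "podman healthcheck run <name>" executes the container's configured
    # healthcheck test; exit status 0 means healthy.
    for name in ("ovn_metadata_agent", "ovn_controller"):
        result = subprocess.run(["podman", "healthcheck", "run", name])
        print(name, "healthy" if result.returncode == 0 else "unhealthy")
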
Feb 25 08:51:46 np0005629333 podman[419422]: 2026-02-25 13:51:46.408803674 +0000 UTC m=+0.050478521 container create 9ad0dcfad8afc1cc4dad5363e26b2d52ce1f1e766809f81e34fdf52770cf25d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 08:51:46 np0005629333 systemd[1]: Started libpod-conmon-9ad0dcfad8afc1cc4dad5363e26b2d52ce1f1e766809f81e34fdf52770cf25d3.scope.
Feb 25 08:51:46 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:51:46 np0005629333 podman[419422]: 2026-02-25 13:51:46.392939528 +0000 UTC m=+0.034614395 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:51:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3956: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:51:46 np0005629333 podman[419422]: 2026-02-25 13:51:46.496598804 +0000 UTC m=+0.138273671 container init 9ad0dcfad8afc1cc4dad5363e26b2d52ce1f1e766809f81e34fdf52770cf25d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_diffie, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:51:46 np0005629333 podman[419422]: 2026-02-25 13:51:46.50166349 +0000 UTC m=+0.143338347 container start 9ad0dcfad8afc1cc4dad5363e26b2d52ce1f1e766809f81e34fdf52770cf25d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_diffie, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:51:46 np0005629333 podman[419422]: 2026-02-25 13:51:46.505110459 +0000 UTC m=+0.146785306 container attach 9ad0dcfad8afc1cc4dad5363e26b2d52ce1f1e766809f81e34fdf52770cf25d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_diffie, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:51:46 np0005629333 angry_diffie[419438]: 167 167
Feb 25 08:51:46 np0005629333 systemd[1]: libpod-9ad0dcfad8afc1cc4dad5363e26b2d52ce1f1e766809f81e34fdf52770cf25d3.scope: Deactivated successfully.
Feb 25 08:51:46 np0005629333 conmon[419438]: conmon 9ad0dcfad8afc1cc4dad <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9ad0dcfad8afc1cc4dad5363e26b2d52ce1f1e766809f81e34fdf52770cf25d3.scope/container/memory.events
Feb 25 08:51:46 np0005629333 podman[419422]: 2026-02-25 13:51:46.508151946 +0000 UTC m=+0.149826803 container died 9ad0dcfad8afc1cc4dad5363e26b2d52ce1f1e766809f81e34fdf52770cf25d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_diffie, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 08:51:46 np0005629333 systemd[1]: var-lib-containers-storage-overlay-00df8332409d441124d67236865c589afac612e46a048fca8d3dfe35d1957edb-merged.mount: Deactivated successfully.
Feb 25 08:51:46 np0005629333 podman[419422]: 2026-02-25 13:51:46.552485979 +0000 UTC m=+0.194160866 container remove 9ad0dcfad8afc1cc4dad5363e26b2d52ce1f1e766809f81e34fdf52770cf25d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_diffie, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 08:51:46 np0005629333 systemd[1]: libpod-conmon-9ad0dcfad8afc1cc4dad5363e26b2d52ce1f1e766809f81e34fdf52770cf25d3.scope: Deactivated successfully.
Feb 25 08:51:46 np0005629333 podman[419462]: 2026-02-25 13:51:46.687923787 +0000 UTC m=+0.035672575 container create e0c0d6da356a6b5c965189e5c2e1ab1f8c136a59454f11c10f5da3ab0f97dfe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_newton, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:51:46 np0005629333 systemd[1]: Started libpod-conmon-e0c0d6da356a6b5c965189e5c2e1ab1f8c136a59454f11c10f5da3ab0f97dfe5.scope.
Feb 25 08:51:46 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:51:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0550627222f89476317f637eb3f36dc9e3b80427cac7338cf6ddf7aecc39661/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:51:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0550627222f89476317f637eb3f36dc9e3b80427cac7338cf6ddf7aecc39661/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:51:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0550627222f89476317f637eb3f36dc9e3b80427cac7338cf6ddf7aecc39661/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:51:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0550627222f89476317f637eb3f36dc9e3b80427cac7338cf6ddf7aecc39661/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:51:46 np0005629333 podman[419462]: 2026-02-25 13:51:46.672674759 +0000 UTC m=+0.020423567 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:51:46 np0005629333 podman[419462]: 2026-02-25 13:51:46.781784002 +0000 UTC m=+0.129532810 container init e0c0d6da356a6b5c965189e5c2e1ab1f8c136a59454f11c10f5da3ab0f97dfe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:51:46 np0005629333 podman[419462]: 2026-02-25 13:51:46.788489044 +0000 UTC m=+0.136237852 container start e0c0d6da356a6b5c965189e5c2e1ab1f8c136a59454f11c10f5da3ab0f97dfe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_newton, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:51:46 np0005629333 podman[419462]: 2026-02-25 13:51:46.791658945 +0000 UTC m=+0.139407733 container attach e0c0d6da356a6b5c965189e5c2e1ab1f8c136a59454f11c10f5da3ab0f97dfe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_newton, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 08:51:46 np0005629333 nova_compute[244014]: 2026-02-25 13:51:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:51:47 np0005629333 lvm[419556]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:51:47 np0005629333 lvm[419556]: VG ceph_vg0 finished
Feb 25 08:51:47 np0005629333 lvm[419557]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:51:47 np0005629333 lvm[419557]: VG ceph_vg1 finished
Feb 25 08:51:47 np0005629333 lvm[419559]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:51:47 np0005629333 lvm[419559]: VG ceph_vg2 finished
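
The three lvm messages above are udev-triggered pvscan autoactivation: each loop-device PV comes online and completes its single-PV volume group. A minimal sketch (not part of the log, and assuming lvm2's JSON report format) that lists the same VG/PV relationship after the fact:

    import json
    import subprocess

    out = subprocess.run(
        ["vgs", "-o", "vg_name,pv_count,lv_count", "--reportformat", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for vg in json.loads(out)["report"][0]["vg"]:
        # expect ceph_vg0/1/2, each with one PV and one LV, per the log above
        print(vg["vg_name"], "pv_count:", vg["pv_count"], "lv_count:", vg["lv_count"])
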
Feb 25 08:51:47 np0005629333 boring_newton[419478]: {}
Feb 25 08:51:47 np0005629333 systemd[1]: libpod-e0c0d6da356a6b5c965189e5c2e1ab1f8c136a59454f11c10f5da3ab0f97dfe5.scope: Deactivated successfully.
Feb 25 08:51:47 np0005629333 systemd[1]: libpod-e0c0d6da356a6b5c965189e5c2e1ab1f8c136a59454f11c10f5da3ab0f97dfe5.scope: Consumed 1.161s CPU time.
Feb 25 08:51:47 np0005629333 podman[419462]: 2026-02-25 13:51:47.558376106 +0000 UTC m=+0.906124904 container died e0c0d6da356a6b5c965189e5c2e1ab1f8c136a59454f11c10f5da3ab0f97dfe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_newton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:51:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:51:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/485389855' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:51:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:51:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/485389855' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:51:47 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f0550627222f89476317f637eb3f36dc9e3b80427cac7338cf6ddf7aecc39661-merged.mount: Deactivated successfully.
Feb 25 08:51:47 np0005629333 podman[419462]: 2026-02-25 13:51:47.601644198 +0000 UTC m=+0.949392996 container remove e0c0d6da356a6b5c965189e5c2e1ab1f8c136a59454f11c10f5da3ab0f97dfe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:51:47 np0005629333 systemd[1]: libpod-conmon-e0c0d6da356a6b5c965189e5c2e1ab1f8c136a59454f11c10f5da3ab0f97dfe5.scope: Deactivated successfully.
Feb 25 08:51:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:51:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:51:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:51:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:51:47 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:51:47 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:51:48 np0005629333 nova_compute[244014]: 2026-02-25 13:51:48.339 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:51:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3957: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:51:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:51:50 np0005629333 nova_compute[244014]: 2026-02-25 13:51:50.469 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:51:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3958: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:51:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3959: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:51:53 np0005629333 nova_compute[244014]: 2026-02-25 13:51:53.371 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:51:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3960: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:51:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:51:55.097 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:51:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:51:55.097 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:51:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:51:55.098 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
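
The Acquiring/acquired/released DEBUG triplet above is the signature of oslo.concurrency's lock helper wrapping ProcessMonitor._check_child_processes; here the lock was uncontended (waited 0.001s, held 0.000s). A minimal sketch (not part of the log) that produces the same three-line pattern once debug logging is enabled:

    import logging

    from oslo_concurrency import lockutils

    logging.basicConfig(level=logging.DEBUG)

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # the lock is held only while this body runs

    check_child_processes()  # logs: Acquiring lock / Lock acquired / Lock "released"
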
Feb 25 08:51:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:51:55 np0005629333 nova_compute[244014]: 2026-02-25 13:51:55.472 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:51:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3961: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:51:58 np0005629333 nova_compute[244014]: 2026-02-25 13:51:58.375 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:51:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3962: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:52:00 np0005629333 nova_compute[244014]: 2026-02-25 13:52:00.474 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:52:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3963: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:52:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:52:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:52:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:52:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:52:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:52:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3964: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:03 np0005629333 nova_compute[244014]: 2026-02-25 13:52:03.416 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:52:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3965: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:52:05 np0005629333 nova_compute[244014]: 2026-02-25 13:52:05.477 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:52:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3966: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:08 np0005629333 nova_compute[244014]: 2026-02-25 13:52:08.420 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:52:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3967: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:52:10 np0005629333 nova_compute[244014]: 2026-02-25 13:52:10.480 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:52:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3968: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3969: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:13 np0005629333 nova_compute[244014]: 2026-02-25 13:52:13.466 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:52:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3970: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:52:15 np0005629333 nova_compute[244014]: 2026-02-25 13:52:15.481 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:52:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3971: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:16 np0005629333 podman[419598]: 2026-02-25 13:52:16.743181496 +0000 UTC m=+0.080373853 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223)
Feb 25 08:52:16 np0005629333 podman[419599]: 2026-02-25 13:52:16.787061076 +0000 UTC m=+0.124341436 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 08:52:18 np0005629333 nova_compute[244014]: 2026-02-25 13:52:18.468 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:52:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3972: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:52:20 np0005629333 nova_compute[244014]: 2026-02-25 13:52:20.482 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:52:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3973: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:20 np0005629333 nova_compute[244014]: 2026-02-25 13:52:20.878 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:52:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3974: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:23 np0005629333 nova_compute[244014]: 2026-02-25 13:52:23.516 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:52:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3975: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:24 np0005629333 nova_compute[244014]: 2026-02-25 13:52:24.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:52:24 np0005629333 nova_compute[244014]: 2026-02-25 13:52:24.908 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:52:24 np0005629333 nova_compute[244014]: 2026-02-25 13:52:24.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:52:24 np0005629333 nova_compute[244014]: 2026-02-25 13:52:24.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:52:24 np0005629333 nova_compute[244014]: 2026-02-25 13:52:24.910 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 08:52:24 np0005629333 nova_compute[244014]: 2026-02-25 13:52:24.910 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:52:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:52:25 np0005629333 nova_compute[244014]: 2026-02-25 13:52:25.484 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:52:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:52:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3263051956' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:52:25 np0005629333 nova_compute[244014]: 2026-02-25 13:52:25.610 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.700s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
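
The resource audit gathers Ceph capacity by shelling out to the ceph CLI; the 0.700s round trip above is the df request the mon logged at 13:52:25. A minimal sketch (not part of the log) of the same call, reading cluster-wide free space from the stats block of the JSON reply:

    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(raw)["stats"]  # cluster-wide totals
    print("free GiB:", stats["total_avail_bytes"] / 2**30)
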
Feb 25 08:52:25 np0005629333 nova_compute[244014]: 2026-02-25 13:52:25.761 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 08:52:25 np0005629333 nova_compute[244014]: 2026-02-25 13:52:25.763 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3543MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 08:52:25 np0005629333 nova_compute[244014]: 2026-02-25 13:52:25.763 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:52:25 np0005629333 nova_compute[244014]: 2026-02-25 13:52:25.763 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:52:25 np0005629333 nova_compute[244014]: 2026-02-25 13:52:25.851 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 08:52:25 np0005629333 nova_compute[244014]: 2026-02-25 13:52:25.852 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 08:52:25 np0005629333 nova_compute[244014]: 2026-02-25 13:52:25.872 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:52:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:52:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2878544754' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:52:26 np0005629333 nova_compute[244014]: 2026-02-25 13:52:26.483 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:52:26 np0005629333 nova_compute[244014]: 2026-02-25 13:52:26.490 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:52:26 np0005629333 nova_compute[244014]: 2026-02-25 13:52:26.513 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 08:52:26 np0005629333 nova_compute[244014]: 2026-02-25 13:52:26.515 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 08:52:26 np0005629333 nova_compute[244014]: 2026-02-25 13:52:26.515 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
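
The inventory logged at 13:52:26.513 is what placement uses to size this host: capacity per resource class is (total - reserved) * allocation_ratio. A worked sketch (not part of the log) reproducing the figures implied above:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: schedulable capacity {capacity:g}")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
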
Feb 25 08:52:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3976: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:28 np0005629333 nova_compute[244014]: 2026-02-25 13:52:28.516 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:52:28 np0005629333 nova_compute[244014]: 2026-02-25 13:52:28.517 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 08:52:28 np0005629333 nova_compute[244014]: 2026-02-25 13:52:28.518 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 08:52:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3977: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:28 np0005629333 nova_compute[244014]: 2026-02-25 13:52:28.521 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:52:28 np0005629333 nova_compute[244014]: 2026-02-25 13:52:28.585 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 08:52:29 np0005629333 nova_compute[244014]: 2026-02-25 13:52:29.941 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:52:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:52:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3978: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:30 np0005629333 nova_compute[244014]: 2026-02-25 13:52:30.523 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:52:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:52:31
Feb 25 08:52:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:52:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:52:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'vms', 'volumes', '.rgw.root']
Feb 25 08:52:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:52:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:52:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:52:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:52:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:52:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:52:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:52:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3979: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:52:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:52:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:52:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:52:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:52:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:52:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:52:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:52:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:52:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
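The repeated load_schedules lines show the mgr rbd_support module reloading trash-purge and mirror-snapshot schedules pool by pool (vms, volumes, backups, images) with an empty start_after cursor, i.e. a scan from the beginning. An illustrative reload loop, assuming a simple in-memory pool-to-schedule map rather than the module's real RADOS-backed store:

    # Hypothetical stand-in for the per-pool schedule store; the pools
    # mirror the ones named in the log, all with no schedules defined.
    SCHEDULE_STORE = {"vms": [], "volumes": [], "backups": [], "images": []}

    def load_schedules(handler_name, start_after=""):
        for pool in SCHEDULE_STORE:
            # start_after="" means "scan from the first entry", matching
            # the empty cursor in the lines above.
            print(f"[{handler_name}] load_schedules: {pool}, "
                  f"start_after={start_after}")
        return SCHEDULE_STORE

    load_schedules("TrashPurgeScheduleHandler")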
Feb 25 08:52:33 np0005629333 nova_compute[244014]: 2026-02-25 13:52:33.523 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:52:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3980: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:34 np0005629333 nova_compute[244014]: 2026-02-25 13:52:34.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:52:34 np0005629333 nova_compute[244014]: 2026-02-25 13:52:34.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 08:52:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:52:35 np0005629333 nova_compute[244014]: 2026-02-25 13:52:35.544 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:52:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3981: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3982: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:38 np0005629333 nova_compute[244014]: 2026-02-25 13:52:38.527 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:52:39 np0005629333 nova_compute[244014]: 2026-02-25 13:52:39.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:52:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:52:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3983: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:40 np0005629333 nova_compute[244014]: 2026-02-25 13:52:40.573 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:52:41 np0005629333 nova_compute[244014]: 2026-02-25 13:52:41.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:52:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3984: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:43 np0005629333 nova_compute[244014]: 2026-02-25 13:52:43.558 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:52:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:52:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:52:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:52:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:52:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:52:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:52:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:52:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:52:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:52:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:52:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:52:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:52:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:52:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:52:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:52:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:52:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:52:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:52:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:52:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:52:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:52:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:52:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
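Each pg_autoscaler pair of lines is one arithmetic step: a pool's PG target is its share of raw capacity times its bias times a cluster-wide PG budget, then quantized. The logged values fit a budget of 300 PGs exactly (plausibly 3 OSDs at the default 100 target PGs per OSD, an assumption the log does not state; the trailing 64411926528 is the ~60 GiB raw capacity in bytes): for 'images', 0.0006714637386478266 x 1.0 x 300 = 0.20143912159434796, the logged pg target. A worked sketch:

    def pg_target(capacity_ratio, bias, pg_budget=300):
        """Reproduce the logged 'pg target' arithmetic.

        pg_budget=300 is an assumption (3 OSDs x 100 target PGs/OSD).
        The real autoscaler also keeps the current pg_num when the
        computed target is close to it, which is why 'images' stays at
        32 despite a tiny raw target; that clamp is omitted here.
        """
        raw = capacity_ratio * bias * pg_budget
        quantized = 1
        while quantized < raw:  # round up to the next power of two
            quantized *= 2
        return raw, quantized

    print(pg_target(7.185749983720779e-06, 1.0))   # ~0.0021557 -> 1, as '.mgr'
    print(pg_target(0.0006714637386478266, 1.0))   # ~0.2014, the 'images' target
    print(pg_target(1.3916366864300228e-06, 4.0))  # ~0.0016700, bias 4 meta pool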
Feb 25 08:52:43 np0005629333 nova_compute[244014]: 2026-02-25 13:52:43.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:52:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3985: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:52:45 np0005629333 nova_compute[244014]: 2026-02-25 13:52:45.576 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:52:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3986: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:52:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1092072367' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:52:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:52:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1092072367' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
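These audit entries show an OpenStack service authenticating as client.openstack (from 192.168.122.10) polling capacity with JSON monitor commands: a cluster-wide df and a get-quota on the volumes pool. The same calls can be issued from Python via the rados bindings; a minimal sketch, assuming a readable ceph.conf and a keyring for client.openstack on the calling host:

    import json
    import rados

    # Entity name mirrors the audit log; adjust paths to your deployment.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()

    # Equivalent of the "df" mon_command dispatched above.
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b"")
    df = json.loads(out)

    # Equivalent of the per-pool "osd pool get-quota" dispatch.
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "osd pool get-quota",
                    "pool": "volumes", "format": "json"}), b"")
    quota = json.loads(out)

    cluster.shutdown()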
Feb 25 08:52:47 np0005629333 podman[419687]: 2026-02-25 13:52:47.730530164 +0000 UTC m=+0.065314158 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 08:52:47 np0005629333 podman[419688]: 2026-02-25 13:52:47.794881023 +0000 UTC m=+0.124733837 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 08:52:47 np0005629333 nova_compute[244014]: 2026-02-25 13:52:47.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:52:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3987: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:48 np0005629333 nova_compute[244014]: 2026-02-25 13:52:48.559 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:52:48 np0005629333 podman[419876]: 2026-02-25 13:52:48.847180549 +0000 UTC m=+0.042700348 container create 0c170368ae4abf749a818da45d1139123bc0640f97c936ef1840ae3e8d6c767d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shirley, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:52:48 np0005629333 systemd[1]: Started libpod-conmon-0c170368ae4abf749a818da45d1139123bc0640f97c936ef1840ae3e8d6c767d.scope.
Feb 25 08:52:48 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:52:48 np0005629333 podman[419876]: 2026-02-25 13:52:48.915857861 +0000 UTC m=+0.111377680 container init 0c170368ae4abf749a818da45d1139123bc0640f97c936ef1840ae3e8d6c767d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shirley, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:52:48 np0005629333 podman[419876]: 2026-02-25 13:52:48.828585654 +0000 UTC m=+0.024105463 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:52:48 np0005629333 podman[419876]: 2026-02-25 13:52:48.925581106 +0000 UTC m=+0.121100875 container start 0c170368ae4abf749a818da45d1139123bc0640f97c936ef1840ae3e8d6c767d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shirley, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 08:52:48 np0005629333 podman[419876]: 2026-02-25 13:52:48.928517539 +0000 UTC m=+0.124037358 container attach 0c170368ae4abf749a818da45d1139123bc0640f97c936ef1840ae3e8d6c767d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 08:52:48 np0005629333 elated_shirley[419892]: 167 167
Feb 25 08:52:48 np0005629333 systemd[1]: libpod-0c170368ae4abf749a818da45d1139123bc0640f97c936ef1840ae3e8d6c767d.scope: Deactivated successfully.
Feb 25 08:52:48 np0005629333 conmon[419892]: conmon 0c170368ae4abf749a81 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0c170368ae4abf749a818da45d1139123bc0640f97c936ef1840ae3e8d6c767d.scope/container/memory.events
Feb 25 08:52:48 np0005629333 podman[419876]: 2026-02-25 13:52:48.933346075 +0000 UTC m=+0.128865844 container died 0c170368ae4abf749a818da45d1139123bc0640f97c936ef1840ae3e8d6c767d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:52:48 np0005629333 systemd[1]: var-lib-containers-storage-overlay-01ef9fc6236db1d9890f774fd7a3fab50fd25f6bbc271b89575251500f4fd0a0-merged.mount: Deactivated successfully.
Feb 25 08:52:48 np0005629333 podman[419876]: 2026-02-25 13:52:48.975714753 +0000 UTC m=+0.171234542 container remove 0c170368ae4abf749a818da45d1139123bc0640f97c936ef1840ae3e8d6c767d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_shirley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:52:48 np0005629333 systemd[1]: libpod-conmon-0c170368ae4abf749a818da45d1139123bc0640f97c936ef1840ae3e8d6c767d.scope: Deactivated successfully.
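The create/init/start/attach/died/remove burst above is cephadm probing the host with a short-lived ceph container (podman auto-named it elated_shirley), which prints one line and exits within milliseconds; the conmon memory.events warning is a harmless side effect of the scope disappearing that quickly. A one-shot equivalent from Python, as a sketch: the inner command is an assumption, since the log only shows the output "167 167", which looks like a uid/gid probe (167 is the ceph user and group on RHEL-family images):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    # --rm yields the same create -> start -> died -> remove lifecycle
    # journald records above; 'stat' on /var/lib/ceph is a guess at the
    # check that printed "167 167".
    result = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True)
    print(result.stdout.strip())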
Feb 25 08:52:49 np0005629333 podman[419916]: 2026-02-25 13:52:49.144905426 +0000 UTC m=+0.045754644 container create 735cc5fa3e0265552c0383e70184631d4629d274d50bbf9094072b4b588d8dd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_leavitt, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 08:52:49 np0005629333 systemd[1]: Started libpod-conmon-735cc5fa3e0265552c0383e70184631d4629d274d50bbf9094072b4b588d8dd5.scope.
Feb 25 08:52:49 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:52:49 np0005629333 podman[419916]: 2026-02-25 13:52:49.125048205 +0000 UTC m=+0.025897413 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:52:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c24db5a905c91a9d4f49c05eddc40e0cf2590dea187b27b70dbf704e1facca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:52:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c24db5a905c91a9d4f49c05eddc40e0cf2590dea187b27b70dbf704e1facca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:52:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c24db5a905c91a9d4f49c05eddc40e0cf2590dea187b27b70dbf704e1facca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:52:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c24db5a905c91a9d4f49c05eddc40e0cf2590dea187b27b70dbf704e1facca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:52:49 np0005629333 podman[419916]: 2026-02-25 13:52:49.235807036 +0000 UTC m=+0.136656234 container init 735cc5fa3e0265552c0383e70184631d4629d274d50bbf9094072b4b588d8dd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:52:49 np0005629333 podman[419916]: 2026-02-25 13:52:49.244615915 +0000 UTC m=+0.145465103 container start 735cc5fa3e0265552c0383e70184631d4629d274d50bbf9094072b4b588d8dd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_leavitt, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:52:49 np0005629333 podman[419916]: 2026-02-25 13:52:49.248023661 +0000 UTC m=+0.148872859 container attach 735cc5fa3e0265552c0383e70184631d4629d274d50bbf9094072b4b588d8dd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]: [
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:    {
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:        "available": false,
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:        "being_replaced": false,
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:        "ceph_device_lvm": false,
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:        "device_id": "QEMU_DVD-ROM_QM00001",
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:        "lsm_data": {},
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:        "lvs": [],
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:        "path": "/dev/sr0",
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:        "rejected_reasons": [
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:            "Insufficient space (<5GB)",
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:            "Has a FileSystem"
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:        ],
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:        "sys_api": {
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:            "actuators": null,
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:            "device_nodes": [
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:                "sr0"
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:            ],
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:            "devname": "sr0",
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:            "human_readable_size": "482.00 KB",
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:            "id_bus": "ata",
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:            "model": "QEMU DVD-ROM",
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:            "nr_requests": "2",
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:            "parent": "/dev/sr0",
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:            "partitions": {},
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:            "path": "/dev/sr0",
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:            "removable": "1",
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:            "rev": "2.5+",
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:            "ro": "0",
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:            "rotational": "1",
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:            "sas_address": "",
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:            "sas_device_handle": "",
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:            "scheduler_mode": "mq-deadline",
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:            "sectors": 0,
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:            "sectorsize": "2048",
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:            "size": 493568.0,
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:            "support_discard": "2048",
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:            "type": "disk",
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:            "vendor": "QEMU"
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:        }
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]:    }
Feb 25 08:52:49 np0005629333 elegant_leavitt[419932]: ]
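The JSON array above is a ceph-volume-style inventory report for this host: a single device, /dev/sr0 (QEMU DVD-ROM, ~482 KB), rejected for being under 5 GB and for carrying a filesystem, hence "available": false. A short consumer of that report, using only the keys shown; the embedded sample is truncated to those keys:

    import json

    # Minimal sample mirroring the report above.
    report_json = '''[{"available": false, "path": "/dev/sr0",
                       "rejected_reasons": ["Insufficient space (<5GB)",
                                            "Has a FileSystem"]}]'''

    for dev in json.loads(report_json):
        if dev["available"]:
            print(f"usable: {dev['path']}")
        else:
            # e.g. "/dev/sr0 rejected: Insufficient space (<5GB), Has a FileSystem"
            print(f"{dev['path']} rejected: " + ", ".join(dev["rejected_reasons"]))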
Feb 25 08:52:49 np0005629333 systemd[1]: libpod-735cc5fa3e0265552c0383e70184631d4629d274d50bbf9094072b4b588d8dd5.scope: Deactivated successfully.
Feb 25 08:52:49 np0005629333 podman[419916]: 2026-02-25 13:52:49.753533232 +0000 UTC m=+0.654382410 container died 735cc5fa3e0265552c0383e70184631d4629d274d50bbf9094072b4b588d8dd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 08:52:49 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a9c24db5a905c91a9d4f49c05eddc40e0cf2590dea187b27b70dbf704e1facca-merged.mount: Deactivated successfully.
Feb 25 08:52:49 np0005629333 podman[419916]: 2026-02-25 13:52:49.791774713 +0000 UTC m=+0.692623891 container remove 735cc5fa3e0265552c0383e70184631d4629d274d50bbf9094072b4b588d8dd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 08:52:49 np0005629333 systemd[1]: libpod-conmon-735cc5fa3e0265552c0383e70184631d4629d274d50bbf9094072b4b588d8dd5.scope: Deactivated successfully.
Feb 25 08:52:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:52:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:52:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:52:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:52:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:52:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:52:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:52:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:52:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:52:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:52:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:52:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:52:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:52:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:52:49 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:52:49 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:52:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:52:50 np0005629333 podman[420724]: 2026-02-25 13:52:50.328927328 +0000 UTC m=+0.045501277 container create a91e474e7d86f31a8b00f23232ddb2a7853f91df34905519287333af0925d4c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_leavitt, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:52:50 np0005629333 systemd[1]: Started libpod-conmon-a91e474e7d86f31a8b00f23232ddb2a7853f91df34905519287333af0925d4c3.scope.
Feb 25 08:52:50 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:52:50 np0005629333 podman[420724]: 2026-02-25 13:52:50.310762695 +0000 UTC m=+0.027336724 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:52:50 np0005629333 podman[420724]: 2026-02-25 13:52:50.408490688 +0000 UTC m=+0.125064727 container init a91e474e7d86f31a8b00f23232ddb2a7853f91df34905519287333af0925d4c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 08:52:50 np0005629333 podman[420724]: 2026-02-25 13:52:50.417565584 +0000 UTC m=+0.134139523 container start a91e474e7d86f31a8b00f23232ddb2a7853f91df34905519287333af0925d4c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_leavitt, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 08:52:50 np0005629333 podman[420724]: 2026-02-25 13:52:50.421113454 +0000 UTC m=+0.137687493 container attach a91e474e7d86f31a8b00f23232ddb2a7853f91df34905519287333af0925d4c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_leavitt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 08:52:50 np0005629333 distracted_leavitt[420740]: 167 167
Feb 25 08:52:50 np0005629333 systemd[1]: libpod-a91e474e7d86f31a8b00f23232ddb2a7853f91df34905519287333af0925d4c3.scope: Deactivated successfully.
Feb 25 08:52:50 np0005629333 podman[420724]: 2026-02-25 13:52:50.424344006 +0000 UTC m=+0.140917995 container died a91e474e7d86f31a8b00f23232ddb2a7853f91df34905519287333af0925d4c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_leavitt, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:52:50 np0005629333 systemd[1]: var-lib-containers-storage-overlay-b9c17d532b11a575727f20da86b918c1d9b17c5d802b7096e9fcc3d240e5f8d9-merged.mount: Deactivated successfully.
Feb 25 08:52:50 np0005629333 podman[420724]: 2026-02-25 13:52:50.47648915 +0000 UTC m=+0.193063099 container remove a91e474e7d86f31a8b00f23232ddb2a7853f91df34905519287333af0925d4c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_leavitt, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 08:52:50 np0005629333 systemd[1]: libpod-conmon-a91e474e7d86f31a8b00f23232ddb2a7853f91df34905519287333af0925d4c3.scope: Deactivated successfully.
Feb 25 08:52:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3988: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:50 np0005629333 nova_compute[244014]: 2026-02-25 13:52:50.578 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:52:50 np0005629333 podman[420765]: 2026-02-25 13:52:50.676537705 +0000 UTC m=+0.046963899 container create 1b045f9406c36a4f18ecdb79d3f696f1eff5fc3dcc75bd458da7709692748a8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 08:52:50 np0005629333 systemd[1]: Started libpod-conmon-1b045f9406c36a4f18ecdb79d3f696f1eff5fc3dcc75bd458da7709692748a8f.scope.
Feb 25 08:52:50 np0005629333 podman[420765]: 2026-02-25 13:52:50.651470086 +0000 UTC m=+0.021896300 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:52:50 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:52:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b59edfae42e310b1bb4b8711ae0c6795125ea533951dcb63ae6abf051e2bc0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:52:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b59edfae42e310b1bb4b8711ae0c6795125ea533951dcb63ae6abf051e2bc0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:52:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b59edfae42e310b1bb4b8711ae0c6795125ea533951dcb63ae6abf051e2bc0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:52:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b59edfae42e310b1bb4b8711ae0c6795125ea533951dcb63ae6abf051e2bc0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:52:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b59edfae42e310b1bb4b8711ae0c6795125ea533951dcb63ae6abf051e2bc0b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:52:50 np0005629333 podman[420765]: 2026-02-25 13:52:50.774558986 +0000 UTC m=+0.144985160 container init 1b045f9406c36a4f18ecdb79d3f696f1eff5fc3dcc75bd458da7709692748a8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_galileo, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:52:50 np0005629333 podman[420765]: 2026-02-25 13:52:50.788729246 +0000 UTC m=+0.159155420 container start 1b045f9406c36a4f18ecdb79d3f696f1eff5fc3dcc75bd458da7709692748a8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:52:50 np0005629333 podman[420765]: 2026-02-25 13:52:50.792353189 +0000 UTC m=+0.162779363 container attach 1b045f9406c36a4f18ecdb79d3f696f1eff5fc3dcc75bd458da7709692748a8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:52:50 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:52:50 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:52:50 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:52:50 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:52:50 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:52:51 np0005629333 strange_galileo[420782]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:52:51 np0005629333 strange_galileo[420782]: --> All data devices are unavailable
Feb 25 08:52:51 np0005629333 systemd[1]: libpod-1b045f9406c36a4f18ecdb79d3f696f1eff5fc3dcc75bd458da7709692748a8f.scope: Deactivated successfully.
Feb 25 08:52:51 np0005629333 podman[420765]: 2026-02-25 13:52:51.240991082 +0000 UTC m=+0.611417306 container died 1b045f9406c36a4f18ecdb79d3f696f1eff5fc3dcc75bd458da7709692748a8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 08:52:51 np0005629333 systemd[1]: var-lib-containers-storage-overlay-0b59edfae42e310b1bb4b8711ae0c6795125ea533951dcb63ae6abf051e2bc0b-merged.mount: Deactivated successfully.
Feb 25 08:52:51 np0005629333 podman[420765]: 2026-02-25 13:52:51.295106102 +0000 UTC m=+0.665532266 container remove 1b045f9406c36a4f18ecdb79d3f696f1eff5fc3dcc75bd458da7709692748a8f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 08:52:51 np0005629333 systemd[1]: libpod-conmon-1b045f9406c36a4f18ecdb79d3f696f1eff5fc3dcc75bd458da7709692748a8f.scope: Deactivated successfully.
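strange_galileo's two output lines are a ceph-volume batch-style verdict: none of the candidate data devices are bare physical disks, all three are LVM devices already in use (consistent with OSDs already deployed on them), so nothing new can be provisioned. A small, runnable imitation of that classification, reusing the inventory schema sketched earlier:

    def usable_data_devices(devices):
        """Split candidates the way the batch report above does."""
        physical = [d for d in devices if not d.get("lvs")]
        lvm = [d for d in devices if d.get("lvs")]
        usable = [d for d in devices if d["available"]]
        print(f"--> passed data devices: {len(physical)} physical, {len(lvm)} LVM")
        if not usable:
            print("--> All data devices are unavailable")
        return usable

    # Three already-consumed LVM devices, as in the strange_galileo run;
    # the LV names here are illustrative, not taken from this host.
    demo = [{"available": False, "lvs": [{"name": f"osd-block-{i}"}]}
            for i in range(3)]
    usable_data_devices(demo)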
Feb 25 08:52:51 np0005629333 podman[420875]: 2026-02-25 13:52:51.820324709 +0000 UTC m=+0.056848008 container create a9d829c648d232b86ade61d0fad41dc494250b9d434a5b7a3c08584fe69275be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_ramanujan, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 08:52:51 np0005629333 systemd[1]: Started libpod-conmon-a9d829c648d232b86ade61d0fad41dc494250b9d434a5b7a3c08584fe69275be.scope.
Feb 25 08:52:51 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:52:51 np0005629333 podman[420875]: 2026-02-25 13:52:51.797509224 +0000 UTC m=+0.034032583 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:52:51 np0005629333 podman[420875]: 2026-02-25 13:52:51.899296961 +0000 UTC m=+0.135820280 container init a9d829c648d232b86ade61d0fad41dc494250b9d434a5b7a3c08584fe69275be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_ramanujan, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 08:52:51 np0005629333 podman[420875]: 2026-02-25 13:52:51.907294547 +0000 UTC m=+0.143817856 container start a9d829c648d232b86ade61d0fad41dc494250b9d434a5b7a3c08584fe69275be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:52:51 np0005629333 podman[420875]: 2026-02-25 13:52:51.911116525 +0000 UTC m=+0.147639874 container attach a9d829c648d232b86ade61d0fad41dc494250b9d434a5b7a3c08584fe69275be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 08:52:51 np0005629333 condescending_ramanujan[420892]: 167 167
Feb 25 08:52:51 np0005629333 systemd[1]: libpod-a9d829c648d232b86ade61d0fad41dc494250b9d434a5b7a3c08584fe69275be.scope: Deactivated successfully.
Feb 25 08:52:51 np0005629333 podman[420875]: 2026-02-25 13:52:51.914251313 +0000 UTC m=+0.150774652 container died a9d829c648d232b86ade61d0fad41dc494250b9d434a5b7a3c08584fe69275be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:52:51 np0005629333 systemd[1]: var-lib-containers-storage-overlay-89792a4a1a02b8fe218c01e30f0fb40a4f2c51c2f66c1ccbf78c958bc9c39dcf-merged.mount: Deactivated successfully.
Feb 25 08:52:51 np0005629333 podman[420875]: 2026-02-25 13:52:51.955430007 +0000 UTC m=+0.191953336 container remove a9d829c648d232b86ade61d0fad41dc494250b9d434a5b7a3c08584fe69275be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_ramanujan, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:52:51 np0005629333 systemd[1]: libpod-conmon-a9d829c648d232b86ade61d0fad41dc494250b9d434a5b7a3c08584fe69275be.scope: Deactivated successfully.
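The create → init → start → attach → died → remove sequence above, whose only payload is the single line `167 167` (the ceph uid and gid inside the image), is the signature of a short-lived probe container. A minimal sketch of the same pattern, assuming the probed command (cephadm's actual invocation is not shown in this log):

```python
# Hypothetical sketch of a short-lived probe container like the one above.
# The image digest is copied from the log; the stat command is an assumption.
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

# --rm reproduces the create/start/died/remove lifecycle seen in the journal.
out = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
     "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True,
).stdout
print(out.strip())  # e.g. "167 167"
```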
Feb 25 08:52:52 np0005629333 podman[420916]: 2026-02-25 13:52:52.130286161 +0000 UTC m=+0.049544612 container create f615e505882812da70abb9577b08a9930144bb07ba4b0f5d2a887755745fdb20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_golick, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 08:52:52 np0005629333 systemd[1]: Started libpod-conmon-f615e505882812da70abb9577b08a9930144bb07ba4b0f5d2a887755745fdb20.scope.
Feb 25 08:52:52 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:52:52 np0005629333 podman[420916]: 2026-02-25 13:52:52.106436326 +0000 UTC m=+0.025694777 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:52:52 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a475b77ec6264a0859e21b946d3f586088635d0e3c2a0c9a8f7aaf8c30444087/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:52:52 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a475b77ec6264a0859e21b946d3f586088635d0e3c2a0c9a8f7aaf8c30444087/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:52:52 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a475b77ec6264a0859e21b946d3f586088635d0e3c2a0c9a8f7aaf8c30444087/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:52:52 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a475b77ec6264a0859e21b946d3f586088635d0e3c2a0c9a8f7aaf8c30444087/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:52:52 np0005629333 podman[420916]: 2026-02-25 13:52:52.229451464 +0000 UTC m=+0.148709925 container init f615e505882812da70abb9577b08a9930144bb07ba4b0f5d2a887755745fdb20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_golick, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 08:52:52 np0005629333 podman[420916]: 2026-02-25 13:52:52.244914851 +0000 UTC m=+0.164173282 container start f615e505882812da70abb9577b08a9930144bb07ba4b0f5d2a887755745fdb20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_golick, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:52:52 np0005629333 podman[420916]: 2026-02-25 13:52:52.249262534 +0000 UTC m=+0.168520975 container attach f615e505882812da70abb9577b08a9930144bb07ba4b0f5d2a887755745fdb20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:52:52 np0005629333 crazy_golick[420933]: {
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:    "0": [
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:        {
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "devices": [
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "/dev/loop3"
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            ],
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "lv_name": "ceph_lv0",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "lv_size": "21470642176",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "name": "ceph_lv0",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "tags": {
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.cluster_name": "ceph",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.crush_device_class": "",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.encrypted": "0",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.objectstore": "bluestore",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.osd_id": "0",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.type": "block",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.vdo": "0",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.with_tpm": "0"
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            },
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "type": "block",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "vg_name": "ceph_vg0"
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:        }
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:    ],
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:    "1": [
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:        {
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "devices": [
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "/dev/loop4"
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            ],
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "lv_name": "ceph_lv1",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "lv_size": "21470642176",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "name": "ceph_lv1",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "tags": {
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.cluster_name": "ceph",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.crush_device_class": "",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.encrypted": "0",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.objectstore": "bluestore",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.osd_id": "1",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.type": "block",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.vdo": "0",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.with_tpm": "0"
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            },
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "type": "block",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "vg_name": "ceph_vg1"
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:        }
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:    ],
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:    "2": [
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:        {
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "devices": [
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "/dev/loop5"
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            ],
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "lv_name": "ceph_lv2",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "lv_size": "21470642176",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "name": "ceph_lv2",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "tags": {
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.cluster_name": "ceph",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.crush_device_class": "",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.encrypted": "0",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.objectstore": "bluestore",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.osd_id": "2",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.type": "block",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.vdo": "0",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:                "ceph.with_tpm": "0"
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            },
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "type": "block",
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:            "vg_name": "ceph_vg2"
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:        }
Feb 25 08:52:52 np0005629333 crazy_golick[420933]:    ]
Feb 25 08:52:52 np0005629333 crazy_golick[420933]: }
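The JSON block printed by crazy_golick has the shape of `ceph-volume lvm list --format json` output: a map from OSD id to the logical volumes backing it, with the `ceph.*` LV tags duplicated in parsed form under `tags`. A small sketch that summarizes such a dump (the file name is an assumption standing in for a captured copy of the output above):

```python
# Summarize a saved copy of the OSD inventory JSON shown above.
# Assumes the container output was captured to "osd_inventory.json".
import json

with open("osd_inventory.json") as fh:
    inventory = json.load(fh)

for osd_id in sorted(inventory, key=int):
    for lv in inventory[osd_id]:
        tags = lv.get("tags", {})
        print(f"osd.{osd_id}: lv={lv['lv_path']} "
              f"devices={','.join(lv['devices'])} "
              f"fsid={tags.get('ceph.osd_fsid')} "
              f"store={tags.get('ceph.objectstore')}")
# -> osd.0: lv=/dev/ceph_vg0/ceph_lv0 devices=/dev/loop3 ... store=bluestore
```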
Feb 25 08:52:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3989: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:52 np0005629333 systemd[1]: libpod-f615e505882812da70abb9577b08a9930144bb07ba4b0f5d2a887755745fdb20.scope: Deactivated successfully.
Feb 25 08:52:52 np0005629333 podman[420916]: 2026-02-25 13:52:52.557515198 +0000 UTC m=+0.476773659 container died f615e505882812da70abb9577b08a9930144bb07ba4b0f5d2a887755745fdb20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_golick, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 08:52:52 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a475b77ec6264a0859e21b946d3f586088635d0e3c2a0c9a8f7aaf8c30444087-merged.mount: Deactivated successfully.
Feb 25 08:52:52 np0005629333 podman[420916]: 2026-02-25 13:52:52.603861158 +0000 UTC m=+0.523119579 container remove f615e505882812da70abb9577b08a9930144bb07ba4b0f5d2a887755745fdb20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_golick, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:52:52 np0005629333 systemd[1]: libpod-conmon-f615e505882812da70abb9577b08a9930144bb07ba4b0f5d2a887755745fdb20.scope: Deactivated successfully.
Feb 25 08:52:53 np0005629333 podman[421016]: 2026-02-25 13:52:53.157645743 +0000 UTC m=+0.057227349 container create 9203b1e9846f3fa95764801d8d88e8a46927956872beba683c9ba44dfd4db1df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_wozniak, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:52:53 np0005629333 systemd[1]: Started libpod-conmon-9203b1e9846f3fa95764801d8d88e8a46927956872beba683c9ba44dfd4db1df.scope.
Feb 25 08:52:53 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:52:53 np0005629333 podman[421016]: 2026-02-25 13:52:53.137141173 +0000 UTC m=+0.036722819 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:52:53 np0005629333 podman[421016]: 2026-02-25 13:52:53.244279042 +0000 UTC m=+0.143860648 container init 9203b1e9846f3fa95764801d8d88e8a46927956872beba683c9ba44dfd4db1df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_wozniak, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 08:52:53 np0005629333 podman[421016]: 2026-02-25 13:52:53.252672269 +0000 UTC m=+0.152253875 container start 9203b1e9846f3fa95764801d8d88e8a46927956872beba683c9ba44dfd4db1df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_wozniak, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:52:53 np0005629333 podman[421016]: 2026-02-25 13:52:53.256734844 +0000 UTC m=+0.156316480 container attach 9203b1e9846f3fa95764801d8d88e8a46927956872beba683c9ba44dfd4db1df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_wozniak, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Feb 25 08:52:53 np0005629333 dazzling_wozniak[421033]: 167 167
Feb 25 08:52:53 np0005629333 systemd[1]: libpod-9203b1e9846f3fa95764801d8d88e8a46927956872beba683c9ba44dfd4db1df.scope: Deactivated successfully.
Feb 25 08:52:53 np0005629333 podman[421016]: 2026-02-25 13:52:53.258544485 +0000 UTC m=+0.158126121 container died 9203b1e9846f3fa95764801d8d88e8a46927956872beba683c9ba44dfd4db1df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_wozniak, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:52:53 np0005629333 systemd[1]: var-lib-containers-storage-overlay-b51d7a8a18b1f3ba7cde7455cb1383c2ca8673d693f140716ec43a9ae0c0c685-merged.mount: Deactivated successfully.
Feb 25 08:52:53 np0005629333 podman[421016]: 2026-02-25 13:52:53.301950682 +0000 UTC m=+0.201532318 container remove 9203b1e9846f3fa95764801d8d88e8a46927956872beba683c9ba44dfd4db1df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_wozniak, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 08:52:53 np0005629333 systemd[1]: libpod-conmon-9203b1e9846f3fa95764801d8d88e8a46927956872beba683c9ba44dfd4db1df.scope: Deactivated successfully.
Feb 25 08:52:53 np0005629333 podman[421059]: 2026-02-25 13:52:53.502492122 +0000 UTC m=+0.059336699 container create 07941258c6f37f5dced7a2856680d51cc842649d0507963977e9f4f12d0e7e74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:52:53 np0005629333 systemd[1]: Started libpod-conmon-07941258c6f37f5dced7a2856680d51cc842649d0507963977e9f4f12d0e7e74.scope.
Feb 25 08:52:53 np0005629333 nova_compute[244014]: 2026-02-25 13:52:53.562 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:52:53 np0005629333 podman[421059]: 2026-02-25 13:52:53.481759135 +0000 UTC m=+0.038603752 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:52:53 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:52:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d248827eaf6570bc18441fcdd6a74fe0f21713229a00f9c2cd753d3efba78771/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:52:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d248827eaf6570bc18441fcdd6a74fe0f21713229a00f9c2cd753d3efba78771/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:52:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d248827eaf6570bc18441fcdd6a74fe0f21713229a00f9c2cd753d3efba78771/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:52:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d248827eaf6570bc18441fcdd6a74fe0f21713229a00f9c2cd753d3efba78771/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:52:53 np0005629333 podman[421059]: 2026-02-25 13:52:53.608619952 +0000 UTC m=+0.165464619 container init 07941258c6f37f5dced7a2856680d51cc842649d0507963977e9f4f12d0e7e74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_gauss, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Feb 25 08:52:53 np0005629333 podman[421059]: 2026-02-25 13:52:53.624441259 +0000 UTC m=+0.181285856 container start 07941258c6f37f5dced7a2856680d51cc842649d0507963977e9f4f12d0e7e74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:52:53 np0005629333 podman[421059]: 2026-02-25 13:52:53.63050675 +0000 UTC m=+0.187351357 container attach 07941258c6f37f5dced7a2856680d51cc842649d0507963977e9f4f12d0e7e74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 08:52:54 np0005629333 lvm[421154]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:52:54 np0005629333 lvm[421155]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:52:54 np0005629333 lvm[421155]: VG ceph_vg1 finished
Feb 25 08:52:54 np0005629333 lvm[421154]: VG ceph_vg0 finished
Feb 25 08:52:54 np0005629333 lvm[421157]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:52:54 np0005629333 lvm[421157]: VG ceph_vg2 finished
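The three `lvm` message pairs record event-driven autoactivation: each `pvscan` sees its loop-device PV come online, notes the VG is now complete, and finishes activating it. One way to confirm the resulting layout, sketched with lvm2's JSON reporting (report structure per `--reportformat json`; verify key names against your lvm2 version):

```python
# Hypothetical check of the VGs the pvscan events above declared complete.
import json
import subprocess

out = subprocess.run(
    ["lvs", "--reportformat", "json", "-o", "vg_name,lv_name,lv_size"],
    capture_output=True, text=True, check=True,
).stdout
for lv in json.loads(out)["report"][0]["lv"]:
    print(lv["vg_name"], lv["lv_name"], lv["lv_size"])
# expected here: ceph_vg0/ceph_lv0, ceph_vg1/ceph_lv1, ceph_vg2/ceph_lv2
```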
Feb 25 08:52:54 np0005629333 nervous_gauss[421076]: {}
Feb 25 08:52:54 np0005629333 systemd[1]: libpod-07941258c6f37f5dced7a2856680d51cc842649d0507963977e9f4f12d0e7e74.scope: Deactivated successfully.
Feb 25 08:52:54 np0005629333 podman[421059]: 2026-02-25 13:52:54.403137562 +0000 UTC m=+0.959982169 container died 07941258c6f37f5dced7a2856680d51cc842649d0507963977e9f4f12d0e7e74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_gauss, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 08:52:54 np0005629333 systemd[1]: libpod-07941258c6f37f5dced7a2856680d51cc842649d0507963977e9f4f12d0e7e74.scope: Consumed 1.228s CPU time.
Feb 25 08:52:54 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d248827eaf6570bc18441fcdd6a74fe0f21713229a00f9c2cd753d3efba78771-merged.mount: Deactivated successfully.
Feb 25 08:52:54 np0005629333 podman[421059]: 2026-02-25 13:52:54.452092556 +0000 UTC m=+1.008937153 container remove 07941258c6f37f5dced7a2856680d51cc842649d0507963977e9f4f12d0e7e74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:52:54 np0005629333 systemd[1]: libpod-conmon-07941258c6f37f5dced7a2856680d51cc842649d0507963977e9f4f12d0e7e74.scope: Deactivated successfully.
Feb 25 08:52:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:52:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:52:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3990: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:52:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:52:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:52:55.097 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:52:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:52:55.098 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:52:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:52:55.098 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:52:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:52:55 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:52:55 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:52:55 np0005629333 nova_compute[244014]: 2026-02-25 13:52:55.583 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:52:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3991: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3992: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:52:58 np0005629333 nova_compute[244014]: 2026-02-25 13:52:58.567 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:53:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:53:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3993: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:00 np0005629333 nova_compute[244014]: 2026-02-25 13:53:00.599 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:53:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:53:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:53:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:53:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:53:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:53:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:53:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3994: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:03 np0005629333 nova_compute[244014]: 2026-02-25 13:53:03.571 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:53:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3995: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:53:05 np0005629333 nova_compute[244014]: 2026-02-25 13:53:05.600 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:53:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3996: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3997: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:08 np0005629333 nova_compute[244014]: 2026-02-25 13:53:08.608 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:53:08 np0005629333 nova_compute[244014]: 2026-02-25 13:53:08.873 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:53:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:53:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3998: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:10 np0005629333 nova_compute[244014]: 2026-02-25 13:53:10.601 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:53:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v3999: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:13 np0005629333 nova_compute[244014]: 2026-02-25 13:53:13.650 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:53:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4000: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:53:15 np0005629333 nova_compute[244014]: 2026-02-25 13:53:15.603 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:53:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4001: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4002: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:18 np0005629333 nova_compute[244014]: 2026-02-25 13:53:18.687 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:53:18 np0005629333 podman[421199]: 2026-02-25 13:53:18.748515217 +0000 UTC m=+0.081444373 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 08:53:18 np0005629333 podman[421200]: 2026-02-25 13:53:18.779616587 +0000 UTC m=+0.111304918 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_id=ovn_controller, org.label-schema.vendor=CentOS)
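Both health_status events report `healthy` with a failing streak of 0; per the embedded config_data, the configured test is the `/openstack/healthcheck` script mounted into each container. The same check can be triggered on demand, sketched here via `podman healthcheck run` (exit code 0 means the configured test passed; container names are taken from the log):

```python
# Run the configured healthcheck for the two containers logged above.
import subprocess

def is_healthy(name: str) -> bool:
    # `podman healthcheck run` exits 0 when the container's test passes.
    result = subprocess.run(
        ["podman", "healthcheck", "run", name],
        capture_output=True, text=True,
    )
    return result.returncode == 0

for name in ("ovn_metadata_agent", "ovn_controller"):
    print(name, "healthy" if is_healthy(name) else "unhealthy")
```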
Feb 25 08:53:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:53:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4003: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:20 np0005629333 nova_compute[244014]: 2026-02-25 13:53:20.607 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:53:20 np0005629333 nova_compute[244014]: 2026-02-25 13:53:20.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:53:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4004: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:23 np0005629333 nova_compute[244014]: 2026-02-25 13:53:23.710 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:53:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4005: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:53:25 np0005629333 nova_compute[244014]: 2026-02-25 13:53:25.609 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:53:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4006: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:26 np0005629333 nova_compute[244014]: 2026-02-25 13:53:26.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:53:26 np0005629333 nova_compute[244014]: 2026-02-25 13:53:26.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:53:26 np0005629333 nova_compute[244014]: 2026-02-25 13:53:26.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:53:26 np0005629333 nova_compute[244014]: 2026-02-25 13:53:26.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:53:26 np0005629333 nova_compute[244014]: 2026-02-25 13:53:26.910 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:53:26 np0005629333 nova_compute[244014]: 2026-02-25 13:53:26.911 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:53:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:53:27 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4132084412' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:53:27 np0005629333 nova_compute[244014]: 2026-02-25 13:53:27.497 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
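The probe nova runs here is plain `ceph df --format=json` under the `client.openstack` identity; the mon audit lines above show the dispatch. A sketch of the same call, pulling the cluster-wide totals that feed the 59/60 GiB figures in the pgmap lines (JSON key names as in recent Ceph releases; verify against your version's output):

```python
# Re-run nova's storage probe and print cluster-wide capacity.
import json
import subprocess

out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True,
).stdout
stats = json.loads(out)["stats"]
GiB = 1024 ** 3
print(f"{stats['total_avail_bytes'] / GiB:.1f} GiB free "
      f"of {stats['total_bytes'] / GiB:.1f} GiB")
```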
Feb 25 08:53:27 np0005629333 nova_compute[244014]: 2026-02-25 13:53:27.672 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:53:27 np0005629333 nova_compute[244014]: 2026-02-25 13:53:27.673 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3533MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 08:53:27 np0005629333 nova_compute[244014]: 2026-02-25 13:53:27.674 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:53:27 np0005629333 nova_compute[244014]: 2026-02-25 13:53:27.674 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:53:27 np0005629333 nova_compute[244014]: 2026-02-25 13:53:27.736 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 08:53:27 np0005629333 nova_compute[244014]: 2026-02-25 13:53:27.736 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 08:53:27 np0005629333 nova_compute[244014]: 2026-02-25 13:53:27.753 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:53:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:53:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1840975524' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:53:28 np0005629333 nova_compute[244014]: 2026-02-25 13:53:28.302 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:53:28 np0005629333 nova_compute[244014]: 2026-02-25 13:53:28.310 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:53:28 np0005629333 nova_compute[244014]: 2026-02-25 13:53:28.332 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 08:53:28 np0005629333 nova_compute[244014]: 2026-02-25 13:53:28.335 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 08:53:28 np0005629333 nova_compute[244014]: 2026-02-25 13:53:28.335 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
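
The inventory reported at 13:53:28.332 fixes what the Placement service will actually offer from this node: for each resource class, schedulable capacity is (total - reserved) * allocation_ratio. A minimal sketch of that arithmetic, using the values copied from the report above (the formula is the standard Placement capacity rule, not something the log itself spells out):

    # Inventory as reported for provider cb4dae98-2ac3-4218-9445-2320139e12ad.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        # Placement capacity rule: (total - reserved) * allocation_ratio.
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2

So the 8 physical vCPUs are advertised as 32 schedulable ones, while disk is deliberately undercommitted at a 0.9 ratio.
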
Feb 25 08:53:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4007: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:28 np0005629333 nova_compute[244014]: 2026-02-25 13:53:28.723 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:53:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:53:30 np0005629333 nova_compute[244014]: 2026-02-25 13:53:30.337 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:53:30 np0005629333 nova_compute[244014]: 2026-02-25 13:53:30.338 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 08:53:30 np0005629333 nova_compute[244014]: 2026-02-25 13:53:30.338 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 08:53:30 np0005629333 nova_compute[244014]: 2026-02-25 13:53:30.395 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 08:53:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4008: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:30 np0005629333 nova_compute[244014]: 2026-02-25 13:53:30.610 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:53:30 np0005629333 nova_compute[244014]: 2026-02-25 13:53:30.930 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:53:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:53:31
Feb 25 08:53:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:53:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:53:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'images', '.mgr', '.rgw.root', 'backups', 'default.rgw.meta', 'vms', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control']
Feb 25 08:53:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:53:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:53:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:53:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:53:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:53:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:53:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:53:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4009: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:53:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:53:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:53:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:53:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:53:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:53:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:53:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:53:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:53:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:53:33 np0005629333 nova_compute[244014]: 2026-02-25 13:53:33.727 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:53:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4010: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:53:35 np0005629333 nova_compute[244014]: 2026-02-25 13:53:35.613 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:53:35 np0005629333 nova_compute[244014]: 2026-02-25 13:53:35.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:53:35 np0005629333 nova_compute[244014]: 2026-02-25 13:53:35.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
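
The skip above is a configuration gate, not an error: nova's reclaim_instance_interval option defaults to 0, which disables reclaiming soft-deleted instances entirely. A trivial sketch of the gate being logged here:

    # nova.conf [DEFAULT] reclaim_instance_interval; the default is 0.
    reclaim_instance_interval = 0
    if reclaim_instance_interval <= 0:
        print('CONF.reclaim_instance_interval <= 0, skipping...')
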
Feb 25 08:53:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4011: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4012: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:38 np0005629333 nova_compute[244014]: 2026-02-25 13:53:38.731 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:53:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:53:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4013: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:40 np0005629333 nova_compute[244014]: 2026-02-25 13:53:40.616 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:53:40 np0005629333 nova_compute[244014]: 2026-02-25 13:53:40.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:53:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4014: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:43 np0005629333 nova_compute[244014]: 2026-02-25 13:53:43.734 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:53:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:53:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:53:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:53:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:53:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:53:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:53:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:53:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:53:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:53:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:53:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:53:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:53:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:53:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:53:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:53:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:53:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:53:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:53:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:53:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:53:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:53:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:53:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
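
The autoscaler lines above all apply one formula: a pool's share of raw space, times its bias, times the cluster's PG budget, gives the pg target, which is then quantized to a power of two and left alone while it stays far below the current pg_num. A sketch that reproduces the logged targets, assuming the budget here is mon_target_pg_per_osd (default 100) times the 3 OSDs on this host:

    pg_budget = 100 * 3   # assumed: mon_target_pg_per_osd * OSD count

    pools = {                                    # (usage_ratio, bias) from the log
        '.mgr':               (7.185749983720779e-06,  1.0),
        'vms':                (1.73878357684759e-05,   1.0),
        'images':             (0.0006714637386478266,  1.0),
        'cephfs.cephfs.meta': (1.3916366864300228e-06, 4.0),
        'default.rgw.meta':   (1.2718141564107572e-07, 4.0),
    }
    for name, (usage_ratio, bias) in pools.items():
        print(name, usage_ratio * bias * pg_budget)
    # e.g. images -> 0.20143912159434796, matching the "pg target" logged above.
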
Feb 25 08:53:43 np0005629333 nova_compute[244014]: 2026-02-25 13:53:43.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:53:43 np0005629333 nova_compute[244014]: 2026-02-25 13:53:43.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:53:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4015: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:53:45 np0005629333 nova_compute[244014]: 2026-02-25 13:53:45.619 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:53:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4016: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:53:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2938783657' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:53:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:53:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2938783657' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:53:47 np0005629333 nova_compute[244014]: 2026-02-25 13:53:47.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:53:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4017: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:48 np0005629333 nova_compute[244014]: 2026-02-25 13:53:48.738 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:53:49 np0005629333 podman[421286]: 2026-02-25 13:53:49.772146955 +0000 UTC m=+0.104908346 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Feb 25 08:53:49 np0005629333 podman[421287]: 2026-02-25 13:53:49.814012299 +0000 UTC m=+0.142813278 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
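
The two health_status=healthy events above come from podman's built-in healthcheck timer running the configured /openstack/healthcheck test inside each container. The same check can be triggered by hand; a small sketch using the container names from the events:

    import subprocess

    # Exit code 0 from `podman healthcheck run` means the check passed.
    for name in ('ovn_metadata_agent', 'ovn_controller'):
        r = subprocess.run(['podman', 'healthcheck', 'run', name])
        print(name, 'healthy' if r.returncode == 0 else 'unhealthy')
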
Feb 25 08:53:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:53:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4018: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:50 np0005629333 nova_compute[244014]: 2026-02-25 13:53:50.622 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:53:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4019: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:53 np0005629333 nova_compute[244014]: 2026-02-25 13:53:53.742 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:53:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4020: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:53:55.098 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:53:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:53:55.099 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:53:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:53:55.099 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:53:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:53:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:53:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:53:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:53:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:53:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:53:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:53:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:53:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:53:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:53:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:53:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:53:55 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:53:55 np0005629333 nova_compute[244014]: 2026-02-25 13:53:55.623 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:53:55 np0005629333 podman[421472]: 2026-02-25 13:53:55.804580966 +0000 UTC m=+0.066593804 container create 138fd61d27243cf9b9c02e4795dddd3d40ee275d3c854183bfd6492e51df211e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 08:53:55 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:53:55 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:53:55 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:53:55 np0005629333 systemd[1]: Started libpod-conmon-138fd61d27243cf9b9c02e4795dddd3d40ee275d3c854183bfd6492e51df211e.scope.
Feb 25 08:53:55 np0005629333 podman[421472]: 2026-02-25 13:53:55.785457115 +0000 UTC m=+0.047469963 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:53:55 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:53:55 np0005629333 podman[421472]: 2026-02-25 13:53:55.902406851 +0000 UTC m=+0.164419719 container init 138fd61d27243cf9b9c02e4795dddd3d40ee275d3c854183bfd6492e51df211e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:53:55 np0005629333 podman[421472]: 2026-02-25 13:53:55.911318603 +0000 UTC m=+0.173331431 container start 138fd61d27243cf9b9c02e4795dddd3d40ee275d3c854183bfd6492e51df211e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:53:55 np0005629333 podman[421472]: 2026-02-25 13:53:55.914617826 +0000 UTC m=+0.176630664 container attach 138fd61d27243cf9b9c02e4795dddd3d40ee275d3c854183bfd6492e51df211e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:53:55 np0005629333 vigilant_cartwright[421488]: 167 167
Feb 25 08:53:55 np0005629333 systemd[1]: libpod-138fd61d27243cf9b9c02e4795dddd3d40ee275d3c854183bfd6492e51df211e.scope: Deactivated successfully.
Feb 25 08:53:55 np0005629333 conmon[421488]: conmon 138fd61d27243cf9b9c0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-138fd61d27243cf9b9c02e4795dddd3d40ee275d3c854183bfd6492e51df211e.scope/container/memory.events
Feb 25 08:53:55 np0005629333 podman[421472]: 2026-02-25 13:53:55.920950145 +0000 UTC m=+0.182963013 container died 138fd61d27243cf9b9c02e4795dddd3d40ee275d3c854183bfd6492e51df211e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 08:53:55 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e9fda24b8bd42424b7ef678007043bc43c6db97a07ca4c4421dcaddc27908184-merged.mount: Deactivated successfully.
Feb 25 08:53:55 np0005629333 podman[421472]: 2026-02-25 13:53:55.967212723 +0000 UTC m=+0.229225571 container remove 138fd61d27243cf9b9c02e4795dddd3d40ee275d3c854183bfd6492e51df211e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 08:53:55 np0005629333 systemd[1]: libpod-conmon-138fd61d27243cf9b9c02e4795dddd3d40ee275d3c854183bfd6492e51df211e.scope: Deactivated successfully.
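
The short-lived vigilant_cartwright container above (and modest_antonelli below) prints only "167 167", which matches cephadm probing the uid/gid that owns /var/lib/ceph inside the image; 167:167 is the ceph user and group on these images. A hypothetical reconstruction of that probe, assuming it is the stat-based ownership check used by upstream cephadm:

    import subprocess

    image = ('quay.io/ceph/ceph@sha256:'
             '1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86')
    # Assumed probe: print the numeric owner uid/gid of /var/lib/ceph
    # inside the image; on ceph images this yields "167 167".
    out = subprocess.check_output(
        ['podman', 'run', '--rm', image,
         'stat', '-c', '%u %g', '/var/lib/ceph'])
    print(out.decode().strip())
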
Feb 25 08:53:56 np0005629333 podman[421510]: 2026-02-25 13:53:56.099736939 +0000 UTC m=+0.045194708 container create df0d49ce0991dfd6aa1314b7674b68847209cae1f410a28d2c4ecfc1bfa131e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ptolemy, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:53:56 np0005629333 systemd[1]: Started libpod-conmon-df0d49ce0991dfd6aa1314b7674b68847209cae1f410a28d2c4ecfc1bfa131e6.scope.
Feb 25 08:53:56 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:53:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7059c52d21be8d1960ca0f3ad70836592a13fe7b7619925e70ff41810c51791b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:53:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7059c52d21be8d1960ca0f3ad70836592a13fe7b7619925e70ff41810c51791b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:53:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7059c52d21be8d1960ca0f3ad70836592a13fe7b7619925e70ff41810c51791b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:53:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7059c52d21be8d1960ca0f3ad70836592a13fe7b7619925e70ff41810c51791b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:53:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7059c52d21be8d1960ca0f3ad70836592a13fe7b7619925e70ff41810c51791b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:53:56 np0005629333 podman[421510]: 2026-02-25 13:53:56.074919998 +0000 UTC m=+0.020377747 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:53:56 np0005629333 podman[421510]: 2026-02-25 13:53:56.187200112 +0000 UTC m=+0.132657861 container init df0d49ce0991dfd6aa1314b7674b68847209cae1f410a28d2c4ecfc1bfa131e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ptolemy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:53:56 np0005629333 podman[421510]: 2026-02-25 13:53:56.196502665 +0000 UTC m=+0.141960434 container start df0d49ce0991dfd6aa1314b7674b68847209cae1f410a28d2c4ecfc1bfa131e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:53:56 np0005629333 podman[421510]: 2026-02-25 13:53:56.201462665 +0000 UTC m=+0.146920414 container attach df0d49ce0991dfd6aa1314b7674b68847209cae1f410a28d2c4ecfc1bfa131e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ptolemy, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 08:53:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4021: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:56 np0005629333 confident_ptolemy[421526]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:53:56 np0005629333 confident_ptolemy[421526]: --> All data devices are unavailable
Feb 25 08:53:56 np0005629333 systemd[1]: libpod-df0d49ce0991dfd6aa1314b7674b68847209cae1f410a28d2c4ecfc1bfa131e6.scope: Deactivated successfully.
Feb 25 08:53:56 np0005629333 podman[421510]: 2026-02-25 13:53:56.676083411 +0000 UTC m=+0.621541140 container died df0d49ce0991dfd6aa1314b7674b68847209cae1f410a28d2c4ecfc1bfa131e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ptolemy, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 08:53:56 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7059c52d21be8d1960ca0f3ad70836592a13fe7b7619925e70ff41810c51791b-merged.mount: Deactivated successfully.
Feb 25 08:53:56 np0005629333 podman[421510]: 2026-02-25 13:53:56.727893006 +0000 UTC m=+0.673350725 container remove df0d49ce0991dfd6aa1314b7674b68847209cae1f410a28d2c4ecfc1bfa131e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 08:53:56 np0005629333 systemd[1]: libpod-conmon-df0d49ce0991dfd6aa1314b7674b68847209cae1f410a28d2c4ecfc1bfa131e6.scope: Deactivated successfully.
Feb 25 08:53:57 np0005629333 podman[421620]: 2026-02-25 13:53:57.165440975 +0000 UTC m=+0.049311585 container create 7d5a51c0f911419b4f213adb8f57f330360dfd55ba4a7baa1cb3fe06dc5d81fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_antonelli, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Feb 25 08:53:57 np0005629333 systemd[1]: Started libpod-conmon-7d5a51c0f911419b4f213adb8f57f330360dfd55ba4a7baa1cb3fe06dc5d81fa.scope.
Feb 25 08:53:57 np0005629333 podman[421620]: 2026-02-25 13:53:57.142004892 +0000 UTC m=+0.025875442 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:53:57 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:53:57 np0005629333 podman[421620]: 2026-02-25 13:53:57.25370841 +0000 UTC m=+0.137578930 container init 7d5a51c0f911419b4f213adb8f57f330360dfd55ba4a7baa1cb3fe06dc5d81fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_antonelli, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 08:53:57 np0005629333 podman[421620]: 2026-02-25 13:53:57.263408764 +0000 UTC m=+0.147279224 container start 7d5a51c0f911419b4f213adb8f57f330360dfd55ba4a7baa1cb3fe06dc5d81fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:53:57 np0005629333 podman[421620]: 2026-02-25 13:53:57.267008786 +0000 UTC m=+0.150879306 container attach 7d5a51c0f911419b4f213adb8f57f330360dfd55ba4a7baa1cb3fe06dc5d81fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:53:57 np0005629333 modest_antonelli[421636]: 167 167
Feb 25 08:53:57 np0005629333 systemd[1]: libpod-7d5a51c0f911419b4f213adb8f57f330360dfd55ba4a7baa1cb3fe06dc5d81fa.scope: Deactivated successfully.
Feb 25 08:53:57 np0005629333 podman[421620]: 2026-02-25 13:53:57.269810295 +0000 UTC m=+0.153680805 container died 7d5a51c0f911419b4f213adb8f57f330360dfd55ba4a7baa1cb3fe06dc5d81fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_antonelli, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:53:57 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d4f254375e483663939aa0d744d7b470c9e62b660bb6fd24fa8f54ac9fb96d70-merged.mount: Deactivated successfully.
Feb 25 08:53:57 np0005629333 podman[421620]: 2026-02-25 13:53:57.310070273 +0000 UTC m=+0.193940743 container remove 7d5a51c0f911419b4f213adb8f57f330360dfd55ba4a7baa1cb3fe06dc5d81fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_antonelli, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 08:53:57 np0005629333 systemd[1]: libpod-conmon-7d5a51c0f911419b4f213adb8f57f330360dfd55ba4a7baa1cb3fe06dc5d81fa.scope: Deactivated successfully.
Feb 25 08:53:57 np0005629333 podman[421660]: 2026-02-25 13:53:57.499252151 +0000 UTC m=+0.060715787 container create 29baa5c76e4fbc3a1c3d8de527cd6f26b786f4da01d64af16460e3903cedb0c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_borg, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:53:57 np0005629333 systemd[1]: Started libpod-conmon-29baa5c76e4fbc3a1c3d8de527cd6f26b786f4da01d64af16460e3903cedb0c2.scope.
Feb 25 08:53:57 np0005629333 podman[421660]: 2026-02-25 13:53:57.471403314 +0000 UTC m=+0.032867010 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:53:57 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:53:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea2c70b0aa475e7943e9757f8e880ba338c973eb96604f62b584f3ff279bd5de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:53:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea2c70b0aa475e7943e9757f8e880ba338c973eb96604f62b584f3ff279bd5de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:53:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea2c70b0aa475e7943e9757f8e880ba338c973eb96604f62b584f3ff279bd5de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:53:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea2c70b0aa475e7943e9757f8e880ba338c973eb96604f62b584f3ff279bd5de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:53:57 np0005629333 podman[421660]: 2026-02-25 13:53:57.614961852 +0000 UTC m=+0.176425528 container init 29baa5c76e4fbc3a1c3d8de527cd6f26b786f4da01d64af16460e3903cedb0c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 08:53:57 np0005629333 podman[421660]: 2026-02-25 13:53:57.627187678 +0000 UTC m=+0.188651324 container start 29baa5c76e4fbc3a1c3d8de527cd6f26b786f4da01d64af16460e3903cedb0c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_borg, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 08:53:57 np0005629333 podman[421660]: 2026-02-25 13:53:57.631620813 +0000 UTC m=+0.193084519 container attach 29baa5c76e4fbc3a1c3d8de527cd6f26b786f4da01d64af16460e3903cedb0c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_borg, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
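
The JSON that sad_borg prints below has the shape of `ceph-volume lvm list --format json` output: a map from OSD id to the logical volumes backing it. A minimal sketch for reducing such a document to an osd_id -> device map, assuming that command produced it:

    def osd_devices(listing: dict) -> dict:
        # listing: parsed `ceph-volume lvm list --format json` output (assumed).
        return {osd_id: [lv['lv_path'] for lv in lvs]
                for osd_id, lvs in listing.items()}

    demo = {'0': [{'lv_path': '/dev/ceph_vg0/ceph_lv0'}],
            '1': [{'lv_path': '/dev/ceph_vg1/ceph_lv1'}]}
    print(osd_devices(demo))
    # {'0': ['/dev/ceph_vg0/ceph_lv0'], '1': ['/dev/ceph_vg1/ceph_lv1']}
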
Feb 25 08:53:57 np0005629333 sad_borg[421677]: {
Feb 25 08:53:57 np0005629333 sad_borg[421677]:    "0": [
Feb 25 08:53:57 np0005629333 sad_borg[421677]:        {
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "devices": [
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "/dev/loop3"
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            ],
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "lv_name": "ceph_lv0",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "lv_size": "21470642176",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "name": "ceph_lv0",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "tags": {
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.cluster_name": "ceph",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.crush_device_class": "",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.encrypted": "0",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.objectstore": "bluestore",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.osd_id": "0",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.type": "block",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.vdo": "0",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.with_tpm": "0"
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            },
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "type": "block",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "vg_name": "ceph_vg0"
Feb 25 08:53:57 np0005629333 sad_borg[421677]:        }
Feb 25 08:53:57 np0005629333 sad_borg[421677]:    ],
Feb 25 08:53:57 np0005629333 sad_borg[421677]:    "1": [
Feb 25 08:53:57 np0005629333 sad_borg[421677]:        {
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "devices": [
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "/dev/loop4"
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            ],
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "lv_name": "ceph_lv1",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "lv_size": "21470642176",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "name": "ceph_lv1",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "tags": {
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.cluster_name": "ceph",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.crush_device_class": "",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.encrypted": "0",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.objectstore": "bluestore",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.osd_id": "1",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.type": "block",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.vdo": "0",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.with_tpm": "0"
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            },
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "type": "block",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "vg_name": "ceph_vg1"
Feb 25 08:53:57 np0005629333 sad_borg[421677]:        }
Feb 25 08:53:57 np0005629333 sad_borg[421677]:    ],
Feb 25 08:53:57 np0005629333 sad_borg[421677]:    "2": [
Feb 25 08:53:57 np0005629333 sad_borg[421677]:        {
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "devices": [
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "/dev/loop5"
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            ],
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "lv_name": "ceph_lv2",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "lv_size": "21470642176",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "name": "ceph_lv2",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "tags": {
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.cluster_name": "ceph",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.crush_device_class": "",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.encrypted": "0",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.objectstore": "bluestore",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.osd_id": "2",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.type": "block",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.vdo": "0",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:                "ceph.with_tpm": "0"
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            },
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "type": "block",
Feb 25 08:53:57 np0005629333 sad_borg[421677]:            "vg_name": "ceph_vg2"
Feb 25 08:53:57 np0005629333 sad_borg[421677]:        }
Feb 25 08:53:57 np0005629333 sad_borg[421677]:    ]
Feb 25 08:53:57 np0005629333 sad_borg[421677]: }
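
The JSON block printed by the sad_borg container above is the OSD inventory cephadm collects from this host: a map of OSD id to the logical volumes backing it, in the shape of `ceph-volume lvm list --format json` output. As a minimal sketch (not part of the log), here is how such a report could be reduced to an osd-id/device summary in Python; the filename osd_report.json is a hypothetical capture of the container's stdout:

    import json

    # Hypothetical capture of the JSON the container printed above,
    # e.g. redirected to osd_report.json.
    with open("osd_report.json") as f:
        report = json.load(f)

    # The report maps OSD id -> list of LV records; each record carries
    # the backing devices and the ceph.* LVM tags seen in the log.
    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(
                f"osd.{osd_id}: {lv['lv_path']} "
                f"on {','.join(lv['devices'])} "
                f"(fsid={tags['ceph.osd_fsid']}, "
                f"objectstore={tags['ceph.objectstore']})"
            )

Run against the report above, this would list osd.0 on /dev/loop3, osd.1 on /dev/loop4, and osd.2 on /dev/loop5, matching the lv_tags strings in the log.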
Feb 25 08:53:57 np0005629333 systemd[1]: libpod-29baa5c76e4fbc3a1c3d8de527cd6f26b786f4da01d64af16460e3903cedb0c2.scope: Deactivated successfully.
Feb 25 08:53:57 np0005629333 podman[421660]: 2026-02-25 13:53:57.934742902 +0000 UTC m=+0.496206548 container died 29baa5c76e4fbc3a1c3d8de527cd6f26b786f4da01d64af16460e3903cedb0c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_borg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 08:53:57 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ea2c70b0aa475e7943e9757f8e880ba338c973eb96604f62b584f3ff279bd5de-merged.mount: Deactivated successfully.
Feb 25 08:53:57 np0005629333 podman[421660]: 2026-02-25 13:53:57.989095229 +0000 UTC m=+0.550558825 container remove 29baa5c76e4fbc3a1c3d8de527cd6f26b786f4da01d64af16460e3903cedb0c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_borg, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 08:53:58 np0005629333 systemd[1]: libpod-conmon-29baa5c76e4fbc3a1c3d8de527cd6f26b786f4da01d64af16460e3903cedb0c2.scope: Deactivated successfully.
Feb 25 08:53:58 np0005629333 podman[421761]: 2026-02-25 13:53:58.434980784 +0000 UTC m=+0.048508943 container create 7064628d3c24bf831cd27c893bd2d692a68f4ae2421745c8585daf0b091edcc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_brattain, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:53:58 np0005629333 systemd[1]: Started libpod-conmon-7064628d3c24bf831cd27c893bd2d692a68f4ae2421745c8585daf0b091edcc4.scope.
Feb 25 08:53:58 np0005629333 podman[421761]: 2026-02-25 13:53:58.407533948 +0000 UTC m=+0.021062177 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:53:58 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:53:58 np0005629333 podman[421761]: 2026-02-25 13:53:58.521725496 +0000 UTC m=+0.135253675 container init 7064628d3c24bf831cd27c893bd2d692a68f4ae2421745c8585daf0b091edcc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:53:58 np0005629333 podman[421761]: 2026-02-25 13:53:58.531288026 +0000 UTC m=+0.144816175 container start 7064628d3c24bf831cd27c893bd2d692a68f4ae2421745c8585daf0b091edcc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_brattain, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:53:58 np0005629333 inspiring_brattain[421777]: 167 167
Feb 25 08:53:58 np0005629333 systemd[1]: libpod-7064628d3c24bf831cd27c893bd2d692a68f4ae2421745c8585daf0b091edcc4.scope: Deactivated successfully.
Feb 25 08:53:58 np0005629333 podman[421761]: 2026-02-25 13:53:58.537171292 +0000 UTC m=+0.150699441 container attach 7064628d3c24bf831cd27c893bd2d692a68f4ae2421745c8585daf0b091edcc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_brattain, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Feb 25 08:53:58 np0005629333 podman[421761]: 2026-02-25 13:53:58.538422608 +0000 UTC m=+0.151950827 container died 7064628d3c24bf831cd27c893bd2d692a68f4ae2421745c8585daf0b091edcc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_brattain, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:53:58 np0005629333 systemd[1]: var-lib-containers-storage-overlay-9d65264b54993cb8cae763cba8784890ffc761df8b65d462f11fd8ad8b2f0c26-merged.mount: Deactivated successfully.
Feb 25 08:53:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4022: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:53:58 np0005629333 podman[421761]: 2026-02-25 13:53:58.586109356 +0000 UTC m=+0.199637545 container remove 7064628d3c24bf831cd27c893bd2d692a68f4ae2421745c8585daf0b091edcc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_brattain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 08:53:58 np0005629333 systemd[1]: libpod-conmon-7064628d3c24bf831cd27c893bd2d692a68f4ae2421745c8585daf0b091edcc4.scope: Deactivated successfully.
Feb 25 08:53:58 np0005629333 podman[421802]: 2026-02-25 13:53:58.726357141 +0000 UTC m=+0.042870043 container create 91f6e58978a085d86d1bbee74f565a781d0860bb137434ee33d358b7ca274775 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 08:53:58 np0005629333 nova_compute[244014]: 2026-02-25 13:53:58.745 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:53:58 np0005629333 systemd[1]: Started libpod-conmon-91f6e58978a085d86d1bbee74f565a781d0860bb137434ee33d358b7ca274775.scope.
Feb 25 08:53:58 np0005629333 podman[421802]: 2026-02-25 13:53:58.709500874 +0000 UTC m=+0.026013766 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:53:58 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:53:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2a07be40ce41d763ce65a98fb3c8e2b2dec9ed404083da8d3b7a937a86024e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:53:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2a07be40ce41d763ce65a98fb3c8e2b2dec9ed404083da8d3b7a937a86024e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:53:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2a07be40ce41d763ce65a98fb3c8e2b2dec9ed404083da8d3b7a937a86024e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:53:58 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2a07be40ce41d763ce65a98fb3c8e2b2dec9ed404083da8d3b7a937a86024e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:53:58 np0005629333 podman[421802]: 2026-02-25 13:53:58.845862099 +0000 UTC m=+0.162375001 container init 91f6e58978a085d86d1bbee74f565a781d0860bb137434ee33d358b7ca274775 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:53:58 np0005629333 podman[421802]: 2026-02-25 13:53:58.855167462 +0000 UTC m=+0.171680364 container start 91f6e58978a085d86d1bbee74f565a781d0860bb137434ee33d358b7ca274775 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 08:53:58 np0005629333 podman[421802]: 2026-02-25 13:53:58.859765442 +0000 UTC m=+0.176278354 container attach 91f6e58978a085d86d1bbee74f565a781d0860bb137434ee33d358b7ca274775 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:53:59 np0005629333 lvm[421896]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:53:59 np0005629333 lvm[421896]: VG ceph_vg0 finished
Feb 25 08:53:59 np0005629333 lvm[421899]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:53:59 np0005629333 lvm[421899]: VG ceph_vg2 finished
Feb 25 08:53:59 np0005629333 lvm[421898]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:53:59 np0005629333 lvm[421898]: VG ceph_vg1 finished
Feb 25 08:53:59 np0005629333 pensive_golick[421818]: {}
Feb 25 08:53:59 np0005629333 systemd[1]: libpod-91f6e58978a085d86d1bbee74f565a781d0860bb137434ee33d358b7ca274775.scope: Deactivated successfully.
Feb 25 08:53:59 np0005629333 podman[421802]: 2026-02-25 13:53:59.643493597 +0000 UTC m=+0.960006469 container died 91f6e58978a085d86d1bbee74f565a781d0860bb137434ee33d358b7ca274775 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:53:59 np0005629333 systemd[1]: libpod-91f6e58978a085d86d1bbee74f565a781d0860bb137434ee33d358b7ca274775.scope: Consumed 1.155s CPU time.
Feb 25 08:53:59 np0005629333 systemd[1]: var-lib-containers-storage-overlay-b2a07be40ce41d763ce65a98fb3c8e2b2dec9ed404083da8d3b7a937a86024e7-merged.mount: Deactivated successfully.
Feb 25 08:53:59 np0005629333 podman[421802]: 2026-02-25 13:53:59.694355295 +0000 UTC m=+1.010868167 container remove 91f6e58978a085d86d1bbee74f565a781d0860bb137434ee33d358b7ca274775 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_golick, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:53:59 np0005629333 systemd[1]: libpod-conmon-91f6e58978a085d86d1bbee74f565a781d0860bb137434ee33d358b7ca274775.scope: Deactivated successfully.
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #198. Immutable memtables: 0.
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.772331) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 198
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027639772380, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 2055, "num_deletes": 251, "total_data_size": 3595324, "memory_usage": 3643608, "flush_reason": "Manual Compaction"}
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #199: started
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027639785874, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 199, "file_size": 3506195, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 81733, "largest_seqno": 83787, "table_properties": {"data_size": 3496702, "index_size": 6050, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18674, "raw_average_key_size": 20, "raw_value_size": 3478028, "raw_average_value_size": 3739, "num_data_blocks": 269, "num_entries": 930, "num_filter_entries": 930, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772027410, "oldest_key_time": 1772027410, "file_creation_time": 1772027639, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 199, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 13608 microseconds, and 5792 cpu microseconds.
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.785938) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #199: 3506195 bytes OK
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.785963) [db/memtable_list.cc:519] [default] Level-0 commit table #199 started
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.787902) [db/memtable_list.cc:722] [default] Level-0 commit table #199: memtable #1 done
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.787933) EVENT_LOG_v1 {"time_micros": 1772027639787926, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.787963) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 3586712, prev total WAL file size 3586712, number of live WAL files 2.
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000195.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.788713) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [199(3424KB)], [197(10MB)]
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027639788755, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [199], "files_L6": [197], "score": -1, "input_data_size": 14076465, "oldest_snapshot_seqno": -1}
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #200: 9925 keys, 12318997 bytes, temperature: kUnknown
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027639857466, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 200, "file_size": 12318997, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12256357, "index_size": 36735, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24837, "raw_key_size": 259956, "raw_average_key_size": 26, "raw_value_size": 12082653, "raw_average_value_size": 1217, "num_data_blocks": 1420, "num_entries": 9925, "num_filter_entries": 9925, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772027639, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.858477) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 12318997 bytes
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.860757) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 202.4 rd, 177.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 10.1 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(7.5) write-amplify(3.5) OK, records in: 10439, records dropped: 514 output_compression: NoCompression
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.860776) EVENT_LOG_v1 {"time_micros": 1772027639860767, "job": 124, "event": "compaction_finished", "compaction_time_micros": 69537, "compaction_time_cpu_micros": 24720, "output_level": 6, "num_output_files": 1, "total_output_size": 12318997, "num_input_records": 10439, "num_output_records": 9925, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000199.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027639861568, "job": 124, "event": "table_file_deletion", "file_number": 199}
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027639862583, "job": 124, "event": "table_file_deletion", "file_number": 197}
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.788638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.862647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.862653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.862655) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.862657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:53:59 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:53:59.862658) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
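
The ceph-mon messages above are RocksDB's own log relayed line by line, and the EVENT_LOG_v1 entries embed a JSON payload after the marker, so flush and compaction activity can be mined mechanically. A sketch under the assumption that the journal text has been saved to a file named mon.log (filename and aggregation are illustrative, not anything the tooling provides):

    import json
    import re

    # EVENT_LOG_v1 lines carry a JSON object after the marker; pull it out.
    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})$")

    events = []
    with open("mon.log") as f:  # hypothetical capture of the journal
        for line in f:
            m = EVENT.search(line)
            if m:
                events.append(json.loads(m.group(1)))

    # e.g. total bytes written by the compactions seen in this excerpt
    written = sum(e.get("total_output_size", 0)
                  for e in events if e.get("event") == "compaction_finished")
    print(f"{len(events)} events, {written} bytes written by compactions")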
Feb 25 08:54:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:54:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4023: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:00 np0005629333 nova_compute[244014]: 2026-02-25 13:54:00.625 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:54:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:54:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:54:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:54:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:54:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:54:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:54:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4024: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:03 np0005629333 nova_compute[244014]: 2026-02-25 13:54:03.749 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:54:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4025: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:54:05 np0005629333 nova_compute[244014]: 2026-02-25 13:54:05.627 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:54:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4026: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4027: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:08 np0005629333 nova_compute[244014]: 2026-02-25 13:54:08.754 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:54:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:54:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4028: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:10 np0005629333 nova_compute[244014]: 2026-02-25 13:54:10.630 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:54:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4029: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:13 np0005629333 nova_compute[244014]: 2026-02-25 13:54:13.757 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:54:13 np0005629333 nova_compute[244014]: 2026-02-25 13:54:13.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:54:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4030: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:54:15 np0005629333 nova_compute[244014]: 2026-02-25 13:54:15.630 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:54:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4031: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4032: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:18 np0005629333 nova_compute[244014]: 2026-02-25 13:54:18.761 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #201. Immutable memtables: 0.
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.306244) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 201
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027660306331, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 410, "num_deletes": 256, "total_data_size": 296456, "memory_usage": 305392, "flush_reason": "Manual Compaction"}
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #202: started
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027660311765, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 202, "file_size": 294050, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 83788, "largest_seqno": 84197, "table_properties": {"data_size": 291614, "index_size": 535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5715, "raw_average_key_size": 17, "raw_value_size": 286806, "raw_average_value_size": 899, "num_data_blocks": 24, "num_entries": 319, "num_filter_entries": 319, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772027639, "oldest_key_time": 1772027639, "file_creation_time": 1772027660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 202, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 5580 microseconds, and 2711 cpu microseconds.
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.311829) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #202: 294050 bytes OK
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.311853) [db/memtable_list.cc:519] [default] Level-0 commit table #202 started
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.313860) [db/memtable_list.cc:722] [default] Level-0 commit table #202: memtable #1 done
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.313882) EVENT_LOG_v1 {"time_micros": 1772027660313875, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.313911) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 293858, prev total WAL file size 293858, number of live WAL files 2.
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000198.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.314473) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373637' seq:72057594037927935, type:22 .. '6C6F676D0034303139' seq:0, type:0; will stop at (end)
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [202(287KB)], [200(11MB)]
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027660314512, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [202], "files_L6": [200], "score": -1, "input_data_size": 12613047, "oldest_snapshot_seqno": -1}
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #203: 9725 keys, 12523444 bytes, temperature: kUnknown
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027660409572, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 203, "file_size": 12523444, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12461188, "index_size": 36863, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24325, "raw_key_size": 256735, "raw_average_key_size": 26, "raw_value_size": 12289997, "raw_average_value_size": 1263, "num_data_blocks": 1423, "num_entries": 9725, "num_filter_entries": 9725, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772027660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.410091) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 12523444 bytes
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.411633) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.4 rd, 131.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.7 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(85.5) write-amplify(42.6) OK, records in: 10244, records dropped: 519 output_compression: NoCompression
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.411665) EVENT_LOG_v1 {"time_micros": 1772027660411649, "job": 126, "event": "compaction_finished", "compaction_time_micros": 95255, "compaction_time_cpu_micros": 40751, "output_level": 6, "num_output_files": 1, "total_output_size": 12523444, "num_input_records": 10244, "num_output_records": 9725, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000202.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027660411902, "job": 126, "event": "table_file_deletion", "file_number": 202}
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027660414059, "job": 126, "event": "table_file_deletion", "file_number": 200}
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.314371) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.414179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.414187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.414190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.414193) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:54:20 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:54:20.414196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
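
The compaction summary at 13:54:20 reports write-amplify(42.6) and read-write-amplify(85.5); both follow directly from the byte counts in the surrounding JOB 125/126 events (a 294050-byte L0 flush, a 12318997-byte existing L6 table, a 12523444-byte output table). A quick check of that arithmetic:

    # Byte counts from the JOB 125/126 events above
    # (table #202 in, table #200 in, table #203 out).
    l0_in = 294_050       # new data flushed from the memtable
    l6_in = 12_318_997    # existing level-6 table rewritten
    out   = 12_523_444    # resulting level-6 table

    write_amp = out / l0_in                    # bytes written per new byte
    rw_amp = (l0_in + l6_in + out) / l0_in     # bytes moved per new byte
    print(f"write-amplify {write_amp:.1f}, read-write-amplify {rw_amp:.1f}")
    # -> write-amplify 42.6, read-write-amplify 85.5, matching the log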
Feb 25 08:54:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4033: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:20 np0005629333 nova_compute[244014]: 2026-02-25 13:54:20.634 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:54:20 np0005629333 podman[421938]: 2026-02-25 13:54:20.729445134 +0000 UTC m=+0.063237648 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 08:54:20 np0005629333 podman[421939]: 2026-02-25 13:54:20.771874614 +0000 UTC m=+0.102453298 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:54:21 np0005629333 nova_compute[244014]: 2026-02-25 13:54:21.893 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:54:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4034: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:23 np0005629333 nova_compute[244014]: 2026-02-25 13:54:23.764 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:54:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4035: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:54:25 np0005629333 nova_compute[244014]: 2026-02-25 13:54:25.635 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:54:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4036: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:26 np0005629333 nova_compute[244014]: 2026-02-25 13:54:26.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:54:26 np0005629333 nova_compute[244014]: 2026-02-25 13:54:26.901 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:54:26 np0005629333 nova_compute[244014]: 2026-02-25 13:54:26.902 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:54:26 np0005629333 nova_compute[244014]: 2026-02-25 13:54:26.902 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:54:26 np0005629333 nova_compute[244014]: 2026-02-25 13:54:26.902 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 08:54:26 np0005629333 nova_compute[244014]: 2026-02-25 13:54:26.903 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:54:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:54:27 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/432746270' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:54:27 np0005629333 nova_compute[244014]: 2026-02-25 13:54:27.441 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:54:27 np0005629333 nova_compute[244014]: 2026-02-25 13:54:27.593 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 08:54:27 np0005629333 nova_compute[244014]: 2026-02-25 13:54:27.594 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3520MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 08:54:27 np0005629333 nova_compute[244014]: 2026-02-25 13:54:27.595 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:54:27 np0005629333 nova_compute[244014]: 2026-02-25 13:54:27.595 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:54:27 np0005629333 nova_compute[244014]: 2026-02-25 13:54:27.662 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 08:54:27 np0005629333 nova_compute[244014]: 2026-02-25 13:54:27.663 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 08:54:27 np0005629333 nova_compute[244014]: 2026-02-25 13:54:27.682 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:54:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:54:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/282885588' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:54:28 np0005629333 nova_compute[244014]: 2026-02-25 13:54:28.239 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:54:28 np0005629333 nova_compute[244014]: 2026-02-25 13:54:28.244 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:54:28 np0005629333 nova_compute[244014]: 2026-02-25 13:54:28.261 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 08:54:28 np0005629333 nova_compute[244014]: 2026-02-25 13:54:28.263 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 08:54:28 np0005629333 nova_compute[244014]: 2026-02-25 13:54:28.263 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:54:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4037: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:28 np0005629333 nova_compute[244014]: 2026-02-25 13:54:28.768 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:54:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:54:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4038: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:30 np0005629333 nova_compute[244014]: 2026-02-25 13:54:30.638 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:54:31 np0005629333 nova_compute[244014]: 2026-02-25 13:54:31.264 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:54:31 np0005629333 nova_compute[244014]: 2026-02-25 13:54:31.265 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 08:54:31 np0005629333 nova_compute[244014]: 2026-02-25 13:54:31.265 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 08:54:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:54:31
Feb 25 08:54:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:54:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:54:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['.mgr', 'volumes', 'backups', 'default.rgw.control', 'vms', 'images', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta']
Feb 25 08:54:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:54:31 np0005629333 nova_compute[244014]: 2026-02-25 13:54:31.281 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 08:54:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:54:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:54:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:54:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:54:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:54:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:54:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4039: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:54:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:54:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:54:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:54:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:54:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:54:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:54:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:54:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:54:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:54:32 np0005629333 nova_compute[244014]: 2026-02-25 13:54:32.889 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:54:33 np0005629333 nova_compute[244014]: 2026-02-25 13:54:33.772 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:54:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4040: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:34 np0005629333 nova_compute[244014]: 2026-02-25 13:54:34.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:54:34 np0005629333 nova_compute[244014]: 2026-02-25 13:54:34.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 08:54:34 np0005629333 nova_compute[244014]: 2026-02-25 13:54:34.901 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 08:54:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:54:35 np0005629333 nova_compute[244014]: 2026-02-25 13:54:35.641 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:54:35 np0005629333 nova_compute[244014]: 2026-02-25 13:54:35.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:54:35 np0005629333 nova_compute[244014]: 2026-02-25 13:54:35.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 08:54:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4041: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:36 np0005629333 nova_compute[244014]: 2026-02-25 13:54:36.896 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:54:36 np0005629333 nova_compute[244014]: 2026-02-25 13:54:36.897 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 08:54:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4042: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:38 np0005629333 nova_compute[244014]: 2026-02-25 13:54:38.777 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:54:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:54:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4043: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:40 np0005629333 nova_compute[244014]: 2026-02-25 13:54:40.674 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:54:40 np0005629333 nova_compute[244014]: 2026-02-25 13:54:40.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:54:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4044: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:43 np0005629333 nova_compute[244014]: 2026-02-25 13:54:43.782 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:54:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:54:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:54:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:54:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:54:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:54:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:54:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:54:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:54:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:54:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:54:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:54:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:54:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:54:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:54:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:54:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:54:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:54:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:54:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:54:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:54:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:54:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:54:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 08:54:43 np0005629333 nova_compute[244014]: 2026-02-25 13:54:43.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:54:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4045: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:44 np0005629333 nova_compute[244014]: 2026-02-25 13:54:44.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:54:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:54:45 np0005629333 nova_compute[244014]: 2026-02-25 13:54:45.675 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:54:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4046: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:54:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2614287424' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:54:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:54:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2614287424' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:54:47 np0005629333 nova_compute[244014]: 2026-02-25 13:54:47.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:54:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4047: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:48 np0005629333 nova_compute[244014]: 2026-02-25 13:54:48.798 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:54:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:54:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4048: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:50 np0005629333 nova_compute[244014]: 2026-02-25 13:54:50.678 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:54:51 np0005629333 podman[422028]: 2026-02-25 13:54:51.724424933 +0000 UTC m=+0.065801789 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 08:54:51 np0005629333 podman[422029]: 2026-02-25 13:54:51.74332413 +0000 UTC m=+0.086752044 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Feb 25 08:54:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4049: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:53 np0005629333 nova_compute[244014]: 2026-02-25 13:54:53.844 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:54:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4050: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:54:55.099 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:54:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:54:55.099 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:54:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:54:55.099 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:54:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:54:55 np0005629333 nova_compute[244014]: 2026-02-25 13:54:55.679 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:54:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4051: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4052: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:54:58 np0005629333 nova_compute[244014]: 2026-02-25 13:54:58.848 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:55:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:55:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:55:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:55:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:55:00 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:55:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4053: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:00 np0005629333 nova_compute[244014]: 2026-02-25 13:55:00.680 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:55:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:55:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:55:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:55:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:55:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:55:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:55:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:55:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:55:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:55:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:55:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:55:01 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:55:01 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:55:01 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:55:01 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:55:01 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:55:01 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:55:01 np0005629333 podman[422289]: 2026-02-25 13:55:01.570312442 +0000 UTC m=+0.047185060 container create 387815f3b1e17deb5fdd2d0dd2df65db8cc19437a58ae77df1f19b5089454006 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_booth, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:55:01 np0005629333 systemd[1]: Started libpod-conmon-387815f3b1e17deb5fdd2d0dd2df65db8cc19437a58ae77df1f19b5089454006.scope.
Feb 25 08:55:01 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:55:01 np0005629333 podman[422289]: 2026-02-25 13:55:01.546366212 +0000 UTC m=+0.023238840 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:55:01 np0005629333 podman[422289]: 2026-02-25 13:55:01.651134497 +0000 UTC m=+0.128007125 container init 387815f3b1e17deb5fdd2d0dd2df65db8cc19437a58ae77df1f19b5089454006 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_booth, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 08:55:01 np0005629333 podman[422289]: 2026-02-25 13:55:01.660391549 +0000 UTC m=+0.137264157 container start 387815f3b1e17deb5fdd2d0dd2df65db8cc19437a58ae77df1f19b5089454006 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_booth, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 08:55:01 np0005629333 podman[422289]: 2026-02-25 13:55:01.664584759 +0000 UTC m=+0.141457337 container attach 387815f3b1e17deb5fdd2d0dd2df65db8cc19437a58ae77df1f19b5089454006 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Feb 25 08:55:01 np0005629333 practical_booth[422305]: 167 167
Feb 25 08:55:01 np0005629333 systemd[1]: libpod-387815f3b1e17deb5fdd2d0dd2df65db8cc19437a58ae77df1f19b5089454006.scope: Deactivated successfully.
Feb 25 08:55:01 np0005629333 podman[422289]: 2026-02-25 13:55:01.668122329 +0000 UTC m=+0.144994907 container died 387815f3b1e17deb5fdd2d0dd2df65db8cc19437a58ae77df1f19b5089454006 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_booth, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:55:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:55:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:55:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:55:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:55:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:55:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:55:01 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d2a857dca0d19a3edb9b4b3fb10fb10fe511462b9dfd29d1593c42f0e59b776a-merged.mount: Deactivated successfully.
Feb 25 08:55:01 np0005629333 podman[422289]: 2026-02-25 13:55:01.710805131 +0000 UTC m=+0.187677739 container remove 387815f3b1e17deb5fdd2d0dd2df65db8cc19437a58ae77df1f19b5089454006 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 08:55:01 np0005629333 systemd[1]: libpod-conmon-387815f3b1e17deb5fdd2d0dd2df65db8cc19437a58ae77df1f19b5089454006.scope: Deactivated successfully.
Feb 25 08:55:01 np0005629333 podman[422327]: 2026-02-25 13:55:01.862323942 +0000 UTC m=+0.056677700 container create b789a3644e64f0595affcfd9b43dd6eea73ffd23f8e52f721fa54ddcd6668b02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lichterman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:55:01 np0005629333 systemd[1]: Started libpod-conmon-b789a3644e64f0595affcfd9b43dd6eea73ffd23f8e52f721fa54ddcd6668b02.scope.
Feb 25 08:55:01 np0005629333 podman[422327]: 2026-02-25 13:55:01.836652383 +0000 UTC m=+0.031006211 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:55:01 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:55:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917c9208eba59bcf47d423ba71793559a14c42cbb4cc9b6df2ebbc6845dd158b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:55:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917c9208eba59bcf47d423ba71793559a14c42cbb4cc9b6df2ebbc6845dd158b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:55:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917c9208eba59bcf47d423ba71793559a14c42cbb4cc9b6df2ebbc6845dd158b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:55:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917c9208eba59bcf47d423ba71793559a14c42cbb4cc9b6df2ebbc6845dd158b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:55:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917c9208eba59bcf47d423ba71793559a14c42cbb4cc9b6df2ebbc6845dd158b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:55:01 np0005629333 podman[422327]: 2026-02-25 13:55:01.974390534 +0000 UTC m=+0.168744352 container init b789a3644e64f0595affcfd9b43dd6eea73ffd23f8e52f721fa54ddcd6668b02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lichterman, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 08:55:01 np0005629333 podman[422327]: 2026-02-25 13:55:01.988145174 +0000 UTC m=+0.182498942 container start b789a3644e64f0595affcfd9b43dd6eea73ffd23f8e52f721fa54ddcd6668b02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lichterman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 08:55:01 np0005629333 podman[422327]: 2026-02-25 13:55:01.992485647 +0000 UTC m=+0.186839415 container attach b789a3644e64f0595affcfd9b43dd6eea73ffd23f8e52f721fa54ddcd6668b02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lichterman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 08:55:02 np0005629333 affectionate_lichterman[422344]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:55:02 np0005629333 affectionate_lichterman[422344]: --> All data devices are unavailable
Feb 25 08:55:02 np0005629333 systemd[1]: libpod-b789a3644e64f0595affcfd9b43dd6eea73ffd23f8e52f721fa54ddcd6668b02.scope: Deactivated successfully.
Feb 25 08:55:02 np0005629333 podman[422327]: 2026-02-25 13:55:02.437651906 +0000 UTC m=+0.632005644 container died b789a3644e64f0595affcfd9b43dd6eea73ffd23f8e52f721fa54ddcd6668b02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lichterman, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:55:02 np0005629333 systemd[1]: var-lib-containers-storage-overlay-917c9208eba59bcf47d423ba71793559a14c42cbb4cc9b6df2ebbc6845dd158b-merged.mount: Deactivated successfully.
Feb 25 08:55:02 np0005629333 podman[422327]: 2026-02-25 13:55:02.487073789 +0000 UTC m=+0.681427527 container remove b789a3644e64f0595affcfd9b43dd6eea73ffd23f8e52f721fa54ddcd6668b02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lichterman, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 08:55:02 np0005629333 systemd[1]: libpod-conmon-b789a3644e64f0595affcfd9b43dd6eea73ffd23f8e52f721fa54ddcd6668b02.scope: Deactivated successfully.
Feb 25 08:55:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4054: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:03 np0005629333 podman[422438]: 2026-02-25 13:55:03.028395367 +0000 UTC m=+0.044735171 container create bbf284e034239735981ab3b9ff3b3f3a1bea08d7c7ae9a8ff1ef55f51fa5acd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 08:55:03 np0005629333 systemd[1]: Started libpod-conmon-bbf284e034239735981ab3b9ff3b3f3a1bea08d7c7ae9a8ff1ef55f51fa5acd5.scope.
Feb 25 08:55:03 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:55:03 np0005629333 podman[422438]: 2026-02-25 13:55:03.013651358 +0000 UTC m=+0.029991132 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:55:03 np0005629333 podman[422438]: 2026-02-25 13:55:03.118162885 +0000 UTC m=+0.134502679 container init bbf284e034239735981ab3b9ff3b3f3a1bea08d7c7ae9a8ff1ef55f51fa5acd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 08:55:03 np0005629333 podman[422438]: 2026-02-25 13:55:03.127594123 +0000 UTC m=+0.143933927 container start bbf284e034239735981ab3b9ff3b3f3a1bea08d7c7ae9a8ff1ef55f51fa5acd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 08:55:03 np0005629333 podman[422438]: 2026-02-25 13:55:03.131594756 +0000 UTC m=+0.147934550 container attach bbf284e034239735981ab3b9ff3b3f3a1bea08d7c7ae9a8ff1ef55f51fa5acd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Feb 25 08:55:03 np0005629333 cranky_albattani[422454]: 167 167
Feb 25 08:55:03 np0005629333 systemd[1]: libpod-bbf284e034239735981ab3b9ff3b3f3a1bea08d7c7ae9a8ff1ef55f51fa5acd5.scope: Deactivated successfully.
Feb 25 08:55:03 np0005629333 podman[422438]: 2026-02-25 13:55:03.133213032 +0000 UTC m=+0.149552836 container died bbf284e034239735981ab3b9ff3b3f3a1bea08d7c7ae9a8ff1ef55f51fa5acd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 08:55:03 np0005629333 systemd[1]: var-lib-containers-storage-overlay-143073bbc8fcef45fabea6976b3a57f91af66fe5d92b57c91d68cf9883264051-merged.mount: Deactivated successfully.
Feb 25 08:55:03 np0005629333 podman[422438]: 2026-02-25 13:55:03.182375788 +0000 UTC m=+0.198715552 container remove bbf284e034239735981ab3b9ff3b3f3a1bea08d7c7ae9a8ff1ef55f51fa5acd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:55:03 np0005629333 systemd[1]: libpod-conmon-bbf284e034239735981ab3b9ff3b3f3a1bea08d7c7ae9a8ff1ef55f51fa5acd5.scope: Deactivated successfully.
Feb 25 08:55:03 np0005629333 podman[422477]: 2026-02-25 13:55:03.377670942 +0000 UTC m=+0.057682778 container create 2bcbab37b8641430c17633fccc3f1942bd7199d8eacf611e1b5e9cb5bf3dbf6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_tesla, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 08:55:03 np0005629333 systemd[1]: Started libpod-conmon-2bcbab37b8641430c17633fccc3f1942bd7199d8eacf611e1b5e9cb5bf3dbf6c.scope.
Feb 25 08:55:03 np0005629333 podman[422477]: 2026-02-25 13:55:03.351024996 +0000 UTC m=+0.031036892 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:55:03 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:55:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80236c88c7be38066b96c2cb10a39ad4b020e4d0602a26f0710b7661bf0ce5ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:55:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80236c88c7be38066b96c2cb10a39ad4b020e4d0602a26f0710b7661bf0ce5ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:55:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80236c88c7be38066b96c2cb10a39ad4b020e4d0602a26f0710b7661bf0ce5ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:55:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80236c88c7be38066b96c2cb10a39ad4b020e4d0602a26f0710b7661bf0ce5ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:55:03 np0005629333 podman[422477]: 2026-02-25 13:55:03.475598263 +0000 UTC m=+0.155610149 container init 2bcbab37b8641430c17633fccc3f1942bd7199d8eacf611e1b5e9cb5bf3dbf6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_tesla, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:55:03 np0005629333 podman[422477]: 2026-02-25 13:55:03.491222616 +0000 UTC m=+0.171234482 container start 2bcbab37b8641430c17633fccc3f1942bd7199d8eacf611e1b5e9cb5bf3dbf6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:55:03 np0005629333 podman[422477]: 2026-02-25 13:55:03.497417022 +0000 UTC m=+0.177428848 container attach 2bcbab37b8641430c17633fccc3f1942bd7199d8eacf611e1b5e9cb5bf3dbf6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_tesla, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]: {
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:    "0": [
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:        {
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "devices": [
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "/dev/loop3"
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            ],
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "lv_name": "ceph_lv0",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "lv_size": "21470642176",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "name": "ceph_lv0",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "tags": {
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.cluster_name": "ceph",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.crush_device_class": "",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.encrypted": "0",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.objectstore": "bluestore",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.osd_id": "0",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.type": "block",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.vdo": "0",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.with_tpm": "0"
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            },
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "type": "block",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "vg_name": "ceph_vg0"
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:        }
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:    ],
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:    "1": [
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:        {
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "devices": [
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "/dev/loop4"
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            ],
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "lv_name": "ceph_lv1",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "lv_size": "21470642176",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "name": "ceph_lv1",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "tags": {
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.cluster_name": "ceph",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.crush_device_class": "",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.encrypted": "0",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.objectstore": "bluestore",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.osd_id": "1",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.type": "block",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.vdo": "0",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.with_tpm": "0"
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            },
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "type": "block",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "vg_name": "ceph_vg1"
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:        }
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:    ],
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:    "2": [
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:        {
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "devices": [
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "/dev/loop5"
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            ],
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "lv_name": "ceph_lv2",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "lv_size": "21470642176",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "name": "ceph_lv2",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "tags": {
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.cluster_name": "ceph",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.crush_device_class": "",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.encrypted": "0",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.objectstore": "bluestore",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.osd_id": "2",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.type": "block",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.vdo": "0",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:                "ceph.with_tpm": "0"
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            },
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "type": "block",
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:            "vg_name": "ceph_vg2"
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:        }
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]:    ]
Feb 25 08:55:03 np0005629333 crazy_tesla[422493]: }
Feb 25 08:55:03 np0005629333 systemd[1]: libpod-2bcbab37b8641430c17633fccc3f1942bd7199d8eacf611e1b5e9cb5bf3dbf6c.scope: Deactivated successfully.
Feb 25 08:55:03 np0005629333 podman[422477]: 2026-02-25 13:55:03.787549309 +0000 UTC m=+0.467561095 container died 2bcbab37b8641430c17633fccc3f1942bd7199d8eacf611e1b5e9cb5bf3dbf6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_tesla, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 08:55:03 np0005629333 systemd[1]: var-lib-containers-storage-overlay-80236c88c7be38066b96c2cb10a39ad4b020e4d0602a26f0710b7661bf0ce5ad-merged.mount: Deactivated successfully.
Feb 25 08:55:03 np0005629333 podman[422477]: 2026-02-25 13:55:03.833258357 +0000 UTC m=+0.513270153 container remove 2bcbab37b8641430c17633fccc3f1942bd7199d8eacf611e1b5e9cb5bf3dbf6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_tesla, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:55:03 np0005629333 systemd[1]: libpod-conmon-2bcbab37b8641430c17633fccc3f1942bd7199d8eacf611e1b5e9cb5bf3dbf6c.scope: Deactivated successfully.
Feb 25 08:55:03 np0005629333 nova_compute[244014]: 2026-02-25 13:55:03.852 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:55:04 np0005629333 podman[422576]: 2026-02-25 13:55:04.35649437 +0000 UTC m=+0.056994459 container create 6fc349ab7533294576685bc74f62ebdf6bdc1c3d8e061e5ec8a6533684f45622 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_satoshi, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 08:55:04 np0005629333 systemd[1]: Started libpod-conmon-6fc349ab7533294576685bc74f62ebdf6bdc1c3d8e061e5ec8a6533684f45622.scope.
Feb 25 08:55:04 np0005629333 podman[422576]: 2026-02-25 13:55:04.330680017 +0000 UTC m=+0.031179946 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:55:04 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:55:04 np0005629333 podman[422576]: 2026-02-25 13:55:04.450569901 +0000 UTC m=+0.151069790 container init 6fc349ab7533294576685bc74f62ebdf6bdc1c3d8e061e5ec8a6533684f45622 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 08:55:04 np0005629333 podman[422576]: 2026-02-25 13:55:04.458885097 +0000 UTC m=+0.159384986 container start 6fc349ab7533294576685bc74f62ebdf6bdc1c3d8e061e5ec8a6533684f45622 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_satoshi, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:55:04 np0005629333 podman[422576]: 2026-02-25 13:55:04.462989543 +0000 UTC m=+0.163489432 container attach 6fc349ab7533294576685bc74f62ebdf6bdc1c3d8e061e5ec8a6533684f45622 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Feb 25 08:55:04 np0005629333 competent_satoshi[422592]: 167 167
Feb 25 08:55:04 np0005629333 systemd[1]: libpod-6fc349ab7533294576685bc74f62ebdf6bdc1c3d8e061e5ec8a6533684f45622.scope: Deactivated successfully.
Feb 25 08:55:04 np0005629333 podman[422576]: 2026-02-25 13:55:04.464323431 +0000 UTC m=+0.164823310 container died 6fc349ab7533294576685bc74f62ebdf6bdc1c3d8e061e5ec8a6533684f45622 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_satoshi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 08:55:04 np0005629333 systemd[1]: var-lib-containers-storage-overlay-312c4613423a61d653fab0fe4575e7a87a8e14167afe487c62af9468f5192378-merged.mount: Deactivated successfully.
Feb 25 08:55:04 np0005629333 podman[422576]: 2026-02-25 13:55:04.50794293 +0000 UTC m=+0.208442789 container remove 6fc349ab7533294576685bc74f62ebdf6bdc1c3d8e061e5ec8a6533684f45622 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_satoshi, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 08:55:04 np0005629333 systemd[1]: libpod-conmon-6fc349ab7533294576685bc74f62ebdf6bdc1c3d8e061e5ec8a6533684f45622.scope: Deactivated successfully.
Feb 25 08:55:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4055: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:04 np0005629333 podman[422618]: 2026-02-25 13:55:04.698358165 +0000 UTC m=+0.064401529 container create 2b1cb3cf46dcce4589da12fb887d99987d281b37d94e781dd68f58866bb494cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 08:55:04 np0005629333 systemd[1]: Started libpod-conmon-2b1cb3cf46dcce4589da12fb887d99987d281b37d94e781dd68f58866bb494cb.scope.
Feb 25 08:55:04 np0005629333 podman[422618]: 2026-02-25 13:55:04.671072911 +0000 UTC m=+0.037116345 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:55:04 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:55:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f258cb20ca0c661223764c1a1bdf31efe8c80e34026d98ad0fa2dbaac6dfaec4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:55:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f258cb20ca0c661223764c1a1bdf31efe8c80e34026d98ad0fa2dbaac6dfaec4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:55:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f258cb20ca0c661223764c1a1bdf31efe8c80e34026d98ad0fa2dbaac6dfaec4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:55:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f258cb20ca0c661223764c1a1bdf31efe8c80e34026d98ad0fa2dbaac6dfaec4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:55:04 np0005629333 podman[422618]: 2026-02-25 13:55:04.798948021 +0000 UTC m=+0.164991495 container init 2b1cb3cf46dcce4589da12fb887d99987d281b37d94e781dd68f58866bb494cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_brown, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:55:04 np0005629333 podman[422618]: 2026-02-25 13:55:04.812560818 +0000 UTC m=+0.178604192 container start 2b1cb3cf46dcce4589da12fb887d99987d281b37d94e781dd68f58866bb494cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_brown, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 08:55:04 np0005629333 podman[422618]: 2026-02-25 13:55:04.816763557 +0000 UTC m=+0.182807041 container attach 2b1cb3cf46dcce4589da12fb887d99987d281b37d94e781dd68f58866bb494cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_brown, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 08:55:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:55:05 np0005629333 lvm[422714]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:55:05 np0005629333 lvm[422714]: VG ceph_vg1 finished
Feb 25 08:55:05 np0005629333 lvm[422713]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:55:05 np0005629333 lvm[422713]: VG ceph_vg0 finished
Feb 25 08:55:05 np0005629333 lvm[422716]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:55:05 np0005629333 lvm[422716]: VG ceph_vg2 finished
Feb 25 08:55:05 np0005629333 nova_compute[244014]: 2026-02-25 13:55:05.683 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:55:05 np0005629333 focused_brown[422634]: {}
Feb 25 08:55:05 np0005629333 systemd[1]: libpod-2b1cb3cf46dcce4589da12fb887d99987d281b37d94e781dd68f58866bb494cb.scope: Deactivated successfully.
Feb 25 08:55:05 np0005629333 systemd[1]: libpod-2b1cb3cf46dcce4589da12fb887d99987d281b37d94e781dd68f58866bb494cb.scope: Consumed 1.364s CPU time.
Feb 25 08:55:05 np0005629333 podman[422618]: 2026-02-25 13:55:05.743376603 +0000 UTC m=+1.109420007 container died 2b1cb3cf46dcce4589da12fb887d99987d281b37d94e781dd68f58866bb494cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_brown, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 08:55:05 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f258cb20ca0c661223764c1a1bdf31efe8c80e34026d98ad0fa2dbaac6dfaec4-merged.mount: Deactivated successfully.
Feb 25 08:55:05 np0005629333 podman[422618]: 2026-02-25 13:55:05.803424488 +0000 UTC m=+1.169467902 container remove 2b1cb3cf46dcce4589da12fb887d99987d281b37d94e781dd68f58866bb494cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:55:05 np0005629333 systemd[1]: libpod-conmon-2b1cb3cf46dcce4589da12fb887d99987d281b37d94e781dd68f58866bb494cb.scope: Deactivated successfully.
Feb 25 08:55:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:55:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:55:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:55:05 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:55:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4056: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:06 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:55:06 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:55:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4057: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:08 np0005629333 nova_compute[244014]: 2026-02-25 13:55:08.898 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #204. Immutable memtables: 0.
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.319530) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 127] Flushing memtable with next log file: 204
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027710319617, "job": 127, "event": "flush_started", "num_memtables": 1, "num_entries": 668, "num_deletes": 250, "total_data_size": 825200, "memory_usage": 837408, "flush_reason": "Manual Compaction"}
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 127] Level-0 flush table #205: started
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027710325017, "cf_name": "default", "job": 127, "event": "table_file_creation", "file_number": 205, "file_size": 553653, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84198, "largest_seqno": 84865, "table_properties": {"data_size": 550557, "index_size": 1003, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8211, "raw_average_key_size": 20, "raw_value_size": 544102, "raw_average_value_size": 1367, "num_data_blocks": 45, "num_entries": 398, "num_filter_entries": 398, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772027661, "oldest_key_time": 1772027661, "file_creation_time": 1772027710, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 205, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 127] Flush lasted 5504 microseconds, and 2164 cpu microseconds.
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.325050) [db/flush_job.cc:967] [default] [JOB 127] Level-0 flush table #205: 553653 bytes OK
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.325067) [db/memtable_list.cc:519] [default] Level-0 commit table #205 started
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.326840) [db/memtable_list.cc:722] [default] Level-0 commit table #205: memtable #1 done
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.326863) EVENT_LOG_v1 {"time_micros": 1772027710326856, "job": 127, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.326893) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 127] Try to delete WAL files size 821666, prev total WAL file size 821666, number of live WAL files 2.
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000201.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.327602) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033353033' seq:72057594037927935, type:22 .. '6D6772737461740033373534' seq:0, type:0; will stop at (end)
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 128] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 127 Base level 0, inputs: [205(540KB)], [203(11MB)]
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027710327745, "job": 128, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [205], "files_L6": [203], "score": -1, "input_data_size": 13077097, "oldest_snapshot_seqno": -1}
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 128] Generated table #206: 9630 keys, 9978578 bytes, temperature: kUnknown
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027710399383, "cf_name": "default", "job": 128, "event": "table_file_creation", "file_number": 206, "file_size": 9978578, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9921080, "index_size": 32297, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24133, "raw_key_size": 254933, "raw_average_key_size": 26, "raw_value_size": 9755735, "raw_average_value_size": 1013, "num_data_blocks": 1235, "num_entries": 9630, "num_filter_entries": 9630, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772027710, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 206, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.399935) [db/compaction/compaction_job.cc:1663] [default] [JOB 128] Compacted 1@0 + 1@6 files to L6 => 9978578 bytes
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.401775) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.1 rd, 138.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 11.9 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(41.6) write-amplify(18.0) OK, records in: 10123, records dropped: 493 output_compression: NoCompression
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.401810) EVENT_LOG_v1 {"time_micros": 1772027710401794, "job": 128, "event": "compaction_finished", "compaction_time_micros": 71822, "compaction_time_cpu_micros": 23974, "output_level": 6, "num_output_files": 1, "total_output_size": 9978578, "num_input_records": 10123, "num_output_records": 9630, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000205.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027710402212, "job": 128, "event": "table_file_deletion", "file_number": 205}
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000203.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027710405253, "job": 128, "event": "table_file_deletion", "file_number": 203}
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.327435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.405405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.405415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.405420) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.405424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:55:10 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:55:10.405429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:55:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4058: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:10 np0005629333 nova_compute[244014]: 2026-02-25 13:55:10.687 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:55:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4059: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:12 np0005629333 nova_compute[244014]: 2026-02-25 13:55:12.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:55:13 np0005629333 nova_compute[244014]: 2026-02-25 13:55:13.903 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:55:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4060: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:55:15 np0005629333 nova_compute[244014]: 2026-02-25 13:55:15.688 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:55:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4061: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4062: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:18 np0005629333 nova_compute[244014]: 2026-02-25 13:55:18.940 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:55:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:55:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4063: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:20 np0005629333 nova_compute[244014]: 2026-02-25 13:55:20.691 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:55:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4064: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:22 np0005629333 podman[422754]: 2026-02-25 13:55:22.767645862 +0000 UTC m=+0.102107810 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 08:55:22 np0005629333 podman[422755]: 2026-02-25 13:55:22.820677787 +0000 UTC m=+0.155555578 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 25 08:55:23 np0005629333 nova_compute[244014]: 2026-02-25 13:55:23.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:55:23 np0005629333 nova_compute[244014]: 2026-02-25 13:55:23.975 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:55:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4065: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:55:25 np0005629333 nova_compute[244014]: 2026-02-25 13:55:25.693 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:55:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4066: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:26 np0005629333 nova_compute[244014]: 2026-02-25 13:55:26.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:55:26 np0005629333 nova_compute[244014]: 2026-02-25 13:55:26.915 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:55:26 np0005629333 nova_compute[244014]: 2026-02-25 13:55:26.916 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:55:26 np0005629333 nova_compute[244014]: 2026-02-25 13:55:26.916 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:55:26 np0005629333 nova_compute[244014]: 2026-02-25 13:55:26.916 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 08:55:26 np0005629333 nova_compute[244014]: 2026-02-25 13:55:26.917 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:55:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:55:27 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2806105915' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:55:27 np0005629333 nova_compute[244014]: 2026-02-25 13:55:27.496 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:55:27 np0005629333 nova_compute[244014]: 2026-02-25 13:55:27.695 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 08:55:27 np0005629333 nova_compute[244014]: 2026-02-25 13:55:27.697 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3498MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 08:55:27 np0005629333 nova_compute[244014]: 2026-02-25 13:55:27.698 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:55:27 np0005629333 nova_compute[244014]: 2026-02-25 13:55:27.699 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:55:27 np0005629333 nova_compute[244014]: 2026-02-25 13:55:27.768 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 08:55:27 np0005629333 nova_compute[244014]: 2026-02-25 13:55:27.768 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 08:55:27 np0005629333 nova_compute[244014]: 2026-02-25 13:55:27.785 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:55:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:55:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3762191624' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:55:28 np0005629333 nova_compute[244014]: 2026-02-25 13:55:28.325 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:55:28 np0005629333 nova_compute[244014]: 2026-02-25 13:55:28.331 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:55:28 np0005629333 nova_compute[244014]: 2026-02-25 13:55:28.351 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 08:55:28 np0005629333 nova_compute[244014]: 2026-02-25 13:55:28.354 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 08:55:28 np0005629333 nova_compute[244014]: 2026-02-25 13:55:28.354 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:55:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4067: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:28 np0005629333 nova_compute[244014]: 2026-02-25 13:55:28.979 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:55:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:55:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4068: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:30 np0005629333 nova_compute[244014]: 2026-02-25 13:55:30.696 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:55:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:55:31
Feb 25 08:55:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:55:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:55:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['.mgr', 'vms', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'images', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log']
Feb 25 08:55:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:55:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:55:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:55:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:55:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:55:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:55:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:55:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:55:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:55:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:55:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:55:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:55:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:55:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:55:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:55:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:55:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:55:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4069: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:33 np0005629333 nova_compute[244014]: 2026-02-25 13:55:33.356 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:55:33 np0005629333 nova_compute[244014]: 2026-02-25 13:55:33.356 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 08:55:33 np0005629333 nova_compute[244014]: 2026-02-25 13:55:33.357 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 08:55:33 np0005629333 nova_compute[244014]: 2026-02-25 13:55:33.501 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 08:55:33 np0005629333 nova_compute[244014]: 2026-02-25 13:55:33.983 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:55:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4070: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:35 np0005629333 nova_compute[244014]: 2026-02-25 13:55:35.016 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:55:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:55:35 np0005629333 nova_compute[244014]: 2026-02-25 13:55:35.699 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:55:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4071: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4072: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:38 np0005629333 nova_compute[244014]: 2026-02-25 13:55:38.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:55:38 np0005629333 nova_compute[244014]: 2026-02-25 13:55:38.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 08:55:39 np0005629333 nova_compute[244014]: 2026-02-25 13:55:39.027 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:55:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:55:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4073: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:40 np0005629333 nova_compute[244014]: 2026-02-25 13:55:40.704 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:55:40 np0005629333 nova_compute[244014]: 2026-02-25 13:55:40.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:55:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4074: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:55:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:55:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:55:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:55:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:55:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:55:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:55:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:55:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:55:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:55:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:55:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:55:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:55:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:55:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:55:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:55:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:55:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:55:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:55:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:55:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:55:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:55:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 08:55:44 np0005629333 nova_compute[244014]: 2026-02-25 13:55:44.031 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:55:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4075: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:55:45 np0005629333 nova_compute[244014]: 2026-02-25 13:55:45.705 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:55:45 np0005629333 nova_compute[244014]: 2026-02-25 13:55:45.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:55:45 np0005629333 nova_compute[244014]: 2026-02-25 13:55:45.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:55:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4076: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:47 np0005629333 nova_compute[244014]: 2026-02-25 13:55:47.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:55:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4077: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:49 np0005629333 nova_compute[244014]: 2026-02-25 13:55:49.071 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:55:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:55:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4078: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:50 np0005629333 nova_compute[244014]: 2026-02-25 13:55:50.707 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:55:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4079: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:53 np0005629333 podman[422844]: 2026-02-25 13:55:53.732756243 +0000 UTC m=+0.074963989 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 08:55:53 np0005629333 podman[422845]: 2026-02-25 13:55:53.770983408 +0000 UTC m=+0.108383068 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 08:55:54 np0005629333 nova_compute[244014]: 2026-02-25 13:55:54.074 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:55:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4080: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:55:55.099 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:55:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:55:55.100 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:55:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:55:55.100 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:55:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:55:55 np0005629333 nova_compute[244014]: 2026-02-25 13:55:55.707 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:55:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4081: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4082: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:55:59 np0005629333 nova_compute[244014]: 2026-02-25 13:55:59.113 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:56:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:56:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4083: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:00 np0005629333 nova_compute[244014]: 2026-02-25 13:56:00.709 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:56:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:56:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:56:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:56:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:56:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:56:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:56:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4084: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:04 np0005629333 nova_compute[244014]: 2026-02-25 13:56:04.117 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:56:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4085: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:56:05 np0005629333 nova_compute[244014]: 2026-02-25 13:56:05.711 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:56:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:56:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:56:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:56:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:56:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:56:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:56:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:56:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:56:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:56:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:56:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:56:06 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:56:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4086: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:06 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:56:06 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:56:06 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:56:07 np0005629333 podman[423028]: 2026-02-25 13:56:07.015822662 +0000 UTC m=+0.058477631 container create 5a51630f0d4e7280ef9ccf1d070b92afdab9c1ad959dfbe2f08c49596bbb936f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_keller, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:56:07 np0005629333 systemd[1]: Started libpod-conmon-5a51630f0d4e7280ef9ccf1d070b92afdab9c1ad959dfbe2f08c49596bbb936f.scope.
Feb 25 08:56:07 np0005629333 podman[423028]: 2026-02-25 13:56:06.989133934 +0000 UTC m=+0.031788953 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:56:07 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:56:07 np0005629333 podman[423028]: 2026-02-25 13:56:07.112168767 +0000 UTC m=+0.154823746 container init 5a51630f0d4e7280ef9ccf1d070b92afdab9c1ad959dfbe2f08c49596bbb936f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_keller, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 08:56:07 np0005629333 podman[423028]: 2026-02-25 13:56:07.121341298 +0000 UTC m=+0.163996247 container start 5a51630f0d4e7280ef9ccf1d070b92afdab9c1ad959dfbe2f08c49596bbb936f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_keller, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 08:56:07 np0005629333 podman[423028]: 2026-02-25 13:56:07.125233518 +0000 UTC m=+0.167888547 container attach 5a51630f0d4e7280ef9ccf1d070b92afdab9c1ad959dfbe2f08c49596bbb936f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_keller, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:56:07 np0005629333 determined_keller[423044]: 167 167
Feb 25 08:56:07 np0005629333 systemd[1]: libpod-5a51630f0d4e7280ef9ccf1d070b92afdab9c1ad959dfbe2f08c49596bbb936f.scope: Deactivated successfully.
Feb 25 08:56:07 np0005629333 podman[423028]: 2026-02-25 13:56:07.129033596 +0000 UTC m=+0.171688575 container died 5a51630f0d4e7280ef9ccf1d070b92afdab9c1ad959dfbe2f08c49596bbb936f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:56:07 np0005629333 systemd[1]: var-lib-containers-storage-overlay-04001b8e0de71ca7f96ccb10c6190247130c13cd37b1db66caad495ba3b7973a-merged.mount: Deactivated successfully.
Feb 25 08:56:07 np0005629333 podman[423028]: 2026-02-25 13:56:07.180450506 +0000 UTC m=+0.223105475 container remove 5a51630f0d4e7280ef9ccf1d070b92afdab9c1ad959dfbe2f08c49596bbb936f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_keller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:56:07 np0005629333 systemd[1]: libpod-conmon-5a51630f0d4e7280ef9ccf1d070b92afdab9c1ad959dfbe2f08c49596bbb936f.scope: Deactivated successfully.
Feb 25 08:56:07 np0005629333 podman[423071]: 2026-02-25 13:56:07.361621969 +0000 UTC m=+0.059841440 container create c9364397c16132e33459259870a3e89868e2f1da6b249bc421444be6e88dba61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 08:56:07 np0005629333 systemd[1]: Started libpod-conmon-c9364397c16132e33459259870a3e89868e2f1da6b249bc421444be6e88dba61.scope.
Feb 25 08:56:07 np0005629333 podman[423071]: 2026-02-25 13:56:07.34156378 +0000 UTC m=+0.039783241 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:56:07 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:56:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ae992098f6b3091e27aaf8594cdd5fc8e32d4b9595a63a1d9ace2d8a27eb9f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:56:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ae992098f6b3091e27aaf8594cdd5fc8e32d4b9595a63a1d9ace2d8a27eb9f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:56:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ae992098f6b3091e27aaf8594cdd5fc8e32d4b9595a63a1d9ace2d8a27eb9f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:56:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ae992098f6b3091e27aaf8594cdd5fc8e32d4b9595a63a1d9ace2d8a27eb9f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:56:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ae992098f6b3091e27aaf8594cdd5fc8e32d4b9595a63a1d9ace2d8a27eb9f9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:56:07 np0005629333 podman[423071]: 2026-02-25 13:56:07.491193367 +0000 UTC m=+0.189412869 container init c9364397c16132e33459259870a3e89868e2f1da6b249bc421444be6e88dba61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Feb 25 08:56:07 np0005629333 podman[423071]: 2026-02-25 13:56:07.501208272 +0000 UTC m=+0.199427703 container start c9364397c16132e33459259870a3e89868e2f1da6b249bc421444be6e88dba61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_knuth, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 08:56:07 np0005629333 podman[423071]: 2026-02-25 13:56:07.506295416 +0000 UTC m=+0.204515227 container attach c9364397c16132e33459259870a3e89868e2f1da6b249bc421444be6e88dba61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_knuth, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:56:07 np0005629333 thirsty_knuth[423086]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:56:07 np0005629333 thirsty_knuth[423086]: --> All data devices are unavailable
Feb 25 08:56:07 np0005629333 systemd[1]: libpod-c9364397c16132e33459259870a3e89868e2f1da6b249bc421444be6e88dba61.scope: Deactivated successfully.
Feb 25 08:56:07 np0005629333 podman[423071]: 2026-02-25 13:56:07.984926154 +0000 UTC m=+0.683145625 container died c9364397c16132e33459259870a3e89868e2f1da6b249bc421444be6e88dba61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_knuth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:56:08 np0005629333 systemd[1]: var-lib-containers-storage-overlay-8ae992098f6b3091e27aaf8594cdd5fc8e32d4b9595a63a1d9ace2d8a27eb9f9-merged.mount: Deactivated successfully.
Feb 25 08:56:08 np0005629333 podman[423071]: 2026-02-25 13:56:08.03233775 +0000 UTC m=+0.730557191 container remove c9364397c16132e33459259870a3e89868e2f1da6b249bc421444be6e88dba61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_knuth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Feb 25 08:56:08 np0005629333 systemd[1]: libpod-conmon-c9364397c16132e33459259870a3e89868e2f1da6b249bc421444be6e88dba61.scope: Deactivated successfully.
Feb 25 08:56:08 np0005629333 podman[423181]: 2026-02-25 13:56:08.521399035 +0000 UTC m=+0.048596401 container create 1d7ac77c916c6392f8b62f3d9533f1f11fd910af5cb590ec270220cb95215405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_euler, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:56:08 np0005629333 systemd[1]: Started libpod-conmon-1d7ac77c916c6392f8b62f3d9533f1f11fd910af5cb590ec270220cb95215405.scope.
Feb 25 08:56:08 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:56:08 np0005629333 podman[423181]: 2026-02-25 13:56:08.505759801 +0000 UTC m=+0.032957177 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:56:08 np0005629333 podman[423181]: 2026-02-25 13:56:08.606028497 +0000 UTC m=+0.133225863 container init 1d7ac77c916c6392f8b62f3d9533f1f11fd910af5cb590ec270220cb95215405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:56:08 np0005629333 podman[423181]: 2026-02-25 13:56:08.612610174 +0000 UTC m=+0.139807550 container start 1d7ac77c916c6392f8b62f3d9533f1f11fd910af5cb590ec270220cb95215405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_euler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 08:56:08 np0005629333 podman[423181]: 2026-02-25 13:56:08.616201796 +0000 UTC m=+0.143399182 container attach 1d7ac77c916c6392f8b62f3d9533f1f11fd910af5cb590ec270220cb95215405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_euler, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:56:08 np0005629333 systemd[1]: libpod-1d7ac77c916c6392f8b62f3d9533f1f11fd910af5cb590ec270220cb95215405.scope: Deactivated successfully.
Feb 25 08:56:08 np0005629333 upbeat_euler[423198]: 167 167
Feb 25 08:56:08 np0005629333 conmon[423198]: conmon 1d7ac77c916c6392f8b6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1d7ac77c916c6392f8b62f3d9533f1f11fd910af5cb590ec270220cb95215405.scope/container/memory.events
Feb 25 08:56:08 np0005629333 podman[423181]: 2026-02-25 13:56:08.618387618 +0000 UTC m=+0.145585004 container died 1d7ac77c916c6392f8b62f3d9533f1f11fd910af5cb590ec270220cb95215405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:56:08 np0005629333 systemd[1]: var-lib-containers-storage-overlay-3d1f1c2f4d2bcdbbe4e92aa6fc1b29b675c5b33ae56b361e88300faad0d43052-merged.mount: Deactivated successfully.
Feb 25 08:56:08 np0005629333 podman[423181]: 2026-02-25 13:56:08.656356776 +0000 UTC m=+0.183554142 container remove 1d7ac77c916c6392f8b62f3d9533f1f11fd910af5cb590ec270220cb95215405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_euler, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 08:56:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4087: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:08 np0005629333 systemd[1]: libpod-conmon-1d7ac77c916c6392f8b62f3d9533f1f11fd910af5cb590ec270220cb95215405.scope: Deactivated successfully.
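[annotation] The six podman events above — create, init, start, attach, died, remove — trace the full lifecycle of a short-lived helper container that cephadm launches, reads output from ("167 167" is the ceph uid/gid pair), and immediately discards; the conmon cgroups warning is a harmless side effect of the scope being torn down while conmon still holds the memory.events path. A minimal sketch of following the same event stream programmatically, assuming podman's JSON event objects carry Status/Name/ID keys (the helper name is illustrative):

    import json
    import subprocess

    def watch_podman_events(since="5m"):
        # 'podman events --format json' emits one JSON object per line;
        # with --since it replays recent events, then follows live ones.
        proc = subprocess.Popen(
            ["podman", "events", "--since", since, "--format", "json"],
            stdout=subprocess.PIPE, text=True)
        for line in proc.stdout:
            ev = json.loads(line)
            # Assumed key names: Status, Name, ID (verify on your podman version).
            print(ev.get("Status"), ev.get("Name"), ev.get("ID", "")[:12])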
Feb 25 08:56:08 np0005629333 podman[423221]: 2026-02-25 13:56:08.785600374 +0000 UTC m=+0.047619033 container create 43609c63d69087e7f444fa7ddcb5fe122a9a6d64f40161ed1f411eb6a584654d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_chatterjee, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 08:56:08 np0005629333 systemd[1]: Started libpod-conmon-43609c63d69087e7f444fa7ddcb5fe122a9a6d64f40161ed1f411eb6a584654d.scope.
Feb 25 08:56:08 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:56:08 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10f3d83e06f89515da07f05421826feccff17904b2e143480504b3a735769e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:56:08 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10f3d83e06f89515da07f05421826feccff17904b2e143480504b3a735769e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:56:08 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10f3d83e06f89515da07f05421826feccff17904b2e143480504b3a735769e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:56:08 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10f3d83e06f89515da07f05421826feccff17904b2e143480504b3a735769e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
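[annotation] The recurring xfs notices above are informational: the overlay filesystem was created without the XFS "bigtime" feature, so its inode timestamps cap at 2038-01-19 (0x7fffffff seconds). A minimal check, assuming a recent xfsprogs whose xfs_info output includes the bigtime flag:

    import subprocess

    def has_bigtime(path="/var/lib/containers"):
        # xfs_info on newer xfsprogs prints 'bigtime=1' when the
        # filesystem supports post-2038 timestamps (assumption: the
        # flag name appears verbatim in the output).
        out = subprocess.run(["xfs_info", path], capture_output=True,
                             text=True, check=True).stdout
        return "bigtime=1" in out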
Feb 25 08:56:08 np0005629333 podman[423221]: 2026-02-25 13:56:08.761383867 +0000 UTC m=+0.023402596 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:56:08 np0005629333 podman[423221]: 2026-02-25 13:56:08.868661502 +0000 UTC m=+0.130680201 container init 43609c63d69087e7f444fa7ddcb5fe122a9a6d64f40161ed1f411eb6a584654d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_chatterjee, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:56:08 np0005629333 podman[423221]: 2026-02-25 13:56:08.880233371 +0000 UTC m=+0.142252030 container start 43609c63d69087e7f444fa7ddcb5fe122a9a6d64f40161ed1f411eb6a584654d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_chatterjee, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:56:08 np0005629333 podman[423221]: 2026-02-25 13:56:08.883764831 +0000 UTC m=+0.145783500 container attach 43609c63d69087e7f444fa7ddcb5fe122a9a6d64f40161ed1f411eb6a584654d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_chatterjee, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 08:56:09 np0005629333 nova_compute[244014]: 2026-02-25 13:56:09.120 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]: {
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:    "0": [
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:        {
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "devices": [
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "/dev/loop3"
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            ],
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "lv_name": "ceph_lv0",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "lv_size": "21470642176",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "name": "ceph_lv0",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "tags": {
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.cluster_name": "ceph",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.crush_device_class": "",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.encrypted": "0",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.objectstore": "bluestore",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.osd_id": "0",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.type": "block",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.vdo": "0",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.with_tpm": "0"
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            },
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "type": "block",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "vg_name": "ceph_vg0"
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:        }
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:    ],
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:    "1": [
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:        {
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "devices": [
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "/dev/loop4"
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            ],
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "lv_name": "ceph_lv1",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "lv_size": "21470642176",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "name": "ceph_lv1",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "tags": {
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.cluster_name": "ceph",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.crush_device_class": "",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.encrypted": "0",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.objectstore": "bluestore",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.osd_id": "1",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.type": "block",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.vdo": "0",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.with_tpm": "0"
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            },
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "type": "block",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "vg_name": "ceph_vg1"
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:        }
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:    ],
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:    "2": [
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:        {
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "devices": [
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "/dev/loop5"
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            ],
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "lv_name": "ceph_lv2",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "lv_size": "21470642176",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "name": "ceph_lv2",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "tags": {
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.cluster_name": "ceph",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.crush_device_class": "",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.encrypted": "0",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.objectstore": "bluestore",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.osd_id": "2",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.type": "block",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.vdo": "0",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:                "ceph.with_tpm": "0"
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            },
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "type": "block",
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:            "vg_name": "ceph_vg2"
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:        }
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]:    ]
Feb 25 08:56:09 np0005629333 funny_chatterjee[423238]: }
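[annotation] The JSON that funny_chatterjee printed is a per-OSD report keyed by OSD id — the shape matches ceph-volume lvm list --format json — mapping each OSD to its backing loop device, logical volume, and ceph.* LV tags. A minimal sketch for flattening it (function name is illustrative):

    import json

    def osd_layout(report_text):
        # Map OSD id -> (devices, lv_path, osd_fsid) from the report
        # structure shown above.
        layout = {}
        for osd_id, lvs in json.loads(report_text).items():
            for lv in lvs:
                tags = lv.get("tags", {})
                layout[osd_id] = (lv["devices"], lv["lv_path"],
                                  tags.get("ceph.osd_fsid"))
        return layout

For this host the result would be {"0": (["/dev/loop3"], "/dev/ceph_vg0/ceph_lv0", "d19afe3c-…"), …} and likewise for OSDs 1 and 2.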
Feb 25 08:56:09 np0005629333 systemd[1]: libpod-43609c63d69087e7f444fa7ddcb5fe122a9a6d64f40161ed1f411eb6a584654d.scope: Deactivated successfully.
Feb 25 08:56:09 np0005629333 podman[423221]: 2026-02-25 13:56:09.558993561 +0000 UTC m=+0.821012190 container died 43609c63d69087e7f444fa7ddcb5fe122a9a6d64f40161ed1f411eb6a584654d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_chatterjee, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 08:56:09 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f10f3d83e06f89515da07f05421826feccff17904b2e143480504b3a735769e8-merged.mount: Deactivated successfully.
Feb 25 08:56:09 np0005629333 podman[423221]: 2026-02-25 13:56:09.598486072 +0000 UTC m=+0.860504701 container remove 43609c63d69087e7f444fa7ddcb5fe122a9a6d64f40161ed1f411eb6a584654d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:56:09 np0005629333 systemd[1]: libpod-conmon-43609c63d69087e7f444fa7ddcb5fe122a9a6d64f40161ed1f411eb6a584654d.scope: Deactivated successfully.
Feb 25 08:56:10 np0005629333 podman[423323]: 2026-02-25 13:56:10.038156054 +0000 UTC m=+0.048712804 container create 31753aa03d2c2ecbb59f6ce05436e0a12de982e1e7d9e451f9f9f6b605c49ffa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_turing, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:56:10 np0005629333 systemd[1]: Started libpod-conmon-31753aa03d2c2ecbb59f6ce05436e0a12de982e1e7d9e451f9f9f6b605c49ffa.scope.
Feb 25 08:56:10 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:56:10 np0005629333 podman[423323]: 2026-02-25 13:56:10.01689009 +0000 UTC m=+0.027446920 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:56:10 np0005629333 podman[423323]: 2026-02-25 13:56:10.112649299 +0000 UTC m=+0.123206139 container init 31753aa03d2c2ecbb59f6ce05436e0a12de982e1e7d9e451f9f9f6b605c49ffa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_turing, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:56:10 np0005629333 podman[423323]: 2026-02-25 13:56:10.122383565 +0000 UTC m=+0.132940315 container start 31753aa03d2c2ecbb59f6ce05436e0a12de982e1e7d9e451f9f9f6b605c49ffa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:56:10 np0005629333 podman[423323]: 2026-02-25 13:56:10.12609023 +0000 UTC m=+0.136647070 container attach 31753aa03d2c2ecbb59f6ce05436e0a12de982e1e7d9e451f9f9f6b605c49ffa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_turing, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:56:10 np0005629333 nostalgic_turing[423339]: 167 167
Feb 25 08:56:10 np0005629333 systemd[1]: libpod-31753aa03d2c2ecbb59f6ce05436e0a12de982e1e7d9e451f9f9f6b605c49ffa.scope: Deactivated successfully.
Feb 25 08:56:10 np0005629333 podman[423323]: 2026-02-25 13:56:10.127563522 +0000 UTC m=+0.138120272 container died 31753aa03d2c2ecbb59f6ce05436e0a12de982e1e7d9e451f9f9f6b605c49ffa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 08:56:10 np0005629333 systemd[1]: var-lib-containers-storage-overlay-37cee36b7f991d3d8cd81e41870c1cc9bd6e0cfbe4efeda50c3a6e2bbc284efa-merged.mount: Deactivated successfully.
Feb 25 08:56:10 np0005629333 podman[423323]: 2026-02-25 13:56:10.169348678 +0000 UTC m=+0.179905428 container remove 31753aa03d2c2ecbb59f6ce05436e0a12de982e1e7d9e451f9f9f6b605c49ffa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_turing, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:56:10 np0005629333 systemd[1]: libpod-conmon-31753aa03d2c2ecbb59f6ce05436e0a12de982e1e7d9e451f9f9f6b605c49ffa.scope: Deactivated successfully.
Feb 25 08:56:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:56:10 np0005629333 podman[423363]: 2026-02-25 13:56:10.345249602 +0000 UTC m=+0.056092583 container create f9a4f8e9d1da79c3b122a63fc232bc06728a1ddc09f22a85e3e08ab1accb891e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:56:10 np0005629333 systemd[1]: Started libpod-conmon-f9a4f8e9d1da79c3b122a63fc232bc06728a1ddc09f22a85e3e08ab1accb891e.scope.
Feb 25 08:56:10 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:56:10 np0005629333 podman[423363]: 2026-02-25 13:56:10.322674161 +0000 UTC m=+0.033517202 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:56:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d028a7913c8f6045a652501b49751059cce4f1393064b3b78fcea492c392db5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:56:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d028a7913c8f6045a652501b49751059cce4f1393064b3b78fcea492c392db5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:56:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d028a7913c8f6045a652501b49751059cce4f1393064b3b78fcea492c392db5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:56:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d028a7913c8f6045a652501b49751059cce4f1393064b3b78fcea492c392db5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:56:10 np0005629333 podman[423363]: 2026-02-25 13:56:10.440663081 +0000 UTC m=+0.151506062 container init f9a4f8e9d1da79c3b122a63fc232bc06728a1ddc09f22a85e3e08ab1accb891e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carson, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 08:56:10 np0005629333 podman[423363]: 2026-02-25 13:56:10.454654638 +0000 UTC m=+0.165497629 container start f9a4f8e9d1da79c3b122a63fc232bc06728a1ddc09f22a85e3e08ab1accb891e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carson, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 08:56:10 np0005629333 podman[423363]: 2026-02-25 13:56:10.459446154 +0000 UTC m=+0.170289195 container attach f9a4f8e9d1da79c3b122a63fc232bc06728a1ddc09f22a85e3e08ab1accb891e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carson, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 08:56:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4088: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:10 np0005629333 nova_compute[244014]: 2026-02-25 13:56:10.713 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:56:11 np0005629333 lvm[423456]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:56:11 np0005629333 lvm[423456]: VG ceph_vg0 finished
Feb 25 08:56:11 np0005629333 lvm[423459]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:56:11 np0005629333 lvm[423459]: VG ceph_vg1 finished
Feb 25 08:56:11 np0005629333 lvm[423461]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:56:11 np0005629333 lvm[423461]: VG ceph_vg2 finished
Feb 25 08:56:11 np0005629333 lvm[423462]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:56:11 np0005629333 lvm[423462]: VG ceph_vg1 finished
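[annotation] The lvm messages above come from event-based autoactivation: each VG sits on a single loop PV, so the VG is "complete" as soon as its one PV appears (ceph_vg1 is reported twice by two concurrent pvscan workers). The same ceph.* tags the container reported can be read straight from LVM; a sketch assuming an lvm2 build with JSON report support:

    import json
    import subprocess

    def ceph_osd_lvs():
        # 'lvs --reportformat json' wraps rows as report[0]['lv'];
        # filter to LVs carrying ceph-volume's ceph.osd_id tag.
        raw = subprocess.run(
            ["lvs", "--reportformat", "json",
             "-o", "lv_name,vg_name,lv_tags"],
            capture_output=True, text=True, check=True).stdout
        rows = json.loads(raw)["report"][0]["lv"]
        return [r for r in rows if "ceph.osd_id" in r["lv_tags"]]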
Feb 25 08:56:11 np0005629333 inspiring_carson[423380]: {}
Feb 25 08:56:11 np0005629333 systemd[1]: libpod-f9a4f8e9d1da79c3b122a63fc232bc06728a1ddc09f22a85e3e08ab1accb891e.scope: Deactivated successfully.
Feb 25 08:56:11 np0005629333 systemd[1]: libpod-f9a4f8e9d1da79c3b122a63fc232bc06728a1ddc09f22a85e3e08ab1accb891e.scope: Consumed 1.227s CPU time.
Feb 25 08:56:11 np0005629333 podman[423363]: 2026-02-25 13:56:11.262534204 +0000 UTC m=+0.973377195 container died f9a4f8e9d1da79c3b122a63fc232bc06728a1ddc09f22a85e3e08ab1accb891e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True)
Feb 25 08:56:11 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5d028a7913c8f6045a652501b49751059cce4f1393064b3b78fcea492c392db5-merged.mount: Deactivated successfully.
Feb 25 08:56:11 np0005629333 podman[423363]: 2026-02-25 13:56:11.317412602 +0000 UTC m=+1.028255583 container remove f9a4f8e9d1da79c3b122a63fc232bc06728a1ddc09f22a85e3e08ab1accb891e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:56:11 np0005629333 systemd[1]: libpod-conmon-f9a4f8e9d1da79c3b122a63fc232bc06728a1ddc09f22a85e3e08ab1accb891e.scope: Deactivated successfully.
Feb 25 08:56:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:56:11 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:56:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:56:11 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:56:11 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:56:11 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
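[annotation] The mon audit entries show the cephadm mgr persisting the inventory it just gathered: device scans are cached in the monitor's config-key store under mgr/cephadm/host.compute-0.devices.0 (chunked) alongside the per-host key mgr/cephadm/host.compute-0, both visible above. Reading one back, with the key name taken verbatim from the log:

    import subprocess

    def cached_inventory(host="compute-0", chunk=0):
        # Returns the blob cephadm stored for this host's devices.
        key = f"mgr/cephadm/host.{host}.devices.{chunk}"
        return subprocess.run(["ceph", "config-key", "get", key],
                              capture_output=True, text=True,
                              check=True).stdout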
Feb 25 08:56:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4089: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:14 np0005629333 nova_compute[244014]: 2026-02-25 13:56:14.124 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:56:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4090: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:56:15 np0005629333 nova_compute[244014]: 2026-02-25 13:56:15.715 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:56:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4091: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4092: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:19 np0005629333 nova_compute[244014]: 2026-02-25 13:56:19.128 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:56:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:56:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4093: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:20 np0005629333 nova_compute[244014]: 2026-02-25 13:56:20.717 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:56:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4094: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:24 np0005629333 nova_compute[244014]: 2026-02-25 13:56:24.131 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:56:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4095: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:24 np0005629333 podman[423502]: 2026-02-25 13:56:24.736348306 +0000 UTC m=+0.070995197 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:56:24 np0005629333 podman[423503]: 2026-02-25 13:56:24.780633433 +0000 UTC m=+0.115272214 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
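[annotation] The two health_status=healthy events come from podman's periodic healthcheck runs: each edpm_ansible-managed container mounts /var/lib/openstack/healthchecks/&lt;name&gt; at /openstack and uses /openstack/healthcheck as its test, as the config_data above shows. The same test can be triggered by hand; a minimal sketch:

    import subprocess

    def is_healthy(name):
        # 'podman healthcheck run' executes the container's configured
        # test and exits 0 on success.
        return subprocess.run(
            ["podman", "healthcheck", "run", name]).returncode == 0

    for c in ("ovn_metadata_agent", "ovn_controller"):
        print(c, "healthy" if is_healthy(c) else "unhealthy")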
Feb 25 08:56:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:56:25 np0005629333 nova_compute[244014]: 2026-02-25 13:56:25.719 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:56:25 np0005629333 nova_compute[244014]: 2026-02-25 13:56:25.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:56:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4096: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:26 np0005629333 nova_compute[244014]: 2026-02-25 13:56:26.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:56:26 np0005629333 nova_compute[244014]: 2026-02-25 13:56:26.912 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:56:26 np0005629333 nova_compute[244014]: 2026-02-25 13:56:26.913 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:56:26 np0005629333 nova_compute[244014]: 2026-02-25 13:56:26.913 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:56:26 np0005629333 nova_compute[244014]: 2026-02-25 13:56:26.913 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:56:26 np0005629333 nova_compute[244014]: 2026-02-25 13:56:26.914 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:56:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:56:27 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2903102671' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:56:27 np0005629333 nova_compute[244014]: 2026-02-25 13:56:27.464 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
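[annotation] update_available_resource sizes the shared RBD storage by shelling out to ceph df (0.55 s here, answered by the co-located mon as the audit line shows); cluster-wide totals live under the "stats" key of the JSON output. A minimal sketch of the same call:

    import json
    import subprocess

    def ceph_avail_gib(user="openstack", conf="/etc/ceph/ceph.conf"):
        # Same command the resource tracker logs above.
        raw = subprocess.run(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf],
            capture_output=True, text=True, check=True).stdout
        return json.loads(raw)["stats"]["total_avail_bytes"] / 2**30

which matches the free_disk≈59.99 GB figure in the hypervisor resource view logged just after.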
Feb 25 08:56:27 np0005629333 nova_compute[244014]: 2026-02-25 13:56:27.603 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:56:27 np0005629333 nova_compute[244014]: 2026-02-25 13:56:27.606 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3486MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:56:27 np0005629333 nova_compute[244014]: 2026-02-25 13:56:27.606 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:56:27 np0005629333 nova_compute[244014]: 2026-02-25 13:56:27.607 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:56:27 np0005629333 nova_compute[244014]: 2026-02-25 13:56:27.862 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:56:27 np0005629333 nova_compute[244014]: 2026-02-25 13:56:27.863 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:56:27 np0005629333 nova_compute[244014]: 2026-02-25 13:56:27.956 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 25 08:56:28 np0005629333 nova_compute[244014]: 2026-02-25 13:56:28.054 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 25 08:56:28 np0005629333 nova_compute[244014]: 2026-02-25 13:56:28.054 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 25 08:56:28 np0005629333 nova_compute[244014]: 2026-02-25 13:56:28.069 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 25 08:56:28 np0005629333 nova_compute[244014]: 2026-02-25 13:56:28.092 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 25 08:56:28 np0005629333 nova_compute[244014]: 2026-02-25 13:56:28.110 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:56:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:56:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1917647811' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:56:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4097: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:28 np0005629333 nova_compute[244014]: 2026-02-25 13:56:28.682 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
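
The lines above show Nova sizing its RBD-backed disk inventory by shelling out to the ceph CLI through oslo_concurrency.processutils. A minimal stand-alone reproduction of the same call, assuming the client.openstack keyring and /etc/ceph/ceph.conf from the log exist on the host and the usual "stats" keys of the ceph df JSON:

    import json
    import subprocess

    # Same command the resource tracker logs above (exit 0 in ~0.57s).
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    # Cluster-wide totals; the avail figure feeds the DISK_GB inventory.
    print(stats["total_bytes"], stats["total_avail_bytes"])
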
Feb 25 08:56:28 np0005629333 nova_compute[244014]: 2026-02-25 13:56:28.690 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:56:28 np0005629333 nova_compute[244014]: 2026-02-25 13:56:28.706 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 08:56:28 np0005629333 nova_compute[244014]: 2026-02-25 13:56:28.709 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 08:56:28 np0005629333 nova_compute[244014]: 2026-02-25 13:56:28.709 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
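
For reference, the inventory dict logged above maps to schedulable capacity as (total - reserved) * allocation_ratio per resource class; the sketch below is just that arithmetic (not Nova code), reproducing the effective limits this host reports to Placement:

    # Effective capacity implied by the logged inventory:
    # VCPU (8-0)*4.0 = 32, MEMORY_MB (7679-512)*1.0 = 7167,
    # DISK_GB (59-1)*0.9 = 52.2
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} allocatable")
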
Feb 25 08:56:29 np0005629333 nova_compute[244014]: 2026-02-25 13:56:29.134 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:56:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:56:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4098: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:30 np0005629333 nova_compute[244014]: 2026-02-25 13:56:30.721 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:56:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:56:31
Feb 25 08:56:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:56:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:56:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'default.rgw.log', 'volumes', '.mgr', 'vms', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data']
Feb 25 08:56:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:56:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:56:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:56:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:56:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:56:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:56:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:56:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:56:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:56:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:56:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:56:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:56:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:56:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:56:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:56:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:56:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:56:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4099: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:33 np0005629333 nova_compute[244014]: 2026-02-25 13:56:33.710 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:56:33 np0005629333 nova_compute[244014]: 2026-02-25 13:56:33.710 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 08:56:33 np0005629333 nova_compute[244014]: 2026-02-25 13:56:33.710 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 08:56:33 np0005629333 nova_compute[244014]: 2026-02-25 13:56:33.727 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 08:56:34 np0005629333 nova_compute[244014]: 2026-02-25 13:56:34.175 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:56:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4100: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:34 np0005629333 nova_compute[244014]: 2026-02-25 13:56:34.888 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:56:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:56:35 np0005629333 nova_compute[244014]: 2026-02-25 13:56:35.724 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:56:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4101: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4102: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:38 np0005629333 nova_compute[244014]: 2026-02-25 13:56:38.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:56:38 np0005629333 nova_compute[244014]: 2026-02-25 13:56:38.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 08:56:39 np0005629333 nova_compute[244014]: 2026-02-25 13:56:39.218 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:56:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:56:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4103: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:40 np0005629333 nova_compute[244014]: 2026-02-25 13:56:40.726 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:56:41 np0005629333 nova_compute[244014]: 2026-02-25 13:56:41.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:56:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4104: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:56:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:56:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:56:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:56:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:56:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:56:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:56:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:56:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:56:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:56:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:56:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:56:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:56:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:56:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:56:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:56:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:56:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:56:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:56:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:56:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:56:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:56:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 08:56:44 np0005629333 nova_compute[244014]: 2026-02-25 13:56:44.267 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:56:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4105: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:56:45 np0005629333 nova_compute[244014]: 2026-02-25 13:56:45.728 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:56:45 np0005629333 nova_compute[244014]: 2026-02-25 13:56:45.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:56:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4106: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:56:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1511691588' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:56:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:56:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1511691588' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:56:47 np0005629333 nova_compute[244014]: 2026-02-25 13:56:47.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:56:47 np0005629333 nova_compute[244014]: 2026-02-25 13:56:47.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:56:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4107: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:49 np0005629333 nova_compute[244014]: 2026-02-25 13:56:49.271 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:56:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:56:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4108: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:50 np0005629333 nova_compute[244014]: 2026-02-25 13:56:50.729 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:56:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4109: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:54 np0005629333 nova_compute[244014]: 2026-02-25 13:56:54.318 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:56:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4110: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:56:55.100 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:56:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:56:55.101 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:56:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:56:55.101 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:56:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:56:55 np0005629333 nova_compute[244014]: 2026-02-25 13:56:55.731 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:56:55 np0005629333 podman[423593]: 2026-02-25 13:56:55.734895648 +0000 UTC m=+0.070869193 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 08:56:55 np0005629333 podman[423594]: 2026-02-25 13:56:55.779191725 +0000 UTC m=+0.107148902 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
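
The two podman events above are periodic healthcheck runs: podman executes the /openstack/healthcheck test mounted into each container and records health_status from its exit code. The same check can be fired by hand; a hedged sketch:

    import subprocess

    # Exit status 0 maps to health_status=healthy in the events above.
    for name in ("ovn_metadata_agent", "ovn_controller"):
        res = subprocess.run(["podman", "healthcheck", "run", name])
        print(name, "healthy" if res.returncode == 0
              else f"unhealthy (rc={res.returncode})")
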
Feb 25 08:56:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4111: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4112: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:56:59 np0005629333 nova_compute[244014]: 2026-02-25 13:56:59.323 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:57:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:57:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4113: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:00 np0005629333 nova_compute[244014]: 2026-02-25 13:57:00.733 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:57:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:57:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:57:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:57:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:57:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:57:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:57:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4114: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:04 np0005629333 nova_compute[244014]: 2026-02-25 13:57:04.381 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:57:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4115: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:57:05 np0005629333 nova_compute[244014]: 2026-02-25 13:57:05.735 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:57:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4116: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4117: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:09 np0005629333 nova_compute[244014]: 2026-02-25 13:57:09.384 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:57:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:57:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4118: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:10 np0005629333 nova_compute[244014]: 2026-02-25 13:57:10.738 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:57:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 25 08:57:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 08:57:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:57:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:57:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:57:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:57:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:57:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:57:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:57:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:57:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:57:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:57:12 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:57:12 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:57:12 np0005629333 podman[423780]: 2026-02-25 13:57:12.61792078 +0000 UTC m=+0.057453822 container create 457111de06c107ac45631b27e3c5185c9476afbc59dfe5b1ec1e64d800350faf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:57:12 np0005629333 systemd[1]: Started libpod-conmon-457111de06c107ac45631b27e3c5185c9476afbc59dfe5b1ec1e64d800350faf.scope.
Feb 25 08:57:12 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:57:12 np0005629333 podman[423780]: 2026-02-25 13:57:12.594866545 +0000 UTC m=+0.034399677 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:57:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4119: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:12 np0005629333 podman[423780]: 2026-02-25 13:57:12.706248477 +0000 UTC m=+0.145781549 container init 457111de06c107ac45631b27e3c5185c9476afbc59dfe5b1ec1e64d800350faf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Feb 25 08:57:12 np0005629333 podman[423780]: 2026-02-25 13:57:12.716872379 +0000 UTC m=+0.156405451 container start 457111de06c107ac45631b27e3c5185c9476afbc59dfe5b1ec1e64d800350faf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_kapitsa, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:57:12 np0005629333 podman[423780]: 2026-02-25 13:57:12.720985846 +0000 UTC m=+0.160518888 container attach 457111de06c107ac45631b27e3c5185c9476afbc59dfe5b1ec1e64d800350faf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 08:57:12 np0005629333 jolly_kapitsa[423796]: 167 167
Feb 25 08:57:12 np0005629333 systemd[1]: libpod-457111de06c107ac45631b27e3c5185c9476afbc59dfe5b1ec1e64d800350faf.scope: Deactivated successfully.
Feb 25 08:57:12 np0005629333 podman[423780]: 2026-02-25 13:57:12.724818685 +0000 UTC m=+0.164351767 container died 457111de06c107ac45631b27e3c5185c9476afbc59dfe5b1ec1e64d800350faf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_kapitsa, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 08:57:12 np0005629333 systemd[1]: var-lib-containers-storage-overlay-b1ad2ca98cd4476bd32d729b4d3558fd10eab5cf41c162243b524eb580e1d9ae-merged.mount: Deactivated successfully.
Feb 25 08:57:12 np0005629333 podman[423780]: 2026-02-25 13:57:12.774769833 +0000 UTC m=+0.214302885 container remove 457111de06c107ac45631b27e3c5185c9476afbc59dfe5b1ec1e64d800350faf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_kapitsa, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:57:12 np0005629333 systemd[1]: libpod-conmon-457111de06c107ac45631b27e3c5185c9476afbc59dfe5b1ec1e64d800350faf.scope: Deactivated successfully.
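
The short-lived jolly_kapitsa container above printed "167 167", the ceph user's uid and gid inside the image; cephadm runs throwaway containers like this so host-side paths get the right ownership before daemons are deployed. A hedged reproduction (the exact probe cephadm uses is an assumption here):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    # Ask the image for the owner of /var/lib/ceph; expected output: "167 167".
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, check=True, text=True,
    ).stdout.strip()
    print(out)
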
Feb 25 08:57:12 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 08:57:12 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:57:12 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:57:12 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:57:12 np0005629333 podman[423821]: 2026-02-25 13:57:12.948193246 +0000 UTC m=+0.044983308 container create ee54a4272fabb45ff2d6e77fa8512e322e908d0e11151e32800f20bd083a618e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:57:12 np0005629333 systemd[1]: Started libpod-conmon-ee54a4272fabb45ff2d6e77fa8512e322e908d0e11151e32800f20bd083a618e.scope.
Feb 25 08:57:13 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:57:13 np0005629333 podman[423821]: 2026-02-25 13:57:12.92755429 +0000 UTC m=+0.024344282 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:57:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f71a5615013318ee0a110fb10df1ab854a793d82206e37bfba35038c17e28c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:57:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f71a5615013318ee0a110fb10df1ab854a793d82206e37bfba35038c17e28c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:57:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f71a5615013318ee0a110fb10df1ab854a793d82206e37bfba35038c17e28c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:57:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f71a5615013318ee0a110fb10df1ab854a793d82206e37bfba35038c17e28c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:57:13 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f71a5615013318ee0a110fb10df1ab854a793d82206e37bfba35038c17e28c9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:57:13 np0005629333 podman[423821]: 2026-02-25 13:57:13.048624567 +0000 UTC m=+0.145414619 container init ee54a4272fabb45ff2d6e77fa8512e322e908d0e11151e32800f20bd083a618e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:57:13 np0005629333 podman[423821]: 2026-02-25 13:57:13.061783941 +0000 UTC m=+0.158573903 container start ee54a4272fabb45ff2d6e77fa8512e322e908d0e11151e32800f20bd083a618e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:57:13 np0005629333 podman[423821]: 2026-02-25 13:57:13.065753554 +0000 UTC m=+0.162543596 container attach ee54a4272fabb45ff2d6e77fa8512e322e908d0e11151e32800f20bd083a618e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kowalevski, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:57:13 np0005629333 upbeat_kowalevski[423838]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:57:13 np0005629333 upbeat_kowalevski[423838]: --> All data devices are unavailable
Feb 25 08:57:13 np0005629333 systemd[1]: libpod-ee54a4272fabb45ff2d6e77fa8512e322e908d0e11151e32800f20bd083a618e.scope: Deactivated successfully.
Feb 25 08:57:13 np0005629333 podman[423821]: 2026-02-25 13:57:13.574840705 +0000 UTC m=+0.671630717 container died ee54a4272fabb45ff2d6e77fa8512e322e908d0e11151e32800f20bd083a618e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Feb 25 08:57:13 np0005629333 systemd[1]: var-lib-containers-storage-overlay-0f71a5615013318ee0a110fb10df1ab854a793d82206e37bfba35038c17e28c9-merged.mount: Deactivated successfully.
Feb 25 08:57:13 np0005629333 podman[423821]: 2026-02-25 13:57:13.628969112 +0000 UTC m=+0.725759074 container remove ee54a4272fabb45ff2d6e77fa8512e322e908d0e11151e32800f20bd083a618e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 08:57:13 np0005629333 systemd[1]: libpod-conmon-ee54a4272fabb45ff2d6e77fa8512e322e908d0e11151e32800f20bd083a618e.scope: Deactivated successfully.
Feb 25 08:57:14 np0005629333 podman[423930]: 2026-02-25 13:57:14.097941636 +0000 UTC m=+0.047310804 container create 0348244099a553fd7924a3f68e92733d1daa1aab7bdb24036c6b477b135958e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 08:57:14 np0005629333 systemd[1]: Started libpod-conmon-0348244099a553fd7924a3f68e92733d1daa1aab7bdb24036c6b477b135958e1.scope.
Feb 25 08:57:14 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:57:14 np0005629333 podman[423930]: 2026-02-25 13:57:14.075686534 +0000 UTC m=+0.025055772 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:57:14 np0005629333 podman[423930]: 2026-02-25 13:57:14.180673685 +0000 UTC m=+0.130042923 container init 0348244099a553fd7924a3f68e92733d1daa1aab7bdb24036c6b477b135958e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_bouman, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:57:14 np0005629333 podman[423930]: 2026-02-25 13:57:14.188061825 +0000 UTC m=+0.137431003 container start 0348244099a553fd7924a3f68e92733d1daa1aab7bdb24036c6b477b135958e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_bouman, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 08:57:14 np0005629333 angry_bouman[423947]: 167 167
Feb 25 08:57:14 np0005629333 systemd[1]: libpod-0348244099a553fd7924a3f68e92733d1daa1aab7bdb24036c6b477b135958e1.scope: Deactivated successfully.
Feb 25 08:57:14 np0005629333 podman[423930]: 2026-02-25 13:57:14.194558649 +0000 UTC m=+0.143927907 container attach 0348244099a553fd7924a3f68e92733d1daa1aab7bdb24036c6b477b135958e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_bouman, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:57:14 np0005629333 podman[423930]: 2026-02-25 13:57:14.195535057 +0000 UTC m=+0.144904245 container died 0348244099a553fd7924a3f68e92733d1daa1aab7bdb24036c6b477b135958e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 08:57:14 np0005629333 systemd[1]: var-lib-containers-storage-overlay-deac54028aa9bd81b4c567e92bc44ba279a3fb2d186c5a2bcadd620e85c8fe73-merged.mount: Deactivated successfully.
Feb 25 08:57:14 np0005629333 podman[423930]: 2026-02-25 13:57:14.237246271 +0000 UTC m=+0.186615479 container remove 0348244099a553fd7924a3f68e92733d1daa1aab7bdb24036c6b477b135958e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_bouman, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:57:14 np0005629333 systemd[1]: libpod-conmon-0348244099a553fd7924a3f68e92733d1daa1aab7bdb24036c6b477b135958e1.scope: Deactivated successfully.
Feb 25 08:57:14 np0005629333 nova_compute[244014]: 2026-02-25 13:57:14.389 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:57:14 np0005629333 podman[423970]: 2026-02-25 13:57:14.421859532 +0000 UTC m=+0.059849630 container create c2d86ff278b0f6b408fd2f07ba63adf7443d7e1782e90fb37eb014625b818e41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 08:57:14 np0005629333 systemd[1]: Started libpod-conmon-c2d86ff278b0f6b408fd2f07ba63adf7443d7e1782e90fb37eb014625b818e41.scope.
Feb 25 08:57:14 np0005629333 podman[423970]: 2026-02-25 13:57:14.399178428 +0000 UTC m=+0.037168606 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:57:14 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:57:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6157dda46bdb74dd2e18d208bb1a6df6be6fdd81ad17cffb216d7df7113e86ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:57:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6157dda46bdb74dd2e18d208bb1a6df6be6fdd81ad17cffb216d7df7113e86ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:57:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6157dda46bdb74dd2e18d208bb1a6df6be6fdd81ad17cffb216d7df7113e86ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:57:14 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6157dda46bdb74dd2e18d208bb1a6df6be6fdd81ad17cffb216d7df7113e86ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:57:14 np0005629333 podman[423970]: 2026-02-25 13:57:14.520096561 +0000 UTC m=+0.158086719 container init c2d86ff278b0f6b408fd2f07ba63adf7443d7e1782e90fb37eb014625b818e41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_snyder, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:57:14 np0005629333 podman[423970]: 2026-02-25 13:57:14.532993687 +0000 UTC m=+0.170983805 container start c2d86ff278b0f6b408fd2f07ba63adf7443d7e1782e90fb37eb014625b818e41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:57:14 np0005629333 podman[423970]: 2026-02-25 13:57:14.536907188 +0000 UTC m=+0.174897306 container attach c2d86ff278b0f6b408fd2f07ba63adf7443d7e1782e90fb37eb014625b818e41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_snyder, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:57:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4120: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]: {
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:    "0": [
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:        {
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "devices": [
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "/dev/loop3"
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            ],
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "lv_name": "ceph_lv0",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "lv_size": "21470642176",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "name": "ceph_lv0",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "tags": {
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.cluster_name": "ceph",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.crush_device_class": "",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.encrypted": "0",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.objectstore": "bluestore",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.osd_id": "0",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.type": "block",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.vdo": "0",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.with_tpm": "0"
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            },
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "type": "block",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "vg_name": "ceph_vg0"
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:        }
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:    ],
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:    "1": [
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:        {
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "devices": [
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "/dev/loop4"
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            ],
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "lv_name": "ceph_lv1",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "lv_size": "21470642176",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "name": "ceph_lv1",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "tags": {
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.cluster_name": "ceph",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.crush_device_class": "",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.encrypted": "0",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.objectstore": "bluestore",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.osd_id": "1",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.type": "block",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.vdo": "0",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.with_tpm": "0"
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            },
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "type": "block",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "vg_name": "ceph_vg1"
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:        }
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:    ],
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:    "2": [
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:        {
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "devices": [
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "/dev/loop5"
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            ],
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "lv_name": "ceph_lv2",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "lv_size": "21470642176",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "name": "ceph_lv2",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "tags": {
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.cluster_name": "ceph",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.crush_device_class": "",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.encrypted": "0",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.objectstore": "bluestore",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.osd_id": "2",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.type": "block",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.vdo": "0",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:                "ceph.with_tpm": "0"
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            },
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "type": "block",
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:            "vg_name": "ceph_vg2"
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:        }
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]:    ]
Feb 25 08:57:14 np0005629333 gifted_snyder[423988]: }
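The JSON block printed by gifted_snyder is an inventory of the three OSD logical volumes on this host; its shape matches `ceph-volume lvm list --format json` output, though the exact command line is not shown in the log. A minimal sketch of consuming it, assuming the output above were captured to a file named osd_list.json (a hypothetical name):

    import json

    # Hypothetical capture of the JSON the container printed above.
    with open("osd_list.json") as f:
        osds = json.load(f)

    # Map each OSD id to its logical volume, backing device, and OSD fsid.
    for osd_id, lvs in sorted(osds.items()):
        for lv in lvs:
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={lv['tags']['ceph.osd_fsid']}")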
Feb 25 08:57:14 np0005629333 systemd[1]: libpod-c2d86ff278b0f6b408fd2f07ba63adf7443d7e1782e90fb37eb014625b818e41.scope: Deactivated successfully.
Feb 25 08:57:14 np0005629333 podman[423970]: 2026-02-25 13:57:14.843107691 +0000 UTC m=+0.481097849 container died c2d86ff278b0f6b408fd2f07ba63adf7443d7e1782e90fb37eb014625b818e41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 08:57:14 np0005629333 nova_compute[244014]: 2026-02-25 13:57:14.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:57:14 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6157dda46bdb74dd2e18d208bb1a6df6be6fdd81ad17cffb216d7df7113e86ad-merged.mount: Deactivated successfully.
Feb 25 08:57:14 np0005629333 podman[423970]: 2026-02-25 13:57:14.894233983 +0000 UTC m=+0.532224091 container remove c2d86ff278b0f6b408fd2f07ba63adf7443d7e1782e90fb37eb014625b818e41 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_snyder, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 08:57:14 np0005629333 systemd[1]: libpod-conmon-c2d86ff278b0f6b408fd2f07ba63adf7443d7e1782e90fb37eb014625b818e41.scope: Deactivated successfully.
Feb 25 08:57:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:57:15 np0005629333 podman[424071]: 2026-02-25 13:57:15.386846098 +0000 UTC m=+0.054808887 container create 028f28458f78ad9684abe11ea56340a7c2dd09e8a22681e956dd3a57af38d74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_beaver, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 08:57:15 np0005629333 systemd[1]: Started libpod-conmon-028f28458f78ad9684abe11ea56340a7c2dd09e8a22681e956dd3a57af38d74c.scope.
Feb 25 08:57:15 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:57:15 np0005629333 podman[424071]: 2026-02-25 13:57:15.366057107 +0000 UTC m=+0.034019986 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:57:15 np0005629333 podman[424071]: 2026-02-25 13:57:15.472901521 +0000 UTC m=+0.140864390 container init 028f28458f78ad9684abe11ea56340a7c2dd09e8a22681e956dd3a57af38d74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_beaver, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:57:15 np0005629333 podman[424071]: 2026-02-25 13:57:15.480778774 +0000 UTC m=+0.148741603 container start 028f28458f78ad9684abe11ea56340a7c2dd09e8a22681e956dd3a57af38d74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_beaver, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:57:15 np0005629333 podman[424071]: 2026-02-25 13:57:15.48486352 +0000 UTC m=+0.152826399 container attach 028f28458f78ad9684abe11ea56340a7c2dd09e8a22681e956dd3a57af38d74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:57:15 np0005629333 silly_beaver[424087]: 167 167
Feb 25 08:57:15 np0005629333 systemd[1]: libpod-028f28458f78ad9684abe11ea56340a7c2dd09e8a22681e956dd3a57af38d74c.scope: Deactivated successfully.
Feb 25 08:57:15 np0005629333 podman[424071]: 2026-02-25 13:57:15.485847238 +0000 UTC m=+0.153810057 container died 028f28458f78ad9684abe11ea56340a7c2dd09e8a22681e956dd3a57af38d74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_beaver, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:57:15 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e56c1dbb8bc5f72a01990905dd91f489ec5eefdab5ad331a9591eae19315ab2c-merged.mount: Deactivated successfully.
Feb 25 08:57:15 np0005629333 podman[424071]: 2026-02-25 13:57:15.535470687 +0000 UTC m=+0.203433506 container remove 028f28458f78ad9684abe11ea56340a7c2dd09e8a22681e956dd3a57af38d74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_beaver, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 08:57:15 np0005629333 systemd[1]: libpod-conmon-028f28458f78ad9684abe11ea56340a7c2dd09e8a22681e956dd3a57af38d74c.scope: Deactivated successfully.
Feb 25 08:57:15 np0005629333 podman[424110]: 2026-02-25 13:57:15.731959335 +0000 UTC m=+0.052369767 container create 3f832766445fc626f553faaed010fc59ad4c4ec65789332d34839de093dd1744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 08:57:15 np0005629333 nova_compute[244014]: 2026-02-25 13:57:15.739 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:57:15 np0005629333 systemd[1]: Started libpod-conmon-3f832766445fc626f553faaed010fc59ad4c4ec65789332d34839de093dd1744.scope.
Feb 25 08:57:15 np0005629333 podman[424110]: 2026-02-25 13:57:15.714618413 +0000 UTC m=+0.035028865 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:57:15 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:57:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15fc57362bd81823e8e1d68f2a351490466f05167a650407660206eb4f55acbe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:57:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15fc57362bd81823e8e1d68f2a351490466f05167a650407660206eb4f55acbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:57:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15fc57362bd81823e8e1d68f2a351490466f05167a650407660206eb4f55acbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:57:15 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15fc57362bd81823e8e1d68f2a351490466f05167a650407660206eb4f55acbe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:57:15 np0005629333 podman[424110]: 2026-02-25 13:57:15.832905411 +0000 UTC m=+0.153315863 container init 3f832766445fc626f553faaed010fc59ad4c4ec65789332d34839de093dd1744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bohr, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:57:15 np0005629333 podman[424110]: 2026-02-25 13:57:15.841953978 +0000 UTC m=+0.162364450 container start 3f832766445fc626f553faaed010fc59ad4c4ec65789332d34839de093dd1744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:57:15 np0005629333 podman[424110]: 2026-02-25 13:57:15.845977462 +0000 UTC m=+0.166387894 container attach 3f832766445fc626f553faaed010fc59ad4c4ec65789332d34839de093dd1744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:57:16 np0005629333 lvm[424205]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:57:16 np0005629333 lvm[424204]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:57:16 np0005629333 lvm[424204]: VG ceph_vg0 finished
Feb 25 08:57:16 np0005629333 lvm[424205]: VG ceph_vg1 finished
Feb 25 08:57:16 np0005629333 lvm[424207]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:57:16 np0005629333 lvm[424207]: VG ceph_vg2 finished
Feb 25 08:57:16 np0005629333 awesome_bohr[424126]: {}
Feb 25 08:57:16 np0005629333 systemd[1]: libpod-3f832766445fc626f553faaed010fc59ad4c4ec65789332d34839de093dd1744.scope: Deactivated successfully.
Feb 25 08:57:16 np0005629333 systemd[1]: libpod-3f832766445fc626f553faaed010fc59ad4c4ec65789332d34839de093dd1744.scope: Consumed 1.167s CPU time.
Feb 25 08:57:16 np0005629333 podman[424110]: 2026-02-25 13:57:16.656149313 +0000 UTC m=+0.976559745 container died 3f832766445fc626f553faaed010fc59ad4c4ec65789332d34839de093dd1744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bohr, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:57:16 np0005629333 systemd[1]: var-lib-containers-storage-overlay-15fc57362bd81823e8e1d68f2a351490466f05167a650407660206eb4f55acbe-merged.mount: Deactivated successfully.
Feb 25 08:57:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4121: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:16 np0005629333 podman[424110]: 2026-02-25 13:57:16.706002438 +0000 UTC m=+1.026412890 container remove 3f832766445fc626f553faaed010fc59ad4c4ec65789332d34839de093dd1744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_bohr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:57:16 np0005629333 systemd[1]: libpod-conmon-3f832766445fc626f553faaed010fc59ad4c4ec65789332d34839de093dd1744.scope: Deactivated successfully.
Feb 25 08:57:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:57:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:57:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:57:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:57:17 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:57:17 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:57:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4122: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:19 np0005629333 nova_compute[244014]: 2026-02-25 13:57:19.442 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:57:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:57:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4123: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:20 np0005629333 nova_compute[244014]: 2026-02-25 13:57:20.741 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:57:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4124: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:24 np0005629333 nova_compute[244014]: 2026-02-25 13:57:24.492 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:57:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4125: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:57:25 np0005629333 nova_compute[244014]: 2026-02-25 13:57:25.743 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:57:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4126: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:26 np0005629333 podman[424248]: 2026-02-25 13:57:26.74534592 +0000 UTC m=+0.076911025 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 25 08:57:26 np0005629333 podman[424249]: 2026-02-25 13:57:26.824776495 +0000 UTC m=+0.156035641 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 25 08:57:26 np0005629333 nova_compute[244014]: 2026-02-25 13:57:26.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:57:26 np0005629333 nova_compute[244014]: 2026-02-25 13:57:26.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:57:26 np0005629333 nova_compute[244014]: 2026-02-25 13:57:26.916 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:57:26 np0005629333 nova_compute[244014]: 2026-02-25 13:57:26.916 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:57:26 np0005629333 nova_compute[244014]: 2026-02-25 13:57:26.916 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 08:57:26 np0005629333 nova_compute[244014]: 2026-02-25 13:57:26.917 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 08:57:26 np0005629333 nova_compute[244014]: 2026-02-25 13:57:26.917 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:57:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:57:27 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2850778559' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:57:27 np0005629333 nova_compute[244014]: 2026-02-25 13:57:27.475 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
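The resource audit shells out to the Ceph CLI; the free_disk figure in the hypervisor view below (59.98 GB, matching the cluster's available space) evidently comes from this call, as Nova's libvirt driver reads disk stats from `ceph df` when ephemeral storage is RBD-backed. A sketch of the same query, assuming the standard `ceph df --format=json` output layout:

    import json
    import subprocess

    # Same command line the periodic task logs above.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout

    stats = json.loads(out)["stats"]
    print(stats["total_avail_bytes"] / 2**30, "GiB available")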
Feb 25 08:57:27 np0005629333 nova_compute[244014]: 2026-02-25 13:57:27.657 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 08:57:27 np0005629333 nova_compute[244014]: 2026-02-25 13:57:27.661 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3465MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 08:57:27 np0005629333 nova_compute[244014]: 2026-02-25 13:57:27.661 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:57:27 np0005629333 nova_compute[244014]: 2026-02-25 13:57:27.662 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:57:27 np0005629333 nova_compute[244014]: 2026-02-25 13:57:27.728 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 08:57:27 np0005629333 nova_compute[244014]: 2026-02-25 13:57:27.729 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 08:57:27 np0005629333 nova_compute[244014]: 2026-02-25 13:57:27.754 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 08:57:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:57:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1611876398' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:57:28 np0005629333 nova_compute[244014]: 2026-02-25 13:57:28.347 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 08:57:28 np0005629333 nova_compute[244014]: 2026-02-25 13:57:28.353 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 08:57:28 np0005629333 nova_compute[244014]: 2026-02-25 13:57:28.369 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 08:57:28 np0005629333 nova_compute[244014]: 2026-02-25 13:57:28.371 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 08:57:28 np0005629333 nova_compute[244014]: 2026-02-25 13:57:28.371 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
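The inventory Nova reported to Placement above folds reserved capacity and allocation ratios into the capacity check: Placement treats (total - reserved) * allocation_ratio as the schedulable ceiling per resource class. Recomputing from the logged values (a sketch, not Nova code):

    # Inventory exactly as logged by the resource tracker above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: schedulable capacity = {cap}")
    # VCPU: 32.0, MEMORY_MB: 7167.0, DISK_GB: 52.2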
Feb 25 08:57:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4127: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:29 np0005629333 nova_compute[244014]: 2026-02-25 13:57:29.495 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:57:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:57:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4128: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:30 np0005629333 nova_compute[244014]: 2026-02-25 13:57:30.774 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:57:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:57:31
Feb 25 08:57:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:57:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:57:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'volumes', 'vms', 'default.rgw.control', 'backups']
Feb 25 08:57:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
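A no-op balancer pass: every PG is active+clean and the upmap optimizer prepared 0 of a possible 10 changes. The "max misplaced 0.050000" figure is the ceiling on concurrent data movement; back-of-envelope, assuming it applies to the 305 PGs in this cluster:

    # At most 5% of PGs may be left misplaced by balancer-driven remaps.
    pgs, max_misplaced_ratio = 305, 0.05
    print(int(pgs * max_misplaced_ratio))   # -> 15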
Feb 25 08:57:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:57:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:57:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:57:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:57:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:57:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:57:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:57:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:57:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:57:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:57:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:57:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:57:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:57:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:57:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:57:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:57:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4129: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:34 np0005629333 nova_compute[244014]: 2026-02-25 13:57:34.547 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:57:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4130: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:57:35 np0005629333 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 08:57:35 np0005629333 nova_compute[244014]: 2026-02-25 13:57:35.372 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:57:35 np0005629333 nova_compute[244014]: 2026-02-25 13:57:35.372 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 08:57:35 np0005629333 nova_compute[244014]: 2026-02-25 13:57:35.372 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 08:57:35 np0005629333 nova_compute[244014]: 2026-02-25 13:57:35.392 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 08:57:35 np0005629333 nova_compute[244014]: 2026-02-25 13:57:35.775 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:57:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4131: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:36 np0005629333 nova_compute[244014]: 2026-02-25 13:57:36.891 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:57:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4132: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:38 np0005629333 nova_compute[244014]: 2026-02-25 13:57:38.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:57:38 np0005629333 nova_compute[244014]: 2026-02-25 13:57:38.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 08:57:39 np0005629333 nova_compute[244014]: 2026-02-25 13:57:39.551 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:57:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:57:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4133: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:40 np0005629333 nova_compute[244014]: 2026-02-25 13:57:40.812 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:57:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4134: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:42 np0005629333 nova_compute[244014]: 2026-02-25 13:57:42.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:57:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:57:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:57:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:57:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:57:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:57:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:57:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:57:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:57:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:57:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:57:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:57:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:57:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:57:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:57:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:57:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:57:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:57:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:57:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:57:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:57:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:57:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:57:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 08:57:44 np0005629333 nova_compute[244014]: 2026-02-25 13:57:44.590 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:57:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4135: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:57:45 np0005629333 nova_compute[244014]: 2026-02-25 13:57:45.815 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:57:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4136: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:57:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2625892273' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:57:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:57:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2625892273' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:57:47 np0005629333 nova_compute[244014]: 2026-02-25 13:57:47.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:57:47 np0005629333 nova_compute[244014]: 2026-02-25 13:57:47.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:57:47 np0005629333 nova_compute[244014]: 2026-02-25 13:57:47.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:57:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4137: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:48 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #207. Immutable memtables: 0.
Feb 25 08:57:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:48.930459) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 08:57:48 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 129] Flushing memtable with next log file: 207
Feb 25 08:57:48 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027868930514, "job": 129, "event": "flush_started", "num_memtables": 1, "num_entries": 1470, "num_deletes": 251, "total_data_size": 2380725, "memory_usage": 2416512, "flush_reason": "Manual Compaction"}
Feb 25 08:57:48 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 129] Level-0 flush table #208: started
Feb 25 08:57:48 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027868941438, "cf_name": "default", "job": 129, "event": "table_file_creation", "file_number": 208, "file_size": 2347175, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84866, "largest_seqno": 86335, "table_properties": {"data_size": 2340244, "index_size": 4065, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14141, "raw_average_key_size": 19, "raw_value_size": 2326452, "raw_average_value_size": 3272, "num_data_blocks": 182, "num_entries": 711, "num_filter_entries": 711, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772027711, "oldest_key_time": 1772027711, "file_creation_time": 1772027868, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 208, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:57:48 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 129] Flush lasted 11049 microseconds, and 5284 cpu microseconds.
Feb 25 08:57:48 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:57:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:48.941503) [db/flush_job.cc:967] [default] [JOB 129] Level-0 flush table #208: 2347175 bytes OK
Feb 25 08:57:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:48.941535) [db/memtable_list.cc:519] [default] Level-0 commit table #208 started
Feb 25 08:57:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:48.943562) [db/memtable_list.cc:722] [default] Level-0 commit table #208: memtable #1 done
Feb 25 08:57:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:48.943584) EVENT_LOG_v1 {"time_micros": 1772027868943578, "job": 129, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 08:57:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:48.943612) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 08:57:48 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 129] Try to delete WAL files size 2374288, prev total WAL file size 2374288, number of live WAL files 2.
Feb 25 08:57:48 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000204.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:57:48 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:48.944503) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038353334' seq:72057594037927935, type:22 .. '7061786F730038373836' seq:0, type:0; will stop at (end)
Feb 25 08:57:48 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 130] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 08:57:48 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 129 Base level 0, inputs: [208(2292KB)], [206(9744KB)]
Feb 25 08:57:48 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027868944564, "job": 130, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [208], "files_L6": [206], "score": -1, "input_data_size": 12325753, "oldest_snapshot_seqno": -1}
Feb 25 08:57:49 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 130] Generated table #209: 9827 keys, 10551180 bytes, temperature: kUnknown
Feb 25 08:57:49 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027869021970, "cf_name": "default", "job": 130, "event": "table_file_creation", "file_number": 209, "file_size": 10551180, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10491854, "index_size": 33677, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24581, "raw_key_size": 259647, "raw_average_key_size": 26, "raw_value_size": 10322396, "raw_average_value_size": 1050, "num_data_blocks": 1289, "num_entries": 9827, "num_filter_entries": 9827, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772027868, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 209, "seqno_to_time_mapping": "N/A"}}
Feb 25 08:57:49 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 08:57:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:49.022331) [db/compaction/compaction_job.cc:1663] [default] [JOB 130] Compacted 1@0 + 1@6 files to L6 => 10551180 bytes
Feb 25 08:57:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:49.023873) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.0 rd, 136.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 9.5 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(9.7) write-amplify(4.5) OK, records in: 10341, records dropped: 514 output_compression: NoCompression
Feb 25 08:57:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:49.023905) EVENT_LOG_v1 {"time_micros": 1772027869023890, "job": 130, "event": "compaction_finished", "compaction_time_micros": 77512, "compaction_time_cpu_micros": 32143, "output_level": 6, "num_output_files": 1, "total_output_size": 10551180, "num_input_records": 10341, "num_output_records": 9827, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 08:57:49 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000208.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:57:49 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027869024423, "job": 130, "event": "table_file_deletion", "file_number": 208}
Feb 25 08:57:49 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000206.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 08:57:49 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772027869026711, "job": 130, "event": "table_file_deletion", "file_number": 206}
Feb 25 08:57:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:48.944364) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:57:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:49.026866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:57:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:49.026877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:57:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:49.026880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:57:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:49.026883) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:57:49 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-13:57:49.026886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 08:57:49 np0005629333 nova_compute[244014]: 2026-02-25 13:57:49.593 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:57:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:57:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4138: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:50 np0005629333 nova_compute[244014]: 2026-02-25 13:57:50.819 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:57:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4139: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:54 np0005629333 nova_compute[244014]: 2026-02-25 13:57:54.638 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:57:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4140: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:57:55.101 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:57:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:57:55.102 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:57:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:57:55.102 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:57:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:57:55 np0005629333 nova_compute[244014]: 2026-02-25 13:57:55.862 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:57:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4141: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:57 np0005629333 podman[424337]: 2026-02-25 13:57:57.720575059 +0000 UTC m=+0.063090542 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:57:57 np0005629333 podman[424338]: 2026-02-25 13:57:57.743170571 +0000 UTC m=+0.084720487 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.43.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 08:57:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4142: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:57:59 np0005629333 nova_compute[244014]: 2026-02-25 13:57:59.642 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:58:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:58:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4143: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:00 np0005629333 nova_compute[244014]: 2026-02-25 13:58:00.862 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:58:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:58:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:58:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:58:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:58:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:58:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:58:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4144: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:04 np0005629333 nova_compute[244014]: 2026-02-25 13:58:04.645 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:58:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4145: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:58:05 np0005629333 nova_compute[244014]: 2026-02-25 13:58:05.864 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:58:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4146: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4147: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:09 np0005629333 nova_compute[244014]: 2026-02-25 13:58:09.649 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:58:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:58:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4148: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:10 np0005629333 nova_compute[244014]: 2026-02-25 13:58:10.895 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:58:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4149: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:14 np0005629333 nova_compute[244014]: 2026-02-25 13:58:14.688 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:58:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4150: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:58:15 np0005629333 nova_compute[244014]: 2026-02-25 13:58:15.896 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:58:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4151: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:17 np0005629333 podman[424479]: 2026-02-25 13:58:17.56445238 +0000 UTC m=+0.088022849 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:58:17 np0005629333 podman[424479]: 2026-02-25 13:58:17.706370479 +0000 UTC m=+0.229940898 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:58:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:58:18 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:58:18 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:58:18 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:58:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4152: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:58:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:58:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:58:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:58:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:58:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:58:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:58:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:58:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:58:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:58:19 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:58:19 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:58:19 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:58:19 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:58:19 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:58:19 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:58:19 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:58:19 np0005629333 nova_compute[244014]: 2026-02-25 13:58:19.691 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:58:19 np0005629333 podman[424814]: 2026-02-25 13:58:19.870810657 +0000 UTC m=+0.050172215 container create 0b211b3e2f54af61a483c6e79e298e1c0c91059bf4675649eab720115ff75fab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_pare, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 08:58:19 np0005629333 systemd[1]: Started libpod-conmon-0b211b3e2f54af61a483c6e79e298e1c0c91059bf4675649eab720115ff75fab.scope.
Feb 25 08:58:19 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:58:19 np0005629333 podman[424814]: 2026-02-25 13:58:19.844575802 +0000 UTC m=+0.023937390 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:58:19 np0005629333 podman[424814]: 2026-02-25 13:58:19.957596051 +0000 UTC m=+0.136957649 container init 0b211b3e2f54af61a483c6e79e298e1c0c91059bf4675649eab720115ff75fab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_pare, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 08:58:19 np0005629333 podman[424814]: 2026-02-25 13:58:19.964482326 +0000 UTC m=+0.143843874 container start 0b211b3e2f54af61a483c6e79e298e1c0c91059bf4675649eab720115ff75fab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_pare, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 08:58:19 np0005629333 podman[424814]: 2026-02-25 13:58:19.96813702 +0000 UTC m=+0.147498568 container attach 0b211b3e2f54af61a483c6e79e298e1c0c91059bf4675649eab720115ff75fab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_pare, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:58:19 np0005629333 thirsty_pare[424829]: 167 167
Feb 25 08:58:19 np0005629333 systemd[1]: libpod-0b211b3e2f54af61a483c6e79e298e1c0c91059bf4675649eab720115ff75fab.scope: Deactivated successfully.
Feb 25 08:58:19 np0005629333 podman[424814]: 2026-02-25 13:58:19.97166165 +0000 UTC m=+0.151023228 container died 0b211b3e2f54af61a483c6e79e298e1c0c91059bf4675649eab720115ff75fab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_pare, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:58:19 np0005629333 systemd[1]: var-lib-containers-storage-overlay-936af154b56e88ba4947a48209644ae3961b486eab167fd8113f2f8fc7bc5309-merged.mount: Deactivated successfully.
Feb 25 08:58:20 np0005629333 podman[424814]: 2026-02-25 13:58:20.018077308 +0000 UTC m=+0.197438886 container remove 0b211b3e2f54af61a483c6e79e298e1c0c91059bf4675649eab720115ff75fab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_pare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 08:58:20 np0005629333 systemd[1]: libpod-conmon-0b211b3e2f54af61a483c6e79e298e1c0c91059bf4675649eab720115ff75fab.scope: Deactivated successfully.
Feb 25 08:58:20 np0005629333 podman[424853]: 2026-02-25 13:58:20.198975333 +0000 UTC m=+0.045010429 container create aed28cb41dc2d7b9644187ac60a211ba9f506f3583d2021f7a1cc00553c0ca74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_easley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 08:58:20 np0005629333 systemd[1]: Started libpod-conmon-aed28cb41dc2d7b9644187ac60a211ba9f506f3583d2021f7a1cc00553c0ca74.scope.
Feb 25 08:58:20 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:58:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90681e9f1068a17883e15d5ba51f1387b67669b715f3cd14f0b2c51c7297a49f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:58:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90681e9f1068a17883e15d5ba51f1387b67669b715f3cd14f0b2c51c7297a49f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:58:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90681e9f1068a17883e15d5ba51f1387b67669b715f3cd14f0b2c51c7297a49f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:58:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90681e9f1068a17883e15d5ba51f1387b67669b715f3cd14f0b2c51c7297a49f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:58:20 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90681e9f1068a17883e15d5ba51f1387b67669b715f3cd14f0b2c51c7297a49f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:58:20 np0005629333 podman[424853]: 2026-02-25 13:58:20.17771044 +0000 UTC m=+0.023745536 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:58:20 np0005629333 podman[424853]: 2026-02-25 13:58:20.29607554 +0000 UTC m=+0.142110686 container init aed28cb41dc2d7b9644187ac60a211ba9f506f3583d2021f7a1cc00553c0ca74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 08:58:20 np0005629333 podman[424853]: 2026-02-25 13:58:20.30910934 +0000 UTC m=+0.155144406 container start aed28cb41dc2d7b9644187ac60a211ba9f506f3583d2021f7a1cc00553c0ca74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_easley, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:58:20 np0005629333 podman[424853]: 2026-02-25 13:58:20.313279868 +0000 UTC m=+0.159314974 container attach aed28cb41dc2d7b9644187ac60a211ba9f506f3583d2021f7a1cc00553c0ca74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_easley, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:58:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:58:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4153: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:20 np0005629333 determined_easley[424870]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:58:20 np0005629333 determined_easley[424870]: --> All data devices are unavailable
Feb 25 08:58:20 np0005629333 systemd[1]: libpod-aed28cb41dc2d7b9644187ac60a211ba9f506f3583d2021f7a1cc00553c0ca74.scope: Deactivated successfully.
Feb 25 08:58:20 np0005629333 podman[424853]: 2026-02-25 13:58:20.787636905 +0000 UTC m=+0.633672001 container died aed28cb41dc2d7b9644187ac60a211ba9f506f3583d2021f7a1cc00553c0ca74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 08:58:20 np0005629333 systemd[1]: var-lib-containers-storage-overlay-90681e9f1068a17883e15d5ba51f1387b67669b715f3cd14f0b2c51c7297a49f-merged.mount: Deactivated successfully.
Feb 25 08:58:20 np0005629333 podman[424853]: 2026-02-25 13:58:20.839857238 +0000 UTC m=+0.685892334 container remove aed28cb41dc2d7b9644187ac60a211ba9f506f3583d2021f7a1cc00553c0ca74 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_easley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:58:20 np0005629333 systemd[1]: libpod-conmon-aed28cb41dc2d7b9644187ac60a211ba9f506f3583d2021f7a1cc00553c0ca74.scope: Deactivated successfully.
Feb 25 08:58:20 np0005629333 nova_compute[244014]: 2026-02-25 13:58:20.898 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:58:21 np0005629333 podman[424963]: 2026-02-25 13:58:21.367892767 +0000 UTC m=+0.062547866 container create fb253ac1eb5e5b7de4ce68bf0f682f0023e25f5fb8eb2daba3878a15e499ed8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 08:58:21 np0005629333 systemd[1]: Started libpod-conmon-fb253ac1eb5e5b7de4ce68bf0f682f0023e25f5fb8eb2daba3878a15e499ed8d.scope.
Feb 25 08:58:21 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:58:21 np0005629333 podman[424963]: 2026-02-25 13:58:21.343166325 +0000 UTC m=+0.037821464 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:58:21 np0005629333 podman[424963]: 2026-02-25 13:58:21.451862781 +0000 UTC m=+0.146517910 container init fb253ac1eb5e5b7de4ce68bf0f682f0023e25f5fb8eb2daba3878a15e499ed8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_lamarr, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 08:58:21 np0005629333 podman[424963]: 2026-02-25 13:58:21.461226047 +0000 UTC m=+0.155881106 container start fb253ac1eb5e5b7de4ce68bf0f682f0023e25f5fb8eb2daba3878a15e499ed8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 08:58:21 np0005629333 podman[424963]: 2026-02-25 13:58:21.465175069 +0000 UTC m=+0.159830208 container attach fb253ac1eb5e5b7de4ce68bf0f682f0023e25f5fb8eb2daba3878a15e499ed8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_lamarr, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 08:58:21 np0005629333 mystifying_lamarr[424979]: 167 167
Feb 25 08:58:21 np0005629333 systemd[1]: libpod-fb253ac1eb5e5b7de4ce68bf0f682f0023e25f5fb8eb2daba3878a15e499ed8d.scope: Deactivated successfully.
Feb 25 08:58:21 np0005629333 podman[424963]: 2026-02-25 13:58:21.466738464 +0000 UTC m=+0.161393553 container died fb253ac1eb5e5b7de4ce68bf0f682f0023e25f5fb8eb2daba3878a15e499ed8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_lamarr, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 08:58:21 np0005629333 systemd[1]: var-lib-containers-storage-overlay-0ad57d49f52d2b6acef6c4472ba6ba1bd42ec9759ba46cdc2f4a7c17f25bb58d-merged.mount: Deactivated successfully.
Feb 25 08:58:21 np0005629333 podman[424963]: 2026-02-25 13:58:21.512549634 +0000 UTC m=+0.207204693 container remove fb253ac1eb5e5b7de4ce68bf0f682f0023e25f5fb8eb2daba3878a15e499ed8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_lamarr, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 08:58:21 np0005629333 systemd[1]: libpod-conmon-fb253ac1eb5e5b7de4ce68bf0f682f0023e25f5fb8eb2daba3878a15e499ed8d.scope: Deactivated successfully.
Feb 25 08:58:21 np0005629333 podman[425001]: 2026-02-25 13:58:21.702792115 +0000 UTC m=+0.059381257 container create d3c78e6d1072b617f3c210c11e53f3330d10586ed4fd3f4b606c906874eeacce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_moore, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:58:21 np0005629333 systemd[1]: Started libpod-conmon-d3c78e6d1072b617f3c210c11e53f3330d10586ed4fd3f4b606c906874eeacce.scope.
Feb 25 08:58:21 np0005629333 podman[425001]: 2026-02-25 13:58:21.680450671 +0000 UTC m=+0.037039803 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:58:21 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:58:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/802f6cf34a7b80df724b7c3165016d910b523e22ab039166f297fef3058e1aa0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:58:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/802f6cf34a7b80df724b7c3165016d910b523e22ab039166f297fef3058e1aa0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:58:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/802f6cf34a7b80df724b7c3165016d910b523e22ab039166f297fef3058e1aa0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:58:21 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/802f6cf34a7b80df724b7c3165016d910b523e22ab039166f297fef3058e1aa0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:58:21 np0005629333 podman[425001]: 2026-02-25 13:58:21.792925014 +0000 UTC m=+0.149514156 container init d3c78e6d1072b617f3c210c11e53f3330d10586ed4fd3f4b606c906874eeacce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 08:58:21 np0005629333 podman[425001]: 2026-02-25 13:58:21.798674027 +0000 UTC m=+0.155263149 container start d3c78e6d1072b617f3c210c11e53f3330d10586ed4fd3f4b606c906874eeacce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_moore, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 08:58:21 np0005629333 podman[425001]: 2026-02-25 13:58:21.802518596 +0000 UTC m=+0.159107718 container attach d3c78e6d1072b617f3c210c11e53f3330d10586ed4fd3f4b606c906874eeacce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_moore, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:58:22 np0005629333 sweet_moore[425017]: {
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:    "0": [
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:        {
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "devices": [
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "/dev/loop3"
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            ],
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "lv_name": "ceph_lv0",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "lv_size": "21470642176",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "name": "ceph_lv0",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "tags": {
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.cluster_name": "ceph",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.crush_device_class": "",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.encrypted": "0",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.objectstore": "bluestore",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.osd_id": "0",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.type": "block",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.vdo": "0",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.with_tpm": "0"
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            },
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "type": "block",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "vg_name": "ceph_vg0"
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:        }
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:    ],
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:    "1": [
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:        {
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "devices": [
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "/dev/loop4"
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            ],
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "lv_name": "ceph_lv1",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "lv_size": "21470642176",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "name": "ceph_lv1",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "tags": {
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.cluster_name": "ceph",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.crush_device_class": "",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.encrypted": "0",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.objectstore": "bluestore",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.osd_id": "1",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.type": "block",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.vdo": "0",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.with_tpm": "0"
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            },
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "type": "block",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "vg_name": "ceph_vg1"
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:        }
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:    ],
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:    "2": [
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:        {
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "devices": [
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "/dev/loop5"
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            ],
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "lv_name": "ceph_lv2",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "lv_size": "21470642176",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "name": "ceph_lv2",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "tags": {
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.cluster_name": "ceph",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.crush_device_class": "",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.encrypted": "0",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.objectstore": "bluestore",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.osd_id": "2",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.type": "block",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.vdo": "0",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:                "ceph.with_tpm": "0"
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            },
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "type": "block",
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:            "vg_name": "ceph_vg2"
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:        }
Feb 25 08:58:22 np0005629333 sweet_moore[425017]:    ]
Feb 25 08:58:22 np0005629333 sweet_moore[425017]: }
Feb 25 08:58:22 np0005629333 systemd[1]: libpod-d3c78e6d1072b617f3c210c11e53f3330d10586ed4fd3f4b606c906874eeacce.scope: Deactivated successfully.
Feb 25 08:58:22 np0005629333 podman[425001]: 2026-02-25 13:58:22.099875138 +0000 UTC m=+0.456464280 container died d3c78e6d1072b617f3c210c11e53f3330d10586ed4fd3f4b606c906874eeacce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 08:58:22 np0005629333 systemd[1]: var-lib-containers-storage-overlay-802f6cf34a7b80df724b7c3165016d910b523e22ab039166f297fef3058e1aa0-merged.mount: Deactivated successfully.
Feb 25 08:58:22 np0005629333 podman[425001]: 2026-02-25 13:58:22.158260446 +0000 UTC m=+0.514849538 container remove d3c78e6d1072b617f3c210c11e53f3330d10586ed4fd3f4b606c906874eeacce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_moore, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:58:22 np0005629333 systemd[1]: libpod-conmon-d3c78e6d1072b617f3c210c11e53f3330d10586ed4fd3f4b606c906874eeacce.scope: Deactivated successfully.
Feb 25 08:58:22 np0005629333 podman[425100]: 2026-02-25 13:58:22.637427729 +0000 UTC m=+0.050035851 container create 228f92849189026bc14a9f6db902e09799893187cade76c9125db2c8cb9533a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:58:22 np0005629333 systemd[1]: Started libpod-conmon-228f92849189026bc14a9f6db902e09799893187cade76c9125db2c8cb9533a1.scope.
Feb 25 08:58:22 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:58:22 np0005629333 podman[425100]: 2026-02-25 13:58:22.706345156 +0000 UTC m=+0.118953278 container init 228f92849189026bc14a9f6db902e09799893187cade76c9125db2c8cb9533a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mahavira, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 08:58:22 np0005629333 podman[425100]: 2026-02-25 13:58:22.712645385 +0000 UTC m=+0.125253487 container start 228f92849189026bc14a9f6db902e09799893187cade76c9125db2c8cb9533a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:58:22 np0005629333 podman[425100]: 2026-02-25 13:58:22.617661368 +0000 UTC m=+0.030269490 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:58:22 np0005629333 podman[425100]: 2026-02-25 13:58:22.716266197 +0000 UTC m=+0.128874329 container attach 228f92849189026bc14a9f6db902e09799893187cade76c9125db2c8cb9533a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:58:22 np0005629333 vigorous_mahavira[425116]: 167 167
Feb 25 08:58:22 np0005629333 systemd[1]: libpod-228f92849189026bc14a9f6db902e09799893187cade76c9125db2c8cb9533a1.scope: Deactivated successfully.
Feb 25 08:58:22 np0005629333 conmon[425116]: conmon 228f92849189026bc14a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-228f92849189026bc14a9f6db902e09799893187cade76c9125db2c8cb9533a1.scope/container/memory.events
Feb 25 08:58:22 np0005629333 podman[425100]: 2026-02-25 13:58:22.718142081 +0000 UTC m=+0.130750213 container died 228f92849189026bc14a9f6db902e09799893187cade76c9125db2c8cb9533a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:58:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4154: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:22 np0005629333 systemd[1]: var-lib-containers-storage-overlay-635062f2352dcaa241a69601e8dbfc66ba97491a7ce94001e3487841be36af70-merged.mount: Deactivated successfully.
Feb 25 08:58:22 np0005629333 podman[425100]: 2026-02-25 13:58:22.759938377 +0000 UTC m=+0.172546489 container remove 228f92849189026bc14a9f6db902e09799893187cade76c9125db2c8cb9533a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mahavira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:58:22 np0005629333 systemd[1]: libpod-conmon-228f92849189026bc14a9f6db902e09799893187cade76c9125db2c8cb9533a1.scope: Deactivated successfully.
Feb 25 08:58:22 np0005629333 podman[425139]: 2026-02-25 13:58:22.897129762 +0000 UTC m=+0.038348610 container create bf68e43694f55b7f6864d8d117d83cdff0ff9212ce92727429ca124a8aa2c891 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_swartz, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:58:22 np0005629333 systemd[1]: Started libpod-conmon-bf68e43694f55b7f6864d8d117d83cdff0ff9212ce92727429ca124a8aa2c891.scope.
Feb 25 08:58:22 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:58:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8eac904433ec75554549039a00c555873898e2b9f6bbe0cfe52b1239918cf23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:58:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8eac904433ec75554549039a00c555873898e2b9f6bbe0cfe52b1239918cf23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:58:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8eac904433ec75554549039a00c555873898e2b9f6bbe0cfe52b1239918cf23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:58:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8eac904433ec75554549039a00c555873898e2b9f6bbe0cfe52b1239918cf23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:58:22 np0005629333 podman[425139]: 2026-02-25 13:58:22.880180501 +0000 UTC m=+0.021399399 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:58:22 np0005629333 podman[425139]: 2026-02-25 13:58:22.982681851 +0000 UTC m=+0.123900709 container init bf68e43694f55b7f6864d8d117d83cdff0ff9212ce92727429ca124a8aa2c891 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_swartz, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 08:58:22 np0005629333 podman[425139]: 2026-02-25 13:58:22.98934245 +0000 UTC m=+0.130561318 container start bf68e43694f55b7f6864d8d117d83cdff0ff9212ce92727429ca124a8aa2c891 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_swartz, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 08:58:22 np0005629333 podman[425139]: 2026-02-25 13:58:22.992686675 +0000 UTC m=+0.133905543 container attach bf68e43694f55b7f6864d8d117d83cdff0ff9212ce92727429ca124a8aa2c891 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_swartz, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:58:23 np0005629333 lvm[425233]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:58:23 np0005629333 lvm[425233]: VG ceph_vg0 finished
Feb 25 08:58:23 np0005629333 lvm[425236]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:58:23 np0005629333 lvm[425236]: VG ceph_vg2 finished
Feb 25 08:58:23 np0005629333 lvm[425234]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:58:23 np0005629333 lvm[425234]: VG ceph_vg1 finished
Feb 25 08:58:23 np0005629333 optimistic_swartz[425155]: {}
Feb 25 08:58:23 np0005629333 systemd[1]: libpod-bf68e43694f55b7f6864d8d117d83cdff0ff9212ce92727429ca124a8aa2c891.scope: Deactivated successfully.
Feb 25 08:58:23 np0005629333 systemd[1]: libpod-bf68e43694f55b7f6864d8d117d83cdff0ff9212ce92727429ca124a8aa2c891.scope: Consumed 1.165s CPU time.
Feb 25 08:58:23 np0005629333 podman[425139]: 2026-02-25 13:58:23.705620275 +0000 UTC m=+0.846839133 container died bf68e43694f55b7f6864d8d117d83cdff0ff9212ce92727429ca124a8aa2c891 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_swartz, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 08:58:23 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f8eac904433ec75554549039a00c555873898e2b9f6bbe0cfe52b1239918cf23-merged.mount: Deactivated successfully.
Feb 25 08:58:23 np0005629333 podman[425139]: 2026-02-25 13:58:23.751334773 +0000 UTC m=+0.892553621 container remove bf68e43694f55b7f6864d8d117d83cdff0ff9212ce92727429ca124a8aa2c891 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_swartz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 08:58:23 np0005629333 systemd[1]: libpod-conmon-bf68e43694f55b7f6864d8d117d83cdff0ff9212ce92727429ca124a8aa2c891.scope: Deactivated successfully.
Feb 25 08:58:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:58:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:58:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:58:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:58:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4155: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:24 np0005629333 nova_compute[244014]: 2026-02-25 13:58:24.740 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:58:24 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:58:24 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:58:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:58:25 np0005629333 nova_compute[244014]: 2026-02-25 13:58:25.948 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:58:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4156: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:27 np0005629333 nova_compute[244014]: 2026-02-25 13:58:27.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:58:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4157: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:28 np0005629333 podman[425275]: 2026-02-25 13:58:28.75311647 +0000 UTC m=+0.085091126 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223)
Feb 25 08:58:28 np0005629333 podman[425276]: 2026-02-25 13:58:28.795269037 +0000 UTC m=+0.127243613 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 08:58:28 np0005629333 nova_compute[244014]: 2026-02-25 13:58:28.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:58:28 np0005629333 nova_compute[244014]: 2026-02-25 13:58:28.907 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:58:28 np0005629333 nova_compute[244014]: 2026-02-25 13:58:28.908 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:58:28 np0005629333 nova_compute[244014]: 2026-02-25 13:58:28.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:58:28 np0005629333 nova_compute[244014]: 2026-02-25 13:58:28.909 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 08:58:28 np0005629333 nova_compute[244014]: 2026-02-25 13:58:28.909 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:58:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:58:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3071338508' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:58:29 np0005629333 nova_compute[244014]: 2026-02-25 13:58:29.497 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:58:29 np0005629333 nova_compute[244014]: 2026-02-25 13:58:29.660 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 08:58:29 np0005629333 nova_compute[244014]: 2026-02-25 13:58:29.662 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3449MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 08:58:29 np0005629333 nova_compute[244014]: 2026-02-25 13:58:29.662 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:58:29 np0005629333 nova_compute[244014]: 2026-02-25 13:58:29.663 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:58:29 np0005629333 nova_compute[244014]: 2026-02-25 13:58:29.726 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 08:58:29 np0005629333 nova_compute[244014]: 2026-02-25 13:58:29.726 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 08:58:29 np0005629333 nova_compute[244014]: 2026-02-25 13:58:29.743 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:58:29 np0005629333 nova_compute[244014]: 2026-02-25 13:58:29.788 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:58:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:58:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/756672815' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:58:30 np0005629333 nova_compute[244014]: 2026-02-25 13:58:30.286 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:58:30 np0005629333 nova_compute[244014]: 2026-02-25 13:58:30.292 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:58:30 np0005629333 nova_compute[244014]: 2026-02-25 13:58:30.311 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 08:58:30 np0005629333 nova_compute[244014]: 2026-02-25 13:58:30.313 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 08:58:30 np0005629333 nova_compute[244014]: 2026-02-25 13:58:30.314 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:58:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:58:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4158: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:30 np0005629333 nova_compute[244014]: 2026-02-25 13:58:30.950 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:58:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:58:31
Feb 25 08:58:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:58:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:58:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'images', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'default.rgw.control', 'volumes', '.mgr']
Feb 25 08:58:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:58:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:58:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:58:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:58:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:58:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:58:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:58:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:58:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:58:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:58:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:58:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:58:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:58:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:58:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:58:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:58:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:58:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4159: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4160: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:34 np0005629333 nova_compute[244014]: 2026-02-25 13:58:34.794 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:58:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:58:35 np0005629333 nova_compute[244014]: 2026-02-25 13:58:35.982 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:58:36 np0005629333 nova_compute[244014]: 2026-02-25 13:58:36.316 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:58:36 np0005629333 nova_compute[244014]: 2026-02-25 13:58:36.316 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 08:58:36 np0005629333 nova_compute[244014]: 2026-02-25 13:58:36.316 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 08:58:36 np0005629333 nova_compute[244014]: 2026-02-25 13:58:36.339 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 08:58:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4161: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:37 np0005629333 nova_compute[244014]: 2026-02-25 13:58:37.894 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:58:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4162: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:39 np0005629333 nova_compute[244014]: 2026-02-25 13:58:39.796 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:58:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:58:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4163: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:40 np0005629333 nova_compute[244014]: 2026-02-25 13:58:40.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:58:40 np0005629333 nova_compute[244014]: 2026-02-25 13:58:40.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 08:58:40 np0005629333 nova_compute[244014]: 2026-02-25 13:58:40.986 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:58:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4164: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:42 np0005629333 nova_compute[244014]: 2026-02-25 13:58:42.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:58:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:58:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:58:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:58:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:58:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:58:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:58:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:58:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:58:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:58:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:58:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:58:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:58:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:58:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:58:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:58:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:58:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:58:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:58:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:58:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:58:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:58:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:58:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
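[Annotation] The autoscaler arithmetic in the block above can be reconstructed from the logged values: pg target = capacity ratio x bias x PG budget, then quantized to a power of two (and left at the current pg_num when the computed target does not cross the autoscaler's change threshold). The budget of 300 is an assumption that fits every line here, consistent with 3 OSDs at the default mon_target_pg_per_osd of 100:

# Reconstruction of the pg_autoscaler numbers above; the 300-PG budget
# (osd_count * target_pg_per_osd) is an assumption that matches the data.
def pg_target(capacity_ratio, bias, osd_count=3, target_pg_per_osd=100):
    return capacity_ratio * bias * osd_count * target_pg_per_osd

# 'images', bias 1.0 -> logged pg target 0.20143912159434796
print(pg_target(0.0006714637386478266, 1.0))
# 'cephfs.cephfs.meta', bias 4.0 -> logged pg target 0.0016699640237160273
print(pg_target(1.3916366864300228e-06, 4.0))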
Feb 25 08:58:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4165: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:44 np0005629333 nova_compute[244014]: 2026-02-25 13:58:44.800 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:58:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:58:45 np0005629333 nova_compute[244014]: 2026-02-25 13:58:45.986 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:58:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4166: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:58:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2981582514' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:58:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:58:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2981582514' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
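[Annotation] The two audit entries above are client.openstack (the OpenStack Ceph client at 192.168.122.10) polling cluster capacity and the quota on the 'volumes' pool. A hedged sketch of issuing the same mon commands through the python-rados binding; the conffile path and client name follow convention and the log, not this host's verified config:

import json

import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
cluster.connect()

# Same JSON commands seen in the audit log above.
ret, outbuf, outs = cluster.mon_command(
    json.dumps({"prefix": "df", "format": "json"}), b'')
print(json.loads(outbuf)["stats"]["total_bytes"])

ret, outbuf, outs = cluster.mon_command(
    json.dumps({"prefix": "osd pool get-quota", "pool": "volumes",
                "format": "json"}), b'')
print(json.loads(outbuf))

cluster.shutdown()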
Feb 25 08:58:47 np0005629333 nova_compute[244014]: 2026-02-25 13:58:47.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:58:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4167: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:48 np0005629333 nova_compute[244014]: 2026-02-25 13:58:48.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:58:49 np0005629333 nova_compute[244014]: 2026-02-25 13:58:49.804 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:58:49 np0005629333 nova_compute[244014]: 2026-02-25 13:58:49.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:58:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:58:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4168: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:50 np0005629333 nova_compute[244014]: 2026-02-25 13:58:50.990 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:58:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4169: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4170: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:54 np0005629333 nova_compute[244014]: 2026-02-25 13:58:54.807 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:58:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:58:55.102 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 08:58:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:58:55.103 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 08:58:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:58:55.103 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
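[Annotation] The acquire/acquired/released triplet above is oslo.concurrency's standard lock tracing: the agent wraps _check_child_processes in lockutils.synchronized, and the inner wrapper emits exactly these DEBUG lines. A minimal sketch of the same pattern (the function body is illustrative):

from oslo_concurrency import lockutils

@lockutils.synchronized('_check_child_processes')
def _check_child_processes():
    # Critical section. With DEBUG logging enabled, oslo.concurrency logs
    # the same "Acquiring lock" / "acquired" / "released" triplet as above.
    pass

_check_child_processes()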
Feb 25 08:58:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:58:56 np0005629333 nova_compute[244014]: 2026-02-25 13:58:56.040 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:58:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4171: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4172: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:58:59 np0005629333 podman[425365]: 2026-02-25 13:58:59.710664489 +0000 UTC m=+0.052927624 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 08:58:59 np0005629333 podman[425366]: 2026-02-25 13:58:59.756515341 +0000 UTC m=+0.091403346 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.43.0)
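[Annotation] Both health_status events above report health_status=healthy with a zero failing streak; podman emits one such event per healthcheck run of ovn_metadata_agent and ovn_controller. A hedged sketch of watching those events from the host (JSON field names may vary slightly across podman versions; the loop streams until interrupted):

import json
import subprocess

proc = subprocess.Popen(
    ["podman", "events", "--filter", "event=health_status",
     "--format", "json"],
    stdout=subprocess.PIPE, text=True,
)
for line in proc.stdout:  # one JSON object per event; Ctrl-C to stop
    ev = json.loads(line)
    print(ev.get("Name"), ev.get("HealthStatus"))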
Feb 25 08:58:59 np0005629333 nova_compute[244014]: 2026-02-25 13:58:59.809 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:59:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:59:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4173: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:59:01 np0005629333 nova_compute[244014]: 2026-02-25 13:59:01.041 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:59:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:59:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:59:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:59:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:59:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:59:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:59:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4174: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:59:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4175: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:59:04 np0005629333 nova_compute[244014]: 2026-02-25 13:59:04.871 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:59:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:59:06 np0005629333 nova_compute[244014]: 2026-02-25 13:59:06.071 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:59:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4176: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:59:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4177: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:59:09 np0005629333 nova_compute[244014]: 2026-02-25 13:59:09.876 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:59:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:59:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4178: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:59:11 np0005629333 nova_compute[244014]: 2026-02-25 13:59:11.074 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:59:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 08:59:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7800.0 total, 600.0 interval
Cumulative writes: 19K writes, 86K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.02 MB/s
Cumulative WAL: 19K writes, 19K syncs, 1.00 writes per sync, written: 0.12 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1314 writes, 5964 keys, 1314 commit groups, 1.0 writes per commit group, ingest: 8.79 MB, 0.01 MB/s
Interval WAL: 1314 writes, 1314 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     52.6      2.03              0.30        65    0.031       0      0       0.0       0.0
  L6      1/0   10.06 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.5    102.6     88.0      6.63              1.73        64    0.104    476K    33K       0.0       0.0
 Sum      1/0   10.06 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.5     78.5     79.7      8.66              2.03       129    0.067    476K    33K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.8    142.1    144.8      0.43              0.17        10    0.043     51K   2557       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0    102.6     88.0      6.63              1.73        64    0.104    476K    33K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     52.7      2.03              0.30        64    0.032       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 7800.0 total, 600.0 interval
Flush(GB): cumulative 0.104, interval 0.009
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.67 GB write, 0.09 MB/s write, 0.66 GB read, 0.09 MB/s read, 8.7 seconds
Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.4 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561a1af858d0#2 capacity: 304.00 MB usage: 77.51 MB table_size: 0 occupancy: 18446744073709551615 collections: 14 last_copies: 0 last_secs: 0.000986 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(4800,74.21 MB,24.4115%) FilterBlock(130,1.28 MB,0.419732%) IndexBlock(130,2.02 MB,0.66567%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
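[Annotation] A quick consistency check on the interval figures in the dump above: 1314 WAL writes and 8.79 MB of ingest over the 600-second interval reproduce the rates RocksDB prints (it rounds to two decimals):

# Interval figures copied from the DB Stats block above.
interval_secs = 600.0
interval_writes = 1314
interval_ingest_mb = 8.79

print(round(interval_writes / interval_secs, 1))      # ~2.2 writes/s
print(round(interval_ingest_mb / interval_secs, 2))   # 0.01 MB/s, as logged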
Feb 25 08:59:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4179: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:59:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4180: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:59:14 np0005629333 nova_compute[244014]: 2026-02-25 13:59:14.879 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:59:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:59:16 np0005629333 nova_compute[244014]: 2026-02-25 13:59:16.109 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:59:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4181: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:59:16 np0005629333 nova_compute[244014]: 2026-02-25 13:59:16.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:59:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4182: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:59:19 np0005629333 nova_compute[244014]: 2026-02-25 13:59:19.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 08:59:19 np0005629333 nova_compute[244014]: 2026-02-25 13:59:19.883 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:59:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:59:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4183: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:59:21 np0005629333 nova_compute[244014]: 2026-02-25 13:59:21.111 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:59:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4184: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:59:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:59:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:59:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 08:59:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:59:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 08:59:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:59:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 08:59:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 08:59:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 08:59:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:59:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 08:59:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 08:59:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4185: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:59:24 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 08:59:24 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:59:24 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 08:59:24 np0005629333 nova_compute[244014]: 2026-02-25 13:59:24.886 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:59:25 np0005629333 podman[425552]: 2026-02-25 13:59:25.020769339 +0000 UTC m=+0.048163798 container create 1e41f25a64dab8deb4377cb458ae7f39c6e843cdbe8916edf1da475cd4786282 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:59:25 np0005629333 systemd[1]: Started libpod-conmon-1e41f25a64dab8deb4377cb458ae7f39c6e843cdbe8916edf1da475cd4786282.scope.
Feb 25 08:59:25 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:59:25 np0005629333 podman[425552]: 2026-02-25 13:59:25.000080162 +0000 UTC m=+0.027474591 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:59:25 np0005629333 podman[425552]: 2026-02-25 13:59:25.108968573 +0000 UTC m=+0.136363022 container init 1e41f25a64dab8deb4377cb458ae7f39c6e843cdbe8916edf1da475cd4786282 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:59:25 np0005629333 podman[425552]: 2026-02-25 13:59:25.116852427 +0000 UTC m=+0.144246846 container start 1e41f25a64dab8deb4377cb458ae7f39c6e843cdbe8916edf1da475cd4786282 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bartik, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:59:25 np0005629333 podman[425552]: 2026-02-25 13:59:25.120038018 +0000 UTC m=+0.147432467 container attach 1e41f25a64dab8deb4377cb458ae7f39c6e843cdbe8916edf1da475cd4786282 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:59:25 np0005629333 crazy_bartik[425569]: 167 167
Feb 25 08:59:25 np0005629333 systemd[1]: libpod-1e41f25a64dab8deb4377cb458ae7f39c6e843cdbe8916edf1da475cd4786282.scope: Deactivated successfully.
Feb 25 08:59:25 np0005629333 podman[425552]: 2026-02-25 13:59:25.124408062 +0000 UTC m=+0.151802491 container died 1e41f25a64dab8deb4377cb458ae7f39c6e843cdbe8916edf1da475cd4786282 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bartik, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:59:25 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5b118b3a1e382eff305e0fa1fd9ae1986ed19ffb12d8e37776f1a490fdad2d86-merged.mount: Deactivated successfully.
Feb 25 08:59:25 np0005629333 podman[425552]: 2026-02-25 13:59:25.169324457 +0000 UTC m=+0.196718906 container remove 1e41f25a64dab8deb4377cb458ae7f39c6e843cdbe8916edf1da475cd4786282 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_bartik, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 08:59:25 np0005629333 systemd[1]: libpod-conmon-1e41f25a64dab8deb4377cb458ae7f39c6e843cdbe8916edf1da475cd4786282.scope: Deactivated successfully.
Feb 25 08:59:25 np0005629333 podman[425592]: 2026-02-25 13:59:25.33956988 +0000 UTC m=+0.052580674 container create 8adb6305e801f335d012a26ab32778723bd43d9bbf22cf97effdd1ac74c677ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_maxwell, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:59:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:59:25 np0005629333 systemd[1]: Started libpod-conmon-8adb6305e801f335d012a26ab32778723bd43d9bbf22cf97effdd1ac74c677ab.scope.
Feb 25 08:59:25 np0005629333 podman[425592]: 2026-02-25 13:59:25.314620082 +0000 UTC m=+0.027630926 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:59:25 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:59:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05ff511de61c44ae484ee711211387dd39eeb92b39b14944259aa56af4aadc32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:59:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05ff511de61c44ae484ee711211387dd39eeb92b39b14944259aa56af4aadc32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:59:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05ff511de61c44ae484ee711211387dd39eeb92b39b14944259aa56af4aadc32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:59:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05ff511de61c44ae484ee711211387dd39eeb92b39b14944259aa56af4aadc32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:59:25 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05ff511de61c44ae484ee711211387dd39eeb92b39b14944259aa56af4aadc32/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 08:59:25 np0005629333 podman[425592]: 2026-02-25 13:59:25.467121101 +0000 UTC m=+0.180131905 container init 8adb6305e801f335d012a26ab32778723bd43d9bbf22cf97effdd1ac74c677ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:59:25 np0005629333 podman[425592]: 2026-02-25 13:59:25.480443039 +0000 UTC m=+0.193453803 container start 8adb6305e801f335d012a26ab32778723bd43d9bbf22cf97effdd1ac74c677ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 08:59:25 np0005629333 podman[425592]: 2026-02-25 13:59:25.484845584 +0000 UTC m=+0.197856448 container attach 8adb6305e801f335d012a26ab32778723bd43d9bbf22cf97effdd1ac74c677ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_maxwell, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 08:59:25 np0005629333 sleepy_maxwell[425610]: --> passed data devices: 0 physical, 3 LVM
Feb 25 08:59:25 np0005629333 sleepy_maxwell[425610]: --> All data devices are unavailable
Feb 25 08:59:25 np0005629333 systemd[1]: libpod-8adb6305e801f335d012a26ab32778723bd43d9bbf22cf97effdd1ac74c677ab.scope: Deactivated successfully.
Feb 25 08:59:26 np0005629333 podman[425630]: 2026-02-25 13:59:26.02154057 +0000 UTC m=+0.029468798 container died 8adb6305e801f335d012a26ab32778723bd43d9bbf22cf97effdd1ac74c677ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_maxwell, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 08:59:26 np0005629333 systemd[1]: var-lib-containers-storage-overlay-05ff511de61c44ae484ee711211387dd39eeb92b39b14944259aa56af4aadc32-merged.mount: Deactivated successfully.
Feb 25 08:59:26 np0005629333 podman[425630]: 2026-02-25 13:59:26.067182386 +0000 UTC m=+0.075110524 container remove 8adb6305e801f335d012a26ab32778723bd43d9bbf22cf97effdd1ac74c677ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Feb 25 08:59:26 np0005629333 systemd[1]: libpod-conmon-8adb6305e801f335d012a26ab32778723bd43d9bbf22cf97effdd1ac74c677ab.scope: Deactivated successfully.
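[Annotation] The sleepy_maxwell output above ("passed data devices: 0 physical, 3 LVM" / "All data devices are unavailable") looks like ceph-volume's batch planner rejecting the three pre-built LVs, the expected answer once they already carry OSDs. A hedged, read-only reproduction; --report only prints the plan, and only the two LV paths visible later in this capture are listed (the third is truncated here):

import subprocess

# Hedged sketch: dry-run of ceph-volume's batch planner against the LVs.
subprocess.run(
    ["ceph-volume", "lvm", "batch", "--report",
     "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1"],
    check=False,  # may exit non-zero when every device is filtered out
)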
Feb 25 08:59:26 np0005629333 nova_compute[244014]: 2026-02-25 13:59:26.166 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 08:59:26 np0005629333 podman[425706]: 2026-02-25 13:59:26.613079654 +0000 UTC m=+0.060801907 container create f42dc066687d6fe2ee3860c13d9b85c8b989c279e9ed82a8a3f630ff5579ec75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True)
Feb 25 08:59:26 np0005629333 systemd[1]: Started libpod-conmon-f42dc066687d6fe2ee3860c13d9b85c8b989c279e9ed82a8a3f630ff5579ec75.scope.
Feb 25 08:59:26 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:59:26 np0005629333 podman[425706]: 2026-02-25 13:59:26.588121185 +0000 UTC m=+0.035843498 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:59:26 np0005629333 podman[425706]: 2026-02-25 13:59:26.689420291 +0000 UTC m=+0.137142534 container init f42dc066687d6fe2ee3860c13d9b85c8b989c279e9ed82a8a3f630ff5579ec75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:59:26 np0005629333 podman[425706]: 2026-02-25 13:59:26.699770515 +0000 UTC m=+0.147492738 container start f42dc066687d6fe2ee3860c13d9b85c8b989c279e9ed82a8a3f630ff5579ec75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_edison, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:59:26 np0005629333 podman[425706]: 2026-02-25 13:59:26.703548002 +0000 UTC m=+0.151270245 container attach f42dc066687d6fe2ee3860c13d9b85c8b989c279e9ed82a8a3f630ff5579ec75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_edison, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 08:59:26 np0005629333 sharp_edison[425723]: 167 167
Feb 25 08:59:26 np0005629333 systemd[1]: libpod-f42dc066687d6fe2ee3860c13d9b85c8b989c279e9ed82a8a3f630ff5579ec75.scope: Deactivated successfully.
Feb 25 08:59:26 np0005629333 podman[425706]: 2026-02-25 13:59:26.704742456 +0000 UTC m=+0.152464689 container died f42dc066687d6fe2ee3860c13d9b85c8b989c279e9ed82a8a3f630ff5579ec75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_edison, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 08:59:26 np0005629333 systemd[1]: var-lib-containers-storage-overlay-0ee4223907eb74623d16c8fe3db032b5c92f30b8275b80c8de77e57e9e69976f-merged.mount: Deactivated successfully.
Feb 25 08:59:26 np0005629333 podman[425706]: 2026-02-25 13:59:26.746900463 +0000 UTC m=+0.194622726 container remove f42dc066687d6fe2ee3860c13d9b85c8b989c279e9ed82a8a3f630ff5579ec75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_edison, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Feb 25 08:59:26 np0005629333 systemd[1]: libpod-conmon-f42dc066687d6fe2ee3860c13d9b85c8b989c279e9ed82a8a3f630ff5579ec75.scope: Deactivated successfully.
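[Annotation] The funny_bouman container below prints JSON keyed by OSD id with lv_path/lv_tags entries, matching the output shape of ceph-volume lvm list --format json. A hedged sketch of collecting and summarizing the same inventory from the host (assumes cephadm is installed; the key names mirror the JSON below):

import json
import subprocess

out = subprocess.run(
    ["cephadm", "shell", "--",
     "ceph-volume", "lvm", "list", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout
for osd_id, entries in json.loads(out).items():
    for e in entries:
        print(osd_id, e["lv_path"], e["tags"]["ceph.osd_fsid"])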
Feb 25 08:59:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4186: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:59:26 np0005629333 podman[425745]: 2026-02-25 13:59:26.93138537 +0000 UTC m=+0.052836011 container create b11488ea40f06b46b5b115dd3904ab909acb1520f52a02275eda450eb9848f4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 08:59:26 np0005629333 systemd[1]: Started libpod-conmon-b11488ea40f06b46b5b115dd3904ab909acb1520f52a02275eda450eb9848f4f.scope.
Feb 25 08:59:27 np0005629333 podman[425745]: 2026-02-25 13:59:26.912169215 +0000 UTC m=+0.033619846 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:59:27 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:59:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8be15eed94ec426d3a29753f1b7a82345e4923aabcc799fcdec470437ef5ac0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:59:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8be15eed94ec426d3a29753f1b7a82345e4923aabcc799fcdec470437ef5ac0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:59:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8be15eed94ec426d3a29753f1b7a82345e4923aabcc799fcdec470437ef5ac0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:59:27 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8be15eed94ec426d3a29753f1b7a82345e4923aabcc799fcdec470437ef5ac0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:59:27 np0005629333 podman[425745]: 2026-02-25 13:59:27.048407373 +0000 UTC m=+0.169858014 container init b11488ea40f06b46b5b115dd3904ab909acb1520f52a02275eda450eb9848f4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_bouman, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 08:59:27 np0005629333 podman[425745]: 2026-02-25 13:59:27.062084201 +0000 UTC m=+0.183534832 container start b11488ea40f06b46b5b115dd3904ab909acb1520f52a02275eda450eb9848f4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_bouman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 08:59:27 np0005629333 podman[425745]: 2026-02-25 13:59:27.066516507 +0000 UTC m=+0.187967198 container attach b11488ea40f06b46b5b115dd3904ab909acb1520f52a02275eda450eb9848f4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 08:59:27 np0005629333 funny_bouman[425762]: {
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:    "0": [
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:        {
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "devices": [
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "/dev/loop3"
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            ],
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "lv_name": "ceph_lv0",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "lv_size": "21470642176",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "name": "ceph_lv0",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "tags": {
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.cluster_name": "ceph",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.crush_device_class": "",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.encrypted": "0",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.objectstore": "bluestore",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.osd_id": "0",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.type": "block",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.vdo": "0",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.with_tpm": "0"
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            },
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "type": "block",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "vg_name": "ceph_vg0"
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:        }
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:    ],
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:    "1": [
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:        {
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "devices": [
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "/dev/loop4"
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            ],
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "lv_name": "ceph_lv1",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "lv_size": "21470642176",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "name": "ceph_lv1",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "tags": {
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.cluster_name": "ceph",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.crush_device_class": "",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.encrypted": "0",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.objectstore": "bluestore",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.osd_id": "1",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.type": "block",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.vdo": "0",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.with_tpm": "0"
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            },
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "type": "block",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "vg_name": "ceph_vg1"
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:        }
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:    ],
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:    "2": [
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:        {
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "devices": [
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "/dev/loop5"
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            ],
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "lv_name": "ceph_lv2",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "lv_size": "21470642176",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "name": "ceph_lv2",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "tags": {
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.cephx_lockbox_secret": "",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.cluster_name": "ceph",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.crush_device_class": "",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.encrypted": "0",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.objectstore": "bluestore",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.osd_id": "2",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.type": "block",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.vdo": "0",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:                "ceph.with_tpm": "0"
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            },
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "type": "block",
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:            "vg_name": "ceph_vg2"
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:        }
Feb 25 08:59:27 np0005629333 funny_bouman[425762]:    ]
Feb 25 08:59:27 np0005629333 funny_bouman[425762]: }
Feb 25 08:59:27 np0005629333 systemd[1]: libpod-b11488ea40f06b46b5b115dd3904ab909acb1520f52a02275eda450eb9848f4f.scope: Deactivated successfully.
Feb 25 08:59:27 np0005629333 podman[425745]: 2026-02-25 13:59:27.428563635 +0000 UTC m=+0.550014246 container died b11488ea40f06b46b5b115dd3904ab909acb1520f52a02275eda450eb9848f4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 08:59:27 np0005629333 systemd[1]: var-lib-containers-storage-overlay-b8be15eed94ec426d3a29753f1b7a82345e4923aabcc799fcdec470437ef5ac0-merged.mount: Deactivated successfully.
Feb 25 08:59:27 np0005629333 podman[425745]: 2026-02-25 13:59:27.478017549 +0000 UTC m=+0.599468180 container remove b11488ea40f06b46b5b115dd3904ab909acb1520f52a02275eda450eb9848f4f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 08:59:27 np0005629333 systemd[1]: libpod-conmon-b11488ea40f06b46b5b115dd3904ab909acb1520f52a02275eda450eb9848f4f.scope: Deactivated successfully.
Feb 25 08:59:27 np0005629333 nova_compute[244014]: 2026-02-25 13:59:27.889 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:59:27 np0005629333 podman[425846]: 2026-02-25 13:59:27.956498913 +0000 UTC m=+0.047325765 container create fa4529f3a9d1103262888079b0ad47363f897e1b5cbbc0ecd9c33ac5088eaa36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_goldstine, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 08:59:27 np0005629333 systemd[1]: Started libpod-conmon-fa4529f3a9d1103262888079b0ad47363f897e1b5cbbc0ecd9c33ac5088eaa36.scope.
Feb 25 08:59:28 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:59:28 np0005629333 podman[425846]: 2026-02-25 13:59:27.933988144 +0000 UTC m=+0.024815066 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:59:28 np0005629333 podman[425846]: 2026-02-25 13:59:28.044051519 +0000 UTC m=+0.134878401 container init fa4529f3a9d1103262888079b0ad47363f897e1b5cbbc0ecd9c33ac5088eaa36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_goldstine, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:59:28 np0005629333 podman[425846]: 2026-02-25 13:59:28.053452605 +0000 UTC m=+0.144279457 container start fa4529f3a9d1103262888079b0ad47363f897e1b5cbbc0ecd9c33ac5088eaa36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:59:28 np0005629333 podman[425846]: 2026-02-25 13:59:28.057759378 +0000 UTC m=+0.148586240 container attach fa4529f3a9d1103262888079b0ad47363f897e1b5cbbc0ecd9c33ac5088eaa36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 08:59:28 np0005629333 systemd[1]: libpod-fa4529f3a9d1103262888079b0ad47363f897e1b5cbbc0ecd9c33ac5088eaa36.scope: Deactivated successfully.
Feb 25 08:59:28 np0005629333 nervous_goldstine[425862]: 167 167
Feb 25 08:59:28 np0005629333 conmon[425862]: conmon fa4529f3a9d110326288 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fa4529f3a9d1103262888079b0ad47363f897e1b5cbbc0ecd9c33ac5088eaa36.scope/container/memory.events
Feb 25 08:59:28 np0005629333 podman[425846]: 2026-02-25 13:59:28.060258419 +0000 UTC m=+0.151085271 container died fa4529f3a9d1103262888079b0ad47363f897e1b5cbbc0ecd9c33ac5088eaa36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 08:59:28 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c76baa7b2ff7eafe30c828fae8aa89e5c0cc93e2c52b8758551b559288775000-merged.mount: Deactivated successfully.
Feb 25 08:59:28 np0005629333 podman[425846]: 2026-02-25 13:59:28.10433981 +0000 UTC m=+0.195166662 container remove fa4529f3a9d1103262888079b0ad47363f897e1b5cbbc0ecd9c33ac5088eaa36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 08:59:28 np0005629333 systemd[1]: libpod-conmon-fa4529f3a9d1103262888079b0ad47363f897e1b5cbbc0ecd9c33ac5088eaa36.scope: Deactivated successfully.
Feb 25 08:59:28 np0005629333 podman[425887]: 2026-02-25 13:59:28.272528365 +0000 UTC m=+0.050338130 container create 7b8203e2600e44d6cab655c0468ea98092ad90e9906b260949661c8126a12262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 08:59:28 np0005629333 systemd[1]: Started libpod-conmon-7b8203e2600e44d6cab655c0468ea98092ad90e9906b260949661c8126a12262.scope.
Feb 25 08:59:28 np0005629333 podman[425887]: 2026-02-25 13:59:28.24700522 +0000 UTC m=+0.024815015 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 08:59:28 np0005629333 systemd[1]: Started libcrun container.
Feb 25 08:59:28 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf910ac552fe0e95830e6770637774d384382d8d1c3fa0154393186f924845e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 08:59:28 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf910ac552fe0e95830e6770637774d384382d8d1c3fa0154393186f924845e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 08:59:28 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf910ac552fe0e95830e6770637774d384382d8d1c3fa0154393186f924845e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 08:59:28 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cf910ac552fe0e95830e6770637774d384382d8d1c3fa0154393186f924845e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 08:59:28 np0005629333 podman[425887]: 2026-02-25 13:59:28.377591518 +0000 UTC m=+0.155401303 container init 7b8203e2600e44d6cab655c0468ea98092ad90e9906b260949661c8126a12262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 08:59:28 np0005629333 podman[425887]: 2026-02-25 13:59:28.38823609 +0000 UTC m=+0.166045845 container start 7b8203e2600e44d6cab655c0468ea98092ad90e9906b260949661c8126a12262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_easley, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 08:59:28 np0005629333 podman[425887]: 2026-02-25 13:59:28.417525131 +0000 UTC m=+0.195334946 container attach 7b8203e2600e44d6cab655c0468ea98092ad90e9906b260949661c8126a12262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_easley, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 08:59:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4187: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:59:29 np0005629333 lvm[425980]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 08:59:29 np0005629333 lvm[425980]: VG ceph_vg0 finished
Feb 25 08:59:29 np0005629333 lvm[425983]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 08:59:29 np0005629333 lvm[425983]: VG ceph_vg1 finished
Feb 25 08:59:29 np0005629333 lvm[425985]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 08:59:29 np0005629333 lvm[425985]: VG ceph_vg2 finished
Feb 25 08:59:29 np0005629333 epic_easley[425904]: {}
Feb 25 08:59:29 np0005629333 systemd[1]: libpod-7b8203e2600e44d6cab655c0468ea98092ad90e9906b260949661c8126a12262.scope: Deactivated successfully.
Feb 25 08:59:29 np0005629333 systemd[1]: libpod-7b8203e2600e44d6cab655c0468ea98092ad90e9906b260949661c8126a12262.scope: Consumed 1.050s CPU time.
Feb 25 08:59:29 np0005629333 podman[425887]: 2026-02-25 13:59:29.176896049 +0000 UTC m=+0.954705784 container died 7b8203e2600e44d6cab655c0468ea98092ad90e9906b260949661c8126a12262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 08:59:29 np0005629333 systemd[1]: var-lib-containers-storage-overlay-0cf910ac552fe0e95830e6770637774d384382d8d1c3fa0154393186f924845e-merged.mount: Deactivated successfully.
Feb 25 08:59:29 np0005629333 podman[425887]: 2026-02-25 13:59:29.226133457 +0000 UTC m=+1.003943192 container remove 7b8203e2600e44d6cab655c0468ea98092ad90e9906b260949661c8126a12262 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_easley, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 08:59:29 np0005629333 systemd[1]: libpod-conmon-7b8203e2600e44d6cab655c0468ea98092ad90e9906b260949661c8126a12262.scope: Deactivated successfully.
Feb 25 08:59:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 08:59:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:59:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 08:59:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:59:29 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:59:29 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 08:59:29 np0005629333 nova_compute[244014]: 2026-02-25 13:59:29.939 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:59:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:59:30 np0005629333 podman[426026]: 2026-02-25 13:59:30.74441887 +0000 UTC m=+0.084245202 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Feb 25 08:59:30 np0005629333 podman[426027]: 2026-02-25 13:59:30.766812956 +0000 UTC m=+0.102800829 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Feb 25 08:59:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4188: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:59:30 np0005629333 nova_compute[244014]: 2026-02-25 13:59:30.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:59:30 np0005629333 nova_compute[244014]: 2026-02-25 13:59:30.915 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:59:30 np0005629333 nova_compute[244014]: 2026-02-25 13:59:30.915 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:59:30 np0005629333 nova_compute[244014]: 2026-02-25 13:59:30.915 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:59:30 np0005629333 nova_compute[244014]: 2026-02-25 13:59:30.916 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 08:59:30 np0005629333 nova_compute[244014]: 2026-02-25 13:59:30.916 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:59:31 np0005629333 nova_compute[244014]: 2026-02-25 13:59:31.208 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:59:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_13:59:31
Feb 25 08:59:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 08:59:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 08:59:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'images', 'volumes']
Feb 25 08:59:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 08:59:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:59:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1947648017' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:59:31 np0005629333 nova_compute[244014]: 2026-02-25 13:59:31.504 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:59:31 np0005629333 nova_compute[244014]: 2026-02-25 13:59:31.657 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 08:59:31 np0005629333 nova_compute[244014]: 2026-02-25 13:59:31.658 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3443MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 08:59:31 np0005629333 nova_compute[244014]: 2026-02-25 13:59:31.658 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:59:31 np0005629333 nova_compute[244014]: 2026-02-25 13:59:31.659 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:59:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:59:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:59:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:59:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:59:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 08:59:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 08:59:31 np0005629333 nova_compute[244014]: 2026-02-25 13:59:31.720 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 08:59:31 np0005629333 nova_compute[244014]: 2026-02-25 13:59:31.721 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 08:59:31 np0005629333 nova_compute[244014]: 2026-02-25 13:59:31.735 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 08:59:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 08:59:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3316481371' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 08:59:32 np0005629333 nova_compute[244014]: 2026-02-25 13:59:32.368 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 08:59:32 np0005629333 nova_compute[244014]: 2026-02-25 13:59:32.375 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 08:59:32 np0005629333 nova_compute[244014]: 2026-02-25 13:59:32.398 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 08:59:32 np0005629333 nova_compute[244014]: 2026-02-25 13:59:32.400 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 08:59:32 np0005629333 nova_compute[244014]: 2026-02-25 13:59:32.400 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:59:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 08:59:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:59:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 08:59:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 08:59:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:59:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 08:59:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:59:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 08:59:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:59:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 08:59:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4189: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:59:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4190: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:59:34 np0005629333 nova_compute[244014]: 2026-02-25 13:59:34.990 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:59:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:59:36 np0005629333 nova_compute[244014]: 2026-02-25 13:59:36.221 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:59:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4191: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:59:38 np0005629333 nova_compute[244014]: 2026-02-25 13:59:38.400 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:59:38 np0005629333 nova_compute[244014]: 2026-02-25 13:59:38.401 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 08:59:38 np0005629333 nova_compute[244014]: 2026-02-25 13:59:38.401 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 08:59:38 np0005629333 nova_compute[244014]: 2026-02-25 13:59:38.418 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 08:59:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4192: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:59:38 np0005629333 nova_compute[244014]: 2026-02-25 13:59:38.889 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:59:40 np0005629333 nova_compute[244014]: 2026-02-25 13:59:40.039 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:59:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:59:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4193: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:59:40 np0005629333 nova_compute[244014]: 2026-02-25 13:59:40.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:59:40 np0005629333 nova_compute[244014]: 2026-02-25 13:59:40.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 08:59:41 np0005629333 nova_compute[244014]: 2026-02-25 13:59:41.254 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:59:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4194: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:59:42 np0005629333 nova_compute[244014]: 2026-02-25 13:59:42.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:59:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 08:59:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:59:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 08:59:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:59:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 08:59:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:59:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:59:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:59:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:59:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:59:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 08:59:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:59:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 08:59:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:59:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:59:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:59:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 08:59:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:59:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 08:59:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:59:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 08:59:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 08:59:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 08:59:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4195: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:59:44 np0005629333 nova_compute[244014]: 2026-02-25 13:59:44.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:59:44 np0005629333 nova_compute[244014]: 2026-02-25 13:59:44.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 08:59:44 np0005629333 nova_compute[244014]: 2026-02-25 13:59:44.899 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 08:59:44 np0005629333 nova_compute[244014]: 2026-02-25 13:59:44.899 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:59:44 np0005629333 nova_compute[244014]: 2026-02-25 13:59:44.900 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 08:59:45 np0005629333 nova_compute[244014]: 2026-02-25 13:59:45.041 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:59:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:59:46 np0005629333 nova_compute[244014]: 2026-02-25 13:59:46.255 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:59:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4196: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:59:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 08:59:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1566699395' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 08:59:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 08:59:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1566699395' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 08:59:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4197: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:59:48 np0005629333 nova_compute[244014]: 2026-02-25 13:59:48.923 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:59:50 np0005629333 nova_compute[244014]: 2026-02-25 13:59:50.043 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:59:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:59:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4198: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:59:50 np0005629333 nova_compute[244014]: 2026-02-25 13:59:50.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:59:51 np0005629333 nova_compute[244014]: 2026-02-25 13:59:51.257 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:59:51 np0005629333 nova_compute[244014]: 2026-02-25 13:59:51.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:59:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4199: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:59:53 np0005629333 nova_compute[244014]: 2026-02-25 13:59:53.650 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 08:59:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4200: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:59:55 np0005629333 nova_compute[244014]: 2026-02-25 13:59:55.047 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:59:55.103 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 08:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:59:55.104 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 08:59:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 13:59:55.104 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 08:59:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 08:59:56 np0005629333 nova_compute[244014]: 2026-02-25 13:59:56.259 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 08:59:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4201: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 08:59:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4202: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:00 np0005629333 nova_compute[244014]: 2026-02-25 14:00:00.076 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:00:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:00:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4203: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:01 np0005629333 nova_compute[244014]: 2026-02-25 14:00:01.260 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:00:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:00:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:00:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:00:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:00:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:00:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:00:01 np0005629333 podman[426111]: 2026-02-25 14:00:01.733637957 +0000 UTC m=+0.066228251 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.43.0)
Feb 25 09:00:01 np0005629333 podman[426112]: 2026-02-25 14:00:01.764621277 +0000 UTC m=+0.092667622 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Feb 25 09:00:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4204: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4205: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:05 np0005629333 nova_compute[244014]: 2026-02-25 14:00:05.080 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:00:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:00:06 np0005629333 nova_compute[244014]: 2026-02-25 14:00:06.262 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:00:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4206: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4207: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:10 np0005629333 nova_compute[244014]: 2026-02-25 14:00:10.085 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:00:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:00:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4208: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:11 np0005629333 nova_compute[244014]: 2026-02-25 14:00:11.304 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:00:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4209: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4210: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:15 np0005629333 nova_compute[244014]: 2026-02-25 14:00:15.121 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:00:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:00:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 09:00:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7800.1 total, 600.0 interval
Cumulative writes: 48K writes, 186K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.02 MB/s
Cumulative WAL: 48K writes, 17K syncs, 2.72 writes per sync, written: 0.19 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 300 writes, 450 keys, 300 commit groups, 1.0 writes per commit group, ingest: 0.15 MB, 0.00 MB/s
Interval WAL: 300 writes, 150 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 09:00:16 np0005629333 nova_compute[244014]: 2026-02-25 14:00:16.307 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:00:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4211: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4212: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 09:00:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7800.2 total, 600.0 interval
Cumulative writes: 49K writes, 196K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.02 MB/s
Cumulative WAL: 49K writes, 18K syncs, 2.74 writes per sync, written: 0.19 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 264 writes, 396 keys, 264 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
Interval WAL: 264 writes, 132 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 09:00:20 np0005629333 nova_compute[244014]: 2026-02-25 14:00:20.123 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:00:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:00:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4213: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:21 np0005629333 nova_compute[244014]: 2026-02-25 14:00:21.339 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:00:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4214: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4215: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 09:00:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7800.6 total, 600.0 interval
Cumulative writes: 38K writes, 151K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
Cumulative WAL: 38K writes, 13K syncs, 2.75 writes per sync, written: 0.15 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 264 writes, 396 keys, 264 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
Interval WAL: 264 writes, 132 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 09:00:25 np0005629333 nova_compute[244014]: 2026-02-25 14:00:25.164 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:00:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:00:26 np0005629333 nova_compute[244014]: 2026-02-25 14:00:26.341 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:00:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4216: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:27 np0005629333 ceph-mgr[76641]: [devicehealth INFO root] Check health
Feb 25 09:00:27 np0005629333 nova_compute[244014]: 2026-02-25 14:00:27.895 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:00:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4217: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 09:00:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 09:00:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 09:00:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 09:00:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 09:00:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:00:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 09:00:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 09:00:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 09:00:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 09:00:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 09:00:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 09:00:30 np0005629333 nova_compute[244014]: 2026-02-25 14:00:30.168 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:00:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:00:30 np0005629333 podman[426302]: 2026-02-25 14:00:30.539865 +0000 UTC m=+0.044473144 container create 5774e54dac49c390c47c965776554c194a1cbd8f66b8acd758dbea087e13657c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_yonath, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:00:30 np0005629333 systemd[1]: Started libpod-conmon-5774e54dac49c390c47c965776554c194a1cbd8f66b8acd758dbea087e13657c.scope.
Feb 25 09:00:30 np0005629333 podman[426302]: 2026-02-25 14:00:30.517356601 +0000 UTC m=+0.021964745 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:00:30 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:00:30 np0005629333 podman[426302]: 2026-02-25 14:00:30.640026123 +0000 UTC m=+0.144634257 container init 5774e54dac49c390c47c965776554c194a1cbd8f66b8acd758dbea087e13657c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_yonath, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 09:00:30 np0005629333 podman[426302]: 2026-02-25 14:00:30.649631046 +0000 UTC m=+0.154239170 container start 5774e54dac49c390c47c965776554c194a1cbd8f66b8acd758dbea087e13657c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_yonath, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 09:00:30 np0005629333 podman[426302]: 2026-02-25 14:00:30.653184827 +0000 UTC m=+0.157792951 container attach 5774e54dac49c390c47c965776554c194a1cbd8f66b8acd758dbea087e13657c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 09:00:30 np0005629333 eager_yonath[426318]: 167 167
Feb 25 09:00:30 np0005629333 systemd[1]: libpod-5774e54dac49c390c47c965776554c194a1cbd8f66b8acd758dbea087e13657c.scope: Deactivated successfully.
Feb 25 09:00:30 np0005629333 podman[426302]: 2026-02-25 14:00:30.657354965 +0000 UTC m=+0.161963099 container died 5774e54dac49c390c47c965776554c194a1cbd8f66b8acd758dbea087e13657c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:00:30 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e68da61d705fb6fb0a6a2382beca5679f6d650cd7bf0c8e0ba376870d95facaf-merged.mount: Deactivated successfully.
Feb 25 09:00:30 np0005629333 podman[426302]: 2026-02-25 14:00:30.702042784 +0000 UTC m=+0.206650948 container remove 5774e54dac49c390c47c965776554c194a1cbd8f66b8acd758dbea087e13657c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 09:00:30 np0005629333 systemd[1]: libpod-conmon-5774e54dac49c390c47c965776554c194a1cbd8f66b8acd758dbea087e13657c.scope: Deactivated successfully.
Feb 25 09:00:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4218: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:30 np0005629333 nova_compute[244014]: 2026-02-25 14:00:30.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:00:30 np0005629333 podman[426343]: 2026-02-25 14:00:30.878223096 +0000 UTC m=+0.053905742 container create 0cf8120fbf17fc6be32a394237401dadc6e06493f339b641d6f6197621b9dafb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Feb 25 09:00:30 np0005629333 nova_compute[244014]: 2026-02-25 14:00:30.907 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 09:00:30 np0005629333 nova_compute[244014]: 2026-02-25 14:00:30.908 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 09:00:30 np0005629333 nova_compute[244014]: 2026-02-25 14:00:30.908 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 09:00:30 np0005629333 nova_compute[244014]: 2026-02-25 14:00:30.908 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 09:00:30 np0005629333 nova_compute[244014]: 2026-02-25 14:00:30.909 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 09:00:30 np0005629333 systemd[1]: Started libpod-conmon-0cf8120fbf17fc6be32a394237401dadc6e06493f339b641d6f6197621b9dafb.scope.
Feb 25 09:00:30 np0005629333 podman[426343]: 2026-02-25 14:00:30.858048303 +0000 UTC m=+0.033731039 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:00:30 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:00:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3496ef282e28a57dfa3d33d6dab993ba5d27f8d6b286543a14d6b630d4dae8bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:00:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3496ef282e28a57dfa3d33d6dab993ba5d27f8d6b286543a14d6b630d4dae8bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:00:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3496ef282e28a57dfa3d33d6dab993ba5d27f8d6b286543a14d6b630d4dae8bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:00:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3496ef282e28a57dfa3d33d6dab993ba5d27f8d6b286543a14d6b630d4dae8bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:00:30 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3496ef282e28a57dfa3d33d6dab993ba5d27f8d6b286543a14d6b630d4dae8bf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 09:00:30 np0005629333 podman[426343]: 2026-02-25 14:00:30.978990717 +0000 UTC m=+0.154673363 container init 0cf8120fbf17fc6be32a394237401dadc6e06493f339b641d6f6197621b9dafb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_roentgen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Feb 25 09:00:30 np0005629333 podman[426343]: 2026-02-25 14:00:30.993029295 +0000 UTC m=+0.168711981 container start 0cf8120fbf17fc6be32a394237401dadc6e06493f339b641d6f6197621b9dafb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_roentgen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:00:30 np0005629333 podman[426343]: 2026-02-25 14:00:30.999375225 +0000 UTC m=+0.175057871 container attach 0cf8120fbf17fc6be32a394237401dadc6e06493f339b641d6f6197621b9dafb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:00:31 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 09:00:31 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:00:31 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 09:00:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:00:31
Feb 25 09:00:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 09:00:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 09:00:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['images', '.mgr', 'default.rgw.log', '.rgw.root', 'volumes', 'default.rgw.control', 'backups', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta']
Feb 25 09:00:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 09:00:31 np0005629333 nova_compute[244014]: 2026-02-25 14:00:31.381 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:00:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:00:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1430821585' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:00:31 np0005629333 nova_compute[244014]: 2026-02-25 14:00:31.477 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 09:00:31 np0005629333 competent_roentgen[426360]: --> passed data devices: 0 physical, 3 LVM
Feb 25 09:00:31 np0005629333 competent_roentgen[426360]: --> All data devices are unavailable
Feb 25 09:00:31 np0005629333 systemd[1]: libpod-0cf8120fbf17fc6be32a394237401dadc6e06493f339b641d6f6197621b9dafb.scope: Deactivated successfully.
Feb 25 09:00:31 np0005629333 podman[426343]: 2026-02-25 14:00:31.52296166 +0000 UTC m=+0.698644376 container died 0cf8120fbf17fc6be32a394237401dadc6e06493f339b641d6f6197621b9dafb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_roentgen, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 25 09:00:31 np0005629333 systemd[1]: var-lib-containers-storage-overlay-3496ef282e28a57dfa3d33d6dab993ba5d27f8d6b286543a14d6b630d4dae8bf-merged.mount: Deactivated successfully.
Feb 25 09:00:31 np0005629333 podman[426343]: 2026-02-25 14:00:31.575040458 +0000 UTC m=+0.750723114 container remove 0cf8120fbf17fc6be32a394237401dadc6e06493f339b641d6f6197621b9dafb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_roentgen, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 09:00:31 np0005629333 systemd[1]: libpod-conmon-0cf8120fbf17fc6be32a394237401dadc6e06493f339b641d6f6197621b9dafb.scope: Deactivated successfully.
Feb 25 09:00:31 np0005629333 nova_compute[244014]: 2026-02-25 14:00:31.679 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 09:00:31 np0005629333 nova_compute[244014]: 2026-02-25 14:00:31.680 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3490MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 09:00:31 np0005629333 nova_compute[244014]: 2026-02-25 14:00:31.680 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 09:00:31 np0005629333 nova_compute[244014]: 2026-02-25 14:00:31.680 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 09:00:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:00:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:00:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:00:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:00:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:00:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:00:31 np0005629333 nova_compute[244014]: 2026-02-25 14:00:31.738 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 09:00:31 np0005629333 nova_compute[244014]: 2026-02-25 14:00:31.739 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 09:00:31 np0005629333 nova_compute[244014]: 2026-02-25 14:00:31.758 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 09:00:31 np0005629333 podman[426464]: 2026-02-25 14:00:31.925978581 +0000 UTC m=+0.100041881 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible)
Feb 25 09:00:31 np0005629333 podman[426465]: 2026-02-25 14:00:31.9611375 +0000 UTC m=+0.135655463 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 25 09:00:32 np0005629333 podman[426539]: 2026-02-25 14:00:32.113639459 +0000 UTC m=+0.035038586 container create 1df5232ed53c66289e7b84f6084990f40001133534c9ee97c5591af345b7a9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_fermi, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:00:32 np0005629333 systemd[1]: Started libpod-conmon-1df5232ed53c66289e7b84f6084990f40001133534c9ee97c5591af345b7a9a9.scope.
Feb 25 09:00:32 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:00:32 np0005629333 podman[426539]: 2026-02-25 14:00:32.097522302 +0000 UTC m=+0.018921459 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:00:32 np0005629333 podman[426539]: 2026-02-25 14:00:32.204816968 +0000 UTC m=+0.126216185 container init 1df5232ed53c66289e7b84f6084990f40001133534c9ee97c5591af345b7a9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 25 09:00:32 np0005629333 podman[426539]: 2026-02-25 14:00:32.214368819 +0000 UTC m=+0.135767936 container start 1df5232ed53c66289e7b84f6084990f40001133534c9ee97c5591af345b7a9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_fermi, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:00:32 np0005629333 podman[426539]: 2026-02-25 14:00:32.220278407 +0000 UTC m=+0.141677614 container attach 1df5232ed53c66289e7b84f6084990f40001133534c9ee97c5591af345b7a9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_fermi, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:00:32 np0005629333 practical_fermi[426557]: 167 167
Feb 25 09:00:32 np0005629333 systemd[1]: libpod-1df5232ed53c66289e7b84f6084990f40001133534c9ee97c5591af345b7a9a9.scope: Deactivated successfully.
Feb 25 09:00:32 np0005629333 podman[426539]: 2026-02-25 14:00:32.222030696 +0000 UTC m=+0.143429833 container died 1df5232ed53c66289e7b84f6084990f40001133534c9ee97c5591af345b7a9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:00:32 np0005629333 systemd[1]: var-lib-containers-storage-overlay-21af9834b512be85e2a07c21fe560acdd23edb4c1678b1d83c1f4bf9109f0b74-merged.mount: Deactivated successfully.
Feb 25 09:00:32 np0005629333 podman[426539]: 2026-02-25 14:00:32.266030245 +0000 UTC m=+0.187429382 container remove 1df5232ed53c66289e7b84f6084990f40001133534c9ee97c5591af345b7a9a9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_fermi, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 09:00:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:00:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/841765983' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:00:32 np0005629333 systemd[1]: libpod-conmon-1df5232ed53c66289e7b84f6084990f40001133534c9ee97c5591af345b7a9a9.scope: Deactivated successfully.
Feb 25 09:00:32 np0005629333 nova_compute[244014]: 2026-02-25 14:00:32.304 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 09:00:32 np0005629333 nova_compute[244014]: 2026-02-25 14:00:32.313 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 09:00:32 np0005629333 nova_compute[244014]: 2026-02-25 14:00:32.330 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 09:00:32 np0005629333 nova_compute[244014]: 2026-02-25 14:00:32.332 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 09:00:32 np0005629333 nova_compute[244014]: 2026-02-25 14:00:32.333 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 09:00:32 np0005629333 podman[426583]: 2026-02-25 14:00:32.435929319 +0000 UTC m=+0.041765247 container create 716274e458ebe340e192b0e36e508256d49ee531945cf6aea361bbd401a9ed9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_gagarin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:00:32 np0005629333 systemd[1]: Started libpod-conmon-716274e458ebe340e192b0e36e508256d49ee531945cf6aea361bbd401a9ed9e.scope.
Feb 25 09:00:32 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:00:32 np0005629333 podman[426583]: 2026-02-25 14:00:32.418959907 +0000 UTC m=+0.024795825 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:00:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99e83c7ce98651ac53d064e8910e2720a34a596e243279ea334aa9cb1169c604/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:00:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99e83c7ce98651ac53d064e8910e2720a34a596e243279ea334aa9cb1169c604/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:00:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99e83c7ce98651ac53d064e8910e2720a34a596e243279ea334aa9cb1169c604/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:00:32 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99e83c7ce98651ac53d064e8910e2720a34a596e243279ea334aa9cb1169c604/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:00:32 np0005629333 podman[426583]: 2026-02-25 14:00:32.531990796 +0000 UTC m=+0.137826764 container init 716274e458ebe340e192b0e36e508256d49ee531945cf6aea361bbd401a9ed9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:00:32 np0005629333 podman[426583]: 2026-02-25 14:00:32.539369276 +0000 UTC m=+0.145205214 container start 716274e458ebe340e192b0e36e508256d49ee531945cf6aea361bbd401a9ed9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_gagarin, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 09:00:32 np0005629333 podman[426583]: 2026-02-25 14:00:32.54374504 +0000 UTC m=+0.149580978 container attach 716274e458ebe340e192b0e36e508256d49ee531945cf6aea361bbd401a9ed9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 09:00:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 09:00:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:00:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 09:00:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:00:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:00:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:00:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:00:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:00:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 09:00:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 09:00:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4219: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]: {
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:    "0": [
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:        {
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "devices": [
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "/dev/loop3"
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            ],
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "lv_name": "ceph_lv0",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "lv_size": "21470642176",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "name": "ceph_lv0",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "tags": {
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.cluster_name": "ceph",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.crush_device_class": "",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.encrypted": "0",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.objectstore": "bluestore",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.osd_id": "0",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.type": "block",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.vdo": "0",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.with_tpm": "0"
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            },
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "type": "block",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "vg_name": "ceph_vg0"
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:        }
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:    ],
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:    "1": [
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:        {
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "devices": [
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "/dev/loop4"
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            ],
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "lv_name": "ceph_lv1",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "lv_size": "21470642176",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "name": "ceph_lv1",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "tags": {
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.cluster_name": "ceph",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.crush_device_class": "",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.encrypted": "0",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.objectstore": "bluestore",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.osd_id": "1",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.type": "block",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.vdo": "0",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.with_tpm": "0"
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            },
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "type": "block",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "vg_name": "ceph_vg1"
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:        }
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:    ],
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:    "2": [
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:        {
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "devices": [
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "/dev/loop5"
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            ],
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "lv_name": "ceph_lv2",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "lv_size": "21470642176",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "name": "ceph_lv2",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "tags": {
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.cluster_name": "ceph",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.crush_device_class": "",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.encrypted": "0",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.objectstore": "bluestore",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.osd_id": "2",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.type": "block",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.vdo": "0",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:                "ceph.with_tpm": "0"
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            },
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "type": "block",
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:            "vg_name": "ceph_vg2"
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:        }
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]:    ]
Feb 25 09:00:32 np0005629333 magical_gagarin[426599]: }
Feb 25 09:00:32 np0005629333 systemd[1]: libpod-716274e458ebe340e192b0e36e508256d49ee531945cf6aea361bbd401a9ed9e.scope: Deactivated successfully.
Feb 25 09:00:32 np0005629333 podman[426583]: 2026-02-25 14:00:32.867292285 +0000 UTC m=+0.473128243 container died 716274e458ebe340e192b0e36e508256d49ee531945cf6aea361bbd401a9ed9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 09:00:32 np0005629333 systemd[1]: var-lib-containers-storage-overlay-99e83c7ce98651ac53d064e8910e2720a34a596e243279ea334aa9cb1169c604-merged.mount: Deactivated successfully.
Feb 25 09:00:32 np0005629333 podman[426583]: 2026-02-25 14:00:32.922473042 +0000 UTC m=+0.528308960 container remove 716274e458ebe340e192b0e36e508256d49ee531945cf6aea361bbd401a9ed9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_gagarin, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:00:32 np0005629333 systemd[1]: libpod-conmon-716274e458ebe340e192b0e36e508256d49ee531945cf6aea361bbd401a9ed9e.scope: Deactivated successfully.
Feb 25 09:00:33 np0005629333 podman[426682]: 2026-02-25 14:00:33.456725459 +0000 UTC m=+0.062694951 container create d9107f47a1bfb4f7ad03c2471353fec58a57d6165234d96c16f801ffaf62835f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_goldstine, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 09:00:33 np0005629333 systemd[1]: Started libpod-conmon-d9107f47a1bfb4f7ad03c2471353fec58a57d6165234d96c16f801ffaf62835f.scope.
Feb 25 09:00:33 np0005629333 podman[426682]: 2026-02-25 14:00:33.43209623 +0000 UTC m=+0.038065772 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:00:33 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:00:33 np0005629333 podman[426682]: 2026-02-25 14:00:33.560599678 +0000 UTC m=+0.166569230 container init d9107f47a1bfb4f7ad03c2471353fec58a57d6165234d96c16f801ffaf62835f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 09:00:33 np0005629333 podman[426682]: 2026-02-25 14:00:33.567492664 +0000 UTC m=+0.173462126 container start d9107f47a1bfb4f7ad03c2471353fec58a57d6165234d96c16f801ffaf62835f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_goldstine, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 09:00:33 np0005629333 agitated_goldstine[426698]: 167 167
Feb 25 09:00:33 np0005629333 podman[426682]: 2026-02-25 14:00:33.574407089 +0000 UTC m=+0.180376641 container attach d9107f47a1bfb4f7ad03c2471353fec58a57d6165234d96c16f801ffaf62835f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 09:00:33 np0005629333 systemd[1]: libpod-d9107f47a1bfb4f7ad03c2471353fec58a57d6165234d96c16f801ffaf62835f.scope: Deactivated successfully.
Feb 25 09:00:33 np0005629333 conmon[426698]: conmon d9107f47a1bfb4f7ad03 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d9107f47a1bfb4f7ad03c2471353fec58a57d6165234d96c16f801ffaf62835f.scope/container/memory.events
Feb 25 09:00:33 np0005629333 podman[426682]: 2026-02-25 14:00:33.576053146 +0000 UTC m=+0.182022658 container died d9107f47a1bfb4f7ad03c2471353fec58a57d6165234d96c16f801ffaf62835f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_goldstine, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 09:00:33 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f2f6c4010014db9fc74c381358d771332c7272fb04e61cba09545e4b1593e412-merged.mount: Deactivated successfully.
Feb 25 09:00:33 np0005629333 podman[426682]: 2026-02-25 14:00:33.621515136 +0000 UTC m=+0.227484598 container remove d9107f47a1bfb4f7ad03c2471353fec58a57d6165234d96c16f801ffaf62835f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:00:33 np0005629333 systemd[1]: libpod-conmon-d9107f47a1bfb4f7ad03c2471353fec58a57d6165234d96c16f801ffaf62835f.scope: Deactivated successfully.
Feb 25 09:00:33 np0005629333 podman[426723]: 2026-02-25 14:00:33.80657235 +0000 UTC m=+0.040909422 container create 28a033bf72c7616403501a3e34d84ec2df291a255ad2ade7a2bfb1c2771d381b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 09:00:33 np0005629333 systemd[1]: Started libpod-conmon-28a033bf72c7616403501a3e34d84ec2df291a255ad2ade7a2bfb1c2771d381b.scope.
Feb 25 09:00:33 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:00:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c84ce0c55d1100ff9dc1a9fe721dc972294986ebec5bef881f77b12c3cbbdade/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:00:33 np0005629333 podman[426723]: 2026-02-25 14:00:33.78789773 +0000 UTC m=+0.022234872 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:00:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c84ce0c55d1100ff9dc1a9fe721dc972294986ebec5bef881f77b12c3cbbdade/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:00:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c84ce0c55d1100ff9dc1a9fe721dc972294986ebec5bef881f77b12c3cbbdade/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:00:33 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c84ce0c55d1100ff9dc1a9fe721dc972294986ebec5bef881f77b12c3cbbdade/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:00:33 np0005629333 podman[426723]: 2026-02-25 14:00:33.902162364 +0000 UTC m=+0.136499466 container init 28a033bf72c7616403501a3e34d84ec2df291a255ad2ade7a2bfb1c2771d381b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 09:00:33 np0005629333 podman[426723]: 2026-02-25 14:00:33.911471408 +0000 UTC m=+0.145808480 container start 28a033bf72c7616403501a3e34d84ec2df291a255ad2ade7a2bfb1c2771d381b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:00:33 np0005629333 podman[426723]: 2026-02-25 14:00:33.91539864 +0000 UTC m=+0.149735712 container attach 28a033bf72c7616403501a3e34d84ec2df291a255ad2ade7a2bfb1c2771d381b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bouman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:00:34 np0005629333 lvm[426817]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 09:00:34 np0005629333 lvm[426817]: VG ceph_vg1 finished
Feb 25 09:00:34 np0005629333 lvm[426818]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 09:00:34 np0005629333 lvm[426818]: VG ceph_vg0 finished
Feb 25 09:00:34 np0005629333 lvm[426820]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 09:00:34 np0005629333 lvm[426820]: VG ceph_vg2 finished
Feb 25 09:00:34 np0005629333 reverent_bouman[426739]: {}
Feb 25 09:00:34 np0005629333 systemd[1]: libpod-28a033bf72c7616403501a3e34d84ec2df291a255ad2ade7a2bfb1c2771d381b.scope: Deactivated successfully.
Feb 25 09:00:34 np0005629333 systemd[1]: libpod-28a033bf72c7616403501a3e34d84ec2df291a255ad2ade7a2bfb1c2771d381b.scope: Consumed 1.174s CPU time.
Feb 25 09:00:34 np0005629333 podman[426723]: 2026-02-25 14:00:34.724921542 +0000 UTC m=+0.959258644 container died 28a033bf72c7616403501a3e34d84ec2df291a255ad2ade7a2bfb1c2771d381b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bouman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:00:34 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c84ce0c55d1100ff9dc1a9fe721dc972294986ebec5bef881f77b12c3cbbdade-merged.mount: Deactivated successfully.
Feb 25 09:00:34 np0005629333 podman[426723]: 2026-02-25 14:00:34.777162985 +0000 UTC m=+1.011500057 container remove 28a033bf72c7616403501a3e34d84ec2df291a255ad2ade7a2bfb1c2771d381b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:00:34 np0005629333 systemd[1]: libpod-conmon-28a033bf72c7616403501a3e34d84ec2df291a255ad2ade7a2bfb1c2771d381b.scope: Deactivated successfully.
Feb 25 09:00:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4220: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 09:00:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:00:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 09:00:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:00:35 np0005629333 nova_compute[244014]: 2026-02-25 14:00:35.309 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:00:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:00:35 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:00:35 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:00:36 np0005629333 nova_compute[244014]: 2026-02-25 14:00:36.418 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:00:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4221: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:38 np0005629333 nova_compute[244014]: 2026-02-25 14:00:38.334 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:00:38 np0005629333 nova_compute[244014]: 2026-02-25 14:00:38.335 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 09:00:38 np0005629333 nova_compute[244014]: 2026-02-25 14:00:38.335 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 09:00:38 np0005629333 nova_compute[244014]: 2026-02-25 14:00:38.350 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 09:00:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4222: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:39 np0005629333 nova_compute[244014]: 2026-02-25 14:00:39.887 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:00:40 np0005629333 nova_compute[244014]: 2026-02-25 14:00:40.315 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:00:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:00:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4223: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:41 np0005629333 nova_compute[244014]: 2026-02-25 14:00:41.465 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:00:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4224: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:42 np0005629333 nova_compute[244014]: 2026-02-25 14:00:42.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:00:42 np0005629333 nova_compute[244014]: 2026-02-25 14:00:42.878 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 09:00:43 np0005629333 nova_compute[244014]: 2026-02-25 14:00:43.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:00:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 09:00:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:00:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 09:00:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:00:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 09:00:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:00:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:00:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:00:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:00:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:00:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 09:00:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:00:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 09:00:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:00:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:00:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:00:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 09:00:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:00:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 09:00:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:00:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:00:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:00:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 09:00:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4225: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:45 np0005629333 nova_compute[244014]: 2026-02-25 14:00:45.351 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:00:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:00:46 np0005629333 nova_compute[244014]: 2026-02-25 14:00:46.503 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:00:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4226: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 09:00:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2942443104' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 09:00:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 09:00:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2942443104' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 09:00:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4227: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:49 np0005629333 nova_compute[244014]: 2026-02-25 14:00:49.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:00:50 np0005629333 nova_compute[244014]: 2026-02-25 14:00:50.356 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:00:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:00:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4228: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:50 np0005629333 nova_compute[244014]: 2026-02-25 14:00:50.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:00:51 np0005629333 nova_compute[244014]: 2026-02-25 14:00:51.542 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:00:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4229: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:52 np0005629333 nova_compute[244014]: 2026-02-25 14:00:52.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:00:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4230: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:00:55.104 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 09:00:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:00:55.105 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 09:00:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:00:55.105 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 09:00:55 np0005629333 nova_compute[244014]: 2026-02-25 14:00:55.360 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:00:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:00:56 np0005629333 nova_compute[244014]: 2026-02-25 14:00:56.543 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:00:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4231: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:00:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4232: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:01:00 np0005629333 nova_compute[244014]: 2026-02-25 14:01:00.364 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:01:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:01:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4233: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:01:01 np0005629333 nova_compute[244014]: 2026-02-25 14:01:01.599 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:01:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:01:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:01:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:01:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:01:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:01:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:01:02 np0005629333 podman[426872]: 2026-02-25 14:01:02.719611924 +0000 UTC m=+0.058590404 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 09:01:02 np0005629333 podman[426873]: 2026-02-25 14:01:02.798681189 +0000 UTC m=+0.133621765 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 25 09:01:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4234: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:01:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4235: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:01:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:01:05 np0005629333 nova_compute[244014]: 2026-02-25 14:01:05.403 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:01:06 np0005629333 nova_compute[244014]: 2026-02-25 14:01:06.631 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:01:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4236: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:01:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4237: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:01:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:01:10 np0005629333 nova_compute[244014]: 2026-02-25 14:01:10.436 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:01:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4238: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:01:11 np0005629333 nova_compute[244014]: 2026-02-25 14:01:11.677 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:01:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4239: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:01:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4240: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:01:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:01:15 np0005629333 nova_compute[244014]: 2026-02-25 14:01:15.476 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:01:16 np0005629333 nova_compute[244014]: 2026-02-25 14:01:16.730 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:01:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4241: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:01:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4242: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:01:18 np0005629333 nova_compute[244014]: 2026-02-25 14:01:18.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:01:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:01:20 np0005629333 nova_compute[244014]: 2026-02-25 14:01:20.480 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:01:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4243: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:01:21 np0005629333 nova_compute[244014]: 2026-02-25 14:01:21.757 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:01:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4244: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Feb 25 09:01:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4245: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Feb 25 09:01:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:01:25 np0005629333 nova_compute[244014]: 2026-02-25 14:01:25.529 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:01:26 np0005629333 nova_compute[244014]: 2026-02-25 14:01:26.817 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:01:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4246: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Feb 25 09:01:27 np0005629333 nova_compute[244014]: 2026-02-25 14:01:27.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:01:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4247: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 09:01:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:01:30 np0005629333 nova_compute[244014]: 2026-02-25 14:01:30.585 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:01:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4248: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 09:01:30 np0005629333 nova_compute[244014]: 2026-02-25 14:01:30.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:01:30 np0005629333 nova_compute[244014]: 2026-02-25 14:01:30.917 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 09:01:30 np0005629333 nova_compute[244014]: 2026-02-25 14:01:30.917 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 09:01:30 np0005629333 nova_compute[244014]: 2026-02-25 14:01:30.918 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 09:01:30 np0005629333 nova_compute[244014]: 2026-02-25 14:01:30.918 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 09:01:30 np0005629333 nova_compute[244014]: 2026-02-25 14:01:30.918 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 09:01:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:01:31
Feb 25 09:01:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 09:01:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 09:01:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'vms', 'volumes', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data']
Feb 25 09:01:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 09:01:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:01:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1282600901' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:01:31 np0005629333 nova_compute[244014]: 2026-02-25 14:01:31.505 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 09:01:31 np0005629333 nova_compute[244014]: 2026-02-25 14:01:31.631 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 09:01:31 np0005629333 nova_compute[244014]: 2026-02-25 14:01:31.632 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3524MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 09:01:31 np0005629333 nova_compute[244014]: 2026-02-25 14:01:31.632 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 09:01:31 np0005629333 nova_compute[244014]: 2026-02-25 14:01:31.632 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 09:01:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:01:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:01:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:01:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:01:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:01:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:01:31 np0005629333 nova_compute[244014]: 2026-02-25 14:01:31.808 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 09:01:31 np0005629333 nova_compute[244014]: 2026-02-25 14:01:31.808 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 09:01:31 np0005629333 nova_compute[244014]: 2026-02-25 14:01:31.856 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:01:31 np0005629333 nova_compute[244014]: 2026-02-25 14:01:31.883 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 25 09:01:31 np0005629333 nova_compute[244014]: 2026-02-25 14:01:31.951 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 25 09:01:31 np0005629333 nova_compute[244014]: 2026-02-25 14:01:31.951 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 25 09:01:31 np0005629333 nova_compute[244014]: 2026-02-25 14:01:31.964 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 25 09:01:31 np0005629333 nova_compute[244014]: 2026-02-25 14:01:31.983 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Feb 25 09:01:31 np0005629333 nova_compute[244014]: 2026-02-25 14:01:31.998 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 09:01:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:01:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/439463346' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:01:32 np0005629333 nova_compute[244014]: 2026-02-25 14:01:32.591 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 09:01:32 np0005629333 nova_compute[244014]: 2026-02-25 14:01:32.596 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 09:01:32 np0005629333 nova_compute[244014]: 2026-02-25 14:01:32.612 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 09:01:32 np0005629333 nova_compute[244014]: 2026-02-25 14:01:32.614 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 09:01:32 np0005629333 nova_compute[244014]: 2026-02-25 14:01:32.614 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 09:01:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 09:01:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:01:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 09:01:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:01:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:01:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:01:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:01:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:01:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 09:01:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 09:01:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4249: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Feb 25 09:01:33 np0005629333 podman[426961]: 2026-02-25 14:01:33.708225955 +0000 UTC m=+0.056580357 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent)
Feb 25 09:01:33 np0005629333 podman[426962]: 2026-02-25 14:01:33.762824855 +0000 UTC m=+0.102786129 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Feb 25 09:01:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4250: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Feb 25 09:01:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:01:35 np0005629333 nova_compute[244014]: 2026-02-25 14:01:35.587 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:01:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 09:01:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 09:01:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 09:01:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 09:01:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 09:01:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:01:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 09:01:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 09:01:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 09:01:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 09:01:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 09:01:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 09:01:36 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 09:01:36 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:01:36 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 09:01:36 np0005629333 podman[427150]: 2026-02-25 14:01:36.209624078 +0000 UTC m=+0.078559682 container create becf86302ef35f78f3ccbdad9e92edbad819658816ed4eb93d6fe21ad306974c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilbur, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 09:01:36 np0005629333 systemd[1]: Started libpod-conmon-becf86302ef35f78f3ccbdad9e92edbad819658816ed4eb93d6fe21ad306974c.scope.
Feb 25 09:01:36 np0005629333 podman[427150]: 2026-02-25 14:01:36.178147134 +0000 UTC m=+0.047082778 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:01:36 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:01:36 np0005629333 podman[427150]: 2026-02-25 14:01:36.320677061 +0000 UTC m=+0.189612705 container init becf86302ef35f78f3ccbdad9e92edbad819658816ed4eb93d6fe21ad306974c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilbur, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 09:01:36 np0005629333 podman[427150]: 2026-02-25 14:01:36.331286762 +0000 UTC m=+0.200222366 container start becf86302ef35f78f3ccbdad9e92edbad819658816ed4eb93d6fe21ad306974c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilbur, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 09:01:36 np0005629333 determined_wilbur[427166]: 167 167
Feb 25 09:01:36 np0005629333 podman[427150]: 2026-02-25 14:01:36.339469254 +0000 UTC m=+0.208404898 container attach becf86302ef35f78f3ccbdad9e92edbad819658816ed4eb93d6fe21ad306974c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:01:36 np0005629333 systemd[1]: libpod-becf86302ef35f78f3ccbdad9e92edbad819658816ed4eb93d6fe21ad306974c.scope: Deactivated successfully.
Feb 25 09:01:36 np0005629333 podman[427150]: 2026-02-25 14:01:36.340522594 +0000 UTC m=+0.209458188 container died becf86302ef35f78f3ccbdad9e92edbad819658816ed4eb93d6fe21ad306974c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilbur, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 09:01:36 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ab5c5e99eab395552b95a2294b8ceaf4d1624f8329c5131c76fd57542ada10cc-merged.mount: Deactivated successfully.
Feb 25 09:01:36 np0005629333 podman[427150]: 2026-02-25 14:01:36.392840629 +0000 UTC m=+0.261776233 container remove becf86302ef35f78f3ccbdad9e92edbad819658816ed4eb93d6fe21ad306974c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:01:36 np0005629333 systemd[1]: libpod-conmon-becf86302ef35f78f3ccbdad9e92edbad819658816ed4eb93d6fe21ad306974c.scope: Deactivated successfully.
Feb 25 09:01:36 np0005629333 podman[427191]: 2026-02-25 14:01:36.575804034 +0000 UTC m=+0.051406421 container create 926fb911ea19ed58d4de3abca47e653b85cf5b5f86d601b5bfcff037b935123a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_thompson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:01:36 np0005629333 systemd[1]: Started libpod-conmon-926fb911ea19ed58d4de3abca47e653b85cf5b5f86d601b5bfcff037b935123a.scope.
Feb 25 09:01:36 np0005629333 podman[427191]: 2026-02-25 14:01:36.548666613 +0000 UTC m=+0.024269040 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:01:36 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:01:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f75c73fa50f9f910d17c308d5569ebf07a947d5b087648b60d7102a052edc663/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:01:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f75c73fa50f9f910d17c308d5569ebf07a947d5b087648b60d7102a052edc663/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:01:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f75c73fa50f9f910d17c308d5569ebf07a947d5b087648b60d7102a052edc663/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:01:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f75c73fa50f9f910d17c308d5569ebf07a947d5b087648b60d7102a052edc663/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:01:36 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f75c73fa50f9f910d17c308d5569ebf07a947d5b087648b60d7102a052edc663/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 09:01:36 np0005629333 podman[427191]: 2026-02-25 14:01:36.674704781 +0000 UTC m=+0.150307178 container init 926fb911ea19ed58d4de3abca47e653b85cf5b5f86d601b5bfcff037b935123a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_thompson, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:01:36 np0005629333 podman[427191]: 2026-02-25 14:01:36.687372511 +0000 UTC m=+0.162974878 container start 926fb911ea19ed58d4de3abca47e653b85cf5b5f86d601b5bfcff037b935123a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_thompson, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 09:01:36 np0005629333 podman[427191]: 2026-02-25 14:01:36.691725434 +0000 UTC m=+0.167327791 container attach 926fb911ea19ed58d4de3abca47e653b85cf5b5f86d601b5bfcff037b935123a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 09:01:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4251: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Feb 25 09:01:36 np0005629333 nova_compute[244014]: 2026-02-25 14:01:36.859 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:01:37 np0005629333 wizardly_thompson[427208]: --> passed data devices: 0 physical, 3 LVM
Feb 25 09:01:37 np0005629333 wizardly_thompson[427208]: --> All data devices are unavailable
Feb 25 09:01:37 np0005629333 systemd[1]: libpod-926fb911ea19ed58d4de3abca47e653b85cf5b5f86d601b5bfcff037b935123a.scope: Deactivated successfully.
Feb 25 09:01:37 np0005629333 podman[427191]: 2026-02-25 14:01:37.193210241 +0000 UTC m=+0.668812668 container died 926fb911ea19ed58d4de3abca47e653b85cf5b5f86d601b5bfcff037b935123a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 09:01:37 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f75c73fa50f9f910d17c308d5569ebf07a947d5b087648b60d7102a052edc663-merged.mount: Deactivated successfully.
Feb 25 09:01:37 np0005629333 podman[427191]: 2026-02-25 14:01:37.244180318 +0000 UTC m=+0.719782715 container remove 926fb911ea19ed58d4de3abca47e653b85cf5b5f86d601b5bfcff037b935123a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_thompson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:01:37 np0005629333 systemd[1]: libpod-conmon-926fb911ea19ed58d4de3abca47e653b85cf5b5f86d601b5bfcff037b935123a.scope: Deactivated successfully.
Feb 25 09:01:37 np0005629333 podman[427305]: 2026-02-25 14:01:37.735215869 +0000 UTC m=+0.054832748 container create 999eb486316d0fe117478faba4df3850aa779e9dcdff1dd9c819776238afc422 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_gagarin, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:01:37 np0005629333 systemd[1]: Started libpod-conmon-999eb486316d0fe117478faba4df3850aa779e9dcdff1dd9c819776238afc422.scope.
Feb 25 09:01:37 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:01:37 np0005629333 podman[427305]: 2026-02-25 14:01:37.717345362 +0000 UTC m=+0.036962271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:01:37 np0005629333 podman[427305]: 2026-02-25 14:01:37.822334432 +0000 UTC m=+0.141951401 container init 999eb486316d0fe117478faba4df3850aa779e9dcdff1dd9c819776238afc422 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Feb 25 09:01:37 np0005629333 podman[427305]: 2026-02-25 14:01:37.832987324 +0000 UTC m=+0.152604243 container start 999eb486316d0fe117478faba4df3850aa779e9dcdff1dd9c819776238afc422 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_gagarin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:01:37 np0005629333 podman[427305]: 2026-02-25 14:01:37.837218115 +0000 UTC m=+0.156835084 container attach 999eb486316d0fe117478faba4df3850aa779e9dcdff1dd9c819776238afc422 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 09:01:37 np0005629333 ecstatic_gagarin[427321]: 167 167
Feb 25 09:01:37 np0005629333 systemd[1]: libpod-999eb486316d0fe117478faba4df3850aa779e9dcdff1dd9c819776238afc422.scope: Deactivated successfully.
Feb 25 09:01:37 np0005629333 podman[427305]: 2026-02-25 14:01:37.839213031 +0000 UTC m=+0.158829970 container died 999eb486316d0fe117478faba4df3850aa779e9dcdff1dd9c819776238afc422 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_gagarin, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 09:01:37 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7148729377a2342d6e994b4c5383aa57aa329346188abdcdbeccd8d90cdb11ce-merged.mount: Deactivated successfully.
Feb 25 09:01:37 np0005629333 podman[427305]: 2026-02-25 14:01:37.884439115 +0000 UTC m=+0.204056024 container remove 999eb486316d0fe117478faba4df3850aa779e9dcdff1dd9c819776238afc422 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_gagarin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 09:01:37 np0005629333 systemd[1]: libpod-conmon-999eb486316d0fe117478faba4df3850aa779e9dcdff1dd9c819776238afc422.scope: Deactivated successfully.
Feb 25 09:01:38 np0005629333 podman[427345]: 2026-02-25 14:01:38.09419429 +0000 UTC m=+0.065979375 container create 2976a580ead3021b2aa8c05b2fdd33e1a7571b393f92616ada6d48b04a61f911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_proskuriakova, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:01:38 np0005629333 systemd[1]: Started libpod-conmon-2976a580ead3021b2aa8c05b2fdd33e1a7571b393f92616ada6d48b04a61f911.scope.
Feb 25 09:01:38 np0005629333 podman[427345]: 2026-02-25 14:01:38.068236283 +0000 UTC m=+0.040021418 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:01:38 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:01:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1577793097aab46e6da92c382555e2162a91a632cb788a8f805d79fba9463e32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:01:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1577793097aab46e6da92c382555e2162a91a632cb788a8f805d79fba9463e32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:01:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1577793097aab46e6da92c382555e2162a91a632cb788a8f805d79fba9463e32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:01:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1577793097aab46e6da92c382555e2162a91a632cb788a8f805d79fba9463e32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:01:38 np0005629333 podman[427345]: 2026-02-25 14:01:38.209506733 +0000 UTC m=+0.181291858 container init 2976a580ead3021b2aa8c05b2fdd33e1a7571b393f92616ada6d48b04a61f911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_proskuriakova, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:01:38 np0005629333 podman[427345]: 2026-02-25 14:01:38.217249483 +0000 UTC m=+0.189034568 container start 2976a580ead3021b2aa8c05b2fdd33e1a7571b393f92616ada6d48b04a61f911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 09:01:38 np0005629333 podman[427345]: 2026-02-25 14:01:38.220897177 +0000 UTC m=+0.192682262 container attach 2976a580ead3021b2aa8c05b2fdd33e1a7571b393f92616ada6d48b04a61f911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_proskuriakova, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]: {
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:    "0": [
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:        {
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "devices": [
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "/dev/loop3"
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            ],
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "lv_name": "ceph_lv0",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "lv_size": "21470642176",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "name": "ceph_lv0",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "tags": {
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.cluster_name": "ceph",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.crush_device_class": "",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.encrypted": "0",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.objectstore": "bluestore",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.osd_id": "0",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.type": "block",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.vdo": "0",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.with_tpm": "0"
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            },
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "type": "block",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "vg_name": "ceph_vg0"
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:        }
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:    ],
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:    "1": [
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:        {
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "devices": [
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "/dev/loop4"
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            ],
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "lv_name": "ceph_lv1",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "lv_size": "21470642176",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "name": "ceph_lv1",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "tags": {
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.cluster_name": "ceph",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.crush_device_class": "",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.encrypted": "0",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.objectstore": "bluestore",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.osd_id": "1",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.type": "block",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.vdo": "0",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.with_tpm": "0"
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            },
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "type": "block",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "vg_name": "ceph_vg1"
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:        }
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:    ],
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:    "2": [
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:        {
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "devices": [
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "/dev/loop5"
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            ],
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "lv_name": "ceph_lv2",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "lv_size": "21470642176",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "name": "ceph_lv2",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "tags": {
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.cluster_name": "ceph",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.crush_device_class": "",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.encrypted": "0",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.objectstore": "bluestore",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.osd_id": "2",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.type": "block",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.vdo": "0",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:                "ceph.with_tpm": "0"
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            },
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "type": "block",
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:            "vg_name": "ceph_vg2"
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:        }
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]:    ]
Feb 25 09:01:38 np0005629333 lucid_proskuriakova[427361]: }
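The JSON block printed by the lucid_proskuriakova container is a map of OSD id to logical-volume records; its shape matches ceph-volume lvm list --format json output (an inference, since the command line itself is not logged). A minimal Python sketch for pulling the OSD-to-device mapping out of such a payload:

    import json

    def osd_devices(payload: str) -> dict:
        """Map OSD id -> backing LV/device info from a ceph-volume
        style listing (the JSON printed by the container above)."""
        out = {}
        for osd_id, lvs in json.loads(payload).items():
            out[int(osd_id)] = [
                {
                    "lv_path": lv["lv_path"],    # e.g. /dev/ceph_vg0/ceph_lv0
                    "devices": lv["devices"],    # e.g. ["/dev/loop3"]
                    "osd_fsid": lv["tags"]["ceph.osd_fsid"],
                }
                for lv in lvs
            ]
        return out

For the listing above this yields three OSDs (0, 1, 2) backed by /dev/loop3, /dev/loop4, and /dev/loop5 respectively.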
Feb 25 09:01:38 np0005629333 systemd[1]: libpod-2976a580ead3021b2aa8c05b2fdd33e1a7571b393f92616ada6d48b04a61f911.scope: Deactivated successfully.
Feb 25 09:01:38 np0005629333 podman[427345]: 2026-02-25 14:01:38.573736494 +0000 UTC m=+0.545521579 container died 2976a580ead3021b2aa8c05b2fdd33e1a7571b393f92616ada6d48b04a61f911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 09:01:38 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1577793097aab46e6da92c382555e2162a91a632cb788a8f805d79fba9463e32-merged.mount: Deactivated successfully.
Feb 25 09:01:38 np0005629333 nova_compute[244014]: 2026-02-25 14:01:38.614 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:01:38 np0005629333 nova_compute[244014]: 2026-02-25 14:01:38.617 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 09:01:38 np0005629333 nova_compute[244014]: 2026-02-25 14:01:38.617 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 09:01:38 np0005629333 podman[427345]: 2026-02-25 14:01:38.628239101 +0000 UTC m=+0.600024186 container remove 2976a580ead3021b2aa8c05b2fdd33e1a7571b393f92616ada6d48b04a61f911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_proskuriakova, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Feb 25 09:01:38 np0005629333 systemd[1]: libpod-conmon-2976a580ead3021b2aa8c05b2fdd33e1a7571b393f92616ada6d48b04a61f911.scope: Deactivated successfully.
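Taken together, the podman entries above trace one short-lived container through its full lifecycle: create, init, start, attach, died, remove. A small sketch, assuming only the "container <event> <64-hex id>" wording visible in these lines, that reconstructs such sequences from the journal text:

    import re
    from collections import defaultdict

    # Matches the podman journal wording above, e.g.
    # "container start 2976a580...f911 (image=..., name=...)"
    EVENT_RE = re.compile(r"container (\w+) ([0-9a-f]{64})")

    def lifecycles(lines):
        """Group podman journal lines into per-container event lists."""
        events = defaultdict(list)
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                event, cid = m.groups()
                events[cid].append(event)
        return events
    # For the container above this yields:
    # ["create", "init", "start", "attach", "died", "remove"]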
Feb 25 09:01:38 np0005629333 nova_compute[244014]: 2026-02-25 14:01:38.637 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 09:01:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4252: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Feb 25 09:01:39 np0005629333 podman[427446]: 2026-02-25 14:01:39.187816597 +0000 UTC m=+0.060262982 container create 6270e9af6ca30f9ec4e3f0062c558c9de4db9676f193914e6721f55fd5c29631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_poitras, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:01:39 np0005629333 systemd[1]: Started libpod-conmon-6270e9af6ca30f9ec4e3f0062c558c9de4db9676f193914e6721f55fd5c29631.scope.
Feb 25 09:01:39 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:01:39 np0005629333 podman[427446]: 2026-02-25 14:01:39.162991993 +0000 UTC m=+0.035438468 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:01:39 np0005629333 podman[427446]: 2026-02-25 14:01:39.265794451 +0000 UTC m=+0.138240906 container init 6270e9af6ca30f9ec4e3f0062c558c9de4db9676f193914e6721f55fd5c29631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_poitras, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 09:01:39 np0005629333 podman[427446]: 2026-02-25 14:01:39.27701993 +0000 UTC m=+0.149466325 container start 6270e9af6ca30f9ec4e3f0062c558c9de4db9676f193914e6721f55fd5c29631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_poitras, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:01:39 np0005629333 podman[427446]: 2026-02-25 14:01:39.280763766 +0000 UTC m=+0.153210201 container attach 6270e9af6ca30f9ec4e3f0062c558c9de4db9676f193914e6721f55fd5c29631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_poitras, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:01:39 np0005629333 inspiring_poitras[427462]: 167 167
Feb 25 09:01:39 np0005629333 systemd[1]: libpod-6270e9af6ca30f9ec4e3f0062c558c9de4db9676f193914e6721f55fd5c29631.scope: Deactivated successfully.
Feb 25 09:01:39 np0005629333 podman[427446]: 2026-02-25 14:01:39.282789153 +0000 UTC m=+0.155235548 container died 6270e9af6ca30f9ec4e3f0062c558c9de4db9676f193914e6721f55fd5c29631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_poitras, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:01:39 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f3e74982fc4a0e6d31e185721786ce4dcb3164325580a5a07e90428dc12e2dbe-merged.mount: Deactivated successfully.
Feb 25 09:01:39 np0005629333 podman[427446]: 2026-02-25 14:01:39.326884265 +0000 UTC m=+0.199330650 container remove 6270e9af6ca30f9ec4e3f0062c558c9de4db9676f193914e6721f55fd5c29631 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_poitras, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:01:39 np0005629333 systemd[1]: libpod-conmon-6270e9af6ca30f9ec4e3f0062c558c9de4db9676f193914e6721f55fd5c29631.scope: Deactivated successfully.
Feb 25 09:01:39 np0005629333 podman[427487]: 2026-02-25 14:01:39.45912294 +0000 UTC m=+0.043602219 container create 3db3cef1470a3274d59bc9a5ee1ea520378d884a8680e308e801042f02807731 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 09:01:39 np0005629333 systemd[1]: Started libpod-conmon-3db3cef1470a3274d59bc9a5ee1ea520378d884a8680e308e801042f02807731.scope.
Feb 25 09:01:39 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:01:39 np0005629333 podman[427487]: 2026-02-25 14:01:39.438947027 +0000 UTC m=+0.023426246 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:01:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c739d34a526031f1fdb78be3d5135652fd81631907d75c94f0c4cf409cb8c51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:01:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c739d34a526031f1fdb78be3d5135652fd81631907d75c94f0c4cf409cb8c51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:01:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c739d34a526031f1fdb78be3d5135652fd81631907d75c94f0c4cf409cb8c51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:01:39 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c739d34a526031f1fdb78be3d5135652fd81631907d75c94f0c4cf409cb8c51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:01:39 np0005629333 podman[427487]: 2026-02-25 14:01:39.55143042 +0000 UTC m=+0.135909669 container init 3db3cef1470a3274d59bc9a5ee1ea520378d884a8680e308e801042f02807731 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 09:01:39 np0005629333 podman[427487]: 2026-02-25 14:01:39.556783202 +0000 UTC m=+0.141262421 container start 3db3cef1470a3274d59bc9a5ee1ea520378d884a8680e308e801042f02807731 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_herschel, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:01:39 np0005629333 podman[427487]: 2026-02-25 14:01:39.561030573 +0000 UTC m=+0.145509792 container attach 3db3cef1470a3274d59bc9a5ee1ea520378d884a8680e308e801042f02807731 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 09:01:40 np0005629333 lvm[427581]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 09:01:40 np0005629333 lvm[427582]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 09:01:40 np0005629333 lvm[427582]: VG ceph_vg1 finished
Feb 25 09:01:40 np0005629333 lvm[427581]: VG ceph_vg0 finished
Feb 25 09:01:40 np0005629333 lvm[427584]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 09:01:40 np0005629333 lvm[427584]: VG ceph_vg2 finished
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:01:40 np0005629333 heuristic_herschel[427503]: {}
Feb 25 09:01:40 np0005629333 systemd[1]: libpod-3db3cef1470a3274d59bc9a5ee1ea520378d884a8680e308e801042f02807731.scope: Deactivated successfully.
Feb 25 09:01:40 np0005629333 systemd[1]: libpod-3db3cef1470a3274d59bc9a5ee1ea520378d884a8680e308e801042f02807731.scope: Consumed 1.276s CPU time.
Feb 25 09:01:40 np0005629333 podman[427587]: 2026-02-25 14:01:40.494619797 +0000 UTC m=+0.032232306 container died 3db3cef1470a3274d59bc9a5ee1ea520378d884a8680e308e801042f02807731 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 09:01:40 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7c739d34a526031f1fdb78be3d5135652fd81631907d75c94f0c4cf409cb8c51-merged.mount: Deactivated successfully.
Feb 25 09:01:40 np0005629333 podman[427587]: 2026-02-25 14:01:40.543612648 +0000 UTC m=+0.081225097 container remove 3db3cef1470a3274d59bc9a5ee1ea520378d884a8680e308e801042f02807731 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 09:01:40 np0005629333 systemd[1]: libpod-conmon-3db3cef1470a3274d59bc9a5ee1ea520378d884a8680e308e801042f02807731.scope: Deactivated successfully.
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:01:40 np0005629333 nova_compute[244014]: 2026-02-25 14:01:40.634 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #210. Immutable memtables: 0.
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.644317) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 131] Flushing memtable with next log file: 210
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028100644385, "job": 131, "event": "flush_started", "num_memtables": 1, "num_entries": 2052, "num_deletes": 251, "total_data_size": 3487904, "memory_usage": 3551088, "flush_reason": "Manual Compaction"}
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 131] Level-0 flush table #211: started
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028100670057, "cf_name": "default", "job": 131, "event": "table_file_creation", "file_number": 211, "file_size": 3420288, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 86336, "largest_seqno": 88387, "table_properties": {"data_size": 3410846, "index_size": 5999, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18655, "raw_average_key_size": 20, "raw_value_size": 3392214, "raw_average_value_size": 3651, "num_data_blocks": 266, "num_entries": 929, "num_filter_entries": 929, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772027869, "oldest_key_time": 1772027869, "file_creation_time": 1772028100, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 211, "seqno_to_time_mapping": "N/A"}}
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 131] Flush lasted 25798 microseconds, and 8725 cpu microseconds.
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.670120) [db/flush_job.cc:967] [default] [JOB 131] Level-0 flush table #211: 3420288 bytes OK
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.670148) [db/memtable_list.cc:519] [default] Level-0 commit table #211 started
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.672042) [db/memtable_list.cc:722] [default] Level-0 commit table #211: memtable #1 done
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.672063) EVENT_LOG_v1 {"time_micros": 1772028100672056, "job": 131, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.672088) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 131] Try to delete WAL files size 3479313, prev total WAL file size 3521005, number of live WAL files 2.
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000207.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.673071) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038373835' seq:72057594037927935, type:22 .. '7061786F730039303337' seq:0, type:0; will stop at (end)
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 132] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 131 Base level 0, inputs: [211(3340KB)], [209(10MB)]
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028100673162, "job": 132, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [211], "files_L6": [209], "score": -1, "input_data_size": 13971468, "oldest_snapshot_seqno": -1}
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 132] Generated table #212: 10242 keys, 12154253 bytes, temperature: kUnknown
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028100768569, "cf_name": "default", "job": 132, "event": "table_file_creation", "file_number": 212, "file_size": 12154253, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12090510, "index_size": 37006, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25669, "raw_key_size": 268875, "raw_average_key_size": 26, "raw_value_size": 11912164, "raw_average_value_size": 1163, "num_data_blocks": 1426, "num_entries": 10242, "num_filter_entries": 10242, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772028100, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 212, "seqno_to_time_mapping": "N/A"}}
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.769539) [db/compaction/compaction_job.cc:1663] [default] [JOB 132] Compacted 1@0 + 1@6 files to L6 => 12154253 bytes
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.771259) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.3 rd, 126.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 10.1 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(7.6) write-amplify(3.6) OK, records in: 10756, records dropped: 514 output_compression: NoCompression
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.771289) EVENT_LOG_v1 {"time_micros": 1772028100771275, "job": 132, "event": "compaction_finished", "compaction_time_micros": 96149, "compaction_time_cpu_micros": 48679, "output_level": 6, "num_output_files": 1, "total_output_size": 12154253, "num_input_records": 10756, "num_output_records": 10242, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000211.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028100771934, "job": 132, "event": "table_file_deletion", "file_number": 211}
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000209.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028100773518, "job": 132, "event": "table_file_deletion", "file_number": 209}
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.672919) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.773601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.773606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.773608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.773610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:01:40 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:01:40.773612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
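The ceph-mon lines above carry RocksDB's structured event log inline: each EVENT_LOG_v1 marker is followed by a single JSON object (flush_started, table_file_creation, compaction_finished, and so on). A sketch for recovering those objects for offline analysis, assuming the layout shown above:

    import json

    MARKER = "EVENT_LOG_v1 "

    def rocksdb_events(lines):
        """Yield the JSON payload of every EVENT_LOG_v1 entry."""
        for line in lines:
            idx = line.find(MARKER)
            if idx == -1:
                continue
            # Everything after the marker is one JSON object.
            yield json.loads(line[idx + len(MARKER):])

    # Example: pull the output size of each finished compaction,
    # e.g. 12154253 bytes for job 132 above.
    # sizes = [e["total_output_size"] for e in rocksdb_events(lines)
    #          if e.get("event") == "compaction_finished"]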
Feb 25 09:01:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4253: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:01:41 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:01:41 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:01:41 np0005629333 nova_compute[244014]: 2026-02-25 14:01:41.861 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:01:41 np0005629333 nova_compute[244014]: 2026-02-25 14:01:41.895 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:01:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4254: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:01:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 09:01:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:01:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 09:01:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:01:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 09:01:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:01:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:01:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:01:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:01:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:01:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 09:01:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:01:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 09:01:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:01:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:01:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:01:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 09:01:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:01:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 09:01:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:01:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:01:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:01:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
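The pg_autoscaler output is internally consistent: each "pg target" equals the pool's capacity ratio times its bias times 300, which fits mon_target_pg_per_osd at the Ceph default of 100 across the three OSDs listed earlier (the per-OSD default is an assumption; only the ratios, biases, and targets appear in the log). A quick cross-check:

    # Reproducing the pg_autoscaler targets printed above. The 300 is
    # an assumption: mon_target_pg_per_osd (Ceph default 100) times
    # the three OSDs (ids 0-2) seen earlier in this log.
    TARGET_PGS = 100 * 3

    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "images":             (0.0006714637386478266, 1.0),
        "cephfs.cephfs.meta": (1.3916366864300228e-06, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(f"{name}: pg target {ratio * bias * TARGET_PGS}")
    # .mgr               ~0.0021557249951162337 -> quantized to 1
    # images             ~0.20143912159434796   -> quantized to 32
    # cephfs.cephfs.meta ~0.0016699640237160273 -> quantized to 16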
Feb 25 09:01:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4255: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:01:44 np0005629333 nova_compute[244014]: 2026-02-25 14:01:44.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:01:44 np0005629333 nova_compute[244014]: 2026-02-25 14:01:44.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:01:44 np0005629333 nova_compute[244014]: 2026-02-25 14:01:44.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 09:01:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:01:45 np0005629333 nova_compute[244014]: 2026-02-25 14:01:45.637 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:01:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4256: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:01:46 np0005629333 nova_compute[244014]: 2026-02-25 14:01:46.893 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:01:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 09:01:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2802048590' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 09:01:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 09:01:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2802048590' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 09:01:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4257: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:01:49 np0005629333 nova_compute[244014]: 2026-02-25 14:01:49.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:01:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:01:50 np0005629333 nova_compute[244014]: 2026-02-25 14:01:50.659 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:01:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4258: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:01:50 np0005629333 nova_compute[244014]: 2026-02-25 14:01:50.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:01:51 np0005629333 nova_compute[244014]: 2026-02-25 14:01:51.899 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:01:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4259: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:01:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4260: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:01:54 np0005629333 nova_compute[244014]: 2026-02-25 14:01:54.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:01:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:01:55.106 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 09:01:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:01:55.106 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 09:01:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:01:55.107 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
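The acquire/acquired/released triplet above is the standard oslo.concurrency trace: neutron's ProcessMonitor runs _check_child_processes under a named lock, and the wrapper logs how long it waited for and then held the lock. A minimal sketch of the same pattern (the decorator form is an assumption about neutron's exact usage; only the lock name appears in the log):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # Runs with the named lock held; the lockutils wrapper emits
        # the DEBUG "Acquiring lock" / "acquired ... waited" /
        # "released ... held" lines seen above.
        pass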
Feb 25 09:01:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:01:55 np0005629333 nova_compute[244014]: 2026-02-25 14:01:55.663 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:01:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4261: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:01:56 np0005629333 nova_compute[244014]: 2026-02-25 14:01:56.899 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:01:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4262: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:02:00 np0005629333 nova_compute[244014]: 2026-02-25 14:02:00.667 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:02:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4263: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:02:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:02:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:02:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:02:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:02:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:02:01 np0005629333 nova_compute[244014]: 2026-02-25 14:02:01.902 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:02:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4264: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:04 np0005629333 podman[427627]: 2026-02-25 14:02:04.73253724 +0000 UTC m=+0.069772032 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 25 09:02:04 np0005629333 podman[427628]: 2026-02-25 14:02:04.777645521 +0000 UTC m=+0.114551473 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
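[editor's note] The two podman lines above are periodic healthcheck events: podman's healthcheck timer runs each container's configured 'test' command (/openstack/healthcheck, per the config_data shown) and records health_status=healthy with a failing streak of 0. A minimal sketch of reproducing that check by hand, assuming podman on PATH and the container names from the log; the inspect field path can differ across podman versions:

import subprocess

# Container names taken from the log lines above; this is a hand-rolled spot
# check, not the exact command the healthcheck timer executes.
for ctr in ("ovn_metadata_agent", "ovn_controller"):
    run = subprocess.run(["podman", "healthcheck", "run", ctr])  # exit 0 == healthy
    status = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", ctr],
        capture_output=True, text=True,
    )
    print(f"{ctr}: healthcheck exit={run.returncode}, status={status.stdout.strip()}")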
Feb 25 09:02:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4265: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:02:05 np0005629333 nova_compute[244014]: 2026-02-25 14:02:05.670 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:02:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4266: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:06 np0005629333 nova_compute[244014]: 2026-02-25 14:02:06.903 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:02:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4267: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:02:10 np0005629333 nova_compute[244014]: 2026-02-25 14:02:10.674 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:02:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4268: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:11 np0005629333 nova_compute[244014]: 2026-02-25 14:02:11.906 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:02:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4269: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4270: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #213. Immutable memtables: 0.
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.416912) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 133] Flushing memtable with next log file: 213
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028135416971, "job": 133, "event": "flush_started", "num_memtables": 1, "num_entries": 516, "num_deletes": 255, "total_data_size": 562281, "memory_usage": 573272, "flush_reason": "Manual Compaction"}
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 133] Level-0 flush table #214: started
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028135422014, "cf_name": "default", "job": 133, "event": "table_file_creation", "file_number": 214, "file_size": 558024, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 88388, "largest_seqno": 88903, "table_properties": {"data_size": 555104, "index_size": 955, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6546, "raw_average_key_size": 18, "raw_value_size": 549293, "raw_average_value_size": 1534, "num_data_blocks": 44, "num_entries": 358, "num_filter_entries": 358, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772028100, "oldest_key_time": 1772028100, "file_creation_time": 1772028135, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 214, "seqno_to_time_mapping": "N/A"}}
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 133] Flush lasted 5188 microseconds, and 2765 cpu microseconds.
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.422097) [db/flush_job.cc:967] [default] [JOB 133] Level-0 flush table #214: 558024 bytes OK
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.422122) [db/memtable_list.cc:519] [default] Level-0 commit table #214 started
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.423918) [db/memtable_list.cc:722] [default] Level-0 commit table #214: memtable #1 done
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.423938) EVENT_LOG_v1 {"time_micros": 1772028135423931, "job": 133, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.423959) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 133] Try to delete WAL files size 559302, prev total WAL file size 559302, number of live WAL files 2.
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000210.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.424494) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303138' seq:72057594037927935, type:22 .. '6C6F676D0034323639' seq:0, type:0; will stop at (end)
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 134] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 133 Base level 0, inputs: [214(544KB)], [212(11MB)]
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028135424561, "job": 134, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [214], "files_L6": [212], "score": -1, "input_data_size": 12712277, "oldest_snapshot_seqno": -1}
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 134] Generated table #215: 10081 keys, 12612880 bytes, temperature: kUnknown
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028135551582, "cf_name": "default", "job": 134, "event": "table_file_creation", "file_number": 215, "file_size": 12612880, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12549062, "index_size": 37529, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25221, "raw_key_size": 266445, "raw_average_key_size": 26, "raw_value_size": 12372196, "raw_average_value_size": 1227, "num_data_blocks": 1447, "num_entries": 10081, "num_filter_entries": 10081, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772028135, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 215, "seqno_to_time_mapping": "N/A"}}
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.552103) [db/compaction/compaction_job.cc:1663] [default] [JOB 134] Compacted 1@0 + 1@6 files to L6 => 12612880 bytes
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.558263) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 100.1 rd, 99.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 11.6 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(45.4) write-amplify(22.6) OK, records in: 10600, records dropped: 519 output_compression: NoCompression
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.558300) EVENT_LOG_v1 {"time_micros": 1772028135558284, "job": 134, "event": "compaction_finished", "compaction_time_micros": 126988, "compaction_time_cpu_micros": 42246, "output_level": 6, "num_output_files": 1, "total_output_size": 12612880, "num_input_records": 10600, "num_output_records": 10081, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000214.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028135558609, "job": 134, "event": "table_file_deletion", "file_number": 214}
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000212.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028135561015, "job": 134, "event": "table_file_deletion", "file_number": 212}
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.424391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.561199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.561209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.561216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.561219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:02:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:02:15.561222) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
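[editor's note] The rocksdb block above is one manual-compaction cycle on the mon store: a memtable flush (job 133) produces L0 table #214, job 134 merges it with L6 table #212 into #215, both inputs are deleted, and the summary line reports write-amplify(22.6). The EVENT_LOG_v1 entries are plain JSON after the marker, so they can be mined straight from the journal. A minimal sketch, assuming journal text on stdin (e.g. piped from journalctl; the ceph-mon unit name is deployment-specific):

import json
import sys

MARKER = "EVENT_LOG_v1 "

# Pull the JSON payloads RocksDB logs for flush/compaction events, as seen in
# the ceph-mon lines above, and summarize finished compactions.
for line in sys.stdin:
    i = line.find(MARKER)
    if i < 0:
        continue
    event = json.loads(line[i + len(MARKER):])
    if event.get("event") == "compaction_finished":
        print(f"job {event['job']}: {event['num_input_records']} -> "
              f"{event['num_output_records']} records, "
              f"{event['total_output_size']} bytes in "
              f"{event['compaction_time_micros'] / 1e6:.3f}s")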
Feb 25 09:02:15 np0005629333 nova_compute[244014]: 2026-02-25 14:02:15.678 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:02:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4271: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:16 np0005629333 nova_compute[244014]: 2026-02-25 14:02:16.948 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:02:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4272: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:02:20 np0005629333 nova_compute[244014]: 2026-02-25 14:02:20.682 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:02:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4273: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:21 np0005629333 nova_compute[244014]: 2026-02-25 14:02:21.951 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:02:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4274: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4275: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:02:25 np0005629333 nova_compute[244014]: 2026-02-25 14:02:25.685 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:02:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4276: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:26 np0005629333 nova_compute[244014]: 2026-02-25 14:02:26.953 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:02:28 np0005629333 nova_compute[244014]: 2026-02-25 14:02:28.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:02:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4277: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:02:30 np0005629333 nova_compute[244014]: 2026-02-25 14:02:30.733 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:02:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4278: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:02:31
Feb 25 09:02:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 09:02:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 09:02:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['backups', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', '.mgr', 'vms']
Feb 25 09:02:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 09:02:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:02:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:02:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:02:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:02:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:02:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:02:31 np0005629333 nova_compute[244014]: 2026-02-25 14:02:31.986 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:02:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 09:02:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:02:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 09:02:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:02:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:02:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:02:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:02:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:02:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 09:02:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 09:02:32 np0005629333 nova_compute[244014]: 2026-02-25 14:02:32.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:02:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4279: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:32 np0005629333 nova_compute[244014]: 2026-02-25 14:02:32.910 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 09:02:32 np0005629333 nova_compute[244014]: 2026-02-25 14:02:32.911 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 09:02:32 np0005629333 nova_compute[244014]: 2026-02-25 14:02:32.911 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 09:02:32 np0005629333 nova_compute[244014]: 2026-02-25 14:02:32.911 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 09:02:32 np0005629333 nova_compute[244014]: 2026-02-25 14:02:32.912 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 09:02:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:02:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2077948250' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:02:33 np0005629333 nova_compute[244014]: 2026-02-25 14:02:33.509 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
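[editor's note] The resource audit shells out to ceph df (twice in this cycle: once here for the hypervisor view, and again below while refreshing placement inventory) to size the RBD-backed disk pool; each call shows up on the mon as a client.openstack audit dispatch. A hedged sketch of the same probe, with the command line copied from the log; the JSON field names ("stats", "total_bytes", "total_avail_bytes") match recent Ceph releases and are worth verifying against yours:

import json
import subprocess

# Same invocation the resource tracker logs above; requires the
# client.openstack keyring and /etc/ceph/ceph.conf on the host.
out = subprocess.check_output(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"]
)
stats = json.loads(out)["stats"]
print(f"cluster: {stats['total_avail_bytes'] / 2**30:.1f} GiB free of "
      f"{stats['total_bytes'] / 2**30:.1f} GiB")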
Feb 25 09:02:33 np0005629333 nova_compute[244014]: 2026-02-25 14:02:33.691 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 09:02:33 np0005629333 nova_compute[244014]: 2026-02-25 14:02:33.693 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3539MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 09:02:33 np0005629333 nova_compute[244014]: 2026-02-25 14:02:33.693 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 09:02:33 np0005629333 nova_compute[244014]: 2026-02-25 14:02:33.694 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 09:02:33 np0005629333 nova_compute[244014]: 2026-02-25 14:02:33.769 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 09:02:33 np0005629333 nova_compute[244014]: 2026-02-25 14:02:33.770 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 09:02:33 np0005629333 nova_compute[244014]: 2026-02-25 14:02:33.798 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 09:02:34 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:02:34 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1525895893' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:02:34 np0005629333 nova_compute[244014]: 2026-02-25 14:02:34.352 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 09:02:34 np0005629333 nova_compute[244014]: 2026-02-25 14:02:34.357 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 09:02:34 np0005629333 nova_compute[244014]: 2026-02-25 14:02:34.378 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 09:02:34 np0005629333 nova_compute[244014]: 2026-02-25 14:02:34.379 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 09:02:34 np0005629333 nova_compute[244014]: 2026-02-25 14:02:34.379 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
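[editor's note] The inventory the tracker reports as unchanged encodes Placement's capacity rule, allocatable = (total - reserved) * allocation_ratio: this host advertises 32 VCPU (8 x 4.0), 7167 MB of RAM ((7679 - 512) x 1.0) and 52.2 GB of disk ((59 - 1) x 0.9). A worked check using the exact numbers from the report.py line above; the formula is the standard Placement capacity calculation, not Nova source code:

# Inventory copied from the log line above.
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} allocatable")
# -> VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2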
Feb 25 09:02:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4280: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:02:35 np0005629333 nova_compute[244014]: 2026-02-25 14:02:35.736 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:02:35 np0005629333 podman[427715]: 2026-02-25 14:02:35.754535498 +0000 UTC m=+0.092351613 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Feb 25 09:02:35 np0005629333 podman[427716]: 2026-02-25 14:02:35.759100098 +0000 UTC m=+0.093137215 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 09:02:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4281: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:36 np0005629333 nova_compute[244014]: 2026-02-25 14:02:36.989 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:02:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4282: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:39 np0005629333 nova_compute[244014]: 2026-02-25 14:02:39.379 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:02:39 np0005629333 nova_compute[244014]: 2026-02-25 14:02:39.380 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 09:02:39 np0005629333 nova_compute[244014]: 2026-02-25 14:02:39.381 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 09:02:39 np0005629333 nova_compute[244014]: 2026-02-25 14:02:39.397 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 09:02:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:02:40 np0005629333 nova_compute[244014]: 2026-02-25 14:02:40.780 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:02:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4283: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 09:02:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 09:02:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 09:02:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 09:02:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 09:02:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:02:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 09:02:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 09:02:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 09:02:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 09:02:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 09:02:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 09:02:41 np0005629333 podman[427904]: 2026-02-25 14:02:41.875135699 +0000 UTC m=+0.059906931 container create a6a29c87b819d9d21b3e1c8617d32573dea5cb258d592b60a3ccbe338b4dd2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Feb 25 09:02:41 np0005629333 nova_compute[244014]: 2026-02-25 14:02:41.890 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:02:41 np0005629333 systemd[1]: Started libpod-conmon-a6a29c87b819d9d21b3e1c8617d32573dea5cb258d592b60a3ccbe338b4dd2c8.scope.
Feb 25 09:02:41 np0005629333 podman[427904]: 2026-02-25 14:02:41.849005078 +0000 UTC m=+0.033776320 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:02:41 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 09:02:41 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:02:41 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 09:02:41 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:02:41 np0005629333 podman[427904]: 2026-02-25 14:02:41.969936831 +0000 UTC m=+0.154708113 container init a6a29c87b819d9d21b3e1c8617d32573dea5cb258d592b60a3ccbe338b4dd2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:02:41 np0005629333 podman[427904]: 2026-02-25 14:02:41.97766984 +0000 UTC m=+0.162441072 container start a6a29c87b819d9d21b3e1c8617d32573dea5cb258d592b60a3ccbe338b4dd2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_noether, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:02:41 np0005629333 podman[427904]: 2026-02-25 14:02:41.981916711 +0000 UTC m=+0.166688013 container attach a6a29c87b819d9d21b3e1c8617d32573dea5cb258d592b60a3ccbe338b4dd2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_noether, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:02:41 np0005629333 vibrant_noether[427921]: 167 167
Feb 25 09:02:41 np0005629333 systemd[1]: libpod-a6a29c87b819d9d21b3e1c8617d32573dea5cb258d592b60a3ccbe338b4dd2c8.scope: Deactivated successfully.
Feb 25 09:02:41 np0005629333 podman[427904]: 2026-02-25 14:02:41.985394529 +0000 UTC m=+0.170165731 container died a6a29c87b819d9d21b3e1c8617d32573dea5cb258d592b60a3ccbe338b4dd2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 09:02:42 np0005629333 nova_compute[244014]: 2026-02-25 14:02:42.025 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:02:42 np0005629333 systemd[1]: var-lib-containers-storage-overlay-70179f65dde04122ca07c7dea08c3ff71101190d05847825cc07008007242d2a-merged.mount: Deactivated successfully.
Feb 25 09:02:42 np0005629333 podman[427904]: 2026-02-25 14:02:42.063885258 +0000 UTC m=+0.248656470 container remove a6a29c87b819d9d21b3e1c8617d32573dea5cb258d592b60a3ccbe338b4dd2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_noether, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 09:02:42 np0005629333 systemd[1]: libpod-conmon-a6a29c87b819d9d21b3e1c8617d32573dea5cb258d592b60a3ccbe338b4dd2c8.scope: Deactivated successfully.
Feb 25 09:02:42 np0005629333 podman[427944]: 2026-02-25 14:02:42.263523206 +0000 UTC m=+0.063142254 container create 2497bb40e5f27853a26619e3c9d974c5db248d1b60de5e7581aef94c7ac9cbc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:02:42 np0005629333 systemd[1]: Started libpod-conmon-2497bb40e5f27853a26619e3c9d974c5db248d1b60de5e7581aef94c7ac9cbc6.scope.
Feb 25 09:02:42 np0005629333 podman[427944]: 2026-02-25 14:02:42.240040729 +0000 UTC m=+0.039659817 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:02:42 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:02:42 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71bbb52c59c019c5b07a9c36a42567937c1afa63d1379d8ce7a7dd592da920a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:02:42 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71bbb52c59c019c5b07a9c36a42567937c1afa63d1379d8ce7a7dd592da920a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:02:42 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71bbb52c59c019c5b07a9c36a42567937c1afa63d1379d8ce7a7dd592da920a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:02:42 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71bbb52c59c019c5b07a9c36a42567937c1afa63d1379d8ce7a7dd592da920a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:02:42 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71bbb52c59c019c5b07a9c36a42567937c1afa63d1379d8ce7a7dd592da920a3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 09:02:42 np0005629333 podman[427944]: 2026-02-25 14:02:42.373446766 +0000 UTC m=+0.173065844 container init 2497bb40e5f27853a26619e3c9d974c5db248d1b60de5e7581aef94c7ac9cbc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 09:02:42 np0005629333 podman[427944]: 2026-02-25 14:02:42.387926637 +0000 UTC m=+0.187545675 container start 2497bb40e5f27853a26619e3c9d974c5db248d1b60de5e7581aef94c7ac9cbc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_turing, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 09:02:42 np0005629333 podman[427944]: 2026-02-25 14:02:42.393145305 +0000 UTC m=+0.192764323 container attach 2497bb40e5f27853a26619e3c9d974c5db248d1b60de5e7581aef94c7ac9cbc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_turing, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 09:02:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4284: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:42 np0005629333 stoic_turing[427960]: --> passed data devices: 0 physical, 3 LVM
Feb 25 09:02:42 np0005629333 stoic_turing[427960]: --> All data devices are unavailable
Feb 25 09:02:42 np0005629333 systemd[1]: libpod-2497bb40e5f27853a26619e3c9d974c5db248d1b60de5e7581aef94c7ac9cbc6.scope: Deactivated successfully.
Feb 25 09:02:42 np0005629333 podman[427944]: 2026-02-25 14:02:42.957554288 +0000 UTC m=+0.757173336 container died 2497bb40e5f27853a26619e3c9d974c5db248d1b60de5e7581aef94c7ac9cbc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_turing, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 09:02:42 np0005629333 systemd[1]: var-lib-containers-storage-overlay-71bbb52c59c019c5b07a9c36a42567937c1afa63d1379d8ce7a7dd592da920a3-merged.mount: Deactivated successfully.
Feb 25 09:02:43 np0005629333 podman[427944]: 2026-02-25 14:02:43.014795723 +0000 UTC m=+0.814414771 container remove 2497bb40e5f27853a26619e3c9d974c5db248d1b60de5e7581aef94c7ac9cbc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_turing, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 09:02:43 np0005629333 systemd[1]: libpod-conmon-2497bb40e5f27853a26619e3c9d974c5db248d1b60de5e7581aef94c7ac9cbc6.scope: Deactivated successfully.
Feb 25 09:02:43 np0005629333 podman[428055]: 2026-02-25 14:02:43.525604405 +0000 UTC m=+0.056743212 container create 0cfd6fe9ba056e150d18cdb1f31ba5bf12d32b823aff958f21e68d3a2c02f452 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wescoff, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 09:02:43 np0005629333 podman[428055]: 2026-02-25 14:02:43.495610523 +0000 UTC m=+0.026749380 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:02:43 np0005629333 systemd[1]: Started libpod-conmon-0cfd6fe9ba056e150d18cdb1f31ba5bf12d32b823aff958f21e68d3a2c02f452.scope.
Feb 25 09:02:43 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:02:43 np0005629333 podman[428055]: 2026-02-25 14:02:43.662503711 +0000 UTC m=+0.193642558 container init 0cfd6fe9ba056e150d18cdb1f31ba5bf12d32b823aff958f21e68d3a2c02f452 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wescoff, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:02:43 np0005629333 podman[428055]: 2026-02-25 14:02:43.672446023 +0000 UTC m=+0.203584820 container start 0cfd6fe9ba056e150d18cdb1f31ba5bf12d32b823aff958f21e68d3a2c02f452 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:02:43 np0005629333 podman[428055]: 2026-02-25 14:02:43.676932851 +0000 UTC m=+0.208071658 container attach 0cfd6fe9ba056e150d18cdb1f31ba5bf12d32b823aff958f21e68d3a2c02f452 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wescoff, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:02:43 np0005629333 fervent_wescoff[428071]: 167 167
Feb 25 09:02:43 np0005629333 systemd[1]: libpod-0cfd6fe9ba056e150d18cdb1f31ba5bf12d32b823aff958f21e68d3a2c02f452.scope: Deactivated successfully.
Feb 25 09:02:43 np0005629333 podman[428055]: 2026-02-25 14:02:43.679274437 +0000 UTC m=+0.210413244 container died 0cfd6fe9ba056e150d18cdb1f31ba5bf12d32b823aff958f21e68d3a2c02f452 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:02:43 np0005629333 systemd[1]: var-lib-containers-storage-overlay-34566f80873d5f1129e5382cb9f91ba1f83c4232846b2de163f381078ef9f9da-merged.mount: Deactivated successfully.
Feb 25 09:02:43 np0005629333 podman[428055]: 2026-02-25 14:02:43.723999057 +0000 UTC m=+0.255137864 container remove 0cfd6fe9ba056e150d18cdb1f31ba5bf12d32b823aff958f21e68d3a2c02f452 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wescoff, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Feb 25 09:02:43 np0005629333 systemd[1]: libpod-conmon-0cfd6fe9ba056e150d18cdb1f31ba5bf12d32b823aff958f21e68d3a2c02f452.scope: Deactivated successfully.
Feb 25 09:02:43 np0005629333 podman[428095]: 2026-02-25 14:02:43.887772516 +0000 UTC m=+0.040261454 container create c79c91df7d8e6ecec498f51b7702365138a1f12eda06ff2dc14323f5b29abd52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 09:02:43 np0005629333 systemd[1]: Started libpod-conmon-c79c91df7d8e6ecec498f51b7702365138a1f12eda06ff2dc14323f5b29abd52.scope.
Feb 25 09:02:43 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:02:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bce52767c03856a7c040ea6e78d871f81742fcdeda40e9d70919d1b7533eef20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:02:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bce52767c03856a7c040ea6e78d871f81742fcdeda40e9d70919d1b7533eef20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:02:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bce52767c03856a7c040ea6e78d871f81742fcdeda40e9d70919d1b7533eef20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:02:43 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bce52767c03856a7c040ea6e78d871f81742fcdeda40e9d70919d1b7533eef20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:02:43 np0005629333 podman[428095]: 2026-02-25 14:02:43.871451873 +0000 UTC m=+0.023940831 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:02:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 09:02:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:02:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 09:02:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:02:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 09:02:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:02:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:02:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:02:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:02:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:02:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 09:02:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:02:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 09:02:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:02:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:02:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:02:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 09:02:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:02:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 09:02:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:02:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:02:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:02:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 09:02:43 np0005629333 podman[428095]: 2026-02-25 14:02:43.983187845 +0000 UTC m=+0.135676803 container init c79c91df7d8e6ecec498f51b7702365138a1f12eda06ff2dc14323f5b29abd52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:02:43 np0005629333 podman[428095]: 2026-02-25 14:02:43.992101878 +0000 UTC m=+0.144590846 container start c79c91df7d8e6ecec498f51b7702365138a1f12eda06ff2dc14323f5b29abd52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_maxwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 09:02:43 np0005629333 podman[428095]: 2026-02-25 14:02:43.996514494 +0000 UTC m=+0.149003442 container attach c79c91df7d8e6ecec498f51b7702365138a1f12eda06ff2dc14323f5b29abd52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]: {
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:    "0": [
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:        {
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "devices": [
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "/dev/loop3"
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            ],
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "lv_name": "ceph_lv0",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "lv_size": "21470642176",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "name": "ceph_lv0",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "tags": {
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.cluster_name": "ceph",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.crush_device_class": "",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.encrypted": "0",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.objectstore": "bluestore",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.osd_id": "0",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.type": "block",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.vdo": "0",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.with_tpm": "0"
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            },
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "type": "block",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "vg_name": "ceph_vg0"
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:        }
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:    ],
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:    "1": [
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:        {
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "devices": [
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "/dev/loop4"
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            ],
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "lv_name": "ceph_lv1",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "lv_size": "21470642176",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "name": "ceph_lv1",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "tags": {
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.cluster_name": "ceph",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.crush_device_class": "",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.encrypted": "0",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.objectstore": "bluestore",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.osd_id": "1",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.type": "block",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.vdo": "0",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.with_tpm": "0"
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            },
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "type": "block",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "vg_name": "ceph_vg1"
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:        }
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:    ],
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:    "2": [
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:        {
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "devices": [
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "/dev/loop5"
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            ],
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "lv_name": "ceph_lv2",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "lv_size": "21470642176",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "name": "ceph_lv2",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "tags": {
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.cluster_name": "ceph",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.crush_device_class": "",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.encrypted": "0",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.objectstore": "bluestore",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.osd_id": "2",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.type": "block",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.vdo": "0",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:                "ceph.with_tpm": "0"
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            },
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "type": "block",
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:            "vg_name": "ceph_vg2"
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:        }
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]:    ]
Feb 25 09:02:44 np0005629333 funny_maxwell[428111]: }
Feb 25 09:02:44 np0005629333 systemd[1]: libpod-c79c91df7d8e6ecec498f51b7702365138a1f12eda06ff2dc14323f5b29abd52.scope: Deactivated successfully.
Feb 25 09:02:44 np0005629333 podman[428095]: 2026-02-25 14:02:44.30566779 +0000 UTC m=+0.458156728 container died c79c91df7d8e6ecec498f51b7702365138a1f12eda06ff2dc14323f5b29abd52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_maxwell, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:02:44 np0005629333 systemd[1]: var-lib-containers-storage-overlay-bce52767c03856a7c040ea6e78d871f81742fcdeda40e9d70919d1b7533eef20-merged.mount: Deactivated successfully.
Feb 25 09:02:44 np0005629333 podman[428095]: 2026-02-25 14:02:44.35038763 +0000 UTC m=+0.502876598 container remove c79c91df7d8e6ecec498f51b7702365138a1f12eda06ff2dc14323f5b29abd52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 09:02:44 np0005629333 systemd[1]: libpod-conmon-c79c91df7d8e6ecec498f51b7702365138a1f12eda06ff2dc14323f5b29abd52.scope: Deactivated successfully.
Feb 25 09:02:44 np0005629333 podman[428194]: 2026-02-25 14:02:44.860647706 +0000 UTC m=+0.066743606 container create cfa022e2fabda78771c1809cf73ab4dfb41bb83199dfcee9015c9a4320643063 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_brown, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:02:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4285: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:44 np0005629333 systemd[1]: Started libpod-conmon-cfa022e2fabda78771c1809cf73ab4dfb41bb83199dfcee9015c9a4320643063.scope.
Feb 25 09:02:44 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:02:44 np0005629333 podman[428194]: 2026-02-25 14:02:44.833589458 +0000 UTC m=+0.039685388 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:02:44 np0005629333 podman[428194]: 2026-02-25 14:02:44.930309874 +0000 UTC m=+0.136405784 container init cfa022e2fabda78771c1809cf73ab4dfb41bb83199dfcee9015c9a4320643063 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_brown, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:02:44 np0005629333 podman[428194]: 2026-02-25 14:02:44.938420764 +0000 UTC m=+0.144516664 container start cfa022e2fabda78771c1809cf73ab4dfb41bb83199dfcee9015c9a4320643063 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_brown, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 09:02:44 np0005629333 podman[428194]: 2026-02-25 14:02:44.942303674 +0000 UTC m=+0.148399584 container attach cfa022e2fabda78771c1809cf73ab4dfb41bb83199dfcee9015c9a4320643063 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_brown, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 09:02:44 np0005629333 great_brown[428210]: 167 167
Feb 25 09:02:44 np0005629333 systemd[1]: libpod-cfa022e2fabda78771c1809cf73ab4dfb41bb83199dfcee9015c9a4320643063.scope: Deactivated successfully.
Feb 25 09:02:44 np0005629333 podman[428194]: 2026-02-25 14:02:44.945864205 +0000 UTC m=+0.151960105 container died cfa022e2fabda78771c1809cf73ab4dfb41bb83199dfcee9015c9a4320643063 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_brown, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Feb 25 09:02:44 np0005629333 systemd[1]: var-lib-containers-storage-overlay-877e8e827b29b56a690ad15144456b05cc766b37a04bc6701460efe948a43148-merged.mount: Deactivated successfully.
Feb 25 09:02:44 np0005629333 podman[428194]: 2026-02-25 14:02:44.989788273 +0000 UTC m=+0.195884133 container remove cfa022e2fabda78771c1809cf73ab4dfb41bb83199dfcee9015c9a4320643063 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:02:45 np0005629333 systemd[1]: libpod-conmon-cfa022e2fabda78771c1809cf73ab4dfb41bb83199dfcee9015c9a4320643063.scope: Deactivated successfully.
Feb 25 09:02:45 np0005629333 podman[428234]: 2026-02-25 14:02:45.164865443 +0000 UTC m=+0.063642648 container create be1a9c84ddcae4b8c346599ee57b02f8a9925143a48b79251fbcfa4e6a950a4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 09:02:45 np0005629333 systemd[1]: Started libpod-conmon-be1a9c84ddcae4b8c346599ee57b02f8a9925143a48b79251fbcfa4e6a950a4a.scope.
Feb 25 09:02:45 np0005629333 podman[428234]: 2026-02-25 14:02:45.134564493 +0000 UTC m=+0.033341738 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:02:45 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:02:45 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4d5e3c82da8db39792419a9846055ce54af9e9099c58b277274c5940deead3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:02:45 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4d5e3c82da8db39792419a9846055ce54af9e9099c58b277274c5940deead3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:02:45 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4d5e3c82da8db39792419a9846055ce54af9e9099c58b277274c5940deead3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:02:45 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4d5e3c82da8db39792419a9846055ce54af9e9099c58b277274c5940deead3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:02:45 np0005629333 podman[428234]: 2026-02-25 14:02:45.265379366 +0000 UTC m=+0.164156551 container init be1a9c84ddcae4b8c346599ee57b02f8a9925143a48b79251fbcfa4e6a950a4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_banzai, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:02:45 np0005629333 podman[428234]: 2026-02-25 14:02:45.277236513 +0000 UTC m=+0.176013708 container start be1a9c84ddcae4b8c346599ee57b02f8a9925143a48b79251fbcfa4e6a950a4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_banzai, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 09:02:45 np0005629333 podman[428234]: 2026-02-25 14:02:45.281847714 +0000 UTC m=+0.180624879 container attach be1a9c84ddcae4b8c346599ee57b02f8a9925143a48b79251fbcfa4e6a950a4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 09:02:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:02:45 np0005629333 nova_compute[244014]: 2026-02-25 14:02:45.821 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:02:45 np0005629333 nova_compute[244014]: 2026-02-25 14:02:45.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:02:45 np0005629333 lvm[428329]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 09:02:46 np0005629333 lvm[428329]: VG ceph_vg0 finished
Feb 25 09:02:46 np0005629333 lvm[428330]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 09:02:46 np0005629333 lvm[428330]: VG ceph_vg1 finished
Feb 25 09:02:46 np0005629333 lvm[428332]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 09:02:46 np0005629333 lvm[428332]: VG ceph_vg2 finished
Feb 25 09:02:46 np0005629333 amazing_banzai[428251]: {}
Feb 25 09:02:46 np0005629333 systemd[1]: libpod-be1a9c84ddcae4b8c346599ee57b02f8a9925143a48b79251fbcfa4e6a950a4a.scope: Deactivated successfully.
Feb 25 09:02:46 np0005629333 systemd[1]: libpod-be1a9c84ddcae4b8c346599ee57b02f8a9925143a48b79251fbcfa4e6a950a4a.scope: Consumed 1.214s CPU time.
Feb 25 09:02:46 np0005629333 podman[428234]: 2026-02-25 14:02:46.198896288 +0000 UTC m=+1.097673483 container died be1a9c84ddcae4b8c346599ee57b02f8a9925143a48b79251fbcfa4e6a950a4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 09:02:46 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d4d5e3c82da8db39792419a9846055ce54af9e9099c58b277274c5940deead3f-merged.mount: Deactivated successfully.
Feb 25 09:02:46 np0005629333 podman[428234]: 2026-02-25 14:02:46.253844578 +0000 UTC m=+1.152621773 container remove be1a9c84ddcae4b8c346599ee57b02f8a9925143a48b79251fbcfa4e6a950a4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 09:02:46 np0005629333 systemd[1]: libpod-conmon-be1a9c84ddcae4b8c346599ee57b02f8a9925143a48b79251fbcfa4e6a950a4a.scope: Deactivated successfully.
Feb 25 09:02:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 09:02:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:02:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 09:02:46 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:02:46 np0005629333 nova_compute[244014]: 2026-02-25 14:02:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:02:46 np0005629333 nova_compute[244014]: 2026-02-25 14:02:46.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 09:02:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4286: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:47 np0005629333 nova_compute[244014]: 2026-02-25 14:02:47.050 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:02:47 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:02:47 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:02:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 09:02:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3576130475' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 09:02:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 09:02:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3576130475' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 09:02:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4287: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:02:50 np0005629333 nova_compute[244014]: 2026-02-25 14:02:50.825 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:02:50 np0005629333 nova_compute[244014]: 2026-02-25 14:02:50.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:02:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4288: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:52 np0005629333 nova_compute[244014]: 2026-02-25 14:02:52.051 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:02:52 np0005629333 nova_compute[244014]: 2026-02-25 14:02:52.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:02:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4289: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4290: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:02:55.108 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 09:02:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:02:55.108 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 09:02:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:02:55.108 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 09:02:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:02:55 np0005629333 nova_compute[244014]: 2026-02-25 14:02:55.857 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:02:55 np0005629333 nova_compute[244014]: 2026-02-25 14:02:55.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:02:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4291: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:02:57 np0005629333 nova_compute[244014]: 2026-02-25 14:02:57.053 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:02:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4292: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:03:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4293: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:00 np0005629333 nova_compute[244014]: 2026-02-25 14:03:00.894 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:03:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:03:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:03:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:03:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:03:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:03:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:03:02 np0005629333 nova_compute[244014]: 2026-02-25 14:03:02.083 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:03:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4294: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4295: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:03:05 np0005629333 nova_compute[244014]: 2026-02-25 14:03:05.939 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:03:06 np0005629333 podman[428375]: 2026-02-25 14:03:06.761683126 +0000 UTC m=+0.080825106 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 09:03:06 np0005629333 podman[428376]: 2026-02-25 14:03:06.807879657 +0000 UTC m=+0.126517243 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:03:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4296: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:07 np0005629333 nova_compute[244014]: 2026-02-25 14:03:07.085 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:03:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4297: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:03:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4298: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:10 np0005629333 nova_compute[244014]: 2026-02-25 14:03:10.944 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:03:12 np0005629333 nova_compute[244014]: 2026-02-25 14:03:12.116 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:03:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4299: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4300: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:03:15 np0005629333 nova_compute[244014]: 2026-02-25 14:03:15.987 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:03:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4301: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:17 np0005629333 nova_compute[244014]: 2026-02-25 14:03:17.118 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:03:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4302: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:03:20 np0005629333 nova_compute[244014]: 2026-02-25 14:03:20.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:03:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4303: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:20 np0005629333 nova_compute[244014]: 2026-02-25 14:03:20.991 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:03:22 np0005629333 nova_compute[244014]: 2026-02-25 14:03:22.122 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:03:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4304: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4305: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:03:26 np0005629333 nova_compute[244014]: 2026-02-25 14:03:26.036 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:03:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4306: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:27 np0005629333 nova_compute[244014]: 2026-02-25 14:03:27.158 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:03:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4307: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #216. Immutable memtables: 0.
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.436766) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 135] Flushing memtable with next log file: 216
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028210436824, "job": 135, "event": "flush_started", "num_memtables": 1, "num_entries": 827, "num_deletes": 250, "total_data_size": 1115255, "memory_usage": 1139920, "flush_reason": "Manual Compaction"}
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 135] Level-0 flush table #217: started
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028210444155, "cf_name": "default", "job": 135, "event": "table_file_creation", "file_number": 217, "file_size": 692333, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 88904, "largest_seqno": 89730, "table_properties": {"data_size": 688888, "index_size": 1224, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9141, "raw_average_key_size": 20, "raw_value_size": 681594, "raw_average_value_size": 1538, "num_data_blocks": 55, "num_entries": 443, "num_filter_entries": 443, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772028135, "oldest_key_time": 1772028135, "file_creation_time": 1772028210, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 217, "seqno_to_time_mapping": "N/A"}}
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 135] Flush lasted 7457 microseconds, and 3577 cpu microseconds.
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.444219) [db/flush_job.cc:967] [default] [JOB 135] Level-0 flush table #217: 692333 bytes OK
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.444246) [db/memtable_list.cc:519] [default] Level-0 commit table #217 started
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.446471) [db/memtable_list.cc:722] [default] Level-0 commit table #217: memtable #1 done
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.446493) EVENT_LOG_v1 {"time_micros": 1772028210446486, "job": 135, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.446522) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 135] Try to delete WAL files size 1111160, prev total WAL file size 1111160, number of live WAL files 2.
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000213.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.447106) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033373533' seq:72057594037927935, type:22 .. '6D6772737461740034303034' seq:0, type:0; will stop at (end)
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 136] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 135 Base level 0, inputs: [217(676KB)], [215(12MB)]
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028210447145, "job": 136, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [217], "files_L6": [215], "score": -1, "input_data_size": 13305213, "oldest_snapshot_seqno": -1}
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 136] Generated table #218: 10043 keys, 10377031 bytes, temperature: kUnknown
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028210539452, "cf_name": "default", "job": 136, "event": "table_file_creation", "file_number": 218, "file_size": 10377031, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10317302, "index_size": 33504, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25157, "raw_key_size": 265813, "raw_average_key_size": 26, "raw_value_size": 10144952, "raw_average_value_size": 1010, "num_data_blocks": 1281, "num_entries": 10043, "num_filter_entries": 10043, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772028210, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 218, "seqno_to_time_mapping": "N/A"}}
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.540224) [db/compaction/compaction_job.cc:1663] [default] [JOB 136] Compacted 1@0 + 1@6 files to L6 => 10377031 bytes
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.542188) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.4 rd, 111.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 12.0 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(34.2) write-amplify(15.0) OK, records in: 10524, records dropped: 481 output_compression: NoCompression
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.542224) EVENT_LOG_v1 {"time_micros": 1772028210542207, "job": 136, "event": "compaction_finished", "compaction_time_micros": 92761, "compaction_time_cpu_micros": 39140, "output_level": 6, "num_output_files": 1, "total_output_size": 10377031, "num_input_records": 10524, "num_output_records": 10043, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000217.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028210543053, "job": 136, "event": "table_file_deletion", "file_number": 217}
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000215.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028210545952, "job": 136, "event": "table_file_deletion", "file_number": 215}
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.447028) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.546130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.546136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.546138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.546140) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:03:30 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:30.546142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:03:30 np0005629333 nova_compute[244014]: 2026-02-25 14:03:30.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:03:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4308: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:31 np0005629333 nova_compute[244014]: 2026-02-25 14:03:31.039 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:03:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:03:31
Feb 25 09:03:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 09:03:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 09:03:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['images', 'vms', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'volumes', 'default.rgw.log']
Feb 25 09:03:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 09:03:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:03:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:03:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:03:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:03:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:03:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:03:32 np0005629333 nova_compute[244014]: 2026-02-25 14:03:32.169 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:03:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 09:03:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:03:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 09:03:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:03:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:03:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:03:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:03:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:03:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 09:03:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 09:03:32 np0005629333 nova_compute[244014]: 2026-02-25 14:03:32.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:03:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4309: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:32 np0005629333 nova_compute[244014]: 2026-02-25 14:03:32.917 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 09:03:32 np0005629333 nova_compute[244014]: 2026-02-25 14:03:32.917 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 09:03:32 np0005629333 nova_compute[244014]: 2026-02-25 14:03:32.918 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 09:03:32 np0005629333 nova_compute[244014]: 2026-02-25 14:03:32.918 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 09:03:32 np0005629333 nova_compute[244014]: 2026-02-25 14:03:32.919 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 09:03:33 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:03:33 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4220057394' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:03:33 np0005629333 nova_compute[244014]: 2026-02-25 14:03:33.524 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 09:03:33 np0005629333 nova_compute[244014]: 2026-02-25 14:03:33.712 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 09:03:33 np0005629333 nova_compute[244014]: 2026-02-25 14:03:33.713 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3544MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 09:03:33 np0005629333 nova_compute[244014]: 2026-02-25 14:03:33.714 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 09:03:33 np0005629333 nova_compute[244014]: 2026-02-25 14:03:33.714 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 09:03:34 np0005629333 nova_compute[244014]: 2026-02-25 14:03:34.419 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 09:03:34 np0005629333 nova_compute[244014]: 2026-02-25 14:03:34.420 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 09:03:34 np0005629333 nova_compute[244014]: 2026-02-25 14:03:34.746 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 09:03:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4310: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:03:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2329039876' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:03:35 np0005629333 nova_compute[244014]: 2026-02-25 14:03:35.302 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 09:03:35 np0005629333 nova_compute[244014]: 2026-02-25 14:03:35.308 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 09:03:35 np0005629333 nova_compute[244014]: 2026-02-25 14:03:35.326 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 09:03:35 np0005629333 nova_compute[244014]: 2026-02-25 14:03:35.328 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 09:03:35 np0005629333 nova_compute[244014]: 2026-02-25 14:03:35.328 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 09:03:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:03:36 np0005629333 nova_compute[244014]: 2026-02-25 14:03:36.044 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:03:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4311: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:37 np0005629333 nova_compute[244014]: 2026-02-25 14:03:37.170 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:03:37 np0005629333 podman[428463]: 2026-02-25 14:03:37.716052311 +0000 UTC m=+0.059206021 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Feb 25 09:03:37 np0005629333 podman[428464]: 2026-02-25 14:03:37.8008753 +0000 UTC m=+0.142051964 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Feb 25 09:03:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4312: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:03:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4313: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:41 np0005629333 nova_compute[244014]: 2026-02-25 14:03:41.048 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:03:41 np0005629333 nova_compute[244014]: 2026-02-25 14:03:41.328 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:03:41 np0005629333 nova_compute[244014]: 2026-02-25 14:03:41.328 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 09:03:41 np0005629333 nova_compute[244014]: 2026-02-25 14:03:41.329 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 09:03:41 np0005629333 nova_compute[244014]: 2026-02-25 14:03:41.343 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 09:03:41 np0005629333 nova_compute[244014]: 2026-02-25 14:03:41.886 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:03:42 np0005629333 nova_compute[244014]: 2026-02-25 14:03:42.172 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:03:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4314: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 09:03:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:03:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 09:03:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:03:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 09:03:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:03:43 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:03:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:03:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:03:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:03:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 09:03:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:03:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 09:03:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:03:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:03:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:03:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 09:03:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:03:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 09:03:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:03:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:03:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:03:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 09:03:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4315: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:03:45 np0005629333 nova_compute[244014]: 2026-02-25 14:03:45.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:03:46 np0005629333 nova_compute[244014]: 2026-02-25 14:03:46.094 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:03:46 np0005629333 nova_compute[244014]: 2026-02-25 14:03:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:03:46 np0005629333 nova_compute[244014]: 2026-02-25 14:03:46.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 09:03:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4316: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 09:03:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 09:03:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 09:03:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 09:03:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 09:03:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:03:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 09:03:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 09:03:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 09:03:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 09:03:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 09:03:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 09:03:47 np0005629333 nova_compute[244014]: 2026-02-25 14:03:47.210 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:03:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 09:03:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3940397430' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 09:03:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 09:03:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3940397430' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 09:03:47 np0005629333 podman[428649]: 2026-02-25 14:03:47.617720535 +0000 UTC m=+0.062560957 container create 06208ddcf37e0b13187d914f869fd75345be256f783a1d238f45c1d79161b298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_spence, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 09:03:47 np0005629333 systemd[1]: Started libpod-conmon-06208ddcf37e0b13187d914f869fd75345be256f783a1d238f45c1d79161b298.scope.
Feb 25 09:03:47 np0005629333 podman[428649]: 2026-02-25 14:03:47.590184013 +0000 UTC m=+0.035024445 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:03:47 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:03:47 np0005629333 podman[428649]: 2026-02-25 14:03:47.719317519 +0000 UTC m=+0.164157941 container init 06208ddcf37e0b13187d914f869fd75345be256f783a1d238f45c1d79161b298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_spence, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:03:47 np0005629333 podman[428649]: 2026-02-25 14:03:47.730322061 +0000 UTC m=+0.175162453 container start 06208ddcf37e0b13187d914f869fd75345be256f783a1d238f45c1d79161b298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_spence, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:03:47 np0005629333 podman[428649]: 2026-02-25 14:03:47.73414029 +0000 UTC m=+0.178980772 container attach 06208ddcf37e0b13187d914f869fd75345be256f783a1d238f45c1d79161b298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_spence, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:03:47 np0005629333 eloquent_spence[428666]: 167 167
Feb 25 09:03:47 np0005629333 systemd[1]: libpod-06208ddcf37e0b13187d914f869fd75345be256f783a1d238f45c1d79161b298.scope: Deactivated successfully.
Feb 25 09:03:47 np0005629333 conmon[428666]: conmon 06208ddcf37e0b13187d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-06208ddcf37e0b13187d914f869fd75345be256f783a1d238f45c1d79161b298.scope/container/memory.events
Feb 25 09:03:47 np0005629333 podman[428649]: 2026-02-25 14:03:47.74294374 +0000 UTC m=+0.187784162 container died 06208ddcf37e0b13187d914f869fd75345be256f783a1d238f45c1d79161b298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 09:03:47 np0005629333 systemd[1]: var-lib-containers-storage-overlay-84b8c6019347b653f6c3cfa0ab5844572f9408f6e26dc43ca3af642b21103121-merged.mount: Deactivated successfully.
Feb 25 09:03:47 np0005629333 podman[428649]: 2026-02-25 14:03:47.7936747 +0000 UTC m=+0.238515082 container remove 06208ddcf37e0b13187d914f869fd75345be256f783a1d238f45c1d79161b298 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 09:03:47 np0005629333 systemd[1]: libpod-conmon-06208ddcf37e0b13187d914f869fd75345be256f783a1d238f45c1d79161b298.scope: Deactivated successfully.
Feb 25 09:03:48 np0005629333 podman[428688]: 2026-02-25 14:03:48.008784717 +0000 UTC m=+0.070032519 container create e44ea48606dd5b2c31ef0385f1203578853487ed2be98848a5792436e7ac803e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:03:48 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 09:03:48 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:03:48 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 09:03:48 np0005629333 systemd[1]: Started libpod-conmon-e44ea48606dd5b2c31ef0385f1203578853487ed2be98848a5792436e7ac803e.scope.
Feb 25 09:03:48 np0005629333 podman[428688]: 2026-02-25 14:03:47.980415701 +0000 UTC m=+0.041663573 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:03:48 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:03:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681dcc22768285c1bb0ba1ceebd10e3ca932fe7551ddbdcbd7e35b4754c660f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:03:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681dcc22768285c1bb0ba1ceebd10e3ca932fe7551ddbdcbd7e35b4754c660f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:03:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681dcc22768285c1bb0ba1ceebd10e3ca932fe7551ddbdcbd7e35b4754c660f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:03:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681dcc22768285c1bb0ba1ceebd10e3ca932fe7551ddbdcbd7e35b4754c660f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:03:48 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681dcc22768285c1bb0ba1ceebd10e3ca932fe7551ddbdcbd7e35b4754c660f4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 09:03:48 np0005629333 podman[428688]: 2026-02-25 14:03:48.118836211 +0000 UTC m=+0.180084073 container init e44ea48606dd5b2c31ef0385f1203578853487ed2be98848a5792436e7ac803e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 09:03:48 np0005629333 podman[428688]: 2026-02-25 14:03:48.135370021 +0000 UTC m=+0.196617803 container start e44ea48606dd5b2c31ef0385f1203578853487ed2be98848a5792436e7ac803e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 09:03:48 np0005629333 podman[428688]: 2026-02-25 14:03:48.139880849 +0000 UTC m=+0.201128701 container attach e44ea48606dd5b2c31ef0385f1203578853487ed2be98848a5792436e7ac803e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:03:48 np0005629333 flamboyant_booth[428704]: --> passed data devices: 0 physical, 3 LVM
Feb 25 09:03:48 np0005629333 flamboyant_booth[428704]: --> All data devices are unavailable
Feb 25 09:03:48 np0005629333 systemd[1]: libpod-e44ea48606dd5b2c31ef0385f1203578853487ed2be98848a5792436e7ac803e.scope: Deactivated successfully.
Feb 25 09:03:48 np0005629333 podman[428688]: 2026-02-25 14:03:48.595469213 +0000 UTC m=+0.656717025 container died e44ea48606dd5b2c31ef0385f1203578853487ed2be98848a5792436e7ac803e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 09:03:48 np0005629333 systemd[1]: var-lib-containers-storage-overlay-681dcc22768285c1bb0ba1ceebd10e3ca932fe7551ddbdcbd7e35b4754c660f4-merged.mount: Deactivated successfully.
Feb 25 09:03:48 np0005629333 podman[428688]: 2026-02-25 14:03:48.649166247 +0000 UTC m=+0.710414059 container remove e44ea48606dd5b2c31ef0385f1203578853487ed2be98848a5792436e7ac803e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_booth, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 09:03:48 np0005629333 systemd[1]: libpod-conmon-e44ea48606dd5b2c31ef0385f1203578853487ed2be98848a5792436e7ac803e.scope: Deactivated successfully.
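The flamboyant_booth run above is the one-shot helper pattern cephadm uses for ceph-volume probes: pull by digest, create, init, start, attach, capture the report (the "-->" lines), then tear the container down within the same second. A minimal sketch of that pattern in Python, assuming the podman CLI and the image digest from the log; the exact ceph-volume argv is not captured here, so the invocation below is hypothetical.

import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

def run_once(args):
    # podman run --rm reproduces the create/start/attach/died/remove
    # sequence seen in the log: attached for output, removed on exit.
    proc = subprocess.run(["podman", "run", "--rm", IMAGE] + args,
                          capture_output=True, text=True, check=False)
    return proc.returncode, proc.stdout, proc.stderr

# Hypothetical probe; the log records only ceph-volume's output lines.
rc, out, err = run_once(["ceph-volume", "inventory", "--format", "json"])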
Feb 25 09:03:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4317: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:49 np0005629333 podman[428797]: 2026-02-25 14:03:49.129957717 +0000 UTC m=+0.051999498 container create 3e524b9c3f2477dbcfafa8167e8707f09f6f860d85ed752216b6b536c0035718 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:03:49 np0005629333 systemd[1]: Started libpod-conmon-3e524b9c3f2477dbcfafa8167e8707f09f6f860d85ed752216b6b536c0035718.scope.
Feb 25 09:03:49 np0005629333 podman[428797]: 2026-02-25 14:03:49.109862766 +0000 UTC m=+0.031904637 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:03:49 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:03:49 np0005629333 podman[428797]: 2026-02-25 14:03:49.221043613 +0000 UTC m=+0.143085414 container init 3e524b9c3f2477dbcfafa8167e8707f09f6f860d85ed752216b6b536c0035718 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dirac, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:03:49 np0005629333 podman[428797]: 2026-02-25 14:03:49.232283862 +0000 UTC m=+0.154325643 container start 3e524b9c3f2477dbcfafa8167e8707f09f6f860d85ed752216b6b536c0035718 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:03:49 np0005629333 podman[428797]: 2026-02-25 14:03:49.235315218 +0000 UTC m=+0.157357389 container attach 3e524b9c3f2477dbcfafa8167e8707f09f6f860d85ed752216b6b536c0035718 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dirac, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Feb 25 09:03:49 np0005629333 serene_dirac[428813]: 167 167
Feb 25 09:03:49 np0005629333 systemd[1]: libpod-3e524b9c3f2477dbcfafa8167e8707f09f6f860d85ed752216b6b536c0035718.scope: Deactivated successfully.
Feb 25 09:03:49 np0005629333 podman[428797]: 2026-02-25 14:03:49.238333233 +0000 UTC m=+0.160375004 container died 3e524b9c3f2477dbcfafa8167e8707f09f6f860d85ed752216b6b536c0035718 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:03:49 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c7824f98200bb6cf76888445bc71c03a8bcec646e1c7b6b571520a3ec41cf9fd-merged.mount: Deactivated successfully.
Feb 25 09:03:49 np0005629333 podman[428797]: 2026-02-25 14:03:49.275749296 +0000 UTC m=+0.197791077 container remove 3e524b9c3f2477dbcfafa8167e8707f09f6f860d85ed752216b6b536c0035718 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 09:03:49 np0005629333 systemd[1]: libpod-conmon-3e524b9c3f2477dbcfafa8167e8707f09f6f860d85ed752216b6b536c0035718.scope: Deactivated successfully.
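serene_dirac printed just "167 167", which reads as a uid/gid probe: 167 is the ceph user and group id inside these images, and cephadm looks it up before chowning host directories. A sketch of such a probe, assuming the command was something like stat -c '%u %g' /var/lib/ceph (only the output appears in the log, so the invocation is an assumption):

import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

def ceph_uid_gid(image=IMAGE):
    # One-shot query for the numeric owner of /var/lib/ceph in the image;
    # "167 167" in the log matches this kind of output.
    out = subprocess.run(
        ["podman", "run", "--rm", image, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout
    uid, gid = (int(x) for x in out.split())
    return uid, gid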
Feb 25 09:03:49 np0005629333 podman[428836]: 2026-02-25 14:03:49.445280759 +0000 UTC m=+0.052417410 container create b0375a9c9f889b81192b9ce46a432f357f6500cc30b766978a7577cc6bf3af1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_tharp, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:03:49 np0005629333 systemd[1]: Started libpod-conmon-b0375a9c9f889b81192b9ce46a432f357f6500cc30b766978a7577cc6bf3af1b.scope.
Feb 25 09:03:49 np0005629333 podman[428836]: 2026-02-25 14:03:49.424715895 +0000 UTC m=+0.031852536 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:03:49 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:03:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c89452d736882ae044332cf206067d4f81dc3ba4c4e6b58a5ca1b179c7ef1ec5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:03:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c89452d736882ae044332cf206067d4f81dc3ba4c4e6b58a5ca1b179c7ef1ec5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:03:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c89452d736882ae044332cf206067d4f81dc3ba4c4e6b58a5ca1b179c7ef1ec5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:03:49 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c89452d736882ae044332cf206067d4f81dc3ba4c4e6b58a5ca1b179c7ef1ec5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:03:49 np0005629333 podman[428836]: 2026-02-25 14:03:49.549623331 +0000 UTC m=+0.156759992 container init b0375a9c9f889b81192b9ce46a432f357f6500cc30b766978a7577cc6bf3af1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_tharp, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 09:03:49 np0005629333 podman[428836]: 2026-02-25 14:03:49.560018216 +0000 UTC m=+0.167154877 container start b0375a9c9f889b81192b9ce46a432f357f6500cc30b766978a7577cc6bf3af1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 09:03:49 np0005629333 podman[428836]: 2026-02-25 14:03:49.564222535 +0000 UTC m=+0.171359246 container attach b0375a9c9f889b81192b9ce46a432f357f6500cc30b766978a7577cc6bf3af1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_tharp, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]: {
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:    "0": [
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:        {
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "devices": [
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "/dev/loop3"
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            ],
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "lv_name": "ceph_lv0",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "lv_size": "21470642176",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "name": "ceph_lv0",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "tags": {
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.cluster_name": "ceph",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.crush_device_class": "",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.encrypted": "0",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.objectstore": "bluestore",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.osd_id": "0",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.type": "block",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.vdo": "0",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.with_tpm": "0"
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            },
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "type": "block",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "vg_name": "ceph_vg0"
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:        }
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:    ],
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:    "1": [
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:        {
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "devices": [
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "/dev/loop4"
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            ],
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "lv_name": "ceph_lv1",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "lv_size": "21470642176",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "name": "ceph_lv1",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "tags": {
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.cluster_name": "ceph",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.crush_device_class": "",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.encrypted": "0",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.objectstore": "bluestore",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.osd_id": "1",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.type": "block",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.vdo": "0",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.with_tpm": "0"
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            },
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "type": "block",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "vg_name": "ceph_vg1"
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:        }
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:    ],
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:    "2": [
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:        {
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "devices": [
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "/dev/loop5"
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            ],
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "lv_name": "ceph_lv2",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "lv_size": "21470642176",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "name": "ceph_lv2",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "tags": {
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.cluster_name": "ceph",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.crush_device_class": "",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.encrypted": "0",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.objectstore": "bluestore",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.osd_id": "2",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.type": "block",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.vdo": "0",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:                "ceph.with_tpm": "0"
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            },
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "type": "block",
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:            "vg_name": "ceph_vg2"
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:        }
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]:    ]
Feb 25 09:03:49 np0005629333 mystifying_tharp[428853]: }
Feb 25 09:03:49 np0005629333 systemd[1]: libpod-b0375a9c9f889b81192b9ce46a432f357f6500cc30b766978a7577cc6bf3af1b.scope: Deactivated successfully.
Feb 25 09:03:49 np0005629333 podman[428836]: 2026-02-25 14:03:49.872803816 +0000 UTC m=+0.479940457 container died b0375a9c9f889b81192b9ce46a432f357f6500cc30b766978a7577cc6bf3af1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_tharp, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:03:49 np0005629333 systemd[1]: var-lib-containers-storage-overlay-c89452d736882ae044332cf206067d4f81dc3ba4c4e6b58a5ca1b179c7ef1ec5-merged.mount: Deactivated successfully.
Feb 25 09:03:49 np0005629333 podman[428836]: 2026-02-25 14:03:49.918627497 +0000 UTC m=+0.525764128 container remove b0375a9c9f889b81192b9ce46a432f357f6500cc30b766978a7577cc6bf3af1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_tharp, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Feb 25 09:03:49 np0005629333 systemd[1]: libpod-conmon-b0375a9c9f889b81192b9ce46a432f357f6500cc30b766978a7577cc6bf3af1b.scope: Deactivated successfully.
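The JSON that mystifying_tharp emitted maps OSD ids ("0", "1", "2") to block logical volumes plus their ceph.* tags; it has the shape of ceph-volume lvm list --format json output (an inference, the argv is not logged). A minimal parser for records of that shape, with the sample trimmed to the fields actually used:

import json

# Trimmed copy of the osd.0 record from the log output above.
raw = '''{
  "0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
         "devices": ["/dev/loop3"],
         "tags": {"ceph.osd_id": "0",
                  "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
                  "ceph.type": "block"}}]
}'''

def summarize(listing):
    # osd id -> (lv_path, backing devices, osd fsid) for each block LV
    result = {}
    for osd_id, lvs in listing.items():
        for lv in lvs:
            if lv["tags"].get("ceph.type") == "block":
                result[int(osd_id)] = (lv["lv_path"], lv["devices"],
                                       lv["tags"]["ceph.osd_fsid"])
    return result

print(summarize(json.loads(raw)))
# {0: ('/dev/ceph_vg0/ceph_lv0', ['/dev/loop3'], 'd19afe3c-7923-4776-bcc2-88886150b441')}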
Feb 25 09:03:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:03:50 np0005629333 podman[428935]: 2026-02-25 14:03:50.495164133 +0000 UTC m=+0.065187010 container create 2dc07e3e0011460d107fcddb084bfb721fcfde1051ffcb74295b5b07fafe9bcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_tharp, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 09:03:50 np0005629333 systemd[1]: Started libpod-conmon-2dc07e3e0011460d107fcddb084bfb721fcfde1051ffcb74295b5b07fafe9bcf.scope.
Feb 25 09:03:50 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:03:50 np0005629333 podman[428935]: 2026-02-25 14:03:50.46934003 +0000 UTC m=+0.039362967 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:03:50 np0005629333 podman[428935]: 2026-02-25 14:03:50.575896395 +0000 UTC m=+0.145919332 container init 2dc07e3e0011460d107fcddb084bfb721fcfde1051ffcb74295b5b07fafe9bcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Feb 25 09:03:50 np0005629333 podman[428935]: 2026-02-25 14:03:50.584103728 +0000 UTC m=+0.154126605 container start 2dc07e3e0011460d107fcddb084bfb721fcfde1051ffcb74295b5b07fafe9bcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_tharp, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:03:50 np0005629333 boring_tharp[428951]: 167 167
Feb 25 09:03:50 np0005629333 systemd[1]: libpod-2dc07e3e0011460d107fcddb084bfb721fcfde1051ffcb74295b5b07fafe9bcf.scope: Deactivated successfully.
Feb 25 09:03:50 np0005629333 podman[428935]: 2026-02-25 14:03:50.589298816 +0000 UTC m=+0.159321663 container attach 2dc07e3e0011460d107fcddb084bfb721fcfde1051ffcb74295b5b07fafe9bcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:03:50 np0005629333 podman[428935]: 2026-02-25 14:03:50.590266823 +0000 UTC m=+0.160289700 container died 2dc07e3e0011460d107fcddb084bfb721fcfde1051ffcb74295b5b07fafe9bcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_tharp, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:03:50 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6afb1cf93247c7e8f2fd4c76b9930dfa31f664a6883d9d688d94e6f3d42162f3-merged.mount: Deactivated successfully.
Feb 25 09:03:50 np0005629333 podman[428935]: 2026-02-25 14:03:50.632571464 +0000 UTC m=+0.202594311 container remove 2dc07e3e0011460d107fcddb084bfb721fcfde1051ffcb74295b5b07fafe9bcf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_tharp, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 09:03:50 np0005629333 systemd[1]: libpod-conmon-2dc07e3e0011460d107fcddb084bfb721fcfde1051ffcb74295b5b07fafe9bcf.scope: Deactivated successfully.
Feb 25 09:03:50 np0005629333 podman[428973]: 2026-02-25 14:03:50.828753974 +0000 UTC m=+0.053306084 container create 211e3edb458bc212b249629301029b5f328539964cdc909e681c2c41bbdaf4e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 09:03:50 np0005629333 systemd[1]: Started libpod-conmon-211e3edb458bc212b249629301029b5f328539964cdc909e681c2c41bbdaf4e4.scope.
Feb 25 09:03:50 np0005629333 nova_compute[244014]: 2026-02-25 14:03:50.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:03:50 np0005629333 podman[428973]: 2026-02-25 14:03:50.805442032 +0000 UTC m=+0.029994112 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:03:50 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:03:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/729c857b9728a091740866e5a6fd96820765dd4924e771e191bfa6d09d9c192f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:03:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/729c857b9728a091740866e5a6fd96820765dd4924e771e191bfa6d09d9c192f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:03:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/729c857b9728a091740866e5a6fd96820765dd4924e771e191bfa6d09d9c192f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:03:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/729c857b9728a091740866e5a6fd96820765dd4924e771e191bfa6d09d9c192f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:03:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4318: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:50 np0005629333 podman[428973]: 2026-02-25 14:03:50.940793325 +0000 UTC m=+0.165345365 container init 211e3edb458bc212b249629301029b5f328539964cdc909e681c2c41bbdaf4e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_galois, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 09:03:50 np0005629333 podman[428973]: 2026-02-25 14:03:50.945801727 +0000 UTC m=+0.170353737 container start 211e3edb458bc212b249629301029b5f328539964cdc909e681c2c41bbdaf4e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_galois, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 09:03:50 np0005629333 podman[428973]: 2026-02-25 14:03:50.949346378 +0000 UTC m=+0.173898388 container attach 211e3edb458bc212b249629301029b5f328539964cdc909e681c2c41bbdaf4e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_galois, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 09:03:51 np0005629333 nova_compute[244014]: 2026-02-25 14:03:51.095 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:03:51 np0005629333 lvm[429067]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 09:03:51 np0005629333 lvm[429068]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 09:03:51 np0005629333 lvm[429067]: VG ceph_vg1 finished
Feb 25 09:03:51 np0005629333 lvm[429068]: VG ceph_vg0 finished
Feb 25 09:03:51 np0005629333 lvm[429070]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 09:03:51 np0005629333 lvm[429070]: VG ceph_vg2 finished
Feb 25 09:03:51 np0005629333 dazzling_galois[428989]: {}
Feb 25 09:03:51 np0005629333 systemd[1]: libpod-211e3edb458bc212b249629301029b5f328539964cdc909e681c2c41bbdaf4e4.scope: Deactivated successfully.
Feb 25 09:03:51 np0005629333 podman[428973]: 2026-02-25 14:03:51.825498411 +0000 UTC m=+1.050050441 container died 211e3edb458bc212b249629301029b5f328539964cdc909e681c2c41bbdaf4e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 25 09:03:51 np0005629333 systemd[1]: libpod-211e3edb458bc212b249629301029b5f328539964cdc909e681c2c41bbdaf4e4.scope: Consumed 1.258s CPU time.
Feb 25 09:03:51 np0005629333 systemd[1]: var-lib-containers-storage-overlay-729c857b9728a091740866e5a6fd96820765dd4924e771e191bfa6d09d9c192f-merged.mount: Deactivated successfully.
Feb 25 09:03:51 np0005629333 podman[428973]: 2026-02-25 14:03:51.881042018 +0000 UTC m=+1.105594018 container remove 211e3edb458bc212b249629301029b5f328539964cdc909e681c2c41bbdaf4e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 09:03:51 np0005629333 systemd[1]: libpod-conmon-211e3edb458bc212b249629301029b5f328539964cdc909e681c2c41bbdaf4e4.scope: Deactivated successfully.
Feb 25 09:03:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 09:03:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:03:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 09:03:51 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:03:52 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:03:52 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:03:52 np0005629333 nova_compute[244014]: 2026-02-25 14:03:52.213 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:03:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4319: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:53 np0005629333 nova_compute[244014]: 2026-02-25 14:03:53.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:03:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4320: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:03:55.108 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 09:03:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:03:55.109 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 09:03:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:03:55.109 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #219. Immutable memtables: 0.
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.445624) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 137] Flushing memtable with next log file: 219
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028235445656, "job": 137, "event": "flush_started", "num_memtables": 1, "num_entries": 463, "num_deletes": 250, "total_data_size": 433774, "memory_usage": 442376, "flush_reason": "Manual Compaction"}
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 137] Level-0 flush table #220: started
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028235450521, "cf_name": "default", "job": 137, "event": "table_file_creation", "file_number": 220, "file_size": 431269, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 89731, "largest_seqno": 90193, "table_properties": {"data_size": 428514, "index_size": 790, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 5564, "raw_average_key_size": 16, "raw_value_size": 423138, "raw_average_value_size": 1237, "num_data_blocks": 35, "num_entries": 342, "num_filter_entries": 342, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772028211, "oldest_key_time": 1772028211, "file_creation_time": 1772028235, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 220, "seqno_to_time_mapping": "N/A"}}
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 137] Flush lasted 4952 microseconds, and 1633 cpu microseconds.
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.450570) [db/flush_job.cc:967] [default] [JOB 137] Level-0 flush table #220: 431269 bytes OK
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.450594) [db/memtable_list.cc:519] [default] Level-0 commit table #220 started
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.452555) [db/memtable_list.cc:722] [default] Level-0 commit table #220: memtable #1 done
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.452577) EVENT_LOG_v1 {"time_micros": 1772028235452570, "job": 137, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.452599) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 137] Try to delete WAL files size 430979, prev total WAL file size 430979, number of live WAL files 2.
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000216.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.452987) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323531' seq:72057594037927935, type:22 .. '6B7600353032' seq:0, type:0; will stop at (end)
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 138] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 137 Base level 0, inputs: [220(421KB)], [218(10133KB)]
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028235453039, "job": 138, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [220], "files_L6": [218], "score": -1, "input_data_size": 10808300, "oldest_snapshot_seqno": -1}
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 138] Generated table #221: 9874 keys, 9747816 bytes, temperature: kUnknown
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028235514268, "cf_name": "default", "job": 138, "event": "table_file_creation", "file_number": 221, "file_size": 9747816, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9689559, "index_size": 32480, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24709, "raw_key_size": 264013, "raw_average_key_size": 26, "raw_value_size": 9520220, "raw_average_value_size": 964, "num_data_blocks": 1224, "num_entries": 9874, "num_filter_entries": 9874, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772028235, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 221, "seqno_to_time_mapping": "N/A"}}
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.514601) [db/compaction/compaction_job.cc:1663] [default] [JOB 138] Compacted 1@0 + 1@6 files to L6 => 9747816 bytes
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.515921) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 176.2 rd, 158.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.9 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(47.7) write-amplify(22.6) OK, records in: 10385, records dropped: 511 output_compression: NoCompression
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.515951) EVENT_LOG_v1 {"time_micros": 1772028235515936, "job": 138, "event": "compaction_finished", "compaction_time_micros": 61333, "compaction_time_cpu_micros": 32136, "output_level": 6, "num_output_files": 1, "total_output_size": 9747816, "num_input_records": 10385, "num_output_records": 9874, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000220.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028235516157, "job": 138, "event": "table_file_deletion", "file_number": 220}
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000218.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028235517658, "job": 138, "event": "table_file_deletion", "file_number": 218}
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.452913) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.517717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.517722) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.517725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.517727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:03:55 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:03:55.517730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
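
The compaction summary for JOB 138 above carries its own arithmetic: RocksDB reports amplification relative to the bytes of L0 input, and the record counts line up too (10385 in - 511 dropped = 9874 out, matching num_output_records). A minimal sketch reproducing the logged figures; the L0 input size (~0.41 MiB) is backed out from the logged ratios, since the log rounds it to "0.4":

    # Approximate byte counts from the JOB 138 summary above; the exact
    # values live in RocksDB's internal compaction stats.
    MiB = 2 ** 20
    l0_in = 0.41 * MiB      # newly flushed L0 input file ("0.4" in the log)
    l6_in = 9.9 * MiB       # pre-existing L6 input file
    out = 9_747_816         # total_output_size from compaction_finished

    write_amp = out / l0_in                     # ~22.6 in the log
    rw_amp = (l0_in + l6_in + out) / l0_in      # ~47.7 in the log
    print(f"write-amplify ~{write_amp:.1f}, read-write-amplify ~{rw_amp:.1f}")
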
Feb 25 09:03:55 np0005629333 nova_compute[244014]: 2026-02-25 14:03:55.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:03:56 np0005629333 nova_compute[244014]: 2026-02-25 14:03:56.128 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:03:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4321: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:03:57 np0005629333 nova_compute[244014]: 2026-02-25 14:03:57.240 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:03:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4322: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
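
The _set_new_cache_sizes lines repeat every ~5 s as the monitor autotunes its caches; the raw byte counts are easier to compare in MiB (here cache_size is ~973 MiB and kv_alloc exactly 304 MiB). A small parsing sketch, assuming only this log format:

    import re

    line = ("mon.compute-0@0(leader).osd e299 _set_new_cache_sizes "
            "cache_size:1020054731 inc_alloc: 343932928 "
            "full_alloc: 348127232 kv_alloc: 318767104")

    # Grab each "name: bytes" pair and print it in MiB for readability.
    for name, value in re.findall(r"(\w+):\s*(\d+)", line):
        print(f"{name:>10}: {int(value) / 2**20:7.1f} MiB")
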
Feb 25 09:04:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4323: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:01 np0005629333 nova_compute[244014]: 2026-02-25 14:04:01.132 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:04:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:04:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:04:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:04:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:04:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:04:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:04:02 np0005629333 nova_compute[244014]: 2026-02-25 14:04:02.242 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:04:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4324: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4325: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:04:06 np0005629333 nova_compute[244014]: 2026-02-25 14:04:06.147 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:04:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4326: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:07 np0005629333 nova_compute[244014]: 2026-02-25 14:04:07.271 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:04:08 np0005629333 podman[429114]: 2026-02-25 14:04:08.729675581 +0000 UTC m=+0.058829721 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Feb 25 09:04:08 np0005629333 podman[429115]: 2026-02-25 14:04:08.757780619 +0000 UTC m=+0.088457862 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
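
The two health_status=healthy events are podman running each container's configured test command ('/openstack/healthcheck', per the config_data above) on its healthcheck timer. The same check can be triggered on demand; a sketch, assuming a recent podman with healthchecks enabled:

    import subprocess

    # "podman healthcheck run" executes the container's configured test;
    # exit status 0 means healthy, matching the events above.
    for name in ("ovn_metadata_agent", "ovn_controller"):
        rc = subprocess.call(["podman", "healthcheck", "run", name])
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")
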
Feb 25 09:04:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4327: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:04:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4328: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:11 np0005629333 nova_compute[244014]: 2026-02-25 14:04:11.150 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:04:12 np0005629333 nova_compute[244014]: 2026-02-25 14:04:12.274 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:04:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4329: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4330: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:04:16 np0005629333 nova_compute[244014]: 2026-02-25 14:04:16.182 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:04:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4331: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:17 np0005629333 nova_compute[244014]: 2026-02-25 14:04:17.276 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:04:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4332: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:04:20 np0005629333 nova_compute[244014]: 2026-02-25 14:04:20.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:04:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4333: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:21 np0005629333 nova_compute[244014]: 2026-02-25 14:04:21.186 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:04:22 np0005629333 nova_compute[244014]: 2026-02-25 14:04:22.279 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:04:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4334: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4335: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:04:26 np0005629333 nova_compute[244014]: 2026-02-25 14:04:26.222 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:04:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4336: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:27 np0005629333 nova_compute[244014]: 2026-02-25 14:04:27.314 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:04:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4337: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:04:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4338: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:31 np0005629333 nova_compute[244014]: 2026-02-25 14:04:31.226 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:04:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:04:31
Feb 25 09:04:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 09:04:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 09:04:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'volumes', 'images', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', 'vms', 'cephfs.cephfs.meta']
Feb 25 09:04:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
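
The balancer pass above prepared 0 of a possible 10 upmap changes, i.e. the 305 PGs are already evenly distributed across the pools it listed. The module's state can be checked the same way from the CLI; a sketch assuming admin credentials on the host:

    import json, subprocess

    # "ceph balancer status" reports the active flag, the mode (upmap here),
    # and the result of the last optimization attempt.
    out = subprocess.check_output(
        ["ceph", "balancer", "status", "--format", "json"])
    status = json.loads(out)
    print(status["mode"], status["active"], status.get("optimize_result"))
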
Feb 25 09:04:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:04:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:04:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:04:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:04:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:04:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:04:32 np0005629333 nova_compute[244014]: 2026-02-25 14:04:32.316 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:04:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 09:04:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:04:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 09:04:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:04:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:04:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:04:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:04:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 09:04:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:04:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 09:04:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4339: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:32 np0005629333 nova_compute[244014]: 2026-02-25 14:04:32.970 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:04:34 np0005629333 nova_compute[244014]: 2026-02-25 14:04:34.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:04:34 np0005629333 nova_compute[244014]: 2026-02-25 14:04:34.911 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 09:04:34 np0005629333 nova_compute[244014]: 2026-02-25 14:04:34.912 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 09:04:34 np0005629333 nova_compute[244014]: 2026-02-25 14:04:34.912 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 09:04:34 np0005629333 nova_compute[244014]: 2026-02-25 14:04:34.912 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 09:04:34 np0005629333 nova_compute[244014]: 2026-02-25 14:04:34.912 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 09:04:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4340: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:04:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:04:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2502407981' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:04:35 np0005629333 nova_compute[244014]: 2026-02-25 14:04:35.477 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 09:04:35 np0005629333 nova_compute[244014]: 2026-02-25 14:04:35.673 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 09:04:35 np0005629333 nova_compute[244014]: 2026-02-25 14:04:35.675 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3529MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 09:04:35 np0005629333 nova_compute[244014]: 2026-02-25 14:04:35.675 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 09:04:35 np0005629333 nova_compute[244014]: 2026-02-25 14:04:35.676 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 09:04:36 np0005629333 nova_compute[244014]: 2026-02-25 14:04:36.277 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:04:36 np0005629333 nova_compute[244014]: 2026-02-25 14:04:36.386 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 09:04:36 np0005629333 nova_compute[244014]: 2026-02-25 14:04:36.386 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 09:04:36 np0005629333 nova_compute[244014]: 2026-02-25 14:04:36.403 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 09:04:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4341: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:04:36 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2349764953' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:04:37 np0005629333 nova_compute[244014]: 2026-02-25 14:04:37.014 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
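
The audit's storage numbers come from the exact command logged above, which took ~0.6 s per call here. A sketch of reproducing the call and reading back the cluster totals, using ceph df's standard JSON keys:

    import json, subprocess

    # Same command nova shells out to for RBD-backed storage statistics.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    stats = json.loads(subprocess.check_output(cmd))

    total = stats["stats"]["total_bytes"]
    avail = stats["stats"]["total_avail_bytes"]
    print(f"{avail / 2**30:.0f} GiB free of {total / 2**30:.0f} GiB")  # ~59 of 60
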
Feb 25 09:04:37 np0005629333 nova_compute[244014]: 2026-02-25 14:04:37.021 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 09:04:37 np0005629333 nova_compute[244014]: 2026-02-25 14:04:37.039 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
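
The inventory dict above is what placement uses to size this provider: for each resource class the allocatable capacity is (total - reserved) * allocation_ratio, so the host overcommits to 32 VCPUs while disk is undercommitted to ~52 GB. A worked sketch with the logged values:

    # Inventory exactly as logged by nova.scheduler.client.report above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} allocatable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
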
Feb 25 09:04:37 np0005629333 nova_compute[244014]: 2026-02-25 14:04:37.041 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 09:04:37 np0005629333 nova_compute[244014]: 2026-02-25 14:04:37.041 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.366s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 09:04:37 np0005629333 nova_compute[244014]: 2026-02-25 14:04:37.349 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:04:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4342: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:39 np0005629333 podman[429202]: 2026-02-25 14:04:39.733989927 +0000 UTC m=+0.074509486 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Feb 25 09:04:39 np0005629333 podman[429203]: 2026-02-25 14:04:39.826428072 +0000 UTC m=+0.159809858 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 09:04:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:04:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4343: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:41 np0005629333 nova_compute[244014]: 2026-02-25 14:04:41.043 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:04:41 np0005629333 nova_compute[244014]: 2026-02-25 14:04:41.043 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 09:04:41 np0005629333 nova_compute[244014]: 2026-02-25 14:04:41.044 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 09:04:41 np0005629333 nova_compute[244014]: 2026-02-25 14:04:41.059 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 09:04:41 np0005629333 nova_compute[244014]: 2026-02-25 14:04:41.280 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:04:42 np0005629333 nova_compute[244014]: 2026-02-25 14:04:42.351 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:04:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4344: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:43 np0005629333 nova_compute[244014]: 2026-02-25 14:04:43.889 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:04:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 09:04:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:04:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 09:04:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:04:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 09:04:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:04:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:04:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:04:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:04:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:04:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 09:04:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:04:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 09:04:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:04:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:04:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:04:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 09:04:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:04:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 09:04:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:04:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:04:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:04:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
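
Each "pg target" above is consistent with used-space ratio x bias x 300 before quantization to a power of two (per-pool minimums explain the "quantized to 32" floors). The factor 300 matches mon_target_pg_per_osd's default of 100 across the 3 OSDs backing this 60 GiB cluster; that reading is inferred from the logged numbers, not quoted from the autoscaler source. Reproducing the 'images' pool line:

    # 'images': using 0.0006714... of space, bias 1.0 -> pg target 0.2014...
    used_ratio = 0.0006714637386478266
    bias = 1.0
    target_pg_per_osd = 100   # mon_target_pg_per_osd default (assumed)
    num_osds = 3              # inferred from the 60 GiB cluster (assumed)

    pg_target = used_ratio * bias * target_pg_per_osd * num_osds
    print(f"pg target {pg_target:.6f}")   # ~0.201439, matching the log
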
Feb 25 09:04:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4345: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:04:45 np0005629333 nova_compute[244014]: 2026-02-25 14:04:45.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:04:45 np0005629333 nova_compute[244014]: 2026-02-25 14:04:45.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:04:45 np0005629333 nova_compute[244014]: 2026-02-25 14:04:45.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 25 09:04:45 np0005629333 nova_compute[244014]: 2026-02-25 14:04:45.989 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb 25 09:04:46 np0005629333 nova_compute[244014]: 2026-02-25 14:04:46.283 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:04:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4346: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:46 np0005629333 nova_compute[244014]: 2026-02-25 14:04:46.989 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:04:46 np0005629333 nova_compute[244014]: 2026-02-25 14:04:46.989 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 09:04:47 np0005629333 nova_compute[244014]: 2026-02-25 14:04:47.353 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:04:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 09:04:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2864306368' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 09:04:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 09:04:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2864306368' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 09:04:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4347: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:04:50 np0005629333 nova_compute[244014]: 2026-02-25 14:04:50.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:04:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4348: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:51 np0005629333 nova_compute[244014]: 2026-02-25 14:04:51.318 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:04:52 np0005629333 nova_compute[244014]: 2026-02-25 14:04:52.377 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:04:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 09:04:52 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 09:04:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 09:04:52 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 09:04:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 09:04:52 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:04:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 09:04:52 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 09:04:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 09:04:52 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 09:04:52 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 09:04:52 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
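
The burst of mon_command dispatches above is cephadm's periodic reconciliation: it regenerates the minimal client config, refreshes the admin and bootstrap-osd keyrings, and checks for destroyed OSDs in the tree. The first of those commands can be run directly; a sketch assuming admin credentials:

    import subprocess

    # Prints the minimal ceph.conf (fsid plus mon addresses) that cephadm
    # distributes to managed hosts.
    conf = subprocess.check_output(
        ["ceph", "config", "generate-minimal-conf"], text=True)
    print(conf)
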
Feb 25 09:04:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4349: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:53 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 09:04:53 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:04:53 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 09:04:53 np0005629333 podman[429391]: 2026-02-25 14:04:53.145026662 +0000 UTC m=+0.060517959 container create 286b239c7e50102477fac8d392dbbee749a2b03b90cf937f87f596d4853068f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:04:53 np0005629333 systemd[1]: Started libpod-conmon-286b239c7e50102477fac8d392dbbee749a2b03b90cf937f87f596d4853068f9.scope.
Feb 25 09:04:53 np0005629333 podman[429391]: 2026-02-25 14:04:53.118375715 +0000 UTC m=+0.033867032 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:04:53 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:04:53 np0005629333 podman[429391]: 2026-02-25 14:04:53.26754979 +0000 UTC m=+0.183041157 container init 286b239c7e50102477fac8d392dbbee749a2b03b90cf937f87f596d4853068f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hawking, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:04:53 np0005629333 podman[429391]: 2026-02-25 14:04:53.275600049 +0000 UTC m=+0.191091336 container start 286b239c7e50102477fac8d392dbbee749a2b03b90cf937f87f596d4853068f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hawking, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:04:53 np0005629333 podman[429391]: 2026-02-25 14:04:53.279651014 +0000 UTC m=+0.195142311 container attach 286b239c7e50102477fac8d392dbbee749a2b03b90cf937f87f596d4853068f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hawking, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 09:04:53 np0005629333 friendly_hawking[429408]: 167 167
Feb 25 09:04:53 np0005629333 systemd[1]: libpod-286b239c7e50102477fac8d392dbbee749a2b03b90cf937f87f596d4853068f9.scope: Deactivated successfully.
Feb 25 09:04:53 np0005629333 podman[429391]: 2026-02-25 14:04:53.285477539 +0000 UTC m=+0.200968806 container died 286b239c7e50102477fac8d392dbbee749a2b03b90cf937f87f596d4853068f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hawking, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 09:04:53 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1531002e2ad4c3024c3481cbfaff746cada4cf59dc52f877d96a9426161f9f5a-merged.mount: Deactivated successfully.
Feb 25 09:04:53 np0005629333 podman[429391]: 2026-02-25 14:04:53.329391216 +0000 UTC m=+0.244882473 container remove 286b239c7e50102477fac8d392dbbee749a2b03b90cf937f87f596d4853068f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 09:04:53 np0005629333 systemd[1]: libpod-conmon-286b239c7e50102477fac8d392dbbee749a2b03b90cf937f87f596d4853068f9.scope: Deactivated successfully.
Feb 25 09:04:53 np0005629333 podman[429430]: 2026-02-25 14:04:53.499835845 +0000 UTC m=+0.051050390 container create 72f665ac103573f1d74914fb4dcdcf14fa53971af35510dbe09b21869489f869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_curran, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Feb 25 09:04:53 np0005629333 systemd[1]: Started libpod-conmon-72f665ac103573f1d74914fb4dcdcf14fa53971af35510dbe09b21869489f869.scope.
Feb 25 09:04:53 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:04:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22130a4232e49bbf9ed0e893a31f911d058ddf935fa7001187cc8fb7aae48ab7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:04:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22130a4232e49bbf9ed0e893a31f911d058ddf935fa7001187cc8fb7aae48ab7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:04:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22130a4232e49bbf9ed0e893a31f911d058ddf935fa7001187cc8fb7aae48ab7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:04:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22130a4232e49bbf9ed0e893a31f911d058ddf935fa7001187cc8fb7aae48ab7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:04:53 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22130a4232e49bbf9ed0e893a31f911d058ddf935fa7001187cc8fb7aae48ab7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 09:04:53 np0005629333 podman[429430]: 2026-02-25 14:04:53.478964573 +0000 UTC m=+0.030179038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:04:53 np0005629333 podman[429430]: 2026-02-25 14:04:53.582536803 +0000 UTC m=+0.133751218 container init 72f665ac103573f1d74914fb4dcdcf14fa53971af35510dbe09b21869489f869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_curran, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 09:04:53 np0005629333 podman[429430]: 2026-02-25 14:04:53.593868235 +0000 UTC m=+0.145082640 container start 72f665ac103573f1d74914fb4dcdcf14fa53971af35510dbe09b21869489f869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Feb 25 09:04:53 np0005629333 podman[429430]: 2026-02-25 14:04:53.597960991 +0000 UTC m=+0.149175446 container attach 72f665ac103573f1d74914fb4dcdcf14fa53971af35510dbe09b21869489f869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 09:04:53 np0005629333 nova_compute[244014]: 2026-02-25 14:04:53.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:04:54 np0005629333 amazing_curran[429447]: --> passed data devices: 0 physical, 3 LVM
Feb 25 09:04:54 np0005629333 amazing_curran[429447]: --> All data devices are unavailable
Feb 25 09:04:54 np0005629333 systemd[1]: libpod-72f665ac103573f1d74914fb4dcdcf14fa53971af35510dbe09b21869489f869.scope: Deactivated successfully.
Feb 25 09:04:54 np0005629333 podman[429430]: 2026-02-25 14:04:54.044529739 +0000 UTC m=+0.595744174 container died 72f665ac103573f1d74914fb4dcdcf14fa53971af35510dbe09b21869489f869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_curran, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 25 09:04:54 np0005629333 systemd[1]: var-lib-containers-storage-overlay-22130a4232e49bbf9ed0e893a31f911d058ddf935fa7001187cc8fb7aae48ab7-merged.mount: Deactivated successfully.
Feb 25 09:04:54 np0005629333 podman[429430]: 2026-02-25 14:04:54.09424396 +0000 UTC m=+0.645458405 container remove 72f665ac103573f1d74914fb4dcdcf14fa53971af35510dbe09b21869489f869 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_curran, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 09:04:54 np0005629333 systemd[1]: libpod-conmon-72f665ac103573f1d74914fb4dcdcf14fa53971af35510dbe09b21869489f869.scope: Deactivated successfully.
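[editor's note] The create/init/start/attach/died/remove sequence above is one short-lived cephadm probe container (amazing_curran): podman records each lifecycle event, systemd tracks the matching libpod scopes, and the two amazing_curran[...] lines are the probe's own output reporting that the three candidate data devices are LVM-backed and already consumed. A minimal sketch of re-running such a probe by hand, assuming only the image digest taken from the log (the exact arguments cephadm passes are not shown in this excerpt):

    # Hypothetical re-run of a ceph-volume probe in a throwaway container,
    # mirroring the podman create/start/attach/remove events in the log.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    def ceph_volume(*args: str) -> str:
        cmd = ["podman", "run", "--rm", "--privileged",
               "-v", "/dev:/dev",                # ceph-volume inspects host devices
               "--entrypoint", "ceph-volume", IMAGE, *args]
        return subprocess.run(cmd, capture_output=True, text=True).stdout

    # Device inventory as JSON; devices flagged unavailable here correspond
    # to the "All data devices are unavailable" report above.
    print(ceph_volume("inventory", "--format", "json"))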
Feb 25 09:04:54 np0005629333 podman[429542]: 2026-02-25 14:04:54.57962209 +0000 UTC m=+0.058610465 container create 24a0a26f74f29f3d131749fa54e0e0e2bc0e1031ef0594e20fd7f750db5c5168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_feistel, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 09:04:54 np0005629333 systemd[1]: Started libpod-conmon-24a0a26f74f29f3d131749fa54e0e0e2bc0e1031ef0594e20fd7f750db5c5168.scope.
Feb 25 09:04:54 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:04:54 np0005629333 podman[429542]: 2026-02-25 14:04:54.636634048 +0000 UTC m=+0.115622393 container init 24a0a26f74f29f3d131749fa54e0e0e2bc0e1031ef0594e20fd7f750db5c5168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_feistel, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 09:04:54 np0005629333 podman[429542]: 2026-02-25 14:04:54.645526811 +0000 UTC m=+0.124515156 container start 24a0a26f74f29f3d131749fa54e0e0e2bc0e1031ef0594e20fd7f750db5c5168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:04:54 np0005629333 podman[429542]: 2026-02-25 14:04:54.649246386 +0000 UTC m=+0.128234761 container attach 24a0a26f74f29f3d131749fa54e0e0e2bc0e1031ef0594e20fd7f750db5c5168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_feistel, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:04:54 np0005629333 podman[429542]: 2026-02-25 14:04:54.556404291 +0000 UTC m=+0.035392716 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:04:54 np0005629333 quirky_feistel[429558]: 167 167
Feb 25 09:04:54 np0005629333 systemd[1]: libpod-24a0a26f74f29f3d131749fa54e0e0e2bc0e1031ef0594e20fd7f750db5c5168.scope: Deactivated successfully.
Feb 25 09:04:54 np0005629333 podman[429542]: 2026-02-25 14:04:54.653199039 +0000 UTC m=+0.132187384 container died 24a0a26f74f29f3d131749fa54e0e0e2bc0e1031ef0594e20fd7f750db5c5168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_feistel, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 09:04:54 np0005629333 systemd[1]: var-lib-containers-storage-overlay-382c237cd02fd1a1ac5dc4ace2f135105285b3079f7dbc9c88a78122f41bf12e-merged.mount: Deactivated successfully.
Feb 25 09:04:54 np0005629333 podman[429542]: 2026-02-25 14:04:54.694029558 +0000 UTC m=+0.173017903 container remove 24a0a26f74f29f3d131749fa54e0e0e2bc0e1031ef0594e20fd7f750db5c5168 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_feistel, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Feb 25 09:04:54 np0005629333 systemd[1]: libpod-conmon-24a0a26f74f29f3d131749fa54e0e0e2bc0e1031ef0594e20fd7f750db5c5168.scope: Deactivated successfully.
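[editor's note] quirky_feistel prints only "167 167", which matches the numeric uid/gid of the ceph user and group in these CentOS-based Ceph images; cephadm runs one-shot containers like this to learn the ownership it should apply to host directories. A hedged illustration (the probed path is an assumption, not taken from the log):

    # Hypothetical ownership probe: print the uid/gid owning /var/lib/ceph
    # inside the image; in Ceph containers both are 167 (user/group "ceph").
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True).stdout
    print(out.strip())  # expected: "167 167"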
Feb 25 09:04:54 np0005629333 podman[429583]: 2026-02-25 14:04:54.82620531 +0000 UTC m=+0.039016928 container create 5b798d1e5a391768db0d9e3758064e4cafaebd8cb47e5b2f79022ea2e22c2fe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_merkle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:04:54 np0005629333 systemd[1]: Started libpod-conmon-5b798d1e5a391768db0d9e3758064e4cafaebd8cb47e5b2f79022ea2e22c2fe5.scope.
Feb 25 09:04:54 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:04:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e4e7692b542fb1597415f4f999f5fc731d775fb4036623271d1bb67cebc7742/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:04:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e4e7692b542fb1597415f4f999f5fc731d775fb4036623271d1bb67cebc7742/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:04:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e4e7692b542fb1597415f4f999f5fc731d775fb4036623271d1bb67cebc7742/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:04:54 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e4e7692b542fb1597415f4f999f5fc731d775fb4036623271d1bb67cebc7742/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:04:54 np0005629333 podman[429583]: 2026-02-25 14:04:54.900883859 +0000 UTC m=+0.113695497 container init 5b798d1e5a391768db0d9e3758064e4cafaebd8cb47e5b2f79022ea2e22c2fe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 09:04:54 np0005629333 podman[429583]: 2026-02-25 14:04:54.809485256 +0000 UTC m=+0.022296904 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:04:54 np0005629333 podman[429583]: 2026-02-25 14:04:54.906774667 +0000 UTC m=+0.119586275 container start 5b798d1e5a391768db0d9e3758064e4cafaebd8cb47e5b2f79022ea2e22c2fe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 09:04:54 np0005629333 podman[429583]: 2026-02-25 14:04:54.909847744 +0000 UTC m=+0.122659392 container attach 5b798d1e5a391768db0d9e3758064e4cafaebd8cb47e5b2f79022ea2e22c2fe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_merkle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:04:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4350: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:04:55.109 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 09:04:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:04:55.111 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 09:04:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:04:55.111 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]: {
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:    "0": [
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:        {
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "devices": [
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "/dev/loop3"
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            ],
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "lv_name": "ceph_lv0",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "lv_size": "21470642176",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "name": "ceph_lv0",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "tags": {
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.cluster_name": "ceph",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.crush_device_class": "",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.encrypted": "0",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.objectstore": "bluestore",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.osd_id": "0",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.type": "block",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.vdo": "0",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.with_tpm": "0"
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            },
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "type": "block",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "vg_name": "ceph_vg0"
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:        }
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:    ],
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:    "1": [
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:        {
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "devices": [
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "/dev/loop4"
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            ],
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "lv_name": "ceph_lv1",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "lv_size": "21470642176",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "name": "ceph_lv1",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "tags": {
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.cluster_name": "ceph",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.crush_device_class": "",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.encrypted": "0",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.objectstore": "bluestore",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.osd_id": "1",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.type": "block",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.vdo": "0",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.with_tpm": "0"
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            },
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "type": "block",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "vg_name": "ceph_vg1"
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:        }
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:    ],
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:    "2": [
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:        {
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "devices": [
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "/dev/loop5"
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            ],
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "lv_name": "ceph_lv2",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "lv_size": "21470642176",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "name": "ceph_lv2",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "tags": {
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.cluster_name": "ceph",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.crush_device_class": "",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.encrypted": "0",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.objectstore": "bluestore",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.osd_id": "2",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.type": "block",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.vdo": "0",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:                "ceph.with_tpm": "0"
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            },
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "type": "block",
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:            "vg_name": "ceph_vg2"
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:        }
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]:    ]
Feb 25 09:04:55 np0005629333 trusting_merkle[429599]: }
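[editor's note] The JSON block emitted by trusting_merkle has the shape of `ceph-volume lvm list --format json`: a map from OSD id to the logical volumes backing it, with the ceph.* LV tags repeated in parsed form. A small sketch that consumes exactly the fields visible above:

    # Summarize a ceph-volume lvm list JSON report: OSD id -> LV, device, size.
    import json

    def summarize(report_text: str):
        report = json.loads(report_text)
        for osd_id in sorted(report, key=int):
            for lv in report[osd_id]:
                gib = int(lv["lv_size"]) / 2**30   # "21470642176" -> ~20 GiB
                yield (osd_id, lv["lv_path"], ",".join(lv["devices"]),
                       lv["tags"]["ceph.osd_fsid"], gib)

    # Example usage (report_text captured from the container output above):
    # for osd, lv, devs, fsid, gib in summarize(report_text):
    #     print(f"osd.{osd}: {lv} on {devs} ({gib:.0f} GiB) fsid={fsid}")

Each of the three 21470642176-byte LVs is about 20 GiB on a loop device, consistent with the 60 GiB total the pgmap lines report.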
Feb 25 09:04:55 np0005629333 systemd[1]: libpod-5b798d1e5a391768db0d9e3758064e4cafaebd8cb47e5b2f79022ea2e22c2fe5.scope: Deactivated successfully.
Feb 25 09:04:55 np0005629333 podman[429583]: 2026-02-25 14:04:55.217888939 +0000 UTC m=+0.430700577 container died 5b798d1e5a391768db0d9e3758064e4cafaebd8cb47e5b2f79022ea2e22c2fe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:04:55 np0005629333 systemd[1]: var-lib-containers-storage-overlay-1e4e7692b542fb1597415f4f999f5fc731d775fb4036623271d1bb67cebc7742-merged.mount: Deactivated successfully.
Feb 25 09:04:55 np0005629333 podman[429583]: 2026-02-25 14:04:55.255589719 +0000 UTC m=+0.468401337 container remove 5b798d1e5a391768db0d9e3758064e4cafaebd8cb47e5b2f79022ea2e22c2fe5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 25 09:04:55 np0005629333 systemd[1]: libpod-conmon-5b798d1e5a391768db0d9e3758064e4cafaebd8cb47e5b2f79022ea2e22c2fe5.scope: Deactivated successfully.
Feb 25 09:04:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:04:55 np0005629333 podman[429678]: 2026-02-25 14:04:55.686229875 +0000 UTC m=+0.045797421 container create bed9064cfaf27091c167d309292b2786b1bfe3c1a04d0b9b28e350652ad09ff1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:04:55 np0005629333 systemd[1]: Started libpod-conmon-bed9064cfaf27091c167d309292b2786b1bfe3c1a04d0b9b28e350652ad09ff1.scope.
Feb 25 09:04:55 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:04:55 np0005629333 podman[429678]: 2026-02-25 14:04:55.668014258 +0000 UTC m=+0.027581814 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:04:55 np0005629333 podman[429678]: 2026-02-25 14:04:55.772950927 +0000 UTC m=+0.132518543 container init bed9064cfaf27091c167d309292b2786b1bfe3c1a04d0b9b28e350652ad09ff1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Feb 25 09:04:55 np0005629333 podman[429678]: 2026-02-25 14:04:55.777783584 +0000 UTC m=+0.137351140 container start bed9064cfaf27091c167d309292b2786b1bfe3c1a04d0b9b28e350652ad09ff1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True)
Feb 25 09:04:55 np0005629333 podman[429678]: 2026-02-25 14:04:55.782418126 +0000 UTC m=+0.141985682 container attach bed9064cfaf27091c167d309292b2786b1bfe3c1a04d0b9b28e350652ad09ff1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 09:04:55 np0005629333 friendly_einstein[429694]: 167 167
Feb 25 09:04:55 np0005629333 systemd[1]: libpod-bed9064cfaf27091c167d309292b2786b1bfe3c1a04d0b9b28e350652ad09ff1.scope: Deactivated successfully.
Feb 25 09:04:55 np0005629333 podman[429678]: 2026-02-25 14:04:55.783881257 +0000 UTC m=+0.143448843 container died bed9064cfaf27091c167d309292b2786b1bfe3c1a04d0b9b28e350652ad09ff1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_einstein, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Feb 25 09:04:55 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5dd7054ee16dd5dddbf611744ef1163e15793ff3fb6f412589f5694d62e69845-merged.mount: Deactivated successfully.
Feb 25 09:04:55 np0005629333 podman[429678]: 2026-02-25 14:04:55.826509828 +0000 UTC m=+0.186077374 container remove bed9064cfaf27091c167d309292b2786b1bfe3c1a04d0b9b28e350652ad09ff1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_einstein, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Feb 25 09:04:55 np0005629333 systemd[1]: libpod-conmon-bed9064cfaf27091c167d309292b2786b1bfe3c1a04d0b9b28e350652ad09ff1.scope: Deactivated successfully.
Feb 25 09:04:55 np0005629333 nova_compute[244014]: 2026-02-25 14:04:55.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:04:55 np0005629333 podman[429720]: 2026-02-25 14:04:55.962861229 +0000 UTC m=+0.047630024 container create 87587d57a7c5ae60ba3bb1774f0b8af7d7faf216f0f2d625ba7a69fd49ae0947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_wing, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:04:56 np0005629333 systemd[1]: Started libpod-conmon-87587d57a7c5ae60ba3bb1774f0b8af7d7faf216f0f2d625ba7a69fd49ae0947.scope.
Feb 25 09:04:56 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:04:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d2c0acea03b3c546500581f2b21269598897eeef2af236bd556a45bcf590836/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:04:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d2c0acea03b3c546500581f2b21269598897eeef2af236bd556a45bcf590836/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:04:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d2c0acea03b3c546500581f2b21269598897eeef2af236bd556a45bcf590836/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:04:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d2c0acea03b3c546500581f2b21269598897eeef2af236bd556a45bcf590836/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:04:56 np0005629333 podman[429720]: 2026-02-25 14:04:55.946148784 +0000 UTC m=+0.030917619 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:04:56 np0005629333 podman[429720]: 2026-02-25 14:04:56.044097875 +0000 UTC m=+0.128866700 container init 87587d57a7c5ae60ba3bb1774f0b8af7d7faf216f0f2d625ba7a69fd49ae0947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_wing, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:04:56 np0005629333 podman[429720]: 2026-02-25 14:04:56.049801017 +0000 UTC m=+0.134569822 container start 87587d57a7c5ae60ba3bb1774f0b8af7d7faf216f0f2d625ba7a69fd49ae0947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 09:04:56 np0005629333 podman[429720]: 2026-02-25 14:04:56.052875384 +0000 UTC m=+0.137644179 container attach 87587d57a7c5ae60ba3bb1774f0b8af7d7faf216f0f2d625ba7a69fd49ae0947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_wing, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:04:56 np0005629333 nova_compute[244014]: 2026-02-25 14:04:56.322 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:04:56 np0005629333 lvm[429816]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 09:04:56 np0005629333 lvm[429816]: VG ceph_vg1 finished
Feb 25 09:04:56 np0005629333 lvm[429815]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 09:04:56 np0005629333 lvm[429815]: VG ceph_vg0 finished
Feb 25 09:04:56 np0005629333 lvm[429818]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 09:04:56 np0005629333 lvm[429818]: VG ceph_vg2 finished
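[editor's note] The three lvm[...] pairs above are LVM event activation: as udev reports each loop PV online, pvscan records it, and once every PV of a volume group is present the VG is declared complete and activated. A sketch checking the same completeness condition from userspace (requires root; uses lvm2's JSON report output):

    # List each VG with its PV/LV counts via lvm2's JSON reporting.
    import json, subprocess

    out = subprocess.run(
        ["vgs", "--reportformat", "json", "-o", "vg_name,pv_count,lv_count"],
        capture_output=True, text=True, check=True).stdout
    for vg in json.loads(out)["report"][0]["vg"]:
        print(vg["vg_name"], "PVs:", vg["pv_count"], "LVs:", vg["lv_count"])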
Feb 25 09:04:56 np0005629333 lucid_wing[429736]: {}
Feb 25 09:04:56 np0005629333 systemd[1]: libpod-87587d57a7c5ae60ba3bb1774f0b8af7d7faf216f0f2d625ba7a69fd49ae0947.scope: Deactivated successfully.
Feb 25 09:04:56 np0005629333 podman[429720]: 2026-02-25 14:04:56.872430871 +0000 UTC m=+0.957199716 container died 87587d57a7c5ae60ba3bb1774f0b8af7d7faf216f0f2d625ba7a69fd49ae0947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 09:04:56 np0005629333 systemd[1]: libpod-87587d57a7c5ae60ba3bb1774f0b8af7d7faf216f0f2d625ba7a69fd49ae0947.scope: Consumed 1.105s CPU time.
Feb 25 09:04:56 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6d2c0acea03b3c546500581f2b21269598897eeef2af236bd556a45bcf590836-merged.mount: Deactivated successfully.
Feb 25 09:04:56 np0005629333 podman[429720]: 2026-02-25 14:04:56.92522845 +0000 UTC m=+1.009997255 container remove 87587d57a7c5ae60ba3bb1774f0b8af7d7faf216f0f2d625ba7a69fd49ae0947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Feb 25 09:04:56 np0005629333 systemd[1]: libpod-conmon-87587d57a7c5ae60ba3bb1774f0b8af7d7faf216f0f2d625ba7a69fd49ae0947.scope: Deactivated successfully.
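[editor's note] lucid_wing exits after printing an empty JSON object. That is consistent with a `ceph-volume raw list`-style probe finding no raw-mode OSDs on this host (all three OSDs above are LVM-based), though the exact command is not recorded in this excerpt.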
Feb 25 09:04:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4351: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:04:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 09:04:56 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:04:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 09:04:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
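[editor's note] The two handle_command lines show the cephadm mgr module persisting this host's freshly gathered device report into the monitor config-key store (keys mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0). The cached value can be read back; a minimal sketch, assuming the stored value is the JSON report cephadm just collected:

    # Read cephadm's cached device report for this host from the config-key store.
    import subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    out = subprocess.run(["ceph", "config-key", "get", key],
                         capture_output=True, text=True, check=True).stdout
    print(out[:200])  # first part of the cached device report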
Feb 25 09:04:57 np0005629333 nova_compute[244014]: 2026-02-25 14:04:57.380 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:04:57 np0005629333 nova_compute[244014]: 2026-02-25 14:04:57.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:04:57 np0005629333 nova_compute[244014]: 2026-02-25 14:04:57.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 09:04:57 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:04:57 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:04:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4352: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:05:00 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4353: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:01 np0005629333 nova_compute[244014]: 2026-02-25 14:05:01.361 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:05:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:05:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:05:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:05:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:05:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:05:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:05:02 np0005629333 nova_compute[244014]: 2026-02-25 14:05:02.382 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:05:02 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4354: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:04 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4355: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:05:06 np0005629333 nova_compute[244014]: 2026-02-25 14:05:06.365 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:05:06 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4356: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:07 np0005629333 nova_compute[244014]: 2026-02-25 14:05:07.384 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:05:08 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4357: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:05:10 np0005629333 podman[429858]: 2026-02-25 14:05:10.78293632 +0000 UTC m=+0.108369408 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223)
Feb 25 09:05:10 np0005629333 podman[429859]: 2026-02-25 14:05:10.863091636 +0000 UTC m=+0.188837783 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
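[editor's note] The two health_status=healthy events are podman's periodic container healthchecks firing for ovn_metadata_agent and ovn_controller: the configured test (/openstack/healthcheck, per the config_data shown) runs inside each container and the failing-streak counters are updated. The same check can be run on demand; exit status 0 means healthy:

    # Run the configured container healthchecks once, by name (names from the log).
    import subprocess

    for name in ("ovn_metadata_agent", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")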
Feb 25 09:05:10 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4358: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:11 np0005629333 nova_compute[244014]: 2026-02-25 14:05:11.367 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:05:12 np0005629333 nova_compute[244014]: 2026-02-25 14:05:12.388 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:05:12 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4359: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:14 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4360: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:05:16 np0005629333 nova_compute[244014]: 2026-02-25 14:05:16.371 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:05:16 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4361: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:17 np0005629333 nova_compute[244014]: 2026-02-25 14:05:17.390 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:05:18 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4362: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:05:20 np0005629333 nova_compute[244014]: 2026-02-25 14:05:20.886 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
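[editor's note] The "Running periodic task ComputeManager.*" DEBUG lines that recur through this window all come from oslo.service's periodic-task runner. A minimal sketch of that machinery — the class and task names here are illustrative stand-ins, not nova's real task set:

```python
from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF

class Manager(periodic_task.PeriodicTasks):
    """Stand-in for nova's ComputeManager periodic-task surface."""

    def __init__(self):
        super().__init__(CONF)

    @periodic_task.periodic_task  # default spacing: run on every pass
    def _sync_something(self, context):
        # each dispatch logs "Running periodic task Manager._sync_something"
        pass

mgr = Manager()
mgr.run_periodic_tasks(None)  # one scheduler pass, like nova's service loop
```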
Feb 25 09:05:20 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4363: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:21 np0005629333 nova_compute[244014]: 2026-02-25 14:05:21.376 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:05:22 np0005629333 nova_compute[244014]: 2026-02-25 14:05:22.392 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:05:22 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4364: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:24 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4365: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:05:26 np0005629333 nova_compute[244014]: 2026-02-25 14:05:26.379 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:05:26 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4366: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:27 np0005629333 nova_compute[244014]: 2026-02-25 14:05:27.394 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:05:28 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4367: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:05:30 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4368: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:05:31
Feb 25 09:05:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 09:05:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 09:05:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'vms', 'backups']
Feb 25 09:05:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
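[editor's note] The balancer block above shows the two knobs that gate a plan: the max-misplaced guard (0.050000) and the per-plan optimization cap (the 10 in "0/10"). A sketch of the guard, with the values from this log — the function name is mine, not the module's:

```python
def should_optimize(num_misplaced: int, num_pg_total: int,
                    target_max_misplaced_ratio: float = 0.05) -> bool:
    """Skip building a new plan while too many PGs are already misplaced."""
    return (num_misplaced / num_pg_total) <= target_max_misplaced_ratio

# All 305 PGs are active+clean here, so the balancer is free to optimize;
# it then found nothing to move ("prepared 0/10 upmap changes").
print(should_optimize(0, 305))  # -> True
```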
Feb 25 09:05:31 np0005629333 nova_compute[244014]: 2026-02-25 14:05:31.412 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:05:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:05:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:05:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:05:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:05:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:05:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:05:32 np0005629333 nova_compute[244014]: 2026-02-25 14:05:32.397 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:05:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 09:05:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:05:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 09:05:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:05:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:05:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:05:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:05:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:05:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 09:05:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
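[editor's note] The rbd_support lines show the module's two schedule handlers reloading their per-pool schedules. A sketch of the matching CLI views, one per handler, for the pools named in the log:

```python
import subprocess

# Pool names taken from the load_schedules lines above.
for pool in ("vms", "volumes", "backups", "images"):
    subprocess.run(["rbd", "trash", "purge", "schedule", "ls", "--pool", pool],
                   check=False)
    subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls", "--pool", pool],
                   check=False)
```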
Feb 25 09:05:32 np0005629333 nova_compute[244014]: 2026-02-25 14:05:32.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:05:32 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4369: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #222. Immutable memtables: 0.
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.101637) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 139] Flushing memtable with next log file: 222
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028333101721, "job": 139, "event": "flush_started", "num_memtables": 1, "num_entries": 989, "num_deletes": 251, "total_data_size": 1449757, "memory_usage": 1477856, "flush_reason": "Manual Compaction"}
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 139] Level-0 flush table #223: started
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028333110834, "cf_name": "default", "job": 139, "event": "table_file_creation", "file_number": 223, "file_size": 1436383, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 90194, "largest_seqno": 91182, "table_properties": {"data_size": 1431474, "index_size": 2496, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10437, "raw_average_key_size": 19, "raw_value_size": 1421698, "raw_average_value_size": 2672, "num_data_blocks": 112, "num_entries": 532, "num_filter_entries": 532, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772028236, "oldest_key_time": 1772028236, "file_creation_time": 1772028333, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 223, "seqno_to_time_mapping": "N/A"}}
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 139] Flush lasted 9260 microseconds, and 4588 cpu microseconds.
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.110894) [db/flush_job.cc:967] [default] [JOB 139] Level-0 flush table #223: 1436383 bytes OK
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.110920) [db/memtable_list.cc:519] [default] Level-0 commit table #223 started
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.112750) [db/memtable_list.cc:722] [default] Level-0 commit table #223: memtable #1 done
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.112770) EVENT_LOG_v1 {"time_micros": 1772028333112764, "job": 139, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.112798) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 139] Try to delete WAL files size 1445079, prev total WAL file size 1445079, number of live WAL files 2.
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000219.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.113270) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039303336' seq:72057594037927935, type:22 .. '7061786F730039323838' seq:0, type:0; will stop at (end)
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 140] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 139 Base level 0, inputs: [223(1402KB)], [221(9519KB)]
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028333113303, "job": 140, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [223], "files_L6": [221], "score": -1, "input_data_size": 11184199, "oldest_snapshot_seqno": -1}
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 140] Generated table #224: 9892 keys, 9410383 bytes, temperature: kUnknown
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028333163679, "cf_name": "default", "job": 140, "event": "table_file_creation", "file_number": 224, "file_size": 9410383, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9352300, "index_size": 32242, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24773, "raw_key_size": 265023, "raw_average_key_size": 26, "raw_value_size": 9182896, "raw_average_value_size": 928, "num_data_blocks": 1207, "num_entries": 9892, "num_filter_entries": 9892, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772028333, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 224, "seqno_to_time_mapping": "N/A"}}
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.164012) [db/compaction/compaction_job.cc:1663] [default] [JOB 140] Compacted 1@0 + 1@6 files to L6 => 9410383 bytes
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.165217) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 221.4 rd, 186.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.3 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(14.3) write-amplify(6.6) OK, records in: 10406, records dropped: 514 output_compression: NoCompression
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.165239) EVENT_LOG_v1 {"time_micros": 1772028333165228, "job": 140, "event": "compaction_finished", "compaction_time_micros": 50508, "compaction_time_cpu_micros": 32401, "output_level": 6, "num_output_files": 1, "total_output_size": 9410383, "num_input_records": 10406, "num_output_records": 9892, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
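[editor's note] The compaction summary two lines up reports write-amplify(6.6) and read-write-amplify(14.3); both fall straight out of the byte counts in the surrounding EVENT_LOG_v1 records:

```python
l0_in  = 1_436_383           # table #223, the freshly flushed memtable
l6_in  = 11_184_199 - l0_in  # input_data_size minus the L0 file -> table #221
out    = 9_410_383           # table #224, the compacted output
micros = 50_508              # compaction_time_micros

write_amp = out / l0_in                    # bytes written per new byte
rw_amp    = (l0_in + l6_in + out) / l0_in  # bytes moved per new byte
print(f"write-amplify {write_amp:.1f}, read-write-amplify {rw_amp:.1f}")
# bytes / microsecond == MB/s, matching the "221.4 rd, 186.3 wr" above
print(f"{(l0_in + l6_in) / micros:.1f} MB/s rd, {out / micros:.1f} MB/s wr")
```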
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000223.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028333165543, "job": 140, "event": "table_file_deletion", "file_number": 223}
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000221.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028333167140, "job": 140, "event": "table_file_deletion", "file_number": 221}
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.113214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.167259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.167267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.167269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.167272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:05:33 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:05:33.167274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:05:34 np0005629333 nova_compute[244014]: 2026-02-25 14:05:34.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:05:34 np0005629333 nova_compute[244014]: 2026-02-25 14:05:34.912 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 09:05:34 np0005629333 nova_compute[244014]: 2026-02-25 14:05:34.912 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 09:05:34 np0005629333 nova_compute[244014]: 2026-02-25 14:05:34.913 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 09:05:34 np0005629333 nova_compute[244014]: 2026-02-25 14:05:34.913 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 09:05:34 np0005629333 nova_compute[244014]: 2026-02-25 14:05:34.913 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 09:05:34 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4370: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:05:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:05:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1697179958' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:05:35 np0005629333 nova_compute[244014]: 2026-02-25 14:05:35.524 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
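[editor's note] The Running cmd / returned pair above is nova's resource audit shelling out for Ceph cluster stats (this node's ephemeral disk lives on RBD, so "free disk" comes from the cluster, not the local filesystem). A stand-alone equivalent using the same flags as the log; the JSON field names are ceph df's schema as I recall it, so treat them as assumptions:

```python
import json
import subprocess

out = subprocess.check_output(
    ["ceph", "df", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
)
stats = json.loads(out)["stats"]
print(stats["total_bytes"], stats["total_avail_bytes"])
```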
Feb 25 09:05:35 np0005629333 nova_compute[244014]: 2026-02-25 14:05:35.697 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 09:05:35 np0005629333 nova_compute[244014]: 2026-02-25 14:05:35.699 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3518MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 09:05:35 np0005629333 nova_compute[244014]: 2026-02-25 14:05:35.699 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 09:05:35 np0005629333 nova_compute[244014]: 2026-02-25 14:05:35.700 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 09:05:35 np0005629333 nova_compute[244014]: 2026-02-25 14:05:35.763 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 09:05:35 np0005629333 nova_compute[244014]: 2026-02-25 14:05:35.764 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 09:05:35 np0005629333 nova_compute[244014]: 2026-02-25 14:05:35.783 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 09:05:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:05:36 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1016862414' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:05:36 np0005629333 nova_compute[244014]: 2026-02-25 14:05:36.386 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 09:05:36 np0005629333 nova_compute[244014]: 2026-02-25 14:05:36.395 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 09:05:36 np0005629333 nova_compute[244014]: 2026-02-25 14:05:36.446 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:05:36 np0005629333 nova_compute[244014]: 2026-02-25 14:05:36.573 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
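[editor's note] The inventory record above fixes what placement will actually schedule: usable capacity per resource class is (total - reserved) * allocation_ratio. Re-deriving from the logged values:

```python
inv = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, v in inv.items():
    cap = (v["total"] - v["reserved"]) * v["allocation_ratio"]
    print(f"{rc}: {cap:g}")
# -> VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
```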
Feb 25 09:05:36 np0005629333 nova_compute[244014]: 2026-02-25 14:05:36.575 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 09:05:36 np0005629333 nova_compute[244014]: 2026-02-25 14:05:36.576 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 09:05:36 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4371: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:37 np0005629333 nova_compute[244014]: 2026-02-25 14:05:37.398 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:05:38 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4372: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:05:40 np0005629333 nova_compute[244014]: 2026-02-25 14:05:40.576 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:05:40 np0005629333 nova_compute[244014]: 2026-02-25 14:05:40.577 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 09:05:40 np0005629333 nova_compute[244014]: 2026-02-25 14:05:40.577 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 09:05:40 np0005629333 nova_compute[244014]: 2026-02-25 14:05:40.598 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 09:05:40 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4373: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:41 np0005629333 nova_compute[244014]: 2026-02-25 14:05:41.487 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:05:41 np0005629333 podman[429949]: 2026-02-25 14:05:41.737575717 +0000 UTC m=+0.073264161 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 09:05:41 np0005629333 podman[429950]: 2026-02-25 14:05:41.767080704 +0000 UTC m=+0.100448462 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 25 09:05:42 np0005629333 nova_compute[244014]: 2026-02-25 14:05:42.400 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:05:42 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4374: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:43 np0005629333 nova_compute[244014]: 2026-02-25 14:05:43.893 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:05:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 09:05:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:05:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 09:05:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:05:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 09:05:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:05:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:05:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:05:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:05:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:05:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 09:05:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:05:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 09:05:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:05:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:05:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:05:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 09:05:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:05:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 09:05:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:05:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:05:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:05:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
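[editor's note] Each pg_autoscaler "pg target" above is usage_ratio * bias * a cluster PG budget; the logged values are consistent with a budget of 300 (e.g. 100 target PGs per OSD across 3 OSDs, which would fit this ~60 GiB cluster, though that split is an inference). The autoscaler then rounds to a power of two and leaves pg_num alone unless the ideal is far off the current value, hence "quantized to 32 (current 32)". Re-deriving three of the lines:

```python
def pg_target(usage_ratio: float, bias: float, pg_budget: int = 300) -> float:
    # raw (pre-quantization) target, as printed in the log lines above
    return usage_ratio * bias * pg_budget

print(pg_target(7.185749983720779e-06, 1.0))   # .mgr    -> 0.0021557...
print(pg_target(0.0006714637386478266, 1.0))   # images  -> 0.2014391...
print(pg_target(1.3916366864300228e-06, 4.0))  # cephfs.cephfs.meta -> 0.0016699...
```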
Feb 25 09:05:44 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4375: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:05:45 np0005629333 nova_compute[244014]: 2026-02-25 14:05:45.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:05:46 np0005629333 nova_compute[244014]: 2026-02-25 14:05:46.537 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:05:46 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4376: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:47 np0005629333 nova_compute[244014]: 2026-02-25 14:05:47.404 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:05:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 09:05:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1836094477' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 09:05:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 09:05:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1836094477' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 09:05:47 np0005629333 nova_compute[244014]: 2026-02-25 14:05:47.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:05:47 np0005629333 nova_compute[244014]: 2026-02-25 14:05:47.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 09:05:48 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4377: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:05:50 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4378: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:51 np0005629333 nova_compute[244014]: 2026-02-25 14:05:51.542 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:05:52 np0005629333 nova_compute[244014]: 2026-02-25 14:05:52.438 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:05:52 np0005629333 nova_compute[244014]: 2026-02-25 14:05:52.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:05:52 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4379: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:53 np0005629333 nova_compute[244014]: 2026-02-25 14:05:53.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:05:54 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4380: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:05:55.111 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 09:05:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:05:55.111 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 09:05:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:05:55.112 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
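[editor's note] The acquire/release pair above is oslo.concurrency's named lock wrapped around the process-monitor sweep. The usual call-site shape, with the lock name from the log and the body a stand-in:

```python
from oslo_concurrency import lockutils

@lockutils.synchronized("_check_child_processes")
def _check_child_processes():
    # runs with the "_check_child_processes" lock held; entry and exit
    # emit the "acquired" / "released" DEBUG lines seen above
    pass

_check_child_processes()
```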
Feb 25 09:05:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:05:56 np0005629333 nova_compute[244014]: 2026-02-25 14:05:56.545 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:05:56 np0005629333 nova_compute[244014]: 2026-02-25 14:05:56.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:05:56 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4381: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:57 np0005629333 nova_compute[244014]: 2026-02-25 14:05:57.440 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:05:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 09:05:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:05:57 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 09:05:57 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:05:58 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:05:58 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:05:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 09:05:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 09:05:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 09:05:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 09:05:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 09:05:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:05:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 09:05:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 09:05:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 09:05:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 09:05:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 09:05:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 09:05:58 np0005629333 podman[430210]: 2026-02-25 14:05:58.667055355 +0000 UTC m=+0.052178303 container create ad443b7fe26c13169b166fc9e12c65934489404ffe35c4a3011e22a39aa8ade3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_liskov, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 09:05:58 np0005629333 systemd[1]: Started libpod-conmon-ad443b7fe26c13169b166fc9e12c65934489404ffe35c4a3011e22a39aa8ade3.scope.
Feb 25 09:05:58 np0005629333 podman[430210]: 2026-02-25 14:05:58.642219279 +0000 UTC m=+0.027342307 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:05:58 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:05:58 np0005629333 podman[430210]: 2026-02-25 14:05:58.755942488 +0000 UTC m=+0.141065466 container init ad443b7fe26c13169b166fc9e12c65934489404ffe35c4a3011e22a39aa8ade3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_liskov, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:05:58 np0005629333 podman[430210]: 2026-02-25 14:05:58.764197672 +0000 UTC m=+0.149320650 container start ad443b7fe26c13169b166fc9e12c65934489404ffe35c4a3011e22a39aa8ade3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 09:05:58 np0005629333 podman[430210]: 2026-02-25 14:05:58.767813315 +0000 UTC m=+0.152936283 container attach ad443b7fe26c13169b166fc9e12c65934489404ffe35c4a3011e22a39aa8ade3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:05:58 np0005629333 vibrant_liskov[430226]: 167 167
Feb 25 09:05:58 np0005629333 systemd[1]: libpod-ad443b7fe26c13169b166fc9e12c65934489404ffe35c4a3011e22a39aa8ade3.scope: Deactivated successfully.
Feb 25 09:05:58 np0005629333 podman[430210]: 2026-02-25 14:05:58.771132219 +0000 UTC m=+0.156255177 container died ad443b7fe26c13169b166fc9e12c65934489404ffe35c4a3011e22a39aa8ade3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_liskov, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:05:58 np0005629333 systemd[1]: var-lib-containers-storage-overlay-2ff239f5552e38cbd93c47335107bfd1089bf311a94cc2de1a4b241177417a4c-merged.mount: Deactivated successfully.
Feb 25 09:05:58 np0005629333 podman[430210]: 2026-02-25 14:05:58.813274546 +0000 UTC m=+0.198397504 container remove ad443b7fe26c13169b166fc9e12c65934489404ffe35c4a3011e22a39aa8ade3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_liskov, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:05:58 np0005629333 systemd[1]: libpod-conmon-ad443b7fe26c13169b166fc9e12c65934489404ffe35c4a3011e22a39aa8ade3.scope: Deactivated successfully.
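[editor's note] The create → init → start → attach → died → remove sequence above, bracketed by the conmon scope, is the full journal trace of one short-lived, auto-removed container; given the surrounding mgr/cephadm config-key writes, this looks like cephadm running a one-off ceph container as a host probe (the "167 167" line is the container's stdout, plausibly a uid/gid check). Roughly equivalent, with the image digest pinned exactly as in the log and the command a placeholder:

```python
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

# --rm is what produces the "container died" -> "container remove" pair;
# the real cephadm invocation and its arguments are not reproduced here.
subprocess.run(["podman", "run", "--rm", IMAGE, "true"], check=True)
```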
Feb 25 09:05:58 np0005629333 podman[430251]: 2026-02-25 14:05:58.947648711 +0000 UTC m=+0.040010477 container create 3b26f35051be32820c64cb847cd42997b46477e15992808430523ce99bde3ed3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:05:58 np0005629333 systemd[1]: Started libpod-conmon-3b26f35051be32820c64cb847cd42997b46477e15992808430523ce99bde3ed3.scope.
Feb 25 09:05:58 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4382: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:05:59 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:05:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9ad13beb21f35e4628aa31d9c9013576e3ea6f641f58255a5697d826524147/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:05:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9ad13beb21f35e4628aa31d9c9013576e3ea6f641f58255a5697d826524147/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:05:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9ad13beb21f35e4628aa31d9c9013576e3ea6f641f58255a5697d826524147/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:05:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9ad13beb21f35e4628aa31d9c9013576e3ea6f641f58255a5697d826524147/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:05:59 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9ad13beb21f35e4628aa31d9c9013576e3ea6f641f58255a5697d826524147/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 09:05:59 np0005629333 podman[430251]: 2026-02-25 14:05:58.929892826 +0000 UTC m=+0.022254613 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:05:59 np0005629333 podman[430251]: 2026-02-25 14:05:59.035818974 +0000 UTC m=+0.128180760 container init 3b26f35051be32820c64cb847cd42997b46477e15992808430523ce99bde3ed3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_golick, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 09:05:59 np0005629333 podman[430251]: 2026-02-25 14:05:59.046501607 +0000 UTC m=+0.138863373 container start 3b26f35051be32820c64cb847cd42997b46477e15992808430523ce99bde3ed3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_golick, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:05:59 np0005629333 podman[430251]: 2026-02-25 14:05:59.04942454 +0000 UTC m=+0.141786306 container attach 3b26f35051be32820c64cb847cd42997b46477e15992808430523ce99bde3ed3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_golick, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 09:05:59 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 09:05:59 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:05:59 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 09:05:59 np0005629333 youthful_golick[430267]: --> passed data devices: 0 physical, 3 LVM
Feb 25 09:05:59 np0005629333 youthful_golick[430267]: --> All data devices are unavailable
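[editor's note] The two youthful_golick lines above are the tail of a cephadm/ceph-volume device probe: the drive group matched 0 physical disks and 3 LVM devices, and all three were rejected because they already carry OSD data. A minimal sketch of that availability filter in Python, assuming inventory entries shaped like ceph-volume's JSON (the field names here are assumptions, not ceph-volume internals):

    # Sketch of the "All data devices are unavailable" decision, assuming
    # entries like {"path": ..., "available": bool, "lvs": [...]}.
    def usable_data_devices(inventory):
        physical = [d for d in inventory if not d.get("lvs")]
        lvm = [d for d in inventory if d.get("lvs")]
        print(f"--> passed data devices: {len(physical)} physical, {len(lvm)} LVM")
        usable = [d for d in inventory if d.get("available")]
        if not usable:
            print("--> All data devices are unavailable")
        return usable

This is the expected outcome on a node whose OSDs are already deployed: the probe finds nothing to do, which is why the container exits and is removed immediately afterwards.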
Feb 25 09:05:59 np0005629333 systemd[1]: libpod-3b26f35051be32820c64cb847cd42997b46477e15992808430523ce99bde3ed3.scope: Deactivated successfully.
Feb 25 09:05:59 np0005629333 conmon[430267]: conmon 3b26f35051be32820c64 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3b26f35051be32820c64cb847cd42997b46477e15992808430523ce99bde3ed3.scope/container/memory.events
Feb 25 09:05:59 np0005629333 podman[430251]: 2026-02-25 14:05:59.492778866 +0000 UTC m=+0.585140652 container died 3b26f35051be32820c64cb847cd42997b46477e15992808430523ce99bde3ed3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_golick, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 09:05:59 np0005629333 systemd[1]: var-lib-containers-storage-overlay-3d9ad13beb21f35e4628aa31d9c9013576e3ea6f641f58255a5697d826524147-merged.mount: Deactivated successfully.
Feb 25 09:05:59 np0005629333 podman[430251]: 2026-02-25 14:05:59.549368832 +0000 UTC m=+0.641730628 container remove 3b26f35051be32820c64cb847cd42997b46477e15992808430523ce99bde3ed3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_golick, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 09:05:59 np0005629333 systemd[1]: libpod-conmon-3b26f35051be32820c64cb847cd42997b46477e15992808430523ce99bde3ed3.scope: Deactivated successfully.
Feb 25 09:06:00 np0005629333 podman[430362]: 2026-02-25 14:06:00.027686812 +0000 UTC m=+0.051788962 container create 7182e3983d3fc1e216156c286a712f6e9d1748c3bef5737bc33e2f702aaf5557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_khorana, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 09:06:00 np0005629333 systemd[1]: Started libpod-conmon-7182e3983d3fc1e216156c286a712f6e9d1748c3bef5737bc33e2f702aaf5557.scope.
Feb 25 09:06:00 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:06:00 np0005629333 podman[430362]: 2026-02-25 14:06:00.101206569 +0000 UTC m=+0.125308709 container init 7182e3983d3fc1e216156c286a712f6e9d1748c3bef5737bc33e2f702aaf5557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 09:06:00 np0005629333 podman[430362]: 2026-02-25 14:06:00.010485993 +0000 UTC m=+0.034588133 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:06:00 np0005629333 podman[430362]: 2026-02-25 14:06:00.107430615 +0000 UTC m=+0.131532775 container start 7182e3983d3fc1e216156c286a712f6e9d1748c3bef5737bc33e2f702aaf5557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_khorana, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:06:00 np0005629333 podman[430362]: 2026-02-25 14:06:00.111266574 +0000 UTC m=+0.135368724 container attach 7182e3983d3fc1e216156c286a712f6e9d1748c3bef5737bc33e2f702aaf5557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_khorana, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:06:00 np0005629333 magical_khorana[430378]: 167 167
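[editor's note] The bare "167 167" printed by magical_khorana (and by serene_newton further below) looks like a UID/GID probe: 167:167 is the ceph user and group inside the official Ceph containers, and cephadm uses the answer to chown daemon directories on the host. A hedged reproduction of the same check (the exact command cephadm issues may differ):

    import subprocess

    # Ask the Ceph image for the uid/gid of the in-container ceph user;
    # the image digest is taken from the log lines above.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # expected: "167 167"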
Feb 25 09:06:00 np0005629333 systemd[1]: libpod-7182e3983d3fc1e216156c286a712f6e9d1748c3bef5737bc33e2f702aaf5557.scope: Deactivated successfully.
Feb 25 09:06:00 np0005629333 podman[430362]: 2026-02-25 14:06:00.114006722 +0000 UTC m=+0.138108852 container died 7182e3983d3fc1e216156c286a712f6e9d1748c3bef5737bc33e2f702aaf5557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_khorana, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 09:06:00 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ab39b5e1e7d37acb0759d3172b229497cbf60dbf2b68d2dc56e3a637d74cf424-merged.mount: Deactivated successfully.
Feb 25 09:06:00 np0005629333 podman[430362]: 2026-02-25 14:06:00.157252 +0000 UTC m=+0.181354120 container remove 7182e3983d3fc1e216156c286a712f6e9d1748c3bef5737bc33e2f702aaf5557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_khorana, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 09:06:00 np0005629333 systemd[1]: libpod-conmon-7182e3983d3fc1e216156c286a712f6e9d1748c3bef5737bc33e2f702aaf5557.scope: Deactivated successfully.
Feb 25 09:06:00 np0005629333 podman[430401]: 2026-02-25 14:06:00.287463917 +0000 UTC m=+0.036357514 container create e97b4d2b4e40cf14f417cfecca56a4a7acf71d476e79a5d14e1842a48cbebdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 09:06:00 np0005629333 systemd[1]: Started libpod-conmon-e97b4d2b4e40cf14f417cfecca56a4a7acf71d476e79a5d14e1842a48cbebdc4.scope.
Feb 25 09:06:00 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:06:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d35ff2056d7068e3a63a7826b4332637d15c0d6cb8e34d2246ea8e7f3448a0bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:06:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d35ff2056d7068e3a63a7826b4332637d15c0d6cb8e34d2246ea8e7f3448a0bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:06:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d35ff2056d7068e3a63a7826b4332637d15c0d6cb8e34d2246ea8e7f3448a0bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:06:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d35ff2056d7068e3a63a7826b4332637d15c0d6cb8e34d2246ea8e7f3448a0bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:06:00 np0005629333 podman[430401]: 2026-02-25 14:06:00.270913387 +0000 UTC m=+0.019807024 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:06:00 np0005629333 podman[430401]: 2026-02-25 14:06:00.371830812 +0000 UTC m=+0.120724409 container init e97b4d2b4e40cf14f417cfecca56a4a7acf71d476e79a5d14e1842a48cbebdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhabha, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 09:06:00 np0005629333 podman[430401]: 2026-02-25 14:06:00.376635638 +0000 UTC m=+0.125529245 container start e97b4d2b4e40cf14f417cfecca56a4a7acf71d476e79a5d14e1842a48cbebdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:06:00 np0005629333 podman[430401]: 2026-02-25 14:06:00.384012028 +0000 UTC m=+0.132905645 container attach e97b4d2b4e40cf14f417cfecca56a4a7acf71d476e79a5d14e1842a48cbebdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 09:06:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]: {
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:    "0": [
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:        {
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "devices": [
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "/dev/loop3"
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            ],
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "lv_name": "ceph_lv0",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "lv_size": "21470642176",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "name": "ceph_lv0",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "tags": {
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.cluster_name": "ceph",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.crush_device_class": "",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.encrypted": "0",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.objectstore": "bluestore",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.osd_id": "0",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.type": "block",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.vdo": "0",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.with_tpm": "0"
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            },
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "type": "block",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "vg_name": "ceph_vg0"
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:        }
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:    ],
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:    "1": [
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:        {
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "devices": [
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "/dev/loop4"
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            ],
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "lv_name": "ceph_lv1",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "lv_size": "21470642176",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "name": "ceph_lv1",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "tags": {
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.cluster_name": "ceph",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.crush_device_class": "",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.encrypted": "0",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.objectstore": "bluestore",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.osd_id": "1",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.type": "block",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.vdo": "0",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.with_tpm": "0"
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            },
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "type": "block",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "vg_name": "ceph_vg1"
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:        }
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:    ],
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:    "2": [
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:        {
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "devices": [
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "/dev/loop5"
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            ],
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "lv_name": "ceph_lv2",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "lv_size": "21470642176",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "name": "ceph_lv2",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "tags": {
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.cluster_name": "ceph",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.crush_device_class": "",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.encrypted": "0",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.objectstore": "bluestore",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.osd_id": "2",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.type": "block",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.vdo": "0",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:                "ceph.with_tpm": "0"
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            },
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "type": "block",
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:            "vg_name": "ceph_vg2"
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:        }
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]:    ]
Feb 25 09:06:00 np0005629333 blissful_bhabha[430418]: }
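[editor's note] The blissful_bhabha output above is a `ceph-volume lvm list --format json`-shaped report: the top-level keys are OSD ids and each value is a list of LV records. A short sketch that reduces such a report to one line per OSD, assuming the JSON was captured to a file (lvm_list.json is a hypothetical name):

    import json

    with open("lvm_list.json") as fh:   # hypothetical capture of the report
        report = json.load(fh)

    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            devices = ",".join(lv.get("devices", []))
            print(f"osd.{osd_id}: {lv['lv_path']} ({lv['type']}) on {devices}")
    # osd.0: /dev/ceph_vg0/ceph_lv0 (block) on /dev/loop3
    # osd.1: /dev/ceph_vg1/ceph_lv1 (block) on /dev/loop4
    # osd.2: /dev/ceph_vg2/ceph_lv2 (block) on /dev/loop5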
Feb 25 09:06:00 np0005629333 systemd[1]: libpod-e97b4d2b4e40cf14f417cfecca56a4a7acf71d476e79a5d14e1842a48cbebdc4.scope: Deactivated successfully.
Feb 25 09:06:00 np0005629333 podman[430427]: 2026-02-25 14:06:00.748950358 +0000 UTC m=+0.021303286 container died e97b4d2b4e40cf14f417cfecca56a4a7acf71d476e79a5d14e1842a48cbebdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhabha, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Feb 25 09:06:00 np0005629333 systemd[1]: var-lib-containers-storage-overlay-d35ff2056d7068e3a63a7826b4332637d15c0d6cb8e34d2246ea8e7f3448a0bb-merged.mount: Deactivated successfully.
Feb 25 09:06:00 np0005629333 podman[430427]: 2026-02-25 14:06:00.791545247 +0000 UTC m=+0.063898165 container remove e97b4d2b4e40cf14f417cfecca56a4a7acf71d476e79a5d14e1842a48cbebdc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 09:06:00 np0005629333 systemd[1]: libpod-conmon-e97b4d2b4e40cf14f417cfecca56a4a7acf71d476e79a5d14e1842a48cbebdc4.scope: Deactivated successfully.
Feb 25 09:06:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4383: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:06:01 np0005629333 podman[430504]: 2026-02-25 14:06:01.268596311 +0000 UTC m=+0.056670110 container create f71819877f83fc43aad4033ef44b28ced92b5bad8a581266da17d80791cc416f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_newton, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:06:01 np0005629333 systemd[1]: Started libpod-conmon-f71819877f83fc43aad4033ef44b28ced92b5bad8a581266da17d80791cc416f.scope.
Feb 25 09:06:01 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:06:01 np0005629333 podman[430504]: 2026-02-25 14:06:01.245115164 +0000 UTC m=+0.033189013 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:06:01 np0005629333 podman[430504]: 2026-02-25 14:06:01.354948942 +0000 UTC m=+0.143022801 container init f71819877f83fc43aad4033ef44b28ced92b5bad8a581266da17d80791cc416f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_newton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 09:06:01 np0005629333 podman[430504]: 2026-02-25 14:06:01.365412259 +0000 UTC m=+0.153486068 container start f71819877f83fc43aad4033ef44b28ced92b5bad8a581266da17d80791cc416f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_newton, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Feb 25 09:06:01 np0005629333 podman[430504]: 2026-02-25 14:06:01.370652288 +0000 UTC m=+0.158726107 container attach f71819877f83fc43aad4033ef44b28ced92b5bad8a581266da17d80791cc416f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_newton, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 09:06:01 np0005629333 serene_newton[430520]: 167 167
Feb 25 09:06:01 np0005629333 systemd[1]: libpod-f71819877f83fc43aad4033ef44b28ced92b5bad8a581266da17d80791cc416f.scope: Deactivated successfully.
Feb 25 09:06:01 np0005629333 podman[430504]: 2026-02-25 14:06:01.372318745 +0000 UTC m=+0.160392544 container died f71819877f83fc43aad4033ef44b28ced92b5bad8a581266da17d80791cc416f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_newton, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 25 09:06:01 np0005629333 systemd[1]: var-lib-containers-storage-overlay-745609be49af38f1397ff652c7323d563dedc1d2325f528db5b5451f374e38ec-merged.mount: Deactivated successfully.
Feb 25 09:06:01 np0005629333 podman[430504]: 2026-02-25 14:06:01.417196059 +0000 UTC m=+0.205269858 container remove f71819877f83fc43aad4033ef44b28ced92b5bad8a581266da17d80791cc416f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Feb 25 09:06:01 np0005629333 systemd[1]: libpod-conmon-f71819877f83fc43aad4033ef44b28ced92b5bad8a581266da17d80791cc416f.scope: Deactivated successfully.
Feb 25 09:06:01 np0005629333 nova_compute[244014]: 2026-02-25 14:06:01.549 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263

Feb 25 09:06:01 np0005629333 podman[430543]: 2026-02-25 14:06:01.619265076 +0000 UTC m=+0.052602354 container create 3a4e59abbe3c46ec1133b88ee75a8887abdf5f5f1d1bd0124842e45afe5fb420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Feb 25 09:06:01 np0005629333 systemd[1]: Started libpod-conmon-3a4e59abbe3c46ec1133b88ee75a8887abdf5f5f1d1bd0124842e45afe5fb420.scope.
Feb 25 09:06:01 np0005629333 podman[430543]: 2026-02-25 14:06:01.590398046 +0000 UTC m=+0.023735374 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:06:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:06:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:06:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:06:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:06:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:06:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:06:01 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:06:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb79938a80a957159280486c20621e0666dffec0272fb27e34d21377d8c449f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:06:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb79938a80a957159280486c20621e0666dffec0272fb27e34d21377d8c449f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:06:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb79938a80a957159280486c20621e0666dffec0272fb27e34d21377d8c449f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:06:01 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fb79938a80a957159280486c20621e0666dffec0272fb27e34d21377d8c449f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:06:01 np0005629333 podman[430543]: 2026-02-25 14:06:01.732351566 +0000 UTC m=+0.165688854 container init 3a4e59abbe3c46ec1133b88ee75a8887abdf5f5f1d1bd0124842e45afe5fb420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Feb 25 09:06:01 np0005629333 podman[430543]: 2026-02-25 14:06:01.742358421 +0000 UTC m=+0.175695699 container start 3a4e59abbe3c46ec1133b88ee75a8887abdf5f5f1d1bd0124842e45afe5fb420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ellis, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Feb 25 09:06:01 np0005629333 podman[430543]: 2026-02-25 14:06:01.745974243 +0000 UTC m=+0.179311531 container attach 3a4e59abbe3c46ec1133b88ee75a8887abdf5f5f1d1bd0124842e45afe5fb420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ellis, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:06:02 np0005629333 nova_compute[244014]: 2026-02-25 14:06:02.442 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:06:02 np0005629333 lvm[430636]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 09:06:02 np0005629333 lvm[430636]: VG ceph_vg0 finished
Feb 25 09:06:02 np0005629333 lvm[430639]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 09:06:02 np0005629333 lvm[430639]: VG ceph_vg1 finished
Feb 25 09:06:02 np0005629333 lvm[430641]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 09:06:02 np0005629333 lvm[430641]: VG ceph_vg2 finished
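[editor's note] The three lvm[...] pairs above are event-driven autoactivation: each loop device holds exactly one PV, so as soon as pvscan sees the PV its single-PV volume group (ceph_vg0 through ceph_vg2) is "complete" and gets activated. The resulting layout can be confirmed with stock lvs(8) JSON reporting, e.g.:

    import json, subprocess

    # List LVs with their VG and tags; keep only Ceph-tagged LVs.
    out = subprocess.run(
        ["lvs", "-o", "lv_name,vg_name,lv_tags", "--reportformat", "json"],
        capture_output=True, text=True, check=True,
    )
    for lv in json.loads(out.stdout)["report"][0]["lv"]:
        if "ceph.osd_id" in lv["lv_tags"]:
            print(lv["vg_name"], lv["lv_name"])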
Feb 25 09:06:02 np0005629333 angry_ellis[430560]: {}
Feb 25 09:06:02 np0005629333 systemd[1]: libpod-3a4e59abbe3c46ec1133b88ee75a8887abdf5f5f1d1bd0124842e45afe5fb420.scope: Deactivated successfully.
Feb 25 09:06:02 np0005629333 systemd[1]: libpod-3a4e59abbe3c46ec1133b88ee75a8887abdf5f5f1d1bd0124842e45afe5fb420.scope: Consumed 1.393s CPU time.
Feb 25 09:06:02 np0005629333 podman[430543]: 2026-02-25 14:06:02.690412626 +0000 UTC m=+1.123749884 container died 3a4e59abbe3c46ec1133b88ee75a8887abdf5f5f1d1bd0124842e45afe5fb420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ellis, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 09:06:02 np0005629333 systemd[1]: var-lib-containers-storage-overlay-2fb79938a80a957159280486c20621e0666dffec0272fb27e34d21377d8c449f-merged.mount: Deactivated successfully.
Feb 25 09:06:02 np0005629333 podman[430543]: 2026-02-25 14:06:02.802412675 +0000 UTC m=+1.235749913 container remove 3a4e59abbe3c46ec1133b88ee75a8887abdf5f5f1d1bd0124842e45afe5fb420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_ellis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 09:06:02 np0005629333 systemd[1]: libpod-conmon-3a4e59abbe3c46ec1133b88ee75a8887abdf5f5f1d1bd0124842e45afe5fb420.scope: Deactivated successfully.
Feb 25 09:06:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 09:06:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:06:02 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 09:06:02 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:06:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4384: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:06:03 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:06:03 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:06:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4385: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:06:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
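[editor's note] The recurring _set_new_cache_sizes line is the monitor's memory autotuner carving its cache budget into incremental-osdmap, full-osdmap, and RocksDB (kv) allocations. The figures in the line are self-consistent, as a quick check shows:

    # Arithmetic check of the _set_new_cache_sizes figures above.
    cache_size = 1020054731          # ~972.8 MiB total budget
    inc_alloc  = 343932928           # 328 MiB, incremental osdmaps
    full_alloc = 348127232           # 332 MiB, full osdmaps
    kv_alloc   = 318767104           # 304 MiB, RocksDB cache
    carved = inc_alloc + full_alloc + kv_alloc
    assert carved <= cache_size
    print(carved / 2**20)            # 964.0 MiB of the budget allocated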
Feb 25 09:06:06 np0005629333 nova_compute[244014]: 2026-02-25 14:06:06.554 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:06:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4386: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:06:07 np0005629333 nova_compute[244014]: 2026-02-25 14:06:07.444 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:06:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4387: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:06:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:06:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4388: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:06:11 np0005629333 nova_compute[244014]: 2026-02-25 14:06:11.558 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:06:12 np0005629333 nova_compute[244014]: 2026-02-25 14:06:12.448 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:06:12 np0005629333 podman[430682]: 2026-02-25 14:06:12.737456597 +0000 UTC m=+0.070291397 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Feb 25 09:06:12 np0005629333 podman[430683]: 2026-02-25 14:06:12.762805226 +0000 UTC m=+0.094848583 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0)
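[editor's note] The two health_status events above are podman's periodic healthchecks firing for ovn_metadata_agent and ovn_controller: per the embedded config_data, each container bind-mounts a healthcheck directory and runs /openstack/healthcheck inside the container, and exit status 0 is recorded as healthy with a failing streak of 0. In essence (a simplified sketch, not podman's actual timer-driven implementation):

    import subprocess

    def probe(container, test_cmd="/openstack/healthcheck"):
        # Run the configured test inside the container; exit 0 => healthy.
        rc = subprocess.run(["podman", "exec", container, test_cmd]).returncode
        return "healthy" if rc == 0 else "unhealthy"

    for name in ("ovn_metadata_agent", "ovn_controller"):
        print(name, probe(name))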
Feb 25 09:06:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4389: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:06:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4390: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:06:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:06:16 np0005629333 nova_compute[244014]: 2026-02-25 14:06:16.607 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:06:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4391: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:06:17 np0005629333 nova_compute[244014]: 2026-02-25 14:06:17.451 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:06:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4392: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:06:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:06:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4393: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:06:21 np0005629333 nova_compute[244014]: 2026-02-25 14:06:21.611 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:06:22 np0005629333 nova_compute[244014]: 2026-02-25 14:06:22.452 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:06:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4394: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 10 op/s
Feb 25 09:06:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4395: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 10 op/s
Feb 25 09:06:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:06:26 np0005629333 nova_compute[244014]: 2026-02-25 14:06:26.616 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:06:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4396: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 09:06:27 np0005629333 nova_compute[244014]: 2026-02-25 14:06:27.456 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:06:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4397: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 09:06:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:06:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4398: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 09:06:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:06:31
Feb 25 09:06:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 09:06:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 09:06:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'images', 'volumes', 'default.rgw.log']
Feb 25 09:06:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
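
The balancer block above is one automatic optimization pass: mode upmap, a misplaced-object ceiling of 0.05, and "prepared 0/10 upmap changes", meaning none of the listed pools needed remapping. The gating logic is simple; a simplified sketch built from the two logged parameters (the function and argument names are illustrative, not the module's real internals):

    def plan_upmap(misplaced_ratio: float, candidate_changes: list,
                   max_misplaced: float = 0.05, max_changes: int = 10) -> list:
        # Skip the pass while too much data is already misplaced; otherwise
        # apply at most max_changes upmap remappings per plan.
        if misplaced_ratio >= max_misplaced:
            return []
        return candidate_changes[:max_changes]

    # "prepared 0/10 upmap changes" corresponds to an empty candidate list:
    print(plan_upmap(misplaced_ratio=0.0, candidate_changes=[]))   # []
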
Feb 25 09:06:31 np0005629333 nova_compute[244014]: 2026-02-25 14:06:31.664 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:06:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:06:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:06:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:06:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:06:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:06:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:06:32 np0005629333 nova_compute[244014]: 2026-02-25 14:06:32.457 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:06:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 09:06:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:06:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 09:06:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:06:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:06:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:06:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:06:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 09:06:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:06:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 09:06:32 np0005629333 nova_compute[244014]: 2026-02-25 14:06:32.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
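
From here on, every "Running periodic task ComputeManager.<name>" line is oslo.service's periodic-task machinery iterating over decorated manager methods; the message itself comes from run_periodic_tasks (periodic_task.py:210, as the log shows). A minimal sketch of the pattern, assuming oslo.service and oslo.config are installed; the Manager class and the 60 s spacing are illustrative:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _instance_usage_audit(self, context):
            # Body elided; oslo.service logs "Running periodic task ..."
            # before invoking each decorated method.
            pass

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)
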
Feb 25 09:06:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4399: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Feb 25 09:06:34 np0005629333 nova_compute[244014]: 2026-02-25 14:06:34.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:06:34 np0005629333 nova_compute[244014]: 2026-02-25 14:06:34.912 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 09:06:34 np0005629333 nova_compute[244014]: 2026-02-25 14:06:34.913 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 09:06:34 np0005629333 nova_compute[244014]: 2026-02-25 14:06:34.913 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
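
The acquiring/acquired/released triplet above is oslo.concurrency's standard instrumentation: "compute_resources" names an in-process semaphore, and the waited/held durations bracket the critical section. Nova wraps its resource-tracker methods in it roughly like this sketch (assuming oslo.concurrency is installed; the function body is a stand-in):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Everything here runs while holding the "compute_resources"
        # semaphore, so concurrent periodic tasks serialize exactly as
        # the acquire/release log lines show.
        pass

    clean_compute_node_cache()
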
Feb 25 09:06:34 np0005629333 nova_compute[244014]: 2026-02-25 14:06:34.914 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 09:06:34 np0005629333 nova_compute[244014]: 2026-02-25 14:06:34.914 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 09:06:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4400: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 5 op/s
Feb 25 09:06:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:06:35 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1446813007' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:06:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:06:35 np0005629333 nova_compute[244014]: 2026-02-25 14:06:35.480 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
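
The mon audit entries at 09:06:35 and this CMD line are two views of the same call: Nova's periodic resource audit shells out to ceph df to size its RBD-backed storage. A client-side sketch using the identical arguments, assuming the logged /etc/ceph/ceph.conf and the client.openstack keyring are readable locally ("stats" and "total_avail_bytes" are standard keys in ceph df JSON output):

    import json
    import subprocess

    out = subprocess.check_output([
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    stats = json.loads(out)["stats"]
    print(f"cluster free: {stats['total_avail_bytes'] / 1024**3:.1f} GiB")
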
Feb 25 09:06:35 np0005629333 nova_compute[244014]: 2026-02-25 14:06:35.622 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 09:06:35 np0005629333 nova_compute[244014]: 2026-02-25 14:06:35.624 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3500MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 09:06:35 np0005629333 nova_compute[244014]: 2026-02-25 14:06:35.624 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 09:06:35 np0005629333 nova_compute[244014]: 2026-02-25 14:06:35.625 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 09:06:35 np0005629333 nova_compute[244014]: 2026-02-25 14:06:35.697 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 09:06:35 np0005629333 nova_compute[244014]: 2026-02-25 14:06:35.698 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 09:06:35 np0005629333 nova_compute[244014]: 2026-02-25 14:06:35.719 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Feb 25 09:06:35 np0005629333 nova_compute[244014]: 2026-02-25 14:06:35.739 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Feb 25 09:06:35 np0005629333 nova_compute[244014]: 2026-02-25 14:06:35.740 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Feb 25 09:06:35 np0005629333 nova_compute[244014]: 2026-02-25 14:06:35.755 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Feb 25 09:06:35 np0005629333 nova_compute[244014]: 2026-02-25 14:06:35.776 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
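
The inventory reported to Placement above encodes capacity per resource class as total, reserved, and allocation_ratio; the schedulable amount works out to (total - reserved) * allocation_ratio, which is how Placement accounts usage against a provider. A worked check with the logged values:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f"{rc}: {capacity:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2

So with nothing running (used_vcpus=0 in the final resource view below), this node can still place up to 32 vCPUs of instances thanks to the 4.0 overcommit ratio.
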
Feb 25 09:06:35 np0005629333 nova_compute[244014]: 2026-02-25 14:06:35.790 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 09:06:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:06:36 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3857693278' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:06:36 np0005629333 nova_compute[244014]: 2026-02-25 14:06:36.374 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 09:06:36 np0005629333 nova_compute[244014]: 2026-02-25 14:06:36.383 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 09:06:36 np0005629333 nova_compute[244014]: 2026-02-25 14:06:36.409 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 09:06:36 np0005629333 nova_compute[244014]: 2026-02-25 14:06:36.412 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 09:06:36 np0005629333 nova_compute[244014]: 2026-02-25 14:06:36.413 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 09:06:36 np0005629333 nova_compute[244014]: 2026-02-25 14:06:36.709 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:06:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4401: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 5 op/s
Feb 25 09:06:37 np0005629333 nova_compute[244014]: 2026-02-25 14:06:37.458 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:06:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4402: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:06:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:06:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4403: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:06:41 np0005629333 nova_compute[244014]: 2026-02-25 14:06:41.414 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:06:41 np0005629333 nova_compute[244014]: 2026-02-25 14:06:41.415 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 09:06:41 np0005629333 nova_compute[244014]: 2026-02-25 14:06:41.415 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 09:06:41 np0005629333 nova_compute[244014]: 2026-02-25 14:06:41.433 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 09:06:41 np0005629333 nova_compute[244014]: 2026-02-25 14:06:41.747 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:06:42 np0005629333 nova_compute[244014]: 2026-02-25 14:06:42.460 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:06:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4404: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:06:43 np0005629333 podman[430773]: 2026-02-25 14:06:43.730308207 +0000 UTC m=+0.071968284 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb 25 09:06:43 np0005629333 podman[430774]: 2026-02-25 14:06:43.763998063 +0000 UTC m=+0.102545953 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Feb 25 09:06:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 09:06:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:06:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 09:06:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:06:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 09:06:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:06:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:06:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:06:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:06:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:06:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 09:06:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:06:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 09:06:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:06:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:06:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:06:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 09:06:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:06:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 09:06:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:06:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:06:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:06:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
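
Every pg_autoscaler line above is the same computation: pg target = space ratio * bias * PG budget, where the budget implied by the logged numbers is 300 (e.g. 1.73878e-05 * 300 = 0.005216..., matching the 'vms' line), consistent with this three-OSD cluster at the default mon_target_pg_per_osd of 100. The target is then quantized to a power of two and floored at the pool's minimum, which is why every pool stays at its current pg_num. A sketch reproducing two of the lines (the quantize helper is a simplification of the real autoscaler logic):

    def pg_target(space_ratio: float, bias: float, pg_budget: int = 300) -> float:
        return space_ratio * bias * pg_budget

    def quantize(target: float, pg_min: int) -> int:
        # Simplified: round up to a power of two, never below the pool minimum.
        pgs = max(int(target), pg_min)
        return 1 << (pgs - 1).bit_length() if pgs > 1 else 1

    print(pg_target(1.73878357684759e-05, 1.0))                   # vms: 0.005216...
    print(quantize(pg_target(1.73878357684759e-05, 1.0), 32))     # -> 32
    print(quantize(pg_target(1.3916366864300228e-06, 4.0), 16))   # cephfs meta -> 16
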
Feb 25 09:06:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4405: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:06:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:06:45 np0005629333 nova_compute[244014]: 2026-02-25 14:06:45.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:06:45 np0005629333 nova_compute[244014]: 2026-02-25 14:06:45.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:06:46 np0005629333 nova_compute[244014]: 2026-02-25 14:06:46.751 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:06:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4406: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:06:47 np0005629333 nova_compute[244014]: 2026-02-25 14:06:47.462 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:06:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 09:06:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1600990381' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 09:06:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 09:06:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1600990381' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
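
Both dispatches above (df, then osd pool get-quota on the volumes pool) arrive from client.openstack on another host (192.168.122.10). The same structured mon commands can be issued programmatically through the python rados bindings instead of the ceph CLI; a sketch, assuming the client.openstack keyring is available where it runs:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    try:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes",
                     "format": "json"}):
            ret, outbuf, errs = cluster.mon_command(json.dumps(cmd), b'')
            if ret == 0:
                print(cmd["prefix"], "->", json.loads(outbuf))
    finally:
        cluster.shutdown()
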
Feb 25 09:06:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4407: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:06:49 np0005629333 nova_compute[244014]: 2026-02-25 14:06:49.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:06:49 np0005629333 nova_compute[244014]: 2026-02-25 14:06:49.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 09:06:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:06:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4408: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:06:51 np0005629333 nova_compute[244014]: 2026-02-25 14:06:51.802 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:06:52 np0005629333 nova_compute[244014]: 2026-02-25 14:06:52.464 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:06:52 np0005629333 nova_compute[244014]: 2026-02-25 14:06:52.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:06:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4409: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:06:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4410: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:06:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:06:55.112 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 09:06:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:06:55.112 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 09:06:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:06:55.113 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 09:06:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:06:55 np0005629333 nova_compute[244014]: 2026-02-25 14:06:55.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:06:56 np0005629333 nova_compute[244014]: 2026-02-25 14:06:56.807 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:06:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4411: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:06:57 np0005629333 nova_compute[244014]: 2026-02-25 14:06:57.498 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:06:57 np0005629333 nova_compute[244014]: 2026-02-25 14:06:57.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:06:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4412: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:07:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4413: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:07:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:07:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:07:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:07:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:07:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:07:01 np0005629333 nova_compute[244014]: 2026-02-25 14:07:01.811 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:07:02 np0005629333 nova_compute[244014]: 2026-02-25 14:07:02.550 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:07:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4414: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 09:07:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 09:07:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 09:07:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 09:07:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 09:07:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:07:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 09:07:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 09:07:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 09:07:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 09:07:03 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 09:07:03 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 09:07:04 np0005629333 podman[430960]: 2026-02-25 14:07:04.077123491 +0000 UTC m=+0.051809152 container create 1f51e79367c877e5038629b21dc406dab1ef545277ad28e6eb3fe26b7c6e2562 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_edison, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 25 09:07:04 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 09:07:04 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:07:04 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 09:07:04 np0005629333 systemd[1]: Started libpod-conmon-1f51e79367c877e5038629b21dc406dab1ef545277ad28e6eb3fe26b7c6e2562.scope.
Feb 25 09:07:04 np0005629333 podman[430960]: 2026-02-25 14:07:04.057792842 +0000 UTC m=+0.032478523 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:07:04 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:07:04 np0005629333 podman[430960]: 2026-02-25 14:07:04.173042744 +0000 UTC m=+0.147728465 container init 1f51e79367c877e5038629b21dc406dab1ef545277ad28e6eb3fe26b7c6e2562 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_edison, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 09:07:04 np0005629333 podman[430960]: 2026-02-25 14:07:04.180072494 +0000 UTC m=+0.154758175 container start 1f51e79367c877e5038629b21dc406dab1ef545277ad28e6eb3fe26b7c6e2562 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_edison, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:07:04 np0005629333 podman[430960]: 2026-02-25 14:07:04.183878232 +0000 UTC m=+0.158563923 container attach 1f51e79367c877e5038629b21dc406dab1ef545277ad28e6eb3fe26b7c6e2562 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_edison, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:07:04 np0005629333 relaxed_edison[430976]: 167 167
Feb 25 09:07:04 np0005629333 systemd[1]: libpod-1f51e79367c877e5038629b21dc406dab1ef545277ad28e6eb3fe26b7c6e2562.scope: Deactivated successfully.
Feb 25 09:07:04 np0005629333 conmon[430976]: conmon 1f51e79367c877e50386 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1f51e79367c877e5038629b21dc406dab1ef545277ad28e6eb3fe26b7c6e2562.scope/container/memory.events
Feb 25 09:07:04 np0005629333 podman[430960]: 2026-02-25 14:07:04.187038931 +0000 UTC m=+0.161724632 container died 1f51e79367c877e5038629b21dc406dab1ef545277ad28e6eb3fe26b7c6e2562 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:07:04 np0005629333 systemd[1]: var-lib-containers-storage-overlay-54f401c572b62d0715e0a041ccc522cb4472f51a5a38ef5587feba1152431ce3-merged.mount: Deactivated successfully.
Feb 25 09:07:04 np0005629333 podman[430960]: 2026-02-25 14:07:04.235767985 +0000 UTC m=+0.210453666 container remove 1f51e79367c877e5038629b21dc406dab1ef545277ad28e6eb3fe26b7c6e2562 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_edison, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 09:07:04 np0005629333 systemd[1]: libpod-conmon-1f51e79367c877e5038629b21dc406dab1ef545277ad28e6eb3fe26b7c6e2562.scope: Deactivated successfully.
Feb 25 09:07:04 np0005629333 podman[430999]: 2026-02-25 14:07:04.416592338 +0000 UTC m=+0.058184352 container create 55cadda2247e1b4a8682bf7827fb2c9e4e5ad37e461087575141e228e4c99e12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_satoshi, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 09:07:04 np0005629333 systemd[1]: Started libpod-conmon-55cadda2247e1b4a8682bf7827fb2c9e4e5ad37e461087575141e228e4c99e12.scope.
Feb 25 09:07:04 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:07:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eeebc18b088c8e096b1a172da7dc25846587f2b786e1730a3845b6156a1d15a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:07:04 np0005629333 podman[430999]: 2026-02-25 14:07:04.394136361 +0000 UTC m=+0.035728435 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:07:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eeebc18b088c8e096b1a172da7dc25846587f2b786e1730a3845b6156a1d15a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:07:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eeebc18b088c8e096b1a172da7dc25846587f2b786e1730a3845b6156a1d15a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:07:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eeebc18b088c8e096b1a172da7dc25846587f2b786e1730a3845b6156a1d15a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:07:04 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eeebc18b088c8e096b1a172da7dc25846587f2b786e1730a3845b6156a1d15a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
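
The xfs remount messages above are the kernel noting that these bind-mounted paths sit on a filesystem with 32-bit inode timestamps, valid only until 0x7fffffff seconds after the epoch. The cutoff is easy to confirm:

    from datetime import datetime, timezone

    # 0x7fffffff is the limit quoted in the kernel messages above.
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
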
Feb 25 09:07:04 np0005629333 podman[430999]: 2026-02-25 14:07:04.520663583 +0000 UTC m=+0.162255587 container init 55cadda2247e1b4a8682bf7827fb2c9e4e5ad37e461087575141e228e4c99e12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:07:04 np0005629333 podman[430999]: 2026-02-25 14:07:04.537070549 +0000 UTC m=+0.178662523 container start 55cadda2247e1b4a8682bf7827fb2c9e4e5ad37e461087575141e228e4c99e12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:07:04 np0005629333 podman[430999]: 2026-02-25 14:07:04.541228317 +0000 UTC m=+0.182820291 container attach 55cadda2247e1b4a8682bf7827fb2c9e4e5ad37e461087575141e228e4c99e12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_satoshi, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:07:05 np0005629333 sharp_satoshi[431016]: --> passed data devices: 0 physical, 3 LVM
Feb 25 09:07:05 np0005629333 sharp_satoshi[431016]: --> All data devices are unavailable
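
The throwaway containers above (relaxed_edison, then sharp_satoshi) appear to be cephadm's OSD device probe: a short-lived container is created from the ceph image, runs its check, and is removed. relaxed_edison's "167 167" output is consistent with a uid/gid probe for the ceph user inside the image, and sharp_satoshi's ceph-volume output reports 0 physical and 3 LVM data devices, all unavailable, i.e. every candidate disk is already consumed by existing OSDs. The orchestrator exposes the same inventory; a sketch that lists it, assuming an admin keyring on a cephadm-managed host (the field names follow the JSON that ceph orch device ls emits, which can vary between releases):

    import json
    import subprocess

    hosts = json.loads(subprocess.check_output(
        ["ceph", "orch", "device", "ls", "--format", "json"]))
    for host in hosts:
        for dev in host.get("devices", []):
            state = "available" if dev.get("available") else "in use"
            print(host.get("name"), dev.get("path"), state)
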
Feb 25 09:07:05 np0005629333 systemd[1]: libpod-55cadda2247e1b4a8682bf7827fb2c9e4e5ad37e461087575141e228e4c99e12.scope: Deactivated successfully.
Feb 25 09:07:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4415: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:05 np0005629333 podman[431037]: 2026-02-25 14:07:05.093948028 +0000 UTC m=+0.040303855 container died 55cadda2247e1b4a8682bf7827fb2c9e4e5ad37e461087575141e228e4c99e12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 09:07:05 np0005629333 systemd[1]: var-lib-containers-storage-overlay-5eeebc18b088c8e096b1a172da7dc25846587f2b786e1730a3845b6156a1d15a-merged.mount: Deactivated successfully.
Feb 25 09:07:05 np0005629333 podman[431037]: 2026-02-25 14:07:05.142655221 +0000 UTC m=+0.089011038 container remove 55cadda2247e1b4a8682bf7827fb2c9e4e5ad37e461087575141e228e4c99e12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_satoshi, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:07:05 np0005629333 systemd[1]: libpod-conmon-55cadda2247e1b4a8682bf7827fb2c9e4e5ad37e461087575141e228e4c99e12.scope: Deactivated successfully.
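[annotation] The sharp_satoshi run above is evidently one of cephadm's short-lived ceph-volume invocations: "passed data devices: 0 physical, 3 LVM" followed by "All data devices are unavailable" means every candidate device is an LVM volume already claimed by an existing OSD, so nothing new is provisioned and the container exits within a second. A minimal sketch of reproducing that availability check on the host, assuming `cephadm ceph-volume -- inventory --format json` is available and emits the usual inventory records with `path`, `available`, and `rejected_reasons` fields:

import json
import subprocess

# Ask ceph-volume (via cephadm) for its device inventory as JSON.
out = subprocess.check_output(
    ["cephadm", "ceph-volume", "--", "inventory", "--format", "json"],
    text=True,
)
for dev in json.loads(out):
    state = "available" if dev.get("available") else "unavailable"
    reasons = ", ".join(dev.get("rejected_reasons", []))
    print(f"{dev['path']}: {state}" + (f" ({reasons})" if reasons else ""))

A device that is already an LVM physical volume backing an OSD shows up here as unavailable, matching the "All data devices are unavailable" line.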
Feb 25 09:07:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:07:05 np0005629333 podman[431117]: 2026-02-25 14:07:05.612908801 +0000 UTC m=+0.050411592 container create 194699b71c0cc3b638e22471646a81eabaf4679c178765613548786e764ab5a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 09:07:05 np0005629333 systemd[1]: Started libpod-conmon-194699b71c0cc3b638e22471646a81eabaf4679c178765613548786e764ab5a3.scope.
Feb 25 09:07:05 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:07:05 np0005629333 podman[431117]: 2026-02-25 14:07:05.59524251 +0000 UTC m=+0.032745321 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:07:05 np0005629333 podman[431117]: 2026-02-25 14:07:05.703876804 +0000 UTC m=+0.141379655 container init 194699b71c0cc3b638e22471646a81eabaf4679c178765613548786e764ab5a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bassi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 09:07:05 np0005629333 podman[431117]: 2026-02-25 14:07:05.71007557 +0000 UTC m=+0.147578391 container start 194699b71c0cc3b638e22471646a81eabaf4679c178765613548786e764ab5a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Feb 25 09:07:05 np0005629333 podman[431117]: 2026-02-25 14:07:05.713173288 +0000 UTC m=+0.150676119 container attach 194699b71c0cc3b638e22471646a81eabaf4679c178765613548786e764ab5a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:07:05 np0005629333 lucid_bassi[431133]: 167 167
Feb 25 09:07:05 np0005629333 systemd[1]: libpod-194699b71c0cc3b638e22471646a81eabaf4679c178765613548786e764ab5a3.scope: Deactivated successfully.
Feb 25 09:07:05 np0005629333 conmon[431133]: conmon 194699b71c0cc3b638e2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-194699b71c0cc3b638e22471646a81eabaf4679c178765613548786e764ab5a3.scope/container/memory.events
Feb 25 09:07:05 np0005629333 podman[431117]: 2026-02-25 14:07:05.717953634 +0000 UTC m=+0.155456415 container died 194699b71c0cc3b638e22471646a81eabaf4679c178765613548786e764ab5a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bassi, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 09:07:05 np0005629333 systemd[1]: var-lib-containers-storage-overlay-68c22a6ed0143a36fb2734edc3146cfd2299715d9a2bd126e2a1434de9ff7034-merged.mount: Deactivated successfully.
Feb 25 09:07:05 np0005629333 podman[431117]: 2026-02-25 14:07:05.751333411 +0000 UTC m=+0.188836202 container remove 194699b71c0cc3b638e22471646a81eabaf4679c178765613548786e764ab5a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_bassi, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:07:05 np0005629333 systemd[1]: libpod-conmon-194699b71c0cc3b638e22471646a81eabaf4679c178765613548786e764ab5a3.scope: Deactivated successfully.
Feb 25 09:07:05 np0005629333 podman[431156]: 2026-02-25 14:07:05.93901701 +0000 UTC m=+0.066218861 container create e738f10270988adb90e937d5172fffd9584c7aa90cc52b94d304ed6150eb3386 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:07:05 np0005629333 systemd[1]: Started libpod-conmon-e738f10270988adb90e937d5172fffd9584c7aa90cc52b94d304ed6150eb3386.scope.
Feb 25 09:07:06 np0005629333 podman[431156]: 2026-02-25 14:07:05.912512137 +0000 UTC m=+0.039714038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:07:06 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:07:06 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f85466b70c8e047097e61d2bb4533a92632192b58eb2307aa0c39da9703dbf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:07:06 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f85466b70c8e047097e61d2bb4533a92632192b58eb2307aa0c39da9703dbf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:07:06 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f85466b70c8e047097e61d2bb4533a92632192b58eb2307aa0c39da9703dbf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:07:06 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f85466b70c8e047097e61d2bb4533a92632192b58eb2307aa0c39da9703dbf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:07:06 np0005629333 podman[431156]: 2026-02-25 14:07:06.065974424 +0000 UTC m=+0.193176265 container init e738f10270988adb90e937d5172fffd9584c7aa90cc52b94d304ed6150eb3386 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:07:06 np0005629333 podman[431156]: 2026-02-25 14:07:06.073293722 +0000 UTC m=+0.200495533 container start e738f10270988adb90e937d5172fffd9584c7aa90cc52b94d304ed6150eb3386 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 25 09:07:06 np0005629333 podman[431156]: 2026-02-25 14:07:06.077221353 +0000 UTC m=+0.204423194 container attach e738f10270988adb90e937d5172fffd9584c7aa90cc52b94d304ed6150eb3386 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_johnson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]: {
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:    "0": [
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:        {
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "devices": [
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "/dev/loop3"
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            ],
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "lv_name": "ceph_lv0",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "lv_size": "21470642176",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "name": "ceph_lv0",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "tags": {
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.cluster_name": "ceph",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.crush_device_class": "",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.encrypted": "0",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.objectstore": "bluestore",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.osd_id": "0",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.type": "block",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.vdo": "0",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.with_tpm": "0"
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            },
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "type": "block",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "vg_name": "ceph_vg0"
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:        }
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:    ],
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:    "1": [
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:        {
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "devices": [
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "/dev/loop4"
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            ],
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "lv_name": "ceph_lv1",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "lv_size": "21470642176",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "name": "ceph_lv1",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "tags": {
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.cluster_name": "ceph",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.crush_device_class": "",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.encrypted": "0",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.objectstore": "bluestore",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.osd_id": "1",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.type": "block",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.vdo": "0",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.with_tpm": "0"
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            },
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "type": "block",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "vg_name": "ceph_vg1"
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:        }
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:    ],
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:    "2": [
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:        {
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "devices": [
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "/dev/loop5"
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            ],
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "lv_name": "ceph_lv2",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "lv_size": "21470642176",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "name": "ceph_lv2",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "tags": {
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.cluster_name": "ceph",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.crush_device_class": "",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.encrypted": "0",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.objectstore": "bluestore",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.osd_id": "2",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.type": "block",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.vdo": "0",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:                "ceph.with_tpm": "0"
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            },
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "type": "block",
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:            "vg_name": "ceph_vg2"
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:        }
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]:    ]
Feb 25 09:07:06 np0005629333 optimistic_johnson[431172]: }
Feb 25 09:07:06 np0005629333 systemd[1]: libpod-e738f10270988adb90e937d5172fffd9584c7aa90cc52b94d304ed6150eb3386.scope: Deactivated successfully.
Feb 25 09:07:06 np0005629333 podman[431156]: 2026-02-25 14:07:06.375865001 +0000 UTC m=+0.503066822 container died e738f10270988adb90e937d5172fffd9584c7aa90cc52b94d304ed6150eb3386 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_johnson, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Feb 25 09:07:06 np0005629333 systemd[1]: var-lib-containers-storage-overlay-75f85466b70c8e047097e61d2bb4533a92632192b58eb2307aa0c39da9703dbf-merged.mount: Deactivated successfully.
Feb 25 09:07:06 np0005629333 podman[431156]: 2026-02-25 14:07:06.43006398 +0000 UTC m=+0.557265831 container remove e738f10270988adb90e937d5172fffd9584c7aa90cc52b94d304ed6150eb3386 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_johnson, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 09:07:06 np0005629333 systemd[1]: libpod-conmon-e738f10270988adb90e937d5172fffd9584c7aa90cc52b94d304ed6150eb3386.scope: Deactivated successfully.
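[annotation] The JSON block printed by optimistic_johnson matches the output format of `ceph-volume lvm list --format json`: a map from OSD id to the logical volumes backing it, with the ceph.* metadata present both as the flat lv_tags string and as the parsed tags object. A small parsing sketch, assuming the output has been captured to a file (lvm_list.json is a hypothetical name) and follows exactly the structure shown above:

import json

# Load a captured copy of the `ceph-volume lvm list --format json` report.
with open("lvm_list.json") as f:
    report = json.load(f)

# Top-level keys are OSD ids ("0", "1", "2"); each maps to a list of LV records.
for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"on {', '.join(lv['devices'])} "
              f"fsid={tags['ceph.osd_fsid']} "
              f"objectstore={tags['ceph.objectstore']}")

Run against the report above, this prints one line per OSD, e.g. osd.0 at /dev/ceph_vg0/ceph_lv0 on /dev/loop3.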
Feb 25 09:07:06 np0005629333 nova_compute[244014]: 2026-02-25 14:07:06.816 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:07:06 np0005629333 podman[431257]: 2026-02-25 14:07:06.900492615 +0000 UTC m=+0.037939627 container create 7054d2675e31a2a33d0e8279ef810ce07c19f9346f57042b6eae27093231d408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 09:07:06 np0005629333 systemd[1]: Started libpod-conmon-7054d2675e31a2a33d0e8279ef810ce07c19f9346f57042b6eae27093231d408.scope.
Feb 25 09:07:06 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:07:06 np0005629333 podman[431257]: 2026-02-25 14:07:06.974379623 +0000 UTC m=+0.111826675 container init 7054d2675e31a2a33d0e8279ef810ce07c19f9346f57042b6eae27093231d408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:07:06 np0005629333 podman[431257]: 2026-02-25 14:07:06.885385087 +0000 UTC m=+0.022832109 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:07:06 np0005629333 podman[431257]: 2026-02-25 14:07:06.981785723 +0000 UTC m=+0.119232715 container start 7054d2675e31a2a33d0e8279ef810ce07c19f9346f57042b6eae27093231d408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 09:07:06 np0005629333 podman[431257]: 2026-02-25 14:07:06.985972502 +0000 UTC m=+0.123419514 container attach 7054d2675e31a2a33d0e8279ef810ce07c19f9346f57042b6eae27093231d408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_volhard, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 09:07:06 np0005629333 unruffled_volhard[431273]: 167 167
Feb 25 09:07:06 np0005629333 systemd[1]: libpod-7054d2675e31a2a33d0e8279ef810ce07c19f9346f57042b6eae27093231d408.scope: Deactivated successfully.
Feb 25 09:07:06 np0005629333 podman[431257]: 2026-02-25 14:07:06.987133085 +0000 UTC m=+0.124580097 container died 7054d2675e31a2a33d0e8279ef810ce07c19f9346f57042b6eae27093231d408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 25 09:07:07 np0005629333 systemd[1]: var-lib-containers-storage-overlay-096cf9cc635943354750782f759b23e23449f3bb0463be5806f91d157b9b3b07-merged.mount: Deactivated successfully.
Feb 25 09:07:07 np0005629333 podman[431257]: 2026-02-25 14:07:07.029127758 +0000 UTC m=+0.166574770 container remove 7054d2675e31a2a33d0e8279ef810ce07c19f9346f57042b6eae27093231d408 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_volhard, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:07:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4416: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:07 np0005629333 systemd[1]: libpod-conmon-7054d2675e31a2a33d0e8279ef810ce07c19f9346f57042b6eae27093231d408.scope: Deactivated successfully.
Feb 25 09:07:07 np0005629333 podman[431299]: 2026-02-25 14:07:07.208546121 +0000 UTC m=+0.067364323 container create 43e3a8fbddc1dca9f022252c95cf893d84151d555f423c8d5f762d9afa5f0763 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 09:07:07 np0005629333 systemd[1]: Started libpod-conmon-43e3a8fbddc1dca9f022252c95cf893d84151d555f423c8d5f762d9afa5f0763.scope.
Feb 25 09:07:07 np0005629333 podman[431299]: 2026-02-25 14:07:07.179772764 +0000 UTC m=+0.038590966 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:07:07 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:07:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da0f324113e6a5691bbc80111fa8dfeb95df88cb022b84312f75ca7563f20373/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:07:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da0f324113e6a5691bbc80111fa8dfeb95df88cb022b84312f75ca7563f20373/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:07:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da0f324113e6a5691bbc80111fa8dfeb95df88cb022b84312f75ca7563f20373/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:07:07 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da0f324113e6a5691bbc80111fa8dfeb95df88cb022b84312f75ca7563f20373/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:07:07 np0005629333 podman[431299]: 2026-02-25 14:07:07.313584612 +0000 UTC m=+0.172402894 container init 43e3a8fbddc1dca9f022252c95cf893d84151d555f423c8d5f762d9afa5f0763 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ramanujan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:07:07 np0005629333 podman[431299]: 2026-02-25 14:07:07.322982939 +0000 UTC m=+0.181801151 container start 43e3a8fbddc1dca9f022252c95cf893d84151d555f423c8d5f762d9afa5f0763 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ramanujan, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Feb 25 09:07:07 np0005629333 podman[431299]: 2026-02-25 14:07:07.3268898 +0000 UTC m=+0.185707972 container attach 43e3a8fbddc1dca9f022252c95cf893d84151d555f423c8d5f762d9afa5f0763 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ramanujan, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:07:07 np0005629333 nova_compute[244014]: 2026-02-25 14:07:07.551 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:07:07 np0005629333 lvm[431394]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 09:07:07 np0005629333 lvm[431391]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 09:07:07 np0005629333 lvm[431394]: VG ceph_vg1 finished
Feb 25 09:07:07 np0005629333 lvm[431391]: VG ceph_vg0 finished
Feb 25 09:07:07 np0005629333 lvm[431396]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 09:07:07 np0005629333 lvm[431396]: VG ceph_vg2 finished
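[annotation] The lvm[...] messages are LVM event-based autoactivation: as each loop device's PV is observed, its volume group (ceph_vg0/1/2, one PV each) becomes complete and is activated. The same membership can be checked directly with the standard lvs report fields; a sketch:

import subprocess

# List LVs with their VG, size and tags; keep only the Ceph-tagged ones.
out = subprocess.check_output(
    ["lvs", "--noheadings", "--separator", "|",
     "-o", "vg_name,lv_name,lv_size,lv_tags"],
    text=True,
)
for line in out.splitlines():
    vg, lv, size, tags = (field.strip() for field in line.split("|", 3))
    if "ceph.osd_id=" in tags:
        print(f"{vg}/{lv} ({size})")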
Feb 25 09:07:08 np0005629333 funny_ramanujan[431315]: {}
Feb 25 09:07:08 np0005629333 systemd[1]: libpod-43e3a8fbddc1dca9f022252c95cf893d84151d555f423c8d5f762d9afa5f0763.scope: Deactivated successfully.
Feb 25 09:07:08 np0005629333 systemd[1]: libpod-43e3a8fbddc1dca9f022252c95cf893d84151d555f423c8d5f762d9afa5f0763.scope: Consumed 1.105s CPU time.
Feb 25 09:07:08 np0005629333 podman[431299]: 2026-02-25 14:07:08.072085206 +0000 UTC m=+0.930903418 container died 43e3a8fbddc1dca9f022252c95cf893d84151d555f423c8d5f762d9afa5f0763 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:07:08 np0005629333 systemd[1]: var-lib-containers-storage-overlay-da0f324113e6a5691bbc80111fa8dfeb95df88cb022b84312f75ca7563f20373-merged.mount: Deactivated successfully.
Feb 25 09:07:08 np0005629333 podman[431299]: 2026-02-25 14:07:08.125985016 +0000 UTC m=+0.984803228 container remove 43e3a8fbddc1dca9f022252c95cf893d84151d555f423c8d5f762d9afa5f0763 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Feb 25 09:07:08 np0005629333 systemd[1]: libpod-conmon-43e3a8fbddc1dca9f022252c95cf893d84151d555f423c8d5f762d9afa5f0763.scope: Deactivated successfully.
Feb 25 09:07:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 09:07:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:07:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 09:07:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:07:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4417: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:07:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:07:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:07:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4418: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:11 np0005629333 nova_compute[244014]: 2026-02-25 14:07:11.821 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:07:12 np0005629333 nova_compute[244014]: 2026-02-25 14:07:12.553 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:07:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4419: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:14 np0005629333 podman[431438]: 2026-02-25 14:07:14.720619224 +0000 UTC m=+0.055579799 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 09:07:14 np0005629333 podman[431439]: 2026-02-25 14:07:14.788642335 +0000 UTC m=+0.122970242 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
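[annotation] The two health_status events come from podman's healthcheck timers running the test configured in config_data ('healthcheck': {'test': '/openstack/healthcheck'}, mounted into each container) and reporting health_status=healthy with a zero failing streak. The same check can be triggered on demand with podman's standard healthcheck subcommand; a sketch:

import subprocess

# Exit code 0 means the container's configured healthcheck passed.
for name in ("ovn_metadata_agent", "ovn_controller"):
    rc = subprocess.call(["podman", "healthcheck", "run", name])
    print(f"{name}: {'healthy' if rc == 0 else f'unhealthy (rc={rc})'}")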
Feb 25 09:07:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4420: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:07:16 np0005629333 nova_compute[244014]: 2026-02-25 14:07:16.866 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:07:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4421: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:17 np0005629333 nova_compute[244014]: 2026-02-25 14:07:17.555 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:07:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4422: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:07:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4423: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:21 np0005629333 nova_compute[244014]: 2026-02-25 14:07:21.870 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:07:22 np0005629333 nova_compute[244014]: 2026-02-25 14:07:22.557 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:07:22 np0005629333 nova_compute[244014]: 2026-02-25 14:07:22.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:07:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4424: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4425: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:07:26 np0005629333 nova_compute[244014]: 2026-02-25 14:07:26.874 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:07:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4426: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:27 np0005629333 nova_compute[244014]: 2026-02-25 14:07:27.561 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:07:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4427: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:07:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4428: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:07:31
Feb 25 09:07:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 09:07:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 09:07:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', '.mgr', 'default.rgw.control', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'images', 'volumes']
Feb 25 09:07:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
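[annotation] The balancer pass above ran in upmap mode (max misplaced 5%) against all eleven pools and prepared 0 of a possible 10 upmap changes, i.e. placement is already balanced. Its state can be inspected through the mgr; a sketch, assuming the JSON form of `ceph balancer status` carries the usual mode/active/optimize_result fields:

import json
import subprocess

# Query the mgr balancer module for its current state.
status = json.loads(subprocess.check_output(
    ["ceph", "balancer", "status", "--format", "json"], text=True))
print("active:", status.get("active"))
print("mode:", status.get("mode"))
print("last result:", status.get("optimize_result"))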
Feb 25 09:07:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:07:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:07:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:07:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:07:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:07:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:07:31 np0005629333 nova_compute[244014]: 2026-02-25 14:07:31.921 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:07:32 np0005629333 nova_compute[244014]: 2026-02-25 14:07:32.563 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:07:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 09:07:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:07:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 09:07:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:07:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:07:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:07:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:07:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:07:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 09:07:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 09:07:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4429: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:34 np0005629333 nova_compute[244014]: 2026-02-25 14:07:34.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:07:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4430: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:07:35 np0005629333 nova_compute[244014]: 2026-02-25 14:07:35.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:07:35 np0005629333 nova_compute[244014]: 2026-02-25 14:07:35.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 09:07:35 np0005629333 nova_compute[244014]: 2026-02-25 14:07:35.904 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 09:07:35 np0005629333 nova_compute[244014]: 2026-02-25 14:07:35.905 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 09:07:35 np0005629333 nova_compute[244014]: 2026-02-25 14:07:35.905 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 09:07:35 np0005629333 nova_compute[244014]: 2026-02-25 14:07:35.905 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 09:07:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:07:36 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1575696368' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:07:36 np0005629333 nova_compute[244014]: 2026-02-25 14:07:36.478 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 09:07:36 np0005629333 nova_compute[244014]: 2026-02-25 14:07:36.709 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 09:07:36 np0005629333 nova_compute[244014]: 2026-02-25 14:07:36.711 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3511MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 09:07:36 np0005629333 nova_compute[244014]: 2026-02-25 14:07:36.711 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 09:07:36 np0005629333 nova_compute[244014]: 2026-02-25 14:07:36.712 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 09:07:36 np0005629333 nova_compute[244014]: 2026-02-25 14:07:36.781 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 09:07:36 np0005629333 nova_compute[244014]: 2026-02-25 14:07:36.782 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 09:07:36 np0005629333 nova_compute[244014]: 2026-02-25 14:07:36.799 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 09:07:36 np0005629333 nova_compute[244014]: 2026-02-25 14:07:36.925 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:07:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4431: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:07:37 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2253551020' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:07:37 np0005629333 nova_compute[244014]: 2026-02-25 14:07:37.300 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 09:07:37 np0005629333 nova_compute[244014]: 2026-02-25 14:07:37.307 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 09:07:37 np0005629333 nova_compute[244014]: 2026-02-25 14:07:37.333 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 09:07:37 np0005629333 nova_compute[244014]: 2026-02-25 14:07:37.337 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 09:07:37 np0005629333 nova_compute[244014]: 2026-02-25 14:07:37.338 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 09:07:37 np0005629333 nova_compute[244014]: 2026-02-25 14:07:37.564 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:07:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4432: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:07:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4433: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:41 np0005629333 nova_compute[244014]: 2026-02-25 14:07:41.928 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:07:42 np0005629333 nova_compute[244014]: 2026-02-25 14:07:42.339 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:07:42 np0005629333 nova_compute[244014]: 2026-02-25 14:07:42.340 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 09:07:42 np0005629333 nova_compute[244014]: 2026-02-25 14:07:42.340 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 09:07:42 np0005629333 nova_compute[244014]: 2026-02-25 14:07:42.353 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 09:07:42 np0005629333 nova_compute[244014]: 2026-02-25 14:07:42.566 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:07:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4434: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 09:07:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:07:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 09:07:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:07:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 09:07:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:07:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:07:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:07:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:07:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:07:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 09:07:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:07:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 09:07:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:07:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:07:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:07:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 09:07:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:07:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 09:07:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:07:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:07:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:07:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 09:07:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4435: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:07:45 np0005629333 podman[431529]: 2026-02-25 14:07:45.744805984 +0000 UTC m=+0.068870357 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 25 09:07:45 np0005629333 podman[431530]: 2026-02-25 14:07:45.76861149 +0000 UTC m=+0.093463655 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Feb 25 09:07:46 np0005629333 nova_compute[244014]: 2026-02-25 14:07:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:07:46 np0005629333 nova_compute[244014]: 2026-02-25 14:07:46.972 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:07:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4436: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:47 np0005629333 nova_compute[244014]: 2026-02-25 14:07:47.568 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:07:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 09:07:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/306839189' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 09:07:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 09:07:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/306839189' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 09:07:47 np0005629333 nova_compute[244014]: 2026-02-25 14:07:47.872 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:07:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4437: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:07:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4438: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:51 np0005629333 nova_compute[244014]: 2026-02-25 14:07:51.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:07:51 np0005629333 nova_compute[244014]: 2026-02-25 14:07:51.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 09:07:51 np0005629333 nova_compute[244014]: 2026-02-25 14:07:51.976 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:07:52 np0005629333 nova_compute[244014]: 2026-02-25 14:07:52.570 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:07:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4439: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:54 np0005629333 nova_compute[244014]: 2026-02-25 14:07:54.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:07:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4440: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:07:55.113 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 09:07:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:07:55.113 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 09:07:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:07:55.113 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 09:07:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:07:55 np0005629333 nova_compute[244014]: 2026-02-25 14:07:55.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:07:56 np0005629333 nova_compute[244014]: 2026-02-25 14:07:56.980 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:07:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4441: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:07:57 np0005629333 nova_compute[244014]: 2026-02-25 14:07:57.572 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:07:57 np0005629333 nova_compute[244014]: 2026-02-25 14:07:57.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:07:57 np0005629333 nova_compute[244014]: 2026-02-25 14:07:57.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:07:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4442: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:08:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4443: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:08:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:08:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:08:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:08:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:08:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:08:01 np0005629333 nova_compute[244014]: 2026-02-25 14:08:01.986 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:08:02 np0005629333 nova_compute[244014]: 2026-02-25 14:08:02.574 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:08:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4444: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4445: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:08:06 np0005629333 nova_compute[244014]: 2026-02-25 14:08:06.990 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:08:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4446: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:07 np0005629333 nova_compute[244014]: 2026-02-25 14:08:07.577 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:08:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Feb 25 09:08:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 09:08:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 09:08:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 09:08:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 09:08:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 09:08:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 09:08:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:08:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 09:08:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 09:08:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 09:08:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 09:08:08 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 09:08:08 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 09:08:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4447: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Feb 25 09:08:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 09:08:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:08:09 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 09:08:10 np0005629333 podman[431724]: 2026-02-25 14:08:10.012212121 +0000 UTC m=+0.046671856 container create e2f2ac05192112e64bb58d4f3cf673754455ae5ae8de254427b190108f1c78d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_haslett, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 09:08:10 np0005629333 systemd[1]: Started libpod-conmon-e2f2ac05192112e64bb58d4f3cf673754455ae5ae8de254427b190108f1c78d1.scope.
Feb 25 09:08:10 np0005629333 podman[431724]: 2026-02-25 14:08:09.990670979 +0000 UTC m=+0.025130614 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:08:10 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:08:10 np0005629333 podman[431724]: 2026-02-25 14:08:10.11716239 +0000 UTC m=+0.151622055 container init e2f2ac05192112e64bb58d4f3cf673754455ae5ae8de254427b190108f1c78d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 09:08:10 np0005629333 podman[431724]: 2026-02-25 14:08:10.12385221 +0000 UTC m=+0.158311855 container start e2f2ac05192112e64bb58d4f3cf673754455ae5ae8de254427b190108f1c78d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_haslett, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:08:10 np0005629333 podman[431724]: 2026-02-25 14:08:10.127325659 +0000 UTC m=+0.161785304 container attach e2f2ac05192112e64bb58d4f3cf673754455ae5ae8de254427b190108f1c78d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:08:10 np0005629333 kind_haslett[431741]: 167 167
Feb 25 09:08:10 np0005629333 systemd[1]: libpod-e2f2ac05192112e64bb58d4f3cf673754455ae5ae8de254427b190108f1c78d1.scope: Deactivated successfully.
Feb 25 09:08:10 np0005629333 conmon[431741]: conmon e2f2ac05192112e64bb5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e2f2ac05192112e64bb58d4f3cf673754455ae5ae8de254427b190108f1c78d1.scope/container/memory.events
Feb 25 09:08:10 np0005629333 podman[431724]: 2026-02-25 14:08:10.133747891 +0000 UTC m=+0.168207516 container died e2f2ac05192112e64bb58d4f3cf673754455ae5ae8de254427b190108f1c78d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 09:08:10 np0005629333 systemd[1]: var-lib-containers-storage-overlay-262f7b25066251311fef566f9293576e3d0f290841a7e329a94d376af84968c5-merged.mount: Deactivated successfully.
Feb 25 09:08:10 np0005629333 podman[431724]: 2026-02-25 14:08:10.177195184 +0000 UTC m=+0.211654789 container remove e2f2ac05192112e64bb58d4f3cf673754455ae5ae8de254427b190108f1c78d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:08:10 np0005629333 systemd[1]: libpod-conmon-e2f2ac05192112e64bb58d4f3cf673754455ae5ae8de254427b190108f1c78d1.scope: Deactivated successfully.
Feb 25 09:08:10 np0005629333 podman[431765]: 2026-02-25 14:08:10.400269237 +0000 UTC m=+0.068897887 container create 4e3a8804022b01d9299c9acd5b7e6301119e08a15e841b61248658063911e19e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_joliot, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Feb 25 09:08:10 np0005629333 systemd[1]: Started libpod-conmon-4e3a8804022b01d9299c9acd5b7e6301119e08a15e841b61248658063911e19e.scope.
Feb 25 09:08:10 np0005629333 podman[431765]: 2026-02-25 14:08:10.369336689 +0000 UTC m=+0.037965419 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:08:10 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:08:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edd59b60239fbe846f641b085069fcb9b68dd455b8fdd75f9d4bb588bba5eab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:08:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edd59b60239fbe846f641b085069fcb9b68dd455b8fdd75f9d4bb588bba5eab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:08:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edd59b60239fbe846f641b085069fcb9b68dd455b8fdd75f9d4bb588bba5eab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:08:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edd59b60239fbe846f641b085069fcb9b68dd455b8fdd75f9d4bb588bba5eab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:08:10 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edd59b60239fbe846f641b085069fcb9b68dd455b8fdd75f9d4bb588bba5eab/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 09:08:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:08:10 np0005629333 podman[431765]: 2026-02-25 14:08:10.498307991 +0000 UTC m=+0.166936641 container init 4e3a8804022b01d9299c9acd5b7e6301119e08a15e841b61248658063911e19e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:08:10 np0005629333 podman[431765]: 2026-02-25 14:08:10.508448619 +0000 UTC m=+0.177077249 container start 4e3a8804022b01d9299c9acd5b7e6301119e08a15e841b61248658063911e19e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_joliot, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:08:10 np0005629333 podman[431765]: 2026-02-25 14:08:10.512546135 +0000 UTC m=+0.181174985 container attach 4e3a8804022b01d9299c9acd5b7e6301119e08a15e841b61248658063911e19e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_joliot, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Feb 25 09:08:10 np0005629333 reverent_joliot[431781]: --> passed data devices: 0 physical, 3 LVM
Feb 25 09:08:10 np0005629333 reverent_joliot[431781]: --> All data devices are unavailable
Feb 25 09:08:10 np0005629333 systemd[1]: libpod-4e3a8804022b01d9299c9acd5b7e6301119e08a15e841b61248658063911e19e.scope: Deactivated successfully.
Feb 25 09:08:10 np0005629333 podman[431765]: 2026-02-25 14:08:10.95653443 +0000 UTC m=+0.625163090 container died 4e3a8804022b01d9299c9acd5b7e6301119e08a15e841b61248658063911e19e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_joliot, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:08:10 np0005629333 systemd[1]: var-lib-containers-storage-overlay-8edd59b60239fbe846f641b085069fcb9b68dd455b8fdd75f9d4bb588bba5eab-merged.mount: Deactivated successfully.
Feb 25 09:08:11 np0005629333 podman[431765]: 2026-02-25 14:08:11.002869495 +0000 UTC m=+0.671498135 container remove 4e3a8804022b01d9299c9acd5b7e6301119e08a15e841b61248658063911e19e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:08:11 np0005629333 systemd[1]: libpod-conmon-4e3a8804022b01d9299c9acd5b7e6301119e08a15e841b61248658063911e19e.scope: Deactivated successfully.
Feb 25 09:08:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4448: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:11 np0005629333 podman[431877]: 2026-02-25 14:08:11.383617004 +0000 UTC m=+0.043421924 container create 92fa7ca0b19fcb38fe08d8d2931e73ca98e270fa196c4e9c874279e8e3d7e1b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kilby, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 09:08:11 np0005629333 systemd[1]: Started libpod-conmon-92fa7ca0b19fcb38fe08d8d2931e73ca98e270fa196c4e9c874279e8e3d7e1b0.scope.
Feb 25 09:08:11 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:08:11 np0005629333 podman[431877]: 2026-02-25 14:08:11.443984048 +0000 UTC m=+0.103788988 container init 92fa7ca0b19fcb38fe08d8d2931e73ca98e270fa196c4e9c874279e8e3d7e1b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Feb 25 09:08:11 np0005629333 podman[431877]: 2026-02-25 14:08:11.448342182 +0000 UTC m=+0.108147102 container start 92fa7ca0b19fcb38fe08d8d2931e73ca98e270fa196c4e9c874279e8e3d7e1b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 09:08:11 np0005629333 podman[431877]: 2026-02-25 14:08:11.451063669 +0000 UTC m=+0.110868579 container attach 92fa7ca0b19fcb38fe08d8d2931e73ca98e270fa196c4e9c874279e8e3d7e1b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kilby, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:08:11 np0005629333 systemd[1]: libpod-92fa7ca0b19fcb38fe08d8d2931e73ca98e270fa196c4e9c874279e8e3d7e1b0.scope: Deactivated successfully.
Feb 25 09:08:11 np0005629333 condescending_kilby[431894]: 167 167
Feb 25 09:08:11 np0005629333 conmon[431894]: conmon 92fa7ca0b19fcb38fe08 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-92fa7ca0b19fcb38fe08d8d2931e73ca98e270fa196c4e9c874279e8e3d7e1b0.scope/container/memory.events
Feb 25 09:08:11 np0005629333 podman[431877]: 2026-02-25 14:08:11.454249779 +0000 UTC m=+0.114054699 container died 92fa7ca0b19fcb38fe08d8d2931e73ca98e270fa196c4e9c874279e8e3d7e1b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kilby, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 09:08:11 np0005629333 podman[431877]: 2026-02-25 14:08:11.369068231 +0000 UTC m=+0.028873171 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:08:11 np0005629333 systemd[1]: var-lib-containers-storage-overlay-aaf0a0479cbf361e13a8b2d9e2c71638f2929f58e4f376afdb068bd9ff4868f3-merged.mount: Deactivated successfully.
Feb 25 09:08:11 np0005629333 podman[431877]: 2026-02-25 14:08:11.485755054 +0000 UTC m=+0.145559974 container remove 92fa7ca0b19fcb38fe08d8d2931e73ca98e270fa196c4e9c874279e8e3d7e1b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kilby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:08:11 np0005629333 systemd[1]: libpod-conmon-92fa7ca0b19fcb38fe08d8d2931e73ca98e270fa196c4e9c874279e8e3d7e1b0.scope: Deactivated successfully.
Feb 25 09:08:11 np0005629333 podman[431918]: 2026-02-25 14:08:11.659789545 +0000 UTC m=+0.053568872 container create ddb4d99a09a06f16324752dc82685c4b3a8e4ad1a3b6efb5e481423d1304acd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_elbakyan, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 09:08:11 np0005629333 systemd[1]: Started libpod-conmon-ddb4d99a09a06f16324752dc82685c4b3a8e4ad1a3b6efb5e481423d1304acd1.scope.
Feb 25 09:08:11 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:08:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99de0f64050c737fbdcd3962ae15eba7a252ab1724b11ce3e718a8177143a595/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:08:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99de0f64050c737fbdcd3962ae15eba7a252ab1724b11ce3e718a8177143a595/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:08:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99de0f64050c737fbdcd3962ae15eba7a252ab1724b11ce3e718a8177143a595/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:08:11 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99de0f64050c737fbdcd3962ae15eba7a252ab1724b11ce3e718a8177143a595/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:08:11 np0005629333 podman[431918]: 2026-02-25 14:08:11.641158556 +0000 UTC m=+0.034937893 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:08:11 np0005629333 podman[431918]: 2026-02-25 14:08:11.741854443 +0000 UTC m=+0.135633790 container init ddb4d99a09a06f16324752dc82685c4b3a8e4ad1a3b6efb5e481423d1304acd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_elbakyan, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 09:08:11 np0005629333 podman[431918]: 2026-02-25 14:08:11.749956654 +0000 UTC m=+0.143735981 container start ddb4d99a09a06f16324752dc82685c4b3a8e4ad1a3b6efb5e481423d1304acd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_elbakyan, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:08:11 np0005629333 podman[431918]: 2026-02-25 14:08:11.753238127 +0000 UTC m=+0.147017684 container attach ddb4d99a09a06f16324752dc82685c4b3a8e4ad1a3b6efb5e481423d1304acd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_elbakyan, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Feb 25 09:08:12 np0005629333 nova_compute[244014]: 2026-02-25 14:08:12.023 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]: {
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:    "0": [
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:        {
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "devices": [
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "/dev/loop3"
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            ],
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "lv_name": "ceph_lv0",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "lv_size": "21470642176",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "name": "ceph_lv0",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "tags": {
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.cluster_name": "ceph",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.crush_device_class": "",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.encrypted": "0",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.objectstore": "bluestore",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.osd_id": "0",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.type": "block",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.vdo": "0",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.with_tpm": "0"
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            },
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "type": "block",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "vg_name": "ceph_vg0"
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:        }
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:    ],
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:    "1": [
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:        {
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "devices": [
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "/dev/loop4"
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            ],
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "lv_name": "ceph_lv1",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "lv_size": "21470642176",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "name": "ceph_lv1",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "tags": {
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.cluster_name": "ceph",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.crush_device_class": "",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.encrypted": "0",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.objectstore": "bluestore",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.osd_id": "1",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.type": "block",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.vdo": "0",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.with_tpm": "0"
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            },
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "type": "block",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "vg_name": "ceph_vg1"
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:        }
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:    ],
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:    "2": [
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:        {
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "devices": [
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "/dev/loop5"
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            ],
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "lv_name": "ceph_lv2",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "lv_size": "21470642176",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "name": "ceph_lv2",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "tags": {
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.cluster_name": "ceph",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.crush_device_class": "",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.encrypted": "0",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.objectstore": "bluestore",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.osd_id": "2",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.type": "block",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.vdo": "0",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:                "ceph.with_tpm": "0"
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            },
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "type": "block",
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:            "vg_name": "ceph_vg2"
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:        }
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]:    ]
Feb 25 09:08:12 np0005629333 strange_elbakyan[431936]: }
Feb 25 09:08:12 np0005629333 systemd[1]: libpod-ddb4d99a09a06f16324752dc82685c4b3a8e4ad1a3b6efb5e481423d1304acd1.scope: Deactivated successfully.
Feb 25 09:08:12 np0005629333 podman[431918]: 2026-02-25 14:08:12.070049571 +0000 UTC m=+0.463828928 container died ddb4d99a09a06f16324752dc82685c4b3a8e4ad1a3b6efb5e481423d1304acd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:08:12 np0005629333 systemd[1]: var-lib-containers-storage-overlay-99de0f64050c737fbdcd3962ae15eba7a252ab1724b11ce3e718a8177143a595-merged.mount: Deactivated successfully.
Feb 25 09:08:12 np0005629333 podman[431918]: 2026-02-25 14:08:12.117591621 +0000 UTC m=+0.511370988 container remove ddb4d99a09a06f16324752dc82685c4b3a8e4ad1a3b6efb5e481423d1304acd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_elbakyan, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 09:08:12 np0005629333 systemd[1]: libpod-conmon-ddb4d99a09a06f16324752dc82685c4b3a8e4ad1a3b6efb5e481423d1304acd1.scope: Deactivated successfully.
Feb 25 09:08:12 np0005629333 nova_compute[244014]: 2026-02-25 14:08:12.577 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:08:12 np0005629333 podman[432018]: 2026-02-25 14:08:12.588812448 +0000 UTC m=+0.043902467 container create 080c8f163be01795c976a8bd53f2c2bc517fc2e6ec12d8314a2e6844fd62d7c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shaw, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True)
Feb 25 09:08:12 np0005629333 systemd[1]: Started libpod-conmon-080c8f163be01795c976a8bd53f2c2bc517fc2e6ec12d8314a2e6844fd62d7c6.scope.
Feb 25 09:08:12 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:08:12 np0005629333 podman[432018]: 2026-02-25 14:08:12.572966319 +0000 UTC m=+0.028056388 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:08:12 np0005629333 podman[432018]: 2026-02-25 14:08:12.676962741 +0000 UTC m=+0.132052800 container init 080c8f163be01795c976a8bd53f2c2bc517fc2e6ec12d8314a2e6844fd62d7c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:08:12 np0005629333 podman[432018]: 2026-02-25 14:08:12.685446112 +0000 UTC m=+0.140536141 container start 080c8f163be01795c976a8bd53f2c2bc517fc2e6ec12d8314a2e6844fd62d7c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shaw, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:08:12 np0005629333 podman[432018]: 2026-02-25 14:08:12.688717975 +0000 UTC m=+0.143808014 container attach 080c8f163be01795c976a8bd53f2c2bc517fc2e6ec12d8314a2e6844fd62d7c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shaw, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 09:08:12 np0005629333 pedantic_shaw[432035]: 167 167
Feb 25 09:08:12 np0005629333 systemd[1]: libpod-080c8f163be01795c976a8bd53f2c2bc517fc2e6ec12d8314a2e6844fd62d7c6.scope: Deactivated successfully.
Feb 25 09:08:12 np0005629333 podman[432018]: 2026-02-25 14:08:12.691298258 +0000 UTC m=+0.146388307 container died 080c8f163be01795c976a8bd53f2c2bc517fc2e6ec12d8314a2e6844fd62d7c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shaw, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:08:12 np0005629333 systemd[1]: var-lib-containers-storage-overlay-994838c1f0e3b4ebfe24c83bc80f49af308180993b7135b4d14603b0ee669cee-merged.mount: Deactivated successfully.
Feb 25 09:08:12 np0005629333 podman[432018]: 2026-02-25 14:08:12.731080887 +0000 UTC m=+0.186170906 container remove 080c8f163be01795c976a8bd53f2c2bc517fc2e6ec12d8314a2e6844fd62d7c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 09:08:12 np0005629333 systemd[1]: libpod-conmon-080c8f163be01795c976a8bd53f2c2bc517fc2e6ec12d8314a2e6844fd62d7c6.scope: Deactivated successfully.
Feb 25 09:08:12 np0005629333 podman[432061]: 2026-02-25 14:08:12.909257976 +0000 UTC m=+0.053890631 container create 2f26d0fecb965a4f5ef1c9fb3f690e06fc9575b23a1fd8889c7418a150c9b8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_leakey, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Feb 25 09:08:12 np0005629333 systemd[1]: Started libpod-conmon-2f26d0fecb965a4f5ef1c9fb3f690e06fc9575b23a1fd8889c7418a150c9b8e6.scope.
Feb 25 09:08:12 np0005629333 podman[432061]: 2026-02-25 14:08:12.882071944 +0000 UTC m=+0.026704639 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:08:12 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:08:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7463a114bccbe731bc6b6d5019aef5b5ccad3d69af7f5647c5dd3fe72dfed027/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:08:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7463a114bccbe731bc6b6d5019aef5b5ccad3d69af7f5647c5dd3fe72dfed027/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:08:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7463a114bccbe731bc6b6d5019aef5b5ccad3d69af7f5647c5dd3fe72dfed027/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:08:12 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7463a114bccbe731bc6b6d5019aef5b5ccad3d69af7f5647c5dd3fe72dfed027/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:08:13 np0005629333 podman[432061]: 2026-02-25 14:08:13.013099114 +0000 UTC m=+0.157731749 container init 2f26d0fecb965a4f5ef1c9fb3f690e06fc9575b23a1fd8889c7418a150c9b8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_leakey, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Feb 25 09:08:13 np0005629333 podman[432061]: 2026-02-25 14:08:13.020103213 +0000 UTC m=+0.164735828 container start 2f26d0fecb965a4f5ef1c9fb3f690e06fc9575b23a1fd8889c7418a150c9b8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 09:08:13 np0005629333 podman[432061]: 2026-02-25 14:08:13.023750746 +0000 UTC m=+0.168383381 container attach 2f26d0fecb965a4f5ef1c9fb3f690e06fc9575b23a1fd8889c7418a150c9b8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_leakey, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:08:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4449: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:13 np0005629333 lvm[432156]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 09:08:13 np0005629333 lvm[432156]: VG ceph_vg0 finished
Feb 25 09:08:13 np0005629333 lvm[432155]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 09:08:13 np0005629333 lvm[432155]: VG ceph_vg1 finished
Feb 25 09:08:13 np0005629333 lvm[432158]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 09:08:13 np0005629333 lvm[432158]: VG ceph_vg2 finished
Feb 25 09:08:13 np0005629333 optimistic_leakey[432077]: {}
Feb 25 09:08:13 np0005629333 systemd[1]: libpod-2f26d0fecb965a4f5ef1c9fb3f690e06fc9575b23a1fd8889c7418a150c9b8e6.scope: Deactivated successfully.
Feb 25 09:08:13 np0005629333 systemd[1]: libpod-2f26d0fecb965a4f5ef1c9fb3f690e06fc9575b23a1fd8889c7418a150c9b8e6.scope: Consumed 1.162s CPU time.
Feb 25 09:08:13 np0005629333 podman[432061]: 2026-02-25 14:08:13.796761642 +0000 UTC m=+0.941394277 container died 2f26d0fecb965a4f5ef1c9fb3f690e06fc9575b23a1fd8889c7418a150c9b8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_leakey, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 09:08:13 np0005629333 systemd[1]: var-lib-containers-storage-overlay-7463a114bccbe731bc6b6d5019aef5b5ccad3d69af7f5647c5dd3fe72dfed027-merged.mount: Deactivated successfully.
Feb 25 09:08:13 np0005629333 podman[432061]: 2026-02-25 14:08:13.838897498 +0000 UTC m=+0.983530113 container remove 2f26d0fecb965a4f5ef1c9fb3f690e06fc9575b23a1fd8889c7418a150c9b8e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_leakey, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 09:08:13 np0005629333 systemd[1]: libpod-conmon-2f26d0fecb965a4f5ef1c9fb3f690e06fc9575b23a1fd8889c7418a150c9b8e6.scope: Deactivated successfully.
Feb 25 09:08:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 09:08:13 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:08:13 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 09:08:13 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:08:14 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:08:14 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:08:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4450: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:08:16 np0005629333 podman[432198]: 2026-02-25 14:08:16.7358383 +0000 UTC m=+0.065421468 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Feb 25 09:08:16 np0005629333 podman[432199]: 2026-02-25 14:08:16.781109706 +0000 UTC m=+0.110746225 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Feb 25 09:08:17 np0005629333 nova_compute[244014]: 2026-02-25 14:08:17.050 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:08:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4451: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:17 np0005629333 nova_compute[244014]: 2026-02-25 14:08:17.578 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:08:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4452: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:08:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4453: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:22 np0005629333 nova_compute[244014]: 2026-02-25 14:08:22.054 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:08:22 np0005629333 nova_compute[244014]: 2026-02-25 14:08:22.580 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:08:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4454: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4455: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:08:27 np0005629333 nova_compute[244014]: 2026-02-25 14:08:27.059 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:08:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4456: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:27 np0005629333 nova_compute[244014]: 2026-02-25 14:08:27.582 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:08:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4457: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:08:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4458: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:08:31
Feb 25 09:08:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 09:08:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 09:08:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['backups', '.rgw.root', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'images', '.mgr', 'cephfs.cephfs.meta']
Feb 25 09:08:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 09:08:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:08:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:08:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:08:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:08:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:08:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:08:32 np0005629333 nova_compute[244014]: 2026-02-25 14:08:32.115 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:08:32 np0005629333 nova_compute[244014]: 2026-02-25 14:08:32.583 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:08:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 09:08:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:08:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 09:08:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:08:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:08:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:08:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:08:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:08:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 09:08:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 09:08:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4459: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4460: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:08:35 np0005629333 nova_compute[244014]: 2026-02-25 14:08:35.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:08:36 np0005629333 nova_compute[244014]: 2026-02-25 14:08:36.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:08:36 np0005629333 nova_compute[244014]: 2026-02-25 14:08:36.908 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 09:08:36 np0005629333 nova_compute[244014]: 2026-02-25 14:08:36.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 09:08:36 np0005629333 nova_compute[244014]: 2026-02-25 14:08:36.909 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 09:08:36 np0005629333 nova_compute[244014]: 2026-02-25 14:08:36.910 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 09:08:36 np0005629333 nova_compute[244014]: 2026-02-25 14:08:36.910 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 09:08:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4461: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:37 np0005629333 nova_compute[244014]: 2026-02-25 14:08:37.118 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:08:37 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:08:37 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1211967623' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:08:37 np0005629333 nova_compute[244014]: 2026-02-25 14:08:37.479 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 09:08:37 np0005629333 nova_compute[244014]: 2026-02-25 14:08:37.585 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:08:37 np0005629333 nova_compute[244014]: 2026-02-25 14:08:37.651 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 09:08:37 np0005629333 nova_compute[244014]: 2026-02-25 14:08:37.652 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3479MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 09:08:37 np0005629333 nova_compute[244014]: 2026-02-25 14:08:37.652 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 09:08:37 np0005629333 nova_compute[244014]: 2026-02-25 14:08:37.653 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 09:08:37 np0005629333 nova_compute[244014]: 2026-02-25 14:08:37.735 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 09:08:37 np0005629333 nova_compute[244014]: 2026-02-25 14:08:37.736 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 09:08:37 np0005629333 nova_compute[244014]: 2026-02-25 14:08:37.903 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 09:08:38 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:08:38 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1087689625' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:08:38 np0005629333 nova_compute[244014]: 2026-02-25 14:08:38.459 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 09:08:38 np0005629333 nova_compute[244014]: 2026-02-25 14:08:38.466 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 09:08:38 np0005629333 nova_compute[244014]: 2026-02-25 14:08:38.488 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 09:08:38 np0005629333 nova_compute[244014]: 2026-02-25 14:08:38.491 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 09:08:38 np0005629333 nova_compute[244014]: 2026-02-25 14:08:38.492 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 09:08:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4462: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:08:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4463: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:42 np0005629333 nova_compute[244014]: 2026-02-25 14:08:42.123 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:08:42 np0005629333 nova_compute[244014]: 2026-02-25 14:08:42.587 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:08:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4464: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:43 np0005629333 nova_compute[244014]: 2026-02-25 14:08:43.493 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:08:43 np0005629333 nova_compute[244014]: 2026-02-25 14:08:43.494 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 09:08:43 np0005629333 nova_compute[244014]: 2026-02-25 14:08:43.494 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 09:08:43 np0005629333 nova_compute[244014]: 2026-02-25 14:08:43.520 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 09:08:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 09:08:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:08:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 09:08:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:08:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 09:08:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:08:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:08:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:08:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:08:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:08:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 09:08:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:08:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 09:08:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:08:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:08:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:08:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 09:08:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:08:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 09:08:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:08:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:08:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:08:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
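
Each "pg target" above is simple arithmetic: the pool's fraction of raw space (the final effective_target_ratio field, 64411926528 bytes, is the ~60 GiB raw capacity also shown in the pgmap lines) times its bias times a cluster-wide PG budget. A hedged reconstruction, assuming the budget is mon_target_pg_per_osd (default 100) times the 3 OSDs here:

    # Reproduce the raw "pg target" values logged by pg_autoscaler above.
    # Assumed PG budget: mon_target_pg_per_osd (default 100) * 3 OSDs = 300.
    PG_BUDGET = 300
    pools = {  # name: (usage ratio, bias), copied from the log lines above
        '.mgr':               (7.185749983720779e-06,  1.0),
        'vms':                (1.73878357684759e-05,   1.0),
        'images':             (0.0006714637386478266,  1.0),
        'cephfs.cephfs.meta': (1.3916366864300228e-06, 4.0),
    }
    for name, (usage, bias) in pools.items():
        print(name, usage * bias * PG_BUDGET)
    # Matches the logged targets to float rounding; the autoscaler then
    # quantizes to a power of two and only acts on large deviations,
    # which is why every pool stays at its current pg_num.
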
Feb 25 09:08:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4465: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:08:46 np0005629333 nova_compute[244014]: 2026-02-25 14:08:46.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:08:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4466: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:47 np0005629333 nova_compute[244014]: 2026-02-25 14:08:47.126 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:08:47 np0005629333 nova_compute[244014]: 2026-02-25 14:08:47.588 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:08:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 09:08:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3090449203' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 09:08:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 09:08:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3090449203' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
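
The two audit entries above show client.openstack polling cluster and pool capacity, the same query the `ceph` CLI issues. A minimal sketch of reproducing it from this host; the JSON key names are assumptions based on typical `ceph df --format=json` output:

    # Re-run the "df" query audited above and read the cluster totals.
    import json
    import subprocess

    df = json.loads(subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']))
    print('total:', df['stats']['total_bytes'],
          'avail:', df['stats']['total_avail_bytes'])
    for pool in df['pools']:
        print(pool['name'], pool['stats'].get('bytes_used'))
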
Feb 25 09:08:47 np0005629333 podman[432290]: 2026-02-25 14:08:47.738183101 +0000 UTC m=+0.072325964 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team)
Feb 25 09:08:47 np0005629333 podman[432291]: 2026-02-25 14:08:47.777756165 +0000 UTC m=+0.112108694 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
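
The health_status=healthy events above come from podman's healthcheck timer running the configured 'test' command (/openstack/healthcheck) inside each container. The same check can be triggered by hand; a sketch using subprocess (run as root, since these containers are root-managed; exit status 0 means healthy):

    # Run the same healthcheck podman's timer just ran for these containers.
    import subprocess

    for name in ('ovn_metadata_agent', 'ovn_controller'):
        rc = subprocess.call(['podman', 'healthcheck', 'run', name])
        print(name, 'healthy' if rc == 0 else f'unhealthy (rc={rc})')
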
Feb 25 09:08:48 np0005629333 nova_compute[244014]: 2026-02-25 14:08:48.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:08:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4467: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:08:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4468: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:52 np0005629333 nova_compute[244014]: 2026-02-25 14:08:52.130 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:08:52 np0005629333 nova_compute[244014]: 2026-02-25 14:08:52.592 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:08:52 np0005629333 nova_compute[244014]: 2026-02-25 14:08:52.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:08:52 np0005629333 nova_compute[244014]: 2026-02-25 14:08:52.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
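
The "skipping" line is the expected no-op path: the periodic task fires every cycle, but soft-deleted instances are only reclaimed when reclaim_instance_interval is set to a positive number of seconds. A minimal sketch of that gating pattern (illustrative only, not Nova's actual implementation):

    # Illustrative gating, mirroring the log line above (not Nova's source).
    def reclaim_queued_deletes(conf, soft_deleted):
        interval = conf.get('reclaim_instance_interval', 0)
        if interval <= 0:
            print('CONF.reclaim_instance_interval <= 0, skipping...')
            return
        for inst in soft_deleted:
            if inst['age_seconds'] >= interval:
                print('reclaiming', inst['uuid'])

    reclaim_queued_deletes({'reclaim_instance_interval': 0}, [])
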
Feb 25 09:08:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4469: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4470: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:08:55.113 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 09:08:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:08:55.114 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 09:08:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:08:55.114 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 09:08:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:08:55 np0005629333 nova_compute[244014]: 2026-02-25 14:08:55.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:08:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4471: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:08:57 np0005629333 nova_compute[244014]: 2026-02-25 14:08:57.135 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:08:57 np0005629333 nova_compute[244014]: 2026-02-25 14:08:57.626 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:08:57 np0005629333 nova_compute[244014]: 2026-02-25 14:08:57.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:08:57 np0005629333 nova_compute[244014]: 2026-02-25 14:08:57.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:08:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4472: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:09:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4473: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:09:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:09:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:09:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:09:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:09:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:09:02 np0005629333 nova_compute[244014]: 2026-02-25 14:09:02.138 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:09:02 np0005629333 nova_compute[244014]: 2026-02-25 14:09:02.627 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:09:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4474: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4475: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:09:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4476: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:07 np0005629333 nova_compute[244014]: 2026-02-25 14:09:07.141 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:09:07 np0005629333 nova_compute[244014]: 2026-02-25 14:09:07.629 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:09:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4477: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:09:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4478: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:12 np0005629333 nova_compute[244014]: 2026-02-25 14:09:12.160 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:09:12 np0005629333 nova_compute[244014]: 2026-02-25 14:09:12.631 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:09:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 09:09:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 8400.0 total, 600.0 interval
Cumulative writes: 20K writes, 92K keys, 20K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.02 MB/s
Cumulative WAL: 20K writes, 20K syncs, 1.00 writes per sync, written: 0.13 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1294 writes, 5890 keys, 1294 commit groups, 1.0 writes per commit group, ingest: 8.65 MB, 0.01 MB/s
Interval WAL: 1294 writes, 1294 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     54.2      2.08              0.32        70    0.030       0      0       0.0       0.0
  L6      1/0    8.97 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.6    104.7     90.0      7.05              1.93        69    0.102    529K    36K       0.0       0.0
 Sum      1/0    8.97 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.6     80.8     81.8      9.14              2.25       139    0.066    529K    36K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   9.3    123.0    120.8      0.48              0.22        10    0.048     52K   2539       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0    104.7     90.0      7.05              1.93        69    0.102    529K    36K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     54.3      2.08              0.32        69    0.030       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 8400.0 total, 600.0 interval
Flush(GB): cumulative 0.110, interval 0.006
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.73 GB write, 0.09 MB/s write, 0.72 GB read, 0.09 MB/s read, 9.1 seconds
Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.5 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x561a1af858d0#2 capacity: 304.00 MB usage: 83.02 MB table_size: 0 occupancy: 18446744073709551615 collections: 15 last_copies: 0 last_secs: 0.001113 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(5126,79.39 MB,26.1151%) FilterBlock(140,1.41 MB,0.463882%) IndexBlock(140,2.22 MB,0.728763%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
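
In the raw journal the dump above arrives as one flattened line: the syslog pipeline octal-escapes embedded control bytes, so newlines become #012 and the ANSI colour resets appended to the oslo log lines become #033[00m (0o12 is LF, 0o33 is ESC). A small decoder, assuming that #NNN convention, restores the original text:

    # Undo syslog-style octal escaping of control bytes (e.g. "#012" -> "\n").
    import re

    def unescape_octal(line: str) -> str:
        return re.sub(r'#([0-7]{3})', lambda m: chr(int(m.group(1), 8)), line)

    print(unescape_octal('** DB Stats **#012Uptime(secs): 8400.0 total'))
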
Feb 25 09:09:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4479: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:14 np0005629333 podman[432431]: 2026-02-25 14:09:14.573182881 +0000 UTC m=+0.063325439 container exec ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:09:14 np0005629333 podman[432431]: 2026-02-25 14:09:14.690672087 +0000 UTC m=+0.180814605 container exec_died ca851dfb430e1de8bea110dbdf17b644ef995608860f2247ea3cd72e47c71fe8 (image=quay.io/ceph/ceph:v20, name=ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mon-compute-0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:09:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4480: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:09:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 09:09:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:09:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 09:09:15 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:09:16 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:09:16 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:09:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 09:09:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 09:09:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 09:09:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 09:09:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 09:09:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:09:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 09:09:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 09:09:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 09:09:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 09:09:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 09:09:16 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 09:09:16 np0005629333 podman[432765]: 2026-02-25 14:09:16.755466684 +0000 UTC m=+0.057222895 container create deaf3ea92ce878b048d4a940e6f8f573c145b4c89b48abc065b6c4d99687db20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:09:16 np0005629333 systemd[1]: Started libpod-conmon-deaf3ea92ce878b048d4a940e6f8f573c145b4c89b48abc065b6c4d99687db20.scope.
Feb 25 09:09:16 np0005629333 podman[432765]: 2026-02-25 14:09:16.732014189 +0000 UTC m=+0.033770420 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:09:16 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:09:16 np0005629333 podman[432765]: 2026-02-25 14:09:16.85218445 +0000 UTC m=+0.153940721 container init deaf3ea92ce878b048d4a940e6f8f573c145b4c89b48abc065b6c4d99687db20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wright, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 09:09:16 np0005629333 podman[432765]: 2026-02-25 14:09:16.861605188 +0000 UTC m=+0.163361369 container start deaf3ea92ce878b048d4a940e6f8f573c145b4c89b48abc065b6c4d99687db20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:09:16 np0005629333 podman[432765]: 2026-02-25 14:09:16.865544379 +0000 UTC m=+0.167300590 container attach deaf3ea92ce878b048d4a940e6f8f573c145b4c89b48abc065b6c4d99687db20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 09:09:16 np0005629333 naughty_wright[432780]: 167 167
Feb 25 09:09:16 np0005629333 systemd[1]: libpod-deaf3ea92ce878b048d4a940e6f8f573c145b4c89b48abc065b6c4d99687db20.scope: Deactivated successfully.
Feb 25 09:09:16 np0005629333 podman[432765]: 2026-02-25 14:09:16.868113952 +0000 UTC m=+0.169870173 container died deaf3ea92ce878b048d4a940e6f8f573c145b4c89b48abc065b6c4d99687db20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Feb 25 09:09:16 np0005629333 systemd[1]: var-lib-containers-storage-overlay-2481b1efda1f898444d9203320be42b15621b664bd14ce3bd38236189348d047-merged.mount: Deactivated successfully.
Feb 25 09:09:16 np0005629333 podman[432765]: 2026-02-25 14:09:16.918210145 +0000 UTC m=+0.219966356 container remove deaf3ea92ce878b048d4a940e6f8f573c145b4c89b48abc065b6c4d99687db20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Feb 25 09:09:16 np0005629333 systemd[1]: libpod-conmon-deaf3ea92ce878b048d4a940e6f8f573c145b4c89b48abc065b6c4d99687db20.scope: Deactivated successfully.
Feb 25 09:09:17 np0005629333 podman[432804]: 2026-02-25 14:09:17.101633762 +0000 UTC m=+0.051540504 container create ed06047c8c22a9d21ccf67edba05a16120dc5307863bb63a9207139c8d1f9444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cerf, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:09:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4481: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:17 np0005629333 systemd[1]: Started libpod-conmon-ed06047c8c22a9d21ccf67edba05a16120dc5307863bb63a9207139c8d1f9444.scope.
Feb 25 09:09:17 np0005629333 nova_compute[244014]: 2026-02-25 14:09:17.164 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:09:17 np0005629333 podman[432804]: 2026-02-25 14:09:17.077350993 +0000 UTC m=+0.027257765 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:09:17 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:09:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d498e010756a16b3f3411c2745fb63df030e3d5c3e36bcee02abae1f4137c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:09:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d498e010756a16b3f3411c2745fb63df030e3d5c3e36bcee02abae1f4137c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:09:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d498e010756a16b3f3411c2745fb63df030e3d5c3e36bcee02abae1f4137c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:09:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d498e010756a16b3f3411c2745fb63df030e3d5c3e36bcee02abae1f4137c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:09:17 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89d498e010756a16b3f3411c2745fb63df030e3d5c3e36bcee02abae1f4137c9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
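
The kernel prints one of these warnings per bind mount because the underlying XFS filesystem was created without the bigtime feature, leaving inode timestamps 32-bit and capped at 0x7fffffff seconds after the Unix epoch. What that ceiling means is easy to check:

    # The timestamp ceiling the kernel warns about: 2**31 - 1 epoch seconds.
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
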
Feb 25 09:09:17 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 09:09:17 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:09:17 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 09:09:17 np0005629333 podman[432804]: 2026-02-25 14:09:17.212679394 +0000 UTC m=+0.162586116 container init ed06047c8c22a9d21ccf67edba05a16120dc5307863bb63a9207139c8d1f9444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cerf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:09:17 np0005629333 podman[432804]: 2026-02-25 14:09:17.223253395 +0000 UTC m=+0.173160117 container start ed06047c8c22a9d21ccf67edba05a16120dc5307863bb63a9207139c8d1f9444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cerf, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:09:17 np0005629333 podman[432804]: 2026-02-25 14:09:17.226147897 +0000 UTC m=+0.176054619 container attach ed06047c8c22a9d21ccf67edba05a16120dc5307863bb63a9207139c8d1f9444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cerf, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:09:17 np0005629333 nova_compute[244014]: 2026-02-25 14:09:17.634 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:09:17 np0005629333 boring_cerf[432821]: --> passed data devices: 0 physical, 3 LVM
Feb 25 09:09:17 np0005629333 boring_cerf[432821]: --> All data devices are unavailable
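
These two lines are ceph-volume output from the short-lived boring_cerf container: cephadm re-applied its OSD spec, saw only the three LVM-backed data devices, and rejected them all, presumably because they already carry OSDs, so no new OSD was created. A hedged way to see the per-device reasons; run inside the ceph container, and the JSON field names are assumptions based on typical `ceph-volume inventory` output:

    # Inspect why ceph-volume considers the data devices unavailable.
    import json
    import subprocess

    devices = json.loads(subprocess.check_output(
        ['ceph-volume', 'inventory', '--format', 'json']))
    for dev in devices:
        status = 'available' if dev['available'] else dev['rejected_reasons']
        print(dev['path'], status)
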
Feb 25 09:09:17 np0005629333 systemd[1]: libpod-ed06047c8c22a9d21ccf67edba05a16120dc5307863bb63a9207139c8d1f9444.scope: Deactivated successfully.
Feb 25 09:09:17 np0005629333 podman[432804]: 2026-02-25 14:09:17.717760924 +0000 UTC m=+0.667667656 container died ed06047c8c22a9d21ccf67edba05a16120dc5307863bb63a9207139c8d1f9444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cerf, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:09:17 np0005629333 systemd[1]: var-lib-containers-storage-overlay-89d498e010756a16b3f3411c2745fb63df030e3d5c3e36bcee02abae1f4137c9-merged.mount: Deactivated successfully.
Feb 25 09:09:17 np0005629333 podman[432804]: 2026-02-25 14:09:17.759611892 +0000 UTC m=+0.709518654 container remove ed06047c8c22a9d21ccf67edba05a16120dc5307863bb63a9207139c8d1f9444 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Feb 25 09:09:17 np0005629333 systemd[1]: libpod-conmon-ed06047c8c22a9d21ccf67edba05a16120dc5307863bb63a9207139c8d1f9444.scope: Deactivated successfully.
Feb 25 09:09:17 np0005629333 podman[432853]: 2026-02-25 14:09:17.876073938 +0000 UTC m=+0.067551019 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Feb 25 09:09:17 np0005629333 podman[432855]: 2026-02-25 14:09:17.900677337 +0000 UTC m=+0.092174988 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 25 09:09:18 np0005629333 podman[432961]: 2026-02-25 14:09:18.240041461 +0000 UTC m=+0.041970982 container create 68135169fefe638327f7f3ada87084f683986fd5a3ce60cb55218d378c0798ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shtern, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:09:18 np0005629333 systemd[1]: Started libpod-conmon-68135169fefe638327f7f3ada87084f683986fd5a3ce60cb55218d378c0798ea.scope.
Feb 25 09:09:18 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:09:18 np0005629333 podman[432961]: 2026-02-25 14:09:18.224521891 +0000 UTC m=+0.026451442 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:09:18 np0005629333 podman[432961]: 2026-02-25 14:09:18.331358954 +0000 UTC m=+0.133288495 container init 68135169fefe638327f7f3ada87084f683986fd5a3ce60cb55218d378c0798ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shtern, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:09:18 np0005629333 podman[432961]: 2026-02-25 14:09:18.338080425 +0000 UTC m=+0.140009946 container start 68135169fefe638327f7f3ada87084f683986fd5a3ce60cb55218d378c0798ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 09:09:18 np0005629333 podman[432961]: 2026-02-25 14:09:18.34144573 +0000 UTC m=+0.143375271 container attach 68135169fefe638327f7f3ada87084f683986fd5a3ce60cb55218d378c0798ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shtern, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:09:18 np0005629333 pedantic_shtern[432978]: 167 167
Feb 25 09:09:18 np0005629333 systemd[1]: libpod-68135169fefe638327f7f3ada87084f683986fd5a3ce60cb55218d378c0798ea.scope: Deactivated successfully.
Feb 25 09:09:18 np0005629333 podman[432961]: 2026-02-25 14:09:18.347869512 +0000 UTC m=+0.149799063 container died 68135169fefe638327f7f3ada87084f683986fd5a3ce60cb55218d378c0798ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shtern, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 09:09:18 np0005629333 systemd[1]: var-lib-containers-storage-overlay-43378bbf4b0b88aaeecb84b56418ff4d7a0d64641cc455c41db76a0fd4dbcd73-merged.mount: Deactivated successfully.
Feb 25 09:09:18 np0005629333 podman[432961]: 2026-02-25 14:09:18.387107876 +0000 UTC m=+0.189037427 container remove 68135169fefe638327f7f3ada87084f683986fd5a3ce60cb55218d378c0798ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 09:09:18 np0005629333 systemd[1]: libpod-conmon-68135169fefe638327f7f3ada87084f683986fd5a3ce60cb55218d378c0798ea.scope: Deactivated successfully.
Feb 25 09:09:18 np0005629333 podman[433001]: 2026-02-25 14:09:18.587286599 +0000 UTC m=+0.053880890 container create f23b52a901146eb684d0d9b239e08cad442055b60d328bd8f4d5c74b891d45f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_johnson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:09:18 np0005629333 systemd[1]: Started libpod-conmon-f23b52a901146eb684d0d9b239e08cad442055b60d328bd8f4d5c74b891d45f6.scope.
Feb 25 09:09:18 np0005629333 podman[433001]: 2026-02-25 14:09:18.568033093 +0000 UTC m=+0.034627424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:09:18 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:09:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb82a410d81f08cd713fef24d02234e0d0b31af7a569a83ff5552bda9941dad6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:09:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb82a410d81f08cd713fef24d02234e0d0b31af7a569a83ff5552bda9941dad6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:09:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb82a410d81f08cd713fef24d02234e0d0b31af7a569a83ff5552bda9941dad6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:09:18 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb82a410d81f08cd713fef24d02234e0d0b31af7a569a83ff5552bda9941dad6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
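
The four kernel lines above fire because the container bind-mounts sit on xfs filesystems whose inodes use classic 32-bit timestamps; 0x7fffffff seconds after the epoch is the familiar year-2038 ceiling (xfs created with bigtime=1 extends the range, and `xfs_info <mountpoint>` reports which variant is in use). The cutoff itself, as a quick check:

    from datetime import datetime, timezone

    # 0x7fffffff is the classic signed 32-bit time_t limit the kernel cites.
    limit = 0x7FFFFFFF
    print(hex(limit), "->", datetime.fromtimestamp(limit, tz=timezone.utc))
    # 0x7fffffff -> 2038-01-19 03:14:07+00:00
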
Feb 25 09:09:18 np0005629333 podman[433001]: 2026-02-25 14:09:18.697435167 +0000 UTC m=+0.164029528 container init f23b52a901146eb684d0d9b239e08cad442055b60d328bd8f4d5c74b891d45f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Feb 25 09:09:18 np0005629333 podman[433001]: 2026-02-25 14:09:18.709997053 +0000 UTC m=+0.176591374 container start f23b52a901146eb684d0d9b239e08cad442055b60d328bd8f4d5c74b891d45f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_johnson, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 09:09:18 np0005629333 podman[433001]: 2026-02-25 14:09:18.713782321 +0000 UTC m=+0.180376702 container attach f23b52a901146eb684d0d9b239e08cad442055b60d328bd8f4d5c74b891d45f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]: {
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:    "0": [
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:        {
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "devices": [
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "/dev/loop3"
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            ],
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "lv_name": "ceph_lv0",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "lv_size": "21470642176",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "name": "ceph_lv0",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "tags": {
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.cluster_name": "ceph",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.crush_device_class": "",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.encrypted": "0",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.objectstore": "bluestore",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.osd_id": "0",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.type": "block",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.vdo": "0",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.with_tpm": "0"
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            },
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "type": "block",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "vg_name": "ceph_vg0"
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:        }
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:    ],
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:    "1": [
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:        {
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "devices": [
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "/dev/loop4"
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            ],
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "lv_name": "ceph_lv1",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "lv_size": "21470642176",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "name": "ceph_lv1",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "tags": {
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.cluster_name": "ceph",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.crush_device_class": "",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.encrypted": "0",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.objectstore": "bluestore",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.osd_id": "1",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.type": "block",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.vdo": "0",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.with_tpm": "0"
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            },
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "type": "block",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "vg_name": "ceph_vg1"
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:        }
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:    ],
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:    "2": [
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:        {
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "devices": [
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "/dev/loop5"
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            ],
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "lv_name": "ceph_lv2",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "lv_size": "21470642176",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "name": "ceph_lv2",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "tags": {
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.cluster_name": "ceph",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.crush_device_class": "",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.encrypted": "0",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.objectstore": "bluestore",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.osd_id": "2",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.type": "block",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.vdo": "0",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:                "ceph.with_tpm": "0"
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            },
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "type": "block",
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:            "vg_name": "ceph_vg2"
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:        }
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]:    ]
Feb 25 09:09:19 np0005629333 sharp_johnson[433017]: }
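
The JSON that sharp_johnson just printed is a ceph-volume LVM inventory (its shape matches `ceph-volume lvm list --format json`, though the exact invocation is not in the log): a map from OSD id to a list of logical-volume records whose ceph.* LV tags carry the cluster fsid, OSD fsid, objectstore type, and encryption state. A minimal sketch condensing it into an OSD-to-device map, assuming the output was captured to a file (the filename is hypothetical):

    import json

    with open("ceph-volume-lvm-list.json") as fh:  # hypothetical capture
        inventory = json.load(fh)

    for osd_id, lvs in sorted(inventory.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"({tags['ceph.objectstore']}, "
                  f"encrypted={tags['ceph.encrypted']}, "
                  f"osd_fsid={tags['ceph.osd_fsid']})")
    # osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (bluestore, encrypted=0,
    #        osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441)
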
Feb 25 09:09:19 np0005629333 systemd[1]: libpod-f23b52a901146eb684d0d9b239e08cad442055b60d328bd8f4d5c74b891d45f6.scope: Deactivated successfully.
Feb 25 09:09:19 np0005629333 podman[433001]: 2026-02-25 14:09:19.04347435 +0000 UTC m=+0.510068671 container died f23b52a901146eb684d0d9b239e08cad442055b60d328bd8f4d5c74b891d45f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_johnson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 09:09:19 np0005629333 systemd[1]: var-lib-containers-storage-overlay-fb82a410d81f08cd713fef24d02234e0d0b31af7a569a83ff5552bda9941dad6-merged.mount: Deactivated successfully.
Feb 25 09:09:19 np0005629333 podman[433001]: 2026-02-25 14:09:19.08748079 +0000 UTC m=+0.554075061 container remove f23b52a901146eb684d0d9b239e08cad442055b60d328bd8f4d5c74b891d45f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_johnson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Feb 25 09:09:19 np0005629333 systemd[1]: libpod-conmon-f23b52a901146eb684d0d9b239e08cad442055b60d328bd8f4d5c74b891d45f6.scope: Deactivated successfully.
Feb 25 09:09:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4482: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:19 np0005629333 podman[433101]: 2026-02-25 14:09:19.538262397 +0000 UTC m=+0.042910809 container create bd02e768e018efef39de59519498e982eaceb2dfa0049b41f4c32e9f48fc0787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dewdney, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 09:09:19 np0005629333 systemd[1]: Started libpod-conmon-bd02e768e018efef39de59519498e982eaceb2dfa0049b41f4c32e9f48fc0787.scope.
Feb 25 09:09:19 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:09:19 np0005629333 podman[433101]: 2026-02-25 14:09:19.518857486 +0000 UTC m=+0.023505938 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:09:19 np0005629333 podman[433101]: 2026-02-25 14:09:19.630501546 +0000 UTC m=+0.135150028 container init bd02e768e018efef39de59519498e982eaceb2dfa0049b41f4c32e9f48fc0787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dewdney, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Feb 25 09:09:19 np0005629333 podman[433101]: 2026-02-25 14:09:19.639747318 +0000 UTC m=+0.144395730 container start bd02e768e018efef39de59519498e982eaceb2dfa0049b41f4c32e9f48fc0787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Feb 25 09:09:19 np0005629333 podman[433101]: 2026-02-25 14:09:19.643066653 +0000 UTC m=+0.147715145 container attach bd02e768e018efef39de59519498e982eaceb2dfa0049b41f4c32e9f48fc0787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Feb 25 09:09:19 np0005629333 romantic_dewdney[433117]: 167 167
Feb 25 09:09:19 np0005629333 systemd[1]: libpod-bd02e768e018efef39de59519498e982eaceb2dfa0049b41f4c32e9f48fc0787.scope: Deactivated successfully.
Feb 25 09:09:19 np0005629333 podman[433101]: 2026-02-25 14:09:19.644667398 +0000 UTC m=+0.149315800 container died bd02e768e018efef39de59519498e982eaceb2dfa0049b41f4c32e9f48fc0787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dewdney, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Feb 25 09:09:19 np0005629333 systemd[1]: var-lib-containers-storage-overlay-b14d3e8d78f45e0a6f4a7dc40c431ad972e7541040d6e9781d37248826d5490b-merged.mount: Deactivated successfully.
Feb 25 09:09:19 np0005629333 podman[433101]: 2026-02-25 14:09:19.682639596 +0000 UTC m=+0.187287988 container remove bd02e768e018efef39de59519498e982eaceb2dfa0049b41f4c32e9f48fc0787 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 09:09:19 np0005629333 systemd[1]: libpod-conmon-bd02e768e018efef39de59519498e982eaceb2dfa0049b41f4c32e9f48fc0787.scope: Deactivated successfully.
Feb 25 09:09:19 np0005629333 podman[433142]: 2026-02-25 14:09:19.847863906 +0000 UTC m=+0.050800073 container create 519021fa20c24ebf46182976660f09891f82d29968169094c5740310a1c775d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_faraday, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 09:09:19 np0005629333 systemd[1]: Started libpod-conmon-519021fa20c24ebf46182976660f09891f82d29968169094c5740310a1c775d0.scope.
Feb 25 09:09:19 np0005629333 podman[433142]: 2026-02-25 14:09:19.823068732 +0000 UTC m=+0.026004959 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:09:19 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:09:19 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20325ab69bb9bd93d539aca57ebe1ae0a55724b37e185530ad2515ce221410ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:09:19 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20325ab69bb9bd93d539aca57ebe1ae0a55724b37e185530ad2515ce221410ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:09:19 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20325ab69bb9bd93d539aca57ebe1ae0a55724b37e185530ad2515ce221410ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:09:19 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20325ab69bb9bd93d539aca57ebe1ae0a55724b37e185530ad2515ce221410ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:09:19 np0005629333 podman[433142]: 2026-02-25 14:09:19.959040542 +0000 UTC m=+0.161976679 container init 519021fa20c24ebf46182976660f09891f82d29968169094c5740310a1c775d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_faraday, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Feb 25 09:09:19 np0005629333 podman[433142]: 2026-02-25 14:09:19.973653687 +0000 UTC m=+0.176589844 container start 519021fa20c24ebf46182976660f09891f82d29968169094c5740310a1c775d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_faraday, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 09:09:19 np0005629333 podman[433142]: 2026-02-25 14:09:19.977211618 +0000 UTC m=+0.180147775 container attach 519021fa20c24ebf46182976660f09891f82d29968169094c5740310a1c775d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_faraday, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 09:09:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:09:20 np0005629333 lvm[433234]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 09:09:20 np0005629333 lvm[433234]: VG ceph_vg0 finished
Feb 25 09:09:20 np0005629333 lvm[433237]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 09:09:20 np0005629333 lvm[433237]: VG ceph_vg1 finished
Feb 25 09:09:20 np0005629333 lvm[433239]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 09:09:20 np0005629333 lvm[433239]: VG ceph_vg2 finished
Feb 25 09:09:20 np0005629333 lvm[433240]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 09:09:20 np0005629333 lvm[433240]: VG ceph_vg1 finished
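
The lvm[...] pairs are LVM's event-driven autoactivation: as each loop-backed PV comes online, udev fires pvscan, and once every PV of a volume group is present the group is activated and logged as "finished". The doubled ceph_vg1 pair (PIDs 433237 and 433240) is two pvscan workers observing the same completion, which is harmless. The resulting state can be read back with lvs' JSON reporting (a standard LVM2 option; run as root):

    import json
    import subprocess

    out = subprocess.run(
        ["lvs", "--reportformat", "json",
         "-o", "vg_name,lv_path,lv_active,devices"],
        check=True, capture_output=True, text=True,
    ).stdout
    for lv in json.loads(out)["report"][0]["lv"]:
        if lv["vg_name"].startswith("ceph_vg"):
            print(lv["vg_name"], lv["lv_path"],
                  lv["lv_active"] or "inactive", lv["devices"])
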
Feb 25 09:09:20 np0005629333 stupefied_faraday[433158]: {}
Feb 25 09:09:20 np0005629333 systemd[1]: libpod-519021fa20c24ebf46182976660f09891f82d29968169094c5740310a1c775d0.scope: Deactivated successfully.
Feb 25 09:09:20 np0005629333 systemd[1]: libpod-519021fa20c24ebf46182976660f09891f82d29968169094c5740310a1c775d0.scope: Consumed 1.214s CPU time.
Feb 25 09:09:20 np0005629333 podman[433142]: 2026-02-25 14:09:20.828085134 +0000 UTC m=+1.031021301 container died 519021fa20c24ebf46182976660f09891f82d29968169094c5740310a1c775d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_faraday, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Feb 25 09:09:20 np0005629333 systemd[1]: var-lib-containers-storage-overlay-20325ab69bb9bd93d539aca57ebe1ae0a55724b37e185530ad2515ce221410ed-merged.mount: Deactivated successfully.
Feb 25 09:09:20 np0005629333 podman[433142]: 2026-02-25 14:09:20.888324654 +0000 UTC m=+1.091260831 container remove 519021fa20c24ebf46182976660f09891f82d29968169094c5740310a1c775d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_faraday, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Feb 25 09:09:20 np0005629333 systemd[1]: libpod-conmon-519021fa20c24ebf46182976660f09891f82d29968169094c5740310a1c775d0.scope: Deactivated successfully.
Feb 25 09:09:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 09:09:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:09:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 09:09:20 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
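
These two audited config-key set commands are the cephadm mgr module persisting the device inventory it just gathered (the JSON above) into the monitors' config-key store under per-host keys; that cache is what lets the orchestrator's view of each host survive a mgr failover. Reading one back, assuming an admin keyring and that the payload is JSON (true of current cephadm, but an assumption here):

    import json
    import subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"  # copied from the audit line
    blob = subprocess.run(["ceph", "config-key", "get", key],
                          check=True, capture_output=True, text=True).stdout
    print(f"{key}: {len(blob)} bytes")
    # Payload is JSON in current cephadm releases (assumption):
    print(json.dumps(json.loads(blob), indent=2)[:400])
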
Feb 25 09:09:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4483: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:21 np0005629333 nova_compute[244014]: 2026-02-25 14:09:21.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:09:21 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:09:21 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:09:21 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #225. Immutable memtables: 0.
Feb 25 09:09:21 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:21.977885) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 09:09:21 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 141] Flushing memtable with next log file: 225
Feb 25 09:09:21 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028561977931, "job": 141, "event": "flush_started", "num_memtables": 1, "num_entries": 2057, "num_deletes": 251, "total_data_size": 3541043, "memory_usage": 3595528, "flush_reason": "Manual Compaction"}
Feb 25 09:09:21 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 141] Level-0 flush table #226: started
Feb 25 09:09:21 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028561989719, "cf_name": "default", "job": 141, "event": "table_file_creation", "file_number": 226, "file_size": 3451806, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 91183, "largest_seqno": 93239, "table_properties": {"data_size": 3442357, "index_size": 6006, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18819, "raw_average_key_size": 20, "raw_value_size": 3423662, "raw_average_value_size": 3665, "num_data_blocks": 266, "num_entries": 934, "num_filter_entries": 934, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772028334, "oldest_key_time": 1772028334, "file_creation_time": 1772028561, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 226, "seqno_to_time_mapping": "N/A"}}
Feb 25 09:09:21 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 141] Flush lasted 11894 microseconds, and 5008 cpu microseconds.
Feb 25 09:09:21 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 09:09:21 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:21.989776) [db/flush_job.cc:967] [default] [JOB 141] Level-0 flush table #226: 3451806 bytes OK
Feb 25 09:09:21 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:21.989802) [db/memtable_list.cc:519] [default] Level-0 commit table #226 started
Feb 25 09:09:21 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:21.991647) [db/memtable_list.cc:722] [default] Level-0 commit table #226: memtable #1 done
Feb 25 09:09:21 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:21.991664) EVENT_LOG_v1 {"time_micros": 1772028561991658, "job": 141, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 09:09:21 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:21.991706) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 09:09:21 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 141] Try to delete WAL files size 3532417, prev total WAL file size 3559236, number of live WAL files 2.
Feb 25 09:09:21 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000222.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 09:09:21 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:21.992566) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039323837' seq:72057594037927935, type:22 .. '7061786F730039353339' seq:0, type:0; will stop at (end)
Feb 25 09:09:21 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 142] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 09:09:21 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 141 Base level 0, inputs: [226(3370KB)], [224(9189KB)]
Feb 25 09:09:21 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028561992626, "job": 142, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [226], "files_L6": [224], "score": -1, "input_data_size": 12862189, "oldest_snapshot_seqno": -1}
Feb 25 09:09:22 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 142] Generated table #227: 10312 keys, 11098701 bytes, temperature: kUnknown
Feb 25 09:09:22 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028562062037, "cf_name": "default", "job": 142, "event": "table_file_creation", "file_number": 227, "file_size": 11098701, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11036234, "index_size": 35602, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25797, "raw_key_size": 274415, "raw_average_key_size": 26, "raw_value_size": 10857837, "raw_average_value_size": 1052, "num_data_blocks": 1346, "num_entries": 10312, "num_filter_entries": 10312, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772028561, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 227, "seqno_to_time_mapping": "N/A"}}
Feb 25 09:09:22 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 09:09:22 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:22.062475) [db/compaction/compaction_job.cc:1663] [default] [JOB 142] Compacted 1@0 + 1@6 files to L6 => 11098701 bytes
Feb 25 09:09:22 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:22.064110) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.0 rd, 159.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 9.0 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 10826, records dropped: 514 output_compression: NoCompression
Feb 25 09:09:22 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:22.064142) EVENT_LOG_v1 {"time_micros": 1772028562064127, "job": 142, "event": "compaction_finished", "compaction_time_micros": 69512, "compaction_time_cpu_micros": 38900, "output_level": 6, "num_output_files": 1, "total_output_size": 11098701, "num_input_records": 10826, "num_output_records": 10312, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 09:09:22 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000226.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 09:09:22 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028562064919, "job": 142, "event": "table_file_deletion", "file_number": 226}
Feb 25 09:09:22 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000224.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 09:09:22 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028562066356, "job": 142, "event": "table_file_deletion", "file_number": 224}
Feb 25 09:09:22 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:21.992458) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:09:22 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:22.066474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:09:22 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:22.066482) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:09:22 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:22.066483) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:09:22 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:22.066485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:09:22 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:09:22.066487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
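
Jobs 141 and 142 above are the monitor's routine manual compaction: a ~3.4 MB memtable is flushed to L0 table #226, then compacted together with the 9.0 MB L6 table #224 into the new 10.6 MB table #227, after which both inputs and the old WAL segment are deleted. The amplification and throughput figures in the summary line follow directly from the logged byte counts:

    # Re-derive job 142's numbers from the EVENT_LOG_v1 entries above.
    l0_in = 3_451_806        # table #226 (the L0 flush output)
    total_in = 12_862_189    # "input_data_size" (so L6 input = 9,410,383 B)
    out = 11_098_701         # table #227, "total_output_size"
    t = 69_512e-6            # "compaction_time_micros" in seconds

    print(f"write-amplify {out / l0_in:.1f}, "
          f"read-write-amplify {(total_in + out) / l0_in:.1f}")
    print(f"{total_in / t / 1e6:.1f} MB/s read, {out / t / 1e6:.1f} MB/s written")
    # write-amplify 3.2, read-write-amplify 6.9
    # 185.0 MB/s read, 159.7 MB/s written
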
Feb 25 09:09:22 np0005629333 nova_compute[244014]: 2026-02-25 14:09:22.169 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:09:22 np0005629333 nova_compute[244014]: 2026-02-25 14:09:22.637 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:09:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4484: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:24 np0005629333 nova_compute[244014]: 2026-02-25 14:09:24.885 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:09:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4485: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:09:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4486: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:27 np0005629333 nova_compute[244014]: 2026-02-25 14:09:27.173 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:09:27 np0005629333 nova_compute[244014]: 2026-02-25 14:09:27.639 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:09:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4487: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:09:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4488: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:09:31
Feb 25 09:09:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 09:09:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 09:09:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'backups', '.mgr', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'volumes', 'images', 'default.rgw.log', 'default.rgw.control']
Feb 25 09:09:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
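
Each balancer pass builds an optimize plan, runs in upmap mode with a 5% misplaced ceiling, and iterates the listed pools; "prepared 0/10 upmap changes" means the PG map is already even, consistent with all 305 PGs being active+clean. The same summary is available interactively (stock mgr command, admin keyring assumed):

    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        check=True, capture_output=True, text=True).stdout)
    print("mode:", status["mode"], "active:", status["active"])
    print("last result:", status.get("optimize_result"))
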
Feb 25 09:09:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:09:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:09:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:09:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:09:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:09:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:09:32 np0005629333 nova_compute[244014]: 2026-02-25 14:09:32.191 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:09:32 np0005629333 nova_compute[244014]: 2026-02-25 14:09:32.639 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:09:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 09:09:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:09:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 09:09:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:09:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:09:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:09:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:09:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:09:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 09:09:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
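
The rbd_support module's TrashPurgeScheduleHandler and MirrorSnapshotScheduleHandler periodically rebuild their per-pool schedule lists for the rbd pools (vms, volumes, backups, images); an empty start_after just means the paged listing begins from the start. The schedules themselves can be listed with the rbd CLI; treat the recursive flag below as my recollection of the tooling rather than something the log confirms:

    import subprocess

    # -R/--recursive lists schedules at every level (cluster, pool, image).
    for args in (["rbd", "trash", "purge", "schedule", "ls", "-R"],
                 ["rbd", "mirror", "snapshot", "schedule", "ls", "-R"]):
        print("$", " ".join(args))
        subprocess.run(args, check=False)  # empty output = nothing scheduled
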
Feb 25 09:09:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4489: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4490: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:09:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4491: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:37 np0005629333 nova_compute[244014]: 2026-02-25 14:09:37.193 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:09:37 np0005629333 nova_compute[244014]: 2026-02-25 14:09:37.642 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:09:37 np0005629333 nova_compute[244014]: 2026-02-25 14:09:37.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:09:38 np0005629333 nova_compute[244014]: 2026-02-25 14:09:38.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:09:38 np0005629333 nova_compute[244014]: 2026-02-25 14:09:38.901 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 09:09:38 np0005629333 nova_compute[244014]: 2026-02-25 14:09:38.902 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 09:09:38 np0005629333 nova_compute[244014]: 2026-02-25 14:09:38.903 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 09:09:38 np0005629333 nova_compute[244014]: 2026-02-25 14:09:38.903 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 09:09:38 np0005629333 nova_compute[244014]: 2026-02-25 14:09:38.903 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 09:09:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4492: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:09:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/200555220' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:09:39 np0005629333 nova_compute[244014]: 2026-02-25 14:09:39.481 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 09:09:39 np0005629333 nova_compute[244014]: 2026-02-25 14:09:39.675 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 09:09:39 np0005629333 nova_compute[244014]: 2026-02-25 14:09:39.677 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3449MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 09:09:39 np0005629333 nova_compute[244014]: 2026-02-25 14:09:39.677 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 09:09:39 np0005629333 nova_compute[244014]: 2026-02-25 14:09:39.678 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 09:09:39 np0005629333 nova_compute[244014]: 2026-02-25 14:09:39.744 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 09:09:39 np0005629333 nova_compute[244014]: 2026-02-25 14:09:39.745 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 09:09:39 np0005629333 nova_compute[244014]: 2026-02-25 14:09:39.878 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 09:09:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:09:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2503161338' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:09:40 np0005629333 nova_compute[244014]: 2026-02-25 14:09:40.411 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
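The back-to-back `ceph df` round trips above (0.578 s, then 0.533 s) are how the libvirt driver's RBD image backend samples pool capacity for the resource view. A minimal sketch of the same poll, assuming the `openstack` keyring and `/etc/ceph/ceph.conf` visible in the log; the function name and the `vms` default are illustrative, not nova's:

    import json
    import subprocess

    def rbd_pool_stats(pool="vms", conf="/etc/ceph/ceph.conf", user="openstack"):
        # Same command processutils logs above; --format=json makes the
        # output machine-readable.
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf]
        )
        report = json.loads(out)
        # "pools" lists per-pool usage; "stats" carries cluster-wide totals.
        return next(p["stats"] for p in report["pools"] if p["name"] == pool)

Each call costs roughly half a second here, which is most of why the tracker's update pass below holds its lock for ~0.76 s.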
Feb 25 09:09:40 np0005629333 nova_compute[244014]: 2026-02-25 14:09:40.418 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 09:09:40 np0005629333 nova_compute[244014]: 2026-02-25 14:09:40.434 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 09:09:40 np0005629333 nova_compute[244014]: 2026-02-25 14:09:40.437 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 09:09:40 np0005629333 nova_compute[244014]: 2026-02-25 14:09:40.437 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
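The inventory dict logged at 14:09:40.434 is what placement schedules against, and the usable figure for each resource class is (total - reserved) × allocation_ratio. Worked through with the numbers from the log:

    # Inventory copied from the scheduler report client line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0 / MEMORY_MB 7167.0 / DISK_GB 52.2

So this otherwise idle host can overcommit to 32 vCPUs, while disk is deliberately undercommitted (ratio 0.9) to keep headroom on the 59 GB store.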
Feb 25 09:09:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:09:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4493: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:42 np0005629333 nova_compute[244014]: 2026-02-25 14:09:42.197 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:09:42 np0005629333 nova_compute[244014]: 2026-02-25 14:09:42.643 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:09:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4494: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:43 np0005629333 nova_compute[244014]: 2026-02-25 14:09:43.438 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:09:43 np0005629333 nova_compute[244014]: 2026-02-25 14:09:43.438 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 09:09:43 np0005629333 nova_compute[244014]: 2026-02-25 14:09:43.438 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 09:09:43 np0005629333 nova_compute[244014]: 2026-02-25 14:09:43.479 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 09:09:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 09:09:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:09:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 09:09:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:09:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 09:09:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:09:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:09:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:09:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:09:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:09:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 09:09:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:09:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 09:09:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:09:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:09:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:09:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 09:09:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:09:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 09:09:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:09:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:09:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:09:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
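Every pg_autoscaler line above is one evaluation of the same rule: the pool's share of raw capacity, times its bias, times the cluster's PG budget, then quantized to a power of two. The logged targets are consistent with a budget of 300 PGs (e.g. 3 OSDs × the default mon_target_pg_per_osd of 100); that budget is inferred from the numbers, not read from this cluster's config:

    # Reproduces the "pg target" values in the autoscaler lines.
    PG_BUDGET = 300  # assumed: osd_count * mon_target_pg_per_osd = 3 * 100

    def pg_target(usage_ratio, bias):
        return usage_ratio * bias * PG_BUDGET

    # 'images':             0.0006714637386478266 * 1.0 * 300 ≈ 0.2014  -> 32
    # 'cephfs.cephfs.meta': 1.3916366864300228e-06 * 4.0 * 300 ≈ 0.00167 -> 16

Because the quantized target only replaces the current pg_num when it is off by a large factor (3× by default), every pool here stays where it is.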
Feb 25 09:09:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4495: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:09:46 np0005629333 nova_compute[244014]: 2026-02-25 14:09:46.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:09:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4496: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:47 np0005629333 nova_compute[244014]: 2026-02-25 14:09:47.201 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:09:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 09:09:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3720512164' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 09:09:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 09:09:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3720512164' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 09:09:47 np0005629333 nova_compute[244014]: 2026-02-25 14:09:47.645 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:09:48 np0005629333 podman[433327]: 2026-02-25 14:09:48.74286099 +0000 UTC m=+0.077057538 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Feb 25 09:09:48 np0005629333 podman[433328]: 2026-02-25 14:09:48.781555159 +0000 UTC m=+0.113811782 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller)
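The two health_status=healthy events come from podman's healthcheck timers running the configured test (`/openstack/healthcheck`, mounted read-only into each container per the config_data above); `podman healthcheck run ovn_controller` triggers the same check by hand. A sketch for polling the stored result, hedged because the key under State is "Health" on recent podman releases and "Healthcheck" on older ones:

    import json
    import subprocess

    def container_health(name):
        # The status returned here is the same string journald records
        # as health_status= in the events above.
        out = subprocess.check_output(["podman", "inspect", name])
        state = json.loads(out)[0]["State"]
        return (state.get("Health") or state.get("Healthcheck") or {}).get("Status")

    # container_health("ovn_metadata_agent") -> "healthy"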
Feb 25 09:09:48 np0005629333 nova_compute[244014]: 2026-02-25 14:09:48.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:09:48 np0005629333 nova_compute[244014]: 2026-02-25 14:09:48.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Feb 25 09:09:48 np0005629333 nova_compute[244014]: 2026-02-25 14:09:48.899 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Feb 25 09:09:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4497: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:09:50 np0005629333 nova_compute[244014]: 2026-02-25 14:09:50.894 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:09:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4498: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:52 np0005629333 nova_compute[244014]: 2026-02-25 14:09:52.206 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:09:52 np0005629333 nova_compute[244014]: 2026-02-25 14:09:52.648 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:09:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4499: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:54 np0005629333 nova_compute[244014]: 2026-02-25 14:09:54.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:09:54 np0005629333 nova_compute[244014]: 2026-02-25 14:09:54.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 09:09:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:09:55.114 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 09:09:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:09:55.115 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 09:09:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:09:55.115 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
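The Acquiring/acquired/released triplet around "_check_child_processes" is the trace oslo.concurrency emits for a named lock; neutron's ProcessMonitor serializes its child-process sweep this way so only one worker checks at a time, and the 0.000 s hold shows the sweep itself is trivial here. A minimal sketch of the pattern:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Entering and leaving this function produces exactly the three DEBUG
        # lines above: Acquiring, acquired (waited Ns), released (held Ns).
        pass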
Feb 25 09:09:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4500: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:09:55 np0005629333 nova_compute[244014]: 2026-02-25 14:09:55.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:09:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4501: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:09:57 np0005629333 nova_compute[244014]: 2026-02-25 14:09:57.210 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:09:57 np0005629333 nova_compute[244014]: 2026-02-25 14:09:57.649 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:09:57 np0005629333 nova_compute[244014]: 2026-02-25 14:09:57.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:09:58 np0005629333 nova_compute[244014]: 2026-02-25 14:09:58.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:09:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4502: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:10:00 np0005629333 nova_compute[244014]: 2026-02-25 14:10:00.649 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:10:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4503: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:10:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:10:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:10:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:10:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:10:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:10:02 np0005629333 nova_compute[244014]: 2026-02-25 14:10:02.213 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:10:02 np0005629333 nova_compute[244014]: 2026-02-25 14:10:02.650 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:10:02 np0005629333 nova_compute[244014]: 2026-02-25 14:10:02.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:10:02 np0005629333 nova_compute[244014]: 2026-02-25 14:10:02.877 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 09:10:02 np0005629333 nova_compute[244014]: 2026-02-25 14:10:02.877 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 09:10:02 np0005629333 nova_compute[244014]: 2026-02-25 14:10:02.878 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 09:10:02 np0005629333 nova_compute[244014]: 2026-02-25 14:10:02.879 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 09:10:02 np0005629333 nova_compute[244014]: 2026-02-25 14:10:02.880 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 09:10:02 np0005629333 nova_compute[244014]: 2026-02-25 14:10:02.880 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 09:10:02 np0005629333 nova_compute[244014]: 2026-02-25 14:10:02.906 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Feb 25 09:10:02 np0005629333 nova_compute[244014]: 2026-02-25 14:10:02.924 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Feb 25 09:10:02 np0005629333 nova_compute[244014]: 2026-02-25 14:10:02.924 244018 WARNING nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Unknown base file: /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6#033[00m
Feb 25 09:10:02 np0005629333 nova_compute[244014]: 2026-02-25 14:10:02.925 244018 WARNING nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Unknown base file: /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538#033[00m
Feb 25 09:10:02 np0005629333 nova_compute[244014]: 2026-02-25 14:10:02.925 244018 INFO nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Removable base files: /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6 /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538#033[00m
Feb 25 09:10:02 np0005629333 nova_compute[244014]: 2026-02-25 14:10:02.925 244018 INFO nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/a63dc6dbb387022d47a8ca49bddcc4af2508a4d6#033[00m
Feb 25 09:10:02 np0005629333 nova_compute[244014]: 2026-02-25 14:10:02.925 244018 INFO nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/d54266c9ce37b98d8a911b5ac30e52735f3ff538#033[00m
Feb 25 09:10:02 np0005629333 nova_compute[244014]: 2026-02-25 14:10:02.926 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Feb 25 09:10:02 np0005629333 nova_compute[244014]: 2026-02-25 14:10:02.926 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Feb 25 09:10:02 np0005629333 nova_compute[244014]: 2026-02-25 14:10:02.926 244018 DEBUG nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Feb 25 09:10:02 np0005629333 nova_compute[244014]: 2026-02-25 14:10:02.926 244018 INFO nova.virt.libvirt.imagecache [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
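"Unknown base file" flags cache entries that no instance on this host references; they are listed as removable, but the "too young to remove" lines show the age gate holding them back. The gate is a simple mtime check against a configured minimum age (nova's remove_unused_original_minimum_age_seconds, 86400 s by default), sketched here:

    import os
    import time

    MIN_AGE = 86400  # assumed default for remove_unused_original_minimum_age_seconds

    def old_enough_to_remove(path, now=None):
        # Unreferenced base files younger than MIN_AGE are kept, matching
        # the "Base, swap or ephemeral file too young to remove" lines.
        now = time.time() if now is None else now
        return now - os.path.getmtime(path) > MIN_AGE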
Feb 25 09:10:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4504: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4505: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:10:05 np0005629333 nova_compute[244014]: 2026-02-25 14:10:05.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:10:05 np0005629333 nova_compute[244014]: 2026-02-25 14:10:05.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Feb 25 09:10:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4506: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:07 np0005629333 nova_compute[244014]: 2026-02-25 14:10:07.217 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:10:07 np0005629333 nova_compute[244014]: 2026-02-25 14:10:07.652 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:10:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4507: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:10:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4508: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:12 np0005629333 nova_compute[244014]: 2026-02-25 14:10:12.222 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:10:12 np0005629333 nova_compute[244014]: 2026-02-25 14:10:12.654 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:10:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4509: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4510: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 09:10:15 np0005629333 ceph-osd[86953]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.1 total, 600.0 interval#012Cumulative writes: 48K writes, 186K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.02 MB/s#012Cumulative WAL: 48K writes, 17K syncs, 2.72 writes per sync, written: 0.19 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s#012Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
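The `#012` runs inside the rocksdb stats dump above, and the `#033[00m` suffix on the oslo-based service lines, are octal control-character escapes added by the syslog pipeline (rsyslog-style #NNN), not corruption in the services' output: 012 is the newline that folds a multi-line stats block onto one record, and 033 is the ESC starting an ANSI colour-reset. A one-liner to restore them when post-processing this log:

    def unescape_syslog(line):
        # "#012" -> "\n" (unfolds the rocksdb stats dump),
        # "#033" -> ESC (restores the colour-reset suffix "#033[00m").
        return line.replace("#012", "\n").replace("#033", "\x1b")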
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #228. Immutable memtables: 0.
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.529219) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 143] Flushing memtable with next log file: 228
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028615529264, "job": 143, "event": "flush_started", "num_memtables": 1, "num_entries": 651, "num_deletes": 256, "total_data_size": 803272, "memory_usage": 815960, "flush_reason": "Manual Compaction"}
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 143] Level-0 flush table #229: started
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028615534858, "cf_name": "default", "job": 143, "event": "table_file_creation", "file_number": 229, "file_size": 796407, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 93240, "largest_seqno": 93890, "table_properties": {"data_size": 792914, "index_size": 1400, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7550, "raw_average_key_size": 18, "raw_value_size": 786033, "raw_average_value_size": 1936, "num_data_blocks": 62, "num_entries": 406, "num_filter_entries": 406, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772028561, "oldest_key_time": 1772028561, "file_creation_time": 1772028615, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 229, "seqno_to_time_mapping": "N/A"}}
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 143] Flush lasted 5674 microseconds, and 2372 cpu microseconds.
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.534890) [db/flush_job.cc:967] [default] [JOB 143] Level-0 flush table #229: 796407 bytes OK
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.534913) [db/memtable_list.cc:519] [default] Level-0 commit table #229 started
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.536464) [db/memtable_list.cc:722] [default] Level-0 commit table #229: memtable #1 done
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.536478) EVENT_LOG_v1 {"time_micros": 1772028615536473, "job": 143, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.536500) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 143] Try to delete WAL files size 799803, prev total WAL file size 799803, number of live WAL files 2.
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000225.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.536919) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034323638' seq:72057594037927935, type:22 .. '6C6F676D0034353230' seq:0, type:0; will stop at (end)
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 144] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 143 Base level 0, inputs: [229(777KB)], [227(10MB)]
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028615537011, "job": 144, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [229], "files_L6": [227], "score": -1, "input_data_size": 11895108, "oldest_snapshot_seqno": -1}
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 144] Generated table #230: 10198 keys, 11804138 bytes, temperature: kUnknown
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028615597294, "cf_name": "default", "job": 144, "event": "table_file_creation", "file_number": 230, "file_size": 11804138, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11740880, "index_size": 36649, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25541, "raw_key_size": 273005, "raw_average_key_size": 26, "raw_value_size": 11562880, "raw_average_value_size": 1133, "num_data_blocks": 1388, "num_entries": 10198, "num_filter_entries": 10198, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772028615, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 230, "seqno_to_time_mapping": "N/A"}}
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.597623) [db/compaction/compaction_job.cc:1663] [default] [JOB 144] Compacted 1@0 + 1@6 files to L6 => 11804138 bytes
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.599704) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 197.0 rd, 195.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 10.6 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(29.8) write-amplify(14.8) OK, records in: 10718, records dropped: 520 output_compression: NoCompression
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.599725) EVENT_LOG_v1 {"time_micros": 1772028615599714, "job": 144, "event": "compaction_finished", "compaction_time_micros": 60376, "compaction_time_cpu_micros": 37327, "output_level": 6, "num_output_files": 1, "total_output_size": 11804138, "num_input_records": 10718, "num_output_records": 10198, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000229.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028615599958, "job": 144, "event": "table_file_deletion", "file_number": 229}
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000227.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028615601528, "job": 144, "event": "table_file_deletion", "file_number": 227}
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.536820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.601569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.601575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.601577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.601579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:10:15 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:10:15.601580) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
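The JOB 144 summary's amplification figures follow directly from the byte counts in the event log: write-amplify is bytes written out over fresh L0 bytes in, and read-write-amplify adds everything read. Worked through:

    # Numbers from JOB 144 above.
    l0_in = 796_407                  # table 229, the fresh flush
    l6_in = 11_895_108 - l0_in       # table 227, via input_data_size
    out   = 11_804_138               # table 230

    write_amp      = out / l0_in                    # ≈ 14.8
    read_write_amp = (l0_in + l6_in + out) / l0_in  # ≈ 29.8
    read_mb_per_s  = 11_895_108 / 60_376e-6 / 1e6   # ≈ 197 MB/s over 60.4 ms

The 520 dropped records (10718 in, 10198 out) are the tombstones and overwritten keys eliminated by merging the fresh flush into the single L6 file.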
Feb 25 09:10:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4511: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:17 np0005629333 nova_compute[244014]: 2026-02-25 14:10:17.224 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:10:17 np0005629333 nova_compute[244014]: 2026-02-25 14:10:17.656 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:10:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4512: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 09:10:19 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.2 total, 600.0 interval#012Cumulative writes: 49K writes, 196K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.02 MB/s#012Cumulative WAL: 49K writes, 18K syncs, 2.73 writes per sync, written: 0.19 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s#012Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 09:10:19 np0005629333 podman[433374]: 2026-02-25 14:10:19.727445158 +0000 UTC m=+0.071104660 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:10:19 np0005629333 podman[433373]: 2026-02-25 14:10:19.734476117 +0000 UTC m=+0.075907546 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 09:10:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:10:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4513: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 09:10:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 09:10:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 09:10:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 09:10:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 09:10:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:10:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 09:10:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 09:10:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 09:10:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 09:10:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 09:10:21 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
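This burst of mon_commands from mgr.compute-0.jzfame is one pass of the cephadm module's serve loop: regenerate a minimal conf and fetch keyrings for daemon deployment, persist its OSD-removal queue under a config-key, and query for destroyed OSDs awaiting replacement. The same "osd tree destroyed" query is available from any client; a sketch:

    import json
    import subprocess

    def destroyed_osds():
        # Mirrors the {"prefix": "osd tree", "states": ["destroyed"]} command
        # dispatched in the audit lines above.
        out = subprocess.check_output(
            ["ceph", "osd", "tree", "destroyed", "--format", "json"]
        )
        tree = json.loads(out)
        return [n["id"] for n in tree.get("nodes", []) if n.get("type") == "osd"]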
Feb 25 09:10:22 np0005629333 podman[433561]: 2026-02-25 14:10:22.040478833 +0000 UTC m=+0.038062941 container create b70b2abbff8fe2821c180380fab28614cab87ba4b88edd5e4071a802e209dfd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_ardinghelli, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 09:10:22 np0005629333 systemd[1]: Started libpod-conmon-b70b2abbff8fe2821c180380fab28614cab87ba4b88edd5e4071a802e209dfd0.scope.
Feb 25 09:10:22 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:10:22 np0005629333 podman[433561]: 2026-02-25 14:10:22.106536229 +0000 UTC m=+0.104120357 container init b70b2abbff8fe2821c180380fab28614cab87ba4b88edd5e4071a802e209dfd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_ardinghelli, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 09:10:22 np0005629333 podman[433561]: 2026-02-25 14:10:22.111552741 +0000 UTC m=+0.109136849 container start b70b2abbff8fe2821c180380fab28614cab87ba4b88edd5e4071a802e209dfd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_ardinghelli, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 09:10:22 np0005629333 podman[433561]: 2026-02-25 14:10:22.114550376 +0000 UTC m=+0.112134504 container attach b70b2abbff8fe2821c180380fab28614cab87ba4b88edd5e4071a802e209dfd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_ardinghelli, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:10:22 np0005629333 charming_ardinghelli[433577]: 167 167
Feb 25 09:10:22 np0005629333 podman[433561]: 2026-02-25 14:10:22.116554923 +0000 UTC m=+0.114139031 container died b70b2abbff8fe2821c180380fab28614cab87ba4b88edd5e4071a802e209dfd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_ardinghelli, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Feb 25 09:10:22 np0005629333 systemd[1]: libpod-b70b2abbff8fe2821c180380fab28614cab87ba4b88edd5e4071a802e209dfd0.scope: Deactivated successfully.
Feb 25 09:10:22 np0005629333 podman[433561]: 2026-02-25 14:10:22.020182337 +0000 UTC m=+0.017766465 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:10:22 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ad2624f4a816977d96f56b6135bba39aedaf4ef83906dd907d4c6b34e88088d0-merged.mount: Deactivated successfully.
Feb 25 09:10:22 np0005629333 podman[433561]: 2026-02-25 14:10:22.184314587 +0000 UTC m=+0.181898705 container remove b70b2abbff8fe2821c180380fab28614cab87ba4b88edd5e4071a802e209dfd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:10:22 np0005629333 systemd[1]: libpod-conmon-b70b2abbff8fe2821c180380fab28614cab87ba4b88edd5e4071a802e209dfd0.scope: Deactivated successfully.
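Note: the create/init/start/attach/died/remove trail above is the footprint of a short-lived cephadm helper container: it printed "167 167" (the ceph user's UID and GID inside the image) and exited within milliseconds, after which podman removed it. A minimal sketch for watching this lifecycle from the host, assuming a local podman binary and using only documented `podman events` options (event JSON field names vary slightly across podman releases, hence the defensive .get calls):

    # Stream podman lifecycle events like the sequence above.
    # Assumes podman is on PATH; the image filter value is an example.
    import json
    import subprocess

    proc = subprocess.Popen(
        ["podman", "events", "--format", "json",
         "--filter", "image=quay.io/ceph/ceph"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:          # one JSON object per event
        ev = json.loads(line)
        print(ev.get("Time"), ev.get("Status"), ev.get("Name"))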
Feb 25 09:10:22 np0005629333 nova_compute[244014]: 2026-02-25 14:10:22.226 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:10:22 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 09:10:22 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:10:22 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 09:10:22 np0005629333 podman[433602]: 2026-02-25 14:10:22.348168969 +0000 UTC m=+0.067746325 container create 5720f00b8456ddcfbb4b38ee24b9f1894c9fdcd13bce892e93467a8f484e9557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lewin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 09:10:22 np0005629333 systemd[1]: Started libpod-conmon-5720f00b8456ddcfbb4b38ee24b9f1894c9fdcd13bce892e93467a8f484e9557.scope.
Feb 25 09:10:22 np0005629333 podman[433602]: 2026-02-25 14:10:22.303990584 +0000 UTC m=+0.023567960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:10:22 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:10:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551898d9056caefd410efe7972aee486be7cae0668c56ca34f8077d40237860d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:10:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551898d9056caefd410efe7972aee486be7cae0668c56ca34f8077d40237860d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:10:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551898d9056caefd410efe7972aee486be7cae0668c56ca34f8077d40237860d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:10:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551898d9056caefd410efe7972aee486be7cae0668c56ca34f8077d40237860d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:10:22 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551898d9056caefd410efe7972aee486be7cae0668c56ca34f8077d40237860d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 09:10:22 np0005629333 podman[433602]: 2026-02-25 14:10:22.462471054 +0000 UTC m=+0.182048430 container init 5720f00b8456ddcfbb4b38ee24b9f1894c9fdcd13bce892e93467a8f484e9557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lewin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 09:10:22 np0005629333 podman[433602]: 2026-02-25 14:10:22.468408822 +0000 UTC m=+0.187986178 container start 5720f00b8456ddcfbb4b38ee24b9f1894c9fdcd13bce892e93467a8f484e9557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lewin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:10:22 np0005629333 podman[433602]: 2026-02-25 14:10:22.509171479 +0000 UTC m=+0.228748835 container attach 5720f00b8456ddcfbb4b38ee24b9f1894c9fdcd13bce892e93467a8f484e9557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lewin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 09:10:22 np0005629333 nova_compute[244014]: 2026-02-25 14:10:22.658 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:10:22 np0005629333 wizardly_lewin[433619]: --> passed data devices: 0 physical, 3 LVM
Feb 25 09:10:22 np0005629333 wizardly_lewin[433619]: --> All data devices are unavailable
Feb 25 09:10:22 np0005629333 systemd[1]: libpod-5720f00b8456ddcfbb4b38ee24b9f1894c9fdcd13bce892e93467a8f484e9557.scope: Deactivated successfully.
Feb 25 09:10:22 np0005629333 podman[433602]: 2026-02-25 14:10:22.902720732 +0000 UTC m=+0.622298088 container died 5720f00b8456ddcfbb4b38ee24b9f1894c9fdcd13bce892e93467a8f484e9557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lewin, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Feb 25 09:10:23 np0005629333 systemd[1]: var-lib-containers-storage-overlay-551898d9056caefd410efe7972aee486be7cae0668c56ca34f8077d40237860d-merged.mount: Deactivated successfully.
Feb 25 09:10:23 np0005629333 podman[433602]: 2026-02-25 14:10:23.126085674 +0000 UTC m=+0.845663030 container remove 5720f00b8456ddcfbb4b38ee24b9f1894c9fdcd13bce892e93467a8f484e9557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_lewin, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 09:10:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4514: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:23 np0005629333 systemd[1]: libpod-conmon-5720f00b8456ddcfbb4b38ee24b9f1894c9fdcd13bce892e93467a8f484e9557.scope: Deactivated successfully.
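Note: wizardly_lewin above is a ceph-volume pass over the candidate data devices; "0 physical, 3 LVM" followed by "All data devices are unavailable" means all three logical volumes are already consumed by existing OSDs, so there is nothing new to deploy. A hedged way to reproduce that availability check by hand (assumes the ceph-volume CLI is present; the JSON field names match current releases but may differ in yours):

    # List devices the way ceph-volume sees them and report availability.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for dev in json.loads(out):
        if dev.get("available"):
            print(dev.get("path"), "available")
        else:
            print(dev.get("path"), "unavailable:",
                  ", ".join(dev.get("rejected_reasons", [])))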
Feb 25 09:10:23 np0005629333 podman[433713]: 2026-02-25 14:10:23.61379794 +0000 UTC m=+0.087848855 container create 5384bf451d042ce82ea194b8b5d09dc633800d8dec41d0a73e4a5049b6770f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:10:23 np0005629333 podman[433713]: 2026-02-25 14:10:23.548786204 +0000 UTC m=+0.022837129 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:10:23 np0005629333 systemd[1]: Started libpod-conmon-5384bf451d042ce82ea194b8b5d09dc633800d8dec41d0a73e4a5049b6770f57.scope.
Feb 25 09:10:23 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:10:23 np0005629333 podman[433713]: 2026-02-25 14:10:23.802195568 +0000 UTC m=+0.276246503 container init 5384bf451d042ce82ea194b8b5d09dc633800d8dec41d0a73e4a5049b6770f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_golick, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:10:23 np0005629333 podman[433713]: 2026-02-25 14:10:23.80858457 +0000 UTC m=+0.282635505 container start 5384bf451d042ce82ea194b8b5d09dc633800d8dec41d0a73e4a5049b6770f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_golick, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:10:23 np0005629333 tender_golick[433726]: 167 167
Feb 25 09:10:23 np0005629333 systemd[1]: libpod-5384bf451d042ce82ea194b8b5d09dc633800d8dec41d0a73e4a5049b6770f57.scope: Deactivated successfully.
Feb 25 09:10:23 np0005629333 podman[433713]: 2026-02-25 14:10:23.853159285 +0000 UTC m=+0.327210220 container attach 5384bf451d042ce82ea194b8b5d09dc633800d8dec41d0a73e4a5049b6770f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_golick, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Feb 25 09:10:23 np0005629333 podman[433713]: 2026-02-25 14:10:23.853976468 +0000 UTC m=+0.328027373 container died 5384bf451d042ce82ea194b8b5d09dc633800d8dec41d0a73e4a5049b6770f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 09:10:24 np0005629333 systemd[1]: var-lib-containers-storage-overlay-bf540d425dfee06a72ba1b6ede0a9d9ecbfd4f37cbe0c9cf1ca5cbede055dc46-merged.mount: Deactivated successfully.
Feb 25 09:10:24 np0005629333 podman[433713]: 2026-02-25 14:10:24.194666769 +0000 UTC m=+0.668717704 container remove 5384bf451d042ce82ea194b8b5d09dc633800d8dec41d0a73e4a5049b6770f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_golick, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Feb 25 09:10:24 np0005629333 systemd[1]: libpod-conmon-5384bf451d042ce82ea194b8b5d09dc633800d8dec41d0a73e4a5049b6770f57.scope: Deactivated successfully.
Feb 25 09:10:24 np0005629333 podman[433753]: 2026-02-25 14:10:24.375966556 +0000 UTC m=+0.083981905 container create 168d53d04fd1c23bb7db286c594e24d2a1fe06a1902d5f55502a77a3cc9757f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_blackburn, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 09:10:24 np0005629333 podman[433753]: 2026-02-25 14:10:24.320309046 +0000 UTC m=+0.028324445 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:10:24 np0005629333 systemd[1]: Started libpod-conmon-168d53d04fd1c23bb7db286c594e24d2a1fe06a1902d5f55502a77a3cc9757f1.scope.
Feb 25 09:10:24 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:10:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26c579e6730a53eb2b09234de6a5b55198bc0f5fd4d2cc28c59d721eb862434f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:10:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26c579e6730a53eb2b09234de6a5b55198bc0f5fd4d2cc28c59d721eb862434f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:10:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26c579e6730a53eb2b09234de6a5b55198bc0f5fd4d2cc28c59d721eb862434f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:10:24 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26c579e6730a53eb2b09234de6a5b55198bc0f5fd4d2cc28c59d721eb862434f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:10:24 np0005629333 podman[433753]: 2026-02-25 14:10:24.487133652 +0000 UTC m=+0.195148981 container init 168d53d04fd1c23bb7db286c594e24d2a1fe06a1902d5f55502a77a3cc9757f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Feb 25 09:10:24 np0005629333 podman[433753]: 2026-02-25 14:10:24.495318784 +0000 UTC m=+0.203334103 container start 168d53d04fd1c23bb7db286c594e24d2a1fe06a1902d5f55502a77a3cc9757f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:10:24 np0005629333 podman[433753]: 2026-02-25 14:10:24.502128768 +0000 UTC m=+0.210144107 container attach 168d53d04fd1c23bb7db286c594e24d2a1fe06a1902d5f55502a77a3cc9757f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_blackburn, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]: {
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:    "0": [
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:        {
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "devices": [
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "/dev/loop3"
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            ],
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "lv_name": "ceph_lv0",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "lv_size": "21470642176",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "name": "ceph_lv0",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "tags": {
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.cluster_name": "ceph",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.crush_device_class": "",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.encrypted": "0",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.objectstore": "bluestore",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.osd_id": "0",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.type": "block",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.vdo": "0",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.with_tpm": "0"
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            },
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "type": "block",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "vg_name": "ceph_vg0"
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:        }
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:    ],
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:    "1": [
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:        {
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "devices": [
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "/dev/loop4"
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            ],
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "lv_name": "ceph_lv1",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "lv_size": "21470642176",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "name": "ceph_lv1",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "tags": {
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.cluster_name": "ceph",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.crush_device_class": "",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.encrypted": "0",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.objectstore": "bluestore",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.osd_id": "1",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.type": "block",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.vdo": "0",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.with_tpm": "0"
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            },
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "type": "block",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "vg_name": "ceph_vg1"
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:        }
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:    ],
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:    "2": [
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:        {
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "devices": [
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "/dev/loop5"
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            ],
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "lv_name": "ceph_lv2",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "lv_size": "21470642176",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "name": "ceph_lv2",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "tags": {
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.cluster_name": "ceph",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.crush_device_class": "",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.encrypted": "0",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.objectstore": "bluestore",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.osd_id": "2",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.type": "block",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.vdo": "0",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:                "ceph.with_tpm": "0"
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            },
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "type": "block",
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:            "vg_name": "ceph_vg2"
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:        }
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]:    ]
Feb 25 09:10:24 np0005629333 kind_blackburn[433770]: }
Feb 25 09:10:24 np0005629333 systemd[1]: libpod-168d53d04fd1c23bb7db286c594e24d2a1fe06a1902d5f55502a77a3cc9757f1.scope: Deactivated successfully.
Feb 25 09:10:24 np0005629333 podman[433779]: 2026-02-25 14:10:24.824724606 +0000 UTC m=+0.026181114 container died 168d53d04fd1c23bb7db286c594e24d2a1fe06a1902d5f55502a77a3cc9757f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_blackburn, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:10:24 np0005629333 systemd[1]: var-lib-containers-storage-overlay-26c579e6730a53eb2b09234de6a5b55198bc0f5fd4d2cc28c59d721eb862434f-merged.mount: Deactivated successfully.
Feb 25 09:10:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 09:10:24 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 8400.6 total, 600.0 interval
    Cumulative writes: 38K writes, 151K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
    Cumulative WAL: 38K writes, 14K syncs, 2.74 writes per sync, written: 0.15 GB, 0.02 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
    Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 09:10:25 np0005629333 podman[433779]: 2026-02-25 14:10:25.006993601 +0000 UTC m=+0.208450109 container remove 168d53d04fd1c23bb7db286c594e24d2a1fe06a1902d5f55502a77a3cc9757f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_blackburn, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:10:25 np0005629333 systemd[1]: libpod-conmon-168d53d04fd1c23bb7db286c594e24d2a1fe06a1902d5f55502a77a3cc9757f1.scope: Deactivated successfully.
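Note: the JSON that kind_blackburn printed has the shape of `ceph-volume lvm list --format json`: a map from OSD id to the logical volumes backing it, with the authoritative metadata duplicated in the LVM tags. A short sketch that turns such a dump into an osd-to-device summary (the input filename is hypothetical):

    # Parse a saved copy of the per-OSD JSON shown above.
    import json

    with open("lvm_list.json") as f:      # hypothetical dump of the output
        report = json.load(f)
    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print("osd.%s" % osd_id,
                  "block=%s" % lv["lv_path"],
                  "osd_fsid=%s" % lv["tags"]["ceph.osd_fsid"],
                  "devices=%s" % ",".join(lv["devices"]))

Against the output above this prints one line per OSD, e.g. osd.0 block=/dev/ceph_vg0/ceph_lv0 ... devices=/dev/loop3.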
Feb 25 09:10:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4515: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:10:25 np0005629333 podman[433859]: 2026-02-25 14:10:25.587765359 +0000 UTC m=+0.078310465 container create a3ecc79e3cb7cbdaad3ce954b0d2c4e95bf0f1db43cfa7f52f8430e344ecb179 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_taussig, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Feb 25 09:10:25 np0005629333 podman[433859]: 2026-02-25 14:10:25.538757727 +0000 UTC m=+0.029302823 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:10:25 np0005629333 systemd[1]: Started libpod-conmon-a3ecc79e3cb7cbdaad3ce954b0d2c4e95bf0f1db43cfa7f52f8430e344ecb179.scope.
Feb 25 09:10:25 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:10:25 np0005629333 podman[433859]: 2026-02-25 14:10:25.727505856 +0000 UTC m=+0.218050922 container init a3ecc79e3cb7cbdaad3ce954b0d2c4e95bf0f1db43cfa7f52f8430e344ecb179 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Feb 25 09:10:25 np0005629333 podman[433859]: 2026-02-25 14:10:25.735187354 +0000 UTC m=+0.225732450 container start a3ecc79e3cb7cbdaad3ce954b0d2c4e95bf0f1db43cfa7f52f8430e344ecb179 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:10:25 np0005629333 systemd[1]: libpod-a3ecc79e3cb7cbdaad3ce954b0d2c4e95bf0f1db43cfa7f52f8430e344ecb179.scope: Deactivated successfully.
Feb 25 09:10:25 np0005629333 agitated_taussig[433875]: 167 167
Feb 25 09:10:25 np0005629333 conmon[433875]: conmon a3ecc79e3cb7cbdaad3c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a3ecc79e3cb7cbdaad3ce954b0d2c4e95bf0f1db43cfa7f52f8430e344ecb179.scope/container/memory.events
Feb 25 09:10:25 np0005629333 podman[433859]: 2026-02-25 14:10:25.786328646 +0000 UTC m=+0.276873742 container attach a3ecc79e3cb7cbdaad3ce954b0d2c4e95bf0f1db43cfa7f52f8430e344ecb179 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:10:25 np0005629333 podman[433859]: 2026-02-25 14:10:25.787435007 +0000 UTC m=+0.277980073 container died a3ecc79e3cb7cbdaad3ce954b0d2c4e95bf0f1db43cfa7f52f8430e344ecb179 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_taussig, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 09:10:25 np0005629333 systemd[1]: var-lib-containers-storage-overlay-0ed6dc0e4392184d5989e72288e3b533196108b1fae13107c4e1203ff0f3d0a6-merged.mount: Deactivated successfully.
Feb 25 09:10:26 np0005629333 podman[433859]: 2026-02-25 14:10:26.041027996 +0000 UTC m=+0.531573062 container remove a3ecc79e3cb7cbdaad3ce954b0d2c4e95bf0f1db43cfa7f52f8430e344ecb179 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_taussig, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 25 09:10:26 np0005629333 systemd[1]: libpod-conmon-a3ecc79e3cb7cbdaad3ce954b0d2c4e95bf0f1db43cfa7f52f8430e344ecb179.scope: Deactivated successfully.
Feb 25 09:10:26 np0005629333 podman[433900]: 2026-02-25 14:10:26.247532119 +0000 UTC m=+0.069070162 container create b41c2d5b072518e5129d629fdbec2017601c26e8f8c723be523b8e8f0596c31f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_swanson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 09:10:26 np0005629333 podman[433900]: 2026-02-25 14:10:26.215288494 +0000 UTC m=+0.036826557 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:10:26 np0005629333 systemd[1]: Started libpod-conmon-b41c2d5b072518e5129d629fdbec2017601c26e8f8c723be523b8e8f0596c31f.scope.
Feb 25 09:10:26 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:10:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69dee155e03a8018379689d98c5b50f1e18068ecf5cbc4e18b6fe8836cddc346/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:10:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69dee155e03a8018379689d98c5b50f1e18068ecf5cbc4e18b6fe8836cddc346/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:10:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69dee155e03a8018379689d98c5b50f1e18068ecf5cbc4e18b6fe8836cddc346/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:10:26 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69dee155e03a8018379689d98c5b50f1e18068ecf5cbc4e18b6fe8836cddc346/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:10:26 np0005629333 podman[433900]: 2026-02-25 14:10:26.383277123 +0000 UTC m=+0.204815186 container init b41c2d5b072518e5129d629fdbec2017601c26e8f8c723be523b8e8f0596c31f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_swanson, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:10:26 np0005629333 podman[433900]: 2026-02-25 14:10:26.391553998 +0000 UTC m=+0.213092041 container start b41c2d5b072518e5129d629fdbec2017601c26e8f8c723be523b8e8f0596c31f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 09:10:26 np0005629333 podman[433900]: 2026-02-25 14:10:26.408482718 +0000 UTC m=+0.230020781 container attach b41c2d5b072518e5129d629fdbec2017601c26e8f8c723be523b8e8f0596c31f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Feb 25 09:10:27 np0005629333 lvm[433996]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 09:10:27 np0005629333 lvm[433996]: VG ceph_vg1 finished
Feb 25 09:10:27 np0005629333 lvm[433994]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 09:10:27 np0005629333 lvm[433997]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 09:10:27 np0005629333 lvm[433997]: VG ceph_vg2 finished
Feb 25 09:10:27 np0005629333 lvm[433994]: VG ceph_vg0 finished
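Note: the three lvm messages above are udev-driven event activation: as each loop device's PV comes online, its volume group becomes complete and lvm finishes activating it. To confirm the resulting state, a sketch using the JSON report that lvm2's `lvs` can emit (the report layout reflects current lvm2; treat it as an assumption on older releases):

    # Show the ceph_vg*/ceph_lv* volumes and a slice of their ceph tags.
    import json
    import subprocess

    out = subprocess.run(
        ["lvs", "--reportformat", "json", "-o", "lv_name,vg_name,lv_tags"],
        check=True, capture_output=True, text=True,
    ).stdout
    for lv in json.loads(out)["report"][0]["lv"]:
        if lv["vg_name"].startswith("ceph_vg"):
            print("%s/%s" % (lv["vg_name"], lv["lv_name"]),
                  lv["lv_tags"][:60])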
Feb 25 09:10:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4516: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:27 np0005629333 xenodochial_swanson[433916]: {}
Feb 25 09:10:27 np0005629333 ceph-mgr[76641]: [devicehealth INFO root] Check health
Feb 25 09:10:27 np0005629333 nova_compute[244014]: 2026-02-25 14:10:27.231 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:10:27 np0005629333 systemd[1]: libpod-b41c2d5b072518e5129d629fdbec2017601c26e8f8c723be523b8e8f0596c31f.scope: Deactivated successfully.
Feb 25 09:10:27 np0005629333 systemd[1]: libpod-b41c2d5b072518e5129d629fdbec2017601c26e8f8c723be523b8e8f0596c31f.scope: Consumed 1.255s CPU time.
Feb 25 09:10:27 np0005629333 podman[433900]: 2026-02-25 14:10:27.265418576 +0000 UTC m=+1.086956619 container died b41c2d5b072518e5129d629fdbec2017601c26e8f8c723be523b8e8f0596c31f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 09:10:27 np0005629333 systemd[1]: var-lib-containers-storage-overlay-69dee155e03a8018379689d98c5b50f1e18068ecf5cbc4e18b6fe8836cddc346-merged.mount: Deactivated successfully.
Feb 25 09:10:27 np0005629333 podman[433900]: 2026-02-25 14:10:27.456221203 +0000 UTC m=+1.277759276 container remove b41c2d5b072518e5129d629fdbec2017601c26e8f8c723be523b8e8f0596c31f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:10:27 np0005629333 systemd[1]: libpod-conmon-b41c2d5b072518e5129d629fdbec2017601c26e8f8c723be523b8e8f0596c31f.scope: Deactivated successfully.
Feb 25 09:10:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 09:10:27 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:10:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 09:10:27 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:10:27 np0005629333 nova_compute[244014]: 2026-02-25 14:10:27.660 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:10:28 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:10:28 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:10:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4517: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:10:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4518: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:10:31
Feb 25 09:10:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 09:10:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 09:10:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'images', 'backups']
Feb 25 09:10:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
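Note: the balancer pass above ran in upmap mode with a 5% misplaced ceiling, evaluated all eleven pools, and prepared 0 of its budget of 10 upmap changes, i.e. the PG distribution was already optimal. The same module state can be queried from the CLI; a sketch, assuming an admin keyring is reachable:

    # Query the mgr balancer module that produced the log lines above.
    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "-f", "json"],
        check=True, capture_output=True, text=True,
    ).stdout)
    print("active:", status.get("active"), "mode:", status.get("mode"))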
Feb 25 09:10:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:10:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:10:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:10:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:10:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:10:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:10:32 np0005629333 nova_compute[244014]: 2026-02-25 14:10:32.237 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:10:32 np0005629333 nova_compute[244014]: 2026-02-25 14:10:32.662 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:10:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 09:10:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:10:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 09:10:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:10:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:10:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:10:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:10:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:10:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 09:10:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 09:10:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4519: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4520: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:10:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4521: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:37 np0005629333 nova_compute[244014]: 2026-02-25 14:10:37.240 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:10:37 np0005629333 nova_compute[244014]: 2026-02-25 14:10:37.664 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:10:37 np0005629333 nova_compute[244014]: 2026-02-25 14:10:37.760 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:10:38 np0005629333 nova_compute[244014]: 2026-02-25 14:10:38.878 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:10:38 np0005629333 nova_compute[244014]: 2026-02-25 14:10:38.879 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:10:38 np0005629333 nova_compute[244014]: 2026-02-25 14:10:38.985 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 09:10:38 np0005629333 nova_compute[244014]: 2026-02-25 14:10:38.985 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 09:10:38 np0005629333 nova_compute[244014]: 2026-02-25 14:10:38.986 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 09:10:38 np0005629333 nova_compute[244014]: 2026-02-25 14:10:38.986 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 09:10:38 np0005629333 nova_compute[244014]: 2026-02-25 14:10:38.986 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 09:10:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4522: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:10:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2560870428' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:10:39 np0005629333 nova_compute[244014]: 2026-02-25 14:10:39.586 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 09:10:39 np0005629333 nova_compute[244014]: 2026-02-25 14:10:39.754 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 09:10:39 np0005629333 nova_compute[244014]: 2026-02-25 14:10:39.755 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3499MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 09:10:39 np0005629333 nova_compute[244014]: 2026-02-25 14:10:39.756 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 09:10:39 np0005629333 nova_compute[244014]: 2026-02-25 14:10:39.756 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 09:10:39 np0005629333 nova_compute[244014]: 2026-02-25 14:10:39.834 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 09:10:39 np0005629333 nova_compute[244014]: 2026-02-25 14:10:39.834 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 09:10:39 np0005629333 nova_compute[244014]: 2026-02-25 14:10:39.952 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 09:10:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:10:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3534849972' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:10:40 np0005629333 nova_compute[244014]: 2026-02-25 14:10:40.493 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 09:10:40 np0005629333 nova_compute[244014]: 2026-02-25 14:10:40.500 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 09:10:40 np0005629333 nova_compute[244014]: 2026-02-25 14:10:40.525 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Feb 25 09:10:40 np0005629333 nova_compute[244014]: 2026-02-25 14:10:40.527 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Feb 25 09:10:40 np0005629333 nova_compute[244014]: 2026-02-25 14:10:40.528 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 09:10:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:10:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4523: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:42 np0005629333 nova_compute[244014]: 2026-02-25 14:10:42.244 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:10:42 np0005629333 nova_compute[244014]: 2026-02-25 14:10:42.665 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:10:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4524: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:43 np0005629333 nova_compute[244014]: 2026-02-25 14:10:43.526 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:10:43 np0005629333 nova_compute[244014]: 2026-02-25 14:10:43.526 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Feb 25 09:10:43 np0005629333 nova_compute[244014]: 2026-02-25 14:10:43.527 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Feb 25 09:10:43 np0005629333 nova_compute[244014]: 2026-02-25 14:10:43.544 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Feb 25 09:10:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 09:10:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:10:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 09:10:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:10:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 09:10:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:10:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:10:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:10:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:10:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:10:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 09:10:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:10:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 09:10:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:10:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:10:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:10:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 09:10:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:10:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 09:10:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:10:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:10:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:10:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 09:10:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4525: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:10:46 np0005629333 nova_compute[244014]: 2026-02-25 14:10:46.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:10:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4526: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:47 np0005629333 nova_compute[244014]: 2026-02-25 14:10:47.248 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:10:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 09:10:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1935582163' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 09:10:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 09:10:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1935582163' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 09:10:47 np0005629333 nova_compute[244014]: 2026-02-25 14:10:47.667 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:10:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4527: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:10:50 np0005629333 podman[434083]: 2026-02-25 14:10:50.717447438 +0000 UTC m=+0.059763408 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0)
Feb 25 09:10:50 np0005629333 podman[434084]: 2026-02-25 14:10:50.749505598 +0000 UTC m=+0.091995103 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260223, container_name=ovn_controller)
Feb 25 09:10:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4528: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:51 np0005629333 nova_compute[244014]: 2026-02-25 14:10:51.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:10:52 np0005629333 nova_compute[244014]: 2026-02-25 14:10:52.251 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:10:52 np0005629333 nova_compute[244014]: 2026-02-25 14:10:52.669 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:10:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4529: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:10:55.115 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 09:10:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:10:55.116 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 09:10:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:10:55.116 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 09:10:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4530: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:10:55 np0005629333 nova_compute[244014]: 2026-02-25 14:10:55.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:10:55 np0005629333 nova_compute[244014]: 2026-02-25 14:10:55.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Feb 25 09:10:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4531: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:10:57 np0005629333 nova_compute[244014]: 2026-02-25 14:10:57.255 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:10:57 np0005629333 nova_compute[244014]: 2026-02-25 14:10:57.671 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:10:57 np0005629333 nova_compute[244014]: 2026-02-25 14:10:57.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:10:58 np0005629333 nova_compute[244014]: 2026-02-25 14:10:58.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:10:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4532: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:11:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:11:00 np0005629333 nova_compute[244014]: 2026-02-25 14:11:00.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:11:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4533: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:11:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:11:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:11:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:11:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:11:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:11:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:11:02 np0005629333 nova_compute[244014]: 2026-02-25 14:11:02.259 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:11:02 np0005629333 nova_compute[244014]: 2026-02-25 14:11:02.673 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:11:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4534: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:11:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4535: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:11:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:11:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4536: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:11:07 np0005629333 nova_compute[244014]: 2026-02-25 14:11:07.263 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:11:07 np0005629333 nova_compute[244014]: 2026-02-25 14:11:07.677 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:11:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4537: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:11:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:11:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4538: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:11:12 np0005629333 nova_compute[244014]: 2026-02-25 14:11:12.267 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:11:12 np0005629333 nova_compute[244014]: 2026-02-25 14:11:12.678 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:11:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4539: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:11:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4540: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:11:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:11:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4541: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:11:17 np0005629333 nova_compute[244014]: 2026-02-25 14:11:17.271 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:11:17 np0005629333 nova_compute[244014]: 2026-02-25 14:11:17.681 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:11:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4542: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:11:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:11:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4543: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:11:21 np0005629333 podman[434129]: 2026-02-25 14:11:21.726511987 +0000 UTC m=+0.062469714 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 09:11:21 np0005629333 podman[434130]: 2026-02-25 14:11:21.762763306 +0000 UTC m=+0.097436057 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 09:11:22 np0005629333 nova_compute[244014]: 2026-02-25 14:11:22.624 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:11:22 np0005629333 nova_compute[244014]: 2026-02-25 14:11:22.682 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:11:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4544: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Feb 25 09:11:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4545: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Feb 25 09:11:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:11:26 np0005629333 nova_compute[244014]: 2026-02-25 14:11:26.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:11:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4546: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 0 B/s wr, 11 op/s
Feb 25 09:11:27 np0005629333 nova_compute[244014]: 2026-02-25 14:11:27.628 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:11:27 np0005629333 nova_compute[244014]: 2026-02-25 14:11:27.683 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:11:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 09:11:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 09:11:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 09:11:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 09:11:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 09:11:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:11:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 09:11:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 09:11:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 09:11:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 09:11:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 09:11:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 09:11:28 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 09:11:28 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:11:28 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 09:11:28 np0005629333 podman[434311]: 2026-02-25 14:11:28.74862881 +0000 UTC m=+0.023193579 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:11:28 np0005629333 podman[434311]: 2026-02-25 14:11:28.923963688 +0000 UTC m=+0.198528437 container create 62ae551b8c0c60a8c24e2bcea15e173bfaa15f238360e15065d6f9e928dca587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Feb 25 09:11:29 np0005629333 systemd[1]: Started libpod-conmon-62ae551b8c0c60a8c24e2bcea15e173bfaa15f238360e15065d6f9e928dca587.scope.
Feb 25 09:11:29 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:11:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4547: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.9 KiB/s rd, 0 B/s wr, 13 op/s
Feb 25 09:11:29 np0005629333 podman[434311]: 2026-02-25 14:11:29.33823814 +0000 UTC m=+0.612802939 container init 62ae551b8c0c60a8c24e2bcea15e173bfaa15f238360e15065d6f9e928dca587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Feb 25 09:11:29 np0005629333 podman[434311]: 2026-02-25 14:11:29.349761237 +0000 UTC m=+0.624326006 container start 62ae551b8c0c60a8c24e2bcea15e173bfaa15f238360e15065d6f9e928dca587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_varahamihira, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 09:11:29 np0005629333 bold_varahamihira[434327]: 167 167
Feb 25 09:11:29 np0005629333 systemd[1]: libpod-62ae551b8c0c60a8c24e2bcea15e173bfaa15f238360e15065d6f9e928dca587.scope: Deactivated successfully.
Feb 25 09:11:29 np0005629333 podman[434311]: 2026-02-25 14:11:29.687888116 +0000 UTC m=+0.962452945 container attach 62ae551b8c0c60a8c24e2bcea15e173bfaa15f238360e15065d6f9e928dca587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 09:11:29 np0005629333 podman[434311]: 2026-02-25 14:11:29.689492262 +0000 UTC m=+0.964057041 container died 62ae551b8c0c60a8c24e2bcea15e173bfaa15f238360e15065d6f9e928dca587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Feb 25 09:11:30 np0005629333 systemd[1]: var-lib-containers-storage-overlay-23272e85800ee83bf246449e45ac3a91c460b8287478101f37aed0542473a3f6-merged.mount: Deactivated successfully.
Feb 25 09:11:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:11:30 np0005629333 podman[434311]: 2026-02-25 14:11:30.997782064 +0000 UTC m=+2.272346843 container remove 62ae551b8c0c60a8c24e2bcea15e173bfaa15f238360e15065d6f9e928dca587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:11:31 np0005629333 systemd[1]: libpod-conmon-62ae551b8c0c60a8c24e2bcea15e173bfaa15f238360e15065d6f9e928dca587.scope: Deactivated successfully.
Feb 25 09:11:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4548: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.9 KiB/s rd, 0 B/s wr, 13 op/s
Feb 25 09:11:31 np0005629333 podman[434353]: 2026-02-25 14:11:31.104820742 +0000 UTC m=+0.023151768 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:11:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:11:31
Feb 25 09:11:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 09:11:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 09:11:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'vms', '.mgr', 'images', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes']
Feb 25 09:11:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 09:11:31 np0005629333 podman[434353]: 2026-02-25 14:11:31.432856835 +0000 UTC m=+0.351187811 container create 304a4fa1a5fe43da711bad4944256e3c922ee5085a5b0ff8206b470242b6b6f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Feb 25 09:11:31 np0005629333 systemd[1]: Started libpod-conmon-304a4fa1a5fe43da711bad4944256e3c922ee5085a5b0ff8206b470242b6b6f4.scope.
Feb 25 09:11:31 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:11:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d0d4bb663d4fba8e2e402e15a915288026ba03b590fe3753e78ee310c64ebef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:11:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d0d4bb663d4fba8e2e402e15a915288026ba03b590fe3753e78ee310c64ebef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:11:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d0d4bb663d4fba8e2e402e15a915288026ba03b590fe3753e78ee310c64ebef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:11:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d0d4bb663d4fba8e2e402e15a915288026ba03b590fe3753e78ee310c64ebef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:11:31 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d0d4bb663d4fba8e2e402e15a915288026ba03b590fe3753e78ee310c64ebef/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 09:11:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:11:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:11:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:11:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:11:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:11:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:11:31 np0005629333 podman[434353]: 2026-02-25 14:11:31.809124857 +0000 UTC m=+0.727455853 container init 304a4fa1a5fe43da711bad4944256e3c922ee5085a5b0ff8206b470242b6b6f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Feb 25 09:11:31 np0005629333 podman[434353]: 2026-02-25 14:11:31.820439988 +0000 UTC m=+0.738770944 container start 304a4fa1a5fe43da711bad4944256e3c922ee5085a5b0ff8206b470242b6b6f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Feb 25 09:11:31 np0005629333 podman[434353]: 2026-02-25 14:11:31.849967837 +0000 UTC m=+0.768298803 container attach 304a4fa1a5fe43da711bad4944256e3c922ee5085a5b0ff8206b470242b6b6f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Feb 25 09:11:32 np0005629333 awesome_dirac[434370]: --> passed data devices: 0 physical, 3 LVM
Feb 25 09:11:32 np0005629333 awesome_dirac[434370]: --> All data devices are unavailable
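
The short-lived awesome_dirac container looks like a ceph-volume lvm batch report: it sees three LVM data devices and no physical ones, and marks them all unavailable because each LV already carries ceph.* tags from an earlier prepare (a later container below dumps those tags in full). A hedged sketch of that kind of availability check, assuming the host's lvs supports --reportformat json (standard in LVM2):

    import json
    import subprocess

    # List LVs with their tags; an LV that already has ceph.osd_id in its
    # tags was prepared as an OSD earlier, hence "unavailable" above.
    out = subprocess.run(
        ["lvs", "--reportformat", "json", "-o", "lv_name,vg_name,lv_tags"],
        capture_output=True, text=True, check=True,
    ).stdout
    for lv in json.loads(out)["report"][0]["lv"]:
        taken = "ceph.osd_id=" in lv["lv_tags"]
        print(f"{lv['vg_name']}/{lv['lv_name']} already prepared: {taken}")
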
Feb 25 09:11:32 np0005629333 systemd[1]: libpod-304a4fa1a5fe43da711bad4944256e3c922ee5085a5b0ff8206b470242b6b6f4.scope: Deactivated successfully.
Feb 25 09:11:32 np0005629333 podman[434353]: 2026-02-25 14:11:32.360302414 +0000 UTC m=+1.278633370 container died 304a4fa1a5fe43da711bad4944256e3c922ee5085a5b0ff8206b470242b6b6f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Feb 25 09:11:32 np0005629333 nova_compute[244014]: 2026-02-25 14:11:32.631 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:11:32 np0005629333 nova_compute[244014]: 2026-02-25 14:11:32.685 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:11:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 09:11:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:11:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 09:11:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:11:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:11:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:11:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:11:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 09:11:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:11:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 09:11:32 np0005629333 systemd[1]: var-lib-containers-storage-overlay-8d0d4bb663d4fba8e2e402e15a915288026ba03b590fe3753e78ee310c64ebef-merged.mount: Deactivated successfully.
Feb 25 09:11:33 np0005629333 podman[434353]: 2026-02-25 14:11:33.173309795 +0000 UTC m=+2.091640761 container remove 304a4fa1a5fe43da711bad4944256e3c922ee5085a5b0ff8206b470242b6b6f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_dirac, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 09:11:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4549: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 31 op/s
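
ceph-mgr prints a pgmap DBG line like the one above every couple of seconds. A small parser for exactly that line shape (regex written against the pgmap entries in this log, not against any Ceph API):

    import re

    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>\d+\.?\d* \w+) data, (?P<used>\d+\.?\d* \w+) used"
    )
    line = ("pgmap v4549: 305 pgs: 305 active+clean; 41 MiB data, "
            "1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 31 op/s")
    m = PGMAP.search(line)
    print(m.group("ver"), m.group("pgs"), m.group("states"))
    # 4549 305 305 active+clean
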
Feb 25 09:11:33 np0005629333 systemd[1]: libpod-conmon-304a4fa1a5fe43da711bad4944256e3c922ee5085a5b0ff8206b470242b6b6f4.scope: Deactivated successfully.
Feb 25 09:11:33 np0005629333 podman[434462]: 2026-02-25 14:11:33.784410905 +0000 UTC m=+0.114767320 container create e034d9f319f1e590469425b18bb0731d84cb401f5c8c9cb5de117bb5a597f42f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:11:33 np0005629333 podman[434462]: 2026-02-25 14:11:33.712987947 +0000 UTC m=+0.043344452 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:11:33 np0005629333 systemd[1]: Started libpod-conmon-e034d9f319f1e590469425b18bb0731d84cb401f5c8c9cb5de117bb5a597f42f.scope.
Feb 25 09:11:33 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:11:34 np0005629333 podman[434462]: 2026-02-25 14:11:34.137117038 +0000 UTC m=+0.467473543 container init e034d9f319f1e590469425b18bb0731d84cb401f5c8c9cb5de117bb5a597f42f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bartik, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:11:34 np0005629333 podman[434462]: 2026-02-25 14:11:34.148512021 +0000 UTC m=+0.478868446 container start e034d9f319f1e590469425b18bb0731d84cb401f5c8c9cb5de117bb5a597f42f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bartik, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Feb 25 09:11:34 np0005629333 vigorous_bartik[434480]: 167 167
Feb 25 09:11:34 np0005629333 systemd[1]: libpod-e034d9f319f1e590469425b18bb0731d84cb401f5c8c9cb5de117bb5a597f42f.scope: Deactivated successfully.
Feb 25 09:11:34 np0005629333 podman[434462]: 2026-02-25 14:11:34.406471374 +0000 UTC m=+0.736827829 container attach e034d9f319f1e590469425b18bb0731d84cb401f5c8c9cb5de117bb5a597f42f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:11:34 np0005629333 podman[434462]: 2026-02-25 14:11:34.407227376 +0000 UTC m=+0.737583801 container died e034d9f319f1e590469425b18bb0731d84cb401f5c8c9cb5de117bb5a597f42f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bartik, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 09:11:34 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6377fc07c539762fe06ee0239e855357399bb3b5b5bbcf954d7072bdd4c50412-merged.mount: Deactivated successfully.
Feb 25 09:11:35 np0005629333 podman[434462]: 2026-02-25 14:11:35.094913159 +0000 UTC m=+1.425269614 container remove e034d9f319f1e590469425b18bb0731d84cb401f5c8c9cb5de117bb5a597f42f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bartik, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 09:11:35 np0005629333 systemd[1]: libpod-conmon-e034d9f319f1e590469425b18bb0731d84cb401f5c8c9cb5de117bb5a597f42f.scope: Deactivated successfully.
Feb 25 09:11:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4550: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Feb 25 09:11:35 np0005629333 podman[434507]: 2026-02-25 14:11:35.275369592 +0000 UTC m=+0.039120221 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:11:35 np0005629333 podman[434507]: 2026-02-25 14:11:35.393211148 +0000 UTC m=+0.156961737 container create 522654a94a2242549d9033a2a0abf17178e1ba9a65ed5dab6ed62b59b26789fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 09:11:35 np0005629333 systemd[1]: Started libpod-conmon-522654a94a2242549d9033a2a0abf17178e1ba9a65ed5dab6ed62b59b26789fe.scope.
Feb 25 09:11:35 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:11:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a90d7d90670c38b1fecda2ea9cdc25f460212a39faaf6f36f0253e3dc965f33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:11:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a90d7d90670c38b1fecda2ea9cdc25f460212a39faaf6f36f0253e3dc965f33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:11:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a90d7d90670c38b1fecda2ea9cdc25f460212a39faaf6f36f0253e3dc965f33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:11:35 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a90d7d90670c38b1fecda2ea9cdc25f460212a39faaf6f36f0253e3dc965f33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:11:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:11:35 np0005629333 podman[434507]: 2026-02-25 14:11:35.939390763 +0000 UTC m=+0.703141402 container init 522654a94a2242549d9033a2a0abf17178e1ba9a65ed5dab6ed62b59b26789fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:11:35 np0005629333 podman[434507]: 2026-02-25 14:11:35.950507619 +0000 UTC m=+0.714258208 container start 522654a94a2242549d9033a2a0abf17178e1ba9a65ed5dab6ed62b59b26789fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_curran, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:11:36 np0005629333 podman[434507]: 2026-02-25 14:11:36.244647239 +0000 UTC m=+1.008397788 container attach 522654a94a2242549d9033a2a0abf17178e1ba9a65ed5dab6ed62b59b26789fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_curran, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 09:11:36 np0005629333 confident_curran[434524]: {
Feb 25 09:11:36 np0005629333 confident_curran[434524]:    "0": [
Feb 25 09:11:36 np0005629333 confident_curran[434524]:        {
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "devices": [
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "/dev/loop3"
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            ],
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "lv_name": "ceph_lv0",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "lv_size": "21470642176",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "name": "ceph_lv0",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "tags": {
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.cluster_name": "ceph",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.crush_device_class": "",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.encrypted": "0",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.objectstore": "bluestore",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.osd_id": "0",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.type": "block",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.vdo": "0",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.with_tpm": "0"
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            },
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "type": "block",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "vg_name": "ceph_vg0"
Feb 25 09:11:36 np0005629333 confident_curran[434524]:        }
Feb 25 09:11:36 np0005629333 confident_curran[434524]:    ],
Feb 25 09:11:36 np0005629333 confident_curran[434524]:    "1": [
Feb 25 09:11:36 np0005629333 confident_curran[434524]:        {
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "devices": [
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "/dev/loop4"
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            ],
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "lv_name": "ceph_lv1",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "lv_size": "21470642176",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "name": "ceph_lv1",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "tags": {
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.cluster_name": "ceph",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.crush_device_class": "",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.encrypted": "0",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.objectstore": "bluestore",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.osd_id": "1",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.type": "block",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.vdo": "0",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.with_tpm": "0"
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            },
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "type": "block",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "vg_name": "ceph_vg1"
Feb 25 09:11:36 np0005629333 confident_curran[434524]:        }
Feb 25 09:11:36 np0005629333 confident_curran[434524]:    ],
Feb 25 09:11:36 np0005629333 confident_curran[434524]:    "2": [
Feb 25 09:11:36 np0005629333 confident_curran[434524]:        {
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "devices": [
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "/dev/loop5"
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            ],
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "lv_name": "ceph_lv2",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "lv_size": "21470642176",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "name": "ceph_lv2",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "tags": {
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.cluster_name": "ceph",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.crush_device_class": "",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.encrypted": "0",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.objectstore": "bluestore",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.osd_id": "2",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.type": "block",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.vdo": "0",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:                "ceph.with_tpm": "0"
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            },
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "type": "block",
Feb 25 09:11:36 np0005629333 confident_curran[434524]:            "vg_name": "ceph_vg2"
Feb 25 09:11:36 np0005629333 confident_curran[434524]:        }
Feb 25 09:11:36 np0005629333 confident_curran[434524]:    ]
Feb 25 09:11:36 np0005629333 confident_curran[434524]: }
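
The JSON block confident_curran just printed looks like ceph-volume lvm list --format json output: a map of OSD id to the LV records backing it. A minimal consumer, with field names taken straight from the dump above (the filename is a hypothetical capture of that stdout):

    import json

    # Assume the container's stdout above was saved to this file.
    with open("ceph-volume-lvm-list.json") as f:
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items()):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"fsid={lv['tags']['ceph.osd_fsid']}")
    # osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 fsid=d19afe3c-7923-4776-bcc2-88886150b441
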
Feb 25 09:11:36 np0005629333 systemd[1]: libpod-522654a94a2242549d9033a2a0abf17178e1ba9a65ed5dab6ed62b59b26789fe.scope: Deactivated successfully.
Feb 25 09:11:36 np0005629333 conmon[434524]: conmon 522654a94a2242549d90 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-522654a94a2242549d9033a2a0abf17178e1ba9a65ed5dab6ed62b59b26789fe.scope/container/memory.events
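
The conmon warning above is a benign race: the container has already exited and systemd has torn its scope down, so the cgroup v2 memory.events file conmon wants is gone. That file is flat "key value" lines (oom, oom_kill, ...); a tolerant reader sketch, with the scope directory left as a parameter rather than hard-coding the path from the log:

    from pathlib import Path

    def read_memory_events(scope_dir: str) -> dict[str, int]:
        """Parse cgroup v2 memory.events; {} if the scope is already gone."""
        path = Path(scope_dir, "container", "memory.events")
        try:
            text = path.read_text()
        except FileNotFoundError:
            return {}  # the race conmon warns about above
        return {key: int(val)
                for key, val in (line.split() for line in text.splitlines())}
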
Feb 25 09:11:36 np0005629333 podman[434507]: 2026-02-25 14:11:36.319746891 +0000 UTC m=+1.083497460 container died 522654a94a2242549d9033a2a0abf17178e1ba9a65ed5dab6ed62b59b26789fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_curran, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Feb 25 09:11:36 np0005629333 systemd[1]: var-lib-containers-storage-overlay-9a90d7d90670c38b1fecda2ea9cdc25f460212a39faaf6f36f0253e3dc965f33-merged.mount: Deactivated successfully.
Feb 25 09:11:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4551: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Feb 25 09:11:37 np0005629333 podman[434507]: 2026-02-25 14:11:37.4056353 +0000 UTC m=+2.169385849 container remove 522654a94a2242549d9033a2a0abf17178e1ba9a65ed5dab6ed62b59b26789fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:11:37 np0005629333 systemd[1]: libpod-conmon-522654a94a2242549d9033a2a0abf17178e1ba9a65ed5dab6ed62b59b26789fe.scope: Deactivated successfully.
Feb 25 09:11:37 np0005629333 nova_compute[244014]: 2026-02-25 14:11:37.636 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:11:37 np0005629333 nova_compute[244014]: 2026-02-25 14:11:37.687 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:11:37 np0005629333 podman[434609]: 2026-02-25 14:11:37.906293154 +0000 UTC m=+0.053501330 container create b1ac0970db818c87adae8a72febf492b8e6d6b5f57b230e389fc8c1252e1f479 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_haibt, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:11:37 np0005629333 systemd[1]: Started libpod-conmon-b1ac0970db818c87adae8a72febf492b8e6d6b5f57b230e389fc8c1252e1f479.scope.
Feb 25 09:11:37 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:11:37 np0005629333 podman[434609]: 2026-02-25 14:11:37.884402093 +0000 UTC m=+0.031610279 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:11:37 np0005629333 podman[434609]: 2026-02-25 14:11:37.98820793 +0000 UTC m=+0.135416086 container init b1ac0970db818c87adae8a72febf492b8e6d6b5f57b230e389fc8c1252e1f479 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_haibt, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Feb 25 09:11:37 np0005629333 podman[434609]: 2026-02-25 14:11:37.995898738 +0000 UTC m=+0.143106914 container start b1ac0970db818c87adae8a72febf492b8e6d6b5f57b230e389fc8c1252e1f479 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:11:38 np0005629333 podman[434609]: 2026-02-25 14:11:38.000010355 +0000 UTC m=+0.147218551 container attach b1ac0970db818c87adae8a72febf492b8e6d6b5f57b230e389fc8c1252e1f479 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_haibt, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Feb 25 09:11:38 np0005629333 bold_haibt[434625]: 167 167
Feb 25 09:11:38 np0005629333 systemd[1]: libpod-b1ac0970db818c87adae8a72febf492b8e6d6b5f57b230e389fc8c1252e1f479.scope: Deactivated successfully.
Feb 25 09:11:38 np0005629333 conmon[434625]: conmon b1ac0970db818c87adae <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b1ac0970db818c87adae8a72febf492b8e6d6b5f57b230e389fc8c1252e1f479.scope/container/memory.events
Feb 25 09:11:38 np0005629333 podman[434609]: 2026-02-25 14:11:38.00299726 +0000 UTC m=+0.150205436 container died b1ac0970db818c87adae8a72febf492b8e6d6b5f57b230e389fc8c1252e1f479 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_haibt, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:11:38 np0005629333 systemd[1]: var-lib-containers-storage-overlay-65250cc31478f6d64f26c40c7ed460a8039f49926454c1013618f40cc1a981c2-merged.mount: Deactivated successfully.
Feb 25 09:11:38 np0005629333 podman[434609]: 2026-02-25 14:11:38.052611128 +0000 UTC m=+0.199819304 container remove b1ac0970db818c87adae8a72febf492b8e6d6b5f57b230e389fc8c1252e1f479 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_haibt, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Feb 25 09:11:38 np0005629333 systemd[1]: libpod-conmon-b1ac0970db818c87adae8a72febf492b8e6d6b5f57b230e389fc8c1252e1f479.scope: Deactivated successfully.
Feb 25 09:11:38 np0005629333 podman[434651]: 2026-02-25 14:11:38.252685528 +0000 UTC m=+0.060380625 container create 7e560b6adf74b085544aeb3fdaba06a7041ffc2c1b2e4475a11913796aaf201d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_vaughan, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:11:38 np0005629333 systemd[1]: Started libpod-conmon-7e560b6adf74b085544aeb3fdaba06a7041ffc2c1b2e4475a11913796aaf201d.scope.
Feb 25 09:11:38 np0005629333 podman[434651]: 2026-02-25 14:11:38.226487644 +0000 UTC m=+0.034182751 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:11:38 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:11:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/057aaa93fb765d91794b579d5513400e955c63b2d6a96b4edc03e9556761c0e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:11:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/057aaa93fb765d91794b579d5513400e955c63b2d6a96b4edc03e9556761c0e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:11:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/057aaa93fb765d91794b579d5513400e955c63b2d6a96b4edc03e9556761c0e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:11:38 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/057aaa93fb765d91794b579d5513400e955c63b2d6a96b4edc03e9556761c0e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:11:38 np0005629333 podman[434651]: 2026-02-25 14:11:38.364244695 +0000 UTC m=+0.171939772 container init 7e560b6adf74b085544aeb3fdaba06a7041ffc2c1b2e4475a11913796aaf201d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_vaughan, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Feb 25 09:11:38 np0005629333 podman[434651]: 2026-02-25 14:11:38.372961583 +0000 UTC m=+0.180656680 container start 7e560b6adf74b085544aeb3fdaba06a7041ffc2c1b2e4475a11913796aaf201d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 09:11:38 np0005629333 podman[434651]: 2026-02-25 14:11:38.376996567 +0000 UTC m=+0.184691774 container attach 7e560b6adf74b085544aeb3fdaba06a7041ffc2c1b2e4475a11913796aaf201d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_vaughan, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:11:39 np0005629333 lvm[434745]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 09:11:39 np0005629333 lvm[434746]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 09:11:39 np0005629333 lvm[434746]: VG ceph_vg1 finished
Feb 25 09:11:39 np0005629333 lvm[434745]: VG ceph_vg0 finished
Feb 25 09:11:39 np0005629333 lvm[434748]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 09:11:39 np0005629333 lvm[434748]: VG ceph_vg2 finished
Feb 25 09:11:39 np0005629333 clever_vaughan[434667]: {}
Feb 25 09:11:39 np0005629333 systemd[1]: libpod-7e560b6adf74b085544aeb3fdaba06a7041ffc2c1b2e4475a11913796aaf201d.scope: Deactivated successfully.
Feb 25 09:11:39 np0005629333 podman[434651]: 2026-02-25 14:11:39.18443656 +0000 UTC m=+0.992131657 container died 7e560b6adf74b085544aeb3fdaba06a7041ffc2c1b2e4475a11913796aaf201d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True)
Feb 25 09:11:39 np0005629333 systemd[1]: libpod-7e560b6adf74b085544aeb3fdaba06a7041ffc2c1b2e4475a11913796aaf201d.scope: Consumed 1.206s CPU time.
Feb 25 09:11:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4552: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 37 op/s
Feb 25 09:11:39 np0005629333 systemd[1]: var-lib-containers-storage-overlay-057aaa93fb765d91794b579d5513400e955c63b2d6a96b4edc03e9556761c0e9-merged.mount: Deactivated successfully.
Feb 25 09:11:39 np0005629333 podman[434651]: 2026-02-25 14:11:39.261293802 +0000 UTC m=+1.068988909 container remove 7e560b6adf74b085544aeb3fdaba06a7041ffc2c1b2e4475a11913796aaf201d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 09:11:39 np0005629333 systemd[1]: libpod-conmon-7e560b6adf74b085544aeb3fdaba06a7041ffc2c1b2e4475a11913796aaf201d.scope: Deactivated successfully.
Feb 25 09:11:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 09:11:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:11:39 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 09:11:39 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:11:39 np0005629333 nova_compute[244014]: 2026-02-25 14:11:39.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:11:39 np0005629333 nova_compute[244014]: 2026-02-25 14:11:39.913 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 09:11:39 np0005629333 nova_compute[244014]: 2026-02-25 14:11:39.913 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 09:11:39 np0005629333 nova_compute[244014]: 2026-02-25 14:11:39.914 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 09:11:39 np0005629333 nova_compute[244014]: 2026-02-25 14:11:39.914 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 09:11:39 np0005629333 nova_compute[244014]: 2026-02-25 14:11:39.915 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 09:11:40 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:11:40 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:11:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:11:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2566983337' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:11:40 np0005629333 nova_compute[244014]: 2026-02-25 14:11:40.513 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
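
nova-compute's resource tracker shells out to the exact command shown above to size its RBD-backed disk pool. A sketch of the same round trip; the JSON keys are the usual ceph df --format=json fields, assumed rather than taken from this log (the log shows only the command and its exit status):

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    df = json.loads(subprocess.check_output(cmd))
    stats = df["stats"]  # cluster-wide totals; per-pool data sits under "pools"
    print(stats["total_bytes"], stats["total_used_bytes"], stats["total_avail_bytes"])
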
Feb 25 09:11:40 np0005629333 nova_compute[244014]: 2026-02-25 14:11:40.737 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 09:11:40 np0005629333 nova_compute[244014]: 2026-02-25 14:11:40.739 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3471MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 09:11:40 np0005629333 nova_compute[244014]: 2026-02-25 14:11:40.739 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 09:11:40 np0005629333 nova_compute[244014]: 2026-02-25 14:11:40.739 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 09:11:40 np0005629333 nova_compute[244014]: 2026-02-25 14:11:40.820 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 09:11:40 np0005629333 nova_compute[244014]: 2026-02-25 14:11:40.821 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 09:11:40 np0005629333 nova_compute[244014]: 2026-02-25 14:11:40.849 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing inventories for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 25 09:11:40 np0005629333 nova_compute[244014]: 2026-02-25 14:11:40.894 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating ProviderTree inventory for provider cb4dae98-2ac3-4218-9445-2320139e12ad from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 25 09:11:40 np0005629333 nova_compute[244014]: 2026-02-25 14:11:40.895 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Updating inventory in ProviderTree for provider cb4dae98-2ac3-4218-9445-2320139e12ad with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 25 09:11:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:11:40 np0005629333 nova_compute[244014]: 2026-02-25 14:11:40.913 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing aggregate associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 25 09:11:40 np0005629333 nova_compute[244014]: 2026-02-25 14:11:40.938 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Refreshing trait associations for resource provider cb4dae98-2ac3-4218-9445-2320139e12ad, traits: HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE2,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AESNI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_VOLUME_EXTEND,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 25 09:11:40 np0005629333 nova_compute[244014]: 2026-02-25 14:11:40.958 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 09:11:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4553: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Feb 25 09:11:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:11:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3165919734' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:11:41 np0005629333 nova_compute[244014]: 2026-02-25 14:11:41.534 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 09:11:41 np0005629333 nova_compute[244014]: 2026-02-25 14:11:41.542 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 09:11:41 np0005629333 nova_compute[244014]: 2026-02-25 14:11:41.558 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 09:11:41 np0005629333 nova_compute[244014]: 2026-02-25 14:11:41.560 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 09:11:41 np0005629333 nova_compute[244014]: 2026-02-25 14:11:41.561 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 09:11:42 np0005629333 nova_compute[244014]: 2026-02-25 14:11:42.563 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:11:42 np0005629333 nova_compute[244014]: 2026-02-25 14:11:42.563 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 09:11:42 np0005629333 nova_compute[244014]: 2026-02-25 14:11:42.564 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 09:11:42 np0005629333 nova_compute[244014]: 2026-02-25 14:11:42.584 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 09:11:42 np0005629333 nova_compute[244014]: 2026-02-25 14:11:42.585 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:11:42 np0005629333 nova_compute[244014]: 2026-02-25 14:11:42.653 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:11:42 np0005629333 nova_compute[244014]: 2026-02-25 14:11:42.688 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:11:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4554: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Feb 25 09:11:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 09:11:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:11:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 09:11:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:11:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 09:11:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:11:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:11:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:11:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:11:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:11:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 09:11:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:11:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 09:11:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:11:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:11:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:11:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 09:11:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:11:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 09:11:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:11:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:11:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:11:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 09:11:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4555: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 27 op/s
Feb 25 09:11:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:11:46 np0005629333 nova_compute[244014]: 2026-02-25 14:11:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:11:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4556: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 27 op/s
Feb 25 09:11:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 09:11:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4273942040' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 09:11:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 09:11:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4273942040' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 09:11:47 np0005629333 nova_compute[244014]: 2026-02-25 14:11:47.691 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:11:47 np0005629333 nova_compute[244014]: 2026-02-25 14:11:47.694 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:11:47 np0005629333 nova_compute[244014]: 2026-02-25 14:11:47.694 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5005 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 09:11:47 np0005629333 nova_compute[244014]: 2026-02-25 14:11:47.695 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 09:11:47 np0005629333 nova_compute[244014]: 2026-02-25 14:11:47.701 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:11:47 np0005629333 nova_compute[244014]: 2026-02-25 14:11:47.702 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 09:11:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4557: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 16 op/s
Feb 25 09:11:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #231. Immutable memtables: 0.
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.048405) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 145] Flushing memtable with next log file: 231
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028711048497, "job": 145, "event": "flush_started", "num_memtables": 1, "num_entries": 1030, "num_deletes": 250, "total_data_size": 1458948, "memory_usage": 1486432, "flush_reason": "Manual Compaction"}
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 145] Level-0 flush table #232: started
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028711160310, "cf_name": "default", "job": 145, "event": "table_file_creation", "file_number": 232, "file_size": 887714, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 93891, "largest_seqno": 94920, "table_properties": {"data_size": 883748, "index_size": 1617, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10665, "raw_average_key_size": 20, "raw_value_size": 875137, "raw_average_value_size": 1712, "num_data_blocks": 71, "num_entries": 511, "num_filter_entries": 511, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772028616, "oldest_key_time": 1772028616, "file_creation_time": 1772028711, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 232, "seqno_to_time_mapping": "N/A"}}
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 145] Flush lasted 111961 microseconds, and 4111 cpu microseconds.
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 09:11:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4558: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 10 op/s
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.160391) [db/flush_job.cc:967] [default] [JOB 145] Level-0 flush table #232: 887714 bytes OK
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.160430) [db/memtable_list.cc:519] [default] Level-0 commit table #232 started
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.333928) [db/memtable_list.cc:722] [default] Level-0 commit table #232: memtable #1 done
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.333990) EVENT_LOG_v1 {"time_micros": 1772028711333978, "job": 145, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.334026) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 145] Try to delete WAL files size 1454104, prev total WAL file size 1454104, number of live WAL files 2.
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000228.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.334941) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034303033' seq:72057594037927935, type:22 .. '6D6772737461740034323534' seq:0, type:0; will stop at (end)
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 146] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 145 Base level 0, inputs: [232(866KB)], [230(11MB)]
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028711335029, "job": 146, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [232], "files_L6": [230], "score": -1, "input_data_size": 12691852, "oldest_snapshot_seqno": -1}
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 146] Generated table #233: 10238 keys, 9888267 bytes, temperature: kUnknown
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028711487753, "cf_name": "default", "job": 146, "event": "table_file_creation", "file_number": 233, "file_size": 9888267, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9828332, "index_size": 33262, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25605, "raw_key_size": 274097, "raw_average_key_size": 26, "raw_value_size": 9653141, "raw_average_value_size": 942, "num_data_blocks": 1248, "num_entries": 10238, "num_filter_entries": 10238, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772028711, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 233, "seqno_to_time_mapping": "N/A"}}
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.488271) [db/compaction/compaction_job.cc:1663] [default] [JOB 146] Compacted 1@0 + 1@6 files to L6 => 9888267 bytes
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.498420) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 83.0 rd, 64.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 11.3 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(25.4) write-amplify(11.1) OK, records in: 10709, records dropped: 471 output_compression: NoCompression
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.498454) EVENT_LOG_v1 {"time_micros": 1772028711498438, "job": 146, "event": "compaction_finished", "compaction_time_micros": 152849, "compaction_time_cpu_micros": 39740, "output_level": 6, "num_output_files": 1, "total_output_size": 9888267, "num_input_records": 10709, "num_output_records": 10238, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000232.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028711498806, "job": 146, "event": "table_file_deletion", "file_number": 232}
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000230.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028711501075, "job": 146, "event": "table_file_deletion", "file_number": 230}
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.334770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.501173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.501181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.501185) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.501188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:11:51 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:11:51.501190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:11:52 np0005629333 nova_compute[244014]: 2026-02-25 14:11:52.704 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:11:52 np0005629333 nova_compute[244014]: 2026-02-25 14:11:52.706 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:11:52 np0005629333 nova_compute[244014]: 2026-02-25 14:11:52.706 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 09:11:52 np0005629333 nova_compute[244014]: 2026-02-25 14:11:52.707 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 09:11:52 np0005629333 nova_compute[244014]: 2026-02-25 14:11:52.745 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:11:52 np0005629333 nova_compute[244014]: 2026-02-25 14:11:52.746 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 09:11:52 np0005629333 podman[434839]: 2026-02-25 14:11:52.800659859 +0000 UTC m=+0.125707179 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:11:52 np0005629333 podman[434840]: 2026-02-25 14:11:52.839235444 +0000 UTC m=+0.164052198 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 25 09:11:52 np0005629333 nova_compute[244014]: 2026-02-25 14:11:52.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:11:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4559: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 10 op/s
Feb 25 09:11:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:11:55.116 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 09:11:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:11:55.117 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 09:11:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:11:55.117 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 09:11:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4560: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:11:55 np0005629333 nova_compute[244014]: 2026-02-25 14:11:55.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:11:55 np0005629333 nova_compute[244014]: 2026-02-25 14:11:55.876 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 09:11:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:11:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4561: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:11:57 np0005629333 systemd[1]: Starting dnf makecache...
Feb 25 09:11:57 np0005629333 nova_compute[244014]: 2026-02-25 14:11:57.747 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:11:57 np0005629333 nova_compute[244014]: 2026-02-25 14:11:57.749 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:11:57 np0005629333 nova_compute[244014]: 2026-02-25 14:11:57.749 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 09:11:57 np0005629333 nova_compute[244014]: 2026-02-25 14:11:57.750 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 09:11:57 np0005629333 nova_compute[244014]: 2026-02-25 14:11:57.788 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:11:57 np0005629333 nova_compute[244014]: 2026-02-25 14:11:57.788 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 09:11:58 np0005629333 dnf[434880]: Metadata cache refreshed recently.
Feb 25 09:11:58 np0005629333 systemd[1]: dnf-makecache.service: Deactivated successfully.
Feb 25 09:11:58 np0005629333 systemd[1]: Finished dnf makecache.
Feb 25 09:11:58 np0005629333 nova_compute[244014]: 2026-02-25 14:11:58.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:11:58 np0005629333 nova_compute[244014]: 2026-02-25 14:11:58.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:11:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4562: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:12:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4563: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:12:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:12:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:12:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:12:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:12:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:12:02 np0005629333 nova_compute[244014]: 2026-02-25 14:12:02.789 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:12:02 np0005629333 nova_compute[244014]: 2026-02-25 14:12:02.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:12:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4564: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4565: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:12:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4566: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:07 np0005629333 nova_compute[244014]: 2026-02-25 14:12:07.791 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:12:07 np0005629333 nova_compute[244014]: 2026-02-25 14:12:07.793 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:12:07 np0005629333 nova_compute[244014]: 2026-02-25 14:12:07.794 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 09:12:07 np0005629333 nova_compute[244014]: 2026-02-25 14:12:07.794 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 09:12:07 np0005629333 nova_compute[244014]: 2026-02-25 14:12:07.823 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:12:07 np0005629333 nova_compute[244014]: 2026-02-25 14:12:07.823 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 09:12:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4567: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:12:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4568: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:12 np0005629333 nova_compute[244014]: 2026-02-25 14:12:12.824 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:12:12 np0005629333 nova_compute[244014]: 2026-02-25 14:12:12.826 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:12:12 np0005629333 nova_compute[244014]: 2026-02-25 14:12:12.826 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 09:12:12 np0005629333 nova_compute[244014]: 2026-02-25 14:12:12.826 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 09:12:12 np0005629333 nova_compute[244014]: 2026-02-25 14:12:12.827 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 09:12:12 np0005629333 nova_compute[244014]: 2026-02-25 14:12:12.829 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:12:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4569: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4570: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:12:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4571: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:17 np0005629333 nova_compute[244014]: 2026-02-25 14:12:17.830 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:12:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4572: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:12:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4573: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:22 np0005629333 nova_compute[244014]: 2026-02-25 14:12:22.832 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:12:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4574: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:23 np0005629333 podman[434881]: 2026-02-25 14:12:23.701777907 +0000 UTC m=+0.050430180 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 25 09:12:23 np0005629333 podman[434882]: 2026-02-25 14:12:23.772447359 +0000 UTC m=+0.113833177 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260223, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 25 09:12:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4575: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:12:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4576: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:27 np0005629333 nova_compute[244014]: 2026-02-25 14:12:27.834 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:12:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4577: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:12:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:12:31
Feb 25 09:12:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 09:12:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 09:12:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'backups', 'default.rgw.meta', 'vms', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta']
Feb 25 09:12:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 09:12:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4578: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:12:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:12:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:12:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:12:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:12:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:12:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 09:12:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:12:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 09:12:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:12:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:12:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:12:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:12:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 09:12:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:12:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 09:12:32 np0005629333 nova_compute[244014]: 2026-02-25 14:12:32.836 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:12:32 np0005629333 nova_compute[244014]: 2026-02-25 14:12:32.838 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:12:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4579: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4580: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:35 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:12:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4581: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:37 np0005629333 nova_compute[244014]: 2026-02-25 14:12:37.839 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:12:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4582: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 09:12:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 09:12:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 09:12:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 09:12:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 09:12:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:12:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 09:12:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 09:12:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 09:12:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 09:12:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 09:12:40 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 09:12:40 np0005629333 podman[435071]: 2026-02-25 14:12:40.58814902 +0000 UTC m=+0.027914282 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:12:40 np0005629333 podman[435071]: 2026-02-25 14:12:40.712510574 +0000 UTC m=+0.152275816 container create 2095480218b4868d0d579b07f629f3e6d617a1c11905ec33aaa2caf525aaf7da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_visvesvaraya, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:12:40 np0005629333 systemd[1]: Started libpod-conmon-2095480218b4868d0d579b07f629f3e6d617a1c11905ec33aaa2caf525aaf7da.scope.
Feb 25 09:12:40 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:12:40 np0005629333 nova_compute[244014]: 2026-02-25 14:12:40.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:12:40 np0005629333 podman[435071]: 2026-02-25 14:12:40.909040763 +0000 UTC m=+0.348806085 container init 2095480218b4868d0d579b07f629f3e6d617a1c11905ec33aaa2caf525aaf7da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Feb 25 09:12:40 np0005629333 podman[435071]: 2026-02-25 14:12:40.920534188 +0000 UTC m=+0.360299430 container start 2095480218b4868d0d579b07f629f3e6d617a1c11905ec33aaa2caf525aaf7da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_visvesvaraya, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 09:12:40 np0005629333 nova_compute[244014]: 2026-02-25 14:12:40.926 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 09:12:40 np0005629333 lucid_visvesvaraya[435088]: 167 167
Feb 25 09:12:40 np0005629333 systemd[1]: libpod-2095480218b4868d0d579b07f629f3e6d617a1c11905ec33aaa2caf525aaf7da.scope: Deactivated successfully.
Feb 25 09:12:40 np0005629333 nova_compute[244014]: 2026-02-25 14:12:40.927 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 09:12:40 np0005629333 nova_compute[244014]: 2026-02-25 14:12:40.928 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 09:12:40 np0005629333 nova_compute[244014]: 2026-02-25 14:12:40.928 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 09:12:40 np0005629333 nova_compute[244014]: 2026-02-25 14:12:40.929 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 09:12:40 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:12:40 np0005629333 podman[435071]: 2026-02-25 14:12:40.991400757 +0000 UTC m=+0.431166039 container attach 2095480218b4868d0d579b07f629f3e6d617a1c11905ec33aaa2caf525aaf7da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Feb 25 09:12:40 np0005629333 podman[435071]: 2026-02-25 14:12:40.992347563 +0000 UTC m=+0.432112835 container died 2095480218b4868d0d579b07f629f3e6d617a1c11905ec33aaa2caf525aaf7da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_visvesvaraya, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 09:12:41 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 09:12:41 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:12:41 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 09:12:41 np0005629333 systemd[1]: var-lib-containers-storage-overlay-6fa7c5b6ff63f662091617c47609408a42e65e570deeb59b442e0e807923e451-merged.mount: Deactivated successfully.
Feb 25 09:12:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:12:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3488260971' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:12:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4583: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:41 np0005629333 nova_compute[244014]: 2026-02-25 14:12:41.516 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 09:12:41 np0005629333 nova_compute[244014]: 2026-02-25 14:12:41.693 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 25 09:12:41 np0005629333 nova_compute[244014]: 2026-02-25 14:12:41.695 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3468MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 25 09:12:41 np0005629333 nova_compute[244014]: 2026-02-25 14:12:41.695 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 09:12:41 np0005629333 nova_compute[244014]: 2026-02-25 14:12:41.696 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 09:12:41 np0005629333 nova_compute[244014]: 2026-02-25 14:12:41.748 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 25 09:12:41 np0005629333 nova_compute[244014]: 2026-02-25 14:12:41.749 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 25 09:12:41 np0005629333 nova_compute[244014]: 2026-02-25 14:12:41.764 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 09:12:41 np0005629333 podman[435071]: 2026-02-25 14:12:41.996510867 +0000 UTC m=+1.436276149 container remove 2095480218b4868d0d579b07f629f3e6d617a1c11905ec33aaa2caf525aaf7da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 09:12:42 np0005629333 systemd[1]: libpod-conmon-2095480218b4868d0d579b07f629f3e6d617a1c11905ec33aaa2caf525aaf7da.scope: Deactivated successfully.
Feb 25 09:12:42 np0005629333 podman[435155]: 2026-02-25 14:12:42.184581126 +0000 UTC m=+0.038199344 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:12:42 np0005629333 podman[435155]: 2026-02-25 14:12:42.336905572 +0000 UTC m=+0.190523800 container create 2ac78593477814efd47cf272cb86e5ddb8c7850d24bc613341e84ff681a6679a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dijkstra, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 09:12:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:12:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1145661046' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:12:42 np0005629333 nova_compute[244014]: 2026-02-25 14:12:42.457 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.693s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 09:12:42 np0005629333 nova_compute[244014]: 2026-02-25 14:12:42.466 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 09:12:42 np0005629333 nova_compute[244014]: 2026-02-25 14:12:42.482 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 09:12:42 np0005629333 nova_compute[244014]: 2026-02-25 14:12:42.484 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 09:12:42 np0005629333 nova_compute[244014]: 2026-02-25 14:12:42.485 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 09:12:42 np0005629333 systemd[1]: Started libpod-conmon-2ac78593477814efd47cf272cb86e5ddb8c7850d24bc613341e84ff681a6679a.scope.
Feb 25 09:12:42 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:12:42 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f29869981d527f2867db11b2c221eaa15dc2a5b6830c69798bc51c405185acd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:12:42 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f29869981d527f2867db11b2c221eaa15dc2a5b6830c69798bc51c405185acd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:12:42 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f29869981d527f2867db11b2c221eaa15dc2a5b6830c69798bc51c405185acd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:12:42 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f29869981d527f2867db11b2c221eaa15dc2a5b6830c69798bc51c405185acd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:12:42 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f29869981d527f2867db11b2c221eaa15dc2a5b6830c69798bc51c405185acd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 09:12:42 np0005629333 podman[435155]: 2026-02-25 14:12:42.594581574 +0000 UTC m=+0.448199792 container init 2ac78593477814efd47cf272cb86e5ddb8c7850d24bc613341e84ff681a6679a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:12:42 np0005629333 podman[435155]: 2026-02-25 14:12:42.604123364 +0000 UTC m=+0.457741592 container start 2ac78593477814efd47cf272cb86e5ddb8c7850d24bc613341e84ff681a6679a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dijkstra, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:12:42 np0005629333 podman[435155]: 2026-02-25 14:12:42.658766612 +0000 UTC m=+0.512384840 container attach 2ac78593477814efd47cf272cb86e5ddb8c7850d24bc613341e84ff681a6679a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Feb 25 09:12:42 np0005629333 nova_compute[244014]: 2026-02-25 14:12:42.839 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:12:42 np0005629333 nova_compute[244014]: 2026-02-25 14:12:42.845 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:12:43 np0005629333 pedantic_dijkstra[435176]: --> passed data devices: 0 physical, 3 LVM
Feb 25 09:12:43 np0005629333 pedantic_dijkstra[435176]: --> All data devices are unavailable
Feb 25 09:12:43 np0005629333 systemd[1]: libpod-2ac78593477814efd47cf272cb86e5ddb8c7850d24bc613341e84ff681a6679a.scope: Deactivated successfully.
Feb 25 09:12:43 np0005629333 podman[435155]: 2026-02-25 14:12:43.065754235 +0000 UTC m=+0.919372433 container died 2ac78593477814efd47cf272cb86e5ddb8c7850d24bc613341e84ff681a6679a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dijkstra, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:12:43 np0005629333 systemd[1]: var-lib-containers-storage-overlay-3f29869981d527f2867db11b2c221eaa15dc2a5b6830c69798bc51c405185acd-merged.mount: Deactivated successfully.
Feb 25 09:12:43 np0005629333 nova_compute[244014]: 2026-02-25 14:12:43.487 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:12:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4584: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:43 np0005629333 podman[435155]: 2026-02-25 14:12:43.854305848 +0000 UTC m=+1.707924076 container remove 2ac78593477814efd47cf272cb86e5ddb8c7850d24bc613341e84ff681a6679a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dijkstra, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 09:12:43 np0005629333 nova_compute[244014]: 2026-02-25 14:12:43.878 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:12:43 np0005629333 nova_compute[244014]: 2026-02-25 14:12:43.879 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 09:12:43 np0005629333 nova_compute[244014]: 2026-02-25 14:12:43.879 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 09:12:43 np0005629333 systemd[1]: libpod-conmon-2ac78593477814efd47cf272cb86e5ddb8c7850d24bc613341e84ff681a6679a.scope: Deactivated successfully.
Feb 25 09:12:43 np0005629333 nova_compute[244014]: 2026-02-25 14:12:43.895 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 09:12:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 09:12:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:12:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 09:12:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:12:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 09:12:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:12:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:12:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:12:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:12:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:12:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 09:12:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:12:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 09:12:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:12:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:12:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:12:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 09:12:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:12:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 09:12:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:12:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:12:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:12:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 09:12:44 np0005629333 podman[435269]: 2026-02-25 14:12:44.335971806 +0000 UTC m=+0.033248093 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:12:44 np0005629333 podman[435269]: 2026-02-25 14:12:44.407371009 +0000 UTC m=+0.104647316 container create 65c276f927596a44bc1cfbf656953c76ef1af4188011142a7c4e938e61f65d30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_germain, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Feb 25 09:12:44 np0005629333 systemd[1]: Started libpod-conmon-65c276f927596a44bc1cfbf656953c76ef1af4188011142a7c4e938e61f65d30.scope.
Feb 25 09:12:44 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:12:44 np0005629333 podman[435269]: 2026-02-25 14:12:44.847015776 +0000 UTC m=+0.544292053 container init 65c276f927596a44bc1cfbf656953c76ef1af4188011142a7c4e938e61f65d30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Feb 25 09:12:44 np0005629333 podman[435269]: 2026-02-25 14:12:44.852385398 +0000 UTC m=+0.549661635 container start 65c276f927596a44bc1cfbf656953c76ef1af4188011142a7c4e938e61f65d30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_germain, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:12:44 np0005629333 charming_germain[435285]: 167 167
Feb 25 09:12:44 np0005629333 systemd[1]: libpod-65c276f927596a44bc1cfbf656953c76ef1af4188011142a7c4e938e61f65d30.scope: Deactivated successfully.
Feb 25 09:12:44 np0005629333 conmon[435285]: conmon 65c276f927596a44bc1c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-65c276f927596a44bc1cfbf656953c76ef1af4188011142a7c4e938e61f65d30.scope/container/memory.events
Feb 25 09:12:45 np0005629333 podman[435269]: 2026-02-25 14:12:45.026408259 +0000 UTC m=+0.723684556 container attach 65c276f927596a44bc1cfbf656953c76ef1af4188011142a7c4e938e61f65d30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_germain, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:12:45 np0005629333 podman[435269]: 2026-02-25 14:12:45.027463459 +0000 UTC m=+0.724739666 container died 65c276f927596a44bc1cfbf656953c76ef1af4188011142a7c4e938e61f65d30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 09:12:45 np0005629333 systemd[1]: var-lib-containers-storage-overlay-3dea3d2335efafee3d211fb84197ba7c7f0db74407fc6ad208bbf960568fc303-merged.mount: Deactivated successfully.
Feb 25 09:12:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4585: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:45 np0005629333 podman[435269]: 2026-02-25 14:12:45.688424028 +0000 UTC m=+1.385700255 container remove 65c276f927596a44bc1cfbf656953c76ef1af4188011142a7c4e938e61f65d30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_germain, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:12:45 np0005629333 systemd[1]: libpod-conmon-65c276f927596a44bc1cfbf656953c76ef1af4188011142a7c4e938e61f65d30.scope: Deactivated successfully.
Feb 25 09:12:45 np0005629333 podman[435309]: 2026-02-25 14:12:45.867907204 +0000 UTC m=+0.035576209 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:12:45 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:12:46 np0005629333 podman[435309]: 2026-02-25 14:12:46.042856591 +0000 UTC m=+0.210525586 container create 084d71d169dbf085aa960621f40abf748f3a500cbb0f0e07c014cfe6703a5527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Feb 25 09:12:46 np0005629333 systemd[1]: Started libpod-conmon-084d71d169dbf085aa960621f40abf748f3a500cbb0f0e07c014cfe6703a5527.scope.
Feb 25 09:12:46 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:12:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd6ce5827f913b89ed0f49e92f5c0b3a5ab6a9d327a3a6f0958ec2009ff35381/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:12:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd6ce5827f913b89ed0f49e92f5c0b3a5ab6a9d327a3a6f0958ec2009ff35381/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:12:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd6ce5827f913b89ed0f49e92f5c0b3a5ab6a9d327a3a6f0958ec2009ff35381/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:12:46 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd6ce5827f913b89ed0f49e92f5c0b3a5ab6a9d327a3a6f0958ec2009ff35381/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:12:46 np0005629333 podman[435309]: 2026-02-25 14:12:46.578271002 +0000 UTC m=+0.745939997 container init 084d71d169dbf085aa960621f40abf748f3a500cbb0f0e07c014cfe6703a5527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_shamir, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:12:46 np0005629333 podman[435309]: 2026-02-25 14:12:46.585547919 +0000 UTC m=+0.753216894 container start 084d71d169dbf085aa960621f40abf748f3a500cbb0f0e07c014cfe6703a5527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_shamir, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:12:46 np0005629333 podman[435309]: 2026-02-25 14:12:46.698008755 +0000 UTC m=+0.865677780 container attach 084d71d169dbf085aa960621f40abf748f3a500cbb0f0e07c014cfe6703a5527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_shamir, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]: {
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:    "0": [
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:        {
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "devices": [
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "/dev/loop3"
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            ],
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "lv_name": "ceph_lv0",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "lv_size": "21470642176",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "name": "ceph_lv0",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "tags": {
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.cluster_name": "ceph",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.crush_device_class": "",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.encrypted": "0",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.objectstore": "bluestore",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.osd_id": "0",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.type": "block",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.vdo": "0",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.with_tpm": "0"
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            },
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "type": "block",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "vg_name": "ceph_vg0"
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:        }
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:    ],
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:    "1": [
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:        {
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "devices": [
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "/dev/loop4"
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            ],
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "lv_name": "ceph_lv1",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "lv_size": "21470642176",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "name": "ceph_lv1",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "tags": {
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.cluster_name": "ceph",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.crush_device_class": "",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.encrypted": "0",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.objectstore": "bluestore",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.osd_id": "1",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.type": "block",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.vdo": "0",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.with_tpm": "0"
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            },
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "type": "block",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "vg_name": "ceph_vg1"
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:        }
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:    ],
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:    "2": [
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:        {
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "devices": [
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "/dev/loop5"
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            ],
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "lv_name": "ceph_lv2",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "lv_size": "21470642176",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "name": "ceph_lv2",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "tags": {
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.cluster_name": "ceph",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.crush_device_class": "",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.encrypted": "0",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.objectstore": "bluestore",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.osd_id": "2",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.type": "block",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.vdo": "0",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:                "ceph.with_tpm": "0"
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            },
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "type": "block",
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:            "vg_name": "ceph_vg2"
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:        }
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]:    ]
Feb 25 09:12:46 np0005629333 admiring_shamir[435326]: }
Feb 25 09:12:46 np0005629333 nova_compute[244014]: 2026-02-25 14:12:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:12:46 np0005629333 systemd[1]: libpod-084d71d169dbf085aa960621f40abf748f3a500cbb0f0e07c014cfe6703a5527.scope: Deactivated successfully.
Feb 25 09:12:46 np0005629333 podman[435309]: 2026-02-25 14:12:46.90957669 +0000 UTC m=+1.077245695 container died 084d71d169dbf085aa960621f40abf748f3a500cbb0f0e07c014cfe6703a5527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_shamir, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:12:47 np0005629333 systemd[1]: var-lib-containers-storage-overlay-cd6ce5827f913b89ed0f49e92f5c0b3a5ab6a9d327a3a6f0958ec2009ff35381-merged.mount: Deactivated successfully.
Feb 25 09:12:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4586: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 09:12:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1482154077' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 09:12:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 09:12:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1482154077' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 09:12:47 np0005629333 podman[435309]: 2026-02-25 14:12:47.835458365 +0000 UTC m=+2.003127340 container remove 084d71d169dbf085aa960621f40abf748f3a500cbb0f0e07c014cfe6703a5527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_shamir, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:12:47 np0005629333 nova_compute[244014]: 2026-02-25 14:12:47.845 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:12:47 np0005629333 nova_compute[244014]: 2026-02-25 14:12:47.848 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:12:47 np0005629333 nova_compute[244014]: 2026-02-25 14:12:47.848 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 09:12:47 np0005629333 nova_compute[244014]: 2026-02-25 14:12:47.848 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 09:12:47 np0005629333 nova_compute[244014]: 2026-02-25 14:12:47.870 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:12:47 np0005629333 nova_compute[244014]: 2026-02-25 14:12:47.871 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 09:12:47 np0005629333 systemd[1]: libpod-conmon-084d71d169dbf085aa960621f40abf748f3a500cbb0f0e07c014cfe6703a5527.scope: Deactivated successfully.
Feb 25 09:12:48 np0005629333 podman[435408]: 2026-02-25 14:12:48.33888874 +0000 UTC m=+0.030301119 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:12:48 np0005629333 podman[435408]: 2026-02-25 14:12:48.608069887 +0000 UTC m=+0.299482266 container create 085f6eb3b0df288b8b0ddcd0150021aabf71620dbdfd0628dcc25a7d1a1de3a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_montalcini, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 25 09:12:48 np0005629333 systemd[1]: Started libpod-conmon-085f6eb3b0df288b8b0ddcd0150021aabf71620dbdfd0628dcc25a7d1a1de3a2.scope.
Feb 25 09:12:48 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:12:49 np0005629333 podman[435408]: 2026-02-25 14:12:49.061400603 +0000 UTC m=+0.752813042 container init 085f6eb3b0df288b8b0ddcd0150021aabf71620dbdfd0628dcc25a7d1a1de3a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_montalcini, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Feb 25 09:12:49 np0005629333 podman[435408]: 2026-02-25 14:12:49.068233287 +0000 UTC m=+0.759645676 container start 085f6eb3b0df288b8b0ddcd0150021aabf71620dbdfd0628dcc25a7d1a1de3a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_montalcini, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:12:49 np0005629333 elated_montalcini[435423]: 167 167
Feb 25 09:12:49 np0005629333 systemd[1]: libpod-085f6eb3b0df288b8b0ddcd0150021aabf71620dbdfd0628dcc25a7d1a1de3a2.scope: Deactivated successfully.
Feb 25 09:12:49 np0005629333 podman[435408]: 2026-02-25 14:12:49.367885737 +0000 UTC m=+1.059298136 container attach 085f6eb3b0df288b8b0ddcd0150021aabf71620dbdfd0628dcc25a7d1a1de3a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_montalcini, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 09:12:49 np0005629333 podman[435408]: 2026-02-25 14:12:49.368553226 +0000 UTC m=+1.059965615 container died 085f6eb3b0df288b8b0ddcd0150021aabf71620dbdfd0628dcc25a7d1a1de3a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_montalcini, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Feb 25 09:12:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4587: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:49 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f10e33953e6a2128404ece1cbd8cc211baef386d71515ccd1ed29ac358330fab-merged.mount: Deactivated successfully.
Feb 25 09:12:50 np0005629333 podman[435408]: 2026-02-25 14:12:50.317351091 +0000 UTC m=+2.008763470 container remove 085f6eb3b0df288b8b0ddcd0150021aabf71620dbdfd0628dcc25a7d1a1de3a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_montalcini, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 09:12:50 np0005629333 systemd[1]: libpod-conmon-085f6eb3b0df288b8b0ddcd0150021aabf71620dbdfd0628dcc25a7d1a1de3a2.scope: Deactivated successfully.
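The span above traces one short-lived cephadm exec container (elated_montalcini) through its whole podman lifecycle: create, init, start, attach, died, remove, with systemd tearing down the matching libpod and conmon scopes afterwards. A small sketch for pulling those lifecycle events out of journal text like this; the regex is fitted to this log's line format and is an assumption, not a stable podman interface.

    import re

    EVENT_RE = re.compile(
        r"podman\[\d+\]: (?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)"
        r".* container (?P<event>create|init|start|attach|died|remove) "
        r"(?P<cid>[0-9a-f]{64}) \(image=[^,]+, name=(?P<name>[^,)]+)"
    )

    def lifecycle(lines):
        """Yield (timestamp, event, short id, name) for each podman event line."""
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                yield m["ts"], m["event"], m["cid"][:12], m["name"]

    # Example: feed it a saved journal excerpt.
    # for ts, ev, cid, name in lifecycle(open("journal.txt")):
    #     print(ts, ev, cid, name)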
Feb 25 09:12:50 np0005629333 podman[435448]: 2026-02-25 14:12:50.491283579 +0000 UTC m=+0.044532743 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:12:50 np0005629333 podman[435448]: 2026-02-25 14:12:50.671786134 +0000 UTC m=+0.225035248 container create 9051492c6363e06bb5ad87eb355017e67ab12ea2b057950ae3087c20f805aa96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_robinson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:12:50 np0005629333 systemd[1]: Started libpod-conmon-9051492c6363e06bb5ad87eb355017e67ab12ea2b057950ae3087c20f805aa96.scope.
Feb 25 09:12:50 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:12:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20c5aa22030edb7bc47213496e807ee4a34a669a85fad4336f2385b2c923d672/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:12:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20c5aa22030edb7bc47213496e807ee4a34a669a85fad4336f2385b2c923d672/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:12:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20c5aa22030edb7bc47213496e807ee4a34a669a85fad4336f2385b2c923d672/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:12:50 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20c5aa22030edb7bc47213496e807ee4a34a669a85fad4336f2385b2c923d672/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
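The kernel's note that these xfs overlay mounts support timestamps only until 2038 (0x7fffffff) is the classic 32-bit time_t ceiling; a quick check of the date that limit corresponds to:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the Unix epoch: the 32-bit time_t ceiling.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00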
Feb 25 09:12:50 np0005629333 podman[435448]: 2026-02-25 14:12:50.93601497 +0000 UTC m=+0.489264134 container init 9051492c6363e06bb5ad87eb355017e67ab12ea2b057950ae3087c20f805aa96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_robinson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:12:50 np0005629333 podman[435448]: 2026-02-25 14:12:50.945227001 +0000 UTC m=+0.498476125 container start 9051492c6363e06bb5ad87eb355017e67ab12ea2b057950ae3087c20f805aa96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Feb 25 09:12:50 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:12:51 np0005629333 podman[435448]: 2026-02-25 14:12:51.082091509 +0000 UTC m=+0.635340683 container attach 9051492c6363e06bb5ad87eb355017e67ab12ea2b057950ae3087c20f805aa96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_robinson, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 25 09:12:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4588: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:51 np0005629333 lvm[435545]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 09:12:51 np0005629333 lvm[435545]: VG ceph_vg1 finished
Feb 25 09:12:51 np0005629333 lvm[435544]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 09:12:51 np0005629333 lvm[435544]: VG ceph_vg0 finished
Feb 25 09:12:51 np0005629333 lvm[435547]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 09:12:51 np0005629333 lvm[435547]: VG ceph_vg2 finished
Feb 25 09:12:51 np0005629333 hopeful_robinson[435465]: {}
Feb 25 09:12:51 np0005629333 systemd[1]: libpod-9051492c6363e06bb5ad87eb355017e67ab12ea2b057950ae3087c20f805aa96.scope: Deactivated successfully.
Feb 25 09:12:51 np0005629333 systemd[1]: libpod-9051492c6363e06bb5ad87eb355017e67ab12ea2b057950ae3087c20f805aa96.scope: Consumed 1.287s CPU time.
Feb 25 09:12:51 np0005629333 podman[435448]: 2026-02-25 14:12:51.958320637 +0000 UTC m=+1.511569761 container died 9051492c6363e06bb5ad87eb355017e67ab12ea2b057950ae3087c20f805aa96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_robinson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Feb 25 09:12:52 np0005629333 systemd[1]: var-lib-containers-storage-overlay-20c5aa22030edb7bc47213496e807ee4a34a669a85fad4336f2385b2c923d672-merged.mount: Deactivated successfully.
Feb 25 09:12:52 np0005629333 nova_compute[244014]: 2026-02-25 14:12:52.872 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:12:52 np0005629333 nova_compute[244014]: 2026-02-25 14:12:52.876 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:12:52 np0005629333 nova_compute[244014]: 2026-02-25 14:12:52.876 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5005 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 09:12:52 np0005629333 nova_compute[244014]: 2026-02-25 14:12:52.877 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 09:12:52 np0005629333 nova_compute[244014]: 2026-02-25 14:12:52.903 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:12:52 np0005629333 nova_compute[244014]: 2026-02-25 14:12:52.903 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 09:12:53 np0005629333 podman[435448]: 2026-02-25 14:12:53.143924261 +0000 UTC m=+2.697173345 container remove 9051492c6363e06bb5ad87eb355017e67ab12ea2b057950ae3087c20f805aa96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 09:12:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 09:12:53 np0005629333 systemd[1]: libpod-conmon-9051492c6363e06bb5ad87eb355017e67ab12ea2b057950ae3087c20f805aa96.scope: Deactivated successfully.
Feb 25 09:12:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4589: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:12:53 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 09:12:53 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:12:54 np0005629333 podman[435587]: 2026-02-25 14:12:54.747582221 +0000 UTC m=+0.079590416 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 25 09:12:54 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:12:54 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:12:54 np0005629333 podman[435588]: 2026-02-25 14:12:54.794003066 +0000 UTC m=+0.125716133 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 25 09:12:54 np0005629333 nova_compute[244014]: 2026-02-25 14:12:54.874 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:12:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:12:55.118 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 09:12:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:12:55.118 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 09:12:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:12:55.118 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 09:12:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4590: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:55 np0005629333 nova_compute[244014]: 2026-02-25 14:12:55.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:12:55 np0005629333 nova_compute[244014]: 2026-02-25 14:12:55.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 09:12:55 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:12:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4591: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:57 np0005629333 nova_compute[244014]: 2026-02-25 14:12:57.904 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:12:57 np0005629333 nova_compute[244014]: 2026-02-25 14:12:57.906 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:12:57 np0005629333 nova_compute[244014]: 2026-02-25 14:12:57.906 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 09:12:57 np0005629333 nova_compute[244014]: 2026-02-25 14:12:57.906 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 09:12:57 np0005629333 nova_compute[244014]: 2026-02-25 14:12:57.906 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 09:12:57 np0005629333 nova_compute[244014]: 2026-02-25 14:12:57.907 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:12:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4592: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:12:59 np0005629333 nova_compute[244014]: 2026-02-25 14:12:59.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:13:00 np0005629333 nova_compute[244014]: 2026-02-25 14:13:00.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:13:00 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:13:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4593: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:13:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:13:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:13:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:13:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:13:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:13:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:13:02 np0005629333 nova_compute[244014]: 2026-02-25 14:13:02.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:13:02 np0005629333 nova_compute[244014]: 2026-02-25 14:13:02.908 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:13:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4594: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:13:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4595: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:13:05 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:13:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4596: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:13:07 np0005629333 nova_compute[244014]: 2026-02-25 14:13:07.910 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:13:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4597: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:13:10 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:13:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4598: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:13:11 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #234. Immutable memtables: 0.
Feb 25 09:13:11 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:11.726881) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 25 09:13:11 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:856] [default] [JOB 147] Flushing memtable with next log file: 234
Feb 25 09:13:11 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028791726946, "job": 147, "event": "flush_started", "num_memtables": 1, "num_entries": 863, "num_deletes": 251, "total_data_size": 1197496, "memory_usage": 1224064, "flush_reason": "Manual Compaction"}
Feb 25 09:13:11 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:885] [default] [JOB 147] Level-0 flush table #235: started
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028792000215, "cf_name": "default", "job": 147, "event": "table_file_creation", "file_number": 235, "file_size": 1186533, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 94921, "largest_seqno": 95783, "table_properties": {"data_size": 1182076, "index_size": 2108, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9605, "raw_average_key_size": 19, "raw_value_size": 1173293, "raw_average_value_size": 2409, "num_data_blocks": 92, "num_entries": 487, "num_filter_entries": 487, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772028712, "oldest_key_time": 1772028712, "file_creation_time": 1772028791, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 235, "seqno_to_time_mapping": "N/A"}}
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 147] Flush lasted 273419 microseconds, and 6215 cpu microseconds.
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.000293) [db/flush_job.cc:967] [default] [JOB 147] Level-0 flush table #235: 1186533 bytes OK
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.000329) [db/memtable_list.cc:519] [default] Level-0 commit table #235 started
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.100476) [db/memtable_list.cc:722] [default] Level-0 commit table #235: memtable #1 done
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.100558) EVENT_LOG_v1 {"time_micros": 1772028792100541, "job": 147, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.100603) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 147] Try to delete WAL files size 1193268, prev total WAL file size 1220087, number of live WAL files 2.
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000231.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.101743) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039353338' seq:72057594037927935, type:22 .. '7061786F730039373930' seq:0, type:0; will stop at (end)
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 148] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 147 Base level 0, inputs: [235(1158KB)], [233(9656KB)]
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028792101805, "job": 148, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [235], "files_L6": [233], "score": -1, "input_data_size": 11074800, "oldest_snapshot_seqno": -1}
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 148] Generated table #236: 10211 keys, 9270018 bytes, temperature: kUnknown
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028792379133, "cf_name": "default", "job": 148, "event": "table_file_creation", "file_number": 236, "file_size": 9270018, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9210830, "index_size": 32579, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25541, "raw_key_size": 274275, "raw_average_key_size": 26, "raw_value_size": 9036549, "raw_average_value_size": 884, "num_data_blocks": 1210, "num_entries": 10211, "num_filter_entries": 10211, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772020150, "oldest_key_time": 0, "file_creation_time": 1772028792, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c540816e-83f6-4a96-94b0-524825c8594a", "db_session_id": "6RCFLCKJUH7T17FV9P10", "orig_file_number": 236, "seqno_to_time_mapping": "N/A"}}
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.380195) [db/compaction/compaction_job.cc:1663] [default] [JOB 148] Compacted 1@0 + 1@6 files to L6 => 9270018 bytes
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.480434) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 39.9 rd, 33.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 9.4 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(17.1) write-amplify(7.8) OK, records in: 10725, records dropped: 514 output_compression: NoCompression
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.480488) EVENT_LOG_v1 {"time_micros": 1772028792480466, "job": 148, "event": "compaction_finished", "compaction_time_micros": 277416, "compaction_time_cpu_micros": 46278, "output_level": 6, "num_output_files": 1, "total_output_size": 9270018, "num_input_records": 10725, "num_output_records": 10211, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000235.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028792480975, "job": 148, "event": "table_file_deletion", "file_number": 235}
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000233.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772028792483455, "job": 148, "event": "table_file_deletion", "file_number": 233}
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.101594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.483579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.483588) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.483592) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.483594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 25 09:13:12 np0005629333 ceph-mon[76335]: rocksdb: (Original Log Time 2026/02/25-14:13:12.483597) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
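Job 148's numbers above are internally consistent: it compacted the freshly flushed 1,186,533-byte L0 table #235 plus the 9,656 KB L6 table #233 (input_data_size 11,074,800) into the single 9,270,018-byte table #236, which reproduces the printed write-amplify(7.8) and read-write-amplify(17.1). A short check using the sizes from the EVENT_LOG_v1 records; the two ratio definitions are inferred from the printed values.

    # Sizes taken from the EVENT_LOG_v1 records for JOB 147/148 above.
    l0_input    = 1_186_533     # table #235, the freshly flushed L0 file
    total_input = 11_074_800    # "input_data_size": L0 #235 + L6 #233
    output      = 9_270_018     # table #236, the compacted L6 result

    write_amplify      = output / l0_input                  # ~7.8
    read_write_amplify = (total_input + output) / l0_input  # ~17.1
    print(f"write-amplify({write_amplify:.1f}) "
          f"read-write-amplify({read_write_amplify:.1f})")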
Feb 25 09:13:12 np0005629333 nova_compute[244014]: 2026-02-25 14:13:12.952 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:13:12 np0005629333 nova_compute[244014]: 2026-02-25 14:13:12.953 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:13:12 np0005629333 nova_compute[244014]: 2026-02-25 14:13:12.953 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5041 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 09:13:12 np0005629333 nova_compute[244014]: 2026-02-25 14:13:12.953 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 09:13:12 np0005629333 nova_compute[244014]: 2026-02-25 14:13:12.954 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 09:13:12 np0005629333 nova_compute[244014]: 2026-02-25 14:13:12.955 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:13:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4599: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:13:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4600: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:13:15 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:13:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4601: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:13:17 np0005629333 nova_compute[244014]: 2026-02-25 14:13:17.956 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:13:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4602: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:13:20 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:13:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4603: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:13:22 np0005629333 nova_compute[244014]: 2026-02-25 14:13:22.958 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:13:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4604: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:13:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4605: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:13:25 np0005629333 podman[435633]: 2026-02-25 14:13:25.738490823 +0000 UTC m=+0.074806770 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Feb 25 09:13:25 np0005629333 podman[435634]: 2026-02-25 14:13:25.770751938 +0000 UTC m=+0.102922048 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, config_id=ovn_controller)
Feb 25 09:13:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:13:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4606: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:13:27 np0005629333 nova_compute[244014]: 2026-02-25 14:13:27.960 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:13:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4607: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:13:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:13:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:13:31
Feb 25 09:13:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 09:13:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 09:13:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'images', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', '.mgr']
Feb 25 09:13:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
Feb 25 09:13:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4608: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:13:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:13:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:13:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:13:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:13:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:13:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:13:31 np0005629333 nova_compute[244014]: 2026-02-25 14:13:31.871 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:13:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 09:13:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:13:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 09:13:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:13:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:13:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:13:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:13:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 09:13:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:13:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 09:13:32 np0005629333 nova_compute[244014]: 2026-02-25 14:13:32.962 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:13:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4609: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:13:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4610: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:13:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:13:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4611: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:13:37 np0005629333 nova_compute[244014]: 2026-02-25 14:13:37.965 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:13:37 np0005629333 nova_compute[244014]: 2026-02-25 14:13:37.967 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:13:37 np0005629333 nova_compute[244014]: 2026-02-25 14:13:37.967 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 09:13:37 np0005629333 nova_compute[244014]: 2026-02-25 14:13:37.967 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 09:13:38 np0005629333 nova_compute[244014]: 2026-02-25 14:13:38.004 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:13:38 np0005629333 nova_compute[244014]: 2026-02-25 14:13:38.005 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 09:13:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4612: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:13:40 np0005629333 nova_compute[244014]: 2026-02-25 14:13:40.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:13:40 np0005629333 nova_compute[244014]: 2026-02-25 14:13:40.911 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 09:13:40 np0005629333 nova_compute[244014]: 2026-02-25 14:13:40.912 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 09:13:40 np0005629333 nova_compute[244014]: 2026-02-25 14:13:40.912 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 09:13:40 np0005629333 nova_compute[244014]: 2026-02-25 14:13:40.913 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 25 09:13:40 np0005629333 nova_compute[244014]: 2026-02-25 14:13:40.913 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 25 09:13:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:13:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:13:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1511290746' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:13:41 np0005629333 nova_compute[244014]: 2026-02-25 14:13:41.455 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
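The ceph df probe that nova's resource tracker runs above (audited by the mon as client.openstack) can be reproduced standalone. A hedged sketch, assuming the standard top-level "stats" block of `ceph df --format=json` output:

    import json
    import subprocess

    # Same invocation as the oslo.concurrency "Running cmd" line above.
    out = subprocess.check_output([
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    stats = json.loads(out)["stats"]
    print(f"avail: {stats['total_avail_bytes'] / 1024**3:.2f} GiB "
          f"of {stats['total_bytes'] / 1024**3:.2f} GiB")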
Feb 25 09:13:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4613: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:13:41 np0005629333 nova_compute[244014]: 2026-02-25 14:13:41.635 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
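The warning fires because the host (a KVM guest here) reports more than one physical socket per NUMA node, so nova disables the `socket` PCI NUMA affinity policy. A rough standalone check of that condition via lscpu's parseable output; this approximates nova's libvirt-based detection and is not its actual code path.

    import subprocess
    from collections import defaultdict

    # "-p=NODE,SOCKET" is standard lscpu parseable output: one CSV row per CPU.
    rows = subprocess.check_output(["lscpu", "-p=NODE,SOCKET"], text=True)

    sockets_per_node = defaultdict(set)
    for row in rows.splitlines():
        if row.startswith("#"):          # skip header comments
            continue
        node, socket = row.split(",")
        sockets_per_node[node].add(socket)

    for node, sockets in sorted(sockets_per_node.items()):
        flag = " <- triggers nova's warning" if len(sockets) > 1 else ""
        print(f"NUMA node {node}: {len(sockets)} socket(s){flag}")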
Feb 25 09:13:41 np0005629333 nova_compute[244014]: 2026-02-25 14:13:41.637 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3531MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 09:13:41 np0005629333 nova_compute[244014]: 2026-02-25 14:13:41.637 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 09:13:41 np0005629333 nova_compute[244014]: 2026-02-25 14:13:41.637 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 09:13:41 np0005629333 nova_compute[244014]: 2026-02-25 14:13:41.708 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 09:13:41 np0005629333 nova_compute[244014]: 2026-02-25 14:13:41.709 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 09:13:41 np0005629333 nova_compute[244014]: 2026-02-25 14:13:41.726 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 09:13:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:13:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/220818318' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:13:42 np0005629333 nova_compute[244014]: 2026-02-25 14:13:42.237 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 25 09:13:42 np0005629333 nova_compute[244014]: 2026-02-25 14:13:42.244 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 25 09:13:42 np0005629333 nova_compute[244014]: 2026-02-25 14:13:42.264 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 25 09:13:42 np0005629333 nova_compute[244014]: 2026-02-25 14:13:42.266 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 09:13:42 np0005629333 nova_compute[244014]: 2026-02-25 14:13:42.267 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 09:13:43 np0005629333 nova_compute[244014]: 2026-02-25 14:13:43.006 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:13:43 np0005629333 nova_compute[244014]: 2026-02-25 14:13:43.007 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:13:43 np0005629333 nova_compute[244014]: 2026-02-25 14:13:43.008 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 09:13:43 np0005629333 nova_compute[244014]: 2026-02-25 14:13:43.008 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 09:13:43 np0005629333 nova_compute[244014]: 2026-02-25 14:13:43.009 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 09:13:43 np0005629333 nova_compute[244014]: 2026-02-25 14:13:43.011 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:13:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4614: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:13:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 09:13:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:13:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 09:13:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:13:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 09:13:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:13:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:13:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:13:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:13:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:13:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 09:13:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:13:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 09:13:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:13:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:13:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:13:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 09:13:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:13:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 09:13:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:13:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:13:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:13:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Feb 25 09:13:44 np0005629333 nova_compute[244014]: 2026-02-25 14:13:44.268 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:13:44 np0005629333 nova_compute[244014]: 2026-02-25 14:13:44.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:13:44 np0005629333 nova_compute[244014]: 2026-02-25 14:13:44.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 09:13:44 np0005629333 nova_compute[244014]: 2026-02-25 14:13:44.878 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 09:13:44 np0005629333 nova_compute[244014]: 2026-02-25 14:13:44.897 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 25 09:13:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4615: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:13:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:13:46 np0005629333 nova_compute[244014]: 2026-02-25 14:13:46.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:13:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4616: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:13:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 09:13:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1116104455' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 09:13:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 09:13:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1116104455' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 09:13:48 np0005629333 nova_compute[244014]: 2026-02-25 14:13:48.010 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:13:48 np0005629333 nova_compute[244014]: 2026-02-25 14:13:48.011 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:13:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4617: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:13:50 np0005629333 ceph-osd[89088]: bluestore.MempoolThread fragmentation_score=0.004165 took=0.000093s
Feb 25 09:13:50 np0005629333 ceph-osd[86953]: bluestore.MempoolThread fragmentation_score=0.004726 took=0.000069s
Feb 25 09:13:50 np0005629333 ceph-osd[88012]: bluestore.MempoolThread fragmentation_score=0.004789 took=0.000056s
Feb 25 09:13:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:13:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4618: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:13:53 np0005629333 nova_compute[244014]: 2026-02-25 14:13:53.012 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:13:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4619: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:13:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 09:13:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 09:13:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 09:13:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 09:13:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 09:13:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:13:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 09:13:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 09:13:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 09:13:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 09:13:54 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 09:13:54 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 09:13:54 np0005629333 podman[435868]: 2026-02-25 14:13:54.725888445 +0000 UTC m=+0.066020662 container create b9e4ac0dd9c382929d295a08d5dc3e88967f17f989e45d384af22580b4a0f6b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wing, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:13:54 np0005629333 systemd[1]: Started libpod-conmon-b9e4ac0dd9c382929d295a08d5dc3e88967f17f989e45d384af22580b4a0f6b0.scope.
Feb 25 09:13:54 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:13:54 np0005629333 podman[435868]: 2026-02-25 14:13:54.695131204 +0000 UTC m=+0.035263531 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:13:54 np0005629333 podman[435868]: 2026-02-25 14:13:54.797907976 +0000 UTC m=+0.138040213 container init b9e4ac0dd9c382929d295a08d5dc3e88967f17f989e45d384af22580b4a0f6b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wing, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 25 09:13:54 np0005629333 podman[435868]: 2026-02-25 14:13:54.806306334 +0000 UTC m=+0.146438551 container start b9e4ac0dd9c382929d295a08d5dc3e88967f17f989e45d384af22580b4a0f6b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wing, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Feb 25 09:13:54 np0005629333 youthful_wing[435884]: 167 167
Feb 25 09:13:54 np0005629333 systemd[1]: libpod-b9e4ac0dd9c382929d295a08d5dc3e88967f17f989e45d384af22580b4a0f6b0.scope: Deactivated successfully.
Feb 25 09:13:54 np0005629333 podman[435868]: 2026-02-25 14:13:54.811568763 +0000 UTC m=+0.151701000 container attach b9e4ac0dd9c382929d295a08d5dc3e88967f17f989e45d384af22580b4a0f6b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:13:54 np0005629333 podman[435868]: 2026-02-25 14:13:54.812372286 +0000 UTC m=+0.152504503 container died b9e4ac0dd9c382929d295a08d5dc3e88967f17f989e45d384af22580b4a0f6b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:13:54 np0005629333 systemd[1]: var-lib-containers-storage-overlay-012abd3ad92828d5ef3c8e5158e263f3cedf4c8123e29b2e8536ac11b86187c3-merged.mount: Deactivated successfully.
Feb 25 09:13:54 np0005629333 podman[435868]: 2026-02-25 14:13:54.85417492 +0000 UTC m=+0.194307137 container remove b9e4ac0dd9c382929d295a08d5dc3e88967f17f989e45d384af22580b4a0f6b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_wing, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:13:54 np0005629333 systemd[1]: libpod-conmon-b9e4ac0dd9c382929d295a08d5dc3e88967f17f989e45d384af22580b4a0f6b0.scope: Deactivated successfully.
Feb 25 09:13:54 np0005629333 podman[435908]: 2026-02-25 14:13:54.979128911 +0000 UTC m=+0.038085270 container create 04b89971cae6318e0dc163c114deeefdeffc7b5b77b33b1c425be8dbfd4b1ce7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:13:55 np0005629333 systemd[1]: Started libpod-conmon-04b89971cae6318e0dc163c114deeefdeffc7b5b77b33b1c425be8dbfd4b1ce7.scope.
Feb 25 09:13:55 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:13:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e031c443429aee2ca3882b158941b8f2e91ed2cf1aad784840b9207d6997ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:13:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e031c443429aee2ca3882b158941b8f2e91ed2cf1aad784840b9207d6997ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:13:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e031c443429aee2ca3882b158941b8f2e91ed2cf1aad784840b9207d6997ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:13:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e031c443429aee2ca3882b158941b8f2e91ed2cf1aad784840b9207d6997ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:13:55 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e031c443429aee2ca3882b158941b8f2e91ed2cf1aad784840b9207d6997ee/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 09:13:55 np0005629333 podman[435908]: 2026-02-25 14:13:54.963896829 +0000 UTC m=+0.022853198 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:13:55 np0005629333 podman[435908]: 2026-02-25 14:13:55.07190187 +0000 UTC m=+0.130858269 container init 04b89971cae6318e0dc163c114deeefdeffc7b5b77b33b1c425be8dbfd4b1ce7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:13:55 np0005629333 podman[435908]: 2026-02-25 14:13:55.079115034 +0000 UTC m=+0.138071393 container start 04b89971cae6318e0dc163c114deeefdeffc7b5b77b33b1c425be8dbfd4b1ce7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ritchie, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Feb 25 09:13:55 np0005629333 podman[435908]: 2026-02-25 14:13:55.084473736 +0000 UTC m=+0.143430095 container attach 04b89971cae6318e0dc163c114deeefdeffc7b5b77b33b1c425be8dbfd4b1ce7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ritchie, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Feb 25 09:13:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:13:55.120 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 09:13:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:13:55.123 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 09:13:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:13:55.124 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 09:13:55 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 09:13:55 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:13:55 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 09:13:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4620: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:13:55 np0005629333 jovial_ritchie[435924]: --> passed data devices: 0 physical, 3 LVM
Feb 25 09:13:55 np0005629333 jovial_ritchie[435924]: --> All data devices are unavailable
Feb 25 09:13:55 np0005629333 systemd[1]: libpod-04b89971cae6318e0dc163c114deeefdeffc7b5b77b33b1c425be8dbfd4b1ce7.scope: Deactivated successfully.
Feb 25 09:13:55 np0005629333 podman[435944]: 2026-02-25 14:13:55.63908549 +0000 UTC m=+0.025610417 container died 04b89971cae6318e0dc163c114deeefdeffc7b5b77b33b1c425be8dbfd4b1ce7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Feb 25 09:13:55 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e3e031c443429aee2ca3882b158941b8f2e91ed2cf1aad784840b9207d6997ee-merged.mount: Deactivated successfully.
Feb 25 09:13:55 np0005629333 podman[435944]: 2026-02-25 14:13:55.677222381 +0000 UTC m=+0.063747308 container remove 04b89971cae6318e0dc163c114deeefdeffc7b5b77b33b1c425be8dbfd4b1ce7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_ritchie, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Feb 25 09:13:55 np0005629333 systemd[1]: libpod-conmon-04b89971cae6318e0dc163c114deeefdeffc7b5b77b33b1c425be8dbfd4b1ce7.scope: Deactivated successfully.
Feb 25 09:13:55 np0005629333 nova_compute[244014]: 2026-02-25 14:13:55.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:13:55 np0005629333 nova_compute[244014]: 2026-02-25 14:13:55.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 09:13:55 np0005629333 podman[435983]: 2026-02-25 14:13:55.923786757 +0000 UTC m=+0.104866262 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 09:13:55 np0005629333 podman[435984]: 2026-02-25 14:13:55.923971792 +0000 UTC m=+0.106339984 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Feb 25 09:13:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:13:56 np0005629333 podman[436063]: 2026-02-25 14:13:56.208631948 +0000 UTC m=+0.066562657 container create ec35332548b28bff46eefb9660da194ea9bea6f3e3bdf19502946e6bbb49792a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:13:56 np0005629333 systemd[1]: Started libpod-conmon-ec35332548b28bff46eefb9660da194ea9bea6f3e3bdf19502946e6bbb49792a.scope.
Feb 25 09:13:56 np0005629333 podman[436063]: 2026-02-25 14:13:56.181919931 +0000 UTC m=+0.039850700 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:13:56 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:13:56 np0005629333 podman[436063]: 2026-02-25 14:13:56.2997454 +0000 UTC m=+0.157676089 container init ec35332548b28bff46eefb9660da194ea9bea6f3e3bdf19502946e6bbb49792a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:13:56 np0005629333 podman[436063]: 2026-02-25 14:13:56.307786128 +0000 UTC m=+0.165716797 container start ec35332548b28bff46eefb9660da194ea9bea6f3e3bdf19502946e6bbb49792a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Feb 25 09:13:56 np0005629333 podman[436063]: 2026-02-25 14:13:56.310638389 +0000 UTC m=+0.168569088 container attach ec35332548b28bff46eefb9660da194ea9bea6f3e3bdf19502946e6bbb49792a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wiles, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:13:56 np0005629333 stoic_wiles[436079]: 167 167
Feb 25 09:13:56 np0005629333 systemd[1]: libpod-ec35332548b28bff46eefb9660da194ea9bea6f3e3bdf19502946e6bbb49792a.scope: Deactivated successfully.
Feb 25 09:13:56 np0005629333 podman[436063]: 2026-02-25 14:13:56.31634502 +0000 UTC m=+0.174275719 container died ec35332548b28bff46eefb9660da194ea9bea6f3e3bdf19502946e6bbb49792a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wiles, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:13:56 np0005629333 systemd[1]: var-lib-containers-storage-overlay-39b768e91db09a002aab99065c9c4844bfc92ccbeeef6896586932ee31e1b9bb-merged.mount: Deactivated successfully.
Feb 25 09:13:56 np0005629333 podman[436063]: 2026-02-25 14:13:56.355644944 +0000 UTC m=+0.213575613 container remove ec35332548b28bff46eefb9660da194ea9bea6f3e3bdf19502946e6bbb49792a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_wiles, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:13:56 np0005629333 systemd[1]: libpod-conmon-ec35332548b28bff46eefb9660da194ea9bea6f3e3bdf19502946e6bbb49792a.scope: Deactivated successfully.
Feb 25 09:13:56 np0005629333 podman[436103]: 2026-02-25 14:13:56.506311623 +0000 UTC m=+0.039483500 container create f21a11a04ab204765ebd18fc842a74c815945b60a7068c2d4842053e1607de47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_jemison, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Feb 25 09:13:56 np0005629333 systemd[1]: Started libpod-conmon-f21a11a04ab204765ebd18fc842a74c815945b60a7068c2d4842053e1607de47.scope.
Feb 25 09:13:56 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:13:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f78250d3f837c45103fd056fc42ede6d93c6ada4e37790a686bdac73ff2f3387/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:13:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f78250d3f837c45103fd056fc42ede6d93c6ada4e37790a686bdac73ff2f3387/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:13:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f78250d3f837c45103fd056fc42ede6d93c6ada4e37790a686bdac73ff2f3387/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:13:56 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f78250d3f837c45103fd056fc42ede6d93c6ada4e37790a686bdac73ff2f3387/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:13:56 np0005629333 podman[436103]: 2026-02-25 14:13:56.489745384 +0000 UTC m=+0.022917301 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:13:56 np0005629333 podman[436103]: 2026-02-25 14:13:56.603018693 +0000 UTC m=+0.136190650 container init f21a11a04ab204765ebd18fc842a74c815945b60a7068c2d4842053e1607de47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 09:13:56 np0005629333 podman[436103]: 2026-02-25 14:13:56.612151372 +0000 UTC m=+0.145323259 container start f21a11a04ab204765ebd18fc842a74c815945b60a7068c2d4842053e1607de47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_jemison, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:13:56 np0005629333 podman[436103]: 2026-02-25 14:13:56.616796964 +0000 UTC m=+0.149968831 container attach f21a11a04ab204765ebd18fc842a74c815945b60a7068c2d4842053e1607de47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:13:56 np0005629333 nova_compute[244014]: 2026-02-25 14:13:56.874 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]: {
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:    "0": [
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:        {
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "devices": [
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "/dev/loop3"
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            ],
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "lv_name": "ceph_lv0",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "lv_size": "21470642176",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "name": "ceph_lv0",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "tags": {
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.cluster_name": "ceph",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.crush_device_class": "",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.encrypted": "0",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.objectstore": "bluestore",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.osd_id": "0",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.type": "block",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.vdo": "0",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.with_tpm": "0"
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            },
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "type": "block",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "vg_name": "ceph_vg0"
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:        }
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:    ],
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:    "1": [
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:        {
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "devices": [
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "/dev/loop4"
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            ],
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "lv_name": "ceph_lv1",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "lv_size": "21470642176",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "name": "ceph_lv1",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "tags": {
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.cluster_name": "ceph",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.crush_device_class": "",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.encrypted": "0",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.objectstore": "bluestore",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.osd_id": "1",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.type": "block",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.vdo": "0",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.with_tpm": "0"
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            },
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "type": "block",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "vg_name": "ceph_vg1"
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:        }
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:    ],
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:    "2": [
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:        {
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "devices": [
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "/dev/loop5"
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            ],
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "lv_name": "ceph_lv2",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "lv_size": "21470642176",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "name": "ceph_lv2",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "tags": {
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.cluster_name": "ceph",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.crush_device_class": "",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.encrypted": "0",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.objectstore": "bluestore",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.osd_id": "2",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.type": "block",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.vdo": "0",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:                "ceph.with_tpm": "0"
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            },
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "type": "block",
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:            "vg_name": "ceph_vg2"
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:        }
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]:    ]
Feb 25 09:13:56 np0005629333 intelligent_jemison[436120]: }
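
[annotation] The JSON block above is the output of `ceph-volume lvm list --format json`, relayed through the short-lived intelligent_jemison container: the top-level keys "0", "1", "2" are OSD ids, and each entry describes the logical volume backing that OSD's bluestore block device, with the ceph.* metadata stored twice (flattened in lv_tags, parsed in tags). A minimal sketch of extracting the osd_id-to-device mapping from that output, assuming a host where ceph-volume is installed (command and field names are exactly those shown above):

    import json, subprocess

    # Map each OSD id to its LV path, backing device(s) and osd_fsid, using
    # the same fields that appear in the logged JSON above.
    out = subprocess.check_output(
        ["ceph-volume", "lvm", "list", "--format", "json"])
    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["devices"],
                  lv["tags"]["ceph.osd_fsid"])
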
Feb 25 09:13:56 np0005629333 systemd[1]: libpod-f21a11a04ab204765ebd18fc842a74c815945b60a7068c2d4842053e1607de47.scope: Deactivated successfully.
Feb 25 09:13:56 np0005629333 podman[436103]: 2026-02-25 14:13:56.952013682 +0000 UTC m=+0.485185579 container died f21a11a04ab204765ebd18fc842a74c815945b60a7068c2d4842053e1607de47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_jemison, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:13:56 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f78250d3f837c45103fd056fc42ede6d93c6ada4e37790a686bdac73ff2f3387-merged.mount: Deactivated successfully.
Feb 25 09:13:57 np0005629333 podman[436103]: 2026-02-25 14:13:57.001532165 +0000 UTC m=+0.534704032 container remove f21a11a04ab204765ebd18fc842a74c815945b60a7068c2d4842053e1607de47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_jemison, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:13:57 np0005629333 systemd[1]: libpod-conmon-f21a11a04ab204765ebd18fc842a74c815945b60a7068c2d4842053e1607de47.scope: Deactivated successfully.
Feb 25 09:13:57 np0005629333 podman[436203]: 2026-02-25 14:13:57.489810651 +0000 UTC m=+0.045772968 container create d9064e176c5948e401d4e04db09ebaa2bfd231ee0ef556bc4e174cc517ca6b04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mayer, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:13:57 np0005629333 systemd[1]: Started libpod-conmon-d9064e176c5948e401d4e04db09ebaa2bfd231ee0ef556bc4e174cc517ca6b04.scope.
Feb 25 09:13:57 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:13:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4621: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:13:57 np0005629333 podman[436203]: 2026-02-25 14:13:57.562334566 +0000 UTC m=+0.118296953 container init d9064e176c5948e401d4e04db09ebaa2bfd231ee0ef556bc4e174cc517ca6b04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mayer, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Feb 25 09:13:57 np0005629333 podman[436203]: 2026-02-25 14:13:57.476097293 +0000 UTC m=+0.032059630 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:13:57 np0005629333 podman[436203]: 2026-02-25 14:13:57.571935098 +0000 UTC m=+0.127897455 container start d9064e176c5948e401d4e04db09ebaa2bfd231ee0ef556bc4e174cc517ca6b04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mayer, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Feb 25 09:13:57 np0005629333 podman[436203]: 2026-02-25 14:13:57.575601662 +0000 UTC m=+0.131564069 container attach d9064e176c5948e401d4e04db09ebaa2bfd231ee0ef556bc4e174cc517ca6b04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:13:57 np0005629333 brave_mayer[436219]: 167 167
Feb 25 09:13:57 np0005629333 systemd[1]: libpod-d9064e176c5948e401d4e04db09ebaa2bfd231ee0ef556bc4e174cc517ca6b04.scope: Deactivated successfully.
Feb 25 09:13:57 np0005629333 podman[436203]: 2026-02-25 14:13:57.578825143 +0000 UTC m=+0.134787500 container died d9064e176c5948e401d4e04db09ebaa2bfd231ee0ef556bc4e174cc517ca6b04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mayer, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:13:57 np0005629333 systemd[1]: var-lib-containers-storage-overlay-f4f0340b08b862b97346ac964a3d6b5fb6fc8d5da19bd7ca90cabd16ebaf10e7-merged.mount: Deactivated successfully.
Feb 25 09:13:57 np0005629333 podman[436203]: 2026-02-25 14:13:57.621519733 +0000 UTC m=+0.177482080 container remove d9064e176c5948e401d4e04db09ebaa2bfd231ee0ef556bc4e174cc517ca6b04 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mayer, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:13:57 np0005629333 systemd[1]: libpod-conmon-d9064e176c5948e401d4e04db09ebaa2bfd231ee0ef556bc4e174cc517ca6b04.scope: Deactivated successfully.
Feb 25 09:13:57 np0005629333 podman[436245]: 2026-02-25 14:13:57.788570467 +0000 UTC m=+0.041423605 container create 91fb9609a34202e4ce5df4a7193aba087cf2a3454692354050b68407523ad74a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bose, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:13:57 np0005629333 systemd[1]: Started libpod-conmon-91fb9609a34202e4ce5df4a7193aba087cf2a3454692354050b68407523ad74a.scope.
Feb 25 09:13:57 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:13:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/777a085c202cc0d0a47654f29d6370eb41d3c918b548c624239ee10afca47a8d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:13:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/777a085c202cc0d0a47654f29d6370eb41d3c918b548c624239ee10afca47a8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:13:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/777a085c202cc0d0a47654f29d6370eb41d3c918b548c624239ee10afca47a8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:13:57 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/777a085c202cc0d0a47654f29d6370eb41d3c918b548c624239ee10afca47a8d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:13:57 np0005629333 podman[436245]: 2026-02-25 14:13:57.770310069 +0000 UTC m=+0.023163207 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:13:57 np0005629333 podman[436245]: 2026-02-25 14:13:57.873797162 +0000 UTC m=+0.126650280 container init 91fb9609a34202e4ce5df4a7193aba087cf2a3454692354050b68407523ad74a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True)
Feb 25 09:13:57 np0005629333 podman[436245]: 2026-02-25 14:13:57.881347086 +0000 UTC m=+0.134200204 container start 91fb9609a34202e4ce5df4a7193aba087cf2a3454692354050b68407523ad74a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bose, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:13:57 np0005629333 podman[436245]: 2026-02-25 14:13:57.884814794 +0000 UTC m=+0.137667972 container attach 91fb9609a34202e4ce5df4a7193aba087cf2a3454692354050b68407523ad74a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bose, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:13:58 np0005629333 nova_compute[244014]: 2026-02-25 14:13:58.015 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:13:58 np0005629333 lvm[436340]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 09:13:58 np0005629333 lvm[436337]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 09:13:58 np0005629333 lvm[436337]: VG ceph_vg0 finished
Feb 25 09:13:58 np0005629333 lvm[436340]: VG ceph_vg1 finished
Feb 25 09:13:58 np0005629333 lvm[436342]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 09:13:58 np0005629333 lvm[436342]: VG ceph_vg2 finished
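
[annotation] These lvm[...] messages are event-driven autoactivation: as udev reports each loop PV online, LVM marks the owning VG complete and activates it, one VG per OSD (ceph_vg0-2 on /dev/loop3-5). The same ceph.* tags recorded in the JSON above can be read back from LVM itself; a sketch, assuming standard lvs JSON reporting:

    import json, subprocess

    # Report every LV's name, VG and tags as JSON; the Ceph OSD LVs are the
    # ones whose lv_tags carry ceph.osd_id / ceph.osd_fsid entries.
    out = subprocess.check_output(
        ["lvs", "--reportformat", "json", "-o", "lv_name,vg_name,lv_tags"])
    for lv in json.loads(out)["report"][0]["lv"]:
        if "ceph.osd_id" in lv["lv_tags"]:
            print(lv["vg_name"], lv["lv_name"], lv["lv_tags"])
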
Feb 25 09:13:58 np0005629333 strange_bose[436261]: {}
Feb 25 09:13:58 np0005629333 podman[436245]: 2026-02-25 14:13:58.63497873 +0000 UTC m=+0.887831888 container died 91fb9609a34202e4ce5df4a7193aba087cf2a3454692354050b68407523ad74a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:13:58 np0005629333 systemd[1]: libpod-91fb9609a34202e4ce5df4a7193aba087cf2a3454692354050b68407523ad74a.scope: Deactivated successfully.
Feb 25 09:13:58 np0005629333 systemd[1]: libpod-91fb9609a34202e4ce5df4a7193aba087cf2a3454692354050b68407523ad74a.scope: Consumed 1.003s CPU time.
Feb 25 09:13:58 np0005629333 systemd[1]: var-lib-containers-storage-overlay-777a085c202cc0d0a47654f29d6370eb41d3c918b548c624239ee10afca47a8d-merged.mount: Deactivated successfully.
Feb 25 09:13:58 np0005629333 podman[436245]: 2026-02-25 14:13:58.678486922 +0000 UTC m=+0.931340040 container remove 91fb9609a34202e4ce5df4a7193aba087cf2a3454692354050b68407523ad74a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_bose, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Feb 25 09:13:58 np0005629333 systemd[1]: libpod-conmon-91fb9609a34202e4ce5df4a7193aba087cf2a3454692354050b68407523ad74a.scope: Deactivated successfully.
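
[annotation] The repeated create → init → start → attach → died → remove sequences (intelligent_jemison, brave_mayer, strange_bose) are not crashes: cephadm launches throwaway containers from the ceph image to probe the host, and each exits as soon as it has printed its result ("167 167" is the ceph uid/gid, "{}" an empty query result, and the JSON block earlier the OSD inventory). A sketch for watching that churn live, assuming podman's standard events filters and JSON key names:

    import json, subprocess

    # Stream podman events as one JSON object per line and print each
    # container's terminal events; interrupt with Ctrl-C.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json",
         "--filter", "event=died", "--filter", "event=remove"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Status"), ev.get("Name"), str(ev.get("ID", ""))[:12])
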
Feb 25 09:13:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 09:13:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:13:58 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 09:13:58 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:13:59 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:13:59 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:13:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4622: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:00 np0005629333 nova_compute[244014]: 2026-02-25 14:14:00.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:14:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:14:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4623: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:14:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:14:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:14:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:14:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:14:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:14:02 np0005629333 nova_compute[244014]: 2026-02-25 14:14:02.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:14:03 np0005629333 nova_compute[244014]: 2026-02-25 14:14:03.017 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 09:14:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4624: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:03 np0005629333 nova_compute[244014]: 2026-02-25 14:14:03.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:14:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4625: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:14:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4626: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:08 np0005629333 nova_compute[244014]: 2026-02-25 14:14:08.018 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 09:14:08 np0005629333 nova_compute[244014]: 2026-02-25 14:14:08.020 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 09:14:08 np0005629333 nova_compute[244014]: 2026-02-25 14:14:08.021 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 25 09:14:08 np0005629333 nova_compute[244014]: 2026-02-25 14:14:08.021 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 25 09:14:08 np0005629333 nova_compute[244014]: 2026-02-25 14:14:08.056 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:14:08 np0005629333 nova_compute[244014]: 2026-02-25 14:14:08.057 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
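
[annotation] The five-second cadence in the ovsdbapp lines is the OVSDB client's inactivity probe: after ~5000 ms without traffic on tcp:127.0.0.1:6640 the connection "enters IDLE" and an echo probe is sent; the server's reply (the [POLLIN] wakeup) returns it to ACTIVE, and the cycle repeats for as long as the session is otherwise quiet. A schematic model of that cycle, assuming the default 5 s probe interval (an illustration, not the python-ovs API):

    import time

    PROBE_INTERVAL = 5.0  # seconds; matches the ~5000 ms gaps logged above

    class ProbeCycle:
        """Toy ACTIVE/IDLE probe cycle as seen in the ovsdbapp log lines."""
        def __init__(self):
            self.state = "ACTIVE"
            self.last_activity = time.monotonic()

        def on_message(self):
            # Any received message (e.g. the probe reply) re-arms the timer.
            self.state = "ACTIVE"
            self.last_activity = time.monotonic()

        def tick(self):
            idle = time.monotonic() - self.last_activity
            if self.state == "ACTIVE" and idle >= PROBE_INTERVAL:
                self.state = "IDLE"      # probe sent, awaiting a reply
                return "send-probe"
            if self.state == "IDLE" and idle >= 2 * PROBE_INTERVAL:
                return "disconnect"      # no reply within another interval
            return None
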
Feb 25 09:14:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4627: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:14:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4628: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:13 np0005629333 nova_compute[244014]: 2026-02-25 14:14:13.057 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 09:14:13 np0005629333 nova_compute[244014]: 2026-02-25 14:14:13.059 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 09:14:13 np0005629333 nova_compute[244014]: 2026-02-25 14:14:13.060 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 25 09:14:13 np0005629333 nova_compute[244014]: 2026-02-25 14:14:13.060 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 25 09:14:13 np0005629333 nova_compute[244014]: 2026-02-25 14:14:13.061 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:14:13 np0005629333 nova_compute[244014]: 2026-02-25 14:14:13.062 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 25 09:14:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4629: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4630: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:14:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4631: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:18 np0005629333 nova_compute[244014]: 2026-02-25 14:14:18.063 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 09:14:18 np0005629333 nova_compute[244014]: 2026-02-25 14:14:18.065 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 09:14:18 np0005629333 nova_compute[244014]: 2026-02-25 14:14:18.066 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 25 09:14:18 np0005629333 nova_compute[244014]: 2026-02-25 14:14:18.066 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 25 09:14:18 np0005629333 nova_compute[244014]: 2026-02-25 14:14:18.119 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:14:18 np0005629333 nova_compute[244014]: 2026-02-25 14:14:18.119 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 25 09:14:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4632: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:14:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4633: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:23 np0005629333 nova_compute[244014]: 2026-02-25 14:14:23.121 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 09:14:23 np0005629333 nova_compute[244014]: 2026-02-25 14:14:23.123 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 09:14:23 np0005629333 nova_compute[244014]: 2026-02-25 14:14:23.123 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 25 09:14:23 np0005629333 nova_compute[244014]: 2026-02-25 14:14:23.123 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 25 09:14:23 np0005629333 nova_compute[244014]: 2026-02-25 14:14:23.170 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:14:23 np0005629333 nova_compute[244014]: 2026-02-25 14:14:23.171 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 25 09:14:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4634: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4635: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:14:26 np0005629333 podman[436382]: 2026-02-25 14:14:26.769884957 +0000 UTC m=+0.108525366 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Feb 25 09:14:26 np0005629333 podman[436383]: 2026-02-25 14:14:26.773442767 +0000 UTC m=+0.107136676 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 25 09:14:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4636: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:28 np0005629333 nova_compute[244014]: 2026-02-25 14:14:28.172 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:14:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4637: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:14:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:14:31
Feb 25 09:14:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 09:14:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 09:14:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['.rgw.root', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', 'backups', 'default.rgw.control', 'vms', '.mgr', 'default.rgw.log']
Feb 25 09:14:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
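
[annotation] This balancer pass ran in upmap mode with the default 5% max-misplaced budget and found nothing to improve ("prepared 0/10 upmap changes"), the expected outcome while all 305 PGs sit active+clean on three evenly sized OSDs. The same state is available on demand; a sketch using the standard balancer CLI:

    import subprocess

    # Show the balancer's mode/plans and score the current PG distribution.
    for cmd in (["ceph", "balancer", "status"],
                ["ceph", "balancer", "eval"]):
        print(subprocess.check_output(cmd, text=True))
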
Feb 25 09:14:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4638: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:14:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:14:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:14:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:14:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:14:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:14:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 09:14:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:14:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 09:14:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:14:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:14:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:14:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:14:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:14:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 09:14:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
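
[annotation] The doubled load_schedules lines come from the rbd_support module's TrashPurgeScheduleHandler and MirrorSnapshotScheduleHandler each rescanning the vms, volumes, backups and images pools on their periodic refresh; the empty start_after= just means each scan begins from the start of the pool. Listing whatever schedules are configured, as a sketch (the --recursive flag, walking all pool/namespace/image levels, is assumed from the standard rbd CLI):

    import subprocess

    # Print any mirror-snapshot and trash-purge schedules across all levels.
    for cmd in (["rbd", "mirror", "snapshot", "schedule", "ls", "--recursive"],
                ["rbd", "trash", "purge", "schedule", "ls", "--recursive"]):
        print(" ".join(cmd))
        print(subprocess.run(cmd, capture_output=True, text=True).stdout)
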
Feb 25 09:14:33 np0005629333 nova_compute[244014]: 2026-02-25 14:14:33.174 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 09:14:33 np0005629333 nova_compute[244014]: 2026-02-25 14:14:33.176 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 09:14:33 np0005629333 nova_compute[244014]: 2026-02-25 14:14:33.177 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 25 09:14:33 np0005629333 nova_compute[244014]: 2026-02-25 14:14:33.177 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 25 09:14:33 np0005629333 nova_compute[244014]: 2026-02-25 14:14:33.206 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:14:33 np0005629333 nova_compute[244014]: 2026-02-25 14:14:33.207 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 25 09:14:33 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4639: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:33 np0005629333 nova_compute[244014]: 2026-02-25 14:14:33.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:14:35 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4640: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:36 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:14:37 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4641: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:38 np0005629333 nova_compute[244014]: 2026-02-25 14:14:38.207 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 09:14:38 np0005629333 nova_compute[244014]: 2026-02-25 14:14:38.209 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Feb 25 09:14:38 np0005629333 nova_compute[244014]: 2026-02-25 14:14:38.210 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Feb 25 09:14:38 np0005629333 nova_compute[244014]: 2026-02-25 14:14:38.210 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 25 09:14:38 np0005629333 nova_compute[244014]: 2026-02-25 14:14:38.257 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Feb 25 09:14:38 np0005629333 nova_compute[244014]: 2026-02-25 14:14:38.258 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Feb 25 09:14:39 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4642: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:40 np0005629333 nova_compute[244014]: 2026-02-25 14:14:40.890 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Feb 25 09:14:40 np0005629333 nova_compute[244014]: 2026-02-25 14:14:40.926 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 09:14:40 np0005629333 nova_compute[244014]: 2026-02-25 14:14:40.927 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 09:14:40 np0005629333 nova_compute[244014]: 2026-02-25 14:14:40.927 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Feb 25 09:14:40 np0005629333 nova_compute[244014]: 2026-02-25 14:14:40.927 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Feb 25 09:14:40 np0005629333 nova_compute[244014]: 2026-02-25 14:14:40.927 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 09:14:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:14:41 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:14:41 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/730122595' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:14:41 np0005629333 nova_compute[244014]: 2026-02-25 14:14:41.447 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 09:14:41 np0005629333 nova_compute[244014]: 2026-02-25 14:14:41.572 244018 WARNING nova.virt.libvirt.driver [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Feb 25 09:14:41 np0005629333 nova_compute[244014]: 2026-02-25 14:14:41.573 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3524MB free_disk=59.98723818361759GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Feb 25 09:14:41 np0005629333 nova_compute[244014]: 2026-02-25 14:14:41.573 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Feb 25 09:14:41 np0005629333 nova_compute[244014]: 2026-02-25 14:14:41.573 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Feb 25 09:14:41 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4643: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:41 np0005629333 nova_compute[244014]: 2026-02-25 14:14:41.631 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Feb 25 09:14:41 np0005629333 nova_compute[244014]: 2026-02-25 14:14:41.631 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Feb 25 09:14:41 np0005629333 nova_compute[244014]: 2026-02-25 14:14:41.741 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Feb 25 09:14:42 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 25 09:14:42 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4176816103' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 25 09:14:42 np0005629333 nova_compute[244014]: 2026-02-25 14:14:42.248 244018 DEBUG oslo_concurrency.processutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Feb 25 09:14:42 np0005629333 nova_compute[244014]: 2026-02-25 14:14:42.255 244018 DEBUG nova.compute.provider_tree [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed in ProviderTree for provider: cb4dae98-2ac3-4218-9445-2320139e12ad update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Feb 25 09:14:42 np0005629333 nova_compute[244014]: 2026-02-25 14:14:42.272 244018 DEBUG nova.scheduler.client.report [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Inventory has not changed for provider cb4dae98-2ac3-4218-9445-2320139e12ad based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
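
[annotation] The inventory dictionary above is what the resource tracker hands to Placement, and schedulable capacity follows the usual Placement formula (total - reserved) × allocation_ratio per resource class. Worked through with the logged values, as a sketch:

    # Effective capacity per resource class, from the inventory logged above.
    def effective(total, reserved, allocation_ratio, **_):
        return (total - reserved) * allocation_ratio

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, effective(**inv))  # 32.0 vCPUs, 7167.0 MB, 52.2 GB
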
Feb 25 09:14:42 np0005629333 nova_compute[244014]: 2026-02-25 14:14:42.276 244018 DEBUG nova.compute.resource_tracker [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 25 09:14:42 np0005629333 nova_compute[244014]: 2026-02-25 14:14:42.276 244018 DEBUG oslo_concurrency.lockutils [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
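
The acquire/release pair bracketing this update (waited 0.000s, held 0.703s) is oslo.concurrency's named-lock pattern. A minimal sketch, assuming an in-process (non-external) lock, which the instant acquisition in the log suggests:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        # Runs with the named lock held, so concurrent periodic tasks
        # serialize exactly as the acquire/release pairs in the log show.
        ...

The same lock can also be taken inline with `with lockutils.lock('compute_resources'): ...`.
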
Feb 25 09:14:43 np0005629333 nova_compute[244014]: 2026-02-25 14:14:43.259 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:14:43 np0005629333 nova_compute[244014]: 2026-02-25 14:14:43.262 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:14:43 np0005629333 nova_compute[244014]: 2026-02-25 14:14:43.262 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 09:14:43 np0005629333 nova_compute[244014]: 2026-02-25 14:14:43.263 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 09:14:43 np0005629333 nova_compute[244014]: 2026-02-25 14:14:43.311 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:14:43 np0005629333 nova_compute[244014]: 2026-02-25 14:14:43.312 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
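
These six vlog lines trace the OVS reconnect state machine for the local ovsdb-server connection: roughly five seconds of silence triggers an inactivity probe and a transition to IDLE, and any readable data (the POLLIN wakeup) returns the session to ACTIVE. A toy model of that loop, with the 5 s interval taken from the log (see ovs/reconnect.py for the real FSM):

    import time

    PROBE_INTERVAL = 5.0  # matches the ~5000 ms idle timeouts logged above

    class Reconnect:
        """Illustrative ACTIVE/IDLE probing only, not the full OVS FSM."""
        def __init__(self):
            self.state, self.last_rx = 'ACTIVE', time.monotonic()

        def tick(self, send_probe):
            # Called on timer expiry: probe once, then wait in IDLE.
            if self.state == 'ACTIVE' and time.monotonic() - self.last_rx >= PROBE_INTERVAL:
                send_probe()          # "sending inactivity probe"
                self.state = 'IDLE'   # "entering IDLE"

        def received(self):
            # Any inbound traffic (POLLIN) proves liveness.
            self.last_rx, self.state = time.monotonic(), 'ACTIVE'
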
Feb 25 09:14:43 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4644: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] _maybe_adjust
Feb 25 09:14:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:14:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Feb 25 09:14:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:14:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.73878357684759e-05 of space, bias 1.0, pg target 0.005216350730542769 quantized to 32 (current 32)
Feb 25 09:14:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:14:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:14:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:14:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:14:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:14:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006714637386478266 of space, bias 1.0, pg target 0.20143912159434796 quantized to 32 (current 32)
Feb 25 09:14:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:14:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3916366864300228e-06 of space, bias 4.0, pg target 0.0016699640237160273 quantized to 16 (current 16)
Feb 25 09:14:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:14:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:14:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:14:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Feb 25 09:14:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:14:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Feb 25 09:14:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:14:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 25 09:14:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Feb 25 09:14:44 np0005629333 ceph-mgr[76641]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
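
The autoscaler numbers above are internally consistent: the capacity in each effective_target_ratio line (64411926528 bytes) is exactly three 21470642176-byte OSDs, and with the default mon_target_pg_per_osd of 100 the PG budget is 300, so each pool's raw target is space_ratio * budget * bias. Checking two of the lines:

    # pool 'images' (bias 1.0):
    0.0006714637386478266 * 300 * 1.0   # = 0.201439121594...  (matches the log)
    # pool 'cephfs.cephfs.meta' (bias 4.0):
    1.3916366864300228e-06 * 300 * 4.0  # = 0.001669964023...  (matches the log)

The fractional target is then snapped to a power-of-two pg_num, and the autoscaler only changes pg_num when the result differs from the current value by a large factor (a 3x band by default, worth confirming against the mgr/pg_autoscaler module), which is why these near-zero targets leave the pools at their current 16 or 32 PGs.
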
Feb 25 09:14:44 np0005629333 nova_compute[244014]: 2026-02-25 14:14:44.264 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:14:45 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4645: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:45 np0005629333 nova_compute[244014]: 2026-02-25 14:14:45.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:14:45 np0005629333 nova_compute[244014]: 2026-02-25 14:14:45.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 25 09:14:45 np0005629333 nova_compute[244014]: 2026-02-25 14:14:45.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 25 09:14:45 np0005629333 nova_compute[244014]: 2026-02-25 14:14:45.893 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
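
Each "Running periodic task ComputeManager._*" line in this capture comes from oslo.service's periodic-task machinery: methods decorated on a manager class are collected and driven by run_periodic_tasks() from a timer loop. A compact sketch of the pattern (the spacing value is illustrative, not taken from this deployment's config):

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)  # seconds between runs
        def _heal_instance_info_cache(self, context):
            # Rebuild the list of instances to heal, as logged above;
            # with no instances on the host this is a quick no-op.
            pass

    # Driven from a timer loop, e.g.:
    #   mgr.run_periodic_tasks(context, raise_on_error=False)
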
Feb 25 09:14:46 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:14:46 np0005629333 nova_compute[244014]: 2026-02-25 14:14:46.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:14:47 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4646: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 25 09:14:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2721512744' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 25 09:14:47 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 25 09:14:47 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2721512744' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 25 09:14:48 np0005629333 nova_compute[244014]: 2026-02-25 14:14:48.313 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:14:48 np0005629333 nova_compute[244014]: 2026-02-25 14:14:48.315 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:14:49 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4647: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:51 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:14:51 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4648: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:53 np0005629333 nova_compute[244014]: 2026-02-25 14:14:53.315 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:14:53 np0005629333 nova_compute[244014]: 2026-02-25 14:14:53.317 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:14:53 np0005629333 nova_compute[244014]: 2026-02-25 14:14:53.318 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 09:14:53 np0005629333 nova_compute[244014]: 2026-02-25 14:14:53.318 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 09:14:53 np0005629333 nova_compute[244014]: 2026-02-25 14:14:53.344 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:14:53 np0005629333 nova_compute[244014]: 2026-02-25 14:14:53.345 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 09:14:53 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4649: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:53 np0005629333 nova_compute[244014]: 2026-02-25 14:14:53.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:14:53 np0005629333 nova_compute[244014]: 2026-02-25 14:14:53.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 25 09:14:53 np0005629333 nova_compute[244014]: 2026-02-25 14:14:53.899 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 25 09:14:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:14:55.121 157129 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 25 09:14:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:14:55.122 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 25 09:14:55 np0005629333 ovn_metadata_agent[157124]: 2026-02-25 14:14:55.122 157129 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 25 09:14:55 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4650: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:56 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:14:57 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4651: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:57 np0005629333 podman[436472]: 2026-02-25 14:14:57.754528683 +0000 UTC m=+0.086769679 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Feb 25 09:14:57 np0005629333 podman[436473]: 2026-02-25 14:14:57.790429921 +0000 UTC m=+0.121596377 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.43.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true)
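The two health_status=healthy events above are podman's healthcheck timer firing for the agent containers; per the config_data, each runs the configured test command /openstack/healthcheck inside the container. The same probe can be triggered on demand, where exit status 0 means healthy:

    import subprocess

    # On-demand healthcheck, equivalent to what the scheduled timer runs.
    for name in ('ovn_metadata_agent', 'ovn_controller'):
        rc = subprocess.run(['podman', 'healthcheck', 'run', name]).returncode
        print(name, 'healthy' if rc == 0 else 'unhealthy (rc=%d)' % rc)
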
Feb 25 09:14:57 np0005629333 nova_compute[244014]: 2026-02-25 14:14:57.899 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:14:57 np0005629333 nova_compute[244014]: 2026-02-25 14:14:57.900 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 25 09:14:58 np0005629333 nova_compute[244014]: 2026-02-25 14:14:58.346 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:14:58 np0005629333 nova_compute[244014]: 2026-02-25 14:14:58.873 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:14:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 09:14:59 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 09:14:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 25 09:14:59 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 09:14:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 25 09:14:59 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:14:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 25 09:14:59 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 25 09:14:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Feb 25 09:14:59 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 09:14:59 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 09:14:59 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 09:14:59 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4652: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:14:59 np0005629333 podman[436658]: 2026-02-25 14:14:59.950601158 +0000 UTC m=+0.068769559 container create ea83a98313a2dfa55d2f16c08e43bce2c0736889d817f7b74df6706f3e2d361b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Feb 25 09:14:59 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 25 09:14:59 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:14:59 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Feb 25 09:14:59 np0005629333 systemd[1]: Started libpod-conmon-ea83a98313a2dfa55d2f16c08e43bce2c0736889d817f7b74df6706f3e2d361b.scope.
Feb 25 09:15:00 np0005629333 podman[436658]: 2026-02-25 14:14:59.917542771 +0000 UTC m=+0.035711212 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:15:00 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:15:00 np0005629333 podman[436658]: 2026-02-25 14:15:00.058151386 +0000 UTC m=+0.176319797 container init ea83a98313a2dfa55d2f16c08e43bce2c0736889d817f7b74df6706f3e2d361b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:15:00 np0005629333 podman[436658]: 2026-02-25 14:15:00.065480563 +0000 UTC m=+0.183648944 container start ea83a98313a2dfa55d2f16c08e43bce2c0736889d817f7b74df6706f3e2d361b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_morse, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:15:00 np0005629333 festive_morse[436675]: 167 167
Feb 25 09:15:00 np0005629333 systemd[1]: libpod-ea83a98313a2dfa55d2f16c08e43bce2c0736889d817f7b74df6706f3e2d361b.scope: Deactivated successfully.
Feb 25 09:15:00 np0005629333 podman[436658]: 2026-02-25 14:15:00.207368894 +0000 UTC m=+0.325537295 container attach ea83a98313a2dfa55d2f16c08e43bce2c0736889d817f7b74df6706f3e2d361b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_morse, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Feb 25 09:15:00 np0005629333 podman[436658]: 2026-02-25 14:15:00.209083102 +0000 UTC m=+0.327251543 container died ea83a98313a2dfa55d2f16c08e43bce2c0736889d817f7b74df6706f3e2d361b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:15:00 np0005629333 systemd[1]: var-lib-containers-storage-overlay-a607218c823626690fb9373365c5dc8ec19f069dd923076690906ff935d470b9-merged.mount: Deactivated successfully.
Feb 25 09:15:00 np0005629333 podman[436658]: 2026-02-25 14:15:00.295441819 +0000 UTC m=+0.413610180 container remove ea83a98313a2dfa55d2f16c08e43bce2c0736889d817f7b74df6706f3e2d361b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 09:15:00 np0005629333 systemd[1]: libpod-conmon-ea83a98313a2dfa55d2f16c08e43bce2c0736889d817f7b74df6706f3e2d361b.scope: Deactivated successfully.
Feb 25 09:15:00 np0005629333 podman[436700]: 2026-02-25 14:15:00.44545759 +0000 UTC m=+0.024393662 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:15:00 np0005629333 podman[436700]: 2026-02-25 14:15:00.604481966 +0000 UTC m=+0.183417998 container create 953de1490fdb77022e8ac492300591de4b2038702d42307df448eefdb2f09ea7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_kepler, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:15:00 np0005629333 systemd[1]: Started libpod-conmon-953de1490fdb77022e8ac492300591de4b2038702d42307df448eefdb2f09ea7.scope.
Feb 25 09:15:00 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:15:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e34b74b075218499cc1037b05419c808d727208eb4194328a98343e512e8b071/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:15:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e34b74b075218499cc1037b05419c808d727208eb4194328a98343e512e8b071/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:15:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e34b74b075218499cc1037b05419c808d727208eb4194328a98343e512e8b071/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:15:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e34b74b075218499cc1037b05419c808d727208eb4194328a98343e512e8b071/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:15:00 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e34b74b075218499cc1037b05419c808d727208eb4194328a98343e512e8b071/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 25 09:15:00 np0005629333 podman[436700]: 2026-02-25 14:15:00.720753631 +0000 UTC m=+0.299689683 container init 953de1490fdb77022e8ac492300591de4b2038702d42307df448eefdb2f09ea7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:15:00 np0005629333 podman[436700]: 2026-02-25 14:15:00.729726615 +0000 UTC m=+0.308662637 container start 953de1490fdb77022e8ac492300591de4b2038702d42307df448eefdb2f09ea7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:15:00 np0005629333 podman[436700]: 2026-02-25 14:15:00.733725388 +0000 UTC m=+0.312661500 container attach 953de1490fdb77022e8ac492300591de4b2038702d42307df448eefdb2f09ea7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_kepler, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Feb 25 09:15:00 np0005629333 nova_compute[244014]: 2026-02-25 14:15:00.875 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:15:01 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:15:01 np0005629333 serene_kepler[436717]: --> passed data devices: 0 physical, 3 LVM
Feb 25 09:15:01 np0005629333 serene_kepler[436717]: --> All data devices are unavailable
Feb 25 09:15:01 np0005629333 systemd[1]: libpod-953de1490fdb77022e8ac492300591de4b2038702d42307df448eefdb2f09ea7.scope: Deactivated successfully.
Feb 25 09:15:01 np0005629333 podman[436700]: 2026-02-25 14:15:01.15164021 +0000 UTC m=+0.730576282 container died 953de1490fdb77022e8ac492300591de4b2038702d42307df448eefdb2f09ea7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_kepler, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Feb 25 09:15:01 np0005629333 systemd[1]: var-lib-containers-storage-overlay-e34b74b075218499cc1037b05419c808d727208eb4194328a98343e512e8b071-merged.mount: Deactivated successfully.
Feb 25 09:15:01 np0005629333 podman[436700]: 2026-02-25 14:15:01.206190446 +0000 UTC m=+0.785126518 container remove 953de1490fdb77022e8ac492300591de4b2038702d42307df448eefdb2f09ea7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_kepler, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Feb 25 09:15:01 np0005629333 systemd[1]: libpod-conmon-953de1490fdb77022e8ac492300591de4b2038702d42307df448eefdb2f09ea7.scope: Deactivated successfully.
Feb 25 09:15:01 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4653: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:15:01 np0005629333 podman[436813]: 2026-02-25 14:15:01.687790132 +0000 UTC m=+0.061640317 container create f2e2f665dcbed83c11ea0dafe85c0c70861617cb22b9872cdd7015f1e7ada391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Feb 25 09:15:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:15:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:15:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:15:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:15:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:15:01 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:15:01 np0005629333 systemd[1]: Started libpod-conmon-f2e2f665dcbed83c11ea0dafe85c0c70861617cb22b9872cdd7015f1e7ada391.scope.
Feb 25 09:15:01 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:15:01 np0005629333 podman[436813]: 2026-02-25 14:15:01.660370795 +0000 UTC m=+0.034221020 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:15:01 np0005629333 podman[436813]: 2026-02-25 14:15:01.766543634 +0000 UTC m=+0.140393829 container init f2e2f665dcbed83c11ea0dafe85c0c70861617cb22b9872cdd7015f1e7ada391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cerf, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Feb 25 09:15:01 np0005629333 podman[436813]: 2026-02-25 14:15:01.772455531 +0000 UTC m=+0.146305676 container start f2e2f665dcbed83c11ea0dafe85c0c70861617cb22b9872cdd7015f1e7ada391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cerf, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3)
Feb 25 09:15:01 np0005629333 cool_cerf[436830]: 167 167
Feb 25 09:15:01 np0005629333 podman[436813]: 2026-02-25 14:15:01.776735892 +0000 UTC m=+0.150586147 container attach f2e2f665dcbed83c11ea0dafe85c0c70861617cb22b9872cdd7015f1e7ada391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cerf, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:15:01 np0005629333 systemd[1]: libpod-f2e2f665dcbed83c11ea0dafe85c0c70861617cb22b9872cdd7015f1e7ada391.scope: Deactivated successfully.
Feb 25 09:15:01 np0005629333 podman[436813]: 2026-02-25 14:15:01.778083981 +0000 UTC m=+0.151934156 container died f2e2f665dcbed83c11ea0dafe85c0c70861617cb22b9872cdd7015f1e7ada391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 09:15:01 np0005629333 systemd[1]: var-lib-containers-storage-overlay-839454d28033263655fa5d431d7b6b38331cad0e539b894b58eb250230d8d9d9-merged.mount: Deactivated successfully.
Feb 25 09:15:01 np0005629333 podman[436813]: 2026-02-25 14:15:01.832211144 +0000 UTC m=+0.206061299 container remove f2e2f665dcbed83c11ea0dafe85c0c70861617cb22b9872cdd7015f1e7ada391 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_cerf, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Feb 25 09:15:01 np0005629333 systemd[1]: libpod-conmon-f2e2f665dcbed83c11ea0dafe85c0c70861617cb22b9872cdd7015f1e7ada391.scope: Deactivated successfully.
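
The next short-lived container prints what looks like `ceph-volume lvm list --format json` output (the command name is an assumption from the output shape): a map of OSD id to LV records carrying the ceph.* tags cephadm uses to re-associate devices with OSDs. A sketch of extracting that OSD-to-device mapping from the listing shown below:

    import json
    import subprocess

    # Assumes ceph-volume is on PATH (on this host it runs inside a
    # quay.io/ceph/ceph container, so prefix with the podman invocation).
    out = subprocess.run(
        ['ceph-volume', 'lvm', 'list', '--format', 'json'],
        capture_output=True, text=True, check=True).stdout
    for osd_id, lvs in sorted(json.loads(out).items()):
        for lv in lvs:
            print(osd_id, lv['lv_path'], lv['devices'],
                  lv['tags']['ceph.osd_fsid'])
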
Feb 25 09:15:02 np0005629333 podman[436854]: 2026-02-25 14:15:02.030320138 +0000 UTC m=+0.062095421 container create 6f03b245347d01b2095cb8308a32656229602d1a4cc221898334903b17ba50ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_turing, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:15:02 np0005629333 systemd[1]: Started libpod-conmon-6f03b245347d01b2095cb8308a32656229602d1a4cc221898334903b17ba50ec.scope.
Feb 25 09:15:02 np0005629333 podman[436854]: 2026-02-25 14:15:01.996975403 +0000 UTC m=+0.028750606 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:15:02 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:15:02 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9958aafbf95798740f9499bdf40a3fc185d7ce5884c077d4e1d54ace1c049d8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:15:02 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9958aafbf95798740f9499bdf40a3fc185d7ce5884c077d4e1d54ace1c049d8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:15:02 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9958aafbf95798740f9499bdf40a3fc185d7ce5884c077d4e1d54ace1c049d8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:15:02 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9958aafbf95798740f9499bdf40a3fc185d7ce5884c077d4e1d54ace1c049d8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 25 09:15:02 np0005629333 podman[436854]: 2026-02-25 14:15:02.133490621 +0000 UTC m=+0.165265804 container init 6f03b245347d01b2095cb8308a32656229602d1a4cc221898334903b17ba50ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 25 09:15:02 np0005629333 podman[436854]: 2026-02-25 14:15:02.142877697 +0000 UTC m=+0.174652880 container start 6f03b245347d01b2095cb8308a32656229602d1a4cc221898334903b17ba50ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_turing, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:15:02 np0005629333 podman[436854]: 2026-02-25 14:15:02.147663793 +0000 UTC m=+0.179438986 container attach 6f03b245347d01b2095cb8308a32656229602d1a4cc221898334903b17ba50ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_turing, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Feb 25 09:15:02 np0005629333 zealous_turing[436870]: {
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:    "0": [
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:        {
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "devices": [
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "/dev/loop3"
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            ],
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "lv_name": "ceph_lv0",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "lv_size": "21470642176",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=d19afe3c-7923-4776-bcc2-88886150b441,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "lv_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "name": "ceph_lv0",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "path": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "tags": {
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.block_uuid": "evqutu-u5Eg-fuBA-Wsue-e41T-ChuL-dHoLyW",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.cluster_name": "ceph",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.crush_device_class": "",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.encrypted": "0",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.objectstore": "bluestore",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.osd_fsid": "d19afe3c-7923-4776-bcc2-88886150b441",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.osd_id": "0",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.type": "block",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.vdo": "0",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.with_tpm": "0"
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            },
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "type": "block",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "vg_name": "ceph_vg0"
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:        }
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:    ],
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:    "1": [
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:        {
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "devices": [
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "/dev/loop4"
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            ],
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "lv_name": "ceph_lv1",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "lv_size": "21470642176",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=a25b4fc6-1504-44d3-aca7-62c5ef316350,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "lv_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "name": "ceph_lv1",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "path": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "tags": {
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.block_uuid": "4EYpuD-33GY-Fn2n-gZ5W-d2pn-Kw5J-Ig9gCO",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.cluster_name": "ceph",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.crush_device_class": "",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.encrypted": "0",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.objectstore": "bluestore",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.osd_fsid": "a25b4fc6-1504-44d3-aca7-62c5ef316350",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.osd_id": "1",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.type": "block",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.vdo": "0",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.with_tpm": "0"
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            },
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "type": "block",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "vg_name": "ceph_vg1"
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:        }
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:    ],
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:    "2": [
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:        {
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "devices": [
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "/dev/loop5"
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            ],
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "lv_name": "ceph_lv2",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "lv_size": "21470642176",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8ac33163-6221-5d58-9a39-8b6933fe7762,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f84d59d3-cae3-44c8-8bca-9fa4643cfc60,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "lv_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "name": "ceph_lv2",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "path": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "tags": {
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.block_uuid": "WeEvcc-3HLi-Q7w3-CYAc-HXsR-Kieq-Rjtm9V",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.cephx_lockbox_secret": "",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.cluster_fsid": "8ac33163-6221-5d58-9a39-8b6933fe7762",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.cluster_name": "ceph",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.crush_device_class": "",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.encrypted": "0",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.objectstore": "bluestore",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.osd_fsid": "f84d59d3-cae3-44c8-8bca-9fa4643cfc60",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.osd_id": "2",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.osdspec_affinity": "default_drive_group",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.type": "block",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.vdo": "0",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:                "ceph.with_tpm": "0"
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            },
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "type": "block",
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:            "vg_name": "ceph_vg2"
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:        }
Feb 25 09:15:02 np0005629333 zealous_turing[436870]:    ]
Feb 25 09:15:02 np0005629333 zealous_turing[436870]: }
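[annotation] The JSON blob closed above is keyed by OSD id ("0", "1", "2") and matches the output shape of `ceph-volume lvm list --format json`, which cephadm runs here inside the short-lived zealous_turing container (removed two lines below). A minimal sketch of consuming that output, assuming the blob has been saved to a local file (the filename is illustrative):

    import json

    # Parse `ceph-volume lvm list --format json`-style output (as captured
    # from the zealous_turing container above) into one summary line per OSD.
    # "ceph-volume-lvm-list.json" is an assumed local copy of the blob.
    with open("ceph-volume-lvm-list.json") as fh:
        inventory = json.load(fh)

    for osd_id, lvs in sorted(inventory.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv.get("tags", {})
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={tags.get('ceph.osd_fsid', '?')}")

Run against the blob above, this prints osd.1 on /dev/ceph_vg1/ceph_lv1 (backed by /dev/loop4) and osd.2 on /dev/ceph_vg2/ceph_lv2 (backed by /dev/loop5), matching the lv_tags recorded on each LV.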
Feb 25 09:15:02 np0005629333 systemd[1]: libpod-6f03b245347d01b2095cb8308a32656229602d1a4cc221898334903b17ba50ec.scope: Deactivated successfully.
Feb 25 09:15:02 np0005629333 podman[436854]: 2026-02-25 14:15:02.457208114 +0000 UTC m=+0.488983317 container died 6f03b245347d01b2095cb8308a32656229602d1a4cc221898334903b17ba50ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_turing, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:15:02 np0005629333 systemd[1]: var-lib-containers-storage-overlay-9958aafbf95798740f9499bdf40a3fc185d7ce5884c077d4e1d54ace1c049d8a-merged.mount: Deactivated successfully.
Feb 25 09:15:02 np0005629333 podman[436854]: 2026-02-25 14:15:02.570313369 +0000 UTC m=+0.602088512 container remove 6f03b245347d01b2095cb8308a32656229602d1a4cc221898334903b17ba50ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 25 09:15:02 np0005629333 systemd[1]: libpod-conmon-6f03b245347d01b2095cb8308a32656229602d1a4cc221898334903b17ba50ec.scope: Deactivated successfully.
Feb 25 09:15:03 np0005629333 podman[436956]: 2026-02-25 14:15:03.066020605 +0000 UTC m=+0.064340274 container create e0e5c88903f64db545b248a7f05f0a36a61ad911c496971a231f01225f981772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_tharp, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:15:03 np0005629333 systemd[1]: Started libpod-conmon-e0e5c88903f64db545b248a7f05f0a36a61ad911c496971a231f01225f981772.scope.
Feb 25 09:15:03 np0005629333 podman[436956]: 2026-02-25 14:15:03.036512879 +0000 UTC m=+0.034832608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:15:03 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:15:03 np0005629333 podman[436956]: 2026-02-25 14:15:03.166451351 +0000 UTC m=+0.164771060 container init e0e5c88903f64db545b248a7f05f0a36a61ad911c496971a231f01225f981772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_tharp, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Feb 25 09:15:03 np0005629333 podman[436956]: 2026-02-25 14:15:03.177203055 +0000 UTC m=+0.175522744 container start e0e5c88903f64db545b248a7f05f0a36a61ad911c496971a231f01225f981772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Feb 25 09:15:03 np0005629333 podman[436956]: 2026-02-25 14:15:03.181686183 +0000 UTC m=+0.180005892 container attach e0e5c88903f64db545b248a7f05f0a36a61ad911c496971a231f01225f981772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Feb 25 09:15:03 np0005629333 wizardly_tharp[436972]: 167 167
Feb 25 09:15:03 np0005629333 systemd[1]: libpod-e0e5c88903f64db545b248a7f05f0a36a61ad911c496971a231f01225f981772.scope: Deactivated successfully.
Feb 25 09:15:03 np0005629333 podman[436956]: 2026-02-25 14:15:03.183942516 +0000 UTC m=+0.182262205 container died e0e5c88903f64db545b248a7f05f0a36a61ad911c496971a231f01225f981772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Feb 25 09:15:03 np0005629333 systemd[1]: var-lib-containers-storage-overlay-ff7a87ad0a3daa8b3ea0f4f60f08500fa0fccf4c71b668e736ce2904b1931b6e-merged.mount: Deactivated successfully.
Feb 25 09:15:03 np0005629333 podman[436956]: 2026-02-25 14:15:03.270969932 +0000 UTC m=+0.269289621 container remove e0e5c88903f64db545b248a7f05f0a36a61ad911c496971a231f01225f981772 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_tharp, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 25 09:15:03 np0005629333 systemd[1]: libpod-conmon-e0e5c88903f64db545b248a7f05f0a36a61ad911c496971a231f01225f981772.scope: Deactivated successfully.
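[annotation] wizardly_tharp above is another throwaway probe: created, started, and removed within roughly 200 ms, with its only output the "167 167" on the wizardly_tharp[436972] line. That pair matches the uid/gid of the `ceph` user baked into Ceph container images, which cephadm determines by statting a path inside a disposable container. A hedged approximation of that probe (the exact path and flags cephadm uses may differ):

    import subprocess

    # Approximate reproduction of the uid/gid probe whose output ("167 167")
    # appears above. The stat target /var/lib/ceph is an assumption.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86")

    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True)
    uid, gid = out.stdout.split()
    print(f"ceph uid={uid} gid={gid}")  # expected: 167 167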
Feb 25 09:15:03 np0005629333 nova_compute[244014]: 2026-02-25 14:15:03.347 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:15:03 np0005629333 podman[436996]: 2026-02-25 14:15:03.417278797 +0000 UTC m=+0.042423023 container create 55b9fca805e9dfa75200511c6658052e07e57b645b550a7cf4b867261e3182a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_beaver, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Feb 25 09:15:03 np0005629333 systemd[1]: Started libpod-conmon-55b9fca805e9dfa75200511c6658052e07e57b645b550a7cf4b867261e3182a0.scope.
Feb 25 09:15:03 np0005629333 systemd[1]: Started libcrun container.
Feb 25 09:15:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068cea46dab1fee3af5ace3e592aa0abdeaaa0f4aeecca7185d27854e2b509ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 25 09:15:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068cea46dab1fee3af5ace3e592aa0abdeaaa0f4aeecca7185d27854e2b509ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 25 09:15:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068cea46dab1fee3af5ace3e592aa0abdeaaa0f4aeecca7185d27854e2b509ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 25 09:15:03 np0005629333 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/068cea46dab1fee3af5ace3e592aa0abdeaaa0f4aeecca7185d27854e2b509ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
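[annotation] The four xfs messages above are the kernel noting that these overlay mounts use classic 32-bit XFS timestamps; 0x7fffffff is the maximum signed 32-bit time_t. Converting it makes the cutoff concrete (filesystems formatted with the XFS bigtime feature are not subject to this limit):

    from datetime import datetime, timezone

    # 0x7fffffff from the xfs messages above is the largest 32-bit signed
    # time_t; converting it gives the exact date the kernel is warning about.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00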
Feb 25 09:15:03 np0005629333 podman[436996]: 2026-02-25 14:15:03.398768303 +0000 UTC m=+0.023912539 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Feb 25 09:15:03 np0005629333 podman[436996]: 2026-02-25 14:15:03.503087599 +0000 UTC m=+0.128231855 container init 55b9fca805e9dfa75200511c6658052e07e57b645b550a7cf4b867261e3182a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_beaver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Feb 25 09:15:03 np0005629333 podman[436996]: 2026-02-25 14:15:03.511902738 +0000 UTC m=+0.137046944 container start 55b9fca805e9dfa75200511c6658052e07e57b645b550a7cf4b867261e3182a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_beaver, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Feb 25 09:15:03 np0005629333 podman[436996]: 2026-02-25 14:15:03.515890681 +0000 UTC m=+0.141034977 container attach 55b9fca805e9dfa75200511c6658052e07e57b645b550a7cf4b867261e3182a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Feb 25 09:15:03 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4654: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:15:04 np0005629333 lvm[437090]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 09:15:04 np0005629333 lvm[437090]: VG ceph_vg0 finished
Feb 25 09:15:04 np0005629333 lvm[437091]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 09:15:04 np0005629333 lvm[437091]: VG ceph_vg1 finished
Feb 25 09:15:04 np0005629333 lvm[437093]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 09:15:04 np0005629333 lvm[437093]: VG ceph_vg2 finished
Feb 25 09:15:04 np0005629333 elated_beaver[437012]: {}
Feb 25 09:15:04 np0005629333 systemd[1]: libpod-55b9fca805e9dfa75200511c6658052e07e57b645b550a7cf4b867261e3182a0.scope: Deactivated successfully.
Feb 25 09:15:04 np0005629333 systemd[1]: libpod-55b9fca805e9dfa75200511c6658052e07e57b645b550a7cf4b867261e3182a0.scope: Consumed 1.099s CPU time.
Feb 25 09:15:04 np0005629333 podman[436996]: 2026-02-25 14:15:04.333454398 +0000 UTC m=+0.958598644 container died 55b9fca805e9dfa75200511c6658052e07e57b645b550a7cf4b867261e3182a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Feb 25 09:15:04 np0005629333 systemd[1]: var-lib-containers-storage-overlay-068cea46dab1fee3af5ace3e592aa0abdeaaa0f4aeecca7185d27854e2b509ee-merged.mount: Deactivated successfully.
Feb 25 09:15:04 np0005629333 podman[436996]: 2026-02-25 14:15:04.400352983 +0000 UTC m=+1.025497189 container remove 55b9fca805e9dfa75200511c6658052e07e57b645b550a7cf4b867261e3182a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_beaver, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Feb 25 09:15:04 np0005629333 systemd[1]: libpod-conmon-55b9fca805e9dfa75200511c6658052e07e57b645b550a7cf4b867261e3182a0.scope: Deactivated successfully.
Feb 25 09:15:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Feb 25 09:15:04 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:15:04 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Feb 25 09:15:04 np0005629333 ceph-mon[76335]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:15:04 np0005629333 nova_compute[244014]: 2026-02-25 14:15:04.877 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:15:04 np0005629333 nova_compute[244014]: 2026-02-25 14:15:04.879 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:15:05 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:15:05 np0005629333 ceph-mon[76335]: from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' 
Feb 25 09:15:05 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4655: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:15:05 np0005629333 nova_compute[244014]: 2026-02-25 14:15:05.876 244018 DEBUG oslo_service.periodic_task [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 25 09:15:05 np0005629333 nova_compute[244014]: 2026-02-25 14:15:05.877 244018 DEBUG nova.compute.manager [None req-285bc9e4-9cee-446d-93e3-7b5b45e4c0cb - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 25 09:15:06 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:15:07 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4656: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:15:08 np0005629333 nova_compute[244014]: 2026-02-25 14:15:08.350 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:15:09 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4657: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:15:11 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:15:11 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4658: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:15:13 np0005629333 nova_compute[244014]: 2026-02-25 14:15:13.352 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:15:13 np0005629333 systemd-logind[811]: New session 56 of user zuul.
Feb 25 09:15:13 np0005629333 systemd[1]: Started Session 56 of User zuul.
Feb 25 09:15:13 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4659: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:15:15 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4660: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:15:16 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:15:16 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23362 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 09:15:16 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23364 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 09:15:17 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Feb 25 09:15:17 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1827492314' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb 25 09:15:17 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4661: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:15:18 np0005629333 nova_compute[244014]: 2026-02-25 14:15:18.354 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:15:19 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4662: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:15:20 np0005629333 ovs-vsctl[437419]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Feb 25 09:15:21 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:15:21 np0005629333 virtqemud[243235]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Feb 25 09:15:21 np0005629333 virtqemud[243235]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Feb 25 09:15:21 np0005629333 virtqemud[243235]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Feb 25 09:15:21 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4663: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:15:21 np0005629333 ceph-mds[97202]: mds.cephfs.compute-0.idxobw asok_command: cache status {prefix=cache status} (starting...)
Feb 25 09:15:21 np0005629333 lvm[437741]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Feb 25 09:15:21 np0005629333 lvm[437741]: VG ceph_vg2 finished
Feb 25 09:15:22 np0005629333 lvm[437747]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 25 09:15:22 np0005629333 lvm[437747]: VG ceph_vg0 finished
Feb 25 09:15:22 np0005629333 lvm[437767]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 25 09:15:22 np0005629333 lvm[437767]: VG ceph_vg1 finished
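[annotation] The lvm[43774x] pairs above come from event-driven autoactivation: a udev-triggered pvscan marks each loop-device PV online and, once every PV belonging to a VG has been seen, declares the VG complete and finishes activating it. A rough userspace recheck of that grouping, assuming lvm2's JSON report format:

    import json
    import subprocess

    # Group online PVs by VG, loosely mirroring the "PV ... online,
    # VG ... is complete" messages above. This only summarizes current
    # state; the real completeness check compares against VG metadata.
    report = json.loads(subprocess.run(
        ["pvs", "--reportformat", "json", "-o", "pv_name,vg_name"],
        capture_output=True, text=True, check=True).stdout)

    by_vg: dict[str, list[str]] = {}
    for pv in report["report"][0]["pv"]:
        by_vg.setdefault(pv["vg_name"], []).append(pv["pv_name"])
    for vg, pvs in sorted(by_vg.items()):
        print(f"VG {vg}: PVs {', '.join(pvs)}")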
Feb 25 09:15:22 np0005629333 ceph-mds[97202]: mds.cephfs.compute-0.idxobw asok_command: client ls {prefix=client ls} (starting...)
Feb 25 09:15:22 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23368 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 09:15:22 np0005629333 ceph-mds[97202]: mds.cephfs.compute-0.idxobw asok_command: damage ls {prefix=damage ls} (starting...)
Feb 25 09:15:22 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23370 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 09:15:22 np0005629333 ceph-mds[97202]: mds.cephfs.compute-0.idxobw asok_command: dump loads {prefix=dump loads} (starting...)
Feb 25 09:15:22 np0005629333 ceph-mds[97202]: mds.cephfs.compute-0.idxobw asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Feb 25 09:15:22 np0005629333 ceph-mds[97202]: mds.cephfs.compute-0.idxobw asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Feb 25 09:15:23 np0005629333 ceph-mds[97202]: mds.cephfs.compute-0.idxobw asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Feb 25 09:15:23 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23374 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 09:15:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Feb 25 09:15:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2431788982' entity='client.admin' cmd={"prefix": "report"} : dispatch
Feb 25 09:15:23 np0005629333 ceph-mds[97202]: mds.cephfs.compute-0.idxobw asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Feb 25 09:15:23 np0005629333 nova_compute[244014]: 2026-02-25 14:15:23.356 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 25 09:15:23 np0005629333 nova_compute[244014]: 2026-02-25 14:15:23.359 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:15:23 np0005629333 nova_compute[244014]: 2026-02-25 14:15:23.359 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 25 09:15:23 np0005629333 nova_compute[244014]: 2026-02-25 14:15:23.359 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 09:15:23 np0005629333 nova_compute[244014]: 2026-02-25 14:15:23.360 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 25 09:15:23 np0005629333 nova_compute[244014]: 2026-02-25 14:15:23.360 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
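[annotation] The six ovsdbapp lines above trace the OVS reconnect state machine around its inactivity probe: after roughly 5000 ms without traffic on the tcp:127.0.0.1:6640 session it sends a probe and enters IDLE, then the received reply (the POLLIN on fd 29) returns it to ACTIVE. A toy model of that keepalive logic, with names and intervals illustrative rather than ovs.reconnect's actual API:

    import time

    PROBE_INTERVAL = 5.0  # seconds; the log shows "idle 5003 ms" before probing

    class Session:
        """Toy inactivity-probe FSM mirroring the ACTIVE/IDLE hops above."""

        def __init__(self):
            self.state = "ACTIVE"
            self.last_rx = time.monotonic()

        def on_receive(self):
            self.last_rx = time.monotonic()
            if self.state == "IDLE":
                self.state = "ACTIVE"  # probe (or any message) was answered

        def tick(self, send_probe):
            idle = time.monotonic() - self.last_rx
            if self.state == "ACTIVE" and idle >= PROBE_INTERVAL:
                send_probe()           # "sending inactivity probe"
                self.state = "IDLE"
            elif self.state == "IDLE" and idle >= 2 * PROBE_INTERVAL:
                raise ConnectionError("probe unanswered; reconnecting")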
Feb 25 09:15:23 np0005629333 ceph-mds[97202]: mds.cephfs.compute-0.idxobw asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Feb 25 09:15:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 25 09:15:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/700290619' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 25 09:15:23 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4664: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:15:23 np0005629333 ceph-mds[97202]: mds.cephfs.compute-0.idxobw asok_command: get subtrees {prefix=get subtrees} (starting...)
Feb 25 09:15:23 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23378 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 09:15:23 np0005629333 ceph-mgr[76641]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 25 09:15:23 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mgr-compute-0-jzfame[76637]: 2026-02-25T14:15:23.664+0000 7f96f9a94640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 25 09:15:23 np0005629333 ceph-mds[97202]: mds.cephfs.compute-0.idxobw asok_command: ops {prefix=ops} (starting...)
Feb 25 09:15:23 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Feb 25 09:15:23 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4033706135' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Feb 25 09:15:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Feb 25 09:15:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3479067776' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Feb 25 09:15:24 np0005629333 ceph-mds[97202]: mds.cephfs.compute-0.idxobw asok_command: session ls {prefix=session ls} (starting...)
Feb 25 09:15:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Feb 25 09:15:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2375952924' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Feb 25 09:15:24 np0005629333 ceph-mds[97202]: mds.cephfs.compute-0.idxobw asok_command: status {prefix=status} (starting...)
Feb 25 09:15:24 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb 25 09:15:24 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/66788207' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 25 09:15:25 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23388 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 09:15:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb 25 09:15:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3114139203' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 25 09:15:25 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23392 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 09:15:25 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4665: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:15:25 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb 25 09:15:25 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1638786567' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 25 09:15:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:15:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Feb 25 09:15:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3806312356' entity='client.admin' cmd={"prefix": "features"} : dispatch
Feb 25 09:15:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 25 09:15:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3241723639' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 25 09:15:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Feb 25 09:15:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4045208824' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Feb 25 09:15:26 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Feb 25 09:15:26 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3198604801' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb 25 09:15:27 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23404 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 09:15:27 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mgr-compute-0-jzfame[76637]: 2026-02-25T14:15:27.105+0000 7f96f9a94640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb 25 09:15:27 np0005629333 ceph-mgr[76641]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Feb 25 09:15:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb 25 09:15:27 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2180581000' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
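[annotation] The burst of handle_command/audit pairs from 09:15:17 onward is a read-only status sweep: a single admin client walks through status, report, config, mgr, and health queries in quick succession. The same sweep can be reproduced from any admin node; the command list below is taken directly from the log:

    import subprocess

    QUERIES = [
        "status", "report", "config generate-minimal-conf", "config log",
        "config-key dump", "mgr dump", "mgr metadata", "mgr module ls",
        "features", "mgr services", "health detail", "mgr stat", "mgr versions",
    ]
    for q in QUERIES:
        # Each of these appears above as a mon_command followed by an
        # audit-channel dispatch entry.
        r = subprocess.run(["ceph", *q.split()], capture_output=True, text=True)
        print(f"ceph {q}: rc={r.returncode}")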
Feb 25 09:15:27 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4666: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:15:27 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Feb 25 09:15:27 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1919515912' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Feb 25 09:15:27 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23410 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 09:15:28 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23414 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 64610304 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 64610304 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302850048 unmapped: 64610304 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302858240 unmapped: 64602112 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302866432 unmapped: 64593920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 64585728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 64585728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 64585728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 64585728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 64585728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 64585728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 64585728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 64585728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 64585728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 64585728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 64585728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302874624 unmapped: 64585728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 64577536 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 64577536 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 64577536 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 64577536 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 64577536 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 64577536 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 64577536 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 64577536 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 64577536 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302882816 unmapped: 64577536 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302891008 unmapped: 64569344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302899200 unmapped: 64561152 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 64552960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 64552960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 64552960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 64552960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 64552960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302907392 unmapped: 64552960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302915584 unmapped: 64544768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302923776 unmapped: 64536576 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302931968 unmapped: 64528384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302940160 unmapped: 64520192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302948352 unmapped: 64512000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302956544 unmapped: 64503808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302956544 unmapped: 64503808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302956544 unmapped: 64503808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302964736 unmapped: 64495616 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302972928 unmapped: 64487424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302981120 unmapped: 64479232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 64471040 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 64471040 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 64471040 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302989312 unmapped: 64471040 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302997504 unmapped: 64462848 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302997504 unmapped: 64462848 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302997504 unmapped: 64462848 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302997504 unmapped: 64462848 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302997504 unmapped: 64462848 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302997504 unmapped: 64462848 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302997504 unmapped: 64462848 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302997504 unmapped: 64462848 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 302997504 unmapped: 64462848 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
[The rest of 09:15:28 repeats the same four messages (the rocksdb "High Pri Pool Ratio" pair of 0.285714 / 0.0555556, an identical bluestore.MempoolThread _resize_shards line, and an identical osd.2 heartbeat osd_stat line) dozens of times each, while prioritycache tune_memory climbs ~176 KiB in 8 KiB steps from mapped: 302997504 / unmapped: 64462848 to the values below; target, heap, and old/new mem never change.]
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 64282624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 64282624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 64282624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303177728 unmapped: 64282624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 64274432 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 64274432 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 64274432 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 64274432 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303185920 unmapped: 64274432 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 64258048 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303202304 unmapped: 64258048 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303210496 unmapped: 64249856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 64241664 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 64241664 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 64241664 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303218688 unmapped: 64241664 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 64233472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 64233472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 64233472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 64233472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 64233472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 64233472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 64233472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 64233472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 64233472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 64233472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 64233472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303226880 unmapped: 64233472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303235072 unmapped: 64225280 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303235072 unmapped: 64225280 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303235072 unmapped: 64225280 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303235072 unmapped: 64225280 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303243264 unmapped: 64217088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303243264 unmapped: 64217088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303243264 unmapped: 64217088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303243264 unmapped: 64217088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303243264 unmapped: 64217088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303243264 unmapped: 64217088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303243264 unmapped: 64217088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303243264 unmapped: 64217088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 64208896 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 64208896 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 64208896 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303251456 unmapped: 64208896 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 64192512 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 64192512 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 64192512 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 64192512 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303267840 unmapped: 64192512 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 64184320 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 64176128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 64176128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 64176128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 64176128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 64176128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 64176128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 64176128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 64176128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 64176128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 64176128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 64176128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 64176128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 64176128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 64167936 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 64167936 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 64167936 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 64167936 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 64167936 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 64159744 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 64151552 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 64151552 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 64151552 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 64143360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 64143360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 64143360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 64143360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 64143360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 64143360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 64143360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 64143360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 64143360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 64143360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 64143360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 64135168 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 64135168 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 64135168 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 64135168 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 64135168 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 64135168 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 64135168 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 64135168 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 64126976 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 64126976 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 64126976 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 64126976 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303333376 unmapped: 64126976 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 64118784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7200.6 total, 600.0 interval
Cumulative writes: 38K writes, 150K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
Cumulative WAL: 38K writes, 13K syncs, 2.76 writes per sync, written: 0.15 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 64118784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 64118784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 64118784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 64110592 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 64110592 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 64110592 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: mgrc ms_handle_reset ms_handle_reset con 0x55f8d50ee800
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1263522489
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1263522489,v1:192.168.122.100:6801/1263522489]
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: mgrc handle_mgr_configure stats_period=5
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 64102400 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 64102400 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 64102400 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 64102400 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 64102400 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 64102400 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 64102400 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 64102400 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 64102400 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 64102400 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 64102400 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 64094208 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 64094208 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 64094208 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 64094208 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 64094208 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 64086016 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 64086016 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 64086016 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 64086016 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 64077824 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 64077824 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 64077824 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 64077824 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 64069632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 64069632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 64069632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 598.555297852s of 598.695800781s, submitted: 90
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 64036864 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 64036864 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303415296 unmapped: 64045056 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303415296 unmapped: 64045056 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 64012288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 64004096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 63995904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 63995904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 63995904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 63995904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 63995904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 63995904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 63995904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303472640 unmapped: 63987712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303472640 unmapped: 63987712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303472640 unmapped: 63987712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 63979520 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Feb 25 09:15:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3999877447' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 63971328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 63963136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 63963136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 63963136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 63963136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 63963136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 63963136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 63963136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 63963136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 63963136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 63963136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 63963136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 63963136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 63963136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 63963136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303505408 unmapped: 63954944 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303505408 unmapped: 63954944 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303505408 unmapped: 63954944 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303505408 unmapped: 63954944 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303505408 unmapped: 63954944 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303505408 unmapped: 63954944 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303505408 unmapped: 63954944 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303513600 unmapped: 63946752 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303513600 unmapped: 63946752 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303513600 unmapped: 63946752 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 63938560 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 63938560 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 63938560 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 63938560 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 63930368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 63922176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303546368 unmapped: 63913984 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303546368 unmapped: 63913984 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303546368 unmapped: 63913984 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303546368 unmapped: 63913984 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303546368 unmapped: 63913984 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303546368 unmapped: 63913984 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 63905792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303562752 unmapped: 63897600 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303562752 unmapped: 63897600 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303562752 unmapped: 63897600 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303562752 unmapped: 63897600 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303562752 unmapped: 63897600 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303562752 unmapped: 63897600 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303562752 unmapped: 63897600 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303562752 unmapped: 63897600 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303562752 unmapped: 63897600 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303570944 unmapped: 63889408 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303570944 unmapped: 63889408 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303570944 unmapped: 63889408 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303587328 unmapped: 63873024 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303587328 unmapped: 63873024 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303587328 unmapped: 63873024 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303587328 unmapped: 63873024 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303595520 unmapped: 63864832 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303595520 unmapped: 63864832 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303595520 unmapped: 63864832 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303595520 unmapped: 63864832 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303595520 unmapped: 63864832 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303595520 unmapped: 63864832 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303595520 unmapped: 63864832 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303595520 unmapped: 63864832 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303603712 unmapped: 63856640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303603712 unmapped: 63856640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303603712 unmapped: 63856640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303603712 unmapped: 63856640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303603712 unmapped: 63856640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303603712 unmapped: 63856640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303603712 unmapped: 63856640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303603712 unmapped: 63856640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303611904 unmapped: 63848448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303611904 unmapped: 63848448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303611904 unmapped: 63848448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303611904 unmapped: 63848448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303611904 unmapped: 63848448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303611904 unmapped: 63848448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303611904 unmapped: 63848448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303611904 unmapped: 63848448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303611904 unmapped: 63848448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303611904 unmapped: 63848448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303611904 unmapped: 63848448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 63840256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 63840256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 63840256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 63840256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 63840256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 63840256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 63840256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 63840256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 63840256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 63840256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 63840256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303628288 unmapped: 63832064 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303628288 unmapped: 63832064 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303628288 unmapped: 63832064 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303628288 unmapped: 63832064 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303628288 unmapped: 63832064 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303636480 unmapped: 63823872 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303636480 unmapped: 63823872 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303644672 unmapped: 63815680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303644672 unmapped: 63815680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303644672 unmapped: 63815680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303644672 unmapped: 63815680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303644672 unmapped: 63815680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303644672 unmapped: 63815680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303644672 unmapped: 63815680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303644672 unmapped: 63815680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303644672 unmapped: 63815680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303644672 unmapped: 63815680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303644672 unmapped: 63815680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303652864 unmapped: 63807488 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303652864 unmapped: 63807488 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303652864 unmapped: 63807488 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303652864 unmapped: 63807488 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303661056 unmapped: 63799296 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303661056 unmapped: 63799296 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303669248 unmapped: 63791104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303669248 unmapped: 63791104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303669248 unmapped: 63791104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303669248 unmapped: 63791104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303669248 unmapped: 63791104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303669248 unmapped: 63791104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303669248 unmapped: 63791104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303677440 unmapped: 63782912 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303677440 unmapped: 63782912 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303677440 unmapped: 63782912 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303685632 unmapped: 63774720 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303685632 unmapped: 63774720 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303685632 unmapped: 63774720 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303685632 unmapped: 63774720 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303685632 unmapped: 63774720 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303685632 unmapped: 63774720 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 63766528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 63766528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 63766528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 63766528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 63766528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 63766528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 63766528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 63766528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 63766528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 63766528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 63766528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 63766528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 63766528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 63766528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303702016 unmapped: 63758336 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303702016 unmapped: 63758336 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303702016 unmapped: 63758336 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303702016 unmapped: 63758336 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 63750144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 63750144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 63750144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 63750144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 63750144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 63750144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 63750144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 63750144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 63750144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 63750144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 63750144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 63750144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 63750144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303718400 unmapped: 63741952 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303718400 unmapped: 63741952 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303718400 unmapped: 63741952 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303734784 unmapped: 63725568 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303734784 unmapped: 63725568 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303734784 unmapped: 63725568 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303734784 unmapped: 63725568 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303734784 unmapped: 63725568 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303734784 unmapped: 63725568 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303734784 unmapped: 63725568 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303734784 unmapped: 63725568 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303734784 unmapped: 63725568 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303742976 unmapped: 63717376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303742976 unmapped: 63717376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303742976 unmapped: 63717376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303742976 unmapped: 63717376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303742976 unmapped: 63717376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303742976 unmapped: 63717376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303742976 unmapped: 63717376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303742976 unmapped: 63717376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303742976 unmapped: 63717376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303742976 unmapped: 63717376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303742976 unmapped: 63717376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303742976 unmapped: 63717376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303742976 unmapped: 63717376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303742976 unmapped: 63717376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303751168 unmapped: 63709184 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303751168 unmapped: 63709184 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303751168 unmapped: 63709184 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303751168 unmapped: 63709184 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303751168 unmapped: 63709184 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303751168 unmapped: 63709184 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303751168 unmapped: 63709184 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303751168 unmapped: 63709184 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303751168 unmapped: 63709184 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 63700992 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 63700992 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 63700992 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 63700992 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 63700992 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 63700992 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 63700992 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 63700992 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 63700992 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 63700992 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 63700992 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 63700992 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303767552 unmapped: 63692800 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303767552 unmapped: 63692800 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303775744 unmapped: 63684608 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303775744 unmapped: 63684608 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303775744 unmapped: 63684608 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 63676416 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303792128 unmapped: 63668224 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303792128 unmapped: 63668224 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303792128 unmapped: 63668224 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303792128 unmapped: 63668224 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303792128 unmapped: 63668224 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303792128 unmapped: 63668224 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303792128 unmapped: 63668224 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303792128 unmapped: 63668224 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303792128 unmapped: 63668224 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303792128 unmapped: 63668224 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303800320 unmapped: 63660032 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303800320 unmapped: 63660032 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303800320 unmapped: 63660032 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303800320 unmapped: 63660032 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303808512 unmapped: 63651840 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303808512 unmapped: 63651840 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303816704 unmapped: 63643648 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303824896 unmapped: 63635456 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303824896 unmapped: 63635456 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303833088 unmapped: 63627264 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303833088 unmapped: 63627264 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303833088 unmapped: 63627264 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303833088 unmapped: 63627264 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303833088 unmapped: 63627264 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303833088 unmapped: 63627264 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303833088 unmapped: 63627264 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303833088 unmapped: 63627264 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 63619072 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 63619072 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 63619072 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303849472 unmapped: 63610880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303857664 unmapped: 63602688 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303857664 unmapped: 63602688 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 63594496 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 63594496 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 63594496 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 63594496 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 63594496 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 63594496 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 63594496 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 63594496 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 63594496 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 63594496 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 63594496 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 63594496 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303874048 unmapped: 63586304 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303890432 unmapped: 63569920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303890432 unmapped: 63569920 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 63561728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 63561728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 63561728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 63561728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 63561728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 63561728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 63561728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 63561728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 63561728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 63561728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 63561728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303906816 unmapped: 63553536 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303906816 unmapped: 63553536 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303906816 unmapped: 63553536 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303906816 unmapped: 63553536 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 63545344 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303931392 unmapped: 63528960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303931392 unmapped: 63528960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303931392 unmapped: 63528960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303931392 unmapped: 63528960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303931392 unmapped: 63528960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303931392 unmapped: 63528960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303931392 unmapped: 63528960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303931392 unmapped: 63528960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303931392 unmapped: 63528960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303931392 unmapped: 63528960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303931392 unmapped: 63528960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303931392 unmapped: 63528960 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303939584 unmapped: 63520768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303939584 unmapped: 63520768 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303947776 unmapped: 63512576 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303947776 unmapped: 63512576 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303947776 unmapped: 63512576 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303947776 unmapped: 63512576 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303947776 unmapped: 63512576 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303947776 unmapped: 63512576 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303947776 unmapped: 63512576 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303947776 unmapped: 63512576 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303955968 unmapped: 63504384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303955968 unmapped: 63504384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303955968 unmapped: 63504384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303955968 unmapped: 63504384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303955968 unmapped: 63504384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303955968 unmapped: 63504384 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7800.6 total, 600.0 interval
Cumulative writes: 38K writes, 151K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
Cumulative WAL: 38K writes, 13K syncs, 2.75 writes per sync, written: 0.15 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 264 writes, 396 keys, 264 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
Interval WAL: 264 writes, 132 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303964160 unmapped: 63496192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303964160 unmapped: 63496192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303964160 unmapped: 63496192 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303972352 unmapped: 63488000 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 63479808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303988736 unmapped: 63471616 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303988736 unmapped: 63471616 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303988736 unmapped: 63471616 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303988736 unmapped: 63471616 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303996928 unmapped: 63463424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303996928 unmapped: 63463424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303996928 unmapped: 63463424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303996928 unmapped: 63463424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303996928 unmapped: 63463424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 303996928 unmapped: 63463424 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304005120 unmapped: 63455232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304005120 unmapped: 63455232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304005120 unmapped: 63455232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304005120 unmapped: 63455232 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304013312 unmapped: 63447040 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304021504 unmapped: 63438848 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 63430656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 63430656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 63430656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 63430656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 63430656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 63430656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 63430656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 601.275024414s of 601.484924316s, submitted: 132
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 63430656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 63430656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 63430656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 63430656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 63430656 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304062464 unmapped: 63397888 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 63365120 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 63356928 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 63348736 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 63348736 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 63348736 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 63348736 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 63340544 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 63332352 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 63324160 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 63324160 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304144384 unmapped: 63315968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304144384 unmapped: 63315968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304144384 unmapped: 63315968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304144384 unmapped: 63315968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304144384 unmapped: 63315968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304144384 unmapped: 63315968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304144384 unmapped: 63315968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304144384 unmapped: 63315968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304144384 unmapped: 63315968 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 63307776 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304160768 unmapped: 63299584 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304160768 unmapped: 63299584 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304160768 unmapped: 63299584 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 63291392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 63291392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 63291392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 63291392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 63291392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 63291392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 63291392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 63291392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 63291392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 63291392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 63291392 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304177152 unmapped: 63283200 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304185344 unmapped: 63275008 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304193536 unmapped: 63266816 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304193536 unmapped: 63266816 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304193536 unmapped: 63266816 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304193536 unmapped: 63266816 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304193536 unmapped: 63266816 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304193536 unmapped: 63266816 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 63258624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 63258624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 63258624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 63258624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 63258624 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304209920 unmapped: 63250432 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304209920 unmapped: 63250432 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304209920 unmapped: 63250432 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304218112 unmapped: 63242240 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304218112 unmapped: 63242240 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304218112 unmapped: 63242240 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304218112 unmapped: 63242240 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304218112 unmapped: 63242240 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304218112 unmapped: 63242240 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304218112 unmapped: 63242240 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304226304 unmapped: 63234048 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304226304 unmapped: 63234048 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304226304 unmapped: 63234048 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304226304 unmapped: 63234048 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304234496 unmapped: 63225856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304234496 unmapped: 63225856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304234496 unmapped: 63225856 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304242688 unmapped: 63217664 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63209472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 63209472 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63201280 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63201280 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63201280 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63201280 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63201280 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 63201280 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: mgrc ms_handle_reset ms_handle_reset con 0x55f8d4dc8800
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1263522489
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1263522489,v1:192.168.122.100:6801/1263522489]
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: mgrc handle_mgr_configure stats_period=5
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 63193088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 63193088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 63193088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 63193088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 63193088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 63193088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 63193088 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304275456 unmapped: 63184896 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304275456 unmapped: 63184896 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304275456 unmapped: 63184896 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304283648 unmapped: 63176704 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304283648 unmapped: 63176704 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304283648 unmapped: 63176704 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304283648 unmapped: 63176704 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 63168512 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304300032 unmapped: 63160320 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304300032 unmapped: 63160320 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304300032 unmapped: 63160320 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304300032 unmapped: 63160320 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63152128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63152128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63152128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 63152128 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304316416 unmapped: 63143936 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 302.284851074s of 302.442108154s, submitted: 90
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63127552 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63127552 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63127552 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63127552 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63127552 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63127552 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63127552 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 63127552 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 63119360 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304349184 unmapped: 63111168 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304357376 unmapped: 63102976 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 63094784 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 63086592 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 63078400 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 63070208 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304398336 unmapped: 63062016 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 63053824 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 63045632 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 63037440 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 63021056 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 63012864 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 63004672 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 62996480 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 62988288 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 62980096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 62980096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 62980096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 62980096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 62980096 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 62971904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 62971904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 62971904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 62971904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 62971904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 62971904 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 62963712 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 62955520 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 62955520 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 62955520 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 62947328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 8400.6 total, 600.0 interval
Cumulative writes: 38K writes, 151K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
Cumulative WAL: 38K writes, 14K syncs, 2.74 writes per sync, written: 0.15 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 62947328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 62947328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 62947328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 62947328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 62947328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 62947328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 62947328 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304521216 unmapped: 62939136 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304529408 unmapped: 62930944 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 62922752 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 297.608093262s of 297.648681641s, submitted: 24
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 62922752 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304537600 unmapped: 62922752 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304545792 unmapped: 62914560 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [0,0,0,0,0,0,1])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 62906368 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 62898176 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304570368 unmapped: 62889984 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.786643982s of 10.974362373s, submitted: 38
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304578560 unmapped: 62881792 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 62857216 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 nova_compute[244014]: 2026-02-25 14:15:28.360 244018 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 29 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 62857216 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 62857216 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304627712 unmapped: 62832640 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 62824448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 62824448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 62824448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304635904 unmapped: 62824448 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 62816256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 62816256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 62816256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 62816256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 62816256 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 62808064 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 62808064 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304652288 unmapped: 62808064 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 62799872 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 62799872 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 62799872 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 62799872 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304660480 unmapped: 62799872 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304668672 unmapped: 62791680 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 62775296 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 62775296 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 62775296 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 62775296 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 62775296 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304685056 unmapped: 62775296 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread fragmentation_score=0.004165 took=0.000093s
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304693248 unmapped: 62767104 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 62758912 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 62758912 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 62758912 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 62758912 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 62758912 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304701440 unmapped: 62758912 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 62742528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 62742528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 62742528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 62742528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304717824 unmapped: 62742528 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304726016 unmapped: 62734336 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304726016 unmapped: 62734336 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 62726144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 62726144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 62726144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 62726144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 62726144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 62726144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 62726144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 62726144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 62726144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304734208 unmapped: 62726144 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
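The osd_stat heartbeat above packs the OSD's capacity accounting into hex fields. A minimal decode in Python, assuming the usual store_statfs print order (available/internally-reserved/total, then stored/allocated for data); the field names are my reading of the printer, not something the log states:

    # Decode the hex fields from the osd.2 heartbeat line above.
    # Field naming is an assumption about store_statfs_t's print order.
    GiB, MiB = 1024 ** 3, 1024 ** 2
    available, reserved, total = 0x4ed6a1000, 0x0, 0x4ffc00000
    data_stored, data_allocated = 0x12c9e8d, 0x146b000
    print(f"total     {total / GiB:.1f} GiB")        # ~20.0 GiB
    print(f"available {available / GiB:.1f} GiB")    # ~19.7 GiB
    print(f"data      {data_stored / MiB:.1f} MiB stored, "
          f"{data_allocated / MiB:.1f} MiB allocated")  # ~18.8 / ~20.4 MiB

At roughly 20 GiB per OSD this squares with the 60 GiB cluster total the mgr reports in its pgmap lines further down.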
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304742400 unmapped: 62717952 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
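The tune_memory lines are the OSD's priority-cache autotuner comparing its tcmalloc heap to osd_memory_target (4294967296 bytes is exactly 4 GiB, the usual default). The one identity visible in every such line is heap = mapped + unmapped; a quick check against the line above:

    # Check the identity in the tune_memory lines above.
    target   = 4294967296        # osd_memory_target: exactly 4 GiB
    mapped   = 304742400         # tcmalloc bytes currently mapped in
    unmapped = 62717952          # bytes released back to the OS
    heap     = 367460352
    assert mapped + unmapped == heap
    print(f"{heap / 2**20:.0f} MiB heap vs {target / 2**30:.0f} GiB target")

Since mapped stays far below target, "old mem" and "new mem" never move from 2845415832 anywhere in this capture.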
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304758784 unmapped: 62701568 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304766976 unmapped: 62693376 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304775168 unmapped: 62685184 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: osd.2 299 heartbeat osd_stat(store_statfs(0x4ed6a1000/0x0/0x4ffc00000, data 0x12c9e8d/0x146b000, compress 0x0/0x0/0x0, omap 0x4c11d, meta 0x110a3ee3), peers [0,1] op hist [])
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: do_command 'config diff' '{prefix=config diff}'
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304873472 unmapped: 62586880 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247774 data_alloc: 218103808 data_used: 208822
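The rocksdb ratio pair above (0.285714 / 0.0555556) repeats on every tuning pass and reads like exact fractions rounded for printing; why commit_cache_size logs two values per pass is not visible here. Recovering the fractions:

    # The printed ratios are recognizable as small exact fractions.
    from fractions import Fraction
    for r in (0.285714, 0.0555556):
        print(r, "~", Fraction(r).limit_denominator(100))
    # 0.285714 ~ 2/7, 0.0555556 ~ 1/18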
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: do_command 'config show' '{prefix=config show}'
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: do_command 'counter dump' '{prefix=counter dump}'
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: do_command 'counter schema' '{prefix=counter schema}'
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
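The do_command pairs show the OSD servicing admin-socket requests ('config diff', 'config show', 'counter dump', 'counter schema') and reporting the response size. The same commands can be replayed by hand; a sketch assuming a host where "ceph daemon osd.2 ..." can reach this daemon's .asok (the command names come from the log, the helper is hypothetical):

    # Replay the admin-socket commands the OSD just served.
    import json, subprocess

    def asok(daemon, *cmd):
        # Hypothetical helper: shells out to "ceph daemon <name> <cmd...>".
        out = subprocess.check_output(["ceph", "daemon", daemon, *cmd])
        return json.loads(out) if out.strip() else {}

    schema = asok("osd.2", "counter", "schema")   # same call the log shows
    diff = asok("osd.2", "config", "diff")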
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 305004544 unmapped: 62455808 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: prioritycache tune_memory target: 4294967296 mapped: 304922624 unmapped: 62537728 heap: 367460352 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:28 np0005629333 ceph-osd[89088]: do_command 'log dump' '{prefix=log dump}'
Feb 25 09:15:28 np0005629333 rsyslogd[1020]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Feb 25 09:15:28 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23416 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 09:15:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.fpqgwn", "name": "rgw_frontends"} v 0)
Feb 25 09:15:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.fpqgwn", "name": "rgw_frontends"} : dispatch
Feb 25 09:15:28 np0005629333 podman[438524]: 2026-02-25 14:15:28.75332572 +0000 UTC m=+0.095168428 container health_status 2e309a6944a8788f905bc551f2f840bedb9778419aff63a5e7a3858cfa4d8204 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Feb 25 09:15:28 np0005629333 podman[438525]: 2026-02-25 14:15:28.764754414 +0000 UTC m=+0.104796891 container health_status 5eda742ff6f4d8c606567a0bcc715807c8650f5d4568ca5421a9d78da60f567c (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'de524c030b60dc954475a90eede60012145bb92944440ae16a2778e651961a53-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec-23dc54a57a450decb5011aa87e68e87b6955178cf145e1d3de983c83f0ba99ec'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
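The config_data=... blob inside both podman health_status lines above is Python literal syntax, so it can be lifted out of the log and parsed directly (a convenience observation about this capture, not a documented podman interface):

    # Parse a trimmed excerpt of the ovn_controller config_data blob above.
    import ast
    blob = ("{'depends_on': ['openvswitch.service'], 'net': 'host', "
            "'privileged': True, 'restart': 'always'}")
    cfg = ast.literal_eval(blob)
    assert cfg["privileged"] is True and cfg["net"] == "host"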
Feb 25 09:15:28 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb 25 09:15:28 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/276674330' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 25 09:15:29 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23420 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 25 09:15:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.fpqgwn", "name": "rgw_frontends"} v 0)
Feb 25 09:15:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3813466853' entity='mgr.compute-0.jzfame' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.fpqgwn", "name": "rgw_frontends"} : dispatch
Feb 25 09:15:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb 25 09:15:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1117185116' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 25 09:15:29 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23424 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 09:15:29 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4667: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
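That pgmap line is the mgr's one-line cluster summary, emitted each tick; its shape is regular enough to scrape (regex inferred from this log, not a stable interface):

    # Pull the cluster summary out of a pgmap debug line.
    import re
    line = ("pgmap v4667: 305 pgs: 305 active+clean; "
            "41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail")
    m = re.search(r"pgmap v(\d+): (\d+) pgs: (\d+) (\S+); (.+)", line)
    version, pgs_total, pgs_clean, state, usage = m.groups()
    assert pgs_total == pgs_clean and state == "active+clean"

The 60 GiB total is consistent with three roughly 20 GiB OSDs (osd.0, osd.1, osd.2, per the peer lists in the heartbeat lines).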
Feb 25 09:15:29 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23428 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 25 09:15:29 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb 25 09:15:29 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4092125088' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 25 09:15:30 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23430 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb 25 09:15:30 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 25 09:15:30 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1038890009' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 25 09:15:30 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23434 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 25 09:15:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 25 09:15:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb 25 09:15:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2941910060' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 25 09:15:31 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23438 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 25 09:15:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Optimize plan auto_2026-02-25_14:15:31
Feb 25 09:15:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 25 09:15:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] do_upmap
Feb 25 09:15:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] pools ['default.rgw.log', 'backups', '.mgr', 'vms', 'cephfs.cephfs.meta', 'volumes', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root']
Feb 25 09:15:31 np0005629333 ceph-mgr[76641]: [balancer INFO root] prepared 0/10 upmap changes
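This balancer pass is a no-op: upmap mode, a 5% misplaced ceiling, eleven pools scanned, and zero changes prepared out of what appears to be a budget of ten (that reading of "0/10" is my interpretation). As a toy restatement of the decision, not the mgr's actual code:

    # Toy restatement of the balancer pass logged above.
    max_misplaced = 0.05               # "max misplaced 0.050000"
    misplaced = 0.0                    # 305/305 PGs are active+clean
    prepared, budget = 0, 10           # "prepared 0/10 upmap changes"
    assert misplaced <= max_misplaced  # gate passed; do_upmap found nothing
    print(f"prepared {prepared}/{budget} upmap changes")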
Feb 25 09:15:31 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Feb 25 09:15:31 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3454671536' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Feb 25 09:15:31 np0005629333 ceph-mgr[76641]: log_channel(cluster) log [DBG] : pgmap v4668: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Feb 25 09:15:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:15:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:15:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:15:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:15:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] scanning for idle connections..
Feb 25 09:15:31 np0005629333 ceph-mgr[76641]: [volumes INFO mgr_util] cleaning up connections: []
Feb 25 09:15:31 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23442 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 25 09:15:32 np0005629333 ceph-mgr[76641]: log_channel(audit) log [DBG] : from='client.23446 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 25 09:15:32 np0005629333 ceph-mgr[76641]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Feb 25 09:15:32 np0005629333 ceph-8ac33163-6221-5d58-9a39-8b6933fe7762-mgr-compute-0-jzfame[76637]: 2026-02-25T14:15:32.258+0000 7f96f9a94640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
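The reply above names its own fix. A sketch that applies it and retries, assuming admin credentials on this host (both commands are taken verbatim from the log):

    # Enable the missing mgr module, then retry the failed command.
    import subprocess
    subprocess.run(["ceph", "mgr", "module", "enable", "prometheus"], check=True)
    subprocess.run(["ceph", "healthcheck", "history", "ls"], check=True)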
Feb 25 09:15:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Feb 25 09:15:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/337511007' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365772800 unmapped: 73744384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365789184 unmapped: 73728000 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365797376 unmapped: 73719808 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365805568 unmapped: 73711616 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365813760 unmapped: 73703424 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365821952 unmapped: 73695232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365830144 unmapped: 73687040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 73670656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 73662464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365862912 unmapped: 73654272 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365871104 unmapped: 73646080 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365879296 unmapped: 73637888 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365887488 unmapped: 73629696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365903872 unmapped: 73613312 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365912064 unmapped: 73605120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365928448 unmapped: 73588736 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365936640 unmapped: 73580544 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73572352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 73555968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365969408 unmapped: 73547776 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365985792 unmapped: 73531392 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365993984 unmapped: 73523200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366010368 unmapped: 73506816 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 73498624 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366051328 unmapped: 73465856 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 73449472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 73449472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 73449472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 73449472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 73449472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 73449472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 73449472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 73433088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 73424896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 73424896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 73424896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 73424896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 73424896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 73424896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 73424896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 73424896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 73424896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 73424896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 73424896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 73424896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366100480 unmapped: 73416704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366100480 unmapped: 73416704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366100480 unmapped: 73416704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366116864 unmapped: 73400320 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366116864 unmapped: 73400320 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366116864 unmapped: 73400320 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366125056 unmapped: 73392128 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 73383936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 73383936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 73383936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 73383936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 73383936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 73383936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 73383936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 73383936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366141440 unmapped: 73375744 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366141440 unmapped: 73375744 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366149632 unmapped: 73367552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366149632 unmapped: 73367552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366149632 unmapped: 73367552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366149632 unmapped: 73367552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366149632 unmapped: 73367552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366157824 unmapped: 73359360 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 73351168 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 73351168 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 73351168 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 73351168 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 73351168 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366174208 unmapped: 73342976 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366174208 unmapped: 73342976 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366174208 unmapped: 73342976 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366174208 unmapped: 73342976 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366174208 unmapped: 73342976 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 73326592 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 73326592 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 73326592 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 73326592 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 73326592 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 73326592 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366206976 unmapped: 73310208 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366206976 unmapped: 73310208 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366231552 unmapped: 73285632 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366231552 unmapped: 73285632 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 73277440 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 73277440 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 73277440 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366247936 unmapped: 73269248 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366247936 unmapped: 73269248 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366247936 unmapped: 73269248 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366247936 unmapped: 73269248 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366280704 unmapped: 73236480 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366280704 unmapped: 73236480 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366280704 unmapped: 73236480 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366297088 unmapped: 73220096 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366297088 unmapped: 73220096 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366305280 unmapped: 73211904 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366305280 unmapped: 73211904 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366305280 unmapped: 73211904 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366305280 unmapped: 73211904 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366313472 unmapped: 73203712 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366313472 unmapped: 73203712 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366313472 unmapped: 73203712 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366313472 unmapped: 73203712 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366313472 unmapped: 73203712 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366313472 unmapped: 73203712 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366313472 unmapped: 73203712 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366313472 unmapped: 73203712 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 73195520 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 73195520 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 73195520 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 73195520 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 73195520 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 73195520 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366329856 unmapped: 73187328 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366329856 unmapped: 73187328 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366329856 unmapped: 73187328 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366329856 unmapped: 73187328 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366329856 unmapped: 73187328 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366338048 unmapped: 73179136 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366338048 unmapped: 73179136 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366338048 unmapped: 73179136 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366346240 unmapped: 73170944 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366354432 unmapped: 73162752 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366354432 unmapped: 73162752 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366354432 unmapped: 73162752 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366354432 unmapped: 73162752 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366354432 unmapped: 73162752 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366354432 unmapped: 73162752 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366354432 unmapped: 73162752 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366354432 unmapped: 73162752 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366354432 unmapped: 73162752 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 73138176 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 73138176 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 73138176 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 73138176 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 73138176 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366387200 unmapped: 73129984 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366403584 unmapped: 73113600 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366403584 unmapped: 73113600 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366403584 unmapped: 73113600 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366403584 unmapped: 73113600 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366403584 unmapped: 73113600 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366403584 unmapped: 73113600 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366403584 unmapped: 73113600 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366403584 unmapped: 73113600 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 73105408 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 73105408 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 73105408 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 73105408 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 73105408 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 73105408 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 73105408 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 73105408 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 73105408 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 73105408 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 73105408 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 73105408 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 73105408 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366419968 unmapped: 73097216 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366419968 unmapped: 73097216 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366419968 unmapped: 73097216 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366419968 unmapped: 73097216 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366419968 unmapped: 73097216 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366419968 unmapped: 73097216 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366419968 unmapped: 73097216 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366428160 unmapped: 73089024 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366428160 unmapped: 73089024 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366428160 unmapped: 73089024 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366428160 unmapped: 73089024 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366436352 unmapped: 73080832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366436352 unmapped: 73080832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366436352 unmapped: 73080832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366436352 unmapped: 73080832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 73072640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 73072640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7200.2 total, 600.0 interval
Cumulative writes: 49K writes, 195K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
Cumulative WAL: 49K writes, 18K syncs, 2.74 writes per sync, written: 0.19 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 73072640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 73072640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 73072640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 73072640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 73072640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 73072640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 73072640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 ms_handle_reset con 0x55a3671c1800 session 0x55a370d56a80
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: mgrc ms_handle_reset ms_handle_reset con 0x55a36cb89800
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1263522489
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1263522489,v1:192.168.122.100:6801/1263522489]
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: mgrc handle_mgr_configure stats_period=5
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366460928 unmapped: 73056256 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 ms_handle_reset con 0x55a369b57400 session 0x55a370d53500
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 ms_handle_reset con 0x55a3686f5400 session 0x55a370d56540
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365404160 unmapped: 74113024 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365404160 unmapped: 74113024 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365412352 unmapped: 74104832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 74096640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 74096640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 74096640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 74096640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 74096640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 74096640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 74096640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 74096640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 74096640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 74096640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365420544 unmapped: 74096640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365428736 unmapped: 74088448 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365428736 unmapped: 74088448 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365428736 unmapped: 74088448 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365428736 unmapped: 74088448 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365436928 unmapped: 74080256 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365436928 unmapped: 74080256 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365436928 unmapped: 74080256 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365436928 unmapped: 74080256 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365436928 unmapped: 74080256 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365445120 unmapped: 74072064 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365445120 unmapped: 74072064 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365445120 unmapped: 74072064 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365445120 unmapped: 74072064 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 ms_handle_reset con 0x55a369b56c00 session 0x55a36b5fae00
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365445120 unmapped: 74072064 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365445120 unmapped: 74072064 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365445120 unmapped: 74072064 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365445120 unmapped: 74072064 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365445120 unmapped: 74072064 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 223602
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365445120 unmapped: 74072064 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 598.568664551s of 598.707702637s, submitted: 90
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365453312 unmapped: 74063872 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365453312 unmapped: 74063872 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365502464 unmapped: 74014720 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365559808 unmapped: 73957376 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365576192 unmapped: 73940992 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365584384 unmapped: 73932800 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365584384 unmapped: 73932800 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365584384 unmapped: 73932800 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365584384 unmapped: 73932800 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365584384 unmapped: 73932800 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365592576 unmapped: 73924608 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365592576 unmapped: 73924608 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365592576 unmapped: 73924608 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365592576 unmapped: 73924608 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365600768 unmapped: 73916416 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365600768 unmapped: 73916416 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365600768 unmapped: 73916416 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365600768 unmapped: 73916416 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365600768 unmapped: 73916416 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365608960 unmapped: 73908224 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365608960 unmapped: 73908224 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365617152 unmapped: 73900032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365617152 unmapped: 73900032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365617152 unmapped: 73900032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365617152 unmapped: 73900032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365617152 unmapped: 73900032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365617152 unmapped: 73900032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365625344 unmapped: 73891840 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365625344 unmapped: 73891840 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365633536 unmapped: 73883648 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365633536 unmapped: 73883648 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365633536 unmapped: 73883648 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365633536 unmapped: 73883648 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365633536 unmapped: 73883648 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365633536 unmapped: 73883648 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365641728 unmapped: 73875456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365641728 unmapped: 73875456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365641728 unmapped: 73875456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365641728 unmapped: 73875456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365641728 unmapped: 73875456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365641728 unmapped: 73875456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365641728 unmapped: 73875456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365641728 unmapped: 73875456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365649920 unmapped: 73867264 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365649920 unmapped: 73867264 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365649920 unmapped: 73867264 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365649920 unmapped: 73867264 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365658112 unmapped: 73859072 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365666304 unmapped: 73850880 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365666304 unmapped: 73850880 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365666304 unmapped: 73850880 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365666304 unmapped: 73850880 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365674496 unmapped: 73842688 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365682688 unmapped: 73834496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365682688 unmapped: 73834496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365690880 unmapped: 73826304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365690880 unmapped: 73826304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365690880 unmapped: 73826304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365690880 unmapped: 73826304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365699072 unmapped: 73818112 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365699072 unmapped: 73818112 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365699072 unmapped: 73818112 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365699072 unmapped: 73818112 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365699072 unmapped: 73818112 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365699072 unmapped: 73818112 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365699072 unmapped: 73818112 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365699072 unmapped: 73818112 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365699072 unmapped: 73818112 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365699072 unmapped: 73818112 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365699072 unmapped: 73818112 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365699072 unmapped: 73818112 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365707264 unmapped: 73809920 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365707264 unmapped: 73809920 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365707264 unmapped: 73809920 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365707264 unmapped: 73809920 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365707264 unmapped: 73809920 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365715456 unmapped: 73801728 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365715456 unmapped: 73801728 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365715456 unmapped: 73801728 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365715456 unmapped: 73801728 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365715456 unmapped: 73801728 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365715456 unmapped: 73801728 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365715456 unmapped: 73801728 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365715456 unmapped: 73801728 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365715456 unmapped: 73801728 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365715456 unmapped: 73801728 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365723648 unmapped: 73793536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365731840 unmapped: 73785344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365731840 unmapped: 73785344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365731840 unmapped: 73785344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365740032 unmapped: 73777152 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365740032 unmapped: 73777152 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365740032 unmapped: 73777152 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365748224 unmapped: 73768960 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365756416 unmapped: 73760768 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365764608 unmapped: 73752576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365772800 unmapped: 73744384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365772800 unmapped: 73744384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365772800 unmapped: 73744384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365772800 unmapped: 73744384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365772800 unmapped: 73744384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365772800 unmapped: 73744384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 73736192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365789184 unmapped: 73728000 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365789184 unmapped: 73728000 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365789184 unmapped: 73728000 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365789184 unmapped: 73728000 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365789184 unmapped: 73728000 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365789184 unmapped: 73728000 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365797376 unmapped: 73719808 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365797376 unmapped: 73719808 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365797376 unmapped: 73719808 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365797376 unmapped: 73719808 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365797376 unmapped: 73719808 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365797376 unmapped: 73719808 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365805568 unmapped: 73711616 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365805568 unmapped: 73711616 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365805568 unmapped: 73711616 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365805568 unmapped: 73711616 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365813760 unmapped: 73703424 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365813760 unmapped: 73703424 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365813760 unmapped: 73703424 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365830144 unmapped: 73687040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365830144 unmapped: 73687040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365830144 unmapped: 73687040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365830144 unmapped: 73687040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365830144 unmapped: 73687040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365830144 unmapped: 73687040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365830144 unmapped: 73687040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365830144 unmapped: 73687040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365838336 unmapped: 73678848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 73670656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 73670656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 73670656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 73670656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 73670656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 73670656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 73662464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 73662464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 73662464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 73662464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 73662464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 73662464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 73662464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 73662464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 73662464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 73662464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 73662464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 73662464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 73662464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365862912 unmapped: 73654272 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365862912 unmapped: 73654272 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365871104 unmapped: 73646080 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365871104 unmapped: 73646080 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365871104 unmapped: 73646080 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365879296 unmapped: 73637888 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365887488 unmapped: 73629696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365887488 unmapped: 73629696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365887488 unmapped: 73629696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365887488 unmapped: 73629696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365887488 unmapped: 73629696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365887488 unmapped: 73629696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365887488 unmapped: 73629696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365895680 unmapped: 73621504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365903872 unmapped: 73613312 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365912064 unmapped: 73605120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365912064 unmapped: 73605120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365912064 unmapped: 73605120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365912064 unmapped: 73605120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365912064 unmapped: 73605120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365912064 unmapped: 73605120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365912064 unmapped: 73605120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365912064 unmapped: 73605120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 73596928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365928448 unmapped: 73588736 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365928448 unmapped: 73588736 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365936640 unmapped: 73580544 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365936640 unmapped: 73580544 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365936640 unmapped: 73580544 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73572352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73572352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73572352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73572352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73572352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365944832 unmapped: 73572352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 73564160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 73555968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 73555968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 73555968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 73555968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 73555968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 73555968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 73555968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 73555968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 73555968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 73555968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 73555968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 73555968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 73555968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 25 09:15:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 25 09:15:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:15:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 73539584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365985792 unmapped: 73531392 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365985792 unmapped: 73531392 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365985792 unmapped: 73531392 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365985792 unmapped: 73531392 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365985792 unmapped: 73531392 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365993984 unmapped: 73523200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365993984 unmapped: 73523200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 365993984 unmapped: 73523200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [1])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366002176 unmapped: 73515008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366010368 unmapped: 73506816 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366010368 unmapped: 73506816 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 73498624 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 73498624 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 73498624 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 73498624 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 73498624 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 73498624 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 73498624 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 73498624 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 73490432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 73482240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366043136 unmapped: 73474048 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366043136 unmapped: 73474048 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366043136 unmapped: 73474048 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366043136 unmapped: 73474048 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366043136 unmapped: 73474048 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366051328 unmapped: 73465856 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366059520 unmapped: 73457664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366059520 unmapped: 73457664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 73449472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 73449472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 73449472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 73441280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 73433088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 73433088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 73433088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 73433088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 73424896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366100480 unmapped: 73416704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366100480 unmapped: 73416704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 25 09:15:32 np0005629333 ceph-mgr[76641]: [rbd_support INFO root] load_schedules: images, start_after=
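The interleaved ceph-mgr lines come from the rbd_support mgr module re-reading its per-pool schedules (here for the backups and images pools); an empty start_after= just means the listing started from the beginning of the keyspace. Both the trash-purge and mirror-snapshot schedulers emit this same load_schedules message, so the pool name alone does not say which one fired. A hedged sketch for checking the trash-purge side from a client node (assumes the rbd CLI is on PATH with admin credentials; whether these pools actually carry schedules is not known from the log):

    import subprocess

    # "rbd trash purge schedule ls" is the documented listing subcommand;
    # the pool names are the ones that appear in this log.
    for pool in ("backups", "images"):
        res = subprocess.run(
            ["rbd", "trash", "purge", "schedule", "ls", "-p", pool,
             "--format", "json"],
            capture_output=True, text=True, check=False)
        print(pool, res.stdout.strip() or res.stderr.strip())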
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366100480 unmapped: 73416704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366100480 unmapped: 73416704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366100480 unmapped: 73416704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366100480 unmapped: 73416704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366100480 unmapped: 73416704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366100480 unmapped: 73416704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 73408512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366116864 unmapped: 73400320 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366116864 unmapped: 73400320 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366116864 unmapped: 73400320 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366116864 unmapped: 73400320 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 73383936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 73383936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 73383936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 73383936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366141440 unmapped: 73375744 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366141440 unmapped: 73375744 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366141440 unmapped: 73375744 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366141440 unmapped: 73375744 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366141440 unmapped: 73375744 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366141440 unmapped: 73375744 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366149632 unmapped: 73367552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366149632 unmapped: 73367552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366149632 unmapped: 73367552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366149632 unmapped: 73367552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366149632 unmapped: 73367552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366149632 unmapped: 73367552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366157824 unmapped: 73359360 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366157824 unmapped: 73359360 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366157824 unmapped: 73359360 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 73351168 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366174208 unmapped: 73342976 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366174208 unmapped: 73342976 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366174208 unmapped: 73342976 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366182400 unmapped: 73334784 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 73326592 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 73326592 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 73326592 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 73326592 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 73326592 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 73326592 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7800.2 total, 600.0 interval
Cumulative writes: 49K writes, 196K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.02 MB/s
Cumulative WAL: 49K writes, 18K syncs, 2.74 writes per sync, written: 0.19 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 264 writes, 396 keys, 264 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
Interval WAL: 264 writes, 132 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
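The dump is self-consistent for a quiet store: the interval WAL counters (264 writes, 132 syncs) reproduce the printed 2.00 writes per sync, and the cumulative figures work out to only a handful of writes per second over the 7800 s uptime. Checking that arithmetic with the values as printed (the "49K" is already rounded, so the lifetime rate is approximate):

    uptime_s, interval_s = 7800.2, 600.0
    interval_writes, interval_syncs = 264, 132
    cumulative_writes = 49_000  # "49K" as printed

    print("writes per sync:", interval_writes / interval_syncs)             # 2.0
    print("interval write rate: %.2f /s" % (interval_writes / interval_s))  # 0.44
    print("lifetime write rate: ~%.1f /s" % (cumulative_writes / uptime_s)) # ~6.3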
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 73326592 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 73326592 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366198784 unmapped: 73318400 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366198784 unmapped: 73318400 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366198784 unmapped: 73318400 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366198784 unmapped: 73318400 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366198784 unmapped: 73318400 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366206976 unmapped: 73310208 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366206976 unmapped: 73310208 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366206976 unmapped: 73310208 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 73302016 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 73293824 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-mon[76335]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Feb 25 09:15:32 np0005629333 ceph-mon[76335]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/224906319' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
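This mon pair is the standard command/audit sequence: the leader (mon.compute-0) receives a mon_command and the audit channel records the sender (client.admin at 192.168.122.100) before dispatching it. The JSON payload maps onto the CLI as roughly `ceph log last cephadm --format json-pretty`, the poll cephadm issues to read its own log channel. A sketch of issuing the same command programmatically through the rados Python binding's mon_command (the conffile path and available credentials are assumptions about this cluster):

    import json
    import rados

    # /etc/ceph/ceph.conf is illustrative; point at this cluster's real conf.
    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        cmd = json.dumps({"prefix": "log last", "channel": "cephadm",
                          "format": "json-pretty"})
        ret, outbuf, errs = cluster.mon_command(cmd, b"")
        print(ret, outbuf.decode() if outbuf else errs)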
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366231552 unmapped: 73285632 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366231552 unmapped: 73285632 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 73277440 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 73277440 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 73277440 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 73277440 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 73277440 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366247936 unmapped: 73269248 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366247936 unmapped: 73269248 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366247936 unmapped: 73269248 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366247936 unmapped: 73269248 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366247936 unmapped: 73269248 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 73261056 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366264320 unmapped: 73252864 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366264320 unmapped: 73252864 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 601.250732422s of 601.472778320s, submitted: 132
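The once-per-interval _kv_sync_thread line agrees with the RocksDB dump earlier in this window: the sync thread was idle for 601.25 of 601.47 seconds (about 99.96%) while handling 132 submitted transactions, the same count as the interval's WAL syncs. The arithmetic:

    idle_s, window_s, submitted = 601.250732422, 601.472778320, 132

    busy_s = window_s - idle_s
    print("idle: %.2f%%" % (100 * idle_s / window_s))                # 99.96%
    print("busy per commit: %.2f ms" % (1000 * busy_s / submitted))  # ~1.68 ms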
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 73244672 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 73228288 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 73228288 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 73228288 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 73228288 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 73228288 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 73154560 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 73146368 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 73138176 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 73138176 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 73138176 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 73138176 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 73138176 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 73138176 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 73138176 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 73138176 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 73138176 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366395392 unmapped: 73121792 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366395392 unmapped: 73121792 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366395392 unmapped: 73121792 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366403584 unmapped: 73113600 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366403584 unmapped: 73113600 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366403584 unmapped: 73113600 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366403584 unmapped: 73113600 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 73105408 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366419968 unmapped: 73097216 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366419968 unmapped: 73097216 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366419968 unmapped: 73097216 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366428160 unmapped: 73089024 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366428160 unmapped: 73089024 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366428160 unmapped: 73089024 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366436352 unmapped: 73080832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366436352 unmapped: 73080832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366436352 unmapped: 73080832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366436352 unmapped: 73080832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366436352 unmapped: 73080832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366436352 unmapped: 73080832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366436352 unmapped: 73080832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366436352 unmapped: 73080832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366436352 unmapped: 73080832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366436352 unmapped: 73080832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366436352 unmapped: 73080832 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 73072640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 73072640 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366452736 unmapped: 73064448 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366452736 unmapped: 73064448 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366460928 unmapped: 73056256 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366460928 unmapped: 73056256 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366460928 unmapped: 73056256 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366460928 unmapped: 73056256 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366460928 unmapped: 73056256 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366469120 unmapped: 73048064 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366469120 unmapped: 73048064 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366469120 unmapped: 73048064 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 73039872 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 73039872 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 73039872 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 73039872 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 73039872 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 73039872 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366493696 unmapped: 73023488 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 73015296 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 73015296 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 73015296 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 73015296 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 73015296 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 73015296 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366510080 unmapped: 73007104 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366510080 unmapped: 73007104 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 72990720 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 72990720 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 72990720 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 72990720 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 72990720 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 72990720 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 72990720 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366534656 unmapped: 72982528 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366534656 unmapped: 72982528 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366542848 unmapped: 72974336 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366542848 unmapped: 72974336 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366542848 unmapped: 72974336 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366542848 unmapped: 72974336 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366542848 unmapped: 72974336 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366559232 unmapped: 72957952 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366559232 unmapped: 72957952 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366559232 unmapped: 72957952 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366567424 unmapped: 72949760 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366567424 unmapped: 72949760 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366575616 unmapped: 72941568 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366575616 unmapped: 72941568 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366575616 unmapped: 72941568 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366575616 unmapped: 72941568 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 72933376 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 72933376 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 72933376 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 ms_handle_reset con 0x55a368f00c00 session 0x55a368b41340
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 72933376 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: mgrc ms_handle_reset ms_handle_reset con 0x55a369b55800
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1263522489
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1263522489,v1:192.168.122.100:6801/1263522489]
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: mgrc handle_mgr_configure stats_period=5
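The mgrc sequence above is a manager restart or failover seen from the OSD side: the messenger reports a connection reset, the mgr client tears down the old session, dials the same v2/v1 address pair again, and the new mgr answers with a configure message asking the daemon to report stats every 5 seconds. Repeated bursts of these lines are worth counting; a hypothetical parsing helper for a saved journal extract (file name and per-minute bucketing are illustrative choices):

    # Count "new mgr session" events per minute in a saved log extract,
    # to spot reconnect storms. Matches the syslog-style journalctl
    # format used throughout this file.
    import re
    from collections import Counter

    PAT = re.compile(r"^(\w+ +\d+ \d+:\d+):\d+ .* mgrc reconnect Starting new session")

    def reconnects_per_minute(path):
        hist = Counter()
        with open(path) as fh:
            for line in fh:
                m = PAT.match(line)
                if m:
                    hist[m.group(1)] += 1   # key looks like "Feb 25 09:15"
        return hist

    # usage: print(reconnects_per_minute("ceph-osd-1.log").most_common(5))

A single reset followed by one successful handle_mgr_configure, as here, is routine; dozens per minute would point at a flapping mgr.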
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 ms_handle_reset con 0x55a369b4b800 session 0x55a36f53fdc0
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 ms_handle_reset con 0x55a369b51800 session 0x55a36de4c380
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366608384 unmapped: 72908800 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366616576 unmapped: 72900608 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366616576 unmapped: 72900608 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366624768 unmapped: 72892416 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366624768 unmapped: 72892416 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 72876032 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 72867840 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 72867840 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 ms_handle_reset con 0x55a3686f5400 session 0x55a368b40c40
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 72867840 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 72867840 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366657536 unmapped: 72859648 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366665728 unmapped: 72851456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366665728 unmapped: 72851456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366665728 unmapped: 72851456 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366673920 unmapped: 72843264 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 302.287567139s of 302.444122314s, submitted: 90
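The _kv_sync_thread line is a utilization summary for BlueStore's RocksDB commit thread: idle for ~302.29 s out of a ~302.44 s window, with 90 transaction batches submitted. The derived numbers, assuming "submitted" counts the batches flushed in that same window:

    # Worked arithmetic for the _kv_sync_thread utilization line above.
    idle, window, submitted = 302.287567139, 302.444122314, 90

    busy = window - idle                                  # ~0.157 s doing commits
    print(f"busy fraction: {busy / window:.4%}")          # ~0.0518%
    print(f"avg time per batch: {busy / submitted * 1e3:.2f} ms")  # ~1.74 ms

A kv-sync thread that is more than 99.9% idle confirms what the heartbeats already suggest: this OSD is serving almost no client writes.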
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
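The <cls> line is a debug trace from the fifo object class: a client is invoking cls_fifo's get_meta handler on this OSD, typically RGW, which keeps its datalog and queues in FIFO objects and polls their metadata. Object-class methods are reached through a RADOS exec call; a hedged librados sketch of what the caller side looks like (pool and object names are hypothetical, and a real get_meta request carries an encoded struct rather than empty bytes):

    # Caller-side sketch of an object-class invocation like the get_meta
    # trace above. Pool/object names are made up for illustration; the
    # actual caller in this cluster is most likely RGW's FIFO machinery.
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("default.rgw.log")   # hypothetical pool
        try:
            # Runs "fifo.get_meta" on the OSD holding the object, which is
            # what emits the cls_fifo.cc:366 trace in this log.
            # (b"" is a placeholder; the return shape depends on the
            # python-rados version.)
            reply = ioctx.execute("data_log.0", "fifo", "get_meta", b"")
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()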
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 72810496 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 72802304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 72802304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 72802304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 72802304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 72802304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 72802304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 72802304 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366723072 unmapped: 72794112 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366723072 unmapped: 72794112 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 72785920 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 72785920 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 72785920 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 72785920 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 72785920 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 72785920 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 72785920 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 72785920 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366739456 unmapped: 72777728 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366739456 unmapped: 72777728 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366739456 unmapped: 72777728 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 72769536 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 72761344 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366764032 unmapped: 72753152 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366764032 unmapped: 72753152 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366764032 unmapped: 72753152 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366764032 unmapped: 72753152 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 72736768 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 72736768 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 72736768 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 72736768 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 72736768 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 72736768 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 72736768 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 72736768 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 72736768 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 72736768 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 72736768 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 72736768 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366788608 unmapped: 72728576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366788608 unmapped: 72728576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366788608 unmapped: 72728576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366788608 unmapped: 72728576 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 72720384 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 72712192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 72712192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 72712192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 72712192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 72712192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 72712192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 72712192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 72712192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 72712192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 72712192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 72712192 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366813184 unmapped: 72704000 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366821376 unmapped: 72695808 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366821376 unmapped: 72695808 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366821376 unmapped: 72695808 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366821376 unmapped: 72695808 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366821376 unmapped: 72695808 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366821376 unmapped: 72695808 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366829568 unmapped: 72687616 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366829568 unmapped: 72687616 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366829568 unmapped: 72687616 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366845952 unmapped: 72671232 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 72663040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 72663040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 72663040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 72663040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 72663040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 72663040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 72663040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 72663040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 72663040 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 72654848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 72654848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 72654848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 72654848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 72654848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 72654848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 72654848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 72654848 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366870528 unmapped: 72646656 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366878720 unmapped: 72638464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366878720 unmapped: 72638464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366878720 unmapped: 72638464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366878720 unmapped: 72638464 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366886912 unmapped: 72630272 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366895104 unmapped: 72622080 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366903296 unmapped: 72613888 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 8400.2 total, 600.0 interval
Cumulative writes: 49K writes, 196K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.02 MB/s
Cumulative WAL: 49K writes, 18K syncs, 2.73 writes per sync, written: 0.19 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366903296 unmapped: 72613888 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366903296 unmapped: 72613888 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366911488 unmapped: 72605696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366911488 unmapped: 72605696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366911488 unmapped: 72605696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366911488 unmapped: 72605696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366911488 unmapped: 72605696 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 72597504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 72597504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 72597504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 72597504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 72597504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 72597504 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366927872 unmapped: 72589312 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366927872 unmapped: 72589312 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366927872 unmapped: 72589312 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366927872 unmapped: 72589312 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366927872 unmapped: 72589312 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366927872 unmapped: 72589312 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366936064 unmapped: 72581120 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366944256 unmapped: 72572928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366944256 unmapped: 72572928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366944256 unmapped: 72572928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366944256 unmapped: 72572928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366944256 unmapped: 72572928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366944256 unmapped: 72572928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366944256 unmapped: 72572928 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366952448 unmapped: 72564736 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366960640 unmapped: 72556544 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366960640 unmapped: 72556544 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366960640 unmapped: 72556544 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 297.736419678s of 297.771453857s, submitted: 22
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366968832 unmapped: 72548352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366968832 unmapped: 72548352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366968832 unmapped: 72548352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366968832 unmapped: 72548352 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366977024 unmapped: 72540160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366977024 unmapped: 72540160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366977024 unmapped: 72540160 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366985216 unmapped: 72531968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.380105019s of 10.057199478s, submitted: 38
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366985216 unmapped: 72531968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688239 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366985216 unmapped: 72531968 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 366993408 unmapped: 72523776 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367001600 unmapped: 72515584 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [0,0,0,0,0,0,1])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367009792 unmapped: 72507392 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 72499200 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367026176 unmapped: 72491008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367026176 unmapped: 72491008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367026176 unmapped: 72491008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367026176 unmapped: 72491008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367026176 unmapped: 72491008 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367034368 unmapped: 72482816 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367042560 unmapped: 72474624 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 72466432 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367058944 unmapped: 72458240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367058944 unmapped: 72458240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367058944 unmapped: 72458240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367058944 unmapped: 72458240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367058944 unmapped: 72458240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367058944 unmapped: 72458240 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367067136 unmapped: 72450048 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367067136 unmapped: 72450048 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367067136 unmapped: 72450048 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367067136 unmapped: 72450048 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367067136 unmapped: 72450048 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367067136 unmapped: 72450048 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367075328 unmapped: 72441856 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 72433664 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367091712 unmapped: 72425472 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367099904 unmapped: 72417280 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367108096 unmapped: 72409088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367108096 unmapped: 72409088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367108096 unmapped: 72409088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367108096 unmapped: 72409088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367108096 unmapped: 72409088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread fragmentation_score=0.004789 took=0.000056s
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367108096 unmapped: 72409088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367108096 unmapped: 72409088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367108096 unmapped: 72409088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367108096 unmapped: 72409088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367108096 unmapped: 72409088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367108096 unmapped: 72409088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367108096 unmapped: 72409088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367108096 unmapped: 72409088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367108096 unmapped: 72409088 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367116288 unmapped: 72400896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367116288 unmapped: 72400896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367116288 unmapped: 72400896 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367124480 unmapped: 72392704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367124480 unmapped: 72392704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367124480 unmapped: 72392704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367124480 unmapped: 72392704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367124480 unmapped: 72392704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367124480 unmapped: 72392704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367124480 unmapped: 72392704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367124480 unmapped: 72392704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367124480 unmapped: 72392704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367124480 unmapped: 72392704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367124480 unmapped: 72392704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367124480 unmapped: 72392704 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367132672 unmapped: 72384512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367132672 unmapped: 72384512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367132672 unmapped: 72384512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367132672 unmapped: 72384512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367132672 unmapped: 72384512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367132672 unmapped: 72384512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367132672 unmapped: 72384512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367132672 unmapped: 72384512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367132672 unmapped: 72384512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367132672 unmapped: 72384512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367132672 unmapped: 72384512 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367140864 unmapped: 72376320 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367140864 unmapped: 72376320 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367140864 unmapped: 72376320 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367140864 unmapped: 72376320 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367149056 unmapped: 72368128 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367149056 unmapped: 72368128 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367149056 unmapped: 72368128 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367149056 unmapped: 72368128 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367157248 unmapped: 72359936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367157248 unmapped: 72359936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367157248 unmapped: 72359936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367157248 unmapped: 72359936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367157248 unmapped: 72359936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367157248 unmapped: 72359936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367157248 unmapped: 72359936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367157248 unmapped: 72359936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367157248 unmapped: 72359936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367157248 unmapped: 72359936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367157248 unmapped: 72359936 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367165440 unmapped: 72351744 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367165440 unmapped: 72351744 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367165440 unmapped: 72351744 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367165440 unmapped: 72351744 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367165440 unmapped: 72351744 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367165440 unmapped: 72351744 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 72343552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 72343552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 72343552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 72343552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 72343552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688167 data_alloc: 218103808 data_used: 224042
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e908a000/0x0/0x4ffc00000, data 0x125aa1a/0x1402000, compress 0x0/0x0/0x0, omap 0x77124, meta 0x156f8edc), peers [0,2] op hist [])
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 72343552 heap: 439517184 old mem: 2845415832 new mem: 2845415832
Feb 25 09:15:32 np0005629333 ceph-osd[88012]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 72343552 heap: 439517184 old mem: 2845415832 new mem: 2845415832